🧭 Terraform + Ansible + Proxmox Automation Guide
Author’s Note:
All paths and filenames in this guide reflect a real working setup (/home/automation/...).
You can safely adapt them for your environment if your directory layout differs.
⚙️ Introduction: Why Automate with Terraform and Ansible
Managing a Proxmox homelab or production environment can become repetitive — manually creating containers, setting network VLANs, and configuring packages is time-consuming.
This setup solves that problem by combining two powerful tools:
- Terraform – Handles provisioning of LXC containers and network configuration via the Proxmox API.
- Ansible – Handles post-deployment configuration like installing tools, creating users, and system updates.
Together, they provide a fully automated workflow:
“From `terraform apply` to ready-to-use servers — no manual SSH required.”
🧩 Architecture Overview: How the Flow Works
Let’s break down the automation lifecycle:
- Terraform provisions containers on the Proxmox host using the API.
- Terraform’s local-exec block connects to the Proxmox host and bootstraps each container (creates user, pushes SSH key, enables sudo).
- Terraform generates an Ansible inventory dynamically with container IPs.
- Terraform triggers Ansible automatically, running the `init-sony.yml` playbook.
- Ansible connects to each container via SSH and performs configuration (updates, installs packages, creates users).
End result:
After one terraform apply, your containers are created, configured, and ready to use.
🧱 1. Setting Up the Automation VM
Create the automation user
adduser automation
usermod -aG sudo automation
Grant passwordless sudo:
visudo
Add this line at the end:
automation ALL=(ALL) NOPASSWD:ALL
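As an alternative to editing the main sudoers file interactively, the rule can be dropped into `/etc/sudoers.d/` and syntax-checked — the same pattern the Terraform bootstrap uses later for the containers. A minimal sketch:

```shell
# Write the rule into a drop-in file instead of the main sudoers file.
echo 'automation ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/automation
sudo chmod 440 /etc/sudoers.d/automation
# Validate the syntax before relying on it (-c = check, -f = file).
sudo visudo -cf /etc/sudoers.d/automation
```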
Generate SSH keys
Login as automation and create a key pair:
su - automation
ssh-keygen -t rsa -b 4096 -C "automation@automation-vm"
This creates:
/home/automation/.ssh/id_rsa
/home/automation/.ssh/id_rsa.pub
Use this key pair for both Terraform and Ansible.
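One step the later sections depend on: the `local-exec` provisioner SSHes as `root` into the Proxmox host to run `pct` commands, so this key must also be authorized there. A sketch (replace `<PROXMOX-IP>` with your node's address):

```shell
# Authorize the automation key for root on the Proxmox host.
ssh-copy-id -i /home/automation/.ssh/id_rsa.pub root@<PROXMOX-IP>
# Verify non-interactive login works before running Terraform;
# pveversion is a harmless Proxmox-only command.
ssh -o BatchMode=yes root@<PROXMOX-IP> 'pveversion'
```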
🧰 2. Terraform Configuration
Terraform is responsible for creating and bootstrapping the LXC containers.
Directory structure:
/home/automation/terraform/container-creation/
├── main.tf
├── variables.tf
├── terraform.tfvars
variables.tf
Defines parameters for flexibility (storage, VLAN, etc.).
variable "target_node" { type = string }
variable "ostemplate" { type = string }
variable "storage" { type = string }
variable "bridge" { type = string }
variable "gateway" { type = string }
variable "containers" {
type = list(object({
name = string
ip = string
vlan = number
cores = number
memory = number
disk = string
}))
}
terraform.tfvars (example)
target_node = "pve-node"
ostemplate = "local:vztmpl/ubuntu-22.04-standard_22.04-2_amd64.tar.zst"
storage = "local-lvm"
bridge = "vmbr0"
gateway = "10.0.0.1"
containers = [
{
name = "web01"
ip = "10.0.10.10"
vlan = 10
cores = 2
memory = 1024
disk = "8G"
}
]
main.tf
Contains the full automation logic.
terraform {
required_providers {
proxmox = {
source = "telmate/proxmox"
version = "3.0.1-rc3"
}
}
}
provider "proxmox" {
pm_api_url = "https://<PROXMOX-IP>:8006/api2/json"
pm_api_token_id = "terraform@pve!tf-token"
pm_api_token_secret = "<SECRET>"
pm_tls_insecure = true
}
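The token referenced above (`terraform@pve!tf-token`) must exist on the Proxmox side first. A minimal sketch, run as root on the Proxmox host — `Administrator` is the blunt choice here; a narrower custom role with only the needed privileges also works:

```shell
# Create the API user and grant it a role (run on the Proxmox host).
pveum user add terraform@pve
pveum aclmod / -user terraform@pve -role Administrator
# --privsep=0 gives the token the same permissions as the user,
# so no separate ACL is needed for the token itself.
pveum user token add terraform@pve tf-token --privsep=0
```

The `pveum user token add` command prints the secret once — store it as `pm_api_token_secret`, since it cannot be retrieved later.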
# --- LXC Creation ---
resource "proxmox_lxc" "containers" {
for_each = { for c in var.containers : c.name => c }
hostname = each.value.name
target_node = var.target_node
ostemplate = var.ostemplate
cores = each.value.cores
memory = each.value.memory
swap = 512
unprivileged = true
onboot = true
rootfs {
storage = var.storage
size = each.value.disk
}
network {
name = "eth0"
bridge = var.bridge
ip = "${each.value.ip}/24"
gw = var.gateway
tag = each.value.vlan
}
features { nesting = true }
start = true
ssh_public_keys = file("/home/automation/.ssh/id_rsa.pub")
# --- Post-provision bootstrap ---
provisioner "local-exec" {
command = <<EOT
VMID=$(echo ${self.id} | sed 's#.*/##')
PROXMOX_HOST=<PROXMOX-IP>
echo ">> Bootstrapping ${each.value.name} (VMID: $VMID) ..."
ssh -o StrictHostKeyChecking=no root@$PROXMOX_HOST "pct exec $VMID -- bash -c 'apt-get update -qq && apt-get install -y sudo'"
ssh root@$PROXMOX_HOST "pct exec $VMID -- bash -c 'id -u automation &>/dev/null || useradd -m -s /bin/bash automation'"
ssh root@$PROXMOX_HOST "pct exec $VMID -- bash -c 'mkdir -p /home/automation/.ssh'"
ssh root@$PROXMOX_HOST "pct push $VMID /home/automation/.ssh/id_rsa.pub /home/automation/.ssh/authorized_keys"
ssh root@$PROXMOX_HOST "pct exec $VMID -- bash -c 'chown -R automation:automation /home/automation/.ssh && chmod 700 /home/automation/.ssh && chmod 600 /home/automation/.ssh/authorized_keys'"
ssh root@$PROXMOX_HOST "pct exec $VMID -- bash -c 'echo \"automation ALL=(ALL) NOPASSWD:ALL\" > /etc/sudoers.d/automation && chmod 440 /etc/sudoers.d/automation'"
EOT
}
}
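The `sed` call in the provisioner above extracts the numeric VMID from Terraform's resource id, which the Telmate provider formats as `<node>/<type>/<vmid>`. A standalone sanity check of that expression (the id value is a hypothetical example):

```shell
# ${self.id} looks like "pve-node/lxc/105"; strip everything up to the
# last slash to keep only the numeric VMID that pct expects.
SELF_ID="pve-node/lxc/105"
VMID=$(echo "$SELF_ID" | sed 's#.*/##')
echo "$VMID"   # → 105
```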
# --- Dynamic Ansible Inventory ---
output "ansible_inventory" {
value = <<EOT
[proxmox_lxc]
%{ for name, c in proxmox_lxc.containers }
${name} ansible_host=${split("/", c.network[0].ip)[0]} ansible_user=automation ansible_ssh_private_key_file=/home/automation/.ssh/id_rsa
%{ endfor }
EOT
}
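With the example `terraform.tfvars` above, the generated `inventory.ini` would look roughly like this — one host line per container (note that `ansible_host` must be a bare IP, without the `/24` suffix the resource stores):

```ini
[proxmox_lxc]
web01 ansible_host=10.0.10.10 ansible_user=automation ansible_ssh_private_key_file=/home/automation/.ssh/id_rsa
```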
# --- Run Ansible Automatically ---
resource "null_resource" "ansible_run" {
depends_on = [proxmox_lxc.containers]
provisioner "local-exec" {
command = <<EOT
echo ">> Generating dynamic Ansible inventory..."
terraform output -raw ansible_inventory > /home/automation/ansible/inventory.ini
echo ">> Running Ansible playbook..."
ansible-playbook -i /home/automation/ansible/inventory.ini /home/automation/ansible/init-sony.yml
EOT
}
}
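One caveat with this pattern: `terraform output` inside a provisioner reads the state file mid-apply, so on the very first apply the output may not be populated yet. A more robust variant (a sketch, assuming the HashiCorp `local` provider is added to `required_providers`) writes the inventory with a `local_file` resource instead:

```hcl
# Writes the inventory as part of the apply itself, so it is always
# current before the Ansible run that depends on it.
resource "local_file" "ansible_inventory" {
  filename = "/home/automation/ansible/inventory.ini"
  content  = <<EOT
[proxmox_lxc]
%{ for name, c in proxmox_lxc.containers ~}
${name} ansible_host=${split("/", c.network[0].ip)[0]} ansible_user=automation ansible_ssh_private_key_file=/home/automation/.ssh/id_rsa
%{ endfor ~}
EOT
}
```

The `null_resource` running Ansible would then depend on `local_file.ansible_inventory` rather than generating the file itself.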
🧠 How Terraform Triggers Ansible
Here’s the behind-the-scenes flow:
- Terraform provisions the LXCs using the `proxmox_lxc` resource.
- The post-provision `local-exec` runs shell commands to:
  - Create the `automation` user inside each container.
  - Install `sudo` and configure SSH keys.
- Terraform outputs an inventory of all newly created containers (with IPs and SSH details).
- A `null_resource` with `local-exec` writes this inventory to `/home/automation/ansible/inventory.ini`.
- Terraform then runs Ansible automatically using the `init-sony.yml` playbook.
Result → Every container is configured and ready right after creation.
🧩 3. Ansible Configuration
Directory structure:
/home/automation/ansible/
├── ansible.cfg
├── inventory.ini (auto-generated)
└── init-sony.yml
ansible.cfg
[defaults]
inventory = /home/automation/ansible/inventory.ini
host_key_checking = False
remote_user = automation
interpreter_python = /usr/bin/python3
deprecation_warnings = False
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
init-sony.yml
This playbook performs post-deployment configuration:
---
- name: Initial setup for Proxmox LXC containers
hosts: proxmox_lxc
become: yes
tasks:
- name: Update and upgrade packages
apt:
update_cache: yes
upgrade: dist
autoremove: yes
- name: Install connectivity and troubleshooting tools
apt:
name:
- traceroute
- curl
- net-tools
- iputils-ping
- dnsutils
- mtr
- wget
- tcpdump
- vim
state: present
- name: Create user 'sony'
user:
name: sony
password: "{{ 'password' | password_hash('sha512') }}"
shell: /bin/bash
groups: sudo
create_home: yes
- name: Allow passwordless sudo for sony
copy:
dest: /etc/sudoers.d/sony
content: "sony ALL=(ALL) NOPASSWD:ALL\n"
mode: '0440'
- name: Ensure SSH service is running
service:
name: ssh
state: started
enabled: yes
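One caveat: the playbook embeds the literal password `password` and re-hashes it on every run, which is insecure and produces a new hash each time. A hedged alternative is to pre-hash the secret on the control node and pass the result in as a variable; for instance (assuming OpenSSL ≥ 1.1.1, which added the `-6` option):

```shell
# Generate a sha512-crypt hash suitable for the user module's
# `password` field, instead of hashing plaintext inside the playbook.
HASH=$(openssl passwd -6 'password')
echo "$HASH"   # prints something like $6$<salt>$<hash>; salt is random
```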
🧱 4. Running the Full Pipeline
Run Terraform:
cd /home/automation/terraform/container-creation
terraform init
terraform apply -auto-approve
Terraform will:
- Create containers.
- Bootstrap users and SSH.
- Generate an inventory.
- Trigger Ansible automatically.
Expected output (illustrative run with three containers defined):
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
>> Generating dynamic Ansible inventory...
>> Running Ansible playbook...
PLAY [Initial setup for Proxmox LXC containers]
PLAY RECAP *********************************************************
web01 : ok=7 changed=4
app01 : ok=7 changed=4
db01 : ok=7 changed=4
🧠 5. Adapting for Future Needs
| Scenario | What to Modify |
|---|---|
| Add or remove containers | Edit terraform.tfvars and re-run terraform apply |
| Change IPs, VLANs, or memory | Update container values in terraform.tfvars |
| Add new configuration tasks | Edit /home/automation/ansible/init-sony.yml |
| Change default packages | Modify the apt section in the playbook |
| Re-run only Ansible | terraform apply -target=null_resource.ansible_run |
| Test Ansible manually | ansible-playbook -i inventory.ini init-sony.yml |
🧾 Terraform & Ansible Command Reference
| Purpose | Command | Description |
|---|---|---|
| Initialize Terraform | terraform init | Downloads provider plugins |
| Plan changes | terraform plan | Preview proposed changes |
| Apply configuration | terraform apply -auto-approve | Deploy containers & run Ansible |
| Destroy resources | terraform destroy | Delete containers |
| Run only Ansible | terraform apply -target=null_resource.ansible_run | Skip provisioning, run playbook only |
| Test Ansible | ansible all -m ping | Ping all hosts |
| Run a playbook manually | ansible-playbook -i inventory.ini init-sony.yml | Run configuration manually |
⚠️ Common Issues & Troubleshooting
| Issue | Cause | Fix |
|---|---|---|
| Playbook not found | Wrong path in Terraform | Use full path /home/automation/ansible/init-sony.yml |
| SSH permission denied | Missing SSH key or wrong ownership | Ensure /home/automation/.ssh has 700 perms and key is injected |
| Ansible “UNREACHABLE” errors | Container not yet started | Wait a few seconds or re-run Ansible |
| Terraform plan fails | Incorrect variable or missing Proxmox token | Check API credentials and token permissions |
| Ansible inventory empty | Terraform output not generated | Run terraform output ansible_inventory to verify |
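For the SSH-related rows above, it often helps to test the exact connection Ansible will make, bypassing Ansible entirely (replace `10.0.10.10` with a container IP from `inventory.ini`):

```shell
# Manual connection test with the same key and user Ansible uses;
# `sudo -n true` also confirms passwordless sudo was bootstrapped.
ssh -i /home/automation/.ssh/id_rsa -o StrictHostKeyChecking=no \
    automation@10.0.10.10 'sudo -n true && echo sudo-ok'
```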
✅ End-to-End Summary
This setup achieves a complete automation lifecycle:
Terraform (Infrastructure)
↓
Local-Exec (Bootstrap in LXC)
↓
Terraform Output (Dynamic Inventory)
↓
Ansible (Configuration)
↓
Fully configured, ready-to-use containers
With one command:
terraform apply -auto-approve
you can rebuild or expand your environment — clean, consistent, and fully automated.
Tip:
Store your Terraform and Ansible directories in Git to version control infrastructure and configuration together.
This ensures easy rollback, traceability, and repeatability.