This content originally appeared on DEV Community and was authored by Aurelia Peters
In my last article I talked about getting Terraform set up on Proxmox VE. In this article I want to talk about how I got Cloud-Init set up to use with my Terraform templates.
To begin with, I needed a cloud-init base VM. While I could use the cloud image that Ubuntu provides, I found a nifty article that shows you how to roll your own base image.
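The gist of that process, condensed into the `qm` commands from the Proxmox VE docs, looks roughly like this. This is a sketch, not the article verbatim: VM ID 9000, the `containers-and-vms` storage name, and the Ubuntu 24.04 image are my own assumptions - adjust for your environment.

```shell
# On the Proxmox host: grab the Ubuntu cloud image
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

# Create an empty VM and import the image as its disk
qm create 9000 --name ubuntu-cloud-base --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single
qm importdisk 9000 noble-server-cloudimg-amd64.img containers-and-vms

# Attach the imported disk (the volume name may differ on
# file-backed storage), add a cloud-init drive, and set boot order
qm set 9000 --scsi0 containers-and-vms:vm-9000-disk-0
qm set 9000 --ide2 containers-and-vms:cloudinit
qm set 9000 --boot order=scsi0

# Convert the VM into a template we can clone from
qm template 9000
```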
NOTE: The Proxmox VE cloud-init documentation suggests adding a serial console next. I have found that not to be necessary with the Ubuntu cloud image, so I'm not going to do it.
Now that we've got the base template set up (turns out I was mistaken in my last post when I said it needed to be a VM and not a template), let's set up an actual VM. It'll have a single virtual Ethernet interface with a static IPv4 address, 32 GB of virtual disk, 2 GB (2048 MB) of RAM, and 2 processor cores.
Note here that I've broken my Terraform config into several files to make it more manageable. As long as all of the `.tf` files are in the same directory, Terraform will process them as if they were one big file. (Variable-value files ending in `.tfvars` are only loaded automatically if they're named `terraform.tfvars` or `*.auto.tfvars`; otherwise you pass them in with `-var-file`, as I do with `secrets.tfvars`.)
```hcl
# provider.tf - This is where I define my providers
terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      # latest version as of 16 July 2024
      version = "3.0.1-rc3"
    }
  }
}

provider "proxmox" {
  # References our vars.tf file to plug in the api_url
  pm_api_url = "https://${var.proxmox_host}:8006/api2/json"
  # Provided in secrets.tfvars, a file containing secret Terraform variables
  pm_api_token_id = var.token_id
  # Also provided in secrets.tfvars
  pm_api_token_secret = var.token_secret
  # Defined in vars.tf
  pm_tls_insecure = var.pm_tls_insecure
  # Logging is useful for seeing what Terraform is doing
  pm_log_enable = true
  pm_log_file   = "terraform-plugin-proxmox.log"
  pm_log_levels = {
    _default    = "debug"
    _capturelog = ""
  }
}
```
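For completeness, here's a sketch of what `vars.tf` might contain. The variable names match the references in these files, but the types, descriptions, and defaults are my own guesses:

```hcl
# vars.tf - a sketch of the variable declarations the other files
# reference. Names are taken from the config; types and defaults
# are illustrative assumptions.
variable "proxmox_host" {
  description = "Hostname or IP of the Proxmox VE node"
  type        = string
}

variable "token_id" {
  description = "Proxmox API token ID, e.g. terraform@pam!mytoken"
  type        = string
  sensitive   = true
}

variable "token_secret" {
  description = "Proxmox API token secret"
  type        = string
  sensitive   = true
}

variable "pm_tls_insecure" {
  description = "Skip TLS verification (handy for self-signed certs)"
  type        = bool
  default     = true
}

variable "vm_name" {
  type    = string
  default = "cloudinit-test"
}

variable "template_name" {
  description = "Name of the cloud-init base template to clone"
  type        = string
}

variable "nic_name" {
  description = "Bridge to attach the VM's NIC to"
  type        = string
  default     = "vmbr0"
}

# secrets.tfvars would then hold just the sensitive values:
#   token_id     = "terraform@pam!mytoken"
#   token_secret = "<your token secret>"
```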
```hcl
# cloud-init.tf - This is where I store cloud-init configuration

# Source the cloud-init config file. NB: This file should be located
# in the "files" directory under the directory you have your Terraform
# files in.
data "template_file" "cloud_init_test1" {
  template = file("${path.module}/files/test1.cloud_config")
  vars = {
    ssh_key  = file("~/.ssh/id_ed25519.pub")
    hostname = var.vm_name
    domain   = "scurrilous.foo"
  }
}

# Create a local copy of the file, to transfer to Proxmox.
resource "local_file" "cloud_init_test1" {
  content  = data.template_file.cloud_init_test1.rendered
  filename = "${path.module}/files/user_data_cloud_init_test1.cfg"
}

# Transfer the file to the Proxmox host.
resource "null_resource" "cloud_init_test1" {
  connection {
    type        = "ssh"
    user        = "root"
    private_key = file("~/.ssh/id_ed25519")
    host        = var.proxmox_host
  }
  provisioner "file" {
    source      = local_file.cloud_init_test1.filename
    destination = "/var/lib/vz/snippets/cloud_init_test1.yml"
  }
}
```
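The `template_file` data source above interpolates three vars into `files/test1.cloud_config`. I haven't shown that file; purely as an illustration of how the placeholders get consumed, a minimal version might look like this (the `hostname`/`fqdn`/`manage_etc_hosts` keys are my own example, not the article's actual file):

```yaml
#cloud-config
# files/test1.cloud_config - illustrative template only. The ${...}
# placeholders are replaced by the vars map in the template_file
# data source (ssh_key, hostname, domain).
hostname: ${hostname}
fqdn: ${hostname}.${domain}
manage_etc_hosts: true
ssh_authorized_keys:
  - ${ssh_key}
```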
```hcl
# main.tf - This is where I define the VMs I want to deploy with Terraform
resource "proxmox_vm_qemu" "cloudinit-test" {
  name       = var.vm_name
  desc       = "Testing Terraform and cloud-init"
  depends_on = [null_resource.cloud_init_test1]

  # Node name has to be the same name as within the cluster;
  # this might not include the FQDN
  target_node = var.proxmox_host

  # The template name to clone this VM from
  clone = var.template_name

  # Activate QEMU agent for this VM
  agent = 1

  os_type = "cloud-init"
  cores   = 2
  sockets = 1
  vcpus   = 0
  cpu     = "host"
  memory  = 2048
  scsihw  = "virtio-scsi-single"

  # Set up the disks: ide2 is the cloud-init drive, scsi0 the OS disk
  disks {
    ide {
      ide2 {
        cloudinit {
          storage = "containers-and-vms"
        }
      }
    }
    scsi {
      scsi0 {
        disk {
          size     = "32G"
          storage  = "containers-and-vms"
          discard  = true
          iothread = true
        }
      }
    }
  }

  network {
    model  = "virtio"
    bridge = var.nic_name
    tag    = -1
  }

  boot = "order=scsi0"

  # Set up the IP address using cloud-init.
  # Keep in mind to use the CIDR notation for the IP.
  ipconfig0 = "ip=192.168.1.80/24,gw=192.168.1.1,ip6=dhcp"
  skip_ipv6 = true

  lifecycle {
    ignore_changes = [
      ciuser,
      sshkeys,
      network
    ]
  }

  cicustom = "user=local:snippets/cloud_init_test1.yml"
}
```
In addition to the Terraform files, we also need the cloud-config file (`cloud_init_test1.yml`) that we're referencing in `main.tf`.
IMPORTANT: If you specify a value for `cicustom` as I did here, the `ciuser` and `sshkeys` fields in the template definition (e.g. `main.tf`) are ignored in favor of whatever is in the cloud-config file, even if the cloud-config file doesn't set them. This also trumps whatever is in the base template, so you must specify your SSH keys in your cloud-config file.
```yaml
#cloud-config
ssh_authorized_keys:
  - <ssh public key 1>
  - <ssh public key 2>
write_files:
  - content: |
      #!/bin/bash
      echo "ZOMBIES RULE BELGIUM?"
    path: /usr/local/bin/my-script
    permissions: '0755'
runcmd:
  - apt-get update
  - apt-get install -y nginx
  - /usr/local/bin/my-script
```
So you can see here that you can run arbitrary commands at first boot with `runcmd`, and you can drop a custom Bash script onto the disk with `write_files` and then execute it from `runcmd` (cloud-init runs `write_files` before `runcmd`, so the script is in place by the time it's called). Note that `scripts-user` is the name of the cloud-init *module* that runs scripts from `/var/lib/cloud/scripts/user`; it isn't a valid user-data key, which is why the script is invoked via `runcmd` here. (See this writeup from SaturnCloud for more information.)
You might notice that the Terraform template definition is pretty close in structure to the one I used in my last article. That's intentional - I set up the last one with cloud-init but didn't do much with it, while this one actually provisions the VM with cloud-init. You can also use Ansible playbooks to provision a VM, and I might talk about that in a future post, but in my next post I'm going to do something actually useful for my home infrastructure: setting up Plex.
Once again, we execute `terraform plan`. The plan looks good, so we apply it with `terraform apply`, wait a couple of minutes, and boom! We've got ourselves a VM with both cloud-init and the QEMU guest agent. Pretty cool! In a future post I'll show you how to use Ansible playbooks to provision your VMs.
Aurelia Peters | Sciencx (2024-07-31T23:10:19+00:00) Setting Up The Home Lab: Terraform and Cloud-Init. Retrieved from https://www.scien.cx/2024/07/31/setting-up-the-home-lab-terraform-and-cloud-init/