Setting up HashiCorp Packer with Proxmox Part 1: Build a Baseline Ubuntu Image from Installation Media ISO
I recently set up Proxmox in my home lab. I wanted something easier to automate using Terraform, and I saw that Proxmox has both a Terraform provider and a Packer plugin. Therefore, I completely wiped (no, not with a cloth) my Dell PowerEdge R820 server (the one labeled ‘DUMBLEDORE’ because it’s an aged beast of a wizard), which used to run Windows Server 2016 with Hyper-V, and installed Proxmox Virtual Environment. It was quite traumatic because it involved shutting down a Minecraft server I had been running for many years. Don’t worry, though; I backed up my worlds directory, so it will live again. As I write this, I am feeling quite nostalgic about the first underground hut that my son and I built to avoid being bludgeoned to death by raging Zombies and Skeleton Archers.
Set Up Proxmox Virtual Environment
To log in to my Proxmox environment, I go to the management console hosted at the Proxmox server’s IP address on port 8006. So if the IP address is 192.168.1.100, you can access the Proxmox console by going to https://192.168.1.100:8006 in your web browser.
Proxmox Virtual Environment Management Console
This reminds me of vSphere’s management console, which allows you to manage multiple Data Centers. For my Home Lab, though, I only have one Data Center with a single node that has the default name pve.
I still haven’t explored all the capabilities, but it looks like there are Virtual Networks and Storage Volumes under my Data Center. I did an extremely basic install using as many default values as possible.
The two local volumes that I have are the 100 GB OS disk local and the 1.34 TB data disk local-lvm. The server that I installed Proxmox on is a Dell R820 with eight 250 GB disks set up in a single RAID-5 volume.
IMAGE Proxmox Data Center’s Storage Volumes
Set Up File Share on NAS for Storage
So in my Home Lab, I rock a pretty sick 12-bay Synology RS3621RPxs NAS with 70 TB of storage. This thing is a beast. I run Plex for video. I run Roon for music, and my Synology NAS hardly breaks a sweat. So with 60 TB of free storage, I decided to add an NFS share called Lab that I could use for my Proxmox rig.
IMAGE Mount Synology NAS NFS File Share
Stage the Installation Media
I downloaded Ubuntu 22.04 and saved the ISO file to my Synology NAS. When Proxmox sets up a volume to host different content types, it expects the files to be stored in specific locations within that volume.
IMAGE Proxmox Content Types
I knew I would be using ISO images, but I had trouble getting Packer to recognize the image when I just put it in the root directory of the NFS share. It turns out Proxmox specifically looks for ISO images in a folder called template/iso. Once I moved the media to this location (in my case, Lab/template/iso/ubuntu-22.04.1-live-server-amd64.iso), I was able to get things working.
IMAGE Physical Folder Structure
It’s Packer Time
Unlike most public clouds, when you run Proxmox on-premises, you have to drop into the dark recesses of building Packer images from OS installation media. I can’t emphasize enough how much this absolutely sucks. Whatever public cloud you use, you take for granted the plethora of robust Marketplace images from which you can launch Virtual Machines with close to zero effort. I knew that before I would be able to conduct BAU (Business-As-Usual) from a public cloud perspective, I would need a reliable Operating System image that I could use as my baseline.
This meant that I would have to use the proxmox-iso Packer builder in order to install an Operating System from an ISO. Once this was complete, I could use the proxmox-clone builder to clone this base image and proceed as I normally would with any public cloud’s marketplace images. When designing the input variables for my Packer template, there were some basic things I needed to cover: Where would I get the OS installation media from? Where would my image be stored? That’s pretty much it. So I set up the input variables below to allow me to configure this Packer template (a sketch of the declarations follows the list).
- iso_file: the physical location of the installation media
- iso_storage_pool: the Proxmox storage pool where the installation media is stored
- image_name: the name of the image, including semantic versioning
- image_storage_pool: the Proxmox storage pool to save the image to
- proxmox_node: the name of the Proxmox node we are connecting to
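As a rough sketch, the declarations for these inputs might look like the following. The names match the list above, and I’ve also included image_description because the source block later references var.image_description; the file name, types, and defaults are my assumptions rather than anything Packer enforces.
variable "iso_file" {
  type        = string
  description = "The physical location of the installation media"
}

variable "iso_storage_pool" {
  type        = string
  description = "The Proxmox storage pool where the installation media is stored"
}

variable "image_name" {
  type        = string
  description = "The name of the image, including semantic versioning"
}

variable "image_description" {
  type        = string
  description = "A human-readable description of the image"
  default     = "Baseline Ubuntu image built with Packer"
}

variable "image_storage_pool" {
  type        = string
  description = "The Proxmox storage pool to save the image to"
}

variable "proxmox_node" {
  type        = string
  description = "The name of the Proxmox node we are connecting to"
}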
Like I do when setting up a new Terraform project, I created an input variable file to store the common configuration values for my Packer template. These values go into a file called ‘default.pkrvars.hcl’. Now, I personally am not a huge fan of the ‘.hcl’ sub-extension, but this seems to be how Packer’s HCL support was introduced. Sometimes you just have to look at the bright side of things: it’s better than writing this in JSON!
iso_file = "ubuntu-22.04.1-live-server-amd64.iso"
iso_storage_pool = "synology-lab"
image_name = "u2204-baseline-v1.0.0"
image_storage_pool = "local-lvm"
proxmox_node = "pve"
Many of these values would likely always stay the same. The Proxmox node and the storage pools were fixed in time. I would literally have to exert myself physically to change them. The iso_file might change if there was a new version of Ubuntu LTS that I wanted to build a baseline image for (e.g., 24.04). The image_name could change if I needed to rebuild the image from new installation media. However, I aimed to avoid creating new images from OS installation media. It would be much easier to maintain if I used LTS versions of Ubuntu to create the baseline image. Then I could use proxmox-clone to patch and upgrade the OS with minor updates.
source "proxmox-iso" "ubuntu" {
boot_command = [ ***TODO*** ]
boot_wait = "10s"
disks {
disk_size = "8G"
storage_pool = var.image_storage_pool
type = "scsi"
}
http_directory = "http"
insecure_skip_tls_verify = true
iso_file = "${var.iso_storage_pool}:iso/${var.iso_file}"
iso_checksum = "none"
network_adapters {
bridge = "vmbr0"
model = "virtio"
}
node = var.proxmox_node
memory = 4096
cores = 2
sockets = 2
ssh_timeout = "60m"
ssh_username = "ubuntu"
ssh_password = "ubuntu"
ssh_port = 22
qemu_agent = true
template_description = var.image_description
template_name = var.image_name
unmount_iso = true
}
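One thing the source block above doesn’t show is how Packer authenticates to the Proxmox API. As a rough sketch, and assuming token-based authentication, the connection arguments sit inside the same source block and look something like this; the URL, username, and token values here are placeholders, not my real credentials.
  # Hypothetical connection settings for the proxmox-iso source; substitute
  # your own API URL, token ID, and token secret.
  proxmox_url = "https://192.168.1.100:8006/api2/json"
  username    = "packer@pam!packer-token"
  token       = "your-api-token-here"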
IMAGE Automated Linux Installation from ISO
This is honestly the stuff nightmares are made of. Not even kidding. Essentially, we need to emulate keyboard input to drive the interactive, text-based Linux installer.
I was extremely lucky to find somebody who had already figured out all of the boot commands for LTS versions of Ubuntu. Dustin Rue has an incredibly useful GitHub repository with boot commands for 20.04, 22.04, and 24.04.
boot_command = [
  "c",
  "linux /casper/vmlinuz -- autoinstall ds='nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/'",
  "<enter><wait><wait>",
  "initrd /casper/initrd",
  "<enter><wait><wait>",
  "boot<enter>"
]
The above commands essentially emulate what a human being would do on the keyboard during an Ubuntu 22.04 installation. The {{ .HTTPIP }} and {{ .HTTPPort }} template variables point the installer at the temporary HTTP server Packer runs to serve the contents of the http_directory, which is where the autoinstall configuration files come from. It’s pretty nasty and extremely foreign if you are used to public cloud Marketplace images.
Clean Up Cloud-Init
To ensure that new Virtual Machines spun up from this image boot cleanly, we need to “clean” the cloud-init state after provisioning so the image is ready for further automated configuration tasks.
provisioner "shell" {
inline = ["while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done", "sudo rm -f /etc/cloud/cloud.cfg.d/99-installer.cfg", "sudo cloud-init clean", "sudo passwd -d ubuntu"]
}
This script is performing the following tasks:
- Waits for cloud-init to finish its configuration process.
- Removes the installer’s cloud-init configuration file and resets cloud-init’s state, so the image can be re-initialized cleanly the first time a new VM boots from it.
- Deletes the password for the ubuntu user, either for security reasons or to enforce key-based login.
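For completeness, here is a rough sketch of how the provisioner hangs off the proxmox-iso source in the build block, along with the commands I would use to kick off the build; the file name is an assumption, not something Packer requires.
# ubuntu.pkr.hcl (hypothetical file name)
build {
  sources = ["source.proxmox-iso.ubuntu"]

  # Wait for cloud-init, then reset its state and clear the temporary password,
  # as described above.
  provisioner "shell" {
    inline = [
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
      "sudo rm -f /etc/cloud/cloud.cfg.d/99-installer.cfg",
      "sudo cloud-init clean",
      "sudo passwd -d ubuntu"
    ]
  }
}

# Then, from the template directory:
#   packer init .
#   packer build -var-file=default.pkrvars.hcl .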
After all that, a simple packer build and you are in business.