The good news about modern virtualization technologies like KVM (Kernel-based Virtual Machine) and containers (e.g., LXC) is that you get almost limitless options for spinning up and tearing down clusters, machines, networks and so on. That’s also the bad news: all that flexibility can create a substantial administrative burden. We’ve found we can make the lives of cloud infrastructure sysadmins much easier by combining some focused automation tactics with the powerful open-source virtualization platform, Proxmox.

The VM lifecycle problem

Proxmox VE, a great open-source server virtualization platform that manages two virtualization technologies (KVM and LXC), is our preferred server virtualization management software. This combination has helped us avoid major investments by reusing existing infrastructure, added clustering functionality, and kept all of our HA infrastructure manageable at all times. It pays off in increased server availability and reduced administration complexity, and it makes the most of our IT budget compared with other virtualization systems like ESX or Xen.

Here at Bekitzur, we spin up new Proxmox Linux VMs on a daily basis, but it still takes a fair amount of hands-on work. For example, creating a VM in Proxmox requires a sysadmin to start the VM and provision an OS onto it, i.e. perform the OS installation. This involves a lot of actions: mounting disks of the proper type, creating file systems, installing and configuring the OS and packages, etc. Done manually, VM provisioning is tedious, time-consuming and error-prone.

Automation was an intuitive first step. By the time we were through, we had full automation of Proxmox VMs that covers not only provisioning new VMs, but also maintaining them over their entire life cycle. In this post, we’ll take a look at how Ansible and Debian provide the key tactics to make the whole thing work.

Virtualization automation implementation

Internally, Proxmox uses KVM to run VMs. To speed up and fully automate Proxmox VM provisioning, we need to automate the creation of KVM Linux instances. After reviewing existing open-source solutions, we decided to teach our Jenkins to trigger Ansible playbooks that use the proxmox_kvm module. This seemed to be the simplest and most portable solution we found. The diagram below shows how it works:

[Diagram: Jenkins triggering Ansible playbooks that drive the Proxmox API via the proxmox_kvm module]

When automating the Proxmox VM provisioning part, we used so-called ‘preseeding’.
Preseeding is a way to configure the Debian Installer non-interactively. During installation, a special text file is generated on or downloaded to the target VM. This file provides answers to the installer’s questions and can include all required installation options for every component being installed and configured.
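For illustration, a minimal preseed file might look like the following. All concrete values here are hypothetical placeholders; the keys are standard debian-installer debconf questions:

```
# Locale and keyboard
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us

# Installation mirror
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian

# Guided partitioning: whole disk with LVM
d-i partman-auto/method string lvm
d-i partman-auto/choose_recipe select atomic

# Create a regular user instead of allowing root login
d-i passwd/root-login boolean false
d-i passwd/username string admin
d-i passwd/user-fullname string Admin User

# Extra packages to install
d-i pkgsel/include string openssh-server qemu-guest-agent
```

Each `d-i` line answers one installer question up front, which is what lets the whole installation run unattended.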

Part 1: Ansible

First of all, we use the proxmox_kvm Ansible module. It uses the Proxmox API to create/delete/start/stop QEMU (KVM) virtual machines in a Proxmox VE cluster. The module’s settings cover the majority of KVM options: things like network adapter type, disk size and type, CPU options and so on. (Important note: be careful with API credentials; hide them with Ansible Vault.) For further automation of Proxmox VM actions via Ansible, you may also take a look at the Ansible proxmox module.
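As a minimal sketch (the host name, API user and storage values below are illustrative placeholders, not our production settings), a VM-creation task might look like this:

```yaml
# Sketch: create a KVM VM through the Proxmox API.
# All concrete values below are placeholders for illustration.
- name: Create a new KVM virtual machine
  community.general.proxmox_kvm:
    api_host: proxmox.intra-host.loc
    api_user: provisioner@pve
    api_password: "{{ vault_proxmox_password }}"  # stored in Ansible Vault
    node: "{{ proxmox_node }}"
    name: "{{ vm_name }}"
    cores: "{{ vm_cores }}"
    memory: "{{ vm_memory_size }}"
    net:
      net0: "virtio,bridge=vmbr1"
    scsihw: virtio-scsi-pci
    virtio:
      virtio0: "local-lvm:32,format=raw,discard=on"
    ostype: l26
    agent: 1
    state: present
```

Keeping the API password in a Vault-encrypted variable means the playbook can live in version control without leaking credentials.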

Here, we will run a script-based OS installation. We run CentOS and Debian Linux distros in our environment, so we use Kickstart or Preseed accordingly, passing the “args” parameter to the Proxmox API.

Basically, we use Ansible to grab the kernel of choice (you can compile your own or use the one from your distro) from our internal HTTPS server, then fire it up in the empty VM.
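Fetching the installer kernel and initrd might be sketched like this (the URL path and file names are hypothetical):

```yaml
# Sketch: download the installer kernel and initrd onto the Proxmox node.
- name: Fetch the installer kernel and initrd
  ansible.builtin.get_url:
    url: "https://intra-host.loc/boot/{{ item }}"
    dest: "/tmp/{{ item }}"
    validate_certs: false  # internal CA in this example; adjust as needed
  loop:
    - vmlinuz
    - initrd.img
```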

CentOS Kickstart example:
--args '-kernel /tmp/vmlinuz -initrd /tmp/initrd.img -append "inst.stage2=http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/ inst.repo=http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/ inst.ks=https://intra-host.loc/kickstart.cfg rd.noverifyssl"'
Ubuntu/Debian Preseed example:
--args '-kernel /tmp/linux -initrd /tmp/initrd.gz -append "preseed/url=https://intra-host.loc/preseed.seed debian-installer/allow_unauthenticated_ssl=true locale=en_US.UTF-8 debian/priority=critical vga=normal debian-installer/keymap=en console-keymaps-at/keymap=en console-setup/layoutcode=en_US netcfg/choose_interface=auto localechooser/translation/warn-light=true localechooser/translation/warn-severe=true console-setup/ask_detect=false netcfg/get_hostname=PRESEED FRONTEND_BACKGROUND=original"'

Using those scripts, you can set any setup option for your new system. We usually set up the partition layout (including things like mdraid or LVM), a basic set of packages, locale, timezone, users, etc. There are also lots of manuals out there on Preseed/Kickstart scripting, so you can easily adjust everything to your needs.

The main sequence is to start the install, watch its status, and delete the args when the installation is finished. Be sure the installer powers off the virtual machine at the end, or the operation will just stay pending and your VM will end up in an endless installation cycle. Then start the VM again to boot into the new system. We used Ansible handlers to pause the playbook while waiting for the VM deployment to finish.
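That sequence can be sketched roughly as follows (connection details, retry counts and the status check are illustrative; our actual playbook uses handlers rather than this exact polling loop):

```yaml
# Illustrative sketch of the install sequence; all values are placeholders.
- hosts: localhost
  module_defaults:
    community.general.proxmox_kvm:
      api_host: proxmox.intra-host.loc
      api_user: provisioner@pve
      api_password: "{{ vault_proxmox_password }}"
      node: "{{ proxmox_node }}"
  tasks:
    - name: Boot the VM into the installer
      community.general.proxmox_kvm:
        name: "{{ vm_name }}"
        state: started

    - name: Wait for the installer to power the VM off
      community.general.proxmox_kvm:
        name: "{{ vm_name }}"
        state: current
      register: vm_state
      until: vm_state.status == "stopped"
      retries: 120
      delay: 30

    - name: Remove the installer boot args
      community.general.proxmox_kvm:
        name: "{{ vm_name }}"
        delete: args
        update: true

    - name: Boot into the freshly installed system
      community.general.proxmox_kvm:
        name: "{{ vm_name }}"
        state: started
```

Deleting the args before the final boot is the critical step: without it, the VM would boot straight back into the installer.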

Part 2: Jenkins on the job

In Jenkins, we have created a dedicated parameterized job, so end users can just click through all the VM options to set everything up. The job takes the following parameters:

VM_NAME
PROXMOX_NODE
VM_TYPE
VM_MEMORY_SIZE
VM_CORES
VM_STORAGE_SIZE
VM_STORAGE_NAME
VM_STORAGE_TYPE

Of course, you can add your own set of options.

One more thing about DHCP and DNS hostnames: we use dhcpd + BIND to perform dynamic DNS updates in our environment, so the hostname set via the VM_NAME variable in the Jenkins UI becomes the new VM’s actual hostname, and the host is reachable via DNS right away.
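For reference, the dhcpd side of such a setup might look roughly like this (the zone name and key are placeholders; generate your own TSIG key and configure the matching key in BIND):

```
# dhcpd.conf excerpt: illustrative dynamic DNS update settings
ddns-update-style interim;
ddns-domainname "intra-host.loc.";
update-static-leases on;

key "ddns-key" {
    algorithm hmac-sha256;
    secret "REPLACE-WITH-BASE64-TSIG-KEY";
}

zone intra-host.loc. {
    primary 127.0.0.1;
    key "ddns-key";
}
```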
Here we pass all the needed variables to Ansible in JSON format, wrapped in a bash script that is triggered by the Jenkins job:

#!/bin/bash -x
# Put the Vault password in a temp file that lives only for the duration of the build
trap 'rm -f "$JENKINS_HOME/proxmox-vault-pass"' EXIT
echo "${VAULT_PASS}" > "$JENKINS_HOME/proxmox-vault-pass"
ansible-playbook -i infra run.yaml \
     -e '{"vms":{"'"$VM_NAME"'":{"node":"'"$PROXMOX_NODE"'","type":"'"$VM_TYPE"'"}}}' \
     -e '{"defaults":{"net":"{\"net0\":\"virtio,bridge=vmbr1\"}","cores": "'"$VM_CORES"'","memory_size": "'"$VM_MEMORY_SIZE"'","scsihw": "virtio-scsi-pci","virtio": "{\"virtio0\":\"'"$VM_STORAGE_NAME"':'"$VM_STORAGE_SIZE"',format='"$VM_STORAGE_TYPE"',discard=on\"}","ostype": "l26","agent": "1","vga": "qxl"}}' \
     --vault-password-file="$JENKINS_HOME/proxmox-vault-pass"

This job starts with VM creation, then starts that VM instance with the special args for script-based OS installation. Next, it waits for the installation to finish, and finally it reboots the VM, without the setup args, into an operational state.


As a result, we end up with a ready Linux VM with all settings, packages, disks, users, etc., in almost no time (a few minutes at most) and hassle-free! In addition, this solution is scalable, transparent, and easy to debug and improve. Our systems team is happy now, as they get to focus on the great stuff that makes being a systems administrator so awesome.

We invite you to try out our solution on your Proxmox toolset; download our proof-of-concept files from our repo.

Alexey Kuplensky, DevOps Professional
