Use .iso and Kickstart files to automatically create Ubuntu VMs

I wrote an updated version of this article in 2023: Quickly create guest VMs using virsh, cloud image files, and cloud-init. The newer code works better on more recent distros. Unless you really need to use Kickstart, try the new code.

I was looking for a way to automate the creation of VMs for testing various distributed system / cluster software packages. I’ve used Vagrant in the past but I wanted something that would:

  • Allow me to use raw ISO files as the basis for guest VMs.
  • Set up guest VMs with bridged IPs that are routable from the host.
  • Let guest VMs reach the Internet.
  • Let other hosts on the local network reach the guest VMs. (Setting up additional routes is OK.)
  • Work with any distro that supports Kickstart files.
  • Create and delete VMs in a scripted, fully automatic manner.
  • Set up guest VMs to allow passwordless ssh access from the “ansible” user.

I’ve previously used the virt-install tool to create VMs, and I like how easy it makes setting up things like extra network interfaces and attaching existing disk images. The scripts in this repo fully automate the virsh VM creation process.

Scripts

I put all of my code into a GitHub repo containing these scripts:

  • create-vm – Use .iso and kickstart files to auto-generate a VM.
  • delete-vm – Delete a virtual machine created with create-vm.
  • get-vm-ip – Get the IP address of a VM managed by virsh.
  • encrypt-pw – Returns a SHA-512 hashed password suitable for pasting into Kickstart files.

I’ve also included a sample ubuntu.ks Kickstart file for creating an Ubuntu host.

Host setup

I’m running the scripts from a host with Ubuntu Linux 18.10 installed. I added the following to the host’s Ansible playbook to install the necessary virtualization packages:

  - name: Install virtualization packages
    apt:
      name: "{{ item }}"
      state: latest
    with_items:
      - qemu-kvm
      - libvirt-bin
      - libvirt-clients
      - libvirt-daemon
      - libvirt-daemon-driver-storage-zfs
      - python-libvirt
      - python3-libvirt
      - system-config-kickstart
      - vagrant-libvirt
      - vagrant-sshfs
      - virt-manager
      - virtinst

If you’re not using Ansible, just apt-get install the packages above.
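
For example, the equivalent manual install is a single apt-get line (same package list as in the playbook; adjust the names if your release has renamed any of them):

sudo apt-get install -y qemu-kvm libvirt-bin libvirt-clients libvirt-daemon \
    libvirt-daemon-driver-storage-zfs python-libvirt python3-libvirt \
    system-config-kickstart vagrant-libvirt vagrant-sshfs virt-manager virtinst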

Sample Kickstart file

There are plenty of documents on the Internet on how to set up Kickstart files.

A couple of things are special about the included Kickstart file:

The Ansible user: Although I’d prefer to create the “ansible” user as a locked account, with no password and just an ssh public key, Kickstart on Ubuntu does not allow this, so I set up an encrypted password instead.

To set up your own password, use the encrypt-pw script to create a SHA512-hashed password that you can copy and paste into the Kickstart file. After a VM is created you can use this password if you need to log into the VM via the console.
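
If you want to generate a compatible hash without the encrypt-pw script, any SHA-512 crypt generator will do. One possibility (a sketch, not the encrypt-pw script itself; openssl passwd -6 needs OpenSSL 1.1.1 or newer):

openssl passwd -6 'MySecretPassword'

This prints a $6$... string that you can paste after --password in the Kickstart file.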

To use your own ssh key, replace the ssh key in the %post section with your own public key.
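
If you don’t have a dedicated key pair for Ansible yet, generating one takes two commands (standard OpenSSH; the file name and comment here are arbitrary):

ssh-keygen -t rsa -b 4096 -C 'ansible@host' -f ~/.ssh/ansible_id_rsa
cat ~/.ssh/ansible_id_rsa.pub

Paste the output of the cat command into the %post section in place of my key.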

The %post section at the bottom of the Kickstart file does a couple of things:

  • It updates all packages to the latest versions.
  • It installs an ssh server and Python, since configuring a VM with Ansible only requires ssh access to the VM and Python installed on it.
  • It starts the serial console, so that virsh console $vmname works.
  • It adds a public key for Ansible, so I can configure the servers with Ansible without entering a password.

Despite the name, the commands in the %post section are not the last commands executed by Kickstart on an Ubuntu 18.10 server. The “ansible” user is added after the %post commands are executed. This means that the Ansible ssh public key gets added before the ansible user is created.

To make key-based logins work I set the UID:GID of authorized_keys to 1000:1000. The user is later created with UID=1000, GID=1000, which means that the authorized_keys file ends up being owned by the ansible user by the time the VM creation is complete.

Create an Ubuntu 18.10 server

This creates a VM using Ubuntu’s text-based installer. Since the `-d` parameter is used, the progress of the install is shown on screen.

create-vm -n node1 \
    -i ~/isos/ubuntu-18.10-server-amd64.iso \
    -k ~/conf/ubuntu.ks \
    -d

Create 8 Ubuntu 18.10 servers

This starts the VM creation process and exits. Creation of the VMs continues in the background.

for n in `seq 1 8`; do
    create-vm -n node$n \
        -i ~/isos/ubuntu-18.10-server-amd64.iso \
        -k ~/conf/ubuntu.ks
done

Delete 8 virtual machines

for n in `seq 1 8`; do
    delete-vm node$n
done
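
The delete-vm script isn’t reproduced in this post, but a minimal sketch of what it needs to do looks like this (assuming the disk image path that create-vm uses below):

#!/bin/bash
# delete-vm (sketch) - tear down a VM created by create-vm
VM="$1"
virsh destroy "$VM"        # stop the VM if it's running
virsh undefine "$VM"       # remove the domain definition from libvirt
rm -f "/var/lib/libvirt/images/${VM}.img"   # delete the disk image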

Connect to a VM via the console

virsh console node1

Connect to a VM via ssh

ssh ansible@`get-vm-ip node1`

Generate an Ansible hosts file

(
    echo '[hosts]'
    for n in `seq 1 8`; do
        ip=`get-vm-ip node$n`
        echo "node$n ansible_ip=$ip ansible_user=ansible"
    done
) > hosts
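
To smoke-test the generated inventory (assuming Ansible is installed on the host; note that ansible_ip is a custom variable here, and Ansible’s built-in variable for the connection address is ansible_host):

ansible all -i hosts -m ping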

Handy virsh commands

  • virsh list – List all running VMs.
  • virsh domifaddr node1 – Get a node’s IP address. This does not work with all network setups, which is why I wrote the get-vm-ip script (a sketch of one approach appears after this list).
  • virsh net-list – Show the networks that were created by virsh.
  • virsh net-dhcp-leases $network – Show current DHCP leases when virsh is acting as the DHCP server. Leases may be shown for machines that no longer exist.
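
The get-vm-ip script isn’t reproduced in this post either. A minimal sketch of one common approach, looking up the VM’s MAC address and matching it against the host’s ARP table (this assumes a single NIC and that the VM has already sent traffic on the network):

#!/bin/bash
# get-vm-ip (sketch) - map a VM name to an IP address via its MAC address
VM="$1"
MAC=$(virsh -q domiflist "$VM" | awk '{ print $5 }')
arp -an | grep -i "$MAC" | sed 's/.*(\([0-9.]*\)).*/\1/'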

Known Issues

  • VMs created without the -d (debug mode) parameter may be created in “stopped” mode. To start one up, run virsh start $vmname, or loop over all of them as shown after this list.
  • Depending on how your host is set up, you may need to run these scripts as root.
  • The Ubuntu text-mode install messes up terminal screens. Run reset from the command line to restore a terminal’s functionality.
  • I use Ansible to set a guest’s hostname, not Kickstart, so all Ubuntu guests are created with the host name “ubuntu”.
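
For example, to start all eight of the VMs created above:

for n in `seq 1 8`; do
    virsh start node$n
done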

Hope you find this useful.

create-vm script

Download create-vm from GitHub.

#!/bin/bash

# create-vm - Use .iso and kickstart files to auto-generate a VM.

# Copyright 2018 Earl C. Ruby III
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

HOSTNAME=
ISO_FQN=
KS_FQN=
RAM=1024
VCPUS=2
STORAGE=20
BRIDGE=virbr0
MAC="RANDOM"
VERBOSE=
DEBUG=
VM_IMAGE_DIR=/var/lib/libvirt

usage()
{
cat << EOF
usage: $0 options

This script takes an .iso file and a Kickstart file and generates a VM from them.

OPTIONS:
   -h      Show this message
   -n      Host name (required)
   -i      Full path and name of the .iso file to use (required)
   -k      Full path and name of the Kickstart file to use (required)
   -r      RAM in MB (defaults to ${RAM})
   -c      Number of VCPUs (defaults to ${VCPUS})
   -s      Amount of storage to allocate in GB (defaults to ${STORAGE})
   -b      Bridge interface to use (defaults to ${BRIDGE})
   -m      MAC address to use (default is to use a randomly-generated MAC)
   -v      Verbose
   -d      Debug mode
EOF
}

# Note: -h, -v, and -d are flags and take no argument, so they get no ':'
while getopts "hn:i:k:r:c:s:b:m:vd" option; do
    case "${option}" in
        h)
            usage
            exit 0
            ;;
        n) HOSTNAME=${OPTARG};;
        i) ISO_FQN=${OPTARG};;
        k) KS_FQN=${OPTARG};;
        r) RAM=${OPTARG};;
        c) VCPUS=${OPTARG};;
        s) STORAGE=${OPTARG};;
        b) BRIDGE=${OPTARG};;
        m) MAC=${OPTARG};;
        v) VERBOSE=1;;
        d) DEBUG=1;;
    esac
done

if [[ -z $HOSTNAME ]]; then
    echo "ERROR: Host name is required"
    usage
    exit 1
fi

if [[ -z $ISO_FQN ]]; then
    echo "ERROR: ISO file name or http url is required"
    usage
    exit 1
fi

if [[ -z $KS_FQN ]]; then
    echo "ERROR: Kickstart file name or http url is required"
    usage
    exit 1
fi

if ! [[ -f $ISO_FQN ]]; then
    echo "ERROR: $ISO_FQN file not found"
    usage
    exit 1
fi

if ! [[ -f $KS_FQN ]]; then
    echo "ERROR: $KS_FQN file not found"
    usage
    exit 1
fi
KS_FILE=$(basename "$KS_FQN")

if [[ ! -z $VERBOSE ]]; then
    echo "Building ${HOSTNAME} using MAC ${MAC} on ${BRIDGE}"
    echo "======================= $KS_FQN ======================="
    cat "$KS_FQN"
    echo "=============================================="
    set -xv
fi

mkdir -p $VM_IMAGE_DIR/{images,xml}

virt-install \
    --connect=qemu:///system \
    --name="${HOSTNAME}" \
    --bridge="${BRIDGE}" \
    --mac="${MAC}" \
    --disk="${VM_IMAGE_DIR}/images/${HOSTNAME}.img,bus=virtio,size=${STORAGE}" \
    --ram="${RAM}" \
    --vcpus="${VCPUS}" \
    --autostart \
    --hvm \
    --arch x86_64 \
    --accelerate \
    --check-cpu \
    --os-type=linux \
    --force \
    --watchdog=default \
    --extra-args="ks=file:/${KS_FILE} console=tty0 console=ttyS0,115200n8 serial" \
    --initrd-inject="${KS_FQN}" \
    --graphics=none \
    --noautoconsole \
    --debug \
    --location="${ISO_FQN}"

if [[ ! -z $DEBUG ]]; then
    # Connect to the console and watch the install. The VM powers off
    # when the install finishes, so start it back up afterwards.
    virsh console "${HOSTNAME}"
    virsh start "${HOSTNAME}"
fi

# Make a backup of the VM's XML definition file
virsh dumpxml "${HOSTNAME}" > "${VM_IMAGE_DIR}/xml/${HOSTNAME}.xml"

if [[ ! -z $VERBOSE ]]; then
    set +xv
fi
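
The XML backup comes in handy if you ever remove a VM’s definition but keep its disk image; virsh define re-registers the domain from the saved file. For example:

virsh define /var/lib/libvirt/xml/node1.xml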

ubuntu.ks Kickstart file

Download ubuntu.ks on GitHub.

# System language
lang en_US
# Language modules to install
langsupport en_US
# System keyboard
keyboard us
# System mouse
mouse
# System timezone
timezone --utc Etc/UTC
# Root password
rootpw --disabled
# Initial user
user ansible --fullname "ansible" --iscrypted --password $6$CfjrLvwGbzSPGq49$t./6zxk9D16P6J/nq2eBVWQ74aGgzKDrQ9LdbTfVA0IrHTQ7rQ8iq61JTE66cUjdIPWY3fN7lGyR4LzrGwnNP.
# Reboot after installation
reboot
# Use text mode install
text
# Install OS instead of upgrade
install
# Use CDROM installation media
cdrom
# System bootloader configuration
bootloader --location=mbr 
# Clear the Master Boot Record
zerombr yes
# Partition clearing information
clearpart --all 
# Disk partitioning information
part / --fstype ext4 --size 3700 --grow
part swap --size 200 
# System authorization information
auth  --useshadow  --enablemd5 
# Firewall configuration
firewall --enabled --ssh 
# Do not configure the X Window System
skipx
%post --interpreter=/bin/bash
echo "### Redirect output to console"
exec < /dev/tty6 > /dev/tty6
chvt 6
echo "### Update all packages"
apt-get update
apt-get -y upgrade
# Install packages
apt-get install -y openssh-server vim python
echo "### Enable serial console so virsh can connect to the console"
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service
echo "### Add public ssh key for Ansible"
mkdir -m0700 -p /home/ansible/.ssh
cat <<EOF >/home/ansible/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC14kgOfzuOA4+hD16JFTAtXWc0UkzvXw3TrPxivh+/t86FH1N1qlXLLZn60voysLHnyiwTXrC8Zy0H8rckRopepozMBRegeklnkLHaiFDBdbAkHm8DMOd5QuGedBZ+s80+05btzOVxZcN5M6m4A03vLGGoOqnJvewv1u/yP6et0hP2Bs6D9ycczOpXeOgKnUXt1rciVYTk9xwOXFWcZ5phnXSCGA1w0BACK2CZaCKmsAT5YR7PA4N+7xVMvhOgo3gt8dNxNtmEtkZSlYYqkJehuldt1IPpfQ5/QYngYKX1ZCKS2LHc9Ys3F8QX3djhOhqFL+kcDMrdTaT5GlAFcgqebIao5mTYpiqc72YbvbCMWRVomhE0TWgliIftR/65NzOf8N4b2fE/hakLkIsGyR7TQiNmAHgagqBX/qdBJ7QJdNN7GH2aGP/Ni7b9xX2jsWXRj8QcSee+IDgfm2k/uKGvI6+RotRVx/EjwOGVUeGp8txP8l4T0AmYsgiL1Phe5swAPaMj4R+m38dwzr2WF/PViI1upF/Weczoiu0dDODLsijdHBAIju9BEDBzcFbDPoLLKHOSMusy86CVGNSEaDZUYwZ2GHY6anfEaRzTJtqRKNvsGiSRJAeKIQrZ16e9QPihcQQQNM+Z9QW7Ppaum8f2QlFMit03UYMplw0EpNPAWQ== ansible@host
EOF
echo "### Set permissions for Ansible directory and key. Since the 'Initial user'"
echo "### is added *after* %post commands are executed, I use the UID:GID"
echo "### as a hack since I know that the first user added will be 1000:1000."
chown -R 1000:1000 /home/ansible
chmod 0600 /home/ansible/.ssh/authorized_keys
echo "### Change back to terminal 1"
chvt 1

Corosync / Pacemaker

While building high-availability Linux computing clusters with Corosync, Pacemaker and OpenSUSE I’ve assembled a collection of useful notes. I’ll share them here, and add to them as time permits.

Terms and abbreviations

Corosync allows any number of servers to be part of the cluster using any number of fault-tolerant configurations (active/passive, active/active, N+1, etc.)

Corosync provides messaging between servers within the same cluster.

Pacemaker manages the resources and applications on a node within the cluster.

OpenAIS can be thought of as the API between Corosync and Pacemaker, as well as between Corosync and other plug-in components.

The ClusterLabs FAQ explains it this way: “Originally Corosync and OpenAIS were the same thing. Then they split into two parts… the core messaging and membership capabilities are now called Corosync, and OpenAIS retained the layer containing the implementation of the AIS standard. Pacemaker itself only needs the Corosync piece in order to function, however some of the applications it can manage (such as OCFS2 and GFS2) require the OpenAIS layer as well.”

“DC” stands for “Designated Co-ordinator”, the node with final say on the cluster’s current state.

“dlm” is the “distributed lock manager” used for lock-update-unlock data updates across the cluster.

Pacemaker Daemons (managed by Corosync):

  • lrmd – local resource manager daemon
  • crmd – cluster resource manager daemon
  • cib – cluster information base (database of cluster information)
  • pengine – policy engine
  • stonithd – “shoot the other node in the head” daemon
  • attrd – presumably the “attribute daemon”, but I haven’t found a formal definition of what it does yet; I need to check the source code.
  • mgmtd – management daemon. Again, I haven’t found a formal definition; I need to check the source code.

Operations

To start Corosync on an OpenSUSE server:

# rcopenais start

To stop Corosync on an OpenSUSE server:

# rcopenais stop

To get the current cluster status:

# crm status

Text-based cluster monitoring tool:

# crm_mon

To list all of the resources in a cluster:

# crm resource status

To view the current pacemaker config:

# crm configure show
# crm configure show xml

To verify the current pacemaker config:

# crm_verify -L -V

Force the ClusterIP resource (and all related resources based on your policy settings) to move from server0 to server1:

# crm resource move ClusterIP server1

Note that this transfers the resources by setting the preferred node to server1. To remove the preference and return control to the cluster:

# crm resource unmove ClusterIP

Making a new shadow (backup) copy of the live CIB, named “working”:

# crm configure
crm(live)configure# cib new working
INFO: working shadow CIB created

Updating the “working” backup copy of the live CIB:

# crm configure
crm(live)configure# cib reset working
INFO: copied live CIB to working

Setting the “working” CIB to act as the “live” configuration:

# crm configure
crm(live)configure# cib commit working
INFO: commited 'working' shadow CIB to the cluster

Saving the configuration to a file, modifying the file, and loading just the updates:

# crm configure show > /tmp/crm.xml
# vi /tmp/crm.xml
# crm configure load update /tmp/crm.xml

To get a list of all of the OCF resource agents provided by Pacemaker and Heartbeat:

# crm ra list ocf heartbeat
# crm ra list ocf pacemaker

To see which host a resource is running on:

# crm resource status ClusterIP
resource ClusterIP is running on: server0

To clear the fail count on a WebSite resource:

# crm resource cleanup WebSite

Cluster configuration

On OpenSUSE servers OCF resource definitions can be found in /usr/lib/ocf/resource.d/.

Pacemaker resources are configured using the crm tool.

To disable STONITH on a two-node cluster:

# crm configure property stonith-enabled=false
# crm_verify -L -V

Adding a second IP address resource to the cluster:

# crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.97" cidr_netmask="24" \
        op monitor interval="30" timeout="25" start-delay="0"

To turn the quorum policy OFF for two-node clusters:

# crm configure property no-quorum-policy=ignore
# crm configure show

Adding an Apache resource with https enabled:

# crm configure primitive WebSite ocf:heartbeat:apache \
        params configfile="/etc/apache2/httpd.conf" options="-DSSL" \
        operations $id="WebSite-operations" \
        op start interval="0" timeout="40" \
        op stop interval="0" timeout="60" \
        op monitor interval="60" timeout="120" start-delay="0" \
        statusurl="http://127.0.0.1/server-status/" \
        meta target-role="Started"
 

Note that the Apache mod_status module must be loaded and configured so that http://127.0.0.1/server-status/ returns the server status page; otherwise Pacemaker cannot tell that the server is actually running, will attempt to fail over to the other node, and will then fail again.
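
Before handing control to Pacemaker it’s worth verifying by hand that mod_status is loaded and answering. A quick check (the control binary may be apache2ctl rather than apachectl on OpenSUSE):

# List loaded modules and look for mod_status
apachectl -M | grep -i status
# Confirm the status URL that Pacemaker will poll actually responds
curl -s http://127.0.0.1/server-status/ | head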

Adding a DRBD resource (based on http://www.drbd.org/users-guide-emb/s-pacemaker-crm-drbd-backed-service.html):

# crm configure primitive VolumeDRBD ocf:linbit:drbd \
        params drbd_resource="drbd_resource_name" \
        operations $id="VolumeDRBD-operations" \
        op start interval="0" timeout="240" \
        op promote interval="0" timeout="90" \
        op demote interval="0" timeout="90" \
        op stop interval="0" timeout="100" \
        op monitor interval="40" timeout="60" start-delay="0" \
        op notify interval="0" timeout="90" \
        meta target-role="started"
# crm configure primitive FileSystemDRBD ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/srv/src" fstype="ext3" \
        operations $id="FileSystemDRBD-operations" \
        op start interval="0" timeout="60" \
        op stop interval="0" timeout="60" fast_stop="no" \
        op monitor interval="40" timeout="60" start-delay="0" \
        op notify interval="0" timeout="60"
# crm configure ms MasterDRBD VolumeDRBD \
        meta clone-max="2" notify="true" target-role="started"

Note that the resource name drbd_resource="drbd_resource_name" must match the name listed in /etc/drbd.conf and that drbd should NOT be started by init — you want to let Pacemaker control starting and stopping drbd. To make sure DRBD isn’t being started by init use:

# chkconfig drbd off

Telling Pacemaker that the floating IP address, DRBD primary, and Apache all have to be on the same node:

# crm configure group Cluster ClusterIP FileSystemDRBD WebSite \
        meta target-role="Started"
# crm configure colocation WebServerWithIP inf: Cluster MasterDRBD:Master
# crm configure order StartFileSystemFirst inf: MasterDRBD:promote Cluster:start

How to turn off smart quotes in Libre Office Writer

To turn off smart quotes in Libre Office Writer, so that the double quote character is shown in the document as " (exactly as you typed it) and doesn’t get converted into something curly:

  • Go to Tools > Autocorrect Options
  • Select the Localized Options tab
  • Click the button under Start Quote. If you’re using the Basic Latin character set, scroll all the way to the top of the character set display and click the " box (third box in the top left row, next to the !). If you are using some other character set, try searching near the top of the set for the " character.
  • Click OK
  • Click the button under End Quote. Scroll all the way to the top of the character set display and click the " box (third box in the top left row, next to the !)
  • Click OK
  • Click OK

Smart quotes are now off for the document that you’re working on. They will also be off for any new documents that you create, including spreadsheets and files created by other Libre Office applications.

To get rid of smart quotes already in a document:

  • Highlight any start smart quote and copy it (Ctrl-C or Edit > Copy)
  • Select Edit > Find & Replace
  • Paste the smart quote into the Search for box
  • Type " in the Replace with box
  • Click the Replace All button
  • Click Close
  • Repeat these steps using a copy of the end smart quote