post

Quickly create guest VMs using virsh, cloud image files, and cloud-init

After the latest updates to the code, these scripts now create VMs from full Linux distros in a few seconds.

I was looking for a way to automate the creation of VMs for testing various distributed system / cluster software packages. I’ve used Vagrant in the past but I wanted something that would:

  • Allow me to use raw image files as the basis for guest VMs.
  • Guest VMs should be set up with bridged IPs that are routable from the host.
  • Guest VMs should be able to reach the Internet.
  • Other hosts on the local network should be able to reach guest VMs. (Setting up additional routes is OK).
  • VM creation should work with any distro that supports cloud-init.
  • Scripts should be able to create and delete VMs in a scripted, fully-automatic manner.
  • Guest VMs should be set up to allow passwordless ssh access from the “ansible” user, so that once a VM is running Ansible can be used for additional configuration and customization of the VM.

I’ve previously used virsh’s virt-install tool to create VMs and I like how easy it is to set up things like extra network interfaces and attach existing disk images. The scripts in this repo fully automate the virsh VM creation process.

cloud-init

The current version of the create-vm script uses cloud images, cloud-init, and virsh tooling to quickly create VMs from the command line. Using a single Linux host system you can create multiple guest VMs running on that host. Each guest VM has its own file system, memory, virtualized CPUs, IP address, etc.

Cloud Images

create-vm creates a QCOW2 file for your VM’s file system. The QCOW2 image uses the cloud image as a base filesystem, so instead of copying all of the files that come with a Linux distribution and installing them, QCOW2 reads unchanged files directly from the base image. QCOW stands for “QEMU Copy On Write”: once you change a file, the changed data is written to your VM’s QCOW2 file.
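If you’re curious what that looks like under the hood, this is roughly the kind of qemu-img command used to build an overlay image on top of a base cloud image (illustrative only; the actual create-vm script may use different options and paths):

# Create a copy-on-write overlay that uses the Jammy cloud image as its backing file
qemu-img create -f qcow2 \
    -b ${HOME}/vms/base/jammy-server-cloudimg-amd64.img -F qcow2 \
    ${HOME}/vms/images/node1.img 40G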

Cloud images usually have the extension .img or .qcow2 and are built for different system architectures.

Cloud images are available for most major distros, including Ubuntu, Debian, Fedora, CentOS, and others.

Pick the base image for the distro and release that you want to install and download it onto your host system. Make sure that the base image uses the same hardware architecture as your host system, e.g. “x86_64” or “amd64” for Intel- and AMD-based host systems, “arm64” for 64-bit ARM-based host systems.
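If you’re not sure which architecture your host uses, uname will tell you:

$ uname -m
x86_64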

cloud-init configuration

cloud-init reads in two configuration files, user-data and meta-data, to initialize a VM’s settings. One of the places it looks for these files is any attached disk volume labeled cidata.

The create-vm script creates an ISO disk called cidata with these two files and passes that in as a volume to virsh when it creates the VM. This is referred to as the “no cloud” method, so if you see a cloud image for “nocloud” that’s the one you want to use.
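create-vm builds the cidata ISO for you, but if you want to see roughly how the “no cloud” method works, something like this produces a volume that cloud-init will recognize (a minimal sketch, assuming genisoimage is installed; the file contents shown are illustrative, not what create-vm generates):

# Minimal user-data and meta-data files
cat > user-data <<'EOF'
#cloud-config
hostname: node1
EOF
cat > meta-data <<'EOF'
instance-id: node1
local-hostname: node1
EOF
# Build an ISO with the volume label "cidata" so cloud-init's NoCloud datasource finds it
genisoimage -output node1-cidata.img -volid cidata -joliet -rock user-data meta-data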

If you’re interested in other ways of doing this, check out the Datasources documentation for cloud-init.

Files

create-vm stores files as follows:

  • ${HOME}/vms/base/ – Place to store your base Linux cloud images.
  • ${HOME}/vms/images/ – your-vm-name.img and your-vm-name-cidata.img files.
  • ${HOME}/vms/init/ – user-data and meta-data.
  • ${HOME}/vms/xml/ – Backup copies of your VMs’ XML definition files.

QCOW2 filesystems allocate space as needed, so if you create a VM with 100GB of storage, the initial size of the your-vm-name.img and your-vm-name-cidata.img files is only about 700K total. The your-vm-name.img file will grow as you install packages and update files, but will never grow beyond the disk size that you set when you create the VM.
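One way to see the difference between the virtual disk size and the space actually used is with qemu-img and ls (qemu-img info reports both the “virtual size” and the “disk size” of the image):

qemu-img info ${HOME}/vms/images/node1.img
ls -lsh ${HOME}/vms/images/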

Scripts

The create-vm repo contains these scripts:

  • create-vm – Use .img and cloud-init files to auto-generate a VM.
  • delete-vm – Delete a virtual machine created with create-vm.
  • get-vm-ip – Get the IP address of a VM managed by virsh.

Host setup

I’m running the scripts from a host with Ubuntu Linux 22.04 installed. I added the following to the host’s Ansible playbook to install the necessary virtualization packages:

  - name: Install virtualization packages
    apt:
      name: "{{item}}"
      state: latest
    with_items:
    - libvirt-bin
    - libvirt-clients
    - libvirt-daemon
    - libvirt-daemon-system
    - libvirt-daemon-driver-storage-zfs
    - python-libvirt
    - python3-libvirt
    - virt-manager
    - virtinst

If you’re not using Ansible just apt-get install the above packages.
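For example (package names vary a bit between Ubuntu releases, so skip any that apt reports as unavailable on your version):

sudo apt-get install -y libvirt-bin libvirt-clients libvirt-daemon \
    libvirt-daemon-system libvirt-daemon-driver-storage-zfs python-libvirt \
    python3-libvirt virt-manager virtinst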

Permissions

The libvirtd daemon runs under the libvirt-qemu service account. The libvirt-qemu user must be able to read the files in ${HOME}/vms/. If your ${HOME} directory has permissions set to 0750 then libvirt-qemu won’t be able to read the ${HOME}/vms/ directory.

You could open up your home directory, e.g.:

chmod 755 ${HOME}

… but that allows anyone logged into your Linux host to read everything in your home directory. A better approach is to add libvirt-qemu to your home directory’s group. For instance, on my host my home directory is /home/earl, owned by user earl and group earl, with permissions 0750:

$ chmod 750 /home/earl
$ ls -al /home
total 24
drwxr-xr-x   6 root      root      4096 Aug 28 21:26 .
drwxr-xr-x  21 root      root      4096 Aug 28 21:01 ..
drwxr-x--- 142 earl      earl      4096 Feb 16 09:27 earl

To give the libvirt-qemu user read access to my files without opening up my home directory to everyone, I add the user to the earl group:

$ sudo usermod --append --groups earl libvirt-qemu
$ sudo systemctl restart libvirtd
$ grep libvirt-qemu /etc/group
earl:x:1000:libvirt-qemu
libvirt-qemu:x:64055:libvirt-qemu

That shows that the group earl, group ID 1000, has a member libvirt-qemu. Since the group earl has read and execute permissions on my home directory, libvirt-qemu has read and execute permissions on my home directory.
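A quick way to confirm that the libvirt-qemu account can actually read the directory is to run ls as that user:

sudo -u libvirt-qemu ls ${HOME}/vms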

Note: The libvirtd daemon will chown some of the files in the directory, including the files in the ~/vms/images directory, to be owned by user libvirt-qemu, group kvm. In order to delete these files without sudo, add yourself to the kvm group, e.g.:

$ sudo usermod --append --groups kvm earl

You’ll need to log out and log in again before the additional group is active.
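If you don’t want to log out, newgrp will start a subshell with the new group active:

newgrp kvm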

create-vm options

create-vm supports the following options:

OPTIONS:
   -h      Show this message
   -n      Host name (required)
   -i      Full path and name of the base .img file to use (required)
   -k      Full path and name of the ansible user's public key file (required)
   -r      RAM in MB (defaults to 2048)
   -c      Number of VCPUs (defaults to 2)
   -s      Amount of storage to allocate in GB (defaults to 80)
   -b      Bridge interface to use (defaults to virbr0)
   -m      MAC address to use (default is to use a randomly-generated MAC)
   -v      Verbose

Create an Ubuntu 22.04 server VM

This creates an Ubuntu 22.04 “Jammy Jellyfish” VM with a 40G hard drive.

First download a copy of the Ubuntu 22.04 “Jammy Jellyfish” cloud image:

mkdir -p ~/vms/base
cd ~/vms/base
wget http://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

Then create the VM:

create-vm -n node1 \
    -i ~/vms/base/jammy-server-cloudimg-amd64.img \
    -k ~/.ssh/id_rsa_ansible.pub \
    -s 40

Once created I can get the IP address and ssh to the VM as the user “ansible”:

$ get-vm-ip node1
192.168.122.219
$ ssh -i ~/.ssh/id_rsa_ansible ansible@192.168.122.219
The authenticity of host '192.168.122.219 (192.168.122.219)' can't be established.
ED25519 key fingerprint is SHA256:L88LPO9iDCGbowuPucV5Lt7Yf+9kKelMzhfWaNlRDxk.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.122.219' (ED25519) to the list of known hosts.
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-60-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Feb 15 20:05:45 UTC 2023

  System load:  0.47216796875     Processes:             105
  Usage of /:   3.7% of 38.58GB   Users logged in:       0
  Memory usage: 9%                IPv4 address for ens3: 192.168.122.219
  Swap usage:   0%

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ansible@node1:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           198M 1008K  197M   1% /run
/dev/sda1        39G  1.5G   38G   4% /
tmpfs           988M     0  988M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs           198M  4.0K  198M   1% /run/user/1000
ansible@node1:~$

Note that this VM was created with a 40GB hard disk, and the total disk space shown is 40GB, but the actual hard drive space initially used by this VM was about 700K. The VM can consume up to 40GB, but will only use the space it actually needs.

Create 8 Ubuntu 22.04 servers

This starts the VM creation process and exits. Creation of the VMs continues in the background.

for n in `seq 1 8`; do
    create-vm -n node$n -i ~/vms/base/jammy-server-cloudimg-amd64.img -k ~/.ssh/id_rsa_ansible.pub
done

Delete 8 virtual machines

for n in `seq 1 8`; do
    delete-vm node$n
done

Connect to a VM via the console

virsh console node1

Connect to a VM via ssh

ssh ansible@$(get-vm-ip node1)

Generate an Ansible hosts file

(
    echo '[hosts]'
    for n in `seq 1 8`; do
        ip=$(get-vm-ip node$n)
        echo "node$n ansible_host=$ip ip=$ip ansible_user=ansible"
    done
) > hosts.ini
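Once hosts.ini exists you can sanity-check connectivity to all of the VMs with Ansible’s ping module (assuming the ansible user’s private key is at the path shown):

ansible -i hosts.ini hosts -m ping --private-key ~/.ssh/id_rsa_ansible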

Handy virsh commands

virsh list – List all running VMs.

virsh domifaddr node1 – Get a node’s IP address. Does not work with all network setups, which is why I wrote the get-vm-ip script.

virsh net-list – Show what networks were created by virsh.

virsh net-dhcp-leases $network – Shows current DHCP leases when virsh is acting as the DHCP server. Leases may be shown for machines that no longer exist.

Hope you find this useful.

post

Setting up a personal, production-quality Kubernetes cluster with Kubespray

I’ve been setting up and tearing down Kubernetes clusters for testing various things for the past year, mostly using Vagrant/Virtualbox but also some VMware vSphere and OpenStack deployments.

I wanted to set something a little more permanent up at my home lab — a cluster where I could add and remove nodes, run nodes on multiple physical machines, and use different types of compute hardware.

Set up the virtual machines

To get started I used a desktop System76 Wild Dog Pro Linux box (4.5 GHz i7-7700K, 64GB DDR4) and my create-vm script to create six Ubuntu 18.04 “Bionic Beaver” VMs for the cluster:

for n in $(seq 1 6); do
    create-vm -n node$n \
        -i ./ubuntu-18.04-server-amd64.iso \
        -k ./ubuntu.ks \
        -r 4096 \
        -c 2 \
        -s 40
done

With these parameters each VM will have 4GB RAM, 2 VCPUs, and a 40GB hard drive.

Install and configure Kubespray

I cloned Kubespray into a directory and created an Ansible inventory file following the instructions from the README.

git clone git@github.com:kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt
rm -Rf inventory/mycluster/
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=($(for n in $(seq 1 6); do get-vm-ip node$n; done))
CONFIG_FILE=inventory/mycluster/hosts.ini \
python3 contrib/inventory_builder/inventory.py ${IPS[@]}

The get-vm-ip script is in the same repo as the create-vm script, and both are described in my Use .iso and Kickstart files to automatically create Ubuntu VMs article.

The inventory.py script generates an Ansible hosts inventory file in inventory/mycluster/hosts.ini with all of your VM IP addresses.

I like to add one variable override to the bottom of hosts.ini which copies the kubectl credentials over to my host machine. That way I can run kubectl commands directly from my desktop. The extra lines to add to the bottom of hosts.ini are:

[all:vars]
kubectl_localhost=true

Install Kubernetes

To install Kubernetes on the VMs I run the Kubespray cluster.yaml playbook:

export ANSIBLE_REMOTE_USER=ansible
ansible-playbook -i inventory/mycluster/hosts.ini \
    --become --become-user=root cluster.yml

Once the playbooks have finished, you should have a fully-operational Kubernetes cluster running on your desktop.

At this point you should be able to query the cluster from your desktop using kubectl. For example:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.122.251:6443
coredns is running at https://192.168.122.251:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://192.168.122.251:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes
NAME    STATUS   ROLES         AGE    VERSION
node1   Ready    master,node   3d6h   v1.13.0
node2   Ready    master,node   3d6h   v1.13.0
node3   Ready    node          3d6h   v1.13.0
node4   Ready    node          3d6h   v1.13.0
node5   Ready    node          3d6h   v1.13.0
node6   Ready    node          3d6h   v1.13.0
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-67f89845f-6zbvx   1/1     Running   1          3d6h
kube-system   calico-node-jh7ng                         1/1     Running   2          3d6h
kube-system   calico-node-l9vfb                         1/1     Running   2          3d6h
kube-system   calico-node-mqxjx                         1/1     Running   2          3d6h
...

Set up the Kubernetes Dashboard

One of the first things I like to do is set up access to the Kubernetes dashboard. First I set up a service account for the admin user:

$ cat ~/Projects/k8s-cluster/dashboard-adminuser.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
$ kubectl apply -f ~/Projects/k8s-cluster/dashboard-adminuser.yaml

Next I get the bearer token for the user account:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Finally I plug the dashboard URL that I got from kubectl cluster-info into my browser, select “Token” authentication, and cut and paste in the bearer token to log into the system.

Once logged in, the dashboard shows an overview of my cluster.

With a minimal amount of working compute infrastructure, it’s easy to set up your own production-quality Kubernetes cluster using Kubespray.

Hope you find this useful.

post

How to get the IP address of a KVM/virsh VM

Since virsh domifaddr doesn’t work to get the IP addresses of VMs on a bridged network, I wrote a get-vm-ip script (which you can download from Github) which uses this to get the IP of a running VM:

HOSTNAME=[your vm name]
MAC=$(virsh domiflist $HOSTNAME | awk '{ print $5 }' | tail -2 | head -1)
arp -a | grep $MAC | awk '{ print $2 }' | sed 's/[()]//g'

The virsh command gets the VM’s MAC address, and the last line uses arp to find the IP address associated with that MAC.
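On hosts that don’t have the arp command installed (it comes from the net-tools package), the iproute2 ip neigh command provides the same information:

HOSTNAME=[your vm name]
MAC=$(virsh domiflist $HOSTNAME | awk '{ print $5 }' | tail -2 | head -1)
ip neigh | grep -i "$MAC" | awk '{ print $1 }'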

Hope you find this useful.

post

Use .iso and Kickstart files to automatically create Ubuntu VMs

I wrote an updated version of this article in 2023: Quickly create guest VMs using virsh, cloud image files, and cloud-init. The newer code works better on more recent distros. Unless you really need to use Kickstart, try the new code.

I was looking for a way to automate the creation of VMs for testing various distributed system / cluster software packages. I’ve used Vagrant in the past but I wanted something that would:

  • Allow me to use raw ISO files as the basis for guest VMs.
  • Guest VMs should be set up with bridged IPs that are routable from the host.
  • Guest VMs should be able to reach the Internet.
  • Other hosts on the local network should be able to reach guest VMs. (Setting up additional routes is OK).
  • VM creation should work with any distro that supports Kickstart files.
  • Scripts should be able to create and delete VMs in a scripted, fully-automatic manner.
  • Guest VMs should be set up to allow passwordless ssh access from the “ansible” user.

I’ve previously used virsh’s virt-install tool to create VMs and I like how easy it is to set up things like extra network interfaces and attach existing disk images. The scripts in this repo fully automate the virsh VM creation process.

Scripts

I put all of my code into a Github repo containing these scripts:

  • create-vm – Use .iso and kickstart files to auto-generate a VM.
  • delete-vm – Delete a virtual machine created with create-vm.
  • get-vm-ip – Get the IP address of a VM managed by virsh.
  • encrypt-pw – Returns a SHA512 encrypted password suitable for pasting into Kickstart files.

I’ve also included a sample ubuntu.ks Kickstart file for creating an Ubuntu host.

Host setup

I’m running the scripts from a host with Ubuntu Linux 18.10 installed. I added the following to the host’s Ansible playbook to install the necessary virtualization packages:

  - name: Install virtualization packages
    apt:
      name: "{{item}}"
      state: latest
    with_items:
    - qemu-kvm
    - libvirt-bin
    - libvirt-clients
    - libvirt-daemon
    - libvirt-daemon-driver-storage-zfs
    - python-libvirt
    - python3-libvirt
    - system-config-kickstart
    - vagrant-libvirt
    - vagrant-sshfs
    - virt-manager
    - virtinst

If you’re not using Ansible just apt-get install the above packages.

Sample Kickstart file

There are plenty of documents on the Internet on how to set up Kickstart files.

A couple of things are special about the included Kickstart file:

The Ansible user: Although I’d prefer to create the “ansible” user as a locked account, with no password and only an ssh public key, Kickstart on Ubuntu does not allow this, so I set up an encrypted password.

To set up your own password, use the encrypt-pw script to create a SHA512-hashed password that you can copy and paste into the Kickstart file. After a VM is created you can use this password if you need to log into the VM via the console.
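If you don’t have the encrypt-pw script handy, openssl can generate a compatible SHA512-crypt hash (this is an equivalent stand-in, not necessarily how encrypt-pw itself works):

openssl passwd -6 'my-console-password'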

To use your own ssh key, replace the ssh key in the %post section with your own public key.

The %post section at the bottom of the Kickstart file does a couple of things:

  • It updates all packages with the latest versions.
  • To configure a VM with Ansible, you just need ssh access to the VM and Python installed on it, so I use %post to install an ssh server and Python.
  • I start the serial console, so that virsh console $vmname works.
  • I add a public key for Ansible, so I can configure the servers with Ansible without entering a password.

Despite the name, the commands in the %post section are not the last commands executed by Kickstart on an Ubuntu 18.10 server. The “ansible” user is added after the %post commands are executed. This means that the Ansible ssh public key gets added before the ansible user is created.

To make key-based logins work I set the UID:GID of authorized_keys to 1000:1000. The user is later created with UID=1000, GID=1000, which means that the authorized_keys file ends up being owned by the ansible user by the time the VM creation is complete.

Create an Ubuntu 18.10 server

This creates a VM using Ubuntu’s text-based installer. Since the `-d` parameter is used, the progress of the install is shown on screen.

create-vm -n node1 \
    -i ~/isos/ubuntu-18.10-server-amd64.iso \
    -k ~/conf/ubuntu.ks \
    -d

Create 8 Ubuntu 18.10 servers

This starts the VM creation process and exits. Creation of the VMs continues in the background.

for n in `seq 1 8`; do
    create-vm -n node$n \
        -i ~/isos/ubuntu-18.10-server-amd64.iso \
        -k ~/conf/ubuntu.ks
done

Delete 8 virtual machines

for n in `seq 1 8`; do
    delete-vm node$n
done

Connect to a VM via the console

virsh console node1

Connect to a VM via ssh

ssh ansible@`get-vm-ip node1`

Generate an Ansible hosts file

(
    echo '[hosts]'
    for n in `seq 1 8`; do
        ip=`get-vm-ip node$n`
        echo "node$n ansible_ip=$ip ansible_user=ansible"
    done
) > hosts

Handy virsh commands

  • virsh list – List all running VMs.
  • virsh domifaddr node1 – Get a node’s IP address. Does not work with all network setups, which is why I wrote the get-vm-ip script.
  • virsh net-list – Show what networks were created by virsh.
  • virsh net-dhcp-leases $network – Shows current DHCP leases when virsh is acting as the DHCP server. Leases may be shown for machines that no longer exist.

Known Issues

  • VMs created without the -d (debug mode) parameter may be created in “stopped” mode. To start them up, run the command virsh start $vmname.
  • Depending on how your host is set up, you may need to run these scripts as root.
  • Ubuntu text mode install messes up terminal screens. Run reset from the command line to restore a terminal’s functionality.
  • I use Ansible to set a guest’s hostname, not Kickstart, so all Ubuntu guests created have the host name “ubuntu”.

Hope you find this useful.

create-vm script

Download create-vm from Github

#!/bin/bash

# create-vm - Use .iso and kickstart files to auto-generate a VM.

# Copyright 2018 Earl C. Ruby III
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

HOSTNAME=
ISO_FQN=
KS_FQN=
RAM=1024
VCPUS=2
STORAGE=20
BRIDGE=virbr0
MAC="RANDOM"
VERBOSE=
DEBUG=
VM_IMAGE_DIR=/var/lib/libvirt

usage()
{
cat << EOF
usage: $0 options

This script will take an .iso file created by revisor and generate a VM from it.

OPTIONS:
   -h      Show this message
   -n      Host name (required)
   -i      Full path and name of the .iso file to use (required)
   -k      Full path and name of the Kickstart file to use (required)
   -r      RAM in MB (defaults to ${RAM})
   -c      Number of VCPUs (defaults to ${VCPUS})
   -s      Amount of storage to allocate in GB (defaults to ${STORAGE})
   -b      Bridge interface to use (defaults to ${BRIDGE})
   -m      MAC address to use (default is to use a randomly-generated MAC)
   -v      Verbose
   -d      Debug mode
EOF
}

while getopts "hn:i:k:r:c:s:b:m:vd" option; do
    case "${option}"
    in
        h)
            usage
            exit 0
            ;;
        n) HOSTNAME=${OPTARG};;
        i) ISO_FQN=${OPTARG};;
        k) KS_FQN=${OPTARG};;
        r) RAM=${OPTARG};;
        c) VCPUS=${OPTARG};;
        s) STORAGE=${OPTARG};;
        b) BRIDGE=${OPTARG};;
        m) MAC=${OPTARG};;
        v) VERBOSE=1;;
        d) DEBUG=1;;
    esac
done

if [[ -z $HOSTNAME ]]; then
    echo "ERROR: Host name is required"
    usage
    exit 1
fi

if [[ -z $ISO_FQN ]]; then
    echo "ERROR: ISO file name or http url is required"
    usage
    exit 1
fi

if [[ -z $KS_FQN ]]; then
    echo "ERROR: Kickstart file name or http url is required"
    usage
    exit 1
fi

if ! [[ -f $ISO_FQN ]]; then
    echo "ERROR: $ISO_FQN file not found"
    usage
    exit 1
fi

if ! [[ -f $KS_FQN ]]; then
    echo "ERROR: $KS_FQN file not found"
    usage
    exit 1
fi
KS_FILE=$(basename "$KS_FQN")

if [[ ! -z $VERBOSE ]]; then
    echo "Building ${HOSTNAME} using MAC ${MAC} on ${BRIDGE}"
    echo "======================= $KS_FQN ======================="
    cat "$KS_FQN"
    echo "=============================================="
    set -xv
fi

mkdir -p $VM_IMAGE_DIR/{images,xml}

virt-install \
    --connect=qemu:///system \
    --name="${HOSTNAME}" \
    --bridge="${BRIDGE}" \
    --mac="${MAC}" \
    --disk="${VM_IMAGE_DIR}/images/${HOSTNAME}.img,bus=virtio,size=${STORAGE}" \
    --ram="${RAM}" \
    --vcpus="${VCPUS}" \
    --autostart \
    --hvm \
    --arch x86_64 \
    --accelerate \
    --check-cpu \
    --os-type=linux \
    --force \
    --watchdog=default \
    --extra-args="ks=file:/${KS_FILE} console=tty0 console=ttyS0,115200n8 serial" \
    --initrd-inject="${KS_FQN}" \
    --graphics=none \
    --noautoconsole \
    --debug \
    --location="${ISO_FQN}"

if [[ ! -z $DEBUG ]]; then
    # Connect to the console and watch the install
    virsh console "${HOSTNAME}"
    virsh start "${HOSTNAME}"
fi

# Make a backup of the VM's XML definition file
virsh dumpxml "${HOSTNAME}" > "${VM_IMAGE_DIR}/xml/${HOSTNAME}.xml"

if [[ ! -z $VERBOSE ]]; then
    set +xv
fi

ubuntu.ks Kickstart file

Download ubuntu.ks on Github.

# System language
lang en_US
# Language modules to install
langsupport en_US
# System keyboard
keyboard us
# System mouse
mouse
# System timezone
timezone --utc Etc/UTC
# Root password
rootpw --disabled
# Initial user
user ansible --fullname "ansible" --iscrypted --password $6$CfjrLvwGbzSPGq49$t./6zxk9D16P6J/nq2eBVWQ74aGgzKDrQ9LdbTfVA0IrHTQ7rQ8iq61JTE66cUjdIPWY3fN7lGyR4LzrGwnNP.
# Reboot after installation
reboot
# Use text mode install
text
# Install OS instead of upgrade
install
# Use CDROM installation media
cdrom
# System bootloader configuration
bootloader --location=mbr 
# Clear the Master Boot Record
zerombr yes
# Partition clearing information
clearpart --all 
# Disk partitioning information
part / --fstype ext4 --size 3700 --grow
part swap --size 200 
# System authorization information
auth  --useshadow  --enablemd5 
# Firewall configuration
firewall --enabled --ssh 
# Do not configure the X Window System
skipx
%post --interpreter=/bin/bash
echo "### Redirect output to console"
exec < /dev/tty6 > /dev/tty6
chvt 6
echo "### Update all packages"
apt-get update
apt-get -y upgrade
# Install packages
apt-get install -y openssh-server vim python
echo "### Enable serial console so virsh can connect to the console"
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service
echo "### Add public ssh key for Ansible"
mkdir -m0700 -p /home/ansible/.ssh
cat <<EOF >/home/ansible/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC14kgOfzuOA4+hD16JFTAtXWc0UkzvXw3TrPxivh+/t86FH1N1qlXLLZn60voysLHnyiwTXrC8Zy0H8rckRopepozMBRegeklnkLHaiFDBdbAkHm8DMOd5QuGedBZ+s80+05btzOVxZcN5M6m4A03vLGGoOqnJvewv1u/yP6et0hP2Bs6D9ycczOpXeOgKnUXt1rciVYTk9xwOXFWcZ5phnXSCGA1w0BACK2CZaCKmsAT5YR7PA4N+7xVMvhOgo3gt8dNxNtmEtkZSlYYqkJehuldt1IPpfQ5/QYngYKX1ZCKS2LHc9Ys3F8QX3djhOhqFL+kcDMrdTaT5GlAFcgqebIao5mTYpiqc72YbvbCMWRVomhE0TWgliIftR/65NzOf8N4b2fE/hakLkIsGyR7TQiNmAHgagqBX/qdBJ7QJdNN7GH2aGP/Ni7b9xX2jsWXRj8QcSee+IDgfm2k/uKGvI6+RotRVx/EjwOGVUeGp8txP8l4T0AmYsgiL1Phe5swAPaMj4R+m38dwzr2WF/PViI1upF/Weczoiu0dDODLsijdHBAIju9BEDBzcFbDPoLLKHOSMusy86CVGNSEaDZUYwZ2GHY6anfEaRzTJtqRKNvsGiSRJAeKIQrZ16e9QPihcQQQNM+Z9QW7Ppaum8f2QlFMit03UYMplw0EpNPAWQ== ansible@host
EOF
echo '### Set permissions for Ansible directory and key. Since the "Initial user"'
echo '### is added *after* %post commands are executed, I use the UID:GID'
echo '### as a hack since I know that the first user added will be 1000:1000.'
chown -R 1000:1000 /home/ansible
chmod 0600 /home/ansible/.ssh/authorized_keys
echo "### Change back to terminal 1"
chvt 1