
Generate a crypted password for Ansible

The Ansible user module allows you to add a user to a Linux system with a password. The password must be passed to Ansible in a hashed format, using one of the hash formats supported by /etc/shadow.

Some Ansible docs suggest storing your passwords in plain text and using Ansible’s password_hash filter to SHA512-hash the plaintext passwords before passing them to the user module. This is a bad practice for a number of reasons.

Storing your passwords in plain text is a bad idea

  • Storing your passwords in plain text is a bad idea, since anyone who can read your Ansible playbook now knows your password.
  • The play is not idempotent: the SHA512 filter re-hashes the password with a new random salt every time you run the play, resetting the password on each run (see the sketch below).
  • Attempting to make the play idempotent, by using update_password: on_create, means that you can no longer update the password using Ansible. This might be OK if you’re just updating one machine. It’s a giant pain in the ass if you need to update the password on many machines.
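
For reference, the discouraged pattern looks something like this. It feeds a plaintext string through Ansible’s password_hash filter (a sketch of what those docs describe, not something to copy):

- name: Set the user's password (discouraged - plaintext in the play)
  user:
    name: earl
    # password_hash() picks a new random salt on every run, so this task
    # always reports "changed" and resets the password every time
    password: "{{ 'plaintext-password' | password_hash('sha512') }}"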

A better way is to hash the password once using openssl and store the hashed version of the password in your Ansible playbooks:

- name: Set the user's password
  user:
    name: earl
    password: "$6$wLZ77bHhLVJsHaMz$WqJhNW2VefjhnupK0FBj5LDPaONaAMRoiaWle4rU5DkXz7hxhl3Gxcwshuy.KQWRFt6YPWXNbdKq9B/Rk9q7A."

To generate the hashed password use the openssl passwd command on any Linux host:

openssl passwd -6 -stdin

This opens an interactive shell to openssl. Just enter the password that you want to use, hit enter, and openssl will respond with the hashed version. Copy and paste the hashed version of the password into Ansible, and the next time you run Ansible on a host the user’s password will be updated.

Type Ctrl-D to exit the interactive openssl shell.

Since you used the interactive shell, the plaintext password that you entered is not saved in the Linux host’s history file and is not visible to anyone running ps.

The crypted password is a SHA512 one-way hash with a random 16-character salt, not the password itself, so you can check the playbook into a Git repository without revealing your password.
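
If you’re curious, the hashed string has three $-delimited fields (the format is described in crypt(5)):

$6$wLZ77bHhLVJsHaMz$WqJhNW2VefjhnupK0FBj5LDPaONaAMRoiaWle4rU5DkXz7hxhl3Gxcwshuy.KQWRFt6YPWXNbdKq9B/Rk9q7A.
 | |                |
 | |                +-- SHA512 hash of the password and salt
 | +-- the 16-character random salt
 +-- "6" selects the SHA512 hashing scheme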

Hope you find this useful.


Automatically decrypt multiple LUKS-encrypted volumes

I’ve written in the past on Adding an external encrypted drive with LVM to Ubuntu Linux and Adding a LUKS-encrypted iSCSI volume to Synology DS414 NAS but I neglected to mention how to automatically decrypt additional volumes.

When installing a fresh copy of Ubuntu, one of the options is to install with a LUKS-encrypted Logical Volume Manager Volume Group (LVM VG). This puts your root volume on the encrypted LVM VG. When you power up your machine, Ubuntu prompts you to enter the decryption passphrase in order to decrypt the VG and start your computer. Without the passphrase the contents of your hard drive are unreadable.

If you add encrypted external drives and/or additional VGs you will end up with multiple encrypted volumes. Ubuntu will prompt you for the passphrase of each additional encrypted volume when you boot up the machine.

If you don’t want to enter multiple, different passphrases each time you boot, you can store the passphrases for the additional volumes in files on the encrypted root filesystem of your first drive and point the /etc/crypttab file at them. You’ll be prompted for just one passphrase, the first VG’s, and that unlocks the files containing the passphrases needed to decrypt the additional volumes.

Here’s how it works.

The /etc/crypttab file contains 4 fields per line: the name of the encrypted volume, a UUID identifying the storage device, the name of a file with the decryption passphrase, and encryption options.

nvme0n1p5   UUID=405d8c73-1cf9-4b2c-9b8e-c76b90d27c67 none                        luks,discard
datastorage UUID=f2d73ac8-1ef1-4735-9dd4-9e778fc9e781 /root/.luks-datastorage     luks,discard
external1   UUID=0140476b-dd0b-4aab-b7d4-2f5fa14d1a0c /root/.luks-backupexternal1 luks
external2   UUID=610a67d4-c4f6-4b73-a824-a437971e8d24 /root/.luks-backupexternal2 luks
iscsi       UUID=b106b749-f4ab-44be-8962-6ff867dc074e /root/.luks-backupiscsi     luks

The first volume, nvme0n1p5, is the encrypted root volume. It contains the root filesystem and the /root home directory. The third field is “none”, which means that Ubuntu will prompt you for a decryption passphrase at boot in order to unlock and decrypt the drive.

The remaining volumes have files defined that contain the decryption passphrase for each volume. Those files are hidden files in the /root home directory. Once the nvme0n1p5 volume is decrypted and mounted, the remaining volumes are automatically decrypted using the passphrases stored in the hidden files.
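
Here’s a minimal sketch of how one of those passphrase files can be created and registered with LUKS. The UUID matches the datastorage volume in the example above; adjust the device and file names for your own volumes:

# Generate a random passphrase in a file that only root can read
dd if=/dev/urandom bs=32 count=1 | base64 > /root/.luks-datastorage
chmod 0400 /root/.luks-datastorage

# Add the file's contents as an additional LUKS key on the volume.
# cryptsetup prompts for an existing passphrase to authorize the change.
cryptsetup luksAddKey /dev/disk/by-uuid/f2d73ac8-1ef1-4735-9dd4-9e778fc9e781 \
    /root/.luks-datastorage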

The end result is that all of your drives are encrypted, but you only have to enter one passphrase to unlock all of your drives.

Hope you find this useful.


Too many authentication failures

I was working with a new Linux distro and after creating a brand-new VM with a single login I attempted to ssh into the VM only to be greeted with:

Received disconnect from 10.0.0.180 port 22:2: Too many authentication failures
Disconnected from 10.0.0.180 port 22

It was a new VM, and I hadn’t loaded an ssh key (there was no option to do so in the install). I’d set up a user and password, so I expected to get a password prompt. I didn’t get to a password prompt, just an immediate disconnect.

I used ssh -vvv to connect and found that my ssh client was attempting to use my ssh keys, as ssh is supposed to, and on the third key the VM spat back the error:

Received disconnect from 10.0.0.180 port 22:2: Too many authentication failures
Disconnected from 10.0.0.180 port 22

Well, I wanted to connect with a password anyhow, so I tried:

ssh -o PubkeyAuthentication=no username@10.0.0.180

I was greeted with a password: prompt.

I checked the /etc/ssh/sshd_config and found that someone who’d built the install image had changed the default setting for MaxAuthTries from 6 to 2.

The MaxAuthTries setting tells the ssh daemon how many authentication attempts a user can make before it disconnects them. Each ssh key loaded into ssh-agent counts as one authentication attempt. The default is 6 because many users (like me) have multiple ssh keys loaded into ssh-agent so that we can automatically log into different hosts that use different ssh keys. Trying more than one ssh key isn’t the same as thumb-fingering a password; ssh is designed to allow for multiple key attempts. Once ssh has offered all of your keys, if you haven’t exceeded the limit and password authentication is enabled, you’ll finally get a password prompt.

Setting MaxAuthTries back to the more reasonable default of 6 and reloading the sshd daemon fixed the issue. Apparently whoever tested the setup only has one ssh key and wasn’t aware of what changing the MaxAuthTries setting does when people with more than one key attempt to log in.
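
If you hit the same problem, the fix is a one-line config change plus a reload. A sketch, assuming a systemd-based distro:

# In /etc/ssh/sshd_config, restore the default:
#   MaxAuthTries 6
sudo sed -i 's/^MaxAuthTries.*/MaxAuthTries 6/' /etc/ssh/sshd_config
sudo systemctl reload sshd   # the service is named "ssh" on Debian/Ubuntu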

Alternatively, if it’s someone else’s server and you can’t change the /etc/ssh/sshd_config file, you can also add these lines to your local ~/.ssh/config file:

Host 10.0.0.180
    PubkeyAuthentication no
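
If you do want key-based logins to a server you can’t reconfigure, another client-side option (my suggestion, not part of the original fix) is to pin a single key in ~/.ssh/config so ssh doesn’t burn authentication attempts offering every key in your agent:

Host 10.0.0.180
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes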

If you’re concerned about ssh security, sshd_config allows you to control which versions of the ssh protocol are supported, which ciphers you trust (or don’t trust), and other settings that lock down what you will or won’t allow ssh to do in your environment. Setting MaxAuthTries 2 may make sense for some applications in some environments, but using it in an out-of-the-box installation just breaks ssh for no good reason.

Hope you find this useful.


How to get the IP address of a KVM/virsh VM

Since virsh domifaddr doesn’t work to get the IP addresses of VMs on a bridged network, I wrote a get-vm-ip script (which you can download from Github) that uses the following commands to get the IP of a running VM:

HOSTNAME=[your vm name]
# Get the MAC address of the VM's first network interface
MAC=$(virsh domiflist "$HOSTNAME" | awk '{ print $5 }' | tail -2 | head -1)
# Look the MAC up in the host's ARP cache to find the VM's IP address
arp -a | grep "$MAC" | awk '{ print $2 }' | sed 's/[()]//g'

The virsh command gets the VM’s MAC address; the arp command finds the IP address associated with that MAC.
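
On hosts without the legacy net-tools package the arp command may not be installed; the same lookup works with iproute2 (a substitution of mine, not part of the original script):

# Print the IP address the neighbor table maps to the VM's MAC
ip neigh | grep "$MAC" | awk '{ print $1 }'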

Hope you find this useful.


Use .iso and Kickstart files to automatically create Ubuntu VMs

I wrote an updated version of this article in 2023: Quickly create guest VMs using virsh, cloud image files, and cloud-init. The newer code works better on more recent distros. Unless you really need to use Kickstart, try the new code.

I was looking for a way to automate the creation of VMs for testing various distributed system / cluster software packages. I’ve used Vagrant in the past but I wanted something that would:

  • Allow me to use raw ISO files as the basis for guest VMs.
  • Guest VMs should be set up with bridged IPs that are routable from the host.
  • Guest VMs should be able to reach the Internet.
  • Other hosts on the local network should be able to reach guest VMs. (Setting up additional routes is OK).
  • VM creation should work with any distro that supports Kickstart files.
  • Scripts should be able to create and delete VMs in a scripted, fully-automatic manner.
  • Guest VMs should be set up to allow passwordless ssh access from the “ansible” user.

I’ve previously used the virt-install tool to create VMs and I like how easy it is to set up things like extra network interfaces and attach existing disk images. The scripts in this repo fully automate the virsh VM creation process.

Scripts

I put all of my code into a Github repo containing these scripts:

  • create-vm – Use .iso and kickstart files to auto-generate a VM.
  • delete-vm – Delete a virtual machine created with create-vm.
  • get-vm-ip – Get the IP address of a VM managed by virsh.
  • encrypt-pw – Returns a SHA512-hashed password suitable for pasting into Kickstart files.

I’ve also included a sample ubuntu.ks Kickstart file for creating an Ubuntu host.

Host setup

I’m running the scripts from a host with Ubuntu Linux 18.10 installed. I added the following to the host’s Ansible playbook to install the necessary virtualization packages:

  - name: Install virtualization packages
    apt:
      name: "{{ item }}"
      state: latest
    with_items:
      - qemu-kvm
      - libvirt-bin
      - libvirt-clients
      - libvirt-daemon
      - libvirt-daemon-driver-storage-zfs
      - python-libvirt
      - python3-libvirt
      - system-config-kickstart
      - vagrant-libvirt
      - vagrant-sshfs
      - virt-manager
      - virtinst

If you’re not using Ansible, the equivalent apt-get command to install the above packages is:
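
sudo apt-get install -y qemu-kvm libvirt-bin libvirt-clients libvirt-daemon \
    libvirt-daemon-driver-storage-zfs python-libvirt python3-libvirt \
    system-config-kickstart vagrant-libvirt vagrant-sshfs virt-manager virtinst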

Sample Kickstart file

There are plenty of documents on the Internet on how to set up Kickstart files.

A couple of things are special about the included Kickstart file:

The Ansible user: Although I’d prefer to create the “ansible” user as a locked account, with no password and just an ssh public key, Kickstart on Ubuntu does not allow this, so I set up an encrypted password instead.

To set up your own password, use the encrypt-pw script to create a SHA512-hashed password that you can copy and paste into the Kickstart file. After a VM is created you can use this password if you need to log into the VM via the console.

To use your own ssh key, replace the ssh key in the %post section with your own public key.

The %post section at the bottom of the Kickstart file does a couple of things:

  • It updates all packages with the latest versions.
  • To configure a VM with Ansible, you just need ssh access to the VM and Python installed on the VM, so I use %post to install an ssh server and Python.
  • I start the serial console, so that virsh console $vmname works.
  • I add a public key for Ansible, so I can configure the servers with Ansible without entering a password.

Despite the name, the commands in the %post section are not the last commands executed by Kickstart on an Ubuntu 18.10 server. The “ansible” user is added after the %post commands are executed. This means that the Ansible ssh public key gets added before the ansible user is created.

To make key-based logins work I set the UID:GID of authorized_keys to 1000:1000. The user is later created with UID=1000, GID=1000, which means that the authorized_keys file ends up being owned by the ansible user by the time the VM creation is complete.

Create an Ubuntu 18.10 server

This creates a VM using Ubuntu’s text-based installer. Since the `-d` parameter is used, progress of the install is shown on screen.

create-vm -n node1 \
    -i ~/isos/ubuntu-18.10-server-amd64.iso \
    -k ~/conf/ubuntu.ks \
    -d

Create 8 Ubuntu 18.10 servers

This starts the VM creation process and exits. Creation of the VMs continues in the background.

for n in `seq 1 8`; do
    create-vm -n node$n \
        -i ~/isos/ubuntu-18.10-server-amd64.iso \
        -k ~/conf/ubuntu.ks
done

Delete 8 virtual machines

for n in `seq 1 8`; do
    delete-vm node$n
done

Connect to a VM via the console

virsh console node1

Connect to a VM via ssh

ssh ansible@`get-vm-ip node1`

Generate an Ansible hosts file

(
echo '[hosts]'
for n in `seq 1 8`; do
    ip=`get-vm-ip node$n`
    echo "node$n ansible_ip=$ip ansible_user=ansible"
done
) > hosts
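
The resulting hosts file looks something like this (addresses are illustrative):

[hosts]
node1 ansible_ip=10.0.0.101 ansible_user=ansible
node2 ansible_ip=10.0.0.102 ansible_user=ansible
...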

Handy virsh commands

  • virsh list – List all running VMs.
  • virsh domifaddr node1 – Get a node’s IP address. Does not work with all network setups, which is why I wrote the get-vm-ip script.
  • virsh net-list – Show what networks were created by virsh.
  • virsh net-dhcp-leases $network – Shows current DHCP leases when virsh is acting as the DHCP server. Leases may be shown for machines that no longer exist.

Known Issues

  • VMs created without the -d (debug mode) parameter may be created in “stopped” mode. To start them up, run virsh start $vmname.
  • Depending on how your host is set up, you may need to run these scripts as root.
  • Ubuntu text mode install messes up terminal screens. Run reset from the command line to restore a terminal’s functionality.
  • I use Ansible to set a guest’s hostname, not Kickstart, so all Ubuntu guests created have the host name “ubuntu”.

Hope you find this useful.

create-vm script

Download create-vm from Github

#!/bin/bash

# create-vm - Use .iso and kickstart files to auto-generate a VM.

# Copyright 2018 Earl C. Ruby III
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

HOSTNAME=
ISO_FQN=
KS_FQN=
RAM=1024
VCPUS=2
STORAGE=20
BRIDGE=virbr0
MAC="RANDOM"
VERBOSE=
DEBUG=
VM_IMAGE_DIR=/var/lib/libvirt

usage()
{
cat << EOF
usage: $0 options

This script will take an .iso file created by revisor and generate a VM from it.

OPTIONS:
-h Show this message
-n Host name (required)
-i Full path and name of the .iso file to use (required)
-k Full path and name of the Kickstart file to use (required)
-r RAM in MB (defaults to ${RAM})
-c Number of VCPUs (defaults to ${VCPUS})
-s Amount of storage to allocate in GB (defaults to ${STORAGE})
-b Bridge interface to use (defaults to ${BRIDGE})
-m MAC address to use (default is to use a randomly-generated MAC)
-v Verbose
-d Debug mode
EOF
}

# Note: h, v, and d are flags that take no argument, so they get no
# trailing ":" in the getopts string.
while getopts "hn:i:k:r:c:s:b:m:vd" option; do
    case "${option}" in
        h)
            usage
            exit 0
            ;;
        n) HOSTNAME=${OPTARG};;
        i) ISO_FQN=${OPTARG};;
        k) KS_FQN=${OPTARG};;
        r) RAM=${OPTARG};;
        c) VCPUS=${OPTARG};;
        s) STORAGE=${OPTARG};;
        b) BRIDGE=${OPTARG};;
        m) MAC=${OPTARG};;
        v) VERBOSE=1;;
        d) DEBUG=1;;
    esac
done

if [[ -z $HOSTNAME ]]; then
    echo "ERROR: Host name is required"
    usage
    exit 1
fi

if [[ -z $ISO_FQN ]]; then
    echo "ERROR: ISO file name or http url is required"
    usage
    exit 1
fi

if [[ -z $KS_FQN ]]; then
    echo "ERROR: Kickstart file name or http url is required"
    usage
    exit 1
fi

if ! [[ -f $ISO_FQN ]]; then
    echo "ERROR: $ISO_FQN file not found"
    usage
    exit 1
fi

if ! [[ -f $KS_FQN ]]; then
    echo "ERROR: $KS_FQN file not found"
    usage
    exit 1
fi
KS_FILE=$(basename "$KS_FQN")

if [[ -n $VERBOSE ]]; then
    echo "Building ${HOSTNAME} using MAC ${MAC} on ${BRIDGE}"
    echo "======================= $KS_FQN ======================="
    cat "$KS_FQN"
    echo "=============================================="
    set -xv
fi

mkdir -p $VM_IMAGE_DIR/{images,xml}

virt-install \
    --connect=qemu:///system \
    --name="${HOSTNAME}" \
    --bridge="${BRIDGE}" \
    --mac="${MAC}" \
    --disk="${VM_IMAGE_DIR}/images/${HOSTNAME}.img,bus=virtio,size=${STORAGE}" \
    --ram="${RAM}" \
    --vcpus="${VCPUS}" \
    --autostart \
    --hvm \
    --arch x86_64 \
    --accelerate \
    --check-cpu \
    --os-type=linux \
    --force \
    --watchdog=default \
    --extra-args="ks=file:/${KS_FILE} console=tty0 console=ttyS0,115200n8 serial" \
    --initrd-inject="${KS_FQN}" \
    --graphics=none \
    --noautoconsole \
    --debug \
    --location="${ISO_FQN}"

if [[ -n $DEBUG ]]; then
    # Connect to the console and watch the install. When the installer
    # finishes and the guest powers down, the console exits; start the
    # VM back up so it's running when the script completes.
    virsh console "${HOSTNAME}"
    virsh start "${HOSTNAME}"
fi

# Make a backup of the VM's XML definition file
virsh dumpxml "${HOSTNAME}" > "${VM_IMAGE_DIR}/xml/${HOSTNAME}.xml"

if [[ -n $VERBOSE ]]; then
    set +xv
fi

ubuntu.ks Kickstart file

Download ubuntu.ks on Github.

# System language
lang en_US
# Language modules to install
langsupport en_US
# System keyboard
keyboard us
# System mouse
mouse
# System timezone
timezone --utc Etc/UTC
# Root password
rootpw --disabled
# Initial user
user ansible --fullname "ansible" --iscrypted --password $6$CfjrLvwGbzSPGq49$t./6zxk9D16P6J/nq2eBVWQ74aGgzKDrQ9LdbTfVA0IrHTQ7rQ8iq61JTE66cUjdIPWY3fN7lGyR4LzrGwnNP.
# Reboot after installation
reboot
# Use text mode install
text
# Install OS instead of upgrade
install
# Use CDROM installation media
cdrom
# System bootloader configuration
bootloader --location=mbr 
# Clear the Master Boot Record
zerombr yes
# Partition clearing information
clearpart --all 
# Disk partitioning information
part / --fstype ext4 --size 3700 --grow
part swap --size 200 
# System authorization information
auth  --useshadow  --enablemd5 
# Firewall configuration
firewall --enabled --ssh 
# Do not configure the X Window System
skipx
%post --interpreter=/bin/bash
echo "### Redirect output to console"
exec < /dev/tty6 > /dev/tty6
chvt 6
echo "### Update all packages"
apt-get update
apt-get -y upgrade
# Install packages
apt-get install -y openssh-server vim python
echo "### Enable serial console so virsh can connect to the console"
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service
echo "### Add public ssh key for Ansible"
mkdir -m0700 -p /home/ansible/.ssh
cat <<EOF >/home/ansible/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC14kgOfzuOA4+hD16JFTAtXWc0UkzvXw3TrPxivh+/t86FH1N1qlXLLZn60voysLHnyiwTXrC8Zy0H8rckRopepozMBRegeklnkLHaiFDBdbAkHm8DMOd5QuGedBZ+s80+05btzOVxZcN5M6m4A03vLGGoOqnJvewv1u/yP6et0hP2Bs6D9ycczOpXeOgKnUXt1rciVYTk9xwOXFWcZ5phnXSCGA1w0BACK2CZaCKmsAT5YR7PA4N+7xVMvhOgo3gt8dNxNtmEtkZSlYYqkJehuldt1IPpfQ5/QYngYKX1ZCKS2LHc9Ys3F8QX3djhOhqFL+kcDMrdTaT5GlAFcgqebIao5mTYpiqc72YbvbCMWRVomhE0TWgliIftR/65NzOf8N4b2fE/hakLkIsGyR7TQiNmAHgagqBX/qdBJ7QJdNN7GH2aGP/Ni7b9xX2jsWXRj8QcSee+IDgfm2k/uKGvI6+RotRVx/EjwOGVUeGp8txP8l4T0AmYsgiL1Phe5swAPaMj4R+m38dwzr2WF/PViI1upF/Weczoiu0dDODLsijdHBAIju9BEDBzcFbDPoLLKHOSMusy86CVGNSEaDZUYwZ2GHY6anfEaRzTJtqRKNvsGiSRJAeKIQrZ16e9QPihcQQQNM+Z9QW7Ppaum8f2QlFMit03UYMplw0EpNPAWQ== ansible@host
EOF
echo "### Set permissions for Ansible directory and key. Since the 'Initial user'"
echo "### is added *after* %post commands are executed, I use the UID:GID"
echo "### as a hack since I know that the first user added will be 1000:1000."
chown -R 1000:1000 /home/ansible
chmod 0600 /home/ansible/.ssh/authorized_keys
echo "### Change back to terminal 1"
chvt 1