
Automatically decrypt multiple LUKS-encrypted volumes

I’ve written in the past on Adding an external encrypted drive with LVM to Ubuntu Linux and Adding a LUKS-encrypted iSCSI volume to Synology DS414 NAS, but I neglected to mention how to automatically decrypt additional volumes.

When installing a fresh copy of Ubuntu, one of the options is to install with a LUKS-encrypted Logical Volume Manager Volume Group (LVM VG). This puts your root volume on the encrypted LVM VG. When you power up your machine, Ubuntu prompts you to enter the decryption passphrase in order to decrypt the VG and start your computer. Without the passphrase the contents of your hard drive are unreadable.

If you add encrypted external drives and/or additional VGs you will end up with multiple encrypted volumes. Ubuntu will prompt you for the passphrase of each additional encrypted volume when you boot up the machine.

If you don’t want to enter multiple different passphrases each time you boot, you can store the passphrases for the additional volumes on the encrypted root filesystem of your first drive and point to them from the /etc/crypttab file. You’ll only be prompted for one passphrase, the first VG’s, and unlocking that volume makes the passphrase files for the additional volumes available.

Here’s how it works.

The /etc/crypttab file contains four fields per line: the name of the decrypted (mapped) volume, the UUID of the underlying storage device, the path to a file containing the decryption passphrase (or “none”), and encryption options.

nvme0n1p5   UUID=405d8c73-1cf9-4b2c-9b8e-c76b90d27c67 none                        luks,discard
datastorage UUID=f2d73ac8-1ef1-4735-9dd4-9e778fc9e781 /root/.luks-datastorage     luks,discard
external1   UUID=0140476b-dd0b-4aab-b7d4-2f5fa14d1a0c /root/.luks-backupexternal1 luks
external2   UUID=610a67d4-c4f6-4b73-a824-a437971e8d24 /root/.luks-backupexternal2 luks
iscsi       UUID=b106b749-f4ab-44be-8962-6ff867dc074e /root/.luks-backupiscsi     luks

The first volume, nvme0n1p5, is the encrypted volume the system boots from. It contains the root filesystem and the /root home directory. The third field is “none”, which means that Ubuntu will prompt you for a decryption passphrase at boot in order to unlock and decrypt the volume.

The remaining volumes have files defined that contain the decryption passphrase for each volume. Those files are hidden files in the /root home directory. Once the nvme0n1p5 volume is decrypted and mounted, the remaining volumes are automatically decrypted using the passphrases stored in the hidden files.
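
If you’re setting up an additional volume from scratch, here’s a minimal sketch of how one of those passphrase files might be created and added to the volume’s LUKS key slots. The device path /dev/sdb1 is just an example; substitute your own device and keyfile name.

sudo blkid /dev/sdb1                                              # find the UUID to use in /etc/crypttab
sudo dd if=/dev/urandom of=/root/.luks-datastorage bs=512 count=4 # create a random keyfile
sudo chmod 0400 /root/.luks-datastorage                           # readable by root only
sudo cryptsetup luksAddKey /dev/sdb1 /root/.luks-datastorage      # add the keyfile as an extra key slot

cryptsetup will ask for an existing passphrase on the volume before it adds the new key.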

The end result is that all of your drives are encrypted, but you only have to enter one passphrase to unlock all of your drives.

Hope you find this useful.


Too many authentication failures

I was working with a new Linux distro and after creating a brand-new VM with a single login I attempted to ssh into the VM only to be greeted with:

Received disconnect from 10.0.0.180 port 22:2: Too many authentication failures
Disconnected from 10.0.0.180 port 22

It was a new VM, and I hadn’t loaded an ssh key (there was no option to do so in the install). I’d set up a user and password, so I expected to get a password prompt. I didn’t get to a password prompt, just an immediate disconnect.

I used ssh -vvv to connect and found that my ssh client was attempting to use my ssh keys, as ssh is supposed to, and on the third key the VM spat back the error:

Received disconnect from 10.0.0.180 port 22:2: Too many authentication failures
Disconnected from 10.0.0.180 port 22

Well, I wanted to connect with a password anyhow, so I tried:

ssh -o PubkeyAuthentication=no username@10.0.0.180

I was greeted with a password: prompt.

I checked /etc/ssh/sshd_config and found that whoever built the install image had changed the default setting for MaxAuthTries from 6 to 2.

The MaxAuthTries setting tells the ssh daemon how many authentication attempts it will accept from a client before it disconnects. Each ssh key loaded into ssh-agent counts as one authentication attempt. The default is 6 because many users (like me) have multiple ssh keys loaded into ssh-agent so that we can automatically log into different hosts that use different ssh keys. Trying more than one ssh key isn’t the same as thumb-fingering a password; ssh is designed to allow for multiple key attempts. If ssh tries all of your keys without hitting the limit and password authentication is enabled, you’ll eventually get a password prompt.
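
You can see exactly which keys your agent will offer, and therefore how many attempts you’ll use up, by listing them:

ssh-add -l    # list the keys currently loaded into ssh-agent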

Setting MaxAuthTries back to the more reasonable default of 6 and reloading the sshd daemon fixed the issue. Apparently whoever tested the setup had only one ssh key and wasn’t aware of what changing the MaxAuthTries setting does when people with more than one key attempt to log in.
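
For reference, the fix was just this (a sketch; on Ubuntu the service is named ssh, on some other distros it’s sshd):

# /etc/ssh/sshd_config
MaxAuthTries 6

sudo sshd -t                  # check the config for syntax errors
sudo systemctl reload ssh     # reload the daemon to pick up the change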

If you’re concerned about ssh security, sshd_config allows you to control which versions of the ssh protocol are supported, which ciphers you trust (or don’t trust), and other settings that lock down what you will or won’t allow ssh to do in your environment. It may be that for some applications in some environments setting MaxAuthTries 2 makes sense, but using it in an out-of-the-box installation just breaks ssh for no good reason.

Hope you find this useful.


Install a local .deb file and its dependencies

To install a local deb file and its dependencies use apt, not dpkg:

sudo apt install ./foo-1.2.3.deb

You’ll automatically get all of the dependencies installed with the package. (dpkg doesn’t resolve dependencies or know about repos; apt does.)

The leading ./, or a full or relative path to the deb file, is required. The path is what tells apt that it’s a local file.
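
For example (foo-1.2.3.deb is just a placeholder name):

sudo apt install foo                 # looked up as a package name in your configured repos
sudo apt install ./foo-1.2.3.deb     # installed from the local file in the current directory
sudo apt install /tmp/foo-1.2.3.deb  # a full path works too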

Hope you find this useful.


Determine maximum MTU

I first started paying attention to network MTU settings when I was building petabyte-scale object storage systems. Tuning the network that backs your storage requires maximizing the size of the data packets and verifying that packets aren’t being fragmented. Currently I’m working on performance tuning the processing of image data on racks of GPU servers, and verifying the network MTU came up again. I dug up a script I’d used before and thought I’d share it in case other people run into the same problem.

You can set the host network interface’s MTU setting to 9000 on all of the hosts in your network to enable jumbo frames, but how can you verify that the settings are working? If you’ve set up servers in a cloud environment using multiple availability zones or multiple regions, how can you verify that there isn’t a switch somewhere in the middle of your connection that doesn’t support MTU 9000 and fragments your packets?

Use this shell script:

target_host=192.168.1.10
size=1272
while ping -s $size -M do -c1 $target_host >&/dev/null; do
    ((size+=4));
done
echo "Max MTU size: $((size-4+28))"

-s $size sets the size of the packet being sent.

-M do prohibits fragmentation, so ping fails if the packet fragments.

-c1 sends 1 packet only.

size-4+28 = subtract the last 4 bytes added (the increment that made the ping fail), then add 28 bytes for the IP header (20 bytes) and ICMP header (8 bytes). For example, if the loop stops at size=8976, the largest payload that got through was 8972 bytes, and 8972 + 28 = 9000.

If minimizing packet fragmentation is important to you, set MTU to 9000 on all hosts and then run this test between every pair of hosts in the network. If you get an unexpectedly low value, troubleshoot your switch and host settings and fix the issue.

Assuming that all of your hosts and switches are configured at their maximum MTU values, the minimum value returned from the script is the actual maximum MTU you can support without fragmentation. Use that minimum value as your new host interface MTU setting.
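
A minimal sketch of setting and checking an interface’s MTU (eth0 is a placeholder; use your own interface name, and make the change persistent through your distro’s network configuration):

sudo ip link set dev eth0 mtu 9000   # set the MTU on the running interface
ip link show dev eth0                # verify the new setting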

If you’re operating in a cloud environment you may need to repeat this exercise from time to time as switches are changed and upgraded at your cloud provider.

Hope you find this useful.


The Right Way to reboot a host with Ansible

For a long time rebooting a host with Ansible has been tricky. The steps are:

  • ssh to the host
  • Reboot the host
  • Disconnect before the host closes your ssh connection
  • Wait some number of seconds to ensure the host has really shut down
  • Attempt to ssh to the host and execute a command
  • Repeat ssh attempt until it works or you give up

Seems clear enough, but if you Google for an answer you may end up at this StackExchange page, which gives lots of not-quite-correct answers from 2015 (and one correct answer). Some people suggest checking port 22, but just because ssh is listening doesn’t mean that it’s in a state where it’s accepting connections.

The correct answer is to use Ansible version 2.7 or greater. Version 2.7 introduced the reboot module, and now all you have to do is add this to your list of handlers:

- name: Reboot host and wait for it to restart
  reboot:
    msg: "Reboot initiated by Ansible"
    connect_timeout: 5
    reboot_timeout: 600
    pre_reboot_delay: 0
    post_reboot_delay: 30
    test_command: whoami

This handler will:

  • Reboot the host
  • Wait 30 seconds
  • Attempt to connect via ssh and run whoami
  • Disconnect after 5 seconds if ssh isn’t responding
  • Keep attempting to connect for 10 minutes (600 seconds)

Add the directive:

  notify: Reboot host and wait for it to restart

… to any Ansible command that requires a reboot after a change. The host will be rebooted when the playbook finishes, then Ansible will wait until the host is back up and ssh is working before continuing on to the next playbook.
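
For example, a task that triggers the reboot might look like this (the kernel-parameter change is a hypothetical placeholder, and a real playbook would also regenerate the GRUB config; note that the handler only fires if the task reports a change):

- name: Add a kernel boot parameter (placeholder change that needs a reboot)
  lineinfile:
    path: /etc/default/grub
    regexp: '^GRUB_CMDLINE_LINUX='
    line: 'GRUB_CMDLINE_LINUX="intel_iommu=on"'
  notify: Reboot host and wait for it to restart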

If you need to reboot halfway through a playbook you can force all handlers to execute with this task:

- name: Reboot if necessary
  meta: flush_handlers

I sometimes do that to change something, force a reboot, then verify that the change worked, all within the same playbook.

Hope you found this useful.
