
How to get the IP address of a KVM/virsh VM

Since virsh domifaddr doesn’t work to get the IP addresses of VMs on a bridged network, I wrote a get-vm-ip script (which you can download from Github) that uses the following commands to get the IP of a running VM:

HOSTNAME=[your vm name]
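# "virsh domiflist" lists the VM's interfaces; column 5 holds the MAC address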
MAC=$(virsh domiflist $HOSTNAME | awk '{ print $5 }' | tail -2 | head -1)
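# Find that MAC in the host's ARP cache and strip the parentheses from the IP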
arp -a | grep $MAC | awk '{ print $2 }' | sed 's/[()]//g'

The virsh command gets the VM’s MAC address; the last line finds the IP address for that MAC using arp.

Hope you find this useful.


Use .iso and Kickstart files to automatically create VMs

I was looking for a way to automate the creation of VMs for testing various distributed system / cluster software packages. I’ve used Vagrant in the past but I wanted something that would:

  • Allow me to use raw ISO files as the basis for guest VMs.
  • Guest VMs should be set up with bridged IPs that are routable from the host.
  • Guest VMs should be able to reach the Internet.
  • Other hosts on the local network should be able to reach guest VMs. (Setting up additional routes is OK).
  • VM creation should work with any distro that supports Kickstart files.
  • Scripts should be able to create and delete VMs in a scripted, fully-automatic manner.
  • Guest VMs should be set up to allow passwordless ssh access from the “ansible” user.

I’ve previously used virsh and the virt-install tool to create VMs and I like how easy it is to set up things like extra network interfaces and attach existing disk images. The scripts in this repo fully automate the virsh VM creation process.

Scripts

I put all of my code into a Github repo containing these scripts:

  • create-vm – Use .iso and kickstart files to auto-generate a VM.
  • delete-vm – Delete a virtual machine created with create-vm.
  • get-vm-ip – Get the IP address of a VM managed by virsh.
  • encrypt-pw – Returns a SHA512 encrypted password suitable for pasting into Kickstart files.

I’ve also included a sample ubuntu.ks Kickstart file for creating an Ubuntu host.

Host setup

I’m running the scripts from a host with Ubuntu Linux 18.10 installed. I added the following to the host’s Ansible playbook to install the necessary virtualization packages:

  - name: Install virtualization packages
    apt:
      name: "{{item}}"
      state: latest
    with_items:
      - qemu-kvm
      - libvirt-bin
      - libvirt-clients
      - libvirt-daemon
      - libvirt-daemon-driver-storage-zfs
      - python-libvirt
      - python3-libvirt
      - system-config-kickstart
      - vagrant-libvirt
      - vagrant-sshfs
      - virt-manager
      - virtinst

If you’re not using Ansible just apt-get install the above packages.
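For example, as a single command (same package list as the playbook above):

sudo apt-get install qemu-kvm libvirt-bin libvirt-clients libvirt-daemon \
    libvirt-daemon-driver-storage-zfs python-libvirt python3-libvirt \
    system-config-kickstart vagrant-libvirt vagrant-sshfs virt-manager virtinst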

Sample Kickstart file

There are plenty of documents on the Internet on how to set up Kickstart files.

A couple of things are special about the included Kickstart file:

The Ansible user: Although I’d prefer to create the “ansible” user as a locked account with no password, just an ssh public key, Kickstart on Ubuntu does not allow this, so I do set up an encrypted password.

To set up your own password, use the encrypt-pw script to create a SHA512-hashed password that you can copy and paste into the Kickstart file. After a VM is created you can use this password if you need to log into the VM via the console.
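If you would rather not use the script, the same style of SHA512 crypt hash can be generated with mkpasswd from the whois package (this is an alternative I’m suggesting, assuming whois is available; encrypt-pw is the supported path):

sudo apt-get install whois   # provides mkpasswd
mkpasswd --method=sha-512    # prompts for the password, prints the $6$... hash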

To use your own ssh key, replace the ssh key in the %post section with your own public key.

The %post section at the bottom of the Kickstart file does a couple of things:

  • It updates all packages with the latest versions.
  • To configure a VM with Ansible, you just need ssh access to the VM and Python installed on the VM, so I use %post to install an ssh server and Python.
  • I start the serial console, so that virsh console $vmname works.
  • I add a public key for Ansible, so I can configure the servers with Ansible without entering a password.

Despite the name, the commands in the %post section are not the last commands executed by Kickstart on an Ubuntu 18.10 server. The “ansible” user is added after the %post commands are executed. This means that the Ansible ssh public key gets added before the ansible user is created.

To make key-based logins work I set the UID:GID of authorized_keys to 1000:1000. The user is later created with UID=1000, GID=1000, which means that the authorized_keys file ends up being owned by the ansible user by the time the VM creation is complete.

Create an Ubuntu 18.10 server

This creates a VM using Ubuntu’s text-based installer. Since the `-d` parameter is used, progress of the install is shown on screen.

create-vm -n node1 \
    -i ~/isos/ubuntu-18.10-server-amd64.iso \
    -k ~/conf/ubuntu.ks \
    -d

Create 8 Ubuntu 18.10 servers

This starts the VM creation process and exits. Creation of the VMs continues in the background.

for n in `seq 1 8`; do
    create-vm -n node$n \
        -i ~/isos/ubuntu-18.10-server-amd64.iso \
        -k ~/conf/ubuntu.ks
done

Delete 8 virtual machines

for n in `seq 1 8`; do
    delete-vm node$n
done

Connect to a VM via the console

virsh console node1

Connect to a VM via ssh

ssh ansible@`get-vm-ip node1`

Generate an Ansible hosts file

(
    echo '[hosts]'
    for n in `seq 1 8`; do
        ip=`get-vm-ip node$n`
        echo "node$n ansible_ip=$ip ansible_user=ansible"
    done
) > hosts
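To smoke-test the new VMs with Ansible once the hosts file exists, something like this should work (note that Ansible connects using the inventory name unless ansible_host is set, so this assumes the node names resolve on your network, or that you substitute ansible_host=$ip for ansible_ip=$ip above):

ansible all -i hosts -m ping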

Handy virsh commands

  • virsh list – List all running VMs.
  • virsh domifaddr node1 – Get a node’s IP address. Does not work with all network setups, which is why I wrote the get-vm-ip script.
  • virsh net-list – Show what networks were created by virsh.
  • virsh net-dhcp-leases $network – Shows current DHCP leases when virsh is acting as the DHCP server. Leases may be shown for machines that no longer exist.

Known Issues

  • VMs created without the -d (debug mode) parameter may be created in “stopped” mode. To start them up, run the command virsh start $vmname (or use the loop shown after this list to start several at once).
  • Depending on how your host is set up, you may need to run these scripts as root.
  • Ubuntu text mode install messes up terminal screens. Run reset from the command line to restore a terminal’s functionality.
  • I use Ansible to set a guest’s hostname, not Kickstart, so all Ubuntu guests created have the host name “ubuntu”.

Hope you find this useful.

create-vm script

Download create-vm from Github

#!/bin/bash

# create-vm - Use .iso and kickstart files to auto-generate a VM.

# Copyright 2018 Earl C. Ruby III
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

HOSTNAME=
ISO_FQN=
KS_FQN=
RAM=1024
VCPUS=2
STORAGE=20
BRIDGE=virbr0
MAC="RANDOM"
VERBOSE=
DEBUG=
VM_IMAGE_DIR=/var/lib/libvirt

usage()
{
cat << EOF
usage: $0 options

This script will take an .iso file created by revisor and generate a VM from it.

OPTIONS:
-h Show this message
-n Host name (required)
-i Full path and name of the .iso file to use (required)
-k Full path and name of the Kickstart file to use (required)
-r RAM in MB (defaults to ${RAM})
-c Number of VCPUs (defaults to ${VCPUS})
-s Amount of storage to allocate in GB (defaults to ${STORAGE})
-b Bridge interface to use (defaults to ${BRIDGE})
-m MAC address to use (default is to use a randomly-generated MAC)
-v Verbose
-d Debug mode
EOF
}

while getopts "hn:i:k:r:c:s:b:m:vd" option; do
    case "${option}" in
        h)
            usage
            exit 0
            ;;
        n) HOSTNAME=${OPTARG};;
        i) ISO_FQN=${OPTARG};;
        k) KS_FQN=${OPTARG};;
        r) RAM=${OPTARG};;
        c) VCPUS=${OPTARG};;
        s) STORAGE=${OPTARG};;
        b) BRIDGE=${OPTARG};;
        m) MAC=${OPTARG};;
        v) VERBOSE=1;;
        d) DEBUG=1;;
    esac
done

if [[ -z $HOSTNAME ]]; then
    echo "ERROR: Host name is required"
    usage
    exit 1
fi

if [[ -z $ISO_FQN ]]; then
    echo "ERROR: ISO file name or http url is required"
    usage
    exit 1
fi

if [[ -z $KS_FQN ]]; then
    echo "ERROR: Kickstart file name or http url is required"
    usage
    exit 1
fi

if ! [[ -f $ISO_FQN ]]; then
    echo "ERROR: $ISO_FQN file not found"
    usage
    exit 1
fi

if ! [[ -f $KS_FQN ]]; then
    echo "ERROR: $KS_FQN file not found"
    usage
    exit 1
fi
KS_FILE=$(basename "$KS_FQN")

if [[ ! -z $VERBOSE ]]; then
    echo "Building ${HOSTNAME} using MAC ${MAC} on ${BRIDGE}"
    echo "======================= $KS_FQN ======================="
    cat "$KS_FQN"
    echo "=============================================="
    set -xv
fi

mkdir -p $VM_IMAGE_DIR/{images,xml}

virt-install \
    --connect=qemu:///system \
    --name="${HOSTNAME}" \
    --bridge="${BRIDGE}" \
    --mac="${MAC}" \
    --disk="${VM_IMAGE_DIR}/images/${HOSTNAME}.img,bus=virtio,size=${STORAGE}" \
    --ram="${RAM}" \
    --vcpus="${VCPUS}" \
    --autostart \
    --hvm \
    --arch x86_64 \
    --accelerate \
    --check-cpu \
    --os-type=linux \
    --force \
    --watchdog=default \
    --extra-args="ks=file:/${KS_FILE} console=tty0 console=ttyS0,115200n8 serial" \
    --initrd-inject="${KS_FQN}" \
    --graphics=none \
    --noautoconsole \
    --debug \
    --location="${ISO_FQN}"
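# Note: virt-install (with --noautoconsole) has already kicked off the install
# at this point. The "virsh start" below, run after the console session ends,
# covers the case where the VM powered itself off when the install completed.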

if [[ ! -z $DEBUG ]]; then
    # Connect to the console and watch the install
    virsh console "${HOSTNAME}"
    virsh start "${HOSTNAME}"
fi

# Make a backup of the VM's XML definition file
virsh dumpxml "${HOSTNAME}" > "${VM_IMAGE_DIR}/xml/${HOSTNAME}.xml"

if [ ! -z $VERBOSE ]; then
    set +xv
fi

ubuntu.ks Kickstart file

Download ubuntu.ks on Github.

# System language
lang en_US
# Language modules to install
langsupport en_US
# System keyboard
keyboard us
# System mouse
mouse
# System timezone
timezone --utc Etc/UTC
# Root password
rootpw --disabled
# Initial user
user ansible --fullname "ansible" --iscrypted --password $6$CfjrLvwGbzSPGq49$t./6zxk9D16P6J/nq2eBVWQ74aGgzKDrQ9LdbTfVA0IrHTQ7rQ8iq61JTE66cUjdIPWY3fN7lGyR4LzrGwnNP.
# Reboot after installation
reboot
# Use text mode install
text
# Install OS instead of upgrade
install
# Use CDROM installation media
cdrom
# System bootloader configuration
bootloader --location=mbr 
# Clear the Master Boot Record
zerombr yes
# Partition clearing information
clearpart --all 
# Disk partitioning information
part / --fstype ext4 --size 3700 --grow
part swap --size 200 
# System authorization information
auth  --useshadow  --enablemd5 
# Firewall configuration
firewall --enabled --ssh 
# Do not configure the X Window System
skipx
%post --interpreter=/bin/bash
echo "### Redirect output to console"
exec < /dev/tty6 > /dev/tty6
chvt 6
echo "### Update all packages"
apt-get update
apt-get -y upgrade
# Install packages
apt-get install -y openssh-server vim python
echo "### Enable serial console so virsh can connect to the console"
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service
echo "### Add public ssh key for Ansible"
mkdir -m0700 -p /home/ansible/.ssh
cat <<EOF >/home/ansible/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC14kgOfzuOA4+hD16JFTAtXWc0UkzvXw3TrPxivh+/t86FH1N1qlXLLZn60voysLHnyiwTXrC8Zy0H8rckRopepozMBRegeklnkLHaiFDBdbAkHm8DMOd5QuGedBZ+s80+05btzOVxZcN5M6m4A03vLGGoOqnJvewv1u/yP6et0hP2Bs6D9ycczOpXeOgKnUXt1rciVYTk9xwOXFWcZ5phnXSCGA1w0BACK2CZaCKmsAT5YR7PA4N+7xVMvhOgo3gt8dNxNtmEtkZSlYYqkJehuldt1IPpfQ5/QYngYKX1ZCKS2LHc9Ys3F8QX3djhOhqFL+kcDMrdTaT5GlAFcgqebIao5mTYpiqc72YbvbCMWRVomhE0TWgliIftR/65NzOf8N4b2fE/hakLkIsGyR7TQiNmAHgagqBX/qdBJ7QJdNN7GH2aGP/Ni7b9xX2jsWXRj8QcSee+IDgfm2k/uKGvI6+RotRVx/EjwOGVUeGp8txP8l4T0AmYsgiL1Phe5swAPaMj4R+m38dwzr2WF/PViI1upF/Weczoiu0dDODLsijdHBAIju9BEDBzcFbDPoLLKHOSMusy86CVGNSEaDZUYwZ2GHY6anfEaRzTJtqRKNvsGiSRJAeKIQrZ16e9QPihcQQQNM+Z9QW7Ppaum8f2QlFMit03UYMplw0EpNPAWQ== ansible@host
EOF
echo "### Set permissions for Ansible directory and key. Since the 'Initial user'"
echo "### is added *after* %post commands are executed, I use the UID:GID"
echo "### as a hack since I know that the first user added will be 1000:1000."
chown -R 1000:1000 /home/ansible
chmod 0600 /home/ansible/.ssh/authorized_keys
echo "### Change back to terminal 1"
chvt 1

Redirect mail links to GMail on Ubuntu 18.04 using Desktop Webmail

My Ubuntu 18.04 box has Thunderbird installed as the default mail client. I was a Thunderbird user for years, but I currently spend most of my time using GMail, and when I click on a email mailto: link on a web page Ubuntu will load Thunderbird.

The documented fix is to go to Settings > Details > Default Applications and pick a different mail client. However, I don’t want a mail client at all, I want mail links to go to my default browser (Firefox, on this machine), load GMail, and open a new email addressed “to” the name in the link.

The documented fix for that issue is to install the gnome-gmail package, but I don’t always use Gnome, so I installed Desktop Webmail instead.

If you want to try it, these are the steps:

  • Fire up Synaptic Package Manager
  • Install the desktop-webmail package (or install it from the command line, as shown below)
  • Go to Settings > Details > Default Applications and pick Desktop Webmail as your default mail client.
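If you’d rather skip Synaptic, installing the package from a terminal works just as well:

sudo apt install desktop-webmail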

The next time you click a mailto: link Desktop Webmail will ask you what web mail service you want to use. Desktop Webmail currently supports Gmail, Hotmail, Yahoo and Zoho. Select Gmail and it’ll pop up a new email message in GMail in your preferred browser, with the “to” address set from the mailto: link.

Hope you found this useful.



Removing mount volumes from your desktop in Ubuntu 17.10

I just upgraded from Ubuntu 17.04 to 17.10 and one of the first things I noticed was all of the disk volumes that are mounted under my home directory appeared on my desktop. In Ubuntu 17.10, all volumes that are mounted under /home or /media appear on your desktop, and none of the switches in the Settings tool will make them go away.

The names of the folders aren’t even useful. They’re names like 10GB Volume and 20GB Volume. If you have two volumes the same size they’ll both have the same useless name. No hint of where the volume is mounted appears.

I have files, documents, databases, and email going back 20 years, much of it archival data that I want to be able to search but which never gets updated, so I keep these archive directories on separate read-only logical volumes. If my home directory’s file system gets corrupted beyond repair, the archives will still be intact. Since the volumes are read-only a misbehaving program or command-line oops won’t destroy the data.

But I don’t want to see them all over my desktop.

Tweak tool to the rescue! Install the tool and run it:

sudo apt install gnome-tweak-tool
gnome-tweak-tool

Then:

Desktop > Mounted Volumes > Off

No more volume icons on the desktop!
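The same switch can also be flipped from a terminal with gsettings. This is an assumption on my part that Ubuntu 17.10’s Nautilus-drawn desktop exposes the key below; gnome-tweak-tool is the route I actually used:

gsettings set org.gnome.nautilus.desktop volumes-visible false   # schema/key name is an assumption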

gnome-tweak-tool has other useful settings that are absent from the Settings tool, such as giving you the ability to move the window buttons to the upper left side of your windows.

Want to make the icons on your desktop smaller? Open up the File Manager, browse to Desktop, and select the icon size you want by moving the slider bar. The size of the icons on your Desktop and the size in the File Manager’s Desktop folder both use the same setting.

Hope you find this useful.



Turn off Apple Wallet / Apple Pay notifications and nag screens

The latest version of iOS for iPhone, iPad, and Apple Watch devices really, really wants you to set up Apple Wallet / Apple Pay. Your devices want you to embed Apple in the middle of every purchase, and they’ll pester you every time you pick up your device with full-screen dialogs insisting that you need to finish setting up Apple Wallet. Right. Now.

I try to limit the number of things that can access my money, and I have no reason or desire to set up Apple Wallet, so I turned the notifications off. They are embedded in three separate places on the iPhone.

Go to Settings > Notifications > Wallet, turn Allow Notifications off.

Go to Settings > Wallet & Apple Pay, turn Double-Click Home Button off.

Go to Settings > Safari, scroll to the bottom, turn Check for Apple Pay off.

After you update the three settings above, you will still see a notice in Settings stating Finish Setting Up Your iPhone.

    • Click Finish Setting Up Your iPhone.
    • It’ll prompt you to Set Up Apple Pay.
    • Click Set Up Apple Pay, then exit without doing anything.

No more nag screens!

Hope you found this useful.

Article updated on 2018-07-10



Running a 6-node CockroachDB cluster with AWS EFS storage

CockroachDB is a new distributed database which, like its namesake, is really hard to kill.

CockroachDB implements SQL DML commands for creating schemas, tables, and indexes using the same syntax as PostgreSQL, and it supports the PostgreSQL wire protocol, meaning that any PostgreSQL database driver or client can be used to connect to a CockroachDB database. If you’re currently using PostgreSQL and you want an easier scale-out, highly-available way to deploy a database, you should take a look at CockroachDB. In many cases you can just repoint your application at a CockroachDB server and your application will run the same as it did using PostgreSQL.
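To illustrate the wire-protocol compatibility, a stock psql client can talk to a CockroachDB node directly. The host name and database name below are placeholders; 26257 is CockroachDB’s default SQL port and sslmode=disable matches an insecure-mode cluster:

psql "postgresql://root@cockroach-host:26257/mydb?sslmode=disable" -c 'SELECT 1;'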

The first day I tried using CockroachDB I got a six-node system up and running using CockroachDB’s Docker image on my Apcera cluster using AWS EFS as a backing store in less than an hour. This is what I did to get it working.

Set up an NFS provider for EFS

I already had an Apcera cluster for deploying Docker images running on AWS. This is the same cluster I used for my article on Mounting AWS EFS volumes inside Docker Containers. In fact, I set up the EFS provider using the same steps:

Set up the EFS volume using the AWS console.

Create an NFS provider that targets the EFS volume.

apc provider register apcfs-ha --type nfs \
    --url "nfs://10.0.0.112/" \
    --description 'Amazon EFS' \
    --batch \
    -- --version 4.1

Create a namespace and a private network

Create a namespace and a private network named “roachnet”.

apc namespace /sandbox/cockroach
apc network create roachnet

“roachnet” is a private VxLAN created by the Apcera platform that only containers that I’ve joined to the network can see.

Create the first CockroachDB node

Next I create a container instance called “roach1” from the latest Docker image, open ports 8080 and 26257, tell it to use the EFS provider for storage, and to advertise itself to other CockroachDB nodes so they can find it and join the DB cluster.

apc docker run roach1 --image cockroachdb/cockroach:v1.1.2 --port 8080 --port 26257 --provider /apcera/providers::apcfs-ha --start-cmd "/cockroach/cockroach.sh start --insecure --advertise-host roach1.apcera.local"
apc network join roachnet --job roach1 --discovery-address roach1
apc app start roach1
apc route add http://cockroach.earlruby.apcera-platform.io --https-only --app roach1 --port 8080 --batch

Create 5 more nodes

Create 5 more nodes and add them to roachnet:

for x in `seq 2 6`; do
    apc docker run roach$x --image cockroachdb/cockroach:v1.1.2 --port 8080 --port 26257 --provider  /apcera/providers::apcfs-ha --start-cmd "/cockroach/cockroach.sh start --insecure --join roach1.apcera.local:26257"
    apc network join roachnet --job roach$x --discovery-address roach$x
    apc app start roach$x
    sleep 3
done

I added the “sleep 3” command because when I originally tested this (on CockroachDB 1.1.0) the platform started the containers so fast that the DB got confused and didn’t add all of them to the cluster. All nodes started, but only some joined the cluster. After I added the delay all nodes joined the cluster.

Verify that the containers are all running.
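A quick way to check is to list the jobs with the apc CLI (the exact output format varies by release); all six roach jobs should show as running:

apc app list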

After that the cluster was up and running. I could connect to the database, create schemas, create tables, add, update, and delete records. I’m pretty happy with the initial results. Next step is automatically generating secure certificates so I’m not operating in insecure mode, then I’m going to run actual applications against the cluster.

Hope you found this useful.

[Screenshots: CockroachDB overview, storage, and queues screens]



Quickly get IP addresses of new VMs

I spin up a lot of VMs using VMware Fusion. I generally keep “clean” generic copies of a few different distros and versions of Linux servers ready to go with my login, an sshd server, ssh keys, and basic settings that I use already set up. When I need to quickly test something manually — usually some new, multi-VM distributed container orchestration or database system — I just make as many copies of the server’s *.vmwarevm file as I need, fire up the VM copies on my laptop, test whatever I need to test, then shut them down. Eventually I delete the copies and recover the disk space.

Depending on where my laptop is running I’ll get a completely random IP address for the VM from the local DHCP server. I would log into the consoles, get the IPs, then log into the various VMs from a terminal. (Cut and paste just works a whole lot better on a terminal than on the VMware console.)

However, since the console screens are up, and I repeat this pattern several times a week, I figured why not save a step and make the ephemeral VMs just show me their IP address on their consoles without having to login, so I added an “on reboot” file called /etc/cron.d/welcome on the master image which updates the /etc/issue file.

/etc/cron.d/welcome looks like this:

@reboot root (/bin/hostname; /bin/uname -a; echo; if [ -x /sbin/ip ]; then /sbin/ip addr; else /sbin/ifconfig; fi) > /etc/issue

When a new VM boots, it writes the hostname, kernel info, and the ethernet config to the /etc/issue file. /etc/issue is displayed on the screen before the login prompt, so now I can just glance at the console, see the IP address, and ssh to the new VM.

[Screenshot: an ephemeral VM’s console displaying its hostname and IP address]

Although you’d never want to do this on a production system, it works great for ephemeral, throw-away test VMs.

Hope you find this useful.



Mounting AWS EFS volumes inside Docker Containers

Amazon announced the development of the Amazon Elastic File System (AWS EFS) in 2015. EFS was designed to provide multiple EC2 instances with shared, low-latency access to a fully-managed file system. On June 28, 2016 Amazon announced that EFS is now available for production use in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) Regions.

Apcera’s NFS Service Gateway can be used to access AWS EFS storage volumes within containers. You can use EFS to provide persistent storage to your containers running on AWS-hosted clouds in regions where EFS is available.

Gathering information

Before you begin you will need to know:

  • The name of the AWS Region where your Apcera Platform is running
  • The name/ID of the AWS VPC where your Apcera Platform is running
  • The name/ID of the AWS security group for your Apcera Platform

Setting up an EFS volume

  1. Log into your AWS console.
  2. Select the name of the AWS Region where your Apcera Platform is running on the upper right side of the screen.
  3. Select Elastic File System.
  4. Click Create File System.
  5. Configure the file system access:
    • Select the name of the VPC.
    • The availability zone and subnet should be selected for you automatically.
    • If your VPC has more than one subnet (unusual) then select the subnet containing the Instance Managers that will be connecting to the EFS volume.
    • Leave IP address set to Automatic.
    • The first EFS volume you create will create a new security group. Use that security group for this and all future EFS volumes. Write down the name of the new EFS security group – we’ll configure it in the next few steps.
    • Click Next Step
  6. Configure optional settings:
    • Set the name of the EFS volume.
    • Choose the performance mode.
    • Click Next Step
  7. Review and create:
  8. If everything looks OK, click Create File System.
  9. You should see a “Success!” message and a new EFS volume with “Life Cycle State” = “Creating”.
  10. Write down the IP address of the EFS volume.

Update the EFS security group

  • Go back to the main console menu and select EC2.
  • Click Security Groups in the left hand nav menu.
  • Type the name of the new EFS security group into the search filter list.
  • On the bottom half of the screen delete the default inbound and outbound rules.
  • Add one inbound rule to allow all TCP traffic on port 2049 from the source “name/ID of the AWS security group for your Apcera Platform”
  • Add one outbound rule to allow all TCP traffic on port 2049 to the destination “name/ID of the AWS security group for your Apcera Platform”
  • This allows all VMs within your Apcera Platform security group to connect to your EFS volume on port 2049 (NFS).
  • No other traffic from any other source or to any other destination is allowed.

Create an NFS Provider for the EFS volume

We’re going to create a single provider for the EFS volume. Each time you have a container or set of containers that need a persistent file system, just create a new service from the same provider. Each new service will carve out a new namespace on the EFS volume, keeping the files associated with that service separate from the files in all other services that use the same provider.

According to the EFS FAQ, when you create a file system you create endpoints in your VPC called “mount targets.” Each mount target provides an IP address and a DNS name, and you use this IP address or DNS name in your mount command. Only resources that can access a mount target can access your file system. Since the Apcera Platform isn’t using Amazon DNS services internally, we’ll use the IP address to connect to the EFS volume.

To create the provider, you need to construct a URL describing the volume. In this case, we’ll use the internal IP address of the EFS volume as the hostname and / as the exported volume name. All EFS volumes use the NFS v4.1 protocol. If the IP address of the EFS volume is 10.0.0.112 we’d construct a provider using:

apc provider register awsefs --type nfs \
    --url "nfs://10.0.0.112/" \
    --description 'Amazon EFS' \
    --batch \
    -- --version 4.1

Create a service from the provider:

apc service create efs-service-1 \
    --provider awsefs \
    --description 'Amazon EFS Service' \
    --batch

Create a capsule, bind the service to the capsule, and connect to the capsule:

apc capsule create efs-capsule1 --image linux -ae --batch
apc service bind efs-service-1 --job efs-capsule1 \
    --batch -- --mountpath /an/unlimited/supply
apc capsule connect efs-capsule1

Once connected, type df -k to see the mounted file system.

You can bind this service to any container that needs a shared, persistent file system. Each time you need a new shared, persistent file system for a container or group of containers just create a new service using the same provider and bind the service to your job or jobs.

Persistence for Docker

Now that we have a provider that can carve out EFS storage for containers, let’s try spinning up some Docker images.

On the Apcera Platform, if the specification for a Docker image (Dockerfile) specifies that the app requires persisted volumes, you must do one of the following when creating the job:

  • Include the --provider flag when you create or run the Docker job. You must include this flag if you include the --volume flag when creating or running the Docker job.
  • Include the --ignore-volumes flag when you create or run the Docker job.

Here is an example of running NGINX inside a Docker container on the Apcera platform, where the content for the site is stored on an EFS volume:

I’m using the Apcera “apc” command-line tool to build the container, pulling the nginx image directly off hub.docker.com, telling it to use the awsefs EFS volume provider I created earlier for persistence, and to mount the EFS volume at the mount point “/usr/share/nginx/html”.
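Roughly, based on the CockroachDB example and the --provider/--volume flags described above, the command looks something like this (the job name nginx1 and the exact --volume syntax are my assumptions):

apc docker run nginx1 --image nginx --port 80 --port 443 \
    --provider /apcera/providers::awsefs \
    --volume /usr/share/nginx/html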

Now connect to the container:

/proc/mounts contains a list of all of the container’s mount points. I can verify that the container does indeed have an EFS volume by grepping /proc/mounts for the mount point:
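In plain shell that check is just:

grep /usr/share/nginx/html /proc/mounts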

Grepping for “/usr/share/nginx/html” shows the IP address 10.0.0.112, which is the IP of the EFS volume; the long directory name after it is the unique namespace for the service; the mount point is “/usr/share/nginx/html”; and the mount type is “nfs4”.

There is no content in the directory, so I add some by echoing some HTML code to an index.html file. My container will proclaim to the world “NGINX in a Docker container on Apcera with content stored on EFS” in an H3 typeface!
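Inside the container, that’s a one-liner (the heading text is the one quoted above):

echo '<h3>NGINX in a Docker container on Apcera with content stored on EFS</h3>' > /usr/share/nginx/html/index.html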

Now that I have some content I need to add a route to the content. Right now the NGINX container is running, and listening on ports 80 and 443, but it’s completely isolated from the outside world — no one can connect to those ports unless there’s a route (a URL) set up.

My cluster is running on the domain earlruby.apcera-platform.io, so I add a route like so:
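Mirroring the route command from the CockroachDB section above (again, the job name nginx1 is an assumption):

apc route add http://nginx.earlruby.apcera-platform.io --app nginx1 --port 80 --batch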

I have successfully added the http route http://nginx.earlruby.apcera-platform.io/ to my NGINX container. This is a real public DNS entry. To verify that it works I point my browser at the route I just added:

Success!

Such an amazing app is bound to go viral, and a single NGINX container may not be able to keep up with the load. I want to ensure that my app can keep up and remain highly-available, and that it keeps running even if one or more VMs in my cluster get killed off, so I add more NGINX containers:

Now I’ve got 20 containers running my NGINX app, all serving up the same content, running on multiple VMs across my cluster, all load-balanced under the single URL http://nginx.earlruby.apcera-platform.io/. If any container gets killed off, the Apcera platform will spin up a new one. If any VM in the cluster dies, any containers running on it will automatically be migrated to new hosts. If I want to scale up the app to 100 or 1000 containers, or back down to 1, it’s a one-line command to make the change.

In terms of resources, I’m using slightly less than 45 MiB to run those 20 containers. That’s not a typo — 45MiB! Containers are much more efficient users of RAM than VMs.

I hope you find this useful.

This article originally appeared as an Apcera blog post on July 21, 2016.



5 User Engagement Problems Twitter Should Fix this Week

Twitter’s mobile app needs help.

Of all of the social networks, engagement on Twitter is dismally low. Even the people who like the app don’t spend nearly as much time on Twitter as they do on other social media. There are some obvious problems with the app that Twitter could fix, but they don’t.

[Image: iPhone home screen with badged app icons. Which one will you click?]

Until a few months ago, Twitter’s iPhone app didn’t support badge notifications. A badge is the small red number that appears on the app’s icon letting you know that you have Notifications. Twitter’s iPhone app didn’t have them. You could look at your phone’s screen and see that Facebook, LinkedIn, MeetUp, and NextDoor had messages waiting for you, but not Twitter. A glance at your screen and the small red numbers taunt you – Check FaceBook! Check your email! Check Messages! Badges are a simple way to get you to start up that app and engage.

1. Fix Notifications

With a recent update, Twitter finally added badge notifications. Only problem is, they don’t actually work. The badge will appear with a “2” on it, I’ll start the Twitter app, and the “Notifications” icon indicates that there’s something new. I click it, and there are no updates. I can check the “Me” link and see that I have 2 more followers, but they’re not listed on the Notifications screen. If I log into the TweetDeck web app I can see who the new followers are, but the iPhone mobile app pretends they don’t exist.

2. Make it easy to engage with friends

Ever have a conversation with a friend on Twitter? It’s next to impossible to follow replies, comments, or have any sort of conversation using the tools they provide on their mobile app. If Twitter wants to increase user engagement, they should get rid of the Messages tab and make it a “Mentions” tab that shows private messages and allows for threaded conversations. Alternately they could add a “swipe left” feature or even a “view replies” icon to view the replies to a message, in threaded order. Make it possible for people to have a conversation about the things that they’re posting, and they’ll stay engaged for longer.

3. Show me what posts are trending

The Home button shows me the latest messages from everyone I’m following in order by time posted. What if I want to see the posts with the most retweets? Or the most hearts? Or the most replies? You know, the messages from the people I’m following that are the most interesting/funny/relevant? Get rid of the “Moments” section and give me a “Trending” section that shows the items from the people I follow with the most retweets, likes, and replies. I guarantee I’ll spend more time in that section than I do looking at “Moments”.

4. Load more items in the “Home” feed

I use an iPhone with “service” from AT&T. I also ride BART, which means that I spend about half my commute with no data service. (Newsflash to AT&T: People on trains spend most of their time on their phones. If you cared about your customers you’d send a tech to ride a train with a signal strength meter a couple of times a year and fix the dead zones.)

Since mobile data service is spotty, you’d think the Twitter app would start downloading items for my “Home” feed whenever I have a signal, so I’d never run out of items to read. Unfortunately that’s not the case, and I routinely hit the end of the list of things on “Home” to read just as BART enters another AT&T dead zone. I sit there watching the spinner for a few seconds, then quit Twitter and load another app – one that was smart enough to download content in the background so it’s ready for me to view.

5. Cache some damn profile pictures

I only follow 361 people on Twitter. Each one of them has a small profile picture that rarely gets updated, so why does Twitter download and render the same identical profile photos every time I open the app? I’ll be scrolling along, and I can see it download and render the photos one by one. If I’m in an AT&T dead zone, I’ll just see a bunch of empty boxes instead of profile pictures in my feed. How hard could it be to cache a copy of the photos on my phone? The app can always check for new photos and update them if one is available, so why is it downloading them every time I open the app? If I don’t have a connection at the moment it’s OK to show me someone’s 24-hour-old profile picture – it’s better than showing me an empty box.

That’s it. 5 simple things that Twitter engineers could fix this week to increase the amount of time people spend using their mobile app.


Full disclosure: I own shares of Twitter stock. If someone at Twitter fixed these problems I might be making less of a loss on those shares. In addition, the Twitter stockholders meeting is this week. I won’t be there, but if you are feel free to share this article with the people in attendance.



Policy-based Cloud Storage

This is a talk I gave last week at the SF Microservices Meetup titled Policy-based Cloud Storage, Persisting Data in a Multi-Site, Multi-Cloud World. In it I cover Apcera’s approach to storage for containers and how to use policy to manage very large scale application deployments.
