
Setting up NFS FSID for multiple networks

The official documentation for creating an NFS /etc/exports file for multiple networks and FSIDs is unclear and confusing. Here’s what you need to know.

If you need to specify multiple subnets that are allowed to mount a volume, you can either use separate lines in /etc/exports, like so:

/opt/dir1 192.168.1.0/24(rw,sync)
/opt/dir1 10.10.0.0/22(rw,sync)

Or you can list each subnet on a single line, repeating all of the mount options for each subnet, like so:

/opt/dir1 192.168.1.0/24(rw,sync) 10.10.0.0/22(rw,sync)

The two forms are equivalent: clients in the 192.168.1.0/24 and 10.10.0.0/22 subnets can mount the /opt/dir1 directory via NFS, and a client in any other subnet cannot.

When I set up NFS servers I like to assign each exported volume a unique FSID. If you don’t set an FSID, there is a chance that the way the server identifies a volume will change between reboots, and your NFS clients will hang with “stale file handle” errors. I say “a chance” because it depends on how your server stores volumes, which version of NFS it’s running, and, if it’s a fault-tolerant / high-availability server, how that HA capability was implemented. A unique FSID ensures that the server always identifies the volume the same way, which makes it easier for NFS clients to reconnect and resume operations after the NFS server is rebooted.

If you use FSID to assign a unique filesystem ID to each mount point, every export of a given volume must use the same FSID. It’s the “file system identifier”, so it has to be identical each time that filesystem is exported. If I want to identify /opt/dir1 as fsid=1, I have to include that declaration in the options every time the filesystem is exported. So, for the examples above:

/opt/dir1 192.168.1.0/24(rw,sync,fsid=1)
/opt/dir1 10.10.0.0/22(rw,sync,fsid=1)

Or:

/opt/dir1 192.168.1.0/24(rw,sync,fsid=1) 10.10.0.0/22(rw,sync,fsid=1)

If you use a different FSID for one of these entries, or if you declare the FSID for one subnet and not the other, your NFS server will slowly and mysteriously grind to a halt, sometimes over hours and sometimes over days.
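After editing /etc/exports, a quick sanity check is to reload the export table and look at the options the kernel actually applied. This is a minimal sketch; it assumes exportfs is available and that /opt/dir1 is the volume you care about:

exportfs -ra                 # re-read /etc/exports and re-export everything
exportfs -v | grep dir1      # every line for /opt/dir1 should show the same fsid=

If the entries for /opt/dir1 show different fsid values, or one shows no fsid at all, fix /etc/exports before any clients mount the volume.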

For NFSv4, there is the concept of a “distinguished filesystem”, which is the root of all exported filesystems. It is specified with fsid=root or fsid=0; the two mean exactly the same thing. Unless you have a good reason to create a distinguished filesystem, don’t use fsid=0; it just adds unnecessary complexity to your NFS setup.
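If you do need a distinguished filesystem, the layout looks something like this (paths and subnet here are illustrative only): the root export carries fsid=0 and the real volumes are exported beneath it, and NFSv4 clients then mount paths relative to that root (for example server:/dir1 rather than server:/export/dir1):

/export       192.168.1.0/24(rw,sync,fsid=0,crossmnt)
/export/dir1  192.168.1.0/24(rw,sync,fsid=1)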

Hope you find this useful.


Creating AWS Elastic Filesystems (EFS) with Terraform

The AWS Elastic Filesystem (EFS) gives you an NFSv4-mountable file system with almost unlimited storage capacity. The filesystem I just created while writing this article reports 9,007,199,254,739,968 bytes free. In human-readable form, df -kh reports 8.0E (exabytes) of available disk space. In the year 2019, that’s a lot of storage space.

In past articles I’ve shown how to create EFS resources manually, but this week I wanted to programmatically create EFS resources with Terraform so that I could easily create, test, and tear-down EFS and VM resources on AWS.

I also wanted to make sure that my EFS resources were secure, so that only VMs within my Virtual Private Cloud (VPC) could mount or otherwise access the EFS data and no one outside of my VPC could reach it.

Creating an EFS resource is easy. The Terraform code looks like this:

// efs.tf
resource "aws_efs_file_system" "efs-example" {
  creation_token   = "efs-example"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
  encrypted        = "true"

  tags = {
    Name = "EfsExample"
  }
}

This creates the EFS filesystem on AWS. EFS also requires a mount target, which gives your VMs a way to mount the EFS volume using NFS. The Terraform code to create a mount target looks like this:

// efs.tf (continued)
resource "aws_efs_mount_target" "efs-mt-example" {
  file_system_id  = "${aws_efs_file_system.efs-example.id}"
  subnet_id       = "${aws_subnet.subnet-efs.id}"
  security_groups = ["${aws_security_group.ingress-efs-test.id}"]
}

The file_system_id is automatically set to the efs-example resource’s ID, which ties the mount target to the EFS file system.

The subnet_id refers to subnet-efs, a separate /24 subnet I created in my VPC just for EFS. The ingress-efs-test security group is a separate security group I created for EFS. Let’s cover each of these separately.

A separate EFS subnet

First off, I’ve allocated a /16 CIDR block for my VPC, and I carve out individual /24 subnets from it for each cluster of VMs and/or EFS resources that I add to an AWS availability zone. Here’s how I’ve defined my test environment VPC and EFS subnet:

// network.tf
resource "aws_vpc" "test-env" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "test-env"
  }
}

resource "aws_subnet" "subnet-efs" {
  cidr_block        = "${cidrsubnet(aws_vpc.test-env.cidr_block, 8, 8)}"
  vpc_id            = "${aws_vpc.test-env.id}"
  availability_zone = "us-east-1a"
}

That will give me the subnet 10.0.8.0/24 for my EFS subnet.

If you want to understand how to use Terraform’s cidrsubnet function to carve out separate subnets, see the article Terraform `cidrsubnet` Deconstructed by Lisa Hagemann. Her article gives excellent examples of how to do just that.
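If you want to check the math before you apply anything, terraform console will evaluate cidrsubnet for you. Using the 10.0.0.0/16 VPC above, adding 8 bits of network prefix and selecting network number 8 produces the /24 used here:

$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 8)
10.0.8.0/24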

The EFS security group

Finally, I need a security group that only allows traffic between my test environment VMs and my test environment EFS volume. I already have a security group called ingress-test-env that controls security for my VMs. For EFS I create another security group that allows inbound traffic on port 2049 (the NFSv4 port) and allows egress traffic on any port.

Setting the ingress-efs-test resource’s security_groups attribute to ingress-test-env means that only VMs in the ingress-test-env security group can talk to the EFS volume. If you use security_groups like this, you really lock down the EFS volume, and you don’t need to set the cidr_blocks attribute at all.

// security.tf
resource "aws_security_group" "ingress-efs-test" {
  name   = "ingress-efs-test-sg"
  vpc_id = "${aws_vpc.test-env.id}"

  // NFS
  ingress {
    security_groups = ["${aws_security_group.ingress-test-env.id}"]
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
  }

  // Terraform removes the default rule
  egress {
    security_groups = ["${aws_security_group.ingress-test-env.id}"]
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
  }
}

After adding these Terraform files to my cluster configuration and running terraform apply, I end up with a new EFS filesystem that I can mount from any VM running in my VPC.
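One assumption here: the VM needs an NFSv4 client before the mount command below will work. On a stock Ubuntu image that’s one package install:

sudo apt-get install -y nfs-common   # provides mount.nfs4 on Debian/Ubuntu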

# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-31337er3.efs.us-east-1.amazonaws.com:/ /mnt/efs
# df -kh
Filesystem                                 Size  Used Avail Use% Mounted on
udev                                       481M     0  481M   0% /dev
tmpfs                                       99M  744K   98M   1% /run
/dev/xvda1                                 7.7G  3.0G  4.7G  40% /
tmpfs                                      492M     0  492M   0% /dev/shm
tmpfs                                      5.0M     0  5.0M   0% /run/lock
tmpfs                                      492M     0  492M   0% /sys/fs/cgroup
/dev/loop0                                  13M   13M     0 100% /snap/amazon-ssm-agent/150
/dev/loop1                                  87M   87M     0 100% /snap/core/4650
/dev/loop2                                  90M   90M     0 100% /snap/core/6130
/dev/loop3                                  18M   18M     0 100% /snap/amazon-ssm-agent/930
tmpfs                                       99M     0   99M   0% /run/user/1000
fs-31337er3.efs.us-east-1.amazonaws.com:/  8.0E     0  8.0E   0% /mnt/efs

Hope you found this useful.


Using Rook+Ceph for persistent storage on Kubernetes

I wanted to install Prometheus and Grafana on my new Kubernetes cluster, but in order for these packages to work they need someplace to store persistent data. I had run performance and scale tests on Ceph when I was working as a Cloud Architect at Seagate, and I’ve played with Rook during the past year, so I decided to install Rook+Ceph and use that for the Kubernetes cluster’s data storage.

Ceph is a distributed storage system that provides object, file, and block storage. On each storage node you’ll find a file system where Ceph stores objects and a Ceph OSD (object storage daemon) process. On a Ceph cluster you’ll also find Ceph MON (monitor) daemons, which ensure that the Ceph cluster remains highly available.

Rook acts as a Kubernetes orchestration layer for Ceph, deploying the OSD and MON processes as POD replica sets. From the Rook README file:

Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling and orchestration platform to perform its duties.

https://github.com/rook/rook/blob/master/README.md

When I created the cluster I built VMs with 40GB hard drives, so with 5 Kubernetes nodes that gives me ~200GB of storage on my cluster, most of which I’ll use for Ceph.

Installing Rook+Ceph

Installing Rook+Ceph is pretty straightforward. On my personal cluster I installed Rook+Ceph v0.9.0 by following these steps:

git clone git@github.com:rook/rook.git
cd rook
git checkout v0.9.0
cd cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
kubectl create -f cluster.yaml

Rook deploys the PODs in two namespaces, rook-ceph-system and rook-ceph. On my cluster it took about 2 minutes for the PODs to deploy, initialize, and get to a running state. While I was waiting for everything to finish I checked the POD status with:

$ kubectl -n rook-ceph-system get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-8tsq7                 1/1     Running   0          2d20h
rook-ceph-agent-b6mgs                 1/1     Running   0          2d20h
rook-ceph-agent-nff8n                 1/1     Running   0          2d20h
rook-ceph-agent-vl4zf                 1/1     Running   0          2d20h
rook-ceph-agent-vtpbj                 1/1     Running   0          2d20h
rook-ceph-agent-xq5dv                 1/1     Running   0          2d20h
rook-ceph-operator-85d64cfb99-hrnbs   1/1     Running   0          2d20h
rook-discover-9nqrp                   1/1     Running   0          2d20h
rook-discover-b62ds                   1/1     Running   0          2d20h
rook-discover-k77gw                   1/1     Running   0          2d20h
rook-discover-kqknr                   1/1     Running   0          2d20h
rook-discover-v2hhb                   1/1     Running   0          2d20h
rook-discover-wbkkq                   1/1     Running   0          2d20h
$ kubectl -n rook-ceph get pod
NAME                                READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-7d884ddc8b-kfxt9    1/1     Running     0          2d20h
rook-ceph-mon-a-77cbd865b8-ncg67    1/1     Running     0          2d20h
rook-ceph-mon-b-7cd4b9774f-js8n9    1/1     Running     0          2d20h
rook-ceph-mon-c-86778859c7-x2qg9    1/1     Running     0          2d20h
rook-ceph-osd-0-67fff79666-fcrss    1/1     Running     0          35h
rook-ceph-osd-1-58bd4ccbbf-lsxj9    1/1     Running     1          2d20h
rook-ceph-osd-2-bf99864b5-n4q7v     1/1     Running     0          2d20h
rook-ceph-osd-3-577466c968-j8gjr    1/1     Running     0          2d20h
rook-ceph-osd-4-6856c5c6c9-92tb6    1/1     Running     0          2d20h
rook-ceph-osd-5-8669577f6b-zqrq9    1/1     Running     0          2d20h
rook-ceph-osd-prepare-node1-xfbs7   0/2     Completed   0          2d20h
rook-ceph-osd-prepare-node2-c9f55   0/2     Completed   0          2d20h
rook-ceph-osd-prepare-node3-5g4nc   0/2     Completed   0          2d20h
rook-ceph-osd-prepare-node4-wj475   0/2     Completed   0          2d20h
rook-ceph-osd-prepare-node5-tf5bt   0/2     Completed   0          2d20h

Final tasks

Now I need to do two more things before I can install Prometheus and Grafana:

  • I need to make Rook the default storage provider for my cluster.
  • Since the Prometheus Helm chart requests volumes formatted with the XFS filesystem, I need to install XFS tools on all of my Ubuntu Kubernetes nodes. (XFS tools are not yet installed by Kubespray by default, although there’s currently a PR up that addresses that issue.)

Make Rook the default storage provider

To make Rook the default storage provider I just run a kubectl command:

kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

That updates the rook-ceph-block storage class and makes it the default for storage on the cluster. Any application I install will use Rook+Ceph for its data storage unless it requests a specific storage class.
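Note that this assumes the rook-ceph-block storage class already exists; if your cluster doesn’t have it yet, Rook v0.9 ships a storageclass.yaml example alongside operator.yaml and cluster.yaml that defines it. A quick way to confirm the patch took effect is to list the storage classes and look for the (default) flag; the output will look something like this:

$ kubectl get storageclass
NAME                        PROVISIONER          AGE
rook-ceph-block (default)   ceph.rook.io/block   2d20h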

Install XFS tools

Normally I would not recommend running one-off commands on a cluster. If you want to make a change to a cluster, you should encode the change in a playbook so it’s applied every time you update the cluster or add a new node. That’s why I submitted a PR to Kubespray to address this problem.

However, since my Kubespray PR has not yet been merged, and I built the cluster using Kubespray, and Kubespray uses Ansible, one of the easiest ways to install the XFS tools on all hosts is with Ansible’s “run a single command on all hosts” feature:

cd kubespray
export ANSIBLE_REMOTE_USER=ansible
ansible kube-node -i inventory/mycluster/hosts.ini \
    --become --become-user root \
    -a 'apt-get install -y xfsprogs'
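To confirm the package actually landed on every node, the same ad-hoc mechanism works for a read-only check (this just queries dpkg, so no root escalation is needed):

ansible kube-node -i inventory/mycluster/hosts.ini \
    -a 'dpkg -s xfsprogs'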

Deploy Prometheus and Grafana

Now that XFS is installed I can successfully deploy Prometheus and Grafana using Helm:

helm install --name prometheus stable/prometheus
helm install --name grafana stable/grafana

The Helm charts install Prometheus and Grafana and create persistent storage volumes on Rook+Ceph for Prometheus Server and Prometheus Alert Manager (formatted with XFS).
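A quick way to confirm the volumes landed on Rook is to list the persistent volume claims and check that each one is Bound and uses the rook-ceph-block storage class (the claim names will vary with your Helm release names):

kubectl get pvc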

Prometheus dashboard

Grafana dashboard

Rook persistent volume for Prometheus Server

Want to learn more?

If you’re interested in learning more about Rook, watch these videos from KubeCon 2018:

Introduction to Rook

Rook Deep Dive

Hope you find this useful.

Removing mount volumes from your desktop in Ubuntu 17.10

I just upgraded from Ubuntu 17.04 to 17.10, and one of the first things I noticed was that all of the disk volumes mounted under my home directory appeared on my desktop. In Ubuntu 17.10, all volumes that are mounted under /home or /media appear on your desktop, and none of the switches in the Settings tool will make them go away.

The names of the folders aren’t even useful. They’re names like 10GB Volume and 20GB Volume. If you have two volumes of the same size they’ll both have the same useless name, and there’s no hint of where the volume is mounted.

I have files, documents, databases, and email going back 20 years, much of it archival data that I want to be able to search but which never gets updated, so I keep these archive directories on separate read-only logical volumes. If my home directory’s file system gets corrupted beyond repair, the archives will still be intact. Since the volumes are read-only a misbehaving program or command-line oops won’t destroy the data.

But I don’t want to see them all over my desktop.

Tweak tool to the rescue! Install the tool and run it:

sudo apt install gnome-tweak-tool
gnome-tweak-tool

Then:

Desktop > Mounted Volumes > Off

No more volume icons on the desktop!

gnome-tweak-tool has other useful settings that are absent from the Settings tool, such as giving you the ability to move the window buttons to the upper left side of your windows.

Want to make the icons on your desktop smaller? Open up the File Manager, browse to Desktop, and select the icon size you want by moving the slider bar. The size of the icons on your Desktop and the size in the File Manager’s Desktop folder both use the same setting.

Hope you find this useful.


Running a 6-node CockroachDB cluster with AWS EFS storage

CockroachDB is a new distributed database which, like its namesake, is really hard to kill.

CockroachDB implements SQL DDL commands for creating schemas, tables, and indexes using the same syntax as PostgreSQL, and it supports the PostgreSQL wire protocol, meaning that any PostgreSQL database driver or client can be used to connect to a CockroachDB database. If you’re currently using PostgreSQL and you want an easier way to deploy a scale-out, highly available database, you should take a look at CockroachDB. In many cases you can just repoint your application at a CockroachDB server and it will run the same as it did with PostgreSQL.

The first day I tried CockroachDB I got a six-node system up and running in less than an hour, using CockroachDB’s Docker image on my Apcera cluster with AWS EFS as a backing store. This is what I did to get it working.

Set up an NFS provider for EFS

I already had an Apcera cluster for deploying Docker images running on AWS. This is the same cluster I used for my article on Mounting AWS EFS volumes inside Docker Containers. In fact, I set up the EFS provider using the same steps:

Set up the EFS volume using the AWS console.

Create an NFS provider that targets the EFS volume.

apc provider register apcfs-ha --type nfs \
    --url "nfs://10.0.0.112/" \
    --description 'Amazon EFS' \
    --batch \
    -- --version 4.1

Create a namespace and a private network

Create a namespace and a private network named “roachnet”.

apc namespace /sandbox/cockroach
apc network create roachnet

“roachnet” is a private VxLAN created by the Apcera platform; only containers that I’ve joined to the network can see it.

Create the first CockroachDB node

Next I create a container instance called “roach1” from the CockroachDB Docker image, open ports 8080 and 26257, tell it to use the EFS provider for storage, and have it advertise itself to other CockroachDB nodes so they can find it and join the DB cluster.

apc docker run roach1 --image cockroachdb/cockroach:v1.1.2 --port 8080 --port 26257 --provider /apcera/providers::apcfs-ha --start-cmd "/cockroach/cockroach.sh start --insecure --advertise-host roach1.apcera.local"
apc network join roachnet --job roach1 --discovery-address roach1
apc app start roach1
apc route add http://cockroach.earlruby.apcera-platform.io --https-only --app roach1 --port 8080 --batch

Create 5 more nodes

Create 5 more nodes and add them to roachnet:

for x in `seq 2 6`; do
    apc docker run roach$x --image cockroachdb/cockroach:v1.1.2 --port 8080 --port 26257 --provider /apcera/providers::apcfs-ha --start-cmd "/cockroach/cockroach.sh start --insecure --join roach1.apcera.local:26257"
    apc network join roachnet --job roach$x --discovery-address roach$x
    apc app start roach$x
    sleep 3
done

I added the “sleep 3” command because when I originally tested this (on CockroachDB 1.1.0) the platform started the containers so fast that the DB got confused and didn’t add all of them to the cluster. All nodes started, but only some joined the cluster. After I added the delay all nodes joined the cluster.

Verify that the containers are all running.

After that the cluster was up and running. I could connect to the database, create schemas, create tables, add, update, and delete records. I’m pretty happy with the initial results. Next step is automatically generating secure certificates so I’m not operating in insecure mode, then I’m going to run actual applications against the cluster.
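Because CockroachDB speaks the PostgreSQL wire protocol, the cockroach sql client (or plain psql) works for a quick smoke test. This is a hedged sketch, not part of the original setup: it assumes you have the cockroach binary somewhere that can reach a node on port 26257, and that the cluster is still running in insecure mode as configured above:

cockroach sql --insecure --host roach1.apcera.local --port 26257 \
    -e "CREATE DATABASE smoketest;" \
    -e "CREATE TABLE smoketest.t (id INT PRIMARY KEY, note STRING);" \
    -e "INSERT INTO smoketest.t VALUES (1, 'hello');" \
    -e "SELECT * FROM smoketest.t;"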

Hope you found this useful.

CockroachDB overview screen

CockroachDB Storage Screen

CockroachDB Queues