
Creating AWS Elastic Filesystems (EFS) with Terraform

The AWS Elastic Filesystem (EFS) gives you an NFSv4-mountable file system with almost unlimited storage capacity. The filesystem I just created to write this article reports 9,007,199,254,739,968 1K blocks free; in human-readable format df -kh reports 8.0E (exabytes) of available disk space. In the year 2019, that’s a lot of storage space.

In past articles I’ve shown how to create EFS resources manually, but this week I wanted to programmatically create EFS resources with Terraform so that I could easily create, test, and tear-down EFS and VM resources on AWS.

I also wanted to make sure that my EFS resources are secure, meaning that only VMs within my Virtual Private Cloud (VPC) can mount the EFS volume or otherwise access its data, and no one outside of my VPC can reach it at all.

Creating an EFS resource is easy. The Terraform code looks like this:

// efs.tf
resource "aws_efs_file_system" "efs-example" {
  creation_token   = "efs-example"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
  encrypted        = "true"
  tags = {
    Name = "EfsExample"
  }
}

This creates the EFS filesystem on AWS. EFS also requires a mount target, which gives your VMs a way to mount the EFS volume using NFS. The Terraform code to create a mount target looks like this:

// efs.tf (continued)
resource "aws_efs_mount_target" "efs-mt-example" {
  file_system_id  = "${aws_efs_file_system.efs-example.id}"
  subnet_id       = "${aws_subnet.subnet-efs.id}"
  security_groups = ["${aws_security_group.ingress-efs-test.id}"]
}

The file_system_id is automatically set to the efs-example resource’s ID, which ties the mount target to the EFS file system.

The subnet_id for subnet-efs is a separate /24 subnet I created from my VPC just for EFS. The ingress-efs-test security group is a separate security group I created for EFS. Let’s cover each of these separately.

A separate EFS subnet

First off I’ve allocated a /16 subnet for my VPC and I carve out individual /24 subnets from that VPC for each cluster of VMs and/or EFS resources that I add to an AWS availability zone. Here’s how I’ve defined my test environment VPC and EFS subnet:

//network.tf
resource "aws_vpc" "test-env" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name = "test-env"
  }
}

resource "aws_subnet" "subnet-efs" {
cidr_block = "${cidrsubnet(aws_vpc.test-env.cidr_block, 8, 8)}"
vpc_id = "${aws_vpc.test-env.id}"
availability_zone = "us-east-1a"
}

That will give me the subnet 10.0.8.0/24 for my EFS subnet.
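
You can check the arithmetic with terraform console: starting from 10.0.0.0/16, cidrsubnet() adds 8 bits of network prefix (making a /24) and selects network number 8:

$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 8)
10.0.8.0/24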

If you want to understand how to use Terraform’s cidrsubnet function to carve out separate subnets, see the article Terraform `cidrsubnet` Deconstructed by Lisa Hagemann. Her article gives excellent examples on how to do just that.

The EFS security group

Finally, I need a security group that only allows traffic between my test environment VMs and my test environment EFS volume. I already have a security group called ingress-test-env that controls security for my VMs. For EFS I create another security group that allows inbound traffic on port 2049 (the NFSv4 port) and allows egress traffic on any port.

By setting the ingress-efs-test resource’s security_groups attribute to ingress-test-env, only network traffic to and from VMs in the ingress-test-env security group can reach the EFS volume. If you use security_groups like this you really lock down the EFS volume, and you don’t need to set the cidr_blocks attribute at all.

// security.tf
resource "aws_security_group" "ingress-efs-test" {
  name   = "ingress-efs-test-sg"
  vpc_id = "${aws_vpc.test-env.id}"

  // NFS
  ingress {
    security_groups = ["${aws_security_group.ingress-test-env.id}"]
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
  }

  // Terraform removes the default rule
  egress {
    security_groups = ["${aws_security_group.ingress-test-env.id}"]
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
  }
}

After adding these Terraform files to my cluster configuration and running terraform apply, I end up with a new EFS filesystem that I can mount from any VM running in my VPC.

# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-31337er3.efs.us-east-1.amazonaws.com:/ /mnt/efs
# df -kh
Filesystem                                 Size  Used Avail Use% Mounted on
udev                                       481M     0  481M   0% /dev
tmpfs                                       99M  744K   98M   1% /run
/dev/xvda1                                 7.7G  3.0G  4.7G  40% /
tmpfs                                      492M     0  492M   0% /dev/shm
tmpfs                                      5.0M     0  5.0M   0% /run/lock
tmpfs                                      492M     0  492M   0% /sys/fs/cgroup
/dev/loop0                                  13M   13M     0 100% /snap/amazon-ssm-agent/150
/dev/loop1                                  87M   87M     0 100% /snap/core/4650
/dev/loop2                                  90M   90M     0 100% /snap/core/6130
/dev/loop3                                  18M   18M     0 100% /snap/amazon-ssm-agent/930
tmpfs                                       99M     0   99M   0% /run/user/1000
fs-31337er3.efs.us-east-1.amazonaws.com:/  8.0E     0  8.0E   0% /mnt/efs

Hope you found this useful.


Using Rook+Ceph for persistent storage on Kubernetes

I wanted to install Prometheus and Grafana on my new Kubernetes cluster, but in order for these packages to work they need someplace to store persistent data. I had run performance and scale tests on Ceph when I was working as a Cloud Architect at Seagate, and I’ve played with Rook during the past year, so I decided to install Rook+Ceph and use that for the Kubernetes cluster’s data storage.

Ceph is a distributed storage system that provides object, file, and block storage. On each storage node you’ll find a file system where Ceph stores objects and a Ceph OSD (object storage daemon) process. On a Ceph cluster you’ll also find Ceph MON (monitor) daemons, which ensure that the Ceph cluster remains highly available.

Rook acts as a Kubernetes orchestration layer for Ceph, deploying the OSD and MON processes as POD replica sets. From the Rook README file:

Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling and orchestration platform to perform its duties.

https://github.com/rook/rook/blob/master/README.md

When I created the cluster I built VMs with 40GB hard drives, so with 5 Kubernetes nodes that gives me ~200GB of storage on my cluster, most of which I’ll use for Ceph.

Installing Rook+Ceph

Installing Rook+Ceph is pretty straightforward. On my personal cluster I installed Rook+Ceph v0.9.0 by following these steps:

git clone git@github.com:rook/rook.git
cd rook
git checkout v0.9.0
cd cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
kubectl create -f cluster.yaml

Rook deploys the PODs in two namespaces, rook-ceph-system and rook-ceph. On my cluster it took about 2 minutes for the PODs to deploy, initialize, and get to a running state. While I was waiting for everything to finish I checked the POD status with:

$ kubectl -n rook-ceph-system get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-8tsq7                 1/1     Running   0          2d20h
rook-ceph-agent-b6mgs                 1/1     Running   0          2d20h
rook-ceph-agent-nff8n                 1/1     Running   0          2d20h
rook-ceph-agent-vl4zf                 1/1     Running   0          2d20h
rook-ceph-agent-vtpbj                 1/1     Running   0          2d20h
rook-ceph-agent-xq5dv                 1/1     Running   0          2d20h
rook-ceph-operator-85d64cfb99-hrnbs   1/1     Running   0          2d20h
rook-discover-9nqrp                   1/1     Running   0          2d20h
rook-discover-b62ds                   1/1     Running   0          2d20h
rook-discover-k77gw                   1/1     Running   0          2d20h
rook-discover-kqknr                   1/1     Running   0          2d20h
rook-discover-v2hhb                   1/1     Running   0          2d20h
rook-discover-wbkkq                   1/1     Running   0          2d20h
$ kubectl -n rook-ceph get pod
NAME                                READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-7d884ddc8b-kfxt9    1/1     Running     0          2d20h
rook-ceph-mon-a-77cbd865b8-ncg67    1/1     Running     0          2d20h
rook-ceph-mon-b-7cd4b9774f-js8n9    1/1     Running     0          2d20h
rook-ceph-mon-c-86778859c7-x2qg9    1/1     Running     0          2d20h
rook-ceph-osd-0-67fff79666-fcrss    1/1     Running     0          35h
rook-ceph-osd-1-58bd4ccbbf-lsxj9    1/1     Running     1          2d20h
rook-ceph-osd-2-bf99864b5-n4q7v     1/1     Running     0          2d20h
rook-ceph-osd-3-577466c968-j8gjr    1/1     Running     0          2d20h
rook-ceph-osd-4-6856c5c6c9-92tb6    1/1     Running     0          2d20h
rook-ceph-osd-5-8669577f6b-zqrq9    1/1     Running     0          2d20h
rook-ceph-osd-prepare-node1-xfbs7   0/2     Completed   0          2d20h
rook-ceph-osd-prepare-node2-c9f55   0/2     Completed   0          2d20h
rook-ceph-osd-prepare-node3-5g4nc   0/2     Completed   0          2d20h
rook-ceph-osd-prepare-node4-wj475   0/2     Completed   0          2d20h
rook-ceph-osd-prepare-node5-tf5bt   0/2     Completed   0          2d20h

Final tasks

Now I need to do two more things before I can install Prometheus and Grafana:

  • I need to make Rook the default storage provider for my cluster.
  • Since the Prometheus Helm chart requests volumes formatted with the XFS filesystem, I need to install XFS tools on all of my Ubuntu Kubernetes nodes. (Kubespray does not yet install XFS tools by default, although there’s currently a PR up that addresses that issue.)

Make Rook the default storage provider

To make Rook the default storage provider I just run a kubectl command:

kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

That updates the rook-ceph-block storage class and makes it the default for storage on the cluster. Any applications that I install will use Rook+Ceph for their data storage if they don’t specify a specific storage class.
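
To check the change, kubectl get storageclass should now show rook-ceph-block marked as (default), and a PersistentVolumeClaim that names no storage class should bind to it. A minimal throwaway claim looks like this (the name test-claim is arbitrary):

kubectl get storageclass

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc test-claim
kubectl delete pvc test-claim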

Install XFS tools

Normally I would not recommend running one-off commands on a cluster. If you want to make a change to a cluster, you should encode the change in a playbook so it’s applied every time you update the cluster or add a new node. That’s why I submitted a PR to Kubespray to address this problem.

However, since my Kubespray PR has not yet merged, and I built the cluster using Kubespray, and Kubespray uses Ansible, one of the easiest ways to install XFS tools on all hosts is by using the Ansible “run a single command on all hosts” feature:

cd kubespray
export ANSIBLE_REMOTE_USER=ansible
ansible kube-node -i inventory/mycluster/hosts.ini \
    --become --become-user root \
    -a 'apt-get install -y xfsprogs'

Deploy Prometheus and Grafana

Now that XFS is installed I can successfully deploy Prometheus and Grafana using Helm:

helm install --name prometheus stable/prometheus
helm install --name grafana stable/grafana

The Helm charts install Prometheus and Grafana and create persistent storage volumes on Rook+Ceph for Prometheus Server and Prometheus Alert Manager (formatted with XFS).

Prometheus dashboard

Grafana dashboard

Rook persistent volume for Prometheus Server

Want to learn more?

If you’re interested in learning more about Rook, watch these videos from KubeCon 2018:

Introduction to Rook

Rook Deep Dive

Hope you find this useful.


Removing mount volumes from your desktop in Ubuntu 17.10

I just upgraded from Ubuntu 17.04 to 17.10 and one of the first things I noticed was that all of the disk volumes mounted under my home directory appeared on my desktop. In Ubuntu 17.10, all volumes that are mounted under /home or /media appear on your desktop, and none of the switches in the Settings tool will make them go away.

The names of the folders aren’t even useful. They’re names like 10GB Volume and 20GB Volume. If you have two volumes the same size they’ll both have the same useless name. No hint of where the volume is mounted appears.

I have files, documents, databases, and email going back 20 years, much of it archival data that I want to be able to search but which never gets updated, so I keep these archive directories on separate read-only logical volumes. If my home directory’s file system gets corrupted beyond repair, the archives will still be intact. Since the volumes are read-only a misbehaving program or command-line oops won’t destroy the data.

But I don’t want to see them all over my desktop.

Tweak tool to the rescue! Install the tool and run it:

sudo apt install gnome-tweak-tool
gnome-tweak-tool

Then:

Desktop > Mounted Volumes > Off

No more volume icons on the desktop!
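
If you’d rather skip the GUI, the tweak tool is just flipping a gsettings key. Assuming the org.gnome.nautilus.desktop schema that ships with 17.10’s GNOME desktop, this does the same thing:

gsettings set org.gnome.nautilus.desktop volumes-visible false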

gnome-tweak-tool has other useful settings that are absent from the Settings tool, such as giving you the ability to move the window buttons to the upper left side of your windows.

Want to make the icons on your desktop smaller? Open up the File Manager, browse to Desktop, and select the icon size you want by moving the slider bar. The size of the icons on your Desktop and the size in the File Manager’s Desktop folder both use the same setting.

Hope you find this useful.



Running a 6-node CockroachDB cluster with AWS EFS storage

CockroachDB is a new distributed database which, like its namesake, is really hard to kill.

CockroachDB implements SQL DDL commands for creating schemas, tables, and indexes using the same syntax as PostgreSQL, and it supports the PostgreSQL wire protocol, meaning that any PostgreSQL database driver or client can be used to connect to a CockroachDB database. If you’re currently using PostgreSQL and you want an easy-to-scale, highly available way to deploy a database, you should take a look at CockroachDB. In many cases you can just repoint your application at a CockroachDB server and your application will run the same as it did using PostgreSQL.

The first day I tried using CockroachDB I got a six-node system up and running in less than an hour, using CockroachDB’s Docker image on my Apcera cluster with AWS EFS as a backing store. This is what I did to get it working.

Set up an NFS provider for EFS

I already had an Apcera cluster for deploying Docker images running on AWS. This is the same cluster I used for my article on Mounting AWS EFS volumes inside Docker Containers. In fact, I set up the EFS provider using the same steps:

Set up the EFS volume using the AWS console.

Create an NFS provider that targets the EFS volume.

apc provider register apcfs-ha --type nfs \
    --url "nfs://10.0.0.112/" \
    --description 'Amazon EFS' \
    --batch \
    -- --version 4.1

Create a namespace and a private network

Create a namespace and a private network named “roachnet”.

apc namespace /sandbox/cockroach
apc network create roachnet

“roachnet” is a private VxLAN created by the Apcera platform that only containers that I’ve joined to the network can see.

Create the first CockroachDB node

Next I create a container instance called “roach1” from the CockroachDB v1.1.2 Docker image, open ports 8080 and 26257, tell it to use the EFS provider for storage, and have it advertise itself to other CockroachDB nodes so they can find it and join the DB cluster.

apc docker run roach1 --image cockroachdb/cockroach:v1.1.2 \
    --port 8080 --port 26257 \
    --provider /apcera/providers::apcfs-ha \
    --start-cmd "/cockroach/cockroach.sh start --insecure --advertise-host roach1.apcera.local"
apc network join roachnet --job roach1 --discovery-address roach1
apc app start roach1
apc route add http://cockroach.earlruby.apcera-platform.io --https-only --app roach1 --port 8080 --batch

Create 5 more nodes

Create 5 more nodes and add them to roachnet:

for x in `seq 2 6`; do
    apc docker run roach$x --image cockroachdb/cockroach:v1.1.2 \
        --port 8080 --port 26257 \
        --provider /apcera/providers::apcfs-ha \
        --start-cmd "/cockroach/cockroach.sh start --insecure --join roach1.apcera.local:26257"
    apc network join roachnet --job roach$x --discovery-address roach$x
    apc app start roach$x
    sleep 3
done

I added the “sleep 3” command because when I originally tested this (on CockroachDB 1.1.0) the platform started the containers so fast that the DB got confused and didn’t add all of them to the cluster. All nodes started, but only some joined the cluster. After I added the delay all nodes joined the cluster.

Verify that the containers are all running.
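
With the namespace still set to /sandbox/cockroach, apc app list should list all six roach jobs and show them as started:

apc app list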

After that the cluster was up and running. I could connect to the database, create schemas, create tables, add, update, and delete records. I’m pretty happy with the initial results. Next step is automatically generating secure certificates so I’m not operating in insecure mode, then I’m going to run actual applications against the cluster.
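
For example, since CockroachDB speaks the PostgreSQL wire protocol, any PostgreSQL client can run a quick smoke test; the SQL shell bundled inside each container works too. Something like this (the database and table names are just examples):

# run from inside one of the roach containers
/cockroach/cockroach sql --insecure --host roach1.apcera.local <<'SQL'
CREATE DATABASE demo;
CREATE TABLE demo.accounts (id INT PRIMARY KEY, balance DECIMAL);
INSERT INTO demo.accounts VALUES (1, 100.50), (2, 250.00);
SELECT * FROM demo.accounts;
SQL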

Hope you found this useful.

CockroachDB overview screen

CockroachDB Storage Screen

CockroachDB Queues



Mounting AWS EFS volumes inside Docker Containers

Amazon announced the development of the Amazon Elastic File System (AWS EFS) in 2015. EFS was designed to provide multiple EC2 instances with shared, low-latency access to a fully-managed file system. On June 28, 2016 Amazon announced that EFS is now available for production use in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) Regions.

Apcera‘s NFS Service Gateway can be used to access AWS EFS storage volumes within containers. You can use EFS to provide persistent storage to your containers running on AWS-hosted clouds in regions where EFS is available.

Gathering information

Before you begin you will need to know:

  • The name of the AWS Region where your Apcera Platform is running
  • The name/ID of the AWS VPC where your Apcera Platform is running
  • The name/ID of the AWS security group for your Apcera Platform

Setting up an EFS volume

  1. Log into your AWS console.
  2. Select the name of the AWS Region where your Apcera Platform is running on the upper right side of the screen.
  3. Select Elastic File System.
  4. Click Create File System.
  5. Configure the file system access:
    • Select the name of the VPC.
    • The availability zone and subnet should be selected for you automatically.
    • If your VPC has more than one subnet (unusual) then select the subnet containing the Instance Managers that will be connecting to the EFS volume.
    • Leave IP address set to Automatic.
    • The first EFS volume you create will create a new security group. Use that security group for this and all future EFS volumes. Write down the name of the new EFS security group – we’ll configure it in the next few steps.
    • Click Next Step
  6. Configure optional settings:
    • Set the name of the EFS volume.
    • Choose the performance mode.
    • Click Next Step
  7. Review and create:
  8. If everything looks OK, click Create File System.
  9. You should see a “Success!” message and a new EFS volume with “Life Cycle State” = “Creating”.
  10. Write down the IP address of the EFS volume.

Update the EFS security group

  • Go back to the main console menu and select EC2.
  • Click Security Groups in the left hand nav menu.
  • Type the name of the new EFS security group into the search filter list.
  • On the bottom half of the screen delete the default inbound and outbound rules.
  • Add one inbound rule to allow all TCP traffic on port 2049 from the source “name/ID of the AWS security group for your Apcera Platform”
  • Add one outbound rule to allow all TCP traffic on port 2049 to the destination “name/ID of the AWS security group for your Apcera Platform”
  • This allows all VMs within your Apcera Platform security group to connect to your EFS volume on port 2049 (NFS).
  • No other traffic from any other source or to any other destination is allowed.

Create an NFS Provider for the EFS volume

We’re going to create a single provider for the EFS volume. Each time you have a container or set of containers that need a persistent file system, just create a new service from the same provider. Each new service will carve out a new namespace on the EFS volume, keeping the files associated with that service separate from the files in all other services that use the same provider.

According to the EFS FAQ, "When you create a file system, you create endpoints in your VPC called 'mount targets.' Each mount target provides an IP address and a DNS name, and you use this IP address or DNS name in your mount command. Only resources that can access a mount target can access your file system." Since the Apcera Platform isn’t using Amazon DNS services internally, we’ll use the IP address to connect to the EFS volume.

To create the provider, you need to construct a URL describing the volume. In this case, we’ll use the internal IP address of the EFS volume as the hostname and / as the exported volume name. All EFS volumes use the NFS v4.1 protocol. If the IP address of the EFS volume is 10.0.0.112 we’d construct a provider using:

apc provider register awsefs --type nfs \
    --url "nfs://10.0.0.112/" \
    --description 'Amazon EFS' \
    --batch \
    -- --version 4.1

Create a service from the provider:

apc service create efs-service-1 \
    --provider awsefs \
    --description 'Amazon EFS Service' \
    --batch

Create a capsule, bind the service to the capsule, and connect to the capsule:

apc capsule create efs-capsule1 --image linux -ae --batch
apc service bind efs-service-1 --job efs-capsule1 \
    --batch -- --mountpath /an/unlimited/supply
apc capsule connect efs-capsule1

Once connected, type df -k to see the mounted file system.

You can bind this service to any container that needs a shared, persistent file system. Each time you need a new shared, persistent file system for a container or group of containers just create a new service using the same provider and bind the service to your job or jobs.

Persistence for Docker

Now that we have a provider that can carve out EFS storage for containers, let’s try spinning up some Docker images.

On the Apcera Platform, if the specification for a Docker image (Dockerfile) specifies that the app requires persisted volumes, you must do one of the following when creating the job:

  • Include the --provider flag when you create or run the Docker job. You must include this flag if you include the --volume flag when creating or running the Docker job.
  • Include the --ignore-volumes flag when you create or run the Docker job.

Here is an example of running NGINX inside a Docker container on the Apcera platform, where the content for the site is stored on an EFS volume:

I’m using the Apcera “apc” command-line tool to build the container, pulling the nginx image directly off hub.docker.com, telling it to use the awsefs EFS volume provider I created earlier for persistence, and to mount the EFS volume at the mount point “/usr/share/nginx/html”.
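
Reconstructed from the flags described above, the command would look something like this (the job name nginx-efs and the exact flag usage are illustrative, not verbatim):

# illustrative reconstruction -- job name and flag usage are assumptions
apc docker run nginx-efs --image nginx \
    --port 80 --port 443 \
    --provider /apcera/providers::awsefs \
    --volume /usr/share/nginx/html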

Now connect to the container.

/proc/mounts contains a list of all of the container’s mount points. I can verify that the container does indeed have an EFS volume by grepping /proc/mounts for the mount point:
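
# run inside the container
grep /usr/share/nginx/html /proc/mounts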

Grepping for “/usr/share/nginx/html” shows the IP address 10.0.0.112, which is the IP of the EFS volume; the long directory name that follows is the unique namespace for the service; the mount point is “/usr/share/nginx/html”; and the mount type is “nfs4”.

There is no content in the directory, so I add some by echoing some HTML code to an index.html file. My container will proclaim to the world “NGINX in a Docker container on Apcera with content stored on EFS” in an H3 typeface!
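
Something along these lines, run inside the container:

echo '<h3>NGINX in a Docker container on Apcera with content stored on EFS</h3>' > /usr/share/nginx/html/index.html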

Now that I have some content I need to add a route to the content. Right now the NGINX container is running, and listening on ports 80 and 443, but it’s completely isolated from the outside world — no one can connect to those ports unless there’s a route (a URL) set up.

My cluster is running on the domain earlruby.apcera-platform.io, so I add a route.
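
Reusing the apc route syntax shown in the CockroachDB post above, the command would be something like this (the app name matches the illustrative nginx-efs job):

# illustrative reconstruction -- app name is an assumption
apc route add http://nginx.earlruby.apcera-platform.io --app nginx-efs --port 80 --batch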

I have successfully added the http route http://nginx.earlruby.apcera-platform.io/ to my NGINX container. This is a real public DNS entry. To verify that it works I point my browser at the route I just added:

Success!

Such an amazing app is bound to go viral, and a single NGINX container may not be able to keep up with the load. I want to ensure that my app can keep up and remain highly available, and that it keeps running even if one or more VMs in my cluster get killed off, so I add more NGINX containers.

Now I’ve got 20 containers running my NGINX app, all serving up the same content, running on multiple VMs across my cluster, all load-balanced under the single URL http://nginx.earlruby.apcera-platform.io/. If any container gets killed off, the Apcera platform will spin up a new one. If any VM in the cluster dies, any containers running on it will automatically be migrated to new hosts. If I want to scale up the app to 100 or 1000 containers, or back down to 1, it’s a one-line command to make the change.

In terms of resources, I’m using slightly less than 45 MiB to run those 20 containers. That’s not a typo — 45 MiB! Containers are much more efficient users of RAM than VMs.

I hope you find this useful.

This article originally appeared as an Apcera blog post on July 21, 2016.



Policy-based Cloud Storage

This is a talk I gave last week at the SF Microservices Meetup titled Policy-based Cloud Storage, Persisting Data in a Multi-Site, Multi-Cloud World. In it I cover Apcera‘s approach to storage for containers and how to use policy to manage very large scale application deployments.



Adding a LUKS-encrypted iSCSI volume to Synology DS414 NAS and Ubuntu 15.04

I have an Ubuntu 15.04 “Vivid” workstation already set up with LUKS full disk encryption, and I have a Synology DS414 NAS with 12TB raw storage on my home network. I wanted to add a disk volume on the Synology DS414 that I could mount on the Ubuntu server, but NFS doesn’t support “at rest” encrypted file systems, and using EncFS over NFS seemed like the wrong way to go about it, so I decided to try setting up an iSCSI volume and encrypting it with LUKS. Using this type of setup, all data is encrypted both “on the wire” and “at rest”.

Log into the Synology Admin Panel and select Main Menu > Storage Manager:

  • Add an iSCSI LUN
    • Set Thin Provisioning = No
    • Advanced LUN Features = No
    • Make the volume as big as you need
  • Add an iSCSI Target
    • Use CHAP authentication
    • Write down the login name and password you choose

On your Ubuntu box switch over to a root prompt:

sudo /bin/bash

Install the open-iscsi drivers. (Since I’m already running LUKS on my Ubuntu box I don’t need to install LUKS.)

apt-get install open-iscsi

Edit the conf file

vi /etc/iscsi/iscsid.conf

Edit these lines:

node.startup = automatic
node.session.auth.username = [CHAP user name on Synology box]
node.session.auth.password = [CHAP password on Synology box]

Restart the open-iscsi service:

service open-iscsi restart
service open-iscsi status

Start open-iscsi at boot time:

systemctl enable open-iscsi

Now find the name of the iSCSI target on the Synology box:

iscsiadm -m discovery -t st -p $SYNOLOGY_IP
iscsiadm -m node

The target name should look something like “iqn.2000-01.com.synology:boxname.target-1.62332311”

Still on the Ubuntu workstation, log into the iSCSI target:

iscsiadm -m node --targetname "$TARGET_NAME" --portal "$SYNOLOGY_IP:3260" --login

Look for new devices:

fdisk -l

At this point fdisk should show you a new block device which is the iSCSI disk volume on the Synology box. In my case it was /dev/sdd.

Partition the device. I made one big /dev/sdd1 partition, type 8e (Linux LVM):

fdisk /dev/sdd

Set up the device as a LUKS-encrypted device:

cryptsetup --verbose --verify-passphrase luksFormat /dev/sdd1

Open the LUKS volume:

cryptsetup luksOpen /dev/sdd1 backupiscsi

Create a physical volume from the LUKS volume:

pvcreate /dev/mapper/backupiscsi

Add that to a new volume group:

vgcreate ibackup /dev/mapper/backupiscsi

Create a logical volume within the volume group:

lvcreate -L 1800G -n backupvol /dev/ibackup

Put a file system on the logical volume:

mkfs.ext4 /dev/ibackup/backupvol

Add the logical volume to /etc/fstab to mount it on startup:

# Synology iSCSI target LUN-1
/dev/ibackup/backupvol /mnt/backup ext4 defaults,nofail,nobootwait 0 6
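
The fstab entry assumes the mount point exists, so create it and do the first mount by hand:

mkdir -p /mnt/backup
mount /mnt/backup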

Get the UUID of the iSCSI drive:

ls -l /dev/disk/by-uuid | grep sdd1

Add the UUID to /etc/crypttab to be automatically prompted for the decrypt passphrase when you boot up Ubuntu:

backupiscsi UUID=693568ca-9334-4c19-8b01-881f2247ae0d none luks

If you found this interesting, you might want to check out my article Adding an external encrypted drive with LVM to Ubuntu Linux.

Hope you found this useful.



2014 HPCwire Awards

The StratoStor project I’ve been working on for the past 10 months just got a “Top 5 New Products or Technologies to Watch” award from HPCwire, announced at this week’s SuperComputing 2014 (SC14) conference in New Orleans.

HPC is short for High Performance Computing. HPCwire is a news bureau for all things HPC, and SC14 is where every major vendor of HPC equipment and products shows off their wares, so getting this bit of recognition from the readers of HPCwire is really nice.

So THANK YOU HPCwire readers, for this award.

https://www.hpcwire.com/2014-hpcwire-readers-choice-awards/23/

2014 HPCwire Awards



Validating Distributed Application Workloads

This is the talk I gave at RICON this year on Validating Distributed Application Workloads. It’s about how we set up test environments at Seagate for validating storage system performance at the petabyte scale. This talk centers around the testing done to validate performance of a 2PB rack running Riak CS.



Creating differential backups with hard links and rsync

You can use a hard link in Linux to create two file names that both point to the same physical location on a hard disk. For instance, if I type:

> echo xxxx > a
> cp -l a b
> cat a
xxxx
> cat b
xxxx

I create a file named “a” that contains the string “xxxx”. Then I create a hard link “b” that also points to the same spot on the disk. Now if I write to the file “a” whatever I write also appears in file “b” and vice versa:

> echo yyyy > b
> cat b
yyyy
> cat a
yyyy
> echo zzzz > a
> cat a
zzzz
> cat b
zzzz

Copying to a hard link updates the data on the disk that each hard link points to:

> rm -f a b c
> echo xxxx > a
> echo yyyy > c
> cp -l a b
> cat a b c
xxxx
xxxx
yyyy

“a” and “b” point to the same file on disk, “c” is a separate file. If I copy a file “c” to “b” that also updates “a”:

> cp c b 
> cat a b c
yyyy
yyyy
yyyy
> echo zzzz > c
> cat a b c
yyyy
yyyy
zzzz 

What most people don’t know is that rsync is an exception to this rule. If you use rsync to sync two files and it sees that the target file is a hard link, it will break the link and create a new target file, but only if the contents of the two files are not the same:

> rm a
> rm b
> echo xxxx > a
> cp -l a b
> cat a
xxxx
> cat b
xxxx
> echo yyyy > c
> cat c
yyyy
> rsync -av c b
sending incremental file list
c
sent 87 bytes  received 31 bytes  236.00 bytes/sec
total size is 5  speedup is 0.04
> cat b
yyyy
> cat c
yyyy
> cat a
xxxx

File “b” is no longer a hard link of “a”, it’s a new file. If I update “a” it no longer updates “b”:

> echo zzzz > a
> cat a b c
zzzz
yyyy
yyyy

However, if the file that I’m rsync-ing is the same as “b”, then rsync does NOT break the hard link, it leaves the file alone:

> rm a
> rm b
> rm c
> echo xxxx > a
> cp -al a b
> cp -p a c
> cat a b c
xxxx
xxxx
xxxx

At this point “a” and “b” both point to the same file on the disk, which contains the string “xxxx”. “c” is a separate file that also contains the string “xxxx” and has the same permissions and timestamp as “a”.

> rsync -av c b
sending incremental file list
sent 39 bytes  received 12 bytes  102.00 bytes/sec
total size is 5  speedup is 0.10
> cat a b c
xxxx
xxxx
xxxx

At this point I’ve rsynced file “c” to “b”, but since c has the same contents and timestamp as “a” and “b” rsync does nothing at all. It doesn’t break the hard link. If I change “b” it still updates “a”:

> echo yyyy > b
> cat a b c
yyyy
yyyy
xxxx

This is how many modern file system backup programs work. On day 1 you make an rsync copy of your entire file system:

backup@backup_server> DAY1=`date +%Y%m%d%H%M%S`
backup@backup_server> rsync -av -e ssh earl@192.168.1.20:/home/earl/ /var/backups/$DAY1/

On day 2 you make a hard link copy of the backup, then a fresh rsync:

backup@backup_server> DAY2=`date +%Y%m%d%H%M%S`
backup@backup_server> cp -al /var/backups/$DAY1 /var/backups/$DAY2
backup@backup_server> rsync -av -e ssh --delete earl@192.168.1.20:/home/earl/ /var/backups/$DAY2/

“cp -al” makes a hard link copy of the entire /home/earl/ directory structure from the previous day, then rsync runs against the copy of the tree. If a file remains unchanged then rsync does nothing — the file remains a hard link. However, if the file’s contents changed, then rsync will create a new copy of the file in the target directory. If a file was deleted from /home/earl then rsync deletes the hard link from that day’s copy.

In this way, the $DAY1 directory has a snapshot of the /home/earl tree as it existed on day 1, and the $DAY2 directory has a snapshot of the /home/earl tree as it existed on day 2, but only the files that changed take up additional disk space. If you need to find a file as it existed at some point in time you can look at that day’s tree. If you need to restore yesterday’s backup you can rsync the tree from yesterday, but you don’t have to store a copy of all of the data from each day, you only use additional disk space for files that changed or were added.
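
A restore is just the backup rsync run in the opposite direction, for example to put yesterday’s tree back in place:

backup@backup_server> rsync -av -e ssh /var/backups/$DAY2/ earl@192.168.1.20:/home/earl/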

I use this technique to keep 90 daily backups of a 500GB file system on a 1TB drive.
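
Pruning old backups is just deleting the oldest snapshot directories. Here’s a sketch, assuming GNU coreutils and the timestamp-named directories created above:

backup@backup_server> cd /var/backups
backup@backup_server> ls -1d 20* | head -n -90 | xargs -r rm -rf

ls sorts the timestamp-named directories chronologically, head -n -90 drops the newest 90 from the list, and xargs removes whatever is left.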

One caveat: The hard links do use up inodes. If you’re using a file system such as ext3, which has a set number of inodes, you should allocate extra inodes on the backup volume when you create it. If you’re using a file system that can dynamically add inodes, such as ext4, zfs or btrfs, then you don’t need to worry about this.
