post

Installing TrueNAS Scale on a Terra Master NAS

My 10+ year old Synology NAS failed recently. I tried replacing the power supply and the CMOS battery, but it would not come back to life, so I started searching around for a replacement.

I originally intended to buy another Synology NAS, but when I looked over the various models I came to the conclusion that Synology was not keeping up with the competition in terms of hardware. I could get a NAS from a competitor with faster networking, a faster CPU, and more RAM for about half the price. Also, many of the apps that I’d originally used on the Synology NAS were no longer being supported by Synology, and most of the new apps they’d added were Synology-branded apps with unknown provenance and unknown support status, not open-source apps that I could Google a fix for if I had a problem.

After looking at a few brands I picked a low-end 4-drive Terra Master, the F4-424. The F4-424 comes in 3 flavors: the F4-424, F4-424 Pro, and F4-424 Max. The 3 share the same chassis, with the main differences between the models being the RAM, CPU, GPU, and NICs. I’m mostly using the NAS as backup storage for all of my home computers, plus test storage (Minio, IPFS) for various Kubernetes test clusters, so I didn’t need lots of RAM, a fast CPU, or 10GbE. With the low-end F4-424 I was still getting a 4-core Intel CPU with 8GB DDR5 RAM and 2x 2.5GbE NICs, which is more than adequate for my needs.

I’d read that the Terra Master OS (TOS) wasn’t great, but what sold me on the Terra Master is that it’s basically a low-power Intel PC, so if you want to install some other OS on it, like TrueNAS Scale, openmediavault, or UnRAID, you can. I also read that Terra Master had just released TOS 6, and that it was a huge improvement over previous releases.

I set up the Terra Master with 4x Western Digital 8TB “Red” NAS drives and 2x Western Digital 1TB “Black” NVMe drives. I used TOS 6 for a week, and I thought it was fine. It was actually fairly simple to set up and run, supported using the NVMe drives as a read/write cache, and supported Time Machine backups, iSCSI, SMB, and NFS. I had no issues with it.

But I wanted to try out TrueNAS Scale so I downloaded a copy of the ISO and burned it to a USB flash drive. The Terra Master has a single USB-A port on the back, so I connected a USB hub and plugged in the flash drive, a keyboard, and a mouse. (There are YouTube videos that say you need to take the NAS apart and install the flash drive in an internal USB slot. You do not need to do this.)
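If you’re writing the ISO to the flash drive from a Linux machine, dd works fine for this. This is a minimal sketch: the ISO file name is whatever you downloaded, and /dev/sdX is a placeholder, so run lsblk first and make absolutely sure you have the right device, because dd will overwrite it.

lsblk
sudo dd if=TrueNAS-SCALE.iso of=/dev/sdX bs=4M status=progress conv=fsync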

I rebooted the NAS and started hitting the DEL key when it first powered on to get into the BIOS. First I went to the Boot screen and disabled the “UTOS Boot First” setting. This is the setting that tells the NAS to boot TOS 6 from a 3.75GB internal flash drive.

Next I went to the Save & Exit screen and selected my USB drive as the Boot Override device. It booted up my USB flash drive and I followed the TrueNAS Scale install instructions to install TrueNAS Scale on the first NVMe drive. It only took a minute or two.

Once that was done I rebooted again, and started hitting the DEL key when it rebooted to get into the BIOS. This time I went to the Boot screen to change the primary boot device. One of the NVMe disks was now labeled “Debian”. That’s the TrueNAS disk so I selected that, then saved and exited.

Once the NAS booted up the screen displayed the IP address and URL for logging in, so I went to my laptop and logged into the web UI to finish the setup. I added the 4x 8TB drives to a single RAIDZ1 storage pool and made the remaining 1TB NVMe drive into a R/W cache drive.

If I want to go back to TOS 6 I can re-enable the “UTOS Boot First” setting in BIOS, boot from the TOS 6 flash drive, and rebuild the disk array. If I want to use the NVMe drive that TrueNAS is on for something else I can try installing TrueNAS on the TOS 6 flash drive, but I’m not convinced that it will fit on a 3.75GB drive. I checked the size of the TrueNAS install and it looks like it might just barely fit.
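If you want to check the install size yourself, the boot pool’s size and usage are visible from the TrueNAS shell. (I’m assuming the default boot pool name here, which on TrueNAS SCALE is normally boot-pool.)

zpool list boot-pool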

Hope you find this useful.

post

Adding a LUKS-encrypted iSCSI volume to TrueNAS and Ubuntu 24.04

I have an Ubuntu 24.04 “Noble Numbat” workstation already set up with LUKS full disk encryption, and I have a Terra Master F4-424 NAS with 32TB raw storage that I installed TrueNAS Scale on. Years ago I set up a LUKS-encrypted iSCSI volume on a Synology NAS and used that to back up my main Ubuntu server, and I wanted to do the same thing using TrueNAS.

Create the iSCSI volume on TrueNAS

Log into the TrueNAS Scale Web UI and select System > Services. Make sure that the iSCSI service is running and set to start automatically.

Select Datasets > Add Dataset to create a new dataset for the iSCSI share.

  • Add Dataset
    • Parent Path: [I used my main data pool]
    • Name: ibackup
    • Dataset Preset: Generic

Select Shares > Block (iSCSI) Shares Targets > Wizard to create a new iSCSI target.

  • Block Device
    • Name: ibackup
    • Extent Type: Device
    • Device: Create New
    • Pool/Dataset: [select the dataset that you created in the previous step]
    • Size: 3 TiB [How many TiB do you want?]
    • Sharing Platform: Modern OS
    • Target: Create New
  • Portal
    • Portal: Create New
    • Discovery Authentication Method: CHAP
    • Discovery Authentication Group: Create New
    • User: CHAP user name (doesn’t need to be a real user, can be any name)
    • Secret: CHAP user password (make sure you write the user name and password down)
    • IP Address: Click Add. [If you only want one specific IP address to be able to connect, enter it. If you don’t care, use 0.0.0.0]
  • Initiator
    • Initiators: [Leave blank to allow all or enter a list of initiator hostnames]
  • Click Save. You’ve now created an iSCSI volume that you can mount from across your network.

Get the iSCSI volume to appear as a block device on Linux

On your Ubuntu box switch over to a root prompt:

sudo su

Install the open-iscsi drivers. (Since I’m already running LUKS on my Ubuntu box I don’t need to install LUKS.)

apt-get install open-iscsi

Edit the conf file:

vi /etc/iscsi/iscsid.conf

Edit these lines:

node.startup = automatic
node.session.auth.username = [CHAP user name on TrueNAS box]
node.session.auth.password = [CHAP password on TrueNAS box]
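Depending on how your copy of iscsid.conf is set up, you may also need to uncomment the CHAP authmethod line, and since the TrueNAS wizard enabled CHAP for discovery you may need the discovery credentials as well. These are standard options in the iscsid.conf shipped with open-iscsi; the values are the same CHAP user name and password you created on the TrueNAS box:

node.session.auth.authmethod = CHAP
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = [CHAP user name on TrueNAS box]
discovery.sendtargets.auth.password = [CHAP password on TrueNAS box]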

Restart the open-iscsi service:

systemctl restart open-iscsi
systemctl status open-iscsi

Start open-iscsi at boot time:

systemctl enable open-iscsi

Now find the name of the iSCSI target on the TrueNAS box:

iscsiadm -m discovery -t st -p $NAS_IP
iscsiadm -m node

The target name should look something like “iqn.2005-10.org.freenas.ctl:ibackup”

Still on the Ubuntu workstation, log into the iSCSI target:

iscsiadm -m node --targetname "$TARGET_NAME" --portal "$NAS_IP:3260" --login
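For example, with placeholder values filled in (these are just illustrative; substitute your NAS’s IP address and the target name that discovery returned):

NAS_IP=192.168.1.50
TARGET_NAME=iqn.2005-10.org.freenas.ctl:ibackup
iscsiadm -m node --targetname "$TARGET_NAME" --portal "$NAS_IP:3260" --login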

Look for new devices:

fdisk -l | less

At this point fdisk should show you a new block device which is the iSCSI disk volume on the TrueNAS box. In my case it was /dev/sda.

Set up the block device as an encrypted file system

Partition the device. I made one big /dev/sda1 partition, type 8e (Linux LVM):

gparted /dev/sda
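gparted is a GUI tool; if you’re working over ssh without a desktop, parted can do the same job non-interactively. This sketch creates a GPT label with a single partition flagged for LVM (rather than an MBR type 8e partition), which works just as well, and it assumes the iSCSI device really is /dev/sda, so verify with lsblk first:

parted --script /dev/sda mklabel gpt mkpart primary 0% 100% set 1 lvm on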

Set up the partition as a LUKS-encrypted volume:

cryptsetup --verbose --verify-passphrase luksFormat /dev/sda1

You’ll be asked to type “YES” to confirm. Typing “y” or “Y” or “yes” will not work. You have to type “YES”.

Open the LUKS volume:

cryptsetup luksOpen /dev/sda1 backupiscsi

Create a physical volume from the LUKS volume:

pvcreate /dev/mapper/backupiscsi

Add that to a new volume group:

vgcreate ibackup /dev/mapper/backupiscsi

Create a logical volume within the volume group using all available space:

lvcreate -l +100%FREE -n backupvol /dev/ibackup

Put a file system on the logical volume:

mkfs.ext4 /dev/ibackup/backupvol
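Create the mount point and check that the new filesystem mounts before wiring it into /etc/fstab. I’m assuming /mnt/backup here, which is what the fstab entry below uses:

mkdir -p /mnt/backup
mount /dev/ibackup/backupvol /mnt/backup
df -h /mnt/backup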

Add the logical volume to /etc/fstab to mount it on startup:

# TrueNAS iSCSI target
/dev/ibackup/backupvol /mnt/backup ext4 defaults,nofail 0 2

Get the UUID of the iSCSI drive:

ls -l /dev/disk/by-uuid | grep sda1
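blkid shows the same UUID if you prefer. Either way, the UUID you want is the one on the LUKS partition (/dev/sda1), not the ext4 filesystem inside it:

blkid /dev/sda1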

Add the UUID to /etc/crypttab to be automatically prompted for the decrypt passphrase when you boot up Ubuntu:

backupiscsi UUID=693568ca-9334-4c19-8b01-881f2247ae0d none luks

That’s pretty much it. The next time you boot you’ll be prompted for the decrypt passphrase before the drive will mount. If you type df -h you should see a new disk mounted on /mnt/backup.

If you found this interesting, you might want to check out my article Adding an external encrypted drive with LVM to Ubuntu Linux.

Hope you found this useful.

post

Copy entire file directories from a Linux host to Box

I had about 2TB of files on a cloud-based Linux host that I needed to back up to cloud storage. I had a Box Enterprise storage account with a 30PB limit on storage and a maximum file size of 150GB, so I decided to try to connect from Linux to Box and store all of the backup data in Box. You can check your own limits at the bottom of the Box “Account Settings” page.

The most difficult part of getting Box to work on a headless, cloud-based Linux host is getting authorization to work. Box wants to use OAuth2 web-based authentication, and I need to set up Box access on a remote host where I’m connecting via ssh and there is no web browser or desktop. The easiest way that I’ve found to do this is to generate an OAuth2 bearer token on my laptop (rclone hands it back as a small JSON blob) and then copy that to the Linux host.

I used rclone for the backup. I first installed rclone on the Ubuntu Linux host:

sudo apt-get install rclone

Then I installed rclone on my Mac laptop:

brew install rclone

I pulled up a terminal on the Mac and configured rclone so that my Box account was authorized:

rclone authorize box

This will cause a browser window to pop up and ask you to log into Box. Once you’ve logged in and authorized rclone to read and write files in your Box drive the command will finish up and spit out a bearer token:

$ rclone authorize box
2024/09/12 08:57:15 NOTICE: Config file "/Users/eruby/.config/rclone/rclone.conf" not found - using defaults
2024/09/12 08:57:15 NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=qqf7swGNZ8pH4iJksvR3xA
2024/09/12 08:57:15 NOTICE: Log in and authorize rclone for access
2024/09/12 08:57:15 NOTICE: Waiting for code…
2024/09/12 08:57:45 NOTICE: Got code
Paste the following into your remote machine --->
{"access_token":"nOtaReaLacCessT0k3NsuCkas","token_type":"bearer","refresh_token":"nOtaReaLb3aR3rT0k3NKxRrg2J1rB7DKzKg6svazAlwAwHWKl","expiry":"2024-09-12T10:02:31.314087-07:00"}
<---End paste

The bearer token contains an access token, a refresh token, and an expiration date. The access token is good for an hour. After that it expires and the application (rclone) will use the refresh token to get a new access token. This can keep happening until the user that generated the token (you) is no longer allowed to access Box or your password changes. If you change your password you’ll need to generate a new bearer token. The refresh token may also expire before your password changes, depending on the security policy of the organization issuing the refresh token, so at some point you may need to regenerate the bearer token even if you don’t change your password.

Now you just have to paste the bearer token into the Linux host’s rclone config, so log into the Linux host and run rclone config. Here’s the entire interaction with the config command:

$ rclone config
2024/09/12 18:19:19 NOTICE: Config file "/home/eruby/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> Box
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
   \ "s3"
 5 / Backblaze B2
   \ "b2"
 6 / Box
   \ "box"
 7 / Cache a remote
   \ "cache"
 8 / Citrix Sharefile
   \ "sharefile"
 9 / Dropbox
   \ "dropbox"
10 / Encrypt/Decrypt a remote
   \ "crypt"
11 / FTP Connection
   \ "ftp"
12 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
13 / Google Drive
   \ "drive"
14 / Google Photos
   \ "google photos"
15 / Hubic
   \ "hubic"
16 / In memory object storage system.
   \ "memory"
17 / Jottacloud
   \ "jottacloud"
18 / Koofr
   \ "koofr"
19 / Local Disk
   \ "local"
20 / Mail.ru Cloud
   \ "mailru"
21 / Microsoft Azure Blob Storage
   \ "azureblob"
22 / Microsoft OneDrive
   \ "onedrive"
23 / OpenDrive
   \ "opendrive"
24 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
25 / Pcloud
   \ "pcloud"
26 / Put.io
   \ "putio"
27 / SSH/SFTP Connection
   \ "sftp"
28 / Sugarsync
   \ "sugarsync"
29 / Transparently chunk/split large files
   \ "chunker"
30 / Union merges the contents of several upstream fs
   \ "union"
31 / Webdav
   \ "webdav"
32 / Yandex Disk
   \ "yandex"
33 / http Connection
   \ "http"
34 / premiumize.me
   \ "premiumizeme"
35 / seafile
   \ "seafile"
Storage> 6
** See help for box backend at: https://rclone.org/box/ **

OAuth Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
OAuth Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Box App config.json location
Leave blank normally.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

Enter a string value. Press Enter for the default ("").
box_config_file>
Box App Primary Access Token
Leave blank normally.
Enter a string value. Press Enter for the default ("").
access_token>

Enter a string value. Press Enter for the default ("user").
Choose a number from below, or type in your own value
 1 / Rclone should act on behalf of a user
   \ "user"
 2 / Rclone should act on behalf of a service account
   \ "enterprise"
box_sub_type>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> n
For this to work, you will need rclone available on a machine that has
a web browser available.

For more help and alternate methods see: https://rclone.org/remote_setup/

Execute the following on the machine with the web browser (same rclone
version recommended):

	rclone authorize "box"

Then paste the result below:
result> {"access_token":"nOtaReaLacCessT0k3NsuCkas","token_type":"bearer","refresh_token":"nOtaReaLb3aR3rT0k3NKxRrg2J1rB7DKzKg6svazAlwAwHWKl","expiry":"2024-09-12T10:02:31.314087-07:00"}
--------------------
[Box]
token = {"access_token":"nOtaReaLacCessT0k3NsuCkas","token_type":"bearer","refresh_token":"nOtaReaLb3aR3rT0k3NKxRrg2J1rB7DKzKg6svazAlwAwHWKl","expiry":"2024-09-12T10:02:31.314087-07:00"}
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
Box                  box

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

To explain:

  • I named the remote connection “Box”.
  • I made the type of remote connection “box” (choice 6 from the list of supported remote storage types).
  • I didn’t edit any options, just answered “n” when asked if I wanted to make edits.
  • When I got to the Use auto config? question I answered “n”.
  • I copied the entire bearer token I got from my Mac into the result> field on the Linux box. (This is the entire JSON blob, including the curly braces “{ … }”, that rclone tells you to “Paste the following into your remote machine”.)

At this point rclone should be configured; now I just have to copy the data into Box. All of the data is in the /testdata directory, so I ran:

rclone copy --copy-links \
    --create-empty-src-dirs \
    /testdata Box:backup-testdata \
    > ~/box-backup-testdata.log 2>&1 &

  • There are symlinks in the directory, so --copy-links is needed.
  • If there are empty directories on the source, I want to copy them over, so
    --create-empty-src-dirs is needed.
  • /testdata is the root directory that I’m copying over.
  • “Box” is the name I used for the remote connection.
  • backup-testdata is going to be the location of the data in my Box account.
  • Since this is going to take many hours to copy, I redirected all output to a log file and ran the process in the background. I can come back and check the log file later to see if everything worked.
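To keep an eye on the copy while it runs you can tail the log file, and once it finishes rclone can compare the source and destination and report any differences:

tail -f ~/box-backup-testdata.log
rclone check /testdata Box:backup-testdata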

Just to make sure everything is working I logged into Box with my laptop and I can see the new backup-testdata directory and it’s being populated with data, so everything appears to be working.

Hope you find this useful.

Thank you Adam Ru for the suggestion to try rclone.

post

Setting up NFS FSID for multiple networks

The official documentation for creating an NFS /etc/exports file for multiple networks and FSIDs is unclear and confusing. Here’s what you need to know.

If you need to specify multiple subnets that are allowed to mount a volume, you can either use separate lines in /etc/exports, like so:

/opt/dir1 192.168.1.0/24(rw,sync)
/opt/dir1 10.10.0.0/22(rw,sync)

Or you can list each subnet on a single line, repeating all of the mount options for each subnet, like so:

/opt/dir1 192.168.1.0/24(rw,sync) 10.10.0.0/22(rw,sync)

These are both equivalent. They will allow clients in the 192.168.1.0/24 and 10.10.0.0/22 subnets to mount the /opt/dir1 directory via NFS. A client in a different subnet will not be able to mount the filesystem.
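From a client in one of the allowed subnets, mounting the export looks like any other NFS mount (the server address 192.168.1.10 below is just an example):

sudo mkdir -p /mnt/dir1
sudo mount -t nfs 192.168.1.10:/opt/dir1 /mnt/dir1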

When I’m setting up NFS servers I like to assign each exported volume a unique FSID. If you don’t use FSID, there is a chance that when you reboot your NFS server the way that the server identifies the volume will change, and your NFS clients will hang with “stale file handle” errors. I say “a chance” because it depends on how your server stores volumes, what version of NFS it’s running, and, if it’s a fault-tolerant / high-availability server, how that HA capability was implemented. Using a unique FSID ensures that the volume that the server presents is always identified the same way, and it makes it easier for NFS clients to reconnect and resume operations after an NFS server gets rebooted.

If you are using FSID to define a unique filesystem ID for each mount point, you must include the same FSID in every export of a given volume. It’s the “file system identifier”, so it needs to be the same each time a single filesystem is exported. If I want to identify /opt/dir1 as fsid=1 I have to include that declaration in the options every time that filesystem is exported. So for the examples above:

/opt/dir1 192.168.1.0/24(rw,sync,fsid=1)
/opt/dir1 10.10.0.0/22(rw,sync,fsid=1)

Or:

/opt/dir1 192.168.1.0/24(rw,sync,fsid=1) 10.10.0.0/22(rw,sync,fsid=1)
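After editing /etc/exports, tell the NFS server to re-read it. exportfs -v also gives you a quick way to confirm that every export of a volume ended up with the same fsid:

exportfs -ra
exportfs -v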

If you use a different FSID for one of these entries, or if you declare the FSID for one subnet and not the other, your NFS server will slowly and mysteriously grind to a halt, sometimes over hours and sometimes over days.

For NFSv4, there is the concept of a “distinguished filesystem” which is the root of all exported filesystems. This is specified with fsid=root or fsid=0, which both mean exactly the same thing. Unless you have a good reason to create a distinguished filesystem, don’t use fsid=0; it will just add unnecessary complexity to your NFS setup.

Hope you find this useful.

post

Creating AWS Elastic Filesystems (EFS) with Terraform

The AWS Elastic Filesystem (EFS) gives you an NFSv4-mountable file system with almost unlimited storage capacity. The filesystem I just created to write this article reports 9,007,199,254,739,968 bytes free. In human-readable format df -kh reports 8.0E (Exabytes) of available disk space. In the year 2019, that’s a lot of storage space.

In past articles I’ve shown how to create EFS resources manually, but this week I wanted to programmatically create EFS resources with Terraform so that I could easily create, test, and tear-down EFS and VM resources on AWS.

I also wanted to make sure that my EFS resources are secure, that only VMs within my Virtual Private Cloud (VPC) could access the EFS data, so that no one outside of my VPC could mount or otherwise access the data.

Creating an EFS resource is easy. The Terraform code looks like this:

// efs.tf
resource "aws_efs_file_system" "efs-example" {
  creation_token   = "efs-example"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
  encrypted        = "true"

  tags = {
    Name = "EfsExample"
  }
}

This creates the EFS filesystem on AWS. EFS also requires a mount target, which gives your VMs a way to mount the EFS volume using NFS. The Terraform code to create a mount target looks like this:

// efs.tf (continued)
resource "aws_efs_mount_target" "efs-mt-example" {
  file_system_id  = "${aws_efs_file_system.efs-example.id}"
  subnet_id       = "${aws_subnet.subnet-efs.id}"
  security_groups = ["${aws_security_group.ingress-efs-test.id}"]
}

The file_system_id is automatically set to the efs-example resource’s ID, which ties the mount target to the EFS file system.

The subnet_id refers to subnet-efs, a separate /24 subnet I carved out of my VPC just for EFS. The ingress-efs-test security group is a separate security group I created for EFS. Let’s cover each one of these separately.

A separate EFS subnet

First off I’ve allocated a /16 subnet for my VPC and I carve out individual /24 subnets from that VPC for each cluster of VMs and/or EFS resources that I add to an AWS availability zone. Here’s how I’ve defined my test environment VPC and EFS subnet:

//network.tf
resource "aws_vpc" "test-env" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "test-env"
  }
}

resource "aws_subnet" "subnet-efs" {
  cidr_block        = "${cidrsubnet(aws_vpc.test-env.cidr_block, 8, 8)}"
  vpc_id            = "${aws_vpc.test-env.id}"
  availability_zone = "us-east-1a"
}

That will give me the subnet 10.0.8.0/24 for my EFS subnet.

If you want to understand how to use Terraform’s cidrsubnet command to carve out separate subnets, see the article Terraform `cidrsubnet` Deconstructed by Lisa Hagemann. Her article gives excellent examples on how to do just that.
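If you want to check what a cidrsubnet() expression evaluates to before applying anything, terraform console is handy. For the values used above it returns the 10.0.8.0/24 subnet:

$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 8)
10.0.8.0/24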

The EFS security group

Finally, I need a security group that only allows traffic between my test environment VMs and my test environment EFS volume. I already have a security group called ingress-test-env that is used to control security for my VMs. For EFS I create another security group that allows inbound traffic on port 2049 (the NFSv4 port) and allows egress traffic on any port.

By setting the ingress-efs-test resource’s security_groups attribute to ingress-test-env, only network traffic to and from VMs in the ingress-test-env security group is allowed to reach the EFS volume. If you use security_groups like this, you really lock down the EFS volume and you don’t need to set the cidr_blocks attribute at all.

// security.tf
resource "aws_security_group" "ingress-efs-test" {
name = "ingress-efs-test-sg"
vpc_id = "${aws_vpc.test-env.id}"

// NFS
ingress {
security_groups = ["${aws_security_group.ingress-test-env.id}"]
from_port = 2049
to_port = 2049
protocol = "tcp"
}

// Terraform removes the default rule
egress {
security_groups = ["${aws_security_group.ingress-test-env.id}"]
from_port = 0
to_port = 0
protocol = "-1"
}
}
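With efs.tf, network.tf, and security.tf in place, standing everything up is just the normal Terraform workflow, shown here for reference (run from the directory containing the .tf files):

terraform init
terraform plan
terraform apply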

After adding these Terraform files to my cluster configuration and running terraform apply, I end up with a new EFS filesystem that I can mount from any VM running in my VPC.

# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-31337er3.efs.us-east-1.amazonaws.com:/ /mnt/efs
# df -kh
Filesystem                                 Size  Used Avail Use% Mounted on
udev                                       481M     0  481M   0% /dev
tmpfs                                       99M  744K   98M   1% /run
/dev/xvda1                                 7.7G  3.0G  4.7G  40% /
tmpfs                                      492M     0  492M   0% /dev/shm
tmpfs                                      5.0M     0  5.0M   0% /run/lock
tmpfs                                      492M     0  492M   0% /sys/fs/cgroup
/dev/loop0                                  13M   13M     0 100% /snap/amazon-ssm-agent/150
/dev/loop1                                  87M   87M     0 100% /snap/core/4650
/dev/loop2                                  90M   90M     0 100% /snap/core/6130
/dev/loop3                                  18M   18M     0 100% /snap/amazon-ssm-agent/930
tmpfs                                       99M     0   99M   0% /run/user/1000
fs-31337er3.efs.us-east-1.amazonaws.com:/  8.0E     0  8.0E   0% /mnt/efs

Hope you found this useful.