post

Garden of D’Lights

In December 2018 I was asked if I would like to put together a light show to light up the plants at The Ruth Bancroft Garden in Walnut Creek, California.

If you like weird and unusual plants and you’re not familiar with The Ruth Bancroft Garden, you should come see it. Tripadvisor has named The Ruth Bancroft Garden “the most beautiful garden in the world,” and The Garden Conservancy says that “The Ruth Bancroft Garden is recognized as one of America’s finest examples of a dry garden. It features a variety of rare and extraordinary succulents and cacti and has a year-round presence, coming in and out of bloom and coloration as if the plants were absorbed in a fascinating conversation with each other.” It’s a world-famous garden which attracts thousands of visitors every year.

I spent the next year researching LED and laser lighting to create some unique lighting installations for the garden. Some of the installations I created used NeoPixel LED strips and many people have asked me how I created these lights, so I am documenting these installations here, including the Arduino C++ source code used to make the NeoPixels work.

GoDL 2023 – Valley Oak with color-changing Japanese Lanterns

GoDL 2023 – LED lighting on plants behind the pond

GoDL 2023 – LED lighting on agaves

GoDL 2020 – Koi Pond

NeoPixels

I started researching NeoPixels by reading and re-reading the Adafruit NeoPixel Überguide. If you’re planning on building anything I recommend that you do the same. The Überguide is your best place to get started. I will not be repeating information here that’s already covered in The Überguide, so if I say something here that doesn’t quite make sense, go back and read The Überguide again.

I built 3 different NeoPixel installations:

  • Dancing Oak – The garden has a 200+ year old Valley Oak tree with a trunk that has a 4m (13 foot) circumference. My plan was to run (12) separate 5m strips of NeoPixels up the trunk spaced 0.3m (1 foot) apart and held in place with black bungee cords. The NeoPixels would travel up the trunk into the branches until we got to the end of a strip. The lights would pulse up from the earth and make the tree appear to spin and “dance”. For the 2023 show I moved the lights from the Oak tree to a hillside at the back of the garden where people say it reminds them of a lava flow or large glowing worms.
  • Light Stream – There is a dry stream bed in the garden and I wanted to make it appear as if there was water flowing through and fish swimming in it. I took (4) 10m NeoPixel strips to make the stream. The lights were programmed to pulse in different shades of blue to represent waves of water, while salmon-colored lights swam “upstream”.
  • Sky Circle Triangle – I originally envisioned a 40m circumference circle of lights suspended 4.5m-6m (15-20 feet) in the air, but making a rigid circle wasn’t practical for the site, so I ended up using steel cable mounted to three trees in the garden’s Eucalyptus Grove and created a “Sky Triangle” instead. The effect still works, and mesmerizes people passing through the grove every night of the event. Many guests refer to the effect as “fireflies” or “glowing orbs”: six lights chase each other around the sides of the triangle, sometimes passing through one another and sometimes bouncing off each other.

For all three installations I used:

  • An Arduino “Mega” microcontroller. (I originally used Arduino “Metro” boards, then found out that they didn’t have enough RAM for the number of LEDs I was using, so I replaced them with “Mega” boards which come with more RAM.)
  • One or two 5VDC @ 20A “brick” power supplies.
  • A 1000uF capacitor connected across the DC power supply to protect the Arduino and NeoPixels from power surges.
  • A 470 Ohm resistor between the Arduino data pins and the NeoPixel signal line for each NeoPixel strip connected to the Arduino.
  • A waterproof case.
  • Cable gland waterproof cable connectors on each case.
  • 3-wire twist-to-connect waterproof cable connectors, one end soldered to the NeoPixel strip, one end going through the case to connect to the Arduino and power supply connectors inside the case.
  • Terminal screw blocks for connecting cables to power.
  • Heat shrink tubing to seal all of the wired solder joints.
  • Larger, clear heat shrink tubing to seal and strengthen the connections between soldered-together NeoPixel strips.

An Arduino is a low-power microcontroller that can be programmed to do simple tasks, such as sending a signal to a NeoPixel strip that tells Pixel #117 to glow purple. An Arduino will send whatever signals you tell it to send, over and over and over again, for as long as it has power. To program an Arduino you use a USB cable connected between your laptop and the Arduino. You write the code on your laptop, send it to the Arduino, unplug your laptop, and power up the Arduino.

Each NeoPixel strip is controlled by 3 wires: black (GND), red (+5VDC), and yellow (signaling, sometimes labeled DIN). The 5VDC @ 20A “brick” power supplies I use have a barrel connector on the end, but if you cut that off you’ll find 2 wires inside the cable: black (GND) and red (+5VDC). If your project requires more power you can get a supply that provides more amps or connect two 5VDC 20A supplies in parallel (red to red, black to black) to provide 5VDC up to 40A.

For what it’s worth, a single 5VDC@20A supply running at maximum capacity is drawing ~1A from the 110VAC wall socket, so it’s not using a lot of power, but it can still kill you. Be careful when you’re working around live electrical sources.

To get a strip to work you have to connect the NeoPixel’s DC+ to the power supply’s DC+ (red to red), the NeoPixel’s GND to the power supply’s GND and the Arduino’s GND (black to black), and the NeoPixel’s signalling wire through a 470 Ohm resistor to one of the digital signalling pins on the Arduino.

You also need to provide power to the Arduino itself. When programming the Arduino, the Arduino gets all of the power it needs from the USB cable. When you’re done programming it still needs to get power from somewhere. I usually take an old USB cable, cut it in half, and connect the USB connector’s red and black power wires directly to the power supply’s red and black power wires. When I’m done programming the Arduino I just replace the USB cable that connects my laptop to the Arduino with the cable from the power supply to the Arduino. (Check the control box pictures below for examples.)

Refer to the Adafruit NeoPixel Überguide Basic Connections page for more wiring details.

Testing NeoPixels

To test a strip I attach the NeoPixel signalling wire to pin 0 on the Arduino and run the neopixel-test-single-strand.ino code. This code is set up for (1) 150 LED NeoPixel strip (a 5m strip with 30 LEDs/m). If that doesn’t match what you have just modify the constants at the beginning of the code block to match what you’re using. If it works you should see colored light pulses down the entire strip, as shown in the video below:

Neopixel Test (2019-05-18)

I also wrote some test code to light the first and last pixels on a strip (handy when you’re soldering multiple strips together) and some code to test multiple strips at once.

Dancing Oak AKA “Lava Worms”

The original code used in 2019 for the Dancing Oak rotated multiple colors around and twisting up the oak tree. However, because the lights were not well aligned left to right, the effect was diminished. I thought of programming a per-strand LED offset, so that a strand that was 5 LEDs “lower” than the one next to it would be offset by 5 LEDs and the lights would line up left to right, then reprogramming the controller after installation with the “as installed” offsets. But due to the uneven nature of the oak’s trunk, an offset that worked at the base might start to look “off” 1 or 2m up the tree, so I abandoned that idea.

Instead I changed the entire program in 2020 and made the lights go from the base of the tree pulsing up, abandoning the counter-clockwise spiral.

In 2023 I installed color-changing Japanese Lanterns in the oak tree and moved the NeoPixel installation to a hillside at the back of the garden, where most guests decided that it reminded them of lava or worms.

Dancing Oak (2020)

Lava Worms (2024)

Dancing Oak control box (2019)

Dancing Oak and Sky Triangle control boxes (2019)

Light Stream

For the Light Stream I took (8) 5m NeoPixel strips and soldered them to make (4) 10m strips. Then I attached them to an Arduino Mega programmed to pulse in different shades of blue to represent waves of water, while salmon-colored lights swam “upstream”. I installed the strips in a winding dry stream bed.

Light Stream (2021)

Light Stream control box (2019-10-19, in progress)

Sky Circle Triangle AKA “Fireflies”

For the Sky Triangle I originally took (8) 5m NeoPixel strips and connected them all together to make one long 40m strip. However, I found out that the signal strength of an Arduino Mega drops off after about 25m, so I couldn’t get a consistent, stable signal to the LEDs at the end of the strip.

Since it was a triangle I solved the problem by splitting the strip into (2) 20m sections and connecting both to the Arduino, so the 2 signalling pins each controlled half the triangle. On the far side of the triangle I physically attached the two strips together using heat shrink tubing, but didn’t make an electrical connection. Then I just had to write the Sky Triangle Arduino software so that one pin controlled the first half of the triangle and one pin controlled the other half.

Sky Triangle (2020)

Sky Circle Triangle control box (2019-11-08)

Need Help?

If you need help try asking in the “LEDs ARE AWESOME” Facebook Group. You can usually find me (and lots of other helpful people) there.

Code and License

All of the code for these projects can be found on Github. All code is licensed under the GPLv3.

Hope you find this useful.


Making JIRA Suck Less

Why JIRA Sucks

JIRA is almost universally reviled by every engineer that I know. Most of them can’t quite explain why it sucks, they just hate it.

In my view the problem isn’t JIRA, it’s how managers implement JIRA’s features that causes so much suckage. Here’s a short list of the problems that I see every time I start at a new company:

  • JIRA’s default settings suck. Most people start with the default settings and try to build on top of them, extending them as they go along. If you build on a foundation that sucks, whatever you build is also going to suck. Don’t use the defaults.
  • Managers try to implement overly-complex workflows. I’ve been successful using five status states. You might need four or six. You don’t need 17. Use the minimum number of status values required to express the states you need to track.
  • Managers try to implement workflows that require changing a ticket’s owner when the status changes. e.g. Bob finished the coding, now we need to assign it to Carol and Ted for code review, then Alice needs to write the test plan and QA the code, and finally Vladimir needs to sign off on QA and generate a release build before the ticket is complete. Later the VP of Engineering gives Vladimir a huge bonus because he’s the only one completing any tickets, and the fact that Bob & Carol & Ted & Alice worked hard on that ticket has been lost unless you manually check the ticket’s history. There’s no way to get a list of the tickets that Bob worked on, or Carol, or Ted, or Alice. A single ticket should be assigned to a single person, worked to completion, and closed.
  • Managers add values for priorities, resolutions, status, and other fields without documenting how they’re supposed to be used or training their staff how they are supposed to use them. Use the absolute minimum number that you need, make sure they’re self-explanatory, then still train your staff on how to use them.
  • Managers limit which status states can transition to other status states, frustrating end-users. Allow every status to transition to any other status.
  • Managers use generic names for priorities, resolutions, and status fields that are meaning-free, or use multiple names that have almost identical meanings. Do I “close” a ticket or “resolve” it? Which priority is higher, “urgent” or “critical”? Use the minimum number of values that you can, and make the choices self-explanatory.
  • No one cleans up their shit. If a manager adds a new field for a poorly-thought out project to track something or other on JIRA tickets, and then abandons that effort after a month, you can bet that engineers will still be prompted to enter a value for that field 4 years later. Resist the temptation to add more fields to JIRA, clean up after yourself when you give in to that temptation, and don’t be afraid to delete data.

Making JIRA suck less

Many years ago I was working for a startup that got bought by a large disk drive manufacturer. I was doing R&D work on large-scale object storage and as we were launching a new project I was told we needed to use JIRA to manage our task workloads. I wanted to dive in and start documenting all of the tasks that we needed to complete to get the project started, but one of the older engineers stopped me. He said we needed to meet first to discuss the configuration of JIRA for this project.

I was very reluctant to do this; my boss was asking me to get the tasks entered so we could start planning schedules and assigning work. But this guy was more experienced with JIRA, so I met with him first. Afterwards I was glad I waited. I’ve applied what I learned at every company since then and made JIRA suck less at all of them. I documented what I did, and many managers I worked for have reached out to me after we parted ways asking for a copy of my “JIRA document” so they could apply it at their new jobs. This is that document.

JIRA Workflow

Goals for JIRA

These are the things we want to accomplish using JIRA.

  • Track all of the tasks engineers are working on.
  • Be able to report on what tasks are necessary to fix a bug, finish a release, complete a feature, complete a project, or are related to a specific portion of the software stack.
  • Make JIRA usage frictionless by having a few very concise, clear values for ticket fields so that an engineer never has to wonder what value a JIRA field should have.
  • Have a clear set of goals for each Sprint.
  • Get better at estimating how much work we can get done within a period of time.
  • Make sure that bugs are being fixed in a reasonable period of time.
  • Use JIRA’s automated reports and dashboards to communicate back to PMs, Sales, Execs, and Engineers how much progress has been made toward delivering the features they are specifically interested in.
  • Use JIRA’s automated dashboards to forecast how close we are to completing major deliverables.
  • Make sure that the tradeoffs that need to be made when goals are changed are clear to PMs, Sales, Execs, and Engineers.

Ticket Scope

A ticket should describe a single task that can be done by a single person within a 2 week period. A ticket that requires one person for longer than that should be broken into separate tickets.

If a ticket requires QA, documentation, or another related but independent task, create another ticket for that task and link the two tickets. Tickets can be linked across projects.

Do not create sub-task tickets under a ticket, just create more tickets. Sub-tasks have limitations on tracking and reporting, and cannot be part of a different project. Don’t use them.

If a project is truly huge with many moving parts, create an Epic and put the tickets in the Epic.

JIRA Fields

Ticket Status

There are five ticket statuses:

  • Backlog
    • All tickets start out as Backlog.
    • New tickets are not assigned to anyone and are not scheduled.
    • To schedule a ticket, assign it to someone and change its status to Selected for Development.
  • Selected for Development – Work has been defined, assigned, and scheduled but not started.
  • In Progress – The ticket is actively being worked on by the person it’s assigned to.
  • In Review – Assignee has completed all tasks, is waiting for reviewers to complete their reviews, or is waiting for blocking items to be completed, e.g. QA tasks, other engineering work, requestor sign-off. Blocking items should have their own tickets and be linked to the tickets that they block.
  • Closed – Ticket has been completed. A “Status: Closed” ticket has a small set of possible resolution values:
    • Done
    • Duplicate
    • Rejected (Won’t Do)
    • Cannot Reproduce

In order to reduce the amount of task-switching and improve focus each engineer should have no more than 3 – 5 tickets actively being worked on, “Selected for Development” and “In Progress,” combined.

Using this method and a 4-column Kanban board based on Ticket Status (omitting Backlog), every engineer and manager can see at a glance what needs to be done this week (Selected for Development), what’s in progress, what’s being reviewed, and what has been done.

Transitions should be defined from every state to every state. If someone wants to drag a ticket from the “Selected for Development” column and drop it in the “Closed” column they should be able to do that.

Transitions to Closed should prompt the user to fill in the resolution value.

Assignee

The person who will be doing the work to complete the ticket. Usually this is set when the ticket is Selected for Development and doesn’t change.

Reporter

The name of the person reporting the problem or requesting the feature.

Components

Each ticket has one or more Components. Components are a limited set of fixed categories that define which group has the primary responsibility for the ticket. Components are usually named after a product or service being developed. End-users should not be able to create new components.

A ticket with no assigned owner may be automatically assigned to the lead person responsible for that component.

Components are mostly used for reporting, to see how much backlog remains to be done for a given software product or service. By keeping the number of components limited to a small set of categories they become useful for reporting, running queries, or building dashboards.

Labels

Labels are “free form” and can be used to tag a ticket so that it’s included in specific reports.

Now if a manager has something that they want to report on across multiple tickets, rather than adding another field they can just add a label and generate reports and make queries based on that label. When they lose interest a month later they can stop using that label, without forcing engineers to fill in extra, unnecessary fields for years to come.

Customer

Name of the customer (if any) who requested this task or reported this bug. This is so you can follow up with the customer afterwards to let them know the issue was fixed, or know who to ask if there isn’t enough information given to resolve the task.

Issue Types

  • Task – Something that needs to be done.
  • Bug – Something that needs to be fixed.
  • Epic – A collection of tasks and/or bugs needed to complete a feature.

Remove any other issue types included as JIRA defaults.

Need a “Story”? A Story is just a loosely-defined Epic that’s in a Backlog state. You don’t need a “Story” type.

Need a “Feature”? A feature is either a Task or an Epic with multiple Tasks in it. You don’t need a separate “Feature” issue type.

Task, Bug, Epic. That’s it. Keep choices to a minimum so end users don’t have to think about what to use.

Affects Version / Fixed Version

  • Affects Version – For a bug, the version or versions affected by the bug.
  • Fixed Version – The version (first release) or versions (first release, edge release, maintenance release) where the fixed bug or completed task appears or is scheduled to appear.

Versions are used when it comes time to issue a release. You can easily see what work needs to be done to complete all of the tasks in a release. You can generate a change log of all of the changes that are in a release.

Priorities

By default JIRA assigns a default priority to tickets. After a while you have to wonder: is this ticket really a “High” priority, or is it “High” because no one changed the default when the ticket was created? To avoid this, make your tickets start with priority “None”. Now it’s clear that no priority was assigned.

At my company, if the filer doesn’t set the ticket priority, the product manager, engineering manager or team lead would set the ticket’s priority. If there is disagreement they can have a discussion to determine the correct priority, and the managers make the final decision. If the managers cannot reach agreement the VP of engineering breaks the tie. I don’t think it’s ever gotten to the VP.

Valid priorities are:

  • P0/Blocker – Drop everything else and fix this
  • P1/Critical – Important and urgent
  • P2/High – High priority. Important or urgent, not both
  • P3/Low – Low priority. Not important or urgent, but should get done as time permits
  • None – Priority has not been determined (default)

However you assign priorities to tickets at your company, define the process and let people know what it is.

Description

In the description of every ticket the filer has to include a “Definition of Done” (DoD) — a statement of what the system behavior should look like once the problem is fixed. This is very important since the person filing the ticket often has a very different expectation of what “done” looks like than the person completing the ticket. This mismatch can occur in both directions — sometimes the person doing the work does far less than what was needed and sometimes they do far more, turning a 2 hour task into a two week project.

If a ticket is assigned to someone with no DoD the assignee should ask the reporter to add a DoD.

Due Date

If we promised a customer or anyone that a task would be completed by a specific date, then fill in the due date. Otherwise leave it blank.

Additional Fields

You may need some additional fields for your workflow. Some people like to track story points or effort required per ticket. If you need them, add them, just try to keep the number of fields that someone needs to fill in to the absolute minimum.

If you later find you’re not using a field, delete it.

Retrospective

Meet every two weeks to discuss completed tasks, uncompleted tasks, what went well, and what could have been done better over the past two weeks.

Stand Ups

When you do a stand-up and have the Kanban board for the group displayed on a large monitor, then the stand-up only needs to cover two questions:

  • Is the board correct?
  • Do you have any blockers?

Engineers appreciate brief standups. Make sure you’re tracking the right things, make sure that everyone has what they need to get things done that day. Standup completed.

Git

If a branch is created for a JIRA ticket put the branch name in the ticket. If an MR has an associated JIRA ticket you should be able to find the JIRA ticket from the MR or the MR from the JIRA ticket.

Both Github and Gitlab have JIRA plugins that can post updates to JIRA tickets with a link to the MR, on test pass/fail status, merge status, reviewer comments, etc. They can even automatically close tickets when an MR merges. Use these plugins to automate workflows and reduce time spent by engineers managing their JIRA tickets.

Summary

  • Keep the number of JIRA fields to fill in to a minimum.
  • Keep the workflow simple.
  • Don’t make people think about what they need to do — make it obvious.
  • Document your workflow.
  • Make sure that end users know how to use the system.
  • Automate required reports and dashboards.

I have applied these rules at three startups and several pretty large companies since I first wrote them down. Hopefully you can use some of these lessons at your company, because you may be required to use JIRA, but it doesn’t have to suck.


Adding a volume for docker images to Tanzu Kubernetes

If you deploy a Tanzu Kubernetes cluster using a typical YAML file with no volumes defined you’ll end up with a fairly small worker node that can quickly fill up all available disk space with container images. Each time a container is deployed on a node Kubernetes makes a local copy of the container image file. Each image file can be 5GB or more. It doesn’t take long to fill up a worker node’s hard disk with images. If you just have one big root partition then filling up the hard disk will cause Kubernetes to crash.

To create a Kubernetes cluster you create a YAML file and apply it with kubectl. The following YAML file builds a cluster based on the ubuntu-2204-amd64-v1.31.1---vmware.2-fips-vkr.2 TKR image, which is based on Ubuntu 22.04 and contains Kubernetes 1.31.1.

apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: my-tanzu-kubernetes-cluster-name
  namespace: my-tanzu-kubernetes-cluster-namespace
  annotations:
    run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-small
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.31.1---vmware.2-fips-vkr.2
    nodePools:
    - name: worker
      replicas: 3
      vmClass: guaranteed-8xlarge
      storageClass: vsan-default-storage-policy
      volumes:
        - name: containerd
          mountPath: /var/lib/containerd
          capacity:
            storage: 160Gi
      tkr:
        reference:
          name: v1.31.1---vmware.2-fips-vkr.2

In order to allocate a separate, larger volume for storing docker images on the worker nodes I added a volumes section. I have a storage class defined named vsan-default-storage-policy and the volumes section will allocate a 160GiB volume using the disk specified by vsan-default-storage-policy and mount it on the worker node using the path /var/lib/containerd, which is where container images are stored. Change vsan-default-storage-policy to the name of a storage policy defined for your tanzu-kubernetes-cluster-namespace if you want this to work on your system.

Now if I fill up the volume with images I won’t be able to add more images, but my Kubernetes cluster will keep running.

Hope you find this useful.


Installing TrueNAS Scale on a Terra Master NAS

My 10+ year old Synology NAS failed recently. I tried replacing the power supply and replacing the CMOS battery but it would not come back to life, so I started searching around for a replacement.

I originally intended to buy another Synology NAS, but when I looked over the various models I came to the conclusion that Synology was not keeping up with the competition in terms of hardware. I could get a NAS from a competitor with faster networking, a faster CPU, and more RAM for about half the price. Also, many of the apps that I’d originally used on the Synology NAS were no longer being supported by Synology, and most of the new apps they’d added were Synology-branded apps with unknown provenance and unknown support status, not open-source apps that I could Google a fix for if I had a problem.

After looking at a few brands I picked a low-end 4-drive Terra Master, the F4-424. The F4-424 comes in 3 flavors: the F4-424, F4-424 Pro, and F4-424 Max. The 3 share the same chassis, with the main differences between the models being the RAM, CPU, GPU, and NICs. I’m mostly using the NAS as backup storage for all of my home computers, plus test storage (Minio, IPFS) for various Kubernetes test clusters, so I didn’t need lots of RAM, a fast CPU, or 10GbE. With the low-end F4-424 I was still getting a 4-core Intel CPU with 8GB DDR5 RAM and 2x 2.5GbE NICs, which is more than adequate for my needs.

I’d read that the Terra Master OS (TOS) wasn’t great, but what sold me on the Terra Master is that it’s basically a low-power Intel PC, so if you want to install some other OS on it, like TrueNAS Scale, openmediavault, or UnRAID, you can. I also read that Terra Master had just released TOS 6, and that it was a huge improvement over previous releases.

I set up the Terra Master with 4x Western Digital 8TB “Red” NAS drives and 2x Western Digital 1TB “Black” NVMe drives. I used TOS 6 for a week, and I thought it was fine. It was actually fairly simple to set up and run, supported using the NVMe drives for R/W cache, supported Time Machine backups, iSCSI, SMB, and NFS. I had no issues with it.

But I wanted to try out TrueNAS Scale so I downloaded a copy of the ISO and burned it to a USB flash drive. The Terra Master has a single USB-A port on the back, so I connected a USB hub and plugged in the flash drive, a keyboard, and a mouse. (There are YouTube videos that say you need to take the NAS apart and install the flash drive in an internal USB slot. You do not need to do this.)

I rebooted the NAS and started hitting the DEL key when it first powered on to get into the BIOS. First I went to the Boot screen and disabled the “UTOS Boot First” setting. This is the setting that tells the NAS to boot TOS 6 from a 3.75GB internal flash drive.

Next I went to the Save & Exit screen and selected my USB drive as the Boot Override device. It booted up my USB flash drive and I followed the TrueNAS Scale install instructions to install TrueNAS Scale on the first NVMe drive. It only took a minute or two.

Once that was done I rebooted again, and started hitting the DEL key when it rebooted to get into the BIOS. This time I went to the Boot screen to change the primary boot device. One of the NVMe disks was now labeled “Debian”. That’s the TrueNAS disk so I selected that, then saved and exited.

Once the NAS booted up the screen displayed the IP address and URL for logging in, so I went to my laptop and logged into the web UI to finish the setup. I added the 4x 8TB drives to a single RAIDZ1 storage pool and made the remaining 1TB NVMe drive into a R/W cache drive.

If I want to go back to TOS 6 I can re-enable the “UTOS Boot First” setting in BIOS, boot from the TOS6 flash drive, and rebuild the disk array. If I want to use the NVMe drive that TrueNAS is on for something else I can try installing TrueNAS on the TOS 6 flash drive but I’m not convinced that it will fit on a 3.75GB drive. I checked the size of the TrueNAS install and it looks like it might just barely fit.

Hope you find this useful.


Adding a LUKS-encrypted iSCSI volume to TrueNAS and Ubuntu 24.04

I have an Ubuntu 24.04 “Noble Numbat” workstation already set up with LUKS full disk encryption, and I have a Terra Master F4-424 NAS with 32TB raw storage that I installed TrueNAS Scale on. Years ago I set up a LUKS-encrypted iSCSI volume on a Synology NAS and used that to back up my main Ubuntu server, and I wanted to do the same thing using TrueNAS.

Create the iSCSI volume on TrueNAS

Log into the TrueNAS Scale Web UI and select System > Services. Make sure that the iSCSI service is running and set to start automatically.

Select Datasets > Add Dataset to create a new storage pool.

  • Add Dataset
    • Parent Path: [I used my main data pool]
    • Name: ibackup
    • Dataset Preset: Generic

Select Shares > Block (iSCSI) Shares Targets > Wizard to create a new iSCSI target.

  • Block Device
    • Name: ibackup
    • Extent Type: Device
    • Device: Create New
    • Pool/Dataset: [select the dataset that you created in the previous step]
    • Size: 3 TiB [How many TiB do you want?]
    • Sharing Platform: Modern OS
    • Target: Create New
  • Portal
    • Portal: Create New
    • Discovery Authentication Method: CHAP
    • Discovery Authentication Group: Create New
    • User: CHAP user name (doesn’t need to be a real user, can be any name)
    • Secret: CHAP user password (make sure you write the user name and password down)
    • IP Address: Click Add. [If you only want one specific IP address to be able to connect, enter it. If you don’t care, use 0.0.0.0]
  • Initiator
    • Initiators: [Leave blank to allow all or enter a list of initiator hostnames]
  • Click Save. You’ve now created an iSCSI volume that you can mount from across your network.

Get the iSCSI volume to appear as a block device on Linux

On your Ubuntu box switch over to a root prompt:

sudo su

Install the open-iscsi drivers. (Since I’m already running LUKS on my Ubuntu box I don’t need to install LUKS.)

apt-get install open-iscsi

Edit the conf file

vi /etc/iscsi/iscsid.conf

Edit these lines:

node.startup = automatic
node.session.auth.username = [CHAP user name on TrueNAS box]
node.session.auth.password = [CHAP password on TrueNAS box]

Restart the open-iscsi service:

systemctl restart open-iscsi
systemctl status open-iscsi

Start open-iscsi at boot time:

systemctl enable open-iscsi

Now find the name of the iSCSI target on the TrueNAS box:

iscsiadm -m discovery -t st -p $NAS_IP
iscsiadm -m node

The target name should look something like “iqn.2005-10.org.freenas.ctl:ibackup”

Still on the Ubuntu workstation, log into the iSCSI target:

iscsiadm -m node --targetname "$TARGET_NAME" --portal "$NAS_IP:3260" --login

Look for new devices:

fdisk -l | less

At this point fdisk should show you a new block device which is the iSCSI disk volume on the TrueNAS box. In my case it was /dev/sda.

Set up the block device as an encrypted file system

Partition the device. I made one big /dev/sda1 partition, type 8e (Linux LVM):

gparted /dev/sda

Set up the partition as a LUKS-encrypted volume:

cryptsetup --verbose --verify-passphrase luksFormat /dev/sda1

You’ll be asked to type “YES” to confirm. Typing “y” or “Y” or “yes” will not work. You have to type “YES”.

Open the LUKS volume:

cryptsetup luksOpen /dev/sda1 backupiscsi

Create a physical volume from the LUKS volume:

pvcreate /dev/mapper/backupiscsi

Add that to a new volume group:

vgcreate ibackup /dev/mapper/backupiscsi

Create a logical volume within the volume group using all available space:

lvcreate -l +100%FREE -n backupvol /dev/ibackup

Put a file system on the logical volume:

mkfs.ext4 /dev/ibackup/backupvol

Add the logical volume to /etc/fstab to mount it on startup:

# TrueNAS iSCSI target
/dev/ibackup/backupvol /mnt/backup ext4 defaults,nofail,_netdev 0 2

Get the UUID of the iSCSI drive:

ls -l /dev/disk/by-uuid | grep sda1

Add the UUID to /etc/crypttab to be automatically prompted for the decrypt passphrase when you boot up Ubuntu:

backupiscsi UUID=693568ca-9334-4c19-8b01-881f2247ae0d none luks

That’s pretty much it. The next time you boot you’ll be prompted for the decrypt passphrase before the drive will mount. If you type df -h you should see a new disk mounted on /mnt/backup.

If you found this interesting, you might want to check out my article Adding an external encrypted drive with LVM to Ubuntu Linux.

Hope you found this useful.