Kubernetes Homelab Part 4: Hyperconverged Storage

Sorry it’s taken a while to get to the next part of my blog series. This section was supposed to be about hyperconverged clustered storage in my cluster, but I unfortunately ran into a total cluster loss due to some bugs in MicroK8s and/or dqlite that maintainers haven’t managed to get to the bottom of.

I was able to re-associate the volumes provisioned on my off-cluster storage with my rebuilt-from-scratch cluster. The volumes provisioned on the containerised, clustered storage were irrecoverably lost.

Therefore, I have decided to rework this part of my blog series into a cautionary tale – partly about specific technologies, and partly to push my pro-backup agenda.

It’s worth looking at the previous posts in the series for some background, especially the overview.

My design

Let’s have a look at my original design for hyperconverged, containerised, clustered storage. And before we get stuck in, let me quickly demystify some of the jargon:

  • hyperconverged means the storage runs on the same nodes as the compute
  • containerised means the storage controller runs as Kubernetes pods inside the cluster
  • clustered means many nodes provide some kind of storage hardware, and your volumes are split up into replicas on more than one node, so you can tolerate a node failure without losing a volume

Several clustered storage solutions are available. Perhaps Rook/Ceph is the best known, but as MicroK8s packages OpenEBS, I decided to use that. The default setup you get if you simply do microk8s enable openebs creates a file on the root filesystem and provisions block volumes out of that file. In my case, that file would have ended up on the same SATA SSD as the OS, and I didn’t want that.

So I went poking at OpenEBS, and found that it offers various storage backends: Mayastor, cStor or Jiva. Mayastor is the newest engine, but has higher hardware requirements. In the end I decided on cStor as it appeared to be lightweight (i.e. didn’t consume much CPU or memory) and was also based on ZFS, which is a technology I already rely on in my TrueNAS storage. I ended up deploying OpenEBS from its Helm chart.

This diagram is quite complex, so let me walk you through it – starting at the bottom. Each physical node has an M.2 NVMe storage device installed, and this is separate from the SATA SSD that runs the OS. When you install OpenEBS, it creates a DaemonSet of a component called Node Disk Manager (NDM) which runs on each node and looks for available storage devices, and makes them available to OpenEBS as BlockDevices. When you have several BlockDevices, you can create a storage cluster. From this cluster, you can provision Volumes which will be replicated across multiple NVMe devices (by default you get 3 replicas). Creating a Volume also creates a Pod that acts as an iSCSI target for the volume. The Volume can now be mounted by workload Pods from any node in the usual way. It’s important to note that the workload Pod does not have to be on the same node as the Volume Target, and the three VolumeReplicas are placed according to the nodes with most capacity.
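To make that walkthrough concrete, the "storage cluster" step is declared with a CStorPoolCluster resource, roughly like this. This is a sketch: the node names and BlockDevice names here are hypothetical, and the real device names come from kubectl get blockdevices -n openebs.

```yaml
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-pool-cluster
  namespace: openebs
spec:
  pools:
    # One pool per node, each backed by that node's NVMe BlockDevice
    - nodeSelector:
        kubernetes.io/hostname: kube01
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: blockdevice-aaaa   # hypothetical name
      poolConfig:
        dataRaidGroupType: stripe
    - nodeSelector:
        kubernetes.io/hostname: kube02
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: blockdevice-bbbb   # hypothetical name
      poolConfig:
        dataRaidGroupType: stripe
    - nodeSelector:
        kubernetes.io/hostname: kube03
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: blockdevice-cccc   # hypothetical name
      poolConfig:
        dataRaidGroupType: stripe
```

Volumes provisioned from a StorageClass that points at this pool cluster then get their replicas spread across the per-node pools.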

Architecture of OpenEBS/cStor on Kubernetes

The problem

MicroK8s uses dqlite as its cluster datastore instead of etcd like most other Kubernetes distributions. I ran into some problems with MicroK8s where dqlite started consuming all CPU, running at high latency and eventually silently lost quorum. The Kubernetes API server then also silently went read-only, so any requests to change cluster state would silently fail, and any requests to read cluster state would effectively be snapshots from the moment the cluster went read-only, and might vary depending on which of the dqlite replicas was being queried.

The further complication is that, as a clustered storage engine, cStor uses CRDs to represent its objects, and therefore relies on the Kubernetes API server and the underlying datastore to track its own volumes, replicas, block devices, etc. As a result, cStor also lost quorum.


I worked through the MicroK8s guide on how to restore lost quorum, several times, but it never worked for me. I also worked with the MicroK8s developers on recovery for a while.

Even without cluster quorum, I attempted to recover my cStor volumes. However, actions like creating a pod to mount a volume rely on having a kube API that is not read only!

Eventually I had no other choice but to reset the cluster and start from scratch. I made sure I did not wipe the NVMe devices, and assumed I would be able to reassociate them on a new cluster. I exported all of the OpenEBS/cStor CRs to yaml files as a backup.
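The export step was essentially the following sketch. The exact CRD names vary between cStor versions, so treat these as examples rather than a definitive list:

```shell
# Dump each OpenEBS/cStor custom resource type to a YAML file as a backup
for kind in blockdevices cstorpoolclusters cstorpoolinstances \
            cstorvolumes cstorvolumereplicas; do
  kubectl -n openebs get "$kind" -o yaml > "backup-${kind}.yaml"
done
```

As it turned out, having these YAML files was useful for inspection but not sufficient for recovery, as described below.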

After the cluster was rebuilt, I reimported the BlockDevice resources, but the NVMe drives were not discovered because they seemed to have changed UUID in the new cluster. I tweaked my YAML to adopt them under their new names, but I was not able to rebuild them into an OpenEBS cluster and rediscover my old volumes.

The documentation for cStor is quite minimal, and focuses on installing it rather than fixing it. The only relevant page is the Troubleshooting page, and it didn’t cater to my problem. That surprised me, because a common question with any storage system must be “how do I get my stuff back when it goes wrong?”

I contacted the OpenEBS community via Slack and my question was ignored for a week, despite my nudges. Eventually, an engineer contacted me and we worked through some steps, but were not able to reassociate a previous cluster’s cStor volumes with a new cluster.

All my cStor volumes were either MariaDB or PostgreSQL databases, and fortunately I had recent backups of all of them and was able to create new volumes on TrueNAS external storage (slower, but reliable) and restore the databases.

Lessons learned

  • First and foremost, take backups. Backups saved my bacon here in what would otherwise have been a significant data loss. I’ll cover my backup solutions in a later part of this blog post series.
  • Volume snapshots are not backups. cStor provides volume snapshot functionality and it is very easy to take snapshots automatically. However, using those snapshots requires a functioning kube API.
  • The control plane is fragile. It doesn’t take a lot for your datastore to lose quorum, and then all bets are off.
  • I advise against hyperconverged storage in your homelab, unless you really need it. As soon as there is persistent data stored in your cluster, it stops being ephemeral and you need to treat it as carefully as a NAS. It’s fine for caches and things that can be regenerated.
  • Check support arrangements before you commit to a product. MicroK8s developers have been responsive and helpful. However, cStor support has been useless. The product seems mature, and the website looks shiny and claims it is enterprise-grade, but the recovery documentation was useless and nobody was willing to help. Most of the chatter in the Slack channel is around Mayastor, so that must be the new shiny that gets all the attention.
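To illustrate the point about snapshots above: a cStor snapshot is itself just another API object, so with a read-only API server there is no way to restore from one. A typical snapshot request looks something like this (the snapshot, class and PVC names are hypothetical):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mariadb-snap                          # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-cstor-snapshotclass  # hypothetical class
  source:
    persistentVolumeClaimName: mariadb-data   # hypothetical PVC
```

Restoring means creating a new PVC whose dataSource points at this object, which is exactly the kind of write a read-only cluster refuses.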

Next steps

The root cause of this problem was dqlite and MicroK8s quorum. At the moment, I don’t yet understand why this incident happened and I don’t know how to prevent it from happening again. I’m not the only person to have been bitten by it.

For the time being, I restored like-for-like on MicroK8s, even though I don’t really trust dqlite any more. I’ve upped the frequency of my backups in the expectation that it will probably happen again at some point.

I think I’ve decided that if this happens again, I will consider rebuilding on K3s instead of MicroK8s, as it uses the more standard etcd datastore.

I’m not currently using the NVMe disks, but it seems a waste just to leave them there doing nothing. I will probably fiddle with hyperconverged storage again one day – maybe either Mayastor or Rook/Ceph, both of which seem to get more attention than cStor.

My MIDI pipe organ workflow


I’ve written a couple of times about playing about with a MIDI-enabled pipe organ and I’ve shared some of my results on YouTube. Today I want to say a bit about my workflow, because a few people have asked, and it is somewhat complicated but hopefully interesting.

This isn’t supposed to be instructional: this is just some notes about the way I’ve found that works for me. I’ll give some examples and demonstrate progress as we go along by working on a public domain piece, Prelude and Fugue in C major (BWV 553) by Johann Sebastian Bach.


If you want to play along with this guide, you will need:

  • a pipe organ with MIDI ports
  • an installation of MuseScore
  • an installation of OrganAssist, configured for your organ
  • an installation of GrandOrgue, configured for your organ (optional)

Obtain score

The first thing I do when I decide I want to make the organ play something is obtain a score. I have three options:

Find and download a score on MuseScore

As well as being a notation editor app, MuseScore allows musicians to upload their own compositions to musescore.com, which also contains various public domain works. There are also some copyrighted works with various licensing options.

When I’ve found an arrangement I like, as I’m a paid-up MuseScore Pro member, I can download the score directly in MuseScore format.

Here’s my score for BWV 553 on MuseScore, and for reference here’s the first line.

First line of BWV 553
Enter a score from a physical book into MuseScore

If the work is only in physical form (a book or sheet score) then the only option is to manually enter it into MuseScore. There are various options for scanning it and getting MuseScore to “recognise” the notes, but I have found this inaccurate, and it takes as long to correct the mistakes as it does to just enter the music by hand.

I created my MuseScore version of the score by manually entering the notation from a physical book.

Import a plain old MIDI file into MuseScore

The last option is to import an ordinary MIDI file into MuseScore. The success of this method varies wildly depending on the quality and complexity of the original MIDI file, and you can often end up with an unreadable score that needs a lot of cleanup.

Arrange for organ

No matter which of the three methods for getting a score you chose, you should now have a score in MuseScore. You will likely have to do some editing and arrangement to make it suitable for pipe organ.

Organ music arranged for humans would typically be written on 2 or 3 staves – right hand, left hand and optionally feet – and it is up to the organist to interpret the score and decide which manual (keyboard) to play each section on. There are often (but not always) written notes to tell the organist what to do.

Directions to the organist about choice of manual

But to a computer, an organ is several instruments – each manual (keyboard) and the pedalboard is its own instrument. So we need to arrange our score in this way – one stave for each manual, and we must pre-determine which manual each section will be played back on.

The specific organ I am arranging for has a Great manual, a Swell manual and a Pedal, so I need to arrange my score for 3 parts, the Swell and Great parts having 2 staves each and the Pedal part having 1 stave. In my own lingo I refer to this as SSGGP.

Here’s my version of BWV 553 re-arranged for SSGGP, and the first line for quick reference again.

First line of BWV 553, arranged for OrganAssist

Note that I have had to take out the convenient repeat and interpret the 1st on Sw, 2nd on Gt direction as playing the entire section through twice, once on each manual.

Finally, I export the MuseScore project as a MIDI file, which can be consumed by OrganAssist.

Add stops

Now I import this MIDI file into the OrganAssist library. The first thing it asks me to do is map the MIDI tracks to the organ manuals. We exported as SSGGP so that’s how we’ll set the mapping for import.

Importing a MuseScore score into OrganAssist
Mapping the SSGGP staves to organ manuals in OrganAssist

If we play this back now, the organ will make no sound, because although the keys are being pressed, no stops are drawn. We need to tell OrganAssist which stops we want it to use, which is something the human organist would decide when they played the piece on a real organ. In this case, the front of the book of Eight Short Preludes and Fugues gives this advice:

Suggested registrations for BWV 553

BWV 553 has a direction of mf, so let’s set those stops accordingly. Following the suggested registrations in the table, and knowing what I have available on the organ at St Mary’s, I’ve chosen these stops:

To add these stops, we will use the OrganAssist editor. You can see the notes in a “piano roll” style view. Right click in the upper part of the screen to add stop changes and coupler changes. This obviously depends on your specific organ.

The editor view in OrganAssist shows notes in the main part of the screen, colour coded by manual (green for Swell, blue for Great, purple for Pedal). The top area is for events such as switching on or off stops, couplers, tremulants and any other controls the organ might have. Here I’ve turned on a bunch of stops at the beginning, and about two-thirds of the way across I’ve switched off the Swell to Pedal coupler, and switched on the Great to Pedal coupler, so the pedal notes are always coupled to the manual that is being currently played with the hands.

OrganAssist score editor, showing notes in the main display and stop/coupler events at the top

This step can be done away from the actual organ, as OrganAssist has rudimentary sound output which is sufficient to check for wrong notes, etc.

Playback on organ

If everything so far has been done properly, I should be ready for a first listening. No doubt there will be snags that show up when I listen to it, and I’ll probably want to make some tweaks.

The organ may be MIDI-controlled, but the mechanical components are still made of wood and leather and operated by springs, solenoids and pressurised air, so a little bit of latency creeps in.

This video shows the score being played back on the organ at St Mary’s Church, Fishponds.

Changes to stops and small changes to durations of notes are easy to tweak in OrganAssist. Anything more usually means going back to MuseScore, editing there, and doing the export and import process again.

Playback on GrandOrgue

As I said above, OrganAssist only offers rudimentary playback when not attached to a real organ. It’s good enough for basic testing but not much good for hearing what it might sound like. Sure, I can go into the church and play the organ sometimes, but it would be nice to have an approximation of the sound at home.

This is where GrandOrgue comes in. It’s a Virtual Pipe Organ (VPO): a software recreation of a pipe organ which receives input via MIDI – just like the real thing!

GrandOrgue uses real recordings of every single pipe on a real organ. Together these are known as a sampleset. Various samplesets are available online, some free, and some commercial. I haven’t (yet) had a chance to sample the organ at St Mary’s, so for now I am using a composite sampleset with similar-sounding stops taken from two free samplesets (Friesach by Piotr Grabowski, and Skinner op. 497 by Sonus Paradisi), and a basic graphical interface created with Organ Builder.

It takes a few minutes to configure a GrandOrgue organ to map the stop on/off events etc but after this is done, OrganAssist can play back through GrandOrgue via a MIDI loopback port, and make a surprisingly realistic sound. I can now make meaningful decisions about which stops to add to my OrganAssist scores at home.
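For reference, on Linux the loopback port can be created with the ALSA virtual MIDI module. This is an assumption about your platform: on Windows, a tool like loopMIDI plays the same role.

```shell
sudo modprobe snd-virmidi   # creates "Virtual Raw MIDI" ALSA ports
aconnect -l                 # list MIDI clients/ports, so you can wire
                            # OrganAssist's output to GrandOrgue's input
```

Once the virtual port exists, OrganAssist sends to it and GrandOrgue listens on it.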

In this video, OrganAssist (in the background) is “playing” the virtual organ by sending MIDI events, while GrandOrgue (in the corner) receives them and generates the sound, using samples of real organ pipes.

I think this is a pretty good approximation of the real organ at St Mary’s – certainly good enough for playing around with at home.

Kubernetes Homelab Part 3: Off-Cluster Storage

Welcome to part 3 of the Kubernetes Homelab guide. In this section we’re going to look at how to provide off-cluster shared storage. If you haven’t read the other parts of this guide, I recommend you check those out too.

Out of the box, MicroK8s does provide a hostpath storage provider but this only works on a single-node cluster. It basically lets pods use storage within a subdirectory on the node’s root filesystem, so this obviously isn’t going to work in a multi-node cluster where your workload could end up on any node.

It’s important to me that any storage solution I choose is compliant with CSI, the Kubernetes framework for storage drivers. This allows you to simply tell Kubernetes that your pod requires a 10GB volume, and Kubernetes goes off and talks to its CSI driver, which provisions and mounts your volume automatically. This is far more flexible than manually carving out shares on a typical fileserver.


So I decided to go with TrueNAS SCALE (technically I started with TrueNAS CORE and then I migrated to TrueNAS SCALE). TrueNAS is a NAS operating system which uses the OpenZFS filesystem to manage its storage. By its nature, ZFS supports nested volumes and is ideal for this application.

I’m running a fairly elderly HP MicroServer N40L with 16GB memory and 4x4TB disks in a RAID-Z2 vdev, for a total of 8TB usable storage. It’s not the biggest or the fastest, but it works for me.

HP MicroServer N40L

Democratic CSI

The magic glue that connects Kubernetes and TrueNAS is a project called Democratic CSI, which is a CSI driver that supports various storage appliances, including TrueNAS.

Note: Democratic CSI packaged an older driver called freenas-nfs which required SSH access to the NAS. For users running TrueNAS SCALE, there is a newer driver called freenas-api-nfs which does not require SSH and does all its work via an HTTP API. As I am running TrueNAS SCALE, I will deploy the freenas-api-nfs driver.

There are some steps to set up the root volume on your TrueNAS appliance but I wrote about these before, and they are pretty much the same, so please refer to my TrueNAS guide. There are also some Democratic CSI prerequisites you need to install on your Kubernetes nodes before deploying.

I’m installing via Helm, and the values file needed is quite complex as it is drawn from two upstream examples: the generic values.yaml for the Helm chart, and some more specific options for the freenas-api-nfs driver.

This is the local values.yaml I have come up with for my homelab:

csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.nfs-api"

driver:
  config:
    driver: freenas-api-nfs
    httpConnection:
      protocol: http
      host: truenas.local  # placeholder - set this to your NAS address
      port: 80
      username: root
      password: mypassword
      allowInsecure: true
    zfs:
      datasetParentName: hdd/k8s/vols
      detachedSnapshotsDatasetParentName: hdd/k8s/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0
    nfs:
      shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: root
      shareMapallUser: ""
      shareMapallGroup: ""

node:
  # Required for MicroK8s
  kubeletHostPath: /var/snap/microk8s/common/var/lib/kubelet

storageClasses:
  - name: truenas
    defaultClass: true
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: nfs
    mountOptions:
      - noatime
      - nfsvers=4

volumeSnapshotClasses:
  - name: truenas

And it is installed like this:

helm upgrade \
    --install \
    --create-namespace \
    --values values.yaml \
    --namespace democratic-csi \
    truenas democratic-csi/democratic-csi


Once deployment has finished, watch the pods until they have all spun up. Expect to see one csi-node pod per node, and one csi-controller.

[jonathan@latitude ~]$ kubectl get po -n democratic-csi
NAME                                                 READY   STATUS    RESTARTS   AGE
truenas-democratic-csi-node-rkmq8                    4/4     Running   0          9d
truenas-democratic-csi-node-w5ktj                    4/4     Running   0          9d
truenas-democratic-csi-node-k88cx                    4/4     Running   0          9d
truenas-democratic-csi-node-f7zw4                    4/4     Running   0          9d
truenas-democratic-csi-controller-54db74999b-5zjv2   5/5     Running   0          9d

Check to make sure there’s a truenas StorageClass:

[jonathan@latitude ~]$ kubectl get storageclasses
NAME                PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
truenas (default)   org.democratic-csi.nfs-api   Retain          Immediate           true                   9d

Then apply a manifest to create a PersistentVolumeClaim, which should provision a volume in TrueNAS:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-nfs
spec:
  storageClassName: truenas
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Check to make sure it appears and is provisioned correctly:

[jonathan@latitude ~]$ kubectl get persistentvolumeclaim
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim-nfs   Bound    pvc-ac9940c4-29a8-4056-b0bf-d8ac0dd05beb   1Gi        RWX            truenas        15s

You should be able to see a Dataset and a corresponding Share for this volume in the TrueNAS web GUI:

Dataset details in TrueNAS UI

Finally we can create a Pod that mounts this PersistentVolume to make sure we got the settings of the share right.

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-nfs
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: test-claim-nfs

If this pod starts up successfully, it means it was able to mount the volume from TrueNAS. Woo!

[jonathan@latitude ~]$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
test-pod-nfs   1/1     Running   0          46s

We can now start using the truenas storage class to run workloads which require persistent storage. In fact, you might already have noticed that this storage class is set as the default, so you won’t even need to explicitly specify it for many deployments.

As this storage class is backed by NFS, it intrinsically supports access by multiple clients, and so the storage class supports both ReadWriteOnce (aka RWO, can be mounted by a single node) and ReadWriteMany (aka RWX, can be mounted by many nodes at once).

Kubernetes Homelab Part 2: Networking

The next part of our look at my Kubernetes homelab is a deep dive into networking. If you haven’t read the other parts of this guide, I recommend you check those out too.

On the surface, my network implementation is very simple. The cluster nodes, the NAS and the router are all on the same /24 private network. The router NATs to the Internet. No VLANs here – this is a standard home setup.

In order to expose your application, you’ll need an ingress controller. This runs on every node in the cluster and listens on ports 80 and 443 (HTTP and HTTPS). This is easily enabled with:

microk8s enable ingress

You can send HTTP requests to the ingress controller on any of the nodes and it will find its way to the application pods, no matter where they are, by traversing the Calico overlay network. Simplistically, we can set up a port-forward on the router to forward TCP ports 80 and 443 to any one of the nodes, and everything will work.

Ingress controllers and port forwarding

As we can see from the diagram, node kube01 has been chosen as the target of the port forwarding from the router. kube01 will handle all ingress traffic, and use the Calico network overlay to route the traffic to the application pods, wherever they may be. This also means that if kube01 is unavailable for any reason, there will be an outage of all applications that are using the ingress.

The solution is to set up a layer 2 load balancer with MetalLB. This is an addon for MicroK8s, and when enabled, it will ask you to set aside a few IP addresses in the same subnet which can be allocated to load balancers. In this example, I’ve allocated a small range of spare addresses as load balancer IPs.

microk8s enable metallb
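If you prefer to give the range up front rather than answer the interactive prompt, the addon also accepts it inline. The addresses below are hypothetical; pick unused ones from your own subnet:

```shell
microk8s enable metallb:192.168.1.240-192.168.1.250
```

Either way, MetalLB will hand these addresses out to Services of type LoadBalancer.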

Now we need to create a new Service definition for the ingress controller, which will create a corresponding load balancer on one of the allocated addresses:

apiVersion: v1
kind: Service
metadata:
  name: ingress-lb
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

With the load balancer now in place, we can hit the ingress controller either on any of the node IPs, or the load balancer IP. The load balancer will send the traffic to any of the node IPs, taking into consideration which ones are available and healthy. So we can change the port forwarding rule to forward to the load balancer’s IP, and now any of the nodes can receive ingress traffic.

Ingress controllers with MetalLB load balancer

One last thing we can do to make deployments much easier is set up a wildcard DNS record. If your domain is example.com, you could register a wildcard record for *.example.com that points to your router’s public IP. Then you can deploy arbitrary apps and give them hostnames like myapp.example.com, and you won’t have to do anything else for the new application’s ingress to work.
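In BIND zone-file syntax, that wildcard record would look something like this, where 203.0.113.10 is a documentation address standing in for your router’s real public IP:

```
*.example.com.   IN   A   203.0.113.10
```

Any hostname under example.com that doesn’t have a more specific record will now resolve to the router, which forwards ports 80 and 443 to the load balancer.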

Kubernetes Homelab Part 1: Overview

A lot of people have asked me about my home Kubernetes cluster, and so I have decided to put together a series of blog posts about the architecture. I’m going to split it into sections, with each section focusing on a specific area. If you haven’t read the other parts of this guide, I recommend you check those out too.

This is Part 1, a general overview of the hardware, the architecture and the base OS install. It is not intended as a set of instructions, but as some notes about my design choices.


First, let’s have a look at the architecture. I’m using the MicroK8s distribution of Kubernetes, which can run on a single node and supports clustering, but needs at least 3 nodes for high availability. I’m running 4 nodes because that gives me plenty of memory.

I chose to use HP EliteDesk 800 G2 Mini PC systems, because they are tiny, use very little power, release very little heat, and make very little noise. There are several other manufacturers who also make ultra small form factor PCs (Lenovo, Dell, Intel) but it just happened that HP were the cheapest at the time I looked.

Each EliteDesk node is equipped with an Intel Core i5-6500T CPU (4 cores and 4 threads at 2.50 GHz), 16GB DDR4 memory, a 240GB SATA SSD for the OS, and a 240GB M.2 NVMe SSD for storage (more on that later).

I also have a NAS to provide off-cluster shared storage. This is an HP MicroServer N40L with 4x4TB disks, running TrueNAS. We’ll look at this in detail in a later section.

Networking is dead simple – everything is connected to an unmanaged gigabit Ethernet switch and is in the same RFC1918 /24 network. A router provides Internet connectivity via NAT.

Diagram showing architecture of Kubernetes cluster
Kubernetes cluster architecture

In case you were wondering what this looks like, it’s all neatly tucked away in the bottom of a closet. I built a rack for the nodes from plywood. Each node is screwed to a small plywood panel by its VESA mount screws, and the plywood panel slides into a pair of grooves. This means the nodes are rack mounted, have good airflow, and it’s easy to slide one out for maintenance etc.

Photo of Kubernetes hardware in situ
Kubernetes cluster photo

The small box with the red light at the bottom of the rack is a Raspberry Pi, which provides DNS and DHCP for the LAN with Pi-hole. This allows me to easily set static reservations for the Kubernetes nodes.

Also visible is a slim KVM and the cable modem/router. To save space, the monitor is mounted on the inside of the closet door.

Operating System

As MicroK8s is maintained by Canonical, it made sense to run it on its native Ubuntu platform. I’m running Ubuntu Server 22.04 LTS, installed as a Minimal installation.

It is almost entirely a default installation and the only customisation I made to the OS was to disable swap and delete the swap file.

After this, MicroK8s can be installed on all the nodes using snap. By default, snap packages auto-update to every future release, whether major or minor. This is potentially dangerous, as Kubernetes releases often add and deprecate features that you may be using, so I strongly recommend pinning your MicroK8s release to a specific version, like this. Make sure to check what the latest release of MicroK8s is at the time – don’t just blindly copy my 1.25 example in case it’s out of date!

sudo snap install microk8s --classic --channel=1.25/stable

Once installed, I started MicroK8s running on each node and followed the instructions for clustering the nodes. It doesn’t matter which node you start with – just pick one, and add the rest to it one by one.
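The join flow is roughly the following sketch. The token is generated fresh each time you run add-node, and the IP shown here is hypothetical:

```shell
# On the first node: prints a ready-made join command with a one-time token
microk8s add-node

# On each additional node, paste the printed command, which looks like:
# microk8s join 192.168.1.101:25000/<token>
```

Repeat add-node/join once per extra node until all four are members.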

When all the nodes are ready, you’re done provisioning a simple Kubernetes cluster! There are a few more steps to make the cluster actually useful, and we’ll cover these in subsequent posts, where I’ll take a deep dive into the other components.

BitShift Variations in C Minor

This is a story about music composed by a computer, and collaboration between many individuals, each of whom has extended the work of their predecessor.

BitShift Variations

The original BitShift Variations in C Minor is a composition generated by code written in C by Rob Miles. It’s an extremely short yet amazingly complex piece of code, written for a “code golf” competition. Here’s Rob himself introducing his work.

The code, if you’re interested, is freely available online, and included here for your convenience.

echo "g(i,x,t,o){return((3&x&(i*((3&i>>16?\"BY}6YB6%\":\"Qj}6jQ6%\")[t%8]+51)>>o))<<4);};main(i,n,s){for(i=0;;i++)putchar(g(i,1,n=i>>14,12)+g(i,s=i>>17,n^i>>13,10)+g(i,s/3,n+((i>>11)%3),10)+g(i,s/5,8+n-((i>>10)%3),9));}"|gcc -xc -&&./a.out|aplay

The end result of running this tiny piece of code is a chiptune which sounds like this:

Pretty cool work, but as a project, this seems hard to extend.

BitShift Variations Unrolled

Enter James Newton, who is also fascinated with Rob’s code. He decided to unroll the code and express it in a longer, more human-readable way, to make it easier for others to understand.

James’s unrolled code is available on GitHub.

BitShift Variations: Lilypond Edition

A key limitation of the original BitShift Variations code is that it can only output a sound wave directly, and not any kind of score.

John Donovan re-implemented the algorithm from the original BitShift code in Python and gave it the ability to generate its output in Lilypond format, instead of a sound wave. Lilypond is a versatile music notation system, and from here the score of BitShift Variations in C Minor can be exported from Lilypond to various other formats.

John’s Python code is also available on GitHub, and there is also a rendering of his MIDI output on SoundCloud:

BitShift Variations for Pipe Organ

I’ve long thought pipe organs are the original synthesizers, and have a lot in common with chiptune technology. You start with a fundamental tone (the basic organ flute pipe has a sound quite close to a pure sine wave) and create richness in the sound by adding in higher harmonics and then combining notes in harmony.

I’m also fortunate enough to have access to a real pipe organ which was renovated in 2020 and now has MIDI ports which can be used to record and play back music from a computer or other MIDI-enabled instrument.

So when I heard there was a Lilypond version of the BitShift Variations, there was no way I was not going to find a way of playing it back on the organ!

I cloned John Donovan’s BitShift Variations: Lilypond Edition and ran the following commands:

# Run the BitShift code to output the score in Lilypond format
python2.7 main.py > bitshift_variations.ly

# Use Lilypond to convert the Lilypond score to MIDI format
lilypond bitshift_variations.ly

I then imported this MIDI file into my favourite notation editor, MuseScore. BitShift Variations is written for 4 voices, which MuseScore natively interprets as 4 instruments. For this to work on an organ, I needed to do a little bit of mapping.

Organs typically have two or more keyboards (manuals) and a pedalboard. The organ I’ll be using has two manuals and a pedalboard, so that can be thought of as 3 “voices”, although each voice is also capable of polyphony.

Taking BitShift Variations’ voices to be 1-4, starting with 1 as the lowest voice, I mapped voice 1 to the pedals, voices 2 and 3 to the Great organ (the lower of the two manuals) and voice 4 to the Swell organ (the upper manual). This is a fairly typical setup for classical music (although in this case, it probably isn’t possible to play 3 voices with 2 hands!).
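That mapping is simple enough to express in code; here's a trivial sketch (names mine, purely illustrative) of the assignment described above:

```python
# Voice 1 is the lowest; divisions follow the mapping in the text.
ORGAN_MAP = {
    1: "Pedal",  # pedalboard
    2: "Great",  # lower manual
    3: "Great",
    4: "Swell",  # upper manual
}

def division_for_voice(voice: int) -> str:
    return ORGAN_MAP[voice]
```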

Here’s my recording of BitShift Variations being played back on the organ. The video is a screen capture from an app called OrganAssist, which is specifically designed to control MIDI-enabled pipe organs. The sound is a recording of the actual sound – just air moving through pipes.

BitShift Variations for pipe organ

MuseScore has a really cool ecosystem for uploading and sharing scores, so they can be played back, downloaded and edited. So I’ve uploaded my arrangement of BitShift Variations for Pipe Organ for general consumption. Feel free to further edit it and see what you can come up with.

Making a public music streaming service with Navidrome

For a while, I’ve wanted to set up some kind of public music player, to allow people to stream and download music I’ve recorded, free of charge and without having to make an account.

First I tried using Bandcamp but I found the user interface on the free tier to be awkward, and it took too long to upload new releases and required re-entry of the metadata.

Then I tried using Navidrome which is a great self-hosted music server but requires a login. People can’t just sign up, either – the admin has to make them an account. I dived into the documentation and found that it’s possible to use an external auth proxy – and I wondered if it would be possible to create a fake auth proxy that just lets you in. Turns out, it is.

First you have to set up a Navidrome instance and create your usual admin user. Now use your admin user to create a second, non-admin user. I called my user music, but it doesn’t matter because nobody will see it.

You configure Navidrome using environment variables, and there are a few you need to set. Firstly you need to tell Navidrome it should check the HTTP request headers. Secondly you need to disable all features that don’t make sense in an environment where all users are effectively signing in with the same account (so you don’t want them to change the password or set favourites that won’t make sense to other people).

# Enable auto login for the "music" user

# Disable any user-specific features
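The exact variable values didn't survive here, but based on Navidrome's documented configuration options it would look something along these lines (variable names are my recollection of the Navidrome docs, so double-check them against the current documentation):

```shell
# Trust the username the proxy passes in the Remote-User header
export ND_REVERSEPROXYUSERHEADER="Remote-User"
# Only accept that header from trusted addresses; 0.0.0.0/0 trusts everything,
# so restrict this to your ingress's range in production
export ND_REVERSEPROXYWHITELIST="0.0.0.0/0"

# Disable per-user features that make no sense for a shared account
export ND_ENABLEUSEREDITING="false"
export ND_ENABLESHARING="false"
```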

The other piece of the puzzle is to do with the auth proxy. I’m hosting Navidrome in Kubernetes (using the k8s@home Navidrome Helm chart) so it makes sense to use an Ingress resource. My cluster is already running NGINX Ingress. It was simple to add a config snippet to the Ingress to statically set the Remote-User header to the music username created above.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: navidrome
  namespace: navidrome
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Remote-User music;
spec:
  ingressClassName: public
  rules:
  - host: music.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: navidrome
            port:
              number: 4533

And that’s it! Now, visiting music.example.com automagically signs you in as the music user without you ever seeing a login screen. The public can now browse, stream and download music freely.

The only user-specific features I couldn’t disable are playlists and themes. So anyone visiting your Navidrome instance can create, edit and delete playlists, and change the theme at will.

Bluetooth MIDI with CME WIDI

I recently had to set up a wireless MIDI link between a laptop and a MIDI-enabled pipe organ. I learnt a few lessons along the way, so this is partly a tutorial, partly some notes on the lessons learned, and partly a mini review of the devices I bought.

My use case

After a recent refurbishment, the pipe organ at my church was fitted with MIDI ports which can be used to record and play back performances on the organ. Initially, I used a regular USB-to-MIDI cable to connect a laptop, and we successfully proved the concept with an app called OrganAssist.

A short USB-MIDI cable is a bit limiting though, as you have to stand around the organ console to play anything back, which is not ideal in church services. I looked for a wireless alternative.

Wireless MIDI

Wireless MIDI is apparently a thing these days. It seems to go by various names, but is officially known as Bluetooth LE MIDI. I found that support for it is inconsistent: it only arrived on Windows with the Windows 10 Anniversary Update, and it also requires support in the audio application itself. Support is apparently better on macOS and iOS, but I’m not a Mac user.

My laptop was running a compatible version of Windows, but OrganAssist does not support Bluetooth LE MIDI.


Then I discovered the family of WIDI products from CME, which can work in a number of different ways. To be honest, I found their documentation quite confusing. WIDI is a trademark of CME; as a technology it is based on Bluetooth LE MIDI but offers a superset of features, such as being able to group WIDI devices together and set up virtual patching from your phone.

At the “instrument” side of the connection you need a WIDI device – either a WIDI Master or a WIDI Jack. As far as I can tell, the only difference is the physical form factor. (The WIDI Master is a pair of stubby dongles that fit into 5-pin DIN MIDI ports, while the WIDI Jack is a separate box that you connect to your MIDI ports with little patch leads.)

If you have a Mac, iOS device, or a piece of hardware that supports Bluetooth LE MIDI (there are apparently some synths that offer this now), then that’s all you need.

If you have the Windows 10 Anniversary Update or newer, you can install a third-party Bluetooth LE MIDI driver from Korg, and then use apps that support Bluetooth LE MIDI. At the time of writing, the only such app I know of is Cubase, and I wasn’t able to get it to work.

Most Windows users will need another piece of WIDI hardware at the “computer” side of the connection – a WIDI Bud Pro. This device talks to your WIDI Master or WIDI Jack using Bluetooth LE MIDI, but talks to your PC using regular USB MIDI. It appears as a normal MIDI device and “just works” with older versions of Windows and older apps.



I chose the WIDI Jack for the semi-permanent installation on the pipe organ. I liked that its DIN plugs were so stubby and short, on little patch leads. Due to the location of the MIDI ports by the organist’s right knee, anything longer would’ve got in the way when the organist got on or off the bench.

WIDI Jack in situ

The WIDI Jack is magnetic, and it includes a self-adhesive metal plate – so you can either stick it onto a metal object by itself, or you can apply the metal “sticker” to a surface and attach the WIDI Jack to that. You can see in my picture I’ve stuck the metal “sticker” to the underside of the MIDI ports so the WIDI Jack is kept out of the way and out of sight.

The WIDI Jack draws power from the MIDI Out connection of your instrument so there is no need for a power supply. It just turns on when you turn your instrument on.


WIDI Bud Pro

The WIDI Bud Pro effectively uses Bluetooth LE as a link between itself and the WIDI Jack, but it presents the connection back to Windows as a regular USB MIDI device which “just works” on any version of Windows. No Bluetooth complexity to worry about. The WIDI Bud Pro and WIDI Jack automatically pair with each other so you don’t need to do anything.

In actual usage, I can only review the WIDI Bud Pro in combination with the WIDI Jack. Put simply, it works, the latency is low and I haven’t had any problems. The range is better than expected – it claims up to 20m in open spaces, but I got 25m away in the church without any problems. Be careful of interference, though: when I got close to some metal railings it dropped a couple of notes and the timing went a bit sloppy.


Just a quick demo to show that it’s possible to control a pipe organ from a laptop via Bluetooth, and walk around the church while it’s playing some Bach. Sorry it’s dark… I try to save electricity when working in the church in the evening.

In practice the laptop will be tucked away to one side during services, and then hymns can be played back remotely.

How to distinguish the Jaguar XJ6 and XJ8

The Jaguar XJ models of the 1990s, the X300-generation XJ6 and the X308-generation XJ8, are very similar-looking cars. The key difference is what’s under the bonnet – the XJ6 has a straight-six AJ16 engine in 3.2 or 4.0 litre form, while the XJ8 sports an eight-cylinder AJ-V8 in the same displacements. But what if you happen to see an XJ drive past you in the street – how can you tell whether it’s an XJ6 or an XJ8 without checking under the bonnet?

Well, there are a few tell-tale signs. This is not supposed to be an exhaustive list of the differences between the XJ6 and XJ8 – rather, a way of telling them apart at a glance.

The easiest way is to look at the badge on the back. Predictably, the XJ6 says XJ6 and the XJ8 says XJ8. But wait! There are exceptions. The Sovereign trim level of either model will just say Sovereign and not give a clue about the model of the car. Technically, it’s just called a Jaguar Sovereign, and not a Jaguar XJ6 Sovereign. Similarly, the Sport trim level is badged XJ Sport on both the XJ6 and XJ8 variants, and the XJR badge will let you know there’s a supercharger on board, but not which generation you have.

So unless you have a base spec XJ6 or XJ8, the boot badge might not be much help to you.

You can also look at the registration plate of the car to try and work out the year. The XJ6 was produced from 1994 to 1997 and the XJ8 from 1997 to 2002. This means, at least for the UK, an XJ6 number plate should start with M, N, P or R, while an XJ8 number plate should start with R, S, T, V, W, X, Y, 51, 02 or 52. This is ambiguous for R (1997) and of course lots of XJs have custom/vanity plates to disguise their age.
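The year-letter reasoning above can be captured in a small sketch (my own, purely illustrative; the years are the standard UK registration identifiers):

```python
# UK registration identifiers overlapping each car's production run.
XJ6_YEARS = {"M": 1994, "N": 1995, "P": 1996, "R": 1997}
XJ8_YEARS = {"R": 1997, "S": 1998, "T": 1999, "V": 1999, "W": 2000,
             "X": 2000, "Y": 2001, "51": 2001, "02": 2002, "52": 2002}

def plate_suggests(identifier: str) -> str:
    """Guess the model from a prefix letter or two-digit age identifier."""
    in6 = identifier in XJ6_YEARS
    in8 = identifier in XJ8_YEARS
    if in6 and in8:
        return "ambiguous (1997)"  # R-plates could be either model
    if in6:
        return "XJ6"
    if in8:
        return "XJ8"
    return "unknown (or a vanity plate)"
```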

If this still didn’t help, there are some physical differences we can check. Working from front to back, the key differences are:

The XJ6 has rectangular indicator/reflector lenses; the XJ8 has oval ones. This is probably the easiest attribute to look for, and it applies to the reflectors and running lights on the side of the car too. The XJ6 also has oval fog lights, while the XJ8 has round ones.

Slightly more subtle, the XJ6 has Fresnel glass on the main headlamps, while the XJ8 has clear glass.

The XJ6 has a chrome strip along the top of both bumpers. The XJ8 only has an L-shaped chrome strip around the corners. The XJ6 has a squarer front grille, while the XJ8 grille has more rounded corners.

The XJ8 has a V8 badge on the B pillar. Some XJ6s have nothing there, some say 4.0 Litre, some say 4.0 Sport, but none of them say V8! The XJ6 here is a Sovereign, and has a lot more chrome than the base spec XJ8.

The tail lights are subtly different. The XJ6 has a smoked top half, and the lower red half is flat with a matte appearance. The reflector area is square. The XJ8 tail light is not smoked at the top, appearing brighter and slightly rounder, and the reflector area is a smaller rounded square within the lower half, with a more 3D appearance.

Finally, if you get the chance to peep in the window, you can immediately tell the XJ6 and XJ8 apart from their dashboard. The XJ6 has a flat instrument cluster derived from the older XJ40. It has two large dials, four smaller ones and an array of lights and switches. The XJ8 has a simpler dashboard with three recessed binnacles for the dials. Most of the lights have been replaced by a two-line message LCD display within the speedometer.

The centre consoles also differ. The XJ6 has a rectangular bezel around the climate and audio controls, while on the XJ8 the bezel is rounded.

The XJ8 is mine. Many thanks to Will Lyon Tupman for sharing photos of his XJ6 Sovereign. I resorted to a library photo for the XJ6 boot badge.

Jaguar XJ8 X308 rear view mirror replacement

The rear-view mirror used in the 1997–2002 Jaguar XJ8 (X308) and related cars like the Jaguar XK8 (X100) has a light-sensitive electrochromic auto-dimming feature which is unfortunately prone to failure. The mirror develops discoloured patches: the chemical that darkens to dim the mirror tends to move around, causing blotches of brown or black. If you’re really unlucky, the glass can crack and the highly corrosive brown liquid can drip out and damage your interior. These failures seem to happen with age alone, even if the mirror has never been mistreated.

The mirror houses two light sensors (front and back) to know when it should dim. These light sensors also control the automatic headlight function, and on higher/later models there is also a rain sensor that controls the automatic wiper.

The combination of these mirrors being complex and having a high failure rate means they are now scarce, and expensive.

At some point in the production run, the mirrors changed design. As far as I know, there is no way of telling the two mirrors apart externally – the only way to know is to remove the top centre console via the screw in the sunglasses holder and check the colour of the connector. Earlier ones have a 6-pin yellow connector while later ones have an 8-pin white connector.

I’ve needed a replacement mirror for ages but have been holding off due to the high price. One popped up on eBay for a low price recently, so I snapped it up. When it arrived, I realised I’d accidentally bought the white connector type when I actually need the yellow connector type.

There is a lot of confusion on forums about compatibility, whether they can be rewired, whether you can swap the glass over and leave the wires, etc. It is possible to swap the glass over, but the mirror casing is glued together and seems quite hard to open without cracking the glass (especially if you’re clumsy and impatient like me) so I ruled that out.

I was able to find the following information about the wiring of the yellow connector:

Pin  Colour  Function
1    White   +12V IGN
2    Grey    Reverse Interrupt
4    Yellow  Cell (output to exterior dimming mirrors?)
5    Green   Ti S (auto headlight trigger?)

Yellow connector wiring

I couldn’t find corresponding information for the white connector, but by studying where the wires went, I deduced that the blue, red and purple wires were for the rain sensor – which my car didn’t have. Eliminating those three, all the other colours matched up except the brown. Nobody online seems sure what the brown wire is for but plenty of people were claiming it didn’t do anything or was safe to ignore – so I did.

I’m not much good at electronics but I managed to solder together the 5 matching wires and insulate them with heat shrink tube. Then I carefully insulated the cut-off brown, red, blue and purple wires to prevent shorts later on, and then covered the whole lot in more heat shrink tube. For those asking why I didn’t just release the crimped connector: I tried, but it was too hard and I don’t have the right tool.

It seems to work perfectly – if I cover the light sensor with my fingers, the mirror turns blue and dims, and the headlights come on. My car doesn’t have the automatic wipers so they obviously don’t work anyway. I haven’t noticed anything bad happening from leaving the brown wire unconnected.

Dimming in action

So please don’t take my advice as gospel truth, because I’m just a guy with a blog. But in my experience, if you can’t find the right type of spare mirror, you can quite easily swap the yellow and white connectors and have a functional dimming mirror again.