Kubernetes Homelab Part 6: Deployments

Welcome to part 6 of my Kubernetes Homelab series. In the previous posts we’ve discussed the architecture of the hardware, networking, Kubernetes cluster, and infrastructure services. Today we’re going to look at deployment strategies for applications on Kubernetes.

My goal for my homelab is not necessarily 100% automation, but I would like the ability to deploy applications with a consistent config, redeploy them if necessary, manage upgrades, have versioned config, handle secrets safely, and to reduce the scope for human error.

Full CD or GitOps tools like Argo CD, Flux CD and others are a bit of a heavy hammer for this homelab environment, so let’s have a look at what it’s possible to achieve in a very lightweight solution.


Helm brands itself as a package manager for Kubernetes. Personally, I think calling it a package manager is a bit of a stretch, as it lacks key functionality that we've come to expect from distro and language-specific package managers such as yum and pip.

What Helm can do is install & upgrade an application with a config file (which it calls a values file), which we can keep in git.

So I have created a private git repo called kubernetes-manifests (in hindsight I could’ve chosen something shorter), which contains a directory for each app I want to deploy. That directory contains a README to explain what the app is, a values file, and a deployment script.

├── deploy.sh
├── README.md
└── values.yaml

Every Helm chart ships with a values.yaml file that contains all possible values (i.e. config options) so I usually copy that file into my repo and edit it for my use case, removing redundant options to keep the file short.

The deployment script just wraps a Helm command so I don’t forget which args to pass next time I want to deploy.

Let’s have a look at a real example: my About Me page, which I use as a biosite to pull some links together. It deploys an app called Homer.

helm upgrade -i --create-namespace \
    -n about about \
    -f values.yaml \

My values.yaml is derived from the upstream values.yaml, with my config added – which is just a yaml structure listing all the links that appear on the live site.

Using helm upgrade -i instead of helm install just tells Helm to perform an upgrade, or do an installation if there is no existing deployment. I can safely run the deploy.sh script at any time and it will install the app if it needs to, upgrade the installation if necessary, configure it with new values if there are any, otherwise do nothing.
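For completeness, the whole deploy.sh amounts to that one command plus a little guard rail. This is a sketch rather than my exact script, and the chart reference on the last line is an assumption (Homer was published in the k8s-at-home chart repository):

```shell
#!/usr/bin/env bash
# deploy.sh -- sketch only; the chart reference is an assumption
set -euo pipefail
cd "$(dirname "$0")"   # always run from the app's own directory

helm upgrade -i --create-namespace \
    -n about about \
    -f values.yaml \
    k8s-at-home/homer
```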

This satisfies my requirements of being able to make repeatable deployments, redeploy apps in the case of cluster loss, upgrade them at will, and keep my config in version control.


The above example of my About Me app is a simple one because no secrets are required to deploy. What if I needed to provide the app with credentials, API keys or other secrets? I wouldn’t want to store those in git.

I use Helm Secrets with age to be able to store secrets encrypted in a git repo. I won’t go into the full procedure for setting it up because that is documented, but let’s have a look at how it works in practice.
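As a quick reminder of the shape of that setup (not a full guide): helm-secrets drives sops under the hood, and sops reads its encryption settings from a .sops.yaml at the repo root. Something like this, where the recipient is a placeholder for your own age public key:

```yaml
# .sops.yaml -- sketch; replace the recipient with your own age public key
creation_rules:
  - path_regex: secrets\.yaml$
    age: "age1<your-public-key>"
```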

For this example, let’s look at a photo sharing app called PhotoPrism. The default values.yaml requires you to set the root password. It’s mixed in with a bunch of other config values:

# -- environment variables. See docs for more details.
env:
  # -- Set the container timezone
  TZ: UTC
  # -- Photoprism storage path
  PHOTOPRISM_STORAGE_PATH: /photoprism/storage
  # -- Photoprism originals path
  PHOTOPRISM_ORIGINALS_PATH: /photoprism/originals
  # -- Initial admin password. **BE SURE TO CHANGE THIS!**
  PHOTOPRISM_ADMIN_PASSWORD: "insecure"

It’s possible to encrypt the entire values.yaml but I prefer to encrypt only the secret values, so it’s still easy to read the non-secrets. Helm lets you supply multiple values files, so let’s split out the secrets into a separate file called secrets.yaml, leaving the publicly readable values in values.yaml. The env hash will be merged by Helm upon deployment.

# secrets.yaml
env:
  # -- Initial admin password. **BE SURE TO CHANGE THIS!**
  PHOTOPRISM_ADMIN_PASSWORD: "insecure"

# values.yaml
# -- environment variables. See docs for more details.
env:
  # -- Set the container timezone
  TZ: UTC
  # -- Photoprism storage path
  PHOTOPRISM_STORAGE_PATH: /photoprism/storage
  # -- Photoprism originals path
  PHOTOPRISM_ORIGINALS_PATH: /photoprism/originals

Now we can encrypt secrets.yaml without affecting the readability of values.yaml.

$ helm secrets enc secrets.yaml 
Encrypting secrets.yaml
Encrypted secrets.yaml

The contents of the file are encrypted and safe to check into git:

env:
    PHOTOPRISM_ADMIN_PASSWORD: ENC[AES256_GCM,data:7Tt9mKkZ+U7zAtskQw==,iv:37AJkEmUk8VaA3wSaH5jPc2VwIB/hXCxM/FFxa9fPTc=,tag:S4hts6NO6zDA+DGsttIdoQ==,type:str]
sops:
    kms: []
    gcp_kms: []
    azure_kv: []
    hc_vault: []
    age:
        - recipient: age1xeguyqecm3zx2talea7jfawpgzfymula3f9e7cyr76czeh3qdqhs6ap9sp
          enc: |
            -----BEGIN AGE ENCRYPTED FILE-----
            -----END AGE ENCRYPTED FILE-----
    lastmodified: "2023-04-18T14:25:13Z"
    mac: ENC[AES256_GCM,data:F3pV6Ly+eP5ZfMTerWxfrgOny/CK6O2M3bhAQLM6+SxpmK2Ya+9rDYskcKSLaq8w7WVWJ/XAz3plg2Gx8gCAYJ+2SMgo6TVsENO8tu/xMRSgmbr6NViOlHTvNc/EcYl5NOj420r8TmF31B5OArvH4BSfoTijphKppnv/546hUco=,iv:2uwIeUXUVZigJR0j0FH2gYt4KlXAx9OMHh0yx52NqMw=,tag:ET16bfdlua7E8c0n9tGC3Q==,type:str]
    pgp: []
    unencrypted_suffix: _unencrypted
    version: 3.7.3

We can easily view or edit this file with helm secrets view secrets.yaml or helm secrets edit secrets.yaml.

The last piece of the puzzle is to tweak the deploy script deploy.sh so it can decrypt our secrets on the fly. We do this by changing helm upgrade -i to helm secrets upgrade -i and specifying two values files with -f. Values files on the right override ones on the left. In this case, both values files specify an env key, and the values are merged.
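To make the merge concrete, here's a simplified sketch of the two files and the effective values Helm computes (keys trimmed for brevity; the password is obviously a dummy):

```yaml
# values.yaml (public)
env:
  PHOTOPRISM_STORAGE_PATH: /photoprism/storage

# secrets.yaml (stored encrypted, decrypted on the fly)
env:
  PHOTOPRISM_ADMIN_PASSWORD: s3cret

# effective values after `-f values.yaml -f secrets.yaml`
env:
  PHOTOPRISM_STORAGE_PATH: /photoprism/storage
  PHOTOPRISM_ADMIN_PASSWORD: s3cret
```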

helm secrets upgrade -i --create-namespace \
    -n photoprism photoprism \
    -f values.yaml -f secrets.yaml \

Keeping up to date

We have the facility to upgrade a deployed app by first running

helm repo update

to update our Helm charts, then simply

./deploy.sh

to run the Helm upgrade from our deploy script.
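With a lot of apps, that re-deploy step is loopable. A sketch, assuming the repo layout described earlier (one directory per app, each with an executable deploy.sh); run_all is a hypothetical helper name, not part of any tool:

```shell
#!/usr/bin/env sh
# Re-run every app's deploy script under a manifests repo.
run_all() {
    root=$1
    for d in "$root"/*/; do
        # only descend into directories that actually have a deploy script
        [ -x "${d}deploy.sh" ] && ( cd "$d" && ./deploy.sh )
    done
}

# Usage (after `helm repo update`):
#   run_all ~/kubernetes-manifests
```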

As this is just my homelab, I don’t mind running upgrades manually, and I don’t need a fully automated solution. But it would be nice to know that there are updates available for my charts, without having to go checking manually.

There is a tool called Nova which can do exactly this.

$ nova find --format table --show-old
Release Name            Installed    Latest     Old     Deprecated
============            =========    ======     ===     ==========
about                   8.1.5        8.1.6      true    false
oauth2-proxy            6.8.0        6.10.1     true    false
graphite-exporter       0.1.5        0.1.6      true    false
node-problem-detector   2.3.3        2.3.4      true    false
prometheus-stack        45.8.0       45.10.1    true    false
rook-ceph               v1.11.2      1.11.4     true    false
rook-ceph-cluster       v1.11.3      1.11.4     true    false

This output lists the outdated Helm deployments on my cluster (in the current Kubernetes context). Nova doesn’t use local Helm chart repositories – it checks ArtifactHub as an index of Helm charts so any charts you want to check must be published there.

To update your Helm deployments, don’t forget to freshen your local Helm repositories so you have the latest charts:

helm repo update

At the moment, running Nova is a manual step that I do as and when I remember, but it does support output in different formats and could easily be run as a cron job or metrics exporter in the cluster.
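The simplest automation is probably cron itself: run Nova on a schedule and let cron mail you the table. A hypothetical crontab entry (the binary path and schedule are assumptions; the flags are the same ones used above):

```shell
# m h dom mon dow  command
0 9 * * 1  /usr/local/bin/nova find --format table --show-old
```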

I have started writing a Nova exporter to get the output of Nova into Prometheus so I can get alerts when I have outdated deployments, but it’s not finished yet. I’ll share here when I’ve had some time to finish it off.

Kubernetes Homelab Part 5: Hyperconverged Storage (again)

Part 4 of this series was supposed to cover hyperconverged storage with OpenEBS and cStor, but while I was in the middle of writing that guide, it all exploded and I lost data, so the blog post turned into an anti-recommendation for various reasons.

I’ve now redesigned my architecture using Rook/Ceph and it has had a few weeks to bed in, so let’s cover that today as a pattern that I’m happy recommending.

Ceph architecture

First, let’s have a brief look at Rook/Ceph architecture and topology, and clear up some of the terminology. I’ll keep it as short as I can – if you need more detail, check out the Ceph architecture guide and glossary.

Ceph is a long-standing software-defined clustered storage system, and also exists outside of Kubernetes. Rook is a Kubernetes operator that installs and manages Ceph on Kubernetes. On Kubernetes, people seem to interchangeably use the terms Rook, Ceph, and Rook/Ceph to refer to the whole system.

Ceph has many components to make up its stack. We don’t need to know about all of them here – partly because we don’t need to use all of them, and partly because Rook shields us from much of the complexity.

The most fundamental component of Ceph is the OSD (object storage daemon). Ceph runs exactly one OSD for each storage device available to the cluster. In my case, I have 4 NVMe devices, so I also have 4 OSDs – one per node. These run as pods. The OSD can only claim an entire storage device, separate from what you’re booting the node OS from. My nodes each have a SATA SSD for the OS and an NVMe for Ceph.

The part of my diagram that I have simply labelled Ceph Cluster consists of several components, but the key components are Monitors (mons). Monitors are the heart of the cluster, as they decide which pieces of data get written to each OSD, and maintain that mapping. Monitors run as pods and 3 are required to maintain quorum.

At its core, a Ceph cluster has a distributed object storage system called RADOS (Reliable Autonomic Distributed Object Store) – not to be confused with S3-compatible object storage. Everything is stored as a RADOS object. In order to actually use a Ceph cluster, an additional presentation layer is required, and 3 are available:

  • Ceph Block Device (aka RADOS Block Device, RBD) – a block device image that can be mounted by one pod as ReadWriteOnce
  • Ceph File System (aka CephFS) – a POSIX-compliant filesystem that can be mounted by multiple pods as ReadWriteMany
  • Ceph Object Store (aka RADOS Gateway, RGW) – an S3-compatible gateway backed by RADOS
Ceph cluster architecture

As I already have a ReadWriteMany storage class provided by TrueNAS, and I don’t need S3 object storage, I’m only going to enable RBD to provide block storage, mostly for databases which don’t play nicely with NFS.


As everything else on my cluster is deployed with Helm, I will also deploy Rook with Helm. It’s a slightly strange method as you have to install two Helm charts.

The first chart installs the Rook Ceph Operator, and sets up Ceph CRDs (custom resource definitions). The operator waits to be fed a CR (custom resource) which describes a Ceph cluster and its config.

The second chart generates and feeds in the CR, which the operator will use to create the cluster and provision OSDs, Monitors and the other components.
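In practice the two-chart install looks something like this sketch. The chart repo URL is Rook's published one; the release names, namespace and values filenames are my assumptions for illustration:

```shell
# Chart 1: the Rook Ceph Operator plus the Ceph CRDs
helm repo add rook-release https://charts.rook.io/release
helm upgrade -i --create-namespace -n rook-ceph \
    rook-ceph rook-release/rook-ceph -f operator-values.yaml

# Chart 2: generates the CephCluster CR that the operator acts on
helm upgrade -i -n rook-ceph \
    rook-ceph-cluster rook-release/rook-ceph-cluster -f cluster-values.yaml
```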

I’m using almost entirely the default values from the charts. The only things I have customised are to:

  • enable RBD but disable CephFS and RGW
  • set appropriate requests & limits for my cluster (Ceph can be quite hungry)
  • define which devices Ceph can claim

By default, Ceph will claim all unused block devices on all nodes, which is a reasonable default. As all my Ceph devices are NVMe, there is only one in each node, and I'm not using NVMe for anything else, I can play it a bit safer by disabling useAllDevices and specifying a deviceFilter. I would need to change this if I had a node with multiple NVMe devices, or if I wanted to introduce a SATA device into the cluster.
    useAllNodes: true 
    useAllDevices: false
    deviceFilter: "^nvme0n1"
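For context, in the rook-ceph-cluster chart those keys sit under the cluster spec's storage section. A sketch of the relevant fragment of a values file:

```yaml
cephClusterSpec:
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^nvme0n1"
```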


As I mentioned above, a Ceph cluster can consume quite a lot of CPU and memory resources (which is one of the reasons I started off with cStor). Here’s a quick snapshot of the actual CPU and memory usage by Ceph in my cluster, which is serving 14 Ceph block volumes to a handful of not-very-busy databases.

$ kubectl top po -n rook-ceph
NAME                                         CPU   MEMORY
csi-rbdplugin-6s42w                          1m    80Mi            
csi-rbdplugin-l75kh                          1m    23Mi            
csi-rbdplugin-provisioner-694f54898b-67nnf   1m    47Mi            
csi-rbdplugin-provisioner-694f54898b-s9vpx   7m    108Mi           
csi-rbdplugin-pvhcx                          1m    76Mi            
csi-rbdplugin-vt7gs                          1m    20Mi            
rook-ceph-crashcollector-kube05-65d87b7d8b   0m    6Mi             
rook-ceph-crashcollector-kube06-64b798c4bc   0m    6Mi             
rook-ceph-crashcollector-kube07-887878456    0m    6Mi             
rook-ceph-crashcollector-kube08-688f948ddf   0m    6Mi             
rook-ceph-exporter-kube05-b6d6c6c9c-splt6    1m    16Mi            
rook-ceph-exporter-kube06-f9757c848-j47qm    1m    6Mi             
rook-ceph-exporter-kube07-5bdbb94f47-8kt8d   2m    16Mi            
rook-ceph-exporter-kube08-c98496b8b-8tnrz    3m    16Mi            
rook-ceph-mgr-a-6cb6484ff7-9gh8r             54m   571Mi           
rook-ceph-mgr-b-686bcb7f66-5nkvp             70m   446Mi           
rook-ceph-mon-a-86cbcbcfc7-6bsn6             28m   428Mi           
rook-ceph-mon-b-579f857b7f-rkkpc             23m   407Mi           
rook-ceph-mon-d-59f97f97-9r4b8               25m   427Mi           
rook-ceph-operator-6bcf46667-gv426           39m   57Mi            
rook-ceph-osd-0-77c56c774c-2jtff             24m   959Mi
rook-ceph-osd-1-67df8f6ccd-4qbrw             28m   1386Mi          
rook-ceph-osd-2-66cf8c8f55-6m6zt             31m   1310Mi          
rook-ceph-osd-3-74f794b458-hbhvr             31m   1296Mi          
rook-ceph-tools-c679447fc-cjpcs              3m    2Mi            

There are quite a few pods in this deployment but the heaviest memory usage is by the OSDs, which consume over 1Gi each (my nodes have 16Gi RAM each). Bear this in mind if you’re running on a more lightweight cluster.

None of the pods have high CPU usage, but the Monitor pods tend to spike a little during activity (such as provisioning a new volume).

To save you the adding up, this is a total of 375m CPU (or 2% of the total cluster CPU) and 7721Mi memory (or 12% of the total cluster memory). Bear this in mind… it’s not exactly lightweight.


The Rook/Ceph Helm chart comes with metrics endpoints and Prometheus rules which I enabled. I then added the Ceph Cluster Grafana dashboard for an out-of-the-box dashboard.

My Ceph dashboard in Grafana

The only problem I have found with this dashboard is the Throughput and IOPS counters towards the top-left usually display 0 even when this is not true, and intermittently show the real numbers, before returning to zero. Likewise, the IOPS and Throughput graphs in the middle always register 0, and don’t record the spikes. I haven’t had a chance to look at this yet.

You can see that my cluster isn’t being stressed at all, and I’m sure any storage experts are laughing at my rookie numbers. My OSDs are inexpensive consumer-grade NVMe devices, each of which claims performance up to 1700 MB/s throughput and 200,000 IOPS, and a clustered system can theoretically exceed this, so I’m nowhere near any limits.

One thing to note is the available capacity. Ceph aggregates the size of all OSDs into a pool (4 × 256GB ≈ 1TB) but doesn’t account for the fact that it stores multiple replicas of each object (the replica count is configurable). The default is 3 replicas, so a 1MB object consumes 3MB of the total capacity, and my 1TB pool will actually store about 333GB of data.
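The arithmetic is simple enough to sanity-check (ignoring Ceph's own metadata overhead; usable_gb is just an illustrative helper, not a real tool):

```shell
#!/usr/bin/env sh
# Usable capacity of a replicated Ceph pool:
# raw capacity divided by the pool's replica count.
usable_gb() {
    raw_gb=$1
    replicas=$2
    echo $(( raw_gb / replicas ))
}

usable_gb 1000 3   # ~1TB raw across 4 OSDs, 3 replicas -> 333
```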


It’s hard to make a meaningful assessment of the support available for Rook/Ceph, but as lack of support was a key reason for abandoning OpenEBS/cStor it makes sense to have a look.

Ceph is a more mature product, and its documentation is more complete. There are pages about disaster recovery and detailed guides on how to restore/replace OSDs that break. There is also the Ceph Toolbox which provides a place to run the ceph command to perform a variety of maintenance and repair tasks.

Remember my problem with cStor wasn’t cStor’s fault – it was the Kubernetes control plane that lost quorum, and cStor used Kubernetes’ data store for its own state. This made it very hard to recover a cStor cluster. I was then unable to create a new cStor cluster and adopt the existing volumes, and no support was available to help me do that.

The Kubernetes control plane could explode again, so how would this affect Ceph? Sneakily, Ceph doesn’t use the Kubernetes data store for its state – it keeps it in /var/lib/rook on the host filesystem of each node. In the event of total cluster loss, it would be possible to create a new Kubernetes cluster and for Ceph to discover its state from the node filesystem.

$ tree -d /var/lib/rook
├── exporter
├── mon-c
│   └── data
│       └── store.db
└── rook-ceph
    ├── crash
    │   └── posted
    ├── d4ec2a82-4b19-4b03-a4e0-7951a45eec35_47391857-0c95-4b47-9ab9-41721e101eff
    ├── e7b6c3ad-b460-4e77-9b5f-3522bc69c1e8_0d464f95-e171-4bed-b785-03c665b8e411
    └── log

In fact, as Ceph can work outside of Kubernetes, if for some reason a new Kubernetes cluster can’t be created, it should be possible to install Ceph right on the node OS, tell it where to find its state, and tell it where the local NVMe devices are. Ceph block devices can be mounted manually on the node as /dev/rbd0 similarly to iSCSI. It’s sketchy, but it should be OK to temporarily reconstruct a Ceph cluster to pull the data off it. I’m not saying I would enjoy doing it, but it would be an option in an emergency.

Lastly, I know of quite a few large corporations using Ceph, and it also forms the basis of Red Hat’s OpenShift Data Foundation product. This gives me confidence in its reliability.

Kubernetes Homelab Part 4: Hyperconverged Storage

Sorry it’s taken a while to get to the next part of my blog series. This section was supposed to be about hyperconverged clustered storage in my cluster, but I unfortunately ran into a total cluster loss due to some bugs in MicroK8s and/or dqlite that maintainers haven’t managed to get to the bottom of.

I was able to re-associate the volumes provisioned on my off-cluster storage with my rebuilt-from-scratch cluster. The volumes provisioned on the containerised, clustered storage were irrecoverably lost.

Therefore, I have decided to rework this part of my blog series into a cautionary tale – partly about specific technologies, and partly to push my pro-backup agenda.

It’s worth looking at the previous posts in the series for some background, especially the overview.

My design

Let’s have a look at my original design for hyperconverged, containerised, clustered storage. And before we get stuck in, let me quickly demystify some of the jargon:

  • hyperconverged means the storage runs on the same nodes as the compute
  • containerised means the storage controller runs as Kubernetes pods inside the cluster
  • clustered means many nodes provide some kind of storage hardware, and your volumes are split up into replicas on more than one node, so you can tolerate a node failure without losing a volume

Several clustered storage solutions are available. Perhaps Rook/Ceph is the best known, but as MicroK8s packages OpenEBS, I decided to use that. The default setup you get if you simply do microk8s enable openebs creates a file on the root filesystem and provisions block volumes out of that file. In my case, that file would have ended up on the same SATA SSD as the OS, and I didn’t want that.

So I went poking at OpenEBS, and found that it offers various storage backends: Mayastor, cStor or Jiva. Mayastor is the newest engine, but has higher hardware requirements. In the end I decided on cStor as it appeared to be lightweight (i.e. didn’t consume much CPU or memory) and was also based on ZFS, which is a technology I already rely on in my TrueNAS storage. I ended up deploying OpenEBS from its Helm chart.

This diagram is quite complex, so let me walk you through it, starting at the bottom. Each physical node has an M.2 NVMe storage device installed, and this is separate from the SATA SSD that runs the OS.

When you install OpenEBS, it creates a DaemonSet of a component called Node Disk Manager (NDM), which runs on each node, looks for available storage devices, and makes them available to OpenEBS as BlockDevices. When you have several BlockDevices, you can create a storage cluster.

From this cluster, you can provision Volumes, which will be replicated across multiple NVMe devices (by default you get 3 replicas). Creating a Volume also creates a Pod that acts as an iSCSI target for the volume. The Volume can now be mounted by workload Pods from any node in the usual way. It’s important to note that the workload Pod does not have to be on the same node as the Volume Target, and the three VolumeReplicas are placed on the nodes with the most capacity.

Architecture of OpenEBs/cStor on Kubernetes

The problem

MicroK8s uses dqlite as its cluster datastore instead of etcd like most other Kubernetes distributions. I ran into some problems with MicroK8s where dqlite started consuming all CPU, running at high latency and eventually silently lost quorum. The Kubernetes API server then also silently went read-only, so any requests to change cluster state would silently fail, and any requests to read cluster state would effectively be snapshots from the moment the cluster went read-only, and might vary depending on which of the dqlite replicas was being queried.

The further complication is that, as a clustered storage engine, cStor uses CRDs to represent its objects and therefore relies on the Kubernetes API server and the underlying datastore to track its own volumes, replicas, block devices, etc. In effect, cStor then also lost quorum.


I followed through the how to restore lost quorum guide for MicroK8s, several times, but it never worked for me. I worked with MicroK8s developers for a while on recovery.

Even without cluster quorum, I attempted to recover my cStor volumes. However, actions like creating a pod to mount a volume rely on having a kube API that is not read only!

Eventually I had no other choice but to reset the cluster and start from scratch. I made sure I did not wipe the NVMe devices, and assumed I would be able to reassociate them on a new cluster. I exported all of the OpenEBS/cStor CRs to yaml files as a backup.

After the cluster was rebuilt, I reimported the BlockDevice resources but doing so did not discover the NVMe drives as they seemed to change UUID in the new cluster. I tweaked my yaml to adopt them under their new names, but I was not able to rebuild them as an OpenEBS cluster and rediscover my old volumes.

The documentation for cStor is quite minimal, and focuses on installing it rather than fixing it. The only relevant page is the Troubleshooting page, and it didn’t cater to my problem. Which seems surprising, because a common question with any storage system must be “how do I get my stuff back when it goes wrong?”

I contacted the OpenEBS community via Slack and my question was ignored for a week, despite my nudges. Eventually, an engineer contacted me and we worked through some steps, but were not able to reassociate a previous cluster’s cStor volumes with a new cluster.

All my cStor volumes were either MariaDB or PostgreSQL databases, and fortunately I had recent backups of all of them and was able to create new volumes on TrueNAS external storage (slower, but reliable) and restore the databases.


  • First and foremost, take backups. Backups saved my bacon here in what would otherwise have been a significant data loss. I’ll cover my backup solutions in a later part of this blog post series.
  • Volume snapshots are not backups. cStor provides volume snapshot functionality and it is very easy to take snapshots automatically. However, using those snapshots requires a functioning kube API.
  • The control plane is fragile. It doesn’t take a lot for your datastore to lose quorum, and then all bets are off.
  • I advise against hyperconverged storage in your homelab, unless you really need it. As soon as there is persistent data stored in your cluster, it stops being ephemeral and you need to treat it as carefully as a NAS. It’s fine for caches and things that can be regenerated.
  • Check support arrangements before you commit to a product. MicroK8s developers have been responsive and helpful. However, cStor support has been useless. The product seems mature, and the website looks shiny and makes claims about it being enterprise-grade, but the recovery documentation was useless and nobody was willing to help. Most of the chatter in the Slack channel is around Mayastor, so this must be the new shiny that gets all the attention.

Next steps

The root cause of this problem was dqlite and MicroK8s quorum. At the moment, I don’t yet understand why this incident happened and I don’t know how to prevent it from happening again. I’m not the only person to have been bitten by it.

For the time being, I restored like-for-like on MicroK8s even though I don’t really trust dqlite any more. I’ve upped the frequency of my backups in the expectation that it will probably happen again at some point.

I think I’ve decided that if this happens again, I will consider rebuilding on K3s instead of MicroK8s, as they use the more standard etcd datastore.

I’m not currently using the NVMe disks, but it seems a waste just to leave them there doing nothing. I will probably fiddle with hyperconverged storage again one day – maybe either Mayastor or Rook/Ceph, both of which seem to get more attention than cStor.

My MIDI pipe organ workflow


I’ve written a couple of times about playing about with a MIDI-enabled pipe organ and I’ve shared some of my results on YouTube. Today I want to say a bit about my workflow because a few people have asked, and it is somewhat complicated but hopefully interesting.

This isn’t supposed to be instructional: this is just some notes about the way I’ve found that works for me. I’ll give some examples and demonstrate progress as we go along by working on a public domain piece, Prelude and Fugue in C major (BWV 553) by Johann Sebastian Bach.


If you want to play along with this guide, you will need:

  • a pipe organ with MIDI ports
  • an installation of MuseScore
  • an installation of OrganAssist, configured for your organ
  • an installation of GrandOrgue, configured for your organ (optional)

Obtain score

The first thing I do when I decide I want to make the organ play something is obtain a score. I have three options:

Find and download a score on MuseScore

As well as being a notation editor app, MuseScore allows musicians to upload their own compositions to musescore.com, which also hosts various public domain works. There are also some copyrighted works with various licensing options.

When I’ve found an arrangement I like, as I’m a paid-up MuseScore Pro member, I can download the score directly in MuseScore format.

Here’s my score for BWV 553 on MuseScore, and for reference here’s the first line.

First line of BWV 553
Enter a score from a physical book into MuseScore

If the work is only in physical form (a book or sheet score) then the only option is to manually enter it into MuseScore. There are various options for scanning it and getting MuseScore to “recognise” the notes, but I have found this inaccurate, and it takes as long to correct the mistakes as it does to just enter the music by hand.

I created my MuseScore version of the score by manually entering the notation from a physical book.

Import a plain old MIDI file into MuseScore

The last option is to import an ordinary MIDI file into MuseScore. The success of this method varies wildly depending on the quality and complexity of the original MIDI file, but you can often end up with an unreadable score that needs a lot of cleanup.


No matter which of the three methods for getting a score you chose, you should now have a score in MuseScore. You will likely have to do some editing and arrangement to make it suitable for pipe organ.

Organ music arranged for humans would typically be written on 2 or 3 staves – right hand, left hand and optionally feet – and it is up to the organist to interpret the score and decide which manual (keyboard) to play each section on. There are often (but not always) written notes to tell the organist what to do.

Directions to the organist about choice of manual

But to a computer, an organ is several instruments – each manual (keyboard) and the pedalboard is its own instrument. So we need to arrange our score in this way – one stave for each manual, and we must pre-determine which manual each section will be played back on.

The specific organ I am arranging for has a Great manual, a Swell manual and a Pedal, so I need to arrange my score for 3 parts, the Swell and Great parts having 2 staves each and the Pedal part having 1 stave. In my own lingo I refer to this as SSGGP.

Here’s my version of BWV 553 re-arranged for SSGGP, and the first line for quick reference again.

First line of BWV 553, arranged for OrganAssist

Note that I have had to take out the convenient repeat and interpret the 1st on Sw, 2nd on Gt direction as playing the entire section through twice, once on each manual.

Finally, I export the MuseScore project as a MIDI file, which can be consumed by OrganAssist.

Add stops

Now I import this MIDI file into the OrganAssist library. The first thing it asks me to do is map the MIDI tracks to the organ manuals. We exported as SSGGP so that’s how we’ll set the mapping for import.

Importing a MuseScore score into OrganAssist
Mapping the SSGGP staves to organ manuals in OrganAssist

If we play this back now, the organ will make no sound, because although the keys are being pressed, no stops are drawn. We need to tell OrganAssist which stops we want it to use, which is something the human organist would decide when they played the piece on a real organ. In this case, the front of the book of Eight Short Preludes and Fugues gives this advice:

Suggested registrations for BWV 553

BWV 553 has a direction of mf, so let’s set those stops accordingly. Following the suggested registrations in the table, and knowing what I have available on the organ at St Mary’s, I’ve chosen these stops:

To add these stops, we will use the OrganAssist editor. You can see the notes in a “piano roll” style view. Right click in the upper part of the screen to add stop changes and coupler changes. This obviously depends on your specific organ.

The editor view in OrganAssist shows notes in the main part of the screen, colour coded by manual (green for Swell, blue for Great, purple for Pedal). The top area is for events such as switching on or off stops, couplers, tremulants and any other controls the organ might have. Here I’ve turned on a bunch of stops at the beginning, and about two-thirds of the way across I’ve switched off the Swell to Pedal coupler, and switched on the Great to Pedal coupler, so the pedal notes are always coupled to the manual that is being currently played with the hands.

OrganAssist score editor, showing notes in the main display and stop/coupler events at the top

This step can be done away from the actual organ, as OrganAssist has rudimentary sound output which is sufficient to check for wrong notes, etc.

Playback on organ

If everything so far has been done properly, I should be ready for a first listening. No doubt there will be snags that show up when I listen to it, and I’ll probably want to make some tweaks.

The organ may be MIDI-controlled, but the mechanical components are still made of wood and leather and operated by springs, solenoids and pressurised air, so a little bit of latency creeps in.

This video shows the score being played back on the organ at St Mary’s Church, Fishponds.

Changes to stops and small changes to durations of notes are easy to tweak in OrganAssist. Anything more usually means going back to MuseScore, editing there, and doing the export and import process again.

Playback on GrandOrgue

As I said above, OrganAssist only offers rudimentary playback when not attached to a real organ. It’s good enough for basic testing, but not much good for hearing what the music might actually sound like. Sure, I can go into the church and play the organ sometimes, but it would be nice to have an approximation of the sound at home.

This is where GrandOrgue comes in. It’s a Virtual Pipe Organ (VPO): a software recreation of a pipe organ which receives input via MIDI – just like the real thing!

GrandOrgue uses real recordings of every single pipe on a real organ. Together these are known as a sampleset. Various samplesets are available online, some free, and some commercial. I haven’t (yet) had a chance to sample the organ at St Mary’s, so for now I am using a composite sampleset with similar-sounding stops taken from two free samplesets (Friesach by Piotr Grabowski, and Skinner op. 497 by Sonus Paradisi), and a basic graphical interface created with Organ Builder.

It takes a few minutes to configure a GrandOrgue organ to map the stop on/off events and so on, but once this is done, OrganAssist can play back through GrandOrgue via a MIDI loopback port and make a surprisingly realistic sound. I can now make meaningful decisions at home about which stops to add to my OrganAssist scores.

In this video, OrganAssist (in the background) is “playing” the virtual organ by sending MIDI events, which GrandOrgue (in the corner) receives and turns into sound using samples of real organ pipes.

I think this is a pretty good approximation of the real organ at St Mary’s – certainly good enough for playing around with at home.

Kubernetes Homelab Part 3: Off-Cluster Storage

Welcome to part 3 of the Kubernetes Homelab guide. In this section we’re going to look at how to provide off-cluster shared storage. If you haven’t read the other parts of this guide, I recommend you check those out too.

Out of the box, MicroK8s does provide a hostpath storage provider but this only works on a single-node cluster. It basically lets pods use storage within a subdirectory on the node’s root filesystem, so this obviously isn’t going to work in a multi-node cluster where your workload could end up on any node.

It’s important to me that any storage solution I choose is compliant with CSI, the Kubernetes framework for storage drivers. This allows you to simply tell Kubernetes that your pod requires a 10GB volume, and Kubernetes goes off and talks to its CSI driver, which provisions and mounts your volume automatically. This isn’t your typical fileserver.


So I decided to go with TrueNAS SCALE (technically I started with TrueNAS CORE and then I migrated to TrueNAS SCALE). TrueNAS is a NAS operating system which uses the OpenZFS filesystem to manage its storage. By its nature, ZFS supports nested volumes and is ideal for this application.

I’m running a fairly elderly HP MicroServer N40L with 16GB memory and 4x4TB disks in a RAID-Z2 vdev, for a total of 8TB usable storage. It’s not the biggest or the fastest, but it works for me.

HP MicroServer N40L

Democratic CSI

The magic glue that connects Kubernetes and TrueNAS is a project called Democratic CSI, which is a CSI driver that supports various storage appliances, including TrueNAS.

Note: Democratic CSI originally shipped a driver called freenas-nfs, which required SSH access to the NAS. For users running TrueNAS SCALE, there is a newer driver called freenas-api-nfs, which does not require SSH and does all its work via an HTTP API. As I am running TrueNAS SCALE, I will deploy the freenas-api-nfs driver.

There are some steps to set up the root volume on your TrueNAS appliance but I wrote about these before, and they are pretty much the same, so please refer to my TrueNAS guide. There are also some Democratic CSI prerequisites you need to install on your Kubernetes nodes before deploying.
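The main node-side prerequisite for NFS-backed volumes is the NFS client tools, which (going by the democratic-csi docs for Debian-family distros) means installing one package on every node:

```shell
# Install the NFS client on every Kubernetes node so the CSI driver can
# mount NFS shares from TrueNAS (the package is nfs-utils on RHEL-family
# distros instead)
sudo apt-get update
sudo apt-get install -y nfs-common
```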

I’m installing via Helm, and the values file needed is quite complex as it is drawn from two upstream examples: the generic values.yaml for the Helm chart, and some more specific options for the freenas-api-nfs driver.
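Before the install will work, Helm needs to know about the democratic-csi chart repository. The URL below is the one the project publishes at the time of writing – check their README if it has moved:

```shell
# Register the democratic-csi chart repository and refresh the local index
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
```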

This is the local values.yaml I have come up with for my homelab:

csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.nfs-api"

driver:
  config:
    driver: freenas-api-nfs
    httpConnection:
      protocol: http
      host: truenas.example.com  # substitute your NAS address
      username: root
      password: mypassword
      port: 80
      allowInsecure: true
    zfs:
      datasetParentName: hdd/k8s/vols
      detachedSnapshotsDatasetParentName: hdd/k8s/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0
    nfs:
      shareHost: truenas.example.com  # substitute your NAS address
      shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: root
      shareMapallUser: ""
      shareMapallGroup: ""

node:
  # Required for MicroK8s
  kubeletHostPath: /var/snap/microk8s/common/var/lib/kubelet

storageClasses:
  - name: truenas
    defaultClass: true
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: nfs
    mountOptions:
      - noatime
      - nfsvers=4

volumeSnapshotClasses:
  - name: truenas

And it is installed like this:

helm upgrade \
    --install \
    --create-namespace \
    --values values.yaml \
    --namespace democratic-csi \
    truenas democratic-csi/democratic-csi


Once the deployment has finished, watch the pods until they have spun up. Expect to see one csi-node pod per node, and one csi-controller.

[jonathan@latitude ~]$ kubectl get po -n democratic-csi
NAME                                                 READY   STATUS    RESTARTS   AGE
truenas-democratic-csi-node-rkmq8                    4/4     Running   0          9d
truenas-democratic-csi-node-w5ktj                    4/4     Running   0          9d
truenas-democratic-csi-node-k88cx                    4/4     Running   0          9d
truenas-democratic-csi-node-f7zw4                    4/4     Running   0          9d
truenas-democratic-csi-controller-54db74999b-5zjv2   5/5     Running   0          9d

Check to make sure there’s a truenas StorageClass:

[jonathan@latitude ~]$ kubectl get storageclasses
NAME                PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
truenas (default)   org.democratic-csi.nfs-api   Retain          Immediate           true                   9d

Then apply a manifest to create a PersistentVolumeClaim, which should provision a volume in TrueNAS:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-nfs
spec:
  storageClassName: truenas
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Check to make sure it appears and is provisioned correctly:

[jonathan@latitude ~]$ kubectl get persistentvolumeclaim
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim-nfs   Bound    pvc-ac9940c4-29a8-4056-b0bf-d8ac0dd05beb   1Gi        RWX            truenas        15s

You should be able to see a Dataset and a corresponding Share for this volume in the TrueNAS web GUI:

Dataset details in TrueNAS UI

Finally we can create a Pod that mounts this PersistentVolume to make sure we got the settings of the share right.

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-nfs
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: test-claim-nfs

If this pod starts up successfully, it means it was able to mount the volume from TrueNAS. Woo!

[jonathan@latitude ~]$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
test-pod-nfs   1/1     Running   0          46s

We can now start using the truenas storage class to run workloads which require persistent storage. In fact, you might already have noticed that this storage class is set as the default, so you won’t even need to explicitly specify it for many deployments.

As this storage class is backed by NFS, it intrinsically supports multiple concurrent clients, so it offers both ReadWriteOnce (aka RWO: the volume can be mounted read-write by a single node) and ReadWriteMany (aka RWX: the volume can be mounted read-write by many nodes, and therefore many pods, at once).

Kubernetes Homelab Part 2: Networking

The next part of our look at my Kubernetes homelab is a deep dive into networking. If you haven’t read the other parts of this guide, I recommend you check those out too.

On the surface, my network implementation is very simple. The cluster nodes, the NAS and the router are all on the same /24 private network. The router NATs to the Internet. No VLANs here – this is a standard home setup.

In order to expose your application, you’ll need an ingress controller. This runs on every node in the cluster and listens on ports 80 and 443 (HTTP and HTTPS). This is easily enabled with:

microk8s enable ingress

You can send HTTP requests to the ingress controller on any of the nodes and it will find its way to the application pods, no matter where they are, by traversing the Calico overlay network. Simplistically, we can set up a port-forward on the router to forward TCP ports 80 and 443 to any one of the nodes, and everything will work.

Ingress controllers and port forwarding

As we can see from the diagram, node kube01 has been chosen as the target of the port forwarding from the router. kube01 will handle all ingress traffic, and use the Calico network overlay to route the traffic to the application pods, wherever they may be. This also means that if kube01 is unavailable for any reason, there will be an outage of all applications that are using the ingress.

The solution is to set up a layer 2 load balancer with MetalLB. This is an addon for MicroK8s, and when enabled, it will ask you to set aside a few IP addresses in the same subnet which it can allocate to load balancers. I’ve reserved a small range of spare addresses in my /24 for this.

microk8s enable metallb
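The `enable` command prompts interactively for the range, but you can also pass it on the command line. The range below is purely illustrative – substitute spare addresses from your own subnet:

```shell
# Enable MetalLB non-interactively with a (hypothetical) reserved range
microk8s enable metallb:192.168.1.240-192.168.1.250
```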

Now we need to create a new Service definition for the ingress controller, which will make MetalLB create a corresponding load balancer on one of the reserved IPs:

apiVersion: v1
kind: Service
metadata:
  name: ingress-lb
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

With the load balancer now in place, we can hit the ingress controller either on any of the node IPs, or the load balancer IP. The load balancer will send the traffic to any of the node IPs, taking into consideration which ones are available and healthy. So we can change the port forwarding rule to forward to the load balancer’s IP, and now any of the nodes can receive ingress traffic.

Ingress controllers with MetalLB load balancer

One last thing we can do to make deployments much easier is set up a wildcard DNS record. If your domain is example.com, you could register a wildcard record for *.example.com that points to your router’s public IP. Then you can deploy arbitrary apps and give them hostnames like myapp.example.com, and you won’t have to do anything else for the new application’s ingress to work.
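With the wildcard record in place, exposing a new app is just a matter of giving its Ingress a host under that domain. A minimal sketch (app name, namespace and port are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
spec:
  ingressClassName: public
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```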

Kubernetes Homelab Part 1: Overview

A lot of people have asked me about my home Kubernetes cluster, and so I have decided to put together a series of blog posts about the architecture. I’m going to split it into sections, with each section focusing on a specific area. If you haven’t read the other parts of this guide, I recommend you check those out too.

This is Part 1, a general overview of the hardware, the architecture and the base OS install. It is not intended as a set of instructions, but as some notes about my design choices.


First, let’s have a look at the architecture. I’m using the MicroK8s distribution of Kubernetes, which can run on a single node, supports clustering, but needs at least 3 nodes for high availability. I’m running on 4 nodes because that gives me plenty of memory.

I chose to use HP EliteDesk 800 G2 Mini PC systems, because they are tiny, use very little power, release very little heat, and make very little noise. There are several other manufacturers who also make ultra small form factor PCs (Lenovo, Dell, Intel) but it just happened that HP were the cheapest at the time I looked.

Each EliteDesk node is equipped with an Intel Core i5-6500T CPU (4 cores and 4 threads at 2.50 GHz), 16GB DDR4 memory, a 240GB SATA SSD for the OS, and a 240GB M.2 NVMe SSD for storage (more on that later).

I also have a NAS to provide off-cluster shared storage. This is an HP MicroServer N40L with 4x4TB disks, running TrueNAS. We’ll look at this in detail in a later section.

Networking is dead simple – everything is connected to an unmanaged gigabit Ethernet switch and is in the same RFC1918 /24 network. A router provides Internet connectivity via NAT.

Diagram showing architecture of Kubernetes cluster
Kubernetes cluster architecture

In case you were wondering what this looks like, it’s all neatly tucked away in the bottom of a closet. I built a rack for the nodes from plywood. Each node is screwed to a small plywood panel by its VESA mount screws, and the plywood panel slides into a pair of grooves. This means the nodes are rack mounted, have good airflow, and it’s easy to slide one out for maintenance etc.

Photo of Kubernetes hardware in situ
Kubernetes cluster photo

The small box with the red light at the bottom of the rack is a Raspberry Pi, which provides DNS and DHCP for the LAN with Pi-hole. This allows me to easily set static reservations for the Kubernetes nodes.

Also visible is a slim KVM and the cable modem/router. To save space, the monitor is mounted on the inside of the closet door.

Operating System

As MicroK8s is maintained by Canonical, it made sense to run it on its native Ubuntu platform. I’m running Ubuntu Server 22.04 LTS, using the minimal installation option.

It is almost entirely a default installation and the only customisation I made to the OS was to disable swap and delete the swap file.
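Disabling swap is a two-step job – turn it off now and stop it coming back at boot. Paths are as on a stock Ubuntu Server install:

```shell
# Turn off swap immediately
sudo swapoff -a

# Comment out the swap entry so it doesn't return on reboot,
# then reclaim the space used by the default swap file
sudo sed -i '/swap/ s/^/#/' /etc/fstab
sudo rm -f /swap.img
```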

After this, MicroK8s can be installed on all the nodes using snap. By default, packages installed via snap auto-update to every release in the future, whether major or minor. This is potentially dangerous as Kubernetes releases often add and deprecate features that you may be using. So I strongly recommend pinning your MicroK8s release to a specific version, like this. Make sure to check what the latest release of MicroK8s is at the time – don’t just blindly copy my 1.25 example in case it’s out of date!

sudo snap install microk8s --classic --channel=1.25/stable

Once installed, I started MicroK8s running on each node and followed the instructions for clustering the nodes. It doesn’t matter which node you start with – just pick one, and add the rest to it one by one.
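The clustering workflow looks roughly like this – the IP and token below are placeholders, as `add-node` prints the exact `join` command to run:

```shell
# On the first node: generate a one-time join token
microk8s add-node

# On each additional node: run the join command printed above, e.g.
microk8s join 192.168.1.101:25000/<token>

# Back on any node: confirm all nodes have joined and are Ready
microk8s kubectl get nodes
```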

When all the nodes are ready, you’re done provisioning a simple Kubernetes cluster! There are a few more steps to make the cluster actually useful, and we’ll cover these in subsequent posts, where I’ll take a deep dive into the other components.

BitShift Variations in C Minor

This is a story about music composed by a computer, and collaboration between many individuals, each of whom has extended the work of their predecessor.

BitShift Variations

The original BitShift Variations in C Minor is a composition generated by code written in C by Rob Miles. It’s an extremely short yet amazingly complex piece of code, written for a “code golf” competition. Here’s Rob himself introducing his work.

The code, if you’re interested, is freely available online, and included here for your convenience.

echo "g(i,x,t,o){return((3&x&(i*((3&i>>16?\"BY}6YB6%\":\"Qj}6jQ6%\")[t%8]+51)>>o))<<4);};main(i,n,s){for(i=0;;i++)putchar(g(i,1,n=i>>14,12)+g(i,s=i>>17,n^i>>13,10)+g(i,s/3,n+((i>>11)%3),10)+g(i,s/5,8+n-((i>>10)%3),9));}"|gcc -xc -&&./a.out|aplay

The end result of running this tiny piece of code is a chiptune which sounds like this:

Pretty cool work, but as a project, this seems hard to extend.
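For the curious, here is a rough Python transcription of the generator. This is my own reading of the C one-liner above (function structure follows the original, but the names and comments are mine), so treat it as a sketch rather than a faithful port:

```python
def g(i, x, t, o):
    # One voice: pick a note from an 8-step pattern, multiply it into the
    # sample counter, then shift and mask down to a crunchy 2-bit amplitude.
    notes = "BY}6YB6%" if 3 & (i >> 16) else "Qj}6jQ6%"
    return (3 & x & ((i * (ord(notes[t % 8]) + 51)) >> o)) << 4

def sample(i):
    # Mix the four voices, as the nested g() calls in main() do.
    # Each voice contributes at most 48, so the sum fits in an 8-bit sample.
    # (C's silent integer overflow at large i is not reproduced here.)
    n = i >> 14
    s = i >> 17
    return (g(i, 1, n, 12)
            + g(i, s, n ^ (i >> 13), 10)
            + g(i, s // 3, n + ((i >> 11) % 3), 10)
            + g(i, s // 5, 8 + n - ((i >> 10) % 3), 9))
```

Streaming `sample(i)` for consecutive `i` to an 8-bit, 8 kHz raw audio sink – which is effectively what piping the `putchar` output into `aplay` does – reproduces the chiptune.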

BitShift Variations Unrolled

Enter James Newton, who is also fascinated with Rob’s code. He decided to unroll the code and express it in a longer, more human-readable way, to make it easier for others to understand.

James’s unrolled code is available on GitHub.

BitShift Variations: Lilypond Edition

A key limitation of the original BitShift Variations code is that it can only output a sound wave directly, and not any kind of score.

John Donovan re-implemented the algorithm from the original BitShift code in Python and gave it the ability to generate its output in Lilypond format, instead of a sound wave. Lilypond is a versatile music notation system, and from here the score of BitShift Variations in C Minor can be exported from Lilypond to various other formats.

John’s Python code is likewise available on GitHub, and there is a rendering of his MIDI output on SoundCloud:

BitShift Variations for Pipe Organ

I’ve long thought pipe organs are the original synthesizers, and have a lot in common with chiptune technology. You start with a fundamental tone (the basic organ flute pipe has a sound quite close to a pure sine wave) and create richness in the sound by adding in higher harmonics and then combining notes in harmony.

I’m also fortunate enough to have access to a real pipe organ which was renovated in 2020 and now has MIDI ports which can be used to record and play back music from a computer or other MIDI-enabled instrument.

So when I heard there was a Lilypond version of the BitShift Variations, there was no way I was not going to find a way of playing it back on the organ!

I cloned John Donovan’s BitShift Variations: Lilypond Edition and ran the following commands:

# Run the BitShift code to output the score in Lilypond format
python2.7 main.py > bitshift_variations.ly

# Use Lilypond to convert the Lilypond score to MIDI format
lilypond bitshift_variations.ly

I then imported this MIDI file into my favourite notation editor, MuseScore. BitShift Variations is written for 4 voices, which MuseScore natively interprets as 4 instruments. For this to work on an organ, I need to do a little bit of mapping.

Organs typically have two or more keyboards (manuals) and a pedalboard. The organ I’ll be using has two manuals and a pedalboard, so that can be thought of as 3 “voices”, although each voice is also capable of polyphony.

Taking BitShift Variations’ voices to be 1-4, starting with 1 as the lowest voice, I mapped voice 1 to the pedals, voices 2 and 3 to the Great organ (the lower of the two manuals) and voice 4 to the Swell organ (the upper manual). This is a fairly typical setup for classical music (although in this case, it probably isn’t possible to play 3 voices with 2 hands!).

Here’s my recording of BitShift Variations being played back on the organ. The video is a screen capture from an app called OrganAssist, which is specifically designed to control MIDI-enabled pipe organs. The sound is a recording of the actual sound – just air moving through pipes.

BitShift Variations for pipe organ

MuseScore has a really cool ecosystem for uploading and sharing scores, so they can be played back, downloaded and edited. So I’ve uploaded my arrangement of BitShift Variations for Pipe Organ for general consumption. Feel free to further edit it and see what you can come up with.

Making a public music streaming service with Navidrome

For a while, I’ve wanted to set up some kind of public music player, to allow people to stream and download music I’ve recorded for free, without having to make an account.

First I tried using Bandcamp but I found the user interface on the free tier to be awkward, and it took too long to upload new releases and required re-entry of the metadata.

Then I tried using Navidrome which is a great self-hosted music server but requires a login. People can’t just sign up, either – the admin has to make them an account. I dived into the documentation and found that it’s possible to use an external auth proxy – and I wondered if it would be possible to create a fake auth proxy that just lets you in. Turns out, it is.

First you have to set up a Navidrome instance and create your usual admin user. Now use your admin user to create a second, non-admin user. I called my user music, but it doesn’t matter because nobody will see it.

You configure Navidrome using environment variables, and there are a few you need to set. Firstly you need to tell Navidrome it should check the HTTP request headers. Secondly you need to disable all features that don’t make sense in an environment where all users are effectively signing in with the same account (so you don’t want them to change the password or set favourites that won’t make sense to other people).

# Enable auto login for the "music" user. The header name matches Navidrome's
# default ND_REVERSEPROXYUSERHEADER; these exact values are my assumptions --
# adjust the whitelist to match where your proxy connects from.
ND_REVERSEPROXYUSERHEADER: "Remote-User"
ND_REVERSEPROXYWHITELIST: "10.0.0.0/8"

# Disable user-specific features that make no sense for a shared account
ND_ENABLEUSEREDITING: "false"
ND_ENABLETRANSCODINGCONFIG: "false"
ND_LASTFM_ENABLED: "false"
ND_LISTENBRAINZ_ENABLED: "false"

The other piece of the puzzle is to do with the auth proxy. I’m hosting Navidrome in Kubernetes (using the k8s@home Navidrome Helm chart) so it makes sense to use an Ingress resource. My cluster is already running NGINX Ingress. It was simple to add a config snippet to the Ingress to statically set the Remote-User header to the music username created above.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Remote-User music;
  name: navidrome
  namespace: navidrome
spec:
  ingressClassName: public
  rules:
    - host: music.example.com
      http:
        paths:
          - backend:
              service:
                name: navidrome
                port:
                  number: 4533
            path: /
            pathType: Prefix

And that’s it! Now, visiting music.example.com automagically signs you in as the music user without you ever seeing a login screen. The public can now browse, stream and download music freely.

The only user-specific features I couldn’t disable are playlists and themes. So anyone visiting your Navidrome instance can create, edit and delete playlists, and change the theme at will.

Bluetooth MIDI with CME WIDI

I recently had to set up a wireless MIDI link between a laptop and a MIDI-enabled pipe organ. I learnt a few lessons along the way, so this is partly a tutorial, partly some notes on the lessons learned, and partly a mini review of the devices I bought.

My use case

After a recent refurbishment, the pipe organ at my church was fitted with MIDI ports which can be used to record and play back performances on the organ. Initially, I used a regular USB-to-MIDI cable to connect a laptop, and we successfully proved the concept with an app called OrganAssist.

A short USB-MIDI cable is a bit limiting though, as you have to stand around the organ console to play anything back, which is not ideal in church services. I looked for a wireless alternative.

Wireless MIDI

Wireless MIDI is apparently a thing these days. It seems to go by various names, but is officially known as Bluetooth LE MIDI. I found that support for it is inconsistent: it was only added to Windows in the Windows 10 Anniversary Edition, and it also requires support in the audio application itself. Support is apparently better on macOS and iOS, but I’m not a Mac user.

My laptop was running a compatible version of Windows, but OrganAssist does not support Bluetooth LE MIDI.


Then I discovered the family of WIDI products from CME, which can work in a number of different ways. To be honest, I found their documentation quite confusing. WIDI is a trademark of CME; as a technology it is based on Bluetooth LE MIDI but offers a superset of features, such as being able to group WIDI devices together and set up virtual patching from your phone.

At the “instrument” side of the connection you need a WIDI device – either a WIDI Master or a WIDI Jack. As far as I can tell, the only difference is the physical form factor. (The WIDI Master is a pair of stubby dongles that fit into the 5-pin DIN MIDI ports, while the WIDI Jack is a separate box that you connect to your MIDI ports with little patch leads.)

If you have a Mac, iOS device, or a piece of hardware that supports Bluetooth LE MIDI (there are apparently some synths that offer this now), then that’s all you need.

If you have Windows 10 Anniversary Edition or newer, you can install a third-party Bluetooth LE MIDI driver from Korg, and then use apps that support Bluetooth LE MIDI natively. At the time of writing, the only such app seems to be Cubase, and I wasn’t able to get it to work.

Most Windows users will need another piece of WIDI hardware at the “computer” side of the connection – a WIDI Bud Pro. This device talks to your WIDI Master or WIDI Jack using Bluetooth LE MIDI, but talks to your PC using regular USB MIDI. It appears as a normal MIDI device and “just works” with older versions of Windows and older apps.


WIDI Jack

I chose the WIDI Jack for a semi-permanent installation on a pipe organ that has been fitted with MIDI ports during a renovation. I liked that the DIN jacks were so stubby and short, with little patch leads. Due to the location of the MIDI ports by the organist’s right knee, anything longer would’ve got in the way when the organist got on or off the bench.

WIDI Jack in situ

The WIDI Jack is magnetic, and it includes a self-adhesive metal plate – so you can either stick it onto a metal object by itself, or you can apply the metal “sticker” to a surface and attach the WIDI Jack to that. You can see in my picture I’ve stuck the metal “sticker” to the underside of the MIDI ports so the WIDI Jack is kept out of the way and out of sight.

The WIDI Jack draws power from the MIDI Out connection of your instrument so there is no need for a power supply. It just turns on when you turn your instrument on.


WIDI Bud Pro

The WIDI Bud Pro effectively uses Bluetooth LE as a link between itself and the WIDI Jack, but it presents the connection back to Windows as a regular USB MIDI device which “just works” on any version of Windows. No Bluetooth complexity to worry about. The WIDI Bud Pro and WIDI Jack automatically pair with each other so you don’t need to do anything.

In actual usage, I can only review this in the context of using the WIDI Bud Pro together with the WIDI Jack. Put simply, it works, the latency is low and I haven’t had any problems. The range is better than expected – it claims up to 20m range in open spaces but I actually got 25m away from it in the church without any problems. However, be careful of interference because when I got close to some metal railings it dropped a couple of notes and the timing of some notes went a bit sloppy.


Just a quick demo to show that it’s possible to control a pipe organ from a laptop via Bluetooth, and walk around the church while it’s playing some Bach. Sorry it’s dark… I try to save electricity when working in the church in the evening.

In practice the laptop will be tucked away to one side during services, and then hymns can be played back remotely.