Welcome to part 3 of the Kubernetes Homelab guide. In this section we’re going to look at how to provide off-cluster shared storage. If you haven’t read the other parts of this guide, I recommend you check those out too.
Out of the box, MicroK8s does provide a hostpath storage provider but this only works on a single-node cluster. It basically lets pods use storage within a subdirectory on the node’s root filesystem, so this obviously isn’t going to work in a multi-node cluster where your workload could end up on any node.
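(For the record, that provider is just a MicroK8s addon; if you did want it on a single node you would enable it with the command below. I mention it only for contrast with the CSI approach that follows.)

microk8s enable hostpath-storage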
It’s important to me that any storage solution I choose is compliant with CSI (the Container Storage Interface), the framework Kubernetes uses for storage drivers. This allows you to simply tell Kubernetes that your pod requires a 10GB volume, and Kubernetes goes off and talks to its CSI driver, which provisions and mounts your volume automatically. This isn’t your typical fileserver.
TrueNAS
So I decided to go with TrueNAS SCALE (technically I started with TrueNAS CORE and then I migrated to TrueNAS SCALE). TrueNAS is a NAS operating system which uses the OpenZFS filesystem to manage its storage. By its nature, ZFS supports nested volumes and is ideal for this application.
I’m running a fairly elderly HP MicroServer N40L with 16GB memory and 4x4TB disks in a RAID-Z2 vdev, for a total of 8TB usable storage. It’s not the biggest or the fastest, but it works for me.

Democratic CSI
The magic glue that connects Kubernetes and TrueNAS is a project called Democratic CSI, which is a CSI driver that supports various storage appliances, including TrueNAS.
Note: Democratic CSI packaged an older driver called freenas-nfs, which required SSH access to the NAS. For users running TrueNAS SCALE, there is a newer driver called freenas-api-nfs, which does not require SSH and does all its work via an HTTP API. As I am running TrueNAS SCALE, I will deploy the freenas-api-nfs driver.
There are some steps to set up the root volume on your TrueNAS appliance but I wrote about these before, and they are pretty much the same, so please refer to my TrueNAS guide. There are also some Democratic CSI prerequisites you need to install on your Kubernetes nodes before deploying.
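The main prerequisite in my case is an NFS client on every Kubernetes node. Assuming your nodes run Ubuntu or Debian (they may not, so adjust for your distro), that looks something like this:

sudo apt update
sudo apt install -y nfs-common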
I’m installing via Helm, and the values file needed is quite complex as it is drawn from two upstream examples: the generic values.yaml for the Helm chart, and some more specific options for the freenas-api-nfs driver. This is the local values.yaml I have come up with for my homelab:
driver:
  config:
    driver: freenas-api-nfs
    httpConnection:
      protocol: http
      username: root
      password: mypassword
      host: 192.168.0.4
      port: 80
      allowInsecure: true
    zfs:
      datasetParentName: hdd/k8s/vols
      detachedSnapshotsDatasetParentName: hdd/k8s/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0
    nfs:
      shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      shareHost: 192.168.0.4
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: root
      shareMapallUser: ""
      shareMapallGroup: ""

node:
  # Required for MicroK8s
  kubeletHostPath: /var/snap/microk8s/common/var/lib/kubelet

csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.nfs-api"

storageClasses:
  - name: truenas
    defaultClass: true
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: nfs
    mountOptions:
      - noatime
      - nfsvers=4

volumeSnapshotClasses:
  - name: truenas
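The chart itself lives in the democratic-csi Helm repository, so add that first if you haven’t already (the repository URL here is the one I believe the project publishes, so double-check it against the upstream README):

helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update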
And it is installed like this:
helm upgrade \
--install \
--create-namespace \
--values values.yaml \
--namespace democratic-csi \
truenas democratic-csi/democratic-csi
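As a quick sanity check, you can confirm Helm thinks the release deployed:

helm list --namespace democratic-csi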
Testing
Once the deployment has finished, watch the pods until they have spun up. Expect to see one csi-node pod per node, and one csi-controller pod.
[jonathan@latitude ~]$ kubectl get po -n democratic-csi
NAME                                                 READY   STATUS    RESTARTS   AGE
truenas-democratic-csi-node-rkmq8                    4/4     Running   0          9d
truenas-democratic-csi-node-w5ktj                    4/4     Running   0          9d
truenas-democratic-csi-node-k88cx                    4/4     Running   0          9d
truenas-democratic-csi-node-f7zw4                    4/4     Running   0          9d
truenas-democratic-csi-controller-54db74999b-5zjv2   5/5     Running   0          9d
Check to make sure there’s a truenas StorageClass:
[jonathan@latitude ~]$ kubectl get storageclasses
NAME                PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
truenas (default)   org.democratic-csi.nfs-api   Retain          Immediate           true                   9d
Then apply a manifest to create a PersistentVolumeClaim, which should provision a volume in TrueNAS:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-nfs
spec:
  storageClassName: truenas
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
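Save this as something like test-claim-nfs.yaml (the filename is just an example) and apply it:

kubectl apply -f test-claim-nfs.yaml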
Check to make sure it appears and is provisioned correctly:
[jonathan@latitude ~]$ kubectl get persistentvolumeclaim
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim-nfs   Bound    pvc-ac9940c4-29a8-4056-b0bf-d8ac0dd05beb   1Gi        RWX            truenas        15s
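If you’re curious where the data actually lives, describing the PersistentVolume it is bound to (using the volume name from the output above) will show the NFS server and export path the driver created:

kubectl describe pv pvc-ac9940c4-29a8-4056-b0bf-d8ac0dd05beb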
You should be able to see a Dataset and a corresponding Share for this volume in the TrueNAS web GUI:

Finally, we can create a Pod that mounts this PersistentVolume, to make sure we got the share settings right.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-nfs
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: test-claim-nfs
If this pod starts up successfully, it means it was able to mount the volume from TrueNAS. Woo!
[jonathan@latitude ~]$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
test-pod-nfs   1/1     Running   0          46s
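If you want to be extra sure the share permissions are right, you can write a file through the mount (a throwaway test, the file name is arbitrary):

kubectl exec test-pod-nfs -- sh -c 'echo hello > /var/www/html/test.txt && cat /var/www/html/test.txt'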
We can now start using the truenas storage class to run workloads which require persistent storage. In fact, you might already have noticed that this storage class is set as the default, so you won’t even need to specify it explicitly for many deployments.
As this storage class is backed by NFS, it intrinsically supports multiple clients, so it offers both ReadWriteOnce (aka RWO, mountable by pods on a single node) and ReadWriteMany (aka RWX, mountable by pods across many nodes) access modes.
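As an illustration of the default class in action, a claim like this hypothetical one omits storageClassName entirely and still ends up on TrueNAS:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi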
I went down the same path using Democratic-CSI, which is fantastic and works as advertised. My problem came when I needed to do TrueNAS maintenance: I had to take down all of my applications that were using this persistent storage, which rather defeats the purpose of using Kubernetes. In addition, if you are doing GitOps and have ArgoCD/FluxCD watching, they will see your attempt to cleanly stop the Deployment/StatefulSet and roll that change back almost instantly. I eventually moved to in-cluster storage (Rook-Ceph; Longhorn is also decent) so that PVCs can move around without downtime, and I use Velero to back up the in-cluster storage to MinIO S3 storage on TrueNAS.
You’re absolutely right about this limitation. When I write part 4 of this Kubernetes guide, I will be covering my in-cluster storage solution, which is based on OpenEBS rather than Rook/Ceph, but it is basically the same idea.