A while ago I blogged about the possibilities of using Ceph to provide hyperconverged storage for Kubernetes. It works, but I never really liked the solution, so I decided to look at dedicated storage solutions for my home lab and a small number of production sites: something that would escape the single-node limitation of the MicroK8s storage addon and allow me to scale beyond one node.
In the end I settled upon TrueNAS (which used to be called FreeNAS but was recently renamed), as it is simple to set up and provides a number of storage options that Kubernetes can consume: block storage via iSCSI and file storage via NFS.
The key part is how to integrate Kubernetes with TrueNAS. It's quite easy to mount an existing NFS or iSCSI share into a Kubernetes pod, but the hard part is automating the creation of these storage resources with a provisioner. After some searching, I found a project called democratic-csi, which describes itself as:
democratic-csi implements the csi (container storage interface) spec providing storage for various container orchestration systems (ie: Kubernetes).
I was unfamiliar with both Kubernetes storage and TrueNAS, but I found it quite easy to get started, and the lead developer was super helpful in answering my questions. I thought it would be helpful to document and share my experience, so here's my rough guide on how to set up storage on TrueNAS Core 12 with MicroK8s and democratic-csi.
TrueNAS
Pools
A complete guide to TrueNAS is outside the scope of this article, but basically you'll need a working pool. This is configured in the Storage / Pools menu. In my case, this top-level pool is called hdd. I've got various things on my TrueNAS box, so under hdd I created a dataset k8s. I wanted to provide both iSCSI and NFS, so under k8s I created two more sub-datasets, iscsi and nfs. Brevity is important here, as we'll see later.
Here's what my dataset structure looks like – ignore backup and media:
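If you prefer the command line, the same layout can also be created from a TrueNAS shell with the zfs tool. A rough sketch, assuming your pool is called hdd; creating the datasets in the Storage / Pools UI works just as well:
# Create the k8s dataset and its iscsi and nfs children under the hdd pool
zfs create hdd/k8s
zfs create hdd/k8s/iscsi
zfs create hdd/k8s/nfs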

With your storage pools in place, it's time to enable the services you need. I'm using both iSCSI and NFS, so I've started both services and set them to start automatically (e.g. if the TrueNAS box is rebooted).

NFS
The NFS service requires a little tweaking to make it work properly with Kubernetes. Access the NFS settings by clicking on the pencil icon in the Services menu. You must select Enable NFSv4, NFSv3 ownership model for NFSv4 and Allow non-root mount.

iSCSI
The iSCSI service needs a bit more setting up than NFS, and the iSCSI settings live in a different place, too: look under Sharing / Block Shares (iSCSI). In short, accept the default settings for almost everything, filling in basic settings for Target Global Configuration, Portals and Initiator Groups until you have something that resembles these screenshots.
This was my first encounter with iSCSI and I found some of the terminology confusing to begin with. Roughly speaking:
- a Portal is what would normally be called a server or a listener, i.e. you define the IP address and port to bind to. In this simple TrueNAS setup, we bind to all IPs (0.0.0.0) and accept the default port (3260). Authentication can also be set up here, but that is outside the scope of this guide.
- an Initiator is what would normally be called a client.
- an Initiator Group allows you to define which Targets an Initiator can connect to. Here we will allow everything to connect, but you may wish to restrict that in the future.
- a Target is a specific storage resource, analogous to a hard disk controller. These will be created automatically by Kubernetes as needed.
- an Extent is the piece of storage that is referenced by a Target, analogous to a hard disk. These will be created automatically by Kubernetes as needed.
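Once a Portal is configured and the iSCSI service is running, you can sanity-check it from any machine with the iSCSI client tools installed (we set those up on the Kubernetes nodes later). A quick discovery against my portal address; no targets will be listed until Kubernetes creates some, but this at least proves the portal is listening:
sudo iscsiadm -m discovery -t sendtargets -p 192.168.0.4:3260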



Users
Kubernetes will need access to the TrueNAS API as a privileged user. This guide uses the root user for simplicity, but in a production environment you should create a separate user with either a strong password or a certificate.
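Whichever user you choose, it's worth confirming API access before going further. A minimal check of the v2.0 REST API using basic auth; the password here is a placeholder, and you should substitute your own TrueNAS address:
# Returns a JSON blob of system information if authentication succeeds
curl -u root:yourpassword http://192.168.0.4/api/v2.0/system/info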
Kubernetes
There are no special requirements on the Kubernetes side of things, except for a Helm 3 client. I have set this up with MicroK8s on both single-node and multi-node clusters. It's especially useful on multi-node clusters, because the default MicroK8s storage addon allocates storage via hostPath on the node itself, which ties your pod to that node forever.
In preparation for both the NFS and iSCSI steps, prepare your helm repo:
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
helm search repo democratic-csi/
NFS
First, we need to prepare all the nodes in the cluster to be able to use the NFS protocol.
# Fedora, CentOS, etc
sudo dnf -y install nfs-utils
# Ubuntu, Debian, etc
sudo apt install -y nfs-common
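With the client packages in place, you can quickly confirm that a node can reach the NFS service by listing the exports on the TrueNAS box (the list will be empty until some volumes have been provisioned):
showmount -e 192.168.0.4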
On Fedora/CentOS/RedHat you will either need to disable SELinux (not recommended) or load this custom SELinux policy to allow pods to mount storage:
# nfs-provisioner.te
module nfs-provisioner 1.0;

require {
    type snappy_t;
    type container_file_t;
    class dir { getattr open read rmdir };
}

#============= snappy_t ==============
allow snappy_t container_file_t:dir { getattr open read rmdir };
# Compile the above policy into a binary object
checkmodule -M -m -o nfs-provisioner.mod nfs-provisioner.te
# Package it
semodule_package -o nfs-provisioner.pp -m nfs-provisioner.mod
# Install it (requires root)
sudo semodule -i nfs-provisioner.pp
Finally, we can install the FreeNAS NFS provisioner from democratic-csi! First, fetch the example config so we can customise it for our environment:
wget https://raw.githubusercontent.com/democratic-csi/charts/master/stable/democratic-csi/examples/freenas-nfs.yaml
Most of the key values to change are in the driver section. Anywhere you see 192.168.0.4 here, replace it with the IP or hostname of your TrueNAS server. Be sure to set nfsvers=4.
Note about NFSv4: it is possible to use NFSv3 here with democratic-csi and TrueNAS. In fact, it is often recommended because of its simpler permissions. However, on Fedora I ran into an issue with NFSv3: for the client to work, the systemd unit rpc-statd has to be running. This unit cannot be enabled to start on boot; it is supposed to start automatically when needed, but that did not happen for me, meaning that if any of my nodes rebooted, they came back unable to mount any NFS volumes. As a workaround, I opted for NFSv4, which has a simpler daemon configuration.
If you have followed my naming convention for TrueNAS pools, you can also use my values for datasetParentName and detachedSnapshotsDatasetParentName. Otherwise, adjust to suit your environment. I found this a little confusing, but in this simple case these two values should be direct children of whatever your nfs dataset is. They will be created automatically – don't create them yourself.
csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.nfs"
storageClasses:
- name: freenas-nfs-csi
  defaultClass: false
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs
  mountOptions:
  - noatime
  - nfsvers=4
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:
driver:
  config:
    driver: freenas-nfs
    instance_id:
    httpConnection:
      protocol: http
      host: 192.168.0.4
      port: 80
      username: root
      password: ************
      allowInsecure: true
    sshConnection:
      host: 192.168.0.4
      port: 22
      username: root
      # use either password or key
      password: "***********"
      # privateKey: |
      #   -----BEGIN RSA PRIVATE KEY-----
      #   ...
      #   -----END RSA PRIVATE KEY-----
    zfs:
      datasetParentName: hdd/k8s/nfs/vols
      detachedSnapshotsDatasetParentName: hdd/k8s/nfs/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: root
      datasetPermissionsGroup: wheel
    nfs:
      shareHost: 192.168.0.4
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: wheel
      shareMapallUser: ""
      shareMapallGroup: ""
Now we can install the NFS provisioner using Helm, based on the config file we’ve just created:
helm upgrade \
    --install \
    --create-namespace \
    --values freenas-nfs.yaml \
    --namespace democratic-csi \
    --set node.kubeletHostPath="/var/snap/microk8s/common/var/lib/kubelet" \
    zfs-nfs democratic-csi/democratic-csi
iSCSI
First, we need to prepare all the nodes in the cluster to be able to use the iSCSI protocol.
# Fedora, CentOS, etc
sudo dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath
sudo mpathconf --enable --with_multipathd y
sudo systemctl enable --now iscsid multipathd
sudo systemctl enable --now iscsi
# Ubuntu, Debian, etc
sudo apt-get install -y open-iscsi lsscsi sg3-utils multipath-tools scsitools
sudo tee /etc/multipath.conf <<-'EOF'
defaults {
    user_friendly_names yes
    find_multipaths yes
}
EOF
sudo systemctl enable multipath-tools.service
sudo service multipath-tools restart
sudo systemctl enable open-iscsi.service
sudo service open-iscsi start
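One extra check worth doing on every node: each iSCSI initiator must have a unique name, and cloned VMs often end up sharing one, which causes very confusing connection problems. Inspect it like this, and edit the file if two nodes match:
# Each node should report a distinct InitiatorName
cat /etc/iscsi/initiatorname.iscsi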
Finally, we can install the FreeNAS iSCSI provisioner from democratic-csi! First, fetch the example config so we can customise it for our environment:
wget https://raw.githubusercontent.com/democratic-csi/charts/master/stable/democratic-csi/examples/freenas-iscsi.yaml
The key values to change are all in the driver section. Anywhere you see 192.168.0.4 here, replace it with the IP or hostname of your TrueNAS server.
If you have followed my naming convention for TrueNAS pools, you can also use my values for datasetParentName and detachedSnapshotsDatasetParentName. Otherwise, adjust to suit your environment. I found this a little confusing, but these two values should be direct children of whatever your iscsi dataset is. They will be created automatically.
Note that iSCSI imposes a limit on the length of the volume name: the total volume name (zvol/<datasetParentName>/<pvc name>) cannot exceed 63 characters. The standard volume naming overhead is 46 characters, so datasetParentName should be 17 characters or less. This is why brevity mattered when naming the datasets earlier: my hdd/k8s/iscsi/v is just 15 characters.
csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.iscsi"
# add note here about volume expansion requirements
storageClasses:
- name: freenas-iscsi-csi
  defaultClass: false
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    # for block-based storage can be ext3, ext4, xfs
    fsType: xfs
  mountOptions: []
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:
driver:
  config:
    driver: freenas-iscsi
    instance_id:
    httpConnection:
      protocol: http
      host: 192.168.0.4
      port: 80
      username: root
      password: *************
      allowInsecure: true
      apiVersion: 2
    sshConnection:
      host: 192.168.0.4
      port: 22
      username: root
      # use either password or key
      password: ******************
      # privateKey: |
      #   -----BEGIN RSA PRIVATE KEY-----
      #   ...
      #   -----END RSA PRIVATE KEY-----
    zfs:
      # the example below is useful for TrueNAS 12
      cli:
        paths:
          zfs: /usr/local/sbin/zfs
          zpool: /usr/local/sbin/zpool
          sudo: /usr/local/bin/sudo
          chroot: /usr/sbin/chroot
      # total volume name (zvol/<datasetParentName>/<pvc name>) length cannot exceed 63 chars
      # https://www.ixsystems.com/documentation/freenas/11.2-U5/storage.html#zfs-zvol-config-opts-tab
      # standard volume naming overhead is 46 chars
      # datasetParentName should therefore be 17 chars or less
      datasetParentName: hdd/k8s/iscsi/v
      detachedSnapshotsDatasetParentName: hdd/k8s/iscsi/s
      # "" (inherit), lz4, gzip-9, etc
      zvolCompression:
      # "" (inherit), on, off, verify
      zvolDedup:
      zvolEnableReservation: false
      # 512, 1K, 2K, 4K, 8K, 16K, 64K, 128K default is 16K
      zvolBlocksize:
    iscsi:
      targetPortal: "192.168.0.4:3260"
      targetPortals: []
      # leave empty to omit usage of -I with iscsiadm
      interface:
      namePrefix: csi-
      nameSuffix: "-cluster"
      # add as many as needed
      targetGroups:
      # get the correct ID from the "portal" section in the UI
      - targetGroupPortalGroup: 1
        # get the correct ID from the "initiators" section in the UI
        targetGroupInitiatorGroup: 1
        # None, CHAP, or CHAP Mutual
        targetGroupAuthType: None
        # get the correct ID from the "Authorized Access" section of the UI
        # only required if using Chap
        targetGroupAuthGroup:
      extentInsecureTpc: true
      extentXenCompat: false
      extentDisablePhysicalBlocksize: true
      # 512, 1024, 2048, or 4096,
      extentBlocksize: 4096
      # "" (let FreeNAS decide, currently defaults to SSD), Unknown, SSD, 5400, 7200, 10000, 15000
      extentRpm: "7200"
      # 0-100 (0 == ignore)
      extentAvailThreshold: 0
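Now we can install the iSCSI provisioner using Helm, based on the config file we've just created. This mirrors the NFS install; the release name zfs-iscsi here matches the pod names you'll see in the next section:
helm upgrade \
    --install \
    --create-namespace \
    --values freenas-iscsi.yaml \
    --namespace democratic-csi \
    --set node.kubeletHostPath="/var/snap/microk8s/common/var/lib/kubelet" \
    zfs-iscsi democratic-csi/democratic-csi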
Testing
There are a few sanity checks you should do. First, make sure all the democratic-csi pods are healthy across all your nodes:
[jonathan@zeus ~]$ kubectl get pods -n democratic-csi -o wide
NAME                                                   READY   STATUS    RESTARTS   AGE     IP             NODE
zfs-iscsi-democratic-csi-node-pdkgn                    3/3     Running   6          7d3h    192.168.0.44   zeus-kube02
zfs-iscsi-democratic-csi-node-g25tq                    3/3     Running   12         7d3h    192.168.0.45   zeus-kube03
zfs-iscsi-democratic-csi-node-mmcnm                    3/3     Running   0          2d15h   192.168.0.2    zeus.jg.lan
zfs-iscsi-democratic-csi-controller-5888fb7c46-hgj5c   4/4     Running   0          2d15h   10.1.27.131    zeus.jg.lan
zfs-nfs-democratic-csi-controller-6b84ffc596-qv48h     4/4     Running   0          24h     10.1.27.136    zeus.jg.lan
zfs-nfs-democratic-csi-node-pdn72                      3/3     Running   0          24h     192.168.0.2    zeus.jg.lan
zfs-nfs-democratic-csi-node-f4xlv                      3/3     Running   0          24h     192.168.0.44   zeus-kube02
zfs-nfs-democratic-csi-node-7jngv                      3/3     Running   0          24h     192.168.0.45   zeus-kube03
Also make sure your storageClasses are present, and set one as the default if you like:
[jonathan@zeus ~]$ kubectl get sc
NAME                        PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath           microk8s.io/hostpath       Delete          Immediate           false                  340d
freenas-iscsi-csi           org.democratic-csi.iscsi   Delete          Immediate           true                   26d
freenas-nfs-csi (default)   org.democratic-csi.nfs     Delete          Immediate           true                   26d
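If you'd like one of the new classes to be the cluster default, as freenas-nfs-csi is in my output above, one way is to patch the annotation on the storageClass (a sketch; you may also want to remove the same annotation from microk8s-hostpath):
kubectl patch storageclass freenas-nfs-csi \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'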
Now we’re ready to create some test volumes:
# test-claim-iscsi.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-iscsi
  annotations:
    volume.beta.kubernetes.io/storage-class: "freenas-iscsi-csi"
spec:
  storageClassName: freenas-iscsi-csi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
# test-claim-nfs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-nfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "freenas-nfs-csi"
spec:
  storageClassName: freenas-nfs-csi
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Use the above test manifests to create some PersistentVolumeClaims:
[jonathan@zeus ~]$ kubectl -n democratic-csi create -f test-claim-iscsi.yaml -f test-claim-nfs.yaml
persistentvolumeclaim/test-claim-iscsi created
persistentvolumeclaim/test-claim-nfs created
Then check that your PVCs are showing as Bound. This should only take a few seconds, so if your PVCs are stuck in Pending, something has probably gone wrong.
[jonathan@zeus ~]$ kubectl -n democratic-csi get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
test-claim-nfs     Bound    pvc-0ca8bbf4-33e9-4c3a-8e27-6a3022194ec3   1Gi        RWX            freenas-nfs-csi     119s
test-claim-iscsi   Bound    pvc-9bd9228e-d548-48ea-9824-2b96daf29cd3   1Gi        RWO            freenas-iscsi-csi   119s
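If you want to go one step further than a Bound status, you can attach one of the claims to a throwaway pod and write to it. Here's a minimal sketch using the NFS claim (the pod and file names are just examples; delete the pod again before cleaning up the PVCs below):
# test-pod-nfs.yaml - busybox pod that writes to the NFS-backed volume
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-nfs
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-claim-nfs
Create it in the same namespace as the claims and read the file back:
kubectl -n democratic-csi apply -f test-pod-nfs.yaml
kubectl -n democratic-csi exec test-pod-nfs -- cat /data/hello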
Verify that the new volumes or filesystems are showing up as datasets in TrueNAS:

Likewise, verify that the NFS shares, or the iSCSI targets and extents, have been created:



Clean up your test PVCs:
[jonathan@zeus ~]$ kubectl -n democratic-csi delete -f test-claim-iscsi.yaml -f test-claim-nfs.yaml
persistentvolumeclaim "test-claim-iscsi" deleted
persistentvolumeclaim "test-claim-nfs" deleted
Double-check that the volumes, shares, targets and extents have been cleaned up.