The next part of this look at my Kubernetes homelab is a deep dive into networking. If you haven’t read the other parts of this guide, I recommend you check those out too.
On the surface, my network implementation is very simple. The cluster nodes, the NAS and the router are all on the same /24 private network. The router NATs to the Internet. No VLANs here – this is a standard home setup.
In order to expose your application, you’ll need an ingress controller. This runs on every node in the cluster and listens on ports 80 and 443 (HTTP and HTTPS). This is easily enabled with:
microk8s enable ingress
You can send HTTP requests to the ingress controller on any of the nodes and it will find its way to the application pods, no matter where they are, by traversing the Calico overlay network. Simplistically, we can set up a port-forward on the router to forward TCP ports 80 and 443 to any one of the nodes, and everything will work.
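Before setting up the port forward, it’s worth confirming that the controller pods really are running on every node. The MicroK8s ingress addon lives in the ingress namespace, so listing its pods with the node column should show one controller per node (pod names will differ on your cluster):

# List the ingress controller pods and the node each one is scheduled on
microk8s kubectl get pods -n ingress -o wide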

As we can see from the diagram, node kube01 with IP 192.168.0.11 has been chosen as the target of the port forwarding from the router. kube01 will handle all ingress traffic and use the Calico network overlay to route the traffic to the application pods, wherever they may be. This also means that if kube01 is unavailable for any reason, there will be an outage of all applications that are using the ingress.
The solution is to set up a layer 2 load balancer with MetalLB. This is an addon for MicroK8s, and when enabled, it will ask you to set aside a few IP addresses in the same subnet which can be allocated to load balancers. In this example, I’ve allocated 192.168.0.200-210 as load balancer IPs.
microk8s enable metallb
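If you’d rather skip the interactive prompt, MicroK8s also accepts the address range as part of the enable command – worth double-checking against your MicroK8s version:

# Enable MetalLB and hand it the address pool in one step
microk8s enable metallb:192.168.0.200-192.168.0.210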
Now we need to create a new Service definition for the ingress controller, which will create the corresponding load balancer on 192.168.0.200.
apiVersion: v1
kind: Service
metadata:
  name: ingress-lb
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  loadBalancerIP: 192.168.0.200
  externalTrafficPolicy: Local
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
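Assuming the manifest above is saved as ingress-lb.yaml (the filename is arbitrary), applying it and then checking the service should show the requested address under EXTERNAL-IP:

# Create the LoadBalancer service and verify MetalLB assigned the requested IP
microk8s kubectl apply -f ingress-lb.yaml
microk8s kubectl get service ingress-lb -n ingress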
With the load balancer now in place, we can hit the ingress controller either on any of the node IPs or on the load balancer IP. In layer 2 mode, MetalLB announces the load balancer IP from one of the healthy nodes and fails over to another node if that one becomes unavailable. So we can change the port forwarding rule to forward to the load balancer’s IP, and ingress traffic no longer depends on any single node being up.

One last thing we can do to make deployments much easier is to set up a wildcard DNS record. If your domain is example.com, you can register a wildcard record for *.example.com that points to your router’s public IP. Then you can deploy arbitrary apps and give them hostnames like myapp.example.com, and you won’t have to do anything else for the new application’s ingress to work.
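With the wildcard record in place, exposing a new app is just a matter of adding a host rule to its Ingress. As a rough sketch – the app name, namespace, backing Service and port are all hypothetical placeholders – it might look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
spec:
  # Depending on your setup you may also need ingressClassName: public,
  # the class created by the MicroK8s ingress addon
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp   # the Service fronting your application pods
                port:
                  number: 80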