Managing the Jenkins volume with Longhorn has become increasingly problematic. Because of its large size, whenever Longhorn needs to rebuild or replicate it (which happens often, for no apparent reason), the process can take several hours, and while the synchronization is running, the entire cluster suffers from degraded performance. Instead of using Longhorn, I've decided to try storing the data directly on the Synology NAS and exposing it to Kubernetes via iSCSI. The Synology offers many of the same features as Longhorn, including snapshots/rollbacks and backups, and using the NAS makes the volume available to any Kubernetes node without keeping multiple copies of the data.

In order to expose the iSCSI service on the NAS to the Kubernetes nodes, I had to make the storage VLAN routable. I kept it IPv6-only, though, as an extra precaution against unauthorized access, and the firewall only allows nodes on the Kubernetes network to reach the NAS via iSCSI. I originally tried proxying the iSCSI connection through the VM hosts, but this failed because of how iSCSI target discovery works: the provided "target host" is really only used to identify available LUNs, and all follow-up communication uses the IP address returned by the discovery process. Since the NAS would return its own IP address, which differed from the proxy address, the connection would fail. Thus, I resorted to reconfiguring the storage network and connecting directly to the NAS.

To migrate the contents of the volume, I temporarily created a PVC with a different name and bound it to the iSCSI PersistentVolume. Using a pod with both the original PVC and the new PVC mounted, I copied the data with `rsync`. Once the copy completed, I deleted the pod and both PVCs, then created a new PVC with the original name (i.e. `jenkins`), bound to the iSCSI PV. While doing this, Longhorn, for some reason, kept re-creating the PVC whenever I deleted it, no matter how I requested the deletion: whether I deleted the PV, the PVC, or the Longhorn Volume, via either the Kubernetes API or the Longhorn UI, the objects were recreated almost immediately. Fortunately, there was enough of a delay between the deletion and Longhorn's re-creation that I was able to create the new PVC manually. Once I did that, Longhorn seemed to give up.
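For reference, the arrangement described above boils down to a statically-provisioned iSCSI PersistentVolume plus a claim bound explicitly to it. The manifests below are only a rough sketch: the portal address, IQN, LUN, namespace, and sizes are placeholders, not the values actually used in this cluster.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-iscsi
spec:
  capacity:
    storage: 100Gi                  # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  iscsi:
    targetPortal: "[fd00:0:0:10::5]:3260"      # hypothetical IPv6 address of the NAS on the storage VLAN
    iqn: iqn.2000-01.com.synology:nas.jenkins  # hypothetical target IQN
    lun: 1
    fsType: ext4
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-migration           # temporary name used only during the copy
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""              # empty class keeps dynamic provisioners away
  volumeName: jenkins-iscsi         # bind explicitly to the iSCSI PV
  resources:
    requests:
      storage: 100Gi
```

The copy itself ran in a throwaway pod that mounted both claims, again sketched here with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-migrate
  namespace: jenkins
spec:
  restartPolicy: Never
  containers:
    - name: rsync
      image: registry.fedoraproject.org/fedora:latest   # any image with rsync available will do
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: old
          mountPath: /old           # original Longhorn-backed volume
        - name: new
          mountPath: /new           # new iSCSI volume on the NAS
  volumes:
    - name: old
      persistentVolumeClaim:
        claimName: jenkins          # original PVC
    - name: new
      persistentVolumeClaim:
        claimName: jenkins-migration
```

With the pod running, something like `kubectl exec -n jenkins jenkins-migrate -- rsync -aHAX /old/ /new/` performs the actual copy.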
# Dustin's Kubernetes Cluster
This repository contains resources for deploying and managing my on-premises Kubernetes cluster.
## Cluster Setup
The cluster primarily consists of libvirt/QEMU+KVM virtual machines. The Control Plane nodes are VMs, as are the x86_64 worker nodes. Eventually, I would like to add Raspberry Pi or Pine64 machines as aarch64 nodes.
All machines run Fedora, using only Fedora builds of the Kubernetes components (`kubeadm`, `kubectl`, and `kubelet`).
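For context, a kubeadm-managed cluster of this sort is typically bootstrapped from a small configuration file along these lines. This is only an illustrative sketch with placeholder values, not the configuration actually used here.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.28.0"                 # placeholder; match the Fedora-packaged release
controlPlaneEndpoint: "k8s.example.com:6443" # hypothetical load-balanced API endpoint
networking:
  podSubnet: "10.244.0.0/16"                 # placeholder pod CIDR
  serviceSubnet: "10.96.0.0/12"
```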
See Cluster Setup for details.
## Jenkins Agents
One of the main use cases for the Kubernetes cluster is to provide dynamic agents for Jenkins. Using the Kubernetes Plugin, Jenkins will automatically launch worker nodes as Kubernetes pods.
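As an illustration of what the plugin does, an agent pod template is essentially an ordinary pod spec. The real templates live in the Jenkins configuration, and all names and values below are hypothetical.

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    # The Kubernetes Plugin automatically injects a "jnlp" container running the
    # inbound agent unless one is defined explicitly.
    - name: builder                                    # hypothetical build container
      image: registry.fedoraproject.org/fedora:latest
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "1"
          memory: 1Gi
```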
See Jenkins Kubernetes Integration for details.
## Persistent Storage
Persistent storage for pods is provided by Longhorn. Longhorn runs within the cluster and provisions storage on the worker nodes, making it available to pods over iSCSI.
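Workloads request Longhorn-backed storage with an ordinary PersistentVolumeClaim against the Longhorn StorageClass, roughly as follows (the claim name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn  # StorageClass provided by the Longhorn installation
  resources:
    requests:
      storage: 10Gi
```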
See Persistent Storage Using Longhorn for details.