Originally, I decided to use *btrfs* subvolumes to create writable directories inside otherwise immutable locations, such as `/etc/cni/net.d`. I figured this would be cleaner than bind-mounting directories from `/var`, and would avoid the trouble of determining the appropriate volume sizes necessary to make each of them its own filesystem. Unfortunately, it turns out that *cri-o* may still have some issues with its *btrfs* storage driver. One [blog post][0] hints at performance issues in *containerd*, and it seems they may apply to *cri-o* as well. I certainly encountered performance issues when attempting to run `npm` in a Jenkins job running in a Kubernetes pod. There is definitely a [performance issue with `npm`][1] when running in a container, which may or may not have been exacerbated by the *btrfs* storage driver. In any case, upstream [does not recommend][2] using the *btrfs* driver, performance notwithstanding. The *overlay* driver is much more widely used and tested. Plus, it's easier to filter container layers out of filesystem usage statistics simply by ignoring *overlay* filesystems.

[0]: https://blog.cubieserver.de/2022/dont-use-containerd-with-the-btrfs-snapshotter/
[1]: https://github.com/npm/cli/issues/3208#issuecomment-1002990902
[2]: https://github.com/containers/storage/issues/929
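
For illustration only (not part of the original setup): with the *overlay* driver, container layers can be excluded from a host disk-usage report using standard GNU `df` options, e.g.:

    # Skip overlay (container layer) and tmpfs mounts when reporting filesystem usage
    df -h -x overlay -x tmpfs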

# Cluster Setup

- Fedora 35
- Fedora Kubernetes packages 1.22

## Installation

For control plane nodes, use the `fedora-k8s-ctrl.ks` kickstart file. For
worker nodes, use `fedora-k8s-node.ks`.

Use *virt-manager* to create the virtual machines.

### Control Plane

    name=k8s-ctrl0; virt-install \
        --name ${name} \
        --memory 4096 \
        --vcpus 2 \
        --cpu host \
        --location http://dl.fedoraproject.org/pub/fedora/linux/releases/35/Everything/x86_64/os \
        --extra-args "ip=::::${name}::dhcp inst.ks=http://rosalina.pyrocufflink.blue/~dustin/kickstart/fedora-k8s-ctrl.ks" \
        --os-variant fedora34 \
        --disk pool=default,size=16,cache=none \
        --network network=kube,model=virtio,mac=52:54:00:be:29:76 \
        --sound none \
        --redirdev none \
        --rng /dev/urandom \
        --noautoconsole \
        --wait -1

### Worker

Be sure to set the correct MAC address for each node!

    name=k8s-amd64-n0; virt-install \
        --name ${name} \
        --memory 4096 \
        --vcpus 2 \
        --cpu host \
        --location http://dl.fedoraproject.org/pub/fedora/linux/releases/35/Everything/x86_64/os \
        --extra-args "ip=::::${name}::dhcp inst.ks=http://rosalina.pyrocufflink.blue/~dustin/kickstart/fedora-k8s-node.ks" \
        --os-variant fedora34 \
        --disk pool=default,size=64,cache=none \
        --disk pool=default,size=256,cache=none \
        --network network=kube,model=virtio,mac=52:54:00:67:ce:35 \
        --sound none \
        --redirdev none \
        --rng /dev/urandom \
        --noautoconsole \
        --wait -1

## Machine Setup

Add the machine to the *pyrocufflink.blue* domain:

    ansible-playbook \
        -l k8s-ctrl0.pyrocufflink.blue \
        remount.yml \
        base.yml \
        hostname.yml \
        pyrocufflink.yml \
        -e ansible_host=172.30.0.170 \
        -u root \
        -e @join.creds

## Initialize cluster

Run on `k8s-ctrl0.pyrocufflink.blue`:

    kubeadm init \
        --control-plane-endpoint kubernetes.pyrocufflink.blue \
        --upload-certs \
        --kubernetes-version=$(rpm -q --qf '%{V}' kubernetes-node) \
        --pod-network-cidr=10.149.0.0/16
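
Before running the `kubectl` commands below, point `kubectl` at the admin credentials that `kubeadm init` generates (a minimal sketch, assuming the commands are run as root on the control plane node):

    # kubeadm writes the cluster administrator kubeconfig here during init
    export KUBECONFIG=/etc/kubernetes/admin.conf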

## Configure Pod Networking

Calico seems to be the best choice, based on its feature completeness; a couple of performance benchmarks also put it at or near the top.

    curl -fL \
        -O 'https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml' \
        -O 'https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml'
    sed -i 's/192\.168\.0\.0\/16/10.149.0.0\/16/' custom-resources.yaml
    kubectl create -f tigera-operator.yaml
    kubectl create -f custom-resources.yaml

Wait for Calico to deploy completely, then restart CoreDNS:

    kubectl wait -n calico-system --for=condition=ready \
        $(kubectl get pods -n calico-system -l k8s-app=calico-node -o name)
    kubectl -n kube-system rollout restart deployment coredns
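
As a quick sanity check (optional), the node should eventually report `Ready` once the pod network is up:

    kubectl get nodes -o wide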

## Add Worker Nodes

    kubeadm join kubernetes.pyrocufflink.blue:6443 \
        --token xxxxxx.xxxxxxxxxxxxxxxx \
        --discovery-token-ca-cert-hash sha256:…
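
The token and CA certificate hash above are placeholders. If the bootstrap token from `kubeadm init` has expired, a fresh join command can be printed on an existing control plane node with the stock kubeadm helper:

    kubeadm token create --print-join-command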

## Add Control Plane Nodes

    kubeadm join kubernetes.pyrocufflink.blue:6443 \
        --token xxxxxx.xxxxxxxxxxxxxxxxx \
        --discovery-token-ca-cert-hash sha256:… \
        --control-plane \
        --certificate-key …
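
The `--certificate-key` value is only valid for a limited time after `kubeadm init --upload-certs`. If it has expired, re-uploading the control plane certificates prints a new key (standard kubeadm phase command, run on an existing control plane node):

    kubeadm init phase upload-certs --upload-certs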

## Create Admin user

    cat > kubeadm-user.yaml <<EOF
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: kubernetes
    controlPlaneEndpoint: kubernetes.pyrocufflink.blue:6443
    certificatesDir: /etc/kubernetes/pki
    EOF
    kubeadm kubeconfig user \
        --client-name dustin \
        --config kubeadm-user.yaml \
        --org system:masters \
        > dustin.kubeconfig
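
To verify that the generated credentials work (using the file name produced by the command above):

    kubectl --kubeconfig dustin.kubeconfig get nodes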