I originally added the `du5t1n.me/storage` label to the x86_64 nodes and
configured Longhorn to run only on nodes with that label, because I
thought that was the correct way to control where volume replicas are
stored. It turns out this was incorrect: it prevented Longhorn from
running on non-matching nodes entirely, so any machine without the
label could not access any Longhorn storage volumes.
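For reference, the label was applied with plain `kubectl label`; the
node name and label value below are placeholders (only the label key
comes from my cluster):

```sh
# Mark an x86_64 node as a storage node (name and value are examples)
kubectl label node my-x86-node du5t1n.me/storage=true
```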
The correct way to limit where Longhorn stores volume replicas is to
enable the `create-default-disk-labeled-nodes` setting. With this
setting enabled, Longhorn will run on all nodes, but will not create
"disks" on them unless they have the
`node.longhorn.io/create-default-disk` label set to `true`. Nodes that
do not have "disks" will not store volume replicas, but will run the
other Longhorn components and can therefore access Longhorn volumes.
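With the setting enabled, designating a replica node is just a matter
of applying the documented label (the node name here is a placeholder):

```sh
# Let Longhorn create its default "disk" on this node
kubectl label node my-x86-node node.longhorn.io/create-default-disk=true
```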
Note that changing the "default settings" ConfigMap has no effect once
Longhorn has been deployed. To update the setting on an existing
installation, it has to be changed explicitly:
```sh
kubectl get setting create-default-disk-labeled-nodes \
    -n longhorn-system -o json \
    | jq '.value = "true"' \
    | kubectl apply -f -
```
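To confirm the change took effect, the value can be read back
(a jsonpath query is just one convenient way to do it):

```sh
kubectl get setting create-default-disk-labeled-nodes \
    -n longhorn-system -o jsonpath='{.value}'
```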
I was originally going to use GlusterFS to provide persistent storage
for pods, but [Heketi][0], the component that provides the API behind
the Kubernetes StorageClass, is in "deep maintenance" status and looks
to be practically dead. That made me wary of relying on it, so I went
looking for guidance on Reddit, which is how I discovered Longhorn.