Commit Graph

268 Commits (f8f19405c7ad8e6e61aad239b758cff55f605567)

Author SHA1 Message Date
Dustin 3be9c40f2b r/vmhost: Install mount helpers
Filesystems like NFS and CIFS require "helper" utilities (i.e.
`mount.nfs` and `mount.cifs`, respectively).  These need to be installed
in order for a system to be able to mount those filesystems.

The current shared storage system uses NFSv4, and as such, the
*nfs-utils* package needs to be installed on the VM hosts.
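
For illustration, the task amounts to something like this (module choice
and naming are assumptions, not the role's actual content):

```yaml
# roles/vmhost/tasks: install the NFSv4 mount helper
- name: Install NFS mount helpers
  dnf:
    name: nfs-utils
    state: present
```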
2021-10-10 16:09:15 -05:00
Dustin 9c97d0318c r/vmhost: mount shared filesystems
With the transition away from *dhcpcd* on the VM hosts, there is no
longer any need for a custom wait script that must run prior to
attempting to mount the shared filesystem.  This dramatically simplifies
the configuration necessary for shared storage.

I don't really see any reason why the shared storage configuration needs
to be managed by a separate role.  The *vmhost* role is not really
generic anyway, and will probably not work for any other VM host
deployment besides the two machines running now.  As such, I think it
makes sense to move the task to mount the shared filesystem into the
*vmhost* role and drop the *dch-storage-net* role.
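
A minimal sketch of the mount task, assuming the `ansible.posix.mount`
module; the export path and mount point are placeholders:

```yaml
- name: Mount shared VM storage
  ansible.posix.mount:
    src: storage0.pyrocufflink.blue:/vmstore   # hypothetical NFS export
    path: /var/lib/libvirt/images
    fstype: nfs4
    state: mounted
```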
2021-10-10 16:09:15 -05:00
Dustin dac34a3620 r/vmhost: Switch to pure YAML syntax 2021-10-10 16:09:15 -05:00
Dustin 3c8f26cf8d r/vmhost: Install libvirt network driver
The *libvirt-daemon-driver-network* package provides support for
managing virtual networks with libvirt.  It is necessary in order to use
managed networks in VM configuration, as opposed to directly specifying
VM network interfaces in their domain configuration.
2021-10-10 16:09:15 -05:00
Dustin 2708dfe3f2 r/systemd-networkd: Role to configure networkd
*systemd-networkd* is (currently) my preferred way to manage network
interfaces on machines running Fedora.  The *systemd-networkd* role
provides a generic way to configure network links, devices, and
interfaces, using Ansible variables to generate network unit
configuration files.
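
A hypothetical example of the variable shape, with one entry per
rendered `.network` unit (the role's real schema may differ):

```yaml
systemd_networkd_networks:    # hypothetical variable name
  eth0:
    Match:
      Name: eth0
    Network:
      DHCP: 'yes'
# rendered to /etc/systemd/network/eth0.network as INI-style sections
```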
2021-10-10 16:09:15 -05:00
Dustin 58832b392b collectd: Add collectd_df variable
The `collectd_df` variable can be used to configure the *df* plugin for
collectd.  It should contain a map of key-value pairs that correspond
exactly to the plugin's configuration options.
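
For example (option names are the *df* plugin's own):

```yaml
collectd_df:
  MountPoint: /
  IgnoreSelected: false
  ReportByDevice: false
  ValuesPercentage: true
```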
2021-08-22 11:38:40 -05:00
Dustin b7ba6a59ab hosts: Add nvr0.p.b
*nvr0.pyrocufflink.blue* hosts Frigate.  It is deployed on a separate
subnet, for two reasons:

* To avoid streaming video from the cameras through the firewall
* To prevent any hosts on the LAN except Home Assistant from
  communicating with Frigate, since it does not have any kind of
  authentication or access control
2021-08-21 17:20:19 -05:00
Dustin 997760968e r/frigate: Add role to deploy Frigate
Frigate is an NVR that uses machine learning to detect objects on camera
in real time.  It integrates with Home Assistant to expose sensors which
can be used for automation, etc.

The only official way to deploy Frigate is with a container, so we use
Podman and systemd to manage it.
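
A sketch of that pattern, with the image reference and volume paths as
illustrative placeholders:

```yaml
- name: Install Frigate systemd unit
  copy:
    dest: /etc/systemd/system/frigate.service
    content: |
      [Unit]
      Description=Frigate NVR

      [Service]
      ExecStartPre=-/usr/bin/podman rm -f frigate
      ExecStart=/usr/bin/podman run --rm --name frigate \
          -v /var/lib/frigate:/media/frigate \
          docker.io/blakeblackshear/frigate:stable
      ExecStop=/usr/bin/podman stop frigate

      [Install]
      WantedBy=multi-user.target
```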
2021-08-21 17:16:58 -05:00
Dustin 7d2b3887c2 Add ability to update HA-related containers
Home Assistant, Zigbee2MQTT, and ZWaveJS2MQTT can now be updated by
setting the corresponding Ansible variable.
2021-08-12 19:02:34 -05:00
Dustin 8f6f553820 Add HTTPS certificate for hass2.p.b 2021-07-24 18:39:45 -05:00
Dustin d78257326a Merge branch 'tabitha-website' 2021-07-24 18:37:05 -05:00
Dustin 910d430e1e website: Deploy Tabitha's website
Tabitha's website is a very simple, static HTML site.  It is uploaded
via SFTP and served at *tabitha.biz*.
2021-07-24 18:36:13 -05:00
Dustin 5d7ebbaa05 r/hass-dhcp: Clean up DHCP/DNS service
The production deployment of *dnsmasq* for Home Assistant has deviated
from how the *hass-dhcp* role configures it.  This commit brings the
role back in sync with how things really are.
2021-07-24 18:33:15 -05:00
Dustin ceeb61cdb0 roles/homeassistant: Proxy ZwaveJS2Mqtt Web UI
ZwaveJS2Mqtt includes a very powerful web-based UI for configuring and
controlling the Z-Wave network.  This functionality is no longer
available within Home Assistant itself, so being able to access the
ZwaveJS2Mqtt UI is crucial to operating the network.

I wanted to make the UI available at */zwave/*, which requires using
*mod_rewrite* to conditionally proxy requests based on the `Connection`
HTTP header, since the UI passes both HTTP and WebSocket requests to the
same paths.  *mod_rewrite* configuration is not inherited from the main
server configuration to virtual hosts, so the
`RewriteRule`/`RewriteCond` directives have to be specified within the
`<VirtualHost>` block.  This means that the Home Assistant proxy
configuration has to be within its own virtual host, and the
ZwaveJS2Mqtt configuration has to be there as well.
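
The resulting vhost fragment looks roughly like this (the UI's port and
the exact rewrite flags are assumptions):

```yaml
- name: Proxy ZwaveJS2Mqtt under /zwave/
  blockinfile:
    path: /etc/httpd/conf.d/homeassistant.conf
    block: |
      RewriteEngine On
      RewriteCond %{HTTP:Upgrade} =websocket [NC]
      RewriteRule ^/zwave/(.*) ws://127.0.0.1:8091/$1 [P,L]
      RewriteRule ^/zwave/(.*) http://127.0.0.1:8091/$1 [P,L]
```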
2021-07-19 15:58:58 -05:00
Dustin b826d8355e hosts: Add hass2.p.b
*hass2.pyrocufflink.blue* is a Raspberry Pi Compute Module 4-based
system, currently mounted in a WaveShare CM4 Mini Base Board (A).  With
an NVMe SSD for primary storage, it runs significantly faster than a
standard Raspberry Pi 4, and blows the old Raspberry Pi 3-based Home
Assistant deployment out of the water. It has a Zooz 700 series Z-Wave
Plus S2 USB stick and a ConBee II Zigbee USB stick attached to its USB
2.0 ports.  It runs a customized Fedora Minimal distribution.
2021-07-19 15:58:58 -05:00
Dustin 2f3d0f74a1 roles/Zigbee2MQTT: Deploy using Podman
Zigbee2MQTT is very similar to ZwaveJS2Mqtt: it is a daemon process that
communicates with the Zigbee radio and integrates with Home Assistant
using MQTT.  Naturally, I decided to deploy it in the same way as
ZwaveJS2Mqtt, using a systemd unit to run it in a container with Podman.
2021-07-19 15:58:58 -05:00
Dustin 57b3039f2c roles/mosquitto: Update for Mosquitto 2.x
Mosquitto 2.x included two significant changes from 1.6:

* There is no longer a "default" listener; all listeners are configured
  in the same way
* The daemon drops privileges *before* reading TLS certificates and
  private keys
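
A sketch of a 2.x-style listener reflecting both changes (paths are
illustrative):

```yaml
- name: Configure Mosquitto listener
  copy:
    dest: /etc/mosquitto/mosquitto.conf
    content: |
      # 2.x: no implicit default; every listener is declared explicitly
      listener 8883
      certfile /etc/mosquitto/tls/server.crt
      keyfile /etc/mosquitto/tls/server.key
      # 2.x: certificates/keys are read after dropping privileges, so
      # the mosquitto user must be able to read these files
```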
2021-07-19 15:58:58 -05:00
Dustin 0f70a5b6ba roles/zwavejs2mqtt: Deploy Z2M using Podman
Home Assistant no longer recommends using the built-in libopenzwave
integration for communicating with Z-Wave devices.  Evidently, OpenZWave
is no longer maintained, and community efforts have shifted toward
Z-Wave JS.

Z-Wave JS is architecturally much different than the legacy Z-Wave
integration.  Instead of running the network controller inside the Home
Assistant process, a separate daemon communicates with the Z-Wave radio.
Home Assistant integrates with that daemon using a WebSockets API.  This
has the advantage of decoupling the network operation from the lifecycle
of the Home Assistant process: restarting Home Assistant (e.g. to load
new configuration changes) does not take the Z-Wave network offline.

ZwaveJS2Mqtt is a distribution of the Z-Wave JS daemon, as well as a
web-based user interface for configuring it.  Although its name implies
that it uses MQTT for communication, this feature is actually optional,
and the native WebSockets API can still be used for integration with
Home Assistant.

I decided to follow the same deployment pattern for ZwaveJS2Mqtt as for
Home Assistant itself: run the application from a container image using
Podman.  This of course simplifies the installation of the application
significantly, leaving most of that work up to the maintainer of the
container image.  Podman provides the container runtime, managing the
privileges, etc.  The systemd service unit starts Podman, configuring an
ephemeral container on each run.  The container uses the default network
namespace, avoiding the unnecessary overhead of port mapping.  It uses
Podman's "rootless" mode, via the `--uidmap` and `--gidmap` arguments,
mapping users inside the container, including root, to unprivileged
users on the host.  The Z-Wave radio, which is specified by the
`zwavejs_device` Ansible variable, is passed into the container via the
`--device` argument.
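
A condensed sketch of the unit, with the UID/GID ranges and the image
reference as illustrative values:

```yaml
- name: Install ZwaveJS2Mqtt systemd unit
  copy:
    dest: /etc/systemd/system/zwavejs2mqtt.service
    content: |
      [Service]
      ExecStart=/usr/bin/podman run --rm --name zwavejs2mqtt \
          --uidmap 0:100000:65536 --gidmap 0:100000:65536 \
          --device {{ zwavejs_device }} \
          -v /var/lib/zwavejs2mqtt:/usr/src/app/store \
          docker.io/zwavejs/zwavejs2mqtt:latest

      [Install]
      WantedBy=multi-user.target
```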
2021-07-19 15:58:52 -05:00
Dustin 288b050a33 roles/homeassistant: Deploy container with Podman
Installing Home Assistant in a Python virtualenv is rather tedious,
especially on non-x86 machines.  The main issue is Python packages that
include native extensions, as many of these do not have binary wheels
available for aarch64, etc. on PyPI.  Thus, to install these, they have
to be built from source, which then requires the appropriate development
packages to be installed.  Additionally, compiling native code on a
Raspberry Pi is excruciatingly slow.  I have considered various ways of
mitigating this, but all would require a substantial time investment,
both up front and ongoing, making them rather pointless.  Eventually, I
settled on just deploying the official Home Assistant container image
with Podman.

Although Podman includes a tool for generating systemd service unit
files for running containers, I ended up creating my own for several
reasons.  First and foremost, the generated unit files configure the
containers to run as *root*, but I wanted to run Home Assistant as an
unprivileged user.  Unfortunately, I could not seem to get the container
to work when dropping privileges using the `User` directive of the unit.
Fortunately, `podman` has `--uidmap` and `--gidmap` arguments, which I
was able to use to map UID/GID 0 in the container to the *homeassistant*
user on the host.  Another drawback of the generated unit files is that
they specify a "forking" type service, which is not really necessary.
Podman/conmon supports the systemd notify protocol, but the generator
has not been updated to make use of that yet.

Recent versions of Home Assistant are more strict with respect to how
reverse proxies are handled.  In order to use one, it must be explicitly
listed in the configuration file.  Therefore, the *homeassistant*
Ansible role will now create a stub `configuration.yaml`, based on the
one generated by Home Assistant itself when it starts for the first time
on a new machine, that includes the appropriate configuration for a
reverse proxy running on the same machine.  The stub configuration will
not overwrite an existing configuration file, so it is only useful when
deploying Home Assistant for the first time on a new machine.
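
The stub presumably looks something like this (the keys are Home
Assistant's own; the exact contents of the role's stub may differ):

```yaml
# configuration.yaml
default_config:

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 127.0.0.1
    - ::1
```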

Overall, although I think a 300+ MB container image is ridiculous,
deploying Home Assistant this way should make it a lot easier to manage,
especially when updating.
2021-07-19 13:38:08 -05:00
Dustin 4aa3cdddd9 hosts: Add zezere0.p.b
*zezere0.pyrocufflink.blue* hosts the local deployment of the Fedora
Zezere provisioning service.
2021-07-05 09:34:25 -05:00
Dustin ccdaad40bf zezere: role/playbook to deploy Zezere
Zezere is the Fedora IoT device provisioning service.  It is the
software that runs *provision.fedoraproject.org*, but it can be
self-hosted (if you can figure it out; there is no documentation
whatsoever).

The main use case for running Zezere locally is to automatically add
trusted SSH public keys to Fedora IoT devices, without depending on a
cloud service.  This playbook sets up Zezere with the very minimal
configuration needed to meet this goal.
2021-07-05 09:34:25 -05:00
Dustin 9565b740b0 hosts: add stats0.p.b
*stats0.pyrocufflink.blue* hosts Grafana (for now, adding Victoria
Metrics, etc. later)
2021-07-02 21:55:02 -05:00
Dustin 5e61e93cea roles/grafana: Deploy Grafana
This commit introduces the *grafana* role and the corresponding
`grafana.yml` playbook.  The role installs Grafana using the system
package manager, and configures the server (including LDAP
authentication).
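
For instance, the LDAP piece could be toggled like this (a sketch; the
role's actual mechanism and option values may differ):

```yaml
- name: Enable LDAP authentication in Grafana
  ini_file:
    path: /etc/grafana/grafana.ini
    section: auth.ldap
    option: enabled
    value: 'true'
```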
2021-07-02 21:47:33 -05:00
Dustin 5de8f3e612 roles/nginx: Add role for nginx
This role installs nginx and provides a base/skeleton configuration
suitable for most deployments.
2021-06-29 21:00:46 -05:00
Dustin 1fcfc80254 r/protonvpn: watchdog: Also watch for EAP/FAIL
Occasionally, ProtonVPN servers randomly reject the EAP authentication
credentials.  When this happens, the tunnel fails and is not restarted
automatically by strongSwan.  As such, the watchdog needs to react to
this event as well.
2021-06-27 09:23:46 -05:00
Dustin 6b9b87a406 roles/nextcloud: Configure outbound email
Since the Nextcloud configuration file is managed by the configuration
policy, all of the settings configurable through the web UI need to be
templated.  One important group of settings is the outbound email
configuration.  This can now be configured using the `nextcloud_smtp`
Ansible variable.
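
A hypothetical example of the variable; the key names are assumptions
that map onto Nextcloud's `mail_*` settings in `config.php`:

```yaml
nextcloud_smtp:
  smtphost: smtp.example.com
  smtpport: 587
  smtpsecure: tls
  smtpauth: true
  smtpname: nextcloud@example.com
  smtppassword: '{{ vault_nextcloud_smtp_password }}'
```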
2021-06-25 11:12:38 -05:00
Dustin c68f10d771 roles/nextcloud: Use Redis for caching
The Nextcloud community [recommends][0] using Redis as a cache provider,
to improve response times and file locking reliability.
2021-06-25 11:12:12 -05:00
Dustin 05a414100a roles/redis: Add role to deploy Redis
This simple role installs the *redis* package and starts the associated
service.  It leaves the configuration as provided by upstream, at least
for now.
2021-06-25 11:10:10 -05:00
Dustin b86e0d8f29 roles/nextcloud: Switch to Fedora package
Fedora now includes a packaged version of Nextcloud.  This will be
_much_ easier to maintain than the tarball-based distribution method.
There are some minor differences in how the Fedora package works,
compared to the upstream tarball.  Notably, it puts the configuration
file in `/etc/` and makes it read-only, and it stores persistent data
separate from the application.  These differences require modifications
to the Apache and PHP-FPM configuration, but the package also included
examples to make this easier.  Since the `config.php` is read-only now,
it has to be managed by the configuration policy; it cannot be modified
by the Administration web UI.
2021-06-24 20:21:48 -05:00
Dustin 0add34a9a3 roles/protonvpn: Add watchdog script
One major problem with the current DNS-over-VPN implementation is that
the ProtonVPN servers are prone to random outages.  When the server
we're using goes down, there is not a straightforward way to switch to
another one.  At first I tried creating a fake DNS zone with A records
for each ProtonVPN server, all for the same name.  This ultimately did
not work, but I am not sure I understand why.  strongSwan would
correctly resolve the name each time it tried to connect, and send IKE
initialization requests to a different address each time, but would
reject the responses from all except the first address it used.  The
only way to get it working again was to restart the daemon.

Since strongSwan is apparently not going to be able to handle this kind
of fallback on its own, I decided to write a script to do it externally.
Enter `protonvpn-watchdog.py`.  This script reads the syslog messages
from strongSwan (via the systemd journal, using `journalctl`'s JSON
output) and reacts when it receives the "giving up after X tries"
message.  This message indicates that strongSwan has lost connection to
the current server and has not been able to reestablish it within the
retry period.  When this happens, the script will consult the cached
list of ProtonVPN servers and find the next one available.  It keeps
track of the ones that have failed in the past, and will not connect to
them again, so as not to simply bounce back-and-forth between two
(possibly dead) servers.  Approximately every hour, it will attempt to
refresh the server list, to ensure that the most accurate server scores
and availability are known.
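
The script itself is not reproduced here, but wiring it up as a service
is straightforward; a sketch, with the install path and unit name as
assumptions:

```yaml
- name: Install ProtonVPN watchdog service
  copy:
    dest: /etc/systemd/system/protonvpn-watchdog.service
    content: |
      [Unit]
      Description=ProtonVPN failover watchdog
      After=strongswan.service

      [Service]
      # the script consumes `journalctl -o json` output internally
      ExecStart=/usr/local/bin/protonvpn-watchdog.py
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target
```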
2021-06-21 20:48:23 -05:00
Dustin bb6186b90e roles/mosquitto: Add role to deploy MQTT server
*Mosquitto* implements an MQTT server.  It is the recommended
implementation for using MQTT with Home Assistant.

I have added this role to deploy Mosquitto on the Home Assistant server.
It will be used to send data from custom sensors, such as the
temperature/pressure/humidity sensor connected to the living room wall
display.
2021-05-02 19:10:17 -05:00
Dustin 7b04326146 roles/websites/chmod777: Remove HTTP vhost
Since there are no other plain HTTP virtual hosts, the one defined for
chmod777.sh became the "default."  Since it explicitly redirects all
requests to https://chmod777.sh, it caused all non-HTTPS requests to be
redirected there, regardless of the requested name.  This was
particularly confusing for Tabitha, as she frequently forgets to put
https://…, and would find herself at my stupid blog instead of
Nextcloud.
2021-03-11 19:57:37 -06:00
Dustin 6aaf1b7dbb roles/strongswan-swanctl: Load esp4 module at boot
The *esp4* kernel module does not load automatically on Fedora.  Without
this module, strongSwan can establish IKE SAs, but not ESP SAs.  Listing
the module name in a file in `/etc/modules-load.d` configures the
*systemd-modules-load* service to load it at boot.
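
The task amounts to little more than:

```yaml
- name: Load esp4 module at boot
  copy:
    dest: /etc/modules-load.d/esp4.conf
    content: |
      esp4
```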
2021-02-17 20:33:41 -06:00
Dustin 243510e74a roles/protonvpn: Fix swanctl syntax
I believe the VPN was not auto-restarting because I had incorrectly
specified the `keyingtries` and `dpd_delay` configuration options.
These are properties of the top-level connection, not the child. I must
have placed them in the `children` block by accident.
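
The corrected structure, sketched with illustrative values:

```yaml
- name: Install ProtonVPN swanctl config
  copy:
    dest: /etc/swanctl/conf.d/protonvpn.conf
    content: |
      connections {
        protonvpn {
          # connection-level options:
          keyingtries = 0
          dpd_delay = 30s
          children {
            protonvpn {
              # child (ESP) options only
            }
          }
        }
      }
```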
2021-02-09 07:17:32 -06:00
Dustin 81417068c2 roles/synapse: Add cert role dependency
The *cert* role must be defined as a role dependency now, so that the
role can define a handler to "listen" for the "certificate changed"
event.  This change happened on *master*, before the *matrix* branch was
merged.
2021-01-31 15:38:18 -06:00
Dustin 7f8a9ce114 roles/graylog: Update Graylog repository RPM URL
Graylog 3.3 is currently installed on logs0.  Attempting to install the
*graylog-3.1-repository* package causes a transaction conflict, making
the task and playbook fail.
2021-01-31 15:33:42 -06:00
Dustin ee93586a95 roles/apache: Add previously-ignored cert symlinks
Before the advent of `ansible-vault`, and long before `certbot`/`lego`,
I used to keep certificate files (and especially private key files) out
of the Git repository.  Now that certificates are stored in a separate
repository, and only symlinks are stored in the configuration policy,
this no longer makes any sense.  In particular, it prevents the continuous
enforcement process from installing Let's Encrypt certificates that have
been automatically renewed.
2021-01-24 17:08:00 -06:00
Dustin 7d740079c8 roles/ssh-hostkeys: Remove duplicate keys for git0 2020-12-30 22:13:47 -06:00
Dustin 5a114eecf0 websites/proxy-matrix: Add Synapse rev proxy setup
The *websites/proxy-matrix* role configures the Internet-facing reverse
proxy to handle the *hatch.chat* domain.  Most Matrix communication
happens over the default HTTPS port, and as such will be directed
through the reverse proxy.
2020-12-30 22:05:26 -06:00
Dustin 2df1605421 hosts: Add matrix0.p.b
*matrix0.pyrocufflink.blue* hosts the Matrix homeserver for
*hatch.chat*.
2020-12-30 22:04:51 -06:00
Dustin 371305bed4 roles/synapse: Deploy the Matrix homeserver
The *synapse* role and the corresponding `synapse.yml` playbook deploy
Synapse, the reference Matrix homeserver implementation.

Deploying Synapse itself is fairly straightforward: it is packaged by
Fedora and therefore can simply be installed via `dnf` and started by
`systemd`.  Making the service available on the Internet, however, is
more involved.  The Matrix protocol mostly works over HTTPS on the
standard port (443), so a typical reverse proxy deployment is mostly
sufficient.  Some parts of the Matrix protocol, however, involve
communication over an alternate port (8448).  This could be handled by a
reverse proxy as well, but since it is a fairly unique port, it could
also be handled by NAT/port forwarding.  In order to support both
deployment scenarios (as well as the hypothetical scenario wherein the
Synapse machine is directly accessible from the Internet), the *synapse*
role supports specifying an optional `matrix_tls_cert` variable.  If
this variable is set, it should contain the path to a certificate file
on the Ansible control machine that will be used for the "direct"
connections (i.e. on port 8448).  If it is not set, the default Apache
certificate will be used for both virtual hosts.

Synapse has a pretty extensive configuration schema, but most of the
options are set to their default values by the *synapse* role.  Other
than substituting secret keys, the only exposed configuration option is
the LDAP authentication provider.
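
A hypothetical host_vars entry (the real path on the control machine
will differ):

```yaml
matrix_tls_cert: certs/hatch.chat/fullchain.pem
```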
2020-12-30 21:54:02 -06:00
Dustin 2da26d9bf6 roles/bitwarden_rs: Ensure docker service runs
Since the *bitwarden_rs* role relies on Docker for distribution and process
management (at least for now), it needs to ensure that the `docker`
service starts automatically.
2020-12-30 21:02:32 -06:00
Dustin f9e8c78e5a roles/websites: Set authorized_keys file perms
Because the various "webapp.*" users' home directories are under
`/srv/www`, the default SELinux context type is `httpd_sys_content_t`.
The SSH daemon is not allowed to read files with this label, so it
cannot load the contents of these users' `authorized_keys` files.  To
address this, we have to explicitly set the SELinux type to
`ssh_home_t`.
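
The fix presumably looks something like this (the path pattern and loop
variable are assumptions):

```yaml
- name: Set SELinux type on authorized_keys
  file:
    path: /srv/www/{{ item }}/.ssh/authorized_keys
    setype: ssh_home_t
  loop: '{{ website_users }}'   # hypothetical variable
```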
2020-12-30 20:59:27 -06:00
Dustin c92af29e84 roles/named: Send application logs to syslog
BIND sends its normal application logs (as opposed to query logs) to the
`default_debug` channel.  By sending these log messages to syslog, they
can be routed and rotated using the normal system policies.  Using a
separate dedicated log file just ends up consuming a lot of space, as it
is not managed by any policy.
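
A sketch of the implied `named.conf` stanza, using BIND's built-in
`default_syslog` channel:

```yaml
- name: Route BIND application logs to syslog
  blockinfile:
    path: /etc/named.conf
    block: |
      logging {
        category default { default_syslog; };
      };
```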
2020-12-26 11:36:15 -06:00
Dustin 59e8244a08 roles/apache: Add tags to tasks
Adding tags to tasks makes them easier to run in isolation.
2020-12-26 11:35:45 -06:00
Dustin aa1ab75edd roles/apache: logrotate: Compress logs immediately
To conserve space on the log volume of web servers, "old" log files are
now compressed immediately, as opposed to waiting until the next
rotation.
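
In logrotate terms, this means enabling `compress` and dropping
`delaycompress`; a sketch, with an illustrative path:

```yaml
- name: Compress rotated Apache logs immediately
  copy:
    dest: /etc/logrotate.d/httpd
    content: |
      /var/log/httpd/*log {
          missingok
          notifempty
          sharedscripts
          compress
          postrotate
              /bin/systemctl reload httpd.service > /dev/null 2>&1 || true
          endscript
      }
```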
2020-12-26 11:33:09 -06:00
Dustin b519f97f6a roles/apache: Disable ssl_request_log
I am not sure of the point of having both `ssl_request_log` and
`ssl_access_log`.  The former includes the TLS ciphers used in the
connection, which is not particularly interesting information.  To save
space on the log volume of web servers using Apache, we should just stop
creating this log file.
2020-12-26 11:31:24 -06:00
Dustin d1cdc8bfc3 roles/cert: Add handler topic notification
Changing/renewing a certificate generally requires restarting or
reloading some service.  Since the *cert* role is intended to be generic
and reusable, it naturally does not know what action to take to effect
the change.  It works well for the initial deployment of a new
application, since the service is reloaded anyway in order for the new
configuration to be applied.  It fails, however, for continuous
enforcement, when a certificate is renewed automatically (i.e. by
`lego`) but no other changes are being made.  This has caused a number
of disruptions when some certificate expires and its replacement is
available but has not yet been loaded.

To address this issue, I have added a handler "topic" notification to
the *certs* role.  When either the certificate or private key file is
replaced, the relevant task will "notify" a generic handler "topic."
This allows some other role to define a specific handler, which
"listens" for these notifications, and takes the appropriate action for
its respective service.

For this mechanism to work, though, the *cert* role can only be used as
a dependency of another role.  That role must define the handler and
configure it to listen to the generic "certificate changed" topic.  As
such, each of the roles that are associated with a certificate deployed
by the *cert* role now declare it as a dependency, and the top-level
playbooks only include those roles.
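
Sketched, the pattern in a consuming role looks like this (the handler
body is illustrative; the `listen` topic is the one described above):

```yaml
# roles/synapse/meta/main.yml (sketch)
dependencies:
  - role: cert
---
# roles/synapse/handlers/main.yml (sketch)
- name: restart synapse
  service:
    name: synapse
    state: restarted
  listen: certificate changed
```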
2020-12-26 10:38:17 -06:00
Dustin 8a18c92730 roles/collectd-prometheus: Configure plugin
The *collectd-prometheus* role configures the *write_prometheus* plugin
for collectd.  This plugin exposes data collected or received by the
collectd process in the Prometheus Exposition Format over HTTP.  It
provides the same functionality as the "official" collectd Exporter
maintained by the Prometheus team, but integrates natively into the
collectd process, and is much more complete.

The main intent of this role is to provide a mechanism to combine the
collectd data from all Pyrocufflink hosts and insert it into Prometheus.
By configuring the collectd instance on the Prometheus server itself to
enable and use the *write_prometheus* plugin and to receive the
multicast data from other hosts, collectd itself provides the desired
functionality.
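
The plugin configuration itself is tiny; a sketch using the plugin's
conventional default port:

```yaml
- name: Configure write_prometheus plugin
  copy:
    dest: /etc/collectd.d/prometheus.conf
    content: |
      LoadPlugin write_prometheus
      <Plugin write_prometheus>
        Port "9103"
      </Plugin>
```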
2020-12-26 09:44:04 -06:00
Dustin 8e180d00ab collectd: Ensure service is enabled 2020-12-23 21:25:49 -06:00