Occasionally, the ProtonVPN servers reject the EAP authentication
credentials for no apparent reason. When this happens, the tunnel fails
and strongSwan does not restart it automatically, so the watchdog needs
to react to this event as well.
Since the Nextcloud configuration file is managed by the configuration
policy, all of the settings configurable through the web UI need to be
templated. One important group of settings is the outbound email
configuration. This can now be configured using the `nextcloud_smtp`
Ansible variable.
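For illustration, a value for this variable might look something like
the sketch below; the key names are assumptions (loosely mirroring
Nextcloud's `mail_*` settings in `config.php`), not necessarily the
role's actual schema.

```yaml
# Hypothetical host_vars entry; key names are assumptions
nextcloud_smtp:
  host: smtp.example.com
  port: 587
  secure: tls
  from_address: nextcloud
  domain: example.com
  auth: true
  user: nextcloud@example.com
  password: '{{ vault_nextcloud_smtp_password }}'
```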
This simple role installs the *redis* package and starts the associated
service. It leaves the configuration as provided by upstream, at least
for now.
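A minimal sketch of what the role's tasks could look like, assuming
Fedora's `dnf` and systemd (task names are illustrative):

```yaml
# tasks/main.yml (sketch)
- name: install the redis package
  ansible.builtin.dnf:
    name: redis
    state: present

- name: start and enable the redis service
  ansible.builtin.systemd:
    name: redis
    state: started
    enabled: true
```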
Fedora now includes a packaged version of Nextcloud. This will be
_much_ easier to maintain than the tarball-based distribution method.
There are some minor differences in how the Fedora package works,
compared to the upstream tarball. Notably, it puts the configuration
file in `/etc/` and makes it read-only, and it stores persistent data
separate from the application. These differences require modifications
to the Apache and PHP-FPM configuration, but the package also includes
examples to make this easier. Since `config.php` is now read-only,
it has to be managed by the configuration policy; it cannot be modified
by the Administration web UI.
One major problem with the current DNS-over-VPN implementation is that
the ProtonVPN servers are prone to random outages. When the server
we're using goes down, there is no straightforward way to switch to
another one. At first I tried creating a fake DNS zone with A records
for each ProtonVPN server, all for the same name. This ultimately did
not work, but I am not sure I understand why. strongSwan would
correctly resolve the name each time it tried to connect, and send IKE
initialization requests to a different address each time, but would
reject the responses from all except the first address it used. The
only way to get it working again was to restart the daemon.
Since strongSwan is apparently not going to be able to handle this kind
of fallback on its own, I decided to write a script to do it externally.
Enter `protonvpn-watchdog.py`. This script reads the syslog messages
from strongSwan (via the systemd journal, using `journalctl`'s JSON
output) and reacts when it receives the "giving up after X tries"
message. This message indicates that strongSwan has lost connection to
the current server and has not been able to reestablish it within the
retry period. When this happens, the script will consult the cached
list of ProtonVPN servers and find the next one available. It keeps
track of the ones that have failed in the past, and will not connect to
them again, so as not to simply bounce back-and-forth between two
(possibly dead) servers. Approximately every hour, it will attempt to
refresh the server list, to ensure that the most accurate server scores
and availability are known.
*Mosquitto* implements an MQTT server. It is the recommended
implementation for using MQTT with Home Assistant.
I have added this role to deploy Mosquitto on the Home Assistant server.
It will be used to send data from custom sensors, such as the
temperature/pressure/humidity sensor connected to the living room wall
display.
Since there are no other plain HTTP virtual hosts, the one defined for
chmod777.sh became the "default." Because it explicitly redirects all
requests to https://chmod777.sh, all non-HTTPS requests ended up being
redirected there, regardless of the requested name. This was
particularly confusing for Tabitha, as she frequently forgets to type
https://…, and would find herself at my stupid blog instead of
Nextcloud.
The *esp4* kernel module does not load automatically on Fedora. Without
this module, strongSwan can establish IKE SAs, but not ESP SAs. Listing
the module name in a file in `/etc/modules-load.d` configures the
*systemd-modules-load* service to load it at boot.
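An Ansible task along these lines is enough; the fragment file name is
arbitrary.

```yaml
# Sketch: systemd-modules-load reads one module name per line from
# files in /etc/modules-load.d
- name: load the esp4 module at boot
  ansible.builtin.copy:
    dest: /etc/modules-load.d/esp4.conf
    content: |
      esp4
    mode: '0644'
```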
I believe the VPN was not restarting automatically because I had
specified the `keyingtries` and `dpd_delay` configuration options
incorrectly. These are properties of the top-level connection, not the
child; I must have placed them in the `children` block by accident.
The *cert* role must be defined as a role dependency now, so that the
role can define a handler to "listen" for the "certificate changed"
event. This change happened on *master*, before the *matrix* branch was
merged.
Graylog 3.3 is currently installed on logs0. Attempting to install the
*graylog-3.1-repository* package causes a transaction conflict, making
the task and playbook fail.
Before the advent of `ansible-vault`, and long before `certbot`/`lego`,
I used to keep certificate files (and especially private key files) out
of the Git repository. Now that certificates are stored in a separate
repository, and only symlinks are stored in the configuration policy,
this no longer makes any sense. In particular, it prevents the continuous
enforcement process from installing Let's Encrypt certificates that have
been automatically renewed.
The *websites/proxy-matrix* role configures the Internet-facing reverse
proxy to handle the *hatch.chat* domain. Most Matrix communication
happens over the default HTTPS port, and as such will be directed
through the reverse proxy.
The *synapse* role and the corresponding `synapse.yml` playbook deploy
Synapse, the reference Matrix homeserver implementation.
Deploying Synapse itself is fairly straightforward: it is packaged by
Fedora and therefore can simply be installed via `dnf` and started by
`systemd`. Making the service available on the Internet, however, is
more involved. The Matrix protocol mostly works over HTTPS on the
standard port (443), so a typical reverse proxy deployment is mostly
sufficient. Some parts of the Matrix protocol, however, involve
communication over an alternate port (8448). This could be handled by a
reverse proxy as well, but since it is a fairly unique port, it could
also be handled by NAT/port forwarding. In order to support both
deployment scenarios (as well as the hypothetical scenario wherein the
Synapse machine is directly accessible from the Internet), the *synapse*
role supports specifying an optional `matrix_tls_cert` variable. If
this variable is set, it should contain the path to a certificate file
on the Ansible control machine that will be used for the "direct"
connections (i.e. on port 8448). If it is not set, the default Apache
certificate will be used for both virtual hosts.
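For the NAT/port-forwarding case, the variable would be set in the
host's variables; the path below is only an example.

```yaml
# host_vars sketch: certificate (on the control machine) used for
# "direct" connections on port 8448
matrix_tls_cert: certs/hatch.chat.pem
```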
Synapse has a pretty extensive configuration schema, but most of the
options are set to their default values by the *synapse* role. Other
than substituting secret keys, the only exposed configuration option is
the LDAP authentication provider.
Since the *bitwarden_rs* role relies on Docker for distribution and
process management (at least for now), it needs to ensure that the
`docker`
service starts automatically.
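In practice, that amounts to a single service task, roughly:

```yaml
# Sketch: make sure Docker is running and starts on boot
- name: enable and start the docker service
  ansible.builtin.systemd:
    name: docker
    state: started
    enabled: true
```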
Because the various "webapp.*" users' home directories are under
`/srv/www`, the default SELinux context type is `httpd_sys_content_t`.
The SSH daemon is not allowed to read files with this label, so it
cannot load the contents of these users' `authorized_keys` files. To
address this, we have to explicitly set the SELinux type to
`ssh_home_t`.
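A sketch of how this could be expressed in Ansible, assuming the
*community.general* collection is available; the path pattern is an
assumption about where the `authorized_keys` files live.

```yaml
# Sketch: persist the file context, then apply it to existing files
- name: label webapp .ssh directories as ssh_home_t
  community.general.sefcontext:
    target: '/srv/www/[^/]+/\.ssh(/.*)?'
    setype: ssh_home_t
    state: present

- name: apply the new context
  ansible.builtin.command: restorecon -Rv /srv/www
```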
BIND sends its normal application logs (as opposed to query logs) to the
`default_debug` channel. By sending these log messages to syslog, they
can be routed and rotated using the normal system policies. Using a
separate dedicated log file just ends up consuming a lot of space, as it
is not managed by any policy.
I am not sure what the point is of having both `ssl_request_log` and
`ssl_access_log`. The former includes the TLS ciphers used in the
connection, which is not particularly interesting information. To save
space on the log volume of web servers using Apache, we should just stop
creating this log file.
Changing/renewing a certificate generally requires restarting or
reloading some service. Since the *cert* role is intended to be generic
and reusable, it naturally does not know what action to take to effect
the change. It works well for the initial deployment of a new
application, since the service is reloaded anyway in order for the new
configuration to be applied. It fails, however, for continuous
enforcement, when a certificate is renewed automatically (i.e. by
`lego`) but no other changes are being made. This has caused a number
of disruptions when some certificate expires and its replacement is
available but has not yet been loaded.
To address this issue, I have added a handler "topic" notification to
the *cert* role. When either the certificate or private key file is
replaced, the relevant task will "notify" a generic handler "topic."
This allows some other role to define a specific handler, which
"listens" for these notifications, and takes the appropriate action for
its respective service.
For this mechanism to work, though, the *cert* role can only be used as
a dependency of another role. That role must define the handler and
configure it to listen to the generic "certificate changed" topic. As
such, each of the roles associated with a certificate deployed by the
*cert* role now declares it as a dependency, and the top-level
playbooks only include those roles.
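Roughly, the pieces fit together like this; the file paths, task names,
and `cert_*` variables are illustrative, but the `notify`/`listen`
mechanism is standard Ansible behavior.

```yaml
# roles/cert/tasks/main.yml (sketch): notify the generic topic
---
- name: install certificate
  ansible.builtin.copy:
    src: '{{ cert_src }}'
    dest: '{{ cert_dest }}'
    mode: '0644'
  notify: certificate changed

# roles/httpd/handlers/main.yml (sketch): service-specific reaction
---
- name: reload httpd
  ansible.builtin.systemd:
    name: httpd
    state: reloaded
  listen: certificate changed
```

The consuming role pulls in *cert* via `dependencies` in its
`meta/main.yml`, so the notification and the listening handler always
end up in the same play.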
The *collectd-prometheus* role configures the *write_prometheus* plugin
for collectd. This plugin exposes data collected or received by the
collectd process in the Prometheus Exposition Format over HTTP. It
provides the same functionality as the "official" collectd Exporter
maintained by the Prometheus team, but integrates natively into the
collectd process, and is much more complete.
The main intent of this role is to provide a mechanism to combine the
collectd data from all Pyrocufflink hosts and insert it into Prometheus.
Configuring the collectd instance on the Prometheus server itself to
enable the *write_prometheus* plugin and to receive the multicast data
from other hosts provides exactly this functionality.
For hosts with multiple network interfaces, collectd may not send
multicast messages through the correct interface. To ensure that it
does, the `Interface` configuration option can be specified with each
`Server` option. To define this option, entries in the
`collectd_network_servers` list can now have an `interface` property.
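For example (the `host` key name is an assumption about the entry
schema; the address is collectd's default IPv6 multicast group):

```yaml
# Sketch: send multicast out the LAN-facing interface
collectd_network_servers:
  - host: 'ff18::efc0:4a42'   # default collectd multicast group
    interface: eth0
```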
The *collectd* role, with its corresponding `collectd.yml` playbook,
installs *collectd* onto the managed node and manages basic
configuration for it. By default, it will enable several plugins,
including the `network` plugin. The `collectd_disable_plugins` variable
can be set to a list of plugin names that should NOT be enabled.
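For example, to opt out of a couple of the default plugins (the plugin
names here are just placeholders):

```yaml
# Sketch: these plugins will not be enabled by the role
collectd_disable_plugins:
  - df
  - swap
```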
The default configuration for the `network` plugin instructs *collectd*
to send metrics to the default IPv6 multicast group. Any host that has
joined this group and is listening on the specified UDP port (default
25826) can receive the data. This allows for nearly zero configuration,
as the configuration does not need to be updated if the name or IP
address of the receiver changes.
This configuration is ready to be deployed without any variable changes
to all Pyrocufflink servers. Once *collectd* is running on the servers,
we can set up a *collectd* instance to receive the data and store them
in a time series database (i.e. Prometheus).
Since Apache HTTPD does not have any built-in log rotation capability,
we need `logrotate`. Somewhere along the line, the *logrotate* package
stopped being installed by default. Additionally, with Fedora 30, it
changed from including a drop-in file for (Ana)cron to providing a
systemd timer unit.
The *logrotate* role will ensure that the *logrotate* package is
installed, and that the *logrotate.timer* service is enabled and
running. This in turn will ensure that `logrotate` runs daily. Of
course, since the systemd units were added in Fedora 30, machines to
which this role is applied must be running at least that version.
By listing the *logrotate* role as a dependency of the *httpd* role, we
can ensure that `logrotate` manages the Apache error and access log
files on any server that runs Apache HTTPD.
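The dependency itself is a one-line addition to the *httpd* role's
metadata:

```yaml
# roles/httpd/meta/main.yml (sketch)
dependencies:
  - role: logrotate
```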
By default, strongSwan will only attempt key negotiation once and then
give up. If the VPN connection is closed because of a network issue, it
is unlikely that a single attempt to reconnect will work, so let's keep
trying until it succeeds.
The *motioneye* role installs motionEye on a Fedora machine using `pip`.
It configures Apache to proxy for motionEye for outside (HTTPS) access.
The official installation instructions and default configuration for
motionEye assume it will be running as root. There is, however, no
specific reason for this, as it works just fine as an unprivileged user.
The only minor surprise is that the `conf_path` configuration setting
must point to a writable directory, as this is where motionEye places
the generated configuration for `motion`. This path does not, however, have to
include the `motioneye.conf` file itself, which can still be read-only.
This commit adds a new playbook, `protonvpn.yml`, and its supporting
roles *strongswan-swanctl* and *protonvpn*. This playbook configures
strongSwan to connect to ProtonVPN using IPsec/IKEv2.
With this playbook, we configure the name servers on the Pyrocufflink
network to route all DNS requests through the Cloudflare public DNS
recursive servers at 1.1.1.1/1.0.0.1 over ProtonVPN. Using this setup,
we have the benefit of the speed of using a public DNS server (which is
*significantly* faster than running our own recursive server, usually by
1-2 seconds per request), and the benefit of anonymity from ProtonVPN.
Using the public DNS server alone is great for performance, but allows
the server operator (in this case Cloudflare) to track and analyze usage
patterns. Using ProtonVPN gives us anonymity (assuming we trust
ProtonVPN not to do the very same tracking), but can have a negative
performance impact if it is used for all Internet traffic. By combining
these solutions, we can get the benefits of both!
This commit adds two new variables to the *named* role:
`named_queries_syslog` and `named_rpz_syslog`. These variables control
whether BIND will send query and RPZ log messages to the local syslog
daemon, respectively.
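Assuming the variables are simple booleans (an assumption), enabling
both would look like:

```yaml
# group_vars sketch: route query and RPZ logs to syslog
named_queries_syslog: true
named_rpz_syslog: true
```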
BIND response policy zones (RPZ) support provides a mechanism for
overriding the responses to DNS queries based on a wide range of
criteria. In its simplest form, a response policy zone can be used to
provide different responses to different clients, or "block" some DNS
names.
For the Pyrocufflink and related networks, I plan to use an RPZ to
implement ad/tracker blocking. The goal will be to generate an RPZ
definition from a collection of host lists (e.g. those used by uBlock
Origin) periodically.
This commit introduces basic support for RPZ configuration in the
*named* role. It can be activated by providing a list of "response
policy" definitions (e.g. `zone "name"`) in the `named_response_policy`
variable, and defining the corresponding zones in `named_zones`.
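A hedged sketch of what that configuration might look like; the
`named_zones` entry schema and the zone name shown here are
assumptions.

```yaml
# Sketch: enable RPZ and define the backing zone
named_response_policy:
  - zone "rpz.pyrocufflink.blue"
named_zones:
  - name: rpz.pyrocufflink.blue
    type: master
    file: rpz.pyrocufflink.blue
```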
Normally, Home Assistant uses a SQLite database for storing state
history. On a Raspberry Pi with only an SD card for storage like
*hass1.pyrocufflink.blue*, this can become extremely slow, especially
for large data sets. To speed up features like history and logbook,
Home Assistant supports using an external database engine such as
PostgreSQL or MariaDB.
The *hassdb* role and corresponding `hassdb.yml` playbook deploy a
PostgreSQL server for Home Assistant to use. It needs only to create
the role and database, as Home Assistant manages its own schema.
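The interesting part of the role is therefore just a couple of tasks,
roughly like this (module names are from the *community.postgresql*
collection; the role/database names and variables are assumptions):

```yaml
# Sketch: create the Home Assistant database role and database
- name: create database role
  community.postgresql.postgresql_user:
    name: hass
    password: '{{ hassdb_password }}'
  become: true
  become_user: postgres

- name: create database
  community.postgresql.postgresql_db:
    name: hass
    owner: hass
  become: true
  become_user: postgres
```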
The *postgresql-setup* service is no longer necessary, as upstream has
fixed the SELinux policy to allow root to invoke the `postgresql-setup`
command directly.
This commit adds a task to generate a PostgreSQL configuration file from
a template. Previously, the default configuration file generated by
`initdb` was sufficient, but in order to enable SSL connections, some
changes to it are required.
Naturally, SSL connections require a server certificate, so the
*postgresql-server* role will now also copy certificate files to the
managed node, if any.
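Roughly, the new tasks look like the following sketch; the paths,
variable names, and the `restart postgresql` handler name are
assumptions.

```yaml
# Sketch: template the server configuration and install the certificate
- name: generate postgresql.conf
  ansible.builtin.template:
    src: postgresql.conf.j2
    dest: /var/lib/pgsql/data/postgresql.conf
    owner: postgres
    group: postgres
    mode: '0600'
  notify: restart postgresql

- name: install server certificate
  ansible.builtin.copy:
    src: '{{ postgresql_ssl_cert }}'
    dest: /var/lib/pgsql/data/server.crt
    owner: postgres
    group: postgres
    mode: '0600'
  notify: restart postgresql
```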
Fedora has renamed the *strongswan* service to *strongswan-starter*.
The *strongswan* service now controls strongSwan via Vici, which uses a
different configuration format and is not compatible with the files in
`/etc/strongswan/ipsec.d`. As I am migrating everything to WireGuard
now, it does not make sense to rewrite all of the IPsec configuration in
this new format, so using the legacy format with the renamed service
makes more sense.
Because the Home Assistant user's home directory is on `/var`, Python
packages installed in the "user site" do not get the correct SELinux
labels and thus run in the wrong domain. This causes a lot of AVC
denials and other issues that prevent Home Assistant from working
correctly.
To resolve this issue, Home Assistant is now installed in a virtual
environment at `/usr/local/homeassistant`. This directory is still
owned by the Home Assistant user, allowing Home Assistant to manage
packages installed there. Since it is rooted under `/usr`, files are
labelled correctly and processes launched from executables there will
run in the correct domain.
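With Ansible, installing into the virtual environment is just the `pip`
module with its `virtualenv` parameter; the package spec and the user
name here are assumptions.

```yaml
# Sketch: install Home Assistant into a venv under /usr/local
- name: install Home Assistant
  ansible.builtin.pip:
    name: homeassistant
    virtualenv: /usr/local/homeassistant
    virtualenv_command: python3 -m venv
  become: true
  become_user: homeassistant
```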
The main network, *pyrocufflink.blue* (172.30.0.0/26), is now on VLAN 1
instead of VLAN 30. This changed when I replaced the Cisco SG200-26
with the UniFi Switch 48, to simplify configuration of all of the
Ubiquiti devices.