The *motioneye* role installs motionEye on a Fedora machine using `pip`.
It configures Apache as a reverse proxy for motionEye to provide outside
(HTTPS) access.
The official installation instructions and default configuration for
motionEye assume it will be running as root. There is, however, no
specific reason for this, as it works just fine as an unprivileged user.
The only minor surprise is that the `conf_path` configuration setting
must be writable, as this is where motionEye places generated
configuration for `motion`. This path does not, however, have to
include the `motioneye.conf` file itself, which can still be read-only.
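A minimal sketch of the relevant tasks, assuming the role uses a
`motioneye_conf_path` variable and a dedicated *motioneye* user (both
names are illustrative, not necessarily what the role actually uses):

    - name: install motionEye with pip
      pip:
        name: motioneye
        state: present

    - name: ensure the generated motion configuration directory is writable
      file:
        path: "{{ motioneye_conf_path }}"  # hypothetical variable
        state: directory
        owner: motioneye
        group: motioneye
        mode: '0755'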
Since DNS is only allowed to be sent over the VPN, it is not possible to
resolve the VPN server name unless the VPN is already connected. This
naturally creates a chicken-and-egg scenario, which we can resolve by
manually providing the IP address of the server we want to connect to.
This commit adds a new playbook, `protonvpn.yml`, and its supporting
roles *strongswan-swanctl* and *protonvpn*. This playbook configures
strongSwan to connect to ProtonVPN using IPsec/IKEv2.
With this playbook, we configure the name servers on the Pyrocufflink
network to route all DNS requests through the Cloudflare public DNS
recursive servers at 1.1.1.1/1.0.0.1 over ProtonVPN. Using this setup,
we have the benefit of the speed of using a public DNS server (which is
*significantly* faster than running our own recursive server, usually by
1-2 seconds per request), and the benefit of anonymity from ProtonVPN.
Using the public DNS server alone is great for performance, but allows
the server operator (in this case Cloudflare) to track and analyze usage
patterns. Using ProtonVPN gives us anonymity (assuming we trust
ProtonVPN not to do the very same tracking), but can have a negative
performance impact if it is used for all Internet traffic. By combining
these solutions, we can get the benefits of both!
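For illustration only, the key settings might look something like this;
the variable names (`protonvpn_server_addr`, `named_forwarders`) are
hypothetical and the address is a documentation placeholder:

    # Connect to the ProtonVPN endpoint by IP address, since its name
    # cannot be resolved until the tunnel is already up:
    protonvpn_server_addr: 203.0.113.10  # placeholder, not a real endpoint

    # Forward all recursive queries to Cloudflare's public resolvers,
    # which are reached over the VPN:
    named_forwarders:
    - 1.1.1.1
    - 1.0.0.1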
This commit adds two new variables to the *named* role:
`named_queries_syslog` and `named_rpz_syslog`. These variables control
whether BIND will send query and RPZ log messages to the local syslog
daemon, respectively.
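For example, assuming both variables are simple booleans:

    named_queries_syslog: true
    named_rpz_syslog: true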
BIND's response policy zone (RPZ) support provides a mechanism for
overriding the responses to DNS queries based on a wide range of
criteria. In its simplest form, a response policy zone can be used to
provide different responses to different clients, or "block" some DNS
names.
For the Pyrocufflink and related networks, I plan to use an RPZ to
implement ad/tracker blocking. The goal will be to generate an RPZ
definition from a collection of host lists (e.g. those used by uBlock
Origin) periodically.
This commit introduces basic support for RPZ configuration in the
*named* role. It can be activated by providing a list of "response
policy" definitions (e.g. `zone "name"`) in the `named_response_policy`
variable, and defining the corresponding zones in `named_zones`.
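A sketch of what that might look like in group variables; the zone name
is arbitrary and the exact shape of the `named_zones` entries is an
assumption:

    named_response_policy:
    - zone "rpz.pyrocufflink.blue"

    named_zones:
    - name: rpz.pyrocufflink.blue
      type: master
      file: rpz.pyrocufflink.blue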
* Need to apply the *postgresql-server* role to ensure PostgreSQL is
properly configured
* Need to supply a PostgreSQL certificate (use Let's Encrypt so we don't
have to manage a CA)
* Missing Ansible Vault file that includes the DB user password
Some hosts, such as the Raspberry Pis built using default Fedora images,
do not have proper filesystem separation, but use a single volume for
the entire filesystem. These hosts cannot have the root filesystem
mounted read-only, since all the writable data are also stored there.
When Jenkins runs configuration policy jobs, it always tries to remount
the root filesystem as read-only on every machine that it configured.
For these hosts with a single volume, this step fails, causing the job
to be marked as failed. To avoid this, I have added a new group,
*rw-root*; hosts in this group will be omitted from the final remount
step.
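The final play can then exclude that group with an ordinary Ansible host
pattern, roughly like this sketch (the remount command itself is only
illustrative):

    - name: remount the root filesystem read-only
      hosts: "all:!rw-root"
      tasks:
      - name: remount / read-only
        command: mount -o remount,ro /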
Software should never be installed or updated by the continuous
enforcement jobs. This can cause unexpected outages or other problems
if applications or libraries are updated unexpectedly. Everything should already be
installed and in production before continuous enforcement begins, so
skipping install steps should not matter.
Most tasks that install software are tagged with the `install` tag.
When Jenkins runs `ansible-playbook` to apply configuration policy, it
will now skip any task that includes this tag.
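For example, an installation task is tagged like this (the task itself
is only a sketch), and Jenkins skips it, presumably via `--skip-tags
install` or equivalent, during continuous enforcement:

    - name: install Gitea  # illustrative task
      package:
        name: gitea
        state: present
      tags:
      - install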
Normally, Home Assistant uses a SQLite database for storing state
history. On a Raspberry Pi with only an SD card for storage like
*hass1.pyrocufflink.blue*, this can become extremely slow, especially
for large data sets. To speed up features like history and logbook,
Home Assistant supports using an external database engine such as
PostgreSQL or MariaDB.
The *hassdb* role and corresponding `hassdb.yml` playbook deploy a
PostgreSQL server for Home Assistant to use. It needs only to create
the role and database, as Home Assistant manages its own schema.
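The core of the role could be as small as this sketch; the role name,
database name, and password variable are assumptions:

    - name: create the Home Assistant database role
      become: true
      become_user: postgres
      postgresql_user:
        name: hass
        password: "{{ hassdb_password }}"

    - name: create the Home Assistant database
      become: true
      become_user: postgres
      postgresql_db:
        name: hass
        owner: hass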
The *postgresql-setup* service is no longer necessary, as upstream has
fixed the SELinux policy to allow root to invoke the `postgresql-setup`
command directly.
This commit adds a task to generate a PostgreSQL configuration file from
a template. Previously, the default configuration file generated by
`initdb` was sufficient, but in order to enable SSL connections, some
changes to it are required.
Naturally, SSL connections require a server certificate, so the
*postgresql-server* role will now also copy certificate files to the
managed node, if any.
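Roughly, the new tasks look like this; file locations, variable names,
and the handler are illustrative:

    - name: generate postgresql.conf
      template:
        src: postgresql.conf.j2
        dest: /var/lib/pgsql/data/postgresql.conf
        owner: postgres
        group: postgres
        mode: '0600'
      notify: restart postgresql

    - name: copy the server certificate and private key
      copy:
        src: "{{ item }}"
        dest: /var/lib/pgsql/data/
        owner: postgres
        group: postgres
        mode: '0600'
      loop: "{{ postgresql_certificates | default([]) }}"
      notify: restart postgresql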
Fedora has renamed the *strongswan* service to *strongswan-starter*.
The *strongswan* service now controls strongSwan via Vici, which uses a
different configuration format and is not compatible with the files in
`/etc/strongswan/ipsec.d`. As I am migrating everything to WireGuard
now, it does not make sense to rewrite all of the IPsec configuration in
this new format, so using the legacy format with the renamed service
makes more sense.
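In the role, that just means pointing the service tasks and handlers at
the renamed unit, e.g.:

    - name: enable and start strongSwan (legacy stroke interface)
      systemd:
        name: strongswan-starter
        enabled: true
        state: started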
The UniFi Security Gateway now provides DHCP for the Home Assistant
network. This simplifies management a bit, so I do not have to manage
three DHCP servers. The USG has firewall rules that block Internet
traffic to and from the Home Assistant network.
Because the Home Assistant user's home directory is on `/var`, Python
packages installed in the "user site" do not get the correct SELinux
labels and thus run in the wrong domain. This causes a lot of AVC
denials and other issues that prevent Home Assistant from working
correctly.
To resolve this issue, Home Assistant is now installed in a virtual
environment at `/usr/local/homeassistant`. This directory is still
owned by the Home Assistant user, allowing Home Assistant to manage
packages installed there. Since it is rooted under `/usr`, files are
labelled correctly and processes launched from executables there will
run in the correct domain.
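A sketch of the relevant tasks; the user name and exact package set are
assumptions:

    - name: create the virtualenv directory, owned by the Home Assistant user
      file:
        path: /usr/local/homeassistant
        state: directory
        owner: homeassistant
        group: homeassistant

    - name: install Home Assistant into the virtual environment
      become: true
      become_user: homeassistant
      pip:
        name: homeassistant
        virtualenv: /usr/local/homeassistant
        virtualenv_command: python3 -m venv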
*hass1.pyrocufflink.blue* is the new host for Home Assistant. I
migrated from using a virtual machine to using a Raspberry Pi to avoid
having to deal with USB passthrough for the Z-Wave USB stick.
The main network, *pyrocufflink.blue* (172.30.0.0/26) is now on VLAN 1
instead of VLAN 30. This changed when I replaced the Cisco SG200-26
with the UniFi Switch 48, to simplify configuration of all of the
Ubiquiti devices.
*build1-aarch64* is a Raspberry Pi 3 B+ running Fedora aarch64. It is
intended to be used to build software and operating system images for
other aarch64 machines.
The Jenkins pipeline definition files are highly redundant. Each one
implements almost the same stages, with only a few variations. Whenever
a new pipeline is added, it's copied from the most recent file and
modified. If any improvements are made to it, they do not usually get
implemented in any of the existing pipelines.
To address this, the `applyConfigPolicy` pipeline library function is
now available. This function generates the full pipeline for a
particular application, including stages for setup, each individual
playbook, and cleanup. Using this function, pipeline files can be as
simple as:
    @Library('cfgpol')_
    applyConfigPolicy(
        'gitea',
        [
            'Gitea': [
                'gitea.yml',
            ],
        ]
    )
This will create a pipeline that mounts the root filesystem read-write
on all hosts in the "gitea" group (any Ansible host pattern is allowed),
applies the `gitea.yml` playbook (in a stage named "Gitea"), and then
remounts the filesystems read-only.
Since this "library" is so simple, containing only a single function in
a single file, and since it will not be used by any pipelines outside
this repository, it makes sense to keep it in this repository, instead
of a separate repository as is customary for Jenkins pipeline shared
libraries.
The TCP ports 80 and 443 are NAT-forwarded by the gateway to 172.30.0.6.
This address was originally occupied by *rprx0.pyrocufflink.blue*, which
operated a reverse proxy for Bitwarden, Gitea, Jenkins, Nextcloud,
OpenVPN, and the public web sites. Now, *web0.pyrocufflink.blue* is
configured to proxy for those services that it does not directly host
itself, effectively making it a web server, a reverse proxy, and a
forward proxy (for OpenVPN only). Thus, it must take over this address
in order to receive forwarded traffic from the Internet.
*rprx0.pyrocufflink.blue* is no longer in operation.
*web0.pyrocufflink.blue* handles incoming HTTP/HTTPS requests directly,
proxying to Bitwarden, OpenVPN, etc. as needed.
For some time, I have been trying to design a new configuration for the
reverse proxy on port 443 to correctly handle all the types of traffic
on that port. In the original implementation, all traffic on port 443
was forwarded by the gateway to HAProxy. HAProxy then used TLS SNI to
route connections to the correct backend server based on the requested
host name. This allowed both HTTPS and OpenVPN-over-TLS to use the same
port; however, it was not without issues. A layer 4 (TCP) proxy like
this "hides" the real source address of clients connecting to the
backend, which makes IP-based security (e.g. rate limiting, blacklists,
etc.) impossible at the application level. In particular, Nextcloud,
which implements rate limiting, was constantly imposing login delays on
all users, because legitimate traffic was indistinguishable from
Internet background noise.
To alleviate these issues, I needed to change the proxy to operate in
layer 7 (HTTP) mode, so that headers like *X-Forwarded-For* and
*X-Forwarded-Host* could be added. Unfortunately, this was not easy,
because of the simultaneous requirement to forward OpenVPN traffic.
HAProxy can only do SNI inspection in TCP mode. So, I began looking for
an alternate way to proxy both HTTP and non-HTTP traffic on the same
port.
The HTTP protocol defines the `CONNECT` method, which is used by forward
proxies to tunnel HTTPS over plain HTTP. OpenVPN clients support
tunneling OpenVPN over HTTP using this method as well. HAProxy has
limited support for the CONNECT method via its `http_proxy` option (it
does not do DNS resolution, and I could find no way of restricting the
destination), so I looked for alternate proxy servers that
had more complete support. Unsurprisingly, Apache HTTPD has the most
complete implementation of the `CONNECT` method (Nginx doesn't support
it at all). Using a name-based virtual host on port 443, Apache will
accept requests for *vpn.pyrocufflink.net* (using TLS SNI) and allow the
clients to use the `CONNECT` method to create a tunnel to the OpenVPN
server. This requires OpenVPN clients to a) use *stunnel* to wrap plain
HTTP proxy connections in TLS and b) configure OpenVPN to use the
TLS-wrapped HTTP proxy.
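Conceptually, the Apache side of this looks something like the
following, shown as an Ansible task the role might use to drop in the
vhost; the directive values are illustrative, not the exact production
configuration:

    - name: configure the OpenVPN CONNECT proxy virtual host
      copy:
        dest: /etc/httpd/conf.d/vpn.pyrocufflink.net.conf
        content: |
          <VirtualHost *:443>
              ServerName vpn.pyrocufflink.net
              SSLEngine on
              # Act as a forward proxy, but only allow CONNECT to the
              # OpenVPN port (a real configuration also needs certificate
              # directives and access restrictions)
              ProxyRequests On
              AllowCONNECT 1194
          </VirtualHost>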
With Apache accepting all incoming connections, it was trivial to also
configure it as a layer 7 reverse proxy for Bitwarden, Gitea, Jenkins,
and Nextcloud. Unfortunately, proxying for the other websites
(darkchestofwonders.us, chmod777.sh, dustin.hatch.name) was not quite as
straightforward. These websites would need to have an internal name
that differed from their external name, and thus a certificate valid for
that name. Rather than reconfigure all of these sites and set all of
that up, I decided to just move the responsibility for handling direct
connections from outside to *web0* and eliminate the dedicated
reverse proxy. This was not possible before, because Apache could not
forward the OpenVPN traffic directly, but now with the forward proxy
configuration, there is no reason to have a separate server for these
connections.
Overall, I am pleased with how this turned out. It makes the OpenVPN
configuration simpler (*stunnel* no longer needs to run on the OpenVPN
server itself, since Apache is handling TLS termination), eliminates a
network hop for the websites, makes the reverse proxy configuration for
the other web applications much easier to understand, and resolves the
original issue of losing client connection information.
A name-based HTTP (not HTTPS) virtual host for *pyrocufflink.net* is
necessary to ensure requests are handled properly, now that there is
another HTTP virtual host (chmod777.sh) defined on the same server.
This commit updates the configuration for *pyrocufflink.net* to use the
wildcard certificate managed by *lego* instead of a unique certificate
managed by *certbot*.
Specifying a unique tag for each website role included in the
`websites.yml` playbook will allow convenient partial runs of the
playbook to deploy a subset of the websites it manages.
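For example, with each role in `websites.yml` tagged (role and tag
names here are illustrative), a single site can be redeployed on its
own:

    # websites.yml (excerpt)
    - hosts: web
      roles:
      - role: chmod777
        tags: chmod777

    # ansible-playbook websites.yml --tags chmod777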