Welcome to my blog


This is my personal blog. Don't expect frequent updates, so consider subscribing to one of the feeds (there is a general one, and each tag has its own feed as well).

2019-04-11 Updating the Wifi and home network

We have always had issues with the wifi coverage in our house. The FritzBox modem / router / wifi is located on one side of the house where the telephone line enters. At the other end of the house the reception is already bad if you are sitting in the corner, and outside in the garden there is no reception whatsoever. Now, I could have bought a wifi repeater, which would in all probability have done the job; but I have always been curious about networking technology and was also interested in getting something that showed me a better breakdown of network traffic — e.g. something that would explain why our data usage keeps creeping up (not a major issue since we are on an "unlimited" plan, but still). With this in the back of my mind I happened across Ubiquiti's Unifi range and got sucked into that.

I still want to keep the FritzBox — it is a reliable modem with great reporting of the DSL state, and we use the seamless combination of landline (I know, we are dinosaurs) and VoIP via SIP. Once you start with this it is really hard to stop. Early on I decided that I didn't need the "cloud key", which is basically a very simple small computer with their admin software running on it. Instead I just used a Raspberry Pi I had lying around and installed the software there (following Howard Durdle's post on running the Unifi software on a Raspberry Pi). But that's where restraint ended; I did get the router (called "UniFi Security Gateway" - USG), a small managed switch (the smallest one with PoE), and three wifi access points. I think two access points would indeed have been enough — my thinking was to place two inside at opposite ends of the house and have the third one outside pointing into the garden. That last one isn't really needed, since I can see the indoor APs from the garden as well, but, oh well…

Setting things up is not trivial but fairly straightforward, especially after watching lots of videos on the internet (I mostly watched YouTube videos from Crosstalk Solutions). The most interesting issues occurred when I had my laptop connected to both the FritzBox's network and the USG's network, at which point weird things happened with routing.

Migrating all network clients from the FritzBox's network to the USG's network was sort of incremental and went fairly smoothly. The network topology I ended up with is like this:

                      USG → Switch → … internal networks
Internet → FritzBox ↗

The FritzBox's firewall is configured to forward outside ports not to the target machines directly but to the USG, which then has another set of firewall rules to do the next level of forwarding (ugly, but not too much of an issue — I don't have that many services exposed to the outside world). You can configure the FritzBox to automatically forward all traffic on delegated IPv6 prefixes, so the manual forwarding is only required for legacy IP (aka IPv4).

While IPv6 support in the USG's interface is still marked as "alpha", it works fairly well. You cannot have IPv6-only networks, it seems (not that I'd need or want that, but it would be interesting to play around with), but it is perfectly happy to get some prefixes via prefix delegation and allocate them to VLANs as well. (Using a separate set of ULA prefixes is apparently possible but I haven't done that yet — my ISP provides static addresses, so my prefixes never change.)

For internal traffic you can set a static route on the FritzBox for the RFC1918 networks used behind the USG. The USG side is fine as it is: its WAN address is in the FritzBox's network, so it knows how to get there, and I don't use VLANs with the FritzBox.

You can actually configure the USG to not do NAT on the WAN interface to avoid double NATing IPv4 traffic. This is not possible via the web interface but requires you to tweak the configuration by placing a json file into a specific location (not the greatest way of doing things but it works quite reasonably). I had that working at some point but lost that change when making other changes. It took me weeks to notice so I decided it wasn't worth it and never put it back.
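For reference, the tweak works by placing a config.gateway.json in the site's directory on the controller, which the USG merges into its provisioned configuration. A minimal sketch of the kind of NAT-exclude rule involved; the rule number, interface name, and addresses here are illustrative assumptions, not my actual configuration:

```json
{
  "service": {
    "nat": {
      "rule": {
        "5000": {
          "description": "Do not NAT traffic towards the FritzBox LAN",
          "destination": { "address": "192.168.178.0/24" },
          "exclude": "''",
          "outbound-interface": "eth0",
          "type": "masquerade"
        }
      }
    }
  }
}
```

With NAT disabled like this, the FritzBox needs the static route for the internal networks so that replies find their way back.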

At some point the Raspberry Pi had disk issues and it was basically impossible to get the Unifi server working again. (The server uses MongoDB, which apparently makes it notoriously liable to just break when that DB has issues.) Luckily I had a fairly recent auto-backup generated by the server, and I restored that onto our main home server. I had to factory-reset the wifi access points and re-associate them with this new server, but that was a fairly simple procedure. As I understand it, you don't even need the server running all the time to keep things ticking; but now that it is running on a proper server, things have been running smoothly (plus the backups are far more reliable).

Once everything was running fine I just switched off the wifi on the FritzBox. Interestingly, it seems to switch itself back on from time to time (after a reboot?), but after I changed that wifi's password our devices no longer auto-connected to it.

So, was it worth it, and what is good and what is bad? It is definitely cool to have a more interesting network stack, e.g. configuring VLANs for guests and for VPNs, where you can assign specific switch ports to be on that VPN. I still want to make the USG a VPN server as well (WireGuard? tinc? or just the built-in OpenVPN?), but that is certainly in the realm of the doable (and then again, I already have that set up with the FritzBox, so it's not a priority). The monitoring is definitely the biggest disappointment. It's all very pretty, but at least I cannot make any sense of the numbers shown: it's not clear for what interval the measurements are shown, and the numbers don't seem to add up to what the FritzBox claims we have used in traffic. There seem to be many people moaning about this on the forum, so let's hope that this will improve eventually. Ubiquiti is definitely improving the web interface continuously; the recent update made things much nicer. So, overall this is definitely overkill for what I needed, but it is still a fun thing to have and to tinker with.

2016-11-28 Using LetsEncrypt

So far I had used StartCom for free website certificates. But they have apparently messed up their security so badly that browsers will stop accepting their certificates soon. So it's time to find a new certificate provider.

LetsEncrypt is the cool new kid on the block for this. I had a look at it a while ago and chose not to jump on the bandwagon back then — you had to run their script on your server, and certificates were only valid for a few months. But with the impending loss of my StartCom certificates it was time to have another look.

It turns out that in the meantime a plethora of options to get certificates from them has appeared. Many of these don't need to run on the target server and don't even require root privileges. After a cursory look at the available options I semi-randomly picked getssl.

cd Private/Certs/
mkdir LetsEncrypt
cd LetsEncrypt

# Generate the LetsEncrypt user key pair:
openssl genrsa 4096 > user.key
openssl rsa -in user.key -pubout > user.pub
chmod 400 user.key user.pub

getssl -c hdurer.net
# Edit ~/.getssl/getssl.cfg for common options and ~/.getssl/hdurer.net/getssl.cfg for the options specific to the hdurer.net certificate

# If you have more domains you can just say
getssl -c some.other-domain.com
# and change the relevant bits in the getssl.cfg in the new subdirectory.

The tool allows you to use DNS as a verification mechanism, which is useful as it allows me to verify the domain(s) without having to place files onto a webserver running under that domain (and, in fact, to issue separate certificates for domains that don't have a webserver serving content). My DNS hoster has an API to manage DNS entries and there is a Python library to access that API, so all I needed was a little helper script (see below). The only issue I found is that I need to use almost excessive waiting delays to ensure that LetsEncrypt will reliably see the changed DNS entries.

The relevant section from my getssl.cfg file reads:

# Use the following 3 variables if you want to validate via DNS
VALIDATE_VIA_DNS="true"
DNS_ADD_COMMAND="/path/to/DNSHelper add"
DNS_DEL_COMMAND="/path/to/DNSHelper remove"
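The "excessive waiting delays" mentioned above are controlled by two further variables from getssl's sample configuration; the variable names are getssl's, while the values below are just the kind of generous settings I mean, not necessarily my exact ones:

```shell
# Seconds between checks for the TXT record becoming visible:
DNS_WAIT=30
# Extra seconds to wait once the record is visible before triggering validation:
DNS_EXTRA_WAIT=120
```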

And the helper script itself is as follows (excuse the hacky Python; I couldn't even be bothered to properly parse the arguments):

import sys

import pynfsn

nfsn = pynfsn.NFSN('...username...', '...api key...')

op, fulldomain, token = sys.argv[1:]

domain_parts = fulldomain.split('.')
# Figure out the root domain (always the last two components for my domains):
domain = '.'.join(domain_parts[-2:])

if len(domain_parts) > 2:
    # for subdomain foo.hdurer.net, the record is _acme-challenge.foo
    record_name = '_acme-challenge.' + '.'.join(domain_parts[:-2])
else:
    record_name = '_acme-challenge'

dns = nfsn.dns(domain)
if op == 'add':
    dns.addRR(record_name, 'TXT', token, '200')
elif op == 'remove':
    dns.removeRR(record_name, 'TXT', token)
else:
    print("Don't know how to " + op)
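To make the naming convention explicit, the record-name logic in the script boils down to this standalone sketch (no NFSN account needed to try it):

```python
def acme_record_name(fulldomain):
    """Return (root domain, TXT record name) for an ACME DNS-01 challenge.

    Assumes, as the script does, that the root domain is always the
    last two components of the name.
    """
    parts = fulldomain.split('.')
    root = '.'.join(parts[-2:])
    if len(parts) > 2:
        # Subdomains get the challenge record under their own label:
        name = '_acme-challenge.' + '.'.join(parts[:-2])
    else:
        name = '_acme-challenge'
    return root, name

print(acme_record_name('hdurer.net'))      # ('hdurer.net', '_acme-challenge')
print(acme_record_name('foo.hdurer.net'))  # ('hdurer.net', '_acme-challenge.foo')
```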

You can try different config settings and fiddle with the script as much as you want while the getssl config points to the staging server (the default value). Once you get it to work well, change the config to use the real server and run getssl -a to have all configured certificates issued. Don't switch too early: the production server has a (not very severe) rate limit, and you could lock yourself out of certificate generation for a week.
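The switch is a single CA line in getssl.cfg. The CA variable is getssl's own; the endpoint URLs below are the ACME v1 endpoints of that era, so double-check them against the current getssl sample configuration:

```shell
# Staging (the default): generous rate limits, but browsers won't trust the certs.
# CA="https://acme-staging.api.letsencrypt.org"
# Production: switch here only once everything works.
CA="https://acme-v01.api.letsencrypt.org"
```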

The tool places the certificates in the relevant subdirectories, but you can also configure it to place copies elsewhere (e.g. via scp to the remote server itself). But once you have the certificates, the rest is easy. Just remember to regenerate them (just run getssl -a again) before they expire. If you have everything set up properly you could make that a cronjob…
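Such a cronjob could look like the following; the install path is hypothetical, and the -u (self-update) and -q (quiet, only produce output on errors) flags are from getssl's documentation:

```shell
# m  h  dom mon dow  command
# Every Monday at 04:23, renew any certificates that are close to expiry:
23   4  *   *   1    /home/me/bin/getssl -u -a -q
```

getssl only replaces certificates that are actually nearing expiry, so running this weekly is harmless.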

2016-06-15 A new server

Our old family server was getting a bit long in the tooth, had a few random reboots, and was running out of disk space anyway (plus it had an old 32-bit Atom CPU, so we could not even run Docker etc.). So in the end I broke down and ordered a new one. After a bit of searching around on the web I went with QuietPC.com and picked a much beefier machine this time than what we had before (back then I was going for size and very low energy consumption, but in the end we paid for it with no upgradability and very slow speed). The machine arrived recently and looks fine. So far I am happy, but it's too early to say much yet; I'll write up my experiences in a while.

2015-12-06 Perfect forward secrecy for your website

For a while now I have had SSL and SPDY on this site. Having SSL isn't very hard. StartCom will give you a free certificate for your server (and also S/MIME email certificates for your email accounts) if you are willing to navigate and endure their terrible UI. There is an easy option of letting them create the key and certificate, but I encourage you to do the proper thing of creating your own key pair so that you know that only you have the private key. I found these instructions quite useful.

But setting things up so that you don't just have SSL but have good and secure SSL settings is trickier. I found a good article which walks you through setting options and ciphers so that the SSL checker will give you an A rating.
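As an illustration of what such settings look like, here is a minimal sketch for nginx (an assumption; this post doesn't say which webserver the site runs on). The forward secrecy of the title comes from restricting the cipher list to ephemeral (EC)DHE key exchanges:

```nginx
ssl_protocols TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
# Only ECDHE/DHE suites, so each session uses its own ephemeral key
# and a leaked server key cannot decrypt recorded traffic:
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256";
```

The article mentioned above covers the full set of options; treat this fragment as the shape of the result, not a complete hardening guide.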

2015-07-03 Ads and analytics are gone again

Just a quick note that after a bit more than a year I have again removed the ads and Google Analytics from this site. I no longer need to learn about these things and they are not really useful for a low-traffic site like this anyway, so why bother?

See all posts.