So far I have used StartCom for free website certificates. But they have apparently messed up their security so badly that browsers will soon stop accepting their certificates. So it's time to find a new certificate provider.
LetsEncrypt is the cool new kid on the block for this. I had a look at it a while ago and chose not to jump on the bandwagon back then — you had to run their script on your server and certificates were only valid for a few months. But with the impending loss of my StartCom certificates it was time to have another look.
It turns out that in the meantime a plethora of options to get certificates from them has appeared. Many of these don't need to run on the remote server and don't even require root privileges. After a cursory look at the available options I semi-randomly picked getssl.
cd Private/Certs/
mkdir LetsEncrypt
cd LetsEncrypt
# Generate the LetsEncrypt user key:
openssl genrsa 4096 > user.key
openssl rsa -in user.key -pubout > user.pub
chmod 400 user.key user.pub
getssl -c hdurer.net
# Edit ~/.getssl/getssl.cfg for common options and
# ~/.getssl/hdurer.net/getssl.cfg for the ones specific to the
# hdurer.net certificate.
# If you have more domains you can just say
#   getssl -c some.other-domain.com
# and change the relevant bits in the getssl.cfg in the new subdirectory.
The tool allows you to use DNS as a verification mechanism, which is useful: it lets me verify the domain(s) without having to place files onto a webserver running under that domain (and, in fact, issue separate certificates for domains that don't have a webserver serving content at all). My DNS provider has an API to manage DNS entries, and there is a Python library for that API, so all I needed was a little helper script (see below). The only issue I found is that I need to use almost excessive waiting delays to ensure that LetsEncrypt will reliably see the changed DNS entries.
The relevant section from my getssl.cfg file reads:
# Use the following variables if you want to validate via DNS:
VALIDATE_VIA_DNS="true"
DNS_ADD_COMMAND="/path/to/DNSHelper add"
DNS_DEL_COMMAND="/path/to/DNSHelper remove"
DNS_WAIT=15
DNS_EXTRA_WAIT=500
And the helper script itself is (excuse the hacky Python; I couldn't even be bothered to parse the arguments properly):
import sys

import pynfsn

nfsn = pynfsn.NFSN('...username...', '...api key...')

op, fulldomain, token = sys.argv[1:]
domain_parts = fulldomain.split('.')
# Figure out the root domain (always the last two components for my domains):
domain = '.'.join(domain_parts[-2:])
if len(domain_parts) > 2:
    # For subdomain foo.hdurer.net, the record is _acme-challenge.foo
    record_name = '_acme-challenge.' + '.'.join(domain_parts[:-2])
else:
    record_name = '_acme-challenge'
dns = nfsn.dns(domain)
if op == 'add':
    dns.addRR(record_name, 'TXT', token, '200')
elif op == 'remove':
    dns.removeRR(record_name, 'TXT', token)
else:
    print("Don't know how to", op)
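The only mildly fiddly bit is deriving the TXT record name from the domain. Pulled out into a standalone function (the function name is my own invention, not part of the helper or of getssl), the logic can be exercised on its own:

```python
def challenge_record(fulldomain):
    """Split a (sub)domain into (root domain, ACME challenge record name).

    Assumes, as in my setup, that the root domain is always the
    last two components of the name.
    """
    parts = fulldomain.split('.')
    domain = '.'.join(parts[-2:])
    if len(parts) > 2:
        # A subdomain gets its labels prefixed to the challenge record:
        record_name = '_acme-challenge.' + '.'.join(parts[:-2])
    else:
        record_name = '_acme-challenge'
    return domain, record_name

print(challenge_record('hdurer.net'))      # ('hdurer.net', '_acme-challenge')
print(challenge_record('foo.hdurer.net'))  # ('hdurer.net', '_acme-challenge.foo')
```

Note that this would do the wrong thing for registries like .co.uk where the root domain has three components — fine for my domains, but worth keeping in mind.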
You can try different config settings and fiddle with the script as much as you want while the getssl config points to the staging server (the default). Once you get it to work well, change the config to use the production server and run getssl -a to have all configured certificates issued. Don't switch too early: the production server has a (not very severe) rate limit, and you could lock yourself out of certificate generation for a week.
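The staging/production switch is just the CA setting in getssl.cfg. At the time of writing the endpoints looked roughly like this (check the getssl documentation for the current URLs):

```shell
# Default: the staging server. Certificates from it are not trusted by
# browsers, but there is effectively no rate limit, so test against it:
CA="https://acme-staging.api.letsencrypt.org"
# Once everything works, switch to the production server:
#CA="https://acme-v01.api.letsencrypt.org"
```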
The tool places the certificates in the relevant subdirectories, but you can also configure it to place copies elsewhere (e.g. via scp to the remote server itself). Once you have the certificates, the rest is easy. Just remember to regenerate them (run getssl -a again) before they expire. If you have everything set up properly you could make that a cronjob…
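For the record, such a crontab entry could look like the sketch below (the path and schedule are placeholders; as I understand it, getssl skips certificates that are not yet close to expiry, so running it frequently is harmless):

```shell
# Try to renew all configured certificates once a day at 03:17.
# getssl -a processes every configured domain; -q keeps it quiet
# unless it actually does something.
17 3 * * *  /path/to/getssl -a -q
```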