Howto: secure your website with TLS on nginx

Encrypt all the things


This post explains why I think encrypting web traffic by default is a good idea, discusses the options for obtaining your own certificates, and describes how to configure TLS on nginx securely.

Last November I decided to make the websites I host available only over TLS. I drafted this more technical blog post at the time but never got round to publishing it. Yesterday a serious bug in the OpenSSL cryptographic library was disclosed. I figured that while I had the bonnet up fixing that problem I should probably review my configuration and finish this post at the same time.

Why encrypt by default?

Governments and corporations are engaged in mass surveillance of our communications. They are logging everything we do online. They have even gone so far as to weaken the security mechanisms of the internet the better to spy on us. This is bad for privacy, security, society and democracy. As well as robust political change we need to re-engineer the internet from the ground up to restore confidence and security for everyone. I’m doing my part by improving the security of my own sites as much as I can.

The information we seek and read online reflects our innermost thoughts. Those opinions might not be controversial today – but who knows how the law or society might change in future? We can’t guarantee that current public attitudes will persist indefinitely. We don’t have to look back far into history to see mainstream beliefs that would be unacceptable by today’s standards, yet in the past very few people had their thoughts published or recorded, whereas these days the internet and connected systems store our every musing, interest and utterance for posterity. This perfect yet contextless memory could come back to bite us in a less free future society – or in various less free current societies for that matter.

Using strong encryption for routine purposes helps those who need to hide in the noise of everyone else’s encrypted traffic. If the only people using encryption were those of interest to malicious actors (states, criminals, corporations) then they’d be easy to identify and target. The bigger the haystack of encrypted traffic the safer the needles hiding inside.

Internet service providers and other intermediaries use deep-packet inspection to profile visitors’ reading habits in order to target them with advertising (e.g. Phorm). Translation: when you request a webpage your ISP can read its content as it passes through their network. They can analyse the content for keywords to build up a picture of the kinds of things you’re interested in, then use that profile to sell targeted advertising. Encrypting the traffic between my webserver and your browser prevents this snooping and protects your privacy.

There’s also the mundane risk that insecure login pages could compromise the admin credentials for the site.

How to stop networks snooping on web traffic

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to prevent third parties eavesdropping on, or tampering with, communications between clients and servers.

TLS doesn’t stop someone finding out that you visited this site, or a particular page on this site, or what is on that page. If your net connection is being actively monitored the spy can see which URLs you request and then visit and read them just like you can. What TLS stops is passive interception en route from my webserver to your browser. It makes mass surveillance more difficult, but targeted surveillance, not so much. If you want to browse the web anonymously you should investigate the Tor system, and its TAILS live operating system in particular.

I had a number of technical objectives in mind for this work and added a few more that I learned about as I went along:

  • Make the site available over TLS (https)
  • Eliminate mixed content warnings caused by embedding insecure resources
  • Use strict transport security (HSTS) to make sure modern browsers always visit the secure site
  • Enable forward secrecy to protect old sessions against future key compromise

Public Key Infrastructure (or how to choose a certificate)

The first thing I needed to do was to choose a certificate for my site. This turned out to require some thought. Here are the options I considered.

Self-signed certificates

[Image: Chrome TLS invalid certificate warning – “The end of the world is nigh”]

These are easy to generate yourself and require no third-party involvement. Unfortunately visiting a site secured with a self-signed certificate will cause modern browsers to break out in hives.
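If you want to try it, a minimal sketch with OpenSSL looks something like this (the filenames and hostname are placeholders; adjust the key size and lifetime to taste):

# openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout www.example.com.key -out www.example.com.crt -subj '/CN=www.example.com'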

What happens next depends on how much the visitor knows about TLS. People who are clued up about cryptography, privacy, digital rights and technology are more likely to understand what is happening and might be more willing to accept a self-signed certificate.

Some users will just click “continue” without bothering to read the warning or understand what it’s trying to tell them. Others will heed the warning and make use of that “back to safety” button – meaning they won’t see your website at all.

Self-signed certificates are the most secure against man-in-the-middle attacks once the veracity of the certificate has been established. There is no infrastructure to help with the verification process though, so unless you have some other out-of-band method of verifying certificates, few people will be able to take advantage of this security in practice.

CAcert

[Image: CAcert logo]

CAcert.org is a community-driven Certificate Authority that issues certificates to the public at large for free. This includes wildcard and multi-domain certificates.

CAcert has one major disadvantage: no popular browsers ship with its root certificates installed by default. This means browsers have the same kind of bad reaction to CAcert certificates as they do to self-signed ones. CAcert have been working to get their root CA into the Mozilla trusted store, but that work has been ongoing for a long time, with no indication of when it might complete.

An advantage of using CAcert over a self-signed certificate is that clueful users can install the root certificates into their browsers manually. The root certificate is shared between many sites, meaning the scary messages go away for all of them once this has been done.

Another disadvantage of CAcert is that certificates issued to new users are only valid for six months at a time. It’s possible to increase this, and to have your own name included in certificates rather than just “WoT user”, by integrating yourself with the CAcert web of trust. Unfortunately this is very hard to do unless you live in an area with a critical mass of systems administrators with a passion for Free software. If you are based in a city like London or San Francisco you’re probably OK. Otherwise you might struggle. Having certificates with such short lifetimes is a pain because you have to keep generating new ones and replacing the old ones when they expire.

In my mind CAcert is a compromise between (free) self-signed certificates and those signed by a certificate authority with a root certificate installed in most browsers (which cost money). If CAcert ever gets the major browser vendors to ship its root certificates by default it will become the best free option available.

StartSSL

StartSSL is currently the best free option for a single domain serving a single site if you want a certificate backed by a trusted root CA. It comes with all the MITM risks of “trusted” CAs, though. StartSSL is an Israeli company so you should probably assume at least Mossad can issue duplicate certificates for your domain!

[Image: The certificate for this website, showing its Subject Alternative Names]

A limitation of the TLS handshake (at least for older clients that don’t support the Server Name Indication extension) is that only one certificate can be served per IP address. This can be a problem if you have more than one virtual host. The best work-around at the moment is to use Subject Alternative Names in the certificate to cover all the domain names you want to serve.
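If you’re curious which names a given certificate actually covers, one way to check (assuming the OpenSSL command-line tools are installed; the filename is a placeholder) is to dump it as text and look for the Subject Alternative Name extension:

# openssl x509 -in www.example.com.crt -noout -text | grep -A 1 'Subject Alternative Name'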

Unfortunately there are no certificate authorities that are both in the trusted root CA stores of major browsers AND will issue wildcard or multi-domain certificates for free. That’s probably reasonable: part of being trusted is being careful about the identities of the people to whom you issue certificates – a manual process that incurs cost. Expect to pay money to have your identity validated if you want to obtain wildcard certificates or include more than one domain in the same certificate.

My choice

You can see from clicking the padlock icon in your browser that I chose to go with StartSSL and to spring for their second-level identity verification so that I could get a certificate valid for wildcard hostnames at all three of the domains I’m serving.

Configuring TLS on nginx

The nginx website has a good set of instructions on how to configure TLS.

Certificates

The server certificate is public. It gets sent to every client that connects to the server. By contrast the private key is secret and should be stored in a file with restricted access – however it must be readable by nginx’s master process. On my Debian system the nginx master process runs as root so this wasn’t an issue.
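For example, on a system where the master process runs as root you might lock the key file down so that only root can read it (the path here is a placeholder):

# chown root:root /path/to/my/certificates/certificate.key
# chmod 600 /path/to/my/certificates/certificate.key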

Sometimes Certificate Authorities will sign server certificates using an intermediate certificate rather than their root certificate. Often these intermediate certificates don’t get included in browser certificate stores. If your certificate works in this way you should concatenate it with the intermediate certificate into a single file and configure nginx to serve that instead of just the certificate. That way if browsers only have the root certificate they will receive the intermediate and server certificates from nginx and the certificate chain will be complete.

# cat www.example.com.crt intermediate.crt > www.example.com.bundle.crt

Note: if you get the certificates the wrong way round, nginx will refuse to start, so check carefully!
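One way to check that the complete chain is being served (again assuming the OpenSSL tools are installed; the hostname is a placeholder) is to connect with s_client and read the chain it prints:

# openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null

If the bundle is right you should see the server certificate followed by the intermediate in the “Certificate chain” section of the output.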

nginx configuration files

My nginx configuration uses include-files wherever possible to avoid specifying the same parameters in more than one place. To implement TLS I created a new include-file called ssl.conf:

# /etc/nginx/ssl.conf

# Wildcard cert for richardskingdom.net, arcticjen.co.uk, everyblackday.com
ssl_certificate /path/to/my/certificates/certificate-bundle.pem;
ssl_certificate_key /path/to/my/certificates/certificate.key;

# Global SSL parameters
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers kEECDH+ECDSA:kEECDH:kEDH:HIGH:+SHA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!DSS:!PSK:!SRP:!kECDH:!CAMELLIA;

# Enable HTTP Strict Transport Security
# Browsers will remember this setting for 365 days
# NOTE: not standards compliant! See http://trac.nginx.org/nginx/ticket/289
add_header Strict-Transport-Security max-age=31536000;

# Prevent site from being framed to avoid clickjacking attacks
add_header X-Frame-Options DENY;

Pay particular attention to the ssl_protocols, ssl_prefer_server_ciphers and ssl_ciphers parameters. There is a balance to be struck here between security and compatibility with older browsers.

In an ideal world everyone would use only the latest protocol version, TLSv1.2, but some browsers with significant market share don’t implement it. I decided to allow versions as far back as TLSv1, but I did not allow the deprecated SSL protocols.
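You can verify which versions your server really accepts by forcing s_client to use a specific one – handshakes using disabled protocols should fail. (The hostname is a placeholder, and newer OpenSSL builds may have SSLv3 support compiled out entirely.)

# openssl s_client -connect www.example.com:443 -ssl3 < /dev/null # should fail
# openssl s_client -connect www.example.com:443 -tls1_2 < /dev/null # should succeed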

ssl_ciphers defines which encryption methods should be used for the session keys and the order in which they should be tried. The list here starts with the most secure ciphers and omits any that are known to be weak. The first-listed ciphers support forward secrecy, which means that the contents of past sessions encrypted with these ciphers can’t be recovered even if the server’s private key is compromised at some point in the future.

The list of ssl_ciphers should not be a set-and-forget parameter. Advances in cryptanalysis and computing power render ciphers less secure over time so you should review this setting periodically to make sure you’re up to date. The list above was good as of November 2013 (as far as I know).

Specifying “ssl_prefer_server_ciphers on;” tells the server to impose its own cipher preference order rather than honouring the one supplied by the connecting client. If this parameter is not set, a browser can steer the negotiation towards the weakest cipher that both sides happen to support.
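If you want to see exactly which cipher suites a string like the one above expands to on your system, openssl will interpret it for you:

# openssl ciphers -v 'kEECDH+ECDSA:kEECDH:kEDH:HIGH:+SHA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!DSS:!PSK:!SRP:!kECDH:!CAMELLIA'

Each line of output names one suite along with its protocol version, key exchange, authentication, encryption and MAC algorithms, in the order they will be tried.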

HSTS tells modern browsers that they should visit the site using the HTTPS protocol for the foreseeable future. It’s a way of preventing people loading the site insecurely over HTTP by accident. This isn’t yet supported by nginx on my operating system (Debian Wheezy) so I included a work-around. This isn’t ideal because it’s not standards compliant; however, in practice it seems to work OK.
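Once the site is up you can confirm the header is actually being sent – for example with curl (the hostname is a placeholder):

# curl -sI https://www.example.com/ | grep -i strict-transport-security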

I wanted these settings to apply to all the virtual hosts served by nginx, so once I was happy with the configuration I included it in the main http section of the nginx.conf file with this line:

http {
...
include ssl.conf;
...
}

The advantage of this is that all sites will share a single in-memory copy of the certificates etc.

To make a site listen over TLS you must specify it in the site’s server-level configuration:

server {
...
listen 443 ssl;
...
}
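HSTS only protects visitors after their first secure visit, so it’s also common to add a catch-all block that redirects plain HTTP requests to the TLS site. A minimal sketch, with a placeholder hostname:

server {
listen 80;
server_name www.example.com;
# Send a permanent redirect to the HTTPS version of the same URL
return 301 https://$host$request_uri;
}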

Don’t forget to restart nginx or reload its configuration once everything is ready.
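It’s worth testing the configuration first so that a typo doesn’t take the server down. On a Debian system something like this does both in one go:

# nginx -t && service nginx reload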

Testing

There are a few tools that can help tune and debug TLS configurations.

[Image: SSL Labs report card for richardskingdom.net – “If your report looks like this, go to the top of the class”]

I made heavy use of the SSL Server Test at SSL Labs, which will hit your site, analyse it and report on ways you could improve your server configuration. Don’t pay too much attention to the “grade” it gives you – the best information is in the detail of the report and its accompanying documentation. It’s particularly good at helping you make decisions about how much security to trade off against backwards compatibility for older browsers.

WhyNoPadlock is another handy debugging tool. It helps identify the cause of “mixed content” browser warnings on your site. These are caused by embedded resources such as images, stylesheets or JavaScript being requested over insecure connections even though the page itself is served over HTTPS. WhyNoPadlock will crawl every resource on a given page and present you with a list of those being fetched over plain HTTP. Switching these for versions accessible over HTTPS should get rid of the warnings.
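If you’d rather hunt for offenders locally, a crude but effective alternative (assuming your site’s files live under /var/www – adjust to taste) is to grep the document root for hard-coded insecure URLs:

# grep -rn 'src="http://' /var/www

Rewriting these as https:// or protocol-relative (//host/path) URLs avoids the problem.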

Final thoughts

I wrote this primarily as an aide-memoire for when I forget how I configured my own server in future. I hope others might find it useful though – please drop me a line if it helped you. Please especially get in touch if you think any of the above is wrong so I can correct it.

I hope one day we see encrypted traffic become the norm on the internet and that mass surveillance becomes impossible as a result. Please do consider encrypting all your things if you can.
