
Security and Performance with Nginx and TLS 1.3
Earn a Top Score from Qualys
You have recent versions of Nginx and OpenSSL installed, and you're ready to reconfigure your secure web server for a top security score. Or at least I assume so; if not, start at the beginning for rationale and planning. Or, if you're locked into an old platform, learn how to compile OpenSSL and/or Nginx.
While we're improving security, we'll also make sure that performance is optimized. Simply updating to TLS 1.3 greatly improves performance, but we can do even more. The result should be good scores from Qualys's SSL Labs at ssllabs.com and from the Dutch national Internet Standards Platform at Internet.nl.
An A+ score from Qualys is great. The letter grades are simple, and A+ catches your attention. But we will improve the system beyond a simple A+ grade.
Taking Inventory
OpenSSL version 1.1.1 was the first to support TLS 1.3.
Check your version with the command:
openssl version
Nginx version 1.14.0 supported TLS 1.3.
Check your version with the command:
nginx -v
If either is simply missing, install it.
If that leaves you with a too-early version, you must be running a rather old operating system. Maybe something like RHEL/CentOS 7?
If you can, upgrade to a newer operating system version. If you can't do that, then maybe you can compile what's missing from source code.
You will also need a pair of certificates — one for an elliptic-curve key pair and the other for an RSA key pair. If needed, see my page on creating and maintaining free dual ECC/RSA certificates from Let's Encrypt.
Now that you have all the parts, let's get started!
Which Ciphers?
TLS 1.3 supports only a short list of cipher suites, all with ECDHE session key negotiation for forward secrecy. OpenSSL enables these three by default:
- TLS_AES_256_GCM_SHA384 (0x1302)
- TLS_CHACHA20_POLY1305_SHA256 (0x1303)
- TLS_AES_128_GCM_SHA256 (0x1301)
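You can check which TLS 1.3 suites your own OpenSSL build enables; a quick check, assuming OpenSSL 1.1.1 or later, where openssl ciphers lists the TLS 1.3 suites with "TLSv1.3" in the protocol column:
$ openssl ciphers -v | grep TLSv1.3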
AES or Rijndael is a block cipher, so it can operate in several modes. AES-GCM or Galois/Counter Mode has high performance and provides authenticated encryption with verified data integrity.
ChaCha20 is a stream cipher. It can be used along with the Poly1305 MAC or message authentication code; ChaCha20-Poly1305 also provides authenticated encryption with verified data integrity.
I had already decided to support TLS 1.2 and 1.3 only. TLS 1.2 supports a long list of cipher suites. Which should my server support?
For a while I answered "most of them, leaving out those with the CBC or cipher-block-chaining mode, which doesn't provide authenticated encryption." Then I noticed that wikipedia.org uses only six cipher suites. I haven't heard anyone complain about Wikipedia's compatibility or performance. But what do I need?
Which protocols and ciphers are used by my site's clients?
That's easy to answer with UNIX-family command-line utilities. I did this on May 19, 2021, so with a little over 4.5 months of log data since I had rotated the logs on New Year's Day. First, how many clients have connected to my server with HTTP or HTTPS? 1,955,087, almost two million:
$ wc /var/www/logs/httpd-access.log
 1955087 25409868 277212352 /var/www/logs/httpd-access.log
Each line looks like this:
$ tail -1 /var/www/logs/httpd-access.log
67.162.124.176 - - [23/Sep/2023:09:59:57 +0000] "GET /open-source/nginx-tls-1.3/security-performance-nginx.html HTTP/2.0" 200 21844 "-" TLSv1.3 TLS_AES_256_GCM_SHA384
How many used HTTPS with TLS 1.3, how many used HTTPS with TLS 1.2, and how many either connected via HTTP or tried to use TLS 1.1 or earlier? I just need to count the next-to-last field.
$ awk '{print $(NF-1)}' /var/www/logs/httpd-access.log | sort | uniq -c | sort -nr
 811302 TLSv1.3
 787297 TLSv1.2
 356488 -
TLS 1.3 was a little more popular than 1.2, but it was pretty even. Meanwhile there are many bots using HTTP or old versions of TLS and even SSL.
Now, what about the ciphers? Let's do this for the combination of protocol and cipher, the next-to-last and last fields.
$ awk '{print $(NF-1), $NF}' /var/www/logs/httpd-access.log | sort | uniq -c | sort -nr
 811284 TLSv1.3 TLS_AES_256_GCM_SHA384
 657035 TLSv1.2 ECDHE-ECDSA-AES256-GCM-SHA384
 356488 - -
  99769 TLSv1.2 ECDHE-ECDSA-CHACHA20-POLY1305
  27652 TLSv1.2 ECDHE-ECDSA-AES128-GCM-SHA256
   2212 TLSv1.2 DHE-RSA-AES256-GCM-SHA384
    225 TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384
    170 TLSv1.2 ECDHE-RSA-CHACHA20-POLY1305
     95 TLSv1.2 DHE-RSA-CHACHA20-POLY1305
     36 TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256
     11 TLSv1.3 TLS_AES_128_GCM_SHA256
      9 TLSv1.2 DHE-RSA-AES128-GCM-SHA256
      7 TLSv1.3 TLS_CHACHA20_POLY1305_SHA256
      7 TLSv1.2 ECDHE-ECDSA-ARIA256-GCM-SHA384
      7 TLSv1.2 ECDHE-ECDSA-ARIA128-GCM-SHA256
      7 TLSv1.2 ECDHE-ECDSA-AES128-CCM8
      7 TLSv1.2 ECDHE-ECDSA-AES128-CCM
      7 TLSv1.2 ECDHE-ARIA256-GCM-SHA384
      7 TLSv1.2 DHE-RSA-ARIA256-GCM-SHA384
      7 TLSv1.2 DHE-RSA-ARIA128-GCM-SHA256
      7 TLSv1.2 DHE-RSA-AES256-CCM8
      7 TLSv1.2 DHE-RSA-AES128-CCM8
      7 TLSv1.2 DHE-RSA-AES128-CCM
      6 TLSv1.2 ECDHE-ECDSA-AES256-CCM8
      6 TLSv1.2 ECDHE-ECDSA-AES256-CCM
      6 TLSv1.2 ECDHE-ARIA128-GCM-SHA256
      6 TLSv1.2 DHE-RSA-AES256-CCM
The first six of those are the six that the Wikipedia servers support. The rest see only trace use: the most common of them, at just 225 clients out of 1,955,087, represents only 0.0115% of the clients.
I decided to support the first eight in that list, as that adds two more ChaCha20-Poly1305 cipher suites. I also planned to do what Wikipedia had done: configure the server to prefer the ChaCha20 suites with clients, such as many Android devices, that don't have AES acceleration in their processors. Yes, I realize that the two added ChaCha20 ciphers will probably be used only by automated scans looking to inventory cipher use across the Internet...
Preferring ChaCha20 over AES With Some Clients
Intel and AMD processors support the AES-NI extension to the x86 instruction set, and many ARM processors include analogous cryptographic instructions. These increase encryption speed and also reduce the side-channel attack surface.
Some mobile client platforms have processors that do not have this extension.
AES and ChaCha20 are, as best as we know, equal in security.
With the AES-NI acceleration, AES is faster. But without it, which is the case for many mobile devices, ChaCha20 is three times faster.
So, you can configure the server to prefer ChaCha20 if that's the client's preferred cipher. If the client's most-preferred cipher is anything else, the server will use its own preference list.
See the configuration details below in my heavily commented nginx.conf file.
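In short, the two directives involved are these, shown in context in the full file below:
ssl_prefer_server_ciphers on;
ssl_conf_command Options PrioritizeChaCha;
Note that ssl_conf_command requires a fairly recent Nginx (1.19.4 or later) built against OpenSSL 1.0.2 or later.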
Hardware designs constantly improve. Now many smartphones and tablets have processors with extended instruction sets including AES acceleration. I let the server run for about 20 hours with this new configuration, then counted the cipher suites used. ChaCha20-Poly1305 was used for only 1.3% of the TLS 1.3 connections, and 0.6% of the TLS 1.2 connections.
$ awk '{print $(NF-1), $NF}' /tmp/shortlog | sort | uniq -c | sort -nr
   5360 TLSv1.3 TLS_AES_256_GCM_SHA384
   4801 TLSv1.2 ECDHE-ECDSA-AES256-GCM-SHA384
   2743 - -
    178 TLSv1.2 ECDHE-ECDSA-AES128-GCM-SHA256
     71 TLSv1.3 TLS_CHACHA20_POLY1305_SHA256
     28 TLSv1.2 ECDHE-ECDSA-CHACHA20-POLY1305
     11 TLSv1.2 DHE-RSA-AES256-GCM-SHA384
      5 TLSv1.2 ECDHE-RSA-CHACHA20-POLY1305
The third line, with "- -" instead of a cipher, is the number of connections via HTTP. Nowadays that will mostly be clients following old links, and some of the indexing bots.
And then six months into 2022:
$ awk '{print $(NF-1), $NF}' /var/www/logs/httpd-access.log | \
    sort | uniq -c | sort -nr
1441720 TLSv1.3 TLS_AES_256_GCM_SHA384
 760766 TLSv1.2 ECDHE-ECDSA-AES256-GCM-SHA384
 400119 - -
  30985 TLSv1.2 ECDHE-ECDSA-AES128-GCM-SHA256
  20192 TLSv1.3 TLS_CHACHA20_POLY1305_SHA256
   8383 TLSv1.2 ECDHE-ECDSA-CHACHA20-POLY1305
    308 TLSv1.2 DHE-RSA-AES256-GCM-SHA384
    187 TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384
    113 TLSv1.2 ECDHE-RSA-CHACHA20-POLY1305
     36 TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256
      4 TLSv1.3 TLS_AES_128_GCM_SHA256
      4 TLSv1.2 DHE-RSA-CHACHA20-POLY1305
TLS 1.3 has become more popular, increasing from 52% of the HTTPS connections in that initial test to 65% over the later six-month period. ChaCha20 was used for just 1.38% of the TLS 1.3 connections and 1.06% of the TLS 1.2 connections in this longer test.
For the TLS 1.3 connections:
98.62% used TLS_AES_256_GCM_SHA384
1.38% used TLS_CHACHA20_POLY1305_SHA256
0.00027% used TLS_AES_128_GCM_SHA256
I can see a reason why ChaCha20 is used so infrequently. Using the Terminal Emulator app on a low-end Samsung Android phone I bought in July 2017:
$ more /proc/cpuinfo
processor : 0
features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt lpae evtstrm aes pmull sha1 sha2 crc32
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4
[...and the same for the other three cores...]
Notice the aes, sha1, and sha2 flags in the features line above. By about 2018, most ARM processors based on the ARMv8-A architecture had hardware acceleration for AES, as well as SHA-1 and SHA-2.
And so, clients will increasingly prefer AES over ChaCha20.
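Here's a quick check for any Linux-based device, assuming a /proc/cpuinfo layout like the one shown above; on that four-core phone it reports 4, one matching features line per core:
$ grep -c -w aes /proc/cpuinfo
4
A non-zero count means the processor advertises AES instructions.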
HTTPS Security Headers
HSTS or HTTP Strict Transport Security tells the client that this is an HSTS server: throughout the specified time period, the client should insist on communicating with this server only over TLS. The recommended period is one year.
Once you accept connections and send this header, you are committed to running TLS for at least the next year. This is why it is crucial to ensure that a regularly scheduled job will automatically renew the certificate through the ACME protocol.
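Here is the directive as it appears in my nginx.conf below; max-age is in seconds, and 31,536,000 seconds is one year:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";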
Referrer-Policy
It's possible that your site has some pages with sensitive URLs that contain links to other sites. If you need to, you can tell the browser not to report the Referrer data to the server with the linked page.
The Referrer data can be helpful when analyzing web traffic, although only a small percentage of clients report it, and even in those cases it may be only partial. For example, maybe the client came to my site from www.google.com, but it doesn't report the entire URL so I don't see the search they did.
I leave this almost completely liberalized, telling the browser only to leave it out (if it weren't going to already!) when it's going from an HTTPS URL on my site to a non-HTTPS URL on someone else's.
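The corresponding line from my configuration below:
add_header Referrer-Policy "no-referrer-when-downgrade";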
X-Xss-Protection
The X-Xss-Protection header tells the client to turn on its defenses against XSS or Cross-Site Scripting attacks. Not all browsers support this, and for those that do, I don't know why they don't always do this by default. Anyway, it's certainly a good thing to tell the browser to protect itself against Cross-Site Scripting.
You tell the browser to use one of three modes: "0" to disable the protection, "1" to enable the protection, and "1; mode=block" to enable protection and block the response rather than trying to sanitize the content.
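So, from my configuration below:
add_header X-Xss-Protection "1; mode=block";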
X-Frame-Options
The X-Frame-Options header protects against so-called "clickjacking" attacks, in which your site's page could be referenced within a frame on a hostile page. This header tells the client that your pages can only be placed within frames that are themselves from your site.
To accomplish this with Nginx, insert this X-Frame-Options line and make further modifications to the Content-Security-Policy line in your nginx.conf file.
add_header X-Frame-Options "SAMEORIGIN";
add_header Content-Security-Policy "frame-ancestors 'self'; [...other options here...]";
X-Content-Type-Options
The server reports the MIME content-type in the response. The Chrome and Internet Explorer browsers can try to be a little too clever for their own good: they can attempt to read and interpret the content and conclude on their own that it's a different type of data that needs different handling. This can go very wrong if a hostile user can upload content through, for example, a comments feature.
All you need to do is tell the browser not to do that. This would be a good time to make sure that you have the correct file name extensions on all your image files.
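The directive, from my configuration below:
add_header X-Content-Type-Options "nosniff";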
Content Security Policy, or CSP
The Content Security Policy header is potentially critical. It tells the browser to restrict the sources of scripts and style, among other possibilities.
For pages with sensitive data, such as forms for entering user names and passwords, or pages that manipulate user credentials, we must protect against cross-site scripting attacks. And, less obviously, cross-site styling attacks.
If you allow users to upload data, as with comment fields and similar, you definitely should use this.
However, let's say you have a purely informational site like mine, with no user-submitted content such as comments. Especially if you are trying to support the site with advertising, as I do, a strict CSP interferes: both Google AdSense and Infolinks advertising fail if you set one.
Google AdSense involves a few thousand third-party advertising networks. The initial JavaScript comes from Google over HTTPS, but that can load other JavaScript, which could load more, not necessarily over HTTPS. The same is true for Infolinks. Any attempt to restrict script and style sources will break AdSense and Infolinks advertising. I have read that it also breaks Google Analytics.
I experimented with some of this in Chrome. Load a page, then open the Developer Tools (see the 3-dot button at upper right, then "More tools", then "Developer tools"). Then reload the page. I had the policy set for report-only, so everything loaded, but oh my, the complexity...
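If you want to experiment the same way, the report-only variant of the header has the browser report violations without blocking anything. A sketch, using the same style-src policy I eventually settled on:
add_header Content-Security-Policy-Report-Only "style-src https://cromwell-intl.com https://*.googleapis.com 'unsafe-inline';";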
I decided that for my site, restricting the source of just the style was plenty of restriction.
It's interesting to see this Google web security page citing the importance of CSP while multiple Google services are incompatible.
And More...
There are other things you can do. But be careful. HPKP or HTTP Public Key Pinning was all the rage until people started realizing how it could go horribly wrong. See, for example, essays by Ivan Ristic and Scott Helme. As Ivan writes, "The main problem with HPKP, and with pinning in general, is that it can brick web sites." That is, if you lose control of the keys, possibly by accidental deletion, you lose your web site. Even if you suffer no disaster, key rotation becomes an elaborate ritual prone to error. It's hard, and it's dangerous.
Testing Your Headers
Use testing sites such as the Mozilla HTTP Observatory to check your headers. It will score you harshly if you support third-party ads. But if your site does online retail, or is for a government agency, pay attention to it.
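You can also inspect the headers directly from the command line:
$ curl -sI https://cromwell-intl.com/ | grep -i -E 'strict-transport|x-frame|x-content-type|referrer-policy|content-security|permissions-policy'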
The Resulting Nginx Configuration File
Here is the Nginx configuration file. The comments should describe what's going on.
# Nginx configuration, stored as /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
index Index.html;
# For general log variables:
# http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
# For SSL-specific log variables:
# http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'$ssl_protocol $ssl_cipher $ssl_curve';
access_log /var/www/logs/httpd-access.log main;
error_log /var/www/logs/httpd-error.log;
# HTTP Server
server {
listen 80;
server_name cromwell-intl.com;
# Don't log bulky data, even HTTP requests that will be redirected.
location ~* \.(jpg|jpeg|png|gif|ico|css|js|ttf)$ {
access_log off;
}
# Whether original had "www." or not, redirect without it.
return 301 https://cromwell-intl.com$request_uri;
}
# HTTPS server
server {
listen 443 ssl http2;
server_name cromwell-intl.com;
keepalive_timeout 65;
location / {
root html;
}
# $msec is seconds since the UNIX epoch to millisecond resolution.
# I could set a nonce string to the number of milliseconds since
# the epoch.
#
# The problem remains: If your Content Security Policy says:
# Content-Security-Policy: style=src 'nonce-12345'
# then you can only apply the nonce to style elements:
# <style nonce='12345'> ... </style>
# You can't use it on the style attribute on an element.
# This does not work:
# <div style="..." nonce='12345'> ... </div>
## if ($msec ~ "(.*)\.(.*)") {
## set $noncestring "$1$2";
## }
## sub_filter_once off;
## sub_filter_types *;
## sub_filter ' style=' ' nonce="$noncestring" style=';
## add_header X-Nonce $noncestring;
error_page 404 /ssi/404page.html;
underscores_in_headers on;
####################################################################
## Logging and cache control for images, CSS, JavaScript, fonts
####################################################################
location ~* \.(jpg|jpeg|png|gif|ico|css|js|ttf)$ {
############################################################
# Logging
############################################################
# Don't log bulky data.
# HOWEVER, it seems we get a log entry no matter what
# if the request was referred in by something else. For
# example, fetching an image because of Google search.
access_log off;
############################################################
# Caching
############################################################
# Tell client to cache bulky data for 7 days, which is
# the Google Pagespeed recommendation / requirement.
expires 7d;
# "Pragma public" = Now rather outdated, skip it.
# "public" = Cache in browser and any intermediate caches.
# "no-transform" = Caches may not modify my data formats.
add_header Cache-Control "public, no-transform";
# Tell client and intermediate caches to understand that
# compressed and uncompressed versions are equivalent.
# Goes with gzip_vary below. More details here:
# https://blog.stackpath.com/accept-encoding-vary-important
add_header Vary "Accept-Encoding";
############################################################
# Block image hotlinking. I changed "rewrite" to "return"
# in the description provided here:
# http://nodotcom.org/nginx-image-hotlink-rewrite.html
# Also see:
# http://nginx.org/en/docs/http/ngx_http_referer_module.html
############################################################
valid_referers none blocked ~\.google\. ~\.printfriendly\. ~\.bing\. ~\.yahoo\. ~\.baidu.com server_names ~($host);
if ($invalid_referer) {
return 301 https://$host/pictures/denied.png;
}
}
# Needed because of above hotlink redirection.
location = /pictures/denied.png { }
####################################################################
# Compression suggestions from:
# https://www.digitalocean.com/community/tutorials/how-to-increase-pagespeed-score-by-changing-your-nginx-configuration-on-ubuntu-16-04
# - Level 5 (of 1-9) is almost as good as 9, much less work.
# - Compressing very small things may make them larger.
# - Compress even for clients connecting via proxies like Cloudfront.
# - If client says it can handle compression, but it asks
# for uncompressed, send it the compressed version.
# - Nginx compresses HTML by default. Tell it about others.
####################################################################
# Turn on compression.
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_vary on;
gzip_types text/plain text/css application/x-font-ttf image/x-icon
application/javascript application/x-javascript text/javascript;
####################################################################
# TCP Tuning
#
# Nagle's algorithm (potentially) adds a 0.2 second delay to every
# TCP connection. It made sense in the days of remote keyboard
# interaction, but it gets in the way of transferring many files.
# Turn on tcp_nodelay to disable Nagle's algorithm.
#
# FreeBSD man page for tcp(4) says:
# TCP_NODELAY Under most circumstances, TCP sends data when it is
# presented; when outstanding data has not yet been
# acknowledged, it gathers small amounts of output to
# be sent in a single packet once an acknowledgement
# is received. For a small number of clients, such
# as window systems that send a stream of mouse
# events which receive no replies, this packetization
# may cause significant delays. The boolean option
# TCP_NODELAY defeats this algorithm.
#
# It's on by default, but why not make it explicit:
####################################################################
tcp_nodelay on;
####################################################################
# tcp_nopush blocks data until either it's done or the packet
# reaches the MSS, so you more efficiently stream data in
# larger segments. You can send a response header and the
# beginning of a file in one packet, and generally send a
# file with full packets.
#
# FreeBSD man page for tcp(4) says:
# TCP_NOPUSH By convention, the sender-TCP will set the "push"
# bit, and begin transmission immediately (if
# permitted) at the end of every user call to
# write(2) or writev(2). When this option is set to
# a non-zero value, TCP will delay sending any data
# at all until either the socket is closed, or the
# internal send buffer is filled.
#
# This is like the TCP_CORK socket option on Linux. It's only
# effective when sendfile is used.
####################################################################
tcp_nopush on;
sendfile on;
####################################################################
# Certificates and private keys.
# Send both ECC and RSA certificates.
# Generating and renewing Let's Encrypt certificates described here:
# https://cromwell-intl.com/open-source/google-freebsd-tls/tls-certificate.html
####################################################################
# ECC
ssl_certificate /usr/local/etc/letsencrypt/ecc-live/cromwell-intl.com/fullchain.pem;
ssl_certificate_key /usr/local/etc/letsencrypt/ecc-live/cromwell-intl.com/privkey.pem;
# RSA
ssl_certificate /usr/local/etc/letsencrypt/rsa-live/cromwell-intl.com/fullchain.pem;
ssl_certificate_key /usr/local/etc/letsencrypt/rsa-live/cromwell-intl.com/privkey.pem;
####################################################################
## Cryptography and TLS
####################################################################
# TLS versions 1.2 and 1.3 only
ssl_protocols TLSv1.3 TLSv1.2;
# The SSL session timeout defaults to 5 minutes; don't make it
# longer. Session resumption is abused for tracking by advertisers
# like Google and Facebook, and long timeouts like theirs will
# look suspicious. See, for example:
# https://www.zdnet.com/article/advertisers-can-track-users-across-the-internet-via-tls-session-resumption/
# Note: the "1m" below is one megabyte of shared cache, not one minute.
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
# Specify the ciphers. Guidance available here:
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
# https://wiki.mozilla.org/Security/Server_Side_TLS
#
# See https://cromwell-intl.com/open-source/nginx-tls-1.3/security-performance-nginx.html
# for details of how I selected these. Using 4.5 months of log data,
# the following supports all but 157 out of 1,955,087 clients,
# or all but 0.00803%. These are the ciphers used by wikipedia.org
# as of May 2021, plus two more ChaCha20 ciphers used by just
# 0.0087% and 0.0049% of clients.
#
# See "man ciphers" under OpenSSL 1.1.1 for a mapping between
# OpenSSL cipher names and IETF names. Or run:
# openssl ciphers -s -v
#
# My preference:
# ECDHE-ECDSA first, then
# ECDHE-RSA, then
# DHE-RSA
# and 256-bit before 128-bit in each block.
#
# HOWEVER: OpenSSL's API does not let you specify the TLS 1.3
# ciphers in the same way as the earlier ones. TLS 1.3 ciphers
# are set by:
# SSL_CTX_set_ciphersuites() and SSL_set_ciphersuites()
# while TLS 1.2 and earlier are set by
# SSL_CTX_set_cipher_list() and SSL_set_cipher_list()
# So, at least with OpenSSL 1.1.1, Nginx (and Apache) teams weren't
# initially sure if the API would be stable over the long run.
# They don't use the *ciphersuites() functions, so you can't change
# cipher preference order for TLS 1.3. You get the default:
# TLS_AES_256_GCM_SHA384
# TLS_CHACHA20_POLY1305_SHA256
# TLS_AES_128_GCM_SHA256
# See https://wiki.openssl.org/index.php/TLS1.3 for background.
# Also see:
# https://github.com/ssllabs/ssllabs-scan/issues/636
# https://trac.nginx.org/nginx/ticket/1529
#
# My resulting list:
# ECDHE-ECDSA-AES256-GCM-SHA384
# ECDHE-ECDSA-CHACHA20-POLY1305
# ECDHE-ECDSA-AES128-GCM-SHA256
# ECDHE-RSA-AES256-GCM-SHA384
# ECDHE-RSA-CHACHA20-POLY1305
# ECDHE-RSA-AES128-GCM-SHA256
# DHE-RSA-AES256-GCM-SHA384
# DHE-RSA-CHACHA20-POLY1305
#
# Unfortunately, unlike Apache, this has to be
# one enormously long line.
#
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
# Have the server prefer them in that order:
ssl_prefer_server_ciphers on;
# However, only some clients have processors with the AES-NI extension.
# AES is faster for clients with those processors, but ChaCha20 is
# 3 times faster for processors without the special instructions.
# Tell the server to prioritize ChaCha20, which means that if the
# client reports ChaCha20 as its most-preferred cipher, use it;
# otherwise the server preference wins.
ssl_conf_command Options PrioritizeChaCha;
# Elliptic curves for ECDHE: prefer Curve25519 (X25519) first,
# NIST curves later. Speculation remains about NSA backdoors in NIST curves.
# Chrome and derivatives dropped secp521r1 when it wasn't listed
# in NSA's Suite B list. Indexing bots *.search.msn.com will be
# about the only clients using secp384r1.
ssl_ecdh_curve X25519:secp521r1:secp384r1;
# This file contains the predefined DH group ffdhe4096 recommended
# by IETF in RFC 7919. Those have been audited, and may be more
# resistant to attacks than randomly generated ones. See:
# https://wiki.mozilla.org/Security/Server_Side_TLS
# As opposed to generating my own:
# openssl dhparam -out /etc/ssl/dhparam.pem 4096
ssl_dhparam /usr/local/nginx/ssl_dhparam;
####################################################################
## Security headers
####################################################################
# HSTS
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
# OCSP Stapling
# Support for OCSP Must-Staple is done in the CSR when the certificate
# is generated.
ssl_stapling on;
ssl_stapling_verify on;
# X-Frame-Options
add_header X-Frame-Options "SAMEORIGIN";
# Turn on XSS / Cross-Site Scripting protection in browsers.
# "1" = on,
# "mode=block" = block an attack, don't try to sanitize it
add_header X-Xss-Protection "1; mode=block";
# Tell the browser (Chrome and Explorer, anyway) not to "sniff" the
# content and try to figure it out, but simply use the MIME type
# reported by the server.
# This means that all files named "*.jpg" must be JPEG, and so on!
add_header X-Content-Type-Options "nosniff";
# I set Referrer-Policy liberally. I think referrer info
# can be helpful without being absolutely trustworthy,
# and I don't have any scandalous or sensitive URLs.
# See https://scotthelme.co.uk/a-new-security-header-referrer-policy/
add_header Referrer-Policy "no-referrer-when-downgrade";
# Content Security Policy
# *If* a client could add a stylesheet, then they could drastically
# change visibility and appearance. Also, there is a way to steal
# sensitive data from within a page. See:
# https://www.mike-gualtieri.com/posts/stealing-data-with-css-attack-and-defense
# However, none of my pages have forms or handle sensitive data.
# So, I feel safe using 'unsafe-inline' below.
#
# NOTE that if I didn't do that, I would have to convert every single
# 'style="..."' string to a CSS class instead. The below limits CSS
# to coming from my site and *.googleapis.com while allowing inline
# 'style="..."', which seems plenty safe for me. Also see the above
# comment block about setting $noncestring, and why my attempt to
# replace 'unsafe-inline' with 'nonce-$noncestring' failed.
#
# NOTE that adding a similar parameter for style-src breaks both
# Google AdSense and Infolinks ads, even with 'unsafe-inline'.
add_header Content-Security-Policy "upgrade-insecure-requests; style-src https://cromwell-intl.com https://*.googleapis.com 'unsafe-inline'; frame-ancestors 'self';";
# Permissions Policy, see:
# https://scotthelme.co.uk/goodbye-feature-policy-and-hello-permissions-policy/
add_header Permissions-Policy "fullscreen=(self)";
# Include the TLS protocol version and negotiated cipher
# in the HTTP headers, so we can capture them in the logs.
add_header X-HTTPS-Protocol $ssl_protocol;
add_header X-HTTPS-Curve $ssl_curve;
add_header X-HTTPS-Cipher $ssl_cipher;
####################################################################
# Process all *.html as PHP. The php-fpm service must be
# running, listening on TCP/9000 on localhost only.
# The try_files line serves the 404 error page if they ask for
# a non-existent file. Without that, you get a cryptic error.
####################################################################
location ~ \.html$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index Index.html;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param TLS_PROTOCOL $ssl_protocol;
fastcgi_param TLS_CURVE $ssl_curve;
fastcgi_param TLS_CIPHER $ssl_cipher;
fastcgi_hide_header X-Powered-By;
}
####################################################################
# .htaccess is specific to Apache. Re-implement rules here.
#
# .htaccess conversion done with help of:
# http://winginx.com/en/htaccess
...remainder of file deleted...
You could also test with nmap, but remember that what your server offers will also depend on what its OpenSSL build supports.
$ nmap --script ssl-enum-ciphers -p 443 cromwell-intl.com
[... scan output ...]
$ nmap --script-help=ssl-enum-ciphers
[... explanation ...]
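Or test a specific protocol version directly with OpenSSL's own client; for example, to verify the TLS 1.3 handshake and see the negotiated cipher suite:
$ openssl s_client -connect cromwell-intl.com:443 -tls1_3 < /dev/null | grep -E 'Protocol|Cipher'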
You can accomplish much more with Nginx. See the Nginx documentation for the details.