SSL/TLS Security
SSL/TLS Background
The SSL/TLS protocol suite underlies much of what we would like to think of as the secure Internet. However, there have been ongoing problems with both the design and implementation of this suite of related protocols, limiting just how much security we can achieve. This means that our web browsing, our e-mail sending and receiving, and much of the rest of our Internet activity rely on some buggy and poorly deployed protocols.
Yes, I use SSH or Secure Shell to update these pages and do remote server administration. But my personal financial activity (banking, PayPal, payment of rent and utilities, and so on) and other critical networking must all be done through browsers running the SSL/TLS protocol suite. We all have to rely on what used to be (and is still often called) SSL, which now should be a later version of TLS.
Why SSL/TLS?
You want to secure your network connections. That means connecting only to known and trusted systems, and protecting the data in transit.
Confidentiality
You want to encrypt using a strong cipher algorithm (e.g., AES, the Advanced Encryption Standard) operating in an appropriate mode (e.g., GCM, Galois/Counter Mode) with an adequately large, randomly-chosen key. It should be a session key, uniquely generated for this one message or connection only, then discarded. DH (Diffie-Hellman) and ECDH (Elliptic-Curve Diffie-Hellman) are algorithms for securely agreeing on shared secrets in insecure environments.
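To make that concrete, here is a minimal Python sketch of AES-GCM with a one-time session key. It assumes the third-party PyCA cryptography package is installed, the messages are placeholders, and it only illustrates the idea; it is not what the TLS record layer literally does internally.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

session_key = AESGCM.generate_key(bit_length=256)   # fresh 256-bit key for this one session
aesgcm = AESGCM(session_key)
nonce = os.urandom(12)                               # 96-bit nonce, never reused with this key
ciphertext = aesgcm.encrypt(nonce, b"the message", b"associated data")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"associated data")
assert plaintext == b"the message"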
An attacker could still do traffic analysis. They could observe a connection from a client to TCP port 443 on a server, and then a flow of data to the client. Apparently the client asked for one or more web pages over HTTPS. The attacker knows the IP address of the server (which might be hosting multiple sites), and the volume of data transferred, but the actual URL and page content were encrypted.
Or, a connection to TCP port 465, and then a flow of data from the client to the server. Apparently the client uploaded one or more email messages for onward delivery. Again, the actual content (headers with sender and recipients, plus the message body) was hidden.
The attacker would need the session key used to secure the connection with symmetric encryption in order to read content.
The theoretical but impractical way to get the session key would be to break the extremely difficult math problem on which the key exchange is based. For RSA-based key exchange, that would be the factoring problem. For Elliptic-Curve Diffie-Hellman (Ephemeral) key exchange, or ECDH(E), that would be the elliptic-curve discrete logarithm problem. As far as we know, it's reasonable to base security on the assumption that it is impractically difficult to solve the problem in reverse and thereby derive the private key from the public key, in turn leading to the shared secret symmetric key used for this one connection.
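Here is a hedged Python sketch of the (EC)DHE idea, again assuming a recent version of the PyCA cryptography package: each side keeps its private key, exchanges only the public halves, and both arrive at the same shared secret, from which a session key is derived. The curve choice and the HKDF "info" label here are arbitrary placeholders, not the values TLS itself uses.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral key pair and sends only the public half.
client_priv = ec.generate_private_key(ec.SECP256R1())
server_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the peer's public key.
client_secret = client_priv.exchange(ec.ECDH(), server_priv.public_key())
server_secret = server_priv.exchange(ec.ECDH(), client_priv.public_key())
assert client_secret == server_secret               # both ends derive the same secret

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"illustrative handshake").derive(client_secret)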
The practical way to get the session key is for an endpoint to leak it, likely through software vulnerabilities like the virtual memory mismanagement in the Heartbleed bug in the OpenSSL cryptography library.
There are also side-channel attacks based on timing or a variety of short-range unintentional emanations. See my list of side-channel attacks for some possibilities, some of them rather impractical.
Another practical opportunity to violate confidentiality can occur when the victim is using a virtualized server at a cloud provider. Many cloud customers simply use an "off-the-shelf" virtual machine image without changing anything, including cryptographic keys. Don't do that.
Authentication
Don't whisper secrets to strangers. That's what confidentiality without authentication would be.
The server and client need to authenticate themselves to the other end. Otherwise, you risk a man-in-the-middle attack which could violate both confidentiality and integrity.
Endpoints can authenticate through asymmetric cryptography such as RSA or Elliptic-Curve cryptography. It's only meaningful when you are confident that you really have the other end's public key. That requires digital signatures from a trusted CA or certificate authority.
Integrity
Accidental data corruption isn't a worry as there are checksums in the TCP, IP, and physical layers of the protocol stack. But malicious modification is still a concern.
A MAC or Message Authentication Code detects attempted message modification or spoofing. It needs a shared secret (use DH or ECDH) and a good hash function (SHA-2-256, SHA-2-384, or SHA-2-512).
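A hedged sketch of the idea using Python's standard library hmac module with SHA-2-256; the key and message are just placeholders:

import hashlib
import hmac

shared_secret = b"secret agreed via (EC)DH"          # placeholder for the negotiated key
message = b"the protected record"

tag = hmac.new(shared_secret, message, hashlib.sha256).digest()

# The receiver recomputes the tag over what arrived and compares in constant time.
received_ok = hmac.compare_digest(
    tag, hmac.new(shared_secret, message, hashlib.sha256).digest())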
History of the SSL/TLS Protocol Suite
Netscape developed a protocol they called Secure Sockets Layer (SSL) to provide multiple network communications security features. These include host authentication through the use of digital certificates, key exchange through asymmetric cryptography, efficient message confidentiality through symmetric cryptography, and message integrity through hashed message authentication codes.
Later modifications to SSL evolved into Transport Layer Security (TLS), which was first defined in 1999. We had already gotten into the habit of using "SSL" as a generic term, sort of like making a "Xerox" of a document or wiping our nose with a "Kleenex". Further confusing matters, TLS v1.0, v1.1, and v1.2 are also known as SSL v3.1, v3.2, and v3.3, respectively!
The sequence of protocols was defined as a series of refinements and corrections to design errors.
Application protocols can specify the use of TLS in two ways. One method, used by HTTPS as opposed to HTTP, is to connect to a different TCP port — 443 for HTTPS versus 80 for HTTP. Once TLS has been established, HTTPS is exactly the same protocol as HTTP.
Other protocols always connect to the same TCP port, but then the client (that is, the host that initiated the connection) sends a request to switch to TLS protection. For example, the STARTTLS request is used in SMTP and NNTP for mail and news, respectively.
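As a small illustration, here is roughly what that upgrade looks like from Python's standard library smtplib; the mail server name and port are placeholders:

import smtplib
import ssl

context = ssl.create_default_context()                # verifies the server's certificate chain
with smtplib.SMTP("mail.example.com", 587) as smtp:    # hypothetical submission server
    smtp.ehlo()
    smtp.starttls(context=context)                     # upgrade the existing TCP connection to TLS
    smtp.ehlo()                                        # re-identify over the now-protected channel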
TLS is established through a handshaking sequence:
1 — The client requests a secure connection and presents its list of supported ciphers and hash functions.
2 — The server picks the strongest cipher and hash function that it also supports, and notifies the client of that selection.
3 — The server sends its digital certificate, containing (among other things) the server's name and public key encapsulated within a digital signature from a trusted Certificate Authority or CA.
4 — Optionally, the client may contact the CA, using a URL embedded within the certificate, to see if the certificate is still valid or if it has been placed on the Certificate Revocation List (CRL) for any of a variety of reasons. Clients generally do not do this step as it would delay the establishment of the desired connection. However, if the CA has been compromised (as has happened in some high profile cases), the browser will accept bogus certificates.
5 — The client generates a random number, encrypts it with the server's public key, and sends the result to the server. It is possible that the server is really an imposter. Anyone can connect to the server and save a copy of the offered digital certificate, later claiming to be that server and sending out the saved certificate. This step offers a challenge to the purported server.
6 — An imposter could not continue, but the legitimate server can use its private key to decrypt that random number and continue with the handshaking sequence.
7 — Both the client and the server use that random number (or nonce, for "number used only once") to generate the shared secret key or keys to be used for subsequent communication. Asymmetric cryptography, using private-public key pairs, is computationally very expensive in comparison to symmetric algorithms. For the limited number of steps based on small data as used in host authentication and key management, the computational inefficiency just doesn't matter. But the subsequent communication might easily move many gigabytes of data if what follows is a long stream of audio or video data, or the download of large data files. A short client-side sketch of inspecting the negotiated result appears after this list.
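Here is the client-side sketch promised above: plain Python from the standard library, connecting to a hypothetical host and printing what the handshake negotiated. The handshake steps themselves are hidden inside the library.

import socket
import ssl

context = ssl.create_default_context()            # CA verification and hostname checking on
with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        print(tls.version())                      # e.g. 'TLSv1.2' or 'TLSv1.3'
        print(tls.cipher())                       # (cipher name, protocol, secret bits)
        print(tls.getpeercert()["subject"])       # taken from the server's certificate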
Continuing Security Problems
At this point, things seem pretty good, right? We are using a fairly complicated protocol, but we are on its sixth version (plus a refinement to disallow rolling back to earlier, insecure versions), and its most recent major release added cryptographically stronger ciphers and disallowed the use of weaker ones.
We are keeping up with our patches, so we have the latest version of the OpenSSL shared libraries, programming API, and command-line toolkit. We have the latest Apache web server using its mod_ssl or possibly the gnutls module for HTTPS over TLS. And, the latest version of the Firefox browser. So, we should be safe, right? Right?
Wrong.
Back in 2002, Phillip Rogaway discovered an attack against certain uses of Cipher Block Chaining modes of encryption. For details, see his paper "Evaluation of Some Blockcipher Modes of Operation". Rogaway's attack was certainly possible in theory, but there was no practical implementation or demonstration of the attack, and it was thought to be nothing but a theoretical risk for the time being.
The IETF and the TLS working group published the TLS 1.1 protocol in RFC 4346 in April 2006. This plugged the logical holes in TLS 1.0.
Our problem is that the IT industry did not move beyond TLS version 1.0 in either servers or clients, because the threat was thought to be nothing but a theoretical one requiring a brute-force search of an impractically large space.
Thai Duong and Juliano Rizzo gave a presentation at the Ekoparty Security Conference in Buenos Aires in September, 2011. Their paper "Here Come The ⊕ Ninjas", dated May 13, 2011, describes their approach in detail. A hostile Java applet injects data, providing a chosen-plaintext attack leading to a decrease in search requirement by a factor of 2²². The theoretical attack has become practical.
Duong and Rizzo introduced a Java applet named BEAST, for Browser Exploit Against SSL/TLS. This applet breaks the "same-origin" policy built into browsers by taking advantage of a flaw in the Oracle/Java software framework.
Why were we concerned in 2011 about a vulnerability we had known about since 2002, and for which TLS had been fixed since 2006?
As I mentioned, developers of web servers and web clients for the most part had not moved beyond TLS 1.0.
Looking at web server software, Apache using gnutls supports TLS 1.1 and 1.2, with TLS 1.2 disabled by default. Microsoft's IIS at version 7.5 supports both TLS 1.1 and 1.2, using TLS implementations built into the operating system, although both are disabled by default. So, it's possible to run TLS 1.2 on your website. But when will the TLS v1.2 clients arrive?
Looking at browsers, Opera version 10 supports TLS 1.1 and 1.2. Microsoft's Internet Explorer on Windows 7/2008R2 and later can use TLS 1.1 and 1.2, although they are disabled by default. Safari on Windows 7/2008R2 and later may be able to use it.
Chrome and Firefox use the Mozilla Network Security Services (NSS) SSL/TLS implementation, and TLS 1.1 and 1.2 are not supported. Safari on macOS uses a custom SSL/TLS engine that does not support TLS 1.1 or 1.2. This page introduces a report about server and browser support for TLS versions, and the report is available here.
Sadly, developers of both web servers and browsers have pointed at the other camp, saying "There is no reason for web servers to support TLS 1.1 and later because web browsers do not support that protocol", and vice-versa.
Qualys does periodic tests of large sets of public web servers to survey the sets of supported protocols. Here are their results, starting from Alexa's top one million HTTPS sites, as presented at Hack in the Box in Amsterdam in 2011:
Protocol  | Supported | Reported as best available
SSL v2.0  | 143,591   | 110
SSL v3.0  | 298,078   | 5,205
TLS v1.0  | 293,286   | 292,366
TLS v1.1  | 916       | 854
TLS v1.2  | 69        | 69

The Qualys study is introduced here and the detailed study is available here.
Out of about 300,000 of the most popular HTTPS servers, close to half of them still supported SSL v2.0, a protocol that browsers released since 2005-2006 refuse to use!
Only about 0.3% of the most popular HTTPS servers supported TLS version 1.1 or 1.2; the remaining 99.7% used a protocol susceptible to the BEAST attack.
Problems with CBC modes
Generally speaking, there are problems with the way that TLS uses block ciphers in cipher-block chaining (CBC) modes. See "Lucky Thirteen: Breaking the TLS and DTLS Record Protocols": a brief overview is here, and the research paper is available here and here.
Mitigation is underway
Both Firefox and Chrome have been patched by updating NSS to randomize some header fields and thereby prevent the chosen plaintext injection.
Oracle has released a patch to Java to fix the same-origin error abused by the exploit. However, the paper by Duong and Rizzo indicates that they have also been investigating non-Java attack vectors. The Javascript XMLHttpRequest API, HTML5 WebSocket API, Flash URLRequest API, Java Applet URLConnection API, and Silverlight WebClient API might all be used to launch this attack; see Google's Browser Security Handbook for further details.
Some resources on the weakness of RC4:
- The latest: On the Security of RC4 in TLS and WPA, 2013
- Overview
- Research paper showing biases in the RC4 keystream and plaintext recovery attacks
- Presentation with more details on RC4 keystream biases
- Research paper with more details on RC4 keystream biases appearing in WPA/TKIP
- Statistical Analysis of the Alleged RC4 Keystream Generator, 2000
- Weaknesses in the Key Scheduling Algorithm of RC4, 2001
- A Practical Attack on Broadcast RC4, 2001
- Analysis of Non-fortuitous Predictive States of the RC4 Keystream Generator, 2003
- A New Weakness in the RC4 Keystream Generator and an Approach to Improve the Security of the Cipher, 2004
- Breaking 104 bit WEP in less than 60 seconds, 2007
- RC4 NOMORE, 2015, breaking WPA-TKIP in less than an hour and making TLS-protected cookie decryption attacks practical.
Many web servers were reconfigured to disable ciphers other than RC4, which is included in the TLS 1.0 cipher suites. RC4 is a stream cipher, not a block cipher, and so an attack based on abuse of Cipher Block Chaining will not work against it. The concern over BEAST pushed about half the HTTPS servers on the Internet to use RC4. Unfortunately for U.S. Government website administrators, or so it seemed at the time, RC4 was not approved for use under FIPS 140-2.
Then, by late summer of 2013, the community came to the realization that RC4 had weaknesses. Meanwhile, BEAST is a client-side attack, best mitigated in the client browser. By September of that year Qualys had concluded that, for the most part, patched browsers were working around BEAST and related threats, and that RC4 should no longer be used.
RFC 7525 decreed in May 2015 that RC4 must no longer be used in TLS. Also, "static RSA" key transport should no longer be used. Authenticate with RSA, but use Ephemeral Diffie-Hellman and Elliptic Curve Ephemeral Diffie-Hellman (DHE and ECDHE) for perfect forward secrecy, and use keys at least 2048 bits long. Now the preferred cipher would be AES-256-GCM — AES in Galois Counter Mode using a 256-bit key.
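In your own code, the same preferences can be expressed when building a TLS context. A minimal Python sketch, assuming Python 3.7 or later and an OpenSSL build that still honors these cipher strings:

import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2      # refuse SSLv3, TLS 1.0, and TLS 1.1
context.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")        # forward secrecy with AEAD ciphers only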
CRIME and the Next Wave of SSL/TLS Attacks
One year later, Juliano Rizzo and Thai Duong presented their latest work. CRIME, or Compression Ratio Info-leak Made Easy, is a practical attack against the way that TLS can be used in browsers. It would be especially useful for stealing session cookies, which would allow the attacker to then masquerade as the victim. It has been described as the potential "nation-state attack", something used not by criminals to steal your credit card number, but by nations like Iran and China to find dissidents. For more information see Ivan Ristić's blog at Qualys, Ars Technica's report, Kaspersky Lab's threatpost, and the description at iSEC Partners.
The underlying cause is that some information leaks through when you compress data before encryption, especially if the attacker is able to repeatedly insert small pieces of known data with the sensitive and relatively predictable target data. It's an elegant attack based on information theory. When the attacker-provided data has more in common with the sensitive cookie data, there is more redundancy to remove and the output of the compression will be smaller. The attacker can't read the resulting ciphertext (yet), but the smaller size indicates that the guess is getting closer.
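You can see the effect with nothing more than the standard zlib module. This is only a toy illustration of the length side channel, not the CRIME exploit itself; the "secret" is a made-up cookie value.

import zlib

SECRET = b"Cookie: session=7d1f4a9c"             # hypothetical value the attacker wants

def observed_length(attacker_data: bytes) -> int:
    # The attacker sees only the length of the compressed (then encrypted) request.
    return len(zlib.compress(attacker_data + SECRET))

# A guess sharing a longer prefix with the secret usually compresses slightly better.
print(observed_length(b"Cookie: session=7"))     # closer guess, typically shorter output
print(observed_length(b"Cookie: session=X"))     # wrong guess, typically longer output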
Many of us were very worried, but CRIME seems to be much less of a threat than we had feared.
First, the server must support the compression of request data before encryption. The DEFLATE compression in TLS is vulnerable, and about 42% of servers world-wide support it according to Qualys' SSL Labs tests. The newer SPDY protocol developed and used by Google is also vulnerable, but it was only supported by about 2% of servers when CRIME was announced.
Second, the browser must also support compression. Chrome supported TLS compression, and both Chrome and Firefox support SPDY, but Chrome removed support for TLS compression and Firefox either removed support or never supported it in the first place.
It seems that CRIME would be relatively easy to exploit, but that it would be much harder to find potential victims.
Testing Servers: Qualys and gnutls-cli
The easiest and friendliest, while still being very thorough and conservative, is the Qualys SSL Labs automated SSL Server Test.
You can also use the gnutls-cli tool to examine web server certificates. Here is an example, with really long lines broken for display:
% gnutls-cli www.paypal.com
Processed 181 CA certificate(s).
Resolving 'www.paypal.com'...
Connecting to '23.73.82.234:443'...
- Certificate type: X.509
- Got a certificate list of 3 certificates.
- Certificate[0] info:
 - subject `jurisdictionOfIncorporationCountryName=US,\
   jurisdictionOfIncorporationStateOrProvinceName=Delaware,\
   businessCategory=Private Organization,serialNumber=3014267,\
   C=US,postalCode=95131-2021,ST=California,L=San Jose,\
   street=2211 N 1st St,O=PayPal\, Inc.,OU=CDN Support,CN=www.paypal.com',\
   issuer `C=US,O=VeriSign\, Inc.,OU=VeriSign Trust Network,\
   OU=Terms of use at https://www.verisign.com/rpa (c)06,\
   CN=VeriSign Class 3 Extended Validation SSL CA', RSA key 2048 bits,\
   signed using RSA-SHA1, activated `2013-06-20 00:00:00 UTC',\
   expires `2015-04-02 23:59:59 UTC',\
   SHA-1 fingerprint `43043190ba8a98970c60b1e9e1f70cdcfea285d2'
   Public Key Id: 4987cd428fdbceff8690f7f2a28dae48c0c8c3e0
   Public key's random art:
   +--[ RSA 2048]----+
   | .               |
   | . *             |
   |. = =            |
   |.+ o . *         |
   | E= o S ..       |
   | . . oo .        |
   | . oo o          |
   | . . ++ o        |
   | . .o+.o*o       |
   +-----------------+
- Certificate[1] info:
 - subject `C=US,O=VeriSign\, Inc.,OU=VeriSign Trust Network,\
   OU=Terms of use at https://www.verisign.com/rpa (c)06,\
   CN=VeriSign Class 3 Extended Validation SSL CA',\
   issuer `C=US,O=VeriSign\, Inc.,\
   OU=VeriSign Trust Network,OU=(c) 2006 VeriSign\, Inc. - For\
   authorized use only,\
   CN=VeriSign Class 3 Public Primary Certification Authority - G5',\
   RSA key 2048 bits, signed using RSA-SHA1,\
   activated `2006-11-08 00:00:00 UTC',\
   expires `2016-11-07 23:59:59 UTC',\
   SHA-1 fingerprint `2bac956c4ee47f9d5c1e05ae8ed7f95d47c21f80'
- Certificate[2] info:
 - subject `C=US,O=VeriSign\, Inc.,OU=VeriSign Trust Network,\
   OU=(c) 2006 VeriSign\, Inc. - For authorized use only,\
   CN=VeriSign Class 3 Public Primary Certification Authority - G5',\
   issuer `C=US,O=VeriSign\, Inc.,\
   OU=Class 3 Public Primary Certification Authority',\
   RSA key 2048 bits, signed using RSA-SHA1,\
   activated `2006-11-08 00:00:00 UTC', expires `2021-11-07 23:59:59 UTC',\
   SHA-1 fingerprint `32f30882622b87cf8856c63db873df0853b4dd27'
- Status: The certificate is trusted.
- Description: (TLS1.2-PKIX)-(RSA)-(ARCFOUR-128)-(SHA1)
- Session ID: 44:48:7F:16:FD:B6:E9:84:27:43:A7:BD:61:49:A5:45:CF:90:88:33:E9:1C:FE:5B:5D:51:5C:91:E4:69:67:BE
- Version: TLS1.2
- Key Exchange: RSA
- Cipher: ARCFOUR-128
- MAC: SHA1
- Compression: NULL
- Handshake was completed
- Simple Client Mode:

- Peer has closed the GnuTLS connection
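If you just want to pull down a server's certificate from a script, Python's standard library can do that as well. A small sketch; note that this call simply fetches the leaf certificate in PEM form, it does not perform a full validation of the chain:

import ssl

# Fetch the server's certificate in PEM form for inspection.
pem_cert = ssl.get_server_certificate(("www.paypal.com", 443))
print(pem_cert)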
2014 — Several Big SSL/TLS Bugs are Found
February 22, 2014: Apple reported the goto fail bug, a logical coding error in the macOS and iOS implementations of a TLS shared library. Yes, it really happened at a goto statement jumping execution to a code block handling failure.
March 3, 2014: GnuTLS was found to have errors allowing a man-in-the-middle attacker to spoof server identity. This bug was found to have been in the code since 2003. It's strangely similar to the Apple bug: it breaks the same thing (verification of an X.509 v3 digital certificate) with the same end result (a carefully crafted invalid certificate causes a logical failure and error-handling checks stop too soon), and it happens around an invocation of the same antiquated and long-disparaged goto directive.
March 21, 2014: The Heartbleed bug in the OpenSSL implementation of the TLS Heartbeat Extension was found and patched. The vulnerability was introduced into the OpenSSL source code repository in December 2011, and had been in widespread deployment since the release of OpenSSL 1.0.1 on March 14, 2012. According to two inside sources as reported by Bloomberg, USA Today, and the Financial Post, the NSA knew about and exploited this bug for two years. See the list of affected web sites and XKCD's wonderful explanation of how Heartbleed works.
June 5, 2014: Post-Heartbleed scrutiny of the OpenSSL code led to the discovery of six more bugs. The worst was CVE-2014-0224; there were also CVE-2014-0221, CVE-2014-0195, CVE-2014-0198, CVE-2010-5298, and CVE-2014-3470. See the overview by the discoverers and discussions by HP's Zero Day Initiative, Symantec, and a Google researcher.
October 14, 2014: The POODLE flaw in SSLv3 was announced. There is simply no way to secure SSLv3 or earlier. To everyone running nearly-18-year-old SSLv3.0 in late 2014 — Stop doing that! Disable SSLv3.0, or at least disable CBC-mode ciphers in SSLv3.0.
To disable SSLv3 in Firefox:
- Enter about:config in the Location box.
- Search for TLS.
- Double-click security.tls.version.min.
- Change the value from 0 to 1.
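Server-side code should refuse SSLv3 as well. A minimal Python sketch of building a server context with SSLv3 explicitly disabled; the certificate and key file names are placeholders:

import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.options |= ssl.OP_NO_SSLv3                 # refuse SSLv3 even if the library supports it
context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # hypothetical paths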
And Later ...
Further attacks were discovered, all of them made possible by intentional weaknesses built into cryptographic protocols in the 1990s. This is an example of why backdoors are dangerous.
The FREAK attack was announced in March 2015. It is based on forcing the server to use weakened "export-grade" encryption.
The DROWN attack was announced in March 2016; it abuses the continuing support for SSL v2.
The Logjam attack allows a man-in-the-middle attacker to downgrade a TLS connection to using 512-bit "export-grade" Diffie-Hellman key negotiation. See the technical paper for the details.
Weaknesses and Misuse of SSL/TLS Shared Libraries
A group of researchers at Stanford University and the University of Texas published an alarming paper based on their research: "The most dangerous code in the world: validating SSL certificates in non-browser software". They say, "We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries."
The APIs (or Application Programming Interfaces) of the standard SSL/TLS implementations have confusing settings and options. This includes OpenSSL, GnuTLS, and JSSE, plus data-transport libraries such as cURL. This leads to weaknesses in applications built on these libraries.
Amazon's EC2 Java library is vulnerable, breaking the security of all cloud applications based on it. So are the software development kits which Amazon and PayPal on-line merchants use to transmit payment details. Integrated "shopping cart" applications relying on these broken libraries include ZenCart, Ubercart, PrestaShop and osCommerce. Chase Bank's mobile banking application and other Android apps are also vulnerable.
These poorly designed APIs confront programmers with confusing low-level interfaces. One example used in the paper is an Amazon PHP-based payment library that attempts to secure a connection by setting cURL's CURLOPT_SSL_VERIFYHOST parameter to true. That certainly looks reasonable! However, it should be set to 2. If a programmer sets it to true, that is non-zero and so it ends up being interpreted as 1, and that disables the verification!
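For comparison, here is what the correct settings look like from Python's pycurl binding to the same libcurl options. This is only a sketch: at the time of the paper a value of 1 only checked that the certificate contained some name, not that it matched the host, while recent libcurl versions treat 1 the same as 2.

from io import BytesIO
import pycurl

buffer = BytesIO()
curl = pycurl.Curl()
curl.setopt(pycurl.URL, "https://www.example.com/")
curl.setopt(pycurl.SSL_VERIFYPEER, 1)   # verify the certificate chain against trusted CAs
curl.setopt(pycurl.SSL_VERIFYHOST, 2)   # 2 = also verify the certificate matches the host name
curl.setopt(pycurl.WRITEDATA, buffer)
curl.perform()
curl.close()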
Some popular API calls don't even try to be secure. PHP's fsockopen and Python's urllib, urllib2, and httplib are popular with programmers, even though they establish SSL connections with no attempt at verifying the server's identity.
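Modern Python 3 is better than the libraries studied in the paper: urllib.request verifies certificates by default (a change that came well after the paper was written), and passing an explicit context makes the intent obvious. A small sketch with a placeholder URL:

import ssl
import urllib.request

context = ssl.create_default_context()            # CA and hostname verification enabled
with urllib.request.urlopen("https://www.example.com/", context=context) as response:
    body = response.read()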
The authors draw three main lessons from this mess, in addition to the observation that SSL bugs tend to be buried in the middleware and hard to find.
- Application software is not being rigorously tested.
- Many SSL libraries are unsafe by default, requiring the higher-level software to correctly set several somewhat misleadingly named options and correctly interpret return values. Error reporting is often indirect and easily overlooked.
- Even those SSL libraries that are safe by default tend to be misused by developers.