Firefox security fix points to deep flaws in ‘chain of trust’


By Scott M. Fulton, III

These days, Mozilla’s Firefox is responsible for a diminishing amount of Web use as Google’s Chrome gains prominence–some would say ubiquity. One of the urban myths that gets passed around is that Chrome is ahead of the game in security. This week, Mozilla took another step in publicly dispelling that myth with the distribution of Firefox version 32, whose most prominent addition is an under-the-hood feature called key pinning.

While it’s being portrayed as something that should make you feel safer, the key word there is “feel”. The fact that it’s come to this points to a serious problem that runs deeper than browsers, but one that browsers have mostly managed to exacerbate.

The problem, at its root, is really this simple: Back in the 1990s, the way the Web was evolving, it seemed not only possible but likely that the server computers running the Web server software could be given fixed, identifying names. So if a website you visited asserted that it was X, that meant it was running on a piece of hardware X, whose identity could be attested to by a trusted source running on a piece of hardware Y, which essentially says, “Yes, indeed, that’s X. Trust me.”

Virtualization destroyed that model completely. A Web server–the piece of software that serves up the contents of a website–cannot be guaranteed to be running on any particular piece of hardware at any given time.

The Web’s entire system of session encryption, which ensures that your communications with your bank, your lawyer, or any other party you’d rather not share with the world stay private, depends on an assertion ability that is no longer relevant. So for all these years, browsers have essentially been fudging it. When a site tries to serve you a page using a protocol your browser identifies as https://, it asserts its identity by presenting a chain of SSL, and more recently TLS, certificates. In this chain, each certificate, starting with the one at the top (the leaf), is authenticated (“signed”) by the one below it, until one step above the very bottom. At that point, your browser keeps a root certificate that should attest to the validity of all the rest.
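To make that “signed by the one below it” relationship concrete, here is a minimal sketch, in Python with the third-party cryptography library, of checking a single link in such a chain: confirming that a leaf certificate really was signed by the key in an intermediate certificate. The file names are placeholders and an RSA-signed certificate is assumed; this is an illustration, not code taken from any browser.

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical files: the site's own certificate and the intermediate that issued it.
leaf = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())
intermediate = x509.load_pem_x509_certificate(open("intermediate.pem", "rb").read())

# The intermediate's public key must verify the signature over the leaf's
# to-be-signed (TBS) bytes; an InvalidSignature exception is raised otherwise.
intermediate.public_key().verify(
    leaf.signature,
    leaf.tbs_certificate_bytes,
    padding.PKCS1v15(),              # assumes an RSA signature
    leaf.signature_hash_algorithm,
)
print("This leaf was signed by this intermediate.")
```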

Supposedly your browser already knows what all the root certificates look like. Here’s hoping.

But as Google senior staff engineer Adam Langley explained at a hackers’ conference in 2012 attended by the U.S. Dept. of Homeland Security, up until then at least, browsers had been ignoring the chain altogether and resolving the path from the leaf to the root for themselves.

“If you read the specs, they say that TLS servers must send their certificates in the correct order, they must send exactly the right certificates, and they must not include the root because that’s silly–the client either already has it and trusts it, or it doesn’t and it’s not going to work,” explained Langley (demonstrating on a presentation slide). “This [the leaf] is always first, that’s true, because if that’s not first, nothing works. But… the reality is that sites include only the [leaf] certificate; they include this [the leaf], the intermediate, and the root; they include this, the intermediate, some other intermediate they heard about, and two roots; they include this, another leaf certificate for their other Web server, multiple intermediates, and multiple roots, and they basically go for a ‘drive-by shooting’ approach where they include about twelve certificates, in the hopes that the browsers will be able to pick out the correct ones and just figure it out–and it actually works, because the browsers are really nice about that.

“All browsers completely ignore the standards,” he continued, “and actually, they simply take the first one, call that the leaf, and then just try desperately to build some chain from there to a root they know about, using any intermediate they can get their grubby hands on.”
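What Langley describes can be boiled down to a few lines of Python. This is a deliberately simplified sketch of that browser behavior, not real browser code: the certificate objects are assumed to expose subject and issuer names, and trusted_roots is assumed to be a set of root subject names.

```python
def build_chain(presented_certs, known_intermediates, trusted_roots):
    """Do what the spec forbids: take the first certificate as the leaf and
    try to reach a known root using any intermediate at hand."""
    leaf = presented_certs[0]                 # "simply take the first one, call that the leaf"
    pool = presented_certs[1:] + known_intermediates   # any intermediate available
    chain = [leaf]
    current = leaf
    while current.issuer not in trusted_roots:
        parent = next((c for c in pool if c.subject == current.issuer), None)
        if parent is None:
            return None                       # no path to a known root could be built
        chain.append(parent)
        current = parent
    return chain                              # a chain ending one step above a trusted root
```

A real browser also checks signatures, validity periods, and certificate constraints at every step; this sketch only follows issuer names, which is enough to show the “just figure it out” behavior.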

With this last technique Langley referred to, intermediate certificates that browsers have used before with success are borrowed and used again. One potential solution proposed to the IETF a few years ago was called TACK, under which a site signs its own TLS key with a separate signing key that browsers, once they have validated it, can “pin” for future reference, with an expiration time ensuring the pin won’t be trusted for too long. TACK competed with an approach proposed by Google in 2011 called public key pinning. The idea there was that, once a path to the root has been validated, the browser records (“pins”) the public keys it expects to see in that site’s certificate chain, and on later connections it rejects any chain that does not present at least one of those keys, until the pin expires.
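The general idea behind public key pinning can be sketched briefly, again in Python with the cryptography library. The pin here is the SHA-256 hash of a certificate’s SubjectPublicKeyInfo, which is how pins came to be expressed in practice; the store layout, helper names, and expiration handling are invented purely for illustration and are not Firefox’s implementation.

```python
import base64, hashlib, time
from cryptography.hazmat.primitives import serialization

def spki_pin(cert):
    """Base64 SHA-256 hash of the certificate's DER-encoded public key info (SPKI)."""
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

# pin_store is a dict mapping hostname -> (set of pins, expiry timestamp)
def pin_host(pin_store, host, validated_chain, max_age_seconds):
    """After a chain has been validated once, remember its keys for a limited time."""
    pin_store[host] = ({spki_pin(c) for c in validated_chain},
                       time.time() + max_age_seconds)

def chain_matches_pins(pin_store, host, presented_chain):
    """On a later connection, require at least one pinned key to reappear."""
    entry = pin_store.get(host)
    if entry is None or time.time() > entry[1]:
        return True          # no live pins: fall back to ordinary chain validation
    pins, _ = entry
    return any(spki_pin(c) in pins for c in presented_chain)
```

A chain issued by a rogue CA could still validate against the browser’s root store, but it would fail this pin check, and that is precisely the property pinning is meant to add.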

After having waited perhaps too long for browser engineers to resolve their disputes, Mozilla has gone ahead and implemented a version of public key pinning. With it, Mozilla hopes that certificates passed off as legitimate by rogue certificate authorities (CAs) will be rejected when they are checked against the pinned keys, even in cases where the browser would otherwise have built a chain of trust on its own despite the website’s assertions.

For more:
– read this blog from a Mozilla contributor
– see this YouTube video featuring Google’s Adam Langley
– read this article on LWN

Related Articles:
Intel hopes asynchronous OpenSSL will thwart future Heartbleed
Google forks OpenSSL to create BoringSSL [FierceCIO]


