A Criticism of JavaScript Cryptography

17 Jun 2014

Introduction

To be pedantic, perhaps a better name for this post would be “A Criticism of In-Browser Cryptography,” because it’s really the “In-Browser” part that gets people–JavaScript is just another programming language.

But on to matters of more substance: the blog posts, comment threads, and IRC chats about in-browser crypto display a wholesale lack of rigor and are usually soul-eatingly banal and Sisyphean. The epitome of this is Matasano Security’s article, entitled Javascript Cryptography Considered Harmful, which, to the dismay of those who consider it a be-all and end-all damnation, is simply an effort to scare novices away from doing something they will, in all likelihood, regret. (Granted, that can be useful in some cases, but it has no place in a more formal discussion.)

Here, instead of throwing around cryptographically meaningless phrases like “harmful,” I hope to provide a thorough and objective analysis of in-browser crypto.

First, A Reduction

My first objective is to convince the reader that the security of in-browser crypto can be analyzed in the same manner as that of software repositories.

After all, disregarding individual security mechanisms, the general flow in both cases is:

  1. The client requests code from a remote server.
  2. The server delivers that code over some channel.
  3. The client runs or installs the code locally.

Take a second and think about it: they’re basically the same thing, right?

Except, often the argument is presented that with in-browser crypto, the code must be requested every time, while with a software repository the code only needs to be requested once. This argument completely disregards HTTP cache headers and the ability of a website to operate offline–both of which request that the browser behave in the exact opposite manner. Furthermore, it’s also oblivious to the fact that while a browser requests resources more often, a software updater requests resources at a regular interval (often without the user’s knowledge or consent).

And often, the second attempt to distinguish between the two is to aver that software repositories offer some increased form of auditability or are unable to target users. To be honest, I’ve no idea where that argument comes from. I’ve no idea how that might be possible, considering that if a server can be coerced into having a successful TLS transaction with a specific user, then a repository can be coerced into doing the same, or signing a malicious package, or whatever their security model requires.

More astutely, there’s also the distinction that with TLS, serving malicious code to users requires either a file-system compromise or a key compromise, whereas software repositories can be architected, through the implementation of offline keys, such that specifically a key compromise is required. Despite this, though, there are several examples of secure software repositories that successfully use TLS. This isn’t a grave concern (and they probably won’t be converting soon) because the decision to use offline keys is palliative, at best, and only marginally increases a system’s level of security.

However, I openly concede that there are cases where a software repository can offer a higher level of security than a browser is currently capable of. I plan to discuss this later.

A Discussion of Security

As hinted at above, security is not a boolean–there are levels of security, which is why when people paraphrase Matasano and say instead “Javascript cryptography is insecure,” they venture even further down the rabbit hole of meaninglessness because they’ve failed to specify what it’s insecure against.

A construction or implementation is secure if an adversary, given a certain level of power, is unable to achieve a given objective. The level of power an adversary is assumed to have, together with their ultimate objective, is called the threat model.

If a new construction is secure under a new threat model that either increases the amount of power an adversary can have or makes the adversary’s objective broader, the new construction is said to have a higher level of security.

Passive Adversaries

Calling an adversary ‘passive’ is shorthand for the threat model in which a third party wishes to learn some piece of information and has the ability to observe all of the communication channels between two parties. Passive adversaries are the weakest worth considering, because the defense against them requires simply the secrecy of any sensitive information.

Examples of package managers that are secure against passive adversaries are Pacman, ports, and Slaktool. By this, it’s simply meant that they use no security mechanisms other than the selective choice of mirrors they trust. [1] Under this threat model, the repositories of these package managers can still securely deliver software that allows the user to build a communication channel that’s impervious to the adversary. For example, they may send their own public key or a package that provides an interface to a Diffie-Hellman Key Exchange (DHKE) protocol.
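The DHKE mentioned above can be sketched in a few lines. This is a toy illustration only (the group parameters below are far too small for real use; deployed systems use vetted 2048-bit-plus groups): each side publishes g^x mod p in the clear, and both derive the same shared secret that a passive observer cannot feasibly compute.

```python
import secrets

# Toy Diffie-Hellman parameters -- illustrative only, NOT secure sizes.
p = 2**127 - 1  # a small Mersenne prime standing in for a vetted group
g = 5

# Each party picks a private exponent and publishes only g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)  # sent in the clear
B = pow(g, b, p)  # sent in the clear

# Both sides compute the same shared secret from the other's public value.
shared_a = pow(B, a, p)
shared_b = pow(A, b, p)
assert shared_a == shared_b
```

A passive adversary sees p, g, A, and B, but recovering the shared secret from those is the computational Diffie-Hellman problem.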

However, a login form provided by HTTP is not secure against passive adversaries, because the user’s password can simply be sniffed once sent. The very protocol Matasano described is secure, though. The server sends a random nonce to the user, and the user responds with HMAC-SHA1(password, nonce). [2] Much like with a DHKE, a passive adversary can observe all communication between the server and client, but remain unable to derive the user’s password. Provided that the server backs this up with sessions that are also secure against passive adversaries, our server owners may find themselves spared the cost of a TLS certificate if this is the highest level of security they desire.
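The challenge-response protocol above can be sketched as follows (the function names are hypothetical; only the HMAC construction comes from the source): the server issues a fresh random nonce, the client answers with an HMAC keyed by the password over that nonce, and the password itself never crosses the wire.

```python
import hashlib
import hmac
import os

def issue_nonce() -> bytes:
    """Server side: generate a fresh random challenge."""
    return os.urandom(16)

def client_response(password: str, nonce: bytes) -> bytes:
    """Client side: prove knowledge of the password without sending it."""
    return hmac.new(password.encode(), nonce, hashlib.sha1).digest()

def server_verify(stored_password: str, nonce: bytes, response: bytes) -> bool:
    """Server side: recompute the MAC and compare in constant time."""
    expected = hmac.new(stored_password.encode(), nonce, hashlib.sha1).digest()
    return hmac.compare_digest(expected, response)

nonce = issue_nonce()
resp = client_response("hunter2", nonce)
assert server_verify("hunter2", nonce, resp)
assert not server_verify("wrong-password", nonce, resp)
```

A passive adversary who records the nonce and the response still has to invert the HMAC to learn the password; the random nonce also prevents straight replay of an old response.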

(And, yes. Such adversaries are present “in the wild.” The infamous NSA’s mass surveillance program is the work of a passive adversary.)

It’s worth noting that this is where Matasano’s fallacy is present: they introduce a construction and discuss its security under a certain, heavily implied, threat model, and then pull the rug out from under the reader by suddenly asserting that it is, in fact, completely insecure under a significantly stronger threat model than it was first considered in. [2] Of course it is.

Active Adversaries

An active adversary, on the other hand, is both able to observe and alter all communication channels between the server and the client in the interest of learning some piece of information. To defend against an active adversary requires authenticity as well as the secrecy used to protect against passive adversaries.

In package managers, there are two fundamental classes of information that must be authenticated: the content of packages, and their timeliness. Obviously, if the package itself isn’t authenticated, the adversary can easily replace it with malware. However, if the timeliness of a package isn’t authenticated–meaning that the user isn’t sure they’re receiving the latest version–then an attacker can provide an obsolete, but still valid, package and exploit known vulnerabilities that may have been patched in the most recent version. [3]
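The timeliness requirement can be sketched like this (a minimal illustration with hypothetical names; HMAC stands in for whatever real signature scheme the repository uses): the repository signs metadata that includes an expiry time, so a replayed, obsolete-but-validly-signed package is rejected once its metadata goes stale.

```python
import hashlib
import hmac
import json
import time

# Stand-in for the repository's real signing key / signature scheme.
SIGNING_KEY = b"repository-signing-key"

def sign_metadata(version: str, sha256: str, expires: float) -> dict:
    """Repository side: sign package metadata, including an expiry time."""
    meta = {"version": version, "sha256": sha256, "expires": expires}
    blob = json.dumps(meta, sort_keys=True).encode()
    meta["sig"] = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return meta

def verify_metadata(meta: dict, now: float) -> bool:
    """Client side: accept only a valid signature AND fresh metadata."""
    unsigned = {k: v for k, v in meta.items() if k != "sig"}
    blob = json.dumps(unsigned, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        meta["sig"], hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest())
    return ok_sig and now < meta["expires"]  # stale metadata is rejected

now = time.time()
meta = sign_metadata("1.2.3", "deadbeef", expires=now + 86400)
assert verify_metadata(meta, now)              # fresh: accepted
assert not verify_metadata(meta, now + 2 * 86400)  # expired: replay fails
```

Without the `expires` field, the signature alone would still validate an old package indefinitely, which is exactly the replay attack described above.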

The simplest–and perhaps only current way–to protect against this in the server-client case is with HTTPS. In studying software repositories, though, the conclusion has been:

If [a] package manager supports HTTPS and it correctly checks certificates, the distribution can set up repositories or mirrors that support HTTPS transfers. This will not protect against a malicious mirror, but will prevent a man-in-the-middle attack from an outsider. [1]

Therefore, in the absence of any mirrors (which secure servers don’t utilize, anyway), HTTPS successfully authenticates the contents of files and verifies that they are the newest available versions. (We already knew this, of course, but it’s nice to have corroborating evidence.)

Finally, unlike with passive adversaries, there is nothing in-browser crypto can do to defend against active ones. Establishing a secure channel in the presence of an active adversary will always require a pre-shared secret and the ability to use that secret correctly, which is contrary to the nature of in-browser crypto.

Honest-but-Curious Servers

An honest-but-curious threat model requires that all parties follow a given protocol honestly/correctly, but may be curious and analyze or refuse to forget any information gleaned from their transcripts (records of a particular instance of a protocol). The objective, then, is to derive some critical piece of information.

HBC is the next logical step in increasing security: now that we’ve secured ourselves against the strongest possible third party, we can start drawing trust away from the second party.

For example, in the case of searchable symmetric encryption, the search server, when following the protocol honestly, allows the user to search and retrieve their private documents efficiently without allowing the server to learn anything about the contents of the documents. Or, more generally, HBC is often used in constructions where n parties, each with a secret value, wish to calculate some function over all of their secrets without actually disclosing their secret to another party (like finding the average of a set of sensitive values).
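The n-party averaging example can be sketched with additive secret sharing (a minimal illustration; the names and parameters are my own, not from the source): each party splits its secret into n shares that sum to the secret modulo a prime, distributes one share per party, and only sums of shares are ever published, so no individual secret is disclosed.

```python
import random

P = 2**61 - 1  # a prime large enough that small inputs never wrap around

def share(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares that sum to it modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def average(secrets: list[int]) -> float:
    """Honest-but-curious average: only share sums are ever revealed."""
    n = len(secrets)
    all_shares = [share(s, n) for s in secrets]
    # Party j publishes the sum of the j-th shares it received.
    partial_sums = [sum(col) % P for col in zip(*all_shares)]
    total = sum(partial_sums) % P  # equals sum(secrets) mod P
    return total / n

salaries = [70000, 52000, 91000]  # e.g. sensitive values
assert average(salaries) == sum(salaries) / len(salaries)
```

Each published partial sum is masked by the other parties’ random shares, so an honest-but-curious party learns the average but not any individual input.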

While there’s been little concern about the curiosity of software repositories, they are trusted to be honest. This is because whether there has been a break-in or the repository itself has become malicious, “we believe a software update system cannot remain secure if an attacker who can respond to client messages has compromised all of its keys.” [3] In the case of package managers that are considered secure at this level, that means the one private key used for signing or TLS.

In contrast to software repositories, where HBC can’t offer much more (besides, perhaps, PIR), servers are forced to handle their users’ data in exceedingly novel ways to protect their privacy. Zero-knowledge password proofs, or password-authenticated key agreements (like SRP), or even forms of attribute-based or server-assisted cryptography can and should be used to authenticate users. As mentioned above, searchable symmetric encryption should be used to efficiently and flexibly handle private data. Client-to-client encrypted messaging systems should be used to handle private communication between two different users. The list goes on ad infinitum, ad nauseam. At the end of the day, it’s up to the developer to strike a balance between utility of data and security of data.

Extending and Enforcing the Honest-but-Curious Threat Model

The highest security package manager I know of is called The Update Framework, and it’s formally discussed in [3]. The property that distinguishes it from the above systems is its survivability–it can continue to function properly while under attack or partial compromise–through the use of threshold signatures. Because, as the authors put it, “historically, software update systems have been designed such that the compromise of a single key is fatal.” They have, in essence, designed a software update system that trusts itself as little as possible.
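The threshold-signature idea behind that survivability can be sketched as follows (hypothetical names throughout; HMAC stands in for a real public-key signature scheme): metadata is accepted only if at least `threshold` of the n role keys produced a valid signature over it, so the compromise of a single key is no longer fatal.

```python
import hashlib
import hmac

# Four role keys; a real system would hold these on separate machines.
KEYS = [b"key-0", b"key-1", b"key-2", b"key-3"]

def sign(key: bytes, blob: bytes) -> str:
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def threshold_verify(blob: bytes, sigs: dict[int, str], threshold: int) -> bool:
    """Accept blob only if enough distinct keys validly signed it."""
    valid = sum(
        1 for idx, sig in sigs.items()
        if 0 <= idx < len(KEYS)
        and hmac.compare_digest(sig, sign(KEYS[idx], blob))
    )
    return valid >= threshold

blob = b"package metadata v1.2.3"
sigs = {0: sign(KEYS[0], blob), 2: sign(KEYS[2], blob)}
assert threshold_verify(blob, sigs, threshold=2)      # 2-of-4: accepted
assert not threshold_verify(blob, sigs, threshold=3)  # not enough signers
```

With a 2-of-4 policy, an attacker who steals one key can forge one valid signature but still cannot get malicious metadata accepted.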

I feel the need to critique in-browser crypto for its failing in this regard, but here we haven’t reached a limitation of in-browser crypto; we’ve reached the limit of web security. Browsers simply don’t have the ability to provide any level of survivability. Maybe that can be a design goal of HTTP 3.0.

Setting that aside, there are often attempts, or criticisms of the lack of attempts, to authenticate the underlying code of in-browser crypto. The standard nostrum is some kind of browser add-on that verifies a website’s code before running it. That add-on is presumably under the same executive control as the site itself, so I fail to see how requiring an attacker to change two codebases rather than one was ever considered a real security option. Perhaps this is another manifestation of the idea that offline keys are more secure than TLS keys, but making some poor developers maintain a high-security website and an extension for every modern browser for negligible security benefits doesn’t sound sustainable (or #winning).

Stronger Levels of Security…?

There are no stronger threat models worth considering. I literally just made a suggestion for HTTP 3.0. HTTP 3.0 isn’t going to happen for hundreds of years, and you want more?

The Problems with Past Articles

I think it may be of value to some to discuss the rhetorical and logical fallacies that previous articles on in-browser crypto use to dissuade the reader.

Javascript Cryptography Considered Harmful

Matasano’s article is one of the most painful things I’ve read since studying cryptography because it absolutely buries the reader in nonsense. The title is a sweeping generalization and an appeal to fear. The author regularly omits, confuses, or writes off threat models in an attempt to disarm the reader–what I talked about in the ‘Passive Adversaries’ section.

He reveals a transparent ignorance of the actual subject matter by saying that “having established a secure channel with SSL, you no longer need Javascript cryptography,” and uses appeals to common practice and anonymous authority as the main stilts of his argument in phrases like “Javascript isn’t a serious crypto research environment” and “[now] you have ‘real’ cryptography.”

He clearly assumes that the reader is an idiot and treats him as one. As mentioned earlier, this isn’t even remotely helpful.

Final post on Javascript crypto [sic]

This article is misleading in many of the same ways that Matasano’s is. The author never clearly states any particular threat model–he’s more interested in tallying up the number of easily-apparent attacks, under whatever threat model he can get away with, and then considers something “more secure” if it grosses fewer attacks. He also makes the typical appeal to common practice, like Matasano, that PGP is irreproachable and everything else is too dodgy.

He takes security problems that are well understood and carefully thought about in other systems, and instead uses them as a reason to run screaming. For example, “7. Auditability — each user is served a potentially differing copy of the code.” … Yes, that is true for in-browser crypto, but it’s true every time you get anything off the internet. Even formal discussions of secure software repositories consider the targeting of users. It is wholly unavoidable.

A second example of this is present in the assertion that the “same-origin policy is not a replacement for ACLs,” which hints at the more general argument that in-browser crypto lacks a secure keystore.

What’s wrong with in-browser cryptography?

This article is actually a fairly good read. Until you get, like, one-third of the way through.

The first objection I have is that he rather tacitly references TUF when saying:

Where installation of native code is increasingly restrained through the use of cryptographic signatures and software update systems which check multiple digital signatures to prevent compromise…, the web itself just grabs and implicitly trusts whatever files it happens to find on a given server at a given time.

and then immediately compares it to HTTP. In fact, that’s what most of his argument consists of: contrasting in-browser crypto with TUF, making it one grand perfectionist fallacy. ‘In-browser crypto is not as secure as the state-of-the-art, therefore it’s insecure.’

TUF is by no means in wide use or even widely known about, and is in no way comparable to HTTP. He also later distinguishes between “data-in-motion” and “data-at-rest,” which has merit, but regardless, is a distinction the vast majority of other software update systems don’t make.

Requests for Clarification

A common issue I’ve decided to omit is MEGApwn and browsers’ lack of a “secure keystore.” The reason is that nobody has ever actually defined what a “secure keystore” is.

Is the .ssh folder a secure keystore? Because any program running as root or as the current user can access all of the private keys in that folder freely, and can even act as a keylogger to find any encryption passwords. Completely ignoring that, the fact that malicious scripts can access stored pieces of sensitive data from the same origin is somehow evidence of insurmountable insecurity. Do “secure keystores” just not exist? Because I can’t seem to find one… at least not one that’s the panacea they’re claimed to be.

Nate Lawson groundlessly averred that the “same-origin policy is not a replacement for ACLs,” but I fail to see how it offers any less protection than file-access policies, given the above.

I’d greatly appreciate any clarification of the above–especially in comparison to other widely deployed security mechanisms.

References

  1. “A Look In the Mirror: Attacks on Package Managers”
  2. “Javascript Cryptography Considered Harmful”
  3. “Survivable Key Compromise in Software Update Systems”
  4. “Final post on Javascript crypto” [sic]
  5. “What’s wrong with in-browser cryptography?”
