Please note, I am not accusing any specific person of being a government spook
intent on weakening security, but I can’t help notice the parallels in this
situation with what is known about how Internet standards get compromised.
Except that's exactly what the author is doing. Otherwise why did I have to wade through the first 50% of this blog post? Palmer's blog on the subject [1] is well-reasoned and makes the motivations for the decision clear. Slepak may not like those reasons, but adding a veiled accusation of stoogery to an otherwise weak rebuttal just makes you look like an asshole.
I'll requote Palmer's fundamental assertion:
However, it is not possible for a low-privilege application to defend against
the platform it runs on, if the platform is intent on undermining the
application’s expectations. To try would be futile, and would necessarily
also violate a crucial digital rights principle: The computer’s owner should
get to decide how the computer behaves. Dell and Lenovo let their customers
down in that way, but for better and for worse, it’s not something that a web
browser can fix.
I can appreciate this argument. Today it's pre-installed certificates, but tomorrow it'll be some malware that hijacks the owner's machine in some other way. If Chrome's responsibilities expand to include the security of the entire system, it would place an impossible burden on its engineers.
What specific person is the Author accusing "of being a government spook intent on weakening security"?
I find Palmer's assertion to be flawed and full of hyperbole and misdirection:
> To try would be futile,
This is like saying that because perfect security is impossible, all security measures are futile.
> and would necessarily also violate a crucial digital rights principle: The computer’s owner should get to decide how the computer behaves
In no way is it necessary to violate that principle (unless you view the hardware manufacturer as the 'owner', which does seem to be an increasingly prevalent view)
> it’s not something that a web browser can fix.
Except that we've had at least two specific instances that the browser could have mitigated by warning when HPKP is violated.
> Today it's pre-installed certificates, but tomorrow it'll be some malware that hijacks the owner's machine in some other way.
I do not find your 'slippery slope' argument at all convincing. I'm not asking the browser to fix my ACL permissions, I'm asking it to tell me when my OS asks the browser to do something that is potentially insecure (use a certificate other than the one pinned).
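The mismatch the commenter wants surfaced can be sketched in a few lines. This is a minimal illustration, not Chrome's implementation: the certificate bytes and pin set below are made up, and a real HPKP pin (RFC 7469) is the base64 SHA-256 of the certificate's DER-encoded SubjectPublicKeyInfo rather than of the whole certificate; extracting the SPKI needs an ASN.1 parser, so the whole-certificate digest here is a stated simplification.

```python
import base64
import hashlib


def cert_fingerprint(der_bytes: bytes) -> str:
    """Base64-encoded SHA-256 digest of the certificate bytes.

    Simplification: real HPKP pins hash only the DER-encoded
    SubjectPublicKeyInfo, not the entire certificate.
    """
    return base64.b64encode(hashlib.sha256(der_bytes).digest()).decode("ascii")


def pin_matches(der_bytes: bytes, pinned: set) -> bool:
    """True if the presented certificate matches one of the site's pins."""
    return cert_fingerprint(der_bytes) in pinned


# Hypothetical pinned set for a site; a mismatch is exactly the
# condition the commenter wants the browser to report to the user.
pins = {cert_fingerprint(b"server-cert-der")}
assert pin_matches(b"server-cert-der", pins)          # pinned cert: fine
assert not pin_matches(b"mitm-proxy-cert-der", pins)  # injected cert: warn
```

The point of the sketch is that detecting the override is cheap; the whole debate is about what the browser should do once `pin_matches` returns `False` for a chain that terminates in a locally installed root.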
> I find Palmer's assertion to be flawed and full of hyperbole and misdirection
yes, we wouldn't want to be hyperbolic, would we.
> This is like saying that because perfect security is impossible, all security measures are futile
No, it's saying security theater is just that, and there would be significant downsides for the stated gain. Where you and he disagree is the magnitude of that gain. You're well within your right to disagree on that, but seriously, let's refrain from being so hyperbolic.
It's not like this was unexpected. This has been noted for years in their security FAQ.
It was also extensively discussed when the Lenovo/Superfish incident happened.
It's also notable that no other browser will reject a connection based on a certificate differing from the pinned one, so it's not clear why Chrome is being singled out here. Firefox was only immune because it doesn't use the OS's cert store; Superfish happily injected its certificates into Firefox's store too.
The possibility of working around a security measure doesn't make that measure 'security theater' as long as the workaround requires more resources and effort.
> there would be significant downsides for the stated gain.
What exactly are the downsides to the browser notifying a user when a cert different from the one pinned is used? The potential for some confused questions to antivirus providers and corporate IT departments?
> but seriously, let's refrain from being so hyperbolic.
What hyperbole did I use? I pointed out specifically what I found hyperbolic and flawed about Palmer's assertion. You did not.
> it's not clear why Chrome is being singled out
We are discussing Chrome because that is the subject of the article in question. Obviously these arguments apply to other browsers as well.
While I appreciate the intent of the okTurtles folks, I think their reasoning about Google Chrome is flawed.
If I, as the user, want to override HPKP, it should be easy for me to do so. Perhaps, for instance, I'd like to MITM my own traffic for debugging purposes, or to get an idea of what data an app is transmitting.
Let's imagine for a moment that Google modifies Chrome the way the okTurtles people propose. Nothing stops Dell from writing a kernel patch which detects that code path and alters the functionality at runtime.
If you own the system, you can make it do whatever you want. And Dell starts out owning the system here.
> If I, as the user, want to override HPKP, it should be easy for me to do so.
Absolutely. For debugging purposes, browsers should have a mechanism where you can purposefully use your own local certificate, with a big unremovable warning at the top telling you that. It should not, on the other hand, ever look like a valid secure site.
No legitimate reason exists for a browser to silently show a site as secure that uses certificate pinning and doesn't serve the pinned certificate. Ever. For debugging purposes, you don't need it to work silently.
And if someone started intentionally compromising browsers to make such warnings go away, we'd have a clearer indication that they'd progressed past ignorance into malice.
I've heard that some corporate networks do this to enable filtering and monitor employee internet usage. I could see that being a somewhat legitimate use case.
They can monitor, that's fine, and the correct/ethical thing would be to inform the employees about it and show them a little thing in Chrome to indicate they're being monitored.
We could argue over whether that qualifies as "legitimate", but in any case that doesn't require doing it silently and showing sites as secure. Browsing on such a network should show persistent warnings about every site whose traffic appears potentially intercepted. You can then knowingly continue browsing on that network if you consider it legitimate.
A corporate network (or any other end user) should have no trouble enabling some sort of explicit override option. Having the ability to override certs is fine; the problem is that it's default behavior.
Dell would (hopefully) be a lot more reluctant to ship one of these certs and show specific intent by changing settings to allow this kind of override.
Palo Alto Networks firewalls do exactly that. It was one of their big differentiating features; this is the main reason companies buy them.
Personally I agree with you, but PANW has 15.62 billion reasons to disagree with us. I suspect Google's position is to placate companies like PANW and their customers.
I eagerly await the point where we finally start treating the generation of MITM certificates by any CA as immediate grounds for permanent revocation by all browsers.
Articles like this are some of the funniest posts on Old New Thing, eg [1]. There's no alternative to trusting everyone you allowed to install software on your machine or who has admin/root access.
You seriously don't see any difference between 1) Dell installing a cert that applies broadly with plausible deniability (the current situation), and 2) Dell installing a similar cert and explicitly overriding some sort of per-domain debug setting?
Yes, Dell could do anything they want, but the latter situation clearly establishes mens rea[1].
First, keep in mind the alternative. AVs, security suites, legitimate corporate monitoring tools (and malware) will hook the browser function calls to remove any warning/block, causing crashes and vulnerabilities. If they can plant a root, they can do that. (Unless you give them an easy flag to toggle, but then how is it different from just trusting the certificate?)
Consider that most AVs are doing exactly what Dell did, but "correctly". There are more than you probably think. And they will keep doing it (now via horrible binary hooking), because an AV that "protects your HTTPS traffic" sells more than one that doesn't. THAT will increase the attack surface exposed to the outside world (see Tavis Ormandy's work), only to reduce the "attack surface" exposed to your own manufacturer or system.
Second, please consider with skepticism any proposal that solves any concern by adding more UI. Users can barely understand the green lock (as anyone following the excellent work of Adrienne Porter Felt knows), how is the user supposed to react to "hey, this site has a HPKP pin that is being overridden by a local root, are you sure it's the IT Dept or the AV vendor but not your manufacturer screwing up?".
We settled the "should you defend from root" discussion in Linux years ago, do we have to start again from scratch with Windows?
> hey, this site has a HPKP pin that is being overridden by a local root, are you sure it's the IT Dept or the AV vendor but not your manufacturer screwing up?
Yeah, that is a crappy error message that will confuse and scare people. However, it is not that hard to write a better one:
"Your connection to {this site} may be using a security certificate other than the one specified by {this site}. This may be a part of your Antivirus or Corporate network firewall. However, be aware that it is possible that groups other than you and the operator of this site may have access to any data you view at or send to this site."
Then handle this warning the same way that self-signed or expired certificates are handled.
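The proposal in this subthread amounts to a three-way outcome instead of Chrome's current two-way one. The sketch below is my reading of the commenter's suggestion, not Chrome's actual behavior; the function and pin names are hypothetical, and the key assumption (flagged in the comments) is that the verifier can tell whether the presented chain terminates in a locally installed root rather than a publicly trusted one.

```python
from enum import Enum


class PinResult(Enum):
    OK = "ok"        # presented key matches a pin
    WARN = "warn"    # mismatch, but the chain ends in a local root
    BLOCK = "block"  # mismatch via a publicly trusted root


def evaluate_pin(presented_pin: str, pinned: set, chains_to_local_root: bool) -> PinResult:
    """Sketch of the proposed flow: instead of silently skipping pin
    validation for local roots (Chrome's current behavior), surface an
    overridable warning, as the self-signed-certificate flow does."""
    if presented_pin in pinned:
        return PinResult.OK
    if chains_to_local_root:
        # Show the warning page with "Back to Safety" / "Continue Anyway".
        return PinResult.WARN
    # HPKP already mandates a hard failure for public-root mismatches.
    return PinResult.BLOCK


site_pins = {"pin-sha256-AAAA"}  # hypothetical pinned value
assert evaluate_pin("pin-sha256-AAAA", site_pins, False) is PinResult.OK
assert evaluate_pin("pin-sha256-ZZZZ", site_pins, True) is PinResult.WARN
assert evaluate_pin("pin-sha256-ZZZZ", site_pins, False) is PinResult.BLOCK
```

Note the design choice this encodes: the local-root case becomes a warning rather than a hard block, so corporate proxies and AVs still work, but only visibly.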
> THAT will increase the attack surface exposed to the outside world (see Tavis Ormandy's work), only to reduce the "attack surface" exposed to your own manufacturer or system.
How does it increase the attack surface? The attack surface of 'installing an antivirus that has access to your https traffic and can perform a MITM on it' is the same either way, right?
I don't think any Chrome error mentions "certificates" anymore, and for good reason. Anyway, what is the user supposed to do after reading your (totally easier) message? It still lacks any actionability, because there is no clear action the user can take if their system is that compromised.
I consider hooking into an X.509 verification stack some orders of magnitude harder than parsing HTML.
> Anyway, what is the user supposed to do after reading your (totally easier) message? It still lacks any actionability, because there is no clear action the user can take if their system is that compromised.
Display a "Back to Safety" and a "Continue Anyway" button, and then don't re-display the warning for that site until the browser is re-launched or some period of time passes.
The best thing I read about modern computer security came from Google, I'm fairly certain, though I can't find the reference now.
Basically, the old model was two-party: you vs adversary. It could be assumed that your computer was trustworthy unless it had already been compromised by the adversary. Security in that model was largely based around the "computers should do what their users want" idea. Unfortunately, it doesn't line up with the actual state of computers and computer users today.
Modern security systems (especially those in mobile devices and web browsers) are designed to be three-party: you, your agent, and the adversary. You can be trusted, the adversary can't be trusted, and your agent (software on your computer) can be trusted sometimes. It's not automatically true that code running on your computer, on your behalf, is actually safe. Most users have no idea what that code is even doing.
This three-party model is what underpins the app sandbox, it's what underpins web app security including the same-origin policy, and it's what underpins things like Mozilla and Chrome's plugin and extension blocking. You can't always assume that computers are doing what their users want.
I think what is difficult about this is that, as your technical expertise rises, the two-party model becomes closer to the truth and the three-party model less so. As a developer I find my computer telling me I can't do something enormously frustrating. However, after the thousandth time cleaning malware off my mother's helpful computer, I really wish it would tell her what she can't do more often.
So, while I buy the "we can't control the layer above us" argument, I don't find "do what the user wants" compelling at all. Where was that argument when the users wanted to install old insecure versions of Flash? Or had apps that Google remotely removed from Android phones?
That doesn't mean it needs to happen silently; such interception should result in an unremovable browser indicator warning that it looks like your traffic is being intercepted.
If you consider it acceptable for the network you're on to intercept your traffic, then you can certainly continue to browse despite that warning. You might decide not to transact any private business on such a network, of course.
The author flat out suggests a toggleable debug mode to allow that. Something like a command-line flag on starting Chrome to explicitly disable that behavior. You know, like all the other testing flags Chrome has.
I don't agree with the article. The author talks about reducing the attack surface, yet there's nothing you could do against Dell abusing its control over the laptops it sells.
The author is saying that Dell would have to "literally install malware on their computers. Dell’s stock would plummet".
Oh, like how they installed a root certificate in the local trust store? How is that any different from malware, and where is Dell's stock plummeting? Because I'm not seeing it.
> If I were a decision-maker at Dell faced with the decision of “how do I quickly get my support team the model number of a computer?”, I would not even consider the idea of doing what Dell did. It is simply unnecessary to install a root cert into Google Chrome’s local trust store to solve that problem.
It's unnecessary, but it's a clever hack. The browser will send any server that cert if they ask for it. Thus, any browser consulting the Windows trust store will give Dell the number it wants.
Is it a massively flawed approach? Absolutely, but the explanation does make sense, at least to me.
A more general problem is that A LOT of Windows software is actually released on non-HTTPS-only sites, so that any given Windows installation is highly likely to eventually run an executable downloaded via non-HTTPS download.
All the NSA has to do against random Windows users is wait until an HTTP request for a .exe file comes and automatically infect the downloaded executable.
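One standard mitigation for this tampering risk, when the download itself can't be moved to HTTPS, is checking the file against a checksum published on an HTTPS page. A minimal sketch follows; the `installer` bytes and checksum are made up for illustration, and this only helps if the checksum itself is fetched over an authenticated channel.

```python
import hashlib


def verify_download(data: bytes, expected_sha256_hex: str) -> bool:
    """Compare downloaded bytes against a checksum obtained over HTTPS.

    An executable modified in transit over plain HTTP will no longer
    match the published digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256_hex


installer = b"example installer bytes"           # stand-in for the .exe
published = hashlib.sha256(installer).hexdigest()  # from the HTTPS site

assert verify_download(installer, published)
assert not verify_download(installer + b"injected payload", published)
```

Of course this shifts the burden onto users, which is exactly why HTTPS-only distribution is the real fix.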
Unless it has changed recently, Chrome on Windows and OS X uses the system trust store - it does not have its own. Not sure why Chrome's being singled out here.
> All the same, people seem to wish that servers could say to clients, “Here are my expected keys, and you should fail to connect to me if I seem to present different keys, even if the person who owns the computer wants to connect anyway.” That would be beneficial in that non-consensual proxying would be exposed sooner and with somewhat more certainty. But if a server could get such a guarantee, it could also be used in ways very much counter to the open Internet we know and love.
Thus, putting up a warning when the pinned cert is being overridden:
1) Does not prevent all instances of non-consensual proxying, but
2) Allows better discovery of some instances of non-consensual proxying, and
3) Makes semi-consensual proxying (e.g. antiviruses, corporate firewalls) visible to the average user unless the proxier explicitly takes steps to hide the warning.
4) Does not allow the servers to force clients to use a specific key.
> However, it is not possible for a low-privilege application to defend against the platform it runs on, if the platform is intent on undermining the application’s expectations. To try would be futile, and would necessarily also violate a crucial digital rights principle: The computer’s owner should get to decide how the computer behaves. Dell and Lenovo let their customers down in that way, but for better and for worse, it’s not something that a web browser can fix.
To which the response from the original article is:
> One of the most fundamental principles of information security is reducing the attack surface, meaning that when something is unnecessary and can lead to a vulnerability, it must be removed. Is there anything Google could do to make it more difficult for Dell to compromise their customers? Yes, there is. The most obvious thing would be to not allow roots within the local trust store to override HPKP pins. Had Google removed the red carpet to disabling all of its safeguards, where would that have left Dell? Sure, they could still compromise Chrome, but there would be no “Google Approved” method of doing so. Dell would either: Have to give up, or: Literally install malware on their computers. Dell’s stock would plummet.
Thus fixing this would:
1) Reduce the attack surface created by 'accidentally' publishing the private key for a root certificate (especially when removing those certs is beyond the technical ability of the majority of users)
2) Improve the ability of the computer OWNER to decide how the computer behaves, since they would be aware when HPKP pins are being overridden.
3) Force OEMs that wish to compromise Chrome to write explicit Malware that will be much harder to handwave away and thus be a bigger risk to their reputation.
"3) Force OEMs that wish to compromise Chrome to write explicit Malware that will be much harder to handwave away and thus be a bigger risk to their reputation."
THIS. Shame on you, Google, for allowing anyone to defeat the certificate pinning you advocated for in the first place!
[1] https://noncombatant.org/2015/11/24/what-is-hpkp-for/