I think the percentage is interesting, but it adds nothing for either side of the debate (either for or against). I'm not challenging you, I'm talking about the people going back to that number in the thread you posted.
* I don't even know how to set up a master password and have never heard of the option being available in FF or Chrome. I also don't know what it does. Does it replace every password box with a single master password that you enter, which then pulls down the appropriate stored password? Is it a keychain?
Saying "X people don't use this feature" could mean anything. It could mean the feature is buried in the system, or that the feature isn't descriptive enough, or that the feature is hard to understand... it doesn't default to being "people clearly don't want that feature".
[*] I could research it, but I'm giving you my current uneducated opinion to make a point.
Users can navigate menus. But given the out-of-the-way location of the "Use a master password" checkbox, what percentage of Firefox's users even know of its existence? It's likely pretty low.
This is a failure of the browser manufacturers, not the users. I had no idea this was even possible until now; they should surface a feature like this a lot more clearly if they want people to use it.
There is a middle ground, something like MS Office's Ribbon UI, where they set out to minimize the interface while exposing features that had been hidden deeply enough that users couldn't find them.
Go to Settings -> Show advanced settings -> Manage saved passwords -> Click on a "hidden" password -> Click on "Show" button -> Voila, password shown in plain text
Absolutely! This functionality is present in most (if not all) browsers. The goal of this post was to show how malware could automatically attempt to extract all credentials.
However, that's certainly a good feature to mention!
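For readers curious what that automated extraction looks like: at the time of the post, Chrome kept saved logins in a SQLite file called "Login Data" in the profile directory. The sketch below builds a throwaway database with a simplified version of that file's `logins` table and runs the query an extraction tool would run. The table layout is simplified and the blob is a stand-in, since the real `password_value` on Windows is protected with CryptProtectData and would need CryptUnprotectData under the same user account.

```python
import os
import sqlite3
import tempfile

# Build a throwaway database mimicking (a simplified version of) the
# layout of Chrome's "Login Data" file; the real file is an ordinary
# SQLite database in the user's profile directory.
path = os.path.join(tempfile.mkdtemp(), "Login Data")
db = sqlite3.connect(path)
db.execute("""CREATE TABLE logins (
                  origin_url TEXT,
                  username_value TEXT,
                  password_value BLOB)""")
# In the real file on Windows, password_value is the output of
# CryptProtectData; code running as the same user can reverse it with
# CryptUnprotectData. Here we store a stand-in blob.
db.execute("INSERT INTO logins VALUES (?, ?, ?)",
           ("https://example.com", "alice", b"<DPAPI blob>"))
db.commit()

# The query an extraction tool would run against the real file:
rows = db.execute(
    "SELECT origin_url, username_value, password_value FROM logins"
).fetchall()
for url, user, blob in rows:
    print(url, user, blob)
```

No special privileges are involved anywhere: it is an ordinary file read plus an ordinary decryption call, which is the whole point of the post.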
But passwords are already available in plaintext. This fact alone means that passwords are not exactly hidden from the logged-in user.
As an analogy, say you have a house and you have a drawer where you keep all your secret information. If you really want to keep the information secret, then you shouldn't allow outside visitors inside your house. You could encrypt the secret information to make it difficult for the attacker to read the information. But he still has access to your drawer because you let him into your house.
The attacker can install a remote camera near your drawer to see how you decrypt the information, or he can directly see the decrypted plaintext.
I did this on a Mac and got a modal dialog saying "Do you allow Chrome access to keychain item blah.com?" Click anything, get another modal dialog for bleh.com. I realized it was going to go through each and every password I've ever saved in Chrome. With modal dialogs. kill -9.
One thing I've been meaning to test: does Chrome's form autofill (the thing where it fills in as much of a form as it can when you specify an email address) populate hidden fields if they match? If so, it seems like potential for mischief to create some form inputs of type "hidden", or some visually hidden form inputs using style sheets, to capture more information than the user is aware is being populated and submitted.
I assume a real attack wouldn't use <input type="hidden" />. Instead, you'd style the input such that the user doesn't see it, but the browser thinks it's visible. Extremely low opacity and/or an incredibly small size could do the trick. To provoke the browser into autocompleting data, you might even be able to use JavaScript to fake keystrokes in the stealth form inputs.
Front-end stuff is well outside my area of expertise, so I'm betting someone already tried these ideas and now browsers protect against them.
Let's assume for the sake of argument that we are running code on both a Windows machine and an OS X machine, and trying to steal someone's browser passwords.
While it is undeniable that the OS X keychain adds a roadblock to the theft, many average users would happily enter their password if the dialog appeared when they started their browser (even if the browser wasn't the originating process), and would likely also fall for a fake keychain prompt.
I think the keychain is a good thing (just as it is in Android). Just wanted to make the point that the keychain for your average non-power user is a minor roadblock in theft, rather than a "real" security feature.
Yes, but there is nothing that a program can do to prevent the user from hanging themselves. If an attacker has access to your hardware and user session then they can do almost anything, and at the OS level too.
At least Chrome extends the supplied OS security features and doesn't try to re-engineer them from scratch. This makes me more comfortable with Chrome rather than less.
Gnome-keyring doesn't have many dependencies, so you can run it with any desktop environment or window manager. Recent versions of Xfce have built-in support for loading it at login. It also integrates nicely with GPG, SSH, and network-manager. I'm not sure about KWallet.
Or, for example, an attacker stole your backup via a vulnerability in your NAS. Or, for example, some idiots share whole system volumes on e2k and Direct Connect networks, or even on the web.
Just getting the file would not help you with the attack vectors shown in the article: Chrome needs the user account's CryptProtectData, pre-10 versions of IE need a copy of the registry keys plus CryptProtectData, and IE 10 needs a binary running under the user account.
Firefox would appear to be vulnerable to that approach. Not sure about Opera's wand.dat, probably vulnerable as well.
Opera allows you to set up a master password, if you want it. If you don’t want it, you can copy around wand.dat as you like (even from your computer to your phone!) and it just works. :)
Hi there! Thanks for the great comment. I had a similar one on my blog that I responded to in the following way (I hope it helps!):
"Good question! You're right - in these cases it is assumed malware is already present on the system and running in the context of the user. But there can simply be better protection.
Consider Firefox's use of a Master Password. Even if an attacker is on the other side of the airtight hatchway, they will not get the credentials unless they can find out the password used.
But if there is already malware on the user system, it just needs to wait until the user authenticates once in Firefox to get the master password, then it can fetch all the other passwords. Right?
Absolutely it could. However, the layer of defense provided by a master password is still (so far, seemingly) better than the instantaneous and automatic access to credentials malware could have when extracting credentials from Chrome and IE.
But yes, to answer your question (and to validate the other poster on this thread) - If malware infects your system, you will likely have a bad time.
I think you're answering in good faith but I find that caveat so overwhelming that it makes the point meaningless: if malware runs locally outside of a sandbox, you're screwed – full stop, end of story.
There are scenarios where master passwords are extremely useful and that's passive file disclosure such as a network home directory, a compromise of another account while you're not logged in, or – particularly relevant these days – a breached cloud sync service. I would make the case for that reason rather than as a malware resistance measure.
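A minimal sketch of why a passphrase-derived key helps in exactly those passive-disclosure scenarios: the decryption key is derived from something that never touches the disk, so the file alone (from a backup, NAS, or sync service) is not enough. The PBKDF2 parameters and the toy HMAC keystream cipher below are illustrative only, not Firefox's actual scheme, and should not be used for real cryptography.

```python
import hashlib
import hmac
import os

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Illustrative KDF choice; Firefox's real scheme differs. The point
    # is only that the key comes from something not stored on disk.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy HMAC-based keystream cipher for illustration only; insecure.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hmac.new(key, counter.to_bytes(8, "big"),
                           hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

salt = os.urandom(16)                  # stored beside the ciphertext
key = derive_key(b"correct horse battery staple", salt)
vault = xor_stream(key, b"hunter2")    # what lands on disk

# An attacker who only has the file (backup, NAS, cloud sync) holds the
# salt and ciphertext but no key material:
assert xor_stream(key, vault) == b"hunter2"
assert xor_stream(derive_key(b"wrong guess", salt), vault) != b"hunter2"
```

Against live malware this buys little, as the thread says; against a stolen file it is the difference between instant disclosure and an offline guessing attack.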
The long-term fix requires architectural changes: none of the attacks described work directly on Mac OS X, because the Keychain decoding happens in the securityd process, which runs as root, so the malware would trigger a confirmation prompt for each password it tried to pilfer. Unfortunately, this is also less than perfect, as most users check the “Always allow” box, granting their browser unprompted access…
Which, according to his findings with the dumpmon twitter bot, is not uncommon. Obviously you can make the case that YOU would use anti-virus software and YOU wouldn't let malware be installed on your computer, but in the end, you're still using a fairly insecure method to store your important passwords. And really, wouldn't a solution like LastPass be better in every way anyway?
I remember some software explicitly storing your saved passwords in plain text to make the point that storing it "encrypted" is in the end no different.
Short of making you log in to your browser/password manager with a master password every time, how can you possibly store and retrieve passwords without letting other programs running with exact same permissions as you retrieve them?
The Mac OS X Keychain[1] acts as a gatekeeper and has fine-grained permissions, so you can let one application have default access to certain passwords (e.g., web passwords) but not others (IMAP, local file shares), or even require the app to prompt you each time it wants access to a password.
Of course, that's assuming all the software is well-behaved. You could have local malware that pops up a fake master password dialog, tricks a user into filling it out, pulls the keychain out of the user's Library, then decrypts the whole keychain file manually.
Once there's malware running as the user himself, all bets are off. This is why iOS is probably the most secure OS out there - there's no chance for malware to get on the device.
This is harder than it used to be due to the secure text entry and sandboxing options which OS X has added but it's definitely the biggest risk for password manager users.
If you have a keylogger on your machine, all hope is lost. This is true for any password-based security, much like the best safe in the world is thwarted by someone videotaping you entering the combination. Even so, 1Password does utilize sandboxing in OS X and a secure desktop in Windows, which should in theory make this significantly harder to achieve.
Yes... and the premise of the original post was about vulnerability to arbitrary code being executed on the machine with the user account's rights. I.e., nothing's stopping the keyloggers now.
This is the airtight hatchway we're talking about. The post's premise, and the solutions for Chrome and IE, imply the bad guys are already on the other side. All hope is lost. The best you can do is try to make it so that anyone just stumbling around, rather than purposefully looking for the passwords, doesn't find them, and even the value of that is questionable on false-sense-of-security grounds.
It's non-news to anyone who understands how Windows is built.
Why does Chrome, when the registration page includes both email and username fields, only remember the email but then insert it into the username field when you attempt to log in? I know some sites let you use the two interchangeably to login, but doesn't this seem like a silly assumption on Chrome's part? Why not remember both, and insert the username OR the email depending on what the field is called?
Google recently refused to give me access to my account when I'd lost the password.
While it was intensely frustrating at the time, I'm actually grateful that it is so hard to get back into an account. I provided considerable amounts of information, but it wasn't enough for them to hand it over.
Still, when I got access to my super secret hard copy of passwords, and loaded Chrome onto a new machine, and signed into Google, I was a bit alarmed by just how much stuff came back from them onto my local machine. I'm currently slowly migrating to Yubikey and a nice password safe and better passwords for everything.
If I can channel RMS for a second: if you use Windows at this point, it's very clear that you care less about security than about convenience. Whatever browser you put on top of that backdoor/COFEE-infested nightmare matters about as much as which bikini you wear before jumping into a vat of acid.
That said, it's very good to know that Firefox is the safest of the three. If I ever again have the misfortune of advising Windows users on the safest browser to use, I will definitely let them know that it would take far longer to compromise their passwords in Firefox (even hours longer!) than in the other browsers.
Myself, I'll stick to Firefox with the KWallet extension under Kubuntu.
For the attack mentioned above, only one of those is actually useful. Certificates are a complete distraction from what you really need – some sort of multi-factor authentication.
Certificates (I'm assuming asymmetric encryption) are better than passwords in that they aren't passed on to the receiving site. This means that sites can build databases of public keys rather than passwords, and that an attacker compromising such a site, rather than getting a password file he can reuse all over the place, only gets a fairly useless public key which would let him identify the user. And of course there aren't any dictionary attacks on keys either.
Which isn't to say that multi-factor auth isn't a good idea, it's just that certificates are still better than passwords.
Certificates make no difference in the threat scenario we were discussing: if I have enough access to your computer to pilfer passwords, I can snag your certificates at the same time.
Technically, a smartcard is both something you have (the card) and something you know (the PIN). Even if there were no PIN, smartcards are better than passwords:
1. The public key stored by the server cannot be used to authenticate as the user. That means that hacking a server will not give the attacker access to anything beyond that server.
2. More randomness; there are no dictionary attacks on secret keys, and brute force attacks are hard to mount.
3. Defense against phishing: the attacker cannot trick you into giving your secret key, because the card does not export secret keys.
All of the above address the biggest problems we have with passwords right now. You are not likely to be tortured for your card or your PIN, just like you are not likely to be tortured for your password. Sure, smartcards come with their own set of problems, like dealing with lost/stolen/destroyed cards; yet these are not terribly hard to solve (banks are able to deal with lost/stolen/destroyed credit cards). The benefits far outweigh the cost.
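The protocol shape behind points 1-3 is a simple challenge-response: the server sends a fresh nonce, the card signs it with a key that never leaves the card, and the server verifies using only the stored public key. The textbook RSA below uses absurdly small, hardcoded primes purely to make the math visible; it is a toy, not something a real smartcard does.

```python
import hashlib
import os

# Textbook RSA with tiny hardcoded primes: a toy to show the protocol
# shape only, never to be used for real security.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: stays "on card"

def sign(challenge: bytes) -> int:
    # Performed inside the card; d is never exported.
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(m, d, n)

def verify(challenge: bytes, sig: int) -> bool:
    # The server holds only the public half (n, e).
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(sig, e, n) == m

challenge = os.urandom(16)     # server sends a fresh nonce
sig = sign(challenge)          # card answers; the secret never leaves it
assert verify(challenge, sig)
assert not verify(challenge, (sig + 1) % n)   # tampered signature fails
```

Compromising the server leaks only (n, e), which is point 1; and since no reusable secret ever crosses the wire, phishing for it is pointless, which is point 3.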
Chrome may be the most unsafe browser in the world just because of how it gives away saved passwords in clear text with extreme ease. This is such a blatant violation of users' trust that the developers who implemented this and thought it was OK shouldn't be allowed to work on anything related to security. They did not understand the simple fact that most users of Chrome do not have a clue about all these intricacies of software security. They use Chrome because they trust it to keep them safe. When they save their passwords, they get no clear warning that many a 7-year-old can get all of them in 30 seconds without installing or running any additional software on the machine.
I don't get it: if there is malware on your computer, you are compromised anyway, since it could just keylog to get the passwords. So why worry about how hard it is for a program running on the same computer to get the stored passwords?
If there were a remotely exploitable browser bug that made the browser leak them, that would be a threat, but this post seems meaningless from a security point of view.
Every browser seems to implement its own password management scheme. None of them are as good as the same functionality that already exists in the operating system. Browsers should request access to passwords from the OS when needed, perhaps once per session.
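One way to picture that "once per session" idea is a broker that refuses to hand out secrets until the session has been unlocked. This is a hypothetical in-process sketch; a real broker (securityd, gnome-keyring-daemon, KWallet) lives in a separate process and enforces per-application access controls.

```python
import hashlib
import hmac

class SessionKeyBroker:
    """Hypothetical sketch of an OS-level secret broker: the caller
    unlocks once per session, and secrets are handed out only after."""

    def __init__(self, passphrase_hash: bytes):
        self._hash = passphrase_hash
        self._unlocked = False
        self._secrets = {}

    def unlock(self, passphrase: str) -> bool:
        attempt = hashlib.sha256(passphrase.encode()).digest()
        self._unlocked = hmac.compare_digest(attempt, self._hash)
        return self._unlocked

    def store(self, name: str, value: str) -> None:
        if not self._unlocked:
            raise PermissionError("session not unlocked")
        self._secrets[name] = value

    def fetch(self, name: str) -> str:
        if not self._unlocked:
            raise PermissionError("session not unlocked")
        return self._secrets[name]

broker = SessionKeyBroker(hashlib.sha256(b"master").digest())

denied = False
try:
    broker.fetch("mail")          # before unlock: refused
except PermissionError:
    denied = True

broker.unlock("master")           # the once-per-session prompt
broker.store("mail", "hunter2")
print(denied, broker.fetch("mail"))   # True hunter2
```

As the rest of the thread notes, this gating only helps against passive access; code running as the user after the unlock can ask the broker just like the browser can.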
I've been thinking about this (and the more general keychain problem) recently. Wouldn't it make sense to have your keychain stored on your smartphone, and allow applications access over a standard protocol using NFC/USB/Bluetooth?
Better still, let the phone do the public key cryptography (as in plan9's factotum), so that your private keys never leave your phone.
The situation is very common on the Web now.
My 17" monitor is from NEC years ago, is razor sharp and rock solid, and I see no great reason to take time out to change from it. Besides, as another comment in this thread noted, laptops also have relatively small screens! So do tablets and phones!

I know nothing about using Google's blogspot.

For the Web site I'm building, all screens are just 800 pixels wide, and all my fonts are nice and large. So, on a big screen, you could have a 'pile' of dozens of such windows, each offset a little, and on a small screen you could still see the full width easily without using horizontal scroll bars.

How'd I do that? I'm a beginner at HTML, but I just stuffed in 800 px some places and got 800 pixels. If in a browser I shrink a window to less than 800 pixels, then I get horizontal scroll bars.
Broadly, it's reasonable to use lots of space vertically and let the user scroll vertically, but to minimize the space used horizontally so that a user doesn't have to scroll horizontally just to read the text.

In some cases, I just highlight the text, copy it to the clipboard, pull it into my favorite editor, and flow the text to, say, 60 characters a line.
My view is that, for easy eye movement in reading, 40 characters per line has some advantages, 60 characters per line is plenty, 72 is almost too many, 80 is about the upper limit, and over 100 is too often a problem and really too much.

Heck, even when newspapers had sheet sizes big enough to cover a table top, they still kept the number of characters per line way down.

But I can't tell the world how to design Web pages. If some people want, say, 300 characters per line and 20 characters per inch on the screen horizontally, then so be it.
Your web pages with fixed width of 800px may be fairly annoying to users with high-DPI displays. Mac Retina displays are perhaps the most recent and well known, but for years some people have had displays with DPI 50-200% higher than "normal", and for these people, your 800px decision looks like handcuffs. If you look back a dozen years you'll see a number of sites that instituted fixed-pixel-width layouts, then abandoned the approach as people bought more large, high-res displays.
P.S.: Yes, some operating systems and browsers now zoom in a way that this doesn't matter so much, but not all, and not without downsides.
If they have a way just to zoom my Web pages, then they will be okay.

But if they have a big screen with lots of pixels, then I don't want my Web pages taking all of that screen. It's better for the UI/UX for my pages to take less than the full screen so that my users can see some other windows while using my site.

We're not being clear: the Web page 'screens' I am putting up are just 800 pixels wide. I hope that usually the user's physical screen has more than 800 pixels of width, so that my Web pages do not take up all of the width of the user's physical screen.

For my Web pages, 800 pixels is wide enough to get the information out there for the users to read easily. That my Web pages are only 800 pixels wide, and thus usually don't take up the full width of the user's physical screen, helps the UI/UX.

Even if the user's screen is 4096 pixels wide and three feet across, I still only need 800 pixels of it!

For users with tablets, phones, etc., my Web pages should still be easy to use.
I'm not super informed on web-design trends, but I think fixed-width designs are still pretty common. They work. Why do you think you can discount netbooks, anyway?
I use my browser's password store only for harmless profiles, like here and other blogs. The issue I run into most is that a slight URL variation, common with webmail, triggers it to ask again whether to remember the password.
Why aren't physically-local attacks in Chrome's threat model?
People sometimes report that they can compromise Chrome by installing a malicious DLL on a computer in a place where Chrome will find it and load it. (See https://code.google.com/p/chromium/issues/detail?id=130284 for one example.) People also sometimes report password disclosure using the Inspect Element feature (see e.g. https://code.google.com/p/chromium/issues/detail?id=126398).
We consider these attacks outside Chrome's threat model, because there is no way for Chrome (or any application) to defend against a malicious user who has managed to log into your computer as you, or who can run software with the privileges of your operating system user account. Such an attacker can modify executables and DLLs, change environment variables like PATH, change configuration files, read any data your user account owns, email it to themselves, and so on. Such an attacker has total control over your computer, and nothing Chrome can do would provide a serious guarantee of defense. This problem is not special to Chrome — all applications must trust the physically-local user.