ITT: people dramatically under-estimating the risk to their accounts from credential stuffing and dramatically over-estimating their security benefits from not running JS.
They're probably right that not running JS is privacy accretive, but only if you consider their individual privacy, and not the net increase in privacy for all users by being able to defend accounts against cred stuffing using JS. The privacy loss of one account being popped is likely far greater than the privacy loss of thousands of users' browsing patterns being correlated.
tl;dr: Good luck detecting and preventing automation of sign-in pages at scale without robust JS-based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.
Your first statement is incompatible with your second. (I think the second statement is reasonable, although I disagree with the conclusion).
People aren't underestimating the risk to _their_ accounts, they are discounting the risk to _others'_ accounts.
That is, they're essentially saying, 'well, other users chose to have bad passwords, so bully them'.
I think that's a fair viewpoint to have. We've entered a world in which computer literacy is a basic requirement in order to, well, exist.
That said, what's reasonable, and what actually occurs, are two different things. A company isn't going to ideologically decide "screw the users that use bad passwords" if it loses them money.
So we get _seemingly_ suboptimal solutions like this.
I think you may be giving people more credit than they deserve, but I'm willing to accept that they're making that argument. Even if that is their argument (that their personal habits around password use and avoiding phishing are so good they don't need Google's help defending themselves, so bully for everyone who does), I'm not convinced it's a good one.
There are a few things needed for that to be a good argument:
1) Their security really is so good (I'd bet it isn't. I saw a tenured security professor/former State Department cyber expert get phished on the first go by an undergrad.)
2) Google isn't improving their security posture on top of that (I'd be shocked if Google isn't improving theirs, and I'm certain having JS required to sign into gmail closes a major hole in observability of automation)
3) There are real harms to their security/privacy posture from the JS being there (as I've said elsewhere, I'm unconvinced Google's own privacy policy would allow them to do anything untoward here)
As to your point about computer literacy and existence, I think the sad truth is that computer engagement is required, but literacy is optional. When that's the case, large companies are in the position of having to defend even the least computer literate against the most vicious of attackers.
> I think the sad truth is that computer engagement is required, but literacy is optional.
You're right on, but I wouldn't call it sad.
The population is expected to operate vehicles without putting others in danger, not credentialize in how cars work. There are endless amounts of things we could demand people spend their precious time deeply understanding. We just like to demand tech-savviness because it's self-aggrandizing.
Like everything else, the solution is to help people on their own behalf.
At an online casino I once worked at, we ended up generating random passwords for our users. We had to, because otherwise attackers would look up usernames in the large password dumps online and log in as our users. No amount of warnings on our /register page stopped password reuse. So we decided we could do better than that, and that "well, we warned you" was not an appropriate response.
If you look around at everyday objects, everything is designed to protect the user. But for some reason in computing we're still in the dark ages of snickering and rolling our eyes at users for making mistakes.
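A minimal sketch of that kind of server-side password generation, using Python's `secrets` module (the length and alphabet here are illustrative, not any particular site's policy):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically random password.

    Using secrets (not random) makes the values unpredictable, and a
    password that was never chosen by the user can't appear in a dump
    from some other breached site.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The point isn't the snippet itself but the policy: the user never picks (and so never reuses) the credential.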
> The population is expected to operate vehicles without putting others in danger, not credentialize in how cars work.
Exactly! We require "car literacy" in drivers before we allow them to use them. Pretty much every advanced economy has mandatory driver licensing.
A driver can trivially press a few levers and slam themselves into a barrier at 100mph. But they don't do that, because they know, through experience and education, that it's a terrible idea.
That's the exact opposite of the approach that would have cars restrict their own usage into a narrow set of patterns and refuse to function otherwise.
WRT the last half of your comment: I think that's reasonable. Generating random passwords for users is a fair approach.
Account security exists on a spectrum. I don't think anyone (reasonable) is arguing against that, we're talking about mutable state here, actual _actions_.
What I'm railing against, is this idea that every webpage on the internet needs to be behind a CAPTCHA that does a bunch of invasive data collection including probably asking the user to perform a Mechanical Turk task in order to _access a website_ without even logging in.
It happens all the time. A website doesn't like my IP block -> forced through a bunch of nonsense. The site operator probably isn't even aware because they're using an upstream service which does it for them.
If by ‘cred stuffing’ you mean brute forcing accounts, that’s what short lockouts and 2 factor authentication are for. JavaScript is just a layer of obfuscation and doesn’t fundamentally help.
Credential stuffing more commonly refers to the practice of getting valid sets of creds from various password database dumps and retrying them across common/popular systems.
2FA is a good defence against it, but lockouts are less effective, as the attacker will be going broad rather than deep (it could be a single request per user account).
IP-based lockouts, as opposed to account-based lockouts, do better against cred stuffing, because there is a cost to acquiring more IP addresses. Maybe carrier-grade NAT would lead to too many false positives?
Most bad actors doing abuse at scale have access to large networks of proxies on residential or mobile IPs, usually backed by malware on workstations, laptops and mobile phones.
Even as a newcomer without the right contacts on the black market you can get started with very little upfront investment, using services like https://luminati.io/ (they pay software developers to bundle their proxy endpoints within their apps).
/64 and /48 are pretty much on the same order of magnitude as IPv4s in terms of difficulty of acquisition, and I don't know why you would ever look at more than /64 when most major operating systems randomize the last 64 bits anyway (RFC4941).
I don’t see how JavaScript is anything more than a bandaid for that. The assumption is the attacker has the usercode and password combination and then you want to prevent him from logging in.
JavaScript can be served dynamically as well, even per user or per connection. So an attacker would have to investigate and counter each new version of the scripts. Even if this could be done automatically, it greatly increases the cat-and-mouse factor for Google.
Obscurity is just another layer you add onto your security. As with all security methods, none is perfect and it's always a balance with usability.
But with security at this level, nowadays every added layer helps, even if it is not used in the initial authentication step. Think of classifying certain patterns in the attacks and retroactively de-authorizing sessions after login, increasing the time cost for the attacker.
Security through obscurity may be no security at all, but security without obscurity is probably not as good as security with obscurity in many scenarios one can imagine.
Can't you use Javascript to implement challenge-response authentication, which meaningfully improves security by:
1. Preventing interception of passwords on the wire
2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective
3. Requiring that brute-force attackers either run a Javascript interpreter (dangerous, because the web site chooses what they do and could make them mine Bitcoins) or rewrite their brute-forcer each time the JS-driven network communication channel is altered
It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification...
>1. Preventing interception of passwords on the wire
Isn't this solved by https? I have no idea, but I hope at least that https protects my passwords.
>2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective
I don't want to wait for a login more than a second. Actually, I don't want to wait at all.
>3. ... or rewrite their brute-forcer each time the JS-driven network communication channel is altered
How is this different from altered HTML/CSS? An attacker has to adapt to the altered login page. It is not an argument for javascript.
>It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification...
You say it: a protocol! not a piece of javascript.
Neither do I. But I also accept that, given the sheer volume of stolen creds and bots out there, sites that damage attackers' bang/buck ratio, even at the cost of very minor inconvenience to users, are likely to be targeted less frequently and in lower volume. Even if I weren't begrudgingly willing to pay that price, I'd at least admit to the logic of making the process more time-consuming as a deterrent.
I find the idea of detecting someone's trying to bust your login page with some kind of automated system and deciding to serve them a ridiculously aggressive Bitcoin miner rather amusing.
What's stopping that war from happening at any other time? If an attacker has the resources and carelessness to mount such an attack on a whim, you should be prepared for it anyway.
> Can't you use Javascript to implement challenge-response authentication
> 1. Preventing interception of passwords on the wire
It can, but challenge-response that isn't PKI-based requires the remote side to store the secret, or the local side to know how to generate the value that is stored instead, which goes against other recommended practice (with PKI the remote side can store the public key and ask for something to be signed with the private key).
Protecting passwords on the wire is better done with good encryption and key exchange protocols - in the case of web-based systems that is provided by HTTPS assuming it is well configured.
> 2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective
Could you give an example of that? If you are tuning difficulty based on the computation power of the other side, surely the other side could lie about being low powered and get an easier challenge?
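For reference, the scheme usually meant by a tunable difficulty parameter is hashcash-style proof of work: the server, not the client, fixes the difficulty and verifies the work itself with a single hash, so claiming to be low-powered gets you nothing. A toy sketch:

```python
import hashlib
import secrets

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Client: find a nonce such that sha256(challenge || nonce) starts
    with difficulty_bits zero bits. Expected cost doubles per extra bit."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Server: one hash to check. The client can't negotiate an easier
    challenge, because difficulty_bits is chosen server-side."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

challenge = secrets.token_bytes(16)
nonce = solve(challenge, 8)  # tiny difficulty so the demo runs instantly
assert verify(challenge, nonce, 8)
```

Whether the asymmetry (cheap per-login for users, expensive in aggregate for bots) is worth the latency is exactly the tradeoff being debated here.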
> 3. Requiring that brute-force attackers either run a Javascript interpreter (dangerous, because...)
A knowledgeable attacker doing this would be safe: they'd make sure the interpreter was properly sandboxed (to avoid reverse hacking) and given execution resource limits (to avoid resource waste). Then if the site/app is important enough that they really want in, they modify their approach if the resource limits are hit.
> or rewrite their brute-forcer each time the JS-driven network communication channel is altered
If your method is only used by you (and you aren't a Google or similar, big enough to be a juicy target on your own) and you enter into this arms race, you might find it takes so much resource that it gets in the way of your other work. You are only you; the attackers are legion: put one off and another will come along later. There is also the danger, in rolling your own scheme, of making a naive mistake that renders it far less useful than you intended (potentially negatively useful: helpful to the attacker!).
If the method is more globally used then it is worth the attackers being more persistent.
> It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification.
It can, though often only against simple, fully automated attacks. Cleverer automated attacks may still succeed, as may more manual ones, and targeted manual attacks will win by inspection and replication.
Or they get in through an XSS, injection, or session hijacking bug elsewhere (bypassing the authentication mechanisms completely) that you missed because you spent so much time writing an evolving custom authentication mechanism.
Modern cred stuffing is done by botnets. When I see a cred stuffing attack, it's maybe 1-3 attempts per IP address spread over 100-500k IP addresses.
Often you'll have a family of legitimate users behind an IP address that's cred stuffing you at the same time.
Throttling by IP address may have worked 10 years ago, unfortunately it's not an effective measure anymore.
Modern cred stuffing countermeasures include a wide variety of exotic fingerprinting, behavioral analysis, and other de-anonymization tech - not because anyone wants to destroy user privacy, but because the threat is that significant and has evolved so much in the past few years.
To be entirely honest, I'm kinda surprised Google didn't require javascript enabled to log in already.
Unfortunately I don't have much reading material to provide. It's a bit of an arms race, so the latest and greatest countermeasures are typically kept secret/protected by NDA. The rabbit hole goes very deep and can differ from company to company.
The most drastic example I can think of was an unverified rumor that a certain company would "fake" log users in when presented with valid credentials from a client they considered suspicious. They would then monitor what the client did; from the client's point of view it had successfully logged in and would begin normal operation. If the server observed the device acting "correctly" with the fake login token, it would fully log it in. If the client deviated from expected behavior, the server would present it with false data and ban it based on a bunch of fancy fingerprinting.
Every once in awhile, someone will publish their methods/software; Salesforce and their SSL fingerprinting software comes to mind: https://github.com/salesforce/ja3
A relatively successful company in the area is Shape Security. Their marketing is a bit painful, but they coined the term "credential stuffing". Disclaimer: I worked there for four years.
Fundamentally it's a question of fingerprinting the behaviours of humans versus bots. The problem is that it's becoming increasingly difficult to distinguish them, particularly when bots are running headless chrome or similar, and real users are automating their sign-ins with password managers.
I don't do much of this sort of thing, but numerous things come to mind. Aim to identify and whitelist obviously human browsers, blacklist obviously robot browsers, and mildly inconvenience/challenge the rest.
For example, an obvious property of a real human browser is that it had been used to log in successfully in the past. Proving that is left as an exercise for the reader, though it inevitably requires some state/memory on the server side.
Throttle based on what? IP address? This works for domestic IT departments looking to shut out automated attempts from specific ranges but at Google's scale IP based filtering could end up shutting out an entire country.
That's a terrible idea. Back when MSN was one of the most common instant messengers, there was a common prank called "freezing", where you just continuously kept trying to log into someone's account and it would lock itself out for 15 minutes or more, depending how long you kept doing it.
That's the first obvious countermeasure, and it will prevent hackers targeting a specific account. But there are other ways to crack passwords; one is to try the same password but iterate over user IDs instead. As hackers would start with the most common password, you can't throttle globally on same-password attempts either, because, well, it is by definition the most commonly used one and will see a lot of legitimate traffic.
This has nothing to do with anything but I don't know how else to get in touch with you. Could you upload your zero spam email setup guide somewhere? Your site was hacked so the link I had doesn't work:
"Credential stuffing" as I've heard it used refers to taking username/password combos from one breached site and trying them in other sites.
So for example LinkedIn has a breach, which reveals to evildoers that user 'johnsmith@example.com' uses the password 'smith1234' then they test that username and password in Amazon, Netflix, Steam and so on.
They only make one attempt per account, because they only have one leaked password per account. Hence, throttling per account isn't an option.
All of Qatar's traffic used to be routed through 82.148.97.69, though that was back in 2006-2007. At one point it was banned from Wikipedia, which unintentionally affected the whole country.
And indeed it's time to give up on the web being a document format only. The internet is about loading remote applications in your local sandbox. That's what it is. It sucks, but it is what it is. As part of loading remote applications, we now might be asked to compute whatever anti-abuse puzzles are required. So it goes.
If something shitty is happening, you don't have to shrug your shoulders coswhatyagonnado. Understanding the human reason why something shitty is happening doesn't mean you have to accept it. So it goes, until it doesn't.
Passwords are obsolete; actual security would involve keys. The fact that they have to care about automation for security rather than availability is a sign they have already lost. If the administration password of a disposable EC2 server is accessible, you are already doing it horribly wrong, because you /will/ get attacked frequently.
Javascript is opening an attack surface for what will certainly turn into an arms race anyway instead of ending it.
Given that they aren't pushing a new standard for what has already been a problem for a long time, while introducing a vector for abuse both to and from it, Google can be criticized for both of those sins far more.
To be fair, Google released their own OTP hardware keys and has already made 2FA login mandatory for accounts they deem "high risk."
I don't think it's fair to blame them for the fact that most folks are not willing to give up passwords yet. Given that passwords are the current reality, shouldn't they do everything in their power to make them as secure as possible?
I'll be the guy who says that while I recognize those are insults, they are also sufficiently descriptive of a point of view... it's not like he called someone Mr. Poopypants.
> So what about a opt-out at account level? Something in the account settings, like this:
> [check] Allow sign-in from javascript disabled browsers. WARNING etc. (usual warnings about security etc.)
It sounds a bit like what Gmail's doing with their "allow less secure apps" login option, except that's more for allowing IMAP logins using password instead of OAuth.
I use a long and supercomplicated password for one of my accounts that I access intermittently. Why I have it is a long story, but I only log into it once or twice a month to check if there is something that needs my attention.
Usually the login is in incognito or guest mode, and even from different locations and machines. Google asks for a second factor (I don't have 2FA turned on for my accounts), like phone verification, for my usual accounts (with less complicated passwords), but not for the one with the complex password. So I think the level of extra steps/security is linked to how complex your password is. I'm not sure if this is a good thing or bad, but I hope they continue basing their security measures on the security measures you take.
I asked LastPass to generate me a long and complicated password for a new Office 365 account only to have it rejected as too long because it was over 16 characters. Sigh.
Maybe one reason is that Google doesn't know which account is trying to log in before the login page, so how could they remember that security setting before attempting to serve JS?
I don't understand why anybody concerned about having JS on a login screen would want to log into Google in the first place. I imagine there's a tiny overlap between "Runs NoScript" and "Trusts Google"
> ITT: people dramatically under-estimating the risk to their accounts from credential stuffing and dramatically over-estimating their security benefits from not running JS.
Passwords are effectively obsolete and everyone should be using multi-factor authentication of some kind. Keys with passphrases. 2FA. Whatever.
Making 2FA mandatory would be substantially more effective than bot signaling.
> tl;dr: Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.
If they were, 2FA would be mandatory, with additional phone-based (i.e. SMS) verification whenever you try to log in from a new geographic area. That would stop anything short of a targeted hack.
Instead, they created an attack on the bot maker's profit margins. Cloudflare, Google, et al. are really just trying to increase the cost of making bots. They are not really trying to _stop_ bots.
XSS vulnerabilities are everywhere. You obviously don't realize that.
Note that I do use JS, because it makes life easier. But you've got to realize that not using JS will at some point protect you against an XSS vuln. They are that prevalent.
Running JavaScript means parsing text from an outside source plus executing a program from an outside source. Both require really complicated code, measured in millions of lines of code (MLOC).
> The privacy loss of one account being popped is likely far greater than the privacy loss of thousands of users' browsing patterns being correlated.
That's quite the hand-wave. How do you even measure privacy loss? And given that browsing history is not in your inbox, why are you so confident that one compromised email account is a bigger deal?