Hacker News

It's just a bug in Chrome that lets sites disable it. A cat-and-mouse game that Chrome should easily win, given that it holds all the cards.
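For context, sites have several known tricks for detecting an open console. Here is a sketch of one widely known technique (illustrative only, and not necessarily the specific Chrome bug this thread is about): the devtools console invokes property getters when it renders a logged object, so a getter on a "bait" object fires only when the console is open and inspecting it.

```javascript
// Sketch of a well-known devtools-detection trick (illustrative only,
// not necessarily the specific Chrome bug discussed in this thread).
// The devtools console invokes property getters when rendering a logged
// object, so a getter on a "bait" object fires only if the console is
// open and inspecting it.
let devtoolsOpen = false;

const bait = {};
Object.defineProperty(bait, "id", {
  get() {
    devtoolsOpen = true; // side effect: record that something read the property
    return "bait";
  },
});

// In a browser, a site would log this periodically; with devtools open,
// rendering the object triggers the getter and flips the flag, at which
// point the site can start sabotaging the console.
console.log(bait);

// Direct property access triggers the getter as well:
console.log(bait.id, devtoolsOpen);
```

Once the flag flips, a hostile page can redirect, clear the console in a loop, or spam `debugger` statements, which is the kind of sabotage the parent comment wants Chrome to shut down.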


Chrome can easily fix this, but it wouldn't actually be a bad idea for them to show a message warning users about social-engineering-based self-XSS attacks when devtools are first brought up.
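Some large sites already print exactly this kind of warning themselves (Facebook's "Stop!" console message is the best-known example). A minimal site-side sketch, with the wording being my own placeholder rather than any site's actual text:

```javascript
// Minimal sketch of a site-printed self-XSS warning, in the style of
// Facebook's well-known "Stop!" console message. The wording here is a
// placeholder, not any real site's actual text.
const SELF_XSS_WARNING =
  "This is a browser feature intended for developers. If someone told " +
  "you to copy and paste something here to unlock a feature or \"hack\" " +
  "an account, it is a scam and will give them access to your account.";

function showSelfXssWarning() {
  // The %c directive applies CSS styling in browser consoles
  // (Node.js simply ignores the styling argument).
  console.log("%cStop!", "color: red; font-size: 48px; font-weight: bold;");
  console.log("%c" + SELF_XSS_WARNING, "font-size: 16px;");
  return SELF_XSS_WARNING; // returned so callers can inspect the text
}

showSelfXssWarning();
```

A browser-level warning, as proposed above, would reach every site rather than only those that bother to print one.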

Either that or "hide" the developer tools a bit like they do in modern Android so that it is really obvious to the user if they are directed to mess with things that they shouldn't be messing with without understanding them.


"so that it is really obvious to the user if they are directed to mess with things that they shouldn't be messing with"

I think that as soon as an attacker tempts the user with "follow these steps to access American/UK-only films (substitute a locale with content the user's account shouldn't have access to) that Netflix doesn't want you to know about!", the apparent gain will lead them to ignore any warnings. In fact, a warning might actually encourage these kinds of attacks, since the user could think "that's just Netflix trying to hide something, I'm gonna follow [the attacker's] guide".


I agree that such a warning wouldn't be 100% effective, but I'd much rather have an attempted warning on Google's part, along with a fix that doesn't allow sites to break dev tools, than a situation in which more and more websites break dev tools with this workaround.

The warning won't be foolproof, because the world is constantly evolving greater fools, but if well implemented it'll stop at least some of them without really impacting legitimate uses of the dev tools.


The issue is that for many laypeople there is currently no clear line between the parts of the computer that are normal to access and the parts that are not. At the moment, telling someone to go into the console is no more suspicious than telling them to go to the control panel.

What the proposed dialog would do is inform users that the console is someplace that they probably do not want to be.


You can't protect the users this way: the attacker will create a custom Chromium build and lure the user into downloading and executing it. At that point the user will be pwned either way.


That's a ridiculous argument. If the attacker could do that, why would they even bother with XSS attacks based on developer tools?


The argument would be that a custom build of Chrome (or other malicious software) is harder to create, but not so much harder that it isn't worth doing for the attacker.


If you're getting a user to download and execute a piece of software, why bother going beyond that?


Is the console itself mostly written in JS? If so, did no one think "this thing is accessible from JS, and that JS could come from a (possibly untrusted) external site"? Were they planning on this "feature" being useful somehow? It reminds me of debuggers that can be crashed by the programs they're debugging, VMs that escape into their hosts, and (as mentioned) websites that try to disable right-click or otherwise interfere with the browser, which should ultimately be the one in control (on the user's behalf)...


The original idea of the JavaScript console was that it would be used by developers to debug their own sites.



