
"self-XSS" (the thing this malfeature is purportedly protecting people from) is a made-up concept. It's basically "don't run your own scripts to interfere with our site, and we'll use scary-sounding security words in an attempt to discourage you from doing it." I don't believe for a second this is about helping the user - more likely is that FB and Netflix want to prevent users running scripts that add features or do functions they find inconvenient, like exporting your address book or movie rating info.

I get to run code just as much as you do - it's MY computer, MY browser, and MY bandwidth. Making up a scare word (that just means "users running code I don't like") in an attempt to legitimize disabling access to development and exploration tools is beyond the pale. There is absolutely no reason to permit this kind of behavior, and I'm frankly a little appalled a community of startup founders and hackers would ever defend this kind of behavior, as some of the comments here have done. If you want to protect users from themselves and limit and restrict what they can do, write a mobile app. Don't try and put your shit on the web if you want it to be a walled garden.



I don't see exactly how it is a concern for Netflix, but sadly "self-XSS" is real on Facebook. Not among us tech-savvy people, obviously, but consider how much we're looking at this from inside a bubble.

If people read of a "h4x0r trick to read their bf/gf private messages", they will execute it. And hey, "it has this l33t keyboard shortcut that will make a strange window pop up, it must be what the hackers in the movies use!!". And then "Oh well, thanks to this friend of mine for sharing this cool trick that gives me the stuff to paste there, I would not know how to use it!". And finally "Booooo, Facebook sucks, my account got hacked".

I remember the internet making fun of a girl who believed she had been recruited into some secret police because she popped open the dev console. That's just normal people; it's not uncommon.

I trust that actual developers can find their way around blocks and warnings, which nevertheless raise the bar for social engineering.


If Facebook made use of smart OCAP (object-capability) practices, none of that would be possible. Using object identity as a "security key" would prevent code that doesn't have access to the "key" object from succeeding.


What? What's OCAP? Object identity? What does that have to do with XSS? Links, references, sources?


The developer console has access to everything the origin does. If the user can do it, the console can do it.


Not necessarily, because you can limit how certain values are accessed through scoping. In JavaScript, if a key is saved in a variable inside an anonymous function, it's inaccessible to somebody whose console sits at the global scope of the document.
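A minimal sketch of that scoping claim, using a hypothetical "vault" object: a value captured inside an immediately-invoked function is not reachable from the enclosing (global) scope.

```javascript
const vault = (function () {
  var key = "shhhhh"; // lives only in this closure
  return {
    // Only this method closes over `key`; nothing else can read it.
    check: function (guess) { return guess === key; }
  };
})();

console.log(typeof key);            // "undefined": `key` is not a global
console.log(vault.key);             // undefined: not exposed as a property
console.log(vault.check("shhhhh")); // true: only via the sanctioned API
```

Nothing outside the closure can enumerate or read `key`; the only operation exposed is the comparison the author chose to publish.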


    > fn = function(){ var key = "shhhhh" }
    > fn.toString()
    'function (){ var key = "shhhhh" }'
Nothing is sacred in JS.


A very nifty trick!

But it requires that the value be hardcoded inside the function. If it were handed to an unreachable scope by some async action (like an AJAX request), this trick wouldn't work.
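A sketch of that point, with a direct call standing in for an AJAX callback (an assumption): the secret arrives as a parameter, so the checking function's source text never contains it.

```javascript
let check;
function receiveSecret(secret) {
  // `secret` lives only in this closure; it was never hardcoded.
  check = (guess) => guess === secret;
}
receiveSecret("shhhhh"); // imagine this invoked by an XHR response handler

// Reading the source reveals the comparison, not the value:
console.log(check.toString()); // "(guess) => guess === secret"
console.log(check("shhhhh"));  // true
```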

One could possibly also wrap the function in a native .bind call to change the output of toString() to [native code].
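The bind trick mentioned above: per the ECMAScript spec, a bound function's toString() returns a "[native code]" stub, hiding the original source text while leaving the function fully callable.

```javascript
function fn() { var key = "shhhhh"; return key; }

const hidden = fn.bind(null);
console.log(hidden.toString()); // e.g. "function () { [native code] }"
console.log(hidden());          // "shhhhh": still works when called
```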


I wonder if even that's feasibly secure though, when you have stuff like http://esprima.org that can let you fully parse the entirety of the JS on the page.

It's better to assume the console has 100% root (client-side) privileges.


But the enemy here isn't necessarily the console, it's the social attack against the console. Making it harder for the user to screw himself over is a worthwhile endeavor, and not merely "security by obscurity".


It would be harder, but the developer console has access to closures as well:

http://stackoverflow.com/a/16048707/73681


    document.getElementsByTagName('script')[0].innerText


.innerText is an MSIE extension, .textContent is a DOM standard property.


You need to explain this further because I don't see how that would prevent anything.


Right, because patching one vector out of 1000 is the right solution? Please give back your engineer card.

- Just download this file to see your gf's private messages. OK, let's remove downloads from the browser (actually, iOS did this).

- Just run this long string in the address bar to see whatever. OK, let's block the javascript: scheme in the URL bar (actually, the Android stock browser did this).

Any way you go at it, it's ineffective. The only solution is to educate. Trying to prevent idiots from harming themselves will just lead to annoyance for the non-idiots and to more sophisticated attacks, until you can't prevent them.

People who implement those dumb things disgust me. Your comment disgusted me.


> the only solution is to educate.

Exactly. The more things you do to restrict users so "it's safer for them", the more reckless and stupid they'll become - "because $security_feature will protect me" - and they won't ever learn anything. On the other hand, if we give them the freedom to make mistakes, those that do will learn from them. Instead of trying to block "self-XSS" or telling everyone "don't listen to anyone telling you to paste something into the URL bar", we should be encouraging them with "if you don't know what it does or don't trust the one who told you to do that, then don't do it - or find out what it really does." That last part is particularly important, since it encourages curiosity and that motivates learning.

I understand that many people would just want to use something and not want to learn all that much, but I feel we should also not be encouraging this "lack of thought" mentality either.



I understand that you disagree, but I can't see why you have to be rude about it.


Sorry if I offended too much; I intended only a little, because that behavior is awful and must be treated as such.

Also, if you want to learn how to do it right, look at Mozilla. They mostly do the right thing, e.g. telling the user and, in extreme cases, making them wait a short time before accepting something that may be dangerous.



I'll give back my engineer card (if I find it) if you swear not to apply for a security one :)

There's no way to make a system really secure, and as you point out there is no way to "prevent idiots from harming themselves". What you do is stop common/easy vectors and raise the effort bar/reduce conversions for the attacker.

If we require that a guy learn how his computer's threat model works just to use it, we have failed as an industry.

And to address your points: users know that downloaded files are dangerous, and Chrome warns about that. The javascript: scheme is mitigated by Chrome: if you copy-paste a URL, the initial "javascript:" is cut out. I'd love to hear the other 998 so we can open discussions about them.

Note about the use of the word idiots: I mean people who are not tech-savvy; you might be an "idiot" in this sense to one or more of: electricians, mechanics, doctors...


So, is the best next step disabling the developer console "for security reasons"?


Facebook provides a link to disable this behavior in the message that appears when you try to use the console (https://www.facebook.com/selfxss). Furthermore, a Facebook employee involved with the feature explained it on Stack Overflow: http://stackoverflow.com/a/21693931/62628

It seems that users were being duped into running malicious scripts that gave attackers control of their accounts. Sure, Facebook could be evil and not offer the option to re-enable the console, and I'm sure other sites will do exactly that until browser makers prevent it, but at this time, Facebook is not being evil. I'm not sure about Netflix.

If people are being successfully duped into running malicious scripts this way, perhaps browser developers should put a first-run warning on the dev tools saying that running code supplied there by a third party is dangerous.


> I don't believe for a second this is about helping the user - more likely is that FB and Netflix want to prevent users running scripts that add features

How does that even make sense given that Facebook clearly gives the opt-out link which is easy to use, and works? I don't believe for a second that you aren't being completely cynical.


If it was actually about helping the user, there wouldn't be an opt-out link, because the first instruction in the new "self-XSS" instructions is going to be "go to this page and opt-out - yeah, it says a bunch of scary bullshit but that's just because they don't want you to know about $feature!"

The opt-out is there to boil the frog until they can remove it in the guise of "security". I also agree with bsamuels that it provides a convenient CFAA lever to hit in the event you do run a script they don't like. ("They had to deliberately bypass our 'protection' to paste the javascript into the console!")


None of that makes sense, at all.

Asking targets to first go to the account->security settings to disable the option named "Allow my account to be hijacked if I paste malicious JavaScript", and then to paste this JavaScript, seems to me to be quite clearly less effective than the simpler "paste this JavaScript".

Furthermore I strongly suspect that compromised accounts cause more harm to Facebook's bottom line than users who are exporting their address books. Millions lost every quarter due to fraud vs... what, exactly?


I don't think they added the blocking script to actually keep people out.

Rather, it makes more sense for them to add it so that if someone does debug their site, it gives Netflix legal grounds to press charges against them for hacking the site by bypassing a security system.


By using the developer console you're accessing your own computer, and as long as you aren't circumventing DRM, or committing some other criminal act, it's not illegal. Not by my reading of the situation, anyway, but IANAL... What is the legal precedent to which you're referring?

These companies lose more money to fraud in a quarter than they have ever lost due to people exporting their movie ratings, or whatever other non-criminal acts people have been committing at the console. I don't understand why we need to posit a 2nd (more sinister) motivation.


So many times people forget the legal principle.

It's like those tacky trailers at the start of a movie: "pirating movies is illegal". The first question the judge asks is what steps you took to prevent pirating and when you notified your customers.


Well, if true it's a surprising principle... Theaters have to inform their occupants that pirating is illegal? What case set that precedent?


He may have been referring to the trailers included in DVDs and such. But here in France I think we also have piracy warnings in theatres now. As usual, the customer is the victim.


Yes, here in the UK we have adverts before the movie, and messages that say "You may not record with any device", or something to that effect.


It seems to me it's just laziness on the part of the programmers working on the front end that leads them to need this. The code should be designed with the idea that the end user will run arbitrary JS of all kinds. If it can't handle that without negative side effects, it has huge problems. I hope Chrome fixes this bug once and for all and takes steps to prevent user code from overwriting window.console.
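A sketch of the problem described above: today, any script on the page (in Node as well as in browsers) can freely replace console methods, which is exactly how a site can neuter or booby-trap the console for its visitors.

```javascript
const realLog = console.log;
const swallowed = [];
console.log = (...args) => { swallowed.push(args.join(" ")); }; // hijacked

console.log("secret debugging output"); // captured, never printed
console.log = realLog;                  // restore the original

realLog("intercepted messages:", swallowed.length); // prints 1
```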


Same thing happens in video games with anti-cheat systems.



