Hacker News
Spectre in JavaScript (leaky.page)
229 points by jeffbee on March 12, 2021 | hide | past | favorite | 85 comments


To me this changes nothing about the ~30% CPU wasted by default on new Linux distributions.

Linus said he wanted the workarounds disabled by default, why didn't anyone listen?

I'm not browsing javascript sites on my linux server!?

If the server is compromised they don't need to use meltdown/spectre to do damage since servers need root for everything useful (open port <1024)?!


The idea that root has absolute privileges, or even that it is, like in the olden Linux days, kernel-equivalent (through /dev/kmem, ioperm etc.), is outdated.

Nowadays, the focus going forward is much more on giving every piece the privileges it needs, rather than having an "absolute privilege" entity. (Outside of the kernel, although even for the kernel it's less and less true.)


As for not planning to run untrusted code in any of those pieces: it's bad if the compromise of any component, however tiny, through a bug potentially leads to the ability to siphon off secrets from your server globally.


> The idea that root has absolute privileges, or even that it is, like in the olden Linux days, kernel-equivalent (through /dev/kmem, ioperm etc.), is outdated.

No. A million times no. Absolutely no. It is not. You are doing a disservice to actual real people when you take away what they own.

I own my hardware. I am root. I have absolute privileges. The idea that root has absolute privileges is the core of why I use Linux.


Don't panic, just read `man capabilities` and breathe deeply.


No. `root` should always have _full_ capabilities. Capabilities are there to allow non-root users access to some root-only abilities.


You own the hardware, which means you have the ability to boot the system into diagnostic or repair modes for extended privileges. This doesn't mean that during normal operation there should be a superuser account with 100% access. Plan9 from Bell Labs pioneered (edit: they probably weren't the first, but it's the first OS I can think of that does this pervasively -- I bet mainframe systems are similar but that ain't my cup of tea) the idea back in the 90s, where certain processes (like the credential store factotum) were sealed off even from the root equivalent.

If you had physical access, you could add a flag at boot to enable access to these services, but this was not enabled by default, which is good.


VMS is one example, not only mainframes.

Which is why NT was designed from the ground up with capabilities.


Had you actually read the manpage you'd realize that "root" has full capabilities, except where you configure it not to for specific processes or under specific conditions that are under your control, which is mostly for situations around setuid-0 marked binaries.


You are right I think. No idea why you're getting downvoted.


> If the server is compromised they don't need to use meltdown/spectre to do damage since servers need root for everything useful (open port <1024)?!

- there are lots of useful things you can do without root

- you don’t need root to bind to port numbers below 1024 on Linux, just CAP_NET_BIND_SERVICE

- when you do need root, you can usually drop it either partially or completely
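As one illustration of the second point (a hedged sketch; the unit name and paths are made up), a systemd service can be granted exactly that one capability and nothing more:

```ini
# /etc/systemd/system/myserver.service (hypothetical)
[Service]
User=myserver
ExecStart=/usr/local/bin/myserver
# grant only the privileged-bind capability to this non-root process
AmbientCapabilities=CAP_NET_BIND_SERVICE
# optionally forbid it from ever acquiring anything else
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
```

The process then runs as an ordinary user but can still bind port 80.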


> just CAP_NET_BIND_SERVICE

How do you do that if you're not root?

And if you are building an executable with that capability, how do you enable your Makefile to set it?

I think we might need "CAP_SET_CAP_NET_BIND_SERVICE" as well.

In fact, I might want to allow a user to bind to port 200 only.

So in this case "CAP_SET_CAP_NET_BIND_SERVICE_PORT_200" is needed.

And here we see the inflexibility of the entire approach :)


The simple way is to use a process that has root for the smallest moment before dropping it. Which means it's vulnerable to a tiny subset of attacks, and it will never be root and serving requests at the same time.
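A rough Python sketch of that pattern (the uid/gid values are placeholders, and the usage line binds an unprivileged port so the sketch also runs without root):

```python
import os
import socket

def bind_then_drop(port, uid=65534, gid=65534):
    """Bind a (possibly privileged) port first, then irreversibly drop root."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))  # needs root or CAP_NET_BIND_SERVICE if port < 1024
    s.listen(16)
    if os.geteuid() == 0:
        os.setgid(gid)  # drop the group first, while we still can
        os.setuid(uid)  # after this, the process can never regain root
    return s

# By the time the process serves its first request, root is long gone.
srv = bind_then_drop(0)  # port 0 = any free port, so this runs unprivileged too
```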


Yes, that solves one of the problems, but it doesn't solve the problem where I want to prevent my process from binding to any other port than (say) specifically port 200.


Your comment chain has taken a bit of a strange turn, but a suitable SELinux policy would work for that, which would be straightforward… ish.

Alternatively, if you're only concerned about other privileged ports, with a systemd socket unit, you could have systemd do the binding and pass your unprivileged process the socket, so that your process no longer needs the privileged-bind capability at any time; that'd probably be the easiest way if you're willing to modify the application code a bit. If you don't like systemd, you could have another helper program do it.

Or you could put your process in its own network namespace which has a filtered bridging or routing configuration between it and whichever namespace has the ‘real’ connectivity, which is complex and awkward but allows your process to ‘think’ it has full access, which might help if the code would otherwise balk.

Or you could use various other userspace sandboxing frameworks that let you install a BPF syscall filter, which I like a lot in theory but don't currently have personal experience with.


Make systemd bind to that port and let it pass the open fd to your process. See socket-activated services.

http://0pointer.de/blog/projects/socket-activation.html
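Sketched as a pair of units (names and paths hypothetical):

```ini
# myserver.socket -- systemd itself binds the privileged port
[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# myserver.service -- runs unprivileged; receives the already-open socket as fd 3
[Service]
User=myserver
ExecStart=/usr/local/bin/myserver
```

The service never holds the bind privilege at any point; it just inherits an open file descriptor.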


> How do you do that if you're not root?

> And if you are building an executable with that capability, how do you enable your Makefile to set it?

The goal is to bind without passing through root, not to never have root at all, so you could run the service with it as an ambient capability (supported by e.g. systemd). Or setcap as root after building.

> In fact, I might want to allow a user to bind to port 200 only.

Program with the capability on its file to check the user, bind the socket, drop the capability, and exec. (I’m sure something existing does that, but I don’t have one to recommend since I’ve never needed it.)

But there’s also iptables and friends as alternative solutions to the same problem that still don’t require giving the process root at any point.
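A hedged sketch of the bind-then-hand-off part in Python (minus the user check and capability drop, and using an unprivileged ephemeral port so it runs anywhere):

```python
import socket
import subprocess
import sys

# Parent: the only piece that would ever need the bind privilege.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # a privileged helper would bind port 200 here
listener.listen(1)
listener.set_inheritable(True)   # let the fd survive into the exec'd child

# Child: an unprivileged worker that inherits the already-bound socket.
worker = subprocess.run(
    [sys.executable, "-c",
     "import socket, sys; "
     "s = socket.socket(fileno=int(sys.argv[1])); "
     "print(s.getsockname()[1])",
     str(listener.fileno())],
    close_fds=False, capture_output=True, text=True)

inherited_port = int(worker.stdout)  # same port the parent bound
```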


Ok, thx!

Can you combine that with nohup?

nohup setcap 'cap_net_bind_service=+ep' blabla...


Yep! If you’re using file capabilities (setcap), it doesn’t look quite like that – you use setcap on the file separately, and then nohup when running it.

You should probably run production services using something other than nohup, though, like systemd.


No, but someone who finds even an unprivileged path to execute code on your server can fully control root on it if Spectre is carried out successfully.


And what happens if they work out enough about your server to use ROP?


Would Java be vulnerable to ROP?


ROP is a strategy to leverage an existing vulnerability to do more so it’s not really language specific. It’s a question of whether you can find a vuln in your JVM or any native code it or your app calls out to.


Well, if they find a vulnerability in the most used piece of code in existence (the JVM), then I'm pretty sure it will get patched, no?


Depends: if it's in its Android equivalent, ART, it won't matter.


Distros probably want to fail safe. That said, most desktop installs are probably single user, so it should be safe to do there.


Wouldn't exploits from JavaScript specifically mean that single-user desktops aren't necessarily safe?


Yeah, I didn't recall the Javascript case until it was too late. Though I thought all browsers had reduced their timing precision to mitigate that.


Chrome worked to re-enable precise timers after implementing a number of mitigation strategies. They could always lower precision again, if necessary. Though it's unfortunate how much progress and performance we've lost to Spectre.

Interestingly the Chromium blog posted about mitigating side-channel attacks just earlier today.

https://blog.chromium.org/2021/03/mitigating-side-channel-at...

The Google Security blog also has a writeup on this PoC.

https://security.googleblog.com/2021/03/a-spectre-proof-of-c...


Well, this steps up the game quite a lot. I'd considered the CPU attacks relatively unconcerning because they required executing code locally. Being able to execute from a web page makes for a broader attack vector.


You have to specify a whole lot more context if you talk about executing code "locally" vs. executing "from a web page".

JavaScript usually gets JIT'ed into machine language code. That code gets executed locally. There are usually a bunch of differences in terms of form and restrictions around that code, but this might well be a case where most of those differences don't matter.

Or in other words, browsers are just compilers/interpreters like any other, albeit very hardened ones because of the large exposure of untrustworthy code they are subject to. But if an attack fundamentally skirts around sandboxes, the browser sandbox won't help.

It's Turing machines all the way down.


JavaScript based examples are shown in the Spectre paper.


This is not new and it's half the reason I browse the web in icecat/elinks. The other half being that most of the javascript out there is written without regard for resource consumption.


I've also found that accessibility and content quality are strongly correlated, saving me reading time.


What protections are unique to IceCat?


Presumably the different kind of "security by obscurity" that comes with using software that nobody else uses and thus nobody will spend resources targeting


As far as I can tell, IceCat uses the same rendering and Javascript engines as Firefox, so it'll have the same security issues as Firefox.


IceCat disables JavaScript that it doesn't have in a database or isn't written in a non-recursive subset of the language. Each chunk of JavaScript has to be reviewed and explicitly enabled by the user before executing.


This doesn't run in Safari on an M1. I'm getting this error in the JS console:

    [Error] RuntimeError: Out of bounds memory access (evaluating 'this.wasm.exports.oscillateTreePLRU2')
       <?>.wasm-function[2] (memory_frame.html:78)
       wasm-stub
       oscillateTreePLRU2
       _timeL1 (memory_frame.html:78)
       leakMeTestSet (memory_frame.html:146)
       Global Code (memory_frame.html:165)


Same error on my Intel MacBook running Safari


Yep, same here: Safari on an Intel MacBook. However, it runs perfectly in Chrome. More reason to use Safari, I guess


While this POC may not reliably work on Safari, it is worth noting that from a defense perspective Safari is missing site-isolation (which Chrome, Edge, and soon Firefox all have). So if an attacker were to get this to work on Safari, the impact would be potentially much greater in terms of what could be stolen.


Did you select the "M1 CPU" checkbox? Just making sure


So given that the only mitigation to Spectre is to isolate each website into its own underlying OS process, it seems very important to know which browsers are doing that.

Chrome is doing it. What about Firefox and Safari? What about Edge? Do they implement site isolation?


Firefox reduced the timer precision, so this demo does not work. https://developer.mozilla.org/en-US/docs/Web/API/Performance...

On Chrome I have "too many false negatives" on 1st gen Ryzen.


I'm not able to get it to work in firefox either, but it feels like "reduced timer precision" shouldn't be sufficient to stop this particular attack. The authors of the demo here claim to be able to "amplify timing differences" by repeatedly hitting or missing cache and measuring the combined total time.

My searching indicates Firefox's performance.now precision is 20us, but the demo claims to be able to get timing differences of 2ms pretty easily.

EDIT: 20us precision might be outdated... running this repeatedly on osx and linux, I'm distinctly getting my results "bucketed" into whatever the unit on the x-axis is. Based on that, I'm guessing timer precision on firefox is actually 1ms.

Increasing the cache timer repetitions gets me to a smooth graph, but still no separation of the peaks.

The fact that I can get the graph to go from discrete buckets to smooth normal curves makes me think this can indeed overcome reduced timer precision

The fact that I can't get the peaks to separate makes me wonder if some other mitigation is protecting me.
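For what it's worth, the amplification argument can be sketched with made-up numbers (the latencies below are illustrative, not measured):

```python
RES_NS = 1_000_000  # a 1 ms timer, expressed in nanoseconds

def coarse(t_ns):
    """What a 1 ms-resolution timer would report for a true duration of t_ns."""
    return (t_ns // RES_NS) * RES_NS

HIT_NS, MISS_NS = 5, 50  # hypothetical cache-hit vs. cache-miss latencies

# A single access is invisible: the coarse timer reads 0 either way.
assert coarse(HIT_NS) == coarse(MISS_NS) == 0

# Time 100,000 repetitions as one batch and the difference survives the timer:
N = 100_000
print(coarse(N * HIT_NS), coarse(N * MISS_NS))  # 0 vs 5000000 ns
```

So reduced precision alone only raises the cost of the attack (more repetitions per leaked bit); it doesn't eliminate the signal.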


It is worth noting that this POC was specifically targeted at Chromium-based browsers. To quote the blog post, they also developed "a PoC which leaks data at 60B/s using timers with a precision of 1ms or worse". So Firefox's protections are likely not sufficient to mitigate all Spectre attacks.


Chromium has site-isolation (with some caveats around phones with limited resources) so both Chrome and Edge have site-isolation. Firefox is getting very close with Project Fission [1] and I predict they'll ship it relatively soon. Currently Safari doesn't have site-isolation and AFAIK they have not publicly committed to anything in terms of getting there. They have done some work in this space (search around for Process Swap on Navigation (PSON)) but it isn't complete.

[1]: https://wiki.mozilla.org/Project_Fission


Oye. So that means that in Safari, it's possible for any site to run some Spectre Javascript to read the cookies and passwords for any other site that happens to be running in the user's browser, and then log in as that user on other sites.

That's pretty bad.


The proof of concept is in Chrome, so it appears Chrome's mitigations are not sufficient.


No, the attack always works, whether there's an isolated process or not. In Chrome's design you shouldn't be able to access any data of value with the attack, that is, data from other sites (like cookies) or privileged data. I don't know if that's indeed true in Chrome, but that's why it was designed that way.


Chrome's design ensures that Spectre can only access resources that end up in an attacker controlled process. And this [1] post on "Post-Spectre Web Development" goes into detail about how a given website can ensure that its resources don't end up in an attacker controlled process. There are also a number of default protections against this like SameSite cookies and CORB that protect some resources by default.

[1]: https://w3c.github.io/webappsec-post-spectre-webdev/


No, the POC only shows the script leaking memory into javascript running within the same process, and thus the same site. Chrome is still preventing the info from leaking across sites.


The big caveat to this is that an attacker can generally get a browser to include a cross-site resource in their process. For example, `<img src="https://sensitive.com/myprofilepic.png">` will cause the image to be loaded in the attacker's process where they can then potentially steal it. The article "Post-Spectre Web Development" goes into details on how sites can defend against this (and other vectors).
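In practice, those defenses are mostly opt-in response headers. A site serving sensitive resources might send something like (a sketch, not a complete policy):

```
Cross-Origin-Resource-Policy: same-origin
X-Content-Type-Options: nosniff
Cross-Origin-Opener-Policy: same-origin
```

With `Cross-Origin-Resource-Policy: same-origin`, the cross-site `<img>` load above is refused before the bytes ever reach the attacker's process.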


That's why there was the recent W3C draft about Spectre and the policies around which sites can frame other sites.


I may be wrong about this, and about specifically how exposed browsers are to Spectre, but since protected memory can be accessed through the same mechanism, the only real mitigation here is disabling branch prediction and CPU caches, or barring those caches from being reused across threads or execution contexts.

Or completely redesigning those aspects of CPU behavior to remove the ability for similar timing attacks.


No. Process isolation still works, assuming your CPU is not broken. The real mitigation is that going forwards, you have to assume that any attacker-controlled code always has full read access to the entire process it runs in, and you need to architect your systems so that this does not result in any bad things. It is entirely possible to do this.

You can still have branch prediction and caches, which is a very good thing, because not having those would cut the performance of modern CPUs to less than 1% of what it currently is.


Or just not accepting and evaling arbitrary code from every single website the user visits, including ones that should only be static documents or forms.

That this is generally considered ok boggles the mind. That browser vendors have made it difficult for users to disable this is insane! Even MS Internet Explorer gave users at least that security tool!


The modern web doesn't work without JavaScript. It's as simple as that.

People like you who want to turn JS off are a very small niche. And you have better solutions: run your browser on a remote server using RDP or VNC. I think it may be equally safe, and you may actually have a larger chunk of the web working for you.


The "modern web" does not want to work without JS. It definitely could work without JS for the most part.


I don’t know about that. Google Maps (or any “slippy” map) couldn’t work. Instant messaging couldn’t work. Video chat wouldn’t work. Rich document editing wouldn’t work.

I get that some people would like that because they think all those things belong in native apps, but that’s not where the “modern web” is today.


If we didn't complain about Java applets, we still would have Java applets.

If we didn't complain about Flash, we still would have Flash applets.

Yesterday, however, there were people saying "that's how it is, deal with it" or "But, but... Without it we can't have this and that" or "The ship has sailed, get over it".

Today, we have reasons to complain about JS. Yes, it enhances interactivity, but it is also abused. There's a lot of unwanted interactivity (the type of one-way interactivity that lets a server know more than necessary about its clients).


Note that I limited what I was talking about to static documents and forms. Executable code should be a permission the page has to explicitly request from the user like notifications and microphones.


Considering the increasing sandboxing and protections I think we're getting closer to browsers in VMs already. Someday I can see a permission prompt to allow performance sensitive sites lower level access.


Chrome desktop does actually allow you to run with JS disabled and allow per site with only a few clicks.


Not only that, but Google search, Gmail, and other Google services will still work without JavaScript.


Not sure if it's that simple. Chrome (89.0.4389.90 64-bit) runs the tests but leaks memory for me. Firefox (86.0 64-bit) runs the tests but every one of the tests fails. Brave (1.5.112 64-bit) doesn't even run the tests. I'm on Linux (5.11.4-arch1-1).


Meltdown and Spectre are serious. But we need clarity on what they do and do not threaten. To address the widespread confusion on this topic, and to demonstrate a completely different approach to mitigating these, we wrote:

https://agoric.com/taxonomy-of-security-issues/ and https://ses-demo.netlify.app/demos/challenge/


Enjoyed the article, thanks for sharing. It leans to the scientific side, but this is exactly what I like to see when it comes to system analysis and security.


This is effectively a demo for https://news.ycombinator.com/item?id=26436515.


It's actually a demo for it. It's linked from the post.


Now I am become Death.js, the destroyer of worlds


I often bring this up in HTTP vs HTTPS conversations. It's not about what CAs you trust, as that's a policy decision you can make on your own devices. It's about knowing (through whatever CAs you trust), what the origin of the code you're going to execute on your device is. It's about knowing that your ISP isn't injecting extra JavaScript into your page requests. This isn't hypothetical, it's literally happening right now.

When the people injecting JavaScript are interested in exploits rather than dumb ISP value added services and notifications, it becomes more obvious that running code from untrusted sources, even if it's sandboxed, is dangerous.


I think you are confusing HTTP, which is a protocol, with the browser, which does more than just HTTP.

I'm using HTTP securely just fine when I connect to it with my own client and my own encryption!

Servers don't use browsers.

We don't need HTTPS, we need less complexity and HTTP is just fine for transport!


Friday afternoon, I was really kind of looking forward to https://en.wikipedia.org/wiki/Spectre_(1991_video_game)


This was my first thought too. :/


Same here, this article about the designer of the Spectre box put it at the top of mind...

https://obscuritory.com/essay/incredible-boxes-of-hock-wah-y...


I didn't experience the issue on my Threadripper2 with Windows 10. Is that because of OS patches or because my CPU is not affected?


Definitely OS patches. Currently neither Intel nor AMD has Spectre-free CPUs, and one won't be on the market for a few years. Until then it's OS patches as protection, and you know very well the history of those :).


Ironically, I could only get this to work in Edge (the new Edge which is Chromium underneath) by turning down the "Cache timer repetitions", the thing which is supposed to make timing differences more discernible. At only 500, it works every time. If I crank it up to 5000, it doesn't work at all.


This doesn't work on my PinePhone, although my hands are pretty sweaty now.


Tested this page in 3 different systems (1 AMD, 2 Intel) and it didn't work under Firefox (which is good).


Attempting to take those cache timings in FF doesn't result in anything like the demo.

https://a.pomf.cat/gawfxu.png


From what I understand, Firefox has a lower precision timer, which means you need to do more cache timer repetitions; try increasing it to 400,000, it should look more like the demo (though it still doesn't generate two entirely separate curves).


Doesn't do anything in Firefox on a Pixel 3.



