The idea that root has absolute privileges, or even that it is, like in the olden Linux days, kernel-equivalent (through /dev/kmem, ioperm etc.), is outdated.
Nowadays the focus is much more on giving every piece only the privileges it needs, rather than having one "absolute privilege" entity. (Outside of the kernel, although even for the kernel that's less and less true.)
As for not planning to run untrusted code in any of those pieces: it's bad if the compromise of any component through a bug, however tiny, potentially leads to the ability to siphon off secrets from your server globally.
> The idea that root has absolute privileges, or even that it is, like in the olden Linux days, kernel-equivalent (through /dev/kmem, ioperm etc.), is outdated.
No. A million times no. Absolutely no. It is not. You are doing a disservice to actual real people when you take away what they own.
I own my hardware. I am root. I have absolute privileges. The idea that root has absolute privileges is the core of why I use Linux.
You own the hardware, which means you have the ability to boot the system into diagnostic or repair modes for extended privileges. This doesn't mean that during normal operation there should be a superuser account with 100% access. Plan9 from Bell Labs pioneered (edit: they probably weren't the first, but it's the first OS I can think of that does this pervasively -- I bet mainframe systems are similar but that ain't my cup of tea) the idea back in the 90s, where certain processes (like the credential store factotum) were sealed off even from the root equivalent.
If you had physical access, you could add a flag at boot to enable access to these services, but this was not enabled by default, which is good.
Had you actually read the manpage, you'd realize that "root" has full capabilities, except where you configure it not to for specific processes or under specific conditions that are under your control, which is mostly for situations around setuid-0 binaries.
The simple way is to use a process that has root for the smallest moment before dropping it. Which means it's vulnerable to a tiny subset of attacks, and it will never be root and serving requests at the same time.
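A minimal sketch of that pattern in Python (the unprivileged account name is an assumption): bind while still privileged, then irreversibly drop to an unprivileged uid before ever handling a request.

```python
import os
import pwd
import socket

def bind_then_drop(port, unpriv_user="nobody"):
    """Bind a (possibly privileged) port, then drop root before serving."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", port))  # needs root/CAP_NET_BIND_SERVICE if port < 1024
    s.listen(16)
    if os.geteuid() == 0:
        pw = pwd.getpwnam(unpriv_user)
        os.setgroups([])      # shed supplementary groups first
        os.setgid(pw.pw_gid)  # gid before uid, or we lose the right to change it
        os.setuid(pw.pw_uid)  # irreversible: never root while serving requests
    return s
```

From that point on, an exploit in the request-handling code only gains the unprivileged user, not root.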
Yes, that solves one of the problems, but it doesn't solve the problem where I want to prevent my process from binding to any other port than (say) specifically port 200.
Your comment chain has taken a bit of a strange turn, but a suitable SELinux policy would work for that, which would be straightforward… ish.
Alternatively, if you're only concerned about other privileged ports, with a systemd socket unit, you could have systemd do the binding and pass your unprivileged process the socket, so that your process no longer needs the privileged-bind capability at any time; that'd probably be the easiest way if you're willing to modify the application code a bit. If you don't like systemd, you could have another helper program do it.
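A sketch of the receiving end of systemd socket activation (unit names and the fallback port are assumptions): systemd binds the privileged port and passes the listening socket to the service, which therefore never needs the capability itself.

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # systemd passes activated sockets starting at fd 3

def get_listen_socket(fallback_port=8200):
    """Use a socket inherited via systemd socket activation, if present.

    The matching (hypothetical) units would look roughly like:
        # myapp.socket           # myapp.service
        [Socket]                 [Service]
        ListenStream=200         ExecStart=/usr/local/bin/myapp
    """
    if int(os.environ.get("LISTEN_FDS", "0")) >= 1:
        # systemd already bound the port for us.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    # Fallback for running outside systemd: unprivileged high port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", fallback_port))
    s.listen(16)
    return s
```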
Or you could put your process in its own network namespace which has a filtered bridging or routing configuration between it and whichever namespace has the ‘real’ connectivity, which is complex and awkward but allows your process to ‘think’ it has full access, which might help if the code would otherwise balk.
Or you could use various other userspace sandboxing frameworks that let you install a BPF syscall filter, which I like a lot in theory but don't currently have personal experience with.
> And if you are building an executable with that capability, how do you enable your Makefile to set it?
The goal is to bind without passing through root, not to never have root at all, so you could run the service with it as an ambient capability (supported by e.g. systemd). Or setcap as root after building.
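For example (paths and unit name are assumptions), either grant the file capability once after the build, or have systemd hand the capability to the service ambiently:

```
# Option 1: file capability, run as root after `make`:
setcap cap_net_bind_service=+ep ./myserver

# Option 2: ambient capability via a systemd service unit:
[Service]
ExecStart=/usr/local/bin/myserver
AmbientCapabilities=CAP_NET_BIND_SERVICE
User=myserver
```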
> In fact, I might want to allow a user to bind to port 200 only.
Program with the capability on its file to check the user, bind the socket, drop the capability, and exec. (I’m sure something existing does that, but I don’t have one to recommend since I’ve never needed it.)
But there’s also iptables and friends as alternative solutions to the same problem that still don’t require giving the process root at any point.
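For instance (port numbers are assumptions), a NAT redirect lets the process listen unprivileged on a high port while the world sees the low one:

```
# Listen unprivileged on 8200, present it externally as port 200:
iptables -t nat -A PREROUTING -p tcp --dport 200 -j REDIRECT --to-ports 8200
```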
Yep! If you’re using file capabilities (setcap), it doesn’t look quite like that – you use setcap on the file separately, and then nohup when running it.
You should probably run production services using something other than nohup, though, like systemd.
No, but someone who finds even an unprivileged path to execute code on your server can fully control root on it if Spectre is carried out successfully.
ROP is a strategy to leverage an existing vulnerability to do more so it’s not really language specific. It’s a question of whether you can find a vuln in your JVM or any native code it or your app calls out to.
Chrome worked to re-enable precise timers after implementing a number of mitigation strategies. They could always lower precision again, if necessary. Though it's unfortunate how much progress and performance we've lost to Spectre.
Interestingly the Chromium blog posted about mitigating side-channel attacks just earlier today.
Well, this steps up the game quite a lot. I’d considered the CPU attacks relatively unconcerning because they required executing code locally. Being able to execute from a web page makes for a much broader attack vector.
You have to specify a whole lot more context if you talk about executing code "locally" vs. executing "from a web page".
JavaScript usually gets JIT'ed into machine language code. That code gets executed locally. There are usually a bunch of differences in terms of form and restrictions around that code, but this might well be a case where most of those differences don't matter.
Or in other words, browsers are just compilers/interpreters like any other, albeit very hardened ones because of the large exposure of untrustworthy code they are subject to. But if an attack fundamentally skirts around sandboxes, the browser sandbox won't help.
This is not new and it's half the reason I browse the web in icecat/elinks. The other half being that most of the javascript out there is written without regard for resource consumption.
Presumably the different kind of "security by obscurity" that comes with using software that nobody else uses and thus nobody will spend resources targeting
IceCat disables JavaScript that isn't in its database and isn't written in a non-recursive subset of the language. Each chunk of JavaScript has to be reviewed and explicitly enabled by the user before executing.
While this POC may not reliably work on Safari, it is worth noting that from a defense perspective Safari is missing site-isolation (which Chrome, Edge, and soon Firefox all have). So if an attacker were to get this to work on Safari, the impact would be potentially much greater in terms of what could be stolen.
So given that the only mitigation to Spectre is to isolate each website into its own underlying OS process, it seems very important to know which browsers are doing that.
Chrome is doing it. What about Firefox and Safari? What about Edge? Do they implement site isolation?
I'm not able to get it to work in firefox either, but it feels like "reduced timer precision" shouldn't be sufficient to stop this particular attack. The authors of the demo here claim to be able to "amplify timing differences" by repeatedly hitting or missing cache and measuring the combined total time.
My searching indicates firefox's performance.now precision is 20us, but the demo claims to be able to get timing differences of 2ms pretty easily.
EDIT: 20us precision might be outdated... running this repeatedly on osx and linux, I'm distinctly getting my results "bucketed" into whatever the unit on the x-axis is. Based on that, I'm guessing timer precision on firefox is actually 1ms.
Increasing the cache timer repetitions gets me to a smooth graph, but still no separation of the peaks.
The fact that I can get the graph to go from discrete buckets to smooth normal curves makes me think this can indeed overcome reduced timer precision
The fact that I can't get the peaks to separate makes me wonder if some other mitigation is protecting me.
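The amplification trick the comments above describe can be sketched generically (here in Python, with the coarse timer and the two operations as stand-ins for cache hits and misses): instead of timing one access, time N of them so the total comfortably exceeds the timer's resolution.

```python
import time

def amplified_time(op, reps):
    """Average per-op time, measured with a coarse timer over many reps.

    A timer with resolution R can't distinguish two single operations
    differing by d < R, but over N reps the totals differ by N*d, which
    becomes visible once N*d >> R. This is what the demo's "cache timer
    repetitions" knob controls.
    """
    t0 = time.monotonic()
    for _ in range(reps):
        op()
    return (time.monotonic() - t0) / reps
```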
It is worth noting that this POC was specifically targeted at Chromium based browsers. To quote the blog post, they also developed "a PoC which leaks data at 60B/s using timers with a precision of 1ms or worse". So Firefox's protections are likely not sufficient to mitigate all Spectre attacks.
Chromium has site-isolation (with some caveats around phones with limited resources) so both Chrome and Edge have site-isolation. Firefox is getting very close with Project Fission [1] and I predict they'll ship it relatively soon. Currently Safari doesn't have site-isolation and AFAIK they have not publicly committed to anything in terms of getting there. They have done some work in this space (search around for Process Swap on Navigation (PSON)) but it isn't complete.
Oye. So that means that in Safari, it's possible for any site to run some Spectre Javascript to read the cookies and passwords for any other site that happens to be running in the user's browser, and then log in as that user on other sites.
No, the attack always works, whether there's an isolated process or not. In Chrome's design you shouldn't be able to access any data of value with the attack, that is data from other sites (like cookies) or privileged data. I don't know if that's indeed true or not in Chrome, but that's why it was designed that way.
Chrome's design ensures that Spectre can only access resources that end up in an attacker controlled process. And this [1] post on "Post-Spectre Web Development" goes into detail about how a given website can ensure that its resources don't end up in an attacker controlled process. There are also a number of default protections against this like SameSite cookies and CORB that protect some resources by default.
No, the POC only shows the script leaking memory into javascript running within the same process, and thus the same site. Chrome is still preventing the info from leaking across sites.
The big caveat to this is that an attacker can generally get a browser to include a cross-site resource in their process. For example, `<img src="https://sensitive.com/myprofilepic.png">` will cause the image to be loaded in the attacker's process where they can then potentially steal it. The article "Post-Spectre Web Development" goes into details on how sites can defend against this (and other vectors).
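One concrete defense from that article: serve sensitive resources with a Cross-Origin-Resource-Policy response header, which tells the browser to refuse to load them into cross-origin documents, and thus keeps them out of an attacker-controlled process:

```
Cross-Origin-Resource-Policy: same-origin
```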
I may be wrong about this, and about specifically how exposed browsers are to Spectre, but the only real mitigation here, since protected memory can be accessed through the same mechanism, is disabling branch prediction and CPU caches, or barring those caches from being reused across threads or execution contexts.
Or completely redesigning those aspects of CPU behavior to remove the ability for similar timing attacks.
No. Process isolation still works, assuming your CPU is not broken. The real mitigation is that going forwards, you have to assume that any attacker-controlled code always has full read access to the entire process it runs in, and you need to architect your systems so that this does not result in any bad things. It is entirely possible to do this.
You can still have branch prediction and caches, which is a very good thing, because not having those would cut the performance of modern CPUs to less than 1% of what it currently is.
Or just not accepting and evaling arbitrary code from every single website the user visits, including ones that should only be static documents or forms.
That this is generally considered ok boggles the mind. That browser vendors have made it difficult for users to disable this is insane! Even MS Internet Explorer gave users at least that security tool!
The modern web doesn't work without JavaScript. It's as simple as that.
People like you who want to turn JS off are a very small niche. And you have better solutions: run your browser on a remote server using RDP or VNC. I think it may be equally safe, and you may actually have a larger chunk of the web working for you.
If we didn't complain about Java applets, we still would have Java applets.
If we didn't complain about Flash, we still would have Flash.
Yesterday, however, there were people saying, "that's how it is, deal with it", or "But, but... without it we can't have this and that", or "The ship has sailed, get over it".
Today, we have reasons to complain about JS. Yes, it enhances interactivity, but it is also abused. There's a lot of unwanted interactivity (the type of one-way interactivity that lets a server know more than necessary about its clients).
Note that I limited what I was talking about to static documents and forms. Executable code should be a permission the page has to explicitly request from the user like notifications and microphones.
Considering the increasing sandboxing and protections I think we're getting closer to browsers in VMs already. Someday I can see a permission prompt to allow performance sensitive sites lower level access.
Not sure if it's that simple. Chrome (89.0.4389.90 64-bit) runs the tests but leaks memory for me. Firefox (86.0 64-bit) runs the tests but every one of the tests fails. Brave (1.5.112 64-bit) doesn't even run the tests. I'm on Linux (5.11.4-arch1-1).
Meltdown and Spectre are serious. But we need clarity on what they do and do not threaten. To address the widespread confusion on this topic, and to demonstrate a completely different approach to mitigating these, we wrote:
Enjoyed the article. Thanks for sharing. It leans to the scientific side, but that's exactly what I like to see when it comes to system analysis and security.
I often bring this up in HTTP vs HTTPS conversations. It's not about what CAs you trust, as that's a policy decision you can make on your own devices. It's about knowing (through whatever CAs you trust), what the origin of the code you're going to execute on your device is. It's about knowing that your ISP isn't injecting extra JavaScript into your page requests. This isn't hypothetical, it's literally happening right now.
When the people injecting JavaScript are interested in exploits rather than dumb ISP value added services and notifications, it becomes more obvious that running code from untrusted sources, even if it's sandboxed, is dangerous.
Definitely OS patches. Currently neither Intel nor AMD has Spectre-free CPUs, and one won't be on the market for a few years. Until then it's OS patches as protection, and you know very well the history of those :).
Ironically, I could only get this to work in Edge (the new Edge which is Chromium underneath) by turning down the "Cache timer repetitions", the thing which is supposed to make timing differences more discernible. At only 500, it works every time. If I crank it up to 5000, it doesn't work at all.
From what I understand, Firefox has a lower precision timer, which means you need to do more cache timer repetitions; try increasing it to 400,000, it should look more like the demo (though it still doesn't generate two entirely separate curves).
Linus said he wanted the workarounds disabled by default, why didn't anyone listen?
I'm not browsing javascript sites on my linux server!?
If the server is compromised, they don't need to use Meltdown/Spectre to do damage, since servers need root for everything useful (opening ports <1024)?!