I'm curious, would this be possible in Linux? What would the kernel <-> {xscreensaver,gdm,lightdm,gnome-screensaver,etc} communication work like? How would the pathway be secure from an attacker just turning DMA back on?
How does this work in OSX if a Firewire device is in the middle of a DMA operation when the OS is locked? Does the operation fail? Does the user lose data?
The correct way to implement it would be to prevent the initiation of DMA operations when the device is locked, but to let current ones continue. I have no idea if that's what happened, though.
I'm unsure how the DMA operations are structured, though. A single 'operation' from the user's perspective might involve multiple operations at a lower level, where locking in the middle might allow a single low-level atomic (in-progress) operation to complete but the overall job to fail.
I just removed the module from loading. I don't have any FireWire stuff anyway, so it's just some useless 4-pin port at the bottom left of my ThinkPad.
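On most distributions this is a small modprobe config change. A sketch, assuming the modern firewire-ohci/firewire-core module split (older kernels used ohci1394/ieee1394 instead):

```shell
# Blacklist the FireWire drivers so they never load at boot.
# Module names assume a recent kernel; check `lsmod | grep firewire` first.
cat <<'EOF' | sudo tee /etc/modprobe.d/no-firewire.conf
blacklist firewire-ohci
blacklist firewire-core
EOF

# Unload them immediately as well, no reboot needed:
sudo modprobe -r firewire-ohci firewire-core
```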
Right, but that's not what OSX is apparently doing. It's just limiting DMA operations under specific conditions. I'm wondering what that functionality would look like if implemented in Linux.
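One plausible userspace approximation, with no kernel changes: watch for the screensaver's lock signal and unload the FireWire host driver while the session is locked. This is only a sketch; it assumes a GNOME session (other lockers expose different D-Bus interfaces) and the usual firewire-ohci module name:

```shell
#!/bin/sh
# Sketch: drop the FireWire DMA path while the screen is locked.
# Assumes GNOME's org.gnome.ScreenSaver D-Bus interface; adapt for
# xscreensaver/lightdm/etc., which signal locking differently.
dbus-monitor --session \
    "type='signal',interface='org.gnome.ScreenSaver',member='ActiveChanged'" |
while read -r line; do
    case "$line" in
        *"boolean true"*)  sudo modprobe -r firewire-ohci ;;  # locked
        *"boolean false"*) sudo modprobe firewire-ohci ;;     # unlocked
    esac
done
```

Note this doesn't answer the hardening question upthread: anything already running as root can simply reload the module, so a robust fix would need the kernel (or an IOMMU) to gate DMA, not a userspace script.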
I've disabled it in the BIOS (though I didn't find the option on my ThinkPad). It was a bit frustrating when, years later, I tried to use a FireWire media source and had forgotten that I'd disabled it :\
It's still slightly problematic for me. To prevent the re-activation of DMA, I have to disable the Guest account. Unfortunately, this also disables the "Find My iPhone" application. I would like to have both. Is there a way to have the Guest account, but not let people log into it when the machine is locked?
EDIT: Got it. It reactivates the Guest user when you turn on the "Find My" feature, but you can deactivate Guest user afterwards.
Note that this isn't necessary. If you have full disk encryption enabled, the machine will reboot (to a limited OS) when the Guest account is requested.
Looks like a standard DMA attack. This problem has been known about forever, though I guess now people will stop thinking of it as an obscure hypothetical attack?
This is interesting, but is certainly not novel. I did a talk (in one of my grad school courses) in 2008 summarizing the Princeton research regarding attacks on encryption keys stored in memory (https://citp.princeton.edu/research/memory/). DMA and Firewire were both attack vectors mentioned in that paper.
It should not be that way, though. Physical access should not equal compromise.
There's no good reason why this vulnerability still exists after 10 years except a failed design, laziness on the part of OS developers and that security professionals in general meet the problem with the above statement that "physical access equals compromise".
I think end users deserve (and expect) secure devices, even when physical access is lost. I realize that it's harder to protect a physical device, but it's not impossible.
Security ratings for those devices are measured in time. Basically, if you lose possession, it's just a matter of time. Digital security is both easier and harder, because all you're protecting there is information. If you wish for the information to be destroyed on tampering, then your job may be easier.
The only way for there to be hardening when physical access is lost is to have some form of layered defense in depth, the aim being giving the user enough time to send a command to wipe the device.
Over time, yes. Your statement about physical access == compromise is missing that crucial detail. There's no reason why someone should be able to access all your data just because they have physical access to your device for a short period.
If you really want to do the analogy thing, the DMA vulnerability would be the equivalent of a safe with a keyless second door in the back. It would not be a very good safe.
I don't think the specifics of the analogy are under discussion here; the point is that it's stupid and counter-productive to dismiss an obvious vulnerability just because protecting against it is hard.
> Physical access == compromise even for devices that are as simple as a hollow metal box.
Only if it's unattended. You can't break into a safe without looking suspicious, and you can't disassemble a PC and take out its hard drive without attracting a bit of attention.
Being able to root a system by attaching a dongle is a whole different story. It's like auto-play on USB all over again.
> Only if it's unattended. You can't break into a safe without looking suspicious, and you can't disassemble a PC and take out its hard drive without attracting a bit of attention.
Yes, it goes both for a safe and a computer, so you're reinforcing my point about the equivalency of their security. Safes wouldn't be secure at all without the vigilance of bank employees, etc.
With a FireWire device to DMA the password out, all you need is to keep the device hooked up long enough to copy all of active memory. Certainly something that could happen at a hackerspace or at a conference. For the James Bond set, invent a device you can set about 4 feet (1.2 meters) away on the table; after it's done copying memory, the FireWire cord unhooks itself and retracts back into the device.
Attaching a dongle seems very suspicious to me, unless it's something like a library computer. A library computer shouldn't have any critical data in the first place, making the point moot.
Physical access always has been much harder to protect against than anything else.
If someone is trusted to get access to the actual device then it's pretty much game over. Do you check the keyboard cables for key-capture hardware? Do you check for all the other nefarious devices? Do you check your OS has not been tampered with?
As the article notes, OS X is only vulnerable to this issue when the machine is unlocked as DMA access via FireWire is disabled while it is locked. If your machine is left unattended and unlocked then anyone with the ability to plug in a FireWire device could already cause you grief.
A user with physical access to an unlocked machine probably doesn't need DMA to cause you grief anyway. Obviously this is a security issue in some contexts, and some people need to take it seriously. But physical access remains one of the hardest barriers to overcome. Real world security for people who aren't spies really doesn't worry about stuff like this.
Yep, physical access in general means game over. If it's unlocked they can just clock you over the head, take the laptop, and copy things. No need for fancy devices.
You'd be surprised. It's not just the James Bond types that are being attacked by foreign intelligence anymore. By some estimates industrial espionage against technology companies by China is second only to domestic security in terms of funding and manpower.
Companies are issuing disposable laptops, iPads, and cell phones to employees traveling to Asia. Consider why that might be.
> By some estimates industrial espionage against technology companies by China is second only to domestic security in terms of funding and manpower.
What estimates? That sounds frankly incredible.
Given that China's total population is far larger than the total population of engineers in the entire world, manpower devoted to industrial espionage anywhere even close to that dedicated to domestic security would imply that there are more "Chinese spies per engineer in the world" than there are "Chinese police per Chinese person".
If I do, in fact, have my own personal Chinese spy, I'd like to say "hi" and "i'm sorry I didn't get much coding done today."
Shameless startup promotion here: PrivateCore (www.privatecore.com) is working on protecting against DMA and other memory-extraction attacks (e.g., via non-volatile memory).
Our model is that no hardware is trusted except the CPU and TPM. Plaintext data never leaves the CPU. All of main memory is encrypted, so it's beyond just protecting crypto keys in registers like TRESOR.
There seem to be two distinct types of attack for this attack vector:
(1) bypassing OS logon screen by patching in-memory code
(2) stealing password or encryption key for already-mounted encrypted volume
But bypassing or stealing the password or key for a cold, unmounted volume is impossible: for example, a TrueCrypt volume is mathematically indistinguishable from /dev/urandom output until the exact password is provided.
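To make that concrete: a TrueCrypt-style header carries no magic bytes, so there is nothing on disk to fingerprint. Since such a header is, by design, statistically identical to random data, a random block can stand in for one in this illustration (`header.bin` is just a hypothetical name):

```shell
# Stand-in for an encrypted volume header: a real TrueCrypt header is,
# by design, statistically indistinguishable from this random block.
dd if=/dev/urandom of=header.bin bs=512 count=1 2>/dev/null

# No magic string, no structure -- 'file' generally just reports "data":
file header.bin
```

Without the password there is no test that separates the header from noise, which is exactly why a cold, unmounted volume resists these attacks.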
Well, I'd imagine something like a laptop hooking up to the target machine over both USB and FireWire (or something like it). The USB interface would act like a disk, holding an executable or two you'd like to run.
Walk over the process list in memory, and find one that's got good privileges but stays asleep. Patch its code to call mount (anywhere's fine, maybe in the heap, or if it's asleep in sys_read, in the destination buffer), alter the saved IP in the task table to the mount call, and (this may be the tricky part) move it to a CPU's ready-task list.
After mount, play the same trick to run an executable, and change its UID to root. Hell, it can just spin in a getuid()/sleep() loop until it's ready. Then you've got all of root userland to do what you need. That can include installing keyboard sniffers that save the last kilobyte or so, which gets saved or transmitted when the right entry point into the encrypted-mount path is hit.
At an old MacHack, someone was going around with a powerbook. They'd plug a cable into your mac, and start drawing things on your screen.
More than a few organizations shove hot glue into USB ports to secure the machine.
It's hard enough to get machines secure from drive-by web browsing attacks. Getting them safe from physical access is substantially, substantially harder.
Open the machine and plug an unglued USB connector into the motherboard header. It's probably also not that difficult to unglue an external connector with a heat gun and some pliers.
I'm guessing the point of hot-glueing USB ports is to prevent drive-by attacks, not to prevent someone that can steal the computer from doing something to it.
I am assuming this would not be a vulnerability against a laptop that is turned off, with disk encryption? e.g. I leave my laptop, off, in a hotel room? (I do realize there are other vulns in that scenario, e.g., hotel maid puts in a new BIOS chip that contains a keylogger.)
No, a laptop that is turned off is safe from this.
And you don't really need something as tailored as a new BIOS chip, in most cases where you can boot from USB/Firewire/whatever you can just replace the bootloader with one that copies the password as you enter it.
That's why you might want to have your bootloader on a USB stick or something that isn't available to the attacker even if he/she gets physical access.
But that of course doesn't protect against someone altering the BIOS (or even the firmware of your keyboard...). It at least usually requires more effort, though.
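Putting the bootloader on removable media, as suggested above, can be done with stock GRUB. A sketch, assuming a BIOS-boot machine, the stick at /dev/sdb, and its first partition mounted at /mnt/usb (adjust device names for your setup):

```shell
# Install GRUB's boot code to the USB stick's MBR, with its files on the
# stick itself, so the boot-critical pieces live on media you carry with you.
sudo grub-install --boot-directory=/mnt/usb/boot /dev/sdb
```

With the unencrypted bootloader on the stick rather than the internal disk, an evil-maid attacker has no resident bootloader left to backdoor, though the BIOS and peripheral firmware caveats above still apply.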
Or, one could listen in by aiming a laser at a nearby window (watching the reflections as the sound waves propagate through the glass) to pick up your keystrokes and, based on statistics (gathered while you write HN posts ;)), figure out your password without ever getting in the same room as the hardware.
If someone has the resources and motivation to get your data he/she will probably be able to :(
[1] https://support.apple.com/kb/HT5002
(missed the note about this on the page the first time through, but I'm leaving this for others who may have missed it)