Windows 7/8, Linux, OS X Full Disk Encryption FireWire Exploit (breaknenter.org)
36 points by jefe78 on Jan 4, 2013 | 51 comments


As the site says, OS X is mostly safe by default. DMA is disabled when a machine is locked or the user is not logged in[1], in OS X 10.7.2+.

[1] https://support.apple.com/kb/HT5002

(missed the note about this on the page the first time through, but I'm leaving this for others who may have missed it)


I'm curious, would this be possible in Linux? What would the kernel <-> {xscreensaver,gdm,lightdm,gnome-screensaver,etc} communication look like? And how would that pathway be secured against an attacker simply turning DMA back on?

How does this work in OSX if a Firewire device is in the middle of a DMA operation when the OS is locked? Does the operation fail? Does the user lose data?


The correct way to implement it would be to prevent the initiation of DMA operations when the device is locked, but to let current ones continue. I have no idea if that's what happened, though.


I'm unsure how the DMA operations are structured though. A single 'operation' from the user's perspective might involve multiple operations at a lower level where locking in the middle might allow a single low-level atomic (in-progress) operation to complete, but the overall job to fail.
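The "refuse new operations, let in-flight ones drain" semantics described above is a standard gate pattern. A minimal sketch in Python (purely illustrative; this is not how OS X actually implements it):

```python
import threading

class DMAGate:
    """Hypothetical sketch of 'drain' semantics: lock() refuses new
    operations but blocks until all in-flight ones have finished."""
    def __init__(self):
        self._cond = threading.Condition()
        self._locked = False
        self._inflight = 0

    def begin(self):
        """Start a DMA operation; refused once the gate is locked."""
        with self._cond:
            if self._locked:
                raise PermissionError("device locked: new DMA refused")
            self._inflight += 1

    def end(self):
        """Finish a DMA operation and wake any waiting lock()."""
        with self._cond:
            self._inflight -= 1
            self._cond.notify_all()

    def lock(self):
        """Lock the gate, then wait for in-flight operations to drain."""
        with self._cond:
            self._locked = True
            while self._inflight:
                self._cond.wait()
```

A low-level atomic transfer caught mid-job would complete (its `begin()` already succeeded), while the next transfer in the same user-visible job would fail, matching the "job fails overall" behavior the parent describes.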


I just prevented the module from loading. I don't have any FireWire devices anyway, so it's just a useless 4-pin port at the bottom left of my ThinkPad.

No driver, no workey.
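For reference, blacklisting the FireWire stack on a modern Linux kernel looks something like this (these are the "juju" stack module names; older kernels used ohci1394/sbp2 instead):

```
# /etc/modprobe.d/blacklist-firewire.conf
blacklist firewire-core
blacklist firewire-ohci
blacklist firewire-sbp2
```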


Right, but that's not what OSX is apparently doing. It's just limiting DMA operations under specific conditions. I'm wondering what that functionality would look like if implemented in Linux.


I've disabled it in the BIOS (though I couldn't find the option on my ThinkPad). It was a bit frustrating when, years later, I tried to use a FireWire media source and had forgotten that I had disabled it :\


> DMA is disabled when a machine is locked

For Firewire. What about for Thunderbolt?


This was asked on Reddit, and the answer was yes ...

http://www.reddit.com/r/netsec/comments/15ydem/inception_is_...


Another post[1] on the same website suggests that DMA is also disabled for Thunderbolt when the machine is locked.

[1]: http://www.breaknenter.org/2012/02/adventures-with-daisy-in-...


It's still slightly problematic for me. To prevent the re-activation of DMA, I have to disable the Guest account. Unfortunately, this also disables the "Find My iPhone" application. I would like to have both. Is there a way to have the Guest account, but not let people log into it when the machine is locked?

EDIT: Got it. Turning on the "Find My" feature reactivates the Guest user, but you can deactivate the Guest user afterwards.


Note that this isn't necessary. If you have full disk encryption enabled, the machine will reboot (to a limited OS) when the Guest account is requested.


Does thunderbolt always use DMA? Perhaps if it's just being used as a screen output then it can turn DMA off but still work.


Looks like a standard DMA attack. This problem has been known about forever, though I guess now people will stop thinking of it as an obscure hypothetical attack?


Once there is a tool that's, "so easy a script-kiddie could do it," you're forced to take it seriously.


This tool (inception) has been around for a long time ... I remember playing with it at Blackhat 2008.

Edit: May not have been inception, but something similar... I remember unlocking Windows XP laptops!


This is interesting, but is certainly not novel. I did a talk (in one of my grad school courses) in 2008 summarizing the Princeton research regarding attacks on encryption keys stored in memory (https://citp.princeton.edu/research/memory/). DMA and Firewire were both attack vectors mentioned in that paper.

Physical Access == Game Over


It should not be that way, though. Physical access should not equal compromise.

There's no good reason why this vulnerability still exists after 10 years, except failed design, laziness on the part of OS developers, and the fact that security professionals in general meet the problem with the above statement that "physical access equals compromise".

I think end users deserve (and expect) secure devices, even when physical access is lost. I realize that it's harder to protect a physical device, but it's not impossible.


> Physical access should not equal compromise.

Physical access == compromise even for devices that are as simple as a hollow metal box.

http://en.wikipedia.org/wiki/Safe

Security ratings for those devices are measured in time. Basically, if you lose possession, it's just a matter of time. Digital security is both easier and harder, because all you're protecting there is information. If you wish for the information to be destroyed on tampering, then your job may be easier.

The only way for there to be hardening when physical access is lost is to have some form of layered defense in depth, the aim being to give the user enough time to send a command to wipe the device.


Over time, yes. Your statement about physical access == compromise is missing that crucial detail. There's no reason why someone should be able to access all your data just because they have physical access to your device for a short period.

If you really want to do the analogy thing, the DMA vulnerability would be the equivalent of a safe with a second door in the back that needs no key. It would not be a very good safe.

Just sayin'.


> the DMA vulnerability would be the equivalent of a safe with a door where no key is needed in the back.

More like a safe where the "master key" was leaked and wasn't disabled in the models that were sold.


I don't think the specifics of the analogy are under discussion here, but rather that it's stupid and counter-productive to dismiss an obvious vulnerability just because protecting against it is hard.

Welcome to the world of security, I guess.


> Physical access == compromise even for devices that are as simple as a hollow metal box.

Only if it's unattended. You can't break into a safe without looking suspicious, and you can't disassemble a PC and take out its hard drive without attracting a bit of attention.

Being able to root a system by attaching a dongle is a whole different story. It's like auto-play on USB all over again.


> Only if it's unattended. You can't break into a safe without looking suspicious, and you can't disassemble a PC and take out its hard drive without attracting a bit of attention.

Yes, it goes both for a safe and a computer, so you're reinforcing my point about the equivalency of their security. Safes wouldn't be secure at all without the vigilance of bank employees, etc.

With a FireWire device to DMA the password, all you need is to hook up the device long enough to copy all of active memory. Certainly something that could happen at a hackerspace or at a conference. For the James Bond set, invent a device that you can set about 4 feet (1.2 meters) away on the table, and after it's done copying memory, the FireWire cord unhooks itself and retracts back into the device.


Hey.

Do you leave your PC on or in standby when traveling? When you leave your desk at work?

More importantly, do you think that end users would expect password protection to work? Even when their PC is on?

There are plenty of scenarios where a PC may end up in another person's control while powered on. This is a relevant threat scenario. Deal with it.


Attaching a dongle seems very suspicious to me, unless it's something like a library computer. And a library computer shouldn't have any critical data in the first place, making the point moot.


Physical access always has been much harder to protect against than anything else.

If someone is trusted to get access to the actual device then it's pretty much game over. Do you check the keyboard cables for key-capture hardware? Do you check for all the other nefarious devices? Do you check your OS has not been tampered with?


> There's no good reason why this vulnerability still exists after 10 years except a failed design

I'll give you that, but there are too many kinds of physical access attacks to even consider aiming at solving the entire class of attacks.

> Physical access should not equal compromise.

And I want a million dollars. Guess which one's more likely.


As the article notes, OS X is only vulnerable to this issue when the machine is unlocked, as DMA access via FireWire is disabled while it is locked. If your machine is left unattended and unlocked, then anyone with the ability to plug in a FireWire device could already cause you grief.


A user with physical access to an unlocked machine probably doesn't need DMA to cause you grief anyway. Obviously this is a security issue in some contexts, and some people need to take it seriously. But physical access remains one of the hardest barriers to overcome. Real world security for people who aren't spies really doesn't worry about stuff like this.


Yep, physical access in general means game over. If it's unlocked, they can just clock you over the head, take the laptop, and copy things. No need for fancy devices.


> But physical access remains one of the hardest barriers to overcome.

Basically it's the same problem as DRM that actually works.


You'd be surprised. It's not just the James Bond types that are being attacked by foreign intelligence anymore. By some estimates industrial espionage against technology companies by China is second only to domestic security in terms of funding and manpower.

Companies are issuing disposable laptops, iPads, and cell phones to employees traveling to Asia. Consider why that might be.


> By some estimates industrial espionage against technology companies by China is second only to domestic security in terms of funding and manpower.

What estimates? That sounds frankly incredible.

Given that China's total population is far larger than the total population of engineers in the entire world, having the manpower devoted to industrial espionage being anywhere even close to that dedicated to domestic security would imply that there are more "chinese spies per engineer in the world" than there are "chinese police per chinese person".

If I do, in fact, have my own personal Chinese spy, I'd like to say "hi" and "i'm sorry I didn't get much coding done today."


Shameless startup promotion here: PrivateCore (www.privatecore.com) is working on protecting against DMA and other memory extraction attacks, like non-volatile memory.

Check out the attack device we implemented and how we defend against it: http://www.privatecore.com/dma-attack-video.html


Where do you store your encryption key? In registers?


Our model is that no hardware is trusted except the CPU and TPM. Plaintext data never leaves the CPU. All of main memory is encrypted, so it's beyond just protecting crypto keys in registers like TRESOR.


There seem to be two distinct types of attack for this attack vector: (1) bypassing the OS logon screen by patching in-memory code, and (2) stealing the password or encryption key for an already-mounted encrypted volume.

But bypassing or stealing the password or key for a cold, non-mounted volume is impossible: for example, a TrueCrypt volume is mathematically indistinguishable from /dev/urandom output until the exact password is provided.
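The second attack type works precisely because a mounted volume's key must sit in RAM in usable form. A toy illustration of how key material stands out in a captured memory image (a simple entropy heuristic only; real tools such as the Princeton group's aeskeyfind search for AES key schedules instead):

```python
import math
from collections import Counter

def entropy(buf):
    """Shannon entropy of a byte buffer, in bits per byte."""
    counts = Counter(buf)
    n = len(buf)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def find_key_candidates(image, window=32, threshold=4.5):
    """Flag window-sized regions that look close to uniformly random.
    Key material is high-entropy; most of RAM (code, zero pages,
    text) is not. Purely illustrative, not a forensic tool."""
    return [off for off in range(0, len(image) - window + 1, window)
            if entropy(image[off:off + window]) > threshold]
```

Note the flip side: the same property that makes a cold TrueCrypt volume indistinguishable from random data makes the in-memory key easy to spot against ordinary, low-entropy RAM contents.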


Well, I'd imagine something like a laptop hooking up to the target machine over both USB and FireWire (or something like it). The USB interface would act like a disk, holding an executable or two of your choosing.

Walk over the process list in memory, and find one that's got good privileges but stays asleep. Patch its code to call mount (anywhere's fine, maybe in the heap, or if it's asleep in sys_read, in the destination buffer), alter the saved IP in the task table to the mount call, and (this may be the tricky part) move it to a CPU's ready-task list.

After mount, play the same trick to run an executable, and change its UID to root. Hell, it can just spin in a get-UID/sleep() loop until it's ready. Then you've got all of root userland to do what you need. That can include installing keyboard sniffers that save the last kilobyte or so, which gets saved or transmitted when the right entry point into the encrypted-mount path is hit.


At an old MacHack, someone was going around with a PowerBook. They'd plug a cable into your Mac and start drawing things on your screen.

More than a few organizations shove hot glue into USB ports to secure the machine.

It's hard enough to get machines secure from drive-by web browsing attacks. Getting them safe from physical access is substantially, substantially harder.


Not very useful, really.

Open the machine and plug an unglued USB connector into the motherboard header. It's probably not that difficult to unglue the external connector with a heat gun and some pliers, either.


I'm guessing the point of hot-glueing USB ports is to prevent drive-by attacks, not to prevent someone that can steal the computer from doing something to it.


Sealing USB ports was more about keeping employees from bringing in and spreading infections on thumb drives.


It also prevents 'cleaning lady'-type attacks (or malicious co-workers), which is what I was alluding to.


A previous tool for the same class of exploits (Firewire DMA): http://web.archive.org/web/20100510013948/http://www.storm.n...

Were these kinds of exploits noted when writing the 1394 spec and ignored in favour of the DMA feature?


I just looked up IOMMUs, which can protect against this. Sadly, according to Wikipedia, hardware support for them looks spotty. http://en.wikipedia.org/wiki/IOMMU_hardware_list

Hopefully that'll get better.
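One quick way to check whether the running Linux kernel has an IOMMU active is to look at /sys/class/iommu, which the kernel populates with one entry per active unit. A rough heuristic only (BIOS settings and kernel command-line options like intel_iommu=on both matter):

```python
import os

def iommu_active(sysfs="/sys/class/iommu"):
    """True if the kernel exposes at least one active IOMMU unit
    under sysfs; False if the directory is absent or empty."""
    try:
        return len(os.listdir(sysfs)) > 0
    except FileNotFoundError:
        return False

print("IOMMU active" if iommu_active() else "no IOMMU detected")
```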


The author of the tool commented in this thread on Reddit:

http://www.reddit.com/r/netsec/comments/15ydem/inception_is_...


I am assuming this would not be a vulnerability against a laptop that is turned off, with disk encryption? e.g. I leave my laptop, off, in a hotel room? (I do realize there are other vulns in that scenario, e.g., hotel maid puts in a new BIOS chip that contains a keylogger.)


No, a laptop that is turned off is safe from this.

And you don't really need something as tailored as a new BIOS chip: in most cases where you can boot from USB/FireWire/whatever, you can just replace the bootloader with one that copies the password as you enter it.

That's why you might want to have your bootloader on a USB stick or something that isn't available to the attacker even if he/she gets physical access.

But that of course doesn't protect against someone altering the BIOS (or even the firmware of your keyboard...). It at least usually requires more effort, though.

Or, one could listen in by aiming a laser at a nearby window (watching the reflections as the sound waves propagate through the glass), pick up your keystrokes, and, based on statistics (gathered while you write HN posts ;)), figure out your password without even getting into the same room as the hardware.

If someone has the resources and motivation to get your data he/she will probably be able to :(


> If someone has the resources and motivation to get your data he/she will probably be able to :(

Goes for anything else you "own." The real question is, how bad do they want it, what are the economics, and how does society at large feel about it?


Previously discussed (though not a very big turnout in the thread):

http://news.ycombinator.com/item?id=4646918



