1Password's browser extension v4, in contrast, fills in the form only when you press a key combination (⌘-\ on OS X), and has you enter your master password into a dropdown from the OS X menu bar, not inside the browser frame. Pretty snazzy all around.
Aha, I see the distinction now. Thanks. Is it enough to simply disable autofill in LastPass, then? I'm loath to learn a new system because of such a seemingly small vulnerability.
LastPass has me enter the master password in a pop up window when I click on the icon from the extension (firefox/chrome), although I suppose that's maybe not as good as an independent application?
I can not only disable autofill in the LastPass configuration, but also require a password reprompt for any of my credentials. I could also disable autofill for individual credentials.
People should first know how something works before criticizing it.
A common marketing tactic is to include a link to a small, transparent image in the email they send. The URL is personalized to each recipient, and so they can check their server logs to see who opened their email and when.
It's unclear whether this technique will work against the new Gmail approach; maybe Google will always fetch all images as soon as the mail arrives, which would immediately ruin any signal.
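The mechanics are simple enough to sketch. Below is a hypothetical illustration (the domain, token scheme, and in-memory `registry` are all made up for the example; a real mailer would use its own domain and database): the sender embeds a per-recipient URL, and a hit on that URL in the server logs identifies the open.

```python
import uuid

# Hypothetical stand-ins: mailer.example.com and this in-memory registry
# take the place of the sender's real domain and database.
registry = {}

def tracking_img(recipient_email):
    token = uuid.uuid4().hex
    registry[token] = recipient_email  # server-side token -> recipient map
    url = f"https://mailer.example.com/pixel/{token}.gif"
    # 1x1 transparent image embedded in the HTML email body.
    return f'<img src="{url}" width="1" height="1" alt="">'

html = tracking_img("alice@example.com")
# When a mail client fetches /pixel/<token>.gif, the server log line plus
# the registry reveal who opened the message, and when.
```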
Some JS engines already disallow security-violating operations from happening (such as inserting evil hooks into document.location). These same protections could apply to any Crypto object.
As eru said elsewhere, you've created a stream cipher. To be secure, the
generated keystream (the thing you xor against the plaintext) must not have any
biases: the attacker shouldn't be able to guess the contents of any keystream
byte (with P > 1/2^8). Otherwise, an attacker can probabilistically recover
parts of the plaintext.
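To make the consequence concrete, here's a toy illustration (not the author's cipher; the bias is exaggerated to 50% for clarity) of how a biased keystream byte leaks plaintext:

```python
import random

random.seed(1)

def biased_keystream(n):
    # Toy keystream: the byte 0x11 appears ~50% of the time, a severe bias.
    return bytes(0x11 if random.random() < 0.5 else random.randrange(256)
                 for _ in range(n))

plaintext = b"attack at dawn" * 100
ks = biased_keystream(len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))

# Attacker strategy: guess that every keystream byte was the biased value.
guess = bytes(c ^ 0x11 for c in ciphertext)
hits = sum(g == p for g, p in zip(guess, plaintext))
# hits comes out around half the message length, vastly better than the
# ~1/256 success rate a uniform keystream would allow.
```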
I played around with your C implementation a little[1]. I put it into 8-bit
mode, generated 16 random keys, and output their keystreams[2]. Here are the
first 32 bytes of those keystreams, in hex:
As you can see, there are some significant biases here. For example, as an
attacker, once I have a ciphertext, I can guess that bytes 18 and 19 of the
keystream were (hex) "1112", and have a very good chance of being right.
While the 32-bit version isn't as bad, I think there are still significant biases.
I generated 24495 keys. Here's the distribution for keystream byte 1 (script at
[3]):
I should really get to sleep, so I didn't do the statistics on this, but I'm
fairly sure that's not a uniform distribution (i.e., bytes aren't being drawn
uniformly from [0:255]). Also, keystream byte 0 only ever takes 128 values,
instead of 256.
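For anyone who wants to do the statistics, a quick chi-squared test on the byte histogram would settle it. A rough sketch (fed with Python's own PRNG as a stand-in, since I'm not reproducing the cipher here; with the real thing you'd feed in keystream byte 1 from each generated key):

```python
import random
from collections import Counter

random.seed(0)

# Stand-in data: one "keystream byte" per generated key.
samples = bytes(random.randrange(256) for _ in range(24495))
counts = Counter(samples)
expected = len(samples) / 256

# Pearson chi-squared statistic against the uniform distribution.
chi2 = sum((counts.get(v, 0) - expected) ** 2 / expected for v in range(256))
# With 255 degrees of freedom, chi2 above roughly 310 rejects uniformity
# at p < 0.01. A byte that only ever takes 128 of its 256 possible values
# would blow far past that threshold.
```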
I did not look at 64-bit.
All in all, I wouldn't use your cryptosystem :) but there's no shame in that!
Crypto is hard! And you're only going to get better.
I'm not really sure how you can get similar results with completely different keys. I'm looking into this. I tried generating a meg of encrypted zeroes in 8-bit mode, and the most-used 8-bit value was only 1% more common than the least-used value.
This looks fishy to me. Follow my thought: if you are looking at the very first value coming out of the stream, that value should have zero bias from the algorithm. Why? Because it is an XOR of two different values in the key. If both values are properly random, then so is their XOR.
Note that at this point the algorithm has not yet started modifying itself, so the original key is still intact. The self-modifying code may still be broken, but I think we can be fairly sure the first value has the exact same bias as the random number generator.
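That claim is easy to sanity-check empirically. A quick sketch (using Python's PRNG as the stand-in source of "properly random" key bytes):

```python
import random
from collections import Counter

random.seed(0)
N = 200_000

# XOR two independent, uniformly random key bytes, many times over.
xors = [random.randrange(256) ^ random.randrange(256) for _ in range(N)]
counts = Counter(xors)

# If the XOR is itself uniform, all 256 values show up, each close to
# N/256 (about 781 here), so the max/min count ratio stays near 1.
spread = max(counts.values()) / min(counts.values())
```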
It's not standard with symmetric encryption in general; it's a "known-plaintext attack": http://en.wikipedia.org/wiki/Known-plaintext_attack. With AES, for example, even if you know the plaintext there is no known way to obtain the key (faster than brute-force search).
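A toy sketch of why known plaintext is so devastating for a simple XOR stream cipher in particular (the keystream bytes here are arbitrary made-up values; the point is keystream reuse):

```python
# A fixed, reused keystream (arbitrary made-up bytes for the example).
keystream = bytes([0x3a, 0x91, 0x07, 0xee, 0x52, 0xc4, 0x18, 0x6d] * 4)

def xor(data, ks):
    return bytes(d ^ k for d, k in zip(data, ks))

known_pt = b"known header: v1"                 # attacker knows this part
ct1 = xor(known_pt, keystream)
ct2 = xor(b"secret message!!", keystream)      # same keystream, reused

recovered_ks = xor(ct1, known_pt)              # ct XOR pt gives the keystream
stolen = xor(ct2, recovered_ks)
# stolen == b"secret message!!": the second message falls with zero effort.
# With AES, knowing (plaintext, ciphertext) pairs yields no such shortcut
# to the key.
```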
That's something you really need to do your own research on; it's a big topic. Start by looking up 'complex carbohydrates' and consider that while they are made up of basic sugars, in nature the breakdown of those complex carbs would be done mostly in the gut rather than through refinement in a sugar plant. A pediatrician friend of mine doesn't even believe in consuming juice, because she thinks the body is better able to regulate its sugar intake by processing the fruit, also using the non-nutritious bulk fiber of the fruit to regulate the amount consumed. This is her hunch rather than something proven through exhaustive research, but it seems like a fairly well-informed hunch.
A glass of juice is a lot of sugar. One serving size of fruit is much smaller, and would give very much less juice. It's very easy to drink a lot of sugar if you're drinking undiluted juice.
Dietitians in the UK recommend that people be cautious with juices and smoothies because of the amount of sugar, and they recommend that people just eat the fruit and drink water. Especially for children, the high sugar and acid content is hard on teeth.
(I agree that the hunch seems reasonable. But then, those are the things that need research, to combat my biases.)
"Processed food is fiberless food. That's basically what it comes down to. Processed food means that you've got to take the fiber out for shelf-life. And there are two kinds of fiber. There's soluble fiber: which is the kind of stuff that holds jelly together, and pectins, and things like that. And then there's the insoluble fiber: the stringy stuff, like, you know, cellulose, like what you see in celery. You need both. What I describe in the book is like it's kind of like your hair-catcher in your bathtub drain. Um, you have this plastic lattice work with holes in it. So, if you take a shower and the hair is coming down, it blocks up the holes, but only if the hair catcher is there. So, imagine that the cellulose is the hair-catcher, and imagine the hair is the soluble fiber, blocking up the little holes. When they're both there, it forms a barrier on the inside of your intestine.
You actually can see it during electron microscopy, that it's a secondary barrier that reduces the rate of absorption of nutrients from the gut, into the bloodstream. And what that does is that it actually keeps the liver safe, because it reduces the rate at which the liver has to metabolize the stuff. And if you overload the liver, it has no choice but to turn extra energy into liver fat. And that's what drives this whole process: the process of liver fat accumulation. And the thing that does that the worst is sugar, especially when it's not teamed up with fiber."
Ostensibly, a "low-key ceremony in an actual beautiful natural area" (one that can support 360 guests) would have a far greater ecological impact than manufacturing such a place in an already developed campground.
To make such a suggestion, you'd have to claim that they shouldn't have invited nearly so many people, which is an uncomfortable claim to make.
You are misinformed. There may or may not be DRM according to the teardowns. There is certainly a bit of silicon which has a minimal DRM capability, but it may have come along for the ride. If there is DRM, it is not enforced. There are knockoff connectors out there.
The connector electronics are there to allow higher current charging by allocating more pins to power transfer when they are not needed for data and to allow as yet undeveloped higher speed protocols to operate on the same connector.
If there is DRM I can see Apple's point. When you destroy an iPhone with a cheap charger that fails, it costs Apple money to replace it. If you attempt to reverse engineer the connector, you will be wrong. The future capabilities are not present for observation. You will probably create a device that behaves improperly for future protocols. Will you destroy that future device by pulling -5V on a low voltage differential data line?
This article focuses pretty heavily on the possibility of cache timing attacks against AES, and cites djb's original work along with Tromer/Osvik's publication in 2005.
Last week at CCSW, we published a paper[1] detailing our attempts to bring these attacks to bear against Chromium.
In short, we don't see AES cache timing attacks as possible on more recent processors, and especially so once you factor in the sheer size of modern architected code.
DJB's attacks were from a remote attacker's vantage point. But your paper also takes on Osvik and Tromer, who used "spy processes" to continuously probe local caches to create traces that could be analyzed for key information. I know your paper mentions branch prediction and says you don't have results for it, but what's your take on whether Aciicmez's BTB attack is going to remain viable?
I thought the BTB attack was the cleverest and most disquieting of the bunch, in that it suggested that we don't even know enough about the x86-64 microarchitecture to predict what side channel vulnerabilities we might have in software AES.
Regarding the paper itself: the most provocative claim it makes is that we're trending towards "complete mitigation" of cache side channel attacks. You give two reasons: AES-NI and multicore systems.
The AES-NI argument seems compelling but a little obvious, in the same sense as one could have argued that peripherals that offloaded AES would also blunt attacks against software AES. AES-NI blurs the line between software and hardware AES, but it's still a hardware implementation.
Another argumentative point that could be made here is, AES-NI mitigates cache-timing attacks against systems that use AES. It doesn't do much good if you can't use AES, since the most popular block ciphers that compete with AES are also implemented with table lookups.
I found the multicore argument a lot less compelling, since it relied in part on the notion that attackers wouldn't easily be able to predict the cache behavior of their target multicore systems. It seems to me that the most likely environment in which cache timing attacks are going to be a factor on the Internet is shared hosting environments, in which attackers with the sophistication to time AES are easily going to be able to get a bead on exactly what hardware and software they're aiming at. Most users of AES are also using off-the-shelf hardware and software.
Aciicmez's BTB attack looks at the branch predictor, and is potentially valid against any implementation which branches based on sensitive data. There's a whole class of these attacks which look at instruction paths, including a new one by Zhang et al. against ElGamal at CCS this year, but they usually target asymmetric ciphers. In particular, since AES doesn't have key-dependent branching, these attacks don't apply.
I do agree with you that x86-64 is extremely complicated, and that new attacks might crop up due to some future optimization.
As for the paper:
Yeah, AES-NI is sort of the final hammer against AES cache timing attacks, since it doesn't use the cache at all, but I felt that a paper on AES cache timing would be remiss without mentioning it :)
There are two parts to the multicore argument: the first is that it complicates things massively, and the second is that it can be a complete mitigation if used properly.
First is the complication bit, and that's just saying that the attacker must understand almost everything about the multicore implementation, including multilevel cache behavior and (possibly non-deterministic?) replacement strategy. I'm willing to believe that, were this the only hurdle, a dedicated attacker could still succeed. I was looking at a single core machine, so I didn't have to deal with the complexity here.
For the complete mitigation, you need to rely on platform support for core pinning. If you're allowed to say "I want to do encryption now, give me my own core for 400ms", then, since the 4KiB T-tables fit into your core's L2, attacker threads on other cores just can't examine them during use. This complicates the VM hosting model and might enable a decent DoS attack, but it does completely stop cache-probing attacks.
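A minimal sketch of the pinning half of that primitive on Linux, via os.sched_setaffinity (note this only pins the caller to a core; actually reserving the core exclusively, i.e. keeping other tenants off it, is the part that needs platform support):

```python
import os

def pin_to_core(core):
    # Restrict the calling process to one core for the sensitive section.
    previous = os.sched_getaffinity(0)   # 0 == this process
    os.sched_setaffinity(0, {core})
    return previous

def unpin(previous):
    os.sched_setaffinity(0, previous)

if hasattr(os, "sched_setaffinity"):     # Linux-only API
    saved = pin_to_core(0)
    # ... run the AES work here, T-tables resident in this core's cache ...
    unpin(saved)
```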
Finally, as you said, my work can really only apply to AES on the x86 on the desktop. Change one of these variables (such as AES to ElGamal or RSA or Blowfish), and side channel attacks might still exist. Such is the problem with negative results :)
This was fun to read; thanks. It's interesting how side channel attacks can be both assisted and complicated by new hardware; usually, advances in hardware tend to favor attackers slightly more than defenders, but even just by pushing operations below attacker measurement thresholds --- without even trying, that is --- hardware makes some side channels very hard to exploit.
If you're an HN'er reading along at home, Aciicmez's BTB timing paper (you should just be able to Google that) is very very very cool. They not only realized that you could theoretically watch the caches used by the branch predictor to build a trace from which you could recover RSA keys, but also came up with a very simple way to profile those branch predictor caches; that is, they designed a "spy process" like Osvik and Tromer did for memory caches that targeted the BTB instead.