
1Password's browser extension v4, in contrast, fills in the form only when you press a key combination (⌘-\ on OS X), and has you enter your master password in a dropdown from the OS X menu bar, not inside the browser frame. Pretty snazzy all around.


Aha, I see the distinction now. Thanks. Is it enough to simply disable autofill in LastPass, then? I'm loath to learn a new system over such a seemingly small vulnerability.

LastPass has me enter the master password in a pop-up window when I click the extension's icon (Firefox/Chrome), although I suppose that's maybe not as good as an independent application?


Exactly.

I can not only disable autofill in the LastPass configuration, but also require a password reprompt for any of my credentials. I can also disable autofill individually for any credential.

People should first know how something works before criticizing it.


Does the fetch happen immediately when Google receives the email? Or does it occur the first time the user opens the email?

If the latter, sender image tracking will be alive and well.


A common marketing tactic is to include a link to a small, transparent image in each email sent. The URL is personalized to each recipient, so the senders can check their server logs to see who opened their email and when.

It's unclear whether this technique will work against the new Gmail approach; maybe Google will always fetch all images as soon as the mail arrives, which would immediately ruin any signal.
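A sketch of what such a tracking pixel might look like on the sender's side (the domain, token scheme, and function name here are all invented for illustration):

```python
# Hypothetical tracking-pixel generator; every name here is made up.
import uuid

seen = {}  # token -> recipient, checked against server logs later

def personalized_pixel(recipient_email):
    token = uuid.uuid4().hex              # unique per recipient
    seen[token] = recipient_email
    url = "https://mail.example.com/pixel/%s.gif" % token
    # Embedded in the email's HTML body; the fetch shows up in server logs.
    return '<img src="%s" width="1" height="1" alt="">' % url

tag = personalized_pixel("alice@example.org")
print(tag)
```

Whoever fetches that URL (the recipient's mail client, or a proxy) reveals the open, which is exactly why who does the fetching, and when, matters.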


It may be trivially easy to host your own email, but in my experience it is nowhere near as easy to do your own spam filtering.

A few years ago when I ran my own mail server, having only one false negative was a good day. Google can throw teams of people at the problem.


Some JS engines already disallow security-violating operations (such as inserting evil hooks into document.location). These same protections could apply to any Crypto object.


As eru said elsewhere, you've created a stream cipher. To be secure, the generated keystream (the thing you xor against the plaintext) must not have any biases: the attacker shouldn't be able to guess the contents of any keystream byte (with P > 1/2^8). Otherwise, an attacker can probabilistically recover parts of the plaintext.
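To illustrate why bias matters (a toy model, not the cipher under discussion): if one keystream byte takes a particular value half the time, an attacker who xors the ciphertext with that value guesses the plaintext byte correctly about half the time, instead of 1/256.

```python
# Toy biased keystream byte: the attacker's single guess succeeds ~50%.
import random

def biased_keystream_byte():
    # Half the time the keystream byte is 0x11; otherwise uniform.
    return 0x11 if random.random() < 0.5 else random.randrange(256)

plaintext_byte = ord('A')
trials, hits = 10000, 0
for _ in range(trials):
    ciphertext_byte = plaintext_byte ^ biased_keystream_byte()
    if ciphertext_byte ^ 0x11 == plaintext_byte:  # attacker assumes the bias
        hits += 1
print(hits / trials)  # about 0.5, versus 1/256 for an unbiased keystream
```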

I played around with your C implementation a little[1]. I put it into 8-bit mode, and generated 16 random keys and output their keystreams[2]. Here's the first 32 bytes of those keystreams, in hex:

  0002010203fd050607b0090a0b250d0e0f8b111213a4151617f5191a1be6003e
  3b4a00790369050607004b170b820d0e0fb7111213d0005f1796191a1b321d1e
  00130102036b05060734090a0b260d0e0fca111264141516178700671b534ae8
  df7c806903d10506077fcc670b583600dec617580a2b8b161712191a1b1c00a7
  00260102032f1d0082a0090a0bd30d0e0f4ca8c616aa151617a400a34f1c9e1e
  af7cb175036705060786090a0bc90d0e0f2a111213831516173700501bfa1d1e
  000236ca58110506079c00090b860d0e0f8f111251009e1617b4191a1b311d1e
  00250102039700ec0729090a0b2a0d0e0f961112133a151617e3a200956764b8
  002a0102031e050607ed003c0b260d0e0fcf111213c000151a18191a1bb21d1e
  413c0102031505060745090a0b060d0e0fc6111213b815161784191a1b341d1e
  00890102039500b807a200570ba70d0e0fa31112134c1516172d191a1ba03900
  00cd01020332050607e100b10bdd0d0e0f0d111213261516170e191a1b041d1e
  0076010203c30506074e03c7880069000eb8078d54134604a61a142d1a0600c3
  00f101020344050607a8090a0b0e0d0e0f11111213e6151617f3191a1b291d1e
  00f1007a0369aab10068090a0b53b1ac0fd1111213531516171800ee1bde1d1e
  0036010203570506076c00d60b050d0e0f2f27583f03591617bc09471b141d1e

As you can see, there are some significant biases here. For example, as an attacker, once I have a ciphertext, I can guess that bytes 18 and 19 of the keystream were (hex) "1112", and have a very good chance of being right.

While the 32-bit version isn't as bad, I think there are still significant biases. I generated 24495 keys. Here's the distribution for keystream byte 1 (script at [3]):

   92    82    82   104    99    90    99   111    90   102    91    95    94    91    73   102
  109   106    97    99    88    99    90    96    88    97   101   100   108    76    87    87
   91    94   111    90    92   104    88    97    94   100    89   102    90    91    92    89
   98    96    89    94   111   111   105    90    87    89    93   104   100   110   109    93
   77   107   103    84    88    96    89    87    77    96    90    84    87   106   101    98
   99    99   114   102   104   106    95    91    95    92    94   104    95    88    93    91
   91    78   102    89   104    88    94   100   102   105    94   102   100   105    99    94
   87    89    86    93    95    77    82    83    99    94    88   106   106   101   101    91
   82    88    98   111   104    93   102    91    87    93   106    89   102    78    88   105
   91    93   105    84   101   100    94    93    94   107    88    86   114    84   112    98
   97    84   111    87    91    93    89    95    92    96    78    85    90   104    84    80
  103    95    98   114    90    89    91   110    89   100    87   107    95   109    83   103
  112   102    93    93    87    90   101    91   108   108    90   107   103    95   111   126
   88    97    74   111    97    99    95    95   102    97   122    94   106    94    97   103
   86    90   102    91    95   108    83    97    91   102    99    90   103   111    93    84
   87   107    91    96    77    98    96    85   108   110   116   100    95    89    72   100
I should really get to sleep, so I didn't do the statistics on this, but I'm fairly sure that's not a uniform distribution (i.e., bytes aren't being drawn uniformly from [0:255]). Also, keystream byte 0 only ever takes 128 values, instead of 256.
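For anyone who wants to finish the statistics, a chi-squared test against a uniform distribution needs only the stdlib. The ~293 cutoff is the 5% critical value for 255 degrees of freedom from standard tables; the bucket counts below are synthetic examples, not the data above.

```python
# Pearson chi-squared statistic for byte counts vs. a uniform expectation.
def chi_squared(counts, total):
    expected = total / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Two synthetic 256-bucket samples that each sum to 24495 draws:
uniform = [96] * 175 + [95] * 81           # close to flat
skewed  = [0] * 128 + [191] * 127 + [238]  # half the values never occur

print(chi_squared(uniform, 24495))  # well under the ~293 cutoff
print(chi_squared(skewed, 24495))   # in the thousands: uniformity rejected
```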

I did not look at 64-bit.

All in all, I wouldn't use your cryptosystem :) but there's no shame in that! Crypto is hard! And you're only going to get better.


noooo it ate my references:

1 http://canta.ucsd.edu/~kmowery/quelsolaar/crypto_test.c

2 http://canta.ucsd.edu/~kmowery/quelsolaar/keystreams.tar.gz The first line of each file is the key generated by rand(). All other lines are the keystream (encryption of 00 data).

3 http://canta.ucsd.edu/~kmowery/quelsolaar/keystreams.py Note that you might have to change to using os.walk to find files; I'm too sleepy...


I'm not really sure how you can get similar results with completely different keys. I'm looking into this. I tried generating a meg of encrypted zeroes in 8-bit mode, and the most used 8-bit value was only 1% more common than the least used value.


Seeing your name again Eskil reminded me of the excellent tooling videos I used to watch for Löve!

Where are things at and what are you doing now? I think the whole of HN would be interested!


I'm very bad at publicizing the things I do... This year I have been working on a bunch of new things:

I have written an SDL/GLUT replacement: http://www.youtube.com/watch?v=oMJP6vlsmbE

And a new library to create widgets and UIs: http://www.youtube.com/watch?v=oDulGQnjsDQ

I have used all this to build a VFX editor for realtime applications (I'll try to post a video of it soon-ish): http://www.quelsolaar.com/Confuse_particle_system.jpg

That was in turn used to make an installation using head tracking last month: http://www.youtube.com/watch?v=UYSVEhSC2DU

My big project this year has been my new RTS game. It's not a huge secret, but it hasn't been properly announced yet.


This looks fishy to me. Follow my thought: if you are looking at the very first value coming out of the stream, that value should have zero bias from the algorithm. Why? Because it is an XOR of two different values in the key. If both values are properly random, then so is their XOR.

Note that at this point the algorithm has not yet started modifying itself, so the original key is still intact. The self-modifying code may still be broken, but I think we can be fairly sure the first value has the exact same bias as the random number generator.
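The first-value claim is easy to sanity-check exhaustively: the XOR of an independent uniform byte with any other byte is uniform, no matter how skewed the other byte's distribution is.

```python
# Exhaustive check: uniform byte XOR arbitrarily-skewed byte is uniform.
from collections import Counter

counts = Counter()
for x in range(256):                    # x: uniform byte
    for y in (0x00, 0x42, 0x42, 0xff):  # y: deliberately skewed distribution
        counts[x ^ y] += 1

# Every one of the 256 output values occurs equally often.
print(sorted(set(counts.values())))  # [4]
```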


So if you know the plaintext, you get the user's key?


Standard with one-time pads (and symmetric encryption in general) — the key and plaintext are a shared secret.


It's not standard with symmetric encryption in general; it's a "known plaintext attack": http://en.wikipedia.org/wiki/Known-plaintext_attack. With AES, for example, even if you know the plaintext there is no known way to obtain the key (faster than brute-force search).
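To make the distinction concrete: with an xor-based scheme (one-time pad or stream cipher), a known plaintext hands the attacker the key material directly, whereas no analogous shortcut is known for AES. A toy sketch with a made-up key:

```python
# Known-plaintext "attack" on an xor scheme: key = ciphertext XOR plaintext.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key        = b"sixteen byte key"   # made-up pad/keystream
plaintext  = b"attack at dawn!!"
ciphertext = xor_bytes(plaintext, key)

# An attacker who knows the plaintext recovers the key in one xor.
recovered = xor_bytes(ciphertext, plaintext)
print(recovered == key)  # True
```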


The answer should be no. But I don't know.


IIRC, the typical random variable is not uniformly distributed; it is normally distributed.


Usually, when discussing human diet: carbohydrates, fat, and protein. These comprise the energy content of food.


Ok, just what I thought. But how does processing destroy them?


That's something you really need to do your own research on; it's a big topic. Start by looking up 'complex carbohydrates' and consider that while they are made up of basic sugars, in nature the breakdown of those complex carbs would happen mostly in the gut rather than through refinement in a sugar plant. A pediatrician friend of mine doesn't even believe in consuming juice, because she thinks the body is better able to regulate its sugar intake by processing the fruit itself, with the non-nutritious bulk fiber of the fruit limiting the amount consumed. This is her hunch rather than something proved through exhaustive research, but it seems like a fairly well-informed hunch.


A glass of juice is a lot of sugar. One serving of fruit is much smaller and would yield much less juice. It's very easy to drink a lot of sugar if you're drinking undiluted juice.

Dietitians in the UK recommend that people be cautious with juices and smoothies because of the amount of sugar, and they recommend that people just eat the fruit and drink water. Especially for children, the high sugar and acid are hard on teeth.

(I agree that the hunch seems reasonable. But then, those are the things that need research, to combat my biases.)


Dr. Robert Lustig agrees, from http://www.kqed.org/a/forum/R201301280900

"Processed food is fiberless food. That's basically what it comes down to. Processed food means that you've got to take the fiber out for shelf-life. And there are two kinds of fiber. There's soluble fiber: which is the kind of stuff that holds jelly together, and pectins, and things like that. And then there's the insoluble fiber: the stringy stuff, like, you know, cellulose, like what you see in celery. You need both. What I describe in the book is like it's kind of like your hair-catcher in your bathtub drain. Um, you have this plastic lattice work with holes in it. So, if you take a shower and the hair is coming down, it blocks up the holes, but only if the hair catcher is there. So, imagine that the cellulose is the hair-catcher, and imagine the hair is the soluble fiber, blocking up the little holes. When they're both there, it forms a barrier on the inside of your intestine.

You actually can see it during electron microscopy, that it's a secondary barrier that reduces the rate of absorption of nutrients from the gut, into the bloodstream. And what that does is that it actually keeps the liver safe, because it reduces the rate at which the liver has to metabolize, the stuff. And if you overload the liver, what it does is it has no choice but to turn extra energy into liver fat. And that's what drives this whole process. Is the process of liver fat accumulation, and the thing that does that the worst is sugar, especially when it's not teamed up with fiber."


Ostensibly, a "low-key ceremony in an actual beautiful natural area" (one that can support 360 guests) would have a far greater ecological impact than manufacturing such a place in an already developed campground.

To make such a suggestion, you'd have to claim that they shouldn't have invited nearly so many people, which is an uncomfortable claim to make.


They released an adapter:

http://store.apple.com/uk/product/MD099ZM/A/apple-iphone-mic...

Apparently these are quite difficult to buy in the United States.

Edit: Lightning to Micro-USB is only $20, though: http://store.apple.com/us/product/MD820ZM/A/lightning-to-mic...


Only $20? Are there even any voltage modifiers in it, or is it just a pin-to-pin mapper?


They are active devices.


Because Apple added DRM to the interface. They are active because Apple made that a requirement.

People should stop being apologetic about bullshit like this, which Apple is doing all the time, in small doses here and there.

If you keep taking this, there soon won't be any open, general-purpose computing platforms left.


You are misinformed. There may or may not be DRM according to the teardowns. There is certainly a bit of silicon which has a minimal DRM capability, but it may have come along for the ride. If there is DRM, it is not enforced. There are knockoff connectors out there.

The connector electronics are there to allow higher current charging by allocating more pins to power transfer when they are not needed for data and to allow as yet undeveloped higher speed protocols to operate on the same connector.

If there is DRM I can see Apple's point. When you destroy an iPhone with a cheap charger that fails, it costs Apple money to replace it. If you attempt to reverse-engineer the connector, you will get it wrong. The future capabilities are not present for observation. You will probably create a device that behaves improperly with future protocols. Will you destroy that future device by pulling -5V on a low-voltage differential data line?


Sure, but that chip doesn't have to do anything but authenticate. So what, ten cents?


That chip has to dynamically map the pins to work with USB. So, probably 20 cents.


This article focuses pretty heavily on the possibility of cache timing attacks against AES, and cites djb's original work along with Tromer/Osvik's publication in 2005.

Last week at CCSW, we published a paper[1] detailing our attempts to bring these attacks to bear against Chromium.

In short, we don't see AES cache timing attacks as possible on more recent processors, and especially so once you factor in the sheer size of modern architected code.

[1] http://cseweb.ucsd.edu/~kmowery/papers/aes-cache-timing.pdf


This is very cool, thanks for posting.

DJB's attacks were from a remote attacker's vantage point. But your paper also takes on Osvik and Tromer, who used "spy processes" to continuously probe local caches to create traces that could be analyzed for key information. I know your paper mentions branch prediction and says you don't have results for it, but what's your take on whether Aciicmez's BTB attack is going to remain viable?

I thought the BTB attack was the cleverest and most disquieting of the bunch, in that it suggested that we don't even know enough about the x86-64 microarchitecture to predict what side channel vulnerabilities we might have in software AES.

Regarding the paper itself: the most provocative claim it makes is that we're trending towards "complete mitigation" of cache side channel attacks. You give two reasons: AES-NI and multicore systems.

The AES-NI argument seems compelling but a little obvious, in the same sense as one could have argued that peripherals that offloaded AES would also blunt attacks against software AES. AES-NI blurs the line between software and hardware AES, but it's still a hardware implementation.

Another argumentative point that could be made here is, AES-NI mitigates cache-timing attacks against systems that use AES. It doesn't do much good if you can't use AES, since the most popular block ciphers that compete with AES are also implemented with table lookups.

I found the multicore argument a lot less compelling, since it relied in part on the notion that attackers wouldn't easily be able to predict the cache behavior of their target multicore systems. It seems to me that the most likely environment in which cache timing attacks are going to be a factor on the Internet is shared hosting environments, in which attackers with the sophistication to time AES are easily going to be able to get a bead on exactly what hardware and software they're aiming at. Most users of AES are also using off-the-shelf hardware and software.


Aciicmez's BTB attack looks at the branch predictor, and is potentially valid against any implementation which branches based on sensitive data. There's a whole class of these attacks which look at instruction paths, including a new one by Zhang et al. against ElGamal at CCS this year, but they usually target asymmetric ciphers. In particular, since AES doesn't have key-dependent branching, these attacks don't apply.

I do agree with you that x86-64 is extremely complicated, and that new attacks might crop up due to some future optimization.

As for the paper:

Yeah, AES-NI is sort of the final hammer against AES cache timing attacks, since it doesn't use the cache at all, but I felt that a paper on AES cache timing would be remiss without mentioning it :)

There are two parts to the multicore argument: the first is that it complicates things massively, and the second is that it can be a complete mitigation if used properly.

First is the complication bit, and that's just saying that the attacker must understand almost everything about the multicore implementation, including multilevel cache behavior and (possibly non-deterministic?) replacement strategy. I'm willing to believe that, were this the only hurdle, a dedicated attacker could still succeed. I was looking at a single core machine, so I didn't have to deal with the complexity here.

For the complete mitigation, you need to rely on platform support for core pinning. If you're allowed to say "I want to do encryption now, give me my own core for 400ms", then, since the 4KiB T-tables fit into your core's L2, attacker threads on other cores simply can't examine them during use. This complicates the VM hosting model and might enable a decent DoS attack, but it does completely stop cache-probing attacks.
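At the process level, the pinning idea can be sketched roughly as follows. This is only illustrative: Linux's sched_setaffinity restricts where the calling process runs, but actually getting a core to yourself exclusively requires the platform support described above (e.g. cpusets); encrypt_fn and the core number are placeholders.

```python
# Rough sketch of pinning an encryption call to one core (Linux affinity API).
import os

def encrypt_pinned(encrypt_fn, data, core=0):
    # os.sched_setaffinity exists on Linux; fall back to unpinned elsewhere.
    can_pin = hasattr(os, "sched_setaffinity")
    old = os.sched_getaffinity(0) if can_pin else None
    try:
        if can_pin:
            os.sched_setaffinity(0, {core})  # run only on the chosen core
        return encrypt_fn(data)              # T-tables stay in that core's cache
    finally:
        if can_pin:
            os.sched_setaffinity(0, old)     # restore the original CPU set
```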

Finally, as you said, my work can really only apply to AES on the x86 on the desktop. Change one of these variables (such as AES to ElGamal or RSA or Blowfish), and side channel attacks might still exist. Such is the problem with negative results :)


This was fun to read; thanks. It's interesting how side channel attacks can be both assisted and complicated by new hardware; usually, advances in hardware tend to favor attackers slightly more than defenders, but even just by pushing operations below attacker measurement thresholds --- without even trying, that is --- hardware makes some side channels very hard to exploit.

If you're an HN'er reading along at home, Aciicmez' BTB timing paper (you should just be able to Google that) is very very very cool. They not only realized that you could theoretically watch the caches used by the branch predictor to build a trace from which you could recover RSA keys, but also came up with a very simple way to profile those branch predictor caches; that is, they designed a "spy process" like Osvik and Tromer did for memory caches that targeted the BTB instead.

