The Factoring Cryptopocalypse (daemonology.net)
101 points by cperciva on Aug 10, 2013 | hide | past | favorite | 24 comments


I think Colin's bang on here. Now is the time for cryptographers and software developers to act to implement a mature alternative to RSA, but it's not yet time for your average Joe to switch to an as-yet relatively untested/unproven ECC implementation.

For what it's worth, Tom Ritter recently posted his notes on de-anonymizing alt.anonymous.messages[1], something widely believed to be very anonymous and very secure. It's an interesting read and shows why you not only need the crypto to be sound, but the implementation and your own use of it to be right too.

[1] - http://ritter.vg/blog-deanonymizing_amm.html


How is EC “relatively untested/unproven”? It's been in OpenSSL for ages (by internet norms), it's the subject of numerous national standards, and it secures a billion-dollar Bitcoin economy.


He said it's not time for the average programmer to switch to a new and untested ECC implementation, not that ECC itself is new and untested. By the standards of any of the common RSA libraries out there, their ECC counterparts would be relatively untested given how much comparative use each has seen.


Out of curiosity: If you were to design Tarsnap from scratch in 2013, would you still use RSA?


I think so.


Am I right to guess it's because you're more worried about what people get wrong in ECC implementations in 2013, perhaps things we don't even really know about in 2013, than about anything fundamental to ECC or RSA?


Pretty much. RSA is the devil we know.



Thanks, I missed the first discussion and was inspired to write only by the second discussion.


For someone who is somewhat unfamiliar with cryptography: what is the obstacle to creating cryptography that is provably secure, in other words, cryptography that can be absolutely trusted within certain stated parameters?


I'm by no means a cryptography expert, but a few difficult bits from what I've gathered, observing from a different area of CS that also cares about hardness (AI):

1. At the base, we need a one-way function [1]: something that's easy to compute in one direction but hard to reverse. For example, it is believed to be much easier to multiply two large primes than to factor the result back into the original two primes. However, so far nobody has come up with a way of proving that any function is one-way, for any plausible definition. In the popular press this is often conflated with the P=NP question, but that would only settle the negative direction: if P=NP, then there are no one-way functions (for a certain definition). Even if P!=NP were proven, it would still not necessarily follow that one-way functions exist. In particular, NP-complete problems are not automatically one-way functions, because NP-completeness is only a worst-case hardness notion, and cryptosystems that are unbreakable only in the worst case for the attacker are not so useful [2] (and indeed none of the major cryptosystems currently in use rely on NP-complete problems).

In short, the seemingly hard problems that practical cryptosystems have been built on, such as integer factorization (closely related to the RSA problem) and the discrete logarithm problem on elliptic curves (ECC), only seem hard; they are not proven hard in any rigorous sense. This is an interesting outstanding mathematical question, because it would be nice if we either understood why these problems are hard, and could prove them hard for some definition, or else discovered why they in fact aren't.
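To make the multiply-vs-factor asymmetry concrete, here's a toy Python sketch (with comically small primes; real RSA moduli are on the order of 2048 bits, where trial division is utterly hopeless):

```python
def multiply(p, q):
    # The "easy" direction: one machine multiplication.
    return p * q

def factor_by_trial_division(n):
    # The "hard" direction, done naively: ~sqrt(n) divisions.  Real
    # factoring algorithms are much better than this, but still
    # superpolynomial in the bit length of n as far as anyone knows
    # (classically, at least).
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None  # n is prime

p, q = 1000003, 1000033             # toy primes; RSA uses ~1024-bit primes
n = multiply(p, q)                  # instant
print(factor_by_trial_division(n))  # ~500,000 loop iterations already
```

Scaling the primes up by a factor of a thousand squares the factoring work while the multiplication stays essentially free; that growing gap is the whole game.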

2. At the practical level, it's hard to produce a security proof that takes into account an attack completely outside the conception of the original framework. For example, early security proofs were completely oblivious to the problem of side-channel attacks, since they were proving the security of certain mathematical procedures, in a framework that did not countenance things like "your mathematical procedure might be run on an EC2 instance where your attacker can also get a VM and collect some (noisy) data about what it's doing". This is arguably a quite different question from whether the fundamental cryptosystem's approach is flawed, but this kind of consideration is the source of many practical attacks.
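As a deliberately simplified illustration of a timing side channel (a sketch, not a real attack): a comparison that returns at the first mismatched byte leaks how long the matching prefix is, which is why constant-time comparisons like Python's hmac.compare_digest exist. The secret and helper names here are made up for the example.

```python
import hmac
import timeit

SECRET = b"hunter2hunter2hu"  # hypothetical secret token

def naive_check(guess):
    # Early-exit comparison: returns as soon as a byte differs, so the
    # running time depends on how much of the prefix matches.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def constant_time_check(guess):
    # hmac.compare_digest examines every byte regardless of mismatches.
    return hmac.compare_digest(guess, SECRET)

# A wrong guess matching a long prefix takes slightly longer under
# naive_check; with enough (noisy) samples the difference is measurable
# in principle, which is exactly the kind of leak the proofs ignored.
bad_prefix = b"hunter2hunter2hX"
for check in (naive_check, constant_time_check):
    t = timeit.timeit(lambda: check(bad_prefix), number=100_000)
    print(check.__name__, round(t, 4))
```

In a real setting the attacker averages over many requests to beat the noise, but the root cause is the same: the mathematical proof modelled the function's output, not its running time.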

[1] http://en.wikipedia.org/wiki/One-way_function

[2] http://www.kmjn.org/notes/nphard_not_always_hard.html


Heuristically speaking, this problem is at least as hard as the P != NP question, which is one of the Millennium Prize Problems for which the Clay Mathematics Institute will award one million dollars to whoever resolves it.

From a theory point of view, breaking a crypto-system typically means something like being able to do the following fast (i.e., in polynomial time): Given two plaintexts and a ciphertext which is the encryption of one of those two plaintexts, decide which of the plaintexts it is. Of course you could flip a coin, so the essential condition is that your success probability must be larger than 1/2.

So to show that a crypto-system is secure, you have to show that the problem of breaking it cannot be solved in polynomial time. We simply do not know how to show that any useful problem cannot be solved in polynomial time (we do know that P != EXPTIME, but we don't know any problems in EXPTIME that would be useful for cryptography). Hence the lack of provable security.


I think the complexity classes of discrete log and integer factorization are not known. No one has proved there isn't a polynomial-time algorithm, but if one is found it does not mean P = NP.


This is true. However, this actually means that the type of proof that kazagistar asked for is harder than a proof that P != NP. After all, if you prove that factorization cannot be done in polynomial time, then a corollary of this result is that P != NP.

Intuitively, NP-hard problems are believed to be harder than factorization and discrete log. And so it should be easier to prove that an NP-hard problem is not in P.

What you are referring to is the opposite direction of the implication, which is very relevant in a discussion of why people have failed to find faster algorithms for integer factorization. However, the question I replied to is essentially why people have failed to prove that integer factorization (and discrete log) are hard problems.

Edit: After re-reading my previous comment, perhaps this clarifies it best: Integer factorization and discrete log are probably less hard than NP-hard problems. However, the meta-problem "prove that integer factorization or discrete log is really, unconditionally hard" is at least as difficult as the meta-problem "prove P != NP".


There is one public-key system that is pretty much that: it's called Merkle Puzzles. It's kind of non-practical, though. No one has found a system that is both practical and provably secure; you need the right kind of problem. Pretty much all the problems we have today that can be used for public-key cryptography are thought to be hard but can't be proved to be hard; under the assumption that they are hard, the system is proved to be secure.

Symmetric cryptography (e.g. AES) is in a slightly different boat, but again the security proofs are based on assumptions about the function used, and I don't think anyone has found a function that is provably secure (though there are various constructions in use that have some provable characteristics). Also, under some base security assumption, there are further constructions of ciphers that can be proved secure.
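For the curious, here's a toy Python sketch of the Merkle-puzzle idea (names and parameters are made up for illustration, and real parameters would be astronomically larger): Alice publishes many puzzles, each brute-forceable individually; Bob solves one and announces only its id, so an eavesdropper expects to solve about half of them.

```python
import hashlib
import secrets

N = 64      # puzzles Alice publishes (toy size)
HARD = 64   # brute-force work to crack ONE puzzle

def keystream(puzzle_key, length):
    # Stretch a small puzzle key into a pad (illustrative, not a real cipher).
    return hashlib.sha256(b"puzzle" + puzzle_key.to_bytes(4, "big")).digest()[:length]

def make_puzzle(puzzle_id, session_key):
    # Payload: recognizable tag + puzzle id + session key, XORed with a
    # pad derived from a deliberately weak (brute-forceable) key.
    payload = b"PUZL" + puzzle_id.to_bytes(2, "big") + session_key
    weak_key = secrets.randbelow(HARD)
    return bytes(a ^ b for a, b in zip(payload, keystream(weak_key, len(payload))))

def solve_puzzle(ct):
    # Bob (or Eve) brute-forces the weak key: ~HARD tries per puzzle.
    for k in range(HARD):
        pt = bytes(a ^ b for a, b in zip(ct, keystream(k, len(ct))))
        if pt.startswith(b"PUZL"):
            return int.from_bytes(pt[4:6], "big"), pt[6:]
    raise ValueError("no key found")

# Alice publishes N puzzles and remembers which session key is in which.
session_keys = [secrets.token_bytes(16) for _ in range(N)]
puzzles = [make_puzzle(i, sk) for i, sk in enumerate(session_keys)]

# Bob solves ONE random puzzle (~HARD work) and announces only its id.
pid, shared = solve_puzzle(secrets.choice(puzzles))
assert shared == session_keys[pid]
# Eve sees only the id, so on average she must crack N/2 puzzles at
# HARD work each: ~N*HARD/2 total, only quadratically more than Bob.
```

The security rests entirely on that quadratic work gap between Bob and Eve, which is why the construction is honest but impractical.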


To expand on why it's non-practical: the gap between the "easy" and "hard" directions is only quadratic, whereas key-exchange protocols based on integer factorization and other such unproven candidates for one-way functions are conjectured to have a super-polynomial gap. With such a small gap it's much harder to produce a setup that is on the one hand actually usable by the parties, but on the other hand intractable for the attacker.

You might wonder if something closer to Merkle's approach could be improved to produce a larger gap, but the answer turns out to be no: http://www.boazbarak.org/Papers/merkle.pdf


Usability. One time pads have been around forever, are provably secure, but a pain in the ass.

Modern symmetric ciphers solve the "you need to securely exchange as much key material as the data you wish to send" problem by using mathematical formulas to stretch key material. Asymmetric ciphers use mathematical formulas to fix "you need a secure way to exchange keys."
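A quick Python sketch of that contrast (illustrative only; the hash-in-counter-mode "stream cipher" here is a stand-in for a real vetted cipher such as AES-GCM or ChaCha20-Poly1305): the pad must match the message in length, while a short key can be stretched into an arbitrarily long keystream.

```python
import hashlib
import secrets

def otp_encrypt(pad, msg):
    # One-time pad: provably secure, but the pad must be truly random,
    # as long as the message, used once, and exchanged securely.
    assert len(pad) >= len(msg)
    return bytes(p ^ m for p, m in zip(pad, msg))

def stream_encrypt(key, msg):
    # "Stretching" a short key into a long keystream with a hash in
    # counter mode -- believed-secure math replacing the long pad.
    stream = bytearray()
    ctr = 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(s ^ m for s, m in zip(stream, msg))

msg = b"the quick brown fox jumps over the lazy dog"
pad = secrets.token_bytes(len(msg))  # key material == message length
key = secrets.token_bytes(16)        # 16 bytes covers any message length

assert otp_encrypt(pad, otp_encrypt(pad, msg)) == msg       # XOR twice = identity
assert stream_encrypt(key, stream_encrypt(key, msg)) == msg
```

The pad buys unconditional security at the cost of key logistics; the stream cipher buys convenience at the cost of an unproven hardness assumption about the stretching function.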

Unfortunately, the math can't be proved secure, only believed secure or proved insecure.


>Usability. One time pads have been around forever, are provably secure, but a pain in the ass.

With cell phones so widespread, I wondered why someone doesn't write a one time pad app. People could share gigabyte-sized pads via trusted wireless or a cable if they prefer.

http://softthere.com/projects/otp/


The only reason to use one-time pads is an extraordinary level of paranoia. If you are paranoid enough to need a one-time pad then it makes little sense for you as a private citizen to trust that hardware you take into public places hasn't been compromised. Manual creation and transmission of the encrypted message is the only approach that makes sense.

If lots of people adopted one-time pads it might hinder NSA et al but this would be akin to trying to convince people to wear masks whenever they are in public to hinder monitoring with CCTV.


Not sure why this is the case though. What makes it inherently impossible to prove something is secure?


As others have mentioned, one-time pads are unbreakable:

"While he was at Bell Labs, Shannon proved that the cryptographic one-time pad is unbreakable in his classified research that was later published in October 1949" - http://en.wikipedia.org/wiki/Claude_Shannon

They can also be made to decrypt to any plaintext string:

http://16s.us/FreeOTP/nsa/

However, OTPs do not scale to more than two or three people, they don't work at all on very large messages (the key has to be as large as the message), and they have authentication issues. For two people sending small messages, one-time pads are ideal, and if the pads are created, used, and destroyed correctly, the encryption is indeed unbreakable.
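The "decrypt to any plaintext" trick is easy to see in code: given a ciphertext, anyone can construct a fake pad that maps it to any same-length message they like. A minimal Python sketch:

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b"MEET AT NOON"
pad      = secrets.token_bytes(len(real_msg))
ct       = xor(pad, real_msg)         # what the adversary sees

# Under coercion, hand over a FAKE pad that "decrypts" the very same
# ciphertext to an innocuous message of the same length:
fake_msg = b"HAPPY BDAY!!"
fake_pad = xor(ct, fake_msg)

assert xor(pad, ct) == real_msg       # the real decryption
assert xor(fake_pad, ct) == fake_msg  # the deniable one
```

This is the flip side of perfect secrecy: since every same-length plaintext is equally consistent with the ciphertext, the ciphertext alone proves nothing about which pad is genuine.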


It's my understanding that the one-time pad is proven secure, but it's not very practical.


ECC means "elliptic curve cryptography" here; it is not about "ECC memory". I've been so confused by all the discussion so far because of this. Dear masters of your domain: if an acronym is not unique, please try not to use it, or at least introduce it!


Alternatively, when trying to follow a conversation in a domain specific context, learn the terms.



