Guest post by Janos Simon
A group of researchers (Arjen Lenstra and collaborators from EPFL Lausanne, and James Hughes from Palo Alto) published a study, "Ron was wrong, Whit is right," on new vulnerabilities in cryptosystems. The New York Times picked up the story. Although Lenstra et al. discuss several cryptosystems, their results are particularly relevant to those based on RSA. The title reflects their conviction that cryptosystems based on a single random element have fewer key generation problems than RSA, which uses two random primes.
The technical problem they identify in RSA is the following: the RSA cryptosystem uses a modulus n that is the product of two large "random" primes p and q. Actual keys may not be truly random, and this can cause several problems:
1. Different users may end up with the same n. Since a user knows the factors p, q, she will be able to decrypt the data of any user with the same modulus.
2. If two users share one of the factors (user A's modulus is pq, user B's is pr), they will be able to decrypt each other's data. Given two moduli, one can use the Euclidean algorithm to determine whether they have a common factor, and to find it if it exists.
Note that the second vulnerability is more insidious: in the first, only the user holding the matching key can decrypt her mate's messages, while in the second, anyone can crawl the web looking for pairs of keys with a common factor.
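The second attack really is this simple. Here is a minimal Python sketch of the shared-factor check using the Euclidean algorithm (via the standard library's math.gcd); the function name and the toy primes are mine, chosen only for illustration:

```python
from math import gcd

def shared_factor_attack(n1, n2):
    """If two distinct RSA moduli share a prime factor, the Euclidean
    algorithm (gcd) reveals it, and both moduli can then be factored."""
    p = gcd(n1, n2)
    if p == 1 or p == n1:
        # gcd 1: the moduli are coprime, no attack.
        # gcd n1: the moduli are identical (vulnerability 1, not this one).
        return None
    return (p, n1 // p), (p, n2 // p)

# Toy-sized primes for illustration; real RSA primes are ~1024 bits,
# but gcd remains fast even at that size.
p, q, r = 101, 103, 107
print(shared_factor_attack(p * q, p * r))  # recovers both factorizations
```

The crucial point is that gcd runs in time polynomial in the length of the moduli, so scanning millions of collected keys for common factors is entirely feasible, whereas factoring any single modulus is not.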
The lack of randomness in key generation may be caused by bad choices for the seed of a random number generator. As an extreme example, devices may be shipped with a standard common seed, in which case all devices would generate the same n. In general, if the collection of seeds is a low-entropy set, insecure keys will be generated with high probability.
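A toy Python sketch makes the common-seed failure concrete. The helper names and the 16-bit "primes" below are my own illustrative choices; real implementations use cryptographically secure randomness and ~1024-bit primes:

```python
import random

def is_prime(m):
    """Trial division; fine for toy 16-bit candidates."""
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def toy_rsa_modulus(seed, bits=16):
    """Generate a toy RSA modulus n = p*q from a seeded PRNG.
    A deterministic PRNG with a fixed seed always yields the same primes."""
    rng = random.Random(seed)
    def rand_prime():
        while True:
            candidate = rng.randrange(2 ** (bits - 1), 2 ** bits) | 1
            if is_prime(candidate):
                return candidate
    return rand_prime() * rand_prime()

# Two "devices" shipped with the same seed produce the same modulus:
assert toy_rsa_modulus(seed=42) == toy_rsa_modulus(seed=42)
```

With a common seed every device lands on the same n (vulnerability 1); with a low-entropy seed pool, two devices can agree on the first prime but diverge on the second, producing the shared-factor keys of vulnerability 2.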
The EPFL group collected 11.7 million public keys "while avoiding activities that our system administrators may have frowned upon" and essentially found that about 99.8% of the keys were not insecure (to the extent that they did not suffer from the vulnerabilities above).
Is this secure enough?
Note that 0.2 percent of 11.7 million comes to roughly 23,000 bad keys.
To make matters murkier, another group, with researchers from the University of Michigan and UCSD, performed a somewhat similar experiment. Their results are not published yet, but one of the authors, Nadia Heninger, blogged about them on Freedom to Tinker. They find a similar proportion of bad keys, but they claim that the vulnerability occurs mostly in embedded devices such as firewalls and routers, so "important" keys like bank certificates are not affected. Lenstra et al. disagree.
Perhaps we should be happy that these vulnerabilities are not due to weak Theory, but to bad implementations of good theoretical ideas....