Friday, February 24, 2012

Is 99.8% Secure Secure?

Guest post by Janos Simon

A group of researchers (Arjen Lenstra and collaborators from EPFL Lausanne and James Hughes from Palo Alto) published a study, Ron was wrong, Whit is right, of new vulnerabilities of cryptosystems. The New York Times picked up the story. Although Lenstra et al. discuss several cryptosystems, their results are particularly relevant to those based on RSA. The title mirrors their conviction that cryptosystems based on a single random element have fewer key generation problems than RSA, which uses two random primes.

The technical problem they identify in RSA is the following: the RSA cryptosystem uses a modulus n that is the product of two large "random" primes p and q. In practice these primes may not be truly random, and this can cause several problems:

1. Different users may end up with the same n. Since a user knows the factors p, q, she will be able to decrypt the data of any user with the same modulus.

2. If two users share one of the factors (user A's modulus is pq, user B's is pr), they will be able to decrypt each other's data. Given two moduli, one can use the Euclidean algorithm to determine whether they have a common factor, and find it if it exists (see the sketch below).

Note that the second vulnerability is more insidious: in the first, only the user holding the matching key can decrypt her mate's messages, while in the second anyone can scan the web for pairs of keys with a common factor.
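To make the second attack concrete, here is a minimal sketch (my own toy illustration, not code from the paper) using tiny hard-coded primes and Python 3.8+; real keys use primes of 512 bits or more. Given only the two public moduli, the shared factor -- and with it both private keys -- falls out of a single gcd.

```
from math import gcd

# Toy primes; real RSA keys use primes of 512+ bits.
p, q, r = 61, 53, 67
e = 17                          # public exponent, assumed coprime to both totients

n_a = p * q                     # user A's public modulus
n_b = p * r                     # user B's public modulus shares the factor p

shared = gcd(n_a, n_b)          # Euclidean algorithm on two *public* values
assert shared == p

# Anyone who spots the shared factor can rebuild both private exponents:
q_a, q_b = n_a // shared, n_b // shared
d_a = pow(e, -1, (shared - 1) * (q_a - 1))   # modular inverse (Python 3.8+)
d_b = pow(e, -1, (shared - 1) * (q_b - 1))

msg = 42
assert pow(pow(msg, e, n_a), d_a, n_a) == msg   # A's key recovered from public data
assert pow(pow(msg, e, n_b), d_b, n_b) == msg   # B's key recovered as well
```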

The lack of randomness in key generation may be caused by bad choices for the seed of a random number generator. As an extreme example, devices may be shipped with a standard common seed, in which case all devices would generate the same n. In general, if the seeds come from a low-entropy set, insecure keys will be generated with high probability.
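As a toy illustration of my own (16-bit "primes", nothing like real key sizes), the sketch below shows how a hard-coded seed collapses key generation: every device seeded the same way derives the same modulus.

```
import random

def is_prime(n):
    # Trial division; adequate for the toy sizes used here.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def gen_prime(rng, bits=16):
    # Draw random odd candidates with the top bit set until one is prime.
    while True:
        candidate = rng.getrandbits(bits) | 1 | (1 << (bits - 1))
        if is_prime(candidate):
            return candidate

def gen_modulus(seed):
    rng = random.Random(seed)   # low-entropy seed: same seed, same key
    p = gen_prime(rng)
    q = gen_prime(rng)
    return p * q

# Two "devices" shipped with the same hard-coded seed produce identical moduli:
print(gen_modulus(1234) == gen_modulus(1234))   # True
```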

The EPFL group collected 11.7 million public keys "while avoiding activities that our system administrators may have frowned upon" and essentially found that about 99.8% of the keys were not insecure (to the extent that they did not suffer from the vulnerabilities above).
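Conceptually, the check amounts to looking for non-trivial common factors across the whole collection of public moduli. The sketch below is a naive quadratic pairwise scan of my own, not the authors' code; at the scale of millions of keys the published studies rely on Bernstein's batch-GCD technique (product and remainder trees) instead.

```
from math import gcd
from itertools import combinations

def find_shared_factors(moduli):
    """Map the index of each compromised modulus to the prime it shares."""
    compromised = {}
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = gcd(n1, n2)
        if 1 < g < n1:            # a proper common factor breaks both keys
            compromised[i] = g
            compromised[j] = g
    return compromised

# Toy collection: the first and third moduli share the prime 61.
keys = [61 * 53, 59 * 71, 61 * 67, 73 * 79, 83 * 89]
print(find_shared_factors(keys))  # {0: 61, 2: 61}
```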

Is this secure enough?

Note that 0.2 percent of 11.7 million is tens of thousands of bad keys.

To make matters murkier, another group, with researchers from the University of Michigan and UCSD, did a somewhat similar experiment. Their results are not yet published, but one of the authors, Nadia Heninger, blogs about them on Freedom to Tinker. They find a similar proportion of bad keys, but they claim the vulnerability mostly occurs in embedded devices like firewalls and routers, so "important" keys like bank certificates are not affected. Lenstra et al. disagree.

Perhaps we should be happy that these vulnerabilities are not due to weak Theory, but to bad implementations of good theoretical ideas....

9 comments:

  1. It's a very interesting paper, and a fascinating security flaw, but it seems that the title of their paper is ill-advised.

    First of all, the problem is not with RSA but with random number generation in embedded systems.

    Second of all, why is it Ron that was wrong (and not also Adi and Len?) and Whit (and not also Martin) that was right?

    As far as I can tell, the title of the paper is unsupported by its contents, even though the contents make for a very interesting paper.

  2. I was also going to ask what the paper title meant, because I couldn't find it explained in the paper.

  3. Reminds me of the famous xkcd comic on the difference between theory and practice of crypto: http://xkcd.com/538/

  4. "Perhaps we should be happy that these vulnerabilities are not due to weak Theory, but to bad implementations of good theoretical ideas.... "
    I think the reason why a lot of implementors use ad hoc and weak solutions to produce randomness is that theory hasn't provided them with good enough results on practical random number generators.
    It's better than a security flaw in the core of RSA, but in my view this is still a flaw on our part.

  5. Perhaps I misunderstood, but it seems like a malicious user could go about generating a massive number of keys for themselves and then use these techniques to find which keys out there are vulnerable. Thus the 99.8%, which seems to be based on having access to 11M such keys, would drop significantly as the number of keys increased. At some point in time it would get close to zero?

    Paul.

  6. Paul, the attack you describe is equivalent to trying to factor n by just guessing random factors. It won't have any chance of success if I generated my key with a proper source of randomness -- it will only have a chance of success against keys that were generated without proper randomness, which Heninger claims are mostly these embedded systems, not websites or banks.

  7. Anonymous(1) -- I don't agree with the title of the paper, or the implications. But to phrase things in a less inflammatory manner, there are two kinds of issue here:

    1. Use of 'poor' entropy makes keys vulnerable to a user who can actually /predict/ the entropy
    2. Use of 'poor' entropy makes keys vulnerable even to attackers who can't predict the entropy

    Every cryptosystem fails in case (1). RSA fails in cases (1) and (2). Hence even if I pick an extremely high-quality PRNG seed /and/ you can't guess it, I'm vulnerable if I share that seed with one other person -- even if I trust that they will never reveal it (this requires that they only use it to pick one prime, of course). Whereas with most other cryptosystems you would end up with two different (but mutually trusting) parties sharing the same key, which is bad, but does not lead to secret key recovery.

    Other than that, the title is over the top.

    Insofar as I'll criticize theoreticians for this kind of thing (in general) it's because they're too willing to make assumptions about the quality of inputs to a given cryptosystem. "Yes, it's secure under this assumption" only makes sense if that assumption is borne out by the real world. The heartening thing is that we now see a lot of theoretical research that considers extremely advanced (and unlikely) attack models -- e.g., leakage-resilient crypto, etc.

  8. More often than not, it would be "secure enough".

    @PaulHomer The number approaches zero in proportion to the increase in keys. So, yes.

    ~sarah

  9. "If two users share one of the factors, (user A's modulus is pq, user B's is pr) they will be able to decrypt each other's data."

    This is actually much worse than that: any person with access to both public keys will be able to detect that a factor is shared and compute their private keys.
