Monday, January 12, 2004

Arrr, Even Pirates Be Polynomially-Bounded (by guest blogger Scott Aaronson)

Leaving home after the holidays, I said goodbye tonight to my friend since junior high school, Alex Halderman. You might have read about Alex in the news: Princeton computer science graduate student, breaker of music CD copy-protection schemes, and the first person ever to attain national fame for holding down a Shift key. (Alas, I tease him, my own research too often entails pressing multiple keys.)

Alex's recent run-in with the recording industry got me thinking about whether anti-piracy technology can have any theoretical basis. Passive experiences like listening to music are hard to copy-protect for an obvious reason: if you can see or hear something, then you can also record it, especially since disk space is almost never a limitation today. (Admittedly, there's some loss of quality any time you convert from digital to analog and back. Also, this theory would predict rampant piracy of books, which hasn't happened -- yet.)

The copy-protection problem is more interesting for interactive experiences like video games and application software. The standard solution -- you send the software company your computer's hardware ID, X, and the company sends you back a key f(X) that unlocks the program -- is insecure. You could always copy the unlocked program, then run it on another computer using an emulator. Whenever the program asks for the hardware ID, the emulator says it's X.
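Here's a minimal sketch of that standard scheme and the emulator attack on it (the hash-based key derivation, the secret, and all the names are my own illustration, not anything a real vendor necessarily uses):

```python
import hashlib

COMPANY_SECRET = b"company-master-secret"   # hypothetical; known only to the vendor

def issue_key(hardware_id: str) -> str:
    """Company side: the unlock key f(X) derived from the hardware ID X."""
    return hashlib.sha256(COMPANY_SECRET + hardware_id.encode()).hexdigest()

def run_program(reported_id: str, unlock_key: str) -> None:
    """Program side: refuse to run unless the key matches the reported ID.
    (For brevity the same secret is reused to verify; a real program would
    embed only a verifier, but the attack below works either way.)"""
    if unlock_key != issue_key(reported_id):
        raise SystemExit("invalid key")
    print("running on machine claiming ID", reported_id)

# Legitimate use: the buyer reports X and receives f(X).
X = "real-hardware-id"
key = issue_key(X)
run_program(X, key)

# The pirate's emulator simply reports X on every machine, so the one
# (X, f(X)) pair unlocks every copy.
run_program(X, key)
```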

A better solution involves a program that constantly needs to communicate with a central server in order to run. For example, the program could demand a new key each time it's executed (based on its current input), which the server supplies only after receiving the previous key it issued. That way, any pirated copies of the program not only have to spoof IP addresses; they have to remain in communication with each other (or else be coordinated by a "renegade" server) in order to synchronize their keys.
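Here is a toy sketch of that key-chaining idea (the class names, the random 16-byte keys, and the license IDs are all hypothetical, just to make the synchronization point concrete):

```python
import os

class LicenseServer:
    """Toy central server: issues a fresh key per execution, but only to a
    caller that hands back the previous key it was issued."""
    def __init__(self):
        self.current = {}                      # license_id -> last key issued

    def register(self, license_id):
        self.current[license_id] = os.urandom(16).hex()
        return self.current[license_id]

    def next_key(self, license_id, previous_key):
        if self.current.get(license_id) != previous_key:
            raise PermissionError("stale key: duplicated or out-of-sync copy")
        self.current[license_id] = os.urandom(16).hex()
        return self.current[license_id]

class Program:
    def __init__(self, license_id, server, first_key):
        self.license_id, self.server, self.key = license_id, server, first_key

    def run(self, user_input):
        # Each execution trades in the old key for a new one.
        self.key = self.server.next_key(self.license_id, self.key)
        print("computing on", user_input)

server = LicenseServer()
k0 = server.register("license-42")
genuine = Program("license-42", server, k0)
pirate = Program("license-42", server, k0)     # a byte-for-byte copy

genuine.run("input A")                          # consumes k0, obtains k1
try:
    pirate.run("input B")                       # still holds the stale k0
except PermissionError as err:
    print("pirated copy rejected:", err)
```

The two copies can only keep running if they coordinate on whose turn it is to hold the current key, which is exactly the synchronization burden described above.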

An even more centralized solution is to run the whole program off a server and charge for each use. In this situation, a program can be "pirated" only if (in learning theory terms) the function that it computes is PAC-learnable from membership queries. The downside, of course, is the high cost in server computing time and communication latency.
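To make the learning-theory remark concrete, here's a toy case of my own devising: if the server-hosted program happened to compute a parity function (the XOR of some secret subset of its input bits), then n membership queries would recover it exactly, and the "pirate" would walk away with a perfect local copy. Most programs of commercial interest are presumably not so learnable, which is why hosting them server-side can help.

```python
# Hypothetical pay-per-use server whose whole "program" is a parity function:
# the XOR of a secret subset of the input bits.
SECRET_SUBSET = {1, 3, 4}            # known only to the server
N = 6                                # number of input bits

def server_query(x):
    """Membership-query oracle: one billed call to the hosted program."""
    return sum(x[i] for i in SECRET_SUBSET) % 2

def learn_parity(n, query):
    """Recover the secret subset with n queries: query each unit vector e_i;
    the answer reveals whether bit i participates in the parity."""
    relevant = set()
    for i in range(n):
        e = [0] * n
        e[i] = 1
        if query(e) == 1:
            relevant.add(i)
    return relevant

print(learn_parity(N, server_query))   # {1, 3, 4} -- a perfect local copy
```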

Open Research Issue #1. Is there a way for the user's machine to do almost all the actual computation, yet still need a short message from the server to "unlock" its results? If so, how much can the required communication with the server be reduced (especially the number of rounds)? Boaz Barak has pointed me to some relevant crypto papers, including one by Sander, Young, and Yung; but the general problem seems wide open.

Of course there's always copy-protection based on physically opaque hardware, such as dongles, smartcards, or the 'Fritz' chip. Since I have no idea how secure these technologies really are, I prefer to end this post with a question more up my alley:

Open Research Issue #2. Can the No-Cloning Theorem of quantum mechanics be exploited to create unpirateable software? What we want is a quantum state ψ such that (1) a program P can be written that needs to measure ψ in order to work correctly; (2) ψ can be prepared by a polynomial-size quantum circuit, given secret information known only to the software company; and (3) a circuit for ψ can't be efficiently inferred, even given P's source code and unlimited copies of ψ. More impressive still would be if P used the same state ψ over and over, without the software company needing to provide a fresh copy for each execution. I suspect the latter is impossible. Proof?
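For reference, the standard one-line linearity argument behind the No-Cloning Theorem (a textbook derivation, included only as a reminder): if a single unitary U cloned every state, then for any two states ψ and φ we'd have

```latex
U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle,
\qquad
U(|\phi\rangle \otimes |0\rangle) = |\phi\rangle \otimes |\phi\rangle,
```

and taking the inner product of the two equations (U preserves inner products) gives ⟨φ|ψ⟩ = ⟨φ|ψ⟩², forcing ⟨φ|ψ⟩ to be 0 or 1. So no device can copy two distinct, non-orthogonal states -- which is what makes a state like ψ a candidate for an uncopyable "license token" in the first place.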
