## Friday, January 30, 2004

### A Little Theorem

Here's a simple result that I have seen several times recently, one that's a bit surprising when you first see it.

Theorem: co-NEXP is in NEXP/poly.

NEXP is the class of languages accepted in nondeterministic time 2^poly(n). A language L is in C/poly for a complexity class C if there is a language A in C and a list of strings a_0, a_1, ... with |a_n| bounded by a polynomial in n such that x is in L if and only if (x, a_|x|) is in A.

I don't know who first showed this theorem, and since the proof is rather simple it may never have been published. Let K be a complete set for NEXP, and let the polynomial-length advice a_n for strings of length n be just the number of strings of length n in K. To nondeterministically check that a string y of length n is not in K, just guess a_n strings of length n other than y and verify that they are all in K.
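The proof is easy to play with concretely. Here's a toy Python sketch of the census-advice trick (the set K, the function names, and the brute-force search standing in for the nondeterministic guess are all my own illustration):

```python
# Toy illustration of the census-advice trick behind co-NEXP in NEXP/poly.
# The advice a_n is the "census": the number of length-n strings in K.
# To certify that y is not in K, exhibit a_n distinct length-n members of K,
# none equal to y. Brute-force search stands in for nondeterministic guessing.

from itertools import product

def census_advice(K, n):
    """The advice: how many strings of length n lie in K."""
    return sum(1 for s in K if len(s) == n)

def certify_not_in_K(K, y):
    """Return a_n witnesses proving y is not in K, or None if y is in K."""
    n = len(y)
    a_n = census_advice(K, n)
    witnesses = [s for s in (''.join(bits) for bits in product('01', repeat=n))
                 if s != y and s in K]
    # If we found a_n members of K of length n, all different from y,
    # then K's census is exhausted without y -- so y cannot be in K.
    return witnesses if len(witnesses) == a_n else None

K = {'00', '11', '010'}                # a stand-in for an NEXP-complete set
print(certify_not_in_K(K, '01'))       # ['00', '11']: certificate that '01' is not in K
print(certify_not_in_K(K, '11'))       # None: '11' is in K, so no certificate exists
```

Of course the real proof guesses the witnesses nondeterministically in exponential time; the brute-force loop here is only to make the counting argument tangible.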

This kind of result likely does not hold for NP. Yap shows that if co-NP is in NP/poly then the polynomial-time hierarchy collapses to the third level. The theorem above also does not imply that co-NEXP has subexponential-size nondeterministic circuits, since exponential-size circuits might be required to describe the exponential-time computation.

Harry Buhrman noticed that you can strengthen the result to show that EXP^NP_tt is in NEXP/poly, where EXP^NP_tt is the class of languages nonadaptively (truth-table) reducible to an NP set in exponential time.

## Wednesday, January 28, 2004

### The Defense, Part II

Hein Röhrig successfully defended his Ph.D. thesis at the University of Amsterdam yesterday. The Dutch thesis defense reminds me most of a traditional American wedding. The defense takes place in a chapel. The players include the defender (Röhrig), two paranimfen (the groomsmen role), the promotor (advisor; in Röhrig's case two promotors: Harry Buhrman and Paul Vitányi), a Pedel (an official position in the university, now held by a woman, who plays a master-of-ceremonies role) and eight opponents (including myself). The defender and paranimfen are in full tux and tails, the Pedel and full professors are in academic gowns and the other opponents are in suits. In the audience are the defender's friends and family.

The ceremony starts with the defender giving a short description of his thesis to the audience from a podium in front of the chapel. Led by the Pedel, the promotors and opponents enter the chapel from the back and march to sit in the choir seats. For forty-five minutes the opponents, one at a time, ask the defender hard questions about his thesis. At the end the Pedel reenters the chapel, marches to the front, hits her staff on the ground and says "Hora Est" (Time has expired). The opponents and promotors march out of the chapel to a discussion room where we vote on the defense and sign the thesis. We march back in and present the diploma, with the promotors reading some traditional text and giving a short speech.

The ceremony is followed by a receiving line and reception with dinner later on.

Call me a romantic, but I truly enjoy the pomp and circumstance that accompanies the Dutch defense and is sorely lacking in its American counterpart.

## Monday, January 26, 2004

### Howdy from Amsterdam

I have returned to Amsterdam for the week. I did my sabbatical in Amsterdam seven years ago and I always enjoy the visit. Yesterday I saw the soccer team Ajax Amsterdam beat NEC (the team from Nijmegen, not my previous employer). Today I am visiting CWI, the Dutch math and computer science institute, in the group of Harry Buhrman (my most prolific co-author) and Paul Vitányi (who co-wrote the book on Kolmogorov complexity).

Also visiting CWI is Kolya Vereshchagin from Moscow. I had an interesting idea about Kolmogorov complexity but Vereshchagin had the same idea weeks ago. Hate when that happens.

Tomorrow is Hein Röhrig's Ph.D. defense. Hein always wanted me to mention him in this weblog having mentioned both of his officemates, John Tromp and Ronald de Wolf, before. So here is my graduation present to Hein.

## Friday, January 23, 2004

### STOC and the NSF

A couple of quick notes.

The list of accepted papers for the upcoming STOC conference has been posted. The most intriguing looking paper in complexity is Multi-Linear Formulas for Permanent and Determinant are of Super-Polynomial Size by Ran Raz (also mentioned earlier by Scott Aaronson).

Congress has finally passed the FY 2004 US budget. A 5% increase for NSF, 4.8% for research and related activities.

### The Defense, Part I

You've taken your classes, passed your preliminary/qualifying exams, done your research and written your thesis. What stands between you and the Ph.D.--the thesis defense.

After all this buildup, the defense in the States is rather anticlimactic. The student gives an extended talk on his thesis research, the thesis committee peppers the student with questions, then the committee deliberates and decides whether to pass the student.

The defense is mostly for show, the student almost always passes. If the student didn't deserve to pass the fault lies not with the student but with the advisor for letting the process get this far. A dirty little secret: We often make the deliberation longer than needed just to add a little drama.

I have served on defense committees in Denmark and Portugal that follow this same basic plan. But not all countries do the same.

I remember visiting the University of Karlsruhe (Germany) when a parade broke out. I asked about the parade and my host said someone just got their Ph.D. I have also heard of some countries where the advisor has to defend the thesis. Glad I don't teach there.

I bring this up because next week I sit on my third committee at the University of Amsterdam. The Dutch do their Ph.D. defenses the way a defense ought to be done. What happens at the Dutch defense? I'll let you know next week.

## Thursday, January 22, 2004

### John Lewis

[From Chris Fuchs]

Dear friends in the quantum information and foundations communities,

Many of you may not know it, but the concept of a generalized quantum measurement or positive-operator-valued measure was introduced in

E. B. Davies and J. T. Lewis, "An Operational Approach to Quantum Probability," Communications in Mathematical Physics 17, 239-260 (1970).

Last night, John Lewis passed away here in Dublin, his home of 28 years, from complications due to a recent surgery. He was a good man, honest and upright, and left us a deep legacy. He will be missed.

## Wednesday, January 21, 2004

### The Da Vinci Code

On my vacation I read Dan Brown's The Da Vinci Code, a very popular book I received recently as a gift. Warning: Minor spoilers follow.

I always enjoy a novel with an academic protagonist but the Da Vinci Code reads like a bad conspiracy theory using the roles of the professor and other experts to give the theory some weight. But I bring up this book in this weblog because it spends considerable time on various cryptographic schemes.

I don't blame the author for not using modern cryptography, but the methods described would have been laughable 50 or 100 years ago. Imagine using the password 1123581321 to guard the biggest secret in the history of religion. It only gets worse: backwards writing, simple anagrams, substitution ciphers, riddles. I suppose these make for fun puzzles for the reader but they do not make for a safe secret.
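That password, incidentally, is nothing more than the first eight Fibonacci numbers run together -- a quick sketch:

```python
# The "secret" password 1123581321 is just the Fibonacci sequence
# 1, 1, 2, 3, 5, 8, 13, 21 concatenated -- hardly a match for a codebreaker.
def fib_password(k):
    a, b = 1, 1
    digits = []
    for _ in range(k):
        digits.append(str(a))
        a, b = b, a + b
    return ''.join(digits)

print(fib_password(8))  # -> 1123581321
```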

The book describes one intriguing device supposedly invented by Leonardo Da Vinci called a cryptex, a small cylinder with a combination lock that will destroy its written contents if broken. However, the book calls it a rudimentary form of public-key cryptography, which only tells me Dan Brown has no idea what that term means.

For a far better novel dealing with cryptography and the related paranoia, check out Neal Stephenson's Cryptonomicon.

## Monday, January 19, 2004

### I'm Back

Thanks to Scott Aaronson for covering for me last week. If you've enjoyed the last week, check out more of his writings. Maybe the weblog bug has bit him and he will start his own blog someday.

I feel the need to remark on Scott's advice post and comments, particularly the following paragraph (having just come back from a week of skiing and no research).

So then, how do you do original research? By throwing your entire life into it. Many researchers play piano, go to clubs, sail, etc., but if they're any good they probably think about research while they're doing these things. I know grad students who never suffer the indignity of working late into the night. They go surfing with friends every weekend and are constantly away on road trips. At this rate, they'll enjoy life more than I will but won't be successful researchers.

And now a message for Warren, the college freshman with a potential interest in graduate school. Take some computer science classes and lots of math classes, particularly probability, algebra and logic. But most important of all, don't worry about research now. Enjoy your college days, get involved in lots of activities, have an active social life. You'll have plenty of time for research in graduate school.

### Scaring Away The Scientists Of Tomorrow (last post of guest blogger Scott Aaronson)

Sir Lance-lot has returned, and tomorrow will reclaim his fortress from this barbarian invader. He writes: "Thanks for blogging for me, though I hope you haven't scared away potential future researchers."

Let me state clearly what I think. The greatest perk of being a scientist is never having to doubt the value of what you do. If someone who fed starving Ethiopians, or rescued baby seals from oil spills, asked me how I justify spending my time proving complexity theorems, I might have difficulty answering; eventually I'd mumble something about basic science (along with art and music) embodying the highest aspirations of civilized humankind since the age of Democritus, and therefore being worthy of pursuit even in the face of palpable suffering. But if some regular schmo -- a sportswriter, or consultant, or homeopathist -- demanded that I justify what I do, I'd laugh in his or her face.

Other benefits of a research career include the freedom more or less to choose your hours, the satisfaction of being "the person who discovered such-and-such", the opportunity to inspire students, and copious expenses-paid trips to conferences around the world. I won't dwell on the downsides of being a scientist, both out of deference to Lance, and because the downsides are obvious to anyone familiar with cultural stereotypes.

The point I want to make is that for me, both the benefits and the downsides are irrelevant, because I can't even imagine not doing science. Having once tasted it, I couldn't go cold turkey any more than a heroin addict. What if someone solved one of my open problems, or emailed me with questions about a paper I wrote? Would I ignore that person, just as though BQP/qpoly and NISZK had never been part of my life? I mean, obviously I'd be happier were I a self-assured ignoramus who majored in marketing and mingled on the beach -- but then I wouldn't be I; I'd be a different person.

In summary, then, you should pursue a research career if and only if science to you is life's kth greatest pleasure for some k=O(1). Thank you for reading.

[Addendum: Here O(1) is intended in the physicist's sense, not the computer scientist's asymptotic sense. You only live once.]

### Algorithmic Cooling on Mars II: Mars (by guest blogger Scott Aaronson)

OK, now Mars. I'm sure you've all read about the dramatic successes of the Spirit rover, which incidentally raise two computer science questions:

1. Can a lander be programmed to scout a safe, interesting landing site during its 6-minute descent phase? (Sending pictures to Earth takes too long; the round-trip time for radio signals is about 20 minutes.) As far as I know, Spirit took photos only to gauge its speed relative to the surface, not to scout landing sites.
2. Can (and should) the Internet be extended beyond Earth's atmosphere? During the periods when Spirit is not in Earth's line of sight, two existing Mars orbiters are pressed into service as relays -- so in some sense a Martian communications network already exists. Will denial-of-service attacks and Viagra offers soon plague the solar system?

I'm sure you've also all read about the Bush administration's new vision for space exploration, which includes a manned Mars mission at an unspecified future date. Despite my no-politics mandate, Lance has often discussed science funding in this blog, so I will too. The usual rule is that sending humans somewhere (the Moon, Mars, low-Earth orbit) costs 100 to 1000 times as much as sending robots to the same place. Part of the reason is that, letting ε be the probability of a catastrophic failure, the cost of a mission increases asymptotically as ε approaches 0. Unmanned Mars landers have done well with ε around 2/3. For manned missions, by contrast, any estimated ε above (say) 1/1000 is unacceptable (although ε will always be higher in practice, as we were recently reminded).

But is human spaceflight worth the costs? Lest this post become too polemical, I'll skip the usual arguments and their rebuttals (if you don't know them, read What's New by Bob Park), and end with a question for readers. If you were the President's science adviser, would you suggest gutting the Shuttle, the ISS, and all work towards a moon base or manned Mars mission, and diverting the funds toward basic science? If so, a followup: suppose the NSF budget for theoretical computer science were quintupled tomorrow. What would be the best way to spend the money?

## Sunday, January 18, 2004

### Algorithmic Cooling on Mars I: Algorithmic Cooling (by guest blogger Scott Aaronson)

Sorry I haven't posted for a while -- QIP has left me with nary an hour to spare. Today Leonard Schulman gave a talk whose title was announced as "Physical Limits of Heat-Bath Algorithmic Cooling on Mars." No, the talk didn't actually have anything to do with Mars; Leonard just wanted to show us his Windows wallpaper (a Mars photo), and suggest that Mars, being even colder than Waterloo, might be an even better site for quantum computing experiments.

Nevertheless, the title provides an excuse to discuss two things on my mind: Leonard's talk and Mars. I'll start with Leonard's talk. Liquid NMR quantum computing has the advantage that hundreds or thousands of quantum gates can be applied before decoherence sets in, and the disadvantage that the qubits are difficult to initialize to the standard "all-0" state. Instead, the starting state is exponentially close to the maximally mixed state. This means that in the final measurement outcome, the ratio of signal to noise decreases exponentially in the number of qubits -- so exponentially many repetitions are needed to extract a signal, negating any quantum speedup.

But is this fundamental? A few years ago Schulman and Vazirani introduced algorithmic cooling, a technique that starts with a hot, high-entropy state, then uses data compression to cool down a few qubits, at the cost of making the rest of the qubits even hotter. (As long as we're limited to reversible unitary gates, the total entropy of the system must remain constant.) The cold ("Waterloo/Mars") qubits can then be used as a quantum computer's standard initial state. The trouble is that too much chaff is needed for too little wheat: with current NMR error rates, extracting a few dozen cold qubits could take 10^8 or 10^12 starting qubits.

A natural alternative, proposed by Boykin, Mor, Roychowdhury, Vatan, and Vrijen, is to let the qubits evolve nonunitarily; that is, interact with the environment. In physics jargon, this is called "coupling the qubits to a heat bath," even though the goal is to cool the qubits. Amazingly, it turns out that by using classical tricks (for example, mapping the basis state |a,b,c> to |a+c,b+c,MAJ(a,b,c)>, where addition is mod 2 and MAJ denotes the majority function), the qubits can be made even colder than the environment to which they're coupled. This raises a question: are there any limits to such cooling? Schulman, jointly with collaborators whom I can't recall right now (one of them is Tal Mor), has given an affirmative answer. Suppose each qubit initially has bias ε (that is, is in the mixed state (1/2+ε)|0><0| + (1/2-ε)|1><1|). Then the heat-bath method can't increase the probability (that is, |amplitude|^2) of any basis state above 2^(-n)·exp(ε2^n), where n is the number of qubits. This bound is essentially tight: if ε > 2^(4-n), then the initial state can be cooled significantly. Unfortunately, the algorithm that achieves this cooling requires order 1/ε^2 steps, which is exponential assuming ε is exponentially small. Furthermore, this exponential blowup seems to be unavoidable (Schulman didn't give a rigorous lower bound, but said it would be easy to obtain).
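As a quick sanity check on the majority trick (a toy classical calculation of my own, not from the talk): three independent bits each with bias ε produce a majority bit with bias (3/2)ε - 2ε^3, strictly larger -- that is, colder -- for any bias 0 < ε < 1/2.

```python
# Exact bias boost from the classical majority step used in algorithmic
# cooling: given three independent bits each equal to 0 with probability
# 1/2 + eps, MAJ(a,b,c) equals 0 with probability 1/2 + (3/2)eps - 2*eps^3,
# so the majority bit is colder while the other bits absorb the entropy.

from itertools import product

def majority_bias(eps):
    p0 = 0.5 + eps                      # Pr[each bit = 0]
    total = 0.0
    for bits in product([0, 1], repeat=3):
        pr = 1.0
        for b in bits:
            pr *= p0 if b == 0 else 1 - p0
        if sum(bits) <= 1:              # at most one 1: majority is 0
            total += pr
    return total - 0.5                  # bias of the majority bit

print(majority_bias(0.01))              # close to 1.5 * 0.01 for small eps
```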

To my mind, the cooling result raises a fascinating question for complexity theory. Imagine that each generation of human beings, just as it plants trees and stores wine in cellars, starts cooling quantum states -- so that future generations, millions of years hence, could use those states to perform whatever quantum computations they wanted. "Vintage" quantum states would then be highly prized possessions (served chilled, of course). In this situation, would we be using exponential time (the time needed to cool the states), or polynomial time (the time between specifying the input and measuring the output)?

Part II of "Algorithmic Cooling on Mars" will be about Mars.

## Friday, January 16, 2004

### Live From QIP (by guest blogger Scott Aaronson)

As my plane descended toward Toronto on Monday, it felt as though I was landing on the surface of another planet (though maybe the Mars rover was too fresh in my mind). All I could see out the window was white snow crisscrossed by black highways. On the ground, the weather was probably the coldest I've ever experienced. Call me a wuss if you're from Alaska, northern Canada, Siberia, or Antarctica, but I did go to Cornell.

Before QIP started I visited the University of Western Ontario for a day, to work with Dan Christensen on the complexity of simulating spin-foam models of quantum gravity. We didn't get far. The trouble is that no one knows how to define measurement in these models, and the answer could strongly affect computational complexity. Maybe spin-foam models can solve graph isomorphism in polynomial time; then again, maybe they can't even simulate garden-variety quantum computers.

I took the train to Waterloo on Tuesday night, then on Wednesday hung around the Perimeter Institute, which is a converted tavern full of theoretical physicists and quantum computing people. The conference talks started on Thursday; here are summaries of a few.

• Richard Cleve spoke about some extremely cool joint work with Peter Høyer, Ben Toner, and John Watrous. They point out that the classical proof of MIP = NEXP breaks down if the two provers share entanglement -- regardless of whether the verifier is able to manipulate, or even knows anything about, quantum information. (It might still be true that multiple provers who share entanglement can convince us of any language in NEXP, but if so it will need a new proof.) Cleve et al. give explicit examples of 2-prover interactive proof systems that are classically sound but become unsound if the provers share entanglement. To me, the most exciting aspect of this work is that it offers a new, complexity-theoretic way to understand the famous Bell inequalities. It turns out that Bell inequality violation is "really" about two provers convincing a verifier that (say) a certain graph has a 3-coloring when in fact it doesn't, by using entanglement to correlate their answers to the verifier's queries.
• John Watrous spoke about stronger error reduction for QMA. Recall that QMA, or Quantum MA, is the class of languages for which there exist polynomial-size quantum proofs that convince a polynomial-time quantum verifier that an input is in the language when indeed it is. Here the completeness and soundness errors are 1/3. Early on Kitaev observed that the prover can amplify the correctness probability to 1-2^(-p(n)) by giving the verifier O(p(n)) copies of the proof. The verifier then checks each proof independently (destroying it in the process) and outputs the majority result. Against everyone's intuitions (or at least mine!), Watrous now shows that O(p(n)) copies are overkill -- the verifier can amplify the correctness probability arbitrarily using a single copy of the proof! This means that a "quantum library" could store proofs on the shelves, to be borrowed and returned intact by quantum patrons who want to convince themselves of the truth of various statements. The conventional wisdom -- that learning something from a quantum state always disturbs that state -- is wrong in the case of proofs. (Could this be related to zero-knowledge proofs?) Another implication of Watrous' result is that QMA_log = BQP.
• Scott Aaronson spoke about Multilinear Formulas and Skepticism of Quantum Computing. Journalistic objectivity precludes me from commenting on the excellence or otherwise of that particular talk. Next Ran Raz explained why Multi-Linear Formulas for Permanent and Determinant are of Super-Polynomial Size -- a brilliant result whose relevance to quantum computing is that it provides the technical tools for my talk. I'd say more about Raz's result, but it's 1AM and I have to get up early tomorrow for another day of manipulating entanglement at ultra-cold temperatures.
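The Kitaev-style amplification in the QMA item above is, at bottom, a classical majority vote over independent checks. A quick numerical sketch (the function name is my own, and this covers only the counting argument, not Watrous' quantum part):

```python
# Exact error of majority-vote amplification: check k independent copies of
# a proof, each verdict wrong with probability 1/3, and output the majority.
# The error decays exponentially in k, which is why O(p(n)) copies suffice
# to push the error down to 2^(-p(n)).

from math import comb

def majority_error(k, p=1/3):
    """Pr[at least ceil(k/2) of k independent checks err], each with prob p."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range((k + 1) // 2, k + 1))

for k in [1, 11, 51]:
    print(k, majority_error(k))   # error shrinks rapidly as k grows
```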

## Thursday, January 15, 2004

### Advice, Not The Quantum Kind (by guest blogger Scott Aaronson)

A comment to my last post asked for advice for people interested in getting into complexity research. So here it is. Keep in mind that I'm still a grad student -- for advice from more experienced researchers, read Lance's earlier post, and this essay by physicist Steven Weinberg.

I think the key is to start doing creative original research right away. My first year at Berkeley, I took three courses a semester, hoping to prepare by stuffing my brain with knowledge. This was a mistake. Take as few courses as you can get away with, besides directly relevant ones like complexity theory. Learn what you need to know while doing research, not beforehand.

This approach has two advantages. First, you never know what you need to know until you need to know it. Not even Einstein could have predicted as a student that he'd need differential geometry to invent general relativity. And second, you don't really understand anything unless you have a personal stake in it -- meaning that you discovered it, rediscovered it, extended it, applied it, tested it, implemented it, reinterpreted it, explained it to others, etc. This is the reason most students forget everything in a course right after the exam. (As Feynman said, "what I cannot create, I do not understand.")

So then, how do you do original research? By throwing your entire life into it. Many researchers play piano, go to clubs, sail, etc., but if they're any good they probably think about research while they're doing these things. I know grad students who never suffer the indignity of working late into the night. They go surfing with friends every weekend and are constantly away on road trips. At this rate, they'll enjoy life more than I will but won't be successful researchers.

I can't offer any advice on research topics, other than to solve the open problems listed in my papers. Blanket advice is difficult because your research ought to be intimately connected to who you are as an individual. Lance suggests leafing through conference proceedings until you find what excites you, while Weinberg suggests getting involved in the "messes" that nobody understands. As for me, I like to start with physical or philosophical questions (can we assign any meaning to "the past" besides memories and records in the present? is there a theory that agrees with quantum mechanics on all experiments to date but that wouldn't allow quantum computation? why should we expect information content to be proportional to area rather than volume?), and then look for related questions that can be addressed using complexity theory. But I don't know if anyone else works that way.

## Wednesday, January 14, 2004

### Ingredients for Serious Thought (by guest blogger Scott Aaronson)

To prove theorems I need a particular kind of intense concentration, sustained for hours, that I don't need for programming, fiction writing, guest blogging, or anything else I've ever done. This kind of concentration seems to come naturally to some researchers, but it never has to me. So over the past four years, I've been keeping a list of what in my physical environment and state of mind facilitates the proving of STOC/FOCS-type results. Although this list is personal and idiosyncratic (and even a bit embarrassing), I offer it in the hope that its very specificity will inspire you to add your own ingredients. Feel free to do so in the comments section.

1. Lots of light.
2. Adequate sleep the night before (duh).
3. Freedom from buzzing insects, screaming babies, ringing phones, slamming doors, and car alarms. I'll never know what I could have proved if not for these things.
4. A well-ventilated room with fresh, non-oxygen-depleted air at about room temperature. (Bug screens allow the last two ingredients simultaneously.)
5. Caffeine or other stimulants.
6. A comfortable swivel chair, or else a couch or bed to sprawl across.
7. Long deserted halls or outdoor walkways. (Pacing around in tight circles is no good.)
8. Hours and hours of concentration with no end in sight. I've never been able to set aside (say) two hours for serious work, in between other commitments. That's why I work at night.
9. Lack of awareness of how much time has elapsed with no new ideas. Before starting to work I take off my watch and hide the Windows taskbar so I can't see the little clock in the corner.
10. Comfortable clothes. I've never proved a publishable result wearing a shirt with a too-tight collar.
11. Black erasable pens, unruled paper (the backs of printouts serve nicely), Scientific Workplace for TeX, and (don't laugh) MS-DOS QBasic for quick calculations. Substitute your own favorite tools.
12. No tempting distractions. Train rides are good: plenty of room to spread out papers and a laptop, but no Internet access (something I hope doesn't change soon).
13. No people around toward whom I have strong unresolved feelings (attraction being only one example).
14. Freedom from bodily annoyances and pains. Advil, cold medicine, lip balm, a nail clipper, and a glasses cleaning cloth are important weapons in my theory arsenal. Also, I can't do serious work until about half an hour after a meal.
15. A positive attitude, which is fostered by a calm, uneventful week in my life.
16. Colleagues to talk to. People able to shoot down wrong proofs are ideal, but even "write-only black boxes" are invaluable as sounding boards. Of course I try to reciprocate both services.
17. A problem that I consider "mine" -- either because I posed the problem, I've had recent successes on subproblems or related problems, the problem is important for one of my research goals (or even better, two goals), or I'm (rightly or wrongly) seen as the world expert on the problem.
18. A problem that others are eager to see solved. It's easier to let myself down than to let others down.
19. Conference deadlines. They motivate me to work, but then if I miss them (as I do), my "research GPA" doesn't suffer: there's always the next conference.

## Monday, January 12, 2004

### Arrr, Even Pirates Be Polynomially-Bounded (by guest blogger Scott Aaronson)

Leaving home after the holidays, I said goodbye tonight to my friend since junior high school, Alex Halderman. You might have read about Alex in the news: Princeton computer science graduate student, breaker of music CD copy-protection schemes, and the first person ever to attain national fame for holding down a Shift key. (Alas, I tease him, my own research too often entails pressing multiple keys.)

Alex's recent run-in with the recording industry got me thinking about whether anti-piracy technology can have any theoretical basis. Passive experiences like listening to music are hard to copy-protect for an obvious reason: if you can see or hear something, then you can also record it, especially since disk space is almost never a limitation today. (Admittedly, there's some loss of quality any time you convert from digital to analog and back. Also, this theory would predict rampant piracy of books, which hasn't happened -- yet.)

The copy-protection problem is more interesting for interactive experiences like video games and application software. The standard solution -- you send the software company your computer's hardware ID, X, and the company sends you back a key f(X) that unlocks the program -- is insecure. You could always copy the unlocked program, then run it on another computer using an emulator. Whenever the program asks for the hardware ID, the emulator says it's X.

A better solution involves a program that constantly needs to communicate with a central server in order to run. For example, the program could demand a new key each time it's executed (based on its current input), which the server only supplies after getting back the previous key sent by the server. That way, any pirated copies of the program not only have to spoof IP addresses; they have to remain in communication with each other (or else be coordinated by a "renegade" server) in order to synchronize their keys.
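Here's a toy sketch of that key-chaining idea (the class, the names, and the hash-based key derivation are all my own stand-ins, not part of any actual scheme):

```python
# Toy sketch of key chaining: the server hands out a fresh key per execution,
# but only after receiving the previous key back. A pirated copy must stay
# synchronized with the server (or a renegade one) to keep running.
# The hash-based key derivation is purely illustrative.

import hashlib

class Server:
    def __init__(self, secret):
        self.expected = secret          # the key the client must return next

    def next_key(self, returned_key, run_input):
        if returned_key != self.expected:
            raise PermissionError("stale or duplicated key: out of sync")
        # derive the next key from the current key and this run's input
        new = hashlib.sha256((self.expected + run_input).encode()).hexdigest()
        self.expected = new
        return new

server = Server("initial-key")
k1 = server.next_key("initial-key", "run-1")   # legitimate first run
k2 = server.next_key(k1, "run-2")              # second run, chained on k1
try:
    server.next_key(k1, "run-3")               # replaying k1 fails
except PermissionError as e:
    print("rejected:", e)
```

A copied program that replays an old key immediately falls out of sync, which is exactly the synchronization burden described above.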

An even more centralized solution is to run the whole program off a server and charge for each use. In this situation, a program can be "pirated" only if (in learning theory terms) the function that it computes is PAC-learnable from membership queries. The downside, of course, is the high cost in server computing time and communication latency.

Open Research Issue #1. Is there a way for the user's machine to do almost all the actual computation, yet to still need a short message from the server to "unlock" its results? If so, how much can the required communication with the server be reduced (especially the number of rounds)? Boaz Barak has pointed me to some relevant crypto papers, including this one by Sander, Young, and Yung; but the general problem seems wide open.

Of course there's always copy-protection based on physically opaque hardware, such as dongles, smartcards, or the 'Fritz' chip. Since I have no idea how secure these technologies really are, I prefer to end this post with a question more up my alley:

Open Research Issue #2. Can the No-Cloning Theorem of quantum mechanics be exploited to create unpirateable software? What we want is a quantum state ψ such that (1) a program P can be written that needs to measure ψ in order to work correctly; (2) ψ can be prepared by a polynomial-size quantum circuit, given secret information known only to the software company; and (3) a circuit for ψ can't be efficiently inferred, even given P's source code and unlimited copies of ψ. More impressive still would be if P used the same state ψ over and over, without the software company needing to provide a fresh copy for each execution. I suspect the latter is impossible. Proof?

## Sunday, January 11, 2004

### Complexity Class of the Week: PP (by guest blogger Scott Aaronson)

Yeah, I know: PP has already been this weblog's complexity class of the week. But once you've seen how to define PP using super-powerful variants of quantum mechanics, you might never look at Probabilistic Polynomial-Time the same way again! (Then again, you might.)

Let's define PostBQP (or BQP with postselection) as the class of languages L for which there exists a uniform family of polynomial-size quantum circuits such that

• For all inputs x, the circuit's first qubit has a nonzero probability of being measured '1' at the end of the computation.
• If x is in L, the second qubit will be measured '1' with probability at least 2/3, conditioned on the first qubit being measured '1'.
• If x is not in L, the second qubit will be measured '1' with probability at most 1/3, conditioned on the first qubit being measured '1'.

In physics, "postselection" means you throw away all runs of an experiment for which a measurement of some quantity X doesn't yield a desired outcome. (Hopefully, X isn't itself what you're trying to measure - otherwise it's not postselection, it's fraud!) But you can also think of PostBQP as the quantum analogue of the classical complexity class BPPpath (another previous CCW).
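To make the acceptance condition concrete, here is a tiny sketch of the postselection rule on a hand-picked 2-qubit state vector. The amplitudes are made up for illustration; no actual circuit is being simulated.

```python
# Toy state on basis |q1 q2> = 00, 01, 10, 11 (unnormalized amplitudes).
amps = [0.1, 0.2, 0.2, 0.95]

norm = sum(abs(a) ** 2 for a in amps)
probs = [abs(a) ** 2 / norm for a in amps]  # Born-rule probabilities

# Condition 1: the first qubit has nonzero probability of reading '1'
# (basis states 10 and 11), so postselecting on it is legitimate.
p_first_1 = probs[2] + probs[3]
assert p_first_1 > 0

# Acceptance probability: the second qubit reads '1', conditioned on
# the first qubit reading '1'.
p_accept = probs[3] / p_first_1
print(p_accept > 2 / 3)   # this toy machine would "accept" its input
```

Note that the normalization cancels out of the conditional probability, which is exactly what makes tiny-but-nonzero postselection probabilities so powerful.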

Clearly PostBQP sits between BQP and PP. I became interested in PostBQP when I realized that the containment BQP/qpoly in EXP/poly (discussed earlier in this weblog) can be improved to BQP/qpoly in PostBQP/poly.

Exercise 1. Imagine that the gates of a quantum computer only needed to be invertible - not unitary. Since states might no longer be normalized, let's define the probability of measuring a basis state x with amplitude α_x to be |α_x|² divided by the sum over all basis states y of |α_y|². Show that we could decide exactly the languages in PostBQP.

Exercise 2. Now imagine that the gates are unitary, but the probability of measuring a basis state x equals |α_x|^p divided by the sum over all basis states y of |α_y|^p, where p is a nonnegative real number not equal to 2. Show that we could decide all languages in PostBQP.
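A minimal numeric illustration of the altered rule in Exercise 2 (the two-amplitude state is arbitrary, chosen by me for the demo): raising p above 2 skews probability toward the largest amplitude, while p below 2 flattens the distribution. This kind of repeated amplification is the sort of leverage that makes the nonstandard rule so strong.

```python
from math import sqrt

# Measurement rule: probability of basis state x is |α_x|^p / Σ_y |α_y|^p.
amps = [sqrt(0.9), sqrt(0.1)]   # normalized under the usual 2-norm

def measure_probs(amplitudes, p):
    weights = [abs(a) ** p for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

print(measure_probs(amps, 2))   # standard Born rule: roughly [0.9, 0.1]
print(measure_probs(amps, 4))   # p > 2 sharpens toward the big amplitude
print(measure_probs(amps, 1))   # p < 2 flattens the distribution
```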

I was getting more and more excited about the fundamental new complexity class I'd discovered. Alas:

Exercise 3. Show that PostBQP equals PP.

The moral is that when you make a quantum class too powerful, it turns into a classical class! (Another example of this is NQP = coC₌P.)

## Friday, January 09, 2004

### How Long Until We Get Along? (by guest blogger Scott Aaronson)

I'm honored and humbled that Lance Fortnow decided to entrust his weblog to me for the week. Lance's only request was that I obey a few simple ground rules: keep it clean, stay on topic, don't betray confidences, and absolutely no politics.

To demonstrate my commitment to Lance's ground rules, I'd like in this first post to address the Israeli-Palestinian conflict. No, not the conflict itself, but rather a meta-question that it raises: how can so many smart, educated, well-meaning people disagree so vehemently about the most basic facts of an issue? How can they end every conversation not closer together but farther apart, "agreeing to disagree"? We can ask the same question about free markets versus socialism, the interpretation of quantum mechanics, or other issues on which two or more sides are certain of their own arguments.

A 1976 theorem of Robert Aumann has been interpreted as showing that two honest, rational people (who believe each other to be honest and rational) should never disagree about anything. More precisely, let Alice and Bob be Bayesians who assign the same prior probabilities to all random variables (admittedly a big assumption), but who have since gained different knowledge about the variables. Let p be Alice's posterior probability that (say) it will rain tomorrow, conditioned on everything she knows, and let q be Bob's posterior probability. The theorem says that if p and q are common knowledge to Alice and Bob, then p=q.

The key point here is that "everything Alice and Bob know" includes their knowledge of p and q. So for Alice and Bob to reach consensus, it isn't enough for them just to announce p and q -- for then p and q might change, so they'd need to announce the new values, and so on iteratively. However, so long as the whole probability space is finite, this iterative process will end eventually with Alice and Bob agreeing on the probability that it will rain tomorrow.
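Here is a small sketch of that iterative process on a toy finite space. The eight-state space, the event, and the two partitions below are mine, made up purely for illustration: each round, each agent keeps only the states consistent with the posterior the other agent just announced.

```python
from fractions import Fraction

states = range(8)                          # uniform common prior
event = {0, 1, 2, 5}                       # "it rains tomorrow"
alice_part = [{0, 1, 2, 3}, {4, 5, 6, 7}]  # Alice's private information
bob_part = [{0, 2, 4, 6}, {1, 3, 5, 7}]    # Bob's private information

def cell(partition, s):
    return next(c for c in partition if s in c)

def post(knowledge):                       # posterior probability of the event
    return Fraction(len(knowledge & event), len(knowledge))

# Each agent's knowledge set, as a function of the hypothetical true state.
KA = {s: cell(alice_part, s) for s in states}
KB = {s: cell(bob_part, s) for s in states}

true_state = 1
for _ in range(20):
    # Alice announces her posterior; Bob keeps only states where Alice
    # would have announced exactly that number.
    KB = {s: KB[s] & {t for t in states if post(KA[t]) == post(KA[s])}
          for s in states}
    # Bob announces his updated posterior; Alice refines the same way.
    KA = {s: KA[s] & {t for t in states if post(KB[t]) == post(KB[s])}
          for s in states}
    if post(KA[true_state]) == post(KB[true_state]):
        break

print(post(KA[true_state]), post(KB[true_state]))  # equal, as Aumann promises
```

On this example the posteriors meet after a single round of announcements; in general the finiteness of the space is what guarantees the process terminates.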

Great, you say, but what does this have to do with complexity? Well, if Alice and Bob exchanged everything they knew, then obviously they'd agree on the chance of rain. So the crucial question for me -- and one that seems never to have been addressed in the large economics and philosophy literature on this subject -- is, how many iterations are needed until convergence? Or, if we let Alice and Bob use an arbitrary protocol, then how many bits must they exchange before reaching consensus or near-consensus? In communication complexity language, instead of evaluating a function f(x,y) where x and y are n-bit strings, now we merely want Alice's expectation of f(x,y) (conditioned on her partial information about y) to equal (or nearly equal) Bob's expectation, given a known prior distribution over x,y pairs.

For some f,x,y and distribution over x,y pairs, can we show that (say) n(1-o(1)) bits of communication are needed? Such a lower bound could provide the first compelling explanation for why honest, rational friends can disagree: because they're not Siamese twins, and they don't have their entire lives to talk to each other.

### Guest Blogger

I'm off on vacation next week and you will have a guest weblogger, Scott Aaronson, while I am gone. I have confidence Scott will keep you all entertained and enlightened. He will also bring you the latest quantum computing news from next week's QIP Workshop. Enjoy.

### Balance is a Red-Hot Word

Some interesting science policy quotes courtesy of the American Institute of Physics.

Where have the Americans gone? - DOE Office of Science Director Ray Orbach discussing declining number of American university students studying physical sciences.

The decline in funding for the physical sciences has put our Nation's capabilities for scientific innovation at risk. - Senate Appropriations Subcommittee Chairman Christopher "Kit" Bond (R-MO) at NSF budget hearing

The concern expressed for the physical sciences in the budget reminds me a little bit of the old joke about the will that said, 'To Joe, who I said I would mention in my will, "Hello Joe."' Sympathy won't fund labs. - House Science Committee Chairman Sherwood Boehlert (R-NY) when discussing FY 2004 budget request

I would like to caution you about the use of the word "balance." - OSTP Assistant Director for Physical Sciences and Engineering Patrick Looney at a DOE advisory committee meeting, regarding federal research funding allocations. Looney later called it a "red-hot word" that was "divisive."

More quotes and a roundup of 2003 science policy and budget developments. In the end it looks like a 5% increase for NSF for FY2004 (which started in October). Not bad given the rest of the budget but far less than needed to properly fund American scientific research.

## Wednesday, January 07, 2004

### Survey on Private Information Retrieval

I posted the latest BEATCS Complexity Column, A Survey on Private Information Retrieval by Bill Gasarch.

With this article I am retiring as editor of the Complexity Column. Jacobo Torán will take on the editing duties starting with the June issue.

Update 1/19: Bill Gasarch now has set up a web page on PIR.

### Pictures from Mars

I grew up as an information hound. Lacking the internet in my high school days I would often hang out in the library looking things up. One day I found a catalog from the US Government Printing Office with all sorts of stuff at reasonable prices. I ordered a brochure with pictures of Mars from the 1970s Viking Missions to Mars. A few weeks later came some pretty color images including a stereographic (3D) image of the red planet.

Fast forward more than two decades. Another Mars mission. More pictures. The pictures haven't changed much but I can access them far more easily and quickly than before. When you look at those Mars pictures realize that the great technological advance is not so much in NASA getting pictures from Mars but in NASA getting those pictures to you.

## Tuesday, January 06, 2004

### What is an Algorithm?

The October 2003 BEATCS has two articles discussing the Church-Turing thesis, Beyond Turing Machines by Eugene Eberbach and Peter Wegner (I can't find this paper online though I've discussed Wegner's work before) and the Logics in Computer Science column Algorithms: A Quest for Absolute Definitions by Andreas Blass and Yuri Gurevich.

Maybe it's a man-bites-dog thing: One cannot write an article that says, yes, Turing machines capture computation and fully describe algorithms. But I can use this weblog to say that.

Blass and Gurevich ask "What is an algorithm?" From their introduction: It is often assumed that the Church-Turing thesis settled the problem of what an algorithm is. That isn't so. The thesis clarifies the notion of computable function. And there is more, much more to an algorithm than the function it computes. The thesis was a great step toward understanding algorithms, but it did not solve the problem what an algorithm is.

Why not? The paper goes on to discuss the meaning of the Church-Turing thesis and some scenarios where they claim the Turing machine fails to capture algorithms.

• Interactive Algorithms: A broad class containing randomized algorithms, nondeterministic algorithms and asynchronous algorithms. All of these can be simulated on Turing machines, and in any case the interaction process itself is always modeled by an algorithm easily implementable on a Turing machine.
• Computing with Abstract Structures: Turing machines have no problems dealing with abstract structures given a logic that describes them. Hidden parallelism is easily simulatable.
• Non-discrete computations: Yes, a finite Turing machine cannot model arbitrary real numbers. But a Turing machine can simulate any process involving real numbers to a greater precision than any physical instrument can hope to measure.

The article also discusses issues of timing, where the simulations get stickier. But in general every algorithmic process can best be described and simulated by Turing machines. There really is nothing more to it.
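The precision point can be made concrete in a few lines: a purely discrete machine approximates a "real" quantity to any accuracy you care to name. The 60-digit target and the choice of √2 below are arbitrary, just a sketch of the idea.

```python
from decimal import Decimal, getcontext

# A discrete computation of sqrt(2) to 60 significant digits,
# far beyond what any physical instrument could distinguish.
getcontext().prec = 60
root2 = Decimal(2).sqrt()
print(root2)   # 1.41421356237... to 60 significant digits

# Squaring recovers 2 to within the working precision.
assert abs(root2 * root2 - 2) < Decimal(10) ** -55
```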

## Monday, January 05, 2004

### Freeman on CISE Reorganization

Peter Freeman, Assistant Director of NSF for CISE, has an "important message" on the recent reorganization. Good to see he's finally acknowledging the confusion about the changes, though I would still like to see more of the philosophy behind it.

I talked recently to a former program director who worries that the new clusters will make things more difficult for theory, since theory will now have to compete directly with more applied areas. He also worries that removing power from the program director position will make it even harder to recruit good program directors in the future.

### These are a few of my favorite theorems

In December of 1994 I presented My Favorite Ten Complexity Theorems of the Past Decade, a paper where I chose ten theorems representing different areas in complexity and used them as a springboard to describe the progress in my field over the previous ten years, roughly from when I started graduate school.

Hard to believe another decade has nearly passed. By the end of this year, you will see My Favorite Ten Complexity Theorems of the Past Decade II. I have no shortage of theorems to draw from though I foresee tough decisions like which derandomization result to choose.

I will keep you updated on this project as the year goes on.