Thursday, January 28, 2010

Referees' reports

A commenter a LOOOOONG time ago left the following:
Tell me, Gasarch, how in the world do you get your papers published when you consistently skip the apostrophe in it's and that's? Do referees notice these things anymore, or are you simply careless in blogs.
This commenter unintentionally raised some good questions:
  1. Do referees notice these things anymore... This phrasing suggests there was a time when referees were real referees, and men were real men, and women were real women, and little blue fuzzballs from alpha-8 were real little blue fuzzballs from alpha-8. Was there such a time? Or is this really a case of nostalgia for a time that never was? If anything, I think referees are more demanding of changes than they were in the past, since they know that with word processors such changes are easy to make.
  2. What should referees look for? Ian Parberry has a good paper on this that is linked to from our website. Informally, here is what I think the order of priorities should be: (1) Are the results true/important/interesting? (2) Are they well presented? See the next item for an expansion on (2).
  3. Being well presented also has a priority ordering: (1) Are the results well motivated? (2) Are the proofs presented in a way that the reader can see the intuition? (3) Grammar. (4) Spelling. (5) Apostrophes.
  4. A referee's job is not just to accept or reject a paper. It's also to offer advice on how to make the paper better.
Are referees now more demanding than they used to be? Less? This splits into many questions: concern with truth, importance, interest, motivation, intuition, grammar, spelling, and apostrophes. I do not claim to know the answers.

Wednesday, January 27, 2010

Guest post on ICS 2010 (x of y for some x and y)

(Another guest post about ICS 2010, from Aaron Sterling. Is he on his way to breaking the MOST GUEST POSTS IN A YEAR record? I doubt it- I think I hold it from before I became a co-blogger, and I think it's at least 10.)

Bill asked me if I thought the ICS conference truly was innovative, and in particular how I thought the content compared to that of STOC or FOCS. I've never been to either STOC or FOCS (though I've read some papers and seen some videotaped presentations from those conferences), so I don't consider myself qualified to answer that question directly. However, something related has been on my mind, and I think it's important enough to share with the larger community.

I do believe it is innovative and politically significant that ICS literally represents another country heard from -- and that the derivatives paper appeared there, not in either STOC or FOCS. Compare the derivatives paper to Gentry's homomorphic encryption paper. Gentry's result is of course a stunning breakthrough in an area that had remained wide-open for many years; and, to my (brief) reading, it contains more profound mathematics than the derivatives paper. However, it's quite possible that the derivatives paper will spark changes in the regulation of the multi-trillion-dollar financial product industry. If that happens, it would be reasonable to argue that the derivatives paper would be one of the most influential TCS papers ever.

That comparison, to me, captures the value new concepts can add to the field. US consumers would only have to save $10 million on financial services for Uncle Sam to be 100% repaid for his investment in an Intractability Center. I don't think it's a coincidence that that Center's director is a co-author of the derivatives paper, and also a co-author of this CACM position paper on how computer scientists should represent their field to better raise money.



I've had the last two paragraphs of that paper on my office door for a few months now, because I got sick of people complaining to me that there was nothing to be done about financial woes. I'll reproduce those paragraphs here.

One wonders if the failure of computer scientists to articulate the intellectual excitement of their field is not one of the causes of their current funding crisis in the U.S. Too often, policymakers, and hence funding agencies, treat computer science as a provider of services and infrastructure rather than as an exciting discipline worth studying on its own. Promises of future innovation and related scientific advances will be more credible to them if they actually understand that past and current breakthroughs arose from an underlying science rather than from a one-time investment in 'infrastructure.'
It is high time the computer science community began to reveal to the public its best kept secret: its work is exciting science -- and indispensable to society.


I also believe it is no coincidence that both co-authors of that paper are on the Steering Committee of ICS. There's nothing like a conference that encourages innovation to demonstrate "promises of future innovation and scientific advances."

A generation or two ago, aerospace contractors used the Space Race as a fundraising tactic. I got the impression from some comments on my first ICS blog post that people were threatened by the idea that China might become a major TCS player, and would have preferred that I not even mention the possibility. I think that attitude is foolish; rather, China's presence on the world TCS stage should be embraced, and used as a reason why it is that much more important for the US to invest in theory. After all, if Washington allows things to continue as they are, in ten years it could be facing a Square Root of Log N Gap!

Okay, that last phrase made me laugh when it popped into my head, so I figured I'd share it. My point, however, is a serious one. A handful of Ph.D.s will get postdocs this year at IAS or through the Simons Fellowship. Most people won't, even if they're good. If the CI Fellows program isn't renewed, that means pretty much everyone else is going abroad. In fact, when I was at ICS, a senior researcher told me that he was advising students and recent grads to go abroad, not just for postdocs but also for assistant professorships, and only to return to the US for tenure. That is not a recipe for maintaining scientific prominence in a field, especially if one's "major competitor" is investing heavily in recruitment of theorists.

I will end with a question to consider, if I may. How can we better communicate that computer science, and, in particular, theoretical computer science, is indispensable to society? The government of China doesn't seem to have any trouble understanding this. What about the government of the United States?

Tuesday, January 26, 2010

The First pseudorandom generator- probably

(The following was told to me by Ilan Newman at Dagstuhl 2009. His source was the book The Broken Dice and Other Mathematical Tales of Chance.)

What was the first pseudorandom number generator? Who invented it? While these types of questions are hard to answer definitively, the book The Broken Dice by Ivar Ekeland gives a very good candidate.

Brother Edvin (a monk), sometime between 1240 and 1250 AD, was preparing a case for the sainthood of King Olaf Haraldsson, who had been the King of Norway. There is a well-documented story (that could still be false) that King Olaf and the King of Sweden needed to determine which country owned the island of Hisingen. They agreed to settle the matter by chance, using normal 6-sided dice. The King of Sweden rolled two dice and got a 12. Then King Olaf rolled and also got a 12, AND one of his dice broke (hence the name of the book), giving him an additional 1 for a 13. Some attributed this event to divine intervention, which strengthened his case for sainthood.

Brother Edvin got interested in the general question of how to generate a random number in a way that nobody could manipulate. He may have phrased it as a way to distinguish divine intervention from human intervention.
  1. There are two players and they want to pick a random number between 0 and 5. They want the process to be such that neither player can bias the outcome. Each picks a natural number in secret. The numbers are revealed and added, and then the remainder upon division by 6 is taken. Brother Edvin noted that the players really only need to pick numbers between 0 and 5; however, he thought it best not to tell the players this, since they will think they have more choice than they do.
  2. What if there is only one person? Then it is too easy to bias things. But Brother Edvin proposed the following, here given in modern notation.
    1. Pick a 4-digit number x.
    2. Compute y1 = x^2.
    3. y1 will be 7 or 8 digits. Remove the two leftmost digits and one or two rightmost digits to obtain a 4-digit number z1.
    4. Repeat this process four times to obtain z=z4.
    5. Divide z by 6 and take the remainder.
The hope is that it is very hard for a human to bias the results by picking a particular original 4-digit number. Brother Edvin did note that some choices of x make the final outcome obvious and hence not random (e.g., 1001). He proposed some safeguards: make sure the initial x has no 0's and no repeated digits. He also suggested starting with more digits, or iterating the process more times. But he did realize that this might not work.
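Both of Brother Edvin's procedures translate directly into short programs. The sketch below is my own rendering (the function names are mine, not from the book); the digit-dropping rule follows the description above, padding with leading zeros in case an intermediate value drops below 4 digits.

```python
def shared_random(a, b, modulus=6):
    # Two-player protocol: each player secretly picks a natural
    # number; the sum mod 6 is uniform if either pick is uniform,
    # since for fixed a the map b -> (a+b) mod 6 is a bijection.
    return (a + b) % modulus

def edvin_step(x):
    # Square the 4-digit number (the square has 7 or 8 digits;
    # pad to 7 in case an intermediate dropped below 1000), remove
    # the two leftmost digits, then enough rightmost digits to
    # leave a 4-digit number.
    s = str(x * x).zfill(7)
    return int(s[2:6])

def edvin_random(x, iterations=4):
    # One-person procedure: iterate the squaring step, then take
    # the remainder mod 6.
    for _ in range(iterations):
        x = edvin_step(x)
    return x % 6

# 1234 obeys Brother Edvin's restrictions (no 0's, no repeats):
# 1234 -> 2275 -> 7562 -> 1838 -> 7824, and 7824 mod 6 is 0.
print(edvin_random(1234))  # prints 0
```

Note that the two-player protocol is unbiased by design: for any fixed pick by one player, the other player's six residues map onto all six outcomes exactly once.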

The method was rediscovered by von Neumann in a different context. He wanted to generate long random-looking sequences of numbers. His idea was to take a 4-digit number x1, square it, and take the middle 4 digits to obtain x2; then repeat to get more and more numbers. The method was abandoned since the periods weren't that large. People used linear congruential generators instead. (Are they still used?)
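Both generators mentioned here are easy to demonstrate. Below is a toy middle-square step and a one-step linear congruential generator; the LCG constants are the ones from the sample rand() implementation in the C standard, and the seed in the orbit demo is my own arbitrary choice.

```python
def middle_square(x):
    # von Neumann's step: square a 4-digit number and take the
    # middle 4 digits of the square, padded to 8 digits.
    return int(str(x * x).zfill(8)[2:6])

def orbit_length(seed):
    # Count how many values appear before the sequence repeats;
    # the short orbits are exactly why the method was abandoned.
    seen, x = [], seed
    while x not in seen:
        seen.append(x)
        x = middle_square(x)
    return len(seen)

def lcg_step(x, a=1103515245, c=12345, m=2**31):
    # One step of a linear congruential generator:
    # x_{k+1} = (a*x_k + c) mod m.
    return (a * x + c) % m
```

(To answer the parenthetical question: LCGs still appear in many standard libraries' rand(), though serious applications have largely moved to stronger generators such as the Mersenne Twister.)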

However, Brother Edvin does deserve LOTS of credit. Given the math known in his day, it is impressive that he asked these questions and got some reasonable answers.

Monday, January 25, 2010

When is a theorem really proven?

One of the comments on this blog pointed out, correctly, that a theorem's acceptance by the community is not a Eureka moment; it is a social process. The author of the comment was probably alluding to an excellent article by DeMillo and Lipton that I blogged about here. I highly recommend reading the article that blog points to.

We often say or write things like
  1. In 1978 Yao proved that if you store n elements of U in a table of size n then (for large U) membership requires log n probes (Ref FOCS 78).
  2. In 1981 Yao proved that if you store n elements of U in a table of size n then (for large U) membership requires log n probes (Ref JACM 81).
Personally I try to include both the conference and the journal version in a citation. That solves the citation problem. However, what does prove mean? It could be that Yao proved this in 1977. The exact time/day/year when something is proved is not that well defined. The original post was about the Fundamental Lemma, where the paper was written in 2004 and accepted in 2009. What was its status in 2006? Proven or unproven? Is there a better way to say these things?

  1. In his FOCS 1978 paper (see also Journal version in JACM 1981) Yao proved the following:
  2. In his JACM 1981 paper (conference version in FOCS 1978) Yao proved the following:
These both sound awkward. I am willing to live with using proved even though it's not quite right. Does someone have an alternative?

Was Time Magazine right to say that the Fundamental Lemma was one of the big science stories of 2009 even though the paper was written in 2004? I think so- acceptance in a journal seems like a good time to declare YES, THIS IS TRUE. And I do not know an alternative.

Thursday, January 21, 2010

Job Postings

Two sites to look for jobs at:

Lance Fortnow set up this blog that collects theory announcements, including jobs: here

Boaz Barak Obama, in an effort to create more jobs, has set up a site listing jobs in theory: here

Wednesday, January 20, 2010

Two Theorems this blog missed

I warned Lance to wait until early January to post the 2009 Complexity Year in Review. It was my fear that by posting it on Dec 28, 2009, he might miss out if someone proved P ≠ NP on Dec 29, 30, or 31 of 2009.

That did not happen and the review was fine. However, we did miss out on two of the biggest math stories of 2009. Not because they happened on Dec 29, 30, or 31, 2009. I am not sure why we missed them. But here they are.
  1. The Fundamental Lemma was proven. I won't embarrass myself by even trying to blog on it, but will instead point to a blog that did report on it: here. I found out about it when I saw it listed in Time Magazine as one of the top science stories of the year. The result seems to be really important.
  2. The Fundamental Pizza was proven. This was proven in 2009 but seems to have gotten attention on some blogs only recently. Unlike the Fundamental Lemma, this is one I can state. Can I prove it? I doubt it; it was conjectured 40 years ago. The paper is here. Here is the result:

    A waiter picks a point on a pizza and makes N cuts through that point, with equal angles between consecutive cuts. One player gets every other slice and the other player gets the remaining slices. Will they each get the same amount? This problem has now been completely solved:
    1. If the point is in the center then yes.
    2. If any of the slices happens to go through the center then yes. (Henceforth assume that no slice goes through the center.)
    3. If N=1 or N=2 or N ≡ 3 (mod 4) then the person who gets the slice that has the center gets more.
    4. If N ≥ 5 and N ≡ 1 (mod 4) then the person who gets the slice that has the center gets less.
    5. If N ≥ 4 and N is even then each person gets the same. (NOTE- I added this later; I omitted it the first time by accident.)
This result did not make Time magazine's list of the top science stories of the year. It wasn't even reported on this blog, which it should have been. However, I can state it, and I suspect I can read the paper if I brush up on my high school trig. Hence it's one of my favorite theorems of the year.
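The even case of the theorem is easy to check numerically. The sketch below is my own construction, not from the paper: it puts a unit-radius pizza at the origin, places the cut point at distance d from the center, and integrates the area of each of the 2N slices in polar coordinates centered at the cut point. For N=4 cuts the two players' alternating sums agree.

```python
import math

def piece_areas(n_cuts, d, theta0=0.3, steps=4000):
    """Areas of the 2*n_cuts slices of a unit-radius pizza cut by
    n_cuts equal-angle lines through a point at distance d < 1 from
    the center.  theta0 is the angle of the first cut."""
    def edge_dist(t):
        # Distance from the cut point (d, 0) to the pizza's edge in
        # direction t: solve |(d,0) + r*(cos t, sin t)| = 1 for r.
        return -d * math.cos(t) + math.sqrt(1 - (d * math.sin(t)) ** 2)

    width = math.pi / n_cuts   # angular width of each slice at the point
    areas = []
    for k in range(2 * n_cuts):
        a, h = theta0 + k * width, width / steps
        # midpoint rule for the polar-area integral of r(t)^2 / 2
        areas.append(sum(edge_dist(a + (i + 0.5) * h) ** 2 / 2 * h
                         for i in range(steps)))
    return areas

areas = piece_areas(4, d=0.5)        # N = 4 cuts, 8 slices
gap = sum(areas[::2]) - sum(areas[1::2])
print(abs(gap) < 1e-6)               # the even case: the players tie
```

Replacing 4 cuts by 3 illustrates the N ≡ 3 (mod 4) case, where the two sums differ.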

Tuesday, January 19, 2010

Should CCC2012 be at the North Pole?

The last few posts on ICS in China have led to off-topic (though maybe they were not off-topic) comments on whether we should have conferences in countries that oppress human rights. Here is a post where such comments will be ON TOPIC.

How does one best deal with a regime that represses human rights? While I have a strong opinion on human rights (I'm for them. Duh.) I honestly do not know the best way to encourage them from the viewpoint of, say, where should CCC 2012 be? Having a conference in a country both (a) legitimizes the regime and suggests that you approve of what they are doing, and (b) opens up a conversation so they may change what they are doing. Hence my confusion. Math is easier in that there are well-defined questions that have answers, hard as they may be to find.

Here are some scenarios.
  1. Uganda is in the process of passing a law which would make homosexuality a crime with very harsh penalties, including the death penalty in some cases (See here for some details.) If Uganda offered us money to go to a conference, would we turn it down on those grounds?
  2. When South Africa had Apartheid, many organizations boycotted the country for that reason. If they had offered money to hold a conference there, would we have done it? My guess is that we would have TURNED DOWN the money. Since our actions would have been part of a much bigger movement, they might have been effective. If we are the only organization that does not go to China because of its human rights policies, I doubt it would have any effect.
  3. If Saudi Arabia offered a monetary prize (say $400,000) for advances in Mathematics would you turn it down because of the nature of the regime? (I'll be honest- I would take the money.)
  4. Hyatt Hotels in Boston fired 100 of their workers, who are now known as The Hyatt 100. Should STOC/CCC not use their hotels for the 2010 STOC/CCC meetings? For this one there are other organizations boycotting, so it may be effective to join them (I do not know what STOC/CCC are actually doing). What if they reach a compromise that some of the workers are happy with and some are not? Then what do we do? Do we really want to get involved in the details of a labor negotiation? However, the original boycott might help get Hyatt to reverse its decision.
  5. To protest human rights violations in America (Gitmo, the treatment of the American Indians, Abu Ghraib, the shooting of Randy Weaver, pick-your-favorite-cause) it's not clear where you would decide to NOT have a conference. America is so large and diverse, and no one state or city did these things, that it's not clear how you would express your outrage. Also, the government is not as connected to academia as in other countries. The only thing I can think of is to not take money from the military for a conference. But then the argument goes: better they spend it on Interactive Proof System Oracles than on machine guns.
  6. We should NOT have a conference in California, or any other state where they voted against gay marriage. Or maybe not stay at the Marriott, since they put a lot of money on the anti-gay side. How about countries that do not allow gay marriage? (I apologize to the 3 readers of this blog who are against gay marriage if you find this notion offensive.)
  7. We should NOT have a conference in any state that has laws against interracial marriage. Actually, I don't think there are any such states. But there may be countries that do not allow it. (I apologize to the 0 readers of this blog who think the government should make interracial marriage illegal. I also apologize to the 1 reader who thinks it's a states' rights issue where states can decide for themselves.)
  8. Should we ban Nazi Mathematics? See here for an opinion, actually the 88th Opinion of Doron Zeilberger.

Monday, January 18, 2010

Sam Roweis (1972-2010)

Sam Roweis, an NYU CS professor specializing in machine learning, took his own life last Tuesday night. Jennifer Linden and Maneesh Sahani set up a weblog to share memories of Sam, and John Langford's blog also has a collection of remembrances.

Suicide never makes sense, especially for someone who seemed so happy in life. It makes us take stock of how we handle our own stress in our academic and family worlds, and causes us to realize what is really important in life.

Thursday, January 14, 2010

Do Innovative papers have a hard time getting into STOC and FOCS? I ask this objectively with no ax or teeth to grind.

(This is my last post until next week Tuesday.)

Many people believe the following:
FOCS and STOC only take technically hard results on problems that we already agree are worth studying. Papers that are truly innovative, starting new directions, have a very hard time getting into those conferences.
This point of view was one of the motivations for ICS.

Is it true? If you ask someone they will give anecdotal evidence. While I don't discount that evidence, especially if it comes from people on the committee, I do wonder if there is a way to study the issue more systematically.

With this in mind I request the following:
  1. If you know of an innovative paper that DID make it into STOC or FOCS, then please leave a comment about it. Include the year the paper appeared.
  2. If you know of an innovative submission that was rejected, and the authors would not mind it being known that the paper was rejected, comment on that. Include the approximate year the paper was submitted.
I am just trying to collect evidence on the issue. Include older papers too if you can, so we can get a sense of whether this has changed over time.

Wednesday, January 13, 2010

ICS I: snapshots (guest post)

(Guest Post by Rahul Santhanam)

Title: ICS I : Snapshots

1. Local Arrangements: Kudos to the organizing committee for going far beyond the call of duty and arranging for the coldest Beijing winter in 40 years. By deterring sightseeing, this ensured healthy attendance at talks and more opportunities for informal interactions among the participants.

2. Los Amigos: The surprising chilliness of the weather outside was balanced by an equally surprising warmth indoors. High and low mixed with each other, strangers did not scruple to say hello. Key gambits from FOCS and STOC such as the unrecognition (looking right through someone you've met many times before) and the Arctic Smile (the frozen mask whose meaning is - "There's nothing I'd like more than not to speak to you") seemed absent. Not that these gambits are signs of hostility - rather, in our community, they seem the result of social awkwardness together with a consciousness of the limited time available at conferences for attending talks, meeting friends and proving theorems. ICS was friendlier in part because it was more relaxed, but also because it assembled a new "social configuration", with fewer established cliques and hierarchies.

3. Law of the Excluded Middle: There were a lot of distinguished attendees at the conference - people who've had seminal ideas and founded entire fields, but also a significant younger crowd of grad students, postdocs and post-postdocs. The "middle level" of people at the post-tenure, pre-professor stage were rather sparsely represented, though. Maybe this will change when the conference becomes more established... I do hope it's not an indication of a philosophical difference between generations about what constitutes valuable research.

4. The Dilettante Has Qualms: Which of us hasn't seen (or, for that matter, hasn't been) a conference dilettante - someone who makes sure to attend their own talk and maybe two or three others chosen at random by flipping through the conference proceedings, and spends the rest of their time productively in shopping and sightseeing? It is not possible to completely eliminate the dilettante, but it is possible to discourage him, to give him qualms. The speakers at ICS did a fine job of this by motivating the talks so well - no longer was it acceptable to stay away by claiming you knew nothing about topic X and hence were unlikely to get anything from a talk on the topic. The fact that a talk was on a completely different topic was almost an incentive to attend. Of course, you did run the risk of having to give up your prejudices about those strange other subfields of theory - "trendy" or "incestuous" or "esoteric", as the case might be.

5. What is New?: So what was new about ICS as a conference? The face-mask dance! Edible conference food! Reimbursed conference costs! True, and true, and true, but the question is more about the basic structuring of the conference with regard to talks and sessions. Early on in the process, it seemed there was an initiative to have several panel discussions to supplement the talks. Eventually, we had just one panel discussion, and the talks were pretty conventional, but this is understandable in the first edition of a conference, when the conference is still trying to find its footing. What I would like in ICS talks in the future is more audience participation. You can't agree or disagree with the proof of the parallel repetition theorem - it's just there. With so-called conceptual talks, on the other hand, the speaker usually needs to make a case, with regard to the validity of a new model or the importance of a new perspective. This is best done in a dialogical framework, as in economics talks, where questions and disagreements are plainly voiced. This does make it harder to fit talks into fixed time slots, so maybe it's worth looking at more flexible scheduling...



6. The Hare and the Tortoise: My favourite talks at the conference were Srikanth Srinivasan's (on work with V. Arvind about a connection between circuit complexity and the remote point problem) and Ariel Gabizon's (on work with Avinatan Hassidim about derandomization of streaming algorithms and communication complexity protocols on product distributions). Both talks were excellent, but while Srikanth's was more of a standard-issue talk that was exceptionally clear, Ariel's was distinguished by a stylistic tic. In the middle of each slide, he would pause almost theatrically for a few seconds, as if to allow the audience to absorb what he had just said. Ironically, while the talk would have been memorable even otherwise because it was well-structured, this novel (to me) feature of the talk will make it unforgettable. I'm so used to theorists viewing time as a constraint and making the most of every second they have available; Ariel's tactic of having time make its presence felt in a positive rather than negative way seems liberating.

I was told just before my second talk that the time slot had been cut down from 30 minutes to 25 minutes, and when I expressed my worries to my session chair Mike Saks about how I'd adjust, he advised me to speak 6/5th as fast, referencing an old story about a Narendra Karmarkar talk. Maybe it's time now to start preparing 15 minute talks for 25 minute slots and to speak 5/3rd as slowly. I remember reading once during Obama's election campaign that the power of his speaking style owed a lot to the measured way he spoke - speaking slowly is a sign of confidence, and what is said becomes, ahem, more momentous. Come to think of it, we already do have a speaker in our community - Ran Raz (perhaps not coincidentally, Ariel's PhD advisor) - whose talks illustrate perfectly the virtues of simplicity, clarity and not being rushed.

Tuesday, January 12, 2010

Guest Post on ICS 2010 (2 of 3)

Innovations in Computer Science 2010 (post #2)

Guest Post by Aaron Sterling

This is the sequel to my previous post on ICS 2010, the new theoretical computer science conference whose Call for Papers emphasized conceptual contributions and the opening of new areas of research. Before diving back into the technical program, though, I'd like to say one thing about travel to Beijing. I found it surprising that I spent almost three hours at the Beijing airport after I landed. Part of the delay was due to a long line at the immigration counter, but I also spent almost 45 minutes waiting for a taxi -- and I was waiting outside, in 15 degrees Fahrenheit. One attendee told me that he had waited for a taxi for well over an hour on Sunday (I arrived Monday), and he thought it was due to the fact that fewer taxi drivers work on Sunday. So if you fly to Beijing in the winter, dress warm and be sure to allow a lot of time between your flight's arrival and anything else you might schedule yourself to do.

I'll start by mentioning "Effectively Polynomial Simulations" by Pitassi and Santhanam. (Rahul may think I'm stealing his thunder by blogging about both of the papers he presented at the conference. Oh well. His talks just rocked, and I want to share a little piece of them. He must be one of the best speakers in all of complexity theory.) Motivated by consideration of SAT solvers as proof systems, Pitassi and Santhanam generalize the well-known notion of p-simulation of one proof system by another, to a reduction where polynomially much preprocessing can be performed on an input (i.e., set of clauses or tautology) before the simulating proof system simulates a proof of the tautology in the simulated proof system. (Intuitively, using this reduction captures what proof complexity has to say about the P?=NP problem, in much the same way as p-simulation captures what proof complexity has to say about the NP?=coNP problem: a proof obtained by effective p-simulation is a witness for the ability of a SAT-solver to find a satisfying assignment for the input with only polynomial time processing and preprocessing.) This technique clarifies and fleshes out the relationships between several well-studied proof systems. For example, Tree Resolution effectively p-simulates several other proof systems, such as Nullstellensatz and Polynomial Calculus, even though Tree Resolution does not p-simulate those systems.

Perhaps the most philosophically motivated talk was Elad Eban's "Interactive Proofs for Quantum Computations," co-authored with Aharonov and Ben-Or. Eban pointed out that D-Wave is claiming it will soon have a working 128-qubit quantum computer, and he asked the reasonable question: "How could we determine whether that machine is actually a quantum computer if we are limited to classical computation ourselves?" His answer was a new interactive proof system, where the prover can perform feasible quantum computation (i.e., BQP), but the verifier is limited to feasible classical computation (i.e., BPP). (Note that what I just said isn't quite right, because their proofs require that the verifier also have access to three qubits that he can send to the prover and whose state he can check later. It's open whether the number of such qubits can be reduced, hopefully to zero.) So the prover -- that is, the D-Wave quantum computer -- needs to convince the classical verifier that a quantum computation did, in fact, take place. This classical-quantum IP system can be generalized so that the verifier is "playing" both Alice and Bob, and the prover is "playing" a quantum channel over which Alice communicates with Bob. This yields results about ensuring that quantum computations are performed correctly, even if the party that owns the quantum computer is not trusted.

The last paper I will summarize (and again, apologies to everyone else!) is "A New Approximation Technique for Resource Allocation Problems," by Saha and A. Srinivasan. I missed this talk, but the paper is cool, and the technique shares a theme with the new Strong Linear Programming method I mentioned in my previous post. Essentially, Saha and Srinivasan present a randomized rounding technique that is sensitive to the constraints satisfied exactly (i.e., with equality). Given an LP, and some vertex x within the polytope, perform a random walk starting at x in such a way that any constraint satisfied exactly by x remains satisfied exactly at every step in the random walk. Provably, the walk will end at a vertex that is closer to an optimal solution than guaranteed by previous methods. They use this technique to obtain improved results for several well-studied problems. Impressive, but my favorite part of the paper is the section on future work. Unlike most authors (including me), who just make general mention of open problems at the end, Saha and Srinivasan provide a detailed plan of attack to settle a matrix theory conjecture using their technique, and they discuss potential application to two other open problems as well.
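To give a feel (nothing more) for a walk that preserves tight constraints, here is a minimal toy of my own, far simpler than the paper's technique: round a fractional vector in [0,1]^n whose coordinates sum to an integer k, moving two fractional coordinates in opposite directions so the tight constraint sum(x) = k stays exactly satisfied at every step, with the direction chosen so each coordinate is a martingale.

```python
import random

def round_preserving_sum(x, seed=None):
    """Randomly round x in [0,1]^n (with integer coordinate sum) to
    a 0/1 vector, keeping sum(x) exactly satisfied the whole walk."""
    rng = random.Random(seed)
    x, eps = list(x), 1e-9
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if len(frac) < 2:
            break
        i, j = rng.sample(frac, 2)
        up = min(1 - x[i], x[j])    # largest move: x_i up, x_j down
        down = min(x[i], 1 - x[j])  # largest move: x_i down, x_j up
        # Pick the direction so E[new x_i] = x_i (a martingale);
        # either way, at least one coordinate hits 0 or 1, so the
        # walk ends at an integral point after at most n steps.
        if rng.random() < down / (up + down):
            x[i], x[j] = x[i] + up, x[j] - up
        else:
            x[i], x[j] = x[i] - down, x[j] + down
    return [round(v) for v in x]

print(round_preserving_sum([0.5] * 6, seed=1))  # a 0/1 vector summing to 3
```

The design point the paper exploits, in much greater generality, is visible even here: because tight constraints are never violated during the walk, the rounded solution inherits them exactly rather than only in expectation.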

There was a two hour discussion session following the last talk. It was divided into two parts: suggestions of possible research ideas, and comments about this year's and next year's ICS conference. Several well-known researchers spoke. Shafi Goldwasser gave a brief overview of recent innovations in cryptography, focusing especially on the results presented in ICS. Paul Valiant encouraged researchers to investigate biological evolution through an algorithmic lens, calling it "the ultimate algorithm." Ron Rivest said that innovation need not stop with the submissions; rather, the format of the conference itself could be innovative, accepting videos or other material, not just 10-page papers. Mike Saks synthesized several suggestions by saying there could be a rump session in which people present open problems by saying, "I tried to solve this problem by doing X, and I failed because I ran into the following problem."

(As an aside, Bernard Chazelle made the important point that an open problem session should not turn into a list of grievances. I strongly agree. There were a handful of comments in the discussion like, "The community needs to stop rejecting papers about topic X." I think that's a dangerous road to walk down, and, early on, some of the negative blog commentary about ICS was due to concerns that it might just be a way for certain people to get their pet projects published even though that work was not considered significant by the larger community. I saw no evidence of this kind of nepotism at ICS 2010 -- the papers all seemed to "belong there" on their own merits -- and I hope future conferences continue to maintain the high road.)

There were only three posters at the poster session. During the discussion, Cynthia Dwork encouraged students to prepare posters on their current work, and to submit them to ICS 2011. (She was on the ICS 2010 Program Committee, so I think graduate students worldwide should consider that suggestion to be a big phat hint.)

There were 39 accepted papers out of 71 submitted. Between jetlag and lack of an excursion, a lot of the western-hemisphere attendees missed a lot of the talks. I attended about 30, because I took one afternoon off to see some of downtown Beijing and meet an online (now real-life) friend. Many people attended fewer. For example, I talked to one person who attended only 12-15 talks, because of sightseeing and needing to sleep during the day. Amazing as the banquets and entertainment were, the ICS 2011 organizers might consider investing those resources into some form of "tourist" excursion instead, to increase attendance at the technical program.

The conference atmosphere was a bit different from that of other conferences I have attended, because, if you were on site, you were expected to be in the lecture hall. There were no break-out rooms to my knowledge, and most of the informal contact took place at the coffee break, on the shuttle bus and during the opening and closing banquets. This strikes me as a matter of taste rather than a problem, and I don't feel strongly one way or the other about changing it. However, if the ICS 2011 organizers would like to encourage on-site research collaboration, a couple conference rooms near the lecture hall would go a long way toward making that happen.

All of the talks were videotaped, and ITCS has copies of at least most of the slides used in presentations. Andrew Yao said in the discussion that, if authors consent, he would like to make that material publicly available. I have emailed my consent to post both video and slides, and I would like to encourage other authors to do the same. I believe this conference is a great new resource, and, as such, should be widely shared.

ICS 2011 will be held once again at ITCS in Beijing. Bernard Chazelle will be the Program Chair, and a preliminary Call for Papers is already available. I'm extremely grateful to everyone who made ICS 2010 possible, and I am looking forward to the new approaches developed by "innovators" in the coming year.

Monday, January 11, 2010

Guest Post on ICS 2010 (1 of 3)

Innovations in Computer Science 2010 (post #1)

Guest post by Aaron Sterling

This is the first of three posts about ICS 2010, the much-discussed "concept conference," which took place at the Institute for Theoretical Computer Science (ITCS), Tsinghua University, Beijing, from January 5th-7th. I will provide my impressions in this post and one other, and Rahul Santhanam plans to contribute something as well.

First, I need to say that this was the best-run conference I have ever attended, and one of the best-organized events of any kind that I have ever participated in. The level of financial support for students and authors, the quality of food and lodging, and the remarkable closing ceremony (which included several music and dance acts, a Kung Fu demonstration, and -- my favorite -- a Face-Off performance) set a high bar for any other conference in the world. Local Arrangements Committee members don't often get mentioned in posts like these, but I believe the entire TCS community owes a debt of gratitude not just to PC Chair Andrew Yao, but also to Local Arrangements Chair Amy Yuexuan Wang, Conference Secretary Yuying Chang, and to everyone else who made this event happen. This feels like a turning point in the history of the field.

In the US, I have often gotten the impression that computer science departments and funding sources consider TCS to be of secondary importance. What a difference in Beijing! As a silly-yet-telling example, Sanjeev Arora told me that, for a conference in 2009, ITCS printed a sign in which the phrase "Theoretical Computer Science" appeared in the largest-size font ever. I believe the investment in theory on the part of the Chinese government and academia, contrasted to the malaise of departments in the United States, speaks volumes about the future, unless the United States changes direction significantly. I'll leave that topic to be discussed on a political blog, though. Suffice it to say, I think everyone was pleased to be treated like a first-class scientist, instead of like someone doing "impractical" things that are less worthy of support.

Perhaps the highlight of the technical program was the "derivatives paper," already covered at length by Richard Lipton and other bloggers, so I won't discuss it here. Many of the accepted papers were in algorithmic game theory, and I will limit myself to mentioning the two papers in that area I found the most exciting. These are "Bounding Rationality by Discounting Time" by Fortnow and Santhanam, and "Game Theory with Costly Computation: Formulation and Application to Protocol Security" by Halpern and Pass. Essentially, Halpern and Pass define a class of games with complexity functions attached, so it is possible to reason about concepts like equilibrium with respect to a particular measure of complexity. The Fortnow/Santhanam model embeds into this approach, as it considers one particular type of complexity function. On the other hand, the complexity function defined in Fortnow/Santhanam seems particularly natural, and they are able to obtain more specific results than Halpern/Pass, because they start with a less generalized model.

The conference started off with a bang: Benny Applebaum gave an excellent talk about cryptography obtained by using only local computation. This was "Cryptography by Cellular Automata or How Fast Can Complexity Emerge in Nature?" co-authored with Ishai and Kushilevitz. They constructed, for example, one-way functions with one step of cellular automata (i.e., after one step, it is computationally hard to invert the state of the system to the original state). As cellular automata can only communicate with their immediate neighbors, this has bearing on the parallel complexity of cryptography. One point that came up in discussion is that, unlike one-way functions, document signatures cannot be obtained by local computation only, because of the need to make global change to the output if a single bit of the input is changed.
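As a toy illustration of what "one step of cellular automata" means here (the local rule below is made up for illustration; it is not the construction from the Applebaum–Ishai–Kushilevitz paper), each output bit depends only on a constant-size neighborhood of input bits, so flipping one input bit can change only a few output bits:

```python
def ca_step(bits):
    """One synchronous step of a radius-1 cellular automaton on a cyclic
    bit string: output bit i depends only on bits i-1, i, i+1. The rule
    used here (x_{i-1} XOR (x_i AND x_{i+1})) is a toy choice, not the
    candidate one-way function from the paper."""
    n = len(bits)
    return [bits[(i - 1) % n] ^ (bits[i] & bits[(i + 1) % n])
            for i in range(n)]

# Locality in action: flipping one input bit changes at most the 3
# output bits whose neighborhoods contain it. This is exactly why a
# signature scheme cannot be this local -- a signature must change
# globally when any single input bit changes.
x = [1, 0, 1, 1, 0, 0, 1, 0]
y = ca_step(x)
for i in range(len(x)):
    x2 = x[:]
    x2[i] ^= 1
    assert sum(a != b for a, b in zip(y, ca_step(x2))) <= 3
```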

The "Best Impromptu" Award goes to Avrim Blum, who, on three hours' notice, gave one of the most stimulating talks of the conference when he presented "A New Approach to Strongly Polynomial Linear Programming" by Barasz and Vempala, after the authors had a problem with their trip. The Barasz/Vempala concept is a hybrid of the Simplex Algorithm and the Interior Point Method for solving LPs. Rather than just tracing the edges, or just going through the interior of the polytope, they take the weighted average of the "useful" edges near the current location and follow the resulting "averaged" line until it hits another face of the polytope. It is unknown whether their algorithm runs in polynomial time in general, but it seems very interesting because they have shown that, for each known case in which Simplex runs in exponential time, their algorithm solves that "hard case" in polynomial time. This is because their solution method is invariant under affine transformations of the problem statement, so it is robust even when the angles of the polytope are narrow, i.e., the constraints are very close to one another.

I will conclude this post by mentioning Bernard Chazelle's "Analytical Tools for Natural Algorithms." (Please see a previous guest post of mine, and comment 3 of that post by Chazelle, for some background.) His main philosophical message was: "Use algorithms to analyze algorithms" -- meaning that if one is trying to analyze the behavior of a nonlinear multi-agent system like ABC...Dx, where A,B,C, ... ,D are matrices whose identity depends on time and some kind of feedback loop, it is not helpful to consider the problem "just mathematically," by analyzing the operator ABC...D independent of x. Rather, one should consider the problem in the form A(B(C(Dx))), and design an algorithm to reason about this nested behavior. That algorithm can then (hopefully) be tweaked to prove similar results about related nonlinear multi-agent systems. To quote from his paper: "Theorems often have proofs that look like algorithms. But theorems are hard to generalize whereas algorithms are easy to modify. Therefore, if a complex system is too ill-structured to satisfy the requirements of a specific theorem, why not algorithmicize its proof and retool it as a suitable analytical device?"
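A toy example of the kind of nonlinear multi-agent system Chazelle has in mind, where the matrices depend on the current state and so the operator product ABC...D cannot be formed up front: the standard Hegselmann-Krause opinion-dynamics update (a system of the general type Chazelle has analyzed; the specific numbers below are just an illustration).

```python
def hk_step(x, r=1.0):
    """One round of Hegselmann-Krause opinion dynamics: each agent moves
    to the average of all agents within distance r of it. The averaging
    matrix depends on the current state x, so the dynamics is the nested
    iteration A_t(...(A_1 x)) rather than a fixed operator product."""
    return [sum(v for v in x if abs(v - xi) <= r) /
            sum(1 for v in x if abs(v - xi) <= r)
            for xi in x]

x = [0.0, 0.8, 1.5, 5.0]
for _ in range(50):
    x = hk_step(x)
# The three nearby agents merge into one cluster; the outlier, with no
# neighbors in range, never moves.
assert abs(x[0] - x[1]) < 1e-9 and abs(x[1] - x[2]) < 1e-9
assert x[3] == 5.0
```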

In my next post, I'll sketch results from a few more papers, try to give some flavor of the discussion session at the end of the conference, and offer a few suggestions for the future. My apologies in advance to all the authors whose work I will be leaving out. Several attendees commented how accessible and well-presented the talks were -- and I noticed this too. (I think this was due in large part to the "call for concepts." Certainly when I prepared my own presentation, I felt motivated to communicate "high" concepts as well as I could, and I felt less pressure to include the Mandatory Scary Formula Slide(tm) to demonstrate my ability to perform rigorous research.) In any case, there is far more great material than I could possibly cover in two posts -- which is a very good problem for a new conference to have!

Friday, January 08, 2010

COLT and CCC

The COLT (Computational Learning Theory) call for papers is out. (Actually it's been out since October, but I was only recently emailed about it.) For other information about COLT see here.
How do COLT and CCC relate to each other? (I use COLT for both the conference and the field. I use CCC for both the conference and the field.)
  1. There are some results in COLT that are of interest to CCC and vice versa, but there is not much overlap. That is, most papers at COLT would be out of scope at CCC, and vice versa.
  2. CCC has its original roots in computability theory: the basic notions of reductions and completeness were adapted from it. COLT has some roots in Inductive Inference (a computability-theoretic model of learning), but the connection is much weaker. The PAC model, and the other models, do not really take things from Inductive Inference and adapt them.
  3. Both conferences used to have more computability-theoretic material, but it has faded in recent years.
  4. Both fields use tools from discrete math; however, virtually all of Theoretical Computer Science uses discrete math.
  5. COLT is co-located with ML (Machine Learning). They have done this before (I'm not sure how often). COLT would like to be relevant to ML, and probably is. CCC does not have a more applied field that it would like to be relevant to; CCC co-locates with STOC since there is an overlap in the people who want to attend both.
COLT is in Israel. Is this a good idea? A conference should be located in a place where the following occur.
  1. There are people who normally can't go but now CAN go (e.g., CCC in Prague had 12(?) attendees from Prague, most of whom normally would have a hard time going). So: are there people in Israel who want to go? I would think yes, since there is a strong theory community.
  2. It is not too hard for the people who usually go to go. How hard will it be for Americans to get to Israel? For Europeans? I don't know.
  3. The guest speakers (in this case Noga Alon and Noam Nisan) are close by, thus saving on travel expenses. Actually, I had never thought of this one until I saw that they were the guest speakers.

Thursday, January 07, 2010

DO NOT do this when choosing books for your class

When I took my first graduate course in complexity theory the professor had FOUR books on the REQUIRED FOR THE COURSE list. I bought all four. He said that
We may not use these books much but they will be good to have on your shelf if you go into theory.
I am the only one from that class who went into theory. One of them I have used (Hopcroft and Ullman's White Book). The other three I never touched and no longer have. I do not know what I did with them.

He was wrong, but for an interesting reason. Two of the books were on grubby Turing Machine stuff and models. (I don't recall what the third one was.) Things like constructing a universal Turing machine with 5 states. To be fair, theory was changing: Looking at grubby Turing Machine simulations was a dying field. Hence even a complexity theorist would not be served well by these books. We didn't even do this material in that class.

However, while I can sympathize with the prof for not knowing that complexity theory was changing, asking students to buy FOUR books that are merely good to have on your shelf is a terrible idea.

Tuesday, January 05, 2010

Axioms: What should we believe?

Some miscellaneous thoughts on set theory, inspired by yesterday's comments and other things.
  1. Geometry: Use Euclidean geometry when appropriate, for example if you are designing a bridge; use Riemannian geometry when looking at spacetime; in general, use whichever geometry fits. So there is no single correct geometry; it's more of a right-tool-for-the-right-job thing. So far Set Theory does not seem to have a strong enough connection to the real world for this approach to make sense. I suppose you use ZFC when dealing with most of mathematics, but I doubt you would ever say something like: when dealing with Quantum Mechanics it's best to assume AD. So what can you use to decide which axioms to adopt? You may decide based on your tastes. For example, see this prior blog posting. This is fine for an individual but will not really work for the whole community. For example, I happen to like AD since I like a world where the Banach-Tarski paradox is false. But that's just me.
  2. People concerned with these issues in the early 1900's were much more passionate than we are today. They had strong opinions on foundations and on non-constructive proofs. Mathematicians commonly carried firearms. We are far less passionate about these issues today. As an example, there are people today who study constructive proofs and prefer them, but I doubt anyone today would reject a theorem that was proven nonconstructively. Why the change of heart? Possibly Gödel's theorem, but also the fact that people in different parts of math can't talk to each other, so they can't argue.
  3. Another axiom of interest: the existence of inaccessible cardinals. MOTIVATION: Take omega. If |X| < omega then |powerset(X)| < omega. Does any other cardinal have this property? Why should omega be so unique? A cardinal kappa is a strong limit if, whenever |X| < kappa, |powerset(X)| < kappa; an inaccessible cardinal is an uncountable regular strong limit. Do such cardinals exist? The existence of an inaccessible cardinal larger than omega cannot be proven in ZFC: if kappa is inaccessible, then V_kappa is a model of ZFC, and hence the existence of kappa would prove that ZFC is consistent (omega itself does not yield such a model, since V_omega contains no infinite sets). It is known that ZFC cannot prove its own consistency (by Gödel's second incompleteness theorem, which applies to any consistent, computably axiomatized theory that interprets basic arithmetic).
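For reference, here is the standard definition in full (the informal description above captures the "strong limit" half; regularity is also required):

```latex
\textbf{Definition.} A cardinal $\kappa$ is \emph{(strongly) inaccessible} if
\begin{enumerate}
  \item $\kappa > \omega$ (uncountable);
  \item $\kappa$ is a \emph{strong limit}: $\lambda < \kappa \implies 2^{\lambda} < \kappa$;
  \item $\kappa$ is \emph{regular}: $\kappa$ is not the supremum of fewer than
        $\kappa$ smaller cardinals.
\end{enumerate}
If $\kappa$ is inaccessible, then $V_{\kappa} \models \mathrm{ZFC}$, so ZFC
cannot prove that an inaccessible cardinal exists: otherwise ZFC would prove
its own consistency, contradicting G\"odel's second incompleteness theorem.
```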
  4. Penelope Maddy has two nice articles on why mathematicians believe what they do: Believing the Axioms I and Believing the Axioms II. Also good to read: Shelah's Logical Dreams.

Monday, January 04, 2010

Voting on Mathematical Truths: The Axiom of Det.

One of the founders of Conservapedia (a conservative alternative to Wikipedia) said the following on The Colbert Report:
There is an absolute truth. People don't vote on mathematical things like 2+2=4.
Given the source, this quote may be ironic. However, this post is not about Conservapedia or Wikipedia. It's about voting on mathematical truths.

There is one kind of math where a vote might be appropriate. Some set theorists would like to resolve CH. We already know that this cannot be done in ZFC. So they want to add more axioms. What property should an axiom have? Ideally it should be obvious, but it is unlikely that we will find new axioms of that type. How about merely reasonable? Some set theorists think it is reasonable to remove full AC and add the Axiom of Determinacy (stated below). I want YOU to VOTE on whether it is reasonable.

Definition: Let A be a subset of {0,1}^ω. Let G_A be the following game: player I picks b_1 ∈ {0,1}, then player II picks b_2 ∈ {0,1}, then player I picks b_3 ∈ {0,1}, etc. If the final sequence b_1, b_2, b_3, ... is in A then I wins. If not, then II wins.

Definition: Let A be a subset of {0,1}^ω. A is determined if either player I or player II has a winning strategy for G_A.

The Axiom of Determinacy (AD): For all sets A that are subsets of {0,1}^ω, A is determined.
  1. AD is known to be true for A a Borel set (Donald Martin proved that).
  2. AD contradicts the full Axiom of Choice but implies countable choice (for sets of reals).
  3. AD implies that every set of reals is Lebesgue measurable.
  4. I don't think AD implies CH or not CH (are both AD + CH and AD + NOT(CH) known to have models?)
  5. AD has a Wikipedia entry. This should not be taken as a sign that it is true or false. It should not even be taken as a sign that it's well known. It just means someone put up an entry.
  6. My wife thought AD was false in 1995 but true in 2005. I do not know what changed her mind. She is not a set theorist and had not thought about it in the intervening 10 years.
  7. All finite games are determined, so AD takes something true of finite games and assumes it also holds for infinite games.
When you vote note that there is no right or wrong answer.
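Point 7 can be made concrete: for finite games, backward induction (Zermelo's theorem) decides which player has a winning strategy. Here is a minimal sketch for finite truncations of G_A; the payoff sets and depths below are just illustrative examples, not anything from the AD literature.

```python
from itertools import product

def determined(A, depth):
    """Decide a finite truncation of the game G_A: players alternately
    pick bits for `depth` rounds (player I moves first); player I wins
    iff the resulting sequence is in A. Backward induction shows that
    exactly one of the two players has a winning strategy."""
    def wins_I(prefix):
        if len(prefix) == depth:
            return tuple(prefix) in A
        outcomes = [wins_I(prefix + [b]) for b in (0, 1)]
        # Player I moves on even turns and needs SOME winning move;
        # player II moves on odd turns, so I needs EVERY reply to lose.
        return any(outcomes) if len(prefix) % 2 == 0 else all(outcomes)
    return 'I' if wins_I([]) else 'II'

# Example payoff sets, chosen purely for illustration:
majority = {s for s in product((0, 1), repeat=3) if sum(s) >= 2}
assert determined(majority, 3) == 'I'   # I wins by always playing 1
mismatch = {(0, 1), (1, 0)}             # I wins iff the two bits differ
assert determined(mismatch, 2) == 'II'  # II wins by copying I's bit
```

AD asserts that this dichotomy, trivial here by finite case analysis, persists for games of length ω over arbitrary payoff sets A.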