Tuesday, June 29, 2004

FOCS Accepted Papers

The list of accepted papers for the upcoming FOCS conference in Rome is out. [Thanks Suresh]

A few complexity papers to note: Ran Raz finds easy languages with no log-depth multilinear circuits. Andris Ambainis and Mario Szegedy have separate papers showing nice applications of quantum "random" walks. Barak, Impagliazzo and Wigderson show how to extract nearly uniform distributions from multiple independent random sources, as opposed to one random source and a few truly random bits. And lots more.

Monday, June 28, 2004

Don't Make it Too Easy or Too Much

Two easy ways to improve your paper but lessen your chances of acceptance at a conference: add more results and simplify your proofs. Adding a result can only increase the usefulness of a paper, but program committees see many results in a paper and conclude that none of them could be very strong. A few years ago one of our students had a paper rejected at STOC; he removed one of his two main theorems and won the best student paper award at FOCS.

Given the same theorem, the community benefits more from a simple proof than from a complicated one. But program committees look for hard results, so if they see a very simple proof it can count against you.

You need to play the game. If you have many results then, depending on the situation, you can either split the paper or highlight one result and bury the others. It's a bit unethical to use a hard proof where you know an easy one, but many people make an easy proof look harder by adding an unnecessary level of detail or by proving a more general but less interesting theorem.

You do what you need to do, within ethical standards, to get your paper accepted. Once it is accepted, remember that you get a rewrite for the proceedings version, a chance to write the paper the way it should be written.

Saturday, June 26, 2004

Note from Vereshchagin

I received the following from Nikolay Vereshchagin.
The combinatorial question I discussed last summer at the rump session of the Computational Complexity conference (about partitioning a planar set into a small number of uniform parts) was answered almost immediately by Ilan Newman and Gábor Tardos. They found a purely combinatorial proof. Recently I have written a note on the subject.

Friday, June 25, 2004

Complexity Conference Recap

The Complexity Conference ended yesterday. You can already find the papers on the IEEE site, and if you don't have access you can often find versions of the papers on the authors' homepages.

We had a strong turnout and a nice variety of papers across many areas of complexity, with a particularly strong showing in quantum complexity and with structural complexity making a comeback.

Amit Chakrabarti asked about group isomorphism. Arvind and Torán showed that solvable group isomorphism is "almost" in NP∩co-NP.

Although I did not have my own talk in the conference, I presented a paper by Buhrman and Torenvliet since they unfortunately could not be in Amherst. I like giving talks on other people's work since you can be honest about the strengths of a paper without having to brag. My favorite result in their paper shows that if you take a many-one complete set for EXP and remove any easily computable set of subexponential size, what remains is still Turing-complete for EXP. The proof is a clever recursive algorithm that uses the set itself to find safe places for the reduction to map to.
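
Roughly in symbols (my paraphrase of the statement; I am writing the density condition loosely, see their paper for the exact bound): if $A$ is $\le_m^p$-complete for EXP and $S \in \mathrm{P}$ satisfies $|S \cap \{0,1\}^n| \le 2^{n^{o(1)}}$ for all $n$, then $A \setminus S$ is $\le_T^p$-complete for EXP.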

Next year we have our 20th conference in San Jose followed by Prague in 2006.

Tuesday, June 22, 2004

Rump Session Redux

This week I am in Amherst at the University of Massachusetts for the 19th IEEE Conference on Computational Complexity. Lots of fun papers and complexity theorists. This is complexity heaven.

Like last year, we had a number of interesting new results described at the rump session. Let me describe a couple of them to you.

Scott Aaronson follows up on his guest post about the complexity of agreement. Aumann has a famous theorem that two players who communicate cannot agree to disagree on the probability of some state of the world; after some discussion they will converge to a common probability. Aaronson looked at the complexity of this process and found that convergence comes relatively quickly. He defined a notion of (ε,δ)-agreement, where the players' probabilities are within ε of each other with confidence 1-δ, and showed that such an agreement happens after a number of rounds polynomial in 1/ε and 1/δ.
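
In symbols, the guarantee has roughly this shape (my rendering of the description above, not necessarily Aaronson's exact statement): if $p_1$ and $p_2$ denote the two players' current probability estimates, then after $\mathrm{poly}(1/\varepsilon, 1/\delta)$ messages, $\Pr[\,|p_1 - p_2| \le \varepsilon\,] \ge 1 - \delta$.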

Neeraj Kayal looked at the complexity of the problem #RA, counting the automorphisms of a ring given by generators. He showed that factoring and graph isomorphism reduce to #RA and that #RA sits in AM∩co-AM. As an open question he wondered about the complexity of determining whether a ring has nontrivial automorphisms when one is given tables for addition and multiplication. It remains open even for commutative rings.

Update 6/23: Kayal tells me I didn't accurately capture his rump session talk and sent me the following summary.

We have an algorithm that determines whether a ring has a nontrivial automorphism even when the ring is given in the form of generators for its additive group, with the pairwise products of the generators expressed as linear combinations of the generators. (We get this from a characterization of all finite rigid rings; it turns out we can test whether a ring satisfies this characterization without solving integer factoring.) Unfortunately, we do not know of a reduction from Graph Automorphism to Ring Automorphism, although we have found a cute reduction from Graph Isomorphism to Ring Isomorphism!

The open problem that I would love to solve is to decide whether two rings are isomorphic or not when they are given in the form of tables (one table each for addition and multiplication). I do not know how to do this even for commutative rings.
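
To make the table version of the problem concrete, here is a minimal brute-force sketch in Python (my own illustration, not Kayal's algorithm). It tests isomorphism of two finite rings given by n×n addition and multiplication tables by trying all n! bijections; the open problem is to do substantially better than this.

    from itertools import permutations

    def rings_isomorphic(add1, mul1, add2, mul2):
        # Each ring is given by n x n tables over {0, ..., n-1}:
        # add[a][b] encodes a+b and mul[a][b] encodes a*b in the ring.
        n = len(add1)
        if len(add2) != n:
            return False
        for f in permutations(range(n)):  # try every bijection
            if all(f[add1[a][b]] == add2[f[a]][f[b]] and
                   f[mul1[a][b]] == mul2[f[a]][f[b]]
                   for a in range(n) for b in range(n)):
                return True
        return False

    # Example: Z/4 versus Z/2 x Z/2 (encode the pair (x,y) as 2*x + y).
    z4_add = [[(a + b) % 4 for b in range(4)] for a in range(4)]
    z4_mul = [[(a * b) % 4 for b in range(4)] for a in range(4)]
    v_add = [[2 * ((a // 2 + b // 2) % 2) + (a + b) % 2 for b in range(4)] for a in range(4)]
    v_mul = [[2 * ((a // 2) * (b // 2) % 2) + a * b % 2 for b in range(4)] for a in range(4)]
    print(rings_isomorphic(z4_add, z4_mul, v_add, v_mul))  # False: different additive groups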

Monday, June 21, 2004

Shimon Even (1935-2004)

Shimon Even was born in Israel on June 15th, 1935. He died on May 1st, 2004. In addition to his pioneering research contributions (most notably to Graph Algorithms and Cryptography), Shimon is known for having been a highly influential educator. He played a major role in establishing computer science education in Israel (e.g., at the Weizmann Institute and the Technion). He served as a source of professional inspiration and as a role model for generations of young students and researchers. Two notable avenues of influence were his PhD students and his books Algorithmic Combinatorics (Macmillan, 1973) and Graph Algorithms (Computer Science Press, 1979).
From a memorial page by Oded Goldreich.

Friday, June 18, 2004

Visa Problems Continue

Wisconsin Professor Dieter van Melkebeek has a paper at the ICALP conference but cannot go to Finland to present it. Why not? Delayed processing of his green card application has led to problems with his current visa, putting him in a temporary state of visa hell. Dieter would actually have no trouble attending ICALP; he would just have problems coming back.

Dieter's is one of many stories of people changing travel plans and missing conferences because of America's tougher requirements and slower processing of foreign immigration applications. An Indian graduate student with a paper at next week's Complexity conference could not get a visa in time. I would not be surprised if many graduate students do not start the fall semester on time, awaiting my government's blessing to come study here.

This is a story I have told before and will likely tell again. I understand the need for security, but most scientific progress happens through collaboration, and preventing or delaying that collaboration holds back the advancement of knowledge. Not since the 80's have we seen such limits on travel, though this time in reverse: during the cold war several countries would not let many of their best scientists out; these days we don't let many of the world's best scientists in.

Wednesday, June 16, 2004

Riemann Hypothesis and Computational Complexity

A commenter asks a good question for a bad reason: Would a proof of the Riemann Hypothesis have any impact on complexity theory?

Rather surprisingly the answer is yes, particularly in the area of computational number theory. In the most famous example, Gary Miller in 1975 gave a polynomial-time algorithm for primality whose correctness could be proven assuming the Extended Riemann Hypothesis (ERH). Of course, in 2002 we got a polynomial-time primality algorithm with no assumption, though the original analysis of that algorithm gave a constant which depends on how ERH is resolved.
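
Miller's test is easy to describe in code. Here is a sketch (my rendering; the explicit witness bound of 2 ln² n is due to Bach, and correctness for all composites relies on ERH):

    from math import log

    def is_prime_miller_erh(n):
        # Deterministic Miller test: n is prime iff it passes the strong
        # test for every base a <= 2 ln^2 n. Correct for all n only if ERH holds.
        if n < 2:
            return False
        for p in (2, 3):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:       # write n - 1 = d * 2^s with d odd
            d //= 2
            s += 1
        for a in range(2, min(n - 1, int(2 * log(n) ** 2)) + 1):
            x = pow(a, d, n)
            if x == 1 or x == n - 1:
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False    # a witnesses that n is composite
        return True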

There are still many other problems in computational number theory that require ERH. For example, according to Eric Bach, the only known polynomial-time algorithm for computing square roots modulo a large prime p relies on ERH: "The idea is to combine an algorithm that uses a quadratic nonresidue, such as Shanks's algorithm (this in Knuth v. 2 I am pretty sure) with a bound on the least quadratic nonresidue mod p (e.g. in my thesis it is proved to be <= 2 (ln p)^2 if ERH is true)."
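
Here is a sketch of that combination (my illustration of the idea in Bach's quote, not code from his thesis): Tonelli–Shanks run with the least quadratic nonresidue, whose search is provably fast only under ERH.

    def sqrt_mod_p(a, p):
        # Square root of a mod an odd prime p; assumes a is a nonzero
        # quadratic residue, i.e., a^((p-1)/2) = 1 mod p.
        q, s = p - 1, 0
        while q % 2 == 0:       # write p - 1 = q * 2^s with q odd
            q //= 2
            s += 1
        if s == 1:              # p = 3 mod 4: direct formula, no nonresidue needed
            return pow(a, (p + 1) // 4, p)
        z = 2                   # least quadratic nonresidue; under ERH it is
        while pow(z, (p - 1) // 2, p) != p - 1:  # <= 2 (ln p)^2, so this loop
            z += 1                               # runs in polynomial time
        c = pow(z, q, p)
        x = pow(a, (q + 1) // 2, p)
        t = pow(a, q, p)
        m = s
        while t != 1:
            i, t2 = 0, t        # find the least i with t^(2^i) = 1 mod p
            while t2 != 1:
                t2 = t2 * t2 % p
                i += 1
            b = pow(c, 1 << (m - i - 1), p)
            x = x * b % p
            t = t * b * b % p
            c = b * b % p
            m = i
        return x

    print(sqrt_mod_p(10, 13))  # 7 (the other root is 13 - 7 = 6)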

Tuesday, June 15, 2004

Special Issues

Journals dominate the non-research talk at STOC. We had a long discussion at the business meeting about the STOC special issue. A little background: for the past 24 years the STOC program committee has selected 6-10 papers from the conference, and one of the PC members has served as editor of a special issue of a journal to which all these papers are invited. The Journal of Computer and System Sciences (JCSS) has always hosted the special issue for STOC, as well as for a few other conferences including FOCS and Complexity.

JCSS became an Elsevier journal a few years ago when Elsevier bought Academic Press. Elsevier has come under attack in our field over the past few years for its pricing policies, an issue discussed in this weblog before. Some editorial boards have resigned and many others are considering it. The current PC chair (and fellow U. Chicago professor) Laszlo Babai has strong negative feelings towards Elsevier and spearheaded the issue at the conference.

The STOC Executive Board has final say on the future of the special issue but based on the business meeting discussion, the special issue for STOC will likely move to SIAM Journal on Computing (SICOMP) perhaps as early as this year.

My concern, which I expressed at the meeting, is that we already have a culture where too many papers never appear in a journal, i.e., never get written with full proofs and never go through a rigorous refereeing process. The more negative press we give journals, the more likely authors are to take the easy solution of no journal at all. When was the last time you tried to download a journal paper that was never written?

Update 6/18: Hal Gabow, chair of SIGACT, has set up a website containing additional information on the meeting and subsequent procedures.

Monday, June 14, 2004

STOC Business Meeting

STOC got underway Sunday with a full slate of talks and a lengthy business meeting last night. I do not have time for a long post now so I will just bring you up to date on some facts from the business meeting.

The attendance was 261 (242 paid + 19 local helpers). Later today I will update the contest post with the results.

STOC 2005 will be in Baltimore May 22-24 and STOC 2006 will be in Seattle. There were announcements of three new journals: the previously mentioned ACM Transactions on Algorithms and two on-line open-access journals, Logical Methods in Computer Science and Theory of Computing.

Most of the business meeting was devoted to the future of the special issue, and I left around 11 PM last night before the discussion had ended. It will require a post of its own in the near future.

STOC runs through Tuesday. Much more as the week goes on.

Friday, June 11, 2004

Favorite Theorems: Connections

May Edition

I have always loved results that find connections between areas of complexity previously thought to be different. This month we highlight one of the best.

Extractors and Pseudorandom Generators by Luca Trevisan

Informally a pseudorandom generator takes a small random seed and generates strings that can fool every probabilistic algorithm. To describe an extractor we start with some distribution D over strings of length n. Let p be the maximum probability of any string in D and let k = log(1/p). An extractor uses D and a small number of truly random bits to create a nearly uniform distribution on strings of length close to k.
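
In the now-standard notation (my addition; the description above keeps things informal), the two objects look like this: a generator $G:\{0,1\}^d \to \{0,1\}^m$ $\varepsilon$-fools a test $T$ if $\bigl|\Pr_{x \sim U_d}[T(G(x))=1] - \Pr_{y \sim U_m}[T(y)=1]\bigr| \le \varepsilon$, and a function $\mathrm{Ext}:\{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ is a $(k,\varepsilon)$-extractor if for every distribution $X$ with $\max_x \Pr[X=x] \le 2^{-k}$, the distribution $\mathrm{Ext}(X,U_d)$ is within statistical distance $\varepsilon$ of uniform.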

Both pseudorandom generators and extractors have many uses in complexity, and many papers in the field give constructions improving the parameters of both. Trevisan showed that one can view any pseudorandom generator as an extractor, and then derived better extractors from known pseudorandom generator constructions.

Pseudorandom generators fool resource-bounded algorithms while extractors produce nearly uniform distributions in an information-theoretic sense. That makes this connection all the more amazing. Trevisan's paper has affected how researchers think about and prove results in both areas.

Wednesday, June 09, 2004

Win a Gmail Account

My first weblog contest. Guess the paid attendance (including students and postdocs) at next week's STOC conference. Closest to the correct answer receives an invitation for a Beta Gmail account (donated by weblog friend Meridel).

Rules: Send your guess in the subject of an email to stocguess@fortnow.com. Include your name and email in the body of the message. One guess per person. All guesses must be sent by Saturday noon CDT. Closest guess to the attendance announced at the business meeting Sunday night will receive an invitation to open a Gmail account (still in Beta testing). In case of tie, first closest guess received will win. Anyone involved in STOC organization is ineligible. Not responsible for delayed or undelivered email. My decision of the winner is final. Contest not sponsored or affiliated with Google or ACM SIGACT.

Good luck.

Results Update 6/14: Total paid attendance was 242. The closest at 254 was Nanda Raghunathan, second place at 223 was Kamalika Chaudhuri and third at 265 was Chandra Chekuri. We have some extra invites so we've decided to give gmail accounts to all three. Congratulations and thanks to everyone who participated.

Monday, June 07, 2004

Professional Societies

Professional societies perform valuable roles in academia. They give awards, sponsor conferences and publish reasonably-priced journals as well as bulletins, newsletters and reviews. Societies disseminate information among researchers about future activities and the state of the field. They form an advocacy group representing the scientists in government and universities. Most importantly they provide a focal point that lets us identify as a community.

Unfortunately in theoretical computer science no single group plays all these roles and thus one interacts with a large number of professional societies during an academic career. Let's look at some of them.

First comes the Association for Computing Machinery (ACM), the largest society devoted to computing issues. ACM tries to cover the entire computing profession, so computer science research issues do not get center stage. They do publish several journals and give many of the important awards such as the Turing Award.

ACM has a number of special interest groups (SIGs). SIGACT, the Special Interest Group on Algorithms and Computation Theory, is the main organization devoted to theoretical computer science in the US. They sponsor STOC and other conferences and publish SIGACT News. Many theorists join SIGACT without joining ACM.

The IEEE Computer Society also deals with computer issues and has a Technical Committee on Mathematical Foundations of Computer Science that sponsors conferences including FOCS and Computational Complexity. Why do we need both a Computer Society and ACM and a SIGACT and a TC-MFCS? Perhaps for the competition?

None of these societies serve as a strong advocate for computer science research and so we have the Computing Research Association. The CRA has as its members not individuals but academic departments and research labs. They have a newsletter, advocate and keep us informed on government policy on computer science, and collect information such as the Taulbee Surveys giving salary and job information in CS research. The CRA also has a strong focus on women's issues in CS research.

Let's not forget the Society for Industrial and Applied Mathematics (SIAM), which helps sponsor some conferences (SODA) and publishes the well-respected SIAM Journal on Computing.

The European Association for Theoretical Computer Science (EATCS) covers not just Europe but captures theory from an international perspective. They sponsor conferences like ICALP and publish a hefty bulletin three times a year. Also many countries have their own computer science and/or theoretical computer science societies.

Then based on my research interests I have now or at some time been a member of AMS, MAA, ASL, SIGecom and the Game Theory Society. Where does it all end?

Saturday, June 05, 2004

BEATCS Complexity Column

With the June issue, Jacobo Torán takes over the editorial duties of the BEATCS Complexity Column. Following tradition, he wrote the first column himself: Space and Width in Propositional Resolution. A strong start to what should be a great run of columns.

Friday, June 04, 2004

Survey Papers

Let's end this week the way we started it, with a survey paper. Luca Trevisan has recently posted on ECCC a new survey, Some Applications of Coding Theory in Computational Complexity. The survey gives a rather in-depth look at several different types of codes, with connections to private information retrieval, average-case complexity and probabilistically checkable proofs. Trevisan gives a broader and deeper look at coding theory than an earlier, also excellent, survey by Madhu Sudan that focuses on list decoding.

Survey papers play a valuable role in our field. As computational complexity has broadened over the years, one cannot hope to keep on top of all of the many areas. A survey paper written by an expert in the field can perform many valuable tasks including

  • Putting the main results of an area in a common framework. Early work often uses different notation and definitions, making it hard to compare one paper to another. Fixing the notation and definitions allows us to easily compare different results. A well-liked survey can also influence future notation.
  • Giving easier-to-follow proofs of old results; proofs get easier over time. A survey can also develop a common proof technique useful for many results in the area.
  • Giving the author's informed opinion on the importance of different results in an area.
  • Stating open problems and directing future research in that area.
In case I've managed to put the survey bug in you, here are two topics where we've seen several recent research papers but, as far as I know, no good surveys.
  1. The complexity of Nash Equilibrium
  2. ε-biased Sets

Thursday, June 03, 2004

Complexity Registration Deadline

Tomorrow is the last day for early registration for this year's Complexity Conference in Amherst. I promise a good time will be had by all.

Wednesday, June 02, 2004

IEEE Fellowship at the State Department

Are you an American IEEE member? Now you can help guide American foreign policy. IEEE-USA has announced an Engineering and Diplomacy Fellowship through which IEEE members can serve as Fellows in the U.S. State Department and continue to advise it afterwards. These fellowships are being offered through a few professional societies; perhaps the ACM should try to get in on this.

Some more background from FYI.

Tuesday, June 01, 2004

Impagliazzo's Five Worlds

Boaz Barak in a comment last week mentioned one of my favorite survey papers, Russell Impagliazzo's A Personal View of Average-Case Complexity, presented at the 1995 Complexity Conference. In that paper he describes five possible worlds and their implications for computer science.
  • Algorithmica: P = NP or something "morally equivalent" like fast probabilistic algorithms for NP. This was the world I described last week but looking back at Impagliazzo's paper, he does a nicer job.
  • Heuristica: NP problems are hard in the worst case but easy on average.
  • Pessiland: NP problems are hard on average but no one-way functions exist. We can easily create hard NP problems, but not hard NP problems where we know the solution. This is the worst of all possible worlds, since not only can we not solve hard problems on average but we apparently get no cryptographic advantage from the hardness of these problems.
  • Minicrypt: One-way functions exist but we do not have public-key cryptography.
  • Cryptomania: Public-key cryptography is possible, i.e. two parties can exchange secret messages over open channels.
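
For concreteness, here is the standard definition behind the Pessiland/Minicrypt divide (my phrasing; Impagliazzo's paper works with average-case formalizations): a polynomial-time computable $f:\{0,1\}^* \to \{0,1\}^*$ is one-way if for every probabilistic polynomial-time algorithm $A$ and every polynomial $q$, $\Pr_{x \sim U_n}[A(f(x),1^n) \in f^{-1}(f(x))] < 1/q(n)$ for all sufficiently large $n$.
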
Impagliazzo does not guess which world we live in. Most computer scientists would say Cryptomania or Minicrypt.
The paper goes on to give one of the better justifications for Levin's definition of average-case complexity.