Tuesday, July 21, 2009


One of the main topics of discussion at CCC last week centered on the problems with the Electronic Colloquium on Computational Complexity (ECCC), now 15 years old and showing its age.

When it launched in 1994, ECCC truly changed the way we discover research. ECCC has a large board of editors that, in theory, quickly accepts submissions that reach a minimum quality level in computational complexity. Once ECCC hit critical mass it became the must-check place to see the latest results in complexity. I first heard about Omer Reingold's L=SL result from its ECCC post.

Roughly, here is how ECCC works: Once submitted, anyone on the ECCC scientific board can look at the paper and either accept or reject it through a clunky email interface. Once accepted the paper appears nearly immediately. Papers by board members are automatically accepted. Papers not acted upon after two months are automatically rejected. A weekly email goes out to the board listing new and about-to-expire papers.
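That workflow amounts to a small state machine. Here is a minimal sketch of it in Python (a hypothetical model for illustration only, not ECCC's actual code; the class and field names are my own invention):

```python
from datetime import datetime, timedelta

# Assumption: "two months" modeled as a 60-day timeout.
TIMEOUT = timedelta(days=60)

class Submission:
    def __init__(self, author, submitted_at, author_on_board=False):
        self.author = author
        self.submitted_at = submitted_at
        # Papers by board members are accepted automatically.
        self.status = "accepted" if author_on_board else "pending"

    def review(self, decision):
        # Any board member may accept or reject a pending paper.
        if self.status == "pending" and decision in ("accepted", "rejected"):
            self.status = decision

    def expire_if_stale(self, now):
        # Papers not acted upon within the timeout are rejected.
        if self.status == "pending" and now - self.submitted_at > TIMEOUT:
            self.status = "rejected"

# Example: a non-board paper that nobody touches for 75 days times out.
paper = Submission("A. Author", datetime(2009, 5, 1))
paper.expire_if_stale(datetime(2009, 7, 15))
print(paper.status)  # rejected
```

The key design point, and the source of the trouble described below, is that the default transition is rejection: a paper that no editor acts on falls out of the system.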

Most papers are accepted either immediately, if an editor is interested in them, or right before they expire. But papers often fall through the cracks, or the email doesn't get sent and papers time out. Recently many reasonable papers timed out, which led to the discussions at the Complexity Conference. Also, the site hasn't been updated much and is sometimes down or not working properly.

The main alternative to ECCC is the Computational Complexity section of the Computing Research Repository (CoRR), now part of the ArXiv maintained by the Cornell University libraries and thus usually more reliable than ECCC.

I subscribe to the RSS feeds for complexity papers on both ECCC and the CoRR. CoRR has a policy of accepting all papers in scope, including various P v NP "proofs", and most active complexity theorists self-select into ECCC, so only a few CoRR papers are interesting to me.

Scott Aaronson led a discussion at the CCC business meeting on what to do with ECCC, not that the Complexity conference has any active role in ECCC. There were many suggestions, including changing some of the policies, perhaps by assigning papers to editors or making acceptance rather than rejection the default, or running ECCC as just an overlay over CoRR to increase reliability. Maybe we just need to find a way to make sure more submissions get processed.

As the number of researchers and papers in complexity has increased, we have come to rely on systems like ECCC to let us learn the latest important results without having to wade through the chaff. When these systems fail us, even in little ways, we feel lost. But we can't fault the people who run ECCC; they have served us well for a decade and a half with little remuneration. Still, we need something that works, or we will have to go back to the old ways of learning about good papers from conferences and journals.

Update 7/27: ECCC has relaunched with a new and improved website.


  1. If the only problem is separating the wheat from the chaff, then making ECCC an arXiv overlay makes sense. Why duplicate all the other functionality, particularly when it doesn't work as well?

    If submissions are timing out (after two months without attention!), then the system is clearly completely broken. This is not a difficult problem to solve. The arXiv's philosophy of maintaining a low minimal standard, and leaving overlays to other parties, makes much more sense than combining all these functionalities into one place.

  2. That was a very informative post about ECCC--I actually did not know how it worked. It seems better to go with arXiv, because I am somewhat confused by the fact that a paper that will eventually be accepted ("right before it expires") sits for 2 months when 1) it will be accepted eventually and 2) it is not refereed anyway so why the delay? Making papers sit around for two months just waiting for someone to check a box is pretty stupid.

  3. There is another way to do this that I have had to resort to. You submit the paper and then immediately PING an editor yourself to get it approved. This works some percentage of the time. The problem is that sometimes someone else has picked up the paper from the list and said that they will decide on it but don't get around to it. In this case there is a "hold" on the paper that needs to be released before it can appear.

    The overlay seems like absolutely the right idea.

  4. There is no question that CS theory would benefit if ECCC were converted to an arXiv overlay. It could have been done years ago. And the main reason is not reliability or technical features of the arXiv, although those are good reasons. No, the ultimate issue is that it's unsatisfactory for CS theory e-prints to be "a house divided". Each of these services is a work of social engineering, and usage is self-reinforcing. People would submit more e-prints, and their papers would see more readers more quickly, if ECCC merged with the arXiv.

In 1998, mathematicians formed a committee to put together the mathematics section of the arXiv. The chair of this committee was David Morrison, and I was one of the other members. At the time, we listed 11 non-arXiv preprint servers that played a role similar to ECCC. It was entirely clear to us that we should invite them to join the arXiv. Generally, when one of them agreed, submissions quickly jumped to more than the sum of the parts. It was a big success.

    The preprint services that did not agree to join the arXiv did not particularly go anywhere in the short term. They didn't succeed or fail. But in the long term, some of their maintainers have become bored with upkeep. In several cases, their semi-permanent user base turned into a wasted opportunity. The arXiv also slowly grinds down these "competitors". This type of competition has not served a useful purpose.

An overlay for arxiv.org already exists: scirate.com. For example, here you can check it weekly.

If the editorial board used that, anyone could quickly check what they liked. Also, it seems to work well for the quantum people...

As ECCC writes in its "call for papers," the stress is on *lower bounds*. So this is a very specific collection, far from covering all of, say, the CCC/FOCS/STOC scope. It is therefore no surprise that many submissions of an algorithmic or structural character receive less interest and are "rejected" due to the "2 month limit," although as such they are, perhaps, very interesting. I think the confusion in the community comes from misunderstanding the role of ECCC. It is not a substitute for CCC/FOCS/STOC: it is merely a site collecting papers on *lower bounds* (mainly), just as the quantum people do for their field.

ArXiv is OK, but only as the last resort for finding a paper, if it is not online on the author's homepage. It simply has too much information to check regularly. Many people (myself included) prefer a boutique shop -- a supermarket requires too much time to find "wow, that's for me."

Real competition for ECCC would be to make papers accepted to FOCS/STOC/CCC/etc. available online. Then people would go there first, no doubt. Then we would no longer need an "overlay with ArXiv."

I think there is some value to applying minimal filtering for scope and correctness. But why does it need to involve any more than that? Crypto has the eprint archive, which is administered by 3 people and seems to work quite well.

Well, the difference is that the paper must wake the interest of at least one of these "lower bounds people." They work hard, and will only look at a paper that makes a real step. The "screening" is therefore secondary---what is primary is whether the submission puts us further, whether it wakes the interest of at least one of the 30 people who want "real" lower bounds, not just because "P!=NP" should hold. This is more than just "moderation by 3 people." It is rather asking: does this smack of something new, or is it just business as usual?