Tuesday, October 23, 2007

FOCS V: Report from Prog Comm.

Another great guest post from Nicole Immorlica, from FOCS.

(Note from Bill G: This is one of several posts Nicole will be making from the Business meeting.)

Program committee report: Another congrats is in order. No matter what you think about the distribution of topics (see below), you can't deny that the PC did a great job this year. I was especially impressed by the extensive feedback, both internal and external, on submissions; the comments were thorough and explicitly stated the PC's reasons for their decisions. Now for the statistics: here's the list that was presented at the business meeting. Number of submissions: 302. Number of accepts: 66. Below, each topic is listed with the number of papers submitted and the number accepted.
  1. Algebraic/Numerical Computation (14, 1)
  2. Algorithmic Game Theory (27, 5)
  3. Algorithms (95, 18)
    1. Approximation (35, 9)
    2. Geometric (8, 2)
    3. Graph (17, 1)
    4. Randomized (14, 4)
    5. Streaming (6, 2)
    6. Misc. (15, 0)
  4. Combinatorics (2, 0)
  5. Computational Biology (2, 0)
  6. Computational Complexity (30, 14)
  7. Cryptography (20, 7)
  8. Data Structures (6, 2)
  9. Geometry (13, 3)
  10. Learning (7, 0)
  11. Logic/Proof Complexity (11, 2)
  12. Parallel/Distributed Computation (15, 1)
  13. Property Testing (10, 5)
  14. Quantum Computing/Crypto (25, 6)
  15. Random Structures (7, 2)
  16. Misc. (11, 0)
  17. Out of Scope (6, 0)
  18. P = NP (1, 0)
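As a quick sanity check on the figures above, here is a small script (my own, not part of the business-meeting presentation) that tallies the per-topic numbers and the overall acceptance rate:

```python
# Tally the FOCS 2007 submission statistics listed above.
# Each entry is (submitted, accepted), as reported at the business meeting.
topics = {
    "Algebraic/Numerical Computation": (14, 1),
    "Algorithmic Game Theory": (27, 5),
    "Algorithms: Approximation": (35, 9),
    "Algorithms: Geometric": (8, 2),
    "Algorithms: Graph": (17, 1),
    "Algorithms: Randomized": (14, 4),
    "Algorithms: Streaming": (6, 2),
    "Algorithms: Misc.": (15, 0),
    "Combinatorics": (2, 0),
    "Computational Biology": (2, 0),
    "Computational Complexity": (30, 14),
    "Cryptography": (20, 7),
    "Data Structures": (6, 2),
    "Geometry": (13, 3),
    "Learning": (7, 0),
    "Logic/Proof Complexity": (11, 2),
    "Parallel/Distributed Computation": (15, 1),
    "Property Testing": (10, 5),
    "Quantum Computing/Crypto": (25, 6),
    "Random Structures": (7, 2),
    "Misc.": (11, 0),
    "Out of Scope": (6, 0),
    "P = NP": (1, 0),
}

total_sub = sum(s for s, a in topics.values())
total_acc = sum(a for s, a in topics.values())
alg = [v for k, v in topics.items() if k.startswith("Algorithms:")]

print(total_sub, total_acc)                       # 302 66
print(f"{total_acc / total_sub:.1%}")             # 21.9%
# Algorithms subcategories sum to (95, 18):
print(sum(s for s, a in alg), sum(a for s, a in alg))
```

The totals come out to 302 submitted and 66 accepted, matching the numbers in the report.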
I think I'll leave it up to the commenters to interpret this.


  1. An NP=P paper without an algorithm! Surely that's worth celebrating in itself...

  2. P=NP is an important result; I'm surprised that the PC chose not to accept it, especially with the good job they did on everything else.

  3. The paper actually gave a randomized algorithm for an NP-complete problem. While RP=NP is still an impressive result, we were put off by the sell job (claiming the submission was about P vs. NP).

  4. What would it take for an NP=P paper to be reviewed seriously? Is it often the case that papers are not reviewed seriously due to lack of credentials, especially for famous unsolved problems?

  5. Who submitted the P=NP paper? That Sergey Gubin guy?

  6. Very informative. But I think you meant 66 (instead of 86) papers.

  7. Oooh, first interpretation! I know, I know: "FOCS is a complexity theory conference with near-50 % acceptance rate where some other suckers have been fooled into submitting their non-core-complexity theory papers, thereby artificially inflating the 'quality' rating of acceptance percentage", right?

  8. "FOCS is a complexity theory conference with near-50 % acceptance rate where some other suckers have been fooled into submitting their non-core-complexity theory papers, thereby artificially inflating the 'quality' rating of acceptance percentage"

    Funnily enough that was my interpretation of things before I even saw the figures.

  9. This shows that the STOC/FOCS mafia is biased against the miscellaneous and out-of-scope communities.

    It is time for a new conference!

    Authors should submit their best miscellaneous and out-of-scope papers to MISCO 2008.

  10. I do like how P=NP is its own category. Does that mean there was an NP=P category, but no one submitted this year?

    It seems *too* focused for a category; why not just have a catch-all "crackpot" category?

  11. It is rather striking that not a single learning theory paper made it to FOCS.

  12. Anonymous #4 and #3 seem to imply that the paper on P=NP was rejected without serious review, based solely on title and authorship.
    If this is the case, we have a serious problem with the integrity of the review system.

  13. P=NP is right! This is because an assignment for satisfiability has a lot of redundancies.

    For example:

    X + Y + Z only needs X=1 or Y=1 or Z=1; we do not need X=1 and Y=1...
    Therefore, reasoning with propositional rules can be simplified, e.g., distribution over a disjunctive rule.

    The main error of previous methods is trying to converge to a solution from a huge hyperspace.

    Any combination with backtracking or backjumping is a worse computation. A tree search is essentially a combination!

    The algorithmic complexity is 2^k T^3,

    where k is the maximum size of a clause and T is the number of clauses.

    So the algorithm can be polynomial, but the description of the problem can be exponential. However, who can understand a language with an exponential description?

    The evidence can be verified from the transformation of SAT into 3SAT. The funny thing is that the problem's complexity is not changed, even though its description is!

    No matter 2SAT or 3SAT or SAT, and no matter how many variables, the description is still a description!!!

    Sergey's method is simple and naive. We can get a clear improvement by slightly modifying the propositional rules for reasoning without truth values.

    The solution consists of two main points, and is not new according to my observation.

    Hope my observations and results can verify P=NP along with Sergey's.


  14. Sorry, a bug. It should be f(k)T^3.


  15. All geometric computation can be linearithmic, i.e., O(n log n).

  16. My recent research finds that we can build a Voronoi diagram in d dimensions in O(n log n).

    This is really interesting! I suddenly find that, before, people could only address the VD efficiently in the plane. In d dimensions it is N^[d/2].

    Finally, I realize that the VD problem and other NP problems belong to the same group of hard problems (not exactly).

    I also realise that people tend to give an optimal solution by combination. But that is not an arithmetic description of the solution. Combination is bad computation in practical applications.

    All techniques are only traditional. But the problem should be described geometrically.

    I write this here as a trace, so that the final result can verify what I say in the future.
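For readers curious about the SAT-to-3SAT transformation that comment 13 alludes to: the standard textbook reduction splits every clause of width k > 3 into k - 2 three-literal clauses using fresh chaining variables, in polynomial time. A minimal sketch (the function name and the DIMACS-style signed-integer encoding of literals are my own choices):

```python
import itertools

def to_3sat(clauses, num_vars):
    """Standard clause-splitting reduction from SAT to 3SAT.
    Literals are nonzero ints: +v means x_v, -v means NOT x_v.
    A clause (l1 OR l2 OR ... OR lk) with k > 3 becomes
    (l1 OR l2 OR y) AND (NOT y OR l3 OR ... OR lk), repeated
    until every clause has at most 3 literals."""
    fresh = itertools.count(num_vars + 1)  # fresh variable names
    out = []
    for clause in clauses:
        while len(clause) > 3:
            y = next(fresh)
            out.append(list(clause[:2]) + [y])   # (l1 OR l2 OR y)
            clause = [-y] + list(clause[2:])     # (NOT y OR rest)
        out.append(list(clause))
    return out

# A width-4 clause splits into two width-3 clauses:
print(to_3sat([[1, 2, 3, 4]], num_vars=4))  # [[1, 2, 5], [-5, 3, 4]]
```

The output formula is equisatisfiable with the input and only polynomially larger, which is exactly why the reduction changes the description but not the problem's complexity class.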