Monday, July 16, 2012

CCC12: Post 1 of n

(Post 1 of n on CCC 2012. I don't know how large n is yet.)

I will discuss the papers in the order they were presented.

June 26, 2012. Morning
  1. Amplifying Circuit Lower Bounds Against Polynomial Time with Applications by Richard Lipton and Ryan Williams. Of course, what we mean by Applications may differ from what other people call applications. Here is one of the two main results:

    Let c, d ≥ 1 and e < 1 be such that c < (1-e+d)/d. Then either QBF is not solvable in time n^c, or the Circuit Eval problem cannot be solved with circuits of size n^d and depth n^e.

    This is one of those odd results where it's an OR of two things that we think are both true. But there are two unconditional lower bounds that can be derived from it:

    1. QBF does not have O(n^{1.618})-time uniform circuits of depth n^{o(1)}.
    2. For all ε > 0, QBF does not have O(n^{2-ε})-time uniform circuits of depth n^{o(1)}.
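    The exponent 1.618 looks like the golden ratio φ. A plausible reading of where it comes from (my interpretation; the paper's actual argument is more involved): in the trade-off above, push the depth exponent e to 0 and set the size exponent d equal to the time exponent c:

```latex
c < \frac{1 - e + d}{d}
\quad\xrightarrow{\;e \to 0,\ d = c\;}\quad
c < \frac{1 + c}{c}
\iff c^2 - c - 1 < 0
\iff c < \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618 .
```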
    In his talk Ryan said that he needed the kind of complexity that only people like Dick Lipton still remember. (ADDED LATER: This is a misquote; see Ryan Williams' comment below.)
  2. Is the Valiant-Vazirani Isolation Lemma Improvable? by Holger Dell, Valentine Kabanets, and Dieter van Melkebeek. Probably not. Oh well.

  3. Parallel Approximation of min-max problems with application to classical and quantum zero-sum games by Gus Gutoski and Xiaodi Wu. In this paper they developed a better solution to a classical problem and then applied it to a quantum problem. I don't think this is common -- do you know of any other examples?

  4. On the complexity of the Separable Hamiltonian Problem by Andre Chailloux and Or Sattath. (The first name is Or.) This paper looks hard!

  5. Quantum Money with Classical Verification by Dmitry Gavinsky. Quantum Money is a funny topic in that it may be feasible in 100 years, but by then we will all have e-money.

  1. The Usefulness of Predicates by Per Austrin and Johan Hastad. There is an hour-long version of the talk done on a blackboard here.

  2. Reductions between Expansion Problems by Prasad Raghavendra, David Steurer, and Madhur Tulsiani. The authors claim that the Small Set Expansion Hypothesis (SSEH) is a natural hypothesis. (It has been discussed in earlier papers.) This paper shows that SSEH is equivalent to a subcase of the Unique Games Conjecture (UGC). The authors claim that SSEH is the combinatorial heart of the UGC. WOW!

  3. On Problems as Hard as CNF-SAT by Marek Cygan, Holger Dell, Daniel Lokshtanov, Daniel Marx, Jesper Nederlof, Yoshio Okamoto, Ramamohan Paturi, Saket Saurabh, and Magnus Wahlstrom. The Strong Exponential Time Hypothesis (SETH) states that for every ε > 0 there exists k such that k-SAT cannot be solved in time 2^{(1-ε)n}. This paper assumes SETH and proves from it lower bounds on HITTING SET, SET SPLITTING, and NAE-SAT.

  4. Testing List H-Homomorphisms by Yuichi Yoshida. A list H-homomorphism is (informally) a homomorphism that satisfies given constraints. Given the constraints and given oracle access to an alleged list H-homomorphism, we want to determine if it IS one or is FAR FROM being one (if it's not one but close we don't care). As usual in property testing we are concerned with O(1) queries and sublinear queries. Both are characterized.

  5. A Dichotomy for Real Weighted Holant Problems by Sangxia Huang and Pinyan Lu. Recall that in Valiant's paper Accidental Algorithms he showed that for monotone planar 3-CNF formulas where each variable appears twice, (1) finding the number of solutions mod 2 is parity-P-complete, but (2) finding the number of solutions mod 7 is in P. Since then there has been a wave of research trying to classify which counting problems are hard (likely complete somewhere) and which are in P. The paper by Huang and Lu is a big jump in this field.
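The SETH lower bounds mentioned above take exhaustive search as the baseline: a k-SAT instance on n variables can always be solved in roughly 2^n steps, and SETH says that as k grows no algorithm improves this to 2^{(1-ε)n}. A minimal brute-force solver for reference (a sketch with my own clause encoding, not from the paper):

```python
from itertools import product

def brute_force_sat(n, clauses):
    """Try all 2^n assignments; return a satisfying one or None.

    clauses: list of tuples of nonzero ints; literal v means variable
    |v| must be True if v > 0, False if v < 0.  SETH asserts that, as
    the clause width k grows, no algorithm beats this 2^n behavior by
    running in time 2^((1-eps)n) for a fixed eps > 0.
    """
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(v) - 1] == (v > 0) for v in clause)
               for clause in clauses):
            return bits
    return None
```

For example, `brute_force_sat(2, [(1,), (-1, 2)])` returns the assignment x1 = x2 = True.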


Comments:

  1. In his talk Ryan said that he needed the kind of complexity that only people like Dick Lipton still remember.

    That's kind of a misquote :) What I said was something like: "can we improve the QBF lower bounds by using tools developed in, say, the last 20 years?... you know, things that people other than Dick know about."

    But it is true that things like the proof that Circuit Evaluation is in n*poly(log n) time on multitape Turing machines seem to require an archaeological expedition in the library... or several clever thoughts about sorting... or Dick as a co-author.

    This is unfortunate. How will we ever prove that SAT isn't in n*poly(log n) time on RAMs if we don't know how to prove it for multitape Turing machines? And how will we ever prove that if we forget all those neat constructions from the 70's?

  2. Although I am just beginning my PhD, I have always been fascinated by "old school" complexity theory, perhaps more than is "healthy" for my scientific judgement. As a young researcher charting his future career, it is clear to me why people avoid the "big questions" that are mostly purely theoretical. Those reasons, however, are not primarily scientific but financial, either personal or department-wide. Most researchers would prefer a solid record of publications to 5 years spent on a theoretical matter that might not produce any publishable results in the end. Furthermore, you can't blame a department head/dean/committee for rejecting a grant for the type of long-term research that might be required to solve difficult problems in complexity theory.

    Complexity theorists know the importance of the time you dedicate to a problem, perhaps better than anyone. We know that some of the interesting ones have long proofs that require a lot of time to be written, refined, and validated. As an example, I refer you to what Mulmuley has to say about verifying a proof of P ≠ NP.

    Perhaps research centers like Simons can use the reputation of accomplished computer scientists to create research and student positions in pure structural complexity theory. Personally, I think it's time we looked thoroughly at the results from 1965 to 1980, updated them to today's circumstances, and continued the line of research that has been slowed down, if not halted, since the late 80's.

  3. Hello Harry, some people would argue that your comment is based on a misunderstanding of the current situation of the field.
    Read for instance this earlier post and some of the comments: