The first two days were scheduled to be all Ketan, all the time. Although Ketan is an excellent presenter, talking for two days straight and keeping the audience interested would be a tough task for anyone. Not everyone made it to the end, but a significant fraction of the audience did. Ketan's lectures were interspersed with *a lot* of discussion and debate, including a couple of short impromptu guest lectures from audience members on background topics. Pointed questions came from both classical mathematicians and complexity theorists, and as often as not were answered by audience members from the other field. It was refreshing to see the humility of the giants of both fields: great classical mathematicians asked complexity theory questions that a complexity undergrad could answer, and vice versa. I think this is one of the few times I have ever heard of serious classical mathematicians mingling with complexity theorists, and I think it worked quite well. This type of interaction, and more, seems necessary for the GCT program, and can only be a Good Thing for both fields.

And now for two technical items from the workshop.

Avi Wigderson and others (sorry to the others, but Avi sticks out most in my memory, and I didn't learn everyone's names!) repeatedly suggested applying the techniques of GCT to problems easier than P vs NP, to see whether they can be pushed through to fruition for easier lower bounds. At the moment, GCT reduces P vs NP to certain hard conjectures in algebraic geometry and representation theory. The corresponding conjectures arising in easier lower bound problems would hopefully be easier, maybe even easy enough to solve in our lifetimes! Peter Burgisser has been looking at matrix multiplication using these techniques (an idea originally suggested by his adviser, Volker Strassen, long before GCT came along), and his student Christian Ikenmeyer gave a presentation on their results as one of the research talks on the third day.

Christian's talk was probably the most salient for complexity theorists not intimately familiar with GCT, so I'll say a bit more about it to finish off the post. GCT introduces the idea of a "geometric obstruction" (a certain type of representation-theoretic object) to statements of the form "there are circuits of size n^c for SAT up to length n" (NP vs P/poly). If a geometric obstruction exists for all (or infinitely many) input lengths n, then P is not equal to NP. The conjectures arising in GCT imply that such obstructions exist. But even for smaller problems, such as matrix multiplication, no one had ever seen such geometric obstructions! Peter Burgisser and Christian Ikenmeyer did computer calculations and found geometric obstructions to the border rank of 2x2 matrix multiplication. (The border rank of 2x2 matrix multiplication roughly captures how many multiplications are needed to approximate 2x2 matrix multiplication, in a specific sense.) The geometric obstructions they found show that the border rank is at least 6 (it is known to be exactly 7, by a significant algebro-geometric result of Joseph Landsberg). Although this only reproves a weaker version of a known result, for a comparatively easy lower bound (compared to P vs NP) it is an important proof of concept for the GCT approach of using geometric obstructions to prove complexity-theoretic lower bounds. This work also raises the intriguing possibility (also suggested in the GCT series of papers) that one might be able to "verify P neq NP up to length n" for larger and larger n, the same way people verify the Goldbach Conjecture or the Riemann Hypothesis up to a certain number of integers/zeroes. Unfortunately, doing this in the GCT setting even up to n=10 seems to take too much time (whereas Goldbach and Riemann have been verified for millions of integers/zeroes).
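To make "rank" concrete for readers who haven't seen it: Strassen's classical identities compute the 2x2 matrix product with only seven multiplications instead of the obvious eight, which is exactly the statement that the matrix multiplication tensor has rank at most 7. A minimal sketch, in plain Python just for illustration:

```python
import random

def strassen_2x2(A, B):
    # Strassen's seven bilinear products (Strassen, 1969): seven
    # multiplications suffice for the 2x2 matrix product, so the
    # tensor rank is at most 7 (Landsberg's result cited above shows
    # the border rank is exactly 7).
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

def naive_2x2(A, B):
    # The definition of matrix product: eight multiplications.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# Spot-check the identities on random integer matrices.
for _ in range(100):
    A = tuple(tuple(random.randint(-9, 9) for _ in range(2)) for _ in range(2))
    B = tuple(tuple(random.randint(-9, 9) for _ in range(2)) for _ in range(2))
    assert strassen_2x2(A, B) == naive_2x2(A, B)
```

The border-rank question asks whether the tensor can be *approximated* arbitrarily well by rank-6 tensors; the Burgisser-Ikenmeyer obstructions rule that out down to rank 5, and Landsberg's proof rules out 6.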

Finally, I should mention that even in the case of 2x2 matrix multiplication, the conjectures that arise seem almost as difficult as the ones arising in the P vs NP problem. Avi and others suggested we look at *even easier* problems.

**UPDATE 7/20:** Slides of the research talks are now available here,
and videos will be added soon.

As it turns out, the rank of a tensor has considerable practical importance in quantum systems engineering, where it arises in a geometrically natural way in the context of computing the musical isomorphism between dynamical one-forms (specified by Hamiltonians and Lindbladians) and tangents to quantum trajectories (computed as the output of quantum simulation codes).

For better or worse, an audience of quantum spin microscopists is scheduled to hear me lecture on this subject this coming Friday.

Since I *don't* presently know how to explain the relation of geometry to computational complexity in any natural way—especially to an audience whose interest is almost wholly focussed upon practical experiments—comments in this regard *definitely* would be welcome.

The simpler, the better ... and uhhh ... quickly, please! :)

I've always wondered why so much effort is expended on P vs NP. Why not P vs PSPACE?

P != PSPACE *can't* be harder to prove than P != NP, and *ought* to be much easier. There are *lots* of complexity classes between P and PSPACE, and you can't prove *any* of them are different without proving P != PSPACE. Are there algebraic conjectures equivalent to P != PSPACE?

On the "equals" side, there are implications of P=NP like NP=coNP, but there isn't such a clear "easiest" question.

The Intractability Center is doing a great job organizing frequent workshops (at the intersection of complexity and mathematics) and also offering financial aid to students. I learnt a lot of concepts (that textbooks/courses cannot cover) through these workshops. I hope the videos of the talks of this GCT workshop will be uploaded soon.

On a technical note, I was wondering how difficult it is to give a GCT-based proof of "parity \notin AC^0". Does it require a generalization of RH?

"verify P neq NP up to length n" for larger and larger n

Please forgive a dumb question, but just what is n in this context?

Thanks for this nice summary. I really wanted to attend the workshop but simply couldn't make time for it.

@Ralph: Much work has gone into P vs PSPACE, and currently there's about the same amount of progress on it as P vs NP. (Very, very little.) Can Quantified CNF Satisfiability be solved in linear time on a multitape Turing machine? ...of course not, but why?

Somehow LOGSPACE vs NP looks as if it may be easier. (There has been some "real" lower bound progress on that question, but still very little.)

"Somehow LOGSPACE vs NP looks as if it may be easier. (There has been some "real" lower bound progress on that question, but still very little.)"

What progress has there been?

ryanw (Ryan Williams), your IBM Almaden colleagues Dan Rugar and John Mamin are here in France giving well-received lectures on spin microscopy. To the extent that there is a natural connection between IBM's quantum spin imaging experiments—which by consensus are among the most sophisticated in the world—and your own theoretical work on computational complexity, that connection seems to arise pretty naturally via the geometric complexity aspects of simulating IBM's experiments *ab initio*.

So you might consider having lunch together some afternoon, and brainstorming about new multidisciplinary uses for that Blue Gene that sits down the hallway at Almaden. Whether anything concrete comes of your lunch or not, the struggle to speak each other's language, and to understand each other's problems, would itself be a lot of fun.

@Ralph: First, the GCT approach applies to the determinant vs permanent problem, which is essentially NC vs #P, which should be easier to resolve than P vs NP (as you point out with P vs PSPACE). Presumably there is a GCT approach to P vs PSPACE, but it hasn't been explored yet as far as I know.
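For readers who haven't met the permanent: it has the same Leibniz expansion as the determinant, just without the alternating signs, and that small change takes it from efficiently computable to #P-hard. A minimal sketch in Python:

```python
from itertools import permutations

def perm_and_det(A):
    # Leibniz expansions: both are sums over all permutations sigma of the
    # products A[0][sigma(0)] * ... * A[n-1][sigma(n-1)]; the determinant
    # weights each term by sign(sigma), the permanent weights every term by +1.
    n = len(A)
    perm = det = 0
    for sigma in permutations(range(n)):
        term = 1
        for i, j in enumerate(sigma):
            term *= A[i][j]
        # sign(sigma) = (-1)^(number of inversions of sigma)
        inversions = sum(sigma[i] > sigma[j]
                         for i in range(n) for j in range(i + 1, n))
        perm += term
        det += (-1) ** inversions * term
    return perm, det
```

For example, for ((1, 2), (3, 4)) the permanent is 1\*4 + 2\*3 = 10 while the determinant is 1\*4 - 2\*3 = -2. (This brute-force expansion is exponential in n; Gaussian elimination computes the determinant in polynomial time, but no such trick is believed to exist for the permanent, which is what the det vs perm question formalizes.)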

Second, it's important to know that the conjectures in GCT are not known to be *equivalent* to P neq NP, but only imply that P neq NP. The most developed part of GCT at the moment is aimed at proving P neq NP in the algebraic model (essentially the "real RAM" model or "Blum-Shub-Smale model over the complex numbers"). I believe that the GCT approach to P neq NP in the classical model will be fleshed out in a forthcoming paper (I think that's GCT IX).

@Kurt: "verify P neq NP up to length n" should be read as: "verify that there are no circuits of size n^c for SAT up to length n".
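To get a feeling for why brute-force verification is hopeless even at n=10 (setting aside how the GCT computations actually work), a standard counting argument bounds the number of AND/OR/NOT circuits of size s on n inputs by roughly (3(n+s)^2)^s: each gate picks one of three types and two predecessors. A back-of-the-envelope sketch, where the exact bound is only a crude assumption for illustration:

```python
# Crude upper bound on the number of AND/OR/NOT circuits of size s on n
# inputs: each of the s gates chooses one of 3 gate types and two
# predecessors from among the n inputs and s gates. The constant doesn't
# matter; the point is the astronomical growth.
def circuit_count_bound(n, s):
    return (3 * (n + s) ** 2) ** s

n = 10
s = n ** 2  # circuits of size n^c with c = 2
bound = circuit_count_bound(n, s)
print(f"roughly 10^{len(str(bound)) - 1} candidate circuits to rule out")
```

Even for n=10 and c=2 this is a number with hundreds of digits, which is why the verification would have to go through the representation-theoretic obstructions rather than exhaustive search, and even those computations are currently too expensive.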

@Josh: as you said the GCT program (or at least the part of it that Ketan talked about at the workshop) is about permanent versus determinant.

If it succeeds it will of course be a huge achievement, but as far as I can see it will not directly imply that P is not NP in the BSS model over the complex numbers.

@Pascal: Although Ketan only talked about permanent vs determinant at the workshop, even in GCT I & II (the original Mulmuley-Sohoni GCT papers) they construct two polynomials H(X) and E(X) such that the GCT conjectures regarding H vs E (instead of det vs perm) imply that NP over the complex numbers does not have polynomial-size arithmetic circuits.

(In my previous response to Ralph, I meant that GCT *also* applies to det vs perm, in addition to NP vs P/poly, not that it only applies to det vs perm.)

The definitions of the functions H and E are entirely elementary, and I recommend that anyone interested in GCT look up their definitions in GCT I.

@Josh: the problem about E and H is yet another algebraic version of P versus NP, but not the BSS one.

The BSS problem asks whether the satisfiability of a system of polynomial equations (over, say, the field of complex numbers) can be decided in a polynomial number of arithmetic operations and equality tests.

Note in particular that this is about decision problems, whereas the GCT program is about evaluation of polynomials (perm versus det, E versus H).

There are actually some connections between the complexity of decision and evaluation problems, but maybe not quite as tight as your post seemed to imply.