(Guest post from Rahul Santhanam)
As promised, I did do a bit of experimenting this FOCS in terms of
attending talks from other areas. Here's an encapsulation of the talks I attended.
Cryptography - Yevgeny Vahlis spoke about his negative result with
Boneh, Papakonstantinou, Rackoff and Waters on black-box constructions
of identity-based cryptosystems (cryptosystems in which an arbitrary
string can be used as a public key) from trapdoor permutations.
ID-based cryptosystems have been extensively studied recently in work
of Dan Boneh and others. Security of known constructions is proved in
the random oracle model, and it would be interesting to base security
on the same assumptions as those used in standard public-key
cryptography. This result indicates this might be too much to hope
for, at least using standard techniques.
Learning - Adam Klivans gave an excellent talk on "Learning Geometric
Concepts via Gaussian Surface Area" (with O'Donnell and Servedio).
There's been a lot of interesting work recently on learning in the
presence of random or adversarial noise. It's known that functions
with low noise sensitivity, or even with low Gaussian noise
sensitivity, can be learned efficiently in this setting, but these
quantities are hard to bound in general. The current work resolves
some open problems about learnability of geometric concept classes by
using some deep mathematical results relating Gaussian noise
sensitivity to Gaussian surface area, and working with the latter, more tractable quantity instead.
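As a toy illustration (mine, not from the paper): for the simplest geometric concept, a halfspace through the origin, the Gaussian noise sensitivity actually has a closed form by Sheppard's classical formula, arccos(rho)/pi, and a quick Monte Carlo estimate matches it. A minimal Python sketch:

```python
import math
import random

def gaussian_noise_sensitivity(f, dim, rho, trials=100_000, seed=0):
    """Estimate P[f(x) != f(y)] where (x, y) is a pair of
    rho-correlated standard Gaussian vectors, by Monte Carlo."""
    rng = random.Random(seed)
    s = math.sqrt(1 - rho * rho)
    disagree = 0
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(dim)]
        # y = rho*x + sqrt(1 - rho^2)*z has correlation rho with x
        y = [rho * xi + s * rng.gauss(0, 1) for xi in x]
        if f(x) != f(y):
            disagree += 1
    return disagree / trials

# Halfspace through the origin: f(x) = [w . x >= 0]
w = [0.3, -1.2, 0.7]
def halfspace(x):
    return sum(wi * xi for wi, xi in zip(w, x)) >= 0

rho = 0.9
est = gaussian_noise_sensitivity(halfspace, len(w), rho)
exact = math.acos(rho) / math.pi  # Sheppard's formula, any origin-centered halfspace
print(est, exact)  # the two agree to about two decimal places
```

For richer geometric concept classes no such closed form is available, which is where the bounds via Gaussian surface area come in.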
Games with Entangled Provers - I went to a couple of talks on another
currently hot topic: the power of two-prover games with entangled
provers. A semi-definite programming formulation due to Tsirelson can
be used to show that the optimal value can be computed efficiently for
a very simple form of 2-prover games, known as XOR games. The first
talk that I attended was given by Thomas Vidick on "Entangled Games
are Hard to Approximate" (with Kempe, Kobayashi, Matsumoto and
Toner) - he showed a reduction from the problem of approximating the
value of 2-prover games for classical provers (known to be
NP-complete) to the problem of approximating the value of 3-prover
games for entangled provers. A complementary talk by Ben Toner on
"Unique Games with Entangled Provers are Easy" (with Kempe and Regev),
showed that for unique games, the optimal value can actually be
approximated efficiently using "quantum rounding" to a semi-definite
program. The most interesting open problem in this area seems to be to
derive some sort of upper bound on the complexity of approximating the
value for general games.
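To make the XOR-game discussion concrete, here is a small sketch (my own, not from the talks) for the CHSH game, the canonical XOR game. Brute force over deterministic strategies gives the classical value 3/4; the entangled value, which in general would come out of the semi-definite program mentioned above, has the closed form cos^2(pi/8) by Tsirelson's bound.

```python
import itertools
import math

# CHSH as an XOR game: the referee sends uniform bits (s, t) to the
# two provers; they answer bits (a, b) and win iff a XOR b == s AND t.
def chsh_classical_value():
    """Brute-force all deterministic strategies: each prover is a
    function from its question bit to an answer bit (4 choices each)."""
    strategies = list(itertools.product([0, 1], repeat=2))
    best = 0.0
    for alice in strategies:
        for bob in strategies:
            wins = sum((alice[s] ^ bob[t]) == (s & t)
                       for s in (0, 1) for t in (0, 1))
            best = max(best, wins / 4)
    return best

classical = chsh_classical_value()      # 0.75
entangled = math.cos(math.pi / 8) ** 2  # Tsirelson's bound: (2 + sqrt(2)) / 4
print(classical, entangled)
```

Shared randomness cannot beat the deterministic optimum here, since the value of a randomized classical strategy is a convex combination of deterministic values.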
Algorithms - I heard Dimitris Achlioptas speak on "Algorithmic
Barriers from Phase Transitions" (with my colleague Amin Coja-Oghlan).
This paper tries to understand the reason why simple algorithms for
constraint satisfaction problems on random instances fail in the
region around the threshold, by showing that this "region of failure"
coincides with the region where the structure of the solution space
changes from being connected to being (essentially) an exponential number of well-separated clusters.
Rather intriguing that a computational fact, the success or failure of
standard algorithms, is so closely related to a combinatorial
artifact.
Finally, I went to Grant Schoenebeck's talk on proving lower
bounds for the Lasserre hierarchy of semi-definite programming
formulations of CSP problems, which is interesting because a number of
different algorithmic techniques including the Arora-Rao-Vazirani
formulation of Sparsest-Cut can be implemented in the lower levels of
the hierarchy. Grant's result uses a connection to the width
complexity of resolution, which I found very surprising, but then my
understanding of this area is rather naive...
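A small aside that may help situate the result (my own gloss, so treat with caution): the hard instances here are random 3-XOR formulas, and exact satisfiability of XOR constraints is decidable in polynomial time by Gaussian elimination over GF(2). The lower bound says that the Lasserre hierarchy nonetheless fails to certify unsatisfiability of such instances. A sketch of the easy exact case:

```python
def xor_satisfiable(clauses):
    """Decide satisfiability of a system of XOR constraints over
    Boolean variables, via Gaussian elimination over GF(2).
    Each clause is (variable_indices, parity): the XOR of those
    variables must equal the parity bit."""
    pivots = {}  # leading variable -> reduced (mask, parity) row
    for variables, parity in clauses:
        mask = 0
        for v in variables:
            mask ^= 1 << v  # bitmask encodes the equation's support
        while mask:
            lead = mask.bit_length() - 1
            if lead not in pivots:
                break
            pmask, pparity = pivots[lead]
            mask ^= pmask    # eliminate the current leading variable
            parity ^= pparity
        if mask:
            pivots[mask.bit_length() - 1] = (mask, parity)
        elif parity:
            return False     # reduced to 0 = 1: contradiction
    return True

# x0 ^ x1 ^ x2 = 1,  x0 ^ x1 = 0,  x2 = 1  -> satisfiable
print(xor_satisfiable([((0, 1, 2), 1), ((0, 1), 0), ((2,), 1)]))            # True
# Adding x2 = 0 yields a contradiction
print(xor_satisfiable([((0, 1, 2), 1), ((0, 1), 0), ((2,), 1), ((2,), 0)]))  # False
```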
All interesting stuff, but I'd actually prefer FOCS to be even
broader, and representative of all areas of theory. If that requires
multiple sessions, so be it. Currently, FOCS seems to be an
algorithms, complexity and combinatorics conference with papers from
other areas filtering in unpredictably depending on the composition of
the committee. It's very creditable that the standard of papers has
remained consistently high (or even improved) over the years. But with
several major sub-areas (semantics and verification, concurrency
theory, database theory, theory of distributed computing,
computational geometry, computational biology etc.) represented barely
or not at all, it's quite hard to still make the case that FOCS and
STOC are absolutely central to theoretical computer science as a
whole. I guess the question is how much we value FOCS and STOC playing that central role.
Thank you for mentioning "Learning Geometric Concepts via Gaussian Surface Area". This is a topic of great interest in quantum simulation ... are the slides to this talk available?
I am asking for the dumbest of all reasons: the above article has no pictures, gossip, or jokes ... which are not essential in learning a subject ... but they sure help. :)
Ditto for Raphael Yuster's closely related (in the mind of engineers) "Matrix sparsification for rank and determinant computations via nested dissection".
And, thanks for the summary. For what it's worth, I agree that greater FOCS breadth would be welcome.
Sorry, but I have to correct the cryptography result. While the initial constructions of identity-based encryption (IBE) were in the random oracle model, we do now have constructions of IBE that do not rely on random oracles. However, the existing constructions all rely on specific number-theoretic assumptions rather than "general" assumptions (e.g., the existence of standard public-key encryption). The talk you mention showed that we cannot expect to construct IBE from certain general assumptions, at least using standard techniques.
Jonathan, I haven't read the paper, but I wouldn't expect their result to rule out constructions using techniques like NIZKs for well-formedness, which I would consider standard.
I would explain these results as ruling out "highly efficient" constructions of IBE from a PKE/TDF component. I see them as more of a guidepost towards a construction of IBE than anything else.
As someone working in computational biology, I feel the STOC/FOCS representation for the field has been fair. The main work in the area has become less theoretical in recent years, as the focus has shifted away from elegant toy problems and towards obtaining biologically meaningful results by whatever (not necessarily elegant or theoretical) means possible.
Anon #3, you are correct but I was writing for a general audience...
Technically, the result only rules out certain black-box constructions.
Jonathan, thanks for the correction. I'm pleasantly surprised that there are people actually reading the technical parts :) The results in the post are all quoted from memory, so it's quite possible there are other inaccuracies.
What you are suggesting seems to need a "Federated Computer Science Theory Symposium"?
(One slight inaccuracy: There were a fair number of computational geometry papers at FOCS - 8 by the PC chair's count.)
Thanks for these posts, they were very good.
John, I don't know about the slides, you'll have to ask one of the authors.
Anon3, I'd just like to see a conference that features the best foundational papers in theoretical computer science, without any implicit or explicit bias as to which sub-area the papers belong to. It seems that FOCS and STOC did fulfill this role, once upon a time. Re. the specific observation about Comp. Geom, the point is that these statistics are very unpredictable for some of the non-complexity/algorithms areas, a variance that surely exceeds the natural year-to-year variance...
For computational geometry, FOCS/STOC screwed up by (1) having very little attendance at the geometry sessions, (2) accepting a weaker but more broadly accessible geometry paper while rejecting the best geometry paper submitted, and (3) not accepting enough geometry papers to make it worth a geometer's while to go to the conference.
Earlier, we drove semantics away by scheduling the semantics session opposite the outing (at the Providence conference in the 80's, long ago).
I don't really think there was any way to have avoided this narrowing of focus without accepting more papers, but there was a subset of the community that was dead set against this.