Jeannette Wing, the new director of CISE, kicks off the business meeting with an overview of the funding situation for theory at NSF. I think I discern two clear messages. First, NSF is part of the executive branch, so there is one clear way we can affect the budget this November. CISE has requested a 19.5% funding increase for 2009, with a 25.5% increase requested for CCF. Second, the best way to expand the amount of funding for theory is for algorithms people to look for money outside of pure theory venues. The opportunity to do this will hopefully be improved by having Sampath and Jeannette on our side at NSF.
Dick Karp wins the SIGACT Distinguished Service Award for his tireless dedication to promoting and expanding TCS. Prasad and Harald are given their best paper awards. Then Cynthia gives her report on the program committee and its decision process.
80 papers accepted out of 325 submitted (about 24.6%). Some notable acceptance rates: congestion and game theory goes 5/13 (38.5%), while metric embeddings goes 0/13 (0.0%). Before the committee met, they agreed to keep a more open mind toward conceptual papers which might otherwise be overlooked because they lack technical depth. The following paragraph was added to the call:
"Papers that broaden the reach of theory, or raise important problems that can benefit from theoretical investigation and analysis, are encouraged." This paragraph has been kept for FOCS'08.
The committee sought to appreciate simplicity as a virtue; no longer "I like the ideas, but the proofs are simple"; instead, "I like the ideas, and the proofs are simple!" I don't know if "They changed the model so as to trivialize the problem" is also replaced by "They changed the model, and now the problem is trivial!" I think responsible analysis of a paper is probably a bit more nuanced.
Later, Madhu Sudan spoke of instances where a well-known problem had an easy solution, and this prevented a journal or conference from publishing it. This is certainly ridiculous, and I have a hard time believing that it's a frequent occurrence (of course, I have about 1% of Madhu's experience). I've seen examples where the community considered it "embarrassing" that the solution was so simple, but not where the paper itself was derided.
Personally, I love the beautiful intricacies of hard, technical proofs. It's like a little universe sprung out of the human effort poured into developing a deep understanding of some problem. There are often recurring characters, a unique language, a sense of history, twists and turns, all mounting toward a resounding conclusion that one only fully comprehends after multiple readings and intense effort. But in our field, the beauty of complexity only makes sense in contrast to our search for simplicity. Simplicity is certainly a virtue.
When I have criticized a paper based on "technical simplicity," it's not because I wish the authors had purposely obfuscated their arguments. Rather, one has to understand the primary goal of a theoretical field: to approach understanding through rigor. What we are trying to understand is computation in all its forms. Toward this end, we often consider idealized versions of problems, and in this respect modeling becomes incredibly important. It comes up in algorithms: What happens if the traveling salesman wants to minimize the average latency, and not the total travel time? And it comes up in complexity: What if we allow our constant-depth circuits to have mod gates with composite moduli?
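(To make the latency example concrete: for a tour visiting the vertices in order $v_1, v_2, \ldots, v_n$, the classic TSP objective is the total travel time $\sum_{i=1}^{n-1} d(v_i, v_{i+1})$, while the minimum-latency objective is the sum of arrival times $\sum_{j=2}^{n} \sum_{i=1}^{j-1} d(v_i, v_{i+1})$, so that vertices visited early contribute less. This is just the standard formulation, not taken from any particular paper; the point is that the two objectives can favor very different tours, which is what makes the modeling choice substantive.)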
In both cases, we are not confronting the actual problem we want to solve; real-life instances of salesman problems (e.g. satellite movement) probably involve other practical constraints, and (uniform) poly-size circuits can probably do a lot more than AC^0[m]. So often I have to measure the importance of a new model by how it differs technically from the old one. If simple modifications of the old TSP algorithms suffice for the minimum-latency version, it's not clear that we have learned something new (even though one could argue independently that the min-latency version is practically important!). And if AC^0[m] circuits could be simulated in a simple way by AC^0[p] circuits, then I wouldn't think as highly of a paper proving lower bounds against AC^0[m].
Maybe we can be a little more appreciative of the subtlety involved in the reviewing process, and agree that "simplicity is a virtue" is a bit too simplistic to be the motto for a program committee.
I agree that this issue has many nuances.
Another one is that the technical parts of a paper can be technically challenging, but contain no new idea. Conversely, a paper can be very simple, but have very new and powerful techniques (e.g., the probabilistic method).
Often, when reading a very technical paper, I'm left wishing the authors had taken the time to indicate exactly (and honestly) what the new technical/conceptual ideas in the paper are. Usually, papers build on one another, so you can say that paper B is very technically challenging, but it is a matter of doing the hard work of applying the ideas from paper A in this context. That's not to say that such papers are devoid of merit; it's just that it would be nice if people wrote their honest opinion of which key ideas helped them achieve their breakthroughs.
It's pretty annoying to read a very technical paper, expecting that on the road to the solution a big Challenge is going to come up that will warrant some wizardry, only to find (several hours later) that it's all smooth (though tedious) sailing till the end.
Biologists have established that the least healthy and least stable ecosystems are monocultures.
Fortunately, that is not something that complexity theorists have to worry about -- the simple-and-conceptual advances have never been so profound, and the hard-and-technical advances have never reached so deep, as in recent years.
In consequence, the mathematical ecosystem as a whole has (arguably) never been so robustly healthy as it is right now.
That is why the most urgent need for mathematics is to grow, in order that a new generation of young mathematicians can grasp these growing opportunities.
That is where the politics and the squabbling come in. IMHO, the Computing Community Consortium is performing a huge service in engaging the mathematics community in the vital task of increasing support levels for fundamental mathematics.
Speaking as a medical researcher, in the past few years it has become abundantly evident that there are many urgent goals that we in medicine will not achieve unless and until we acquire more powerful mathematical tools than we presently possess.
That is our (selfish) motive for hoping that the CCC effort succeeds, and that the whole mathematical enterprise becomes even broader, deeper, and richer.
As for simple-and-conceptual advances versus hard-and-technical advances, we in medicine will take all we can get, of both.
When authors introduce new definitions or a new model, etc., that are well-motivated, non-trivial, and interesting, the proofs can often be "simple", since nobody before has thought about these definitions or models and tried to prove these things. This, to me, is the heart of conceptual innovation, and authors should not be penalized for "simple" proofs in this case.
I never understood what the complaints were about in the first place. What "conceptual" papers were ever being rejected?!
The aliens paper, of course. I heard it got rejected 3 times already.
It's quite ridiculous that a respected member of the community lost it, his ego compelled him to shove this junk paper down the throat of the STOC community, and now we're having a big argument over conceptual contributions.
We should call this what it is. Some old folks lost it and they're trying to make a fuss. Much ado about nothing.
I just looked at the nuggets currently available and was very disappointed with them: not by what's included (they seem great) but by what's excluded. It seems like the nuggets are designed for the NSF panelists and other non-theory people, but that is no reason to completely give up on the fundamental problems of TCS.
I am of the opinion that we as a community should (still) strive to advertise our fundamental problems to the rest of computer science and convince them that progress on these fundamental problems can/will have a significant impact on the rest. In other words, have a nugget addressing "P==BPP?" (or something else if you like).
Just to emphasize, I'm not saying that the current nuggets are bad, I just think the collection is quite inadequate.
I know of good papers, conceptual AND technical, that were rejected once or twice but eventually got accepted. The way committees work, they are bound to make mistakes, but good results do get recognition. I don't see what the fuss is about. There is nothing wrong with our current way of evaluating papers.
> Just to emphasize, I'm not saying that the current nuggets are bad, I just think the collection is quite inadequate.
Anon 6: It's easy to complain, and we all do it often enough. What the community needs is input. If you think that the current selection is inadequate, then contact the meeting organizers and send them your concrete suggestions/nuggets.
I think the fact that there are discussions and meetings trying to advertise TCS achievements is huge progress. But we shouldn't expect Arora, Wigderson, Karp, Chazelle, etc. to do all the work. Wider (and still smart) community contribution is needed!
When authors introduce new definitions or a new model, etc., that are well-motivated, non-trivial, and interesting, the proofs...
The question is how one judges "interesting" in a theoretical field.
My new paper,
Title: "TSP with wait times"
Abstract:
The Google Observation Division needs to move a satellite from its initial low-energy orbit to a set of known positions in order to update the Google Maps satellite imagery. At each position, the satellite takes a series of pictures, but depending on the complexity of the terrain, some positions require more time than others. Furthermore, these wait times are random, and only their expectations are known from past measurements. Since satellite fuel is prohibitively expensive, one seeks a TSP tour that minimizes the total expected time to take all required pictures. While this problem is NP-hard, we give a fast deterministic algorithm which is always within 50% of optimal.
Wow, it sounds like an awesome real-life problem where they really need good algorithms.
Body of the paper:
Let G be a weighted graph representing the TSP instance, and for each node v, let w(v) be the expected wait time at v. Add (1/2)*[w(u)+w(v)] to the weight of every edge {u,v}, and run the Christofides algorithm. Due to space limitations, the analysis of our algorithm is deferred to an appendix.
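(In case the joke needs spelling out, the entire "body" really is a few lines of code. Here is a minimal sketch in Python, assuming a recent networkx with its bundled christofides routine; the function name tsp_with_wait_times and the numbers in the usage comment are mine, invented for this toy example.)

    import networkx as nx
    from networkx.algorithms.approximation import christofides

    def tsp_with_wait_times(G, wait):
        """Reduce the expected-wait-time instance to ordinary metric TSP.

        G    : complete undirected nx.Graph; edge attribute 'weight' = travel time
        wait : dict mapping each node v to its expected wait time w(v)
        """
        H = G.copy()
        for u, v in H.edges():
            # A tour uses exactly two edges incident to each node, so charging
            # half of w(u) + w(v) to every edge adds exactly sum_v w(v) to the
            # cost of *every* tour. The shifted weights still satisfy the
            # triangle inequality, so Christofides' 3/2-approximation applies
            # directly to the total expected time, waits included.
            H[u][v]["weight"] += 0.5 * (wait[u] + wait[v])
        return christofides(H, weight="weight")

    # Tiny usage example (hypothetical numbers):
    # G = nx.complete_graph(3)
    # for u, v in G.edges():
    #     G[u][v]["weight"] = 1.0
    # print(tsp_with_wait_times(G, wait={0: 2.0, 1: 0.5, 2: 1.0}))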
Okay, forget I said interesting, and stick to well-motivated and non-trivial. I think your definition/model is trivial, in the sense that once you explain Google's problem it is obvious they want a tour in the graph that minimizes the total expected time. If it had been quite difficult to come up with an appropriate problem statement, then the paper could merit publication.
Thank you, James, for trivializing this debate by giving an indeed trivial example.
I don't think anyone is suggesting that technical merit is to be forgotten. But there is such a thing as insightful conceptual work, and useful practical work likewise based on insight into how to move a theoretical idea into the practical realm, and I think the STOC/FOCS community would benefit from being more open toward both.
In your example, the result is simple not just because it technically makes use of a prior result, but because the insight is a basic reduction that we'd expect strong undergraduates to come up with fairly quickly. I'm happy to agree that we can pass on such results for FOCS/STOC.
But I'd be happy to give you (several) papers that, for example, I've written, where I'm sure they wouldn't be accepted to FOCS/STOC, because they're not "technical" enough, but they have a nice insight that nobody seems to have noticed before. Perhaps I'm just mistaken, and I should be sending these papers to FOCS/STOC. But experience tells me otherwise. And some of these papers, for whatever reason, have, I think, proven themselves over time.
I'm happy to agree that there are two worthwhile sides to this debate. But your trivial example is encouraging me to rethink that position.
Michael, I think the main problem is that 'conceptual papers', just like conceptual art, are evaluated in the present, while conceptual contributions are typically appreciated only over time. The end result may be too much junk (see conceptual art!).
I'm also wondering which papers you think should have been rejected in order to make room. What kind of papers do you think you see too often in STOC/FOCS?
Hi Michael,
In fact my goal was to give as trivial an example as possible that still had an interesting story to go along with it (also, it's non-trivial to come up with a mostly trivial but not completely trivial toy example!).
I think you are missing my point. It is *not* that conceptually interesting but technically simple papers are unimportant. It is *not* that they shouldn't get into FOCS and STOC. Hell, my advisor has managed more than once to change the face of the field just with definitions! (This is not meant to discount his awesome technical work in any way.)
My point is that technical "depth" is one way to measure the conceptual contribution of a paper, and we shouldn't create a situation where we sneer at people when they say "this follows from known results," or "after defining the problem this way, the solution seems trivial." There are good reasons to make those comments, other than "the paper doesn't look hard enough for my tastes."
Secondly, if your model is not technically sophisticated (i.e. doesn't require new theoretical ideas), then the burden is on you to sell it. You have to explain what the model contributes to theoretical CS, or you have to explain how the TCS lens has been used to impact another field. In the latter case, the burden is even higher (in my opinion), because I am not a biologist/economist/professional programmer/etc.
One of my least favorite things in the world is arguing that something is important by analogy: consideration A is important in field 1, so it should also be important in field 2. Or problem A is interesting, thus problem A' is also interesting. I hold "technical" papers to the same standards. If mathematical technique A was useful in algorithms, but you change it to A' and then study A' for its own sake, you had better explain to me why you are studying A' if it no longer applies to the original problem (there are sometimes good answers to all these questions).
James,
I figured we agreed more than either of our posts suggested, so I thank you for your response.
I think we both agree that technical content is important to consider in a paper. I think we differ personally, however, in how much weight to give it, in comparison with other considerations.
I am concerned, however, that
Udi, I think the main problem is that 'technical papers', just like traditional art, are evaluated in the present, while all artistic contributions are typically appreciated only over time. The end result may be too much junk, as many of my colleagues from outside TCS are only oh-so-willing to tell me, frequently.
> I am concerned, however, that
It appears that your comment does not have enough technical content :)
> But I'd be happy to give you (several) papers that, for example, I've written, where I'm sure they wouldn't be accepted to FOCS/STOC, because they're not "technical" enough, but they have a nice insight that nobody seems to have noticed before. Perhaps I'm just mistaken, and I should be sending these papers to FOCS/STOC. But experience tells me otherwise. And some of these papers, for whatever reason, have, I think, proven themselves over time.
Mike: I'm exaggerating a little, but now it sounds as if you're repeating the same story that many other writers have told before: "I've had this nice paper that should have been (deserved to be) accepted to STOC/FOCS and it wasn't (in your case, because you didn't submit it). It's the fault of the community/PC."
STOC/FOCS are not the only venues, and not all papers fit there. The decisions are subjective, and if you (or any single individual) think that a paper is of STOC/FOCS quality, that doesn't necessarily mean it should be accepted at STOC/FOCS.
I think the problem with "papers with a nice insight" is that assessing them is very subjective. I'm not sure how to distinguish a visionary paper that might open a new and exciting area in TCS from a paper which is very well written but has almost no content beyond a nice story. How do we assess whether a paper contributes to the progress of TCS?
The main problem with conceptual papers is that they tend to get evaluated based largely on the names of the authors. (Maybe if you are already well known, you think this is a good thing.) So there is a very subjective standard: "if so-and-so thinks this is important, it must be".
Whereas technical papers can be evaluated more objectively: either someone settles an open question or they don't. (I don't claim the process is completely objective even in that case -- hell, I'd vote for anonymous submissions to STOC/FOCS.)
Various anonymous --
The argument for the "technicians" seems to be that technical merit is something that we can (at least approximately) measure, while potential practicality, potential use or impact outside our field, or potential conceptual contribution is much harder to measure with any accuracy. So the logic seems to be that we should only consider technical merit -- even though, arguably, that's clearly not the best metric for the utility of a paper after the fact.
Here's a thought -- perhaps we should, as a community, get better at understanding these other important aspects of research value. And another -- perhaps one of the major problems the TCS community has in its appearance to the rest of CS (and beyond) is precisely that we're globally seen as more interested in mathematical technique than in more real-world measures of utility, and as unwilling to get better at understanding these other important aspects that other communities, arguably, are better at dealing with than we are.
If the argument is that FOCS/STOC should be a conference focused on technical merit, since there's otherwise not a good home for such papers, that's one thing. (I disagree.) If FOCS/STOC should be focused on the best papers coming out of the theoretical community for a broader notion of best, I think we could be doing a better job -- and, I think, we can afford to take a bit more risk.
> Mike: I'm exaggerating a little, but now it sounds as if you're repeating the same story that many other writers have told before: "I've had this nice paper that should have been (deserved to be) accepted to STOC/FOCS and it wasn't (in your case, because you didn't submit it). It's the fault of the community/PC."
Anonymous -- I'll submit to you that I agree that if I alone say it, it's whining. But when "many other writers", including many significant members of our community, say similar things, I'd say it amounts to evidence, and it's time for community self-reflection and debate.
There's not much discussion on this thread of historical precedent, so I will remind folks of two such precedents ... perhaps other people will suggest more such precedents.
A non-mathematical precedent is the biomedical community's slow acceptance of observational research as contrasted with hypothesis-driven research. A decade or so ago, genetics was universally understood to be the hypothesis-driven experimental study of inherited traits. "No hypothesis == no publications and no funding" was the rule for genetics research (this oversimplifies, but not much).
As Michael Mitzenmacher points out, a virtue of these "old school" values was their well-defined quality metric: if your genetic hypothesis was the strongest and most interesting, and your experiments the most ingenious and rigorous, then yours was the best science.
In recent years, these well-understood biomedical research values have been knocked for a loop by (1) better instruments, and (2) better algorithms. Not to belabor the point, I commend the recent work (appearing in both Science and Nature) by the David Baker/Kendall Houk group on the computational design and synthesis of enzymes (nowadays called "theozyme" science).
In the domain of mathematics, I'm sure most readers are familiar with the struggles that John von Neumann faced in gaining acceptance from the traditional mathematical community (and the IAS directors) even for the idea of large-scale computing.
The point here is that both biomedicine and computer science have prospered by embracing a big-tent strategy ... albeit at the cost of continuously re-evaluating their most-cherished traditional scientific values and metrics.
That is why I will submit that the *main* problem with theoretical computing science today is that the tent is simply too small to hold all the good ideas and all the young students that *should* be sheltering under it. And these good ideas have to include *both* simple-and-conceptual advances and hard-and-technical advances.
Again speaking from a (selfish) medical perspective, the challenges of modern biomedicine and modern bioinstrumentation are simply too big, and too urgent, to be solved by "small tent" mathematics.
I don't think this discussion can get very far until we can agree on what "conceptual papers" means. To me, it refers to things like the aliens paper that someone linked to above. But to Michael, it apparently includes papers applying theory to practice. Maybe the writers of the original complaint letter can tell us what they meant.
It seems to me that a reasonable definition of a conceptual paper is one in which, modulo the proofs, the paper is still non-trivial.
> That is why I will submit that the *main* problem with theoretical computing science today is that the tent is simply too small to hold all the good ideas and all the young students that *should* be sheltering under it. And these good ideas have to include *both* simple-and-conceptual advances and hard-and-technical advances.
I very much agree.
Piotr
Michael,
what you say (that potentially useful work, which could have impact outside of theory and is simple and insightful, is not appreciated enough at STOC/FOCS when compared with more technical work) is quite different from the content of the "statement on conceptual work" that James commented on.
One, for example, could agree with you that we don't have the right theory-vs-practice balance (sorry if this trivializes your point) in STOC and FOCS, and at the same time disagree with the "statement on conceptual work," which refers more to papers that introduce new models and new problems.
"I've had this nice paper that should (deserved to) be accepted to STOC/FOCS and it wasn't (in your case it's because you didn't submit it). It's the fault of the community/PC."
Every year, as with any other CS researcher, a certain number of my papers get rejected. For technical theorem-proof style papers, the comments from the referees are generally reasonable, even if sometimes we don't entirely agree with them.
In contrast, for papers where new conceptual ideas are introduced, or surprising links with other areas are discovered, we often get comments from at least one of the referees (and more often two out of three) which are embarrassingly bad. This is what leads to so many complaints and calls for reform.
I don't know how exactly we can evaluate a new conceptual paper, given that its impact tends to take longer to evaluate. In the networks community new conceptual ideas are introduced in HOTNETS, and if they survive the grilling there, they are often followed up by another more complete paper in SIGCOMM/INFOCOM.
Perhaps a HOT-Theory workshop is in order.
For what it's worth, here's my take on the issue.
This discussion would be much better if more people were (1) non-anonymous; (2) giving concrete examples of what "concepts" we like or don't like. If we are just expressing an honest opinion without childish personal attacks, I don't think the authors whose work we are discussing should feel offended.
But who knows, maybe I am being naive.
Luca,
I see the points (practical vs. theory, conceptual vs. technical) as related; a common "complaint" against practically oriented papers is that they're too simple/not technical enough, because there's a lack of realization that framing the practical problem theoretically (often in a way that constructs a new problem) is in itself a significant challenge. I think there's a clear match here with the conceptual vs. technical argument.
You are right, of course, that the questions could be considered independently.
Anonymous --
ReplyDeleteI've mentioned the
the HOT Theory workshop idea before:
http://mybiasedcoin.blogspot.com/2007/08/hottheory-workshop.html
I think it may be worthy of further discussion, if people feel FOCS/STOC isn't an appropriate home for conceptual papers.
a common "complaint" against practically oriented papers is that they're too simple/not technical enough, because there's a lack of realization that framing the practical problem theoretically (often in a way that constructs a new problem) is in itself a significant challenge.
In my experience this has not been the main issue with papers claiming practical impact. The real issue is the legitimate concern by PCs that what is being modeled or solved is in fact not truly representative of or essential to the practical problems claimed. In general, PCs really like tasty new theoretical problems if they are convinced that solving them has or will have practical impact. I certainly have read plenty of papers with questionable claims of practicality in over-hyped submitted versions. On the other hand, I have also read some great STOC/FOCS papers that make the practical impact clear.
The question is how best to judge the legitimacy of this impact. In constructing a PC for a theory conference it is easy to anticipate the theoretical areas where expertise is needed. It is much less clear that one can anticipate the right expertise in a given practical area where theory is going to be applied.
An example: consider the paper by Koutsoupias and Papadimitriou from '99 that opened the "price of anarchy" line of research but wasn't that interesting technically.
It was published in STACS -- should it have been a STOC/FOCS paper? I think so.