Good luck Rose!
Update: Rose lasted until round 11 and ended up tied for fourth. Congratulations!
Final standings here.
Sorry, completeness results are algorithms. They are proved by algorithms people using algorithmic techniques. There is a simple-minded view that upper bounds are algorithms and lower bounds are complexity. But that doesn't reflect reality: upper bound results like the PCP theorem or SL=L are complexity. It's not even clear whether a result like "circuit lower bounds imply derandomization" is an upper bound or a lower bound.
Since algorithmicists, like complexity theorists, lack techniques to prove general lower bounds, they instead use reductions to show that problems are hard. This gives them a two-pronged attack on a problem, where the failure to find a reduction might lead to an algorithm or vice-versa. In structural complexity, we see a similar dichotomy between inclusions and relativized separations.
There is no absolute dividing line between algorithms and complexity, but loosely, algorithms deals with specific problems while complexity studies classes of problems based on some computation model with certain resource bounds. So the definition of PPAD and its relationship to other classes is complexity, but the reduction from PPAD to Nash Equilibrium is algorithmic.
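To make the "reductions show hardness" point concrete, here is a minimal sketch in Python of a textbook construction (not specific to this post): the classic reduction from 3-SAT to Independent Set. A 3-CNF formula with m clauses is satisfiable iff the constructed graph has an independent set of size m, so an efficient algorithm for Independent Set would give one for 3-SAT.

```python
# Standard 3-SAT -> Independent Set reduction: one vertex per literal
# occurrence, a triangle within each clause, and an edge between every
# contradictory pair of literal occurrences.
def sat3_to_independent_set(clauses):
    """clauses: list of 3-tuples of nonzero ints, where -x denotes "not x".
    Returns (vertices, edges, k) for the Independent Set instance."""
    vertices = [(i, j, lit)
                for i, clause in enumerate(clauses)
                for j, lit in enumerate(clause)]
    edges = set()
    for u in vertices:
        for v in vertices:
            if u >= v:
                continue
            same_clause = (u[0] == v[0])      # triangle within each clause
            contradictory = (u[2] == -v[2])   # a literal and its negation
            if same_clause or contradictory:
                edges.add((u, v))
    return vertices, edges, len(clauses)

# (x1 or x2 or x3) and (not x1 or not x2 or x3)
V, E, k = sat3_to_independent_set([(1, 2, 3), (-1, -2, 3)])
print(len(V), len(E), k)   # 6 vertices, 8 edges, target independent set size 2
```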
Among the dozen is the oddly-named Journal of Algorithms in Cognition, Informatics and Logic, a sort-of resurrection of the Journal of Algorithms, whose editorial board resigned at the end of 2003. Given the new title, manifesto and aims, the journal has moved mostly away from traditional TCS algorithms toward a more logic and AI focus. Hal Gabow tells more, including how, without their knowledge, many people from our community, including some previous Journal of Algorithms editors, were mentioned as supposedly connected to this new incarnation.
A similar story happened with the Journal of Logic Programming whose editorial board had resigned in 1999 and whose journal was remade as the Journal of Logic and Algebraic Programming.
The last issue of J. Alg was volume 62 number 2. The first issue of JACIL is volume 62 number 3, so JACIL is officially just a continuation of the Journal of Algorithms. Given the vastly different editorial focus, why not just start it as a new journal? Partly to take advantage of the reputation of the former journal, but also to protect the back catalog: the valuable assets Elsevier has in the many important papers that have appeared over the years in J. Alg and the other Elsevier theory journals.
But even for the theory journals that remain at Elsevier, like TCS, JCSS and I&C, one cannot help but notice an overall decline in the quality and quantity of the articles appearing over the last couple of years. One would hope that those missing strong papers are being sent to journals like Theory of Computing and the ACM Transactions on Algorithms and Computation Theory and a few have. But the controversies over journals are causing even greater numbers of authors in theory and throughout computer science not to bother writing journal versions of their conference papers. The main complaints about Elsevier relate to access, but no paper is less accessible than the paper not written.
Update 3/16/09 from the editors of JACIL
The Journal of Algorithms in Cognition, Informatics and Logic is severing its ties to Elsevier and is moving to a new publisher.
This is correct, but the preceding text in your blog gives the wrong impression of the correct standard procedures of setting up a journal that we have followed here. Our starting point was a substantial list of editors (about 50) from the logic and cognition communities who had accepted to be on the board of the new journal. To this we added additional names from the algorithms community and then sent a formal letter of invitation to all names on the list. The letter and attached list were private and we made it clear that these were invited editors, but included those names who had already accepted. This is common practice, and such lists always remain private until the process is complete. Unfortunately someone (most likely from the algorithms community) made it public and it ended up on your web page.
The misunderstandings and grievances generated by these actions need to be corrected, now that our community has withdrawn the journal from Elsevier, as a consequence of a report by one of our members John Lloyd.
We would therefore be grateful, if you append this entire letter to your blog
best
Dov Gabbay and Jörg Siekmann
"She has fought a very energetic race, but the math just isn't there." (Tim Russert on MSNBC)and many more."She's mounted an extraordinarily impressive and tough campaign," said Steve Grossman, a Massachusetts superdelegate and pledged Clinton supporter. "The math is tough. Most people think the math is virtually impossible." (Boston Herald)
Obama chief strategist David Axelrod said whichever way the Clinton camp spins it, "the math is the math." (AFP)
The Clintons' War Against the Math (ABC News)
What is our beloved field of mathematics doing to poor Hillary? Of course "math" here does not refer to the technical delicacies of the field, but rather to the fact that in the end the nomination goes to the candidate with a majority of delegates, and given the current delegate status the probability that Obama will not achieve that majority is quite low. The term "math" is also being used as a logical game-stopper—no one can make 1+1=3 no matter how hard they try.
"Math" gets played by the media as a cruel and heartless monster that many believe Hillary Clinton cannot defeat, rather than the the beautiful and ever growing field of knowledge that we love and respect.
Or maybe, as John Dickerson suggests, another scientific field now applies.
The race for the Democratic nomination…now feels like a quantum physics problem: How long can a body exist in a state approximating motionlessness without actually stopping?
Still at the business meeting (with a 15-item agenda), Cynthia explains that another rule of thumb was in place for STOC’08: If a program committee member said "this is my favorite submission," then the paper was marked for acceptance, barring a severe negative reaction from the other committee members.
Next, Bobby Kleinberg tells us about the "TheoryWiki" project, whose goal is to organize a community-wide effort to improve the presence and quality of TCS on Wikipedia. This seems like a fantastic goal. To rant tangentially while I have the chance: Unfortunately, a large contingent of our community recently contributed to the Springer Encyclopedia of Algorithms. There were various area editors who put together a list of possible articles, and then solicited authors to write them. The area editors were not paid. The authors were not paid. On the other hand, the default was for the copyright on all materials to be handed over to Springer, who will create a huge and potentially useful volume, and then sell it at a very high price. I agreed to write my article, but I complained quite a bit first. I was told that the process was too far along to change anything. I asked Springer if I could put it on Wikipedia. They said no. Finally, I said: I am writing an article. I am putting it in the public domain. I will give you permission to distribute it however you like; take it or leave it. They took it. Unfortunately, most authors did not make similar deals, and a tremendous amount of time and effort has been wasted (or, at the least, vastly underutilized).
Adam Kalai announces that STOC 2010 will be in Boston (where he and Yael will be joining the newly formed MSR New England). Allan Borodin reads the conceptual manifesto. Despite a lot of dissent expressed in private conversations, Mikkel Thorup is the only one to speak up. He argues that, while new models should be valued, we might also want to appreciate the kind of algorithms that are running on millions of computers around the world right now.
I’ll end with some talks I enjoyed from the rest of the conference:
Jeanette Wing, the new director of CISE, kicks off the business meeting with an overview of the funding situation for theory at NSF. I think I discern two clear messages: First, NSF is part of the executive branch, so there is one clear way we can affect the budget this November. CISE has requested a 19.5% funding increase for 2009, with a 25.5% increase requested for CCF. Secondly, the best way to expand the amount of funding for theory is for algorithms people to look for money outside of pure theory venues. The opportunity to do this will hopefully be improved by having Sampath and Jeanette on our side at NSF.
Dick Karp wins the SIGACT Distinguished Service Award, for his tireless dedication to promoting and expanding TCS. Prasad and Harald are given their best paper awards. Then Cynthia gives her report on the program committee, and its decision process.
80 papers accepted out of 325 submitted (that's about 24.6%). Some notable results: Congestion and game theory goes 5/13 (38.5%), and metric embeddings goes 0/13 (0.0%). Before the committee met, they agreed on having a more open mind toward conceptual papers which might be otherwise overlooked because they lack technical depth. The following paragraph was added to the call:
Papers that broaden the reach of theory, or raise important problems that can benefit from theoretical investigation and analysis, are encouraged.

This paragraph has been kept for FOCS'08.
The committee sought to appreciate simplicity as a virtue; no longer "I like the ideas, but the proofs are simple"; instead, "I like the ideas, and the proofs are simple!" I don't know if "They changed the model so as to trivialize the problem" is also replaced by "They changed the model, and now the problem is trivial!" I think responsible analysis of a paper is probably a bit more nuanced.
Later, Madhu Sudan spoke of instances where a well-known problem had an easy solution, and this prevented a journal or conference from publishing it. This is certainly ridiculous, and I have a hard time believing that it's a frequent occurrence (of course, I have about 1% of Madhu's experience). I've seen examples where the community considered it "embarrassing" that the solution was so simple, but not where the paper itself was derided.
Personally, I love the beautiful intricacies of hard, technical proofs. It's like a little universe sprung out of the human effort poured into developing a deep understanding of some problem. There are often recurring characters, a unique language, a sense of history, twists and turns, all mounting towards a resounding conclusion that one only fully comprehends after multiple readings and intense effort. But in our field, the beauty of complexity only makes sense in contrast to our search for simplicity. Simplicity is certainly a virtue.
When I have criticized a paper based on "technical simplicity," it's not because I wish the authors had purposely obfuscated their arguments. Rather, one has to understand the primary goals of a theoretical field: To approach understanding through rigor. What we are trying to understand is computation in all its forms. Toward this end, we often consider idealized versions of problems, and in this respect modeling becomes incredibly important. It comes up in algorithms: What happens if the traveling salesman wants to minimize the average latency, and not the total travel time? And it happens in complexity: What if we allow our constant-depth circuits to have mod gates with composite moduli?
In both cases, we are not confronting the actual problem we want to solve; real-life instances of salesman problems (e.g. satellite movement) probably involve other practical constraints, and (uniform) poly-size circuits can probably do a lot more than AC_0[m]. So often I have to measure the importance of a new model by how it differs technically from the old one. If simple modifications of the old TSP algorithms suffice for the minimum-latency version, it's not clear that we have learned something new (even though one could argue independently that the min-latency version is practically important!). And if AC_0[m] circuits could be simulated in a simple way by AC_0[p] circuits, then I wouldn't think as highly of a paper proving lower bounds against AC_0[m].
Maybe we can be a little more appreciative of the subtlety involved in the reviewing process, and agree that "simplicity is a virtue" is a bit too simplistic to be the motto for a program committee.
STOC 2008 begins. Victoria is a gorgeous city, if a bit sterile. The population feels mostly transient. The people are very friendly, and the streets are very clean. Most academic conversation turns eventually to one of two topics: Outcomes of the hiring season, and opinions on the "conceptual manifesto" (my naming); more on the latter topic in the business meeting post next.
The conference starts off strong, with Ran Raz presenting Anup Rao's optimal parallel repetition theorem for projection games (Anup had visa issues). Anup gives optimal bounds on the rate of decay of the value of a 2-player game repeated in parallel, in the case where the answers of one player determine the unique answer of the other player that causes the verifier to accept (this is the projection property). The decay rate was recently proved to be optimal by Raz, thereby disproving a strong parallel repetition theorem.
A special case of a projection game is a unique game, the topic of Prasad Raghavendra's paper Algorithms and inapproximability results for every CSP?. Prasad is one of our own, a theory student at UW; his paper was co-winner of the best paper award and sole winner of the best student paper award. For a few years now, since the KKMO max-cut paper, it has been suspected that there is an intimate connection between the unique games conjecture and the power of semi-definite programming in approximation algorithms, although it is only recently--in the work of Austrin--that this connection has begun to materialize explicitly. Prasad sets the connection in stone: He gives a general class of SDPs and a generic poly-time rounding algorithm for all of them, such that for any MAX k-CSP problem, the approximation bound achieved by his algorithm is best possible assuming the unique games conjecture. A key technical step involves converting any integrality gap for his SDP to a unique games hardness result. The talk is remarkably lucid and well-paced.
The other best paper winner is Harald Raecke, for his work "Optimal hierarchical decompositions for congestion minimization in networks." Raecke shows roughly that, given a graph G, there exists a family of trees such that any multi-commodity flow problem in G can be solved by first routing it in each of the trees (trivial), and then mapping a convex combination of those routings into G. The resulting routing in G has congestion within O(log n) of optimal. The mapping from the tree routings to routings in G is fixed, and in particular independent of the flow instance. This gives an O(log n)-approximate oblivious routing protocol, which is best-possible. His proof is a beautiful and unexpected reduction to the FRT tree embedding theorem. In another quite unexpected move, Raecke shows that his tree decomposition theorem can be used to obtain an O(log n)-approximation for the minimum bisection problem.
I expect that the next post, concerning the business meeting, will be a bit controversial. In other news, Adam Klivans loses $20 for betting that the dessert contains papaya. It was mango.
On Saturday, SIGACT, in conjunction with the Computing Community Consortium, held a workshop on Visions for Theoretical Computer Science. The goal of the workshop was to produce "vision nuggets" about exciting research themes in TCS that could have a large impact in the future. In other words, to craft PR materials that advertise TCS outside the community (most importantly, to funding agencies). Some pre-workshop socializing started off a bit dangerously, with Anna Karlin explaining that Avi Wigderson should saber the champagne since last time she ended up in the emergency room…
On Saturday in Seattle, there will be a Visioning Workshop with two goals.
At the STOC business meeting, Borodin will discuss his co-authored letter about conceptual contributions that has already appeared on Scott's blog. Nobody seriously argues against papers with important conceptual points; rather, we have the problem that STOC and FOCS have gotten to the point where they accept only a fraction of the strong papers in a given year, difficult decisions have to be made, and it is much easier to recognize a strong technical paper than a strong conceptual one. Still, both the STOC and FOCS 2008 committees are fighting back with the new line added to the call.
Papers that broaden the reach of theory, or raise important problems that can benefit from theoretical investigation and analysis, are encouraged.

We can only recognize true conceptual greatness when it stands the test of time, which conflicts with computer science's deadline-driven conference system. Something has to give.
If you ask the average person on the street which is more accurate, a random sampling of 1200 people or an online survey open to all, most will (incorrectly) say the latter. On-line surveys suffer from statistical skewing—they only measure people who take the time to fill out surveys. And as people like me get inundated with requests, the only ones to fill out surveys are people with strong opinions about the topic or those with too much time on their hands, and the results of these surveys will be a quite poor reflection of reality.
If every survey writer only sent their surveys to a small randomly selected group of people, then each of us would have very few surveys to fill out and could take the time to do so. But we can't expect surveyors to act so responsibly, so nearly all surveys will suffer. So don't bother with the surveys. Open up an on-line suggestion box, a message board or a blog and get the discussion going. Use words instead of meaningless statistics to guide your decisions.
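A quick, hypothetical simulation makes the skewing point concrete (the numbers below are made up for illustration): a uniform random sample of 1200 estimates a population proportion well, while a self-selected online poll that mostly attracts people with strong opinions can be badly off even with far more respondents.

```python
import random

random.seed(0)
population_support = 0.40      # true fraction of the population holding some opinion

# Random sample of 1200: whether someone responds is independent of their opinion.
random_sample = [random.random() < population_support for _ in range(1200)]
print("random sample estimate:", sum(random_sample) / len(random_sample))

# Self-selected poll: suppose supporters are three times as likely to bother
# responding, so the respondent pool over-represents them.
responses = []
for _ in range(100_000):
    supporter = random.random() < population_support
    respond_prob = 0.15 if supporter else 0.05
    if random.random() < respond_prob:
        responses.append(supporter)
print("online poll estimate:", sum(responses) / len(responses))
# Typical output: roughly 0.40 for the random sample, roughly 0.67 for the poll.
```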
The basic claim is that:
As others have implied, you can find several fine sources for PRAM algorithms. For this reason, my comments below mostly focus on a way to address the parallel programming issue:
If you are looking for code examples, you are welcome to write to me.
Here are some Q&A:
Q: I never learned parallel programming formally, but I picked up some ideas in my free time from Java/MPI/OpenMP/etc. How do any of these relate to XMTC parallel programming?
A: XMTC parallel programming is simpler and different.
Q: The problem of algorithms being taught independently of programming is present within the exclusively serial world. What would you say to the many theorists who are resistant to the idea of having a heavy programming component in their courses?
A: IMHO the serial case is completely different. Most students have experienced/learned serial programming BEFORE taking the serial algorithms course. This is NOT the case for parallel programming. My experience is that students learn best if parallel programming is coupled with parallel algorithms. The main difference is that the parallel algorithms course is where parallel programming should be FIRST taught. The reason is that parallelism requires introduction of some first principles representing an "alien culture" to students. In contrast, serial computing is related to: (i) mathematical induction, (ii) the way our brain instructs our body (as a single processor), etc. There is nothing out there that prepares us for parallel computing.
Q: What text do you use for teaching parallel algorithms?
A: I have been using my class notes.
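To give a flavor of the kind of simple parallel algorithm such a course couples with programming, here is a rough sketch, not XMTC and not taken from those notes, just a generic Python illustration of balanced-tree summation: combine elements pairwise in parallel for a logarithmic number of rounds.

```python
# A minimal, hypothetical illustration (not XMTC): PRAM-style balanced-tree
# summation, simulated with Python's multiprocessing.Pool. The point is the
# algorithmic shape (pairwise combining in O(log n) rounds), not real
# speedup, which Python's process overhead would swamp at this scale.
from multiprocessing import Pool

def pair_sum(pair):
    a, b = pair
    return a + b

def tree_sum(values):
    data = list(values)
    with Pool() as pool:
        while len(data) > 1:
            carry = [data.pop()] if len(data) % 2 else []   # odd element waits a round
            pairs = list(zip(data[0::2], data[1::2]))
            data = pool.map(pair_sum, pairs) + carry        # one parallel round
    return data[0]

if __name__ == "__main__":
    print(tree_sum(range(1, 101)))   # 5050
```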
Warm thanks to Adam Smith and Aravind Srinivasan for their helpful comments on an earlier draft of this text.
One of those workshops covers an area that has excited me for several years now: the Third Workshop on Prediction Markets. Prediction markets aggregate information quite efficiently in ways we don't yet fully understand, and they remain a fertile area of study. Legal limitations on betting have restricted the applications of prediction markets, particularly in the US, but that might change soon. The Commodity Futures Trading Commission (CFTC) is asking for public comment on the regulation of prediction markets. Their concept release gives a nice discussion of the legal issues. Will this lead to more legitimate real-money markets in the US? Time will tell.
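For readers who haven't seen a prediction-market mechanism up close, here is a small sketch of one standard design, Hanson's logarithmic market scoring rule (LMSR). This is an illustrative Python toy, not anything specific to the workshop: an automated market maker quotes prices from a cost function, and the prices can be read as the market's current probability estimates.

```python
import math

def lmsr_cost(q, b=100.0):
    # b is a liquidity parameter; q[i] is the net quantity of shares sold on outcome i.
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]     # current market probabilities

def trade_cost(q, outcome, shares, b=100.0):
    """Price a trader pays the market maker to buy `shares` of `outcome`."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

q = [0.0, 0.0]                              # two-outcome market, no trades yet
print(lmsr_prices(q))                       # [0.5, 0.5]
print(trade_cost(q, outcome=0, shares=50))  # cost of a bet on outcome 0
q[0] += 50
print(lmsr_prices(q))                       # outcome 0 now priced above 0.5
```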
Each question though involves three parties: the questioner, the speaker and the rest of the audience. A good talk has a certain rhythm and questions can disturb that rhythm. So how does the audience feel about the questions? Depends on the question.
Combinatorial Number Theory: Results of Hilbert, Schur, Folkman, and Hindman by Yudi Setyaan. A Thesis submitted in partial fulfillment of the requirements of the defense of Master of Science in the Department of Mathematics and Statistics. Simon Fraser University, July 1998

I printed it out and still have it. It was not helpful for what I wanted, but it was interesting and I'm glad to have it.
The first talk, by Richard Karp, carried the message that problems of the Hitting-Set kind are most often easier in practice than their worst-case NP-complete pedigree leads one to expect. This was supplemented by Neal Young's second-day talk involving Lipton's question, "Is it hard to generate random hard instances of NP-complete problems?" Ravi Kannan spoke on the practicality of finding good approximate equilibria for non-zero-sum games and market situations. Dan Boneh showed the extent to which even certified-sound cryptosystems become vulnerable when keys k are used to encrypt data that overtly contains k, or when there are closed cycles of encodings of multiple keys. He also explained how Lipton's kung-fu wielding of the Chinese Remainder Theorem in attacks on RSA and kin slowed web servers by 8%. Anita Jones surveyed the urban landscape of computer security, and propounded the sequence of system calls made by a program as a signature by which to identify malware.
Michael Rabin described a practical implementation of zero-knowledge protocols for high-stakes auctions, one point being efficiency gains from coding on the arithmetic rather than going all the way down to Yao's oblivious ZK circuit verification. Nisheeth Vishnoi presented a polynomial-time algorithm that distinguishes the "Yes" case of the "Unique Games Conjecture" from a tighter "No" case asserting also that the underlying graph is an expander (paper). This evinces a surprising difference between "Unique Games" and non-unique Constraint Satisfaction Problems, and suggests that if the UGC holds at all, any proof must differ widely from traditional proofs establishing hardness of CSPs.
Erik Winfree's multimedia talk "Is DNA Computing?" demonstrated that whatever one feels about the feasibility of large-scale DNA computers (on which Lipton followed Adleman's first paper with a neater formulation), DNA activity is undeniably computational.
Giovanni diCrescenzo talked on Lipton's idea of storing keys not as small files but parceled among huge hunks of stored data, so that intrusion attempts to recover them leave huge footprints, and on applying hashing and extractors to implement it. I surveyed the frontiers of super-linear lower bounds, including the Lipton-Tarjan separator theorem's relevance and Dick's part in time-space tradeoffs for SAT, and used Allender-Koucky's observation as a segue to super-polynomial lower bounds. I outlined my position that "Very High Degree" methods are capable of surmounting known barriers and may be necessary.
Dick's longtime friend and associate Richard DeMillo surveyed Lipton's contributions to fundamental problems of software correctness. Wenke Lee focused on how 'bots have opposite behavior and intent to viruses, and how Damballa, the company founded by Merrick Furst, Lee, David Dagon, and Lipton, combats botnets.
Parikshit Gopalan presented new ideas on the lower bound frontier of ACC[m] for composite m, and continued a running gag on the power of Chinese remaindering. But Avi Wigderson planted a Monty Python foot on further progress by demonstrating that almost all known complexity-class results preserve themselves under an arithmetical form of relativization, while P vs. NP and most other frontier relations do not. Memo to new graduate students honing their technical abilities by building oracles A separating classes C from D: the ante just got upped to building both A and a low-degree extension A' such that C^A is not contained in D^{A'}, so then proving C contained in D would require "non-algebrizing techniques". And various vice-versas...all of which tighten the rules of equation-solving needed to build A. One sentence of hope, actually due to Scott Aaronson, remained on Avi's slides: methods that go beyond treating (feasibly-constructed) multilinear extensions as black boxes can possibly evade the new barrier. Jin-Yi Cai closed by presenting a "Holographic" algorithm toolkit of subversive power, whose steep learning curve is helped by its affinity with quantum computation.
On the learning-curve subject, I came away with the impression that although it is often steeper for cryptographic protocols than for algorithms, more of the former actually get implemented. Of course, Karp has always stood for implementing algorithms, and Neal Young reported on how his algorithm beats simplex in its target domain, but I'm just reporting my positive impressions of security applications from the two days. In all it was a rich meeting, with "not too many embarrassing stories" for Dick and lots of energy.
That's the problem with the social security number. We've become so scared of using the SSN that the number has become useless. What we need is a unique public ID that we are not afraid of using. In my ideal world, we would all have a public and a private ID. With someone's public ID I could call them, text them, IM them, email them, even send them postal mail by simply writing their ID number on the envelope and letting the post office's computers figure out how to route the mail. People would use their private ID to log onto some central server to set the places that the public ID points to, as well as block certain users and deal with privacy restrictions.
You could use your public and private ID to log onto all your services so you don't need to keep separate accounts on various webpages. Both Northwestern and the University of Chicago have single electronic IDs and passwords to access email, benefits and wage information, course information, get wifi access and much more.
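As a rough sketch of the idea (purely hypothetical; the names and structure below are invented for illustration), the central server would map each public ID to owner-configured endpoints, with the private ID acting as the credential for changing those settings:

```python
from dataclasses import dataclass, field

@dataclass
class IdentityRecord:
    private_id: str                                   # secret credential, never shared
    endpoints: dict = field(default_factory=dict)     # e.g. {"email": ..., "phone": ...}
    blocked: set = field(default_factory=set)         # public IDs this user blocks

class DirectoryService:
    """Central server mapping public IDs to user-controlled endpoints."""

    def __init__(self):
        self._records = {}                            # public_id -> IdentityRecord

    def register(self, public_id, private_id):
        self._records[public_id] = IdentityRecord(private_id)

    def set_endpoint(self, public_id, private_id, channel, address):
        rec = self._records[public_id]
        if rec.private_id != private_id:              # only the owner may update settings
            raise PermissionError("bad private ID")
        rec.endpoints[channel] = address

    def resolve(self, public_id, channel, requester_public_id):
        rec = self._records[public_id]
        if requester_public_id in rec.blocked:
            return None                               # privacy restriction
        return rec.endpoints.get(channel)

# Example: look up the "email" endpoint for public ID "12345" on behalf of "67890".
svc = DirectoryService()
svc.register("12345", private_id="s3cret")
svc.set_endpoint("12345", "s3cret", "email", "alice@example.org")
print(svc.resolve("12345", "email", requester_public_id="67890"))
```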
Now that people avoid using the SSN as a public ID, cell phone numbers and email addresses are beginning to play that role. Privacy advocates have slowed down efforts to have a public ID for a variety of reasons. But the great need for an ID means the market will start using whatever it has available. Isn't it better to carefully design a proper public/private ID system than to end up with some ad-hoc market-driven one?