Tuesday, August 31, 2010

Report from Barriers II Part 1

On Thursday Aug 26 Lance stated Bill is at Barriers II in Princeton and promises a full report upon his return. Not quite sure I promised that, or what a full report would entail, but here goes.
  1. When I first heard there would be a BARRIERS II workshop I assumed it would be full of Oracle results, Natural Proofs, Algebrization, and maybe some speculation on independence results (I do not think we have any right now). And indeed, this was at least part of the agenda for Barriers I. (See here for a blog about Barriers I.) Barriers II was (1) Approx algorithms and lower bounds, (2) Crypto, (3) Quantum Computing, (4) Communication Complexity, (5) Geometric Algorithms (NOTE- NOT Ketan's Geometric Complexity Theory program). However, I can't say it was false advertising, since the program was on the web early enough that one could see what one was getting into. But it was misnamed.
  2. The organizers told me that it was called Barriers because the speakers were urged to talk about where the field is and where it is stuck. I think talks like that have a name already: Open Problems, or Directions or whatnot.
  3. All of that aside, how was it? It was WONDERFUL! The talks were uniformly good. They were mostly survey-talks, so someone outside the field could follow them (that was the intent). This also led to the following oddity: the day on (say) Quantum had very few quantum people at it (except the speakers) since your standard quantum person has probably heard the talks before or at least knows the material.
  4. Approx Alg and lower bounds: I was inspired to actually learn what Semidefinite Programming and Iterative Methods are. I finally understand the UGC and its different formulations. Dana Moshkovitz gave a nice rump session talk about a promising route to PROVE the UGC. This was not one of those we doubt it will work but the approach is still interesting talks (that could be a rallying cry for our field); it seemed like a real plan.
  5. Crypto: Lattice stuff and also stuff about: If your key leaks slowly can you still do crypto? (yes).
  6. Quantum: Scott introduced the speakers and was the first one. He had a slide warning us to NOT PANIC at the mention of Quantum. All of the talks were outreach--- why complexity theorists should learn quantum methods. I blogged about this non-technically here. One key point I will re-iterate: Even if you don't work in quantum computing you will need to learn their methods. (Analog: even if you don't work in prob you need to know the prob method.) Over dinner we noted the following: there are many quantum books for the layperson but there are few (none?) complexity books for the layperson (hurry up Lance!). Why is this? Quantum seems less abstract than Complexity and some laypeople may think they have a notion of it. They may be wrong.
  7. Communication Complexity: Paul Beame gave an hour long talk where he went over the entire Kushilevitz-Nisan book. (NOTE: This book is on Lance's list of favorite computational complexity books.) How did he do it? He skipped some stuff. (I recommended talking fast. Fortunately he didn't take my advice.) The other talks were applications of Communication Complexity to lower bounds on data structures (not quite- some of the lower bounds didn't use Comm. Comp.), data streams, direct sum problems, and a talk on quantum comm. comp. This raises the question: Is Communication Complexity only interesting when it applies to something else, or is it interesting for its own sake? I proposed the following open problem in a rump session, though it seems like it's already out there (that is, it's not really MY open problem).
    Alice has x, Bob has y, both n-bit strings. We are promised that PARITY(x)=0 and PARITY(y)=1. Alice and Bob want to find an i such that x_i ≠ y_i in d rounds (d a constant) with O(log n) bits-per-round. Show they cannot do this. ANSWER: We already know this statement is true since it is equivalent to PARITY ∉ AC0. (Using Karchmer-Wigderson games, another game that is not much fun.) However, I would like a communication complexity proof of this, which would give an alternative proof of PARITY ∉ AC0. In particular it would give a top-down proof instead of a bottom-up proof. (A sketch of the easy log n-round protocol appears after this list.)
  8. Geometric Algorithms. Paul Beame said that this session would be a nice counterpoint to the communication complexity talks since the Geometric Algorithms session would prove some upper bounds on problems that the Comm. Comp. session proved lower bounds on. (Mike Sipser once told me It is good to do an algorithm as a sanity check on your lower bounds.) Alas I had to take off that morning so I don't know if this really happened; however, I assume that it did.
  9. Not much talk about the alleged P NE NP proof; however, I will discuss what was discussed in my next blog.
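
Back to the communication problem in item 7: here is a minimal sketch (Python, function name mine) of the standard binary-search protocol. In each round the players compare the parities of the left half of the current interval; since the parities of the whole interval differ, they must differ on one of the two halves, so the players recurse there. This finds a differing index in about log n rounds of O(1) bits each; the open problem asks you to show that a constant number of rounds cannot compensate, even with O(log n) bits per round.

    def find_difference(x, y):
        # x, y: lists of bits of equal length with PARITY(x) != PARITY(y).
        # Invariant: the parities of x[lo:hi] and y[lo:hi] differ, so some
        # index in [lo, hi) has x[i] != y[i].
        lo, hi = 0, len(x)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            # Alice sends the parity of x[lo:mid]; Bob sends the parity of y[lo:mid].
            if sum(x[lo:mid]) % 2 != sum(y[lo:mid]) % 2:
                hi = mid      # the halves disagree on the left; recurse there
            else:
                lo = mid      # then they must disagree on the right
        return lo

    # Example: find_difference([1,0,1,1], [0,0,1,1]) returns 0, and indeed x[0] != y[0].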

Monday, August 30, 2010

New Institute for Theory of Computing

The Simons Foundation has announced a competition to establish a new Institute for the Theory of Computing in the United States.
Computation (and its abstract form, the algorithm) has not only revolutionized science, technology, and society, but also is among the most important scientific concepts discovered and developed in the 20th century. This scientific discipline has enabled numerous technological advances and has forged many connections to mathematics and other sciences, providing fruitful insights and new problems. It has impacted not only computer science and technology, but also parts of mathematics, physics, biology, economics and sociology. Meanwhile, its core scientific agenda is extremely ambitious and challenging. In short, this theoretical field is one of the most exciting and important today, attracting excellent young talent to its ranks at a growing rate. Young people with education and training in this field are well positioned to make central contributions to computer science and science in general.
An institute focused on the theory of computation could bring together a critical mass of researchers from around the world to accelerate fundamental research on computation and to further develop its interactions with other areas of science ranging from mathematics and statistics to biology, physics and engineering. The Simons Foundation invites applications for grants to establish such an Institute.
Letter of intent due by October 27 and full proposals by next June.

It's the funding that catches the eye, $6 million/year for 10 years with possible renewed funding or endowment after that. To understand the scale, that's roughly the budget of TTI-Chicago. This new institute will play a major role in our field.

The institute will be modeled on the Kavli Institute for Theoretical Physics, MSRI and the IMA (also similar to DIMACS), which are located at or near major university campuses, host programs on a given topic for several months to a year with long-term visitors, and run several workshops related to those program topics.

This is opposed to the Oberwolfach/Dagstuhl/BIRS model of an isolated location with on-site room and board, weekly workshops but little to no long-term programs or visitors. I'm glad the Simons Foundation is going with the former model. While I've enjoyed the many Dagstuhl workshops I've attended, centers like DIMACS tend to have a greater long-term impact on the field.

Where will this institute be located? We'll find out next fall. Should be exciting.

Friday, August 27, 2010

Theory Journals

I got the following request from a reader.
I have a question about TCS journals. As I am trying to follow your advice on being more diligent about journalizing my papers, I realized that I am surprisingly ignorant about where to send them. A couple of more senior colleagues I asked didn't really have the answers.
What are the journals that accept theory papers? What's their specialization (if any)? What's their relative quality/reputation? What's the expected turn around time for each of them? Anything else I should be asking?
Sounds like a list that should be on a site we can edit in the future, like Wikipedia, and in fact they have such a list. It could use some additions, links, publishers and specializations if anyone is so inclined to help update it. Most of these journals cover general theory unless their title indicates otherwise.

For reputation I put JACM first, followed by SICOMP, followed by a rough equivalence class of most of the others. Information Processing Letters has only short papers, good for a result that has a simple proof but is still worth writing up.

Turnaround time depends more on the editors and the referees than on the particular journal. The average time from submission to publication (assuming no major issues found in the refereeing process) is a bit over a year, with a very large variance. Feel free to ask/bug the editor if you need faster publication.

Another big issue to consider is which publisher to choose. It can affect who can access your paper, where it will be indexed and archived and how and whether the paper will appear in print and/or on the web. I'm a (biased) fan of the ACM with its well-organized low-cost digital library but there is something to be said for open-access journals.

Thursday, August 26, 2010

Cryptography if P = NP

Bill is at Barriers II in Princeton and promises a full report upon his return.

Ask many computer scientists what happens if P = NP and you'll get the response that it will kill cryptography. Let's ignore solving all optimization problems, solving AI and learning everything, likely cures to many diseases and many more amazing advances in society; what a disaster it will be that we can't send secret messages to each other.

I don't at all believe P = NP but let's have this hypothetical discussion of what happens to cryptography in that world. We do lose public-key cryptography, the ability for two people who have never met to electronically still send encrypted messages to each other. How will I send my credit card info to Amazon if P = NP?

Amazon will send me a USB key containing a gigabyte worth of a one-time pad, enough to encrypt all my transactions for my entire life (even though I will live much much longer thanks to P=NP). This might work for Amazon but what about tiny mom and pop web stores? We just need a trusted site for a one-time pad registry so my USB key will work for any Internet transaction. Even for public-key cryptography today we need trusted parties to verify sites so we haven't lost much here.
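
For the curious, here is a minimal sketch of how such a pre-shared pad gets used (Python; the pad size and message are made up for illustration). The point is that one-time-pad encryption is information-theoretically secure, so it survives P = NP, as long as each chunk of pad is used exactly once and kept secret.

    import secrets

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    pad = secrets.token_bytes(1 << 20)            # a 1 MB pad, shared in advance (say, on that USB key)
    message = b"my credit card info goes here"    # hypothetical transaction data
    used, pad = pad[:len(message)], pad[len(message):]   # never reuse pad bytes
    ciphertext = xor_bytes(message, used)          # encrypt
    assert xor_bytes(ciphertext, used) == message  # decrypt by XORing again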

For large financial institutions they can ship huge amounts of one-time pad data through secure couriers (or just ship data this way). Large amounts of compressed data can have a very small footprint these days.

What about cryptographic protocols? Zero-knowledge is uninteresting if P = NP. Many secure multi-party computations can be done with private channels which just need private keys. There are many others, electronic money, secure voting and digital signatures come to mind, that seem difficult to do if P = NP. On the other hand they are also hard to do in practice today even under the usual assumptions.

Certainly P=NP will make cryptography considerably more painful than it is now and many things we previously thought were encrypted won't be anymore. But we'll figure out how to meet our crypto needs in any environment. The loss of public-key cryptography shouldn't be our first gut reaction to what happens if P = NP.

Wednesday, August 25, 2010

***SORELLE*** PHD!!/Workshop on Boolean Threshold Functions/NSF program

Three announcements (the last two I was asked to post)

ANNOUNCEMENT 1: Congrats to fellow blogger ***SORELLE*** who got her PhD recently. I was on her committee. She did a masterful job of presenting her thesis. It looked like theory that can be applied to stuff. At least Google, who hired her, might think so. Or they might think (correctly) that she is smart and knows things so she can help them. See her post about her PhD for more information and a pointer to her actual thesis.

ANNOUNCEMENT 2: Rocco Servedio and Ryan O'Donnell have requested that I post about an upcoming workshop at the Center for Computational Intractability in Princeton.

The topic is Analysis and Geometry of Boolean Threshold Functions. It will take place the Thursday and Friday morning before FOCS, Oct. 21--22. For detailed information, please see here. There is also a poster here.

COMMENT FROM GASARCH: It's right before FOCS but it's NOT close to FOCS. It is in Princeton, NJ. FOCS is in Las Vegas. Not sure this makes sense but I suppose they will find out.

ANNOUNCEMENT 3: Tracy Kimbrel at NSF has requested this announcement be posted.

CISE has announced a new funding opportunity administered jointly with the Directorate for Social, Behavioral and Economic Sciences (SBE). The new program is called ICES (Interface between Computer Science and Economics and Social Sciences). You'll find all the details about ICES here. Be sure to read the whole solicitation and click through to the non-exhaustive list of example topics as well. ICES seeks to fund interdisciplinary work at one particular boundary between computer science and economics & social sciences. As always with anything new at NSF, there will be lots of questions. Email CISE/CCF Program Director Tracy Kimbrel (tkimbrel@nsf.gov), SBE/SES Program Director Nancy Lutz (nlutz@nsf.gov), CISE/IIS Program Director Sven Koenig (skoenig@nsf.gov), or CISE/CNS Program Director Darleen Fisher (dlfisher@nsf.gov) and we will do our best to answer the questions. If you're wondering Is my project suitable for ICES? be sure to include a one or two page description with your email because that will help us with the answer. The deadline for submitting proposals to ICES is October 5, 2010.

Tuesday, August 24, 2010

NEW math on Futurama

The Aug 19, 2010 episode of Futurama had NEW math in it! It also had some other math refs.
  1. This website claims that Ken Keeler, one of the writers who has a PhD in math, devised a NEW theorem for use in the show. The theorem and proof are here. I do not know if it is new but it is correct and interesting.
  2. Bender the robot had his mind switched with Amy's, so he was in a human body. In order to prove that he was really a robot he had to pass the reverse Turing Test.
  3. Since PURE MATH led to a solution of a PRACTICAL PROBLEM, the professor exclaimed And they say pure math has no real applications!
This is the first time I know of where some new correct math was introduced on a fictional TV show. NUMB3RS often had new bogus math.

Both The Simpsons and Futurama have websites devoted to math refs in the show: Simpsons Math, Futurama Math.

It is not surprising that The Simpsons and Futurama have math refs since Jeff Westbrook, David X. Cohen, and Ken Keeler, who write for these shows, are all math folk.

See these two prior posts for more on comedy and math.

Monday, August 23, 2010

Is Scheduling a Solved Problem? (Guest Post)

(Guest Post by Ben Fulton.)

"At first glance, scheduling does not seem like a topic that requires much attention from computer scientists".

This was how I wanted to start my review of a book on scheduling, but Bill Gasarch called me out on it.  "Really?" he said.  "I don't know any computer scientists who think scheduling isn't important."  It's true but computer scientists aren't the ones taking first glances at scheduling problems.

They're curriculum designers trying to determine which instructors are needed to teach all of the classes for the fall semester.  Shop managers wondering how the widgets could be sent through the assembly line more efficiently.  Kids on the playground choosing up sides for a soccer game (a problem identical to partitioning a set of jobs with known running times between two processors).   These are the people with scheduling problems.  They'll think about their problem for a few minutes, and come up with a good solution in each case.

The curriculum designer - possibly Michael Mitzenmacher - might decide that the most important goal is to keep all instructors at around the same number of teaching hours. In that case, each next available assignment would be given to the least busy instructor. In doing so, he'll choose the List scheduling algorithm, first proposed by Graham in 1966 and known to produce a schedule no worse than twice the length of an optimal one.
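
A minimal sketch of that rule (Python; the function name and job format are my own, not from the book under review): each job, in arrival order, goes to the machine, or instructor, with the least total load so far.

    import heapq

    def list_schedule(job_lengths, m):
        # Graham's List scheduling: give each job, in the given order,
        # to the currently least-loaded of m machines.
        loads = [(0, machine) for machine in range(m)]   # min-heap of (load, machine)
        heapq.heapify(loads)
        assignment = []
        for length in job_lengths:
            load, machine = heapq.heappop(loads)
            assignment.append(machine)
            heapq.heappush(loads, (load + length, machine))
        makespan = max(load for load, _ in loads)
        return assignment, makespan   # makespan is at most (2 - 1/m) times optimal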

The kids will choose Greedy.  They'll choose a couple of captains and alternate picking the "best remaining" player, in the same way that a scheduler would choose the "largest remaining" job.   Greedy is a heuristic that runs in polynomial time.  It's not guaranteed to find the best solution (this is The Easiest Hard Problem), but the kids are interested in having competitive teams, not in making sure that all possible sets of children can be perfectly divided in polynomial time.
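
Here is a small sketch of the scheduler's version of that heuristic (Python, names mine): take the players from strongest to weakest and give each to the currently weaker team, i.e., assign the largest remaining job to the less-loaded of two processors. (The kids' alternate-picking variant is close in spirit but not identical.)

    def pick_teams(strengths):
        team_a, team_b = [], []
        sum_a, sum_b = 0, 0
        for s in sorted(strengths, reverse=True):   # strongest player / largest job first
            if sum_a <= sum_b:
                team_a.append(s); sum_a += s
            else:
                team_b.append(s); sum_b += s
        return team_a, team_b

    # pick_teams([9, 7, 6, 5, 5]) gives teams of strength 14 and 18,
    # while a 16/16 split exists -- greedy is fast but not optimal.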

The shop manager is likely to have a lot of different constraints to take into account, but she might notice that a station can stop working on one widget and start on another one, if the second is likely to be finished more quickly.  She probably won't realize it, but the Shortest Remaining Processing Time algorithm is known to optimize the average time to completion of the widgets.
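
A sketch of that rule on a single station (Python; I'm assuming widgets arrive as (release time, processing time) pairs): whenever a new widget shows up, switch to whichever unfinished widget has the least work remaining. This preemptive rule minimizes the average completion time.

    import heapq

    def srpt_average_completion(jobs):
        # jobs: list of (release_time, processing_time) pairs, one machine.
        jobs = sorted(jobs)                       # by release time
        pending = []                              # min-heap of (remaining_time, job_id)
        t, i, n = 0.0, 0, len(jobs)
        completions = []
        while len(completions) < n:
            if not pending and i < n and jobs[i][0] > t:
                t = jobs[i][0]                    # idle until the next release
            while i < n and jobs[i][0] <= t:
                heapq.heappush(pending, (jobs[i][1], i))
                i += 1
            remaining, job = heapq.heappop(pending)
            next_release = jobs[i][0] if i < n else float('inf')
            run = min(remaining, next_release - t)   # run until done or until a new widget arrives
            t += run
            if run == remaining:
                completions.append(t)
            else:
                heapq.heappush(pending, (remaining - run, job))
        return sum(completions) / n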

In all three cases, the algorithms they choose are simple, easy to describe, and should work fairly well in their situations.  The schedulers aren't computer scientists - just people with problems to solve.   They'll take a first glance, and they'll solve them with a minimal amount of effort.  Even if you showed them a way to solve their problems that was twice as efficient, but also much more difficult to understand, they'd probably reject it.

So what's the point of studying scheduling?  The practical problems are solved. 

That's the first glance.  You've got to dig a little deeper to find the interesting problems in scheduling.  For example, the complexity of simply describing scheduling problems is a subject that hasn't fully been explored yet.  Problems are typically broken down three ways: the number of processors available; the constraints on when jobs can be run; and the criteria for determining whether one schedule is better than another.  Even if the first and third items are fairly simple, setting up a job precedence graph, lag times, and perhaps a few rules involving a specific processor needing to run a specific job, will likely generate a description so complex that an engineer trying to solve the problem might not even recognize it.

Scheduling gurus Anne Benoit, Loris Marchal, and Yves Robert also ask this question.  In response, they outline some areas of study involving distributed-memory parallel computing that could give rise to some interesting practical improvements. 

That's where you go when you're past the first glance.   And that's why computer scientists need to pay attention to scheduling.

 

Saturday, August 21, 2010

Comments

Bill and I are strong believers in freedom of speech and have long had an open comment policy: allowing anonymous comments, no moderation (except on old posts, for spam) and never deleting comments (again, except for spam). This has allowed our blog to serve as a public forum for the community, giving a place where people can express their opinions openly.

The level of nastiness has dramatically increased in the last couple of months, but we've maintained this policy, believing everyone should be allowed to share their opinion, whether it be good, bad or ugly. But some comments seem to be purposely incendiary instead of giving a real point of view. Please keep comments civil and on topic so this blog can continue in its long role as a place where we can have fruitful discussions of the important issues and ideas in computational complexity and theoretical computer science.

Friday, August 20, 2010

NSF Updates

Many changes at the National Science Foundation both in programs and personnel. Some highlights of upcoming CISE programs.
Unlike conference deadlines, NSF and other grants change frequently. If you are a US academic you should subscribe to the CISE emails and RSS feeds (see here) or check the Theory Matters Funding Page often.

For personnel, let's work our way up. Richard Beigel, a theory program director, is finishing his term and heading back to Temple. A part-time replacement was chosen but it looks like that will be delayed. There might be a slower turnaround on theory grants this year. Tracy Kimbrel stays on as a program director.

Susanne Hambrusch will become the new director of CCF, the division of CISE that funds most of theory. Susanne replaces Sampath Kannan who is returning to Penn. 

The search for a new CISE head (called Assistant Director) to replace the already departed Jeannette Wing continues but no announcements yet.

MIT Engineering Dean Subra Suresh has been nominated by Obama for NSF Director and awaits congressional approval. 

The NSF budget process for next year looks good so far but there has been talk of a congressional freeze on most domestic activities. The NSF is preparing for both possibilities.

Thursday, August 19, 2010

Spielman Receives the Nevanlinna Prize

Dan Spielman wins the Nevanlinna prize for "smoothed analysis of Linear Programming, algorithms for graph-based codes and applications of graph theory to Numerical Computing." Let's also not forget that as an undergrad Spielman co-authored one of my favorite complexity results, PP is Closed Under Intersection. A nice recognition for just a brilliant career.

Other IMU prizes awarded at the ICM opening ceremonies in Hyderabad, India: The Fields Medals go to Elon Lindenstrauss (Ergodic Theory), Ngô Bảo Châu (Algebraic Geometry), Stanislav Smirnov (Statistical Physics) and Cédric Villani (Mathematical Physics). The Gauss Prize goes to Yves Meyer (Number Theory) and the first Chern Medal goes to Louis Nirenberg (Differential Equations).

Tim Gowers is blogging the meeting and the ICM is offering live streaming of the plenary sessions. The Nevanlinna Prize lecture will be given Saturday at 4:15 AM Eastern time. Also notable is Irit Dinur on PCPs at 12:45 AM Eastern Saturday morning.

Wednesday, August 18, 2010

Today is Paul Turan's 100th Birthday!

The following conversation is a fictional version of a real conversation.

Lance: Bill, Wed August 18, 2010 is Paul Turan's 100th birthday. We should post on it.

GASARCH: WOW, he's still alive? Will there be a conference in his honor? Will he go to it? Will he enjoy it? (ADDED LATER FOR CLARIFICATION: Noticing that Lance's mention of Paul Turan is actually a pointer to a Wikipedia Entry on him.) Lance, I see you have one of those gizmos that lets you insert pointers into your speech.

Lance: Uh, Paul Turan is dead, but if he were alive it would be his 100th birthday. And of course I have one of those gizmos--- I am a firstblocker.

GASARCH: Even though he is dead can we still celebrate? Is there cake someplace? How about a free lunch?

Lance: No cake. And the economists I hang out with say there is no such thing as a free lunch. But we should post about it.

GASARCH: Okay. I know a nice proof of Turan's theorem and two applications, so I'll post about it.

When Lance first asked me to do this post my first reaction (after I found out there would be no cake or free lunch) was Paul Turan- Turan's theorem in graph theory. I know a nice proof of it that is not on Wikipedia. I also know two applications of Turan's theorem that I don't think are well known. I'll write those up as part of my post. (I will do this later in the post.) But then I looked at his Wikipedia Entry and I saw that he did so much more than graph theory. He also worked in Analysis, Number Theory, and Analytic Number Theory. Moreover, Wikipedia claims that Number Theory was his main interest. In addition he, along with Paul Erdos, made the Erdos-Turan Conjecture which has inspired so much later work (I will state it later.)

Is it good to have a theorem named after you? While we would mostly think YES, it may overshadow all of your other work in people's memory of you. On the other hand, if it were not for Turan's theorem I would not know who he was, and I doubt Lance would have asked me to post on Turan's 100th birthday.

Is it good to have a conjecture named after you? It is, so long as it's open. Once someone proves it your name will vanish from the literature. Just ask Baudet or Vazsonyi (Baudet's conjecture is now van der Waerden's theorem, and Vazsonyi's conjecture is now the Kruskal Tree Theorem.)

Turan's theorem:
If G is a graph with n vertices and e edges then G has an independent set of size at least n^2/(2e+n)
Here is an application:
If S is any set of n points in the unit disc (including the boundary) then there exist at least n^2/6 - n/2 pairs of points that are ≤ sqrt(2) apart.
Here is my write up of a nice proof of both Turan's theorem and the application. The proof is from the Alon-Spencer book on Prob method. I do not know whose proof it is. (ADDED LATER: The proof is due to Ravi Boppana- see the comments.) I don't know where I read the application; however, it was by Paul Turan. If you know of other applications please leave comments about them.
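
For readers who don't want to open the write-up, here is the random-permutation argument from the Alon-Spencer book in a few lines (my own LaTeX transcription of that standard proof):

    % Pick a uniformly random ordering of the vertices and let I be the set of
    % vertices that come before all of their neighbors; I is an independent set.
    % A vertex v lands in I with probability 1/(d(v)+1), so by linearity of
    % expectation and convexity (AM-HM),
    \[
      \mathbb{E}[|I|] \;=\; \sum_{v} \frac{1}{d(v)+1}
      \;\ge\; \frac{n^2}{\sum_{v} (d(v)+1)}
      \;=\; \frac{n^2}{2e+n}.
    \]
    % Some ordering does at least as well as the expectation, which gives the theorem.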

Here is another application:
Assume you want to find the MAX of n elements and you get to ask k rounds of questions. It is known (without Turan's theorem) that you can do this with O(n^{1 + 1/(2^k - 1)}) questions-per-round. Using Turan's theorem you can show this is optimal. (The exponent is 1 + 1/(2^k - 1), in case it's hard to read.)
A write up of the case of k=2 is in the manuscript pointed to above. Once you read that you should be able to generalize it to k rounds easily.

The Erdos-Turan conjecture is the following:

  • Let A be a set of natural numbers. If Σ_{x ∈ A} 1/x diverges then A has arbitrarily long arithmetic sequences.
  • Some have suggested that this replace Poincare's conjecture on the list of Millennium prizes. See here for more on the conjecture.

    Tuesday, August 17, 2010

    My last post on the alleged P NE NP paper

    (This is likely my last post on the alleged P NE NP paper unless more real news on it occurs. The only real news I can see happening at this point is a retraction.)

    I have not commented much on the alleged P ≠ NP proof since I was waiting for better informed or more opinionated people to comment first. Now that the dust has somewhat settled, here are some views of points of view and some pointers to points of interest.
    1. The best places to read about the technical details of the problems with the proof are Lipton's Blog Entries: here, here, here, here, here, here and the Wikipedia entry here.
    2. Some people are saying that Scott's post was nasty. Here is an excerpt from Scott's post:
      What's obvious from even a superficial reading is that Deolalikar's manuscript is well-written, and that it discusses the history, background, and difficulties of the P vs. NP question in a competent way. More importantly (and in contrast to 98% of claimed P ≠ NP proofs), even if this attempt fails, it seems to introduce some thought-provoking new ideas, particularly a connection between statistical physics and the first-order logic characterization of NP. I'll leave it to the commenters to debate whether Deolalikar's paper exhibits one or more of the Ten Signs A Claimed Mathematical Breakthrough Is Wrong.
      Scott, that is downright nasty! Calling a paper well-written!! Accusing it of having thought-provoking new ideas!! Scott, you are just one Nasty dude!!

      More seriously, Scott's offer of $200,000 if the proof is correct (more formally, if he wins the Millennium prize) was considered nasty by some. Being skeptical and quantifying your skepticism is not nasty.
    3. Other posts worth looking at:
      1. Written pre-Deo, Lance now looks like Nostradamus: So you think you've settled P vs NP. (NOTE- personal triumph- I got the spelling of Nostradamus right on my first try!)
      2. A pre-Deo post of Scott's on 10 signs a claimed mathematical breakthrough is wrong. Deolalikar passed most of these.
      3. A post-Deo post of Scott's that was likely inspired by Deo: 8 signs your proof of P ≠ NP is wrong. One of Scott's signs cuts both ways- he says that the use of Descriptive Complexity Theory is one of the signs. He claims (correctly) ... subtle differences in encoding can lead to HUGE differences in computational complexity. Indeed, this was one of the problems with the Deo-Proof. However, I think Descriptive Complexity Theory is one of the few techniques that has not been shown incapable of working. So I take its use as a good sign.
    4. Lance was on vacation when all of this happened so he posted on it when he got back. He was skeptical from the beginning and he says why. (See here.) People respond by saying that the paper inspired a lot of interesting discussion, and hence... I'm not sure- Lance shouldn't be skeptical? Lance shouldn't say he is skeptical? Lance should phrase his skepticism more politely? I'm not sure what the objection to his post really is.
    5. Is the proof correct? It looks like there are enough serious flaws that the answer for now is no. By now this is old news.
    6. I was originally skeptical (see here): there has been no progress on P vs NP at all, so it's surprising that there is this much progress so fast. I do not know if this is valid reasoning. Has any serious hard math problem been solved all of a sudden after no progress on it? Has there truly been no progress? We might not know until we solve it in 3000 AD and look back and see which results from 1970-2010 ended up being relevant.
    7. Deolalikar should, by Thursday Aug 26, 2010 (the first day of Barriers II) either (1) retract the claim, or (2) post a version that fixes ALL of the flaws found and explains the fixes. If he does neither of these two things then he will cease to be a serious researcher.
    8. It would be impossible to have a version that fixes ALL of the flaws by Thursday Aug 26, 2010. Hence he needs to retract by then. (Some would say sooner. Much sooner. Like... now.)
    9. Was all of this good for the community? It got people talking and the people who read the proof learned some complexity. If this gets more people interested in the problem, that's got to be a good thing. If some interesting ideas come out of this, even indirectly (e.g., Terry Tao is inspired to read up on finite model theory and comes up with a result of interest) then it will of course be a good thing. However, if more of these proofs come out and the community spends too much time on them, that will be bad.
    10. Why did people take this paper so seriously?
      1. It was in LaTeX!
      2. It was long!
      3. It was by a respectable researcher. (More on that later.)
      4. It used familiar notation.
      5. It seemed to have some good ideas in it. It may still. (More on that later.)
      6. It was on Slashdot and on Lipton's blog. But this raises the question: why was it on Slashdot and why did Lipton take it seriously? Dick- I am not asking this rhetorically- if you read this please leave a comment about why you took it seriously. I am not asking this sarcastically or in any other nasty way. (I know YOU won't think so but other readers might.)
      7. Others took it seriously. Chicken and egg problem here.
      8. It had very few of the usual trademarks of papers written by cranks.
      9. It was a proof in the correct direction (a proof that P=NP would be taken less seriously, though the result, if true, would be more awesome!!!)
      10. In the age of the web, news travels fast (I know of 3 other serious people claiming to have shown P ≠ NP; however, that was pre-web so they were debunked before it got very far. I'm not quite sure which ones went public so I decline to mention their names.)
    11. How respectable a researcher is Deolalikar? I do not know any other work that he has done. He is not someone you've heard about as having done excellent work. The fact that some in the community took his work seriously is an excellent sign that our community is not elitist. They took it seriously based on its merits. (The fact that some did not take it seriously is irrelevant.) ADDED LATER: The above was badly written and hence misunderstood. My point is that Deolalikar is not well known to the complexity community, and hence the fact that many in the community took his work seriously speaks well of the community (or at least of those people) as not being elitist. Some of us WILL look at ideas from outside of the community.
    12. Are there interesting ideas in it? The answer from the latest comments by knowledgeable people on Lipton's blogs seems to be no. Oh well.
    13. In the midst of all of this came the DUMBEST comment on a post of mine EVER!! In a recent post I noted that a recent (2007) article on number theory still referred to factoring as being easy. I meant to imply that this was bad- they should at least acknowledge the issues involved even if they don't care. One of the comments was
      Since when do mathematicians care about the notion of polynomial time?
      Note that this is the same week when Gowers and Tao, two Fields Medal winners, are carefully looking over an alleged proof that P ≠ NP. Other mathematicians were looking at it also. The notion that mathematicians do not care about polynomial time is just so 1990's.

    Monday, August 16, 2010

    But This One Is Different...

    This summer I took a two-part vacation: touring Ireland July 26-August 5 with my wife to celebrate twenty years of marriage, and a short trip to Santa Fe, August 9-12, to catch opera with my elder daughter. Let's talk about what happened in between.

    On Friday, August 6, Vinay Deolalikar sent me and 21 of my closest friends an email entitled "Proof announcement: P is not equal to NP". I took a quick look and while it did look better than most P/NP proof attempts (in the sense that Deolalikar doesn't make up his own notation), I spotted the following line in his write-up:
    Distributions computed by LFP must satisfy this model.
    Many P versus NP attempts work by claiming that a polynomial-time algorithm must act a certain way forgetting that an algorithm can completely ignore any semantic meaning in the problem. Since LFP is just a fancy way of saying "polynomial-time algorithm", it looked like Deolalikar fell into the same trap. So I filed the paper into my Why Me? folder as I have with many many others and figured that was the end of that.

    As you readers all know, on Sunday the 8th the paper was slashdotted and Lipton, impressed that Deolalikar worked on infinite Turing machines (whatever they are), posted on the proof saying he was "certainly hopeful". People then flooded me with emails, tweets and instant messages asking me about the paper.

    I read over Lipton's post and took another look at the paper and still didn't see the semantic approach outlined in the paper as a viable attack on the P versus NP problem. At this point I knew people I trust would look over the paper carefully so I wouldn't have to. I couldn't keep completely quiet so I tweeted
    Much ado about an unverified proof. The word "must" is troubling. I'll let others check it carefully.
    In retrospect I should have been more negative. Kudos to Scott for taking a strong and risky stance.

    The emails kept coming but I remembered my vacation message was still active and I didn't have to answer them. I went to Santa Fe and didn't check email until I returned.

    There are two camps in our community on Deolalikar's paper. Those of us who saw Deolalikar's paper as just another in a long series of bad attempts at P v NP and wondered what all the fuss was about. And those who thought Deolalikar hit upon a new proof paradigm and, despite the numerous problems, big and small, with the paper, still hold hope something important will come out of it.

    Deolalikar at this point should retract his P ≠ NP claim until he can address all these issues. In an email Deolalikar sent out to the gang of 22 on Friday the 13th, he restates his claim and overview of the proof and says he "fixed the issues in the finite model theory portion of the proof". He promises a revised version on his homepage in a couple of days.

    Deolalikar is following the script, currently at stage 5 of the 12 stage process. Stage 6 will happen sooner than he expects.

    Friday, August 13, 2010

    P vs NP vs IEEE (Guest post by Paul Beame)

    (Update on P vs NP: The proof uses Finite Model Theory, which is sometimes called That stuff that Neil Immerman does. Neil Immerman has found a flaw in it. See Lipton's Blog for more information. My prediction is still that something interesting will come out of this but NOT P ≠ NP. And for those of you who read my blog but do not follow Lance on Twitter (the empty set?) note that Lance tweeted Much ado about an unverified proof. Of course, if we all felt that way then who would verify it? A PCP?)

    Guest Post from Paul Beame!

    While the complexity world is debating how much to invest in trying to extract useful information from the recent attempt at resolving P vs NP, there is another item that may be useful to complexity blog readers who also happen to be members of the IEEE Computer Society which sponsors the CCC, LICS, and FOCS conferences. Voting just opened in the election for IEEE CS officers including the President.

    I'd like to alert voters to a write-in campaign for Joe Bumblis for president of the IEEE Computer Society.

    As anyone who has run a CCC, FOCS, or LICS knows, the IEEE CS has had a lot of very bureaucratic rules for running conferences. Conferences are run by the IEEE CS Technical Committees (like the TC on Mathematical Foundations of Computing which does CCC, LICS, and FOCS and which I currently chair). These TCs are groups of volunteers who have little power in the current organization of the IEEE CS. These TC volunteers have long chafed against the bureaucracy, often with little response from the main organization. The main officers of the IEEE CS have been from the Publications and Standards part of the organization and these groups dominate the nominating committees. The TC side of the organization has been shut out. IEEE CS ran into financial difficulty for a variety of reasons unrelated to conferences and the result has been more bureaucracy based on the Standards mentality that the way to fix things is to set up more rules, which pit CS staff in opposition to TC and conference volunteers.

    This year we finally succeeded in getting the IEEE CS to allow an option that should substantially reduce bureaucracy for conferences like CCC, LICS, and FOCS but the organization still resists anything like the flexibility for TCs that SIGs have within ACM and still is tied to bureaucratic rules. Many of the most active and thoughtful TC chairs have agreed that we need to push a write-in campaign to finally get someone from the TC side in a position of authority and Joe has agreed to serve. Joe has been vice-chair of the board of TC chairs and would bring a very different perspective that has a chance of shaking things up. By comparison one of the other candidates for president recently refused to authorize the LICS 2010 budget as part of the Federated Logic Conference (long after registrations had opened) because it was charging $10 too little for non-IEEE-members (or $8 too much for members, depending on how you count). It took my intervention to expose the ridiculousness of this position before he was overruled.

    We need your vote now!

    (NOTE FROM GASARCH- as someone who has run a CCC I can VERIFY what Paul is saying without pointing to Lipton's blog or using a PCP.)

    Wednesday, August 11, 2010

    Factoring in P ?

    (Update on alleged P NE NP proof: There are some issues with it. See these posts on Lipton's blog: here and here and also see a Wikipedia site that (I think) Terry Tao set up here. )

    I recently read the following: (Backdrop- the author had already defined p(n) to be the number of partitions of n.) NOTE- I write ki where the actual quote uses a subscripted k_i. Looks better in html, though I am sure a better htmler than I could handle this.
    For a positive integer n let d(n) be the sum of the divisors of n. For example
    1. d(4) = 1 + 2 + 4 = 7
    2. d(1000)=1+2+4+5+8+10+20+25+40+50+100+125+200+250+500+1000 = 2340
    3. d(1001)=1+7+11+13+77+91+143+1001=1344
    Unlike the numbers p(n), the numbers d(n) are easy to compute. There is a simple explicit formula for them. Namely, if n = 2^k2 · 3^k3 · ... · p^kp then
    d(n) = (2^(k2+1) - 1) · ((3^(k3+1) - 1)/2) · ... · ((p^(kp+1) - 1)/(p-1))
    They are assuming factoring is easy. Or perhaps they are assuming you are given the number already factored. There are no comments on what easy to compute might really mean, though they seem to think that having a simple explicit formula implies easy to compute.
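
    To spell out the point, here is a tiny sketch (Python, names mine) of how easy d(n) is once you are handed the factorization; the formula above does all the work, and the difficulty lies entirely in producing the factorization.

        def sum_of_divisors(factorization):
            # factorization: dict {prime: exponent}, e.g. {2: 3, 5: 3} for n = 1000.
            d = 1
            for p, k in factorization.items():
                d *= (p**(k + 1) - 1) // (p - 1)
            return d

        # sum_of_divisors({2: 3, 5: 3}) == 2340        (n = 1000)
        # sum_of_divisors({7: 1, 11: 1, 13: 1}) == 1344   (n = 1001)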

    To be fair, this book was written a while back before people in math really took these notions seriously. The book is Mathematical Omnibus: Thirty Lectures on Classic Mathematics by Fuchs and Tabachnikov. Copyright 2007.

    Disclosure: I got my copy for free in my capacity as SIGACT NEWS book review editor.

    Monday, August 09, 2010

    That P ne NP proof- what's up with that?

    Let me be the last on the block to tell you that an alleged proof of P ≠ NP is out there. NOT posting on it would be absurd; however, I cannot do any better than what Richard Lipton already posted so I point you to his post.

    So what are my uninformed views? If I were a betting man I would bet that it won't pan out. I would bet that he proved something very interesting, but not P ≠ NP. Why do I think this? This is NOT based on the author, who seems to be a person to be taken seriously. It's just that the problem is so hard and we've made no progress on it since... . Hmmm, when was the last time we made progress on it? Parity not in constant depth? Other bounds on weak models? Oracles? Natural Proofs? Something from Mulmuley's program? Fagin's theorem connecting the problem to finite model theory (which is used in the alleged proof)? I would have thought we would make slow progress at first. Not sure what type- maybe SAT ∉ DTIME(n^k) for small values of k. Maybe something else.

    Scott Aaronson apparently IS a betting man. See his post on the problem.

    I was going to go to the Barriers in Computational Complexity II workshop. If the proof is correct I hope Amtrak gives refunds.

    Actually- looking over the schedule, much of it will still be relevant. If the proof is CORRECT how will that change what we teach? What we work on? Lance's book-for-the-layperson on complexity theory? (I recently proofread a chapter on what the world would be like if P=NP. He may have to remove it. Too bad, it was awesome!)

    Wednesday, August 04, 2010

    The Solution to the Mark-Betty Game

    RECALL from my last post the following game:

    Let f(n) be a non-decreasing function from naturals to naturals. Consider the following game:
    Let n be a large natural number. Mark and Betty alternate (Mark goes first) putting M's and B's on an n by n checkerboard. No square can have two markers on it. They play until all squares are covered; hence the game will end after n^2 moves. If at the end there is some row or column A where the number of M's in A is at least n/2 + f(n) then Mark wins! If not then Betty wins! (NOTE- the M's in A do not have to be adjacent.)
    I asked for your intuitions on when Mark or Betty has a winning strategy. Here are the answers and a pointer to my writeup of it. NONE of this is my work- it is all the work of J. Beck (the reference is in the writeup). Here are the results:
    1. If f(n) ≤ O(sqrt(n)) then Mark has a winning strategy.
    2. If f(n) ≥ Ω(sqrt(n log n)) then Betty has a winning strategy.
    3. My writeup is here. If I fill in more details it may change.
    (NOTE- html's sqrt sign is pathetic. Here is what square root of n looks like: √ n. The sqrt sign has no top! That's why I use `sqrt')
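
    As a quick illustration of the sqrt(n) vs sqrt(n log n) scale, here is a small Monte Carlo sketch (Python, entirely mine, and not a simulation of either player's actual strategy): when both players just place marks at random, Mark's best row/column surplus typically lands in this range.

        import random

        def random_play_surplus(n, trials=20):
            # Both players place marks uniformly at random (Mark moves first).
            # Returns the average over trials of max_A (#M's in A) - n/2,
            # where A ranges over all rows and columns.
            total = 0.0
            for _ in range(trials):
                cells = [(i, j) for i in range(n) for j in range(n)]
                random.shuffle(cells)
                rows, cols = [0] * n, [0] * n
                for t, (i, j) in enumerate(cells):
                    if t % 2 == 0:            # Mark's moves
                        rows[i] += 1
                        cols[j] += 1
                total += max(max(rows), max(cols)) - n / 2
            return total / trials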

    My writeup of the first result is complete. I included more detail than you usually get in a math paper. I do not know if this is good or bad; however, I really wanted to check it all for myself. My writeup of the second result is much sketchier; however, it is essentially Beck's proof.

    Beck has a book on games of this nature entitled Combinatorial Games: Tic Tac Toe Theory. Will children buy it thinking it is about tic-tac-toe? Maybe Bill Gates' kids: the book costs $146.00 new and $107.00 used.

    Monday, August 02, 2010

    I want your intuitions on this, so the less you know the more I want you to post

    Let f(n) be a monotone increasing function from N to N. (CLARIFICATION ADDED LATER: N is the naturals; f is non-decreasing.) Consider the following game:
    Let n be a natural number which we think of as being large. Mark and Betty alternate (Mark goes first) putting M's and B's on an n by n checkerboard. No square can have two markers on it; hence the game will end after n^2 moves. If at the end there is some row or column A where the number of M's in A is at least n/2 + f(n) then Mark wins! If not then Betty wins! (NOTE- the M's in A do not have to be adjacent.)
    For which f(n) does Mark have a winning strategy? For which f(n) does Betty have a winning strategy? Much is known on this problem. I have written up some notes that I will post on Wednesday. (None of this is my work.)

    What are your intuitions and why? If you already know the answers then please do not post. SO, paradoxically, the less you know the more I want you to post. I want to know what the intuitions of someone who does not know the problem are.