When I first heard there would be a BARRIERS II workshop I assumed it would be full of oracle results, Natural Proofs, Algebraization, maybe some speculation on independence results (I do not think we have any right now). And indeed, this was at least part of the agenda for Barriers I. (See here for a blog about Barriers I.) Barriers II was (1) Approximation algorithms and lower bounds, (2) Crypto, (3) Quantum Computing, (4) Communication Complexity, (5) Geometric Algorithms (NOTE: NOT Ketan's Geometric Complexity Theory program). However, I can't say it was false advertising since the program was on the web early enough that one could see what one was getting into. But it was misnamed.
 The organizers told me that it was called Barriers because the speakers were urged to talk about where the field is and where it is stuck. I think talks like that have a name already: Open Problems, or Directions or whatnot.
All of that aside, how was it? It was WONDERFUL! The talks were uniformly good. They were mostly survey talks so someone outside the field could follow them (that was the intent). This also led to the following oddity: the day on (say) Quantum had very few quantum people at it (except the speakers) since your standard quantum person has probably heard the talks before or at least knows the material.
Approx Alg and lower bounds: I was inspired to actually learn what Semidefinite Programming and Iterative Methods are. I finally understand the UGC and its different formulations. Dana Moshkovitz gave a nice rump session about a promising route to PROVE the UGC. This was not one of those "we doubt it will work but the approach is still interesting" talks (that could be a rallying cry for our field); it seemed like a real plan.
 Crypto: Lattice stuff and also stuff about: If your key leaks slowly can you still do crypto? (yes).
Quantum: Scott introduced the speakers and was the first one. He had a slide warning us to NOT PANIC at the mention of Quantum. All of the talks were outreach: why complexity theorists should learn quantum methods. I blogged about this nontechnically here. One key point I will reiterate: even if you don't work in quantum computing you will need to learn their methods. (Analog: even if you don't work in probability you need to know the probabilistic method.) Over dinner we noted the following: there are many quantum books for the layperson but there are few (none?) complexity books for the layperson (hurry up Lance!). Why is this? Quantum seems less abstract than Complexity, and some laypeople may think they have a notion of it. They may be wrong.

Communication Complexity: Paul Beame gave an hour-long talk where he went over the entire Kushilevitz-Nisan book. (NOTE: This book is on Lance's list of favorite computational complexity books.) How did he do it? He skipped some stuff. (I recommended talking fast. Fortunately he didn't take my advice.) The other talks were applications of Communication Complexity to lower bounds on data structures (not quite: some of the lower bounds didn't use Comm. Comp.), data streams, direct sum problems, and a talk on quantum comm. comp. This raises the question: is Communication Complexity only interesting when it applies to something else, or is it interesting for its own sake?
I proposed the following open problem in a rump session, though it seems like it's already out there (that is, it's not really MY open problem).
Alice has x, Bob has y, both n bits long. We are promised that PARITY(x)=0 and PARITY(y)=1. Alice and Bob want to find i such that x_{i} ≠ y_{i} within d rounds (d a constant) and O(log n) bits per round. Show they cannot do this. ANSWER: We already know this statement is true since it is equivalent to PARITY ∉ AC_{0} (using Karchmer-Wigderson games, another game that is not much fun). However, I would like a communication complexity proof of this, which would give an alternative proof of PARITY ∉ AC_{0}. In particular it would give a top-down proof instead of a bottom-up proof.
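For contrast with the d-round question, here is a quick sketch (my own illustration, not from any particular paper) of the easy direction: with about log n rounds of O(1) bits each, the players can binary-search for a differing index, using the parity promise to keep an interval on which their strings must differ.

```python
def find_differing_bit(x, y):
    """Binary-search protocol for the Karchmer-Wigderson game on PARITY.
    Invariant: parity(x[lo:hi]) != parity(y[lo:hi]), so the strings differ
    somewhere in [lo, hi).  Each round one party announces the parity of
    the left half, which takes about log2(n) rounds of O(1) bits."""
    lo, hi = 0, len(x)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Alice sends parity of x[lo:mid]; Bob compares with his own half.
        if sum(x[lo:mid]) % 2 != sum(y[lo:mid]) % 2:
            hi = mid   # left halves differ in parity, so they differ somewhere there
        else:
            lo = mid   # left halves agree, so the right halves must differ in parity
    return lo

x = [1, 0, 1, 1, 0, 0, 1, 0]   # PARITY(x) = 0
y = [1, 0, 1, 1, 0, 1, 1, 0]   # PARITY(y) = 1
i = find_differing_bit(x, y)   # finds index 5, where x and y disagree
```

The open problem above asks whether this log n round count can be beaten down to a constant, with only O(log n) bits per round to compensate; the conjecture is no.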
Geometric Algorithms: Paul Beame said that this session would be a nice counterpoint to the communication complexity talks since the Geometric Algorithms session would prove upper bounds on some of the problems that the Comm. Comp. session proved lower bounds on. (Mike Sipser once told me, "It is good to do an algorithm as a sanity check on your lower bounds.") Alas, I had to take off that morning so I don't know if this really happened; however, I assume that it did.
Not much talk about the alleged P ≠ NP proof; however, I will discuss what was discussed in my next post.
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Tuesday, August 31, 2010
Report from Barriers II Part 1
Monday, August 30, 2010
New Institute for Theory of Computing
Computation (and its abstract form, the algorithm) has not only revolutionized science, technology, and society, but is also among the most important scientific concepts discovered and developed in the 20th century. This scientific discipline has enabled numerous technological advances and has forged many connections to mathematics and other sciences, providing fruitful insights and new problems. It has impacted not only computer science and technology, but also parts of mathematics, physics, biology, economics and sociology. Meanwhile, its core scientific agenda is extremely ambitious and challenging. In short, this theoretical field is one of the most exciting and important today, attracting excellent young talent to its ranks at a growing rate. Young people with education and training in this field are well positioned to make central contributions to computer science and science in general.
An institute focused on the theory of computation could bring together a critical mass of researchers from around the world to accelerate fundamental research on computation and to further develop its interactions with other areas of science ranging from mathematics and statistics to biology, physics and engineering. The Simons Foundation invites applications for grants to establish such an Institute. Letters of intent are due by October 27 and full proposals by next June.
It's the funding that catches the eye: $6 million/year for 10 years with possible renewed funding or endowment after that. To understand the scale, that's roughly the budget of TTI-Chicago. This new institute will play a major role in our field.
The institute will be modeled on the Kavli Institute for Theoretical Physics, MSRI and the IMA (also similar to DIMACS), which are located at or near major university campuses, host programs on a given topic for several months to a year with long-term visitors, and have several workshops related to those program topics.
This is opposed to the Oberwolfach/Dagstuhl/BIRS model of an isolated location with on-site room and board, weekly workshops but little to no long-term programs or visitors. I'm glad the Simons Foundation is going with the former model. While I've enjoyed the many Dagstuhl workshops I've attended, centers like DIMACS tend to have a greater long-term impact on the field.
Where will this institute be located? We'll find out next fall. Should be exciting.
Friday, August 27, 2010
Theory Journals
I have a question about TCS journals. As I am trying to follow your advice on being more diligent about journalizing my papers, I realized that I am surprisingly ignorant about where to send them. A couple of more senior colleagues I asked didn't really have the answers.
What are the journals that accept theory papers? What's their specialization (if any)? What's their relative quality/reputation? What's the expected turnaround time for each of them? Anything else I should be asking?
Sounds like a list that should be on a site we can edit in the future like Wikipedia, and in fact they have such a list. It could use some additions (links, publishers and specializations) if anyone is so inclined to help update it. Most of these journals cover general theory unless their title indicates otherwise.
Thursday, August 26, 2010
Cryptography if P = NP
Ask many computer scientists what happens if P = NP and you'll get the response that it will kill cryptography. Let's ignore solving all optimization problems, solving AI and learning everything, likely cures to many diseases and many more amazing advances in society, what a disaster it will be that we can't send secret messages to each other.
I don't at all believe P = NP, but let's have this hypothetical discussion of what happens to cryptography in that world. We do lose public-key cryptography, the ability for two people who have never met to still send each other encrypted messages electronically. How will I send my credit card info to Amazon if P = NP?
Amazon will send me a USB key containing a gigabyte's worth of a one-time pad, enough to encrypt all my transactions for my entire life (even though I will live much, much longer thanks to P=NP). This might work for Amazon, but what about tiny mom-and-pop web stores? We just need a trusted site for a one-time pad registry so my USB key will work for any Internet transaction. Even for public-key cryptography today we need trusted parties to verify sites, so we haven't lost much here.
For large financial institutions: they can ship huge amounts of one-time pad data through secure couriers (or just ship data this way). Large amounts of compressed data can have a very small footprint these days.
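A one-time pad is nothing more than XOR against random bytes that are used once and thrown away; information-theoretic security survives P = NP. A minimal sketch (illustrative only; the pad here stands in for the USB-key material):

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR data with pad bytes.  Encryption and decryption are the same
    operation since XOR is its own inverse.  Security requires the pad
    be truly random, at least as long as the data, and never reused."""
    assert len(pad) >= len(data), "pad exhausted -- never reuse pad bytes"
    return bytes(d ^ p for d, p in zip(data, pad))

pad = secrets.token_bytes(64)                 # shared in advance (the USB key)
ciphertext = otp_xor(b"credit card 1234", pad)
plaintext = otp_xor(ciphertext, pad)          # same pad recovers the message
```

The catch, of course, is exactly the logistics discussed above: the pad must be shared in advance over a secure channel, which is what public-key cryptography lets us avoid.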
What about cryptographic protocols? Zero-knowledge is uninteresting if P = NP. Many secure multiparty computations can be done with private channels, which just need private keys. There are many others, electronic money, secure voting and digital signatures come to mind, that seem difficult to do if P = NP. On the other hand they are also hard to do in practice today even under the usual assumptions.
Certainly P=NP would make cryptography considerably more painful than it is now, and many things we thought were securely encrypted won't be anymore. But we'll figure out how to meet our crypto needs in any environment. The loss of public-key cryptography shouldn't be our first gut reaction to what happens if P = NP.
Wednesday, August 25, 2010
***SORELLE*** PHD!!/Workshop on Boolean Threshold Functions/NSF program
ANNOUNCEMENT 1: Congrats to fellow blogger ***SORELLE*** who got her PhD recently. I was on her committee. She did a masterful job of presenting her thesis. It looked like theory that can be applied to stuff. At least Google, who hired her, might think so. Or they might think (correctly) that she is smart and knows things so she can help them. See her post about her PhD for more information and a pointer to her actual thesis.
ANNOUNCEMENT 2: Rocco Servedio and Ryan O'Donnell have requested that I post about an upcoming workshop at the Center for Computational Intractability in Princeton.
The topic is Analysis and Geometry of Boolean Threshold Functions. It will take place the Thursday and Friday morning before FOCS, Oct. 21-22. For detailed information, please see here. There is also a poster here.
COMMENT FROM GASARCH: It's right before FOCS but it's NOT close to FOCS. It is in Princeton, NJ; FOCS is in Las Vegas. Not sure this makes sense but I suppose they will find out.
ANNOUNCEMENT 3: Tracy Kimbrel at NSF has requested this announcement be posted.
CISE has announced a new funding opportunity administered jointly with the Directorate for Social, Behavioral and Economic Sciences (SBE). The new program is called ICES (Interface between Computer Science and Economics and Social Sciences). You'll find all the details about ICES here. Be sure to read the whole solicitation and click through to the nonexhaustive list of example topics as well. ICES seeks to fund interdisciplinary work at one particular boundary between computer science and economics & social sciences. As always with anything new at NSF, there will be lots of questions. Email CISE/CCF Program Director Tracy Kimbrel (tkimbrel@nsf.gov), SBE/SES Program Director Nancy Lutz (nlutz@nsf.gov), CISE/IIS Program Director Sven Koenig (skoenig@nsf.gov), or CISE/CNS Program Director Darleen Fisher (dlfisher@nsf.gov) and we will do our best to answer the questions. If you're wondering "Is my project suitable for ICES?", be sure to include a one- or two-page description with your email because that will help us with the answer. The deadline for submitting proposals to ICES is October 5, 2010.
Tuesday, August 24, 2010
NEW math on Futurama
 This website claims that Ken Keeler, one of the writers who has a PhD in math, devised a NEW theorem for use in the show. The theorem and proof are here. I do not know if it is new but it is correct and interesting.
Bender the robot had his mind switched with Amy's, so he was in a human body. In order to prove that he was really a robot he had to pass the reverse Turing Test.
Since PURE MATH led to a solution of a PRACTICAL PROBLEM, the professor exclaimed, "And they say pure math has no real applications!"
Both The Simpsons and Futurama have websites devoted to math refs in the show: Simpsons Math, Futurama Math.
It is not surprising that The Simpsons and Futurama have math refs, since Jeff Westbrook, David X. Cohen, and Ken Keeler, who write for these shows, are all math folk.
See these two prior posts for more on comedy and math.
Monday, August 23, 2010
Is Scheduling a Solved Problem? (Guest Post)
"At first glance, scheduling does not seem like a topic that requires much attention from computer scientists".
This was how I wanted to start my review of a book on scheduling, but Bill Gasarch called me out on it. "Really?" he said. "I don't know any computer scientists who think scheduling isn't important." It's true but computer scientists aren't the ones taking first glances at scheduling problems.
They're curriculum designers trying to determine which instructors are needed to teach all of the classes for the fall semester. Shop managers wondering how the widgets could be sent through the assembly line more efficiently. Kids on the playground choosing up sides for a soccer game (a problem identical to partitioning a set of jobs with known running times between two processors). These are the people with scheduling problems. They'll think about their problem for a few minutes, and come up with a good solution in each case.
The curriculum designer (possibly Michael Mitzenmacher) might decide that the most important goal is to keep all instructors at around the same number of teaching hours. In that case, each next available assignment would be given to the least busy instructor. In doing so, he'll choose the List scheduling algorithm, first proposed by Graham in 1966 and guaranteed to produce a schedule at most twice as long as an optimal one.
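Graham's List scheduling fits in a few lines; a sketch (the course hours here are made up for illustration):

```python
import heapq

def list_schedule(jobs, m):
    """Graham's List scheduling (1966): assign each job, in the given
    order, to the currently least-loaded of m machines.  The resulting
    makespan is at most (2 - 1/m) times the optimal makespan."""
    loads = [(0, i) for i in range(m)]            # (current load, machine id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(m)]
    for job in jobs:
        load, i = heapq.heappop(loads)            # least busy machine
        assignment[i].append(job)
        heapq.heappush(loads, (load + job, i))
    return assignment

# teaching hours per course, split between two instructors
schedule = list_schedule([3, 3, 2, 2, 2], 2)      # -> [[3, 2, 2], [3, 2]]
```

Here the two instructors end up with 7 and 5 hours; an optimal split would be 6 and 6, comfortably within the factor-of-two guarantee.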
The kids will choose Greedy. They'll choose a couple of captains and alternate picking the "best remaining" player, in the same way that a scheduler would choose the "largest remaining" job. Greedy is a heuristic that runs in polynomial time. It's not guaranteed to find the best solution (set partition is "The Easiest Hard Problem") but the kids are interested in having competitive teams, not in making sure that all possible sets of children can be perfectly divided in polynomial time.
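The scheduler's version of the captains' heuristic, in code (a sketch; the "skill ratings" are invented for illustration):

```python
def pick_teams(skills):
    """Largest-first greedy for two-way partition: sort players by skill
    and repeatedly give the best remaining player to the weaker team --
    the same move as assigning the largest remaining job to the
    less-loaded of two processors."""
    teams = ([], [])
    for s in sorted(skills, reverse=True):
        weaker = min(teams, key=sum)   # the team that is currently behind
        weaker.append(s)
    return teams

team_a, team_b = pick_teams([8, 7, 6, 5, 4])
# totals come out 17 vs 13, while a perfect 15/15 split exists
# ([8, 7] vs [6, 5, 4]) -- greedy is competitive but not optimal
```

That gap is exactly the point of the post: the kids are happy with 17 vs 13, even though finding the guaranteed-perfect split is NP-hard in general.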
The shop manager is likely to have a lot of different constraints to take into account, but she might notice that a station can stop working on one widget and start on another one, if the second is likely to be finished more quickly. She probably won't realize it, but the Shortest Remaining Processing Time algorithm is known to optimize the average time to completion of the widgets.
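When every widget is available at time zero, Shortest Remaining Processing Time reduces to simply running the shortest job first; a sketch with made-up processing times:

```python
def avg_completion_srpt(times):
    """Run jobs shortest-first and return the average completion time.
    With all jobs released at time 0 this order provably minimizes the
    average completion time (the shop manager's SRPT rule, no preemption
    actually needed in this special case)."""
    clock, total = 0, 0
    for t in sorted(times):   # shortest job first
        clock += t            # this widget finishes now
        total += clock
    return total / len(times)

avg = avg_completion_srpt([4, 1, 2])   # widgets finish at times 1, 3, 7
```

Running them in the given order [4, 1, 2] instead would give completions 4, 5, 7 and a worse average, which is the intuition behind preempting a slow widget for a quick one.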
In all three cases, the algorithms they choose are simple, easy to describe, and should work fairly well in their situations. The schedulers aren't computer scientists, just people with problems to solve. They'll take a first glance, and they'll solve them with a minimal amount of effort. Even if you showed them a way to solve their problems that was twice as efficient, but also much more difficult to understand, they'd probably reject it.
So what's the point of studying scheduling? The practical problems are solved.
That's the first glance. You've got to dig a little deeper to find the interesting problems in scheduling. For example, the complexity of simply describing scheduling problems is a subject that hasn't fully been explored yet. Problems are typically broken down three ways: the number of processors available; the constraints on when jobs can be run; and the criteria for determining whether one schedule is better than another. Even if the first and third items are fairly simple, setting up a job precedence graph, lag times, and perhaps a few rules involving a specific processor needing to run a specific job, will likely generate a description so complex that an engineer trying to solve the problem might not even recognize it.
Scheduling gurus Anne Benoit, Loris Marchal, and Yves Robert also ask this question. In response, they outline some areas of study involving distributedmemory parallel computing that could give rise to some interesting practical improvements.
That's where you go when you're past the first glance. And that's why computer scientists need to pay attention to scheduling.
Saturday, August 21, 2010
Comments
The level of nastiness has dramatically increased in the last couple of months, but we've maintained this policy believing everyone should be allowed to share their opinion, whether it be good, bad or ugly. But some seem to be purposely incendiary instead of giving a real point of view. Please keep comments civil and on topic so this blog can continue its long role where we can have fruitful discussions of the important issues and ideas in computational complexity and theoretical computer science.
Friday, August 20, 2010
NSF Updates
 Expeditions is moving to an 18-month cycle but this year's pre-proposal deadline is September 10. Last year's awardees have yet to be announced. (Update: Just announced)
 The CCF Core (traditional theory grants) has three deadlines: September 15 (medium), November 28 (large) and December 17 (small).
 A new program Interface between Computer Science and Economics & Social Science (ICES), proposals due October 5. This program came out of a workshop held at Cornell last fall and its report.
Thursday, August 19, 2010
Spielman Receives the Nevanlinna Prize
Other IMU prizes awarded at the ICM opening ceremonies in Hyderabad, India: The Fields Medals go to Elon Lindenstrauss (Ergodic Theory), Ngô Bảo Châu (Algebraic Geometry), Stanislav Smirnov (Statistical Physics) and Cédric Villani (Mathematical Physics). The Gauss Prize goes to Yves Meyer (Number Theory) and the first Chern Medal goes to Louis Nirenberg (Differential Equations).
Wednesday, August 18, 2010
Today is Paul Turan's 100th Birthday!
Lance: Bill, Wed August 18, 2010 is Paul Turan's 100th birthday. We should post on it.
GASARCH: WOW, he's still alive? Will there be a conference in his honor? Will he go to it? Will he enjoy it? (ADDED LATER FOR CLARIFICATION: Lance's mention of Paul Turan above is actually a pointer to the Wikipedia entry on him.) Lance, I see you have one of those gizmos that lets you insert pointers into your speech.
Lance: Uh, Paul Turan is dead, but if he were alive it would be his 100th birthday. And of course I have one of those gizmos: I am a first blogger.
GASARCH: Even though he is dead can we still celebrate? Is there cake someplace? How about a free lunch?
Lance: No cake. And the economists I hang out with say there is no such thing as a free lunch. But we should post about it.
GASARCH: Okay. I know a nice proof of Turan's theorem and two applications, so I'll post about it.
When Lance first asked me to do this post my first reaction (after I found out there would be no cake or free lunch) was "Paul Turan = Turan's theorem in graph theory." I know a nice proof of it that is not on Wikipedia. I also know two applications of Turan's theorem that I don't think are well known. I'll write those up as part of my post. (I will do this later in the post.) But then I looked at his Wikipedia entry and I saw that he did so much more than graph theory. He also worked in Analysis, Number Theory, and Analytic Number Theory. Moreover, Wikipedia claims that Number Theory was his main interest. In addition he, along with Paul Erdos, made the Erdos-Turan Conjecture, which has inspired so much later work. (I will state it later.)
Is it good to have a theorem named after you? While we would mostly think YES, it may overshadow all of your other work in people's memory of you. On the other hand, if it were not for Turan's theorem I would not know who he was, and I doubt Lance would have asked me to post on Turan's 100th birthday.
Is it good to have a conjecture named after you? It is, so long as it's open. Once someone proves it your name will vanish from the literature. Just ask Baudet or Vazsonyi (Baudet's conjecture is now van der Waerden's theorem, and Vazsonyi's conjecture is now the Kruskal Tree Theorem).
Turan's theorem:
If G is a graph with n vertices and e edges then G has an independent set of size at least n^{2}/(2e+n).
Here is an application:
If S is any set of n points in the unit disc (including the boundary) then there exist n^{2}/6 - n/2 pairs of points such that each pair is of points that are ≤ sqrt(2) apart.
Here is my write-up of a nice proof of both Turan's theorem and the application. The proof is from the Alon-Spencer book on the Probabilistic Method. I do not know whose proof it is. (ADDED LATER: The proof is due to Ravi Boppana; see the comments.) I don't know where I read the application; however, it was by Paul Turan. If you know of other applications please leave comments about them.
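The bound above can also be watched in action: the greedy algorithm that repeatedly takes a minimum-degree vertex and deletes its neighbors always finds an independent set of size at least n^2/(2e+n). A sketch on a random graph (the graph itself is of course made up):

```python
import itertools
import random

def greedy_independent_set(n, edges):
    """Repeatedly pick a vertex of minimum degree in the remaining graph
    and delete it together with its neighbors.  The picked vertices form
    an independent set of size at least n^2/(2e+n), matching the bound
    in Turan's theorem as stated above."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining, picked = set(range(n)), []
    while remaining:
        v = min(remaining, key=lambda u: len(adj[u] & remaining))
        picked.append(v)
        remaining -= adj[v] | {v}   # v joins the set; its neighbors never can
    return picked

random.seed(0)
n = 30
edges = [e for e in itertools.combinations(range(n), 2) if random.random() < 0.2]
ind = greedy_independent_set(n, edges)
assert len(ind) >= n * n / (2 * len(edges) + n)
```

Each greedy step removes at most (minimum degree + 1) vertices, which is where the averaging argument behind the n^2/(2e+n) guarantee comes from.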
Here is another application:
Assume you want to find the MAX of n elements and you get to ask k rounds of questions. It is known (without Turan's theorem) that you can do this with O(n^{1+1/(2^{k}-1)}) questions per round. Using Turan's theorem you can show this is optimal.
A write-up of the case k=2 is in the manuscript pointed to above. Once you read that you should be able to generalize it to k rounds easily.
The Erdos-Turan conjecture is the following:
Let A be a set of natural numbers. If Σ_{x ∈ A} 1/x diverges then A has arbitrarily long arithmetic progressions.
Some have suggested that this replace Poincare's conjecture on the list of Millennium prizes. See here for more on the conjecture.
Tuesday, August 17, 2010
My last post on the alleged P ≠ NP paper
I have not commented much on the alleged P ≠ NP proof since I was waiting for better informed or more opinionated people to comment first. Now that the dust has somewhat settled, here are some views of points of view and some pointers to points of interest.
 The best places to read about the technical details of the problems with the proof are Lipton's Blog Entries: here, here, here, here, here, here and the Wikipedia entry here.

Some people are saying that Scott's post was nasty. Here is an excerpt from Scott's post:
What's obvious from even a superficial reading is that Deolalikar's manuscript is well-written, and that it discusses the history, background, and difficulties of the P vs. NP question in a competent way. More importantly (and in contrast to 98% of claimed P ≠ NP proofs), even if this attempt fails, it seems to introduce some thought-provoking new ideas, particularly a connection between statistical physics and the first-order logic characterization of NP. I'll leave it to the commenters to debate whether Deolalikar's paper exhibits one or more of the Ten Signs A Claimed Mathematical Breakthrough Is Wrong.
Scott, that is downright nasty! Calling a paper well-written!! Accusing it of having thought-provoking new ideas!! Scott, you are just one Nasty dude!!
More seriously, Scott's offer of $200,000 if the proof is correct (more formally, if he wins the Millennium prize) was considered nasty by some. Being skeptical and quantifying your skepticism is not nasty.
Other posts worth looking at:
 Written pre-Deo, Lance now looks like Nostradamus: So you think you've settled P vs NP. (NOTE: personal triumph: I got the spelling of Nostradamus right on my first try!)
 A pre-Deo post of Scott's on 10 signs a claimed mathematical breakthrough is wrong. Deolalikar passed most of these.
 A post-Deo post of Scott's that was likely inspired by Deo: 8 signs your proof of P ≠ NP is wrong. One of Scott's signs cuts both ways: he says that the use of Descriptive Complexity Theory is one of the signs. He claims (correctly) that "subtle differences in encoding can lead to HUGE differences in computational complexity." Indeed, this was one of the problems with the Deo-proof. However, I think Descriptive Complexity Theory is one of the few techniques that has not been proven unable to work. So I take its use as a good sign.
 Lance was on vacation when all of this happened so he posted on it when he got back. He was skeptical from the beginning and he says why. (See here.) People respond by saying that the paper inspired a lot of interesting discussion, and hence... I'm not sure Lance shouldn't be skeptical? Lance shouldn't say he is skeptical? Lance should phrase his skepticism more politely? I'm not sure what the objection to his post really is.
 Is the proof correct? It looks like there are enough serious flaws that the answer for now is no. By now this is old news.
 I was originally skeptical (see here) since there has been no progress on P vs NP at all, so it's surprising that there is this much progress so fast. I do not know if this is valid reasoning. Has any seriously hard math problem been solved all of a sudden after no progress on it? Has there truly been no progress? We might not know until we solve it in 3000 AD and look back and see which results from 1970-2010 ended up being relevant.
 Deolalikar should, by Thursday Aug 26, 2010 (the first day of Barriers II) either (1) retract the claim, or (2) post a version that fixes ALL of the flaws found and explains the fixes. If he does neither of these two things then he will cease to be a serious researcher.
 It would be impossible to have a version that fixes ALL of the flaws by Thursday Aug 26, 2010. Hence he needs to retract by then. (Some would say sooner. Much sooner. Like... now.)
 Was all of this good for the community? It got some talking and the people who read the proof learned some complexity. If this gets more people interested in the problem, that's got to be a good thing. If some interesting ideas come out of this, even indirectly (e.g., Terry Tao is inspired to read up on finite model theory and comes up with a result of interest) then it will of course be a good thing. However, if more of these proofs come out and the community spends too much time on them, that will be bad.

Why did people take this paper so seriously?
 It was in LaTeX!
 It was long!
 It was by a respectable researcher. (More on that later.)
 It used familiar notation.
 It seemed to have some good ideas in. It may still. (More on that later.)
 It was on Slashdot and on Lipton's blog. But this raises the question: why was it on Slashdot and why did Lipton take it seriously? Dick, I am not asking this rhetorically: if you read this, please leave a comment about why you took it seriously. I am not asking this sarcastically or in any other nasty way. (I know YOU won't think so but other readers might.)
 Others took it seriously. Chicken and egg problem here.
 It had very few of the usual trademarks of papers written by cranks.
 It was a proof in the correct direction (a proof that P=NP would be taken less seriously, though the result, if true, would be more awesome!!!)
 In the age of the web, news travels fast. (I know of 3 other serious people claiming to have shown P ≠ NP; however, that was pre-web so they were debunked before it got very far. I'm not quite sure which ones went public so I decline to mention their names.)
 How respectable a researcher is Deolalikar? I do not know any other work that he has done. He is not someone you've heard about as having done excellent work. The fact that some in the community took his work seriously is an excellent sign that our community is not elitist. They took it seriously based on its merits. (The fact that some did not take it seriously is irrelevant.) ADDED LATER: The above was badly written and hence misunderstood. My point is that Deolalikar is not well known to the complexity community, and hence the fact that many in the community took his work seriously speaks well of the community (or at least of those people) as not being elitist. Some of us WILL look at ideas from outside the community.
 Are there interesting ideas in it? The answer from the latest comments by knowledgeable people on Lipton's blogs seems to be no. Oh well.

In the midst of all of this came the DUMBEST comment on a post of mine EVER!! In a recent post I noted that a recent (2007) article on number theory still referred to factoring as being easy. I meant to imply that this was bad: they should at least acknowledge the issues involved even if they don't care. One of the comments was:
Since when do mathematicians care about the notion of polynomial time?
Note that this is the same week when Gowers and Tao, two Fields Medal winners, are carefully looking over an alleged proof that P ≠ NP. Other mathematicians were looking at it also. The notion that mathematicians do not care about polynomial time is just so 1990's.
Monday, August 16, 2010
But This One Is Different...
On Friday, August 6, Vinay Deolalikar sent me and 21 of my closest friends an email entitled "Proof announcement: P is not equal to NP". I took a quick look and while it did look better than most P/NP proof attempts (in the sense that Deolalikar doesn't make up his own notation), I spotted the following line in his write-up:
Distributions computed by LFP must satisfy this model.
Many P versus NP attempts work by claiming that a polynomial-time algorithm must act a certain way, forgetting that an algorithm can completely ignore any semantic meaning in the problem. Since LFP is just a fancy way of saying "polynomial-time algorithm", it looked like Deolalikar fell into the same trap. So I filed the paper into my Why Me? folder as I have with many, many others and figured that was the end of that.
As you readers all know, on Sunday the 8th the paper was slashdotted and Lipton, impressed that Deolalikar worked on infinite Turing machines (whatever they are), posted on the proof saying he was "certainly hopeful". People then flooded me with emails, tweets and instant messages asking me about the paper.
I read over Lipton's post and took another look at the paper and still didn't see the semantic approach outlined in the paper as a viable attack on the P versus NP problem. At this point I knew people I trust would look over the paper carefully so I wouldn't have to. I couldn't keep completely quiet so I tweeted:
Much ado about an unverified proof. The word "must" is troubling. I'll let others check it carefully.
In retrospect I should have been more negative. Kudos to Scott for taking a strong and risky stance.
The emails kept coming but I remembered my vacation message was still active and I didn't have to answer them. I went to Santa Fe and didn't check email until I returned.
There are two camps in our community on Deolalikar's paper: those of us who saw it as just another in a long series of bad attempts at P v NP and wondered what all the fuss was about, and those who thought Deolalikar hit upon a new proof paradigm and, despite the numerous problems, big and small, with the paper, still hold hope something important will come out of it.
Deolalikar at this point should retract his P ≠ NP claim until he can address all these issues. In an email Deolalikar sent out to the gang of 22 on Friday the 13th, he restates his claim and overview of the proof and says he "fixed the issues in the finite model theory portion of the proof". He promises a revised version on his homepage in a couple of days.
Deolalikar is following the script, currently at stage 5 of the 12-stage process. Stage 6 will happen sooner than he expects.
Friday, August 13, 2010
P vs NP vs IEEE (Guest post by Paul Beame)
Guest Post from Paul Beame!
While the complexity world is debating how much to invest in trying to extract useful information from the recent attempt at resolving P vs NP, there is another item that may be useful to complexity blog readers who also happen to be members of the IEEE Computer Society which sponsors the CCC, LICS, and FOCS conferences. Voting just opened in the election for IEEE CS officers including the President.
I'd like to alert voters to a write-in campaign for Joe Bumblis for president of the IEEE Computer Society.
As anyone who has run a CCC, FOCS, or LICS knows, the IEEE CS has had a lot of very bureaucratic rules for running conferences. Conferences are run by the IEEE CS Technical Committees (like the TC on Mathematical Foundations of Computing which does CCC, LICS, and FOCS and which I currently chair). These TCs are groups of volunteers who have little power in the current organization of the IEEE CS. These TC volunteers have long chafed against the bureaucracy, often with little response from the main organization. The main officers of the IEEE CS have been from the Publications and Standards part of the organization and these groups dominate the nominating committees. The TC side of the organization has been shut out. IEEE CS ran into financial difficulty for a variety of reasons unrelated to conferences and the result has been more bureaucracy based on the Standards mentality that the way to fix things is to set up more rules, which pit CS staff in opposition to TC and conference volunteers.
This year we finally succeeded in getting the IEEE CS to allow an option that should substantially reduce bureaucracy for conferences like CCC, LICS, and FOCS, but the organization still resists anything like the flexibility for TCs that SIGs have within ACM and is still tied to bureaucratic rules. Many of the most active and thoughtful TC chairs have agreed that we need to push a write-in campaign to finally get someone from the TC side in a position of authority, and Joe has agreed to serve. Joe has been vice-chair of the board of TC chairs and would bring a very different perspective that has a chance of shaking things up. By comparison, one of the other candidates for president recently refused to authorize the LICS 2010 budget as part of the Federated Logic Conference (long after registrations had opened) because it was charging $10 too little for non-IEEE members (or $8 too much for members, depending on how you count). It took my intervention to expose the ridiculousness of this position before he was overruled.
We need your vote now!
(NOTE FROM GASARCH: as someone who has run a CCC, I can VERIFY what Paul is saying, without pointing to Lipton's blog or using a PCP.)
Wednesday, August 11, 2010
Factoring in P ?
I recently read the following: (Backdrop: the author had already defined p(n) to be the number of partitions of n.) NOTE: I use ki where the actual quote uses k_{i}. It looks better in HTML, though I am sure a better HTMLer than I could handle it.
For a positive integer n let d(n) be the sum of the divisors of n. For example
 d(4) = 1 + 2 + 4 = 7
 d(1000) = 1+2+4+5+8+10+20+25+40+50+100+125+200+250+500+1000 = 2340
 d(1001) = 1+7+11+13+77+91+143+1001 = 1344
Unlike the numbers p(n), the numbers d(n) are easy to compute. There is a simple explicit formula for them. Namely, if n = 2^{k2} 3^{k3} ... p^{kp} then
 d(n) = (2^{k2+1}−1) · ((3^{k3+1}−1)/2) · ... · ((p^{kp+1}−1)/(p−1))
They are assuming factoring is easy. Or perhaps they are assuming you are given the number already factored. There are no comments on what "easy to compute" might really mean, though they seem to think that having a simple explicit formula implies being easy to compute.
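To see the point concretely, here is a quick sketch (my own code, not from the book) that computes d(n) both from the product formula and by brute-force divisor enumeration. Note that the formula route has to factor n first, and trial division is exponential in the bit-length of n; that hidden factoring step is exactly the issue.

```python
def factorize(n):
    """Trial-division factorization: returns {prime: exponent}."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def d_formula(n):
    """Sum of divisors via the product of (p^(k+1) - 1)/(p - 1)."""
    total = 1
    for p, k in factorize(n).items():
        total *= (p**(k + 1) - 1) // (p - 1)
    return total

def d_bruteforce(n):
    """Sum of divisors by direct enumeration."""
    return sum(i for i in range(1, n + 1) if n % i == 0)

for n in (4, 1000, 1001):
    print(n, d_formula(n), d_bruteforce(n))
# -> 4 7 7, 1000 2340 2340, 1001 1344 1344, matching the book's examples
```

The two functions agree on every n, but for a 1024-bit n the brute-force loop is hopeless and factorize is no better; the "simple explicit formula" only helps if someone hands you the factorization.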
To be fair, this book was written a while back before people in math really took these notions seriously. The book is Mathematical Omnibus: Thirty Lectures on Classic Mathematics by Fuchs and Tabachnikov. Copyright 2007.
Disclosure: I got my copy for free in my capacity as SIGACT NEWS book review editor.
Monday, August 09, 2010
That P ≠ NP proof: what's up with that?
So what are my uninformed views? If I were a betting man I would bet that it won't pan out. I would bet that he proved something very interesting, but not P ≠ NP. Why do I think this? This is NOT based on the author, who seems to be a person to be taken seriously. It's just that the problem is so hard and we've made no progress on it since... Hmmm, when was the last time we made progress on it? Parity not in constant depth? Other bounds on weak models? Oracles? Natural Proofs? Something from Mulmuley's program? Fagin's theorem connecting the problem to finite model theory (which is used in the alleged proof)? I would have thought we would make slow progress at first. Not sure what type. Maybe SAT ∉ DTIME(n^{k}) for small values of k. Maybe something else.
Scott Aaronson apparently IS a betting man. See his post on the problem.
I was going to go to the Barriers in Computational Complexity II workshop. If the proof is correct I hope Amtrak gives refunds.
Actually looking over the schedule, much of it will still be relevant. If the proof is CORRECT how will that change what we teach? What we work on? Lance's bookforthelayperson on complexity theory? (I recently proofread a chapter on what the world would be like if P=NP. He may have to remove it. Too bad, it was awesome!)
Wednesday, August 04, 2010
The Solution to the MarkBetty Game
Let f(n) be a nondecreasing function from naturals to naturals. Consider the following game:
Let n be a large natural number. Mark and Betty alternate (Mark goes first) putting M's and B's on an n by n checkerboard. No square can have two markers on it. They play until all squares are covered; hence the game will end after n^{2} moves. If at the end there is some row or column A where the number of M's in A is at least n/2 + f(n) then Mark wins! If not then Betty wins! (NOTE: the M's in A do not have to be adjacent.)
I asked for your intuitions on when Mark or Betty has a winning strategy. Here are the answers and a pointer to my writeup. NONE of this is my work; it is all the work of J. Beck (the reference is in the writeup). Here are the results:
 If f(n) ≤ O(√n) then Mark has a winning strategy.
 If f(n) ≥ Ω(√(n log n)) then Betty has a winning strategy.
 My writeup is here. If I fill in more details it may change.
My writeup of the first result is complete. I included more detail than you usually get in a math paper. I do not know if this is good or bad; however, I really wanted to check it all for myself. My writeup of the second result is much sketchier; however, it is essentially Beck's proof.
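A toy experiment (my own sketch, NOT Beck's strategies) hints at why √n is the natural scale here: if both players just fill the board in a random order, the M-count of a row fluctuates around n/2 on the order of √n, and taking the max over all 2n rows and columns costs roughly another √(log n) factor, which is the gap between the two bounds above.

```python
import math
import random

def random_game(n, rng):
    """Fill the n x n board in random order, Mark on even turns.
    Returns how far the best row/column M-count exceeds n/2."""
    cells = [(i, j) for i in range(n) for j in range(n)]
    rng.shuffle(cells)
    row_m = [0] * n
    col_m = [0] * n
    for turn, (i, j) in enumerate(cells):
        if turn % 2 == 0:  # Mark's moves
            row_m[i] += 1
            col_m[j] += 1
    best = max(max(row_m), max(col_m))
    return best - n / 2

rng = random.Random(0)
n = 400
disc = max(random_game(n, rng) for _ in range(20))
print(f"n={n}: best discrepancy ~ {disc:.0f}, sqrt(n) ~ {math.sqrt(n):.0f}")
```

Of course random play proves nothing about optimal play; Beck's point is that Mark can *force* a √n-sized surplus against any defense, while Betty can hold him below √(n log n).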
Beck has a book on games of this nature entitled Combinatorial Games: Tic-Tac-Toe Theory. Will children buy it thinking it is about tic-tac-toe? Maybe Bill Gates' kids: the book costs $146.00 new and $107.00 used.
Monday, August 02, 2010
I want your intuitions on this, so the less you know the more I want you to post
Let n be a natural number which we think of as being large. Mark and Betty alternate (Mark goes first) putting M's and B's on an n by n checkerboard. No square can have two markers on it; hence the game will end after n^{2} moves. If at the end there is some row or column A where the number of M's in A is at least n/2 + f(n) then Mark wins! If not then Betty wins! (NOTE: the M's in A do not have to be adjacent.)
For which f(n) does Mark have a winning strategy? For which f(n) does Betty have a winning strategy? Much is known on this problem. I have written up some notes that I will post on Wednesday. (None of this is my work.)
What are your intuitions, and why? If you already know the answers then please do not post. SO, paradoxically, the less you know the more I want you to post. I want to know the intuitions of someone who does not know the problem.