Friday, April 30, 2010

The Base of Computational Complexity

In the April CACM, George V. Neville-Neil wrote a column on a question about the foundations of computer science:
In most areas of science there are a few basic underlying laws that inform the rest of the study of a given subject. Physics, chemistry, and electrical engineering all have these basic equations. What are the basic equations in computer science? Or is computer science baseless?
Neville-Neil goes on to describe data structures as one of the foundations of computer science. John Dupuis led a discussion on his blog.

Computer science does have a problem of focus. What we theorists do can differ considerably from what people in AI or operating systems do. Some machine learning people might list Bayes' rule as a fundamental equation.

The "base" of a field is not a set of equations but a set of principles. After all, what are the basic equations of biology?

How about computational complexity? 
  • The Turing Machine: A robust formal model of computation.
  • The idea that we measure resources as a function of the problem size (from the Hartmanis-Stearns paper from which our field gets its name).
  • Our Goal: Understanding the power and limitations of efficient computation.

Thursday, April 29, 2010

STOC/CCC/EC/THEORYDAY/GRANTS

Announcements:
  1. STOC Early Registration closes on April 30. STOC itself is June 6-8.
  2. CCC Early Registration closes May 3. CCC itself is June 9-11.
  3. EC Early Registration closes on May 6. EC itself is June 7-11.
  4. The IBM Research|NYU|Columbia Theory Day is Friday, May 7, 2010. See here for details.
  5. Call for applications for 2010-11 Computing Innovation Fellows. For details see here.
Predictions that are not on intrade.
  1. CCC will top 100 for attendance. I think so because (1) in prior years when we co-located with STOC we have topped 100: Berkeley 1986 we had 110, and Montreal 2002 we had 140; (2) I suspect there are people who often go to STOC who will go to CCC this year since it is easy to do. (I am the opposite: I always go to CCC (only missed one) and will go to STOC because it is easy to do.) See here for info on attendance at CCC and here for opinions on the attendance at CCC.
  2. STOC and EC overlap may hurt one or both of them. I could be wrong and it may help both of them since you can dash from hotel to hotel depending on the session.
  3. CCC and EC overlap may hurt one or both of them. I could be wrong and it may help both of them since you can dash from hotel to hotel depending on the session.
  4. Lance is going to ALL THREE: STOC, CCC, EC. I hope he's in good shape to do all that dashing. (This is not a prediction since I know that it's true.)
  5. Theory Day in New York will be AWESOME! I doubt that can be quantified. Hence it cannot be on intrade. (Alas I cannot go because of other commitments.)
  6. The winners of the Computing Innovation Fellowships will deserve it, though some of the people who apply and don't get it will also deserve it.

Wednesday, April 28, 2010

A possible NP-Intermediate Problem

(REMINDERS: STOC Early Registration closes on April 30. CCC Early Registration closes May 3. EC Early Registration closes on May 6. )

(ADDED LATER: AS STATED the problem below has problems with it. After reading, see Eric Allender's comment in the comments.)

Here is a problem whose complexity has probably not been studied. I think it is NP-intermediate. I will also give a version that is likely NPC. I am proposing that you either prove it is in P or NPC, or show that if it is NPC then PH collapses (or something else unlikely happens).

DEFINITION: We call a coloring of the x by y grid proper if no axis-parallel rectangle has all four corners the same color.

Let L be the set of all (x,y,c) in UNARY such that there is a proper c-coloring of the x by y grid.
  1. Clearly in NP: verifying that a c-coloring of the x by y grid is proper can be done in time poly in x,y,c (see the sketch just after this list).
  2. Let Lc be the set of all (x,y) such that (x,y,c) ∈ L. Lc has a finite obstruction set and hence is decidable in time O(|x|+|y|), though the constant depends on c. This can be proven by well-quasi-order theory, which yields nonconstructive bounds on the size of the obstruction set, or there is a proof with reasonable O(c²) bounds. (For ALL info on this problem and links to more info see the links below.) Hence the problem is fixed-parameter tractable.
  3. I suspect that the problem L is NP-intermediate. Why? I think it's NOT NPC since there is not much to play with: just 3 numbers. I think it's not in P because my co-authors and I have not been able to do much more than ad hoc colorings (this is not that good a reason; however, the 17x17 challenge (linked to below) has led other people to think about the problem, and they have not come up with clean solutions).
  4. It is likely that the following is NPC: The set of all (x,y,c,f) where f is a partial c-coloring of the x by y grid such that f can be extended to a proper c-coloring.
  5. I suspect that whatever is true for rectangles is true if you replace rectangles by other shapes such as squares. There are also other Ramsey-theoretic functions that could be studied (though not Ramsey's theorem itself, since there verifying is hard).
  6. This question is related to the (still unresolved) question I posed here and that Brian Hayes explained better here. However, I don't think proving the general problem NPC will shed light on why determining if (17,17,4) ∈ L is hard. That's just one instance.
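
To make item 1 concrete, here is a minimal verifier in Python (my sketch, not from the original post). It checks a coloring, given as a list of rows of color indices, in time O(x²y²), which is polynomial in x and y (and independent of c):

    from itertools import combinations

    def is_proper(coloring):
        # coloring[r][c] is the color of grid cell (r,c).
        # Proper: no axis-parallel rectangle has 4 corners of one color.
        rows, cols = len(coloring), len(coloring[0])
        for r1, r2 in combinations(range(rows), 2):
            for c1, c2 in combinations(range(cols), 2):
                if (coloring[r1][c1] == coloring[r1][c2]
                        == coloring[r2][c1] == coloring[r2][c2]):
                    return False
        return True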

Tuesday, April 27, 2010

Trading Money for Computation

Since the beginning of complexity we have talked about time complexity t(n) as a function of the input size. But it is the inverse of this function that we really care about: we have a roughly fixed number t of computation steps available on our computer and we want to know the largest problem we can solve within that running time. The value t has increased dramatically under Moore's Law and we care about how much larger a problem we can solve with our larger running times.
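
As a toy illustration of this inverse view (my sketch, not from the original post): fix a step budget t and compute the largest solvable input size for two running times. Doubling t buys a factor of about √2 for an n² algorithm but only one more unit of input size for a 2ⁿ algorithm.

    def largest_n(time_of, t):
        # Largest n with time_of(n) <= t, by doubling then binary search.
        if time_of(1) > t:
            return 0
        lo, hi = 1, 2
        while time_of(hi) <= t:
            lo, hi = hi, 2 * hi
        while hi - lo > 1:              # invariant: time_of(lo) <= t < time_of(hi)
            mid = (lo + hi) // 2
            if time_of(mid) <= t:
                lo = mid
            else:
                hi = mid
        return lo

    for t in (10**6, 2 * 10**6):        # Moore's Law doubles the budget
        print(t, largest_n(lambda n: n * n, t), largest_n(lambda n: 2 ** n, t))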

You could always get around limited time with a large investment in additional computer hardware, but that rarely made sense for running a single algorithm. In several (non-theory) talks I've seen recently, though, speakers describe solving their large problems by renting time on Amazon's servers. You can now cheaply buy computing time on the cloud via Amazon or Microsoft and soon via Google.

Cloud computing doesn't make all computing cheap. One still needs good parallelization since the problem gets run on many servers with limited communication between them. And all the servers in the world won't beat out exponential growth in running time. The P versus NP problem remains a problem. Nevertheless we need a good formal framework to study these problems.

Computational complexity has in the past adapted well to new computation models from the PRAM to biological and quantum computers. But we are seeing new computing paradigms in multicore and cloud computing and theory seems late to the party. There was a nice SODA paper on MapReduce, the basic cloud computing operation, but for the most part theorists haven't tackled the cloud computing model and only a few have looked at multicore. Theory can say much about new computational methods, both in how we can take advantage of them and what they can't do, but only if we make the effort to develop the proper models to capture these new approaches.

Monday, April 26, 2010

Google VS Experts VS readers VS Bing

If you want to find something out you can ask Google, ask an expert, or (if you have a blog) ask your readers. Google is the most common; however, there are times when asking an expert or your readers does better.
  1. In my last post I asked for a Pangramic Palindrome: a sentence that is the same backwards and forwards and has all of the letters of the alphabet. I had spent some time on Google and other search engines trying to find one, with no success. Within an hour of posting, Gareth Rees pointed to Peter Norvig's Pangramic Palindrome! I also found out (NOT to my surprise) that someone else had asked about such things. (NOTE: my spell checker wanted me to replace Pangramic with Pancreatic.)
  2. In this post I asked if the following was true: given 2k-1 integers, there exists a subset of k of them that sums to a multiple of k. I had looked on Google A LOT for this one and also asked some people, but didn't find anything. The comments pointed to a paper on the subject which answered the question (yes) and also gave additional theorems. If I had asked just a few more people I would have gotten it without asking my readers; however, like the HALTING problem, it's hard to know when to stop asking and when to start posting.
  3. In this post I asked if a certain sum that my co-author Clyde Kruskal derived was new. (I was sure it wasn't but couldn't find a reference.) I got a WONDERFUL combinatorial proof in the comments. I'm much happier with the combinatorial proof. (I also got a reference: Euler beat Clyde to it. Oh well.)
  4. In both this post and this post I asked readers if they wanted to review books for SIGACT NEWS. I got far more responses than I usually do when I just print the request in my column.


Will search engines ever be so good that they are better than asking experts or asking your readers? (In the future we will all have blogs and hence we will all have readers--- though with FACEBOOK the future may be now. In the future we will all have 15 readers.) I doubt it. For several of the questions above I didn't know quite what to look for. For example, I didn't know to look for Palindromic Pangram.

On a related note: has anyone tried Bing? How does it compare to Google?

Friday, April 23, 2010

A Post on the Post Post

On Monday Richard Lipton wrote a nice piece on the work of Emil Post, a famous logician who had great results and even greater questions in the early days of recursion theory. I have a few comments and thought I would follow up with my own post.

As Lipton mentioned, Post showed that every language that is both c.e. and co-c.e. is computable. Lipton talks about the open complexity version, what he calls Post++: L is in P if and only if both L and its complement are in NP, or in my terms P = NP ∩ co-NP. Lipton seems to suggest that Post++ is true, but consider the following language:
A = {(n,r) | there is an m that divides n with 1 < m ≤ r}
A is in both NP and co-NP (even UP and co-UP) by guessing the prime factors of n. If A is in P then you can factor n by binary search. So if you believe that Factoring is a hard problem you have to believe that Post++ is not true. Lipton alludes to this when he says Post++ would kill most crypto protocols.
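
For concreteness, here is the binary-search argument as a sketch (mine, not Lipton's): a membership oracle for A yields the least nontrivial divisor of n in O(log n) queries, and recursing on the cofactor factors n completely.

    def smallest_divisor(n, in_A):
        # in_A(n, r) decides A = {(n,r) : some m divides n with 1 < m <= r}.
        if n < 4 or not in_A(n, n - 1):
            return None                 # n is prime (or < 4): no nontrivial divisor
        lo, hi = 1, n - 1               # invariant: not in_A(n, lo) but in_A(n, hi)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if in_A(n, mid):
                hi = mid
            else:
                lo = mid
        return hi

    # Stand-in oracle for testing; the real one would be the hypothetical P algorithm.
    oracle = lambda n, r: any(n % m == 0 for m in range(2, r + 1))
    print(smallest_divisor(91, oracle))  # 7, and 91 // 7 = 13 finishes the factorization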

Lipton then talks about Post's problem: show there are non-computable, incomplete computably enumerable sets. Post had a program for this problem: find some property P of sets, show there are non-computable c.e. sets with property P, and show that no complete set has property P. Out of Post's program came useful properties such as simplicity, mitoticity and autoreducibility, but it wasn't the right approach to solve Post's problem. A decade ago I co-authored a paper that talked about using a polynomial-time version of autoreducibility to separate complexity classes, but so far we too have been unsuccessful in using it to prove any new separations.

Friedberg and Muchnik independently settled Post's problem in the late 50's (Ladner proved the complexity version, assuming P ≠ NP, in the 70's). Lipton asks "why do open problems stay open for years and then get solved independently at about the same time?" The answer is that they usually don't: most open problems (Fermat's Last Theorem, Primes in P, etc.) are not solved independently, but the few that are stand out.

In our field if you look at the most famous examples: Friedberg and Muchnik, Borodin and Trakhtenbrot, Cook and Levin, Immerman and Szelepcsényi, you have two people, one on each side of the Iron Curtain during the Cold War, in a time before we sent results electronically. It could take several years for results to travel, giving a much larger window for "about the same time".

To solve Post's problem one needed "computational thinking," in this case thinking of c.e. sets as being produced by a computational process. Post didn't think of c.e. sets this way; they were called recursively enumerable back then. Friedberg and Muchnik were children of the new computer age and could make the intellectual leap needed to solve Post's problem. That's why Post's problem was solved in the US and Russia at about the same time.

Wednesday, April 21, 2010

Is there a pangramic palindrome?

Pangrams are sentences that contain every letter of the alphabet. The classic is
The quick brown fox jumps over a lazy dog.
(NOTE- I had jumped but a reader corrected it to jumps) There are more here.

Palindromes are words, phrases, sentences, or even longer texts that are the same backwards as forwards. See here for history and some examples. Weird Al has an entire song that is just palindromes, titled Bob. (It's the 14th best Bob Dylan satire of all time: see here, which has a pointer to a ranked list.)

BUT here is my question: are there any sentences that are BOTH pangrams AND palindromes? I really want to ask whether there are any in English that are not too contrived. However, searching the web I couldn't find any at all!! So I'll be happy to find any in any language.
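
In the meantime, here is a quick checker (my sketch) for vetting any candidate a reader sends in; it ignores case, spacing and punctuation, as palindrome aficionados usually do.

    import string

    def is_pangramic_palindrome(s):
        letters = [c for c in s.lower() if c.isalpha()]   # drop spaces/punctuation
        pangram = set(letters) >= set(string.ascii_lowercase)
        palindrome = letters == letters[::-1]
        return pangram and palindrome

    # The classic pangram is not a palindrome:
    print(is_pangramic_palindrome("The quick brown fox jumps over a lazy dog."))  # False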

Tuesday, April 20, 2010

Life without Flying


A reminder that registration for all three Cambridge conferences is now live: STOC (early registration deadline April 30), Complexity (May 3) and Electronic Commerce (May 6). The week of June 6th should be quite exciting and busy.

Bill asked me about Europeans who might not want to register early given the disruption of flights due to volcanic ash from Eyjafjallajokull. While the possibility that problems will persist into June is very slight, we will reimburse any registrations for people unable to travel to STOC because of flight cancellations related to Eyjafjallajokull. While I can’t speak for CCC and EC I suspect they’ll have similar policies. So preregister with confidence.

In computer science, we have become quite dependent on air travel: for attending conferences, giving talks and discussing research with colleagues, for attending committee meetings and grant review panels, recruiting trips and much more. I've argued before that the main reason CS handles conferences and recruiting differently from every other academic field is that CS didn't really get started until the jet age.

Suppose that the volcano situation happened at a larger scale and prevented air travel worldwide for the next several decades. How would our field (not to mention the rest of society) adjust? We could still travel, just a bit more slowly. We wouldn't revert back to the early 20th century situation with conferences either regional or rare. Rather, video and Internet conferencing tools would become much better and more widely available.

When we make the effort to travel, we and the people we visit make the effort to focus on the purpose of the trip. It's harder to spend the day in my office claiming I'm busy if I'm just working with a colleague over the Internet.

The end of air travel would force the issue, making it socially acceptable to travel somewhere virtually. Meanwhile I don't get jet lag, I get to sleep in my own bed and I spend more time with the family. I think I'd like the no-flight world.

Monday, April 19, 2010

Is Guessing a good idea?

The following is from an Ask Marilyn column. I paraphrase since it's from memory.

READER'S LETTER: I have heard of exams where you are penalized for guessing. How do they know you are guessing?

ANSWER: These are multiple choice exams where you get (say) 4 points for getting it right but -1 for getting it wrong. Hence guessing might lower your score. (NOTE TO READERS: An earlier version had an error, so I just shortened the answer to eliminate the error.)



I recently gave an exam where part of it was as follows:
For each of the following 10 languages indicate if it is REGULAR, CONTEXT-FREE BUT NOT REGULAR, or NOT CONTEXT-FREE. You may also leave it blank. You get +3 for a right answer, -3 for a wrong answer, and 0 for leaving it blank. DO NOT GUESS! Really DO NOT GUESS! If your total score is LESS THAN 0 then you will just get a 0. (NOTES TO MY READERS: This is DIFFERENT from the SATs and other exams that use this way to grade. For this post I omit the 10 languages.)
This problem inspires a math problem:

Should a student guess? If a student has NO IDEA how to do ANY of the questions then there is no harm in guessing since leaving all blanks will yield a 0, whereas guessing at random might yield a positive score. (If a student has NO IDEA but can do this reasoning then perhaps he should drop my course and take probability instead.)

What if the student is sure of ONE of the answers? Then should he randomly guess the rest? Randomly guess some of the remaining? What if he is sure of x of the answers? What is the value y so that he should guess y of the remaining but should not guess y+1? For x=0 I think the answer is y=10.

Here is one general question: There are n problems on an exam, each with c choices. You get A points for getting a problem right, B points for getting a problem wrong, and C points for writing DK for Don't Know. (I had toyed with the idea of giving 1 point for DK.) If you know x of the answers and are clueless on the rest, how many should you guess (as a function of n,A,B,C,x)? We assume that on those you don't guess you write DK (admitting that you don't know something is helpful here, as in life). You can assume A > 0, B < 0, C ≥ 0.
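
Here is a little simulation for exploring the question (my sketch; the defaults below are the numbers from my exam: n=10, c=3, A=3, B=-3, C=0). The max(raw, 0) line implements the floored score, which is exactly what makes guessing attractive when x is small.

    import random

    def expected_score(n, c, A, B, C, x, y, trials=20000, rng=random.Random(0)):
        # Know x answers, blindly guess y of the rest, write DK on the others;
        # the raw score is floored at 0, per the rules of my exam.
        base = x * A + (n - x - y) * C
        total = 0
        for _ in range(trials):
            raw = base + sum(A if rng.randrange(c) == 0 else B for _ in range(y))
            total += max(raw, 0)
        return total / trials

    for x in range(4):
        best = max(range(10 - x + 1),
                   key=lambda y: expected_score(10, 3, 3, -3, 0, x, y))
        print("know", x, "-> best number of guesses:", best)

Note that without the floor, a blind guess here has expected gain A/c + (c-1)B/c = 1 - 2 = -1 < 0, so you would never guess; the floor changes the calculus.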

One can ask more general questions by dividing the questions into c categories: those where you can eliminate 0 options (clueless), 1 option, 2 options, ..., c-1 options (you know you are correct).

If a student can figure out how many to guess on during the exam then they are likely a very good student and should guess 0 of them.

Friday, April 16, 2010

The Pad


So I broke down and bought the iPad. Many people have asked whether the iPad is worth buying. The short answer: It will be.

There are many, many iPad reviews out there on the web, so what can I add? It wins as an entertainment device. My favorite apps so far: Netflix (streaming looks great), Instapaper, The Weather Channel (TWC Max+) and of course MLB. Oddly enough there is no built-in calculator, but the Wolfram Alpha app is pretty cheap now and there is the free PCalc Lite.

Books look much prettier on the iPad than the Kindle but for reading for a long stretch I prefer the Kindle. Feels more comfortable on my middle-aged eyes.

Based on some suggestions from my Twitter followers, I bought the iAnnotate PDF app which offers some great mark-up tools for PDFs. But it is really difficult to move files to the app. I still haven't figured out how to do it on Northwestern's network. The next version of iAnnotate should make it easier to move files, but Apple really needs a standard way for applications to share documents so I can download from Safari into programs like iAnnotate.

I hope Google optimizes their docs for the iPad. I’d love to be able to edit them.

Supposedly multitasking comes in the fall. But right now I can run two tasks at a time by running one of them on my iPhone. Don’t laugh--I can now stream baseball games while surfing the web.

To make it a reasonable laptop replacement to take on trips, we'll need someone to write a full-featured LaTeX app including an editor along the lines of WinEdt. It would be considerable work but it could be done. I'd pay $50 for it. Also the iPad will need some way to connect to a projector.

I just have too many gadgets now: iPhone, iPad, Kindle and a laptop. I could imagine wanting to travel with all of them but at some point that just gets ridiculous. I keep hoping one day we’ll have one gadget to rule them all but I guess a device that fits in my pocket with a big screen and keyboard was just not meant to be.

Thursday, April 15, 2010

New Constructive Aspects of the Lovász Local Lemma

Guest Post by Bernhard Haeupler

The Lovász Local Lemma (LLL), slightly simplified, states: given a set of "bad" events, if for every event A there exists a subset of the events with total probability mass at most 1/e such that A is mutually independent of all events outside this subset, then the probability that all the "bad" events are avoided simultaneously is nonzero. The LLL is used in combination with the probabilistic method to (nonconstructively) prove the existence of, e.g., a satisfying assignment for any k-CNF formula in which no clause shares variables with more than 2^k/e other clauses, optimal routing strategies, and many other (seemingly unrelated) structures of interest to TCS.

At the last STOC, Robin Moser gave his celebrated talk on how to find such structures efficiently, and later he and Gábor Tardos simplified his algorithm and generalized it to the full asymmetric LLL. They assume that the events are determined by a collection of independent random variables, and define two events as dependent iff they share a variable. This is the basic setting in most applications of the LLL.

The Moser-Tardos algorithm (henceforth the MT-algorithm) is incredibly simple: start with an arbitrary assignment, and repeatedly take the variables of any one of the given bad events that currently holds and assign new random values to them. Of course, attempting to "fix" such an event can lead to neighboring events becoming violated, but the beautiful argument of Moser and Tardos shows that this branching process dies out quickly if the original LLL-conditions (even with the optimal constants) are fulfilled. This yields a randomized algorithm whose expected running time is linear in the number of bad events.
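
To make this concrete, here is a minimal sketch of the algorithm in the k-SAT setting (my code, not Moser and Tardos's), where the bad events are "clause C is falsified" and two events are dependent iff their clauses share a variable. Under the LLL-conditions the loop terminates after an expected linear number of resamplings; on formulas violating them it may run forever.

    import random

    def moser_tardos(clauses, num_vars, rng=random.Random(0)):
        # clauses: lists of nonzero ints; +v is variable v, -v its negation.
        assign = [rng.random() < 0.5 for _ in range(num_vars + 1)]
        falsified = lambda cl: all(assign[abs(l)] != (l > 0) for l in cl)
        while True:
            bad = next((cl for cl in clauses if falsified(cl)), None)
            if bad is None:
                return assign[1:]       # all bad events avoided
            for l in bad:               # resample the variables of one bad event
                assign[abs(l)] = rng.random() < 0.5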

Recently, further important properties of the MT-algorithm have been shown:

1.) It can be derandomized using approximately log-wise-independent probability spaces (or PRGs that fool small space), if one allows an arbitrarily small ε-slack in the LLL-conditions. This leads to deterministic (NC) algorithms for most LLL-applications with polynomially many bad events.

2.) It outputs a solution that not only avoids all the bad events but also has further randomness properties: the output distribution of the MT-algorithm approximately preserves the probability of any event that depends only sparsely on the bad events, in exactly the same way as the conditional LLL-distribution (i.e., the distribution obtained by conditioning on all bad events being avoided). While interesting in its own right, this can also be used to develop efficient algorithms for several LLL-applications in which the number of events is superpolynomial in the number of variables, which in turn is the "size" of the desired assignment. It can be shown that even in these cases the number of resamplings done by the algorithm remains small (e.g., nearly linear in the number of variables), but often just finding a bad event that currently holds, or verifying that the current solution avoids all the bad events -- the two basic steps of the MT-algorithm -- is (NP-)hard. The solution to this dilemma is to run the algorithm only on a suitably chosen polynomial-sized core subset of the events. A union bound over the non-core events in the conditional LLL-distribution then shows that a good assignment is produced with high probability for essentially any LLL-application. This resolves, e.g., the curious situation of the Santa Claus problem, for which two completely different non-constructive proofs of a constant LP-integrality gap had been developed without leading to a polynomial-time constant-factor approximation algorithm.

Wednesday, April 14, 2010

Tom L DVD and Birthday and You Tube and...

April 9 was Tom Lehrer's 82nd birthday! To celebrate, I give you breaking news that a Tom L DVD was released April 13, 2010. It seems to have some videos of him performing and some other things. It also has The Derivative Song, which is not available anywhere else (except on YouTube, so I suppose it's actually available to anyone).

I first became aware of this material from someone who thought it would NOT be coming out on DVD. I got this email a few months ago:
I believe (or hope) that you are the blogger Gasarch who in 2007 wrote a blog post called The definition of rare. And you had a good point: after some stuff has been made available on YouTube it's not rare anymore. Here are some more examples: Tom Lehrer material that's never been -- and probably never will be -- commercially available to the public.
Right before I was going to post this I emailed the author to ask if it was okay. He said that it was, and he emailed me (1) about the DVD collection coming out, and (2) some more clips on YouTube that were posted (I think) to advertise that the DVD is coming out. Here they are:
  1. My favorite: Tom L doing a song by Danny Kaye that Tom L himself has said was the inspiration for his classic The Elements. Here it is: here.
  2. Tom L singing two of his songs. Nice to see what he looks like, but nothing really new here.
  3. Ad for the Tom L DVD.
  4. Misc stuff. The first clip shows that Tom L is surprised it is coming out.
  5. A letter from Tom L about having his stuff on YouTube. I wish more artists felt the way he does.
Will I find or be sent other obscure Tom L stuff next year? I kind of doubt it. However, last year I would have doubted that I would have more stuff this year.

Tuesday, April 13, 2010

Choosing a Graduate School

Besides being tax day, Thursday is the deadline to decide where to attend graduate school. How should you choose? I've blogged on this topic before, but there are a few new wrinkles to talk about.

Typically Ph.D. programs in computer science offer funding (via fellowships, research assistantships or teaching assistantships) that covers your tuition and a small stipend. But a few programs at some financially-strapped universities are offering admission to the graduate program without such support. Should you join such a program if you can afford it? Maybe you'll get lucky and find an outside fellowship or a well-funded advisor, but you have to worry about how much commitment the school will give you if they don't have a financial stake in your success. You really need to talk to the faculty involved and make sure you are comfortable with the situation.

As the field of theoretical computer science has gotten quite broad, theory groups at most universities cannot hope to adequately cover all research areas. Some departments have made the conscious decision to build strength in a particular area (like Northwestern in Algorithmic Game Theory). Joining such a group can be exciting if you are interested in that line of work but explore what other options would be available if you were to change your mind. 

Perhaps you are looking at the lousy job market for tenure-track faculty and thinking about not attending graduate school at all. Don't worry. As undergraduate enrollment swings up, the economy recovers and the first wave of computer science faculty starts to retire, the market should get much better by the time you get your doctorate. (And if I'm wrong this post may mysteriously disappear.)

Monday, April 12, 2010

Sum of squares: How much to cheat?

In discrete math (or other courses) we teach AND DERIVE the formula for 1+2+3+...+n. We then look at the sum 1²+2²+...+n². Here there are some options.
  1. State the formula and prove it by induction.
    1. PRO: This is a good example of induction.
    2. CON: The formula comes out of nowhere.
  2. Do the integral method to show that it's roughly n³. Then use constructive induction to get the actual result. (Constructive induction here means the following: assume the formula is of the form An³ + Bn² + Cn + D and then, by doing the proof by induction, derive what A,B,C,D must be; a worked computation appears after this list.)
    1. PRO: They get a sense that they have derived the answer.
    2. CON: A bit of a cheat. They didn't really derive it, since the fact that it is roughly n³ is not a proof that it is a polynomial.
    3. CON: Messy. (This is probably what I would do in the standard Discrete Math course for Sophomores.)
  3. Prove that it is a cubic poly by the method of differences. Then use constructive induction or curve fitting to find the actual answer.
    1. PRO: They really get to derive it.
    2. CON: You need to teach the method of differences.
    3. PRO: You get to teach the method of differences.
    4. CAVEAT: Whether you do it this way depends on what your goals for the course are. For our discrete math course this would be too far afield.
  4. There is a clever proof from which you could derive the actual formula. An exposition of this proof is here.
    1. PRO: The method extends to sums of kth powers.
    2. CON: The algebra is a bit too clever. The out-of-nowhere problem again. (I will probably show this to the Honors Discrete Math course.)
    3. CAVEAT: Could use the method to just show that there IS a polynomial of degree 3 and then use Constructive Induction or Curve Fitting to find the exact formula.
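
For concreteness, here is the constructive-induction computation from option 2 written out (my notation). Assume S(n) = An³ + Bn² + Cn + D. Then

    n² = S(n) - S(n-1) = 3An² + (2B - 3A)n + (A - B + C),

and matching coefficients gives 3A = 1, 2B - 3A = 0 and A - B + C = 0, i.e., A = 1/3, B = 1/2, C = 1/6, while S(0) = 0 forces D = 0. Hence S(n) = n³/3 + n²/2 + n/6 = n(n+1)(2n+1)/6.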

Friday, April 09, 2010

What Makes a Lecture "Distinguished"?

In January I gave a Distinguished Lecture in the CS Department at the University of Alberta. In early March I gave essentially the same lecture at Penn State in their regular CSE colloquium. What's the difference?

One is audience. At my Alberta lecture I got a large turnout from the broad CS department. At regular colloquia I usually just get the CS theory types and people I know, though at Penn State I happen to know faculty ranging from pure logicians to experimental economists.

I'm more likely to accept an invite as a distinguished lecturer. I don't usually travel to give a regular seminar talk unless I'm using it as an excuse to visit specific people and I didn't really know anyone at Alberta well save for one former NEC colleague. 

Perhaps the biggest difference is attitude. As a distinguished lecturer, people want to talk to me, my schedule is booked solid shuffling from office to office much like an interview trip. People want to hear what I have to say because I was "distinguished". I have no shortage of opinions and I always like an audience willing to hear them.

Thursday, April 08, 2010

Baseball violates the rules of mathematics!!

(Looking for a roommate for STOC? Check out this site.)

Baseball Season started this week. I want to point out that Baseball violates mathematics in two ways.

1) By the rules of the game, home plate is a right triangle with a square adjacent to it. And what are the dimensions of this right triangle? They are 12-12-17. BUT THERE CANNOT BE A 12-12-17 RIGHT TRIANGLE!!!! (12² + 12² = 288 while 17² = 289, so the Pythagorean theorem fails.)

2) (Information in this point is from Bill James' article The Targeting Phenomenon.) A player's batting average is the fraction of the time he or she gets a hit (it's a bit more complicated since some things don't count as at-bats: walks, sacrifices, hit-by-pitch, maybe others). You might think that the higher the number, the fewer players achieve that batting average. Let N(a) be the number of players, over all of baseball history, with batting average a (measured in thousandths). You might think
N(296) ≥ N(297) ≥ N(298) ≥ N(299) ≥ N(300)
But you would be wrong.
  1. N(296)=123
  2. N(297)=139
  3. N(298)=128
  4. N(299)=107
  5. N(300)=195
There are so many more players batting 300 than 299! There are so many more players batting 300 than 298! There are so many more players batting 300 than 297! There are so many more players batting 300 than 296!

This would seem to violate the very laws of mathematics! Or of baseball! Or of baseball mathematics! Actually there is an explanation. Batting 300 has become a standard that players try to achieve. If you are batting 300 and it is the last week of the season, you may become very selective about what balls you hit, you may ask to sit out a game, you will do whatever you can to maintain that 300. Similarly, if you are batting 296-299 then you will do whatever it takes to get up to 300.

This happens with number-of-hits (with 200 as the magic number), runs-batted-in (with 80, 90, and 100 as magic numbers), for pitchers number-of-strikeouts (with 200 and 300 as magic numbers), and wins (with 20 as the magic number).

If we all had 6 fingers on our hands instead of 5 then there would be different magic numbers.

So what to do with this information? Model it and get a paper out. Hope to see it at next year's STOC.

Wednesday, April 07, 2010

But seriously now folks- what do you make of barrier results?

In my April Fools Day Post I said the following:
Here are problems that I believe can be solved with current techniques.
That was indeed true- since they were all known theorems. However, I was making a more serious point in capital letters. I will make it again here using all small letters.

in theoretical computer science we have some results on what techniques are not going to suffice to solve P vs NP and other problems. the authors of these results claim (correctly) that they are trying to get us to look at other techniques. however, proving barrier results can become an end in itself. what do the barrier results mean, both mathematically and sociologically?
  1. has there ever been, in the history of mathematics, an open problem that inspired so many negative results?
  2. one can argue that in logic there was work on negative results. however, before godel's incompleteness theorem, was there the kind of flurry of activity to show that hilbert's program could not work that there is now on trying to prove that proving p ≠ np is going to be hard? i do not think so. for ch there was more of this before cohen, but again, not as much as now.
  3. how about in number theory? i have never seen a result along the lines of: the following techniques will not suffice to solve goldbach's conjecture. are there any?
  4. why is P vs NP different from other problems in math? why have negative results about trying to prove it become a topic people work on?
  5. to be fair there aren't that many negative results, though they seem to be growing and are regarded (correctly) as important.
  6. the real question is whether the barrier results will lead to a proof that p is not np (or even that p is np, though i find that unlikely).
  7. the other real question is: are these results being pursued because they are important, or because we are in a rut and can't do much else? i tend to think it's because they are important since (a) there is lots of non-barrier work in complexity also, and (b) barrier results are hard! going into a really hard area would be an odd way to get out of a rut. when i am in a rut i try to do things that are rather doable.

Tuesday, April 06, 2010

Finding the Right Model

Last fall I wrote about the different focus on models and proofs in the Econ and CS theory communities. Today I'll focus on the purpose of a model and what makes a good one.

An economist once explained the difficulty in coming up with a good model. A good example is how people prefer one choice to another. In standard utility theory, people assign a value (or utility) to each state of the world and try to maximize their expected utility. But then one notices that people tend to give too much weight to low-probability events, as in the Allais paradox. So one can look at more general models of preferences such as prospect theory. But these general models might be too general, and thus have little explanatory value; in particular they may not allow us to predict future behavior and outcomes, the ultimate purpose of an economic model.

In computational complexity, we don't use our models to predict future events. Our models of computation are meant more for comparing different types of resources: time versus space, quantum versus classical, parallel versus serial and, of course, nondeterministic versus deterministic. What makes a good model for us is robustness (small changes to the definition don't change the problems it can solve), nice closure properties and some connection to reality. Sure, the class P, polynomial time, does not exactly capture efficient computation if the constants or exponents of the running time are large, but it's a reasonable approximation.

Part of my research agenda over the past few years has been trying to find computational models for agents in economic theory. Many of the ideas of our community, such as the connections between randomness and unpredictability, can play an important role in economic theory. But finding the right models is the biggest challenge, as even the purpose of a model differs between our communities. Communication is the key: working directly with people in the other community. One of the great advantages of Northwestern is having a large, strong micro-economics group, many of whom understand that computation is an important resource and are willing to at least listen to us computer scientists.

We also need to keep an open mind--the goal should be making the best connections between communities and not worrying so much about results to impress our direct peers. Having tenure definitely helps.

Monday, April 05, 2010

What Does It Mean to be Published?

I don't remember what prompted it but about a month ago I tweeted
Your paper might appear on Arxiv or ECCC, be widely read and even well cited. But don't think that it is in any way "published".
Suresh responded
Question is, isn't the point of publication to be "widely read and well cited"?
So what is the point of publication? Certainly you want your paper easily read and cited. But also you want a careful peer review leading to a polished version that has the stamp of approval by appearing in some respectable conference or journal. Publishing also acts as a filter, allowing the reader to get some idea of the level of quality of the paper before reading it. Almost any paper can appear on an archive site but it takes more to be published.

Nevertheless if you get major kudos for your archive paper, why bother taking it further? Even if people like your paper now, it may be forgotten years from now. Complain as you will about journal publishers, they are scanning in all the old articles to make them available in a digital age. Papers that years ago appeared as old department technical reports may be lost forever. Nobody can predict what form research papers may take one hundred or even ten years from now. One day those PDF files may no longer be readable. Get your paper really published and you have a much better chance of it surviving far into the future. 

A few years ago, the IEEE saw no reason to scan in old FOCS proceedings thinking that any of the important old papers appeared in better form in some journal. We knew though that if these papers weren't put in digital form, many of them might disappear forever. With some strong pushing by Paul Beame, Bob Sloan and others, those papers are now available on both the IEEE and Computer Society digital libraries.

I wanted to read a copy of Karp's NP-completeness paper, which only appeared in the proceedings of a one-shot workshop in 1972. I ended up going to the library to dig it up. But library books get lost and many young people today don't even know where the library is. Later I found out Luca had scanned the paper for a course he taught. But how long will Luca's Berkeley pages last, and what about all the papers that don't lead to Turing awards?

So publish your papers, ideally in a journal as well as a conference. Even if you don't think it matters for you in the short run, it can make a big difference for the community long into the future. What good is pushing the boundaries of science if those boundaries get snapped back because work gets lost?

Friday, April 02, 2010

SIGACT Social Networking

STOC conference and hotel registration now live. Early registration deadline is April 30th. Registration for Complexity and Electronic Commerce coming soon. You can now submit your papers to FOCS, deadline 11 PM Eastern on Wednesday.

We are in the midst of reorganizing the SIGACT web presence. We have redesigned the SIGACT website now under the supervision of Amit Chakrabarti. As I've announced here before we have a new blog Theory Announcements that collects a broad set of TCS related announcements including from TheoryNet and DMANet. If you have an event or announcement you want to appear there, please submit it to one of those mailing lists.

But the days when one needed a theory portal are gone. Google and other search engines work well if you want information on a specific conference, journal or researcher. But what you do need is something to guide you through the clutter. Right now you have to rely on a network of theory bloggers and tweeters to keep you informed. But even then you get important theory information mixed in with personal stuff you may not care about (or vice versa).

So I've started a @sigact Twitter account where we will point to important information and news relating to the theory community. Recent tweets will also appear on the SIGACT home page. This lets me focus my own Twitter account and this blog away from announcements (like this one) and more on giving our opinions on the important issues of the theory community.

But that's only the start. We'll continue to search for new ways to serve the needs of the theory community: a revamped theory calendar, some sort of jobs database, Facebook and other social networks, and a killer iPad/iPhone app. I don't really have plans for an iPad/iPhone app or know what should go in it, but I'd love to have one.

Thursday, April 01, 2010

Let's Prove Something instead of proving that we can't prove something

We complexity theorists seem more concerned with proving that we can't prove things than with actually proving things!!!! There have been two workshops on barriers: reasons we cannot prove things (see here and here). In a prior blog entry I pointed out that in an excellent talk by Peter Bro Miltersen he listed as an open problem that he wanted to get a barriers result. In fact, he seemed more interested in proving that nobody could prove the result than in proving it.

We are in a rut. We seem to have the urge to show a theorem is hard to prove rather than to actually prove it! To be fair, many of our conjectures are hard to prove. Even so, we need to get out of this rut. I list several problems that I think can be solved with current techniques. For each one I will say why the current barrier techniques might not apply.
  1. Prove that NP is not equal to EXP = DTIME(2^O(n)). There are oracles for proper containment either way, and for incomparability (see this paper), but there are no oracles for which they are equal. Also note that NP is robust and EXP is not. That might be useful. DO NOT TRY TO FIND AN ORACLE TO MAKE THEM EQUAL!!! THAT IS THE MENTALITY I WANT TO BREAK US OUT OF!!!
  2. How do NL and P compare? There are oracles to make either properly contained in the other (see this paper). However, Relativized (spellcheck made me capitalize Relativized) space has several definitions and it's not clear which one is appropriate (Jonathan Buss and Ruzzo-Simon-Tompa have worked on this). DO NOT TRY TO FIND THE RIGHT DEFINITION OF RELATIVIZED SPACE!!! JUST PROVE A CONTAINMENT EITHER WAY!!! IT DOESN'T EVEN HAVE TO BE PROPER!!! OR PROVE THEY ARE THE SAME (unlikely).
  3. Is NL properly contained in PSPACE? Of course it is, but let's PROVE IT rather than REDEFINE ORACLES to prove that it's hard (some of the same comments for NL and P apply here).
  4. Let's look at nondeterminism in a different light: for rather powerful classes and rather weak ones.
    1. Let PR be the set of SETS that are Prim Rec, and NPR be the set of all SETS that are Nondet Prim Rec. Is PR=NPR? DO NOT TRY TO DEFINE PRIM REC WITH ORACLES TO OBTAIN A BARRIERS RESULT!!!! JUST SOLVE THE DAMN PROBLEM!!!
    2. Deterministic Finite Automata (DFAs) and Nondeterministic Finite Automata (NFAs). Do they recognize the same set of languages or not? DO NOT DEFINE DFAs WITH ORACLES!!! DO NOT TRY TO MAKE DFAs FIT INTO THE NAT PROOFS FRAMEWORK!!! JUST SOLVE THE BLEEPING PROBLEM!!!! (A sketch of the classical solution follows below.)
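
For the record (the April Fools reveal is in the April 7 post above), the classical answer to item 2 is yes: the subset construction converts any NFA (here without ε-moves, for simplicity) into an equivalent DFA whose states are sets of NFA states. A minimal sketch, my code:

    from itertools import chain

    def nfa_to_dfa(alphabet, delta, start, accepting):
        # delta maps (state, symbol) to an iterable of NFA states.
        start_set = frozenset([start])
        seen, todo, dfa_delta = {start_set}, [start_set], {}
        while todo:
            S = todo.pop()
            for a in alphabet:
                T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
                dfa_delta[(S, a)] = T
                if T not in seen:
                    seen.add(T)
                    todo.append(T)
        dfa_accepting = {S for S in seen if S & set(accepting)}
        return dfa_delta, start_set, dfa_accepting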