Thursday, September 27, 2012

Things a Complexity Theorist Should Do At Least Once

A few weeks ago, Suresh wrote a post Things a TCSer should have done at least once with the caveat
This list is necessarily algorithms-biased. I doubt you'll need many of these if you're doing (say) structural complexity.
Basically begging a response. So here is what every structural complexity theorist should have done.
• Define a new complexity class that has some reason for being.
• To keep balance in the world, you should also collapse two other complexity classes.
• While you are at it, separate two complexity classes that weren't separable before.
• Create a new relativized world. Extra points if in this work you collapse two complexity classes while separating two others.
• Use Kolmogorov complexity, information theory or the probabilistic method as a proof technique. They are really all the same technique in disguise.
• Use the sunflower lemma, the Lovász Local Lemma, or some other weird probabilistic or combinatorial lemma just for the fun of it.
• Invoke VC dimension to solve a problem. I should have at least one item in common with Suresh, and Sauer's lemma is sometimes useful in complexity.
• Have a theorem that starts "Assuming the extended Riemann hypothesis..."
• Give a "simpler" proof of a nasty theorem.
• [Advice given by Noam Nisan many moons ago] Try to settle P v NP. In both ways. Only by really trying and failing can you understand why the easy stuff doesn't work.

Tuesday, September 25, 2012

A new kind of Spam

(In this post I quote attempted posts to the blog.
I transcribe them as they are, so if you see a missing period
or awkward language, it's not me (this time), it's them.)

We moderate comments but do so very lightly.
There are no hard and fast rules but, roughly speaking,
we block comments that are BOTH offensive AND off-topic.
There may be exceptions, like if a comment is on topic but REALLY REALLY offensive
and adds nothing to the discussion.
There may be more benign exceptions, like if I post a question and block the
answers so that when I reveal the answer the next day it's more dramatic.
Or if someone posts information that is not public yet.
In these cases we hope they post non-anonymously so we can email them
and tell them why they were blocked. There are other isolated cases as well.
All of these are very rare.

Recent attempted comments do not fall under these rules and we had to
decide on them. The following was an attempted comment on my post
STOC 2012-Workshops and honored talks:

Hi there STOC 2012- workshop and honors talks Loved every second! Great views on that!
That actually breaks the mold! Great thinking!

This comment is NEITHER offensive NOR off-topic. It's a bit odd: I can't tell if
the Great view is of my post or of the talks I was writing about.
It does sound awkward. Why is that? IT WAS GENERATED BY A SPAMBOT!!!
How do I know this? Because if you click on the author you are directed to a site that sells you paints for your living room. Hence we block such posts.

So, they think the readers of our blog are into interior decorating.
I am sure that some are, but I don't think our readers are a particularly good market for this. Technology is good enough to find our blogs and try to use spambots on them, but not good enough (or there is no incentive) to figure out which blogs are worth targeting. This is part of a bigger problem I blogged about
here, where I noted that technology is good enough to know that I am a book review editor for SIGACT NEWS but not good enough (or there is no incentive) to figure out that I only review comp sci and math books, and not
books on (say) politics.

A borderline case: an attempted comment on my post
A natural function with very odd properties:

Awesome logic. You truly have some expert skills and enhanced my knowledge on Cantor Set Construction Agreements.

This one did not link to any product so it might be legit, except that it is awkward sounding and the same person tried to submit, as a comment to Six Questions about natural and unnatural mathematical objects:

This truly enhanced my skills. very helpful Job Proposal.

Clearly spam, though I'm not sure why since there is no link to a product.

These posts are trying to pass a Turing Test- but so far they are not succeeding.

Sometimes the only positive comments I get are from spambots. Oh well.

Thursday, September 20, 2012

Poset Games are PSPACE-complete

Consider the following game on a finite poset: the players alternate picking an element x of the poset and removing all y ≥ x. The first one to empty the poset wins. I posted last March about a high school student, Adam Kalinich, who showed how to flip the winner of a poset game.

Finding the winner of a poset game is in PSPACE by searching the game tree. A corollary of Adam's work showed that poset games were hard for Boolean formula evaluation, leaving a huge gap in the complexity of finding the winner.
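That game-tree search is simple to sketch. Here is a minimal brute-force solver, my own illustration (the function name is hypothetical, not from any paper); it takes exponential time, so it is only usable on tiny posets:

```python
from functools import lru_cache

def first_player_wins(elements, leq):
    """Brute-force game-tree search for the poset game on `elements`,
    where leq(a, b) means a <= b.  Exponential time: PSPACE is the
    upper bound here, not a promise of efficiency."""
    @lru_cache(maxsize=None)
    def wins(remaining):
        # A move picks x in the poset and removes every y >= x.
        # The player who empties the poset wins, so the player to
        # move on an empty poset has already lost.
        for x in remaining:
            after = frozenset(y for y in remaining if not leq(x, y))
            if not wins(after):
                return True
        return False
    return wins(frozenset(elements))

# On a 3-element chain the first player wins by taking the bottom element.
print(first_player_wins({1, 2, 3}, lambda a, b: a <= b))  # True
```

Memoizing on the set of surviving elements keeps the search from re-solving the same subgame, but there can still be exponentially many subgames.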

Daniel Grier, an undergrad at the University of South Carolina, has settled the problem and shows that determining the winner of a poset game is PSPACE-complete. His reduction is ridiculously simple (though not obvious) and the proof is not that complicated either.

Grier starts from Node Kayles, a game on an undirected graph where each player takes turns removing a vertex and all its neighbors. Whoever empties the graph first wins. Thomas Schaefer showed the PSPACE-completeness of Node Kayles back in 1978.

Grier's reduction from Node Kayles to posets is very simple: Let G be the graph. Have one element in the poset for each vertex of G, all incomparable. For each edge e=(u,v) we add two more elements, one above the vertex elements corresponding to u and v, and one below every vertex element other than u and v. That's the whole construction.
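As a sanity check of the construction, it can be coded directly. This is my own coding of the description above (the name kayles_to_poset and the ('top', i)/('bot', i) labels are hypothetical):

```python
from itertools import product

def kayles_to_poset(vertices, edges):
    """Build the poset for a Node Kayles instance, per the
    construction described above.  Returns (elements, leq) where
    leq is the set of pairs (a, b) with a <= b."""
    elements = set(vertices)
    pairs = set()  # strict relations a < b, before transitive closure
    for i, (u, v) in enumerate(edges):
        top, bot = ('top', i), ('bot', i)
        elements |= {top, bot}
        pairs.add((u, top))    # one new element above u and v
        pairs.add((v, top))
        for w in vertices:     # one new element below every other vertex
            if w not in (u, v):
                pairs.add((bot, w))
    # take the transitive closure so leq is a genuine partial order
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(pairs), repeat=2):
            if b == c and (a, d) not in pairs:
                pairs.add((a, d))
                changed = True
    return elements, pairs | {(x, x) for x in elements}
```

For a graph with n vertices and m edges this produces a poset with n + 2m elements, which is what makes the reduction polynomial (indeed linear) in size.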

Grier shows that if G has an odd number of edges and no immediate win for the first player then the first player wins the Node Kayles game if and only if the first player wins the corresponding poset game.

You can read more details in Grier's short paper. It's really neat seeing high school students and undergrads solving interesting open problems. We need more problems like poset games.

Wednesday, September 19, 2012

Theory Conferences Galore

Early registration for the FOCS conference in New Jersey is September 27th. There is some travel support available for students and postdocs, deadline is this Friday the 21st.

STOC and Complexity will be co-located in Palo Alto in early June. STOC CFP (deadline November 2), Complexity CFP (deadline November 30).

SODA comes back to the US and New Orleans January 6-8. Accepted Papers.

Monday, September 17, 2012

Imagining Imaginary Probabilities

In the year 4000 BC my great-great-...-great grandmother tried to solve (in today's terms) the equation
x² + 2x + 2 = 0
She discovered that if it had a solution then there would be a number a such that a² = -1. Since there clearly was no such number, the equation had no solution. She missed her chance to (depending on your viewpoint) discover or invent complex numbers.

Fast Forward 6012 years.

In the year 2012 I wondered: is there a probability p such that if you flip a coin with prob(H)=p twice, the prob that you get HT is 1/2? This leads to
p(1-p) = 1/2
If you solve this you get p = (1 ± i)/2, which is not real. Hence there is no such coin. WAIT A MINUTE! I don't want to miss the chance that my great-...-great grandmother missed! In the real world you can't have a coin with prob(H) = (1+i)/2. But is there some meaning to this?

More generally, for any 0 ≤ d ≤ 1 there is a p ∈ C (the complex numbers) such that prob(HT) = d. The oddest case (IMHO) was to take d=1. You then get that if a coin has prob(H) = (1+√(-3))/2 then prob(HT) = 1. Does that mean it always happens? No, since prob(TH) = 1 as well. Do the probs of HH, HT, TH, TT add up to 1? Yes they do, since some of them are complex with negative real parts.
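The bookkeeping can at least be checked numerically with Python's complex arithmetic. A sketch (the "coin" is of course purely formal):

```python
import cmath

# The formal "coin" with prob(H) = (1 + sqrt(-3))/2, chosen so prob(HT) = 1.
p = (1 + cmath.sqrt(-3)) / 2

probs = {
    'HH': p * p,
    'HT': p * (1 - p),
    'TH': (1 - p) * p,
    'TT': (1 - p) * (1 - p),
}

assert cmath.isclose(probs['HT'], 1)          # HT "always happens"...
assert cmath.isclose(probs['TH'], 1)          # ...but so does TH
assert cmath.isclose(sum(probs.values()), 1)  # and they still sum to 1
print(probs['HH'])  # genuinely complex, with negative real part
```

So the four "probabilities" are 1, 1, and a conjugate pair of complex numbers summing to -1; the constraints of probability are satisfied formally even though no interpretation as frequencies survives.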

Is there an interpretation or use for this? I know that quantum mechanics uses stuff like this. Could examples like this be good for education? Are there non-quantum examples of the uses of this that could be taught in a discrete math course?

Thursday, September 13, 2012

Max Flow

A couple of weeks ago Suresh tweeted the following result of James Orlin: max flow in O(nm) time.
I'm thinking, wow, max flow is one of the major standard algorithms problems, and O(nm) time (n = number of vertices, m = number of edges) seems like a great clean bound. But there hasn't been much chatter about this result beyond Suresh's tweet.

Reading Orlin's paper gives some clues. The previous best bound, due to King, Rao and Tarjan, has a running time of O(nm log_{m/(n log n)} n) = O(nm log n), just a logarithm off from O(nm). Orlin doesn't directly give an O(nm) algorithm; his takes time O(nm + m^{31/16} log² n). It's the minimum of the running times of the King-Rao-Tarjan and Orlin algorithms that yields O(nm). Nor is O(nm) tight: Orlin also gives an algorithm with a running time of O(n²/log n) when m = O(n).
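For concreteness, here is the classical baseline all of these results improve on: Edmonds-Karp, the BFS shortest-augmenting-path method, which runs in O(nm²). This is my own sketch of the textbook algorithm, not Orlin's or King-Rao-Tarjan's:

```python
from collections import deque, defaultdict

def max_flow(edges, s, t):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting
    path in the residual graph.  edges is a list of (u, v, capacity)."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # reverse edges may gain residual capacity
    flow = 0
    while True:
        # BFS from s for a shortest path using edges with residual capacity
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if residual[(u, v)] > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum
        # push the bottleneck capacity along the path found
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= push
            residual[(v, u)] += push
        flow += push
```

Shaving that O(nm²) down to O(nm) took three decades of work, which is part of why the clean bound feels so magical.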

I don't mean to knock Orlin's work, he makes real progress on a classical algorithmic problem. But somehow I think of O(nm) as a magical bound when it is really just another bound. I'm just fooled by simplicity.

Tuesday, September 11, 2012

Some Quantum Stuff

Two quantum announcements (emailed to me by Umesh Vazirani, and reproduced here almost exactly) and then some thoughts of mine on quantum computing.

Announcement one: The NSF has a new initiative to try to address the lack of tenured faculty (particularly in computer science departments) involved in quantum computation research.  CISE-MPS Interdisciplinary Faculty Program in Quantum Information

The initiative provides a paid sabbatical year for interested tenured faculty to visit a strong quantum computing group, so that they can reposition their research interests. The rationale behind the solicitation is to increase the number of tenured researchers in quantum computation, but also to break through the "quantum skepticism" in faculty hiring decisions in those departments where there are no faculty actively involved in quantum computing research.

Announcement two: In support of this program, Carl Williams (a physicist working in Quantum Information who put together the US Vision for Quantum Information Science for the Office of the President) and Umesh have put together a workshop where interested individuals can learn about the initiative, the field and make contacts with people from the major quantum computing centers: see here.

The initiative comes at a particularly opportune moment for researchers in complexity theory, given the increasing relevance of quantum techniques in complexity theory: the 2-4 norm paper of Barak et al (SDPs, Lasserre); exponential lower bounds for the TSP polytope via quantum communication complexity arguments (see the Drucker and de Wolf paper Quantum proofs for classical theorems for several applications of quantum techniques to complexity, and see
here for the TSP polytope result);
quantum Hamiltonian complexity as a generalization of CSPs; lattice-based cryptography whose security is based on quantum arguments; etc.

MY COMMENTS: Umesh gives, as a reason quantum is important, its uses in other parts of complexity theory. While that is certainly good, there are other intellectual reasons why quantum is worth studying.
1. Factoring is in Quantum P! There are MANY problems (maybe 10) where quantum seems to be faster than classical. I wouldn't really want to push this point since quantum computers aren't built yet. More generally, if one claims a field is valid for real practical value, those arguments may become less believable over time.
2. Quantum computing can be used to simulate quantum systems- I think this was one of the original motivations.
3. Quantum computing is valuable for a better understanding of Physics.
This was first told to me by Fred Green (a Physics PhD who went into computer science) and I made it the subject of this blog entry.
I like his quote so much that I will quote it here

Learning quantum computing helped me understand quantum mechanics better. As a physicist I never thought about measurement theory or entanglement, which were foundational issues, irrelevant to what I was doing. In quantum computing, we reason about these things all the time.

Over the years others have told me similar things.

• Side note: The word Quantum is mostly misused in popular culture. Quantum Leap meant a big leap when actually a quantum is small. The James Bond movie Quantum of Solace used it correctly but was an awful movie. Oh well.

Thursday, September 06, 2012

The Time of Research

Among the many conference/journal discussions, one systems person said the reason they don't publish in journals is that their work is very dependent on current technology. A few years in the future the technology will change and the research will no longer be relevant. The best systems research develops ideas that transcend technology, but much research in systems has a limited lifespan of relevancy.

In theory, if you prove a theorem, that theorem will be true forever. In fact the theorem was always true, we just didn't know it. So it is important to have refereed long-term archival write-ups of these results. A theorem could be supplanted by another result or get less interesting over time, but it always remains true. Theory results transcend technology, though most theory results have a zero lifespan of relevancy.

Wednesday, September 05, 2012

Theory Day at UMCP Oct 24

The University of Maryland at College Park is having a Theory Day on Wed Oct 24! Come hear

1. Distinguished talks by Julia Chuzhoy and Venkatesan Guruswami!
2. Short talks (is that code for NOT distinguished?) by Bill Gasarch, MohammadTaghi Hajiaghayi, Jonathan Katz, Samir Khuller, David Mount, Elaine (Runting) Shi, and Aravind Srinivasan. (I never realized I was first alphabetically until now.)

I like theory days in general and often go to the NY theory days. They are free and only one day. I recommend going to any theory day that is an Amtrak ride away. (Might depend on how long the trip is: there is a 13-hour Amtrak from Atlanta, Georgia to Maryland, though I doubt I'll see Lance there.) I get a lot out of theory day, as noted in this post about NY theory day.

(ADDED LATER AT THE REQUEST OF THE ORGANIZER:
Theory Day at UMCP is intentionally after the NJ FOCS and is an easy Amtrak
ride from NJ. The stop is New Carrollton.)

Tuesday, September 04, 2012

Should we learn from the Masters or from the Pupils?

The following is a paraphrase of a comment at the end of the Suggested Readings section of Spivak's calculus book:

Abel remarked that he attributed his profound knowledge of mathematics to the fact that he read the masters, rather than the pupils.

Are you better off reading the Masters or the pupils?  This of course depends on the masters and the pupil and other factors.

1. I have heard that Godel's original papers (even when translated) are well written and show a profound understanding of the subject and why it's important.
2. However, we now have a better understanding of what Godel did and better ways to express it.
3. The Masters may include the motivation which may be lost in later papers.
4. Often the first proof of anything is ugly or odd and later proofs really clean it up.
5. Often the first proof of anything uses only basic concept- later abstractions may hide the heart of the proof.
6. As a practical matter sometimes the early papers are not available (thanks to paywalls or obscurity) or in a language you do not read.
7. If Lance and I ever do a book-of-blog-posts I will clean up some of the spelling, make some of the arguments clearer (perhaps indicate where I am being sarcastic in cases where it was not understood), and improve the writing. This will make it better than the blog but less authentic.

Here are examples where the Masters papers may not be worth reading:

1. Recursion theory in the early 1960's had several infinite injury arguments. I have heard that they were known to work only because the lemmas and proofs worked out.  Only after Bob Soare's excellent article on the topic were they really understood. For 0''' priority arguments it is also true that the early papers are not the ones to read.
2. Example (and the real motivation for this post): I have tried to read Ramsey's original article. I knew that his goal was a problem in logic, and I wanted to know what that problem was. I had a hard time reading the paper. (I did my own writeup.) Why was his version so hard to read? (1) He never uses the words coloring or graph or hypergraph. He doesn't mention that if you have six people at a party either three of them know each other or three of them don't know each other. Perhaps he didn't go to many parties. (2) He uses odd terms at times. (3) His paper is rather abstract. If he had just proven a simple case then it would be obvious how to proceed to his abstract case. This is true both for his combinatorial theorem (he only proves what we would call the hypergraph version) and for the logic theorem.
3. The CliffsNotes for Atlas Shrugged are far better than the book. Shorter too. They are online for free here, which makes sense since Ayn Rand was known for her altruism.
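Incidentally, the party fact Ramsey never states, that among six people there are always three mutual acquaintances or three mutual strangers while five people do not suffice, is small enough to verify by brute force. A sketch (function names are my own):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each pair (i, j) with i < j to a color 0 or 1
    (think: know each other / don't know each other)."""
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_mono_triangle(n):
    """Check every 2-coloring of the edges of the complete graph K_n."""
    pairs = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(pairs, colors)))
               for colors in product((0, 1), repeat=len(pairs)))

print(every_coloring_has_mono_triangle(6))  # True: 6 people suffice
print(every_coloring_has_mono_triangle(5))  # False: 5 do not
```

K6 has only 15 edges, so there are 2^15 = 32768 colorings to check, which takes well under a second; the witness for 5 people is the pentagon coloring.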

SO- what do you think? Examples of cases where the Master is better to read?
Examples of cases where the Pupil (or more generally later summaries, surveys, expositions) is better to read?