Thursday, October 29, 2015

Barry Cooper, a computability theorist and professor at the University of Leeds, passed away on Monday after a brief illness. Cooper was a big proponent of computability theory and of Alan Turing in particular. He chaired the advisory committee for the Turing Year celebrations in 2012 and organized a number of events in Cambridge around the centennial of Turing's birth on June 23.

For the Turing Year, Cooper and Jan van Leeuwen co-edited the massive volume Alan Turing: His Work and Impact, which includes all of Turing's publications and a number of commentaries on the legacy of that work. I wrote a piece on Turing's Dots for the volume. Alan Turing: His Work and Impact received the R. R. Hawkins Award, the top award for professional and scholarly publishing across all the arts and sciences.

Barry Cooper was also the driving force behind Computability in Europe, an eclectic meeting to discuss all things related to computation.

I hope Barry now gets to meet his hero Turing, but we'll miss him greatly down here.
Monday, October 26, 2015
A human-readable proof that every planar graph is 4.5-colorable
About 20 years ago I went to a talk by Jim Propp on the fractional chromatic number of a graph. (Here is a link to a free legal copy of the book on fractional graph theory by Ed Scheinerman and Dan Ullman.)
Let c be a natural number. A graph G=(V,E) is c-colorable if there is a mapping f from the vertices of G to the vertices of K_c such that (x,y) ∈ E implies (f(x),f(y)) is an edge of K_c.
Let K_{t:k} be the Kneser graph: the vertices are all k-element subsets of {1,...,t}, and (A,B) is an edge if A∩B=∅. A graph G is t/k-colorable if there is a mapping f from the vertices of G to the vertices of K_{t:k} such that (x,y) ∈ E implies (f(x),f(y)) is an edge of K_{t:k}. (That is not quite right, as it might depend on whether you use, say, 15/10 or 3/2, but it is close. See pages 40-42 of the book linked to above for the formal definition.)
One can also show that this is equivalent to other definitions of the fractional chromatic number (one of them involves the LP relaxation of an integer programming formulation of coloring).
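To make the homomorphism definition concrete, here is a minimal brute-force sketch (my own illustration, not from the book; the function names kneser_graph and is_t_over_k_colorable are made up for this example): it builds the Kneser graph K_{t:k} and searches for a homomorphism from a given graph into it. As a sanity check, the 5-cycle, whose fractional chromatic number is 5/2, maps into the Petersen graph K_{5:2} but not into K_{2:1} (which is just K_2, so this is the statement that C_5 is not 2-colorable).

from itertools import combinations, product

def kneser_graph(t, k):
    """Vertices: k-element subsets of {1,...,t}; edges join disjoint subsets."""
    verts = [frozenset(s) for s in combinations(range(1, t + 1), k)]
    edges = {(a, b) for a in verts for b in verts if a != b and not (a & b)}
    return verts, edges

def is_t_over_k_colorable(g_verts, g_edges, t, k):
    """Check t/k-colorability by brute-force search for a homomorphism into K_{t:k}.

    Exponential in the number of vertices of G; only meant for tiny examples.
    """
    kverts, kedges = kneser_graph(t, k)
    for assignment in product(kverts, repeat=len(g_verts)):
        color = dict(zip(g_verts, assignment))
        if all((color[x], color[y]) in kedges for (x, y) in g_edges):
            return True
    return False

c5_verts = list(range(5))
c5_edges = [(i, (i + 1) % 5) for i in range(5)]
print(is_t_over_k_colorable(c5_verts, c5_edges, 5, 2))  # True: C_5 is 5/2-colorable
print(is_t_over_k_colorable(c5_verts, c5_edges, 2, 1))  # False: C_5 is not 2-colorable

The point is just that t/k-colorability is literally the existence of a homomorphism into K_{t:k}; the caveat about 15/10 versus 3/2 above is exactly why the book's formal definition is a bit more careful.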
The big open problem in the field and perhaps the motivation for it was this:
Known and the proof is human-readable: Every Planar Graph is 5-colorable.
Known and the current proof is NOT human-readable: Every Planar Graph is 4-colorable.
OPEN: Is there some number 4<c<5 such that every planar graph is c-colorable and the proof is human-readable?
I had not thought about this problem for a long time when I was invited by Dan Cranston to give a talk at the Discrete Math Seminar at Virginia Commonwealth University. Looking over the prior talks in the seminar, I noticed a talk by Dan Cranston on the Fractional Chromatic Number of the Plane. I inquired:
BILL: Fractional colorings! Has there been progress on finding a number c less than 5 such that the proof that every planar graph is c-colorable is reasonable?
DAN: Yes, by me and Landon Rabern. We have shown that every planar graph is 4.5-colorable in this paper! I've given talks on it; here are my slides.
BILL: This is big news! I am surprised I haven't heard about it.
DAN: The paper is on arXiv but not published yet. Also, while you and I think it's a great result, since it's already known that all planar graphs are 4-colorable, some people are not interested.
BILL: Too bad you didn't prove it in 1975 (the four-color theorem was proven in 1976).
DAN: I was in kindergarten.
BILL: Were you good at math?
DAN: My finger paintings were fractional colorings of the plane and I never used more than 4.5 colors.
DAN: By the way, the paper I spoke about in the seminar was about fractional colorings of the plane. The graph is: all points in the plane are vertices, and two points have an edge if they are exactly an inch apart. The ordinary chromatic number is known to be between 4 and 7. Tighter bounds are known for the fractional chromatic number; see the paper linked above where it is first mentioned.
Some thoughts:
1) I would have thought that they would first get something like 4.997 and then whittle it down. No, it went from 5 right to 4.5. Reducing it any further looks hard, and they showed that it cannot be done by a mere tweak of their proof.
2) The paper is readable. It's very clever and doesn't really use anything not known in 1975. But the problem wasn't posed then, and of course the right combination of ideas would have been hard to find back then. In particular, the method of discharging is better understood now.
3) When I told this to Clyde Kruskal he wanted to know if there is a rigorous definition of "human-readable," since the open question asks for a human-readable proof. I doubt there is, or that there can be, but this paper clearly qualifies. Perhaps a paper is human-readable if 2 humans have actually read it and understood it. Or perhaps you can parameterize it and have n-human-readable for n humans.
4) Why does my spell checker think colorable and colorings and parameterize are not words?
5) The serendipity: I'm very happy to know the result. I came upon it by complete accident. I'm bothered that there may be other results I want to know about but don't. How does one keep up? One way is to check the arXiv once a week or on some other regular basis. But is that a good use of your time? I ask non-rhetorically.
Thursday, October 22, 2015
Complexity Blast from the Past
My friend Marty from grad school mysteriously disappeared in 1985 and showed up yesterday looking exactly the same as I remember him. He said he had traveled into my time and asked me how we finally proved P different from NP. I didn't give him the answer he sought, but I let him look around at what complexity has done in the past thirty years. He read through the night and agreed to share his thoughts on this blog.
When I left in '85, Yao and Håstad had just shown that Parity requires exponential-size constant-depth circuits, and Razborov had shown that Clique does not have poly-size monotone circuits. It looked like showing that some NP problem doesn't have poly-size circuits was right around the corner, so I'm a bit disappointed that didn't happen. I would have thought you would have proven NP is not computable in log space (nope), NP not computable by poly-size Boolean formulae (nope), or NP not computable by constant-depth circuits with threshold gates (nope). All I see are excuses like "natural proofs". You did show lower bounds for constant-depth circuits with mod m gates for m prime, but that happened just a couple of years after I left. For m composite you need to go all the way to NEXP, and even that took another 25 years.
Back in 1984 we had a 3n circuit lower bound for an explicit function, and surely you have much better bounds now, at least quadratic. I almost didn't find any improvement, until I saw a paper written last week that improves this lower bound to a whopping 3.011n. You can't make this stuff up.
I shouldn't be that hard on you complexity theorists. All that work on interactive proofs, PCPs, approximation and unique games is pretty cool. Some really impressive stuff on pseudorandom generators and error-correcting codes. Who would have thunk that NL = co-NL, or that the entire polynomial-time hierarchy reduces to counting? Nice to see you finally proved that Primes is in P and undirected graph connectivity is in log-space.
Oh no! Biff stole Lance's copy of Arora-Barak and is taking it back in time. This could change computational complexity forever. I'd better go find Doc Brown and make things right.
[Lance's Note: Thanks to Mostafa Ammar for inspiring this post.]
Sunday, October 18, 2015
Amazon going after fake reviews but mine is still posted
I have always wondered why Yelp and Amazon reviews and other review sites work as well as they do, since a company COULD flood them with false reviews (high marks for itself or low marks for its competitors). From what I've seen this does not seem to be a big problem.
Even so, it is A problem, and Amazon is taking action on it. See here for details.
A while back I posted a review that is... I'm not sure it's false, but I am surprised they allowed it and still allow it. What happened: I bought a copy of my own book Bounded Queries in Recursion Theory (Gasarch and Martin) since my dept wanted a copy to put in one of those glass cases of faculty books. I bought it used for about $10.00. I got email from Amazon asking if I wanted to review it, so I thought YES, I'll review my book! I wrote:
This is a great book. I should know, I wrote it.
I am surprised Amazon ASKED me for my opinion.
Seriously- I also have a survey which says whats in the book
better, email me if you want it, gasarch@cs.umd.edu
That was in November of 2014. It's still posted. Is it a false review? Not clear, since everything I say in it is true and I am honestly saying who I am.
If Amazon removed it, I would be surprised they noticed.
If Amazon does not remove it, I'm surprised they allow it.
Is it possible to be surprised both ways?
Thursday, October 15, 2015
2015 Fall Jobs Post
When the 2014 fall jobs post is our most popular post, you know it is time for the 2015 jobs post. This year, instead of (or in addition to) posting your jobs in the comments, post your theory jobs to the Theoretical Computer Science Jobs site coordinated by Boaz Barak.
For job searchers, the standard official sites for CS faculty jobs are the CRA and the ACM. Check out postdocs and other opportunities on the TCS Jobs site mentioned above and Theory Announcements. It never hurts to check out the webpages of departments or to contact people to see if positions are available. Even if theory is not listed as a specific hiring priority you may want to apply anyway, since some departments may hire theorists when other opportunities to hire dry up.
Many if not most computer science departments should be trying to expand again this year to keep up with ever-growing enrollments. Hiring in theoretical computer science is harder to predict, and is not likely to be a priority in many departments. You can make yourself more valuable by showing a willingness to participate and teach beyond core theory. Machine learning, data science and information security are areas of great need where theorists can play a large role.
Don't ignore postdoc and lecturer positions that will give you some extra training and a stronger CV for future searches. Think global--there are growing theory groups around the world.
Good luck out there and I look forward to seeing your names on the Spring 2016 jobs post.
Sunday, October 11, 2015
Moore's Law of Code Optimization
An early version of Moore's law is as follows:
HARDWARE: The number of transistors on an integrated circuit doubles every 18 MONTHS.
Moore's law is often used to refer to the following:
SPEED: The speed of computers due to hardware doubles every 18 MONTHS.
There are other versions as well. I've heard that some versions of it no longer hold (it couldn't go on forever). But what about the gains made NOT by hardware? Is there a Moore's Law of Code Optimization?
There is! It's called Proebsting's Law:
SPEED: The speed of computers due to code opt doubles every 18 YEARS.
The paper pointed to gives evidence for this law.
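To get a feel for the gap between the two laws, here is a back-of-the-envelope comparison (my own arithmetic, not from the paper): compound a doubling every 18 months against a doubling every 18 years over the same stretch of time.

def speedup(years, doubling_period_years):
    # speedup factor from one doubling every doubling_period_years, compounded over years
    return 2 ** (years / doubling_period_years)

years = 36
hardware = speedup(years, 1.5)    # Moore: doubles every 18 months
code_opt = speedup(years, 18.0)   # Proebsting: doubles every 18 years
print(f"over {years} years: hardware ~{hardware:,.0f}x, code optimization ~{code_opt:.0f}x")
# over 36 years: hardware ~16,777,216x, code optimization ~4x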
So is Code Opt worth it? I've heard the following:
1) Some of the Code Opts eventually go into the hardware, so it's not quite fair to say that Code Opts improve speed that slowly.
2) Any improvement is worth having.
3) Being forced to think about such issues leads to other benefits.
Wednesday, October 07, 2015
Randomness by Complexity
Let n be the following number
135066410865995223349603216278805969938881475605667027524485143851526510604859533833940287150571909441798207282164471551373680419703964191743046496589274256239341020864383202110372958725762358509643110564073501508187510676594629205563685529475213500852879416377328533906109750544334999811150056977236890927563 (RSA 1024)
What is the probability that the fifth least significant bit of the smallest prime factor of n is a one? This bit is fully defined--the probability is either one or zero. But if you gave me better than 50/50 odds one way or the other I would take the bet, unless you worked for RSA Labs.
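As a quick sanity check on that 50/50 prior, here is a toy experiment (my own, not from the post): among all primes below a million, the fifth least significant bit is set almost exactly half the time, so without the ability to factor n there is no reason to favor either outcome.

def primes_below(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [i for i in range(n) if sieve[i]]

ps = primes_below(10 ** 6)
frac = sum((p >> 4) & 1 for p in ps) / len(ps)
print(f"fraction of primes below 10^6 with the fifth least significant bit set: {frac:.4f}")
# prints a value very close to 0.5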
How does this jibe with a Frequentist or Bayesian view of probability? No matter how often you factor the number you will always get the same answer. No matter what randomized process was used to generate the original primes, conditioned on n being the number above, the bit is determined.
Whether we flip a coin, shuffle cards, choose lottery balls or use a PRG, we are not creating truly random bits, just a complex process whose unpredictability is, we hope, indistinguishable from true randomness. We know from Impagliazzo-Wigderson, and its many predecessors and successors, that any sufficiently complex process can be converted to a PRG indistinguishable from true randomness. Kolmogorov complexity tells us we can treat a single string with no short description as a "random string".
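A crude way to see the Kolmogorov-complexity intuition (compressed length is only a rough, computable proxy for description length, so take this as illustration, not definition): a highly patterned string has a short description and a general-purpose compressor finds it, while a string drawn from a decent entropy source barely compresses at all.

import os
import zlib

structured = b"0123456789" * 1000   # patterned: has a short description
random_ish = os.urandom(10000)      # from the OS entropy pool: no short description, we hope

for name, s in [("structured", structured), ("random-ish", random_ish)]:
    print(f"{name}: {len(s)} bytes -> {len(zlib.compress(s, 9))} bytes compressed")
# the structured string shrinks to a few dozen bytes; the random-ish one does not shrink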
That's how we ground randomness in complexity: Randomness is not some distribution over something not determined, just something we cannot determine.
Monday, October 05, 2015
Is Kim Davis also against Nonconstructive proofs?
Recall that Kim Davis is the Kentucky clerk who refused to issue marriage licenses to same-sex couples and was cheered on by Mike Huckabee and other Republican candidates for Prez. Had she refused to issue marriage licenses to couples that had previously been divorced, then Mike Huckabee would not be supporting her, and the Pope wouldn't have had a private 15-minute meeting with her telling her to stay strong. (NOTE: I wrote this post in that tiny window of time when it was believed she had such a meeting with the Pope, which is not true. The Pope DID meet with a former student of his who is gay, and that student's partner.) Had she refused to issue marriage licenses to interracial couples (this did happen in the years after the Supreme Court said that states could not ban interracial marriage, Loving v. Virginia, 1967), then... hmm, I'm curious what would happen. Suffice it to say that Mike H and the others would probably not be supporting her.
David Hilbert solved Gordan's problem using methods that were nonconstructive in (I think) the 1890's. This was considered controversial, and Gordan famously said this is not math, this is theology. Had someone else solved this problem in 1990, then the fact that the proof is nonconstructive might be noted, and the desire for a constructive proof might be stated, but nobody would think the proof was merely theology.
I don't think the Prob Method was ever controversial; however, it was originally not used much, and a paper might highlight its use in the title or abstract. Now it's used so often that it would be unusual to point to it as a novel part of a paper. If I maintained a website of Uses of the Prob Method in Computer Science, it would be very hard to maintain, since papers use it without commentary.
The same is becoming true of Ramsey Theory. I DO maintain a website of applications of Ramsey Theory to CS (see here), and it's getting harder to maintain, since using Ramsey Theory is no longer quite so novel as to be worth a mention.
SO- when does a math technique (e.g., the prob method) or a social attitude (e.g., acceptance of same-sex marriage) cross a threshold where it's no longer controversial? Or no longer novel? How can you tell? Is it sudden or gradual? Comment on other examples from Math! CS!
Thursday, October 01, 2015
Cancer Sucks
Karsten Schwan said the title quote when we gathered as a faculty two years ago to mourn Mary Jean Harrold, a Georgia Tech School of Computer Science faculty member who died from the disease. Karsten just lost his own battle with cancer Monday morning, and my department is now mourning another great faculty member.
Just a few months ago, Alberto Apostolico, an algorithms professor at Georgia Tech, also passed away from cancer.
I went back through the obituaries in the blog and we lost quite a few to cancer, often way too young, including Mihai Pătraşcu, Benoît Mandelbrot, Partha Niyogi, Ingo Wegener, Clemens Lautemann and Carl Smith. I just taught Lautemann's proof that BPP is in Σ₂ᵖ ∩ Π₂ᵖ in class yesterday.
With apologies to Einstein, God does play dice with people's lives, taking them at random in this cruel way. Maybe someday we'll find a way to cure or mitigate this terrible disease but for now all I can say is Cancer Sucks.