Monday, June 01, 2015

Award Season

László Babai will receive the 2015 Knuth Prize and Daniel Spielman and Shang-Hua Teng will receive the 2015 Gödel Prize. ACM issued a press release for both awards, which will be presented at the upcoming STOC at FCRC.

Babai did seminal research on interactive proofs, communication complexity, group algorithms and much more. One cannot count the number of PhD theses in mathematics and computer science that can trace themselves back to some initial work by Babai. I was lucky to have Laci Babai as a colleague, mentor and friend during my years at the University of Chicago.

Spielman and Teng, who received the 2008 Gödel Prize for smoothed analysis, won again for three papers using nearly linear time Laplacian solvers for a series of graph problems.
The ACM Awards ceremony later this month will have a number of theory related prizes.

Thursday, May 28, 2015

Who wins this bet?


Alice and Bob are at an auction and Alice wants to buy an encyclopedia set from 1980. Bob says don't buy that, you'll never use it. In this age of Wikipedia and Google and THE WEB.  Alice says you don't know that.  They agree that Alice will spend no more than $20.00 on it (and she does win it at $20.00) and that:

If Alice does not use the encyclopedia within 5 years then she owes Bob $10.00. If she does use it then Bob owes Alice $10.00.

Three years later it's really cold outside. Alice's house is not that well insulated, so she props carpets against the bottom cracks of the door and weighs them down with the volumes of the encyclopedia. This helps her keep warm and cuts down on her heating bill.

Alice says I used the encyclopedia. Pay up! In your face! I used them! You were wrong!

 Bob says You didn't use them to look anything up. So that doesn't count.

Alice and Bob are asking YOU to decide. So leave comments either way and whoever has more votes before my next post wins! Feel free to leave reasons as well to persuade the other readers.






Monday, May 25, 2015

John Nash (1928-2015)

John Nash and his wife Alicia died in a taxi accident, returning from the airport after he received the Abel prize in Norway. The public knew John Nash as the "Beautiful Mind" of book and screen, but we knew him as one of the great geniuses of the 20th century. Rakesh Vohra captures Nash's life and work, including his amazing letters to the NSA.

I briefly met John Nash at some MIT alumni events in New Jersey when I lived there (even though neither of us were MIT undergrads). He would come with his wife and son, the son wearing a winter coat no matter the season. Nash just seemed like any other introverted scientist, happy to talk though understating his research ("I did some work in game theory") and never revealing the challenging life he led.

Now that John and Alicia have found their final equilibrium, may we remember them and Nash's vision of using mathematics to understand the world we live in.

Thursday, May 21, 2015

An Intentional and an Unintentional teaching experiment regarding proving the number of primes is infinite.


I taught Discrete Math Honors this semester. Two of the days were cancelled entirely because of snow (the entire school was closed) and four more I couldn't make because of health issues (I'm fine now). People DID sub for me those four days and DID do what I would have done. I covered some crypto which I had not done in the past.

Because of all of this I ended up not covering the proof that the primes are infinite until the last week.

INTENTIONAL EXPERIMENT: Rather than phrase it as a proof by contradiction I phrased it, as I think Euclid did, as

Given primes p1,p2,...,pn you can find a prime NOT on the list. (From this it easily follows that the primes are infinite.)

Proof: the usual one. Look at p1 x p2 x ... x pn + 1; either it's prime or it has a prime factor, and either way you get a prime not on the list (since dividing by any pi leaves remainder 1).

The nice thing about doing it this way is that there are EASY examples where p1xp2x...xpn+1 is NOT prime

(e.g., the list is {2,5,11} yields 2x5x11 + 1 = 111 = 3 x 37, so 3 and 37 are both not in {2,5,11})


whereas if you always use the product of the first n primes plus 1, you don't get to a non-prime until 2x3x5x7x11x13 + 1 = 30031 = 59 x 509.
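The constructive step can be sketched in a few lines of code (the function names here are mine, not from the lecture):

```python
def smallest_prime_factor(m):
    """Return the smallest prime factor of m >= 2 by trial division."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

def prime_not_in(primes):
    """Euclid's step: given a finite list of primes, return a prime not on it."""
    m = 1
    for p in primes:
        m *= p
    m += 1
    # m leaves remainder 1 when divided by any prime on the list, so its
    # smallest prime factor (possibly m itself) is a new prime.
    return smallest_prime_factor(m)

print(prime_not_in([2, 5, 11]))           # 2x5x11 + 1 = 111 = 3x37, returns 3
print(prime_not_in([2, 3, 5, 7, 11, 13])) # 30031 = 59x509, returns 59
```

Running it on the two examples from the post reproduces 3 and 59 as the new primes.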


UNINTENTIONAL: Since I did the proof at the end of the semester they ALREADY had some proof maturity, more so than had I done it (as I usually do) about 1/3 of the way through the course.

They understood the proof better than prior classes had, even prior honors classes. Hence I should prove all of the theorems the last week! :-)

But seriously, they did understand it better, but I don't know which of the two factors, or what combination of them, caused it. Oh well.


Monday, May 18, 2015

Theory Jobs 2015

In the fall we list theory jobs, in the spring we see who got them. Like last year, I created a fully editable Google Spreadsheet to crowd source who is going where. Ground rules:
  • I set up separate sheets for faculty, industry and postdoc/visitors.
  • People should be connected to theoretical computer science, broadly defined.
  • Only add jobs that you are absolutely sure have been offered and accepted. This is not the place for speculation and rumors.
  • You are welcome to add yourself, or people your department has hired.
This document will continue to grow as more jobs settle. So check it often.


Thursday, May 14, 2015

Fiftieth Anniversary of the Publication of the seminal paper on Computational Complexity

Juris Hartmanis and Richard Stearns in a photo dated May 1963. The main theorem from their paper, later improved by Hennie and Stearns, is on the board. Photo courtesy of Richard Stearns.
The seminal paper of Juris Hartmanis and Richard Stearns, On the Computational Complexity of Algorithms, appeared in the Transactions of the American Mathematical Society in the May 1965 issue. This paper gave the name to the field of Computational Complexity which I took for the name of this blog. Hartmanis and Stearns received the Turing Award in 1993 for this work.

I've mentioned this paper several times in the blog before, including as a favorite theorem. Hartmanis and Stearns first formalized and truly popularized the idea of measuring time and other resources as a function of the problem size, laying the foundation for virtually every paper in computational complexity and algorithms to follow.

Both Hartmanis and Stearns wrote about those early days. The main breakthroughs for their paper started in November 1962, and on December 31 Hartmanis wrote in his logbook "This was a good year." A good year indeed.

Monday, May 11, 2015

The law of the excluded middle of the road republicans


In the book Hail to the Chiefs about the presidents, the author, when pointing to a race between two people who had no business being president (I think it was Franklin Pierce vs Winfield Scott), wrote something like That's the thing about elections, someone has to win.

Looking at the republicans running for the nomination I can (with the help of reading many of Nate Silver's columns) tell you why, for each one, they can't win the nomination. Note that this is not a partisan thing. But again, someone has to win. Is it possible to have the statements

A1 can't win AND A2 can't win AND .... AND An can't win

and yet someone wins?

Here is a list of the candidates and why they can't win.

  1. Jeb Bush is what passes for a front runner nowadays. Has the money, does not have the party (very few endorsements), and not doing well in polls in Iowa or New Hampshire, the first two states. Possibly because he does not hate Obama enough. It's an interesting question whether the party, the people, or the money pick the candidate. In the past it's been the party and the money together, but that might be changing. CAN'T WIN: Viewed as too moderate by the people who go to caucuses. Also some people may be put off by the family name. THOUGHT: If H CLINTON was not the likely Democratic candidate, thus making the family name less of an issue in the general, I don't think he would have run at all. UPDATE- Dropped out after the South Carolina Primary.
  2. Marco Rubio. Running for president for the first time is hard. The republicans rarely nominate someone who hadn't run before (W in 2000, Ford in 1976, which is an outlier since he was prez). Perhaps the voters/party/money like someone they are familiar with OR perhaps first timers make mistakes. CAN'T WIN: Will make some mistake, and some might think it's not his turn since he's so young. Also, Senators have it rough since they sometimes vote for bills in funny ways. TRIVIA: The electoral college reps from state X cannot cast both the Prez and Veep vote for people both from state X. So you won't see a Bush-Rubio or Rubio-Bush ticket. UPDATE- Dropped out in mid March after losing his home state to Trump. Very odd in that various pundits pointed to him as the hope to beat Trump even though he only won Minnesota and Puerto Rico. Looks like running for prez IS hard for first timers- though Trump seems to be managing.
  3. Rick Perry. He lost in 2012 because either he was too soft on immigrants (he supported some sort of Dream Act) or because of his Whoops moment in a debate. Frankly I have sympathy on that one--- I also sometimes forget which cabinet positions I want to get rid of. He has tried to cure both of his problems by flip-flopping (or evolving) on immigration and by wearing glasses so at least he looks smart. CAN'T WIN: His past failure makes him not look that serious this time around, and he will likely have another WHOOPS moment. UPDATE: He dropped out in September. His speech saying he was dropping out was pretty good (I am not being sarcastic).
  4. Scott Walker. Like Rubio but might be in better shape because he's a governor. Still, people in his home state are beginning to turn on him, a bad sign. He may soon have the same problem that Gov Brownback of Kansas has--- you promise to cut taxes, close loopholes, and cut spending, and that might actually be a good idea if done right, but you cut taxes, don't close loopholes, cut spending in stupid places like education, run up a big debt, destroy your state's economy, and ... get re-elected. CAN'T WIN: Having not run before, Walker will say something stupid. And as a gov of a moderate state I suspect some moderate things he did may be a problem for hard core republican voters. Plus his state's current problems may be an issue. UPDATE: He dropped out in September. See note on Bobby Jindal.
  5. Ted Cruz. Which of these quotes did Ted Cruz really say and which did I hear in some satirical setting (as a fan of The Daily Show, The Colbert Report, John Oliver's show, The Capitol Steps, and others, I lose track of where I heard what): (1) There was no ebola before Obamacare, or (2) Net neutrality is Obamacare for the internet. I'll answer at the end of the post. He is now using Obamacare himself. CAN'T WIN: A niche candidate with a small but loyal set of supporters. Not enough. CAVEAT: Might be running to get some points of view out there, like Adlai Stevenson and Barry Goldwater, though they got all the way to the nomination, which I doubt he can.
  6. Rand Paul. Interesting mathematically: an authenticity-electability trade-off. Some people are attracted to his libertarianism, authenticity and consistency, but not enough to get him anywhere close to the nomination. So he changes some of his positions to be more mainstream but less libertarian, authentic, and consistent. Alas, those who trade their integrity for electability end up with neither. CAN'T WIN: His evolving views might lose him his base but not gain him any establishment cred. Also he has the chicken-egg problem in that even people who like him don't think he can win the general election. (Ted Cruz and others on this list may also have this problem.) ADDED IN FEB- DROPPED OUT AFTER IOWA CAUCUS.
  7. Mike Huckabee. Won't get much support beyond his Social Conservative base. His stance on Same Sex Marriage will hurt him outside of the social conservatives, especially in 2016 (as opposed to his last run in 2008). I'm surprised he's running- he got what he wanted from his last run, a show on FOX News. CAN'T WIN: Was a moderate on some economic and immigration issues (he may also `evolve' which won't work) and a conservative on social issues, which is just the wrong mix for current republican primary voters. Note that many candidates are trying to avoid the Same Sex Marriage question as they know that being opposed to it will hurt them in the general. Plus some don't want to be on the wrong side of history (or as the kids say, WSOH). Politics- sometimes you're forced to have a public opinion that you disagree with and know will put you on the WSOH, but you're stuck with it. I think Huckabee is sincere in his opposition to same sex marriage but he must surely know he's on the WSOH. He's hoping for a win in Iowa like he had in 2008. I predict that if he doesn't get it he'll drop out. ADDED IN FEB- DROPPED OUT AFTER IOWA CAUCUS.
  8. Ben Carson. I suspect he is actually running to be a FOX News commentator. At that he might succeed. CAN'T WIN: First timer, never ran for anything, he'll be a curiosity not a candidate. Which of the following did he say: (1) Obama is a sociopath, (2) Obamacare is like slavery. This may even hurt him with the Republican hard core who want someone who can win. CAVEAT: is being African-American going to help or hurt? I doubt his campaign will get far enough to tell. The other candidates and the debate panelists (in the sanctioned debates) will treat him with kid gloves to avoid being called racist. UPDATE- saying absurd things about Obama seems to not hurt any candidate. ADDED LATER- he dropped out in early March, prob because of his poor showing on Super Tuesday.
  9. Carly Fiorina. She said that our founding fathers did not intend there to be a political class, what we now call politicians. They intended for ordinary people (like the president of HP), for the good of their community, to serve in office. She left out that the founding fathers also did not intend for women to be president. CAN'T WIN: First timer, never ran for anything. (Correction added later: she ran for Senator of California. She got the nomination but lost to incumbent Barbara Boxer.) CAVEAT: is being female going to help or hurt? I doubt her campaign will get far enough to tell. The other candidates and debate panelists (in the sanctioned debates) will treat her with kid gloves to avoid being called sexist. HER HOOK: She claims that as a woman she can neutralize H Clinton's woman-advantage in the general. Interesting that she is making an electability argument instead of a policy argument, given that she has no chance of being elected. ADDED LATER- Dropped out after the NH primary.
  10. Chris Christie. CAN'T WIN: Hated inside of New Jersey. Hated outside of New Jersey. ADDED LATER- Dropped out after the NH primary.
  11. Bobby Jindal. Once said the Republicans have to stop being the stupid party. Later said some stupid things about Muslims in America. CAN'T WIN: If he ran as the moderate sane voice who will rescue the party from itself, he might get some traction. If he runs as anything else he has too much competition. Also a first-time-runner, which is hard. UPDATE- He dropped out in November. He was in double digits (not his percent- the number of people who wanted him to be Prez- his wife and one of his kids; the other is in the Cruz camp). He tried to be BOTH Tea-party AND establishment, which was foreshadowed by saying Reps shouldn't be the stupid party (an establishment thing to say) and being racist towards Muslims (a Tea-Party thing). Was not liked by either. Scott Walker may have had the same problem. Rubio may have the same problem.
  12. Lindsey Graham. CAN'T WIN: Has worked with democrats, which should be a PRO but it's a CON. He's running to be the voice of more troops-on-the-ground in the mideast, not to win. Since Americans don't really want troops on the ground I doubt this will go anywhere; however, instead of troops the other candidates will show their machismo by deporting Muslims. UPDATE- He dropped out.
  13. John Kasich. Gov of Ohio. CAN'T WIN: Not that well known. Democrats have nominated unknowns (B Clinton, B Obama) but republicans almost never do (W might count).
  14. Donald Trump (ADDED LATER). Some people say that corporate America controls this country. If we make Donald Trump Prez we're just cutting out the middle-man. Won't get the nomination--- over half of republicans say they would never vote for him--- but his running is a farewell gift to Jon Stewart.
  15. Rick Santorum (ADDED LATER). Against Birth Control? Really? Google him to find other reasons he can't win. Odd thing- usually the person who came in second the last time around has a pretty good shot at getting the nomination this time around, but the fact that he came in second last time may be a sign of how weak the field was last time. Is hoping for a win in Iowa like he had in 2012. I predict that if he doesn't do well in Iowa he'll drop out. UPDATE- DROPPED OUT AFTER THE IOWA CAUCUS.
  16. George Pataki (ADDED LATER) Would be at home in the democratic party. I want to see him challenge Hillary from the right. Oh, he's a republican? But he's pro-choice, pro-gay, anti-gun, and doesn't seem to hate Obama. UPDATE- DROPPED OUT.
  17. Rob Portman. (ADDED LATER) I saw him on a list of possible nominees. Not under `declared', not under `exploratory committee' (Chris Christie and Scott Walker are still exploring, which surprised me- I thought they had already declared), but on a list of `no indication'. Not sure why he's on any list; however, as Rick Perry taught us last time (and this is not a joke), you need to start early. UPDATE- never got into the race.
  18. Jim Gilmore (ADDED LATER). Never made it into the adult debates nor even the kiddie table. I do not know why he is running since he does not have a chance and is not trying to push some sort of issue (that may be why Rand Paul is running- to promote a viewpoint). UPDATE: In the least important political story of the year, Jim Gilmore dropped out after the NH primary.

UPDATE IN FEB: After the NH primary it's down to 6. Maybe now they CAN all fit on a debate stage!

UPDATE IN FEB: After the SC primary Jeb dropped out, so it's down to 5. They can now all fit into a car!

UPDATE ON MAR 2. Ben Carson dropped out, so it's down to 4. Now they can have an intelligent debate.

UPDATE ON MAR 4. They spent some of the debate talking about the size of Donald Trump's... hands.

UPDATE ON MAR 16. Rubio dropped out. Now it's just Trump, Cruz, and Kasich. They can all fit in a Volkswagen.

There may be more (Donald Trump anyone? ADDED LATER- When I first posted this, mentioning Donald Trump was supposed to be a joke. But political satire and reality may have finally merged, with a reality-TV star running for Prez). But my point is that it seems like nobody can win, yet someone has to. Do you know other examples where A OR B OR C has to be true, yet none of A, B, C look plausible?

I can phrase this another way:

novices can't win (e.g., Ben Carson) AND first timers can't win (e.g., Marco Rubio) AND too moderate can't win (e.g., perception of Jeb) AND unknowns can't win (e.g., John Kasich).

They can't all be right, but looking at it, `first timers' is probably the weak link in my reasoning. I could replace it with `inexperienced politician'. Even so, it sure looks like nobody can win.

Ted Cruz: Both of the quotes I attribute to him he really did say.
 
I don't know Ben Carson's original quote but he backtracked to Obama reminds you of a psychopath, which is much better than saying Obama IS a psychopath. But he never said sociopath, so the quote I gave is NOT from Ben Carson.

On Obamacare he said it's the worst thing to happen in America since slavery. But later, opposite-of-backtracking, he said it was in a way like slavery because it robs you of your ability to control your own life.


Thursday, May 07, 2015

The Virtuous Cycle

Last week we had a celebration of the new CS graduates at Georgia Tech where each graduating student said what their plans are for next year. Other than the few going to grad school, they all have great jobs, but the difference I see this year is how many are staying in Atlanta. Any CS student who wants to stay in Atlanta can find a great job in Atlanta.

A large number of companies are moving their headquarters to, or establishing large facilities in, the Atlanta area, and the reasons they give almost always include the students and researchers at Georgia Tech. We're starting to grow a good start-up culture here as well.

Companies in Atlanta want our students and our research. That helps add to the prestige of Georgia Tech which in turn draws more companies. A virtuous cycle. A similar story is playing out in several other cities across the country and I even saw it in tiny but growing Bozeman where I visited Montana State earlier this week. Computer Science these days plays a major role if not the largest role in many of these industries.

All this growth leads to challenges such as finding the people and resources to meet the growing demands. All told, though, it's a good problem to have.

We don't want the success of the STEM fields to come at the expense of the rest of academics. It shouldn't have to be CS vs Classics, Physics or Philosophy. How we make it all successful will be the great challenge in higher education in the years to come.

Monday, May 04, 2015

Sizes of DFAs, NFAs, DPDAs, UCFG, CFGs, CSLs.


If A is a decider (e.g., DFA) or generator (e.g., CFG) then L(A) is the language that it decides or generates.

The following are well known:

L(DFA) = L(NDFA) ⊂ L(DPDA) ⊂ L(PDA) ⊂ L(CSL).

We are concerned with the size of these devices. For a DFA or NDFA the size is the number of states; for a DPDA or PDA the size is the size of the stack alphabet plus the number of states; and for a CSL it's the number of nonterminals.

If D and E are two classes of devices (e.g., DFAs and DPDAs) then a bounding function for (D,E) is a function f such that if L is recognized by both a D-device and an E-device, and L is recognized by an E-device of size n, then it is recognized by a D-device of size ≤ f(n). We abbreviate bounding function as b-fun.

Readers of this column likely know that f(n)=2^n is a b-fun for (DFA,NDFA) and probably know that this is tight. Below are some results that I suspect many readers don't know. Some of them may be suitable for inclusion in an undergrad theory class. In what is below, ≤ means Turing-Less-Than-Or-Equal.
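The 2^n blowup for (DFA,NDFA) can be seen concretely on the standard witness language L_n = {x : the n-th symbol from the end of x is 1}, which has an (n+1)-state NDFA but whose subset-construction DFA has 2^n states. A quick sketch (function names are mine):

```python
def nfa_for_nth_from_end(n):
    """(n+1)-state NFA for {x in {0,1}* : the n-th symbol from the end is 1}.
    State 0 is the start, with a self-loop on 0 and 1; on a 1 it may guess
    that the last n symbols begin here; states 1..n then count them off."""
    delta = {(0, '0'): {0}, (0, '1'): {0, 1}}
    for i in range(1, n):
        delta[(i, '0')] = {i + 1}
        delta[(i, '1')] = {i + 1}
    return delta  # state n (accepting) has no outgoing transitions

def subset_construction_size(delta, start=0):
    """Number of reachable subsets (= DFA states) under the subset construction."""
    start_set = frozenset([start])
    seen, frontier = {start_set}, [start_set]
    while frontier:
        S = frontier.pop()
        for a in '01':
            T = frozenset(q2 for q in S for q2 in delta.get((q, a), set()))
            if T not in seen:
                seen.add(T)
                frontier.append(T)
    return len(seen)

for n in range(1, 7):
    print(n, subset_construction_size(nfa_for_nth_from_end(n)))  # prints n, 2**n
```

A reachable subset here records exactly which of the last n symbols were 1, so all 2^n subsets show up.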

  1.  Stearns showed that f(n) = n^{n^{n^{O(n)}}} is a b-fun for (DFA,DPDA).
  2.  Valiant improved this to double exp for a b-fun for (DFA,DPDA).
  3.  Meyer and Fischer showed the 2^n lower bound for (DFA,NDFA). They also showed a lower bound of 2^{2^{O(n)}} for (DFA,DPDA). I think the question of closing the gap between Valiant's result and the Meyer-Fischer result is still open; however, if you know a ref please leave a comment.
  4. Valiant showed that if f is a b-fun for (DPDA,UCFG) then HALT ≤ f.
  5. Schmidt showed that if f is a b-fun for (UCFG,CFG) then HALT ≤  f. 
  6. Hartmanis showed that if f is a b-fun for (DPDA,PDA) then HALT ≤ f.
  7. Hay  showed that if f is a b-fun for (DPDA,PDA) then f is NOT computable-in-HALT. 
  8. Beigel and Gasarch prove a general theorem from which they obtain the following: (a) if f is a b-fun for (DPDA,PDA) then f ≤ INF. (It is easy to show that there exists a b-fun f ≤ INF for (DPDA,PDA), so the Turing degree is now precise.) (b) If f is a b-fun for (PDA,CSL) then the same as part (a).
Results 3,4,5,6,7 can be restated in the following way--- we'll do result 6 as an example:

If f ≤ HALT then there exist infinitely many n such that there exists a language L_n such that (a) L_n is accepted by a DPDA, (b) there is a PDA for L_n of size n, and (c) any DPDA for L_n has size at least f(n).

Are there any ``for almost all n'' type bounds? There are, but they are much weaker. The following theorems are from the Beigel-Gasarch paper pointed to above.

  1. For almost all n there exists a cofinite (hence DPDA) L_n that has a PDA of size O(n) but any DPDA for it has size 2^{2^{n^{Ω(1)}}}.
  2. Same as point 1 but for PDA,CSL.

Both results 1 and 2 above use natural languages, in that they are not created for the sole purpose of proving the theorem- no diagonalization (my spell check says that's spelled wrong but I think it's spelled right). Using a construction, Beigel-Gasarch obtained (Meyer probably had the result 40 years earlier with a diff proof; see the Beigel-Gasarch paper for historical details) that if f ≤ HALT then for almost all n there is a lang L_n that has a CSL of size n but any PDA for it has size at least f(n).






Wednesday, April 29, 2015

Is Logarithmic Space Closed Under Kleene Star?

A Georgia Tech student asked the title question in an introductory theory course. The instructor asked his TA, the TA asked me and I asked the oracle of all things log space, Eric Allender. Eric didn't disappoint and pointed me to Burkhard Monien’s 1975 theorem
L is closed under Kleene star if and only if L = NL.
L here is the set of problems solved in deterministic O(log n) space and NL is the nondeterministic counterpart. For a set of strings A, the Kleene star, denoted A*, is the set of all finite concatenations of strings of A. For example if A = {00,1} then A* = {ε, 1, 00, 11, 001, 100, 111, 0000, 0011, 1100, …} where ε is the zero-length string.

Kleene star comes up in regular expressions but also makes for many a good homework problem.
  1. Show that if A is in NL then A* is also in NL.
  2. Show that if A is in P then A* is also in P.
  3. Show that if A is c.e. (recognizable) then A* is also c.e.
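The key idea behind problems 1-3, trying every way to split the input into pieces from A, can be sketched with a simple dynamic program over prefixes (my function names; here A is a finite set for concreteness, whereas in the homework A is an arbitrary language with a membership test):

```python
def in_star(x, A):
    """Decide x in A* by DP: star[i] is True iff the prefix x[:i] is in A*."""
    n = len(x)
    star = [False] * (n + 1)
    star[0] = True  # epsilon is always in A*
    for i in range(1, n + 1):
        # x[:i] is in A* iff some split point j has x[:j] in A* and x[j:i] in A
        star[i] = any(star[j] and x[j:i] in A for j in range(i))
    return star[n]

A = {"00", "1"}
print(in_star("", A))      # True (epsilon)
print(in_star("1001", A))  # True: 1, 00, 1
print(in_star("010", A))   # False: no parse
```

This runs in polynomial time given a polynomial-time test for A, which is the heart of problem 2; problems 1 and 3 replace the DP with guessing the split points.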
Problem 1 above is equivalent to saying NL is closed under Kleene star and implies the “if” part of Monien’s result. Here is a simple proof of the other direction that L closed under Kleene star implies L = NL.

Consider the following NL-complete problem: The set of triples (G,s,t) such that G is a directed graph with a restriction that all edges (i,j) have i<j and there is a path from node s to node t.

Define the following language
B = {G#i+1#G#i+2#...#G#j# | there is an edge (i,j) in G}
B is computable in log space and the string G#s+1#G#s+2#…#G#t# is in B* if and only if there is a path from node s to node t. QED

Allender, Arvind and Mahajan give some generalizations to log-space counting classes and also note that there are languages computable in AC0 (constant-depth circuits) whose Kleene star is NL-complete. B above is one such set.

Monday, April 27, 2015

Advice on running a theory day

Last semester I ran a THEORY DAY AT UMCP. Below I have ADVICE for people running theory days. Some I did, some I didn't do but wish I did, and some are just questions you need to ask yourself.

1) Picking the day- I had two external speakers (Avrim Blum and Sanjeev Arora) so I was able to ask THEM what day was good for THEM. Another way is to pick the DAY first and then ask for speakers.

2) Number of external speakers- We had two, and the rest were internal. The external speakers had hour-long talks, the internal had 20-minute talks. This worked well; however, one can have more or even all external speakers.

3) Whoever is paying for it should be told about it toward the beginning of the process.

4) Lunch- catered or out? I recommend catered if you can afford it, since it's a good time for people to all talk. See next point.

5) If it's catered you need a head count, so you need people to register. The number you get may be off- you may want to ask when they register if they want lunch. Then add ten percent.

6) Coordinate with the guest speakers on what time works for them to arrive before they make their travel plans, so you can arrange their dinner the previous night.

7) If you have the money and energy, do name tags ahead of time. If not, just have some sticky tags and a magic marker.

8) Guest speakers- for getting them FROM Amtrak or the airport to dinner/hotel, give them a personal lift (they may be new to the city and a bit lost). For getting them from the event back TO the airport or Amtrak, you can call a limo or taxi (though if you can give a ride, that's of course good).

9) Pick a day early and stick with it. NO day is perfect, so if someone can't make it, or there is a faculty meeting that day, then don't worry about it.

10) Have the website, speakers, etc. all set at least a month ahead of time. Advertise on theory-local email lists, blogs you have access to, and the analogs of theory-local lists for other places (I did NY, NJ, PA). Also email people to spread the word.

11) Advertise to ugrads. Students are the future!

12) If you are the organizer you might not want to give a talk since you'll be too busy doing other things.

13) Well established regular theory days (e.g., NY theory day) can ignore some of the above as they already have things running pretty well.


Friday, April 24, 2015

Fifty Years of Moore's Law

Gordon Moore formulated his famous law in a paper dated fifty years and five days ago. We all have seen how Moore's law has changed real-world computing, but how does it relate to computational complexity?

In complexity we typically focus on running times but we really care about how large a problem we can solve with current technology. In one of my early posts I showed how this view can change how we judge running time improvements from faster algorithms. Improved technology also allows us to solve bigger problems. This is one justification for asymptotic analysis. For polynomial-time algorithms a doubling of processor speed gives a constant multiplicative factor increase in the size of the problem we can solve. We only get an additive increase for an exponential-time algorithm.
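A toy calculation makes the contrast concrete: with time budget T, an n^2-time algorithm handles n up to √T, so doubling T multiplies the solvable size by about √2, while a 2^n-time algorithm only gains about one more input symbol (a sketch with made-up budgets):

```python
import math

def max_size_poly(budget):
    """Largest n with n**2 <= budget (an n^2-time algorithm)."""
    return math.isqrt(budget)

def max_size_exp(budget):
    """Largest n with 2**n <= budget (a 2^n-time algorithm)."""
    return budget.bit_length() - 1

# Doubling the budget twice: the poly size grows by ~sqrt(2) each time,
# the exp size by +1 each time.
for budget in (10**6, 2 * 10**6, 4 * 10**6):
    print(budget, max_size_poly(budget), max_size_exp(budget))
```

The printed sizes are 1000, 1414, 2000 for the quadratic algorithm versus 19, 20, 21 for the exponential one.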

Although Moore's law continues, computers stopped getting faster about ten years ago. Instead we've seen the rise of new technologies: GPUs and other specialized processors, multicore, cloud computing and more on the horizon.

The complexity and algorithmic communities have been slow to catch up. With some exceptions, we still focus on single-core, single-thread algorithms. Rather, we need to find good models for these new technologies and develop algorithms and complexity bounds that map nicely onto our current computing reality.

Wednesday, April 22, 2015

The New Oracle Result! The new circuit result! which do you care about?

You have likely heard of the new result by Benjamin Rossman, Rocco Servedio, and Li-Yang Tan on random oracles (see here for the preprint) from either Lance or Scott or some other source:

Lance's headline was PH infinite under random oracle

Scott's headline was Two papers, but when he stated the result he also stated it as a random oracle result.

The paper itself has the title

An average case depth hierarchy theorem for Boolean circuits

and the abstract is:

We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every d ≥ 2, there is an explicit n-variable Boolean function f, computed by a linear-size depth-d formula, which is such that any depth-(d−1) circuit that agrees with f on a (1/2 + o_n(1)) fraction of all inputs must have size exp(n^{Ω(1/d)}). This answers an open question posed by Hastad in his Ph.D. thesis.
Our average-case depth hierarchy theorem implies that the polynomial hierarchy is infinite relative to a random oracle with probability 1, confirming a conjecture of Hastad, Cai, and Babai. We also use our result to show that there is no "approximate converse" to the results of Linial, Mansour, Nisan and Boppana on the total influence of small-depth circuits, thus answering a question posed by O'Donnell, Kalai, and Hatami.
A key ingredient in our proof is a notion of random projections which generalize random restrictions.

Note that they emphasize the circuit aspect.

In Yao's paper, where he showed that PARITY in constant depth requires exponential size, the title was

Separating the polynomial-time hierarchy by oracles

Håstad's paper and book had titles about circuits, not oracles.

When Scott showed that relative to a random oracle P^NP is properly contained in Σ_2^p, the title was

A Counterexample to the Generalized Linial-Nisan Conjecture

However the abstract begins with a statement of the oracle result.

SO, here is the real question: Which is more interesting, the circuit lower bounds or the oracle results that follow? The authors' titles and abstracts might tell you what they are thinking, then again they might not. For example, I can't really claim to know whether Yao cared about oracles more than circuits.

Roughly speaking, circuit results are interesting since they are actual lower bounds, often on reasonable models for natural problems (both of these statements can be counter-argued). Oracle results are interesting since they give us a sense that certain proof techniques are not going to work. Random oracle results are interesting since, for classes like these (a notion that is not well defined), things true relative to a random oracle tend to be things we think are true.

But I want to hear from you, the reader: Which of PARITY NOT IN AC_0 and THERE IS AN ORACLE SEPARATING PH FROM PSPACE do you find more interesting? Which is easier to motivate to other theorists? To non-theorists? (For non-theorists I think PARITY.)

Thursday, April 16, 2015

PH Infinite Under a Random Oracle

Benjamin Rossman, Rocco Servedio and Li-Yang Tan show new circuit lower bounds that imply, among other things, that the polynomial-time hierarchy is infinite relative to a random oracle. What does that mean, and why is it important?

The polynomial-time hierarchy can be defined inductively as follows: Σ^p_0 = P, the set of problems solvable in polynomial time. Σ^p_{i+1} = NP^{Σ^p_i}, the set of problems computable in nondeterministic polynomial time that can ask arbitrary questions to the previous level. We say the polynomial-time hierarchy is infinite if Σ^p_{i+1} ≠ Σ^p_i for all i, and that it collapses otherwise.

That the polynomial-time hierarchy is infinite is one of the major assumptions in computational complexity and would imply a large number of statements we believe to be true, including that NP-complete problems do not have small circuits and that Graph Isomorphism is not coNP-complete.

We don't have the techniques to settle whether or not the polynomial-time hierarchy is infinite so we can look at relativized worlds, where all machines have access to the same oracle. The Baker-Gill-Solovay oracle that makes P = NP also collapses the hierarchy. Finding an oracle that makes the hierarchy infinite was a larger challenge and required new results in circuit complexity.

In 1985, Yao, in his paper Separating the polynomial-time hierarchy by oracles, showed that there are functions with small depth-(d+1) circuits that require large depth-d circuits, which was what the oracle construction needed. Håstad gave a simplified proof. Cai proved that PSPACE ≠ Σ^p_i for all i even if we choose the oracle at random (with probability one). Babai later and independently gave a simpler proof.

Whether a randomly chosen oracle makes the hierarchy infinite required showing the depth separation of circuits in the average case, which remained open for three decades. Rossman, Servedio and Tan solved that circuit problem and get the random oracle result as a consequence. They build on Håstad's proof technique of randomly restricting variables to true and false, generalizing it to a random projection method that projects onto a new set of variables. Read their paper to see all the details.

In 1994, Ron Book showed that if the polynomial-time hierarchy is infinite then it remains infinite relative to a random oracle. Rossman et al. thus give even more evidence that the hierarchy is indeed infinite, in the sense that if they had proven the opposite result then the hierarchy would have collapsed.

I used Book's paper to show that a number of complexity hypotheses hold simultaneously with the hierarchy being infinite, now a trivial consequence of the Rossman et al. result. I can live with that.

Tuesday, April 14, 2015

Baseball is More Than Data

As baseball starts its second week, let's reflect a bit on how data analytics has changed the game: not just the Moneyball phenomenon of ranking players but also the extensive use of defensive shifts (repositioning the infielders and outfielders for each batter) and other maneuvers. We're not quite at the point where technology can replace managers and umpires, but give it another decade or two.

We've seen a huge increase in data analysis in sports. ESPN ranked teams based on their use of analytics and it correlates well with how those teams are faring. Eventually everyone will use the same learning algorithms and games will just be a random coin toss with coins weighted by how much each team can spend.

Steve Kettmann wrote an NYT op-ed piece, Don't Let Statistics Ruin Baseball. At first I thought this was just another Luddite who will be left behind, but he makes a salient point. We don't go to baseball to watch the stats; we go to see people play. We enjoy the suspense of every pitch, the one-on-one battle between pitcher and batter, and the great defensive moves. Maybe statistics can tell a team which players to acquire and where the fielders should stand, but it is still people who play the game.

Kettmann worries about baseball writers' obsession with statistics. Those who write based on stats can be replaced by machines. Baseball is a great game to listen to on the radio because the best broadcasters don't talk about the numbers, they talk about the people. Otherwise you might as well listen to competitive tic-tac-toe.

Thursday, April 09, 2015

FCRC 2015

Every four years the Association for Computing Machinery organizes a Federated Computing Research Conference consisting of several co-located conferences and some joint events. This year's event will be held June 13-20 in Portland, Oregon and includes Michael Stonebraker's Turing award lecture. There is a single registration site for all conferences (early deadline May 18th) and I recommend booking hotels early and definitely before the May 16th cutoff.

Theoretical computer science is well represented.
The CRA-W is organizing mentoring workshops for early career and mid-career faculty and faculty supervising undergraduate research.

A number of other major conferences will also be part of FCRC, including HPDC, ISCA, PLDI and SIGMETRICS. There are many algorithmic challenges in all these areas, and FCRC really gives you an opportunity to sit in talks outside your comfort zone. You might be surprised by what you see.

See you in Portland!

Tuesday, April 07, 2015

Two cutoffs about Waring's problem for cubes


Known:
  1. All numbers except 23 and 239 can be written as the sum of 8 cubes
  2. All but a finite number of numbers can be written as the sum of 7 cubes 
  3. There are infinitely many numbers that cannot be written as the sum of 3 cubes (this you can prove yourself; the other two are hard, deep theorems).
Open: Find x such that:
  1. All but a finite number of numbers can be written as the sum of x cubes.
  2. There are infinitely many numbers that cannot be written as the sum of x-1 cubes.
It is known that 4 ≤ x ≤ 7

Let's say you didn't know any of this and were looking at empirical data.

  1. If you find that every number ≤ 10 can be written as the sum of 7 cubes, this is NOT interesting because 10 is too small.
  2. If you find that every number ≤ 1,000,000 except 23 and 239 can be written as the sum of 8 cubes, this IS interesting, since 1,000,000 is big enough that one thinks this is telling us something (though we could be wrong). What if you find that all but 10 numbers ≤ 1,000,000 (I do not know if that is true) are the sum of seven cubes?
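If you want to generate this kind of empirical data yourself, here is a small dynamic-programming sketch (my own addition, not from the post; the limit of 10,000 is arbitrary) that computes, for each n, the fewest positive cubes summing to n:

```python
# Sketch (my addition): for each n up to LIMIT, compute the least number of
# positive cubes that sum to n, then list the n that are not a sum of 8 cubes.

LIMIT = 10000

cubes = []
i = 1
while i ** 3 <= LIMIT:
    cubes.append(i ** 3)
    i += 1

INF = float("inf")
min_cubes = [0] + [INF] * LIMIT    # min_cubes[n] = fewest cubes summing to n
for n in range(1, LIMIT + 1):
    for c in cubes:
        if c > n:
            break
        if min_cubes[n - c] + 1 < min_cubes[n]:
            min_cubes[n] = min_cubes[n - c] + 1

print([n for n in range(1, LIMIT + 1) if min_cubes[n] > 8])   # prints [23, 239]
```

Up to 10,000 only 23 and 239 need nine cubes, consistent with the known theorem that they are the only such numbers.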
Open but too informal to be a real question: Find x such that
  1. Information about sums-of-cubes for all numbers ≤ x-1 is NOT interesting
  2. Information about sums-of-cubes for all numbers ≤ x IS interesting.
By the intermediate value theorem such an x exists. But of course this is silly. The fallacy probably lies in the informal notion of `interesting'. But here is a serious question: how big does x have to be before data about this would be considered interesting? (NO, I won't come back with `what about x-1'.)

More advanced form: Find a function f(x,y) and constants c1 and c2 such that
  1. If f(x,y) ≥ c1  then the statement all but y numbers ≤ x are the sum of 7 cubes is interesting.
  2. If f(x,y) ≤  c2 then the statement all but y numbers ≤ x are the sum of 7 cubes is not interesting. 
To end with a more concrete question: Show that there are an infinite number of numbers that cannot be written as the sum of 14 4th powers.
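(A computational nudge for that last question, my addition, for those who want one: reduce modulo 16.)

```python
# Hint sketch (my addition): every fourth power is 0 or 1 mod 16, so a sum of
# 14 fourth powers can never be congruent to 15 mod 16.

residues = {n ** 4 % 16 for n in range(16)}
print(residues)   # {0, 1}
```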





Wednesday, April 01, 2015

Which of these stories is false?

I would rather challenge you than fool you on April Fools' Day. Below are some news items. All but one are true. I challenge you to determine which one is false.

  1. Amazon opened a brick-and-mortar store: full story here. If true this is really, really odd, since I thought they saved time and money by not having stores.
  2. You may have heard of some music groups releasing vinyl albums in recent times. They come with an MP3 chip, so I doubt the buyers ever use the vinyl, but the size allows for more interesting art. What did people record on before vinyl? Wax cylinders! Some music groups have released songs on wax cylinders! See here for a release a while back by Tiny Tim (the singer, not the fictional character) and here for a release by a group whose name is The Men Will Not Be Blamed For Anything.
  3. An error in Google Maps led to Nicaragua accidentally invading Costa Rica. Even more amazing: this excuse was correct and Google admitted the error. See here for details.
  4. There was a conference called Galileo Was Wrong, The Church Was Right for people who think the Earth really is the centre of the universe (my spell checker says that `center' is wrong and `centre' is right; maybe it's from England). I assume they mean that the sun and other stuff go around the earth in concentric circles, and not that one can take any reference point and call it the center. The conference is run by Robert Sungenis, who also wrote a book on the topic (it's on Amazon here, and the comments section actually has a debate on the merits of his point of view). There is also a website on the topic here. The Catholic Church does not support him or his point of view, and in fact asked him to take ``Catholic'' out of the name of his organization, which he has done. (ADDED LATER: A commenter named Shane Chubbs, who has read the relevant material on this case more carefully than I have, commented that Robert Sungenis DOES claim that we can take the center of the universe to be anywhere, so it might as well be here. If that's Robert S's only point, it's hard to believe he got a whole book out of it.) OH, this is one of the TRUE points.

Monday, March 30, 2015

Intuitive Proofs

As I mentioned a few months ago, I briefly joined an undergraduate research seminar my freshman year at Cornell. In that seminar I was asked if a two-dimensional random walk on a lattice would return to the origin infinitely often. I said of course. The advisor was impressed until he asked about three-dimensional walks and I said they also hit the origin infinitely often. My intuition was wrong.

33 years later I'd like to give the right intuition. This is rough intuition, not a proof, and I'm sure none of this is original with me.

In a 1-dimensional random walk, you will be at the origin on the nth step with probability about 1/n^{1/2}. Since the sum of 1/n^{1/2} diverges, this happens infinitely often.

In a 2-dimensional random walk, you will be at the origin on the nth step with probability about (1/n^{1/2})^2 = 1/n. Since the sum of 1/n diverges, this happens infinitely often.

In a 3-dimensional random walk, you will be at the origin on the nth step with probability about (1/n^{1/2})^3 = 1/n^{3/2}. Since the sum of 1/n^{3/2} converges, this happens only finitely often.
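The 1/n^{d/2} heuristic can be checked exactly in one dimension, where the chance of being back at the origin after 2n steps is C(2n, n)/4^n ≈ 1/sqrt(πn). A quick check of my own (for a d-dimensional walk whose d coordinates each take an independent ±1 step, a close cousin of the standard lattice walk with the same scaling, the return probability is this quantity raised to the d-th power):

```python
# Sketch (my addition): exact return probability of a 1-d +/-1 walk after
# 2n steps, C(2n, n) / 4^n, compared with the 1/sqrt(pi*n) approximation.

from math import comb, pi, sqrt

def p_origin_1d(n):
    """Probability a 1-d +/-1 walk is at the origin after 2n steps."""
    return comb(2 * n, n) / 4 ** n

for n in (10, 100, 1000):
    print(n, p_origin_1d(n), 1 / sqrt(pi * n))   # the two columns converge
```

Cubing the 1-d probability gives about 1/(πn)^{3/2}, which is summable: that is exactly the convergence used in the 3-dimensional case above.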

Wednesday, March 25, 2015

News Aplenty

Both the Turing Award and the Abel Prize were announced this morning.

MIT database researcher Michael Stonebraker wins the ACM Turing Award. He developed INGRES, one of the first relational databases. Stonebraker is the first Turing Award winner since the prize went up to a cool million dollars.

John Nash and Louis Nirenberg share this year's Abel Prize "for striking and seminal contributions to the theory of nonlinear partial differential equations and its applications to geometric analysis." This work on PDEs is completely independent of the equilibrium results that won Nash the 1994 Nobel Prize.

Earlier this week the CRA released their latest Best Practice Memo: Incentivizing Quality and Impact: Evaluating Scholarship in Hiring, Tenure, and Promotion. In short: Emphasize quality over quantity in research.

The NSF announced their public access plan to ensure that research is "available for download, reading and analysis free of charge no later than 12 months after initial publication".

Sunday, March 22, 2015

Which mathematician had the biggest gap between fame and contribution?

(I was going to call this entry Who was the worst mathematician of all time? but Clyde Kruskal reminded me that it's not (say) Goldbach's fault that his conjecture got so well known; in fact it's a good thing! I'll come back to Goldbach later.)

Would Hawking be as well known if he didn't have ALS? I suspect that within physics yes, but I doubt he would have had guest shots on ST:TNG, The Simpsons, Futurama, and The Big Bang Theory (I just checked IMDb; they don't mention Futurama but they do say he's a Capricorn. I find it appalling that they mention a scientist's horoscope). I also doubt there would be a movie about him.

Would Turing be as well known if he weren't gay and hadn't died young (likely because of the ``treatment'')? I suspect that within computer science yes, but I doubt there would be a play or a movie (and there are even rumors of a musical). Contrast him with John von Neumann, who one could argue contributed as much as Turing, but, alas, no guest shots on I Love Lucy, no movie, no rap songs about him. (The only scientist there may be a rap song about is Heisenberg, and that doesn't count since it would really be about Walter White.)

Hawking and Turing are/were world class in their fields. Is there someone who is very well known but didn't do that much?

SO we are looking for a large gap between how well known the person is and how much math they actually did. This might be unfair to well-known people (it might be unfair to ME, since complexityblog makes me better known than I would be otherwise). However, I have AN answer that is defensible. Since the question is not that well defined there probably cannot be a definitive answer.

First let's consider Goldbach (who is NOT my answer). He was a professor of math and did some work on the theory of curves, differential equations, and infinite series. Certainly respectable. But if not for his conjecture (every even number greater than 2 is the sum of two primes; still open) I doubt we would have heard of him.

My answer: Pythagoras! He is well known as a mathematician but there is no evidence that he had any connection to the theorem that bears his name.

Historians (or so-called historians) would say that it was well known that he proved the theorem, or gave the first rigorous proof, or something, but there is no evidence. Can people make things up out of whole cloth? Indeed they can.

Witness this Mr. Clean commercial, which says: they say that after seeing a magician make his assistant disappear, Mr. Clean came up with a product that makes dirt disappear: the Magic Eraser. REALLY? Who are ``they''? Is this entire story fabricated? Should we call the FCC? :-) ANYWAY, yes, people can and do make things up out of whole cloth and then claim they are well known. Even historians.

Commenters: I politely request that if you suggest other candidates for a large gap, they be people who died before 1950 (an arbitrary but firm deadline). This is not just out of politeness to the living and recently deceased; it's also because these questions need time. Kind of like people who want to rank George W. Bush or Barack Obama as the worst prez of all time: we need lots more time to evaluate these things.




Thursday, March 19, 2015

Feeling Underappreciated

As academics we live and die by our research. While our proofs are either correct or not, the import of our work has a far more subjective feel. One can look at where the work is published or how many citations it gets, and we often say that we care most about the true intrinsic or extrinsic value of the research. But the measure of success we truly care most about is how our research is viewed within the community. Such measures have real value in terms of hiring, tenure, promotion, raises and grants, but it goes deeper, filling some internal need to have our research matter to our peers.

So even little things can bother you: not being cited when you think your work should be; not being mentioned during a talk; seeing a review that questions the relevance of your model; nobody following up on your open questions; difficulty finding excitement in others about your work. We tend to keep these feelings bottled up since we feel we shouldn't be bragging about our own work.

If you feel this way, a few things to keep in mind. It happens to all of us, even though we rarely talk about it; you are not alone. Try not to obsess: it's counterproductive and just makes you feel worse. If appropriate, let the authors know that your work is relevant to theirs; they truly may have been unaware. Sometimes it is best to acknowledge to yourself that while you think the work is good, you can't always convince the rest of the world, and to just move on.

More importantly remember the golden rule, and try to cite all relevant research and show interest in other people's work as well as your own.

Sunday, March 15, 2015

Has anything interesting ever come out of a claimed proof that P=NP or P ≠ NP?


When I was young and foolish, whenever I heard that someone thought they had proven P=NP or P ≠ NP I would think Wow, maybe they did! Then my adviser, who was on the FOCS committee, gave me a paper that claimed to resolve P vs NP to review for FOCS. It was terrible. I became more skeptical.

When I was older and perhaps a byte less foolish I would think the following:

For P=NP proofs: I am sure it does not prove P=NP, BUT maybe there are some nice ideas here that could be used to speed up some known algorithms in practice, or give some insights, or something. That could still be solid research (a type of research that Sheldon Cooper has disdain for, but I think is fine).

and

For P ≠ NP proofs: I am sure it does not prove P ≠ NP, BUT maybe there are some nice ideas here, perhaps an `if BLAH then P ≠ NP', perhaps an n log* n lower bound on something in some restricted model.

Since I've joined this blog I've been emailed some proofs that claim to resolve P vs NP (I also get some proofs in Ramsey theory, which probably saves Graham/Rothschild/Spencer some time, since the cranks bother me instead of them). These proofs fall into some categories:

P ≠ NP because there are all those possibilities to look through (or papers that are far less coherent than that, but that's what it comes down to).

P=NP, look at my code!

P=NP, here is my (incoherent) approach. For example, `first look for two variables that are quasi-related'. What does `quasi-related' mean? They don't say.

Papers where I can't tell what they are saying. NO, they are not saying it's independent of ZFC; I wish they were that coherent. Some say that it's the wrong question, a point which could be argued intelligently, but not by those who are writing such papers.

OKAY, so is there ANY value to these papers? Sadly, looking over all of the papers I've gotten on P vs NP (in my mind; I didn't save them. Should I have?) the answer is an empirical NO. Why not? I'll tell you why not by way of counter-example:

Very early on, before most people knew about FPT, I met Mike Fellows at a conference and he told me about the Graph Minor Theorem and Vertex Cover. It was fascinating. Did he say `I've solved P vs NP'? Of course not. He knew better.

Taking Mike Sipser's grad theory course back in the 1980's, he presented the then-recent result DTIME(n) ≠ NTIME(n). Did Mike Sipser or the authors (Paul, Pippenger, Szemerédi, Trotter) claim that they had proven P vs NP? Of course not; they knew better.

Think of the real advances made in theory. They are made by insiders, outsiders, people you've heard of, people you hadn't heard of before, but they were all made by people who... were pretty good and knew stuff. YES, some are made by people who are not tainted by conventional thinking, but such people can still differentiate an informal argument from a proof, and they know that an alleged proof resolving P vs NP needs to be checked quite carefully before bragging about it.

When it was shown that monotone circuits require exponential size for some problems, there was excitement that this might lead to a proof that P ≠ NP; however, nobody, literally nobody, claimed that these results proved P ≠ NP. They knew better.

So, roughly speaking, the people who claim they've resolved P vs NP either have a knowledge gap or can't see their own mistakes or something else that makes their work unlikely to have value. One test is to ask whether they retracted the proof once flaws were exposed.

This is not universally true: I know of two people who claimed to have solved the problem who are normally pretty careful. I won't name names, since my story might not be quite right, and because they retracted IMMEDIATELY after seeing the error. (When Lance proofread this post he guessed one of them, so there just aren't that many careful people who claim to have resolved P vs NP.) And one of them got an obscure paper into an obscure journal out of their efforts.

I honestly don't know how careful Deolalikar is, nor do I know if anything of interest ever came out of his work, or if he has retracted it. If someone knows, please leave a comment.

I discuss Swart after the next paragraph.

I WELCOME counter-examples! If you know of a claimed resolution of P vs NP where the author's paper had something of value, please comment. `Of value' means one of two things: there really was some theorem of interest, OR there really were some ideas that were later turned into theorems (or, in the case of P=NP, turned into usable algorithms that worked well in practice).

One partial counter-example: Swart's claim that P=NP inspired OTHER papers that were good: Yannakakis's proof that Swart's approach could not work, and some sequels that made Lance's list of best papers of the 2000's (see this post). I don't quite know how to count that.



Thursday, March 12, 2015

Quotes with which I disagree

Often we hear pithy quotes by famous people but some just don't hold water.

"Computer science is no more about computers than astronomy is about telescopes."

Usually attributed to Edsger Dijkstra, the quote tries to capture that using computers or even programming is not computer science, with which I agree. But computer science is most definitely about computers: making them connected, smarter, faster, safer, more reliable and easier to use. You can get a PhD in computer science for a smarter cache system; you can't get a PhD in astronomy for developing a better telescope lens.

"If your laptop cannot find it, neither can the market."

This quote by Kamal Jain is used to argue that a market can't find equilibrium prices when the equilibrium problem is hard to compute. But to think that the market, with thousands of highly sophisticated and unknown trading algorithms combined with more than a few less-than-rational agents all interacting with each other, can be simulated on a single laptop seems absurd, even in theory.

"If you never miss the plane, you're spending too much time in airports."

George Stigler, a 1982 Nobelist in economics, used this quote to explain individual rationality. But missing a flight is a selfish activity, since you will delay seeing the people at the place or conference you are heading to, or your family if you are heading home. I've seen people miss PhD defenses because they couldn't take an extra half hour to head to the airport earlier. If you really have no one on the other side, go ahead and miss your plane. But keep in mind that usually you aren't the only one to suffer if you have to take a later flight.

I take the opposite approach, heading to the airport far in advance of my flight and working at the airport free of distractions of the office. Most airports have the three ingredients I need for an effective working environment: wifi, coffee and restrooms.

Tuesday, March 10, 2015

Guest Post by Thomas Zeume on Applications of Ramsey Theory to Dynamic Descriptive Complexity

Guest Post by Thomas Zeume on

Lower Bounds for Dynamic Descriptive Complexity

(A result that uses Ramsey Theory!)


In a previous blog post Bill mentioned his hobby of collecting theorems that use Ramsey theory. I will present one such application, which arises in dynamic descriptive complexity theory. The first half of the post introduces the setting; the second half sketches a lower bound proof that uses Ramsey theory.

Dynamic descriptive complexity theory studies which queries can be maintained by
first-order formulas with the help of auxiliary relations, when the input structure
is subject to simple modifications  such as tuple insertions and tuple deletions.

As an example consider a directed graph into which edges are inserted. When an edge
(u, v) is inserted, then the new transitive closure T' can be defined from the old
transitive closure T by a first-order formula that uses u and v as parameters:

T'(x,y) = T(x,y) ∨ (T(x, u) ∧ T(v, y))

Thus the reachability query can be maintained under insertions in this fashion
(even though it cannot be expressed in first-order logic directly).
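To make the update rule concrete, here is a small sketch of my own (not from the post) that stores T as a Python set of pairs and applies the formula above on each insertion. Note that T is kept reflexive, so T(x,u) covers the case x = u and T(v,y) covers y = v:

```python
# Sketch (my addition): maintain the transitive closure T of a digraph under
# edge insertions using the quantifier-free update rule
#   T'(x,y) = T(x,y) or (T(x,u) and T(v,y)).

def insert_edge(T, nodes, u, v):
    """Return the new closure after inserting edge (u, v); T must be reflexive."""
    return {(x, y) for x in nodes for y in nodes
            if (x, y) in T or ((x, u) in T and (v, y) in T)}

nodes = [1, 2, 3, 4]
T = {(x, x) for x in nodes}      # empty graph: each node reaches only itself
T = insert_edge(T, nodes, 1, 2)
T = insert_edge(T, nodes, 2, 3)
print((1, 3) in T)               # True: 1 -> 2 -> 3
print((3, 1) in T)               # False: no path back
```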

The above update formula is an example of a dynamic descriptive complexity program.
In general, dynamic programs may use several auxiliary relations that are helpful
to maintain the query under consideration. Then each auxiliary relation has one
update formula for edge insertions and one formula for edge deletions.
The example above uses a single auxiliary relation T (which is also the designated
query result) and only updates T under insertions.

This basic setting was formalized independently, in very similar ways, by Dong, Su and Topor [1, 2] and by Patnaik and Immerman [3]. For both groups one of the main motivations was that first-order logic is the core of SQL, and therefore queries maintainable in this setting can also be maintained using SQL. Furthermore, the correspondence of first-order logic with built-in arithmetic to uniform AC0 circuits (constant-depth circuits of polynomial size with unbounded fan-in) yields that queries maintainable in this way can be evaluated dynamically in a highly parallel fashion.

One of the main questions studied in Dynamic Complexity has been whether
Reachability on directed graphs can be maintained in DynFO
(under insertions and deletions of edges). Here DynFO is the class of
properties that can be maintained by first-order update formulas.
The conjecture by Patnaik and Immerman that this is possible has been recently
confirmed by Datta, Kulkarni, Mukherjee, Schwentick and the author of this post,
but has not been published yet [4].

In this blog post, I would like to talk about dynamic complexity LOWER bounds rather than upper bounds. Research on dynamic complexity lower bounds has not been very successful so far. Even though there are routine methods to prove that a property cannot be expressed in first-order logic (or, for that matter, in AC0), the dynamic setting adds a considerable level of complication. So far, there is no lower bound showing that a particular property cannot be maintained in DynFO (besides trivial bounds for properties beyond polynomial time).

For this reason, all (meaningful) lower bounds proved so far in this setting
have been proved for restricted dynamic programs. One such restriction is to
disallow the use of quantifiers in update formulas.  The example above illustrates
that useful properties can be maintained even without quantifiers
(though in this example under insertions only). Therefore proving lower bounds
for this small syntactic fragment can be of interest.

Several lower bounds for quantifier-free dynamic programs have been proved by using
basic combinatorial tools. For example, counting arguments yield a lower bound for
alternating reachability and non-regular languages [5], and Ramsey-like theorems
as well as Higman's lemma can be used to prove that the reachability query
(under edge insertions and deletions) cannot be maintained by
quantifier-free dynamic programs with binary auxiliary relations [6].

Here, I will present how bounds for Ramsey numbers can be used to obtain lower bounds.
Surprisingly, the proof of the lower bound in the following result relies on both
upper and lower bounds for Ramsey numbers. Therefore the result might be a good candidate
for Bill's collection of theorems that use Ramsey-like results.

THEOREM (from [7])
When only edge insertions are allowed, then (k+2)-clique can be maintained by a
quantifier-free dynamic program with (k+1)-ary auxiliary relations, but it cannot be
maintained by such a program with k-ary auxiliary relations.

SKETCH OF PROOF

I present a (very) rough proof sketch of the lower bound in the theorem.
The proof sketch aims at giving a flavour of how the upper and lower bounds
on the size of Ramsey numbers are used to prove the above lower bound.

Instead of using bounds on Ramsey numbers, it will be more convenient to use
the following equivalent bounds on the size of Ramsey cliques. For every c and large enough n:

1) Every c-colored complete k-hypergraph of size n contains a large Ramsey clique.

2) There is a 2-coloring of the complete (k+1)-hypergraph of size n that does NOT contain a large Ramsey clique.


In the following it is not necessary to know exactly what "large" means
(though it roughly means of size log^{k-1} n in both statements).
These bounds are due to Rado, Hajnal and Erdős.

Towards a contradiction we assume that there is a quantifier-free program P with
k-ary auxiliary relations that maintains whether a graph contains a (k+2)-clique.

The first step is to construct a graph G = (V ∪ W, E) such that in every large subset
C of V one can find independent sets A and B of size k+1 such that adding all edges
between nodes of A yields a graph containing a (k+2)-clique, while adding all edges
between nodes of B yields a graph without a (k+2)-clique. Such a graph G can be constructed
using (2). (Choose a large set V and let W := V^{k+1}. Color the set W according to
(2) with colors red and blue. Connect each blue element w = (v_1, ..., v_{k+1}) in W
with the elements v_1, ..., v_{k+1} in V.)

Now, if the program P currently stores G, then within the current auxiliary relations
stored by P one can find a large subset C of V where all k-tuples are colored equally
by the auxiliary relations. Such a set C can be found using (1). (More precisely:
by a slight extension of (1) to structures.)

By the construction of G, there are subsets A and B of the set C with the property stated
above. As A and B are subsets of C, they are isomorphic with respect to the auxiliary
relations and the edge relation. A property of quantifier-free programs is that for such
isomorphic sets, applying corresponding modification sequences yields the same
answer from the program, where "corresponding" means that they adhere to the isomorphism.

Thus the dynamic program P will give the same answer when adding all edges of A and when
adding all edges of B (in an order that preserves the isomorphism). This is a contradiction,
as the first sequence of modifications yields a graph with a (k+2)-clique while the second
yields a graph without one. Hence such a program P cannot exist. This proves
the lower bound from the above theorem.

I thank Thomas Schwentick and Nils Vortmeier for many helpful suggestions on how to
improve a draft of this blog post.

[1] Guozhu Dong and Rodney W. Topor. Incremental evaluation of datalog queries. In ICDT 1992, pages 282–296. Springer, 1992.

[2] Guozhu Dong and Jianwen Su. First-order incremental evaluation of datalog queries. In Database Programming Languages, pages 295–308. Springer, 1993.

[3] Sushant Patnaik and Neil Immerman. Dyn-FO: A parallel, dynamic complexity class. J. Comput. Syst. Sci., 55(2):199–209, 1997.

[4] Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, and Thomas Zeume. Reachability is in DynFO. arXiv, 2015.

[5] Wouter Gelade, Marcel Marquardt, and Thomas Schwentick. The dynamic complexity of formal languages. ACM Trans. Comput. Log., 13(3):19, 2012.

[6] Thomas Zeume and Thomas Schwentick. On the quantifier-free dynamic complexity of reachability. Inf. Comput., 240:108–129, 2015.

[7] Thomas Zeume. The dynamic descriptive complexity of k-clique. In MFCS 2014, pages 547–558. Springer, 2014.

Thursday, March 05, 2015

(1/2)! = sqrt(pi)/2 and other conventions

(This post is inspired by the book The Cult of Pythagoras: Math and Myths, which I recently
read and reviewed. See here for my review.)

STUDENT: The factorial function is only defined on the natural numbers. Is there some way to extend it to all the reals? For example, what is (1/2)! ?

BILL: Actually (1/2)! is sqrt(π)/2.

STUDENT: Oh well, ask a stupid question, get a stupid answer.

BILL: No, I'm serious, (1/2)! is sqrt(π)/2.

STUDENT: C'mon, be serious. If you don't know, or if it's not known, just tell me.

The Student has a point. (1/2)! = sqrt(π)/2 is stupid even though it's true. So I ask: is there some other way that factorial could be extended to all the reals that is as well motivated as the Gamma function? Since 0! = 1 and 1! = 1, perhaps (1/2)! should be 1.

Is there a combinatorial interpretation for (1/2)! = sqrt(π)/2?

If one defined n! by piecewise linear interpolation, that would work, but is it useful? Interesting?

For that matter, is the Gamma function useful? Interesting?
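For concreteness, the standard extension defines x! = Γ(x+1), which gives (1/2)! = Γ(3/2) = sqrt(π)/2. A quick numerical check in Python, alongside the piecewise-linear alternative just mentioned (the helper `linear_factorial` is my own toy, not a standard function):

```python
import math

# The Gamma extension: x! = Gamma(x+1), so (1/2)! = Gamma(3/2) = sqrt(pi)/2.
print(math.gamma(0.5 + 1))        # 0.8862269...
print(math.sqrt(math.pi) / 2)     # 0.8862269...

# The piecewise-linear alternative: interpolate between consecutive factorials.
def linear_factorial(x):
    lo = math.floor(x)
    frac = x - lo
    return (1 - frac) * math.factorial(lo) + frac * math.factorial(lo + 1)

print(linear_factorial(0.5))      # 1.0, since 0! = 1! = 1
```

Both extensions agree with n! on the naturals; they just disagree in between.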

ANOTHER CONVENTION:  We say that 0^0 is undefined. But I think it should be 1.
Here is why:

d/dx x^n = nx^{n-1} holds everywhere except, when n = 1, at the point x = 0, where it requires 0^0 = 1. Let's make it ALSO true there by saying that x^0 = 1 ALWAYS,
and that includes at 0.
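As it happens, most programming languages, and the IEEE 754 pow function, already adopt this convention. A quick check in Python:

```python
import math

print(0 ** 0)          # 1   (integer exponentiation)
print(math.pow(0, 0))  # 1.0 (IEEE 754 defines pow(0, 0) to be 1)

# With 0^0 = 1, the power rule d/dx x^n = n*x^(n-1) also holds for
# n = 1 at x = 0: the derivative of x is 1 = 1 * 0^0.
```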

A SECOND LOOK AT A CONVENTION: (-3)(4) = -12 makes sense since if I owe my bookie
3 dollars 4 times, then I owe him 12 dollars. But what about (-3)(-4) = 12? This makes certain
other laws of arithmetic extend to the negatives, which is well and good, but we should not
mistake this convention for a discovered truth. IF there were an application where defining
NEG*NEG = NEG made sense, then that would be a nice alternative system, much like the different geometries.
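For the record, the law of arithmetic doing the work here is distributivity; a standard one-line derivation (my addition, not part of the original argument) shows it forces NEG*NEG = POS:

```latex
0 = (-3)\cdot 0 = (-3)\cdot\bigl(4 + (-4)\bigr) = (-3)(4) + (-3)(-4) = -12 + (-3)(-4),
\quad\text{so } (-3)(-4) = 12.
```

So a system with NEG*NEG = NEG would have to give up distributivity, which is exactly the sense in which it would be an alternative system.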

I COULD TALK ABOUT a^{1/2} = sqrt(a) also being a convention adopted to make a rule work out;
however, (1) my point is made, and (2) I think I blogged about that a while back.

So what is my point? We adopt certain conventions, which is fine and good, but we should not
mistake them for eternal truths. This may also play into the question of whether math is invented or
discovered.


Monday, March 02, 2015

Leonard Nimoy (1931-2015)


Bill and I rarely write joint blog posts but with the loss of a great cultural icon we both had to have our say.

Bill: Leonard Nimoy (Spock) died last week at the age of 83. DeForest Kelley (McCoy) passed away in 1999. William Shatner (Kirk) is still alive, though I note that he is four days older than Nimoy.

Spock tried to always be logical. I wonder if an unemotional scientist would be a better or worse scientist.
Does emotion drive our desire to learn things? Our choice of problems to work on? Our creativity?

Did Star Trek (or its successors) inspire many to go into science? Hard to tell but I suspect yes. Did it inspire you?

Their depiction of technology ranged from predictive (communicators are cell phones!) to awful (the episode 'The Ultimate Computer' wanted to show that humans are better than computers. It instead showed that humans are better than a malfunctioning killer computer. I think we knew that.) I think TV shows now hire science consultants to get things right (The Big Bang Theory seems to get lots of science right, though its view of academia is off), but in those days there was less concern for that.

Lance: I'm too young to remember the original Star Trek series when it first aired but I did watch the series religiously during the 70's when a local TV station aired an episode every day, seeing every episode multiple times. The original Star Trek was a product of its time, using the future to reflect the current societal issues of the 60's. Later Star Trek movies and series seemed to have lost that premise.

Every nerdy teenager, myself included, could relate to Spock with his logical exterior and his half-human emotional interior that he could usually suppress. Perhaps my favorite Spock episode was the penultimate "All Our Yesterdays," where Spock, having been sent back in time, takes on an earlier emotional state of the old Vulcans and falls in love.

I did see Leonard Nimoy in person once, during a lecture at MIT in the 80's. He clearly relished being Spock and we all relished him.

Goodbye, Leonard. You have lived long and prospered, and gone well beyond where any man has gone before.

Thursday, February 26, 2015

Selecting the Correct Oracle

After my post last week on the Complexity accepts, a friend of Shuichi Hirahara sent Shuichi an email saying that I was interested in his paper. Shuichi contacted me, sent me his paper, and we had a few good emails back and forth. He posted his paper Identifying an Honest EXP^NP Oracle Among Many on the arXiv yesterday.

Shuichi asks the following question: Given two oracles both claiming to compute a language L, figure out which oracle is correct. For which languages does there exist such a selector?

For deterministic polynomial-time selectors, every such L must sit in PSPACE and all PSPACE-complete languages have selectors. The question gets much more interesting if you allow probabilistic computation.

Shuichi shows that every language that has a probabilistic poly-time selector sits in S2EXP, the exponential analogue of S2P. His main theorem shows that EXP^NP-complete sets have selectors. His proof is quite clever, using the EXP^NP-complete problem of finding the lexicographically least witness of a succinctly described exponential-size 3-SAT instance. He uses PCP techniques to have each oracle produce a witness and then has a clever way of doing binary search to find the least bit where these witnesses differ. I haven't checked all the details carefully but the proof ideas look good.
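The binary-search idea can be sketched abstractly. This toy is my own illustration, not the paper's construction: a direct prefix comparison stands in for the PCP-based consistency checks, and `least_diff` is a hypothetical helper name.

```python
def least_diff(wit_a, wit_b):
    """Least index at which two differing, equal-length witnesses disagree,
    found with O(log n) prefix-equality tests."""
    def prefix_equal(i):
        # Stand-in for a cheap probabilistic consistency check;
        # here we simply compare the prefixes directly.
        return wit_a[:i] == wit_b[:i]

    lo, hi = 0, len(wit_a)  # prefix of length lo agrees; of length hi differs
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if prefix_equal(mid):
            lo = mid
        else:
            hi = mid
    return lo  # first position where the witnesses disagree

print(least_diff("10110", "10010"))  # -> 2
```

The point is that only logarithmically many (possibly noisy) comparisons are needed to localize the first disagreement, after which a single bit decides which oracle lied.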

This still leaves an interesting gap between EXP^NP and S2EXP. Is there a selector for promise-S2EXP-complete languages?

Monday, February 23, 2015

Eliminate the Postal Service

It's gotten very difficult to mail a letter these days. There are no mailboxes along my ten-mile commute. Georgia Tech has contracted with an outside company to handle outgoing mail. To send a piece of mail requires filling out a form with an account number, and many other universities have similar practices. Mail into or out of the university can tack on several days. I sent a piece of mail from Georgia Tech in Atlanta to the University of Pennsylvania in Philadelphia: two weeks from sender to recipient.

Why do I have to send mail in this world with email, texts and instant messages? Some places require "original receipts". Some government agencies require forms sent by mail or fax, and I've given up trying to find a reliable fax machine with someone who knows how to work it. It's still not always easy to transfer money to another person or company with a physical check. I stopped using the Netflix DVD service because it lost its value when I had to make a special trip to mail the DVD back. It's easier to find a Redbox than a mailbox.

Meanwhile most of the mail I receive is junk, or magazines, which look better on the iPad, or official letters that I have to scan to keep an electronic copy since they weren't emailed to me. I do get the occasional birthday card or hand-written thank-you note, a nice Southern tradition, but we can live without it. USPS also does package delivery, but that is often handled better by private providers such as UPS and FedEx.

So what if we just eliminated the US Postal Service, say with a three-year warning? There is nothing it does that can't be replaced by electronic means, and a planned closing would force the various government agencies and businesses to make that final push. We'll reminisce about mail like we do about the telegram. But why keep an inferior technology alive? It's time to move on.