Thursday, January 28, 2016

We Still Can't Beat Relativization

As we celebrate our successes in computational complexity, here's a sobering fact: we have had no new non-relativizing techniques in the last 25 years.

A little background: In 1975, a few years after Cook posed the P vs NP problem, Baker, Gill and Solovay constructed two sets A and B with the following properties: every NP machine that can ask questions about A can be simulated by a P machine that can ask questions about A, while there is a language accepted by an NP machine that can ask questions about B that cannot be solved by any P machine that can ask questions about B. In other words, P^A = NP^A but P^B ≠ NP^B.

All the known proof techniques at the time relativized, i.e., if you proved two classes were the same or different, they would be the same or different relative to any set. BGS implied that these techniques could not settle the P vs NP problem.

In the 80's and 90's we got very good at creating these relativized worlds: with a couple of exceptions, for nearly every open question about complexity classes between P and PSPACE we created relativized worlds that make it true and others that make it false.

There were some results along the way that were non-relativizing for technical, uninteresting reasons. In the late 80s/early 90s we had real progress: results like NP having zero-knowledge proofs, co-NP (and later PSPACE) having interactive proofs, and NEXP having (exponential-sized) probabilistically checkable proofs, despite relativized worlds making those statements false. But then it stopped. We had a few more non-relativizing results, but those just used the results above, not any new techniques.

In 1994 I wrote a survey on what relativization means post-interactive proofs; it is still mostly up to date. We have since seen new barriers put up, such as natural proofs and algebrization, but until we can at least get past traditional relativization we just cannot make much more progress with complexity classes.

Sunday, January 24, 2016

When do we stop giving the original reference?


I was preparing a talk which included my result that there is a sane reduction 4-COL \le 3-COL (the paper is here) when I realized that if I am going to claim the reduction is by Gasarch then I should find out and credit the person who proved 3-COL NP-complete (I assume that the result 3-COL \le 4-COL is too trivial to have an author). I could not find the reference on the web (I am sure one can find it on the web, but a cursory glance did not yield it). The reference was not in Goldreich's complexity book, nor in the CLR algorithms book. I suspect it's not in most recent books.

The reference is Stockmeyer, SIGACT News, 1973.

Few modern papers reference Cook's or Levin's original paper for the Cook-Levin Theorem. I've even heard that doing so is a sign of a crank paper.

Few modern papers reference Ramsey's original paper for Ramsey's Theorem.

But the question arises: when is a result such common knowledge that a reference is no longer needed?

A result can also go through a phase where the reference is to a book that contains it, rather than the original paper.

On the one hand, one SHOULD reference the original so that people know who did it and what year it is from. On the other hand, there has to be a limit to this or else we would all be referring to Euclid and Pythagoras.

Where does Stockmeyer's result fall? I do not know; however, I've noticed that I didn't reference him, and I will correct that.

Thursday, January 21, 2016

The Growing Academic Divide

The decreases of the past three years bring the number of advertised jobs to a new low, below the level reached after the severe drop between 2007–08 and 2009–10. 
So reads the Report on the Modern Language Association Job Information List. The MLA is the main scholarly organization for the humanities in the US. The bright spot--we aren't Japan.

Meanwhile in computer science we continue to see large enrollment increases in majors, minors, and students just wanting to take computer science courses. At the upcoming CRA Snowbird Conference "a major focus of the conference will be booming enrollments, with a short plenary followed by parallel sessions devoted to the topic, its various ramifications, and ideas to help you deal with it, including best practices for managing growth."

The November 2015 CRA News had 83 pages of faculty job ads, up from 75 in 2014 and 34 in 2012. This doesn't even count the fact that many departments are looking to fill two, three, four or more positions in CS. It will be an interesting job market this spring.

All of this is driven by jobs. We can't produce enough strong computer scientists to fill industry demand. And it's becoming increasingly hard for a humanities major to get a good first job.

It's nice to be on the side of growth, but it's a shame that faculty hiring seems to be a zero-sum game. We need poets as well as nerds. We've helped create a world in which we have made ourselves indispensable, but is this a world we really want to live in?

Monday, January 18, 2016

Is it okay to praise an article or book in an article or book?

I recently had a paper accepted (YEAH!). The referees had some good corrections and one that puzzled me.

you wrote ``our proof is similar to the one in the wonderful book by Wilf on generating functions [ref]''. You should not call a book wonderful as that is subjective. You can say it's well known.


I asked a pretension of professors about this. Is it okay to praise an article or book? Is it okay to state an opinion? Would the following be acceptable:

1) In Cook's groundbreaking paper SAT was shown to be NP-complete. It IS groundbreaking, so maybe that's okay.

2) Ramsey's paper, while brilliant, is hard for the modern reader to comprehend. Hence we give an exposition. If I am writing an exposition then I might need to say why the original is not good to read, so this is informative.

3) Ryan Williams proved an important lower bound in [ref]. Is this okay to write? For most people yes, but NOT if you are Ryan Williams. (He never wrote such.)

4) William Gasarch proved an unimportant lower bound in [ref]. Is this okay to write? Only if you ARE William Gasarch. (He never wrote such.)

The version that will be in the journal will indeed NOT call Wilf's book wonderful. The version on arXiv, which will be far more widely read (it's not behind a paywall), will call Wilf's book wonderful.




Thursday, January 14, 2016

A limit on the DFA-CFG divide for finite sets

It is easy to see that for L_n = {a,b}*a{a,b}^{2^n}:

There IS a CFG of size O(n)

ANY DFA is of size double-exp-in-n
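Here is a minimal sketch (mine, not from the post, counting grammar size as the number of rules) of the standard repeated-doubling trick behind the O(n)-size CFG: a nonterminal X_i that generates exactly {a,b}^{2^i} costs only one rule more than X_{i-1}, so forcing exactly 2^n symbols after the marked a takes O(n) rules, while a DFA must remember the last 2^n + 1 input symbols.

```python
# A sketch (mine) of the repeated-doubling construction for an O(n)-size CFG
# generating {a,b}* a {a,b}^(2^n).  X_i generates exactly {a,b}^(2^i), so X_n
# pins down exactly 2^n symbols after the marked 'a'; T generates {a,b}*.

def grammar(n):
    rules = [
        ('S',  ['T', 'a', 'X%d' % n]),   # arbitrary prefix, the marked a, then 2^n symbols
        ('T',  []),                      # T -> epsilon
        ('T',  ['T', 'a']),
        ('T',  ['T', 'b']),
        ('X0', ['a']),
        ('X0', ['b']),
    ]
    for i in range(1, n + 1):            # X_i -> X_{i-1} X_{i-1}: one rule per doubling
        rules.append(('X%d' % i, ['X%d' % (i - 1), 'X%d' % (i - 1)]))
    return rules

for n in range(1, 6):
    print('n =', n, ' rules:', len(grammar(n)), ' forced suffix length:', 2 ** n)
```

The grammar has n + 6 rules, so its size is O(n), while any DFA for the same language needs on the order of 2^{2^n} states since it must distinguish all possible windows of the last 2^n + 1 symbols.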

I was wondering if we can increase this gap. That is, is there a statement like the following: for all but a finite number of n there exists a language L_n such that

There IS a CFG of size O(n)

ANY DFA is of size triple-exp-in-n.

I also looked at finite sets. For the COMPLEMENT of { ww : |w| = 2^n } (within strings of length 2^{n+1}, so the set is finite):

There IS a CFG of size O(n)

ANY DFA (in fact any DPDA) is of size double-exp-in-n.

This is stated in my paper here, though the really interesting math needed to get it is in here, where they obtain lower bounds on the size of CFGs, generalizing techniques from here.

So my question still stands: can we get a triple-exp separation? I have shown that this CANNOT be achieved with finite sets:

THEOREM: If L is a finite language then there exists n such that (1) any CFG in Chomsky Normal Form for L has size \ge log n, AND (2) there IS a DFA for L of size at most 2^{n+1}.

Proof: Let n be the length of the longest string in L. Since L is finite, no nonterminal can repeat along a root-to-leaf path of a derivation tree (otherwise pumping would make L infinite), so derivation trees have depth at most the number of nonterminals k; in Chomsky Normal Form a tree of depth k yields a string of length at most 2^{k-1}, hence k \ge log n. For the DFA, every string in L has length at most n, so the trie of all prefixes of strings in L, plus a dead state, is a DFA for L with at most 2^{n+1} states.
End of proof.

So I have shown that one approach won't work. I am hoping that YOU know an approach to get a triple-exp separation, and will leave a comment about it.







Monday, January 11, 2016

A question in Formal Lang Theory about Size of Desc of Languages.


(This post is inspired by my thinking more about this blog entry and this paper.)

Upper bounds of n are really O(n).
Lower bounds of f(n) are really Omega(f(n)).

DPDA = Deterministic Push Down Automata

NDFA = Nondet. Finite Aut.

DFA = Det. Finite Aut.

It is known that FOR ALL n there is a language L_n that has an NDFA of size about n but for which ANY DFA has size at least 2^n: take L_n = {a,b}*a{a,b}^n (an NDFA of size n+2 suffices, while any DFA needs at least 2^n states). One can also get exactly 2^n and n.
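As a quick sanity check (my own code, not from any of the papers mentioned), one can build the obvious small NFA for {a,b}*a{a,b}^n, run the textbook subset construction, and then minimize: the NFA stays linear in n while the minimal DFA doubles every time n goes up by one (2^{n+1} states under this exact definition of L_n).

```python
# Empirical check (a sketch under my own conventions) of the NDFA-vs-DFA gap for
# L_n = {a,b}* a {a,b}^n: build the (n+2)-state NFA, determinize, then minimize.

def nfa_delta(n):
    """NFA for {a,b}* a {a,b}^n: state 0 guesses the marked 'a', states 1..n
    count the trailing symbols, state n+1 accepts."""
    def delta(q, ch):
        if q == 0:
            return {0, 1} if ch == 'a' else {0}
        if q <= n:
            return {q + 1}
        return set()                      # state n+1 has no outgoing transitions
    return delta

def determinize(delta, accept_state, alphabet='ab'):
    """Subset construction starting from the set {0}."""
    start = frozenset({0})
    states, frontier, trans = {start}, [start], {}
    while frontier:
        S = frontier.pop()
        for ch in alphabet:
            T = frozenset(q for s in S for q in delta(s, ch))
            trans[(S, ch)] = T
            if T not in states:
                states.add(T)
                frontier.append(T)
    accepting = {S for S in states if accept_state in S}
    return states, trans, accepting

def minimal_size(states, trans, accepting, alphabet='ab'):
    """Moore's partition refinement; every state is reachable, so the number
    of equivalence classes equals the minimal DFA size."""
    block = {S: int(S in accepting) for S in states}
    while True:
        sig = {S: (block[S],) + tuple(block[trans[(S, c)]] for c in alphabet)
               for S in states}
        ids = {v: i for i, v in enumerate(set(sig.values()))}
        if len(ids) == len(set(block.values())):
            return len(ids)
        block = {S: ids[sig[S]] for S in states}

for n in range(1, 7):
    states, trans, acc = determinize(nfa_delta(n), n + 1)
    print('n =', n, ' NFA states:', n + 2, ' minimal DFA states:',
          minimal_size(states, trans, acc))
```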

It is known that FOR ALL n there is a language L_n that has a DPDA of size n but for which ANY NDFA has size at least roughly 2^n: take L_n = {a^{2^n}}. (ADDED LATER TO CLARIFY: note that L_n is a one-element set, hence regular.)
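The point of the DPDA bound is that a stack over only O(n) symbols can count to 2^n. Here is a toy illustration (mine, not the construction from any particular paper, and not a literal DPDA simulator): keep the positions of the 1-bits of the count on the stack, smallest position on top; each increment pops a run of carried positions and pushes the next one. A DPDA can carry out exactly this bookkeeping with O(n) states and stack symbols, reading one a per increment and accepting the moment the symbol n is pushed, i.e., after exactly a^{2^n}.

```python
# Toy illustration (mine) of counting to 2^n with a stack over the symbols 0..n:
# the stack holds the positions of the 1-bits of the current count, smallest on
# top.  An increment pops a maximal run of consecutive positions 0,1,2,... (the
# carry) and pushes the next position.  Stack == [n] means the count is 2^n.

def increments_until_2_to_the(n):
    stack = []                          # empty stack represents the count 0
    increments = 0
    while stack != [n]:
        c = 0
        while stack and stack[-1] == c: # carry through a run of 1-bits
            stack.pop()
            c += 1
        stack.append(c)                 # set bit c
        increments += 1
    return increments

for n in range(1, 8):
    print(n, increments_until_2_to_the(n) == 2 ** n)   # prints True for each n
```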

Can we achieve both at the same time? Is the following true: for all n there exists L_n such that

1) There is a size n DPDA for L_n.

2) Any NDFA for L_n requires size 2^n.

3) There IS an NDFA for L_n of size 2^n.

4) Any DFA for L_n requires size 2^{2^n}.

If we replace DPDA with PDA then L_n = {a,b}*a{a,b}^{2^n} works.



Wednesday, January 06, 2016

Rūsiņš Freivalds (1942-2016)

Rūsiņš Mārtiņš Freivalds passed away on Monday from a heart attack at the age of 73. I met Freivalds several times, often through Carl Smith, who himself passed away in 2004. Rūsiņš, Carl, Bill, myself and a couple of others have a joint paper on inductive inference and measure theory. Freivalds is the sixth co-author I've lost and it never gets easier.

Rūsiņš Freivalds did much of the early work in probabilistic algorithms and automata, and in inductive inference, a computability-theoretic approach to learning. In the 70's he found fast probabilistic algorithms for checking multiplication of integers and matrices. More recently he looked at quantum finite automata. He supervised several PhD students, including Andris Ambainis, one of the leading researchers in quantum algorithms.
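The matrix-checking result is what most people now call Freivalds' algorithm. A minimal sketch (mine, plain Python rather than anyone's reference code): to test whether AB = C, multiply both sides by a random 0/1 vector. Each trial costs O(n^2) instead of the O(n^3) of recomputing the product, and if AB ≠ C a trial catches the mismatch with probability at least 1/2.

```python
# A small sketch (mine) of Freivalds' randomized check for matrix multiplication:
# to test whether A*B == C, compare A*(B*r) with C*r for a random 0/1 vector r.
# If A*B != C, each trial detects the mismatch with probability at least 1/2.

import random

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds(A, B, C, trials=20):
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False               # definitely A*B != C
    return True                        # A*B == C with probability >= 1 - 2^(-trials)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]               # the true product A*B
print(freivalds(A, B, C))              # True
C[0][0] += 1
print(freivalds(A, B, C))              # almost certainly False
```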

Mostly I remember Freivalds as the leader of the Latvian theoretical computer science community and just an extremely nice and friendly colleague. I am glad to have known and worked with him.

Sunday, January 03, 2016

Predictions for 2016

This is the first post of 2016! (That is not a factorial.) Hence I will make some predictions, and at the end of the year I'll see how I did.

1)  The USA Prez election will have two main candidates: Hillary Clinton and Ted Cruz. Clinton will win by floating rumors that Cruz was born in a foreign country.

2) There will be a proof that there exists a constant c such that the methods that got a lower bound of (3+1/86)n on the circuit complexity of an explicit function cannot be extended to cn.

3) P vs NP will not be solved. But see next point.

4) I will be asked to look at at least two claimed resolutions of P vs NP. This past year I was asked to look at two proofs that P = NP. Neither was correct.

5) GI will still not be known to be in P.

6) There will be a proof that Babai's techniques cannot get GI into P.

7) The fact that 2016 = 2^5 · 3^2 · 7 will be useful in a math competition.

8) Posting a video of a talk (as Babai did) will become an acceptable way to claim a result, rather than having a paper on arXiv. Getting a paper into a journal will become less and less relevant for big results. (This is a topic for a later blog post.)

9) The Unique Games Conjecture... When it was first stated it seemed like maybe it could be proven or disproven. The longer it stays open the harder it seems. Clyde Kruskal tells me that a good method for guessing how long a problem will stay open is how long it's been open. So I'll predict it won't be resolved this year. However, by that reasoning, I will always predict it won't be resolved in the following year.

10) There will be a big data breach. The security protocols used will never have been proven secure, though that won't be what causes the breach. Nor will it be caused by a really fast factoring or DL algorithm.

11) Comp Sci enrollment will continue to rise.

12) Leave your predictions in the comments!