Tuesday, March 28, 2017

Parity Games in Quasipolynomial Time

In one of the hallway discussions of last week's Dagstuhl I learned about an upcoming STOC paper Deciding Parity Games in Quasipolynomial Time by Cristian Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li and Frank Stephan. Hugo Gimbert and Rasmus Ibsen-Jensen offer a simplified proof of the correctness of the algorithm.

A parity game works as follows: an instance is a finite directed graph in which every vertex has at least one outgoing edge, together with integer weights on the vertices and a designated starting vertex. Alice and Bob take turns choosing the next vertex by following an edge from the current one. They play this game infinitely long, and Alice wins if the largest weight seen infinitely often is even. It is not trivial to show, but the game is determined and memoryless: no matter the graph, some player has a winning strategy, and that strategy depends only on the current vertex and not on the history so far. That puts the problem in NP∩co-NP and makes it unlikely to be NP-complete.
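To make the winning condition concrete, here is a minimal sketch in Python (this is not the quasipolynomial algorithm from the paper; the representation and names are just for illustration). Given memoryless strategies for both players, the play from the start vertex is eventually periodic, and the winner is determined by the largest weight on the cycle it enters.

# succ[v]: the next vertex chosen at v (both memoryless strategies combined)
# weight[v]: the integer weight of vertex v
def play_winner(succ, weight, start):
    seen = {}                  # vertex -> position where it first appeared
    path = []
    v = start
    while v not in seen:
        seen[v] = len(path)
        path.append(v)
        v = succ[v]
    cycle = path[seen[v]:]                 # the portion of the play repeated forever
    top = max(weight[u] for u in cycle)    # largest weight seen infinitely often
    return "Alice" if top % 2 == 0 else "Bob"

# Example: two vertices alternating forever; the largest weight on the cycle is 4.
print(play_winner({0: 1, 1: 0}, {0: 3, 1: 4}, 0))   # Alice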

Like graph isomorphism, whether there exists a polynomial-time algorithm to determine the winner of a parity game remains open. Also like graph isomorphism, we now have a quasipolynomial-time (exponential in log^k n) algorithm, an exponential improvement. Parity games have applications to verification and model checking, and some at Dagstuhl claim the problem is more important than graph isomorphism.
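To spell out what "exponential in log^k n" means (this is the standard definition of quasipolynomial time, not a claim about the paper's exact exponent):
\[
2^{O(\log^k n)} \text{ for some constant } k, \qquad \text{e.g., } n^{O(\log n)} = 2^{O(\log^2 n)},
\]
which is superpolynomial but far smaller than exponential.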

One difference: If you had to guess who would make the big breakthrough in graph isomorphism, László Babai would be at the top of your list. But many of the authors of this new parity games paper, like Frank Stephan and Sanjay Jain, focus mostly on computability and rarely worry about time bounds. Their algorithm does have the flavor of a priority argument often found in computability theory results. A nice crossover paper.

Thursday, March 23, 2017

The Dagstuhl Family

This week I'm at the Dagstuhl workshop on Computational Complexity of Discrete Problems. As long-time readers know, Dagstuhl is a German center that hosts weekly computer science workshops. I've been coming to Dagstuhl for some 25 years now, but for the first time I brought my family, my wife Marcy and daughter Molly, so they could see where I have spent more than half a year of my life in total. Molly, currently a freshman at the University of Chicago, was the only Chicago representative, though the attendees included four Chicago PhDs, a former postdoc and a former professor.

We had a different ice breaker this time: each person wrote down topics they think about, which ended up looking like an interesting bipartite graph.


Molly has a few thoughts on Dagstuhl:

The coolest thing about the study of computer science is this place.

Okay, I know my dad would disagree with me (he probably thinks the coolest thing about computer science is the computer science itself). But for me, someone quite removed from the math and science and thinking, this place is by far the coolest thing about the computer science community. The point of it is isolation, as well as simultaneous connection. The isolation comes in the form of a meeting center in rural Germany, separated from the world, devices which can (and do) block wifi in rooms like lecture rooms and the dining hall, resulting in a week without much interaction with the outside world. The connection stems from this very isolation -- in this highly isolated place, people are forced to connect with each other face-to-face, and to get to know each other, as well as the ideas and problems people are working on. The isolation creates a heightened sense of community, both in social and intellectual senses of the word. Forced to be so close and so interconnected, it’s no wonder so many problems get solved here.

I’m glad I got to come see why my father has been coming here for a quarter century. He is very old.

Sunday, March 19, 2017

If you want to help your bad students DO NOT give an easy exam


1) When I was a grad student TAing Formal Lang Theory we had a final ready to give out but noticed that one problem was too hard. So we changed it. But we made it too easy. Whoops. My thought at the time was that this would help the bad students. I was wrong. Roughly speaking, the students who got 70-80 on the midterm got 90-100 on the final, whereas the students who got 30-40 on the midterm got 35-45 on the final. So the bad students improved, but the better students improved more.

2) When I teach Discrete Math to LOTS of students we have a policy about midterm regrade requests. Rather than have them argue in person, they have to:

In writing, make a clear, concise argument as to why the problem was misgraded.

If your argument shows that you really don't know the material, even after reflecting on it, you can lose points. (True story: we asked for an example of a Boolean formula with two satisfying assignments. One student gave us a formula with only one, so they got -5. In the regrade request they still argued that it had two satisfying assignments. They lost 2 more points.)

In reality the policy is mostly a deterrent and we rarely remove points. However, even this policy benefits the better students more than the poor ones, who have a hard time even articulating why what they wrote is actually right (likely it is not).

3) Just this winter, teaching a 3-week 1-credit course, we were grading a problem and giving out lots of 15/25s since the students were all making the same mistake. Halfway through I got suspicious that maybe WE were incorrect. Looking at the exact wording of the question I realized WE were wrong and that, given the wording and what they would quite reasonably think we wanted, they were right. So we went back and upgraded many students from 15 to 25. And again, this lifted students in the 70's into the 90's, but did NOTHING for the students below 50, since none of them had anything like a correct answer under any reading of the question.

Okay, so what does all of this mean? It means that an easy exam or a generous grading policy does very little for the bad students--the benefit goes almost entirely to the students who were already doing well.

However, that's just my experience--what are your experiences with this?



Thursday, March 16, 2017

NP in ZPP implies PH in ZPP

If NP is in ZPP, is the entire polynomial-time hierarchy in ZPP? I saw this result used in an old TCS Stackexchange post but I couldn't find a proof (comment if you know a reference). The proof that NP in BPP implies PH in BPP is harder than it looks, and whether NP in BQP implies PH in BQP is still open as far as I know.

I found a simple proof that NP in ZPP implies PH in ZPP and then an even simpler one.

Assume NP is in ZPP. This implies NP is in BPP, so PH is also in BPP. So we need only show BPP is in ZPP.

BPP in ZPP^NP follows directly from Lautemann's proof that BPP is in Σ₂P, or from the fact that BPP is in MA, which is in S₂P, which is in ZPP^NP. By assumption, BPP in ZPP^NP implies BPP in ZPP^ZPP = ZPP.
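Spelled out as a single chain of inclusions (a LaTeX sketch; the comments name the standard results each step uses):
\[
\mathrm{BPP} \subseteq \mathrm{MA}                   % standard
             \subseteq \mathrm{S}_2^p                % Russell--Sundaram
             \subseteq \mathrm{ZPP}^{\mathrm{NP}}    % Cai
             \subseteq \mathrm{ZPP}^{\mathrm{ZPP}}   % assumption: NP \subseteq ZPP
             = \mathrm{ZPP}.                         % ZPP is self-low
\]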

And this is even simpler.

ZPP = RP∩co-RP ⊆ NP∩co-NP. Σ₂P = NP^NP ⊆ NP^ZPP (by assumption) ⊆ NP^(NP∩co-NP) = NP ⊆ ZPP. You can get the higher levels of the hierarchy by an easy induction.
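For completeness, here is how the easy induction might go (a LaTeX sketch, using only the assumption NP ⊆ ZPP and the inductive hypothesis Σ_k P ⊆ ZPP):
\[
\Sigma_{k+1}^p \;=\; \mathrm{NP}^{\Sigma_k^p}
  \;\subseteq\; \mathrm{NP}^{\mathrm{ZPP}}                  % inductive hypothesis
  \;\subseteq\; \mathrm{NP}^{\mathrm{NP}\cap\mathrm{coNP}}  % ZPP \subseteq NP \cap coNP
  \;=\; \mathrm{NP}                                         % an NP \cap coNP oracle can be absorbed
  \;\subseteq\; \mathrm{ZPP}.                               % assumption
\]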

Monday, March 13, 2017

Other fields of math don't prove barrier results--why do we?

Before FLT was solved, did people prove theorems like:

FLT cannot be proven using techniques BLAH. This is important since all current approaches use BLAH.

I do not believe so.

Replace FLT with Goldbach's conjecture or other famous open problems and I still do not believe there were ever such papers.

I have sometimes seen a passing reference like `the techniques of this paper cannot get past BLAH' but it was not dwelled on. The most striking example of this (and what got me to write this post) was the Erdős Distance Problem (see here)--when the bound Ω(n^{(48-14e)/(55-16e) - ε}) was shown, I heard it said that this was as far as current techniques could push it. And then 11 years later the bound Ω(n/log n) was proven. I asked around and YES, the new paper DID use new techniques. But there was not the same kind of excitement I hear when someone in TCS uses new techniques (e.g., IP=PSPACE used techniques that did not relativize!!!!!!!!)


With P vs NP and other results we in TCS DO prove theorems and have papers like that. I am NOT being critical--I am curious WHY we do this and other fields don't. Some options:

1) Bill is WRONG--other fields DO do this--see BLAH. Actually proof theory, and both the recursive math program and the reverse math program, DID look into `does this theorem require this technique?', but this was done for theorems that were already proven.

2) Bill is WRONG--we are not that obsessed with barrier results.

3) P vs NP is SO HARD that we are forced into considering why it's hard. By contrast, there has been progress on FLT and Goldbach over time. Rather than ponder that they NEEDED new techniques, people went out and FOUND new techniques. Our inability to do that with P vs NP might be because it's a harder problem--though we'll know more about that once it's solved (in the year 3000).

4) P vs NP is closer to logic, so the notion of treating techniques as objects worth studying is more natural for the people who work on it.

What do you think?

Thursday, March 09, 2017

The Beauty of Computation

Lisa Randall wrote a New York Times book review of Carlo Rovelli's Reality Is Not What It Seems with some interesting responses. I want to focus on a single sentence from Randall's review.
The beauty of physics lies in its precise statements, and that is what is essential to convey.
I can't speak for physics but I couldn't disagree more when it comes to computation. It's nice that we have formal models, like the Turing machine, for that gives computation a firm mathematical foundation. But computation, and in particular a computable function, transcends the model and remains the same no matter what reasonable model of computation or programming language you wish to use. This is the Church-Turing thesis, exciting exactly because it doesn't have a formality that we can prove or disprove.

Likewise the P versus NP question remains the same under any reasonable computational model. Russell Impagliazzo goes further in his description of his world Algorithmica.
Algorithmica is the world in which P = NP or some moral equivalent, e.g. NP in BPP [probabilistic polynomial time]. 
In other words, the notion of easily finding checkable solutions transcends even a specifically stated mathematical question.

That's why I am not a huge fan of results that are so specific to a single model, like finding the fewest states for a universal Turing machine. I had an email discussion recently about the busy beaver function, which I think of in general terms--a mapping from some notion of program size to program output--as opposed to some precise definition. I find the concept incredibly interesting and important, but no one should care about the exact values of the function.

We need the formal definitions to prove theorems but we really care about the conceptual meaning.

Maybe that's what separates us from the physicists. They want precise definitions to capture their conceptual ideas. We want conceptual ideas that transcend formal definitions.

Monday, March 06, 2017

why are regular expressions defined the way they are


BILL: The best way to prove closure properties of regular languages is to first prove the equivalence of DFA's, NDFA's, and Reg Expressions. Then, if you want to prove a closure property, choose whichever characterization of regular makes it easiest. For example, to prove Reg Langs are closed under intersection I would use DFA's, NOT Reg Expressions (see the sketch at the end of this post).

STUDENT: I thought reg expressions were

a) finite sets

b) if alpha and beta are reg exp then so are alpha UNION beta, alpha INTERSECT beta, alpha CONCAT beta and alpha*

BILL: No. Regular expressions are defined just using UNION, CONCAT, and *.

STUDENT: Why? Had they defined it my way then closure under INTERSECTION would be easier. For that matter, toss in COMPLIMENTATION and you'd get that easily also.

BILL: First off, that's not quite right. You compliment a DFA by saying how lovely its states are. I think you mean complement. Second off, GOOD question! Why are Reg Expressions defined the way they are? I'll try to look that up, and if I can't find anything I'll blog about it.

STUDENT: When will you blog about it?

BILL: I just did. Now, let me ask the question more directly:

The definition of Reg Exp is essentially closure under UNION, CONCAT, and STAR. Why not other things? There are, very broadly, three possibilities:

a) Historical Accident.

b) Some good math or CS reason for it.

c) Something else I haven't thought of.

I hope it's (b). Moreover, I hope one of my readers knows and can enlighten me and the other readers.
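For readers who want to see the DFA approach to intersection mentioned at the top of this post, here is a minimal sketch in Python of the standard product construction. The tuple representation of a DFA used here is just for illustration.

# A DFA is represented as (states, alphabet, delta, start, accepting),
# where delta is a dict mapping (state, symbol) -> state.

def intersect_dfas(d1, d2):
    # Build the product DFA: run both machines in lockstep and accept
    # only when both would accept.
    states1, alpha, delta1, start1, acc1 = d1
    states2, _,     delta2, start2, acc2 = d2
    states = {(p, q) for p in states1 for q in states2}
    delta = {((p, q), a): (delta1[(p, a)], delta2[(q, a)])
             for (p, q) in states for a in alpha}
    start = (start1, start2)
    acc = {(p, q) for (p, q) in states if p in acc1 and q in acc2}
    return (states, alpha, delta, start, acc)

def accepts(dfa, w):
    states, alpha, delta, start, acc = dfa
    state = start
    for a in w:
        state = delta[(state, a)]
    return state in acc

# Example: strings over {0,1} with an even number of 0's AND ending in 1.
even0 = ({'e', 'o'}, {'0', '1'},
         {('e', '0'): 'o', ('o', '0'): 'e', ('e', '1'): 'e', ('o', '1'): 'o'},
         'e', {'e'})
ends1 = ({'n', 'y'}, {'0', '1'},
         {('n', '0'): 'n', ('y', '0'): 'n', ('n', '1'): 'y', ('y', '1'): 'y'},
         'n', {'y'})
both = intersect_dfas(even0, ends1)
print(accepts(both, "1001"))   # True: two 0's and ends in 1
print(accepts(both, "100"))    # False: does not end in 1

With Reg Expressions alone there is no similarly direct construction, which is exactly the point BILL makes above.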

Thursday, March 02, 2017

International Science

I did some counting and the 35 academic faculty members in the Georgia Tech School of Computer Science come from 14 different countries. My co-authors come from at least 20 different nations. My 10 successful PhD students hail from 7 different countries. I have benefited immensely from global collaborations thanks to relatively open borders and communication during most of my academic career and I am hardly the only academic who has done so.

I'm old enough to remember the days of the cold war when travel between East and West was quite difficult. We had considerable duplication of effort--many important theorems were proven independently on both sides of the iron curtain--but, even worse, ideas took a long time to permeate from one side to the other. We could not easily build on each other's work. Science progressed more slowly as a result. Pushing back the boundaries of science is not a zero-sum game, quite the opposite--we can only grow knowledge. We grow that knowledge much faster working together.

As the United States and other countries take on a more nationalistic point of view, we'll see fewer people travel and fewer people willing or even able to spend significant parts of their careers in other countries. We will (hopefully) continue to have an open Internet so information will still flow, but nothing can replace the focus of face-to-face collaboration for sharing ideas and creating new ones.

The real loss for America will be an invisible one: the students who won't be coming here to study and later become our professors, scientists and colleagues, to make our universities, industries and society stronger. Sad.