# Computational Complexity

Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch

## Tuesday, September 16, 2014

### Maryland Theory Day October 10!

Univ of Maryland at College Park is having a Theory Day

Friday October 10.

Free Registration and Free Lunch! (there are no economists coming to tell us there is no such thing).

For information and registration, go here.

A good way to learn lots of current theory in a short time.

Schedule:

8:30-9:00 Light Breakfast and Intro Remarks

9:00-9:20 Gasarch, UMCP

NIM with Cash

9:25-9:45 Mount, UMCP

A New Algorithm for Approximating the Euclidean Minimum Spanning Tree

9:50-10:10 Samir, UMCP

To do or not to do: scheduling to minimize energy

10:20-11:00 Coffee Break

11:00-12:00 Distinguished Invited Speaker Avrim Blum, CMU

Reconstructing preferences and priorities from opaque transactions

12:00-1:00 Catered Lunch

1:00-2:00 Distinguished Invited Speaker Sanjeev Arora, Princeton

Overcoming the intractability bottleneck in unsupervised learning.

2:00-2:30 Coffee Break

2:30-2:50 Elaine Shi, UMCP

Circuit ORAM and Tightness of the Goldreich-Ostrovsky bound

2:55-3:15 David Harris, UMCP

The Moser-Tardos Framework with Partial Resampling

3:20-3:40 Mohammad Hajiaghayi, UMCP

Streaming Algorithms for Estimating the Matching Size in Planar Graphs and Beyond

3:45-4:05 Michael Dinitz, JHU

Explicit Expanding Expanders

4:10-5:00 Poster Session in 2120 (Grad Students)

## Monday, September 15, 2014

### Richard Lipton Wins Knuth Prize

Georgia Tech professor and fellow blogger Richard Lipton will receive the 2014 Knuth Prize at the upcoming FOCS conference in Philadelphia. The Knuth Prize is given jointly by the ACM SIGACT and the IEEE TC-MFCS for major research accomplishments and contributions to the foundations of computer science over an extended period of time.

Lipton's research has major results across a large spectrum of theoretical computer science, from probabilistic algorithms to DNA computing to communication complexity. I'd like to highlight a couple of his papers in computational complexity that were hugely influential, including on much of my own research.

Richard Karp and Lipton showed that if NP has non-uniform polynomial-size circuits then the polynomial-time hierarchy collapses. The result, and its successors, are a powerful tool, used to show a number of interesting hypotheses are not likely to happen, and plays an important role itself in circuit lower bounds and pseudorandomness. Most importantly, Karp-Lipton gave the strongest evidence that NP does not have small circuits, justifying the circuit lower bound approach to separating complexity classes.

In lesser known but perhaps even more influential work, Lipton developed a notion of program testing and showed how to test the permanent function, a result that directly led to the surprising power of interactive proofs. This algebraic characterization of hard problems led us to IP = PSPACE, MIP = NEXP and the PCP theorem.

Again this just covers a sliver of his impressive canon of research. Congrats Dick!


## Thursday, September 11, 2014

### Beyond the Commodity

Back in 2005 I lamented the fact that students viewed computers as a commodity, a tool they use, like an automobile, but have no reason to understand how or why it works. In 2011 I noticed a change, that computers like IBM's Watson were starting to make computer science cool again.

Now we are in the midst of yet another major change, reflected in renewed interest in high school computer science and huge enrollment growth in universities, particularly among non-majors taking upper-level CS courses. Jobs certainly drive much of this enrollment, but for an important reason: basic computer science principles and reasoning have become a critical tool in almost any business. Every large company tries to glean knowledge from data, deal with security and privacy challenges, and solve big optimization questions in an ever more complex environment. I've been told that car companies will take as many Mechanical Engineering majors with CS minors as Georgia Tech can produce. For what are cars today but computers on wheels?

We've been down this path before, and trends that seem to be with us forever die out, leading to computer science disillusionment. Somehow this seems different, but we'll just have to wait and see.


## Monday, September 08, 2014

### A Statistical oddity ?

I keep a list of people who are famous-to-me and old, so that if one of them dies I won't be surprised. When Lauren Bacall died recently I (1) knew who she was, AND (2) knew she wasn't already dead. I DO NOT look at lists of celebs. My list is organic: if I think of someone who seems old (GEE, I wonder if that famous probabilist Monty Hall is still alive? He is! He's 92.) I look it up, and if they are over 80, they go on the list. Most people are surprised to know that Doris Day is still alive.

Okay, so what of it? Bill has another weird hobby. (Add this to collecting satires, collecting papers that apply Ramsey Theory, and writing a satire of papers that apply Ramsey theory).

I decided to see how many people on my list had the same birthday and see if it was reasonable with regard to probability (the birthday paradox and all that). The list currently has 70 people.

What I found was probably reasonable in one respect and odd in another.

REASONABLE: Nine pairs had the same birthday. One triple had the same birthday.

ODD: There were NO pairs or triples of same birthdays in July, September, October, November, or December.

I leave as an exercise: How reasonable is what I called reasonable and how odd is what I called odd?
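For anyone who wants to check the exercise empirically, here is a quick Monte Carlo sketch. It assumes birthdays are uniform over a 365-day non-leap year, and the helper names (`trial`, `estimate`) are my own, not anything from the post:

```python
import random
from collections import Counter

# Month lengths for a non-leap year; DAYS enumerates all 365 (month, day) pairs.
MONTH_DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
DAYS = [(m, d) for m, n in enumerate(MONTH_DAYS, start=1) for d in range(1, n + 1)]

def trial(n=70, rng=random):
    """Draw n uniform birthdays; return (#days shared by exactly 2,
    #days shared by 3 or more, set of months containing a shared day)."""
    counts = Counter(rng.choice(DAYS) for _ in range(n))
    pairs = sum(1 for c in counts.values() if c == 2)
    triples = sum(1 for c in counts.values() if c >= 3)
    months = {m for (m, _), c in counts.items() if c >= 2}
    return pairs, triples, months

def estimate(trials=20000, seed=1):
    """Estimate the probability of Bill's two observations."""
    rng = random.Random(seed)
    like_bill = months_clear = 0
    for _ in range(trials):
        pairs, triples, months = trial(rng=rng)
        # "Reasonable": exactly nine shared days plus one triple.
        if pairs == 9 and triples == 1:
            like_bill += 1
        # "Odd": no shared birthdays in Jul, Sep, Oct, Nov, or Dec.
        if not months & {7, 9, 10, 11, 12}:
            months_clear += 1
    return like_bill / trials, months_clear / trials

print(estimate())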


## Thursday, September 04, 2014

### Favorite Theorems: Quantum Interactive Proofs

Practical quantum computing still has many hurdles to jump through, but the quantum computing model does generate great complexity questions and often surprising answers.


QIP = PSPACE by Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay and John Watrous. QIP is the quantum analogue of interactive proof systems. Since IP = PSPACE we get the consequence QIP = IP: quantum doesn't give an advantage over classical randomness in the interactive proof model. I wouldn't read too much into that interpretation; rather, we have a strange situation where IP is far more powerful than we initially suspected and QIP is weaker than expected, and so we get the collision at PSPACE.

QIP ⊇ PSPACE follows from IP = PSPACE. In an earlier paper, Kitaev and Watrous showed QIP ⊆ EXP by reducing QIP to an exponential-sized semidefinite program. This paper applies a clever matrix multiplicative weights algorithm to approximate a subclass of SDPs, achieving QIP ⊆ PSPACE.

We've also seen progress on QMIP, quantum interactive proofs with multiple entangled provers who cannot otherwise communicate. Whether QMIP contains MIP = NEXP remained open for a long time because the provers might use entanglement to cheat. Ito and Vidick showed that entangled provers can't gain an advantage on the multilinear test used in the original MIP = NEXP paper, and thus QMIP does contain NEXP. Whether QMIP is contained in NEXP remains open.

## Tuesday, September 02, 2014

### How hard is changing fields? Ask Sheldon!

In the last season of *The Big Bang Theory*, Sheldon wants to change fields from String Theory to something else (I don't recall if he settled on what it would be, though Standard Model Physics, Quantum Loop Gravity, and Calculation of Nuclear Matrix Elements were mentioned negatively, and Geology is, according to Sheldon, not a real science). Sheldon faced opposition from his department. And since Physics is... hard... changing fields seems hard.

How hard is it to change fields, both intellectually and in terms of your dept?

- If you are hired as a string theorist and you are one of the only ones in your dept, your dept may quite reasonably ask you to still teach string theory. I think this is fine.
- Math and Physics are far older than CS so to change fields requires learning more background knowledge. In CS it was easier about 20 years ago, but CS has grown so much that I suspect it would be harder now.
- There may be a time when you have fewer papers and grants as you make the transition. Hence it may be unwise to do this before you get tenure.
- If your dept hired you to do String Theory and you change to Calculation of Nuclear Matrix Elements, should they mind? I would think that as long as it's still good work they wouldn't, and they should give you enough time to get grants and papers in it. If you change to a field they don't care about, or to a field outside the department's area, they might not like that. If Sheldon went into Computational Complexity, would his dept (Physics) be justified in not liking that? If he solved P vs NP then all would be forgiven.
- Perhaps the further you move from your original field, the better your work has to be before your dept doesn't mind. This could be modeled by a formula. Maybe Sheldon will change to computational dept dynamics and derive it for us.
- The obvious thing to say is *Depts should allow their professors to wander free as a bird and not erect arbitrary walls since the best work comes from people not being constrained.* I would like to think this is true, but I wonder: how many people have changed fields and ended up doing great work? Good work? Totally failed?

## Thursday, August 28, 2014

### Sixteen Years in the Making

Every paper has a story, but Sunny Daniels's arXiv paper from yesterday deserves a blog post.

We begin in 1982 when Ravi Kannan proved that Σ₂^P (the problems computable in NP with an NP oracle) cannot have n²-size circuits. Kannan's result holds for n^k-size circuits, but for this story we'll keep it simple.

Kannan had an ingeniously simple proof. By diagonalization you can create a language L in Σ₄^P that does not have n²-size circuits. Now there are two cases:

- SAT doesn't have n²-size circuits. Since SAT is in Σ₂^P we are done.
- SAT has n²-size circuits. Then by Karp-Lipton Σ₄^P = Σ₂^P, so L is in Σ₂^P and we are done.

Kannan's proof is non-constructive and doesn't give an explicit Σ₂^P language that we can show doesn't have n²-size circuits. It's either SAT or L, but one can't be sure which one.
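The case analysis above can be summarized schematically (a sketch in my notation, with SIZE(n²) denoting languages with n²-size circuits):

```latex
% Kannan (1982), schematically. By diagonalization there is
% L \in \Sigma_4^p with L \notin \mathsf{SIZE}(n^2). Then:
%
% Case 1: \mathrm{SAT} \notin \mathsf{SIZE}(n^2).
%   SAT itself is the hard language, and \mathrm{SAT} \in \Sigma_2^p.
%
% Case 2: \mathrm{SAT} \in \mathsf{SIZE}(n^2).
%   By Karp--Lipton, \mathrm{PH} = \Sigma_2^p, so
%   L \in \Sigma_4^p = \Sigma_2^p, and L is the hard language.
%
% Either way:
\Sigma_2^p \not\subseteq \mathsf{SIZE}(n^2)
```

Note the non-constructiveness: the argument proves the containment fails without telling us which of the two languages witnesses it.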
In 1998, Sunny Daniels, a PhD student at Rutgers, took Eric Allender's complexity course. Eric offered up a constructive example of Kannan as an open problem. Sunny came up with a solution. He wrote up a draft in LaTeX but for personal reasons dropped out of academics and never published the paper.

In 2003, Jin-Yi Cai and Osamu Watanabe, not aware of Daniels' paper, came up with their own independent construction and presented their paper at the COCOON conference in Montana. Word got back to Sunny but he thought he had lost the LaTeX file and didn't want to retypeset the whole proof.

Sunny had Iomega Zip Drive cartridges from his Rutgers days. Recently he found someone who had a Zip Drive reader and managed to recover the files. In there he discovered the original LaTeX, cleaned the paper up, and sixteen years after his proof put the paper on ArXiv. Even if you don't care about the math, read the introduction for the complete version of this story.

Kannan's proof actually shows Σ₂^P ∩ Π₂^P does not have n²-size circuits, and this was later improved to S₂^P. Whether we have any constructive language in Σ₂^P ∩ Π₂^P or S₂^P without n²-size circuits still remains open.

## Monday, August 25, 2014

### A Deadly Side of Complexity

Better algorithms can lead to better medicine and save lives. Just today Tim Gowers discusses Emmanuel Candès' ICM Plenary Lecture, which among other things describes how Candès' work on compressed sensing leads to shorter MRI scans for children, greatly reducing the risk of oxygen deprivation. Prove P = NP with a practical algorithm, and you'll conquer that worst of our diseases. Sounds great until you realize what we can't do.

I was talking to a cancer researcher recently and he points out that many of their challenges are indeed algorithmic. But he also brings up the contrapositive. Since we don't have great algorithms now, we don't know how to make sense of DNA sequences and in particular don't know how to map genetic markers to an appropriate cancer treatment. He works with cancer patients, knowing he can't give them the best possible treatment, not because of lack of data, but due to lack of ways to analyze that data. People die because we don't have the ability to break through the complexity of these algorithmic challenges.
