Wednesday, December 27, 2017

Complexity Year in Review 2017

Theorem of the year goes to the resolution of the dichotomy conjecture. I wrote about the conjecture in February, and while the Feder et al. paper didn't hold up, two later papers seem to resolve the conjecture:

A Proof of CSP Dichotomy Conjecture by Dmitriy Zhuk

A Dichotomy Theorem for Nonuniform CSPs by Andrei Bulatov

I checked with experts in the field; at least one of these papers, and more likely both, ought to be correct.

Runners-up include two matching papers I posted about last month: Svensson and Tarnawski, who give a quasi-NC algorithm for general graph matching, and Anari and Vazirani, who give an NC algorithm for matching on planar graphs. We also had the nice quasi-polynomial-time algorithm for parity games by Calude, Jain, Khoussainov, Li and Stephan that I posted on last March.

In last year's review we said "2016 will go down as a watershed year for machine learning," yet somehow it paled against 2017, with breakthroughs in chess, poker and astronomy, not to mention continuous advances in machine translation, autonomous vehicles and everything else. Maybe next year ML can write the 2018 year in review.

We had an awesome eclipse to remind us of the wonders of the world, and it almost made me forget about US politics. Computing keeps growing: how do we find the resources to train people from pre-school through college and throughout their lives? How much should we worry about the dominance of a handful of computing companies?



2018 is just full of questions: What will the Internet look like post-net neutrality? How will the new tax code play out? Where will Amazon put HQ2? What will machine learning do next? What can quantum computers with 50 qubits accomplish? Will bitcoin move to $300K or 30 cents? And what great advances in complexity await us?

Thursday, December 21, 2017

Having Faith in Complexity

I believe P ≠ NP as much as anyone else. Nevertheless, should we worry about the trust we put in complexity?

You don't need the full power of P = NP to break cryptography. I don't worry about quantum computers breaking RSA and related protocols. It won't sneak up on us--when (or if) quantum computing gets anywhere close to factoring large numbers, we'll figure out a strategy to change our protocols and to protect the information we already have. However, if someone comes up with an algorithm tomorrow that cracks AES, we'll have a crisis on our hands, as AES is so widely used that the algorithm is embedded into computer chips. Perhaps we can mitigate the damage before the algorithm spreads, or at least take our information offline until we develop new solutions.

But what about blockchains, the technology that underlies cryptocurrencies such as Bitcoin? A blockchain consists of a series of transactions collected into a sequence of blocks, where each block contains a hash of the previous block, with the transactions themselves hashed and often encrypted with public-key cryptography. One would hope that a break of the cryptography would be caught quickly and that we'd still have a legitimate record of transactions saved somewhere. The transactions themselves might be compromised, especially if anonymity was built into the system.
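To make the hash-chain structure concrete, here is a minimal sketch in Python. It's my own toy illustration with made-up transactions (the function and field names are invented for the example, not Bitcoin's actual format); real blockchains add proof-of-work, signatures and a consensus protocol on top. The point it shows: each block commits to the hash of its predecessor, so tampering with any earlier block invalidates everything after it.

```python
import hashlib
import json

def block_hash(block):
    """Hash the block's contents (which include the previous block's hash)."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions, prev_hash):
    """A block commits to its transactions and to the hash of the previous block."""
    return {"transactions": transactions, "prev_hash": prev_hash}

# Build a tiny chain of three blocks.
chain = []
prev = "0" * 64   # genesis: no previous block
for txs in [["alice pays bob 5"], ["bob pays carol 2"], ["carol pays dave 1"]]:
    block = make_block(txs, prev)
    chain.append(block)
    prev = block_hash(block)

def verify(chain):
    """Each block must record the hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

print(verify(chain))                                  # True
chain[0]["transactions"][0] = "alice pays bob 500"    # tamper with history
print(verify(chain))                                  # False: later hashes no longer match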

Bitcoin itself, as I write this, has a total market cap of over $250 billion, resting fully on cryptography. The cryptography will probably hold up; Bitcoin investors have more to worry about from bad implementations or the popping of a bubble of unrealistic expectations. But as we watch so many exciting advances in computing tackle challenges we would never have expected to get solved, should we continue to build our economy on the hope that other advances won't happen?

Sunday, December 17, 2017

Monkey First!

The following story is not true, nor has anyone claimed it's true, but it has a point:

A company gets a contract to do the following: train a monkey to sit on a 10-foot pedestal and recite some passages of Shakespeare. After a week they announce they have made progress! They invite their investors to see what progress they have made! They unveil a curtain and there is... a 10-foot pedestal.

This story was in an article about how Google does moonshots--that is, high-risk, high-reward, innovative work. The article is here. (How the Atlantic makes money when they put stuff online is a mystery to me. Perhaps they do it in a very innovative way.) The point is that it's BAD to have tangible results (like the pedestal) that don't get at the heart of the problem. So Google has various incentives to do the important stuff first. Their slogan is MONKEY FIRST.

This also applies to our research.  The following sequence of events is common:

1) Prove some scattered results.

2) Pedestal or Monkey? You could write up what you have, polish it, and write some nice LaTeX macros to make writing the paper easier, OR you could try to find the unifying principle, which would be hard and might not work, but which, if it works, would be, as the kids say, Jawesome (jaw-dropping awesome). The sad answer is that which you do might depend on when the next conference deadline is.

More generally, there is a tension between safe, doable research (Pedestal) and high-risk, high-reward research (Monkey). Is our incentive structure set up to encourage high-risk, high-reward work? The tenure system is supposed to do it, and it DOES in some cases, but not as much as it could, since there are other factors (salary, promotion to full prof, grants).

Does the system encourage high-risk, high-reward research? Should it? Could we do better? What are your experiences? I have no answers (especially to the question of what your experiences are), so I welcome your comments.

Wednesday, December 13, 2017

Our AI future: The Good and the Ugly

I don't directly work in machine learning, but one cannot deny the progress it has made and the effect it has on society. Who would have thought even a few years ago that ML would have basically solved face and voice recognition and would translate nearly as well as humans?

The Neural Information Processing Systems conference, held last week in Long Beach, California, sold out its 7500 registration slots in 12 days. NIPS, not long ago just another academic conference, has become a major machine learning job market where newly minted Ph.D.s earn north of $300,000 and top-ranked senior academics command multimillion-dollar, multiyear contracts.

AlphaZero, an offshoot of Google’s Go programs, learned chess given only the rules in just four hours (on 5000 tensor processing units) and easily beats the best human-designed chess programs. Check out this match against Stockfish.



It's part of a trend: machine learning often works better when humans just get out of the way.

The advances in machine learning and automation have a dark side. Earlier this week I attended the CRA Summit on Technology and Jobs, one of a series of meetings organized by Moshe Vardi on how AI and other computing technology will affect the future job market. When we talk about ethics in computer science we usually talk about freedom of information, privacy and fairness, but this may be the biggest challenge of them all.

The starkest statistic: contrary to what certain politicians may tell you, manufacturing output in the United States has never been higher, but manufacturing jobs have declined dramatically due to automation.


The changes have hit white, middle-class, less-educated males the hardest. While this group usually doesn't get much attention from academics, they have been hit hard, often taking less rewarding jobs or dropping out of the job market entirely. We're seeing many young people living with their parents and spending their days playing video games, and we're seeing a spike in suicides and drug use. Drug overdose is now the leading cause of death for men under 50.

There are no easy solutions. Universal basic income won't fill the psychological need a job serves in being a part of something bigger than oneself. In the end we'll need to rethink the educate-work-retire cycle towards more life-long learning and find rewarding jobs that work around automation. This all starts by having a government that recognizes these real challenges.

Tuesday, December 12, 2017

Interesting Probability on a VERY OLD TV show

I have posted about things I see on TV or in movies that are math- or CS-related:

Do TV shows overestimate how much a genius can help solve crimes or make really good crystal meth (which seems to be blue)? YES, see here.

Do TV shows get math wrong? YES, see here and about 90% of the episodes of Numb3rs.

Closer to home: do TV shows say stupid things about P vs NP? Elementary (one of the two modern-day Sherlock Holmes shows) did; see here.

Did Kirk and Spock really defeat a computer by a trick that wouldn't work now? Yes, see Lance's post on this here.

Do TV shows use the word Quantum incorrectly? They do, but they are not alone in this; see here.

Do the people writing Futurama get their math right? Yes; see here.

Do the people writing 24 get their math wrong? Yes; see here.

Does The Big Bang Theory mostly get things right? Yes! See here.

There are more (Seinfeld thinks comedians should learn proofs! Really; see here), but I can make my point just with the ones above.

ALL of the TV shows except Star Trek were from after 2000 (or so). So, with the exception of science fiction, math refs and science refs in TV shows are relatively recent, or so I had thought.

Which is why I was surprised and delighted to see, in an episode of the old western (anti-western? satire of a western?) Maverick from 1958 (before I was born!) called Rope of Cards, a CORRECT and INTERESTING math reference. Maverick bets that a random 25 cards from a deck of cards can be arranged into five 5-card pat hands (I had to look that up: hands where you don't want to discard any cards, so a flush, a straight, or a full house would qualify; 4 of a kind would be pat if there were no wild cards). The sucker takes the bet and loses. Maverick later says the odds are high and calls the game Maverick Solitaire. And that is now the name of the puzzle; see here. The probability is around 0.98.
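If you'd like to check the figure yourself, here is a rough Monte Carlo sketch in Python (my own quick brute force, not an exact computation, and all the function names are mine): deal 25 random cards and backtrack over pat 5-card hands that cover the lowest remaining card. It's slow pure Python, but a hundred or so trials should land near 0.98, assuming I've coded the pat-hand rules the way the puzzle intends.

```python
import random
from itertools import combinations

RANKS = range(13)   # 0 = ace, 1 = two, ..., 12 = king
SUITS = range(4)
DECK = [(r, s) for r in RANKS for s in SUITS]

def is_pat(hand):
    """Pat = flush, straight, full house, or four of a kind (no wild cards)."""
    ranks = sorted(r for r, _ in hand)
    suits = {s for _, s in hand}
    counts = sorted(ranks.count(r) for r in set(ranks))
    if len(suits) == 1:                     # flush (straight flushes count too)
        return True
    if counts in ([1, 4], [2, 3]):          # four of a kind, full house
        return True
    if len(set(ranks)) == 5:                # straight, ace low or high
        return ranks[4] - ranks[0] == 4 or ranks == [0, 9, 10, 11, 12]
    return False

def can_split(cards):
    """Backtracking: can these cards be partitioned into all-pat 5-card hands?"""
    if not cards:
        return True
    first, rest = cards[0], cards[1:]
    for combo in combinations(rest, 4):     # candidate hand covering `first`
        if is_pat((first,) + combo):
            leftover = [c for c in rest if c not in combo]
            if can_split(leftover):
                return True
    return False

def estimate(trials=100):
    """Monte Carlo estimate; slow, but enough trials for a rough answer."""
    wins = sum(can_split(sorted(random.sample(DECK, 25))) for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    print(estimate())
```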

I call this a mention of math since it has to do with probability, which may be a stretch. And I doubt the scene would encourage people to go into math. But it might encourage one to learn probability, either to sucker others or to avoid being suckered.

So the question now is: are there other non-science-fiction refs to math in older TV shows? I suspect yes, similar to the one above, which involves gambling and probability. What is the earliest mention of math on a TV show? The oldest that did not involve science fiction or gambling?


Thursday, December 07, 2017

Razor's Edge

Informally, the sensitivity conjecture asks whether every hard Boolean function has a razor's-edge input, where flipping a random bit has a reasonable chance of flipping the output.

Let's be more precise. We consider functions f mapping {0,1}^n to {0,1}. For every input x, the decision tree complexity at x is the least number of bits of x you would need to query to decide whether the function outputs 0 or 1. The decision tree complexity of a function is the maximum decision tree complexity over all possible x. Most interesting functions have high decision tree complexity; even the lowly OR function requires querying every bit on the all-zeroes input. The decision tree complexity is polynomially equivalent to randomized complexity, quantum complexity, certificate complexity, and the degree of a polynomial that computes the function exactly or approximately. The recent paper by Aaronson, Ben-David and Kothari gives a nice chart showing the relationships between these measures and references to the various papers giving the bounds.
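For tiny n you can compute decision tree complexity straight from the definition with a minimax recursion: choose the best bit to query, let an adversary pick the worse answer, and stop once the function is constant on every input consistent with the answers so far. This is a toy sketch of my own (not from any of the papers above), and the function names are invented for the example.

```python
from itertools import product

def dtree_complexity(f, n, fixed=None):
    """Worst-case number of adaptive queries needed to determine f,
    given the bits already queried (`fixed` maps bit index -> 0/1)."""
    fixed = fixed or {}
    free = [i for i in range(n) if i not in fixed]
    # If f is constant over all completions of the fixed bits, no queries are needed.
    values = set()
    for bits in product((0, 1), repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        values.add(f(tuple(x[i] for i in range(n))))
    if len(values) == 1:
        return 0
    # Otherwise: we pick the best next query (min), the adversary the worst answer (max).
    return 1 + min(max(dtree_complexity(f, n, {**fixed, i: b}) for b in (0, 1))
                   for i in free)

OR = lambda x: int(any(x))
print(dtree_complexity(OR, 3))   # 3: the all-zero input forces every bit to be queried
```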

The sensitivity of f on an input x is the number of bit locations i such that f(x) ≠ f(x⊕i), where x⊕i is x with the ith bit flipped. The sensitivity of f is the maximum sensitivity over all inputs. The sensitivity conjecture states that there is some ε>0 such that the sensitivity of f is at least m^ε whenever the decision tree complexity of f is at least m. If the conjecture were true, then for any function with maximal decision tree complexity n (querying every input bit) there must be some razor's-edge input x such that flipping a random bit of x has probability at least n^(ε-1) of flipping the output.
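The definition translates directly into a few lines of brute force (again a sketch of my own, with invented names, practical only for small n). For OR the maximum is achieved at the all-zero input, where every single-bit flip changes the answer.

```python
from itertools import product

def sensitivity_at(f, x):
    """Number of positions i with f(x) != f(x with bit i flipped)."""
    return sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(len(x)))

def sensitivity(f, n):
    """Maximum sensitivity over all inputs in {0,1}^n."""
    return max(sensitivity_at(f, x) for x in product((0, 1), repeat=n))

OR = lambda x: int(any(x))
print(sensitivity(OR, 4))   # 4, achieved at the all-zero input
```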

I find it surprising that we have no proof or counterexample to this purely combinatorial question. There is a generalization of sensitivity known as block sensitivity: the largest number of disjoint blocks of input bits such that flipping all the bits in any one block flips the output bit. Block sensitivity is known to be polynomially related to decision tree complexity.
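Block sensitivity can be brute-forced the same way for tiny n (a sketch under the same caveats): at each input, collect the blocks whose joint flip changes the output and search for the largest pairwise-disjoint collection. For a simple function like OR the two measures coincide; the known separations between sensitivity and block sensitivity need larger, more structured examples.

```python
from itertools import product, combinations

def flip(x, block):
    """Flip the bits of x at every position in `block`."""
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def block_sensitivity_at(f, x):
    n = len(x)
    # All nonempty blocks whose flip changes the output at x.
    sensitive = [frozenset(B)
                 for r in range(1, n + 1)
                 for B in combinations(range(n), r)
                 if f(flip(x, B)) != f(x)]

    def best(i, used):
        """Largest number of pairwise disjoint blocks among sensitive[i:]."""
        if i == len(sensitive):
            return 0
        skip = best(i + 1, used)
        if sensitive[i] & used:
            return skip
        return max(skip, 1 + best(i + 1, used | sensitive[i]))

    return best(0, frozenset())

def block_sensitivity(f, n):
    return max(block_sensitivity_at(f, x) for x in product((0, 1), repeat=n))

OR = lambda x: int(any(x))
print(block_sensitivity(OR, 4))   # 4, the same as the sensitivity of OR
```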

In a future post I'll talk about some approaches towards resolving this conjecture.

Monday, December 04, 2017

Fireside chat with Simons Inst Director Dick Karp



The link above is Samir Khuller interviewing Dick Karp, though it's labelled as a fireside chat with Dick Karp.

Very interesting to hear how TCS has evolved. More generally, it's good to know where you've come from to have a better idea of where you're going.

bill g.