Thursday, June 30, 2016

Excalibur

Goro Hasegawa passed away last week at the age of 83. Hasegawa created and popularized the board game Othello, based on the old British game Reversi.

Back in the 80's, a high school friend, Chris Eisnaugle, and I used to write programs together, including the Frogger-like game Ribbit for the Apple II. We decided to try our hands at board games and aimed at Othello, as it seemed simpler to manage than the more popular attempts at computer chess. Our first program played awfully, but we contacted Peter Frey, a Northwestern psychology professor who worked on computer games, and had some great discussions with him. Frey pointed us to some great techniques like alpha-beta pruning and table lookup for edge positions. Who knew that I would become a Northwestern professor myself later in life (2008-2012).

Unlike many games, the number of moves in Othello is limited at the end of the game, so even on the old IBM PC we used, we could play a perfect endgame from 14 moves out. So a simple strategy of maximizing mobility in the early game, controlling the edges and corners in the midgame, and playing a perfect endgame gives a pretty strong Othello program. We called our game Excalibur after King Arthur's sword and the title of a great movie telling his tale. We traveled to Cal State Northridge for the 1986 computer Othello tournament and captured third place, not bad considering the limited hardware we used. I entered a human tournament myself and for a brief time ranked as the 35th best Othello player in the US.
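Frey's suggestion of alpha-beta pruning is easy to sketch. Below is a generic fail-soft alpha-beta routine in Python; this is my illustration, not Excalibur's actual code, and the hand-built tree of scores stands in for a real move generator and evaluation function:

```python
# A minimal alpha-beta pruning sketch (illustration only, not Excalibur).
# A "node" is either a static score (a leaf) or a list of child nodes.
# In a real Othello program the children would come from the move
# generator and the leaf scores from an evaluator (mobility, edges,
# corners), with an exhaustive search taking over in the endgame.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that
    cannot change the final decision."""
    if depth == 0 or not isinstance(node, list):
        return node                      # leaf: its static score
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:            # alpha cutoff
                break
        return value

tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, 3, float('-inf'), float('inf'), True))  # → 5
```

The pruning never changes the value computed, only the work done; with good move ordering it roughly doubles the search depth reachable in the same time, which mattered a great deal on 1980s hardware.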

In 1989 we tried another computer Othello tournament, this time just calling in, and came in fifth place. One of our games was against the eventual winner, then-CMU professor Kai-Fu Lee. His program beat us, of course, but he was still impressed with the play of Excalibur. Kai-Fu Lee would later work for Microsoft, then leave to build up Google China, leading to one of the more memorable lawsuits over a non-compete agreement.

Computer Othello has improved greatly since then, and in 1997 Michael Buro's program Logistello easily beat the best human players. Buro worked at the NEC Research Institute, and we met when I joined in 1999. We chatted about Othello, but of course Excalibur was not in the same league as Logistello. Buro would later join the University of Alberta, which became the academic center for computer games.

Computer Othello gained popularity in part because no one could create a computer Go program that could beat good amateurs. That changed with the randomized search and machine learning techniques that led to AlphaGo.

So thank you, Goro Hasegawa, for creating this simple game that played such an interesting part in my life back in the day.

Sunday, June 26, 2016

There is now a Bounded Discrete Envy Free Cake Cutting Protocol!

Lance: Bill, there is a new result on cake cutting that was presented at STOC! Do you want to blog about it?

Bill: Do snakes have hips! Do  chickens have lips!

Lance:  No to the first one and I don't know to the second one.

Bill: Yes, I'll blog about it! What's the paper?

Lance: It's this paper by Aziz and Mackenzie.

Bill: Oh. That's not new. Five people emailed me about it a while back. But yes, I will blog about it.

Cake Cutting: There are n people with different tastes in cake (some like chocolate and some... OH, who doesn't like chocolate? Okay, someone prefers the kale that is on the cake). They want a protocol that divides the cake in a way that is fair. What is fair? There are many definitions, but I'll talk about two of them.

Proportional: Everyone gets at least 1/n of the cake (in their own opinion; I will omit saying this from now on).

Proportional sounds fair, but consider the following scenario: Alice thinks she got 1/3, but she thinks Bob got 1/2 and Eve got 1/6. Alice will envy Bob.

Envy Free: Everyone thinks their piece is the largest (or tied for the largest).

What is a protocol? It is a set of instructions and advice such that (1) if the players all follow the advice then the end result is fair, and (2) if a player does not follow the advice then that player might get less than his fair share. Hence all players are motivated to follow the advice. We assume that everyone acts in their own self-interest and that they are at a diamond cutters' convention (perhaps co-located with STOC) so they really can cut cake very finely.

We will only consider discrete protocols. We won't define this formally.
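To make the definitions concrete: for n=2 the classic cut-and-choose protocol is both proportional and envy free. Alice cuts the cake into two pieces she values equally, and Bob takes the piece he prefers. Here is a toy simulation in Python; the function names and the two valuation densities are made up for illustration:

```python
# Toy simulation of cut-and-choose for two players.  The cake is the
# interval [0,1]; a player's taste is a density function, so the value
# of a piece [a,b] is the integral of the density over it.  The two
# densities below are invented for this example.

def value(density, a, b, steps=2000):
    """Approximate the integral of `density` over [a, b] (midpoint rule)."""
    h = (b - a) / steps
    return sum(density(a + (i + 0.5) * h) for i in range(steps)) * h

def cut_and_choose(cutter, chooser):
    # Cutter bisects the cake into two pieces she values equally
    # (binary search for the halfway point of her own measure).
    lo, hi = 0.0, 1.0
    half = value(cutter, 0.0, 1.0) / 2
    for _ in range(40):
        mid = (lo + hi) / 2
        if value(cutter, 0.0, mid) < half:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2
    # Chooser takes whichever piece he values more; cutter gets the other.
    left, right = (0.0, cut), (cut, 1.0)
    if value(chooser, *left) >= value(chooser, *right):
        return {'chooser': left, 'cutter': right}
    return {'chooser': right, 'cutter': left}

alice = lambda x: 2 * x        # Alice likes the right end (total value 1)
bob   = lambda x: 2 * (1 - x)  # Bob likes the left end (total value 1)
pieces = cut_and_choose(alice, bob)
# Each player values their own piece at >= 1/2 of the cake, so the
# outcome is both proportional and envy free (in their own opinions).
print(pieces, value(alice, *pieces['cutter']), value(bob, *pieces['chooser']))
```

The hard part, of course, is that nothing this simple works for larger n, which is exactly what the rest of this post is about.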

Prior Results:
1) There is a protocol for Prop fairness for n people that uses O(n log n) cuts. See my notes.

2) Jeff Edmonds and Kirk Pruhs showed a lower bound of Omega(n log n). See their paper.

3) There is a protocol for Envy Free fairness for 3 people due to Conway and Selfridge, from around 1960. This protocol takes 5 cuts. (It is in the notes I point to in the next item.)

4) In 1995 Brams and Taylor obtained a protocol for envy free fairness for n people. But there is a catch: there is no bound on the number of cuts. For all N there is a way to set four people's tastes so that the protocol takes more than N cuts. See my notes.

All items to follow are for an envy free protocol for n people.

5) It was an open question to determine if there is a bounded protocol. Stromquist proved that there can be no bounded protocol if all of the players must get a contiguous piece, though this was not the case in the Brams-Taylor protocol. See his paper.

At the time I thought there would be no bounded protocol. I found a way to measure unbounded protocols using ordinals and wrote a paper on it: See my paper.

6) Aziz and Mackenzie showed there was a bounded protocol for 4 people. See their paper.

7) Aziz and Mackenzie, STOC 2016, showed there is a protocol that takes at most TOW(n) cuts (that's roughly n to the n to the n ... to the n, n times). Hence a bounded protocol! See their paper.

What's next? Either improve the number of cuts or show it can't be done!


Thursday, June 23, 2016

STOC 2016

I attended the 2016 STOC Conference earlier this week in Cambridge, Massachusetts. No shortage of awesome results, highlighted by László Babai's new quasipolynomial-time graph isomorphism algorithm. Babai didn't have a chance to give more than a glimpse into his algorithm in the 20-minute talk, but you did see his joy in discovering the concept of "affected" last September, the key to making the whole algorithm work.

Babai also received the ACM SIGACT Distinguished Service prize for his work for the community including his open-access journal Theory of Computing.

You can see and access all the papers from the program page. Links on that page will give you free access to the papers forever. I'll mention some papers here but you'll have to click over on the program page to access them.

Troy Lee gave my favorite talk on his paper Separations in Query Complexity Based on Pointer Functions with Andris Ambainis, Kaspars Balodis, Aleksandrs Belovs, Miklos Santha and Juris Smotrovs. He gave a wonderful description of a (contrived) function that gives strong separations between deterministic, randomized and quantum decision tree complexity. Scott Aaronson, Shalev Ben-David and Robin Kothari followed up on that work in Separations in query complexity using cheat sheets. 

Let me also mention the best paper winners
  • Reed-Muller Codes Achieve Capacity on Erasure Channels by Shrinivas Kudekar, Santhosh Kumar, Marco Mondelli, Henry D. Pfister, Eren Sasoglu and Rudiger Urbanke
  • Explicit Two-Source Extractors and Resilient Functions by Eshan Chattopadhyay and David Zuckerman
  • Graph Isomorphism in Quasipolynomial Time by Laszlo Babai
and the Danny Lewin best student paper awardees
  • A Tight Space Bound for Consensus by Leqi Zhu
  • The 4/3 Additive Spanner Exponent is Tight by Amir Abboud and Greg Bodwin
The conference had 327 registrants, 44% students and 74% from the USA. The Symposium on Computational Geometry was held at Tufts before STOC and they held a joint workshop day in between. There were about 20 registered for both conferences.

STOC had 92 accepted papers out of 370 submissions. None of the three papers claiming to settle P v NP were accepted.

STOC 2017 will be held in Montreal June 19-23, the first of a theory festival. There will be multi-day poster sessions, with a poster for every paper. Invited talks will include best papers from other theory conferences. There will be a mild increase in the number of accepted papers. The hope is to draw a broader and larger group of theorists to this festival.

The first STOC conference was held in Marina del Rey in 1969. That makes the 2018 conference STOC's 50th, which will return to Southern California. The Conference on Computational Complexity will co-locate.

FOCS 2016 will be held October 9-11 in New Brunswick, New Jersey preceded by a celebration for the 60th birthday for Avi Wigderson.

SODA 2017 will be held January 16-19 in Barcelona. Paper registration deadline is July 6.

If you weren't at the business meeting, it's worth looking at the slides for the NSF and the Committee for the Advancement for Theoretical Computer Science. Of particular note, the CATCS site Theory Matters has job announcements and sample CAREER proposals.

Tuesday, June 21, 2016

When does n divide a_n? The answer, though you already know it. The point, though it's not as strong as I wanted. Oh well.

In my last post When does n divide a_n? I gave a sequence:

a(1)=0

a(2)=2

a(3)=3

for all n ≥ 4 a(n) = a(n-2) + a(n-3)

and I noted that for 2 ≤ n ≤ 23 it looked like

n divides a(n) iff n is prime

and challenged my readers to prove it or disprove it.

I thought it would be hard to find the first counterexample and hence I would make the point:

Just because a pattern holds for the first 271440  numbers does not mean that it always holds!

However, several readers found that 521^2 = 271441 is a counterexample. In fact, they found it rather quickly. I thought it would take longer, since this blog (also my inspiration) by Colin Wright seemed to indicate that fancy data structures and algorithmic tricks were needed. So I emailed Colin about this, and it turns out he originally wrote the program many years ago. I quote his email:

---------------------------
> I posted on Perrin Primes (without calling them that) and people seemed to
> find the counterexample, 521^2, easily.

Yes.  These days the machines are faster, and languages with arbitrary
precision arithmetic are common-place.  When I first came across this
problem, if you wanted arithmetic on more than 32 bits you had to write
the arbitrary precision package yourself.  This was pre-WWW,
although not pre-internet.  Quite.

> So I wondered why your code needed those optimizations.

Even now, it's easy to find the first few counter-examples, but it rapidly
gets out-of-hand.  The sequence grows exponentially, and very soon you are
dealing with numbers that have thousands of digits. Again, not that bad now,
but impossible when I first did it, and even now, to find anything beyond the
first 20 or 30 counter-examples takes a very long time.

So ask people how to find the first 1000 counter-examples,
and what they notice about them all.
-----------------------------------------

back to my post:

It is known that if n is prime then n divides a(n). (ADDED LATER: see here for a proof.) The converse is false. Composites n such that n divides a(n) are called Perrin Pseudoprimes. The following questions seem open, interesting, and rather difficult:

1) Why is the first Perrin Pseudoprime so large?

2) Are there other simple recurrences b(n) for which "n divides b(n) iff n is prime" looks true for a while before failing?
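The counterexample is easy to check yourself, since only a(n) mod n is needed. This sketch is mine, not code from the post or from Colin:

```python
# Verify the counterexample 271441 = 521^2 directly.  We only need
# a(n) mod n, so keep just the last three terms reduced mod m.

def a_mod(n, m):
    """Return a(n) mod m, where a(1)=0, a(2)=2, a(3)=3 and
    a(n) = a(n-2) + a(n-3) for n >= 4."""
    if n == 1:
        return 0 % m
    x, y, z = 0, 2, 3            # a(1), a(2), a(3)
    for _ in range(n - 3):       # empty loop when n is 2 or 3
        x, y, z = y, z, (x + y) % m
    return y % m if n == 2 else z % m

def n_divides_a_n(n):
    return a_mod(n, n) == 0

# The pattern "n divides a(n) iff n is prime" holds up to 23...
primes = {2, 3, 5, 7, 11, 13, 17, 19, 23}
assert all(n_divides_a_n(n) == (n in primes) for n in range(2, 24))

# ...but fails at 271441 = 521^2, the first Perrin pseudoprime.
assert 271441 == 521 * 521 and n_divides_a_n(271441)
print("271441 is a Perrin pseudoprime")
```

The O(n) loop is fine for one check; to hunt for many pseudoprimes, as Colin's email suggests, you would compute a(n) mod n by matrix exponentiation in O(log n) steps per n instead.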

Friday, June 17, 2016

The Relevance of TCS

Avi Wigderson responds to yesterday's post.

20 years is a long time, and TCS is in a completely different place today than it was then.
I am happy to say that internally its members are far more confident of its importance and independence as a scientific discipline, and externally the recognition of that importance by all sciences (including computer science) has grown tremendously. I have no doubt that both will continue to grow, as will its impact on science and technology.

Let me address a few aspects of the original post (one can elaborate much more than I do here).

Young talent: The way we continuously draw exceptional young talent to our core questions is not just a fact to state - it carries important meaning, namely a key sign of our health and prosperity. After all, these exceptionally talented young people could have done well in any field in science and technology, and they freely chose us (and indeed have been responsible for the many great results of the field in the past 20 years)!

Funding levels: In contrast, funding levels are always controlled by few and are subject to political pressures. So here our field was wise to grow up politically and realize the importance of advocacy and PR besides just doing great research. This has definitely helped, as did other factors.

Growth of theory in academia: I have no idea of the exact statistics (or even how to measure it exactly) but I should note as an anecdote that as soon as Harvard got 12 new positions in CS it made three senior offers to theorists: Cynthia, Madhu and Boaz! I see it as a critical development to have TCS well represented not only in the top 20 universities but in the top 100. Our educational mission is too important to be reserved only for the elite schools. (Needless to say, our science and way of thinking should be integrated into the K-12 educational system as well. While this is budding, significant meaningful presence will probably take decades.)

Scientific relevance: While it may be too early to evaluate the true impact of our (many) specific incursions into and collaborations with Biology, Economics, Physics, Mathematics, Social Science etc., I believe the following general statement. *The emerging centrality of the notion of algorithm, and the limits of its efficient utilization of relevant resources, is nothing short of a scientific revolution in the making.* We are playing a major role in that revolution. Some of the modeling and analysis techniques we have developed and continue to develop, and even more, the language we have created over the past half century to discuss, invent and understand processes, is the fuel and catalyst of this revolution. Eventually all scientists will speak this language, and the algorithmic way of thought will be an essential part of their upbringing and research.

Technological relevance: Even without going to great past achievements, which are taken for granted and dominate technological products, and considering only current TCS work, the evidence is staggering. Sublinear algorithms, Linear solvers, Crypto (NC0-crypto, Homomorphic encryption,...), Privacy, Delegation, Distributed protocols, Network design, Verification tools, Quantum algorithms and error correction, and yes, machine learning and optimization as well, are constantly feeding technological progress. How much of it? Beyond counting patents, or direct implementations of conference papers, one should look at the integration of modeling and analysis techniques, ways of thinking, and the sheer impact of "proofs of concept" that may lead to drastically different implementations of that concept. Our part in the education we provide to future developers was, is and should be of central influence on technology.

In a tiny field like ours, having the impact we do on so many scientific and technological fields that are factors of 10-100 larger than us may seem miraculous. Of course we know the main reason since Turing - we have universality on our side - algorithms are everywhere. What is perhaps more miraculous is the talent and willingness of pioneers in our field over decades to search, interact, learn and uncover the numerous forms of this universal need in diverse scientific and technological fields, and then suggest and study models using our language and tools. This has greatly enriched not only our connections to and impact on other disciplines, but also our intrinsic challenges and mysteries, many of which remain wide open. I am happy to say that at least part of that remarkable young talent constantly drawn into our field keeps focus on these intrinsic challenges and the natural, purely intellectual pursuits they lead to. Our history proves that there are direct connections between the study and progress on core questions and our interactions with the outside world. Our current culture luckily embraces both!

All the above does not mean that we can't improve on various aspects, and constant questioning and discussion are welcome and fruitful. But I believe that the firm foundation of these deliberations should be the intrinsic scientific importance of our mission, to understand the power, limits and properties of algorithms of all incarnations, shapes and forms, and the study of natural processes and intellectual concepts from this viewpoint. This importance does not depend on utility to other disciplines (it rather explains it), and should not seek justification from them. The correct analogy in my view is expecting theoretical physicists to seek similar confirmation in their quest to uncover the secrets of the universe.

Thursday, June 16, 2016

Karp v Wigderson 20 Years Later

The 48th ACM Symposium on the Theory of Computing starts this weekend in Boston. Let's go back twenty years to the 28th STOC, part of the second Federated Computing Research Conference in Philadelphia. A year earlier, in June of 1995, the NSF sponsored a workshop with the purpose of assessing the current goals and directions of the theory community. Based on that workshop, a committee chaired by Richard Karp presented its report Theory of Computing: Goals and Directions at the 1996 STOC conference. While the report emphasized the importance of core research, the central thesis stated
In order for TOC to prosper in the coming years, it is essential to strengthen our communication with the rest of computer science and with other disciplines, and to increase our impact on key application areas.
Oded Goldreich and Avi Wigderson put together a competing report, Theory of Computing: A Scientific Perspective that focuses on theory as a scientific discipline.
In order for TOC to prosper in the coming years, it is essential that Theoretical Computer Scientists concentrate their research efforts in Theory of Computing and that they enjoy the freedom to do so. 
There was a lively discussion at the business meeting, with Karp and Christos Papadimitriou on one side, with Goldreich and Wigderson on the other. I remember one exchange where one side said that the people who implement an algorithm should get as much credit as those who developed it. Avi, I believe, said he'd like to see the implementer go first.

So what has transpired in the last two decades? The theory community has not withered and died; the field continues to produce great results and attract many a strong student. On the other hand, the theory community has not had the growth we've seen in other CS areas, particularly in the recent university expansion. Industrial research in core theory, which had its highs in the 90's, has dwindled to a small number of researchers in a few companies. Foundation support has helped some: IAS now has a faculty position in theoretical CS, the Simons Foundation funds some faculty and recently started an institute in Berkeley, and the Clay Mathematics Institute has given the field a considerable boost by naming the P v NP problem as one of their millennial challenges.

The main core theory conferences (STOC, FOCS, SODA, Complexity and others) have continued to focus on theorems and proofs. Rarely does the research in these papers affect real-world computing. The theory community has not played a major role in the growth of machine learning and has left real-world optimization to the operations research community.

We have seen some other developments making some progress in connecting theory to applications.
  • 1996 saw the first Kanellakis Prize to honor "specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing"
  • Some companies, most notably Akamai, have come out of the theory community and helped shape real-world computing.
  • We have seen new research communities in EC and Quantum Computing that connect with economists and physicists. 
  • The NSF now has a program Algorithms in the Field that connects theorists with applied computer scientists.
  • Some theory topics like differential privacy have entered the mainstream discussion.
We live in a golden age of computer science and computing research is transforming society as we know it. Do we view ourselves as a scientific discipline divorced from these changes, or should theory play a major role? This is a discussion and debate the theory community should continue to have. 

Sunday, June 12, 2016

When does n divide a_n in this sequence?

Consider the following sequence:

a(1)=0

a(2)=2

a(3)=3

for all n ≥ 4  a(n) = a(n-2)+a(n-3)

Here is a table of a(n) for 2 ≤ n ≤ 23

n    :    2    3    4    5    6    7    8    9   10   11   12
a(n) :    2    3    2    5    5    7   10   12   17   22   29

n    :   13   14   15   16   17   18   19   20   21   22   23
a(n) :   39   51   68   90  119  158  209  277  367  486  644

For 2≤ n ≤ 23 the n such that n divides a(n) are

n = 2,3,5,7,11,13,17,19,23.

Notice a pattern? Of course you do!

I propose the following question which I will answer in my next post (in a week or so)

PROVE or DISPROVE the following:

for all n≥ 2   n divides a(n) iff n is prime.
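If you want to experiment before deciding, the table above is a few lines of Python (my sketch, using only the base cases and recurrence from the post):

```python
# Reproduce the table: a(1)=0, a(2)=2, a(3)=3,
# and a(n) = a(n-2) + a(n-3) for n >= 4.

a = {1: 0, 2: 2, 3: 3}
for n in range(4, 24):
    a[n] = a[n - 2] + a[n - 3]

print([a[n] for n in range(2, 24)])
# → [2, 3, 2, 5, 5, 7, 10, 12, 17, 22, 29, 39, 51, 68, 90, 119, 158, 209, 277, 367, 486, 644]

print([n for n in range(2, 24) if a[n] % n == 0])
# → [2, 3, 5, 7, 11, 13, 17, 19, 23]
```

Extending the loop beyond 23 is the obvious experiment; the values grow exponentially, so for large n you would work with a(n) mod n instead of a(n) itself.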



Thursday, June 09, 2016

Math Movies

In 1997 Good Will Hunting, a fictional movie about the hidden mathematical talents of an MIT janitor, grossed $225 million and won a best screenplay Oscar for Matt Damon and Ben Affleck. At the time the chair of the Chicago math department told me how much he disliked the movie given the way mathematics and mathematicians were portrayed. I told him the movie made math seem exciting and brought public awareness of the Fields medal, mentioned several times in the movie. You can't buy that kind of publicity for an academic field.

In 2001 A Beautiful Mind, on the life of John Nash, grossed $313 million at the box office and won the best picture Oscar. In 2014 we saw the critically acclaimed movies The Imitation Game (8 Oscar nominations with a win for adapted screenplay and $233 million gross) on Alan Turing and The Theory of Everything (5 nominations with a win for best actor, $123 million) on Stephen Hawking. These movies focused more on the struggles of the lead character than the science itself. Though these movies had their flaws, they did show a popular audience that the goals of math and science are worth an incredible struggle.

And complain all you want about the 2005 TV series Numbers, but get your head around the fact that a show about a crime-solving mathematician lasted six seasons. The Big Bang Theory remains the top US television comedy heading into its tenth season this fall.

Which takes us to the recent movie The Man Who Knew Infinity, about the life of Ramanujan, a movie that has generated wide excitement among mathematicians for the portrayal of the math itself, with credit given to consulting mathematician Ken Ono. I haven't seen the movie, as it has barely played in Atlanta. It got mixed reviews from critics, grossed only $3.4 million, and will probably be forgotten come awards season. The Ramanujan story is just not that dramatically interesting.

What's more important: getting the math right, or taking some liberties, telling a good story, and drawing a large audience? Can you actually do both? Because you can't inspire people with a movie they don't see.

Sunday, June 05, 2016

What do Evolution, Same Sex Marriage, and Abstract Set Theory have in Common?

(This post is based on articles from 2012 so it may no longer be true. Also- to be fair- I tried finding stuff on the web BY the people who object to our children being exposed to abstract set theory but could not, so this article is based on hearsay.)

Louisiana has a voucher system for poor kids to go to other schools, including religious ones. I am not here to debate the merits of that or the state of US education. However, from this article it seems that they are learning some odd things.

As you would expect, some Christian schools teach that Evolution did not occur, that same sex marriage is wrong (actually their opinion of gay people is far more negative than just being against same sex marriage), and that Abstract Set Theory is evil. (Note: Catholic schools have no problem with Evolution; in fact, the Catholic Church has never had a problem with it.)

Come again? Yes, we would expect these opinions on evolution and same sex marriage, but Abstract Set Theory? Why? It's explained in this article, but I'll say briefly that they don't like the post-modern view of mathematics where anything goes. The coherent version of their point of view is that they are Platonists. A less charitable view is that they find Abstract Set Theory too dang hard. I've also seen somewhere that they object to Cantor's theory since there is only one infinity and it is God.

The book Infinitesimals: How a Dangerous Mathematical Theory Shaped the Modern World is about an earlier time, around 1600, when the Catholic Church thought that using infinitesimals was... bad? sinful? contrary to the laws of God and Man? (I reviewed it here.) I thought we were no longer living in a time when religion had an influence on mathematics. And, to be fair, we ARE past that time. But this voucher program worries me. And I haven't even gotten to what they do to American History.




Thursday, June 02, 2016

CCC 2016

Earlier this week I attended the 31st Computational Complexity Conference in Tokyo. I've been to thirty of these meetings, missing only the 2012 conference in Porto. Eric Allender has attended them all.

The conference had 130 participants, with fewer women than you can count on one hand, and 26 made it from the States. There were 34 accepted papers out of 91 submitted.

The proceedings are fully open access through the Dagstuhl LIPICS series, including the paper by Rahul Santhanam and myself that I presented at the meeting. The best paper, by Marco Carmosino, Russell Impagliazzo, Valentine Kabanets and Antonina Kolokolova, drew a surprisingly strong connection between natural proofs and learning theory. In one of my other favorite talks, John Kim and Swastik Kopparty showed how to decode Reed-Muller codes over an arbitrary product set instead of a structured field.

The German government will in the future no longer support LIPICS, due to EU rules preventing unfair competition with commercial publishers. (Don't shoot the messenger.) LIPICS will continue, but conferences will have to spend a little more to use it.

Next year's conference will be in Riga, Latvia, July 6-9, right before ICALP in Warsaw. The 2018 meeting is likely to take place close to STOC in Southern California.

Osamu Watanabe put together this slide show for the conference reception featuring pictures of attendees of the Complexity Conference through the ages, including the authors of this blog.


Peter van Emde Boas forwarded the call for papers and initial letters for the very first conference, originally called Structure in Complexity Theory.