Thursday, July 02, 2015

Goodbye SIGACT and CRA

Tuesday I served my last day on two organizations, the ACM SIGACT Executive Committee and the CRA Board of Directors.

I spent ten years on the SIGACT (Special Interest Group on Algorithms and Computation Theory) EC: four years as vice-chair, three years as chair and three years as ex-chair, admittedly not so active those last three years. SIGACT is the main US academic organization for theoretical computer science and organizes STOC as its flagship conference. I tried to do big things and managed a few smaller things (ToCT, a few more accepted papers in STOC, poster sessions, workshops, moving the Knuth and Distinguished Service awards to an annual schedule, an award for best student presentation, a tiered PC); some of them stuck and some of them didn't. Glad to see a new movement to try big changes to meet the main challenge: no conference, including STOC, really brings the theory community together anymore. As Michael Mitzenmacher becomes chair and Paul Beame takes my place as ex-chair, I wish them and SIGACT well moving forward.

The Computing Research Association's main efforts are promoting computing research to industry and government and increasing diversity in computing research. It's a well-run organization and we can thank them particularly for helping improve the funding situation for computing in difficult financial times. The CRA occasionally puts out best practices memos, like a recent one recommending quality over quantity for hiring and promotion. Serving on the board, I most enjoyed interacting with computer scientists from across the entire field, instead of just hanging with theorists at the usual conferences and workshops.

One advantage of leaving these committees: I can now kibitz more freely on the theory community and computing in general. Should be fun.

Sunday, June 28, 2015

When do we care about small improvements?


A while back this blog, Shtetl-Optimized, and GLL all blogged about the improved matrix multiplication algorithms of Stothers and Williams (Complexityblog: here, Shtetl-Optimized: here, GLL: here). It may have been on other theory blogs as well (if you know of any, let me know). We abbreviate Matrix Multiplication Algorithm as MMA, and we write n^a instead of O(n^a). All the papers we refer to can be found either here or here.

1987: Coppersmith and Winograd get MMA in n^2.375477

2010: Stothers gets MMA in n^2.374

2011: Williams gets MMA in n^2.3728642

(Williams and Stothers were independent, though Williams used some of Stothers's work to simplify her proofs for the final version.)

The Stothers/Williams results were a big deal! Williams's paper got into STOC and, as mentioned above, three blogs reported on it AS SOON AS it was public.

Fast forward to

2014: Le Gall gets MMA in n^2.3728639. Wikipedia says that this MMA is a simplification of Williams's algorithm.

The 2014 result may or may not be interesting (that's a tautology!). But what strikes me is that I came across it in 2015 and had not heard of it. I emailed Lance to ask if he had heard of it; he had not. I don't quite know if Lance and I not knowing about it means it's not that well known, but it's at least one indicator.
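To get a feel for just how small the Le Gall improvement is, here is a quick back-of-the-envelope sketch (the exponents are the ones quoted above; the choice of n values is just illustrative):

```python
# How much does shaving the matrix mult exponent from 2.3728642 (Williams)
# down to 2.3728639 (Le Gall) buy you?  The speedup factor is n^(old - new).
old_exp, new_exp = 2.3728642, 2.3728639

for n in (10**3, 10**6, 10**9):
    speedup = n ** (old_exp - new_exp)
    print(f"n = {n:>10}: speedup factor n^(old-new) = {speedup:.8f}")
```

Even at n = 10^9 the factor is about 1.000006, so the interest in such a result is conceptual rather than practical.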

ALL of which brings us back to the title of this post: When do we care about small improvements?

  1. We care about a small improvement if the result in question has been stuck where it was for a long time. This was the case for the Stothers/Williams MMA and also when the approximation ratio for metric TSP went from 3/2 to 3/2 - c (see here).
  2. We care about small improvements if they illustrate a new technique. The leaf-counting technique for lower bounds on the number of comparisons needed to find the ith largest gave clean proofs and (I think) small improvements to known results. (Leaf counting in brief: any decision tree for MAX has every branch of length at least n-1, hence at least 2^{n-1} leaves. Take a tree for the ith largest. For each choice of i-1 elements x_1,...,x_{i-1}, prune the tree by having these elements beat everything else; how they compare to each other is determined arbitrarily. This yields a MAX tree on the remaining n-i+1 elements, which hence has at least 2^{n-i} leaves. Different choices give disjoint sets of leaves, so the original tree has at least (n choose i-1)2^{n-i} leaves, hence a branch of length at least log_2((n choose i-1)2^{n-i}), roughly n + (i-1)log n - i. Was this better than what was known at the time? Not sure, but the proof was much simpler.)
  3. We care about small improvements if they tell us that a natural upper or lower bound is NOT the truth.
  3a. Bent and John showed that finding the median requires 2n - 2sqrt(n) - O(log n) comparisons, and later Dor and Zwick improved this to (2 + 1/2^50)n. (I can't seem to find a free online version of this; if you do, please leave a comment.)
  3b. Schönhage, Paterson, and Pippenger had median in 3n + o(n) comparisons, and Dor and Zwick (not a typo: the same people) improved this to 2.95n + o(n). (I can't seem to find a free online version of this either; if you do, please leave a comment.)
  3c. In both cases, especially (3a), the improvement is really small and unimportant; however, knowing that 2n is NOT the lower bound and 3n is NOT the upper bound is worth knowing. Hence the exact complexity is NOT going to be the nice number 2n or 3n. The only nice number between 2 and 3 is e, so let's hope it's en. (I think I read that 2.5n is the actual conjecture. 2.5 is nice, but e is nicer.)
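For reference, here is the leaf-counting calculation from item 2 written out (a sketch, assuming the pruning yields one MAX subtree, with disjoint leaves, for each choice of the i-1 pruned elements):

```latex
\[
  \#\text{leaves} \;\ge\; \binom{n}{i-1}\, 2^{\,n-i}
  \quad\Longrightarrow\quad
  \text{depth} \;\ge\; \log_2\!\Bigl(\binom{n}{i-1}\, 2^{\,n-i}\Bigr)
  \;=\; n - i + \log_2\binom{n}{i-1}
  \;\approx\; n - i + (i-1)\log_2 n \qquad (i \ll n).
\]
```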






Thursday, June 25, 2015

Changing STOC

At the recently completed STOC and the previous FOCS, much of the discussion revolved around reforming the conferences. You can read the discussion and comments on Windows on Theory, and I've also been cc'd on several very long email threads.

STOC, as the flagship conference of ACM SIGACT, should be the focal point of the community, the place where researchers circle their calendars and make sure they attend the event. You see that at SOSP for the systems community or SIGCOMM for networking. But not STOC, which is smaller than it was thirty years ago, when the theory community was a fraction of the size it is today.

Instead STOC has become a conference for its authors, giving researchers a prestigious line in their CVs. While authors get to present their papers, STOC is no longer a primary venue for dissemination; that role is better served by arXiv and ECCC.

The problem is conference overload. We have two top theory conferences a year, STOC and FOCS, not to mention SODA, Computational Complexity and so many others. Conferences are expensive in both time and money and we can't afford to attend too many. People often choose more specialized conferences and workshops where they can focus on talking to people in their specialized research areas.

SOSP on the other hand meets only once every two years, accepts only thirty papers and gets 600 attendees.

The only true solution would be to merge and/or eliminate conferences. We don't need two major theory conferences a year. But that's not politically feasible.

So the talk is of a Theory Festival centered around STOC in 2017, to make an event that all theorists would want to attend. What that theory festival should or should not do is the topic of all the discussion. I'm not going to talk about the various proposals but I encourage strong experimentation to get us out of this bad equilibrium. Otherwise we end up with status quo and status quo does not bring our community together.

Monday, June 22, 2015

Learning from teaching a HS student Schur's theorem on change


(All the math this post refers to is in my manuscript which is here.)

Recall Schur's theorem on making change, as stated in Wikipedia and other sources:

Let a_1,...,a_L be relatively prime coin denominations. Then the number of ways to make n cents
in change is n^(L-1)/((L-1)! a_1 a_2 ... a_L) + Θ(n^(L-2)).
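The count in the theorem can be sanity-checked with a standard coin-change dynamic program; the denominations and the value of n below are just examples:

```python
# Count the ways to make n cents in change (order doesn't matter) and compare
# against the leading term n^(L-1) / ((L-1)! * a_1*...*a_L) from the theorem.
from math import factorial, prod

def change_count(n, coins):
    ways = [1] + [0] * n      # ways[k] = number of multisets of coins summing to k
    for a in coins:
        for k in range(a, n + 1):
            ways[k] += ways[k - a]
    return ways[n]

coins = [1, 5, 10, 25]        # gcd is 1, so the theorem applies
n = 10_000
L = len(coins)
exact = change_count(n, coins)
leading = n ** (L - 1) / (factorial(L - 1) * prod(coins))
print(exact, leading, exact / leading)  # the ratio approaches 1 as n grows
```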

The proof I knew (from Wilf's book on generating functions) was not difficult; however, it involved roots of unity, partial fractions, Taylor series, and generating functions. I needed to present the proof to a HS student who was in precalc. The writeup linked above is what I finally came up with. A few points:

  1. HS students, or at least mine, know complex numbers, hence roots of unity were okay. The proof of Schur's theorem has another plus: he had asked me just recently how complex numbers could be used in the real world since they weren't... real. I said they are often used as an intermediary on the way to a real solution and gave him an example of a cubic equation where you spot a complex solution and use it to obtain the real solutions. Schur's theorem is a more sophisticated example of using complex numbers to get a result about reals (about naturals!), so that's a win.
  2. Partial fractions. If the student had had calculus then he would know what partial fractions were and believe me when I said they always work. But since he had not had calculus, I prepared a proof that they work. Then I realized: I have never seen a proof that they work! This is a matter of timing: I saw them in high school calculus in 1975, which was taught without proofs (just as well; analysis is a bad first-proof course), and I didn't quite realize that the techniques they taught us aren't quite a proof that the decomposition works. I came up with my own proof (I can't imagine it's original, but I have not found a reference) in 2015. That's 40 years between seeing a concept and proving that it works. A personal record.
  3. Taylor series. I needed the Taylor series for 1/(1-x)^b (just for b a natural number). I came up with a proof that does not use calculus and that a HS student could follow. Very happy that I was forced to do this. It actually uses a nice combinatorial identity!
  4. The lemmas about partial fractions and about Taylor series are of course very, very old. Are my proofs new? I doubt it, though I have not been able to find a reference. If you know one, please leave a polite comment.
  5. Having gone through the proof so carefully, I noticed something else the proof yields: let M be the LCM of a_1,...,a_L. For all 0 ≤ r ≤ M-1 there is a polynomial p of degree L-1 such that if n ≡ r (mod M) then p(n) is the number of ways to make change for n cents. I suspect this is known but could not find a reference (again, if you know one then please leave a polite comment).
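The no-calculus Taylor series fact in point 3, that the coefficient of x^n in 1/(1-x)^b is C(n+b-1, b-1), is easy to check by formal power-series arithmetic (a sketch; `series_inv_pow` is just an illustrative name):

```python
# Coefficients of 1/(1-x)^b, computed with no calculus: multiplying a series
# by 1/(1-x) = 1 + x + x^2 + ... is the same as taking prefix sums.
from math import comb

def series_inv_pow(b, terms):
    coeffs = [1] + [0] * (terms - 1)    # start with the series for 1
    for _ in range(b):
        for k in range(1, terms):
            coeffs[k] += coeffs[k - 1]  # multiply by 1/(1-x)
    return coeffs

print(series_inv_pow(3, 8))                # 1, 3, 6, 10, 15, 21, 28, 36
print([comb(n + 2, 2) for n in range(8)])  # the binomials C(n+2, 2): same list
```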
Moral of the story: by working with a HS student I was forced to find a proof of partial fraction decomposition, find a non-calculus proof of a Taylor series, and obtain an interesting corollary. Hence this is already a win!

Thursday, June 18, 2015

FCRC Complexity

On Wednesday at FCRC, the Complexity, STOC and EC (Economics and Computation) conferences all had sessions, a smorgasbord of talks but tough decisions on which session to attend. Here's one you might have missed: the EC paper Why Prices Need Algorithms by Roughgarden and Talgam-Cohen, which has a nice application of complexity to the existence of equilibria, not whether an equilibrium is hard to compute but whether it exists at all.

Roughly, Roughgarden and Talgam-Cohen show that if a certain kind of pricing equilibrium exists then one gets an efficient algorithm for a certain kind of reduction. Under reasonable complexity assumptions (like P <> NP) such reductions can't exist, and so neither can the equilibrium.

Late Wednesday came the Complexity business meeting, the first open business meeting of the now-unaffiliated Computational Complexity Conference. There were 84 attendees, a bit down from the last FCRC but higher than last year. There were 30 papers accepted out of 110 submissions. The 2016 conference will be held in Tokyo May 29-June 1.

There was much discussion on where Complexity will be in 2017 and on which journal will get the special issue for the next three years. Watch my twitter to see when they get set.




Tuesday, June 16, 2015

STOC Business Meeting

This week I'm at the Federated Computing Research Conference in Portland, a collection of many mostly ACM conferences. Last night was the STOC business meeting.

The meeting had beer but the beer had no alcohol. Not an auspicious start.

289 registrants, a bit lower than the past few STOCs. With EC and Complexity at the same conference you would think STOC would draw a larger crowd, but it doesn't.

The conference had 93 accepted papers out of 347 submissions. Best papers (copied from the proceedings):
The papers “Exponential Separation of Information and Communication for Boolean Functions” by Anat Ganor, Gillat Kol and Ran Raz, “2-Server PIR with sub-polynomial communication” by Zeev Dvir and Sivakanth Gopi, and “Lower bounds on the size of semidefinite programming relaxations” by James Lee, Prasad Raghavendra and David Steurer were selected for the STOC Best Paper Award. The paper “Inapproximability of Nash Equilibrium” by Aviad Rubinstein was selected for the Danny Lewin Best Student Paper Award.

These and the 2015 STOC papers are publicly accessible via acm-stoc.org/stoc2015/toc.html forever, part of a new STOC website with a history of past conferences.

Babai, Spielman and Teng officially received the awards we've mentioned below. The distinguished service award went to Avi Wigderson.

As of July 1 we have a new SIGACT Executive Committee: Paul Beame (ex-chair), Anna Karlin, Michael Mitzenmacher (chair), Toni Pitassi, Eric Vigoda and Ryan Williams.

Lots of useful information about funding and more in  Salil Vadhan's CATCS Report.

Upcoming conferences:

  • FOCS 2015 October 17-20 in Berkeley
  • SODA 2016 January 10-12 in Arlington, VA. Abstract deadline July 1
  • STOC 2016 - Cambridge, MA June 18-21, co-located with SoCG
  • STOC 2017 - Potential "Theory Festival" - see below
  • STOC 2018 - 50th STOC, possibly near Marina del Rey, the home of the first STOC
The business meeting ended with a discussion about a "Theory Festival" for STOC 2017. The goal is to get STOC to be a "must attend" meeting the way SOSP is for systems and SIGCOMM for networking. Check out the many discussions on Boaz Barak's blog.

Sunday, June 14, 2015

First issue of SIGACT News where I wasn't the editor. But...


I posted some time ago that I was resigning the editorship of the SIGACT News book review column and handing the reins over to Fred Green (who is older than me, so `handing over the reins' might not be quite right).

You can find the column on Fred's webpage here.

I wish him well of course. He's off to a great start:

  1. The Cult of Pythagoras: Math and Myths by Alberto A. Martinez. Review by Bill Gasarch.
  2. Infinitesimal: How a dangerous mathematical theory shaped the modern world, by Amir Alexander. Review by Bill Gasarch.
  3. Martin Gardner in the Twenty-First Century, edited by Michael Henle and Brian Hopkins. Review by Bill Gasarch.
  4. Algorithmic Barriers Falling: P=NP?, and The Essential Knuth, both by  Edgar Daylight. Review by Bill Gasarch.
  5. Love and Math: The Heart of Hidden Reality by Edward Frenkel. Review by Bill Gasarch.
  6. Structure and Randomness: Pages from Year One of a Mathematical Blog by Terence Tao. Review by Bill Gasarch. 
Why so many reviews by... me?! Fred writes that it's a tribute to my service, which is of course true and appreciated. But I also note that when I was editor it would have been... odd? impolite? to have all of the columns by me in a single issue. But having Fred do it is fine. When Fred resigns 17 years from now (not to give away his age, but I think he'll retire before then), perhaps his successor will do the same.

There are still books to review! Here is a list of books I still have in my office. If you want to review one, email both gasarch@cs.umd.edu and fgreen@clarku.edu with the book you want and the physical address to send it to.
 ALGORITHMS

ReCombinatorics: The algorithmics of ancestral recombination graphs and phylogenetic networks by Gusfield.


Tractability: Practical Approaches to Hard Problems. Edited by Bordeaux, Hamadi, Kohli.

Recent progress in the Boolean Domain.  Edited by Bernd Steinbach


PROGRAMMING LANGUAGES

Selected Papers on Computer Languages by Donald Knuth.

MISC COMP SCI

Introduction to reversible computing by Perumalla.


Digital Logic Design: A Rigorous Approach by Even and Medina

CoCo: The colorful history of Tandy's Underdog Computer by Boisy Pitre and
Bill Loguidice.

MATH AND HISTORY

Professor Stewart's Casebook of Mathematical Mysteries by Ian Stewart.

The Golden Ratio and Fibonacci Numbers by Richard Dunlap.



Mathematics Galore! The first five years of the St. Mark's Institute of Mathematics by Tanton.

Mathematics Everywhere. Edited by Aigner and Behrends.

An Episodic History of Mathematics: Mathematical Culture Through Problem Solving by Krantz.

Proof Analysis: A Contribution to Hilbert's Last Problem by Negri and Von Plato.






Thursday, June 11, 2015

A Metric Group Product

A guest post by Dylan McKay, recently graduated from Georgia Tech and soon to be PhD student at Stanford.

[Editor's Note: Turns out the given solution doesn't work and whether a metric group product over the non-negative reals exists remains open.]

Here is a cute puzzle motivated by a pair of undergrads and their poor understanding of what the phrase “Algebraic Geometry” really should mean:

Find a function f from the nonnegative reals to the nonnegative reals that satisfies the group axioms and the metric axioms, or prove that there is no such function. That is, find an f: R×R → R such that (R,f) is both a group and a metric space. (I am using R to refer to the set of nonnegative real numbers.)

As a quick reminder, the group axioms are:

- Closure: f(a,b) must be in R
- Identity: There must exist an element e such that for all a, f(e,a)=f(a,e)=a
- Associative: for all a,b,c, f(f(a,b),c) = f(a,f(b,c))
- Inverse: for all a, there must exist a b such that f(a,b)=e

And the metric axioms are:

- f(a,b) = 0 iff a=b
- f(a,b) <= f(a,c) + f(c,b)
- f(a,b) = f(b,a)

One really bizarre thing about this supposed function: since f(x,·) must be injective (it's a group operation), every other number has a distance from x that is unique; no other number is exactly that distance away! This is quite counterintuitive to how we think about distance. Each number is its own distance from 0, though, which is very much in line with intuition.

(If you want to solve the puzzle yourself, don't read any further until you do! Unless you want some hints, in which case, look at the next paragraph.)

We can, however, make some other observations that point us in the right direction. For instance, f(e,e) = 0 from the metric axioms and f(e,e) = e from the group axioms, so e = 0. And since f(a,a) = 0 = e for every a by the metric axioms, every element must be its own inverse!

Here we are reminded of a pretty familiar and trusty function, the exclusive-or (XOR) function! And indeed, this will lead us to a solution.

Here is our candidate function:

Consider x and y in their binary representations. If they do not have the same number of bits to the left of the decimal point (or should it be called the binary point?), pad the one with fewer such bits with 0’s so they have the same number of such bits. Then f(x,y) will be the number represented in binary by the bit-wise exclusive or of these two strings (maintaining the same number of bits to the left of the decimal point, of course). And there we have it: our function!

Now of course, we still need to prove that it satisfies the axioms. But if you do not believe that it does, work it out for yourself! Each axiom is pretty simple to check, especially as we are (for the most part) familiar with this as an extension of an idea for some smaller groups. Well, all of them except our good friend the triangle inequality. This one actually isn’t too bad, but if you have trouble, I will include a hint at the bottom of the post.
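On the nonnegative integers, where binary representations are finite and the subtlety flagged in the editor's note doesn't arise, all the axioms can be checked by brute force (a sketch; the range {0,...,63} is closed under XOR since 64 is a power of two):

```python
# Check the group and metric axioms for f(a, b) = a XOR b on {0, ..., 63}.
N = 64
f = lambda a, b: a ^ b

for a in range(N):
    assert f(0, a) == f(a, 0) == a                 # identity element e = 0
    assert f(a, a) == 0                            # each element is its own inverse
    for b in range(N):
        assert f(a, b) == f(b, a)                  # symmetry
        assert (f(a, b) == 0) == (a == b)          # d(a, b) = 0 iff a = b
        for c in range(N):
            assert f(f(a, b), c) == f(a, f(b, c))  # associativity
            assert f(a, b) <= f(a, c) + f(c, b)    # triangle inequality
print("all group and metric axioms hold on {0, ..., 63}")
```

The triangle inequality holds here because a XOR b = (a XOR c) XOR (c XOR b) and x XOR y <= x + y.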

Anyway, I hope you liked our puzzle!

Thanks!

-Dylan

Hint for the triangle inequality: XOR and addition are really similar, but there is a key difference between them that makes a XOR b <= a + b.