A while back Complexity Blog, Shtetl-Optimized, and GLL all blogged about the improved matrix multiplication algorithms (Complexity Blog: here, Shtetl-Optimized: here, GLL: here) of Stothers/Williams. It may have been on other theory blogs as well (if you know of one, then let me know). We denote Matrix Multiplication Algorithm by MMA, and we write n^a instead of O(n^a). All the papers we refer to can be found either here or here.
1987: Coppersmith and Winograd get MMA in n^2.375477
2010: Stothers gets MMA in n^2.374
2011: Williams gets MMA in n^2.3728642
(Williams and Stothers were independent, though Williams used some of Stothers's work to simplify her proofs for the final version.)
The Stothers/Williams results were a big deal! Williams's paper got into STOC and, as mentioned above, three blogs reported on it AS SOON AS it was public.
Fast forward to
2014: Le Gall gets MMA in n^2.3728639. Wikipedia says that this MMA is a simplification of Williams's algorithm.
The 2014 result may or may not be interesting (that's a tautology!). But what strikes me is that I came across it in 2015 and had not heard of it. I emailed Lance to ask if he had heard of it; he had not. I don't quite know if Lance and I not knowing about it means it's not that well known, but it's at least one indicator.
ALL of which brings us back to the title of this blog: When do we care about small improvements?
- We care about a small improvement if the result in question has been stuck where it was for a long time. This was the case for the Stothers/Williams MMA, and also for when the approximation ratio for metric TSP went from 3/2 to 3/2 - c (see here).
- We care about small improvements if they illustrate a new technique. The leaf-counting technique for lower bounds on the number of comparisons needed to find the ith largest element gave clean proofs and (I think) small improvements to known results. (Leaf-counting technique in brief: any decision tree for MAX of n elements has EVERY branch of length at least n-1, hence at least 2^{n-1} leaves. Take a tree for the ith largest. For each choice of x_1,...,x_{i-1}, prune the tree by having these elements beat everything else; how they compare to each other, determine arbitrarily. This results in a MAX tree for the remaining n-i+1 elements, which hence has at least 2^{n-i} leaves. Trees arising from different choices of {x_1,...,x_{i-1}} have disjoint sets of leaves, so the original tree has at least (n choose i-1)2^{n-i} leaves, and hence a branch of length at least log_2((n choose i-1)2^{n-i}) = n - i + log_2(n choose i-1), which is roughly n - i + (i-1)log n. Was this better than what was known at the time? Not sure, but the proof was much simpler.)
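To make the arithmetic in the leaf-counting argument concrete, here is a short Python sketch (my own illustration, not from the original paper) that evaluates the bound n - i + ceil(log_2(n choose i-1)):

```python
from math import ceil, comb, log2

def leaf_count_lower_bound(n, i):
    """Lower bound on comparisons to find the ith largest of n elements,
    via leaf counting: the decision tree has at least
    C(n, i-1) * 2^(n-i) leaves, so some branch (= some computation path)
    has length at least log_2 of that quantity."""
    return ceil((n - i) + log2(comb(n, i - 1)))

# For MAX (i = 1) this recovers the exact bound n - 1.
print(leaf_count_lower_bound(100, 1))  # 99
# For the 2nd largest it gives n - 2 + ceil(log2(n)).
print(leaf_count_lower_bound(100, 2))  # 105
```

For i = 2 this is the classic n + ceil(log n) - 2 bound for second-largest, which is a reassuring sanity check on the formula.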
- We care about small improvements if they tell you that a natural upper or lower bound is NOT true. Two examples:
  - Bent and John showed that finding the median requires 2n - 2sqrt(n) - O(log n) comparisons; later Dor and Zwick improved this to (2 + 1/2^50)n. (I can't seem to find a free online version of this; if you do, please leave a comment.)
  - Schönhage, Paterson, and Pippenger had median in 3n + o(n) comparisons, and Dor and Zwick (not a typo, the same people) improved this to 2.95n + o(n). (I can't seem to find a free online version of this either; if you do, please leave a comment.)
  - In both cases, especially the first, the improvement is really small and unimportant; however, knowing that 2n is NOT the lower bound and 3n is NOT the upper bound is worth knowing: the exact complexity is NOT going to be a nice number like 2n or 3n. The only nice number between 2 and 3 is e, so let's hope it's en. (I think I read that 2.5n is the actual conjecture. 2.5 is nice, but e is nicer.)
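As a tiny numeric sanity check (my own illustration, using the constants quoted above), the improved bounds do strictly separate the old "nice" bounds 2n and 3n, but the lower-bound improvement is microscopic:

```python
# Compare the old bounds 2n and 3n for median finding against the
# Dor-Zwick improvements quoted in the text (lower-order terms dropped).
n = 1_000_000
new_lower = (2 + 2**-50) * n   # Dor-Zwick lower bound: (2 + 1/2^50) n
new_upper = 2.95 * n           # Dor-Zwick upper bound: 2.95 n + o(n)

assert 2 * n < new_lower < new_upper < 3 * n
# The improvement over 2n is only n / 2^50 comparisons:
print(new_lower - 2 * n)
```

So for a million elements the lower bound moves by less than one billionth of a comparison, which is exactly why the result matters for what it rules out rather than for the number itself.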
I am not following what you are saying about the leaf counting technique. Could you provide a link/reference to a longer discussion?
(This is Bill Gasarch; I am having some login problems.) The reference is:

A counting approach to lower bounds for selection problems
by Frank Fussenegger and Harold N. Gabow
JACM, volume 26, number 2, pages 227-238, 1979.