Just as I was complaining that we haven't seen many surprising breakthroughs in complexity recently, we get an earthquake of a result to start the year, showing that all algorithms can be simulated using considerably less memory than the time of the original algorithm. You can reuse space (memory) but you can't reuse time, and this new result from Ryan Williams in an upcoming STOC paper provides the first stark difference.
$$\mathrm{DTIME}(t(n)) \subseteq \mathrm{SPACE}\left(\sqrt{t(n)\log t(n)}\right)$$
This is a vast improvement on the previous best known simulation, the classic 1977 Hopcroft-Paul-Valiant paper showing
$$\mathrm{DTIME}(t(n)) \subseteq \mathrm{SPACE}\left(\frac{t(n)}{\log t(n)}\right)$$
only slightly lower than the trivial $\mathrm{DTIME}(t(n)) \subseteq \mathrm{SPACE}(t(n))$ bound.
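To get a feel for the quantitative gap, here is a quick back-of-the-envelope comparison of the three bounds. This is my own illustration, not from the paper; constant factors and low-order terms are ignored, and the function names are just placeholders.

```python
import math

# Rough space bounds (constants ignored) for simulating a time-t algorithm.
def trivial(t):                  # DSPACE(t): just store the whole tape
    return t

def hopcroft_paul_valiant(t):    # 1977: DSPACE(t / log t)
    return t / math.log2(t)

def williams(t):                 # 2025: DSPACE(sqrt(t log t))
    return math.sqrt(t * math.log2(t))

for t in (10**6, 10**9, 10**12):
    print(f"t = {t:,}: trivial ~ {trivial(t):.2e}, "
          f"HPV ~ {hopcroft_paul_valiant(t):.2e}, "
          f"Williams ~ {williams(t):.2e}")
```

Even for modest time bounds the square-root saving dwarfs the logarithmic one.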
Williams' proof relies on a space-efficient tree evaluation algorithm by James Cook and Ian Mertz from last year's STOC conference. Cook and Mertz's algorithm builds on earlier work on catalytic computing, highlighted in a recent Quanta article.
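Their headline bound, stated loosely and ignoring constant factors and the exact dependence on the fan-in: evaluating a bounded fan-in tree of height $h$, where each node's value is a $b$-bit string computed from its children's values, can be done in space roughly

$$O\left(b + h\log b\right),$$

rather than the $O(h\cdot b)$ you would pay by storing a value at every level of the naive recursion. The trick is to reuse the same registers across levels, in the spirit of catalytic computing.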
Let me give a highly oversimplified view of the combined proof.
A time $t(n)$ Turing machine uses at most $t(n)$ cells on its tapes. Split the tapes into segments of $b$ cells each and the computation into time blocks of roughly $b$ steps. Using the fact that it takes time to cross an entire segment, Williams, with some clever tricks, models acceptance of the machine as a tree of bounded fan-in and height roughly $t(n)/b$, whose nodes take as values the contents of segments at various times in the computation.
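In symbols (the notation is mine, just for this sketch): write $c_{i,j}$ for the contents of segment $i$ at the end of time block $j$. Because the head moves only one cell per step, each $c_{i,j}$ is determined by a constant number of earlier segment contents, plus $O(\log t(n))$ bits of bookkeeping for the head position and machine state:

$$c_{i,j} \;=\; f\big(c_{i_1,j_1},\ldots,c_{i_k,j_k}\big), \qquad k = O(1),\quad j_1,\ldots,j_k < j.$$

Unrolling this recurrence over the $t(n)/b$ time blocks is what gives the tree-like structure of height roughly $t(n)/b$ that gets fed to the tree evaluation algorithm.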
Williams then applies the tree evaluation algorithm of Cook and Mertz. Cook and Mertz use finite fields to encode these segments as a combination of registers of size roughly $b$, and they show how to compute the value of each node using only $O(b)$ space for the local work, space that gets reused across levels, plus an additional $O(\log t(n))$ bits for each level of the tree.
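Putting the costs together (orders of growth only; this back-of-the-envelope accounting glosses over the details of the actual reduction):

$$S(b) \;=\; \underbrace{O(b)}_{\text{registers for one segment}} \;+\; \underbrace{O\!\left(\frac{t(n)}{b}\,\log t(n)\right)}_{\text{bookkeeping over } t(n)/b \text{ levels}},$$

which is balanced by taking

$$b = \sqrt{t(n)\log t(n)} \;\Longrightarrow\; S = O\left(\sqrt{t(n)\log t(n)}\right).$$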
It's worth going through the proof yourself. I recommend Section 3.1 and Footnote 6 of Williams' paper (a slightly weaker space bound but a much simpler argument) and Sections 2-4 of the Cook-Mertz paper. Oded Goldreich has an alternative exposition of the Cook-Mertz algorithm and proof.
Williams' theorem works for multitape Turing machines and oblivious random-access machines, where the queries to the memory are fixed in advance. He shows how to use this result to compute the output of a circuit of size $s$ using space close to $\sqrt{s}$. Whether the simulation extends to general random-access machines remains open.
In 1986 my advisor Mike Sipser gave the first hardness vs randomness result, showing roughly that if there were problems that took time $2^n$ but could not be solved in space $2^{.99n}$ then RP = P. Williams' theorem rules out such problems, since any time-$2^n$ computation can now be carried out in space roughly $2^{n/2}$, though hardness vs randomness has long since moved to weaker assumptions.
Moving forward, can we push Williams' result to get a simulation in space $O(\sqrt{t(n)})$ or even less, and can we extend it to more general models of computation?
Read Sections 4 and 5 of Williams' paper for some further consequences and the challenges to further improvements.
Sorry but is there a link to Williams' paper in this post? Am I missing something?
The links to the papers are at the top of the post but I'll put them here: Williams and Cook-Mertz.
Works now. Many thanks.
Non-expert here. Is "random access machines remains open" related to the Turing machine argument "using the fact that it takes time to cross an entire segment"? Could we say this result is as much about the time-inefficiency of Turing machines as it is about the existence of space-efficient algorithms?
I first read the post title as "You Need Much Less Money than Time", which would have been great advice.
I wonder what Mihai Patrascu would have said about this development ...
Plenty of space and not enough time.
The New Scientist covered this breakthrough in an article (only available to subscribers) that contains some quotes from Lance: https://www.newscientist.com/article/2469663-shock-discovery-tears-up-the-rules-of-time-and-space-inside-a-computer/
Minor breakthroughs, yes, and within a couple of years. Major breakthroughs: don't count on it.
Assume that you know that you will first know breakthrough x at time y, which is in the future. This means you also know x now, in the present. This contradicts the assumption. Therefore, you cannot predict what you will learn in the future.
David Deutsch puts it this way: "A simple example, if there were a method of predicting today some scientific truth that is only going to be discovered next year, then by using that method, we would have gained that knowledge today, and that's a contradiction." See https://medium.com/dorothyknows/david-deutsch-the-unknowable-how-to-prepare-for-it-e1b2c7d78744
Wouldn't P in Logspace be a surprise?