Tuesday, March 08, 2005

Favorite Theorems: Efficient Computation

February Edition

How do we formalize the notion of efficient computation? Two important papers from the 1960s suggest that polynomial-time algorithms are efficient, though both caution against equating the two notions.

Paths, Trees and Flowers, Jack Edmonds, 1965.

The Intrinsic Computational Difficulty of Functions, Alan Cobham, 1964.

Edmonds' paper will always be best known for giving the first efficient algorithm for matching on general graphs. But I list it here because of a section of his paper labeled "Digression". Edmonds discusses the difference between exponential and algebraic (polynomial) order, though he cautions against any rigid criterion for efficiency.

An explanation is due on the use of the words "efficient algorithm"…I am not prepared to set up the machinery necessary to give it formal meaning, nor is the present context appropriate for doing this…For practical purposes the difference between algebraic and exponential order is more crucial than the difference between [computable and not computable]…It would be unfortunate for any rigid criterion to inhibit the practical development of algorithms which are either not known or known not to conform nicely to the criterion…However, if only to motivate the search for good, practical algorithms, it is important to realize that it is mathematically sensible even to question their existence.
Cobham defined the class we now call P, arguing for its importance because of its machine independence.
For several reasons the class P seems a natural one to consider. For one thing, if we formalize the definition relative to various general classes of computing machines we seem always to end up with the same well-defined class of functions. Thus we can give a mathematical characterization of P having some confidence it characterizes correctly our informally defined class. This class then turns out to have several natural closure properties, being closed in particular under explicit transformation, composition and limited recursion on notation (digit-by-digit recursion).
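To make the last closure property concrete, here is a minimal Python sketch (my illustration, not anything from Cobham's paper) of a function defined by recursion on notation: each recursive call strips one binary digit, so the recursion depth is the length of the input's notation rather than its numerical value.

```python
def popcount(x: int) -> int:
    """Number of 1-bits in x, defined by recursion on (binary) notation.

    Each call removes the last binary digit of x, so the recursion depth
    equals the length of x's binary notation -- not the magnitude of x --
    which is what keeps this style of definition within polynomial time.
    """
    if x == 0:
        return 0                       # base case: the empty notation
    return popcount(x >> 1) + (x & 1)  # recurse with the last digit removed

print(popcount(0b101101))  # -> 4
```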
Cobham also offers a caution.
The problem is reminiscent of, and obviously closely related to, that of the formalization of the notion of effectiveness. But the emphasis is different in that the physical aspects of the computation process are here of predominant concern.
Perhaps Cobham realized that future models of computation might not correspond to his class P. Later developments in randomized and quantum computation suggest that perhaps we cannot have a fixed notion of efficient computation.

10 comments:

  1. Peter Bendix was a sophomore in Knuth's class. More information can be found on p. 320 of "All Questions Answered", Notices of the AMS, Volume 49, Number 3, pp. 318-324:

    fea-knuth.pdf

  2. Actually, there are much earlier published references than Cobham on polynomial efficiency. Jeff Shallit at Waterloo tracked down a published reference going back to the late 19th century, in the context of algorithms for factoring, I think.

    At any rate, a better question is whether this definition still makes sense. For one thing, most practical computational models are within a polylogarithmic factor of each other.

    As well, to argue that an n^80 algorithm is efficient is nonsense. Is it time for a new definition of efficiency?

  3. Is there a natural polynomial-time solvable problem for which the best known algorithm takes time more than n^10?

  4. Yes, Joe O'Rourke and G. Tewari have an O(n^42) algorithm (yes, 42) for partitioning orthogonal polygons into fat rectangles. This type of decomposition is often studied in the context of VLSI.

    The authors claimed to have tried to reduce the complexity to even n^41 and failed, meaning that the n^42 algorithm was their best effort and not just the result of sloppy analysis or design.

  5. The reason an n^80 algorithm should still count as efficient is that if we come up with an n^80 algorithm for something, experience tells us that, if we work a bit harder, we can always improve it to n^3 or n^4... or at least n^42.

  6. "we can bring it down to n^4"...

    I've heard that one before, and it still doesn't fly. Current input sizes in real-life applications where we care about performance are usually in the thousands to the millions. In that range an n^4 algorithm would take somewhere between a trillion (10^12) and a septillion (10^24) operations.

    Say, on input size 10,000 the algorithm would take about a year to run on a desktop PC; a rough back-of-the-envelope check appears below. This is hardly what one would call an efficient algorithm in any practical sense of the word.

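    As a rough sanity check on these numbers, here is a short back-of-the-envelope calculation in Python, assuming a desktop PC performs on the order of 10^9 simple operations per second (that rate is an assumption, not a measurement).

    ```python
    # Operation counts for an n^4 algorithm at realistic input sizes, with
    # wall-clock estimates at an ASSUMED rate of ~1e9 simple operations/second.
    OPS_PER_SEC = 1e9
    SECONDS_PER_YEAR = 3.15e7

    for n in (1_000, 10_000, 1_000_000):
        ops = float(n) ** 4          # operations used by an n^4 algorithm
        secs = ops / OPS_PER_SEC
        print(f"n = {n:>9,}: {ops:.0e} ops, "
              f"~{secs:.0e} s (~{secs / SECONDS_PER_YEAR:.1g} years)")
    ```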
  7. Some of this discussion completely misses the point. The value of any concept is its impact on how it changes your thinking. The notion of polynomial time fundamentally altered the way people thought about computational problems. Of course, once we have polynomial-time algorithms we can try to reduce the running time from n^2 to n log n or whatever, but the key point is that there is something fundamentally different about polynomial-time versus exponential-time approaches to problems.

    New ways of thinking about problems have much bigger impacts in the long run than precisely which polynomial time bound works. A major reason that the sizes of the problems we try to solve in practice have grown so much these days is the improvement in algorithmic approaches due to this polynomial-time thinking.

    (BTW: If one is really worried about the practicality of polynomial-time algorithms, Robertson and Seymour have a family of n^3 algorithms for minor-closed families where the enormous constant factors are the main impediments.)

  8. "The value of any concept is its impact on how it changes your thinking. The notion of polynomial time fundamentally altered the way people thought about computational problems."

    My point is that it is time for yet another such change in thinking. For a while the idea that polynomial = efficient worked well for us, but as our understanding has deepened it's now time for an update in the definition of what is and isn't efficient.

    Such adjustments happen every so often; it's not such a big deal. For example, we have now come to accept the word RAM as a perfectly valid and realistic model, along with its sharper bounds.

  9. Researchers are well aware of the target of achieving quasilinear-time algorithms, and there has been a small amount of interesting work in the area with that as the goal. However, in contrast to the value of the polynomial-time concept, fixing this as a target has not particularly led to breakthroughs.

    There are more restrictive criteria that have led to interesting breakthroughs; it's just that the quasilinear-time concept hasn't been one of them yet. (Linear-time targets have led people to such breakthroughs, e.g., linear-time coding.)

    An even more restrictive criterion is that of one-pass algorithms using small space, so-called streaming algorithms; these have also led to very interesting results (a small example is sketched below).

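    To make "one pass, small space" concrete, here is a minimal Python sketch of one classical example, the Boyer-Moore majority vote (chosen purely as an illustration, not one of the specific results alluded to above): a single pass over the stream, keeping only two variables of state, recovers the majority element whenever one exists.

    ```python
    def majority_candidate(stream):
        """One pass, O(1) extra space: the Boyer-Moore majority vote.

        If some item occurs in more than half of the stream, that item is
        returned; otherwise the returned candidate carries no guarantee.
        """
        candidate, count = None, 0
        for item in stream:
            if count == 0:
                candidate, count = item, 1   # adopt a new candidate
            elif item == candidate:
                count += 1                   # candidate gains a vote
            else:
                count -= 1                   # a different item cancels a vote
        return candidate

    # 'b' occurs 4 times out of 7, so the single pass recovers it.
    print(majority_candidate(['a', 'b', 'b', 'c', 'b', 'a', 'b']))  # -> b
    ```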
  10. There is some biographical information about Cobham here.
