Sunday, November 11, 2012

Fifty Years of Computational Complexity


From Juris Hartmanis’s Observations About the Development of Theoretical Computer Science, on the research leading to his seminal 1965 paper with Richard Stearns, On the Computational Complexity of Algorithms:
Only in early November 1962 did we start an intensive investigation of time-bounded computations and realize that a rich theory about computational complexity could be developed. The first explicit mention of classifying functions by their computation time occurs in my logbook on November 11.
And so today we celebrate the fiftieth anniversary of the conception of computational complexity. We have learned much since then and yet we still know so little. Here’s to another fifty years of trying to figure it out.

4 comments:

  1. A big big cheers from me to computational complexity and the theoretical computer science community in the world.
    Long live mankind, long live complexity theory.

    ReplyDelete
  2. Didn't Lance declare the demise of complexity classes almost a year ago today?

    Unless I missed something no revival happened since then. In fact, looking at the recent abstracts in ECCC, we may as well rename the field to "informatics" or something like that.

    ReplyDelete
  3. A BibTeX entry for Hartmanis's 1978 monograph Feasible Computations and Provable Complexity Properties might include these verbatim quotes:

    -------------
    "It may be only a slight exaggeration to claim that in the 1930s we started to understand what is and is not effectively computable and that in the 1970s we started to understand what is and is not practically or feasibly computable."

    "Results about the complexity of algorithms change quite radically if we consider only properties of computations which can be proven formally."

    "In the traditional view of complexity theory we consider all programs computing the same function as equivalent, though we have no way of verifying this; and we know that in any fixed formal system F many of these programs cannot be proven equivalent. Thus we should expect that the results about optimality of all programs computing the same function as a given program will differ from the optimality results about all programs which can be formally proven to be equivalent to the given program. The surprising thing is how much the classical optimization results differ from the optimization results for provably equivalent programs."

    "These results suggest very strongly that we need to explore further how our ``world view'' of the complexity of algorithms has to be changed if we consider only provable properties of algorithms."
    -------------

    What new insights has complexity theory developed, in the decades since, to match Hartmanis's still-fresh insights from the 1970s (along with Baker-Gill-Solovay, also from the 1970s)?

    ReplyDelete
  4. 50 shades of complexity theory. ;)

    ReplyDelete