Tuesday, November 19, 2002

Foundations of Complexity
Lesson 8: Efficient Computation

Previous Lesson | Next Lesson

In past lessons, we have studied the computable functions. Computable functions can take an arbitrary amount of time: What good is a program that will eventually give the correct answer but might not finish before the universe collapses?

Somehow we want to limit the amount of time or memory that a computer can use. Just giving a fixed bound does not work well. As technology improves and computers get faster and better, we expect to solve larger and larger problems in a reasonable amount of time. Hartmanis and Stearns, in their seminal paper on computational complexity, turn this around to come up with the right idea: Consider time and memory as functions of the size of the input.

The time a machine M takes on input x is just the number of computation steps M makes before halting when started on input x. I am being very informal about what a "step" is. In a later lesson we will get into formal definitions of computers and steps, but for now just think of a step as executing one instruction.

The memory, or space as we theorists call it, is just the number of bits of storage used by M on a given input.
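To make the two measures concrete, here is a minimal sketch in Python of charging one step per executed instruction and tracking peak storage. The names run_and_measure and count_ones are my own illustrations, not anything formal, and for simplicity space is counted in storage cells rather than bits.

    # A minimal sketch of both measures on a toy "machine".
    # One yield = one step; space is tracked as the peak number of
    # storage cells in use (the formal definition would count bits).

    def run_and_measure(program, x):
        """Run a toy program on input x; return (steps, peak cells used)."""
        steps = 0
        storage = {}                   # cell name -> value currently stored
        peak = 0
        for _ in program(x, storage):  # each yield counts as one step
            steps += 1
            peak = max(peak, len(storage))
        return steps, peak

    def count_ones(x, storage):
        """Toy program: count the 1s in a binary string, one step per symbol."""
        storage["count"] = 0
        yield                          # one step for the initialization
        for bit in x:
            if bit == "1":
                storage["count"] += 1
            yield                      # one step per input symbol scanned

    print(run_and_measure(count_ones, "10110"))   # (6, 1): 6 steps, 1 cell

Notice that the step count grows with the length of the input, which is exactly why Hartmanis and Stearns measure time as a function of input size.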

Edmonds gave an algorithm for the matching problem that runs in polynomial time: the number of steps used by the algorithm on a graph of size n is at most n^k for some constant k. He suggested that polynomial time captures efficient computation.
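To see what a polynomial-time bound looks like in practice, here is a sketch. To be clear, this is not Edmonds' blossom algorithm, which finds a maximum matching; it is just a greedy pass that finds a maximal matching, included only to make "polynomial in the size of the graph" concrete.

    # Not Edmonds' algorithm -- a single greedy pass over the edge list.
    # A graph on n vertices has at most n(n-1)/2 edges, so this runs in
    # O(n^2) steps: polynomial time.

    def greedy_matching(edges):
        """Return a maximal matching as a list of vertex-disjoint edges."""
        used = set()                   # vertices already matched
        matching = []
        for u, v in edges:
            if u not in used and v not in used:
                matching.append((u, v))
                used.update((u, v))
        return matching

    # On a 4-cycle the single pass picks two disjoint edges.
    print(greedy_matching([(1, 2), (2, 3), (3, 4), (4, 1)]))  # [(1, 2), (3, 4)]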

We now define our first complexity class, P, as the set of all languages L for which there is a machine that, given x, determines whether x is in L and halts in time polynomial in the length of x. The class P has many nice properties: a polynomial-time algorithm that uses a polynomial-time subroutine still runs in polynomial time. Also, P is robust, that is, the class is the same no matter how you formally define your machines.
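As one concrete illustration of a language in P (my own example, not one from the lesson), take L to be the set of graphs containing a triangle. The brute-force decider below checks all vertex triples and so halts within O(n^3) steps on an n-vertex graph, polynomial in the length of the input.

    from itertools import combinations

    # A sketch of a language in P: L = { G : G contains a triangle }.
    # Checking every vertex triple takes O(n^3) steps on an n-vertex graph.

    def has_triangle(n, edges):
        """Decide whether the graph on vertices 0..n-1 contains a triangle."""
        adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
        for a, b, c in combinations(range(n), 3):
            if (a, b) in adj and (b, c) in adj and (a, c) in adj:
                return True
        return False

    print(has_triangle(4, [(0, 1), (1, 2), (2, 0)]))  # True: 0-1-2 is a triangle
    print(has_triangle(4, [(0, 1), (1, 2), (2, 3)]))  # False: just a path

The closure property is visible here too: an algorithm that calls has_triangle as a subroutine polynomially many times, on graphs of polynomial size, still runs in polynomial time overall.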

In these lessons, we will treat P as the class consisting of efficiently computable problems. More classes will come.
