Let Z_d[x] be the set of polynomials of degree d over the integers.
Let ALG_d be the set of roots of polynomials in Z_d[x].
One can easily show that
ALG_1 is a proper subset of ALG_2, which is a proper subset of ALG_3, and so on,
and that there are numbers not in any of the ALG_i (by a countability argument).
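For completeness, here is the argument in brief (a sketch; I take ALG_d to be the real roots):

```latex
% Properness: x^{d+1} - 2 is irreducible over Q (Eisenstein at 2), so
% 2^{1/(d+1)} lies in ALG_{d+1} but not in ALG_d; and if p(a) = 0 with
% deg p = d, then a is also a root of x*p(x), of degree d+1. Hence
\[
\mathrm{ALG}_1 \subsetneq \mathrm{ALG}_2 \subsetneq \mathrm{ALG}_3 \subsetneq \cdots
\]
% Countability: a polynomial in Z_d[x] is a (d+1)-tuple of integers, so
% Z_d[x] is countable; each nonzero polynomial has at most d roots, so
% ALG_d is countable, and so is the union over all d. Since R is
% uncountable, some (in fact most) reals are in no ALG_d:
\[
\Bigl|\,\bigcup_{d \ge 1} \mathrm{ALG}_d\,\Bigr| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|.
\]
```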
I began my ugrad automata theory course with this theorem (it is also a good review of countability; I found out, NOT to my surprise, that they never really understood it as freshmen taking Discrete Math, even the good students).
I presented this as the first theorem in complexity.
But is it? I suspect that the question "What was the first result in complexity?"
has no real answer, but here are some thoughts:
Complexity theory is about proving that you can't do BLAH with resource bound BLAHBLAH. We will distinguish it from computability theory by insisting that the things we want to compute are computable; however, if someone wants to argue that (say) "HALT is undecidable" is the first result in complexity, I would not agree, but I would not argue against it either.
Here are other candidates:
1) sqrt(2) is irrational. This could be considered a result in descriptive complexity theory.
2) The number of primes is infinite. If you view "finite" as a complexity class then this takes PRIMES out of that class.
3) You cannot, with ruler and compass, trisect an angle, square a circle, or double a cube. This seems very close to the mark: one can view ruler-and-compass as a well defined model of computation, and these are lower bounds in that model (see the note after this list).
4) The general quintic equation has no solution by radicals (Abel-Ruffini). Also close to the mark, as this is a well defined lower bound.
5) In the early 60s, the definition of P (Cobham) and of DTIME, etc. (Hartmanis-Stearns). The result I would point to is the time hierarchy theorem (stated after this list). While these results are much later than those above, they are also far closer to our current notion of complexity.
6) I'm not sure which paper to point to, but Knuth's observation that one can analyze algorithms without running them (a toy illustration appears after this list). This is more algorithms than complexity, but at this level the distinction may be minor.
7) Cook-Levin Theorem. Probably not the first theorem in Complexity, but certainly a big turning point.
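A note on candidate 3: here is a sketch of why trisection is impossible, assuming the standard fact that ruler-and-compass constructible numbers have degree a power of 2 over Q.

```latex
% Trisecting 60 degrees means constructing cos(20). From the identity
% cos(3t) = 4cos^3(t) - 3cos(t) with cos(60) = 1/2, x = cos(20) satisfies
\[
8x^3 - 6x - 1 = 0 .
\]
% This cubic has no rational roots (check the candidates +/-1, +/-1/2,
% +/-1/4, +/-1/8), so it is irreducible over Q and cos(20) has degree 3
% over Q. Since 3 is not a power of 2, cos(20) is not constructible, and
% hence 60 degrees cannot be trisected in this model.
```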
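A note on candidate 5: one common statement of the time hierarchy theorem (Hartmanis-Stearns, with the log factor as sharpened by Hennie-Stearns):

```latex
% For f time-constructible,
\[
g(n)\,\log g(n) = o(f(n)) \;\Longrightarrow\;
\mathrm{DTIME}(g(n)) \subsetneq \mathrm{DTIME}(f(n)).
\]
% So, for example, DTIME(n) is properly contained in DTIME(n^2):
% more time provably buys more computational power.
```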
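A note on candidate 6: a toy illustration of analysis-without-running (my example, not Knuth's). Predict insertion sort's worst-case comparison count on paper as n(n-1)/2, then verify the prediction with an instrumented run:

```python
def insertion_sort_comparisons(a):
    """Sort a copy of a; return how many element comparisons were made."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1            # one comparison of key with a[j]
            if a[j] <= key:
                break
            a[j + 1] = a[j]             # shift the larger element right
            j -= 1
        a[j + 1] = key
    return comparisons

for n in [5, 10, 100]:
    predicted = n * (n - 1) // 2        # pencil-and-paper worst case
    measured = insertion_sort_comparisons(range(n, 0, -1))  # reversed input
    print(n, predicted, measured)       # the two counts agree
```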
I urge you to comment with other candidates!
Comments:

Without the Cook-Levin theorem, there is no complexity theory.
Rafee Kamouna.

Kleene's Theorem characterizes constant space.

For me the main difference between computability and complexity is that the former is qualitative (one cannot compute something at all in the considered model) whereas the latter is quantitative (you can compute something and it will take you a certain amount of resources, e.g. time, size, memory, ...).

With this working definition, your candidates 1) to 4) are not complexity questions whereas the others are.
Candidate: the Ackermann function is not primitive recursive.
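To illustrate this candidate, here is a minimal sketch of the usual two-argument Ackermann-Peter function and its explosive growth:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(200000)           # the recursion gets deep quickly

@lru_cache(maxsize=None)
def A(m, n):
    """Ackermann-Peter function: its diagonal A(n, n) eventually
    dominates every primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

for m in range(4):
    print(m, [A(m, n) for n in range(6)])
# m = 0: n+1;  m = 1: n+2;  m = 2: 2n+3;  m = 3: 2^(n+3) - 3.
# Already A(4, 2) = 2^65536 - 3 has 19729 decimal digits.
```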

Are you sure they show that to freshmen in the discrete math class there?

Yes. I sometimes teach the Discrete Math class and I always keep track of what's going on in it.
And yes, Countability and Uncountability seem to be the hardest topic in it by half an order of magnitude. That might be a reason to take it out of the course, or not. (A later post might be on what should be in a DM course, or might not.)

Gabriel Lamé's bounds on the Euclidean algorithm.
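For concreteness, a small demonstration of Lamé's 1844 bound (a sketch): the number of division steps on (a, b), with b <= a, is at most about five times the number of decimal digits of b, and consecutive Fibonacci numbers realize the worst case.

```python
def euclid_steps(a, b):
    """Count the division (mod) steps the Euclidean algorithm performs."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers are the worst-case inputs.
fib = [1, 1]
while len(fib) < 26:
    fib.append(fib[-1] + fib[-2])

for k in (10, 15, 20, 25):
    a, b = fib[k], fib[k - 1]
    print(a, b, euclid_steps(a, b), "<=", 5 * len(str(b)))
```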

This may not fit your definition, but there's Shannon's 1949 proof that most Boolean functions need exponentially large circuits to compute. (Scott Aaronson's recent P?=NP survey describes it nicely.) It does have the large weakness that it doesn't exhibit any particular function which requires large circuits... On the other hand, it is from the vacuum-tube era, which arguably makes it more noteworthy.
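A back-of-the-envelope version of that counting argument (a sketch, with constants suppressed):

```latex
% There are 2^{2^n} Boolean functions of n variables, but a circuit with
% s gates can be written down in O(s log s) bits (each gate: an operation
% plus the names of its two inputs), so at most 2^{c s log s} functions
% have circuits of size s. Covering every function forces
\[
2^{c\,s\log s} \;\ge\; 2^{2^n}
\quad\Longrightarrow\quad
s \;=\; \Omega\!\left(\frac{2^n}{n}\right),
\]
% so all but a vanishing fraction of functions need circuits of size
% about 2^n / n.
```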

You can find older examples in analysis.