Monday, September 17, 2007

Is Immunity Interesting?

In my graduate complexity class I proved
For all computable T there exists a decidable set A such that A ∉ DTIME(T(n)).
I then had on the homework
For all computable T there exists a decidable set A such that, for all Turing Machines M that run in T(n) time, there exist infinitely many strings x such that A(x) ≠ M(x).
I then had extra credit
For all computable T there exists a decidable set A such that, for all Turing Machines M that decide A, for almost all inputs x, M(x) takes more than T(|x|) steps.
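For readers who like things compact, the three statements form a ladder of strengthenings; restated in my own notation (not from the lecture notes):

```latex
% Theorem proved in class: a set outside DTIME(T(n)).
\forall \text{ computable } T \ \exists \text{ decidable } A:\quad
  A \notin \mathrm{DTIME}(T(n)).

% HW: infinitely-often (i.o.) hardness --
% every T(n)-time M errs on infinitely many inputs.
\forall \text{ computable } T \ \exists \text{ decidable } A \
  \forall M \in \mathrm{DTIME}(T(n)):\quad
  |\{x : A(x) \ne M(x)\}| = \infty.

% Extra credit: almost-everywhere (a.e.) hardness --
% every M deciding A is slow on all but finitely many inputs.
\forall \text{ computable } T \ \exists \text{ decidable } A \
  \forall M \text{ deciding } A:\quad
  M(x) \text{ takes more than } T(|x|) \text{ steps for almost all } x.
```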
Katrina LaCurts, one of my students (who is thrilled to be mentioned by name in this blog!) asked the following: Are the HW and Extra Credit just meant to reinforce knowledge, OR is the notion of differing infinitely often an important topic within complexity theory?

How important is the notion of sets differing infinitely often? A quick-and-dirty measure is to count the number of papers on this topic. A quick-and-dirty way to do that is to grep `immunity' in Joel Seiferas's Theory Database and then edit. Once done I get the following list

So I can answer Katrina: there are at least 21 papers on this topic. That shows people have worked on it (and some quite recently) but does not quite answer the question. Hence I ask my readers:
Once we have that, for all time bounds T, there is a computable set A ∉ DTIME(T(n)), why do we care about getting an A that differs infinitely often, or other variants? More generally, why is the notion of having two sets differ infinitely often important? I ask non-rhetorically and politely. Note that, no matter how you answer, Katrina still has to do the HW.

6 comments:

  1. I don't know how "important" the idea of infinitely-often is, but it does sometimes crop up in proofs when we can't prove what we'd "like" to. E.g., the result that non-trivial zero knowledge implies infinitely-often one-way functions.

    ReplyDelete
  2. I'm sure that the idea is used implicitly in all kinds of settings. For example, any lower bound result, just because of the way lower bounds are defined, involves showing that a computation exceeds the resource bound infinitely often.

    ReplyDelete
  3. Unless I misunderstand the question (which is, of course, possible), I'd say it's "trivially interesting": if the two sets don't differ infinitely often, a lookup table for inputs of size smaller than some N (the size of the largest differing instance) will make the running times very similar.

    ReplyDelete
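The lookup-table argument in comment 3 can be sketched concretely. A minimal sketch (the helper names are mine, not from the post): if a fast machine agrees with A everywhere except on finitely many inputs, hard-coding those exceptions yields a decider for A with essentially the same running time.

```python
def patch_with_table(fast_M, exceptions):
    """Return a decider for A, given a decider fast_M that is wrong
    only on the finitely many inputs in `exceptions`.
    `exceptions` maps each bad input x to the correct value A(x)."""
    def A(x):
        if x in exceptions:   # O(1) dictionary lookup
            return exceptions[x]
        return fast_M(x)      # otherwise, same running time as fast_M
    return A

# Toy example: A = "strings of even length"; fast_M errs only on "" and "ab".
fast_M = lambda x: (len(x) % 2 == 0) if x not in ("", "ab") else (len(x) % 2 == 1)
A = patch_with_table(fast_M, {"": True, "ab": True})
```

So if A were only i.o.-hard in a trivial way (all differences below some length N), this patching would collapse the separation; that is why the infinitely-often requirement has content.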
  4. I'm still waiting for you to show us how to do the bonus problem...

    ReplyDelete
  5. It is a good idea, in general, to see how strong a result one can obtain in a given line of thought. For example, the idea of being hard-on-average (e.g., no algorithm from a class of algorithms has a significantly better chance than randomly guessing membership in a language) is one such idea, and is at the heart of pseudorandom generators via the hardness vs. randomness connection.

    Another example, closer in spirit to what you mentioned, is almost-everywhere hardness (every algorithm for L takes more than T(.) time on all but finitely many inputs) -- this has been shown to be related to other versions of average-case complexity theory, which have connections to one-way functions, etc.

    So, in summary: all of these ideas have value beyond being interesting intellectual exercises.

    ReplyDelete
  6. Is immunity interesting?

    Yes, for reality show contestants.

    ReplyDelete