Friday, August 08, 2008

Discounted Time

A write-up of some ideas I presented at the Complexity Conference Rump Session.


In computational complexity, when we talk about time we usually mean a hard limit on the running time: solve the problem in time t(n). So we are happy, say, if we can solve the problem in one hour and miserable if it takes 61 minutes. But our real gradation of happiness over running time is not so discontinuous.

Let's take an idea from how economists deal with time. They discount the utility by a factor of δ in each time step for some δ<1. What if we did the same for complexity?

Let δ = 1-ε for ε>0 and very small. Think of ε as about 10^-12. We then discount the value of the solution by a factor of δ^t after t steps of computation.

Discounted time gives us a continuous loss due to time. It has the nice property that the future looks like the past: the discount for the next t steps is the same as the discount for the t steps already taken.

When t is small, δ^t is about 1-εt, a linear decrease. When t is large, δ^t is about e^(-εt), an exponential decrease.
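
A quick numerical sanity check of both regimes (a minimal Python sketch, using ε = 10^-12 as suggested above):

```python
import math

eps = 1e-12          # the "very small" epsilon from the post
delta = 1 - eps

# Small t (eps*t << 1): delta^t is essentially the linear approximation 1 - eps*t.
for t in (10**3, 10**6):
    print(t, delta**t, 1 - eps*t)

# Large t (eps*t of order 1 or more): delta^t tracks the exponential e^(-eps*t).
for t in (10**12, 4 * 10**12):
    print(t, delta**t, math.exp(-eps*t))
```

The two regimes are consistent with each other, since 1-εt is just the first-order expansion of e^(-εt).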

We can also recover traditional complexity classes: DTIME(O(m(n))) is the set of languages decidable by an algorithm whose running time t satisfies δ^t > c for some constant c>0, where δ = 1-1/m(n).
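
To see why (a quick derivation, reading t as the running time of the deciding algorithm and using the standard estimate -ln(1-x) ≈ x for small x):

$$
\Bigl(1-\tfrac{1}{m(n)}\Bigr)^{t} \;\ge\; c
\iff
t \;\le\; \frac{\ln(1/c)}{-\ln\bigl(1-\tfrac{1}{m(n)}\bigr)}
\;\approx\; m(n)\,\ln\frac{1}{c},
$$

so the discount stays above a fixed constant exactly when t = O(m(n)), with the hidden constant governed by ln(1/c).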

I'm not sure what to do with discounted time, which is why this is a blog post instead of a FOCS paper.
Some ideas:
  • What do average case and expected time mean in the discounted time model?
  • What if you take the value of the solution of some approximation problem and discount it with the time taken? Can you determine the optimal point to stop?
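
A toy illustration of the last question (everything here is hypothetical: the quality curve, the discount rate, and the names are chosen only to make the trade-off concrete): take an anytime approximation whose solution quality improves with diminishing returns, discount it by δ^t, and stop where the discounted value peaks.

```python
eps = 1e-6        # illustrative discount rate, much larger than 10^-12 so the effect shows up quickly
delta = 1 - eps

def approx_value(t):
    """Hypothetical quality curve: value of the best solution found after t steps."""
    return t / (t + 1)    # rises toward 1 with diminishing returns

def discounted_value(t):
    return approx_value(t) * delta**t

# The discounted value rises while quality improves faster than the discount
# erodes it, then falls.  The peak (about t = 1000 for this curve) is where the
# relative improvement per step, 1/(t(t+1)), drops to roughly eps.
best_t = max(range(1, 10**5), key=discounted_value)
print(best_t, discounted_value(best_t))
```

The stopping rule falls out of the calculus: keep running exactly as long as the relative improvement per step exceeds the discount rate ε.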

6 comments:

  1. Instead of talking about the expected time to solve a problem, this lets you talk about the expected value of the solution. For approximation problems, you can combine that with a function expressing the value of an approximation relative to the value of the optimal solution.

    Combining them you get the expected value of approximately solving the problem in time t, given the accuracy you expect to develop in time t, relative to having the optimal solution immediately.

    For time-constrained problems, this favors algorithms that can produce more than a 1/δ improvement in the value of the approximate result in a given unit of time, over a period within the time constraint.

    This also suggests an optimal length of time to run such algorithms: until the expected improvement per unit time drops below 1/δ.

  2. This reminds me of a poly-time algorithm for factoring integers that I heard from Ed Fredkin. You don't even need quantum computers for it; you only need Moore's law. If processing speed continues increasing exponentially, then you need only wait a number of years that is linear in the number of bits of your input.

    If you don't believe Moore's law will last for much longer, then you could instead rely on economic growth. If the economy grows in real terms by at least a constant rate, then you can invest money in a foundation where it will grow exponentially until there is enough to buy enough computers to solve the problem.

    (Hopefully it's clear that he was joking.)

  3. Actually, the factoring algorithm is sub-linear. If it takes time exp(O(n^{1/3})) to factor an n-bit number, then the same can also be achieved with O(n^{1/3}) doublings of processor power.

  4. One of the first papers I read in complexity was a paper by Levin (about complexity-theoretic cryptography, I think), where the first sentence of a proof was, paraphrasing: "without loss of generality, we may assume that all algorithms run in O(1) time."

    Needless to say, I was stunned, until I realized that the quantity he was analyzing was the product of the running time and the success probability of the algorithm, and that he was alluding to "normalizing" algorithms in the following way: if it was a worst-case t(n)-time computation, toss a coin with Pr[heads] = 1/t(n), and perform the computation iff you see heads. This changes the (expected) running time to O(1) while leaving the product of the two quantities still analyzable. (A short sketch of this normalization appears after the comments.)

    I suspect that one can similarly hack around with your notion of utility (which is certainly an interesting way to think about computations).

  5. This concept has been explored in databases, real-time systems, and AI, where it is known as a "soft deadline". Typically there is a desired time by which the result should be computed, but thereafter the utility drops according to some function, rather than a sharp threshold.

    They were introduced, as far as I know, by Garcia-Molina and Abbott in a SIGMOD Record paper in 1988.

  6. Just a note to add that the fact that this idea has been proposed before should not stop anyone from doing research on it. In fact, if other people have proposed it, that is an indication that there is interest in the study of such a measure.

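
A minimal sketch of the normalization trick described in the fourth comment (the function names are illustrative, not Levin's notation):

```python
import random

def normalize(algorithm, t_n):
    """Run `algorithm` only with probability 1/t(n); otherwise give up immediately.

    The expected running time is t(n) * (1/t(n)) + O(1) = O(1), while the product
    (running time) x (success probability) stays the same up to constants.
    """
    def normalized(x):
        if random.random() < 1.0 / t_n:   # coin with Pr[heads] = 1/t(n)
            return algorithm(x)           # pay the worst-case t(n) steps, but rarely
        return None                       # otherwise fail without doing the work
    return normalized
```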