In computational complexity, when we talk about time we usually mean a hard limit on the running time: solve the problem in time t(n). So we are happy, say, if we can solve the problem in one hour and miserable if it takes 61 minutes. But our real gradation of happiness over the running time is not so discontinuous.
Let's take an idea from how economists deal with time. They discount the utility by a factor of δ in each time step for some δ<1. What if we did the same for complexity?
Let δ = 1-ε for some small ε > 0, say ε about 10^{-12}. We then discount the value of the solution by a factor of δ^t after t steps of computation.
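A quick back-of-the-envelope sketch of this discount in Python (the nanosecond-per-step rate is my own illustrative assumption, not from the post):

```python
def discounted_value(t, eps=1e-12):
    """Value of a solution obtained after t steps, with delta = 1 - eps."""
    delta = 1 - eps
    return delta ** t

# At, say, a nanosecond per step, one hour is about 3.6e12 steps,
# so the solution's value has decayed to roughly e**-3.6, about 0.027.
print(discounted_value(3.6e12))
```

Note that a 61st minute costs only a little more value than the 60th did, which is exactly the continuity the hard-limit model lacks.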
Discounted time gives us a continuous loss due to time. It has the nice property that the future looks like the past: the discount for t more steps now is the same as the discount for the t steps already taken.
When t is small, δ^t is about 1-εt, a linear decrease. For t large, δ^t is about e^{-εt}, an exponential decrease.
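The two regimes are easy to check numerically (the cutoffs 10^6 and 5·10^12 below are just illustrative choices):

```python
import math

eps = 1e-12
delta = 1 - eps

# Small t: delta**t is very close to 1 - eps*t (linear decrease).
t_small = 1e6
print(delta ** t_small, 1 - eps * t_small)

# Large t: delta**t is very close to exp(-eps * t) (exponential decrease).
t_large = 5e12
print(delta ** t_large, math.exp(-eps * t_large))
```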
We can also recover traditional complexity classes. DTIME(O(m(n))) is the set of languages we can solve with discounted value δ^t > c for some constant c > 0, where δ = 1-1/m(n).
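To see why this works, note that with δ = 1-1/m, running for t = c·m steps keeps δ^t near e^{-c}, a constant independent of m, while running for, say, m^2 steps drives the discounted value to 0. A sketch (the constants 3 and m^2 are my examples):

```python
def discount(t, m):
    """Discounted value after t steps with delta = 1 - 1/m."""
    return (1 - 1 / m) ** t

for m in [10**3, 10**6]:
    print(discount(3 * m, m))   # stays near e**-3, about 0.05, for every m
    print(discount(m * m, m))   # vanishes as m grows
```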
I'm not sure what to do with discounted time which is why this is a blog post instead of a FOCS paper.
- What do average case and expected time mean in the discounted time model?
- What if you take the value of the solution to some approximation problem and discount it by the time taken? Can you determine the optimal point to stop?
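For the last question, here is a toy model of my own (the quality function q(t) = 1 - 1/t and the discount rate ε = 10^{-6} are invented for illustration): an anytime algorithm whose answer improves with time, with discounted payoff q(t)·δ^t. A simple scan finds the stopping time maximizing it, which calculus puts near t = 1/√ε.

```python
eps = 1e-6          # illustrative discount rate (larger than the 1e-12 above)
delta = 1 - eps

def payoff(t):
    """Discounted value of stopping an anytime algorithm after t steps."""
    return (1 - 1 / t) * delta ** t

best_t = max(range(1, 10**5), key=payoff)
print(best_t, payoff(best_t))   # optimum near t = 1/sqrt(eps) = 1000
```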