Comments on Computational Complexity: Discounted Time (Lance Fortnow)

Anonymous (2008-08-09 10:01):
Just a note to add that the fact that this idea has been proposed before should not stop anyone from doing research on it. In fact, if other people have proposed it, that is an indication that there is interest in the study of such a measure.

Anonymous (2008-08-08 16:00):
This concept has been explored in databases, real-time systems, and AI, where it is known as a "soft deadline". Typically there is a desired time by which the result should be computed, but thereafter the utility drops according to some function, rather than a sharp threshold.
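A soft deadline of this sort can be sketched as a utility function that is flat up to the deadline and then decays smoothly. This is only a minimal illustration; the exponential shape and the `deadline` and `decay` parameters are assumptions for the sketch, not taken from the literature the comment cites.

```python
import math

def soft_deadline_utility(t: float, deadline: float = 10.0, decay: float = 0.5) -> float:
    """Utility of a result delivered at time t under a soft deadline.

    Full utility (1.0) if the result arrives by the deadline; afterwards
    the utility decays exponentially at rate `decay` instead of dropping
    straight to zero (a sharp threshold). Parameters are illustrative.
    """
    if t <= deadline:
        return 1.0
    return math.exp(-decay * (t - deadline))
```

With these illustrative parameters, a result two time units past the deadline is still worth about 37% of full utility, rather than nothing.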
They were introduced, as far as I know, by Garcia-Molina and Abbott in a SIGMOD Record paper (http://portal.acm.org/citation.cfm?id=44203.44209&coll=GUIDE&dl=GUIDE&CFID=15151515&CFTOKEN=6184618) in 1988.

Anonymous (2008-08-08 13:03):
One of the first papers I read in complexity was a paper by Levin (about complexity-theoretic cryptography, I think), where the first sentence of a proof was (paraphrasing): "without loss of generality, we may assume that all algorithms run in O(1) time."

Needless to say, I was stunned, until I realized that the quantity he was analyzing was the product of the running time and the success probability of the algorithm, and that he was alluding to "normalizing" algorithms in the following way: if it was a worst-case t(n)-time computation, toss a coin with Pr[heads] = 1/t(n), and perform the computation iff you see heads. This changes the (expected) running time to O(1) while leaving the product of the two quantities still analyzable.

I suspect that one can similarly hack around with your notion of utility (which is certainly an interesting way to think about computations).

aram harrow (2008-08-08 11:05):
Actually, the factoring algorithm is sub-linear.
If it takes time exp(O(n^{1/3})) to factor an n-bit number, then the same can also be achieved with O(n^{1/3}) doublings of processor power.

aram harrow (2008-08-08 11:03):
This reminds me of a poly-time algorithm for factoring integers that I heard from Ed Fredkin. You don't even need quantum computers for it; you only need Moore's law. If processing speed continues increasing exponentially, then you need only wait a number of years that is linear in the number of bits of your input.

If you don't believe Moore's law will last for much longer, then you could instead rely on economic growth. If the economy grows in real terms by at least a constant rate, then you can invest money in a foundation where it will grow exponentially until there is enough to buy enough computers to solve the problem.

(Hopefully it's clear that he was joking.)

Anonymous (2008-08-08 10:26):
Instead of talking about the expected time to solve a problem, this transitions nicely to the expected value of that solution.
For approximation problems, you can combine that with a function expressing the value of an approximation relative to the value of the optimal solution.

Combining them, you get the expected value of approximately solving the problem in time t, given the accuracy you expect to develop in time t, relative to having the optimal solution immediately.

For time-constrained problems, this favors algorithms that can produce more than a 1/δ improvement in the value of the approximate result in a given unit of time, over a period of time within the time constraint.

This also suggests an optimal length of time to run such algorithms: until the expected improvement per unit time drops below 1/δ.
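The stopping rule in the last comment can be sketched as an anytime loop: with discount factor δ in (0, 1), stopping at step t yields discounted utility value(t)·δ^t, so one more step pays off exactly when it multiplies the value by more than 1/δ. The value sequence and δ below are hypothetical, chosen only to illustrate the rule.

```python
def run_anytime(values, delta):
    """Stop an anytime algorithm when the multiplicative improvement
    per step drops to 1/delta or below.

    values[t] is the (undiscounted) value of the best solution after t
    steps; the discounted utility of stopping at step t is
    values[t] * delta**t. Both inputs are illustrative assumptions.
    """
    t = 0
    while t + 1 < len(values) and values[t + 1] / values[t] > 1.0 / delta:
        t += 1
    return t, values[t] * delta ** t

# With delta = 0.5 (so 1/delta = 2), keep stepping while the value
# more than doubles per step; here that means stopping after step 2.
steps, utility = run_anytime([1.0, 4.0, 9.0, 10.0], delta=0.5)
```

In this toy run the discounted utilities at steps 0 through 3 are 1.0, 2.0, 2.25, and 1.25, so the rule stops at the step that maximizes discounted utility.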