You could get around the limited time with a large investment in additional computer hardware, but that rarely made sense for running a single algorithm. In several (non-theory) talks I've seen recently, though, speakers described solving their large problems by renting time on Amazon's servers. You can now cheaply buy computing time on the cloud via Amazon or Microsoft, and soon via Google.
Cloud computing doesn't make all computing cheap. One still needs good parallelization, since the problem gets spread across many servers with limited communication between them. And all the servers in the world won't beat out exponential growth in running time. The P versus NP problem remains a problem. Nevertheless, we need a good formal framework to study these problems.
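As a rough illustration of why exponential growth swamps any realistic amount of rented hardware, here is a hypothetical back-of-the-envelope calculation; the specific figures are my own assumptions for the sake of the example, not numbers from any actual deployment.

```python
# Hypothetical numbers: a brute-force search taking 2^n steps on a 100-bit
# instance, run on a million servers each doing a billion steps per second.
steps_needed = 2 ** 100
servers = 10 ** 6
steps_per_second = 10 ** 9

seconds = steps_needed / (servers * steps_per_second)
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")   # about 4e7 years, no matter how cheap the servers are
```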
Computational complexity has in the past adapted well to new computation models, from the PRAM to biological and quantum computers. But we are now seeing new computing paradigms in multicore and cloud computing, and theory seems late to the party. There was a nice SODA paper on MapReduce, the basic cloud computing operation, but for the most part theorists haven't tackled the cloud computing model, and only a few have looked at multicore. Theory can say much about new computational methods, both in how we can take advantage of them and what they can't do, but only if we make the effort to develop the proper models to capture these new approaches.
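For readers who haven't seen MapReduce, here is a minimal single-machine sketch of the pattern: a map step that runs independently on each input chunk, a shuffle that groups intermediate values by key, and a reduce step that combines each group. The word-count task and the function names are illustrative assumptions, not taken from the SODA paper.

```python
from collections import defaultdict

# Minimal sketch of the MapReduce pattern (word count).
# In a real cloud deployment the map and reduce calls run on separate servers,
# and the shuffle is essentially the only communication between them.

def map_phase(document):
    # Emit (key, value) pairs independently for each input chunk.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Group intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Combine all values for a single key.
    return key, sum(values)

documents = ["the cloud is big", "the cloud is cheap"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(counts)   # {'the': 2, 'cloud': 2, 'is': 2, 'big': 1, 'cheap': 1}
```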