Comments on Computational Complexity: GPU Computing
Lance Fortnow (http://www.blogger.com/profile/06752030912874378610)
8 comments

[2013-05-19 20:42] As far as I understand, GPU instructions (and execution paths) vary in power consumption. If so, then I think we need reasonable theoretical models that treat power consumption as a resource.
-- Anonymous

[2013-05-16 12:10] Google just announced launching a quantum computing research lab in collaboration with NASA:

http://plus.google.com/u/0/117790530324740296539/posts/24cfPLbFXFP
-- Anonymous

[2013-05-10 03:20] There are tons of publications in the area of parallel computing dealing with the SIMD (vector) or MIMD (multiprocessor, multicore) programming paradigms. Tons of them, about any algorithm you can dream of; your entire life wouldn't be enough to read even the abstracts, let alone the numerous textbooks.

Implementation on existing processors is a matter of applied research (or just plain development), not a matter of theory.
-- Anonymous

[2013-05-09 22:57] Bit parallelism was for a long time considered a constant-factor increase, until we realized that it was closer to O(log n) than O(1).
The increase in the number of GPUs/CPUs seems to be tracking an O(log n) curve as well.

<i>Do we get an interesting new class of problems?</i>

"The class of problems that we can solve rather fast in real-world computers" sounds to me like the <b>most</b> interesting class one can study. I guess it depends on each person's definition of interesting.
-- Alex Lopez-Ortiz (http://www.cs.uwaterloo.ca/~alopez-o)

[2013-05-09 17:57] Maybe the model is not that interesting from a theoretical perspective? Vectorizing may increase the speed of computation for some problems, but it seems to be a constant factor of increase. Say we take problems that practitioners solve using GPUs and consider AC0 reductions to them: do we get an interesting new class of problems? It doesn't seem so. Would it be better to use some other class of reductions? It doesn't seem so either. The fact that practitioners are interested in some technology does not make it interesting theoretically. It is hard for a model of computation to be theoretically interesting when it is not closed under a nice, robust class of reductions, even though for practitioners an improvement by a constant multiplicative factor like 10 or 100 can be very important. Unless there is evidence that the usage of GPUs is more than an engineering issue, their theoretical study does not make sense.

On the other hand, quantum computing is clearly interesting from a theoretical perspective.
-- Anonymous

[2013-05-09 17:41] The <a href="http://en.wikipedia.org/wiki/GPGPU" rel="nofollow">wikipedia article</a> on general-purpose computing on GPUs (GPGPU) may be of interest, or the proceedings of the <a href="http://www.cs.unc.edu/Events/Conferences/GP2/" rel="nofollow">2004 workshop</a> on the topic; people who have worked on GPGPU over the last decade include Sudipto Guha, Uzi Vishkin, Suresh Venkatasubramanian, Leo Guibas, John Gilbert, Frank Dehne, Erik Demaine, and Prosenjit Bose.
-- Anonymous

[2013-05-09 13:53] What is the latest result from the theory community that is "in use" in multicore, CPU-GPU, or distributed environments?
-- Anonymous

[2013-05-09 12:14] <i>The theory world writes algorithms for non-existent quantum computers but not for the machines that we currently use.</i>

This applies to multicore (CMP) computers as well, though people are actively working on this issue.
-- Alex Lopez-Ortiz (https://cs.uwaterloo.ca/~alopez-o/)