Thursday, May 09, 2013

GPU Computing

Back around 1980, I used to write computer games for the Apple II. Plotting a point on the Apple II screen required dividing by 7, a lengthy process for the 6502 microprocessor. Asking around, we learned how to make division by 7 much faster--lookup tables.
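
For those who never ran into it, the idea is simply to precompute every quotient and remainder once at startup. Here is a rough C sketch of the trick (the table and function names are mine, not anything from my old Apple II code):

    /* Precompute x/7 and x%7 for every hi-res column so that plotting a
       pixel never needs an actual division on a chip with no divide
       instruction. The Apple II hi-res screen is 280 pixels wide, packed
       7 pixels per screen byte, 40 bytes per row. */
    #include <stdint.h>

    #define HIRES_WIDTH 280

    static uint8_t div7[HIRES_WIDTH];
    static uint8_t mod7[HIRES_WIDTH];

    void init_div7_tables(void) {
        for (int x = 0; x < HIRES_WIDTH; x++) {
            div7[x] = (uint8_t)(x / 7);   /* which of the 40 bytes in the row */
            mod7[x] = (uint8_t)(x % 7);   /* which pixel within that byte */
        }
    }

Two table lookups then replace the shift-and-subtract loop the 6502 would otherwise grind through for every pixel.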

As computer gaming got more intense in the decades that followed, we first had graphics cards designed to speed up rendering, and later Graphics Processing Units (GPUs), dedicated processors devoted to graphics.

Around the turn of the century, people started using GPUs for more than just graphics. GPUs did certain kinds of vector operations quickly, and one could use them for a variety of mostly scientific computations, even though GPUs weren't really designed for those other purposes. Following the cupholder principle, GPUs evolved to offer APIs that are easier to access from more common programming languages, becoming General-Purpose GPUs, or GPGPUs. Several systems researchers at Georgia Tech and elsewhere are now redesigning chip layouts to make the most efficient use of CPUs and GPGPUs.
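
To make "certain kinds of vector operations" concrete, here is a minimal CUDA sketch of the sort of data-parallel computation a GPGPU handles well; the kernel and variable names are mine and nothing here comes from any particular library:

    #include <cuda_runtime.h>
    #include <stdlib.h>

    /* SAXPY: each GPU thread updates one element of y = a*x + y. */
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

        /* Copy the vectors to GPU memory. */
        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        /* Launch one thread per element, 256 threads per block. */
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }

The win comes from mapping a million independent element updates onto thousands of lightweight hardware threads; code with heavy branching or irregular memory access fits the model far less comfortably.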

The theory community doesn't seem to have caught on yet. There should be a nice theoretical model that captures the vector and other operations of a GPGPU, and then we should search for algorithms that make the best use of that hardware. The theory world writes algorithms for non-existent quantum computers but not for the machines that we currently use.

8 comments:

  1. The theory world writes algorithms for non-existent quantum computers but not for the machines that we currently use.

    This applies to multicore (CMP) computers as well, though people are actively working on this issue.

  2. What is the latest result from the theory community that is "in use" in multicore, CPU-GPU, or distributed environments?

  3. The wikipedia article on general-purpose computing on GPUs (GPGPU) may be of interest, or the proceedings of the 2004 workshop on the topic; people who have worked on GPGPU over the last decade include Sudipto Guha, Uzi Vishkin, Suresh Venkatasubramanian, Leo Guibas, John Gilbert, Frank Dehne, Erik Demaine, and Prosenjit Bose.

  4. Maybe the model is not that interesting from a theoretical perspective? Vectorizing may speed up the computation of some problems, but the increase seems to be only a constant factor. Say we take the problems that practitioners solve using GPUs and consider AC0 reductions to them. Do we get an interesting new class of problems? It doesn't seem so. Would it be better to use some other class of reductions? It doesn't seem so either. The fact that practitioners are interested in some technology does not make it theoretically interesting. It is hard for a model of computation to be theoretically interesting when it is not closed under a nice, robust class of reductions, while for practitioners an improvement by a constant multiplicative factor like 10 or 100 can be very important. Unless there is evidence that the use of GPUs is more than an engineering issue, their theoretical study does not make sense.

    On the other hand, quantum computing is clearly interesting from a theoretical perspective.

    Replies
    1. Bit parallelism was for a long time considered a constant-factor increase, until we realized that it was closer to O(log n) than O(1). The increase in the number of GPU/CPU cores seems to be tracking an O(log n) curve as well.

      Do we get an interesting new class of problems?

      "The class of problems that we can solve rather fast on real-world computers" sounds to me like the most interesting class one can study. I guess it depends on each person's definition of interesting.

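      To make the word-level parallelism point concrete, a rough sketch (hypothetical function, nothing GPU-specific): ANDing two bit-vectors packed into 64-bit words handles 64 positions per machine instruction, and it is that word size w which has been growing roughly like log n.

          #include <stdint.h>
          #include <stddef.h>

          /* One 64-bit AND covers w = 64 bit positions at once, so two n-bit
             vectors need about n/w word operations instead of n bit operations. */
          void and_bitvectors(const uint64_t *a, const uint64_t *b,
                              uint64_t *out, size_t nwords) {
              for (size_t i = 0; i < nwords; i++)
                  out[i] = a[i] & b[i];
          }
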
  5. There are tons of publications in the area of parallel computing, dealing with the SIMD (vector) or MIMD (multiprocessor, multicore) programming paradigms. Tons of them. About any algorithm you can dream of. Your entire life wouldn't be enough to read just the abstracts. Nor the numerous textbooks.

    Implementation on existing processors is a matter of applied research (or just plain development), not a matter of theory.

  6. Google just announced the launch of a quantum computing research lab in collaboration with NASA:

    http://plus.google.com/u/0/117790530324740296539/posts/24cfPLbFXFP

  7. As far as I understand, GPU instructions (and execution paths) vary in power consumption. If so, then I think we need reasonable theoretical models which treat power consumption as a resource.
