Back around 1980, I used to write computer games for the Apple II. Plotting a point on the Apple II screen required dividing by 7, a lengthy process for the 6502 microprocessor. Asking around, we learned how to make division by 7 much faster: lookup tables.
As computer gaming got more intense in the decades that followed, we first had graphics cards designed to speed up rendering, and later Graphics Processing Units, or GPUs, dedicated processors devoted to graphics.
Around the turn of the century, people started using GPUs for more than just graphics. GPUs performed certain kinds of vector manipulation quickly, and one could use these operations for a variety of mostly scientific computing. But GPUs weren't really designed for other purposes. Following the cupholder principle, GPUs began to evolve, offering easier-to-access APIs from more common programming languages and becoming General-Purpose GPUs, or GPGPUs. Several systems researchers at Georgia Tech and elsewhere are now redesigning chip layouts to make the most efficient use of CPUs and GPGPUs.
The theory community doesn't seem to have caught on yet. There should be some nice theoretical model that captures the vector and other operations of a GPGPU, and then we should search for algorithms that make the best use of the hardware. The theory world writes algorithms for non-existent quantum computers but not for the machines we currently use.