Thursday, December 12, 2013

Approximate Computing

Hadi Esmaeilzadeh is the newest professor in the School of Computer Science at Georgia Tech. Hadi works in computer architecture and did some great work on dark silicon. (Just noticed this is starting to look like a Lipton-style blog post.)

I had Hadi give a lecture in my theory class (the dreaded day-before-Thanksgiving lecture). Hadi talked about his new research directions in approximate computing. Approximate computing is a new paradigm in the architecture community; it doesn't even have its own Wikipedia entry yet.

When you do arithmetic operations, say addition and multiplication on real numbers, you typically want full precision up to the limits of the bits we use to store those numbers. Suppose you allow some error, say 5%. For logical operations, this would be a disaster, giving a very wrong answer. Running various optimization algorithms, like simplex, these errors might compound, leading to very suboptimal results.
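To get a feel for the compounding, here's a toy Python sketch of my own (not anything from Hadi's lecture), assuming a hypothetical approximate multiplier that introduces up to 5% relative error on each operation. Chaining many such operations typically leaves you with an accumulated error well above the per-operation 5%.

```python
import random

def approx_mul(x, y, rel_err=0.05):
    """Multiply with up to 5% relative error, mimicking a hypothetical approximate ALU."""
    return x * y * (1 + random.uniform(-rel_err, rel_err))

random.seed(0)
exact, approx = 1.0, 1.0
for _ in range(50):               # 50 chained multiplications by 1.01
    exact *= 1.01
    approx = approx_mul(approx, 1.01)

print(f"exact  = {exact:.4f}")
print(f"approx = {approx:.4f}")
print(f"relative error = {abs(approx - exact) / exact:.1%}")
```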

But there are several scenarios where approximate computing might not hurt that much. Media like pictures, sound and video are not exact anyway, and a small error might not degrade the quality noticeably. Statistical methods that sample a large space, such as when we analyze big data, could still yield reasonable results using approximate computing.
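For the statistical case, here's another small sketch under the same assumed 5%-error model: when we average a large sample, independent per-element errors largely cancel, so the estimate barely moves.

```python
import random

random.seed(0)
data = [random.gauss(100, 15) for _ in range(100_000)]   # stand-in for a big data sample

exact_mean = sum(data) / len(data)

# Each term picks up an independent error of up to 5%, as an approximate unit might introduce.
noisy_sum = sum(x * (1 + random.uniform(-0.05, 0.05)) for x in data)
approx_mean = noisy_sum / len(data)

print(f"exact mean  = {exact_mean:.2f}")
print(f"approx mean = {approx_mean:.2f}")
print(f"relative error = {abs(approx_mean - exact_mean) / exact_mean:.3%}")
```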

Why do approximate computing? Because of the architecture--approximate computing can often be done faster and with less power consumption. The tradeoffs may allow approximate computing to handle tasks beyond what we can do with traditional computing.

Approximate computing needs good theoretical models and that's where our community can come in. What's the right way to model the power-speed-accuracy tradeoffs, and how can we determine which computational problems can take advantage of these tradeoffs? There might be nice connections to learning theory and property testing.

5 comments:

  1. There's presently a bit of a practical spat between computer scientists specializing in high performance computing and computational mathematicians/scientists with respect to how extreme-scale systems are going to evolve. The computer scientists believe that the sky will fall and that we won't be able to believe the results of a dot product over the whole machine ever again, necessitating nearly exclusive use of algorithms that tolerate a lot of numerical noise or putting parts of the computation in a special but slow error-checked and verified mode. The computational mathematicians believe that no supercomputer vendor would shame themselves by ever delivering a machine that unreliable, and that we will be able to continue to scale up cautiously. It will be interesting to see if machines with approximate computing capability ever take off, because they're going to be very hard to use.

  2. This research direction was started at GaTech almost a decade ago. See http://www.computer.org/csdl/trans/tc/2005/09/t1123-abs.html

    Moshe Vardi

  3. Here is a company doing something similar.
    http://www.zdnet.com/probability-chip-start-up-bought-by-analog-devices-3040093105/

  4. Moshe and Aram are reluctant to give Hadi much credit. Looks like ME politics carries through to the academic world.

  5. Please.... they are just pointing out related/past work. While I don't want to discredit Hadi for his work, we certainly cannot credit him with being the first to think of this idea. To the best of my knowledge Krishna Palem (see Moshe's comment) gets the credit for it. This is not politics, just the ability to read publication dates properly.
