In coding theory, one typically maps a string to a codeword such that even with some small amount of error in the codeword one can still recover the original string. What if the amount of error is too large to give a unique decoding? In the 1950s Peter Elias suggested the idea of list decoding: coming up with a short list of possibilities, one of which is correct.

Madhu Sudan showed that list decoding can be achieved in scenarios where one cannot do unique decoding.

In this paper Sudan gives a polynomial-time list-decoding algorithm for Reed-Solomon codes that handles errors beyond the unique-decoding radius. Later, Guruswami and Sudan gave a polynomial-time algorithm that handles what is believed to be the best possible amount of error.
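To make the gap concrete, here is a small Python sketch comparing the standard error-correction radii for an [n, k] Reed-Solomon code (minimum distance d = n - k + 1): half the minimum distance for unique decoding, the approximate radius of Sudan's algorithm, and the Johnson-bound radius of Guruswami-Sudan. The parameters are my own illustrative choices, not from the post.

```python
import math

def rs_decoding_radii(n, k):
    """Error-correction radii for an [n, k] Reed-Solomon code
    with minimum distance d = n - k + 1. These are the standard
    textbook bounds, stated in their approximate forms."""
    unique = (n - k) // 2                            # half the minimum distance
    sudan = n - math.ceil(math.sqrt(2 * k * n))      # Sudan 1997 (approximate)
    gs = n - math.ceil(math.sqrt(k * n))             # Guruswami-Sudan (Johnson bound)
    return unique, sudan, gs

# A low-rate example where list decoding clearly beats unique decoding.
n, k = 256, 16
unique, sudan, gs = rs_decoding_radii(n, k)
print(unique, sudan, gs)  # → 120 165 192
```

Note that Sudan's algorithm only beats unique decoding at low rates, while the Guruswami-Sudan radius improves on unique decoding for every rate.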

List-decodable codes have had applications to many areas including pseudo-random generators, extractors, hard-core predicates, probabilistically-checkable proofs and a neat result by Sivakumar on the implications of SAT being membership-comparable.

We've seen many other important papers in coding theory from computer scientists over the last decade. Besides the work on list decoding, I should also mention Spielman's breakthrough result showing linear-time encodable and decodable codes, building on the initial work on using expander graphs for codes developed by Sipser and Spielman.

For much more on coding see the surveys by Sudan on list decoding and by Guruswami on error-correcting codes and expander graphs.

Luca Trevisan also has a nice survey on coding theory and its application to computational complexity.

Why is list-decoding necessary for the Sivakumar application? There the message has length n, and we look at a Reed-Solomon encoding of length q = poly(n) over a field of size q. Also, for each index of the encoding we have a list of q^(1/3) elements, which is guaranteed to include the right one. So if we guess each element at random from its list, we obtain a word that is, in expectation, within distance q^(2/3) of the correct codeword. This is well within the unique decoding radius.

Now I see it... please ignore the last comment :)

I have seen mention in more than one place that this is believed to be the best possible amount of error. I was wondering what evidence there is that one cannot do better. Is this just for RS codes, or is it generally believed that the Johnson bound is tight for all codes?
