Posted by: **atri** | March 4, 2009

## Lecture 20: List Decoding

In today’s lecture, we formally defined the notion of list decoding. Then we discussed a couple of ways to “deal” with the scenario when a list decoder returns more than one codeword:

- Declare an error. In this scenario, one can argue in many cases that for random errors, the list size is at most one with high probability. Using the argument in Shannon’s proof of the capacity of the q-ary symmetric channel (qSC_p), one can show that for
*most* codes the property claimed in the previous sentence holds (e.g. see Section 2.1 here). One can also show a similar result for Reed-Solomon codes (e.g. see this paper by Bob McEliece).
- Make use of side information to prune down the list. This framework has been used to great advantage in complexity theory (e.g. see this survey by Madhu Sudan). Side information can also be obtained via a side channel: see this older blog post for more details.
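To make the definition from lecture concrete: a code C of block length n over an alphabet of size q is (e, L)-list decodable if every Hamming ball of radius e contains at most L codewords. Here is a minimal brute-force check of this property (exponential in n, so only for toy examples; the function names are mine, not from lecture):

```python
from itertools import product

def hamming(x, y):
    """Hamming distance between two equal-length tuples."""
    return sum(a != b for a, b in zip(x, y))

def is_list_decodable(code, e, L, q, n):
    """Return True iff every word in [q]^n has at most L codewords
    of `code` within Hamming distance e, i.e. the code is
    (e/n, L)-list decodable."""
    for y in product(range(q), repeat=n):
        if sum(1 for c in code if hamming(c, y) <= e) > L:
            return False
    return True

# Toy example: the binary repetition code of length 3 (distance 3).
rep3 = [(0, 0, 0), (1, 1, 1)]
print(is_list_decodable(rep3, e=1, L=1, q=2, n=3))  # unique decoding up to 1 error
print(is_list_decodable(rep3, e=2, L=1, q=2, n=3))  # radius-2 balls can hold 2 codewords
print(is_list_decodable(rep3, e=2, L=2, q=2, n=3))  # but list size 2 suffices
```

Note how increasing the allowed list size L from 1 to 2 lets the decoding radius grow past half the minimum distance, which is exactly the point of list decoding.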

We then stated the theorem which implies that the list decoding capacity is 1 - H_q(p), which is exactly the same as the capacity of the qSC_p channel. In other words, list decoding bridges the gap between Shannon’s world and Hamming’s world. For material covered in today’s lecture see the notes on Lecture 13 from Fall 07.
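The capacity expression above is easy to evaluate numerically. A quick sketch (my own helper names), using the q-ary entropy function H_q(p) = p·log_q(q−1) − p·log_q(p) − (1−p)·log_q(1−p):

```python
import math

def q_ary_entropy(p, q):
    """q-ary entropy H_q(p), defined for 0 <= p <= 1 and q >= 2."""
    if p == 0:
        return 0.0
    if p == 1:
        return math.log(q - 1, q)
    return (p * math.log(q - 1, q)
            - p * math.log(p, q)
            - (1 - p) * math.log(1 - p, q))

def list_decoding_capacity(p, q):
    """List decoding capacity 1 - H_q(p), meaningful for 0 <= p < 1 - 1/q."""
    return 1 - q_ary_entropy(p, q)

print(list_decoding_capacity(0.0, 2))   # no errors: rate 1 is achievable
print(list_decoding_capacity(0.5, 2))   # p = 1 - 1/q for q = 2: capacity drops to 0
print(list_decoding_capacity(0.25, 2))  # an intermediate point
```

As a sanity check, for q = 2 the capacity is 1 at p = 0 and vanishes at p = 1/2, matching the familiar BSC_p picture.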

On Friday, we will finish the proof for the list decoding capacity (the proof will give you the necessary background for problem 8 in the homework).
