Posted by: atri | March 4, 2009

Lecture 20: List Decoding

In today’s lecture, we formally defined the notion of list decoding. We then discussed two ways to handle the scenario in which a list decoder returns more than one codeword:

  1. Declare an error. In this scenario, one can argue in many cases that for random errors, the list size is at most one with high probability. Using the argument in Shannon’s capacity proof for the qSC_p, one can show that for most codes the property claimed in the previous sentence holds (e.g., see Section 2.1 here). A similar result holds for Reed-Solomon codes (e.g., see this paper by Bob McEliece).
  2. Make use of side information to prune down the list. This framework has been used to great advantage in complexity theory (e.g., see this survey by Madhu Sudan). Side information can also be obtained via a side channel: see this older blog post for more details.
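To make the notion concrete, here is a minimal brute-force sketch of a list decoder: given a received word, it outputs every codeword within Hamming distance e. This is purely illustrative (exponential-time search over the code, not an efficient algorithm); the function and variable names are my own, not from the lecture. A code is (p, L)-list decodable when, for e = pn, this list always has size at most L.

```python
def hamming_distance(x, y):
    """Number of positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def list_decode(code, received, e):
    """Brute-force list decoder: return all codewords within
    Hamming distance e of the received word."""
    return [c for c in code if hamming_distance(c, received) <= e]

# Toy example: the binary repetition code of length 4.
code = [(0, 0, 0, 0), (1, 1, 1, 1)]

# With e = 2, the received word (0,0,1,1) is distance 2 from BOTH
# codewords, so the decoder returns a list of size 2 -- exactly the
# scenario the two options above are meant to handle.
print(list_decode(code, (0, 0, 1, 1), 2))

# With e = 1, the same setup behaves like unique decoding.
print(list_decode(code, (0, 0, 0, 1), 1))
```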

We then stated the theorem that implies that the list decoding capacity is 1-H_q(p), which is exactly the capacity of the qSC_p channel. In other words, list decoding bridges the gap between Shannon’s world and Hamming’s world. For the material covered in today’s lecture, see the notes on Lecture 13 from Fall 07.
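For readers who want to evaluate the capacity expression numerically, here is a small sketch of the q-ary entropy function H_q(p) = p log_q(q-1) - p log_q p - (1-p) log_q(1-p) and the resulting list decoding capacity 1 - H_q(p). The function names are mine; the formula is the standard one.

```python
import math

def H_q(p, q):
    """q-ary entropy: p*log_q(q-1) - p*log_q(p) - (1-p)*log_q(1-p).

    Defined by continuity at the endpoints p = 0 and p = 1.
    """
    if p == 0:
        return 0.0
    if p == 1:
        return math.log(q - 1, q)
    return (p * math.log(q - 1, q)
            - p * math.log(p, q)
            - (1 - p) * math.log(1 - p, q))

def list_decoding_capacity(p, q):
    """Rate up to which (p, L)-list decoding is possible: 1 - H_q(p)."""
    return 1 - H_q(p, q)

# Sanity check: for q = 2, H_2(1/2) = 1, so the capacity at p = 1/2 is 0 --
# matching the fact that no positive rate survives a fraction 1/2 of errors.
print(list_decoding_capacity(0.5, 2))
```

Note that 1 - H_q(p) coincides with the Shannon capacity of the qSC_p, which is the sense in which the theorem bridges the two worlds.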

On Friday, we will finish the proof for the list decoding capacity (the proof will give you the necessary background for problem 8 in the homework).
