Posted by: atri | March 6, 2009

Lecture 21: List decoding capacity

(Have a wonderful spring break. Unlike you guys, I’ll be chilling in Buffalo, so if you have any questions, feel free to comment on the blog and/or send me an email.)

In today’s lecture, we proved the list decoding capacity theorem. In particular, we proved the following two results (a quick numerical sketch of the quantities involved appears right after the list):

  1. There exists a \left(p,O\left(\frac{1}{\epsilon}\right)\right)-list decodable q-ary code of rate 1-H_q(p)-\epsilon (for any \epsilon>0).
  2. Any code of rate 1-H_q(p)+\epsilon that is (p,L)-list decodable needs to satisfy L\ge q^{\Omega(n)}.
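For concreteness, here is a minimal Python sketch of the two quantities that drive both results: the q-ary entropy function H_q(p) and the list decoding capacity 1-H_q(p). (The function names are mine, for illustration only.)

    import math

    def Hq(p, q):
        # q-ary entropy: H_q(p) = p*log_q(q-1) - p*log_q(p) - (1-p)*log_q(1-p)
        if p == 0:
            return 0.0
        return (p * math.log(q - 1, q)
                - p * math.log(p, q)
                - (1 - p) * math.log(1 - p, q))

    def list_decoding_capacity(p, q):
        # Best achievable rate for list decoding up to fractional radius p
        return 1 - Hq(p, q)

    print(list_decoding_capacity(0.1, 2))  # ~0.531 for q = 2, p = 0.1

For instance, with q=2 and p=0.1 the capacity is about 0.531, so the first result promises (0.1, O(1/\epsilon))-list decodable binary codes of rate 0.531-\epsilon, for any \epsilon>0.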

The first result was proved by picking a random code (à la Shannon’s proof for the qSC_p capacity). As with Shannon’s proof, this leads to the following natural questions:

  • Can we achieve list decoding capacity with linear codes?  In the homework, you will show that this is indeed the case. In particular, one can get to within \epsilon of the capacity with a list size of q^{O(1/\epsilon)}.
  • Can we come up with an explicit construction of a code (along with efficient encoding and list decoding algorithms) that achieves the list decoding capacity? Towards the end of this course, we will see a partial positive answer to this question.

The scribed notes of Lecture 14 from Fall 07 contain the material we talked about today.

List decoding throws another interesting combinatorial parameter at us: the worst-case list size. In particular, what is the maximum list size we need in order to approach the list decoding capacity? After the jump, I have mentioned some questions related to the list size (some of these are open, so if you are interested in an open combinatorial question in coding theory, hopefully you’ll find them interesting):

  • Note that in the above, there is a gap between what is known for linear codes and for general codes. In particular, to get within \epsilon of the list decoding capacity, one needs only a list size of O(1/\epsilon) for general codes, while for linear codes the known list size is exponentially larger: q^{O(1/\epsilon)}. It is natural to ask whether this exponential gap is necessary. It turns out that the answer is no for q=2 (this was shown in this paper by Venkat Guruswami, Johan Hastad, Madhu Sudan and David Zuckerman). However, the question for q\ge 3 is still unresolved.
  • As you will show in your homework problem, a random linear code can get to within \epsilon of the list decoding capacity with list size q^{O(1/\epsilon)} (while the result we proved in class today works for general random codes). The open question is whether one can prove a q^{o(1/\epsilon)} upper bound on the list size for random linear codes. (The result by Guruswami et al. mentioned above does not work for random linear codes.)
  • Another natural question is whether one can get to within \epsilon of the list decoding capacity with a list size of o(1/\epsilon). The answer is known to be no when p\ge 1-1/q-\sqrt{\epsilon}. This was first proved in two works by Blinovsky (also see this paper). The result for q>2 was independently proved by Venkat Guruswami and Salil Vadhan. It is worth noting that the Guruswami-Vadhan result does not work for smaller p, while Blinovsky’s result works for all p>0, though for smaller p his lower bound is asymptotically much weaker than the known upper bound of O(1/\epsilon). Thus, for constant 0<p<1-1/q, the gap is still open.
  • A weaker version of the question above is to ask it for random codes. In particular, is the analysis that we did in class tight? This has recently been answered in the affirmative; a back-of-the-envelope version of that analysis is sketched right after this list.
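To see concretely why roughly 1/\epsilon is the threshold in the analysis from class, here is a back-of-the-envelope Python sketch of the union-bound exponent (the function names are mine, and I am assuming the standard volume bound Vol_q(0,pn)\le q^{H_q(p)n} that we used in lecture). For a random code of rate 1-H_q(p)-\epsilon, the probability of not being (p,L)-list decodable is at most q^{n(1-(L+1)\epsilon)}, which goes to 0 exactly when L+1 > 1/\epsilon.

    import math

    def Hq(p, q):
        # q-ary entropy function (all logarithms base q)
        if p == 0:
            return 0.0
        return (p * math.log(q - 1, q)
                - p * math.log(p, q)
                - (1 - p) * math.log(1 - p, q))

    def failure_exponent(p, q, eps, L):
        # Union bound: q^n ball centers, q^{rate*n*(L+1)} choices of L+1
        # codewords, each falling in a fixed Hamming ball of radius pn
        # with probability <= q^{(H_q(p)-1)n}. Returns c such that
        # Pr[random code is not (p,L)-list decodable] <= q^{c*n}.
        rate = 1 - Hq(p, q) - eps
        return 1 + (L + 1) * (rate - (1 - Hq(p, q)))  # = 1 - (L+1)*eps

    q, p, eps = 2, 0.25, 0.05
    print(failure_exponent(p, q, eps, math.ceil(1 / eps)))  # negative: works w.h.p.
    print(failure_exponent(p, q, eps, 5))  # positive: the union bound gives nothing

The union bound becomes vacuous as soon as L+1 \le 1/\epsilon; the affirmative answer mentioned above says this is not just an artifact of the proof, and uniformly random codes really do need list size \Omega(1/\epsilon).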

Responses

  1. It is interesting that my bounds appeared much earlier than the papers of Venkat and Salil, so it would be better to write that ‘it has been proved by Blinovsky’.
    Also, it would be interesting to find the reason why my bound is asymptotically much weaker than the known upper bound; possibly the upper bound is much weaker than mine?

    • Sorry about that. I re-worded the entry above and noted Venkat and Salil’s work as independent for q>2 (for some reason they were not aware of your later work).

      Regarding the upper bound: it is tight for random codes, i.e., the analysis of Zyablov and Pinsker is tight. So the upper bound might indeed be weak, but proving that would require looking at some other (ensemble of) codes. My intuition is that the upper bound of O(1/\epsilon) is tight, but that is based on the fact that random codes cannot do better (which could be wrong, à la AG codes and the GV bound). So yes, finding out why your bound is weaker is a very interesting question.

