In today’s lecture, we proved the averaging argument that finished the proof of the positive part of Shannon’s capacity theorem for the BSC_p. I then mentioned the capacities of some other channels, such as the BEC_α and the qSC_p, which are 1 - α and 1 - H_q(p) respectively (you’ll prove the latter result in your homework). We also talked about the shortcomings of Shannon’s proof: in particular, the main open question after his work was the following. Construct an explicit code that achieves the capacity of the BSC_p, along with efficient encoding and decoding algorithms. A simpler version of the question is to achieve the above but only for some ε > 0 and p > 0. Elias gave a positive answer to this latter question using *Hamming codes* (I realized just now that I had by mistake said Hadamard codes in the lecture today). See these lecture notes from Venkat Guruswami’s coding theory course for more details.
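As a quick sanity check on these capacity formulas, here is a small Python sketch (my own, not from the lecture; the function names are made up) that evaluates the BSC, BEC, and qSC capacities using the q-ary entropy function:

```python
from math import log

def h_q(p, q=2):
    """q-ary entropy function H_q(p), measured in q-ary symbols per channel use.
    For q = 2 this is the usual binary entropy H(p)."""
    total = p * log(q - 1)
    if 0 < p:
        total -= p * log(p)
    if p < 1:
        total -= (1 - p) * log(1 - p)
    return total / log(q)

def bsc_capacity(p):
    """Shannon capacity of the binary symmetric channel BSC_p: 1 - H(p)."""
    return 1 - h_q(p, 2)

def bec_capacity(alpha):
    """Capacity of the binary erasure channel BEC_alpha: 1 - alpha."""
    return 1 - alpha

def qsc_capacity(p, q):
    """Capacity of the q-ary symmetric channel qSC_p: 1 - H_q(p)."""
    return 1 - h_q(p, q)

print(bsc_capacity(0.11))   # close to 0.5
print(bec_capacity(0.5))    # exactly 0.5
print(qsc_capacity(0.1, 4))
```

Note that capacity drops to 0 exactly at p = 1/2 for the BSC (where H(p) = 1) and at α = 1 for the BEC, matching the intuition that those channels carry no information.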

In the second part of the lecture we saw that there is a gap between the Hamming and Shannon worlds. In particular, in the worst-case noise model one cannot correct beyond a 1/4 fraction of errors (with positive rate), while on the BSC_p one can correct roughly a p fraction of errors, for p as large as almost 1/2 (for large enough block length n). This motivated us to look back at the counter-example which showed that one cannot correct beyond half-the-distance many worst-case errors. We then saw that such bad cases are pathological, i.e. for most error patterns with somewhat more than half-the-distance many errors, the received word still has a unique closest codeword. This motivated the notion of list decoding, which we will define more formally on Wednesday.
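To see the "pathological, not typical" phenomenon concretely, here is a brute-force Python sketch (my own illustration, using a small seeded random linear code as an assumption, not anything from the lecture). It flips one more than half-the-distance many bits in every possible pattern and counts how often the transmitted codeword is still the unique closest one:

```python
import itertools
import random

def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

# Build a toy [n, k] binary linear code from a seeded random generator matrix.
# The specific code is an assumption chosen only so the enumeration is small.
random.seed(1)
n, k = 10, 4
while True:
    G = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
    codewords = set()
    for msg in itertools.product([0, 1], repeat=k):
        cw = tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
        codewords.add(cw)
    codewords = sorted(codewords)
    if len(codewords) < 2 ** k:
        continue  # generator matrix was not full rank; resample
    d = min(hamming(a, b) for a, b in itertools.combinations(codewords, 2))
    if d >= 3:
        break  # minimum distance at least 3, so unique-decoding radius >= 1

half = (d - 1) // 2   # half-the-distance unique-decoding radius
e = half + 1          # one more error than unique decoding guarantees
c = codewords[0]      # transmitted codeword: all-zeros (the code is linear)

good = total = 0
for support in itertools.combinations(range(n), e):
    r = list(c)
    for i in support:
        r[i] ^= 1     # flip e positions of the transmitted codeword
    dists = sorted(hamming(r, cw) for cw in codewords)
    total += 1
    # c is the unique closest codeword iff its distance (exactly e) is the
    # strict minimum over all codewords
    if dists[0] == e and dists[1] > e:
        good += 1

print(f"d={d}, e={e}: {good}/{total} error patterns of weight {e} "
      f"still leave the transmitted codeword as the unique closest one")
```

The counter-example from lecture corresponds to the (few) patterns where the count above fails; for a typical pattern, decoding to the closest codeword still recovers the transmitted word even though we are past half the distance.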

The stuff covered in the first part of lecture can be found in notes for Lecture 11 from Fall 07 while the stuff in the second part can be found in notes for Lecture 12 from Fall 07.
