Posted by: atri | March 22, 2010

## Lecture 25: List Decoding from Random Errors

Today we showed that under a natural (yet fairly general) random noise model, any code with relative distance $\delta$ can be list decoded w.h.p. from a fraction of random errors arbitrarily close to $\delta$, with a list size of $1$.
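As a quick sanity check (this is just an illustration, not the argument from lecture), here is a simulation with a $q$-ary repetition code, whose relative distance is $1$: each errored position is replaced by a uniformly random *different* symbol, and plurality decoding recovers the message w.h.p. even when the error fraction is far beyond the $\delta/2$ unique-decoding radius. The parameters `n`, `q`, and `rho` are arbitrary choices for the demo.

```python
import random
from collections import Counter

def decode_plurality(received):
    """Decode a repetition-code word by plurality vote over its symbols."""
    return Counter(received).most_common(1)[0][0]

def trial(n=1000, q=64, rho=0.9, rng=random):
    """One experiment: encode a random symbol with the length-n repetition
    code over a size-q alphabet, corrupt a rho fraction of the positions
    with uniformly random *different* symbols, decode, and report success."""
    msg = rng.randrange(q)
    word = [msg] * n
    for i in rng.sample(range(n), int(rho * n)):
        # random error model: an errored position gets a uniform symbol != msg
        word[i] = (msg + rng.randrange(1, q)) % q
    return decode_plurality(word) == msg

# With n=1000, q=64, rho=0.9: the true symbol survives in ~100 positions,
# while each wrong symbol appears only ~900/63 ≈ 14 times, so plurality
# voting succeeds w.h.p. even at 90% errors (vs. delta/2 = 1/2).
successes = sum(trial() for _ in range(100))
print(successes)
```

Note how the calculation in the final comment mirrors the role of the alphabet size: for small $q$ the random wrong symbols pile up on few values and voting fails much earlier, which is consistent with a lower bound on $q$ being needed.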

In the lecture, I mentioned that the lower bound of $q\ge 2^{\Omega(1/\epsilon)}$ is necessary. See this paper for the details.