Posted by: atri | April 16, 2009

## Lecture 35: Distance of Expander code

In today’s lecture, we showed that an expander code based on an $(n,m,a,\beta,a(1-\epsilon))$ expander is a binary linear code with rate at least $1-m/n$ and relative distance at least $\beta$. The crux of the argument was to show that any vector of small enough weight, which corresponds to a subset $S$ of the left vertices, has at least one unique neighbor (i.e. a right vertex that has exactly one edge between itself and a vertex in $S$). We also mentioned (without proof) that the argument can be strengthened to prove a relative distance of $2\beta(1-\epsilon)$. (For a proof by picture, jump to after the fold.)
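As a toy sanity check of the rate claim (the small bipartite graph and numbers below are made up for illustration, not an actual expander), one can build the parity-check matrix $H$ from the graph, where row $c$ records the left-neighbors of check $c$. Since $H$ is $m\times n$, its rank over GF(2) is at most $m$, so the code has dimension at least $n-m$ and rate at least $1-m/n$:

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an n-bit integer."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

n, m = 6, 3
# adj[v] = checks adjacent to left vertex v (left degree a = 2 here).
adj = {0: [0, 1], 1: [0, 2], 2: [1, 2], 3: [0, 1], 4: [0, 2], 5: [1, 2]}

rows = [0] * m
for v, checks in adj.items():
    for c in checks:
        rows[c] |= 1 << v  # H[c][v] = 1 iff check c is a neighbor of v

rank = gf2_rank(rows)
print(rank, (n - rank) / n)  # rate (n - rank)/n is at least 1 - m/n
```

In this toy graph the three check rows are linearly dependent, so the rank is strictly less than $m$ and the rate exceeds $1-m/n$, illustrating why the rate bound is only a lower bound.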

We also quickly saw a natural decoding algorithm, where a left vertex flips its value if a strict majority of the parity checks it participates in are unsatisfied by the current bit values on the left vertices. We saw how we can again use the unique neighbor argument (for $\epsilon <1/4$) to argue that as long as the number of errors is at most $\beta(1-\epsilon)n$, there is always a “flip-able” vertex. This almost completes the proof that the algorithm works, except that we need to show that we are not “converging” to a wrong codeword. The latter fact is true, but we did not have time to prove it. We also did not have time to prove that the algorithm can be implemented in linear time.
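Here is a sketch of that flipping decoder on a made-up toy graph (the graph, the all-zeros codeword, and the received word are illustrative assumptions, not the expander from lecture): repeatedly flip any left vertex for which a strict majority of its checks are unsatisfied, and stop when no such vertex exists.

```python
def flip_decode(y, adj, m, max_rounds=100):
    """adj[v] = checks adjacent to left position v; m = number of checks."""
    y = list(y)
    for _ in range(max_rounds):
        # Recompute the parity of every check under the current word.
        check = [0] * m
        for v, bit in enumerate(y):
            if bit:
                for c in adj[v]:
                    check[c] ^= 1
        # Flip a vertex with a strict majority of unsatisfied checks.
        for v in range(len(y)):
            if 2 * sum(check[c] for c in adj[v]) > len(adj[v]):
                y[v] ^= 1
                break
        else:
            return y  # no flip-able vertex left
    return y

# Toy graph: 4 positions of degree 3, 6 checks, any two positions
# sharing exactly one check; the all-zeros word is a codeword.
adj = {0: [0, 1, 2], 1: [0, 3, 4], 2: [1, 3, 5], 3: [2, 4, 5]}
print(flip_decode([1, 0, 0, 0], adj, m=6))  # one error corrected
```

In this toy example a single error leaves all three of the erroneous position’s checks unsatisfied, while every other position sees at most one unsatisfied check, so exactly the right bit gets flipped.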

Steve raised an interesting point that the algorithm we considered in class outputs the transmitted codeword and not the actual message. I think doing the latter for general expander codes is hard/unknown. (I have not thought about this for long, so I could be wrong: let me know if you see a way otherwise!) However, I’m reasonably sure that Spielman’s linear time encodable and decodable codes based on expanders are systematic, and hence the corresponding linear time decoding algorithm can be made to output the actual message bits. (Again, let me know if I’m wrong!)

All the omitted details from today’s lecture can be found in Lecture 13 of Guruswami’s coding theory course.

We now present a “proof by picture” for the assertion that in any $(n,m,a,\beta,a(1-\epsilon))$ expander, every non-empty subset $T\subseteq L$ with $|T|\le 2\beta(1-\epsilon)n$ satisfies $|U(T)|>0$, which in turn proves that the corresponding expander code has relative distance at least $2\beta(1-\epsilon)$, as was claimed before the fold.

For the proof, note that if $|T|\le \beta n$, then the proof we did in class works. So for the rest of the proof, assume that $|T|>\beta n$. Now consider the situation below:

In the figure above $S\subset T$ is of size exactly $\beta n$. The same colored subsets on the right are the corresponding neighborhood sets, i.e. the blue oval is $N(T)$ and the pink oval is $N(S)$. Now consider the unique neighborhood of $S$, i.e. $U(S)$ in the picture:

Till now, we have not done anything new. However note that

$|U(S)|> a(1-2\epsilon)\beta n$.
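(For completeness, here is a sketch of the edge-counting behind this bound. Each of the $a|S|=a\beta n$ edges leaving $S$ lands in $N(S)$, and every vertex of $N(S)\setminus U(S)$ absorbs at least two of them, so

$a\beta n \ge |U(S)| + 2\left(|N(S)|-|U(S)|\right) = 2|N(S)| - |U(S)|$,

and hence

$|U(S)| \ge 2|N(S)| - a\beta n \ge 2a(1-\epsilon)\beta n - a\beta n = a(1-2\epsilon)\beta n$,

with strict inequality whenever the expansion guarantee $|N(S)|\ge a(1-\epsilon)|S|$ holds strictly.)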

Next consider the neighborhood of the set $T\setminus S$, which is colored green:

Note that in the figure above, $N(T\setminus S)$ does not cover all of $U(S)$. So is this because I cannot draw properly or is something else going on? It turns out that this is always going to be the case. To see this, first note that

$|T\setminus S|=|T|-|S|\le 2\beta(1-\epsilon)n -\beta n =\beta(1-2\epsilon)n$.

Since each left vertex has exactly $a$ neighbors, this implies that

$|N(T\setminus S)|\le a(1-2\epsilon)\beta n$.

Thus, $|N(T\setminus S)|< |U(S)|$, which implies that there is always a vertex like the yellow vertex below (that belongs to $U(S)\setminus N(T\setminus S)$):

Note that the yellow vertex has no neighbor in $T\setminus S$. In other words, the yellow vertex belongs to $U(T)$, which proves that $|U(T)|>0$, as desired.
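The unique-neighbor property is also easy to check numerically. Below is a small helper that computes $U(T)$, the set of right vertices with exactly one edge into $T$, and verifies $|U(T)|>0$ for all small sets $T$ in a made-up toy graph (an illustrative assumption, not an actual expander):

```python
from itertools import combinations
from collections import Counter

def unique_neighbors(T, adj):
    """Right vertices with exactly one edge to the set T of left vertices."""
    counts = Counter(c for v in T for c in adj[v])
    return {c for c, k in counts.items() if k == 1}

# Toy graph: 4 left vertices of degree 3, 6 checks, any two left
# vertices sharing exactly one check.
adj = {0: [0, 1, 2], 1: [0, 3, 4], 2: [1, 3, 5], 3: [2, 4, 5]}

# Every proper subset of the left vertices has a unique neighbor...
ok = all(unique_neighbors(T, adj) for r in (1, 2, 3)
         for T in combinations(adj, r))
print(ok)
# ...but the full left set does not, which matches the fact that the
# guarantee only applies to sufficiently small sets.
```

This mirrors the statement proved above: only sets below the size threshold are guaranteed a unique neighbor; in this toy graph the set of all four left vertices covers every check exactly twice and has none.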