Posted by: atri | January 14, 2009

## Lecture 2: Definitions

Today we went through a bunch of definitions, which let us formally state our main question: “What is the least amount of redundancy one can use to correct as many errors as possible?” The notes from lectures 1 and 2 of the Fall 07 offering cover the material from today.

In the process of defining error correction, we also saw the crucial role of the noise model. We discussed the adversarial noise model pioneered by Hamming and the stochastic noise model (in particular, $BSC_p$) pioneered by Shannon.
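To make $BSC_p$ concrete, here is a minimal sketch (my own illustration, not from the lecture notes): the binary symmetric channel with crossover probability $p$ flips each transmitted bit independently with probability $p$.

```python
import random

def bsc(codeword, p, rng=random.Random(0)):
    """Pass a bit sequence through BSC_p: each bit is flipped
    independently with probability p. `rng` is seeded here only
    for reproducibility of the sketch."""
    return [bit ^ (rng.random() < p) for bit in codeword]
```

For example, `bsc(c, 0.0)` returns `c` unchanged, while `bsc(c, 1.0)` flips every bit; for intermediate $p$ the expected number of flips in a length-$n$ codeword is $pn$. Contrast this with Hamming's adversarial model, where the channel may flip any set of up to $t$ bits of its choosing.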

Please feel free to use the comments section of this post if you have questions on today’s lecture or the course in general.

Finally, I apologize for my broken voice: hopefully it was not too hard for you guys to follow what I was saying.


## Responses

1. This is with reference to the Lecture 2 notes from Fall 07. The encoding function is defined as a “bijective” mapping, but shouldn’t it be an injective mapping?

I ask because the encoding function is generally not surjective, so it cannot be bijective.

2. Krishna: you’re right; thanks for pointing out the error. I have updated the notes accordingly.
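To illustrate the point in the exchange above, here is a quick sketch (my own example, not from the notes) using the 3-repetition code: the encoding map is injective, since distinct messages map to distinct codewords, but it is far from surjective, since only $2^k$ of the $2^{3k}$ possible received words are codewords.

```python
from itertools import product

def encode(msg, r=3):
    """r-repetition code: repeat each message bit r times.
    Injective, but not surjective onto {0,1}^(r*len(msg))."""
    return [b for b in msg for _ in range(r)]

# All 4 messages of length 2 have distinct codewords (injectivity),
# yet only 4 of the 2^6 = 64 length-6 words are codewords.
images = {tuple(encode(list(m))) for m in product((0, 1), repeat=2)}
```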