# Error-detecting and error-correcting codes

## Introduction

We want to transmit to the other team the path they must follow; they have no way of answering back.

N = 00, W = 01, E = 10, S = 11

 Use the function transmit_error below to try to transmit the path to the team.
```python
def random_msg(l, alph="01"):
    return "".join([choice(alph) for i in [1..l]])

def transmit_error(msg, alph="01", p=0.2):
    # each symbol passes unchanged with probability 1-p,
    # otherwise it is replaced by a different symbol of the alphabet
    newmsg = ""
    for c in msg:
        newmsg += c if random() > p else choice(alph.replace(c, ""))
    return newmsg

transmit_error("0"*20, p=0.3)
```
 '10011000000000000111'

How can we send that message so that it is understood? One idea is to repeat each character several times: to transmit "0" we send "0...0". We must have agreed beforehand on the number of repetitions.
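The repetition idea can be sketched in plain Python (not Sage; the helper names encode_repetition and decode_repetition are ours, not part of this sheet). The decoder cuts the received string into blocks and takes a majority vote inside each block:

```python
from collections import Counter

def encode_repetition(msg, r=5):
    """Repeat every symbol of msg r times."""
    return "".join(c * r for c in msg)

def decode_repetition(received, r=5):
    """Cut the received string into blocks of r symbols and take a majority vote."""
    blocks = [received[i:i + r] for i in range(0, len(received), r)]
    return "".join(Counter(b).most_common(1)[0][0] for b in blocks)

sent = encode_repetition("00", r=5)          # the first step of the path, N = "00"
print(sent)                                  # → 0000000000
print(decode_repetition("0100010000", r=5))  # two flipped bits, still decodes to 00
```

Note that with r repetitions the majority vote survives any pattern of fewer than r/2 flips inside a block.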

 Transmit the first two steps of the itinerary by repetition, using the function above.

The general idea of coding is to transmit more information than the message itself contains, so that the extra information compensates for the data lost to errors.

Other examples:

• NIF/NIE letter:
```python
def NIF_letter(s):
    # NIE numbers start with X, Y or Z, which stand for 0, 1 and 2
    pos = "XYZ".find(s[0].capitalize())
    first = s[0] if pos == -1 else str(pos)
    n = int(first + s[1:])
    letters = "TRWAGMYFPDXBNJZSQVHLCKE"
    return letters[mod(n, 23)]

NIF_letter("12345678")
```
 'Z'

## Basic concepts and definitions

Example:

Our alphabet will be "0" and "1"...

... and our words will all have the same fixed length.

 Convert your complete name to a string of 0's and 1's (hint: ord, bin). Then convert it back to your name (hint: int, chr).

An error will be a change in one of the symbols of a transmitted word. The channels we will work with produce random errors independently and with the same probability for each symbol (the function transmit_error above does precisely that). This is often not the case in practice; we will comment on another model later.
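As a sanity check of this channel model, here is a plain-Python version of such a channel (binary alphabet only; the function name transmit is ours, a stand-in for the Sage transmit_error above). Flipping each bit independently with probability p means that, over many transmissions, the fraction of flipped bits should approach p:

```python
import random

def transmit(msg, p=0.2):
    """Flip each bit of a binary string independently with probability p."""
    return "".join(c if random.random() > p else str(1 - int(c)) for c in msg)

random.seed(0)  # reproducible run
trials, n, p = 10000, 20, 0.3
flips = sum(a != b
            for _ in range(trials)
            for a, b in zip("0" * n, transmit("0" * n, p)))
print(flips / (trials * n))  # should be close to p = 0.3
```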

 A binary channel changes each bit with probability $p=0.01$. If we transmit 10 bits, what is the probability that none of them changes? Exactly one? Exactly two?

A code is simply a collection of codewords associated to the different messages we might want to transmit. For example, for the map the code is the set 00, 01, 10, 11.

The general strategy is to use words that are sufficiently different so that, despite the errors, we can recover the original message.

## How different are two words?

The distance between two words is the number of positions in which they differ.

Study the following function.

```python
def dist_words(x, y):
    if len(x) != len(y):
        raise ValueError('The lengths are different')
    pos = [i for i in [0..len(x)-1] if x[i] != y[i]]
    return len(pos)
```
 Find two words of length 5 at distance 3.

Given a code, its distance is the minimum of the distances between distinct codewords. For example, the code $\{001,111,100\}$ has distance 2.
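Following this definition, the distance of a whole code can be computed by brute force over all pairs of codewords. A minimal plain-Python sketch (the helper names hamming and code_distance are ours):

```python
from itertools import combinations

def hamming(x, y):
    """Number of positions where x and y differ (same length assumed)."""
    return sum(a != b for a, b in zip(x, y))

def code_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming(x, y) for x, y in combinations(code, 2))

print(code_distance(["001", "111", "100"]))  # → 2
```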

We will later see the significance of the distance.

 The digits 0 to 9 can be encoded with a barcode. What is the distance of the code described in https://en.wikipedia.org/wiki/Two-out-of-five_code (the first column)?

 Calculate the distance of the map code. Then invent a code for transmitting those four directions that has a greater distance than the original one.

## Error detection and correction

When we receive a message sent through a noisy channel, how do we find out what the original message was?

Idea 1

Only codewords are sent; if we receive a word that is not in the code, we know that some error has occurred.

 In the map code, if we transmit a word and some error occurs, can we detect it?

Idea 2

Suppose that our code has distance $d$. If a transmitted word has suffered at least one but fewer than $d$ errors, the received word is not in the code (why?).

 How many errors can the barcode code above detect?

A good example in this direction is the parity bit: given some binary words that we want to transmit, we append an extra bit to each so that its number of ones is even. Doing this guarantees that the minimum distance is 2, so we can always detect a single error: in that case the number of ones is no longer even.
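A minimal plain-Python sketch of the parity bit (the helper names add_parity and parity_check are ours): appending the bit makes the count of ones even, and a single flip makes it odd again.

```python
def add_parity(word):
    """Append one bit so that the total number of ones becomes even."""
    return word + str(word.count("1") % 2)

def parity_check(word):
    """True when the number of ones is even, i.e. no single error is detected."""
    return word.count("1") % 2 == 0

c = add_parity("01")                              # → "011"
print(c, parity_check(c))                         # → 011 True
corrupted = c[:1] + str(1 - int(c[1])) + c[2:]    # flip the second bit
print(corrupted, parity_check(corrupted))         # → 001 False: error detected
```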

Example:

 00 → 000
 01 → 011
 10 → 101
 11 → 110
 Why is the distance of a code with a parity digit always at least 2?
 What happens if we add another parity digit?

Even when we detect an error, we will always try to decode, that is, to decide which word we believe was sent. Given our assumptions about how errors occur, and since the probability of each single error will be small in practice, it makes sense to assume that fewer errors are more likely than more errors. Therefore we will choose the codeword most similar to the received word. This is usually called "minimum distance decoding" (MDD).
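Minimum distance decoding can be sketched in a few lines of plain Python (brute force over the whole code; the names are ours). When several codewords are at the same minimal distance, this sketch simply returns the first one found:

```python
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def mdd_decode(received, code):
    """Return the codeword closest to the received word (first one on ties)."""
    return min(code, key=lambda c: hamming(received, c))

code = ["000", "011", "101", "110"]   # the parity code of the example above
print(mdd_decode("111", code))        # → 011 (the first codeword at distance 1)
```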

 With the barcode code (use 0,1 instead of letters), encode a three-digit number, transmit it with the function at the very beginning of this sheet, and decode it by MDD.

Idea 3

Suppose that our code has distance $d$. If a transmitted word has suffered fewer than $d/2$ errors, the codeword closest to the received word is the one that was sent (why?).

 How many errors can each of the codes we have seen so far correct?
 The three-digit code with a parity bit above has distance 2. Think about what happens when there is 1 error, or 2, or 3. Then do the same for codes of distance 3.

Parameters of a code:

• Number of messages it can encode
• Length of the codewords
• Distance

We would like to:

• Maximize the number of words we can encode
• Minimize the length
• Maximize the detection and correction capability
• Minimize the resources needed to encode and decode

A good deal of research in coding theory consists of finding new codes that improve on the known ones in these four aspects.

As we have seen, encoding amounts to applying a function (it can be programmed, given as a table of values...), and decoding amounts to searching for the best candidate codeword given the received word (in principle, comparing it with every possible codeword). We will soon see that, using linear algebra, we can do both things in a relatively simple way.

## Linear codes

Using linear algebra we will produce a multitude of codes. We will need the concepts of dimension, basis and equations/kernel of a vector space.

Vector spaces are built over a field. We will not use the real numbers as usual, but the field formed by the numbers 0 and 1 only. We will call it $F$.

 Check that $F$ satisfies the definition of a field, that is, that we can perform the four operations needed to do linear algebra.

For example, a space of dimension 3 would be that of all vectors with three components, the analogue of $R^3$ (how many elements does it have in our case?).

We know two ways of working with linear subspaces inside an ambient space:

• Given a basis: the subspace consists of all linear combinations of it.
• Given equations: the subspace consists of all the vectors that satisfy them.

Example: let us define a 2-dimensional linear subspace of $F^5$. We start by defining $F^5$.

```python
F = GF(2)
V = VectorSpace(F,5)
V; len(list(V))
```
 Vector space of dimension 5 over Finite Field of size 2
 32

Let us take the subspace with basis $v_1$ and $v_2$ shown in the code below. All the elements will be linear combinations $a\cdot v_1+b\cdot v_2$ where $a,b\in F$ (how many are there?). If we put the basis vectors as rows of a matrix we obtain a generator matrix.

```python
L = V.subspace([(1,0,1,1,0),(0,1,1,0,1)])
M = L.matrix()
L; list(L)
```
 Vector space of degree 5 and dimension 2 over Finite Field of size 2
 Basis matrix:
 [1 0 1 1 0]
 [0 1 1 0 1]
 [(0, 0, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 1, 0, 1), (1, 1, 0, 1, 1)]

By the way, looking at the four words we can see that the distance is three. So this is a $(5,2,3)$-code.

We can use matrix multiplication to generate a linear combination. For example the linear combination with $a=b=1$:

```python
vector([1,1])*M
```
 (1, 1, 0, 1, 1)

The other way of working with subspaces is to give linear equations. For example, all the elements of $L$ have the same first and fourth coordinates, so they satisfy the equation $x_1+x_4=0$ (remember that $1+1=0$!). In Sage, we ask for the vectors $v$ such that $M\cdot v=0$. Each such vector is an equation that all the elements of $L$ satisfy (because they are linear combinations of the rows of $M$). There may be many equations; if we take a basis of the equations and put the basis vectors in rows, we obtain a check matrix. By the dimension property of linear subspaces, the number of rows will be $n-k$.

```python
H = M.right_kernel_matrix()
H
```
 [1 0 0 1 0]
 [0 1 0 0 1]
 [0 0 1 1 1]

Now we multiply each vector of $L$ by the equations to check that the results are always zero:

```python
for v in L:
    print(v, [v.dot_product(w) for w in H.rows()])
```
 (0, 0, 0, 0, 0) [0, 0, 0]
 (1, 0, 1, 1, 0) [0, 0, 0]
 (0, 1, 1, 0, 1) [0, 0, 0]
 (1, 1, 0, 1, 1) [0, 0, 0]

Let us write it in a more usual way: as the product of the check matrix by the word written as a column. Note that Sage assumes here that the vectors are columns.

```python
for v in L:
    print(v, H*v)
```
 (0, 0, 0, 0, 0) (0, 0, 0)
 (1, 0, 1, 1, 0) (0, 0, 0)
 (0, 1, 1, 0, 1) (0, 0, 0)
 (1, 1, 0, 1, 1) (0, 0, 0)
 Build a linear subspace of dimension 4 in $F^8$ and see what the computations above produce in your case.

Here comes the big definition of this session:

A linear code is a linear subspace of some vector space over $F$.

That's it.

 Make a list of all the codes that we have seen in this session. Which ones are linear? Try to write a basis for some of those.

We will now see how to do the encoding and decoding with linear algebra.

A (binary) linear code of length $n$, dimension $k$ and distance $d$ will be called a "linear $(n,k,d)$-code".

 How many messages can we encode with a linear $(n,k,d)$-code?

## Distance of linear codes

Let us talk a bit about computing the distance of a linear code. Suppose that the code has $M$ codewords (by now you should be able to calculate that number). For a general code we would have to check all the distances between pairs of codewords, and the number of pairs of $M$ elements is $\frac{M(M-1)}{2}$. But for linear codes it will be much simpler:

• Let us define the weight of a word $x$ to be the number of ones in $x$. For example $w(1001010101)=5$.
• Then it should be clear that the distance between $x$ and $y$ is $w(x-y)$.
• And now comes the theorem: the distance of a linear code is equal to the smallest nonzero weight of its elements.

The conclusion is that in order to find the distance of a linear code we just need to check $M-1$ elements.
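As an illustration, here is the weight shortcut in plain Python, applied to the four codewords of the $(5,2,3)$-code listed earlier in this sheet (the helper name weight is ours):

```python
def weight(word):
    """Number of nonzero symbols in the word."""
    return sum(c != "0" for c in word)

# The four codewords of the (5,2,3)-code seen earlier
code = ["00000", "10110", "01101", "11011"]
print(min(weight(c) for c in code if weight(c) > 0))  # → 3, the code's distance
```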

 Calculate the distances of some of the linear codes that have appeared until now. One of them should be the dimension 4 code that you constructed. For this you may want to write a function to calculate weights of words.

## Encoding and decoding linear codes

First, how do we encode messages with linear codes? The main idea here is: the codewords are all the elements of the subspace, so if we have a basis we just take linear combinations of it.

• Write a basis of your linear code. It will have $k$ vectors of length $n$.
• Put them as rows in a matrix (a generator matrix).
• Write your message as a vector of length $k$.
• Multiply your message by the matrix to obtain a vector of length $n$: the codeword for your message.
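The steps above can be sketched in plain Python, with the mod-2 arithmetic written out by hand instead of using Sage's GF(2) (the function name encode is ours); the generator matrix is the one of the $(5,2,3)$-code from the previous section:

```python
def encode(message, G):
    """codeword_j = sum_i message_i * G[i][j], computed mod 2."""
    return [sum(m * row[j] for m, row in zip(message, G)) % 2
            for j in range(len(G[0]))]

G = [[1, 0, 1, 1, 0],     # generator matrix of the (5,2,3)-code seen earlier
     [0, 1, 1, 0, 1]]
print(encode([1, 1], G))  # → [1, 1, 0, 1, 1]
```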
 Use the code of dimension 4 that you wrote above to encode the first two steps of the map path.

A famous example of a linear code, very important while the theory was being developed, is the Hamming (7,4,3) code. Here we have a modified version of it. Its generator matrix is:

```python
M = matrix(F,[(1,0,0,0,1,1,1),(0,1,0,0,1,1,0),(0,0,1,0,1,0,1),(0,0,0,1,0,1,1)])
M
```
 [1 0 0 0 1 1 1]
 [0 1 0 0 1 1 0]
 [0 0 1 0 1 0 1]
 [0 0 0 1 0 1 1]

Notice that the 4x4 submatrix on the left is the identity matrix. This will produce something interesting when we encode:

```python
vector([1,1,0,1])*M
```
 (1, 1, 0, 1, 0, 1, 0)

Did you notice the beginning of that codeword?

Codes whose generator matrices contain the $k \times k$ identity matrix on the left are called systematic codes. They are particularly nice because it is easy to read off the message: after fixing any errors you can correct, just take the first $k$ symbols of the word.
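For instance, encoding with the modified Hamming generator matrix above in plain Python (mod-2 arithmetic by hand; the encode helper is ours) shows the message sitting in the first four positions of the codeword:

```python
def encode(message, G):
    """Multiply the message by the generator matrix, mod 2."""
    return [sum(m * row[j] for m, row in zip(message, G)) % 2
            for j in range(len(G[0]))]

G = [[1, 0, 0, 0, 1, 1, 1],   # the modified Hamming generator matrix above
     [0, 1, 0, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 0, 1],
     [0, 0, 0, 1, 0, 1, 1]]

c = encode([1, 1, 0, 1], G)
print(c)       # → [1, 1, 0, 1, 0, 1, 0]: it starts with the message itself
print(c[:4])   # → [1, 1, 0, 1]: reading a clean systematic word is truncation
```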

Decoding is more interesting. First of all, it is easy to check if a received word $y$ is in the code: with a check matrix $H$ we just need to see if $H\cdot y$ is zero.

 Check that the codeword of the previous exercise (the one you encoded with your linear code) satisfies this condition. Then change one bit: is the new word in the code? Relate this answer to the distance of your code.

So we can detect errors in that way. How about correcting them? It turns out that the properties of linear spaces will help a lot here. Let us say that the codeword $x$ was sent, but it was modified by an error vector $e$, so we received $y=x+e$. If we apply the matrix $H$ to the left we get:

$H\cdot y = H\cdot (x+e) = H\cdot x + H\cdot e = 0 + H\cdot e = H\cdot e$.

$H\cdot y$ is called the syndrome of $y$. There are many vectors with the same syndrome as $y$; one of them is $e$, but which one? Remember the idea that fewer errors are more likely than more errors. This means that among all those vectors we should choose one with the fewest 1's (because each 1 in $e$ is an error in the transmission). So the decoding method is:

• Calculate all the vectors with the same syndrome as $y$.
• Choose $e$ as one of the vectors with smallest weight (if there is more than one, choose one at random).

Let us try an example with the $(5,2,3)$-code of the beginning of the section. We will encode the message "11", add one error to it, and try to correct it.

```python
F = GF(2)
V = VectorSpace(F,5)
L = V.subspace([(1,0,1,1,0),(0,1,1,0,1)])
M = L.matrix()
x = vector([1,1])*M
x
```
 (1, 1, 0, 1, 1)

Now we change one bit; that is the word that we would receive.

```python
e = vector([0,1,0,0,0])
y = x+e
y
```
 (1, 0, 0, 1, 1)

Let us compute its syndrome.

```python
H = M.right_kernel_matrix()
H*y
```
 (0, 1, 0)

If $y$ was a codeword we would have obtained the zero vector; so we have detected an error. Now we will look for all the vectors of length 5 that have the same syndrome.

```python
es = [v for v in V if H*v==H*y]
es
```
 [(0, 1, 0, 0, 0), (1, 1, 1, 1, 0), (0, 0, 1, 0, 1), (1, 0, 0, 1, 1)]

Let us check that fixing (subtracting) any of these errors produces a codeword (remember that the codewords are precisely the words with zero syndrome):

```python
for e in es:
    print(y-e, H*(y-e))
```
 (1, 1, 0, 1, 1) (0, 0, 0)
 (0, 1, 1, 0, 1) (0, 0, 0)
 (1, 0, 1, 1, 0) (0, 0, 0)
 (0, 0, 0, 0, 0) (0, 0, 0)

Which of those four vectors $y-e$ is the correct one? The one coming from the $e$ with the fewest 1's:

```python
def w(v):
    return len([i for i in v if i==1])

minw = min([w(e) for e in es])
for e in es:
    if w(e) == minw:
        best_e = e   # keep one of the minimum-weight candidates
print(y-best_e, x)
```
 (1, 1, 0, 1, 1) (1, 1, 0, 1, 1)

As we see, we corrected the error.

 Exchange messages with someone else: agree on a code, encode something, add some errors to it (by hand or with the transmitting function at the beginning of the sheet) and send it (you can share a new worksheet where you put the word and the other person decodes it). Then you receive a word in the same way and try to correct it.

Now, if we send many codewords through a channel using a certain linear code, we can precompute some values that are going to be used to correct every word:

• Compute the syndromes of every word of length $n$.
• Group the words that have the same syndrome. (In the literature the usual name for these groups is coset.)
• In each group choose a word with minimum weight; if there is more than one such word, choose any one of them. (In the literature this word is called the coset leader.)
• Whenever you receive a word, compute its syndrome; the coset leader for that syndrome is the error.
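These four steps can be sketched in plain Python for the $(5,2,3)$-code, using its check matrix computed above (mod-2 arithmetic by hand; the names syndrome, leaders and correct are ours):

```python
from itertools import product

H = [[1, 0, 0, 1, 0],   # check matrix of the (5,2,3)-code, as computed above
     [0, 1, 0, 0, 1],
     [0, 0, 1, 1, 1]]

def syndrome(v, H):
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

# Steps 1-3: group all words of length 5 by syndrome and keep a
# minimum-weight representative of each group (the coset leader)
leaders = {}
for v in product([0, 1], repeat=5):
    s = syndrome(v, H)
    if s not in leaders or sum(v) < sum(leaders[s]):
        leaders[s] = v

# Step 4: to correct a received word, add (= subtract, mod 2) the leader
def correct(y, H):
    e = leaders[syndrome(y, H)]
    return tuple((a + b) % 2 for a, b in zip(y, e))

print(correct((1, 0, 0, 1, 1), H))  # → (1, 1, 0, 1, 1), as in the example above
```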
 A programming exercise (skip it unless you are quite good at programming): calculate the table of coset leaders of the $(5,2,3)$-code. Then improve your code so that, taking a generator matrix as input, it outputs the table of syndromes and coset leaders.

The only thing that remains is to recover the original message once we have corrected the errors. This is just Linear Algebra: we have a vector in a space, a basis of the space, and we want the coefficients of that vector in that basis. We will skip the details.

Sage contains lots of functions to work with codes; you can see them by typing codes. in a cell below. Now let us redo the example above with Sage functions instead of by hand.

```python
F = GF(2)
V = VectorSpace(F,5)
L = V.subspace([(1,0,1,1,0),(0,1,1,0,1)])
M = L.matrix()
C = codes.LinearCode(M)
C
```
 Linear code of length 5, dimension 2 over Finite Field of size 2

```python
m = vector([1,1])
x = C.encode(m)
m; x
```
 (1, 1)
 (1, 1, 0, 1, 1)

```python
e = vector([0,1,0,0,0])
y = x+e
y
```
 (1, 0, 0, 1, 1)

```python
C.decode_to_code(y); C.decode_to_message(y)
```
 (1, 1, 0, 1, 1)
 (1, 1)

```python
C.decoders_available()
```
 ['Syndrome', 'NearestNeighbor']

```python
# see all that there is in Sage about codes
codes.
```

## The Singleton bound

Suppose we want to construct a linear code that can send 20 different messages. We want the code to be as short as possible and with as much detection/correction capability as possible. What are the limits of what we can hope for? The following bound gives us information about this.

Singleton bound: for any code of length $n$ and distance $d$ over an alphabet of $q$ symbols, the maximum number of codewords is $q^{n-d+1}$.

For the case of an $(n,k,d)$-code, since the number of codewords is $q^k$, we get: $k \leq n-d+1$.
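A tiny plain-Python sketch of using the bound (the function name is ours; the parameter values below are chosen only for illustration): to send 20 messages with a binary code we need $2^k \geq 20$, i.e. $k \geq 5$, and the bound then forces $n \geq k+d-1$.

```python
def singleton_max_codewords(n, d, q=2):
    """Singleton bound: at most q^(n-d+1) codewords of length n and distance d."""
    return q ** (n - d + 1)

# With n = 9 and d = 5, the bound allows 2^(9-5+1) = 32 >= 20 codewords,
# so these parameters are not ruled out (which does not mean a code exists).
print(singleton_max_codewords(9, 5))  # → 32
```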

 Find some values of $k$, $n$ and $d$ that would allow us to send 20 different messages, and indicate the corresponding detection and correction capabilities. Note that a code with these parameters may not exist; the bound is only a necessary condition.

There are many, many other bounds that give information about how good a code can be (remember the goals of code construction that we saw above). You can find some in https://en.wikipedia.org/wiki/Singleton_bound#See_also.

## The rate of a code and the capacity of a channel

You could imagine that we can guarantee fixing the errors by adding many, many bits to the messages (basically making $n$ and $d$ very large). But it is impractical to go too far. What can we achieve if we only want to add a few bits? We answer this question now.

We know that with distance $d$ we can correct up to (almost) $d/2$ errors. Whether decoding succeeds depends on the probability $p$ of each single error in the transmission, so we can calculate the probability that a given code decodes a message correctly. For example, for a code with $n=10$ and $d=3$ over a channel with $p=0.01$, we already calculated the probability of correct decoding: it is the probability of 0 or 1 errors. It would be good to make this probability as close to 1 as possible (virtual certainty).

The following number gives us information about how noisy is the channel: the capacity of a (binary, symmetric, memoryless) channel with symbol probability $p$ is $C(p)=1 + p\log_2 p + (1-p)\log_2(1-p)$.

```python
var('p')
capacity(p) = 1+p*log(p)/log(2)+(1-p)*log(1-p)/log(2)
for p in [10^-3, 1/100, 1/10, 1/4, 1/2]:
    print(p, capacity(p).n())
```
 1/1000 0.988592242262539
 1/100 0.919206864104089
 1/10 0.531004406410719
 1/4 0.188721875540867
 1/2 0.000000000000000

Now, when we transmit we would like not to add too much redundancy, just whatever is needed to correct the errors. This can be measured by the so-called rate of the code, the number $k/n$. It is between 0 and 1, and the larger the better (a large rate means we do not add much extra information).

The following fundamental theorem by Shannon, one of the founders of Coding Theory, relates the rate of a code, the capacity of a channel and the success probability.

Informal statement of Shannon's Theorem: for any rate $R$ and channel with capacity $C$,

• If $R<C$, it is possible to transmit information at rate $R$ with probability of correct decoding as close to 1 as we want.
• If $R>C$, it is not possible to transmit information at rate $R$ without the probability of correct decoding going down.

Basically, the smaller the capacity, the more redundancy (extra bits) we need if we want the number of errors to stay below $d/2$.

As a summary, take a look at the article https://en.wikipedia.org/wiki/Mariner_9 and see what you can understand of the section "Error-Correction Codes achievements". Unfortunately the Spanish version of the page does not contain information about the encoding.

 Research project: find information about some type of linear error-correcting code used in practice. Write up what you find that is directly related to the things we have learned in this session (no need to write about things we would not understand at this level). Two suggestions, but follow your own interest: information on audio CDs is recorded in a way that makes it possible to recover it even from scratches; QR codes are very popular these days, and one can even put logos in the middle of them and still recover the information (a URL, for example). (There was a link near the beginning of this sheet.) Document all your sources. Do not invest too much time: a 4-hour class session corresponds to 6 hours of autonomous work.