Coding theory
Coding theory is a branch of mathematics and computer science concerned with transmitting data reliably across noisy channels: codes are designed so that many of the errors introduced by the channel can be detected or corrected. It also deals with the properties of codes, and thus with their fitness for a specific application.
There are two classes of codes.
- Source coding (Data compression)
- Channel coding (Forward error correction)
The first, source encoding, attempts to compress the data from a source in order to transmit it more efficiently. We see this practice every day on the Internet, where the common "Zip" data compression is used to reduce the network load and make files smaller. The second, channel encoding, adds extra data bits, commonly called redundancy bits, to make the transmission of data more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using channel coding. A typical music CD uses a powerful Reed-Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use powerful coding techniques to correct for the fading and noise of high-frequency radio transmission. Data modems, telephone transmissions, and of course NASA all employ powerful channel coding to get the bits through.
Source coding
The aim of source coding is to take the source data and make it smaller. An in-depth article can be found at Data compression.
Principle
The entropy of a source is a measure of its information content. Basically, source codes try to reduce the redundancy present in the source and represent the source with fewer bits that carry more information.
Data compression which explicitly tries to minimise the average length of messages according to a particular probability model is called entropy encoding.
Various techniques used by source coding schemes try to achieve the entropy limit of the source: C(x) ≥ H(x), where H(x) is the entropy of the source (in bits per symbol) and C(x) is the bitrate after compression. In particular, no source coding scheme can do better than the entropy of the source.
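As a minimal illustration, the entropy of a memoryless source can be computed directly from symbol frequencies; it is the floor below which C(x) cannot go. A Python sketch, using an arbitrary example message:

    import math
    from collections import Counter

    def entropy(symbols):
        """Shannon entropy H = -sum p*log2(p), in bits per symbol."""
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    message = "abracadabra"
    print(f"H = {entropy(message):.3f} bits/symbol")
    # A fixed-length code for the 5 distinct symbols would spend
    # ceil(log2(5)) = 3 bits each; no lossless code can average below H.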
Example
Facsimile transmission uses a simple run-length code.
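A scanned fax line is mostly long runs of white pixels, so storing (symbol, run length) pairs is much shorter than storing every pixel. A minimal Python sketch (the real fax standards additionally Huffman-code the run lengths, which this omits):

    def rle_encode(pixels):
        """Collapse each run of identical symbols into a (symbol, length) pair."""
        runs = []
        i = 0
        while i < len(pixels):
            j = i
            while j < len(pixels) and pixels[j] == pixels[i]:
                j += 1
            runs.append((pixels[i], j - i))
            i = j
        return runs

    def rle_decode(runs):
        return "".join(symbol * length for symbol, length in runs)

    line = "0000000001111000000"   # 0 = white, 1 = black
    encoded = rle_encode(line)
    print(encoded)                 # [('0', 9), ('1', 4), ('0', 6)]
    assert rle_decode(encoded) == line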
Channel encoding
The aim of channel encoding theory is to find codes which transmit quickly, contain many valid code words, and can correct or at least detect many errors. These aims conflict with one another, however, so different codes are optimal for different applications. The needed properties of a code mainly depend on the probability of errors happening during transmission. In a typical CD, the impairment is mainly dust or scratches, so codes are used in an interleaved manner and the data is spread out over the disk. Although not a very good code, a simple repeat code can serve as an understandable example. Suppose we take a block of data bits (representing sound) and send it three times. At the receiver we examine the three repetitions bit by bit and take a majority vote. The twist is that we don't merely send the bits in order: we interleave them. The block of data bits is first divided into four smaller blocks. Then we cycle through the blocks, sending one bit from the first, then one from the second, and so on. This is done three times to spread the data out over the surface of the disk. In the context of the simple repeat code this may not appear effective; however, there are more powerful codes known which are very effective at correcting the "burst" error of a scratch or a dust spot when this interleaving technique is used.
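To make the repeat-and-interleave idea concrete, here is a minimal Python sketch. It uses a simpler interleaver than the four-block scheme described above (all first copies are sent, then all second copies, then all third copies), but the effect is the same: a burst of consecutive channel errors damages at most one copy of any data bit, and the majority vote repairs it.

    def repeat3_encode(bits):
        """Repeat code: transmit each data bit three times."""
        return [b for b in bits for _ in range(3)]

    def interleave(bits):
        """Send all first copies, then all second copies, then all third
        copies, so a burst touches at most one copy of any data bit."""
        return bits[0::3] + bits[1::3] + bits[2::3]

    def deinterleave(bits):
        n = len(bits) // 3
        out = []
        for i in range(n):
            out += [bits[i], bits[n + i], bits[2 * n + i]]
        return out

    def repeat3_decode(bits):
        """Majority vote over each group of three received copies."""
        return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    sent = interleave(repeat3_encode(data))
    sent[2:5] = [1 - b for b in sent[2:5]]     # burst of 3 consecutive errors
    assert repeat3_decode(deinterleave(sent)) == data   # burst fully corrected

Without the interleaver, the same three-bit burst would wipe out all three copies of one data bit and the majority vote would fail.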
Other codes are more appropriate for different applications. Deep space communications are limited by the thermal noise of the receiver, which is more of a continuous nature than a bursty nature. Likewise, narrowband modems are limited by the noise present in the telephone network, which is also better modeled as a continuous disturbance. Cell phones are troubled by rapid fading: the high frequencies used can cause rapid fading of the signal even if the receiver is moved a few inches. Again, there is a class of channel codes designed to combat fading.
The term algebraic coding theory denotes the sub-field of coding theory where the properties of codes are expressed in algebraic terms and then further researched.
Algebraic coding theory is basically divided into two major types of codes:
- Linear block codes
- Convolutional codes
It mainly analyzes the following three properties of a code:
- code word length
- total number of valid code words
- the minimum Hamming distance between two valid code words
Linear block codes
Linear block codes have the property of linearity, i.e. the sum of any two codewords is also a code word, and they are applied to the source bits in blocks, hence the name linear block codes. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property; both linearity and the minimum distance are checked concretely in the sketch after the parameter list below.
Any linear block code is represented as (n,m,dmin) where
- n is the length of the codeword, in symbols,
- m is the number of source symbols that will be used for encoding at once,
- dmin is the minimum Hamming distance for the code
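As a small illustration, here is a Python sketch using the well-known (7,4,3) Hamming code, written with a systematic generator matrix. It verifies the two defining facts directly: the codewords are closed under bitwise addition, and for a linear code dmin equals the smallest weight of a nonzero codeword.

    import itertools

    # Generator matrix of the (7,4,3) Hamming code in systematic form [I | A].
    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]

    def encode(msg):
        """Multiply the 4-bit message by G over GF(2)."""
        return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

    codewords = {encode(m) for m in itertools.product([0, 1], repeat=4)}

    # Linearity: the bitwise XOR of any two codewords is again a codeword.
    assert all(tuple(a ^ b for a, b in zip(u, v)) in codewords
               for u in codewords for v in codewords)

    # For a linear code, dmin is the minimum weight of a nonzero codeword.
    print(min(sum(c) for c in codewords if any(c)))   # 3: corrects 1 error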
There are many types of linear block codes, such as
- Cyclic codes (Hamming code is a subset of cyclic codes)
- Repetition codes
- Parity codes
- Reed-Solomon codes
- BCH code
- Algebraic geometric codes
- Reed-Muller codes
- Perfect codes
Block codes are tied to the "penny packing" problem, which has received some attention over the years. In two dimensions it is easy to visualize: take a bunch of pennies flat on the table and push them together; the result is a hexagonal pattern like a bee's nest. But block codes rely on more dimensions, which cannot easily be visualized. The powerful Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above.
The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or, in 3 dimensions, how many marbles can be packed into a globe? Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space, and these codes are the so-called perfect codes. There are very few of these codes (the trivial repetition codes of odd length, the Hamming [n,k,3] codes, the binary Golay [23,12,7] code, and the ternary Golay [11,6,5] code).
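The "uses all the space" condition can be checked arithmetically: a binary code correcting t errors is perfect exactly when the Hamming spheres of radius t around its 2^k codewords account for all 2^n words. A small Python check (note that the extended [24,12,8] Golay code does not satisfy it):

    from math import comb

    def perfect(n, k, t):
        """True when radius-t Hamming spheres around the 2^k codewords
        exactly fill the space of all 2^n binary words."""
        sphere = sum(comb(n, i) for i in range(t + 1))
        return 2**k * sphere == 2**n

    print(perfect(7, 4, 1))     # True:  Hamming [7,4,3]
    print(perfect(23, 12, 3))   # True:  binary Golay [23,12,7]
    print(perfect(24, 12, 3))   # False: the extended Golay is not perfect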
Another item which is often overlooked is the number of neighbors a single codeword may have. Again, let's use pennies as an example. First we pack the pennies in a rectangular grid: each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagonal packing, each penny will have 6 near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly.
The result is that the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough that the total error probability actually suffers.
Convolutional codes
Convolutional codes are used in voiceband modems (V.32, V.17, V.34) and in GSM mobile phones, as well as satellite and military communication devices.
Here the idea is to make every codeword symbol the weighted sum of various input message symbols. This is like the convolution used in LTI systems to find the output of a system when you know the input and the impulse response.
The output of the convolutional encoder is thus the convolution of the input bits with the contents of the encoder's shift registers.
Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code. In many cases, however, they offer greater simplicity of implementation than a block code of equal power. The encoder is usually a simple circuit which has state memory and some feedback logic, normally XOR gates. The decoder can be implemented in software or firmware.
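As an illustrative sketch, here is a rate-1/2 encoder of constraint length 3 in Python, using the commonly cited generator taps 111 and 101 in binary (the taps are our choice of example, not something fixed by the text). Each input bit enters a shift register, and the two output bits are XOR parities over tapped register positions:

    def conv_encode(bits, g1=0b111, g2=0b101):
        """Rate-1/2 convolutional encoder with 2 memory bits.
        For each input bit, emit two parity (XOR) bits, each taken over
        the register positions selected by a generator tap mask."""
        state = 0                        # the two most recent past bits
        out = []
        for b in bits:
            reg = (b << 2) | state       # current bit plus stored bits
            out.append(bin(reg & g1).count("1") % 2)
            out.append(bin(reg & g2).count("1") % 2)
            state = reg >> 1             # shift, dropping the oldest bit
        return out

    print(conv_encode([1, 0, 1, 1]))     # 8 output bits for 4 input bits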
The Viterbi algorithm is the optimum algorithm used to decode convolutional codes. There are simplifications to reduce the computational load; they rely on searching only the most likely paths. Although not optimum, they have generally been found to give good results in low-noise environments. Modern microprocessors are capable of implementing these reduced-search algorithms at rates greater than 4000 codewords per second.
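Building on the encoder sketched above, a minimal hard-decision Viterbi decoder keeps, for each of the four register states, only the surviving path of lowest Hamming distance through the trellis. This sketch assumes the encoder starts in state 0 and is flushed with two tail bits:

    def viterbi_decode(received, g1=0b111, g2=0b101):
        """Hard-decision Viterbi decoding for conv_encode above: for each
        of the 4 register states, keep only the lowest-distance path."""
        INF = float("inf")
        metric = [0, INF, INF, INF]          # the encoder starts in state 0
        paths = [[], [], [], []]
        for i in range(0, len(received), 2):
            pair = received[i:i + 2]
            new_metric = [INF] * 4
            new_paths = [None] * 4
            for state in range(4):
                if metric[state] == INF:
                    continue
                for b in (0, 1):             # hypothesize the input bit
                    reg = (b << 2) | state
                    expect = [bin(reg & g1).count("1") % 2,
                              bin(reg & g2).count("1") % 2]
                    dist = metric[state] + sum(r != e for r, e in zip(pair, expect))
                    nxt = reg >> 1
                    if dist < new_metric[nxt]:
                        new_metric[nxt] = dist
                        new_paths[nxt] = paths[state] + [b]
            metric, paths = new_metric, new_paths
        return paths[metric.index(min(metric))]

    data = [1, 0, 1, 1, 0, 0]
    coded = conv_encode(data + [0, 0])       # two tail bits flush the registers
    coded[3] ^= 1                            # one bit corrupted on the channel
    assert viterbi_decode(coded)[:len(data)] == data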
Applications of coding theory
Another concern of coding theory is designing codes that help synchronization. A code may be designed so that a phase shift can be easily detected and corrected, and so that multiple signals can be sent on the same channel. There is an interesting class of codes that we see every day on our cell phones: the Code Division Multiple Access (CDMA) codes. The details are beyond the scope of this discussion, but briefly, each phone is assigned a code word from a special class (an algebraic field). When transmitting, the code word is used to scramble the bits representing the voice message. At the receiver, a descrambling process is done to decipher the message. The properties of this class of code words allow many users (with different codes) to use the same radio channel at the same time. The receiver, using the descrambling, will only "hear" other callers as low-level "noise".
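A toy Python illustration of the spreading idea, using two rows of a 4-chip Walsh-Hadamard matrix as the orthogonal code words (real CDMA systems use much longer and more elaborate code families):

    code_a = [1, 1, 1, 1]        # two orthogonal rows of a 4x4
    code_b = [1, -1, 1, -1]      # Walsh-Hadamard matrix, as +1/-1 chips

    def spread(bits, code):
        """Each data bit (+1 or -1) is multiplied chip by chip by the code."""
        return [b * c for b in bits for c in code]

    def despread(signal, code):
        """Correlate the summed signal with one user's code; the other
        user's contribution is orthogonal and cancels out."""
        n = len(code)
        return [1 if sum(s * c for s, c in zip(signal[i:i + n], code)) > 0 else -1
                for i in range(0, len(signal), n)]

    alice, bob = [1, -1, 1], [-1, -1, 1]
    channel = [x + y for x, y in zip(spread(alice, code_a), spread(bob, code_b))]
    assert despread(channel, code_a) == alice
    assert despread(channel, code_b) == bob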
Another popular class of codes are the Automatic Repeat reQuest (ARQ) codes. In this general class, the transmitter adds parity check bits to a longer message. The receiver checks the parity bits against the message and, if there is not a match, asks the transmitter to retransmit the message. Almost all wide area networks and protocols, except for the very simple ones, use ARQ retransmission. Common protocols include SDLC (IBM), TCP (Internet), X.25 (International) and many others. There is an extensive field of research on this topic because of the problem of matching a rejected packet against a new packet: is it a new one, or is it a retransmission? Typically, numbering schemes are used, although in some networks the packet may have another identifier, or it may be left to higher layers to request retransmission. TCP/IP is a good example of a protocol that supports both techniques. In a connected scenario, TCP/IP leaves the retransmission to the network, thus using ARQ coding. In a connectionless network, ARQ is not used; instead, it is up to the application to examine the packet and request retransmission as needed. This may go as high up as requiring the user to hit the "refresh" button on a browser. But even this is still in the class of ARQ research; the user just has to become involved.
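A stop-and-wait sketch in Python of the basic ARQ loop, with a single even-parity bit standing in for the much stronger checksums real protocols use (note that a lone parity bit only detects an odd number of flipped bits):

    import random

    def parity(bits):
        """Even-parity check bit over the data bits."""
        return sum(bits) % 2

    def noisy_channel(frame, p_error=0.2):
        """Flip each bit independently with probability p_error."""
        return [b ^ (random.random() < p_error) for b in frame]

    def send_with_arq(data):
        """Stop-and-wait ARQ: retransmit until the parity check passes."""
        attempts = 0
        while True:
            attempts += 1
            frame = noisy_channel(data + [parity(data)])
            if parity(frame[:-1]) == frame[-1]:   # accept; otherwise ask again
                return frame[:-1], attempts

    received, attempts = send_with_arq([1, 0, 1, 1, 0, 1, 0, 0])
    print(received, "after", attempts, "transmissions")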
See also
- Information theory
- Error-correcting code
- Timeline of information theory, data compression, and error correcting codes