Data Coding Theory/Information

Encoders/Decoders

Devices that take an input data sequence and output a transmission code are called encoders. Devices that receive a coded transmission and output the original data are called decoders. Encoders and decoders can vary widely depending on what particular specification they are designed to meet.

Codes are designed for various reasons. Some of the reasons for using a code are:

  • Error Detection and Correction. The ability to find and correct errors that have been introduced by a noisy communication medium.
  • Data Compression. Some coding techniques allow large amounts of data to be reduced in size for faster communication.
  • Spectrum Spreading. Some codes allow a signal to be spread across many frequencies for many benefits, including resistance to jamming and interference, and allowing multiple users to send data simultaneously over the same frequency range.
  • Encryption. Some codes can be used to obscure information from all except the intended recipient.

This book cannot possibly cover all the topics associated with data compression, encryption, or long-distance communications. For those topics, the reader is referred to various other Wikibooks and external resources. This book will discuss data coding, but will not necessarily be able to discuss all the uses or implementations of those codes.

Entropy

Let's say, hypothetically, that we have a particular code that contains bits that don't actually carry any information, but instead only serve to help out with the transmission. For instance, these could be start and stop bits to denote the beginning and end of an asynchronous transmission, CRC bits to provide error checking, or any number of other bits. These extra bits are then combined with the data bits to form a packet or a symbol. In a transmission system, it is important to know the average amount of information per symbol. We call this parameter entropy, and it is measured in "bits per symbol". Note that this is not the same kind of entropy used in physics to describe energy dissipation. We will label entropy with the variable H(S).

Let's say that our transmission alphabet contains K different symbols in the set S:

    S = {S0, S1, ..., S(K-1)}

These different symbols can represent anything: bits, dibits, long sequences, etc. On a given transmission scheme, these different symbols will have different probabilities, Pk, depending on how likely each symbol is to be transmitted. We can define the entropy of our communication system with the following equation:

    H(S) = Σk Pk log2(1/Pk)

From this equation, we can set some general bounds on H(S):

    0 ≤ H(S) ≤ log2(K)

We can see that H(S) reaches the lower bound of 0 in a completely deterministic system, i.e. a system where one symbol is transmitted with probability Pk = 1. H(S) reaches the upper bound of log2(K) when all the different symbols are equiprobable, i.e. Pk = 1/K for every k.
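As a small illustrative sketch (the function name and the example probability distributions are assumptions for this illustration, not part of the text above), the following Python computes H(S) for a list of symbol probabilities and shows it reaching both bounds:

    import math

    def entropy(probabilities):
        # Average information per symbol, H(S), in bits per symbol.
        # Symbols with probability 0 contribute nothing to the sum.
        return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

    # A deterministic source (one symbol always sent) sits at the lower bound, 0.
    print(entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0

    # Four equiprobable symbols sit at the upper bound, log2(4) = 2.
    print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0

    # A skewed source falls somewhere in between.
    print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75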

Code Length

We have an encoder that is receiving data input and outputting a particular code. This output code, by virtue of containing extra, non-data bits, is longer than the input data sequence. We can define the average output code length using the following equation:

    L̄ = Σk Pk Lk

Here Pk is the probability of outputting each transmission code, and Lk is the length of that code. For instance, let's say that we have an alphabet with 2 symbols:

S = {S0, S1}
S0 = 101010, P0 = .5, L0 = 6
S1 = 1100,   P1 = .5, L1 = 4

We can use the above equation to find the average code length: L̄ = (0.5)(6) + (0.5)(4) = 5 bits.
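As a sketch of the same calculation in Python (the code words, probabilities, and lengths come from the example above; the helper function itself is only an illustrative assumption):

    def average_code_length(symbols):
        # Average code length: the sum of Pk * Lk over all symbols.
        return sum(p * len(code) for code, p in symbols)

    # The two-symbol alphabet from the example above.
    S = [("101010", 0.5),   # S0: L0 = 6
         ("1100",   0.5)]   # S1: L1 = 4

    print(average_code_length(S))   # 5.0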

Efficiency

The efficiency of a coding system is the ratio of the average information per symbol to the average code length:

    η = H(S) / L̄

The maximum efficiency possible is 1, and it can theoretically be obtained using a prefix code (discussed below).
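As a final sketch (reusing the entropy helper from the earlier block; the function name and the example alphabets are assumptions for illustration), efficiency can be computed directly as H(S) divided by the average code length:

    import math

    def entropy(probabilities):
        return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

    def efficiency(probabilities, lengths):
        # Ratio of average information per symbol, H(S), to average code length.
        avg_length = sum(p * l for p, l in zip(probabilities, lengths))
        return entropy(probabilities) / avg_length

    # Two equiprobable symbols carry H(S) = 1 bit each. Coding them with the
    # one-bit code words "0" and "1" gives an average length of 1, so the
    # efficiency is 1. The 6-bit/4-bit alphabet from the previous section
    # only reaches 1/5.
    print(efficiency([0.5, 0.5], [1, 1]))   # 1.0
    print(efficiency([0.5, 0.5], [6, 4]))   # 0.2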