Chapter 3
Linear Block Codes

3.1 Basic Definitions

Consider a source that produces symbols from an alphabet $\mathbb{A}$ having $q$ symbols, where $\mathbb{A}$ forms a field. We refer to a tuple $(c_0, c_1, \ldots, c_{n-1})$ with $n$ elements as an $n$-vector or an $n$-tuple.

For a block code to be useful for error correction purposes, there should be a one-to-one correspondence between a message $\mathbf{m}$ and its codeword $\mathbf{c}$. However, for a given code $\mathcal{C}$, there may be more than one possible way of mapping messages to codewords.

A block code can be represented as an exhaustive list, but for large $k$ this would be prohibitively complex to store and decode. The complexity can be reduced by imposing some sort of mathematical structure on the code. The most common requirement is linearity.

In some literature, an $(n, k)$ linear code is denoted using square brackets, $[n, k]$. A linear code may also be designated as $(n, k, d_{\min})$, where $d_{\min}$ is the minimum distance of the code (as discussed below).

For a linear code, the sum of any two codewords is also a codeword. More generally, any linear combination of codewords is a codeword.

Recall from Definition 1.11 that the minimum distance is the smallest Hamming distance between any two codewords of the code.

An $(n, k)$ code with minimum distance $d_{\min}$ is sometimes denoted as an $(n, k, d_{\min})$ code.

As described in Section 1.8.1, the random error correcting capability of a code with minimum distance $d_{\min}$ is $t = \lfloor (d_{\min} - 1)/2 \rfloor$.

3.2 The Generator Matrix Description of Linear Block Codes

Since a linear block code $\mathcal{C}$ is a $k$-dimensional vector space, there exist $k$ linearly independent vectors which we designate as $\mathbf{g}_0, \mathbf{g}_1, \ldots, \mathbf{g}_{k-1}$ such that every codeword $\mathbf{c}$ in $\mathcal{C}$ can be represented as a linear combination of these vectors,

$$\mathbf{c} = m_0 \mathbf{g}_0 + m_1 \mathbf{g}_1 + \cdots + m_{k-1} \mathbf{g}_{k-1}, \tag{3.1}$$

where $m_i \in GF(q)$. (For binary codes, all arithmetic in (3.1) is done modulo 2; for codes over $GF(q)$, the arithmetic is done in $GF(q)$.) Thinking of the $\mathbf{g}_i$ as row vectors¹ and stacking up, we form the $k \times n$ matrix $G$,

$$G = \begin{bmatrix} \mathbf{g}_0 \\ \mathbf{g}_1 \\ \vdots \\ \mathbf{g}_{k-1} \end{bmatrix}.$$

Let

$$\mathbf{m} = (m_0, m_1, \ldots, m_{k-1}).$$

Then (3.1) can be written as

$$\mathbf{c} = \mathbf{m} G, \tag{3.2}$$

and every codeword $\mathbf{c} \in \mathcal{C}$ has such a representation for some vector $\mathbf{m}$. Since the rows of $G$ generate (or span) the $(n, k)$ linear code $\mathcal{C}$, $G$ is called a generator matrix for $\mathcal{C}$. Equation (3.2) can be thought of as an encoding operation for the code $\mathcal{C}$. Representing the code thus requires storing only the $k$ vectors of length $n$ in $G$ (rather than the $q^k$ codewords that would have to be stored for a nonlinear code).
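To make the encoding operation (3.2) concrete, here is a minimal sketch in Python (assuming numpy is available); the particular $3 \times 5$ generator is hypothetical, chosen only for illustration.

```python
# Minimal sketch: encoding by generator-matrix multiplication over GF(2).
# The generator G here is a hypothetical (5,3) example, not a code from the text.
import numpy as np

G = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]])   # k = 3, n = 5

def encode(m, G):
    """Encode the length-k message m as c = m G, with arithmetic mod 2."""
    return np.mod(m @ G, 2)

m = np.array([1, 0, 1])
print(encode(m, G))   # [1 0 1 1 0]
```

The code is the row space of $G$: the $2^3 = 8$ possible messages produce the 8 codewords.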

Note that the representation of the code provided by $G$ is not unique. From a given generator $G$, another generator $G'$ can be obtained by performing row operations (nonzero linear combinations of the rows). Then the encoding operation defined by $\mathbf{c} = \mathbf{m} G'$ maps the message $\mathbf{m}$ to a codeword in $\mathcal{C}$, but it is not necessarily the same codeword that would be obtained using the generator $G$.

It should be emphasized that being systematic is a property of the encoder and not a property of the code. For a linear block code, the encoding operation represented by $G$ is systematic if the $k \times k$ identity matrix can be identified among the columns of $G$. Neither of the generators of Example 3.5 is systematic.

Frequently, a systematic generator is written in the form

$$G = [\, P \;\; I_k \,], \tag{3.4}$$

where $I_k$ is the $k \times k$ identity matrix and $P$ is a $k \times (n-k)$ matrix which generates parity symbols. The encoding operation is

$$\mathbf{c} = \mathbf{m} G = [\, \mathbf{m} P \;\; \mathbf{m} \,].$$

The codeword is divided into two parts: the part $\mathbf{m}$ consists of the message symbols, and the part $\mathbf{p} = \mathbf{m} P$ consists of the parity check symbols.

Performing elementary row operations (replacing a row with a linear combination of rows that includes the row itself) does not change the row span, so the same code is produced. If two columns of a generator are interchanged, then the corresponding positions of the code are interchanged, but the distance structure of the code is preserved.

Let $G$ and $G'$ be generator matrices of equivalent codes. Then $G$ and $G'$ are related by the following operations:

  1. Column permutations,
  2. Elementary row operations.

Given an arbitrary generator $G$, it is possible to put it into the form (3.4) by performing Gaussian elimination with pivoting (column swaps, when they are needed, replace the code by an equivalent one).
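The following sketch shows one way such a reduction might be implemented over $GF(2)$. It assumes $G$ has full row rank; when no pivot is available in a needed column, a column swap is performed, so the result may generate an equivalent code rather than the original one.

```python
# Sketch: Gaussian elimination over GF(2) to put a generator into the
# systematic form [P | I_k] of (3.4). Assumes G has full row rank k.
import numpy as np

def systematic_form(G):
    G = G.copy() % 2
    k, n = G.shape
    for i in range(k):
        col = n - k + i                      # identity goes in the last k columns
        pivot = next((r for r in range(i, k) if G[r, col]), None)
        if pivot is None:                    # no pivot here: swap in a usable column
            c = next(c for c in range(n) if any(G[r, c] for r in range(i, k)))
            G[:, [col, c]] = G[:, [c, col]]  # the result generates an equivalent code
            pivot = next(r for r in range(i, k) if G[r, col])
        G[[i, pivot]] = G[[pivot, i]]        # bring the pivot row into place
        for r in range(k):                   # clear the rest of the column
            if r != i and G[r, col]:
                G[r] = (G[r] + G[i]) % 2
    return G

G = np.array([[1, 1, 0, 1, 0],
              [1, 0, 1, 0, 1],
              [0, 1, 1, 1, 0]])
print(systematic_form(G))   # rows of [P | I_3]
```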

3.2.1 Rudimentary Implementation

Implementing encoding operations for binary codes is straightforward, since the multiplication operation corresponds to the and operation and the addition operation corresponds to the exclusive-or operation. For software implementations, encoding is accomplished by straightforward matrix/vector multiplication. This can be greatly accelerated for binary codes by packing several bits into a single word (e.g., 32 bits in an unsigned int of 4 bytes). The products are then formed using the bitwise and operation of the language (e.g., the & operator of C), and sums of packed words using the bitwise exclusive-or (the ^ operator of C). Summing the bits within a word (to complete an inner product) must be accomplished by looping through the bits, or by precomputing bit sums and storing them in a table, where they can be immediately looked up.
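As a sketch of the word-packing idea (in Python, with an unbounded integer standing in for a packed machine word), the product $\mathbf{m} G$ can be formed by exclusive-or accumulation of the packed rows of $G$ selected by the nonzero message bits; the rows below are the same hypothetical generator used earlier.

```python
# Sketch: bit-packed encoding. Each row of G is packed into one integer;
# c = m G is the XOR of the rows selected by the nonzero message bits,
# so one word-wide XOR replaces n single-bit additions.
rows = [0b10011,   # packed rows of the hypothetical (5,3) generator above
        0b01010,
        0b00101]

def encode_packed(m_bits, rows):
    acc = 0
    for bit, row in zip(m_bits, rows):
        if bit:            # multiply by the message bit: include or skip the row
            acc ^= row     # add the selected row with a single XOR
    return acc

print(bin(encode_packed([1, 0, 1], rows)))   # 0b10110
```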

3.3 The Parity Check Matrix and Dual Codes

Since a linear code $\mathcal{C}$ is a $k$-dimensional vector subspace of $GF(q)^n$, by Theorem 2.64 there must be a dual space $\mathcal{C}^\perp$ to $\mathcal{C}$ of dimension $n - k$.

As a vector space, $\mathcal{C}^\perp$ has a basis which we denote by $\{\mathbf{h}_0, \mathbf{h}_1, \ldots, \mathbf{h}_{n-k-1}\}$. We form a matrix $H$ using these basis vectors as rows:

$$H = \begin{bmatrix} \mathbf{h}_0 \\ \mathbf{h}_1 \\ \vdots \\ \mathbf{h}_{n-k-1} \end{bmatrix}.$$

This matrix is known as the parity check matrix for the code $\mathcal{C}$. The generator matrix and the parity check matrix for a code satisfy

$$G H^T = 0.$$

The parity check matrix has the following important property (Theorem 3.10): a vector $\mathbf{v}$ is a codeword of $\mathcal{C}$ if and only if $\mathbf{v} H^T = \mathbf{0}$.

(Sometimes additional linearly dependent rows are included in $H$, but the same result still holds.)

When $G$ is in systematic form (3.4), a parity check matrix is readily determined:

$$H = [\, I_{n-k} \;\; -P^T \,]. \tag{3.6}$$

(For the field $GF(2)$, $-P^T = P^T$, since 1 is its own additive inverse.) Frequently, a parity check matrix for a code is obtained by finding a generator matrix in systematic form and employing (3.6).
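A small sketch of this relationship, using one standard choice of the parity matrix $P$ for a $(7, 4)$ Hamming code (introduced in Section 3.6):

```python
# Sketch: form H = [I_{n-k} P^T] from a systematic G = [P I_k] as in (3.6)
# (over GF(2) the minus sign disappears) and confirm G H^T = 0.
import numpy as np

k, n = 4, 7
P = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]])                       # parity part, one standard choice
G = np.hstack([P, np.eye(k, dtype=int)])        # G = [P I_k]
H = np.hstack([np.eye(n - k, dtype=int), P.T])  # H = [I_{n-k} P^T]
assert not np.mod(G @ H.T, 2).any()             # G H^T = 0
```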

The condition $\mathbf{c} H^T = \mathbf{0}$ imposes linear constraints among the bits of $\mathbf{c}$ called the parity check equations.

A parity check matrix for a code (whether systematic or not) provides information about the minimum distance of the code: by Theorem 3.13, $d_{\min}$ is the smallest number of columns of $H$ that sum to zero, that is, the smallest number of linearly dependent columns.

3.3.1 Some Simple Bounds on Block Codes

Theorem 3.13 leads to a relationship between $n$, $k$, and $d_{\min}$ known as the Singleton bound:

$$d_{\min} \le n - k + 1.$$

Note: Although this bound is proved here for linear codes, it is also true for nonlinear codes. (See [292].)

A code for which $d_{\min} = n - k + 1$ is called a maximum distance separable (MDS) code.

Thinking geometrically, around each code point is a cloud of points corresponding to non-codewords. (See Figure 1.17.) For a $q$-ary code, there are $\binom{n}{1}(q-1)$ vectors at a Hamming distance 1 away from a codeword, $\binom{n}{2}(q-1)^2$ vectors at a Hamming distance 2 away from a codeword and, in general, $\binom{n}{j}(q-1)^j$ vectors at a Hamming distance $j$ from a codeword.

The vectors at Hamming distance up to $t$ away from a codeword form a "sphere" called the Hamming sphere of radius $t$. The number of vectors in a Hamming sphere up to radius $t$ for a code of length $n$ over an alphabet of $q$ symbols is denoted $V_q(n, t)$, where

$$V_q(n, t) = \sum_{j=0}^{t} \binom{n}{j} (q-1)^j. \tag{3.9}$$

The bounded distance decoding sphere of a codeword is the Hamming sphere of radius $t = \lfloor (d_{\min} - 1)/2 \rfloor$ around the codeword. Equivalently, a code whose random error correction capability is $t$ must have a minimum distance between codewords satisfying $d_{\min} \ge 2t + 1$.

The redundancy of a code is essentially the number of parity symbols in a codeword. More precisely we have

$$r = n - \log_q M,$$

where $M$ is the number of codewords. For a linear code we have $M = q^k$, so $r = n - k$.

The Hamming bound states that a $t$-error correcting $q$-ary code of length $n$ with $M$ codewords must satisfy $M\, V_q(n, t) \le q^n$, or equivalently, $r \ge \log_q V_q(n, t)$. A code satisfying the Hamming bound with equality is said to be a perfect code. (A short computation after the following list verifies the bound for two perfect codes.) Actually, being perfect does not mean that a code is the best possible code; it is simply a designation regarding how points fall in the Hamming spheres. The set of perfect codes is actually quite limited. It has been proved (see [292]) that the entire set of perfect codes is:

  1. The set of all $n$-tuples, with minimum distance 1 and $t = 0$.
  2. Odd-length binary repetition codes.
  3. Binary and nonbinary Hamming codes (linear) or other nonlinear codes with equivalent parameters.
  4. The binary $(23, 12, 7)$ Golay code $\mathcal{G}_{23}$.
  5. The ternary (i.e., over $GF(3)$) $(11, 6, 5)$ Golay code $\mathcal{G}_{11}$. These codes are discussed in Chapter 8.
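As promised, a short computation (a Python sketch) verifies the Hamming bound with equality for two of these codes:

```python
# Sketch: Hamming-bound check. V(n, t) is the Hamming sphere size (3.9) for
# q = 2; a code is perfect when 2^k * V(n, t) = 2^n exactly.
from math import comb

def V(n, t, q=2):
    return sum(comb(n, j) * (q - 1)**j for j in range(t + 1))

print(2**4  * V(7, 1)  == 2**7)    # True: the (7,4,3) Hamming code is perfect
print(2**12 * V(23, 3) == 2**23)   # True: the (23,12,7) binary Golay code is perfect
```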

3.4 Error Detection and Correction Over Hard‐Output Channels

By Theorem 3.10, $\mathbf{r} H^T = \mathbf{0}$ if and only if $\mathbf{r}$ is a codeword of $\mathcal{C}$. In medical terminology, a syndrome is a pattern of symptoms that aids in diagnosis; here the syndrome $\mathbf{s} = \mathbf{r} H^T$ aids in diagnosing whether $\mathbf{r}$ is a codeword or has been corrupted by noise. As we will see, it also aids in determining what the error is.

3.4.1 Error Detection

The syndrome can be used as an error detection scheme. Suppose that a codeword $\mathbf{c}$ in a binary linear block code $\mathcal{C}$ over $GF(2)$ is transmitted through a hard channel (e.g., a binary code over a BSC) and that the $n$-vector $\mathbf{r}$ is received. We can write

$$\mathbf{r} = \mathbf{c} + \mathbf{e},$$

where the arithmetic is done in $GF(2)$, and where $\mathbf{e}$ is the error vector, being 0 in precisely the locations where the channel does not introduce errors. The received vector $\mathbf{r}$ could be any of the $2^n$ vectors in $GF(2)^n$, since any error pattern is possible. Let $H$ be a parity check matrix for $\mathcal{C}$. Then the syndrome

$$\mathbf{s} = \mathbf{r} H^T = (\mathbf{c} + \mathbf{e}) H^T = \mathbf{c} H^T + \mathbf{e} H^T = \mathbf{e} H^T.$$

From Theorem 3.10, $\mathbf{s} = \mathbf{0}$ if $\mathbf{r}$ is a codeword. However, if $\mathbf{s} \neq \mathbf{0}$, then an error condition has been detected: we do not know what the error is, but we do know that an error has occurred. (Some additional perspective to this problem is provided in Box 3.1.)
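A minimal sketch of syndrome-based error detection, using a $(7, 4)$ Hamming parity check matrix (developed in Section 3.6):

```python
# Sketch: detect errors by testing whether the syndrome r H^T is zero.
import numpy as np

H = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 1, 1, 0, 1]])

def syndrome(r, H):
    return np.mod(r @ H.T, 2)

c = np.array([0, 1, 1, 1, 0, 0, 0])       # a codeword, so c H^T = 0
e = np.array([0, 0, 0, 0, 1, 0, 0])       # a single-bit error
print(syndrome(c, H))                     # [0 0 0]: accepted
print(syndrome((c + e) % 2, H))           # [1 0 1]: error detected
```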

3.4.2 Error Correction: The Standard Array

Let us now consider one method of decoding linear block codes transmitted through a hard channel using maximum likelihood (ML) decoding. As discussed in Section 1.8.1, ML decoding of a vector $\mathbf{r}$ consists of choosing the codeword $\hat{\mathbf{c}}$ that is closest to $\mathbf{r}$ in Hamming distance. That is,

$$\hat{\mathbf{c}} = \arg\min_{\mathbf{c} \in \mathcal{C}} d_H(\mathbf{r}, \mathbf{c}).$$

Let the set of codewords in the code be $\{\mathbf{c}_0, \mathbf{c}_1, \ldots, \mathbf{c}_{2^k-1}\}$, where $2^k$ is the number of codewords. Let us take $\mathbf{c}_0 = \mathbf{0}$, the all-zero codeword. Let $V_i$ denote the set of $n$-vectors which are closer to the codeword $\mathbf{c}_i$ than to any other codeword. (Vectors which are equidistant to more than one codeword are assigned to one of the candidate sets at random.) The sets $V_0, V_1, \ldots, V_{2^k-1}$ partition the space of $n$-vectors into $2^k$ disjoint subsets. If $\mathbf{r}$ falls in the set $V_i$, then, being closer to $\mathbf{c}_i$ than to any other codeword, $\mathbf{r}$ is decoded as $\mathbf{c}_i$. So, decoding can be accomplished if the sets $V_i$ can be found.

The standard array is a representation of the partition $\{V_i\}$. It is a two-dimensional array in which the columns of the array represent the $V_i$. The standard array is built as follows. First, every codeword $\mathbf{c}_i$ belongs in its own set $V_i$. Writing down the set of codewords thus gives the first row of the array. From the remaining vectors in $GF(2)^n$, find the vector $\mathbf{e}_1$ of smallest weight. This must lie in the set $V_0$, since it is closest to the codeword $\mathbf{c}_0 = \mathbf{0}$. But

$$d_H(\mathbf{e}_1 + \mathbf{c}_i, \mathbf{c}_i) = d_H(\mathbf{e}_1, \mathbf{0})$$

for each $i$, so the vector $\mathbf{e}_1 + \mathbf{c}_i$ must also lie in $V_i$ for each $i$. So $\mathbf{e}_1 + \mathbf{c}_i$ is placed into each $V_i$. The vectors $\mathbf{e}_1 + \mathbf{c}_i$ are included in their respective columns of the standard array to form the second row of the standard array. The procedure continues, selecting an unused vector of minimal weight and adding it to each codeword to form the next row of the standard array, until all $2^n$ possible vectors have been used in the standard array. In summary (a code sketch implementing the construction follows the list):

  1. Write down all the codewords of the code $\mathcal{C}$.
  2. Select from the remaining unused vectors of $GF(2)^n$ one of minimal weight, $\mathbf{e}$. Write $\mathbf{e}$ in the column under the all-zero codeword, then add $\mathbf{e}$ to each codeword in turn, writing the sum in the column under the corresponding codeword.
  3. Repeat step 2 until all $2^n$ vectors in $GF(2)^n$ have been placed in the standard array.
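Here is the promised sketch of the construction, applied to the hypothetical $(5, 3)$ code spanned by the example generator of Section 3.2. Ties among minimal-weight vectors are broken arbitrarily, so the result is one of several valid standard arrays.

```python
# Sketch: build a standard array by repeatedly taking a minimum-weight
# unused vector as the next coset leader and adding it to every codeword.
from itertools import product

def standard_array(codewords, n):
    cw = [tuple(c) for c in codewords]      # first row: the code itself
    used = set(cw)
    rows = [cw]
    candidates = sorted((v for v in product((0, 1), repeat=n) if v not in used),
                        key=lambda v: sum(v))
    for e in candidates:
        if e in used:                       # already placed in an earlier coset
            continue
        row = [tuple((a + b) % 2 for a, b in zip(e, c)) for c in cw]
        rows.append(row)
        used.update(row)
    return rows

code = [(0,0,0,0,0), (1,0,0,1,1), (0,1,0,1,0), (0,0,1,0,1),
        (1,1,0,0,1), (1,0,1,1,0), (0,1,1,1,1), (1,1,1,0,0)]
for row in standard_array(code, 5):         # 2^(5-3) = 4 rows
    print(row)
```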

Table 3.1 The Standard Array for a (7, 3) Code

Row 1 0000000 0111100 1011010 1100110 1101001 1010101 0110011 0001111
Row 2 1000000 1111100 0011010 0100110 0101001 0010101 1110011 1001111
Row 3 0100000 0011100 1111010 1000110 1001001 1110101 0010011 0101111
Row 4 0010000 0101100 1001010 1110110 1111001 1000101 0100011 0011111
Row 5 0001000 0110100 1010010 1101110 1100001 1011101 0111011 0000111
Row 6 0000100 0111000 1011110 1100010 1101101 1010001 0110111 0001011
Row 7 0000010 0111110 1011000 1100100 1101011 1010111 0110001 0001101
Row 8 0000001 0111101 1011011 1100111 1101000 1010100 0110010 0001110
Row 9 1100000 1011100 0111010 0000110 0001001 0110101 1010011 1101111
Row 10 1010000 1101100 0001010 0110110 0111001 0000101 1100011 1011111
Row 11 0110000 0001100 1101010 1010110 1011001 1100101 0000011 0111111
Row 12 1001000 1110100 0010010 0101110 0100001 0011101 1111011 1000111
Row 13 0101000 0010100 1110010 1001110 1000001 1111101 0011011 0100111
Row 14 0011000 0100100 1000010 1111110 1110001 1001101 0101011 0010111
Row 15 1000100 1111000 0011110 0100010 0101101 0010001 1110111 1001011
Row 16 1110000 1001100 0101010 0010110 0011001 0100101 1000011 1111111

The rows of the standard array are grouped by the weight of their coset leaders: rows 2 through 8 have weight-1 leaders, rows 9 through 15 have weight-2 leaders, and row 16 has a weight-3 leader.

We make the following observations:

  1. There are $2^k$ codewords (columns) and $2^n$ possible vectors, so there are $2^n / 2^k = 2^{n-k}$ rows in the standard array. We observe, therefore, that: an $(n, k)$ code is capable of correcting $2^{n-k}$ different error patterns.
  2. The difference (or sum, over $GF(2)$) of any two vectors in the same row of the standard array is a code vector. In a row, the vectors are $\mathbf{e} + \mathbf{c}_i$ and $\mathbf{e} + \mathbf{c}_j$. Then
    $$(\mathbf{e} + \mathbf{c}_i) + (\mathbf{e} + \mathbf{c}_j) = \mathbf{c}_i + \mathbf{c}_j,$$
    which is a codeword, since linear codes form a vector subspace.
  3. No two vectors in the same row of a standard array are identical. Because otherwise we have
    $$\mathbf{e} + \mathbf{c}_i = \mathbf{e} + \mathbf{c}_j \quad (i \neq j),$$
    which means $\mathbf{c}_i = \mathbf{c}_j$, which is impossible.
  4. Every vector appears exactly once in the standard array. We know every vector must appear at least once, by the construction. If a vector appears in both the $l$th row and the $m$th row, we must have
    $$\mathbf{e}_l + \mathbf{c}_i = \mathbf{e}_m + \mathbf{c}_j$$
    for some $i$ and $j$. Let us take $l < m$. We have
    $$\mathbf{e}_m = \mathbf{e}_l + \mathbf{c}_i + \mathbf{c}_j = \mathbf{e}_l + \mathbf{c}_s$$
    for some $s$. This means that $\mathbf{e}_m$ is on the $l$th row of the array, which is a contradiction (it would not have been available as an unused vector when row $m$ was formed).

The rows of the standard array are called cosets. Each row is of the form

$$\mathbf{e} + \mathcal{C} = \{ \mathbf{e} + \mathbf{c} : \mathbf{c} \in \mathcal{C} \}.$$

That is, the rows of the standard array are translations of $\mathcal{C}$. These are the same cosets we met in Section 2.2.3 in conjunction with groups.

The vectors in the first column of the standard array are called the coset leaders. They represent the error patterns that can be corrected by the code under this decoding strategy. The decoder of Example 3.19 is capable of correcting all errors of weight 1, seven different error patterns of weight 2, and one error pattern of weight 3.

To decode with the standard array, we first locate the received vector $\mathbf{r}$ in the standard array. Then identify

$$\mathbf{r} = \mathbf{e} + \mathbf{c}$$

for a vector $\mathbf{e}$ which is a coset leader (in the left column) and a codeword $\mathbf{c}$ (on the top row). Since we designed the standard array with the smallest error patterns as coset leaders, the codeword so identified in the standard array is the ML decision. The coset leaders are called the correctable error patterns.

As this decoding example shows, the standard array decoder may have coset leaders with weight higher than the random-error-correcting capability $t$ of the code.

This observation motivates the following definition.

If the full standard array is used as the decoding mechanism, then complete decoding is achieved: every received vector is mapped to some codeword. On the other hand, if the rows of the standard array are filled out only so that all error patterns of weight up to $t$ appear in the table, all other rows are left out, and received vectors falling outside the table are declared failures, then a bounded distance decoder is obtained.

A perfect code can be understood in terms of the standard array: it is one for which there are no "leftover" rows: for binary codes, all error patterns of weight $t$ and all lighter error patterns appear as coset leaders in the table, with no "leftovers." (For $q$-ary codes, the number of error patterns of weight $j$ is $\binom{n}{j}(q-1)^j$.)

What makes it “perfect” then, is that the bounded distance decoder is also the ML decoder.

The standard array can, in principle, be used to decode any linear block code, but suffers from a major problem: the memory required to represent the standard array quickly becomes excessive, and decoding requires searching the entire table to find a match for a received vector $\mathbf{r}$. For example, a binary code of length $n = 256$ — not a particularly long code by modern standards — would require all $2^{256}$ vectors of length 256 bits to be stored, and every decoding operation would require on average searching through half of the table.

A first step in reducing the storage and search complexity (which doesn't go far enough) is to use syndrome decoding. Let $\mathbf{e} + \mathbf{c}$ be a vector in the standard array, where $\mathbf{e}$ is the coset leader. The syndrome for this vector is $\mathbf{s} = (\mathbf{e} + \mathbf{c}) H^T = \mathbf{e} H^T$. Furthermore, every vector in the coset has the same syndrome, $\mathbf{e} H^T$. We therefore only need to store syndromes and their associated error patterns (the coset leaders). This table is called the syndrome decoding table. It has $2^{n-k}$ rows but only two columns, so it is smaller than the entire standard array, but it is still impractically large in many cases.

With the syndrome decoding table, decoding is done as follows:

  1. Compute the syndrome, $\mathbf{s} = \mathbf{r} H^T$.
  2. In the syndrome decoding table, look up the error pattern $\mathbf{e}$ corresponding to $\mathbf{s}$.
  3. Then $\hat{\mathbf{c}} = \mathbf{r} + \mathbf{e}$.
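A sketch of table-based syndrome decoding for the $(7, 4)$ Hamming code, whose coset leaders are exactly the zero pattern and the seven single-bit error patterns:

```python
# Sketch: syndrome decoding table for the (7,4) Hamming code. The syndrome
# of the single-bit error e_i is the i-th column of H, which fills all
# 2^(n-k) = 8 table entries.
import numpy as np

H = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 1, 1, 0, 1]])
n = H.shape[1]

table = {(0, 0, 0): np.zeros(n, dtype=int)}
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    table[tuple(H[:, i])] = e

def decode(r):
    s = tuple(np.mod(r @ H.T, 2))     # step 1: compute the syndrome
    return np.mod(r + table[s], 2)    # steps 2-3: look up e, add it back

r = np.array([0, 1, 1, 1, 1, 0, 0])   # codeword 0111000 with bit 4 flipped
print(decode(r))                      # [0 1 1 1 0 0 0]
```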

Despite the significant reduction compared to the standard array, the memory requirements for the syndrome decoding table are still very high. It is still infeasible to use this technique for very long codes. Additional algebraic structure must be imposed on the code to enable decoding long codes.

3.5 Weight Distributions of Codes and Their Duals

The weight distribution of a code plays a significant role in calculating probabilities of error.

Let $A_i$ denote the number of codewords of weight $i$ in the code; the sequence $A_0, A_1, \ldots, A_n$ is called the weight distribution. The weight enumerator

$$A(z) = \sum_{i=0}^{n} A_i z^i \tag{3.11}$$

is (essentially) the $z$-transform of the weight distribution sequence.

There is a relationship, known as the MacWilliams identities, between the weight enumerator of a linear code and its dual. This relationship is of interest because for many codes it is possible to directly characterize the weight distribution of the dual code, from which the weight distribution of the code of interest is obtained by the MacWilliams identity.

For binary codes, with $A(z)$ the weight enumerator of an $(n, k)$ code $\mathcal{C}$ and $B(z)$ the weight enumerator of its dual $\mathcal{C}^\perp$, the MacWilliams identities are

$$B(z) = 2^{-k} (1+z)^n A\!\left( \frac{1-z}{1+z} \right), \tag{3.12}$$

$$A(z) = 2^{-(n-k)} (1+z)^n B\!\left( \frac{1-z}{1+z} \right). \tag{3.13}$$

The proof of this theorem reveals some techniques that are very useful in coding. We give the proof for codes over $GF(2)$, but it is straightforward to extend to larger fields (once you are familiar with them). The proof relies on the Hadamard transform. For a function $f$ defined on $GF(2)^n$, the Hadamard transform $\hat{f}$ of $f$ is

$$\hat{f}(\mathbf{u}) = \sum_{\mathbf{v} \in GF(2)^n} (-1)^{\mathbf{u} \cdot \mathbf{v}} f(\mathbf{v}),$$

where the sum is taken over all $2^n$ $n$-tuples $\mathbf{v} = (v_0, v_1, \ldots, v_{n-1})$, where each $v_i \in \{0, 1\}$. The inverse Hadamard transform is

$$f(\mathbf{v}) = \frac{1}{2^n} \sum_{\mathbf{u} \in GF(2)^n} (-1)^{\mathbf{u} \cdot \mathbf{v}} \hat{f}(\mathbf{u}).$$
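Although the proof is analytical, the identity is easy to check numerically. The following sketch verifies (3.13) for the $(7, 4)$ Hamming code, whose dual (the $(7, 3)$ simplex code of Section 3.6) has $B(z) = 1 + 7z^4$:

```python
# Sketch: numerical check of the MacWilliams identity (3.13) for the
# (7,4) Hamming code, comparing A(z) with 2^-(n-k) (1+z)^n B((1-z)/(1+z)).
import numpy as np
from itertools import product

P = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])

A = [0] * 8                                  # weight distribution by enumeration
for m in product((0, 1), repeat=4):
    w = int(np.mod(np.array(m) @ G, 2).sum())
    A[w] += 1
print(A)                                     # [1, 0, 0, 7, 7, 0, 0, 1]

z = 0.3                                      # any test point works
B = lambda z: 1 + 7 * z**4                   # dual (simplex) weight enumerator
lhs = sum(a * z**i for i, a in enumerate(A))
rhs = 2**-3 * (1 + z)**7 * B((1 - z) / (1 + z))
print(abs(lhs - rhs) < 1e-12)                # True
```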

3.6 Binary Hamming Codes and Their Duals

We now formally introduce a family of binary linear block codes, the Hamming codes, and their duals. For an integer $m \ge 2$, a binary Hamming code has length $n = 2^m - 1$ and dimension $k = n - m$; its parity check matrix $H$ has as its columns all $2^m - 1$ distinct nonzero binary $m$-tuples.

For example, when $m = 3$, we get

$$H = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix}$$

as the parity check matrix for a $(7, 4)$ Hamming code. However, it is usually convenient to reorder the columns — resulting in an equivalent code — so that the identity matrix which is interspersed throughout the columns of $H$ appears in the first $n - k$ columns. We therefore write

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 & 0 & 1 \end{bmatrix} = [\, I_3 \;\; P^T \,].$$

It is clear from the form of the parity check matrix that for any $m$ there exist three columns which add to zero; for example,

$$\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},$$

so by Theorem 3.13 the minimum distance is 3; Hamming codes are capable of correcting 1 bit error in the block, or detecting up to 2 bit errors.
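A sketch of this construction for general $m$, listing the numbers $1, \ldots, 2^m - 1$ in binary as the columns of $H$:

```python
# Sketch: binary Hamming parity check matrix for any m; column j holds the
# m-bit binary representation of j (most significant bit in the top row).
import numpy as np

def hamming_H(m):
    n = 2**m - 1
    return np.array([[(j >> (m - 1 - i)) & 1 for j in range(1, n + 1)]
                     for i in range(m)])

print(hamming_H(3))
# [[0 0 0 1 1 1 1]
#  [0 1 1 0 0 1 1]
#  [1 0 1 0 1 0 1]]
```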

An algebraic decoding procedure for Hamming codes was described in Section 1.9.1.

The dual to a $(2^m - 1,\ 2^m - m - 1)$ Hamming code is a $(2^m - 1,\ m)$ code called a simplex code or a maximal-length feedback shift register code.

In general, all of the nonzero codewords of the $(2^m - 1, m)$ simplex code have weight $2^{m-1}$ (see Exercise 3.12) and every pair of codewords is at a distance $2^{m-1}$ apart (which is why it is called a simplex). For example, for the $m = 2$ case, the codewords $\{000, 110, 101, 011\}$ form a tetrahedron. Thus, the weight enumerator of the dual code is

$$B(z) = 1 + (2^m - 1) z^{2^{m-1}}. \tag{3.15}$$

From the weight enumerator of the dual, we find using (3.13) that the weight distribution of the Hamming code of length $n = 2^m - 1$ is

$$A(z) = \frac{1}{n + 1} \left[ (1+z)^n + n (1+z)^{(n-1)/2} (1-z)^{(n+1)/2} \right]. \tag{3.16}$$

3.7 Performance of Linear Codes

There are several different ways that we can characterize the error detecting and correcting capabilities of codes at the output of the channel decoder [483].

  1. $P(E)$ is the probability of decoder error, also known as the word error rate. This is the probability that the codeword at the output of the decoder is not the same as the codeword at the input of the encoder.
  2. $P_b(E)$ is the probability of bit error, also known as the bit error rate. This is the probability that the decoded message bits (extracted from a decoded codeword of a binary code) are not the same as the encoded message bits. Note that when a decoder error occurs, there may be anywhere from 1 to $k$ message bits in error, depending on what codeword is sent, what codeword was decoded, and the mapping from message bits to codewords.
  3. $P_u(E)$ is the probability of undetected codeword error, the probability that errors occurring in a codeword are not detected.
  4. $P_d(E)$ is the probability of detected codeword error, the probability that one or more errors occurring in a codeword are detected.
  5. $P_{ub}(E)$ is the undetected bit error rate, the probability that a decoded message bit is in error and is contained within a codeword corrupted by an undetected error.
  6. $P_{db}(E)$ is the detected bit error rate, the probability that a received message bit is in error and is contained within a codeword corrupted by a detected error.
  7. $P(F)$ is the probability of decoder failure, which is the probability that the decoder is unable to decode the received vector (e.g., for a bounded distance decoder) and is able to determine that it cannot decode.

In what follows, bounds and exact expressions for these probabilities will be developed.

3.7.1 Error Detection Performance

All errors with weight up to $d_{\min} - 1$ can be detected, so in computing the probability of detection, only error patterns with weight $d_{\min}$ or higher need be considered. If a codeword $\mathbf{c}$ of a linear code is transmitted and the error pattern $\mathbf{e}$ happens to be a codeword, $\mathbf{e} \in \mathcal{C}$, then the received vector

$$\mathbf{r} = \mathbf{c} + \mathbf{e}$$

is also a codeword. Hence, the error pattern would be undetectable. Thus, the probability that an error pattern is undetectable is precisely the probability that it is a codeword.

We consider only errors in transmission of binary codes over the BSC with crossover probability $p$. (Extension to codes with larger alphabets is discussed in [483].) The probability of any particular pattern of $j$ errors in a codeword is $p^j (1-p)^{n-j}$. Recalling that $A_j$ is the number of codewords in $\mathcal{C}$ of weight $j$, the probability that $j$ errors form a codeword is $A_j p^j (1-p)^{n-j}$. The probability of undetectable error in a codeword is then

$$P_u(E) = \sum_{j=d_{\min}}^{n} A_j\, p^j (1-p)^{n-j}. \tag{3.19}$$

The probability of a detected codeword error is the probability that one or more errors occur minus the probability that the error is undetected:

$$P_d(E) = \sum_{j=1}^{n} \binom{n}{j} p^j (1-p)^{n-j} - P_u(E) = 1 - (1-p)^n - P_u(E).$$
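A sketch evaluating (3.19) and the expression above for the $(7, 4)$ Hamming code, whose weight distribution $A(z) = 1 + 7z^3 + 7z^4 + z^7$ is derived in Section 3.6:

```python
# Sketch: P_u(E) and P_d(E) for the (7,4) Hamming code on a BSC(p).
n, dmin = 7, 3
A = {0: 1, 3: 7, 4: 7, 7: 1}       # weight distribution

def P_u(p):
    return sum(A[j] * p**j * (1 - p)**(n - j) for j in A if j >= dmin)

def P_d(p):
    return 1 - (1 - p)**n - P_u(p)

p = 0.01
print(P_u(p))   # about 6.8e-06
print(P_d(p))   # about 0.0679
```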

Computing these probabilities requires knowing the weight distribution of the code, which is not always available. It is common, therefore, to provide bounds on the performance. A bound on $P_u(E)$ can be obtained by observing that the probability of undetected error is bounded above by the probability of occurrence of any error pattern of weight greater than or equal to $d_{\min}$. Since there are $\binom{n}{j}$ different ways that $j$ positions out of $n$ can be changed,

$$P_u(E) \le \sum_{j=d_{\min}}^{n} \binom{n}{j} p^j (1-p)^{n-j}.$$

A bound on $P_d(E)$ is simply

$$P_d(E) \le \sum_{j=1}^{n} \binom{n}{j} p^j (1-p)^{n-j} = 1 - (1-p)^n.$$

The corresponding bit error rates can be bounded as follows. The undetected bit error rate $P_{ub}(E)$ can be lower-bounded by assuming that the undetected codeword error corresponds to only a single message bit error, and upper-bounded by assuming that the undetected codeword error corresponds to all $k$ message bits being in error. Thus,

$$\frac{1}{k} P_u(E) \le P_{ub}(E) \le P_u(E).$$

Similarly for $P_{db}(E)$:

$$\frac{1}{k} P_d(E) \le P_{db}(E) \le P_d(E).$$

3.7.2 Error Correction Performance

An error pattern is correctable if and only if it is a coset leader in the standard array for the code, so the probability of correcting an error is the probability that the error is a coset leader. Let $\alpha_j$ denote the number of coset leaders of weight $j$. The numbers $\alpha_0, \alpha_1, \ldots, \alpha_n$ are called the coset leader weight distribution. Over a BSC with crossover probability $p$, the probability of $j$ errors forming one of the coset leaders is $\alpha_j p^j (1-p)^{n-j}$. The probability of a decoding error is thus the probability that the error is not one of the coset leaders,

$$P(E) = 1 - \sum_{j=0}^{n} \alpha_j\, p^j (1-p)^{n-j}.$$

This result applies to any linear code with a complete decoder.

Most hard-decision decoders are bounded distance decoders, selecting the codeword $\hat{\mathbf{c}}$, if any, which lies within a Hamming distance of $t = \lfloor (d_{\min} - 1)/2 \rfloor$ of the received vector $\mathbf{r}$. An exact expression for the probability of error for a bounded distance decoder can be developed as follows. By linearity, assume the all-zero codeword is transmitted, and let $P_s(j)$ be the probability that a received word $\mathbf{r}$ is exactly Hamming distance $s$ from a particular codeword of weight $j$. If $\mathbf{r}$ disagrees with that codeword in $i$ positions inside its support and $s - i$ positions outside it, the channel must have introduced $j + s - 2i$ errors, so

$$P_s(j) = \sum_{i} \binom{j}{i} \binom{n-j}{s-i}\, p^{\,j+s-2i} (1-p)^{\,n-(j+s-2i)}.$$

The probability of error is now obtained by summing over the (disjoint) decoding spheres of all nonzero codewords:

$$P(E) = \sum_{j=d_{\min}}^{n} A_j \sum_{s=0}^{t} P_s(j). \tag{3.21}$$

The probability of decoder failure for the bounded distance decoder is the probability that the received word does not fall into any of the decoding spheres,

$$P(F) = 1 - \sum_{s=0}^{t} \binom{n}{s} p^s (1-p)^{n-s} - P(E).$$

Exact expressions to compute $P_b(E)$ require information relating the weight of the message bits to the weight of the corresponding codewords. This information is summarized in the numbers $W_j$, where $W_j$ is the total weight of the message blocks associated with codewords of weight $j$.

Modifying (3.21) to weight each codeword term by the fraction of message bits in error, we obtain

$$P_b(E) = \frac{1}{k} \sum_{j=d_{\min}}^{n} W_j \sum_{s=0}^{t} P_s(j).$$

(See hamcode74pe.m.) Unfortunately, while obtaining values for $W_j$ for small codes is computationally straightforward, appreciably larger codes require theoretical expressions, which are usually unavailable.

The probability of decoder error can be easily bounded by the probability of occurrence of any error pattern of weight greater than $t$:

$$P(E) \le \sum_{j=t+1}^{n} \binom{n}{j} p^j (1-p)^{n-j}.$$
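The following sketch evaluates the exact bounded distance expression (3.21) alongside this bound, again for the $(7, 4)$ Hamming code with $t = 1$. Because that code is perfect, every error pattern of weight greater than $t$ causes a decoder error, so the bound is met with equality here; for codes that are not perfect, the bound is loose.

```python
# Sketch: exact P(E) for a bounded distance decoder, (3.21), versus the
# simple weight-based bound, for the (7,4) Hamming code (t = 1).
from math import comb

n, t = 7, 1
A = {3: 7, 4: 7, 7: 1}            # nonzero part of the weight distribution

def P_s(s, j, p):
    """P{r is at distance exactly s from a fixed weight-j codeword},
    given that the all-zero codeword was sent over a BSC(p)."""
    return sum(comb(j, i) * comb(n - j, s - i)
               * p**(j + s - 2*i) * (1 - p)**(n - (j + s - 2*i))
               for i in range(max(0, s - (n - j)), min(j, s) + 1))

def P_E(p):
    return sum(Aj * sum(P_s(s, j, p) for s in range(t + 1))
               for j, Aj in A.items())

def P_E_bound(p):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

p = 0.01
print(P_E(p))         # about 0.00203
print(P_E_bound(p))   # the same here, since the code is perfect
```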

An easy bound on the probability of failure is the same as this bound on the probability of error.

Bounds on the probability of bit error can be obtained as follows. A lower bound is obtained by assuming that a decoder error causes a single bit error out of the $k$ message bits. An upper bound is obtained by assuming that all $k$ message bits are incorrect when the block is incorrectly decoded. This leads to the bounds

$$\frac{1}{k} P(E) \le P_b(E) \le P(E).$$

3.7.3 Performance for Soft‐Decision Decoding

While all of the decoding in this chapter has been for hard-input decoders, it is interesting to examine the potential performance for soft-decision decoding. Suppose the codewords of an $(n, k)$ code $\mathcal{C}$ are modulated using BPSK having energy $E_c = R E_b$ per coded bit (where $R = k/n$ is the code rate) and transmitted through an AWGN channel with noise variance $\sigma^2 = N_0 / 2$. Each modulated codeword is a point in $n$-dimensional space. In Exercise 1.15, it is shown that the Euclidean distance between two BPSK-modulated codewords is related to the Hamming distance $d_H$ between the codewords by

$$d_E = 2 \sqrt{E_c\, d_H}.$$

Suppose that there are $N_{d_{\min}}$ codewords (on average) at a distance $d_{\min}$ from a codeword. By the union bound (1.27), the probability of a block decoding error is given (approximately, at high SNR) by

$$P(E) \approx N_{d_{\min}}\, Q\!\left( \sqrt{ \frac{2 R\, d_{\min} E_b}{N_0} } \right).$$

Neglecting the multiplicity constant $N_{d_{\min}}$, and recalling that uncoded BPSK has error probability $Q(\sqrt{2 E_b / N_0})$, we see that we achieve essentially comparable performance compared to uncoded transmission when

$$R\, d_{\min} \left( \frac{E_b}{N_0} \right)_{\!\text{coded}} = \left( \frac{E_b}{N_0} \right)_{\!\text{uncoded}}.$$

The asymptotic coding gain is the factor by which the coded $E_b/N_0$ can be decreased to obtain equivalent performance. (It is called asymptotic because it applies only as the SNR becomes large enough that the union bound can be regarded as a reasonable approximation.) In this case the asymptotic coding gain is

$$G_a = 10 \log_{10} (R\, d_{\min})\ \text{dB}.$$
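For example, under these assumptions a $(7, 4)$ Hamming code with $R = 4/7$ and $d_{\min} = 3$ has $R\, d_{\min} = 12/7 \approx 1.71$, for an asymptotic coding gain of $10 \log_{10}(12/7) \approx 2.3$ dB.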

Recall that Figure 1.19 illustrated the advantage of soft‐input decoding compared with hard‐input decoding.

3.8 Erasure Decoding

An erasure is an error in which the error location is known, but the value of the error is not. Erasures can arise in several ways. In some receivers the received signal can be examined to see if it falls outside acceptable bounds. If it falls outside the bounds, it is declared as an erasure. (For example, for BPSK signaling, if the received signal is too close to the origin, an erasure might be declared.)

Erasures can also sometimes be dealt with using concatenated coding techniques, where an inner code declares erasures at some symbol positions, which an outer code can then correct.

Consider the erasure capability for a code of distance $d_{\min}$. A single erased symbol removed from a code (with no additional errors) leaves a code with a minimum distance at least $d_{\min} - 1$. Thus, $f$ erased symbols can be "filled" provided that $f \le d_{\min} - 1$. For example, a Hamming code with $d_{\min} = 3$ can correct up to 2 erasures.

Now suppose that there are both errors and erasures. For a code with minimum distance $d_{\min}$ experiencing a single erasure, there are still $n - 1$ unerased coordinates, and in these coordinates the codewords are separated by a distance of at least $d_{\min} - 1$. More generally, if there are $f$ erased symbols, then the distance among the remaining digits is at least $d_{\min} - f$. Letting $t_f$ denote the random error decoding distance in the presence of $f$ erasures, we can correct up to

$$t_f = \left\lfloor \frac{d_{\min} - f - 1}{2} \right\rfloor$$

errors. If there are images erasures and images errors, they can be corrected provided that

(3.22)equation

Since correcting an error requires determination of both the error position and the error value, while filling an erasure requires determination only of the error value, essentially twice as many erasures can be filled as errors corrected.

3.8.1 Binary Erasure Decoding

We consider now how to simultaneously fill $f$ erasures and correct $t$ errors in a binary code with a given decoding algorithm [483, p. 229]. In this case, all that is necessary is to determine for each erasure whether the missing value should be a one or a zero. An erasure decoding algorithm for this case can be described as follows:

  1. Place zeros in all erased coordinates and decode using the usual decoder for the code. Call the resulting codeword $\hat{\mathbf{c}}_0$.
  2. Place ones in all erased coordinates and decode using the usual decoder for the code. Call the resulting codeword $\hat{\mathbf{c}}_1$.
  3. Find which of $\hat{\mathbf{c}}_0$ and $\hat{\mathbf{c}}_1$ is closest to $\mathbf{r}$ (in the unerased coordinates). This is the decoded codeword.

Let us examine why this decoder works. Suppose there are $t$ errors and $f$ erasures with $2t + f < d_{\min}$ (so that correct decoding is possible). In assigning 0 to the $f$ erased coordinates, we thereby generate $e_0$ additional errors, $0 \le e_0 \le f$, so that the total number of errors to be corrected is $t + e_0$. In assigning 1 to the $f$ erased coordinates, we make $e_1$ additional errors, so that the total number of errors to be corrected is $t + e_1$. Note that $e_0 + e_1 = f$, so that either $e_0 \le \lfloor f/2 \rfloor$ or $e_1 \le \lfloor f/2 \rfloor$. Thus, either

$$2(t + e_0) < d_{\min} \quad \text{or} \quad 2(t + e_1) < d_{\min},$$

so that one of the two decodings must be correct.
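A sketch of this two-pass procedure, layered on the $(7, 4)$ Hamming syndrome decoder of Section 3.4.2; the two candidates are compared on the unerased coordinates, since $\mathbf{r}$ carries no information in the erased ones:

```python
# Sketch: binary erasure filling by decoding twice (erasures set to all
# zeros, then to all ones) and keeping the candidate closest to r.
import numpy as np

H = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 1, 1, 0, 1]])
n = H.shape[1]
table = {(0, 0, 0): np.zeros(n, dtype=int)}
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    table[tuple(H[:, i])] = e

def decode(r):
    return np.mod(r + table[tuple(np.mod(r @ H.T, 2))], 2)

def decode_with_erasures(r, erased):
    cands = []
    for fill in (0, 1):
        trial = r.copy()
        trial[erased] = fill
        cands.append(decode(trial))
    keep = [i for i in range(n) if i not in erased]
    return min(cands, key=lambda c: int(np.sum(c[keep] != r[keep])))

c = np.array([0, 1, 1, 1, 0, 0, 0])       # transmitted codeword
r = c.copy()
erased = [1, 3]                           # f = 2 erasures, no other errors
print(decode_with_erasures(r, erased))    # [0 1 1 1 0 0 0], c recovered
```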

Erasure decoding for nonbinary codes depends on the particular code structure. For example, decoding of Reed–Solomon codes is discussed in Section 6.7.

3.9 Modifications to Linear Codes

We introduce some minor modifications to linear codes. These are illustrated for some particular examples in Figure 3.2.


Figure 3.2 Demonstrating modifications on a Hamming code.

Puncturing an extended code can return it to the original code (if the extended symbols are the ones punctured). Puncturing can reduce the weight of each codeword by its weight in the punctured positions. The minimum distance of a code is reduced by puncturing if a minimum-weight codeword is punctured in a nonzero position. Puncturing an $(n, k)$ code $p$ times can result in a code with minimum distance as small as $d_{\min} - p$.

3.10 Best‐Known Linear Block Codes

Tables of the best‐known linear block codes are available. An early version appears in [292]. More recent tables can be found at [50].

Exercises

  1. 3.1 Find, by trial and error, a set of four binary codewords of length three such that each word is at least a distance of 2 from every other word.
  2. 3.2 Find a set of 16 binary words of length 7 such that each word is at least a distance of 3 from every other word. Hint: Hamming code.
  3. 3.3 Perhaps the simplest of all codes is the binary parity check code, a $(k+1, k)$ code. Given a message vector $\mathbf{m} = (m_0, m_1, \ldots, m_{k-1})$, the codeword is $\mathbf{c} = (m_0, m_1, \ldots, m_{k-1}, p)$, where $p = m_0 + m_1 + \cdots + m_{k-1}$ (arithmetic in $GF(2)$) is the parity bit. Such a code is called an even parity code, since all codewords have even parity — an even number of 1 bits.
    1. Determine the minimum distance for this code.
    2. How many errors can this code correct? How many errors can this code detect?
    3. Determine a generator matrix for this code.
    4. Determine a parity check matrix for this code.
    5. Suppose that bit errors occur independently with probability $p_e$. The probability that a parity check is satisfied is the probability that an even number of bit errors occur in the received codeword. Verify the following expression for this probability:
      $$P(\text{even number of errors}) = \frac{1 + (1 - 2 p_e)^{k+1}}{2}.$$
  4. 3.4 For the $(n, 1)$ binary repetition code, determine a parity check matrix.
  5. 3.5 [483] Let $p$ be the probability that any bit in a received vector is incorrect. Compute the probability that the received vector contains undetected errors given the following encoding schemes:
    1. No code, word length $n$.
    2. Even parity (see Exercise 3.3), word length $n$.
    3. Odd parity, word length $n$. (Is this a linear code?)
    4. Even parity, word length $2n$.
  6. 3.6 [272] Let $\mathcal{C}_1$ be an $(n_1, k)$ binary linear systematic code with generator $G_1$ and minimum distance $d_1$, and let $\mathcal{C}_2$ be an $(n_2, k)$ binary linear systematic code with generator $G_2$ and minimum distance $d_2$. Form the generator matrix for an $(n_1 + n_2, k)$ code as
    $$G = [\, G_1 \;\; G_2 \,].$$

    Show that this code has minimum distance at least $d_1 + d_2$.

  7. 3.7 The generator matrix for a code over a field $GF(q)$ is given by
    equation

    Find a generator matrix and parity check matrix for an equivalent systematic code.

  8. 3.8 The generator matrix $G$ and parity check matrix $H$ for a binary code $\mathcal{C}$ are given by (3.23).

    This code is small enough that it can be used to demonstrate several concepts from throughout the chapter.

    1. Verify that $H$ is a parity check matrix for this generator.
    2. Draw a logic diagram schematic for an implementation of an encoder for the nonsystematic generator $G$ using "and" and "xor" gates.
    3. Draw a logic diagram schematic for an implementation of a circuit that computes the syndrome.
    4. List the vectors in the orthogonal complement of the code (that is, the vectors in the dual code $\mathcal{C}^\perp$).
    5. Form the standard array for the code $\mathcal{C}$.
    6. Form the syndrome decoding table for $\mathcal{C}$.
    7. How many codewords are there of each weight in $\mathcal{C}$? Determine the weight enumerator $A(z)$.
    8. Using the generator matrix in (3.23), find the codeword with message bits $\mathbf{m}$.
    9. Decode the received word $\mathbf{r}$ using the generator of (3.23).
    10. Determine the weight enumerator for the dual code.
    11. Write down an explicit expression for $P_u(E)$ for this code. Evaluate it for a given crossover probability $p$.
    12. Write down an explicit expression for $P_d(E)$ for this code. Evaluate it for a given crossover probability $p$.
    13. Write down an explicit expression for $P(E)$ for this code. Evaluate it for a given crossover probability $p$.
    14. Write down an explicit expression for $P(E)$ for this code, assuming a bounded distance decoder is used. Evaluate it for a given crossover probability $p$.
    15. Write down an explicit expression for $P(F)$ for this code. Evaluate it for a given crossover probability $p$.
    16. Determine the generator matrix for an extended code, in systematic form.
    17. Determine the generator for a code which has expurgated all codewords of odd weight. Then express it in systematic form.
  9. 3.9 [271] Let a binary systematic $(8, 4)$ code have parity bits $p_0, p_1, p_2, p_3$ determined from the message bits $m_0, m_1, m_2, m_3$ by the parity check equations
    $$p_0 = m_1 + m_2 + m_3, \quad p_1 = m_0 + m_1 + m_2, \quad p_2 = m_0 + m_1 + m_3, \quad p_3 = m_0 + m_2 + m_3.$$
    1. Determine the generator matrix $G$ for this code in systematic form. Also determine the parity check matrix $H$.
    2. Using Theorem 3.13, show that the minimum distance of this code is 4.
    3. Determine the weight enumerator $A(z)$ for this code. Determine $P_u(E)$ on a BSC with crossover probability $p$.
    4. Show that this is a self-dual code.
  10. 3.10 Show that a self-dual code has a generator matrix $G$ which satisfies $G G^T = 0$.
  11. 3.11 Given a code with a parity check matrix $H$, show that the coset with syndrome $\mathbf{s}$ contains a vector of weight $w$ if and only if some linear combination of $w$ columns of $H$ equals $\mathbf{s}$.
  12. 3.12 Show that all of the nonzero codewords of the $(2^m - 1, m)$ simplex code have weight $2^{m-1}$. Hint: Start with $m = 2$ and work by induction.
  13. 3.13 Show that (3.13) follows from (3.12).
  14. 3.14 Show that (3.16) follows from (3.15) using the MacWilliams identity.
  15. 3.15 Let $f(\mathbf{v}) = z^{\operatorname{wt}(\mathbf{v})}$, for $\mathbf{v} \in GF(2)^n$. Determine the Hadamard transform $\hat{f}$ of $f$.
  16. 3.16 The weight enumerator $A(z)$ of (3.11) for a code $\mathcal{C}$ is sometimes written in homogeneous form as
    $$W_{\mathcal{C}}(x, y) = \sum_{i=0}^{n} A_i x^{n-i} y^i.$$

    In this problem, consider binary codes, $q = 2$.

    1. Show that $W_{\mathcal{C}}(x, y) = \sum_{\mathbf{c} \in \mathcal{C}} x^{n - \operatorname{wt}(\mathbf{c})} y^{\operatorname{wt}(\mathbf{c})}$.
    2. Let $W_{\mathcal{C}^\perp}(x, y)$ be the weight enumerator for the code dual to $\mathcal{C}$. Show that the MacWilliams identity can be written as
      $$W_{\mathcal{C}^\perp}(x, y) = \frac{1}{|\mathcal{C}|} W_{\mathcal{C}}(x + y, x - y)$$

      or

      $$\sum_{i=0}^{n} B_i x^{n-i} y^i = \frac{1}{|\mathcal{C}|} \sum_{i=0}^{n} A_i (x+y)^{n-i} (x-y)^i. \tag{3.24}$$

    3. In the following subproblems, assume a binary code. Let $x = 1$ in (3.24). We can write

      $$\sum_{i=0}^{n} B_i y^i = \frac{1}{|\mathcal{C}|} \sum_{i=0}^{n} A_i (1+y)^{n-i} (1-y)^i. \tag{3.25}$$

      Set $y = 1$ in this and show that $\sum_{i=0}^{n} B_i = 2^{n-k}$. Justify this result.

    4. Now differentiate (3.25) with respect to $y$ and set $y = 1$ to show that
      $$\sum_{i=0}^{n} i\, B_i = 2^{n-k-1} (n - A_1).$$

      If $A_1 = 0$, this gives the average weight of the dual codewords, $n/2$.

    5. Differentiate (3.25) $\nu$ times with respect to $y$ and set $y = 1$ to show that
      $$\sum_{i=0}^{n} \binom{i}{\nu} B_i = 2^{n-k-\nu} \sum_{i=0}^{\nu} (-1)^i \binom{n-i}{\nu-i} A_i.$$

      Hint: Define $g(y) = (1+y)^{n-i}$ and $h(y) = (1-y)^i$. We have the following generalization of the product rule for differentiation:

      $$\frac{d^\nu}{dy^\nu} \big[ g(y) h(y) \big] = \sum_{j=0}^{\nu} \binom{\nu}{j} g^{(j)}(y)\, h^{(\nu - j)}(y).$$
    6. Now set $y = 1$ in (3.24) and write
      $$\sum_{i=0}^{n} B_i x^{n-i} = \frac{1}{|\mathcal{C}|} \sum_{i=0}^{n} A_i (x+1)^{n-i} (x-1)^i.$$

      Differentiate $\nu$ times with respect to $x$ and set $x = 1$ to show that

      $$\sum_{i=0}^{n} \binom{n-i}{\nu} B_i = 2^{n-k-\nu} \sum_{i=0}^{\nu} \binom{n-i}{\nu-i} A_i. \tag{3.26}$$
  17. 3.17 Let $\mathcal{C}$ be a binary $(n, k)$ code with weight enumerator $A(z)$ and let $\mathcal{C}'$ be the extended code of length $n + 1$,
    $$\mathcal{C}' = \{ (c_0, c_1, \ldots, c_{n-1}, c_n) : (c_0, c_1, \ldots, c_{n-1}) \in \mathcal{C},\ c_n = c_0 + c_1 + \cdots + c_{n-1} \}.$$

    Determine the weight enumerator $A'(z)$ for $\mathcal{C}'$.

  18. 3.18 [272] Let $\mathcal{C}$ be a binary linear code with both even- and odd-weight codewords. Show that the number of even-weight codewords is equal to the number of odd-weight codewords.
  19. 3.19 Show that for a binary code over a BSC with crossover probability $p$, $P_u(E)$ can be written as:
    1. $P_u(E) = (1-p)^n \left[ A\!\left( \frac{p}{1-p} \right) - 1 \right]$
    2. and $P_u(E) = 2^{-(n-k)} B(1 - 2p) - (1-p)^n$.
  20. 3.20 [483] Find the lower bound on required redundancy for the following codes.
    1. A single‐error correcting binary code of length 7.
    2. A single‐error correcting binary code of length 15.
    3. A triple‐error correcting binary code of length 23.
    4. A triple-error correcting 4-ary code (i.e., $q = 4$) of length 23.
  21. 3.21 Show that all odd‐length binary repetition codes are perfect.
  22. 3.22 Show that Hamming codes achieve the Hamming bound.
  23. 3.23 Determine the weight distribution for a binary Hamming code of length 31. Determine the weight distribution of its dual code.
  24. 3.24 The parity check matrix for a nonbinary Hamming code of length $n = (q^m - 1)/(q - 1)$ and dimension $k = n - m$ with minimum distance 3 can be constructed as follows. For each $q$-ary $m$-tuple in the base-$q$ representation of the numbers from 1 to $q^m - 1$, select those $m$-tuples for which the first nonzero element is equal to 1. The list of all such $m$-tuples as columns gives the parity check matrix $H$.
    1. Explain why this construction gives the specified length $n$.
    2. Write down a parity check matrix in systematic form for the $(5, 3)$ Hamming code over the field $GF(4)$ of four elements.
    3. Write down the corresponding generator matrix. Note: in this field, every element is its own additive inverse: $-1 = 1$, $-\alpha = \alpha$, $-\alpha^2 = \alpha^2$.
  25. 3.25 [272] Let $G$ be the generator matrix of an $(n, k)$ binary code $\mathcal{C}$ and let no column of $G$ be all zeros. Arrange all the codewords of $\mathcal{C}$ as rows of a $2^k \times n$ array.
    1. Show that no column of the array contains only zeros.
    2. Show that each column of the array consists of $2^{k-1}$ zeros and $2^{k-1}$ ones.
    3. Show that the set of all codewords with zeros in a particular component position forms a subspace of $\mathcal{C}$. What is the dimension of this subspace?
    4. Show that the minimum distance $d_{\min}$ of this code must satisfy the following inequality, known as the Plotkin bound:
      $$d_{\min} \le \frac{n\, 2^{k-1}}{2^k - 1}.$$
  26. 3.26 [272] Let $\Gamma$ be the ensemble of all the binary systematic linear $(n, k)$ codes.
    1. Prove that a nonzero binary vector $\mathbf{v}$ is contained in exactly $2^{(k-1)(n-k)}$ of the codes in $\Gamma$ or it is in none of the codes in $\Gamma$.
    2. Using the fact that the nonzero $n$-tuples of weight $d - 1$ or less can be in at most
      $$\left[ \sum_{i=1}^{d-1} \binom{n}{i} \right] 2^{(k-1)(n-k)}$$
      systematic binary linear codes, show that there exists an $(n, k)$ linear code with minimum distance of at least $d$ if the following bound is satisfied:
      $$\left[ \sum_{i=1}^{d-1} \binom{n}{i} \right] 2^{(k-1)(n-k)} < 2^{k(n-k)}.$$
    3. Show that there exists an $(n, k)$ binary linear code with minimum distance at least $d$ that satisfies the following inequality:
      $$\sum_{i=1}^{d-1} \binom{n}{i} < 2^{n-k}.$$

      This provides a lower bound on the minimum distance attainable with an $(n, k)$ linear code known as the Gilbert–Varshamov bound.

  27. 3.27 Define a linear $(n, k)$ code over $GF(q)$ by the generator matrix
    equation
    1. Find the parity check matrix.
    2. Prove that this is a single‐error‐correcting code.
    3. Prove that it is a double‐erasure‐correcting code.
    4. Prove that it is a perfect code.
  28. 3.28 [271] Let $H$ be the parity check matrix for an $(n, k)$ linear code $\mathcal{C}$. Let $\mathcal{C}'$ be the extended code whose parity check matrix $H'$ is formed by
    $$H' = \begin{bmatrix} & & & 0 \\ & H & & \vdots \\ & & & 0 \\ 1 & 1 & \cdots & 1 \end{bmatrix}.$$
    1. Show that every codeword of $\mathcal{C}'$ has even weight.
    2. Show that $\mathcal{C}'$ can be obtained from $\mathcal{C}$ by adding an extra parity bit, called the overall parity bit, to each codeword.
  29. 3.29 The $|\mathbf{u}|\mathbf{u}+\mathbf{v}|$ construction: Let $\mathcal{C}_i$, $i = 1, 2$, be linear binary $(n, k_i)$ block codes with generator matrices $G_i$ and minimum distances $d_i$. Define the code
    $$\mathcal{C} = \{ (\mathbf{u}, \mathbf{u} + \mathbf{v}) : \mathbf{u} \in \mathcal{C}_1,\ \mathbf{v} \in \mathcal{C}_2 \}.$$
    1. Show that $\mathcal{C}$ has the generator
      $$G = \begin{bmatrix} G_1 & G_1 \\ 0 & G_2 \end{bmatrix}.$$
    2. Show that the minimum distance of $\mathcal{C}$ is
      $$d_{\min} = \min(2 d_1, d_2).$$

3.12 References

The definitions of generator, parity check matrix, distance, and standard arrays are standard; see, for example, [271, 483]. The MacWilliams identity appeared in [291]. Extensions to nonlinear codes appear in [292]. The discussion of probability of error in Section 3.7 is drawn closely from [483]. Our discussion on modifications follows [483], which, in turn, draws from [33]. Our analysis of soft‐input decoding was drawn from [26]. Classes of perfect codes are in [444].

Note

  1. Most signal processing and communication work employs column vectors by convention. However, a venerable tradition in coding theory has employed row vectors, and we adhere to that through most of the book.