The interleaver bijectively maps the binary codewords into the codewords . The interleaver is determined by the interleaving vector which defines the mapping between and as follows:
where (see (2.80)) and
The sequences and are obtained, respectively, by reading columnwise the elements of the sequences of binary vectors and . Very often, we will skip the notation which includes the interleaving vector , i.e., we use .
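Since the text gives no implementation, the bijective mapping defined by the interleaving vector can be sketched as a simple index permutation. The following Python/NumPy sketch is our own illustration; the function names, the indexing convention (bit j of the input is moved to the position given by the j-th entry of the interleaving vector), and the use of NumPy are assumptions, not notation from the text:

```python
import numpy as np

def interleave(c, pi):
    """Apply the interleaving vector pi (a permutation of bit indices)
    to the binary codeword c; the mapping is bijective by construction."""
    c = np.asarray(c)
    out = np.empty_like(c)
    out[pi] = c          # bit j of c is moved to position pi[j]
    return out

def deinterleave(cbar, pi):
    """Invert the permutation: recover the original codeword."""
    return np.asarray(cbar)[pi]

# Round trip with a randomly drawn interleaving vector
rng = np.random.default_rng(0)
c = rng.integers(0, 2, size=12)
pi = rng.permutation(c.size)
assert (deinterleave(interleave(c, pi), pi) == c).all()
```

Because the mapping is a permutation, the inverse is obtained by indexing with the same vector, which is what makes the operation bijective.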
We now define two simple and well-known interleavers.
As the interleaving transforms the code into the code , considerations in the domain of the codewords may help clarify the relationships between modulation and coding. In what follows, we “reuse” the DS-related definitions from Section 2.6.
Further, we adapt Definition 2.23 as follows.
We note that the interleaving does not change the HW of the codewords, and thus, we have . On the other hand, in general, . In fact, the GHW used in and in has, in general, different dimensions: for and for .
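The HW-invariance claim can be verified numerically: a permutation only moves bits, so the number of nonzero bits cannot change. A minimal check, with illustrative parameters of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.integers(0, 2, size=24)      # a binary codeword of length 24
pi = rng.permutation(c.size)         # a randomly drawn interleaving vector
cbar = c[pi]                         # interleaved codeword (one possible indexing convention)

# Interleaving only permutes bits, so the Hamming weight is unchanged.
assert int(cbar.sum()) == int(c.sum())
```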
At this point, it is also useful to define the generalized input-output distance spectrum (GIODS) of the code , which relates the input sequences and the corresponding codewords .
It is then convenient to introduce the input-dependent distance spectrum (IDS) and generalized input-dependent distance spectrum (GIDS) of the encoder.
As the IDS and GIDS defined above relate the codewords and the input sequences , they depend on the encoding operation. They are thus different from the DS or GDS in Definitions 2.21 and 2.23 which are defined solely for the codewords, i.e., in abstraction of how the encoding is done.
The following relationships are obtained via marginalization of the GIODS
and
The codewords and are related to each other via the bijective operation of interleaving, whose presence is motivated by considerations similar to those we evoked in Section 2.6 to introduce the DS. We repeat here the assertion we made that the “protection” against decoding errors can be related to the HW of the codewords (or the HD between pairs of codewords). Then, in order to simplify the decoder's task of distinguishing between the binary codewords, we require not only a large distance between all pairs of codewords, but also that every bit of the binary difference between the codewords is transmitted using different symbols . In this way, we decrease the probability that the respective observations are simultaneously affected by the channel in an adverse manner. These considerations motivate the following definition.
Clearly, the codeword diversity is bounded as
We thus say that the codeword achieves its maximum diversity if , which depends on how the interleaver allocates all the nonzero elements in to different labels . In addition, if , the codeword cannot achieve its maximum diversity. From now on, we only consider the cases .
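Under the reading that the diversity of an interleaved codeword counts the distinct time instants (m-bit labels) carrying at least one of its nonzero bits, the quantity and its bound can be sketched as follows. The function name, the grouping of consecutive bits into labels, and the concrete parameters are our own illustrative assumptions:

```python
import numpy as np

def codeword_diversity(cbar, m):
    """Number of distinct m-bit labels (time instants) that carry at least
    one nonzero bit of the interleaved codeword cbar."""
    labels = np.asarray(cbar).reshape(-1, m)   # one row per time instant
    return int(np.count_nonzero(labels.any(axis=1)))

# The diversity cannot exceed min(HW, Ns): each nonzero bit activates at
# most one label, and there are only Ns labels in total.
cbar = np.array([1, 0, 0,  0, 0, 0,  1, 1, 0,  0, 0, 1])  # Ns = 4 labels of m = 3 bits
w = int(cbar.sum())                 # Hamming weight: 4
Ns = cbar.size // 3
beta = codeword_diversity(cbar, 3)  # here 3, not 4: two nonzero bits share a label
assert beta <= min(w, Ns)
```

In this example the codeword does not achieve its maximum diversity precisely because the interleaver placed two of its nonzero bits in the same label.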
The interleaver diversity efficiency is bounded as ; interleavers that achieve the upper bound are desirable. This is justified by the assertion we made above, namely that it is desirable to transmit the bits in which the codewords differ using independent symbols. As a yardstick for comparing the diversity efficiency of different interleavers, we define the random interleaver.
We note that the codewords are generated in a deterministic manner, but for a random interleaver, the mapping between and is random. Therefore, both the codeword diversity and the interleaver diversity efficiency are random. To evaluate them meaningfully, we use their expectations, i.e., we consider their averages with respect to the distribution of the interleaving vector given in (2.119).
For large , treating as a continuous variable, we may apply a first-order Taylor-series approximation of around , to obtain
From now on, we will refer to as the average complementary interleaver diversity efficiency.
In Fig. 2.23, we show the expression (2.121) as well as the approximation in (2.127). We observe that the average complementary interleaver diversity efficiency is strongly affected by an increase in . For example, when , to keep the average complementary interleaver diversity efficiency at , is necessary for but we need when . It is also interesting to note from Fig. 2.23 that a random interleaver does not guarantee an average interleaver diversity , even for large .
Another property of the code we are interested in is related to the way the codewords' weights are distributed across the bit positions after interleaving, which we formalize in the following.
Up to now, we have considered two effects of the interleaving of the code independently: the GDS in Definition 2.36 deals with the assignment of nonzero bits from into different positions at the modulator's input, while the interleaver diversity efficiency in Definition 2.40 deals with the assignment of these bits into labels at different time instants . We may also jointly consider these two “dimensions” of the interleaving. Namely, for all with , we have
i.e., the ratio between the number of codewords in the set which achieve their maximum diversity and the number of codewords in the set depends solely on . The proof of (2.140) follows along the lines of the proofs of Theorems 2.43 and 2.45.
Theorem 2.45 takes advantage of the enumeration over all possible interleavers , which makes the enumeration over all codewords unnecessary. As we can use any codeword , the assignment of the bits to the position in the label becomes independent of the codeword . Thus, we can simply treat the assigned position as a random variable . The assignment is not “biased” toward any position , so has a uniform distribution on the set , which leads to (2.139). Such a model is depicted in Fig. 2.24.
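The uniform position-assignment argument can be illustrated with a small Monte Carlo experiment: for a random interleaver, the position inside the m-bit label to which a fixed codeword bit is assigned is (empirically) uniform over the m positions. The parameters and the indexing convention below are our own assumptions, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
m, Ns, trials = 4, 100, 2000
counts = np.zeros(m, dtype=int)

# For each randomly drawn interleaving vector, record the position within
# its m-bit label to which bit 0 of the codeword is assigned.
for _ in range(trials):
    pi = rng.permutation(Ns * m)   # a fresh random interleaving vector
    k = pi[0] % m                  # position of bit 0 inside its label
    counts[k] += 1

# Each of the m positions is hit roughly equally often, consistent with a
# uniform distribution over {0, ..., m-1}.
freq = counts / trials
assert np.all(np.abs(freq - 1 / m) < 0.05)
```

The assignment is uniform simply because the first entry of a uniformly drawn permutation is itself uniform over all bit positions, and reduction modulo m preserves that uniformity.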
The concept of random interleavers leads to the random position-assignment model from Fig. 2.24. Of course, in practice, a fixed interleaving vector is used. However, we will be able to apply the simple model depicted in Fig. 2.24 also to the analysis of a fixed interleaver, provided that it inherits some of the properties of the random interleaver. In such a case, we talk about quasirandom interleavers, which we define in the following.
As we refer to properties of the interleavers for , when discussing their effects, we have in mind a particular family of interleavers which defines how to obtain the interleaving vector for each value of . Therefore, strictly speaking, the definition above applies to a family of interleavers and not to a particular interleaver with interleaving vector .
The essential difference between random interleaving and quasirandom interleaving lies in the analysis of the DS or GDS. In the case of random interleaving, instead of enumerating all possible codewords , we fix one of them and average the spectrum over all possible realizations of the interleaver. In the case of a fixed (quasirandom) interleaving, we enumerate the codewords produced by interleaving all the codewords from . The quasirandomness property assumes that the enumeration over the codewords produces the same average (spectrum) as the enumeration over the interleavers.
Of course, quasirandomness is a property which depends on the code and how the family of interleavers is defined. In the following example, we analyze a CC and two particular interleavers in the light of the conditions of quasirandomness.
TCM was originally proposed at ISIT 1976 [1] and then developed in [2–4]. TCM quickly became a very popular research topic, and improved TCM paradigms were soon proposed: rotationally invariant TCM [5, 6], multidimensional TCM [3, 7–9], TCM based on cosets and lattices [10, 11], TCM with nonequally spaced symbols [12–15], etc. TCM also moved quickly from research to practice; it was introduced in the modem standards of the early 1990s (V.32 [16] and V.32bis [17]), increasing transmission rates up to 14.4 kbps. TCM is a well-studied topic, and extensive information about it can be found, e.g., in [18, 19, Section 8.12], [20, Chapter 4], [21, Section 8.2], [22, Chapter 14], [23, Chapter 18]. TCM for fading channels is studied in [20, Chapter 5].
MLC was proposed by Imai and Hirakawa in [24, 25]. MLC with MSD, as well as the design rules for selecting the rates of the encoders, was analyzed in detail in [26, 27]. MLC for fading channels, which includes bit interleavers in each level, was proposed in [28], and MLC using capacity-approaching (turbo) codes was proposed in [29].
BICM was introduced in [30] and later analyzed from an information-theoretic point of view in [31, 32]. BICM-ID was introduced in [33–35] where BICM was recognized as a serial concatenation of encoders (the encoder and the mapper) and further studied in [36–40]. For relevant references about the topics related to BICM treated in this book, we refer the reader to the end of each chapter.
The discrete-time AWGN model and the detection of signals in a continuous-time AWGN channel is a well-studied topic in the literature, see, e.g., [19, Chapter 3], [41, Chapter 2], [42, Chapter 5], [43, Section 2.5], [44, Chapters 26 and 28]. For more details about models for fading channels, we refer the reader to [19, Chapter 13] or to [42, Chapter 3]. In particular, more details on the Nakagami fading distribution [45] are given in [42, Section 3.2.2].
The BRGC was introduced in [46] and studied for uncoded transmission in [47, 48] where its asymptotic optimality for PAM, PSK, and QAM constellations was proved. For more details about Gray labelings, we refer the reader also to [49]. The expansion used to define the BRGC was introduced in [47]. An alternative construction that can be used is based on reflections, which is detailed in [47, Section IV]. The FBC was analyzed in [50] for uncoded transmission and the BSGC was recently introduced in [51].
For 8PSK and in the context of BICM-ID, other labelings have been proposed; see [52] and references therein. Examples include the SSP labeling, proposed in [40, Fig. 2 (c)] (later found via an algorithmic search in [38, Fig. 2 (a)] and called M8), and the MSP labeling [53, Fig. 2 (b)]. These two labelings are shown in Fig. 2.17 (b) and (c). The M16 labeling used in BICM-ID for 16QAM was first proposed in [38, Fig. 2 (b)].
The design of bit labelings for improving the performance of BICM-ID has been studied in [37, 38, 54–59] and references therein. Most of the works consider one-to-one mappers (i.e., a bijective mapping from labels to constellation symbols); however, when signal shaping is considered, a many-to-one mapping () may be useful, as shown in [59, Section 6.2].
The channel capacity defined in 1948 by Shannon [60] spurred a great deal of activity, and approaching Shannon's limit remained one of the most important problems for researchers for about 45 years. The complexity of encoding/decoding was always an issue, but these limits have been continuously pushed by the development of integrated circuits. CENCs provided a low-complexity encoding strategy, and the Viterbi algorithm [61] gave a clever and relatively low-complexity decoding method. Introduced by Elias [62] in 1955, CENCs were studied extensively, and their detailed description can be found in popular textbooks, e.g., [23, Chapter 11], [63, Chapter 5], [64]. The name for CENCs with ODS was coined by Frenger et al. in [65]; however, most of the newly reported spectra in [65] had already been presented in [66, Tables III–V], [67, Tables II–IV], as later clarified in [68].
TCs were invented by Berrou et al. [69] and surprised the coding community with their performance while maintaining relatively simple encoding and decoding via iterative processing. Since then, they have been analyzed in detail in many works and textbooks, e.g., [63, Chapter 8.2], [23, Chapter 16]; TCs are also used in many communication standards such as 3G and 4G telephony [70, Section 16.5.3] and digital video broadcasting (DVB) standards [71], as well as in deep-space communications [72, 73].
Most often, the BICM literature assumes that random interleaving with infinite length is used [31], which leads to the simple random multiplexing model we have shown in Fig. 2.24. The formulas describing the diversity efficiency of the finite-length interleaver can be found in [32, Chapter 4.3].