Appendix 8A: The Sum of Log-Likelihood Ratios

Following are the algebraic details yielding the results shown in Equation (8.12), rewritten below:

$$L(d_1) \boxplus L(d_2) \triangleq L(d_1 \oplus d_2) = \log_e\!\left(\frac{e^{L(d_1)} + e^{L(d_2)}}{1 + e^{L(d_1)}\,e^{L(d_2)}}\right) \tag{8A.1}$$

We start with a likelihood ratio of the APP that a data bit equals +1 compared to the APP that it equals –1. Since the logarithm of this likelihood ratio, denoted L(d), has been conveniently taken to the base e, it can be expressed as

$$L(d) = \log_e\!\left[\frac{P(d = +1)}{P(d = -1)}\right] = \log_e\!\left[\frac{P(d = +1)}{1 - P(d = +1)}\right] \tag{8A.2}$$

so that

$$e^{L(d)} = \left[\frac{P(d = +1)}{1 - P(d = +1)}\right] \tag{8A.3}$$

Solving for P(d = +1) we obtain

$$e^{L(d)} - e^{L(d)} \times P(d = +1) = P(d = +1) \tag{8A.4}$$

$$e^{L(d)} = P(d = +1) \times \left[1 + e^{L(d)}\right] \tag{8A.5}$$

and

$$P(d = +1) = \frac{e^{L(d)}}{1 + e^{L(d)}} \tag{8A.6}$$

Observe from Equation 8A.6 that

$$P(d = -1) = 1 - P(d = +1) = 1 - \frac{e^{L(d)}}{1 + e^{L(d)}} = \frac{1}{1 + e^{L(d)}} \tag{8A.7}$$

Let d1 and d2 be two statistically independent data bits taking on voltage values of +1 and –1 corresponding to logic levels 1 and 0, respectively.

When formatted in this way, the modulo-2 summation of d1 and d2 yields –1 (logic 0) whenever d1 and d2 have identical values (both +1 or both –1), and it yields +1 (logic 1) whenever d1 and d2 have different values. Then

$$L(d_1 \oplus d_2) = \log_e\!\left[\frac{P(d_1 \oplus d_2 = +1)}{P(d_1 \oplus d_2 = -1)}\right] = \log_e\!\left[\frac{P(d_1 = +1)\times P(d_2 = -1) + [1 - P(d_1 = +1)][1 - P(d_2 = -1)]}{P(d_1 = +1)\times P(d_2 = +1) + [1 - P(d_1 = +1)][1 - P(d_2 = +1)]}\right] \tag{8A.8}$$

Using Equations (8A.6) and (8A.7) to replace the probability terms of Equation (8A.8), we obtain

$$L(d_1 \oplus d_2) = \log_e\!\left[\frac{\left(\dfrac{e^{L(d_1)}}{1 + e^{L(d_1)}}\right)\!\left(\dfrac{1}{1 + e^{L(d_2)}}\right) + \left(\dfrac{1}{1 + e^{L(d_1)}}\right)\!\left(\dfrac{e^{L(d_2)}}{1 + e^{L(d_2)}}\right)}{\left(\dfrac{e^{L(d_1)}}{1 + e^{L(d_1)}}\right)\!\left(\dfrac{e^{L(d_2)}}{1 + e^{L(d_2)}}\right) + \left(\dfrac{1}{1 + e^{L(d_1)}}\right)\!\left(\dfrac{1}{1 + e^{L(d_2)}}\right)}\right] \tag{8A.9}$$

$$= \log_e\!\left[\left(\frac{e^{L(d_1)} + e^{L(d_2)}}{\left[1 + e^{L(d_1)}\right]\left[1 + e^{L(d_2)}\right]}\right)\bigg/\left(\frac{1 + e^{L(d_1)}\,e^{L(d_2)}}{\left[1 + e^{L(d_1)}\right]\left[1 + e^{L(d_2)}\right]}\right)\right] \tag{8A.10}$$

$$= \log_e\!\left[\frac{e^{L(d_1)} + e^{L(d_2)}}{1 + e^{L(d_1)}\,e^{L(d_2)}}\right] \tag{8A.11}$$
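
The result in Equation (8A.11) is easy to verify numerically. The following minimal sketch (plain Python; the function names are illustrative and not from the text) computes the log-likelihood ratio of the modulo-2 sum directly from the probabilities, as in Equation (8A.8), and confirms that it matches the closed form of Equation (8A.11).

```python
import math
import random

def llr(p_plus):
    """LLR of a bipolar bit, Equation (8A.2): ln[ P(d = +1) / P(d = -1) ]."""
    return math.log(p_plus / (1.0 - p_plus))

def llr_xor_closed_form(L1, L2):
    """Equation (8A.11): LLR of d1 xor d2 from the two individual LLRs."""
    return math.log((math.exp(L1) + math.exp(L2)) / (1.0 + math.exp(L1) * math.exp(L2)))

def llr_xor_from_probs(p1, p2):
    """Equation (8A.8): d1 xor d2 = +1 (logic 1) when the two bits differ."""
    p_differ = p1 * (1.0 - p2) + (1.0 - p1) * p2
    p_same = p1 * p2 + (1.0 - p1) * (1.0 - p2)
    return math.log(p_differ / p_same)

random.seed(1)
for _ in range(5):
    p1, p2 = random.uniform(0.01, 0.99), random.uniform(0.01, 0.99)
    closed = llr_xor_closed_form(llr(p1), llr(p2))
    direct = llr_xor_from_probs(p1, p2)
    assert abs(closed - direct) < 1e-12
    print(f"p1={p1:.3f} p2={p2:.3f}  L(d1 xor d2) = {closed:+.4f}")
```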

Appendix 8B: Using Bayes’ Theorem to Simplify the Bit Conditional Probability

Proof that:

$$P(v_n = 1 \mid y_n) = P(x_n = 1 \mid y_n) = \frac{1}{1 + e^{-2y_n/\sigma^2}} \tag{8B.1}$$

where vn ∈ {1, 0}, xn ∈ {+1, –1}, and yn = xn + gn, with gn an AWGN sample of zero mean and variance σ², and where the a priori probabilities are assumed to be P(xn = +1) = P(xn = –1) = 1/2.

Using Bayes’ theorem, we write

$$P(x_n = 1 \mid y_n) = \frac{P(y_n \mid x_n = 1)\,P(x_n = 1)}{P(y_n \mid x_n = 1)\,P(x_n = 1) + P(y_n \mid x_n = -1)\,P(x_n = -1)} \tag{8B.2}$$

$$= \frac{\dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(y_n - 1)^2}{2\sigma^2}}}{\dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(y_n - 1)^2}{2\sigma^2}} + \dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(y_n + 1)^2}{2\sigma^2}}} \tag{8B.3}$$

$$= \frac{e^{y_n/\sigma^2}}{e^{y_n/\sigma^2} + e^{-y_n/\sigma^2}} = \frac{1}{1 + e^{-2y_n/\sigma^2}} \tag{8B.4}$$

Similarly, we can write

$$P(v_n = 0 \mid y_n) = P(x_n = -1 \mid y_n) = \frac{1}{1 + e^{+2y_n/\sigma^2}} \tag{8B.5}$$
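
As a quick numerical check of Equations (8B.1) through (8B.5), the sketch below (plain Python; the helper names are illustrative assumptions, not from the text) evaluates the Bayes ratio of the two Gaussian likelihoods directly and compares it with the closed logistic form.

```python
import math

def gaussian_pdf(y, mean, sigma):
    """N(mean, sigma^2) density, as used in Equation (8B.3)."""
    return math.exp(-(y - mean) ** 2 / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)

def p_plus_one_bayes(y, sigma):
    """Equations (8B.2)-(8B.3): P(x = +1 | y) with equal a priori probabilities."""
    num = gaussian_pdf(y, +1.0, sigma)
    return num / (num + gaussian_pdf(y, -1.0, sigma))

def p_plus_one_logistic(y, sigma):
    """Equations (8B.1)/(8B.4): the same probability in closed form."""
    return 1.0 / (1.0 + math.exp(-2.0 * y / sigma ** 2))

sigma = 1.3   # illustrative noise standard deviation
for y in (-2.0, -0.4, 0.0, 0.7, 1.9):
    assert abs(p_plus_one_bayes(y, sigma) - p_plus_one_logistic(y, sigma)) < 1e-12
    print(f"y={y:+.1f}  P(x=+1|y)={p_plus_one_logistic(y, sigma):.4f}")
```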

Appendix 8C: Probability that a Binary Sequence Contains an Even Number of Ones

For the sequence A = a1,..., ak,..., aK, the probability that A contains an even number of ones is

$$P_A(\text{even}) = \frac{1}{2} + \frac{1}{2}\prod_{k=1}^{K}(1 - 2P_k) \tag{8C.1}$$

Proof by induction: Let us denote the expression P(ak = 1) more simply as Pk. Also, let K = 2, so that

$$\begin{aligned}
P_A(\text{even}) &= P(a_1 \oplus a_2 = 0) = P_1 P_2 + (1 - P_1)(1 - P_2)\\
&= 2P_1P_2 + 1 - P_1 - P_2\\
&= \tfrac{1}{2}\left(4P_1P_2 + 2 - 2P_1 - 2P_2\right)\\
&= \tfrac{1}{2}\left(1 + 1 - 2P_1 - 2P_2 + 4P_1P_2\right)\\
&= \tfrac{1}{2} + \tfrac{1}{2}(1 - 2P_1)(1 - 2P_2)
\end{aligned} \tag{8C.2}$$

When K = 3, we can write

$$\begin{aligned}
P_A(\text{even}) &= P(a_1 \oplus a_2 \oplus a_3 = 0)\\
&= P_1P_2(1 - P_3) + P_1P_3(1 - P_2) + P_2P_3(1 - P_1) + (1 - P_1)(1 - P_2)(1 - P_3)\\
&= \tfrac{1}{2} + \tfrac{1}{2}(1 - 2P_1)(1 - 2P_2)(1 - 2P_3)
\end{aligned} \tag{8C.3}$$

To complete the induction, assume the result holds for the sequence A′ = a1, . . . , aK–1. The full sequence A has an even number of ones either when A′ has even parity and aK = 0, or when A′ has odd parity and aK = 1, so that PA(even) = PA′(even)(1 − PK) + [1 − PA′(even)]PK; substituting the assumed form for PA′(even) reproduces the same expression with one additional factor (1 − 2PK). Hence, for any value of K, we can write

$$P_A(\text{even}) = \frac{1}{2} + \frac{1}{2}\prod_{k=1}^{K}(1 - 2P_k) \tag{8C.4}$$
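
Equation (8C.4) can be spot-checked by brute force. The sketch below (plain Python; the function names are illustrative assumptions) enumerates all 2^K binary sequences, sums the probabilities of those with an even number of ones, and compares the result with the closed form.

```python
import itertools
import random
from functools import reduce

def p_even_closed_form(p):
    """Equation (8C.1): P(even number of ones), where p[k] = P(a_k = 1)."""
    prod = reduce(lambda acc, pk: acc * (1.0 - 2.0 * pk), p, 1.0)
    return 0.5 + 0.5 * prod

def p_even_enumerate(p):
    """Brute-force sum over all 2^K binary sequences."""
    total = 0.0
    for bits in itertools.product((0, 1), repeat=len(p)):
        prob = 1.0
        for b, pk in zip(bits, p):
            prob *= pk if b == 1 else (1.0 - pk)
        if sum(bits) % 2 == 0:
            total += prob
    return total

random.seed(2)
for K in (2, 3, 5, 8):
    p = [random.random() for _ in range(K)]
    assert abs(p_even_closed_form(p) - p_even_enumerate(p)) < 1e-12
    print(f"K={K}: P_A(even) = {p_even_closed_form(p):.6f}")
```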

Appendix 8D: Simplified Expression for the Hyperbolic Tangent of the Natural Log of a Ratio of Binary Probabilities

Proof that:

$$\tanh\!\left[\frac{1}{2}\log_e\!\left(\frac{p_1}{p_0}\right)\right] = p_1 - p_0 = 2p_1 - 1 \tag{8D.1}$$

Let

$$x = \frac{1}{2}\log_e\!\left(\frac{p_1}{p_0}\right); \quad \text{then } 2x = \log_e\!\left(\frac{p_1}{p_0}\right) \text{ and } e^{2x} = \frac{p_1}{p_0} \tag{8D.2}$$

$$\tanh(x) = \frac{\sinh(x)}{\cosh(x)} = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \tag{8D.3}$$

Multiplying the numerator and denominator on the right side of Equation (8D.3) by e^x yields

$$\tanh(x) = \frac{e^{2x} - 1}{e^{2x} + 1} = \frac{(p_1/p_0) - 1}{(p_1/p_0) + 1} = \frac{p_1 - p_0}{p_1 + p_0} \tag{8D.4}$$

For a binary channel, p1 + p0 = 1. Therefore,

$$\tanh\!\left[\frac{1}{2}\log_e\!\left(\frac{p_1}{p_0}\right)\right] = p_1 - p_0 = 2p_1 - 1 \tag{8D.5}$$
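
A quick numerical check of Equation (8D.5) is shown below (plain Python; variable names are illustrative): for random binary probabilities p1 and p0 = 1 − p1, the hyperbolic tangent of half the log ratio equals p1 − p0.

```python
import math
import random

random.seed(3)
for _ in range(5):
    p1 = random.uniform(0.01, 0.99)   # P(bit = 1)
    p0 = 1.0 - p1                     # P(bit = 0); binary alphabet, so p0 + p1 = 1
    lhs = math.tanh(0.5 * math.log(p1 / p0))   # left-hand side of (8D.5)
    rhs = p1 - p0                              # = 2*p1 - 1
    assert abs(lhs - rhs) < 1e-12
    print(f"p1={p1:.3f}  tanh(0.5*ln(p1/p0))={lhs:+.5f}  p1-p0={rhs:+.5f}")
```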

Appendix 8E: Proof that ϕ(x) = ϕ–1(x)

If f(g(x)) = x, then f(x) is the inverse of g(x), stated as f(x) = g–1(x). For the function y = ϕ(x) = –ln[tanh(x/2)], we prove that ϕ–1(x) = ϕ(x) by showing that ϕ(ϕ(x)) = x, that is, that ϕ(y) = x:

$$y = \phi(x) = -\ln\!\left[\tanh\!\left(\frac{x}{2}\right)\right] \tag{8E.1}$$

$$-y = \ln\!\left[\tanh\!\left(\frac{x}{2}\right)\right] \tag{8E.2}$$

$$e^{-y} = \tanh\!\left(\frac{x}{2}\right) = \frac{\sinh(x/2)}{\cosh(x/2)} \tag{8E.3}$$

$$= \frac{e^{+x/2} - e^{-x/2}}{e^{+x/2} + e^{-x/2}} = \frac{e^{+x} - 1}{e^{+x} + 1} \tag{8E.4}$$

$$e^{-y}\left(e^{+x} + 1\right) = e^{+x} - 1 \tag{8E.5}$$

$$e^{-y}e^{+x} + e^{-y} = e^{+x} - 1 \tag{8E.6}$$

$$e^{+x}\left(1 - e^{-y}\right) = 1 + e^{-y} \tag{8E.7}$$

$$e^{+x} = \frac{1 + e^{-y}}{1 - e^{-y}} = \frac{e^{y/2} + e^{-y/2}}{e^{y/2} - e^{-y/2}} \tag{8E.8}$$

$$e^{-x} = \frac{e^{y/2} - e^{-y/2}}{e^{y/2} + e^{-y/2}} = \frac{\sinh(y/2)}{\cosh(y/2)} = \tanh\!\left(\frac{y}{2}\right) \tag{8E.9}$$

$$-x = \ln\!\left[\tanh\!\left(\frac{y}{2}\right)\right] \tag{8E.10}$$

$$x = \phi(y) = -\ln\!\left[\tanh\!\left(\frac{y}{2}\right)\right] \tag{8E.11}$$
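
The involution property proved above is easy to confirm numerically. The sketch below (plain Python; the function name phi is an illustrative choice) applies ϕ twice and recovers the original argument.

```python
import math

def phi(x):
    """phi(x) = -ln(tanh(x/2)), defined for x > 0, as in Equation (8E.1)."""
    return -math.log(math.tanh(x / 2.0))

for x in (0.05, 0.3, 1.0, 2.5, 6.0):
    roundtrip = phi(phi(x))           # phi is its own inverse: phi(phi(x)) = x
    assert abs(roundtrip - x) < 1e-8
    print(f"x={x:<4}  phi(x)={phi(x):8.4f}  phi(phi(x))={roundtrip:8.4f}")
```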

Appendix 8F: Bit Probability Initialization

Proof that:

$$L(q_{m,n}) = L(v_n) = \log_e\!\left\{\frac{\left[1 + e^{-2y_n/\sigma^2}\right]^{-1}}{\left[1 + e^{+2y_n/\sigma^2}\right]^{-1}}\right\} = \frac{2y_n}{\sigma^2} \tag{8F.1}$$

where, from Equations (8B.1) and (8B.5), the numerator and denominator inside the logarithm are P(vn = 1 | yn) and P(vn = 0 | yn), respectively. Writing out the ratio,

$$\log_e\!\left\{\frac{\left[1 + e^{-2y_n/\sigma^2}\right]^{-1}}{\left[1 + e^{+2y_n/\sigma^2}\right]^{-1}}\right\} = \log_e\!\left[\frac{1 + e^{+2y_n/\sigma^2}}{1 + e^{-2y_n/\sigma^2}}\right] \tag{8F.2}$$

$$= \log_e\!\left\{\left(\frac{e^{+2y_n/\sigma^2}}{e^{+2y_n/\sigma^2}}\right)\left(\frac{1 + e^{+2y_n/\sigma^2}}{1 + e^{-2y_n/\sigma^2}}\right)\right\} = \log_e\!\left\{e^{+2y_n/\sigma^2}\left(\frac{1 + e^{+2y_n/\sigma^2}}{1 + e^{+2y_n/\sigma^2}}\right)\right\} \tag{8F.3}$$

$$= \log_e\!\left[e^{+2y_n/\sigma^2}\right] = \frac{2y_n}{\sigma^2} \tag{8F.4}$$
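
Equation (8F.4) can be checked numerically by forming the two bit probabilities of Equations (8B.1) and (8B.5) and taking the log of their ratio. The sketch below (plain Python; names and the sample values are illustrative) confirms that the result equals 2yn/σ².

```python
import math

def bit_llr_from_probs(y, sigma2):
    """Log of the ratio of (8B.1) to (8B.5): L(v) = ln[ P(v=1|y) / P(v=0|y) ]."""
    p1 = 1.0 / (1.0 + math.exp(-2.0 * y / sigma2))   # Equation (8B.1)
    p0 = 1.0 / (1.0 + math.exp(+2.0 * y / sigma2))   # Equation (8B.5)
    return math.log(p1 / p0)

sigma2 = 2.5   # illustrative noise variance
for y in (-1.9, -0.4, 0.8, 1.4):
    llr = bit_llr_from_probs(y, sigma2)
    assert abs(llr - 2.0 * y / sigma2) < 1e-12
    print(f"y={y:+.1f}  L(v)={llr:+.4f}  2y/sigma^2={2.0 * y / sigma2:+.4f}")
```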


Problems

8.1. A BPSK system receives equiprobable bipolar symbols (+1 or –1) plus AWGN. Assume unity noise variance. At time k, the value of the received signal xk is equal to 0.11.

  (a) Calculate the two likelihood values for this received signal.

  (b) What would be the maximum a posteriori decision, +1 or –1?

  (c) The a priori probability that the transmitted symbol was +1 is equal to 0.3. What would be the maximum a posteriori decision, +1 or –1?

  (d) Assuming the a priori probabilities from part (c), calculate the log-likelihood ratio L(dk|xk).

8.2. Consider the two-dimensional parity-check code example described in Section 8.1.3. As outlined there, the transmitted symbols are represented by the sequence d1, d2, d3, d4, p12, p34, p13, p24, resulting in a code rate of 1/2. A particular application requiring a higher data rate allows the output sequence from this code to be punctured by discarding every other parity bit, resulting in an overall code rate of 2/3. The transmitted output is now given by the sequence d1, d2, d3, d4, p12, _, p13, _. (Parity bits p34 and p24 are not transmitted.) The transmitted sequence is {di}, {pij} = +1 –1 –1 +1 +1 +1, where i and j are location indexes. The noise transforms this data plus parity sequence into the received sequence {xk} = 0.75, 0.05, 0.10, 0.15, 1.25, 3.0, where k is a time index. Calculate the values of the soft outputs for the data bits after two horizontal and two vertical decoding iterations. Assume unity noise variance.

8.3. Consider the parallel concatenation of two RSC component encoders as shown in Figure 8.7. An interleaver of block size 10 maps a sequence of input bits {dk} to bits {d′k}, where the interleaver permutation is given by [6, 3, 8, 9, 5, 7, 1, 4, 10, 2]; that is, the first bit of the incoming data block is mapped to position 6, the second bit is mapped to position 3, and so on. The input sequence is given by (0, 1, 1, 0, 0, 1, 0, 1, 1, 0). Assume that the component encoders start in the all-zeros state and that no termination bits are added to force them back to the all-zeros state.

  (a) Calculate the 10-bit parity sequence {v1k}.

  (b) Calculate the 10-bit parity sequence {v2k}.

  (c) The switch yielding the sequence {vk} performs puncturing such that {vk} is given by v1k, v2(k+1), v1(k+2), v2(k+3), . . . , and the code rate is 1/2. Calculate the weight of the output codeword.

  (d) When decoding with the MAP algorithm, what changes do you think need to be made with regard to initializing the state metrics and branch metrics if the encoders are left unterminated?

8.4.

  (a) For the nonrecursive encoder shown in Figure P8.1, calculate the minimum distance of the overall code.

  (b) For the recursive encoder shown in Figure 8.7, calculate the minimum distance of the overall code. Assume that there is no puncturing so that the code rate is 1/3.

  (c) For the encoder shown in Figure 8.7, discuss the effect on the output code weight if the input to each component encoder is given by the weight-2 sequence (0 0 . . . 0 0 1 0 0 1 0 0 . . . 0 0). (Assume no puncturing.)

  (d) Repeat part (c) for the case where the weight-2 sequence is given by (0 0 . . . 0 0 1 0 1 0 0 . . . 0 0).


    Figure P8.1

8.5. Consider that the encoder in Figure 8.6a is used as a component code within a turbo code. Its 4-state trellis structure is shown in Figure 8.6b. The code rate is 1/2, and the branch labeling, u v, represents the output branch word (code bits) for that branch, where u is a data bit (systematic code) and v is a parity bit, and at each time k, a data bit and parity bit are transmitted. Signals received from a demodulator have the noise-disturbed u, v values of 1.9, 0.7 at time k = 1, and 0.4, 0.8 at time k = 2. Assume that the a priori probability for the data bit being a 1 or 0 is equally likely and that the encoder begins in the all-zeros state at starting time k = 1. Also assume that the noise variance is equal to 1.3. Recall that a data sequence of N bits is characterized by N transition-time intervals and N + 1 states (from start to finish). Thus, for this example, bits are launched at times k = 1, 2, and we are interested in the state metrics at times k = 1, 2, 3.

  (a) Calculate the branch metrics for times k = 1 and k = 2 that are needed for using the MAP algorithm.

  (b) Calculate the forward state metrics for times k = 1, 2, and 3.

  (c) The values of the reverse state metrics at times k = 2 and 3 are given below in Table P8.1 for each valid state. Based on the values in the table and the values calculated in parts (a) and (b), calculate the values for the log-likelihood ratio associated with the data bits at times k = 1 and k = 2. Use the MAP decision rule to find the most likely data bit sequence that was transmitted.

Table P8.1

βkm      k = 2     k = 3
m = a     4.6       2.1
m = b     2.4      11.5
m = c     5.7       3.4
m = d     4.3       0.9

8.6. Suppose the received sequence obtained in Problem 8.5 is in fact for a rate 2/3 code that is obtained by puncturing the rate 1/2 code (defined by the trellis in Figure 8.6b). The puncturing is such that only every second parity bit generated is transmitted. Therefore, the four-signal sequence received represents data symbol, parity symbol, data symbol, data symbol. Calculate the branch metrics and forward state metrics for times k = 1 and k = 2 that would be needed for using the MAP algorithm.

8.7. The trellis for a four-state code used as a component code within a turbo code is shown in Figure 8.6b. The code rate is 1/2, and the branch labeling, u v, represents the output branch word (code bits) for that branch, where u is a data bit (systematic code) and v is a parity bit. A block of N = 1024 samples is received from a demodulator. Assume that the first signals in the block arrive at time k = 1, and that at each time k, a noisy data bit and parity bit are received. At time k = 1023, the received signals have noisy u, v values of 1.3, 0.8, and at time k = 1024, the values are 1.4, 0.9. Assume that the a priori probability for the data bit being a 1 or 0 is equally likely and that the encoder ends in state a = 00 at termination time k = 1025. Also, assume that the noise variance is equal to 2.5.

  (a) Calculate the branch metrics for times k = 1023 and k = 1024.

  (b) Calculate the reverse state metrics for times k = 1023, 1024, and 1025.

  (c) The values of the forward state metrics at times k = 1023 and k = 1024 are given below in Table P8.2 for each valid state. Based on the values in the table and the values calculated in parts (a) and (b), calculate the values for the likelihood ratio associated with the data bits at times k = 1023 and k = 1024, and using the MAP decision rule, find the most likely data bit sequence that was transmitted.

    Table P8.2

    αkm      k = 1023     k = 1024
    m = a      6.6          12.1
    m = b      7.0           1.5
    m = c      4.2          13.4
    m = d      4.0           5.9

8.8. Given two statistically independent observations of a noisy signal x1 and x2, prove that the log-likelihood ratio (LLR) L(d|x1, x2) can be expressed in terms of individual LLRs as

L(d|x1, x2) = L(x1|d) + L(x2|d) + L(d)

where L(d) is the a priori LLR of the underlying data bit d.

8.9.

  (a) Using Bayes’ theorem, show the detailed steps that transform αkm in Equation (8.69) to Equation (8.70b). Hint: Use a simple lettering scheme like the one used in Equations (8.61) and (8.62).

  (b) Explain how the summation over the states m′ in Equation (8.70a) results in the expression shown in Equation (8.70b).

  (c) Repeat part (a) to show in detail how Equation (8.73) evolves to Equation (8.75). Also explain how the summation over the states m′ at the future time k + 1 results in the form of Equation (8.75).

8.10. Starting with Equation (8.79) for the branch metric δki,m, show the detailed development resulting in Equation (8.80) and indicate which terms can be identified as the constant Ak in Equation (8.80). Why does the Ak term disappear in Equation (8.81a)?

8.11. The interleaver in Figure 8.8 (identical to the interleaver in the corresponding encoder) is needed to ensure that the sequence out of DEC1 is time-ordered in the same way as the sequence {yek}. Can this be implemented in a simpler way? What about using a deinterleaver in the lower line? Wouldn’t that accomplish the same time ordering more simply? If we did that, then the two deinterleavers, just prior to the output, could be eliminated. Explain why that would not work.

8.12. In the implementation of the Viterbi decoding algorithm, an add-compare-select (ACS) processor is used. However, in performing the maximum a posteriori (MAP) algorithm in turbo decoding, there is no such concept as comparing and selecting one transition over another. Instead the MAP algorithm incorporates all of the branch and state metrics at each time interval. Explain the reason for this fundamental difference between the two algorithms.

8.13. Figure P8.2 illustrates a recursive systematic convolutional (RSC) rate 1/2, K = 4, encoder. Note that the figure uses 1-bit delay blocks rather than storage stages (see Section 8.1.7.4). Thus, the current state of this circuit can be described by the signal levels at points ak–1, ak–2, and ak–3, similar to the way a state is described in the format using storage stages. Form a table, similar to Table 8.1, that describes all possible transitions in this circuit and use the table to draw a trellis section.


Figure P8.2 Recursive systematic convolutional (RSC) encoder, rate 1/2, K = 4.

8.14. Figure P8.3 illustrates a recursive systematic convolutional (RSC) rate 2/3, K = 3, encoder. Note that the figure uses 1-bit delay blocks rather than storage stages (see Section 8.1.7.4). Form a table, similar to Table 8.1, that describes all possible transitions in this circuit and use the table to draw a trellis section. Use a table similar to Table 8.2 to find the output codeword for the message sequence 1100110011. At each clock time, data bits enter the circuit in pairs {d1k, d2k} and each output branch word {d1k, d2k, vk} is made up of that pair of data bits plus one parity bit, vk.


Figure P8.3 Recursive systematic convolutional (RSC) encoder, rate 2/3, K = 3.

8.15. Consider a turbo code consisting of two four-state convolutional codes as the component codes, both of which are described by the same trellis, as shown in Figure 8.6b. The code rate is equal to 1/2, and the block length is equal to 12. The second encoder is left unterminated. The branch metrics, forward state metrics, and reverse state metrics for the data bits associated with the terminated encoder are described by the matrixes that follow. The received 12-signal vector is of the form data signal, parity signal, data signal, parity signal, and so forth, and has the following values:

1.2   1.3   −12   0.6   −0.4   1.9   −0.7   −1.9   −2.2   0.2   −0.2   −0.1   0.6

Branch metric (δki,m) matrix, with rows ordered (i, m) = (0, a), (1, a), (0, b), (1, b), (0, c), (1, c), (0, d), (1, d) and columns k = 1, . . . , 6:

$$\delta_k^{i,m}=\begin{bmatrix}\delta_1^{0,a}&\delta_2^{0,a}&\cdots&\delta_6^{0,a}\\ \delta_1^{1,a}&\delta_2^{1,a}&\cdots&\delta_6^{1,a}\\ \vdots& & &\vdots\\ \delta_1^{1,d}&\delta_2^{1,d}&\cdots&\delta_6^{1,d}\end{bmatrix}=\begin{bmatrix}1.00&1.00&1.00&1.00&1.00&1.00\\ 3.49&0.74&2.12&0.27&0.37&1.28\\ 1.00&1.00&1.00&1.00&1.00&1.00\\ 3.49&0.74&2.12&0.27&0.37&1.28\\ 1.92&1.35&2.59&0.39&1.11&1.35\\ 1.82&0.55&0.82&0.70&0.33&0.95\\ 1.92&1.35&2.59&0.39&1.11&1.35\\ 1.82&0.55&0.82&0.70&0.33&0.95\end{bmatrix}$$

Alpha (αkm) matrix, with rows m = a, b, c, d and columns k = 1, . . . , 7:

$$\alpha_k^{m}=\begin{bmatrix}\alpha_1^{a}&\alpha_2^{a}&\cdots&\alpha_7^{a}\\ \alpha_1^{b}&\alpha_2^{b}&\cdots&\alpha_7^{b}\\ \alpha_1^{c}&\alpha_2^{c}&\cdots&\alpha_7^{c}\\ \alpha_1^{d}&\alpha_2^{d}&\cdots&\alpha_7^{d}\end{bmatrix}=\begin{bmatrix}1.00&1.00&1.00&5.05&8.54&10.41&24.45\\ 0.00&0.00&1.92&12.79&5.07&10.93&31.48\\ 0.00&3.49&0.74&4.03&14.16&8.22&24.30\\ 0.00&0.00&4.77&5.77&5.63&17.53&27.76\end{bmatrix}$$

Beta (βkm) matrix, with rows m = a, b, c, d and columns k = 1, . . . , 7:

$$\beta_k^{m}=\begin{bmatrix}\beta_1^{a}&\beta_2^{a}&\cdots&\beta_7^{a}\\ \beta_1^{b}&\beta_2^{b}&\cdots&\beta_7^{b}\\ \beta_1^{c}&\beta_2^{c}&\cdots&\beta_7^{c}\\ \beta_1^{d}&\beta_2^{d}&\cdots&\beta_7^{d}\end{bmatrix}=\begin{bmatrix}24.45&5.44&2.83&1.12&1.00&1.00&1.00\\ 24.43&5.62&3.17&0.70&0.37&1.28&0.00\\ 21.32&5.42&3.53&0.81&0.43&0.00&0.00\\ 21.31&5.79&2.75&1.14&1.42&0.00&0.00\end{bmatrix}$$

Calculate the log-likelihood ratio for each of the six data bits {dk} and by using the MAP decision rule, find the most likely data bit sequence that was transmitted.

8.16. Consider a message sequence m = 0 1 1 that has been encoded with the (6, 3) block code introduced in Section 8.2.3.1, yielding the coded sequence U(n) = [1 1 0 0 1 1]. Assume transmission using NRZ waveforms over an AWGN channel with σ² = 1. The coded system is configured to use LDPC message-passing decoding. Let us suppose that the received vector is Y(n) = [2.0, 3.0, 0.3, –2.0, 3.0, –1.0]. Form a table, similar to Table 8.6, to show the receiver’s initialization probabilities qm,n(1) sent from {vn} to {Cm}.

8.17. For the table that you formed for Problem 8.16, use the initialization probability values to come up with a hard-decision estimate V(n) of the received code bit sequence. Compare V(n) with the transmitted sequence U(n) given in Problem 8.16. There should be two bits in error. Which bit numbers are the incorrect ones? Form a table (the next LDPC step) similar to Table 8.7 to show check message probabilities rm,n(1) sent from {Cm} to {vn}.

8.18. Continuing the LDPC work started in Problems 8.16 and 8.17, form a table showing the refined probabilities qm,n(1). Complete this first iteration by also forming probability estimates of the code bit components Q(n) and an estimate of V(n). How does V(n) compare with the U(n) given in Problem 8.16? Are there any undetected errors? If so, which bit numbers?

8.19. Continue the LDPC work started with Problems 8.16 through 8.18 by forming a second iteration of the decoding. Show the values of Q(n) and V(n). How does V(n) compare with the transmitted U(n)?

Questions

8.1. Why is the Shannon limit of –1.6 dB not a useful goal in the design of real systems? (See Section 8.1.5.2.)

8.2. What are the consequences of the Viterbi decoding algorithm not yielding a posteriori probabilities? (See Section 8.1.6.)

8.3. What is a more descriptive name for the Viterbi algorithm? (See Section 8.1.6.)

8.4. Describe the similarities and differences between implementing a Viterbi decoding algorithm and implementing a maximum a posteriori (MAP) decoding algorithm. (See Section 8.1.6.)

8.5. What is the difference between the H matrix of a regular LDPC code and the H matrix of an irregular LDPC code? (See Section 8.2.2)

8.6. What is the difference between the a posteriori probability (APP) ratio and the likelihood ratio? (See Section 8.2.5.2.)

8.7. What property allows us to replace the likelihood terms in Equation (8.96) with the product of individual check probabilities?

8.8. When passing information between bipartite nodes, why does Gallager’s sum-product algorithm exclude information passed in the prior round? (See Section 8.2.4.1.)
