We consider the problem of long-distance transmission, over a low-cost electric cable, of a compressed information source S coming from a video signal compression system. The compression system used ensures that the source S delivers words from a dictionary of only five words: {s1, s2, s3, s4, s5}. The probabilities of emission of the symbols are given in Table 2.1.
Table 2.1. Probability of emission of source S
s | s1 | s2 | s3 | s4 | s5 |
Pr(s) | 0.11 | 0.19 | 0.40 | 0.21 | 0.09 |
The symbols are delivered by the source at a rate of 13.5 × 10⁶ symbols per second.
NOTE.– In the design of the code C2, the coding suffix associated with the element of lower probability will be systematically set to 1.
Deduce the average length L2 of the codewords of C2, its efficiency η2 and the bitrate per second D2.
Deduce the corresponding sequence SB of bits obtained at the output of the Huffman coding C2 . SB is of the form { ⋯⋯ bk–1, bk, bk+1 ⋯⋯}. What do you observe?
NOTE.– Both here and in question b), we will consider that, at the start of the sequence, the parity flip-flop counting the number of “1”s is set to 1.
The following SBB binary sequence will be used for the rest of this problem:
Entropy bitrate:
Fixed-length code C1 (length L1): we have five symbols to encode, hence: L1 = 3 bits.
Efficiency:
Bitrate of code C1:
Average length of the codewords:
Efficiency of code C2
Bitrate of code C2:
SS | s2 | s5 | s3 | s4 | s3 | s3 | s2 | s3 | s5 | s4 |
SB(s) | 001 | 0001 | 1 | 01 | 1 | 1 | 001 | 1 | 0001 | 01 |
We observe more “0” bits than “1” bits and, in addition, sequences of three consecutive zeros appear from time to time.
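The numerical answers above (entropy, average length L2, efficiency η2 and bitrate D2) can be cross-checked programmatically. This is a sketch under the assumption of the standard Huffman construction; the symbol names and the heapq-based merging are illustrative, not from the text:

```python
import heapq
import math

probs = {"s1": 0.11, "s2": 0.19, "s3": 0.40, "s4": 0.21, "s5": 0.09}

# Source entropy H(S) in bits/symbol
H = -sum(p * math.log2(p) for p in probs.values())

# Huffman construction: repeatedly merge the two least probable nodes.
heap = [(p, [s]) for s, p in probs.items()]
heapq.heapify(heap)
length = {s: 0 for s in probs}
while len(heap) > 1:
    p1, g1 = heapq.heappop(heap)
    p2, g2 = heapq.heappop(heap)
    for s in g1 + g2:          # every symbol in a merged group gains one bit
        length[s] += 1
    heapq.heappush(heap, (p1 + p2, g1 + g2))

L2 = sum(probs[s] * length[s] for s in probs)   # average codeword length
eta2 = H / L2                                   # efficiency of code C2
D2 = 13.5e6 * L2                                # bitrate in bit/s

# Expected orders of magnitude: H ~ 2.12 bit/symbol, L2 = 2.19 bit,
# eta2 ~ 0.968, D2 ~ 29.6 Mbit/s.
print(H, L2, eta2, D2)
```

The codeword lengths found (1, 2, 3, 4, 4) match the codewords 1, 01, 001, 0000, 0001 of the table below.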
Sequences of three consecutive zeros are replaced by sequences of the type “00V” or “B0V”, and thus the maximum duration without a pulse is 2T.
The zeros of ΓRZ (f) occur every 1/Tb, so its bandwidth is 1/Tb. It is substantially the same for the HDB-n code (here, HDB-2).
Furthermore, in the vicinity of the frequency f = 0, the power is zero; this code can therefore be used for long-distance cable transmission. However, the presence of long sequences of zeros is detrimental to clock recovery, hence the use of the HDB-n code.
The zeros of the power spectral density occur every 1/2Tb; its bandwidth is then approximately 1/2Tb.
Thus, the bandwidth of the transmitted signal is reduced by a factor of 2 compared to question 4, because of the introduced correlation. In addition, there is no continuous (DC) component.
hence, an instantaneous decoding (no recursion). Thus:
We consider the digital transmission signal defined by:
and:
with:
The symbols an are independent random variables taking only the values 0 and 1, each with probability 1/2, and t0 is random, uniformly distributed on the time interval [0, T[.
NOTE.– In this problem, we consider that the instant t belongs to the time interval [t0, t0 + T[. Without any loss of generality, we will assign the index n to this interval, in which the time t lies a priori.
Calculate the autocorrelation function Rs (τ) and the power spectral density Γs (f) of s (t), and treat the two particular cases:
Consider the following form of the signal s (t) (with θ < T/2 in Figure 2.3).
Such a signal can be represented by:
With the deterministic function (pulse):
and:
Since t0 is uniform on the time interval [0, T[, s (t) is a second-order stationary random signal.
Moreover, the autocorrelation function Rs (τ) is an even function: Rs (–τ) = Rs (τ), and thus the calculation can equivalently be carried out for τ ≥ 0 or for τ ≤ 0.
The autocorrelation function Rs (τ) is written:
hence:
Moreover, using the theorem of compound probability, Rs(τ) is also written:
To calculate Rs(τ), it is sufficient to calculate Pr{V at t} and Pr{V at t–τ/V at t}
A. First case, where 0 < θ ≤ T/2
There are three situations to consider.
The only possibility is that t and t – τ belong to the same first part of time slice (hatched region).
Let:
and:
or, since an = 1
that is to say:
Or:
with t0 random, of a posteriori uniform law over an interval of measure θ, hence with probability density pdf(t0) equal to 1/θ
Hence:
So finally:
NOTE.– In general, if a priori t belongs to a time interval of measure T (t0 being random, of uniform law on the time interval [0, T[), we know a posteriori that t belongs to the first part of the time interval, of measure θ. It is therefore this a posteriori uniform law that is used in the conditional probabilities on time.
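The conditional probability reasoning above can be checked by simulation. The sketch below estimates Rs(τ) by time averaging over a long realization; the parameter values (V = 1, T = 1, θ = 0.3T) are illustrative assumptions, not from the text. For this θ, the theory gives Rs(0) = V²θ/2T = 0.15, Rs(T/2) = 0 (since θ < τ ≤ T – θ) and Rs(T) = Rs(0)/2 = 0.075:

```python
import numpy as np

rng = np.random.default_rng(0)
V, T = 1.0, 1.0
theta = 0.3 * T            # pulse width (assumed value, theta <= T/2)
sps = 20                   # samples per symbol period T
n_sym = 50_000

a = rng.integers(0, 2, n_sym)              # symbols a_n in {0, 1}, equiprobable
pulse = np.zeros(sps)
pulse[: round(theta / T * sps)] = V        # RZ-type pulse of width theta
s = np.repeat(a, sps) * np.tile(pulse, n_sym)

def R(lag):
    """Time-average estimate of Rs(lag * T / sps)."""
    return np.mean(s * s) if lag == 0 else np.mean(s[:-lag] * s[lag:])

# Theory above: Rs(0) = 0.15, Rs(T/2) = 0, Rs(T) = 0.075
print(R(0), R(sps // 2), R(sps))
```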
The only possibility is that s(t) and s(t – τ) come from the first parts of two adjacent time slots, hence the independence between the symbols an considered.
In general, we will always have:
and:
turns into:
Or:
Which depends on the value of τ with respect to θ and T
Two situations can occur (see graph in Figure 2.5).
We know that if a real random variable follows a uniform law over a given interval [c, d] , then its probability density is equal to 1/Mes [c, d] and that:
and if Mes[c, d] = 0, then the probability is zero, because we are dealing here with a continuous random variable.
if: θ < τ ≤ T, then:
So if θ < τ ≤ T – θ, the measure of the interval is zero, thus q = 0
Otherwise, if T – θ < τ ≤ T, then the measure of the interval is given by:
τ + θ – T, and:
So if:
and if:
In summary, for 0 < θ ≤ T/2, if:
For values τ > T + θ, the intervals involved in the conditional probability relating to the instants (t – τ) and t are different, hence n – k < n. This implies that the conditional probability Pr{an–k/an}, k ≠ 0, equals Pr{an–k}, and that the probability relating to (t – τ) conditionally on t evolves periodically, from period T to period T, exactly as just described. More precisely, we have:
It is therefore a periodic function of period T
Note that the autocorrelation function breaks down into a sum of two functions:
with R1 (τ) , a non-periodic function, and R2 (τ), a periodic function of period T
Calculation of the power spectral density Γs(f)
Calculation of the power spectral density Γ1(f)
R1 (τ) is an even symmetric function, hence:
but:
and:
Integrating by parts, with:
u = τ and dv = cos(2πfτ)dτ
hence:
which is a classic result. Indeed, recall that if:
then:
Γ1 (f) has a continuous spectrum.
Calculation of Γ2 (f)
The basic form of R2 (τ) is identical to that of R1 (τ) and in addition, it is periodic with period T, then:
It is a discrete spectrum, the discrete spectral components being spaced 1/T apart from each other.
Special case where θ = T/2: binary RZ code.
In this case, we then have:
and:
The continuous component is such that:
Γ2(0) = [continuous component]² × δ(f)
One has:
Thus, the continuous component is then equal to V/4
The function sin(kπ/2) is nonzero for odd integers k, so by setting k = 2n + 1, the expression of Γ2 (f) is written:
This is also written:
Γ2 (f) has discrete components at odd frequencies and in particular at the frequency f = 1/T which makes it possible to recover the clock.
B. Second case, the one where T/2 < θ ≤ T
There are also three situations to consider.
Calculation of p and q. There are two cases:
First possibility: t and t – τ do not belong to the same first part of the time slice, but to the first parts of adjacent time slices:
Or:
Second possibility: t and t – τ belong to the same first part of the time slice, hence:
NOTE.– Hypotheses a) and b) are mutually exclusive.
Thus, for:
Note that in the case a), the expression of Rs (τ) is also valid throughout the time interval τ ∈ [0, θ]
The only possibility is that s (t) = V and s (t – τ) = V come from the first two parts of adjacent time slices.
Let’s set:
and:
hence:
Calculation of p and q. There are two cases:
Following the same approach as in the previous situation, we obtain:
NOTE.– Hypotheses a) and b) are mutually exclusive.
Thus, for T ≤ τ < T + θ:
Note that in case a), the expression of Rs (τ) is also valid throughout the time interval [T ≤ τ ≤ T + θ]
In summary, for T/2 < θ ≤ T:
As in the case 0 < θ ≤ T/2, the probability relating to (t – τ) conditionally on t evolves periodically, from period T to period T, exactly as described above.
In the same way as for 0 < θ ≤ T/2, the autocorrelation function breaks down into a sum of two functions:
with R1(τ) , a non-periodic function, and R2(τ), a periodic function, of period T. In addition, we note that the expression of R1(τ) and R2(τ) are the same as previously (case 0 < θ ≤ T/2).
Calculation of the power spectral density, case where: T/2 < θ ≤ T
The power spectral density of the signal s (t) can be broken down into the sum of two functions:
with:
This is a continuous spectrum.
And:
This is a discrete spectrum.
Special case: θ = T (NRZ code)
In this case, we obtain:
So:
Thus, finally we get:
Notice that for θ = T (NRZ code), the signal has no discrete spectral component at 1/T (Γ2 (f) is zero for all values k ≠ 0). Therefore, clock recovery is not easy with the NRZ code.
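This difference between the RZ and NRZ codes can be illustrated numerically: averaging |DFT|² over random realizations shows a strong spectral line at f = 1/T for the RZ pulse and none for the NRZ pulse. The sampling parameters below are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
sps, n_sym = 16, 512          # samples per period T, number of symbols
k_clock = n_sym               # DFT bin corresponding to f = 1/T

def line_power(pulse, n_avg=20):
    """Average |DFT|^2 at f = 1/T over n_avg independent realizations."""
    acc = 0.0
    for _ in range(n_avg):
        a = rng.integers(0, 2, n_sym).astype(float)   # bits in {0, 1}
        s = np.repeat(a, sps) * np.tile(pulse, n_sym)
        acc += np.abs(np.fft.fft(s)[k_clock]) ** 2
    return acc / n_avg

rz = np.r_[np.ones(sps // 2), np.zeros(sps // 2)]   # RZ pulse (width T/2)
nrz = np.ones(sps)                                  # NRZ pulse (width T)

p_rz, p_nrz = line_power(rz), line_power(nrz)
print(p_rz, p_nrz)    # RZ keeps a strong line at 1/T, NRZ does not
```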
NOTE.– One could easily generalize the problem to the situation where the symbols of information to be transmitted are not equiprobable: Pr {a = 0} = λ and: Pr{a = 1} =1 – λ
The bipolar RZ code is a three-level code such that:
and:
The binary random variables are assumed to be equiprobable:
We consider the digital transmission signal defined by:
The signal is a second-order stationary random signal assuming that the time of origin t0 is random and uniformly distributed over the time interval [0, T[.
NOTE.– In this problem, we consider that the instant t belongs to the time interval [t0, t0 + T[. Without any loss of generality, we will assign the index n to this interval, in which the time t lies a priori.
Moreover, as for problem 17, one could immediately generalize to the situation where the information source does not generate equiprobable symbols b.
However, we preferred not to complicate the problem. For those who wish to do so, after having analyzed the situation in the equiprobable case, it will be easy to generalize the results to a situation with a non-equiprobable source.
Calculate the autocorrelation function Rs(τ) and the power spectral density Γs(f) of the signal s(t)
The signal s (t) can be represented by:
where:
is a deterministic function, and:
The autocorrelation Rs(τ) is written:
Using the theorem of compound probabilities, Rs(τ) is written as:
Calculation of the simple probabilities Pr{V at t} and Pr{–V at t}
since there is independence between t0 and the symbols of information, we have:
Likewise:
and:
Besides:
And yet:
as we have two mutually exclusive hypotheses, hence:
thus:
and:
Let’s now calculate the other probabilities, representing four mutually exclusive hypotheses. Note that for reasons of symmetry, we have:
and:
The only possibility is that t and t – τ belong to the same first half of the time slice:
We have:
therefore:
So, we have: q2 = 0
So, we have also:
and:
The only possibility is that t and t – τ belong to the first halves of adjacent time slices.
Since two successive bits at 1 are transmitted with alternating polarity, then:
yet:
(due to the independence between binary information symbols).
And on the other hand:
hence:
and, by symmetry, we have:
The only possibility is that t and t – τ belong to the first two halves of adjacent time slices.
Since two successive bits at “1” are issued in alternating polarity, then as explained previously:
hence:
We have:
and, on the other hand:
hence:
and, by symmetry, we have:
The only possibility is that (t – τ) and t belong to two first time slices separated by a time slot of intermediate duration T:
Moreover, since we have the conditional event: {V at (t – τ)/V at t} , we must also have in the intermediate time slot a negative impulse, and therefore a symbol an–1 = –1
Therefore:
Because of the statistical independence between the pairs of instants considered and the values of the symbols considered, we have:
The first conditional probability on the instants gives:
The second conditional probability on the symbols gives (because of the independence between the symbols):
and due to the independence between the symbols b:
hence:
and, by symmetry, we have: q3 = q1 and therefore:
As before, the only possibility is that t – τ and t each belong to the first parts of two time slices separated by an intermediate time slice of duration T but, unlike the previous case, no pulse must have been transmitted in the intermediate time slice, hence a symbol an–1 = 0. The conditional probability on the instants remains the same, as does the conditional probability on the symbols a. As a result, we have:
and therefore:
And by symmetry, we also have q2 = q4 and therefore:
Consequently, we finally get:
Actually, we show that for:
Indeed, let’s take for example the case: 2T ≤ τ < 5T/2.
From the study of the previous case, we see that:
Likewise:
So in the general case where: kT < τ ≤ (k + 1/2)T and k ≥ 2, we have: Rs(τ) = 0
Indeed, (for clarity, see the case: 3T/2 < τ ≤ 2T), we have:
That is:
Because of the independence between t0 and the information symbols a, we therefore have:
or:
with:
and:
This last conditional probability implies implicitly the fact that between n – k and n, we have an odd number of bits bn–k′ at 1 (0 < k′ < k)
Thus:
Because of the independence between the symbols, then one obtains successively:
because, over a given length of k bits, the probability of having an even number of bits at 1 is identical to that of having an odd number. Indeed, for each given configuration of k – 1 bits, there are two configurations of the additional bit (whatever its position among the k bits): one with 0, the other with 1.
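The even/odd counting argument above can be verified exhaustively for small k:

```python
from itertools import product

# Among the 2**k binary words of length k, exactly half have an odd
# number of 1s (k >= 1): exhaustive check for k = 1..10.
for k in range(1, 11):
    odd = sum(1 for w in product((0, 1), repeat=k) if sum(w) % 2 == 1)
    assert odd == 2 ** (k - 1)
print("even/odd split verified for k = 1..10")
```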
So we have:
By symmetry, we also have:
In addition, we also have:
i.e.: q4 = q11 × q42
with:
For the same reasons as previously with the calculation of q13, it is easy to show that:
and with the independence of the information symbols:
By symmetry, we also have:
because:
Thus, we obtain:
In summary:
for:
The autocorrelation function Rs (τ) of the bipolar RZ code, represented in Figure 2.19a, can be decomposed into the sum of two functions R1 (τ) and R2 (τ), represented in Figure 2.19b.
The power spectral density Γs (f) of the bipolar code is therefore given by:
with:
–Γ1 (f) , power spectral density of the signal whose autocorrelation function is R1 (τ) of triangular shape. It is given by:
– Γ2 (f) , power spectral density of the signal whose autocorrelation function is R2 (τ) , of trapezoidal shape. If g (τ) has a trapezoidal form (see Figure 2.20), it is obtained by convolution of two even rectangular functions which, in general, do not have the same support. Hence:
Let’s apply this to R2 (τ) , with:
Thus, we have successively:
Hence:
Finally, it gives:
Properties of Γs (f)
– no continuous component;
– no energy at the frequency f = 1/T; however, a full-wave rectification of the bipolar RZ code gives an RZ code which has a discrete component at the frequency f = 1/T in its power spectral density, and therefore a rather easy clock recovery;
– more than 90% of the energy is located in the physical frequency band [0, 1/T]
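These properties can be checked on a simulated bipolar (AMI) RZ waveform. From the derivation above, the expected time averages are Rs(0) = V²/4, Rs(T) = –V²/8 and a zero mean (no DC component); the simulation parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
V, sps, n_sym = 1.0, 16, 40_000

bits = rng.integers(0, 2, n_sym)
# Bipolar (AMI) mapping: 0 -> 0, 1 -> +/-V with alternating polarity.
a = np.zeros(n_sym)
polarity = 1.0
for n in range(n_sym):
    if bits[n]:
        a[n] = polarity * V
        polarity = -polarity

pulse = np.r_[np.ones(sps // 2), np.zeros(sps // 2)]   # RZ pulse, width T/2
s = np.repeat(a, sps) * np.tile(pulse, n_sym)

Rs0 = np.mean(s * s)                  # expect V^2/4
RsT = np.mean(s[:-sps] * s[sps:])     # expect -V^2/8 (alternating polarity)
dc = np.mean(s)                       # expect ~0 (no DC component)
print(Rs0, RsT, dc)
```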
We consider the transmission system using the partial response linear coding of Figure 2.22
A partial response linear encoder of the form 1 – D², where D is the delay operator of duration Tb (time slice dedicated to the transmission of symbol cn), is used.
We consider the following 21-bit sequence {bn} (time running from left to right):
Let x (t) be the following deterministic pulse (return to zero code, RZ):
And the signal transmitted (without pre-filtering) is se (t).
Use the HDB-3 coding scheme (high density bipolar pulse code of order 3) to represent the signal sHDB–3 (t) carrying the information and represent it on a time diagram.
Solution of problem 19
then:
The chronogram of the signal se (t) is shown in Table 2.5 above (answer to question 4).
In this sequence, we have a continuous component, since there are five +V pulses and three –V pulses. This continuous component has the value:
The factor 2 in the denominator comes from the fact that the signal x (t) is of the RZ type.
On a statistical average: E { se (t) } = 0 . (i.e. no continuous component)
hence:
since:
Furthermore:
hence:
finally, we have:
This code is well suited to long distance cable transmissions because:
With V: polarity alternation violation bit; B: stuffing bit.
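The substitution rule just described can be sketched in code; the helper name and bookkeeping are assumed, not from the text. Runs of four zeros become 000V or B00V, chosen from the parity of the pulses sent since the previous violation, so that successive violations V alternate in polarity (no DC build-up):

```python
def hdb3_encode(bits):
    """Sketch of an HDB-3 encoder (name and structure assumed)."""
    out = []
    last = -1          # polarity of the last transmitted pulse
    pulses = 0         # pulses sent since the previous violation
    i = 0
    while i < len(bits):
        if bits[i] == 1:
            last = -last               # ordinary bipolar (AMI) rule
            out.append(last)
            pulses += 1
            i += 1
        elif bits[i:i + 4] == [0, 0, 0, 0]:
            if pulses % 2 == 1:        # odd: 000V keeps violations alternating
                out += [0, 0, 0, last]
            else:                      # even: B00V (B obeys the bipolar rule)
                last = -last
                out += [last, 0, 0, last]
            pulses = 0
            i += 4
        else:
            out.append(0)
            i += 1
    return out

print(hdb3_encode([1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1]))
```

By construction, the encoded sequence never contains more than three consecutive zeros.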
We consider the problem of long-distance transmission (d > 1 km) over an electrical cable of a source S of equiprobable binary information, delivering a binary symbol every Tb seconds. To illustrate this problem, it will be considered that a realization of limited length (20 elements) of the binary symbol sequence produced by S is:
The amplitude of the modulated pulses is equal to V except for the partial response coding where the amplitude will be V/2.
NOTE.– For a better comparison, the time representations of the transmitted signals si (t), i = {1, ⋯ , 4}, will be drawn on the same sheet, as well as the sequences {an} and {cn}.
In order to construct the signal s1 (t) carrying the information, a binary return to zero code (RZ code ) is used.
A bipolar code of the RZ type is now used to construct the signal s2 (t) carrying the information to be transmitted.
We want to use a code with a high density of pulses of type HDB-2.
We want to further reduce the bandwidth of the transmitted signal. For this, we use a partial response linear coding as shown in the diagram of Figure 2.24, with:
The structure of the partial response coder is given in Figure 2.25.
Solution of problem 20
Chronograms of the different signals:
The power spectral density Γ1 (f) of the RZ code is:
This code is not interesting for transmission over long distance cable because:
However, the presence of a discrete component at the frequency 1/Tb in the power spectral density facilitates the recovery of the clock rate in reception.
Benefits of this code:
Disadvantage of this code: it does not produce pulses to encode a sequence of consecutive 0, therefore the receiver may lose synchronization.
The precoding makes it possible to perform in reception (after transmission) an instantaneous decoding.
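The role of the precoding can be illustrated with a small sketch of the 1 – D² scheme (function names assumed, not from the text): the precoder computes dn = bn XOR dn–2, the line symbols are cn = dn – dn–2 ∈ {–1, 0, +1}, and the receiver decodes each symbol instantaneously as |cn|, without recursion:

```python
def encode(bits):
    """Precoder + 1 - D^2 coder (sketch, names assumed)."""
    d = [0, 0]                   # precoder memory: d[-2] holds d_{n-2}
    c = []
    for b in bits:
        dn = b ^ d[-2]           # precoding: d_n = b_n XOR d_{n-2}
        c.append(dn - d[-2])     # line symbol: c_n = d_n - d_{n-2} in {-1, 0, 1}
        d.append(dn)
    return c

def decode(c):
    """Instantaneous decoding: b_n = |c_n|, no recursion at the receiver."""
    return [abs(cn) for cn in c]

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
assert decode(encode(bits)) == bits   # round trip recovers the information
print(encode(bits))
```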
Since:
Furthermore:
hence:
This code is well suited for long distance cable transmission because:
Thus, there is a decision error on bn if en is odd (en = 1) and no decision error on bn if en is even (en = 0).
Independence:
For the reasoning that follows, see the graph of the temporal representations of the different signals:
A baseband digital transmission system of binary information is considered. It transmits coded digital images (with information compression) on a reduced capacity transmission channel (cable). The characteristics of the system in Figure 2.26 will be studied.
The random sequence {bn} is of a given probability law and bn are assumed to be independent. The transcoding of binary information sequence {bn} into symbol sequence {an} corresponds to the following assignment:
with the following probability law:
The symbols an of information to be transmitted are supplied to the transmitter at a rate of 1/T = 13.5 MHz, which corresponds to the sampling frequency of the luminance signal in television (CCIR 4:2:2 standard).
The transmitted signal se (t) is given by:
where x (t) is a rectangular signal of amplitude V and duration:
The transmission channel is modeled by a linear filter whose impulse response is denoted by h (t) (the propagation delay is not taken into account) and an additive degradation noise b0 (t) at the output of the channel. The noise b0 (t) is assumed to be a second-order stationary Gaussian random process, independent of the useful signal. It has a zero mean value, a power σ²b0, and its power spectral density Γb0 (f) is modeled by a rectangular function of support Δfb (on the positive frequency axis), as shown in Figure 2.27.
The equalization of the channel is performed by a linear filtering of the received signal sr (t): a filter of impulse response gc (t) and frequency gain Gc (f), such that its support is fully included in the frequency band of the noise b0 (t).
The clock regeneration system is assumed to be flawless, and thus provides the decision system with a sequence of decision instants {tk} with tk = t0 + kT (thereafter, t0 is assumed to be zero).
The decision system uses a given decision threshold μ0 and the decision rule is as follows:
Decoding {ȃk} → {bk} is obvious.
We denote successively:
where ⊗ is the convolution product.
The frequency gain of the equalizer Gc (f) is assumed to be equal to 1 at zero frequency: Gc (0) =1
From now on, the equalization filter is assumed such that the amplitude spectrum P (f) of p (t) is constant, equal to VT on the frequency domain and equal to zero otherwise.
Subsequently, for the sake of simplification, it will be considered that only the two symbols adjacent to a given symbol ak can interfere with it (namely the symbols ak–1 and ak+1), and that α = 1/6.
Assuming that at the output of the equalizer the signal-to-noise ratio obtained is:
NOTE.– For a Gaussian random variable X, centered (m = 0) and reduced (σ = 1), we will assume that we have approximately:
Solution of problem 21
The power of the noise b0 (t) is given by:
The power of the noise b1 (t) is given by:
The signal received sr (t) is:
with:
The signal y (t) is the response of the channel to the basic pulse (rectangular shape) x (t) , of duration θ, in the noiseless case.
The equalized signal (corrected) sc (t) is:
with:
The noise b1 (t) is the result of noise b0 (t) filtering by the equalizer whose impulse response is gc (t).
At sampling times tk = kT, the signal sc (t) is written:
The term akp (0) represents the useful response of the system (channel + equalization) to the transmission of the symbol ak associated with the time interval kT.
The term ∑n≠k an p [(k – n) T] is the intersymbol interference. It is a disturbing signal depending on all of the transmitted symbols {an}, except for the symbol ak which is related to the time interval considered.
The term b1 (kT) is the noise at the time of decision.
Iml (kT) depends on p [(k – n) T] , here on p (± T) . We must first calculate p (t) from P (f).
By definition, we have: p (t) = F–1 {P (f)} , hence:
This gives:
and:
Table 2.8. Intersymbol interference: amplitudes and probabilities
ak–1 | ak+1 | Iml(kT) | pml
---|---|---|---
–1 | –1 | V/3 | |
–1 | +1 | 0 | |
+1 | –1 | 0 | |
+1 | +1 | –V/3 | |
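The amplitudes of Table 2.8 can be recovered numerically from P(f) = VT on [–(1 + α)/2T, (1 + α)/2T] with α = 1/6 (the discretization below is an assumption of the check): p(0) = V(1 + α) = 7V/6 and p(T) = –(V/π)sin(πα) = –V/2π, i.e. –V/6 with π ≅ 3, so that Iml(kT) = (ak–1 + ak+1)p(T) takes the values ±V/3 and 0:

```python
import numpy as np

V, T, alpha = 1.0, 1.0, 1 / 6
f = np.linspace(-(1 + alpha) / (2 * T), (1 + alpha) / (2 * T), 200_001)
df = f[1] - f[0]

def p(t):
    """Inverse Fourier transform of P(f) = VT on the band (trapezoidal rule)."""
    y = V * T * np.cos(2 * np.pi * f * t)     # P(f) is even, so cos suffices
    return np.sum((y[:-1] + y[1:]) / 2) * df

p0, pT = p(0.0), p(T)
print(p0, pT, 2 * abs(pT))   # ~7V/6, ~ -V/(2 pi), i.e. ~ -V/6 with pi ~ 3
```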
Furthermore:
hence the following Table 2.9.
The two simplified expressions of conditional probabilities of error are now:
As we have the same signal-to-noise ratio as before, it means that:
And we can keep (as the approximation remains rather good) the value of the optimal threshold μ0 as a function of the noise power σ²b1:
Thus, in this case, the previous Table 2.9 is replaced by Table 2.10:
Table 2.10. Conditional probabilities of error without intersymbol interference
µ0 + p(0) | Pe–1 | µ0 – p(0) | Pe1 |
---|---|---|---|
3.3 σb1 | ≅ 4.8 × 10–4 | –3.7 σb1 | ≅ 1.1 × 10–4 |
Finally, we get:
Thus, in this case (with the same signal-to-noise ratio), the probability of transmission error without intersymbol interference is approximately 10 times lower than it was in the presence of intersymbol interference.
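The numerical values of Table 2.10 can be reproduced with the Gaussian tail function Q(x) = ½ erfc(x/√2); the final averaging below assumes equiprobable symbols, as an illustration:

```python
import math

def Q(x):
    """Gaussian tail probability: Q(x) = P{X > x}, X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

Pe_m1 = Q(3.3)    # threshold distance mu0 + p(0) = 3.3 sigma_b1
Pe_p1 = Q(3.7)    # threshold distance |mu0 - p(0)| = 3.7 sigma_b1
Pe = 0.5 * (Pe_m1 + Pe_p1)   # average, assuming equiprobable symbols
print(Pe_m1, Pe_p1, Pe)      # ~4.8e-4 and ~1.1e-4, as in Table 2.10
```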
The following baseband digital transmission system (Figure 2.29) is considered for the transmission of binary information.
The source of information produces a random sequence {bn} of equiprobable and independent binary variables. The coding of binary information {bn} into information symbol {an} corresponds to the following assignment:
The symbols an of information to be transmitted are supplied to the transmitter at a rate of 1/T. The coder information to signal generates a transmitted signal se (t) given by:
where x (t) is a rectangular signal of amplitude V and duration T/2.
The transmission channel is modeled by a linear filter whose impulse response is denoted by h (t) (the propagation delay here is not taken into account) and an additive degradation noise b0 (t) at the transmission channel output.
The noise b0 (t) is modeled as the low-pass filtering of a white noise of constant power spectral density Γ0. This low-pass filter is considered to be a first-order R-C low-pass filter whose frequency gain is denoted by L (f). The noise b0 (t) is assumed to be a second-order stationary Gaussian random process with zero mean, independent of the useful signal. We denote σ²b0 its average power and Γb0 (f) its power spectral density.
A receiver makes it possible to retrieve the binary information from the signal received at the output of the channel according to the block diagram from Figure 2.29. The channel equalization is produced by a linear filter of impulse response gc (t) and complex gain Gc (f) . The clock recovery, supposed to be faultless, produces a sequence of decision instants {tk} of the form tk = t0 + kT (thereafter, t0 is assumed to be zero).
The decision system uses a given decision threshold μ0 and the decision rule is as follows:
Decoding {ȃk} → {bk} is obvious.
We denote successively (⊗: convolution product):
It is assumed that the equalization has gain Gc (f) on a support fully included in the frequency band [– Δfb, Δfb], with Gc (0) = 1. We denote Δfc its equivalent energy bandwidth.
The equalization filter is set so that the amplitude spectrum P (f) of p (t) is constant, equal to VT/α in the frequency band and equal to zero otherwise (see Figure 2.31).
Subsequently, for sake of simplification, it will be considered that only the two symbols adjacent to a given symbol ak can interfere with it (namely symbols ak-1 and ak+1).
We then adjust the equalizer to the value α0. Under these conditions, the signal-to-noise ratio obtained at the output of the equalizer is equal to 6 dB, with:
knowing that we have:
Solution of problem 22
On one hand:
On the other hand, we have to calculate σ²b0 from the expression of Γb0 (f), which is the result of the filtering of Γ0 by the first-order R-C low-pass filter:
Calculation of the transfer function of a first-order R-C low-pass filter:
For:
and:
then:
Recall:
Finally, we get:
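The closed-form noise power can be cross-checked by numerical integration. The sketch below assumes a two-sided PSD convention and illustrative values Γ0 = 1, R = 1 kΩ, C = 1 nF (none from the text), for which the filtered power is Γ0/2RC:

```python
import numpy as np

Gamma0, R, C = 1.0, 1.0e3, 1.0e-9     # illustrative values (not from the text)
RC = R * C
F = 1_000 / (2 * np.pi * RC)          # integration limit >> 1/(2 pi RC)
f = np.linspace(-F, F, 1_000_001)

# Gamma_b0(f) = Gamma0 * |L(f)|^2 for a first-order R-C low-pass filter
y = Gamma0 / (1.0 + (2 * np.pi * f * RC) ** 2)
power = np.sum((y[:-1] + y[1:]) / 2) * (f[1] - f[0])   # trapezoidal rule

print(power, Gamma0 / (2 * RC))       # numeric integral vs closed form
```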
We have:
and:
Since the support of Gc is included in [– Δfb, Δfb] then:
hence:
with:
The noise b1 (t) is the result of filtering b0 (t) by the equalizer filter whose impulse response is gc (t)
At the sampling times tk = kT, the signal sc (t) is written:
The term akp (0) represents the response of the system (channel + equalization) to the transmission of the symbol ak associated with the time interval kT.
The term ∑n≠k an p [(k – n) T] is the intersymbol interference.
It is a disturbing signal depending on all the transmitted symbols {an } , except for the symbol ak which is related to the time interval considered.
The term b1 (kT) is the noise at the output of the equalizer at the decision instant.
Iml (kT) depends on p [(k – n) T] , here on p (± T) . We have to calculate p (t) from its Fourier transform P (f) which is defined (see Figure 2.33).
We have: p (t) = F–1 {P (f)} . This gives:
Finally, we get:
So:
This must hold for α a nonzero integer.
Relation between V and σb1. We have
Since μ0 = 0 (equiprobable symbols) and Iml (kT) =0, then:
Knowing that:
and:
Finally we then get:
This problem deals with the baseband transmission of coded digital images over a transmission channel (cable) with reduced capacity. The different characteristics of the transmitter and receiver system in Figure 2.34 below will be analyzed, together with its performance.
The symbols bn are delivered by the binary source every Tb seconds (with the use of a buffer memory). Moreover, they are assumed to be independent and equiprobable.
The coding of the binary information {bn} into information symbols {an} is done by grouping the bits bn two by two to form a quaternary symbol an ∈ {–3, –1, 1, 3} of period T = 2Tb.
The symbols an of information are provided to the transmitter at a rate of: 1/T = 10 MHz
The transmitted signal se (t) is given by se (t) = ∑nanx (t – nT) where x (t) is a rectangular signal of amplitude V over the time interval [0, T [, zero elsewhere.
The transmission channel is modeled by a linear filtering whose impulse response is denoted h (t), with an additive degradation noise b0 (t) at the output of the channel. The latter is assumed to be a second-order stationary Gaussian random noise, with zero mean value, a broad frequency bandwidth, and a mean power σ²b0.
The equalization filter of the transmission channel works in a frequency band totally included in that of the noise. The clock regeneration is assumed to be perfect and provides a sequence of decision instants of the form: tk = t0 + kT
The decision system uses three thresholds, denoted μ–1 , μ0, μ1, to separate the equalized signal sc (tk) into four classes. They are given by:
We have:
We denote successively (⊗ is the convolution product):
In a first phase, the equalization filter is such that the frequency gain P (f) of p (t) is constant, equal to 2VT in the frequency band [-1/4T, 1/4T] , and zero elsewhere.
Afterwards, for sake of simplification, it is considered that only the two symbols adjacent to the symbol ak interfere with it (namely ak–1 and ak+1).
Also calculate the different probabilities, each of them associated with one of the seven different values of the intersymbol interference. Here the symbols ak will be considered equiprobable.
We therefore decide to perform a better equalization of the cable distortion. This second equalization is such that the frequency spectrum P (f) of p (t) is constant on the frequency band [–1/2T, 1/2T], zero elsewhere, and P (0) = 2VT.
Under these new conditions, we will assume that at the output of the equalizer the signal-to-noise ratio is:
and show that the 4 × 4 conditional probability matrix:
is quasi of the form given in Table 2.11:
Table 2.11. Form of the conditional probability matrix
ak \ âk | –3 | –1 | +1 | +3
---|---|---|---|---
–3 | 1 – p | p | 0 | 0
–1 | p | 1 – 2p | p | 0
+1 | 0 | p | 1 – 2p | p
+3 | 0 | 0 | p | 1 – p
NOTE.– We will consider here that if p(x) is the probability density function of a Gaussian random variable with zero mean value:
Solution of problem 23
The received signal sr(t) is:
with: y(t) = x(t)⊗h(t).
The signal y(t) is the output of the channel when its input is the basic pulse (rectangular shape) x(t) of duration T, in the noiseless case.
The equalized (corrected) signal sc(t) is:
with: p(t) = y(t)⊗gc(t) , that is: p(t) = x(t)⊗h(t)⊗gc(t).
The noise b1(t) is the result of filtering the noise b0(t) by the equalizer whose impulse response is gc(t).
At the sampling times tk = kT, the signal sc(t) is written:
The term akp (0) represents the useful response of the system (channel + equalization) to the transmission of the symbol ak associated with the time interval kT.
The term ∑n≠k an p [(k – n) T] is the intersymbol interference.
It is a disturbing signal which depends on all the symbol {an} transmitted, except for the symbol ak which is related to the time interval considered.
The term b1(kT) is the noise at the decision instant.
It depends on p [(k – n) T] , so we have to calculate p(t) from P(f)
By definition, we have: p(t) = F–1 {P(f)}, hence:
From these values, we can conclude that:
In the case where the messages only of the form ml = [ak–1 , ak+1] interfere with the symbol ak, we have (with π ≅ 3):
Thus, there are seven distinct values of Iml (kT).
Table 2.13. Probabilities of amplitude of intersymbol interference
Iml (kT) | Pr{Iml (kT)} |
---|---|
4 V | 1/16 |
2.6 V | 1/8 |
1.3 V | 3/16 |
0 | 1/4 |
- 1.3 V | 3/16 |
- 2.6 V | 1/8 |
- 4 V | 1/16 |
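The probabilities of Table 2.13 follow from enumerating the 16 equiprobable pairs (ak–1, ak+1); a small sketch:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

levels = (-3, -1, 1, 3)               # equiprobable quaternary symbols
hist = Counter(a + b for a, b in product(levels, repeat=2))
probs = {s: Fraction(n, 16) for s, n in sorted(hist.items())}

# Seven sums s = ak-1 + ak+1 in {-6, -4, -2, 0, 2, 4, 6}, with probabilities
# 1/16, 1/8, 3/16, 1/4, 3/16, 1/8, 1/16; multiplying by p(T) = 2V/3 (pi ~ 3)
# gives the amplitudes 0, +/-1.3V, +/-2.6V, +/-4V of the table.
print(probs)
```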
The decision thresholds are given by:
It can easily be seen that to change the decision class, it is sufficient that
In view of these results, even without noise, the probability of error is extremely high.
Under these new conditions, we have:
Let us first express V as a function of σb1 and calculate the new values of the decision thresholds μ–1 , μ0, μ1:
At the output of the equalizer, the signal is then written:
and the threshold values of the decision blocks are:
The calculation of the conditional error probabilities is based on the knowledge of the noise intervals given by the course formulas (see the relations in Chapter 6 of Volume 1). For each value of the symbol ak transmitted, we have four possible decisions (three erroneous, and a correct one) on the estimated value ȃk of the symbol ak.
Recall that for the intermediate values of the symbol ak transmitted, the conditional error probability is given by:
and for the extreme values of the transmitted symbol ak, the two conditional error probabilities are:
The conditional probability matrix is then obtained like this:
Furthermore, we have:
Hence see the matrix of conditional decision probabilities in Table 2.14 below.
Table 2.14. Conditional decision probability matrix
ak âk | - 3 | - 1 | + 1 | + 3 |
---|---|---|---|---|
-3 | 1 – p | p | 0 | 0 |
-1 | p | 1 – 2p | p | 0 |
+1 | 0 | p | 1 – 2p | p |
+3 | 0 | 0 | p | 1 – p |
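The structure of Table 2.14 can be verified numerically. Below is a sketch, with p kept as a free parameter and assuming the four levels are equiprobable (an assumption), which checks that each row is a probability distribution and that the mean error probability equals 3p/2:

```python
from fractions import Fraction

# Sketch: check Table 2.14 row sums and the resulting mean error
# probability, assuming the levels -3, -1, +1, +3 are equiprobable.
p = Fraction(1, 100)  # placeholder value for the elementary error probability p

rows = {
    -3: [1 - p, p, 0, 0],
    -1: [p, 1 - 2 * p, p, 0],
    +1: [0, p, 1 - 2 * p, p],
    +3: [0, 0, p, 1 - p],
}

assert all(sum(r) == 1 for r in rows.values())  # each row sums to 1

# Mean error probability: 1 minus the diagonal term, averaged over a_k
pe = sum(Fraction(1, 4) * (1 - rows[a][i]) for i, a in enumerate(sorted(rows)))
print(pe == 3 * p / 2)  # prints True
```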
We consider the transmission of information (speech) in digital form on a two-wire cable transmission channel. The line code used is the bipolar code. The block diagram of the transmission system is shown in Figure 2.38.
The source produces a sequence {bn} of independent but not equiprobable binary symbols, with:
The transmitted signal is given by: se(t) = Σn an x(t – nT)
The signal x(t) is a pulse of amplitude V on the time interval [0, T/2[. The additive noise b0(t) is assumed to be stationary, Gaussian and centered, with a power spectral density Γb0(f) that is very broad compared to that of the signal (energy bandwidth equal to Δfb) and an average power σ²b0.
Table 2.15. Generation of the sequence {an} of the bipolar code
{bn} | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
{an} |
In the rest of the problem, it will be considered that the equalization is not perfect and that actually, the amplitude spectrum of the signal p(t) at the output of the equalizer, denoted P(f) , when a single impulse x(t) is sent by the transmitter and without considering the noise, is constant, equal to VT on the frequency domain [– (1 + α)/2T, (1 + α)/2T] , and zero elsewhere, with α = 0.1.
So we have: p(t) = x(t)⊗h(t)⊗gc(t).
Similarly and for simplicity, intersymbol interference will only be considered as that resulting from the two symbols ak–1 and ak+1 adjacent to symbol ak.
Table 2.16. Possible messages ml for each ak
ml= [ak–1, ak+1 ] | |||||||
ak = –1 | |||||||
ak = 0 | |||||||
ak = 1 |
Table 2.17. Conditional probabilities Pr{ml / ak} of messages ml
ml= [ak–1, ak+1 ] | Pr{ml/ak= –1} | Pr{ml/ak= 0} | Pr{ml/ak= 1} |
---|---|---|---|
m1( , ) | |||
m2( , ) | |||
m3( , ) | |||
m4( , ) | |||
m5( , ) | |||
m6( , ) | |||
m7( , ) | |||
m8( , ) | |||
m9( , ) |
Table 2.18. Probability pml and value of the intersymbol interference for each message ml
ml= [ak–1, ak+1 ] | pml | Iml |
---|---|---|
m1( , ) | ||
m2( , ) | ||
m3( , ) | ||
m4( , ) | ||
m5( , ) | ||
m6( , ) | ||
m7( , ) | ||
m8( , ) | ||
m9( , ) |
Table 2.19. Possible values of inter-symbol interference and associated probabilities
Iml | I–2 | I–1 | I0 | I1 | I2 |
---|---|---|---|---|---|
Value of Iml | |||||
Pr{Iml} |
To simplify, only errors âk of the following type are considered to have non-zero probability: the decided value is adjacent to the transmitted value ak:
It is assumed that at the output of the equalization, the signal-to-noise ratio is:
NOTE.–
Table 2.20. Noise dynamics interval and conditional error probabilities
ml | m1 | m2 | m3 | m4 | m5 | m6 | m7 | m8 | m9 |
---|---|---|---|---|---|---|---|---|---|
ul = ll= |
|||||||||
Pe(–1/0,ml) | |||||||||
ul = ll = |
|||||||||
Pe(1/0,ml) | |||||||||
ul = ll = |
|||||||||
Pe(0/–1,ml) | |||||||||
ul = ll = | |||||||||
Pe(0/1,ml) |
Solution of problem 24
The maximum possible symbol rate (according to Nyquist criterion) is:
The maximum possible bitrate is:
If the coding is on two levels, then:
NOTE.– Usually, one uses an M-ary coding system with M ≫ 2, where the number M is a function of the signal-to-noise ratio at the input of the decision block, chosen to ensure a probability of wrong decision lower than a given admissible error level.
The bipolar code is a three-level code such that:
{bn} | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
{an} | 1 | 0 | 0 | -1 | 1 | 0 | -1 | 0 | 1 | -1 | 1 | 0 | 0 | 0 | -1 | 0 |
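The bipolar (AMI) rule above can be sketched as a short routine; the test sequence is the one from Table 2.15:

```python
def bipolar_encode(bits):
    """Bipolar (AMI) encoding: zeros map to 0, ones alternate +1 / -1."""
    out, last = [], -1  # the first '1' is encoded as +1
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            last = -last  # alternate the polarity of the pulses
            out.append(last)
    return out

bits = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0]
print(bipolar_encode(bits))
# -> [1, 0, 0, -1, 1, 0, -1, 0, 1, -1, 1, 0, 0, 0, -1, 0]
```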
If we have a long series of zero bits, then no pulses are issued over a period that can be significant. This causes the loss of clock synchronization at the receiver side. To overcome this drawback, a high-density bipolar code is used.
If, on a period of 4Tb, there are no pulses, we have to use the HDB-3 code.
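One common formulation of the HDB-3 substitution rule can be sketched as follows (a sketch, not a normative implementation: each run of four zeros is replaced by 000V or B00V so that the violations V alternate in polarity):

```python
def hdb3_encode(bits):
    """HDB-3 sketch: AMI encoding in which each run of four zeros becomes
    000V or B00V. One common rule: use 000V when the number of nonzero
    pulses since the last violation is odd, B00V when it is even, so that
    successive violations alternate in polarity."""
    out = []
    last = -1         # polarity of the last nonzero pulse (first mark -> +1)
    pulses_since_v = 0  # nonzero pulses emitted since the last violation
    zeros = 0
    for b in bits:
        if b == 1:
            last = -last
            out.append(last)
            pulses_since_v += 1
            zeros = 0
        else:
            out.append(0)
            zeros += 1
            if zeros == 4:
                if pulses_since_v % 2 == 1:      # odd: 000V
                    out[-4:] = [0, 0, 0, last]   # V repeats the last polarity
                else:                            # even: B00V
                    last = -last
                    out[-4:] = [last, 0, 0, last]
                pulses_since_v = 0
                zeros = 0
    return out

print(hdb3_encode([1, 0, 0, 0, 0, 0, 0, 0, 0]))
# -> [1, 0, 0, 0, 1, -1, 0, 0, -1]
```

With this rule, no more than three consecutive time slots are left without a pulse, which preserves clock recovery at the receiver.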
The Nyquist frequency criterion states that the equalization must ensure that we have:
The power spectral density of the noise b1(t) is given by:
The power of the noise b0(t) is:
hence:
and:
with:
hence:
ml= [ak–1, ak+1 ] | |||||||
ak = –1 | 0 0 | 0 1 | 1 0 | 1 1 | |||
ak = 0 | 0 0 | 0 -1 | 0 1 | -1 0 | -1 1 | 1 0 | 1 -1 |
ak = 1 | 0 0 | 0 -1 | -1 0 | -1 -1 |
Table 2.23. Conditional probabilities: p(ml /–1)= Pr{ml / ak = –1}
{ml/ak = –1} ml= [ak–1, ak+1 ] | p(ml/–1)= Pr{ml/ak= –1} |
---|---|
0 0 | Pr{bk–1= 0, bk+1= 0} = 4/25 |
0 1 | Pr{bk–1 = 0, bk+1= 1} = 6/25 |
1 0 | Pr{bk–1 = 1, bk+1 = 0} = 6/25 |
1 1 | Pr{bk–1 = 1, bk+1 = 1} = 9/25 |
Table 2.24. Conditional probabilities: p(ml / 0) = Pr{ml / ak = 0}
{ml/ak = 0} ml = [ak–1, ak+1 ] | p(ml/0) = Pr{ml/ak = 0} |
---|---|
0 0 | Pr{bk–1 = 0, bk+1 = 0} = 4/25 |
0 1 | Pr{bk–1 = 0, bk+1 = 1 and ak+1 > 0} = 6/50 |
0 -1 | Pr{bk–1 = 0, bk+1 = 1 and ak+1 < 0} = 6/50 |
1 0 | Pr{bk–1 = 1, bk+1 = 0 and ak–1 > 0} = 6/50 |
1 -1 | Pr{bk–1 = 1, bk+1 = 1 and ak–1 > 0} = 9/50 |
-1 0 | Pr{bk–1 = 1, bk+1 = 0 and ak–1 < 0} = 6/50 |
-1 1 | Pr{bk–1 = 1, bk+1 = 1 and ak–1 < 0} = 9/50 |
Table 2.25. Conditional probabilities: p(ml / 1)=Pr{ml / ak=1}
{ml/ak = 1} ml = [ak–1, ak+1 ] | p(ml/1) = Pr{ml/ak = 1} |
---|---|
0 0 | Pr{bk–1 = 0, bk+1 = 0} = 4/25 |
0 -1 | Pr{bk–1 = 0, bk+1 = 1} = 6/25 |
-1 0 | Pr{bk–1 = 1, bk+1 = 0} = 6/25 |
-1 -1 | Pr{bk–1 = 1, bk+1 = 1} = 9/25 |
The summary of all these values is given in Table 2.26.
Table 2.26. Conditional probabilities: p(ml/i) = Pr{ ml/ak = i }
ml = [ak–1, ak+1 ] | Pr{ml/ak = –1} | Pr{ml/ak = 0} | Pr{ml/ak = 1} |
---|---|---|---|
m1 = (0, 0) | 4/25 | 4/25 | 4/25 |
m2 = (0, 1) | 6/25 | 6/50 | |
m3 = (0, –1) | 6/50 | 6/25 | |
m4 = (1, 0) | 6/25 | 6/50 | |
m5 = (1, –1) | 9/50 | ||
m6 = (–1, 0) | 6/50 | 6/25 | |
m7 = (–1, 1) | 9/50 | ||
m8 = (–1, –1) | 9/25 | ||
m9 = (1, 1) | 9/25 |
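Table 2.26 can be rebuilt from the bit statistics alone. Below is a sketch, assuming Pr{bn = 1} = 3/5 (a value inferred from the 4/25 = (2/5)² and 9/25 = (3/5)² entries) and using the bipolar alternation rule, which checks that every column is a probability distribution:

```python
from fractions import Fraction

# Sketch: rebuild the columns of Table 2.26 from Pr{bn = 1} = 3/5
# (inferred value) and the bipolar rule that pulses alternate in sign.
p1, p0 = Fraction(3, 5), Fraction(2, 5)
half = Fraction(1, 2)  # two equally likely polarities around ak = 0

col = {
    -1: {"m1": p0 * p0, "m2": p0 * p1, "m4": p1 * p0, "m9": p1 * p1},
    +1: {"m1": p0 * p0, "m3": p0 * p1, "m6": p1 * p0, "m8": p1 * p1},
     0: {"m1": p0 * p0,
         "m2": p0 * p1 * half, "m3": p0 * p1 * half,
         "m4": p1 * p0 * half, "m6": p1 * p0 * half,
         "m5": p1 * p1 * half, "m7": p1 * p1 * half},
}
for ak, c in col.items():
    assert sum(c.values()) == 1  # each column of Table 2.26 sums to 1
print(col[0]["m5"])  # Pr{m5 / ak = 0} -> 9/50
```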
Table 2.27. Intersymbol interference amplitudes and associated probabilities with each possible interfering message
ml = [ak–1, ak+1 ] | pml | Iml |
---|---|---|
m1 = (0, 0) | 3/10 × 4/25 + 2/5 × 4/25 + 3/10 × 4/25 = 4/25 | 0 |
m2 = (0, 1) | 3/10 × 6/25 + 2/5 × 6/50 = 3/25 | V/10 |
m3 = (0, –1) | 2/5 × 6/50 + 3/10 × 6/25 = 3/25 | –V/10 |
m4 = (1, 0) | 3/10 × 6/25 + 2/5 × 6/50 = 3/25 | V/10 |
m5 = (1, –1) | 2/5 × 9/50 = 1.8/25 | 0 |
m6 = (–1, 0) | 2/5 × 6/50 + 3/10 × 6/25 = 3/25 | –V/10 |
m7 = (–1, 1) | 2/5 × 9/50 = 1.8/25 | 0 |
m8 = (–1, –1) | 3/10 × 9/25 = 2.7/25 | –V/5 |
m9 = (1, 1) | 3/10 × 9/25 = 2.7/25 | V/5 |
with:
and we actually have:
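As a cross-check, the pml column of Table 2.27 can be recomputed by weighting the entries of Table 2.26 by the symbol probabilities; the sketch below assumes Pr{ak = ±1} = 3/10 and Pr{ak = 0} = 2/5 (values consistent with Pr{bn = 1} = 3/5 and the bipolar alternation) and verifies that the pml sum to 1:

```python
from fractions import Fraction

# Sketch: recompute the p_ml column of Table 2.27 from Table 2.26,
# weighting by Pr{ak = -1} = Pr{ak = +1} = 3/10 and Pr{ak = 0} = 2/5.
F = Fraction
pa = {-1: F(3, 10), 0: F(2, 5), 1: F(3, 10)}
# Pr{ml / ak}: the columns of Table 2.26 (missing entries are zero)
cond = {
    "m1": {-1: F(4, 25), 0: F(4, 25), 1: F(4, 25)},
    "m2": {-1: F(6, 25), 0: F(6, 50)},
    "m3": {0: F(6, 50), 1: F(6, 25)},
    "m4": {-1: F(6, 25), 0: F(6, 50)},
    "m5": {0: F(9, 50)},
    "m6": {0: F(6, 50), 1: F(6, 25)},
    "m7": {0: F(9, 50)},
    "m8": {1: F(9, 25)},
    "m9": {-1: F(9, 25)},
}
pml = {m: sum(pa[a] * p for a, p in c.items()) for m, c in cond.items()}
assert sum(pml.values()) == 1
print({m: str(p) for m, p in sorted(pml.items())})
```

For example pml(m8) = 3/10 × 9/25 = 27/250 = 2.7/25, as in Table 2.27.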
Table 2.28. Inter-symbol interference values and associated probabilities
Iml | I–2 | I–1 | I0 | I1 | I2 |
---|---|---|---|---|---|
Value of Iml | –V/5 | –V/10 | 0 | V/10 | V/5 |
Pr{Iml } | 1/9 | 2/9 | 3/9 | 2/9 | 1/9 |
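The 1/9, 2/9, 3/9 values of Table 2.28 are recovered if the two interfering symbols are approximated as independent and uniform on {–1, 0, +1} (an approximation; the exact weights are the pml of Table 2.27). A sketch under that assumption:

```python
from fractions import Fraction
from itertools import product
from collections import Counter

# Sketch: distribution of a_{k-1} + a_{k+1} when the two interfering
# symbols are approximated as independent and uniform on {-1, 0, +1}.
dist = Counter()
for a_prev, a_next in product([-1, 0, 1], repeat=2):
    dist[a_prev + a_next] += Fraction(1, 9)

for s in sorted(dist):
    print(f"sum = {s:+d}  Pr = {dist[s]}")
```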
And we have:
The corrected and sampled signal is:
The decision thresholds are such that:
The noise b1(kT) is written:
If:
Then, we have:
If we decide:
If we decide:
If we decide:
If we decide:
The noise amplitude intervals should be expressed as a function of σb1. We have:
hence:
therefore: Pe,b = Pe1,a.
This result was predictable with the simplification of the statement because:
But these errors on ak do not introduce errors on bk, hence: Pe,b = Pe1,a.
hence:
We have, from the normalized Gaussian law table:
The problem of baseband transmitting and receiving independent binary information on a reduced capacity channel is considered.
The transmission and reception system in question uses partial response linear coding according to the Figure 2.42.
Where
The partial response linear coding used in this problem is the NRZ duobinary coding, characterized by its transfer function:
where D is the delay operator corresponding to T (the time slot allocated to the transmission of a symbol ck).
The shaping filter has an impulse response x(t) considered as an NRZ signal of duration T and amplitude V.
Let us take the 14-bit {bk} time sequence shown in Figure 2.43 (time running from left to right).
The transmission channel is modeled by a linear filtering and additive noise at the output of the channel. The latter is a stationary second-order Gaussian noise, with a zero mean and a broad spectrum.
We assume that at the output of the equalizer, the signal-to-noise ratio obtained is:
First, we consider the classical baseband transmission system (without precoding and coding).
Take the case of the duobinary partial response transmission and reception system.
We assume for the rest of the problem that: p(T + t0) = p(t0) = V.
NOTE.– If X is a Gaussian random process, with mean value m and standard deviation σ, you will take:
Solution of problem 25
With precoding, the transcoder provides:
hence:
Thus, we get a direct estimation of the emitted sequence {bk} from the received sequence {ĉk}.
Without precoding:
This leads to a propagation of decision errors. Indeed, if bk–1 is badly decoded, it will also be the case for bk.
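The effect of precoding can be checked numerically. Below is a minimal sketch, assuming the standard duobinary conventions (transfer function 1 + D, precoder ak = bk ⊕ ak–1, bipolar mapping dk = 2ak – 1, coded symbols ck = dk + dk–1 ∈ {–2, 0, +2}); the decision b̂k = 1 iff ĉk = 0 is then memoryless, which is exactly why precoding removes error propagation:

```python
def duobinary_tx(bits, a0=0):
    """Duobinary with precoding: a_k = b_k XOR a_{k-1}, d_k = 2a_k - 1,
    c_k = d_k + d_{k-1} in {-2, 0, +2}."""
    a_prev = a0
    d_prev = 2 * a0 - 1
    c = []
    for b in bits:
        a = b ^ a_prev          # precoder
        d = 2 * a - 1           # bipolar mapping
        c.append(d + d_prev)    # duobinary coder 1 + D
        a_prev, d_prev = a, d
    return c

def duobinary_rx(c):
    """Memoryless decision: b_k = 1 iff c_k = 0 (no error propagation)."""
    return [1 if ck == 0 else 0 for ck in c]

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1]
assert duobinary_rx(duobinary_tx(bits)) == bits
print("round trip OK")
```

Since ck = 0 exactly when dk ≠ dk–1, that is when bk = 1, each bit is recovered from the current sample alone.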
With p(T + t0) = p(t0) = V, the intersymbol interference is now given by:
From the result obtained in response 8, we have:
The decision thresholds are located in the middle of two adjacent levels obtained without noise:
Let’s express the decision thresholds according to σb1:
We have three possible decision values of the symbol ĉk: one without error and two with errors.
Transmission of ck = 2, we decide on reception:
Transmission of ck = 0, we decide on reception:
Transmission of ck =–2, we decide on reception:
The total probability of error is then given by:
The binary symbols bk are independent and identically distributed on the alphabet {0,1}, hence:
The probabilities of transmitting the symbols ck are respectively:
Hence, the total probability of error:
Calculation of integrals:
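The total error probability above can be evaluated numerically. Below is a sketch assuming noise-free levels –2V, 0, +2V with the decision thresholds midway at ±V (so that every decision margin equals V), equiprobable bits so that Pr{ck = 0} = 1/2 and Pr{ck = ±2} = 1/4, and Gaussian noise of standard deviation σb1:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = Pr{N(0,1) > x}."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def duobinary_pe(v_over_sigma):
    """Total error probability, assuming levels -2V, 0, +2V, thresholds
    at -V and +V, Pr{ck = 0} = 1/2 and Pr{ck = +-2} = 1/4."""
    q = Q(v_over_sigma)                   # one threshold crossing
    return 0.25 * q + 0.5 * (2 * q) + 0.25 * q  # = 1.5 * q

r = 4.0  # placeholder value for the ratio V / sigma_b1
print(duobinary_pe(r))
```

The middle level is bounded by two thresholds, hence its weight 2q; the total reduces to (3/2) Q(V/σb1).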
The problem of transmitting and receiving independent binary information on a reduced capacity channel is considered. The baseband transmission and reception system in question uses the partial response linear coding according to the following Figures 2.48 and 2.49.
with:
The partial response linear coding used in this problem is defined by the two following transfer functions:
The transmission channel is modeled by a linear filtering and additive noise at the output of the channel. Noise is considered as a second-order stationary, Gaussian random process, with zero mean and broad spectrum.
The shaping filter has an impulse response x(t) considered as an NRZ signal of duration T and amplitude V/2.
Let us take the 14-bit sequence {bk} shown in Table 2.30 (time running from left to right).
We assume that at the output of the equalizer, the signal-to-noise ratio obtained is:
We first consider the classical baseband transmission system (without precoding and coding).
In the case of the partial response transmission and reception system defined in Figure 2.48.
Assuming that: p (3T) = p (0) = V.
NOTE.– If X is a Gaussian random process, with mean value m and standard deviation σ, you will take:
Table 2.30. Temporal representation of the proposed partial response coding and decoding system
{bk} | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | |||
0 | 0 | 0 | |||||||||||||||
{ak} | |||||||||||||||||
{ck} | |||||||||||||||||
se(t) |
Solution of problem 26
With precoding, the transcoder gives:
hence:
Thus, we get a direct estimate of the emitted sequence {bk} from the received sequence {ĉk}.
Without precoding:
This leads to a propagation of decision errors. Indeed, if bk–3 is badly decoded, it will also be the case for bk.
The decision thresholds [c, d [ are such that:
Let’s express the decision thresholds according to σb1:
Calculation of integrals:
The symbols bk are independent and identically distributed on the alphabet {0,1}, hence:
The probabilities of emission of the symbols ck are:
Hence finally: