In this chapter we illustrate various aspects of methodology as they apply to the problem of estimating the performance of a wireless digital communication system operating over a slowly fading channel. We start with a block diagram of the system, which is shown in Figure 11.1.
We will assume that the design of the system is nearly complete and the following aspects of the design have been finalized:
The voice signal is source encoded using linear predictive coding to produce an output bit rate of 9,600 bits per second.
Error control coding is accomplished through the use of a rate 1/3 convolutional encoder with hard decision decoding (or soft decision decoding with 8 levels of quantizing).
The filters used in the system are square root raised cosine (SQRC) filters with 50% rolloff.
The equalizer is a 9-tap, synchronously spaced least mean square (LMS) equalizer.
Modulation is QPSK with coherent demodulation at the receiver.
The channel over which this system is assumed to operate is characterized as a “two-ray” multipath channel with slow fading (slow compared to the symbol rate so that the channel can be treated as quasi-static). The input-output relationship of the channel is given by

ỹ(t) = ã1(t)x̃(t − τ1(t)) + ã2(t)x̃(t − τ2(t))    (11.1)

where x̃(t) and ỹ(t) are the complex (lowpass) input and output, respectively, and ã1(t) and ã2(t) represent the complex attenuations of the two multipath components with delays τ1(t) and τ2(t). The complex attenuations are modeled as independent stationary processes, and the bandwidths of these processes (and hence their rate of change) are assumed to be small compared to the symbol rate. The complex attenuations are modeled by two independent complex Gaussian processes (Rayleigh envelope), and the delays are assumed to have uniform distributions. (This channel will be developed in detail in Chapter 14.)
The channel characteristics are randomly changing as a function of time. Therefore, the received signal power and the amount of signal distortion introduced by the channel, which will impact system performance, will also be changing over time. When the signal loss and distortion are small, the system performance will be very good, but when the signal loss and distortion are severe the system performance will degrade significantly. The overall performance metric of interest in this system is the output voice quality, which is obtained from listening tests. In these tests the output of the voice decoder is recorded and played back to a number of human subjects who rate the voice quality from 1 to 5, with 1 being the poorest quality and 5 being the highest quality. The average of the individual scores from a set of subjects is used as the voice-quality metric, and the overall goal of the system design is to guarantee a voice-quality metric greater than or equal to 3 at least 98% of the time. If the voice-quality metric is less than 3, the communication link is declared to be unusable and out of service.
The objective of this simulation exercise is to evaluate the system performance, as measured by a voice-quality metric V as a function of Eb/N0, and compute the value of Eb/N0 needed to maintain an outage probability less than 2% at a voice quality metric threshold of 3. We now present the details of the overall approach that can be used to estimate the outage probability as a function of Eb/N0.
The slow-fading assumption leads to the following immediate simplifications of the simulation model that will be used for performance estimation.
Synchronization: For the purposes of performance estimation it can be assumed that synchronization is ideal, since fading is slow and hence the timing and phase recovery subsystems can establish near ideal timing and phase references. These subsystems can be omitted from the simulation model for performance estimation.
Static channel: The slow-fading assumption also implies that the channel can be treated as quasi-static and snapshots of the channel can be used during performance estimation. The channel model now reduces to

ỹ(t) = ã1 x̃(t − τ1) + ã2 x̃(t − τ2)
where ã1, ã2, τ1, and τ2 are now random variables whose values remain fixed during each performance estimation simulation. It is common practice to assume (normalize) τ1 = 0 and ã1 = 1, which results in the input-output relationship

ỹ(t) = x̃(t) + ã x̃(t − τ)    (11.2)

with ã = ã2 and τ = τ2,
and the channel transfer function

H̃(f) = 1 + ã e^(−j2πfτ)
In this model, the channel is characterized by two random variables ã and τ, where ã has a Rayleigh pdf and τ has a uniform pdf.
Radio frequency (RF) modulator and demodulator: These two blocks can be assumed to perform ideal frequency translations and hence they can be eliminated from the simulation model. The entire system can then be simulated using complex lowpass equivalent representations.
One other important simulation parameter that can be established at the outset is the overall sampling rate. The voice source, the source encoder, and the error control encoder blocks on the transmit side, and the error control decoder and the source decoder blocks on the receive side, operate on symbol sequences and should be simulated using one sample per symbol (i.e., they are processed at the appropriate symbol or bit rate). From the output of the QPSK modulator to the output of the equalizer we are dealing with waveform representations, and hence the signals and components in this portion of the overall system should be simulated at a sampling rate consistent with the bandwidths of the signals and components such as filters. Since there are no nonlinearities or time-varying components in the system, we need not be concerned about bandwidth expansion. Also, there is no need to consider multirate sampling for this portion of the system, since we are not dealing with multiple signals with widely differing bandwidths. The sampling rate for the “analog” portion of the system can be set at 16 times the bandwidth of the QPSK signal, which can be taken as the bandwidth of the raised cosine filter: 0.75 times the symbol rate (0.5R + 50% of 0.5R = 0.75R). With the QPSK symbol rate of R = (9600 × 3)/2 = 14,400 symbols/second, we can use a sampling rate of 16 × 0.75 × 14,400 = 172,800 samples/second. This is equivalent to 12 samples/QPSK symbol.
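The rate bookkeeping above is easy to verify numerically; the short Python sketch below just reproduces the arithmetic of this paragraph.

```python
# Reproduce the sampling-rate arithmetic from the text.
voice_rate = 9600                      # bits/s from the voice coder
coded_rate = voice_rate * 3            # rate-1/3 convolutional code
symbol_rate = coded_rate // 2          # QPSK carries 2 bits per symbol
bandwidth = 0.75 * symbol_rate         # 50% SQRC: (1 + 0.5) * (R/2)
sampling_rate = 16 * bandwidth         # 16 samples per unit bandwidth
samples_per_symbol = sampling_rate / symbol_rate
```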
In outage probability estimation we are interested in determining the fraction of time that the channel conditions degrade the system performance below some acceptable threshold. Since the channel parameters are random variables, we can use the Monte Carlo approach to determine the outage probability induced by the channel. The Monte Carlo approach will involve drawing random numbers from the distributions of the channel parameters ã and τ and computing the system performance for each pair of values for ã and τ. The outage probability is estimated as the percentage of channels simulated that yield a performance metric that is below the acceptable (threshold) level. Note that this part of the Monte Carlo simulation is different from the Monte Carlo simulation used for performance estimation for each channel condition. Monte Carlo simulation for performance estimation will involve the generation of sampled values of one or more random processes that represent the signals and noise.
The flowchart of the procedure for estimating outage probability is shown in Figure 11.2. To estimate outage probability, we need to compute system performance as measured by the output voice-quality metric for each value of Eb/N0 and for thousands of snapshots of the channel. We typically define the channel in terms of ã and τ (the amplitude-delay profile). Some of these channel conditions will produce significant performance degradation that might lead to a voice-quality metric below the desired threshold level (3 in this example). The outage probability for a given value of Eb/N0 is estimated by the ratio of the number of simulated channels that produce a voice-quality metric lower than the threshold divided by the total number of channels simulated. As Eb/N0 is increased the outage probability will decrease, and by repeating the simulations for different values of Eb/N0 we can find the (minimum) value of Eb/N0 that guarantees an outage probability less than 2%. Let us assume that the range of values for Eb/N0 to be considered is given as 17 to 35 dB in increments of 2 dB. Let us also assume that for each Eb/N0 we need to simulate 10,000 channel conditions in order to obtain the distribution (histogram) of V and the outage probability. Typical results are illustrated in Figure 11.3. It can be seen from the histograms that the outage probability is less for Eb/N0 = 35 dB than for Eb/N0 = 25 dB.
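The outer Monte Carlo loop over channel snapshots can be sketched as follows; `voice_quality` is a hypothetical stand-in for the per-snapshot performance evaluation developed later in the chapter, and the Rayleigh scale and delay range are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_outage(voice_quality, ebno_db, n_channels=10_000,
                    threshold=3.0, max_delay=12):
    """Fraction of channel snapshots whose voice-quality metric V
    falls below `threshold` at the given Eb/N0."""
    a = rng.rayleigh(scale=0.5, size=n_channels)       # echo amplitude
    tau = rng.uniform(0, max_delay, size=n_channels)   # echo delay
    v = np.array([voice_quality(ai, ti, ebno_db)
                  for ai, ti in zip(a, tau)])
    return float(np.mean(v < threshold))

# toy stand-in metric: quality degrades as the echo gets stronger
outage = estimate_outage(lambda a, tau, ebno: 5.0 - 4.0 * a, ebno_db=25)
```

In a real run the lambda would be replaced by the full per-snapshot BER-to-voice-quality evaluation described in the following sections.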
For a given channel condition and Eb/N0 we can estimate the voice-quality metric using a brute-force Monte Carlo approach in which we use sampled and digitized voice as the input, record the simulated output of the voice decoder, and play the recorded output to a set of human subjects and determine the voice-quality metric based on their scores. While this approach mimics reality, it is not practical to repeat this for thousands of channel conditions and many values of Eb/N0, for even if we have the computer resources to do the simulations, this approach will require each listener to score thousands of voice segments.
A better approach is to divide (partition) the problem into parts and simulate the parts separately. In order to arrive at an efficient partitioning scheme, let us consider the influence of different portions of the communication system on the overall performance as measured by the voice-quality metric. With respect to Figure 11.4, the waveform “processing” part of the system accepts a binary sequence at point C and produces a binary sequence for hard decision decoding (or quantized values for soft decision decoding) at point F. The probability of error q (or the transition probabilities for soft decision decoding) for this analog portion of the system, which we will call “the waveform channel,” depends on the channel parameters and Eb/N0. This probability of error (or a set of transition probabilities for soft decision decoding) can be estimated via a Monte Carlo or semianalytic technique with a random binary sequence as the input. It is not necessary to drive this part of the simulation with encoded voice bits.
The next segment of the system includes the error control encoder and decoder, which accepts a sequence of binary digits at point B and produces a sequence of binary digits at point G. The probability of error between points B and G will strictly be a function of the probability of error q (or the set of transition probabilities) in the waveform channel. Indeed, since the errors in the waveform channel are the result of additive, white, Gaussian noise (AWGN), we can assume the error pattern is an independent sequence; hence, as far as the evaluation of the coded bit error probability between points B and G is concerned, the waveform channel can be replaced by a binary random number generator that produces 1’s and 0’s with probabilities q and 1 − q, with 1 representing a transmission error in the waveform channel. The coded error probability PE can be evaluated via a Monte Carlo simulation in which the input to the encoder is a random binary sequence and the entire waveform channel is replaced by a binary random number generator. There is no need to drive this part of the simulation with encoded voice bits.
The performance of the error control coding can also be evaluated using a semianalytic approach that maps the uncoded error probability q to the coded error probability PE. The technique for accomplishing this was explored for both block and convolutional codes as discussed in Chapter 8. With this approach we can map the distribution of q as a function of Eb/N0 to the distribution of PE as a function of Eb/N0.
The final step in the estimation of outage probability is the estimation of the distributions of the voice-quality metric V for various values of Eb/N0. The voice-quality metric will depend on PE (which itself depends on Eb/N0) and the distribution of V as a function of PE. Hence the distribution of V as a function of Eb/N0 can be obtained by evaluating the voice-quality metric for different values of PE. This evaluation can be done independently of the first two parts of the problem; all we have to do is evaluate the performance of the voice encoder and decoder for different values of the error probability PE. This is best done using the actual voice encoder/decoder chip set, running digitized voice through them, and evaluating the voice quality at the output of the decoder as a function of PE. The effect of the entire system between points B and G is emulated by injecting random errors at the rate of PE between the output of the voice encoder and the input of the voice decoder. This part of the voice-quality metric evaluation has to be done for only about a dozen values of PE, say, from 10⁻¹ to 10⁻⁷, and the listener has to score the voice quality for each of these dozen values of PE, which is much simpler than having to score the voice quality for thousands of channel conditions in the direct Monte Carlo simulation of the entire system.
Given the estimate of the voice-quality metric V as a function of PE obtained in Part III, the distribution of PE as a function of q obtained in Part II, and the distribution of q as a function of Eb/N0 obtained in Part I, we can obtain V as a function of Eb/N0. From this mapping we can estimate the distribution of V and the outage probability for each value of Eb/N0. From a plot of the outage probability versus Eb/N0 we can determine the minimum value of Eb/N0 needed to assure an outage probability less than 2% at a voice-quality threshold of 3. The overall approach for outage probability estimation is summarized in Figure 11.5.
We now turn our attention to the details of each of the three parts of performance evaluation starting with the estimation of the error probability for the waveform channel. This is the most computationally intensive part, since it has to be repeated for 10 values of Eb/N0 and 10,000 channel conditions. The other two parts dealing with the mapping of the error probability q for the waveform channel to the coded error probability PE and the voice-quality metric V are repeated only once for each of approximately 10 values of q (10 values of Eb/N0).
The simulation model for the waveform channel (analog portion of the system) is shown in Figure 11.6. The main objective of the simulation is to obtain the distribution (histogram) of the probability of error q for 10 different values of Eb/N0. For each value of Eb/N0, 10,000 snapshot conditions of the channel are simulated and the histogram of q is obtained from the estimated BER for each channel. During each simulation, the channel condition remains fixed.
We describe the salient features of the simulation model before presenting computationally efficient techniques for simulating this portion of the system.
Input: The input to the system consists of two random binary sources, each with a bit rate of 14,400 bits/second (combined rate of 28,800 bits per second), which represent the bit stream coming from the error control encoder. The two bit sequences ak and bk are combined to produce a complex QPSK symbol sequence Ãk = Ak + jBk, where Ak and Bk are mappings of the binary sequences ak and bk into amplitude sequences of +1 or −1. The QPSK symbol sequence is mapped to a complex-valued QPSK waveform
x̃(t) = Σk (Ak + jBk) p(t − kT)

which is sampled at a rate of 12 samples per symbol (sampling interval Ts = T/12) to create a sampled version of the QPSK waveform x̃(mTs), m = 0, 1, 2, ...,
where T is the duration of the QPSK symbol and p(t) is a rectangular pulse of unit amplitude and duration T. We assume that the bits, and hence Ak and Bk, are independent and take the values +1 and −1 with equal probability.
(The two random binary sources in Figure 11.6 may be combined into a single random binary source operating at the combined rate and the two separate sequences can be obtained at the output by taking the odd and even numbered bits of the output.)
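A minimal Python sketch of the input generation, including the odd/even split of a single binary source mentioned above, and the impulse-sequence representation used later for the filters:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sym = 8                                  # illustrative; real runs use many more

bits = rng.integers(0, 2, size=2 * n_sym)  # single combined-rate binary source
a_k = 2 * bits[0::2] - 1                   # odd-numbered bits -> Ak rail
b_k = 2 * bits[1::2] - 1                   # even-numbered bits -> Bk rail
symbols = a_k + 1j * b_k                   # complex QPSK symbol sequence

# impulse-sequence representation at 12 samples/symbol: only the first
# sample of each symbol interval carries Ak + jBk, the rest are zero
sps = 12
impulses = np.zeros(n_sym * sps, dtype=complex)
impulses[::sps] = symbols
```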
Transmit and Receive Filters: The transmit and receive filters are square root raised cosine (SQRC) filters with the transfer function

H(f) = 1,  |f| ≤ (1 − β)/2T
H(f) = cos[(πT/2β)(|f| − (1 − β)/2T)],  (1 − β)/2T < |f| ≤ (1 + β)/2T
H(f) = 0,  |f| > (1 + β)/2T
where β = 0.5 for a 50% roll-off [1]. These filters are optimum in the sense that the cascade of the two produces a finite-bandwidth waveform with zero ISI and also yields optimum BER performance over AWGN channels. It is customary to include a 1/sinc function in the transfer function of the transmit filter to compensate for the fact that the QPSK waveform at the filter input is a rectangular non-return-to-zero (NRZ) pulse waveform rather than an impulse waveform; the filter transfer function given in the preceding equation will produce a response with zero intersymbol interference (ISI) only when the input is a sequence of impulses. Instead of including a 1/sinc function, we can use an impulse-sequence representation of the QPSK waveform; in this case, only the first of the 12 samples for the kth QPSK symbol has the value Ak + jBk and the remaining 11 samples are zeros.
The SQRC filters are implemented as finite impulse response (FIR) filters; an infinite impulse response (IIR) implementation would be very difficult because the transfer function is not given in pole-zero form in the s (Laplace transform) domain, as discussed in Chapter 5. We will assume that an impulse-invariant transformation with time-domain convolution is used for each filter. The impulse response of the SQRC filter is given by [1]

h(t) = [sin(π(1 − β)t/T) + (4βt/T)cos(π(1 + β)t/T)] / [(πt/T)(1 − (4βt/T)²)]
Since this is clearly a noncausal filter, the impulse response is truncated to a length of four symbols on either side of zero yielding a truncated duration of eight symbols. Shifting the resulting impulse response by four symbols then yields a causal time function.
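The truncate-and-shift construction can be sketched as below. The closed-form taps are the standard square-root raised cosine impulse response with its removable singularities (t = 0 and t = ±T/4β) handled explicitly; the unit-energy normalization is our own choice, not something the text prescribes.

```python
import numpy as np

def sqrc_taps(beta=0.5, sps=12, span=8):
    """SQRC impulse response truncated to `span` symbols (4 on either
    side of t = 0) and shifted to be causal; t is in symbol units."""
    t = (np.arange(span * sps + 1) - span * sps // 2) / sps
    h = np.empty_like(t)
    for i, ti in enumerate(t):
        if abs(ti) < 1e-12:                          # t = 0
            h[i] = 1 - beta + 4 * beta / np.pi
        elif abs(abs(ti) - 1 / (4 * beta)) < 1e-12:  # t = +/- T/(4*beta)
            h[i] = (beta / np.sqrt(2)) * (
                (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            h[i] = ((np.sin(np.pi * ti * (1 - beta))
                     + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta)))
                    / (np.pi * ti * (1 - (4 * beta * ti) ** 2)))
    return h / np.sqrt(np.sum(h ** 2))               # unit energy (a choice)

taps = sqrc_taps()
```

With span = 8 symbols at 12 samples/symbol this yields a 97-tap symmetric FIR filter whose peak sits at the 4-symbol (48-sample) causal delay.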
Channel: The quasi-static channel model defined in (11.2) is characterized by two random variables ã and τ. Each simulation is executed with fixed values of ã and τ drawn from a Rayleigh and a uniform distribution, respectively. The value of τ is rounded off to an integer number of samples, say, r, and the simulation model for the channel consists of a direct path and a delayed path with a delay of r samples and attenuation ã. This model is trivial to implement.
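As noted, the snapshot channel is trivial to implement; a sketch:

```python
import numpy as np

def two_ray(x, a, r):
    """Quasi-static two-ray channel: y[n] = x[n] + a * x[n - r],
    with the delay rounded to r whole samples (r >= 1)."""
    y = x.astype(complex).copy()
    y[r:] += a * x[:-r]
    return y

# an impulse shows the two paths directly
y = two_ray(np.array([1.0, 0.0, 0.0, 0.0]), 0.3, 2)
```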
Equalizer: The SQRC filters will produce zero ISI only when the channel transfer function is ideal over the bandwidth of the signal (which in this example is 0.75 times the symbol rate). Since the channel in this case is nonideal, some residual ISI will be present in the system, and this residual ISI can be minimized by the use of an equalizer in the receiver. While a wide variety of equalizers are available, we chose to include a 9-tap, synchronously spaced least mean square (LMS) equalizer to illustrate several aspects of methodology.
A gradient algorithm is normally used for iteratively adjusting the equalizer weights. If the equalizer convergence is simulated, this has to be done via a Monte Carlo simulation using a training sequence for the input and with noise samples injected during the simulation. Since the LMS equalizer is a linear filter, and the noise at the input to the receiver is AWGN, the noise at the output of the equalizer will be additive and Gaussian, and hence a semianalytic technique can be used for error probability estimation. For BER estimation, we need to simulate only the effects of ISI distortion, and the effects of additive white Gaussian noise can be handled analytically without having to do a Monte Carlo simulation with noise samples.
We consider two approaches for handling the equalizer when using a semianalytic technique for BER estimation. The first is to run a short Monte Carlo simulation at the beginning, with noise samples included, wait until the equalizer weights converge, and then “freeze” the equalizer weights and execute the performance estimation simulation with the noise source turned off.
The second approach that could be used for the equalizer is based on the well-known fact that the equalizer weights will converge to a weight vector whose value can be computed analytically according to
W = Γ⁻¹R    (11.11)

where W is the weight vector, Γ is the “channel covariance matrix,” and R is a vector of sampled values of the unequalized impulse response of the system from the input to the transmit filter to the output of the receive filter [2]. This unequalized impulse response, sampled at the symbol rate, can be obtained from a calibration run in which a unit impulse is applied at the transmit filter input and the impulse response is recorded at the output of the receive filter. The sampled values of the impulse response are used to compute the autocorrelation function of the unequalized impulse response, and the entries in the matrix Γ are obtained from the values of the autocorrelation function. Diagonal entries in Γ will include the autocorrelation value at zero lag plus the noise variance at the input to the equalizer, which can be computed knowing the input noise PSD and the noise bandwidth of the receive filter. With this approach, the equalizer weights can be computed prior to the simulation for BER estimation as part of the “calibration process,” and the equalizer can be treated as an FIR filter during the BER simulations.
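A sketch of this calibration-time computation in Python. The indexing conventions here (choice of equalizer delay, construction of the R vector) are illustrative assumptions, and the pulse is taken to be real-valued for simplicity.

```python
import numpy as np

def mmse_weights(pulse, noise_var, n_taps=9):
    """Solve W = Gamma^{-1} R for a symbol-spaced linear MMSE equalizer.
    `pulse` holds the unequalized pulse response sampled at the symbol
    rate (real-valued here for simplicity)."""
    acf = np.correlate(pulse, pulse, mode="full")
    mid = len(pulse) - 1
    ac = lambda lag: acf[mid + lag] if abs(lag) <= mid else 0.0
    # covariance matrix: pulse autocorrelation plus noise on the diagonal
    gamma = np.array([[ac(i - j) for j in range(n_taps)]
                      for i in range(n_taps)]) + noise_var * np.eye(n_taps)
    d = int(np.argmax(np.abs(pulse))) + n_taps // 2   # assumed equalizer delay
    r = np.array([pulse[d - i] if 0 <= d - i < len(pulse) else 0.0
                  for i in range(n_taps)])
    return np.linalg.solve(gamma, r)

# sanity check: an ideal (ISI-free) pulse yields a center-tap-only solution
w = mmse_weights(np.array([1.0]), noise_var=0.0)
```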
When we use a direct Monte Carlo simulation for performance estimation, the noise source will be “on,” and hence the iterative (gradient) method is used at the beginning to let the equalizer weights converge. The weights are then frozen during performance estimation. (If the semianalytical method is used for BER estimation, the noise source will be turned off during the semianalytical BER estimation phase.)
In the direct Monte Carlo approach, the input and noise processes are explicitly simulated. An estimate of the error rate for each Eb/N0 and channel condition is obtained by counting the number of errors between the symbol sequence at the input to the modulator and the symbol sequence at the output of the decision device. While the equalizer can provide amplitude normalization (which is not necessary with QPSK modulation) and can also compensate for phase offsets, a calibration run must be executed at the beginning to establish a timing reference for the equalizer and for lining up the input and output symbol sequences. Also, an initial training sequence might have to be used to aid equalizer convergence, and the error rate estimation should start only after the equalizer weights have converged and are frozen. The essential steps in the Monte Carlo simulation are as follows:
Draw a set of ã and τ and start with the initial value of Eb/N0.
Execute a calibration run to establish a timing reference for the equalizer and for lining up the input and output for error counting.
Train the equalizer and freeze the weights (noise source turned on with the variance value computed from Eb/N0).
Start the Monte Carlo simulation for performance estimation and run the simulation until about 50 errors are counted.
Repeat for all values of Eb/N0 and 10,000 channel conditions.
Compute the histogram of q for each value of Eb/N0.
While the direct Monte Carlo approach is simple to implement in principle, it does require long simulation runs for each value of Eb/N0 and channel condition. Even if each simulation takes only a few seconds of CPU time, the total effort required to repeat the simulations for 10,000 channel conditions and 10 values of Eb/N0 might be overwhelming.
Since the receiver is linear (an LMS equalizer is an FIR filter), and the noise is additive and Gaussian at the receiver input, the noise at the output will also be additive and Gaussian. Hence we can use the semianalytic approach for performance estimation.
The BER in the system will be a function of intersymbol interference and additive Gaussian noise, the effects of which can be handled analytically. Hence only the ISI produced by the cascade of the transmit filter, the channel, the receive filter, and the equalizer is simulated. The BER is estimated (assuming that the transmitted signal constellation point is (1,1), which maps to (A,A) at the equalizer output as shown in Figure 11.7) using the semianalytic approach described in the previous chapter. This gives
q̂ = (1/2M) Σ(i=1 to M) [Q((A + dxi)/σx) + Q((A + dyi)/σy)]    (11.12)

where dxi and dyi are the direct and quadrature components of the ISI associated with the ith simulated symbol, σx² and σy² are the variances of the direct and quadrature components of the noise at the output of the equalizer, and M is the number of symbols simulated.
The values of σx² and σy² are computed using

σx² = σy² = N0BN    (11.13)
where N0/2 is the PSD of the two-sided bandpass noise at the input to the receive filter and BN is the noise bandwidth of the receive filter and the equalizer together. The noise bandwidth is computed from a calibration run as outlined earlier.
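The estimator of (11.12) is only a few lines of Python; `qfunc` is the Gaussian tail probability, and the zero-ISI check at the end is a sanity test, not part of the method.

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def semianalytic_ber(dx, dy, sigma_x, sigma_y, A=1.0):
    """Average Q over the simulated (noise-free) ISI values at the
    decision instants, as in (11.12)."""
    total = sum(qfunc((A + dxi) / sigma_x) + qfunc((A + dyi) / sigma_y)
                for dxi, dyi in zip(dx, dy))
    return total / (2 * len(dx))

# with zero ISI the estimate collapses to Q(A/sigma)
q = semianalytic_ber([0.0] * 4, [0.0] * 4, 0.5, 0.5)
```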
The steps in applying semianalytic techniques for performance estimation are as follows:
Initialization: Choose an initial value for Eb/N0 and the channel parameters.
Calibrations and equalizer weight determination:
Establish a timing reference for the equalizer and the overall time delay.
Obtain the unequalized impulse response via simulation by injecting an impulse at the input (point A) and compute the weight vector for the equalizer using (11.11).
Compute the noise bandwidth of the receive filter and the equalizer and calibrate the variance of the noise at the output using (11.13).
Simulation: Simulate M symbols and estimate the BER according to (11.12).
Repeat for 10,000 channels and 10 values of Eb/N0 and compute the histogram of q.
The semianalytic error rate estimation can be speeded up considerably by combining all the blocks (the transmit filter, the channel, the receive filter, and the equalizer, after its weights have been computed and set) into a single block, since all of them are linear time-invariant components. Since no noise samples are injected, these cascaded components process the QPSK waveform in a pipeline fashion. From a performance estimation point of view we are interested only in the waveform at the output of the equalizer; since we are not interested in the waveforms at the outputs of the other blocks, there is really no need to process the input waveform through each individual block. Combining all the blocks into one and processing the input waveform through the equivalent representation is computationally very efficient.
The overall impulse response of the system is

h̃(t) = h̃T(t) ⋆ h̃C(t) ⋆ h̃R(t) ⋆ h̃E(t)

the convolution of the impulse responses of the transmit filter, the channel, the receive filter, and the equalizer,
and this response can actually be obtained via simulation by injecting an impulse at point A and measuring the impulse response at the output of the equalizer (point F in Figure 11.4). The overall impulse response can be truncated and the entire system can be simulated as a single FIR filter. An example of the overall impulse response is shown in Figure 11.8. Note that the delay through the system is approximately 135 samples and the impulse response can be truncated to 108 samples, from sample number 135 to 242. The impulse response is assumed to be zero outside this interval. The time indices for the nonzero values of the impulse response are renumbered from 0 to 107 for notational convenience.
A detailed waveform-level simulation of the model will be executed at the sampling rate of rs = 172,800 samples per second according to the equation

ỹ(k) = Σ(p=0 to 107) h̃(p) x̃(k − p)    (11.15)

where x̃(k) are the sampled values of the QPSK waveform at the input to the transmit filter, ỹ(k) is the output of the equalizer, and h̃(p), p = 0 to 107, are the truncated values of the overall impulse response. The output of the equalizer is sampled starting at sample number 187 (why?) and once every 12 samples afterward to produce the decision metric Ṽk, and a decision (estimate of the transmitted symbol) is made based on the value of the decision metric. As far as performance estimation is concerned, we are interested only in every 12th sample (one per symbol) of the equalizer output, corresponding to the decision times; the intervening samples are of no interest or use. Since the equalizer operates with a tap spacing of 12 samples (or one symbol duration T), we can write an expression for the decision metric using every 12th sample of the impulse response (see Figure 11.8) as

Ṽk = Σ(j=0 to 8) g̃j Ãk−j    (11.16)

where g̃j are the symbol-spaced samples of the renumbered overall impulse response and Ãk is the QPSK symbol sequence.
Note that (11.16) is the entire simulation model for semianalytic error rate estimation; we simply generate a sequence of QPSK symbols and process them through (11.16). This requires only nine multiply-and-add operations per symbol to produce the values of the decision metric, which represents the input symbol with additive ISI. The semianalytic error computation given in (11.12) is applied to the sequence of decision metrics Ṽk.
It is easy to see that the model given in (11.16) will be about two orders of magnitude faster than the model given in (11.15), since we now compute the output only once every 12 samples and each output sample requires only 9 multiply-and-add operations, as opposed to 108 multiply-and-add operations for the model given in (11.15). Compared to processing the QPSK waveform through each block, the combined model in (11.15) will be faster by a factor of about three to four, since we have combined four blocks. Thus, the overall computational savings of the simulation model given in (11.16) could be of the order of 1,000 compared to the direct approach, wherein we simulate the evolution of waveforms through each block in the simulation model on a sample-by-sample basis.
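The speedup is easy to see in code: the entire waveform chain collapses to a short symbol-spaced convolution. The tap values below are illustrative, not taken from Figure 11.8.

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative symbol-spaced taps of the combined response (main tap first)
h_sym = np.array([1.0, 0.12, -0.05, 0.02])

a = 2 * rng.integers(0, 2, 64) - 1
b = 2 * rng.integers(0, 2, 64) - 1
symbols = a + 1j * b

metrics = np.convolve(symbols, h_sym)[:len(symbols)]  # decision metrics
isi = metrics - symbols                               # residual ISI per symbol
```

With a unit main tap, the per-component ISI magnitude is bounded by the sum of the magnitudes of the remaining taps (0.19 here), which is exactly the quantity the semianalytic estimator averages over.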
The simulation model given in (11.16), coupled with the semianalytic approach for BER estimation, will be computationally very efficient. The overall memory length of the system is nine symbols, and hence a PN sequence with a period of 2⁹ = 512 symbols will be adequate to produce all possible ISI values except, of course, the all-zero sequence. Thus, after calibration, each performance simulation run consists of generating 512 QPSK symbols, computing the 512 output samples according to (11.16), and applying the semianalytic estimation given in (11.12). The essential steps in the fast semianalytic error rate estimation can be summarized as follows:
Initialization: Choose an initial value for Eb/N0 and the channel parameters.
Calibrations and equalizer weight determination:
Establish a timing reference for the equalizer and the overall time delay.
Obtain the unequalized impulse response via simulation by injecting an impulse at the input (point A) and compute the weight vector for the equalizer using (11.11).
Compute the noise bandwidth of the receive filter and the equalizer and calibrate the variance of the noise at the output using (11.13).
Obtain the equalized pulse response sampled at the symbol rate by injecting an impulse at the input (point A) and sampling the impulse response at the equalizer output once every symbol (Figure 11.8).
Simulation: Generate M = 512 QPSK symbols, process them according to (11.16), and estimate the error probability according to (11.12).
Repeat for 10,000 channels and 10 values of Eb/N0 and compute the histogram of q.
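The PN input called for in the steps above must exercise every nonzero ISI pattern; a 9-stage maximal-length LFSR does this. The feedback taps (9, 5) below are a standard primitive-polynomial choice (an assumption of this sketch, not a value from the text): every nonzero 9-bit pattern then appears exactly once per period of 2⁹ − 1 = 511 bits.

```python
def msequence(n=9, taps=(9, 5)):
    """Maximal-length PN sequence from an n-stage Fibonacci LFSR."""
    state = [1] * n
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])                      # output stage
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]                  # shift in feedback bit
    return out

pn = msequence()
```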
While the fast semianalytic technique described in the preceding section is computationally very efficient, the computational load increases significantly if the memory length of the system and/or the order of the modulation scheme increases. This is because the number of symbols that must be simulated to cover all possible ISI values grows according to M = m^L, where L is the memory length of the system and m is the alphabet size (2 in the binary case). When m and L are large, the simulation length will be very long. In such cases we can use another method to reduce the computational burden.
The basic computation performed in (11.12) can be expressed as

q = (1/2) E{Q((A + Dx)/σx)} + (1/2) E{Q((A + Dy)/σy)}    (11.17)
where the expectations are taken with respect to Dx and Dy, which are the random variables that represent the direct and quadrature phase components of the ISI. Rather than estimating this expected value by averaging over simulated values of ISI, we can compute the moments of the ISI, approximate the distribution of Dx and Dy using the computed moments of the ISI, and then perform the expected value operation using the approximate distribution of ISI.
Let us consider approximating the distribution of Dx using its moments. From (11.16), we can write the ISI term Dx as

Dx = Σ′j (αj Aj − βj Bj)    (11.18)
where the prime indicates that the main (desired) tap is excluded from the sum, αj and βj are the real and imaginary components of the symbol-spaced impulse response (i.e., g̃j = αj + jβj), and Aj = ±1, Bj = ±1 are the real and imaginary components of the QPSK symbol sequence. The moments of Dx can be computed according to

E{Dxⁿ} = E{[Σ′j (αj Aj − βj Bj)]ⁿ}    (11.19)
Note that in the preceding equation the impulse response values are known constants, and Aj and Bj are independent binary random variables taking the values ±1. The odd moments of Aj and Bj are zero and the even moments are 1. Hence the computation of the moments of Dx involves a simple expansion of the sum in (11.19): we compute the expected value of each term and add them.
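Because the taps enter the sum independently, the moments can be built up one tap at a time with the binomial theorem rather than by expanding the full multinomial at once. The following sketch illustrates this (the function name is an assumption; for the QPSK case the α and β weights can simply be concatenated into one coefficient list, since Aj and Bj are i.i.d. symmetric ±1 and a sign flip on a weight does not change the moments):

```python
from math import comb

def isi_moments(c, nmax):
    """Moments E{D^n}, n = 0..nmax, of D = sum_k c_k * A_k with
    independent A_k = +/-1, so E{A^m} = 0 for odd m and 1 for even m.
    The moments are built up one tap at a time via the binomial theorem.
    """
    mu = [1.0] + [0.0] * nmax          # moments of the empty sum
    for ck in c:
        new = []
        for n in range(nmax + 1):
            # E{(S + ck*A)^n} = sum_i C(n,i) E{S^i} ck^(n-i) E{A^(n-i)},
            # keeping only the terms where (n - i) is even
            s = sum(comb(n, i) * mu[i] * ck ** (n - i)
                    for i in range(n + 1) if (n - i) % 2 == 0)
            new.append(s)
        mu = new
    return mu
```

For example, two unit taps give D ∈ {−2, 0, 2} with probabilities 1/4, 1/2, 1/4, so the second moment is 2 and the fourth moment is 8, which the recursion reproduces.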
From the moments of Dx we can obtain a discrete approximation to the distribution of Dx. In this approximation, Dx is treated as a discrete random variable with J values d1, ..., dJ, with probabilities p1, ..., pJ. As a simple example of the principle, consider a discrete approximation of a Gaussian pdf as shown in Figure 11.9.
The abscissas x1, ..., xJ and the ordinates p1, ..., pJ are chosen such that the continuous distribution and the discrete approximation yield the same moments
Given the first 2J moments µ1, ..., µ2J of X, we can solve the set of 2J nonlinear equations
for the values of xk and pk, k = 1,2,...,J. Details of the techniques used to derive the discrete approximation of a distribution using its moments may be found in [3, 4].
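For the Gaussian example of Figure 11.9, such a moment-matching discrete approximation is exactly what probabilists' Gauss-Hermite quadrature provides: the J nodes and normalized weights reproduce the moments of N(0, 1) up to order 2J − 1. A minimal sketch, using NumPy's built-in routine (the function name `discrete_gaussian` is an assumption of this sketch):

```python
import numpy as np

def discrete_gaussian(J):
    """J-point discrete approximation (x_k, p_k) to a standard normal.
    The probabilists' Gauss-Hermite nodes and normalized weights match
    the moments of N(0,1) exactly up to order 2J - 1."""
    x, w = np.polynomial.hermite_e.hermegauss(J)
    return x, w / w.sum()

x, p = discrete_gaussian(6)
# compare a few moments against the exact N(0,1) values 0, 1, 0, 3
for n, exact in [(1, 0.0), (2, 1.0), (3, 0.0), (4, 3.0)]:
    assert abs(np.sum(p * x**n) - exact) < 1e-8
```

For a general (non-Gaussian) ISI distribution the nodes and weights must instead be solved from the computed moments, as in the references cited above.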
Using the discrete approximation for the ISI distribution we can compute E{Q[(A + Dx)/σx]} as
The second term in (11.17) can be computed using a similar procedure.
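With the discrete approximation in hand, the expectation reduces to a weighted sum of Q-function evaluations over the J support points. A sketch of this step (the function name and the argument grouping Q[(A + d_k)/σ] follow the discussion above, but are the sketch's own notation):

```python
from math import erfc, sqrt

def qfunc(x):
    # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / sqrt(2.0))

def ber_from_discrete_isi(A, sigma, d, p):
    """Evaluate E{Q[(A + Dx)/sigma]} with Dx replaced by its J-point
    discrete approximation: values d_k with probabilities p_k."""
    return sum(pk * qfunc((A + dk) / sigma) for dk, pk in zip(d, p))
```

With a single point d = [0], p = [1] (no ISI) this again reduces to the analytic value Q(A/σ).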
Note that, in this method, there is no Monte Carlo simulation at all for performance estimation! It is entirely analytic except for two single-event simulations: one for obtaining the unequalized pulse response from which the equalizer weights are computed, and a second pulse response simulation to obtain the equalized pulse response sampled at the symbol rate. The moments of the ISI distribution are computed using (11.19), and the discrete approximation of the distribution of ISI and the probability of error are also computed analytically. In addition to the two single-event simulations, calibration runs have to be executed for establishing the timing references for sampling the unequalized and equalized pulses and for calculating the noise bandwidth of the receive filter.
The computational efficiency of the moment method will depend on the number of moments needed to obtain a good approximation of the ISI distribution and the computational load associated with computing the moments and the moment-based approximation. The latter will be a function of the length of the impulse response, which in many cases can be truncated to 10 or so symbols. Good approximations of the ISI distribution can be obtained from the first six or eight moments.
The moment method is very useful if higher-order modulation schemes such as 256 QAM are used. In this case the direct and quadrature phase waveforms have 16 amplitude levels, and if the memory length is 10, then we need a 16-ary PN sequence of length 16^10 symbols to simulate all possible ISI values. Hence the semianalytic method will require a long simulation, whereas the moment method in this case will be computationally much more efficient. The key steps in applying the moment method are summarized below:
Initialization: Choose an initial value for Eb/N0 and the channel parameters.
Calibration and equalizer weight determination:
Establish a timing reference for the equalizer and the overall time delay.
Obtain the unequalized impulse response via simulation by injecting an impulse at the input A and compute the weight vector for the equalizer [Equation (11.11)].
Compute the noise bandwidth of the receive filter and the equalizer and calibrate the variance of the noise at the output using (11.13).
Obtain the equalized pulse response sampled at the symbol rate by injecting an impulse at the input (point A in Figure 11.4) and sampling the impulse response at the equalizer output once every symbol (Figure 11.8).
Compute the moments and moment based approximation of the distribution of ISI.
Compute the BER according to (11.22).
Repeat for 10,000 channels and 10 values of Eb/N0 and compute the histogram of q.
In this section we illustrated many important aspects of the methodology for simulating the waveform processing portion of wireless communication systems operating over slowly fading channels. Several approaches to simplifying the simulation problem were discussed and a number of performance estimation techniques were also presented. These techniques range from pure Monte Carlo, to partial Monte Carlo, to a totally analytic method.
In any performance estimation simulation, a considerable amount of up-front effort must be expended for simplifying the simulation model and for examining and evaluating the various approaches that are possible. These efforts will lead to tremendous computational savings during performance estimation simulations. While many of the details discussed in the context of the example presented in this section may not be directly applicable to other problems, the overall methodology presented in this example should suggest the range of factors that should be considered before reaching the final choice of a performance estimation procedure.
It should be obvious by now that calibration and single-event simulations for measuring pulse and impulse responses play an important role prior to any performance estimation. Not all of these calibrations have to be repeated for each situation. For example, if the channel condition is fixed, the unequalized pulse response does not have to be measured for each value of Eb/N0.
The next step in outage probability estimation is to obtain the coded bit error probability PE(Eb/N0) from the uncoded bit error probability q(Eb/N0). This can be accomplished semianalytically using the transfer function bound for the convolutional code used for error correction. This technique was discussed in Chapter 8.
The last step in outage probability estimation is the mapping of the coded error probability to the voice-quality metric. This is accomplished by injecting binary errors between the output of the voice coder and the input to the decoder and scoring the quality of the resulting voice output. By repeating the listening tests for various values of the injected error rate, we can establish the relationship between the error rate PE and the voice-quality metric V. An example is given in Table 11.1.
Note that for this part of the performance evaluation process, the derivation of the relationship between PE and V can be carried out independent of the previous steps. Also, this table could have been obtained from the manufacturer of the voice coder/decoder chip set.
From the preceding table it is clear that the outage probability P(V < 3) can be expressed in terms of the distribution of PE as
From the analytic bounds which relate PE to q we can establish the value of q, say q0, which yields PE > 5 × 10^−3. The system outage probability is then equal to P(q > q0), which can be obtained for different values of Eb/N0 from the distribution of the BER q for the analog portion of the channel. The outage probability P(q > q0) will decrease as Eb/N0 is increased, and by plotting the outage probability as a function of Eb/N0, we can determine the minimum value of Eb/N0 needed to maintain P(q > q0) < 0.02.
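Once the per-channel BER estimates have been collected, this last computation is a simple empirical tail probability. A sketch, assuming the BER estimates are arranged as a matrix with one row per Eb/N0 value and one column per channel realization (the function name, array layout, and the threshold value used in the example are assumptions of this sketch):

```python
import numpy as np

def outage_curve(q_samples, q0):
    """Estimate P(q > q0) at each Eb/N0 from simulated BER values.
    q_samples[i, j] : BER for channel realization j at the i-th Eb/N0.
    Returns one empirical outage probability per Eb/N0 value."""
    return np.mean(np.asarray(q_samples) > q0, axis=1)

# hypothetical usage: pick the smallest Eb/N0 meeting the 2% target
# ebn0_db = np.arange(0, 20, 2)
# ok = outage_curve(q, q0=1e-2) < 0.02
# ebn0_min = ebn0_db[np.argmax(ok)] if ok.any() else None
```

With 10,000 channel realizations per Eb/N0 point, the empirical tail estimate at the 2% level is based on roughly 200 outage events, which keeps its relative variance modest.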
We now summarize the major steps in the methodology used for estimating outage probability in a wireless communication system:
Determine the relationship between the coded error probability PE and the voice-quality metric V by simulating the voice encoder/decoder for different values of PE.
Compute the bounds that relate the uncoded probability of error q to the coded probability of error PE.
Simulate the analog portion of the system for various values of Eb/N0 and 10,000 channel conditions and obtain the distribution of q for different values of Eb/N0.
Map the distribution of q to the distribution of V for different values of Eb/N0, and determine the value of Eb/N0 needed to maintain the specified outage probability.
In this chapter we described the methodology used for estimating system-level performance metrics such as outage probabilities in a typical wireless communication system operating over a slowly fading channel. This problem requires extensive simulations for several thousand channel conditions, and brute-force Monte Carlo simulation is simply not feasible because of the computational burden involved.
Given the computational burden required for simulating the system for many thousands of channel conditions, we need to carefully look at simplifying the simulation model, as well as at alternate techniques for estimating the error rate in the system. Several approaches were discussed in this chapter for reducing the computational burden associated with this problem, such as
Using a hierarchical approach and partitioning the problem so that the complexity of simulations grows linearly rather than in a multiplicative manner for various combinations of parameter values.
Simplifying the simulation model by combining various functional blocks and looking at alternate approaches for simulating the behavior of individual components (e.g., the channel covariance inversion technique for simulating the equalizer).
Using semianalytical techniques when possible.
Running the simulations at symbol rate rather than at a rate of 8 to 16 samples/symbol.
Using moment based approximations for some of the pdfs involved in the semianalytical procedure.
Reducing the entire simulation run to a small number of single-event simulations and using analytical computations instead of Monte Carlo simulations for estimating performance metrics.
Using a semianalytical technique for obtaining PE from q.
While the techniques discussed in this chapter may not be directly applicable to a different simulation problem, the overall methodology clearly illustrates the need to think through and carefully design the simulation experiment. By doing so, it is possible to reduce the computational burden significantly.
An excellent example of the application of the methodology discussed here can be found in
M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communication Systems, 2nd ed., New York: Kluwer Academic/Plenum Publishers, 2000, Chap. 12.
11.1 Consider a binary baseband system in which the received signal at sampling instants consists of yk = Ak + Dk + nk, where {Ak} is the transmitted symbol sequence, Dk is the intersymbol interference, and nk is the noise. Assume the symbol sequence to be independent and Ak = ±1. The ISI term is given by where the sampled values of the overall impulse response of the system that contribute to ISI are tabulated below:
11.2 Repeat Problem 11.1 for a QPSK signal where the I and Q symbol sequences {Ak, Bk} are made up of independent binary values ±1 and the overall complex lowpass equivalent response of the system is given in the table below:
11.3 Develop a recursive formula for computing the moments of the ISI. In other words, expand the nth moment of the ISI sum and arrange it such that the moment can be computed by successively adding the contributions coming from h1, h2, h3, ....
11.4 Use the approximations for a Gaussian pdf given in reference [3] with 20 points and compare the first eight moments of the actual Gaussian pdf with the moments of the approximation.
11.5 Simulate a 5 tap LMS equalizer for a binary system with the impulse response values given in Problem 11.1 using ordinary Monte Carlo simulations (gradient algorithm) with noise samples injected, and compare the mean square error results with the weight vectors obtained from the channel covariance inversion method (Figure 11.10).