Chapter 12. Modeling and Simulation of Nonlinearities

Simulation models of functional blocks in communication systems fall into three broad categories as follows:

  1. Linear time-invariant causal systems such as filters and static communication channels

  2. Linear time-varying systems such as fading channels and linear equalizers

  3. Nonlinear time-invariant systems such as high-power amplifiers, limiters, and phase-locked loops (PLLs)

While most of the functional blocks in a communication system are either linear or can be approximated by linear behavior, there are many functional blocks that are nonlinear. Some of these components are inherently nonlinear and are intentionally included in the system to improve performance. A limiter, for example, is a nonlinear element that may be included at the front end of a receiver to improve performance in the presence of impulsive noise. Another example of a nonlinear device that produces better performance over its linear counterpart is the decision feedback equalizer (DFE). Other functional blocks, such as a high-power amplifier, however, may exhibit unintentional nonlinear behavior.

Introduction

There are many functional blocks in communications systems whose intended or desired behavior is linear but the physical devices that are used to implement these functions may produce nonlinear effects over certain ranges of operation. One example of such a device is a high-power amplifier, which might exhibit limiting and saturation when the input amplitude or power is very large.

Mathematical analysis of the effects of nonlinearities is, in general, very difficult. Even in the case of a simple third-order memoryless nonlinearity such as

Equation 12.1. 

it is very difficult in general to compute the probability density function (pdf) and the autocorrelation function of the output y(t) given the pdf and the autocorrelation function of the input x(t). Using simulation, however, it is very easy to generate sampled values of the input {x(kTs)} and then use (12.1) to generate sampled values of the output {y(kTs)}. From the two sequences {x(kTs)} and {y(kTs)} one may estimate a number of quantities of interest, including the pdf and autocorrelation function of x(t), the pdf and autocorrelation function of y(t), and the cross-correlation of x(t) and y(t). Indeed, simulation may be the only method available for the analysis and design of communication systems containing nonlinearities, as well as nonideal filters and non-Gaussian noise. This chapter focuses on methods of modeling and simulating nonlinear components in communication systems.
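As a concrete illustration of this simulation-based approach, the sketch below assumes a representative third-order nonlinearity, y = x − 0.3x³, and an illustrative lowpass Gaussian input, since (12.1) is shown above only as an equation number; it estimates the pdfs (normalized histograms) and the (circular) autocorrelation functions of the input and output sequences.

% Minimal sketch (assumed nonlinearity and input; not code from the text)
N  = 2^16;                               % number of samples
x  = filter(ones(1,10)/10,1,randn(1,N)); % lowpass-shaped Gaussian input
y  = x - 0.3*x.^3;                       % assumed third-order nonlinearity
[nx,cx] = hist(x,50); pdfx = nx/(N*(cx(2)-cx(1)));  % pdf estimate of x
[ny,cy] = hist(y,50); pdfy = ny/(N*(cy(2)-cy(1)));  % pdf estimate of y
Rx = real(ifft(abs(fft(x)).^2))/N;       % circular autocorrelation estimate of x
Ry = real(ifft(abs(fft(y)).^2))/N;       % circular autocorrelation estimate of y
subplot(2,2,1); plot(cx,pdfx); grid; title('pdf of x')
subplot(2,2,2); plot(cy,pdfy); grid; title('pdf of y')
subplot(2,2,3); plot(0:49,Rx(1:50)); grid; title('autocorrelation of x')
subplot(2,2,4); plot(0:49,Ry(1:50)); grid; title('autocorrelation of y')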

Types of Nonlinearities and Models

Nonlinearities in communication systems may be either baseband or bandpass. A limiter, for example, is a baseband nonlinearity, whereas a radio frequency (RF) amplifier is a bandpass nonlinearity. The input to a bandpass nonlinearity will be centered at some frequency fc and the spectral components of the output will lie in the neighborhood of fc. Harmonic terms near 2fc, 3fc, ..., will not be of interest in most cases, since the functional blocks “downstream” from the nonlinearity will usually reject the harmonic terms. (One example of where the harmonic terms might be of interest is in the analysis of “spurs,” or spurious components, in the output of mixers.) The most commonly used model for a bandpass nonlinearity is a baseband nonlinearity followed by a “zonal filter” that passes only those components that lie within or near the band of frequencies occupied by the input to the nonlinearity.

Another classification of nonlinearity is based on whether the nonlinearity is, or is not, memoryless. The output of a memoryless nonlinearity at time t will depend only on the instantaneous value of the input at time t, whereas a nonlinearity with memory will generate an output at time t that is a function of the present and past values of the input. Devices with memory will exhibit frequency-selective behavior, and they are usually modeled by a memoryless nonlinearity “sandwiched” between two filters as shown in Figure 12.1. The filters account for the frequency-selective properties of the nonlinearity with memory.


Figure 12.1. Model for a frequency selective nonlinearity with memory.

Bandpass nonlinearities can be modeled and simulated using the real-valued bandpass version of the signals. As we will see later on, the behavior of most bandpass nonlinearities can also be modeled and simulated in terms of the lowpass complex envelopes of the bandpass signals. The lowpass complex envelope representation leads to significant computational savings and hence is the preferred simulation technique.

Nonlinear devices may also be described by nonlinear differential equations. In this case, the simulation may be carried out in the form of a recursive solution of the nonlinear differential equations. An alternate approach is to represent the nonlinear system in block diagram form (if possible) and simulate the block diagram model. This approach is called an assembled block diagram method.

The assembled block diagram method was treated in some detail in Chapter 6 when we studied the phase-locked loop, but to briefly review consider a subsystem described by the differential equation

Equation 12.2. 

This differential equation can be rearranged as

Equation 12.3. 

which can be represented in a block diagram form as shown in Figure 12.2. As we saw in Chapter 6, the simulation immediately follows by representing the continuous-time integrations by an appropriate algorithm for discrete-time integration.


Figure 12.2. Assembled block diagram model for a nonlinearity.

The nonlinear differential equation given in (12.2) can also be simulated directly using recursive numerical integration techniques. This method is computationally very efficient, but requires some up-front effort to derive the model. For this reason, it is usually reserved for very complex nonlinear devices.

Simulation of Nonlinearities—Factors to Consider

There are a number of factors to be considered when one attempts to simulate the behavior of a nonlinearity. The simulation of nonlinearities is almost always performed in the time domain except for filters included in the model to account for frequency-selective behavior. Filters, of course, can be simulated in the time domain or in the frequency domain.

To illustrate some of the factors that must be considered in the process of modeling or simulating a nonlinearity, assume that the model takes the form of either the zero-memory model described by (12.1) or the frequency-selective model shown in Figure 12.1.

Sampling Rate

The first factor we have to consider is the sampling rate. For a linear system we typically set the sampling rate at 8 to 16 times the bandwidth of the input signal. In the case of a nonlinearity of the form

Equation 12.4. 

where the input x(t) is a deterministic finite energy signal, the transform Y(f) of the output y(t) will be given by

Equation 12.5. 

where ⊛ denotes convolution. The triple convolution will lead to a threefold increase in the bandwidth of Y(f) over the bandwidth of X(f). This effect, called spectral spreading, is a direct consequence of the nonlinearity. If y(t) is to be represented adequately without excessive aliasing error, the sampling rate must be set on the basis of the bandwidth of y(t), which is much higher than the bandwidth of x(t). Thus, in setting the sampling rate for simulating a nonlinearity, we must take spectral spreading into account and set an appropriately high sampling rate. However, the sampling frequency actually required for simulation will not be as high as that indicated in this example (see Section 12.2.2 on the zonal bandpass model).
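Spectral spreading is easy to demonstrate with a few lines of code. The sketch below is purely illustrative (the filter, signal, and sampling parameters are arbitrary choices, and the cubing operation stands in for (12.4)): a roughly lowpass signal is cubed and the input and output periodograms are compared, the latter occupying roughly three times the input bandwidth.

% Minimal sketch of spectral spreading (illustrative parameters)
N  = 4096; fs = 1000;                      % samples and sampling rate
x  = filter(ones(1,20)/20,1,randn(1,N));   % roughly lowpass input
y  = x.^3;                                 % cubing nonlinearity
f  = (-N/2:N/2-1)*(fs/N);                  % frequency axis
Px = fftshift(abs(fft(x)).^2)/N;           % input periodogram
Py = fftshift(abs(fft(y)).^2)/N;           % output periodogram
subplot(2,1,1); plot(f,10*log10(Px+eps)); grid; ylabel('Input PSD - dB')
subplot(2,1,2); plot(f,10*log10(Py+eps)); grid; ylabel('Output PSD - dB')
xlabel('Frequency - Hz')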

Cascading

Another factor that we have to consider is the effect of cascading linear and nonlinear blocks as in Figure 12.1. If we are using, for example, the overlap and add technique for simulating the filters, we have to exercise caution. We cannot process blocks of data through the first filter, then through the nonlinearity and the second filter, and perform overlap and add at the output of the second filter. This operation is incorrect, since the principle of superposition does not apply for nonlinearities. A correct processing technique is to apply overlap and add at the output of the first filter, compute the time-domain samples representing the first filter output, process these samples on a sample-by-sample basis through the memoryless nonlinearity, and apply the overlap and add technique to the second filter. Note that this problem does not occur in the overlap and save method, since superposition is not applied.
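In code, the correct ordering simply means completing the overlap-and-add reconstruction for each filter before the memoryless nonlinearity is applied sample by sample. A minimal sketch, assuming the Signal Processing Toolbox function fftfilt (which performs FFT-based FIR filtering with overlap-and-add internally) and an illustrative cubic nonlinearity:

% Minimal sketch: block (overlap-and-add) filtering around a memoryless
% nonlinearity (assumes the Signal Processing Toolbox; illustrative filters)
x  = randn(1,8192);                        % input samples
h1 = ones(1,16)/16; h2 = ones(1,16)/16;    % illustrative FIR filters
u  = fftfilt(h1,x);                        % filter 1: overlap-and-add completed here
v  = u - 0.3*u.^3;                         % memoryless nonlinearity, sample by sample
y  = fftfilt(h2,v);                        % filter 2: overlap-and-add applied again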

Nonlinear Feedback Loops

Feedback loops might require the insertion of a one-sample delay in the loop in order to avoid computational deadlocks (recall, from Chapter 6, the feedback path in a PLL simulation). A small delay in a linear feedback loop may not adversely affect the simulation results. In a nonlinear system, however, a small delay in a feedback loop might not only significantly degrade the simulation results but might even lead to unstable behavior. In order to avoid these effects, the sampling rate must be increased, which, in effect, decreases the delay.

Variable Sampling Rate and Interpolation

If the model is a nonlinear differential equation that is solved using numerical integration techniques, many numerical integration algorithms included in software packages such as SIMULINK will use a variable integration step size. The step size will be determined automatically at each step depending on the behavior of the solution. If the solution is well behaved locally, then a large step size might be used. In order to avoid aliasing problems in downstream blocks, it might be necessary to interpolate the output and produce uniformly spaced samples of the output signal.

Modeling and Simulation of Memoryless Nonlinearities

The input-output relationship of a memoryless nonlinearity is given by

Equation 12.6. 

where F(·) is a nonlinear function and t0 may be assumed to be zero. If x(t) is a baseband signal, then y(t) will also be treated as a baseband signal. On the other hand, if x(t) is a bandpass signal, then y(t) will in general be a bandpass signal, but y(t) may also contain a dc term as well as harmonics of the input signal. In practical cases, we might be interested in only those output terms whose spectral contents lie near the input frequencies. We can select these components of interest by adding a zonal bandpass filter at the output of the nonlinearity. Thus, the terminology lowpass versus bandpass applies to the frequency content of the input and the output. The nonlinearity itself is modeled as a simple instantaneous device.

In Chapter 4 the lowpass complex envelope representation for modeling and simulating bandpass signals through linear systems was developed. It is desirable to have models for bandpass nonlinearities also in terms of complex lowpass envelopes of the input and output. It turns out that complex lowpass envelope models for many nonlinear devices including limiters (recall the example in Chapter 4), broadband RF amplifiers, phase-locked loops, decision feedback equalizers, and timing recovery systems can be derived.

Baseband Nonlinearities

The input to a baseband nonlinearity is a real-valued signal x(t) and its output is also a real-valued signal, y(t). The nonlinearity is modeled as y(t) = F(x(t)). The most commonly used models of baseband nonlinearities are the power series model and the limiter model. The power series model is defined by

Equation 12.7. 

and the general limiter model has the form [1]

Equation 12.8. 

In (12.8), M is the limiting value of the output, m is the input limiting value, and s is the “shape” parameter. The normalized input-output relationship for a limiter is given in Figure 12.3 for different values of s. Note that s = ∞ corresponds to a “soft” limiter and m = 0 corresponds to a hard limiter. Also, note that with m = 0, the value of s has no effect on the characteristic of the nonlinearity described by (12.8). Figure 12.3 can be generated with the following MATLAB program:

% File:  c12_limiter.m
x = -10:0.1:10;             % input voltage vector
M = 1;                      % output limiting value
m = 1;                      % input limiting value
s = [0.5 1.0 10.0];         % vector of shape factors
for k = 1:201
    xx = x(k);
    num = M*sign(xx);
    for kk = 1:3
        xxx = (m/abs(xx))^s(kk);
        den = (1+xxx)^(1/s(kk));
        y(k,kk) = num/den;
    end
end
plot(x,y(:,1),'k-',x,y(:,2),'k:',x,y(:,3),'k--'),
grid
xlabel('Input Voltage')
ylabel('Output Voltage')
legend('s=0.5','s=1.0','s=10.0',2)
% End of script file.


Figure 12.3. Limiter characteristics.

The student is encouraged to examine the behavior of the model for various values of m and s.

Simulation of the models given in (12.7) and (12.8) is trivial and is always implemented in the time domain. We simply generate sample values of x(t) and process these samples through (12.7) or (12.8) to generate samples of y(t). The samples of y(t) are then used as input to the downstream blocks. The properties of y(t), when the input x(t) is a random process, can be estimated from the samples {y(kTs)}. We do need to pay careful attention to the sampling rate requirements as discussed at the beginning of this section.

Bandpass Nonlinearities—Zonal Bandpass Model

Memoryless bandpass models are used to characterize a variety of narrowband nonlinear bandpass devices encountered in communications systems. The word memoryless implies not only an instantaneous relationship between input and output, but also implies that the device does not exhibit frequency-selective behavior over the bandwidth of operation. The bandwidth of the nonlinear device and the bandwidth of the signal are both assumed to be much less than fc, where fc is the carrier frequency.

When the bandwidth becomes wider, the nonlinearity may exhibit frequency-selective behavior, and we use appropriate frequency-selective models for such devices. Frequency selectivity is synonymous with memory, and the most commonly used model for frequency-selective nonlinearities (i.e., nonlinearities with memory) consists of a memoryless nonlinearity sandwiched between two filters as illustrated in Figure 12.1. We will focus on bandpass nonlinearities without memory in this section and deal with nonlinearities with memory in the next section.

Consider a memoryless nonlinearity of the form

Equation 12.9. 

Assume that the input is a bandpass random signal of the form

Equation 12.10. 

where the amplitude A(t) and the phase deviation φ(t) are lowpass random processes having bandwidth B ≪ fc (the narrowband assumption). From (12.9) and (12.10) we obtain the output of the nonlinearity Y(t) as

Equation 12.11. 

In the preceding equation, the first term is at the center or carrier frequency fc and the last term is at the third harmonic of the carrier frequency 3fc. The bandwidth of the third harmonic will be of the order of 3B. Since our assumption is that fc ≫ B, this third-harmonic term will be well outside the bandwidth of interest. We can therefore approximate the first zone output of the nonlinearity as

Equation 12.12. 

or

Equation 12.13. 

where

Equation 12.14. 

Thus, the model for a narrowband, memoryless nonlinearity is a memoryless nonlinearity followed by a “zonal” bandpass filter that passes only the “first zone” output near fc. The model is illustrated in Figure 12.4, in which x(t) and y(t) represent the bandpass input and output of the model, respectively, and the center frequency of the zonal bandpass filter is fc. The memoryless nonlinearity itself does not respond differently to the baseband or bandpass input, and it is also not sensitive to the carrier frequency. It is the zonal bandpass filter that turns the baseband model into a bandpass model at fc.
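The sketch below mocks up the model of Figure 12.4 for an assumed cubic nonlinearity and an amplitude-modulated carrier; the zonal filter is implemented as an ideal FFT-domain mask around fc to avoid any toolbox dependence. All parameter values are illustrative.

% Minimal sketch of the zonal bandpass model (illustrative parameters)
N  = 8192; fs = 1024; ts = 1/fs; t = (0:N-1)*ts;
fc = 64; B = 4;                            % carrier frequency and envelope bandwidth
A  = 1 + 0.5*cos(2*pi*B*t);                % slowly varying amplitude
x  = A.*cos(2*pi*fc*t);                    % bandpass input
v  = x + 0.2*x.^3;                         % assumed memoryless nonlinearity
f  = (-N/2:N/2-1)*(fs/N);                  % frequency axis
V  = fftshift(fft(v));
mask = (abs(abs(f)-fc) <= 3*B);            % ideal zonal filter around +/- fc
y  = real(ifft(ifftshift(V.*mask)));       % first-zone output
plot(t(1:512),x(1:512),'k:',t(1:512),y(1:512),'k-'); grid
xlabel('Time - s'); legend('input','first-zone output')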


Figure 12.4. Zonal Bandpass Model for a Memoryless, Narrowband Nonlinearity.

Note that for the power series model, the bandpass output y(t) has the same form as the input, with the output amplitude related to the input amplitude through f(A(t)) and the output phase equal to the input phase. The function f(A(t)) is referred to as the input-amplitude-to-output-amplitude, or AM-to-AM, transfer characteristic of the nonlinearity. A limiter or power series model affects only the amplitude of the input signal. The phase is unaffected by the model.

In terms of the complex envelopes of the input x(t) and the output z(t), the lowpass equivalent model for the power series nonlinearity is

Equation 12.15. 

and

Equation 12.16. 

The power series model can be simulated using the zonal bandpass model given in (12.13) and Figure 12.4 or the lowpass equivalent model given in (12.16). Simulating the bandpass model requires a much higher sampling rate and computational complexity than the lowpass equivalent model. Fortunately it is possible to derive lowpass equivalent models for most memoryless nonlinearities either analytically or from measurements as shown in the following sections.

Lowpass Complex Envelope (AM-to-AM and AM-to-PM) Models

Devices such as bandpass amplifiers, which respond to bandpass inputs having spectra centered on the carrier frequency, produce outputs that are bandpass in nature. The spectral components of these bandpass outputs are centered on the carrier frequency but perhaps have larger bandwidths than the input signal. Such devices can be modeled using the complex envelope representations of the input and output signals.

Suppose the input to a memoryless bandpass nonlinearity is of the form

Equation 12.17. 

where

Equation 12.18. 

The output of the memoryless nonlinearity y(t) = F(x(t)) can be expressed as

Equation 12.19. 

Since A cos(α) is periodic in α, y(t) will also be periodic in α. Therefore, y(t) can be expanded in a Fourier series:

Equation 12.20. 

where ak and bk are the Fourier coefficients given by

Equation 12.21. 

and

Equation 12.22. 

The first zone output z(t) in the vicinity of fc is given by the k = 1 term in the Fourier series. This gives

Equation 12.23. 

where

Equation 12.24. 

and

Equation 12.25. 

The function f1(A) is sometimes called the first-order Chebyshev transform and the complex function [f1(A) − jf2(A)] is called the describing function of the nonlinearity.

The model given by (12.23) can also be expressed as

Equation 12.26. 

where

Equation 12.27. 

and

Equation 12.28. 

In polar coordinates we have

Equation 12.29. 

The functions f(A) and g(A) are the amplitude-to-amplitude (AM-to-AM) and amplitude-to-phase (AM-to-PM) transfer characteristics of the nonlinearity.

The model given in (12.26) is a generalization of the input-output relationship given in (12.12). Whereas the model given in (12.12) accounts for amplitude distortion only, the model given in (12.26) includes both the amplitude distortion and the phase distortion introduced by the nonlinearity, and it can be expressed in terms of the complex lowpass equivalents of the input and output as

Equation 12.30. 

and

Equation 12.31. 

The model given in the preceding equation is in polar form, and it can be converted to rectangular form

Equation 12.32. 

where the complex envelope can be expressed in terms of the direct (in-phase) component sd(t) and the quadrature component sq(t) as

Equation 12.33. 

in which

Equation 12.34. 

and

Equation 12.35. 

Analytical Derivation of AM-to-AM and AM-to-PM Characteristics

The AM-to-AM and AM-to-PM transfer characteristics can be derived for a zonal bandpass nonlinearity using (12.23). For a hard-limiter type nonlinearity [see (12.8) with m = 0] it can be easily shown that

Equation 12.36. 

and

Equation 12.37. 

The value of f(A) is simply the amplitude of the fundamental sinusoidal component (the “first zone output”) present in the square wave output of the limiter.

For the so-called soft-limiter type nonlinearity [see (12.8) with s = ∞], f(A) and g(A) can be shown to be [1]

Equation 12.38. 

and

Equation 12.39. 

which illustrates that the limiter does not introduce any phase distortion.

For any arbitrary memoryless zonal bandpass nonlinearity, we can derive f(A) and g(A) analytically if the transfer characteristics of the nonlinearity are given and if the integrals in (12.21) and (12.22) can be evaluated in closed form. In some cases we might be able to derive the lowpass equivalent model directly.
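When the integrals cannot be evaluated in closed form, f(A) and g(A) can always be computed numerically. Since (12.21) and (12.22) appear above only as equation numbers, the sketch below assumes the common first-zone definitions f1(A) = (1/π)∫ F(A cos α) cos α dα and f2(A) = (1/π)∫ F(A cos α) sin α dα, with the integrals taken over one period, and checks the hard-limiter case against the expected fundamental amplitude 4M/π.

% Minimal sketch: numerical first-zone (Chebyshev transform) coefficients
% for an arbitrary memoryless nonlinearity F (assumed definitions).
F     = @(x) sign(x);                    % hard limiter with M = 1 (illustrative)
alpha = linspace(0,2*pi,2001);           % integration grid over one carrier cycle
A     = 0.1:0.1:2;                       % input amplitudes of interest
f1 = zeros(size(A)); f2 = zeros(size(A));
for k = 1:length(A)
    Fa    = F(A(k)*cos(alpha));          % nonlinearity output over one cycle
    f1(k) = trapz(alpha,Fa.*cos(alpha))/pi;  % in-phase first-zone coefficient
    f2(k) = trapz(alpha,Fa.*sin(alpha))/pi;  % quadrature first-zone coefficient
end
disp([f1(1) 4/pi])                       % hard limiter: f1(A) is close to 4M/pi
plot(A,f1,'k-',A,f2,'k--'); grid
xlabel('Input amplitude A'); legend('f_1(A)','f_2(A)')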

Consider for example a power series nonlinearity of the form

Equation 12.40. 

The bandpass input to the nonlinearity x(t) can be expressed in terms of its lowpass complex envelope as

Equation 12.41. 

In terms of the complex envelope, the nth power of x(t) can be expressed as

Equation 12.42. 

Only terms with n odd and 2k − n = ±1 in x^n(t) will produce a first-zone output. Hence the complex envelope of the first-zone output of the power series nonlinearity is

Equation 12.43. 

Equation (12.43) describes the complex lowpass equivalent model for a power series type of nonlinearity. [The third-order power series nonlinearity discussed in Section 12.2.2 is a special case of the model given in (12.43).] Note that in this model there is no phase distortion, that is, g(A) = 0.

Measurement of AM-to-AM and AM-to-PM Characteristics

The AM-to-AM and AM-to-PM characteristics of bandpass amplifiers and many other devices are normally determined experimentally from measurements and are not derived analytically. The input to the amplifier is an unmodulated carrier of amplitude A, and the output amplitude f(A) and output phase g(A) are measured for different values of A. Such measurements are called “swept-power measurements” and the results of such measurements are usually included in the data sheets for the amplifiers.

The AM-to-AM and AM-to-PM characteristics will be displayed in decibel units of power. The output axis will be normalized with respect to the maximum output power of the device at saturation and the input axis will be normalized with respect to the input power that produces the maximum output. These normalized powers are referred to as output backoff (OBO) and input backoff (IBO), respectively. If the average input power is very small compared to the input power required to produce the maximum output power, the amplifier will behave linearly. As the input power is increased from this low level, the device will begin to exhibit nonlinear behavior. Sometimes the “operating point” of the amplifier will be specified as a given number of dB below the input power required to produce the maximum output power. It is also a common practice to normalize the gain of the amplifier to one. It is trivial to add an ideal amplifier to account for the nonunity gain.

Measurements are typically made at a few input power levels or, equivalently, at various levels of |A(t)|, and the measured characteristics will be given as a table. During simulation it might be necessary to interpolate the values in the table for a given input power level. Also, it should be noted that the model given in (12.30) and (12.31) is in terms of voltage (or current) levels for x(t), A(t), etc., whereas the AM-to-AM measurements are usually given in terms of power levels. In such cases it will be necessary to convert the power gain or attenuation to voltage (or current) gain or attenuation. An example of typical AM-to-AM and AM-to-PM characteristics of a bandpass amplifier is shown in Figure 12.5. Also illustrated in Figure 12.5 are the operating point and the backoff. The backoff is measured with respect to the peak value of f(|A(t)|) and is specified either with respect to the input level (input backoff) or with respect to the output level (output backoff).


Figure 12.5. AM-to-AM and AM-to-PM characteristics.

Analytical Forms of AM-to-AM and AM-to-PM Characteristics

It is common practice to approximate the measured AM-to-AM and AM-to-PM characteristics by analytical forms and to use the analytical forms, rather than numerical interpolation, for obtaining the output amplitude and phase values. Two functional forms widely used to model the measured characteristics of RF amplifiers [10] are given by

Equation 12.44. 

or

Equation 12.45. 

The coefficients of the model, αp, αq, βp, and βq, or αf, βf, αg, and βg, are obtained from data using numerical curve-fitting techniques.
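The curve fitting itself is straightforward with a general-purpose minimizer. Since (12.44) and (12.45) appear above only as equation numbers, the sketch below assumes the widely cited Saleh forms f(A) = αf A/(1 + βf A²) and g(A) = αg A²/(1 + βg A²), generates a few synthetic “measured” points, and fits the coefficients with fminsearch; with real swept-power data the synthetic points would be replaced by the measured (A, f(A)) and (A, g(A)) pairs.

% Minimal sketch: least-squares fit of assumed Saleh-form characteristics
% to synthetic measured data (all parameter values illustrative).
fsal  = @(p,A) p(1)*A./(1+p(2)*A.^2);     % assumed AM-to-AM form
gsal  = @(p,A) p(1)*A.^2./(1+p(2)*A.^2);  % assumed AM-to-PM form (radians)
A     = 0.1:0.1:2;                        % measurement amplitudes
fmeas = fsal([2.0 1.1],A)+0.01*randn(size(A));  % synthetic AM-to-AM data
gmeas = gsal([4.0 9.0],A)+0.01*randn(size(A));  % synthetic AM-to-PM data
pf = fminsearch(@(p) sum((fsal(p,A)-fmeas).^2),[1 1]);  % fit alpha_f, beta_f
pg = fminsearch(@(p) sum((gsal(p,A)-gmeas).^2),[1 1]);  % fit alpha_g, beta_g
disp([pf pg])                             % fitted coefficients
plot(A,fmeas,'ko',A,fsal(pf,A),'k-'); grid
xlabel('Input amplitude A'); ylabel('f(A)')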

Example 12.1. 

The following MATLAB code illustrates the generation of the AM-to-AM and AM-to-PM characteristics using Saleh’s model as defined through (12.45). The code for Saleh’s model is given in Appendix A in which the model parameters are defined as αf = 1.1587, βf = 1.15, αg = 4.0, and βg = 2.1.

% File:  c12_example1.m
x = 0.1:0.1:2;                    % input power vector
n = length(x);                    % length of x
backoff = 0.0;                    % backoff
y =  salehs_model(x,backoff,n);   % nonlinearity model
subplot(2,1,1)
pin = 10*log10(abs(x));           % input power in dB
pout = 10*log10(abs(y));          % output power in dB
plot(pin,pout); grid;
xlabel('Input power - dB')
ylabel('Output power - dB')
subplot(2,1,2)
plot(pin,(180/pi)*unwrap(angle(y))); grid;
xlabel('Input power - dB')
ylabel('Phase shift - degrees')
% End of script file.

Executing the program results in the output illustrated in Figure 12.6.


Figure 12.6. AM-to-AM and AM-to-PM characteristics.

Simulation of Complex Envelope Models

Complex envelope models can be simulated in the time domain in either polar form, as defined in (12.30) and (12.31), or in quadrature form as defined in (12.33), (12.34), and (12.35). The corresponding block diagrams are shown in Figure 12.7. In polar form, the simulation procedure consists of the following steps:

  1. Generate sampled values of the complex envelope of the input signal.

  2. Compute the input amplitude A(kTs) and the input phase φ(kTs) from the complex envelope samples.

  3. Find the output amplitude using the AM-to-AM characteristic f(A). (Note: This step might require interpolation of AM-to-AM values and conversion of power gain to voltage or current gain.)

  4. Find the output phase offset using the AM-to-PM characteristic g(A). (Note: This step might require interpolation of AM-to-PM values.)

  5. Compute sampled values of the complex envelope of the output according to (12.31).

Source: M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communications Systems, 2nd ed., New York: Kluwer Academic/Plenum Publishers, 2000.

Figure 12.7. Complex lowpass envelope models of a zonal bandpass nonlinearity.

A similar procedure is used for implementing the quadrature model shown in Figure 12.7. Computationally, the two procedures are identical.
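The five steps amount to only a few lines of code. In the sketch below the AM-to-AM and AM-to-PM functions are illustrative Saleh-form choices (assumed, since no particular amplifier characteristic is specified here), applied to the sampled complex envelope of an arbitrary test signal.

% Minimal sketch of the polar-form complex envelope model (steps 1-5),
% using assumed (illustrative) AM-to-AM and AM-to-PM characteristics.
f = @(A) 2.0*A./(1+1.1*A.^2);                  % assumed AM-to-AM characteristic
g = @(A) 4.0*A.^2./(1+9.0*A.^2);               % assumed AM-to-PM characteristic (rad)
N = 1024; ts = 1/128; t = (0:N-1)*ts;
xt  = (1+0.5*cos(2*pi*t)).*exp(1i*2*pi*2*t);   % step 1: input complex envelope
A   = abs(xt);                                 % step 2: input amplitude
phi = angle(xt);                               %         and input phase
Ao  = f(A);                                    % step 3: output amplitude (AM-to-AM)
dph = g(A);                                    % step 4: output phase offset (AM-to-PM)
yt  = Ao.*exp(1i*(phi+dph));                   % step 5: output complex envelope
plot(t,abs(xt),'k:',t,abs(yt),'k-'); grid
xlabel('Time - s'); legend('|input envelope|','|output envelope|')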

The Multicarrier Case

In our development of the AM-to-AM and AM-to-PM models we have thus far implicitly assumed that the input to the nonlinear device consists of a single carrier with amplitude modulation A(t) and frequency or phase modulation φ(t). While this might be the case in wideband single-carrier TDMA systems, the input signal in an FDMA system might consist of many individually modulated carriers, and the sum of many such carriers might be amplified by a single power amplifier. It is rather straightforward to extend the AM-to-AM and AM-to-PM models to the multicarrier case.

The Multicarrier Model

Suppose that the input to a nonlinearity consists of the sum of m modulated carriers

Equation 12.46. 

where fk is the frequency offset of the kth carrier, fc is the nominal center frequency, and Ak(t) and φk(t) represent the amplitude and phase modulation associated with the kth carrier. We can express the composite multicarrier signal in lowpass complex envelope form as (recall Section 4.3)

Equation 12.47. 

where the lowpass complex envelope is given by

Equation 12.48. 

In (12.48) the amplitude of the complex envelope is

Equation 12.49. 

and the phase is

Equation 12.50. 

where

Equation 12.51. 

is the direct component and

Equation 12.52. 

is the quadrature component.

The lowpass complex envelope of the output and the actual bandpass output signal z(t) can be obtained from

Equation 12.53. 

and

Equation 12.54. 

respectively.
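A multicarrier complex envelope of the form described by (12.48) is easy to construct and pass through an envelope nonlinearity. The carrier offsets, amplitudes, and the Saleh-form nonlinearity in the sketch below are illustrative choices rather than values taken from the text; the spectral regrowth between and around the carriers is clearly visible in the output PSD.

% Minimal sketch: multicarrier complex envelope through an assumed
% AM-to-AM / AM-to-PM envelope nonlinearity (illustrative parameters).
N  = 4096; ts = 1/256; t = (0:N-1)*ts;
fk = [-6 -2 3 7];                        % carrier offsets from fc in Hz
Ak = [1.0 0.8 0.6 0.9];                  % carrier amplitudes
xt = zeros(1,N);
for k = 1:length(fk)
    xt = xt + Ak(k)*exp(1i*(2*pi*fk(k)*t+2*pi*rand));  % sum of carriers (unmodulated, for simplicity)
end
f  = @(A) 2.0*A./(1+1.1*A.^2);           % assumed AM-to-AM
g  = @(A) 4.0*A.^2./(1+9.0*A.^2);        % assumed AM-to-PM (rad)
yt = f(abs(xt)).*exp(1i*(angle(xt)+g(abs(xt))));  % output complex envelope
fax = (-N/2:N/2-1)/(N*ts);               % frequency axis
subplot(2,1,1); plot(fax,10*log10(fftshift(abs(fft(xt)).^2)+eps)); grid
ylabel('Input PSD - dB')
subplot(2,1,2); plot(fax,10*log10(fftshift(abs(fft(yt)).^2)+eps)); grid
ylabel('Output PSD - dB'); xlabel('Frequency offset from f_c - Hz')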

Intermodulation Distortion in Multicarrier Systems

Consider a power series nonlinearity of the form

Equation 12.55. 

followed by a zonal filter. Suppose that the input to the nonlinearity is the sum of two modulated tones

Equation 12.56. 

where fc is the carrier frequency, f1 and f2 are the offset frequencies, and fi < B ≪ fc, i = 1, 2. It can be shown that the first zone output (terms in the vicinity of fc) is given by

Equation 12.57. 

The preceding expression for the output consists of distorted terms at the input frequencies as well as cross-modulated terms at fc + 2f1 − f2 and fc + 2f2 − f1, which are usually referred to as “intermodulation” distortion terms. These intermodulation distortion terms degrade the overall signal quality and might also produce adjacent channel interference if the frequency of these terms causes them to fall outside the bandwidth of the signal of interest but inside the bandwidth that might be occupied by an adjacent signal. (The reader can verify that even-powered terms in a power series type nonlinearity do not produce any distortion terms in the vicinity of the input frequency.)

It can be easily seen that, when the input consists of a large number of carriers or when the nonlinearity has many higher-order terms, the output will contain a very large number of terms at various linear combinations of the input frequencies. While it is possible to develop an algorithm for computing the number of intermodulation terms and their frequencies, it is very difficult to characterize and analyze the effects of the distortion introduced by the nonlinearity analytically when the input consists of a large number of arbitrarily modulated carriers and/or when the nonlinearity is severe. The communication literature is full of techniques for approximating the effects of intermodulation distortion. However, an exact characterization of intermodulation in a multicarrier digital system with arbitrary modulation schemes is difficult. Simulation is very useful in such applications, as will be seen in the following section.

Example 12.2. 

Consider a memoryless third-order nonlinearity of the form

Equation 12.58. 

with a two-tone bandpass input x(t) at frequencies 11 and 14 Hz

Equation 12.59. 

The bandpass versions of the input and output are shown in Figure 12.8. The intermodulation terms generated by the nonlinearity lie at 8 Hz and 17 Hz, and the third harmonics are around 33 and 42 Hz.


Figure 12.8. Nonlinearity input and output based on bandpass model.

The lowpass equivalent version of this example, with a reference frequency f0 = 12 Hz, has the form

Equation 12.60. 

and

Equation 12.61. 

The intermodulation distortion terms now appear at −4 and +5 Hz, respectively (with respect to the reference frequency f0 = 12 Hz), as illustrated in Figure 12.9. No third harmonic terms are accounted for in the lowpass equivalent model. The MATLAB code used to generate the results illustrated in this example is contained in Appendix B.


Figure 12.9. Nonlinearity input and output based on lowpass model.
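The Appendix B code is not reproduced in this chapter; the following sketch, with an assumed coefficient on the cubic term, reproduces the essential features of the bandpass experiment: tones at 11 and 14 Hz produce intermodulation terms at 8 and 17 Hz and third-zone terms in the 33 to 42 Hz range.

% Minimal sketch of the bandpass two-tone experiment of Example 12.2
% (assumed nonlinearity coefficients; the actual code is in Appendix B).
N  = 2048; ts = 1/128; t = (0:N-1)*ts;
x  = cos(2*pi*11*t) + 0.707*cos(2*pi*14*t);   % two-tone bandpass input
y  = x + 0.2*x.^3;                            % assumed third-order nonlinearity
f  = (-N/2:N/2-1)/(N*ts);                     % frequency axis
subplot(2,1,1); plot(f,10*log10(fftshift(abs(fft(x)).^2)+eps)); grid
axis([0 50 -20 80]); ylabel('Input PSD - dB')
subplot(2,1,2); plot(f,10*log10(fftshift(abs(fft(y)).^2)+eps)); grid
axis([0 50 -20 80]); ylabel('Output PSD - dB'); xlabel('Frequency - Hz')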

 

Example 12.3. 

We now consider the nonlinearity to be based on the AM-to-AM and AM-to-PM characteristic rather than on the power series model as we did in the preceding example. The AM-to-AM and AM-to-PM model used in this example is based on Saleh’s model as defined through (12.45). The AM-to-AM and AM-to-PM characteristics were determined in Example 12.1 and are illustrated in Figure 12.6. The parameter values are given in Example 12.1, the MATLAB code for simulating Saleh’s model is given in Appendix A, and log_psd.m is given in Appendix A of Chapter 7. The lowpass equivalent signal model, which is the sum of two complex tones of the preceding example, is used for the simulation so that the results can be compared. The MATLAB code follows:

% File:  c12_example3.m
backoff = input('Enter backoff in dB > '),
f1 = -1.0; f2 = 2.0; ts = 1.0/128; n = 1024;
for k=1:n
    t(k) = (k-1)*ts;
    x(k) = exp(i*2*pi*f1*t(k))+0.707*exp(i*2*pi*f2*t(k));
    y(k) = salehs_model(x(k),-1*backoff,1);
end
[psdx,freq] = log_psd(x,n,ts);
[psdy,freq] = log_psd(y,n,ts);
subplot(2,1,1)
plot(freq,psdx); grid; title('Input to the NL'),
ylabel('PSD in dB'),
subplot(2,1,2)
plot(freq,psdy); grid; title('Output of the NL'),
ylabel('PSD in dB'), xlabel('Frequency in Hz'),
% End of script file.

Executing the code yields the result illustrated in Figure 12.10 for 5 dB backoff. (Note: In the MATLAB program we enter backoff as a positive quantity, since this seems more natural. It is converted to a negative quantity in the call to the model, since the signal level is, in this case, −5 dB relative to the peak value.)


Figure 12.10. Simulation results for 5 dB backoff.

Example 12.4. 

We now consider a complex lowpass signal model for a 16-QAM signal. As in the previous example, Saleh’s model for the AM-to-AM and AM-to-PM characteristics is used. The input and output signal constellations are computed for a backoff of 10 dB. The purpose of the simulation is to determine the effect of the nonlinearity on the signal constellation. The MATLAB code for implementing the simulation follows:

% File:  c12_example4.m
%
% Create input constellation
backoff = input('Enter backoff in dB > '),
N =  1024;                     % number of points
x1 = 2*fix(4*rand(1,N))-3;     % direct components
x2 = 2*fix(4*rand(1,N))-3;     % quadrature components
y =  x1+i*x2;                  % signal space points
%
% Run it thru Saleh's model
z = salehs_model(y,-1*backoff,1024);
subplot(1,2,1)
plot(real(y),imag(y),'o'); grid; title('Input Constellation'),
xlabel('direct'), ylabel('quadrature')
axis equal
subplot(1,2,2)
plot(real(z),imag(z),'o'); grid; title('Output Constellation'),
xlabel('direct'), ylabel('quadrature')
axis equal
% End of script file.

Executing the MATLAB program yields the results illustrated in Figure 12.11 for 10 dB backoff. Note that the effect of the nonlinearity is to move a number of the signal points closer together. As we noted in Chapter 4, for an additive, white, Gaussian noise (AWGN) channel the pairwise error probability is a monotonic function of the Euclidean distance between a pair of points in signal space with the error probability increasing as the points in signal space move closer together. We therefore conclude that the nonlinearity degrades the probability of error of the communications system. Like the eye diagram, the signal constellation gives us a qualitative measure of system performance. Thus, viewing the change in the signal constellation as system parameters are varied gives us considerable insight into system performance. This is especially true for nonlinear systems. The student should therefore simulate this system a number of times and observe the result of varying the backoff.


Figure 12.11. Input and output signal constellations for 10 dB backoff.

Modeling and Simulation of Nonlinearities with Memory

If the output of a nonlinear device depends on the present and past values of the input signal, the device is classified as a nonlinearity with memory. Memory, or dependence on past input values, is modeled in linear systems by the impulse response and the convolution integral in the time domain. Linear systems are modeled in the frequency domain by the transfer function, which implies that the sinusoidal steady-state system response is dependent on the input frequency. Thus, memory and frequency selective behavior are synonymous.

Many nonlinear devices, such as wideband amplifiers, exhibit frequency-selective behavior. Such behavior will be evident when the response of the device is measured at different power levels and at different frequencies. If the input-output relationship does not depend on the frequency of the tone used in the measurements, the device is memoryless. Otherwise, the device exhibits frequency-selective behavior and therefore has memory.

Suppose that the input to an AM-to-AM nonlinearity is an unmodulated tone at some frequency fc + fi where fc is the center frequency of the device and fi is the offset frequency with respect to the center frequency. The complex envelope of the input signal is

Equation 12.62. 

from which

Equation 12.63. 

If the nonlinearity has no memory, and is therefore not frequency selective, the output is given by

Equation 12.64. 

Note that the output amplitude is independent of frequency.

If measurements indicate that the nonlinearity is frequency selective, we can attempt to account for the frequency selectivity by modifying the AM-to-AM function to include the input frequency and write the response as

Equation 12.65. 

As we vary fi over the bandwidth of the device, the function H(fi) accounts for the frequency dependency of the nonlinearity. Note that H(fi) may be viewed as the transfer function of a filter that precedes the AM-to-AM nonlinearity. The filter produces a response AH(fi)exp[j2πfit] when the input is A exp[j2πfit]. Thus, the model for the AM-to-AM part of a frequency-selective nonlinearity consists of a filter H(fi) followed by a memoryless AM-to-AM nonlinearity f(A).

This approach can be extended to account for frequency-selective AM-to-PM transfer characteristics by including another filter in the model. Indeed, the most commonly used model for frequency-selective nonlinearities consists of a memoryless nonlinearity sandwiched between two filters. We now describe two of these models.
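In complex envelope terms, the sandwich model of Figure 12.1 is simply filter, memoryless envelope nonlinearity, filter. The sketch below uses two short FIR filters with arbitrary (illustrative) taps around an assumed Saleh-form memoryless nonlinearity; in practice the filters and the nonlinearity are fitted to measured data, as described next.

% Minimal sketch of the filter / memoryless nonlinearity / filter model
% operating on the complex envelope (all parameter values illustrative).
N  = 2048; ts = 1/64; t = (0:N-1)*ts;
xt = (1+0.5*cos(2*pi*0.5*t)).*exp(1i*2*pi*1.0*t);   % input complex envelope
h1 = [0.25 0.5 0.25];                 % input filter (illustrative FIR taps)
h2 = [0.1 0.8 0.1];                   % output filter (illustrative FIR taps)
f  = @(A) 2.0*A./(1+1.1*A.^2);        % assumed AM-to-AM characteristic
g  = @(A) 4.0*A.^2./(1+9.0*A.^2);     % assumed AM-to-PM characteristic (rad)
u  = filter(h1,1,xt);                 % memory before the nonlinearity
v  = f(abs(u)).*exp(1i*(angle(u)+g(abs(u))));       % memoryless nonlinearity
yt = filter(h2,1,v);                  % memory after the nonlinearity
plot(t,abs(xt),'k:',t,abs(yt),'k-'); grid
xlabel('Time - s'); legend('|input envelope|','|output envelope|')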

Empirical Models Based on Swept Tone Measurements

These models are derived from “swept tone” and “stepped power” measurements made with an input that is an unmodulated tone of the form

Equation 12.66. 

The output amplitude and phase offset are measured for different values of the input amplitude Ai and frequency fi producing a set of plots as shown in Figure 12.12(a). Note that these plots clearly show the frequency dependent nature of the response of the nonlinear device.

Source: M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communications Systems, 2nd ed., New York: Kluwer Academic/Plenum Publishers, 2000.

Figure 12.12. Example of frequency-selective AM-to-AM model.

Two models that attempt to take into account the frequency-dependent nature of nonlinear devices are Poza’s model and Saleh’s model. Both of these models try to reproduce as accurately as possible the swept tone and stepped power measurements.

Poza’s Model

A simple simulation model that characterizes measurements like the ones shown in Figure 12.12(a) has been derived by Poza et al. [11] based on the following assumptions:

  1. AM-to-AM: Curves of AM-to-AM for different frequencies are similar in shape, with all curves being a combination of vertical and horizontal translations of one another.

  2. AM-to-PM: These curves for different frequencies are also similar in shape and all curves are a combination of horizontal and vertical translations of each other.

This model requires a complex curve-fitting procedure to fit a family of curves to the AM-to-AM and AM-to-PM data such that each member of the family is a translated version (along both the horizontal and vertical axes) of every other member of the same family.

Let us first consider how to implement the frequency-selective AM-to-AM simulation model. The first step in the implementation is to select one of the curves within the AM-to-AM family as the “reference” nonlinearity at the reference frequency fc. With respect to this curve, the AM-to-AM response at another frequency f1 can be obtained through a horizontal translation Δg1 and a vertical translation ΔG1. The horizontal translation Δg1 corresponds to an attenuation or gain of the input signal with respect to the reference nonlinearity at frequency f1, and the vertical translation ΔG1 corresponds to an attenuation or gain of the output of the reference nonlinearity at frequency f1. These can be implemented by two FIR filters, one preceding and one following the reference nonlinearity, with amplitude responses HAM1(f) and HAM2(f), respectively. Thus, the AM-to-AM model can be implemented as shown in Figure 12.12.

A similar model and implementation can be derived for the AM-to-PM responses. This model will consist of an input filter with an amplitude response HPM(f) preceding the AM-to-PM reference nonlinearity and a frequency-dependent phase term applied to the output. The AM-to-AM and AM-to-PM models can be combined into one model as shown in Figure 12.13. The AM-to-PM part precedes the AM-to-AM part, and the input (amplitude) offset introduced by the AM-to-PM model has to be “taken out” prior to the AM-to-AM model so that the power level at the input to the AM-to-AM part of the model is the same as the power in the input signal.


Figure 12.13. Frequency-selective AM-to-AM and AM-to-PM (combined) model.

Saleh’s Model

A slightly different approach to modeling nonlinearities with memory has been proposed by Saleh [10]. This model is obtained from the memoryless quadrature model given in (12.44) and (12.45) by modifying the coefficients to be frequency dependent. This gives

Equation 12.67. 

and

Equation 12.68. 

The coefficients are obtained from the measurements made at various frequencies f using a least squares fit. The actual implementation of the model is given in Figure 12.14. Among the functions illustrated in Figure 12.14, φ0(f) is the small-signal phase response, P0(A) = A/(1 + A²), and Q0(A) = A³/[1 + A²]². The details of this derivation are left as an exercise for the reader.


Figure 12.14. Saleh’s quadrature model for a nonlinearity with memory.

Note that Saleh’s model and Poza’s model start out with different assumptions and that the two models are topologically different. They are both “block diagram models” designed to reproduce a specific set of measurements. Both of these models have more complex structures than models for memoryless nonlinearities. There is also an added degree of complexity due to the empirical procedure used for fitting model parameters to measured data.

Other Models

In spite of the complexity, the models described in the preceding section are not really capable of capturing the behavior of a nonlinearity when the input consists of a sum of many modulated carriers, since the models were derived on the basis of measurements made with a single, constant-envelope carrier (tone). Since superposition does not hold for nonlinear systems, it is not possible to characterize the behavior with a multitone input based on single-tone measurements. It is, of course, very difficult to make multitone measurements for many different combinations of input frequencies and power levels. The number of combinations of frequencies and power levels will be overwhelmingly large when the number of carriers is large.

Some alternate approaches have been suggested, one of which includes approximating the behavior of the nonlinearity using a carrier modulated with a PN sequence as the input [12]. The input now approximates the sum of a large number of carriers with a more or less uniform power spectral density at the input. (Note that, with this approach, the input spectral components are generated by the modulating signal.) By changing the total power in the input, measurements are taken to characterize the behavior of the nonlinearity under input conditions that produce a reasonable approximation of a multicarrier input. The resulting model has a transfer function of the form H(f, P), where P is the input power level. The details of this approach may be found in [12].

Another more general approach that has been suggested for modeling and analyzing nonlinearities with memory uses the Volterra series. The input-output relationship in the time domain is given by

Equation 12.69. 

where

Equation 12.70. 

The time response model given in (12.70) assumes that the system has no dc response or constant term in the output. Note that the model looks like a power series model. However, each term in the Volterra series model is a k-fold convolution (and not a kth power) of the input signal with a k-fold impulse response kernel hk(τ1, ..., τk). A lowpass equivalent version of the model can be derived but it is exceedingly complex. The interested reader is referred to [13].

While the Volterra series model is analytically very elegant, the higher-order kernels are difficult to characterize and measure. Hence the model is of limited use for simulation except in cases where the order of the model is small. As in the case of power series models, only odd terms contribute to the first zone output.

Techniques for Solving Nonlinear Differential Equations

We have seen that a nonlinear system with memory may be modeled by a nonlinear differential equation (NLDE) and the simulation of the system can take place by substituting a discrete-time integration for the continuous-time integration. We have also seen that there are two approaches to simulating a system described by a differential equation. There is the assembled block diagram approach, in which the system block diagram is constructed using basic building blocks, including some simple memoryless nonlinear blocks and integrators, and the solution is obtained by simulating the block diagram model of the nonlinear system. An alternate approach is to derive the nonlinear differential equation that governs the dynamic behavior of the system and solve the NLDE using recursive procedures. This method, referred to as the direct or stand-alone method, will be superior in terms of stability and accuracy for a given sample time. However, some effort will be required to develop the model and implement this approach, and the effort has to be repeated for each new nonlinear system that needs to be simulated. With the block diagram assembly method, a core set of building blocks can be used to construct models of new systems rather quickly.

All approaches yield the same results when the sample time, or simulation step time, is small. However, if we desire a large time step in order to reduce the computational burden, the stand-alone model using a variable step-size solution technique will be very efficient. The step size is usually chosen automatically by the solution method depending on the behavior of the underlying NLDE. In regions where the solution behaves well, it is possible to use very large time steps resulting in significant savings in simulation time. Some interpolation, however, will be required if succeeding blocks require uniformly spaced samples.

When a PLL is simulated by itself, the time step required will be determined by the so-called “loop bandwidth.” If the PLL is simulated along with other parts of a receiver, the sampling rate for the remainder of the receiver will be governed by the data rate, R. In general, R will be much larger than the loop bandwidth, and hence the time step commensurate with R will be much smaller than that needed for accurate simulation of the PLL itself. In this situation, it might be advantageous to use multirate simulations in which different time steps are used for different parts of the system with appropriate decimation and interpolation in between.

The focus of the remainder of this chapter is on the simulation of nonlinear systems with memory using a recursive solution of the underlying NLDE. We will use the PLL, which we studied extensively in Chapter 6, as an example to illustrate the methodology.

State Vector Form of the NLDE

The literature on numerical methods is full of techniques for solving nonlinear differential equations [14]. From a simulation point of view, techniques that can be implemented in recursive form are most attractive from both a structural and a computational point of view. For linear differential equations of the mth order, the recursive solution in the time domain often makes use of the state variable method, where the system equations are represented in the form

Equation 12.71. 

and

Equation 12.72. 

where X is the state vector of size m × 1, U is the input vector, Y is the output vector, A, B and D are matrices of constants which define the system, and m is the order of the system.

To apply this technique to the PLL, consider the relation between ed(t), as identified in Figure 12.15, and the VCO phase θ(t). This is the linear part of the system, and it follows from Figure 12.15 that the relationship between θ(t) and ed(t) is defined by

Equation 12.73. 

and

Equation 12.74. 


Figure 12.15. Representation of the loop filter and the definition of the state variables for the NLDE model of the PLL.

From the definition of ed(t) and (12.73)

Equation 12.75. 

Substitution into (12.74) gives the differential equation

Equation 12.76. 

From Figure 6.3 and (6.25) we know that

Equation 12.77. 

and from Figure 12.15 we have

Equation 12.78. 

From which

Equation 12.79. 

Now, if we define the state variables as

Equation 12.80. 

we can, from (12.76), write the NLDE in the matrix form

Equation 12.81. 

 

Note that the NLDE can also be expressed as

Equation 12.82. 

where X is a vector of size m, F is a nonlinear vector function with m components, and U is the input vector, which in this example is the scalar φ(t).

In general, it is possible to convert an mth-order NLDE into a set of m simultaneous first-order NLDEs that are similar in form to the state variable form of mth-order linear differential equations. We now look at methods of solving simultaneous (i.e., vector) first-order differential equations, starting with a scalar first-order differential equation.
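As an illustration of this conversion (a generic example, since the PLL equations above appear only as equation numbers), consider the second-order nonlinear equation x'' + a x' + b sin(x) = u(t). Defining x1 = x and x2 = dx/dt turns it into two first-order equations, which can be coded as a derivative function usable by any of the integration rules discussed next.

% File: nlde_deriv.m  (illustrative example, not the PLL model of the text)
function dX = nlde_deriv(t,X,a,b,u)
% State-variable form of  x'' + a*x' + b*sin(x) = u(t),
% with X(1) = x and X(2) = dx/dt; u is a function handle u(t).
dX = zeros(2,1);
dX(1) = X(2);                           % x1' = x2
dX(2) = -a*X(2) - b*sin(X(1)) + u(t);   % x2' from the original equation
% End of function file.

A call such as dX = nlde_deriv(0,[0;0],0.5,2.0,@(t) cos(t)) returns the state derivative at t = 0.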

Recursive Solutions of NLDE-Scalar Case

Explicit Techniques

Consider a first-order differential equation of the form

Equation 12.83. 

with initial condition x(t0). Let xn denote the solution obtained via numerical integration at time step tn [we will use the notation x(tn) for the actual solution, which is not known, and use xn to denote the approximate solution obtained via numerical integration].

Most of the numerical integration methods for solving nonlinear (or linear) differential equations are based on Taylor series expansions. For example, consider the Taylor series (or some other) expansion of the form

Equation 12.84. 

where hn is the time step tn+1 − tn, and T is the local error or remainder given by

Equation 12.85. 

If hn is sufficiently small, then we can obtain a recursive solution for the first-order differential equation as

Equation 12.86. 

Equation 12.87. 

The recursion can be started with the initial condition x(t0) = x0. A simpler version of the recursive solution can be derived with a fixed step size h as

Equation 12.88. 

Equation 12.89. 

Equations (12.88) and (12.89) define Euler’s integration method.
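A fixed-step implementation of Euler's method takes only a few lines. The right-hand side and step size below are illustrative choices.

% Minimal sketch: fixed-step Euler integration of dx/dt = f(x,t).
f  = @(x,t) -2*x + cos(t);            % illustrative right-hand side
h  = 0.01; n = 1000;                  % step size and number of steps
tt = (0:n)*h; x = zeros(1,n+1);
x(1) = 1;                             % initial condition x(t0)
for k = 1:n
    x(k+1) = x(k) + h*f(x(k),tt(k));  % Euler recursion
end
plot(tt,x); grid; xlabel('t'); ylabel('x')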

By including the higher derivatives in the Taylor series, we can derive more accurate integration formulas. For example, a “second-order” integration rule may be derived by starting from

Equation 12.90. 

Using the approximation for the second derivative

Equation 12.91. 

we obtain the two-step formula called the second-order Adams-Bashforth integration rule

Equation 12.92. 

Another similar class of integration rules comprises the various Runge-Kutta (R-K) methods. One of the most widely used of the R-K formulas is the classical four-stage formula,

Equation 12.93. 

where

Equation 12.94. 

Equation 12.95. 

Equation 12.96. 

Equation 12.97. 

All three methods described above are called explicit methods, since the recursion is carried out explicitly, using the solution values and derivative values obtained at the preceding time step.
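Since (12.93) through (12.97) appear above only as equation numbers, the sketch below implements the standard form of the classical four-stage R-K formula against the same illustrative test equation used for the Euler sketch.

% Minimal sketch: classical four-stage Runge-Kutta integration of dx/dt = f(x,t).
f  = @(x,t) -2*x + cos(t);                    % illustrative right-hand side
h  = 0.01; n = 1000;
tt = (0:n)*h; x = zeros(1,n+1); x(1) = 1;     % initial condition
for k = 1:n
    k1 = f(x(k),tt(k));
    k2 = f(x(k)+0.5*h*k1,tt(k)+0.5*h);
    k3 = f(x(k)+0.5*h*k2,tt(k)+0.5*h);
    k4 = f(x(k)+h*k3,tt(k)+h);
    x(k+1) = x(k) + (h/6)*(k1+2*k2+2*k3+k4);  % four-stage R-K update
end
plot(tt,x); grid; xlabel('t'); ylabel('x')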

Implicit Techniques

There is another class of solution techniques, called implicit techniques, that yield better accuracy and more stable solutions at the expense of increased computational burden. In implicit techniques, the solution at time step tn+1 will involve not only the quantities computed at the previous time step tn but also the quantities to be computed at the current time step tn+1. This requires the solution of a nonlinear algebraic equation at each time step.

A very simple and very popular implicit technique is the Trapezoidal integration rule given by

Equation 12.98. 

which is simply a trapezoidal approximation of the areas in the Riemann sum of an integral. (Recall our discussion of trapezoidal integration in Chapter 5.) It is clear that the preceding equation gives xn+1 only implicitly, and we have to solve the nonlinear equation given by (12.98) to obtain xn+1, whereas in the explicit methods xn+1 does not appear within the nonlinear f(·) on the right-hand side, and hence the solution, as we have seen, is rather straightforward.

There are many other implicit methods besides the Trapezoidal rule. As another example of an implicit technique, let us consider the so-called third-order Adams-Moulton (A-M) method based on the Taylor series expansion

Equation 12.99. 

with a symmetric approximation for the derivatives

Equation 12.100. 

Equation 12.101. 

Equation 12.102. 

Substituting these derivatives in (12.99), we obtain the “two-step” Adams-Moulton integration formula

Equation 12.103. 

The A-M integration formula given above points out two major problems with higher-order implicit techniques: obtaining the required starting values, and solving a nonlinear equation at each time step. When the recursion in (12.103) is applied to find x2, we need both x0 and x1. While the initial condition x0 will be given, x1 is not usually available. We therefore start the recursion at n + 1 = 2, using a value of x1 that is computed with an explicit method such as the R-K method.

The second difficulty arises because xn+1 appears on the right-hand side inside the (nonlinear) function f, and hence the solution for xn+1 will require the numerical solution of the implicit equation given in (12.103). We present below two iterative methods for accomplishing this.

Implicit Solution Using the Predictor-Corrector Method

The idea behind the predictor-corrector (P-C) method is to first obtain a predicted value for xn+1 using an explicit technique such as the R-K or the A-B method (of the same order as the A-M method) and use this predicted (or estimated) value of xn+1 in the right-hand side of (12.103) to obtain a corrected (or improved) value of xn+1. We then proceed to n + 2. To improve the accuracy of the solution, one could use an iterative technique at each time step and repeat the predictor-corrector step many times. Now, (12.103) is modified as

Equation 12.104. 

where r is the iteration index. The iteration is started by using a predicted value of xn+1 (obtained via an explicit method) on the right-hand side of (12.104) to obtain the next iterate of xn+1, which is then used in the right-hand side again to obtain a further improved value, and so on until the iterations converge. [Note: the superscript r + 1 denotes the (r + 1)st iterated value of xn+1, not the (r + 1)st power of xn+1!]
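Since (12.103) and (12.104) appear above only as equation numbers, the sketch below assumes the standard two-step (third-order) Adams-Moulton corrector, x(n+1) = x(n) + (h/12)[5f(n+1) + 8f(n) − f(n−1)], uses a two-step Adams-Bashforth predictor with a single corrector pass per step (the correction could be iterated as described above), and obtains the start-up value x1 with a simple explicit step.

% Minimal sketch of a predictor-corrector step (assumed A-M/A-B forms).
f  = @(x,t) -2*x + cos(t);                  % illustrative right-hand side
h  = 0.01; n = 1000;
tt = (0:n)*h; x = zeros(1,n+1); x(1) = 1;
x(2) = x(1) + h*f(x(1),tt(1));              % start-up value x1 (simple explicit step)
for k = 2:n
    fm = f(x(k-1),tt(k-1)); fn = f(x(k),tt(k));
    xp = x(k) + (h/2)*(3*fn - fm);          % predictor (two-step Adams-Bashforth)
    fp = f(xp,tt(k+1));                     % derivative at the predicted point
    x(k+1) = x(k) + (h/12)*(5*fp + 8*fn - fm);  % corrector (two-step Adams-Moulton)
end
plot(tt,x); grid; xlabel('t'); ylabel('x')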

Implicit Solution Using Newton-Raphson Method

A variety of other techniques, such as the Newton-Raphson (N-R) method, can also be used for solving the implicit expression defined by (12.103). The N-R method is based on an iterative technique for finding a solution to y = g(x) = 0, that is, find the value of x that yields g(x) = 0.

An iterative solution to the “root-finding” problem can be obtained as follows. With reference to Figure 12.16, the slope of the curve y = g(x) at x0 is given by

Equation 12.105. 

We can rearrange the preceding equation as

Equation 12.106. 


Figure 12.16. Iterative technique for solving g(x) = 0.

If this procedure is repeated to generate a sequence of values x1,x2,x3, ... , by the recurrence relation

Equation 12.107. 

the sequence xr converges to a zero of g(x) under quite mild conditions.

This technique could be applied to solving (12.103), by first writing it as

Equation 12.108. 

and applying the recursion given in (12.107) as

Equation 12.109. 

The iterations given in the preceding equation are repeated until the difference between successive values is small. The starting value of xn+1 is obtained using an explicit method.
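The following MATLAB sketch shows how the N-R iteration might be applied to a single implicit integration step. For brevity it uses the backward Euler rule xn+1 = xn + h f(xn+1, tn+1) rather than the two-step formula (12.103); the function handles f and dfdx (the partial derivative of f with respect to x), the iteration limit, and the tolerance are assumptions introduced for this example.

% File: nr_step.m (illustrative sketch, not from the text)
function xnp1 = nr_step(f,dfdx,xn,tnp1,h)
% One backward Euler step solved with Newton-Raphson iterations.
% The implicit equation is g(x) = x - xn - h*f(x,tnp1) = 0.
xnp1 = xn + h*f(xn,tnp1);            % starting value from an explicit step
for r = 1:20                         % N-R iterations, as in (12.107)
    g  = xnp1 - xn - h*f(xnp1,tnp1); % g evaluated at the current iterate
    gp = 1 - h*dfdx(xnp1,tnp1);      % derivative of g
    dx = g/gp;                       % N-R correction
    xnp1 = xnp1 - dx;
    if abs(dx) < 1.0e-10             % successive values agree - stop
        break;
    end
end
% End of function file.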

General Form of Multistep Methods

The general form of the recursive solution of NLDE can be expressed as

Equation 12.110. 

which is referred to as a p-step integration rule; it is explicit if b1 = 0 and implicit if b1 ≠ 0. Table 12.1 summarizes the integration formulas that are commonly used in simulation. The truncation error given in the table is obtained from the truncation error associated with terminating the Taylor series expansion [see (12.84) and (12.85)].


Table 12.1. Integration Formulas Commonly Used in Simulation

Accuracy and Stability of Numerical Integration Methods

Accuracy

The truncation error at a given time step is proportional to the higher-order derivatives of the solution and to a power of the step size h. If the local truncation error can be estimated at each step, the step size can be adjusted up or down accordingly. In general, higher-order implicit methods are preferable, since they yield smaller truncation error; however, implicit methods require solving a nonlinear equation at each step. Reducing the step size reduces the local truncation error and also speeds up the convergence of the iterative solution of the nonlinear equation at each time step. Reducing the step size, however, increases the overall computational burden.

The truncation error contributes, in a cumulative fashion, to the overall (global) error. While it is possible to estimate the local truncation error, unfortunately no general procedures are available for controlling the possible growth of global error. Therefore, most of the methods rely on estimating the local truncation error and reducing the step size when necessary.

Stability

Another measure of the “goodness” of an integration method is stability, which is a measure of the degree to which the recursive solution converges to the true solution. Stability depends on the specific problem and the integration method, and it is common practice to investigate the stability of different integration methods using a simple test problem like

dx(t)/dt = λx(t)

for which the solution is known to be

x(t) = x(0) exp(λt)

If Re(λ) < 0, the true solution tends to zero as t → ∞. With this test case, an integration rule is said to be stable if the recursive solution also converges to zero as n → ∞.
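For example, applying the forward (explicit) Euler rule to this test problem gives

xn+1 = xn + hλxn = (1 + hλ)xn,   so that   xn = (1 + hλ)^n x(0)

which converges to zero only when |1 + hλ| < 1 (for real λ < 0, this requires h < 2/|λ|). The backward (implicit) Euler rule, on the other hand, gives xn+1 = xn + hλxn+1, or xn+1 = xn/(1 − hλ), which converges to zero for every h > 0 when Re(λ) < 0.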

While it is easy to investigate the stability of the integration methods for the simple test case, it is difficult to draw general conclusions about stability of a method when applied to an arbitrary nonlinear differential equation. A general rule of thumb is that the implicit methods have better stability properties than explicit methods.

It is difficult to track the truncation error and the stability of the solution of an NLDE, since tracking the truncation error requires knowledge of the higher-order derivatives, and analyzing stability requires knowledge of the roots of the characteristic equation of the linearized version of the NLDE in the vicinity of tn. During simulations, an often-used heuristic consists of comparing the solutions obtained with time steps h and h/2. If the estimated error associated with the two time steps is within specified limits, the solution is accepted. If not, the time step is halved again and the procedure is repeated. This method controls both stability and truncation error.
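A minimal MATLAB sketch of this step-halving heuristic is given below. The single-step integrator one_step (a function handle of the form one_step(f,x,t,h), for instance a forward Euler or R-K step), the error tolerance tol, and the limit on the number of halvings are assumptions introduced for this example.

% File: step_halving.m (illustrative sketch, not from the text)
function [xnp1,h] = step_halving(one_step,f,xn,tn,h,tol)
% Advance one time step, halving h until the full-step and two-half-step
% solutions agree to within tol.
for k = 1:20                                   % limit the number of halvings
    x_full = one_step(f,xn,tn,h);              % one step of size h
    x_half = one_step(f,xn,tn,h/2);            % two steps of size h/2
    x_half = one_step(f,x_half,tn+h/2,h/2);
    if abs(x_full - x_half) < tol              % solutions agree - accept
        break;
    end
    h = h/2;                                   % halve the step and repeat
end
xnp1 = x_half;                                 % accept the finer solution
% End of function file.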

Among the multitude of integration techniques that are available, it is impossible to say which method is the “best,” because the answer depends to a large extent on the problem being considered, the accuracy desired, the type of output needed, and the computational burden imposed by the integration technique. The Trapezoidal rule, the fourth-order Runge-Kutta method, and the Adams-Moulton method (with predictor-corrector) are the most commonly used methods, offering a good tradeoff among computational complexity, stability, and accuracy (with the Trapezoidal method being the least complex). For a reasonably well-behaved nonlinear component or subsystem such as the PLL, any one of these methods with a small step size (of the order of eight samples/Hz of bandwidth) will provide a stable and accurate solution.

Solution of Higher-Order NLDE-Vector Case

If the NLDE model is mth order, then it can be converted to m simultaneous first-order differential equations (as described in Section 12.4.1), in vector form as

dX(t)/dt = f(X(t), U(t))

where X and U are vectors of size m × 1. The multistep iterative solution of the vector NLDE is of the same form as the scalar case, and is given by

Xn+1 = a1Xn + a2Xn−1 + ... + apXn+1−p + h[b1f(Xn+1, tn+1) + b2f(Xn, tn) + ... + bp+1f(Xn+1−p, tn+1−p)]

where ai and bj are the same scalar constants given in Table 12.1.

For the implicit methods, a set of simultaneous nonlinear equations has to be solved at each step. Either the predictor-corrector method or the Newton-Raphson method can be used for this purpose. The vector form of the Newton-Raphson method is given by

Equation 12.111. 

and

Equation 12.112. 

where the (i, j)th entry in the m × m Jacobian matrix, J(·), is defined as

Equation 12.113. 

The starting value of Xn+1 is obtained via an explicit method. At each time step, tn+1, the iterations are carried out (the iteration index is r) until successive iterated solutions differ by only a small amount. Then the time index is advanced to tn+2, an initial value of Xn+2 is obtained using an explicit method, and an iterated solution for Xn+2 is obtained. The procedure is repeated until the time index reaches the simulation stop time.
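As a minimal sketch of how the vector N-R iteration might be organized in MATLAB, the function below performs one implicit step for the vector NLDE using the backward Euler rule (in place of a higher-order implicit formula, for brevity) and builds a finite-difference approximation to the Jacobian; the function handle f (assumed to return an m × 1 column vector), the perturbation size, the iteration limit, and the tolerance are assumptions introduced for this example.

% File: nr_vector_step.m (illustrative sketch, not from the text)
function Xnp1 = nr_vector_step(f,Xn,tnp1,h)
% One implicit (backward Euler) step for the vector NLDE dX/dt = f(X,t),
% solved with Newton-Raphson iterations and a finite-difference Jacobian.
Xn   = Xn(:);                            % force a column vector
m    = length(Xn);
Xnp1 = Xn + h*f(Xn,tnp1);                % starting value from an explicit step
del  = 1.0e-6;                           % perturbation for the Jacobian
for r = 1:20
    G = Xnp1 - Xn - h*f(Xnp1,tnp1);      % G(X) = 0 defines the implicit step
    J = eye(m);                          % Jacobian of G, built column by column
    for j = 1:m
        Xp = Xnp1; Xp(j) = Xp(j) + del;  % perturb the jth state variable
        J(:,j) = J(:,j) - h*(f(Xp,tnp1) - f(Xnp1,tnp1))/del;  % cf. (12.113)
    end
    dX = J\G;                            % solve the linearized system
    Xnp1 = Xnp1 - dX;
    if norm(dX) < 1.0e-9                 % successive iterates agree - stop
        break;
    end
end
% End of function file.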

PLL Example

We now return to the PLL problem introduced in Chapter 6 and illustrate the simulation of the PLL using several of the integration techniques discussed in the previous section.

Integration Methods

The techniques to be used are defined in terms of the PLL problem in the following sections.

Forward Euler (Explicit Method)

The defining equations are as follows:

Equation 12.114. 

Equation 12.115. 

and

Equation 12.116. 
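For reference, a generic forward Euler loop of the kind implied by the preceding equations is sketched below in MATLAB. The function handle pll_f (which would return the right-hand side of the PLL state equations), the initial state X0, the step size h, and the number of steps N are hypothetical names introduced for this example; the sketch shows only the general explicit update pattern, not the specific equations (12.114)-(12.116).

% File: fe_loop.m (illustrative sketch, not from the text)
function [t,X] = fe_loop(pll_f,X0,h,N)
% Forward Euler integration of the state equations dX/dt = pll_f(X,t).
m = length(X0);
X = zeros(m,N+1);
X(:,1) = X0(:);                               % initial condition (as a column)
t = (0:N)*h;                                  % time grid
for n = 1:N
    X(:,n+1) = X(:,n) + h*pll_f(X(:,n),t(n)); % explicit Euler update
end
% End of function file.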

Backward Euler (Implicit Method with Predictor-Corrector)

The predictor is defined by

Equation 12.117. 

and

Equation 12.118. 

and the corrector is defined by

Equation 12.119. 

and

Equation 12.120. 

Backward Euler (Implicit Method with N-R Iterations)

The predictor is the same as with the previous backward Euler [see (12.117) and (12.118)]. The N-R iterations for Xn+1 are defined by

Equation 12.121. 

where

Equation 12.122. 

and J(·) is the 2 × 2 matrix

Equation 12.123. 

with elements

Equation 12.124. 

Equation 12.125. 

Equation 12.126. 

Equation 12.127. 

It is left as an exercise for the reader to derive similar formulas for the A-M method, using the A-B method as a predictor with either corrector iterations or N-R iterations.

Summary

Simulation plays an important role in the analysis of nonlinear components and their impact on communication system performance. While the performance of linear systems can, in principle, be determined by analytical means, the analysis of the impact of nonlinear components in a system context is by and large intractable. Simulation may be the only approach for tackling these problems without actually building the systems and testing them.

In this chapter we developed modeling and simulation techniques for nonlinear components in communication systems such as power amplifiers and limiters. For bandpass nonlinearities it was shown that the simulation can be carried out either with bandpass or with lowpass equivalent models, the complex lowpass equivalent model being computationally more efficient. With either model, simulation of nonlinear components is usually carried out on a sample-by-sample basis in the time domain.

Bandpass nonlinearities may be frequency nonselective (memoryless) or frequency selective (with memory). The lowpass equivalent model for memoryless bandpass nonlinear components in communication systems can be derived analytically or obtained from swept power measurements. For devices such as bandpass limiters, the lowpass equivalent model can be derived analytically using Fourier integrals, and the resulting model is expressed in terms of the AM-to-AM [f(A)] and AM-to-PM [g(A)] transfer characteristics. For devices such as high-power amplifiers, the AM-to-AM and AM-to-PM characteristics are obtained directly from measurements. These transfer characteristics can be stored in empirical form in tables or can be approximated by functional forms such as the ones given in (12.44) and (12.45). Simulation of the AM-to-AM and AM-to-PM models consists of generating sampled values of the input complex envelope and obtaining the sampled values of the output complex envelope as shown in Figure 12.7.

For bandpass nonlinear devices operating over wide bandwidths, it may be necessary to use frequency-selective models. The preferred simulation model for frequency-selective nonlinearities consists of two filters with a memoryless nonlinearity sandwiched in between. The transfer function of the filters and the transfer characteristics of the memoryless nonlinearity can be obtained from swept power/swept frequency measurements. Once the structure and parameters of the model are specified, simulation is rather straightforward.
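As an illustration of how such a two-filter model might be exercised in MATLAB, the sketch below chains an input filter, the memoryless AM-to-AM/AM-to-PM block of Appendix A, and an output filter. The function name fs_nonlinearity and the filter coefficient vectors b1, a1, b2, a2 are hypothetical placeholders introduced for this example, not measured values.

% File: fs_nonlinearity.m (illustrative sketch, not from the text)
function y = fs_nonlinearity(x,b1,a1,b2,a2,backoff)
% Frequency-selective nonlinearity: input filter, memoryless AM-to-AM/AM-to-PM
% block (salehs_model from Appendix A), and output filter.
x1 = filter(b1,a1,x);                       % input filter
x2 = salehs_model(x1,backoff,length(x1));   % memoryless nonlinearity
y  = filter(b2,a2,x2);                      % output filter
% End of function file.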

An alternate approach for simulating nonlinearities is based on representing the dynamic behavior of the system by a set of nonlinear differential equations and solving them using numerical integration techniques. This method is very useful for simulating such devices as PLLs and is very efficient computationally.

Further Reading

An excellent and detailed treatment of the topics covered in this chapter and additional material on simulation of nonlinear components may be found in

  • M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communication Systems, 2nd ed., Kluwer Academic/Plenum Publishers, 2000, Chap. 5.

The literature on nonlinear systems is rather broad and is also very application dependent. The literature on this topic can be divided into the following categories:

  1. Mathematical and numerical techniques for solving nonlinear differential equations

  2. Nonlinear circuit analysis techniques

  3. Nonlinear control systems

  4. Modeling and simulation of nonlinear components in communication systems

A selected set of papers from the last group are given in the list of references.

References

Problems

12.1

Derive the complex lowpass equivalent (CPE) model for the cubic nonlinearity of the form

y(t) = a1x(t) + a3x^3(t)

For a two-tone input of equal amplitudes at f1 = 13 Hz and f2 = 15 Hz, find the first-zone output for the bandpass case analytically and compute the power levels of the various output components. Simulate the bandpass and the LPE models and compare the simulated power levels with the computed power levels. Do this for at least three different sampling rates and compare the results.

12.2

Show that a second-order nonlinearity does not produce any first-zone intermodulation products (use a two-tone input as in Problem 12.1).

12.3

Show that f2(A) = 0 in the LPE model for an arbitrary memoryless nonlinearity y(t) = g(x(t)).

12.4

Derive the LPE model for a bandpass soft limiter and simulate the bandpass and LPE models with the same input as in Problem 12.1 and compare the results.

12.5

Derive the LPE model for the nonlinearity y(t) = |x(t)|.

12.6

Derive the LPE model for the limiter given in (12.8) and plot A versus f(A). Note that the Fourier integral will have to be evaluated using numerical integration procedures.

12.7

Experimental tests performed on a TWT amplifier result in the data given in Table 12.2.

  1. Implement a simulation model for a TWT amplifier specified by the AM-to-AM and AM-to-PM data given in Table 12.2.

    Table 12.2. Experimental TWT Data

    Input Power    Output Power    Phase (degrees)
    0              0               32.8510
    0.0110         0.0500          32.8510
    0.0120         0.0560          32.7570
    0.0140         0.0630          32.6540
    0.0150         0.0700          32.4840
    0.0170         0.0790          32.3210
    0.0190         0.0880          32.1930
    0.0220         0.0980          31.9740
    0.0250         0.1100          31.5650
    0.0280         0.1230          31.2160
    0.0310         0.1370          30.8170
    0.0350         0.1520          30.4400
    0.0390         0.1690          30.0840
    0.0440         0.1890          29.6240
    0.0500         0.2100          29.1450
    0.0560         0.2330          28.6350
    0.0630         0.2570          28.0410
    0.0700         0.2840          27.3180
    0.0790         0.3130          26.3990
    0.0890         0.3440          25.6150
    0.1000         0.3770          24.8700
    0.1120         0.4130          23.9820
    0.1250         0.4510          23.0500
    0.1410         0.4910          22.0380
    0.1580         0.5320          21.0060
    0.1770         0.5730          19.9700
    0.1990         0.6150          18.6400
    0.2230         0.6570          17.3930
    0.2510         0.6970          16.0620
    0.2820         0.7370          14.8880
    0.3160         0.7770          13.5740
    0.3540         0.8130          12.2540
    0.3980         0.8480          11.0110
    0.4460         0.8770          9.7070
    0.5010         0.9050          8.3830
    0.5620         0.9320          6.9150
    0.6300         0.9510          5.4690
    0.7070         0.9700          41.1550
    0.7950         0.9810          2.8770
    0.8910         0.9920          1.4660
    1.0000         1.0000          0
    1.1220         0.9980          -1.5710
    1.2580         0.9980          -3.0480
    1.4120         0.9890          -4.7500
    1.5840         0.9820          -6.5670
    1.7780         0.9760          -8.3850
    1.9950         0.9700          -10.2020
    2.2480         0.9630          -12.0190
    2.5110         0.9570          -13.8360
    2.8120         0.9510          -15.6530
    3.1620         0.9450          -17.4710
    3.5480         0.9380          -19.2880
    3.9810         0.9320          -21.1050
    4.4660         0.9260          -22.9220

  2. Simulate this model with a 64 QAM input for backoffs of −10 and −20 dB and compare the distortion in the signal constellations.

  3. Repeat the simulations for the two-tone input described in Problem 12.1 and observe the intermodulation components and their power levels as a function of backoff.

12.8

Set up a four-carrier QPSK input into the TWTA model specified by Table 12.2. The symbol rate for each QPSK signal is 1 symbol/sec and the lowpass equivalent carrier frequencies are f1 = −2.5 Hz, f2 = −1.25 Hz, f3 = 1.25 Hz, and f4 = 2.5 Hz. Assume that each QPSK signal is filtered by an SQRC filter with a roll off factor of 25%. Plot the power spectral density at the output of the TWT and observe the spectral regrowth in the vicinity of the tails and at f = 0 as the backoff is varied. Explain the spectral regrowth due to the nonlinearity in general and the intermodulation products in particular. Where would you expect the IM to be worst? Repeat for different carrier spacing and backoffs.

12.9

Fit Saleh’s functional form to the AM-to-AM and AM-to-PM data given in Table 12.2.

12.10

Implement a simulation model for the second-order PLL using the various numerical integration techniques given in Table 12.1. Simulate a second-order PLL using the same parameter values as the example in Chapter 6 and compare the results.

Appendix A: Saleh’s Model

% File: salehs_model.m
function [y]=salehs_model(x,backoff,n)
% This function implements Saleh's model
% x is the complex input vector of size n;
% backoff is in dB; the input amplitude is scaled by
% c=10^(backoff/20);
% The maximum normalized input power should be less than 3 dB,
% i.e., 20*log10(c*abs(x)) < 3 dB

y = zeros(1,n)*(1.0+i*1.0);          % initialize output array
a1=2.1587; b1=1.15;                  % model parameters
a2=4.0; b2=2.1;                      % model parameters
c=10^(backoff/20);                   % backoff in dB
for k=1:n
    ain = c*abs(x(k));
    thetain(k) = angle(x(k));
    aout = a1*ain/(1+b1*ain^2);
    thetapm(k) = a2*ain^2/(1+b2*ain^2);  % AM-to-PM (Saleh's form: a2*A^2/(1+b2*A^2))
    thetaout(k) = thetain(k)+thetapm(k);
    y(k) = aout*exp(i*thetaout(k));
end;
% End of function file.

Appendix B: MATLAB Code for Example 12.2

% File: c12_example2.m
% This example implements both Lowpass and bandpass versions of a
% power series nonlinearity of the form y(t) = x(t) - a3*x(t)^3
% For the BP Model f1=11Hz; f2=14Hz; IM @ 8 and 17 Hz
% The 3rd harmonics are at 33 and 42 Hz
%
% For the LPE model the ref freq is f0 = 12 Hz
% Hence f1 = -1 and f2 = 2 Hz; IM @ -4 and 5 Hz
% Input parameters: None;
% Plots: BP input; BP output; LPE input; LPE Output
f1=11.0; f2=14.0; ts=1.0/128; n=1024; a3=0.3;
% Generate input samples
for k=1:n
 t(k)=(k-1)*ts;
 x(k) = cos(2*pi*f1*t(k))+0.707*cos(2*pi*f2*t(k));
end
% Generate output samples
for k=1:n
 y(k)=x(k)-a3*x(k)^3;
end;
% Plot the results
[psdx,freq]=log_psd(x,n,ts); [psdy,freq]=log_psd(y,n,ts);
subplot(2,1,1)
plot (freq,psdx,'b'), grid;
ylabel('PSD in dB')
title('BP Input @ f1 = 11 and f2=14'),
subplot(2,1,2)
plot (freq,psdy,'b'), grid;
xlabel('Frequency in Hz')
ylabel('PSD in dB')
title('BP output: IM @ 8 and 17 and Third harmonics')
%
% This section of the model implements the LPE model for the 3rd-order
% power series nonlinearity.
% Baseband model: y(t) = x(t) - a3*x(t)^3.
% LPE Model (first zone): y(k) = x(k) - 0.75*a3*(abs(x(k))^2)*x(k);
% Generate LPE of the input signals using 12Hz as the ref frequency
% And generate output samples using the LPE model
f1=-1.0; f2=2.0;
for k=1:n
 t(k)=(k-1)*ts;
 x(k) = exp(i*2*pi*f1*t(k))+0.707*exp(i*2*pi*f2*t(k));
 y(k)=x(k)-0.75*a3*(abs(x(k))^2)*x(k);
end
% Plot the results
[psdx,freq]=log_psd(x,n,ts); [psdy,freq]=log_psd(y,n,ts);
figure;
subplot(2,1,1)
plot(freq,psdx); grid;
ylabel('PSD in dB')
title('LP Equivalent input; f0=12; f1=-1 and f2 = 2'),
subplot(2,1,2)
plot(freq,psdy); grid;
xlabel('Frequency in Hz')
ylabel('PSD in dB')
title('LP Output IM at 2f1-f2= -4 and 2f2-f1 =5')
% End of script file.

Supporting Routines

Program log_psd.m is defined in Appendix A of Chapter 7.
