Chapter 1. The Role of Simulation

The complexity of modern communication systems is a driving force behind the widespread use of simulation. This complexity results both from the architecture of modern communication systems and from the environments in which these systems are deployed. Modern communication systems are required to operate at high data rates with constrained power and bandwidth. These conflicting requirements lead to complex modulation and pulse shaping along with error control coding and an increased level of signal processing at the receiver. Synchronization requirements also become more stringent at high data rates and, as a result, receivers become more complex. While the analysis of linear communication systems operating in the presence of additive, white, Gaussian noise is usually quite simple, most modern systems operate in much more hostile environments. Multihop systems often require nonlinear amplifiers for efficiency. Wireless cellular systems often operate in the presence of heavy interference along with multipath and shadowing that leads to signal fading at the receiver site. This combination of complex systems and hostile environments leads to design and analysis problems that are no longer analytically tractable using traditional (nonsimulation-based) techniques.

Fortunately, the past two decades have seen the development of digital computers that are both powerful and inexpensive. Thus, modern computers are suitable for use at the desktop and can therefore be dedicated to the solution of problems taking many hours of computer time without interfering with the work of others. Computers have become easy to use, and the cost of computer resources is no longer a significant factor in many efforts. As a result, computer-aided design and analysis techniques are available to almost all who need them. The development of powerful software packages targeted to communication systems has accelerated the use of simulation in the communications area. Thus, the increase in system complexity has been accompanied by an increase in computing power. In many cases, the availability of appropriate computational power has directly led to many of the complex signal-processing structures that constitute the building blocks of modern communication systems. Thus, it is not just good luck that computational tools appeared at the time they were needed. Rather, practical computational power, in the form of the microprocessor, is the enabling technology for modern communication systems and is also the enabling technology for powerful simulation engines.

The growth in computer technology has also been accompanied by a rapid growth in what we loosely refer to as simulation theory. As a result, the tools and methodologies required for the successful application of simulation to design and analysis problems are more accessible and better understood than was the case a few decades ago. A large number of technical papers and several books are now available that illustrate the application of these tools to the design and analysis of communication systems.

An important motivation for the use of simulation is that simulation is a valuable tool for gaining insight into system behavior. A properly developed simulation is much like a laboratory implementation of a system. Measurements can easily be made at various points in the system under study. Parametric studies are easily conducted, since parameter values, such as filter bandwidths and signal-to-noise ratios (SNRs), can be changed at will and the effects of these changes on system performance can quickly be observed. Time-domain waveforms, signal spectra, eye diagrams, signal constellations, histograms, and many other graphical displays can easily be generated and, if desired, a comparison can be made between these graphical products and the equivalent displays generated by system hardware. We will see that the process of comparing simulation results with hardware-generated results is an important part of the design process. Most importantly, perhaps, one can perform “what if” studies more easily and economically using a simulation than with actual system hardware. Although we often perform a simulation to obtain a number, such as a bit error rate (BER), the main role of simulation, as noted by R. W. Hamming, is not to obtain numbers but to gain insight.

Examples of Complexity

The complexity of communication systems varies widely. We now consider three communication systems of increasing complexity. We will see that for the first system, simulation is not necessary. For the second system, simulation, while not necessary, may be useful. For the third system, simulation is necessary in order to conduct detailed performance studies. Even the most complicated of the three systems considered here is still simple by today's standards.

The Analytically Tractable System

A very simple communications system is shown in Figure 1.1. This system should remind us of the basic communications system studied in a first course on communications theory. The data source generates a sequence of symbols, dk, which are assumed to be discrete and drawn from a finite alphabet. For a binary communication system, the source alphabet consists of two symbols, usually denoted {0, 1}. In addition, the source is assumed to be memoryless, which means that the kth symbol generated by the source is independent of all other symbols generated by the source. A data source satisfying these properties is referred to as a discrete memoryless source (DMS). The role of the modulator is to map the source symbols onto waveforms, with a different waveform representing each of the source symbols. For a binary system, the modulator generates one of two possible waveforms, denoted {s1(t), s2(t)}. The transmitter, in this case, simply amplifies the modulator output so that the signals generated by the modulator are radiated with the desired energy per bit.

Figure 1.1. Analytically tractable communications system.

The next part of the system is the channel. In general, the channel is the most difficult part of the system to model accurately. Here, however, we will assume that the channel simply adds noise to the transmitted signal. This noise is assumed to have a power spectral density (PSD) that is constant for all frequencies. Noise satisfying this constant PSD property is referred to as white noise. The noise amplitude is also assumed to have a Gaussian probability density function. Channels in which the noise is additive, white, and Gaussian are referred to as AWGN channels.

The function of the receiver is to observe the signal at the receiver input and from this observation form an estimate, denoted d̂k, of the original data symbol, dk. The receiver illustrated in Figure 1.1 is referred to as an optimum receiver because the estimate of the data symbol is made so that the probability of error, PE, is minimized. We know from basic digital communication theory that the optimum receiver for the system described in the preceding paragraphs (binary signaling in an AWGN environment) consists of a matched filter (or, equivalently, a correlation receiver), which observes the signal over a symbol period. The output of the matched filter is sampled at the end of a symbol period to generate a statistic, Vk, which is a random variable because of the noise added to the transmitted signal in the channel. The statistic, Vk, is compared to a threshold, T. If Vk > T the decision, d̂k, is made in favor of one of the data symbols. If Vk < T the decision is made in favor of the other data symbol.

We refer to this system as analytically tractable because, with a knowledge of basic communication theory, analysis of the system is carried out with ease. For example, the probability of error is found to be

Equation 1.1. P_E = Q(\sqrt{k E_s / N_0})

where Es represents the average energy, calculated over a symbol period, associated with the set of waveforms {s1(t),s2(t)}, and N0 represents the single-sided power spectral density of the additive channel noise. The parameter, k, is determined by the correlation of the waveforms {s1(t),s2(t)}. As an example, for FSK (frequency-shift keying) transmission, the waveforms {s1(t),s2(t)} are sinusoids having different frequencies and equal power. Assuming that the frequencies are chosen correctly, the signals are uncorrelated and k = 1. For the PSK case (phase-shift keying), the signals used for data transmission are assumed to be sinusoids having the same frequencies and equal power but different initial phases. If the phase difference is π radians, so that s2(t) = −s1(t), the signals are anticorrelated and k = 2.
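As a simple numerical illustration (the code below is a sketch, not part of the system description), Equation 1.1 can be evaluated for the FSK (k = 1) and PSK (k = 2) cases. The Q-function is expressed in terms of the complementary error function, and the range of Es/N0 values is chosen arbitrarily.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian Q-function written in terms of the complementary error function."""
    return 0.5 * erfc(x / np.sqrt(2.0))

# Evaluate Equation 1.1, PE = Q(sqrt(k*Es/N0)), for FSK (k = 1) and PSK (k = 2).
EsN0_dB = np.arange(0, 11)                    # arbitrary range of Es/N0 in dB
EsN0 = 10.0 ** (EsN0_dB / 10.0)

for label, k in (("FSK (k = 1)", 1), ("PSK (k = 2)", 2)):
    PE = qfunc(np.sqrt(k * EsN0))
    print(label, np.array2string(PE, precision=3))
```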

The performance of the system illustrated in Figure 1.1 is easily determined using traditional analysis techniques, and we are therefore able to classify the system as analytically tractable. Why is this system analytically tractable? The first and most obvious reason deals with the AWGN channel and the fact that the receiver is linear. Since the noise is Gaussian and the matched filter is a linear system, the decision statistic, Vk, is a Gaussian random variable. We are therefore able to calculate the bit error rate (BER) analytically as a function of the parameters of the receiver filter and determine the values of those parameters that result in a minimum BER.

A number of other factors contribute to the analytical tractability of the system shown in Figure 1.1. These relate to the simplicity of the system model, which results from a number of assumptions. The data source was assumed to be memoryless, which may or may not be true in practice. In addition, perfect symbol synchronization was assumed, so that we have exact knowledge of the beginning and ending times of the data symbols. This assumption allows the decision statistic, Vk, to be correctly extracted.

Would simulation ever play a role in an analytically tractable system? The answer is yes, since the system shown in Figure 1.1 may well be the basic building block of a more complex system. Simulation code can first be developed for this simple system. The resulting simulation can be validated with ease, since analysis of the system is straightforward. At this point the data source, modulator, channel, or receiver can be modified as required to model the system under study. In addition, other subsystems can be added to the simulation model as needed. As we proceed with the task of developing a simulation model of the system of interest, we can be confident that the starting point was correct.

The Analytically Tedious System

We now turn our attention to a somewhat more complex system. The system illustrated in Figure 1.2 is identical to the previously investigated system except for the addition of a nonlinear high-power amplifier (HPA) and a filter in the transmitter. Consider first the nonlinear amplifier. Nonlinear amplifiers exhibit much higher power efficiency than linear amplifiers and, as a result, are often preferred over linear amplifiers in environments where power is limited. Examples include space applications and mobile cellular systems, where battery power must be conserved. Unlike a linear amplifier, which preserves the spectrum of the input signal, a nonlinear amplifier generates harmonic and intermodulation distortion. As a result, the spectrum of the amplifier output will be spread over a much larger bandwidth than that occupied by the spectrum of the modulator output. The filter following the amplifier will in most cases be a bandpass filter with a center frequency equal to the desired carrier frequency. The role of the filter is to attenuate the harmonic and intermodulation distortion resulting from the nonlinearity.

Figure 1.2. Analytically tedious communications system.

The filter following the modulator and HPA leads to time dispersion of the data signal, so that the filtered signals are no longer time limited to the symbol period. This leads to intersymbol interference (ISI). As a result of ISI, the probability of error for the ith symbol depends upon one or more of the symbols preceding the symbol upon which the decision is being made. The number of previous symbols that must be considered in the demodulation of the ith symbol depends upon the memory associated with the signal at the filter output. If the probability of error for the ith symbol depends on the k previous symbols, we compute the quantity

P(E_i \mid d_{i-1}, d_{i-2}, \ldots, d_{i-k})

For the binary case there are 2^k different sequences of length k. Assuming that each data symbol is equally likely to be a binary 0 or 1, the error probability of the ith symbol is given by

Equation 1.2. P(E_i) = \frac{1}{2^k} \sum_{j=1}^{2^k} P(E_i \mid \mathbf{d}_j), where \mathbf{d}_j denotes the jth of the 2^k possible preceding sequences of length k

In other words, one must compute 2^k different error probabilities, with each error probability conditioned on one of the 2^k possible preceding sequences of length k, and average the 2^k results. Since the channel is assumed AWGN, each of the 2^k error probabilities is a Gaussian Q-function. It is straightforward, but tedious, to calculate the argument of each Q-function and, therefore, simulation is often used.
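The averaging in Equation 1.2 can be sketched as follows. The sketch assumes a hypothetical ISI model in which each of the k previous symbols shifts the decision statistic by a fixed amount; the signal amplitude, noise standard deviation, and ISI weights are arbitrary values chosen only for illustration.

```python
import itertools
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

A = 1.0                          # nominal matched-filter output (illustrative)
sigma = 0.3                      # noise standard deviation (illustrative)
isi_weights = [0.15, 0.05]       # contribution of each previous symbol, k = 2

k = len(isi_weights)
PE = 0.0
# Equation 1.2: average the conditional error probability (a Gaussian Q-function
# for an AWGN channel) over all 2^k equally likely preceding binary sequences.
for seq in itertools.product((-1.0, 1.0), repeat=k):
    isi = sum(w * s for w, s in zip(isi_weights, seq))
    PE += qfunc((A + isi) / sigma)
PE /= 2 ** k
print("average error probability:", PE)
```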

The system illustrated in Figure 1.2 has an important property that makes analysis straightforward. Note that the system is linear from the point at which the noise is injected to the point at which the statistic Vk appears. The statistic Vk often takes the form

Equation 1.3. V_k = S_k + I_k + N_k

where Sk and Ik are the components of Vk due to signal and intersymbol interference, respectively, and Nk is the component of Vk due to the channel noise. Thus, if the channel noise is Gaussian, Nk will be a Gaussian random variable, since it is a linear transformation of a Gaussian random variable. In addition, the decision statistic Vk will be a Gaussian random variable having the same variance as Nk but with mean Sk + Ik, both of which are deterministic. The mean of Vk can be computed in a straightforward manner. The variance of Vk is determined from knowledge of the power spectral density of the channel noise and the equivalent noise bandwidth of the system from the channel to the point where Vk appears. The pdf of Vk is therefore known and the error probability is easily determined. To summarize, the reason that we can easily determine the pdf of Vk, even though the system has a nonlinearity, is because the noise does not pass through the system nonlinearity.

The fact that the noise passes only through the linear portion of the system has a significant impact on the simulation methodology. Because the noise does not pass through a nonlinearity, the mean of Vk can quickly be determined using a noise-free simulation. The variance of Vk can be determined analytically and, as a result, the pdf of Vk is known and the error probability is easily determined. These concepts are combined in a simulation technique that is both simple and fast. The result is the semi-analytic method, in which analysis and simulation are combined in a way that leads to very fast simulations. Semi-analytic simulation is an important tool and will be the subject of a later chapter.
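The following sketch illustrates the semi-analytic idea in its simplest form. A noise-free pass through a hypothetical dispersive filter produces the decision statistics Sk + Ik, and the Gaussian noise contribution is handled analytically through a standard deviation that is assumed to be known; the filter taps and noise level are illustrative only.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

rng = np.random.default_rng(1)

# Noise-free simulation: pass random antipodal symbols through an illustrative
# ISI-producing filter and record the decision statistics V_k = S_k + I_k.
symbols = rng.choice([-1.0, 1.0], size=5000)
taps = np.array([0.05, 0.9, 0.05])            # hypothetical filter taps
V = np.convolve(symbols, taps, mode="same")

# Analytic part: the noise component N_k is Gaussian with a variance assumed to
# be known from the noise PSD and the equivalent noise bandwidth of the system.
sigma = 0.25

# Average the conditional Gaussian error probabilities over the noise-free run.
PE = np.mean(qfunc(symbols * V / sigma))
print("semi-analytic error probability estimate:", PE)
```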

The Analytically Intractable System

The system illustrated in Figure 1.3 is referred to as an analytically intractable system and is a simple model of a two-hop satellite communications system. The satellite transponder is modeled as a nonlinear HPA followed by a filter to remove the out-of-band harmonic distortion caused by the nonlinearity. Comparison of Figure 1.3 with Figure 1.2 shows that they are quite similar. A satellite channel model has been added, and it contains two noise sources rather than one. One noise source represents the uplink (transmitter-to-satellite) noise, and the other noise source represents the downlink (satellite-to-receiver) noise. The problem lies in the fact that the noise at the receiver consists of two components: the downlink noise and the uplink noise that has passed through the nonlinear HPA. Even assuming that both the uplink and the downlink noise are Gaussian, the pdf of the noise at the receiver is very difficult to determine. The downlink noise is easy to model, since it passes only through the linear portion of the system. The uplink noise, however, leads to difficulties because it passes through the nonlinear HPA. Even if the uplink noise is Gaussian, the pdf of the uplink noise at the receiver input is no longer Gaussian. Determination of the pdf of the decision statistic, Vk, is a very difficult, if not impossible, undertaking. Without exact knowledge of the pdf of the decision statistic, the probability of error cannot be determined analytically. Simulation is an essential tool for these types of systems.

Figure 1.3. Analytically intractable communications system.

The range of communication systems considered in this section has been very narrow. The systems were chosen simply to illustrate how increasing complexity gives rise to the need for simulation. Many systems of current interest fall into the analytically intractable category. Consider, for example, a wireless cellular radio link operating in a high interference and multipath environment. Simulation is almost always necessary for the detailed analysis of such systems.

Multidisciplinary Aspects of Simulation

Prior to the 1970s, simulation problems were often solved in a somewhat ad hoc manner. The methodologies for developing simulations, and the error sources present in all simulation programs, were not widely understood. Over the past 20 years, the research community has produced a body of knowledge that provides a methodology for simulation development and a theoretical framework for solving many of the problems that arise in the development of simulation programs. This body of knowledge provides those using simulation as an analysis tool with the insights and understanding necessary to develop reliable simulations that execute in reasonable computer run times. Building this body of knowledge has required the integration of material from a variety of fields. Although the list is not exhaustive, nine important areas of study that impact our study of simulation are depicted in Figure 1.4. We will now briefly look at these nine areas in order to better understand their relationship to the art and science of simulation.

Figure 1.4. Areas impacting the study of the simulation of communication systems.

The concepts of linear system theory give us the techniques for determining the input-output relationships of linear systems. This body of knowledge allows us to represent system models both in the time domain (the system impulse response) and in the frequency domain (the system transfer function). The basic concepts of linear system theory build the foundation for much of what follows.

An understanding of communication theory is obviously important to our study. The architecture of systems, the operational characteristics of various subsystems such as modulators and equalizers, and the details of channel models must be understood prior to the development of a simulation. While simulation can be used to determine appropriate values for system parameters, the practical range of parameter values must usually be known before the simulation is developed. Some insight into proper system behavior is necessary in order to ensure that the simulation is working properly and that the results are reasonable.

The tools of digital signal processing (DSP) are used to develop the algorithms that constitute the simulation model of a communication system. This simulation model usually consists of several discrete-time approximations of continuous-time system components, such as filters, and a knowledge of DSP techniques is necessary to understand and appreciate the nature of these approximations. As a matter of fact, each functional block in a simulation model is a DSP operation and, therefore, the tools of digital signal processing provide the techniques for implementing simulations.

Numerical analysis is closely related to DSP but is mentioned separately, since it is an older discipline. Many classical techniques, such as the suite of tools for numerical integration, polynomial interpolation, and curve fitting, have their origins in numerical analysis.

The concepts of probability are also fundamental to our study. The performance measures of communication systems are often expressed in probabilistic terms. As examples, we often have interest in the probability of bit error or symbol error in a digital communication system. In synchronization systems we have interest in the probability that a phase error will exceed a given level. Basic probability theory provides us with the concept of random variables and the probability density function. Knowledge of the underlying probability density function allows us to compute the quantities previously discussed. We will see later that the result of many simulations (called stochastic simulations) is typically a random variable, and the variance of that random variable is often a measure of the usefulness and statistical accuracy of the simulation.

The signal and noise waveforms that are processed by our simulations will, in many cases, be assumed to be sample functions of a stochastic process. Development of the algorithms to produce waveforms having the appropriate statistical properties will require knowledge of the underlying stochastic process. This is especially true for developing simulation models for channels. Stochastic process theory gives us the tools to describe these processes in the time domain (the autocorrelation function) and in the frequency domain (the power spectral density). Many other applications of stochastic process theory will appear in the course of our work.

A few of the very basic concepts of number theory provide us with the tools used to develop random number generators. These random number generators are the basic building blocks of the waveform generators used to represent digital sequences, noise waveforms, signal fading, and random interference, to name only a few applications.
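A minimal sketch of one such building block is given below: a linear congruential generator that produces a sequence of uniform numbers, which can then be mapped to random bits. The modulus and multiplier shown are the widely published "minimal standard" constants and are used here only as an example.

```python
def lcg(seed, a=16807, c=0, m=2**31 - 1):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m                      # scale to a uniform value in (0, 1)

# Draw a few uniform numbers and map them to random bits.
gen = lcg(seed=12345)
uniforms = [next(gen) for _ in range(5)]
bits = [int(u < 0.5) for u in uniforms]
print(uniforms)
print(bits)
```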

Some of the basic concepts of computer science will be useful in the course of our study. As examples, the word length and the format of the words used to represent samples of signals will impact simulation accuracy, although this is often of minimal importance in floating-point processors. The choice of language is important in the development of commercial simulators. Available memory, and the organization of that memory, will impact the manner in which data and instructions are passed from one part of the simulation to another. Graphics requirements and capabilities will determine how waveforms are displayed and will impact the transportability of the simulation code from one computer platform to another.

The tools and concepts of estimation theory will allow us to evaluate the effectiveness of a given simulation result. As mentioned earlier, the result of a stochastic simulation is a random variable. Each execution of the simulation will produce a value of that random variable, and this random variable will constitute an estimator of a desired quantity. Typically, all values produced by replications of the simulation will be different. Simulations are most useful when the estimator produced by a simulation is unbiased and consistent. Unbiased estimators are those for which the average value of the estimate is the quantity being measured. This is another way of saying that on the average the estimates produced by the simulation are correct. This is clearly a desired attribute. A consistent estimate is one for which the variance of the estimate decreases as the simulation run length increases. In other words, if 100 independent measurements of the height of a person are made, and the results averaged, we would expect a more accurate estimate of the height than would result from a single measurement. Estimation theory provides us with the analytical tools necessary to explore questions of this type and, in general, to assess the reliability of simulation results.
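The height-measurement example can be made concrete with a short numerical experiment; the "measurements" below are synthetic Gaussian samples with an arbitrarily chosen mean and standard deviation. Averaging 100 measurements produces an estimate whose spread is one-tenth that of a single measurement, which is the behavior expected of a consistent estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
true_height = 1.80          # hypothetical quantity being estimated (meters)
noise_std = 0.02            # hypothetical measurement error (meters)

# Replicate the experiment many times for two sample sizes and compare the
# spread (standard deviation) of the resulting estimates.
for n in (1, 100):
    estimates = rng.normal(true_height, noise_std, size=(10000, n)).mean(axis=1)
    print(f"n = {n:3d}: mean of estimate = {estimates.mean():.4f}, "
          f"std of estimate = {estimates.std():.5f}")
```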

The previous paragraphs are not intended to make a study of simulation appear to be a daunting task. The goal is simply to point out that simulation is a field of study in its own right. It draws from many other fields just as electrical engineering draws from physics, mathematics, and chemistry, to name only a few. It is expected that those embarking on this study have a grasp of linear system theory, communications, and probability theory. Much of the remaining material will be treated in the following chapters of this text.

Models

The first step in developing a simulation of a communication system is the development of a simulation model for the system of interest. We are all familiar with models and should understand that models describe the input-output relationship of physical systems or devices. These models are typically expressed in mathematical form. The art of modeling is to develop behavioral models (we use this term since the model captures the input-output behavior of the device under specific conditions) that are detailed enough to capture the essential features of the system being modeled, yet not so complex that they cannot be used with a reasonable expenditure of computational resources. Tradeoffs among accuracy, complexity, and computational requirements are therefore usually required.

It is useful to consider two different types of models in the work to follow: analytical models and simulation models. Both analytical models and simulation models are abstractions of a physical device or system, as illustrated in Figure 1.5. The physical device illustrated in Figure 1.5 may be a single circuit element such as a resistor or a subsystem such as a single-chip implementation of a phase-locked loop (PLL) used as a bit synchronizer. It may even be an entire communications system. The first and most important step in the modeling process is to identify those attributes and operational characteristics of the physical device that are to be represented in the model. The identification of these essential features often requires considerable engineering judgment and always requires a thorough understanding of the application for which the model is being developed. The accuracy required of any mathematical analysis or any computer simulation based on the model is limited by the accuracy of the model. Once these essential features have been identified, an analytical model is developed that captures the essential features of the physical device. Analytical models typically take the form of equations, or systems of equations, that define the input-output relationship of the physical device. These equations are, at best, only a partial description of the device being modeled, since only certain aspects of the device are modeled. In addition, the equations that define the device are typically accurate only over a limited range of voltages, currents, and frequencies. The simulation model is usually a collection of algorithms that implement a numerical solution of the equations defining the analytical model. The techniques of numerical analysis and digital signal processing are the tools used in the development of these algorithms.

Figure 1.5. Devices and models.

We also see from Figure 1.5 that the level of abstraction increases as one moves from the physical device to the analytical model and finally to the simulation model. The increase in abstraction results, in part, from the assumptions and approximations made in moving from the physical device to the analytical model to the simulation model. Every assumption and approximation moves us farther from the physical device and its operating characteristics. In addition, the level of abstraction present at any step in the process is due, in large part, to the representation used for the analytical model. As an example, assume that the physical device being considered is a phase-locked loop. The analytical model for a PLL can take many forms, with each form corresponding to a different level of abstraction. An analytical model having a low level of abstraction could consist of a system of equations, with each equation corresponding to a single functional operation within the PLL. Each of these functional, or signal-processing, operations within the PLL (phase detector, loop filter, and voltage-controlled oscillator) is represented by a separately identifiable equation within the system of equations defining the overall PLL. The process and assumptions used in moving from the hardware device to the analytical model are often clear from observation of these equations. In addition, simulations developed from such a system of equations may allow individual signals of interest within the PLL to be observed and compared to corresponding signals in the hardware device. We will see that such comparisons are often an essential part of the design process. On the other hand, the individual equations representing separate signal-processing operations may be combined into a single nonlinear (and perhaps time-varying) differential equation defining the input-output relationship of the PLL, which leads to a much more abstract model. The individual signal-processing operations that take place within the PLL, and the waveforms associated with these operations, are no longer separately identifiable. It might seem logical to consider only analytical models having a low level of abstraction. This, however, is not the case.

Models having different levels of abstraction will be frequently encountered throughout our studies. As another example, we will see that channels may be modeled using a waveform-level approach, in which sample values of waveforms are processed by the model. On the other hand, channels may be represented by a discrete Markov process based on symbols rather than on samples of waveforms. In addition, Markov channel models usually absorb the modulator, transmitter, and receiver into the channel. These models are highly abstract and are difficult to parameterize accurately but, once the parameters are determined, result in numerically efficient simulations that execute rapidly. This efficiency is a principal reason for having interest in the more abstract modeling approaches.

Figure 1.6 tells us much about the modeling process. It is intuitively obvious that a desirable attribute of a simulation is fast execution of the simulation code. Simple models will execute faster than more complex models, since fewer lines of computer code must be processed each time the model is invoked by the simulation. Simple models may not, however, fully characterize the important attributes of a device, and therefore the simulation may yield inaccurate results. In such a case, more complex models are necessary. While more complex models may yield more accurate simulation results, the increased accuracy usually comes at the cost of increased simulation run time.

Figure 1.6. Effects of model complexity.

Figure 1.6 makes it clear that the desirable attributes of simulation accuracy and execution speed are in competition. A well-designed simulation is one that provides reasonable accuracy along with reasonable execution speeds. Of course, when the specifications for a simulation demand a high level of accuracy, the ability to trade off accuracy and execution speed becomes severely constrained. In this case the model complexity must be sufficient to guarantee the required accuracy, and long simulation run times become, perhaps, unavoidable.

Figure 1.6 tells only part of the story. More complex models often require that extensive measurements be made before accurate simulation models can be developed. The development of a simulation model for a nonlinear amplifier is one example. Another, and even more complex, example is the development of a simulation model of a wireless communication channel when multiple interference sources and severe frequency-selective fading are present. There are many other cases we could mention in which extensive measurements are required. It should be kept in mind that these measurements require resources (both equipment and engineering time) and therefore a relationship exists between the cost of model development and model complexity. It should also be kept in mind that complex models are more error prone than simple models.

When we move from an analytical model to a discrete-time (digital) simulation model, additional assumptions and approximations are involved. At this point we mention only a few of the most obvious. The voltages and currents present in both the physical device and in the analytical model are usually considered to be continuous functions of the continuous variable time. In moving from the analytical model to the simulation model, we move from the continuous domain to the discrete domain. This process involves quantizing the amplitudes of the voltages and currents and sampling these quantities in time. Time sampling leads to aliasing errors, and quantizing amplitudes leads to quantizing errors. While quantizing errors are often negligible in simulations performed on floating-point processors, aliasing errors require our attention if the sampling frequency for the simulation is to be selected appropriately. Aliasing errors are reduced by increasing the sampling frequency, but an increased sampling frequency results in more samples being required to represent a given segment of data. The result is that more samples must be processed in order to execute the simulation, and the time necessary to execute the simulation is thereby increased. A tradeoff therefore exists between sampling frequency and simulation run time. One should not attempt to eliminate aliasing errors, or most other errors for that matter, but rather should seek a simulation having the required accuracy with reasonable run times.
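The aliasing issue can be seen in a short numerical example; the tone frequencies and sampling rate below are arbitrary choices. A 9 Hz sinusoid sampled at 10 samples per second is indistinguishable, at the sample instants, from a 1 Hz sinusoid, so a sampling frequency chosen too low silently misrepresents the signal.

```python
import numpy as np

fs = 10.0                      # sampling frequency (arbitrary for illustration)
t = np.arange(0.0, 1.0, 1 / fs)

x_high = np.cos(2 * np.pi * 9.0 * t)    # 9 Hz tone, well above fs/2
x_alias = np.cos(2 * np.pi * 1.0 * t)   # 1 Hz tone, its alias

# The sampled sequences agree to machine precision: the samples alone cannot
# distinguish the two signals, which is the aliasing error discussed above.
print(np.max(np.abs(x_high - x_alias)))
```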

The modeling concepts briefly touched on here will be revisited in more detail in the following chapter and will be encountered many times throughout this book. The purpose of this brief introduction is simply to remind the reader that we deal not with physical devices but with models in performing any engineering analysis. Analytical models (equations) are abstractions of physical devices and involve many assumptions and approximations. Simulation models are based on analytical models and involve additional assumptions and approximations. Great care must be exercised at each step in this process to ensure a valid simulation model and to ensure that the simulation results reflect reality.

Deterministic and Stochastic Simulations

There are basically two types of simulation: deterministic simulation and stochastic simulation. Deterministic simulation is probably familiar to most of us from previous experience. An example might be a SPICE simulation of a fixed electrical circuit in which the response to a certain deterministic input signal is of interest. A software program is developed that represents the components of the circuit and the input applied to the circuit. The simulation generates the currents present in each branch of the network and, consequently, the voltage across each circuit element. The voltages and currents are typically expressed as waveforms. The desired time duration of these waveforms is specified prior to executing the simulation program. Since the circuit is fixed and the input signal is deterministic, identical results will be obtained each time the simulation is executed. In addition, these same waveforms will be obtained if the network is solved using traditional (pencil-and-paper) techniques. Simulation is used in order to save time and to avoid the mathematical errors that result from performing long and tedious calculations.

Now assume that the input to the network is a random waveform. (In more precise terminology we would say that the input to the network is a sample function of a stochastic process.) Alternatively, the system model might specify that the resistance of a resistor is a random variable defined by a certain probability density function. The result of this simulation will no longer be a deterministic waveform, and samples of this waveform will yield a set of random variables. Simulations in which random quantities are present are referred to as stochastic simulations.

As an example, assume that the voltage across a certain circuit element is denoted e(t) and that the simulation is performed to generate the value of e(t) at 1 millisecond. In other words, we desire e(0.001). In a deterministic simulation e(0.001) is fixed, and we get the same result each time we perform the simulation. We also get this same number using traditional analysis techniques. In a stochastic simulation e(0.001) is a random variable, and each time we perform the simulation we get a different value of this random variable.
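The distinction can be sketched with a simple first-order RC circuit driven by a unit step; the component values, integration step, and noise level below are arbitrary. The deterministic run returns the same value of e(0.001) every time it is executed, while adding a random component to the input makes e(0.001) a random variable.

```python
import numpy as np

R, C = 1e3, 1e-6      # illustrative component values (1 kilohm, 1 microfarad)
dt = 1e-6             # integration step (seconds)
steps = 1000          # integrate out to t = 1 ms

def simulate(rng=None):
    """Euler integration of RC*de/dt + e = v_in(t); returns e at t = 1 ms."""
    e = 0.0
    for _ in range(steps):
        v_in = 1.0                              # unit step input
        if rng is not None:
            v_in += rng.normal(scale=0.1)       # random component of the input
        e += dt * (v_in - e) / (R * C)
    return e

print("deterministic:", simulate(), simulate())          # identical values
rng = np.random.default_rng()
print("stochastic:   ", simulate(rng), simulate(rng))    # different values
```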

Another example might be a digital communication system in which the received signal consists of the transmitted signal plus random noise. Suppose that it is our task to compute the probability of symbol error at the receiver output. We know from a basic course in digital communications that if the modulation format is BPSK (binary phase-shift keying) and if the channel is AWGN (additive, white, Gaussian noise), the probability of symbol error is given by

Equation 1.4. P_E = Q(\sqrt{2E_b/N_0})

where Eb is the symbol energy, N0 is the single-sided noise power spectral density, and Q(x) is the Gaussian Q-function defined by

Equation 1.5. Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-y^2/2}\, dy

Note that PE is a number and not a random variable, even though a random quantity (noise) is present at the receiver input. The number PE is an average over an infinite number of trials, in which a trial consists of passing a digital symbol through the system and observing the result. The result, of course, will be that either a correct decision or an error is observed at the receiver output. For ergodic processes we can determine the probability of error in two different ways. We can view a single bit being transmitted and calculate PE as an ensemble average in which we have an infinite ensemble of noise waveforms all having the same statistical properties. Alternatively, we can determine PE as a time average by transmitting infinitely many binary symbols and using a single sample function of the noise. The key is that we calculate PE using an infinite number of transmitted binary symbols. If, instead of determining PE based on an infinite number of transmitted symbols, we estimate PE using a finite number of transmitted binary symbols, we will find that the estimate of PE is indeed a random variable, since each finite-duration sample function will yield a different (hopefully not much different) value for the error probability. This will be demonstrated in a later paragraph when we take a brief look at the Monte Carlo technique.

It is very important to note that both analysis and deterministic simulations result in a number. Each time the analysis is performed, the same number will result. Each time a deterministic simulation is performed, the same result will be obtained. Stochastic simulations, however, result in random variables, and the statistical behavior of these random variables is very important in determining the quality of the simulation result.

An Example of a Deterministic Simulation

Although the main purpose of this book is to present and explore the techniques used in stochastic simulations, one should not lose sight of the fact that completely deterministic simulations are important tools for gaining insight into the operational behavior of communication systems. One can execute a simulation that determines the waveforms present at points of interest in a system. System parameters can be changed, and the effects of these changes can be readily observed. Very simple models can often be used while still obtaining important results.

As a simple example, consider a phase-locked loop, such as would be used for synchronization or demodulation. A block diagram is illustrated in Figure 1.7. The system appears quite simple. However, due to the nonlinear characteristic of the phase detector, analysis of phase-locked loops in the acquisition mode is quite complex. For instance, an important performance parameter of a PLL is the time required to acquire a signal, given the loop parameters and a specification of the input signal. To solve this problem analytically requires the solution of a nonlinear differential equation. We therefore turn our attention to simulation.

Figure 1.7. PLL model.

Suppose that a PLL is designed with a natural frequency of 5 Hz and a damping factor of 0.707. Also assume that the PLL is operating in lock and that the input frequency changes instantaneously by 20 Hz at t = 0.1 second. Given the large ratio of the step change in the input frequency to the natural frequency of the PLL, the PLL will lose phase lock and must reacquire the input signal. The nonlinear behavior of the loop leads to a phenomenon called “cycle slipping,” and the acquisition time will be largely dependent upon the number of cycles slipped in the acquisition process.

The result of a simple simulation is illustrated in Figure 1.8, in which the step in the input frequency occurs at t = 0.1. We see that the PLL slips three cycles and then reacquires the signal approximately 0.6 s after application of the frequency step. The simulation is completely deterministic, and performing multiple simulations using the same PLL parameters and signal model will yield identical results. This problem will be explored in greater depth in a later chapter in order to examine techniques for developing system simulations without the complications imposed by the presence of random perturbations.

Figure 1.8. PLL acquisition behavior.
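A rough sketch of how such a simulation might be constructed is given below. The loop is represented by a baseband-equivalent nonlinear model with a sinusoidal phase detector and a proportional-plus-integrator loop filter; this loop-filter form, the Euler integration step, and the simulation length are assumptions made for the sketch rather than details taken from the figure.

```python
import numpy as np

# Loop parameters: natural frequency of 5 Hz and damping factor of 0.707, as in
# the example above, mapped onto an assumed proportional-plus-integrator filter.
fn, zeta = 5.0, 0.707
wn = 2 * np.pi * fn
K = 2 * zeta * wn              # loop gain
a = wn ** 2 / K                # integrator gain of the loop filter

dt, t_end = 1e-4, 1.2
t = np.arange(0.0, t_end, dt)
w_in = np.where(t >= 0.1, 2 * np.pi * 20.0, 0.0)   # 20 Hz frequency step at 0.1 s

theta_e = 0.0                  # phase error (radians)
v_int = 0.0                    # integrator state of the loop filter
phase_error = np.empty_like(t)

for n, w in enumerate(w_in):
    pd = np.sin(theta_e)                   # sinusoidal phase detector
    v_int += a * pd * dt                   # integrator branch of the filter
    vco = K * (pd + v_int)                 # VCO frequency deviation (rad/s)
    theta_e += (w - vco) * dt              # Euler update of the phase error
    phase_error[n] = theta_e

print("approximate cycles slipped:", int(round(phase_error[-1] / (2 * np.pi))))
```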

An Example of a Stochastic Simulation

We now consider a completely different situation. Consider the simple digital communication system illustrated in Figure 1.1 and assume that we wish to determine the bit error rate (BER). The most basic simulation technique for determining this important performance measure is to pass a large number of digital symbols through the system and count errors at the receiver output. This is known as the Monte Carlo technique. If N symbols are processed by the system and Ne errors are observed at the system output, the Monte Carlo estimate of the error probability is

Equation 1.6. \hat{P}_E = N_e/N

This quantity is known as the BER based on N symbols, and its value lies in the fact that it provides an estimate of the symbol error probability, which, using the relative frequency definition of probability, is

Equation 1.7. P_E = \lim_{N \to \infty} N_e/N

Since a simulation of necessity can process only a finite number of symbols, the symbol error probability can only be approximated.

Since the terms bit error rate and probability of bit error are often taken to mean the same thing, it might appear confusing to distinguish between the two. These two quantities, however, are actually quite different. The BER is an estimate of the probability of bit error. One should keep in mind that a “rate” is formed as a fraction, such as miles per hour. The BER is indeed a rate, since it represents Ne errors per N transmitted symbols. Replicating the random experiment of transmitting N symbols through a noisy, or random, channel K times will usually result in K different error counts, Ne. The probability of bit error, however, is based on passing an infinite number of symbols through the system. The probability of bit error, rather than being a random variable, is a number. For example, the probability of bit error for a binary PSK (phase-shift keying) system operating in AWGN (additive, white, Gaussian noise) is P_E = Q(\sqrt{2E_b/N_0}), where Eb is the energy per bit and N0 is the single-sided power spectral density of the channel noise. This number remains fixed as long as Eb and N0 are held constant.

Suppose we perform K = 7 independent Monte Carlo simulations of a binary PSK communications system in which we have adjusted Eb/N0 so that the probability of symbol (or bit) error is 0.1. Each simulation is based on N = 1,000 transmitted symbols. The result of replicating the random experiment of passing 1,000 symbols through the random channel seven times is shown in Figure 1.9. The randomness is evident in that the BER based on any number of transmissions N ≤ 1,000 gives a spread of results. This spread is related to the variance of the estimate and, in general, in order for simulation results to be useful, the spread should be small. Note that, for the results shown in Figure 1.9, the variance grows smaller as N grows larger. This is typical behavior for a correctly developed estimator. Also note that, for large N, the results cluster about the true probability of error, and we tend to believe that, for a correctly developed simulation, the estimator, P̂E, will converge to the probability of error, PE, consistent with the relative frequency definition of probability. This is also typical of correctly developed estimators. These two desired conditions are well-defined concepts in estimation theory. If the variance of the estimate tends to zero as N grows arbitrarily large, we say that the estimate is consistent. Also, if E[P̂E] = PE, we say that the estimate is unbiased. We will have much more to say about the properties of estimators in later chapters, and we will also learn how to develop the simulation upon which Figure 1.9 is based.

Figure 1.9. Monte Carlo simulation results.
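A sketch of the experiment that produces results of this kind is given below. The system is reduced to its baseband equivalent (antipodal symbols plus Gaussian noise), and the noise level is set so that the true error probability is 0.1; the random seed and the use of the inverse Q-function from scipy are incidental choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

N, K = 1000, 7                          # symbols per replication, replications
target_PE = 0.1
sigma = 1.0 / norm.isf(target_PE)       # noise std such that Q(1/sigma) = 0.1

for k in range(K):
    symbols = rng.choice([-1.0, 1.0], size=N)          # antipodal (BPSK) symbols
    received = symbols + rng.normal(scale=sigma, size=N)
    decisions = np.where(received > 0.0, 1.0, -1.0)    # threshold at zero
    errors = int(np.count_nonzero(decisions != symbols))
    print(f"replication {k + 1}: N_e = {errors:3d}, BER = {errors / N:.3f}")
```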

The Role of Simulation

Simulation is used extensively during the many phases of the design and deployment of modern communication systems. While simulation is used primarily for performance evaluation and design tradeoff studies (parameter optimization), it can also be used for establishing test procedures and benchmarks, for end-of-life predictions, and for anomaly investigations after the system has been deployed in the field. Both the simulation methodology and the simulation model used to represent the system will depend on the phase of the design, implementation, and lifecycle of the system. The simulation methodology will also be governed, or at least guided, by the overall design flow used. We now illustrate the design flow and the use of simulation during the various phases of the design and lifecycle of a communication system.

The design of a complex communication system is done from the “top down,” whereas hardware implementation usually proceeds from the “bottom up.” By this we mean that, in designing a system, we start at the system level (the highest level of abstraction) and fill in the details of the design as we proceed down to the subsystem level and finally to the component level, at which point the details of component assembly can be identified. When building a system, the components are first fabricated. These are then assembled into subsystems, and finally the entire system is constructed from the subsystems. Simulation development follows the top-down approach. We start with a system-level simulation, having a high level of abstraction, followed by more and more detailed models and simulations of subsystems and components. As the implementation begins, the measured characteristics of components and subsystems are included in the simulation model.

We now describe the various phases of the design process and how simulation is used during each phase.

Link Budget and System-Level Specification Process

The design process for a communications system begins with the statement and analysis of user requirements and performance expectations, including throughput, error rate, outage probability, and constraints on bandwidth, power, weight, complexity/cost, the channel over which the system is expected to operate, and the life expectancy of the system. Based on the user requirements, the “systems engineer” arrives at an initial concept for the system, such as the modulation scheme to be used, the coding and equalization techniques if necessary, and so on. A set of parameter values called A-level specifications, such as power levels, bandwidths, and modulation indices, is also established during this initial stage of the design.

The overall goal at this point in the design process is to determine a system topology and the parameter values that will meet performance objectives and also meet the design constraints. As stated earlier, the system performance will be a function of the signal-to-noise ratio (SNR, or equivalently the value of Eb/N0) and the total distortion introduced by all the components in the communication link. The signal-to-noise ratio is established through a process called link budgeting, which for the most part is a power calculation that takes into account such factors as the transmitted power, antenna gains, path losses, power gains, and noise figures of amplifiers and filters.[1] While the link budget is not the primary quantity of interest in simulations, it does establish the range of values of S/N or Eb/N0 over which simulations for performance estimation have to be carried out.

Since it is impossible to build ideal components, practical implementations of components such as amplifiers and filters will exhibit nonideal behavior. As a result, signal distortion will be introduced, which will impact system performance. This is taken into account in the link budget by calculating the performance of the system with ideal components and then including “implementation losses” that account for performance degradation due to the signal distortion induced by nonideal components. The implementation loss is a measure (often an estimate based on prior experience) of how much the Eb/N0 must be increased in order to overcome the effect of the distortion induced by nonideal components. Sometimes the implementation losses are also referred to as communication or distortion parameters. Note that some parameters, such as filter bandwidths, might affect the noise power at various points in the system, and this in turn will impact the link budget as well as the distortion.

The system designer starts with an initial configuration for the system, the A-level specs, and the link budget. The link budget is expressed in a spreadsheet-like format, and the bottom line in the link budget is the net Eb/N0 at a critical point in the system after all the implementation losses have been taken into account. This “critical point” is often the receiver input. The link budget is said to be “closed” or “balanced” if the link has sufficient Eb/N0, with a safe margin, to produce acceptable system performance. There are many different measures of system performance. As examples, analog systems often use mean-square error as a performance measure, while a typical performance measure for digital systems is the bit error rate. At this point in the design process, the performance metric is computed from approximate formulas and not simulated. Since all of the implementation losses have been accounted for in the net Eb/N0, the BER, for example, can be computed using the formula for an ideal system.
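The spreadsheet-like character of the link budget can be illustrated with a highly simplified calculation; every number below is hypothetical and is included only to show how the individual gains and losses combine into a net Eb/N0 and a margin.

```python
# All entries are in decibel units (dBW, dBi, dB, or dB-Hz) and are hypothetical.
link = {
    "transmit power (dBW)":                10.0,
    "transmit antenna gain (dBi)":         30.0,
    "path loss (dB)":                    -196.0,
    "receive antenna gain (dBi)":          35.0,
    "minus noise density, -N0 (dBW/Hz)":  204.0,   # kT at 290 K is about -204 dBW/Hz
    "minus data rate (dB-Hz)":            -60.0,   # 1 Mbps corresponds to 60 dB-Hz
    "implementation losses (dB)":          -2.5,
}

EbN0 = sum(link.values())          # net Eb/N0 after all gains and losses
required_EbN0 = 9.6                # hypothetical requirement for the target BER
margin = EbN0 - required_EbN0

print(f"net Eb/N0 = {EbN0:.1f} dB, required = {required_EbN0:.1f} dB, "
      f"margin = {margin:.1f} dB")
```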

If the link budget does not close or balance, then the A-level specifications, the implementation losses, and even the system configuration are changed and the link budget is recomputed. For example, the bandwidth of one or more filters may be changed, the antenna size (gain) may be increased, and the specification of the noise figure of an amplifier might be lowered. This process is continued until the budget is balanced or closed with an adequate margin.

Based on the initial system configuration, the A-level specifications and the link budget, which is now assumed to be closed, it should be possible to construct a simulation model that can be used to verify the link budget and refine the design. Performance measures can be estimated accurately and performance degradations due to nonideal implementations can be verified through detailed simulations. If the allocations in the link budget are verified through simulation and the link budget is still closed, the design process then proceeds to the next stage, which involves the detailed design and implementation of subsystems and components. If the link budget does not close, then some of the distortion allocations are changed and the system topology and A-level specs might have to be changed. For example, the coding gain might have to be increased and the specifications on the linearity requirements of an amplifier might be changed. Also if the simulation indicates that the distortion due to a component is less than what was allocated to that component in the link budget, the resulting savings can be applied to relax the requirements for some other component (i.e., more distortion can be tolerated elsewhere in the system). This iterative process continues until the link budget is balanced. A balanced link budget provides the initial specifications for hardware (and software) development.

This initial phase of the design involves a considerable amount of “art,” and it is usually done by someone who has a considerable level of experience in designing communication systems. In most cases the initial design will be based on previous designs for similar systems with minor modifications. In other words, new designs are often evolutionary or incremental in nature.

Implementation and Testing of Key Components

The design for a new communication system will almost always contain some new signal-processing algorithms and new hardware (and software) technologies. With any new technology there is always some risk or uncertainty about performance. If the new technology is introduced in a critical element of a communication system, that component must first be built and tested under realistic operating conditions in order to verify the performance and minimize the risk. Since only a few key components will have been built at this early stage in the design process, it is impossible to test the entire system in hardware. Simulation provides an excellent test environment in this situation, and the use of simulation is much less costly than prototyping an entire system in hardware. All components and signals, up to the input of the component being tested and after its output, are simulated, with measured characteristics of the component being tested inserted into the simulation model for that component. For example, if the component being tested is a new amplifier, its AM-to-AM and AM-to-PM transfer characteristics are measured and the measured characteristics are inserted into a nonlinear model for the amplifier. The entire system is then simulated to verify the resulting performance and the link budget. Once again, if the measured characteristics inserted into the simulation indicate better-than-expected distortion, then the savings are applied elsewhere in the system.
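One common way of casting measured AM-to-AM and AM-to-PM characteristics into a behavioral simulation model is the Saleh form sketched below; the coefficient values are illustrative stand-ins for parameters that would be fitted to the measured amplifier data.

```python
import numpy as np

# Saleh-type behavioral model: the AM-to-AM and AM-to-PM curves are represented
# by rational functions of the input envelope.  The coefficients below are
# illustrative stand-ins for values fitted to measurements.
ALPHA_A, BETA_A = 2.16, 1.15        # AM-to-AM coefficients (illustrative)
ALPHA_P, BETA_P = 4.00, 9.10        # AM-to-PM coefficients (illustrative)

def hpa(x):
    """Apply the nonlinearity to a complex-baseband input signal x."""
    r = np.abs(x)
    theta = np.angle(x)
    gain = ALPHA_A * r / (1.0 + BETA_A * r ** 2)              # output envelope
    phase_shift = ALPHA_P * r ** 2 / (1.0 + BETA_P * r ** 2)  # added phase (rad)
    return gain * np.exp(1j * (theta + phase_shift))

# Example: drive the model with a few QPSK-like samples of varying envelope.
rng = np.random.default_rng(3)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=8)
x = x * rng.uniform(0.2, 1.0, size=8)
print(hpa(x))
```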

If the link budget closes, then the hardware development proceeds to the next critical component. Otherwise, either the component is redesigned, rebuilt, and tested again, or the link budget is modified to take into account additional degradation introduced by the component (beyond what was allocated in the original link budget for the component). This procedure is repeated for other key components.

Completion of the Hardware Prototype and Validation of the Simulation Model

As the procedure described above advances, a hardware prototype of the entire system begins to emerge along with an accompanying simulation model. The simulation model now includes measured characteristics for most of the components in the model. Many of the performance metrics for the entire system can now be measured on the hardware prototype. Parallel simulations are also conducted. Measured performance characteristics can be compared with the simulation results, and vice versa. Simulations provide benchmarks for testing, and test results validate the simulations. The end result of this phase of the design process is a complete prototype of the system, which serves as the basis for developing the production version of the system. In addition, we have a validated simulation model that can be used for end-of-life (EOL) predictions with a high degree of confidence.

End-of-Life Predictions

While the preceding procedure leads to a design that guarantees a given level of performance when the system is deployed, there is another important requirement that must be satisfied for most systems. This is the end-of-life performance. Many communication systems, such as communication satellites and undersea cable systems, are expected to have a long lifespan (usually 10 years or more) over which performance must be guaranteed. It is of course impossible to subject a hardware prototype to an actual lifecycle test, since such a test, if executed in real time, might last many years! While procedures for so-called accelerated life testing have been developed, it is common practice to use simulations as a complementary approach to accelerated life testing.

EOL performance predictions using simulations are accomplished through the use of aging models for the major components in the system. If we have a validated simulation model for the entire system at the beginning of life (BOL) and also have good models for the behavior of components as a function of age, which are somewhat easier to obtain, then the aging models for the components can be substituted into the validated BOL model to arrive at EOL performance metrics for the system.
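
A minimal sketch of this substitution is shown below for a single amplifier. The aging model is assumed to be a simple linear degradation of gain and saturation power per year of service; both the BOL values and the degradation rates are hypothetical.

    % Hypothetical beginning-of-life (BOL) amplifier parameters
    gain_BOL = 20;                 % small-signal gain, dB
    Psat_BOL = 10;                 % output saturation power, dBW

    % Hypothetical aging model: linear degradation per year of service
    years          = 10;           % design lifetime
    gain_loss_rate = 0.05;         % dB of gain lost per year (assumed)
    Psat_loss_rate = 0.1;          % dB of saturation power lost per year (assumed)

    % End-of-life (EOL) parameters to be substituted into the validated BOL model
    gain_EOL = gain_BOL - gain_loss_rate*years;
    Psat_EOL = Psat_BOL - Psat_loss_rate*years;

    fprintf('EOL gain = %.2f dB, EOL Psat = %.1f dBW\n', gain_EOL, Psat_EOL);
    % These EOL values replace the BOL values in the system simulation, and the
    % link budget is then re-verified with the degraded parameters.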

If the predicted EOL performance is satisfactory, and the final EOL link budget is closed with adequate margin, the system design and implementation is complete. Otherwise, the process has to be iterated until convergence is achieved.

A summary of the key steps in the design flow and the role of simulation in communication systems engineering is shown in Figure 1.10.

Figure 1.10. Systems engineering and design flow.

Software Packages for Simulation

Over the past decade a variety of software packages have been developed, and are being widely used, to simulate communication systems at the waveform level. The essential components of a simulation framework for communication systems include a model builder, a model library, a simulation kernel, and a postprocessor. Individual simulation packages differ in the way these components are implemented and in the scope and focus of the model libraries that are provided.

Irrespective of the specific simulation package used, the first step in simulating a communication system consists of building simulation models of the various subsystems that make up the overall system and configuring these subsystems into an end-to-end simulation of the system of interest. Simulation models can be built by writing the appropriate code in a general-purpose programming language or by using a graphical model builder. With graphical model builders, simulation models of subsystems and of the overall communication system are developed using building blocks taken from the model libraries provided with the simulation environment. Icons representing functional blocks such as information sources, encoders, modulators, multiplexers, channel models, noise and interference sources, filters, demodulators, decoders, and demultiplexers are selected from various model libraries. These subsystem icons are then placed on the screen of a PC or workstation, moved to appropriate locations, and “wired” together to create a simulation model in a hierarchical block diagram form. SIMULINK is a relatively simple simulation package that uses the graphical model builder approach.

Models are built either from the top down or from the bottom up, with the top-down view being preferred by systems engineers and the bottom-up approach being the choice of hardware engineers. At the “leaf level,” which is the lowest level in the hierarchy, models can have a number of representations, ranging from floating-point subroutines or procedures in a programming language, such as FORTRAN, C, or C++, to bit-level implementations of subsystem models in VHDL.

As an alternative to using a graphical block diagram editor for model building, one could use an intermediate (pseudo) language such as the MATLAB command language. Those producing simulations to guide the development of complex and expensive communication systems generally prefer the block diagram approach and graphical model builders, because the block diagram is a natural representation of a communication system and provides the systems engineer with a user-friendly environment. Despite the advantages of the block diagram approach, we will use MATLAB extensively in the work to follow for the reasons discussed in Section 1.8.

The level of effort expended in building a simulation model of a system is greatly reduced by the availability of model libraries that contain an extensive set of well-documented and well-tested building blocks. Many of the commercial simulation packages for communications systems available today have extensive model libraries available.

After the simulation model is developed, simulation parameters (such as sampling rates, seeds for the random number generators, and simulation length) and design parameters (such as filter bandwidths, code rates, and signal-to-noise ratios) are specified. The simulation is then executed. Linking all the models together, generating executable code, starting the simulation, saving sampled values of the waveforms generated by the simulation, and monitoring the completion of the simulation are functions that are usually performed by the simulation kernel/manager.

After the completion of the simulation, performance measures such as bit error rates and signal-to-noise ratios are computed from the waveforms generated by the simulation, and the results are displayed as a function of the design parameters using a simulation “postprocessor”. Spectral plots, waveform plots, scatter diagrams, and eye diagrams are some of the commonly used visual aids both for viewing the simulation results and for debugging the simulation. As an additional aid in viewing results and debugging simulations, some simulation environments also provide the ability to view simulation results interactively as they are generated during a simulation, rather than viewing results only after the simulation has been completed. This is especially helpful in simulations requiring lengthy run times.

Just as a well-stocked model library reduces the effort involved in creating a simulation model, a well-developed postprocessor with good interactive graphical capabilities can significantly reduce the effort involved in analyzing and displaying the simulation results. A rich set of analysis and estimation algorithms for error rates, power spectral densities, probability density functions, statistical parameters, and a flexible set of display routines are essential components of a good postprocessor.

Different simulation kernels or frameworks provide different sampling and simulation techniques. These techniques can be classified as time driven (single-rate, multirate, or variable-rate sampling), stream driven, event driven, or mixed. In the simplest case of a time-driven simulation there is a single simulation clock, and each functional block in the simulation model is executed once every “tick” of the simulation clock. The simulation clock is then advanced by a fixed (constant) increment equal to the reciprocal of the sampling frequency. All functional blocks in the model are then invoked again so that each model can update its state to correspond to the new value of the simulation clock. Simulations of this type are structured as a single “do loop” or “for loop” in which each tick of the simulation clock increments the loop index by one.
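
The structure of such a single-rate, time-driven simulation can be sketched as a single loop, as shown below. The source and filter shown are placeholder blocks used only to illustrate the loop structure; a real simulation would, of course, contain many more functional blocks.

    % Minimal sketch of a single-rate, time-driven simulation loop
    fs = 1e4;                 % sampling frequency, Hz
    dt = 1/fs;                % fixed clock increment
    N  = 1000;                % number of clock ticks to simulate

    b = 0.1; a = [1 -0.9];    % hypothetical first-order IIR filter (placeholder block)
    state = 0;                % filter state
    y = zeros(1, N);          % storage for the output waveform

    t = 0;                    % simulation clock
    for k = 1:N
        x = randn;                                  % source block: one new sample per tick
        [y(k), state] = filter(b, a, x, state);     % filter block updates its state
        t = t + dt;                                 % advance the clock by a fixed increment
    end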

Event-driven simulations, on the other hand, advance the clock by an arbitrary amount to the scheduled time of the next event of interest, and each functional block in the system updates its state corresponding to the value of the new simulation time. Typically, only a few blocks need to be activated to update their internal states, and no processing takes place during the “inter-event” time. Simulations of queueing systems are typically developed in this manner.
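
The following MATLAB sketch of a single-server queue illustrates the event-driven approach: the simulation clock jumps directly from one event time to the next, and only the affected state is updated. The exponential arrival and service rates are hypothetical, and no statistics are gathered; the sketch shows only the next-event clock mechanism.

    % Minimal event-driven simulation of a single-server queue
    lambda = 0.8;  mu = 1.0;            % hypothetical arrival and service rates
    t = 0;  t_end = 1000;               % simulation clock and stop time
    n = 0;                              % number of customers in the system
    t_arrival = -log(rand)/lambda;      % time of the next arrival
    t_depart  = inf;                    % time of the next departure (none scheduled)

    while t < t_end
        if t_arrival < t_depart                      % next event is an arrival
            t = t_arrival;                           % jump the clock to the event time
            n = n + 1;
            t_arrival = t - log(rand)/lambda;        % schedule the next arrival
            if n == 1
                t_depart = t - log(rand)/mu;         % server was idle; schedule a departure
            end
        else                                         % next event is a departure
            t = t_depart;
            n = n - 1;
            if n > 0
                t_depart = t - log(rand)/mu;         % next customer begins service
            else
                t_depart = inf;                      % server is idle again
            end
        end
    end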

Event-driven and variable-step-size simulations are computationally more efficient than time-driven simulations. However, they might require interpolation and resampling in some cases, and they carry an overhead associated with event scheduling. For simulations of communication systems, time-driven simulation with either single-rate or multirate sampling is most commonly used. Multirate sampling is called for in simulations of systems having signals with widely varying bandwidths. A spread-spectrum system, in which a wideband spreading waveform is combined with a much narrower-band data signal, is an example of a system in which the use of multirate sampling can greatly reduce the required simulation run time.

Digital signal-processing algorithms play an important role both in simulation and in the implementation of communication systems. Simulation algorithms used for functions such as filtering and equalization can actually be used to implement these functions in hardware or in software running on DSP processors. Hence it is often of interest to include implementation issues, such as bit widths and resource sharing, in the simulation model and to move seamlessly from simulation to implementation. For hardware implementation, this is accomplished using hardware description languages such as VHDL to provide the interface between the system-level simulation framework and the hardware design tools. For software implementation of system components, the simulation framework can translate the simulation algorithm into the assembly language code required for a target DSP processor. These links to implementation are becoming increasingly important as more and more functions in communication receivers are implemented in digital hardware or as embedded software.
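
One implementation issue that is easily explored at the simulation level is the effect of finite coefficient word length. The sketch below quantizes the coefficients of a lowpass FIR filter with a simple uniform quantizer and compares the resulting frequency response with the floating-point reference. It assumes the fir1 and freqz functions from the Signal Processing Toolbox, and the filter order, cutoff frequency, and word length are arbitrary choices, not a recommendation for any particular design flow.

    % Effect of coefficient word length on a hypothetical lowpass FIR filter
    h = fir1(31, 0.25);                    % floating-point (reference) coefficients
    nbits = 8;                             % assumed coefficient word length
    qstep = max(abs(h))/(2^(nbits-1));     % uniform quantization step
    hq = qstep*round(h/qstep);             % quantized coefficients

    % Compare the frequency responses of the two implementations
    [H,  w] = freqz(h,  1, 512);
    [Hq, ~] = freqz(hq, 1, 512);
    plot(w/pi, 20*log10(abs(H)), w/pi, 20*log10(abs(Hq)));
    xlabel('Normalized frequency'); ylabel('Magnitude (dB)');
    legend('Floating point', sprintf('%d-bit coefficients', nbits));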

A Word of Warning

We should never think of simulation as a replacement for traditional analysis or hardware measurements. Simulation is most powerful when used hand in hand with analysis and measurement. Quite often, the insights gained through repeated simulations allow the critical parameters in a system to be identified and the system model to be simplified. The resulting simplifications often allow additional analysis to be performed.

Some level of analysis is always required for solving system-level problems. As an example, one must understand the basic dependence of performance parameters, such as the bit error rate, mean-square error at the demodulator output, or the signal-to-noise ratio at the receiver input, on system parameters, such as transmitted power and bandwidth, modulation format, or code rate, in order to ensure that the system is performing properly and that the simulation results are reasonable. In other words, as parameters are varied within a simulation, one must ensure that the observed results of these changes are reasonable and consistent with known theory. These “sanity checks” are important for validating the simulation and almost always require some level of analytical effort.
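
As a concrete example of such a sanity check, a Monte Carlo estimate of the BER for BPSK in additive white Gaussian noise can be compared with the well-known theoretical result Q(sqrt(2Eb/N0)). The MATLAB sketch below assumes ideal synchronization with one sample per bit; the range of Eb/N0 values and the number of bits simulated per point are arbitrary.

    % Sanity check: Monte Carlo BER for BPSK in AWGN versus theory (illustrative parameters)
    EbN0dB  = 0:2:8;                     % Eb/N0 values to test, dB
    N       = 1e5;                       % bits simulated per point
    ber_sim = zeros(size(EbN0dB));
    for k = 1:length(EbN0dB)
        EbN0  = 10^(EbN0dB(k)/10);
        bits  = rand(1, N) > 0.5;        % random data
        x     = 2*bits - 1;              % BPSK mapping to +/-1
        sigma = sqrt(1/(2*EbN0));        % noise standard deviation for unit-energy bits
        r     = x + sigma*randn(1, N);   % received samples
        ber_sim(k) = mean((r > 0) ~= bits);   % hard-decision error rate
    end
    ber_theory = 0.5*erfc(sqrt(10.^(EbN0dB/10)));   % Q(sqrt(2*Eb/N0))
    semilogy(EbN0dB, ber_sim, 'o', EbN0dB, ber_theory, '-');
    xlabel('E_b/N_0 (dB)'); ylabel('Bit error rate');
    legend('Simulation', 'Theory');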

The Use of MATLAB

MATLAB will be used throughout this book for demonstrating concepts, for problem solving, and for performing example simulations. As mentioned in the preface, there are a number of reasons for the choice of MATLAB. First, MATLAB is widely used in the engineering community. MATLAB combines excellent computational capabilities with excellent and easy-to-use graphical capabilities. MATLAB contains a rich library of preprogrammed functions (m-files) for generating, analyzing, processing, and displaying signals. Add-on libraries (toolboxes) allow the basic MATLAB library to be supplemented with m-files important to specific application areas. It is easy for the MATLAB user to generate new m-files for user-specific applications. In addition, MATLAB code is very concise, making it possible to express complex signal-processing (simulation) algorithms using very few lines of code.
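
As a simple illustration of this conciseness, the following lines generate and display a noisy QPSK signal constellation; the symbol count and noise level are arbitrary.

    % A noisy QPSK constellation in a few lines of MATLAB (illustrative values)
    N = 2000;                                         % number of symbols
    s = exp(1j*(pi/4 + (pi/2)*floor(4*rand(1,N))));   % QPSK symbols on the unit circle
    r = s + 0.15*(randn(1,N) + 1j*randn(1,N));        % add complex Gaussian noise
    plot(real(r), imag(r), '.'); axis square; grid on;
    xlabel('In-phase'); ylabel('Quadrature');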

Most of the examples, demonstrations, and problems used in this book can be solved using the Student Edition of MATLAB. Occasionally, a restriction present in the MATLAB Student Edition may make it necessary to use the professional version of MATLAB.

Outline of the Book

This book is divided into three parts. The first part, “Introduction,” consists of two chapters that explore simulation and modeling philosophy in a very broad context. The second part, “Fundamental Concepts and Techniques,” covers the basic techniques used in the simulation of almost all communication systems. These include the fundamental concepts of sampling and discrete system theory, filters and filter models, the representation of signals and systems in simulations, noise generation and modeling, the development of graphical displays, and Monte Carlo simulation techniques. A number of simple case studies are included in Part II. The first case study, devoted to the acquisition behavior of phase-locked loops, allows us to illustrate simple simulation techniques and to identify the sources of error in simulations without having to consider the complicating effects of noise. The second case study considers the simulation of a wireless communications system. This case study comes after our study of noise, and therefore the effects of noise on the communications system are considered. After a careful study of Part II, one should be able to simulate a moderately complex digital communications system operating in a Gaussian noise environment. While simulation, strictly speaking, may not always be necessary for determining the performance of these systems, important insights can often be gained by observing the waveforms present at various points in the system. Simulation of simpler systems, in which the simulation results can easily be understood and verified, often provides a starting point for developing simulations of more complex systems.

The third part of this book, “Advanced Modeling and Simulation Techniques,” treats many of the concepts required for the development of simulations of modern systems. In Part III, simulation strategies for nonlinear and time-varying systems are explored. Attention is then turned to the important problem of modeling time-varying channels, such as those encountered in mobile wireless communication systems. Both waveform-based models and discrete channel models based on Markov processes are considered. Finally, variance reduction techniques are briefly considered. The general term variance reduction techniques encompasses a number of strategies that allow knowledge of system details to be used in a way that reduces the time required to execute a simulation with a given level of accuracy.

Further Reading

Very few books have been written that focus specifically on the simulation of communication systems. Two books falling into this category are

  • M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communication Systems, 2nd ed., New York: Kluwer Academic/Plenum Publishers, 2000.

  • F. M. Gardner and J. D. Baker, Simulation Techniques, New York: Wiley, 1997.

However, a number of books cover general topics relevant to our study. Several that are cited from time-to-time in this book include

  • R. Y. Rubinstein, Simulation and the Monte Carlo Method, New York: Wiley, 1981.

  • B. D. Ripley, Stochastic Simulation, New York: Wiley, 1987.

  • S. M. Ross, A Course in Simulation, New York: Macmillan, 1990.

  • P. Bratley, B. L. Fox, and L. E. Schrage, A Guide to Simulation, 2nd ed., New York: Springer-Verlag, 1987.

As mentioned earlier in this chapter, MATLAB is used throughout this book to illustrate methodology and algorithms. A number of complete simulations are also included. It is therefore important to have at least a basic familiarity with MATLAB. While the MATLAB manuals, together with the online help, provide descriptions of the techniques and routines used in this book, the following references have been useful to the authors:

  • D. Hanselman and B. Littlefield, Mastering MATLAB 5: A Comprehensive Tutorial and Reference, Upper Saddle River, NJ: Prentice-Hall, 1998.

  • A. Biran and M. Breiner, MATLAB for Engineers, Reading, MA: Addison-Wesley, 1995.

  • G. J. Borse, Numerical Methods With MATLAB, Boston, MA: PWS Publishing Company, 1997.

The first of these books, as the title implies, is a good tutorial on MATLAB and is a useful reference for the beginning MATLAB user. The other two books are more oriented toward applications and algorithms. The last cited book (Borse) is more advanced and contains a number of DSP applications and techniques that are useful in the development of simulations.



[1] A link budget typically takes the form of a spreadsheet in which all of the system gains and losses (both signal and noise), including propagation loss, antenna gains, amplifier noise, cabling losses, and other effects, are identified and numerically defined. These are usually expressed in dB. After the spreadsheet is complete, the SNR necessary at the receiver input for the required level of performance is determined. Using the spreadsheet, one can then work backward and determine the transmitter power required to achieve the required performance.
