7
CHARACTERIZING RANDOM PROCESSES

7.1 INTRODUCTION

There are many ways a random process can be characterized, and the characterization is usually linked to the application or situation in which it arises and to the information required. The most common ways of characterizing a random process are via the evolution with time of the probability mass/density function, an autocorrelation function, and a power spectral density function. This chapter provides an introduction to such characterization, and this is followed by associated material including correlation, the average power in a random process, stationarity, Cramer’s representation of random processes, and the state space characterization of random processes. One-dimensional random processes are assumed.

7.1.1 Notation for One-Dimensional Random Processes

Consistent with the notation introduced in Chapter 5 and used in Chapter 6, the following notation is used for a random process X:

(7.1) images

For notational convenience, a random process is written as X(Ω, t) with the interpretation

(7.2) images

The probability of a specific experimental outcome is governed by a probability mass function or a probability density function:

(7.3) images

In this chapter, the signals are assumed to be one-dimensional as illustrated in Figure 7.1.

c7-fig-0001

Figure 7.1 Illustration of signals defined by a random process for the one-dimensional case and the values determined by the random variable defined by the random process at the time t0.

7.1.1.1 Random Variables Defined by a Random Process

For a random process that defines one-dimensional signals, the values defined by the signals of the random process, at a specific time t0, define a random variable images:

(7.4) images

By varying the time from t0 to t1 to t2, etc., an infinite sequence of random variables can be defined.

7.1.2 Associated Random Processes

Consider a one-dimensional random process:

(7.5) images

The following associated random processes, for example, can be defined:

(7.6) images

Here, X1 is a random process defined by the magnitude squared of the signals arising from X. X2 is a random process, for τ fixed, defined by the autocorrelation function of the signals arising from X. X3 is a random process defined by the Fourier transform of the signals arising from X, and X4 is a random process defined by the power spectral density of the signals arising from X. Signals from X, X1, and X3 are illustrated in Figure 7.2.

c7-fig-0002

Figure 7.2 Illustration of the ith signals defined by a random process X and the associated random processes X1 and X3.

7.2 TIME EVOLUTION OF PMF OR PDF

For all values of time where the signals defined by a random process are valid, a random variable is defined that has a probability mass function, a probability density function, or outcomes that form both a countable set and an uncountable set (the mixed random variable case). The first two cases are far more common than the mixed case, and they are the ones considered here. The probability mass function, or probability density function, changes with time, and its evolution provides useful information about a random process.

For a discrete state random process, the time-evolving probability mass function is denoted pX(t)(x) and is defined according to

(7.7) images

where, for a fixed time t, the random variable defined, consistent with the notation above, is denoted Xt.

For a continuous state random process, the time-evolving probability density function is denoted fX(t)(x) and is defined according to

(7.8) images

for dx sufficiently small.

This evolution is illustrated in Figure 7.3 for the case of a discrete time–discrete state random process and in Figure 7.4 for the case of a continuous time–continuous state random process.

c7-fig-0003

Figure 7.3 Illustration of the evolution of probability mass function for a discrete time–discrete state random process.

c7-fig-0004

Figure 7.4 Illustration of the evolution of the probability density function for a continuous time–continuous state random process.
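To make the evolving probability mass function concrete, the following sketch propagates the probability mass function of a simple discrete time–discrete state random process, a symmetric random walk starting at zero. The walk, its step probabilities, and the number of steps are assumptions chosen purely for illustration.

# Sketch: evolution of the probability mass function of a symmetric random
# walk X(n) = X(n-1) + S(n), where S(n) = +1 or -1 with probability 1/2 each.
import numpy as np

n_steps = 4
levels = np.arange(-n_steps, n_steps + 1)       # reachable states after n_steps steps
pmf = np.zeros(len(levels))
pmf[levels == 0] = 1.0                          # X(0) = 0 with probability one

for n in range(1, n_steps + 1):
    new_pmf = np.zeros_like(pmf)
    new_pmf[1:] += 0.5 * pmf[:-1]               # a step of +1
    new_pmf[:-1] += 0.5 * pmf[1:]               # a step of -1
    pmf = new_pmf
    print(f"t{n}:", dict(zip(levels[pmf > 0], np.round(pmf[pmf > 0], 4))))

Each printed line is the probability mass function at one time; its support and shape change from one time to the next, which is the type of evolution illustrated in Figure 7.3.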

7.3 FIRST-, SECOND-, AND HIGHER-ORDER CHARACTERIZATION

As a random process, at a set time, defines a random variable, it follows that random variable theory finds widespread application in characterizing random processes. The following first-, second-, and higher-order characterizations are fundamental.

7.3.1 First-Order Characterization

Consider a random variable images defined by a random process at a fixed time t0. The following parameters and functions can be defined: first, the mean and variance:

(7.9) images

Second, the cumulative distribution function:

(7.10) images

Third, the probability mass function for the case where images is a discrete random variable and the probability density function for the case where images is a continuous random variable:

(7.11) images

Here, dx is assumed to be sufficiently small.

7.3.2 Second-Order Characterization

Consider the two random variables images and images defined by the fixed times t1 and t2. Apart from the mean, variance, cumulative distribution function, probability mass function, or probability density function, of the individual random variables, the following functions can be defined. First, the joint cumulative distribution function:

(7.12) images

Second, the joint probability mass function for the case where images and images are discrete random variables and the joint probability density function for the case where images and images are continuous random variables:

(7.13) images

Here, dx1 and dx2 are assumed to be sufficiently small.

7.3.3 Nth-Order Characterization

Consider the N random variables images defined by the fixed times t1, …, tN. Apart from the mean, variance, cumulative distribution function, probability mass/density function of the individual random variables or the joint cumulative distribution and joint probability mass/density function of pairs of random variables, the following functions can be defined. First, the joint cumulative distribution function of the N random variables:

(7.14) images

The constraints, on a random process, defined by the joint cumulative distribution function are illustrated in Figure 7.5.

c7-fig-0005

Figure 7.5 Illustration of the constraints on a random process as defined by the joint cumulative distribution function.

Second, the joint probability mass function for the case where images are discrete random variables and the joint probability density function for the case where images are continuous random variables, respectively, are defined according to

(7.15) images
(7.16) images

Here, dx1, …, dxN are assumed to be sufficiently small.

As a random process defined on the real line defines an uncountable number of random variables, it follows that a complete specification of such a random process, in terms of a joint probability density function, is not possible.

7.3.4 Mean and Variance and Average Power

Based on a first-order characterization, the following functions give useful information about a random process X.

7.3.4.1 Instantaneous Power

The average power, as defined by Equation 7.21, is consistent with the following definition of instantaneous power.

7.3.4.2 Example

Consider a random process defined as the sum of two random processes:

(7.23) images

The instantaneous power of X is

(7.24) images

where images for images is the instantaneous power in Xi. Thus, when X1 and X2 are uncorrelated, the individual powers add; when they are correlated, the individual powers do not, in general, add.

Consider the case of

(7.25) images
(7.26) images

where images and images. It then follows that

(7.27) images

assuming A1 and A2 are independent of Φ1 and Φ2. For the case where one or both of E[A1A2] and images are zero, the instantaneous power of X equals the sum of the instantaneous powers of X1 and X2.
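A Monte Carlo check of this example is straightforward. The sinusoid-with-random-amplitude-and-phase forms used below are assumptions consistent with the description above (Equations 7.25 and 7.26 are not reproduced here), and the amplitude distributions and parameter values are illustrative only.

# Monte Carlo check that uncorrelated components give additive instantaneous
# power: E[(X1 + X2)^2] is close to E[X1^2] + E[X2^2] when the cross term vanishes.
import numpy as np

rng = np.random.default_rng(0)
n, t, f0 = 100_000, 0.3, 1.0
A1 = rng.uniform(0.5, 1.5, n)                   # random amplitudes (assumed distribution)
A2 = rng.uniform(0.5, 1.5, n)
phi1 = rng.uniform(0.0, 2 * np.pi, n)           # independent uniform phases, so the
phi2 = rng.uniform(0.0, 2 * np.pi, n)           # expected cross term is zero

X1 = A1 * np.cos(2 * np.pi * f0 * t + phi1)
X2 = A2 * np.cos(2 * np.pi * f0 * t + phi2)

print("E[(X1+X2)^2]      :", np.mean((X1 + X2) ** 2))
print("E[X1^2] + E[X2^2] :", np.mean(X1 ** 2) + np.mean(X2 ** 2))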

7.3.5 Transient, Steady-State, Periodic, and Aperiodic Random Processes

The definitions for the mean, and mean squared value, of a random process allow the following classification of random processes.

7.4 AUTOCORRELATION AND POWER SPECTRAL DENSITY

The two most widely used approaches for characterizing a random process are via an autocorrelation function and via a power spectral density function. The transition from the signal definitions for the autocorrelation function and the power spectral density function to equivalent definitions for a random process is via the expectation operator. A review of the signal definitions for these functions is useful.

7.4.1 Definitions for Individual Signals

Consider a random process images. Each signal in the signal sample space SX has an autocorrelation, a time-averaged autocorrelation, and a power spectral density consistent with the definitions in Chapter 3. For the ωth signal and the interval [0, T], these definitions are

(7.28) images
(7.29) images
(7.30) images
(7.31) images

where X(ω, T, f) is the Fourier transform of X(ω, t) evaluated over the interval [0, T].

The time-averaged autocorrelation–power spectral density function relationships are

(7.32) images

Sufficient conditions for the existence of the individual power spectral density and time-averaged autocorrelation functions have been detailed in Chapter 3, and the relationships between G(ω, T, f) and images are valid when images and X(ω, t) is piecewise differentiable on [0, T].
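These finite-interval definitions can be evaluated numerically for a single signal. In the sketch below the finite-interval Fourier transform is approximated with an FFT and the power spectral density is formed as |X(ω, T, f)|²/T; this periodogram-style normalization, the test signal, and the interval are assumptions made for illustration. The two printed numbers agree, consistent with Parseval's relationship between the time-averaged power and the integral of the power spectral density over frequency.

# Sketch: power spectral density of one signal on [0, T] from its finite-interval
# Fourier transform, G(f) = |X(T, f)|^2 / T, approximated with an FFT.
import numpy as np

T, n = 4.0, 4096
dt = T / n
t = np.arange(n) * dt
x = np.cos(2 * np.pi * 3.0 * t) + 0.5 * np.cos(2 * np.pi * 7.0 * t)   # assumed test signal

Xf = dt * np.fft.fft(x)                         # approximates the integral transform
G = np.abs(Xf) ** 2 / T                         # power spectral density on [0, T]

avg_power_time = np.mean(x ** 2)                # (1/T) times the integral of x(t)^2
avg_power_freq = np.sum(G) * (1.0 / T)          # integral of G(f) df with df = 1/T
print(avg_power_time, avg_power_freq)           # both close to 0.5 + 0.125 = 0.625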

7.4.1.1 Associated Autocorrelation and PSD Random Processes

The collection of autocorrelation and power spectral density functions defined by a random process X define the following associated random processes:

(7.33) images
(7.34) images
(7.35) images
(7.36) images

Convenient notation for these associated random processes is R(Ω, T, t1, t2), R(Ω, T, t, τ), images, and G(Ω, T, f).

7.4.2 Definitions for Autocorrelation and PSD

The following definitions can be made by considering the expectation over the appropriate signal sample space.

7.4.3 Simplified Notation: Countable Signal Sample Space Case

For the case of a countable space of experimental outcomes

(7.42) images

where the probability of the ith outcome is images, the notation for a random process X, as follows, is useful:

(7.43) images

The following simplified notation holds for the autocorrelation and power spectral density functions: for the ith signal,

(7.44) images
(7.45) images
(7.46) images
(7.47) images

and the time-averaged autocorrelation–power spectral density function relationships are

(7.48) images

7.4.3.1 Definitions

The following definitions for functions that characterize a random process can then be made.

7.4.4 Notation: Vector Case

In many cases, the experiment underpinning the random process images defines N random variables images where

(7.57) images

The first definition is for the discrete random variable case; the second definition is for the continuous random variable case. For the continuous random variable case, the following definitions apply.

7.4.5 Infinite Interval Case

The time-averaged autocorrelation function, and the power spectral density function, of individual signals from a random process can be defined on the infinite interval images as a limit according to

(7.60) images
(7.61) images

The expectation over the ensemble of signals leads to the following definitions.

7.4.6 Existence: Finite Interval

7.4.7 Existence: Infinite Interval

7.4.7.1 Power Spectral Density: Infinite Interval

7.4.8 Power Spectral Density–Autocorrelation Relationship

7.4.8.1 Finite Interval

The relationship between the power spectral density function and the time-averaged autocorrelation function is via the Fourier transform according to the following theorem.

7.4.8.2 Infinite Interval Case: Wiener–Khintchine Relationships

Additional restrictions are required for the power spectral density–autocorrelation function relationships to hold on the infinite interval.

7.4.8.3 Impulsive Case: Formal Definition of Wiener–Khintchine

When the signals defined by a random process contain periodic components, the power spectral density G(T, f) becomes impulsive, at specific values of f, as images, and it is not possible to interchange the order of limit and integration in the following equation:

(7.78) images

For this case, a spectral distribution function is defined.

7.4.9 Notes on Spectral Characterization

The most widely used approach for characterizing random phenomena in engineering and science is through the use of the power spectral density function. Further, it is common to characterize stationary random processes according to the nature of their spectrum and without reference to the underlying time domain signals or underlying experiments. Examples include white noise with a power spectral density specified as

(7.81) images

or 1/f noise random processes with a power spectral density specified as

(7.82) images

Both definitions, however, are problematic as they imply infinite power random processes.
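To see why infinite power is implied, note that the average power equals the integral of the power spectral density over all frequencies. For ideal white noise with a constant density, say G(f) = Go for all f (an assumed notation), the integral of Go over the infinite frequency interval diverges. For 1/f noise with G(f) = c/|f|, the integral of c/f over (0, f1] diverges at the origin and the integral over [f1, ∞) diverges logarithmically, so the implied power is again infinite. In practice, such models are therefore applied only over a restricted frequency band.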

7.5 CORRELATION

For notational convenience, the argument T is dropped from the autocorrelation functions and a subscript of XX for a random process X is added, that is, R(T, t1, t2) is written as RXX(t1, t2).

7.5.1 Correlation Coefficient

A correlation coefficient can be defined, as in Section 4.11.3, for any two random variables. Accordingly, the following definition can be made.

7.5.1.1 Correlation Time

It can be useful to have a measure of the time over which a random process is correlated. One approach is to determine the time τ for the correlation coefficient to change from unity at the times (t, t) to a predefined level at the times images.

7.5.2 Expected Change in a Specified Interval

Another useful characteristic of a random process is its expected change over a set interval.

7.5.2.1 Application

For a set resolution in amplitude Δx and for a given random process, it is possible to find the maximum time resolution Δt such that

(7.91) images

This information can be used to set the sampling time required to ascertain, for example, the first passage time or maximum level, consistent with an amplitude resolution of Δx.
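As a sketch of this application, the following Monte Carlo search assumes the criterion of Equation 7.91 has the form E[|X(t + Δt) − X(t)|] ≤ Δx (an assumption, as the equation is not reproduced here) and uses a random-phase sinusoid as an illustrative random process.

# Sketch: find the largest sampling time dt such that the expected change of the
# process over dt stays below an amplitude resolution dx (assumed criterion).
import numpy as np

rng = np.random.default_rng(1)
f0, dx, t = 1.0, 0.05, 0.0
phi = rng.uniform(0.0, 2 * np.pi, 200_000)      # random phase of the test process

def expected_change(dt):
    x_now = np.cos(2 * np.pi * f0 * t + phi)
    x_next = np.cos(2 * np.pi * f0 * (t + dt) + phi)
    return np.mean(np.abs(x_next - x_now))      # Monte Carlo estimate of E[|X(t+dt)-X(t)|]

candidates = np.linspace(1e-4, 0.1, 400)        # candidate sampling times
ok = [dt for dt in candidates if expected_change(dt) <= dx]
print("largest dt meeting the amplitude resolution:", max(ok) if ok else None)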

7.6 NOTES ON AVERAGE POWER AND AVERAGE ENERGY

When an orthonormal basis set is used for signal decomposition, the average power, or average energy, in a random process is given by the weighted average of the powers, or energies, in the individual component signals.

7.6.1 Power and Energy: Uncorrelated Coefficients

Orthonormality guarantees that the average power, or average energy, on an interval is the summation of the powers in the individual signal components. It is of interest whether this result also holds for the case where the waveforms in a decomposition are not necessarily orthonormal or orthogonal.

7.6.2 Sufficient Conditions for Uncorrelated Coefficients

Consider a sinusoidal basis set images for the interval [0, T]. Assume a random process images which is such that each defined signal has a Fourier series decomposition on the interval [0, T]:

(7.101) images

The kth coefficients define a random variable Ck with outcomes images. The uncorrelatedness of the coefficients Ci and Ck for images, that is, images, as images depends on the autocorrelation function, images, and on the interval [0, T].

7.6.2.1 Notes

For t fixed the region of integration for the integral in Equation 7.102 is illustrated in Figure 7.6.

c7-fig-0006

Figure 7.6 Region where the autocorrelation function RXX (t + τ, t) is nonzero assuming the interval [0,T].

The random variables Ci and Ck, in general, vary with T. However, the result of the convergence of images to zero, images, is guaranteed when the autocorrelation function is integrable on the infinite interval such that Equation 7.102 holds.

Writing the autocorrelation function in terms of the correlation function (see Eq. 7.83) according to

(7.103) images

it follows that sufficient conditions for Equation 7.102 to hold are as follows: images for all t, boundedness of the variance, that is,

(7.104) images

and for the random process to be increasingly uncorrelated at the times images and t, as images, consistent with the existence of constants images, τo independent of t, such that for all images (see Fig. 7.6), it is the case that

(7.105) images

The condition specified by Equation 7.102 may not hold, for example, when images is periodic with τ for t fixed.

Stationarity allows a precise statement, for the finite interval and for the sinusoidal basis set case, of when the coefficients Ci and Ck are uncorrelated for images.

7.6.2.2 Conditions for Correlated Coefficients to be Negligible

Consider a basis set {b1, b2, …} for the interval [α, β], where the basis signals do not necessarily form an orthogonal set. With such a basis set, each signal in the signal sample space of a random process images has the decomposition

(7.106) images

and the kth coefficients define a random variable Ck with outcomes images. Consistent with Equation 7.95, the average power, or average energy, in the random process is

(7.107) images

The average power, or average energy, depends on the nature of images, images, and it is useful to define the following correlation matrix:

(7.108) images

7.6.2.3 Orthonormal Basis Set and Uncorrelated Coefficients

It has been shown (Theorems 7.9 and 7.10) that orthogonality of the basis signals and uncorrelatedness of the basis signal coefficients are sufficient conditions for the average power of a random process to equal the summation of the average power of the individual signal components. It is of interest whether orthogonality of the basis signals is linked to the uncorrelatedness of the basis signal coefficients.

Consider an arbitrary orthonormal basis set {b1, …} and the decomposition for the ith signal defined by the random process images, on the interval [0, T], according to

(7.111) images

The following theorem states sufficient conditions on the basis functions for uncorrelated coefficients.

7.6.2.4 Karhunen–Loeve Decomposition

For a set random process, with RXX(t, λ) known, solving Equation 7.112 for the basis functions results in a Karhunen–Loeve basis set.

7.7 CLASSIFICATION: STATIONARITY VS NON-STATIONARITY

A widely used classification of random phenomena is based on the concept of stationarity, and two definitions are important: strict-sense stationarity and wide-sense stationarity. The latter is widely used.

7.7.1 Definitions

To define strict-sense stationarity, first, consider two random processes defined on a sample space S and on the infinite interval images:

(7.114) images

Each signal defined by V is a time-shifted version of the corresponding signal from X.

7.7.1.1 Finite Interval

Wide-sense stationarity for the interval [0, T] implies

(7.120) images

7.7.1.2 Stationary and Nonstationary Random Processes

Most random phenomena that commence at a set time and exhibit an initial transient response are nonstationary. Many random phenomena that exhibit steady-state behavior after the transient period will exhibit characteristics consistent with stationarity. A random walk, for example, is clearly nonstationary.

7.7.2 Examples of Stationary Random Processes

The following subsections detail random processes that are wide-sense stationary:

7.7.2.1 Example 1

Consider a random process images defined by a sinusoid with random phase:

(7.121) images
(7.122) images

which has the following properties:

(7.123) images
(7.124) images

and is wide-sense stationary.
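These properties can be checked by simulation. The sketch below assumes the form X(Ω, t) = A cos(2πfo t + Θ) with Θ uniform on [0, 2π) (Equations 7.121 to 7.124 are not reproduced here, so this form and the parameter values are assumptions) and estimates the ensemble mean and autocorrelation at several starting times; both are independent of the starting time, consistent with wide-sense stationarity.

# Monte Carlo check of wide-sense stationarity for a sinusoid with uniform random phase.
import numpy as np

rng = np.random.default_rng(2)
A, f0, tau = 1.0, 1.0, 0.2
theta = rng.uniform(0.0, 2 * np.pi, 500_000)

for t1 in (0.0, 0.37, 1.3):
    x_t1 = A * np.cos(2 * np.pi * f0 * t1 + theta)
    x_t2 = A * np.cos(2 * np.pi * f0 * (t1 + tau) + theta)
    print(f"t1 = {t1:4.2f}  mean = {np.mean(x_t1):+.4f}  "
          f"R(t1 + tau, t1) = {np.mean(x_t1 * x_t2):.4f}")

print("theory: mean = 0, R =", 0.5 * A ** 2 * np.cos(2 * np.pi * f0 * tau))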

7.7.2.2 Example 2

Consider the random process defined by the sum of two sinusoids of the same frequency but with random amplitudes:

(7.125) images

where images is a pair of random variables whose outcomes are governed by the joint probability density function images.

The mean and autocorrelation functions are

(7.126) images
(7.127) images

Multiplying terms yields

(7.128) images

as

(7.129) images

Thus,

(7.130) images

and it then follows that the random process is wide-sense stationary if Ω1 and Ω2 have zero mean, have identical variances, and are uncorrelated, that is, images, images, and images. Here, the result

(7.131) images

has been used. With the stated assumptions,

(7.132) images

7.7.2.3 Example 3

Consider the random process defined by a sinusoid with random frequency variations around a set frequency and with random phase:

(7.133) images

where images is a pair of independent random variables with respective density functions images and images.

The mean of this random process is

(7.134) images

For the case where images, it follows that images as required by stationarity.

The autocorrelation function, by definition, is

(7.135) images

As images, it follows that

(7.136) images

For the case where Ω2 has a uniform distribution on images, it follows that

(7.137) images

which is consistent with wide-sense stationarity.

7.7.3 Implications of Stationarity

Several important results can be stated for stationary random processes.

7.7.4 Wide-Sense Stationarity and Correlation of Coefficients

Consider a random process images defined on the interval [α, β] and the Fourier series decomposition of each signal:

The coefficients define a set of random variables images where the sample space of Ck is images. The following theorem states sufficient conditions for wide-sense stationarity (Koopmans, 1974/1995, p. 40):

7.7.4.1 Stationarity Implies Uncorrelated Fourier Coefficients

Theorem 7.15 states that if the coefficients defined by a Fourier series decomposition of the signals in a random process are uncorrelated and have zero mean apart from the zeroth-order coefficient, then the random process is wide-sense stationary. The converse is also true (Kawata, 1969; Papoulis, 1965, pp. 367, 461; Yaglom, 1962, p. 36).

7.8 CRAMER’S REPRESENTATION

Consider a random process images defined on the interval [0, T] with images. Consistent with the discussion in Chapter 3, the Cramer representation of the signal X(ω, t), denoted XC(ω, f), by definition, is

(7.147) images

The set of signals images defines the associated random process XC:

(7.148) images

The shorthand notation

(7.149) images

is useful.

Using the signal definitions in Chapter 3, the following integrated spectrum, spectrum, and power spectrum definitions can be made.

7.8.1 Fundamental Results

The following theorems state fundamental results.

7.8.1.1 Notes

The first result of Theorem 7.18 implies that the integrated spectrum, XC(Ω, f), is an orthogonal increment random process on the infinite interval. This result does not hold for the interval [0, T] as images.

The second result of Theorem 7.18 states that the expected magnitude squared value of the difference in the integrated spectrum between f1 and f2 equals the power in the sinusoidal components with frequencies between the two values. This result justifies the definition of the power spectrum |dXC(f)|2, as given by Equation 7.152, as a valid power spectrum with a resolution of images for the interval images. In summary:

7.8.1.2 Notes

Theorem 7.19 states that the expected mean squared change of XC(Ω, f), which is an orthogonal increment function, between f2 and f1 equals the change in the integrated power spectrum, that is, the power, between f2 and f1.

To determine the power spectrum, as given by |dXC(f)|2, it is sufficient to determine the integrated power spectrum as given by images.

For the case where the integrated power spectrum is continuous, the power spectral density can be defined according to

(7.163) images

7.8.1.3 Power

7.8.2 Continuous Spectrum

Consider an arbitrary interval of the form images and a random process images, which is such that all signals in SX can be decomposed, using a Fourier series, into the form

(7.165) images

Such a decomposition leads to a discrete line power spectrum of the form shown in Figure 7.8. It is of interest whether a random process exists that has a spectrum with spectral components spaced arbitrarily closely, such that a continuous spectrum results.

c7-fig-0008

Figure 7.8 Power spectrum as defined by a Fourier series.

7.8.2.1 Example of a Random Process with a Continuous Spectrum

Consider a sequence of random processes images where the ith random process is defined according to

(7.166) images

Here, B is a constant with the restriction images, Θ is a continuous random variable with a uniform distribution on images, and K is a discrete uniform random variable with a sample space images and with images. The two random variables are assumed to be independent. For the ith random process, the waveforms are sinusoidal with frequencies randomly selected from the set images, and associated with each waveform is random phase.

For the ith random process, the expected power in the frequency range images is determined by the power of a sinusoid with a frequency images, which occurs with a probability 1/2ⁱ. The expected power is images. The expected power in the interval images equals the average power, which is A²/2.

The Cramer transform of one outcome of the ith random process as specified by

(7.167) images

is given, for the infinite interval images, by (see Table 3.1)

(7.168) images

Thus,

(7.169) images

and

(7.170) images

These functions are illustrated in Figure 7.9.

c7-fig-0009

Figure 7.9 Graph of the magnitude of the Cramer transform and the power spectrum associated with Xi (k, θk, t).

The probability of each signal in the ith random process is 1/2ⁱ. It then follows that the power spectrum, images, of the ith random process has the form illustrated in Figure 7.10.

c7-fig-0010

Figure 7.10 Graph of the power spectrum for the ith random process.

7.8.2.2 Meaning of a Continuous Spectrum

For such a sequence of random processes, it is the case, for any fixed finite resolution in f of B/2ⁱ, that the random processes X1, …, Xi have a power spectrum with a discrete uniform distribution as illustrated in Figure 7.10. The power spectrum for the random processes images will appear increasingly uniform and continuous when viewed with the same resolution.

7.8.3 Example: Binary Communication Random Process

Consider a random process that defines binary communication signals on the interval images, images, according to

(7.171) images

where images, images, are independent and identically distributed random variables with zero mean and variance images.

As detailed in Chapter 3 (Eq. 3.152), the Cramer transform of a unit step function images on the interval images, assuming images and images, is

(7.172) images

Hence,

(7.173) images

Define

(7.174) images

and it follows that

(7.175) images

and the integrated power spectrum is

(7.176) images

as images when images. Here,

(7.177) images

The power spectral density can then be evaluated according to

The integrated power spectrum is shown in Figure 7.11 for the case of images, images, amplitudes from the set images and images. The corresponding power spectral density is shown in Figure 7.12.

c7-fig-0011

Figure 7.11 Integrated power spectrum of a binary communication signal defined on the interval [−ND − D/2, ND + D/2] for the case of N = 16, D = 1, amplitudes from the set {−1,1}, and images.

c7-fig-0012

Figure 7.12 Power spectral density of a binary communication signal defined on the interval [−ND − D/2, ND + D/2] for the case of N = 16, D = 1, amplitudes from the set {−1,1}, and images. G(f) is the power spectral density defined by the Cramer transform and evaluated with a frequency resolution of df = 1/2(ND + D/2). G(f) is the power spectral density assuming a sinusoidal basis set.

For reference, the power spectral density obtained according to

(7.179) images

is, as images,

(7.180) images

where P(f) is the Fourier transform of the signalling pulse images, that is,

(7.181) images

This power spectral density is shown in Figure 7.12.
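A sketch of this reference power spectral density follows for a rectangular signalling pulse of width D and independent, zero-mean, unit-variance amplitudes. The closed form used below, G(f) = (σ²/D)|P(f)|² with P(f) = D sinc(fD), is the standard result for this model and is assumed here since Equations 7.180 and 7.181 are not reproduced; its integral over all frequencies recovers the symbol variance.

# Sketch: reference PSD of a random binary waveform with a rectangular pulse of
# width D and i.i.d. zero-mean amplitudes of variance sigma2.
import numpy as np

D, sigma2 = 1.0, 1.0
f = np.linspace(-4.0, 4.0, 801)
P = D * np.sinc(f * D)                          # numpy sinc(x) = sin(pi x)/(pi x)
G = (sigma2 / D) * np.abs(P) ** 2               # power spectral density

total_power = np.sum(G) * (f[1] - f[0])         # approaches sigma2 as the range grows
print("integrated PSD over [-4/D, 4/D]:", total_power)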

7.8.3.1 Notes

The random process is not wide-sense stationary as is evident from the autocorrelation function

(7.182) images

which is illustrated in Figure 7.13. This nonstationarity means that the power spectral density given by Equation 7.178 is not a valid power spectral density in the usual sense of the powers of the individual components summing to the total power. The next section details a stationary random process (white noise) whose power spectral density, as defined by the Cramer transform, does satisfy this requirement.

c7-fig-0013

Figure 7.13 Autocorrelation function of the defined binary communication random process.

7.8.4 White Noise: Approach II

Consider a white noise random process images defined according to

where images, F1, …, FN are independent random variables with a uniform distribution on [0, fmax], Φ1, …, ΦN are independent random variables with a uniform distribution on the interval images, and N is fixed according to images where fo is the nominal frequency resolution.

7.8.4.1 Example

In Figures 7.15 and 7.16, the integrated power spectrum as given by Equation 7.184 and the associated power spectral density as given by

(7.186) images

are shown for the finite interval case with images, images, and images and an average signal power of images.

c7-fig-0015

Figure 7.15 Integrated power spectrum for a white noise random process defined on the interval [−T, T] for T = 10, Ao = 1, and fmax = 1.

c7-fig-0016

Figure 7.16 Power spectral density for a white noise random process defined on the interval [−T, T] for T = 10, Ao = 1, and fmax = 1. A resolution of df = 1/2T has been used.
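Sample signals from this white noise model can be generated directly. The defining equation is not reproduced above, so the amplitude convention (each sinusoidal component of amplitude Ao), the choice fo = 1/2T, and the parameter values in the sketch below are assumptions made for illustration.

# Sketch: one sample signal from the sum-of-random-sinusoids white noise model,
# with N components, frequencies uniform on [0, fmax], and uniform random phases.
import numpy as np

rng = np.random.default_rng(3)
T, fmax, Ao = 10.0, 1.0, 1.0
fo = 1.0 / (2 * T)                              # nominal frequency resolution (assumed)
N = int(round(fmax / fo))                       # number of sinusoidal components

t = np.linspace(-T, T, 2001)
F = rng.uniform(0.0, fmax, N)                   # random frequencies
phi = rng.uniform(0.0, 2 * np.pi, N)            # random phases
x = Ao * np.sum(np.cos(2 * np.pi * np.outer(F, t) + phi[:, None]), axis=0)

print("number of components N:", N)
print("time-averaged power of this sample:", np.mean(x ** 2))
print("nominal power N * Ao^2 / 2        :", N * Ao ** 2 / 2)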

7.9 STATE SPACE CHARACTERIZATION OF RANDOM PROCESSES

In some instances, the time nature of a random process is less important than the states that the random process takes on, and for this case, Markov theory is often appropriate (Allen, 1978, p. 129f; Grimmett and Stirzaker, 1992, p. 156f).

7.9.1 Markov Processes

Markov processes are characterized by a lack of memory: the future is dependent on the current state of the random process and not on its past. The theory of Markov processes can be applied to queueing systems, population dynamics, diffusion processes, probability of error calculations in communication systems, etc.

7.9.1.1 Classification of Markov Processes

As noted in Chapter 5, one-dimensional random processes define signals that are either discrete time–discrete state, discrete time–continuous state, continuous time–discrete state, or continuous time–continuous state. Consistent with this demarcation, Markov processes can be classified as detailed in Table 7.1.

Table 7.1 Demarcation of Markov random processes

Time                           Countable state space: Markov chain   Uncountable state space: Markov process
Countable (discrete time)      Discrete time Markov chain            Discrete time Markov process
Uncountable (continuous time)  Continuous time Markov chain          Continuous time Markov process

7.9.1.2 Definition of a Markov Random Process

7.9.2 Discrete Time–Discrete State Random Processes

7.9.2.1 State Diagrams for Discrete Time–Discrete State Random Processes

Consider a discrete time–discrete state random process X defined on a sample space S according to

(7.188) images

where the set images specifies the times at which signals from the random process are defined, and at these times the signals take on values from the state space images. The possible states are illustrated in Figure 7.17.

c7-fig-0017

Figure 7.17 Left: illustration of the possible states for a discrete time–discrete state random process. Right: illustration of the possible transitions at the time tk.

At each possible time, a discrete random variable is defined. For the kth time, the random variable images is defined along with the associated probability mass function:

(7.189) images

At the time tk, the transitions illustrated in Figure 7.17 are possible. For the time t1, the probabilities associated with these transitions form a matrix according to

(7.190) images

where the superscript of 12 indicates the transition as time changes from t1 to t2. At the time t2, the same transitions as at time t1 are possible but, in general, with different probabilities. For the special memoryless case—the Markov case—where the probabilities associated with these transitions are dependent only on the state level and not on the prior transition, the following transition probability matrix can be defined:

(7.191) images

For the nonmemoryless case—the non-Markov case—the transition probabilities have the form

(7.192) images

etc.

7.9.2.2 Characterizing a Trajectory of a Signal from a Random Process

The probability of a given trajectory for a random process over N possible transitions is

(7.193) images

Expanding yields

(7.194) images

For a Markov process, it then follows that

(7.195) images

and a two-state transition matrix is sufficient.
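A short sketch of this factorization follows. The two-state chain, the initial probabilities, and the transition probabilities below are assumptions chosen for illustration, and a homogeneous chain (the same one-step matrix at every time) is assumed for brevity.

# Sketch: probability of a given trajectory of a Markov chain as the initial state
# probability multiplied by the one-step transition probabilities along the path.
import numpy as np

p0 = np.array([0.6, 0.4])                       # initial state probabilities
P = np.array([[0.9, 0.1],                       # P[i, j] = probability of moving to state j
              [0.2, 0.8]])                      # given the current state is i

def trajectory_probability(states):
    """Probability of observing the state sequence `states` (0-based state indices)."""
    prob = p0[states[0]]
    for a, b in zip(states[:-1], states[1:]):
        prob *= P[a, b]
    return prob

print(trajectory_probability([0, 0, 1, 1, 0]))  # 0.6 * 0.9 * 0.1 * 0.8 * 0.2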

7.9.3 Homogenous Markov Chain

7.9.3.1 Example: Random Walk

A random walk, X, with states images and with transition probabilities

(7.199) images

is a homogenous Markov chain. The transition probability matrix is

(7.200) images

7.9.3.2 Example: Ehrenfest Model

Consider m particles, as illustrated in Figure 7.18, that are distributed on either side of a membrane and that diffuse from one side to the other. When the probability of a particle diffusing from one side to the other side is proportional to the number of particles on the first side, the Ehrenfest model is defined. Consider a random process X defined as the number of particles on the right side of the membrane. An increase in X from a level i is consistent with one of the images particles moving from the left side to the right side, and a decrease in X from a level i is consistent with one of the i particles moving from the right side to the left side. With the Ehrenfest model, the transition probabilities, for images, are

(7.201) images
c7-fig-0018

Figure 7.18 Diffusion of particles from one side of a membrane to the other side.

The random process X is a homogenous Markov chain with a transition probability matrix:

(7.202) images
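The Ehrenfest transition probability matrix is simple to construct numerically. The sketch below uses transition probabilities P(i → i + 1) = (m − i)/m and P(i → i − 1) = i/m, consistent with the description above (Equation 7.201 itself is not reproduced here), with m = 4 chosen purely for illustration.

# Sketch: Ehrenfest model transition matrix; state i = number of particles on the right side.
import numpy as np

m = 4
P = np.zeros((m + 1, m + 1))
for i in range(m + 1):
    if i < m:
        P[i, i + 1] = (m - i) / m               # one of the m - i left-side particles moves right
    if i > 0:
        P[i, i - 1] = i / m                     # one of the i right-side particles moves left

print(P)
print("rows sum to one:", np.allclose(P.sum(axis=1), 1.0))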

7.9.4 N-Step Transition Probabilities for Homogenous Case

For a homogenous Markov chain, the following definitions can be made.

7.9.4.1 Chapman–Kolmogorov Equation

Consider the possible paths between the ith state at time tk and the jth state at time images as illustrated in Figure 7.19.

c7-fig-0019

Figure 7.19 Possible paths for a discrete time–discrete state random process between the ith state at time tk time and the jth state at time tk + m + n.

7.9.4.2 N-Step Transition Probability Matrix

7.9.4.3 Example

Consider an N-stage communication system, as illustrated in Figure 7.20, with identical stages and with binary data being transmitted. The probabilities characterizing each stage are

(7.210) images

and define the transmission matrix illustrated in Figure 7.20. The input data is characterized according to

(7.211) images

with images.

c7-fig-0020

Figure 7.20 Top: N-stage communication system. Bottom: transmission matrix for each stage.

An important measure of the communication system is the probability of error in transmission after N stages, which is defined as

(7.212) images

A simple approach to finding the probability of error is to note that the communication link defines an N-stage homogenous Markov chain where each stage has two possible states with a transition probability matrix of

(7.213) images

It then follows, from Theorem 7.24, that the transmission probability matrix for N stages is

(7.214) images

and the probability of error is

(7.215) images

For the case of images, images, and images, the probability of error is shown in Figure 7.21.

c7-fig-0021

Figure 7.21 Probability of error in a binary communication system as the number of stages varies.
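The computation behind Figure 7.21 can be sketched directly. The per-stage transition probabilities and the input probabilities below are illustrative assumptions (a symmetric stage with crossover probability p is assumed; Equations 7.210 to 7.215 give the general expressions).

# Sketch: probability of error after N identical binary stages via the N-step
# transition matrix P^N of the per-stage Markov chain.
import numpy as np

p = 0.01                                        # assumed per-stage crossover probability
P = np.array([[1 - p, p],
              [p, 1 - p]])
p_in = np.array([0.5, 0.5])                     # P[input = 0], P[input = 1] (assumed)

for N in (1, 4, 16, 64):
    PN = np.linalg.matrix_power(P, N)
    p_error = p_in[0] * PN[0, 1] + p_in[1] * PN[1, 0]
    print(f"N = {N:3d}   P[error] = {p_error:.4f}")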

7.9.5 PMF Evolution for Homogenous Markov Chain

For a homogenous Markov chain, the n-step transition probability matrix images facilitates establishing the evolution of the probability mass function once the initial state probabilities are known. The initial states, associated with the time t0, define a random variable images with outcomes s1, …, si, … and a probability mass function

(7.216) images

The nth transition time, tn, defines a random variable images with outcomes s1, …, si, … and a probability mass function

(7.217) images

This probability mass function is specified in the following theorem.

7.9.5.1 Example

Consider a one-step processing system, as illustrated in Figure 7.22, with one level of redundancy such that the state diagram, as detailed in Figure 7.23, is appropriate. The state diagram arises from the state assignment detailed in Table 7.2. It is assumed that there is a time between when a unit fails and when it is taken for repair, and this time varies depending on the nature of the fault. It is also assumed that the repair time varies with the nature of the fault.

c7-fig-0022

Figure 7.22 A one-step processing system with one level of redundancy. The router directs the input to unit 1 when it is operational and to unit 2 when unit 1 has failed.

c7-fig-0023

Figure 7.23 State diagram for system. O, F, and R stand, respectively, for operational, fail, and repair.

Table 7.2 State assignment for the system of Figure 7.22

State Unit 1 Unit 2 System status
s1 Operational, in use Operational, standby Operational
s2 Failed Operational
s3 Being repaired Operational
s4 Failed Operational, in use Operational
s5 Failed Nonoperational
s6 Being repaired Nonoperational
s7 Being repaired Operational, in use Operational
s8 Failed Nonoperational
s9 Being repaired Nonoperational

For the case where the system state is updated at a set rate of 1/Δt, the transition probability matrix is

(7.222) images

The following definitions, associated with a time interval of duration Δt seconds, apply: pF is the probability of a fault given an initial operational state, pFR is the probability of moving to a repair state given an initial failed state, and pRO is the probability of moving to an operational state given an initial repair state. With these definitions, it then follows that

(7.223) images
(7.224) images
(7.225) images
(7.226) images
(7.227) images

The evolution of the state probabilities with time, as given by P0Pn(1), is shown in Figure 7.24 for the case of images (an unrealistically high value but a value that demonstrates the nature of the state probability evolution), images, images, and images.

c7-fig-0024

Figure 7.24 Probability mass function evolution with the number of time intervals of duration Δt. Here, tn = nΔt.

Of interest is the probability that the system is operational. At the time images, this is given by (see Table 7.2)

(7.228) images

The evolution with time of the probability of the system being nonoperational is shown in Figure 7.25 for the case of images, images, images, and images.

c7-fig-0025

Figure 7.25 Probability of system being nonoperational for the case of pF = 0.0001, pFR = 0.4, pRO = 0.33, and tn = nΔt.
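The evolution of the state probabilities for such a system follows the standard update for a homogeneous Markov chain, namely repeated multiplication of the probability vector by the transition matrix. The sketch below uses a 3-state chain with illustrative numbers; it is not the 9-state matrix of Equation 7.222, and the state labels and probabilities are assumptions.

# Sketch: probability mass function evolution p_n = p_{n-1} P for a homogeneous
# Markov chain, with the probability of being in a set of "nonoperational" states.
import numpy as np

P = np.array([[0.990, 0.010, 0.000],            # from operational: stay, fail, repair
              [0.000, 0.600, 0.400],            # from failed: operational, stay, repair
              [0.330, 0.000, 0.670]])           # from under repair: operational, fail, stay
p0 = np.array([1.0, 0.0, 0.0])                  # start in the operational state
nonoperational = [1, 2]                         # states counted as nonoperational

p = p0.copy()
for n in range(1, 51):
    p = p @ P                                   # one transition of duration dt
    if n % 10 == 0:
        print(f"n = {n:2d}  p_n = {np.round(p, 4)}  "
              f"P[nonoperational] = {p[nonoperational].sum():.4f}")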

7.9.6 First Passage Time for Homogenous Markov Chain

The following definitions underpin the determination of characteristics related to a state being reached for the first time.

7.10 TIME SERIES CHARACTERIZATION

Time series characterization is an important area, and AR, MA, and ARIMA models find widespread application. Brillinger (2001), for example, provides a good introduction.

7.11 PROBLEMS

  1. 7.1 Consider an experiment with a sample space images and where the probability of an outcome is specified by the probability mass function images. A random process is defined on the experiment according to
    (7.232) images
    1. Explicitly define SX.
    2. Define the following associated random processes:
      1. The random process defined by the autocorrelation function
      2. The random process defined by the time-averaged autocorrelation function
      3. The random process defined by the Fourier transform
      4. The random process defined by the power spectral density function
    3. Determine the time-averaged autocorrelation function and the power spectral density function for the random process.
    4. Show that the Fourier transform of the time-averaged autocorrelation function equals the power spectral density.
    5. Determine the average power of the random process. Show that the average power equals the value of the time-averaged autocorrelation function with an argument of zero and is also equal to the integral of the power spectral density function.

    Note that images (Gradshteyn and Ryzhik, 1980, Eq. 0.241).

  2. 7.2 Consider a system where, on power down, a transient signal component is evident. This is modelled by the random process images according to
    (7.233) images

    for a causal pulse function p.

    1. Determine the mean and variance of the random process.
    2. Determine an expression for the probability density function evolution with time of the random process.
    3. For the case of images, images, images, and images, graph the mean, variance, and probability density function evolution with time.
  3. 7.3 Consider a sinusoidal signal with a phase component that is randomly varying and that can be approximated by a random walk. The set of possible sinusoids defines a random process according to
    (7.234) images

    where images and Ω1, Ω2, … are independent and identically distributed random variables with a sample space images and with images. The following result is useful: for a random walk with a step size of Δx, after N steps, the possible levels are images and the probability of the level kΔx is

    images
    1. Determine analytical expressions for the mean, mean squared value, and variance of the random process.
    2. Determine an analytical expression for the autocorrelation function of the random process. To determine the joint probability mass function of the random walk at the times t and images, it is useful to use the result
      (7.235)images
    3. Note that the expressions for the mean, variance, and autocorrelation function allow the correlation coefficient of the random process at the times t and images to be determined.
    4. For the case of images, images, images, and images, graph examples of the phase variation with time, and graph the variation with time of the mean and variance. Graph the autocorrelation and correlation coefficient with respect to time and the delay τ. By varying Δt and φo, the duration of the transient period, before the random process becomes approximately stationary, can be ascertained.
  4. 7.4 Consider the case of a sinusoidal signal that is corrupted by additive noise such that a random process images results:
    (7.236) images
    images
    1. For the case of images, images, images, images, and images, graph some of the possible signals, and the noise components, from the random process.
    2. Determine the average power of the signals in the random process and the average power of the random process. Assume an interval T that is an integer multiple of 1/fo.
    3. Establish expressions for the coefficients in the decomposition
      (7.237)images

      on the interval [0, 1/fo] for the case of images. Hence, determine the average power of the random process as a summation over the expected value of the magnitude squared of the coefficients.

  5. 7.5 Consider a random process images defined according to
    (7.238) images

    where C1 and C2 are random variables defined on S.

    1. Determine general expressions for the mean and autocorrelation function of the random process.
    2. If C1 and C2 are uncorrelated, then specify the autocorrelation function.
    3. Consider the case of images, images, and images and where the sample spaces of the random variables C1 and C2, respectively, are images and images. Show that C1 and C2 are uncorrelated. Determine expressions for the mean and autocorrelation functions for this case. Is the random process stationary? What nonzero functions b1 and b2 would ensure stationarity?
    4. Using the above specified parameters and assuming
      (7.239)images

      graph the autocorrelation function for the case of images.

  6. 7.6 Consider an experiment with a sample space
    (7.240) images

    and with outcomes governed by a joint probability density function

    (7.241) images

    A random process images is defined on this sample space according to

    (7.242) images

    for defined functions g1 and g2.

    1. Determine general expressions for the mean, variance, and autocorrelation function of the random process.

      For the following parts, consider the specific definitions:

      (7.243)images
      (7.244)images
    2. Determine expressions for the mean, variance, and autocorrelation function.
    3. For the case of images, images, and images, graph signals from the random process.
    4. For the above defined parameters, graph the mean, variance, autocorrelation function, and correlation coefficient.
    5. For the above defined parameters, graph the effective correlation time for images.
    6. For the above defined parameters, graph the expected mean squared change.
  7. 7.7 A random process X is based on an experiment with outcomes
    (7.245) images

    and where the outcomes are governed by the probability density function images, images, images, and images. The random process is defined according to

    (7.246) images
    1. Determine the mean and autocorrelation function of this random process, and confirm that it is a wide-sense stationary random process.
    2. Using the Cramer transform, determine the integrated spectrum and integrated power spectrum for the random process. For the latter, leave your result in an integral form.
    3. For the case of images, images, images, and images, use numerical integration to graph the integrated power spectrum and the associated power spectral density.
  8. 7.8 Consider the case of a progressive disease in a specific species with no known cure but with several effective treatments. The initial treatment is available when the disease is diagnosed and classed as being in stage 1. When the disease is classed as being in stage 2, a second treatment is available. When the disease is classed as being in stage 3, only new experimental treatments are available. The disease is such that the initial treatment is not effective a second time. The state diagram, as detailed in Figure 7.26, is appropriate. The state diagram arises from the state assignment detailed in Table 7.3. The transition probabilities listed in Figure 7.26 are for a specified time interval.
    1. Specify the transition probability matrix.
    2. Graph the time evolution of the state probabilities assuming state 1 at images.
    3. Determine the probability that an entity from the species, chosen at random, has not been diagnosed with the disease, is in stage three of the disease, or has died after 10, 20, and 100 time intervals.
    c7-fig-0026

    Figure 7.26 State diagram for disease progression and state transition probabilities.

    Table 7.3 State assignment for states defined in Figure 7.26

    State Definition
    s1 Prior to initial disease diagnosis
    s2 Stage 1 of disease
    s3 First treatment
    s4 Remission phase
    s5 First treatment unsuccessful
    s6 Stage 2 of disease
    s7 Second treatment
    s8 Stage 3 of disease
    s9 Experimental treatment
    s10 Death

APPENDIX 7.A PROOF OF THEOREM 7.2

The proof is given for the countable case; the proof for the uncountable case follows in an analogous manner. First, images guarantees the existence of images. As images implies images (Theorem 2.13), it follows that the Fourier transform of X(ωi, t), denoted X(ωi, T, f), exists, and thus, G(ωi, T, f) is well defined.

For the time-averaged autocorrelation function, consider the case of images:

(7.247) images

where Schwarz’s inequality (Theorem 2.15) has been used. Thus, the assumption of finite average power implies that images is finite for images, which implies images.

For the power spectral density, the assumption of finite average power implies:

(7.248) images

Here, Parseval’s relationship (Theorem 3.9) has been used. The interchange of the summation and integral is valid, from the Fubini–Tonelli theorem (Theorem 2.16), as all terms are positive and the summation of the integral exists by the assumption of finite average power. Thus, images, which implies images and, hence, the existence of G(T, f) for all f except, potentially, at a countable number of points.

APPENDIX 7.B PROOF OF THEOREMS 7.3 AND 7.4

7.B.1 Autocorrelation Function: Theorem 7.3

Consider the countable case: by definition

(7.249) images

Convergence is guaranteed as

(7.250) images

has been assumed to be finite. This assumption also ensures that the interchange of the limit and summation, consistent with the dominated convergence theorem (Theorem 2.19), is valid, that is,

(7.251) images

As shown in Appendix 7.A, images is finite if images, and the average power in the random process images is finite. These results extend to the infinite interval when images for all images, images, and when there is an upper bound on the signal powers over the interval images as specified by Equation 7.71.

7.B.2 Power Spectral Density: Theorem 7.4

Consider the countable case:

(7.252) images

Finiteness of images is guaranteed as

(7.253) images

has been assumed. This assumption also ensures the validity of the interchange of the limit and summation, according to the dominated convergence theorem (Theorem 2.19), i.e.,

(7.254) images

A sufficient condition for G(ωi, T, f) to be bounded as images is for images. A stronger condition can be found by considering

(7.255) images

Hence, if images, then G(ωi, T, f) is bounded.

APPENDIX 7.C PROOF OF THEOREM 7.5

First, consistent with Theorem 3.14, the assumption of finite energy, and piecewise differentiability, on [0, T] for each signal, ensures that

(7.256) images

for all images. Consider the countable case: to prove that the power spectral density is the Fourier transform of the time-averaged autocorrelation function, consider the integral

(7.257) images

Consistent with Theorem 7.2, the assumption of finite average signal power on [0, T] implies that the summation is bounded above, that is, images is finite. It then follows from the Fubini–Tonelli theorem that the order of the summation and the integral operation can be interchanged to yield

(7.258) images

To prove that the inverse Fourier transform of the power spectral density function equals the time-averaged autocorrelation function, consider

(7.259) images

Consistent with Theorem 7.2, the assumption of finite average signal power on [0, T] implies that the summation is bounded above, that is, G(T, f) is finite and, further, images. It then follows from the Fubini–Tonelli theorem that the order of the summation and the integral operation can be interchanged to yield

(7.260) images

As images is finite for all values of τ, it follows that

(7.261) images

APPENDIX 7.D PROOF OF THEOREM 7.6

First, the assumed conditions, consistent with Theorem 7.5, ensure that the images and G(T, f) exist for all images and that they are related via the Fourier and inverse Fourier transforms.

To relate images to images, consider

(7.262) images

It is necessary to interchange the limit and integration operations in this equation. To achieve this, first, note, consistent with Theorem 7.2, that images. Second, assume there exists a function images such that images for all T. Then, from the dominated convergence theorem (Theorem 2.19), it follows that the order of limit and integral can be interchanged to yield

(7.263) images

To relate images to images, consider

(7.264) images

Again, it is necessary to interchange the limit and integration operations. In general, however, the power spectral density function G(T, f) will have impulsive components, and the interchange of the limit and integral operations cannot then be justified. With the assumption of the existence of a function images such that images for all T, it follows, from the dominated convergence theorem (Theorem 2.19), that the order of limit and integral can be interchanged to yield

(7.265) images

APPENDIX 7.E PROOF OF THEOREM 7.11

Assume the countable case and consider images:

(7.266) images

A change of variable images for λ results in the region of integration illustrated in Figure 7.6. The expectation can then be written as

(7.267) images

Define IR as

(7.268) images

It then follows that

(7.269) images

which clearly converges to zero as images.

APPENDIX 7.F PROOF OF THEOREM 7.12

Consider the double summation consistent with the average power:

(7.270) images

Expanding the summation out yields

(7.271) images

and the following requirement then follows for the off-diagonal terms to be negligible:

(7.272) images

APPENDIX 7.G PROOF OF THEOREM 7.16

Consider the countable case:

(7.273) images

The assumption of wide-sense stationarity implies that the second summation is independent of t. Due to the harmonic nature of the terms in this summation, this is only possible if

(7.274) images

Expanding the exponential terms out gives the condition

(7.275) images

where images. Due to the orthogonality of the sine and cosine functions, the requirement is for

(7.276) images

and, hence, the condition that images for images.

APPENDIX 7.H PROOF OF THEOREM 7.17

Consider the countable case:

(7.277) images

For the interval images, the Cramer transform of images (see Theorem 3.29) is

(7.278) images

and the approximation to this equation is

(7.279) images

Thus,

(7.280) images

For the stationary case images when images (see Theorem 7.16) and, thus

(7.281) images

The exact expression follows in an analogous manner and is

(7.282) images

In Figure 7.27, the graph of images is shown. It then follows that

(7.283) images
c7-fig-0027

Figure 7.27 Graph of [sgn(i) + sgn(f − ifo)]². Left: i < 0. Right: i > 0.

APPENDIX 7.I PROOF OF THEOREM 7.18

Denote images and consider the countable case:

(7.284) images

For an interval images, images, it follows, from Theorem 3.32 for the case of images, that

(7.285) images

It then follows, for the case of images, that

(7.286) images

Interchanging the order of summations yields

(7.287) images

Wide-sense stationarity implies, consistent with Theorem 7.16, that images for images, and it then follows that images as images, images, as required.

It also follows from the result images for images, and for the case of images, that

assuming f1 and f2 are integer multiples of fo. Consider the result from Theorem 7.17 for images:

(7.289) images

which, for images, implies

(7.290) images

Thus,

assuming f1 and f2 are integer multiples of fo. As images, images, and the significance of an individual term in the summation becomes increasingly small, it then follows from Equations 7.288 and 7.291 that

(7.292) images

as required.

APPENDIX 7.J PROOF OF THEOREM 7.20

Consider the countable case and the definition of power:

(7.293) images

Interchanging the order of summations, it follows that

(7.294) images

For images and the same step size for df and , it is the case, consistent with Theorem 7.18, that

(7.295) images

Hence,

(7.296) images

From Theorem 7.18, it then follows that

(7.297) images

which is the required result.

APPENDIX 7.K PROOF OF THEOREM 7.21

Consider the alternative trigonometric form for X(Ω, t):

(7.298) images

The Cramer transforms, respectively, of sin(2πfit) and cos(2πfit) for the interval images are specified in Theorem 3.28:

(7.299) images

and for the infinite interval (Theorem 3.27):

(7.300) images

For the case of images, the infinite interval result is a good approximation.

The associated random process defined by the Cramer transform of each signal of X(Ω, t) is

(7.301) images

and this random process can be approximated according to

(7.302) images

The associated random process defined by the magnitude squared of the Cramer transform of each signal of X(Ω, t) is

(7.303) images

and this random process can be approximated according to

(7.304) images

The joint density function of F1, …, FN, Φ1, …, ΦN is

(7.305) images

and it then follows that

(7.306) images

Interchanging the order of integrations and summations, and splitting the summations into the diagonal and nondiagonal components, yields

(7.307) images

where

(7.308) images
(7.309) images

As the integral of a sinusoid over its period is zero, it is the case that images. Further, as the integral of cos²(φi) or sin²(φi) over [0, 2π] equals π, it follows that

(7.310) images

The same procedure leads to the exact form

(7.311) images

The functions images and images have the graphs shown in Figure 7.28, and it then follows, for images, that

(7.312) images

and images. The power spectral density then is

(7.313) images
c7-fig-0028

Figure 7.28 Left: graph of sgn(f + fi) − sgn(f − fi) − 2 sgn(fi). Right: graph of sgn(f + fi) + sgn(f − fi). The case of f > 0 is assumed.

APPENDIX 7.L PROOF OF THEOREM 7.23

Consider the following two results: first, from conditional probability theory,

Second, from the Theorem of Total Probability,

The substitution of Equation 7.313 into Equation 7.312 yields

(7.316) images

The Markov property, whereby transition probabilities depend on the present value and not on past values, results in

(7.317) images

The substitution of this result, and the use of conditional probability theory, yields

(7.318) images

as required.

APPENDIX 7.M PROOF OF THEOREM 7.24

The proof is by induction. First, the theorem is true for the case of images. Second, consider images. From the Chapman–Kolmogorov equation, it is the case for an M-state random process that

Now,

(7.320) images

and it is clear that the ijth element of this matrix is consistent with Equation 7.317, and the theorem holds for the case of images. Third, consider the case of images, images. From the Chapman–Kolmogorov equation, it follows that

It is the case that

(7.322) images

and it is clear that the ijth element of this matrix is consistent with Equation 7.319, and the result images holds. Finally,

(7.323) images

concludes the proof.
