15 Estimate of Stochastic Process Frequency-Time Parameters

15.1 Estimate of Correlation Function

Considered parameters of the stochastic process, namely, the mathematical expectation, the variance, the probability distribution and the density functions, do not describe statistic dependence between the values of stochastic process at different instants. We can use the following parameters of the stochastic process, such as the correlation function, spectral density, characteristics of spikes, central frequency of narrowband stochastic process, and others, to describe the statistic dependence subject to specific problem. Let us briefly consider the methods to measure these parameters and define the methodological errors associated, in general, with the finite time of observation and analysis applied to the ergodic stochastic processes with zero mathematical expectation.

As applied to the ergodic stochastic process with zero mathematical expectation, the correlation function can be presented in the following form:

R(τ)=limT1TT0x(t)x(tτ)dt.(15.1)

Since in practice the observation time or integration limits (integration time) are finite, the correlation function estimate under observation of stochastic process realization within the limits of the finite time interval [0, T] can be presented in the following form:

R*(τ)=1TT0x(t)x(tτ)dt.(15.2)

As we can see from (15.2), the main operations involved in measuring the correlation function of ergodic stationary process are the realization of fixed delay τ, process multiplication, and integration or averaging of the obtained product. Flowchart of the measurer or correlator is depicted in Figure 15.1. To obtain the correlation function estimate for all possible values of τ, the delay must be variable. The flowchart shown in Figure 15.1 provides a sequential receiving of the correlation function for various values of the delay τ. To restore the correlation function within the limits of the given interval of delay values τ, the last, as a rule, varies discretely with the step Δτ= τk+1 − τk, k = 0,1,2,…. If the spectral density of investigated stochastic process is limited by the maximum frequency value fmax, then according to the sampling theorem or the Kotelnikov’s theorem, in order to restore the correlation function, we need to employ the intervals equal to

Δτ=12fmax.(15.3)

Images

FIGURE 15.1 Correlator.

However, in practice, it is not convenient to restore the correlation function employing the sampling theorem. As a rule, there is a need to use an interpolation or smoothing of obtained discrete values. As it was proved empirically [1], it is worthwhile to define the discrete step in the following form:

Δτ1510fmax.(15.4)

In doing so, 5 – 10 discrete estimates of correlation function correspond to each period of the highest frequency fmax of stochastic process spectral density.

Approximation to select the value Δτ based on the given error of straight-line interpolation of the correlation function estimate was obtained in Ref. [2]:

Δτ1ˆf0.2|Δ|,(15.5)

where |Δℛ| is the maximum allowable value of interpolation error of the normalized correlation function ℛ(τ);

ˆf=0f2S(f)df0S(f)df(15.6)

is the mean-square frequency value of the spectral density S(f) of the stochastic process ξ(t).

Sequential measurement of the correlation function at various values of τ = kΔτ, k = 0,1,2,…, v is not acceptable forever owing to long-time analysis, in the course of which the measurement conditions can change. For this reason, the correlators operating in parallel can be employed. The flowchart of multichannel correlator is shown in Figure 15.2. The voltage proportional to the discrete value of the estimated correlation function is observed at the output of each channel of the multichannel correlator. This voltage is supplied to the channel commutator, and the correlation function estimate, as a discrete time function, is formed at the commutator output. For this purpose, the commutator output is connected with the low-pass filter input and the low-pass filter possesses the filter time constant adjusted with request speed and a priori behavior of the investigated correlation function.

In principle, a continuous variation of delay is possible too, for example, the linear variation. In this case, the additional errors of correlation function measurements will arise. These errors are caused by variations in delay during averaging procedure. The acceptable values of delay variation velocity are investigated in detail based on the given additional measurement error.

Methods of the correlation function estimate can be classified into three groups based on the principle of realization of delay and other elements of correlators: analog, digital, and analog-to-digital. In turn, the analog measurement procedures can be divided on the methods employing a representation of investigated stochastic process both as the continuous process and as the sampled process. As a rule, physical delays are used by analog methods with continuous representation of investigated stochastic process. Under discretization of investigated stochastic process in time, the physical delay line can be changed by corresponding circuits. While using digital procedures to measure the correlation function estimate, the stochastic process is sampled in time and transformed into binary number by analog-to-digital conversion. Further operations associated with the signal delay, multiplication, and integration are carried out by the shift registers, summator, and so on.

Images

FIGURE 15.2 Multichannel correlator.

Define the statistical characteristics of the correlation function estimate (the bias and variance) given by (15.2). Averaging the correlation function estimate by an ensemble of realizations we obtain the unbiased correlation function estimate. The variance of correlation function estimate can be presented in the following form:

Var{R*(τ)}=1T2T0T0x(t1)x(t1τ)x(t2)x(t2τ)dt1dt2R2(τ).(15.7)

The fourth moment 〈x(t1)x(t1 − τ)x(t2)x(t2 − τ)〉 of Gaussian stochastic process can be determined as

x(t1)x(t1τ)x(t2)x(t2τ)=R2(τ)+R2(t2t1)+R(t2t1τ)R(t2t1+τ).(15.8)

Substituting (15.8) in (15.7) and transforming the double integral into a single one by introduction of the new variable t2t1 = τ, we obtain

Var{R*(τ)}=2TT0(1zT)[R2(z)+R(zτ)(z+τ)]dz.(15.9)

If the observation time interval [0, T] is much more in comparison with the correlation interval of the investigated stochastic process, (15.9) can be simplified as

Var{R*(τ)}=2TT0[R2(z)+R(zτ)R(z+τ)]dz.(15.10)

Thus, we can see that maximum value of variance of the correlation function estimate corresponds to the case τ = 0 and is equal to the variance of stochastic process variance estimate given by (13.61) and (13.62). Minimum value of variance of the correlation function estimate corresponds to the case when τ ≫ τcor and is equal to one half of the variance of the stochastic process variance estimate.

In principle, we can obtain the correlation function estimate, the variance of which tends to approach to zero as τ → T. In this case, the estimate

˜R(τ)=1TT|τ|0x(t)x(tτ)dt(15.11)

can be considered as the correlation function estimate when the mathematical expectation of stochastic process is equal to zero. This correlation function estimate is characterized by the bias

b[˜R(τ)]=|τ|TR(τ)(15.12)

and variance

Var{˜R(τ)}=2TT|τ|0(1z+|τ|T)[R2(z)+R(zτ)R(z+τ)]dz.(15.13)

In practice, a realization of the estimate given by (15.11) is more difficult to achieve compared to a realization of the estimate given by (15.2) since we need to change the value of the integration limits or observation time interval simultaneously with changing in the delay τ employing the one-channel measurer or correlator. Under real conditions of correlation function measurement, the observation time interval is much longer compared to the correlation interval of stochastic process. Because of this, the formula (15.13) can be approximated by the formula (15.10). Note that the correlation function measurement process is characterized by dispersion of estimate. If the correlation function estimate is given by (15.11), in the limiting case, the dispersion of estimate is equal to the square of estimate bias. For example, in the case of exponential correlation function given by (12.13) and under the condition Tτcor, based on (15.10) we obtain

Var{R*(τ)}=σ4αT[1+(2ατ+1)exp{2ατ}].(15.14)

As applied to the estimate given by (15.11) and the exponential correlation function, the variance of correlation function estimate can be presented in the following form [3]:

Var{˜R(τ)}=σ4αT[1+(2ατ+1)exp{2ατ}]σ42α2T2[2ατ+1(4ατ+6α2T2)exp{2ατ}],(15.15)

when the conditions Tτcor and τ ≫ T are satisfied. Comparing (15.15) and (15.14), we can see that the variance of correlation function estimate given by (15.11) is lesser than the variance of correlation function estimate given by (15.2). When αT ≫ 1, we can discard this difference because it is defined as (αT)−1.

In addition to the ideal integrator, any linear system can be used as an integrator to obtain the correlation function estimate analogously as under the definition of estimates of the variance and the mathematical expectation of the stochastic process. In this case, the function

R*(τ)=c0h(z)x(tz)x(tτz)dz(15.16)

can be considered as the correlation function estimate, where, as before, the constant c is chosen from the condition of estimate unbiasedness

c0h(z)dz=1.(15.17)

As applied to the Gaussian stochastic process, the variance of correlation function estimate as t → ∞ can be presented in the following form:

Var{R*(τ)}=c2T0T0h(z1)h(z2)[R2(z2z1)+R(z2z1τ)R(z2z1+τ)]dz1dz2.(15.18)

Introducing new variables z2z1 = z, z1 = υ we obtain

Var{R*(τ)}=c2T0[R2(z)+R(zτ)R(z+τ)]dzTτ0h(z+υ)h(υ)dυ+0T[R2(z)+R(zτ)R(z+τ)]dzT0h(z+υ)h(υ)dυ.(15.19)

Introducing the variables y = −z, υ − y and as T → ∞, we obtain

Var{R*(τ)}=c20[R2(z)+R(zτ)R(z+τ)]rh(z)dz,(15.20)

where the function rh(z) is given by (12.131) as T − τ → ∞.

Now, consider how a sampling in time of stochastic process acts on the characteristics of correlation function estimate assuming that the investigated stochastic process is a stationary process. If a stochastic process with zero mathematical expectation is observed and investigated at discrete instants, the correlation function estimate can be presented in the following form:

R*(τ)=1NNi=1x(ti)x(tiτ),(15.21)

where N is the sample size. The correlation function estimate is unbiased, and the variance of correlation function estimate takes the following form:

Var{R*(τ)}=1N2Ni=1Nj=1x(ti)x(tiτ)x(tj)x(tjτ)R2(τ).(15.22)

As applied to the Gaussian stochastic process, the general formula, by analogy with the formula for the variance of correlation function estimate under continuous observation and analysis of stochastic process realization, is simplified and takes the following form:

Var{R*(τ)}=2NNi=1(1iN)[R2(iTp)+R(iTpτ)(iTp+τ)],(15.23)

where we assume that the samples are taken over equal time intervals, that is, T = titi−1.

If samples are pairwise independent, a definition of the variance of correlation function estimate given by (15.22) can be simplified; that is, we can use the following representation:

Var{R*(τ)}=1N2Ni=1x2(ti)x2(tiτ)1NR2(τ).(15.24)

As applied to the Gaussian stochastic process with pairwise independent samples, we can write

Var{R*(τ)}=σ4N[1+2(τ)],(15.25)

where ℛ(τ) is the normalized correlation function of the observed stochastic process. As we can see from (15.25), the variance of correlation function estimate increases with an increase in the absolute magnitude of the normalized correlation function.

The obtained results can be generalized for the estimate of mutual correlation function of two mutually stationary stochastic processes, the realizations of which are x(t) and y(t), respectively:

R*xy(τ)=1TT0x(t)y(tτ)dt.(15.26)

At this time, we assume that the investigated stochastic processes are characterized by zero mathematical expectations. Flowcharts to measure the mutual correlation function of stochastic processes are different from the flowcharts to measure the correlation functions presented in Figures 15.1 and 15.2 by the following: The processes x(t) and y(t − τ) or x(t − τ) and y(t) come in at the input of the mixer instead of the processes x(t) and x(t − τ). The mathematical expectation of mutual correlation function estimate can be presented in the following form:

R*xy(τ)=1TT0x(t)y(tτ)dt=Rxy(τ)(15.27)

that means the mutual correlation function estimate is unbiased.

As applied to the Gaussian stochastic process, the variance of mutual correlation function estimate takes the following form:

Var{R*xy(τ)}=1T2T0T0{Rxx(t2t1)Ryy(t2t1)+Rxy[τ(t2t1)]Rxy[τ+(t2t1)]}dt1dt2.(15.28)

As before, we introduce the variables t2t1 = z and t1 = υ and reduce the double integral to a single one. Thus, we obtain

{R*xy(τ)}=2TTT(1zT)[Rxr(z)Ryy(z)+Rxy(τz)Rxy(τ+z)]dz.(15.29)

At Tτcorx, Tτcory, Txcorxy, where xcorxy is the interval of mutual correlation of stochastic processes and is determined analogously to (12.21), the integration limits can be expanded until [0, ∞) and we can neglect z/T in comparison with unit. Since the mutual correlation function can reach the maximum value at τ ≠ 0, the maximum variance of mutual correlation function estimate can be obtained at τ ≠ 0.

Now, consider the mutual correlation function between stochastic processes both at the input of linear system with the impulse response h(t) and at the output of this linear system. We assume the input process is the “white”. Gaussian stochastic process with zero mathematical expectation and the correlation function

Rxx(τ)=N02δ(τ).(15.30)

Denote the realization of stochastic process at the linear system input as x(t). Then, a realization of stochastic process at the linear system output in stationary mode can be presented in the following form:

y(t)=0h(υ)x(tυ)dυ=0h(tυ)x(υ))dυ.(15.31)

The mutual correlation function between x(t − τ) and y(t) has the following form:

Ryx(τ)=y(t)x(tτ)=0h(υ)Rxr(υτ)dυ={0.5N0h(τ),τ0,0,τ<0.(15.32)

In (15.32) we assume that the integration process covers the point υ = 0.

As we can see from (15.32), the mutual correlation function of the stochastic process at the output of linear system in stationary mode, when the “white”. Gaussian noise excites the input of linear system, coincides with the impulse characteristic of linear system accurate with the constant factor. Thus, the method to measure the impulse response is based on the following relationship:

h*(τ)=2N0R*yx(τ)=2N01TT0y(t)x(tτ)dt,(15.33)

Images

FIGURE 15.3 Measurement of linear system impulse response.

where y(t) is given by (15.31). The measuring process of the impulse characteristic of linear system is shown in Figure 15.3. Operation principles are clear from Figure 15.3.

Mathematical expectation of the impulse characteristic estimate

h*(τ)=2N0TT00h(υ)x(tυ)x(tτ)dυdt=h(τ),(15.34)

that is, the impulse response estimate is unbiased. The variance of impulse characteristic estimate takes the following form:

Var{h*(τ)}=4N20T2T0T0y(t1)y(t2)x(t1τ)x(t2τ)dt1dt2h2(τ).(15.35)

Since the stochastic process at the linear system input is Gaussian, the stochastic process at the output of the linear system is Gaussian, too. If the condition T ≫ τcory is satisfied, where τcory is the correlation interval of the stochastic process at the output of linear system, then the stochastic process at the output of linear system is stationary. For this reason, we can use the following representation for the variance of impulse characteristic estimate

Var{h*(τ)}=4N20T2T0T0[Rxx(t2t1)Ryy(t2t1)+Ryx(t2t1τ)Ryx(t2t1+τ)]dt1dt2.(15.36)

Introducing new variables t2t1 = z, as before, the variance of impulse characteristic estimate can be presented in the following form:

Var{h*(τ)}=8N20TT0(1zT)[Rxx(z)Ryy(z)+Ryx(τz)Ryx(τ+z)]dt1dt2+2TT0(1zT)h(τz)h(τ+z)dz,(15.37)

where [3]

Ryy(z)=N020h(υ)h(υ+|z|)dυ(15.38)

is the correlation function of the stochastic process at the linear system output in stationary mode. While calculating the second integral in (15.37), we assume that the observation time interval [0, T] is much more compared to the correlation interval of the stochastic process at the output of linear system τcory. For this reason, in principle, we can neglect the term zT−1 compared to the unit. In this case, the upper integration limits can be approximated by ∞. However, taking into consideration the fact that at τ < 0 the impulse response of linear system is zero, that is, h(τ) = 0, the integration limits with respect to the variable z must satisfy the following conditions:

{0<z<,τz>0,τ+z>0.(15.39)

Based on (15.39), we can find that 0 < z < τ. As a result, we obtain

Var{h*(τ)}=2T{0h2(υ)dυ+τ0h(τυ)h(τ+υ)dυ}.(15.40)

As applied to the impulse responses of the form

h1(τ)=1T1,0<τ<T1,T>T1(15.41)

and

h2(τ)=αexp(ατ),(15.42)

the variances of impulse characteristic estimates can be presented in the following form:

Var{h*1(τ)}=2TT1×{1+τT1,0τ0.5T1,2τT1,0.5T1τT1,(15.43)

Var{h*2(τ)}=αT[1+2ατ×exp{2ατ}].(15.44)

15.2 Correlation Function Estimation Based on its Expansion in Series

The correlation function of stationary stochastic process can be presented in the form of expansion in series with respect to earlier-given normalized orthogonal functions φ(t):

R(τ)=k=0αkφk(τ),(15.45)

where the unknown coefficients αk are given by

αk=φk(τ)R(τ)dτ,(15.46)

and (14.123) is true in the case of normalized orthogonal functions. The number of terms of the expansion in series (15.45) is limited by some magnitude v. Under approximation of correlation function by the expansion in series with the finite number of terms, the following error

ε(τ)=R(τ)vk=0αkφk(τ)=R(τ)Rv(τ)(15.47)

exists forever. This error can be reduced to an earlier-given negligible value by the corresponding selection of the orthogonal functions φ(t) and the number of terms of expansion in series.

Thus, the approximation accuracy will be characterized by the total square of approximated correlation function Rv(τ) deviation from the true correlation function R(i) for all possible values of the argument τ

ε2=ε2(τ)dτR2(τ)dτvk=0α2k.(15.48)

Formula (15.48) is based on (15.45). The original method to measure the correlation function based on its representation in the form of expansion in series

R*v(τ)=vk=0α*kφk(τ)(15.49)

by the earlier-given orthogonal functions φk(t) and measuring the weight coefficients αk was discussed in Ref. [4]. According to (15.46), the following representation

αk=limTφk(τ){1TT0x(t)x(tτ)dt}dτ(15.50)

is true in the case of ergodic stochastic process with zero mathematical expectation. In line with this fact, the estimates of unknown coefficients of expansion in series can be obtained based on the following representation:

α*k=1TT0x(t){0x(tτ)φk(τ)dτ}dt.(15.51)

The integral in the braces

yk(t)=0x(tτ)φk(τ)dτ(15.52)

is the signal at the output of linear filter operating in stationary mode with impulse response given by

hk(t)=π{0ift<0,φk(t)ift0,(15.53)

matched with the earlier-given orthogonal function φ(t). As we can see from (15.51), the mathematical expectation of estimate

α*k=0φk(τ){1TT0x(t)x(tτ)dt}dτ=0φk(τ)R(τ)dτ=αk(15.54)

is matched with the true value; in other words, the estimate of coefficients of expansion in series is unbiased.

The estimate variance of expansion in series coefficients can be presented in the following form:

Var{α*k}=1T2T0T0x(t1)x(t2)yk(t1)yk(t2)dt1dt2{1TT0x(t)yk(t)dt}2.(15.55)

As applied to the stationary Gaussian stochastic process, the stochastic process forming at the output of orthogonal filter will also be the stationary Gaussian stochastic process for the considered case. Because of this, we can write

Var{α*k}=1T2T0T0[R(t2t1)Ryk(t2t1)+Rxyk(t2t1)Rykx(t2t1)]dt1dt2,(15.56)

where

Ryk(τ)=00R(τ+vκ)φk(κ)φk(v)dκdv;(15.57)

Rxyk(τ)=0R(τv)φk(v)dv;(15.58)

Rykx(τ)=0R(τ+v)φk(v)dv.(15.59)

Introducing new variables t2t1 = τ, t1 = t and changing the order of integration analogously as shown in Section 12.4, we obtain

Var{α*k}=2TT0(1τT)T0T0[R(τ)Ryk(τ)+Rxyk(τ)Rykx(τ)]dτ.(15.60)

Images

FIGURE 15.4 One-channel measurer of coefficients αk.

Taking into consideration that

(α*k)2=Var{α*k}+α2k(15.61)

and the condition given by (14.123), we can define the “integral” variance of correlation function estimate

Var{R*v(τ)}=[R*v(τ)R*v(τ)]2 dτ=vk=0Var{α*k}.(15.62)

As we can see from (15.62), the variance of correlation function estimate increases with an increase in the number of terms of expansion in series v. Because of this, we must take into consideration choosing the number of terms under expansion in series.

The foregoing formulae allow us to present the flowchart of correlation function measurer based on the expansion of this correlation function in series and the estimate of coefficients of this expansion in series Figure 15.4 represents the one-channel measurer of the current value of the coefficient α*k. Operation of measurer is clear from Figure 15.4. One of the main elements of block diagram of the correlation function measurer is the generator of orthogonal signals or functions (the orthogonal filters with the impulse response given by [15.53]). If the pulse with short duration τp that is much shorter than the filter constant time and amplitude τ-1p excites the input of the orthogonal filter, then a set of orthogonal functions φk(t) are observed at the output of orthogonal filters.

The flowchart of correlation function measurer is shown in Figure 15.5. Operation control of the correlation function measurer is carried out by the synchronizer that stimulates action on the generator of orthogonal signals and allows us to obtain the correlation function estimate with period that is much longer compared to the orthogonal signal duration.

The functions using the orthogonal Laguerre polynomials given by (14.72) and presented in the following form

Lk(αt)=exp{αt}dk[tkexp{αt}]dtk=kμ=0(αt)μ(k!)2(kμ)!(μ!)2(15.63)

are the simplest among a set of orthogonal functions, where α characterizes a polynomial scale in time. To satisfy (14.123), in this case, the orthogonal functions φk(t) take the following form:

φk(t)=1k!αexp{0.5αt}Lk(αt).(15.64)

Images

FIGURE 15.5 Measurer of correlation function.

Carrying out the Laplace transform

φk(p)=0exp{pt}φk(t)dt,(15.65)

we can find that the considered orthogonal functions correspond to the transfer functions

φk(p)=2α×0.5αp+0.5α[p0.5αp+0.5α]k.(15.66)

The multistep filter based on RC elements, that is, α = 2(RC)−1, which has the transfer characteristic given by (15.66) accurate with the constant factor 2α−0.5 is shown in Figure 15.6. The phase inverter is assigned to generate two signals with equal amplitudes and shifted by phase on 90° and the amplifiers compensate attenuation in filters and ensure decoupling between them.

If the stationary stochastic process is differentiable ν times, the correlation function R(τ) of this process can be approximated by expansion in series about the point τ = 0:

R(τ)Rv(τ)=vi=0d2iR(τ)dτ2i|τ=0×τ2i(2i)!.(15.67)

The approximation error of the correlation function is defined as

ε(τ)=R(τ)Rv(τ).(15.68)

The even 2ith derivatives of the correlation function at the point τ = 0 accurate within the coefficient (−1)i are the variances of ith derivatives of the stochastic process

(1)id2iR(τ)dτ2i|τ=0=[diξ(t)dti]2=Vari.(15.69)

Images

FIGURE 15.6 Multistep RC filter.

As applied to the ergodic stochastic process, the coefficients of expansion in series given by (15.67) can be presented in the following form:

αi=d2iR(τ)dτ2i|τ=0=(1)ilimT1TT0{dix(t)dti}2dt.(15.70)

In the case when the observation time interval is finite, the estimate α*i of the coefficient αi is defined as

α*i=(1)i1TT0{dix(t)dti}2dt.(15.71)

The correlation function estimate takes the following form:

R*(τ)=vi=1α*iτ2i(2i)!.(15.72)

Flowchart of measurer based on definition of the coefficients of correlation function expansion in power series is shown in Figure 15.7. The investigated realization x(t) is differentiable ν times. The obtained processes yi(t) = dix(t)/dti are squared and integrated within the limits of the observation time interval [0, T] and come in at the input of calculator with corresponding signs. According to (15.72), the correlation function estimate is formed at the calculator output. Define the statistical characteristics of the coefficient estimate α*i. The mathematical expectation of the estimate α*i has the following form:

Images

FIGURE 15.7 Correlation function measurement.

α*i=(1)i1TT0{dix(t)dti}2dt.(15.73)

We can see that

{dix(t)dti}2=2iR(t1t2)ri1ri2|t1=t2=t=(1)id2iR(τ)dτ2i|τ=0.(15.74)

An introduction of new variables t2t1 = τ, t2 = t is made. Substituting (15.74) into (15.73), we see that the estimates of coefficients of expansion in series are unbiased. The correlation function of estimates is

Rip=α*iα*pα*iα*p,(15.75)

where

α*iα*p=(1)(i+p)1T2T0T0{dix(t1)dti1}2{dpx(t1)dtp2}2dt1dt2.(15.76)

As applied to the Gaussian stochastic process, its derivative is Gaussian too. Because of this, we can write

{dix(t1)dti1}2{dpx(t1)dtp2}2={dix(t1)dti1}2{dpx(t1)dtp2}2+2{(i+p)x(t1)x(t2)ri1rp2}2=Vari×Varp+2{(i+p)R(t2t1)ri1rp2}2.(15.77)

Substituting (15.77) into (15.76) and then into (15.75), we obtain

Rip=(1)(i+p)2T2T0T0{(i+p)R(t2t1)ri1rp2}2dt1dt2.(15.78)

Introducing new variables t2t1 = τ, t2 = t and changing the order of integration, we obtain

Rip=4TT0(1τT){d(i+p)R(τ)dτ(i+p)}2dτ.(15.79)

If the observation time interval is much longer than the correlation interval of stochastic process and its derivatives, we can write

Rip=2T{d(i+p)R(τ)dτ(i+p)}2dτ.(15.80)

As applied to the conditions, for which (15.80) is appropriate, the derivatives of the correlation function can be written using the spectral density S(ω) of the stochastic process:

d(i+p)R(τ)dτ(i+p)=12π(jω)(i+p)S(ω)exp{jωτ}dω.(15.81)

Then

Rip=1Tπω2(i+p)S2(ω)dω.(15.82)

The variance of the estimate α*i of expansion in series coefficients can be presented in the following form:

Var{α*i}=1Tπω4iS2(ω)dω.(15.83)

Let us define the deviation of correlation function estimate from the approximated value, namely,

ε(τ)=Rv(τ)R*v(τ).(15.84)

Averaging ɛ(τ) by realizations of the investigated stochastic process, we can see that in the considered case 〈ɛ(τ)〉 = 0, which means the bias of correlation function estimate does not increase due to the finite observation time interval.

The variance of correlation function estimate can be presented in the following form:

Var{R*v(τ)}=vi=1vp=1τ2(i+p)(2i)!(2p)!4TT0(1τT){d(i+p)R(τ)dτ(i+p)}2dτ.(15.85)

At Tτcor, we can write

Var{R*v(τ)}=1Tπvi=1vp=1τ2(i+p)(2i)!(2p)!ω2(i+p)S2(ω)dω.(15.86)

Let the correlation function of the stochastic process be approximated by

R(τ)=σ2exp{α2τ2},(15.87)

which corresponds to the spectral density defined as

S(ω)=σ2παexp{ω24α2}.(15.88)

Substituting (15.88) into (15.86), we obtain

Var{R*v(τ)}=σ42πTαvi=1vp=1[2(i+p)1]!!(2i)!(2p)!(ατ)2(i+p),(15.89)

where

[2(i+p)1]!=1×3×5×...×[2(i+p)1].(15.90)

As we can see from (15.89), the variance of correlation function estimate increases with an increase in the number of terms under correlation function expansion in power series.

15.3 Optimal Estimation of Gaussian Stochastic Process Correlation Function Parameter

In some practical conditions, the correlation function of stochastic process can be measured accurately with some parameters defining a character of its behavior. In this case, a measurement of correlation function can be reduced to measurement or estimation of unknown parameters of correlation function. Because of this, we consider the optimal estimate of arbitrary correlation function parameter assuming that the investigated stochastic process. ξ(t) is the stationary. Gaussian stochastic process observed within the limits of time interval [0, T] in the background of Gaussian stationary noise ζ(t) with known correlation function.

Thus, the following realization

y(t)=x(t,l0)+n(t),0tT(15.91)

comes in at the measurer input, where x(t, l0) is the realization of the investigated Gaussian stochastic process with the correlation function Rx(t1, t2, l) depending on the estimated parameter l; n(t) is the realization of Gaussian noise with the correlation function Rn(t1, t2). True value of estimated parameter of the correlation function Rx(t1, t2, l) is l0. Thus, we assume that the mathematical expectation of both the realization x(t, l0) and the realization n(t) is equal to zero and the realizations. x(t, l0) and n(t) are statistically independent of each other. Based on input realization, the optimal receiver should form the likelihood ratio functional Λ(l) or some monotone function of the likelihood ratio. The stochastic process η(t) with realization given by (15.91) is the Gaussian stochastic process with zero mathematical expectation and the correlation function

Ry(t1,t2,l)=Rx(t1,t2,l)+Rn(t1,t2).(15.92)

As applied to the investigated stochastic process η(t), the likelihood functional can be presented in the following form [5]:

Λ(l)=exp{12T0T0y(t1)y(t2)[ϑn(t1,t2)ϑx(t1,t2;l)]dt1dt212H(l)}.(15.93)

We can write the derivative of the function H(l) in the following form:

dH(l)dl=T0T0Rx(t1,t2,l)lϑx(t1,t2;l)dt1dt2(15.94)

and the functions ϑx(t1, t2; l) and ϑn(t1, t2) can be found from the following equations:

T0[Rx(t1,t2,l)+Rn(t1,t2)]ϑx(t1,t2;l)dt=δ(t2t1),(15.95)

T0Rn(t1,t2)ϑn(t1,t2)dt=δ(t2t1).(15.96)

Evidently, we can use the logarithmic term of the likelihood functional depending on the observed data as the signal at the receiver output

M1(l)=12T0T0y(t1)y(t2)ϑ(t1,t2;l)dt1dt2,(15.97)

where

ϑ(t1,t2;l)=ϑn(t1,t2)ϑx(t1,t2;l).(15.98)

We suppose that the correlation intervals of the stochastic processes ξ(t) and ζ(t) are sufficiently small compared to the observation time interval [0, T]. In this case, we can use infinite limits of integration. Under this assumption, we have

{ϑx(t1,t2;l)=ϑx(t1t2;l),ϑn(t1,t2)=ϑn(t1t2),(15.99)

and

ϑ(t1,t2;l)=ϑ(t1t2;l).(15.100)

Introducing new variables t2t1 = τ, t2 = t and changing the order of integration, we obtain

M1(l)=12{T0ϑ(τ;l)Tτ0y(t+τ)y(t)dtdτ+0Tϑ(τ;l)T0y(t+τ)y(t)dtdτ}.(15.101)

Introducing new variables τ = −τ′, t′ = t + τ = τ − τ′ and taking into consideration that the correlation interval of stochastic process η(t) is shorter compared to the observation time interval [0, T], we can write

M1(l)=TT0R*y(τ)ϑ(τ;l)dτ,(15.102)

where

R*y(τ)=1TTτ0y(t)y(t+τ)dt1TT0y(t)y(tτ)dt(15.103)

is the correlation function estimate of investigated input process consisting of additive mixture of the signal and noise.

Applying the foregoing statements to (15.94), we obtain

dH(l)dl=2TT0Rx(τ;l)lϑx(τ;l)dτ.(15.104)

Substituting (15.102) and (15.104) into (15.93), we obtain

Λ(l)=exp{TT0R*y(τ)ϑ(τ;l)dτ12H(l)}.(15.105)

In the case, when the observation time interval [0, T] is much longer compared to the correlation interval of the investigated stochastic process, we can use the spectral representation of the function given by (15.100)

ϑ(τ;l)=12π[ϑn(ω)ϑx(ω;l)]exp{jωτ}dω.(15.106)

Then (15.104) takes the following form:

dH(l)dl=T2πT0Sx(ω;l)lϑx(ω;l)dω.(15.107)

In (15.106) and (15.107), ϑn(ω), ϑx(ω; l), and Sx(ω; l) are the Fourier transforms of the corresponding functions ϑn(τ), ϑx(τ; l), and Rx(τ; l). Applying the Fourier transform to (15.95) and (15.96), we obtain

{{Rn(ω)ϑn(ω)=1,ϑx(ω;l)[Sn(ω)+Rx(ω;l)]=1.(15.108)

Taking into consideration (15.107) and (15.108), we can write

ϑ(τ;l)=12πSx(ω;l)S1n(ω)Sx(ω;l)+Sn(ω)exp{jωτ}dω,(15.109)

H(l)=T2πln[1+Sx(ω;l)Sn(ω)]dω.(15.110)

The signal at the optimal receiver output takes the following form:

M(l)=TT0R*y(τ)ϑ(τ;l)dτ12H(l).(15.111)

Flowchart of the optimal measurer is shown in Figure 15.8. This measurer operates in the following way. The correlation function R*y(τ) is defined based on the input realization of additive mixture of the signal and noise. This correlation function is integrated as the weight function with the signal ϑ(τ; l). The signals forming at the outputs of the integrator and generator of the function H(l) come

Images

FIGURE 15.8 Optimal measurement of correlation function parameter.

in at the input of summator. Thus, the receiver output signal is formed. The decision device issues the value of the parameter lm, under which the output signal takes the maximum value.

If the correlation function of the investigated Gaussian stochastic process has several unknown parameters l = {l1, l2, …, l}, then the likelihood ratio functional can be found by (15.93) changing the scalar parameter l. on the vector parameter l and the function H(l) is defined by its derivatives

H(l)li=T0T0Rx(t1,t2;l)liϑx(t1,t2;l)dt1dt2.(15.112)

The function ϑx(t1, t2; l) is the solution of integral equation that is analogous to (15.95).

To obtain the estimate of maximum likelihood ratio of the correlation function parameter the optimal measurer (receiver) should define an absolute maximum lm of logarithm of the likelihood ratio functional

M(l)=12T0T0y(t1)y(t2)[ϑn(t1,t2)ϑx(t1,t2,l)]dt1dt212H(l).(15.113)

To define the characteristics of estimate of the maximum likelihood ratio lm introduce the signal

s(l)=M(l)(15.114)

and noise

n(l)=M(l)M(l)(15.115)

functions. Then (15.113) takes the following form:

M(l)=s(l)+n(l).(15.116)

Prove that if the noise component is absent in (15.116), that is, n(l) = 0, the logarithm of likelihood ratio functional reaches its maximum at l = l0, that is, when the estimated parameter takes the true value. Define the first and second derivatives of the signal function given by (15.114) at the point l = l0:

ds(l)dl|l=l0=dM(l)dl|l=l0.(15.117)

Substituting (15.113) into (15.117) and averaging by realizations y(t), we obtain

ds(l)dl|l=l0={12T0T0[Rn(t1,t2)+Rx(t1,t2;l)]ϑx(r1,r2;l)ldt1dt212T0T0ϑx(t1,t2;l)Rx(t1,t2;l)ldt1dt2}l=l0=12{ddtT0T0[Rn(t1,t2)+Rx(t1,t2;l)]ϑx(t1,t2;l)dt1dt2}l=l0.(15.118)

Since

R(t1,t2;l)=Rn(t1,t2)+Rx(t1,t2;l)=R(t2,t1;l),(15.119)

in accordance with (15.95) we have

ds(l)dl|l=l0=12T0{ddlT0[Rn(t2,t1)+Rx(t2,t1:l)]ϑx(t1,t2;l)dt1}l=l0dt2=0.(15.120)

Now, let us define the second derivative of the signal component at the point l = l0:

d2s(l)dl2|l=l0=d2M(l)dl2|l=l0=12{T0T0Rx(t2,t1:l)lϑx(t1,t2;l)ldt1dt2d2dl2T0T0[Rn(t2,t1)+Rx(t2,t1:l)]ϑx(t1,t2;l)dt1dt2}l=l0(15.121)

In accordance with (15.95) we can write

{d2dl2T0T0[Rn(t2,t1)+Rx(t2,t1:l)]ϑx(t1,t2;l)dt1dt2}l=l0   =T0{d2dl2T0[Rn(t2,t1)+Rx(t2,t1:l)]ϑx(t1,t2;l)dt1}l=l0dt2=0.(15.122)

Because of this

d2s(l)dl2|l=l0=12{T0T0Rx(t2,t1:l)lϑx(t1,t2;l)ldt1dt2}l=l0.(15.123)

Let us prove that the condition

d2s(l)dl2|l=l0<0(15.124)

is satisfied forever. For this purpose, we define the averaged quadratic first derivative of the likelihood ratio functional logarithm at the point l = l0

m2={dM(l)dl|l=l0}2={dN(l)dl|l=l0}2,(15.125)

which is a positive value. Substituting (15.91) into (15.113), differentiating by l, and averaging by realizations y(t), we obtain the second central moment of the first derivative of the likelihood ratio functional logarithm:

2l1l2[M(l1)M(l1)][M(l2)M(l2)]=12T0T0T0T0[Rn(t1,t3)+Rx(t1,t3;l0)][Rn(t2,t4)+Rx(t2,t4;l0)]×ϑx(r1,r2;l1)l1ϑx(t3,t4;l2)l2dt1dt2dt3dt4.(15.126)

Assuming l2 = l1 = l0 and taking into consideration that

T0[Rn(t1,t)+Rx(t1,t;l0)]ϑx(t1,t2;l)l1dt=T0ϑx(t,t2;l)Rx(t1,t;l)ldt,(15.127)

we obtain

m2={12T0T0T0T0[Rn(t2,t4)+Rx(t2,t4;l0)]ϑx(t1,t2;l)Rx(t1,t3;l)lϑx(t3,t4;l)ldtldt2dt3dt4}l=l0={12T0T0Rx(t1,t2;l)lϑx(t1,t2;l)ldt1dt2}l=l0.(15.128)

In (15.128) we have implemented (15.95) again. Comparing (15.128) with (15.123), we can see that

d2s(l)dl2|l=l0=m2(15.129)

and, consequently, (15.124) is satisfied forever

Introduce the signal-to-noise ratio (SNR)

SNR=s2(l0)n2(l0)(15.130)

and the normalized signal and noise functions

{S(l)=s(l)s(l0),N(l)=n(l)n2(l0).(15.131)

Taking into consideration (15.120), (15.124), and (15.131), we can see that

{maxS(l)=S(l0)=1,N2(l0)=1.(15.132)

In addition, as follows from the definition, the mathematical expectation of the noise function n(l) is zero. Taking into consideration the introduced notations, we can write the logarithm of the likelihood ratio functional in the following form:

M(l)=s(l0)[S(l)+εN(l)],(15.133)

where ɛ=1/SNR. Taking into consideration (15.133), the likelihood ratio equation for the estimate of correlation function parameter of Gaussian stochastic process can be presented in the following form:

{dS(l)dl+εdN(l)dl}l=lm=0.(15.134)

Usually, under the measurement of stochastic process parameters, SNR is high and, consequently, ɛ ≪ 1. Then, by analogy with Ref. [5], the approximated solution of likelihood ratio equation can be searched in the form of expansion in power series

lm=l0+εl1+ε2l2+ε3l3+....(15.135)

To define the approximations l1, l2, l3 and grouping the terms with small value ɛ of the same power, we obtain

s1+ε(l1s2+n1)+ε2(l2s2+l1n2+0.5l21s3)+ε3(l3s2+l2n2+0.5l21n3+l31s46+l1l3s2)+...=0,(15.136)

where we use the following notations:

{si=diS(l)dli|l=l0,ni=diN(l)dli|l=l0.(15.137)

Since the system of functions 1, x, x2, ….is linearly independent, the equality given by (15.136) is satisfied for any ɛ if and only if all coefficients of terms with power equal to ɛ are equal to zero. Zero approximation is matched with the true value of the parameter l0 since S(l) reaches its absolute maximum at

l=l0.(15.138)

Equating to zero the coefficients at ɛ, ɛ2, and ɛ3, we obtain equations to define l1, l2, and l3. We can write solutions of these equations in the following form:

{l1=n1s2,l2=l1n2+0.5l21s3s2,l3=l2n2+05l21n3+61l31s4+l1l2s3s2.(15.139)

Taking into consideration the first three approximations l1, l2, and l3, the conditional bias and variance of maximum likelihood ratio take the following form:

b(lm|l0)=εl1+ε2l2+ε3l3,(15.140)

Var{lm|l0}=ε2[l21l12]+2ε3[l1l2l1l2]+ε4[l22l22+2l1l32l1l3].(15.141)

Averaging is carried out by all possible realizations of the total stochastic process η(t) at the fixed value of estimated parameter l0. The relative error of estimate bias and variance that can be defined as the ratio of the first term with small order to the first term of expansion takes the order ɛ2.

We are limited by consideration of the first approximation. In doing so, the random error of a single measurement can be presented in the following form:

Δl=lml0=εl1=εdN(l)dld2S(l)dl2|l=l0=dn(l)dld2s(l)dl2|l=l0.(15.142)

For the first approximation, the estimate of arbitrary parameter of correlation function will be unbiased, since 〈n(l)〉 = 0. Taking into consideration (15.128) and (15.129), the variance of estimate can be presented in the following form:

Var{lm|l0}=2l1l2[n(l1)n(l2)|l=l0[d2S(l)dl2]2|l=l0=m2.(15.143)

If the observation time interval is much longer than the correlation interval of the investigated stochastic process η(t), a flowchart of optimal measurer is significantly simplified. In this case, the logarithm of the likelihood ratio functional is given by (15.111), where the signal function can be described in the following form:

s(l)=TTTR*y(τ)ϑ(τ;l)dτ12H(l).(15.144)

The first term in (15.144) can be presented in the following form:

T0R*y(τ)ϑ(τ;l)dτ=12TT[Rn(τ)+Rx(τ;l0)]ϑ(τ;l)dτ12[Rn(τ)+Rx(τ;l0)]ϑ(τ;l)dτ=14π[Sn(ω)+Sx(ω;l0)]ϑ(ω;l)dω=14πSx(ω;l)[Sn(ω)+Sx(ω;l0)]Sn(ω)[Sn(ω)+Sx(ω;l)]dω.(15.145)

Substituting (15.145) into (15.114) and taking into consideration (15.110), we obtain the signal function in the following form:

s(l)=T4π{Sx(ω;l)[Sn(ω)+Sx(ω;l0)]Sn(ω)[Sn(ω)+Sx(ω;l)]ln[1+Sx(ω;l)Sn(ω)]}dω.(15.146)

The signal function reaches its maximum at l = l0:

s(l0)=T4π{Sx(ω;l0)Sn(ω)ln[1+Sx(ω;l0)Sn(ω)]}dω.(15.147)

Analogously, we can define the variance of noise component n(l) given by (15.115)

n2(l)=T4πS2x(ω;l0)S2n(ω)dω.(15.148)

Consequently, SNR given by (15.130) can be presented in the following form:

SNR=T4π{{Sx(ω;l0)Sn(ω)ln[1+Sx(ω;l0)Sn(ω)]}dω}2S2x(ω;l0)S2n(ω)dω.(15.149)

If SNR is high, the variance of correlation function estimate is defined by (15.143), where the value m2 given by (15.129) can be presented in the following form using the spectral density components:

m2m2=TT0Rx(τ;l)l×ϑx(τ;l)ldτ|l=l0T4πSx(ω;l)l×ϑx(ω;l)ldω|l=l0=T4π[Sx(ω;l)l]2[Sn(ω)+Sx(ω;l)]2|l=l0.(15.150)

We can define the second set of approximations for the bias and variance of arbitrary parameter estimate of the correlation function of the investigated stochastic process in accordance with (15.140) and (15.141). After cumbersome mathematical transformations, we obtain that the bias and variance of estimate take the following form:

b(lm|l0)=12J1112(J2010)2,(15.151)

Var{lm|l0}=2J2010+4[J2010]3{12J40106J2112J1113+1J2010[72[J1112]2+6J1112J301012[J3010]2]},(15.152)

where

Jpqij=T2π{iSx(ω;l)li}ql=l0{jSx(ω;l)lj}pl=l0[Sx(ω;l0)+Sn(ω)](p+q)dω.(15.153)

Comparing (15.129), (15.150), and (15.153), we see that

m2=12J2010.(15.154)

Based on the formulae obtained, we can define the statistical characteristics of the estimate of correlation function parameter α of the investigated stochastic process additively mixed with the white noise possessing the one-sided power spectral density N0

Rx(τ;α)=σ2exp{α|τ|}.(15.155)

To reduce mathematical transformations and computations, the bias of estimate of the correlation function parameter α is defined taking into consideration the second approximation given by (15.151) and the variance of estimate of the correlation function parameter α is defined taking into consideration the first approximation given by (15.129) (the first term in the right side of (15.152)).

The correlation function parameter α defines the effective bandwidth of spectral density of stochastic process, that is, Δfef = 0.25α. Thus, we can write

{Sx(ω;α)=2ασ2α2+ω2,Sn(ω)=N02.(15.156)

Introduce the following notations:

{q2=4σ2N0α,p=Tτcor,β=1+q2,(15.157)

q2 is the ratio of the investigated stochastic process variance to the white noise power within the limits of effective bandwidth of the signal; p is the ratio between the observation time interval of the investigated stochastic process and its correlation interval. Using these notations, we can write

J2010=pq4(β3β2+3β+1)α20β3(1+β)3;(15.158)

J1112=2pq4(2β3β2+4β+1)α20β3(1+β)4,(15.159)

where α0 is the true value of estimated correlation function parameter. Substituting (15.158) and (15.159) into (15.151) and (15.152), we obtain

b(αm|α0)=α0(1+β)2β3(2β3β2+4β+1)pq4(β3β2+3β+1)2;(15.160)

Var{αm|α0}=2α20(1+β)3β3pq4(β3β2+3β+1)2.(15.161)

When q2 ≪ 1 and 4σ2T N10 ≫ 1, in this case (15.160) and (15.161) are correct, then β ≈ 1 and (15.160) and (15.161) take a simple form:

b(αm|α0)3α02pq4=3N20α2032σ4T;(15.162)

Var{αm|α0}4α20pq4=N20α304σ4T.(15.163)

If q2 ≪ 1 and β ≈ q, (15.162) and (15.163) take the following form:

b(αm|α0)=α0pq2;(15.164)

Var{αm|α0}=2α20pq.(15.165)

The relative shift of estimate bias pbm | α0)/α0 and relative root-mean-square deviation (pVar{αm|α0})/2α20 as a function of ratio between the variance of investigated stochastic process to power noise q2 within the limits of effective spectral bandwidth are presented in Figures 15.9 and 15.10.

Consider the second example. For this purpose, we analyze the correlation function of the narrowband stochastic process ξ(t)

Rx(τ;v)=σ2ρen(τ)cosvτ,(15.166)

Images

FIGURE 15.9 Relative estimate bias shift as a function of the ratio between the variance of the investigated stochastic process and power noise.

Images

FIGURE 15.10 Relative root-mean-square deviation of estimate as a function of the ratio between the variance of the investigated stochastic process and power noise.

where ρen(τ) is the envelope of normalized correlation function and the condition

2πΔfef<<v(15.167)

is satisfied. We estimate the parameter ν. In the narrowband stochastic process case, the parameter ν plays a role of the central spectral density frequency. We assume that the stochastic process with the correlation function given by (15.166) is investigated in the white noise with the correlation function

Rn(τ)=N02δ(τ),(15.168)

where δ(τ) is the Dirac delta function and the observation time interval [0, T] is much longer than the correlation stochastic process interval.

In accordance with (15.111), the logarithm of likelihood ratio functional can be presented in the following form:

M(v)=M1(v)0.5H(v),(15.169)

where M1(ν) and H(ν) are given by (15.102) and (15.104), respectively. Consider the second term in (15.169) taking into consideration that

{Sn(ω)=N02,Sx(ω,v)=12σ2[1(ωv)+1(ω+v)],(15.170)

where ℱ1(ω) is the Fourier transform of the normalized correlation function envelope ρen(τ). Introducing new variable ω′ = ω - ν and taking into consideration that the investigated stochastic process is the narrowband process, we can write

H(v)Tπln[1+σ2N01(ω)]dω=const.(15.171)

Consequently, the logarithm of likelihood ratio functional accurate with the constant factor coincides with the output signal

M(v)=T0TR*y(τ)ϑ(τ;v)dτ.(15.172)

Taking into consideration (15.106), (15.109), and the fact that the stochastic process ξ(t) is the narrowband process, we obtain

ϑ(τ;v)=σ2πN0[1(ωv)+1(ω+v)]exp{jωτ}σ2[1(0)v)+1(ω+v)]+N0dω=1(τ)cosvτ,(15.173)

where

1(τ)=2σ2πN01(ω)exp{jωτ}σ21(ω)+N0dω.(15.174)

Thus, the estimation of parameter v can be carried out by position of absolute maximum of the function

M1(v)=TT0R*y(τ)˜1(τ)cosvτdτ,(15.175)

where R*y(τ) is the correlation function estimate of the total process given by (15.103).

Define the bias and variance of estimate of the parameter v limiting only by the first approximation. Under this approximation, the estimate will be unbiased and, in accordance with (15.143) and (15.150), the variance of estimate can be presented in the following form:

Var{vm|v0}=2πTσ4{d1(ω)dω}2dω[σ21(ω)+N0]2.(15.176)

If the normalized correlation function envelope takes the form

ρen(τ)=exp{α|τ|},(15.177)

the Fourier transform can be presented as

1(ω)=2αα2+ω2.(15.178)

As a result, the variance of the correlation function parameter estimate takes the following form:

Var{vm|v0}=α2(1+1+q2)31+q2q4p,(15.179)

where

q2=2σ2N0α.(15.180)

If q2 ≪ 1 and 2σ2TN10 ≫ 1, the variance of correlation function parameter estimate is simplified

Var{vm|v0}=8α2q4p.(15.181)

If q2 ≫ 1, then

Var{vm|v0}=α2p,(15.182)

or the variance of the central frequency estimate of stochastic process spectral density is inversely proportional to the product between the correlation interval and observation time interval. Figure 15.11 presents the root-mean-square deviation pVar{vm|v0}/α2 as a function of ratio between the variance of the investigated stochastic process and the power noise q2 within the limits of effective spectral bandwidth. In doing so, we assume that for all values of q2 the following inequality q2p ≫ 1 is satisfied.

The optimal estimate of stochastic process correlation function can be found in the form of estimations of the elements Rij of the correlation matrix R or elements Cij of the inverse matrix C. In the case of Gaussian stationary stochastic process with the multidimensional probability density function given by (12.169), the solution of likelihood ratio equation

fN(x1,x2,...,xN|C)Cij=0(15.183)

allows us to obtain the estimates Cij and, consequently, the estimates of elements Rij of correlation matrix R.

Images

FIGURE 15.11 Relative root-mean-square deviation pVar{vm|v0}/α2 as a function of the ratio between the variance of the investigated stochastic process and power noise.

15.4 Correlation Function Estimation Methods Based on other Principles

Under practical realization of analog correlators based on the estimate given by (15.2), a multiplication of two stochastic processes is most difficult to carry out. We discussed previously that for this purpose there is a need to use the circuits performing a multiplication in accordance with (13.201). The described flowchart consists of two quadrators. There are two methods [1] called the interference and compensation methods using a single quadrator. In doing so, it is assumed that the variance of investigated stochastic process is known very well. The interference method is based on the following relationship:

R(τ)=±{12[x(tτ)±x(t)]2σ2}.(15.184)

It is natural to use the following function to estimate the correlation function given by (15.184)

˜R(τ)=±{12TT0[x(tτ)±x(t)]2dtσ2}.(15.185)

As we can see from (15.184) and (15.185), the estimate of correlation function is not biased.

Let us define the variance of correlation function estimate assuming that the investigated process is Gaussian. Suppose that we use the sign “+” in the square brackets in (15.184) and (15.185). Then

Var{˜R(τ)}={1TT0x(t)x(tτ)dt}2R2(τ)+1T2T0T0x(t1)x(t1τ)[x2(t2)+x2(t2τ)]dt1dt22σ2R(τ)+{12TT0[x2(t)+x2(tτ)]dt}2σ4.(15.186)

The first and second terms in the right side of (15.186) represent the variance of the correlation function estimate R*(x) according to (15.2). Other terms on the right side of (15.186) characterize an increase in the variance of the correlation function estimate R*(x) according to (15.185) compared to the estimate given by (15.2). Making mathematical transformations with the introduction of new variables, as it was done earlier, we can write

Var{˜R(τ)}=Var{R*(τ)}+1TT0(1τT)×{2R2(z)+R2(z+τ)+R2(zτ)+4R(z)[R(z+τ)+R(zτ)]}dz.(15.187)

As τ → 0

Var{˜R(0)}16TT01τTR2(z)dz,(15.188)

and, in this case, the variance of correlation function estimate given by (15.185) exceeds by four times the variance of correlation function estimate given by (15.2).

If the condition. T ≫ τcor is satisfied, the variance of correlation function estimate given by (15.185) can be presented in the following form:

Var{ R˜(τ) }=2T0[R2(z)+R(z+τ)R(zτ)]dz+1T0T{ 2R2(z)+R2(z+τ)+R2(zτ)+4R(z)[R(z+τ)+R(zτ)] }dz.(15.189)

At T ≫ τ ≫ τcor, we have

Var{ R˜(τ) }6T0R2(z)dz.(15.190)

Under accepted initial conditions, we obtain

0R2(zτ)dz0R2(υ)dυ=20R2(υ)dυ.(15.191)

As applied to the exponential correlation function given by (12.13) and if the condition T ≫ τcor is satisfied, the variance of correlation function estimate given by (15.185) can be written in the following form:

Var{ R˜(τ) }=σ4αT[3+4(1+αT)exp{αT}+(1+2αT)exp{2αT}].(15.192)

Using the compensation method to measure the correlation function, the function

μ(τ,γ)= [x(tτ)γx(t)]2 (15.193)

is formed and a selection of the factor γ ensuring a minimum of the function μ(τ, γ) is performed. In doing so, the factor γ becomes numerically equal to the normalized correlation function value. Thus, likelihood defining the minimum of the function μ(τ, γ) based on the condition

dμ(τ,γ)dγ=0ifd2μ(τ,γ)dγ2>0,(15.194)

we obtain

γ= x(t)x(tτ) x2(t) =(τ).(15.195)

Consequently, the compensation measurer of correlation function should generate the function of the following form:

μ*(τ,γ)=1T0T[x(tτ)γx(t)]2dt.(15.196)

Minimizing the function μ*(τ, γ) given by (15.196) with respect to the parameter γ, we are able to obtain the estimate of normalized correlation function γ = ℛ(τ). Solving the equation

dμ*(τ,γ)dγ=0,(15.197)

we can see that the procedure to define the estimate of the correlation function R*(τ) is equivalent to the estimate that can be presented in the following form:

γ*=˜(τ)=(1/T)0Tx(t)x(tτ)dt(1/T)0Tx2(t)dt.(15.198)

As was shown in Ref. [1], as applied to the estimate by minimum of the function μ(τ, γ) given by (15.196), the requirements of quadrator are less stringent compared to the requirements of quadrators used by the previously discussed methods of correlation function measurement.

Determine the statistical characteristics of normalized correlation function estimate of the Gaussian stochastic process. For this purpose, we present the numerator and denominator in (15.198) in the following form:

1T0Tx(t)x(tτ)dt=σ2(τ)+σ2Δ(τ).(15.199)

1T0Tx2(t)dt=σ2[ 1+ΔVarσ2 ].(15.200)

As discussed earlier,

Δ(τ) =0,(15.201)

ΔVar=0.(15.202)

Their variances are given by (15.9) and (13.62), respectively. Henceforth, we assume that the condition Tτcor is satisfied. In this case, the error of variance estimate is negligible compared to the true value of variance

(ΔVar)2 σ41.(15.203)

Because of this, we can use the following approximation of estimate given by (15.198)

˜(τ)=(τ)+Δ(τ)1+(ΔVar/σ2)(τ)+Δ(τ)[ 1ΔVarσ2+(ΔVarσ2)2 ].(15.204)

Under the definition of the bias and variance of estimate, a limitation is imposed by the terms containing the moments of random variables Δℛ(τ) and ΔVar/σ2, and the order of these terms is not higher than 2. Under this approximation, the mathematical expectation of estimate of the normalized correlation function (15.204) can be presented in the following form:

˜(τ) =(τ) Δ(τ)ΔVar σ2+(τ)(ΔVarσ2)2.(15.205)

Thus, the estimate of the normalized correlation function given by (15.198) is characterized by the bias

b[˜(τ)]=˜(τ)(τ)= Δ(τ)ΔVar σ2+(τ)(ΔVarσ2)2.(15.206)

The product moment 〈Δℛ(τ)ΔVar〉 can be presented in the following form:

Δ(τ)ΔVar= { 1σ2T0Tx(t)x(tτ)dt(τ) }×{ 1T0Tx2(t)dtσ2 } =1σ2T0T0T x(t1)x(t1τ)x2(t2) dt1dt2σ2(τ).(15.207)

Determining the fourth product moment and making transformations and introducing new variables under the condition T ≫ τcor, as it was done before, we can write

Δ(τ)ΔVarσ2=2T0(z)[(z+τ)+(zτ)]dz.(15.208)

Taking into consideration (15.208) and the variance of variance estimate given by (13.63), we obtain the estimate bias in the following form:

b[˜(τ)]=2T0(z)[(z+τ)+(zτ)]dz+(z)4T0T2(z)dz.(15.209)

To define the variance of the normalized correlation function estimate

Var{˜(τ)}=˜2(τ)[˜(τ)]2(15.210)

we determine ˜2(τ) accurate with the terms of the moments Δ(τ) and ΔVar of the second order

˜2(τ)= [(τ)+Δ(τ)]2[1+(ΔVar/σ2]2 {2(τ)+2(τ)Δ(τ)+[Δ(τ)]2}{12ΔVarσ2+3(ΔVarσ2)2} 2(τ)+Δ2(τ)+32(τ)(ΔVar)2σ44(τ)ΔVarΔ(τ)σ2.(15.211)

Taking into consideration (15.205) and the earlier-given moments, we obtain

Var{˜(τ)}=2T0[2(z)+(z+τ)(zτ)]dz+2(τ)4T0T2(z)dz(τ)×4T0(z)[(z+τ)+(zτ)]dz.(15.212)

As we can see from (15.209) and (15.212), as τ → 0 the bias and variance of estimate given by (15.198) tend to approach zero, since at τ = 0, according to (15.198), the normalized correlation function estimate is not a random variable.

As applied to the Gaussian stochastic process with the exponential correlation function given by (12.13), the bias and variance of the normalized correlation function estimate take the following form:

b[˜(τ)]=2Texp{ατ},(15.213)

Var{˜(τ)}=1Tα[1(1+2ατ)exp{2ατ}].(15.214)

The sign or polar methods of measurements allow us to simplify essentially the experimental investigation of correlation and mutual correlation functions. Delay and multiplication of stochastic processes can be realized very simply by circuitry. The sign methods of correlation function measurements are based on the existence of a functional relationship between the correlation functions of the initial stochastic process ξ(t) and the transformed stochastic process η(t) = sgn ξ(t). The stochastic process η(t) is obtained by nonlinear inertialess transformation of initial stochastic process ξ(t) by the ideal two-sided limiter with transformation characteristic given by (12.208).

As applied to the Gaussian stochastic process, its normalized correlation function (t) is related to the correlation function ρ(τ) of the transformed stochastic process η(t) = sgn ξ(t) by the following relationship:

(τ)=sin[0.5πρ(τ)]=cos[2πP+(τ)],(15.215)

where

P+(τ)=00p2(x1,x2;τ)dx1dx2(15.216)

is the probability of coincidence between the positive signs of functions η(t) and η(t − τ). The estimate of the probability P+(τ) can be obtained as a signal averaging by time at the matching network output of positive values of the stochastic functions η(t) and η(t − τ) realizations. If the stochastic process is non-Gaussian, a relationship between the correlation functions of initial and transformed by the ideal limiter stochastic processes is very complex. For this reason, the said method of correlation function measurement is restricted. The method of correlation function measurement using additional processes by analogy with the discussed method of estimation of the mathematical expectation and variance of stochastic process is widely used.

Assume that the investigated stochastic process η(t) has zero mathematical expectation. Consider two sign functions

{η2(tτ)=sgn[ξ(tτ)μ2(tτ)]η1(t)=sgn[ξ(t)μ1(t)],(15.217)

where the mutual independent additional stationary stochastic processes μ1(t) and (μ2t) have the same uniform probability density functions given by (12.205) and the condition (12.206) is satisfied. As mentioned previously, the conditional stochastic processes η1(t|x1) and η2[(t − τ)|x2] are mutually independent at the fixed values ξ(t) = x1 and η(t − τ) = x2. Taking into consideration (12.209), we obtain

η1(t|x1)η2[(tτ)|x2]=x1x2A2.(15.218)

The unconditional mathematical expectation of product between two stochastic functions can be presented in the following form:

η1(t)η2(t)=1A2x1x2p2(x1,x2;τ)dx1dx2=R(τ)A2.(15.219)

Thus, the function

\tilde{R}(\tau) = \frac{A^{2}}{N}\sum_{i=1}^{N} y_{1i}\,y_{2i}, \quad (15.220)

where y₁ᵢ and y₂ᵢ are the samples of the stochastic sequences η₁ᵢ and η₂ᵢ, can be considered as the estimate of the correlation function to be used for the investigation of the stochastic process at discrete instants. In this case, the estimate will be unbiased. As in the case of variance estimation with additional stochastic functions (see Section 13.3), the multiplication and summation operations in (15.220) are easily replaced by an estimation of the difference between the probabilities of polarity coincidence and noncoincidence of the sampled values y₁ᵢ and y₂ᵢ. Delay operations on sign functions can be implemented very simply by circuitry.
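The sketch below illustrates the estimator (15.220) with auxiliary uniform processes, assuming pairwise independent sample pairs as in (15.221); the chosen half-interval A, the correlation value, and the sample size are illustrative, and the Gaussian pair generator is only a convenient way to produce samples with a prescribed R(τ).

```python
import numpy as np

rng = np.random.default_rng(2)
A = 5.0            # half-interval of the auxiliary processes; |xi| < A must hold (violated only rarely here)
N = 200_000        # number of pairwise independent sample pairs, cf. (15.221)
R_true = 0.6       # assumed value of R(tau) for a unit-variance process

# Pairwise independent Gaussian pairs with the prescribed correlation R(tau).
x1 = rng.normal(size=N)
x2 = R_true * x1 + np.sqrt(1.0 - R_true**2) * rng.normal(size=N)

mu1 = rng.uniform(-A, A, N)            # auxiliary processes with uniform pdf on [-A, A]
mu2 = rng.uniform(-A, A, N)
y1 = np.sign(x1 - mu1)                 # eta_1 = sgn[xi(t)       - mu_1(t)]
y2 = np.sign(x2 - mu2)                 # eta_2 = sgn[xi(t - tau) - mu_2(t - tau)]

R_est = A**2 * np.mean(y1 * y2)        # estimate (15.220); unbiased for any distribution of xi
var_th = (A**4 / N) * (1.0 - R_true**2 / A**4)     # variance (15.226)
print(f"estimate {R_est:+.3f}  true {R_true:+.3f}  theoretical std {np.sqrt(var_th):.3f}")
```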

Determine the variance of correlation function estimate given by (15.220) assuming that the samples are pairwise independent, that is,

\langle y_{1i}\,y_{1j}\rangle = \langle y_{2i}\,y_{2j}\rangle = 0, \quad i\ne j; \quad (15.221)

we obtain

Var\{\tilde{R}(\tau)\} = \frac{A^{4}}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\langle y_{1i}\,y_{2i}\,y_{1j}\,y_{2j}\rangle - R^{2}(\tau). \quad (15.222)

The double sum can be presented in the form of two sums by analogy with (13.94); in doing so, (13.55) holds. Define the conditional product moment ⟨η₁ᵢη₂ᵢη₁ⱼη₂ⱼ | x₁ᵢ, x₂ᵢ, x₁ⱼ, x₂ⱼ⟩ for i ≠ j under the condition

\begin{cases}\xi(t_{i}) = x_{1i}, & \xi(t_{i}-\tau) = x_{2i}, \\ \xi(t_{j}) = x_{1j}, & \xi(t_{j}-\tau) = x_{2j}. \end{cases} \quad (15.223)

Taking into consideration a statistical independence between η1 and η2, mutual independence between the conditional random values η1(ti|x1i ) and η2 [(ti − τ)|x2i], and (12.209), the conditional product moment can be written in the following form:

\langle\eta_{1i}\,\eta_{2i}\,\eta_{1j}\,\eta_{2j}\,|\,x_{1i},x_{2i},x_{1j},x_{2j}\rangle = \frac{x_{1i}\,x_{2i}\,x_{1j}\,x_{2j}}{A^{4}}. \quad (15.224)

Given that the random variables ηi and ηj are independent of each other, the unconditional product moment can be presented in the following form:

\langle\eta_{1i}\,\eta_{2i}\,\eta_{1j}\,\eta_{2j}\rangle = \frac{1}{A^{4}}\left\{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_{1}x_{2}\,p_{2}(x_{1},x_{2};\tau)\,dx_{1}\,dx_{2}\right\}^{2} = \frac{R^{2}(\tau)}{A^{4}}. \quad (15.225)

Substituting (13.96) and (15.225) into (13.94) and then into (15.222), we obtain

Var\{\tilde{R}(\tau)\} = \frac{A^{4}}{N}\left[1-\frac{R^{2}(\tau)}{A^{4}}\right]. \quad (15.226)

According to (15.220), the correlation function estimate satisfies the condition given by (12.206), that is, σ² ≪ A². For this reason, the variance of the correlation function estimate is defined mainly by the half-interval A of possible values of the additional stochastic processes. Comparing (15.226) and (15.25), we obtain

\frac{Var\{\tilde{R}(\tau)\}}{Var\{R^{*}(\tau)\}} = \frac{A^{4}}{\sigma^{4}}\times\frac{1-(\sigma^{4}/A^{4})\mathcal{R}^{2}(\tau)}{1+\mathcal{R}^{2}(\tau)}. \quad (15.227)

As we can see from (15.227), since the condition σ² ≪ A² is satisfied, the correlation function estimate given by (15.220) is worse than the correlation function estimate given by (15.21).

15.5 Spectral Density Estimate of Stationary Stochastic Process

By definition, the spectral density of a stationary stochastic process is the Fourier transform of the correlation function

S(\omega) = \int_{-\infty}^{\infty} R(\tau)\exp\{-j\omega\tau\}\,d\tau. \quad (15.228)

The inverse Fourier transform takes the following form:

R(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\exp\{j\omega\tau\}\,d\omega. \quad (15.229)

As we can see from (15.229), at τ = 0 we obtain the variance of stochastic process:

Var = R(\tau=0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\,d\omega. \quad (15.230)

As applied to the ergodic stochastic process with zero mathematical expectation, the correlation function is defined by (15.1). Because of this, taking (15.1) into account, we can rewrite (15.228) in the following form:

S_{1}(\omega) = \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T} x(t)\left\{\int_{-\infty}^{\infty} x(t-\tau)\exp\{-j\omega\tau\}\,d\tau\right\}dt. \quad (15.231)

The received realization of stochastic process can be presented in the following form:

x(t) = \begin{cases} x(t), & 0\le t\le T, \\ 0, & \text{otherwise}. \end{cases} \quad (15.232)

In the case of physically realized stochastic processes, the following condition is satisfied:

\int_{0}^{T} x^{2}(t)\,dt = \int_{-\infty}^{\infty} x^{2}(t)\,dt < \infty. \quad (15.233)

For the realization of the stochastic process, the Fourier transform takes the following form:

X(j\omega) = \int_{0}^{T} x(t)\exp\{-j\omega t\}\,dt = \int_{-\infty}^{\infty} x(t)\exp\{-j\omega t\}\,dt. \quad (15.234)

Introducing a new variable z = t − τ, we can write

\int_{-\infty}^{\infty} x(t-\tau)\exp\{-j\omega\tau\}\,d\tau = X(-j\omega)\exp\{-j\omega t\}. \quad (15.235)

Substituting (15.235) into (15.231) and taking into consideration (15.234), we obtain

S_{1}(\omega) = \lim_{T\to\infty}\frac{1}{T}\,|X(j\omega)|^{2}. \quad (15.236)

Formula (15.236) is not suitable as a definition of the spectral density as a time-averaged characteristic of the stochastic process. This is caused by the fact that the function T⁻¹|X(jω)|² is a stochastic function of the frequency ω. Like the stochastic function x(t), it fluctuates randomly about its mathematical expectation and possesses a variance that does not tend to zero with an increase in the observation time interval. Because of this, to obtain the averaged characteristic corresponding to the definition of spectral density according to (15.228), the spectral density S₁(ω) should be averaged over a set of realizations of the investigated stochastic process, and we need to consider the function

S(\omega) = \lim_{T,N\to\infty}\frac{1}{NT}\sum_{i=1}^{N}|X_{i}(j\omega)|^{2}. \quad (15.237)

Consider the statistical characteristics of estimate of the function

S_{1}^{*}(\omega) = \frac{|X_{i}(j\omega)|^{2}}{T}, \quad (15.238)

where the random spectrum X(jω) of stochastic process realization is given by (15.234).

The mathematical expectation of spectral density estimate given by (15.238) takes the following form:

\langle S_{1}^{*}(\omega)\rangle = \frac{\langle|X_{i}(j\omega)|^{2}\rangle}{T} = \frac{1}{T}\int_{0}^{T}\int_{0}^{T}\langle x(t_{1})x(t_{2})\rangle\exp\{-j\omega(t_{2}-t_{1})\}\,dt_{1}\,dt_{2} = \frac{1}{T}\int_{0}^{T}\int_{0}^{T} R(t_{2}-t_{1})\exp\{-j\omega(t_{2}-t_{1})\}\,dt_{1}\,dt_{2}. \quad (15.239)

After introduction of the new variables τ = t₂ − t₁ and t = t₂, the double integral in (15.239) can be transformed into a single integral, that is,

\langle S_{1}^{*}(\omega)\rangle = \int_{-T}^{T}\left(1-\frac{|\tau|}{T}\right)R(\tau)\exp\{-j\omega\tau\}\,d\tau. \quad (15.240)

If the condition T ≫ τcor is satisfied, we can neglect the second term in the parentheses in (15.240) compared with unity, and the integration limits can be extended to ±∞. Consequently, as T → ∞, we can write

\lim_{T\to\infty}\langle S_{1}^{*}(\omega)\rangle = S(\omega), \quad (15.241)

that is, as T → ∞, the spectral density estimate of stochastic process is unbiased.

Determine the correlation function of spectral density estimate

R(\omega_{1},\omega_{2}) = \langle S_{1}^{*}(\omega_{1})\,S_{1}^{*}(\omega_{2})\rangle - \langle S_{1}^{*}(\omega_{1})\rangle\langle S_{1}^{*}(\omega_{2})\rangle = \frac{1}{T^{2}}\int_{0}^{T}\!\int_{0}^{T}\!\int_{0}^{T}\!\int_{0}^{T}\langle x(t_{1})x(t_{2})x(t_{3})x(t_{4})\rangle\exp\{-j\omega_{1}(t_{1}-t_{2})-j\omega_{2}(t_{3}-t_{4})\}\,dt_{1}\,dt_{2}\,dt_{3}\,dt_{4} - \frac{1}{T^{2}}\int_{0}^{T}\!\int_{0}^{T}\langle x(t_{1})x(t_{2})\rangle\exp\{-j\omega_{1}(t_{1}-t_{2})\}\,dt_{1}\,dt_{2}\times\int_{0}^{T}\!\int_{0}^{T}\langle x(t_{3})x(t_{4})\rangle\exp\{-j\omega_{2}(t_{3}-t_{4})\}\,dt_{3}\,dt_{4}. \quad (15.242)

As applied to the Gaussian stochastic process, (15.242) can be reduced to

R(\omega_{1},\omega_{2}) = \frac{1}{T^{2}}\int_{0}^{T}\!\int_{0}^{T}\!\int_{0}^{T}\!\int_{0}^{T}[R(t_{1}-t_{3})R(t_{2}-t_{4})+R(t_{1}-t_{4})R(t_{2}-t_{3})]\times\exp\{-j\omega_{1}(t_{1}-t_{2})-j\omega_{2}(t_{3}-t_{4})\}\,dt_{1}\,dt_{2}\,dt_{3}\,dt_{4}. \quad (15.243)

Taking into consideration (15.229) and (12.122) and as T → ∞, we obtain

R(\omega_{1},\omega_{2}) = \frac{S(\omega_{1})S(\omega_{2})}{T^{2}}\left\{\int_{0}^{T}\exp\{j(\omega_{1}+\omega_{2})t\}\,dt\int_{0}^{T}\exp\{-j(\omega_{1}+\omega_{2})t\}\,dt + \int_{0}^{T}\exp\{j(\omega_{1}-\omega_{2})t\}\,dt\int_{0}^{T}\exp\{-j(\omega_{1}-\omega_{2})t\}\,dt\right\} = S(\omega_{1})S(\omega_{2})\left\{\left[\frac{\sin[0.5(\omega_{1}+\omega_{2})T]}{0.5(\omega_{1}+\omega_{2})T}\right]^{2}+\left[\frac{\sin[0.5(\omega_{1}-\omega_{2})T]}{0.5(\omega_{1}-\omega_{2})T}\right]^{2}\right\}. \quad (15.244)

If the condition ωT ≫ 1 is satisfied, we can use the following approximation:

R(\omega_{1},\omega_{2}) \approx S(\omega_{1})S(\omega_{2})\left\{\frac{\sin[0.5(\omega_{1}-\omega_{2})T]}{0.5(\omega_{1}-\omega_{2})T}\right\}^{2}. \quad (15.245)

If

\omega_{1}-\omega_{2} = \frac{2i\pi}{T}, \quad i = 1,2,\ldots, \quad (15.246)

then

R(\omega_{1},\omega_{2}) = 0, \quad (15.247)

which means that, if the frequencies are detuned by a value that is a multiple of 2πT⁻¹, the spectral components S₁*(ω₁) and S₁*(ω₂) are uncorrelated. When 0.5(ω₁ − ω₂)T ≫ 1, we can neglect the correlation between the estimates of spectral density given by (15.238).

The variance of the estimate S₁*(ω) can be defined from (15.244) in the limiting case ω₁ = ω₂ = ω:

Var\{S_{1}^{*}(\omega)\} = S^{2}(\omega)\left\{\frac{\sin^{2}\omega T}{(\omega T)^{2}}+1\right\}. \quad (15.248)

If the condition ωT ≫ 1 is satisfied (as T → ∞), we obtain

\lim_{T\to\infty} Var\{S_{1}^{*}(\omega)\} = S^{2}(\omega). \quad (15.249)

Thus, in spite of the fact that the estimate of spectral density given by (15.238) is asymptotically unbiased, it is not acceptable because the value of the estimate variance is not less than the squared true value of the spectral density.
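The following sketch illustrates this property numerically: for an exponentially correlated Gaussian process the periodogram (15.238) is computed for several observation times, and its spread across realizations remains of the order of S(ω) no matter how large T becomes. The AR(1) process model, the analysis frequency, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, dt, f0, n_real = 1.0, 0.01, 0.5, 200
rho = np.exp(-alpha * dt)
S_true = 2 * alpha / (alpha**2 + (2 * np.pi * f0) ** 2)   # S(omega) for R(tau) = exp(-alpha |tau|)

for T in (10.0, 40.0, 160.0):
    n = int(T / dt)
    # n_real independent realizations of an exponentially correlated Gaussian process (AR(1) model).
    x = np.empty((n_real, n))
    x[:, 0] = rng.normal(size=n_real)
    w = rng.normal(0.0, np.sqrt(1 - rho**2), (n_real, n))
    for k in range(1, n):
        x[:, k] = rho * x[:, k - 1] + w[:, k]
    t = np.arange(n) * dt
    X = x @ np.exp(-2j * np.pi * f0 * t) * dt       # X(j omega) of each realization, cf. (15.234)
    S1 = np.abs(X) ** 2 / T                         # periodogram S1*(omega), cf. (15.238)
    print(f"T={T:6.1f}  mean {S1.mean():.3f}  std {S1.std():.3f}  (true S = {S_true:.3f})")
```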

Averaging the function S₁*(ω) over a set of realizations is, as a rule, not possible. Some indirect methods of averaging the function S₁*(ω) are discussed in Refs. [3,5,6]. The first method is based on using the spectral density averaged over a frequency bandwidth instead of the estimate of spectral density defined at a single point (at the given frequency). In doing so, at T = const, the wider the frequency range within the limits of which the averaging is carried out, the smaller the variance of the spectral density estimate. However, as a rule, there is an estimate bias that increases with an increase in the frequency range within the limits of which the averaging is carried out. In general, this averaged estimate of spectral density can be presented in the following form:

S_{2}^{*}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} W(\nu)\,S_{1}^{*}(\omega-\nu)\,d\nu, \quad (15.250)

where W(ν) is the even weight function of frequency, also called the spectral window function. The widely used functions W(ν) can be found in Refs. [3,5,6].

As T → ∞, the bias of spectral density estimate S2*(ω) can be presented in the following form:

b\{S_{2}^{*}(\omega)\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega-\nu)\,W(\nu)\,d\nu - S(\omega). \quad (15.251)

As applied to the narrowband spectral window W(ν), the following expansion

S(\omega-\nu) \approx S(\omega) - S'(\omega)\,\nu + 0.5\,S''(\omega)\,\nu^{2} \quad (15.252)

is true, where S′(ω) and S″(ω) are the derivatives with respect to the frequency ω. Because of this, we can write

b\{S_{2}^{*}(\omega)\} \approx \frac{S''(\omega)}{4\pi}\int_{-\infty}^{\infty}\nu^{2}\,W(\nu)\,d\nu. \quad (15.253)

If the condition ωT ≫ 1 is satisfied (as T → ∞), we obtain [1]

Var\{S_{2}^{*}(\omega)\} \approx \frac{S^{2}(\omega)}{2\pi T}\int_{-\infty}^{\infty} W^{2}(\nu)\,d\nu. \quad (15.254)

As we can see from (15.254), as T → ∞, Var{S₂*(ω)} → 0; that is, the estimate of the spectral density S₂*(ω) is consistent.
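A minimal sketch of frequency averaging according to (15.250) is given below; a rectangular spectral window is assumed for simplicity, so the smoothing reduces to averaging neighbouring periodogram ordinates. The process model, the window width, and the test frequencies are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, dt, T = 1.0, 0.01, 200.0
n = int(T / dt)
rho = np.exp(-alpha * dt)

# One long realization of an exponentially correlated Gaussian process (AR(1) model).
w = rng.normal(0.0, np.sqrt(1 - rho**2), n)
x = np.empty(n); x[0] = rng.normal()
for k in range(1, n):
    x[k] = rho * x[k - 1] + w[k]

# Raw periodogram S1*(omega) on the FFT frequency grid.
X = np.fft.rfft(x) * dt
freqs = np.fft.rfftfreq(n, dt)
S1 = np.abs(X) ** 2 / T

# Frequency averaging (15.250) with a rectangular spectral window of half-width B Hz:
# each ordinate is replaced by the mean of its neighbours (a unit-area window).
B = 0.2
half = int(B / (freqs[1] - freqs[0]))
kernel = np.ones(2 * half + 1) / (2 * half + 1)
S2 = np.convolve(S1, kernel, mode="same")

S_true = 2 * alpha / (alpha**2 + (2 * np.pi * freqs) ** 2)
for f_test in (0.2, 0.5, 1.0):
    i = np.argmin(np.abs(freqs - f_test))
    print(f"f={f_test:4.1f} Hz  raw {S1[i]:.3f}  smoothed {S2[i]:.3f}  true {S_true[i]:.3f}")
```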

The second method of obtaining a consistent estimate of spectral density is to divide the observation time interval [0, T] into N subintervals of duration T₀ < T, to define the estimate S₁ᵢ*(ω) for each subinterval, and subsequently to determine the averaged estimate

S_{3}^{*}(\omega) = \frac{1}{N}\sum_{i=1}^{N} S_{1i}^{*}(\omega), \quad N = \frac{T}{T_{0}}. \quad (15.255)

Note that according to (15.240), at T = const, an increase in N (or a decrease in T₀) leads to a bias of the estimate S₃*(ω). If the condition T₀ ≫ τcor is satisfied, the variance of the estimate S₃*(ω) can be approximated in the following form:

Var\{S_{3}^{*}(\omega)\} \approx \frac{S^{2}(\omega)}{N}. \quad (15.256)

As we can see from (15.256), as T → ∞, the estimate given by (15.255) will be consistent.
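The sketch below illustrates the subinterval-averaging estimate (15.255): one long realization is divided into N = T/T₀ nonoverlapping segments, a periodogram value is computed on each segment, and the results are averaged. The process model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, dt, T, T0 = 1.0, 0.01, 200.0, 20.0
n, n0 = int(T / dt), int(T0 / dt)
rho = np.exp(-alpha * dt)

w = rng.normal(0.0, np.sqrt(1 - rho**2), n)
x = np.empty(n); x[0] = rng.normal()
for k in range(1, n):                               # exponentially correlated Gaussian realization
    x[k] = rho * x[k - 1] + w[k]

f0 = 0.5                                            # analysis frequency, Hz (illustrative)
t0 = np.arange(n0) * dt
e = np.exp(-2j * np.pi * f0 * t0)

pieces = x.reshape(-1, n0)                          # N nonoverlapping subintervals of duration T0
S1_i = np.abs(pieces @ e * dt) ** 2 / T0            # subinterval periodograms S1i*(omega)
S3 = S1_i.mean()                                    # averaged estimate (15.255)

S_true = 2 * alpha / (alpha**2 + (2 * np.pi * f0) ** 2)
print(f"N = {pieces.shape[0]}, spread of subinterval estimates {S1_i.std():.3f}")
print(f"averaged estimate {S3:.3f}  true {S_true:.3f}  "
      f"approx. std from (15.256): {S_true / np.sqrt(pieces.shape[0]):.3f}")
```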

In radar applications, it is sometimes worthwhile to obtain the current estimate S₁*(ω,t) instead of the averaged sum given by (15.255). Subsequently, the obtained function of time is smoothed by a low-pass filter with the filter time constant τfilter ≫ T₀. This smoothing is equivalent to averaging over ν uncorrelated estimates of the function S₁*(ω), where ν ≈ τfilter/T₀ ≫ 1. In practice, it makes sense to consider only the positive frequencies f = ω(2π)⁻¹. Taking into consideration that the correlation function and spectral density remain even, we can write

G(f) = 2S(\omega = 2\pi f), \quad f > 0. \quad (15.257)

According to (15.228) and (15.229), the spectral density G(f) and the correlation function R(τ) take the following form:

G(f) = 4\int_{0}^{\infty} R(\tau)\cos 2\pi f\tau\,d\tau, \quad (15.258)

R(\tau) = \int_{0}^{\infty} G(f)\cos 2\pi f\tau\,df. \quad (15.259)

In this case, the current estimate, by analogy with (15.238), can be presented in the following form:

G_{1}^{*}(f,t) = \frac{2A^{2}(f,t)}{T_{0}}, \quad (15.260)

where the squared modulus of the current spectrum can be presented in the following form:

A^{2}(f,t) = |X(j\omega,t)|^{2} = A_{c}^{2}(f,t) + A_{s}^{2}(f,t) = \left\{\int_{t-T_{0}}^{t} x(z)\cos 2\pi fz\,dz\right\}^{2} + \left\{\int_{t-T_{0}}^{t} x(z)\sin 2\pi fz\,dz\right\}^{2}, \quad (15.261)

where A_c(f,t) and A_s(f,t) are the cosine and sine components of the current spectrum of the realization x(t). As a result of smoothing the estimate G₁*(f,t) by a filter with the impulse response h(t), we can write the averaged estimate of spectral density in the following form:

G_{2}^{*}(f,t) = \int_{0}^{\infty} h(z)\,G_{1}^{*}(f,t-z)\,dz. \quad (15.262)

If

h(t) = \alpha_{0}\exp\{-\alpha_{0}t\}, \quad t > 0, \quad \tau_{filter} = \frac{1}{\alpha_{0}}, \quad (15.263)

the variance of spectral density estimate G2*(f,t) can be approximated by

Var\{G_{2}^{*}(f,t)\} \approx G^{2}(f)\,\alpha_{0}T_{0}. \quad (15.264)

The flowchart illustrating how to define the current estimate G₂*(f,t) of spectral density is shown in Figure 15.12. The input realization of the stochastic process is multiplied by the sine and cosine signals of the reference generator in the quadrature channels, correspondingly. The obtained products are integrated, squared, and fed to the summator input. The current estimate G₁*(f,t) formed at the summator output is fed to the input of the smoothing filter with the impulse response h(t). The smoothed estimate G₂*(f,t) of spectral density is issued at the filter output.

Images

FIGURE 15.12 Definition of the current estimate of spectral density.

To measure the spectral density within the limits of the whole frequency range, we need to change the reference generator frequency discretely or continuously, for example, by the linear law.
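The following sketch mimics the scheme of Figure 15.12 at a fixed reference frequency (no frequency sweep): sliding cosine and sine channels form the current estimate G₁*(f,t) according to (15.260) and (15.261), which is then smoothed by a first-order filter of the form (15.263). The discrete-time implementation, the process model, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, dt = 1.0, 0.002
T0, alpha0, f0 = 10.0, 0.01, 0.5       # integration time, smoothing parameter, analysis frequency
n = 200_000
rho = np.exp(-alpha * dt)

w = rng.normal(0.0, np.sqrt(1 - rho**2), n)
x = np.empty(n); x[0] = rng.normal()
for k in range(1, n):                   # exponentially correlated Gaussian realization
    x[k] = rho * x[k - 1] + w[k]

t = np.arange(n) * dt
m = int(T0 / dt)
# Sliding cosine and sine channels A_c(f, t) and A_s(f, t) over the last T0 seconds, cf. (15.261).
c = np.cumsum(x * np.cos(2 * np.pi * f0 * t) * dt)
s = np.cumsum(x * np.sin(2 * np.pi * f0 * t) * dt)
Ac, As = c[m:] - c[:-m], s[m:] - s[:-m]
G1 = 2.0 * (Ac**2 + As**2) / T0         # current estimate G1*(f, t), cf. (15.260)

# First-order (RC-type) smoothing filter h(t) = alpha0 exp(-alpha0 t), cf. (15.263).
G2 = np.empty_like(G1)
G2[0] = G1[0]
a = alpha0 * dt
for k in range(1, G1.size):
    G2[k] = (1 - a) * G2[k - 1] + a * G1[k]

G_true = 2.0 * 2 * alpha / (alpha**2 + (2 * np.pi * f0) ** 2)    # G(f) = 2 S(2 pi f), cf. (15.257)
print(f"final smoothed estimate {G2[-1]:.3f}, G(f0) = {G_true:.3f}")
print(f"relative spread predicted by (15.264): about {np.sqrt(alpha0 * T0):.2f}")
```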

In practice, the filtering method is widely used. The essence of filtering method is the following. The investigated stationary stochastic process is made to pass through the narrowband (compared to the stochastic process spectrum bandwidth) filter with the central frequency ω0 = 2πf0. The ratio between the variance of stochastic process at the narrowband filter output and the bandwidth Δf of the filter is considered as the estimate of spectral density of stochastic process.

Let h(t) be the impulse response of the narrowband filter. The transfer function corresponding to the impulse response h(t) is ℋ(jω). The stationary stochastic process forming at the filter output takes the form:

y(t,\omega_{0}) = \int_{0}^{\infty} h(t-\tau)\,x(\tau)\,d\tau. \quad (15.265)

Spectral density G˜(f) at the filter output can be presented in the following form:

\tilde{G}(f) = G(f)\,\mathcal{H}^{2}(f), \quad (15.266)

where ℋ(f) is the modulus of the filter transfer function, whose maximum value is denoted by ℋmax. The narrowband filter bandwidth can be defined as

\Delta f = \int_{0}^{\infty}\frac{\mathcal{H}^{2}(f)}{\mathcal{H}_{max}^{2}}\,df. \quad (15.267)

The variance of stochastic process at the filter output in stationary mode takes the following form:

Var\{y(t,f_{0})\} = \langle y^{2}(t,f_{0})\rangle = \int_{0}^{\infty} G(f)\,\mathcal{H}^{2}(f)\,df. \quad (15.268)

Assume that the filter transfer function module is concentrated very closely about the frequency f0 and we can think that the spectral density is constant within the limits of the bandwidth Δf, that is, G(f) ≈ G(f0). Then

Var\{y(t,f_{0})\} \approx G(f_{0})\,\Delta f\,\mathcal{H}_{max}^{2}. \quad (15.269)

Naturally, the accuracy of this approximation increases with concomitant decrease in the filter bandwidth Δf, since as Δf → 0 we can write

G(f_{0}) = \lim_{\Delta f\to 0}\frac{Var\{y(t,f_{0})\}}{\Delta f\,\mathcal{H}_{max}^{2}}. \quad (15.270)

As applied to ergodic stochastic processes, in the definition of the variance, the averaging over realizations can be replaced by time averaging as T → ∞:

G(f_{0}) = \lim_{\Delta f\to 0,\,T\to\infty}\frac{1}{T\,\Delta f\,\mathcal{H}_{max}^{2}}\int_{0}^{T} y^{2}(t,f_{0})\,dt. \quad (15.271)

For this reason, the value

G^{*}(f_{0}) = \frac{1}{T\,\Delta f\,\mathcal{H}_{max}^{2}}\int_{0}^{T} y^{2}(t,f_{0})\,dt \quad (15.272)

is considered as the estimate of spectral density in the design and construction of measurers of the stochastic process spectral density. The values Δf and ℋ²max are known in advance. Because of this, a measurement of the stochastic process spectral density is reduced to an estimation of the variance of the stochastic process at the filter output. We need to note that (15.272) presupposes the condition TΔf ≫ 1, which means that the observation time interval is much longer than the time constant of the narrowband filter.

Based on (15.272), we can design the flowchart of spectral density measurer shown in Figure 15.13. The spectral density value at the fixed frequency coincides accurately within the constant factor with the variance of stochastic process at the filter output with known bandwidth. Operation principles of the spectral density measurer are evident from Figure 15.13. To define the spectral density for all possible values of frequencies, we need to design the multichannel spectrum analyzer and the central frequency of narrowband filter must be changed discretely or continuously. As a rule, a shift by spectrum frequency of investigated stochastic process needs to be carried out using, for example, the linear law of frequency transformation instead of filter tuning by frequency. The structure of such measurer is depicted in Figure 15.14. The sawtooth generator controls the operation of measurer that changes the frequency of heterodyne.

Images

FIGURE 15.13 Measurement of spectral density.

Images

FIGURE 15.14 Measurement of spectral density by spectrum frequency shift.
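A minimal sketch of the filter method (15.272) is given below; an ideal rectangular band-pass response with ℋmax = 1 is assumed and is realized, for convenience, by masking the Fourier transform of the realization rather than by an analog filter. The process model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, dt, T = 1.0, 0.01, 400.0
n = int(T / dt)
rho = np.exp(-alpha * dt)

w = rng.normal(0.0, np.sqrt(1 - rho**2), n)
x = np.empty(n); x[0] = rng.normal()
for k in range(1, n):                      # exponentially correlated Gaussian realization
    x[k] = rho * x[k - 1] + w[k]

f0, df = 0.5, 0.1                          # filter central frequency and bandwidth, Hz (illustrative)
# Ideal rectangular band-pass filter with H_max = 1, realized here by FFT masking.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, dt)
mask = (freqs >= f0 - df / 2) & (freqs <= f0 + df / 2)
y = np.fft.irfft(X * mask, n)              # y(t, f0): the process at the narrowband filter output

G_est = np.mean(y**2) / df                 # (15.272) with H_max^2 = 1 and (1/T) int y^2 dt = mean(y^2)
G_true = 4 * alpha / (alpha**2 + (2 * np.pi * f0) ** 2)          # G(f) = 2 S(2 pi f)
print(f"G*(f0) = {G_est:.3f},  G(f0) = {G_true:.3f}")
print(f"relative std predicted by (15.280): {1 / np.sqrt(T * df):.2f}")
```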

Let us define the statistical characteristics of the filter method of measuring the spectral density of the stochastic process according to (15.272). The mathematical expectation of the spectral density estimate at the frequency f₀ takes the following form:

\langle G^{*}(f_{0})\rangle = \frac{\langle y^{2}(t,f_{0})\rangle}{\Delta f\,\mathcal{H}_{max}^{2}} = \frac{1}{\Delta f\,\mathcal{H}_{max}^{2}}\int_{0}^{\infty} G(f)\,\mathcal{H}^{2}(f)\,df. \quad (15.273)

In a general case, the estimate of spectral density will be biased, that is,

b\{G^{*}(f_{0})\} = \langle G^{*}(f_{0})\rangle - G(f_{0}). \quad (15.274)

The variance of the spectral density estimate is defined by the variance of the variance estimate of the stochastic process y(t,f₀) at the filter output. If the condition T ≫ τcor is satisfied for the stochastic process y(t,f₀), the variance of the estimate is given by (13.64), where instead of S(ω) we should use

S_{y}(\omega) = |\mathcal{H}(j\omega)|^{2}\,S(\omega). \quad (15.275)

As applied to the introduced notations G(f) and ℋ(f), we can write

Var\{G^{*}(f_{0})\} = \frac{1}{T(\Delta f)^{2}\mathcal{H}_{max}^{4}}\int_{0}^{\infty} G^{2}(f)\,\mathcal{H}^{4}(f)\,df. \quad (15.276)

To define the bias and variance of the spectral density estimate of the stochastic process, we assume that the modulus of the transfer function is approximated by the following form:

\mathcal{H}(f) = \begin{cases}\mathcal{H}_{max}, & f_{0}-0.5\Delta f \le f \le f_{0}+0.5\Delta f, \\ 0, & \text{otherwise}, \end{cases} \quad (15.277)

where the equivalent bandwidth defined by (15.267) coincides with the width Δf of the rectangular response. We expand the spectral density G(f) in a power series about the point f = f₀ and retain only the first three terms of the expansion, namely,

G(f) \approx G(f_{0}) + G'(f_{0})(f-f_{0}) + 0.5\,G''(f_{0})(f-f_{0})^{2}, \quad (15.278)

where G′(f0) and G″(f0) are the first and second derivatives with respect to the frequency f at the point f0. Substituting (15.278) and (15.277) into (15.273) and (15.274), we obtain

b\{G^{*}(f_{0})\} \approx \frac{1}{24}(\Delta f)^{2}\,G''(f_{0}). \quad (15.279)

Thus, the bias of spectral density estimate of stochastic process is proportional to the squared bandwidth of narrowband filter. To define the variance of estimate for the first approximation, we can assume that the condition G(f) ≈ G(f0) is true within the limits of the narrowband filter bandwidth. Then, according to (15.276), we obtain

Var\{G^{*}(f_{0})\} \approx \frac{G^{2}(f_{0})}{T\,\Delta f}. \quad (15.280)

The dispersion of spectral density estimate of the stochastic process takes the following form:

D\{G^{*}(f_{0})\} \approx \frac{G^{2}(f_{0})}{T\,\Delta f} + \frac{1}{576}\left[(\Delta f)^{2}\,G''(f_{0})\right]^{2}. \quad (15.281)
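The trade-off between the bias (15.279) and the variance (15.280) as functions of the filter bandwidth Δf can be tabulated directly, as in the sketch below; the spectral density of an exponentially correlated process and the chosen parameter values are assumed purely for illustration.

```python
import numpy as np

alpha, T, f0 = 1.0, 100.0, 0.5              # illustrative parameter values

def G(f):                                    # one-sided spectral density for R(tau) = exp(-alpha |tau|)
    return 4 * alpha / (alpha**2 + (2 * np.pi * f) ** 2)

def G2prime(f, h=1e-4):                      # numerical second derivative G''(f)
    return (G(f + h) - 2 * G(f) + G(f - h)) / h**2

for df in (0.02, 0.05, 0.1, 0.2, 0.5):
    bias = (df**2 / 24) * G2prime(f0)        # (15.279): grows as the square of the bandwidth
    var = G(f0) ** 2 / (T * df)              # (15.280): falls as the bandwidth widens
    disp = var + bias**2                     # (15.281): total dispersion of the estimate
    print(f"df={df:4.2f}  bias {bias:+.4f}  variance {var:.4f}  dispersion {disp:.4f}")
```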

15.6 Estimate of Stochastic Process Spike Parameters

In many application problems we need to know the statistical parameters of stochastic process spikes (see Figure 15.15a): the spike mean, or the average number of down-up crossings of some horizontal level M within the limits of the observation time interval [0, T], the average duration of the spike, and the average interval between the spikes. In Figure 15.15a, the variables τi and θi denote the random spike duration and the random interval between spikes, correspondingly. To measure these parameters of spikes, the stochastic process realization x(t) is transformed by the nonlinear transformer (threshold circuitry) into the pulse sequence normalized by the amplitude ητ with duration τi (Figure 15.15b) or normalized by the amplitude ηθ with duration θi (Figure 15.15c), correspondingly:

\eta_{\tau}(t) = \begin{cases}1, & x(t)\ge M, \\ 0, & x(t) < M, \end{cases} \quad (15.282)

\eta_{\theta}(t) = \begin{cases}1, & x(t)\le M, \\ 0, & x(t) > M. \end{cases} \quad (15.283)

Using the pulse sequences ητ and ηθ, we can define the aforementioned parameters of stochastic process spike. Going forward, we assume that the investigated stochastic process is ergodic, as mentioned previously, and the following condition T≫ τcor is satisfied.

Images

FIGURE 15.15 Transformation of stochastic process realization x(t) into the pulse sequence: (a) Example of stochastic process spike; (b) Pulse sequence normalized by the amplitude ητ with duration τi; (c) Pulse sequence normalized by the amplitude ηθ with duration θi.

15.6.1 Estimation of Spike Mean

Taking into consideration the assumptions stated previously, the estimate of the spike number in the given stochastic process realization x(t) within the limits of the observation time interval [0, T] at the level M can be defined approximately as

N^{*} = \frac{1}{\tau_{av}}\int_{0}^{T}\eta_{\tau}(t)\,dt = \frac{1}{\theta_{av}}\int_{0}^{T}\eta_{\theta}(t)\,dt, \quad (15.284)

where τav and θav are the average duration of spike and the average interval between spikes within the limits of the observation time interval [0, T] of the given stochastic process realization at the level M. The true values of the average duration of spikes τ¯ and the average interval between spikes θ¯ obtained as a result of averaging by a set of realizations are defined in accordance with Ref. [1] in the following form:

\bar{\tau} = \frac{1}{\bar{N}}\int_{M}^{\infty} p(x)\,dx = \frac{1}{\bar{N}}[1-F(M)], \quad (15.285)

\bar{\theta} = \frac{1}{\bar{N}}\int_{-\infty}^{M} p(x)\,dx = \frac{F(M)}{\bar{N}}, \quad (15.286)

where

  • F(M) is the probability distribution function

  • N¯ is the average number of spikes per unit time at the level M defined as

    \bar{N} = \int_{0}^{\infty}\dot{x}\,p_{2}(M,\dot{x})\,d\dot{x}, \quad (15.287)

where p₂(M, ẋ) is the two-dimensional pdf of the stochastic process and its derivative at the same instant.

Note that τ¯=θ¯ corresponds to the level M0 defined from the equality

F(M_{0}) = 1 - F(M_{0}) = 0.5. \quad (15.288)

If the condition M ≥ M₀ is satisfied, the probability of the event that there will be, on average, a noninteger number of intervals θi between the stochastic process spikes within the limits of the observation time interval [0, T] is high; otherwise, if the condition M ≤ M₀ is satisfied, the probability of the event that there will be, on average, a noninteger number of spike durations τi within the limits of the observation time interval [0, T] is high. This phenomenon leads, on average, to larger errors when N* is measured using only one of the formulas in (15.284). Because of this, while determining the statistical characteristics of the estimate of the average number of spikes, the following relationship

N^{*} = \begin{cases}\frac{1}{\bar{\theta}}\int_{0}^{T}\eta_{\theta}(t)\,dt & \text{at } M\le M_{0}\ (\bar{\tau}\ge\bar{\theta}), \\ \frac{1}{\bar{\tau}}\int_{0}^{T}\eta_{\tau}(t)\,dt & \text{at } M\ge M_{0}\ (\bar{\tau}\le\bar{\theta}) \end{cases} \quad (15.289)

can be considered as the first approximation, where we assume that

\left|\frac{\tau_{av}-\bar{\tau}}{\bar{\tau}}\right| \ll 1 \quad \text{and} \quad \left|\frac{\theta_{av}-\bar{\theta}}{\bar{\theta}}\right| \ll 1.

For this reason, we use the approximations τav ≈ τ̄ and θav ≈ θ̄.

The mathematical expectation of the estimate of the average number of stochastic process spikes can be determined in the following form:

\langle N^{*}\rangle = \begin{cases}\frac{1}{\bar{\theta}}\int_{0}^{T}\langle\eta_{\theta}(t)\rangle\,dt & \text{at } M\le M_{0}\ (\bar{\tau}\ge\bar{\theta}), \\ \frac{1}{\bar{\tau}}\int_{0}^{T}\langle\eta_{\tau}(t)\rangle\,dt & \text{at } M\ge M_{0}\ (\bar{\tau}\le\bar{\theta}). \end{cases} \quad (15.290)

According to (15.282), (15.283), (15.285), and (15.286), we obtain

\langle\eta_{\theta}(t)\rangle = \int_{-\infty}^{M} p(x)\,dx = \bar{\theta}\,\bar{N}, \quad (15.291)

\langle\eta_{\tau}(t)\rangle = \int_{M}^{\infty} p(x)\,dx = \bar{\tau}\,\bar{N}. \quad (15.292)

Substituting (15.291) and (15.292) into (15.290), we obtain

\langle N^{*}\rangle = \bar{N}\,T. \quad (15.293)

In other words, the estimate of average number of the stochastic process spikes at the level M within the limits of the observation time interval [0, T] is unbiased as a first approximation.
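The sketch below illustrates the estimate of the spike mean for a Gaussian process with the normalized correlation function (15.305): it counts the down-up crossings of the level M directly, forms N* according to (15.289) using τ̄ from (15.285), and compares both with N̄T, where N̄ is given by the crossing-rate formula (15.321) below. The process generator (white noise smoothed by a Gaussian-shaped impulse response) and all parameter values are illustrative assumptions.

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(8)
alpha, dt, T, M = 1.0, 0.005, 500.0, 1.0
n = int(T / dt)

# Gaussian process with normalized correlation function exp(-alpha^2 t^2), cf. (15.305),
# produced by smoothing white noise with a Gaussian-shaped impulse response.
t_h = np.arange(-4.0 / alpha, 4.0 / alpha + dt, dt)
h = np.exp(-2.0 * alpha**2 * t_h**2)
x = np.convolve(rng.normal(size=n + t_h.size - 1), h, mode="valid")
x /= x.std()                                          # unit variance, sigma = 1

eta_tau = x >= M                                      # indicator (15.282)
n_spikes = np.sum(eta_tau[1:] & ~eta_tau[:-1])        # direct count of down-up crossings of level M

Q = 0.5 * erfc(M / np.sqrt(2.0))                      # 1 - F(M) for the Gaussian process
N_bar = np.sqrt(2.0) * alpha / (2.0 * np.pi) * np.exp(-M**2 / 2.0)   # crossing rate, cf. (15.321)
tau_bar = Q / N_bar                                   # average spike duration, cf. (15.285)

N_star = dt * np.sum(eta_tau) / tau_bar               # estimate (15.289) for M >= M0
print(f"counted spikes {n_spikes},  N* = {N_star:.1f},  N_bar*T = {N_bar * T:.1f}")
```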

The estimate variance of the average number of stochastic process spikes at the level M can be presented in the following form:

Var\{N^{*}\} = \begin{cases}\frac{1}{\bar{\theta}^{2}}\int_{0}^{T}\int_{0}^{T}\langle\eta_{\theta}(t_{1})\eta_{\theta}(t_{2})\rangle\,dt_{1}\,dt_{2} - [\bar{N}T]^{2}, & M\le M_{0}, \\ \frac{1}{\bar{\tau}^{2}}\int_{0}^{T}\int_{0}^{T}\langle\eta_{\tau}(t_{1})\eta_{\tau}(t_{2})\rangle\,dt_{1}\,dt_{2} - [\bar{N}T]^{2}, & M\ge M_{0}. \end{cases} \quad (15.294)

In the case of ergodic stochastic processes, the average values can be written in the following form:

\langle\eta_{\theta}(t_{1})\,\eta_{\theta}(t_{2})\rangle = R_{\theta}(t_{1}-t_{2}) \quad (15.295)

and

\langle\eta_{\tau}(t_{1})\,\eta_{\tau}(t_{2})\rangle = R_{\tau}(t_{1}-t_{2}). \quad (15.296)

As we can see from (15.295) and (15.296), these average values are equal to the probabilities that the stochastic process realization x(t) does not exceed, or exceeds, the level M at both instants t₁ and t₂:

R_{\theta}(t_{1}-t_{2}) = \int_{-\infty}^{M}\int_{-\infty}^{M} p_{2}(x_{1},x_{2};t_{1}-t_{2})\,dx_{1}\,dx_{2}; \quad (15.297)

R_{\tau}(t_{1}-t_{2}) = \int_{M}^{\infty}\int_{M}^{\infty} p_{2}(x_{1},x_{2};t_{1}-t_{2})\,dx_{1}\,dx_{2}. \quad (15.298)

Taking into consideration (15.294) through (15.298), introducing the new variable t = t₁ − t₂, and changing the order of integration, we can write

\frac{Var\{N^{*}\}}{[\bar{N}T]^{2}} = \begin{cases}\frac{2}{T}\{F(M)\}^{-2}\int_{0}^{T}\left(1-\frac{t}{T}\right)R_{\theta}(t)\,dt - 1, & M\le M_{0}, \\ \frac{2}{T}\{1-F(M)\}^{-2}\int_{0}^{T}\left(1-\frac{t}{T}\right)R_{\tau}(t)\,dt - 1, & M\ge M_{0}, \end{cases} \quad (15.299)

where Var{ N* }/[ N¯T ]2 is the normalized variance of the average number of stochastic process spikes or the relative variance of the average number of stochastic process spikes.

As applied to the Gaussian and Rayleigh stochastic processes, we can present the two-dimensional probability distribution functions in the form (14.50) and (14.71). In the case of the Gaussian stochastic process with zero mathematical expectation, we have F(M₀) = 1 − F(M₀) at M₀ = 0. Substituting (14.50) into (15.297) and (15.298), we obtain

R_{\theta}(t) = \left\{1-Q\left[\frac{M}{\sigma}\right]\right\}^{2} + \sum_{v=1}^{\infty}\left\{Q^{(v)}\left[\frac{M}{\sigma}\right]\right\}^{2}\frac{\mathcal{R}^{v}(t)}{v!}, \quad (15.300)

R_{\tau}(t) = \left\{Q\left[\frac{M}{\sigma}\right]\right\}^{2} + \sum_{v=1}^{\infty}\left\{Q^{(v)}\left[\frac{M}{\sigma}\right]\right\}^{2}\frac{\mathcal{R}^{v}(t)}{v!}. \quad (15.301)

In accordance with (15.299), we obtain

\frac{Var\{N^{*}\}}{[\bar{N}T]^{2}} = \begin{cases}\left\{1-Q\left[\frac{M}{\sigma}\right]\right\}^{-2}\sum_{v=1}^{\infty} a_{v}c_{v}, & \frac{M}{\sigma}\le 0, \\ \left\{Q\left[\frac{M}{\sigma}\right]\right\}^{-2}\sum_{v=1}^{\infty} a_{v}c_{v}, & \frac{M}{\sigma}\ge 0, \end{cases} \quad (15.302)

where av and cv are defined analogously as in (14.56) and (14.58). In doing so, the values of the coefficients av are presented in Table 14.1 as a function of v and the normalized level z = Mσ−1.

Since

1 - Q\left[\frac{M}{\sigma}\right] = Q\left[-\frac{M}{\sigma}\right], \quad (15.303)

we can see from (15.302) that the normalized variance of the average number of stochastic process spikes Var{ N* }/[ N¯T ]2 is symmetrical with respect to the level M/σ = 0. Because of this, we can write

\frac{Var\{N^{*}\}}{[\bar{N}T]^{2}} = \left\{Q\left[\frac{|M|}{\sigma}\right]\right\}^{-2}\sum_{v=1}^{\infty} a_{v}c_{v}. \quad (15.304)

As applied to the Gaussian stochastic process with the normalized correlation function

\mathcal{R}(t) = \exp\{-\alpha^{2}t^{2}\}, \quad (15.305)

we obtain

c_{v} = \frac{\sqrt{\pi}}{p\sqrt{v}}\left[1-2Q\left(p\sqrt{2v}\right)\right] - \frac{1-\exp\{-vp^{2}\}}{vp^{2}}, \quad (15.306)

where p = αT. If p ≫ 1, then

c_{v} = \frac{\sqrt{\pi}}{p\sqrt{v}}. \quad (15.307)

The normalized squared deviation Var{N*}/[N̄T]² of the average number of stochastic process spikes as a function of the normalized level |z| = |M|σ⁻¹, for realizations of the stochastic process with fixed duration and the parameter p = αT, is shown in Figure 15.16. As we can see from

Images

FIGURE 15.16 Normalized squared deviation of the average number of stochastic process spikes as a function of the normalized level for Gaussian realizations of a stochastic process with fixed duration.

Figure 15.16, the normalized squared deviation Var{N*}/[N̄T]² of the average number of stochastic process spikes increases with an increase in the absolute value of the normalized level |z| = |M|σ⁻¹ and with a decrease in the parameter p = αT. At p = αT ≥ 10 and |z| = 0, we can write

\frac{Var\{N^{*}\}}{[\bar{N}T]^{2}} \approx \frac{1.3}{\alpha T}. \quad (15.308)

As applied to the Rayleigh stochastic process, we have τ̄ = θ̄ when

\frac{M}{\sqrt{2}\sigma} = \sqrt{\ln 2} \approx 0.83. \quad (15.309)

Substituting (14.71) into (15.297) and (15.298), we obtain

R_{\theta}(t) = \left[1-\exp\left\{-\frac{M^{2}}{2\sigma^{2}}\right\}\right]^{2} + \sum_{v=1}^{\infty}\frac{\mathcal{R}^{2v}(t)}{(v!)^{2}}\exp\left\{-\frac{M^{2}}{\sigma^{2}}\right\}\left\{L_{v}\left[\frac{M^{2}}{2\sigma^{2}}\right]-vL_{v-1}\left[\frac{M^{2}}{2\sigma^{2}}\right]\right\}^{2}; \quad (15.310)

R_{\tau}(t) = \exp\left\{-\frac{M^{2}}{\sigma^{2}}\right\} + \sum_{v=1}^{\infty}\frac{\mathcal{R}^{2v}(t)}{(v!)^{2}}\exp\left\{-\frac{M^{2}}{\sigma^{2}}\right\}\left\{L_{v}\left[\frac{M^{2}}{2\sigma^{2}}\right]-vL_{v-1}\left[\frac{M^{2}}{2\sigma^{2}}\right]\right\}^{2}. \quad (15.311)

In accordance with (15.299), we obtain

\frac{Var\{N^{*}\}}{[\bar{N}T]^{2}} = \begin{cases}\exp\left\{\frac{M^{2}}{\sigma^{2}}\right\}\sum_{v=1}^{\infty} b_{v}d_{v}, & \frac{M}{\sqrt{2}\sigma}\ge 0.83, \\ \left\{1-\exp\left\{-\frac{M^{2}}{2\sigma^{2}}\right\}\right\}^{-2}\sum_{v=1}^{\infty} b_{v}d_{v}, & \frac{M}{\sqrt{2}\sigma} < 0.83, \end{cases} \quad (15.312)

where b_v and d_v are defined analogously as in (14.86) and (14.83). The values of the coefficients are given in Table 14.3. The normalized squared deviation Var{N*}/[N̄T]² of the average number of stochastic process spikes as a function of the normalized level M/(√2σ) at p₁ = √2αT = 10 in the case of the normalized correlation function given by (15.305) is shown in Figure 15.17. As we can see from Figure 15.17, the normalized squared deviation of the average number of stochastic process spikes increases with an increase in the deviation of the normalized level M/(√2σ) from the median of the probability distribution function M₀/(√2σ) = √(ln 2) at fixed p₁ = √2αT. This phenomenon is explained by a decrease in the number of spikes above or below the level M₀/(√2σ) = √(ln 2), respectively.

15.6.2 Estimation of Average Spike Duration and Average Interval Between Spikes

Considering the stochastic process realization presented in Figure 15.15a, we see that, for a sufficiently large number of spikes N within the limits of the observation time interval [0, T], the estimate τ* of the average spike duration and the estimate θ* of the average interval between the spikes can be presented as

Images

FIGURE 15.17 Normalized squared deviation of the average number of stochastic process spikes as a function of the normalized level for Rayleigh realizations of the stochastic process with fixed duration.

\begin{cases}\tau^{*} = \frac{1}{N}\sum_{i=1}^{N}\tau_{i}, \\ \theta^{*} = \frac{1}{N}\sum_{i=1}^{N}\theta_{i} \end{cases} \quad (15.313)

for the given realization. The relationships given by (15.313) can be presented in the following form:

\begin{cases}\tau^{*} = \frac{1}{N}\int_{0}^{T}\eta_{\tau}(t)\,dt, \\ \theta^{*} = \frac{1}{N}\int_{0}^{T}\eta_{\theta}(t)\,dt, \end{cases} \quad (15.314)

where ητ(t) and ηθ(t) are given by (15.282) and (15.283).

Images

FIGURE 15.18 Measurement of spike duration and the average of interval between the spikes.

The device to measure the average spike duration τ* and the average interval θ* between the spikes can be designed based on (15.313) and (15.314). As applied to the estimate of the average spike duration τ*, the flowchart of the measurer is depicted in Figure 15.18. This measurer consists of the trigger forming at its output the function ητ(t) normalized by amplitude and shape, the spike counter, the integrator, and the divider determining the estimate of the average spike duration τ*. The threshold M is set by an external source.
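The following sketch implements (15.313) directly: the realization is thresholded at the level M, the durations of the above-level runs (spikes) and below-level runs (intervals) are extracted, and their averages are compared with τ̄ and θ̄ from (15.285) and (15.286). The Gaussian process generator and all parameter values are illustrative assumptions.

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(9)
alpha, dt, T, M = 1.0, 0.005, 500.0, 0.5
n = int(T / dt)

# Gaussian process with normalized correlation function exp(-alpha^2 t^2), cf. (15.305).
t_h = np.arange(-4.0 / alpha, 4.0 / alpha + dt, dt)
h = np.exp(-2.0 * alpha**2 * t_h**2)
x = np.convolve(rng.normal(size=n + t_h.size - 1), h, mode="valid")
x /= x.std()                                   # unit variance

above = x >= M                                 # eta_tau(t), cf. (15.282)
edges = np.flatnonzero(np.diff(above.astype(int)))     # indices where the level M is crossed
runs = np.diff(edges) * dt                     # alternating durations of spikes and intervals
if above[edges[0] + 1]:                        # the first run after the first edge is a spike
    tau_i, theta_i = runs[0::2], runs[1::2]
else:
    tau_i, theta_i = runs[1::2], runs[0::2]

tau_star, theta_star = tau_i.mean(), theta_i.mean()    # estimates (15.313)

Q = 0.5 * erfc(M / np.sqrt(2.0))               # 1 - F(M) for the Gaussian process
N_bar = np.sqrt(2.0) * alpha / (2.0 * np.pi) * np.exp(-M**2 / 2.0)   # crossing rate, cf. (15.321)
print(f"tau*   {tau_star:.3f}   tau_bar   {Q / N_bar:.3f}")          # cf. (15.285)
print(f"theta* {theta_star:.3f}   theta_bar {(1.0 - Q) / N_bar:.3f}")  # cf. (15.286)
```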

To define the statistical characteristics of the estimates τ* and θ*, we assume that the condition T ≫ τcor is satisfied. In this case, we can approximately assume N ≈ N̄T. Then

\begin{cases}\tau^{*} = \frac{1}{\bar{N}T}\int_{0}^{T}\eta_{\tau}(t)\,dt, \\ \theta^{*} = \frac{1}{\bar{N}T}\int_{0}^{T}\eta_{\theta}(t)\,dt. \end{cases} \quad (15.315)

The mathematical expectation of estimates can be presented in the following form:

\begin{cases}\langle\tau^{*}\rangle = \frac{1}{\bar{N}T}\int_{0}^{T}\langle\eta_{\tau}(t)\rangle\,dt, \\ \langle\theta^{*}\rangle = \frac{1}{\bar{N}T}\int_{0}^{T}\langle\eta_{\theta}(t)\rangle\,dt. \end{cases} \quad (15.316)

Taking into consideration (15.291) and (15.292), we see that ⟨τ*⟩ = τ̄ and ⟨θ*⟩ = θ̄. In other words, the estimates of the average spike duration τ* and of the average interval θ* between the spikes are unbiased as a first approximation.

Determine the estimate variance of the average spike duration of stochastic process at level M:

Var\{\tau^{*}\} = \langle(\tau^{*}-\bar{\tau})^{2}\rangle = \frac{1}{[\bar{N}T]^{2}}\int_{0}^{T}\int_{0}^{T}\langle\eta_{\tau}(t_{1})\,\eta_{\tau}(t_{2})\rangle\,dt_{1}\,dt_{2} - \bar{\tau}^{2}. \quad (15.317)

The mathematical expectation

\langle\eta_{\tau}(t_{1})\,\eta_{\tau}(t_{2})\rangle = R_{\tau}(t_{1}-t_{2}) \quad (15.318)

is defined by (15.298). By analogy with (15.299), we can define the estimate variance of the average spike duration τ*

Var\{\tau^{*}\} = \frac{1}{\bar{N}^{2}}\times\frac{2}{T}\int_{0}^{T}\left(1-\frac{t}{T}\right)R_{\tau}(t)\,dt - \bar{\tau}^{2}. \quad (15.319)

The estimate variance of the average interval θ* between the spikes can be presented in the following form:

Var\{\theta^{*}\} = \frac{1}{\bar{N}^{2}}\times\frac{2}{T}\int_{0}^{T}\left(1-\frac{t}{T}\right)R_{\theta}(t)\,dt - \bar{\theta}^{2}. \quad (15.320)

As applied to the Gaussian stochastic process, if the condition T ≫ τcor is satisfied, the functions R_τ(t) and R_θ(t) are given by (15.301) and (15.300), correspondingly, and the average number of stochastic process spikes at the level M can be determined as

\bar{N}\left[\frac{M}{\sigma}\right] = \frac{1}{2\pi}\sqrt{-\frac{d^{2}\mathcal{R}(t)}{dt^{2}}\bigg|_{t=0}}\exp\left\{-\frac{M^{2}}{2\sigma^{2}}\right\}. \quad (15.321)

Substituting R_τ(t) into (15.319) and taking into consideration (15.285) and (15.321), we obtain

Var\{\tau^{*}\} = \frac{8\pi^{2}}{T\left(-d^{2}\mathcal{R}(t)/dt^{2}\right)\big|_{t=0}}\exp\left\{\frac{M^{2}}{\sigma^{2}}\right\}\sum_{v=1}^{\infty} a_{v}\int_{0}^{T}\mathcal{R}^{v}(t)\,dt, \quad (15.322)

where a_v is given by (14.56) and Table 14.1. Formula (15.322) also holds for the estimate variance of the average interval θ* between the spikes.

As applied to the normalized correlation function given by (15.305), the estimate variance of the average spike duration τ* takes the form

Var\{\tau^{*}\} = \frac{2\pi^{2}\sqrt{\pi}}{\alpha^{2}p}\exp\left\{\frac{M^{2}}{\sigma^{2}}\right\}\sum_{v=1}^{\infty}\frac{a_{v}}{\sqrt{v}}, \quad (15.323)

where p = αT. As we can see from (15.323), in the case of a Gaussian stochastic process and a fixed duration of the observation interval [0, T], the estimate variance of the average spike duration τ* and the estimate variance of the average interval θ* between the spikes are minimal at M/σ = 0.

As applied to the Rayleigh stochastic process, if the condition T ≫ τcor is satisfied, the functions R_τ(t) and R_θ(t) are given by (15.311) and (15.310), correspondingly, and the average number of stochastic process spikes at the level M can be determined as

\bar{N}\left[\frac{M}{\sigma}\right] = \frac{1}{\sqrt{2\pi}}\sqrt{-\frac{d^{2}\mathcal{R}(t)}{dt^{2}}\bigg|_{t=0}}\,\frac{M}{\sigma}\exp\left\{-\frac{M^{2}}{2\sigma^{2}}\right\}. \quad (15.324)

Substituting ℛτ(t) into (15.319) and taking into consideration (15.285) and (15.324), we obtain

Var\{\tau^{*}\} = \frac{2\pi}{T\left(-d^{2}\mathcal{R}(t)/dt^{2}\right)\big|_{t=0}}\exp\left\{\frac{M^{2}}{\sigma^{2}}\right\}\frac{2\sigma^{2}}{M^{2}}\sum_{v=1}^{\infty} b_{v}\int_{0}^{T}\mathcal{R}^{2v}(t)\,dt. \quad (15.325)

It is easy to prove that (15.325) also holds for the estimate variance of the average interval θ* between the spikes.

15.7 Mean-Square Frequency Estimate of Spectral Density

The mean-square frequency f̄ given by (15.6) is widely used as a parameter characterizing the spectral density of the stochastic process. The value f̄ characterizes the dispersion of the components of the stochastic process spectral density about zero frequency. As applied to low-frequency stationary stochastic processes, the mean-square frequency f̄ characterizes the effective bandwidth of the spectral density. Taking into consideration (15.259), we can present the mean-square frequency f̄ in the following form:

\bar{f} = \frac{1}{2\pi}\sqrt{-\frac{d^{2}R(\tau)/d\tau^{2}}{R(\tau)}\bigg|_{\tau=0}} = \frac{1}{2\pi}\sqrt{\frac{\langle[dx(t)/dt]^{2}\rangle}{\langle x^{2}(t)\rangle}}. \quad (15.326)

Here and further, we assume that the investigated stochastic process possesses zero mathematical expectation. As applied to the Gaussian stochastic process, the mean-square frequency f̄ coincides with the average number of stochastic process crossings of the zero level per unit time given by (15.321).

According to (15.326), it is worthwhile to consider for the investigated stationary stochastic process the following value

\bar{f}^{*} = \frac{1}{2\pi}\sqrt{\frac{\frac{1}{T}\int_{0}^{T}\left[\frac{dx(t)}{dt}\right]^{2}dt}{\frac{1}{T}\int_{0}^{T} x^{2}(t)\,dt}} \quad (15.327)

as the estimate of the mean-square frequency that tends to f̄ as T → ∞. The flowchart of the measurer of the mean-square frequency of the stochastic process is shown in Figure 15.19. To define the characteristics of the mean-square frequency estimate, we represent the numerator and denominator in (15.327) in the following form:

\frac{1}{T}\int_{0}^{T}\left[\frac{dx(t)}{dt}\right]^{2}dt = Var_{\dot{x}} + \Delta Var_{\dot{x}}, \quad (15.328)

\frac{1}{T}\int_{0}^{T} x^{2}(t)\,dt = Var_{x} + \Delta Var_{x}, \quad (15.329)

where

  • Varx and Varx˙ are the mathematical expectations of variances of the stochastic process and its derivative

  • ΔVar_x and ΔVar_ẋ are the random errors in the determination of the earlier-mentioned variances within the limits of the finite observation time interval [0, T]

Images

FIGURE 15.19 Measurement of the averaged squared frequency of stochastic process.

As we discussed before, the estimates of the variances are unbiased, and the variances of the variance estimates are defined by (13.62), where we need to use the corresponding correlation function of the investigated stochastic process and of its first derivative instead of σ²ℛ(τ).

Going forward, we assume that the condition T ≫ τcor is satisfied. In this case, the errors ΔVar_x and ΔVar_ẋ will be small compared to Var_x and Var_ẋ. To define the bias of the mean-square frequency estimate under the previous assumptions, we can write

\bar{f}^{*} = \frac{1}{2\pi}\sqrt{\frac{Var_{\dot{x}}+\Delta Var_{\dot{x}}}{Var_{x}+\Delta Var_{x}}} = \bar{f}\sqrt{\frac{1+\Delta Var_{\dot{x}}/Var_{\dot{x}}}{1+\Delta Var_{x}/Var_{x}}} \approx \bar{f}\left\{1+\frac{1}{2}\frac{\Delta Var_{\dot{x}}}{Var_{\dot{x}}}-\frac{1}{8}\frac{\Delta Var_{\dot{x}}^{2}}{Var_{\dot{x}}^{2}}\right\}\left\{1-\frac{1}{2}\frac{\Delta Var_{x}}{Var_{x}}+\frac{3}{8}\frac{\Delta Var_{x}^{2}}{Var_{x}^{2}}\right\}. \quad (15.330)

Now, we are able to obtain the relative bias of estimate

\frac{\Delta\bar{f}}{\bar{f}} = \frac{\langle\bar{f}^{*}-\bar{f}\rangle}{\bar{f}} \approx -\frac{1}{8}\left\{\frac{\langle\Delta Var_{\dot{x}}^{2}\rangle}{Var_{\dot{x}}^{2}} - 3\frac{\langle\Delta Var_{x}^{2}\rangle}{Var_{x}^{2}} + 2\frac{\langle\Delta Var_{\dot{x}}\,\Delta Var_{x}\rangle}{Var_{\dot{x}}\,Var_{x}}\right\}, \quad (15.331)

where under the condition T ≫ τcor we have

\langle\Delta Var_{\dot{x}}\,\Delta Var_{x}\rangle = \frac{4}{T}\int_{0}^{T}\left\{\frac{dR(\tau)}{d\tau}\right\}^{2}d\tau. \quad (15.332)

As a result, the relative bias can be presented in the following form:

\frac{\Delta\bar{f}}{\bar{f}} = -\frac{1}{2T}\left\{\int_{0}^{\infty}\left[\frac{\mathcal{R}''(\tau)}{\mathcal{R}''(0)}\right]^{2}d\tau - 3\int_{0}^{\infty}\mathcal{R}^{2}(\tau)\,d\tau + 2\int_{0}^{\infty}\frac{[\mathcal{R}'(\tau)]^{2}}{|\mathcal{R}''(0)|}\,d\tau\right\}, \quad (15.333)

where ℛ(τ), ℛ′(τ), and ℛ″(τ) are the normalized correlation function of the investigated stochastic process and its first and second derivatives. As applied to the normalized correlation function of the investigated stochastic process given by (15.305), we can write

\frac{\Delta\bar{f}}{\bar{f}} = \frac{5\sqrt{2\pi}}{32\,\alpha T}; \quad (15.334)

that is, the bias of the estimate is inversely proportional to the observation time interval T.

Define the dispersion of the mean-square frequency estimate

D\{\bar{f}^{*}\} = \langle(\bar{f}^{*}-\bar{f})^{2}\rangle = \bar{f}^{2}\left\langle\left\{\sqrt{\frac{1+\Delta Var_{\dot{x}}/Var_{\dot{x}}}{1+\Delta Var_{x}/Var_{x}}}-1\right\}^{2}\right\rangle = \bar{f}^{2}\left\langle\frac{1+\Delta Var_{\dot{x}}/Var_{\dot{x}}}{1+\Delta Var_{x}/Var_{x}} + 1 - 2\sqrt{\frac{1+\Delta Var_{\dot{x}}/Var_{\dot{x}}}{1+\Delta Var_{x}/Var_{x}}}\right\rangle. \quad (15.335)

Using two-dimensional Taylor expansion in series for the first and third terms in (15.335) about the points

\frac{\Delta Var_{\dot{x}}}{Var_{\dot{x}}} = 0 \quad \text{and} \quad \frac{\Delta Var_{x}}{Var_{x}} = 0

and retaining terms up to the second order, we obtain

D\{\bar{f}^{*}\} \approx \bar{f}^{2}\left\langle\left\{1+\frac{\Delta Var_{\dot{x}}}{Var_{\dot{x}}}\right\}\left\{1-\frac{\Delta Var_{x}}{Var_{x}}+\frac{\Delta Var_{x}^{2}}{Var_{x}^{2}}\right\} + 1 - 2\left\{1+\frac{1}{2}\frac{\Delta Var_{\dot{x}}}{Var_{\dot{x}}}-\frac{1}{8}\frac{\Delta Var_{\dot{x}}^{2}}{Var_{\dot{x}}^{2}}\right\}\left\{1-\frac{1}{2}\frac{\Delta Var_{x}}{Var_{x}}+\frac{3}{8}\frac{\Delta Var_{x}^{2}}{Var_{x}^{2}}\right\}\right\rangle. \quad (15.336)

After averaging, to a first approximation we obtain

D\{\bar{f}^{*}\} \approx \frac{\bar{f}^{2}}{T}\left\{\int_{0}^{\infty}\left[\frac{\mathcal{R}''(\tau)}{\mathcal{R}''(0)}\right]^{2}d\tau + \int_{0}^{\infty}\mathcal{R}^{2}(\tau)\,d\tau - 2\int_{0}^{\infty}\frac{[\mathcal{R}'(\tau)]^{2}}{|\mathcal{R}''(0)|}\,d\tau\right\}. \quad (15.337)

As applied to the normalized correlation function given by (15.305), we obtain

\frac{D\{\bar{f}^{*}\}}{\bar{f}^{2}} \approx \frac{3\sqrt{2\pi}}{16\,\alpha T} \approx \frac{0.47}{\alpha T}. \quad (15.338)

Comparing (15.338) with the relative variance (15.308) of the estimate of the average number of Gaussian stochastic process spikes for the same normalized correlation function, we see that estimating the mean-square frequency according to (15.326) instead of through the average number of spikes leads to a bias of the estimate but decreases the estimate dispersion by a factor of approximately 2.8.
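A short numerical sketch of the measurer of Figure 15.19 is given below: the numerator and denominator of (15.327) are formed from a realization and its numerically differentiated copy, and the result is compared with the true mean-square frequency from (15.326). The process generator and parameter values are illustrative assumptions; the differentiator is approximated by a finite difference.

```python
import numpy as np

rng = np.random.default_rng(10)
alpha, dt, T = 1.0, 0.002, 200.0
n = int(T / dt)

# Gaussian process with normalized correlation function exp(-alpha^2 t^2), cf. (15.305).
t_h = np.arange(-4.0 / alpha, 4.0 / alpha + dt, dt)
h = np.exp(-2.0 * alpha**2 * t_h**2)
x = np.convolve(rng.normal(size=n + t_h.size - 1), h, mode="valid")
x /= x.std()

xdot = np.gradient(x, dt)                    # differentiator channel of Figure 15.19
num = np.mean(xdot**2)                       # (1/T) int (dx/dt)^2 dt, cf. (15.328)
den = np.mean(x**2)                          # (1/T) int x^2 dt,      cf. (15.329)
f_est = np.sqrt(num / den) / (2.0 * np.pi)   # estimate (15.327)

# True value from (15.326): -R''(0) = 2 alpha^2 for the correlation function above.
f_true = np.sqrt(2.0) * alpha / (2.0 * np.pi)
print(f"estimated mean-square frequency {f_est:.4f} Hz, true value {f_true:.4f} Hz")
```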

15.8 Summary and Discussion

Methods of correlation function estimation can be classified into three groups according to the principle of realization of the delay and other elements of the correlators: analog, digital, and analog-to-digital. In turn, the analog measurement procedures can be divided based on the methods using a representation of the investigated stochastic process either as a continuous process or as a sampled process. As a rule, physical delay lines are used by analog methods with continuous representation of the investigated stochastic process. Under discretization of the investigated stochastic process in time, the physical delay line can be replaced by corresponding circuits. When digital procedures are used to measure the correlation function estimate, the stochastic process is sampled in time and transformed into binary numbers by analog-to-digital conversion. Further operations associated with the signal delay, multiplication, and integration are carried out by shift registers, a summator, and so on.

We can see that the maximum value of variance of the correlation function estimate corresponds to the case τ = 0 and is equal to the variance of the stochastic process variance estimate given by (13.61) and (13.62). Minimum value of variance of the correlation function estimate corresponds to the case when τ ≫ τcor and is equal to one-half of the variance of the stochastic process variance estimate.

The correlation function of a stationary stochastic process can be presented in the form of an expansion in series with respect to earlier-given normalized orthogonal functions. The variance of the correlation function estimate increases with an increase in the number of terms v of the expansion in series. Because of this, we must take this fact into consideration when choosing the number of terms of the expansion in series.

In some practical problems, the correlation function of the stochastic process can be measured accurately with some parameters defining the character of its behavior. In this case, the measurement of the correlation function can be reduced to measurement or estimation of unknown parameters of the correlation function. Because of this, we consider the optimal estimate of an arbitrary parameter of the correlation function, assuming that the investigated stochastic process ξ(t) is the stationary Gaussian stochastic process observed within the limits of the time interval [0, T] against the background of Gaussian stationary noise ζ(t) with known correlation function.

The optimal estimate of stochastic process correlation function can be found in the form of estimations of the elements Rij of the correlation matrix R or elements Cij of the inverse matrix C. In the case of Gaussian stationary stochastic process with the multidimensional pdf given by (12.169), the solution of likelihood ratio equation allows us to obtain the estimates Cij and, consequently, the estimates of elements Rij of the correlation matrix R.

Based on (15.272), we can design the flowchart of spectral density measurer shown in Figure 15.13. The spectral density value at the fixed frequency coincides accurately within the constant factor with the variance of stochastic process at the filter output with known bandwidth. Operation principles of the spectral density measurer are evident from Figure 15.13. To define the spectral density for all possible values of frequencies, we need to design the multichannel spectrum analyzer and the central frequency of narrowband filter must be changed discretely or continuously. As a rule, we need to carry out a shift by the spectrum frequency of the investigated stochastic process using, for example, the linear law of frequency transformation instead of filter tuning by frequency. The structure of such measurer is depicted in Figure 15.14. The sawtooth generator controls the operation of measurer changing a frequency of heterodyne.

In many applications, we need to know the statistical parameters of stochastic process spike (see Figure 15.15a): the spike mean or the average number of down-up cross sections of some horizontal level M within the limits of the observation time interval [0, T], the average duration of spike, and the average interval between spikes. To measure these parameters of spikes, the stochastic process realization x(t) is transformed by the nonlinear transformer (threshold circuitry) into the pulse sequence normalized by the amplitude ητ with duration τi (Figure 15.15b) or normalized by the amplitude ηθ with duration θi (Figure 15.15c).

The mean-square frequency f¯ given by (15.6) is widely used as a parameter characterizing the spectral density of the stochastic process. The value f¯ defines a dispersion of component of the stochastic process spectral density relative to zero frequency. As applied to low-frequency stationary stochastic processes, the mean-square frequency f¯ characterizes the effective bandwidth of spectral density.

References

1. Lunge, F. 1963. Correlation Electronics. Leningrad, Russia: Nauka.

2. Ball, G.A. 1968. Instrumental Correlation Analysis of Stochastic Processes. Moscow, Russia: Energy.

3. Kay, S.M. 1993. Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ: Prentice Hall, Inc.

4. Lampard, D.G. 1955. New method of determining correlation functions of stationary time series. Proceedings of the IEE, C-102(1): 343.

5. Kay, S.M. 1998. Fundamentals of Statistical Signal Processing: Detection Theory. Upper Saddle River, NJ: Prentice Hall, Inc.

6. Gribanov, Yu I. and V.L. Malkov. 1978. Selected Estimates of Spectral Characteristics of Stationary Stochastic Processes. Moscow, Russia: Energy.
