12 Estimation of Mathematical Expectation

12.1 Conditional Functional

Let the analyzed Gaussian stochastic process ξ(t) possess the mathematical expectation defined as

$$E(t) = E_0 s(t), \qquad (12.1)$$

the correlation function $R(t_1, t_2)$, and be observed within the limits of the finite time interval [0, T]. We assume that the law of variation of the mathematical expectation, $s(t)$, and the correlation function $R(t_1, t_2)$ are known. Thus, the received realization takes the following form:

$$x(t) = E(t) + x_0(t) = E_0 s(t) + x_0(t), \quad 0 \le t \le T, \qquad (12.2)$$

where

$$x_0(t) = x(t) - E(t) \qquad (12.3)$$

is the centered Gaussian stochastic process. The problem of estimating the mathematical expectation is thus equivalent to the problem of estimating the amplitude $E_0$ of a deterministic signal in additive Gaussian noise.

The pdf functional of the Gaussian process given by (12.2) takes the following form:

$$F[x(t) \mid E_0] = B_0 \exp\left\{-0.5\int_0^T\!\!\int_0^T \vartheta(t_1,t_2)\,[x(t_1)-E_0 s(t_1)]\,[x(t_2)-E_0 s(t_2)]\,dt_1\,dt_2\right\}, \qquad (12.4)$$

where $B_0$ is a factor independent of the estimated parameter $E_0$; the function

$$\vartheta(t_1,t_2) = \vartheta(t_2,t_1) \qquad (12.5)$$

is defined from the integral equation

$$\int_0^T R(t_1,t)\,\vartheta(t,t_2)\,dt = \delta(t_2 - t_1). \qquad (12.6)$$

Introduce the following function:

$$\upsilon(t) = \int_0^T s(t_1)\,\vartheta(t_1,t)\,dt_1, \qquad (12.7)$$

which is a solution of the following integral equation:

$$\int_0^T R(t,\tau)\,\upsilon(\tau)\,d\tau = s(t). \qquad (12.8)$$

Since the term of the exponent that is quadratic in the received realization $x(t)$ alone does not depend on the estimated parameter $E_0$, it can be included in the factor before the exponent, and the pdf functional of the Gaussian stochastic process can be written in the following form:

$$F[x(t) \mid E_0] = B_1 \exp\left\{E_0\int_0^T x(t)\,\upsilon(t)\,dt - 0.5E_0^2\int_0^T s(t)\,\upsilon(t)\,dt\right\}, \qquad (12.9)$$

where $B_1$ is a factor independent of the estimated parameter $E_0$.

As applied to analysis of the stationary stochastic process, we can write

$$s(t) = 1, \quad R(t_1,t_2) = R(t_2 - t_1) = R(t_1 - t_2). \qquad (12.10)$$

In doing so, the function $\upsilon(t)$ and the pdf functional are determined as

$$\int_0^T R(t-\tau)\,\upsilon(\tau)\,d\tau = 1; \qquad (12.11)$$

$$F[x(t) \mid E_0] = B_1 \exp\left\{E_0\int_0^T x(t)\,\upsilon(t)\,dt - 0.5E_0^2\int_0^T \upsilon(t)\,dt\right\}. \qquad (12.12)$$

In practice, stationary stochastic processes whose correlation functions can be written in the following form occur very often:

$$R(\tau) = \sigma^2\exp\{-\alpha|\tau|\}, \qquad (12.13)$$

$$R(\tau) = \sigma^2\exp\{-\alpha|\tau|\}\left[\cos\omega_1\tau + \frac{\alpha}{\omega_1}\sin\omega_1|\tau|\right], \qquad (12.14)$$

where $\sigma^2$ is the variance of the stationary stochastic process. These correlation functions correspond to the stationary stochastic processes obtained at the outputs of an RC circuit, with $\alpha = (RC)^{-1}$, and an RLC circuit, with $\omega_1^2 = \omega_0^2 - \alpha^2$, $\omega_0^2 = (LC)^{-1}$, $\alpha = R(2L)^{-1}$, $\omega_0 > \alpha$, when their inputs are excited by "white" noise.

Solution of (12.8) for the correlation functions given by (12.13) and (12.14) takes the following form, respectively:

$$\upsilon(t) = \frac{\alpha}{2\sigma^2}\left[s(t) - \alpha^{-2}s''(t)\right] + \frac{1}{\sigma^2}\left\{\left[s(0) - \alpha^{-1}s'(0)\right]\delta(t) + \left[s(T) + \alpha^{-1}s'(T)\right]\delta(T-t)\right\}, \qquad (12.15)$$

$$\upsilon(t) = \frac{1}{4\sigma^2\alpha\omega_0^2}\left[s''''(t) + 2(\omega_0^2 - 2\alpha^2)s''(t) + \omega_0^4 s(t)\right] + \frac{1}{2\sigma^2\alpha\omega_0^2}\Big\{\left[s'''(0) + (\omega_0^2 - 4\alpha^2)s'(0) + 2\alpha\omega_0^2 s(0)\right]\delta(t) - \left[s'''(T) + (\omega_0^2 - 4\alpha^2)s'(T) - 2\alpha\omega_0^2 s(T)\right]\delta(t-T) + \left[s''(0) - 2\alpha s'(0) + \omega_0^2 s(0)\right]\delta'(t) - \left[s''(T) + 2\alpha s'(T) + \omega_0^2 s(T)\right]\delta'(t-T)\Big\}. \qquad (12.16)$$

The notations $s'(t)$, $s''(t)$, $s'''(t)$, $s''''(t)$ mean the derivatives of the first, second, third, and fourth order with respect to $t$, respectively. If the function $s(t)$ and its derivatives vanish at $t = 0$ and $t = T$, then (12.15) and (12.16) take a simple form. As applied to the stationary stochastic process with $s(t) = 1 = \text{const}$, (12.15) and (12.16) take the following form:

$$\upsilon(t) = \frac{\alpha}{2\sigma^2} + \frac{1}{\sigma^2}\left[\delta(t) + \delta(T-t)\right], \qquad (12.17)$$

$$\upsilon(t) = \frac{\omega_0^2}{4\alpha\sigma^2} + \frac{1}{\sigma^2}\left[\delta(t) + \delta(t-T)\right] + \frac{1}{2\alpha\sigma^2}\left[\delta'(t) - \delta'(t-T)\right]. \qquad (12.18)$$

The following spectral densities

$$S(\omega) = \frac{2\alpha\sigma^2}{\alpha^2 + \omega^2} \qquad (12.19)$$

and

$$S(\omega) = \frac{4\alpha\sigma^2(\omega_1^2 + \alpha^2)}{\omega^4 - 2\omega^2(\omega_1^2 - \alpha^2) + (\omega_1^2 + \alpha^2)^2} \qquad (12.20)$$

correspond to the correlation functions given by (12.13) and (12.14), respectively. It is necessary to note that there is no general procedure to solve (12.8). However, if the correlation function of the stochastic process depends on the absolute value of the difference of arguments $|t_2 - t_1|$, the observation time $T$ is much greater than the correlation interval defined as

$$\tau_{\mathrm{cor}} = \frac{1}{\sigma^2}\int_0^\infty |R(\tau)|\,d\tau = \int_0^\infty |r(\tau)|\,d\tau, \qquad (12.21)$$

where

$$r(\tau) = \frac{R(\tau)}{\sigma^2} \qquad (12.22)$$

is the normalized correlation function, and the function $s(t)$ and its derivatives become zero at $t = 0$ and $t = T$, then it is possible to define the approximate solution of the integral equation (12.8) using the Fourier transform. Applying the Fourier transform to the left and right sides of the equation

$$\int_{-\infty}^{\infty} R(t-\tau)\,\tilde{\upsilon}(\tau)\,d\tau = s(t), \qquad (12.23)$$

it is not difficult to use the inverse Fourier transform in order to obtain

$$\tilde{\upsilon}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{S_s(\omega)}{S(\omega)}\exp\{j\omega t\}\,d\omega, \qquad (12.24)$$

where

  • $S(\omega)$ is the Fourier transform of the correlation function $R(\tau)$

  • $S_s(\omega)$ is the Fourier transform of the function $s(t)$ describing the law of variation of the mathematical expectation

and these transforms are defined as

$$S(\omega) = \int_{-\infty}^{\infty} R(\tau)\exp\{-j\omega\tau\}\,d\tau, \qquad (12.25)$$

$$S_s(\omega) = \int_{-\infty}^{\infty} s(t)\exp\{-j\omega t\}\,dt. \qquad (12.26)$$

The inverse Fourier transform gives us the following formulae:

$$R(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\exp\{j\omega\tau\}\,d\omega, \qquad (12.27)$$

$$s(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_s(\omega)\exp\{j\omega t\}\,d\omega. \qquad (12.28)$$
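As a numerical illustration (a sketch, not part of the original derivation; the Gaussian pulse $s(t)$ and all parameter values are illustrative assumptions), note that for the exponential correlation function (12.13) the ratio $S_s(\omega)/S(\omega)$ in (12.24) corresponds in the time domain to the operator $\tilde{\upsilon}(t) = [\alpha^2 s(t) - s''(t)]/(2\alpha\sigma^2)$. The script below checks that this $\tilde{\upsilon}(t)$ satisfies (12.23) when $s(t)$ and its derivatives practically vanish at the interval ends:

```python
import numpy as np

# Sketch: for R(tau) = sigma^2 exp(-alpha |tau|), the frequency-domain
# solution (12.24) corresponds to v(t) = [alpha^2 s(t) - s''(t)] / (2 alpha sigma^2).
# Verify that int R(t - tau) v(tau) dtau reproduces s(t).
alpha, sigma2, T = 1.0, 1.0, 10.0
n = 2001
t = np.linspace(0.0, T, n)

wdt, ctr = 0.8, T / 2                                 # Gaussian pulse s(t), illustrative
s = np.exp(-(t - ctr) ** 2 / (2 * wdt ** 2))
s2 = s * ((t - ctr) ** 2 / wdt ** 4 - 1 / wdt ** 2)   # exact second derivative s''(t)
v = (alpha ** 2 * s - s2) / (2 * alpha * sigma2)

R = sigma2 * np.exp(-alpha * np.abs(t[:, None] - t[None, :]))
w = np.full(n, t[1] - t[0]); w[0] *= 0.5; w[-1] *= 0.5  # trapezoidal weights
lhs = R @ (w * v)                                     # int R(t - tau) v(tau) dtau
err = np.max(np.abs(lhs - s))
print(f"max |int R v - s| = {err:.2e}")
assert err < 1e-3
```

Because $s(t)$ decays well before the boundaries, the boundary delta terms of (12.29) are negligible here and the purely Fourier-based solution already satisfies the integral equation.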

If the function $s(t)$ and its derivatives do not become zero at $t = 0$ and $t = T$, and the function $S(\omega)$ is a ratio of two polynomials of the $p$th and $d$th orders, respectively, with respect to $\omega^2$ with $d > p$, then there is a need to add the delta function and its derivatives taken at $t = 0$ and $t = T$. Thus, there is a need to define the solution of Equation 12.8 in the following form:

$$\upsilon(t) = \tilde{\upsilon}(t) + \sum_{\mu=0}^{d-1}\left[b_\mu\,\delta^{(\mu)}(t) + c_\mu\,\delta^{(\mu)}(t-T)\right]. \qquad (12.29)$$

Here, the coefficients $b_\mu$ and $c_\mu$ are defined from the solutions of the equations obtained under the substitution of (12.29) into (12.8); $\delta^{(\mu)}(t)$ is the delta function derivative of the $\mu$th order with respect to time.

In the case of the stationary stochastic process, we have $s(t) = 1$. In this case, the Fourier transform of $s(t)$ takes the following form:

$$S_s(\omega) = 2\pi\delta(\omega). \qquad (12.30)$$

From (12.24) and (12.29), we have

$$\upsilon(t) = S^{-1}(\omega = 0) + \sum_{\mu=0}^{d-1}\left[b_\mu\,\delta^{(\mu)}(t) + c_\mu\,\delta^{(\mu)}(t-T)\right]. \qquad (12.31)$$

As applied to the stationary stochastic process with the spectral density given by (12.19), we have that $d = 1$. For this reason, we can write

$$\upsilon(t) = \frac{\alpha}{2\sigma^2} + b_0\delta(t) + c_0\delta(t-T). \qquad (12.32)$$

Substituting (12.13) and (12.32) into (12.11), we obtain

$$0.5\left[\alpha\int_0^T \exp\{-\alpha|t-\tau|\}\,d\tau + b_0\sigma^2\exp\{-\alpha t\} + c_0\sigma^2\exp\{-\alpha(T-t)\}\right] = 1. \qquad (12.33)$$

Dividing the integration interval into two intervals, namely, $0 \le \tau \le t$ and $t \le \tau \le T$, after integration we obtain

$$(b_0\sigma^2 - 1)\exp\{-\alpha t\} + (c_0\sigma^2 - 1)\exp\{\alpha(t-T)\} = 0. \qquad (12.34)$$

This equality is correct only if the coefficients of the terms $\exp\{-\alpha t\}$ and $\exp\{\alpha(t-T)\}$ are equal to zero, that is, $b_0 = c_0 = \sigma^{-2}$. Substituting $b_0$ and $c_0$ into (12.32), we obtain the formula given by (12.17).
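The structure of (12.17) — a constant interior level $\alpha/(2\sigma^2)$ plus concentrated masses at the interval ends — can also be seen by discretizing the integral equation (12.11) and solving the resulting linear system. The following sketch is illustrative (grid size and tolerances are assumptions, not from the original text):

```python
import numpy as np

# Sketch: discretize int_0^T R(t - tau) v(tau) dtau = 1 for
# R(tau) = sigma^2 exp(-alpha |tau|) and solve the linear system. The
# solution reproduces (12.17): level alpha/(2 sigma^2) inside (0, T) plus
# spikes (discrete delta functions) at the ends.
alpha, sigma2, T, n = 1.0, 1.0, 5.0, 1001
t = np.linspace(0.0, T, n)
h = t[1] - t[0]
K = sigma2 * np.exp(-alpha * np.abs(t[:, None] - t[None, :])) * h
v = np.linalg.solve(K, np.ones(n))

interior = v[n // 2]          # ~ alpha / (2 sigma^2)
edge_mass = v[0] * h          # ~ 1 / (2 sigma^2): the boundary delta b0 = 1/sigma^2
                              #   contributes only half its mass inside [0, T]
rho1_sq = v.sum() * h         # ~ (2 + alpha T) / (2 sigma^2), cf. (12.35)
var_opt = 1.0 / rho1_sq       # ~ 2 sigma^2 / (2 + alpha T), cf. (12.48)
print(interior, edge_mass, var_opt)
assert abs(interior - alpha / (2 * sigma2)) < 0.01
assert abs(edge_mass - 1 / (2 * sigma2)) < 0.01
assert abs(var_opt - 2 * sigma2 / (2 + alpha * T)) < 0.005
```

The discrete endpoint mass approaches $1/(2\sigma^2)$ because, in the convention used here, a delta function located exactly at a boundary contributes half of its mass to an integral over $[0, T]$.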

Now, consider the exponent in (12.9). The quantity

$$\rho_1^2 = \int_0^T s(t)\,\upsilon(t)\,dt \qquad (12.35)$$

is the deterministic component or, in other words, the signal when the estimated parameter $E_0 = 1$. The random component

$$\int_0^T x_0(t)\,\upsilon(t)\,dt \qquad (12.36)$$

is the noise component. The variance of the noise component, taking into consideration (12.8), is defined as

$$\left\langle\left[\int_0^T x_0(t)\,\upsilon(t)\,dt\right]^2\right\rangle = \int_0^T\!\!\int_0^T \langle x_0(t_1)\,x_0(t_2)\rangle\,\upsilon(t_1)\,\upsilon(t_2)\,dt_1\,dt_2 = \int_0^T s(t)\,\upsilon(t)\,dt = \rho_1^2. \qquad (12.37)$$

As we can see from (12.37), $\rho_1^2$ is the ratio between the power of the signal and the power of the noise. Because of this, we can say that (12.37) is the signal-to-noise ratio (SNR) when the estimated parameter value $E_0 = 1$.

12.2 Maximum Likelihood Estimate of Mathematical Expectation

Consider the conditional functional given by (12.9) of the observed stochastic process. Solving the likelihood equation with respect to the parameter $E$, we obtain the mathematical expectation estimate

$$E_E = \frac{\int_0^T x(t)\,\upsilon(t)\,dt}{\int_0^T s(t)\,\upsilon(t)\,dt}. \qquad (12.38)$$

As applied to analysis of the stationary stochastic process, (12.38) simplifies, namely,

$$E_E = \frac{\int_0^T x(t)\,\upsilon(t)\,dt}{\int_0^T \upsilon(t)\,dt}. \qquad (12.39)$$

In doing so, under the condition $T \gg \tau_{\mathrm{cor}}$ we can neglect the values of the stochastic process and its derivatives at $t = 0$ and $t = T$ under estimation of the mathematical expectation; in other words, we can consider the following approximation correct:

$$\upsilon(t) = S^{-1}(\omega = 0). \qquad (12.40)$$

In this case, we obtain the asymptotic formula for the mathematical expectation estimate of the stationary stochastic process, namely,

$$E_E = \lim_{T\to\infty}\frac{1}{T}\int_0^T x(t)\,dt, \qquad (12.41)$$

which is widely used in the theory of stochastic processes to define the mathematical expectation of ergodic stochastic processes with arbitrary pdf. At large but finite values of $T/\tau_{\mathrm{cor}}$, we can neglect the effect of the values of the stochastic process and its derivatives at $t = 0$ and $t = T$ on the mathematical expectation estimate. As a result, we can write

$$E_E \approx \frac{1}{T}\int_0^T x(t)\,dt. \qquad (12.42)$$


FIGURE 12.1 Optimal structure to define the mathematical expectation estimate.

Although the obtained formulae for the mathematical expectation estimate are optimal in the case of the Gaussian stochastic process, within the class of linear estimates these formulae remain optimal also for stochastic processes whose pdf differs from Gaussian. Equations 12.38 and 12.39 are true if the a priori interval of variation of the mathematical expectation is not limited. Equation 12.38 allows us to define the optimal device structure to estimate the mathematical expectation of the stochastic process (Figure 12.1). The main operation is the linear integration of the received realization $x(t)$ with the weight $\upsilon(t)$ that is defined based on the solution of the integral equation (12.8). The decision device issues the output process at the instant $t = T$. To obtain the current value of the mathematical expectation estimate, the limits of integration in (12.38) must be $t - T$ and $t$, respectively. Then the parameter estimate is defined as

$$E_E(t) = \frac{\int_{t-T}^{t} x(\tau)\,\upsilon(\tau)\,d\tau}{\int_{t-T}^{t} s(\tau)\,\upsilon(\tau)\,d\tau}. \qquad (12.43)$$

The weight integration can be done by a linear filter with the corresponding impulse response. For this purpose, we introduce the function

$$\upsilon(\tau) = h(t-\tau) \quad \text{or} \quad h(\tau) = \upsilon(t-\tau) \qquad (12.44)$$

and substitute this function into (12.43) instead of $\upsilon(\tau)$, introducing the new variable $t - \tau = z$. Then (12.43) can be transformed to the following form:

$$E_E(t) = \frac{\int_0^T x(t-z)\,h(z)\,dz}{\int_0^T s(t-z)\,h(z)\,dz}. \qquad (12.45)$$

The integrals in (12.45) are the output responses of the linear filter with the impulse response $h(t)$ given by (12.44) when the filter input is excited by $x(t)$ and $s(t)$, respectively. The mathematical expectation of the estimate is

$$\langle E_E\rangle = \frac{1}{\rho_1^2}\int_0^T \langle x(t)\rangle\,\upsilon(t)\,dt = E_0; \qquad (12.46)$$

that is, the maximum likelihood estimate of the mathematical expectation of the stochastic process is both conditionally and unconditionally unbiased. The conditional variance of the mathematical expectation estimate can be presented in the following form:

$$\mathrm{Var}\{E_E \mid E_0\} = \langle E_E^2\rangle - \langle E_E\rangle^2 = \frac{1}{\rho_1^4}\int_0^T\!\!\int_0^T \langle x_0(t_1)\,x_0(t_2)\rangle\,\upsilon(t_1)\,\upsilon(t_2)\,dt_1\,dt_2 = \frac{1}{\rho_1^2}; \qquad (12.47)$$

that is, the variance of the estimate does not depend on $E_0$ and is simultaneously the unconditional variance. Since, according to (12.38), the integration of the Gaussian stochastic process is a linear operation, the estimate $E_E$ is subjected to the Gaussian distribution.

Let the analyzed stochastic process be stationary and possess the correlation function given by (12.13). Substituting the function $\upsilon(t)$ by its value from (12.17) into (12.47) and integrating with the delta functions, we obtain

$$\mathrm{Var}\{E_E\} = \frac{2\sigma^2}{2 + (T/\tau_{\mathrm{cor}})} = \frac{2\sigma^2}{2 + \alpha T} = \frac{2\sigma^2}{2 + p}, \qquad (12.48)$$

where $p = \alpha T = T/\tau_{\mathrm{cor}}$ is the ratio between the time required to analyze the stochastic process and the correlation interval of the same stochastic process. In doing so, according to (12.38), the formula for the optimal estimate takes the following form:

$$E_E = \frac{x(0) + x(T) + \alpha\int_0^T x(t)\,dt}{2 + p}. \qquad (12.49)$$

If $p \gg 1$, we have

$$\mathrm{Var}\{E_E\} \approx \frac{2\sigma^2}{p}. \qquad (12.50)$$
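The variance (12.48) of the optimal estimate (12.49) can be checked by simulation. The Monte Carlo sketch below (a sanity check with illustrative parameters; the AR(1) recursion is one standard way to generate an exponentially correlated Gaussian sequence) compares the empirical variance with $2\sigma^2/(2+p)$:

```python
import numpy as np

# Monte Carlo sketch: simulate a Gaussian process with correlation
# function (12.13) and mean E0, and check that the variance of the
# optimal estimate (12.49) approaches 2 sigma^2 / (2 + p) from (12.48).
rng = np.random.default_rng(1)
alpha, sigma2, T, dt, E0 = 1.0, 1.0, 5.0, 0.01, 2.0
n = int(T / dt) + 1
phi = np.exp(-alpha * dt)                      # exact AR(1) discretization
innov_std = np.sqrt(sigma2 * (1 - phi ** 2))

trials = 4000
x = np.empty((trials, n))
x[:, 0] = E0 + rng.normal(0.0, np.sqrt(sigma2), trials)
for i in range(1, n):
    x[:, i] = E0 + phi * (x[:, i - 1] - E0) + innov_std * rng.normal(size=trials)

p = alpha * T
integral = dt * (x.sum(axis=1) - 0.5 * (x[:, 0] + x[:, -1]))   # trapezoidal rule
est = (x[:, 0] + x[:, -1] + alpha * integral) / (2 + p)        # (12.49)

theory = 2 * sigma2 / (2 + p)                  # (12.48)
print(est.mean(), est.var(), theory)
assert abs(est.mean() - E0) < 0.05             # unbiasedness
assert abs(est.var() / theory - 1) < 0.15      # variance close to (12.48)
```

Note that the plain sample mean (12.42) would have a slightly larger variance here; the endpoint terms $x(0)$ and $x(T)$ in (12.49) carry the weight of the boundary delta functions in (12.17).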

Formulae (12.48) and (12.49) can be obtained without determination of the pdf functional. For this purpose, the value defined by the following equation

$$E^* = \int_0^T h(t)\,x(t)\,dt \qquad (12.51)$$

can be considered as the estimate. Here $h(t)$ is the weight function defined based on the condition of unbiasedness of the estimate, which is equivalent to

$$\int_0^T h(t)\,dt = 1, \qquad (12.52)$$

and minimization of the variance of the estimate,

$$\mathrm{Var}\{E^*\} = \int_0^T\!\!\int_0^T h(t_1)\,h(t_2)\,R(t_1,t_2)\,dt_1\,dt_2. \qquad (12.53)$$

Transform the formula for the variance of the estimate into a convenient form. For this purpose, introduce new variables in the double integral, namely,

$$\tau = t_2 - t_1 \quad \text{and} \quad t_1 = z, \qquad (12.54)$$

and change the order of integration. Taking into consideration that $R(\tau) = R(-\tau)$, we obtain

$$\mathrm{Var}\{E^*\} = 2\int_0^T R(\tau)\int_0^{T-\tau} h(z)\,h(z+\tau)\,dz\,d\tau. \qquad (12.55)$$

As was shown in Ref. [1], a definition of the optimal form of the weight function $h(t)$ is reduced to a solution of the integral Wiener-Hopf equation

$$\int_0^T h(\tau)\,R(\tau - s)\,d\tau - \mathrm{Var}_{\min}\{E^*\} = 0, \quad 0 \le s \le T, \qquad (12.56)$$

where $\mathrm{Var}_{\min}\{E^*\}$ is the minimal variance of the estimate, jointly with the condition given by (12.52). However, the solution of (12.56) is complicated.

Define the formula for the optimal estimate of the mathematical expectation of the stationary stochastic process possessing the correlation function given by (12.14) and the weight function given by (12.18). Substituting (12.18) into the formula for the mathematical expectation estimate of the stochastic process defined as (12.38) and calculating the corresponding integrals, the following is obtained:

$$E_E = \frac{\dfrac{1}{T}\displaystyle\int_0^T x(t)\,dt + \dfrac{2\alpha}{\omega_0^2 T}\left[x(0) + x(T)\right] + \dfrac{1}{\omega_0^2 T}\left[x'(T) - x'(0)\right]}{1 + \dfrac{4\alpha}{\omega_0^2 T}}. \qquad (12.57)$$

In doing so, the variance of the mathematical expectation estimate is defined as

$$\mathrm{Var}\{E_E\} = \frac{4\alpha\sigma^2}{\omega_0^2 T + 4\alpha}. \qquad (12.58)$$

If $\alpha \ll \omega_0$ and $\omega_0 T \gg 1$, the formula for the mathematical expectation estimate of the stationary Gaussian stochastic process transforms to the well-known formula of the mathematical expectation definition of the ergodic stochastic process given by (12.42), and the variance of the mathematical expectation estimate is defined as

$$\mathrm{Var}\{E_E\} \approx \frac{4\alpha\sigma^2}{\omega_0^2 T}. \qquad (12.59)$$

At $\omega_1 = 0$ ($\omega_0 = \alpha$), the correlation function given by (12.14) can be transformed by a limiting process into the following form:

$$R(\tau) = \sigma^2\exp\{-\alpha|\tau|\}\,(1 + \alpha|\tau|), \quad \tau_{\mathrm{cor}} = \frac{2}{\alpha}. \qquad (12.60)$$

In particular, the given correlation function corresponds to the stationary stochastic process at the output of two RC circuits connected in series when "white" noise excites the input. In this case, the formulae for the mathematical expectation estimate and variance take the following form:

$$E_E = \frac{\dfrac{1}{T}\displaystyle\int_0^T x(t)\,dt + \dfrac{2}{\alpha T}\left[x(0) + x(T)\right] + \dfrac{1}{\alpha^2 T}\left[x'(T) - x'(0)\right]}{1 + \dfrac{4}{\alpha T}}, \qquad (12.61)$$

$$\mathrm{Var}\{E_E\} = \frac{4\sigma^2}{\alpha T + 4}. \qquad (12.62)$$

Relationships between the definition of the estimate and the estimate variance of the mathematical expectation of the stochastic processes with other types of correlation functions can be defined analogously.

As we assumed before, the a priori domain of definition of the mathematical expectation is not limited. Now let the a priori domain of definition of the mathematical expectation be limited both by an upper bound and by a lower bound, that is,

$$E_L \le E_0 \le E_U. \qquad (12.63)$$

In the considered case, the mathematical expectation estimate $\hat{E}$ cannot lie outside the interval given by (12.63), even though it is defined as the position of the absolute maximum of the likelihood functional logarithm (12.9). The likelihood functional logarithm reaches its maximum at $E = E_E$. As a result, when $E_E \le E_L$ the likelihood functional logarithm is a monotonically decreasing function within the limits of the interval $[E_L, E_U]$ and reaches its maximum value at $E = E_L$. If $E_E \ge E_U$, the likelihood functional logarithm is a monotonically increasing function within the limits of the interval $[E_L, E_U]$ and, consequently, reaches its maximum value at $E = E_U$. Thus, in the case of the limited a priori domain of definition of the mathematical expectation, the estimate of the mathematical expectation of the stochastic process can be presented in the following form:

$$\hat{E} = \begin{cases} E_U & \text{if } E_E > E_U, \\ E_E & \text{if } E_L \le E_E \le E_U, \\ E_L & \text{if } E_E < E_L. \end{cases} \qquad (12.64)$$

Taking into consideration the last relationship, the structure of the optimal device for determination of the mathematical expectation estimate in the case of the limited a priori domain of mathematical expectation definition can be obtained by the addition of a linear limiter with the following characteristic:

$$g(z) = \begin{cases} E_U & \text{if } z > E_U, \\ z & \text{if } E_L \le z \le E_U, \\ E_L & \text{if } z < E_L \end{cases} \qquad (12.65)$$

to the circuit shown in Figure 12.1. Using the well-known relationships [2] for the transformation of the pdf of a Gaussian random variable by an inertialess nonlinear system with the characteristic $g(z)$, we can define the conditional pdf of the mathematical expectation estimate as follows:

$$p(\hat{E} \mid E_0) = \begin{cases} P_L\,\delta(\hat{E} - E_L) + P_U\,\delta(\hat{E} - E_U) + \dfrac{1}{\sqrt{2\pi\mathrm{Var}(E_E|E_0)}}\exp\left\{-\dfrac{(\hat{E} - E_0)^2}{2\mathrm{Var}(E_E|E_0)}\right\} & \text{at } E_L \le \hat{E} \le E_U, \\ 0 & \text{at } \hat{E} < E_L,\ \hat{E} > E_U. \end{cases} \qquad (12.66)$$

Here

$$P_L = 1 - Q\left(\frac{E_L - E_0}{\sqrt{\mathrm{Var}(E_E|E_0)}}\right), \quad P_U = Q\left(\frac{E_U - E_0}{\sqrt{\mathrm{Var}(E_E|E_0)}}\right), \qquad (12.67)$$

where

$$Q(z) = \frac{1}{\sqrt{2\pi}}\int_z^\infty \exp\{-0.5y^2\}\,dy \qquad (12.68)$$

is the Gaussian Q function [3,4], and $\mathrm{Var}(E_E|E_0)$ is the variance given by (12.47). The conditional bias is defined as

$$b(\hat{E} \mid E_0) = \langle\hat{E}\rangle - E_0 = \int_{-\infty}^{\infty}(\hat{E} - E_0)\,p(\hat{E} \mid E_0)\,d\hat{E} = P_L(E_L - E_0) + P_U(E_U - E_0) + \sqrt{\frac{\mathrm{Var}(E_E|E_0)}{2\pi}}\left\{\exp\left[-\frac{(E_L - E_0)^2}{2\mathrm{Var}(E_E|E_0)}\right] - \exp\left[-\frac{(E_U - E_0)^2}{2\mathrm{Var}(E_E|E_0)}\right]\right\}. \qquad (12.69)$$
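The bias formula (12.69) is easy to verify numerically: the clipped estimate (12.64) is just `clip` applied to a Gaussian variable. A small sketch (the bounds, true value, and variance below are illustrative assumptions):

```python
import numpy as np
from math import erfc, sqrt, pi, exp

# Sketch: check the conditional bias (12.69) of the clipped estimate (12.64)
# against direct numerical integration of the clipped-Gaussian pdf (12.66).
def Q(z):                                   # Gaussian Q function (12.68)
    return 0.5 * erfc(z / sqrt(2.0))

EL, EU, E0, var = -1.0, 2.0, 0.5, 0.8
s = sqrt(var)

# Closed form (12.69)
PL = 1.0 - Q((EL - E0) / s)
PU = Q((EU - E0) / s)
b_closed = (PL * (EL - E0) + PU * (EU - E0)
            + s / sqrt(2 * pi) * (exp(-(EL - E0) ** 2 / (2 * var))
                                  - exp(-(EU - E0) ** 2 / (2 * var))))

# Direct integration: E_hat = clip(E_E, EL, EU) with E_E ~ N(E0, var)
z = np.linspace(E0 - 10 * s, E0 + 10 * s, 200001)
dz = z[1] - z[0]
pdf = np.exp(-(z - E0) ** 2 / (2 * var)) / sqrt(2 * pi * var)
f = (np.clip(z, EL, EU) - E0) * pdf
b_num = dz * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule
print(b_closed, b_num)
assert abs(b_closed - b_num) < 1e-5
```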

Thus, in the case of the limited a priori domain of possible values of the mathematical expectation of the stochastic process, the maximum likelihood estimate of the stochastic process mathematical expectation is conditionally biased. However, at small values of the variance of the maximum likelihood estimate, that is, $\mathrm{Var}(E_E|E_0) \to 0$, as follows from (12.67) and (12.69), we obtain the asymptotic expression

$$\lim_{\mathrm{Var}(E_E|E_0)\to 0} b(\hat{E} \mid E_0) = 0; \qquad (12.70)$$

that is, at $\mathrm{Var}(E_E|E_0) \to 0$, the maximum likelihood estimate of the mathematical expectation of the stochastic process is asymptotically unbiased. At high values of the variance of the maximum likelihood estimate, that is, $\mathrm{Var}(E_E|E_0) \to \infty$, the bias of the maximum likelihood estimate tends to approach

$$b(\hat{E} \mid E_0) = 0.5\,(E_L + E_U - 2E_0). \qquad (12.71)$$

The conditional dispersion of the maximum likelihood estimate of the stochastic process mathematical expectation is defined as

$$D(\hat{E} \mid E_0) = \int_{-\infty}^{\infty}(\hat{E} - E_0)^2\,p(\hat{E} \mid E_0)\,d\hat{E} = P_L(E_L - E_0)^2 + P_U(E_U - E_0)^2 + \mathrm{Var}(E_E|E_0)\,(1 - P_L - P_U) + \sqrt{\frac{\mathrm{Var}(E_E|E_0)}{2\pi}}\left\{(E_L - E_0)\exp\left[-\frac{(E_L - E_0)^2}{2\mathrm{Var}(E_E|E_0)}\right] - (E_U - E_0)\exp\left[-\frac{(E_U - E_0)^2}{2\mathrm{Var}(E_E|E_0)}\right]\right\}. \qquad (12.72)$$

At small values of the variance of the maximum likelihood estimate of the stochastic process mathematical expectation,

$$\frac{\sqrt{\mathrm{Var}(E_E|E_0)}}{E_U - E_L} \ll 1 \quad \text{and} \quad E_L < E_0 < E_U, \qquad (12.73)$$

or if the limiting process is carried out at $E_L \to -\infty$ and $E_U \to \infty$, the conditional dispersion of the maximum likelihood estimate of the stochastic process mathematical expectation coincides with the variance of the estimate given by (12.47). If the true value of the mathematical expectation coincides with one of the two bounds of the a priori domain of possible values of the mathematical expectation, then the following approximation is true:

$$D(\hat{E} \mid E_0) \approx 0.5\,\mathrm{Var}(E_E|E_0); \qquad (12.74)$$

that is, the dispersion of the estimate is half of that in the unlimited a priori domain case. With increasing variance of the maximum likelihood estimate, $\mathrm{Var}(E_E|E_0) \to \infty$, the conditional dispersion of the maximum likelihood estimate tends to approach the finite value (since $P_L = P_U = 0.5$)

$$D(\hat{E} \mid E_0) \to 0.5\left[(E_L - E_0)^2 + (E_U - E_0)^2\right], \qquad (12.75)$$

whereas the dispersion of the maximum likelihood estimate of the stochastic process mathematical expectation within the unlimited a priori domain of possible values increases without limit as $\mathrm{Var}(E_E|E_0) \to \infty$. It is important to note that the bias and dispersion of the maximum likelihood estimate are defined here as conditional characteristics, since, as (12.69) and (12.72) show, in general they depend on the true value $E_0$ of the mathematical expectation.

Determine the unconditional bias and dispersion of the maximum likelihood estimate of the stochastic process mathematical expectation in the case of the limited a priori domain of possible estimate values. For this purpose, it is necessary to average the conditional characteristics given by (12.69) and (12.72) with respect to the possible values of the estimated parameter, assuming that the a priori pdf of the estimated parameter is uniform within the limits of the interval $[E_L, E_U]$. In this case, we observe that the unconditional estimate is unbiased, and the unconditional dispersion is determined in the following form:

$$D(\hat{E}) = \mathrm{Var}(E_E|E_0)\left\{1 - 2Q\left[\frac{E_U - E_L}{\sqrt{\mathrm{Var}(E_E|E_0)}}\right]\right\} + \frac{2}{3}(E_U - E_L)^2\,Q\left[\frac{E_U - E_L}{\sqrt{\mathrm{Var}(E_E|E_0)}}\right] - \frac{8\,\mathrm{Var}^{3/2}(E_E|E_0)}{3\sqrt{2\pi}\,(E_U - E_L)}\left\{1 - \exp\left[-\frac{(E_U - E_L)^2}{2\mathrm{Var}(E_E|E_0)}\right]\right\} - \frac{2\sqrt{\mathrm{Var}(E_E|E_0)}\,(E_U - E_L)}{3\sqrt{2\pi}}\exp\left[-\frac{(E_U - E_L)^2}{2\mathrm{Var}(E_E|E_0)}\right]. \qquad (12.76)$$

At the same time, it is not difficult to see that at small values of the variance, that is, $\mathrm{Var}(E_E|E_0) \to 0$, the unconditional dispersion transforms into the dispersion of the estimate obtained under the unlimited a priori domain of possible values, $D(\hat{E}) \to \mathrm{Var}(E_E|E_0)$. Otherwise, at high values of the variance, that is, $\mathrm{Var}(E_E|E_0) \to \infty$, the dispersion of the estimate given by (12.47) increases without limit, while the unconditional dispersion given by (12.76) has a limit equal to the mean square width of the a priori domain of possible values of the estimate, that is, $(E_U - E_L)^2/3$.
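The averaging over the uniform a priori pdf can be cross-checked numerically. The sketch below (illustrative bounds and variance) averages the conditional dispersion (12.72) over $E_0$ on a grid and compares the result with the closed form (12.76):

```python
import numpy as np
from math import erfc, sqrt, pi, exp

# Sketch: average the conditional dispersion (12.72) over E0 uniformly
# distributed on [EL, EU] and compare with the closed form (12.76).
def Q(z):
    return 0.5 * erfc(z / sqrt(2.0))

EL, EU, var = 0.0, 2.0, 1.0
s, d = sqrt(var), EU - EL

def D_cond(E0):                                    # (12.72)
    PL = 1.0 - Q((EL - E0) / s)
    PU = Q((EU - E0) / s)
    gL = exp(-(EL - E0) ** 2 / (2 * var))
    gU = exp(-(EU - E0) ** 2 / (2 * var))
    return (PL * (EL - E0) ** 2 + PU * (EU - E0) ** 2
            + var * (1 - PL - PU)
            + s / sqrt(2 * pi) * ((EL - E0) * gL - (EU - E0) * gU))

e0 = np.linspace(EL, EU, 20001)
vals = np.array([D_cond(x) for x in e0])
de = e0[1] - e0[0]
D_avg = de * (vals.sum() - 0.5 * (vals[0] + vals[-1])) / d   # uniform average

q = d / s                                          # closed form (12.76)
D_closed = (var * (1 - 2 * Q(q)) + (2.0 / 3.0) * d ** 2 * Q(q)
            - 8 * var * s / (3 * sqrt(2 * pi) * d) * (1 - exp(-q * q / 2))
            - 2 * s * d / (3 * sqrt(2 * pi)) * exp(-q * q / 2))
print(D_avg, D_closed)
assert abs(D_avg - D_closed) < 1e-4
```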

12.3 Bayesian Estimate of Mathematical Expectation: Quadratic Loss Function

As before, we analyze the realization $x(t)$ of the stochastic process given by (12.2). The a posteriori pdf of the estimated stochastic process parameter $E$ can be presented in the following form:

$$p_{\mathrm{post}}(E) = \frac{p_{\mathrm{prior}}(E)\exp\left\{E\int_0^T x(t)\,\upsilon(t)\,dt - \dfrac{E^2}{2}\int_0^T s(t)\,\upsilon(t)\,dt\right\}}{\displaystyle\int_{-\infty}^{\infty} p_{\mathrm{prior}}(E)\exp\left\{E\int_0^T x(t)\,\upsilon(t)\,dt - \dfrac{E^2}{2}\int_0^T s(t)\,\upsilon(t)\,dt\right\}\,dE}, \qquad (12.77)$$

where

  • $p_{\mathrm{prior}}(E)$ is the a priori pdf of the estimated stochastic process parameter

  • $\upsilon(t)$ is the solution of the integral equation given by (12.8)

In accordance with the definition given in Section 11.4, the Bayesian estimate $\gamma_E$ is the estimate minimizing the unconditional average risk given by (11.29) at the given loss function. As applied to the quadratic loss function defined as

$$\ell(\gamma, E) = (\gamma - E)^2, \qquad (12.78)$$

the average risk coincides with the dispersion of the estimate. In doing so, the Bayesian estimate $\gamma_E$ is obtained based on minimization of the a posteriori risk at each fixed realization of the observed data:

$$\gamma_E = \int_{-\infty}^{\infty} E\,p_{\mathrm{post}}(E)\,dE. \qquad (12.79)$$

To define the estimate characteristics, that is, the bias and dispersion, it is necessary to determine the first two moments of the random variable $\gamma_E$. However, in the case of an arbitrary a priori pdf of the estimated stochastic process parameter $E$, it is impossible to determine these moments in a general form. In accordance with this statement, we consider the discussed problem for the case of the a priori Gaussian pdf of the estimated parameter; that is, we assume [5]

$$p_{\mathrm{prior}}(E) = \frac{1}{\sqrt{2\pi\mathrm{Var}_{\mathrm{prior}}(E)}}\exp\left\{-\frac{(E - E_{\mathrm{prior}})^2}{2\mathrm{Var}_{\mathrm{prior}}(E)}\right\}, \qquad (12.80)$$

where $E_{\mathrm{prior}}$ and $\mathrm{Var}_{\mathrm{prior}}(E)$ are the a priori values of the mathematical expectation and variance of the estimated parameter. Substituting (12.80) into the formula defining the Bayesian estimate and carrying out the integration, we obtain

$$\gamma_E = \frac{\mathrm{Var}_{\mathrm{prior}}(E)\displaystyle\int_0^T x(t)\,\upsilon(t)\,dt + E_{\mathrm{prior}}}{\mathrm{Var}_{\mathrm{prior}}(E)\displaystyle\int_0^T s(t)\,\upsilon(t)\,dt + 1}. \qquad (12.81a)$$

It is not difficult to note that if $\mathrm{Var}_{\mathrm{prior}}(E) \to \infty$, the a priori pdf of the estimated parameter is approximated by the uniform pdf, and the estimate becomes the maximum likelihood estimate (12.38). In the opposite case, that is, $\mathrm{Var}_{\mathrm{prior}}(E) \to 0$, the a priori pdf degenerates into the Dirac delta function $\delta(E - E_{\mathrm{prior}})$ and, naturally, the estimate $\gamma_E$ coincides with $E_{\mathrm{prior}}$. The mathematical expectation of the estimate can be presented in the following form:

$$\langle\gamma_E\rangle = \frac{\mathrm{Var}_{\mathrm{prior}}(E)\,\rho_1^2\,E_0 + E_{\mathrm{prior}}}{\mathrm{Var}_{\mathrm{prior}}(E)\,\rho_1^2 + 1}, \qquad (12.81b)$$

where $\rho_1^2$ is given by (12.35). In doing so, the conditional bias of the considered estimate is defined as

$$b(\gamma_E \mid E_0) = \langle\gamma_E\rangle - E_0 = \frac{E_{\mathrm{prior}} - E_0}{\mathrm{Var}_{\mathrm{prior}}(E)\,\rho_1^2 + 1}. \qquad (12.82)$$

Averaging the conditional bias over all possible a priori values $E_0$, we find that in the case of the quadratic loss function the Bayesian estimate for the Gaussian a priori pdf is the unconditionally unbiased estimate.

The conditional dispersion of the obtained estimate can be presented in the following form:

$$D(\gamma_E \mid E_0) = \langle(\gamma_E - E_0)^2\rangle = \frac{(E_{\mathrm{prior}} - E_0)^2 + \mathrm{Var}^2_{\mathrm{prior}}(E)\,\rho_1^2}{\left\{\mathrm{Var}_{\mathrm{prior}}(E)\,\rho_1^2 + 1\right\}^2}. \qquad (12.83)$$

We see that the unconditional dispersion coincides with the unconditional variance and is defined as

$$\mathrm{Var}(\gamma_E) = D(\gamma_E) = \frac{\mathrm{Var}_{\mathrm{prior}}(E)}{\mathrm{Var}_{\mathrm{prior}}(E)\,\rho_1^2 + 1}. \qquad (12.84)$$

If $\mathrm{Var}_{\mathrm{prior}}(E)\,\rho_1^2 \gg 1$, then the variance of the considered Bayesian estimate coincides with the variance of the maximum likelihood estimate given by (12.47). In the opposite case, if $\mathrm{Var}_{\mathrm{prior}}(E)\,\rho_1^2 \ll 1$, the variance of the estimate tends to approach

$$\mathrm{Var}(\gamma_E) \approx \mathrm{Var}_{\mathrm{prior}}(E)\left\{1 - \mathrm{Var}_{\mathrm{prior}}(E)\,\rho_1^2\right\}. \qquad (12.85)$$
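Since (12.81a) can be rewritten through the maximum likelihood estimate $E_E = \rho_1^{-2}\int_0^T x(t)\upsilon(t)\,dt$ (for the stationary case), the Bayesian estimate is a shrinkage of $E_E$ toward $E_{\mathrm{prior}}$. A minimal sketch of this behavior (the numbers are illustrative, not from the text):

```python
# Sketch: the Bayesian estimate (12.81a) written as a shrinkage between the
# maximum likelihood estimate E_E and the prior mean E_prior.
def bayes_estimate(rho1_sq, e_ml, var_prior, e_prior):
    # (12.81a) rewritten via E_E = (int x v dt) / rho1^2:
    # gamma_E = (var_prior * rho1^2 * E_E + E_prior) / (var_prior * rho1^2 + 1)
    w = var_prior * rho1_sq
    return (w * e_ml + e_prior) / (w + 1.0)

rho1_sq, e_ml, e_prior = 3.5, 1.8, 1.0
# Diffuse prior -> maximum likelihood estimate; tight prior -> prior mean.
assert abs(bayes_estimate(rho1_sq, e_ml, 1e9, e_prior) - e_ml) < 1e-6
assert abs(bayes_estimate(rho1_sq, e_ml, 1e-9, e_prior) - e_prior) < 1e-6
# The unconditional variance (12.84) never exceeds the prior variance
# or the maximum likelihood variance 1/rho1^2.
for vp in (0.01, 0.1, 1.0, 10.0):
    var_post = vp / (vp * rho1_sq + 1.0)
    assert var_post <= vp and var_post <= 1.0 / rho1_sq
print("shrinkage checks passed")
```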

As applied to arbitrary a priori pdfs of the estimated parameter, we can obtain approximate formulae for the bias and dispersion of the estimate. For this purpose, we can transform (12.77) by substituting the realization $x(t)$ given by (12.2). Then, we can write

$$E\int_0^T x(t)\,\upsilon(t)\,dt - \frac{E^2}{2}\int_0^T s(t)\,\upsilon(t)\,dt = \rho^2 S(E) + \rho N(E), \qquad (12.86)$$

where

$$\rho^2 = E_0^2\,\rho_1^2; \qquad (12.87)$$

$$S(E) = \frac{E\,(2E_0 - E)}{2E_0^2}; \qquad (12.88)$$

$$N(E) = \frac{E}{E_0\,\rho_1}\int_0^T x_0(t)\,\upsilon(t)\,dt. \qquad (12.89)$$

The introduced functions $S(E)$ and $N(E)$ can be called the normalized signal and noise components, respectively. In doing so, they are normalized in such a way that the function $S(E)$ reaches its maximum, equal to 0.5, at $E = E_0$:

$$S(E)_{\max} = S(E = E_0) = 0.5. \qquad (12.90)$$

The noise component $N(E)$ has zero mathematical expectation, and its correlation function is defined as

$$\langle N(E_1)\,N(E_2)\rangle = \frac{E_1 E_2}{E_0^2}. \qquad (12.91)$$

At $E = E_0$, the variance of the noise component can be presented in the following form:

$$\langle N^2(E_0)\rangle = 1. \qquad (12.92)$$

As a result, the Bayesian estimate of the mathematical expectation of the stochastic process can be written in the following form:

$$\gamma_E = \frac{\displaystyle\int_{-\infty}^{\infty} E\,p_{\mathrm{prior}}(E)\exp\{\rho^2 S(E) + \rho N(E)\}\,dE}{\displaystyle\int_{-\infty}^{\infty} p_{\mathrm{prior}}(E)\exp\{\rho^2 S(E) + \rho N(E)\}\,dE}. \qquad (12.93)$$

Consider two limiting cases: the weak and powerful signals or, in other words, the low and high SNR $\rho^2$.

12.3.1 LOW SIGNAL-TO-NOISE RATIO ($\rho^2 \ll 1$)

As we can see from (12.93), at low values of the SNR ($\rho \to 0$), the exponential function tends to approach unity and, as a result, the Bayesian estimate $\gamma_E$ coincides with the a priori mathematical expectation

$$\gamma_E(\rho \to 0) = \gamma_0 = \int_{-\infty}^{\infty} E\,p_{\mathrm{prior}}(E)\,dE = E_{\mathrm{prior}}. \qquad (12.94)$$

At finite values of the SNR, the difference $\gamma_E - \gamma_0$ is not equal to zero. The closeness of $\gamma_E$ to $\gamma_0$ at $\rho \ll 1$ allows us to find the estimate characteristics if we are able to define the deviation of $\gamma_E$ from $\gamma_0$ in the form of corresponding approximations and, consequently, the deviation of $\gamma_E$ from the true value of the estimated parameter $E_0$, since in general $E_{\mathrm{prior}} \ne E_0$. At $\rho \ll 1$, the estimate $\gamma_E$ can be defined in the following approximate form [6]:

$$\gamma_E = \gamma_0 + \rho\gamma_1 + \rho^2\gamma_2 + \rho^3\gamma_3 + \cdots. \qquad (12.95)$$

Considering the exponential function $\exp\{\rho^2 S(E) + \rho N(E)\}$ in (12.93) as a function of $\rho$, we can expand it in a Maclaurin series in $\rho$. Then, neglecting the terms of order higher than $\rho^3$, we can write

$$\int_{-\infty}^{\infty}(\gamma_E - E)\,p_{\mathrm{prior}}(E)\exp\{\rho^2 S(E) + \rho N(E)\}\,dE = \int_{-\infty}^{\infty}\left(\gamma_0 - E + \rho\gamma_1 + \rho^2\gamma_2 + \rho^3\gamma_3 + \cdots\right)p_{\mathrm{prior}}(E)\left\{1 + \rho N(E) + \frac{\rho^2}{2}\left[N^2(E) + 2S(E)\right] + \frac{\rho^3}{6}\left[N^3(E) + 6N(E)\,S(E)\right] + \cdots\right\}dE = 0. \qquad (12.96)$$

Equating to zero the coefficients of the terms with the same order of $\rho$, we obtain the formulae for the corresponding approximations:

$$\gamma_0 = \int_{-\infty}^{\infty} E\,p_{\mathrm{prior}}(E)\,dE = E_{\mathrm{prior}}; \qquad (12.97)$$

$$\gamma_1 = \int_{-\infty}^{\infty}(E - E_{\mathrm{prior}})\,p_{\mathrm{prior}}(E)\,N(E)\,dE; \qquad (12.98)$$

$$\gamma_2 = \int_{-\infty}^{\infty}(E - E_{\mathrm{prior}})\,p_{\mathrm{prior}}(E)\left[0.5N^2(E) + S(E)\right]dE - \gamma_1\int_{-\infty}^{\infty} p_{\mathrm{prior}}(E)\,N(E)\,dE; \qquad (12.99)$$

$$\gamma_3 = \frac{1}{6}\int_{-\infty}^{\infty}(E - E_{\mathrm{prior}})\,p_{\mathrm{prior}}(E)\left[N^3(E) + 6N(E)\,S(E)\right]dE - \gamma_2\int_{-\infty}^{\infty} p_{\mathrm{prior}}(E)\,N(E)\,dE - \gamma_1\int_{-\infty}^{\infty} p_{\mathrm{prior}}(E)\left[0.5N^2(E) + S(E)\right]dE. \qquad (12.100)$$

To define the approximate values of the bias and dispersion of the estimate, it is necessary to determine the corresponding moments of the approximations $\gamma_1$, $\gamma_2$, and $\gamma_3$. Taking into consideration that all odd moments of the stochastic process $x_0(t)$ are equal to zero, we obtain

$$\langle\gamma_1\rangle = \langle\gamma_3\rangle = 0, \qquad (12.101)$$

$$\langle\gamma_2\rangle = \frac{\mathrm{Var}_{\mathrm{prior}}\,(E_0 - E_{\mathrm{prior}})}{E_0^2}, \qquad (12.102)$$

$$\langle\gamma_1^2\rangle = \frac{\mathrm{Var}^2_{\mathrm{prior}}}{E_0^2}, \qquad (12.103)$$

where

$$\mathrm{Var}_{\mathrm{prior}} = \int_{-\infty}^{\infty} E^2\,p_{\mathrm{prior}}(E)\,dE - E_{\mathrm{prior}}^2 \qquad (12.104)$$

is the variance of the a priori distribution.

Based on (12.101) through (12.104), we obtain the conditional bias of the estimate in the following form:

$$b(\gamma_E \mid E_0) = E_{\mathrm{prior}} + \rho^2\frac{\mathrm{Var}_{\mathrm{prior}}\,(E_0 - E_{\mathrm{prior}})}{E_0^2} - E_0 = (E_{\mathrm{prior}} - E_0)\left(1 - \rho_1^2\,\mathrm{Var}_{\mathrm{prior}}\right). \qquad (12.105)$$

This formula for the conditional bias coincides with the expansion of (12.82) at low values of $\mathrm{Var}_{\mathrm{prior}}\,\rho_1^2$, that is, at $\mathrm{Var}_{\mathrm{prior}}\,\rho_1^2 \ll 1$. We can see that the unconditional estimate of the mathematical expectation, averaged with respect to all possible values $E_0$, is unbiased. The conditional dispersion of the estimate, to an accuracy of terms of order $\rho^4$ and higher, is defined in the following form:

$$D(\gamma_E \mid E_0) = \langle(\gamma_E - E_0)^2\rangle \approx (E_{\mathrm{prior}} - E_0)^2 + \rho^2\left[\langle\gamma_1^2\rangle + 2(E_{\mathrm{prior}} - E_0)\langle\gamma_2\rangle\right]. \qquad (12.106)$$

Substituting the determined moments, we obtain

$$D(\gamma_E \mid E_0) \approx (E_{\mathrm{prior}} - E_0)^2\left(1 - 2\,\mathrm{Var}_{\mathrm{prior}}\,\rho_1^2\right) + \rho_1^2\,\mathrm{Var}^2_{\mathrm{prior}}. \qquad (12.107)$$

Averaging (12.107) over all possible values of the estimated parameter $E_0$ with the a priori pdf $p_{\mathrm{prior}}(E_0)$ matched with the pdf $p_{\mathrm{prior}}(E)$, we can define the unconditional dispersion of the mathematical expectation estimate given by the approximation in (12.85).

12.3.2 HIGH SIGNAL-TO-NOISE RATIO ($\rho^2 \gg 1$)

The Bayesian estimate of the stochastic process mathematical expectation given by (12.93) can be written in the following form:

$$\gamma_E = \frac{\displaystyle\int_{-\infty}^{\infty} E\,p_{\mathrm{prior}}(E)\exp\{-\rho^2 Z(E)\}\,dE}{\displaystyle\int_{-\infty}^{\infty} p_{\mathrm{prior}}(E)\exp\{-\rho^2 Z(E)\}\,dE}, \qquad (12.108)$$

where

$$Z(E) = \left[S(E_E) + \rho^{-1}N(E_E)\right] - \left[S(E) + \rho^{-1}N(E)\right], \qquad (12.109)$$

and $E_E$ is the maximum likelihood estimate given by (12.38). We can see that at the maximum likelihood point $E = E_E$, the function $Z(E)$ reaches its minimum and is equal to zero, that is, $Z(E_E) = 0$.

At high values of the SNR $\rho^2$, we can use the asymptotic Laplace formula [7] to determine the integrals in (12.108):

$$\lim_{\lambda\to\infty}\int_a^b \varphi(x)\exp\{-\lambda h(x)\}\,dx \approx \sqrt{\frac{2\pi}{\lambda h''(x_0)}}\exp\{-\lambda h(x_0)\}\,\varphi(x_0), \qquad (12.110)$$

where $a < x_0 < b$ and the function $h(x)$ has a minimum at $x = x_0$. Substituting (12.110) into the initial equation for the Bayesian estimate (12.108), we obtain $\gamma_E \approx E_E$. Thus, at high values of the SNR, the Bayesian estimate of the stochastic process mathematical expectation coincides with the maximum likelihood estimate of the same parameter.
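The accuracy of the Laplace formula (12.110) can be illustrated with a toy integral (the functions $h(x)$ and $\varphi(x)$ below are illustrative choices, not from the original text); the relative error shrinks roughly as $1/\lambda$:

```python
import numpy as np

# Sketch: the asymptotic Laplace formula (12.110) for
# int phi(x) exp(-lam h(x)) dx, with h(x) minimized at an interior x0.
def laplace_approx(phi_x0, h_x0, h2_x0, lam):
    return np.sqrt(2 * np.pi / (lam * h2_x0)) * np.exp(-lam * h_x0) * phi_x0

x0 = 1.0
h = lambda x: (x - x0) ** 2 + 0.1 * (x - x0) ** 4   # minimum at x0, h(x0) = 0
phi = lambda x: 1.0 / (1.0 + x ** 2)                # smooth weight function
h2_x0 = 2.0                                         # h''(x0)

x = np.linspace(-5.0, 7.0, 400001)
dx = x[1] - x[0]
for lam, tol in [(10.0, 0.05), (100.0, 0.005)]:
    f = phi(x) * np.exp(-lam * h(x))
    exact = dx * (f.sum() - 0.5 * (f[0] + f[-1]))   # numerical reference
    approx = laplace_approx(phi(x0), h(x0), h2_x0, lam)
    assert abs(approx / exact - 1) < tol            # error shrinks as lam grows
print("Laplace approximation checks passed")
```

In (12.108) the role of $\lambda$ is played by the SNR $\rho^2$, the role of $x_0$ by the maximum likelihood estimate $E_E$, which is why $\gamma_E \approx E_E$ at high SNR.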

12.4 Applied Approaches to Estimate the Mathematical Expectation

Optimal methods to estimate the stochastic process mathematical expectation require accurate and complete knowledge of the other statistical characteristics of the considered stochastic process. Therefore, as a rule, various nonoptimal procedures based on (12.51) are used in practice. In doing so, the weight function is selected in such a way that the variance of the estimate tends to approach asymptotically the variance of the optimal estimate.

Thus, let the estimate be defined in the following form:

$$E^* = \int_0^T h(t)\,x(t)\,dt. \qquad (12.111)$$

The function of the following form

$$h(t) = \begin{cases} T^{-1} & \text{if } 0 \le t \le T, \\ 0 & \text{if } t < 0,\ t > T, \end{cases} \qquad (12.112)$$

is widely used as the weight function $h(t)$. In doing so, the mathematical expectation estimate of the stochastic process is defined as

$$E^* = \frac{1}{T}\int_0^T x(t)\,dt. \qquad (12.113)$$

The estimation rule given by (12.113) coincides with the approximation in (12.42), which was derived from the optimal estimation rule for the case of an observation interval that is large in comparison with the correlation interval of the considered stochastic process. A device operating according to the rule given by (12.113) is called the ideal integrator.

The variance of the mathematical expectation estimate is defined as

$$\mathrm{Var}(E^*) = \frac{1}{T^2}\int_0^T\!\!\int_0^T R(t_2 - t_1)\,dt_1\,dt_2. \qquad (12.114)$$

We can transform the double integral by introducing the new variables $\tau = t_2 - t_1$ and $t_2 = t$. Then,

$$\mathrm{Var}(E^*) = \frac{1}{T^2}\int_0^T\left\{\int_0^t R(\tau)\,d\tau + \int_0^{T-t} R(\tau)\,d\tau\right\}dt. \qquad (12.115)$$


FIGURE 12.2 Integration domains.

The integration domain is shown in Figure 12.2. Changing the integration order, we obtain

$$\mathrm{Var}(E^*) = \frac{2}{T}\int_0^T\left(1 - \frac{\tau}{T}\right)R(\tau)\,d\tau. \qquad (12.116)$$

If the interval of observation [0, T] is much greater than the correlation interval $\tau_{\mathrm{cor}}$, we can change the upper integration limit in (12.116) to infinity and neglect the integrand term $\tau/T$ in comparison with unity. Then

$$\mathrm{Var}(E^*) \approx \frac{2}{T}\int_0^\infty R(\tau)\,d\tau = \frac{2\sigma^2}{T}\int_0^\infty r(\tau)\,d\tau. \qquad (12.117)$$

If the normalized correlation function $r(\tau)$ is not a sign-changing function of the argument $\tau$, the formula (12.117) takes a simple and obvious form:

$$\mathrm{Var}(E^*) = \frac{2\sigma^2\tau_{\mathrm{cor}}}{T}. \qquad (12.118)$$

Consequently, if the ideal integrator integration time is sufficiently large in comparison with the correlation interval of stochastic process, then to determine the variance of mathematical expectation estimate of stochastic process there is a need to know only the values of variance and the ratio between the observation interval and correlation interval.
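For the exponential correlation function (12.13), the finite-$T$ formula (12.116) can be evaluated in closed form, which makes a convenient numerical sanity check (parameter values below are illustrative):

```python
import numpy as np

# Sketch: check (12.116) for R(tau) = sigma^2 exp(-alpha |tau|) against the
# direct double integral (12.114) and the T >> tau_cor limit (12.118).
alpha, sigma2, T = 2.0, 1.5, 20.0
aT = alpha * T

# (12.116) evaluated in closed form for the exponential correlation:
var_exact = (2 * sigma2 / aT) * (1 - (1 - np.exp(-aT)) / aT)

# Direct numerical double integral (12.114):
n = 2001
t = np.linspace(0.0, T, n)
w = np.full(n, T / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights
R = sigma2 * np.exp(-alpha * np.abs(t[:, None] - t[None, :]))
var_num = w @ R @ w / T ** 2
assert abs(var_num / var_exact - 1) < 1e-3

# Asymptotic form (12.118) with tau_cor = 1/alpha:
var_asym = 2 * sigma2 / aT
print(var_exact, var_asym)      # close when alpha * T >> 1
assert abs(var_asym / var_exact - 1) < 0.05
```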

In the case of a sign-changing normalized correlation function of the argument $\tau$, we can write

$$\int_0^\infty |r(\tau)|\,d\tau > \int_0^\infty r(\tau)\,d\tau. \qquad (12.119)$$

Thus, at $T \gg \tau_{\mathrm{cor}}$ the variance of the mathematical expectation estimate in the case of an arbitrary correlation function of the stationary stochastic process can be bounded in the following form:

$$\mathrm{Var}(E^*) \le \frac{2\sigma^2\tau_{\mathrm{cor}}}{T}. \qquad (12.120)$$

If $T \gg \tau_{\mathrm{cor}}$, the formula for the variance of the mathematical expectation estimate can be presented in a form using the spectral density $S(\omega)$ of the stochastic process, which is related to the correlation function by the Fourier transform given by (12.25) and (12.27). In doing so, the formula for the variance of the mathematical expectation estimate given by (12.117) takes the following form:

$$\mathrm{Var}(E^*) \approx \frac{1}{T}\int_{-\infty}^{\infty} R(\tau)\,d\tau = \frac{1}{T}\int_{-\infty}^{\infty} S(\omega)\left\{\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp\{j\omega\tau\}\,d\tau\right\}d\omega. \qquad (12.121)$$

Taking into consideration that

$$\delta(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\exp\{j\omega\tau\}\,d\tau, \qquad (12.122)$$

we obtain

$$\mathrm{Var}(E^*) \approx \frac{1}{T}\,S(\omega)\Big|_{\omega=0}. \qquad (12.123)$$

Thus, the variance of the mathematical expectation estimate of the stochastic process is proportional to the value of the spectral density of the fluctuation component of the considered stochastic process at $\omega = 0$ when the ideal integrator is used as a smoothing circuit. In other words, in the considered case, the variance of the mathematical expectation estimate of the stochastic process is defined by the spectral components about zero frequency. To obtain the current value of the mathematical expectation estimate and to investigate the realization of the stochastic process within the limits of a large interval of observation, we use the following estimate:

$$E^*(t)=\int_0^T h(\tau)\,x(t-\tau)\,d\tau.\tag{12.124}$$

Evidently, this estimate has the same statistical characteristics as the estimate defined by (12.111).

In practice, the linear low-pass filters with constant parameters defined by the impulse response

$$h(t)=\begin{cases}h(t) & \text{at } t\ge 0,\\ 0 & \text{at } t<0,\end{cases}\tag{12.125}$$

are used as averaging devices. In this case, the formula describing the process at the low-pass filter output, taking into consideration the unbiasedness of estimate, takes the following form:

$$E^*(t)=c\int_0^T h(\tau)\,x(T-\tau)\,d\tau,\tag{12.126}$$

where the constant factor c can be determined from the following condition:

$$c=\left[\int_0^T h(\tau)\,d\tau\right]^{-1}.\tag{12.127}$$

If the difference between the measurement instant and the instant of appearance of the stochastic process at the low-pass filter input is much greater than both the correlation interval τcor of the considered stochastic process and the low-pass filter time constant, then we can write

$$E^*(t)=\frac{\int_0^\infty h(t-\tau)\,x(\tau)\,d\tau}{\int_0^\infty h(\tau)\,d\tau}.\tag{12.128}$$

The variance of the mathematical expectation estimate of stochastic process is defined as

$$\operatorname{Var}(E^*)=c^2\int_0^T\!\!\int_0^T R(\tau_1-\tau_2)\,h(\tau_1)\,h(\tau_2)\,d\tau_1\,d\tau_2.\tag{12.129}$$

Introducing new variables τ1 − τ2 = τ and τ2 = t and changing the order of integration, the formula for the variance of the mathematical expectation estimate can be presented in the following form:

$$\operatorname{Var}(E^*)=2c^2\int_0^T R(\tau)\,r_h(\tau)\,d\tau,\tag{12.130}$$

where, if T ≫ τcor, we can change the upper integration limit to infinity, and the introduced function

$$r_h(\tau)=\int_0^{T-\tau} h(t)\,h(t+\tau)\,dt,\quad \tau>0,\tag{12.131}$$

corresponds to the correlation function of stochastic process forming at the output of filter with the impulse response h(t) when the “white” noise with the correlation function R(τ) = δ(τ) [8] excites the filter input.

When the process at the low-pass filter output is stationary, that is, the duration of the exciting input stochastic process is much greater than the low-pass filter time constant, the formula for the variance of the mathematical expectation estimate of stochastic process can be written in the following form:

$$\operatorname{Var}(E^*)=\frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\,|S(j\omega)|^2\,d\omega,\tag{12.132}$$

using the spectral density

$$S(j\omega)=\int_0^\infty h(t)\exp\{-j\omega t\}\,dt,\tag{12.133}$$

that is, the Fourier transform of the impulse response or frequency characteristic of the low-pass filter.

Consider an example where we compute the normalized variance of the mathematical expectation estimate Var(E*)/σ² as a function of the ratio T/τcor, averaging the investigated stochastic process by the ideal integrator with the impulse response given by (12.112) and by the RC circuit, the impulse response of which takes the following form:

$$h(t)=\begin{cases}\beta\exp\{-\beta t\} & \text{at } 0\le t\le T,\\ 0 & \text{at } t<0,\ t>T.\end{cases}\tag{12.134}$$

The corresponding frequency characteristics take the following form:

$$S(j\omega)=\frac{1-\exp\{-j\omega T\}}{j\omega T};\tag{12.135}$$

$$S(j\omega)=\frac{\beta\left[1-\exp\{-(\beta+j\omega)T\}\right]}{\beta+j\omega}.\tag{12.136}$$

As an example, consider the stationary stochastic process with the exponential correlation function given by (12.13) where the parameter α is inversely proportional to the correlation interval, that is,

$$\alpha=\frac{1}{\tau_{\text{cor}}}.\tag{12.137}$$

Substituting (12.13) into (12.116) and (12.130), we obtain the normalized variance of the mathematical expectation estimate for the ideal integrator and RC filter:

$$\frac{\operatorname{Var}_1(E^*)}{\sigma^2}=\frac{2}{p^2}\left[p-1+\exp\{-p\}\right];\tag{12.138}$$

$$\frac{\operatorname{Var}_2(E^*)}{\sigma^2}=\frac{\lambda\left[1-\lambda+2\lambda\exp\{-p(1+\lambda)\}-(1+\lambda)\exp\{-2p\lambda\}\right]}{(1-\lambda^2)\left[1-\exp\{-\lambda p\}\right]^2},\tag{12.139}$$

where

$$p=\alpha T=\frac{T}{\tau_{\text{cor}}}\quad\text{and}\quad \lambda=\frac{\beta}{\alpha}=\frac{\tau_{\text{cor}}}{\tau_{RC}}\tag{12.140}$$

are the ratio between the observation interval duration and the correlation interval and the ratio between the correlation interval and the RC filter time constant, respectively. The RC filter time constant τRC is defined in the same way as the correlation interval (see (12.21)).
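The two normalized variances (12.138) and (12.139) can be evaluated and their limiting behavior checked numerically; the sketch below (an illustration using the notation p = T/τcor and λ = τcor/τRC of (12.140)) verifies that the RC-filter variance tends to λ/(1 + λ) when λp ≫ 1 and reduces to the ideal-integrator result as λ → 0:

```python
import math

def var_ideal(p):
    """Normalized variance (12.138) for the ideal integrator."""
    return 2.0 / p**2 * (p - 1.0 + math.exp(-p))

def var_rc(p, lam):
    """Normalized variance (12.139) for the RC filter (lam != 1)."""
    num = lam * (1.0 - lam + 2.0 * lam * math.exp(-p * (1.0 + lam))
                 - (1.0 + lam) * math.exp(-2.0 * p * lam))
    den = (1.0 - lam**2) * (1.0 - math.exp(-lam * p))**2
    return num / den

# limiting case lam*p >> 1: variance is limited by lam/(1+lam), cf. (12.141)
limit_rc = var_rc(100.0, 0.5)
# limiting case lam -> 0: the RC filter acts as an ideal integrator
limit_ideal = var_rc(10.0, 1e-4)
```

Both checks agree with the text: `limit_rc` is close to 0.5/1.5 = 1/3, and `limit_ideal` is close to the ideal-integrator value (12.138) at p = 10.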

If the observation time interval is large, that is, the condition λp ≫ 1 is satisfied, for example, for the RC filter, then the normalized variance of the mathematical expectation estimate will be limited by the RC filter, that is,

$$\frac{\operatorname{Var}_2(E^*)}{\sigma^2}\cong\frac{\lambda}{1+\lambda}.\tag{12.141}$$

At λ ≪ 1, that is, when the spectral bandwidth of the considered stochastic process is much wider than the averaging RC filter bandwidth, the RC filter plays the role of the ideal integrator, and the estimate variance formula (12.139) reduces, in the limit λ → 0, to the form given by (12.138).


FIGURE 12.3 Normalized variance of the mathematical expectation estimate versus p at various values of λ.

Dependences of the normalized mathematical expectation estimate variance Var(E*)/σ² versus the ratio p between the observation time interval and the correlation interval for the ideal integrator (the continuous line) and the RC filter (the dashed lines) are presented in Figure 12.3, where λ serves as the parameter. As we can see from Figure 12.3, in the case of the ideal integrator, the variance of the estimate decreases in proportion to the increase in the observation time interval, but in the case of the RC filter, the variance of the estimate is limited by the value given by (12.141). For λ ≪ 1, the normalized variance of the mathematical expectation estimate is limited, in the limiting case, by the value equal to the ratio between the correlation interval and the RC filter time constant.

It is worthwhile to compare the mathematical expectation estimate using the ideal integrator with the optimal estimate, the variance of which Var(EE) is given by (12.48). Relative increase in the variance using the ideal integrator in comparison with the optimal estimate is defined as

$$\kappa=\frac{\operatorname{Var}_1(E^*)-\operatorname{Var}(E_E)}{\operatorname{Var}(E_E)}=\frac{(2+p)\left[p-1+\exp\{-p\}\right]}{p^2}-1.\tag{12.142}$$


FIGURE 12.4 Relative increase in variance as a function of T/τcor.

The relative increase in the variance as a function of T/τcor is shown in Figure 12.4. As we can see from Figure 12.4, the relative increase in the variance of the mathematical expectation estimate of a stochastic process possessing the correlation function given by (12.13), obtained using the ideal integrator, remains small in comparison with the optimal estimate. The maximum relative increase in the variance is 0.14 and corresponds to p ≈ 2.7. This maximum is caused by the rapid decrease of the optimal estimate variance in comparison with the estimate obtained by the ideal integrator at small values of the observation time interval. However, as p → ∞, both estimates become equivalent, as expected.
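The quoted maximum can be reproduced by evaluating the right-hand side of (12.142) on a grid; a short sketch:

```python
import math

def kappa(p):
    """Relative variance increase (12.142) of the ideal-integrator
    estimate over the optimal estimate (exponential correlation)."""
    return (2.0 + p) * (p - 1.0 + math.exp(-p)) / p**2 - 1.0

# scan p = T/tau_cor from 0.1 to 1000 and locate the maximum
grid = [0.01 * i for i in range(10, 100001)]
p_max = max(grid, key=kappa)
k_max = kappa(p_max)
```

The scan gives a maximum of about 0.14 near p ≈ 2.7, and κ falls below 0.01 for large p, consistent with the equivalence of both estimates as p → ∞.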

Consider the normalized variances of the mathematical expectation estimates of stochastic process obtained using the ideal integrator for the following normalized correlation functions that are widely used in practice. We analyze two RC filters connected in series, with "white" noise exciting the input of this linear system. In this case, the normalized correlation function takes the following form:

$$\mathcal{R}(\tau)=(1+\alpha|\tau|)\exp\{-\alpha|\tau|\},\quad \alpha=\frac{1}{RC}.\tag{12.143}$$

In doing so, the normalized variance of the mathematical expectation estimate of stochastic process is defined as

$$\frac{\operatorname{Var}_3(E^*)}{\sigma^2}=\frac{2\left[2p_1-3+(3+p_1)\exp\{-p_1\}\right]}{p_1^2},\tag{12.144}$$

where

$$p_1=\alpha T=\frac{2T}{\tau_{\text{cor}}}\quad\text{and}\quad \tau_{\text{cor}}=\frac{2}{\alpha}.\tag{12.145}$$

The set of considered stochastic processes has normalized correlation functions that are approximated in the following form:

$$\mathcal{R}(\tau)=\exp\{-\alpha|\tau|\}\cos\varpi\tau.\tag{12.146}$$

Depending on the relationships between the parameters α and ϖ, the normalized correlation function (12.146) describes both the low-frequency (α ≫ ϖ) and high-frequency (α ≪ ϖ) stochastic processes. The normalized variance of the mathematical expectation estimate of stochastic process with the normalized correlation function given by (12.146) takes the following form:

$$\frac{\operatorname{Var}_4(E^*)}{\sigma^2}=\frac{2\left[p_1(1+\eta^2)-(1-\eta^2)\right]+2\exp\{-p_1\}\left[(1-\eta^2)\cos p_1\eta-2\eta\sin p_1\eta\right]}{p_1^2(1+\eta^2)^2},\tag{12.147}$$

where η = ϖ/α. At ϖ = 0 (η = 0) in (12.147), as in the particular case, we obtain (12.138); that is, we obtain the normalized variance of the mathematical expectation estimate of stochastic process with the exponential correlation function given by (12.13) under integration using the ideal integrator. Formula (12.147) is essentially simplified at ϖ = α:

$$\frac{\operatorname{Var}_4(E^*)}{\sigma^2}=\frac{p_1-\exp\{-p_1\}\sin p_1}{p_1^2}\quad\text{at }\varpi=\alpha.\tag{12.148}$$


FIGURE 12.5 Normalized variances given by (12.144), (12.147), and (12.149) as functions of p1 with the parameter η.

As applied to the correlation function given by (12.14), the normalized variance of the mathematical expectation estimate of stochastic process is defined by

$$\frac{\operatorname{Var}_5(E^*)}{\sigma^2}=\frac{2\left[2p_1(1+\eta_1^2)-(3-\eta_1^2)\right]+2\exp\{-p_1\}\left[(3-\eta_1^2)\cos p_1\eta_1+\left(\eta_1^{-1}-3\eta_1\right)\sin p_1\eta_1\right]}{p_1^2(1+\eta_1^2)^2},\tag{12.149}$$

where η₁ = ϖ₁/α. As ϖ₁ → 0 (η₁ → 0), the correlation function given by (12.14) reduces to the correlation function given by (12.143), and formula (12.149) turns into (12.144).

The normalized variances of the mathematical expectation estimate of stochastic process given by (12.144), (12.147), and (12.149) as functions of the parameter p₁ with the parameter η are shown in Figure 12.5. As expected, at the same value of the parameter p₁, the normalized variance of the mathematical expectation estimate of stochastic process decreases with an increase in η, which characterizes the presence of quasiharmonical components in the considered stochastic process.

Discussed procedures to measure the mathematical expectation assume that there are no limitations of instantaneous values of the considered stochastic process in the course of measurement. Presence of limitations leads to additional errors while measuring the mathematical expectation of stochastic process.

Determine the bias and variance of the estimate as applied both to the symmetrical inertialess signal limiter (see Figure 12.6) and to the asymmetrical inertialess signal limiter (see Figure 12.7), the inputs of which are excited by the Gaussian and Rayleigh stochastic processes, respectively. In doing so, we assume that the mathematical expectation is defined according to (12.113), where we use y(t) = g[x(t)] instead of x(t), with g(x) the characteristic of the transformation. The variance of the mathematical expectation estimate of stochastic process is defined by (12.116), where instead of the correlation function R(τ) we should use the correlation function Ry(τ) defined as

$$R_y(\tau)=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x_1)\,g(x_2)\,p_2(x_1,x_2;\tau)\,dx_1\,dx_2-E_y^2.\tag{12.150}$$


FIGURE 12.6 Symmetric inertialess signal limiter performance.


FIGURE 12.7 Asymmetric inertialess signal limiter performance.

Let the Gaussian stochastic process excite the input of the nonlinear device (Figure 12.6) and the transformation be described by the following function:

$$y=g(x)=\begin{cases}a & \text{if } x>a,\\ x & \text{if } -a\le x\le a,\\ -a & \text{if } x<-a.\end{cases}\tag{12.151}$$

The bias of estimate is defined as

$$b(E^*)=\int_{-\infty}^{\infty} g(x)\,p(x)\,dx-E_0=-a\int_{-\infty}^{-a} p(x)\,dx+\int_{-a}^{a} x\,p(x)\,dx+a\int_{a}^{\infty} p(x)\,dx-E_0.\tag{12.152}$$

Substituting the one-dimensional pdf of Gaussian stochastic process

$$p(x)=\frac{1}{\sqrt{2\pi\operatorname{Var}(x)}}\exp\left\{-\frac{(x-E_0)^2}{2\operatorname{Var}(x)}\right\}\tag{12.153}$$

into (12.152), we obtain the bias of the mathematical expectation estimate of stochastic process in the following form:

$$b(E^*)=\sqrt{\operatorname{Var}(x)}\left\{\chi\left[Q(\chi-q)-Q(\chi+q)\right]-q\left[Q(\chi+q)+Q(\chi-q)\right]+\frac{1}{\sqrt{2\pi}}\left[\exp\{-0.5(\chi+q)^2\}-\exp\{-0.5(\chi-q)^2\}\right]\right\},\tag{12.154}$$

where

$$\chi=\frac{a}{\sqrt{\operatorname{Var}(x_0)}}\quad\text{and}\quad q=\frac{E_0}{\sqrt{\operatorname{Var}(x_0)}}\tag{12.155}$$

are the ratios of the limitation threshold and of the mathematical expectation to the square root of the variance of the observed realization of stochastic process; Q(x) is the Gaussian Q function given by (12.68).
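Expression (12.154) can be checked against direct numerical integration of E[g(X)] − E₀; the sketch below uses the standard relation Q(z) = ½ erfc(z/√2) (the numeric cross-check function and its parameters are illustrative):

```python
import math

def Q(z):
    """Gaussian Q function: tail probability of the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def bias_clipper(E0, sigma, a):
    """Bias (12.154) of the mean estimate under symmetric clipping at +/-a."""
    chi, q = a / sigma, E0 / sigma
    return sigma * (chi * (Q(chi - q) - Q(chi + q))
                    - q * (Q(chi + q) + Q(chi - q))
                    + (math.exp(-0.5 * (chi + q)**2)
                       - math.exp(-0.5 * (chi - q)**2)) / math.sqrt(2.0 * math.pi))

def bias_numeric(E0, sigma, a, n=100_000):
    """Cross-check: E[g(X)] - E0 by trapezoidal integration over +/-10 sigma."""
    lo, hi = E0 - 10.0 * sigma, E0 + 10.0 * sigma
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        g = max(-a, min(a, x))          # the limiter characteristic (12.151)
        w = 0.5 if i in (0, n) else 1.0
        total += w * g * math.exp(-0.5 * ((x - E0) / sigma)**2)
    return total * h / (sigma * math.sqrt(2.0 * math.pi)) - E0
```

For E₀ > 0 the bias is negative (clipping pulls the estimate toward zero), and it vanishes for a symmetric situation E₀ = 0.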

To determine the variance of mathematical expectation estimate of stochastic process according to (12.116) and (12.150), there is a need to expand the two-dimensional pdf of Gaussian stochastic process in series [9]

$$p_2(x_1,x_2;\tau)=\frac{1}{\operatorname{Var}(x)}\sum_{v=0}^{\infty} Q^{(v+1)}\!\left(\frac{x_1-E_0}{\sqrt{\operatorname{Var}(x)}}\right)Q^{(v+1)}\!\left(\frac{x_2-E_0}{\sqrt{\operatorname{Var}(x)}}\right)\frac{\mathcal{R}^v(\tau)}{v!},\tag{12.156}$$

where ℛ(τ) is the normalized correlation function of the initial stochastic process ξ(t);

$$Q^{(v+1)}(z)=\frac{d^v}{dz^v}\left[\frac{\exp\{-0.5z^2\}}{\sqrt{2\pi}}\right],\quad v=0,1,2,\ldots\tag{12.157}$$

are the derivatives of (v + 1)th order of the Gaussian Q function.

Substituting (12.156) into (12.150) and (12.116) and taking into consideration that

$$E_y=\frac{1}{\sigma}\int_{-\infty}^{\infty} g(x)\,Q^{(1)}\!\left(\frac{x-E_0}{\sigma}\right)dx,\tag{12.158}$$

we obtain

$$\operatorname{Var}(E^*)=\frac{1}{\sigma^2}\sum_{v=1}^{\infty}\frac{1}{v!}\left\{\int_{-\infty}^{\infty} g(x)\,Q^{(v+1)}\!\left(\frac{x-E_0}{\sigma}\right)dx\right\}^2\frac{2}{T}\int_0^T\left(1-\frac{\tau}{T}\right)\mathcal{R}^v(\tau)\,d\tau.\tag{12.159}$$

Computing the integral in the braces, we obtain

$$\operatorname{Var}(E^*)=\sigma^2\sum_{v=1}^{\infty}\frac{1}{v!}\left[Q^{(v-1)}(\chi-q)-Q^{(v-1)}(-\chi-q)\right]^2\frac{2}{T}\int_0^T\left(1-\frac{\tau}{T}\right)\mathcal{R}^v(\tau)\,d\tau.\tag{12.160}$$

As χ → ∞, the derivatives of the Gaussian Q function tend to zero. As a result, only the term at v = 1 remains and we obtain the initial formula (12.116).

In practical applications, the stochastic process measurements are carried out, as a rule, under the conditions of "weak" limitation of instantaneous values, that is, under the condition (χ − |q|) ≥ 1.5 ÷ 2. In this case, the first term at v = 1 in (12.160) plays the dominant role:

$$\operatorname{Var}(E^*)\approx\left[1-Q(\chi-q)-Q(\chi+q)\right]^2\frac{2\sigma^2}{T}\int_0^T\left(1-\frac{\tau}{T}\right)\mathcal{R}(\tau)\,d\tau,\tag{12.161}$$

where, at sufficiently high values, that is, (χ − q) ≥ 3, the term in square brackets is very close to unity and we may use (12.116) to determine the variance of the mathematical expectation estimate of stochastic process.

In practice, the Rayleigh stochastic processes are widely employed because this type of stochastic process has a wide range of applications. In particular, a narrow-band Gaussian stochastic process, the envelope of which is described by the Rayleigh pdf, can be presented in the following form:

$$z(t)=x(t)\cos[2\pi f_0 t+\varphi(t)],\tag{12.162}$$

where

  • x(t) is the envelope

  • φ(t) is the phase of stochastic process

Representation in (12.162) assumes that the spectral density of the narrow-band stochastic process is concentrated within the limits of a narrow bandwidth Δf with the central frequency f₀ and the condition f₀ ≫ Δf. As applied to the symmetrical spectral density, the correlation function of the stationary narrow-band stochastic process takes the following form:

$$R_z(\tau)=\sigma^2\,\mathcal{R}(\tau)\cos(2\pi f_0\tau).\tag{12.163}$$

In doing so, the one-dimensional Rayleigh pdf can be written as

$$f(x)=\frac{x}{\sigma^2}\exp\left\{-\frac{x^2}{2\sigma^2}\right\},\quad x\ge 0.\tag{12.164}$$

The first and second initial moments and the normalized correlation function of the Rayleigh stochastic process can be presented in the following form:

$$\langle\xi(t)\rangle=\sqrt{\frac{\pi}{2}}\,\sigma,\qquad \langle\xi^2(t)\rangle=2\sigma^2,\qquad \rho(\tau)\approx\mathcal{R}^2(\tau).\tag{12.165}$$

As applied to the Rayleigh stochastic process and nonlinear transformation (see Figure 12.7) given as

$$y=g(x)=\begin{cases}a & \text{if } x>a,\\ x & \text{if } 0\le x\le a,\end{cases}\tag{12.166}$$

the bias of the mathematical expectation estimate takes the following form:

$$b(E^*)=E_y-E_0=\int_0^\infty g(x)\,f(x)\,dx-E_0=-\sqrt{2\pi\sigma^2}\,Q(\chi),\tag{12.167}$$

where χ is given by (12.155). As χ → ∞, the mathematical expectation estimate, as would be expected, is unbiased.
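The closed-form bias in (12.167) can be cross-checked numerically: for g(x) = min(x, a), the bias equals ∫ₐ^∞ (a − x) f(x) dx. A sketch with illustrative parameter values:

```python
import math

def Q(z):
    """Gaussian Q function: tail probability of the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def rayleigh_bias(sigma, a):
    """Bias (12.167) of the mean estimate of a Rayleigh process
    under one-sided limiting at level a (chi = a/sigma)."""
    return -math.sqrt(2.0 * math.pi) * sigma * Q(a / sigma)

def rayleigh_bias_numeric(sigma, a, n=100_000):
    """Cross-check: integral of (a - x)*f(x) over x > a, trapezoid rule."""
    hi = a + 12.0 * sigma                  # Rayleigh tail beyond this is negligible
    h = (hi - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * (a - x) * (x / sigma**2) * math.exp(-0.5 * (x / sigma)**2)
    return total * h
```

The bias is negative (limiting can only lower the average) and vanishes as the threshold grows, in agreement with the statement above.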

Determining the variance of the mathematical expectation estimate of stochastic process in accordance with (12.116) and (12.150) is very difficult in the case of the Rayleigh stochastic process. It is evident that, in the first approximation, formula (12.116) holds provided the condition χ ≥ 2–3 is satisfied, which is analogous to the case of the Gaussian stochastic process under weak limitation.

12.5 Estimate of Mathematical Expectation at Stochastic Process Sampling

In practice, we use digital measuring devices to measure the parameters of stochastic processes after sampling. Naturally, the part of the information contained in the stochastic process between the samples is lost.

Let the Gaussian stochastic process ξ(t) be observed at some discrete instants tᵢ. Then there is a set of samples xᵢ = x(tᵢ), i = 1, 2,…, N at the input of the digital measuring device. As a rule, sampling of the observed stochastic process is carried out over equal time intervals Δ = t_{i+1} − t_i. Each sample value can be presented in the following form:

$$x_i=E_i+x_{0i}=E\,s_i+x_{0i}\tag{12.168}$$

as in (12.2), where Eᵢ = E sᵢ = E s(tᵢ) is the mathematical expectation and x₀ᵢ = x₀(tᵢ) is the realization of the centered Gaussian stochastic process at the instant t = tᵢ. The set of samples xᵢ is characterized by the conditional N-dimensional pdf

$$f_N(x_1,\ldots,x_N\,|\,E)=\frac{1}{(2\pi)^{0.5N}\sqrt{\det\|R_{ij}\|}}\exp\left\{-0.5\sum_{i=1}^{N}\sum_{j=1}^{N}(x_i-E_i)(x_j-E_j)\,C_{ij}\right\},\tag{12.169}$$

where

  • det ||Rij|| is the determinant of the correlation matrix ||Rij|| = R of the N × N order

  • C_ij are the elements of the matrix ‖C_ij‖ = C, which is the reciprocal matrix with respect to the correlation matrix; the elements C_ij are defined from the following equation:

    $$\sum_{l=1}^{N} C_{il}\,R_{lj}=\delta_{ij}=\begin{cases}1 & \text{if } i=j,\\ 0 & \text{if } i\ne j.\end{cases}\tag{12.170}$$

The conditional multidimensional pdf in (12.169) is the multidimensional likelihood function of the parameter E of stochastic process. Solving the likelihood equation with respect to the parameter E, we obtain the formula for the mathematical expectation estimate of stochastic process:

$$E_E=\frac{\sum_{i=1}^{N}\sum_{j=1}^{N} x_i\,s_j\,C_{ij}}{\sum_{i=1}^{N}\sum_{j=1}^{N} s_i\,s_j\,C_{ij}}.\tag{12.171}$$

This formula can be written in a simple form if we introduce the weight coefficients

$$\upsilon_i=\sum_{j=1}^{N} s_j\,C_{ij},\tag{12.172}$$

which satisfy, as the function υ(t) given by (12.7), the system of equations

$$\sum_{l=1}^{N} R_{il}\,\upsilon_l=s_i,\quad i=1,2,\ldots,N.\tag{12.173}$$

In doing so, the mathematical expectation estimate can be presented in the following form:

$$E_E=\frac{\sum_{i=1}^{N} x_i\,\upsilon_i}{\sum_{i=1}^{N} s_i\,\upsilon_i}.\tag{12.174}$$

The mathematical expectation of estimate takes the following form:

$$\langle E_E\rangle=\frac{\sum_{i=1}^{N}\langle x_i\rangle\,\upsilon_i}{\sum_{i=1}^{N} s_i\,\upsilon_i}=E_0.\tag{12.175}$$

The variance of the estimate, in accordance with (12.173), can be presented in the following form:

$$\operatorname{Var}(E_E)=\frac{\sum_{i=1}^{N}\sum_{j=1}^{N} R_{ij}\,\upsilon_i\,\upsilon_j}{\left[\sum_{i=1}^{N} s_i\,\upsilon_i\right]^2}=\frac{1}{\sum_{i=1}^{N} s_i\,\upsilon_i}.\tag{12.176}$$

The weight coefficients are determined using a set of linear equations:

$$\begin{cases}\sigma^2\upsilon_1+R_1\upsilon_2+R_2\upsilon_3+\cdots+R_{N-1}\upsilon_N=s_1,\\ R_1\upsilon_1+\sigma^2\upsilon_2+R_1\upsilon_3+\cdots+R_{N-2}\upsilon_N=s_2,\\ \qquad\cdots\\ R_{N-1}\upsilon_1+R_{N-2}\upsilon_2+R_{N-3}\upsilon_3+\cdots+\sigma^2\upsilon_N=s_N,\end{cases}\tag{12.177}$$

where R_l = R(lΔ) are the elements of the correlation matrix corresponding to the time difference |i − j|Δ = lΔ, l = 0, 1,…, N − 1. The solution of this system of linear equations can be presented in the following form:

$$\upsilon_j=\frac{\det\|G_{ij}\|}{\det\|R_{ij}\|},\quad j=1,2,\ldots,N,\tag{12.178}$$

where det ‖G_ij‖ is the determinant of the matrix obtained from the matrix ‖R_ij‖ = R by substituting the column containing the elements s₁, s₂,…, s_N for the jth column. In the case of independent samples of stochastic process, that is, R_ij = 0 at i ≠ j and R_ii = σ², the correlation matrix and its reciprocal matrix are diagonal. In doing so, for all i = 1, 2,…, N the weight coefficients are defined as

$$\upsilon_i=\frac{s_i}{\sigma^2}.\tag{12.179}$$

Substituting (12.179) into (12.174) and (12.176) we obtain

$$E_E=\frac{\sum_{i=1}^{N} x_i\,s_i}{\sum_{i=1}^{N} s_i^2},\tag{12.180}$$

$$\operatorname{Var}(E_E)=\frac{\sigma^2}{\sum_{i=1}^{N} s_i^2}.\tag{12.181}$$

If the observed stochastic process is stationary, that is, si = 1 ∀i, i = 1, 2,…, N, even in the presence of independent samples, we obtain the mean and variance of the mean

$$E^*=\frac{1}{N}\sum_{i=1}^{N} x_i;\tag{12.182}$$

$$\operatorname{Var}(E^*)=\frac{\sigma^2}{N},\tag{12.183}$$

respectively.

Now, consider the estimate of stationary stochastic process mathematical expectation with the correlation function given by (12.13). Denote ψ = exp{−αΔ}. In this case, the correlation matrix takes the following form:

$$\|R_{ij}\|=\sigma^2\begin{bmatrix}1 & \psi & \psi^2 & \cdots & \psi^{N-1}\\ \psi & 1 & \psi & \cdots & \psi^{N-2}\\ \psi^2 & \psi & 1 & \cdots & \psi^{N-3}\\ \cdots & \cdots & \cdots & \cdots & \cdots\\ \psi^{N-1} & \psi^{N-2} & \psi^{N-3} & \cdots & 1\end{bmatrix}.\tag{12.184}$$

The determinant of this matrix and its reciprocal matrix are defined in the following form [10]:

$$\det\|R_{ij}\|=\sigma^{2N}(1-\psi^2)^{N-1},\tag{12.185}$$

$$\|C_{ij}\|=\frac{1}{(1-\psi^2)\sigma^2}\begin{bmatrix}1 & -\psi & 0 & \cdots & 0 & 0\\ -\psi & 1+\psi^2 & -\psi & \cdots & 0 & 0\\ 0 & -\psi & 1+\psi^2 & \cdots & 0 & 0\\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ 0 & 0 & 0 & \cdots & -\psi & 1\end{bmatrix}.\tag{12.186}$$

It is important to note that all elements of the reciprocal matrix are equal to zero, except for the elements of the main diagonal and the elements immediately adjacent to it. As we can see from (12.172) and (12.186), the optimal values of the weight coefficients are defined as

$$\upsilon_1=\upsilon_N=\frac{1}{(1+\psi)\sigma^2};\qquad \upsilon_2=\upsilon_3=\cdots=\upsilon_{N-1}=\frac{1-\psi}{(1+\psi)\sigma^2}.\tag{12.187}$$

Substituting the obtained weight coefficients into (12.174) and (12.176), we have

$$E_E=\frac{(x_1+x_N)+(1-\psi)\sum_{i=2}^{N-1} x_i}{N-(N-2)\psi};\tag{12.188}$$

$$\operatorname{Var}(E_E)=\sigma^2\,\frac{1+\psi}{N-(N-2)\psi}.\tag{12.189}$$

Dependence of the normalized variance of the optimal mathematical expectation estimate on the value ψ of the normalized correlation function between the samples for various numbers of samples N is shown in Figure 12.8. As we can see from Figure 12.8, starting from ψ ≥ 0.5, the variance of the estimate increases rapidly with an increase in the value of the normalized correlation function and tends to the variance of the observed stochastic process as ψ → 1.
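The closed-form weights (12.187) and variance (12.189) can be cross-checked by solving the system (12.173) directly; the sketch below deliberately exploits no structure of the matrix (plain Gaussian elimination, illustrative parameters):

```python
def optimal_weights(N, psi, sigma2=1.0):
    """Solve sum_l R_il v_l = 1 (system (12.173) with s_i = 1) for the
    exponential correlation matrix R_il = sigma2 * psi**|i-l|."""
    R = [[sigma2 * psi**abs(i - j) for j in range(N)] for i in range(N)]
    b = [1.0] * N
    # forward elimination with partial pivoting
    for k in range(N):
        piv = max(range(k, N), key=lambda r: abs(R[r][k]))
        R[k], R[piv] = R[piv], R[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, N):
            f = R[r][k] / R[k][k]
            for c in range(k, N):
                R[r][c] -= f * R[k][c]
            b[r] -= f * b[k]
    # back substitution
    v = [0.0] * N
    for k in range(N - 1, -1, -1):
        v[k] = (b[k] - sum(R[k][c] * v[c] for c in range(k + 1, N))) / R[k][k]
    return v

N, psi, sigma2 = 6, 0.6, 1.0
v = optimal_weights(N, psi, sigma2)
var_opt = 1.0 / sum(v)                                   # (12.176) with s_i = 1
var_closed = sigma2 * (1.0 + psi) / (N - (N - 2) * psi)  # (12.189)
```

The numerically obtained weights reproduce the pattern of (12.187): the two edge weights are larger, all interior weights are equal, and the resulting variance matches (12.189).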

We can obtain the formulae (12.188) and (12.189) in another way, without using the maximum likelihood method. For this purpose, we suppose that

$$E^*=\sum_{i=1}^{N} x_i\,h_i\tag{12.190}$$


FIGURE 12.8 Normalized variance of the optimal mathematical expectation estimate versus ψ and the number of samples N.

can be used as the estimate, where hi are the weight coefficients satisfying the following condition

$$\sum_{i=1}^{N} h_i=1\tag{12.191}$$

for unbiased estimation. The weight coefficients are chosen from the condition of minimization of the variance of the mathematical expectation estimate. As applied to observation of a stationary stochastic process possessing the correlation function given by (12.13), the weight coefficients hᵢ are defined in Ref. [11] and related to the weight coefficients (12.187) by the following relationship:

$$h_i=\frac{\upsilon_i}{\sum_{j=1}^{N}\upsilon_j}.\tag{12.192}$$

In the limiting case, as Δ → 0, the formulae in (12.188) and (12.189) turn into (12.48) and (12.49), respectively. Actually, as Δ → 0 with (N − 1)Δ = T = const and exp{−αΔ} ≈ 1 − αΔ, the summation in (12.188) is replaced by integration, and x₁ and x_N are replaced by x(0) and x(T), respectively. In practice, the equidistributed estimate (the mean) given by (12.182) is widely used as the mathematical expectation estimate of a stationary stochastic process; it corresponds to the constant weight coefficients hᵢ = N⁻¹, i = 1, 2,…, N in (12.190).

Determine the variance of the mathematical expectation estimate assuming that the samples are equidistant from each other on the value Δ. The variance of the mathematical expectation estimate of stochastic process is defined as

$$\operatorname{Var}(E^*)=\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} R(t_i-t_j)=\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} R[(i-j)\Delta].\tag{12.193}$$


FIGURE 12.9 Domain of indices.

The double summation in (12.193) can be reduced to a more convenient form. For this purpose, we change the indices, namely, l = i − j, and change the summation order in the domain shown in Figure 12.9. In this case, we can write

$$\operatorname{Var}(E^*)=\frac{1}{N^2}\left\{N\sigma^2+2\sigma^2\sum_{i=1}^{N-1}(N-i)\,\mathcal{R}(i\Delta)\right\}=\frac{\sigma^2}{N}\left\{1+2\sum_{i=1}^{N-1}\left(1-\frac{i}{N}\right)\mathcal{R}(i\Delta)\right\},\tag{12.194}$$

where ℛ(iΔ) is the normalized correlation function of observed stochastic process. As we can see from (12.194), if the samples are not correlated the formula (12.183) can be considered as a particular case.
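The index change behind (12.194) is easy to confirm numerically; the sketch below (illustrative values, exponential correlation ℛ(iΔ) = ψ^i) also checks the closed form (12.195):

```python
def var_double(N, rho):
    """Double sum (12.193), normalized to sigma^2; rho(i) = R(i*Delta)/sigma^2."""
    return sum(rho(abs(i - j)) for i in range(N) for j in range(N)) / N**2

def var_folded(N, rho):
    """Folded single-sum form (12.194), normalized to sigma^2."""
    return (1.0 + 2.0 * sum((1.0 - i / N) * rho(i) for i in range(1, N))) / N

N, psi = 20, 0.7
d = var_double(N, lambda i: psi**i)
s = var_folded(N, lambda i: psi**i)
closed = (N * (1.0 - psi**2) + 2.0 * psi * (psi**N - 1.0)) / (N**2 * (1.0 - psi)**2)
```

All three values coincide, and the result exceeds the uncorrelated-samples value 1/N, as expected for positively correlated samples.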

If the correlation function of observed stochastic process is described by (12.13), the variance of the equidistributed estimate of mathematical expectation is defined as

$$\operatorname{Var}(E^*)=\sigma^2\,\frac{N(1-\psi^2)+2\psi(\psi^N-1)}{N^2(1-\psi)^2},\tag{12.195}$$

where, as before, ψ = exp{−αΔ}. Formula (12.195) is obtained by taking into consideration the summation formula [12]

$$\sum_{i=0}^{N-1}(a+ir)\,q^i=\frac{a-[a+(N-1)r]\,q^N}{1-q}+\frac{rq\,(1-q^{N-1})}{(1-q)^2}.\tag{12.196}$$

Computations made by formula (12.195) show that the variance of the equidistributed estimate of mathematical expectation differs from the variance of the optimal estimate (12.189). Figure 12.10 represents the relative increase in the variance of the equidistributed estimate of mathematical expectation in comparison with the variance of the optimal estimate

$$\varepsilon=\frac{\operatorname{Var}(E^*)-\operatorname{Var}(E_E)}{\operatorname{Var}(E_E)}\tag{12.197}$$


FIGURE 12.10 Relative increase in the variance of equidistributed estimate of mathematical expectation as a function of the normalized correlation function between samples.

as a function of the values of the normalized correlation function between the samples ψ for various numbers of samples. Naturally, the relative increase in the variance is low when the magnitude of the normalized correlation function between samples is low. Similar to the mathematical expectation estimate by the continuous realization of stochastic process, the presence of maxima is explained by the fact that in the case of small numbers of samples N and sufficiently large magnitude ψ, the variance of the optimal estimate decreases rapidly in comparison with the variance of the equidistributed estimate of mathematical expectation. As we can see from Figure 12.10, if the magnitude of the normalized correlation function between samples is less than ψ = 0.5, the optimal and equidistributed estimates practically coincide.

As applied to the normalized correlation function (12.146), the normalized variance of estimate is defined as

$$\frac{\operatorname{Var}(E^*)}{\sigma^2}=\frac{N(1-\psi^2)\left[1+\psi^2-2\psi\cos(\Delta\varpi)\right]+2\psi^{N+1}\left\{\cos[(N+1)\Delta\varpi]-2\psi\cos(N\Delta\varpi)+\psi^2\cos[(N-1)\Delta\varpi]\right\}-2\psi\left[(1+\psi^2)\cos(\Delta\varpi)-2\psi\right]}{N^2\left[1+\psi^2-2\psi\cos(\Delta\varpi)\right]^2},\tag{12.198}$$

where, as before, ψ = exp{−αΔ}. At ϖ = 0, we obtain the formula (12.195). In the case of large numbers of samples, the formula (12.198) is simplified

$$\frac{\operatorname{Var}(E^*)}{\sigma^2}\cong\frac{1-\psi^2}{N\left[1+\psi^2-2\psi\cos(\Delta\varpi)\right]},\quad N\Delta\gg\alpha^{-1}.\tag{12.199}$$

As we can see from (12.199), in the case of stochastic process with the correlation function given by (12.146) the equidistributed estimate may possess the estimate variance that is less than the variance of estimate by the same number of the uncorrelated samples. Actually, if the samples are taken over the interval

$$\Delta=\frac{(2k+1)\pi}{\varpi},\quad k=0,1,\ldots,\tag{12.200}$$

then the minimal magnitude of the normalized variance of estimate can be presented in the following form:

$$\left.\frac{\operatorname{Var}(E^*)}{\sigma^2}\right|_{\min}\cong\frac{1}{N}\cdot\frac{1-\psi}{1+\psi},\quad N\Delta\gg\alpha^{-1}.\tag{12.201}$$

Otherwise, if the interval between samples is chosen such that

$$\Delta=\frac{2\pi k}{\varpi},\quad k=1,2,\ldots,\tag{12.202}$$

then the maximum value of the normalized variance of estimate takes the following form:

$$\left.\frac{\operatorname{Var}(E^*)}{\sigma^2}\right|_{\max}\cong\frac{1}{N}\cdot\frac{1+\psi}{1-\psi}.\tag{12.203}$$

Thus, for some types of correlation functions, the variance of the equidistributed estimate of mathematical expectation by correlated samples can be less than the variance of the estimate by the same number of uncorrelated samples.
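The asymptotic form (12.199) makes this easy to see numerically; a sketch with illustrative N and ψ, also verifying the φ-averaged result 1/N of (12.204):

```python
import math

def var_largeN(N, psi, phi):
    """Asymptotic normalized variance (12.199); phi = Delta*varpi, psi = exp(-alpha*Delta)."""
    return (1.0 - psi**2) / (N * (1.0 + psi**2 - 2.0 * psi * math.cos(phi)))

N, psi = 1000, 0.5
v_min = var_largeN(N, psi, math.pi)   # sampling per (12.200): cos(Delta*varpi) = -1
v_max = var_largeN(N, psi, 0.0)       # sampling per (12.202): cos(Delta*varpi) = +1
M = 4096                              # uniform average over phi in [0, 2*pi)
v_avg = sum(var_largeN(N, psi, 2.0 * math.pi * k / M) for k in range(M)) / M
```

The minimum equals (1 − ψ)/[N(1 + ψ)], below the uncorrelated-samples value 1/N, while the maximum equals (1 + ψ)/[N(1 − ψ)] above it; averaging over a uniformly distributed Δϖ returns exactly 1/N.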

If the interval between samples Δ is taken without paying attention to the conditions discussed previously, then the value Δϖ = φ can be considered as the random variable with the uniform distribution within the limits of the interval [0, 2π]. Averaging (12.199) with respect to the random variable φ uniformly distributed within the limits of the interval [0, 2π], we obtain the variance of the mathematical expectation estimate of stochastic process by N uncorrelated samples

$$\left\{\frac{\operatorname{Var}(E^*)}{\sigma^2}\right\}_\varphi=\frac{1-\psi^2}{2\pi N}\int_0^{2\pi}\frac{d\varphi}{1+\psi^2-2\psi\cos\varphi}=\frac{1}{N}.\tag{12.204}$$

Of definite interest for the definition of the mathematical expectation estimate is the method of measuring the stochastic process parameters by means of additional signals [13,14]. In this case, the realization x(tᵢ) = xᵢ of the observed stochastic process ξ(tᵢ) = ξᵢ is compared with the realization v(tᵢ) = vᵢ of the additional stochastic process ζ(tᵢ) = ζᵢ. A distinctive feature of this measurement method is that the values xᵢ of the observed stochastic process realization must lie, with high probability, within the limits of the interval of possible values of the additional stochastic process. Usually, it is assumed that the values of the additional stochastic process are independent of each other and of the values of the observed stochastic process.

To further simplify the analysis of the stochastic process parameters and the definition of the mathematical expectation, we assume that the values xᵢ are independent of each other and the random variables ζᵢ are uniformly distributed within the limits of the interval [−A, A], that is,

$$f(v)=\frac{1}{2A},\quad -A\le v\le A.\tag{12.205}$$

As applied to the pdf given by (12.205), the following condition must be satisfied

$$P[-A\le\xi\le A]\approx 1\tag{12.206}$$

in the case of the considered method to measure the stochastic process parameters. As a result of comparison, a new independent random variable sequence ςi is formed:

$$\varsigma_i=x_i-v_i.\tag{12.207}$$

The random variable sequence ςᵢ can be transformed into a new independent random variable sequence ηᵢ by the nonlinear inertialess transformation g(ς):

$$\eta_i=g(\varsigma_i)=\operatorname{sgn}[\xi_i-\zeta_i]=\begin{cases}1, & \xi_i\ge\zeta_i,\\ -1, & \xi_i<\zeta_i.\end{cases}\tag{12.208}$$

Determine the mathematical expectation of random variable ηi under the condition that the random variable ξi takes the fixed value x and the following condition |x| ≤ A is satisfied:

$$\langle\eta_i\,|\,x\rangle=1\times P(v<x)-1\times P(v>x)=2P(v<x)-1=\frac{x}{A}.\tag{12.209}$$

The unconditional mathematical expectation of the random variable ηi can be presented in the following form:

$$\langle\eta_i\rangle=\int_{-A}^{A}\langle\eta_i\,|\,x\rangle\,p(x)\,dx\approx\frac{1}{A}\int_{-\infty}^{\infty} x\,p(x)\,dx=\frac{E_0}{A}.\tag{12.210}$$

Based on the obtained representation, we can consider the following value

$$\tilde{E}=\frac{A}{N}\sum_{i=1}^{N} y_i\tag{12.211}$$

as the mathematical expectation estimate of the random variable, where yᵢ is the sample of the random sequence ηᵢ. It is not difficult to see that the considered mathematical expectation estimate is unbiased under the accepted conditions. The structural diagram of the device measuring the mathematical expectation using the additional signals is shown in Figure 12.11. The counter defines the difference between the positive and negative pulses forming at the output of the transformer g(ς). The functional purpose of the other elements is clear from the diagram.
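The measurer can be simulated directly; the sketch below (names, seed, and parameter values are illustrative) implements the comparison rule (12.208) and the estimate (12.211), and checks the variance against (12.216):

```python
import random

def dither_mean_estimate(xs, A, rng):
    """Estimate (12.211): compare each sample with an auxiliary value
    uniform on [-A, A] and average the signs, scaled by A."""
    s = 0
    for x in xs:
        v = rng.uniform(-A, A)
        s += 1 if x >= v else -1       # the counter tallies +/- pulses
    return A * s / len(xs)

rng = random.Random(12345)
A, E0, sigma, N, M = 4.0, 1.0, 0.5, 500, 1000
estimates = []
for _ in range(M):
    xs = [rng.gauss(E0, sigma) for _ in range(N)]   # P(|x| > A) is negligible here
    estimates.append(dither_mean_estimate(xs, A, rng))
mean_est = sum(estimates) / M
var_est = sum((e - mean_est)**2 for e in estimates) / (M - 1)
var_theory = A**2 / N * (1.0 - E0**2 / A**2)        # (12.216)
```

Over many trials the estimate averages to E₀, and its empirical variance agrees with (12.216), which indeed depends on the half-interval A rather than on σ².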


FIGURE 12.11 Measurer of mathematical expectation.

If we take into consideration the condition that P(x < −A) ≠ 0 and P(x > A) ≠ 0, then the mathematical expectation estimate given by (12.211) has a bias defined as

$$b(\tilde{E})=\langle\tilde{E}\rangle-E_0=-\left\{\int_{-\infty}^{-A}(x+A)\,p(x)\,dx+\int_{A}^{\infty}(x-A)\,p(x)\,dx\right\}.\tag{12.212}$$

The variance of the mathematical expectation given by (12.211) can be presented in the following form:

$$\operatorname{Var}(\tilde{E})=\frac{A^2}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\langle y_i\,y_j\rangle-E_0^2.\tag{12.213}$$

Taking into consideration the statistical independence of the samples yi, we have

$$\langle y_i\,y_j\rangle=\begin{cases}\langle y_i^2\rangle, & i=j,\\ \langle y_i\rangle\langle y_j\rangle, & i\ne j.\end{cases}\tag{12.214}$$

According to (12.208),

$$\langle\eta_i^2\,|\,x\rangle=\langle\eta_i^2\rangle=\langle y_i^2\rangle=1.\tag{12.215}$$

Consequently, the variance of the mathematical expectation estimate is simplified and takes the following form:

$$\operatorname{Var}(\tilde{E})=\frac{A^2}{N}\left(1-\frac{E_0^2}{A^2}\right).\tag{12.216}$$

As we can see from (12.216), since E₀² < A², the variance of the mathematical expectation estimate of stochastic process is defined completely by the half-interval A of possible values of the additional random sequence.

Comparing the variance of the mathematical expectation estimate given by (12.216) with the variance of the mathematical expectation estimate by N independent samples given by (12.183)

$$\frac{\operatorname{Var}(\tilde{E})}{\operatorname{Var}(E^*)}=\frac{A^2}{\sigma^2}\left(1-\frac{E_0^2}{A^2}\right),\tag{12.217}$$

we see that the considered procedure of estimating the mathematical expectation possesses a higher variance, since A² − E₀² > σ².

If we know a priori that the observed stochastic sequence ξᵢ takes only positive values, then we can use the following pdf of the additional sequence ζ(tᵢ) = ζᵢ:

$$p(v)=\frac{1}{A},\quad 0\le v\le A,\tag{12.218}$$

and the following function:

$$\eta_i=g(\varsigma_i)=\begin{cases}1, & \xi_i\ge\zeta_i,\\ 0, & \xi_i<\zeta_i\end{cases}\tag{12.219}$$

as the nonlinear transformation η = g(ς). In doing so, the following condition must be satisfied:

$$P[0\le\xi\le A]\approx 1.\tag{12.220}$$

As we can see from (12.220), this condition is analogous to the condition given by (12.206).

The conditional mathematical expectation of random variable ηi at ξi = x takes the following form:

$$\langle\eta_i\,|\,x\rangle=1\times P(v<x)+0\times P(v>x)=\int_0^x p(v)\,dv=\frac{x}{A}.\tag{12.221}$$

In doing so, the unconditional mathematical expectation of the random variable ηi is defined in the following form:

$$\langle\eta_i\rangle\approx\frac{1}{A}\int_0^\infty x\,p(x)\,dx=\frac{E_0}{A}.\tag{12.222}$$

For this reason, if the mathematical expectation estimate of the random sequence ξᵢ is defined by (12.211), it is unbiased in the first approximation.

The variance of the mathematical expectation estimate, as we discussed previously, is given by (12.213). In doing so, the conditional second moment of the random variable ηi is determined analogously as shown in (12.221):

$$\langle\eta_i^2\,|\,x\rangle=\int_0^x p(v)\,dv=\frac{x}{A}.\tag{12.223}$$

The unconditional moment is given by

$$\langle\eta_i^2\rangle=\langle y_i^2\rangle=\int_0^\infty\langle\eta_i^2\,|\,x\rangle\,p(x)\,dx=\frac{E_0}{A}.\tag{12.224}$$

Taking into consideration (12.223) and (12.224) under definition of the variance of the mathematical expectation estimate, we have

$$\operatorname{Var}(\tilde{E})=\frac{A\,E_0}{N}\left(1-\frac{E_0}{A}\right).\tag{12.225}$$

Thus, in the considered case, the variance of the mathematical expectation estimate is defined by the interval of possible values of the additional stochastic sequence, is independent of the variance of the observed stochastic process, and is always greater than the variance of the equidistributed estimate of the mathematical expectation by independent samples. For example, if the observed stochastic sequence is subject to the uniform pdf coinciding in the limiting case with (12.218), then the variance of the mathematical expectation estimate for the considered procedure is defined as

$$\operatorname{Var}(\tilde{E})=\frac{A^2}{4N}\tag{12.226}$$

and the variance of the mathematical expectation in the case of equidistributed estimate of the mathematical expectation is given by

$$\operatorname{Var}(E^*)=\frac{A^2}{12N};\tag{12.227}$$

that is, in the considered limiting case, when the observed and additional random sequences are subject to the uniform pdf, the variance of the mathematical expectation estimate using additional stochastic signals is three times greater than the variance of the equidistributed estimate. Under other conditions, the difference between the variances is even larger.

As applied to (12.219), the flowchart of the mathematical expectation measurer using additional stochastic signals shown in Figure 12.11 is the same, but the counter defines the positive pulses only in accordance with (12.219).

12.6 Mathematical Expectation Estimate Under Stochastic Process Amplitude Quantization

Let us define the effect of stochastic process amplitude quantization on the estimate of its mathematical expectation. We assume that the quantization can be considered as an inertialess nonlinear transformation with a constant quantization step and that the number of quantization levels is so high that the quantized stochastic process cannot fall outside the limits of the staircase transform characteristic g(x), the approximate form of which is shown in Figure 12.12. The pdf p(x) of the observed stochastic process, whose mathematical expectation does not coincide with the middle point between the quantization thresholds xi and xi+1, is also presented in Figure 12.12.

The transformation (quantization) characteristic y = g(x) can be presented as a summation of the rectangular functions shown in Figure 12.13, whose width and height are equal to the quantization step:

\[ g(x) = \sum_{k=-\infty}^{\infty} k\Delta\, a(x - k\Delta), \tag{12.228} \]

where a(z) is the rectangular function with unit height and the width equal to Δ. Hence, we can use the following mathematical expectation estimate

\[ E^{*} = \sum_{k=-\infty}^{\infty} k\Delta\, \frac{T_k}{T}, \tag{12.229} \]


FIGURE 12.12 Staircase characteristic of quantization.


FIGURE 12.13 Rectangular function.

where Tk = Σiτi is the total time during which the observed realization stays within the limits of the interval (k ± 0.5)Δ in the course of observation within the limits of the interval [0, T]. In doing so, lim T→∞ Tk/T is the probability that the stochastic process is within the limits of the interval (k ± 0.5)Δ.
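In discrete time, Tk/T is simply the fraction of samples falling at level k, so the estimate (12.229) reduces to averaging the quantized samples. A minimal sketch, with assumed values of the step, mean, and standard deviation:

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.5                    # quantization step (assumed)
E0, sigma = 1.3, 1.0           # assumed true mean and standard deviation
x = rng.normal(E0, sigma, 100000)

# Staircase quantizer g(x) of (12.228): each sample in the interval
# ((k - 0.5)*delta, (k + 0.5)*delta) is mapped to the level k*delta.
k = np.rint(x / delta)

# T_k/T is the fraction of samples at level k, so (12.229) becomes
# the average of the quantized samples.
E_est = delta * k.mean()
print(E_est)   # close to E0, since lambda = delta/sigma = 0.5 is small
```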

The mathematical expectation of the realization y(t) forming at the output of inertialess element (transducer) with the transform characteristic given by (12.228) when the realization x(t) of the stochastic process ξ(t) excites the input of inertialess element is equal to the mathematical expectation of the mathematical expectation estimate given by (12.229) and is defined as

\[ \langle E \rangle = \int_{-\infty}^{\infty} g(x)p(x)\,dx = \Delta \sum_{k=-\infty}^{\infty} k \int_{(k-0.5)\Delta}^{(k+0.5)\Delta} p(x)\,dx, \tag{12.230} \]

where p(x) is the one-dimensional pdf of the observed stochastic process. In general, the mathematical expectation of estimate 〈E〉 differs from the true value E0; that is, as a result of quantization we obtain the bias of the mathematical expectation of the mathematical expectation estimate defined as

\[ b(E) = \langle E \rangle - E_0. \tag{12.231} \]

To determine the variance of the mathematical expectation estimate according to (12.116), there is a need to define the correlation function of process forming at the transducer output, that is,

\[ R_y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x_1)g(x_2)\,p_2(x_1,x_2;\tau)\,dx_1\,dx_2 - \langle E \rangle^2, \tag{12.232} \]

where p2(x1, x2; τ) is the two-dimensional pdf of the observed stochastic process.

Since the mathematical expectation and the correlation function of process forming at the transducer output depend on the observed stochastic process pdf, the characteristics of the mathematical expectation estimate of stochastic process quantized by amplitude depend both on the correlation function and on the pdf of observed stochastic process.

Let us apply the obtained results to the Gaussian stochastic process, whose one-dimensional pdf is given by (12.153) and whose two-dimensional pdf takes the following form:

\[ p_2(x_1,x_2;\tau) = \frac{1}{2\pi\sigma^2\sqrt{1 - R^2(\tau)}} \exp\left\{ -\frac{(x_1 - E_0)^2 + (x_2 - E_0)^2 - 2R(\tau)(x_1 - E_0)(x_2 - E_0)}{2\sigma^2[1 - R^2(\tau)]} \right\}. \tag{12.233} \]

Define the mathematical expectation E0 using the quantization step Δ:

E0=(c+d)Δ,(12.234)

where c is an integer and −0.5 ≤ d ≤ 0.5. The value dΔ is equal to the deviation of the mathematical expectation from the middle of the quantization interval (step). Further, we will take into consideration that for the considered staircase characteristic of the transducer (transformer) the following relation is true:

g(x+wΔ)=g(x)+wΔ,(12.235)

where w = 0, ±1, ±2,… is the integer. Substituting (12.153) into (12.230) and taking into consideration (12.234) and (12.235), we can define the conditional mathematical expectation in the following form:

\[ \langle E \mid d \rangle = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} g(x) \exp\left\{ -\frac{[(x - c\Delta) - d\Delta]^2}{2\sigma^2} \right\} dx = \sum_{k=-\infty}^{\infty} k\Delta \big\{ Q[(k - 0.5 - d)\lambda] - Q[(k + 0.5 - d)\lambda] \big\} + c\Delta, \tag{12.236} \]

where

\[ \lambda = \frac{\Delta}{\sigma} \tag{12.237} \]

is the ratio between the quantization step and root-mean-square deviation of stochastic process; Q(x) is the Gaussian Q function given by (12.68).

The conditional bias of the mathematical expectation estimate can be presented in the following form:

\[ b(E \mid d) = \Delta \sum_{k=-\infty}^{\infty} k \big\{ Q[(k - 0.5 - d)\lambda] - Q[(k + 0.5 - d)\lambda] \big\} - d\Delta. \tag{12.238} \]

It is easy to see that the conditional bias is an odd function of d, that is,

\[ b(E \mid d) = -b(E \mid -d), \tag{12.239} \]

and, at that, if d = 0 or d = ±0.5, the mathematical expectation estimate is unbiased. If λ ≫ 1 (in practice, at λ ≥ 5), (12.238) is simplified and takes the following form:

\[ b(E \mid d) = \big\{ Q[0.5\lambda(1 - 2d)] - Q[0.5\lambda(1 + 2d)] - d \big\}\Delta. \tag{12.240} \]

At λ < 1, the conditional bias can also be simplified. For this purpose, we expand the function in braces in (12.238) into the Taylor series about the point (k − d)λ, limiting ourselves to the first three terms of the expansion. As a result, we obtain

\[ b(E \mid d) = \frac{\lambda\Delta}{\sqrt{2\pi}} \sum_{k=-\infty}^{\infty} k \exp\{-0.5(k - d)^2\lambda^2\} - d\Delta. \tag{12.241} \]

If λ ≪ 1, the sum in (12.241) can be replaced by an integral. Substituting x = λk, dx = λ dk, we obtain

\[ \sum_{k=-\infty}^{\infty} k\lambda \exp\{-0.5(k - d)^2\lambda^2\} \approx \frac{1}{\lambda} \int_{-\infty}^{\infty} x \exp\{-0.5(x - d\lambda)^2\}\,dx = \sqrt{2\pi}\,d. \tag{12.242} \]

As we can see from (12.241) and (12.242), at λ ≪ 1, that is, when the quantization step is much smaller than the root-mean-square deviation of the stochastic process, the mathematical expectation estimate is unbiased for all practical purposes.
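The behavior of the conditional bias in the two regimes can be checked directly from (12.238), using the identity Q(x) = 0.5·erfc(x/√2) for the Gaussian Q function; the truncation bound K and the sample values of λ and d below are assumed:

```python
import math

def Q(x):
    # Gaussian Q function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bias_over_delta(lmb, d, K=200):
    # Conditional bias b(E|d)/Delta from (12.238); the sum is truncated at |k| <= K
    s = sum(k * (Q((k - 0.5 - d) * lmb) - Q((k + 0.5 - d) * lmb))
            for k in range(-K, K + 1))
    return s - d

print(bias_over_delta(0.2, 0.25))  # lambda << 1: negligible bias
print(bias_over_delta(5.0, 0.25))  # lambda = 5: noticeable bias
print(bias_over_delta(5.0, 0.0))   # d = 0: unbiased for any lambda
```

For λ = 5 and d = 0.25 the result agrees with the large-λ approximation (12.240).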

To obtain the unconditional bias we assume that d is the random variable uniformly distributed within the limits of the interval [−0.5; 0.5]. Let us take into consideration that

\[ \int Q(x)\,dx = xQ(x) - \int x\,Q'(x)\,dx. \tag{12.243} \]

Averaging (12.238) by all possible values of d we obtain

\[ b(E) = \Delta \sum_{k=-\infty}^{\infty} k \left\{ 2kQ(k\lambda) - (k + 1)Q[(k + 1)\lambda] - (k - 1)Q[(k - 1)\lambda] + \frac{1}{\sqrt{2\pi}\,\lambda} \Big[ \exp\{-0.5\lambda^2(k + 1)^2\} + \exp\{-0.5\lambda^2(k - 1)^2\} - 2\exp\{-0.5\lambda^2 k^2\} \Big] \right\}. \tag{12.244} \]

The terms of the series at k = p and k = −p are equal in absolute value and opposite in sign. Because of this, b(E) = 0; that is, the mathematical expectation estimate of the stochastic process quantized by amplitude is unconditionally unbiased. Substituting (12.233) into (12.232) and introducing the new variables

\[ \begin{cases} z_1 = x_1 - c\Delta, \\ z_2 = x_2 - c\Delta, \end{cases} \tag{12.245} \]

and taking into consideration (12.235), we obtain

\[ R_y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(z_1)g(z_2)\,p_2(z_1,z_2;\tau)\,dz_1\,dz_2, \tag{12.246} \]

where p2(z1, z2; τ) is the two-dimensional pdf of the Gaussian stochastic process with zero mathematical expectation. To determine (12.246), we can present the two-dimensional pdf as an expansion in series by derivatives of the Gaussian Q function (12.156), assuming that x = z and E0 = 0 in the last formula. Substituting (12.156) and (12.228) into (12.246) and taking into consideration the parity of the function Q(1)(z/σ) and the oddness of the function g(z), we obtain

\[ R_y(\tau) = \frac{1}{\sigma^2} \sum_{v=1}^{\infty} \frac{R^v(\tau)}{v!} \left\{ \int_{-\infty}^{\infty} g(z)\, Q^{(v+1)}\!\left(\frac{z}{\sigma}\right) dz \right\}^2. \tag{12.247} \]

Integrating in the braces by parts and taking into consideration that Q(v)(±∞) = 0 at v ≥ 1, we obtain

\[ \int_{-\infty}^{\infty} g(z)\, Q^{(v+1)}\!\left(\frac{z}{\sigma}\right) dz = -\sigma \int_{-\infty}^{\infty} g'(z)\, Q^{(v)}\!\left(\frac{z}{\sigma}\right) dz. \tag{12.248} \]

According to (12.228),

\[ g'(z) = \frac{dg(z)}{dz} = \sum_{k=1}^{\infty} \Delta\,\delta[z - (k - 0.5)\Delta] + \sum_{k=1}^{\infty} \Delta\,\delta[z + (k - 0.5)\Delta], \tag{12.249} \]

where δ(z) is the Dirac delta function. Then the correlation function given by (12.247) takes the following form:

\[ R_y(\tau) = \Delta^2 \sum_{v=1}^{\infty} a_v^2\, \frac{R^v(\tau)}{v!}, \tag{12.250} \]

where

\[ a_v = \sum_{k=1}^{\infty} Q^{(v)}[(k - 0.5)\lambda] + \sum_{k=1}^{\infty} Q^{(v)}[-(k - 0.5)\lambda], \tag{12.251} \]

and at that the coefficients av are equal to zero at even v; that is, we can write

\[ a_v = \begin{cases} 2\displaystyle\sum_{k=1}^{\infty} Q^{(v)}[(k - 0.5)\lambda] & \text{at odd } v, \\ 0 & \text{at even } v. \end{cases} \tag{12.252} \]

Substituting (12.250) into (12.116) and taking into consideration (12.252), we obtain the normalized variance of the mathematical expectation estimate of stochastic process:

\[ \frac{\mathrm{Var}(E^{*})}{\sigma^2} = 4\lambda^2 \sum_{v=1}^{\infty} \frac{C_{2v-1}^2}{(2v-1)!} \times \frac{2}{T} \int_0^T \left(1 - \frac{\tau}{T}\right) R^{2v-1}(\tau)\,d\tau, \tag{12.253} \]

where

\[ C_{2v-1} = \sum_{k=1}^{\infty} Q^{(2v-1)}[(k - 0.5)\lambda]. \tag{12.254} \]

In the limiting case as Δ → 0, we have

\[ \lim_{\Delta \to 0} \frac{\Delta}{\sigma} C_{2v-1} = \lim_{\Delta \to 0} \frac{\Delta}{\sigma} \sum_{k=1}^{\infty} Q^{(2v-1)}\!\left[(k - 0.5)\frac{\Delta}{\sigma}\right] = \int_0^{\infty} Q^{(2v-1)}(z)\,dz = Q^{(2v-2)}(z)\Big|_{\infty}^{0} = \begin{cases} 0.5 & \text{if } v = 1, \\ 0 & \text{if } v \ge 2. \end{cases} \tag{12.255} \]

As we can see from (12.253), based on (12.255) we obtain (12.116) in the limit. Computations of λC2v−1 carried out for the first five values of v show that at λ ≤ 1.0 the formula (12.255) is approximately true with a relative error less than 0.02. Taking this statement into consideration, we can limit ourselves to the first term in (12.253) for the indicated magnitudes λ, especially since the contribution of higher-order terms to the total result in (12.253) decreases proportionally to the factors λ²C²2v−1/(2v − 1)!. Thus, if the quantization step is not greater than the root-mean-square deviation of the observed Gaussian stochastic process, (12.116) is approximately true for the definition of the variance of the mathematical expectation estimate.
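The practical conclusion (quantization with a step up to about one standard deviation barely changes the variance of the mean estimate) is easy to check by simulation. A minimal sketch with independent Gaussian samples and assumed parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, delta = 1.0, 0.5        # lambda = delta/sigma = 0.5 (assumed values)
N, trials = 200, 20000

x = rng.normal(0.7, sigma, size=(trials, N))   # assumed true mean 0.7
xq = delta * np.rint(x / delta)                # amplitude quantization, step delta

var_plain = x.mean(axis=1).var()   # variance of the mean of raw samples
var_quant = xq.mean(axis=1).var()  # variance of the mean of quantized samples

ratio = var_quant / var_plain
print(ratio)   # close to 1 for lambda <= 1
```

For independent samples the excess is close to Sheppard's correction Δ²/12, about 2% of σ² at λ = 0.5.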

12.7 Optimal Estimate of Varying Mathematical Expectation of Gaussian Stochastic Process

Consider the estimate of the time-varying mathematical expectation E(t) of the Gaussian stochastic process ξ(t) based on the observed realization x(t) within the limits of the interval [0, T]. In doing so, we assume that the centralized stochastic process ξ0(t) = ξ(t) − E(t) is a stationary stochastic process and that the time-varying mathematical expectation of the stochastic process can be approximated by a linear combination of the following form:

\[ E(t) \approx \sum_{i=1}^{N} \alpha_i \varphi_i(t), \tag{12.256} \]

where

  • αi indicates unknown factors

  • φi(t) is the given function of time

If the number of terms in (12.256) is finite, that is, if N is finite, there will be a difference between E(t) and its series expansion. However, with increasing N, the approximation error tends to zero:

\[ \varepsilon^2 = \int_0^T \left[ E(t) - \sum_{i=1}^{N} \alpha_i \varphi_i(t) \right]^2 dt. \tag{12.257} \]

In this case, we say that the series given by (12.256) converges in the mean.

Based on the condition of the minimum of the squared approximation error, we conclude that the factors αi are defined from the following system of linear equations:

\[ \sum_{i=1}^{N} \alpha_i \int_0^T \varphi_i(t)\varphi_j(t)\,dt = \int_0^T E(t)\varphi_j(t)\,dt, \quad j = 1,2,\ldots,N. \tag{12.258} \]
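The system (12.258) is a set of normal equations and can be solved directly. A minimal numerical sketch, with a hypothetical polynomial basis φi(t) = t^(i−1) and an assumed E(t) chosen to lie in the span of that basis (both are illustrative choices, not from the text):

```python
import numpy as np

def integ(f, dt):
    # Trapezoid-rule integral over [0, T] of the sampled function f
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

T, M = 1.0, 2001
t = np.linspace(0.0, T, M)
dt = t[1] - t[0]

E_true = 1.0 + 2.0 * t - 1.5 * t**2           # assumed mathematical expectation
phi = [t**0, t, t**2]                          # hypothetical basis phi_i(t) = t**(i-1)

# Gram matrix and right-hand side of (12.258)
G = np.array([[integ(pi * pj, dt) for pj in phi] for pi in phi])
b = np.array([integ(E_true * pi, dt) for pi in phi])
alpha = np.linalg.solve(G, b)

print(alpha)   # approximately [1.0, 2.0, -1.5]
```

Because E_true lies exactly in the span of the basis, the recovered coefficients match the assumed ones up to quadrature error.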

In the case of representation E(t) in the series form (12.256), the functions φi(t) are selected in such a way to ensure fast series convergence. However, in some cases, the main factor in selection of the functions φ(t) is the simplicity of physical realization (generation) of these functions. Thus, the problem of definition of the mathematical expectation E(t) of stochastic process ξ(t) by a single realization x(t) within the limits of the interval [0, T] is reduced to estimation of the coefficients αi in the series given by (12.256). In doing so, the bias and dispersion of the mathematical expectation estimate E(t) of the observed stochastic process caused by measurement errors of the coefficients αi are given in the following form:

\[ b_E(t) = E(t) - E^{*}(t) = \sum_{i=1}^{N} \varphi_i(t)\,(\alpha_i - \alpha_i^{*}); \tag{12.259} \]

\[ D_E(t) = \sum_{i,j=1}^{N} \varphi_i(t)\varphi_j(t)\,\langle(\alpha_i - \alpha_i^{*})(\alpha_j - \alpha_j^{*})\rangle, \tag{12.260} \]

where αi* is the estimate of the coefficients αi. Statistical characteristics (the bias and dispersion) of the mathematical expectation estimate of stochastic process averaged within the limits of the observation interval [0, T] take the following form:

\[ \overline{b_E(t)} = \frac{1}{T} \int_0^T b_E(t)\,dt = \frac{1}{T} \sum_{i=1}^{N} (\alpha_i - \alpha_i^{*}) \int_0^T \varphi_i(t)\,dt; \tag{12.261} \]

\[ \overline{D_E(t)} = \frac{1}{T} \int_0^T D_E(t)\,dt = \sum_{i,j=1}^{N} \langle(\alpha_i - \alpha_i^{*})(\alpha_j - \alpha_j^{*})\rangle\, \frac{1}{T} \int_0^T \varphi_i(t)\varphi_j(t)\,dt. \tag{12.262} \]

The pdf functional of the observed stochastic process, accurate to terms independent of the unknown mathematical expectation E(t), can be presented by analogy with (12.4) in the following form:

\[ F[x(t) \mid E(t)] = B_1 \exp\left\{ \sum_{i=1}^{N} \alpha_i y_i - \frac{1}{2} \sum_{i,j=1}^{N} \alpha_i \alpha_j c_{ij} \right\}, \tag{12.263} \]

where

\[ y_i = \int_0^T x(t)\upsilon_i(t)\,dt = \int_0^T\!\!\int_0^T x(t_1)\varphi_i(t_2)\vartheta(t_1,t_2)\,dt_1\,dt_2; \tag{12.264} \]

\[ c_{ij} = c_{ji} = \int_0^T \varphi_i(t)\upsilon_j(t)\,dt = \int_0^T\!\!\int_0^T \varphi_i(t_1)\varphi_j(t_2)\vartheta(t_1,t_2)\,dt_1\,dt_2, \tag{12.265} \]

and the function υi(t) is defined by the following integral equation:

\[ \int_0^T R(t,\tau)\upsilon_i(\tau)\,d\tau = \varphi_i(t). \tag{12.266} \]

Solving the likelihood equation with respect to unknown coefficients αi

\[ \frac{\partial F[x(t) \mid E(t;\alpha_1,\ldots,\alpha_N)]}{\partial \alpha_j} = 0, \tag{12.267} \]

we can find the system of N linear equations

\[ \sum_{i=1}^{N} \alpha_i^{*} c_{ij} = y_j, \quad j = 1,2,\ldots,N \tag{12.268} \]

with respect to the estimates αi*. Based on (12.268), we can obtain the estimates of coefficients

\[ \alpha_m^{*} = \frac{A_m}{A} = \frac{1}{A} \sum_{i=1}^{N} A_{im} y_i, \quad m = 1,2,\ldots,N, \tag{12.269} \]

where

\[ A = \|c_{ij}\| = \begin{vmatrix} c_{11} & c_{12} & \cdots & c_{1m} & \cdots & c_{1N} \\ c_{21} & c_{22} & \cdots & c_{2m} & \cdots & c_{2N} \\ \vdots & \vdots & & \vdots & & \vdots \\ c_{N1} & c_{N2} & \cdots & c_{Nm} & \cdots & c_{NN} \end{vmatrix} \tag{12.270} \]

is the determinant of the system of linear equations given by (12.268),

\[ A_m = \begin{vmatrix} c_{11} & c_{12} & \cdots & y_1 & \cdots & c_{1N} \\ c_{21} & c_{22} & \cdots & y_2 & \cdots & c_{2N} \\ \vdots & \vdots & & \vdots & & \vdots \\ c_{N1} & c_{N2} & \cdots & y_N & \cdots & c_{NN} \end{vmatrix} \tag{12.271} \]

is the determinant obtained from the determinant A given by (12.270) by replacing the column cim with the column yi; Aim is the algebraic supplement (cofactor) of the elements of the mth column (the column yi). In doing so, the following relationship

\[ \sum_{j=1}^{N} c_{ij} A_{kj} = \sum_{j=1}^{N} c_{ij} A_{jk} = A\,\delta_{ik} \tag{12.272} \]

is true for the quadratic matrix ||cij||.

A flowchart of the optimal measurer of the varying mathematical expectation of the Gaussian stochastic process is shown in Figure 12.14. The measurer operates in the following way. Based on the previously mentioned system of functions φi(t) generated by the "Genφ" and a priori information about the correlation function R(τ) of the observed stochastic process, the generator "Genυ" forms the system of functions υi(t) in accordance with the integral equation (12.266). According to (12.265), the generator "GenC" generates the system of coefficients cij sent to the "Microprocessor". The magnitudes yi from the outputs of the "Integrator" also come in at the input of the "Microprocessor". Based on a solution of the N linear equations given by (12.268) with respect to the unknown coefficients αi, the "Microprocessor" generates their estimates. Using the estimates αi* and the system of functions φi(t), the estimate E*(t) of the time-varying mathematical expectation E(t) is formed according to (12.256). The generated estimate E*(t) has a delay of T + ΔT with respect to the true value, which is required to compute the random variables yi and to solve the N linear equations by the "Microprocessor". The delay blocks T and ΔT are used in the flowchart with this purpose.

Substituting x(t) given by (12.2) into (12.264), we can write

\[ y_i = \int_0^T x_0(t)\upsilon_i(t)\,dt + \int_0^T E(t)\upsilon_i(t)\,dt = n_i + \alpha_m c_{im} + \sum_{q=1,\,q\neq m}^{N} \alpha_q c_{iq}, \tag{12.273} \]

where

\[ n_i = \int_0^T x_0(t)\upsilon_i(t)\,dt. \tag{12.274} \]


FIGURE 12.14 Optimal measurer of time-varying mathematical expectation estimate.

Taking into consideration (12.272) and (12.273), the determinant given by (12.271) can be presented in the form of sum of two determinants

Am=Bm+Cm,(12.275)

where

\[ B_m = \begin{vmatrix} c_{11} & c_{12} & \cdots & (n_1 + \alpha_m c_{1m}) & \cdots & c_{1N} \\ c_{21} & c_{22} & \cdots & (n_2 + \alpha_m c_{2m}) & \cdots & c_{2N} \\ \vdots & \vdots & & \vdots & & \vdots \\ c_{N1} & c_{N2} & \cdots & (n_N + \alpha_m c_{Nm}) & \cdots & c_{NN} \end{vmatrix}. \tag{12.276} \]

The determinant Cm is obtained from the determinant Bm by replacing the mth column with the column consisting of the terms Σ(q=1, q≠m, N) αq ciq. Since each element of the mth column of the determinant Cm is a linear combination of the elements of the other columns, Cm = 0.

Let us present the determinant Am = Bm in the form of the sum of the products of all elements of the mth column by their algebraic supplements Aim, that is,

\[ A_m = \sum_{i=1}^{N} (n_i + \alpha_m c_{im})\,A_{im}. \tag{12.277} \]

Taking into consideration that

\[ \sum_{i=1}^{N} A_{im} c_{im} = A, \tag{12.278} \]

the estimate of the coefficients αm takes the following form:

\[ \alpha_m^{*} = \frac{1}{A} \sum_{i=1}^{N} A_{im} n_i + \alpha_m, \quad m = 1,2,\ldots,N. \tag{12.279} \]

Since

\[ \langle n_i \rangle = \left\langle \int_0^T x_0(t)\upsilon_i(t)\,dt \right\rangle = 0, \tag{12.280} \]

the estimates of the coefficients αi of series given by (12.256) are unbiased. The correlation function of estimates of the coefficients αm and αq is defined by the following form:

\[ R(\alpha_m^{*},\alpha_q^{*}) = \frac{1}{A^2} \sum_{i,j=1}^{N} \langle n_i n_j \rangle\, A_{im} A_{jq} = \frac{A_{mq}}{A}. \tag{12.281} \]

While deriving (12.281), we have taken into consideration the integral equation (12.266) and the formula given by (12.272). Now we are able to define the variance of the estimates of the coefficients αm, namely,

\[ \mathrm{Var}(\alpha_m^{*}) = \frac{A_{mm}}{A}. \tag{12.282} \]

In practice, we can assume that the frequency band ΔfE of the varying mathematical expectation is much less than the effective frequency band Δfef of the spectrum G(f) of the observed centralized stochastic process ξ0(t), and that the spectrum G(f) does not change, for all practical purposes, within the limits of the frequency band ΔfE. In this case, for further analysis the centralized stochastic process ξ0(t) can be considered as "white" noise with the effective spectral density

\[ N_{ef} = \frac{1}{\Delta f_E} \int_0^{\Delta f_E} G(f)\,df. \tag{12.283} \]

for further analysis. In doing so, the effective spectrum bandwidth can be defined as

\[ \Delta f_{ef} = \int_0^{\infty} \frac{G(f)}{G_{\max}}\,df. \tag{12.284} \]

In the case of the accepted approximation, the correlation function of the centralized stochastic process ξ0(t) is approximated by

\[ R(\tau) = \frac{N_{ef}}{2}\,\delta(\tau). \tag{12.285} \]

Substituting (12.285) into the integral equation (12.266), we obtain

\[ \upsilon_i(t) = \frac{2}{N_{ef}}\,\varphi_i(t). \tag{12.286} \]

In doing so, the matrix of the coefficients

\[ c_{ij} = \frac{2}{N_{ef}}\,\delta_{ij} \tag{12.287} \]

is the diagonal matrix and the determinant and algebraic supplement of this matrix are defined, correspondingly

\[ A = \left\{\frac{2}{N_{ef}}\right\}^{N}; \tag{12.288} \]

\[ A_{im} = \begin{cases} A_{mm} = \left\{\dfrac{2}{N_{ef}}\right\}^{N-1} & \text{at } i = m, \\ 0 & \text{at } i \neq m. \end{cases} \tag{12.289} \]

As we can see from (12.281), the correlation function of the estimates αm* and αq* takes the following form:

\[ R(\alpha_m^{*},\alpha_q^{*}) = \frac{N_{ef}}{2}\,\delta_{mq}. \tag{12.290} \]

Based on (12.260), (12.262), and (12.290), we can note that the current and averaged variances of the varying mathematical expectation estimate E*(t) can be presented in the following form:

\[ \mathrm{Var}\{E^{*}(t)\} = \frac{N_{ef}}{2} \sum_{i=1}^{N} \varphi_i^2(t), \tag{12.291} \]

\[ \overline{\mathrm{Var}(E^{*})} = \frac{1}{T} \sum_{i=1}^{N} \mathrm{Var}(\alpha_i^{*}) = \frac{N_{ef} N}{2T}. \tag{12.292} \]

As we can see from (12.292), the higher the number of terms in the series expansion (12.256) used to approximate the mathematical expectation, the higher the variance of the time-varying mathematical expectation estimate averaged within the limits of the observation interval, all other conditions being the same. Note also that, in general, the number N of series expansion terms increases essentially with an increase in the observation interval [0, T], within the limits of which the approximation is carried out.
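The white-noise approximation (12.285) through (12.290) is easy to verify in discrete time. Below is a minimal sketch in which discrete white noise with sample spacing dt stands in for ξ0(t), so that Nef = 2σ²dt, and two orthonormal functions on [0, T] serve as an assumed basis; per (12.286) and (12.287) the coefficient estimates reduce to projections, and their variance should equal Nef/2 as in (12.290):

```python
import numpy as np

rng = np.random.default_rng(3)
T, M = 1.0, 1000
t = np.linspace(0.0, T, M, endpoint=False)
dt = T / M
sigma = 1.0
Nef = 2.0 * sigma**2 * dt   # effective spectral density of the discrete white noise

# Orthonormal basis on [0, T]: phi_1 = 1/sqrt(T), phi_2 = sqrt(2/T) cos(2 pi t / T)
phi = np.stack([np.ones(M) / np.sqrt(T),
                np.sqrt(2.0 / T) * np.cos(2.0 * np.pi * t / T)])

trials = 5000
w = rng.normal(0.0, sigma, size=(trials, M))   # realizations of the centered process
# With R(tau) = (Nef/2) delta(tau), the estimates become alpha_i* = integral x phi_i dt
alpha = (w * dt) @ phi.T                        # shape (trials, 2)

ratios = alpha.var(axis=0) / (Nef / 2.0)
print(ratios)   # each component close to 1, matching (12.290)
```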

As applied to the correlation function given by (12.13), the effective spectral density of the centralized stochastic process ξ(t) takes the following form:

\[ N_{ef} = \frac{2\sigma^2}{\pi \Delta f_E} \arctan\left(\frac{2\pi \Delta f_E}{\alpha}\right). \tag{12.293} \]

If the observed stochastic process is stationary, then

\[ N_{ef} = \frac{4\sigma^2}{\alpha} \quad \text{and} \quad N = 1, \tag{12.294} \]

and the variance of the mathematical expectation estimate takes the form given by (12.50). The formulae for the variance of the estimates of the coefficients αm given by (12.282) and for the variance of the time-varying mathematical expectation estimate given by (12.260) and (12.262) are simplified essentially if the functions φi(t) satisfy the Fredholm integral equation of the second kind:

\[ \varphi_i(t) = \lambda_i \int_0^T R(t,\tau)\varphi_i(\tau)\,d\tau. \tag{12.295} \]

In the considered case, the coefficients λi and the functions φi(t) are called the eigenvalues and eigenfunctions of the integral equation, respectively. Comparing (12.295) and (12.266), we can see

υi(t)=λiφi(t).(12.296)

In the theory of stochastic processes [15–17], it is proved that if the functions φi(t) satisfy Equation 12.295, then these functions are orthogonal normalized (orthonormalized) functions and the eigenvalues λi > 0. In this case, the following equation

\[ \int_0^T \varphi_i(t)\varphi_j(t)\,dt = \delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j \end{cases} \tag{12.297} \]

is true for the eigenfunctions φi(t), and the correlation function R (t1, t2) can be presented by the following expansion in series

\[ R(t_1,t_2) = \sum_{i=1}^{\infty} \frac{\varphi_i(t_1)\varphi_i(t_2)}{\lambda_i}, \tag{12.298} \]

and the following equality is satisfied:

\[ \sum_{i=1}^{\infty} \frac{1}{\lambda_i} = \sigma^2 T. \tag{12.299} \]

Substituting (12.295) into (12.265) and taking into consideration (12.297), we obtain

\[ c_{ij} = \begin{cases} \lambda_i & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases} \tag{12.300} \]

At that, the matrix of coefficients ‖cij‖ is diagonal. The determinant A and the algebraic supplements Aim of this matrix can be presented in the following form, respectively:

\[ A = \prod_{i=1}^{N} \lambda_i, \tag{12.301} \]

\[ A_{im} = \begin{cases} \dfrac{A}{\lambda_m} & \text{if } i = m, \\ 0 & \text{if } i \neq m. \end{cases} \tag{12.302} \]

As we can see from (12.301), (12.302), (12.269), and (12.256), the estimates of the coefficients αi* and the time-varying mathematical expectation estimate E*(t) can be presented in the following form:

\[ \alpha_i^{*} = \int_0^T x(\tau)\varphi_i(\tau)\,d\tau; \tag{12.303} \]

\[ E^{*}(t) = \sum_{i=1}^{N} \alpha_i^{*}\varphi_i(t). \tag{12.304} \]
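The eigenfunction machinery can be sketched numerically by discretizing the kernel: the integral operator with kernel R has eigenvalues μi = 1/λi and orthonormal eigenfunctions φi, and Var(αi*) = 1/λi = μi per (12.305). The exponential correlation kernel below and the Cholesky-based generation of Gaussian realizations are assumptions made for the sake of the sketch, not constructions from the text:

```python
import numpy as np

rng = np.random.default_rng(4)
T, M = 1.0, 400
t = np.linspace(0.0, T, M, endpoint=False)
dt = T / M
sigma, a = 1.0, 5.0

# Assumed correlation kernel R(t1, t2) = sigma^2 exp(-a |t1 - t2|)
R = sigma**2 * np.exp(-a * np.abs(t[:, None] - t[None, :]))

# Discretized eigenproblem of (12.295): eigenvalues mu_i = 1/lambda_i
mu, V = np.linalg.eigh(R * dt)
order = np.argsort(mu)[::-1]
mu, V = mu[order], V[:, order]
phi = V / np.sqrt(dt)            # normalized so that integral phi_i^2 dt = 1

# Monte Carlo: centered Gaussian realizations with covariance R
L = np.linalg.cholesky(R + 1e-10 * np.eye(M))
x0 = rng.normal(size=(3000, M)) @ L.T

# alpha_i* = integral x(t) phi_i(t) dt, eq. (12.303)
alpha = (x0 * dt) @ phi[:, :3]
ratios = alpha.var(axis=0) / mu[:3]
print(ratios)   # each close to 1: Var(alpha_i*) = 1/lambda_i
```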


FIGURE 12.15 Optimal measurer of time-varying mathematical expectation estimate in accordance with (12.304); particular case of the flowchart in Figure 12.14.

The optimal measurer of the time-varying mathematical expectation of the Gaussian stochastic process operating in accordance with (12.304) is shown in Figure 12.15. The flowchart presented in Figure 12.15 is a particular case of the measurer depicted in Figure 12.14. Based on the well-known correlation function of the observed stochastic process, in accordance with (12.295), the generator "Genφ" forms the functions φi(t) that are employed to form the estimates of the coefficients αi*. The estimates of the coefficients are multiplied by the functions φi(t) again, and the obtained products come in at the summator input. The expected estimate E*(t) of the time-varying mathematical expectation is formed at the summator output. The delay is required to compute the coefficients αi*.

The variance of estimate of the coefficients αi* and the current and averaged variances of estimates E*(t) of the time-varying mathematical expectation are transformed in accordance with (12.282), (12.260), and (12.262) in the following form:

\[ \mathrm{Var}(\alpha_m^{*}) = \frac{1}{\lambda_m}; \tag{12.305} \]

\[ \mathrm{Var}\{E^{*}(t)\} = \sum_{i=1}^{N} \frac{\varphi_i^2(t)}{\lambda_i}; \tag{12.306} \]

\[ \overline{\mathrm{Var}(E^{*})} = \frac{1}{T} \sum_{i=1}^{N} \frac{1}{\lambda_i}. \tag{12.307} \]

As we can see from (12.307), with increase in the number of terms under approximation E(t) by the series given by (12.256), the averaged variance of the time-varying mathematical expectation estimate E* also increases. As N → ∞, taking into consideration (12.299), we obtain

Var(E*)=σ2.(12.308)

Thus, at a sufficiently large number of eigenfunctions in the sum given by (12.256), the averaged variance of the time-varying mathematical expectation estimate E*(t) is equal to the variance of the initial stochastic process. In doing so, the estimate bias caused by the finite number of terms in the series given by (12.256) tends to zero. However, in practice, there is a need to choose the number of terms in the series given by (12.256) in such a way that the dispersion of the time-varying mathematical expectation estimate, caused both by the bias and by the estimate variance, would be minimal.

12.8 Varying Mathematical Expectation Estimate Under Stochastic Process Averaging in Time

The practical realization of optimal procedures for estimating the time-varying mathematical expectations of stochastic processes involves sufficiently complex mathematical computations. For this reason, to measure the time-varying mathematical expectations of stochastic processes in the course of practical applications, the averaging in time of the observed stochastic process is widely used. In principle, to define the current value of the mathematical expectation of a nonstationary stochastic process, there is a need to have a set of realizations xi(t) of the investigated stochastic process ξ(t). Then the estimate of the searched parameter of the stochastic process at the instant t0 is determined in the following form:

\[ E^{*}(t_0) = \frac{1}{N} \sum_{i=1}^{N} x_i(t_0), \tag{12.309} \]

where N is the number of investigated stochastic process realizations. As we can see from (12.309), the mathematical expectation estimate is unbiased and the variance of the mathematical expectation estimate can be presented in the following form:

\[ \mathrm{Var}[E^{*}(t_0)] = \frac{\sigma^2(t_0)}{N} \tag{12.310} \]

in the case of independent realizations xi(t), where σ²(t0) is the variance of the investigated stochastic process at the instant t = t0. Thus, the estimate given by (12.309) is consistent, since as N → ∞ the variance of the estimate tends to zero. However, as a rule, a researcher does not have a sufficient number of realizations of the stochastic process; thus, there is a need to carry out the estimation of the mathematical expectation based on an analysis of a limited number of realizations and, sometimes, based on a single realization.

In defining the estimate of the time-varying mathematical expectation of a stochastic process by a single realization, we meet difficulties caused by the choice of the optimal averaging (integration) time or the time constant of the smoothing filter for a given filter impulse response. In doing so, two conflicting requirements arise. On the one hand, to decrease the variance of the estimate caused by the finite measuring time, this time interval must be large. On the other hand, to better distinguish the variations of the mathematical expectation in time, there is a need to choose the integration time as short as possible. Evidently, there is an optimal averaging time, or an optimal bandwidth of the smoothing filter under the given impulse response, which corresponds to the minimal dispersion of the mathematical expectation estimate of the stochastic process caused by the aforementioned factors.

The simplest way to define the mathematical expectation of the stochastic process at the instant t = t0 is an averaging of the ordinates of the stochastic process realization within the limits of a time interval centered at the given magnitude of the argument t = t0. In doing so, the mathematical expectation estimate is defined as

\[ E^{*}(t_0,T) = \frac{1}{T} \int_{t_0 - 0.5T}^{t_0 + 0.5T} x(t)\,dt = \frac{1}{T} \int_{-0.5T}^{0.5T} x(t_0 + t)\,dt. \tag{12.311} \]

Averaging (12.311) by realizations, we obtain the mathematical expectation of estimate at the instant t = t0:

\[ \langle E^{*}(t_0,T) \rangle = \frac{1}{T} \int_{-0.5T}^{0.5T} E(t_0 + t)\,dt, \tag{12.312} \]

where E(t0 + t) is the true value of the mathematical expectation of the investigated stochastic process at the instant t0 + t. Thus, in contrast to the stationary case, the mathematical expectation of the estimate of the time-varying mathematical expectation is obtained by smoothing the true mean within the limits of the time interval [t0 − 0.5T; t0 + 0.5T].
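The smoothing estimate (12.311) is, in discrete time, a centered moving average. A minimal sketch with an assumed sinusoidal mean, assumed noise level, and an assumed window length:

```python
import numpy as np

rng = np.random.default_rng(5)
M = 5000
t = np.linspace(0.0, 1.0, M)
E_t = np.sin(2.0 * np.pi * t)            # assumed slowly varying mean
x = E_t + rng.normal(0.0, 0.5, M)        # single observed realization

def sliding_mean(x, W):
    # Discrete analog of (12.311): average over a centered window of W samples
    kernel = np.ones(W) / W
    return np.convolve(x, kernel, mode="same")

E_est = sliding_mean(x, 201)
err = np.abs(E_est - E_t)[300:-300]      # ignore edge effects of the window
print(err.mean())                        # small compared to the noise std 0.5
```

A longer window reduces the noise contribution but increases the smoothing bias (12.313), which is the tradeoff discussed below.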

In general, as a result of considered averaging, there is the mathematical expectation bias that can be presented in the following form:

\[ b[E^{*}(t_0,T)] = \frac{1}{T} \int_{-0.5T}^{0.5T} [E(t_0 + t) - E(t_0)]\,dt. \tag{12.313} \]

If the magnitude E(t) is described about the point t = t0 within the limits of the time interval [t0 − 0.5T; t0 + 0.5T] by a series with only odd powers of t in the following form

\[ E(t + t_0) \approx E(t_0) + \sum_{k=1}^{N} \frac{t^{2k-1}}{(2k-1)!} \left[ \frac{d^{2k-1}E(t)}{dt^{2k-1}} \right]_{t_0}, \tag{12.314} \]

the mathematical expectation estimate bias would be minimal. Then

\[ b[E^{*}(t_0,T)] = 0. \tag{12.315} \]

The variance of the mathematical expectation estimate of the investigated stochastic process is defined in the following form:

\[ \mathrm{Var}[E^{*}(t_0,T)] = \frac{1}{T^2} \int_{-0.5T}^{0.5T}\int_{-0.5T}^{0.5T} R(t_0 + t_1,\, t_0 + t_2)\,dt_1\,dt_2, \tag{12.316} \]

where R(t1, t2) is the correlation function of the investigated stochastic process ξ(t).

In practice, nonstationary stochastic processes with a time-varying mathematical expectation or variance, or both simultaneously, are widely encountered. In doing so, the mathematical expectation and variance vary slowly in comparison with the variations of the investigated stochastic process. In other words, the mathematical expectation and variance of the stochastic process are constant within the limits of the correlation interval. In this case, to define the variance of the time-varying mathematical expectation estimate, we can assume that the centralized stochastic process ξ0(t) = ξ(t) − E(t) is a stationary stochastic process within the limits of the interval t0 ± 0.5T with the correlation function that can be presented in the following form:

\[ R(\tau) = \langle \xi_0(t)\xi_0(t + \tau) \rangle = \sigma^2(t_0)\,\rho(\tau), \tag{12.317} \]

where ρ(τ) is the normalized correlation function of the centralized stochastic process.

Taking into consideration the given approximation, the variance of the mathematical expectation estimate given by (12.316), after transformation of the double integral by introducing the new variables τ = t2 − t1, t = t2 and changing the order of integration, takes the following form:

\[ \mathrm{Var}[E^{*}(t_0,T)] \approx \sigma^2(t_0) \times \frac{2}{T} \int_0^T \left(1 - \frac{\tau}{T}\right)\rho(\tau)\,d\tau. \tag{12.318} \]

As we can see from (12.313) and (12.318), the dispersion of the mathematical expectation estimate is defined in the following form:

\[ D[E^{*}(t_0,T)] = b^2[E^{*}(t_0,T)] + \mathrm{Var}[E^{*}(t_0,T)]. \tag{12.319} \]

In principle, we can define the optimal integration time T, under which the dispersion at the instant t0 is minimal, by minimizing the dispersion of the estimate with respect to the parameter T. However, a solution to this problem in an acceptable analytical form can be presented only for a specific given function E(t). Let us evaluate how the mathematical expectation estimate varies when the mathematical expectation E(t) deviates from a linear function. At the same time, we assume that the estimated mathematical expectation E(t) possesses continuous first and second derivatives with respect to the time t. Then, according to the Taylor formula, we can write

\[ E(t) = E(t_0) + (t - t_0)E'(t_0) + 0.5(t - t_0)^2 E''[t_0 + \vartheta(t - t_0)], \tag{12.320} \]

where 0 < ϑ < 1. Substituting (12.320) into (12.313), we obtain

\[ b[E^{*}(t_0,T)] = \frac{1}{2T} \int_{-0.5T}^{0.5T} t^2 E''[t_0 + \vartheta t]\,dt. \tag{12.321} \]

Denoting by M the maximum absolute value of the second derivative of the mathematical expectation E(t) with respect to the time t, we obtain the upper bound of the modulus of the mathematical expectation estimate bias:

\[ \big| b[E^{*}(t_0,T)] \big| \le \frac{T^2 M}{24}. \tag{12.322} \]

As a rule, the maximum magnitude of the second derivative of the mathematical expectation E(t) with respect to the time t can be evaluated based on an analysis of specific physical problems.

To estimate the optimal integration time T minimizing the dispersion of the estimate given by (12.319), we assume that the correlation interval of the investigated stochastic process is much less than the integration time, that is, τcor ≪ T. Then the following approximation is true:

\[ \int_0^T \left(1 - \frac{\tau}{T}\right)\rho(\tau)\,d\tau \approx \int_0^{\infty} \rho(\tau)\,d\tau \le \int_0^{\infty} |\rho(\tau)|\,d\tau = \tau_{cor}. \tag{12.323} \]

Taking into consideration (12.322) and (12.323) and based on the condition of minimization of the estimate dispersion given by (12.319), we obtain the optimal integration time:

\[ T \approx 2\left[\frac{9\sigma^2(t_0)\,\tau_{cor}}{M^2}\right]^{1/5}. \tag{12.324} \]

As we can see from (12.324), the integration time is larger for a larger correlation interval and variance of the investigated stochastic process, and smaller for a larger maximum absolute value of the second derivative of the measured mathematical expectation. This statement agrees well with the physical interpretation of measuring the time-varying mathematical expectation.
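The optimum (12.324) can be cross-checked numerically by minimizing the dispersion built from the worst-case bias bound (12.322) and the variance approximation following from (12.318) and (12.323); the numerical values of σ²(t0), τcor, and M² below are assumed:

```python
import numpy as np

sigma2, tau_cor, M2 = 1.0, 0.01, 4.0   # assumed sigma^2(t0), tau_cor, M^2

# Dispersion (12.319) with the bound (12.322) and approximation (12.323):
# D(T) = (T^2 M / 24)^2 + 2 sigma^2 tau_cor / T
T = np.linspace(0.05, 5.0, 200000)
D = (T**2 * np.sqrt(M2) / 24.0)**2 + 2.0 * sigma2 * tau_cor / T

T_num = T[np.argmin(D)]                            # grid minimizer
T_opt = 2.0 * (9.0 * sigma2 * tau_cor / M2)**0.2   # formula (12.324)
print(T_num, T_opt)   # the two agree
```

Setting dD/dT = 0 gives T^5 = 288 σ² τcor / M², which is exactly 2^5 · 9 σ² τcor / M², that is, the formula (12.324).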

In some applications, the time-varying mathematical expectation can be approximated by the series given by (12.256). The values minimizing the function

\[ \varepsilon^2(\alpha_1,\alpha_2,\ldots,\alpha_N) = \frac{1}{T} \int_0^T \left[ x(t) - \sum_{i=1}^{N} \alpha_i \varphi_i(t) \right]^2 dt \tag{12.325} \]

can be considered as the estimates αi* of the coefficients αi. This representation of the coefficients αi* is possible only if the mathematical expectation E(t) and the functions φi(t) vary in time slowly in comparison with the variation of the first derivative of the function x0(t) with respect to time; that is, the following condition must be satisfied:

\[ \left. \begin{aligned} &\left|\frac{E'(t)}{E(t)}\right|_{\max} \\ &\left|\frac{\varphi_i'(t)}{\varphi_i(t)}\right|_{\max} \end{aligned} \right\} \ll \sqrt{\frac{\langle[x_0'(t)]^2\rangle}{\sigma^2(t)}}. \tag{12.326} \]

In other words, the condition (12.326) for defining the coefficients αi* is true if the frequency band ΔfE of the mathematical expectation E(t) is much less than the effective bandwidth of the energy spectrum of the stochastic component x0(t). Based on the condition of minimization of the function ε², that is,

\[ \frac{\partial \varepsilon^2}{\partial \alpha_m} = 0, \tag{12.327} \]

we obtain the system of equations to estimate the coefficients αm*

\[ \sum_{i=1}^{N} \alpha_i^{*} \int_0^T \varphi_i(t)\varphi_m(t)\,dt = \int_0^T x(t)\varphi_m(t)\,dt, \quad m = 1,2,\ldots,N. \tag{12.328} \]

Denote

\[ \int_0^T \varphi_i(t)\varphi_m(t)\,dt = c_{im}; \tag{12.329} \]

\[ \int_0^T x(t)\varphi_m(t)\,dt = y_m. \tag{12.330} \]

Then the estimates of the coefficients αm* can be presented in the following form:

\[ \alpha_m^{*} = \frac{A_m}{A}, \quad m = 1,2,\ldots,N, \tag{12.331} \]

where the determinant A of the system of linear equations given by (12.328) and the determinant Am, obtained by replacing the mth column cim of the determinant A with the column yi, are determined based on (12.270) and (12.271).

The flowchart of the measurer of the time-varying mathematical expectation estimate is similar to the block diagram shown in Figure 12.14, but the difference is that the set of functions φi(t) is assigned for the sake of convenience in generating them, and the coefficients cij and the values yi are formed according to (12.329) and (12.330), respectively. Definition of the coefficients αm* is simplified essentially if the functions φi(t) are orthonormal; that is, formula (12.297) is true. In this case,

\[ \alpha_m^{*} = \int_0^T x(t)\varphi_m(t)\,dt. \tag{12.332} \]
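With an orthonormal basis the whole procedure reduces to projections. A minimal sketch using shifted Legendre polynomials as an assumed orthonormal basis on [0, T], with assumed true coefficients and noise level:

```python
import numpy as np

rng = np.random.default_rng(6)
T, M = 1.0, 4000
t = np.linspace(0.0, T, M, endpoint=False)
dt = T / M

# Orthonormal shifted Legendre polynomials on [0, T], so that (12.297) holds
u = t / T
phi = np.stack([np.ones(M) / np.sqrt(T),
                np.sqrt(3.0 / T) * (2.0 * u - 1.0),
                np.sqrt(5.0 / T) * (6.0 * u**2 - 6.0 * u + 1.0)])

alpha_true = np.array([0.8, -0.4, 0.3])    # assumed expansion coefficients
E_t = alpha_true @ phi
x = E_t + rng.normal(0.0, 0.3, M)          # single noisy realization

# With an orthonormal basis the estimates are projections, eq. (12.332)
alpha_est = (x * dt) @ phi.T
E_est = alpha_est @ phi

print(alpha_est)   # close to alpha_true
```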

The flowchart of the measurer of the time-varying mathematical expectation estimate differs from the block diagram shown in Figure 12.15 in that the set of functions φi(t) is assigned for the sake of convenience in generating them; thus, there is no need to solve the integral equation (12.295).

Let us compute the estimate bias and the mutual correlation functions between the estimates of the coefficients αm* and αq*. Based on the investigation carried out in Section 12.7, we can conclude that the estimates of the coefficients αm* of the series expansion (12.256) are unbiased, and the correlation functions and variances of the estimates of the coefficients αm* are defined in the following form:

\[ R(\alpha_m^{*},\alpha_q^{*}) = \frac{1}{A^2} \sum_{i,j=1}^{N} A_{im} A_{jq} C_{ij}, \tag{12.333} \]

\[ \mathrm{Var}(\alpha_m^{*}) = \frac{1}{A^2} \sum_{i,j=1}^{N} A_{im} A_{jm} C_{ij}, \tag{12.334} \]

where Aim is the algebraic supplement of the determinant given by (12.270),

\[ C_{ij} = \int_0^T\!\!\int_0^T R(t_1,t_2)\varphi_i(t_1)\varphi_j(t_2)\,dt_1\,dt_2, \tag{12.335} \]

and R(t1, t2) is the correlation function of the investigated stochastic process.

With the φi(t) used as orthonormal functions, the coefficients cim given by (12.329) take the following form:

cim=δim.(12.336)

In doing so, the matrix ‖cij‖ is transformed to the diagonal matrix and the determinant A of this matrix and algebraic supplements Aij are defined as follows:

A = 1, \qquad A_{ij} = \delta_{ij}. \qquad (12.337)

Based on (12.336) and (12.333), the correlation function of estimation of the coefficients αm* and αq* can be presented in the following form:

R(\alpha_m^*, \alpha_q^*) = C_{mq}, \qquad (12.338)

where Cij is given by (12.335) at i = m, j = q. In doing so, the current variance of estimate given by (12.260) and the averaged variance of estimate (12.262) of the time-varying mathematical expectation take the following form:

\mathrm{Var}\{E^*(t)\} = \sum_{i,j=1}^{N} C_{ij}\,\varphi_i(t)\,\varphi_j(t); \qquad (12.339)

\mathrm{Var}\{E^*\} = \frac{1}{T}\sum_{i=1}^{N} C_{ii}, \qquad (12.340)

respectively.

If, in addition to the use of the orthonormal functions φi(t), it is possible to approximate the centralized stochastic process ξ0(t) by the “white” noise with the effective spectral density given by (12.283), then we can write

C_{ij} = \frac{N_{\mathrm{ef}}}{2}\,\delta_{ij}. \qquad (12.341)

Based on (12.341), we are able to define the current and averaged variances of the time-varying mathematical expectation estimates coinciding with the optimal estimations given by (12.336) and (12.338), which are applied to the observation of the Gaussian stochastic process with the time-varying mathematical expectation.

12.9 Estimate of Mathematical Expectation by Iterative Methods

Currently, iterative methods, or procedures of step-by-step approximation, are widely used to estimate the parameters of stochastic processes. These procedures are also called recurrent procedures, or methods of stochastic approximation. The essence of the iterative method applied to estimation of a scalar parameter l from a discrete sample of size N is to form the recurrent relationship [18]:

l^*[N] = l^*[N-1] + \gamma[N]\left\{f(x[N]) - l^*[N-1]\right\}, \qquad (12.342)

where

  • l*[N − 1] and l*[N] are the estimates of stochastic process parameter based on the observation of N − 1 and N samples, respectively

  • f(x[N]) is the function of the received sample related to the transformation required to obtain the sought stochastic process parameter

  • γ[N] is the factor defining the value of the next step refining the estimate of the parameter l, depending on the step number N and satisfying the following conditions:

    \gamma[N] > 0, \qquad \sum_{N=1}^{\infty}\gamma[N] = \infty, \qquad \sum_{N=1}^{\infty}\gamma^{2}[N] < \infty. \qquad (12.343)

FIGURE 12.16 Iterative measurer of mathematical expectation.

The relationship (12.342) allows us to construct the flowchart of the iterative measurer of the stochastic process parameter.

The discrete algorithm given by (12.342) can be transformed to the continuous algorithm using the limiting process for the difference equation

l[N] - l[N-1] = \Delta l[N] = \gamma[N]\left\{f(x[N]) - l[N-1]\right\} \qquad (12.344)

to the differential equation

\frac{dl(t)}{dt} = \gamma(t)\left\{f[x(t)] - l(t)\right\}. \qquad (12.345)

The flowchart of the measurer corresponding to (12.345) is similar to the block diagram shown in Figure 12.16, in which γ[N] must be replaced by γ(t) and the summator by the integrator.

As applied to the mathematical expectation estimate of a stationary stochastic process, the recurrent algorithm of measurement takes the following form:

E^*[N] = E^*[N-1] + \gamma[N]\left\{x[N] - E^*[N-1]\right\}. \qquad (12.346)

The optimal magnitude of the factor γ[N] to estimate the mathematical expectation of stochastic process by uncorrelated samples can be obtained from (12.182). This optimal value must ensure the minimal variance of the mathematical expectation estimate over the class of linear estimations given by (12.183). Actually, (12.182) can be presented in the following form:

E^*[N] = \frac{1}{N}\sum_{i=1}^{N} x_i = E^*[N-1] + \frac{1}{N}\left\{x[N] - E^*[N-1]\right\}. \qquad (12.347)

Comparing (12.346) and (12.347), we obtain the optimal magnitude of the factor γ[N]:

\gamma_{\mathrm{opt}}[N] = \frac{1}{N}. \qquad (12.348)

The flowchart of the iterative measurer of the mathematical expectation is similar to the block diagram shown in Figure 12.16, in which there is a need to exclude the block f(x[N]) responsible for transformation of the stochastic process.
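
A minimal sketch of the recursion (12.346) with the optimal step (12.348): with γ[N] = 1/N the recursion reproduces the running sample mean (12.347) exactly (the signal parameters below are illustrative):

```python
import numpy as np

# Recurrent estimate E*[N] = E*[N-1] + (1/N)(x[N] - E*[N-1]),
# cf. (12.346) with the optimal step gamma[N] = 1/N from (12.348).
def iterative_mean(samples):
    est = 0.0
    history = []
    for n, xn in enumerate(samples, start=1):
        est += (xn - est) / n            # one iterative refinement step
        history.append(est)
    return history

rng = np.random.default_rng(0)
xs = rng.normal(2.0, 1.0, 1000)          # stationary samples, E0 = 2
hist = iterative_mean(xs)
```

The final value coincides with the batch average (1/N)Σxi, so the estimate is unbiased, with the variance given by (12.183).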

Since the iterative algorithm (12.347) is equivalent to the algorithm (12.182), we can state that, in the considered case, the mathematical expectation estimate is unbiased and its variance is given by (12.183). In practice, we sometimes use constant values of the factor γ[N], that is,

\gamma[N] = \gamma = \mathrm{const}, \qquad 0 < \gamma < 1. \qquad (12.349)

In this case, the estimation given by (12.346) can be presented in the following form [19, 20]:

E^*[N] = (1-\gamma)^{N-1}x[1] + \gamma\sum_{i=2}^{N}(1-\gamma)^{N-i}x_i = (1-\gamma)^{N}\left\{\frac{x[1]}{1-\gamma} + \gamma\sum_{i=2}^{N}\frac{x_i}{(1-\gamma)^{i}}\right\}. \qquad (12.350)

As we can see from (12.350), the mathematical expectation estimate is unbiased. To define the variance of estimate, we assume that the samples xi are uncorrelated. Then, the variance of mathematical expectation estimate takes the following form:

\mathrm{Var}\{E^*\} = \frac{2(1-\gamma)^{2N-1} + \gamma}{2-\gamma}\,\sigma^2. \qquad (12.351)

As 1 − γ < 1 and N → ∞ or N ≫ 1, (12.351) can be simplified and takes the limiting form:

\mathrm{Var}\{E^*\} \approx \frac{\gamma}{2-\gamma}\,\sigma^2, \qquad (12.352)

that is, the considered estimate is not consistent.
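
The closed form (12.351) can be checked directly against the weights implied by (12.350): for uncorrelated samples, Var{E*} = σ²Σwi² with w1 = (1 − γ)^(N−1) and wi = γ(1 − γ)^(N−i). A small sketch:

```python
import numpy as np

# Variance of the constant-gamma estimate (12.350) for uncorrelated
# samples: direct sum of squared weights versus the closed form (12.351).
def var_weights(gamma, N, sigma2=1.0):
    w = np.array([(1 - gamma) ** (N - 1)] +
                 [gamma * (1 - gamma) ** (N - i) for i in range(2, N + 1)])
    return sigma2 * float(np.sum(w ** 2))

def var_closed(gamma, N, sigma2=1.0):
    return sigma2 * (2 * (1 - gamma) ** (2 * N - 1) + gamma) / (2 - gamma)
```

As N grows, both expressions tend to the limit γσ²/(2 − γ) of (12.352), which does not vanish; this is the non-consistency noted above.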

The ratio between the variance of the mathematical expectation estimate defined by (12.351) and the variance of the optimal mathematical expectation estimate given by (12.183) as a function of the number of uncorrelated samples N and various magnitudes of the factor γ is shown in Figure 12.17. As we can see from Figure 12.17, for each value of N there is a definite magnitude of γ at which the ratio of variances reaches the minimum.

FIGURE 12.17 Ratio between the variance of the mathematical expectation estimate given by (12.351) and the variance of the optimal mathematical expectation estimate given by (12.183) as a function of the number of uncorrelated samples N and various values of the factor γ.

12.10 Estimate of Mathematical Expectation with Unknown Period

In some applications, we can suppose that the time-varying mathematical expectation E(t) of the stochastic process ξ(t) is a periodic function

E(t) = E(t + kT_0), \qquad k = 0, 1, \ldots, \qquad (12.353)

and the value of the period T0 is unknown. At the same time, the practical case, when the period T0 is much longer than the correlation interval τcor of the observed stochastic process, is of interest to us. We employ the adaptive filtering methods widely used in practice for interference cancellation to measure the time-varying mathematical expectation estimate [21]. We consider the discrete sample x(ti) = xi = x(iTs), where Ts is the sampling period. The sample can be presented in the form discussed in (12.168).

The observed samples xi come in at the inputs of the main and reference (the adaptive filter) channels (see Figure 12.18). The delay τ = kTs, where k is an integer, is chosen in such a way that the samples in the main and reference channels are uncorrelated. The reference channel contains a filter with varying parameters. The incoming samples xi and the process yi formed at the adaptive filter output are sent to the subtractor input. At the subtractor output, the following process

\varepsilon_i = x_i - y_i = x_{0i} + (E_i - y_i) \qquad (12.354)

takes place. Taking into consideration that the samples are uncorrelated, the mean square value of the process at the subtractor output is defined as

\langle \varepsilon^2 \rangle = \sigma_0^2 + (E_i - y_i)^2. \qquad (12.355)

The minimum of 〈ε2〉 corresponds to the minimum of the second term on the right side of (12.355). Thus, if the parameters of the adaptive filter are tuned until the minimum of 〈ε2〉 is reached, the signal yi at the adaptive filter output can be used as the estimate Ei* of the time-varying mathematical expectation. As was shown in Ref. [21], for the given structure of the interference and noise canceller, the value yi is the best estimate in the case of the quadratic loss function given by (11.25).

FIGURE 12.18 Adaptive filter flowcharts.

The adaptive filter with the required impulse response is realized in the form of a weighted linear summation of signals with the weight coefficients Wj, where j = 0, 1,…, P − 1 are the numbers of the parallel channels, and the delay between neighboring channels is equal to the sampling period Ts. Tuning of the weight coefficients Wj* is carried out in accordance with the recurrent Widrow-Hoff algorithm [22]

W_j^*[N+1] = W_j^*[N] + 2\mu\left\{x[N]\,x[N-d-j] - x[N-d-j]\sum_{l=0}^{P-1}W_l^*[N]\,x[N-d-l]\right\}, \qquad (12.356)

where

  • j, l = 0, 1,…, P − 1

  • μ is the adaptation parameter characterizing the algorithmic convergence rate and the tuning accuracy of the weight coefficients

As was shown in Ref. [22], if the parameter μ satisfies the condition 0 < μ < 1/λmax, where λmax is the largest eigenvalue [3] of the covariance matrix consisting of the elements Cij = 〈xixj〉, then the algorithm (12.356) converges. In the physical sense, the eigenvalues of the covariance matrix characterize the power of the input stochastic process: other conditions being equal, the larger λ, the higher the power of the input stochastic process. It was proved that

\lim_{N\to\infty} W_j^*[N] = W_j, \qquad (12.357)

where Wj are the elements of the optimal vector of the weight coefficients satisfying, in the stationary mode, the Wiener-Hopf equation

\sum_{j=0}^{P-1} C(l-j)\,W_j = C(l+d), \qquad l = 0, 1, \ldots, P-1. \qquad (12.358)

The block diagram of the computation algorithm for the weight coefficients of the adaptive filter is shown in Figure 12.19. To stop the adaptation process, we can use the procedures discussed in Ref. [22]. The most widely used procedure for the considered problem is based on the following inequality:

\varepsilon[N] = \left|\frac{W_j^*[N] - W_j^*[N-1]}{W_j^*[N]}\right| \le \nu, \qquad (12.359)

where ν is a preassigned threshold.
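
To make the structure concrete, here is a minimal simulation sketch of the delayed-reference canceller with the weight update (12.356). The signal model (a single harmonic with T0 = 8Ts), the values P = 16, d = 1, μ = 0.002, and the use of the filter output y as the estimate E* are illustrative assumptions:

```python
import numpy as np

# Adaptive canceller sketch: the reference channel is the input delayed
# by d samples, the filter has P taps, and the filter output y_i serves
# as the estimate E_i* of the periodic mathematical expectation.
rng = np.random.default_rng(1)
P, d, mu = 16, 1, 0.002
n_samples = 20000
i = np.arange(n_samples)
E = np.cos(2 * np.pi * i / 8)                # periodic mean, T0 = 8 Ts
x = E + rng.normal(0.0, 1.0, n_samples)      # uncorrelated Gaussian noise

W = np.zeros(P)                              # weight coefficients W_j
y = np.zeros(n_samples)
for n in range(P + d, n_samples):
    ref = x[n - d - np.arange(P)]            # delayed taps x[n - d - j]
    y[n] = W @ ref                           # filter output (estimate E_n*)
    W += 2 * mu * (x[n] - y[n]) * ref        # weight update, cf. (12.356)
```

In the stationary mode the output y tracks the periodic mean up to the bias factor discussed below in (12.382).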

The estimated mathematical expectation Ei is a periodic function and can be approximated by the Fourier series with a finite number of terms

E_i \approx a_0 + \sum_{\mu=1}^{M}\left[a_\mu\cos(i\omega\mu) + b_\mu\sin(i\omega\mu)\right], \qquad (12.360)

where

\omega = \frac{2\pi}{T_0} \qquad (12.361)

FIGURE 12.19 Block diagram of algorithm for determination of the weight coefficients of adaptive filters.

is the radial frequency of the first harmonic. In addition, we assume that the sampling interval Ts is chosen in such a way that the sample readings xi are uncorrelated with each other. Taking into consideration the orthogonality between the components of the mathematical expectation and the definition of the covariance function (the ambiguity function) of the deterministic signal with a constant component as

C_E(k) = \lim_{K\to\infty}\frac{1}{2K}\sum_{\kappa=-K}^{K} E(\kappa)\,E(\kappa + k), \qquad (12.362)

the covariance matrix elements given by (12.358) can be written in the following form:

C(k) = \sigma^2\delta(k) + \sum_{\mu=0}^{M} A_\mu^2\cos(k\omega\mu), \qquad (12.363)

where δ(k) is the discrete analog of the Dirac delta function given by (12.170);

A_0^2 = a_0^2; \qquad (12.364)

A_\mu^2 = \frac{1}{2}\left(a_\mu^2 + b_\mu^2\right), \qquad \mu = 1, 2, \ldots, M. \qquad (12.365)
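
The decomposition (12.362) through (12.365) can be verified numerically for a noise-free periodic mean: averaging E(κ)E(κ + k) over whole periods reproduces ΣAμ²cos(kωμ). The coefficient values below are illustrative:

```python
import numpy as np

# Check of (12.362)-(12.365): the time-averaged product of a periodic
# deterministic mean equals sum_mu A_mu^2 cos(k*omega*mu) (here a0 = 0).
T0 = 32                                   # period in samples
omega = 2 * np.pi / T0
a = {1: 0.8, 2: 0.3}                      # cosine amplitudes a_mu
b = {1: -0.2, 2: 0.5}                     # sine amplitudes b_mu

def E(idx):
    return sum(a[m] * np.cos(omega * m * idx) +
               b[m] * np.sin(omega * m * idx) for m in a)

kappa = np.arange(10 * T0)                # average over whole periods
k = 5
C_avg = float(np.mean(E(kappa) * E(kappa + k)))
C_model = sum(0.5 * (a[m] ** 2 + b[m] ** 2) * np.cos(k * omega * m)
              for m in a)
```

The cosine–sine cross terms average out over whole periods, which is the orthogonality used in the derivation.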

Substituting (12.363) into (12.358), we obtain

\sum_{j=0}^{P-1}\left\{\sigma^2\delta(l-j) + \sum_{\mu=1}^{M}A_\mu^2\cos[\omega\mu(l-j)]\right\}W_j = \sum_{\mu=0}^{M}A_\mu^2\cos[\omega\mu(l+d)]. \qquad (12.366)

Denoting

\varphi_\mu = \sum_{j=0}^{P-1}W_j\cos(j\omega\mu); \qquad \psi_\mu = -\sum_{j=0}^{P-1}W_j\sin(j\omega\mu), \qquad (12.367)

we obtain a solution for the weight coefficient Wj in the case of stationary mode in the following form:

W_j = \frac{1}{\sigma^2}\sum_{\mu=0}^{M}A_\mu^2\left\{[\cos(d\omega\mu) - \varphi_\mu]\cos(j\omega\mu) - [\sin(d\omega\mu) - \psi_\mu]\sin(j\omega\mu)\right\}. \qquad (12.368)

Substituting (12.368) into (12.367), we obtain the system of equations with respect to the unknown variables φμ and ψμ in the following form:

\varphi_\chi = \frac{1}{\sigma^2}\sum_{\mu=0}^{M}A_\mu^2\left\{[\cos(d\omega\mu) - \varphi_\mu]\alpha_{\mu\chi} - [\sin(d\omega\mu) - \psi_\mu]\gamma_{\mu\chi}\right\}; \qquad (12.369)

\psi_\chi = -\frac{1}{\sigma^2}\sum_{\mu=0}^{M}A_\mu^2\left\{[\cos(d\omega\mu) - \varphi_\mu]\beta_{\mu\chi} - [\sin(d\omega\mu) - \psi_\mu]\vartheta_{\mu\chi}\right\}; \qquad (12.370)

\chi = 0, 1, \ldots, M, \qquad (12.371)

where

\alpha_{\mu\chi} = \sum_{j=0}^{P-1}\cos(j\omega\mu)\cos(j\omega\chi); \qquad (12.372)

\beta_{\mu\chi} = \sum_{j=0}^{P-1}\cos(j\omega\mu)\sin(j\omega\chi); \qquad (12.373)

\gamma_{\mu\chi} = \sum_{j=0}^{P-1}\sin(j\omega\mu)\cos(j\omega\chi); \qquad (12.374)

\vartheta_{\mu\chi} = \sum_{j=0}^{P-1}\sin(j\omega\mu)\sin(j\omega\chi). \qquad (12.375)

For subsequent simplification, we restrict ourselves to the case in which the number of channels P is sufficiently large, so that the sums in (12.372) through (12.375) can be replaced by integrals. Moreover, we assume that the constant component of the estimated mathematical expectation is equal to zero, that is, a0 = 0. As a result of the limiting process, we have

\alpha_{\mu\chi} = \vartheta_{\mu\chi} = 0.5P\,\delta(\mu - \chi); \qquad \gamma_{\mu\chi} = \beta_{\mu\chi} = 0. \qquad (12.376)

Using the approximation given by (12.376), the solutions of the equation system given by (12.369) and (12.370) take the following form:

\varphi_\mu = \frac{\cos(d\omega\mu)}{1 + \dfrac{2\sigma^2}{PA_\mu^2}}; \qquad \psi_\mu = \frac{\sin(d\omega\mu)}{1 + \dfrac{2\sigma^2}{PA_\mu^2}}. \qquad (12.377)

In the case of stationary mode and high number of channels, the weight coefficients given by (12.368) can be presented in the following form:

W_j = \sum_{\mu=1}^{M}\frac{2A_\mu^2}{2\sigma^2 + A_\mu^2 P}\cos[\omega\mu(d+j)]. \qquad (12.378)
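
As a cross-check, the approximate steady-state weights (12.378) can be compared with a direct numerical solution of the Wiener-Hopf system (12.358) built from the covariance model (12.363). A sketch for a single harmonic (M = 1); the values of P, d, σ², A², and ω are illustrative:

```python
import numpy as np

# Approximate steady-state weights (12.378) versus a direct solution of
# the Wiener-Hopf system (12.358) with the covariance model (12.363).
P, d = 64, 1
sigma2, A2 = 1.0, 0.5
omega = 2 * np.pi / 16                 # first-harmonic frequency, T0 = 16 Ts

def C(k):
    # covariance model (12.363) with a0 = 0 and one harmonic
    return sigma2 * (k == 0) + A2 * np.cos(k * omega)

j = np.arange(P)
R = C(j[:, None] - j[None, :])         # matrix of elements C(l - j)
rhs = C(j + d)                         # right side C(l + d)
W_exact = np.linalg.solve(R, rhs)

W_approx = 2 * A2 / (2 * sigma2 + A2 * P) * np.cos(omega * (d + j))
```

When P spans a whole number of periods of the harmonic, the two solutions agree; for other P the discrepancy is of the order 1/P.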

If the following condition is satisfied

P \gg \frac{2\sigma^2}{A_\mu^2}, \qquad (12.379)

we obtain

W_j = \frac{2}{P}\sum_{\mu=1}^{M}\cos[\omega\mu(d+j)] = \frac{2}{P}\times\frac{\sin\left[\dfrac{\pi(M+1)(d+j)}{2M}\right]\cos\left[\dfrac{\pi(d+j)}{2}\right]}{\sin\left[\dfrac{\pi(d+j)}{2M}\right]}. \qquad (12.380)

As we can see from (12.380), the adaptive filter transfer characteristic in the stationary mode is a sequence of maxima with magnitudes 2M/P and period 2M.
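
The closed form in (12.380) rests on the Dirichlet-type cosine-sum identity; a quick numeric check of the identity itself (stated here with the μ = 0 term included, which adds the constant 1 to the sum over μ = 1,…, M):

```python
import numpy as np

# Cosine-sum identity behind the closed form in (12.380):
#   sum_{mu=0}^{M} cos(mu*theta)
#       = sin((M+1)*theta/2) * cos(M*theta/2) / sin(theta/2).
def cos_sum_direct(M, theta):
    return float(np.sum(np.cos(np.arange(M + 1) * theta)))

def cos_sum_closed(M, theta):
    return float(np.sin((M + 1) * theta / 2) * np.cos(M * theta / 2)
                 / np.sin(theta / 2))
```

Here theta plays the role of ω(d + j); the identity holds for any theta that is not a multiple of 2π.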

Consider the statistical characteristics of the discrete values of the mathematical expectation estimate Ei* at the adaptive filter output in the stationary mode. The mathematical expectation of the estimate Ei* is defined as

\langle E_i^* \rangle = \sum_{j=0}^{P-1} W_j E_{i-j-d}. \qquad (12.381)

Substituting Ei and Wj by their values from (12.360) and (12.378) and neglecting the sum of the fast oscillating terms, we obtain

\langle E_i^* \rangle = \sum_{\mu=1}^{M}\frac{0.5Pq_\mu}{1 + 0.5Pq_\mu}\left[a_\mu\cos(i\omega\mu) + b_\mu\sin(i\omega\mu)\right], \qquad (12.382)

where

q_\mu = \frac{a_\mu^2 + b_\mu^2}{2\sigma^2} \qquad (12.383)

is the SNR for the μth component (or harmonic) of the mathematical expectation. As we can see from (12.382), as the number of channels P tends to infinity, that is, P → ∞, the mathematical expectation estimate becomes unbiased.

By analogy, determining the second central moment, which is defined completely by the centralized component of the stochastic process, it is easy to obtain the variance of the estimate

\mathrm{Var}\{E_i^*\} = \sigma^2\sum_{j=0}^{P-1}W_j^2. \qquad (12.384)

We can see from (12.384) that the variance of the investigated mathematical expectation estimate Ei* decreases with a decrease in the number of harmonics M in the observed stochastic process. In the limiting case of a large number of channels (P → ∞), the variance of the mathematical expectation estimate tends to zero in the stationary mode; that is, the considered estimate of the periodically varying mathematical expectation Ei* is consistent.

To illustrate the obtained results, a simulation of the described adaptation algorithm was carried out for the example of the mathematical expectation E(t) = a cos(ωt), given in the form of readings Ei, with the period T0 corresponding to four sampling periods, that is, T0 = 4Ts. As the centralized component x0i of the stochastic process realization, uncorrelated samples of a Gaussian stochastic process are used. To obtain uncorrelated samples in the main and reference channels, a delay corresponding to one sampling period (d = 1) is introduced. Initial magnitudes of the weight coefficients, except for the channel with j = 0, are chosen equal to zero. The initial magnitude of the weight coefficient at j = 0 is taken equal to unity, that is, W0[0] = 1.

The memory of the microprocessor system continuously updates the discrete sample xi = x0i + Ei, and in accordance with the algorithm given by (12.356) tuning of the weight coefficients is carried out. The tuning is considered complete when the components of the weight vector differ by no more than 10% on two neighboring steps of adaptation. Then the realization formed at the output of the adaptive interference and noise canceller is investigated.

The normalized variance of the estimate of the harmonic component amplitude, that is, Var{a*}/a2, at the SNR equal to unity, that is, q = 1, as a function of the number of adaptation cycles N for two values of the number of parallel channels (P = 4; P = 16) is depicted in Figure 12.20. The dashed lines correspond to the theoretical asymptotic values of the variance of the mathematical expectation estimate computed according to (12.378) and (12.384). As we can see from Figure 12.20, the adaptation process is evident and its beginning depends on the number of parallel channels of the adaptive filter.

FIGURE 12.20 The normalized variance of amplitude estimation as a function of the adaptation cycles number.

12.11 Summary and Discussion

Let us summarize briefly the main results discussed in this chapter.

In spite of the fact that the formulas for the mathematical expectation estimate (12.41) and (12.42) are optimal in the case of the Gaussian stochastic process, these formulas are also optimal, in the class of linear estimations, for stochastic processes that differ from the Gaussian pdf. Equations 12.38 and 12.39 are true if the a priori interval of changes of the mathematical expectation is not limited. Equation 12.38 allows us to define the optimal device structure to estimate the mathematical expectation of the stochastic process (Figure 12.1). The main operation is the linear integration of the received realization x(t) with the weight υ(t), which is defined from the integral equation (12.8). The decision device issues the output process at the instant t = T. To obtain the current value of the mathematical expectation estimate, the limits of integration in (12.38) must be t − T and t, respectively; then the parameter estimation is given by (12.43).

The maximum likelihood estimate of the mathematical expectation of the stochastic process is both a conditionally and an unconditionally unbiased estimate. The conditional variance of the mathematical expectation estimate can be presented by (12.47), from which we can see that the variance of the estimate is unconditional. Since, according to (12.38), the integration of the Gaussian stochastic process is a linear operation, the estimate E* is subjected to the Gaussian distribution.

The procedure to define the optimal estimate of mathematical expectation of stationary stochastic process is presented in Figure 12.1 in the case of the limited a priori domain of definition of the mathematical expectation. In this case, the maximum likelihood estimate of the mathematical expectation of stochastic process is conditionally biased. The unconditional estimate is unbiased and the unconditional dispersion is given by (12.76).

The Bayesian estimate of the mathematical expectation of stochastic process is a function of the SNR. At low SNR, the conditional estimate bias coincides with the approximation given by (12.82). We can see that the unconditional estimate of mathematical expectation averaged with respect to all possible values E0 is unbiased. At high SNR, the Bayesian estimate of the mathematical expectation of stochastic process coincides with the maximum likelihood estimate of the same parameter.

Optimal methods of estimating the mathematical expectation of a stochastic process require accurate and complete knowledge of the other statistical characteristics of the considered stochastic process. For this reason, as a rule, various nonoptimal procedures are used in practice. In doing so, the weight function is selected in such a way that the variance of the estimate tends asymptotically to the variance of the optimal estimate. If the integration time of the ideal integrator is sufficiently large in comparison with the correlation interval of the stochastic process, then to determine the variance of the mathematical expectation estimate of the stochastic process we need to know only the variance and the ratio between the observation interval and the correlation interval. The variance of the mathematical expectation estimate of the stochastic process is proportional to the value of the spectral density of the fluctuation component of the considered stochastic process at ω = 0 when the ideal integrator is used as a smoothing circuit. In other words, in the considered case, the variance of the mathematical expectation estimate of the stochastic process is defined by the spectral components at zero frequency. To obtain the current value of the mathematical expectation estimate and to investigate the realization of the stochastic process within the limits of a large observation interval, we use the estimate given by (12.124). Evidently, this estimate has the same statistical characteristics as the estimate defined by (12.111). The previously discussed procedures for measuring the mathematical expectation suppose that there are no limitations on the instantaneous values of the considered stochastic process in the course of measurement. The presence of limitations leads to additional errors while measuring the mathematical expectation of the stochastic process.

The variance of the mathematical expectation estimate is defined by the interval of possible values of the additional stochastic sequence, is independent of the variance of the observed stochastic process, and is always greater than the variance of the equidistributed estimate of the mathematical expectation obtained by independent samples. For example, if the observed stochastic sequence is subjected to the uniform pdf coinciding in the limiting case with (12.218), then the variance of the mathematical expectation estimate for the considered procedure is defined by (12.226) and the variance of the equidistributed estimate of the mathematical expectation is given by (12.227); that is, in the considered limiting case, when the observed and additional random sequences are both subjected to the uniform pdf, the variance of the mathematical expectation estimate obtained with the use of additional stochastic signals is three times greater than the variance of the equidistributed estimate. Under other conditions, the difference between the variances of the mathematical expectation estimates is even higher.

In defining the effect of stochastic process quantization by amplitude on the estimate of its mathematical expectation, we assume that quantization can be considered as an inertialess nonlinear transformation with a constant quantization step, and that the number of quantization levels is so high that the quantized stochastic process cannot fall outside the limits of the staircase characteristic of the transform g(x), the approximate form of which is shown in Figure 12.12. The pdf p(x) of the observed stochastic process possessing the mathematical expectation that does not match the middle between the quantization thresholds xi and xi+1 is presented in Figure 12.12 for clarity. The mathematical expectation of the realization y(t) formed at the output of the inertialess element (transducer) with the transform characteristic given by (12.228), when the realization x(t) of the stochastic process ξ(t) excites the input of the inertialess element, is equal to the mathematical expectation of the mathematical expectation estimate given by (12.229) and is defined by (12.230). In general, the mathematical expectation of the estimate 〈E〉 differs from the true value E0; that is, as a result of quantization we obtain the bias of the mathematical expectation estimate given by (12.231). Since the mathematical expectation and the correlation function of the process formed at the transducer output depend on the observed stochastic process pdf, the characteristics of the mathematical expectation estimate of the stochastic process quantized by amplitude depend both on the correlation function and on the pdf of the observed stochastic process.

In the case of the time-varying mathematical expectation of a stochastic process, the problem of defining the mathematical expectation E(t) of the stochastic process ξ(t) by a single realization x(t) within the limits of the interval [0, T] is reduced to an estimation of the coefficients αi of the series given by (12.256). In doing so, the bias and dispersion of the mathematical expectation estimate E*(t) of the observed stochastic process caused by measurement errors of the coefficients αi are given by (12.259) and (12.260). Statistical characteristics (the bias and dispersion) of the mathematical expectation estimate of the stochastic process averaged within the limits of the observation interval are defined by (12.261) and (12.262), respectively. The higher the number of terms of the expansion in series (12.256) used for approximation of the mathematical expectation, the higher, under the same conditions, the variance of the time-varying mathematical expectation estimate averaged within the limits of the observation interval. In doing so, there is a need to note that, in general, the number N of series expansion terms increases essentially with an increase in the observation interval [0, T], within the limits of which the approximation is carried out.

Thus, at a sufficiently large number of the eigenfunctions in the sum given by (12.256), the averaged variance of the time-varying mathematical expectation estimate E*(t) approaches the variance of the initial stochastic process. In doing so, the estimate bias caused by the finite number of terms in the series given by (12.256) tends to zero. However, in practice, there is a need to choose the number of terms in the series given by (12.256) in such a way that the dispersion of the time-varying mathematical expectation estimate caused both by the bias and by the estimate variance would be minimal.

In defining the estimate of the time-varying mathematical expectation of a stochastic process by a single realization, we meet difficulties caused by the definition of the optimal time of averaging (integration), or of the time constant of the smoothing filter for a filter impulse response given beforehand. In doing so, two conflicting requirements arise. On the one hand, to decrease the variance of the estimate caused by the finite time interval of measuring, this time interval must be large. On the other hand, to better distinguish the variations of the mathematical expectation in time, the integration time should be chosen as short as possible. Evidently, there is an optimal averaging time, or bandwidth of the smoothing filter under the given impulse response, which corresponds to the minimal dispersion of the mathematical expectation estimate of the stochastic process caused by the factors listed previously.

In practice, nonstationary stochastic processes with time-varying mathematical expectation or variance, or both of them simultaneously, are widely encountered. In doing so, the mathematical expectation and variance vary slowly in comparison with the variations of the investigated stochastic process. In other words, the mathematical expectation and variance of the stochastic process are constant within the limits of the correlation interval. In this case, to define the variance of the time-varying mathematical expectation estimate, we can assume that the centralized stochastic process ξ0(t) = ξ(t) − E(t) is a stationary stochastic process within the limits of the interval t0 ± 0.5T with the correlation function given by (12.317).

In some applications, we can suppose that the time-varying mathematical expectation of a stochastic process is a periodic function whose period is unknown. At the same time, the practical case, when the period is much longer than the correlation interval of the observed stochastic process, is of interest to us. We employ the adaptive filtering methods widely used in practice for interference cancellation to measure the time-varying mathematical expectation. We consider the discrete sample that can be presented in the form discussed in (12.168). The estimated mathematical expectation is a periodic function and can be approximated by the Fourier series with a finite number of terms. We can assume that the sampling interval is chosen in such a way that the sample readings remain uncorrelated with each other. Taking into consideration the orthogonality between the components of the mathematical expectation and the definition of the covariance function (the ambiguity function) of the deterministic signal with a constant component given by (12.362), the covariance matrix elements given by (12.358) can be presented in the form (12.363).

References

1. Lindsey, J.K. 2004. Statistical Analysis of Stochastic Processes in Time. Cambridge, U.K.: Cambridge University Press.

2. Ruggeri, F. 2011. Bayesian Analysis of Stochastic Process Models. New York: Wiley & Sons, Inc.

3. Van Trees, H. 2001. Detection, Modulation, and Estimation Theory. Part 1. New York: Wiley & Sons, Inc.

4. Taniguchi, M. 2000. Asymptotic Theory of Statistical Inference for Time Series. New York: Springer + Business Media, Inc.

5. Franceschetti, M. 2008. Random Networks for Communication: From Physics to Information Systems. Cambridge, U.K.: Cambridge University Press.

6. Le Cam, L. 1986. Asymptotic Methods in Statistical Decision Theory. New York: Springer + Business Media, Inc.

7. Anirban DasGupta. 2008. Asymptotic Theory of Statistics and Probability. New York: Springer + Business Media, Inc.

8. Berger, J. 1985. Statistical Decision Theory and Bayesian Analysis. New York: Springer + Business Media, Inc.

9. Le Cam, L. and G.L. Yang. 2000. Asymptotics in Statistics: Some Basic Concepts. New York: Springer + Business Media, Inc.

10. Liese, F. and K.J. Miescke. 2008. Statistical Decision Theory: Estimation, Testing, and Selection. New York: Springer + Business Media, Inc.

11. Schervish, M. 1996. Theory of Statistics. New York: Springer + Business Media, Inc.

12. Lehmann, E.L. 2005. Testing Statistical Hypotheses, 3rd Edn. New York: Springer + Business Media, Inc.

13. Jespers, P., Chu, P.T., and A. Fettweis. 1962. A new method to compute correlations. IRE Transactions on Information Theory, 8(8): 106–107.

14. Mirskiy, G.Ya. 1972. Hardware Definition of Stochastic Process Characteristics, 2nd Edn. Moscow, Russia: Energy.

15. Gusak, D., Kukush, A., Kulik, A., Mishura, Y., and A. Pilipenko. 2010. Theory of Stochastic Processes. New York: Springer + Business Media, Inc.

16. Gikhman, I., Skorokhod, A., and S. Kotz. 2004. The Theory of Stochastic Processes I. New York: Springer + Business Media, Inc.

17. Gikhman, I., Skorokhod, A., and S. Kotz. 2004. The Theory of Stochastic Processes II. New York: Springer + Business Media, Inc.

18. Tzypkin, Ya. 1968. Adaptation and Training in Automatic Systems. Moscow, Russia: Nauka.

19. Cox, D.R. and H.D. Miller. 1977. Theory of Stochastic Processes. Boca Raton, FL: CRC Press.

20. Brzezniak, Z. and T. Zastawniak. 2004. Basic Stochastic Processes. New York: Springer + Business Media, Inc.

21. Ganesan, S. 2009. Model Based Design of Adaptive Noise Cancellation. New York: VDM Verlag, Inc.

22. Zeidler, J.R., Satorius, E.H., Chabries, D.M., and H.T. Wexler. 1978. Adaptive enhancement of multiple sinusoids in uncorrelated noise. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(3): 240–254.
