11.6.2 UMP Detection with Both Composite Hypotheses
We now consider the more general case where both hypotheses are composite. The UMP optimization problem can be stated as:
$$\text{Maximize } P_D(\tilde\delta;\theta) \text{ for all } \theta\in\Lambda_1, \text{ subject to } \sup_{\theta\in\Lambda_0} P_F(\tilde\delta;\theta)\le\alpha. \qquad (11.21)$$
If a UMP test δ̃UMP exists, then it must satisfy the following conditions. First,
$$\sup_{\theta_0\in\Lambda_0} P_F(\tilde\delta_{\mathrm{UMP}};\theta_0)\le\alpha. \qquad (11.22)$$
Second, for any δ̃ ∈ Δ̃ that satisfies $\sup_{\theta_0\in\Lambda_0} P_F(\tilde\delta;\theta_0)\le\alpha$,
$$P_D(\tilde\delta;\theta_1)\le P_D(\tilde\delta_{\mathrm{UMP}};\theta_1) \quad\text{for all } \theta_1\in\Lambda_1. \qquad (11.23)$$
The following example illustrates a case where a UMP solution can be found. Also see Exercises 11.9.12 and 11.9.13.
Example 11.6.4. Testing Between Two One-Sided Composite Signals in Gaussian Noise. This is an extension of Example 11.6.1 in which the observation is Y = θ + Z, with Z ~ N(0, σ²), and the hypotheses are:
$$H_0: \theta\in\Lambda_0 = [0,1] \quad\text{versus}\quad H_1: \theta\in\Lambda_1 = (1,\infty).$$
For fixed θ0 ∈ Λ0 and θ1 ∈ Λ1, the likelihood ratio L_{θ0,θ1}(y) is monotonically increasing in y, and the corresponding N-P test can be written as:
$$\delta_{\mathrm{NP}}(y;\theta_0,\theta_1) = \begin{cases} 1 & \text{if } L_{\theta_0,\theta_1}(y) \ge \eta(\theta_0,\theta_1) \\ 0 & \text{if } L_{\theta_0,\theta_1}(y) < \eta(\theta_0,\theta_1) \end{cases} = \begin{cases} 1 & \text{if } y \ge \eta'(\theta_0,\theta_1) \\ 0 & \text{if } y < \eta'(\theta_0,\theta_1) \end{cases}$$
where η′(θ0, θ1) is given by
$$\eta'(\theta_0,\theta_1) = \frac{\sigma^2\log\eta(\theta_0,\theta_1)}{\theta_1-\theta_0} + \frac{\theta_0+\theta_1}{2}.$$
Now in order to set the threshold η′ to meet the constraint on PF given in (11.21), we first compute:
$$P_F(\delta_{\eta'};\theta_0) = P_{\theta_0}\{Y\ge\eta'\} = Q\!\left(\frac{\eta'-\theta_0}{\sigma}\right)$$
and note that this probability is an increasing function of θ0. Therefore
$$\sup_{\theta_0\in[0,1]} P_F(\delta_{\eta'};\theta_0) = Q\!\left(\frac{\eta'-1}{\sigma}\right)$$
and we can meet the PF constraint with equality by setting η′ such that:
$$Q\!\left(\frac{\eta'-1}{\sigma}\right) = \alpha \;\Rightarrow\; \eta'_\alpha = \sigma Q^{-1}(\alpha) + 1.$$
Note that η′α is independent of θ0 and θ1. Define the test
$$\delta_{\eta'_\alpha}(y) = \begin{cases} 1 & \text{if } y \ge \eta'_\alpha \\ 0 & \text{if } y < \eta'_\alpha. \end{cases}$$
We will now establish that δ_{η′α} is a UMP test of level α. First, by construction,
$$\sup_{\theta_0\in[0,1]} P_F(\delta_{\eta'_\alpha};\theta_0) = P_F(\delta_{\eta'_\alpha};1) = \alpha$$
and so (11.22) holds. Also, δ_{η′α} is precisely the α-level N-P test between θ0 = 1 and any θ1 ∈ (1, ∞). Since any δ̃ satisfying sup_{θ0∈[0,1]} PF(δ̃; θ0) ≤ α also satisfies PF(δ̃; 1) ≤ α, the Neyman-Pearson lemma implies
$$P_D(\tilde\delta;\theta_1) \le P_D(\delta_{\eta'_\alpha};\theta_1) \quad\text{for all } \theta_1\in(1,\infty).$$
Therefore (11.23) holds and we have:
$$\delta_{\mathrm{UMP}}(y) = \delta_{\eta'_\alpha}(y) = \begin{cases} 1 & \text{if } y \ge \sigma Q^{-1}(\alpha) + 1 \\ 0 & \text{if } y < \sigma Q^{-1}(\alpha) + 1. \end{cases}$$
Again, while the test δUMP is independent of θ1, the performance of the test in terms of PD depends on θ1. In particular,
$$P_D(\delta_{\mathrm{UMP}};\theta_1) = P_{\theta_1}\{Y \ge \sigma Q^{-1}(\alpha) + 1\} = Q\!\left(Q^{-1}(\alpha) - \frac{\theta_1-1}{\sigma}\right).$$
□
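To make the example concrete, here is a minimal numerical sketch in Python (the function names and the Monte Carlo check are ours; SciPy's norm.isf and norm.sf play the roles of Q⁻¹ and Q):

```python
import numpy as np
from scipy.stats import norm

def ump_threshold(alpha, sigma):
    """eta'_alpha = sigma * Q^{-1}(alpha) + 1 from Example 11.6.4."""
    return sigma * norm.isf(alpha) + 1.0

def prob_detection(theta1, alpha, sigma):
    """P_D(delta_UMP; theta1) = Q(Q^{-1}(alpha) - (theta1 - 1)/sigma)."""
    return norm.sf(norm.isf(alpha) - (theta1 - 1.0) / sigma)

sigma, alpha = 1.0, 0.05
eta = ump_threshold(alpha, sigma)

# Monte Carlo check of the worst-case false-alarm rate (theta0 = 1)
rng = np.random.default_rng(0)
y = 1.0 + sigma * rng.standard_normal(200_000)
print(eta, np.mean(y >= eta), prob_detection(2.0, alpha, sigma))
```

The empirical false-alarm rate at θ0 = 1 should be close to α, while PF at any θ0 < 1 is strictly smaller, consistent with (11.22).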
11.6.3 Generalized Likelihood Ratio (GLR) Detection
While it is always desirable to have a UMP solution to the composite hypothesis testing problem, such solutions rarely exist in practice, especially in situations where both hypotheses are composite. One approach to generating a good test when UMP solutions do not exist is through the use of a “GLR” defined by
$$T_{\mathrm{GLR}}(y) = \frac{\sup_{\theta_1\in\Lambda_1} p_{\theta_1}(y)}{\sup_{\theta_0\in\Lambda_0} p_{\theta_0}(y)}.$$
It is important to note that the maximization over θ0 and θ1 has to be performed for each realization of the observation y, and so this test statistic is considerably more complex than the LRT. Also, the maximizations need not produce a valid PDF (or PMF) in the numerator or the denominator. We can use the statistic TGLR(y) to produce a test, which is called the “generalized likelihood ratio test (GLRT)”:
$$\tilde\delta_{\mathrm{GLRT}}(y) = \begin{cases} 1 & \text{if } T_{\mathrm{GLR}}(y) > \eta \\ 1 \text{ w.p. } \gamma & \text{if } T_{\mathrm{GLR}}(y) = \eta \\ 0 & \text{if } T_{\mathrm{GLR}}(y) < \eta. \end{cases}$$
The use of the GLRT can be justified via an asymptotic analysis with a sequence of independent and identically distributed (i.i.d.) observations under each hypothesis, where it can be shown to have certain optimality properties. The maximizations in the numerator and denominator of TGLR(y) can also be justified from the viewpoint of maximum likelihood parameter estimation [2].
Example 11.6.5. Detection of One-Sided Composite Signal in Cauchy Noise (continued). This problem was introduced in Example 11.6.3. The conditional PDF is given by
$$p_\theta(y) = \frac{1}{\pi[1+(y-\theta)^2]}$$
and we are testing H0 : θ = 0 against the one-sided composite hypothesis H1 : θ > 0. As we saw in Example 11.6.3, there is no UMP solution to this problem. The GLR statistic is given by
$$T_{\mathrm{GLR}}(y) = \frac{\sup_{\theta>0} p_\theta(y)}{p_0(y)}$$
with
$$\sup_{\theta>0} p_\theta(y) = \sup_{\theta>0} \frac{1}{\pi[1+(y-\theta)^2]} = \begin{cases} \dfrac{1}{\pi} & \text{if } y \ge 0 \\[1ex] \dfrac{1}{\pi(1+y^2)} & \text{if } y < 0. \end{cases}$$
Thus
$$T_{\mathrm{GLR}}(y) = \begin{cases} 1+y^2 & \text{if } y \ge 0 \\ 1 & \text{if } y < 0. \end{cases}$$
To find an α-level test we need to evaluate P0{TGLR(Y) ≥ η}. Clearly
$$P_0\{T_{\mathrm{GLR}}(Y)\ge\eta\} = 1 \quad\text{for } 0\le\eta<1.$$
For η ≥ 1
$$P_0\{T_{\mathrm{GLR}}(Y)\ge\eta\} = \int_{\sqrt{\eta-1}}^{\infty} \frac{1}{\pi}\,\frac{1}{1+y^2}\,dy = 0.5 - \frac{\tan^{-1}(\sqrt{\eta-1})}{\pi}.$$
There is a point of discontinuity in P0{TGLR(Y) ≥ η} at η = 1: the value equals 1 for η ≤ 1 but drops to 0.5 as η approaches 1 from the right. For α ∈ (0.5, 1], we would need to randomize to meet the PF constraint with equality. For α ∈ (0, 0.5], which would be more relevant in practice, the GLRT is a deterministic test:
$$\delta_{\mathrm{GLRT}}(y) = \begin{cases} 1 & \text{if } T_{\mathrm{GLR}}(y) \ge \eta_\alpha \\ 0 & \text{if } T_{\mathrm{GLR}}(y) < \eta_\alpha \end{cases}$$
where
$$\eta_\alpha = \left[\tan\!\left(\pi(0.5-\alpha)\right)\right]^2 + 1.$$
□
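A short Python sketch of this GLRT (the parameter values and helper names are our illustrative choices) evaluates TGLR, sets ηα from the closed form above, and checks PF by Monte Carlo under θ = 0:

```python
import numpy as np

def t_glr(y):
    """GLR statistic from Example 11.6.5: 1 + y^2 for y >= 0, else 1."""
    return np.where(y >= 0.0, 1.0 + y**2, 1.0)

def glrt_threshold(alpha):
    """eta_alpha = tan(pi*(0.5 - alpha))^2 + 1, valid for alpha in (0, 0.5]."""
    return np.tan(np.pi * (0.5 - alpha))**2 + 1.0

alpha = 0.1
eta = glrt_threshold(alpha)

# Monte Carlo check of P_F under H0 (standard Cauchy observations)
rng = np.random.default_rng(1)
y0 = rng.standard_cauchy(200_000)
print(eta, np.mean(t_glr(y0) >= eta))  # empirical P_F should be near alpha
```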
11.6.4 Locally Most Powerful (LMP) Detection
Another approach to finding good detectors in cases where UMP tests do not exist is via a local optimization approach, which works when only one of the hypotheses is composite. Consider the scenario where Y ~ Pθ, we are interested in testing H0 : θ = θ0 versus H1 : θ > θ0, and there is no UMP solution. Also, suppose that θ takes values close to θ0 under H1; this might occur in practice in the detection of weak signals with unknown amplitude in noise.
Fix θ > θ0 and let δ̃θ be an α-level N-P test between θ and θ0. Then assuming that PD(δ̃θ; θ) is differentiable with respect to θ, we can write the Taylor series approximation:
$$P_D(\tilde\delta_\theta;\theta) = P_D(\tilde\delta_{\theta_0};\theta_0) + (\theta-\theta_0)\left.\frac{\partial}{\partial\theta}P_D(\tilde\delta_\theta;\theta)\right|_{\theta=\theta_0} + o(\theta-\theta_0) \approx \alpha + (\theta-\theta_0)\left.\frac{\partial}{\partial\theta}P_D(\tilde\delta_\theta;\theta)\right|_{\theta=\theta_0}.$$
The locally optimal criterion can be described as:
$$\text{Maximize } \left.\frac{\partial}{\partial\theta}P_D(\tilde\delta;\theta)\right|_{\theta=\theta_0} \text{ subject to } P_F(\tilde\delta;\theta_0)\le\alpha, \qquad (11.24)$$
the idea being that maximizing PD should be approximately the same as maximizing the slope of PD at θ = θ0 for values of θ close to θ0. Now
$$P_D(\tilde\delta;\theta) = \int_\Gamma \mathbb{I}\{\tilde\delta(y)=1\}\, p_\theta(y)\,\mu(dy).$$
Assuming that pθ(y) is differentiable in θ,
$$\left.\frac{\partial}{\partial\theta}P_D(\tilde\delta;\theta)\right|_{\theta=\theta_0} = \int_\Gamma \mathbb{I}\{\tilde\delta(y)=1\}\left.\frac{\partial}{\partial\theta}p_\theta(y)\right|_{\theta=\theta_0}\mu(dy).$$
Therefore, the solution to the locally optimal detection problem of (11.24) can be seen as being equivalent to N-P testing between p_{θ0}(y) and the quantity
$$\left.\frac{\partial}{\partial\theta}p_\theta(y)\right|_{\theta=\theta_0}.$$
Even though the latter quantity is not necessarily a PDF (or PMF), the steps that we followed in deriving the N-P solution in Section 11.4 can be repeated to show that the solution to (11.24) has the form:
$$\tilde\delta_{\mathrm{LMP}}(y) = \begin{cases} 1 & \text{if } T_{\mathrm{lo}}(y) > \eta \\ 1 \text{ w.p. } \gamma & \text{if } T_{\mathrm{lo}}(y) = \eta \\ 0 & \text{if } T_{\mathrm{lo}}(y) < \eta \end{cases}$$
where
$$T_{\mathrm{lo}}(y) = \frac{\left.\frac{\partial}{\partial\theta}p_\theta(y)\right|_{\theta=\theta_0}}{p_{\theta_0}(y)}.$$
Example 11.6.6. Detection of One-Sided Composite Signal in Cauchy Noise (continued). This problem was introduced in Example 11.6.3, and we saw that there was no UMP solution. We studied the GLRT in Example 11.6.5, and now we examine the LMP solution.
$$p_\theta(y) = \frac{1}{\pi[1+(y-\theta)^2]} \;\Rightarrow\; \left.\frac{\partial}{\partial\theta}p_\theta(y)\right|_{\theta=0} = \frac{2y}{\pi(1+y^2)^2}.$$
Thus
$$T_{\mathrm{lo}}(y) = \frac{2y}{1+y^2}$$
and
$$\tilde\delta_{\mathrm{LMP}}(y) = \begin{cases} 1 & \text{if } T_{\mathrm{lo}}(y) \ge \eta \\ 0 & \text{if } T_{\mathrm{lo}}(y) < \eta. \end{cases}$$
Randomization is not needed since Tlo(y) does not have point masses under P0.
□
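Setting the LMP threshold for this example takes a little care because Tlo(y) = 2y/(1 + y²) is not monotone in y: for η ∈ (0, 1), the event {Tlo(Y) ≥ η} is the interval between the two roots of ηy² − 2y + η = 0. The sketch below (the helper names and the root-finding approach are ours) uses this to compute PF in closed form and solve for the α-level threshold:

```python
import numpy as np
from scipy.optimize import brentq

def t_lo(y):
    """LMP statistic from Example 11.6.6: T_lo(y) = 2y / (1 + y^2)."""
    return 2.0 * y / (1.0 + y**2)

def pf_lmp(eta):
    """P0{T_lo(Y) >= eta} for standard Cauchy Y and eta in (0, 1)."""
    lo = (1.0 - np.sqrt(1.0 - eta**2)) / eta   # smaller root of eta*y^2 - 2y + eta
    hi = (1.0 + np.sqrt(1.0 - eta**2)) / eta   # larger root
    return (np.arctan(hi) - np.arctan(lo)) / np.pi  # Cauchy CDF difference

alpha = 0.1  # achievable without randomization for alpha in (0, 0.5)
eta_alpha = brentq(lambda e: pf_lmp(e) - alpha, 1e-6, 1.0 - 1e-9)

# Monte Carlo sanity check under H0
rng = np.random.default_rng(2)
y0 = rng.standard_cauchy(200_000)
print(eta_alpha, np.mean(t_lo(y0) >= eta_alpha))
```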
11.7 Binary Detection with Vector Observations
In the detection problems we have studied so far, we did not make any explicit assumptions about the observation space, although the examples were restricted to scalar observations. The theory that we have developed applies equally to scalar and vector observations. Nevertheless, it is useful to study the case of vector observations in more detail as such a study reveals aspects of detector structures that are useful in applications.
Consider the detection problem:
$$H_0: Y \sim p_0(y) \quad\text{versus}\quad H_1: Y \sim p_1(y)$$
where Y = [Y1 Y2 ⋯ Yn]⊤ and y = [y1 y2 ⋯ yn]⊤. The optimum detector for this problem, no matter which criterion (Bayes, Neyman-Pearson, minimax) we choose, is of the form
$$\tilde\delta_{\mathrm{OPT}}(y) = \begin{cases} 1 & \text{if } \log L(y) > \eta \\ 1 \text{ w.p. } \gamma & \text{if } \log L(y) = \eta \\ 0 & \text{if } \log L(y) < \eta \end{cases} \qquad (11.25)$$
where L(y) = p1(y)/p0(y) is the likelihood ratio, and taking the log of L(y) does not affect the structure of the test since log is a monotonic function. The threshold η and randomization parameter γ are chosen based on the criterion used for detection. Of course, in the Bayesian setting, η = log τ, with τ given in (11.11), and γ = 0.
11.7.1 Conditionally Independent Observations
Consider the special case where the observations are (conditionally) independent under each hypothesis. In this case
$$p_j(y) = \prod_{k=1}^{n} p_{j,k}(y_k)$$
and the log likelihood ratio in (11.25) can be written as
$$\log L(y) = \sum_{k=1}^{n} \log L_k(y_k)$$
where Lk(yk) = p1,k(yk)/p0,k(yk).
Example 11.7.1. Deterministic signals in i.i.d. noise. Here, the hypotheses are given by:
$$H_0: Y = s_0 + Z \quad\text{versus}\quad H_1: Y = s_1 + Z$$
where s0 and s1 are deterministic vectors (signals) and Z1, Z2, …, Zn are i.i.d. random variables with zero mean and density given by pZ. Hence, the log likelihood ratio in (11.25) can be written as:
$$\log L(y) = \sum_{k=1}^{n} \log\frac{p_Z(y_k - s_{1,k})}{p_Z(y_k - s_{0,k})}.$$
A special case of this example is one where Z is a vector of i.i.d. N(0, σ²) random variables. In this case the log likelihood ratio reduces to an affine function of (s1 − s0)⊤y, and the optimum detector has the form:
$$\delta_{\mathrm{OPT}}(y) = \begin{cases} 1 & \text{if } (s_1-s_0)^\top y \ge \eta \\ 0 & \text{if } (s_1-s_0)^\top y < \eta. \end{cases}$$
□
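In the Gaussian special case, the optimum detector simply correlates the observation with the difference signal. A minimal sketch (the signals, noise level, and threshold are our illustrative choices; for antipodal equal-energy signals with equal priors and uniform costs, the Bayes threshold is η = 0):

```python
import numpy as np

def correlation_detector(y, s0, s1, eta):
    """Decide H1 (return 1) iff (s1 - s0)^T y >= eta."""
    return int(np.dot(s1 - s0, y) >= eta)

rng = np.random.default_rng(3)
n, sigma = 8, 1.0
s1 = np.ones(n)       # illustrative antipodal signal pair
s0 = -s1
eta = 0.0             # Bayes threshold for equal priors, uniform costs

y = s1 + sigma * rng.standard_normal(n)   # one observation drawn under H1
print(correlation_detector(y, s0, s1, eta))
```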
11.7.2 Deterministic Signals in Correlated Gaussian Noise
In general, the detection problem with vector observations that are conditionally dependent, given the hypothesis, does not admit any special structure beyond what is described in (11.25). However, in some special cases, we can simplify the expression for the log likelihood ratio to obtain some more insight into the detector structure. In this section, we consider the example of detecting deterministic signals in correlated Gaussian noise, for which the hypotheses are described by:
$$H_0: Y = s_0 + Z \quad\text{versus}\quad H_1: Y = s_1 + Z$$
with s0 and s1 being deterministic signals as in Example 11.7.1, and Z a Gaussian vector with zero mean and covariance matrix Σ, denoted by Z ~ N(0, Σ). The conditional PDFs of the observations are:
$$p_j(y) = \frac{1}{\sqrt{(2\pi)^n|\Sigma|}}\exp\left\{-\frac{1}{2}(y-s_j)^\top\Sigma^{-1}(y-s_j)\right\}$$
where |Σ| denotes the determinant of Σ. Therefore
$$\log L(y) = \log\frac{p_1(y)}{p_0(y)} = (s_1-s_0)^\top\Sigma^{-1}\left(y - \frac{s_1+s_0}{2}\right).$$
Since log L(y) does not have any point masses under either hypothesis, the optimum detector is deterministic and has the form:
$$\delta_{\mathrm{OPT}}(y) = \begin{cases} 1 & \text{if } T(y) \ge \eta \\ 0 & \text{if } T(y) < \eta \end{cases}$$
where T(y) = (s1 − s0)⊤Σ⁻¹y and η is chosen based on the detection criterion. In the special case of Bayesian detection,
$$\eta = \log\tau + \frac{1}{2}(s_1-s_0)^\top\Sigma^{-1}(s_1+s_0)$$
with τ given in (11.11).
If we define the pseudosignal s̃ by
$$\tilde s \triangleq \Sigma^{-1}(s_1 - s_0)$$
then the test statistic T(y) can be written as:
$$T(y) = \tilde s^{\top} y = \sum_{k=1}^{n} \tilde s_k y_k.$$
We see that the optimum detector is a correlation detector or matched filter [2].
Note that T(y) is linear in Y and hence has a Gaussian PDF under both H0 and H1. In particular,
$$E_j[T(Y)] = \tilde s^{\top} s_j \triangleq \tilde\mu_j$$
and
$$\mathrm{Var}_j[T(Y)] = \mathrm{Var}(\tilde s^{\top} Z) = \tilde s^{\top}\Sigma\,\tilde s = \tilde\mu_1 - \tilde\mu_0 \triangleq d^2$$
where d is the Mahalanobis distance between the signals s1 and s0.
Based on the above characterization of T(y), we can conclude that the problem of deterministic signal detection in correlated Gaussian noise is equivalent to the following detection problem involving the scalar observation T(y):
$$H_0: T(Y) \sim N(\tilde\mu_0, d^2) \quad\text{versus}\quad H_1: T(Y) \sim N(\tilde\mu_1, d^2).$$
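This scalar equivalence makes N-P performance computations immediate: setting PF = Q((η − μ̃0)/d) = α gives η = μ̃0 + dQ⁻¹(α) and PD = Q(Q⁻¹(α) − d). A small Python sketch (the signals and covariance matrix are our illustrative choices):

```python
import numpy as np
from scipy.stats import norm

def matched_filter_stats(s0, s1, Sigma):
    """Pseudosignal, means, and Mahalanobis distance for Section 11.7.2."""
    s_tilde = np.linalg.solve(Sigma, s1 - s0)  # Sigma^{-1} (s1 - s0)
    mu0, mu1 = s_tilde @ s0, s_tilde @ s1      # means of T(Y) under H0, H1
    return s_tilde, mu0, mu1, np.sqrt(mu1 - mu0)

s0 = np.zeros(3)
s1 = np.ones(3)
Sigma = np.array([[1.0, 0.5, 0.25],
                  [0.5, 1.0, 0.5],
                  [0.25, 0.5, 1.0]])

s_tilde, mu0, mu1, d = matched_filter_stats(s0, s1, Sigma)
alpha = 0.05
eta = mu0 + d * norm.isf(alpha)     # alpha-level N-P threshold on T(y)
PD = norm.sf(norm.isf(alpha) - d)   # Q(Q^{-1}(alpha) - d)
print(eta, PD)
```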
11.7.3 Gaussian Signals in Gaussian Noise
In this section we consider another important example involving dependent observations, that of detecting Gaussian signals in Gaussian noise. The hypotheses are described by:
$$H_0: Y = S_0 + Z \quad\text{versus}\quad H_1: Y = S_1 + Z$$
where S0, S1, and Z are jointly Gaussian random vectors. It is easy to see that this problem is equivalent to the following detection problem:
$$H_0: Y \sim N(\mu_0, \Sigma_0) \quad\text{versus}\quad H_1: Y \sim N(\mu_1, \Sigma_1) \qquad (11.26)$$
for some vectors μ0, μ1, and covariance matrices Σ0 and Σ1. Note that
$$p_j(y) = \frac{1}{\sqrt{(2\pi)^n|\Sigma_j|}}\exp\left\{-\frac{1}{2}(y-\mu_j)^\top\Sigma_j^{-1}(y-\mu_j)\right\}$$
and therefore the log likelihood ratio is given by:
$$\log L(y) = \frac{1}{2}y^\top\left(\Sigma_0^{-1} - \Sigma_1^{-1}\right)y + \left(\mu_1^\top\Sigma_1^{-1} - \mu_0^\top\Sigma_0^{-1}\right)y + \frac{1}{2}\left[\log\frac{|\Sigma_0|}{|\Sigma_1|} + \mu_0^\top\Sigma_0^{-1}\mu_0 - \mu_1^\top\Sigma_1^{-1}\mu_1\right].$$
Thus, the optimum detector in general involves both a quadratic term as well as a linear term in y. If Σ0 = Σ1 and μ0 ≠ μ1, then the quadratic term vanishes and we have the detector structure we saw earlier for the detection of deterministic signals in Gaussian noise. If μ0 = μ1 = 0 and Σ1 ≠ Σ0, then the linear term vanishes and we have a purely quadratic detector.
Example 11.7.2. Signaling over Rayleigh Fading Channel with Random Phase. The following detection problem arises in the context of wireless communication systems, when the carrier phase is not known at the receiver:
$$H_0: Y = Z \quad\text{versus}\quad H_1: Y = \begin{bmatrix} A\cos\Phi \\ A\sin\Phi \end{bmatrix} + Z \qquad (11.27)$$
where Z ~ N(0, σ²I), the phase Φ is uniformly distributed on [0, 2π), and the amplitude A, independent of Φ and Z, has the Rayleigh PDF:
$$p_A(a) = \frac{a}{v^2}\exp\left[-\frac{a^2}{2v^2}\right]\mathbb{I}\{a\ge 0\}.$$
If we define the fading signal vector S to have components S1 = A cos Φ and S2 = A sin Φ, then it is not difficult to show that S1 and S2 are independent N(0, v²) random variables, independent of Z. Thus the detection problem of (11.27) is equivalent to:
$$H_0: Y \sim N(0, \sigma^2 I) \quad\text{versus}\quad H_1: Y \sim N(0, (\sigma^2+v^2)I).$$
This is a special case of (11.26) with μ0 = μ1 = 0, Σ0 = σ²I, and Σ1 = (σ² + v²)I. Thus the log likelihood ratio has the form:
$$\log L(y) = (\text{constant})\,y^\top y + (\text{constant})$$
from which we can conclude that the optimum detector is of the form:
$$\delta_{\mathrm{OPT}}(y) = \begin{cases} 1 & \text{if } y^\top y \ge \eta \\ 0 & \text{if } y^\top y < \eta. \end{cases}$$
The test statistic Y⊤Y = Y1² + Y2² is exponentially distributed under each hypothesis, with mean 2σ² under H0 and 2(σ² + v²) under H1. Therefore the false-alarm constraint can be met with equality by choosing ηα such that:
$$\exp\left[-\frac{\eta_\alpha}{2\sigma^2}\right] = \alpha \;\Rightarrow\; \eta_\alpha = -2\sigma^2\log\alpha.$$
The corresponding power of the test is given by:
$$P_D(\delta_{\mathrm{OPT}}) = P_1\{Y^\top Y \ge \eta_\alpha\} = \exp\left[-\frac{\eta_\alpha}{2(\sigma^2+v^2)}\right] = \alpha^{\frac{\sigma^2}{\sigma^2+v^2}}.$$
□
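A quick simulation of this energy detector (the values of σ², v², and α are our illustrative choices) confirms the closed-form threshold and power expressions:

```python
import numpy as np

sigma2, v2, alpha = 1.0, 4.0, 0.05
eta = -2.0 * sigma2 * np.log(alpha)            # eta_alpha = -2 sigma^2 log(alpha)
PD_theory = alpha ** (sigma2 / (sigma2 + v2))  # alpha^(sigma^2 / (sigma^2 + v^2))

rng = np.random.default_rng(4)
N = 200_000
y0 = rng.normal(0.0, np.sqrt(sigma2), size=(N, 2))       # observations under H0
y1 = rng.normal(0.0, np.sqrt(sigma2 + v2), size=(N, 2))  # observations under H1
PF_sim = np.mean(np.sum(y0**2, axis=1) >= eta)
PD_sim = np.mean(np.sum(y1**2, axis=1) >= eta)
print(PF_sim, PD_sim, PD_theory)  # PF_sim ~ alpha, PD_sim ~ PD_theory
```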
11.8 Summary and Further Reading
This chapter covered the fundamentals of detection theory, with an emphasis on binary detection problems. In Section 11.1, we provided a general statistical decision theory framework for detection problems. In Sections 11.2 through 11.4, we introduced the three basic formulations for the binary detection problem: Bayesian, minimax, and Neyman-Pearson. We saw that in all cases the optimum detection rule is an LRT with possible randomization. In Sections 11.5 and 11.6, we studied composite detection problems where the distributions of the observations are not completely specified. In particular, we saw that Bayesian composite detection can be reduced to an equivalent simple detection problem. The Neyman-Pearson version of the composite detection problem is more interesting, and we studied various approaches to this problem, including UMP detection, GLR detection, and LMP detection. Finally, we examined the detection problem with vector observations in more detail, and discussed optimum detector structures for the cases where the observations are conditionally independent and conditionally dependent, given the hypothesis.
This chapter was inspired by the textbook on detection and estimation theory by Poor [2]. While we focused almost exclusively on binary detection problems, the extension to M-ary detection is straightforward, at least in the Bayesian setting (see Exercise 11.9.6). More details on M-ary detection can be found in the books by Van Trees [3], Levy [4], and Kay [5]. An alternative formulation of the detection problem with incompletely specified distributions is the robust formulation of Huber [6]. Other extensions of detection theory include sequential detection [7] and quickest change detection [8], where observations are taken sequentially in time and decisions about the hypothesis need to be made online. Asymptotic performance analysis and design of detection procedures for a large number of observations, using tools from large deviations theory, has been an active area of research (see, e.g., [9]). Finally, distributed sensor networks have generated interesting new directions for research in detection theory [10].
Acknowledgments
The writing of this chapter was supported in part by the U.S. National Science Foundation, under grant CCF-0830169, through the University of Illinois at Urbana-Champaign. The author would also like to thank Taposh Banerjee for help with the figures.
11.9 Exercises
Exercise 11.9.1. Consider the binary statistical decision theory problem for which S = D = {0, 1} and the cost function is:
$$C(i,j) = \begin{cases} 0 & \text{if } i = j \\ 1 & \text{if } j = 0,\ i = 1 \\ 10 & \text{if } j = 1,\ i = 0. \end{cases}$$
The observation Y takes values in the set Γ = {a, b, c} and the conditional p.m.f.’s of Y are:
$$p_0(a) = p_0(b) = 0.5, \qquad p_1(a) = p_1(b) = 0.25,\ p_1(c) = 0.5.$$
1. Is there a best decision rule based on conditional risks?
2. Find Bayes (for equal priors) and minimax rules within the set of deterministic decision rules.
3. Now consider the set of randomized decision rules. Find a Bayes rule (for equal priors). Also construct a randomized rule whose maximum risk is smaller than that of the minimax rule of part (b).
Exercise 11.9.2. For the binary hypothesis testing problem, with C0,0 < C1,0 and C1,1 < C0,1, show there is no “best” rule based on conditional risks, except in the trivial case where p0(y) and p1(y) have disjoint supports.
Exercise 11.9.3. Let S = {0, 1} and D = {0, 1, e}, where the decision e corresponds to an erasure. The conditional PDFs of the observation are:
$$p_j(y) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left[-\frac{(y-(-1)^{j+1})^2}{2\sigma^2}\right], \quad j = 0,1,\ -\infty<y<\infty.$$
That is, Y has distribution N((−1)^{j+1}, σ²) under state j. The cost structure is:
$$C_{i,j} = \begin{cases} 0 & \text{if } i = 0, j = 0 \text{ or } i = 1, j = 1 \\ 1 & \text{if } i = 1, j = 0 \text{ or } i = 0, j = 1 \\ c & \text{if } i = e. \end{cases}$$
Furthermore, assume that the two states are equally likely.
1. First assume that c < 0.5. Show that the Bayes rule for this problem has the form:
$$\delta_B(y) = \begin{cases} 0 & \text{if } y \le -t \\ e & \text{if } -t < y < t \\ 1 & \text{if } y \ge t. \end{cases}$$
Also give an expression for t in terms of the parameters of the problem.
2. Now find δB(y) when c ≥ 0.5.
Exercise 11.9.4. Consider the binary detection problem with
$$p_1(y) = \begin{cases} 1/4 & \text{if } y\in[0,4] \\ 0 & \text{otherwise} \end{cases}$$
and
$$p_0(y) = \begin{cases} (y+3)/18 & \text{if } y\in[-3,3] \\ 0 & \text{otherwise} \end{cases}$$
1. Find a Bayes rule for uniform costs and equal priors and the corresponding minimum Bayes risk.
2. Find a minimax rule for uniform costs, and the corresponding minimax risk.
Exercise 11.9.5. For Exercise 11.9.2 above, find the minimum Bayes risk function V(π0), and then find a minimax rule in the set of randomized decision rules using V(π0).
Exercise 11.9.6. In this chapter, we formulated and solved the general Bayesian binary detection problem. We may generalize this formulation to M-ary detection (M > 2) as follows:
• S = {0, 1, …, M − 1}.
• D = {0, 1, …, M − 1}.
• C(i, j) = Cij ≥ 0, for i, j = 0, …, M − 1.
• Y ~ pj(y) when the state is j, with prior probability πj.
• δ ∈ Δ, where δ partitions the observation set Γ into decision regions Γ0, Γ1, …, ΓM−1, with δ(y) = i for y ∈ Γi.
Find δB(y) by specifying the Bayes decision regions Γ0, Γ1, …, ΓM−1.
Exercise 11.9.7. Consider the 5-ary detection problem in which the hypotheses are given by
$$H_j: Y = (j-2) + Z, \quad j = 0,1,2,3,4,$$
where Z ~ N(0, σ²) and the five hypotheses are equally likely.
1. Find the decision rule with minimum probability of error (i.e., Bayes rule with uniform costs).
2. Also find the corresponding minimum Bayes risk.
Hint: Find the probability of correct decision making first.
Exercise 11.9.8. Consider the binary detection problem with
$$p_0(y) = \frac{1}{2}e^{-|y|} \quad\text{and}\quad p_1(y) = e^{-2|y|}, \quad y\in\mathbb{R}.$$
1. Find the Bayes rule for equal priors and a cost structure of the form C00 = C11 = 0, C10 = 1, and C01 = 2.
2. Find the Bayes risk for the Bayes rule of part (a). (Note that the costs are not uniform.)
3. Find a Neyman-Pearson rule for α = 1/4.
4. Find the probability of detection for the rule of part (c).
Exercise 11.9.9. Consider the detection problem for which Γ = {0, 1, 2, …} and the PMF’s of the observations under the two hypotheses are:
$$p_0(y) = (1-\beta_0)\beta_0^y, \quad y = 0,1,2,\ldots$$
and
$$p_1(y) = (1-\beta_1)\beta_1^y, \quad y = 0,1,2,\ldots$$
Assume that 0 < β0 < β1 < 1.
1. Find the Bayes rule for uniform costs and equal priors.
2. Find the Neyman-Pearson rule with false-alarm probability α ∈ (0, 1). Also find the corresponding probability of detection as a function of α.
Exercise 11.9.10. Consider a binary detection problem, where the goal is to minimize the following risk measure
$$\rho(\tilde\delta) = [P_F(\tilde\delta)]^2 + P_M(\tilde\delta).$$
1. Show that the optimal solution is a (possibly randomized) likelihood-ratio test.
2. Find the optimal solution for the observation model
$$p_0(y) = \begin{cases} 1 & \text{if } y\in[0,1] \\ 0 & \text{otherwise} \end{cases}$$
and
$$p_1(y) = \begin{cases} 2y & \text{if } y\in[0,1] \\ 0 & \text{otherwise} \end{cases}$$
Exercise 11.9.11. Consider the detection problem where L(y) has no point masses under either hypothesis. Let δη denote the likelihood ratio test:
$$\delta_\eta(y) = \begin{cases} 1 & \text{if } L(y) \ge \eta \\ 0 & \text{if } L(y) < \eta. \end{cases}$$
As discussed in Section 11.4.2, a plot of PD(δη) versus PF(δη) for various values of η is called the ROC. This plot is a concave function with the point (0, 0) corresponding to η = ∞, and the point (1, 1) corresponding to η = 0. Prove the following properties of ROC’s:
1. PD(δη) ≥ PF(δη) for all η. (Hint: consider cases η ≤ 1 and η > 1 separately.)
2. The slope of the ROC at a particular point is equal to the value of the threshold η required to achieve the PD and PF at that point, i.e.,
$$\frac{dP_D}{dP_F} = \eta.$$
(Hint: Use the fact that L(Y) has a density under each hypothesis.)
Exercise 11.9.12. Consider the following composite detection problem with Λ = ℝ:
$$H_0: \theta\le\tilde\theta \quad\text{versus}\quad H_1: \theta>\tilde\theta$$
where θ̃ is a fixed real number. Now suppose that for each fixed θ0 ≤ θ̃ and each fixed θ1 > θ̃, we have
$$\frac{p_{\theta_1}(y)}{p_{\theta_0}(y)} = g_{\theta_0,\theta_1}(T(y))$$
where the function T does not depend on θ1 or θ0, and the function g_{θ0,θ1} is strictly increasing.
Show that for any level α, a UMP test between H0 and H1 exists.
Exercise 11.9.13. Consider the composite binary detection problem in which
$$p_\theta(y) = \begin{cases} \theta e^{-\theta y} & \text{if } y\ge 0 \\ 0 & \text{if } y<0. \end{cases}$$
1. For α ∈ (0, 1), show that a UMP test of level α exists for testing the hypotheses
$$H_0: \Lambda_0 = [1,2] \quad\text{versus}\quad H_1: \Lambda_1 = (2,\infty).$$
Find this UMP test as a function of α.
2. Find the structure of the generalized likelihood ratio test.
Exercise 11.9.14. (UMP testing with Laplacian Observations) Consider the composite binary detection problem in which
$$p_\theta(y) = \frac{1}{2}e^{-|y-\theta|}, \quad y\in\mathbb{R},$$
and we are testing:
$$H_0: \theta = 0 \quad\text{versus}\quad H_1: \theta>0.$$
1. Does a UMP test exist? If so, find it for level α and derive its power PD. If not, find the generalized likelihood ratio test for level α.
2. Find a locally most powerful α-level test and derive its power PD.
Exercise 11.9.15. Consider the detection problem:
$$H_0: Y = \begin{bmatrix} -a \\ 0 \end{bmatrix} + Z \quad\text{versus}\quad H_1: Y = \begin{bmatrix} a \\ 0 \end{bmatrix} + Z$$
where Z ~ N(0, Σ), with
$$\Sigma = \begin{bmatrix} 1 & \rho \\ \rho & 1+\rho^2 \end{bmatrix}.$$
Assume that a > 0 and ρ ∈ (0, 1).
1. For equal priors show that the minimum-probability-of-error detector is given by
$$\delta_B(y) = \begin{cases} 1 & \text{if } y_1 - by_2 \ge \tau \\ 0 & \text{if } y_1 - by_2 < \tau \end{cases}$$
where b = ρ/(1 + ρ2) and τ = 0.
2. Determine the minimum probability of error.
3. Consider the test of part (a) in the limit as ρ → 0. Explain why the dependence on y2 goes away in this limit.
4. Now suppose the observation Y ~ N([a 0]⊤, Σ), where a is an unknown parameter, and we wish to test:
$$H_0: 0<a<1 \quad\text{versus}\quad H_1: a>1.$$
Show that a UMP test exists for this problem, and find the UMP test of level α ∈ (0, 1).
Exercise 11.9.16. Consider the detection problem with n-dimensional observations:
$$H_0: Y = Z \quad\text{versus}\quad H_1: Y = s + Z$$
where the components of Z are zero mean correlated random variables with
$$E[Z_kZ_\ell] = \sigma^2\rho^{|k-\ell|}, \quad\text{for all } 1\le k,\ell\le n,$$
where ∣ρ∣ < 1.
1. Show that the N-P test for this problem has the form:
$$\delta_\eta(y) = \begin{cases} 1 & \text{if } \sum_{k=1}^n b_kx_k \ge \eta \\ 0 & \text{if } \sum_{k=1}^n b_kx_k < \eta \end{cases}$$
where b1 = s1/σ, x1 = y1/σ, and
$$b_k = \frac{s_k - \rho s_{k-1}}{\sigma\sqrt{1-\rho^2}}, \quad x_k = \frac{y_k - \rho y_{k-1}}{\sigma\sqrt{1-\rho^2}}, \quad k = 2,\ldots,n.$$
Hint: Note that $\Sigma_Z^{-1} = A/(\sigma^2(1-\rho^2))$, where A is a tridiagonal matrix with main diagonal (1, 1+ρ², 1+ρ², …, 1+ρ², 1), and with superdiagonal and subdiagonal entries all equal to −ρ.
2. Find the α-level N-P test, δηα.
3. Find the ROC for the above detector, i.e., find PD(δηα) as a function of α.
Exercise 11.9.17. Consider the composite detection problem with two-dimensional observations:
$$H_0: Y = Z \quad\text{versus}\quad H_1: Y = \theta s + Z$$
where Z1 and Z2 are independent N(0, 1) random variables, and s1 = 1 and s2 = −1.
The parameter θ is a deterministic but unknown parameter that takes one of two possible values +1 or −1.
1. Is there a UMP test for this problem? If so, find it for level α. If not, explain why not.
2. Show that an α-level GLRT for this problem is given by:
$$\delta_{\mathrm{GLRT}}(y) = \begin{cases} 1 & \text{if } |y_1-y_2| \ge \eta_\alpha \\ 0 & \text{otherwise} \end{cases}$$
with $\eta_\alpha = \sqrt{2}\,Q^{-1}(\alpha/2)$.
3. Give a clear argument to establish that the probability of detection for the GLRT of part (b) is independent of θ.
4. Now find the probability of detection for the GLRT as a function of ηα.
References
[1] Ferguson, T.S., Mathematical Statistics: A Decision Theoretic Approach. Academic Press, 1967.
[2] Poor, H.V., An Introduction to Signal Detection and Estimation, second edition. Springer-Verlag, 1994.
[3] Van Trees, H.L., Detection, Estimation and Modulation Theory, Part 1. Wiley, 1968.
[4] Levy, B.C., Principles of Signal Detection and Parameter Estimation. Springer-Verlag, 2008.
[5] Kay, S.M., Fundamentals of Statistical Signal Processing: Detection Theory. Prentice Hall, 1998.
[6] Huber, P.J., Robust Statistics. Wiley, 1981.
[7] Wald, A., Sequential Analysis. Wiley, 1947.
[8] Poor, H.V. and Hadjiliadis, O., Quickest Detection. Cambridge University Press, 2009.
[9] Dembo, A. and Zeitouni, O., Large Deviations Techniques and Applications, Second Edition. Springer-Verlag, 1998.
[10] Varshney, P.K., Distributed Detection and Data Fusion. Springer-Verlag, 1997.
1As will be the convention in the rest of the chapter, we denote random variables by uppercase letters and their corresponding realizations by lowercase letters. In particular, a realization of Y is denoted by y.
2This condition typically holds for continuous observations when p0(y) and p1(y) are PDF’s with the same support, but not necessarily even in this case.