PART III: PROBLEMS

Section 8.1

8.1.1 Let inline = {B(N, θ);0 < θ < 1} be the family of binomial distributions.

(i) Show that the family of beta prior distributions inline.jpg = {β (p, q); 0 < p, q < ∞} is conjugate to inline.
(ii) What is the posterior distribution of θ given a sample of n i.i.d. random variables, having a distribution in inline?
(iii) What is the predictive distribution of Xn+1, given (X1, …, Xn)?
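A minimal numerical sketch of Problem 8.1.1 (not part of the original text): the beta–binomial conjugate update and the beta–binomial predictive p.m.f. The values of N, p, q and the simulated data below are purely illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, theta_true = 10, 0.3            # hypothetical binomial index and "true" parameter
p, q = 2.0, 5.0                    # hypothetical beta prior parameters
x = rng.binomial(N, theta_true, size=25)   # n = 25 simulated observations

# (i)-(ii): with a beta(p, q) prior, the posterior is beta(p + sum x, q + nN - sum x)
p_post = p + x.sum()
q_post = q + x.size * N - x.sum()
posterior = stats.beta(p_post, q_post)
print("posterior mean of theta:", posterior.mean())

# (iii): the predictive p.m.f. of X_{n+1} is beta-binomial(N, p_post, q_post)
predictive = stats.betabinom(N, p_post, q_post)
print("P{X_{n+1} = 3 | X_1, ..., X_n}:", predictive.pmf(3))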

8.1.2 Let X1, …, Xn be i.i.d. random variables having a Pareto distribution, with p.d.f.

Unnumbered Display Equation

0 < ν < ∞ (A is a specified positive constant).

(i) Show that the geometric mean, Gn = (X1 ⋯ Xn)1/n, is a minimal sufficient statistic.
(ii) Suppose that ν has a prior G(λ, p) distribution. What is the posterior distribution of ν given X?
(iii) What are the posterior expectation and posterior variance of ν given X?

8.1.3 Let X be a p–dimensional vector having a multinormal distribution N(μ, Σ). Suppose that Σ is known and that μ has a prior normal distribution N(μ0, V). What is the posterior distribution of μ given X?

8.1.4 Apply the results of Problem 3 to determine the posterior distribution of β in the normal multiple regression model of full rank, when σ2 is known. More specifically, let

Unnumbered Display Equation

(i) Find the posterior distribution of β given X.
(ii) What is the predictive distribution of X2 given X1, assuming that conditionally on β, X1 and X2 are i.i.d.?

8.1.5 Let X1, …, Xn be i.i.d. random variables having a Poisson distribution, P(λ), 0 < λ < ∞. Compute the posterior probability P{λ ≥ inline.jpgn | Xn} corresponding to the Jeffreys prior.
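A sketch of the computation in Problem 8.1.5 (not from the text), assuming the event in question is {λ ≥ X̄n}: under the Jeffreys prior h(λ) ∝ λ^(−1/2), the posterior of λ is a gamma distribution with shape Σ Xi + 1/2 and rate n. The sample below is hypothetical.

import numpy as np
from scipy import stats

x = np.array([3, 5, 2, 4, 6, 3, 1, 4])          # hypothetical Poisson sample
n, xbar = x.size, x.mean()
posterior = stats.gamma(a=x.sum() + 0.5, scale=1.0 / n)   # shape sum(x) + 1/2, rate n
print("P{lambda >= xbar | X_n} =", posterior.sf(xbar))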

8.1.6 Suppose that X1, …, Xn are i.i.d. random variables having a N(0, σ2) distribution, 0 < σ2 < ∞.

(i) Show that if inline.jpg, then the posterior distribution of 1/(2σ2), given S = Σ Xi2, is also a gamma distribution.
(ii) What are E{σ2 | S} and V{σ2 | S} under the above Bayesian assumptions?

8.1.7 Let X1, …, Xn be i.i.d. random variables having a N(μ, σ2) distribution; −∞ < μ < ∞, 0 < σ < ∞. A normal–inverted gamma prior distribution for (μ, σ2) assumes that the conditional prior distribution of μ, given σ2, is N(μ0, λσ2) and that the prior distribution of 1/(2σ2) is a inline.jpg. Derive the joint posterior distribution of (μ, σ2) given (X̄, S2), where X̄ and S2 are the sample mean and variance, respectively.

8.1.8 Consider again Problem 7, assuming that μ and σ2 are priorly independent, with μ ∼ N(μ0, D2) and inline.jpg. What are the posterior expectations and variances of μ and of σ2, given (X̄, S2)?

8.1.9 Let X ∼ B(n, θ), 0 < θ < 1. Suppose that θ has a prior beta distribution inline.jpg. Suppose that the loss function associated with estimating θ by θ̂ is L(θ̂, θ). Find the risk function and the posterior risk when L(θ̂, θ) is

(i) L(θ̂, θ) = (θ̂ − θ)2;
(ii) L(θ̂, θ) = (θ̂ − θ)2/θ(1 − θ);
(iii) L(θ̂, θ) = |θ̂ − θ|.

8.1.10 Consider a decision problem in which θ can assume the values in the set {θ0, θ1}. If i.i.d. random variables X1, …, Xn are observed, their joint p.d.f. is f(xn;θ) where θ is either θ0 or θ1. The prior probability of {θ = θ0} is η. A statistician takes actions a0 or a1. The loss function associated with this decision problem is given by

Unnumbered Display Equation

(i) What is the prior risk function, if the statistician takes action a0 with probability ξ?
(ii) What is the posterior risk function?
(iii) What is the optimal action with no observations?
(iv) What is the optimal action after n observations?

8.1.11 The time till failure, T, of a piece of electronic equipment has an exponential distribution, i.e., inline.jpg; 0 < τ < ∞. The mean time till failure, τ, has an inverted gamma prior distribution, 1/τ ∼ G(Λ, ν). Given n i.i.d. failure times T1, …, Tn, the action is an estimator τ̂ of τ. The loss function is inline.jpg. Find the posterior risk of τ̂. Which estimator minimizes the posterior risk?

Section 8.2

8.2.1 Let X be a random variable having a Poisson distribution, P(λ). Consider the problem of testing the two simple hypotheses H0: λ = λ0 against H1: λ = λ1; 0 < λ0 < λ1 < ∞.

(i) What is the form of the Bayes test φπ(X), for a prior probability π of H0 and costs c1 and c2 for errors of types I and II?
(ii) Show that R0(π) = c1(1 − P([ξ(π)]; λ0)) and R1(π) = c2P([ξ(π)]; λ1), where P(j; λ) is the c.d.f. of P(λ) and

Unnumbered Display Equation

where [x] is the largest integer not exceeding x.

(iii) Compute (R0(π), R1(π)) for the case of c1 = 1, c2 = 3, λ1/λ0 = 2, λ1 − λ0 = 2, and graph the lower boundary of the risk set R.

8.2.2 Let X1, …, Xn be i.i.d. random variables having an exponential distribution G(λ, 1), 0 < λ < ∞. Consider the two composite hypotheses H0: λ ≤ λ0 against H1: λ > λ0. The prior distribution of λ is inline.jpg. The loss functions associated with accepting Hi (i = 0, 1) are

Unnumbered Display Equation

and

Unnumbered Display Equation

0 < B < ∞.

(i) Determine the form of the Bayes test of H0 against H1.
(ii) What is the Bayes risk?

8.2.3 Let X1, …, Xn be i.i.d. random variables having a binomial distribution B(1, θ). Consider the two composite hypotheses H0: θ ≤ 1/2 against H1: θ > 1/2. The prior distribution of θ is β (p, q). Compute the Bayes Factor in favor of H1 for the cases of n = 20, T = 15 and

(i) p = q = 1/2 (Jeffreys prior);
(ii) p = 1, q = 3.
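A sketch of the computation in Problem 8.2.3 (not from the text), taking T = Σ Xi and computing the Bayes factor in favor of H1 as the ratio of posterior to prior odds of {θ > 1/2} under a β(p, q) prior.

from scipy import stats

def bayes_factor_h1(n, t, p, q):
    prior, post = stats.beta(p, q), stats.beta(p + t, q + n - t)
    prior_odds = prior.sf(0.5) / prior.cdf(0.5)
    posterior_odds = post.sf(0.5) / post.cdf(0.5)
    return posterior_odds / prior_odds

print(bayes_factor_h1(20, 15, 0.5, 0.5))   # (i) Jeffreys prior p = q = 1/2
print(bayes_factor_h1(20, 15, 1.0, 3.0))   # (ii) p = 1, q = 3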

8.2.4 Let X1, …, Xn be i.i.d. random variables having a N(μ, σ2) distribution. Let Y1, …, Ym be i.i.d. random variables having a N(η, ρσ2) distribution, −∞ < μ, η < ∞, 0 < σ2, ρ < ∞. The X–sample is independent of the Y–sample. Consider the problem of testing the hypothesis H0: ρ ≤ 1, (μ, η, σ2) arbitrary, against H1: ρ > 1, (μ, η, σ2) arbitrary. Determine the form of the Bayes test function for the formal prior p.d.f. h(μ, η, σ, ρ) inline.jpg and a loss function with c1 = c2 = 1.

8.2.5 Let X be a k–dimensional random vector having a multinomial distribution M(n;θ). We consider a Bayes test of inline.jpg against inline.jpg. Let θ have a prior symmetric Dirichlet distribution (8.2.27) with ν = 1 or 2 with equal hyper–prior probabilities.

(i) Compute the Bayes Factor in favor of H1 when k = 5, n = 50, X1 = 7, X2 = 12, X3 = 9, X4 = 15, and X5 = 7. [Hint: Approximate the values of Γ(ν k + n) by the Stirling approximation: inline.jpg, for large n.]
(ii) Would you reject H0 if c1 = c2 = 1?

8.2.6 Let X1, X2, … be a sequence of i.i.d. normal random variables, N(0, σ2). Consider the problem of testing H0: σ2 = 1 against H1: σ2 = 2 sequentially. Suppose that c1 = 1 and c2 = 5, and the cost of observation is c = 0.01.

(i) Determine the functions ρ(n)(π) for n = 1, 2, 3.
(ii) What would be the Chernoff approximation to the SPRT boundaries (A, B)?

8.2.7 Let X1, X2, … be a sequence of i.i.d. binomial random variables, B(1, θ), 0 < θ < 1. According to H0, θ = 0.3; according to H1, θ = 0.7. Let π, 0 < π < 1, be the prior probability of H0. Suppose that the cost of an erroneous decision is b = 10 [$] (either type of error) and the cost per observation is c = 0.1 [$]. Derive the Bayes risk functions ρi(π), i = 1, 2, 3, and the associated decision and stopping rules of the Bayes sequential procedure.
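A backward-induction sketch for Problem 8.2.7 (not from the text), reading ρi(π) as the Bayes risk when at most i further observations may be taken; sampling stops as soon as the immediate risk ρ0(π) is no larger than the continuation risk.

theta0, theta1, b, c = 0.3, 0.7, 10.0, 0.1

def rho0(pi):
    # stop now: accept H0 (risk b(1 - pi)) or accept H1 (risk b*pi)
    return b * min(pi, 1.0 - pi)

def posterior(pi, x):
    # posterior probability of H0 after observing X = x in {0, 1}
    f0 = theta0 ** x * (1 - theta0) ** (1 - x)
    f1 = theta1 ** x * (1 - theta1) ** (1 - x)
    return pi * f0 / (pi * f0 + (1 - pi) * f1)

def rho(pi, stages):
    # Bayes risk when at most `stages` further observations are allowed
    if stages == 0:
        return rho0(pi)
    cont = c
    for x in (0, 1):
        px = pi * theta0 ** x * (1 - theta0) ** (1 - x) + (1 - pi) * theta1 ** x * (1 - theta1) ** (1 - x)
        cont += px * rho(posterior(pi, x), stages - 1)
    return min(rho0(pi), cont)

for pi in (0.2, 0.5, 0.8):
    print(pi, [round(rho(pi, k), 4) for k in (1, 2, 3)])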

Section 8.3

8.3.1 Let X ∼ B(n, θ) be a binomial random variable. Determine a (1 – α)–level credibility interval for θ with respect to the Jeffreys prior h(θ) ∝ θ−1/2(1 − θ)−1/2, for the case of n = 20, X = 17, and α = 0.05.
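A sketch for Problem 8.3.1 (not from the text): under the Jeffreys prior the posterior is β(X + 1/2, n − X + 1/2), and an equal-tail (1 − α)-level credibility interval is bounded by its α/2 and 1 − α/2 quantiles.

from scipy import stats

n, x, alpha = 20, 17, 0.05
posterior = stats.beta(x + 0.5, n - x + 0.5)
print(posterior.ppf([alpha / 2, 1 - alpha / 2]))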

8.3.2 Consider the normal regression model (Problem 3, Section 2.9). Assume that σ2 is known and that (α, β) has a prior bivariate normal distribution inline.jpg.

(i) Derive the (1 − inline.jpg) joint credibility region for (α, β).
(ii) Derive a (1 − inline.jpg) credibility interval for α + β ξ0, when ξ0 is specified.
(iii) What is the (1 − inline.jpg) simultaneous credibility interval for α + β ξ, for all ξ?

8.3.3 Consider Problem 4 of Section 8.2. Determine the (1–α) upper credibility limit for the variance ratio ρ.

8.3.4 Let X1, …, Xn be i.i.d. random variables having a N(μ, σ2) distribution and let Y1, …, Yn be i.i.d. random variables having a N(η, σ2) distribution. The Xs and the Ys are independent. Assume the formal prior for μ, η, and σ, i.e.,

Unnumbered Display Equation

(i) Determine a (1 – α) HPD–interval for δ = μ − η.
(ii) Determine a (1 – α) HPD–interval for σ2.

8.3.5 Let X1, …, Xn be i.i.d. random variables having a G(λ, 1) distribution and let Y1, …, Ym be i.i.d. random variables having a G(η, 1) distribution. The Xs and Ys are independent. Assume that λ and η are priorly independent, having prior distributions inline.jpg and inline.jpg, respectively. Determine a (1 – α) HPD–interval for ω = λ/η.

Section 8.4

8.4.1 Let X ∼ B(n, θ), 0 < θ < 1. Suppose that the prior distribution of θ is β(p, q), 0 < p, q < ∞.

(i) Derive the Bayes estimator of θ for the squared–error loss.
(ii) What are the posterior risk and the prior risk of the Bayes estimator of (i)?
(iii) Derive the Bayes estimator of θ for the quadratic loss L(θ̂, θ) = (θ̂ − θ)2/θ(1 − θ) and the Jeffreys prior β(1/2, 1/2).
(iv) What are the prior and posterior risks of the estimator of (iii)?

8.4.2 Let X∼ P(λ), 0 < λ < ∞. Suppose that the prior distribution of λ is inline.jpg.

(i) Derive the Bayes estimator of λ for the loss function L(λ̂, λ) = a(λ̂ − λ)+ + b(λ̂ − λ)−, where (·)+ = max(·, 0) and (·)− = −min(·, 0); 0 < a, b < ∞ (a numerical sketch follows this problem).
(ii) Derive the Bayes estimator for the loss function L(λ̂, λ) = (λ̂ − λ)2/λ.
(iii) What is the limit of the Bayes estimators in (ii) when inline.jpg and inline.jpg?
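A sketch for part (i) of Problem 8.4.2 (not from the text): for the piecewise-linear loss a(λ̂ − λ)+ + b(λ̂ − λ)−, the Bayes estimator is the b/(a + b)-quantile of the posterior. The gamma prior below, with rate τ and shape ν, and all numerical values are assumptions for illustration (the prior displayed in the text is not reproduced here).

from scipy import stats

x, a, b = 7, 1.0, 3.0                  # hypothetical observation and loss weights
tau, nu = 2.0, 1.5                     # hypothetical prior: gamma with rate tau, shape nu
posterior = stats.gamma(a=nu + x, scale=1.0 / (tau + 1.0))   # Poisson-gamma update
print("Bayes estimate:", posterior.ppf(b / (a + b)))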

8.4.3 Let X1, …, Xn, Y be i.i.d. random variables having a normal distribution N(μ, σ2); −∞ < μ < ∞, 0 < σ2 < ∞. Consider the Jeffreys prior with h(μ, σ2) dμ dσ2 ∝ dμ dσ2/σ2. Derive the γ–quantile of the predictive distribution of Y given (X1, …, Xn).

8.4.4 Let X ∼ P(λ), 0 < λ < ∞. Derive the Bayesian estimator of λ with respect to the loss function L(λ̂, λ) = (λ̂ − λ)2/λ and a prior gamma distribution.

8.4.5 Let X1, …, Xn be i.i.d. random variables having a B(1, θ) distribution, 0 < θ < 1. Derive the Bayesian estimator of θ with respect to the loss function L(θ̂, θ) = (θ̂ − θ)2/θ(1 − θ) and a prior beta distribution.

8.4.6 In continuation of Problem 5, show that the posterior risk of θ̂ = Σ Xi/n with respect to L(θ̂, θ) = (θ̂ − θ)2/θ(1 − θ) is 1/n for all Σ Xi. This implies that the best sequential sampling procedure for this Bayes problem is a fixed sample procedure. If the cost per observation is c, determine the optimal sample size.
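A sketch for the last part of Problem 8.4.6 (not from the text): with posterior risk 1/n and sampling cost c per observation, the total risk 1/n + cn is minimized near n = 1/√c, so the optimal integer sample size is one of its two neighbours.

import math

def optimal_n(c):
    n0 = max(1, math.floor(1.0 / math.sqrt(c)))
    return min((n0, n0 + 1), key=lambda n: 1.0 / n + c * n)

print(optimal_n(0.01))    # hypothetical cost per observation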

8.4.7 Consider the normal–gamma linear model inline.jpg where Y is four–dimensional and

Unnumbered Display Equation

(i) What is the predictive distribution of Y?
(ii) What is the Bayesian estimator inline.jpg?

8.4.8 As an alternative to the hierarchical model of Gelman et al. (1995) described in Section 8.4.2, assume that n1, …, nk are large. Make the variance stabilizing transformation

Unnumbered Display Equation

We consider now the normal model

Unnumbered Display Equation

where Y = (Y1, …, Yk)′, η = (η1, …, ηk)′ with ηi = 2 sin−1(√θi), i = 1, …, k. Moreover, D is a diagonal matrix, D = diag(1/n1, …, 1/nk). Assume a prior multinormal distribution for η.

(i) Develop a credibility region for η and, by the inverse (one-to-one) transformations, obtain a credibility region for θ = (θ1, …, θk).

8.4.9 Consider the normal random walk model, which is a special case of the dynamic linear model (8.4.6), given by

Unnumbered Display Equation

where θ0 ∼ N(η0, c0), {εn} are i.i.d. N(0, σ2), and {ωn} are i.i.d. N(0, τ2). Show that cn → c* as n → ∞, where cn is the posterior variance of θn given (Y1, …, Yn). Find the formula for c*.
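A numerical sketch for Problem 8.4.9 (not from the text), assuming the standard Kalman-filter recursion for the random walk model, cn = (cn−1 + τ2)σ2/(cn−1 + τ2 + σ2); its fixed point c* then solves c*2 + τ2c* − σ2τ2 = 0. All numerical values below are illustrative.

import math

sigma2, tau2, c = 1.0, 0.5, 10.0       # hypothetical sigma^2, tau^2 and prior variance c_0
for _ in range(50):
    r = c + tau2                        # one-step prior variance of theta_n
    c = r * sigma2 / (r + sigma2)       # posterior variance c_n given Y_1, ..., Y_n
c_star = (-tau2 + math.sqrt(tau2 ** 2 + 4 * sigma2 * tau2)) / 2.0
print(c, c_star)                        # both approach the steady-state value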

Section 8.5

8.5.1 The integral

Unnumbered Display Equation

can be computed analytically or numerically. Analytically, it can be computed as

(i)

Unnumbered Display Equation

(ii) Make the transformation ω = θ/(1 − θ), and write

Unnumbered Display Equation

where inline.jpgn = X/n. Let k(ω) = log(1 + ω) − inline.jpgn log(ω) and f(ω) = inline.jpg. Find the value ω̂ that maximizes −k(ω). Use (8.5.3) and (8.5.4) to approximate I.

(iii) Approximate I numerically. Compare (i), (ii) and (iii) for n = 20, 50 and X = 15, 37. How good is the saddle–point approximation (8.5.9)?
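A sketch of the numerical part (iii) of Problem 8.5.1 (not from the text), assuming, consistently with the solution of Problem 8.5.3 below, that I = ∫ e−θ θX(1 − θ)n−X dθ over (0, 1); under that assumption the analytic value is B(X + 1, n − X + 1) 1F1(X + 1; n + 2; −1).

import numpy as np
from scipy import integrate, special

def I_numeric(n, X):
    return integrate.quad(lambda t: np.exp(-t) * t ** X * (1 - t) ** (n - X), 0.0, 1.0)[0]

def I_analytic(n, X):
    # E{exp(-U)} for U ~ beta(X+1, n-X+1) is the Kummer function 1F1(X+1; n+2; -1)
    return special.beta(X + 1, n - X + 1) * special.hyp1f1(X + 1, n + 2, -1)

for n, X in [(20, 15), (50, 37)]:
    print(n, X, I_numeric(n, X), I_analytic(n, X))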

8.5.2 Prove that if U1, U2 are two i.i.d. rectangular (0, 1) random variables, then the Box–Muller transformation (8.5.26) yields two i.i.d. N(0, 1) random variables.
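A quick empirical check of Problem 8.5.2 (not a proof): applying the Box–Muller transformation Z1 = (−2 log U1)1/2 cos(2πU2), Z2 = (−2 log U1)1/2 sin(2πU2) to simulated uniforms produces variables with the moments and zero correlation expected of i.i.d. N(0, 1).

import numpy as np

rng = np.random.default_rng(2)
u1, u2 = rng.uniform(size=100_000), rng.uniform(size=100_000)
z1 = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)
z2 = np.sqrt(-2.0 * np.log(u1)) * np.sin(2.0 * np.pi * u2)
print(z1.mean(), z1.std(), np.corrcoef(z1, z2)[0, 1])   # approximately 0, 1, 0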

8.5.3 Consider the integral I of Problem 8.5.1. How would you approximate I by simulation? How would you run the simulations so that, with probability ≥ 0.95, the absolute error is not greater than 1% of I?

Section 8.6

8.6.1 Let (X1, θ1), …, (Xn, θn), … be a sequence of independent random vectors of which only the Xs are observable. Assume that the conditional distributions of Xi given θi are B(1, θi), i = 1, 2, …, and that θ1, θ2, … are i.i.d. having some prior distribution H(θ) on (0, 1).

(i) Construct an empirical–Bayes estimator of θ for the squared–error loss.
(ii) Construct an empirical–Bayes estimator of θ for the squared–error loss, if it is assumed that H(θ) belongs to the family inline.jpg = {β (p, q): 0 < p, q < ∞}.

8.6.2 Let (X1, θ1), …, (Xn, θn), … be a sequence of independent random vectors of which only the Xs are observable. It is assumed that the conditional distribution of Xi given θi is NB(θi, ν), ν known, i = 1, 2, …. Moreover, it is assumed that θ1, θ2, … are i.i.d., having a prior distribution H(θ) belonging to the family inline.jpg of beta distributions. Construct a sequence of empirical–Bayes estimators for the squared–error loss, and show that their posterior risks converge a.s. to the posterior risk of the true β(p, q).

8.6.3 Let (X1, λ1), …, (Xn, λn), … be a sequence of independent random vectors, where Xi | λi ∼ G(λi, 1), i = 1, 2, …, and λ1, λ2, … are i.i.d. having a prior G(τ, ν) distribution; τ and ν unknown. Construct a sequence of empirical–Bayes estimators of λi for the squared–error loss.

PART IV: SOLUTIONS OF SELECTED PROBLEMS

8.1.6 (i) Since X ∼ N(0, σ2), inline.jpg. Thus, the density of S, given σ2, is

Unnumbered Display Equation

Let inline.jpg and let the prior distribution of inline be like that of inline.jpg. Hence, the posterior distribution of inline, given S, is like that of inline.jpg.

(ii)

Unnumbered Display Equation

Similarly, we find that

Unnumbered Display Equation

8.1.9

Unnumbered Display Equation

Accordingly,

(i)

Unnumbered Display Equation

(ii)

Unnumbered Display Equation

(iii)

Unnumbered Display Equation

8.2.1 (i) The prior risks are R0 = c1π and R1 = c2(1 − π), where π is the prior probability of H0. These two risk lines intersect at π* = c2/(c1 + c2). We have R0(π*) = R1(π*). The posterior probability that H0 is true is

Unnumbered Display Equation

The Bayes test function is

Unnumbered Display Equation

φπ(X) is the probability of rejecting H0. Note that φπ(X) = 1 if, and only if, X > ξ(π), where

Unnumbered Display Equation

(ii)

Unnumbered Display Equation

Let P(j; λ) denote the c.d.f. of the Poisson distribution with mean λ. Then

Unnumbered Display Equation

and

Unnumbered Display Equation

(iii)

Unnumbered Display Equation

Then

Unnumbered Display Equation
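A numerical sketch of part (iii) (not from the text): one standard derivation of the cutoff gives ξ(π) = [(λ1 − λ0) + log(πc1/((1 − π)c2))]/log(λ1/λ0), and the problem's values are read here as λ0 = 2, λ1 = 4; both the cutoff form and these values are assumptions, since the displayed equations are not reproduced.

import numpy as np
from scipy import stats

c1, c2, lam0, lam1 = 1.0, 3.0, 2.0, 4.0

def risk_point(pi):
    # reject H0 iff X > xi(pi); k = [xi(pi)] is the largest integer not exceeding xi(pi)
    xi = ((lam1 - lam0) + np.log(pi * c1 / ((1.0 - pi) * c2))) / np.log(lam1 / lam0)
    k = np.floor(xi)
    R0 = c1 * (1.0 - stats.poisson.cdf(k, lam0))   # risk under H0
    R1 = c2 * stats.poisson.cdf(k, lam1)           # risk under H1
    return R0, R1

for pi in np.linspace(0.05, 0.95, 10):
    print(round(pi, 2), risk_point(pi))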

8.3.2 We have a simple linear regression model Yi = α + βxi + εi, i = 1, …, n, where the εi are i.i.d. N(0, σ2). Assume that σ2 is known. Let (X) = (1n, xn), where 1n is an n–dimensional vector of 1s and xn = (x1, …, xn). The model is

Unnumbered Display Equation

(i) Let inline.jpg. The prior distribution of θ is N(θ0, inline.jpg). Note that the covariance matrix of Y is V[Y] = σ2 I + (X)inline.jpg(X)′. Thus, the posterior distribution of θ, given Y, is N(η(Y), D), where

Unnumbered Display Equation

and

Unnumbered Display Equation

Accordingly

Unnumbered Display Equation

Thus, the (1–α) credibility region for θ is

Unnumbered Display Equation

(ii) Let inline.jpg and inline.jpg. The posterior distribution of θ′w0, given Y, is N(η(Y)′w0, w0′Dw0). Hence, a (1 – α) credibility interval for θ′w0, given Y, has the limits η(Y)′w0 ± z1−α/2(w0′Dw0)1/2.
(iii) Simultaneous credibility intervals for α + βξ, for all ξ, are given by Scheffé's S–intervals

Unnumbered Display Equation

8.4.1

Unnumbered Display Equation

The prior of θ is Beta (p, q), 0 < p, q < ∞.

(i) The posterior distribution of θ, given X, is Beta(p + X, q + n − X). Hence, the Bayes estimator of θ for squared–error loss is the posterior mean, θ̂B = (p + X)/(p + q + n).
(ii) The posterior risk is V{θ | X} = (p + X)(q + n − X)/((p + q + n)2(p + q + n + 1)). The prior risk is inline.jpg.
(iii) The Bayes estimator is

Unnumbered Display Equation

8.4.3 The Jeffreys prior is inline.jpg. A version of the likelihood function, given the minimal sufficient statistic (X̄, Q), where Q = inline.jpg, is

Unnumbered Display Equation

Thus, the posterior density under the Jeffreys prior is

Unnumbered Display Equation

Now,

Unnumbered Display Equation

Furthermore,

Unnumbered Display Equation

Accordingly,

Unnumbered Display Equation

Thus, the predictive density of Y, given (X̄, Q), is

Unnumbered Display Equation

or

Unnumbered Display Equation

Recall that the density of t[n − 1] is

Unnumbered Display Equation

Thus, the γ–quantile of the predictive distribution fH(y | X̄, Q) is X̄ + inline.jpg.
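A sketch of the resulting formula (not from the text): one standard form consistent with the scaled t[n − 1] density above is the quantile X̄ + tγ[n − 1]((1 + 1/n)Q/(n − 1))1/2, with Q = Σ(Xi − X̄)2. The sample values below are hypothetical.

import numpy as np
from scipy import stats

x = np.array([4.1, 5.0, 3.8, 4.6, 5.2, 4.4])    # hypothetical sample
gamma = 0.9
n, xbar = x.size, x.mean()
Q = ((x - xbar) ** 2).sum()
quantile = xbar + stats.t.ppf(gamma, df=n - 1) * np.sqrt((1 + 1.0 / n) * Q / (n - 1))
print(quantile)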

8.5.3

Unnumbered Display Equation

where U ∼ β(X + 1, n − X + 1). By simulation, we generate M i.i.d. values U1, …, UM, and estimate I by inline.jpg inline.jpg. For large inline.jpg, where D = B2(X + 1, n − X + 1)V{e−U}. We have to find M large enough so that inline.jpg ≥ 0.95. According to the asymptotic normal distribution, we determine M so that

Unnumbered Display Equation

or

Unnumbered Display Equation

By the delta method, for large M,

Unnumbered Display Equation

Similarly,

Unnumbered Display Equation

Hence, M should be the smallest integer such that

Unnumbered Display Equation
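A sketch of this simulation (not from the text), using n = 20, X = 15 for illustration: a pilot run estimates the relative standard deviation of e−U, the sample size M is chosen so that the normal-approximation bound 1.96(D/M)1/2 ≤ 0.01·I holds, and I is then estimated as B(X + 1, n − X + 1) times the average of the e−Ui.

import numpy as np
from scipy import special

n, X = 20, 15
a, b = X + 1, n - X + 1
B = special.beta(a, b)
rng = np.random.default_rng(1)

# pilot run: estimate the relative standard deviation of exp(-U), U ~ beta(a, b)
pilot = np.exp(-rng.beta(a, b, size=10_000))
rel_sd = pilot.std(ddof=1) / pilot.mean()
M = int(np.ceil((1.96 * rel_sd / 0.01) ** 2))    # so that P{|I_hat - I| <= 0.01 I} ~ 0.95

u = rng.beta(a, b, size=M)
I_hat = B * np.exp(-u).mean()
print(M, I_hat)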
