PART III: PROBLEMS

Section 5.2

5.2.1 Let X1, …, Xn be i.i.d. random variables having a rectangular distribution R(θ1, θ2), -∞ < θ1 < θ2 < ∞.

(i) Determine the UMVU estimators of θ1 and θ2.
(ii) Determine the covariance matrix of these UMVU estimators.

5.2.2 Let X1, …, Xn be i.i.d. random variables having an exponential distribution, E(λ), 0 < λ < ∞.

(i) Derive the UMVU estimator of λ and its variance.
(ii) Show that the UMVU estimator of ρ = e^{-λ} is

Unnumbered Display Equation

where T = Σ Xi and a+ = max(a, 0).

(iii) Prove that the variance of ρ̂ is

Unnumbered Display Equation

where P(j; λ) is the c.d.f. of P(λ) and H(k| x) = inline.jpg. [H(k| x) can be determined recursively by the relation

Unnumbered Display Equation

and H(1|x) is the exponential integral (Abramowitz and Stegun, 1968).] (A simulation check of part (i) follows.)
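
The following is a minimal Monte Carlo sketch for checking part (i), assuming E(λ) denotes the exponential density λe^{-λx}, so that the UMVU estimator of λ has the standard form (n - 1)/T with variance λ²/(n - 2); these closed forms are assumptions of the sketch, not quoted from the text.

# Monte Carlo check for Problem 5.2.2(i) -- an illustrative sketch only.
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 2.0, 10, 200_000
X = rng.exponential(scale=1.0 / lam, size=(reps, n))  # X_i ~ E(lam)
T = X.sum(axis=1)                                     # minimal sufficient statistic
est = (n - 1) / T                                     # assumed UMVU form

print(est.mean())                   # approximately lam = 2.0 (unbiasedness)
print(est.var(), lam**2 / (n - 2))  # empirical vs. theoretical variance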

5.2.3 Let X1, …, Xn be i.i.d. random variables having a two–parameter exponential distribution, X1 ∼ μ + G(λ, 1). Derive the UMVU estimators of μ and λ and their covariance matrix.

5.2.4 Let X1, …, Xn be i.i.d. N(μ, 1) random variables.

(i) Find a λ(n) such that Φ(λ(n)X̄n) is the UMVU estimator of Φ(μ).
(ii) Derive the variance of this UMVU estimator. (A simulation check of (i) follows this problem.)
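
A quick simulation check of (i), under the choice λ(n) = (1 - 1/n)^(-1/2) derived in the solution to 5.2.4 in Part IV; a sketch, not part of the original problem.

# Verify numerically that Phi(lambda(n) * Xbar_n) is unbiased for Phi(mu).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
mu, n, reps = 0.5, 8, 400_000
lam_n = (1 - 1 / n) ** (-0.5)
xbar = rng.normal(mu, 1 / np.sqrt(n), size=reps)    # Xbar_n ~ N(mu, 1/n)
print(norm.cdf(lam_n * xbar).mean(), norm.cdf(mu))  # the two means should agree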

5.2.5 Consider Example 5.4. Find the variances of the UMVU estimators of p(0; λ) and of p(1; λ). [Hint: Use the formula for the p.g.f. of P(nλ).]

5.2.6 Let X1, …, Xn be i.i.d. random variables having a NB(ψ, ν) distribution; 0 < ψ < ∞ (ν known). Prove that the UMVU estimator of ψ is

Unnumbered Display Equation

5.2.7 Let X1, …, Xn be i.i.d. random variables having a binomial distribution B(N, θ), 0 < θ < 1.

(i) Derive the UMVU estimator of θ and its variance.
(ii) Derive the UMVU estimator of σ²(θ) = θ(1 - θ) and its variance (a numerical check follows this problem).
(iii) Derive the UMVU estimator of b(j; N, θ).
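
A numerical check of (i) and (ii), assuming the standard forms θ̂ = T/(nN) and σ̂² = T(nN - T)/(nN(nN - 1)) with T = Σ Xi; these closed forms are the sketch's assumptions, not quoted from the text.

# Simulate the m.s.s. T ~ B(nN, theta) directly and check unbiasedness.
import numpy as np

rng = np.random.default_rng(3)
N, n, theta, reps = 5, 12, 0.3, 200_000
T = rng.binomial(n * N, theta, size=reps)
theta_hat = T / (n * N)
sigma2_hat = T * (n * N - T) / (n * N * (n * N - 1))
print(theta_hat.mean(), theta)                 # unbiased for theta
print(sigma2_hat.mean(), theta * (1 - theta))  # unbiased for theta(1 - theta)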

5.2.8 Let X1, …, Xn be i.i.d. N(μ, 1) random variables. Find a constant b(n) so that

Unnumbered Display Equation

is a UMVU estimator of the p.d.f. of X at ξ, i.e., of the N(μ, 1) density φ(ξ - μ). [Hint: Apply the m.g.f. of (X̄n - ξ)².]

5.2.9 Let J1, …, Jn be i.i.d. random variables having a binomial distribution B(1, e^{-Δ/θ}), 0 < θ < 1 (Δ known). Let p̄n = (1/n) Σ Ji. Consider the estimator of θ

θ̂n = -Δ/log (p̄n),

Determine the bias of θ̂n as a power series in 1/n.

5.2.10 Let X1, …, Xn be i.i.d. random variables having a binomial distribution B(N, θ), 0 < θ < 1. What is the Cramér–Rao lower bound to the variance of the UMVU estimator of ω = θ(1 - θ)?

5.2.11 Let X1, …, Xn be i.i.d. random variables having a negative–binomial distribution NB(ψ, ν). What is the Cramér–Rao lower bound to the variance of the UMVU estimator of ψ? [See Problem 6.]

5.2.12 Derive the Cramér–Rao lower bound to the variance of the UMVU estimator of δ = e^{-λ} in Problem 2.

5.2.13 Derive the Cramér–Rao lower bound to the variance of the UMVU estimator of Φ(μ) in Problem 4.

5.2.14 Derive the BLBs of the second and third order for the UMVU estimator of Φ(μ) in Problem 4.

5.2.15 Let X1, …, Xn be i.i.d. random variables having a common N(μ, σ²) distribution, -∞ < μ < ∞, 0 < σ² < ∞.

(i) Show that ω̂ = exp{X̄n} is the UMVU estimator of ω = exp{μ + σ²/2n}.
(ii) What is the variance of ω̂?
(iii) Show that the Cramér–Rao lower bound for the variance of ω̂ is ω²(σ²/n + σ⁴/(2n³)).

5.2.16 Let X1, …, Xn be i.i.d. random variables having a common N(μ, σ²) distribution, -∞ < μ < ∞, 0 < σ < ∞. Determine the Cramér–Rao lower bound for the variance of the UMVU estimator of ω = μ + zγσ, where zγ = Φ−1(γ), 0 < γ < 1.

5.2.17 Let X1, …, Xn be i.i.d. random variables having a G(λ, ν) distribution, 0 < λ < ∞, ν ≥ 3 fixed.

(i) Determine the UMVU estimator of λ².
(ii) Determine the variance of this UMVU estimator.
(iii) What is the Cramér–Rao lower bound for the variance of the UMVU estimator?
(iv) Derive the BLBs of orders 2, 3, and 4.

5.2.18 Consider Example 5.8. Show that the Cramér–Rao lower bound for the variance of the MVU estimator of cov(X, Y) = ρσ² is inline.jpg.

5.2.19 Let X1, …, Xn be i.i.d. random variables from N(μ1, σ1²) and Y1, …, Yn i.i.d. from N(μ2, σ2²). The random vectors X and Y are independent and n ≥ 3. Let δ = inline.jpg.

(i) What is the UMVU estimator of δ and what is its variance?
(ii) Derive the Cramér–Rao lower bound to the variance of the UMVU estimator of δ.

5.2.20 Let X1, …, Xn be i.i.d. random variables having a rectangular distribution R(0, θ), 0 < θ < ∞. Derive the Chapman–Robbins inequality for the UMVU estimator of θ.

5.2.21 Let X1, …, Xn be i.i.d. random variables having a Laplace distribution L(μ, σ), -∞ < μ < ∞, 0 < σ < ∞. Derive the Chapman–Robbins inequality for the variances of unbiased estimators of μ.

Section 5.3

5.3.1 Show that if θ̂(X) is a biased estimator of θ, having a differentiable bias function B(θ), then the efficiency of θ̂(X), when the regularity conditions hold, is

Unnumbered Display Equation

5.3.2 Let X1, …, Xn be i.i.d. random variables having a negative exponential distribution G(λ, 1), 0 < λ < ∞.

(i) Derive the efficiency function of the UMVU estimator of λ.
(ii) Derive the efficiency function of the MLE of λ.

5.3.3 Consider Example 5.8.

(i) What are the efficiency functions of the unbiased estimators of σ² and ρ, where PXY = Σ XiYi - nX̄Ȳ and σ̂² = (QX + QY)/2n?
(ii) What is the combined efficiency function (5.3.13) for the two estimators simultaneously?

Section 5.4

5.4.1 Let X1, …, Xn be equicorrelated random variables having a common unknown mean μ. The variance of each variable is σ² and the correlation between any two variables is ρ = 0.7.

(i) Show that the covariance matrix of X = (X1, …, Xn)′ is V = σ²(0.3In + 0.7Jn), where In is the identity matrix of order n and Jn is an n × n matrix of 1s.
(ii) Determine the BLUE of μ.
(iii) What is the variance of the BLUE of μ?
(iv) How would you estimate σ²? (A numerical sketch follows this problem.)
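
A short numerical sketch for (ii)-(iii): the BLUE of μ and its variance can be computed directly from V = σ²(0.3In + 0.7Jn) by generalized least squares; the weights come out equal, so the BLUE is X̄n with variance σ²(0.3/n + 0.7). The value n = 6 below is illustrative.

# GLS/BLUE of a common mean under the equicorrelated covariance matrix.
import numpy as np

n, sigma2 = 6, 1.0
V = sigma2 * (0.3 * np.eye(n) + 0.7 * np.ones((n, n)))
one = np.ones(n)
Vinv1 = np.linalg.solve(V, one)
w = Vinv1 / (one @ Vinv1)        # BLUE weights: all equal, i.e., BLUE = Xbar
var_blue = 1.0 / (one @ Vinv1)   # variance of the BLUE
print(w)
print(var_blue, sigma2 * (0.3 / n + 0.7))  # the two values agree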

5.4.2 Let X1, X2, X3 be i.i.d. random variables from a rectangular distribution R(μ - σ, μ + σ), -∞ < μ < ∞, 0 < σ < ∞. What is the best linear combination of the order statistics X(i), i = 1, 2, 3, for estimating μ, and what is its variance?

5.4.3 Suppose that X1, …, Xn are i.i.d. from a Laplace distribution with p.d.f. f(x; μ, σ) = (1/σ)ψ((x - μ)/σ), where ψ(z) = ½e^{-|z|}. What is the best linear unbiased combination of X(1), Me, and X(n) for estimating μ, when n = 5?

5.4.4 Let Sk(T) = 1^k + 2^k + … + T^k.

(i) Show that

Unnumbered Display Equation

(ii) Apply (i) to derive the following formulae:

Unnumbered Display Equation

Unnumbered Display Equation

[Hint: To prove (i), show that both sides are inline.jpg (Anderson, 1971, p. 83).]

5.4.5 Let Xt = f(t) + et, t = 1, …, T, where

f(t) = β0 + β1t + … + βpt^p,

and et are uncorrelated random variables, with E{et} = 0, V{et} = σ² for all t = 1, …, T.

(i) Write the normal equations for the least–squares estimation of the polynomial coefficients βi (i = 0, …, p).
(ii) Develop explicit formulas for the coefficients β̂i in the case of p = 2.
(iii) Develop explicit formulas for V{β̂i} and σ̂² for the case of p = 2. [The above results can be applied to polynomial trend fitting in time series analysis when the errors are uncorrelated; a least-squares sketch follows this problem.]
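
A least-squares sketch for (i)-(iii) in the case p = 2, solving the normal equations numerically; the sample size, coefficients, and noise level below are illustrative assumptions.

# Quadratic trend fitting: Xt = b0 + b1*t + b2*t^2 + et, t = 1, ..., T.
import numpy as np

rng = np.random.default_rng(11)
T = 30
t = np.arange(1, T + 1, dtype=float)
beta = np.array([2.0, 0.5, -0.01])                # illustrative true coefficients
x = beta[0] + beta[1] * t + beta[2] * t**2 + rng.normal(0, 1.0, T)

A = np.vstack([np.ones(T), t, t**2]).T            # design matrix for p = 2
bhat, res, *_ = np.linalg.lstsq(A, x, rcond=None)
sigma2_hat = res[0] / (T - 3)                     # unbiased estimate of sigma^2
cov = sigma2_hat * np.linalg.inv(A.T @ A)         # V{beta_hat_i} on the diagonal
print(bhat, np.diag(cov))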

5.4.6 The annual consumption of meat per capita in the United States during the years 1919–1941 (in pounds) is (Anderson, 1971, p. 44)

Unnumbered Table

(i) Fit a cubic trend to the data by the method of least squares.
(ii) Estimate the error variance σ² and test the significance of the polynomial coefficients, assuming that the errors are i.i.d. N(0, σ²).

5.4.7 Let (x1i, Y1i), i = 1, …, n1, and (x2i, Y2i), i = 1, …, n2, be two independent sets of regression points. It is assumed that

Unnumbered Display Equation

where xji are constants and eji are i.i.d. N(0, σ²). Let

Unnumbered Display Equation

where inline.jpgj and inline.jpgj are the respective sample means.

(i) Show that the LSE of β1 is

Unnumbered Display Equation

and that the LSEs of β0j (j = 1, 2) are

Unnumbered Display Equation

(ii) Show that an unbiased estimator of σ² is

Unnumbered Display Equation

where N = n1 + n2.

(iii) Show that

Unnumbered Display Equation

Section 5.5

5.5.1 Consider the following raw data (Draper and Smith, 1966, p. 178).

Unnumbered Table

(i) Determine the LSE of β0, …, β4 and of σ² in the model Y = β0 + β1X1 + … + β4X4 + e, where e ∼ N(0, σ²).
(ii) Determine the ridge–regression estimates βi(k), i = 0, …, 4, for k = 0.1, 0.2, 0.3 (a computational sketch follows this problem).
(iii) What value of k would you use?
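
A computational sketch for (ii): the ridge estimates β(k) = (X′X + kI)^{-1}X′Y on standardized predictors. Since the Draper and Smith data table is not reproduced above, the data below are randomly generated for illustration only.

# Ridge-regression trace over k = 0, 0.1, 0.2, 0.3.
import numpy as np

rng = np.random.default_rng(5)
n = 13
X = rng.normal(size=(n, 4))                       # illustrative predictors
Y = X @ np.array([1.5, -0.5, 0.8, 0.2]) + rng.normal(0, 0.5, n)

Xs = (X - X.mean(0)) / X.std(0)   # standardize; the intercept is handled by centering
Yc = Y - Y.mean()
for k in (0.0, 0.1, 0.2, 0.3):
    bk = np.linalg.solve(Xs.T @ Xs + k * np.eye(4), Xs.T @ Yc)
    print(k, bk)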

Section 5.6

5.6.1 Let X1, …, Xn be i.i.d. random variables having a binomial distribution B(1, θ), 0 < θ < 1. Find the MLE of

(i) σ² = θ(1 - θ);
(ii) ρ = e^{-θ};
(iii) ω = e^{-θ}/(1 + e^{-θ});
(iv) log (1 + θ).

5.6.2 Let X1, …, Xn be i.i.d. P(λ), 0 < λ < ∞. What is the MLE of p(j; λ) = e^{-λ}λ^j/j!, j = 0, 1, …?

5.6.3 Let X1, …, Xn be i.i.d. N(μ, σ²), -∞ < μ < ∞, 0 < σ < ∞. Determine the MLEs of

(i) μ + Zγ σ, where Zγ = Φ−1(γ), 0 < γ < 1;
(ii) ω(μ, σ) = Φ(μ/σ)[1 - Φ(μ/σ)].

5.6.4 Using the delta method (see Section 1.13.4), determine the large sample approximation to the expectations and variances of the MLEs of Problems 1, 2, and 3.

5.6.5 Consider the normal regression model

Yi = β0 + β1xi + ei, i = 1, …, n,

where x1, …, xn are constants such that Σ (xi - x̄n)² > 0, and e1, …, en are i.i.d. N(0, σ²).

(i) Show that the MLEs of β0 and β1 coincide with the LSEs.
(ii) What is the MLE of σ²?

5.6.6 Let (xi, Ti), i = 1, …, n, be specified in the following manner: x1, …, xn are constants, T1, …, Tn are independent random variables, and Ti ∼ (α + βxi)G(1, 1), i = 1, …, n.

(i) Determine the maximum likelihood equations for α and β.
(ii) Set up the Newton–Raphson iterative procedure for determining the MLE, starting with the LSEs of α and β as initial solutions (a sketch follows this problem).
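
A sketch of (i)-(ii) under the model Ti ∼ (α + βxi)G(1, 1), i.e., Ti exponential with mean α + βxi (as in the solution to 5.6.6 in Part IV), with illustrative data, a least-squares start, and a fixed number of Newton-Raphson steps (no convergence safeguards).

# Newton-Raphson for the MLE of (alpha, beta); minimizes the negative
# log-likelihood l(a, b) = sum[log(a + b*x_i) + T_i/(a + b*x_i)].
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0, 40)
T = rng.exponential(scale=1.0 + 0.5 * x)     # true alpha = 1.0, beta = 0.5

b = np.cov(x, T, bias=True)[0, 1] / x.var()  # LSE initial solutions
a = T.mean() - b * x.mean()

for _ in range(20):
    m = a + b * x
    g = np.array([np.sum(1/m - T/m**2),
                  np.sum(x * (1/m - T/m**2))])       # gradient of l
    h11 = np.sum(-1/m**2 + 2*T/m**3)
    h12 = np.sum(x * (-1/m**2 + 2*T/m**3))
    h22 = np.sum(x**2 * (-1/m**2 + 2*T/m**3))
    H = np.array([[h11, h12], [h12, h22]])           # Hessian of l
    a, b = np.array([a, b]) - np.linalg.solve(H, g)  # Newton step
print(a, b)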

5.6.7 Consider the MLE of the parameters of the normal, logistic, and extreme–value tolerance distributions (Section 5.6.6). Let x1 < … < xk be controlled experimental levels, n1, …, nk the sample sizes, and J1, …, Jk the numbers of response cases from those samples. Let pi = (Ji + 1/2)/(ni + 1). The following transformations are applied first to determine the initial solutions:

1. Normit: Yi = Φ−1(pi), i = 1, …, k;
2. Logit: Yi = log (pi/(1 - pi)), i = 1, …, k;
3. Extremit: Yi = -log (-log pi), i = 1, …, k.

For the normal, logistic, and extreme–value models, determine the following:
(i) The LSEs of θ1 and θ2 based on the linear model Yi = θ1 + θ2xi + ei (i = 1, …, k).
(ii) The MLEs of θ1 and θ2, using the LSEs as initial solutions.
(iii) Apply (i) and (ii) to fit the normal, logistic, and extreme–value models to the following set of data, in which k = 3; ni = 50 (i = 1, 2, 3); x1 = -1, x2 = 0, x3 = 1; J1 = 15, J2 = 34, J3 = 48. (A sketch of the initial-solution computation follows this problem.)
(iv) We could say that one of the above three models fits the data better than the other two if the corresponding statistic

Unnumbered Display Equation

is minimal; or

Unnumbered Display Equation

is maximal. Determine W² and D² for each one of the above models, according to the data in (iii), and infer which of the three models fits the data best.
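
A sketch of the initial-solution computation of (i), applied to the data of (iii); the normit, logit, and extremit transformations are taken verbatim from the problem.

# LSE of (theta1, theta2) from the transformed response fractions.
import numpy as np
from scipy.stats import norm

x = np.array([-1.0, 0.0, 1.0])
J = np.array([15, 34, 48])
n = np.array([50, 50, 50])
p = (J + 0.5) / (n + 1)

transforms = {
    "normit":   norm.ppf(p),
    "logit":    np.log(p / (1 - p)),
    "extremit": -np.log(-np.log(p)),
}
A = np.vstack([np.ones(3), x]).T
for name, Y in transforms.items():
    th, *_ = np.linalg.lstsq(A, Y, rcond=None)
    print(name, th)   # initial solutions (theta1, theta2)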

5.6.8 Consider a trinomial model in which (J1, J2) has the trinomial distribution M(n, P(θ)), where

Unnumbered Display Equation

This is the Hardy–Weinberg model.

(i) Show that the MLE of θ is

Unnumbered Display Equation

(ii) Find the Fisher information function In(θ).
(iii) What is the small-sample efficiency of the MLE? (A numerical check of (i) follows this problem.)
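
A numerical sketch for (i), assuming the usual Hardy-Weinberg cell probabilities P(θ) = (θ², 2θ(1 - θ), (1 - θ)²); under that assumption the MLE is (2J1 + J2)/(2n), which the sketch checks against a direct maximization of the likelihood.

# Compare the closed-form MLE with a numerical maximizer.
import numpy as np
from scipy.optimize import minimize_scalar

n, J1, J2 = 100, 30, 50
J3 = n - J1 - J2

def negloglik(t):
    return -(2*J1*np.log(t) + J2*np.log(2*t*(1 - t)) + 2*J3*np.log(1 - t))

res = minimize_scalar(negloglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, (2*J1 + J2) / (2*n))   # both approximately 0.55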

5.6.9 A minimum chi–squared estimator (MCE) of θ in a multinomial model M(n, P(θ)) is an estimator θ̂n minimizing

Unnumbered Display Equation

For the model of Problem 8, show that the MCE of θ is the real root of the equation

Unnumbered Display Equation

Section 5.7

5.7.1 Let X1, …, Xn be i.i.d. random variables having a common rectangular distribution R(0, θ), 0 < θ < ∞.

(i) Show that this model is preserved under the group of scale transformations 𝒢 = {gβ : gβX = βX, 0 < β < ∞}.
(ii) Show that the minimum MSE equivariant estimator of θ is ((n + 2)/(n + 1))X(n).

5.7.2 Let X1, …, Xn be i.i.d. random variables having a common location–parameter Cauchy distribution, i.e., f(x; μ) = 1/[π(1 + (x - μ)²)], -∞ < x < ∞, -∞ < μ < ∞. Show that the Pitman estimator of μ is

Unnumbered Display Equation

where Y(i) = X(i) - X(1), i = 2, …, n. Alternatively, by making the transformation ω = (1 + u²)^{-1}, one obtains the expression

Unnumbered Display Equation

This estimator can be evaluated by numerical integration, as in the sketch below.
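
A numerical-integration sketch, using the defining form of the Pitman location estimator under squared-error loss, μ̂ = ∫ t Π f(Xi - t) dt / ∫ Π f(Xi - t) dt; the sample and the truncated integration range below are illustrative assumptions.

# Pitman estimator of mu for a Cauchy sample, by direct quadrature.
import numpy as np
from scipy.integrate import quad

X = np.array([-0.8, 0.3, 1.1, 2.5, 0.0])   # illustrative sample

def joint(t):
    # product of Cauchy densities f(x - t) = 1 / (pi * (1 + (x - t)^2))
    return np.prod(1.0 / (np.pi * (1.0 + (X - t) ** 2)))

num, _ = quad(lambda t: t * joint(t), -50, 50)
den, _ = quad(joint, -50, 50)
print(num / den)   # Pitman estimator of mu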

5.7.3 Let X1, …, Xn be i.i.d. random variables having a N(μ, σ²) distribution. Determine the Pitman estimators of μ and σ, respectively.

5.7.4 Let X1, …, Xn be i.i.d. random variables having a location and scale parameter p.d.f. f(x; μ, σ) = (1/σ)ψ((x - μ)/σ), where -∞ < μ < ∞, 0 < σ < ∞, and ψ(z) is of the form

(i) ψ(z) = ½e^{-|z|} (Laplace);
(ii) ψ(z) = 6z(1 - z), 0 ≤ z ≤ 1 (Beta(2, 2)).

Determine the Pitman estimators of μ and σ for (i) and (ii).

Section 5.8

5.8.1 Let X1, …, Xn be i.i.d. random variables. What are the MEEs of the parameters of

(i) NB(ψ, ν); 0 < ψ < 1, 0 < ν < ∞;
(ii) G(λ, ν); 0 < λ < ∞, 0 < ν < ∞;
(iii) LN(μ, σ²); -∞ < μ < ∞, 0 < σ² < ∞;
(iv) G1/β(λ, 1); 0 < λ < ∞, 0 < β < ∞ (Weibull);
(v) Location and scale parameter distributions with p.d.f. f(x; μ, σ) = (1/σ)ψ((x - μ)/σ); with
(a) ψ(z) = ½e^{-|z|} (Laplace),
(b) inline.jpg known,
(c) ψ(z) = z^{ν1-1}(1 - z)^{ν2-1}/B(ν1, ν2), 0 ≤ z ≤ 1, ν1 and ν2 known.

5.8.2 It is common practice to express the degree of skewness and kurtosis (peakedness) of a p.d.f. by the coefficients

β1 = μ3²/μ2³

and

β2 = μ4/μ2².

Provide MEEs of β1 and β2, with μk denoting the kth central moment, based on samples of n i.i.d. random variables X1, …, Xn. (A sketch follows.)
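
A method-of-moments sketch, using the normalizations β1 = m3²/m2³ and β2 = m4/m2² with sample central moments mk = (1/n) Σ (Xi - X̄)^k; the book's exact normalization may differ (√β1 is often reported as the skewness coefficient).

# MEEs of the skewness and kurtosis coefficients from sample central moments.
import numpy as np

rng = np.random.default_rng(4)
X = rng.gamma(shape=3.0, size=500)   # illustrative skewed sample
xbar = X.mean()
m2, m3, m4 = [((X - xbar) ** k).mean() for k in (2, 3, 4)]
beta1 = m3**2 / m2**3                # squared skewness coefficient
beta2 = m4 / m2**2                   # kurtosis coefficient
print(np.sqrt(beta1), beta2)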

5.8.3 Let X1, …, Xn be i.i.d. random variables having a common distribution which is a mixture α G(λ, ν1) + (1 – α) G(λ, ν2), 0 < α < 1, 0 < λ, ν1, ν2 < ∞. Construct the MEEs of α, λ, ν1, and ν2.

5.8.4 Let X1, …, Xn be i.i.d. random variables having a common truncated normal distribution with p.d.f.

f(x; μ, σ, ξ) = n(x | μ, σ²) / [1 - Φ((ξ - μ)/σ)], x ≥ ξ,

where n(x | μ, σ²) = (1/σ)φ((x - μ)/σ) is the N(μ, σ²) p.d.f. Determine the MEEs of (μ, σ, ξ).

Section 5.9

5.9.1 Consider the fixed–effects two–way ANOVA (Section 4.6.2). Accordingly, Xijk, i = 1, …, r1; j = 1, …, r2; k = 1, …, n, are independent normal random variables, N(μij, σ²), where

μij = μ + τi^A + τj^B + τij^AB,

Construct PTEs of the interaction parameters τij^AB and the main effects τi^A, τj^B (i = 1, …, r1; j = 1, …, r2). [The estimation is preceded by a test of significance. If the test indicates nonsignificant effects, the estimates are zero; otherwise they are given by the values of the contrasts.]

5.9.2 Consider the linear model Y = Aβ + e, where Y is an N × 1 vector, A is an N × p matrix (p < N), and β a p × 1 vector. Suppose that rank(A) = p. Let β′ = (β′(1), β′(2)), where β(1) is a k × 1 vector, 1 ≤ k < p. Construct the LSE PTE of β(1). What are the expectation and the covariance matrix of this estimator?

5.9.3 Let S1² be the sample variance of n1 i.i.d. random variables having a N(μ1, σ1²) distribution and S2² the sample variance of n2 i.i.d. random variables having a N(μ2, σ2²) distribution. Furthermore, S1² and S2² are independent. Construct PTEs of σ1² and σ2². What are the expectations and variances of these estimators? For which levels of significance α do these PTEs have a smaller MSE than S1² and S2² separately? (A sketch of such a PTE follows this problem.)
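
One way to set up such a PTE is sketched below: pool the two sample variances when a two-sided F-test accepts σ1² = σ2², and otherwise keep the separate estimator; the test and the pooling rule are assumptions of the sketch, and pte_var is a hypothetical helper name.

# Pre-test estimator of sigma1^2 based on a preliminary variance-ratio test.
from scipy.stats import f

def pte_var(S1, S2, n1, n2, alpha=0.05):
    F = S1 / S2
    lo = f.ppf(alpha / 2, n1 - 1, n2 - 1)
    hi = f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)
    if lo < F < hi:   # accept sigma1^2 = sigma2^2: use the pooled variance
        return ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)
    return S1         # reject: keep the separate estimator

print(pte_var(4.1, 2.6, n1=15, n2=12))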

Section 5.10

5.10.1 What is the asymptotic distribution of the sample median Me when the i.i.d. random variables have a distribution which is the mixture

Unnumbered Display Equation

Here L(μ, σ) designates the Laplace distribution with location parameter μ and scale parameter σ.

5.10.2 Suppose that X(1) ≤ … ≤ X(9) is the order statistic of a random sample of size n = 9 from a rectangular distribution R(μ - σ, μ + σ). What are the expectation and variance of

(i) the tri–mean estimator of μ;
(ii) the Gastwirth estimator of μ?

5.10.3 Simulate N = 1000 random samples of size n = 20 from the distribution of X ∼ 10 + 5t[10]. Estimate in each sample the location parameter μ = 10 by the sample mean X̄n, the sample median Me, the Gastwirth estimator GL, and the 10%-trimmed mean X̄.10, and compare the means and MSEs of these estimators over the 1000 samples. (A simulation sketch follows.)
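
A direct implementation sketch of the requested simulation; the Gastwirth estimator below uses the common 0.3/0.4/0.3 weighting of the lower-tertile order statistic, the median, and the upper-tertile order statistic, which is one standard definition of GL.

# Compare Xbar, Me, GL, and the 10%-trimmed mean over N = 1000 samples.
import numpy as np
from scipy.stats import t, trim_mean

rng = np.random.default_rng(0)
N, n, mu = 1000, 20, 10.0
samples = mu + 5.0 * t.rvs(df=10, size=(N, n), random_state=rng)

xbar = samples.mean(axis=1)
med = np.median(samples, axis=1)
s = np.sort(samples, axis=1)
gl = 0.3 * s[:, n // 3] + 0.4 * med + 0.3 * s[:, n - 1 - n // 3]
tm = trim_mean(samples, 0.10, axis=1)

for name, est in [("mean", xbar), ("median", med), ("GL", gl), ("trim10", tm)]:
    print(name, est.mean(), ((est - mu) ** 2).mean())   # mean and MSE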

PART IV: SOLUTIONS OF SELECTED PROBLEMS

5.2.1

(i) The m.s.s. is (X(1), X(n)), where X(1) < … < X(n). The m.s.s. is complete, and

Unnumbered Display Equation

where U(1) < … < U(n) are the order statistics of n i.i.d. R(0, 1) random variables. The p.d.f. of U(i), i = 1, …, n is

Unnumbered Display Equation

Thus, U(i) ∼ Beta(i, n - i + 1). Accordingly

Unnumbered Display Equation

Solving the equations

Unnumbered Display Equation

for θ̂1 and θ̂2, we get the UMVU estimators

Unnumbered Display Equation

and

Unnumbered Display Equation

(ii) The covariance matrix of (θ̂1, θ̂2) is

Unnumbered Display Equation

5.2.3 The m.s.s. is (nX(1), U), where U = Σ (Xi - X(1));

Unnumbered Display Equation

Thus λ̂ = (n - 2)/U is UMVU of λ, provided n > 2. X(1) ∼ μ + G(nλ, 1). Thus, E{X(1)} = μ + 1/(nλ) and E{U} = (n - 1)/λ; hence μ̂ = X(1) - U/(n(n - 1)) is a UMVU estimator of μ.

Unnumbered Display Equation

Thus,

Unnumbered Display Equation

provided n > 3. Since X(1) and U are independent,

Unnumbered Display Equation

for n > 1.

Unnumbered Display Equation

5.2.4

(i) X1, …, Xn i.i.d. N(μ, 1). X̄n ∼ N(μ, 1/n). Thus,

Unnumbered Display Equation

Set λ(n) = (1 - 1/n)^(-1/2). The UMVU estimator of Φ(μ) is Φ(λ(n)X̄n).

(ii) The variance of Φ(λ(n)X̄n) is

Unnumbered Display Equation

If Y ∼ N(η, τ²), then

Unnumbered Display Equation

In our case,

Unnumbered Display Equation

5.2.6 X1, …, Xn are i.i.d. NB(ψ, ν), 0 < ψ < ∞, ν known. The m.s.s. is T = Σ Xi.

Unnumbered Display Equation

5.2.8 inline.jpg. Hence, inline.jpg, where inline.jpg. Therefore,

Unnumbered Display Equation

Setting t = inline.jpg, we get

Unnumbered Display Equation

Thus, inline.jpg or inline.jpg.

5.2.9 The probability of success is p = e^{-Δ/θ}. Accordingly, θ = -Δ/log (p). Let p̄n = (1/n) Σ Ji; then E{p̄n} = p. The estimator of θ is θ̂n = -Δ/log (p̄n). The bias of θ̂n is B(θ̂n) = E{θ̂n} - θ. A Taylor expansion around p yields

Unnumbered Display Equation

where |p* - p| < |p̄n - p| (p* an intermediate point). Moreover,

Unnumbered Display Equation

Furthermore,

Unnumbered Display Equation

Hence,

Unnumbered Display Equation

5.2.17 X1, …, XnG(λ, ν), ν ≥ 3 fixed.

(i) inline.jpg. Hence

Unnumbered Display Equation

The UMVU of λ² is (nν - 1)(nν - 2)/T², where T = Σ Xi.

(ii) inline.jpg
(iii) In(λ) = nν/λ². The Cramér–Rao lower bound for the variance of the UMVU of λ² is

Unnumbered Display Equation

(iv) inline.jpg.

Unnumbered Display Equation

inline.jpg

Unnumbered Display Equation

The second order BLB is

Unnumbered Display Equation

and the third- and fourth-order BLBs do not exist.

5.3.3 This is a continuation of Example 5.8. We have a sample of size n of vectors (X, Y), where inline.jpg. We have seen that inline.jpg. We now derive the variance of σ̂² = (QX + QY)/2n, where QX = Σ (Xi - X̄)² and QY = Σ (Yi - Ȳ)². Note that QY | QX ∼ σ²(1 - ρ²)χ²inline.jpg. Hence,

Unnumbered Display Equation

and

Unnumbered Display Equation

It follows that

Unnumbered Display Equation

Finally, since (QX + QY, PXY) is a complete sufficient statistic, and since ρ̂ is invariant with respect to translations and changes of scale, Basu's Theorem implies that σ̂² and ρ̂ are independent. Hence, the variance–covariance matrix of (σ̂², ρ̂) is

Unnumbered Display Equation

Thus, the efficiency of (σ̂², ρ̂), according to (5.3.13), is

Unnumbered Display Equation

5.4.3 X1, …, X5 are i.i.d. having a Laplace distribution with p.d.f. f(x; μ, σ) = (1/σ)ψ((x - μ)/σ), where -∞ < μ < ∞, 0 < σ < ∞, and ψ(x) = ½e^{-|x|}, -∞ < x < ∞. The standard c.d.f. (μ = 0, σ = 1) is

Unnumbered Display Equation

Let X(1) < X(2) < X(3) < X(4) < X(5) be the order statistics. Note that Me = X(3). Also X(i) = μ + σU(i), i = 1, …, 5, where the U(i) are the order statistics from the standard distribution.

The densities of U(1), U(3), U(5) are

Unnumbered Display Equation

Since ψ(u) is symmetric around u = 0, F(-u) = 1 - F(u). Thus, p(3)(u) is symmetric around u = 0, and U(1) ∼ -U(5). It follows that α1 = E{U(1)} = -α5 and α3 = 0. Moreover,

Unnumbered Display Equation

Accordingly, α′ = (-1.58854, 0, 1.58854), V{U(1)} = V{U(5)} = 1.470256, V{U(3)} = 0.35118.

Unnumbered Display Equation

Thus,

Unnumbered Display Equation

Let

Unnumbered Display Equation

The matrix V is singular, since U(5) = -U(1). We take the generalized inverse

Unnumbered Display Equation

We then compute

Unnumbered Display Equation

According to this, the estimator of μ is μ̂ = X(3), and that of σ is σ̂ = 0.63X(3) - 0.63X(1). These estimators are not BLUE. Take the ordinary LSEs, given by

Unnumbered Display Equation

then the variance–covariance matrix of these estimators is

Unnumbered Display Equation

Thus, V{μ̂*} = 0.0391 < V{μ̂} = 0.3512, and V{σ̂*} = 0.5827 > V{σ̂} = 0.5125 (the asterisk denoting the ordinary LSE). Due to the fact that V is singular, μ̂* is a better estimator than μ̂.

5.6.6 (xi, Ti), i = 1, …, n. Ti ∼ (α + β xi)G(1, 1), i = 1, …, n.

Unnumbered Display Equation

The MLEs of α and β are the roots of the equations:

(I) inline.jpg;

(II) inline.jpg.

The Newton–Raphson method for numerically approximating (α, β) is

Unnumbered Display Equation

The matrix

Unnumbered Display Equation

where wi = inline.jpg.

Unnumbered Display Equation

We assume that |D(α, β)| > 0 in each iteration. Starting with (α1, β1), we get the following after the lth iteration:

Unnumbered Display Equation

The LSE initial solution is

Unnumbered Display Equation

5.6.9 inline.jpg. For the Hardy–Weinberg model,

Unnumbered Display Equation

where

Unnumbered Display Equation

Note that J3 = n - J1 - J2. Thus, the MCE of θ is the root of N(θ) = 0.

5.7.1 X1, …, Xn are i.i.d. R(0, θ), 0 < θ < ∞.

(i) cX ∼ R(0, cθ) for all 0 < c < ∞. Thus, the model is preserved under the group of scale transformations 𝒢.
(ii) The m.s.s. is X(n). Thus, an equivariant estimator of θ is

Unnumbered Display Equation

Consider the following invariant loss function:

Unnumbered Display Equation

There is only one orbit of 𝒢 in Θ. Thus, find the constant c to minimize

Unnumbered Display Equation

Thus, c0 = E1{X(n)}/E1{X(n)²}, computed at θ = 1. Note that under θ = 1, X(n) ∼ Beta(n, 1), so E1{X(n)} = n/(n + 1) and E1{X(n)²} = n/(n + 2); hence c0 = (n + 2)/(n + 1).

5.7.3

(i) The Pitman estimator of the location parameter in the normal case, according to Equation (5.7.33), is

Unnumbered Display Equation

Moreover,

Unnumbered Display Equation

and

Unnumbered Display Equation

The other terms cancel. Thus,

Unnumbered Display Equation

This is the best equivariant estimator for the squared error loss.

(ii) For estimating σ in the normal case, let (X̄, S) be the m.s.s., where inline.jpg. An equivariant estimator of σ, for the translation–scale group, is

Unnumbered Display Equation

inline.jpg. By Basu's Theorem, S is independent of u. We find ψ(u) by minimizing E{(Sψ - 1)²} under σ = 1. E1{S² | u} = E1{S²} = 1 and E1{S | u} = E1{S}. Here, ψ0 = E1{S}/E1{S²}.
