Subject Index

(1−α) level p–content intervals

(β, γ)–upper tolerance limit

F–distributions

M–estimates

P–value

S–method

α–trimmed means

α–similar

α–similar on the boundary

σ-field

σ–finite measure

p–content prediction interval

p–content tolerance intervals

p–quantiles

t–test

absolutely continuous

acceptance probability

admissibility of estimators

algebra

alternative hypothesis

analysis of variance

ancillarity

ancillary statistic

association

asymptotic confidence intervals

asymptotic efficacy of Tₙ

asymptotic efficiency of MLE

asymptotic variance

asymptotically efficient

asymptotically normal

asymptotically size α test

augmented symmetric functions

autoregressive time–series

average predictive error probability

Bahadur’s theorem

Barndorff–Nielsen p*–formula

Bartlett test

Basu’s theorem

Bayes

Bayes decision function

Bayes equivariant

Bayes factor

Bayes procedures

Bayes theorem

Bayesian analysis

Bayesian approach

Bayesian estimation

Bayesian inference

Bayesian testing

Bernoulli trials

best linear combination of order statistics

best linear unbiased estimators

Beta distributions

beta function

Bhattacharyya lower bound

binomial distribution

bivariate distribution function

Bonferroni inequality

Borel σ–field

Borel–Cantelli Lemma

Box–Muller transformation

C.R. regularity conditions

canonical parameters

Cantelli’s theorem

Cantor distribution

Cauchy distribution

Central Limit Theorem

central moments

Chapman–Robbins inequality

characteristic function

Chebychev inequality

Chebychev–Hermite polynomial

chi–squared distribution

classical Bayesian model

coefficient of correlation

coefficient of determination

coefficient of kurtosis

complete class

complete sufficient statistic

completeness

conditional expectation

conditional probability

conditional test function

confidence interval

confidence level

confidence limits

confidence probability

confidence region

confluent–hypergeometric function

conjugate families

consistency of the MLE

consistent estimators

contingency table

continuity theorem

contrasts

convergence almost surely

convergence in distribution

convergence in probability

convergence in r–th mean

convolution

covariance matrix

covariance stationary

Cramér–Rao lower bound

Cramér–Rao regularity condition

credibility intervals

cross product ratio

cumulants

cumulant generating function

Cumulative Probability Integral Transformation

curved exponential family

decision function

Definition of Sufficiency

degrees of freedom

delta method

De Morgan’s laws

di–gamma

Dirichlet distribution

discrete algebra

dispersion

distribution free (β, γ) upper tolerance limit

distribution free confidence intervals

distribution free test

distribution of sums

Dominated Convergence Theorem

dynamic linear model

Dynkin’s regularity conditions

Dynkin’s theorem

E–M algorithm

Edgeworth approximation

Edgeworth expansion

efficiency function

efficiency of multi–parameter estimator

empirical Bayes

empirical Bayes estimators

empirical distribution function

equivalent–likelihood partition

equivariant

equivariant estimator

error of Type I

error of Type II

estimating functions

Euler constant

exchangeable

exponential boundedness

exponential conjugate

exponential distribution

exponential integral

exponential type family

extreme value

failure (hazard) rate function

family of distributions

Fatou’s lemma

Fatou’s Theorem

Fieller’s method

Fisher information function

Fisher information matrix

fixed width confidence intervals

formal Bayes estimators

fractiles

Gamma distribution

gamma function

Gauss–Markov Theorem

geometric random

Glivenko–Cantelli Theorem

goodness of fit test

guaranteed coverage tolerance intervals

Hardy–Weinberg model

Hellinger distance

Helmert transformation

hierarchical models

highest posterior density

Hölder’s Inequality

Hunt–Stein Theorem

hypergeometric distribution

idempotent matrix

importance density

importance sampling

improper priors

incomplete beta function

incomplete beta function ratio

incomplete gamma function

independence

independent events

information in an estimating function

interaction

interquartile range

Invariance Principle

inverse regression

James–Stein

Jeffreys prior density

Jensen’s Inequality

joint density function

joint distributions

Kalman filter

Karlin’s admissibility theorem

Karlin’s Lemma

Khinchin WLLN

Kullback–Leibler information

kurtosis

Lagrangian

Laplace distribution

Laplace method

large sample confidence intervals

large sample tests

law of iterated logarithm

law of total variance

laws of large numbers

least–squares

least–squares estimators

Lebesgue dominated convergence

Lebesgue integral

likelihood functions

likelihood ratio test

likelihood statistic

Lindeberg–Feller Theorem

linear models

link functions

local hypotheses

location parameter

log–convex

log–normal distribution

loss functions

Lyapunov’s Inequality

Lyapunov’s Theorem

main effects

marginal distributions

Markov WLLN

maximal invariant statistic

maximum likelihood estimator

measurable space

median

minimal sufficient statistic

minimax estimators

minimax test

minimum chi–squared estimator

minimum variance unbiased estimator

Minkowski’s Inequality

mixed effect model

mixture

mixture of the two types

Model II of Analysis of Variance

modes of convergence

moment generating function

moment of order

moment–equations estimators

monotone convergence

monotone likelihood ratio

monotonic class

multinomial distribution

multinormal distribution

multivariate hypergeometric distributions

multivariate negative binomial

multivariate normal distribution

multivariate–t

Negative–Binomial

Newton–Raphson method

Neyman structure

Neyman–Fisher Factorization Theorem

Neyman–Pearson Lemma

non–central F

non–central t

non–central chi–squared

non–informative prior

non–parametric test

normal approximations

normal distribution

normal regression

nuisance parameter

nuisance parameters, unbiased tests

null hypothesis

observed significance level

odds–ratio

operating characteristic function

orbit

order of magnitude in probability

order statistics

orthogonal subvectors

parameter of non–centrality

parameter space

parametric empirical Bayes

Pareto distribution

partial correlation

partition of sample space

Pascal distribution

Pitman ARE

Pitman estimator

Pitman relative efficiency

Poisson distributions

posterior distribution H(θ | X)

posterior risk

power of a test

pre–test estimators

precision parameter

predictive distributions

predictive error probability

prior distribution

prior risk

probability generating function

probability measure

probability model

proportional–closeness

psi function

Radon–Nikodym

Radon–Nikodym theorem

random sample

random variable

random walk

randomized test

Rao’s efficient score statistic

Rao–Blackwell Lehmann–Scheffé Theorem

rectangular distribution

regression analysis

regular

regular family of distributions

regular parametric family

regularity conditions

regularity conditions for estimating functions

relative betting odds

relative efficiency

ridge regression

ridge trace

Riemann integrable

risk function

robust estimation

saddlepoint approximation

sample correlation

sample median

sample quantile

sample range

scale parameter

Schwarz inequality

score function

second order deficiency

second order efficiency

sensitivity

sequential fixed–width interval estimation

shape parameter

simulation

simultaneous confidence intervals

simultaneous coverage probability

size of a test

skewness

Slutsky’s Theorem

standard deviation

standard normal distribution

standard normal integral

standard errors

statistic

statistical model

Stein type estimators

Stein’s two–stage procedure

Stieltjes–Riemann integral

Stirling approximation

stopping variable

Strong Law of Large Numbers

strongly consistent

structural distributions

structural estimators

Student’s t–distribution

subalgebra

sufficient statistics

super efficient

symmetric difference

test function

tetrachoric correlation

The Delta Method

The Law of Total Variance

tight family of distributions

tolerance distributions

tolerance intervals

total life

transformations

trimean

Type I censoring

unbiased estimator

unbiased test

uniform distribution

uniform integrability

uniformly most accurate confidence interval

uniformly most accurate unbiased

uniformly most powerful

upper quartiles

utility function

variance

variance components

variance stabilizing transformation

variance–covariance matrix

Wald Fundamental Identity

Wald Sequential Probability Ratio Test

Wald statistic

Wald Theorem

weak convergence

Weak Law of Large Numbers

Weibull distribution

Wiener process

Wilcoxon signed–rank test

Wilks’ likelihood ratio statistic
