Index

A

α–β filter, 302

Algebraic Riccati equation, 280, 421

Answers to summary questions, 539–541

AR Model, 12

Asymptotic distributions, 99–101

Asymptotic efficiency (see Large sample properties of estimators)

Asymptotic mean, 100

Asymptotic normality (see Large sample properties of estimators)

Asymptotic unbiasedness (see Large sample properties of estimators)

Asymptotic variance, 100–101

Augmented state-variable model (see State augmentation)

B

Bartlett–Brillinger–Rosenblatt formulas, 478–480

Basic state-variable model, continuous-time:

defined, 414–415

properties of, 415–416, 427–428

Basic state-variable model, discrete-time (see also Not-so-basic state-variable model):

augmented, 324–325

defined, 215–216

properties of, 216–220

Basis functions, 13

Batch processing (see Least-squares estimation processing)

Bayes risk, 210

Best linear unbiased estimation, 121–131

Best linear unbiased estimator:

batch form, 124–125

comparison with maximum a posteriori estimator, 196

comparison with mean-squared estimator, 181–184

comparison with weighted least-squares estimator, 125–126

derivation from a Kalman filter, 266

properties, 126–130

random parameters, 182–184

recursive forms, 130–131

Biases (see Not-so-basic state-variable model)

Bicepstrum (see Complex bicepstrum)

Binary detection:

detection rules, 201–203

elements of, 200–204

errors, 202–203

single most-likely replacement detector, 209–210

square-law detector, 203–204

Biographies (see Fisher, biography; Gauss, biography; Kalman, biography; Wiener, biography)

Bispectrum (see Polyspectra)

Block component method, 412

BLUE (see Best linear unbiased estimator)

C

C(q, k) formula, 481–482

Causal invertibility, 244, 252

Central limit theorems, 101–102

Colored noises (see Not-so-basic state-variable model)

Completeness, 445

Complex bicepstrum, 496–497

Complex cepstrum, 489

Computation, 7–8, 20, 37, 51, 66–67, 131, 156–157, 184, 199, 222–223, 235, 253, 294, 314, 328, 357–358, 374–375, 392–393, 408–409, 426–427, 490–491, 514

Conditional mean:

defined, 167

properties of, 169–170

Conditional mean estimator, 184–185 (see also Nonlinear estimator)

Consequence of relinearization, 387–388

Consistency (see Large sample properties of estimators)

Constrained total least squares, 38–39

Convergence:

distribution, 93

mean-square, 97

probability, 93, 97

with probability 1, 93

relationships among different types, 94

rth mean, 93

Convolutional model, 10, 16, 21–22, 329

Correlated noises (see Not-so-basic state-variable model)

Correlation:

biased estimator, 492

unbiased estimator, 493

Correlation-based normal equations, 494

Covariance form of recursive BLUE, 130

Covariance form of recursive least-squares estimator, 63–64

Coverage, 3–6

Cramer–Rao inequality:

scalar parameter, 78–82

vector of parameters, 82–84, 86–87

Cross-sectional processing (see Least-squares estimation processing)

Cumulant-based normal equations, 484–485

Cumulants (see also Higher-order statistics applied to linear systems):

defined for random processes, 454

defined for random variables, 452

domain of support, 454

estimation of, 474–475

properties, 464–465, 466–468

relation to moments, 466

slices, 455

symmetries, 457

D

Decision space, 200

Deconvolution (see also Steady-state MVD filter):

double-stage smoothing, 311–312

higher-order statistics, 486–488

maximum-likelihood (MLD), 196–198, 340

minimum-variance (MVD), 181, 329–338

model formulation, 16–17, 21

Ott and Meder prediction-error filter, 268–269

single-stage smoothing, 308–309

Detection (see Binary detection)

Dirac delta function, 416

Discretization of linear time-varying state-variable model, 371–373

Divergence phenomenon, 267–268

Double C(q, k) algorithm, 489, 495–496

E

Efficiency (see Small sample properties of estimators)

Eigenvalues and poles, 513

Eigenvalues and transformations of variables, 513

Equation error, 20

Estimate types (see Filtering; Prediction; Smoothing)

Estimation (see also Philosophy):

error, 19

model, 19

Estimation algorithm M-files (see M-files)

Estimation of deterministic parameters (see Best linear unbiased estimation; Least-squares estimation processing; Maximum-likelihood estimation)

Estimation of higher-order statistics (see Cumulants; Polyspectra)

Estimation problem, 3

Estimation of random parameters (see Best linear unbiased estimator; Least-squares estimator; Maximum a posteriori estimation; Mean-squared estimation)

Estimation techniques (see Best linear unbiased estimator; Least-squares estimator; Maximum a posteriori estimation; Maximum-likelihood estimators; Mean-squared estimators; State estimation)

Estimator properties (see Large sample properties of estimators; Small sample properties of estimators)

Estimator versus estimates, 20

Examples:

aerospace vehicle control, 33–35

binomial model:

likelihood ratios, 141

probability versus likelihood, 139–140

bispectra and power spectra for two spectrally equivalent systems, 461–464

C(q, k) formula, 481–482

complete sufficient statistic, 445–446, 447

consistency, 97

convergence in probability, 98

Cramer–Rao bound (Cauchy distribution), 79–80

cumulant-based normal equation, 484–485

cumulants boost signal-to-noise-ratio, 465–466

cumulants for Bernoulli–Gaussian sequence, 454–455

cumulants: what do they look like?, 457

deconvolution, 16–17

deconvolution by double-stage smoothing, 311–312

deconvolution by single-stage smoothing, 308–309

deconvolution using higher-order statistics, 486–488

Dirac delta function in a calculation, 416

divergence phenomenon, 267–268

factorization theorem to determine sufficient statistics, 438–439

Fisher information matrix for generic linear model, 84

fixed-interval smoother, 320–321

fixed-point smoother, 324

function approximation, 13–14

GM equations for MA(q) systems, 486

hypothesis testing, 143–144

identification of coefficients of a finite-difference equation, 12–13, 110–111

impulse and frequency response plots for steady-state predictors, 286–288

impulse response identification, 10–12, 110

impulse response of steady-state predictor, 284–285

instrument calibration, 33, 64, 76

Kalman filter for first-order system with correlated disturbance and measurement noises, 352–354

Kalman filter quantities for two second-order systems, 260–262

Markov model for a colored process, 350

matrix exponential calculations, 373

maximum a posteriori estimator of population mean, 194–195

maximum-likelihood deconvolution, 196–198

maximum-likelihood estimates of parameters of a missile, 408, 410

maximum-likelihood estimation using sufficient statistics, 442–443

maximum-likelihood estimation using sufficient statistics: generic linear model, 443–444

maximum-likelihood estimator of population mean and variance, 150–151

maximum-likelihood estimator of variance, 152

mean and variance computed for a first-order dynamical system, 219

minimum-variance deconvolution, 181

MVD filters for broadband and narrowband channels, 332–336, 338–339

nonlinear estimator, 187

nonlinear model, 15–16

nonlinear predictor, 187–188

Ott and Meder prediction-error filter, 269

perturbation equations for satellite dynamics, 370

prediction performance for a first-order dynamical system, 232–233

q-slice formula, 483–484

random number generator and maximum likelihood, 139, 150–151

random number generator and probability, 138

relationship of BLUE to Kalman filter, 266–267

sample mean, 7, 98

satellite dynamics, 366–367

satellite dynamics plus unknown parameters, 392

second-order cumulant derivation, 453

sensitivity of Kalman filter, 260, 262–265

signal-to-noise ratio for a first-order dynamical system, 221–222

single-channel Kalman filter, 266

square-law detector, 203–204

state augmentation procedure for colored disturbance, 351

state augmentation procedure for colored measurement noise, 351–352

state estimation, 14–15

state estimation and MS and MAP estimation, 198–199

state estimation errors in a feedback control system:

continuous-time, 420–421

discrete-time, 270

state-variable models:

AR models, 504–505

ARMA models, 506–507

controllable canonical form, 155

discretization of second-order differential equation, 502

MA models, 503–504

parallel connection of systems, 507–508

second-order differential equation, 500, 501–502

subsystem of two complex conjugate poles, 509

transfer function for a second-order ARMA model, 511

steady-state Kalman–Bucy filter for a second-order system, 421–423

steady-state Kalman–Bucy filter in an optimal control system, 425

steady-state Kalman filter, 281–282

sufficient statistics, 437, 438

sufficient statistics for exponential families, 440, 441

SVD computation of θ̂_LS(k), 50–51

third-order cumulants for an MA(q) model, 481

unbiasedness of variance estimator, 95

unbiasedness of WLSE, 77

uniformly minimum variance unbiased estimator, 446

uniformly minimum variance unbiased estimator: generic linear model, 446–447

Volterra series representation of a nonlinear system, 17–18

Expanding memory estimator, 36

Expectation, expansion of total expectation, 117

Exponential family of distributions, 441–444

Extended Kalman filter:

application to parameter estimation, 391–392

correction equation, 389

derived, 388–389

flowchart, 390

iterated, 390–391

nonlinear discrete-time system, 393–394

prediction equation, 388

F

Factorization theorem, 438

Fading memory estimator, 70

Fading-memory filtering, 268

Filtered estimate, 19

Filtering (see also Kalman–Bucy filtering):

applications, 271–276

comparisons of Kalman and Wiener filters, 293

computations, 251

covariance formulation, 251

derivation of Kalman filter, 246–247

divergence phenomenon, 267–268

examples, 259–270

information formulation, 252

innovations derivation of Kalman filter, 246–247

MAP derivation of Kalman filter, 253–255

properties, 248–253

recursive filter, 248

relationship to BLUE, 266

relationship to Wiener filtering, 286, 289–291

sensitivity, 260, 262–265

steady-state Kalman filter, 280–285

Finite-difference equation coefficient identification (see Identification of)

Finite-memory filtering, 268

FIR model, 10

Fisher, biography, 85–86

Fisher information matrix, 83

Fisher’s information, 78

Fixed-interval smoother (see Smoothing)

Fixed-lag smoother (see Smoothing)

Fixed memory estimator, 36

Fixed-point smoother (see Smoothing)

Function approximation, 13–14

G

Gauss, biography, 28–29

Gauss–Markov random sequences, 212–214, 328–329

Gaussian random sequences (see Gauss–Markov random sequences)

Gaussian random variables (see also Conditional mean):

conditional density function, 166–168

joint density function, 165–166

multivariate density function, 165

properties of, 168–169

univariate density function, 165

Gaussian sum approximation, 186

Generalized GM equation, 494

Generalized regression neural network, 186

Generic linear model, 9–10

Glossary of major results, 518–523

GM equation, 485–486

H

Harmonic retrieval problem, 471

Higher-order moments, 452

Higher-order statistics (see Cumulants; Polyspectra)

Higher-order statistics applied to linear systems:

Bartlett–Brillinger–Rosenblatt formulas, 478–480

C(q, k) formula, 481–482

complex bicepstrum, 496–497

cumulant-based normal equations, 484–485

deconvolution, 486–488

double C(q, k) algorithm, 489, 495–496

generalized GM equation, 494

GM equation, 485–486

MA(q) model, 481

q-slice algorithm, 498

q-slice formula, 483–484

residual time-series method, 497–498

Hypotheses (see also Binary detection):

binary, 138

multiple, 138

I

Identifiability, 154–155

Identification of:

coefficients of a finite-difference equation, 12–13

impulse response, 10–12

Impulse response identification (see Identification of)

Information form of recursive BLUE, 130

Information form of recursive least-squares estimator, 59–60

Innovations process:

defined, 233

properties of, 233–234

Innovations system, 252–253

Instrumental variables, 116

Invariance property of maximum-likelihood estimators (see Maximum-likelihood estimators)

Iterated extended Kalman filter (see Extended Kalman filter)

Iterated least squares, 385–386

J

Jordan canonical form, 508–509

K

Kalman, biography, 244–245

Kalman–Bucy filtering:

derivation using a formal limiting procedure, 418–420

derivation when structure of filter is pre-specified, 428–431

notation and problem statement for derivation, 416–417

optimal control application, 423–425

statement of, 417

steady-state, 421

system description for derivation, 414–415

Kalman–Bucy gain matrix, 417

Kalman filter (see Filtering; Steady-state Kalman filter; Wiener filter)

Kalman filter applications (see Filtering)

Kalman filter sensitivity system, 260, 401, 403–404

Kalman filter tuning parameter, 266

Kalman gain matrix, 246

Kalman innovations system (see Innovations system)

L

Lagrange’s method, 131

Lagrange variation of parameter technique, 375–376

Large sample properties of estimators (see also Least-squares estimator):

asymptotic efficiency, 103–104

asymptotic normality, 101–103

asymptotic unbiasedness, 94–95

consistency, 95–99

Least-squares estimation processing:

batch, 27–37

cross-sectional, 72–73

recursive, 58–69

Least-squares estimator:

batch form, 30–31

comparison with ML estimator, 154

examples, 33–35

initialization of recursive forms, 61, 67–69

large sample properties, 115–117

multistage, 73

normalization of data, 36–37

properties, 108–117

random parameters, 182–184

recursive covariance form, 63–64

recursive information form, 59–60

restricted, 43, 120

scale change, 36–37

small sample properties, 109–114

SVD computation, 49–51

vector measurements, 68

Lehmann–Scheffé theorem, 446

Levenberg–Marquardt algorithm, 401–404

Levinson algorithm, 295–300

Likelihood:

compared with probability, 137–138

conditional, 194

continuous distributions, 141, 148

defined, 137–140

unconditional, 194

Likelihood ratio:

decision making, 142–144

defined for multiple hypotheses, 141–142

defined for two hypotheses, 140

Linear algebra facts, 45

Linear model (see also Generic linear model):

defined, 10

examples of (see Deconvolution; Examples, state estimation; Identification of; Nonlinear measurement model)

Linear prediction:

backward prediction error filter, 236

forward prediction error filter, 235

Kalman filter solution, 267

lattice, 237–238

reflection coefficient, 236

tapped delay line, 236

Linear projection, 190

Linear system concepts, 294–295

Linearized nonlinear system (see Nonlinear dynamical systems)

Log-likelihood:

basic state-variable model, 398–400

defined, 148

important dynamical system, 154–156

Loss function, 184

Lyapunov equation, 220

M

M-files:

fixed-interval smoother, 536–538

Kalman filter, 526–529

Kalman predictor, 529–532

recursive weighted least-squares estimator, 525–526

suboptimal filter, 532–534

suboptimal predictor, 534–536

MA model, 10

Markov parameters, 512

Markov sequence, 213–214

MATLAB, 8

Matrix (see also Pseudo-inverse):

fundamental, 375

gradient, 430

Hessian, 401

intensity, 366

inversion lemma, 62

Jacobian, 368, 369

nonsingular, 31

pseudo-Hessian, 403

spectral intensity, 366

state transition, 510

Matrix Riccati differential equation, 417

Matrix Riccati equation, 250

Maximum a posteriori estimation, 192–199

Maximum a posteriori estimators:

comparison with best linear unbiased estimator, 196

comparison with mean-squared estimator, 195

Gaussian linear model, 195–199

general case, 193–195

Maximum-likelihood deconvolution (see Deconvolution)

Maximum-likelihood estimation, 147–156, 441–444

Maximum-likelihood estimators:

comparison with best linear unbiased estimator, 154

comparison with least-squares estimator, 154

exponential families, 441–444

linear model, 152–154

obtaining, 148–151

properties, 151–152

Maximum-likelihood method, 148–151

Maximum-likelihood state and parameter estimation:

computing θ̂_ML, 400–404

log-likelihood function for the basic state-variable model, 398–400

steady-state approximation, 404–408

Mean-squared convergence (see Convergence, mean-square)

Mean-squared estimation, 173–188

Mean-squared estimators (see also Conditional mean estimator):

comparison with best linear unbiased estimator, 181–184

comparison with maximum a posteriori estimator, 195

derivation, 175–176

Gaussian case, 176–177

linear and Gaussian model, 179–181

properties of, 178–179

Measurement differencing technique, 355

Measurement problem, 2

Measurement residual, 20

Median estimator, 210

Memory:

expanding, 36

fixed, 36

Message space, 200

Method of moments, 451–452

Minimal sufficient statistic, 439, 583

Minimum-variance deconvolution (see Deconvolution)

MLD (see Deconvolution)

Mode estimator, 210

Modeling (see also Philosophy):

estimation problem, 3

measurement problem, 2

representation problem, 2

validation problem, 3

Moments (see Cumulants)

Multistage least-squares (see Least-squares estimator)

Multivariate Gaussian random variables (see Gaussian random variables)

MVD (see Deconvolution)

N

Nominal:

measurements, 16

parameters, 16

Nonlinear dynamical systems:

discretized perturbation state-variable model, 374

linear perturbation equations, 367–370

model, 365–366

Nonlinear estimator, 185–188

Nonlinear measurement model, 15–16

Normal equations, 32, 484–485, 494

Normalization of data, 36–37

Notation, 18–20

Not-so-basic state-variable model:

biases, 346–347

colored noises, 350–354

correlated noises, 347–350

perfect measurements, 354–357

O

Observation space, 200

Orthogonality condition, 32

Orthogonality principle, 177–178

P

Parameter estimation (see Extended Kalman filter)

Parameters, 2

Perfect measurements (see Not-so-basic state-variable model)

Perturbation equations (see Nonlinear dynamical systems)

Philosophy:

estimation theory, 7

modeling, 6

Polyspectra:

bispectrum, 458

defined, 458

domain of support, 458

estimation of bispectrum, direct method, 476–477

estimation of bispectrum, indirect method, 476

symmetries, 459

trispectrum, 458

Predicted estimate, 19

Prediction (see also Linear prediction):

general, 229–233

recursive predictor, 248

single-stage, 228–229

steady-state predictor, 283

Prediction error, 20

Prediction error filter (see Deconvolution)

Prediction error filtering (see Linear prediction)

Predictor-corrector form of Kalman filter, 246–247

Probability, transformation of variables (see also Gaussian random variables), 144

Projection (see Linear projection)

Properties of best linear unbiased estimators (see Best linear unbiased estimator)

Properties of estimators (see Large sample properties of estimators; Small sample properties of estimators)

Properties of least-squares estimator (see Least-squares estimator)

Properties of maximum-likelihood estimators (see Maximum-likelihood estimators)

Properties of mean-squared estimators (see Mean-squared estimators)

Pseudo-inverse, 50, 51–53

Q

q-slice algorithm (see Higher-order statistics applied to linear systems)

q-slice formula, 483–484

R

Random processes (see Gauss–Markov random sequences; Second-order Gauss–Markov random sequences)

Random variables (see Gaussian random variables)

Recursive calculation of state covariance matrix, 217–218

Recursive calculation of state mean vector, 217–218

Recursive processing (see Best linear unbiased estimator; Least-squares estimation processing)

Reduced-order Kalman filter, 355–357

Reduced-order state estimator, 355

Reduced-order state vector, 354

References, 542–552

Reflection seismology, 21–22

Relinearized Kalman filter, 387

Representation problem, 2

Residual time-series method (see Higher-order statistics applied to linear systems)

Restricted least-squares, 43, 120

Riccati equation (see Algebraic Riccati equation; Matrix Riccati equation)

S

Sample mean as a recursive digital filter, 7

Scale changes:

best linear unbiased estimator, 128–130

least-squares, 36–37

Second-order Gauss–Markov random sequences, 328–329

Sensitivity of Kalman filter, 260, 262–265

Signal-plus-noise model, 11

Signals, 2

Signal-space, 200

Signal-to-noise ratio, 220–222, 266, 465

Single-channel steady-state Kalman filter, 282–286

Single most likely replacement detector (see Binary detection)

Singular-value decomposition, derivation, 45–48 (see also Least-squares estimator)

Small sample properties of estimators (see also Least-squares estimator):

efficiency, 77–85

unbiasedness, 76–77

Smoothed estimate, 19

Smoothing:

double-stage, 309–313

fixed-interval, 305, 318–323

fixed-lag, 305, 325–327

fixed-point, 305, 323–325

single-stage, 306–309, 312–313

three types, 305

Square-law detector (see Binary detection)

Stabilized form for computing P(k + 1|k + 1), 250–251

Standard form for computing P(k + 1|k + 1), 250–251

State and parameter estimation, combined (see Extended Kalman filter; Maximum-likelihood state and parameter estimation)

State augmentation, 324, 351, 391

State equation solution, 371, 509–512

State estimation (see Filtering; Prediction; Smoothing)

State transition matrix (see Matrix)

State-variable models (see also Basic state-variable model, continuous-time; Basic state-variable model, discrete-time; Not-so-basic state-variable model):

constructing state-variable representations, 503–508

miscellaneous properties, 512–514

solutions of state equations, 509–512

state, state variables, and state space, 500–503

Steady-state approximation (see Maximum-likelihood state and parameter estimation)

Steady-state filter system, 282

Steady-state Kalman–Bucy filter, 421

Steady-state Kalman filter (see also Single-channel steady-state Kalman filter), 280

Steady-state MVD filter:

defined, 332

properties of, 337–338

relationship to IIR Wiener deconvolution filter, 338, 340

Steady-state predictor system, 283

Steady-state state-covariance matrix, 220

Stochastic convergence (see Convergence)

Stochastic linear optimal output feedback regulator problem, 424

Sufficient statistics (see also Factorization theorem; Lehmann–Scheffé theorem; Minimal sufficient statistic):

complete, 447–448

defined, 437–438

exponential families of distributions, 439–441

uniformly minimum-variance unbiased estimation, 444–448

Summary questions (see also Answers to summary questions), 8, 23–24, 39–40, 53–54, 69–70, 87–88, 104, 117–118, 132, 145, 157–158, 171, 188–189, 204–205, 223–224, 238–239, 255–256, 276–277, 300, 314–315, 341, 360–361, 376–377, 394, 409–410, 431–432, 448–449, 469, 491–492, 514–515

T

Time-delay estimation, 472

Total least-squares, 38–39

Trispectrum (see Polyspectra)

U

Unbiasedness (see Small sample properties of estimators)

Uniformly minimum-variance unbiased estimation (see Sufficient statistics)

Univariate Gaussian random variables (see Gaussian random variables)

V

Validation problem, 3

Variance estimator, 113–114

Volterra series representation of a nonlinear system, 17–18

W

Weighted least-squares estimator (see Best linear unbiased estimator; Least-squares estimator)

White noise:

continuous, 366, 555

discrete, 214

higher-order, 455

Wiener, biography, 291–293

Wiener filter:

derivation, 289–291

IIR, 338, 340, 488

recursive, 291

relation to Kalman filter, 291, 293

Wiener–Hopf equations, 290
