Note: f denotes figures, t denotes tables
Activation function
magnitude and phase responses, 49f
Adaptive principal component extraction (APEX) algorithm, 238
Additive random white noise (AWN)
power, 219
random vector, 219
Additive white Gaussian noise channel, 171
All-pole model, 358
All-zero state, 154
AM. See Amplitude modulation (AM)
Amplitude modulation (AM), 93
Amplitude-shift keying (ASK), 93
Analytic function, 9–14, 16, 17, 21, 22, 50, 51
APEX. See Adaptive principal component extraction (APEX) algorithm
APF. See Auxiliary particle filtering (APF)
API. See Approximated power iteration (API) algorithm
a posteriori, 282f
probability mass function, 146, 155
Approximated power iteration (API) algorithm, 230
Approximate symmetric orthonormalization family, 225
Approximation-based methods projection
a priori probability
input bit, 196
Array processing examples, 114–120
beamformers, 114
subspace DOA estimation for noncircular sources, 120
Arrival tracking direction
signal processing subspace tracking, 257
ASK. See Amplitude-shift keying (ASK)
Asymptotic mean square error, 251
Asymptotic pseudo-covariance matrix, 127
Asynchronous direct sequence code division multiple access, 259
Auxiliary particle filtering (APF), 295, 300
performances, 303
AWN. See Additive random white noise (AWN)
Back-propagation (BP) learning algorithms, 334–339
complex update derivation, 55–57
support vector machine, 337–339
Bandlimited speech, 351f
Bandwidth extension (BWE), 352
model-based scheme structure, 364f
nonmodel-based algorithms, 352
overall system, 350f
oversampling and imaging, 353f
telephony speech bandwidth extension, 352
upsampling and low-quality anti-imaging filters, 354f
Bandwidth extension algorithm
excitation signal generation, 365–368
telephony speech bandwidth extension, 364–382
vocal tract transfer function estimation, 369–382
Bandwidth extension algorithm evaluation, 383–387
objective distance measures, 383–384
subjective distance measures, 385–387
Basis functions
Bayes's rule, 147
BBC. See British Broadcasting Corporation (BBC)
Beamformers. See also Minimum variance distortionless response (MVDR) beamformer
array processing examples, 114
Capon's weight vector, 115
BFGS. See Broyden–Fletcher–Goldfarb–Shanno (BFGS) method
Binary inputs
channel capacity, 172f
Binary phase shift keying (BPSK), 3, 72, 73f, 74, 88, 135
ISI, 73f
signal, 134
signal constellations, 3f
Bit error probability
vs. parameters, 189f
Bit error rate
Bitwise optimal solution, 146
Blind channel estimation and equalization
signal processing subspace tracking, 258
Blind signal separation, Blind source separation (BSS), 58–74, 89
Blind turbo equalization, 173–180
differential encoding, 179–181
BP. See Back-propagation (BP) learning algorithms
BPSK. See Binary phase shift keying (BPSK)
Brandwood's theorem, 60
Broadband predictor codebook, 381
Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, 33, 40
BSS. See Blind signal separation, Blind source separation (BSS)
BWE. See Bandwidth extension (BWE)
Capacity analysis
Cauchy–Bunyakovskii–Schwarz inequality, 22, 38
Cauchy noise case, 123
Cauchy–Riemann equations, 10, 11
CCLT. See Complex central limit theorem (CCLT)
CDF. See Cumulative distribution function (CDF)
CDMA. See Code division multiple access (CDMA)
Cepstral coefficients, 358
Cepstral distance, 363
CES. See Complex elliptically symmetric (CES) distributions
CG. See Conjugate gradient (CG) updates
Channel
communication, 145
identification, 173
likelihood evaluations, 162
Trellis diagram, 161f
Channel capacity
Gaussian and binary inputs, 172f
operational definition, 169
Channel impulse response
coefficient, 179
length, 163
Circular distribution, also see complex random
chi-square plot, 101
complex normal (CN), 90
scatter plots, 30f
Circular functions, 50
Circularity assumption testing
Circularity coefficients, 25, 94, 95, 133–134
Circularity matrix, 132
Circular symmetry, 91
City block distance, 363
Closed-loop feedback system
EKF and MLP, 343f
Closing the loop, 163
CML. See Complex maximum likelihood (CML)
CMN. See Complex maximization of non-Gaussianity (CMN)
CMOS. See Comparison mean opinion scores (CMOS)
CN. See Circular complex normal (CN)
C-ND. See Complex nonlinear decorrelations (C-ND)
Codebook approaches, 375–379, 387f
classification, 382f
vs. neural network approaches, 387f
codebook preclassification and linear mapping, 380f
Code division multiple access (CDMA), 199, 256, 258, 259
Communication channel, 145
Communications chain, 145f
Comparison mean opinion scores (CMOS), 383
CB, 387f
conditions, 386f
NN, 387f
test, 386
Complex central limit theorem (CCLT), 106–107
Complex conjugate gradient updates, 39
Complex differentiability, 12
Complex domain
real differentiable cost function matrix optimization, 37
real differentiable cost function vector optimization, 34–36
Complex elliptically symmetric (CES) distributions, 90, 95–101
circular case, 98
circularity assumption testing, 99–101
Complex function
singular points, 10
Complex gradient updates, 35–36
Complex independent component analysis, 58–74
complex maximization of non-Gaussianity, 64–65
complex maximum likelihood, 59–64, 66–74
ML and MN connections to, 66
mutual information minimization, 66
Hermitian covariance, 213, 214
Complex maximization of non-Gaussianity (CMN), 64–66, 72–74
complex independent component analysis, 64–65
fixed-point updates, 65
gradient updates, 65
Complex maximum likelihood (CML), 59–64, 66–74
complex independent component analysis, 54–63
unitary, 74
Complex nonlinear decorrelations (C-ND), 72
Complex random processes, 27–31
circularity properties, 28, 29, 30f, 45
stationarity properties, 28
Complex random variables, 24–31, 91–93
circularity properties, 26–31, 45, 91–95
robust estimation techniques, 91–94
statistical characterization, 91–94
Complex-to-complex mappings, 17–19
Complex-to-real mapping, 17–19
properties, 18
Complex-valued adaptive signal processing, 1–78
activation function for MLP filter, 48–54
back-propagation updates derivation, 55–57
complex domain optimization, 31–40
complex independent component analysis, 58–74
complex maximization of non-Gaussianity, 64–65
complex maximum likelihood, 54–63
complex-to-complex mappings, 17–19
complex-to-real mappings, 17–19
complex-valued random variables statistics, 24–30
efficient computation of derivatives in complex domain, 9–16, 31–37
linear and widely linear mean-square error filter, 40–47
ML and MN connections, 66
mutual information minimization, 66
nonlinear adaptive filtering with multilayer perceptrons, 47–57
nonlinear function optimization, 31–34
random processes statistics, 24–30
widely linear adaptive filtering, 40–47
Complex-valued random variables statistics, 24–30
Complex-valued random vectors robust estimation techniques, 87–138
array processing examples, 114–120
beamformers, 114
circular case, 98
circularity assumption testing, 99–101
class of DOGMA estimators, 129–131
class of GUT estimators, 132–133
communications example, 134–136
complex elliptically symmetric (CES) distributions, 95–101
complex random variables, 91–92
complex random vectors statistical characterization, 91–94
estimator asymptotic performance, 106
influence function study, 123–127
M-estimators of scatter, 110
motivation, 107
MVDR beamformers based on M-estimators, 121–127
problems, 137
robustness and influence function, 102–105
scatter and pseudo-scatter matrices, 107–113
subspace DOA estimation for noncircular sources, 120
tools comparing estimators, 102–106
Conjugate gradient (CG) updates, 39
Convergence
bit error probability, 187–189
EXIT chart for interference canceler, 192–193
linear and widely linear filter, 46, 47
signal processing subspace tracking, 243–255
Cosine modulation, 366f
Coupled stochastic approximation algorithms, 223
Coupling element
interference canceler, 166
Covariance matrix, 25–27, 40–47, 90–99, 104–113, 115–141
principal subspace characterization, 218
signal processing subspace tracking, 218
CR-calculus. See Wirtinger calculus
CU. See Central unit (CU)
Cumulative distribution function (CDF), 282f, 283f
DAPF. See Density assisted particle filter (DAPF)
DASE. See Direct adaptive subspace estimation (DASE)
Data Projection Method (DPM), 224
algorithm, 225
Data whitened, 133
Davidon–Fletcher–Powell (DFP) method, 33
Decoders
comparing EXIT functions, 190f
double concatenated, 181, 182f
EXIT functions, 191f
forward-backward channel, 185, 186
information exchange between, 150f
Decomposition, 290
Density adaptation approach, 67
Density assisted particle filter (DAPF), 309
Density generator, 97
Density matching
complex independent component analysis, 67–71
Derivatives of cost functions, 17, 34–40
DFP. See Davidon–Fletcher–Powell (DFP) method
Diagonalization Of Generalized covariance MAtrices (DOGMA), 91
Differential encoder, 180
integrating, 181f
Differential encoding
blind turbo equalization, 179–181
Direct adaptive subspace estimation (DASE), 235
Direction-of-arrival (DOA)
Diversity versus signal power, 171–172
DOA. See Direction-of-arrival (DOA)
DOGMA. See Diagonalization Of Generalized covariance MAtrices (DOGMA)
Dominant subspace tracking
gradient-descent technique, 230
Double concatenated decoder, 181, 182f
DPM. See Data Projection Method (DPM)
Drawn particles, 283f
EIF. See Empirical influence function (EIF)
Eigenvalue decomposition (EVD), 212, 221
parameters, 240
signal processing subspace tracking, 213
Eigenvector power-based methods
eigenvectors tracking, 235
issued from exponential windows, 239–240
Eigenvectors tracking, 234–242
eigenvector power-based methods, 235
projection approximation-based methods, 240
Rayleigh quotient-based methods, 234
second-order stationary data, 242
EKF. See Extended Kalman filter (EKF)
Element-wise mappings, 19–20, 20t
Empirical influence function (EIF), 127
sample covariance matrix, 105–106
Encoder, 145
error correction, 145
recursive systematic, 152, 153f
systematic convolutional, 190f, 191f
Trellis diagram, 154f
Error correction coding, 144, 145
Error variance transfer function, 195f
Essential singularity, 11
Estimating channel impulse response
EVD. See Eigenvalue decomposition (EVD)
Exact orthonormalization family, 227–229
Excitation signal generation
bandwidth extension model-based algorithms, 365–368
EXIT. See Extrinsic information transfer (EXIT)
Expectation operator, 322
Extended Kalman filter (EKF), 272, 301, 334, 341–343
closed-loop feedback system, 343f
experimental comparison, 344–346
nonlinear sequential state estimation, 344–346
Extrinsic information transfer (EXIT), 186f
chart, 186f
chart construction, 185
charts, 182
extraction, 159
functions decoders, 190f, 191f
inner decoder, 186f
interference canceler, 192–193, 194, 194f
outer decoder, 185f
performance prediction, 188
probability distribution function, 187f
turbo equalization, 182
Extrinsic probabilities, 150
FAPI. See Fast approximated power iteration algorithm implementation (FAPI)
Fast approximated power iteration algorithm implementation (FAPI), 230
Fast Data Projection Method (FDPM), 228
algorithm, 229
Fast Rayleigh quotient-based Adaptive Noise Subspace algorithm (FRANS), 227–228
algorithm, 232
FDPM. See Fast Data Projection Method (FDPM)
Filters. See also Auxiliary particle filtering (APF); Density assisted particle filter (DAPF); Extended Kalman filter (EKF); Gaussian particle filtering (GPF); Kalman filter (KF); Linear filter; Particle filtering; Quadrature Kalman filter (QKF); Widely linear filter
feedforward, 164
fully complex MLP filter, 57
kernel-based auxiliary particle filter, 306–307
linear and widely linear filter, 40–47
convergence, 46
matched, 164
operations, 371
optimal, 323
RBF, 47
UKF, 272
FOBI. See Fourth order blind identification (FOBI) method
Forward-backward algorithm, 152–162, 181
intersymbol interference, 160–162
Forward-backward channel decoder, 185, 186
Forward recursion, 156–157, 156f
Fourth order blind identification (FOBI) method, 129–131
FPGA. See Field-programmable gate array (FPGA) platform
FRANS. See Fast Rayleigh quotient-based Adaptive Noise Subspace algorithm (FRANS)
Full-wave rectification, 368
Fully-complex functions, 50
Fully complex multilayer perceptron filter, 54f, 55–57
Function generators, 368
Function signals, 335
computation, 336
Gauss–Hermite quadrature integration rule, 272
Gaussian distributed complex variable, 64
Gaussian error function, 187
Gaussian inputs channel capacity, 172f
Gaussian kernels, 70
Gaussian particle filtering (GPF), 295, 324
algorithm, 303
particle filtering, 301
performances, 303
Gaussian probability density functions, 272
Gaussian quadrature Kalman filter, 272
Gaussian source signals, 135f
Gauss–Newton algorithm, 40
Gauss–Newton Hessian matrix, 40
General asymptotic theory of GLR-tests, 100
General Gaussian approximation result method review
signal processing subspace tracking, 246–247
Generalized Gaussian density model, 68–69
Generalized Gaussian distribution (GGD) sources, 72, 77
ISI, 74f
Generalized Hebbian algorithm (GHA), 237, 239
Generalized likelihood ratio test (GLRT), 90, 102
Generalized uncorrelating transform (GUT), 132, 135f, 136f
GGD. See Generalized Gaussian distribution (GGD) sources
GHA. See Generalized Hebbian algorithm (GHA)
Global System for Mobile Communication (GSM), 350
GLRT. See Generalized likelihood ratio test (GLRT)
GPF. See Gaussian particle filtering (GPF)
Gradient-descent technique
dominant subspace tracking, 230
Gradient vector
computation, 336
Gram–Charlier and Edgeworth expansions, 68
Gram–Schmidt orthonormalization, 225, 237
Gram–Schmidt procedure, 65
Green's theorem, 16
GSM. See Global System for Mobile Communication (GSM)
GUT. See Generalized uncorrelating transform (GUT)
Hadamard's inequality of matrix theory, 171
Harmonic analysis, 3
Harmonic signal, 366f
Hermitian centro-symmetric covariance matrices, 242
Hermitian matrix, 44
Hessian matrices, 20–24, 32–37
HFRANS. See Householder Fast Rayleigh quotient-based Adaptive Noise Subspace algorithm (HFRANS)
Hidden Markov model, 379
Hidden neuron
MLP, 336
Holomorphic (analytic) function, 9–14, 16, 17, 21, 22, 50, 51
Householder Fast Rayleigh quotient-based Adaptive Noise Subspace algorithm (HFRANS), 228
Householder transform, 228, 236–237
Huber's M-estimator (HUB), 112, 119, 123
asymptotic relative efficiencies, 128f
MVDR beamformer, 126f
Human speech
apparatus, 355f
generation source-filter model, 356f
Hyperbolic functions, 51
ICA. See Independent component analysis (ICA)
Ice Multiparameter Imaging X-Band Radar (IPIX), 31
IF. See Influence function (IF)
IIR. See Infinite impulse response (IIR) filter
Imaging
bandwidth extension, 353f
telephony speech bandwidth extension, 353
Importance function. See Proposal distribution
Independent component analysis (ICA), 58–74, 89, 91, 128–136
Infinite impulse response (IIR) filter, 358, 361
Influence function (IF), 90
complex-valued random vectors robust estimation techniques, 102–105
Information-maximization (Infomax), 58, 68, 72, 74
Inner decoder, 150
extrinsic information transfer function, 186f
definition, 9
properties, 9
Innovations process, 341
Input bit
a priori probability, 196
Input signals, 335
Integrals of complex function, 15–17
Integrated Service Digital Network (ISDN), 350
Integrating
differential encoder, 181f
Interference canceler, 167f, 168f
coupling element, 166
turbo equalization, 163–167, 197
International Telecommunication Union (ITU), 383
Intersymbol interference (ISI), 72, 134. See also Performance index (PI)
Inverse circular functions, 50
Inverse hyperbolic functions, 51
Invertible linear transformations, 100
IPIX. See Ice Multiparameter Imaging X-Band Radar (IPIX)
ISDN. See Integrated Service Digital Network (ISDN)
Isomorphic transformation, 5
Iterative decoder
algorithm, 163
properties, 151
Iterative receiver
flow diagram, 198f
ITU. See International Telecommunication Union (ITU)
Joint approximate diagonalization of eigenmatrices (JADE), 58, 72, 135f, 136f
Kalman filter (KF), 272, 284, 310, 312, 344
Kalman gain, 342
Karhunen–Loève Transform (KLT), 211, 233
Karush–Kuhn–Tucker (KKT) conditions, 337
Kernel-based auxiliary particle filter
Kernel trick, 340
KF. See Kalman filter (KF)
KKT. See Karush–Kuhn–Tucker (KKT) conditions
KLT. See Karhunen–Loève Transform (KLT)
KM. See Kurtosis maximization (KM)
Kullback–Leibler distance, 66
Kurtosis maximization (KM) algorithm, 72–74, 134
Lagrangian function, 337
Lagrangian multipliers, 337
LBG. See Linde, Buzo, and Gray (LBG) algorithm
Least-mean-square (LMS), 40
normalized algorithm, 225
widely linear algorithm, 43
Levinson–Durbin recursion, 357–358
Likelihood function, 149
Likelihood ratio calculation, 159–160
Linde, Buzo, and Gray (LBG) algorithm, 377–378
Linear conjugate gradient updates, 39
Linear equalizers, 145
Linear filter
Linear interference canceler
turbo equalizer, 164f
Linear interpolant, 166
function of approach, 382f
structure, 372f
Linear minimum mean square error detector, 259
LMS. See Least-mean-square (LMS)
Log-volatility, 302
LPC coefficient, 362, 371, 389
Lyapunov equation, 245, 247, 249
Magnetic resonance imaging (MRI), 1
MALASE. See Maximum likelihood adaptive subspace estimation (MALASE)
MAP. See Maximum a posteriori (MAP)
Mapping
complex-to-real, 18
relationship, 20
vector-concatenation type, 17–18, 20t
Marginally circular, 26
Markov chain, 152
Monte Carlo sampling, 305
Matched filter, 164
MATLAB optimization toolbox, 346
Matrix function
derivatives, 6f
Matrix functionals, 109
Matrix notation, 7
Maximization of non-Gaussianity (MN), 64–66
criterion, 64
fixed-point updates, 65
gradient updates, 65
Maximum a posteriori (MAP), 276, 314
Maximum likelihood (ML), 59–64, 66, 74
Relative (natural) gradient updates, 60, 61
Maximum likelihood adaptive subspace estimation (MALASE), 240
algorithm, 241
MCA-OOja algorithm, 236
MCA-OOjaH algorithm, 236
MDL. See Minimum description length (MDL)
Mean-opinion-score (MOS) tests, 385
Mean square error (MSE), 304f, 320
covariance matrix principal subspace characterization, 218
estimate, 322
learning curves, 251f
particles, 305f
Measurement model, 341
Mercer kernel, 340
Mercer's theorem, 340
M-estimators computation, 113–114
MIMO. See Multiple-input multiple-output (MIMO) channel
Minimum description length (MDL), 119
simulation results, 119f
Minimum mean square error (MMSE) detector, 259, 276
Minimum variance distortionless response (MVDR) beamformer, 90, 115
based on M-estimators, 121–127
HUB, MLT, and SCM, 126f
influence function study, 123–127
patterns, 122f
Minkowski distance, 363
ML. See Maximum likelihood (ML)
MLP. See Multilayer perceptron (MLP)
MMSE. See Minimum mean square error (MMSE) detector
MN. See Maximization of non-Gaussianity (MN)
Mobility tracking, 278
Modulation techniques, 365
Monte Carlo integration methods, 273
Monte Carlo procedures, 305
Monte Carlo sampling
Markov chain, 305
methods, 310
MOS. See Mean-opinion-score (MOS) tests
MRI. See Magnetic resonance imaging (MRI)
MSE. See Mean square error (MSE)
Multilayer perceptron (MLP), 47–57, 334
architectural graph, 54f, 335f
complex back-propagation updates derivation, 55–57
nonlinear adaptive filtering, 48–54
performance (complex), 57f
supervised training framework, 340
Multiple-input multiple-output (MIMO) channel, 88
Multiple output channel, 197f
Multiuser case
turbo equalization, 198
MUSIC
algorithm, 256
methods, 116
spectrum, 118
Mutual information vs. parameters, 184f
MVDR. See Minimum variance distortionless response (MVDR) beamformer
Narrowband array signal processing, 88
Narrowband codebook, 380
Neural network approaches, 373–375, 376
vs. codebook approaches, 387f
Newton–Raphson procedure, 257
Newton updates
complex domain optimization, 38–39
complex matrix, 38
Noise
generator, 356
and intersymbol interference, 146
vector, 88
NOja. See Normalized Oja's algorithm (NOja)
Noncircular data scatter plots, 30f
Nonlinear adaptive filtering
activation function for MLP filter, 48–54
back-propagation updates derivation, 55–57
implementation, 48
with multilayer perceptrons, 47–57
Nonlinear characteristic effects, 369
Nonlinear classifiers, 145
Nonlinear function optimization
complex domain optimization, 31–34
Nonlinear sequential state estimation
back-propagation learning algorithms, 334–339
EKF algorithm, 344
experimental comparisons, 344–346
extended Kalman filter, 341–343
problems, 348
solving pattern-classification problems, 333–348
supervised training framework, 340
support vector machine learning algorithms, 334–339
Nonlinear state-space model, 340f
Nonnegative diagonal matrix, 132
Nonnegative real symmetric, 213
NOOJa. See Normalized Orthogonal Oja's algorithm (NOOJa)
Normalized least-mean-square algorithm, 225
Normalized Oja's algorithm (NOja), 227
Normalized Orthogonal Oja's algorithm (NOOJa), 227
Objective distance measures, 383–384
Observation model, 219
ODE. See Ordinary differential equation (ODE)
OFA. See Optimal fitting analyzer (OFA) algorithm
Oja method, 212
convergence and performance analysis, 248–256
signal processing subspace tracking, 221
OPAST. See Orthonormal projection approximation subspace tracking algorithm (OPAST)
Optimal equalizers, 145
Optimal filter, 323
Optimal fitting analyzer (OFA) algorithm, 238
Ordinary differential equation (ODE), 213, 225, 231, 232
signal processing subspace tracking, 244–245
stationary point, 245
Orthogonalization procedure, 65
Orthonormal projection approximation subspace tracking algorithm (OPAST), 232
Outer decoder
extrinsic information transfer function, 185f
Output neuron, 336
MLP, 336
Oversampling
bandwidth extension, 353f
telephony speech bandwidth extension, 353
Parseval's theorem, 363
Particle filtering
auxiliary particle filters, 297–300
comparison, 302
computational issues and hardware implementation, 323
constant parameters handling, 305–309
density-assisted particle filter, 306–307
Gaussian particle filters, 301
kernel-based auxiliary particle filter, 306–307
methodology, 273
proposal distribution and resampling choice, 289–294
proposal distribution choice, 289–290
Particle filters (PF), 273, 291
algorithm flowchart, 294f
convergence, 321
real time processing, 323
with resampling, 294f
speed, 323
weights, 287f
without resampling, 292f
PAST. See Projection approximation subspace tracking algorithm (PAST)
Path tracing, 153
PCRB. See Posterior Cramér–Rao bounds (PCRB)
PDF. See Probability density functions (PDF)
PE. See Processing elements (PE)
Performance analysis
signal processing subspace tracking, 243–255
using easy channel, 166
PF. See Particle filters (PF)
Picard's theorem, 11
Pitch estimation, 369
Pole function, 11
Posterior Cramér–Rao bounds (PCRB), 320
Power-based methods issued from exponential or sliding windows, 229–230
Power method, 216
Prediction error filter, 364, 368
Predictive probability density functions, 315
Probability density functions (PDF), 272
a posteriori, 272, 275, 278, 317, 321
Probability distribution function
extrinsic information, 187f
Processing elements (PE), 324
Projection approximation subspace tracking algorithm (PAST), 231
orthonormal, 232
Proposal distribution, 282f
Pseudo-covariance matrix, 25, 26, 42–45, 59, 78, 94, 95, 97–99, 106–111
Pseudo-scatter matrix, 109, 110
Pulse train generator, 356
QAM. See Quadrature amplitude modulation (QAM)
QKF. See Quadrature Kalman filter (QKF)
QPSK. See Quadrature phase shift keying (QPSK)
QR factorization
signal processing subspace tracking, 214
Quadratic cost function, 371
Quadrature amplitude modulation (QAM), 3, 70, 135f
ISI, 73f
signal constellations, 3f
Quadrature Kalman filter (QKF), 272
Quadrature phase shift keying (QPSK), 3
signal, 101
signal constellations, 3f
RADAR application, 256
Radial basis function (RBF), 338, 339
Random processes. See complex random processes
Random vector. See complex random vectors
Rao–Blackwellization
Rao–Blackwellized scheme
target tracking, 313
Rayleigh quotient
eigenvectors tracking, 234
Rayleigh's quotient, 215
RBF. See Radial basis function (RBF)
Real differentiability, 12
Real symmetric matrices
eigenvalues/eigenvectors variational characterization, 215
Real-to-complex transformation, 4
Recursive least squares (RLS) algorithms, 40
Recursive systematic encoder, 152, 153f
Relative (natural) gradient update rule, 61
Removable singular point, 10
Resampling
concept, 293f
RLS. See Recursive least squares (RLS) algorithms
Robust Huber's M-estimator
MVDR beampatterns, 122f
Robust independent component analysis, 128–138
class of DOGMA estimators, 129–131
class of GUT estimators, 132–133
communications example, 134–136
Robustness
complex-valued random vectors robust estimation techniques, 102–105
steering errors, 125
RV. See Random vector (RV)
Sample covariance matrix (SCM), 123
diagonal loading, 125
MVDR beamformer, 126f
Sampling-importance-resampling (SIR), 295
algorithm, 297
Scalar function derivatives, 6f
Scaled conjugate gradient (SCG) method, 40
Scatter and pseudo-scatter matrices, 107–113
M-estimators of scatter, 110
motivation, 107
SCG. See Scaled conjugate gradient (SCG) method
SCM. See Sample covariance matrix (SCM)
SDM. See Spectral distortion measure (SDM)
Second-order Taylor series expansions, 21
Sequential importance sampling algorithm, 285t
flowchart, 285f
step by step execution, 286f
Sequential Monte Carlo integration methodology, 273
SGA. See Stochastic gradient ascent (SGA)
Shannon's capacity, 168
Shannon's expression, 171
Shannon's work, 143
Signal constellations
BPSK, QAM, and QPSK, 3f
Signal model
complex-valued random vectors robust estimation techniques, 88–89
Signal-to-noise ratio (SNR)
optimality, 165
Signal processing subspace tracking, 211–271
approximation-based methods projection, 230–231
arrival tracking direction, 257
blind channel estimation and equalization, 258
covariance matrix principal subspace characterization, 218
eigenvalue decomposition, 213
eigenvector power-based methods, 235
eigenvectors tracking, 234–242
general Gaussian approximation result method review, 246–247
illustrative examples, 256–259
linear algebra review, 213–218
observation model, 219
Oja's neuron, 221
performance analysis issues, 243–255
preliminary example, 221
problems, 260
problem statement, 220
projection approximation-based methods, 240
QR factorization, 214
Rayleigh quotient-based methods, 234
real symmetric matrices, 215
second-order stationary data, 242
standard subspace iterative computational techniques, 216
subspace power-based methods, 224–229
SIMO. See Single-input/multi-output (SIMO) configuration
Single-input/multi-output (SIMO) configuration, 196f
Singular points
complex function, 10
Singular value decomposition (SVD), 212
SIR. See Sampling-importance-resampling (SIR)
Smoothing
fixed interval, 316
fixed-lag, 316
SNL. See Subspace Network Learning (SNL)
SNR. See Signal-to-noise ratio (SNR)
Source-filter model, 356
human speech generation, 356f
telephony speech bandwidth extension, 355–357
Source signals, 88
constellations, 136f
Spectral distortion measure (SDM), 384
branches, 385f
Speech, 349
coding codebook approaches, 370
generation source-filter model, 356f
signal time-frequency analysis, 350
Split-complex functions, 8, 48, 49f
State-space system
characterization, 275
Static sensors, 308
Stationary point
ODE, 245
Stochastic algorithm, 231, 233, 244, 246
Stochastic gradient ascent (SGA), 237
Stochastic volatility model, 302, 304f
Stokes's theorem, 16
Strongly circular, 26
Strongly uncorrelating transform (SUT), 58, 72, 95, 132, 133
Subjective distance measures
bandwidth extension algorithm evaluation, 385–387
Subspace Network Learning (SNL), 225
algorithms, 239
Subspace power-based methods, 224–229
approximation-based methods projection, 230–231
Support-vector machine (SVM)
architecture, 339f
back-propagation learning algorithms, 337–339
SUT. See Strongly uncorrelating transform (SUT)
SVD. See Singular value decomposition (SVD)
SVM. See Support-vector machine (SVM)
Symmetric distribution, 92
Systematic encoders, 191f
convolutional, 190f
System matrix, 88
System (state) model, 341
Target distribution, 283f
Target tracking, 278
Rao–Blackwellized scheme, 313
Telephony speech bandwidth extension, 349–389
bandwidth extension algorithm evaluation, 383–387
bandwidth extension model-based algorithms, 364–382
bandwidth extension nonmodel-based algorithms, 352
excitation signal generation, 365–368
nonlinear characteristics application, 353
objective distance measures, 383–384
oversampling with imaging, 353
spectral envelope parametric representations, 358–361
subjective distance measures, 385–387
vocal tract transfer function estimation, 369–382
Time-frequency analysis
speech signal, 350
Time instant values, 286t
Tracking target, 308
DOA, 278
two-dimensional plane, 309–310
Trigonometric interpolant, 166
Turbo equalization
algorithm, 168
bit error probability, 187–189
blind turbo equalization, 173–180
context, 144
differential encoding, 179–181
EXIT chart for interference canceler, 192–193
forward-backward algorithm, 152–162
forward-backward equalizer, 196
interference canceler, 163–167, 197
intersymbol interference, 160–162
iterative decoding properties, 151
multichannel and settings, 195–198
multiuser case, 198
Turbo equalizer, 164f
Two channel SIMO model, 258–259
Tyler's M-estimator, 113
UKF. See Unscented Kalman filter (UKF)
Uniform Linear Array (ULA), 88, 134
sensors with sensor displacement, 89f
Unscented Kalman filter (UKF), 272
Upsampling and low-quality anti-imaging filters
bandwidth extension, 354f
Vector-concatenation type mappings, 17–18, 20t
Vector function
derivatives, 6f
Vector quantization
training data, 379f
Vector-valued measurement function, 341
Very large scale integration (VLSI), 295
implementation, 323
VLSI. See Very large scale integration (VLSI)
Vocal tract transfer function
bandwidth extension model-based algorithms, 369–382
Weakly circular, 26
Weight
function, 112
PF, 287f
update, 342
Weighted covariance matrix, 112
Weighted subspace algorithm (WSA), 238, 239, 251
Weighted subspace criterion, 216
Whitening filter, 368
Wideband codecs, 351
Wideband speech, 351f
Widely linear adaptive filtering, 40–47
least-mean-square algorithm, 43
mean-square error filter, 41–46
Wind data
covariance and pseudo-covariance function, 30f
scatter plot, 30f
Wirtinger calculus, 1, 2, 5, 6, 11–15, 19, 22–24, 34–40, 55, 56, 62, 65, 68
Wirtinger derivatives, 11–15, 19, 34–40, 55–57, 76
WSA. See Weighted subspace algorithm (WSA)
Yule–Walker equation system, 357
Adaptive Signal Processing: Next Generation Solutions. Edited by Tülay Adalı and Simon Haykin
Copyright © 2010 John Wiley & Sons, Inc.