Index

A

Accuracy, 42, 43, 57, 97, 98, 229, 231, 238, 248, 268
Acoustic echo cancellation (AEC), 3, 6, 32, 65, 72–74, 76, 93, 94, 244, 245, 252, 254, 255
nonlinear, 96–98, 253, 254
Active noise control (ANC), 32, 49, 81, 93, 244
Adaptation, 48, 57, 73, 78, 80–82, 90, 97, 99, 106, 233, 234, 244–247, 250, 251, 254, 256, 258
stochastic gradient, 53–56
Adaptive dynamic programming (ADP), 336, 337
Adaptive filtering, 79, 202
algorithms, 94, 106
nonlinear, 243, 267, 285
quaternion-valued nonlinear, 287
Adaptive filters, 59, 72, 73, 77, 81, 94, 95, 119, 122, 123, 213, 223, 233, 244–247, 250–252, 269
linear, 6, 106, 244, 247
performance of, 244, 262
performance of nonlinear, 244, 251
Adaptive optimal control, 341, 348
Addition, 2, 3, 5, 19, 23, 25, 33, 35, 48, 57, 58, 60, 61, 74, 76, 110, 115, 295
AEC system, 72, 94
Affine projection algorithms (APA), 106, 251
Agents, 5, 8, 223–231, 233–237, 239, 314
Algebra, 19, 23, 25, 27, 207
Algorithm convergence, 58, 318, 320
Algorithm design, 139, 145
Algorithms
adaptive, 6, 32, 33, 81, 94, 106
adaptive identification, 18, 42
distributed, 185, 231, 233
following, 328, 329
iterative, 53–56, 181, 193
kernel, 141, 237, 238
multitask, 227, 238
nonlinear dynamical modeling, 298
off-policy, 320
tree-based, 214
Amplitudes, 282, 298, 301, 305
peak, 299, 300, 302, 304, 305, 307
APA (Affine projection algorithms), 106, 251
Architectures, 5, 7, 8, 48, 53–58, 60, 61, 63, 66, 142, 233, 274
Autoregressive model, 185, 190
Average learning curves, 119–122

B

Basis functions, 6, 17–19, 21, 23–26, 28, 30–37, 39, 40, 42, 43, 50, 52, 132, 175, 215, 217, 294, 295
one-dimensional, 23, 36
orthogonal, 17, 18, 20, 22, 25, 29, 42, 338
selected, 40, 41
total number of, 24, 295
Basis functions of order, 24, 33, 35, 36
Bayesian information criterion (BIC), 37, 40
Bellman equation, 317–321, 326, 328–330
Block diagram, 53–56, 92, 253
Bounds, 58, 66, 280, 324
Branch, 75, 76, 78–81, 90–92, 95, 98, 248–250, 253
nonlinear, 246, 251, 254
nonlinearities, 75, 77, 78, 89, 90, 97, 98
signals, 78, 85–87

C

Cascade group model, see CGMs
Cascade models, 6, 48, 82, 89
Centers, 107, 150, 162–164
CGM preprocessors, 77, 81, 82
CGMs, 6, 75–78, 81, 85, 88, 90, 91, 95–99
Changes, input/output spikes, 303
Chebyshev polynomials, 27, 28, 75
Class, 2, 3, 6, 17–20, 24, 48, 72, 81, 129, 139, 225–228, 231, 268, 269, 322, 336
Class-specific kernel (CLASK), 134, 139
Classification, 2, 5, 150, 157, 162, 163, 174, 256
Classification output, 209, 210, 212
Classification problems, 139, 140, 213
Classifiers, 1, 203, 209, 210, 212, 218
Clusters, 184, 224, 225, 227
Coefficients, 17, 30, 31, 35–37, 40, 42, 48, 90, 150, 156, 161, 177, 180, 181, 192, 245, 255, 256
Combination, 8, 18, 56, 136, 233, 244, 245, 248–251, 256, 258, 272
Combination schemes, 244, 247, 251, 253, 256
Combination strategies, 244, 245, 247, 248, 251
Combination weights, 167, 204, 206, 207, 219
Combine-then-adapt (CTA), 162, 233
Combined schemes, 8, 244, 251, 254, 262
Complexity, 3, 7, 8, 35, 93, 95, 98, 128, 131, 134–136, 139, 156, 182, 186, 188, 189, 251
Components, 74, 84, 85, 154, 165, 244, 249, 250, 271, 272
Computational complexity, 3, 7, 17, 18, 32, 35, 81, 85, 95, 97–99, 132, 133, 135–137, 141, 142, 190, 206–208, 211–213
Compute unified device architecture (CUDA), 141, 142
Conditions, 2, 3, 19, 29, 33–35, 40, 58, 59, 168, 227, 268, 273, 324, 326, 328, 343, 354
Connectivity, functional, 308
Constrained update, 83, 85–87
Construction, 8, 23, 24, 26, 29, 36, 39, 132, 137, 138, 208, 212
Continuous stirred tank reactor (CSTR), 64, 65
Continuous-time (CT), 9, 16, 314, 321, 327, 337
Continuous-time systems, 314, 337
nonlinear, 9, 314
Control, 1, 2, 72, 75, 112, 119, 128, 131, 132, 202, 314, 318, 319, 324, 327, 347, 349, 355
Control effort, 314, 316, 324, 326, 328, 329
Control input, 314–317, 319, 320, 322–324, 328
Control points, 50–52, 54–56, 59, 233
Control policy, 316, 318, 319, 336, 338, 349, 350, 352, 353
fixed, 319, 320
Convergence properties, 57, 153, 154, 156, 228
Convex combination, 97, 98, 227, 250, 251, 254, 256–258, 260–262
Convex filter combinations, 73, 97, 99
Correntropy, 7, 105, 106, 110, 112
Correntropy induced metric (CIM), 111, 112
Cost function, 4, 5, 53, 72, 112, 162, 163, 226, 273, 275, 277, 280, 314, 318
local, 57, 226, 232, 235
Covariance matrix, sample, 136, 194
Cross-correlation, 16, 17, 22, 30, 33, 35, 36, 39
Cross-correlation method, 16–18, 20, 22, 30, 35, 37, 40–42
CTW algorithms, 214
Cyclic convolution artifacts, 79, 80, 84, 86, 87

D

Data
desired, 213–215, 217
wind, 281, 284–286
Decomposition, 6, 86–88, 191
Devices, 42, 94–97
DFT domain, 79, 80, 86, 87
Diagonals, 30–32, 75, 97, 98, 187, 252, 255, 256
Dictionary, 109, 115, 150, 163, 164, 179, 183, 193, 233, 238, 247, 248, 259, 261
Diffusion adaptation (DA), 57
Diffusion algorithms, 227–229, 231
Diffusion kernels, 179, 183, 184, 194, 195
Diffusion LMS, 162, 166, 224, 230, 234, 236
Dimension, 19, 129, 141, 151, 152, 154, 155, 160, 163, 164, 166, 170, 177, 278, 283, 284, 337, 338, 340
Discrete Fourier transform (DFT), 74, 80, 81, 83–85, 93
Distance, 95, 111, 116, 229
Distributed filtering, nonlinear, 223
Distributed parameter systems (DPSs), 9, 335–337
Disturbance, 114, 318–320, 322, 327
Disturbance attenuation condition, 315–318, 324
Disturbance input, 315–317, 319, 323, 325, 327–329, 331
Diversity, 236, 244
Dynamic graphs, 174, 185, 186
Dynamics, command generator, 9, 314, 316, 324

E

Echo state networks (ESNs), 231, 269, 274, 275, 281, 284
Echo-return loss enhancement (ERLE), 94, 96
EMFN filters, 23, 25, 40
Empirical risk, 130, 137, 139, 140
Emulation, 39, 41
Error signal, 53, 72, 77, 78, 85, 87, 91, 92, 245, 250
Estimate, final, 140, 206–208
Estimated coefficients, 35, 36, 307
Estimators, 112, 183–187, 189, 193, 271
Exchange, 81, 161, 231, 234
Expansion order, 246, 247
Experimental results, 6, 18, 39, 43, 49, 60, 106, 270
Experiments, 40, 60, 61, 64, 93, 97, 141, 157, 159–161, 168, 170, 184, 185, 194, 214, 219, 236, 237
Extended graphs, 186, 187, 194

F

Feature space, 6, 8, 107, 128, 129, 131, 132, 201–204, 214, 256, 257
Feature vectors, 129, 136, 139, 145, 209, 210, 212, 213, 218, 248
explicit, 128, 130–132, 136
Feedforward kernel, first-order, 304, 307
Filter, 6, 8, 16–25, 30–37, 39, 40, 42, 43, 73, 96, 98, 119, 233, 234, 244, 245, 248–252, 256, 261, 262
combined, 56, 249–251, 256
Filter coefficients, 17, 18, 97, 98, 246
frozen, 94
Filter components, 244, 249, 251, 258
Filter length, 57, 61
Filter of order, 32, 35, 36
Filtered-X, see FX
Filtering, 73, 84, 85, 94, 97, 98
Filtering scenario, 244, 247, 251, 253
First-order kernels, 294, 298, 303
FLOPS, 95
FNF, 213–215
Functional expansion block (FEB), 246, 252
Functional link, 17, 57, 246
Functional link adaptive filters, see FLAFs
Functional link artificial neural networks (FLANNs), 17
Functionals, 16, 21, 22
Functions
evolving, 174, 185–187
separator, 204–209, 215, 216
FX adaptation, 82, 85, 95, 96
FX algorithms, 81, 82, 87, 97, 99
FX (Filtered-X), 32, 72, 73, 77, 81, 82, 85, 86, 92
FX signals, 82, 83, 92

G

Gaussian distribution, 119, 120, 153, 154, 164, 166, 195
Gaussian noise, 184, 261, 295, 296
Gaussian processes (GPs), 130, 131
GDP values, 195
Generalized correntropy shape parameter, 115, 116, 118
Generalized kernel maximum correntropy, see GKMC
Generalized kernel recursive maximum correntropy, see GKRMC
Generalized MCC, 106, 113, 114
Generalized Volterra model, see GVM
GKMC algorithm, 115
GKMC and GKRMC algorithms, 119, 120, 124
GKMC (Generalized kernel maximum correntropy), 115, 119, 121
GKRMC algorithms, 118, 119
GKRMC (Generalized kernel recursive maximum correntropy), 117, 121, 123
GMCC estimation, 113, 114
GPs (Gaussian processes), 130, 131
GPUs (Graphics processing units), 141, 142
Gradient, 152, 163, 164, 166, 208, 228, 230, 232, 276, 278, 280, 281, 352
Gradient descent adaptation algorithms, 6, 17
Gradient descent algorithms, fast convergence of, 17, 18
Gradient noise, 88, 91, 97, 98, 244, 245, 250, 252–254, 256, 258
Graph signal, 176, 177
reconstruction, 175, 179
Graphics processing units, see GPUs
Graphs, 5, 7, 168–170, 174, 175, 177–179, 181, 183, 185, 186, 190, 192, 195, 224, 227, 236
GVM (Generalized Volterra model), 295–298, 303, 304

H

Hammerstein models, 48

I

Identification, 1, 3, 5, 16–18, 20, 32, 33, 36, 38–43, 49, 58, 61, 65, 91, 93, 261, 262
Induction functions, 304–309
Information theoretic learning, see ITL
Infrastructures, 128
Inner product, 109, 110, 150, 175, 257, 337, 338
Input, 2, 4, 9, 16, 17, 30, 38, 39, 53, 64, 128–131, 275, 290, 292–298, 300, 301, 303–305, 308, 309
Input data, 1, 3, 5, 8, 107, 109, 115, 129, 131, 151
Input samples, 17, 22, 25, 34, 35, 42, 52, 57, 245, 247
Input signal, 16–19, 25, 27, 28, 35, 40, 42, 52, 60, 61, 63, 73, 78, 79, 90, 119, 255, 261
white, 30, 31, 34
white Gaussian, 17, 29, 34, 42
Input spike trains, 293, 294, 296, 297, 305
Input variance, 17, 33, 39, 42
Input vectors, 107, 119, 123, 166, 202, 230, 248, 275, 283
Input/output relationship, 73, 76
IR (Impulse response), 73, 75, 91, 92, 253, 254, 268
IR vector, 74, 88
IRL algorithm, 314
IRNAEC, 255, 256
Iteration, 5, 34, 57, 107, 115, 116, 150, 155–157, 192, 220, 221, 224, 229, 258, 261, 318
ITL (Information theoretic learning), 106, 109, 110

K

KAF algorithms, 106, 107, 114
KAFs (Kernel adaptive filters), 6, 7, 105, 106, 108, 114, 119, 124, 151, 244, 245, 247, 251–254, 256, 260, 261, 263
Kernel adaptive filters, see KAFs
Kernel approximation, 7, 128, 129, 136
Kernel approximation techniques, 128, 132, 133, 145
Kernel filters, 226, 231
Kernel function, 6, 111, 112, 128–130, 132–135, 138, 150, 153, 155, 156, 231, 232, 256, 297, 305
Kernel Kalman filter, see KKF
Kernel LMS (KLMS), 106–108, 114, 115, 119, 123, 151, 152, 157, 158, 162, 231, 247, 251, 258
Kernel matrix, 128, 130–136, 145, 151, 179, 188, 191–193
inversion, 136, 137
Kernel maximum correntropy (KMC), 115
Kernel methods, 106, 107, 109, 128–130, 132, 135, 141, 150, 175, 231, 232, 247
Kernel parameter, 112, 115, 116, 153, 157, 168, 256
Kernel PEGASOS, 156, 159, 160
Kernel recursive maximum correntropy (KRMC), 117, 119
Kernel regression, 175, 224
Kernel ridge regression (KRR), 131, 177, 179, 182, 194, 231
Kernel RLS (KRLS), 106–108, 114, 123, 151, 152, 155, 157, 158, 251
Kernel space, 106, 109, 110
Kernel subspace approximation, 134, 144
Kernel techniques, 130, 145
Kernel widths, 247, 259–262
Kernels, 6, 7, 32, 36, 97, 105, 106, 110, 111, 177, 178, 192, 193, 245, 247, 251, 252, 255–257, 294, 295, 297–300, 304–307
band-reject, 195
estimated, 299–302
higher-order, 17, 38, 294, 301
lower-order, 38
nonlinear, 22, 31, 35
second-order, 31, 42, 75, 299
KKF (Kernel Kalman filter), 106, 188, 189, 194
KLMS filters, 256, 257
KMC (Kernel maximum correntropy), 115
KPCA (Kernel principal component analysis), 131

L

LAC (Local analyticity condition), 269, 273, 274
Laplacian kernels, 178, 179, 191, 193
Leaf nodes, 203–205, 209, 214, 215
Learning, 5–8, 107, 130, 145, 170, 210, 217, 224, 225, 233, 257, 290, 292, 309, 314, 319, 320
real-time recurrent, 269
Learning algorithms, 49, 54–57, 131, 269, 281, 283
adaptive, 134, 282
nonlinear, 267, 273
quaternion-valued nonlinear, 268, 283
Learning curves, 238, 258, 261
Learning model, 131, 135, 136
Learning rates, 54, 56, 58, 61, 63, 66, 115, 163, 164, 168, 207, 208, 210, 212, 214, 215, 217, 236, 237
Learning rules, 66, 292, 302–304, 306, 308, 309
Linear adaptive filtering (LAF), 48, 60, 73, 107, 243, 267, 285
Linear combination of basis functions of order, 35, 36
Linear filter, 33, 52–56, 58, 59, 65, 90, 95, 97, 246, 250, 255, 256
identification of, 32, 33
second, 55–57, 65
Linear models, 2, 3, 81, 96, 131, 181, 202, 204, 253, 267, 269, 270, 272, 277, 278, 281, 283, 284, 286
piece-wise, 202, 203, 214, 216, 217
Linear subsystems, 76, 78, 79, 82, 88, 91, 93, 94, 97
adaptive, 90, 92
memory length of, 97
Linear systems, 2, 4, 17, 74, 76, 78, 80
Linear time-invariant (LTI), 48, 81, 315
Linear-in-the-parameters, see LIP
LIP (Linear-in-the-parameters), 6, 9, 16, 18, 19, 24, 32, 40, 72–74, 77, 78, 86, 98
LN and CN filters, 18, 23, 34, 40
Long-term synaptic plasticity, see LTSP
Look-up table (LUT), 49, 52, 53
Lorenz attractor, 215–217, 282
Lorenz signal, 282, 283
Loss functions, 130, 155, 163, 166, 182, 184
LTD (Long-term depression), 290, 305, 308, 309
LTI (Linear time-invariant), 48, 81, 315
LTP (Long-term potentiation), 290, 305, 308
LTSP (Long-term synaptic plasticity), 9, 289, 290, 292, 297, 302, 308, 309
LUT (Look-up table), 49, 52, 53
Lyapunov function equation, 349, 350

M

Machine learning, 5, 175
Matrix, 49, 52, 54, 60, 74, 128, 129, 139, 142, 152, 153, 155, 156, 161, 164, 165, 167, 174, 175, 338–340
Matrix inversion, 128, 135–137
Maximum correntropy criterion (MCC), 7, 49, 105, 106, 112, 124
Mean square error, see MSE
Memory, filter of, 31, 32
Memory-efficient kernel approximation (MEKA), 134
Memoryless, 48, 52–55, 246, 255, 256
Mercer kernel, 106, 107, 109, 112, 115, 116, 118, 119, 247, 256
Methods, multiple-variance, 18, 38, 39, 41, 43
MISO model, 293, 295, 296, 308
Mixing parameter, 245, 249, 250, 255, 256, 258, 261
Model complexity, 17, 128–130, 132, 137, 139, 140, 145, 157
Model design, 5
Model reduction, 335, 340, 353, 356
Model structure estimation, 73, 99
Model variables, 293, 295, 296, 298
MSE (Mean square error), 6, 30, 38, 49, 58, 61–63, 65, 106, 110, 157, 189, 245, 257, 271, 283, 284
Multidimensional data, 8, 281
Multikernel, 182, 252
Multiple-input/single-output (MISO), 78, 292, 293
Multitask networks, 225, 235, 236
Multitask problems, 8, 226, 227, 239

N

NDP-based adaptive optimal control, 341, 346–348
Neighbors, 150, 161–164, 175, 187, 224, 227, 228, 234, 235, 238
Networks, 5, 7, 49, 151, 161, 162, 165, 170, 173, 185, 223, 224, 227–229, 233, 236–238, 268
Neural networks (NNs), 5, 8, 9, 17, 48, 51, 106, 128, 227, 267–269, 286, 320, 321, 330, 331
Neurons, readout, 269, 274, 275, 286
NIPS (Neural Information Processing Systems), 146
NLMS algorithm, 33, 77, 83, 255
Node predictors, 204, 205, 207, 208
Nonaffine systems, 322, 324, 327–329
Nonlinear acoustic echo cancellation (NAEC), 6, 32, 65, 244, 252, 253
Nonlinear adaptive filter (NAF), 48, 49, 56, 58, 60, 73, 86, 202, 243, 244, 248, 251, 252, 262, 267, 285
Nonlinear distortions, 40, 89, 252–256
Nonlinear dynamical model, 292, 308
Nonlinear dynamics, 9, 284, 290, 292, 295, 297, 308
Nonlinear functions, 48, 52–56, 107, 162, 268, 274, 322
Nonlinear methods, 3, 7, 8
Nonlinear modeling, 4–9, 105, 124, 201, 202, 213
Nonlinear models, 2–4, 8, 33, 48, 86, 87, 95, 97, 99, 106, 150, 202, 225, 230, 251–254
Nonlinear system modeling, 3–5, 9, 105
Nonlinear systems, 2, 3, 5, 6, 9, 16, 18, 19, 23, 32–35, 37, 38, 40, 42, 48, 56, 60, 72, 73, 314, 315
unknown, 4
Nonstationary model, 297, 298, 300
Number
diagonal, 31, 32
positive, 342, 352, 353
Nyström methods, 128, 132, 133

O

ODE system, closed-loop, 347, 355, 356
Online algorithm, 150, 188, 190, 209, 218
Online learning, 5, 151, 152
Optimal control, 314, 317, 320, 326
Optimal control problem, 9, 314, 336–338, 340, 342, 348, 356
Optimal tracking control, 336
Optimal tracking control problem, see OTCP
Order, filter of, 24, 30–32, 36, 41
OTCP (Optimal tracking control problem), 9, 314
Output, desired, 152, 236, 237
Output error, 53–56
Output layer, 275, 277, 279, 281
Output spike trains, 293–298, 305, 306
Output spikes, 294, 297, 298, 304, 306

P

Parameter vector, 57, 76, 77
Parameters, regularization, 123, 155, 156, 166, 167, 176, 191, 193
Partition classifiers, 209–211
best, 209, 210, 212
Partitions, 52, 79–81, 90–92, 95, 98, 201–203, 205, 206, 209, 211
PDE system, 336–340, 342, 353
closed-loop, 341–343, 347, 353, 355
original closed-loop, 342, 345, 353, 355
Performance function, 316, 317
Performance index, 325, 338–340, 346, 348, 352
PI algorithm, 349–353
PI-based adaptive optimal control, 351, 352, 354–356
Power filters, 75, 82, 244
PPSs (Perfect periodic sequences), 6, 17, 18, 31, 33–37, 39, 40, 42, 43

Q

QESNs and AQESNs, 279, 282–285
Quadratic information potential (QIP), 109, 110
Quadratic kernels, 251, 252, 254–256
Quantized KLMS algorithm, 247
Quaternion ESNs, 274
Quaternion least mean square (QLMS), 268, 282
Quaternion variable, 270–272
Quaternions, 8, 267–270, 272–274, 278, 284, 286

R

Random Fourier features, 7, 128, 132–134, 145, 151, 152, 157
RDDs (Resilient distributed datasets), 142
Recurrent neural networks, see RNNs
Regularization factor, 108, 109, 117, 118, 230, 235, 238
Regulation problems, optimal, 314, 316, 319
Reinforcement learning (RL), 9, 313, 314, 336, 337
Reproducing kernel Hilbert space, see RKHS
RKHS (Reproducing kernel Hilbert space), 129, 150, 152, 156, 161, 162, 175, 180, 182, 231, 232, 235
RNNs (Recurrent neural networks), 267–269, 274, 277
RTRL (Real-time recurrent learning), 269

S

SA filtering, 86, 92, 98, 99
SAF approach, 49, 65
SAF architectures, 57, 58, 60, 65
Samples, 17–20, 30, 31, 33–35, 40–42, 49, 54, 79, 80, 85, 95, 109, 115, 116, 118, 119, 164, 165, 188, 194
Sampling, 128, 195
Second-order self-kernels, 294, 298, 299
Selection, 2, 165, 178, 179, 189–191, 202, 225, 232, 243–247, 251, 256
Self-organizing trees, see SOT
Short-term synaptic plasticity, see STSP
Signal processing, 5, 48, 106, 122, 174, 201, 202, 224
Signal processing on graphs (SPoG), 174, 178, 179
Signals, reference, 53–56
Significance-aware (SA), 6, 72, 73, 98
Simulated input-output, 298, 304, 306, 308
Simulations, 1, 41, 90, 119, 121, 123, 151, 170, 171, 193, 229, 248, 259, 298, 300, 301, 304–306
SOT (Self-organizing trees), 7, 201, 204–206, 209, 213, 215, 219
SOTC algorithm, 217, 219
SOTR algorithm, 214–217
Spatiotemporal models, 190, 191
Spiking neuron model, single-output, 298, 304, 305
Spline adaptive filters (SAFs), 6, 49, 52, 56, 57, 60, 65, 66, 213, 233
Spline interpolation scheme, 52–55
STDP function, 304–309
Step changes, 298–300
Step size, 95, 115, 116, 119, 123, 167, 168, 228, 234, 247, 251, 259, 261, 276, 277, 279–281
Structured model, 17, 18
STSP (Short-term synaptic plasticity), 9, 290, 292, 293, 297, 308
Subfilters, 246, 248–251, 254
nonlinear, 246, 249, 252
Suitable models, 2, 3
Support vectors, 156, 157, 159, 160, 248
Synaptic learning rule, 9, 292, 302, 304, 308
Synaptic weight, 304, 305, 309
System dynamics, 9, 313–315, 318–320, 327
System identification, 1, 16, 38, 64, 72, 73, 81, 88, 94, 245, 247, 268
System modeling, 1, 2
System output, 30, 35, 39, 57, 153, 155, 156, 164, 167
System theory, linear, 2

T

Time instants, 17, 150, 162, 163, 185–187, 226, 235, 236, 248, 255, 275
Time kernels, 188, 194
Time slot, 185, 186, 188, 189
Time-invariant, 18–20, 35, 42, 72, 78, 82, 188, 194, 195, 296
Time-invariant graph, 185, 190, 194, 195
TNC algorithm, 217, 219
Tracking error, 315, 316, 325
Training data, 128, 130, 131, 133, 135, 137, 140, 145, 158, 170, 183
Transformation matrix, 131, 135, 137–140

U

UAVs (Unmanned aerial vehicles), 9, 315, 322, 323, 331
Unknown system, 2–4, 6, 30, 33, 43, 63, 86–88, 90–92, 98, 99, 244, 251

V

Value function, 318–321, 325, 327, 340, 349, 350, 356
Volterra adaptive filters (VAFs), 48, 60
Volterra filters (VFs), 6, 17, 19, 20, 23, 31, 33, 38, 40, 41, 75, 82, 97, 98, 244, 245, 247, 248, 251, 253, 254
third-order, 39, 61
triangular representation of, 20, 23, 31
Volterra kernels, 21, 39, 82, 97, 295, 304, 309
Volterra series, 16, 17, 20, 38, 48, 106, 295

W

Weight change, synaptic, 304
Wiener and Hammerstein models, 48
Wiener kernels, 21, 22, 39
Wiener SAF, see WSAF
Wireless sensor networks, see WSNs
WSAF (Wiener SAF), 49, 53, 57–62, 234
WSNs (Wireless sensor networks), 8, 224, 225, 231

Z

Zero-order kernels, 294, 296, 299