A
Accuracy, 42, 43, 57, 97, 98, 229, 231, 238, 248, 268
Acoustic echo cancellation (AEC), 32, 65, 72–74, 76, 93, 94, 244, 245, 252, 254, 255
Adaptation, 48, 57, 73, 78, 80–82, 90, 97, 99, 106, 233, 234, 244–247, 250, 251, 254, 256, 258
  stochastic gradient, 53–56
Adaptive dynamic programming (ADP), 336, 337
Adaptive filtering, 79, 202
  quaternion-valued nonlinear, 287
Adaptive filters, 59, 72, 73, 77, 81, 94, 95, 119, 122, 123, 213, 223, 233, 244–247, 250–252, 269
  performance of nonlinear, 244, 251
Adaptive optimal control, 341, 348
Addition, 19, 23, 25, 33, 35, 48, 57, 58, 60, 61, 74, 76, 110, 115, 295
Affine projection algorithms (APA), 106, 251
Algorithms
  adaptive identification, 18, 42
  nonlinear dynamical modeling, 298
APA (Affine projection algorithms), 106, 251
Architectures, 48, 53–58, 60, 61, 63, 66, 142, 233, 274
Autoregressive model, 185, 190
B
Basis functions, 17–19, 21, 23–26, 28, 30–37, 39, 40, 42, 43, 50, 52, 132, 175, 215, 217, 294, 295
Basis functions of order, 24, 33, 35, 36
Bayesian information criterion (BIC), 37, 40
C
Cascade group model, see CGMs
Changes, input/output spikes, 303
Chebyshev polynomials, 27, 28, 75
Class, 17–20, 24, 48, 72, 81, 129, 139, 225–228, 231, 268, 269, 322, 336
CLAss-specific kernel (CLASK), 134, 139
Coefficients, 17, 30, 31, 35–37, 40, 42, 48, 90, 150, 156, 161, 177, 180, 181, 192, 245, 255, 256
Combination, 18, 56, 136, 233, 244, 245, 248–251, 256, 258, 272
Combine-then-adapt (CTA), 162, 233
Complexity, 35, 93, 95, 98, 128, 131, 134–136, 139, 156, 182, 186, 188, 189, 251
Components, 74, 84, 85, 154, 165, 244, 249, 250, 271, 272
Computational complexity, 17, 18, 32, 35, 81, 85, 95, 97–99, 132, 133, 135–137, 141, 142, 190, 206–208, 211–213
Compute unified device architecture (CUDA), 141, 142
Conditions, 19, 29, 33–35, 40, 58, 59, 168, 227, 268, 273, 324, 326, 328, 343, 354
Connectivity, functional, 308
Construction, 23, 24, 26, 29, 36, 39, 132, 137, 138, 208, 212
Continuous stirred tank reactor (CSTR), 64, 65
Continuous-time systems, 314, 337
Control, 72, 75, 112, 119, 128, 131, 132, 202, 314, 318, 319, 324, 327, 347, 349, 355
Convex filter combinations, 73, 97, 99
Correntropy induced metric (CIM), 111, 112
Cost function, 53, 72, 112, 162, 163, 226, 273, 275, 277, 280, 314, 318
Covariance matrix, sample, 136, 194
D
Data
  sets, 63–65, 109, 139, 152, 156–160, 168, 170, 194, 195, 218, 219, 224, 238
Dictionary, 109, 115, 150, 163, 164, 179, 183, 193, 233, 238, 247, 248, 259, 261
Diffusion adaptation (DA), 57
Dimension, 19, 129, 141, 151, 152, 154, 155, 160, 163, 164, 166, 170, 177, 278, 283, 284, 337, 338, 340
Distributed filtering, nonlinear, 223
Distributed parameter systems (DPSs), 335–337
E
Echo-return loss enhancement (ERLE), 94, 96
Error signal, 53, 72, 77, 78, 85, 87, 91, 92, 245, 250
Estimated coefficients, 35, 36, 307
Experiments, 40, 60, 61, 64, 93, 97, 141, 157, 159–161, 168, 170, 184, 185, 194, 214, 219, 236, 237
F
Feature space, 107, 128, 129, 131, 132, 201–204, 214, 256, 257
Feature vectors, 129, 136, 139, 145, 209, 210, 212, 213, 218, 248
Feedforward kernel, first-order, 304, 307
Filter, 16–25, 30–37, 39, 40, 42, 43, 73, 96, 98, 119, 233, 234, 244, 245, 248–252, 256, 261, 262
Functional expansion block (FEB), 246, 252
Functional link adaptive filters, see FLAFs
Functional link artificial neural networks (FLANNs), 17
G
Gaussian processes (GPs), 130, 131
Generalized correntropy shape parameter, 115, 116, 118
Generalized kernel maximum correntropy, see GKMC
Generalized kernel recursive maximum correntropy, see GKRMC
Generalized Volterra model, see GVM
GKMC (Generalized kernel maximum correntropy), 115, 119, 121
GKRMC (Generalized kernel recursive maximum correntropy), 117, 121, 123
GPs (Gaussian processes), 130, 131
GPUs (Graphics processing units), 141, 142
Gradient, 152, 163, 164, 166, 208, 228, 230, 232, 276, 278, 280, 281, 352
Gradient descent adaptation algorithms, 17
Gradient descent algorithms, fast convergence of, 17, 18
Gradient noise, 88, 91, 97, 98, 244, 245, 250, 252–254, 256, 258
Graphics processing units, see GPUs
Graphs, 168–170, 174, 175, 177–179, 181, 183, 185, 186, 190, 192, 195, 224, 227, 236
H
I
Identification, 16–18, 20, 32, 33, 36, 38–43, 49, 58, 61, 65, 91, 93, 261, 262
Information theoretic learning, see ITL
Input, 16, 17, 30, 38, 39, 53, 64, 128–131, 275, 290, 292–298, 300, 301, 303–305, 308, 309
Input data, 107, 109, 115, 129, 131, 151
Input samples, 17, 22, 25, 34, 35, 42, 52, 57, 245, 247
Input signal, 16–19, 25, 27, 28, 35, 40, 42, 52, 60, 61, 63, 73, 78, 79, 90, 119, 255, 261
Input/output relationship, 73, 76
Iteration, 34, 57, 107, 115, 116, 150, 155–157, 192, 220, 221, 224, 229, 258, 261, 318
ITL (Information theoretic learning), 106, 109, 110
K
KAFs (Kernel adaptive filter), 105, 106, 108, 114, 119, 124, 151, 244, 245, 247, 251–254, 256, 260, 261, 263
Kernel adaptive filter, see KAFs
Kernel function, 111, 112, 128–130, 132–135, 138, 150, 153, 155, 156, 231, 232, 256, 297, 305
Kernel Kalman filter, see KKF
Kernel LMS (KLMS), 106–108, 114, 115, 119, 123, 151, 152, 157, 158, 162, 231, 247, 251, 258
Kernel maximum correntropy (KMC), 115
Kernel methods, 106, 107, 109, 128–130, 132, 135, 141, 150, 175, 231, 232, 247
Kernel recursive maximum correntropy (KRMC), 117, 119
Kernel subspace approximation, 134, 144
Kernels, 32, 36, 97, 105, 106, 110, 111, 177, 178, 192, 193, 245, 247, 251, 252, 255–257, 294, 295, 297–300, 304–307
KMC (Kernel maximum correntropy), 115
KPCA (Kernel principal component analysis), 131
L
LAC (Local analyticity condition), 269, 273, 274
Learning, 5–8, 107, 130, 145, 170, 210, 217, 224, 225, 233, 257, 290, 292, 309, 314, 319, 320
Learning rates, 54, 56, 58, 61, 63, 66, 115, 163, 164, 168, 207, 208, 210, 212, 214, 215, 217, 236, 237
Linear combination of basis functions of order, 35, 36
Linear filter, 33, 52–56, 58, 59, 65, 90, 95, 97, 246, 250, 255, 256
  identification of, 32, 33
Linear models, 81, 96, 131, 181, 202, 204, 253, 267, 269, 270, 272, 277, 278, 281, 283, 284, 286
Linear time-invariant (LTI), 48, 81, 315
Linear-in-the-parameters, see LIP
LIP (Linear-in-the-parameters), 16, 18, 19, 24, 32, 40, 72–74, 77, 78, 86, 98
Long-term synaptic plasticity, see LTSP
Look-up table (LUT), 49, 52, 53
LTI (Linear time-invariant), 48, 81, 315
LUT (Look-up table), 49, 52, 53
Lyapunov function equation, 349, 350
M
Matrix, 49, 52, 54, 60, 74, 128, 129, 139, 142, 152, 153, 155, 156, 161, 164, 165, 167, 174, 175, 338–340
Mean square error, see MSE
Memory, filter of, 31, 32
Memory-efficient kernel approximation (MEKA), 134
Mercer kernel, 106, 107, 109, 112, 115, 116, 118, 119, 247, 256
Model design
Model structure estimation, 73, 99
MSE (Mean square error), 30, 38, 49, 58, 61–63, 65, 106, 110, 157, 189, 245, 257, 271, 283, 284
Multidimensional data, 281
Multiple-input/single-output (MISO), 78, 292, 293
N
Neighbors, 150, 161–164, 175, 187, 224, 227, 228, 234, 235, 238
Networks, 49, 151, 161, 162, 165, 170, 173, 185, 223, 224, 227–229, 233, 236–238, 268
Neural networks (NNs), 17, 48, 51, 106, 128, 227, 267–269, 286, 320, 321, 330, 331
NIPS (Neural Information Processing Systems), 146
Nodes, 142, 145, 150, 160–165, 167, 168, 170, 173, 174, 184, 202–207, 209–212, 214, 231, 232, 237
Nonlinear acoustic echo cancellation (NAECs), 32, 65, 244, 252, 253
Nonlinear adaptive filter (NAF), 48, 49, 56, 58, 60, 73, 86, 202, 243, 244, 248, 251, 252, 262, 267, 285
Nonlinear dynamical model, 292, 308
Nonlinear methods
Nonlinear models, 2–4, 33, 48, 86, 87, 95, 97, 99, 106, 150, 202, 225, 230, 251–254
Nonlinear system modeling, 3–5, 105
Nonlinear systems, 16, 18, 19, 23, 32–35, 37, 38, 40, 42, 48, 56, 60, 72, 73, 314, 315
  unknown
O
Optimal tracking control, 336
Optimal tracking control problem, see OTCP
OTCP (Optimal tracking control problem), 314
P
Partitions, 52, 79–81, 90–92, 95, 98, 201–203, 205, 206, 209, 211
Performance function, 316, 317
Q
Quadratic information potential (QIP), 109, 110
Quantized KLMS algorithm, 247
Quaternion least mean square (QLMS), 268, 282
R
RDDs (Resilient distributed dataset), 142
Recurrent neural networks, see RNNs
Reproducing kernel Hilbert space, see RKHS
RKHS (Reproducing kernel Hilbert space), 129, 150, 152, 156, 161, 162, 175, 180, 182, 231, 232, 235
RTRL (Real-time recurrent learning), 269
S
Samples, 17–20, 30, 31, 33–35, 40–42, 49, 54, 79, 80, 85, 95, 109, 115, 116, 118, 119, 164, 165, 188, 194
Schemes, 150, 151, 155, 156, 162–164, 171, 185, 233, 244, 245, 248–251, 254–258, 261
Selection, 165, 178, 179, 189–191, 202, 225, 232, 243–247, 251, 256
Self-organizing trees, see SOT
Short-term synaptic plasticity, see STSP
Signal processing on graphs (SPoG), 174, 178, 179
Signals, reference, 53–56
Significance-aware (SA), 72, 73, 98
Simulations, 41, 90, 119, 121, 123, 151, 170, 171, 193, 229, 248, 259, 298, 300, 301, 304–306
Spatiotemporal models, 190, 191
Spiking neuron model, single-output, 298, 304, 305
Spline adaptive filters (SAFs), 49, 52, 56, 57, 60, 65, 66, 213, 233
Spline interpolation scheme, 52–55
Step size, 95, 115, 116, 119, 123, 167, 168, 228, 234, 247, 251, 259, 261, 276, 277, 279–281
Suitable models
System identification, 16, 38, 64, 72, 73, 81, 88, 94, 245, 247, 268
System modeling
System theory, linear
T
Time instants, 17, 150, 162, 163, 185–187, 226, 235, 236, 248, 255, 275
Time-invariant, 18–20, 35, 42, 72, 78, 82, 188, 194, 195, 296
Training data, 128, 130, 131, 133, 135, 137, 140, 145, 158, 170, 183
U
Unknown system, 2–4, 30, 33, 43, 63, 86–88, 90–92, 98, 99, 244, 251
V
Volterra adaptive filter (VAFs), 48, 60
Volterra filters (VFs), 17, 19, 20, 23, 31, 33, 38, 40, 41, 75, 82, 97, 98, 244, 245, 247, 248, 251, 253, 254
  triangular representation of, 20, 23, 31
W
Weight change, synaptic, 304
Wiener and Hammerstein models, 48
Wireless sensor networks, see WSNs
WSNs (Wireless sensor networks), 224, 225, 231
Z