A
Adaptive Lung Nodule Screening Benchmark (ALNSB), 264
Anchor graph hashing (AGH), 172
Anchored neighborhood regression (ANR), 73
Approximated message passing (AMP), 43
Architecture style classification, 214, 216, 230
Area under curve (AUC), 242
Augmented Lagrange multipliers (ALM), 146
Autoencoder (AE), 102, 184, 187, 216, 217, 232, 235, 238, 240, 263–266, 268
Automatic speech recognition (ASR), 130
AVIRIS Indiana Pines, 20, 40
B
Baseline encoder (BE), 39, 109
C
Canonical correlation analysis (CCA), 185
Classification, 11–13, 17, 18, 22, 23, 38, 40, 43, 106, 116, 145, 149, 155, 170, 183, 196–198, 205, 214, 227, 246, 265
Classifier, 11, 12, 14, 15, 18, 19, 22, 91, 192, 201, 227, 264, 265, 270
  hyperspectral image
Clustering, 17, 31, 32, 38, 42, 43, 87–89, 92, 97, 98, 101, 103, 104, 106, 107, 112, 114, 116, 117
Collective matrix factorization (CMF), 185
Color attenuation prior, 253
Compressive sensing (CS), 121
Consensus style centralizing autoencoder (CSCAE), 216, 222, 225, 229
Contrast limited adaptive histogram equalization (CLAHE), 257
Convolutional neural networks (CNN), 48, 125, 253
Cutting plane maximum margin clustering (CPMMC), 91
D
Dark channel prior (DCP), 253
Datasets, 19, 23, 87, 93, 107, 108, 159, 163, 164, 167, 226, 227, 254
  daily and sports activities (DSA), 191, 210
Deep
  learning, 1–4, 32, 43, 48, 57, 58, 63, 101, 102, 129, 146, 165–167, 187, 213, 216, 217, 226–229, 242, 246
  learning for clustering, 101, 102
  learning for speech denoising, 130
  networks, 32, 36, 41, 47–49, 72, 73, 81, 106, 130, 170, 184
Deep network cascade (DNC), 64
Deep neural networks (DNN), 131
Deep nonlinear regression, 132
Deeply Optimized Compressive Sensing (DOCS), 123–127, 129
Deeply-Task-specific And Graph-regularized Network (DTAGnet), 106, 115
Densely connected pyramid dehazing network (DCPDN), 257
Dictionary, 10, 13–15, 18, 19, 32, 39, 43, 51, 60, 88, 89, 91, 95, 97, 108, 121, 144, 145, 147, 149, 150, 152, 155, 166
  for feature extraction, 26
Discrete cosine transform (DCT), 59, 122
Discriminative and robust classification, 11, 12
E
Effectiveness, 22, 38, 78, 81, 101, 105, 117, 144, 147, 157, 160, 172, 193, 196–198, 229, 239
Encoder, 42, 123, 146, 150, 170, 171, 173, 187, 217, 232, 266, 267
Encoder and decoder network, 266, 267
F
Face recognition (FR), 200, 244
Fashion style classification dataset, 226
G
Gated recurrent unit (GRU)
Gaussian mixture model (GMM), 87, 101
Generalized multiview analysis (GMA), 185
Generative adversarial networks (GAN), 263
Gradient descent, 32, 167
H
Hard thrEsholding Linear Unit (HELU), 35, 36, 170
Hashing methods
  neural-network hashing (NNH), 172
  sparse neural-network hashing (SNNH), 172
Hidden
  layer neural network, 146
  layers, 39, 106, 109, 131, 134–136, 146, 151, 171, 173, 217, 232, 235, 240
Human Visual System (HVS), 254
Hybrid convolutional-recursive neural network (HCRNN), 199, 201, 206–209
Hybrid neural network for action recognition, 198
Hybrid subjective testing set (HSTS), 254
I
Inputs, 133, 135, 187, 191, 204, 218, 228, 232, 259, 263, 266, 268
Iterative shrinkage and thresholding algorithm (ISTA), 33, 43, 52, 124
J
Joint
  optimization, 11, 13, 18, 26, 37, 54, 95, 97, 99, 101, 103, 123, 167, 257
K
Kernelized supervised hashing (KSH), 172
L
Layer, 48, 52–54, 57, 60, 102, 104, 106, 107, 116, 131–133, 135, 169, 191, 192, 202, 204, 218, 219, 229, 235, 240, 242, 259
Learned iterative shrinkage and thresholding algorithm (LISTA), 33, 35, 43, 48, 52, 75, 124, 173, 176
Learning, 18, 19, 39, 58, 73, 74, 81, 101, 102, 123, 143, 147, 149, 165, 186, 187, 189, 190, 196, 197, 199, 206, 209, 218, 228
  deep, 1–4, 32, 43, 48, 57, 58, 63, 101, 102, 129, 146, 165–167, 187, 213, 216, 217, 226–229, 242, 246
Linear discriminant analysis (LDA), 88
Linear regression classification (LRC), 157
Local contrast normalization (LCN), 202
Local Coordinate Coding (LCC), 149
Locality-sensitive hashing (LSH), 170
Logistic regression classifier, 26
Loss function, 10, 14, 16, 38, 88, 90, 105, 235, 254–256, 260
  quadratically smoothed, 93
Lung nodule classification, 265
M
Marginalized denoising dictionary learning (MDDL), 143, 144, 148, 152, 154, 155, 157, 159, 160, 162, 164
Marginalized denoising transformation, 148, 152, 165
Maximum margin clustering (MMC), 91, 105
Mean average precision (MAP), 173, 254
Mean squared error (MSE), 53, 133
N
Network, 33, 36, 39, 48, 51, 53, 54, 60, 74, 75, 77, 78, 81, 83, 108, 124, 132–135, 253, 264–268
  compression
Neural network, 10, 48, 51, 57, 73–76, 106, 122, 124, 130, 131, 151, 199, 204, 252, 265
Noise, 31, 49, 57, 59, 69, 88, 112, 134, 137–139, 144, 145, 147, 148, 150, 159, 160, 164–166, 223, 224, 228, 253, 254
Nonnegative matrix factorization (NMF), 130, 139
Normalized mutual information (NMI), 94, 95, 108
O
Overall accuracy (OA), 20
P
Parallel autoencoders learning, 239
Parameter-sensitive hashing (PSH), 172
Patch, 38, 39, 51, 53, 59, 61, 74, 199, 201–205, 209, 216, 218, 222, 223, 230, 255
Principal component analysis (PCA), 88
R
Rank-constrained group sparsity autoencoder (RCGSAE), 216, 222, 224
Real-world task-driven testing set (RTTS), 254
Reconstruction loss, 14, 107
Rectified linear unit (ReLU), 35, 54, 76, 170
Recurrent neural networks (RNN)
Restricted Boltzmann machines (RBM), 131
Restricted isometry property (RIP), 43
S
Shallow neural networks (SNN), 130
Sparse
  codes, 10–12, 14, 31–33, 39, 51, 53, 88, 89, 91, 97, 98, 102, 104, 106, 112, 167
  coding, 2–4, 13, 16, 31–33, 36, 38, 41–43, 48, 50–52, 54, 60, 61, 64, 67, 72, 73, 75, 77, 87–89, 97, 98, 101, 105, 106, 109, 111, 149
  coding domain expertise, 103, 117
  coding for clustering, 102
  coding for feature extraction, 12
  representation, 10, 13, 48, 50, 54, 72, 73, 77, 89, 121, 126, 144–146, 149
Sparse coding based network (SCN), 54, 55, 57
Spectral hashing (SH), 170
Superresolution
Support vector machine (SVM), 10, 90
Synthetic objective testing set (SOTS), 254
T
Task-specific And Graph-regularized Network (TAGnet), 103–106, 109, 112