Index

A

Action recognition, 4, 183, 198–201, 203, 205, 209, 210
conventional, 209
Adaptive Lung Nodule Screening Benchmark (ALNSB), 264
Anchor graph hashing (AGH), 172
Anchored neighborhood regression (ANR), 73
Approximated message passing (AMP), 43
Architecture style classification, 214, 216, 230
dataset, 227
Area under curve (AUC), 242
Augmented Lagrange multipliers (ALM), 146
Autoencoder (AE), 102, 184, 187, 216, 217, 232, 235, 238, 240, 263–266, 268
rPAE, 242
Automatic speech recognition (ASR), 130
Auxiliary loss, 106
AVIRIS Indiana Pines, 20, 40
classification, 20
dataset, 41

B

Baseline encoder (BE), 39, 109
Bicubic interpolation, 48, 55, 57–61, 72, 76, 77
Bounded Linear Unit (BLU), 165, 167, 169

C

Canonical correlation analysis (CCA), 185
Cascade, 48, 55, 57, 63, 199, 201, 257
Cascade of SCNs (CSCN), 49, 61, 63, 64, 69, 72
deeper network, 55
Classification, 4, 9, 11–13, 17, 18, 22, 23, 38, 40, 43, 106, 116, 145, 149, 155, 170, 183, 196–198, 205, 214, 227, 246, 265
Classifier, 11, 12, 14, 15, 18, 19, 22, 91, 192, 201, 227, 264, 265, 270
domain, 259
hyperspectral image, 9
Cluster label, 
predicted, 38, 90, 92, 105
true, 38, 90
Clustering, 4, 17, 31, 32, 38, 42, 43, 87–89, 92, 97, 98, 101, 103, 104, 106, 107, 112, 114, 116, 117
discriminative, 87, 88, 98, 102, 109
joint optimization, 88
loss of, 90, 91
CMU MultiPIE, 108, 109, 114
CMU PIE, 42, 157, 158, 161
dataset, 42, 161
Collective matrix factorization (CMF), 185
Color attenuation prior, 253
Compressive sensing (CS), 121
Consensus style centralizing autoencoder (CSCAE), 216, 222, 225, 229
Contrast limited adaptive histogram equalization (CLAHE), 257
Convolutional layers, 4, 53, 76, 77, 265
Convolutional neural networks (CNN), 2, 48, 125, 253
Corrupted version, 151, 187–189, 221
Cutting plane maximum margin clustering (CPMMC), 91

D

Dark channel prior (DCP), 253
Datasets, 19, 23, 87, 93, 107, 108, 159, 163, 164, 167, 226, 227, 254
3D, 200, 206
AR, 157
architecture style, 230
BSD500, 63
CIFAR10, 172
CMU MultiPIE, 108
COIL-100, 157, 162
COIL20, 108
daily and sports activities (DSA), 191, 210
Extended YaleB, 157, 159
fashion, 214
ILSVRC2013, 63
IXMAS, 210
MNIST, 38, 40, 107–109, 111, 112, 162, 163
MSCNN, 253, 259, 260
MSR-Action3D, 206–208
MSR-Gesture3D, 206, 207
ORL, 157
RESIDE, 253, 260
SUN, 228
Decoder, 123, 125, 146, 151, 187, 266
Deep, 
architectures, 38, 43, 48, 103, 109, 125, 191, 201
clustering models, 117
encoders, 38–40, 42
learning, 1–4, 32, 43, 48, 57, 58, 63, 101, 102, 129, 146, 165–167, 187, 213, 216, 217, 226–229, 242, 246
learning for clustering, 101, 102
learning for speech denoising, 130
models, 3, 109, 111, 126, 200, 201
network cascade, 64
networks, 3, 4, 32, 36, 41, 47–49, 72, 73, 81, 106, 130, 170, 184
neural network, 2, 130, 132, 139, 210
“deep” competitors, 173
Deep network cascade (DNC), 64
Deep neural networks (DNN), 131
Deep nonlinear regression, 132
“deep unfolding”, 34
Deeply Optimized Compressive Sensing (DOCS), 123–127, 129
loss function, 125
Deeply-Task-specific And Graph-regularized Network (DTAGnet), 106, 115
Dehazing, 252, 254, 257
Denoising autoencoder (DAE), 131, 221, 228
Densely connected pyramid dehazing network (DCPDN), 257
Depth videos, 198–200, 207
Dictionary, 2, 10, 13–15, 18, 19, 32, 39, 43, 51, 60, 88, 89, 91, 95, 97, 108, 121, 144, 145, 147, 149, 150, 152, 155, 166
atoms, 11, 12, 19, 31, 146, 157
for feature extraction, 26
learned, 13, 19, 32, 89, 102, 145, 146
learning, 13, 50, 88, 122, 143–145, 147–149, 152, 157, 165
Discrete cosine transform (DCT), 59, 122
Discriminative, 87, 88, 106, 148, 152, 165, 188
clustering, 87, 88, 98, 102, 109
coefficients, 152
effects, 10
information, 146, 148, 165, 188, 189
Discriminative and robust classification, 11, 12
Domain expertise, 48

E

Effectiveness, 22, 38, 78, 81, 101, 105, 117, 144, 147, 157, 160, 172, 193, 196–198, 229, 239
Encoder, 42, 123, 146, 150, 170, 171, 173, 187, 217, 232, 266, 267
Encoder and decoder network, 266, 267

F

Face recognition (FR), 200, 244
Facial images, 93, 107, 231, 232, 234, 239, 244
Familial features, 231–233, 235, 243, 244
in kinship verification, 232
Family faces, 232–235
Family membership recognition (FMR), 231, 239, 243, 244
Fashion style classification, 214, 216, 229
Fashion style classification dataset, 226
Features, 
descriptors, 214–216, 221, 222
extraction, 11, 12, 17–19, 102, 143, 146, 259
extraction network, 258
for clustering, 88, 89, 101
learning, 103, 109, 146, 147, 167, 184, 200, 242
learning layers, 102
maps, 36, 202, 205, 259, 265
network, 266
representation, 3, 192, 214, 217, 226, 227, 232
robustness, 202
space, 10, 49, 50, 73, 186, 210, 267, 271
vector, 12, 192, 193, 199, 202, 232, 234, 240

G

Gated recurrent unit (GRU), 2
Gaussian mixture model (GMM), 87, 101
Generalized multiview analysis (GMA), 185
Generative, 87
Generative adversarial networks (GAN), 263
Generator network, 266, 268
Generic SCN, 49, 57, 58, 67, 78
Gradient descent, 32, 167

H

Hard thrEsholding Linear Unit (HELU), 35, 36, 170
Hashing codes, 176
Hashing methods, 
neural-network hashing (NNH), 172
sparse neural-network hashing (SNNH), 172
Hazy images, 253, 254, 257, 259
Hidden, 
layer features, 219
layer neural network, 146
layers, 2, 39, 106, 109, 131, 134–136, 146, 151, 171, 173, 217, 232, 235, 240
units, 151, 165, 187, 240, 242
HR image, 49–51, 53, 55, 73–76
Human Visual System (HVS), 254
Hybrid convolutional-recursive neural network (HCRNN), 199, 201, 206–209
Hybrid neural network for action recognition, 198
Hybrid subjective testing set (HSTS), 254

I

Inputs, 1, 133, 135, 187, 191, 204, 218, 228, 232, 259, 263, 266, 268
Iterative shrinkage and thresholding algorithm (ISTA), 33, 43, 52, 124

J

Joint, 
EMC, 95, 97
MMC, 95, 97
optimization, 11, 13, 18, 26, 37, 54, 95, 97, 99, 101, 103, 123, 167, 257

K

Kernelized supervised hashing (KSH), 172
Kinship pairs, 239, 243
negative, 240
positive, 240
Kinship verification, 230–233, 239, 243, 244

L

Label information, 184, 189, 190, 193, 197, 198
Layer, 4, 48, 52–54, 57, 60, 102, 104, 106, 107, 116, 131–133, 135, 169, 191, 192, 202, 204, 218, 219, 229, 235, 240, 242, 259
biases, 132
convolution, 53, 267
deconvolution, 267
linear, 53
weight, 132, 173
LDA Hash (LH), 172
Learned, 
atoms, 13, 32, 89, 102
dictionary, 13, 19, 32, 89, 102, 145, 146
shared features, 193, 196, 198
Learned iterative shrinkage and thresholding algorithm (LISTA), 4, 33, 35, 43, 48, 52, 75, 124, 173, 176
Learning, 2, 18, 19, 39, 58, 73, 74, 81, 101, 102, 123, 143, 147, 149, 165, 186, 187, 189, 190, 196, 197, 199, 206, 209, 218, 228
capacity, 32, 48, 101, 109, 172
dictionary, 13, 50, 88, 122, 143–145, 147–149, 152, 157, 165
features, 103, 109, 146, 147, 167, 184, 200, 242
rate, 16, 19, 38, 77, 108, 172, 255, 256, 259
Linear discriminant analysis (LDA), 88
Linear layers, 53, 60
Linear regression classification (LRC), 157
Linear scaling layers, 35, 53, 104
Linear search hashing (LSH), 170
Local contrast normalization (LCN), 202
Local Coordinate Coding (LCC), 149
Locality constraint, 143, 144, 147–150, 164, 165
Logistic regression classifier, 26
Loss, 73, 76, 93, 105, 254, 255, 259
Loss function, 10, 14, 16, 38, 88, 90, 105, 235, 254–256, 260
classical, 13
for SSIM, 255
joint, 14
quadratically smoothed, 93
Low-rankness, 2–4, 143
LR, 
feature space, 49, 50, 73
image, 47, 48, 50, 57, 58, 67, 73–76
patch, 50–53, 60, 73
Lung nodule, 263, 264, 270
Lung nodule classification, 265

M

Manga dataset, 226, 227, 229
Manga style classification, 214, 216, 219, 222, 229
dataset, 226
Marginalized denoising dictionary learning (MDDL), 143, 144, 148, 152, 154, 155, 157, 159, 160, 162, 164
Marginalized denoising transformation, 148, 152, 165
Marginalized stacked denoising autoencoder (mSDA), 147, 187, 197, 198, 228
Max, 
pooling, 36, 37, 43
unpooling, 36, 37, 43
Maximum margin clustering (MMC), 91, 105
Mean average precision (MAP), 173, 254
Mean precision (MP), 173
Mean squared error (MSE), 53, 133

N

Nearest neighbor, 218, 221, 229, 232–234, 244, 245
Network, 33, 36, 39, 48, 51, 53, 54, 60, 74, 75, 77, 78, 81, 83, 108, 124, 132–135, 253, 264–268
compression, 3
formulation, 32
implementation, 33, 51
unified, 74
Network cascade, 54, 55, 72
deep, 64
Network in network, 53
Neural network, 1, 10, 48, 51, 57, 73–76, 106, 122, 124, 130, 131, 151, 199, 204, 252, 265
deep, 2, 130, 132, 139, 210
Neurons, 3, 32, 125, 146, 151, 170, 265, 268
Nodules, 263–265, 268
analyzer, 264, 265, 269
base, 265
random, 266, 267
“seed”, 265
Noise, 31, 49, 57, 59, 69, 88, 112, 134, 137–139, 144, 145, 147, 148, 150, 159, 160, 164–166, 223, 224, 228, 253, 254
Nonnegative matrix factorization (NMF), 130, 139
Normalized mutual information (NMI), 94, 95, 108

O

Object detection, 1, 146, 252, 254, 258, 260
in hazy images, 254
Overall accuracy (OA), 20

P

Parallel autoencoders, 231, 232, 234, 235, 238, 242
Parallel autoencoders learning, 239
Parameter-sensitive hashing (PSH), 172
Patch, 38, 39, 51, 53, 59, 61, 74, 199, 201–205, 209, 216, 218, 222, 223, 230, 255
LR, 50–53, 60, 73
Peak signal-to-noise ratio (PSNR), 62, 64, 77, 78, 254, 256
Principal component analysis (PCA), 88
Private features, 188, 193, 196–198

R

Rank-constrained group sparsity autoencoder (RCGSAE), 216, 222, 224
Real-world task-driven testing set (RTTS), 254
Reconstruction loss, 14, 107
Rectified linear activation, 132, 135, 136
Rectified linear unit (ReLU), 4, 35, 54, 76, 170
Recurrent neural networks (RNN), 2
Regularized parallel autoencoders (rPAE), 231, 238–240, 243
hidden layer, 239
Restricted Boltzmann machines (RBM), 131
Restricted isometry property (RIP), 43
Robustness, 9, 57, 67, 72, 117, 137, 160, 171, 184
features, 202
ROC curves, 242

S

Seed images, 264, 265, 267, 268
limited, 264, 271
Seed nodules, 265, 266, 268, 270
Shallow neural networks (SNN), 130
Shojo, 214, 226
Siamese network, 170, 171, 173
Sparse, 
approximation, 31, 32, 38, 43, 125, 167
codes, 10–12, 14, 31–33, 39, 51, 53, 88, 89, 91, 97, 98, 102, 104, 106, 112, 167
coding, 2–4, 13, 16, 31–33, 36, 38, 41–43, 48, 50–52, 54, 60, 61, 64, 67, 72, 73, 75, 77, 87–89, 97, 98, 101, 105, 106, 109, 111, 149
coding domain expertise, 103, 117
coding for clustering, 102
coding for feature extraction, 12
representation, 10, 13, 48, 50, 54, 72, 73, 77, 89, 121, 126, 144–146, 149
Sparse coding based network (SCN), 54, 55, 57
cascade, 49, 54, 55
model, 49, 53–55, 57, 60, 72
Sparsity, 2–4, 10, 19, 93, 143–145, 149, 150, 176
Spectral hashing (SH), 170
Speech denoising, 130, 131, 140, 144
SR, 
inference, 75–78
inference modules, 74–78, 81
methods, 59, 64, 69, 71, 72, 77, 78
performance, 67, 74, 77, 81
results, 49, 54, 55, 64, 70, 81
Stochastic gradient descent (SGD), 11, 15, 16, 34, 37, 91, 101, 106, 147
Structural similarity (SSIM), 64, 77, 78, 254–256
performance, 255
Style, 
centralizing, 220, 229
centralizing autoencoder, 216, 217, 221, 228, 246
classification, 213, 214, 216, 229, 246
descriptor, 214, 215, 227
level, 216, 218–220, 226, 229
Style centralizing autoencoder (SCAE), 216–219, 221, 222, 228, 229, 246
inputs, 216, 228
Superresolution, 4
Support vector machine (SVM), 10, 90
Synthetic objective testing set (SOTS), 254

T

Task-specific And Graph-regularized Network (TAGnet), 103–106, 109, 112

U

Unlabeled samples, 11, 12, 14, 20
Upscaling factors, 49, 63, 73, 77, 78, 81

W

Weak style, 214, 216, 220, 221, 246
Weighted sum of loss, 14