Index
Symbols
- @app.route() function, Building a Flask Service
- @torch.jit.script_method, Scripting
- __call__, Custom Transform Classes, Frequency masking
- __getitem__, Building an ESC-50 Dataset, A New Dataset
- __init__, SoX Effect Chains, TorchScript Limitations
- __len__, Building an ESC-50 Dataset
- __repr__, Custom Transform Classes
A
- AdaGrad, Optimizing
- Adam, Optimizing, A CNN Model for ESC-50
- AdaptiveAvgPool layer, Pooling
- AdaptiveMaxPool layer, Pooling
- add_graph() function, Sending Data to TensorBoard
- adversarial samples, Adversarial Samples-Defending Against Adversarial Attacks
- AlexNet, AlexNet, Gradient Checkpointing, Tracing
- Amazon Web Services (see AWS)
- AMD, GPU
- Anaconda, Anaconda
- Apache MXNet, What About TensorFlow?
- append_effect_to_chain, SoX Effect Chains
- ARG, Building the Docker Container
- argmax() function, Tensor Operations, Making Predictions, Classifying Tweets, Adversarial Samples
- arXiv, Conclusion
- arXiv Sanity Preserver, Conclusion
- attacks
- attention, Paying Attention
- audio (see sound)
- autoencoder, Introduction to Super-Resolution
- AutoML, Other Architectures Are Available!
- AWS (Amazon Web Services), Amazon Web Services, Deploying on Kubernetes
- Azure, Cloud Providers, Azure, Deploying on Kubernetes
- Azure Blob Storage, Local Versus Cloud Storage
- Azure Marketplace, Azure
B
- back translation, Back Translation
- backpropagation through time, Recurrent Neural Networks
- backward() function, Training
- BadRandom, Fixing a Slow Transformation, mixup
- BatchNorm layer, BatchNorm, Transfer Learning with ResNet, ESRGAN
- batch_size, Building Validation and Test Datasets
- BCEWithLogitsLoss() function, Updating the Training Loop
- BertLearner.from_pretrained_model, FastBERT
- best_loss, Finding That Learning Rate
- Bidirectional Encoder Representations from Transformers (BERT), BERT-FastBERT, What to Use?
- biLSTM (bidirectional LSTM), biLSTM
- Bitcoin, GPU
- black-box attacks, Black-Box Attacks
- BookCorpus dataset, BERT
- Borg, Deploying on Kubernetes
- broadcasting, tensor, Tensor Broadcasting
C
- C++ compiler, Obtaining libTorch and Hello World
- C++ library (see libTorch)
- Caffe2, Conclusion
- CAM (class activation mapping), Class Activation Mapping-Class Activation Mapping
- Canadian Institute for Advanced Research (CIFAR-10), It’s 3 a.m. What Is Your Data Doing?
- Chainer, PyTorch
- challenges with image classification, Traditional Challenges-Building Validation and Test Datasets
- checkbox, Google Cloud Platform
- checkpoint_sequential, Gradient Checkpointing
- CIFAR-10 (Canadian Institute for Advanced Research), It’s 3 a.m. What Is Your Data Doing?
- CIFAR-10 dataset, Adversarial Samples
- class activation mapping (CAM), Debugging PyTorch Models, Class Activation Mapping-Class Activation Mapping
- classifier, Transfer Learning with ResNet
- cloud platforms, Deep Learning in the Cloud-Which Cloud Provider Should I Use?
- cloud storage
- CMake, Obtaining libTorch and Hello World
- CNNs (see convolutional neural networks)
- Colaboratory (Colab), Google Colaboratory
- collisions, mixup
- color spaces
- ColorJitter, Torchvision Transforms
- Compute Unified Device Architecture (CUDA), Download CUDA, Checking Your GPU
- conda, SoX and LibROSA, Building a Flask Service
- Conv2d layer, Convolutions-Convolutions, Introduction to Super-Resolution
- convolutional kernel, Convolutions
- convolutional neural networks (CNNs), Convolutional Neural Networks-Conclusion
- AlexNet, AlexNet
- architectures, History of CNN Architectures-Other Architectures Are Available!
- convolutions, Convolutions-Convolutions
- dropout, Dropout
- ESC-50 model, A CNN Model for ESC-50
- example, Our First Convolutional Model
- history, Deep Learning in the World Today
- Inception/GoogLeNet, Inception/GoogLeNet
- pooling, Pooling
- ResNet, ResNet
- VGG, VGG
- convolutions, Convolutions-Convolutions
- COPY, Building the Docker Container
- copyfileobj(), Local Versus Cloud Storage
- CPU, CPU/Motherboard, Fixing a Slow Transformation
- CrossEntropyLoss() function, Loss Functions, Updating the Training Loop, A CNN Model for ESC-50, mixup, Label Smoothing
- CUDA (Compute Unified Device Architecture), Download CUDA, Checking Your GPU
- cuda() function, Making It Work on the GPU
- cuda.is_available() function, Finally, PyTorch! (and Jupyter Notebook)
- custom deep learning machine
- custom transform classes, Custom Transform Classes
D
- data
- data augmentation, Data Augmentation-Start Small and Get Bigger!, Data Augmentation-Augmentation and torchtext
- audio (see audio data augmentation)
- back translation, Back Translation
- color spaces and Lamba transforms, Color Spaces and Lambda Transforms
- custom transform classes, Custom Transform Classes
- label smoothing, Label Smoothing
- mixed and smoothed, Data Augmentation: Mixed and Smoothed-Label Smoothing
- mixup, mixup-mixup
- random deletion, Random Deletion
- random insertion, Random Insertion
- random swap, Random Swap
- starting small with, Start Small and Get Bigger!
- torchtext, Augmentation and torchtext
- torchvision transforms, Torchvision Transforms-Torchvision Transforms
- transfer learning and, Transfer Learning?
- datasets
- DDR4, RAM
- debugging, Debugging PyTorch Models-Conclusion
- decoder, Introduction to Super-Resolution
- deep learning, defined, But What Is Deep Learning Exactly, and Do I Need a PhD to Understand It?
- degrees parameter, Torchvision Transforms
- deletion, random, Random Deletion
- DenseNet, Other Architectures Are Available!
- differential learning rates, Differential Learning Rates
- DigitalOcean, Deploying on Kubernetes
- discriminator networks, The Forger and the Critic, The Dangers of Mode Collapse
- distilling, Defending Against Adversarial Attacks
- Docker, Faster R-CNN and Mask R-CNN
- Docker container, building, Building the Docker Container-Building the Docker Container
- Docker Hub, Building the Docker Container
- download.py script, But First, Data, Building Validation and Test Datasets
- Dropout, Deep Learning in the World Today, Transfer Learning with ResNet
- Dropout layer, Dropout, AlexNet, Tracing
E
- embedding matrix, Embeddings
- embeddings, for text classification, Embeddings-Embeddings
- encoder, Introduction to Super-Resolution
- encoding, one-hot, Embeddings
- Enhanced Super-Resolution Generative Adversarial Network (ESRGAN), ESRGAN
- ENTRYPOINT, Building the Docker Container
- ENV, Building the Docker Container
- Environmental Sound Classification (ESC) dataset, The ESC-50 Dataset-A CNN Model for ESC-50
- epsilon, Adversarial Samples
- ESC-50, A Wild ResNet Appears
- (see also Environmental Sound Classification (ESC) dataset)
- exploding gradient, Long Short-Term Memory Networks
- EXPOSE, Building the Docker Container
F
- Facebook, Deep Learning in the World Today, TensorBoard
- fast gradient sign method (FGSM), Adversarial Samples
- fast.ai library, Finding That Learning Rate, ULMFiT
- FastBERT, FastBERT-FastBERT
- Faster R-CNN, Faster R-CNN and Mask R-CNN-Faster R-CNN and Mask R-CNN
- fc, Transfer Learning with ResNet
- feature map, Convolutions
- filesystem, Making Predictions
- filter, Convolutions
- find_lr() function, Finding That Learning Rate, Finding a Learning Rate
- fit() function, FastBERT
- fit_one_cycle, ULMFiT
- flame graphs, Flame Graphs-Fixing a Slow Transformation
- Flask, Building a Flask Service-Building a Flask Service
- forward() function, Creating a Network, Our First Convolutional Model, Reading Flame Graphs
- Fourier transform, Tensor Operations
- frequency domain, This Frequency Is My Universe-Finding a Learning Rate
G
- GANs (see generative adversarial networks)
- gated recurrent units (GRUs), Gated Recurrent Units
- gc.collect() function, Checking Your GPU
- GCP (Google Cloud Platform), Cloud Providers, Google Cloud Platform
- GCP Marketplace, Google Cloud Platform
- generative adversarial networks (GANs), An Introduction to GANs-Running ESRGAN
- generator networks, The Forger and the Critic
- get_stopwords() function, Random Insertion
- get_synonyms() function, Random Insertion
- GKE (Google Kubernetes Engine), Deploying on Kubernetes, Setting Up on Google Kubernetes Engine
- Google, What About TensorFlow?
- Google Cloud Platform (GCP), Cloud Providers, Google Cloud Platform
- Google Cloud Storage, Local Versus Cloud Storage
- Google Colaboratory, Google Colaboratory
- Google Kubernetes Engine (GKE), Deploying on Kubernetes, Setting Up on Google Kubernetes Engine
- Google Translate, Text Classification, Back Translation
- GoogLeNet, Inception/GoogLeNet
- googletrans, Back Translation
- GPT-2, More Than Meets the Eye: The Transformer Architecture, GPT-2-Generating Text with GPT-2, What to Use?
- GPU (graphical processing unit)
- gradient
- gradient checkpointing, Gradient Checkpointing-Gradient Checkpointing
- Gregg, Brendan, Flame Graphs
- grid search, Finding That Learning Rate
- GRUs (gated recurrent units), Gated Recurrent Units
I
- image classification, Image Classification with PyTorch-Conclusion
- activation functions, Activation Functions
- and data loaders, PyTorch and Data Loaders
- and GPU, Making It Work on the GPU
- building training dataset for, Building a Training Dataset-Building a Training Dataset
- building validation and test datasets, Building Validation and Test Datasets
- challenges with, Traditional Challenges-Building Validation and Test Datasets
- creating a network, Creating a Network
- data for, But First, Data
- example, Our Classification Problem
- loss functions, Loss Functions
- model saving, Model Saving
- neural networks, Finally, a Neural Network!-Optimizing
- optimizing, Optimizing-Optimizing
- predictions, Making Predictions
- training network for, Training
- image detection, Further Adventures in Image Detection-Faster R-CNN and Mask R-CNN
- Image.convert() function, Color Spaces and Lambda Transforms
- ImageFolder, Building a Training Dataset
- ImageNet, Deep Learning in the World Today, But First, Data, A Wild ResNet Appears
- ImageNet Large Scale Visual Recognition Challenge, Deep Learning in the World Today
- import torch, Google Colaboratory
- imsave, Faster R-CNN and Mask R-CNN
- in-place functions, Tensor Operations
- Inception, Inception/GoogLeNet, Mel Spectrograms, Further Experiments, Object Detection
- init() function, Creating a Network
- insertion, random, Random Insertion
- in_channels, Convolutions
- in_features, Transfer Learning with ResNet
- item() function, Tensor Operations, Classifying Tweets
K
- K80, Deep Learning in the Cloud, Azure
- k8s (see Kubernetes)
- Kaggle, Which Model Should You Use?, Object Detection
- Karpathy, Andrej, Finding That Learning Rate
- Karpathy's constant, Finding That Learning Rate
- Keras, What About TensorFlow?
- Kubernetes (k8s), Building the Docker Container, Deploying on Kubernetes-Updates and Cleaning Up
L
- label smoothing, Label Smoothing, Defending Against Adversarial Attacks
- labelled, labelling, But First, Data, Defining Fields, It’s 3 a.m. What Is Your Data Doing?, Black-Box Attacks
- label_from_df, ULMFiT
- Lambda transforms, Color Spaces and Lambda Transforms
- layers
- AdaptiveAvgPool layer, Pooling
- AdaptiveMaxPool layer, Pooling
- BatchNorm layer, BatchNorm, Transfer Learning with ResNet, ESRGAN
- Conv2d layer, Convolutions-Convolutions, Introduction to Super-Resolution
- Dropout layer, Dropout, AlexNet, Tracing
- Linear layer, Transfer Learning with ResNet
- MaxPool layer, AlexNet
- MaxPool2d layer, Pooling
- nn.Sequential layer, Plotting Mean and Standard Deviation, Gradient Checkpointing, Introduction to Super-Resolution
- torch.nn.ConvTranspose2d layer, Introduction to Super-Resolution
- upsample layer, Introduction to Super-Resolution
- learning
- learning rates
- least recently used (LRU) cache, A New Dataset
- LeNet-5, History of CNN Architectures-Other Architectures Are Available!
- LibROSA, SoX and LibROSA, Mel Spectrograms
- libTorch, Working with libTorch-Importing a TorchScript Model
- Linear layer, Transfer Learning with ResNet
- list_effects() function, SoX Effect Chains
- load() function, torchaudio
- load_model() function, Building a Flask Service, Setting Up the Model Parameters, Local Versus Cloud Storage
- load_state_dict() function, Reading Flame Graphs
- local storage, cloud versus, Local Versus Cloud Storage-Local Versus Cloud Storage
- logging, Logging and Telemetry
- log_spectogram.shape, Mel Spectrograms
- Long Short-Term Memory (LSTM) Networks, Long Short-Term Memory Networks-Long Short-Term Memory Networks
- loss functions, Loss Functions
- LRU (least recently used) cache, A New Dataset
- LSTM Networks (see Long Short-Term Memory Networks)
- Lua, PyTorch
M
- MacBook, Anaconda
- Mask R-CNN, Faster R-CNN and Mask R-CNN-Faster R-CNN and Mask R-CNN
- maskrcnn-benchmark library, Faster R-CNN and Mask R-CNN
- matplotlib, Finding That Learning Rate, A New Dataset
- max() function, Tensor Operations
- MaxPool layer, AlexNet
- MaxPool2d layer, Pooling
- max_size parameter, Building a Vocabulary
- max_width parameter, Frequency masking
- md5sum, Anaconda
- mean function, Ensembles
- mean, plotting, Plotting Mean and Standard Deviation
- mel scale, Mel Spectrograms
- mel spectrograms, Mel Spectrograms-Mel Spectrograms
- Microsoft Azure (see Azure)
- Microsoft Cognitive Toolkit, Conclusion
- mixup, mixup-mixup
- MNIST, It’s 3 a.m. What Is Your Data Doing?
- MobileNet, Other Architectures Are Available!
- mode collapse, The Dangers of Mode Collapse
- model saving, Model Saving
- model serving, Model Serving-Logging and Telemetry
- model.children() function, Plotting Mean and Standard Deviation
- model.eval() function, Tracing
- models.alexnet(pretrained=True) function, Using Pretrained Models in PyTorch
- Motherboard, CPU/Motherboard
- MSELoss, Loss Functions
- multihead attention, Attention Is All You Need
- MXNet, Conclusion
- MySQL, Flame Graphs
N
- n1-standard-1 nodes, Creating a k8s Cluster
- NamedTemporaryFile() function, Local Versus Cloud Storage
- NASNet, Other Architectures Are Available!
- natural language processing (NLP), Text Classification
- NC6, Azure
- NCv2, Azure
- network, creating, Creating a Network
- neural networks
- NLP (natural language processing), Text Classification
- nn.Module, Label Smoothing
- nn.Sequential, Our First Convolutional Model, Plotting Mean and Standard Deviation, Gradient Checkpointing, Introduction to Super-Resolution
- NumPy, Tensor Broadcasting
- NVIDIA GeForce RTX 2080 Ti, GPU, Checking Your GPU
- NVIDIA GTX 1080 Ti, GPU, Google Colaboratory
- NVIDIA RTX 2080 Ti, GPU, Deep Learning in the Cloud
- nvidia-smi, Checking Your GPU
O
- object detection, Object Detection-Object Detection
- OK, Building a Flask Service
- one-hot encoding, Embeddings
- ones() function, Tensors
- ONNX (Open Neural Network Exchange), Conclusion
- OpenAI, More Than Meets the Eye: The Transformer Architecture, GPT-2
- optim.Adam() function, Optimizing
- optimization of neural networks, Optimizing-Optimizing
- optimizer.step() function, Training
- out_channels, Convolutions
- out_features, Transfer Learning with ResNet
- overfitting, Building Validation and Test Datasets, Data Augmentation
P
- P100, Azure
- P2, P3, Amazon Web Services
- p2.xlarge, Amazon Web Services
- pad token, Building a Vocabulary
- PadTrim, torchaudio Transforms
- pandas, torchtext
- parameters() function, Transfer Learning with ResNet
- partial() function, Plotting Mean and Standard Deviation
- PCPartPicker, Storage
- permute() function, Tensor Operations
- pip, SoX and LibROSA, Building a Flask Service
- plt function, Finding That Learning Rate
- pod, Creating a k8s Cluster
- pooling in CNN, Pooling
- predict() function, Building a Flask Service
- predictions, Faster R-CNN and Mask R-CNN
- preprocess() function, Classifying Tweets
- pretrained models, Using Pretrained Models in PyTorch-Which Model Should You Use?
- print() function, PyTorch Hooks, Tracing
- print(model) function, BatchNorm
- process() function, Classifying Tweets
- production, deploying PyTorch applications in, PyTorch in Production-Conclusion
- py-spy, Installing py-spy, Fixing a Slow Transformation
- Python, Plotting Mean and Standard Deviation, Model Serving
- Python 2.x, Installing PyTorch from Scratch
- PyTorch (generally), Getting Started with PyTorch-Conclusion
- PyTorch Hub, One-Stop Shopping for Models: PyTorch Hub
- pytorch-transformers, Generating Text with GPT-2
R
- Raina, Rajat, Deep Learning in the World Today
- RAM, RAM
- random deletion, Random Deletion
- random insertion, Random Insertion
- random swap, Random Swap
- RandomAffine, Torchvision Transforms
- RandomApply, Color Spaces and Lambda Transforms
- RandomCrop, Torchvision Transforms
- RandomGrayscale, Torchvision Transforms
- RandomResizeCrop, Torchvision Transforms
- RGB color space, Color Spaces and Lambda Transforms
- README, Obtaining the Dataset
- rectified linear unit (see ReLU)
- recurrent neural networks (RNNs), Recurrent Neural Networks-Recurrent Neural Networks, Paying Attention
- Red Hat Enterprise Linux (RHEL) 7, Download CUDA
- register_backward_hook() function, PyTorch Hooks
- ReLU (rectified linear unit), Activation Functions, Conclusion, AlexNet, Transfer Learning with ResNet
- remove() function, PyTorch Hooks
- requires_grad() function, Transfer Learning with ResNet
- resample, Torchvision Transforms
- reshape() function, Tensor Operations
- reshaping a tensor, Tensor Operations
- Resize(64) transform, Building a Training Dataset
- ResNet architecture, ResNet, Which Model Should You Use?, Mel Spectrograms, Object Detection
- ResNet-152, Debugging GPU Issues
- ResNet-18, PyTorch Hooks
- RHEL (Red Hat Enterprise Linux) 7, Download CUDA
- RMSProp, Optimizing
- RNNs (recurrent neural networks), Recurrent Neural Networks-Recurrent Neural Networks, Paying Attention
- ROCm, GPU
- RUN, Building the Docker Container
- run_gpt2.py, Generating Text with GPT-2
S
- Salesforce, PyTorch
- save() function, torchaudio
- savefig, A New Dataset
- scaling, Torchvision Transforms, Scaling Services
- scripting, Scripting
- Secure Shell (SSH), Google Colaboratory
- segmentation, Object Detection
- send_to_log() function, Logging and Telemetry
- Sentiment140 dataset, Getting Our Data: Tweets!
- seq2seq, Recurrent Neural Networks
- SimpleNet, Conclusion
- simplenet.parameters() function, Optimizing
- slow transformations, fixing, Fixing a Slow Transformation-Fixing a Slow Transformation
- Smith, Leslie, Finding That Learning Rate
- softmax function, Activation Functions
- softmax() function, Loss Functions
- sound, A Journey into Sound-Conclusion
- about, Sound
- and ESC-50 dataset, The ESC-50 Dataset-A CNN Model for ESC-50
- audio data augmentation, Audio Data Augmentation-Time masking
- frequency domain, This Frequency Is My Universe-Finding a Learning Rate
- frequency masking, Frequency masking-Frequency masking
- in Jupyter Notebook, Playing Audio in Jupyter
- mel spectrograms, Mel Spectrograms-Mel Spectrograms
- SoX effect chains, SoX Effect Chains
- SpecAugment, SpecAugment-Time masking
- time masking, Time masking-Time masking
- torchaudio, torchaudio
- torchaudio transforms, torchaudio Transforms
- SoX, SoX and LibROSA
- SoX effect chains, SoX Effect Chains
- sox_build_flow_effects(), SoX Effect Chains
- spaCy, torchtext
- SpecAugment, SpecAugment-Time masking
- squeeze(0) function, Making Predictions
- SqueezeNet, Other Architectures Are Available!
- SSH (Secure Shell), Google Colaboratory
- stacktrace, Flame Graphs
- standard deviation, plotting, Plotting Mean and Standard Deviation
- startup, Building the Docker Container, Local Versus Cloud Storage
- state_dict() function, Setting Up the Model Parameters
- stochastic gradient descent (SGD), Optimizing
- storage
- SummaryWriter, Sending Data to TensorBoard
- super-resolution, Computer, Enhance!-Running ESRGAN
- supervised learning, But First, Data
- swap, random, Random Swap
T
- telemetry, Logging and Telemetry
- tensor processing units (TPUs), Deep Learning in the World Today, Google Cloud Platform
- tensor.mean() function, Frequency masking
- TensorBoard, TensorBoard-Class Activation Mapping
- TensorFlow, What About TensorFlow?, TorchScript, Generating Text with GPT-2
- tensors, Tensor Operations-Tensor Operations
- Tesla V100, Deep Learning in the Cloud, Azure
- test datasets, building, Building Validation and Test Datasets
- text classification, Text Classification-Conclusion
- and transfer learning, Transfer Learning?
- back translation, Back Translation
- biLSTM, biLSTM
- data augmentation, Data Augmentation-Augmentation and torchtext
- embeddings for, Embeddings-Embeddings
- gated recurrent units, Gated Recurrent Units
- in Long Short-Term Memory Networks, Long Short-Term Memory Networks-Long Short-Term Memory Networks
- random deletion, Random Deletion
- random insertion, Random Insertion
- random swap, Random Swap
- recurrent neural networks, Recurrent Neural Networks-Recurrent Neural Networks
- torchtext, torchtext-Classifying Tweets
- text generation, with GPT-2, Generating Text with GPT-2-Generating Text with GPT-2
- tf.keras, What About TensorFlow?
- Theano, What About TensorFlow?
- time step, Recurrent Neural Networks
- to() function, Tensor Operations, Making It Work on the GPU
- top-5, AlexNet, ResNet
- torch.argmax() function, Building a Flask Service
- torch.distribution.Beta, mixup
- torch.hub.list('pytorch/vision') function, One-Stop Shopping for Models: PyTorch Hub
- torch.jit.save, Tracing, Scripting
- torch.jit.save() function, TorchScript Limitations, Obtaining libTorch and Hello World
- torch.load() function, Model Saving
- torch.nn.ConvTranspose2d layer, Introduction to Super-Resolution
- torch.save() function, Model Saving, Setting Up the Model Parameters
- torch.topk() function, Class Activation Mapping
- torch.utils.checkpoint_sequential() function, Gradient Checkpointing
- torch.utils.tensorboard, Sending Data to TensorBoard
- torchaudio, SoX and LibROSA, torchaudio Transforms
- torchaudio transforms, torchaudio Transforms
- torchaudio.load(), SoX Effect Chains
- torchaudio.sox_effects.effect_names() function, SoX Effect Chains
- torchaudio.sox_effects.SoxEffectsChain, SoX Effect Chains
- TorchScript, TorchScript-TorchScript Limitations
- torchtext, torchtext-Classifying Tweets
- torchtext.datasets, Getting Our Data: Tweets!
- torchvision, Building a Training Dataset
- torchvision transforms, Torchvision Transforms-Torchvision Transforms
- torchvision.models, One-Stop Shopping for Models: PyTorch Hub
- TPUs (tensor processing units), Deep Learning in the World Today, Google Cloud Platform
- tracing, Tracing-Tracing
- train() function, Putting It All Together
- train_net.py, Faster R-CNN and Mask R-CNN
- transfer learning, Transfer Learning and Other Tricks-Conclusion
- transformations, fixing slow, Fixing a Slow Transformation-Fixing a Slow Transformation
- Transformer architecture, More Than Meets the Eye: The Transformer Architecture-What to Use?
- transforms.ToTensor() function, Building a Training Dataset
- TWEET, Defining Fields, Classifying Tweets
- Twitter, Getting Our Data: Tweets!
U
- U-Net architecture, Object Detection
- Uber, PyTorch
- Ubuntu, Download CUDA
- ULMFiT, ULMFiT-ULMFiT, What to Use?
- unknown word token, Building a Vocabulary
- unsqueeze() function, Making Predictions, Building a Flask Service
- unsupervised learning, But First, Data
- upsample layer, Introduction to Super-Resolution
- urlopen() function, Local Versus Cloud Storage