Part 1 Basics
1 Introduction to natural language processing
1.1 What is natural language processing (NLP)?
1.3 Building NLP applications
Development of NLP applications
2 Your first NLP application
2.1 Introducing sentiment analysis
2.2 Working with NLP datasets
Train, validation, and test sets
Loading SST datasets using AllenNLP
2.3 Using word embeddings
Using word embeddings for sentiment analysis
2.4 Neural networks
Recurrent neural networks (RNNs) and linear layers
Architecture for sentiment analysis
2.5 Loss functions and optimization
2.6 Training your own classifier
2.7 Evaluating your classifier
2.8 Deploying your application
3 Word and document embeddings
3.2 Building blocks of language: Characters, words, and phrases
Words, tokens, morphemes, and phrases
3.3 Tokenization, stemming, and lemmatization
3.4 Skip-gram and continuous bag of words (CBOW)
Where word embeddings come from
Implementing Skip-gram on AllenNLP
Continuous bag of words (CBOW) model
3.5 GloVe
How GloVe learns word embeddings
Using pretrained GloVe vectors
3.6 fastText
Making use of subword information
4 Sentence classification
4.1 Recurrent neural networks (RNNs)
Handling variable-length input
4.2 Long short-term memory units (LSTMs) and gated recurrent units (GRUs)
4.3 Accuracy, precision, recall, and F-measure
4.4 Building AllenNLP training pipelines
4.5 Configuring AllenNLP training pipelines
4.6 Case study: Language detection
Building the training pipeline
Running the detector on unseen instances
5 Sequential labeling and language modeling
5.1 Introducing sequential labeling
Using RNNs to encode sequences
Implementing a Seq2Seq encoder in AllenNLP
5.2 Building a part-of-speech tagger
Defining the model and the loss
Building the training pipeline
5.3 Multilayer and bidirectional RNNs
5.4 Named entity recognition
What is named entity recognition?
Implementing a named entity recognizer
5.5 Modeling a language
Why are language models useful?
Training an RNN language model
5.6 Text generation using RNNs
Evaluating text using a language model
Generating text using a language model
6 Sequence-to-sequence models
6.1 Introducing sequence-to-sequence models
6.3 Building your first translator
6.5 Evaluating translation systems
6.6 Case study: Building a chatbot
Training and running a chatbot
7 Convolutional neural networks
7.1 Introducing convolutional neural networks (CNNs)
Pattern matching for sentence classification
Convolutional neural networks (CNNs)
7.2 Convolutional layers
Pattern matching using filters
7.4 Case study: Text classification
Training and running the classifier
Part 2 Advanced models
8 Attention and Transformer
8.1 What is attention?
Limitation of vanilla Seq2Seq models
8.2 Sequence-to-sequence with attention
Building a Seq2Seq machine translation with attention
8.3 Transformer and self-attention
8.4 Transformer-based language models
Transformer as a language model
8.5 Case study: Spell-checker
Spell correction as machine translation
9 Transfer learning with pretrained language models
9.2 BERT
Limitations of word embeddings
9.3 Case study 1: Sentiment analysis with BERT
9.4 Other pretrained language models
9.5 Case study 2: Natural language inference with BERT
What is natural language inference?
Using BERT for sentence-pair classification
Using Transformers with AllenNLP
Part 3 Putting into production
10 Best practices in developing NLP applications
10.2 Tokenization for neural models
10.4 Dealing with imbalanced datasets
Using appropriate evaluation metrics
10.5 Hyperparameter tuning
Hyperparameter tuning with Optuna
11 Deploying and serving NLP applications
11.1 Architecting your NLP application
Choosing the right architecture
11.3 Case study: Serving and deploying NLP applications
Serving models with TorchServe
Deploying models with SageMaker
11.4 Interpreting and visualizing model predictions