
Transfer Learning for Natural Language Processing teaches you to create powerful NLP solutions quickly by building on existing pretrained models. This instantly useful book provides crystal-clear explanations of the concepts you need to grok transfer learning along with hands-on examples so you can practice your new skills immediately. As you go, you’ll apply state-of-the-art transfer learning methods to create a spam email classifier, a fact checker, and more real-world applications.

Table of Contents

  1. Transfer Learning for Natural Language Processing
  2. Copyright
  3. dedication
  4. contents
  5. front matter
    1. preface
    2. acknowledgments
    3. about this book
    4. Who should read this book?
    5. Road map
    6. Software requirements
    7. About the code
    8. liveBook discussion forum
    9. about the author
    10. about the cover illustration
  6. Part 1 Introduction and overview
  7. 1 What is transfer learning?
    1. 1.1 Overview of representative NLP tasks
    2. 1.2 Understanding NLP in the context of AI
    3. 1.2.1 Artificial intelligence (AI)
    4. 1.2.2 Machine learning
    5. 1.2.3 Natural language processing (NLP)
    6. 1.3 A brief history of NLP advances
    7. 1.3.1 General overview
    8. 1.3.2 Recent transfer learning advances
    9. 1.4 Transfer learning in computer vision
    10. 1.4.1 General overview
    11. 1.4.2 Pretrained ImageNet models
    12. 1.4.3 Fine-tuning pretrained ImageNet models
    13. 1.5 Why is NLP transfer learning an exciting topic to study now?
    14. Summary
  8. 2 Getting started with baselines: Data preprocessing
    1. 2.1 Preprocessing email spam classification example data
    2. 2.1.1 Loading and visualizing the Enron corpus
    3. 2.1.2 Loading and visualizing the fraudulent email corpus
    4. 2.1.3 Converting the email text into numbers
    5. 2.2 Preprocessing movie sentiment classification example data
    6. 2.3 Generalized linear models
    7. 2.3.1 Logistic regression
    8. 2.3.2 Support vector machines (SVMs)
    9. Summary
  9. 3 Getting started with baselines: Benchmarking and optimization
    1. 3.1 Decision-tree-based models
    2. 3.1.1 Random forests (RFs)
    3. 3.1.2 Gradient-boosting machines (GBMs)
    4. 3.2 Neural network models
    5. 3.2.1 Embeddings from Language Models (ELMo)
    6. 3.2.2 Bidirectional Encoder Representations from Transformers (BERT)
    7. 3.3 Optimizing performance
    8. 3.3.1 Manual hyperparameter tuning
    9. 3.3.2 Systematic hyperparameter tuning
    10. Summary
  10. Part 2 Shallow transfer learning and deep transfer learning with recurrent neural networks (RNNs)
  11. 4 Shallow transfer learning for NLP
    1. 4.1 Semisupervised learning with pretrained word embeddings
    2. 4.2 Semisupervised learning with higher-level representations
    3. 4.3 Multitask learning
    4. 4.3.1 Problem setup and a shallow neural single-task baseline
    5. 4.3.2 Dual-task experiment
    6. 4.4 Domain adaptation
    7. Summary
  12. 5 Preprocessing data for recurrent neural network deep transfer learning experiments
    1. 5.1 Preprocessing tabular column-type classification data
    2. 5.1.1 Obtaining and visualizing tabular data
    3. 5.1.2 Preprocessing tabular data
    4. 5.1.3 Encoding preprocessed data as numbers
    5. 5.2 Preprocessing fact-checking example data
    6. 5.2.1 Special problem considerations
    7. 5.2.2 Loading and visualizing fact-checking data
    8. Summary
  13. 6 Deep transfer learning for NLP with recurrent neural networks
    1. 6.1 Semantic Inference for the Modeling of Ontologies (SIMOn)
    2. 6.1.1 General neural architecture overview
    3. 6.1.2 Modeling tabular data
    4. 6.1.3 Application of SIMOn to tabular column-type classification data
    5. 6.2 Embeddings from Language Models (ELMo)
    6. 6.2.1 ELMo bidirectional language modeling
    7. 6.2.2 Application to fake news detection
    8. 6.3 Universal Language Model Fine-Tuning (ULMFiT)
    9. 6.3.1 Target task language model fine-tuning
    10. 6.3.2 Target task classifier fine-tuning
    11. Summary
  14. Part 3 Deep transfer learning with transformers and adaptation strategies
  15. 7 Deep transfer learning for NLP with the transformer and GPT
    1. 7.1 The transformer
    2. 7.1.1 An introduction to the transformers library and attention visualization
    3. 7.1.2 Self-attention
    4. 7.1.3 Residual connections, encoder-decoder attention, and positional encoding
    5. 7.1.4 Application of pretrained encoder-decoder to translation
    6. 7.2 The Generative Pretrained Transformer
    7. 7.2.1 Architecture overview
    8. 7.2.2 Transformers pipelines introduction and application to text generation
    9. 7.2.3 Application to chatbots
    10. Summary
  16. 8 Deep transfer learning for NLP with BERT and multilingual BERT
    1. 8.1 Bidirectional Encoder Representations from Transformers (BERT)
    2. 8.1.1 Model architecture
    3. 8.1.2 Application to question answering
    4. 8.1.3 Application to fill in the blanks and next-sentence prediction tasks
    5. 8.2 Cross-lingual learning with multilingual BERT (mBERT)
    6. 8.2.1 Brief JW300 dataset overview
    7. 8.2.2 Transfer mBERT to monolingual Twi data with the pretrained tokenizer
    8. 8.2.3 mBERT and tokenizer trained from scratch on monolingual Twi data
    9. Summary
  17. 9 ULMFiT and knowledge distillation adaptation strategies
    1. 9.1 Gradual unfreezing and discriminative fine-tuning
    2. 9.1.1 Pretrained language model fine-tuning
    3. 9.1.2 Target task classifier fine-tuning
    4. 9.2 Knowledge distillation
    5. 9.2.1 Transfer DistilmBERT to monolingual Twi data with pretrained tokenizer
    6. Summary
  18. 10 ALBERT, adapters, and multitask adaptation strategies
    1. 10.1 Embedding factorization and cross-layer parameter sharing
    2. 10.1.1 Fine-tuning pretrained ALBERT on MDSD book reviews
    3. 10.2 Multitask fine-tuning
    4. 10.2.1 General Language Understanding Evaluation (GLUE) dataset
    5. 10.2.2 Fine-tuning on a single GLUE task
    6. 10.2.3 Sequential adaptation
    7. 10.3 Adapters
    8. Summary
  19. 11 Conclusions
    1. 11.1 Overview of key concepts
    2. 11.2 Other emerging research trends
    3. 11.2.1 RoBERTa
    4. 11.2.2 GPT-3
    5. 11.2.3 XLNet
    6. 11.2.4 BigBird
    7. 11.2.5 Longformer
    8. 11.2.6 Reformer
    9. 11.2.7 T5
    10. 11.2.8 BART
    11. 11.2.9 XLM
    12. 11.2.10 TAPAS
    13. 11.3 Future of transfer learning in NLP
    14. 11.4 Ethical and environmental considerations
    15. 11.5 Staying up to date
    16. 11.5.1 Kaggle and Zindi competitions
    17. 11.5.2 arXiv
    18. 11.5.3 News and social media (Twitter)
    19. 11.6 Final words
    20. Summary
  20. appendix A Kaggle primer
    1. A.1 Free GPUs with Kaggle kernels
    2. A.2 Competitions, discussion, and blog
  21. appendix B Introduction to fundamental deep learning tools
    1. B.1 Stochastic gradient descent
    2. B.2 TensorFlow
    3. B.3 PyTorch
    4. B.4 Keras, fast.ai, and Transformers by Hugging Face
  22. index