NLP pipeline with spaCy and textacy

spaCy is a widely used Python library with a comprehensive feature set for fast text processing in multiple languages. Using its tokenization and annotation engines requires the installation of language models. The features we will use in this chapter only require the small models; the larger models also include word vectors, which we will cover in Chapter 15, Word Embeddings.
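For reference, a small English model can be installed from the command line. With spaCy 2.x, downloading the 'en' shortcut also creates the link used in the code below; the exact model name may differ across spaCy versions:

python -m spacy download en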

Once installed and linked, we can instantiate a spaCy language model and then call it on a document. As a result, spaCy produces a doc object that tokenizes the text and processes it according to configurable pipeline components that, by default, consist of a tagger, a parser, and a named-entity recognizer:

import spacy

# Load the English language model and list the default pipeline components
nlp = spacy.load('en')
nlp.pipe_names
['tagger', 'parser', 'ner']
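Because the pipeline is configurable, components that are not needed can be excluded when loading the model, which speeds up processing. A minimal sketch, assuming the same 'en' model as above (the variable name is illustrative):

# Exclude the parser and named-entity recognizer from the pipeline
nlp_tagger_only = spacy.load('en', disable=['parser', 'ner'])
nlp_tagger_only.pipe_names
['tagger']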

Let's illustrate the pipeline using a simple sentence:

sample_text = 'Apple is looking at buying U.K. startup for $1 billion'
doc = nlp(sample_text)
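As a brief sketch of the annotations this produces, the per-token attributes and the detected entities can be inspected with standard spaCy attributes; the exact output depends on the model version:

# Per-token annotations produced by the tagger and parser
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Entities detected by the named-entity recognizer
for ent in doc.ents:
    print(ent.text, ent.label_)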