Topic models

Topic models are a powerful method for grouping documents by their main topics. Topic models allow probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents, as well as between documents and a set of specified keywords, using an additional layer of latent variables, which are referred to as topics (Grun and Hornik, 2011). In essence, a document is assigned to a topic based on the distribution of the words in that document, and the other documents in that topic will have roughly the same frequency of words.

The algorithm that we will focus on is Latent Dirichlet Allocation (LDA) with Gibbs sampling, which is probably the most commonly used sampling algorithm. In building topic models, the number of topics (k) must be determined before running the algorithm. If there is no a priori reason to choose a particular number of topics, you can build several models and apply judgment and domain knowledge to the final selection. LDA with Gibbs sampling is quite complicated mathematically, but my intent is to provide an introduction so that you are at least able to describe, in layperson's terms, how the algorithm learns to assign a document to a topic. If you are interested in mastering the math associated with the method, block out a couple of hours on your calendar and have a go at it. Excellent background material is available at http://www.cs.columbia.edu/~blei/papers/Blei2012.pdf.
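If you have no strong prior for k, one practical approach is to fit models for a few candidate values and compare them before applying your judgment. The following is a minimal sketch in Python, not the workflow used later in this chapter: the toy docs list is invented for illustration, and scikit-learn's LatentDirichletAllocation fits LDA by variational inference rather than Gibbs sampling, so treat its perplexity scores as just one input to your decision.

    # Minimal sketch: fit LDA for several candidate values of k and compare perplexity.
    # Assumes `docs` is a list of raw text strings; scikit-learn's LDA uses
    # variational inference rather than Gibbs sampling.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "the mean and variance summarize a distribution",
        "stock prices rose as markets rallied",
        "regression models estimate the mean response",
        "investors watch interest rates and markets",
    ]

    # Term-frequency matrix: LDA models raw word counts, not TF-IDF weights.
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)

    for k in (2, 3, 4):
        lda = LatentDirichletAllocation(n_components=k, random_state=42)
        lda.fit(X)
        # Lower perplexity suggests a better fit, but inspect the top words too.
        print(f"k={k}  perplexity={lda.perplexity(X):.1f}")

Lower perplexity generally indicates a better fit, but it is worth inspecting the top words per topic as well, since the most interpretable model is not always the one with the best score.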

LDA is a generative process, so the algorithm iterates the following steps until it reaches an approximately steady state:

  1. For each document (j), with documents 1 to j, we will randomly assign a multinomial distribution (drawn from a Dirichlet distribution) over the topics (k), with topics 1 to k; for example, document A is 25 percent topic one, 25 percent topic two, and 50 percent topic three.
  2. Probabilistically assign each word (i), with words 1 to i, to a topic (k); for example, the word mean has a probability of 0.25 of belonging to the topic statistics.
  3. For each word (i) in document (j) and each topic (k), calculate the proportion of words in that document assigned to that topic; note it as the probability of topic (k) given document (j), p(k|j). Also calculate the proportion of word (i) in topic (k) across all the documents containing that word; note it as the probability of word (i) given topic (k), p(i|k).
  4. Resample, that is, assign word (w) a new topic (t) based on the probability that topic (t) generated word (w), which is proportional to p(k|j) times p(i|k).
  5. Rinse and repeat; over numerous iterations, the algorithm eventually converges, and a document is assigned a topic based on the proportion of its words assigned to that topic. A minimal sketch of this sampling loop follows the list.
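To make the resampling step concrete, here is a minimal collapsed Gibbs sampler sketched in Python with NumPy. The toy corpus, the hyperparameters alpha and beta, and the variable names are assumptions made for illustration only; the two count tables play the roles of p(k|j) and p(i|k) from steps 3 and 4.

    # Minimal sketch of collapsed Gibbs sampling for LDA on a toy corpus.
    # Illustrative only: the toy documents and hyperparameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy corpus: each document is a list of word IDs (vocabulary of size V).
    docs = [[0, 1, 2, 0], [3, 4, 3, 5], [0, 2, 4, 5]]
    V = 6                     # vocabulary size
    K = 2                     # number of topics, chosen before running the algorithm
    alpha, beta = 0.1, 0.01   # Dirichlet hyperparameters

    # Steps 1-2: start from a random topic assignment for every word token.
    z = [[rng.integers(K) for _ in doc] for doc in docs]

    # Count tables used to compute p(k|j) and p(i|k) during sampling.
    doc_topic = np.zeros((len(docs), K))   # topic counts per document
    topic_word = np.zeros((K, V))          # word counts per topic
    topic_total = np.zeros(K)              # total words per topic
    for j, doc in enumerate(docs):
        for pos, w in enumerate(doc):
            k = z[j][pos]
            doc_topic[j, k] += 1
            topic_word[k, w] += 1
            topic_total[k] += 1

    # Steps 3-5: repeatedly resample each token's topic from p(k|j) * p(i|k).
    for _ in range(200):
        for j, doc in enumerate(docs):
            for pos, w in enumerate(doc):
                k = z[j][pos]
                # Remove the current assignment before resampling.
                doc_topic[j, k] -= 1
                topic_word[k, w] -= 1
                topic_total[k] -= 1
                # Proportion of words in document j assigned to each topic, times
                # the proportion of word w in each topic across the corpus.
                p = (doc_topic[j] + alpha) * (topic_word[:, w] + beta) / (topic_total + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[j][pos] = k
                doc_topic[j, k] += 1
                topic_word[k, w] += 1
                topic_total[k] += 1

    # Each document's topic mixture: the proportion of its words assigned to each topic.
    theta = (doc_topic + alpha) / (doc_topic + alpha).sum(axis=1, keepdims=True)
    print(np.round(theta, 2))

Running the loop for a few hundred sweeps is plenty for a corpus this small; the printed theta matrix is each document's estimated topic mixture, which is exactly the quantity used to assign documents to topics.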

The LDA that we will be doing assumes that the order of words and documents does not matter. There has been work done to relax these assumptions in order to build models of language generation and sequence models over time (known as dynamic topic modeling).
