PyTorch documentation: https://pytorch.org/cppdocs/
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011): http://ai.stanford.edu/~amaas/data/sentiment
A simplified description of the GloVe (Global Vectors for Word Representation) algorithm: http://mlexplained.com/2018/04/29/paper-dissected-glove-global-vectors-for-word-representation-explained/
GloVe: Global Vectors for Word Representation, Jeffrey Pennington, Richard Socher, and Christopher D. Manning: https://nlp.stanford.edu/projects/glove/
The mathematical theory behind neural networks: Ian Goodfellow, Yoshua Bengio, and Aaron Courville (2016). Deep Learning. MIT Press: https://www.deeplearningbook.org/
Word embeddings: how to transform text into numbers: https://monkeylearn.com/blog/word-embeddings-transform-text-numbers
A detailed LSTM architecture description: http://colah.github.io/posts/2015-08-Understanding-LSTMs
Learning Long-Term Dependencies with Gradient Descent is Difficult by Yoshua Bengio et al. (1994): http://www.iro.umontreal.ca/~lisa/pointeurs/ieeetrnn94.pdf
On the difficulty of training recurrent neural networks by Razvan Pascanu et al. (2013): http://proceedings.mlr.press/v28/pascanu13.pdf
Contextual Correlates of Synonymy, Rubenstein, H. and Goodenough, J.B. (1965). Communications of the ACM, 627-633.
Efficient Estimation of Word Representations in Vector Space, Tomas Mikolov et al. (2013): https://arxiv.org/abs/1301.3781
Distributed Representations of Sentences and Documents, Quoc Le and Tomas Mikolov (2014): https://arxiv.org/pdf/1405.4053v2.pdf