References

  1. Lovins, J. B. (1968). Development of a stemming algorithm. Mechanical Translation and Computational Linguistics, 11(1/2), pp. 22–31.
  2. Porter, M. F. (1980). An algorithm for suffix stripping. Program, 14(3), pp. 130–137.
  3. Zipf, G. K. (1949). Human Behavior and the Principle of Least Effort. Addison-Wesley, Cambridge, Massachusetts.
  4. Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2), pp. 159–165.
  5. Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), pp. 391–407.
  6. Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1), pp. 1–38.
  7. Greiff, W. R. (1998). A theory of term weighting based on exploratory data analysis. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, New York, NY.
  8. Brown, P. F., deSouza, P. V., Mercer, R. L., Della Pietra, V. J., & Lai, J. C. (1992). Class-based n-gram models of natural language. Computational Linguistics, 18(4), pp. 467–479.
  9. Liu, T., Liu, S., Chen, Z., & Ma, W.-Y. (2003). An evaluation on feature selection for text clustering. In ICML Conference.
  10. Yang, Y. & Pedersen, J. O. (1997). A comparative study on feature selection in text categorization. In ICML Conference.
  11. Salton, G. & Buckley, C. (1988). Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5), pp. 513–523.
  12. Hofmann, T. (2001). Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1/2), pp. 177–196.
  13. Blei, D. & Lafferty, J. (2006). Dynamic topic models. In ICML Conference.
  14. Blei, D., Ng, A., & Jordan, M. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, pp. 993–1022.
  15. Xu, W., Liu, X., & Gong, Y. (2003). Document clustering based on non-negative matrix factorization. In Proceedings of SIGIR'03, Toronto, Canada, pp. 267–273.
  16. Dudík, M. & Schapire, R. E. (2006). Maximum entropy distribution estimation with generalized regularization. In Lugosi, G. and Simon, H. (Eds.), COLT, pp. 123–138. Springer-Verlag, Berlin.
  17. McCallum, A., Freitag, D., & Pereira, F. C. N. (2000). Maximum entropy Markov models for information extraction and segmentation. In ICML, pp. 591–598.
  18. Langville, A. N., Meyer, C. D., & Albright, R. (2006). Initializations for the nonnegative matrix factorization. In KDD, Philadelphia, PA, USA.
  19. Dunning, T. (1993). Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1), pp. 61–74.
  20. Bengio, Y., Ducharme, R., Vincent, P., & Jauvin, C. (2003). A neural probabilistic language model. Journal of Machine Learning Research, 3, pp. 1137–1155.
  21. Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., & Kuksa, P. (2011). Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12, pp. 2493–2537.
  22. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781.
  23. Socher, R., Manning, C. D., & Ng, A. Y. (2010). Learning continuous phrase representations and syntactic parsing with recursive neural networks. In NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning.
  24. Socher, R., Pennington, J., Huang, E. H., Ng, A. Y., & Manning, C. D. (2011). Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP.
  25. Luong, M.-T., Socher, R., & Manning, C. D. (2013). Better word representations with recursive neural networks for morphology. In CoNLL.
  26. Frome, A., Corrado, G. S., Shlens, J., Bengio, S., Dean, J., Mikolov, T., et al. (2013). DeViSE: A deep visual-semantic embedding model. In NIPS Proceedings.
  27. Bottou, L. (2011). From machine learning to machine reasoning. arXiv:1102.1808.
  28. Cho, K., et al. (2014). Learning phrase representations using RNN encoder–decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.