References

  1. Behnke, S. (2001). Learning iterative image reconstruction in the neural abstraction pyramid. International Journal of Computational Intelligence and Applications, 1(4), 427–438.
  2. Behnke, S. (2002). Learning face localization using hierarchical recurrent networks. In Proceedings of the 12th International Conference on Artificial Neural Networks (pp. 1319–1324).
  3. Behnke, S. (2003). Discovering hierarchical speech features using convolutional non-negative matrix factorization. In Proceedings of the International Joint Conference on Neural Networks, Vol. 4 (pp. 2758–2763).
  4. Behnke, S. (2003). Hierarchical neural networks for image interpretation (Lecture Notes in Computer Science, Vol. 2766). Springer.
  5. Behnke, S. (2005). Face localization and tracking in the neural abstraction pyramid. Neural Computing and Applications, 14(2), 97–103.
  6. Casey, M. P. (1996). The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8(6), 1135–1178.
  7. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel Distributed Processing (Vol. 1, pp. 318–362). MIT Press.
  8. Goller, C., & Küchler, A. (1996). Learning task-dependent distributed representations by backpropagation through structure. In Proceedings of the IEEE International Conference on Neural Networks (ICNN). doi:10.1109/ICNN.1996.548916
  9. Hochreiter, S. (1998). The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(2), 107–116.
  10. Hinton, G. E., Osindero, S., & Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527–1554.
  11. Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8), 1771–1800.
  12. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507.
  13. Hinton, G. E., & Zemel, R. S. (1994). Autoencoders, minimum description length, and Helmholtz free energy. Advances in Neural Information Processing Systems, 6, 3–10.
  14. Bengio, Y., Lamblin, P., Popovici, D., & Larochelle, H. (2007). Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems 19 (NIPS 2006) (pp. 153–160).
  15. Larochelle, H., Erhan, D., Courville, A., Bergstra, J., & Bengio, Y. (2007). An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the 24th International Conference on Machine Learning (ICML 2007) (pp. 473–480).
  16. Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P.-A. (2008). Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning (ICML 2008) (pp. 1096–1103).
  17. Huang, F.-J., & LeCun, Y. (2006). Large-scale learning with SVM and convolutional nets for generic object categorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2006).
  18. Gers, F. A., Schraudolph, N. N., & Schmidhuber, J. (2002). Learning precise timing with LSTM recurrent networks. Journal of Machine Learning Research, 3, 115–143.
  19. Cho, K., et al. (2014). Learning phrase representations using RNN encoder–decoder for statistical machine translation. arXiv:1406.1078. https://arxiv.org/pdf/1406.1078.pdf
  20. Rohrer, B. How convolutional neural networks work. https://brohrer.github.io/how_convolutional_neural_networks_work.html
  21. Lin, H. W., Tegmark, M., & Rolnick, D. (2016). Why does deep and cheap learning work so well? arXiv:1608.08225. https://arxiv.org/abs/1608.08225
  22. Schuster, M., & Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11), 2673–2681.
  23. Lee, H., Battle, A., Raina, R., & Ng, A. Y. (2007). Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems 19 (NIPS 2006).
  24. Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127.