There's more...

The blog post The Unreasonable Effectiveness of Recurrent Neural Networks (http://karpathy.github.io/2015/05/21/rnn-effectiveness/) describes a fascinating set of examples of RNN character-level language models, including the following:

  • Shakespeare text generation, similar to this example
  • Wikipedia text generation, similar to this example but trained on Wikipedia markup
  • Algebraic geometry (LaTeX) text generation, similar to this example but trained on LaTeX source
  • Linux source code text generation, similar to this example but trained on the Linux kernel source
  • Baby names text generation, similar to this example but trained on a list of names
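All of these examples share the same core mechanism: a recurrent network reads one character at a time, updates a hidden state, and emits a probability distribution over the next character, from which the output is sampled. The following sketch, in the spirit of the blog post's minimal character-level RNN, shows that sampling loop with NumPy. The weights here are random and untrained (so the output is gibberish); the vocabulary, hidden size, and parameter names are illustrative choices, not taken from the blog post's code.

```python
import numpy as np

np.random.seed(0)

# Toy character vocabulary; a real model would build this from the training text.
vocab = list("abcdefghijklmnopqrstuvwxyz ")
vocab_size = len(vocab)
hidden_size = 16

# Random (untrained) parameters; training would fit these with
# backpropagation through time on a character corpus.
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01  # input-to-hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden-to-hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01  # hidden-to-output
bh = np.zeros((hidden_size, 1))
by = np.zeros((vocab_size, 1))

def sample(seed_ix, n):
    """Generate n characters, starting from the character at index seed_ix."""
    x = np.zeros((vocab_size, 1))
    x[seed_ix] = 1
    h = np.zeros((hidden_size, 1))
    out = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # recurrent hidden-state update
        y = Why @ h + by                      # unnormalized scores per character
        p = np.exp(y) / np.sum(np.exp(y))     # softmax over the vocabulary
        ix = np.random.choice(vocab_size, p=p.ravel())  # sample next character
        x = np.zeros((vocab_size, 1))         # feed the sample back in as input
        x[ix] = 1
        out.append(vocab[ix])
    return "".join(out)

print(sample(0, 50))
```

Swapping the training corpus (Shakespeare, Wikipedia markup, LaTeX, C source, or name lists) is all that distinguishes the examples above; the sampling loop itself is unchanged.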