Further tips

This section collects a few tips that may be helpful when solving other problems, and mentions some topics for readers who want to go deeper into deep learning. The first tip concerns reshaping arrays. It is very common for R users to reshape arrays this way:

dim(<array object>) <- c(<new dimensions>)

Unfortunately, this does not always work as expected: R stores arrays in column-major order, while NumPy and the other libraries called by Keras default to row-major order. To reshape with row-major semantics, use something like the following pseudocode instead:

<array object> <- array_reshape(<array object>, c(<new dimensions>))

This is a subtle pitfall, and I hope it doesn't catch you out. If you are interested in learning more about column-major and row-major ordering, you can find rich information in the following vignette: https://cran.r-project.org/web/packages/reticulate/vignettes/arrays.html
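To see the difference concretely, here is a small NumPy sketch (in Python, since that is where the row-major default comes from): the same six values produce two different matrices depending on the fill order.

```python
import numpy as np

x = np.arange(1, 7)  # [1, 2, 3, 4, 5, 6]

# Row-major ("C" order) fills rows first -- the behavior of array_reshape().
row_major = x.reshape((2, 3), order="C")
# [[1, 2, 3],
#  [4, 5, 6]]

# Column-major ("F", Fortran order) fills columns first -- what dim<-() does in R.
col_major = x.reshape((2, 3), order="F")
# [[1, 3, 5],
#  [2, 4, 6]]
```

The values are identical in memory; only the interpretation of the dimensions differs, which is exactly why silently mixing the two conventions corrupts your data without raising an error.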

Although this chapter did not tackle a computer vision problem, Keras is certainly capable of it. Such problems are frequently solved with a combination of convolutional and max pooling layers. Both come in 2D and 1D versions; here is how to call them:

  • layer_conv_2d()
  • layer_conv_1d()
  • layer_max_pooling_2d()
  • layer_max_pooling_1d()
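As a rough illustration of what a 1D max pooling layer computes, here is a plain-Python toy sketch (not Keras' actual implementation): the sequence is split into non-overlapping windows and each window is reduced to its maximum.

```python
def max_pool_1d(values, pool_size=2):
    """Downsample a sequence by taking the max of each
    non-overlapping window of length pool_size."""
    return [max(values[i:i + pool_size])
            for i in range(0, len(values) - pool_size + 1, pool_size)]

max_pool_1d([1, 5, 2, 8, 3, 3])  # -> [5, 8, 3]
```

The 2D version works the same way over small rectangular windows of an image, which is how these layers shrink feature maps while keeping the strongest activations.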

Usually, when dealing with 2D layers, you will need to flatten the output before feeding it into dense layers; this is done with layer_flatten().

Preprocessing words, phrases, and documents comes with its own nuts and bolts. Here is a list of useful functions for preprocessing text data:

  • pad_sequences()
  • skipgrams()
  • text_hashing_trick()
  • text_one_hot()
  • text_to_word_sequence()
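As a toy illustration of what two of these helpers do, here is a plain-Python sketch of the behavior of text_to_word_sequence() (tokenization) and pad_sequences() (padding to a common length); this mimics the defaults, it is not Keras' actual code.

```python
def text_to_word_sequence(text):
    """Lowercase a string and split it into word tokens, stripping punctuation."""
    return [w.strip(".,!?;:") for w in text.lower().split() if w.strip(".,!?;:")]

def pad_sequences(sequences, maxlen):
    """Left-pad (or left-truncate) integer sequences to a common length with
    zeros, mirroring Keras' default padding='pre' / truncating='pre'."""
    return [[0] * (maxlen - len(s)) + s[-maxlen:] for s in sequences]

text_to_word_sequence("Deep learning, in R!")
# -> ['deep', 'learning', 'in', 'r']

pad_sequences([[3, 7], [1, 2, 3, 4, 5]], maxlen=4)
# -> [[0, 0, 3, 7], [2, 3, 4, 5]]
```

Padding matters because most layer inputs must be rectangular: every sequence in a batch has to share the same length before it can become a tensor.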

Without going into detail, here is a list of wonderful topics to study under the umbrella of NNs and deep learning:

  • Adaptive learning: Some people tend to characterize deep learning as the third generation of AI and adaptive learning as the fourth generation.
  • Data augmentation: Have you ever felt as if you don't have enough data for your problem? Data augmentation may help. It's very easy to implement with Keras, especially for computer vision problems.
  • Reinforcement learning: Do you wish to train a network to play a game or chat with humans? Reinforcement learning is the way to go.
  • Generative Adversarial Networks (GANs): Deep convolutional GANs can mimic real datasets.

We barely scratched the surface of neural nets, and there is much more to learn. If you would rather use Python directly to call Keras, instead of going through R, I recommend Mike Bernico's Deep Learning Quick Reference.

NNs are a whole universe in terms of how much there is to master and discover. Although they're very flexible and can tackle a whole bunch of different tasks, don't forget they're a black-box type of model. Nonetheless, they are amazing and worth the time required to master them. 
