Summary

In this chapter, we've learned how to write a simple neural network with one hidden layer that performs remarkably well. Along the way, we've learned how to perform ZCA whitening to clean up the data. There are some difficulties with this model, of course; chief among them is that you have to derive the gradients by hand before you can code them.
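As a refresher on the whitening step, here is a minimal sketch of ZCA whitening written against Gonum's mat and stat packages. It assumes the input matrix has already been mean-centred, and the zcaWhiten name and the eps parameter are illustrative rather than taken from the chapter's code:

package main

import (
	"fmt"
	"math"

	"gonum.org/v1/gonum/mat"
	"gonum.org/v1/gonum/stat"
)

// zcaWhiten is an illustrative helper: it builds W = U * diag(1/sqrt(eigval+eps)) * Uᵀ
// from the covariance of x (assumed to be mean-centred) and returns x * W.
func zcaWhiten(x *mat.Dense, eps float64) *mat.Dense {
	_, d := x.Dims()

	// Covariance matrix of the centred data.
	cov := mat.NewSymDense(d, nil)
	stat.CovarianceMatrix(cov, x, nil)

	// Eigendecomposition of the covariance matrix.
	var eig mat.EigenSym
	if ok := eig.Factorize(cov, true); !ok {
		panic("eigendecomposition failed")
	}
	vals := eig.Values(nil)
	var u mat.Dense
	eig.VectorsTo(&u)

	// diag(1/sqrt(eigval+eps)); eps keeps tiny eigenvalues from blowing up.
	scale := mat.NewDiagDense(d, nil)
	for i, v := range vals {
		scale.SetDiag(i, 1/math.Sqrt(v+eps))
	}

	// W = U * scale * Uᵀ, then whiten: out = x * W.
	var tmp, w, out mat.Dense
	tmp.Mul(&u, scale)
	w.Mul(&tmp, u.T())
	out.Mul(x, &w)
	return &out
}

func main() {
	// Four mean-centred observations with two correlated features.
	x := mat.NewDense(4, 2, []float64{
		1.5, 1.0,
		0.5, 0.5,
		-0.5, -0.5,
		-1.5, -1.0,
	})
	white := zcaWhiten(x, 1e-8)
	fmt.Printf("%.3f\n", mat.Formatted(white))
}

After this transformation, the covariance of the whitened data is (approximately) the identity matrix, which is exactly the "cleaning" effect we relied on in the chapter.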

The key takeaway point is that a simple neural network can do a lot! While this version of the code is heavily centred on Gorgonia's tensor package, the principles are exactly the same even if you use Gonum's mat package. In fact, Gorgonia's tensor uses Gonum's awesome matrix multiplication library underneath.
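To make that point concrete, here is a small sketch that computes the same matrix product once with Gorgonia's tensor package and once with Gonum's mat package. The package-level tensor.MatMul call and the WithShape/WithBacking options are written from memory of the tensor API rather than copied from the chapter's code, so treat this as an illustrative sketch:

package main

import (
	"fmt"

	"gonum.org/v1/gonum/mat"
	"gorgonia.org/tensor"
)

func main() {
	// The same 2x2 product, once with Gorgonia's tensor...
	a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))
	b := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{5, 6, 7, 8}))
	c, err := tensor.MatMul(a, b)
	if err != nil {
		panic(err)
	}
	fmt.Println(c)

	// ...and once with Gonum's mat. The numbers come out the same, because
	// tensor delegates its matrix multiplication to Gonum underneath.
	am := mat.NewDense(2, 2, []float64{1, 2, 3, 4})
	bm := mat.NewDense(2, 2, []float64{5, 6, 7, 8})
	var cm mat.Dense
	cm.Mul(am, bm)
	fmt.Println(mat.Formatted(&cm))
}

Whichever library you reach for, the linear algebra underneath the network is the same.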

In the next chapter, we will revisit the notion of a neural network on the same dataset and push it to 99% accuracy, but our mindset about how to approach a neural network will have to change. I would advise re-reading the section on linear algebra to get a stronger grasp of things.
