Neural network

A neural network provides a way to approximate nonlinear functions. The nonlinearity is achieved by applying activation functions to weighted sums of the input variables.

A neural network is organized into layers:

The input layer contains the inputs, and the hidden layer contains the weighted sums of the input values, where each connection is associated with a weight.

The nonlinearity is applied to the hidden layer. Typical nonlinear activation functions include the sigmoid, tanh, and rectified linear unit (ReLU).
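The three activation functions mentioned above can be sketched in a few lines of Python (a minimal illustration, not from the text):

```python
import math

def sigmoid(x):
    # Squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes any real value into the range (-1, 1)
    return math.tanh(x)

def relu(x):
    # Passes positive values through unchanged; maps negatives to 0
    return max(0.0, x)
```

Each function takes the weighted sum of a unit's inputs and returns its activation; the choice of function determines the range and shape of that response.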

The output layer computes a weighted sum of the hidden-unit activations. The optimal values of the weights on each connection are obtained by adjusting the weights so that the overall squared-error value is minimized. More details of how a neural network works are provided in a later chapter.
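The full forward pass described above can be sketched as follows. The weight values and layer sizes here are illustrative assumptions, not taken from the text; the sketch uses a sigmoid hidden layer feeding a single linear output, and computes the squared error that training would minimize:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Hidden layer: weighted sum of the inputs, then the nonlinearity
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output layer: weighted sum of the hidden-unit activations
    return sum(w * h for w, h in zip(output_weights, hidden))

# Illustrative network: 2 inputs, 3 hidden units, 1 output
inputs = [0.5, -1.0]
hidden_weights = [[0.1, 0.4], [-0.3, 0.2], [0.7, -0.6]]
output_weights = [0.2, -0.5, 0.9]

prediction = forward(inputs, hidden_weights, output_weights)
target = 1.0
squared_error = (target - prediction) ** 2  # the quantity training minimizes
```

Training would repeatedly nudge `hidden_weights` and `output_weights` in the direction that reduces `squared_error` across all training examples.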
