A simple linear neuron

A linear neuron is the most basic building block of a deep neural network. It can be schematically represented as shown in the following figure. Here, $x = (x_1, x_2, \ldots, x_d)$ represents the input vector and the $w_i$ are the weights of the neuron. Given a training set consisting of a set of (input, target value) pairs, a linear neuron tries to learn a linear transformation that maps the input vectors to the corresponding target values. Basically, a linear neuron approximates the input-output relationship by a linear function:

$y = w^T x = \sum_i w_i x_i$

Schematic representation of a simple linear neuron and a simple non-linear neuron
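
As a minimal sketch (assuming NumPy and made-up example values, not anything specified in the text), the output of a linear neuron is simply the weighted sum of its inputs:

import numpy as np

def linear_neuron(x, w):
    # Output of a linear neuron: y = w^T x = sum_i w_i * x_i
    return np.dot(w, x)

# Made-up example: a 3-dimensional input vector and its weights
x = np.array([2.0, 5.0, 1.0])
w = np.array([0.5, 0.1, 0.3])
print(linear_neuron(x, w))  # 1.8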

Let's try to model a toy problem with this simple neuron. Employee A buys lunch from the cafeteria. Their diet consists of fish, chips, and ketchup, and they get several portions of each. The cashier only tells them the total price of the meal. After several days, can they figure out the price of each portion?

Well, this sounds like a simple system of linear equations (essentially a linear regression problem), which can be solved analytically. Let's represent this problem using the preceding linear neural unit. Here, the input vector $x = (x_{fish}, x_{chips}, x_{ketchup})$ holds the number of portions of each item, and we have the corresponding weights $w = (w_{fish}, w_{chips}, w_{ketchup})$, the per-portion prices we want to learn.

Each meal price gives a linear constraint on the prices of the portions:

$\text{price} = x_{fish} w_{fish} + x_{chips} w_{chips} + x_{ketchup} w_{ketchup}$
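
For instance, with made-up numbers purely for illustration: if a meal of 2 portions of fish, 5 portions of chips, and 1 portion of ketchup costs 8.50, the corresponding constraint reads $2 w_{fish} + 5 w_{chips} + 1 w_{ketchup} = 8.50$.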

Let $t_n$ be the true price of meal $n$ and $y_n$ the price estimated by our model, given by the preceding linear equation. The residual price difference between the target and our estimate is $t_n - y_n$. Now, these residuals for different meals can be positive or negative and may cancel out, giving zero overall error. One way to handle this is to use the sum of squared residuals, $E = \sum_n (t_n - y_n)^2$. If we can minimize this error, we can hopefully get a good estimate of the weights, that is, the price per item. So, we have arrived at an optimization problem. Let's first discuss some of the methods to solve an optimization problem.
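
As a minimal sketch of this objective (with made-up portion counts and per-portion prices, purely for illustration), the following NumPy snippet builds a few meals, computes the model estimates $y_n$, and evaluates the sum of squared residuals for a candidate set of weights:

import numpy as np

# Each row is one meal: portions of [fish, chips, ketchup] (made-up numbers)
X = np.array([[2, 5, 1],
              [1, 3, 2],
              [3, 2, 1],
              [2, 2, 2]], dtype=float)

# Hypothetical true per-portion prices used to generate the cashier's totals t_n
true_w = np.array([1.50, 0.50, 0.25])
t = X @ true_w

def sum_squared_residuals(w, X, t):
    # E = sum_n (t_n - y_n)^2 for the linear model y_n = w^T x_n
    y = X @ w
    return np.sum((t - y) ** 2)

guess_w = np.array([1.00, 1.00, 1.00])
print(sum_squared_residuals(guess_w, X, t))  # large error for a poor guess
print(sum_squared_residuals(true_w, X, t))   # 0.0 at the true prices

Minimizing this error with respect to the weights would recover the per-portion prices, which is exactly the optimization problem discussed next.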
