The neuron has a bias b, which is summed with the weighted inputs to form the net input
n, which can be expressed by
n = \sum_{j=1}^{R} w_j p_j + b = W p + b.   (3.1)
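As a concrete illustration of Eq. (3.1), the net input is a dot product of the weight vector with the input vector plus a bias. The following minimal Python sketch uses illustrative names (`net_input`, `w`, `p`, `b`) and example values that are not from the text:

```python
def net_input(w, p, b):
    """Eq. (3.1): n = sum_j w_j * p_j + b, i.e., the dot product W p plus the bias."""
    return sum(wj * pj for wj, pj in zip(w, p)) + b

# Illustrative values: R = 2 inputs.
n = net_input([0.5, -0.25], [1.0, 2.0], 0.1)  # 0.5*1.0 - 0.25*2.0 + 0.1 = 0.1
```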
Then the net input n passes through an activation function f, which generates the neuron
output a:
a = f(n).   (3.2)
In this study, the log-sigmoid activation function is adopted. It is given by the following
expression:
f(x) = \frac{1}{1 + e^{-x}}.   (3.3)
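The log-sigmoid of Eq. (3.3) maps any real net input into the open interval (0, 1). A minimal sketch (the name `logsig` is illustrative, not from the text):

```python
import math

def logsig(x):
    """Log-sigmoid activation, Eq. (3.3): f(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + math.exp(-x))

# f(0) = 0.5 by symmetry; large positive inputs saturate toward 1,
# large negative inputs toward 0.
logsig(0.0)
```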
Thus, the multi-input FFNN in Fig. 3.2 implements the following equation:
a^2 = f^2\!\left( \sum_{i=1}^{S} w^2_{1,i} \, f^1\!\left( \sum_{j=1}^{R} w^1_{i,j} p_j + b^1_i \right) + b^2 \right),   (3.4)
where a^2 denotes the output of the overall network, R is the number of inputs, S is the number
of neurons in the hidden layer, and p_j indicates the jth input. f^1 and f^2 are the activation
functions of the hidden layer and output layer, respectively. b^1_i represents the bias of the ith
neuron in the hidden layer, and b^2 is the bias of the neuron in the output layer. w^1_{i,j} represents
the weight connecting the jth input and the ith neuron of the hidden layer, and w^2_{1,i} represents
the weight connecting the ith neuron of the hidden layer to the output-layer neuron.
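A direct transcription of Eq. (3.4) might look like the following Python sketch, where `W1`, `b1`, `W2`, and `b2` hold the hidden- and output-layer parameters; all names and the example values are illustrative, not from the text:

```python
import math

def logsig(x):
    """Log-sigmoid activation, Eq. (3.3)."""
    return 1.0 / (1.0 + math.exp(-x))

def ffnn_forward(p, W1, b1, W2, b2):
    """Two-layer FFNN output per Eq. (3.4).

    W1[i][j] plays the role of w^1_{i,j}, b1[i] of b^1_i,
    W2[i] of w^2_{1,i}, and b2 of b^2.
    """
    # Hidden layer: a^1_i = f^1(sum_j w^1_{i,j} p_j + b^1_i)
    a1 = [logsig(sum(w * x for w, x in zip(row, p)) + bi)
          for row, bi in zip(W1, b1)]
    # Output layer: a^2 = f^2(sum_i w^2_{1,i} a^1_i + b^2)
    return logsig(sum(w * a for w, a in zip(W2, a1)) + b2)

# Example with R = 2 inputs and S = 2 hidden neurons (illustrative values).
out = ffnn_forward([1.0, -1.0],
                   [[0.2, -0.3], [0.4, 0.1]],
                   [0.0, 0.1],
                   [0.5, -0.5],
                   0.2)
```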
3.2 STANDARD BACKPROPAGATION ALGORITHM
In order to train the established FFNN, the backpropagation algorithm can be utilized [67].
Considering a multilayer FFNN, such as the three-layer network shown in Fig. 3.2, its operation
can be described using the following equation:
a^{m+1} = f^{m+1}\left( W^{m+1} a^m + b^{m+1} \right),   (3.5)
where a^m and a^{m+1} are the outputs of the mth and (m+1)th layers of the network, respectively,
and b^{m+1} is the bias vector of the (m+1)th layer. Here m = 0, 1, ..., M-1, where M is
the number of layers of the neural network. The neurons of the first layer receive the inputs:
a^0 = p.   (3.6)
Equation (3.6) provides the initial condition for Equation (3.5). The outputs of the neurons in
the last layer can be seen as the overall network's outputs:
a = a^M.   (3.7)
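Equations (3.5)-(3.7) describe a simple layer-by-layer recursion, which might be sketched as follows (function and variable names are illustrative, not from the text):

```python
import math

def logsig(x):
    """Log-sigmoid activation, Eq. (3.3)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(p, weights, biases, activations):
    """Multilayer forward pass per Eqs. (3.5)-(3.7).

    weights[m] is W^{m+1}, biases[m] is b^{m+1}, and
    activations[m] is f^{m+1}, for m = 0, ..., M-1.
    """
    a = p  # Eq. (3.6): the first layer receives the inputs, a^0 = p
    for W, b, f in zip(weights, biases, activations):
        # Eq. (3.5): a^{m+1} = f^{m+1}(W^{m+1} a^m + b^{m+1})
        a = [f(sum(w * x for w, x in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a  # Eq. (3.7): the last layer's output is the network output
```

With M = 2, one hidden layer and one output layer, this recursion reproduces Eq. (3.4).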