Chapter 7
Artificial Neural Networks and Machine Evolution
This chapter introduces artificial information networks that imitate the human nervous system. It also discusses the evolutionary development of machines using such networks, including machines to which the biological theory of evolution is applied. If human consciousness or the mind consists of networks of nervous systems that connect the brain and the body, we should be able to imitate these human functions with networks of artificial nerves and computer programs. It is only natural to believe that human consciousness and the mind arose in the course of the phylogenetic development of humans. The latter part of this chapter describes the possibility of a machine system autonomously acquiring something like human consciousness or the mind by evolving and developing artificial nervous systems, applying the principles of evolutionary theory and using computers.
7.1 Neural Networks
Braitenberg’s vehicles behaved variously via signals transmitted directly from the sensors to the drive motors. His vehicles are
simple mechanical systems with neural pathways for transmitting signals from sensors to drive motors. These electronic pathways are “neural pathways” mimicking those of a living organism. In other words, the behavior of the neural pathways determines the behavior of the vehicle.
If we could cleverly build neural pathways connecting the sensors and drive motors of a robot, it would be able to perform various behaviors. Perhaps it would even be possible to create “consciousness” or a “mind” on a computer or other machine system using such artificial neural networks.
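To make this concrete, here is a minimal sketch, our own illustration rather than anything from the text, of a Braitenberg-style vehicle in which two light sensors drive two motors through a small weight matrix. The weights alone decide whether the vehicle turns toward or away from a light source.

```python
def vehicle_step(left_sensor, right_sensor, weights):
    """Map two sensor readings to two motor speeds through fixed
    'neural pathways' given by a 2x2 weight matrix."""
    left_motor = weights[0][0] * left_sensor + weights[0][1] * right_sensor
    right_motor = weights[1][0] * left_sensor + weights[1][1] * right_sensor
    return left_motor, right_motor

# Crossed excitatory pathways: the brighter side speeds up the opposite
# motor, so the vehicle turns toward the light.
crossed = [[0.0, 1.0],
           [1.0, 0.0]]

# Uncrossed excitatory pathways: the brighter side speeds up the motor on
# the same side, so the vehicle turns away from the light.
uncrossed = [[1.0, 0.0],
             [0.0, 1.0]]

print(vehicle_step(0.2, 0.9, crossed))    # (0.9, 0.2): veers toward light
print(vehicle_step(0.2, 0.9, uncrossed))  # (0.2, 0.9): veers away
```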
7.1.1 Hebb’s Rule
Artificial neural pathways simulating the neural pathways of
organisms are already known. They are generally called artificial
neural networks, or simply neural networks. Neural networks were
first introduced in 1949 by Canadian psychologist Donald Hebb, who
proposed a theory known as Hebb’s rule. Hebb’s rule says that when
two connected neurons are simultaneously activated, the strength,
i.e., the weight, of the synapse between the two neurons increases.
For example, when neurons $x_1$ and $y_2$ are simultaneously activated, the relevant synaptic weight $w_{12}$ increases (Fig. 7.1). The synaptic weight is also simply called the weight and is also known as the synaptic value. The synaptic weight is defined by the following equation:
$$w_{12} = x_1 \times y_2$$
or, more generally,
$$w_{ij} = x_i \times y_j \qquad (7.1)$$
where $x_i$ and $y_j$ are presynaptic and postsynaptic neurons, respectively (see Fig. 7.1).
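As a concrete illustration of Eq. 7.1, Hebb's rule can be written as a single weight update in which every product $x_i \times y_j$ is computed at once. This is a minimal sketch with our own variable names, assuming binary (0/1) activations as in the example that follows:

```python
import numpy as np

def hebb_update(W, x, y):
    """Hebb's rule (Eq. 7.1): weight w_ij grows by x_i * y_j, so a
    synapse strengthens only when both of its neurons fire together."""
    return W + np.outer(x, y)  # outer product: all x_i * y_j at once

# Four input neurons, three output neurons, all synapses initially zero.
W = np.zeros((4, 3))
```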
Let the output $(y_1, y_2, y_3) = (1, 0, 0)$ be expected for the input pattern $(x_1, x_2, x_3, x_4) = (0, 1, 0, 1)$ for this neural network. In a neural network satisfying these input and output conditions, two pairs of neurons, $(x_2, y_1)$ and $(x_4, y_1)$, are activated according to Hebb's rule. Synaptic weight $w_{21}$ between $x_2$ and $y_1$ becomes unity, i.e., the strength of the synapse between these two neurons increases. The same applies to $w_{41}$, connecting $x_4$ and $y_1$ (Fig. 7.1(1)).
Figure 7.1. Application of Hebb’s rule.
All other synaptic weights do not increase and remain zero because, from Eq. 7.1, $w_{11} = 0$, for example, since $x_1 = 0$ and $y_1 = 1$. In Fig. 7.1, the synapses with an increased weight are shown using thicker lines. One lesson has thus been learned with Eq. 7.1.
Here is the second lesson. Let the output pattern $(y_1, y_2, y_3) = (0, 1, 0)$ be expected for the input pattern $(x_1, x_2, x_3, x_4) = (1, 0, 0, 0)$. Then synaptic weight $w_{12}$ increases in addition to the weights of the first lesson. After this second lesson the neural network changes as shown in Fig. 7.1(2). As you might already have anticipated, the more lessons that are learned, the more thick lines appear in the network, until every synapse reaches unity. Ultimately, every output neuron fires no matter what the input pattern is. The obvious flaw of this approach is that the synaptic weights increase endlessly with learning. The delta rule was devised to solve this problem.
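Using the hypothetical hebb_update sketch from above, the two lessons of Fig. 7.1 and the runaway growth of the weights can be reproduced in a few lines:

```python
# Lesson 1: input (0,1,0,1) with expected output (1,0,0).
W = hebb_update(W, np.array([0, 1, 0, 1]), np.array([1, 0, 0]))
# Lesson 2: input (1,0,0,0) with expected output (0,1,0).
W = hebb_update(W, np.array([1, 0, 0, 0]), np.array([0, 1, 0]))
print(W)  # w_21, w_41, and w_12 are now unity, as in Fig. 7.1(2)

# The flaw: repeating a lesson keeps thickening the same synapses.
for _ in range(100):
    W = hebb_update(W, np.array([0, 1, 0, 1]), np.array([1, 0, 0]))
print(W.max())  # 101.0 -- the weights grow without bound
```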
7.1.2 Single-Layer Neural Network and Delta Rule
Let us consider the learning of a neural network as shown in
Fig. 7.2(1). This type of neural network is called a single-layer neural
network.
Let an input pattern $\rho_n$ over the $n$ input neurons be presented to this neural network:
$$\rho_n = (x_1, x_2, x_3, \ldots, x_{n-2}, x_{n-1}, x_n) = (1, 1, 0, 1, 0, 1, \ldots, 0, 1) \quad \text{(an example)}$$
where $k$ is the total number of output neurons.
Figure 7.2. Single-layer neural network and explanatory illustration.
$\Phi(M)$ is the linear output function, which returns a value continuously for the variable $M$:
$$y_1 = \Phi(w_{11} x_1 + w_{21} x_2 + \cdots + w_{n1} x_n) = \Phi\left(\sum_{i=1}^{n} w_{i1} x_i\right)$$
$$y_2 = \Phi(w_{12} x_1 + w_{22} x_2 + \cdots + w_{n2} x_n) = \Phi\left(\sum_{i=1}^{n} w_{i2} x_i\right)$$
$$\vdots$$
$$y_k = \Phi(w_{1k} x_1 + w_{2k} x_2 + \cdots + w_{nk} x_n) = \Phi\left(\sum_{i=1}^{n} w_{ik} x_i\right)$$
or, for all outputs at once,
$$y_j = \Phi\left(\sum_{i=1}^{n} w_{ij} x_i\right), \quad j = 1, \ldots, k \qquad (7.2)$$
This equation calculates the values of the signals output from the neural network as the input patterns vary, given the synaptic weights $w_{ij}$. These complex equations are described in a more easily understood manner below.
Figure 7.2(2) is a part of Fig. 7.2(1) extracted for the purpose of explanation. The extracted figure shows the relationship between output terminal $y_1$ and the input terminal pattern $(x_1, x_2, x_3, \ldots, x_n)$. Figure 7.2(3) is a network representation of the neurons in Fig. 7.2(2). The network consists of several neurons (e.g., a in Fig. 7.2(3)) and the neural pathways connecting the neurons (e.g., b). Each neural pathway has a direction for transmitting signals, for example from $x_1$ to $y_1$. Furthermore, each neural pathway has its own synaptic weight $w_{ij}$, which indicates the strength of the flow of signals (e.g., c, or $w_{11}$, in Fig. 7.2(3)).
By modifying these weights, the input patterns can be converted into a desired output pattern. If, for example, the weight $w_{11}$ for information transmitted from input terminal $x_1$ to output terminal $y_1$ is zero, the signals received at $x_1$ cannot be transmitted to $y_1$. The value of output terminal $y_1$ in Fig. 7.2(3) is calculated by the first expression of Eq. 7.2. First, the input values are multiplied by their respective weights and the results are added to find the sum:
$$M_1 = w_{11} x_1 + w_{21} x_2 + w_{31} x_3 + \cdots + w_{n1} x_n$$
The value $M_1$ represents the total weight-adjusted strength of the signals appearing on the input terminals. $M_1$ is then used to determine the value of output terminal $y_1$. The value of $y_1$ is determined only when $M_1$ exceeds a certain threshold value. Several methods are available to calculate the threshold value. Detailed explanations are omitted here, but in essence, the function $\Phi(M_1)$ is calculated.
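The text leaves the exact form of $\Phi$ open. One simple choice consistent with the description above is a step threshold that lets $y_1$ fire only when $M_1$ exceeds a threshold $\theta$. A minimal sketch, with $\theta$ chosen arbitrarily:

```python
def phi(M, theta=1.5):
    """A step-threshold output function: the neuron fires (returns 1.0)
    only when the weighted input sum M exceeds the threshold theta."""
    return 1.0 if M > theta else 0.0

M1 = 2.0        # e.g., w_21*x_2 + w_41*x_4 with both weights at unity
print(phi(M1))  # -> 1.0: output neuron y_1 fires
```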
The values of $y_2, y_3, y_4, \ldots, y_k$ are determined by the same calculation. As a result, the whole output pattern can be decided. All of these calculations are subsumed in the final expression of Eq. 7.2.
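Putting the pieces together, the whole of Eq. 7.2 amounts to one matrix-vector product followed by the output function. A minimal sketch, reusing the names introduced above:

```python
import numpy as np

def forward(W, x, phi=lambda m: m):
    """Eq. 7.2: y_j = phi(sum_i w_ij * x_i) for j = 1..k, where
    W[i, j] holds the synaptic weight w_ij (shape n x k)."""
    return np.vectorize(phi)(x @ W)

n, k = 4, 3
W = np.zeros((n, k))
W[1, 0] = W[3, 0] = 1.0             # the weights learned in Fig. 7.1(1)
x = np.array([0.0, 1.0, 0.0, 1.0])  # the first lesson's input pattern
print(forward(W, x))                # -> [2. 0. 0.] with a linear phi
print(forward(W, x, lambda m: 1.0 if m > 1.5 else 0.0))  # -> [1. 0. 0.]
```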
It is clear that various reactive systems can be constructed using these neural networks. The function of a system is determined by the structure of the network. Figure 7.2(1) shows one such neural network.