Particle Swarm Simulation 149
6.4.1 PSO algorithm
The classic PSO was intended to be applied to optimization problems. It
simulates the motion of a large number of individuals (or "particles") moving in
a multi-dimensional space [71]. Each individual stores its own location vector
($\vec{x}_i$), velocity vector ($\vec{v}_i$), and the position at which the individual obtained
the highest fitness value ($\vec{p}_i$). All individuals also share information regarding
the position with the highest fitness value for the group ($\vec{p}_g$).
As generations progress, the velocity of each individual is updated using
the best overall location obtained up to the current time for the entire group
and the best locations obtained up to the current time for that individual.
This update is performed using the following formula:
\vec{v}_i = \chi\left(\omega \vec{v}_i + \phi_1 \cdot (\vec{p}_i - \vec{x}_i) + \phi_2 \cdot (\vec{p}_g - \vec{x}_i)\right) \qquad (6.5)
The coefficients employed here are the convergence coefficient $\chi$ (a random
value between 0.9 and 1.0) and the attenuation coefficient $\omega$, while $\phi_1$ and
$\phi_2$ are random values unique to each individual and dimension, with a
maximum value of 2. When the calculated velocity exceeds some limit, it is
replaced by a maximum velocity $V_{max}$. This procedure allows us to hold the
individuals within the search region during the search.
The locations of each of the individuals are updated at each generation by
the following formula:

\vec{x}_i = \vec{x}_i + \vec{v}_i \qquad (6.6)
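Equations (6.5) and (6.6) can be sketched as a single per-generation update step. The following is a minimal illustration, not the book's implementation: the function and parameter names, the list-based representation, and the uniform sampling of $\phi_1$ and $\phi_2$ up to their maximum of 2 are assumptions for the sketch.

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             chi=0.95, omega=0.9, phi_max=2.0, v_max=1.0):
    """One generation of the PSO update, following eqs. (6.5) and (6.6).

    chi is the convergence coefficient, omega the attenuation coefficient,
    phi_max the upper limit of phi_1/phi_2, and v_max the maximum velocity.
    The defaults are illustrative assumptions.
    """
    for i, (x, v) in enumerate(zip(positions, velocities)):
        new_v = []
        for d in range(len(x)):
            # phi_1 and phi_2 are unique to each individual and dimension
            phi1 = random.uniform(0.0, phi_max)
            phi2 = random.uniform(0.0, phi_max)
            vd = chi * (omega * v[d]
                        + phi1 * (pbest[i][d] - x[d])
                        + phi2 * (gbest[d] - x[d]))
            # clamp to the maximum velocity V_max to stay in the search region
            vd = max(-v_max, min(v_max, vd))
            new_v.append(vd)
        velocities[i] = new_v
        # eq. (6.6): move each particle by its updated velocity
        positions[i] = [x[d] + new_v[d] for d in range(len(x))]
    return positions, velocities
```

A particle sitting at the origin with both bests at (1, 1) will accelerate toward them, with each velocity component clipped to `v_max`.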
The overall flow of the PSO is as shown in Fig. 6.12. Let us now consider
the specific movements of each individual (see Fig. 6.13). A flock consisting of
a number of birds is assumed to be in flight. We focus on one of the individuals
(Step 1). In the figure, the symbols and linking line segments indicate the
positions and paths of the bird. The nearby symbol (on its path) indicates
the position with the highest fitness value on the individual's path (Step 2).
The distant symbol (on the other bird's path) marks the position with the
highest fitness value for the flock (Step 2). One would expect that the next
state will be reached in the direction shown by the arrows in Step 3. Vector 1
shows the direction followed in the previous steps; vector 2 is directed
toward the position with the highest fitness for the flock; and vector 3 points
to the location where the individual obtained its highest fitness value so far.
Thus, all these vectors, 1, 2, and 3, in Step 3 are summed to obtain the
actual direction of movement in the subsequent step (see Step 4).
A simulator is available for investigating the PSO search process. Fig-
ure 6.14 is a screenshot of the simulator. Interested readers are referred to
Appendix A.2 for a detailed description of how the simulator is operated.
The efficiency of this type of PSO search is certainly high because focused
searching is available near optimal solutions in a relatively simple search space.
However, the canonical PSO algorithm often gets trapped in a local optimum
in multimodal problems. Because of that, some sort of adaptation is necessary
in order to apply PSO to problems with multiple sharp peaks.
150 Agent-Based Modeling and Simulation with Swarm
FIGURE 6.12: Flow chart of the PSO algorithm.
To overcome the above limitation, a GA-like mutation can be integrated
with PSO [51]. This hybrid PSO does not follow the simple PSO process, in
which every individual moves to another position inside the search area with
a predetermined probability without being affected by other individuals;
instead, it leaves a certain ambiguity in the transition to the next generation
through a Gaussian mutation. This technique employs the following equation:
mut(x) = x × (1 + Gaussian(σ)), (6.7)
where $\sigma$ is set to be 0.1 times the length of the search space in one dimension.
The individuals are selected at a predetermined probability and their positions
are perturbed according to the Gaussian distribution. Wide-ranging
searches are possible at the initial search stage, and search efficiency is
improved at the middle and final stages by gradually reducing the appearance
ratio of the Gaussian mutation. Figure 6.15 shows the
PSO search process with a Gaussian mutation. In the figure, $V_{lbest}$ represents
the velocity based on the local best, i.e., $\vec{p}_i - \vec{x}_i$ in eq. (6.5), whereas $V_{gbest}$
represents the velocity based on the global best, i.e., $\vec{p}_g - \vec{x}_i$.
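The mutation of eq. (6.7) can be sketched as follows. The per-coordinate selection probability `rate` and the helper names are illustrative assumptions; only the formula $mut(x) = x \times (1 + Gaussian(\sigma))$ and the choice of $\sigma$ as 0.1 times the one-dimensional search-space length come from the text.

```python
import random

def gaussian_mutation(x, sigma):
    """Eq. (6.7): mut(x) = x * (1 + Gaussian(sigma))."""
    return x * (1.0 + random.gauss(0.0, sigma))

def mutate_swarm(positions, search_length, rate):
    """Mutate each coordinate with probability `rate` (an assumed scheme).

    sigma is 0.1 times the length of the search space in one dimension,
    e.g. for -10 <= x_i <= 10 the length is 20, giving sigma = 2.0.
    """
    sigma = 0.1 * search_length
    return [[gaussian_mutation(xd, sigma) if random.random() < rate else xd
             for xd in x]
            for x in positions]
```

Decaying `rate` over the generations reproduces the schedule described above: broad perturbations early, and little interference with convergence later.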
6.4.2 Comparison with GA
Let us turn to a comparison of the performance of PSO with that of the
GA using benchmark functions to examine the effectiveness of PSO.
FIGURE 6.13: In which way do birds fly?
For the comparison, F8 (Rastrigin's function) and F9 (Griewangk's func-
tion) are employed. These are defined as:

F8(x_1, x_2) = 20 + x_1^2 - 10\cos(2\pi x_1) + x_2^2 - 10\cos(2\pi x_2) \qquad (-5.11 \le x_i \le 5.11)

F9(x_1, x_2) = \frac{1}{4000}\sum_{i=1}^{2}(x_i - 100)^2 - \prod_{i=1}^{2}\cos\left(\frac{x_i - 100}{\sqrt{i}}\right) + 1 \qquad (-10 \le x_i \le 10)
Figures 6.16 and 6.17 show the shapes of F8 and F9, respectively. F8 and
F9 seek the minimum value. F8 contains a large number of peaks, so that its
optimization is particularly difficult.
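The two benchmarks are easy to implement directly from their definitions. This sketch follows the formulas above, assuming the standard Griewangk form with $\sqrt{i}$ in the cosine denominator; the function names are illustrative.

```python
import math

def f8(x1, x2):
    """Rastrigin's function F8: global minimum 0 at the origin."""
    return (20 + x1**2 - 10 * math.cos(2 * math.pi * x1)
               + x2**2 - 10 * math.cos(2 * math.pi * x2))

def f9(x1, x2):
    """Griewangk's function F9: global minimum 0 at (100, 100)."""
    s = sum((xi - 100) ** 2 for xi in (x1, x2)) / 4000.0
    p = math.prod(math.cos((xi - 100) / math.sqrt(i))
                  for i, xi in enumerate((x1, x2), start=1))
    return s - p + 1
```

Note that the minimum of F9 lies at (100, 100), outside the search range of Table 6.1, so within $-10 \le x_i \le 10$ the optimizer only approaches it.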
Comparative experiments were conducted with PSO and GA using the
above benchmark functions. PSO and GA were repeatedly run 100 times.
Search space ranges for the experiments are listed in Table 6.1. PSO and GA
parameters are given in Table 6.2.
The performance results are shown in Figs. 6.18 and 6.19, which plot the
fitness values against the generations. Table 6.3 shows the averaged best fit-
ness values over 100 runs.

FIGURE 6.14: PSO simulator.

FIGURE 6.15: Concept of searching process by PSO with a Gaussian mu-
tation.

As can be seen from the table and the figures, the
combination of PSO with a Gaussian mutation allows us to achieve a perfor-
mance that is almost equal to that of the canonical PSO for the unimodal
functions, and a better performance than the canonical PSO for the multi-
modal functions. The experimental results with other benchmark functions
are further discussed in [58].
PSO is a stochastic search method, as are GA and GP, and its method
of adjustment of $\vec{p}_i$ and $\vec{p}_g$ resembles crossover in GA. It also employs the
concept of fitness, as in evolutionary computation. Thus, the PSO algorithm
is strongly related to evolutionary computation (EC) methods. In conceptual
terms, one could place PSO somewhere between GA and EP.
However, PSO has certain characteristics that other EC techniques do
not have. GA operators directly operate on the search points in a multi-
dimensional search space, while PSO operates on the motion vectors of parti-
cles which in turn update the search points (i.e., particle positions). In other
words, GA operators are position specific and PSO operators are direction
specific.
TABLE 6.1: Search space for test functions.

Function   Search space
F8         -65.535 <= x_i < 65.536
F9         -10 <= x_i <= 10
TABLE 6.2: PSO and GA parameters.

Parameter         PSO, PSO with Gaussian   Real-valued GA
Population        200                      200
V_max             1                        -
Generation        50                       50
phi_1, phi_2      upper limits = 2.0       -
Inertia weight    0.9                      -
Crossover ratio   -                        0.7 (BLX-alpha)
Mutation          0.01                     0.01
Elite             -                        0.05
Selection         -                        tournament (size = 6)
TABLE 6.3: Average best fitness of 100 runs for experiments.

     Gen   GA          PSO         PSO with Gaussian
F8   1     4.290568    3.936564    3.913959
     10    0.05674     0.16096     0.057193
     20    0.003755    0.052005    0.002797
     30    0.001759    0.037106    0.000454
     40    0.001226    0.029099    0.000113
     50    0.000916    0.02492     3.61E-05
F9   1     0.018524    0.015017    0.019726
     10    0.000161    0.000484    0.000145
     20    1.02E-05    0.000118    1.43E-05
     30    3.87E-06    6.54E-05    4.92E-06
     40    2.55E-06    5.50E-05    2.04E-06
     50    1.93E-06    4.95E-05    1.00E-06