3. Swarm Intelligence
• Definition
Swarm intelligence is artificial intelligence based on the
collective behavior of decentralized, self-organized systems.
The expression was introduced by Gerardo Beni and Jing
Wang in 1989, in the context of cellular robotic systems.
4. Swarm Intelligence
• Information
Swarm intelligence systems are typically made up of a population of
simple agents interacting locally with one another and with their
environment.
The agents follow very simple rules, and although there is no centralized
control structure dictating how individual agents should behave, local
interactions between such agents lead to the emergence of complex
global behavior.
Natural examples of SI include ant colonies, bird flocking, animal
herding, bacterial growth, and fish schooling.
5. Swarm Intelligence Applications
• U.S. military is investigating swarm techniques for
controlling unmanned vehicles.
• NASA is investigating the use of swarm technology
for planetary mapping.
• Tim Burton's Batman Returns was the first movie to
make use of swarm technology for rendering,
realistically depicting the movements of a group of
penguins using the Boids system.
• The Lord of the Rings film trilogy made use of
similar technology, known as Massive, during battle
scenes.
6. Particle swarm optimization
The particle swarm optimization algorithm was first described in 1995 by James
Kennedy and Russell C. Eberhart, inspired by the social behavior of bird
flocking and fish schooling.
Particle swarm optimization (PSO) is a global, population-based
evolutionary optimization algorithm for problems in which a best solution can be
represented as a point or surface in an n-dimensional space.
Hypotheses are plotted in this space and seeded with an initial velocity, as well as
a communication channel between the particles.
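To make this concrete, a candidate solution ("hypothesis") is just a point in an n-dimensional space scored by a fitness function. The sphere function and the bounds in the Python sketch below are illustrative assumptions, not taken from the slides.

import numpy as np

# Illustrative fitness function (an assumed benchmark, not from the slides):
# the sphere function, whose global minimum is 0 at the origin of R^n.
def sphere(x: np.ndarray) -> float:
    return float(np.sum(x ** 2))

# A candidate solution ("hypothesis") is simply a point in n-dimensional space.
n_dimensions = 5
candidate = np.random.uniform(-10.0, 10.0, size=n_dimensions)
print(sphere(candidate))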
7. How it works [1/2]
PSO is initialized with a group of random particles (solutions) and
then searches for optima by updating generations.
Particles move through the solution space and are evaluated according
to some fitness criterion after each timestep. In every iteration, each
particle is updated by following two "best" values.
8. How it works [2/2]
The first one is the best solution (fitness) it has achieved so far (the
fitness value is also stored). This value is called pbest.
Another "best" value that is tracked by the particle swarm optimizer
is the best value obtained so far by any particle in the population.
This second best value is a global best and is called gbest.
When a particle takes only a part of the population as its topological
neighbors, the best value within that neighborhood is a local best and is called lbest.
Neighborhood bests allow parallel exploration of the search space
and reduce the susceptibility of PSO to falling into local minima,
but slow down convergence speed.
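A minimal sketch of how these three bests can be tracked, assuming minimization and a ring topology for the lbest neighbourhood (the slides do not fix a particular topology):

import numpy as np

# Sketch of tracking pbest, gbest and lbest (minimization assumed).
# positions, pbest_pos: (n, d) arrays; fitness, pbest_fit: (n,) arrays.
def update_bests(positions, fitness, pbest_pos, pbest_fit):
    improved = fitness < pbest_fit                   # particles that beat their own record
    pbest_pos[improved] = positions[improved]        # personal best: pbest
    pbest_fit[improved] = fitness[improved]
    gbest_pos = pbest_pos[np.argmin(pbest_fit)]      # global best: gbest
    n = len(fitness)
    lbest_pos = np.empty_like(pbest_pos)
    for i in range(n):                               # local best: lbest over ring neighbours
        neigh = [(i - 1) % n, i, (i + 1) % n]        # ring topology is an assumption here
        lbest_pos[i] = pbest_pos[neigh[int(np.argmin(pbest_fit[neigh]))]]
    return pbest_pos, pbest_fit, gbest_pos, lbest_pos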
9. PSO Algorithm (General)
• Searches Hyperspace of Problem for Optimum
▫ Define problem to search
How many dimensions?
Solution criteria?
▫ Initialize Population (sketched after this outline)
Random initial positions
Random initial velocities
▫ Determine Best Position
Global Best Position
Personal Best Position
▫ Update Velocity and Position Equations
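A sketch of the "Initialize Population" step from this outline; swarm size, dimensionality and bounds are illustrative assumptions:

import numpy as np

# Random initial positions and velocities (sizes and bounds are assumptions).
n_particles, n_dimensions = 30, 5
lower, upper = -10.0, 10.0
positions = np.random.uniform(lower, upper, size=(n_particles, n_dimensions))
velocities = np.random.uniform(-(upper - lower), upper - lower,
                               size=(n_particles, n_dimensions))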
10. Particle Properties
With Particle Swarm Optimization, a swarm of particles (individuals) in an n-
dimensional search space G is simulated, where each particle p has a position p.g
∈ G ⊆ Rⁿ and a velocity p.v ∈ Rⁿ.
The position p.g corresponds to the genotype and, in most cases, also to the
solution candidate, i.e., p.x = p.g, since most often the problem space X is also
Rⁿ and X = G. However, this is not necessarily the case and, in general, we can
introduce any form of genotype-phenotype mapping in Particle Swarm
Optimization.
The velocity vector p.v of an individual p determines in which direction the
search will continue and whether it has an explorative (high velocity) or an exploitive
(low velocity) character.
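A minimal particle structure mirroring these properties, assuming the identity genotype-phenotype mapping p.x = p.g described above:

from dataclasses import dataclass
import numpy as np

@dataclass
class Particle:
    g: np.ndarray          # position p.g in the search space G ⊆ R^n (genotype)
    v: np.ndarray          # velocity p.v in R^n

    @property
    def x(self) -> np.ndarray:
        # identity genotype-phenotype mapping (p.x = p.g), the common case
        return self.g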
11. Neighbourhood
∀ p, q ∈ Pop : q ∈ N(p) ⇔ dist_eucl(p.g, q.g) ≤ δ
[Figure: a particle p.x and its topological neighbours, i.e. the members of the population within Euclidean distance δ of it]
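Read directly, the definition says a particle q is a neighbour of p whenever their positions lie within Euclidean distance δ; a sketch, assuming particles carry a position attribute g as on the previous slide:

import numpy as np

# Neighbourhood N(p): all q in the population whose position lies within
# Euclidean distance delta of p's position (this includes p itself).
def neighbourhood(p, population, delta):
    return [q for q in population if np.linalg.norm(p.g - q.g) <= delta]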
12. Basic PSO algorithm
• New Velocity
vi(k+1) = vi(k) + γ1i(pi – xi(k)) + γ2i(G – xi(k))
• New Position
xi(k + 1) = xi(k) + vi(k + 1)
i – particle index
k – discrete time index
vi – velocity of ith particle
xi – position of ith particle
pi – best position found by ith particle (personal best)
G – best position found by swarm (global best, best of personal bests)
γ1i, γ2i – random numbers on the interval [0,1] applied to the ith particle
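The two update equations translate directly into vectorized code; drawing γ1i and γ2i per dimension, as done below, is a common convention rather than something the slide specifies:

import numpy as np

# Basic PSO update for the whole swarm.
# positions, velocities, pbest: (n, d) arrays; gbest: (d,) array.
def basic_pso_step(positions, velocities, pbest, gbest):
    gamma1 = np.random.uniform(size=positions.shape)   # γ1i in [0, 1]
    gamma2 = np.random.uniform(size=positions.shape)   # γ2i in [0, 1]
    velocities = velocities + gamma1 * (pbest - positions) + gamma2 * (gbest - positions)
    positions = positions + velocities                  # x_i(k+1) = x_i(k) + v_i(k+1)
    return positions, velocities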
13. The Common PSO Algorithm
vi(k+1) = φ(k)vi(k) + α1[γ1i(pi – xi(k))] + α2[γ2i(G – xi(k))]
φ - Inertia function
α1,2– Acceleration constants
As training progresses with a linearly decreasing inertia function, the
influence of past velocity becomes smaller.
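A sketch of this common form: a linearly decreasing inertia weight φ(k) and acceleration constants α1, α2. The 0.9 → 0.4 range and the value 2.0 are conventional defaults, not fixed by the slide:

import numpy as np

# Linearly decreasing inertia weight φ(k) (0.9 -> 0.4 is a conventional choice).
def inertia(k, max_iter, w_start=0.9, w_end=0.4):
    return w_start - (w_start - w_end) * k / max_iter

def common_pso_step(k, max_iter, positions, velocities, pbest, gbest,
                    alpha1=2.0, alpha2=2.0):
    gamma1 = np.random.uniform(size=positions.shape)
    gamma2 = np.random.uniform(size=positions.shape)
    velocities = (inertia(k, max_iter) * velocities
                  + alpha1 * gamma1 * (pbest - positions)
                  + alpha2 * gamma2 * (gbest - positions))
    return positions + velocities, velocities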
14. Pseudocode
For each particle
    Initialize particle
End For
Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pBest) in history
            Set current value as the new pBest
        End If
    End For
    Choose the particle with the best fitness value of all the particles as the gBest
    For each particle
        Calculate particle velocity according to the velocity equation
        Update particle position according to the position equation
    End For
While maximum iterations or minimum error criterion is not attained
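The pseudocode maps onto a compact, runnable Python sketch; the sphere fitness function, swarm size, bounds and PSO constants below are illustrative assumptions:

import numpy as np

def pso(fitness, n_particles=30, n_dim=5, max_iter=200,
        bounds=(-10.0, 10.0), w=0.7, alpha1=1.5, alpha2=1.5):
    # Initialize particles with random positions and velocities.
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, n_dim))
    v = np.random.uniform(-(hi - lo), hi - lo, (n_particles, n_dim))
    pbest = x.copy()
    pbest_fit = np.apply_along_axis(fitness, 1, x)
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for k in range(max_iter):
        fit = np.apply_along_axis(fitness, 1, x)        # calculate fitness values
        better = fit < pbest_fit                         # better than pBest in history?
        pbest[better] = x[better]                        # set new pBest
        pbest_fit[better] = fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()       # best of all particles -> gBest
        g1 = np.random.uniform(size=x.shape)             # velocity and position update
        g2 = np.random.uniform(size=x.shape)
        v = w * v + alpha1 * g1 * (pbest - x) + alpha2 * g2 * (gbest - x)
        x = x + v
    return gbest, float(pbest_fit.min())

# Example usage with the sphere function as an assumed benchmark.
best_pos, best_fit = pso(lambda z: float(np.sum(z ** 2)))
print(best_pos, best_fit)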
15. New algorithms
• A Modified PSO Structure Resulting in High
Exploration Ability With Convergence
Guaranteed (Chen & Li, 2007)
▫ Decreasing coefficient to the updating principle
• The Generalized PSO: A New Door to PSO
Evolution (Martinez & Gonzalo, 2008)
▫ GPSO is derived from a continuous version of PSO
adopting a time step different from unity
18. Papers / Books
• The Generalized PSO: A New Door to PSO
Evolution (Martinez & Gonzalo, 2008)
• A Modified PSO Structure Resulting in High
Exploration Ability With Convergence
Guaranteed (Chen & Li, 2007)
• Emergent Social Structures in Cultural
Algorithms (Reynolds, Peng & Whallon, 2005)
• Global Optimization Algorithms, Theory and
Application (Thomas Weise, 2008)