2. Definition of optimization problems
An optimization problem seeks to find the largest (or smallest)
value of a quantity (such as maximum revenue or minimum surface
area) subject to certain constraints.
An optimization problem can usually be expressed as “find the
maximum (or minimum) value of some quantity Q under a certain
set of given conditions”.
3. Problems that can be modelled and solved by optimization
techniques
Scheduling Problems (production, airline, etc.)
Network Design Problems
Facility Location Problems
Inventory management
Transportation Problems
Minimum spanning tree problem
Shortest path problem
Maximum flow problem
Min-cost flow problem
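As a concrete instance of one entry in the list above, the shortest path problem can be solved with Dijkstra's algorithm. This is a minimal sketch on a hypothetical toy graph (the node names and weights are illustrative, not from the slides):

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbor, weight) pairs
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry, already improved
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Hypothetical network: weights could be travel times or link costs
graph = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(graph, "A"))  # A->C->B->D has total cost 1+2+1 = 4
```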
4. 1. Classical Optimization
Useful for finding the optimum solution, or the unconstrained
maxima or minima, of continuous and differentiable functions.
Analytical methods use differential calculus to locate
the optimum solution.
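A minimal sketch of the analytical approach on an illustrative function (my own example, not from the slides): for f(x) = x² − 4x + 5, setting f′(x) = 2x − 4 = 0 gives the stationary point x* = 2, and f″(x) = 2 > 0 confirms a minimum.

```python
# Classical (analytical) optimization of f(x) = x^2 - 4x + 5
def f(x):
    return x**2 - 4*x + 5

def f_prime(x):
    return 2*x - 4  # derivative obtained by differential calculus

x_star = 2.0  # root of f'(x) = 0
assert f_prime(x_star) == 0
print(x_star, f(x_star))  # minimum at x = 2, f(2) = 1
```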
5. Cont.…
Classical techniques have limited scope in practical applications,
as many practical problems involve objective functions that are
not continuous and/or differentiable.
They form the basis for most of the numerical techniques that have
evolved into the advanced methods suited to today’s practical
problems.
6. 2. Numerical Methods
Linear Program (LP)
studies the case in which the objective function (f) is linear and the set of
design variables (A) is specified using only linear equalities and inequalities.
7. Optimization Problem Types
Non-Linear Program (NLP)
studies the general case in which the objective function, the constraints, or
both contain nonlinear parts.
Convex problems are easy to solve.
Non-convex problems are harder, and finding the global optimum is not guaranteed.
8. Optimization Problem Types
Integer Programs (IP)
study linear programs in which some or all variables are constrained to
take on integer values.
Quadratic programming
allows the objective function to have quadratic terms, while the set (A) must be
specified with linear equalities and inequalities.
9. Optimization Problem Types
Stochastic Programming
studies the case in which some of the constraints depend on random variables
Dynamic programming
studies the case in which the optimization strategy is based on splitting the
problem into smaller sub-problems.
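The dynamic programming idea of splitting the problem into smaller sub-problems can be sketched on an illustrative minimum-coin-change task (my own example, not from the slides): the optimum for amount n reuses the optima for the smaller amounts n − c.

```python
from functools import lru_cache

def min_coins(coins, amount):
    # Dynamic programming: the answer for `n` is built from the
    # already-solved sub-problems for the smaller amounts `n - c`.
    @lru_cache(maxsize=None)
    def best(n):
        if n == 0:
            return 0
        candidates = [best(n - c) for c in coins if c <= n]
        return min(candidates) + 1 if candidates else float("inf")
    return best(amount)

print(min_coins((1, 3, 4), 6))  # 3 + 3 -> 2 coins
```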
10. 3. Advanced Methods
Swarm Intelligence (SI) based algorithms
Bio-inspired (not SI-based) algorithms
Physics- and chemistry-based algorithms
Others
12. Introduction to Optimization
Optimization can be defined as a mechanism through which the maximum or minimum value of a
given function or process can be found.
The function that we try to minimize or maximize is called the objective function.
Variables and parameters.
Statement of an optimization problem:
Minimize f(x)
subject to g(x) ≤ 0,
h(x) = 0.
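A minimal sketch of this canonical form on an illustrative one-dimensional instance (the function and constraint are my own assumptions, not from the slides): minimize f(x) = x² subject to g(x) = 1 − x ≤ 0, i.e. x ≥ 1, via a crude grid search over the feasible region.

```python
# Canonical form: minimize f(x) subject to g(x) <= 0
def f(x):
    return x**2          # objective function

def g(x):
    return 1 - x         # inequality constraint, feasible when g(x) <= 0

# Crude grid search (no equality constraint h in this toy instance)
candidates = [i / 100 for i in range(-300, 301)]
feasible = [x for x in candidates if g(x) <= 0]
x_best = min(feasible, key=f)
print(x_best, f(x_best))  # optimum sits on the constraint boundary x = 1
```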
13. Particle Swarm Optimization(PSO)
Inspired by the social behavior and dynamic movements, with communication, of insects,
birds and fish in nature.
14. Particle Swarm Optimization(PSO)
Uses a number of agents (particles) that constitute a
swarm moving around in the search space looking for the
best solution
Each particle in the search space adjusts its “flying” according to its
own flying experience as well as the flying experience of other
particles.
Each particle has three parameters: position, velocity, and previous
best position. The particle with the best fitness value is called the global best
position.
15. Contd..
Collection of flying particles (swarm) - Changing solutions
Search area - Possible solutions
Movement towards a promising area to reach the global optimum.
Each particle adjusts its travelling speed dynamically according to the flying
experiences of itself and its colleagues.
Each particle keeps track of:
its best solution, personal best, pbest.
the best value of any particle, global best, gbest.
Each particle modifies its position according to:
• its current position
• its current velocity
• the distance between its current position and pbest.
• the distance between its current position and gbest.
16. Algorithm - Parameters
f : Objective function
Xi: Position of the particle or agent.
Vi: Velocity of the particle or agent.
A: Population of agents.
W: Inertia weight.
C1: cognitive constant.
R1, R2: random numbers.
C2: social constant.
17. Algorithm - Steps
1. Create a ‘population’ of agents (particles) uniformly distributed over X.
2. Evaluate each particle’s position according to the objective function (say
y = f(x) = -x^2 + 5x + 20).
3. If a particle’s current position is better than its previous best position, update it.
4. Determine the best particle (according to the particles’ previous best positions).
18. Contd..
5. Update particles’ velocities:
Vi(t+1) = W·Vi(t) + C1·R1·(pbesti − Xi(t)) + C2·R2·(gbest − Xi(t))
6. Move particles to their new positions:
Xi(t+1) = Xi(t) + Vi(t+1)
7. Go to step 2 until stopping criteria are satisfied.
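The steps above can be sketched in Python, maximizing the slides’ example objective f(x) = -x² + 5x + 20 (maximum at x = 2.5). The bounds, swarm size, and the constants W = 0.7, C1 = C2 = 1.5 are illustrative assumptions, not values given in the slides:

```python
import random

random.seed(0)

def f(x):
    return -x**2 + 5*x + 20  # objective from step 2; maximum at x = 2.5

W, C1, C2 = 0.7, 1.5, 1.5    # inertia weight, cognitive and social constants
N, ITERS = 20, 100           # swarm size and iteration budget (assumed)
LO, HI = -10.0, 10.0         # search space bounds (assumed)

x = [random.uniform(LO, HI) for _ in range(N)]   # step 1: positions
v = [0.0] * N                                    # velocities
pbest = x[:]                                     # personal bests
gbest = max(pbest, key=f)                        # global best

for _ in range(ITERS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        # step 5: velocity update (inertia + personal + social influence)
        v[i] = W*v[i] + C1*r1*(pbest[i] - x[i]) + C2*r2*(gbest - x[i])
        # step 6: move, clamped to the search space
        x[i] = min(max(x[i] + v[i], LO), HI)
        # step 3: update personal best
        if f(x[i]) > f(pbest[i]):
            pbest[i] = x[i]
    # step 4: update global best
    gbest = max(pbest, key=f)

print(round(gbest, 3), round(f(gbest), 3))  # ≈ 2.5 and 26.25
```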
19. Contd…
Particle’s velocity has three components:
1. Inertia: makes the particle move in the same direction and
with the same velocity.
2. Personal influence: improves the individual; makes the particle return
to a previous position better than the current one; conservative.
3. Social influence: makes the particle follow the best neighbors’
direction.
20. Acceleration coefficients
• When c1 = c2 = 0, all particles continue flying at their current speed until they hit the search space’s boundary.
The velocity update equation reduces to:
vij(t+1) = vij(t)
• When c1 > 0 and c2 = 0, all particles are independent. The velocity update equation becomes:
vij(t+1) = vij(t) + c1·r1(t)·(Pbest,ij(t) − xij(t))
• When c1 = 0 and c2 > 0, all particles are attracted to a single point (gbest) in the entire swarm, and
the velocity update becomes:
vij(t+1) = vij(t) + c2·r2(t)·(gbest,j(t) − xij(t))
• When c1 = c2, all particles are attracted towards the average of pbest and gbest.
29. Grey wolf optimizer (GWO)
• The social hierarchy consists of four levels as
follow.
•The first level is called Alpha (α). The alpha
wolves are the leaders of the pack, and they are
a male and a female.
•They are responsible for making decisions
about hunting, time to walk, sleeping place and
so on.
30. Grey wolf optimizer (GWO)
• The alpha wolf is considered the dominant
wolf in the pack and all his/her orders should
be followed by the pack members.
Social hierarchy of grey wolf
31. Grey wolf optimizer (GWO)
•The second level is called Beta (β).
•The betas are subordinate wolves that help
the alpha in decision making.
•The beta wolf can be either male or female, and
it is considered the best candidate to be the alpha
when the alpha passes away or becomes very
old.
•The beta reinforces the alpha’s commands
throughout the pack and gives feedback to the
alpha.
32. Grey wolf optimizer (GWO)
• The third level is called Delta (δ).
• The delta wolves are not alpha or beta wolves,
and they are called subordinates.
•Delta wolves have to submit to the alpha and
beta, but they dominate the omega (the lowest
level in the wolves’ social hierarchy).
33. Grey wolf optimizer (GWO)(History and main idea)
The fourth (lowest) level is called Omega (ω).
•The omega wolves are considered the
scapegoats in the pack; they have to submit to
all the other dominant wolves.
•They may seem to be unimportant individuals
in the pack, and they are the last wolves
allowed to eat.
34. Social hierarchy of grey wolf
• In the grey wolf optimizer (GWO), we
consider the fittest solution as the alpha (α), and
the second and third fittest solutions are
named beta (β) and delta (δ), respectively.
•The rest of the solutions are considered omega (ω).
•In the GWO algorithm, the hunting is guided by
α, β and δ.
• The ω solutions follow these three wolves.
35. Grey wolf encircling prey
•During the hunting, the grey wolves encircle
prey.
•The mathematical model of the encircling
behavior is presented in the following
equations.
36. Grey wolf encircling prey (Cont.)
D = |C · Xp(t) − X(t)|
X(t + 1) = Xp(t) − A · D
Where t is the current iteration, A and C are
coefficient vectors, Xp is the position vector of
the prey, and X indicates the position vector of
a grey wolf.
•The vectors A and C are calculated as follows:
A = 2a · r1 − a
C = 2 · r2
Where the components of a are linearly decreased
from 2 to 0 over the course of iterations, and r1,
r2 are random vectors in [0, 1].
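The coefficient vectors described above can be sketched in Python for a single dimension; the iteration values passed in at the end are illustrative:

```python
import random

def coefficients(t, max_iter):
    # a decreases linearly from 2 to 0 over the iterations
    a = 2 * (1 - t / max_iter)
    r1, r2 = random.random(), random.random()  # random values in [0, 1]
    A = 2 * a * r1 - a   # A = 2a*r1 - a
    C = 2 * r2           # C = 2*r2, so C lies in [0, 2]
    return a, A, C

a, A, C = coefficients(t=10, max_iter=50)
print(a, A, C)
```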
37. Grey wolf Hunting
•The hunting operation is usually guided by
the alpha (α).
•The beta and delta might participate in
hunting occasionally.
•In the mathematical model of the hunting
behavior of grey wolves, we assume the alpha,
beta and delta have better knowledge about
the potential location of the prey.
•The first three best solutions are saved, and the
other agents are obliged to update their positions
according to the positions of the best search
agents, as shown in the following equations:
Dα = |C1 · Xα − X|, Dβ = |C2 · Xβ − X|, Dδ = |C3 · Xδ − X|
X1 = Xα − A1 · Dα, X2 = Xβ − A2 · Dβ, X3 = Xδ − A3 · Dδ
X(t + 1) = (X1 + X2 + X3) / 3
39. Attacking prey (exploitation)
•The grey wolves finish the hunt by attacking the
prey when it stops moving.
•The vector A takes random values in the interval
[-2a, 2a], where a is decreased from 2 to 0 over
the course of iterations.
When |A| < 1, the wolves attack the
prey, which represents an exploitation process.
40. Search for prey (exploration)
•The exploration process in GWO is driven by
the positions of α, β and δ, which diverge
from each other to search for prey and
converge to attack prey.
•The exploration process is modeled
mathematically by utilizing A with random
values greater than 1 or less than -1 to oblige
the search agent to diverge from the prey.
When |A| > 1, the wolves are forced to
diverge from the prey to find fitter prey.
41. Example (Unconstrained problem): Find the minimum of the function y = (x − 3)^2
by using the GWO algorithm with the following parameters:
r1a: 0.273 r1b: 0.778 r1d: 0.222 MaxIter: 50
r2a: 0.718 r2b: 0.081 r2d: 0.204
Use initial positions [6.6506, 9.0463, −6.6383, −4.032]
Solution:
a = 2 (1 − t / MaxIter),
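The first GWO iteration of this example can be sketched in Python, assuming the standard encircling and hunting equations from the earlier slides, with t = 0 so that a = 2(1 − 0/50) = 2:

```python
def f(x):
    return (x - 3) ** 2   # objective to minimize

# Given parameters from the example
r1 = {"a": 0.273, "b": 0.778, "d": 0.222}
r2 = {"a": 0.718, "b": 0.081, "d": 0.204}
max_iter, t = 50, 0
a = 2 * (1 - t / max_iter)          # a = 2 at the first iteration

wolves = [6.6506, 9.0463, -6.6383, -4.032]

# Rank the solutions: alpha = fittest, beta = second, delta = third
alpha, beta, delta = sorted(wolves, key=f)[:3]

A = {k: 2 * a * r1[k] - a for k in r1}   # A = 2a*r1 - a
C = {k: 2 * r2[k] for k in r2}           # C = 2*r2

def step(x):
    # X(t+1) = (X1 + X2 + X3) / 3, guided by alpha, beta and delta
    X1 = alpha - A["a"] * abs(C["a"] * alpha - x)
    X2 = beta  - A["b"] * abs(C["b"] * beta  - x)
    X3 = delta - A["d"] * abs(C["d"] * delta - x)
    return (X1 + X2 + X3) / 3

wolves = [step(x) for x in wolves]
print([round(x, 3) for x in wolves])
```

With the given positions, 6.6506 is the fittest solution and becomes the alpha; 9.0463 and −4.032 become beta and delta.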