Nature-Inspired Metaheuristic Algorithms
for Optimization and Computational Intelligence

Xin-She Yang

National Physical Laboratory, UK

@ FedCSIS 2011
Intro

Computational science is now the third paradigm of science,
complementing theory and experiment.
    – Ken Wilson (Cornell University), Nobel Laureate

All models are wrong, but some are useful.
    – George Box, Statistician

All algorithms perform equally well on average over all possible
functions. Not quite! (more later)
    – No-free-lunch theorems (Wolpert & Macready)
Overview

Part I
    Introduction
    Metaheuristic Algorithms
    Monte Carlo and Markov Chains
    Algorithm Analysis

Part II
    Exploration & Exploitation
    Dealing with Constraints
    Applications
    Discussions & Bibliography
A Perfect Algorithm

What is the best relationship among E, m and c?

Initial state: m, E, c  =⇒  (run the perfect algorithm)  =⇒  E = mc^2

Steepest Descent  =⇒  which curve gives the fastest descent (the
brachistochrone problem)?

    \min t = \int_0^d \frac{1}{v}\, ds
           = \int_0^d \sqrt{\frac{1 + y'^2}{2g\,[h - y(x)]}}\, dx

=⇒  the solution is a cycloid:

    x = \frac{A}{2}(\theta - \sin\theta), \qquad
    y = h - \frac{A}{2}(1 - \cos\theta)
Computing in Reality

A Problem & Problem Solvers
            ⇓
Mathematical/Numerical Models
            ⇓
Computer & Algorithms & Programming
            ⇓
Validation
            ⇓
Results
What is an Algorithm?

Essence of an Optimization Algorithm
To move to a new, better point x_{i+1} from an existing known
location x_i.

[Figure: a search path x_1 → x_2 → ... → x_i → x_{i+1}?]

Population-based algorithms use multiple, interacting paths.

Different algorithms use different strategies/approaches to
generate these moves!
Optimization is Like Treasure Hunting

How do you find a hidden treasure of 1 million dollars?
What is your best strategy?
Optimization Algorithms

Deterministic
    Newton's method (1669, published in 1711), Newton-Raphson
    (1690), hill-climbing/steepest descent (Cauchy 1847),
    least-squares (Gauss 1795),
    linear programming (Dantzig 1947), conjugate gradient
    (Lanczos et al. 1952), interior-point method (Karmarkar 1984), etc.

Stochastic/Metaheuristic


Stochastic/Metaheuristic

               Genetic algorithms (1960s/1970s), evolutionary strategy
               (Rechenberg & Swefel 1960s), evolutionary programming
               (Fogel et al. 1960s).
               Simulated annealing (Kirkpatrick et al. 1983), Tabu search
               (Glover 1980s), ant colony optimization (Dorigo 1992),
               genetic programming (Koza 1992), particle swarm
               optimization (Kennedy & Eberhart 1995), differential
               evolution (Storn & Price 1996/1997),
               harmony search (Geem et al. 2001), honeybee algorithm
               (Nakrani & Tovey 2004), ..., firefly algorithm (Yang 2008),
               cuckoo search (Yang & Deb 2009), ...


Xin-She Yang                                                                                             FedCSIS2011
Metaheuristics and Computational Intelligence
Steepest Descent/Hill Climbing

Gradient-Based Methods
Use gradient/derivative information – very efficient for local search.
Newton's Method

    x_{n+1} = x_n - H^{-1} \nabla f,

where H is the Hessian matrix

    H = \begin{pmatrix}
          \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\
          \vdots & \ddots & \vdots \\
          \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2}
        \end{pmatrix}.

Quasi-Newton
If H is replaced by the identity matrix I, we have

    x_{n+1} = x_n - \alpha I \nabla f(x_n),

where α controls the step length.

Generation of new moves by gradient.

Steepest Descent Method (Cauchy 1847, Riemann 1863)

From the Taylor expansion of f(x) about x^{(n)}, we have

    f(x^{(n+1)}) = f(x^{(n)} + \Delta s)
                 \approx f(x^{(n)}) + (\nabla f(x^{(n)}))^T \Delta s,

where \Delta s = x^{(n+1)} - x^{(n)} is the increment vector.
To decrease f, we require

    f(x^{(n)} + \Delta s) - f(x^{(n)}) = (\nabla f)^T \Delta s < 0,

and the steepest-descent choice is

    \Delta s = -\alpha \nabla f(x^{(n)}),

where α > 0 is the step size.
In the case of finding maxima, this method is often referred to as
hill-climbing.
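A bare-bones Python sketch of this update, x_{n+1} = x_n - α∇f(x_n) with
a fixed step size; the test function and parameter values are
illustrative assumptions:

import numpy as np

def steepest_descent(grad_f, x0, alpha=0.1, tol=1e-8, max_iter=1000):
    # Repeatedly step against the gradient: Delta s = -alpha * grad f.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:   # stop when the gradient (almost) vanishes
            break
        x = x - alpha * g
    return x

# Usage: minimize f(x) = (x1 - 1)^2 + 2 (x2 + 2)^2.
x_min = steepest_descent(lambda x: np.array([2*(x[0] - 1), 4*(x[1] + 2)]),
                         x0=[0.0, 0.0])
print(x_min)   # approaches (1, -2)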

Conjugate Gradient (CG) Method

The CG method belongs to the Krylov subspace iteration methods. It was
pioneered by Magnus Hestenes, Eduard Stiefel and Cornelius Lanczos in
the 1950s, and was named one of the top 10 algorithms of the 20th
century.

A linear system with a symmetric positive definite matrix A,

    A u = b,

is equivalent to minimizing the function f(u):

    f(u) = \frac{1}{2} u^T A u - b^T u + v,

where v is a constant vector and can be taken to be zero. We can
easily see that \nabla f(u) = 0 leads to A u = b.
CG

The theory behind these iterative methods is closely related to the
Krylov subspace K_n spanned by A and b, defined by

    K_n(A, b) = \{ Ib, Ab, A^2 b, ..., A^{n-1} b \},

where A^0 = I.
If we use an iterative procedure to obtain the approximate solution
u_n to A u = b at the nth iteration, the residual is given by

    r_n = b - A u_n,

which is essentially the negative gradient: r_n = -\nabla f(u_n).
The search direction vector in the conjugate gradient method is
subsequently determined by

    d_{n+1} = r_n - \frac{d_n^T A r_n}{d_n^T A d_n}\, d_n.

The solution often starts with an initial guess u_0 at n = 0, and
proceeds iteratively. The above steps can compactly be written as

    u_{n+1} = u_n + \alpha_n d_n, \qquad r_{n+1} = r_n - \alpha_n A d_n,

and

    d_{n+1} = r_{n+1} + \beta_n d_n,

where

    \alpha_n = \frac{r_n^T r_n}{d_n^T A d_n}, \qquad
    \beta_n  = \frac{r_{n+1}^T r_{n+1}}{r_n^T r_n}.

Iterations stop when a prescribed accuracy is reached.
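The updates above translate almost line by line into code; a minimal
sketch (the test system at the end is an illustrative assumption):

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter or n
    u = np.zeros(n)          # initial guess u_0 = 0
    r = b - A @ u            # residual = negative gradient of f(u)
    d = r.copy()             # first search direction d_0 = r_0
    for _ in range(max_iter):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # step length alpha_n
        u = u + alpha * d                 # u_{n+1} = u_n + alpha_n d_n
        r_new = r - alpha * Ad            # r_{n+1} = r_n - alpha_n A d_n
        if np.linalg.norm(r_new) < tol:   # stop at the prescribed accuracy
            break
        beta = (r_new @ r_new) / (r @ r)  # beta_n
        d = r_new + beta * d              # d_{n+1} = r_{n+1} + beta_n d_n
        r = r_new
    return u

# Usage: solve an assumed small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # matches np.linalg.solve(A, b)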
Gradient-free Methods

Gradient-based methods
Require derivative information; not suitable for problems with
discontinuities.

Gradient-free or derivative-free methods
BFGS, downhill simplex, trust-region, SQP, ...
Nelder-Mead Downhill Simplex Method

The Nelder-Mead method is a downhill simplex algorithm, first
developed by J. A. Nelder and R. Mead in 1965.

A Simplex
In n-dimensional space, a simplex, which is a generalization of a
triangle on a plane, is the convex hull of n + 1 distinct points.
For simplicity, a simplex in n-dimensional space is referred to as
an n-simplex.

[Figure: example simplexes (a), (b), (c)]
Downhill Simplex Method

[Figure: simplex moves about the centroid x̄ – reflection (x_r),
expansion (x_e) and contraction (x_c) of the worst point x_{n+1}]

The first step is to rank and re-order the vertex values

    f(x_1) ≤ f(x_2) ≤ ... ≤ f(x_{n+1}),

at x_1, x_2, ..., x_{n+1}, respectively. (Wikipedia animation)
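For practical use, SciPy ships an implementation of this method; a
minimal usage sketch on an assumed standard test function:

import numpy as np
from scipy.optimize import minimize

def f(x):
    # Rosenbrock function, chosen here purely for illustration.
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

res = minimize(f, x0=np.array([-1.0, 2.0]), method="Nelder-Mead")
print(res.x)   # approaches the minimum at (1, 1) using function values only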

Metaheuristic

Most are nature-inspired, mimicking certain successful features in
nature.

    Simulated annealing
    Genetic algorithms
    Ant and bee algorithms
    Particle swarm optimization
    Firefly algorithm and cuckoo search
    Harmony search ...
Simulated Annealing

Metal annealing to increase strength =⇒ simulated annealing.

Probabilistic move: p ∝ exp[−E/(k_B T)],
where k_B is the Boltzmann constant (e.g., k_B = 1), T the
temperature, and E the energy.

E ∝ f(x), and T = T_0 α^t is the cooling schedule, with 0 < α < 1.
As T → 0, p → 0, and the method reduces to hill climbing.

This is essentially a Markov chain.
Generation of new moves by Markov chain.
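A minimal simulated-annealing sketch for a one-dimensional objective,
assuming the geometric cooling schedule T = T_0 α^t above; the test
function and parameter values are illustrative:

import math, random

def simulated_annealing(f, x0, T0=1.0, alpha=0.95, steps=1000, sigma=0.5):
    # Minimize f by accepting worse moves with probability exp(-dE/T),
    # while the temperature follows the cooling schedule T = T0 * alpha^t.
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        x_new = x + random.gauss(0.0, sigma)      # random trial move
        fx_new = f(x_new)
        dE = fx_new - fx                           # E is proportional to f(x)
        if dE < 0 or random.random() < math.exp(-dE / T):
            x, fx = x_new, fx_new
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha                                 # as T -> 0, this becomes hill climbing
    return best, fbest

# Usage: a simple multimodal test function.
print(simulated_annealing(lambda x: x*x + 4*math.sin(5*x), x0=2.0))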
An Example

[Figure]
Genetic Algorithms

[Figures: crossover and mutation]

Generation of new solutions by crossover, mutation and elitism.
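A minimal binary-GA sketch with one-point crossover, bit-flip mutation
and elitism; the representation, selection scheme and parameter values
are illustrative assumptions:

import random

def genetic_algorithm(fitness, L=16, n_pop=20, p_mut=0.05, generations=100):
    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(n_pop)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        new_pop = ranked[:2]                            # elitism: keep the 2 best
        while len(new_pop) < n_pop:
            p1, p2 = random.sample(ranked[:n_pop // 2], 2)  # parents from the better half
            cut = random.randint(1, L - 1)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < p_mut else b
                     for b in child]                    # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Usage: maximize the number of 1-bits (the "one-max" toy problem).
print(genetic_algorithm(lambda c: sum(c)))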

Swarm Intelligence

Ants, bees, birds, fish ...
Simple rules lead to complex behaviour.
Cuckoo Search

Local random walk:

    x_i^{t+1} = x_i^t + s \otimes H(p_a - \epsilon) \otimes (x_j^t - x_k^t),

where x_i, x_j, x_k are 3 different solutions, H(u) is a Heaviside
function, \epsilon is a random number drawn from a uniform
distribution, and s is the step size.

Global random walk via Lévy flights:

    x_i^{t+1} = x_i^t + \alpha L(s, \lambda), \qquad
    L(s, \lambda) = \frac{\lambda \Gamma(\lambda) \sin(\pi\lambda/2)}{\pi}\,
                    \frac{1}{s^{1+\lambda}} \quad (s \gg s_0).

Generation of new moves by Lévy flights, random walk and elitism.
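A rough sketch of the two move types in Python. Fitness evaluation and
the greedy replacement/elitism step are omitted, and the per-dimension
Heaviside mask and Mantegna's algorithm for the Lévy steps are
modelling assumptions here, not prescribed by the slides:

import numpy as np
from math import gamma, pi, sin

def levy_step(lam, size):
    # Mantegna's algorithm: steps with heavy tails ~ 1 / s^(1 + lam).
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0.0, sigma, size)
    v = np.random.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / lam)

def cuckoo_moves(X, pa=0.25, s=0.01, alpha=0.01, lam=1.5):
    # One round of the two moves for a population X of shape (n, d).
    n, d = X.shape
    # Global random walk via Levy flights: x_i <- x_i + alpha * L(s, lam).
    X = X + alpha * levy_step(lam, (n, d))
    # Local random walk: x_i <- x_i + s * H(pa - eps) * (x_j - x_k),
    # with H applied here as a per-dimension 0/1 mask.
    j, k = np.random.permutation(n), np.random.permutation(n)
    H = (np.random.rand(n, d) < pa).astype(float)
    return X + s * H * (X[j] - X[k])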
Monte Carlo Methods

Almost everyone has used Monte Carlo methods in some way ...

    Measuring temperatures, choosing a product, ...
    Tasting soup, wine ...
Markov Chains

Random walk – a drunkard's walk:

    u_{t+1} = \mu + u_t + w_t,

where w_t is a random variable and \mu is the drift.
For example, w_t \sim N(0, \sigma^2) (Gaussian).

[Figures: sample paths of a 1-D random walk and a 2-D random walk]
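Simulating this chain takes a few lines; the drift and noise values
below are illustrative:

import numpy as np

def random_walk(steps=500, mu=0.0, sigma=1.0, u0=0.0, seed=None):
    # u_{t+1} = mu + u_t + w_t with w_t ~ N(0, sigma^2):
    # the path is the cumulative sum of drifted Gaussian increments.
    rng = np.random.default_rng(seed)
    return u0 + np.cumsum(mu + rng.normal(0.0, sigma, steps))

path = random_walk(steps=500, mu=0.02)   # a walk with slight positive drift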




Markov Chains

Markov chain: the next state depends only on the current state and
the transition probability:

    P(i, j) \equiv P(V_{t+1} = S_j \,|\, V_0 = S_p, ..., V_t = S_i)
                 = P(V_{t+1} = S_j \,|\, V_t = S_i).

Detailed balance: P_{ij} \pi_i^* = P_{ji} \pi_j^*, where \pi^* is the
stationary probability distribution.

Example: Brownian motion

    u_{i+1} = \mu + u_i + \epsilon_i, \qquad \epsilon_i \sim N(0, \sigma^2).
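A small numerical illustration: for an assumed 3-state transition
matrix P, iterating π ← πP converges to the stationary distribution π*
satisfying π* = π*P:

import numpy as np

P = np.array([[0.5, 0.3, 0.2],     # rows sum to 1: P[i, j] = P(i -> j)
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

pi = np.full(3, 1/3)               # any initial distribution works
for _ in range(200):
    pi = pi @ P                    # pi_{k+1} = pi_k P
print(pi)                          # converges to pi* with pi* = pi* P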


Markov Chains

Example: Monopoly (board games) – the movement of pieces around the
board forms a Markov chain.

[Figure: Monopoly board; animation]
Markov Chain Monte Carlo

Landmarks: Monte Carlo methods (1930s, 1945, from the 1950s), e.g.,
the Metropolis algorithm (1953) and Metropolis-Hastings (1970).

Markov chain Monte Carlo (MCMC) methods – a class of methods.

MCMC really took off in the 1990s, and is now applied to a wide range
of areas: physics, Bayesian statistics, climate change, machine
learning, finance, economics, medicine, biology, materials and
engineering ...
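A compact sketch of the Metropolis algorithm with a symmetric Gaussian
proposal, sampling an assumed standard-normal target for illustration:

import math, random

def metropolis(log_p, x0=0.0, n=10000, step=1.0):
    # Random-walk Metropolis: propose x' ~ N(x, step^2) and accept
    # with probability min(1, p(x')/p(x)).
    x, samples = x0, []
    for _ in range(n):
        x_prop = x + random.gauss(0.0, step)
        a = log_p(x_prop) - log_p(x)          # log acceptance ratio
        if a >= 0 or random.random() < math.exp(a):
            x = x_prop
        samples.append(x)
    return samples

# Usage: sample from N(0, 1), whose log-density is -x^2/2 up to a constant.
samples = metropolis(lambda x: -0.5 * x * x)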




Convergence Behaviour

As the MCMC runs, convergence may be reached.

    When does a chain converge? When to stop the chain ... ?
    Are multiple chains better than a single chain?

[Figure: trace of an MCMC run]
Convergence Behaviour

[Figure: several chains started at different times (t = −n, ..., −2,
0, 2) converging to the same region]

Multiple, interacting chains
Multiple agents trace multiple, interacting Markov chains during the
Monte Carlo process.
Analysis

Classifications of Algorithms

    Trajectory-based: hill-climbing, simulated annealing, pattern
    search ...
    Population-based: genetic algorithms, ant & bee algorithms,
    artificial immune systems, differential evolution, PSO, HS,
    FA, CS, ...

Ways of Generating New Moves/Solutions

    Markov chains with different transition probabilities.
    Trajectory-based =⇒ a single Markov chain;
    population-based =⇒ multiple, interacting chains.
    Tabu search (with memory) =⇒ self-avoiding Markov chains.
Ergodicity

Markov Chains & Markov Processes

    Most theoretical studies use Markov chains/processes as a
    framework for convergence analysis.
    A Markov chain is said to be regular if some positive power k of
    the transition matrix P has only positive elements.
    A chain is called time-homogeneous if its transition matrix P is
    the same after each step; the transition probability after k
    steps then becomes P^k.
    A chain is ergodic or irreducible if it is aperiodic and positive
    recurrent – it is possible to reach every state from any state.
Convergence Behaviour

As k → ∞, we have the stationary probability distribution π:

    π = π P,  =⇒  thus the first eigenvalue is always 1.

Asymptotic convergence to optimality:

    \lim_{k \to \infty} \theta_k \to \theta^* \quad (with probability one).

The rate of convergence is usually determined by the second
eigenvalue, 0 < λ_2 < 1.

An algorithm can converge, yet not necessarily be efficient, as the
rate of convergence is typically low.
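A quick numerical check of these claims for an assumed 3-state
transition matrix: the leading eigenvalue is 1, and |λ_2| sets the
rate at which π_k = π_0 P^k approaches the stationary distribution:

import numpy as np

P = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.2, 0.2,  0.6 ]])

lams = sorted(np.linalg.eigvals(P), key=abs, reverse=True)
print(abs(lams[0]))   # ~1.0: corresponds to the stationary distribution
print(abs(lams[1]))   # second eigenvalue: error decays roughly like |lambda_2|^k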

Convergence of GA

Important studies by Aytug et al. (1996)^1, Aytug and Koehler
(2000)^2, Greenhalgh and Marshall (2000)^3, Gutjahr (2010)^4, etc.

The number of iterations t(ζ) in GA with a convergence probability
of ζ can be estimated by

    t(\zeta) \le \frac{\ln(1 - \zeta)}
                      {\ln\!\big(1 - \min[(1-\mu)^{Ln}, \mu^{Ln}]\big)},

where µ = mutation rate, L = string length, and n = population size.

^1 H. Aytug, S. Bhattacharrya and G. J. Koehler, A Markov chain analysis of
   genetic algorithms with power of 2 cardinality alphabets, Euro. J.
   Operational Research, 96, 195-201 (1996).
^2 H. Aytug and G. J. Koehler, New stopping criterion for genetic algorithms,
   Euro. J. Operational Research, 126, 662-674 (2000).
^3 D. Greenhalgh and S. Marshall, Convergence criteria for genetic algorithms,
   SIAM J. Computing, 30, 269-282 (2000).
^4 W. J. Gutjahr, Convergence analysis of metaheuristics, Annals of
   Information Systems, 10, 159-187 (2010).
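Evaluating the bound above numerically (the parameter values below are
illustrative) shows how quickly it grows with L·n – it is a very
conservative worst-case estimate:

import math

def ga_iteration_bound(zeta, mu, L, n):
    # t(zeta) <= ln(1 - zeta) / ln(1 - min[(1-mu)^(L n), mu^(L n)])
    p = min((1 - mu) ** (L * n), mu ** (L * n))
    return math.log(1 - zeta) / math.log1p(-p)   # log1p avoids underflow for tiny p

# e.g. 99% convergence probability, mutation rate 0.1,
# string length L = 4 and population size n = 2:
print(ga_iteration_bound(zeta=0.99, mu=0.1, L=4, n=2))   # ~4.6e8 iterations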
Intro   Classic Algorithms      Metaheuristic   Markov   Analysis    All and NFL    Constraints    Applications    Thanks

Multiobjective Metaheuristics


Multiobjective Metaheuristics
        Asymptotic convergence of metaheuristics for multiobjective
        optimization (Villalobos-Arias et al. 2005)^6

        The transition matrix P of a metaheuristic algorithm has a
        stationary distribution π such that

            |P^k_{ij} − π_j| ≤ (1 − ζ)^{k−1},   for all i, j,   (k = 1, 2, ...),

        where ζ is a function of the mutation probability µ, string length L
        and population size n. For example, ζ = 2^{nL} µ^{nL}, so µ < 0.5
        is required.

        Note: an algorithm satisfying this condition may not converge (for
        multiobjective optimization).
        However, an algorithm with elitism, obeying the above condition,
        does converge!
        ^6 M. Villalobos-Arias, C. A. Coello Coello and O. Hernández-Lerma, Asymptotic convergence of metaheuristics
        for multiobjective optimization problems (2005).
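        A small numerical illustration (my sketch; the parameter values are
        made up): given ζ, the geometric bound tells us how many iterations k
        bring |P^k_{ij} − π_j| below a tolerance ε; the answer is again
        astronomically large, echoing the low convergence rates noted above.

import math

def iterations_for_tolerance(zeta, eps):
    # (1 - zeta)^(k - 1) <= eps  =>  k >= 1 + ln(eps) / ln(1 - zeta);
    # log1p keeps precision when zeta is tiny.
    return math.ceil(1.0 + math.log(eps) / math.log1p(-zeta))

mu, n, L = 0.1, 4, 8                  # made-up mutation rate, population size, string length
zeta = (2.0 * mu) ** (n * L)          # the example choice zeta = 2^{nL} mu^{nL} from above
print(iterations_for_tolerance(zeta, eps=1e-3))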
Other results

        Limited results on convergence analysis exist, mostly for finite
        states/domains:
                ant colony optimization,
                generalized hill-climbers and simulated annealing,
                best-so-far convergence of cross-entropy optimization,
                the nested partition method, Tabu search, and
                of course, combinatorial optimization.
        However, infinite states/domains and continuous problems pose far
        more challenging tasks.
        Many, many open problems still need satisfactory answers.

Converged?
        ‘Converged’ often means the ‘best-so-far’ convergence, not
        necessarily convergence to the global optimum.
        In theory, a Markov chain can converge, but the number of
        iterations tends to be large.
        In practice, only a finite (hopefully small) number of generations is
        used; even if the algorithm converges, it may not reach the global
        optimum.

        How to avoid premature convergence:
               Equip an algorithm with the ability to escape a local optimum
               Increase the diversity of the solutions
               Use enough randomization at the right stage
               .... (unknown, new) ....




        Coffee Break (15 Minutes)




All and NFL



        So many algorithms – what are the common characteristics?

               What are the key components?
               How to use and balance different components?
               What controls the overall behaviour of an algorithm?




Exploration and Exploitation

        Characteristics of Metaheuristics
        Exploration and Exploitation, or Diversification and Intensification.

        Exploitation/Intensification
        Intensive local search, exploiting local information.
        E.g., hill-climbing.

        Exploration/Diversification
        Exploratory global search, using randomization/stochastic
        components. E.g., hill-climbing with random restart.
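        The contrast can be seen in a few lines of Python (a minimal sketch;
        the objective function and all parameters are made up): pure
        hill-climbing exploits a single basin of attraction, while random
        restarts add the exploration needed to escape poor local minima.

import math
import random

def f(x):
    # A made-up multimodal 1-D objective with many local minima.
    return x * x + 10.0 * math.sin(3.0 * x)

def hill_climb(x, step=0.05, iters=500):
    # Exploitation: accept only improving moves in a small neighbourhood.
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) < f(x):
            x = cand
    return x

def hill_climb_with_restarts(restarts=20):
    # Exploration: restart from random points; keep the best-so-far solution.
    return min((hill_climb(random.uniform(-5.0, 5.0)) for _ in range(restarts)), key=f)

x_best = hill_climb_with_restarts()
print(x_best, f(x_best))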


Summary

        [Figure: algorithms arranged on an exploration-exploitation plane.
        Uniform search sits at the exploration extreme; steepest descent,
        Newton-Raphson, Nelder-Mead and Tabu search sit at the exploitation
        extreme; genetic algorithms, simulated annealing (SA), ant/bee
        algorithms, PSO/EP/ES, the firefly algorithm (FA) and cuckoo search
        (CS) lie in between. Which balance is the best? Is there a free
        lunch?]
No-Free-Lunch (NFL) Theorems


        Algorithm Performance
        Any algorithm is as good/bad as random search, when averaged
        over all possible problems/functions.

        Finite domains
        No universally efficient algorithm!

        Any free taster or dessert?
        Yes and no. (more later)



NFL Theorems (Wolpert and Macready 1997)
               Search space is finite (though quite large), thus the space of
               possible “cost” values is also finite. The objective function is
               f : X → Y, with F = Y^X (the space of all possible problems).
               Assumptions: finite domain, closed under permutation (c.u.p.).
               For m iterations, the m distinct visited points form a
               time-ordered set d_m = {(d^x_m(1), d^y_m(1)), ..., (d^x_m(m), d^y_m(m))}.
               The performance of an algorithm a iterated m times on a cost
               function f is denoted by P(d^y_m | f, m, a).

        For any pair of algorithms a and b, the NFL theorem states

            Σ_f P(d^y_m | f, m, a) = Σ_f P(d^y_m | f, m, b).

        Any algorithm is as good (bad) as a random search!
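        The statement can be checked by brute force on a toy problem. Below
        is a minimal sketch (my illustration, not Wolpert and Macready's
        proof): on a tiny finite domain we enumerate all |Y|^|X| objective
        functions and compare two fixed search orders; their best-so-far
        histories, averaged over all functions, coincide exactly.

import itertools

X = [0, 1, 2]          # a tiny search space
Y = [0, 1]             # a tiny set of cost values

def best_so_far(order, f):
    # Best cost seen after each of the m = |X| evaluations.
    best, hist = float("inf"), []
    for x in order:
        best = min(best, f[x])
        hist.append(best)
    return hist

alg_a = [0, 1, 2]      # algorithm a: one fixed visiting order
alg_b = [2, 0, 1]      # algorithm b: a different fixed order

functions = [dict(zip(X, ys)) for ys in itertools.product(Y, repeat=len(X))]

def averaged(alg):
    hists = [best_so_far(alg, f) for f in functions]
    return [sum(step) / len(functions) for step in zip(*hists)]

print(averaged(alg_a))   # [0.5, 0.25, 0.125]
print(averaged(alg_b))   # identical: [0.5, 0.25, 0.125]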
Open Problems


               Framework: need to develop a unified framework for
               algorithmic analysis (e.g., convergence).
               Exploration and exploitation: what is the optimal balance
               between these two components? (50-50 or something else?)
               Performance measures: what are the best performance
               measures? Statistically? Why?
               Convergence: convergence analysis of algorithms for infinite,
               continuous domains requires systematic approaches.




More Open Problems


               Free lunches: unproved for infinite or continuous domains and
               for multiobjective optimization (possible free lunches!).
               What are the implications of the NFL theorems in practice?
               If free lunches exist, how do we find the best algorithm(s)?
               Knowledge: does problem-specific knowledge always help to find
               appropriate solutions? How can such knowledge be quantified?
               Intelligent algorithms: is there any practical way to design
               truly intelligent, self-evolving algorithms?




Constraints
        In describing optimization algorithms, we have not been concerned
        with constraints; the algorithms themselves can solve both
        unconstrained and, more often, constrained problems.

        The handling of constraints is an implementation issue, though
        incorrect or inefficient ways of dealing with constraints can reduce
        an algorithm's efficiency, or even result in wrong solutions.
        Methods of handling constraints:

               Direct methods
               Lagrange multipliers
               Barrier functions
               Penalty methods
Aims

        Either convert a constrained problem into an unconstrained one,
        or change the search space into a regular domain.

        Ease of programming and implementation
        Improve (or at least do not hinder) the efficiency of the chosen
        algorithm in implementation.

        Scalability
        The approach used should be able to deal with small, large and
        very large scale problems.


Common Approaches
        Direct methods
        Simple, but not versatile, and can be difficult to program.

        Lagrange multipliers
        Mainly for equality constraints.

        Barrier functions
        Very powerful and widely used in convex optimization.

        Penalty methods
        Simple and versatile, widely used.

        Others
Direct Methods
        Minimize f(x, y) = (x − 2)² + 4(y − 3)²,
        subject to −x + y ≤ 2, x + 2y ≤ 3.

        [Figure: the feasible region bounded by the lines −x + y = 2 and
        x + 2y = 3, with the constrained optimum marked on its boundary.]

        Direct methods: generate solutions/points inside the feasible region!
        (easy for rectangular regions)
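        One naive direct method is rejection sampling: generate random
        candidate points, discard those that violate a constraint, and keep
        the best of the rest. A minimal sketch (my illustration) for the
        problem above:

import random

def f(x, y):
    return (x - 2.0) ** 2 + 4.0 * (y - 3.0) ** 2

def feasible(x, y):
    # Both constraints: -x + y <= 2 and x + 2y <= 3.
    return (-x + y <= 2.0) and (x + 2.0 * y <= 3.0)

candidates = ((random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))
              for _ in range(100000))
best = min((p for p in candidates if feasible(*p)), key=lambda p: f(*p))
print(best, f(*best))

        This works well only when feasible points are easy to generate; for
        tight or oddly shaped regions most samples are wasted, hence the
        remark about rectangular regions.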
Method of Lagrange Multipliers
        Maximize f(x, y) = 10 − x² − (y − 2)², subject to x + 2y = 5.

        Defining a combined function Φ using a multiplier λ, we have

            Φ = 10 − x² − (y − 2)² + λ(x + 2y − 5).

        The optimality conditions are

            ∂Φ/∂x = −2x + λ = 0,   ∂Φ/∂y = −2(y − 2) + 2λ = 0,   ∂Φ/∂λ = x + 2y − 5 = 0,

        whose solutions are

            x = 1/5,   y = 12/5,   λ = 2/5,   =⇒   fmax = 49/5.
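        Since the three optimality conditions are linear in (x, y, λ), the
        worked solution can also be verified numerically (a minimal sketch,
        my illustration):

import numpy as np

# -2x        +  lam = 0
#       -2y  + 2lam = -4      (from -2(y - 2) + 2 lam = 0)
#   x  + 2y         = 5
A = np.array([[-2.0,  0.0, 1.0],
              [ 0.0, -2.0, 2.0],
              [ 1.0,  2.0, 0.0]])
b = np.array([0.0, -4.0, 5.0])

x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)                      # 0.2, 2.4, 0.4  ->  1/5, 12/5, 2/5
print(10.0 - x**2 - (y - 2.0)**2)     # 9.8 = 49/5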

Barrier Functions

        As an equality h(x) = 0 can be written as two inequalities h(x) ≤ 0
        and −h(x) ≤ 0, we only need to consider inequalities.

        For a general optimization problem:

            minimize f(x),   subject to g_i(x) ≤ 0   (i = 1, 2, ..., N),

        we can define an indicator (barrier) function

            I_−(u) = 0 if u ≤ 0,   I_−(u) = ∞ if u > 0.

        Not so easy to deal with numerically. Also discontinuous!

Logarithmic Barrier Functions

        A log barrier function is

            Ī_−(u) = −(1/t) log(−u),   u < 0,

        where t > 0 is an accuracy parameter (which can be very large).
        Then, the above minimization problem becomes

            minimize f(x) + Σ_{i=1}^{N} Ī_−(g_i(x)) = f(x) − (1/t) Σ_{i=1}^{N} log[−g_i(x)].

        This is an unconstrained problem and easy to implement!
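        A minimal sketch (my illustration, reusing the earlier constrained
        example; the optimizer is naive random sampling rather than a proper
        interior-point method, and the parameter values are made up):

import math
import random

def barrier_objective(x, y, t=100.0):
    # f(x, y) = (x - 2)^2 + 4(y - 3)^2 with constraints in g_i(x) <= 0 form.
    g = [-x + y - 2.0, x + 2.0 * y - 3.0]
    if any(gi >= 0.0 for gi in g):
        return float("inf")            # outside the strict interior
    return ((x - 2.0) ** 2 + 4.0 * (y - 3.0) ** 2
            - sum(math.log(-gi) for gi in g) / t)

points = ((random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))
          for _ in range(200000))
print(min(points, key=lambda p: barrier_objective(*p)))

        Larger t weakens the barrier and lets solutions approach the
        boundary more closely; interior-point methods exploit this by
        increasing t gradually.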


Penalty Methods
        For a nonlinear optimization problem with equality and inequality
        constraints,

            minimize_{x ∈ ℜ^n} f(x),   x = (x_1, ..., x_n)^T ∈ ℜ^n,
            subject to φ_i(x) = 0   (i = 1, ..., M),
                       ψ_j(x) ≤ 0   (j = 1, ..., N),

        the idea is to define a penalty function so that the constrained
        problem is transformed into an unconstrained problem. Now we define

            Π(x, µ_i, ν_j) = f(x) + Σ_{i=1}^{M} µ_i φ_i²(x) + Σ_{j=1}^{N} ν_j ψ_j²(x),

        where µ_i ≫ 1 and ν_j ≥ 0, which should be large enough,
        depending on the solution quality needed.
        In addition, for simplicity of implementation, we can use µ = µ_i for
        all i and ν = ν_j for all j. That is, we can use the simplified form

            Π(x, µ, ν) = f(x) + µ Σ_{i=1}^{M} Q_i[φ_i(x)] φ_i²(x) + ν Σ_{j=1}^{N} H_j[ψ_j(x)] ψ_j²(x).

        Here the barrier/indicator-like functions are

            H_j = 0 if ψ_j(x) ≤ 0,   H_j = 1 if ψ_j(x) > 0;
            Q_i = 0 if φ_i(x) = 0,   Q_i = 1 if φ_i(x) ≠ 0.

        In general, for most applications, µ and ν can be taken as 10^10 to
        10^15. We will use these values in most implementations.
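        A minimal sketch of this simplified penalty transform (my
        illustration; the example constraint and the crude grid search are
        placeholders):

def penalized(f, x, phis, psis, mu=1e10, nu=1e10):
    # phis: equality constraints phi_i(x) = 0; psis: inequalities psi_j(x) <= 0.
    value = f(x)
    value += mu * sum(p(x) ** 2 for p in phis if p(x) != 0.0)   # Q_i: on only when violated
    value += nu * sum(q(x) ** 2 for q in psis if q(x) > 0.0)    # H_j: on only when violated
    return value

# Usage: minimize (x - 2)^2 subject to x <= 1, by any unconstrained method.
f = lambda x: (x - 2.0) ** 2
psi = lambda x: x - 1.0                       # written as x - 1 <= 0
grid = (i * 0.001 for i in range(-2000, 2001))
print(min(grid, key=lambda x: penalized(f, x, [], [psi])))   # -> approx 1.0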


Pressure Vessel Design Optimization




        [Figure: schematic of a cylindrical pressure vessel with end caps:
        thicknesses d1 and d2, inner radius r, and cylinder length L.]
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
 
Bat Algorithm for Multi-objective Optimisation
Bat Algorithm for Multi-objective OptimisationBat Algorithm for Multi-objective Optimisation
Bat Algorithm for Multi-objective Optimisation
 
Are motorways rational from slime mould's point of view?
Are motorways rational from slime mould's point of view?Are motorways rational from slime mould's point of view?
Are motorways rational from slime mould's point of view?
 
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm
Review of Metaheuristics and Generalized Evolutionary Walk AlgorithmReview of Metaheuristics and Generalized Evolutionary Walk Algorithm
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm
 
Test Problems in Optimization
Test Problems in OptimizationTest Problems in Optimization
Test Problems in Optimization
 
Engineering Optimisation by Cuckoo Search
Engineering Optimisation by Cuckoo SearchEngineering Optimisation by Cuckoo Search
Engineering Optimisation by Cuckoo Search
 
A New Metaheuristic Bat-Inspired Algorithm
A New Metaheuristic Bat-Inspired AlgorithmA New Metaheuristic Bat-Inspired Algorithm
A New Metaheuristic Bat-Inspired Algorithm
 
Eagle Strategy Using Levy Walk and Firefly Algorithms For Stochastic Optimiza...
Eagle Strategy Using Levy Walk and Firefly Algorithms For Stochastic Optimiza...Eagle Strategy Using Levy Walk and Firefly Algorithms For Stochastic Optimiza...
Eagle Strategy Using Levy Walk and Firefly Algorithms For Stochastic Optimiza...
 
Fractals in Small-World Networks With Time Delay
Fractals in Small-World Networks With Time DelayFractals in Small-World Networks With Time Delay
Fractals in Small-World Networks With Time Delay
 

Último

Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Disha Kariya
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxAreebaZafar22
 
Making and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfMaking and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfChris Hunter
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Gardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterGardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterMateoGardella
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxVishalSingh1417
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingTeacherCyreneCayanan
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 
An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfSanaAli374401
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Shubhangi Sonawane
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxVishalSingh1417
 

Último (20)

Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Making and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfMaking and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdf
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Gardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterGardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch Letter
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdf
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 

Nature-inspired metaheuristic algorithms for optimization and computational intelligence

  • 1. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Nature-Inspired Metaheuristic Algorithms for Optimization and Computational Intelligence Xin-She Yang National Physical Laboratory, UK @ FedCSIS2011 Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 2. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Intro Intro Computational science is now the third paradigm of science, complementing theory and experiment. - Ken Wilson (Cornell University), Nobel Laureate. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 4. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Intro Intro Computational science is now the third paradigm of science, complementing theory and experiment. - Ken Wilson (Cornell University), Nobel Laureate. All models are wrong, but some are useful. - George Box, Statistician Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 5. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Intro Intro Computational science is now the third paradigm of science, complementing theory and experiment. - Ken Wilson (Cornell University), Nobel Laureate. All models are inaccurate, but some are useful. - George Box, Statistician Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 6. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Intro Intro Computational science is now the third paradigm of science, complementing theory and experiment. - Ken Wilson (Cornell University), Nobel Laureate. All models are inaccurate, but some are useful. - George Box, Statistician All algorithms perform equally well on average over all possible functions. - No-free-lunch theorems (Wolpert & Macready) Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 7. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Intro Intro Computational science is now the third paradigm of science, complementing theory and experiment. - Ken Wilson (Cornell University), Nobel Laureate. All models are inaccurate, but some are useful. - George Box, Statistician All algorithms perform equally well on average over all possible functions. How so? - No-free-lunch theorems (Wolpert & Macready) Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 8. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Intro Intro Computational science is now the third paradigm of science, complementing theory and experiment. - Ken Wilson (Cornell University), Nobel Laureate. All models are inaccurate, but some are useful. - George Box, Statistician All algorithms perform equally well on average over all possible functions. Not quite! (more later) - No-free-lunch theorems (Wolpert & Macready) Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 10. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Overview Overview Part I Introduction Metaheuristic Algorithms Monte Carlo and Markov Chains Algorithm Analysis Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 11. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Overview Overview Part I Introduction Metaheuristic Algorithms Monte Carlo and Markov Chains Algorithm Analysis Part II Exploration & Exploitation Dealing with Constraints Applications Discussions & Bibliography Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 12. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 14. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Initial state: m, E, c Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 15. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Initial state: m, E, c =⇒ Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 17. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Initial state: m, E, c =⇒ =⇒ $E = mc^2$ Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 18. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Initial state: m, E, c =⇒ =⇒ $E = mc^2$ Steepest Descent Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 20. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Initial state: m, E, c =⇒ =⇒ $E = mc^2$ Steepest Descent =⇒ $\min t = \int_0^d \frac{ds}{v} = \int_0^d \sqrt{\frac{1 + y'^2}{2g[h - y(x)]}}\, dx$ Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 21. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Initial state: m, E, c =⇒ =⇒ $E = mc^2$ Steepest Descent =⇒ $\min t = \int_0^d \frac{ds}{v} = \int_0^d \sqrt{\frac{1 + y'^2}{2g[h - y(x)]}}\, dx$ =⇒ Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 22. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Initial state: m, E, c =⇒ =⇒ $E = mc^2$ Steepest Descent =⇒ $\min t = \int_0^d \frac{ds}{v} = \int_0^d \sqrt{\frac{1 + y'^2}{2g[h - y(x)]}}\, dx$ =⇒ =⇒ Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 23. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks A Perfect Algorithm A Perfect Algorithm What is the best relationship among E, m and c? Initial state: m, E, c =⇒ =⇒ $E = mc^2$ Steepest Descent =⇒ $\min t = \int_0^d \frac{ds}{v} = \int_0^d \sqrt{\frac{1 + y'^2}{2g[h - y(x)]}}\, dx$ =⇒ =⇒ $x = \frac{A}{2}(\theta - \sin\theta), \; y = h - \frac{A}{2}(1 - \cos\theta)$ Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 24. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Computing in Reality Computing in Reality A Problem & Problem Solvers ⇓ Mathematical/Numerical Models ⇓ Computer & Algorithms & Programming ⇓ Validation ⇓ Results Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 25. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks What is an Algorithm? What is an Algorithm? Essence of an Optimization Algorithm To move to a new, better point $x_{i+1}$ from an existing known location $x_i$. [diagram: a search path through points $x_1, x_2, \ldots, x_i$] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 27. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks What is an Algorithm? What is an Algorithm? Essence of an Optimization Algorithm To move to a new, better point $x_{i+1}$ from an existing known location $x_i$. [diagram: the next move from $x_i$ to $x_{i+1}$ is yet to be determined] Population-based algorithms use multiple, interacting paths. Different algorithms =⇒ different strategies/approaches in generating these moves! Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 28. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Optimization is Like Treasure Hunting Optimization is Like Treasure Hunting How do you find a hidden treasure of 1 million dollars? What is your best strategy? Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 29. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Optimization Algorithms Optimization Algorithms Deterministic Newton’s method (1669, published in 1711), Newton-Raphson (1690), hill-climbing/steepest descent (Cauchy 1847), least-squares (Gauss 1795), linear programming (Dantzig 1947), conjugate gradient (Lanczos et al. 1952), interior-point method (Karmarkar 1984), etc. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 30. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Stochastic/Metaheuristic Stochastic/Metaheuristic Genetic algorithms (1960s/1970s), evolutionary strategy (Rechenberg & Schwefel 1960s), evolutionary programming (Fogel et al. 1960s). Simulated annealing (Kirkpatrick et al. 1983), Tabu search (Glover 1980s), ant colony optimization (Dorigo 1992), genetic programming (Koza 1992), particle swarm optimization (Kennedy & Eberhart 1995), differential evolution (Storn & Price 1996/1997), harmony search (Geem et al. 2001), honeybee algorithm (Nakrani & Tovey 2004), ..., firefly algorithm (Yang 2008), cuckoo search (Yang & Deb 2009), ... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 31. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Steepest Descent/Hill Climbing Steepest Descent/Hill Climbing Gradient-Based Methods Use gradient/derivative information – very efficient for local search. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
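To make the idea concrete, here is a minimal Python sketch of steepest descent with a fixed step length (the quadratic test function, step size and iteration count are illustrative choices, not from the slides):

    import numpy as np

    def steepest_descent(grad, x0, alpha=0.1, n_iter=100):
        # Repeatedly move against the local gradient with step length alpha.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            x = x - alpha * grad(x)
        return x

    # Test function f(x, y) = x^2 + 4y^2, gradient (2x, 8y); minimum at the origin.
    grad = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
    print(steepest_descent(grad, [3.0, -2.0]))  # -> approximately [0, 0]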
  • 37. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Newton's Method $x_{n+1} = x_n - H^{-1} \nabla f$, where the Hessian matrix is $H = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{pmatrix}$. Generation of new moves by gradient. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 38. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Newton's Method $x_{n+1} = x_n - H^{-1} \nabla f$, with the Hessian $H$ as above. Quasi-Newton If $H$ is replaced by $I$, we have $x_{n+1} = x_n - \alpha I \nabla f(x_n)$. Here $\alpha$ controls the step length. Generation of new moves by gradient. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
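A matching sketch of the full Newton step $x_{n+1} = x_n - H^{-1}\nabla f$, reusing the same illustrative quadratic (where the Hessian happens to be constant):

    import numpy as np

    def newton(grad, hess, x0, n_iter=10):
        # Full Newton step: solve H d = grad(x), then move x <- x - d.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            x = x - np.linalg.solve(hess(x), grad(x))
        return x

    # For f(x, y) = x^2 + 4y^2 the Hessian is the constant matrix diag(2, 8),
    # so a single Newton step lands exactly on the minimum.
    grad = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
    hess = lambda x: np.diag([2.0, 8.0])
    print(newton(grad, hess, [3.0, -2.0]))  # -> [0, 0]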
  • 39. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Steepest Descent Method (Cauchy 1847, Riemann 1863) Steepest Descent Method (Cauchy 1847, Riemann 1863) From the Taylor expansion of $f(x)$ about $x^{(n)}$, we have $f(x^{(n+1)}) = f(x^{(n)} + \Delta s) \approx f(x^{(n)}) + (\nabla f(x^{(n)}))^T \Delta s$, where $\Delta s = x^{(n+1)} - x^{(n)}$ is the increment vector. So $f(x^{(n)} + \Delta s) - f(x^{(n)}) = (\nabla f)^T \Delta s < 0$. Therefore, we have $\Delta s = -\alpha \nabla f(x^{(n)})$, where $\alpha > 0$ is the step size. In the case of finding maxima, this method is often referred to as hill-climbing. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 40. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method It belongs to the Krylov subspace iteration methods. The conjugate gradient method was pioneered by Magnus Hestenes, Eduard Stiefel and Cornelius Lanczos in the 1950s. It was named one of the top 10 algorithms of the 20th century. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 41. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method It belongs to the Krylov subspace iteration methods. The conjugate gradient method was pioneered by Magnus Hestenes, Eduard Stiefel and Cornelius Lanczos in the 1950s. It was named one of the top 10 algorithms of the 20th century. A linear system with a symmetric positive definite matrix $A$, $Au = b$, is equivalent to minimizing the function $f(u) = \frac{1}{2} u^T A u - b^T u + v$, where $v$ is a vector constant and can be taken to be zero. We can easily see that $\nabla f(u) = 0$ leads to $Au = b$. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 42. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks CG CG The theory behind these iterative methods is closely related to the Krylov subspace $K_n$ spanned by $A$ and $b$, defined by $K_n(A, b) = \{Ib, Ab, A^2 b, \ldots, A^{n-1} b\}$, where $A^0 = I$. If we use an iterative procedure to obtain the approximate solution $u_n$ to $Au = b$ at the $n$th iteration, the residual is given by $r_n = b - A u_n$, which is essentially the negative gradient $-\nabla f(u_n)$. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 43. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks The search direction vector in the conjugate gradient method is subsequently determined by $d_{n+1} = r_n - \frac{d_n^T A r_n}{d_n^T A d_n} d_n$. The solution often starts with an initial guess $u_0$ at $n = 0$, and proceeds iteratively. The above steps can compactly be written as $u_{n+1} = u_n + \alpha_n d_n$, $r_{n+1} = r_n - \alpha_n A d_n$, and $d_{n+1} = r_{n+1} + \beta_n d_n$, where $\alpha_n = \frac{r_n^T r_n}{d_n^T A d_n}$, $\beta_n = \frac{r_{n+1}^T r_{n+1}}{r_n^T r_n}$. Iterations stop when a prescribed accuracy is reached. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
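These update formulas translate almost line for line into NumPy; a sketch (the small test system and the stopping tolerance are illustrative):

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-12):
        # Solve A u = b for a symmetric positive definite matrix A.
        u = np.zeros_like(b)
        r = b - A @ u              # residual r_n = b - A u_n (negative gradient)
        d = r.copy()               # initial search direction
        while r @ r > tol:
            Ad = A @ d
            alpha = (r @ r) / (d @ Ad)
            u = u + alpha * d      # u_{n+1} = u_n + alpha_n d_n
            r_new = r - alpha * Ad # r_{n+1} = r_n - alpha_n A d_n
            beta = (r_new @ r_new) / (r @ r)
            d = r_new + beta * d   # d_{n+1} = r_{n+1} + beta_n d_n
            r = r_new
        return u

    A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))          # matches np.linalg.solve(A, b)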
  • 44. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Gradient-free Methods Gradient-free Methods Gradient-based methods Require derivative information. Not suitable for problems with discontinuities. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 46. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Gradient-free Methods Gradient-free Methods Gradient-based methods Require derivative information. Not suitable for problems with discontinuities. Gradient-free or derivative-free methods BFGS, Downhill simplex, Trust-region, SQP ... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 47. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Nelder-Mead Downhill Simplex Method Nelder-Mead Downhill Simplex Method The Nelder-Mead method is a downhill simplex algorithm, first developed by J. A. Nelder and R. Mead in 1965. A Simplex In n-dimensional space, a simplex, which is a generalization of a triangle on a plane, is a convex hull with n + 1 distinct points. For simplicity, a simplex in n-dimensional space is referred to as an n-simplex. [figures (a), (b), (c): example simplexes] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 48. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Downhill Simplex Method Downhill Simplex Method [diagram: reflection $x_r$, expansion $x_e$ and contraction $x_c$ moves about the centroid $\bar{x}$, replacing the worst vertex $x_{n+1}$] The first step is to rank and re-order the vertex values $f(x_1) \le f(x_2) \le \ldots \le f(x_{n+1})$ at $x_1, x_2, \ldots, x_{n+1}$, respectively. Wikipedia Animation Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
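In practice the downhill simplex method is a library call away; a sketch using SciPy's built-in implementation (the Rosenbrock test function and starting point are illustrative):

    import numpy as np
    from scipy.optimize import minimize

    # Rosenbrock function: global minimum at (1, 1); no derivatives required.
    rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

    res = minimize(rosen, x0=np.array([-1.2, 1.0]), method='Nelder-Mead')
    print(res.x)  # -> approximately [1, 1]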
  • 49. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Metaheuristic Metaheuristic Most are nature-inspired, mimicking certain successful features in nature. Simulated annealing Genetic algorithms Ant and bee algorithms Particle Swarm Optimization Firefly algorithm and cuckoo search Harmony search ... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 50. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Simulated Annealing Simulated Annealing Metal annealing to increase strength =⇒ simulated annealing. Probabilistic Move: $p \propto \exp[-E/k_B T]$. $k_B$ = Boltzmann constant (e.g., $k_B = 1$), $T$ = temperature, $E$ = energy. $E \propto f(x)$, $T = T_0 \alpha^t$ (cooling schedule), $(0 < \alpha < 1)$. $T \to 0$ =⇒ $p \to 0$ =⇒ hill climbing. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 51. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Simulated Annealing Simulated Annealing Metal annealing to increase strength =⇒ simulated annealing. Probabilistic Move: $p \propto \exp[-E/k_B T]$. $k_B$ = Boltzmann constant (e.g., $k_B = 1$), $T$ = temperature, $E$ = energy. $E \propto f(x)$, $T = T_0 \alpha^t$ (cooling schedule), $(0 < \alpha < 1)$. $T \to 0$ =⇒ $p \to 0$ =⇒ hill climbing. This is essentially a Markov chain. Generation of new moves by Markov chain. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
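A minimal sketch of these probabilistic moves in Python (the one-dimensional multimodal test function, proposal width and cooling parameters are illustrative):

    import math, random

    def simulated_annealing(f, x0, T0=1.0, alpha=0.95, n_iter=1000):
        x, fx, T = x0, f(x0), T0
        for _ in range(n_iter):
            x_new = x + random.gauss(0, 1)   # propose a random move
            fx_new = f(x_new)
            # Accept downhill moves always; uphill moves with p = exp(-dE/T).
            if fx_new < fx or random.random() < math.exp(-(fx_new - fx) / T):
                x, fx = x_new, fx_new
            T *= alpha                        # geometric cooling, T = T0 * alpha^t
        return x, fx

    f = lambda x: x**2 + 10 * math.sin(x)     # multimodal test function
    print(simulated_annealing(f, x0=5.0))     # -> near the global minimum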
  • 52. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks An Example An Example Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 53. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Genetic Algorithms Genetic Algorithms crossover mutation Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 58. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Generation of new solutions by crossover, mutation and elitism. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
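A toy sketch showing all three operators on bit strings (the one-max fitness function and all parameters are illustrative, not from the slides):

    import random

    def genetic_algorithm(fitness, length=20, pop_size=30, mu=0.05, n_gen=100):
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(n_gen):
            pop.sort(key=fitness, reverse=True)
            next_pop = pop[:2]                        # elitism: keep the two best
            while len(next_pop) < pop_size:
                p1, p2 = random.sample(pop[:10], 2)   # select among the fittest
                cut = random.randrange(1, length)     # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [1 - g if random.random() < mu else g for g in child]  # mutation
                next_pop.append(child)
            pop = next_pop
        return max(pop, key=fitness)

    print(genetic_algorithm(sum))  # one-max fitness: usually evolves all ones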
  • 59. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Swarm Intelligence Swarm Intelligence Ants, bees, birds, fish ... Simple rules lead to complex behaviour. Go to Metaheuristic Slides Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 60. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Cuckoo Search Cuckoo Search Local random walk: $x_i^{t+1} = x_i^t + s \otimes H(p_a - \epsilon) \otimes (x_j^t - x_k^t)$. [$x_i$, $x_j$, $x_k$ are 3 different solutions, $H(u)$ is a Heaviside function, $\epsilon$ is a random number drawn from a uniform distribution, and $s$ is the step size.] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 61. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Cuckoo Search Cuckoo Search Local random walk: $x_i^{t+1} = x_i^t + s \otimes H(p_a - \epsilon) \otimes (x_j^t - x_k^t)$. [$x_i$, $x_j$, $x_k$ are 3 different solutions, $H(u)$ is a Heaviside function, $\epsilon$ is a random number drawn from a uniform distribution, and $s$ is the step size.] Global random walk via Lévy flights: $x_i^{t+1} = x_i^t + \alpha L(s, \lambda)$, $L(s, \lambda) = \frac{\lambda \Gamma(\lambda) \sin(\pi\lambda/2)}{\pi} \frac{1}{s^{1+\lambda}}$, $(s \gg s_0)$. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 63. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Cuckoo Search Cuckoo Search Local random walk: $x_i^{t+1} = x_i^t + s \otimes H(p_a - \epsilon) \otimes (x_j^t - x_k^t)$. [$x_i$, $x_j$, $x_k$ are 3 different solutions, $H(u)$ is a Heaviside function, $\epsilon$ is a random number drawn from a uniform distribution, and $s$ is the step size.] Global random walk via Lévy flights: $x_i^{t+1} = x_i^t + \alpha L(s, \lambda)$, $L(s, \lambda) = \frac{\lambda \Gamma(\lambda) \sin(\pi\lambda/2)}{\pi} \frac{1}{s^{1+\lambda}}$, $(s \gg s_0)$. Generation of new moves by Lévy flights, random walk and elitism. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
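Heavy-tailed Lévy steps are commonly generated with Mantegna's algorithm; a sketch of one global move (the exponent λ = 1.5, the scaling α = 0.01 and the dimension are illustrative choices):

    import math
    import numpy as np

    def levy_step(dim, lam=1.5):
        # Mantegna's algorithm: u/|v|^(1/lam) has a heavy-tailed Levy distribution.
        sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
                 / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
        u = np.random.normal(0.0, sigma, dim)
        v = np.random.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / lam)

    x = np.zeros(5)                  # current solution x_i^t
    x_new = x + 0.01 * levy_step(5)  # global move x_i^{t+1} = x_i^t + alpha L(s, lam)
    print(x_new)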
  • 64. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Monte Carlo Methods Monte Carlo Methods Almost everyone has used Monte Carlo methods in some way ... Measure temperatures, choose a product, ... Taste soup, wine ... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 65. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Markov Chains Markov Chains Random walk – A drunkard's walk: $u_{t+1} = \mu + u_t + w_t$, where $w_t$ is a random variable and $\mu$ is the drift. For example, $w_t \sim N(0, \sigma^2)$ (Gaussian). Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 66. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Markov Chains Markov Chains Random walk – A drunkard's walk: $u_{t+1} = \mu + u_t + w_t$, where $w_t$ is a random variable and $\mu$ is the drift. For example, $w_t \sim N(0, \sigma^2)$ (Gaussian). [plot: a 1-D random walk sample path over 500 steps] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 67. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Markov Chains Markov Chains Random walk – A drunkard's walk: $u_{t+1} = \mu + u_t + w_t$, where $w_t$ is a random variable and $\mu$ is the drift. For example, $w_t \sim N(0, \sigma^2)$ (Gaussian). [plots: the 1-D sample path and a 2-D random walk trajectory] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
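The drunkard's walk above is a few lines of code; a sketch reproducing a 1-D sample path (the drift, noise scale and seed are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n_steps = 0.02, 1.0, 500
    # u_{t+1} = mu + u_t + w_t with w_t ~ N(0, sigma^2); cumulative sum from u_0 = 0.
    u = np.cumsum(mu + rng.normal(0.0, sigma, n_steps))
    print(u[:5], u[-1])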
  • 68. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Markov Chains Markov Chains Markov chain: the next state only depends on the current state and the transition probability. $P(i, j) \equiv P(V_{t+1} = S_j \mid V_0 = S_p, \ldots, V_t = S_i) = P(V_{t+1} = S_j \mid V_t = S_i)$, =⇒ $P_{ij} \pi_i^* = P_{ji} \pi_j^*$, $\pi^*$ = stationary probability distribution. Examples: Brownian motion $u_{i+1} = \mu + u_i + \epsilon_i$, $\epsilon_i \sim N(0, \sigma^2)$. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 69. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Markov Chains Markov Chains Monopoly (board games) Monopoly Animation Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 70. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Markov Chain Monte Carlo Markov Chain Monte Carlo Landmarks: Monte Carlo method (1930s, 1945, from 1950s) e.g., Metropolis Algorithm (1953), Metropolis-Hastings (1970). Markov Chain Monte Carlo (MCMC) methods – A class of methods. Really took off in 1990s, now applied to a wide range of areas: physics, Bayesian statistics, climate changes, machine learning, finance, economy, medicine, biology, materials and engineering ... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
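A sketch of the random-walk Metropolis algorithm sampling from an unnormalized density (the standard-normal target, proposal width and chain length are illustrative):

    import math, random

    def metropolis(log_p, x0, width=1.0, n_samples=10000):
        x, samples = x0, []
        for _ in range(n_samples):
            x_prop = x + random.gauss(0, width)   # symmetric random-walk proposal
            # Accept with probability min(1, p(x')/p(x)).
            if random.random() < math.exp(min(0.0, log_p(x_prop) - log_p(x))):
                x = x_prop
            samples.append(x)
        return samples

    log_p = lambda x: -0.5 * x**2        # standard normal, up to a constant
    chain = metropolis(log_p, x0=5.0)
    print(sum(chain) / len(chain))       # -> near 0 once the chain has mixed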
  • 71. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Convergence Behaviour Convergence Behaviour As the MCMC runs, convergence may be reached. When does a chain converge? When to stop the chain ...? Are multiple chains better than a single chain? [plots: two sample MCMC trace histories] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 72. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Convergence Behaviour Convergence Behaviour [diagram: chains started at different past times ($t = -n, \ldots, -2, 0$) coalescing once converged] Multiple, interacting chains Multiple agents trace multiple, interacting Markov chains during the Monte Carlo process. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 73. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Analysis Analysis Classifications of Algorithms Trajectory-based: hill-climbing, simulated annealing, pattern search ... Population-based: genetic algorithms, ant & bee algorithms, artificial immune systems, differential evolutions, PSO, HS, FA, CS, ... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 75. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Analysis Analysis Classifications of Algorithms Trajectory-based: hill-climbing, simulated annealing, pattern search ... Population-based: genetic algorithms, ant & bee algorithms, artificial immune systems, differential evolutions, PSO, HS, FA, CS, ... Ways of Generating New Moves/Solutions Markov chains with different transition probability. Trajectory-based =⇒ a single Markov chain; Population-based =⇒ multiple, interacting chains. Tabu search (with memory) =⇒ self-avoiding Markov chains. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 76. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Ergodicity Ergodicity Markov Chains & Markov Processes Most theoretical studies use Markov chains/processes as a framework for convergence analysis. A Markov chain is said to be regular if some positive power k of the transition matrix P has only positive elements. A chain is called time-homogeneous if its transition matrix P is the same after each step; the transition probability after k steps then becomes $P^k$. A chain is ergodic or irreducible if it is aperiodic and positive recurrent – it is possible to reach every state from any state. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 77. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Convergence Behaviour Convergence Behaviour As $k \to \infty$, we have the stationary probability distribution $\pi$: $\pi = \pi P$, =⇒ thus the first eigenvalue is always 1. Asymptotic convergence to optimality: $\lim_{k \to \infty} \theta_k \to \theta^*$ (with probability one). Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 78. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Convergence Behaviour Convergence Behaviour As $k \to \infty$, we have the stationary probability distribution $\pi$: $\pi = \pi P$, =⇒ thus the first eigenvalue is always 1. Asymptotic convergence to optimality: $\lim_{k \to \infty} \theta_k \to \theta^*$ (with probability one). The rate of convergence is usually determined by the second eigenvalue $0 < \lambda_2 < 1$. An algorithm can converge, but may not necessarily be efficient, as the rate of convergence is typically low. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
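Both facts are easy to check numerically; a sketch for a small illustrative 3-state transition matrix (the matrix entries are made up):

    import numpy as np

    P = np.array([[0.5, 0.4, 0.1],       # rows sum to one: an ergodic 3-state chain
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    eigvals, eigvecs = np.linalg.eig(P.T)   # left eigenvectors of P
    order = np.argsort(-np.abs(eigvals))
    print(eigvals[order[0]].real)           # first eigenvalue: 1.0
    print(abs(eigvals[order[1]]))           # |lambda_2| < 1 sets the convergence rate

    pi = eigvecs[:, order[0]].real
    pi /= pi.sum()                          # stationary distribution, pi = pi P
    print(pi, pi @ P)                       # the two rows should match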
  • 79. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Convergence of GA Convergence of GA Important studies by Aytug et al. (1996) [1], Aytug and Koehler (2000) [2], Greenhalgh and Marshall (2000) [3], Gutjahr (2010) [4], etc. The number of iterations $t(\zeta)$ in GA with a convergence probability of $\zeta$ can be estimated by $t(\zeta) \le \frac{\ln(1 - \zeta)}{\ln\left(1 - \min[(1 - \mu)^{Ln}, \mu^{Ln}]\right)}$, where $\mu$ = mutation rate, $L$ = string length, and $n$ = population size. [1] H. Aytug, S. Bhattacharrya and G. J. Koehler, A Markov chain analysis of genetic algorithms with power of 2 cardinality alphabets, Euro. J. Operational Research, 96, 195-201 (1996). [2] H. Aytug and G. J. Koehler, New stopping criterion for genetic algorithms, Euro. J. Operational Research, 126, 662-674 (2000). [3] D. Greenhalgh & S. Marshall, Convergence criteria for genetic algorithms, SIAM J. Computing, 30, 269-282 (2000). [4] W. J. Gutjahr, Convergence Analysis of Metaheuristics, Annals of Information Systems, 10, 159-187 (2010). Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
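Plugging illustrative numbers into this bound shows why a guarantee of convergence is of limited practical comfort (all parameter values below are made up for illustration):

    import math

    def ga_iteration_bound(zeta, mu, L, n):
        # t(zeta) <= ln(1 - zeta) / ln(1 - min[(1 - mu)^(Ln), mu^(Ln)])
        p = min((1 - mu) ** (L * n), mu ** (L * n))
        return math.log(1 - zeta) / math.log1p(-p)  # log1p avoids underflow for tiny p

    print(f"{ga_iteration_bound(zeta=0.99, mu=0.05, L=8, n=4):.3e}")
    # -> an astronomically large bound: convergence is guaranteed but can be very slow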
  • 80. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Multiobjective Metaheuristics Multiobjective Metaheuristics Asymptotic convergence of metaheuristics for multiobjective optimization (Villalobos-Arias et al. 2005) [6] The transition matrix $P$ of a metaheuristic algorithm has a stationary distribution $\pi$ such that $|P^k_{ij} - \pi_j| \le (1 - \zeta)^{k-1}$, $\forall i, j$, $(k = 1, 2, \ldots)$, where $\zeta$ is a function of the mutation probability $\mu$, string length $L$ and population size $n$. For example, $\zeta = 2^{nL} \mu^{nL}$, so $\mu < 0.5$. [6] M. Villalobos-Arias, C. A. Coello Coello and O. Hernández-Lerma, Asymptotic convergence of metaheuristics ... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 82. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Multiobjective Metaheuristics Multiobjective Metaheuristics Asymptotic convergence of metaheuristics for multiobjective optimization (Villalobos-Arias et al. 2005) [6] The transition matrix $P$ of a metaheuristic algorithm has a stationary distribution $\pi$ such that $|P^k_{ij} - \pi_j| \le (1 - \zeta)^{k-1}$, $\forall i, j$, $(k = 1, 2, \ldots)$, where $\zeta$ is a function of the mutation probability $\mu$, string length $L$ and population size $n$. For example, $\zeta = 2^{nL} \mu^{nL}$, so $\mu < 0.5$. Note: An algorithm satisfying this condition may not converge (for multiobjective optimization). However, an algorithm with elitism, obeying the above condition, does converge! [6] M. Villalobos-Arias, C. A. Coello Coello and O. Hernández-Lerma, Asymptotic convergence of metaheuristics ... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 83. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Other results Other results Limited results on convergence analysis exist, concerning (for finite states/domains): ant colony optimization; generalized hill-climbers and simulated annealing; best-so-far convergence of cross-entropy optimization; the nested partition method; Tabu search; and, of course, combinatorial optimization. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 84. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Other results Other results Limited results on convergence analysis exist, concerning (for finite states/domains): ant colony optimization; generalized hill-climbers and simulated annealing; best-so-far convergence of cross-entropy optimization; the nested partition method; Tabu search; and, of course, combinatorial optimization. However, the tasks are more challenging for infinite states/domains and continuous problems. Many, many open problems need satisfactory answers. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 85. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Converged? Converged? 'Converged' often means the 'best-so-far' convergence, not necessarily at the global optimality. In theory, a Markov chain can converge, but the number of iterations tends to be large. In practice, within a finite (hopefully small) number of generations, even if the algorithm converges, it may not reach the global optimum. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 87. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Converged? Converged? 'Converged' often means the 'best-so-far' convergence, not necessarily at the global optimality. In theory, a Markov chain can converge, but the number of iterations tends to be large. In practice, within a finite (hopefully small) number of generations, even if the algorithm converges, it may not reach the global optimum. How to avoid premature convergence: Equip an algorithm with the ability to escape a local optimum. Increase diversity of the solutions. Enough randomization at the right stage. ....(unknown, new) .... Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 88. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Coffee Break (15 Minutes) Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 89. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks All and NFL All and NFL So many algorithms – what are the common characteristics? What are the key components? How to use and balance different components? What controls the overall behaviour of an algorithm? Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 90. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Exploration and Exploitation Exploration and Exploitation Characteristics of Metaheuristics Exploration and Exploitation, or Diversification and Intensification. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 91. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Exploration and Exploitation Exploration and Exploitation Characteristics of Metaheuristics Exploration and Exploitation, or Diversification and Intensification. Exploitation/Intensification Intensive local search, exploiting local information. E.g., hill-climbing. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 92. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Exploration and Exploitation Exploration and Exploitation Characteristics of Metaheuristics Exploration and Exploitation, or Diversification and Intensification. Exploitation/Intensification Intensive local search, exploiting local information. E.g., hill-climbing. Exploration/Diversification Exploratory global search, using randomization/stochastic components. E.g., hill-climbing with random restart. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 93. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Summary Summary Exploration Exploitation Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 94. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Summary Summary uniform search Exploration Exploitation Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 95. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Summary Summary [diagram: uniform search at the exploration end of the axis; steepest descent at the exploitation end] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 96. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Summary Summary [diagram: genetic algorithms, CS, PSO/EP, SA/ES, FA and ant/bee algorithms lie between the two extremes; Newton-Raphson, Tabu and Nelder-Mead sit near steepest descent at the exploitation end] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 97. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks Summary Summary [same diagram, annotated near the exploration end: Best? Free lunch?] Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 98. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks No-Free-Lunch (NFL) Theorems No-Free-Lunch (NFL) Theorems Algorithm Performance Any algorithm is as good/bad as random search, when averaged over all possible problems/functions. Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 99. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks No-Free-Lunch (NFL) Theorems No-Free-Lunch (NFL) Theorems Algorithm Performance Any algorithm is as good/bad as random search, when averaged over all possible problems/functions. Finite domains No universally efficient algorithm! Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
  • 100. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks No-Free-Lunch (NFL) Theorems No-Free-Lunch (NFL) Theorems Algorithm Performance Any algorithm is as good/bad as random search, when averaged over all possible problems/functions. Finite domains No universally efficient algorithm! Any free taster or dessert? Yes and no. (more later) Xin-She Yang FedCSIS2011 Metaheuristics and Computational Intelligence
• 103. NFL Theorems (Wolpert and Macready 1997)
The search space is finite (though quite large), thus the space of possible "cost" values is also finite.
Objective function f : X → Y, with F = Y^X (the space of all possible problems).
Assumptions: finite domain, closed under permutation (c.u.p.).
For m iterations, the m distinct visited points form a time-ordered set
d_m = {(d_m^x(1), d_m^y(1)), ..., (d_m^x(m), d_m^y(m))}.
The performance of an algorithm a iterated m times on a cost function f is denoted by P(d_m^y | f, m, a).
For any pair of algorithms a and b, the NFL theorem states
\sum_f P(d_m^y | f, m, a) = \sum_f P(d_m^y | f, m, b).
Any algorithm is as good (bad) as a random search! (A tiny enumeration check follows.)
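The theorem can be checked by brute force on a toy instance. The sketch below enumerates all functions F = Y^X on a three-point domain and compares two deterministic non-revisiting "algorithms" (fixed visit orders); the domain, codomain and visit orders are illustrative assumptions:

```python
from itertools import product
from collections import Counter

X = (0, 1, 2)   # a tiny finite search space
Y = (0, 1)      # a tiny finite set of cost values

def observed(order, f, m=2):
    """Cost values seen by a deterministic non-revisiting algorithm
    that samples points in the given fixed order."""
    return tuple(f[x] for x in order[:m])

# F = Y^X: all 2^3 = 8 possible cost functions on X.
problems = [dict(zip(X, ys)) for ys in product(Y, repeat=len(X))]

# Two different deterministic "algorithms" = two distinct visit orders.
hist_a = Counter(observed((0, 1, 2), f) for f in problems)
hist_b = Counter(observed((2, 0, 1), f) for f in problems)

# Averaged over all problems, the distribution of observed cost
# sequences (hence of any derived performance measure) is identical.
print(hist_a == hist_b)   # True
```

Both histograms come out identical, so any performance measure computed from the observed cost values averages out the same for the two algorithms.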
• 104. Open Problems
Framework: we need a unified framework for algorithmic analysis (e.g., convergence).
Exploration and exploitation: what is the optimal balance between these two components (50-50, or something else)?
Performance measures: what are the best performance measures, statistically, and why?
Convergence: convergence analysis of algorithms on infinite, continuous domains requires systematic approaches.
• 108. More Open Problems
Free lunches: NFL results remain unproved for infinite or continuous domains and for multiobjective optimization (possible free lunches!). What are the implications of the NFL theorems in practice? If free lunches exist, how do we find the best algorithm(s)?
Knowledge: does problem-specific knowledge always help to find appropriate solutions? How can such knowledge be quantified?
Intelligent algorithms: is there any practical way to design truly intelligent, self-evolving algorithms?
• 111. Constraints
In describing optimization algorithms we have so far not been concerned with constraints, yet algorithms must solve both unconstrained and, more often, constrained problems. Handling constraints is an implementation issue, but incorrect or inefficient constraint handling can reduce an algorithm's efficiency or even produce wrong solutions.
Methods of handling constraints: direct methods, Lagrange multipliers, barrier functions, penalty methods.
• 115. Aims
Conversion: either convert the constrained problem into an unconstrained one, or change the search space into a regular domain.
Ease of programming and implementation.
Efficiency: improve (or at least not hinder) the efficiency of the chosen algorithm in implementation.
Scalability: the chosen approach should handle small, large and very large scale problems.
• 120. Common Approaches
Direct methods: simple, but not versatile and difficult to program.
Lagrange multipliers: mainly for equality constraints.
Barrier functions: very powerful and widely used in convex optimization.
Penalty methods: simple, versatile and widely used.
Others.
• 121. Direct Methods
Example: minimize f(x, y) = (x - 2)^2 + 4(y - 3)^2 subject to -x + y <= 2 and x + 2y <= 3.
[Figure: the feasible region bounded by the lines -x + y = 2 and x + 2y = 3, with the constrained optimum marked.]
Direct methods generate solutions/points inside the feasible region directly; this is easy for rectangular regions. (A rejection-sampling sketch follows.)
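One simple way to realize "generating points inside the region" is rejection sampling from a bounding box; the box limits and sample count below are illustrative assumptions:

```python
import random

def feasible(x, y):
    """Feasibility test for the two linear constraints of the example."""
    return (-x + y <= 2) and (x + 2*y <= 3)

def sample_feasible(n, lo=-5.0, hi=5.0):
    """Rejection sampling: draw from a bounding box, keep feasible points."""
    pts = []
    while len(pts) < n:
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        if feasible(x, y):
            pts.append((x, y))
    return pts

f = lambda x, y: (x - 2)**2 + 4*(y - 3)**2
best = min(sample_feasible(10000), key=lambda p: f(*p))
print(best, f(*best))   # crude estimate of the constrained optimum
```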
• 122. Method of Lagrange Multipliers
Example: maximize f(x, y) = 10 - x^2 - (y - 2)^2 subject to x + 2y = 5.
Defining a combined function Φ using a multiplier λ, we have
Φ = 10 - x^2 - (y - 2)^2 + λ(x + 2y - 5).
The optimality conditions are
∂Φ/∂x = -2x + λ = 0,  ∂Φ/∂y = -2(y - 2) + 2λ = 0,  ∂Φ/∂λ = x + 2y - 5 = 0,
whose solution is
x = 1/5,  y = 12/5,  λ = 2/5  ⟹  f_max = 49/5.
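The optimality conditions above can be verified symbolically; a quick check, assuming SymPy is available:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
Phi = 10 - x**2 - (y - 2)**2 + lam*(x + 2*y - 5)

# Solve dPhi/dx = dPhi/dy = dPhi/dlambda = 0.
sol = sp.solve([sp.diff(Phi, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(sol)                                                # {x: 1/5, y: 12/5, lambda: 2/5}
print(sp.simplify((10 - x**2 - (y - 2)**2).subs(sol)))    # 49/5
```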
• 123. Barrier Functions
As an equality h(x) = 0 can be written as two inequalities h(x) <= 0 and -h(x) <= 0, we use only inequalities. For a general optimization problem, minimize f(x) subject to g_i(x) <= 0 (i = 1, 2, ..., N), we can define an indicator (barrier) function
I_-[u] = 0 if u <= 0,  ∞ if u > 0.
This is not so easy to deal with numerically, and it is also discontinuous!
• 125. Logarithmic Barrier Functions
A log barrier function is
Ī_-(u) = -(1/t) log(-u),  u < 0,
where t > 0 is an accuracy parameter (which can be very large). The minimization problem then becomes
minimize f(x) + Σ_{i=1}^{N} Ī_-(g_i(x)) = f(x) - (1/t) Σ_{i=1}^{N} log[-g_i(x)].
This is an unconstrained problem and easy to implement! (A minimal sketch follows.)
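A minimal sketch applying the log barrier to the earlier toy problem; the value of t, the optimizer (Nelder-Mead from SciPy) and the starting point are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def f(p):
    x, y = p
    return (x - 2)**2 + 4*(y - 3)**2

def g(p):                          # inequality constraints g_i(p) <= 0
    x, y = p
    return np.array([-x + y - 2, x + 2*y - 3])

def barrier_objective(p, t=1e3):
    gv = g(p)
    if np.any(gv >= 0):            # outside the strictly feasible region
        return np.inf
    return f(p) - (1.0 / t) * np.sum(np.log(-gv))

# Start from a strictly feasible point, e.g. (0, 0).
res = minimize(barrier_objective, x0=[0.0, 0.0], method='Nelder-Mead')
print(res.x)
```

As t grows, the minimizer approaches the constrained optimum of this toy problem, roughly (-1/3, 5/3).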
• 126. Penalty Methods
For a nonlinear optimization problem with equality and inequality constraints,
minimize f(x),  x = (x_1, ..., x_n)^T ∈ R^n,
subject to φ_i(x) = 0 (i = 1, ..., M) and ψ_j(x) <= 0 (j = 1, ..., N),
the idea is to define a penalty function so that the constrained problem is transformed into an unconstrained one. We define
Π(x, μ_i, ν_j) = f(x) + Σ_{i=1}^{M} μ_i φ_i^2(x) + Σ_{j=1}^{N} ν_j ψ_j^2(x),
where μ_i ≫ 1 and ν_j ≥ 0 should be large enough, depending on the solution quality needed.
• 127. In addition, for simplicity of implementation, we can use μ = μ_i for all i and ν = ν_j for all j. That is, we can use the simplified form
Π(x, μ, ν) = f(x) + μ Σ_{i=1}^{M} Q_i[φ_i(x)] φ_i^2(x) + ν Σ_{j=1}^{N} H_j[ψ_j(x)] ψ_j^2(x),
where the barrier/indicator-like functions are
H_j = 0 if ψ_j(x) <= 0, 1 if ψ_j(x) > 0;  Q_i = 0 if φ_i(x) = 0, 1 if φ_i(x) ≠ 0.
In general, for most applications, μ and ν can be taken as 10^10 to 10^15; we will use these values in most implementations. (A minimal sketch follows.)
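A minimal sketch of Π for the same toy problem (no equality constraints here); μ and ν follow the values quoted above, while the optimizer and starting point are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def f(p):
    x, y = p
    return (x - 2)**2 + 4*(y - 3)**2

def Pi(p, mu=1e10, nu=1e10):
    x, y = p
    phi = np.array([])                          # equality constraints phi_i = 0 (none here)
    psi = np.array([-x + y - 2, x + 2*y - 3])   # inequality constraints psi_j <= 0
    Q = np.where(phi != 0, 1.0, 0.0)            # Q_i switch
    H = np.where(psi > 0, 1.0, 0.0)             # H_j switch
    return f(p) + mu*np.sum(Q*phi**2) + nu*np.sum(H*psi**2)

res = minimize(Pi, x0=[0.0, 0.0], method='Nelder-Mead')
print(res.x)   # close to the constrained optimum, roughly (-1/3, 5/3)
```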
• 128. Pressure Vessel Design Optimization
[Figure: cross-section of a cylindrical pressure vessel with end heads; design variables are the shell thickness d1, the head thickness d2, the inner radius r and the cylindrical length L.]
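To tie the penalty approach to this design problem, here is a hedged sketch using the pressure vessel formulation commonly cited as a benchmark in the metaheuristics literature; the objective coefficients and the four constraints are assumed from that standard benchmark, not read off this slide, and the local optimizer merely stands in for any metaheuristic:

```python
import numpy as np
from scipy.optimize import minimize

def cost(v):
    d1, d2, r, L = v    # thicknesses d1, d2; inner radius r; length L
    return (0.6224*d1*r*L + 1.7781*d2*r**2
            + 3.1661*d1**2*L + 19.84*d1**2*r)

def constraints(v):     # psi_j(v) <= 0 in the assumed benchmark form
    d1, d2, r, L = v
    return np.array([
        -d1 + 0.0193*r,                                     # shell thickness limit
        -d2 + 0.00954*r,                                    # head thickness limit
        -np.pi*r**2*L - (4.0/3.0)*np.pi*r**3 + 1296000.0,   # volume requirement
        L - 240.0,                                          # length limit
    ])

def penalized(v, nu=1e10):
    psi = constraints(v)
    return cost(v) + nu*np.sum(np.where(psi > 0, psi**2, 0.0))

# Any metaheuristic can now minimize `penalized`; for illustration we use
# a local optimizer from a feasible starting guess.
res = minimize(penalized, x0=[1.0, 0.5, 50.0, 100.0], method='Nelder-Mead')
print(res.x, cost(res.x))
```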