Lesson 11
                     Markov Chains

                          Math 20


                      October 15, 2007


Announcements
   Review Session (ML), 10/16, 7:30–9:30 Hall E
   Problem Set 4 is on the course web site. Due October 17
   Midterm I 10/18, Hall A 7–8:30pm
   OH: Mondays 1–2, Tuesdays 3–4, Wednesdays 1–3 (SC 323)
   Old exams and solutions on website
The Markov Dance




  Divide the class into three groups A, B, and C. Upon my signal:
      1/3 of group A goes to group B, and 1/3 of group A goes to
      group C.
      1/4 of group B goes to group A, and 1/4 of group B goes to
      group C.
      1/2 of group C goes to group B.
Another Example



  Suppose on any given class day you wake up and decide whether to
  come to class. If you went to class the time before, you’re 70%
  likely to go today, and if you skipped the last class, you’re 80%
  likely to go today. Some questions you might ask are:
      If I go to class on Monday, how likely am I to go to class on
      Friday?
      Assuming the class is infinitely long (the horror!),
      approximately what portion of class will I attend?
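
Both questions can be previewed by brute-force simulation. Here is a rough Python sketch under the stated probabilities (the seed and day count are arbitrary choices of mine, not from the lecture):

    import random

    random.seed(1)                # arbitrary seed, for reproducibility
    went = True                   # suppose I go on day 0
    attended = 0
    days = 100_000                # arbitrary horizon

    for _ in range(days):
        p_go = 0.7 if went else 0.8
        went = random.random() < p_go
        attended += went

    print(attended / days)        # about 0.727: roughly 73% attendance

The Markov-chain machinery developed below answers both questions exactly, without simulation.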
Many times we are interested in the transition of something
between certain “states” over discrete time steps. Examples are
    movement of people between regions
    states of the weather
    movement between positions on a Monopoly board
    your score in blackjack

Definition
A Markov chain or Markov process is a process in which the
probability of the system being in a particular state at a given
observation period depends only on its state at the immediately
preceding observation period.
Common questions about a Markov chain are:
    What is the probability of transitions from state to state over
    multiple observations?
    Are there any “equilibria” in the process?
    Is there a long-term stability to the process?
Definition
Suppose the system has n possible states. For each i and j, let t_{ij}
be the probability of switching from state j to state i. The matrix
T whose (i, j)th entry is t_{ij} is called the transition matrix.

Example
The transition matrix for the skipping class example is

    T = \begin{pmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{pmatrix}
The big idea about the transition matrix reflects an important fact
about probabilities:
    All entries are nonnegative.
    The columns add up to one.
Such a matrix is called a stochastic matrix.
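
To make this concrete, here is a minimal NumPy sketch (the variable names are mine, not from the lecture) that encodes the skipping-class transition matrix and checks the two stochastic-matrix properties:

    import numpy as np

    # Transition matrix for the skipping-class example: column j holds the
    # probabilities of leaving state j (state 1 = "go", state 2 = "skip").
    T = np.array([[0.7, 0.8],
                  [0.3, 0.2]])

    # Stochastic matrix: nonnegative entries, each column sums to one.
    assert (T >= 0).all()
    assert np.allclose(T.sum(axis=0), 1.0)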
Definition
The state vector of a Markov process with n states at time step k
is the vector

    x^{(k)} = \begin{pmatrix} p_1^{(k)} \\ p_2^{(k)} \\ \vdots \\ p_n^{(k)} \end{pmatrix}

where p_j^{(k)} is the probability that the system is in state j at time
step k.

Example
Suppose we start out with 20 students in group A and 10 students
in each of groups B and C. Then the initial state vector is

    x^{(0)} = \begin{pmatrix} 0.5 \\ 0.25 \\ 0.25 \end{pmatrix}.
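
The same normalization in code (a small NumPy sketch using the head counts above):

    import numpy as np

    # Head counts for groups A, B, C from the example.
    counts = np.array([20, 10, 10])

    # The state vector is the count vector normalized to a probability vector.
    x0 = counts / counts.sum()
    print(x0)  # [0.5  0.25 0.25]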
Example
Suppose after three weeks of class I am equally likely to come to
class or skip. Then my state vector would be x^{(10)} = \begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix}.
The big idea about state vectors reflects an important fact about
probabilities:
    All entries are nonnegative.
    The entries add up to one.
Such a vector is called a probability vector.
Lemma
Let T be an n × n stochastic matrix and x an n × 1 probability
vector. Then T x is a probability vector.

Proof.
We need to show that the entries of T x add up to one. We have

    \sum_{i=1}^{n} (T x)_i = \sum_{i=1}^{n} \sum_{j=1}^{n} t_{ij} x_j
                           = \sum_{j=1}^{n} \left( \sum_{i=1}^{n} t_{ij} \right) x_j
                           = \sum_{j=1}^{n} 1 \cdot x_j = 1
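
The lemma is easy to test numerically. A small sketch, assuming a randomly generated stochastic matrix and probability vector (size and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)  # arbitrary seed

    # A random 4 x 4 stochastic matrix: normalize each column to sum to one.
    T = rng.random((4, 4))
    T /= T.sum(axis=0)

    # A random probability vector, normalized the same way.
    x = rng.random(4)
    x /= x.sum()

    print((T @ x).sum())  # 1.0, up to floating-point rounding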
Theorem
If T is the transition matrix of a Markov process, then the state
vector x^{(k+1)} at the (k + 1)th observation period can be
determined from the state vector x^{(k)} at the kth observation
period, as

    x^{(k+1)} = T x^{(k)}

This comes from an important idea in conditional probability:

    P(state i at t = k + 1)
        = \sum_{j=1}^{n} P(move from state j to state i) \, P(state j at t = k)

That is, for each i,

    p_i^{(k+1)} = \sum_{j=1}^{n} t_{ij} p_j^{(k)}
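
A sketch of the theorem in use (Python, with the skipping-class matrix; the number of iterations is arbitrary): repeatedly applying T steps the state vector forward one observation period at a time.

    import numpy as np

    T = np.array([[0.7, 0.8],
                  [0.3, 0.2]])

    # Start in state "go" with certainty and iterate x^(k+1) = T x^(k).
    x = np.array([1.0, 0.0])
    for k in range(1, 6):      # five observation periods, arbitrarily
        x = T @ x
        print(k, x)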
Illustration

   Example
   How does the probability of going to class on Wednesday depend
   on the probabilities of going to class on Monday?

   [Tree diagram: Monday's two states, "go" (probability p_1^{(k)}) and
   "skip" (probability p_2^{(k)}), each branch to Wednesday's states
   "go" and "skip" with transition probabilities t_{11}, t_{21}, t_{12}, t_{22}.]

       p_1^{(k+1)} = t_{11} p_1^{(k)} + t_{12} p_2^{(k)}
       p_2^{(k+1)} = t_{21} p_1^{(k)} + t_{22} p_2^{(k)}
Example
If I go to class on Monday, what’s the probability I’ll go to class on
Friday?

Solution
We have x^{(0)} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. We want to know x^{(2)}. We have

    x^{(2)} = T x^{(1)} = T (T x^{(0)}) = T^2 x^{(0)}
            = \begin{pmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{pmatrix}^2 \begin{pmatrix} 1 \\ 0 \end{pmatrix}
            = \begin{pmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{pmatrix} \begin{pmatrix} 0.7 \\ 0.3 \end{pmatrix}
            = \begin{pmatrix} 0.73 \\ 0.27 \end{pmatrix}
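
The same computation in NumPy (a sketch; `matrix_power` is just one convenient way to square T):

    import numpy as np

    T = np.array([[0.7, 0.8],
                  [0.3, 0.2]])
    x0 = np.array([1.0, 0.0])   # went to class on Monday

    # Monday -> Wednesday -> Friday is two steps of the chain.
    x2 = np.linalg.matrix_power(T, 2) @ x0
    print(x2)                   # [0.73 0.27]: a 73% chance of going Friday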
Let’s look at successive powers of the transition matrix. Do they
converge? To what?
Let’s look at successive powers of the transition matrix in the
Markov Dance.

    T   = \begin{pmatrix} 0.333333 & 0.25 & 0 \\ 0.333333 & 0.5 & 0.5 \\ 0.333333 & 0.25 & 0.5 \end{pmatrix}

    T^2 = \begin{pmatrix} 0.194444 & 0.208333 & 0.125 \\ 0.444444 & 0.458333 & 0.5 \\ 0.361111 & 0.333333 & 0.375 \end{pmatrix}

    T^3 = \begin{pmatrix} 0.175926 & 0.184028 & 0.166667 \\ 0.467593 & 0.465278 & 0.479167 \\ 0.356481 & 0.350694 & 0.354167 \end{pmatrix}

    T^4 = \begin{pmatrix} 0.17554 & 0.177662 & 0.175347 \\ 0.470679 & 0.469329 & 0.472222 \\ 0.353781 & 0.353009 & 0.352431 \end{pmatrix}

    T^5 = \begin{pmatrix} 0.176183 & 0.176553 & 0.176505 \\ 0.470743 & 0.47039 & 0.470775 \\ 0.353074 & 0.353057 & 0.35272 \end{pmatrix}

    T^6 = \begin{pmatrix} 0.176414 & 0.176448 & 0.176529 \\ 0.470636 & 0.470575 & 0.470583 \\ 0.35295 & 0.352977 & 0.352889 \end{pmatrix}

Do they converge? To what?
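
These powers are quick to reproduce (a NumPy sketch; the bound 6 simply matches the powers shown above):

    import numpy as np

    T = np.array([[1/3, 1/4, 0.0],
                  [1/3, 1/2, 1/2],
                  [1/3, 1/4, 1/2]])

    # Successive powers of the Markov Dance transition matrix.
    for n in range(1, 7):
        print(f"T^{n} =")
        print(np.linalg.matrix_power(T, n))

The columns all approach (3/17, 8/17, 6/17) ≈ (0.17647, 0.47059, 0.35294), which is exactly the steady-state vector computed at the end of the lesson.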
A transition matrix (or corresponding Markov process) is called
regular if some power of the matrix has all nonzero entries.
Equivalently, there is a positive probability of eventually moving
from every state to every state.
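
A direct, if naive, regularity test in code (a sketch; the cutoff `max_power` is an illustrative choice of mine, not part of the definition):

    import numpy as np

    def is_regular(T: np.ndarray, max_power: int = 50) -> bool:
        """Return True if some power of T up to max_power is strictly positive."""
        P = np.eye(T.shape[0])
        for _ in range(max_power):
            P = P @ T
            if (P > 0).all():
                return True
        return False

    T = np.array([[1/3, 1/4, 0.0],
                  [1/3, 1/2, 1/2],
                  [1/3, 1/4, 1/2]])
    print(is_regular(T))  # True: T^2 already has no zero entries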
Theorem 2.5
If T is the transition matrix of a regular Markov process, then
(a) As n → ∞, T^n approaches a matrix

        A = \begin{pmatrix} u_1 & u_1 & \cdots & u_1 \\ u_2 & u_2 & \cdots & u_2 \\ \vdots & \vdots & & \vdots \\ u_n & u_n & \cdots & u_n \end{pmatrix},

    all of whose columns are identical.
(b) Every column u is a probability vector all of whose
    components are positive.
Theorem 2.6
If T is a regular transition matrix and A and u are as above, then
(a) For any probability vector x, T^n x → u as n → ∞, so that u is
    a steady-state vector.
(b) The steady-state vector u is the unique probability vector
    satisfying the matrix equation Tu = u.
Finding the steady-state vector




   We know the steady-state vector is unique. So we use the equation
   it satisfies to find it: Tu = u.
   This is a matrix equation if you put it in the form

                             (T − I)u = 0
Example (Skipping class)
If the transition matrix is T = \begin{pmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{pmatrix}, what is the
steady-state vector?

Solution
We can combine the equations (T − I)u = 0, u_1 + u_2 = 1 into a
single linear system with augmented matrix

    \left(\begin{array}{cc|c} -3/10 & 8/10 & 0 \\ 3/10 & -8/10 & 0 \\ 1 & 1 & 1 \end{array}\right)
    \rightsquigarrow
    \left(\begin{array}{cc|c} 1 & 0 & 8/11 \\ 0 & 1 & 3/11 \\ 0 & 0 & 0 \end{array}\right)

So the steady-state vector is \begin{pmatrix} 8/11 \\ 3/11 \end{pmatrix}. You’ll go to class about 72%
of the time.
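
In code, one convenient route (a sketch, not the lecture's hand method) is to stack (T − I)u = 0 with the normalization row and solve the combined system by least squares:

    import numpy as np

    T = np.array([[0.7, 0.8],
                  [0.3, 0.2]])
    n = T.shape[0]

    # Stack (T - I)u = 0 with the row u_1 + ... + u_n = 1.
    A = np.vstack([T - np.eye(n), np.ones((1, n))])
    b = np.append(np.zeros(n), 1.0)
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(u)  # [0.7272... 0.2727...] = (8/11, 3/11)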
Example (The Markov Dance)
If the transition matrix is T = \begin{pmatrix} 1/3 & 1/4 & 0 \\ 1/3 & 1/2 & 1/2 \\ 1/3 & 1/4 & 1/2 \end{pmatrix}, what is the
steady-state vector?

Solution
We have

    \left(\begin{array}{ccc|c} -2/3 & 1/4 & 0 & 0 \\ 1/3 & -1/2 & 1/2 & 0 \\ 1/3 & 1/4 & -1/2 & 0 \\ 1 & 1 & 1 & 1 \end{array}\right)
    \rightsquigarrow
    \left(\begin{array}{ccc|c} 1 & 0 & 0 & 3/17 \\ 0 & 1 & 0 & 8/17 \\ 0 & 0 & 1 & 6/17 \\ 0 & 0 & 0 & 0 \end{array}\right)

so the steady-state vector is \begin{pmatrix} 3/17 \\ 8/17 \\ 6/17 \end{pmatrix}.
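
For variety, a sketch of another standard route (not the one used in the lecture): take the eigenvector of T for eigenvalue 1 and rescale it to a probability vector.

    import numpy as np

    T = np.array([[1/3, 1/4, 0.0],
                  [1/3, 1/2, 1/2],
                  [1/3, 1/4, 1/2]])

    # The steady-state vector is the eigenvector for eigenvalue 1,
    # rescaled so its entries sum to one.
    w, V = np.linalg.eig(T)
    u = V[:, np.argmin(np.abs(w - 1))].real
    u /= u.sum()
    print(u)  # [0.17647 0.47059 0.35294] = (3/17, 8/17, 6/17)

This matches the limiting columns observed in the powers T^4, T^5, T^6 above.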
