Lectures on Lévy Processes and Stochastic Calculus (Koc University)
Lecture 1: Infinite Divisibility

David Applebaum

School of Mathematics and Statistics, University of Sheffield, UK

5th December 2011




Introduction




A Lévy process is essentially a stochastic process with stationary and
independent increments.
The basic theory was developed, principally by Paul Lévy, in the 1930s.
In the past 20 years there has been a renaissance of interest and a
plethora of books, articles and conferences. Why?
There are both theoretical and practical reasons.




THEORETICAL
     Lévy processes are the simplest generic class of processes whose
     paths consist of (a.s.) continuous motion interspersed with random
     jumps of arbitrary size occurring at random times.
     Lévy processes comprise a natural subclass of semimartingales
     and of Markov processes of Feller type.
     There are many interesting examples - Brownian motion, simple
     and compound Poisson processes, α-stable processes,
     subordinated processes, financial processes, the relativistic
     process, the Riemann zeta process, . . .




Noise. Lévy processes are a good model of “noise” in random
    dynamical systems.
                               Input + Noise = Output
    Attempts to describe this differentially lead to stochastic calculus.
    A large class of Markov processes can be built as solutions of
    stochastic differential equations (SDEs) driven by Lévy noise; a
    simulation sketch is given below.
    Lévy-driven stochastic partial differential equations (SPDEs) are
    currently being studied with some intensity.
    Robust structure. Most applications utilise Lévy processes taking
    values in Euclidean space, but this can be replaced by a Hilbert
    space, a Banach space (both of these are important for SPDEs), a
    locally compact group, or a manifold. Quantised versions are
    non-commutative Lévy processes on quantum groups.
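As a rough illustration of "Input + Noise = Output" (not from the lecture), here is a minimal Python sketch of an Euler-Maruyama scheme for an SDE driven by Lévy noise, taking the driver to be Brownian motion plus a compound Poisson process. The drift a, diffusion coefficient b, jump rate and jump law are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def simulate_levy_sde(x0=0.0, T=1.0, n_steps=1000, lam=5.0):
    """Euler-Maruyama for dX_t = a(X_t)dt + b(X_t)dB_t + dL_t, where L is
    a compound Poisson process with rate lam and N(0,1)-distributed jumps.
    All coefficients are illustrative choices, not the lecture's."""
    a = lambda x: -x      # hypothetical mean-reverting drift
    b = lambda x: 0.3     # hypothetical constant diffusion coefficient
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dB = rng.normal(scale=np.sqrt(dt))   # Brownian increment
        n_jumps = rng.poisson(lam * dt)      # number of jumps in (t, t+dt]
        dL = rng.normal(size=n_jumps).sum()  # compound Poisson increment
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dB + dL
    return x

path = simulate_levy_sde()
print(path[-1])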



APPLICATIONS
These include:
     Turbulence via Burgers' equation (Bertoin)
     New examples of quantum field theories (Albeverio, Gottschalk,
     Wu)
     Viscoelasticity (Bouleau)
     Time series - Lévy-driven CARMA models (Brockwell, Marquardt)
     Stochastic resonance in non-linear signal processing (Patel,
     Kosko, Applebaum)
     Finance and insurance (a cast of thousands)
The biggest explosion of activity has been in mathematical finance.
Two major areas of activity are:

     option pricing in incomplete markets.
     interest rate modelling.

Some Basic Ideas of Probability

Notation. Our state space is Euclidean space $\mathbb{R}^d$. The inner product
between two vectors $x = (x_1, \ldots, x_d)$ and $y = (y_1, \ldots, y_d)$ is

$$(x, y) = \sum_{i=1}^{d} x_i y_i.$$

The associated norm (length of a vector) is

$$|x| = (x, x)^{\frac{1}{2}} = \left( \sum_{i=1}^{d} x_i^2 \right)^{\frac{1}{2}}.$$
Let $(\Omega, \mathcal{F}, P)$ be a probability space, so that $\Omega$ is a set, $\mathcal{F}$ is a σ-algebra
of subsets of $\Omega$ and $P$ is a probability measure defined on $(\Omega, \mathcal{F})$.
Random variables are measurable functions $X : \Omega \to \mathbb{R}^d$. The law of $X$
is $p_X$, where for each $A \in \mathcal{B}(\mathbb{R}^d)$, $p_X(A) = P(X \in A)$.
$(X_n, n \in \mathbb{N})$ are independent if for all
$i_1, i_2, \ldots, i_r \in \mathbb{N}$ and $A_1, A_2, \ldots, A_r \in \mathcal{B}(\mathbb{R}^d)$,

$$P(X_{i_1} \in A_1, X_{i_2} \in A_2, \ldots, X_{i_r} \in A_r) = P(X_{i_1} \in A_1) P(X_{i_2} \in A_2) \cdots P(X_{i_r} \in A_r).$$

If $X$ and $Y$ are independent, the law of $X + Y$ is given by convolution of
measures:

$$p_{X+Y} = p_X * p_Y, \quad \text{where} \quad (p_X * p_Y)(A) = \int_{\mathbb{R}^d} p_X(A - y)\, p_Y(dy).$$



Equivalently,

$$\int_{\mathbb{R}^d} g(y)\, (p_X * p_Y)(dy) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} g(x + y)\, p_X(dx)\, p_Y(dy),$$

for all $g \in B_b(\mathbb{R}^d)$ (the bounded Borel measurable functions on $\mathbb{R}^d$).
If $X$ and $Y$ are independent with densities $f_X$ and $f_Y$, respectively, then
$X + Y$ has density $f_{X+Y}$ given by convolution of functions:

$$f_{X+Y} = f_X * f_Y, \quad \text{where} \quad (f_X * f_Y)(x) = \int_{\mathbb{R}^d} f_X(x - y) f_Y(y)\, dy.$$




The characteristic function of $X$ is $\phi_X : \mathbb{R}^d \to \mathbb{C}$, where

$$\phi_X(u) = \int_{\mathbb{R}^d} e^{i(u,x)}\, p_X(dx).$$

Theorem (Kac's theorem)
$X_1, \ldots, X_n$ are independent if and only if

$$E\left[\exp\left(i \sum_{j=1}^{n} (u_j, X_j)\right)\right] = \phi_{X_1}(u_1) \cdots \phi_{X_n}(u_n)$$

for all $u_1, \ldots, u_n \in \mathbb{R}^d$.
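Here is a minimal Monte Carlo sketch (my illustration, not from the slides) of the "only if" direction of Kac's theorem: for independent $X_1, X_2$ the joint characteristic function factorises. The sample size, laws and test points $u_1, u_2$ are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X1 = rng.normal(size=n)        # X1 ~ N(0, 1), independent of X2
X2 = rng.exponential(size=n)   # X2 ~ Exp(1)

u1, u2 = 0.7, -1.3             # arbitrary test frequencies
joint = np.mean(np.exp(1j * (u1 * X1 + u2 * X2)))
prod = np.mean(np.exp(1j * u1 * X1)) * np.mean(np.exp(1j * u2 * X2))
print(abs(joint - prod))       # ~ 0, up to Monte Carlo error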




More generally, the characteristic function of a probability measure $\mu$
on $\mathbb{R}^d$ is

$$\phi_\mu(u) = \int_{\mathbb{R}^d} e^{i(u,x)}\, \mu(dx).$$

Important properties are:
 1   $\phi_\mu(0) = 1$.
 2   $\phi_\mu$ is positive definite, i.e. $\sum_{i,j} c_i \bar{c}_j \phi_\mu(u_i - u_j) \geq 0$, for all
     $c_i \in \mathbb{C}$, $u_i \in \mathbb{R}^d$, $1 \leq i, j \leq n$, $n \in \mathbb{N}$.
 3   $\phi_\mu$ is uniformly continuous. (Hint: look at $|\phi_\mu(u + h) - \phi_\mu(u)|$ and
     use dominated convergence.)

Also $\mu \to \phi_\mu$ is injective.
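To make property (2) concrete, here is a small numerical check (an illustration of mine): for $\phi_\mu(u) = e^{-u^2/2}$, the characteristic function of $N(0,1)$, the matrix $[\phi_\mu(u_i - u_j)]$ has non-negative eigenvalues. The grid of test points is arbitrary.

import numpy as np

phi = lambda u: np.exp(-u**2 / 2)   # characteristic function of N(0, 1)
u = np.linspace(-3, 3, 25)          # arbitrary test points u_1, ..., u_n
M = phi(u[:, None] - u[None, :])    # the matrix [phi(u_i - u_j)]
eigs = np.linalg.eigvalsh(M)        # M is real symmetric here
print(eigs.min() >= -1e-10)         # True: M is positive semi-definite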




Conversely, Bochner's theorem states that if $\phi : \mathbb{R}^d \to \mathbb{C}$ satisfies (1),
(2) and is continuous at $u = 0$, then it is the characteristic function of
some probability measure $\mu$ on $\mathbb{R}^d$.
$\psi : \mathbb{R}^d \to \mathbb{C}$ is conditionally positive definite if for all $n \in \mathbb{N}$ and
$c_1, \ldots, c_n \in \mathbb{C}$ for which $\sum_{j=1}^{n} c_j = 0$ we have

$$\sum_{j,k=1}^{n} c_j \bar{c}_k \psi(u_j - u_k) \geq 0,$$

for all $u_1, \ldots, u_n \in \mathbb{R}^d$.
Note: "conditionally positive definite" is sometimes called "negative
definite" (with a minus sign).
$\psi : \mathbb{R}^d \to \mathbb{C}$ will be said to be hermitian if $\psi(u) = \overline{\psi(-u)}$, for all $u \in \mathbb{R}^d$.



Theorem (Schoenberg correspondence)
$\psi : \mathbb{R}^d \to \mathbb{C}$ is hermitian and conditionally positive definite if and only if
$e^{t\psi}$ is positive definite for each $t > 0$.

Proof. We only give the easy part here.
Suppose that $e^{t\psi}$ is positive definite for all $t > 0$. Fix $n \in \mathbb{N}$ and choose
$c_1, \ldots, c_n$ and $u_1, \ldots, u_n$ as above. Since $\sum_{j=1}^{n} c_j = 0$, we may
subtract $\sum_{j,k=1}^{n} c_j \bar{c}_k = \left| \sum_{j=1}^{n} c_j \right|^2 = 0$ to find that for each $t > 0$,

$$\frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{t\psi(u_j - u_k)} - 1 \right) \geq 0,$$

and so

$$\sum_{j,k=1}^{n} c_j \bar{c}_k \psi(u_j - u_k) = \lim_{t \to 0} \frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{t\psi(u_j - u_k)} - 1 \right) \geq 0. \qquad \Box$$
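A numerical sanity check of both sides of the correspondence (illustrative; $\psi(u) = -u^2$ is an assumed example of a hermitian, conditionally positive definite function): $e^{t\psi}$ gives a positive definite (Gaussian) kernel for each $t > 0$, while the quadratic form of $\psi$ itself is non-negative on vectors with $\sum_j c_j = 0$.

import numpy as np

psi = lambda u: -u**2            # hermitian, conditionally positive definite
u = np.linspace(-2, 2, 15)
D = u[:, None] - u[None, :]      # matrix of differences u_j - u_k

# e^{t psi} should be positive definite for each t > 0 ...
for t in (0.1, 1.0, 10.0):
    eigs = np.linalg.eigvalsh(np.exp(t * psi(D)))  # Gaussian kernel matrix
    print(t, eigs.min() >= -1e-10)                 # True for every t

# ... while psi itself is positive only on vectors with sum(c) = 0.
rng = np.random.default_rng(2)
c = rng.normal(size=len(u)) + 1j * rng.normal(size=len(u))
c -= c.mean()                            # enforce sum(c) = 0
quad = np.real(np.conj(c) @ psi(D) @ c)  # quadratic form; real since psi(D) is real symmetric
print(quad >= -1e-10)                    # True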
Infinite Divisibility

We study this first because a Lévy process is infinite divisibility in
motion, i.e. infinite divisibility is the underlying probabilistic idea which
a Lévy process embodies dynamically.
Let $\mu$ be a probability measure on $\mathbb{R}^d$. Define $\mu^{*n} = \mu * \cdots * \mu$ ($n$
times). We say that $\mu$ has a convolution $n$th root if there exists a
probability measure $\mu^{\frac{1}{n}}$ for which $(\mu^{\frac{1}{n}})^{*n} = \mu$.
$\mu$ is infinitely divisible if it has a convolution $n$th root for all $n \in \mathbb{N}$. In this
case $\mu^{\frac{1}{n}}$ is unique.




Theorem
$\mu$ is infinitely divisible iff for all $n \in \mathbb{N}$, there exists a probability measure
$\mu_n$ with characteristic function $\phi_n$ such that

$$\phi_\mu(u) = (\phi_n(u))^n,$$

for all $u \in \mathbb{R}^d$. Moreover $\mu_n = \mu^{\frac{1}{n}}$.

Proof. If $\mu$ is infinitely divisible, take $\phi_n = \phi_{\mu^{1/n}}$. Conversely, for each
$n \in \mathbb{N}$, by Fubini's theorem,

$$\phi_\mu(u) = \int_{\mathbb{R}^d} \cdots \int_{\mathbb{R}^d} e^{i(u, y_1 + \cdots + y_n)}\, \mu_n(dy_1) \cdots \mu_n(dy_n) = \int_{\mathbb{R}^d} e^{i(u,y)}\, \mu_n^{*n}(dy).$$

But $\phi_\mu(u) = \int_{\mathbb{R}^d} e^{i(u,y)}\, \mu(dy)$ and $\phi$ determines $\mu$ uniquely. Hence
$\mu = \mu_n^{*n}$. $\qquad \Box$
- If $\mu$ and $\nu$ are each infinitely divisible, then so is $\mu * \nu$.
- If $(\mu_n, n \in \mathbb{N})$ are infinitely divisible and $\mu_n \stackrel{w}{\Rightarrow} \mu$, then $\mu$ is infinitely
divisible.
[Note: Weak convergence. $\mu_n \stackrel{w}{\Rightarrow} \mu$ means

$$\lim_{n \to \infty} \int_{\mathbb{R}^d} f(x)\, \mu_n(dx) = \int_{\mathbb{R}^d} f(x)\, \mu(dx),$$

for each $f \in C_b(\mathbb{R}^d)$.]
A random variable $X$ is infinitely divisible if its law $p_X$ is infinitely
divisible, i.e. $X \stackrel{d}{=} Y_1^{(n)} + \cdots + Y_n^{(n)}$, where $Y_1^{(n)}, \ldots, Y_n^{(n)}$ are i.i.d., for each
$n \in \mathbb{N}$.




 Dave Applebaum (Sheffield UK)              Lecture 1                      December 2011   15 / 40
- If µ and ν are each infinitely divisible, then so is µ ∗ ν.
                                                            w
- If (µn , n ∈ N) are infinitely divisible and µn ⇒ µ, then µ is infinitely
divisible.
                                       w
[Note: Weak convergence. µn ⇒ µ means

                          lim     f (x)µn (dx) =            f (x)µ(dx),
                         n→∞ Rd                        Rd

for each f ∈ Cb (Rd ).]
A random variable X is infinitely divisible if its law pX is infinitely
             d  (n)          (n)          (n)         (n)
divisible, X = Y1 + · · · + Yn , where Y1 , . . . , Yn are i.i.d., for each
n ∈ N.




 Dave Applebaum (Sheffield UK)              Lecture 1                      December 2011   15 / 40
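As a quick numerical illustration of this definition (a sketch of our own, not from the lecture): an exponential random variable Exp$(1)$ is infinitely divisible, because by the additivity of the gamma family a sum of $n$ i.i.d. Gamma$(1/n, 1)$ variables is again Gamma$(1, 1) = $ Exp$(1)$. The snippet below (seed and sample sizes are arbitrary choices) compares quantiles of the two constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 10, 100_000

# Sum of n i.i.d. Gamma(1/n, 1) variables: by additivity of the gamma
# family, each row-sum has the Gamma(1, 1) = Exp(1) distribution.
parts = rng.gamma(shape=1.0 / n, scale=1.0, size=(N, n)).sum(axis=1)
direct = rng.exponential(scale=1.0, size=N)

# A few quantiles of the two samples should approximately agree.
for q in (0.25, 0.5, 0.9):
    print(q, np.quantile(parts, q), np.quantile(direct, q))
```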
Examples of Infinite Divisibility

In the following, we will demonstrate infinite divisibility of a random variable $X$ by finding i.i.d. $Y_1^{(n)}, \ldots, Y_n^{(n)}$ such that $X \stackrel{d}{=} Y_1^{(n)} + \cdots + Y_n^{(n)}$, for each $n \in \mathbb{N}$.

Example 1 - Gaussian Random Variables

Let $X = (X_1, \ldots, X_d)$ be a random vector. We say that it is (non-degenerate) Gaussian if there exists a vector $m \in \mathbb{R}^d$ and a strictly positive-definite symmetric $d \times d$ matrix $A$ such that $X$ has a pdf (probability density function) of the form

$$f(x) = \frac{1}{(2\pi)^{d/2}\sqrt{\det(A)}} \exp\left(-\frac{1}{2}\big(x - m, A^{-1}(x - m)\big)\right), \qquad (1.1)$$

for all $x \in \mathbb{R}^d$.

In this case we will write $X \sim N(m, A)$. The vector $m$ is the mean of $X$, so $m = E(X)$, and $A$ is the covariance matrix, so that $A = E((X - m)(X - m)^T)$.

Dave Applebaum (Sheffield UK)    Lecture 1    December 2011    16 / 40
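As a quick check of (1.1) (a sketch; the values of $m$, $A$ and the test point $x$ below are arbitrary choices of ours), one can evaluate the density directly and compare with `scipy.stats.multivariate_normal`:

```python
import numpy as np
from scipy.stats import multivariate_normal

m = np.array([1.0, -2.0])               # mean vector
A = np.array([[2.0, 0.5], [0.5, 1.0]])  # strictly positive-definite, symmetric
x = np.array([0.0, 0.0])                # test point
d = len(m)

# Density (1.1): (2π)^{-d/2} det(A)^{-1/2} exp(-(x - m, A^{-1}(x - m))/2)
quad_form = (x - m) @ np.linalg.solve(A, x - m)
f = np.exp(-0.5 * quad_form) / ((2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(A)))

print(f)
print(multivariate_normal(mean=m, cov=A).pdf(x))  # should agree
```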
A standard calculation yields

$$\phi_X(u) = \exp\left\{i(m, u) - \frac{1}{2}(u, Au)\right\}, \qquad (1.2)$$

and hence

$$(\phi_X(u))^{\frac{1}{n}} = \exp\left\{i\left(\frac{m}{n}, u\right) - \frac{1}{2}\left(u, \frac{1}{n}Au\right)\right\},$$

so we see that $X$ is infinitely divisible with each $Y_j^{(n)} \sim N\!\left(\frac{m}{n}, \frac{1}{n}A\right)$ for each $1 \leq j \leq n$.

We say that $X$ is a standard normal whenever $X \sim N(0, \sigma^2 I)$ for some $\sigma > 0$.

We say that $X$ is degenerate Gaussian if (1.2) holds with $\det(A) = 0$, and these random variables are also infinitely divisible.

Dave Applebaum (Sheffield UK)    Lecture 1    December 2011    17 / 40
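This decomposition is easy to verify by simulation (a sketch; the parameters, seed and sample size are our own choices): a sum of $n$ independent $N(\frac{m}{n}, \frac{1}{n}A)$ vectors should reproduce the mean $m$ and covariance $A$ of $N(m, A)$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = np.array([1.0, -2.0])
A = np.array([[2.0, 0.5], [0.5, 1.0]])
n, N = 5, 200_000

# Each row of Y is a sum of n i.i.d. N(m/n, A/n) vectors.
Y = rng.multivariate_normal(m / n, A / n, size=(N, n)).sum(axis=1)

print(Y.mean(axis=0))           # ≈ m
print(np.cov(Y, rowvar=False))  # ≈ A
```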
Example 2 - Poisson Random Variables

In this case, we take $d = 1$ and consider a random variable $X$ taking values in the set $\mathbb{N} \cup \{0\}$. We say that it is Poisson if there exists $c > 0$ for which

$$P(X = n) = \frac{c^n}{n!}e^{-c}.$$

In this case we will write $X \sim \pi(c)$. We have $E(X) = \mathrm{Var}(X) = c$. It is easy to verify that

$$\phi_X(u) = \exp[c(e^{iu} - 1)],$$

from which we deduce that $X$ is infinitely divisible with each $Y_j^{(n)} \sim \pi\left(\frac{c}{n}\right)$, for $1 \leq j \leq n$, $n \in \mathbb{N}$.

Dave Applebaum (Sheffield UK)    Lecture 1    December 2011    18 / 40
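Both claims can be checked numerically (a sketch; $c$, $u$, the seed and the sample sizes are arbitrary choices of ours): the empirical characteristic function should be close to $\exp[c(e^{iu} - 1)]$, and a sum of $n$ i.i.d. $\pi(c/n)$ variables should again behave like $\pi(c)$.

```python
import numpy as np

rng = np.random.default_rng(2)
c, N = 3.0, 200_000
X = rng.poisson(c, size=N)

u = 0.7
empirical = np.exp(1j * u * X).mean()       # Monte Carlo estimate of E(e^{iuX})
exact = np.exp(c * (np.exp(1j * u) - 1.0))  # exp[c(e^{iu} - 1)]
print(empirical, exact)

# Divisibility: a sum of n i.i.d. π(c/n) variables is again π(c).
n = 6
parts = rng.poisson(c / n, size=(N, n)).sum(axis=1)
print(parts.mean(), parts.var())            # both ≈ c
```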
Example 3 - Compound Poisson Random Variables

Let $(Z(n), n \in \mathbb{N})$ be a sequence of i.i.d. random variables taking values in $\mathbb{R}^d$ with common law $\mu_Z$ and let $N \sim \pi(c)$ be a Poisson random variable which is independent of all the $Z(n)$'s. The compound Poisson random variable $X$ is defined as follows:

$$X := \begin{cases} 0 & \text{if } N = 0, \\ Z(1) + \cdots + Z(N) & \text{if } N > 0. \end{cases}$$

Theorem
For each $u \in \mathbb{R}^d$,

$$\phi_X(u) = \exp\left[\int_{\mathbb{R}^d} (e^{i(u,y)} - 1)\,c\,\mu_Z(dy)\right].$$

Dave Applebaum (Sheffield UK)    Lecture 1    December 2011    19 / 40
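Sampling from this definition is direct; the sketch below (our own construction, with $d = 1$ and $\mu_Z = N(0, 1)$) draws compound Poisson variates and checks the standard moment identities $E(X) = cE(Z(1)) = 0$ and $\mathrm{Var}(X) = cE(Z(1)^2) = c$.

```python
import numpy as np

rng = np.random.default_rng(3)
c, N = 2.0, 50_000

# X = 0 if N = 0, else Z(1) + ... + Z(N), with N ~ π(c) and Z(j) i.i.d. N(0, 1).
counts = rng.poisson(c, size=N)
X = np.array([rng.normal(size=k).sum() for k in counts])  # empty sum is 0.0

print(X.mean())  # ≈ c · E(Z(1)) = 0
print(X.var())   # ≈ c · E(Z(1)²) = c = 2.0
```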
Proof. Let $\phi_Z$ be the common characteristic function of the $Z_n$'s. By conditioning and using independence we find

$$\begin{aligned}
\phi_X(u) &= \sum_{n=0}^{\infty} E\big(e^{i(u, Z(1)+\cdots+Z(N))} \,\big|\, N = n\big)\,P(N = n) \\
&= \sum_{n=0}^{\infty} E\big(e^{i(u, Z(1)+\cdots+Z(n))}\big)\,e^{-c}\frac{c^n}{n!} \\
&= e^{-c}\sum_{n=0}^{\infty} \frac{[c\phi_Z(u)]^n}{n!} \\
&= \exp[c(\phi_Z(u) - 1)],
\end{aligned}$$

and the result follows on writing $\phi_Z(u) = \int e^{i(u,y)}\mu_Z(dy)$. $\Box$

If $X$ is compound Poisson as above, we write $X \sim \pi(c, \mu_Z)$. It is clearly infinitely divisible with each $Y_j^{(n)} \sim \pi\left(\frac{c}{n}, \mu_Z\right)$, for $1 \leq j \leq n$.

Dave Applebaum (Sheffield UK)    Lecture 1    December 2011    20 / 40
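The closed form $\exp[c(\phi_Z(u) - 1)]$ can also be tested empirically (a sketch, again taking $\mu_Z = N(0,1)$ so that $\phi_Z(u) = e^{-u^2/2}$; all numerical values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(4)
c, N, u = 2.0, 50_000, 0.5

counts = rng.poisson(c, size=N)
X = np.array([rng.normal(size=k).sum() for k in counts])

empirical = np.exp(1j * u * X).mean()  # Monte Carlo estimate of φ_X(u)
phi_Z = np.exp(-u**2 / 2.0)            # characteristic function of N(0, 1)
exact = np.exp(c * (phi_Z - 1.0))      # exp[c(φ_Z(u) − 1)]
print(empirical, exact)
```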
The Lévy-Khintchine Formula

de Finetti (1920s) suggested that the most general infinitely divisible random variable could be written $X = Y + W$, where $Y$ and $W$ are independent, $Y \sim N(m, A)$, $W \sim \pi(c, \mu_Z)$. Then $\phi_X(u) = e^{\eta(u)}$, where

$$\eta(u) = i(m, u) - \frac{1}{2}(u, Au) + \int_{\mathbb{R}^d} (e^{i(u,y)} - 1)\,c\,\mu_Z(dy). \qquad (1.3)$$

This is WRONG! $\nu(\cdot) = c\mu_Z(\cdot)$ is a finite measure here.

Lévy and Khintchine showed that $\nu$ can be σ-finite, provided it is what is now called a Lévy measure on $\mathbb{R}^d - \{0\} = \{x \in \mathbb{R}^d, x \neq 0\}$, i.e.

$$\int (|y|^2 \wedge 1)\,\nu(dy) < \infty, \qquad (1.4)$$

(where $a \wedge b := \min\{a, b\}$, for $a, b \in \mathbb{R}$). Since $|y|^2 \wedge \epsilon \leq |y|^2 \wedge 1$ whenever $0 < \epsilon \leq 1$, it follows from (1.4) that

$$\nu((-\epsilon, \epsilon)^c) < \infty \quad \text{for all } \epsilon > 0.$$

Dave Applebaum (Sheffield UK)    Lecture 1    December 2011    21 / 40
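Condition (1.4) can be seen in action for a standard example (a sketch of ours): the symmetric $\alpha$-stable Lévy measure on $\mathbb{R} - \{0\}$ has density proportional to $|y|^{-1-\alpha}$, so $\nu$ has infinite total mass for $\alpha \in (0, 2)$, yet (1.4) holds. A quick numerical integration confirms this:

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.5  # any value in (0, 2)

# ν(dy) = |y|^{-1-α} dy on R − {0} (normalising constant omitted).
# ∫ (|y|² ∧ 1) ν(dy) should be finite even though ν(R − {0}) = ∞.
integrand = lambda y: min(y * y, 1.0) * y ** (-1.0 - alpha)

small, _ = quad(integrand, 0.0, 1.0)     # = 1/(2 − α), finite for α < 2
large, _ = quad(integrand, 1.0, np.inf)  # = 1/α, finite for α > 0
print(2.0 * (small + large))             # total (|y|² ∧ 1)-mass, by symmetry
```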
Here is the fundamental result of this lecture:

Theorem (Lévy-Khintchine)
A Borel probability measure $\mu$ on $\mathbb{R}^d$ is infinitely divisible if and only if there exist a vector $b \in \mathbb{R}^d$, a non-negative symmetric $d \times d$ matrix $A$ and a Lévy measure $\nu$ on $\mathbb{R}^d - \{0\}$ such that for all $u \in \mathbb{R}^d$,

$$\phi_\mu(u) = \exp\left[\, i(b, u) - \frac{1}{2}(u, Au) + \int_{\mathbb{R}^d - \{0\}} \big(e^{i(u,y)} - 1 - i(u, y)\mathbf{1}_{\hat{B}}(y)\big)\,\nu(dy) \right], \qquad (1.5)$$

where $\hat{B} = B_1(0) = \{y \in \mathbb{R}^d ; |y| < 1\}$.

In particular, any mapping of the form (1.5) is the characteristic function of an infinitely divisible probability measure on $\mathbb{R}^d$.

The triple $(b, A, \nu)$ is called the characteristics of the infinitely divisible random variable $X$. Define $\eta = \log \phi_\mu$, where we take the principal part of the logarithm. $\eta$ is called the Lévy symbol.

Dave Applebaum (Sheffield UK)    Lecture 1    December 2011    22 / 40
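To make (1.5) concrete, here is a small sketch (entirely our own construction, not from the lecture) that evaluates the Lévy symbol $\eta(u)$ for given characteristics $(b, A, \nu)$ in the simple case where $\nu$ is a finite discrete measure $\nu = \sum_k w_k \delta_{y_k}$; then $\phi_\mu(u) = e^{\eta(u)}$.

```python
import numpy as np

def levy_symbol(u, b, A, jump_sizes, jump_weights):
    """Lévy symbol η(u) for characteristics (b, A, ν), with ν a finite
    discrete measure ν = Σ_k w_k δ_{y_k} (an illustrative special case)."""
    u = np.asarray(u, dtype=float)
    eta = 1j * (b @ u) - 0.5 * (u @ A @ u)
    for y, w in zip(jump_sizes, jump_weights):
        # Compensating term i(u, y)·1_{B̂}(y), with B̂ the open unit ball.
        comp = 1j * (u @ y) if np.linalg.norm(y) < 1.0 else 0.0
        eta += w * (np.exp(1j * (u @ y)) - 1.0 - comp)
    return eta

# Example in d = 2 with one jump size inside B̂ and one outside.
b = np.array([0.1, 0.0])
A = np.eye(2)
ys = [np.array([0.5, 0.0]), np.array([2.0, 1.0])]
ws = [0.3, 0.2]
u = np.array([1.0, -1.0])
print(np.exp(levy_symbol(u, b, A, ys, ws)))  # φ_µ(u) = e^{η(u)}
```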
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)
Koc1(dba)

More Related Content

Recently uploaded

Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessPixlogix Infotech
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 

Recently uploaded (20)

Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 

Featured

Product Design Trends in 2024 | Teenage Engineerings
Product Design Trends in 2024 | Teenage EngineeringsProduct Design Trends in 2024 | Teenage Engineerings
Product Design Trends in 2024 | Teenage EngineeringsPixeldarts
 
How Race, Age and Gender Shape Attitudes Towards Mental Health
How Race, Age and Gender Shape Attitudes Towards Mental HealthHow Race, Age and Gender Shape Attitudes Towards Mental Health
How Race, Age and Gender Shape Attitudes Towards Mental HealthThinkNow
 
AI Trends in Creative Operations 2024 by Artwork Flow.pdf
AI Trends in Creative Operations 2024 by Artwork Flow.pdfAI Trends in Creative Operations 2024 by Artwork Flow.pdf
AI Trends in Creative Operations 2024 by Artwork Flow.pdfmarketingartwork
 
PEPSICO Presentation to CAGNY Conference Feb 2024
PEPSICO Presentation to CAGNY Conference Feb 2024PEPSICO Presentation to CAGNY Conference Feb 2024
PEPSICO Presentation to CAGNY Conference Feb 2024Neil Kimberley
 
Content Methodology: A Best Practices Report (Webinar)
Content Methodology: A Best Practices Report (Webinar)Content Methodology: A Best Practices Report (Webinar)
Content Methodology: A Best Practices Report (Webinar)contently
 
How to Prepare For a Successful Job Search for 2024
How to Prepare For a Successful Job Search for 2024How to Prepare For a Successful Job Search for 2024
How to Prepare For a Successful Job Search for 2024Albert Qian
 
Social Media Marketing Trends 2024 // The Global Indie Insights
Social Media Marketing Trends 2024 // The Global Indie InsightsSocial Media Marketing Trends 2024 // The Global Indie Insights
Social Media Marketing Trends 2024 // The Global Indie InsightsKurio // The Social Media Age(ncy)
 
Trends In Paid Search: Navigating The Digital Landscape In 2024
Trends In Paid Search: Navigating The Digital Landscape In 2024Trends In Paid Search: Navigating The Digital Landscape In 2024
Trends In Paid Search: Navigating The Digital Landscape In 2024Search Engine Journal
 
5 Public speaking tips from TED - Visualized summary
5 Public speaking tips from TED - Visualized summary5 Public speaking tips from TED - Visualized summary
5 Public speaking tips from TED - Visualized summarySpeakerHub
 
ChatGPT and the Future of Work - Clark Boyd
ChatGPT and the Future of Work - Clark Boyd ChatGPT and the Future of Work - Clark Boyd
ChatGPT and the Future of Work - Clark Boyd Clark Boyd
 
Getting into the tech field. what next
Getting into the tech field. what next Getting into the tech field. what next
Getting into the tech field. what next Tessa Mero
 
Google's Just Not That Into You: Understanding Core Updates & Search Intent
Google's Just Not That Into You: Understanding Core Updates & Search IntentGoogle's Just Not That Into You: Understanding Core Updates & Search Intent
Google's Just Not That Into You: Understanding Core Updates & Search IntentLily Ray
 
Time Management & Productivity - Best Practices
Time Management & Productivity -  Best PracticesTime Management & Productivity -  Best Practices
Time Management & Productivity - Best PracticesVit Horky
 
The six step guide to practical project management
The six step guide to practical project managementThe six step guide to practical project management
The six step guide to practical project managementMindGenius
 
Beginners Guide to TikTok for Search - Rachel Pearson - We are Tilt __ Bright...
Beginners Guide to TikTok for Search - Rachel Pearson - We are Tilt __ Bright...Beginners Guide to TikTok for Search - Rachel Pearson - We are Tilt __ Bright...
Beginners Guide to TikTok for Search - Rachel Pearson - We are Tilt __ Bright...RachelPearson36
 
Unlocking the Power of ChatGPT and AI in Testing - A Real-World Look, present...
Unlocking the Power of ChatGPT and AI in Testing - A Real-World Look, present...Unlocking the Power of ChatGPT and AI in Testing - A Real-World Look, present...
Unlocking the Power of ChatGPT and AI in Testing - A Real-World Look, present...Applitools
 
12 Ways to Increase Your Influence at Work
12 Ways to Increase Your Influence at Work12 Ways to Increase Your Influence at Work
12 Ways to Increase Your Influence at WorkGetSmarter
 

Featured (20)

Product Design Trends in 2024 | Teenage Engineerings
Product Design Trends in 2024 | Teenage EngineeringsProduct Design Trends in 2024 | Teenage Engineerings
Product Design Trends in 2024 | Teenage Engineerings
 
How Race, Age and Gender Shape Attitudes Towards Mental Health
How Race, Age and Gender Shape Attitudes Towards Mental HealthHow Race, Age and Gender Shape Attitudes Towards Mental Health
  • 1. Lectures on Lévy Processes and Stochastic Calculus (Koc University). Lecture 1: Infinite Divisibility. David Applebaum, School of Mathematics and Statistics, University of Sheffield, UK. 5th December 2011.

  • 2-5. Introduction.
A Lévy process is essentially a stochastic process with stationary and independent increments. The basic theory was developed, principally by Paul Lévy, in the 1930s. In the past 20 years there has been a renaissance of interest and a plethora of books, articles and conferences. Why? There are both theoretical and practical reasons.

  • 6-8. THEORETICAL.
Lévy processes are the simplest generic class of processes which have (a.s.) continuous paths interspersed with random jumps of arbitrary size occurring at random times.
Lévy processes comprise a natural subclass of semimartingales and of Markov processes of Feller type.
There are many interesting examples: Brownian motion, simple and compound Poisson processes, α-stable processes, subordinated processes, financial processes, the relativistic process, the Riemann zeta process, ...

  • 9-15. Noise.
Lévy processes are a good model of “noise” in random dynamical systems: Input + Noise = Output. Attempts to describe this differentially lead to stochastic calculus. A large class of Markov processes can be built as solutions of stochastic differential equations (SDEs) driven by Lévy noise. Lévy-driven stochastic partial differential equations (SPDEs) are currently being studied with some intensity.
Robust structure.
Most applications utilise Lévy processes taking values in Euclidean space, but this can be replaced by a Hilbert space, a Banach space (both of these are important for SPDEs), a locally compact group, or a manifold. Quantised versions are non-commutative Lévy processes on quantum groups.

  • 16-24. APPLICATIONS.
These include:
Turbulence via Burgers' equation (Bertoin)
New examples of quantum field theories (Albeverio, Gottschalk, Wu)
Viscoelasticity (Bouleau)
Time series - Lévy-driven CARMA models (Brockwell, Marquardt)
Stochastic resonance in non-linear signal processing (Patel, Kosko, Applebaum)
Finance and insurance (a cast of thousands)
The biggest explosion of activity has been in mathematical finance. Two major areas of activity are: option pricing in incomplete markets, and interest rate modelling.

  • 25-27. Some Basic Ideas of Probability.
Notation. Our state space is Euclidean space R^d. The inner product between two vectors x = (x_1, ..., x_d) and y = (y_1, ..., y_d) is
(x, y) = \sum_{i=1}^{d} x_i y_i.
The associated norm (length of a vector) is
|x| = (x, x)^{1/2} = \left( \sum_{i=1}^{d} x_i^2 \right)^{1/2}.

  • 28-31. Let (Ω, F, P) be a probability space, so that Ω is a set, F is a σ-algebra of subsets of Ω and P is a probability measure defined on (Ω, F).
Random variables are measurable functions X : Ω → R^d. The law of X is p_X, where for each A ∈ B(R^d), p_X(A) = P(X ∈ A).
(X_n, n ∈ N) are independent if for all i_1, i_2, ..., i_r ∈ N and A_{i_1}, A_{i_2}, ..., A_{i_r} ∈ B(R^d),
P(X_{i_1} ∈ A_{i_1}, X_{i_2} ∈ A_{i_2}, ..., X_{i_r} ∈ A_{i_r}) = P(X_{i_1} ∈ A_{i_1}) P(X_{i_2} ∈ A_{i_2}) · · · P(X_{i_r} ∈ A_{i_r}).
If X and Y are independent, the law of X + Y is given by convolution of measures, p_{X+Y} = p_X ∗ p_Y, where
(p_X ∗ p_Y)(A) = \int_{R^d} p_X(A − y) p_Y(dy).

  • 32-34. Equivalently,
\int_{R^d} g(y) (p_X ∗ p_Y)(dy) = \int_{R^d} \int_{R^d} g(x + y) p_X(dx) p_Y(dy),
for all g ∈ B_b(R^d) (the bounded Borel measurable functions on R^d).
If X and Y are independent with densities f_X and f_Y, respectively, then X + Y has density f_{X+Y} given by convolution of functions: f_{X+Y} = f_X ∗ f_Y, where
(f_X ∗ f_Y)(x) = \int_{R^d} f_X(x − y) f_Y(y) dy.

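A minimal numerical sketch of the density convolution formula in d = 1 (the grid step h and the Uniform(0,1) test case are illustrative choices, not part of the slides): the density of the sum of two independent Uniform(0,1) variables should be the triangular density min(z, 2 − z) on [0, 2].

```python
import numpy as np

# Tabulate f_X = f_Y = density of Uniform(0,1) on a grid of step h.
h = 0.001
x = np.arange(0.0, 1.0, h)
f = np.ones_like(x)

# Riemann-sum approximation of (f_X * f_Y)(z) = int f_X(z - y) f_Y(y) dy.
conv = np.convolve(f, f) * h
z = np.arange(len(conv)) * h          # support of X + Y: [0, 2)

# Exact density of the sum: the triangle min(z, 2 - z) on [0, 2].
exact = np.minimum(z, 2.0 - z)
print(np.max(np.abs(conv - exact)))   # ~ h, i.e. small discretisation error
```
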
  • 35-37. The characteristic function of X is φ_X : R^d → C, where
φ_X(u) = \int_{R^d} e^{i(u,x)} p_X(dx).
Theorem (Kac's theorem). X_1, ..., X_n are independent if and only if
E\left[ \exp\left( i \sum_{j=1}^{n} (u_j, X_j) \right) \right] = φ_{X_1}(u_1) · · · φ_{X_n}(u_n)
for all u_1, ..., u_n ∈ R^d.

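Kac's criterion is easy to check empirically by Monte Carlo. The sketch below (the standard normal test variables and the points u_1, u_2 are arbitrary choices) compares the joint characteristic function of two independent samples with the product of the marginal ones, using the known fact that φ(u) = e^{−u²/2} for N(0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Two independent standard normal samples (an assumed test case).
X1, X2 = rng.standard_normal(n), rng.standard_normal(n)

u1, u2 = 0.7, -1.3
# Empirical joint characteristic function E[exp(i(u1*X1 + u2*X2))].
joint = np.mean(np.exp(1j * (u1 * X1 + u2 * X2)))

# Product of the marginals, using phi(u) = exp(-u^2/2) for N(0,1).
product = np.exp(-u1**2 / 2) * np.exp(-u2**2 / 2)
print(joint, product)   # agree up to Monte Carlo error ~ 1/sqrt(n)
```
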
  • 38-42. More generally, the characteristic function of a probability measure µ on R^d is
φ_µ(u) = \int_{R^d} e^{i(u,x)} µ(dx).
Important properties are:
1. φ_µ(0) = 1.
2. φ_µ is positive definite, i.e. \sum_{i,j} c_i \bar{c}_j φ_µ(u_i − u_j) ≥ 0, for all c_i ∈ C, u_i ∈ R^d, 1 ≤ i, j ≤ n, n ∈ N.
3. φ_µ is uniformly continuous. (Hint: look at |φ_µ(u + h) − φ_µ(u)| and use dominated convergence.)
Also µ → φ_µ is injective.

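Positive definiteness (property 2) can be checked numerically by assembling the Gram matrix φ_µ(u_i − u_j) at some test points and inspecting its eigenvalues; the Gaussian characteristic function and the chosen grid below are illustrative assumptions.

```python
import numpy as np

phi = lambda u: np.exp(-u**2 / 2)    # characteristic function of N(0,1)

u = np.linspace(-3.0, 3.0, 25)       # arbitrary test points u_1, ..., u_n
M = phi(u[:, None] - u[None, :])     # Gram matrix with entries phi(u_i - u_j)

# Positive definiteness: the (Hermitian) Gram matrix has no eigenvalue
# below zero, apart from round-off on the order of 1e-16.
print(np.linalg.eigvalsh(M).min())
```
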
  • 43-47. Conversely, Bochner's theorem states that if φ : R^d → C satisfies (1), (2) and is continuous at u = 0, then it is the characteristic function of some probability measure µ on R^d.
ψ : R^d → C is conditionally positive definite if for all n ∈ N and c_1, ..., c_n ∈ C for which \sum_{j=1}^{n} c_j = 0 we have
\sum_{j,k=1}^{n} c_j \bar{c}_k ψ(u_j − u_k) ≥ 0,
for all u_1, ..., u_n ∈ R^d.
Note: conditionally positive definite is sometimes called negative definite (with a minus sign).
ψ : R^d → C will be said to be hermitian if ψ(u) = \overline{ψ(−u)}, for all u ∈ R^d.

  • 48-53. Theorem (Schoenberg correspondence). ψ : R^d → C is hermitian and conditionally positive definite if and only if e^{tψ} is positive definite for each t > 0.
Proof. We only give the easy part here. Suppose that e^{tψ} is positive definite for all t > 0. Fix n ∈ N and choose c_1, ..., c_n and u_1, ..., u_n as above. Since \sum_{j=1}^{n} c_j = 0, the constant term contributes \sum_{j,k} c_j \bar{c}_k = |\sum_j c_j|^2 = 0, so positive definiteness of e^{tψ} gives, for each t > 0,
\frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{tψ(u_j − u_k)} − 1 \right) ≥ 0,
and so
\sum_{j,k=1}^{n} c_j \bar{c}_k ψ(u_j − u_k) = \lim_{t→0} \frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{tψ(u_j − u_k)} − 1 \right) ≥ 0.  □

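Both sides of the correspondence can be sanity-checked numerically. The sketch below uses the Cauchy exponent ψ(u) = −|u| on R (this choice of ψ, the grid, and the random test vector are all illustrative assumptions): e^{tψ} should yield positive semidefinite Gram matrices for every t > 0, and ψ itself should give a nonnegative quadratic form on vectors c with \sum_j c_j = 0.

```python
import numpy as np

psi = lambda u: -np.abs(u)           # Cauchy exponent: hermitian, cond. pos. def.

u = np.linspace(-4.0, 4.0, 30)
D = u[:, None] - u[None, :]          # matrix of differences u_j - u_k

# e^{t psi} should be positive definite for every t > 0 ...
for t in (0.1, 1.0, 10.0):
    print(np.linalg.eigvalsh(np.exp(t * psi(D))).min())   # >= 0 up to round-off

# ... and psi itself conditionally positive definite: test with a random
# real vector c projected onto the hyperplane sum(c) = 0.
rng = np.random.default_rng(1)
c = rng.standard_normal(30)
c -= c.mean()                         # now sum(c) = 0
print(c @ psi(D) @ c)                 # >= 0
```
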
  • 54-58. Infinite Divisibility.
We study this first because a Lévy process is infinite divisibility in motion, i.e. infinite divisibility is the underlying probabilistic idea which a Lévy process embodies dynamically.
Let µ be a probability measure on R^d. Define µ^{∗n} = µ ∗ · · · ∗ µ (n times). We say that µ has a convolution nth root if there exists a probability measure µ^{1/n} for which (µ^{1/n})^{∗n} = µ.
µ is infinitely divisible if it has a convolution nth root for all n ∈ N. In this case µ^{1/n} is unique.

  • 59-66. Theorem. µ is infinitely divisible iff for all n ∈ N, there exists a probability measure µ_n with characteristic function φ_n such that
φ_µ(u) = (φ_n(u))^n,
for all u ∈ R^d. Moreover µ_n = µ^{1/n}.
Proof. If µ is infinitely divisible, take φ_n = φ_{µ^{1/n}}. Conversely, for each n ∈ N, by Fubini's theorem,
φ_µ(u) = \int_{R^d} · · · \int_{R^d} e^{i(u, y_1 + · · · + y_n)} µ_n(dy_1) · · · µ_n(dy_n) = \int_{R^d} e^{i(u,y)} µ_n^{∗n}(dy).
But φ_µ(u) = \int_{R^d} e^{i(u,y)} µ(dy) and φ determines µ uniquely. Hence µ = µ_n^{∗n}.  □

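For a concrete instance of the factorisation φ_µ = (φ_n)^n, the Poisson law works in closed form. The short check below (the values c = 3 and n = 7 are arbitrary) uses the characteristic function exp[c(e^{iu} − 1)] derived in Example 2 further on.

```python
import numpy as np

c, n = 3.0, 7                         # arbitrary intensity and root order
u = np.linspace(-5.0, 5.0, 11)

phi      = np.exp(c * (np.exp(1j * u) - 1))        # cf of Poisson(c)
phi_root = np.exp((c / n) * (np.exp(1j * u) - 1))  # cf of Poisson(c/n)

print(np.max(np.abs(phi - phi_root**n)))           # 0 up to round-off
```
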
  • 67-72.
- If µ and ν are each infinitely divisible, then so is µ ∗ ν.
- If (µ_n, n ∈ N) are infinitely divisible and µ_n ⇒ µ weakly, then µ is infinitely divisible.
[Note: weak convergence. µ_n ⇒ µ means
\lim_{n→∞} \int_{R^d} f(x) µ_n(dx) = \int_{R^d} f(x) µ(dx),
for each f ∈ C_b(R^d).]
A random variable X is infinitely divisible if its law p_X is infinitely divisible, i.e. X \stackrel{d}{=} Y_1^{(n)} + · · · + Y_n^{(n)}, where Y_1^{(n)}, ..., Y_n^{(n)} are i.i.d., for each n ∈ N.

  • 73-77. Examples of Infinite Divisibility.
In the following, we will demonstrate infinite divisibility of a random variable X by finding i.i.d. Y_1^{(n)}, ..., Y_n^{(n)} such that X \stackrel{d}{=} Y_1^{(n)} + · · · + Y_n^{(n)}, for each n ∈ N.
Example 1 - Gaussian Random Variables.
Let X = (X_1, ..., X_d) be a random vector. We say that it is (non-degenerate) Gaussian if there exists a vector m ∈ R^d and a strictly positive-definite symmetric d × d matrix A such that X has a pdf (probability density function) of the form
f(x) = \frac{1}{(2π)^{d/2} \sqrt{\det(A)}} \exp\left( −\frac{1}{2} (x − m, A^{−1}(x − m)) \right),   (1.1)
for all x ∈ R^d. In this case we will write X ∼ N(m, A). The vector m is the mean of X, so m = E(X), and A is the covariance matrix, so that A = E((X − m)(X − m)^T).

  • 78-83. A standard calculation yields
φ_X(u) = \exp\left( i(m, u) − \frac{1}{2}(u, Au) \right),   (1.2)
and hence
(φ_X(u))^{1/n} = \exp\left( i\left(\frac{m}{n}, u\right) − \frac{1}{2}\left(u, \frac{1}{n} A u\right) \right),
so we see that X is infinitely divisible with each Y_j^{(n)} ∼ N(m/n, A/n) for each 1 ≤ j ≤ n.
We say that X is a standard normal whenever X ∼ N(0, σ²I) for some σ > 0.
We say that X is degenerate Gaussian if (1.2) holds with det(A) = 0, and these random variables are also infinitely divisible.

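A quick simulation of this decomposition in d = 1 (the values m = 1.5, σ² = 4, n = 10 and the sample size are arbitrary choices): summing n i.i.d. N(m/n, σ²/n) draws should reproduce the N(m, σ²) mean and variance.

```python
import numpy as np

rng = np.random.default_rng(2)
m, var, n, N = 1.5, 4.0, 10, 10**5

# Sum of n i.i.d. N(m/n, var/n) draws, repeated over N Monte Carlo samples.
parts = rng.normal(m / n, np.sqrt(var / n), size=(N, n))
S = parts.sum(axis=1)

print(S.mean(), S.var())   # approximately m = 1.5 and var = 4.0
```
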
  • 84-89. Example 2 - Poisson Random Variables.
In this case, we take d = 1 and consider a random variable X taking values in the set N ∪ {0}. We say that X is Poisson if there exists c > 0 for which
P(X = n) = \frac{c^n}{n!} e^{−c}.
In this case we will write X ∼ π(c). We have E(X) = Var(X) = c.
It is easy to verify that φ_X(u) = \exp[c(e^{iu} − 1)], from which we deduce that X is infinitely divisible with each Y_j^{(n)} ∼ π(c/n), for 1 ≤ j ≤ n, n ∈ N.

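The same decomposition can be seen by simulation (c = 2 and n = 5 are arbitrary test values): a sum of n i.i.d. π(c/n) variables should have the same empirical pmf as a direct π(c) sample.

```python
import numpy as np

rng = np.random.default_rng(3)
c, n, N = 2.0, 5, 10**6

# A sum of n i.i.d. pi(c/n) variables should again be pi(c).
S = rng.poisson(c / n, size=(N, n)).sum(axis=1)
direct = rng.poisson(c, size=N)

# Compare the two empirical pmfs on {0, 1, ..., 9}.
print(np.bincount(S, minlength=10)[:10] / N)
print(np.bincount(direct, minlength=10)[:10] / N)
```
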
  • 90-93. Example 3 - Compound Poisson Random Variables.
Let (Z(n), n ∈ N) be a sequence of i.i.d. random variables taking values in R^d with common law µ_Z, and let N ∼ π(c) be a Poisson random variable which is independent of all the Z(n)'s. The compound Poisson random variable X is defined as follows:
X := 0 if N = 0;  X := Z(1) + · · · + Z(N) if N > 0.
Theorem. For each u ∈ R^d,
φ_X(u) = \exp\left( \int_{R^d} (e^{i(u,y)} − 1) \, c\,µ_Z(dy) \right).

  • 94-101. Proof. Let φ_Z be the common characteristic function of the Z_n's. By conditioning and using independence we find
φ_X(u) = \sum_{n=0}^{∞} E(e^{i(u, Z(1) + · · · + Z(N))} | N = n) P(N = n)
       = \sum_{n=0}^{∞} E(e^{i(u, Z(1) + · · · + Z(n))}) e^{−c} \frac{c^n}{n!}
       = e^{−c} \sum_{n=0}^{∞} \frac{[c φ_Z(u)]^n}{n!}
       = \exp[c(φ_Z(u) − 1)],
and the result follows on writing φ_Z(u) = \int e^{i(u,y)} µ_Z(dy).  □
If X is compound Poisson as above, we write X ∼ π(c, µ_Z). It is clearly infinitely divisible with each Y_j^{(n)} ∼ π(c/n, µ_Z), for 1 ≤ j ≤ n.

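The formula φ_X(u) = exp[c(φ_Z(u) − 1)] is easy to test by Monte Carlo; in the sketch below the jump law µ_Z is taken to be N(0, 1), and c = 1.5, u = 0.9, and the sample size are arbitrary choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
c, M, u = 1.5, 10**5, 0.9            # intensity, Monte Carlo size, test point

# Simulate X = Z(1) + ... + Z(N) with N ~ pi(c) and Z(i) i.i.d. N(0,1);
# the standard normal jump law mu_Z is just a convenient test case.
counts = rng.poisson(c, size=M)
X = np.array([rng.standard_normal(k).sum() for k in counts])

empirical = np.mean(np.exp(1j * u * X))
phi_Z = np.exp(-u**2 / 2)            # cf of N(0,1)
print(empirical, np.exp(c * (phi_Z - 1)))   # agree up to Monte Carlo error
```
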
  • 102. The Lévy-Khintchine Formula de Finetti (1920’s) suggested that the most general infinitely divisible random variable could be written X = Y + W , where Y and W are independent, Y ∼ N(m, A), W ∼ π(c, µZ ). Then φX (u) = eη(u) , where 1 η(u) = i(m, u) − (u, Au) + (ei(u,y ) − 1)cµZ (dy ). (1.3) 2 Rd This is WRONG! ν(·) = cµZ (·) is a finite measure here. Lévy and Khintchine showed that ν can be σ-finite, provided it is what is now called a Lévy measure on Rd − {0} = {x ∈ Rd , x = 0}, i.e. (|y |2 ∧ 1)ν(dy ) < ∞, (1.4) (where a ∧ b := min{a, b}, for a, b ∈ R). Since |y |2 ∧ ≤ |y |2 ∧ 1 whenever 0 < ≤ 1, it follows from (1.4) that ν((− , )c ) < ∞ for all > 0. Dave Applebaum (Sheffield UK) Lecture 1 December 2011 21 / 40
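To see how condition (1.4) admits infinite measures, consider the hypothetical one-dimensional example $\nu(dy) = |y|^{-1-\alpha}\,dy$ with $0 < \alpha < 2$ (the $\alpha$-stable Lévy measure, used here purely as an illustration). The following numerical sketch, not taken from the slides, shows that $\nu$ has infinite total mass while (1.4) still holds.

```python
import numpy as np
from scipy import integrate

# Illustrative 1-d example: nu(dy) = |y|^(-1-alpha) dy with 0 < alpha < 2.
alpha = 0.7
density = lambda y: y ** (-1.0 - alpha)   # density on (0, inf); double by symmetry

# nu((eps, inf)) is finite for each fixed eps > 0 -- as the slide notes for
# nu((-eps, eps)^c) -- but grows without bound as eps -> 0, so nu is infinite.
mass_above = lambda eps: integrate.quad(density, eps, np.inf)[0]
print(mass_above(1e-2), mass_above(1e-4))   # diverging as the cutoff shrinks

# Yet the Levy-measure condition (1.4) holds:
# |y|^2 tames the singularity at 0, and the constant 1 tames the tail.
near_zero = integrate.quad(lambda y: y**2 * density(y), 0.0, 1.0)[0]  # = 1/(2 - alpha)
tail = integrate.quad(density, 1.0, np.inf)[0]                        # = 1/alpha
print(2 * (near_zero + tail), 2 * (1 / (2 - alpha) + 1 / alpha))      # agree, finite
```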
Here is the fundamental result of this lecture:

Theorem (Lévy-Khintchine)
A Borel probability measure $\mu$ on $\mathbb{R}^d$ is infinitely divisible if and only if there exist a vector $b \in \mathbb{R}^d$, a non-negative symmetric $d \times d$ matrix $A$ and a Lévy measure $\nu$ on $\mathbb{R}^d - \{0\}$ such that, for all $u \in \mathbb{R}^d$,

$$\phi_\mu(u) = \exp\left[\, i(b, u) - \frac{1}{2}(u, Au) + \int_{\mathbb{R}^d - \{0\}}\left(e^{i(u,y)} - 1 - i(u, y)\mathbf{1}_{\hat{B}}(y)\right)\nu(dy)\right], \tag{1.5}$$

where $\hat{B} = B_1(0) = \{y \in \mathbb{R}^d ;\ |y| < 1\}$. In particular, any mapping of the form (1.5) is the characteristic function of an infinitely divisible probability measure on $\mathbb{R}^d$.

The triple $(b, A, \nu)$ is called the characteristics of the infinitely divisible random variable $X$. Define $\eta = \log \phi_\mu$, where we take the principal part of the logarithm; $\eta$ is called the Lévy symbol.

Dave Applebaum (Sheffield UK) Lecture 1 December 2011 22 / 40
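For a concrete feel for (1.5), here is a minimal sketch (illustrative only: $d = 1$, with a hypothetical finite Lévy measure made of point masses) that evaluates the Lévy symbol $\eta(u)$ from a triple $(b, A, \nu)$ and exhibits infinite divisibility numerically: since $\eta$ is linear in the characteristics, the triple $(b/n, A/n, \nu/n)$ yields an $n$-th root of $\phi_\mu$.

```python
import numpy as np

# Minimal sketch (illustrative, d = 1): the Levy symbol eta(u) of (1.5)
# for characteristics (b, A, nu), where nu is a hypothetical finite
# measure of point masses nu({y_k}) = w_k at atoms y_k.
atoms = np.array([-2.0, 0.3, 1.5])    # atoms y_k of nu (made up for the demo)
masses = np.array([0.4, 2.0, 0.1])    # masses w_k = nu({y_k})

def eta(u, b, A, w):
    inside_B = np.abs(atoms) < 1.0    # indicator 1_B(y), B = B_1(0)
    integrand = np.exp(1j * u * atoms) - 1 - 1j * u * atoms * inside_B
    return 1j * b * u - 0.5 * A * u**2 + np.sum(w * integrand)

b, A, u, n = 0.5, 1.0, 0.8, 5

# Infinite divisibility, concretely: the characteristics (b/n, A/n, nu/n)
# give an n-th root of phi_mu, because eta is linear in (b, A, nu).
print(np.exp(eta(u, b, A, masses)))
print(np.exp(eta(u, b / n, A / n, masses / n)) ** n)   # identical value
```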