Lectures on Lévy Processes and Stochastic Calculus (Koç University)
Lecture 1: Infinite Divisibility
David Applebaum
School of Mathematics and Statistics, University of Sheffield, UK
5th December 2011
Dave Applebaum (Sheffield UK) Lecture 1 December 2011 1 / 40
Introduction

A Lévy process is essentially a stochastic process with stationary and independent increments. The basic theory was developed principally by Paul Lévy in the 1930s. In the past 20 years there has been a renaissance of interest and a plethora of books, articles and conferences. Why? There are both theoretical and practical reasons.
Theoretical

Lévy processes are the simplest generic class of processes that have (a.s.) continuous paths interspersed with random jumps of arbitrary size occurring at random times.
Lévy processes comprise a natural subclass of semimartingales and of Markov processes of Feller type.
There are many interesting examples: Brownian motion, simple and compound Poisson processes, α-stable processes, subordinated processes, financial processes, the relativistic process, the Riemann zeta process, ...
Noise. Lévy processes are a good model of "noise" in random dynamical systems:

Input + Noise = Output

Attempts to describe this differentially lead to stochastic calculus. A large class of Markov processes can be built as solutions of stochastic differential equations (SDEs) driven by Lévy noise. Lévy-driven stochastic partial differential equations (SPDEs) are currently being studied with some intensity.
Robust structure. Most applications utilise Lévy processes taking values in Euclidean space, but this can be replaced by a Hilbert space, a Banach space (both of these are important for SPDEs), a locally compact group, or a manifold. Quantised versions are non-commutative Lévy processes on quantum groups.
Applications

These include:
Turbulence via Burgers' equation (Bertoin)
New examples of quantum field theories (Albeverio, Gottschalk, Wu)
Viscoelasticity (Bouleau)
Time series: Lévy-driven CARMA models (Brockwell, Marquardt)
Stochastic resonance in non-linear signal processing (Patel, Kosko, Applebaum)
Finance and insurance (a cast of thousands)

The biggest explosion of activity has been in mathematical finance. Two major areas of activity are:
option pricing in incomplete markets;
interest rate modelling.
Some Basic Ideas of Probability

Notation. Our state space is Euclidean space R^d. The inner product between two vectors x = (x_1, ..., x_d) and y = (y_1, ..., y_d) is

(x, y) = \sum_{i=1}^{d} x_i y_i.

The associated norm (length of a vector) is

|x| = (x, x)^{1/2} = \left( \sum_{i=1}^{d} x_i^2 \right)^{1/2}.
Let (Ω, F, P) be a probability space, so that Ω is a set, F is a σ-algebra of subsets of Ω and P is a probability measure defined on (Ω, F).
Random variables are measurable functions X : Ω → R^d. The law of X is p_X, where for each A ∈ B(R^d), p_X(A) = P(X ∈ A).
(X_n, n ∈ N) are independent if for all i_1, i_2, ..., i_r ∈ N and all A_1, A_2, ..., A_r ∈ B(R^d),

P(X_{i_1} ∈ A_1, X_{i_2} ∈ A_2, ..., X_{i_r} ∈ A_r) = P(X_{i_1} ∈ A_1) P(X_{i_2} ∈ A_2) \cdots P(X_{i_r} ∈ A_r).

If X and Y are independent, the law of X + Y is given by convolution of measures:

p_{X+Y} = p_X * p_Y, where (p_X * p_Y)(A) = \int_{\mathbb{R}^d} p_X(A - y)\, p_Y(dy).
Equivalently,

\int_{\mathbb{R}^d} g(y)\, (p_X * p_Y)(dy) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} g(x + y)\, p_X(dx)\, p_Y(dy),

for all g ∈ B_b(R^d) (the bounded Borel measurable functions on R^d).
If X and Y are independent with densities f_X and f_Y, respectively, then X + Y has density f_{X+Y} given by convolution of functions:

f_{X+Y} = f_X * f_Y, where (f_X * f_Y)(x) = \int_{\mathbb{R}^d} f_X(x - y) f_Y(y)\, dy.
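As a quick numerical sanity check (our own illustration, not part of the lectures), the density convolution formula can be verified for two independent Uniform(0,1) variables, whose sum has the triangular density on [0, 2]; all names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Check the density convolution formula for independent X, Y ~ Uniform(0,1):
# f_{X+Y} should be the triangular density on [0, 2].
dx = 0.001
x = np.arange(0.0, 2.0, dx)
f = np.where(x <= 1.0, 1.0, 0.0)  # f_X = f_Y = indicator of [0, 1]

# (f_X * f_Y)(x) = ∫ f_X(x - y) f_Y(y) dy, via a discrete convolution.
f_sum = np.convolve(f, f)[: len(x)] * dx

# Monte Carlo check: a histogram of X + Y matches the convolved density.
samples = rng.uniform(size=500_000) + rng.uniform(size=500_000)
hist, edges = np.histogram(samples, bins=50, range=(0.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
approx = np.interp(centers, x, f_sum)
print(np.max(np.abs(hist - approx)))  # small
```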
The characteristic function of X is φ_X : R^d → C, where

\phi_X(u) = \int_{\mathbb{R}^d} e^{i(u,x)}\, p_X(dx).

Theorem (Kac's theorem)
X_1, ..., X_n are independent if and only if

E\left[ \exp\left( i \sum_{j=1}^{n} (u_j, X_j) \right) \right] = \phi_{X_1}(u_1) \cdots \phi_{X_n}(u_n)

for all u_1, ..., u_n ∈ R^d.
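A minimal Monte Carlo sketch of Kac's theorem (our own illustration, not from the slides): for two independent standard normals, the empirical joint characteristic function factorises into the product of the marginals.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Kac's theorem for two independent standard normals: the joint
# characteristic function E[exp(i(u1*X1 + u2*X2))] factorises as
# phi_X1(u1) * phi_X2(u2).
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

u1, u2 = 0.7, -1.3
joint = np.mean(np.exp(1j * (u1 * x1 + u2 * x2)))
product = np.mean(np.exp(1j * u1 * x1)) * np.mean(np.exp(1j * u2 * x2))
exact = np.exp(-(u1**2 + u2**2) / 2)  # since phi of N(0,1) is exp(-u^2/2)

print(abs(joint - product), abs(joint - exact))  # both small
```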
More generally, the characteristic function of a probability measure µ on R^d is

\phi_\mu(u) = \int_{\mathbb{R}^d} e^{i(u,x)}\, \mu(dx).

Important properties are:
1. φ_µ(0) = 1.
2. φ_µ is positive definite, i.e. \sum_{i,j} c_i \bar{c}_j \phi_\mu(u_i - u_j) \geq 0, for all c_i ∈ C, u_i ∈ R^d, 1 ≤ i, j ≤ n, n ∈ N.
3. φ_µ is uniformly continuous. (Hint: look at |φ_µ(u + h) − φ_µ(u)| and use dominated convergence.)

Also µ → φ_µ is injective.
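Property (2) can be checked numerically; the sketch below (our own, assuming the standard normal characteristic function φ(u) = e^{−u²/2} on R) builds the matrix (φ(u_i − u_j)) and confirms it is positive semi-definite.

```python
import numpy as np

rng = np.random.default_rng(2)

# Positive definiteness of phi(u) = exp(-u^2/2), the characteristic
# function of N(0,1): the matrix M[i, j] = phi(u_i - u_j) is
# Hermitian positive semi-definite.
u = rng.uniform(-5.0, 5.0, size=20)
M = np.exp(-((u[:, None] - u[None, :]) ** 2) / 2)

eigvals = np.linalg.eigvalsh(M)
print(eigvals.min())  # >= 0 up to rounding

# Equivalently, sum_{i,j} c_i conj(c_j) phi(u_i - u_j) >= 0 for any c.
c = rng.normal(size=20) + 1j * rng.normal(size=20)
form = (c @ M @ c.conj()).real
print(form)  # non-negative
```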
Conversely, Bochner's theorem states that if φ : R^d → C satisfies (1), (2) and is continuous at u = 0, then it is the characteristic function of some probability measure µ on R^d.
ψ : R^d → C is conditionally positive definite if for all n ∈ N and c_1, ..., c_n ∈ C for which \sum_{j=1}^{n} c_j = 0 we have

\sum_{j,k=1}^{n} c_j \bar{c}_k \psi(u_j - u_k) \geq 0,

for all u_1, ..., u_n ∈ R^d.
Note: "conditionally positive definite" is sometimes called "negative definite" (with a minus sign).
ψ : R^d → C is said to be hermitian if ψ(u) = \overline{\psi(-u)} for all u ∈ R^d.
Theorem (Schoenberg correspondence)
ψ : R^d → C is hermitian and conditionally positive definite if and only if e^{tψ} is positive definite for each t > 0.

Proof. We only give the easy part here. Suppose that e^{tψ} is positive definite for all t > 0. Fix n ∈ N and choose c_1, ..., c_n and u_1, ..., u_n as above. Since \sum_{j=1}^{n} c_j = 0, we have \sum_{j,k} c_j \bar{c}_k = |\sum_j c_j|^2 = 0, and so positive definiteness of e^{tψ} gives, for each t > 0,

\frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{t\psi(u_j - u_k)} - 1 \right) \geq 0,

and so

\sum_{j,k=1}^{n} c_j \bar{c}_k \psi(u_j - u_k) = \lim_{t \to 0} \frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{t\psi(u_j - u_k)} - 1 \right) \geq 0.  □
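A numerical illustration of the correspondence (ours, not from the lectures), for ψ(u) = −u², which is hermitian and conditionally positive definite on R:

```python
import numpy as np

rng = np.random.default_rng(3)

# Schoenberg correspondence, illustrated for psi(u) = -u^2.
u = rng.uniform(-2.0, 2.0, size=15)
Psi = -(u[:, None] - u[None, :]) ** 2

# Conditional positive definiteness: the quadratic form
# sum_{j,k} c_j conj(c_k) psi(u_j - u_k) is >= 0 whenever sum_j c_j = 0.
c = rng.normal(size=15) + 1j * rng.normal(size=15)
c -= c.mean()  # enforce sum c_j = 0
form = (c @ Psi @ c.conj()).real
print(form)  # non-negative

# The correspondence: exp(t * psi) is positive definite for each t > 0,
# i.e. each Gaussian kernel matrix exp(t * Psi) is positive semi-definite.
min_eigs = [np.linalg.eigvalsh(np.exp(t * Psi)).min() for t in (0.1, 1.0, 10.0)]
print(min_eigs)  # all >= 0 up to rounding
```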
Infinite Divisibility

We study this first because a Lévy process is infinite divisibility in motion, i.e. infinite divisibility is the underlying probabilistic idea which a Lévy process embodies dynamically.
Let µ be a probability measure on R^d. Define µ^{*n} = µ * · · · * µ (n times). We say that µ has a convolution nth root if there exists a probability measure µ^{1/n} for which (µ^{1/n})^{*n} = µ.
µ is infinitely divisible if it has a convolution nth root for all n ∈ N. In this case µ^{1/n} is unique.
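A concrete example (our illustration): the Poisson(λ) law is infinitely divisible, with convolution nth root Poisson(λ/n), so summing n i.i.d. Poisson(λ/n) samples reproduces the Poisson(λ) law.

```python
import numpy as np

rng = np.random.default_rng(4)

# Poisson(lam) is infinitely divisible: its convolution nth root is
# Poisson(lam / n), so a sum of n iid Poisson(lam/n) variables has the
# same law as a single Poisson(lam) variable.
lam, n, size = 3.0, 8, 400_000
direct = rng.poisson(lam, size=size)
as_sum = rng.poisson(lam / n, size=(n, size)).sum(axis=0)

# Compare the empirical probability mass functions of the two samples.
ks = np.arange(15)
pmf_direct = np.array([(direct == k).mean() for k in ks])
pmf_sum = np.array([(as_sum == k).mean() for k in ks])
print(np.max(np.abs(pmf_direct - pmf_sum)))  # close to 0
```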
Theorem
µ is infinitely divisible iff for all n ∈ N there exists a probability measure µ_n with characteristic function φ_n such that

\phi_\mu(u) = (\phi_n(u))^n,

for all u ∈ R^d. Moreover µ_n = µ^{1/n}.

Proof. If µ is infinitely divisible, take φ_n = φ_{µ^{1/n}}. Conversely, for each n ∈ N, by Fubini's theorem,

\phi_\mu(u) = \int_{\mathbb{R}^d} \cdots \int_{\mathbb{R}^d} e^{i(u, y_1 + \cdots + y_n)}\, \mu_n(dy_1) \cdots \mu_n(dy_n) = \int_{\mathbb{R}^d} e^{i(u,y)}\, \mu_n^{*n}(dy).

But \phi_\mu(u) = \int_{\mathbb{R}^d} e^{i(u,y)}\, \mu(dy) and φ determines µ uniquely. Hence µ = µ_n^{*n}.  □
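The nth-root criterion can be checked exactly for a Gaussian (our sketch): N(b, σ²) has φ_µ(u) = e^{ibu − σ²u²/2}, and µ_n = N(b/n, σ²/n) satisfies φ_µ = (φ_n)^n identically.

```python
import numpy as np

# The nth-root criterion for a Gaussian: N(b, sig2) has characteristic
# function phi(u) = exp(i*b*u - sig2*u^2/2), and mu_n = N(b/n, sig2/n)
# satisfies phi_mu(u) = (phi_n(u))^n exactly.
b, sig2, n = 1.5, 2.0, 7
u = np.linspace(-4.0, 4.0, 81)

phi_mu = np.exp(1j * b * u - sig2 * u**2 / 2)
phi_n = np.exp(1j * (b / n) * u - (sig2 / n) * u**2 / 2)

print(np.max(np.abs(phi_mu - phi_n**n)))  # ~0, exact up to rounding
```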
64. Theorem
µ is infinitely divisible iff for all n ∈ N, there exists a probability measure
µn with characteristic function φn such that
φµ (u) = (φn (u))n ,
1
for all u ∈ Rd . Moreover µn = µ n .
Proof. If µ is infinitely divisible, take φn = φ 1 . Conversely, for each
µn
n ∈ N, by Fubini’s theorem,
φµ (u) = ··· ei(u,y1 +···+yn ) µn (dy1 ) · · · µn (dyn )
Rd Rd
n
= ei(u,y ) µ∗ (dy ).
n
Rd
But φµ (u) = Rd ei(u,y ) µ(dy ) and φ determines µ uniquely. Hence
n
µ = µ∗ .
n 2
Dave Applebaum (Sheffield UK) Lecture 1 December 2011 14 / 40
65. Theorem
µ is infinitely divisible iff for all n ∈ N, there exists a probability measure
µn with characteristic function φn such that
φµ (u) = (φn (u))n ,
1
for all u ∈ Rd . Moreover µn = µ n .
Proof. If µ is infinitely divisible, take φn = φ 1 . Conversely, for each
µn
n ∈ N, by Fubini’s theorem,
φµ (u) = ··· ei(u,y1 +···+yn ) µn (dy1 ) · · · µn (dyn )
Rd Rd
n
= ei(u,y ) µ∗ (dy ).
n
Rd
But φµ (u) = Rd ei(u,y ) µ(dy ) and φ determines µ uniquely. Hence
n
µ = µ∗ .
n 2
Dave Applebaum (Sheffield UK) Lecture 1 December 2011 14 / 40
66. Theorem
µ is infinitely divisible iff for all n ∈ N, there exists a probability measure
µn with characteristic function φn such that
φµ (u) = (φn (u))n ,
1
for all u ∈ Rd . Moreover µn = µ n .
Proof. If µ is infinitely divisible, take φn = φ 1 . Conversely, for each
µn
n ∈ N, by Fubini’s theorem,
φµ (u) = ··· ei(u,y1 +···+yn ) µn (dy1 ) · · · µn (dyn )
Rd Rd
n
= ei(u,y ) µ∗ (dy ).
n
Rd
But φµ (u) = Rd ei(u,y ) µ(dy ) and φ determines µ uniquely. Hence
n
µ = µ∗ .
n 2
Dave Applebaum (Sheffield UK) Lecture 1 December 2011 14 / 40
67. - If µ and ν are each infinitely divisible, then so is µ ∗ ν.
- If (µ_n, n ∈ N) are infinitely divisible and µ_n ⇒ µ weakly, then µ is
infinitely divisible.
[Note: weak convergence. µ_n ⇒ µ means
lim_{n→∞} ∫_{R^d} f(x) µ_n(dx) = ∫_{R^d} f(x) µ(dx),
for each f ∈ C_b(R^d).]
A random variable X is infinitely divisible if its law p_X is infinitely
divisible, i.e. X =^d Y_1^{(n)} + · · · + Y_n^{(n)}, where Y_1^{(n)}, . . . , Y_n^{(n)}
are i.i.d., for each n ∈ N.
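The weak-convergence definition in the note can be illustrated numerically with µ_n = N(0, 1/n), which converges weakly to the point mass δ_0 (itself infinitely divisible, consistent with the closure property above). A sketch assuming NumPy, using a simple quadrature against the Gaussian densities and the bounded continuous test function f = cos:

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule, kept explicit for NumPy-version independence
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def expect(f, var):
    """Approximate the integral of f(x) mu(dx) for mu = N(0, var) on a wide grid."""
    x = np.linspace(-10.0, 10.0, 200001)
    dens = np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return trapezoid(f(x) * dens, x)

f = np.cos                               # bounded and continuous
vals = [expect(f, 1.0 / n) for n in (1, 10, 100, 1000)]

# as n grows, the integral of f against mu_n approaches f(0) = 1,
# i.e. mu_n converges weakly to the point mass at 0
assert abs(vals[-1] - 1.0) < 1e-3
assert all(abs(b - 1.0) <= abs(a - 1.0) + 1e-12 for a, b in zip(vals, vals[1:]))
```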
73. Examples of Infinite Divisibility
In the following, we will demonstrate infinite divisibility of a random
variable X by finding i.i.d. Y_1^{(n)}, . . . , Y_n^{(n)} such that
X =^d Y_1^{(n)} + · · · + Y_n^{(n)}, for each n ∈ N.
Example 1 - Gaussian Random Variables
Let X = (X_1, . . . , X_d) be a random vector.
We say that it is (non-degenerate) Gaussian if there exists a vector
m ∈ R^d and a strictly positive-definite symmetric d × d matrix A such
that X has a pdf (probability density function) of the form
f(x) = (2π)^{−d/2} det(A)^{−1/2} exp(−(1/2)(x − m, A^{−1}(x − m))),   (1.1)
for all x ∈ R^d.
In this case we will write X ∼ N(m, A). The vector m is the mean of X,
so m = E(X), and A is the covariance matrix, so that
A = E((X − m)(X − m)^T).
78. A standard calculation yields
φ_X(u) = exp{ i(m, u) − (1/2)(u, Au) },   (1.2)
and hence
(φ_X(u))^{1/n} = exp{ i(m/n, u) − (1/2)(u, (1/n)Au) },
so we see that X is infinitely divisible with each Y_j^{(n)} ∼ N(m/n, (1/n)A)
for each 1 ≤ j ≤ n.
We say that X is a standard normal whenever X ∼ N(0, σ²I) for some
σ > 0.
We say that X is degenerate Gaussian if (1.2) holds with det(A) = 0,
and these random variables are also infinitely divisible.
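The Gaussian nth-root identity can be verified directly from (1.2); a sketch assuming NumPy, with an arbitrary toy choice of mean m and covariance A in dimension d = 2:

```python
import numpy as np

def phi_gauss(u, m, A):
    """Characteristic function of N(m, A) at u, as in equation (1.2)."""
    u = np.asarray(u, dtype=float)
    return np.exp(1j * np.dot(m, u) - 0.5 * np.dot(u, A @ u))

m = np.array([1.0, -2.0])                 # toy mean vector
A = np.array([[2.0, 0.5], [0.5, 1.0]])    # symmetric positive definite
n = 5

for u in (np.array([0.3, -1.2]), np.array([2.0, 0.7])):
    # the nth root of phi_X is again Gaussian, with mean m/n and covariance A/n
    assert np.isclose(phi_gauss(u, m, A), phi_gauss(u, m / n, A / n) ** n)
```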
84. Example 2 - Poisson Random Variables
In this case, we take d = 1 and consider a random variable X taking
values in the set N ∪ {0}. We say that X is Poisson if there exists
c > 0 for which
P(X = n) = (c^n / n!) e^{−c}.
In this case we will write X ∼ π(c). We have E(X) = Var(X) = c. It is
easy to verify that
φ_X(u) = exp[c(e^{iu} − 1)],
from which we deduce that X is infinitely divisible with each
Y_j^{(n)} ∼ π(c/n), for 1 ≤ j ≤ n, n ∈ N.
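The decomposition of π(c) into n i.i.d. π(c/n) pieces can also be checked at the level of pmfs, by convolving the pmf of π(c/n) with itself n times. A sketch assuming NumPy; c = 3 and n = 4 are arbitrary toy values, and the pmfs are truncated at a point where the tail mass is negligible:

```python
import numpy as np
from math import exp, factorial

def poisson_pmf(c, kmax):
    """pmf of pi(c) on {0, ..., kmax} (tail beyond kmax truncated)."""
    return np.array([exp(-c) * c**k / factorial(k) for k in range(kmax + 1)])

c, n = 3.0, 4
piece = poisson_pmf(c / n, 60)       # law of each Y_j^{(n)} ~ pi(c/n)
total = np.array([1.0])
for _ in range(n):                   # law of Y_1^{(n)} + ... + Y_n^{(n)}
    total = np.convolve(total, piece)

# the n-fold convolution of pi(c/n) recovers pi(c)
target = poisson_pmf(c, 40)
assert np.allclose(total[:41], target, atol=1e-10)
```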
90. Example 3 - Compound Poisson Random Variables
Let (Z(n), n ∈ N) be a sequence of i.i.d. random variables taking
values in R^d with common law µ_Z, and let N ∼ π(c) be a Poisson
random variable which is independent of all the Z(n)'s. The compound
Poisson random variable X is defined as follows:
X := 0 if N = 0,   X := Z(1) + · · · + Z(N) if N > 0.
Theorem
For each u ∈ R^d,
φ_X(u) = exp( ∫_{R^d} (e^{i(u,y)} − 1) c µ_Z(dy) ).
94. Proof. Let φ_Z be the common characteristic function of the Z(n)'s. By
conditioning and using independence we find
φ_X(u) = Σ_{n=0}^∞ E(e^{i(u, Z(1)+···+Z(N))} | N = n) P(N = n)
       = Σ_{n=0}^∞ E(e^{i(u, Z(1)+···+Z(n))}) e^{−c} c^n / n!
       = e^{−c} Σ_{n=0}^∞ [c φ_Z(u)]^n / n!
       = exp[c(φ_Z(u) − 1)],
and the result follows on writing φ_Z(u) = ∫_{R^d} e^{i(u,y)} µ_Z(dy). □
If X is compound Poisson as above, we write X ∼ π(c, µ_Z). It is clearly
infinitely divisible with each Y_j^{(n)} ∼ π(c/n, µ_Z), for 1 ≤ j ≤ n.
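The conditioning sum in the proof can be reproduced numerically: for a jump law µ_Z with finitely many atoms, the partial sums of Σ_n e^{−c}(cφ_Z(u))^n/n! converge to exp[c(φ_Z(u) − 1)]. A sketch assuming NumPy; the atoms, weights, and rate c are hypothetical toy values:

```python
import numpy as np
from math import factorial

# toy jump law mu_Z on {-1, 2}
atoms = np.array([-1.0, 2.0])
weights = np.array([0.4, 0.6])
phi_Z = lambda u: np.sum(weights * np.exp(1j * u * atoms))

c = 2.5

def phi_X_series(u, terms=60):
    """Truncation of the sum over N = n from the proof above."""
    return sum(np.exp(-c) * (c * phi_Z(u)) ** n / factorial(n)
               for n in range(terms))

for u in (0.0, 0.9, -2.3):
    # the series sums to the closed form exp[c(phi_Z(u) - 1)]
    assert np.isclose(phi_X_series(u), np.exp(c * (phi_Z(u) - 1)))
```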
102. The Lévy-Khintchine Formula
de Finetti (1920s) suggested that the most general infinitely divisible
random variable could be written X = Y + W, where Y and W are
independent, Y ∼ N(m, A), W ∼ π(c, µ_Z). Then φ_X(u) = e^{η(u)}, where
η(u) = i(m, u) − (1/2)(u, Au) + ∫_{R^d} (e^{i(u,y)} − 1) c µ_Z(dy).   (1.3)
This is WRONG! ν(·) = c µ_Z(·) is a finite measure here.
Lévy and Khintchine showed that ν can be σ-finite, provided it is what
is now called a Lévy measure on R^d − {0} = {x ∈ R^d, x ≠ 0}, i.e.
∫ (|y|² ∧ 1) ν(dy) < ∞,   (1.4)
(where a ∧ b := min{a, b}, for a, b ∈ R). Since |y|² ∧ ε² ≤ |y|² ∧ 1
whenever 0 < ε ≤ 1, it follows from (1.4) that
ν((−ε, ε)^c) < ∞ for all ε > 0.
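Condition (1.4) can be checked concretely for the prototypical infinite Lévy measure ν(dy) = |y|^{−1−α} dy on R − {0} with 0 < α < 2 (the stable case). A sketch assuming NumPy; α = 1.5 is an arbitrary choice, and the quadrature compares against the closed forms ∫(|y|² ∧ 1)ν(dy) = 2/(2−α) + 2/α and ν((−ε, ε)^c) = 2ε^{−α}/α:

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule, kept explicit for NumPy-version independence
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

alpha = 1.5                                    # stable index, 0 < alpha < 2
y = np.logspace(-8, 8, 200001)                 # one-sided grid; nu is symmetric
integrand = np.minimum(y**2, 1.0) * y**(-1.0 - alpha)

# condition (1.4): the integral of (|y|^2 and 1) against nu is finite,
# even though nu itself has infinite total mass near the origin
numeric = 2.0 * trapezoid(integrand, y)
exact = 2.0 * (1.0 / (2.0 - alpha) + 1.0 / alpha)
assert np.isclose(numeric, exact, rtol=1e-3)

# consequence: nu gives finite mass to the complement of any (-eps, eps)
eps = 0.01
mask = y > eps
tail_numeric = 2.0 * trapezoid(y[mask]**(-1.0 - alpha), y[mask])
assert np.isclose(tail_numeric, 2.0 * eps**(-alpha) / alpha, rtol=1e-3)
```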
111. Here is the fundamental result of this lecture:
Theorem (Lévy-Khintchine)
A Borel probability measure µ on R^d is infinitely divisible if there exists
a vector b ∈ R^d, a non-negative symmetric d × d matrix A and a Lévy
measure ν on R^d − {0} such that, for all u ∈ R^d,
φ_µ(u) = exp( i(b, u) − (1/2)(u, Au)
        + ∫_{R^d−{0}} (e^{i(u,y)} − 1 − i(u, y) 1_{B̂}(y)) ν(dy) ),   (1.5)
where B̂ = B_1(0) = {y ∈ R^d ; |y| < 1}.
Conversely, any mapping of the form (1.5) is the characteristic function
of an infinitely divisible probability measure on R^d.
The triple (b, A, ν) is called the characteristics of the infinitely divisible
random variable X. Define η = log φ_µ, where we take the principal part
of the logarithm; η is called the Lévy symbol.
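The Lévy symbol η of (1.5) can be written out directly when ν is taken to be a finite atomic measure, so that the integral reduces to a finite sum (a hypothetical toy choice; a general Lévy measure need not be finite). A sketch assuming NumPy, checking the basic properties φ_µ(0) = 1 and |φ_µ(u)| ≤ 1:

```python
import numpy as np

def levy_symbol(u, b, A, atoms, masses):
    """eta(u) for characteristics (b, A, nu), with nu the finite atomic
    measure putting mass masses[k] at atoms[k] (toy choice)."""
    u = np.asarray(u, dtype=float)
    gauss = 1j * np.dot(b, u) - 0.5 * np.dot(u, A @ u)
    inner = atoms @ u                            # (u, y_k) for each atom y_k
    small = np.linalg.norm(atoms, axis=1) < 1.0  # indicator of the unit ball
    jump = np.sum(masses * (np.exp(1j * inner) - 1 - 1j * inner * small))
    return gauss + jump

# hypothetical characteristics (b, A, nu)
b = np.array([0.5, -1.0])
A = np.array([[1.0, 0.2], [0.2, 2.0]])
atoms = np.array([[0.3, 0.0], [-2.0, 1.0]])     # support points of nu
masses = np.array([4.0, 0.7])                   # nu({y_k})

phi = lambda u: np.exp(levy_symbol(u, b, A, atoms, masses))
assert np.isclose(phi(np.zeros(2)), 1.0)        # phi(0) = 1
for u in (np.array([0.4, -0.3]), np.array([1.5, 2.0])):
    assert abs(phi(u)) <= 1.0 + 1e-12           # characteristic functions are bounded by 1
```

The bound |φ_µ(u)| ≤ 1 is visible from the symbol itself: Re η(u) = −(1/2)(u, Au) + Σ_k ν({y_k})(cos(u, y_k) − 1) ≤ 0, since A is non-negative and cos − 1 ≤ 0, while the compensating term −i(u, y)1_{B̂}(y) is purely imaginary.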