STOCHASTIC DIFFERENTIAL EQUATIONS IN SCIENCE AND ENGINEERING

Douglas Henderson
Brigham Young University, USA

Peter Plaschko
Universidad Autonoma Metropolitana, Mexico

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
STOCHASTIC DIFFERENTIAL EQUATIONS IN SCIENCE AND ENGINEERING
(With CD-ROM)
Copyright © 2006 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.
ISBN 981-256-296-6
Printed in Singapore by World Scientific Printers (S) Pte Ltd
To Rose-Marie Henderson
A good friend and spouse
PREFACE
This book arose from a friendship formed when we were both fac-
ulty members of the Department of Physics, Universidad Autonoma
Metropolitana, Iztapalapa Campus, in Mexico City. Plaschko was
teaching an intermediate to advanced course in mathematical
physics. He had written, with Klaus Brod, a book entitled "Hoehere
Mathematische Methoden fuer Ingenieure und Physiker", which
Henderson admired and suggested be translated into English,
updated, and perhaps expanded somewhat.
However, we both prefer new projects, and it was suggested instead
that a book on Stochastic Differential Equations be written; thus this
project was born. This is an important emerging field. From its incep-
tion with Newton, physical science was dominated by the idea of
determinism. Everything was thought to be determined by a set of
second order differential equations, Newton's equations, from which
everything could be determined, at least in principle, if the initial
conditions were known. To be sure, an actual analytic solution would
not be possible for a complex system since the number of dynamical
equations would be enormous; even so, determinism prevailed. This
idea took hold even to the point that some philosophers began to
speculate that humans had no free will; our lives were determined
entirely by some set of initial conditions. In this view, even before
the authors started to write, the contents of this book were deter-
mined by a set of initial conditions in the distant past. Dogmatic
Marxism endorsed such ideas, although perhaps not so extremely.
Deterministic Newtonian mechanics yielded brilliant successes.
Most astronomical events could be predicted with great accuracy.
Even in the case of a few difficulties, such as the orbit of Mercury,
Newtonian mechanics could be replaced satisfactorily by the equally
deterministic general relativity. A little more than a century ago, the
case for determinism was challenged. The seemingly random Brownian
motion of suspended particles was observed, as was the sudden
transition of the flow of a fluid past an object or obstacle from
laminar flow to chaotic turbulence. Recent studies have shown that
some seemingly chaotic motion is not necessarily inconsistent with
determinism (we can call this quasi-chaos). Even so, such problems
are best studied using probabilistic notions. Quantum theory
has shown that the motion of particles at the atomic level is funda-
mentally nondeterministic. Heisenberg showed that there were limits
to the precision with which physical properties could be determined.
One can only assign a probability for the value of a physical quantity.
The consequence of this idea can be manifest even on a macroscopic
scale. The third law of thermodynamics is an example.
Stochastic differential equations, the subject of this monograph,
are an interesting extension of the deterministic differential equations
that can be applied to Brownian motion as well as other problems.
It arose from the work of Einstein and Smoluchowski among others.
Recent years have seen rapid advances due to the development of the
calculi of Ito and Stratonovich.
We were both trained as mathematicians and scientists and our
goal is to present the ideas of stochastic differential equations in
a short monograph in a manner that is useful for scientists and
engineers, rather than mathematicians and without overpowering
mathematical rigor. We presume that the reader has some, but not
extensive, knowledge of probability theory. Chapter 1 provides a
reminder and introduction to and definition of some fundamental
ideas and quantities, including the ideas of Ito and Stratonovich.
Stochastic differential equations and the Fokker-Planck equation are
presented in Chapters 2 and 3. More advanced applications follow in
Chapter 4. The book concludes with a presentation of some numeri-
cal routines for the solution of ordinary stochastic differential equa-
tions. Each chapter contains a set of exercises whose purpose is to aid
the reader in understanding the material. A CD-ROM that provides
MATHEMATICA and FORTRAN programs to assist the reader with
the exercises, numerical routines and generating figures accompanies
the text.
Douglas Henderson
Peter Plaschko
Provo Utah, USA
Mexico City DF, Mexico
June, 2006
CONTENTS
Preface vii
Introduction xv
Glossary xxi
1. Stochastic Variables and Stochastic Processes 1
1.1. Probability Theory 1
1.2. Averages 4
1.3. Stochastic Processes, the Kolmogorov Criterion
and Martingales 9
1.4. The Gaussian Distribution and Limit Theorems 14
1.4.1. The central limit theorem 16
1.4.2. The law of the iterated logarithm 17
1.5. Transformation of Stochastic Variables 17
1.6. The Markov Property 19
1.6.1. Stationary Markov processes 20
1.7. The Brownian Motion 21
1.8. Stochastic Integrals 28
1.9. The Ito Formula 38
Appendix 45
Exercises 49
2. Stochastic Differential Equations 55
2.1. One-Dimensional Equations 56
2.1.1. Growth of populations 56
2.1.2. Stratonovich equations 58
2.1.3. The problem of Ornstein-Uhlenbeck and
the Maxwell distribution 59
2.1.4. The reduction method 63
2.1.5. Verification of solutions 65
2.2. White and Colored Noise, Spectra 67
2.3. The Stochastic Pendulum 70
2.3.1. Stochastic excitation 72
2.3.2. Stochastic damping (β = γ = 0; α ≠ 0) 73
2.4. The General Linear SDE 76
2.5. A Class of Nonlinear SDE 79
2.6. Existence and Uniqueness of Solutions 84
Exercises 87
3. The Fokker-Planck Equation 91
3.1. The Master Equation 91
3.2. The Derivation of the Fokker-Planck Equation 95
3.3. The Relation Between the Fokker-Planck Equation and
Ordinary SDE's 98
3.4. Solutions to the Fokker-Planck Equation 104
3.5. Lyapunov Exponents and Stability 107
3.6. Stochastic Bifurcations 110
3.6.1. First order SDE's 110
3.6.2. Higher order SDE's 112
Appendix A. Small Noise Intensities and the Influence
of Randomness on Limit Cycles 117
Appendix B.1 The method of Lyapunov functions 124
Appendix B.2 The method of linearization 128
Exercises 130
4. Advanced Topics 135
4.1. Stochastic Partial Differential Equations 135
4.2. Stochastic Boundary and Initial Conditions 141
4.2.1. A deterministic one-dimensional wave
equation 141
4.2.2. Stochastic initial conditions 144
4.3. Stochastic Eigenvalue Equations 147
4.3.1. Introduction 147
4.3.2. Mathematical methods 148
4.3.3. Examples of exactly soluble problems 152
4.3.4. Probability laws and moments of the eigenvalues 156
4.4. Stochastic Economics 160
4.4.1. Introduction 160
4.4.2. The Black-Scholes market 162
Exercises 164
5. Numerical Solutions of Ordinary
Stochastic Differential Equations 167
5.1. Random Number Generators and Applications 167
5.1.1. Testing of random numbers 168
5.2. The Convergence of Stochastic Sequences 173
5.3. The Monte Carlo Integration 175
5.4. The Brownian Motion and Simple Algorithms for SDE's 179
5.5. The Ito-Taylor Expansion of the Solution of a 1D SDE 181
5.6. Modified 1D Milstein Schemes 187
5.7. The Ito-Taylor Expansion for N-dimensional SDE's 189
5.8. Higher Order Approximations 193
5.9. Strong and Weak Approximations and the Order
of the Approximation 196
Exercises 201
References 205
Fortran Programs 211
Index 213
INTRODUCTION
The theory of deterministic chaos has enjoyed during the last three
decades a rapidly increasing audience of mathematicians, physicists,
engineers, biologists, economists, etc. However, this type of "chaos"
can be understood only as quasi-chaos in which all states of a system
can be predicted and reproduced by experiments.
Meanwhile, many experiments in natural sciences have brought
about hard evidence of stochastic effects. The best known example
is perhaps the Brownian motion where pollen submerged in a fluid
experience collisions with the molecules of the fluid and thus exhibit
random motions. Other familiar examples come from fluid or plasma
dynamic turbulence, optics, motions of ions in crystals, filtering the-
ory, the problem of optimal pricing in economics, etc. The study of
stochasticity was initiated in the early years of the 1900's. Einstein
[1], Smoluchowsky [2] and Langevin [3] wrote pioneering investiga-
tions. This work was later resumed and extended by Ornstein and
Uhlenbeck [4]. But investigation of stochastic effects in natural sci-
ence became more popular only in the last three decades. Meanwhile,
studies have been undertaken to calculate, or at least to approximate,
the effect of stochastic forces on otherwise deterministic oscillators
and to investigate the stability of the latter oscillators or their
transition to stochastic chaos.
To motivate the following considerations of stochastic differential
equations (SDE) we introduce a few examples from natural sciences.
(a) Pendulum with Stochastic Excitations
We study the linearized pendulum motion x(t) subjected to a
stochastic effect, called white noise,

$$\ddot{x} + x = \beta \xi_t,$$

where β is an intensity constant, t is the time and ξ_t stands for
the white noise, with a single frequency and constant spectrum. For
β = 0 we obtain the homogeneous deterministic (non-stochastic) tra-
ditional pendulum motion. We can expect that the stochastic effect
disturbs this motion and destroys the periodicity of the motion in
the phase space (x, ẋ). The latter has closed solutions called limit
cycles. It is an interesting task to investigate whether the solutions
disintegrate into scattered points (stochastic chaos). We will cover
this problem later in Section 2.3 and find that the average motion
(in a sense to be defined in Section 1.2 of Chapter 1) of the pendulum
is determined by the deterministic limit (β = 0) of the stochastic
pendulum equation.
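As a concrete illustration of how such a noisy pendulum can be integrated numerically (numerical schemes are the subject of Chapter 5), here is a minimal Python sketch of an Euler type discretization of the system form dx = v dt, dv = −x dt + β dB; the values of β, the step size and the path count are illustrative choices, not values from the book.

```python
import numpy as np

# Euler-type discretization of x'' + x = beta*xi_t, written as the system
# dx = v dt, dv = -x dt + beta dB. All parameter values are illustrative.
rng = np.random.default_rng(1)
beta, dt, n_steps, n_paths = 0.5, 1.0e-3, 10_000, 2_000

x = np.zeros(n_paths)   # initial displacement x(0) = 0
v = np.ones(n_paths)    # initial velocity x'(0) = 1
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments ~ N(0, dt)
    x, v = x + v * dt, v - x * dt + beta * dB

t = n_steps * dt
# The ensemble average should follow the deterministic limit (beta = 0), sin(t) here.
print(np.mean(x), np.sin(t))
```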
(b) Stochastic Growth of Populations
N(t) is the number of the members of a population at the time t, α
is the constant of the deterministic growth and β is again a constant
characterizing the intensity of the white noise. Thus we study the
growth problem in terms of the linear scenario

$$\dot{N} = \alpha N + \beta N \xi_t.$$

The deterministic limit (β = 0) of this equation describes the growth
of a population living on an unrestricted area with unrestricted
food supply. Its solution (the number of such a population) grows
exponentially. The stochastic effect, i.e. the white noise, describes a
stochastically varying food supply that influences the growth of the
population. We will consider this problem in Section 2.1.1 and find
again that the average of the population is given by the deterministic
limit.
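This claim is easy to probe numerically. The sketch below assumes the Ito interpretation of the equation (the case treated in Section 2.1.1) and uses its exact solution N_t = N_0 exp[(α − β²/2)t + βB_t]; all parameter values are illustrative choices.

```python
import numpy as np

# Monte Carlo sketch for dN = alpha*N dt + beta*N dB_t (Ito interpretation),
# via the exact solution N_t = N_0 exp[(alpha - beta^2/2) t + beta B_t].
rng = np.random.default_rng(2)
alpha, beta, t, n_paths = 1.0, 0.3, 1.0, 200_000

B_t = rng.normal(0.0, np.sqrt(t), n_paths)              # B_t ~ N(0, t)
N_t = np.exp((alpha - 0.5 * beta**2) * t + beta * B_t)  # N_0 = 1
# The ensemble average reproduces the deterministic limit exp(alpha*t).
print(N_t.mean(), np.exp(alpha * t))
```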
(c) Diffraction of Optical Waves
The transfer function T(ω); ω = (ω₁, ω₂) of a two-dimensional optical
device is defined by

$$T(\omega) = \int_{-\infty}^{\infty} \mathrm{d}x \int_{-\infty}^{\infty} \mathrm{d}y\, F(x,y)\, F^*(x - \omega_1, y - \omega_2)/N; \qquad N = \int_{-\infty}^{\infty} \mathrm{d}x \int_{-\infty}^{\infty} \mathrm{d}y\, |F(x,y)|^2,$$
where F is a complex wave amplitude and F* = cc(F) is its complex
conjugate. The parameter N denotes the normalization of |F(x,y)|²
and the variables x and y stand for the coordinates of the image
plane. In a simplified treatment, we assume that the wave form is
given by

$$F = |F| \exp(-ikA); \qquad |F|, k = \text{const},$$
where k and A stand for the wave number and the phase of the
waves, respectively. We suppose that the wave emerging from the
optical instrument (e.g. a lens) exhibits a phase with two different
deviations from a spherical structure, A = A_c + A_r, with a controlled
or deterministic phase A_c(x,y) and a random phase A_r(x,y) that
arises from polishing the optical device or from atmospheric influ-
ences. Thus, we obtain
$$T(\omega) = \frac{1}{K} \int_{-\infty}^{\infty} \mathrm{d}x \int_{-\infty}^{\infty} \mathrm{d}y \exp\{ik[A(x - \omega_1, y - \omega_2) - A(x,y)]\},$$
where K is used to include the normalization. In simple applications
we can model the random phase using white noise with a Gaussian
probability density. To evaluate the average of the transfer function
⟨T(ω)⟩ we need to calculate the quantity
$$\langle \exp\{ik[A_r(x - \omega_1, y - \omega_2) - A_r(x,y)]\} \rangle.$$
We will study the Gaussian probability density and complete the
task to determine the average written in the last line in Section 1.3
of Chapter 1. An introduction to random effects in optics can be
found in O'Neill [5].
(d) Filtering Problems
Suppose that we have performed experiments on a stochastic problem
such as the one in (a) in an interval t ∈ [0, u] and we obtain as a result,
say, A(v), v ∈ [0, u]. To improve the knowledge about the solution we
repeat the experiments for t ∈ [u, T] and we obtain A(t), t ∈ [u, T].
Yet due to inevitable experimental errors we do not obtain A(t) but
a result that includes an error: A(t) + 'noise'. The question is now
how can we filter the noise away? A filter is thus an instrument to
clean a result and remove the noise that arises during the observa-
tion. A typical problem is where a signal with unknown frequency
is transmitted (e.g. by an electronic device) and it suffers during
the transmission the addition of a noise. If the transmitted signal
is stochastic itself (as in the case of music) we need to develop a
non-deterministic model for the signal with the aid of a stochastic
differential equation. To study the basic ideas of filtering problems
the reader is referred to the book of Stremler [6].
(e) Fluidmechanical Turbulence
This is perhaps the most challenging and most intricate application
of statistical science. We consider here the continuum dynamics of a
flow field influenced by stochastic effects. The latter arise from initial
conditions (e.g. at the nozzle of a jet flow, or at the entry region of a
channel flow) and/or from background noise (e.g. acoustic waves). In
the simplest case, incompressible two-dimensional flows, there are
three characteristic variables (two velocity components and the pres-
sure). These variables are governed by the Navier-Stokes equations
(NSEs). The latter are a set of three nonlinear partial differential
equations that include a parameter, the Reynolds number R. The
inverse of R is the coefficient of the highest derivatives of the NSEs.
Since turbulence occurs at intermediate to high values of R, this
phenomenon is the rule and not the exception in Fluid Dynamics and
it occurs in parameter regions where the NSEs are singular. Nonlin-
ear SDEs — such as the NSEs — lead additionally to the problem
of closure, where the equation governing the statistical moment
of nth order contains moments of the (n + 1)th order.
Hopf [7] was the first to try to find a theoretical approach to
solve the problem for the idealized case of isotropic homogeneous tur-
bulence, a flow configuration that can be approximately realized in
grid flows. Hopf assumed that the turbulence is Gaussian, an assump-
tion that facilitates the calculation of higher statistical moments of
the distribution (see Section 1.3 in Chapter 1). However, later mea-
surements showed that the assumption of a Gaussian distribution
was rather unrealistic. Kraichnan [8] studied the problem again in
the 60's and 70's with the direct triad interaction theory in the ide-
alized configuration of homogeneous isotropic turbulence. However,
this rather involved analysis could only be applied to calculate the
spectrum of very small eddies, where the viscosity dominates the flow.
Somewhat more progress has been achieved by the investigation of
Rudenko and Chirin [9]. The latter predicted, with the aid of stochas-
tic initial conditions with random phases, broad-banded spectra
of a nonlinear model equation. During the last two decades there
was intensive work done to investigate the Burgers equation and
this research is summarized in part by Wojczinsky [10]. The Burgers
equation is supposed to be a reasonable one-dimensional model of
the NSEs. We will give a short account of the work done in [9] in
Chapter 4.
GLOSSARY
AC almost certainly
BC boundary condition
dB_t = dW_t = ξ_t dt differential of the Brownian motion
(or equivalently Wiener process)
cc(a) = a* complex conjugate of a
D dimension or dimensional
DF distribution function
DOF degrees of freedom
δ_ij Kronecker delta function
δ(x) Dirac delta function
EX exercise at the end of a chapter
FPE Fokker-Planck equation
Γ(x) gamma function
GD Gaussian distribution
GPD Gaussian probability distribution
HPP homogeneous Poisson process
Hn(x) Hermite polynomial of order n
IC initial condition
IID identically independently distributed
IFF if and only if
IMSL International Mathematical and Statistical Library
ℒ Laplace transform
M master, as in master equation
MCM Monte Carlo method
NSE Navier-Stokes equation
NIGD normal inverted GD
N(μ, σ) normal distribution with μ as mean and σ as variance
o Stratonovich theory
ODE ordinary differential equation
PD probability distribution
PDE partial differential equation
PDF probability distribution function
PSDE partial SDE
R Reynolds number
RE random experiment
RN random number
RV random variable
Re(a) real part of a complex number
R, C sets of real and complex numbers, respectively
S Prandtl number
SF stochastic function
SI stochastic integral
SDE stochastic differential equation
SLLN strong law of large numbers
TPT transition probability per unit time
WP Wiener process
WS Wiener sheet
WKB Wentzel, Kramers, Brillouin
WRT with respect to
W(t) Wiener white (single frequency) noise
⟨a⟩ average of a stochastic variable a
σ² = ⟨a²⟩ − ⟨a⟩⟨a⟩ variance
⟨x | y⟩, ⟨x, u | y, v⟩ conditional averages
s ∧ t minimum of s and t
∀ for all values of
∈ element of
∫ f(x)dx shorthand for ∫_{−∞}^{∞} f(x)dx
♣ end of an example
• end of definition
♦ end of theorem
CHAPTER 1
STOCHASTIC VARIABLES AND
STOCHASTIC PROCESSES
1.1. Probability Theory
An experiment (or a trial of some process) is performed whose
outcome (result) is uncertain: it depends on chance. A collec-
tion of all possible elementary (or individual) outcomes is called
the sample space (or phase space, or range) and is denoted
by Ω. If the experiment is tossing a pair of distinguishable dice,
then Ω = {(i,j) | 1 ≤ i,j ≤ 6}. For the case of an exper-
iment with a fluctuating pressure, Ω is the range of all positive
pressure values, Ω = (0, ∞). An observable event A is a subset of Ω; this
is written in the form A ⊂ Ω. In the dice example we could
choose an event, for example, as A = {(i,j) | i + j = 4}. For the
case of fluctuating pressures we could use the subset A = (p₀, ∞)
with p₀ > 0.

Not every subset of Ω is observable (or interesting). An example
of a non-observable event appears when a pair of dice are tossed and
only their spots are counted, Ω = {(i,j), 2 ≤ i + j ≤ 12}. Then
elementary outcomes like (1, 2), (2, 1) or (3, 1), (2, 2), (1, 3) are not
distinguished.
Let Γ be the set of observable events for one single experiment.
Then Γ must include the certain event Ω and the impossible
event ∅ (the empty set). For every A ∈ Γ, A^c, the complement of
A, satisfies A^c ∈ Γ, and for every B ∈ Γ the union and intersection
of events, A ∪ B and A ∩ B, must pertain also to Γ. Γ is called
an algebra of events. In many cases there are countable unions and
intersections in Γ. Then it is sufficient to assume that

$$\bigcup_{n=1}^{\infty} A_n \in \Gamma, \quad \text{if } A_n \in \Gamma.$$
An algebra with this property is called a sigma algebra. In measure
theory, the elements of Γ are called measurable sets and the pair
(Γ, Ω) is called a measurable space.

A finite measure Pr(A) defined on Γ with

$$0 \le \Pr(A) \le 1, \quad \Pr(\emptyset) = 0, \quad \Pr(\Omega) = 1,$$

is called the probability and the triple (Γ, Ω, Pr) is referred to as the
probability space. The set function Pr assigns to every event A
the real number Pr(A). The rules for this set function are, along with
the formula above,

$$\Pr(A^c) = 1 - \Pr(A); \quad \Pr(A) \le \Pr(B); \quad \Pr(B \setminus A) = \Pr(B) - \Pr(A) \quad \text{for } A \subset B \in \Gamma.$$

The probability measure Pr on Ω is thus a function Pr: Γ → [0, 1]
and it is generally derived with Lebesgue integrations that are
defined on Borel sets.
We introduced this formal concept because it can be used as the
most general way to introduce probability theory axiomatically
(see e.g. Chung [1.1]). We will not follow this procedure; instead we
will introduce stochastic variables and their probabilities heuristically.
Definition 1.1. (Stochastic variables)
A random (or stochastic) variable X(ω), ω ∈ Ω is a real valued
function defined on the sample space Ω. In the following we omit the
parameter ω whenever no confusion is possible. •
Definition 1.2. (Probability of an event)
The probability of an event equals the number of elementary out-
comes divided by the total number of all elementary outcomes, pro-
vided that all cases are equally likely. •
Example
For the case of a discrete sample space with a finite number of ele-
mentary outcomes we have Ω = {ω₁, ..., ω_n} and an event is given
by A = {ω₁, ..., ω_k}, 1 ≤ k ≤ n. The probability of the event A is
then Pr(A) = k/n. ♣
Definition 1.3. (Probability distribution function and probability
density)
In the continuous case, the probability distribution function
(PDF) F_X(x) of a vectorial stochastic variable X = (X₁, ..., X_n)
is defined by the monotonically increasing real function

$$F_X(x_1, \ldots, x_n) = \Pr(X_1 \le x_1, \ldots, X_n \le x_n), \tag{1.1}$$

where we used the convention that the variable itself is written
in upper case letters, whereas the actual values that this variable
assumes are denoted by lower case letters.

The probability density (PD) p_X(x₁, ..., x_n) of the random
variable is then defined by

$$F_X(x_1, \ldots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} p_X(u_1, \ldots, u_n)\, \mathrm{d}u_1 \cdots \mathrm{d}u_n \tag{1.2}$$

and this leads to

$$\frac{\partial^n F_X}{\partial x_1 \cdots \partial x_n} = p_X(x_1, \ldots, x_n). \tag{1.3}$$

Note that we can express (1.1) and (1.2) alternatively if we put

$$\Pr(x_{11} < X_1 \le x_{12}, \ldots, x_{n1} < X_n \le x_{n2}) = \int_{x_{11}}^{x_{12}} \cdots \int_{x_{n1}}^{x_{n2}} p_X(x_1, \ldots, x_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n. \tag{1.1a}$$

The conditions to be imposed on the PD are given by the positiveness
and the normalization condition

$$p_X(x_1, \ldots, x_n) \ge 0; \qquad \int \cdots \int p_X(x_1, \ldots, x_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n = 1. \tag{1.4}$$

In the latter equation we used the convention that integrals without
explicitly given limits refer to integrals extending from the lower
boundary −∞ to the upper boundary ∞. •

In a continuous phase space the PD may contain Dirac delta
functions

$$p(x) = \sum_k q(k)\, \delta(x - k) + \tilde{p}(x); \qquad q(k) = \Pr(x = k), \tag{1.5}$$
where q(k) represents the probability that the variable x of the dis-
crete set equals the integer value k. We also dropped the index X in
the latter formula. We can interpret it to correspond to a PD of a set
of discrete states of probabilities q(k) that are embedded in a con-
tinuous phase space S. The normalization condition (1.4) yields now

$$\sum_k q(k) + \int_S \tilde{p}(x)\, \mathrm{d}x = 1.$$
Examples (discrete Bernoulli and Poisson distributions)
First we consider the Bernoulli distribution

$$\text{(i)} \quad q_k = \Pr(x = k) = b(k, n, p) = \binom{n}{k} p^k (1 - p)^{n-k}; \quad k = 0, 1, \ldots, n,$$

and then we introduce the Poisson distribution

$$\text{(ii)} \quad \pi_k(\lambda t) = \Pr(x = k) = \frac{(\lambda t)^k}{k!} e^{-\lambda t}; \quad k = 0, 1, \ldots.$$

In the appendix of this chapter we will give more details about the
Poisson distribution. We derive there the Poisson distribution as a limit
of the Bernoulli distribution

$$\pi_k(\lambda t) = \lim_{n \to \infty} b(k, n, p = \lambda t/n). \quad ♣$$
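The limit can be watched directly; a small Python check, in which λt = 2 and n = 10⁴ are illustrative choices:

```python
from math import comb, exp, factorial

# Compare the Bernoulli (binomial) probabilities b(k, n, p = lam_t/n)
# with their Poisson limit pi_k(lam_t).
lam_t, n = 2.0, 10_000
for k in range(5):
    b = comb(n, k) * (lam_t / n)**k * (1.0 - lam_t / n)**(n - k)
    pi = (lam_t**k / factorial(k)) * exp(-lam_t)
    print(k, round(b, 6), round(pi, 6))
```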
In the following we will consider in almost all cases only contin-
uous sets.
1.2. Averages
The sample space and the PD define together completely a stochas-
tic variable. To introduce observable quantities we consider now aver-
ages. The expectation value (or the average, or the mean value)
of a function G(x₁, ..., x_n) of the stochastic variables x₁, ..., x_n is
defined by

$$\langle G(x_1, \ldots, x_n) \rangle = \int \cdots \int G(x_1, \ldots, x_n)\, p_X(x_1, \ldots, x_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n. \tag{1.6}$$

In the case of a discrete variable we must replace the integral in
(1.6) by a summation. We obtain then, with the use of (1.5) for p(x),

$$\langle G(x_1, \ldots, x_n) \rangle = \sum_{k_1} \cdots \sum_{k_n} G(k_1, \ldots, k_n)\, q(k_1, \ldots, k_n). \tag{1.7}$$
There are two rules for the application of the averages:

(i) a and b are two deterministic constants and G(x₁, ..., x_n)
and H(x₁, ..., x_n) are two functions of the random variables
x₁, ..., x_n. Then we have

$$\langle a G(x_1, \ldots, x_n) + b H(x_1, \ldots, x_n) \rangle = a \langle G(x_1, \ldots, x_n) \rangle + b \langle H(x_1, \ldots, x_n) \rangle, \tag{1.8a}$$

and

(ii)

$$\langle \langle G(x_1, \ldots, x_n) \rangle \rangle = \langle G(x_1, \ldots, x_n) \rangle. \tag{1.8b}$$
Now we consider two scalar random variables x and y with joint
PD p(x, y). If we do not have more information (observed values)
of y, we introduce the two marginal PD's p_X(x) and p_Y(y) of the
single variables x and y:

$$p_X(x) = \int p(x,y)\, \mathrm{d}y; \qquad p_Y(y) = \int p(x,y)\, \mathrm{d}x, \tag{1.9a}$$

where we integrate over the phase spaces S_x (S_y) of the variables
x (y). The normalization condition (1.4) yields

$$\int p_X(x)\, \mathrm{d}x = \int p_Y(y)\, \mathrm{d}y = 1. \tag{1.9b}$$
Definition 1.4. (Independence of variables)
We consider n random variables x₁, ..., x_n; x₁ is independent of
the other variables x₂, ..., x_n if

$$\langle x_1 x_2 \cdots x_n \rangle = \langle x_1 \rangle \langle x_2 \cdots x_n \rangle. \tag{1.10a}$$

We see easily that a sufficient condition to satisfy (1.10a) is

$$p(x_1, \ldots, x_n) = p_1(x_1)\, p_{n-1}(x_2, \ldots, x_n), \tag{1.10b}$$

where p_k(...), k < n denotes the marginal probability distribution of
the corresponding variables. •
The moments of a PD of a scalar variable x are given by

$$\langle x^n \rangle = \int x^n p(x)\, \mathrm{d}x, \quad n \in \mathbb{N},$$

where n denotes the order of the moment. The first order moment
⟨x⟩ is the average of x and we introduce the variance σ² by

$$\sigma^2 = \langle (x - \langle x \rangle)^2 \rangle = \langle x^2 \rangle - \langle x \rangle^2 \ge 0. \tag{1.11}$$

The random variable x − ⟨x⟩ is called the deviation (its root mean
square, σ, is the standard deviation).
The average of the Fourier transform of a PD is called the
characteristic function

$$G(k_1, \ldots, k_n) = \langle \exp(i k_r x_r) \rangle = \int p(x_1, \ldots, x_n) \exp(i k_r x_r)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n, \tag{1.12}$$

where we applied a summation convention $k_r x_r = \sum_{j=1}^{n} k_j x_j$. This
function has the properties G(0, ..., 0) = 1; |G(k₁, ..., k_n)| ≤ 1.
Example
The Gaussian (or normal) PD of a scalar variable x is given by

$$p(x) = (2\pi)^{-1/2} \exp(-x^2/2); \quad -\infty < x < \infty. \tag{1.13a}$$

Hence we obtain (see also EX 1.1)

$$\langle x^{2n} \rangle = \frac{(2n)!}{2^n n!}; \quad \sigma^2 = 1; \quad \langle x^{2n+1} \rangle = 0. \tag{1.13b}$$

A stochastic variable characterized by N(m, s) is a normally dis-
tributed variable with the average m and the variance s. The vari-
able x distributed with the PD (1.13a) is thus called a normally
distributed variable with N(0, 1).
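The moments (1.13b) are easily confirmed by sampling; a minimal Python check, in which the sample size is an illustrative choice:

```python
import numpy as np
from math import factorial

# Sample check of <x^(2n)> = (2n)!/(2^n n!) for x ~ N(0, 1).
rng = np.random.default_rng(3)
x = rng.standard_normal(1_000_000)
for n in (1, 2, 3):
    sample = np.mean(x**(2 * n))
    exact = factorial(2 * n) / (2**n * factorial(n))  # 1, 3, 15 for n = 1, 2, 3
    print(2 * n, round(float(sample), 3), exact)
```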
A Taylor expansion of the characteristic function G(k) of (1.13a)
yields with (1.12)

$$G(k) = \sum_{n=0}^{\infty} \frac{(ik)^n}{n!} \langle x^n \rangle. \tag{1.14a}$$

We define the cumulants κ_m by

$$\ln G(k) = \sum_{m=1}^{\infty} \frac{(ik)^m}{m!} \kappa_m. \tag{1.14b}$$

A comparison of equal powers of k gives

$$\kappa_1 = \langle x \rangle; \quad \kappa_2 = \langle x^2 \rangle - \langle x \rangle^2 = \sigma^2; \quad \kappa_3 = \langle x^3 \rangle - 3 \langle x^2 \rangle \langle x \rangle + 2 \langle x \rangle^3; \ldots \tag{1.14c}$$
♣
Definition 1.5. (Conditional probability)
We assume that A, B ∈ Γ are two random events of the set of
observable events Γ. The conditional probability of A given B
(or knowing B, or under the hypothesis of B) is defined by

$$\Pr(A \mid B) = \Pr(A \cap B)/\Pr(B); \quad \Pr(B) > 0.$$

Thus only events that occur simultaneously in A and B contribute
to the conditional probability.

Now we consider n random variables x₁, ..., x_n with the joint
PD p_n(x₁, ..., x_n). We select a subset of variables x₁, ..., x_s and we
define a conditional PD of the latter variables, knowing the remaining
subset x_{s+1}, ..., x_n, in the form

$$p_{s|n-s}(x_1, \ldots, x_s \mid x_{s+1}, \ldots, x_n) = p_n(x_1, \ldots, x_n)/p_{n-s}(x_{s+1}, \ldots, x_n). \tag{1.15}$$

Equation (1.15) is called Bayes's rule and we use the marginal PD

$$p_{n-s}(x_{s+1}, \ldots, x_n) = \int p_n(x_1, \ldots, x_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_s, \tag{1.16}$$

where the integration is over the phase space of the variables x₁ ··· x_s.
Sometimes it is useful to write Bayes's rule (1.15) in the form

$$p_n(x_1, \ldots, x_n) = p_{n-s}(x_{s+1}, \ldots, x_n)\, p_{s|n-s}(x_1, \ldots, x_s \mid x_{s+1}, \ldots, x_n). \tag{1.15'}$$
We can also rearrange (1.15') and we obtain

$$p_n(x_1, \ldots, x_n) = p_s(x_1, \ldots, x_s)\, p_{n-s|s}(x_{s+1}, \ldots, x_n \mid x_1, \ldots, x_s). \tag{1.15''}$$
•
Definition 1.6. (Conditional averages)
The conditional average of the random variable x₁, knowing
x₂, ..., x_n, is defined by

$$\langle x_1 \mid x_2, \ldots, x_n \rangle = \int x_1\, p_{1|n-1}(x_1 \mid x_2, \ldots, x_n)\, \mathrm{d}x_1 = \int x_1\, p_n(x_1, x_2, \ldots, x_n)\, \mathrm{d}x_1 \Big/ p_{n-1}(x_2, \ldots, x_n). \tag{1.17}$$

Note that (1.17) is a random variable.
The rules for this average are in analogy to (1.8)

$$\langle a x_1 + b x_2 \mid y \rangle = a \langle x_1 \mid y \rangle + b \langle x_2 \mid y \rangle, \qquad \langle \langle x \mid y \rangle \rangle = \langle x \mid y \rangle. \tag{1.18}$$
•
Example
We consider a scalar stochastic variable x with its PD p(x). An event
A is given by x ∈ [a, b]. Hence we have

$$p(x \mid A) = 0 \quad \forall x \notin [a, b],$$

and

$$p(x \mid A) = p(x) \Big/ \int_a^b p(s)\, \mathrm{d}s; \quad x \in [a, b].$$

The conditional average is thus given by

$$\langle x \mid A \rangle = \int_a^b x\, p(x)\, \mathrm{d}x \Big/ \int_a^b p(s)\, \mathrm{d}s.$$

For an exponentially distributed variable x in [0, ∞) we have p(x) =
λ exp(−λx). Thus we obtain for a > 0 the result

$$\langle x \mid x > a \rangle = \int_a^{\infty} x \exp(-\lambda x)\, \mathrm{d}x \Big/ \int_a^{\infty} \exp(-\lambda x)\, \mathrm{d}x = a + 1/\lambda. \quad ♣$$
1.3. Stochastic Processes, the Kolmogorov Criterion
and Martingales
In many applications (e.g. in irregular phenomena like blood flow,
capital investment, or motions of molecules, etc.) one encounters
a family of random variables that depend on continuous or dis-
crete parameters like the time or positions. We refer to {X(t, ω), t ∈
I, ω ∈ Ω}, where I is a set of (continuous or discrete) parameters and
X(t, ω) ∈ ℝⁿ, as a stochastic process (random process or stochas-
tic (random) function). If I is a discrete set it is more convenient
to call X(t, ω) a time series and to use the phrase process only for
continuous sets. If the parameter is the time t then we use I = [t₀, T],
where t₀ is an initial instant. For a fixed value of t ∈ I, X(t, ω) is a
random variable and for every fixed value of ω ∈ Ω (hence for every
observation) X(t, ω) is a real valued function. Any observation of this
process is called a sample function (realization, trajectory, path or
orbit) of the process.
We consider now a finite variate PD of a process and we define
the time dependent probability distribution functions (PDF) in analogy
to (1.1) in the form

$$F_X(x, t) = \Pr(X(t) \le x);$$
$$F_{X,Y}(x, t; y, s) = \Pr(X(t) \le x, Y(s) \le y);$$
$$F_{X_1, \ldots, X_n}(x_1, t_1; \ldots; x_n, t_n) = \Pr(X_1(t_1) \le x_1, \ldots, X_n(t_n) \le x_n),$$

where we omit the dependence of the process X(t) on the chance
variable ω whenever no confusion is possible. The system of PDF's
satisfies two classes of conditions:

(i) Symmetry
If {k₁, ..., k_n} is a permutation of 1, ..., n then we obtain

$$F_{X_1, \ldots, X_n}(x_{k_1}, t_{k_1}; \ldots; x_{k_n}, t_{k_n}) = F_{X_1, \ldots, X_n}(x_1, t_1; \ldots; x_n, t_n). \tag{1.19a}$$

(ii) Compatibility

$$F_{X_1, \ldots, X_n}(x_1, t_1; \ldots; x_r, t_r; \infty, t_{r+1}; \ldots; \infty, t_n) = F_{X_1, \ldots, X_r}(x_1, t_1; \ldots; x_r, t_r). \tag{1.19b}$$
The rules to calculate averages are still given by (1.6), where the
corresponding PD is derived by (1.3) and where the PDF's of (1.19)
are used:

$$p(x_1, t_1; \ldots; x_n, t_n) = \frac{\partial^n}{\partial x_1(t_1) \cdots \partial x_n(t_n)} F_{X_1, \ldots, X_n}(x_1, t_1; \ldots; x_n, t_n).$$
One would expect that a stochastic process at a high rate of
irregularity (expressed e.g. by high values of intensity constants, see
Chapter 2) would exhibit sample functions (SF) with a high degree
of irregularity like jumps or singularities. However, Kolmogorov's
criterion gives a condition for continuous SF:

Theorem 1.1. (Kolmogorov's criterion)
A bivariate distribution is necessary to give information about the
possibility of continuous SF. If and only if (IFF)

$$\langle |X(t_1) - X(t_2)|^a \rangle \le c\, |t_1 - t_2|^{1+b}; \quad a, b, c > 0; \quad t_1, t_2 \in [t_0, T], \tag{1.20}$$

then the stochastic process X(t) possesses almost certainly (AC; this
symbol is discussed in Chapter 5) continuous SF. However, the lat-
ter are nowhere differentiable, and their derivatives exhibit jumps
and, at higher order, singularities. ♦
We will use Kolmogorov's criterion later to investigate the SF of
Brownian motions and of stochastic integrals.
Definition 1.7. (Stationary process)
A process x(t) is stationary if its PD is independent of a time shift τ:

$$p(x_1, t_1 + \tau; \ldots; x_n, t_n + \tau) = p(x_1, t_1; \ldots; x_n, t_n). \tag{1.21a}$$

Equation (1.21a) implies that all moments are also independent of
the time shift

$$\langle x(t_1 + \tau)\, x(t_2 + \tau) \cdots x(t_k + \tau) \rangle = \langle x(t_1)\, x(t_2) \cdots x(t_k) \rangle; \quad \text{for } k = 1, 2, \ldots. \tag{1.21b}$$

A consequence of (1.21a) is given by

$$\langle x(t) \rangle = \langle x \rangle, \text{ independent of } t; \qquad \langle x(t)\, x(t + \tau) \rangle = \langle x(0)\, x(\tau) \rangle = g(\tau). \tag{1.21c}$$
•
The correlation matrix is defined by

$$c_{ik} = \langle z_i(t_1)\, z_k(t_2) \rangle; \qquad z_i(t_i) = x_i(t_i) - \langle x_i(t_i) \rangle. \tag{1.22}$$

Thus, we have

$$c_{ik} = \langle x_i(t_1)\, x_k(t_2) \rangle - \langle x_i(t_1) \rangle \langle x_k(t_2) \rangle. \tag{1.23}$$

The diagonal elements of this matrix are called autocorrelation
functions (we do not employ a summation convention)

$$c_{ii} = \langle z_i(t_1)\, z_i(t_2) \rangle.$$

The nondiagonal elements are referred to as cross-correlation
functions. The correlation coefficient (the nondimensional
correlation) is defined by

$$r_{ik} = \frac{\langle x_i(t_1)\, x_k(t_2) \rangle - \langle x_i(t_1) \rangle \langle x_k(t_2) \rangle}{\sqrt{\langle x_i^2(t_1) \rangle - \langle x_i(t_1) \rangle^2}\, \sqrt{\langle x_k^2(t_2) \rangle - \langle x_k(t_2) \rangle^2}}. \tag{1.24}$$

For stationary processes we have

$$c_{ik}(t_1, t_2) = \langle z_i(0)\, z_k(t_2 - t_1) \rangle = c_{ik}(t_2 - t_1); \qquad c_{ki}(t_1, t_2) = \langle z_k(t_1)\, z_i(t_2) \rangle = \langle z_k(t_1 - t_2)\, z_i(0) \rangle = c_{ik}(t_1 - t_2). \tag{1.25}$$

A stochastic function with c_{ik} = 0 is called an uncorrelated
function and we obtain

$$\langle x_i(t_1)\, x_k(t_2) \rangle = \langle x_i(t_1) \rangle \langle x_k(t_2) \rangle. \tag{1.26}$$
Example
We consider the process X(t) = U₁ cos t + U₂ sin t, where U₁, U₂ are
stochastic variables independent of each other and of the time. The
moments of the latter are given by ⟨U_k⟩ = 0, ⟨U_k²⟩ = a = const,
k = 1, 2, and ⟨U₁U₂⟩ = 0. Hence we obtain ⟨X⟩ = 0; c_xx(s, t) = a cos(t − s). ♣
Remark (Statistical mechanics and stochastic differential equations)
In Chapter 2 we will see that stochastic differential equations or
"stochastic mechanics" can be used to investigate a single mechani-
cal system in the presence of stochastic influences (white or colored
noise). We use concepts that are similar to those developed in statis-
tical mechanics such as probability distribution functions, moments,
Markov properties, ergodicity, etc. We solve the stochastic differen-
tial equation (analytically, but in most cases numerically) and one
solution represents a realization of the system. Repeating the solu-
tion process we obtain another realization and in this way we are
able to calculate the moments of the system. An alternative way to
calculate the moments would be to solve the Fokker-Planck equation
(see: Chapter 3) and then use the corresponding solution to deter-
mine the moments. To establish the Fokker-Planck equation we will
use again the coefficients of the stochastic differential equation.
Statistical Mechanics works with the use of ensemble averages.
Rather than defining a single quantity (e.g. a particle) with a PD
p(x), one introduces a fictitious set of an arbitrary large number of
M quantities (e.g. particles or thermodynamic systems) and these M
non-interacting quantities define the ensemble. In case of interact-
ing particles, the ensemble is made up by M different realizations
of the N particles. In general, these quantities have different charac-
teristic values (temperature, or energy, or values of N) x, in a com-
mon range. The number of quantities having a characteristic value
between x and x + dx defines the PD. Therefore, the PD is replaced
by a density function for a large number of samples. One observes a
large number of quantities and averages the results. Since, by defini-
tion, the quantities do not interact one obtains in this way a physical
realization of the ensemble. The averages calculated with this den-
sity function are referred to as ensemble averages and a system where
ensemble averages equal time averages is called an ergodic system.
In stochastic mechanics we say that a process with the property that
the averages defined in accordance with (1.6) equal the time averages,
represents an ergodic process.
Another stochastic process that possesses SF of some regularity is
called a martingale. This name is related to "fair games" and we
give a discussion of this expression in a moment.
In everyday language, we can state that the best prediction of
a martingale process X(t) conditional on the path of all Brownian
motions up to s < t is given by the previous value X(s). To make this
idea precise we formulate the following theorem:
Theorem 1.2. (Adapted process)
We consider a probability space (Γ, Ω, Pr) with an increasing family
(of sigma algebras of Γ) of events Γ_s ⊂ Γ_t, 0 ≤ s < t (see Section 1.1).
A process X(s, ω); ω ∈ Ω, s ∈ [0, ∞) is called Γ_s-adapted if it is Γ_s-
measurable. A Γ_s-adapted process can be expanded into (the
limit of) a sequence of Brownian motions B_u(ω) with u ≤ s (but not
u > s). ♦
Example
For n = 2, 3, ...; 0 < λ < t we see that the processes

(i) G₁(t, ω) = B_{t/n}(ω), G₂(t, ω) = B_{t−λ}(ω),
(ii) G₃(t, ω) = B_{nt}(ω), G₄(t, ω) = B_{t+λ}(ω),

are Γ_t-adapted, respectively not adapted. ♣
Theorem 1.3. (Martingale process)
A process X(t) is called a martingale IFF it is adapted and the
condition

$$\langle X_t \mid \Gamma_s \rangle = X_s \quad \forall\, 0 \le s < t < \infty \tag{1.27}$$

is almost certainly (AC) satisfied.
If we replace the equality sign in (1.27) by ≤ (≥) we obtain
a super (sub) martingale. We note that martingales have no other
discontinuities than at worst finite jumps (see Arnold [1.2]). ♦

Note that (1.27) defines a stochastic process. Its expectation
⟨⟨X_t | Γ_s⟩⟩ = ⟨X_s⟩; s < t is a deterministic function.
An interesting property of a martingale is expressed by

$$\Pr\Big(\sup_{t \in [a,b]} |X(t)| \ge c\Big) \le \langle |X(b)|^p \rangle / c^p; \quad c > 0; \; p \ge 1, \tag{1.28}$$

where sup is the supremum of the embraced process in the interval
[a, b]. (1.28) is a particular version of the Chebyshev inequality that
will be derived in EX 1.2. We apply later the concept of martingales
to Wiener processes and to stochastic integrals.
Finally we give an explanation of the phrase "martingale". A
gambler is involved in a fair game and he has at the start the capital
X(s). Then he should possess in the mean at the instant t > s the
original capital X(s). This is expressed in terms of the conditional
mean value ⟨X_t | X_s⟩ = X_s. Etymologically, this term comes from
French and means a system of betting which seeks the amount to be
wagered after each win or loss.
1.4. The Gaussian Distribution and Limit Theorems
In relation (1.13) we have already introduced a special case of the
Gaussian (normally distributed) PD (GD) for a scalar variable. A gen-
eralization of (1.13) is given by the N(m, σ²) PD

$$p(x) = (2\pi \sigma^2)^{-1/2} \exp[-(x - m)^2/(2\sigma^2)]; \quad \forall x \in (-\infty, \infty), \tag{1.29}$$

where m is the average and σ² = ⟨x²⟩ − m² is the variance. The mul-
tivariate form of the Gaussian PD for the set of variables x₁, ..., x_n
has the form

$$p(x_1, \ldots, x_n) = N \exp\Big({-\frac{1}{2} A_{ik} x_i x_k - b_k x_k}\Big), \tag{1.30a}$$

where we use a summation convention. The normalization constant
N is given by

$$N = (2\pi)^{-n/2}\, [\mathrm{Det}(A)]^{1/2} \exp\Big({-\frac{1}{2} A^{-1}_{ik} b_i b_k}\Big). \tag{1.30b}$$

The characteristic function of (1.30) has the form

$$G(k_1, \ldots, k_n) = \exp\Big({-i A^{-1}_{uv} b_u k_v - \frac{1}{2} A^{-1}_{uv} k_u k_v}\Big). \tag{1.31}$$

An expansion of (1.31) WRT powers of k yields the moments

$$\langle x_i \rangle = -A^{-1}_{ik} b_k, \tag{1.32a}$$

and the covariance is given by

$$c_{ik} = \langle (x_i - \langle x_i \rangle)(x_k - \langle x_k \rangle) \rangle = A^{-1}_{ik}. \tag{1.32b}$$
This indicates that the GD is completely given if the mean value and
the covariance matrix are evaluated. The n variables are uncorrelated
and thus are independent if A⁻¹ and hence A itself are diagonal.

The higher moments of an n-variate GD with zero mean are partic-
ularly easy to calculate. To show this, we recall that for zero mean
we have b_k = 0 and we obtain the characteristic function with the
use of (1.31) and (1.32) in the form

$$G = \exp\Big({-\frac{1}{2} \langle x_u x_v \rangle k_u k_v}\Big) = 1 - \frac{1}{2} \langle x_u x_v \rangle k_u k_v + \frac{1}{8} \langle x_u x_v \rangle \langle x_p x_q \rangle k_u k_v k_p k_q \mp \cdots; \quad u, v, p, q = 1, 2, \ldots. \tag{1.33}$$

A comparison of equal powers of k in (1.33) and in a Taylor
expansion of the exponent in (1.31) shows that all odd moments
vanish

$$\langle x_a x_b x_c \rangle = \langle x_a x_b x_c x_d x_e \rangle = \cdots = 0.$$

We also obtain with restriction to n = 2 (bivariate GD)

$$\langle x_1^4 \rangle = 3 \langle x_1^2 \rangle^2; \quad \langle x_1^3 x_p \rangle = 3 \langle x_1^2 \rangle \langle x_1 x_p \rangle, \quad p = 1, 2; \quad \langle x_1^2 x_2^2 \rangle = \langle x_1^2 \rangle \langle x_2^2 \rangle + 2 \langle x_1 x_2 \rangle^2. \tag{1.34}$$

In the case of a trivariate PD we face additional terms of the type
$\langle x_k^2 x_p x_r \rangle = \langle x_k^2 \rangle \langle x_p x_r \rangle + 2 \langle x_k x_p \rangle \langle x_k x_r \rangle$. The higher order variate and
higher order moments can be calculated in analogy to the results
(1.34).
We give also the explicit formula of the bivariate Gaussian (see
also EX 1.3)

$$p(x, y) = \frac{1}{N_2} \exp\left\{-\frac{1}{2(1 - r^2)}\left[\frac{\xi^2}{a} - \frac{2 r \xi \eta}{\sqrt{ab}} + \frac{\eta^2}{b}\right]\right\}, \tag{1.35a}$$

with

$$\xi = x - \langle x \rangle, \quad \eta = y - \langle y \rangle; \qquad N_2 = 2\pi \sqrt{ab(1 - r^2)}; \quad a = \sigma_x^2, \; b = \sigma_y^2, \tag{1.35b}$$

and where r is defined as the cross correlation coefficient (1.24).
For σ_x = σ_y = 1 and ⟨x⟩ = ⟨y⟩ = 0 in (1.35) we can expand the latter
formula and we obtain

$$p(x, y) = (2\pi)^{-1} \exp[-(x^2 + y^2)/2] \sum_{k=0}^{\infty} \frac{r^k}{k!} H_k(x)\, H_k(y), \tag{1.36}$$

where H_k(x) is the k-th order Hermite polynomial (see Abramowitz
and Stegun [1.3]). Equation (1.36) is the basis of the "Hermitian-
chaos" expansion in the theory of stochastic partial differential
equations.
In EX 1.3 we show that conditional probabilities of the GD
(1.35a) are Gaussian themselves.
Now we consider two limit theorems. The first of them is related
to GD and we introduce the second one for later use.
1.4.1. The central limit theorem
We consider the random variable

$$U = \frac{1}{\sqrt{n}} \sum_{k=1}^{n} x_k; \quad \langle x_k \rangle = 0, \tag{1.37}$$

where the x_k are identically independently distributed (IID) (but not nec-
essarily normal) variables with zero mean and variance σ² = ⟨x_k²⟩.
We find easily ⟨U⟩ = 0 and ⟨U²⟩ = σ².

The central limit theorem says that U tends in the limit
n → ∞ to a N(0, σ²) variable with a PD given by (1.13a). To prove
this we use the independence of the variables x_k and we perform the
calculation of the characteristic function of the variable U with the
aid of (1.12)

$$G_U(k) = \int \mathrm{d}x_1\, p(x_1) \cdots \int \mathrm{d}x_n\, p(x_n) \exp\big[ik(x_1 + \cdots + x_n)/\sqrt{n}\big] = \big[G_x(k/\sqrt{n})\big]^n = \left[1 - \frac{k^2 \sigma^2}{2n} + O(n^{-3/2})\right]^n \to \exp(-k^2 \sigma^2/2) \quad \text{for } n \to \infty. \tag{1.38}$$
We introduced in the second line of (1.38) the characteristic function
of one of the individual random functions according to (1.14a); (1.38)
is the characteristic function of a GD that corresponds indeed to
N(0, σ²). Note that this result is independent of the particular form
of the individual PD's p(x). It is only required that p(x) has finite
moments. The central limit theorem explains why the Gaussian PD
plays a prominent role in probability and stochastics.
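A numerical illustration of the theorem with uniform (hence non-Gaussian) summands; n and the sample count below are illustrative choices:

```python
import numpy as np

# U = (x_1 + ... + x_n)/sqrt(n) for IID uniform variables on (-1/2, 1/2),
# which have zero mean and variance sigma^2 = 1/12; U approaches N(0, 1/12).
rng = np.random.default_rng(5)
n, samples = 100, 100_000

x = rng.uniform(-0.5, 0.5, size=(samples, n))
U = x.sum(axis=1) / np.sqrt(n)
print(U.mean(), U.var(), 1.0 / 12.0)  # mean ~ 0, variance ~ 1/12
```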
1.4.2. The law of the iterated logarithm
We give here only this theorem and refer the reader for its derivation
to the book of Chow and Teicher [1.4]. y_n is the partial sum of n IID
variables

$$y_n = x_1 + \cdots + x_n; \quad \langle x_n \rangle = \beta, \quad \langle (x_n - \beta)^2 \rangle = \sigma^2. \tag{1.39}$$

The theorem of the iterated logarithm states that there exists AC an
asymptotic limit

$$-\sigma \le \lim_{n \to \infty} \frac{y_n - n\beta}{\sqrt{2n \ln[\ln(n)]}} \le \sigma. \tag{1.40}$$
Equation (1.40) is particularly valuable in the case of estimates
of stochastic functions and we will use it later to investigate
Brownian motions. We will give a numerical verification of (1.40)
in program F18.
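A sketch in the spirit of that verification (this is not the book's program F18) uses IID N(0, 1) summands, i.e. β = 0 and σ = 1; the sample size is an illustrative choice:

```python
import numpy as np

# Scaled partial sums y_n/sqrt(2 n ln(ln n)) of IID N(0, 1) variables;
# by (1.40) the tail values should lie (almost certainly) inside [-1, 1].
rng = np.random.default_rng(6)
n = 1_000_000
y = np.cumsum(rng.standard_normal(n))
k = np.arange(3, n + 1)  # ln(ln(k)) > 0 requires k >= 3
z = y[2:] / np.sqrt(2.0 * k * np.log(np.log(k)))
print(z[-1000:].min(), z[-1000:].max())
```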
1.5. Transformation of Stochastic Variables
We consider transformations of an n-dimensional set of stochastic
variables x₁, ..., x_n with the PD p_{x₁···x_n}(x₁, ..., x_n). First we intro-
duce the PD of a linear combination of random variables

$$z = \sum_{k=1}^{n} \alpha_k x_k, \tag{1.41a}$$

where the α_k are deterministic constants. The PD of the stochastic
variable z is then defined by

$$p_z(z) = \int \mathrm{d}x_1 \cdots \int \mathrm{d}x_n\, \delta\Big(z - \sum_k \alpha_k x_k\Big)\, p_{x_1 \cdots x_n}(x_1, \ldots, x_n). \tag{1.41b}$$

Now we investigate transformations of the stochastic variables
x₁, ..., x_n. The new variables are defined by

$$u_k = u_k(x_1, \ldots, x_n), \quad k = 1, \ldots, n. \tag{1.42}$$
The inversion of this transformation and the Jacobian are

$$x_k = g_k(u_1, \ldots, u_n), \qquad J = \partial(x_1, \ldots, x_n)/\partial(u_1, \ldots, u_n). \tag{1.43}$$

We infer from an expansion of the probability measure (1.1a) that

$$\mathrm{d}p_{x_1 \cdots x_n} = \Pr(x_1 < X_1 \le x_1 + \mathrm{d}x_1, \ldots, x_n < X_n \le x_n + \mathrm{d}x_n) = p_{x_1 \cdots x_n}(x_1, \ldots, x_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n \quad \text{for } \mathrm{d}x_k \to 0, \; k = 1, \ldots, n. \tag{1.44a}$$

Equation (1.44a) represents the elementary probability measure that
the variables are located in the hyper plane

$$\prod_{k=1}^{n} [x_k, x_k + \mathrm{d}x_k].$$

The principle of invariant elementary probability measure
states that this measure is invariant under transformations of the
coordinate system. Thus, we obtain the transformation

$$\mathrm{d}p_{u_1 \cdots u_n} = \mathrm{d}p_{x_1 \cdots x_n}. \tag{1.44b}$$

This yields the transformation rule for the PD's

$$p_{u_1 \cdots u_n}(u_1(x_1, \ldots, x_n), \ldots, u_n(x_1, \ldots, x_n)) = |\det(J)|\, p_{x_1 \cdots x_n}(x_1, \ldots, x_n). \tag{1.45}$$
Example (The Box–Muller method)
As an application we introduce the transformation method of Box–
Muller to generate a GD. There are two stochastic variables given
in an elementary cube

$$p(x_1, x_2) = \begin{cases} 1 & 0 \le x_1 \le 1, \; 0 \le x_2 \le 1 \\ 0 & \text{elsewhere} \end{cases}. \tag{1.46}$$

Note that the bivariate PD is already normalized. Now we introduce
the new variables

$$y_1 = \sqrt{-2 \ln x_1}\, \cos(2\pi x_2), \qquad y_2 = \sqrt{-2 \ln x_1}\, \sin(2\pi x_2). \tag{1.47}$$

The inversion of (1.47) is

$$x_1 = \exp[-(y_1^2 + y_2^2)/2], \qquad x_2 = \frac{1}{2\pi} \arctan(y_2/y_1).$$
According to (1.45) we obtain the new bivariate PD

$$p(y_1, y_2) = p(x_1, x_2) \left|\frac{\partial(x_1, x_2)}{\partial(y_1, y_2)}\right| = \frac{1}{2\pi} \exp[-(y_1^2 + y_2^2)/2], \tag{1.48}$$

and this is the PD of two independent N(0, 1) variables. ♣
Until now we have only covered stochastic variables that are time-
independent, or stochastic processes for the case that all variables
belong to the same instant. In the next section we discuss a property
that is rather typical for stochastic processes.
1.6. The Markov Property
A process is called a Markov (or Markovian) process if the condi-
tional PD at a given time t_n depends only on the immediately prior
time t_{n−1}. This means that for t₁ < t₂ < ··· < t_n

$$p_{1|n-1}(y_n, t_n \mid y_1, t_1; \ldots; y_{n-1}, t_{n-1}) = p_{1|1}(y_n, t_n \mid y_{n-1}, t_{n-1}), \tag{1.49}$$

and the quantity p_{1|1}(y_n, t_n | y_{n−1}, t_{n−1}) is referred to as the transition
probability distribution (TPD).

A Markov process is thus completely defined if we know the two
functions

p₁(y₁, t₁) and p_{1|1}(y₂, t₂ | y₁, t₁) for t₁ < t₂.

Thus, we obtain for t₁ < t₂ (see (1.15'') and note that we use a
semicolon to separate coordinates that belong to different instants)

$$p_2(y_1, t_1; y_2, t_2) = p_1(y_1, t_1)\, p_{1|1}(y_2, t_2 \mid y_1, t_1), \tag{1.50.1}$$

and for t₁ < t₂ < t₃

$$p_3(y_1, t_1; y_2, t_2; y_3, t_3) = p_1(y_1, t_1)\, p_{1|1}(y_2, t_2 \mid y_1, t_1)\, p_{1|1}(y_3, t_3 \mid y_2, t_2). \tag{1.50.2}$$

We integrate equation (1.50.2) over the variable y₂ and we obtain

$$p_2(y_1, t_1; y_3, t_3) = p_1(y_1, t_1) \int p_{1|1}(y_2, t_2 \mid y_1, t_1)\, p_{1|1}(y_3, t_3 \mid y_2, t_2)\, \mathrm{d}y_2. \tag{1.51}$$
Now we use

$$p_{1|1}(y_3, t_3 \mid y_1, t_1) = p_2(y_1, t_1; y_3, t_3)/p_1(y_1, t_1),$$

and we obtain from (1.51) the Chapman-Kolmogorov equation

$$p_{1|1}(y_3, t_3 \mid y_1, t_1) = \int p_{1|1}(y_2, t_2 \mid y_1, t_1)\, p_{1|1}(y_3, t_3 \mid y_2, t_2)\, \mathrm{d}y_2. \tag{1.52}$$

It is easy to verify that a particular solution of (1.52) is given by

$$p_{1|1}(y_2, t_2 \mid y_1, t_1) = [2\pi(t_2 - t_1)]^{-1/2} \exp\{-(y_2 - y_1)^2/[2(t_2 - t_1)]\}. \tag{1.53}$$

We give in EX 1.4 hints how to verify (1.53).
We can also integrate the identity (1.50.1) over y₁ and we obtain

$$p_1(y_2, t_2) = \int p_1(y_1, t_1)\, p_{1|1}(y_2, t_2 \mid y_1, t_1)\, \mathrm{d}y_1. \tag{1.54}$$

The latter relation is an integral equation for the function p₁(y₂, t₂).
EX 1.5 gives hints to show that the solution to (1.54) is the
Gaussian PD

$$p_1(y, t) = (2\pi t)^{-1/2} \exp[-y^2/(2t)]; \qquad \lim_{t \to 0+} p_1(y, t) = \delta(y). \tag{1.55}$$
In Chapter 3 we use the Chapman-Kolmogorov equation (1.52)
to derive the master equation that is in turn applied to deduce the
Fokker-Planck equation.
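The solution (1.53) can also be checked against (1.52) numerically; the grid, the times and the end points in the following sketch are illustrative choices:

```python
import numpy as np

# Numerical check of the Chapman-Kolmogorov equation (1.52) for the
# Gaussian transition density (1.53).
def p(y2, t2, y1, t1):
    """Transition density (1.53)."""
    return np.exp(-(y2 - y1)**2 / (2.0 * (t2 - t1))) / np.sqrt(2.0 * np.pi * (t2 - t1))

y2, dy = np.linspace(-20.0, 20.0, 4001, retstep=True)
t1, t2, t3 = 0.0, 1.0, 2.5
y1, y3 = 0.0, 0.7

lhs = p(y3, t3, y1, t1)
rhs = np.sum(p(y2, t2, y1, t1) * p(y3, t3, y2, t2)) * dy  # integral over y2
print(lhs, rhs)  # the two values should agree
```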
1.6.1. Stationary Markov processes
Stationary Markovian processes are defined by a PD and transi-
tion probabilities that depend only on the time differences. The most
important example is the Ornstein-Uhlenbeck process that we
will treat in Sections 2.1.3 and 3.4. There we will prove the formulas
for its PD

$$p_1(y) = (2\pi)^{-1/2} \exp(-y^2/2), \tag{1.56.1}$$

and the transition probability

$$p_{1|1}(y_2, t_2 \mid y_1, t_1) = [2\pi(1 - u^2)]^{-1/2} \exp\left\{-\frac{(y_2 - u y_1)^2}{2(1 - u^2)}\right\}; \quad u = \exp(-\tau), \; \tau = t_2 - t_1; \qquad p_{1|1}(y_2, t_1 \mid y_1, t_1) = \delta(y_2 - y_1). \tag{1.56.2}$$

The Ornstein-Uhlenbeck process is thus stationary, Gaussian and
Markovian. A theorem of Doob [1.5] states that this is — apart from
the trivial process, where all variables are independent — the only pro-
cess that satisfies all the three properties listed above. We continue
to consider stationary Markov processes in Section 3.1.
1.7. The Brownian Motion
Brown discovered in the year 1828 that pollen submerged in fluids
shows, under collisions with fluid molecules, a completely irregular move-
ment. This process is labeled with y := B_t(ω), where the subscript
is the time. It is also called a Wiener (white noise) process and
labeled with the symbol W_t (WP); it is identical to the Brownian
motion: W_t = B_t. The WP is a Gaussian [it has the PD (1.55)] and
a Markov process.

Note also that the PD of the Wiener process (WP) — given
by (1.55) — satisfies a parabolic partial differential equation (called
the Fokker-Planck equation, see Section 3.2)

$$\frac{\partial p}{\partial t} = \frac{1}{2} \frac{\partial^2 p}{\partial y^2}. \tag{1.57}$$

We calculate the characteristic function G(u) and we obtain
according to (1.12)

$$G(u) = \langle \exp(iu W_t) \rangle = \exp(-u^2 t/2), \tag{1.58a}$$

and we obtain the moments in accordance with (1.13b)

$$\langle W_t^{2k} \rangle = \frac{(2k)!}{2^k k!}\, t^k; \qquad \langle W_t^{2k+1} \rangle = 0; \quad k \in \mathbb{N}_0. \tag{1.58b}$$

We use the Markovian properties now to prove the independence
of Brownian increments. The latter are defined by

$$y_1,\; y_2 - y_1,\; \ldots,\; y_n - y_{n-1} \quad \text{with } y_k := W_{t_k}; \quad t_1 < \cdots < t_n. \tag{1.59}$$
We calculate explicitly the joint distribution given by (1.50) and
we obtain with the use of (1.53) and (1.55)

$$p_2(y_1, t_1; y_2, t_2) = [(2\pi)^2 t_1 (t_2 - t_1)]^{-1/2} \exp\{-y_1^2/(2t_1) - (y_2 - y_1)^2/[2(t_2 - t_1)]\}, \tag{1.60}$$

and

$$p_3(y_1, t_1; y_2, t_2; y_3, t_3) = [(2\pi)^3 t_1 (t_2 - t_1)(t_3 - t_2)]^{-1/2} \exp\{-y_1^2/(2t_1) - (y_2 - y_1)^2/[2(t_2 - t_1)] - (y_3 - y_2)^2/[2(t_3 - t_2)]\},$$
$$p_4(y_1, t_1; y_2, t_2; y_3, t_3; y_4, t_4) = [2\pi(t_4 - t_3)]^{-1/2}\, p_3(y_1, t_1; y_2, t_2; y_3, t_3) \exp\{-(y_4 - y_3)^2/[2(t_4 - t_3)]\}. \tag{1.61}$$

We see that the joint PD's of the variables y₁, y₂ − y₁, y₃ − y₂, y₄ − y₃
are given in (1.60) and (1.61) in a factorized form and this implies
the independence of these variables. To prove the independence of
the remaining variables y₅ − y₄, ..., y_n − y_{n−1} we would only have to
continue the process of constructing joint PD's with the aid of (1.49).

In EX 1.6 we prove the following property

$$\langle y_1(t_1)\, y_2(t_2) \rangle = \min(t_1, t_2) = t_1 \wedge t_2. \tag{1.62}$$
Equation (1.62) also demonstrates that the Brownian motion is not
a stationary process, since the autocorrelation does not depend on
the time difference τ = t₂ − t₁ but depends on t₁ ∧ t₂.

To apply Kolmogorov's criterion (1.20) we choose a = 4 and we
obtain with (1.58b) and (1.62) ⟨[y(t₁) − y(t₂)]⁴⟩ = 3|t₂ − t₁|². Thus
we can conclude with the choice b = 1, c = 3 that the SF of the WP
are AC continuous functions. The two graphs, Figures 1(a) and 1(b),
are added in this section to indicate the continuous SF.

We apply also the law of the iterated logarithm to the WP. To this
end we consider the independent increments y_k − y_{k−1} where
t_k = kΔt with a finite time increment Δt. This yields for the partial
sum in (1.39)

$$\sum_{k=1}^{n} (y_k - y_{k-1}) = y_n = W_{n\Delta t}; \qquad \beta = \langle y_k - y_{k-1} \rangle = 0; \quad \langle (y_k - y_{k-1})^2 \rangle = \Delta t.$$
Fig. 1(a). The Brownian motion B_t versus the time axis. Included is a graph of the
numerically determined temporal evolution of the mean value and the variance.

Fig. 1(b). The planar Brownian motion with x = B_t¹ and y = B_t²; B_t^k, k = 1, 2 are
independent Brownian motions.
We substitute the results of the last line into (1.40) and we obtain

$$-\sqrt{\Delta t} \le \lim_{n \to \infty} \frac{W_{n\Delta t}}{\sqrt{2n \ln(\ln(n))}} \le \sqrt{\Delta t}.$$
The assignment of t := nΔt in the last line and the approximation
ln(t/Δt) → ln(t) for t → ∞ gives the desired result for the AC
asymptotic behavior of the WP

$$-1 \le \lim_{t \to \infty} \frac{W_t}{\sqrt{2t \ln(\ln(t))}} \le 1. \tag{1.63}$$
We will verify (1.63) in Chapter 5 numerically.
There are various equivalent definitions of a Wiener process. We
use the following:
Definition 1.8. (Wiener process)
A WP has an initial value W₀ = 0 and its increments W_t − W_s,
t > s, satisfy three conditions: they are (i) independent, (ii) stationary
(the PD depends only on t − s) and (iii) N(0, t − s) distributed.
As a consequence of these three conditions the WP exhibits continu-
ous sample functions with probability 1. •
There are also WP's that do not start at zero. There is also a
generalization of the WP with discontinuous SF. We will return to
this point at the end of Section 1.7.
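Definition 1.8 translates directly into a simulation recipe: a Wiener path is the cumulative sum of independent N(0, Δt) increments. The Python sketch below (all parameter values illustrative) also checks the autocorrelation ⟨W_tW_s⟩ = t ∧ s of (1.62) over an ensemble of paths:

```python
import numpy as np

# Wiener paths from Definition 1.8: cumulative sums of independent N(0, dt)
# increments, plus an ensemble check of <W_t W_s> = t ^ s, Eq. (1.62).
rng = np.random.default_rng(8)
dt, n_steps, n_paths = 1.0e-3, 1_000, 10_000

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)  # W[:, k] is the path at time (k + 1)*dt; W_0 = 0

i, j = 299, 899            # times t = 0.3 and s = 0.9
print(np.mean(W[:, i] * W[:, j]), min((i + 1) * dt, (j + 1) * dt))
```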
Now we show that a WP is a martingale:

$$\langle B_s \mid B_u \rangle = B_u; \quad s > u. \tag{1.64}$$

We prove (1.64) with the application of the Markovian property
(1.53). We use (1.17) and write

$$\langle B_s \mid B_u \rangle = \langle y_2, s \mid y_1, u \rangle = \int y_2\, p_{1|1}(y_2, s \mid y_1, u)\, \mathrm{d}y_2 = \frac{1}{\sqrt{2\pi(s - u)}} \int y_2 \exp\{-(y_2 - y_1)^2/[2(s - u)]\}\, \mathrm{d}y_2 = y_1 = B_u.$$

This concludes the proof of (1.64).
A WP has also the following properties. The translated quantity
Ŵ_t and the scaled quantity W̃_t defined by

$$\forall t, a > 0: \quad \hat{W}_t = W_{t+a} - W_a \quad \text{and} \quad \tilde{W}_t = \frac{1}{a} B_{a^2 t} \tag{1.65}$$

are also Brownian motions. To prove (1.65) we note first that the
averages of both variables are zero: ⟨Ŵ_t⟩ = ⟨W̃_t⟩ = 0. Now we have
to show that both variables satisfy also the condition for the auto-
correlation. We prove this only for the variable W̃_t and leave the
second part for EX 1.7. Thus, we put

$$\langle \tilde{W}_t \tilde{W}_s \rangle = \langle B_{a^2 t}\, B_{a^2 s} \rangle / a^2 = \frac{a^2 t \wedge a^2 s}{a^2} = t \wedge s.$$
So far, we considered exclusively scalar WP's. In the study of par-
tial differential equations we need to introduce a set of n independent
WP's. Thus, we generalize the WP to the case of n independent
WP's that define a vector of stochastic processes

$$x_1(t_1), \ldots, x_n(t_n); \quad t_k \ge 0. \tag{1.66}$$

The corresponding PD is then

$$p(x_1, \ldots, x_n) = p_{x_1}(x_1) \cdots p_{x_n}(x_n) = (2\pi)^{-n/2} \prod_{k=1}^{n} t_k^{-1/2} \exp[-x_k^2/(2 t_k)]. \tag{1.67}$$

We have assumed independent stochastic variables (like the orthog-
onal basic vectors in the case of deterministic variables) and this
independence is expressed by the factorized multivariate PD (1.67).
We define an n-dimensional WP (or a Wiener sheet (WS)) by

$$M_{\mathbf{t}}^{(n)} = \prod_{k=1}^{n} x_k(t_k); \quad \mathbf{t} = (t_1, \ldots, t_n). \tag{1.68}$$

Now we find how we can generalize Definition 1.8 to the case of
n stochastic processes. First, we prove easily that the variable (1.68)
has a zero mean

$$\langle M_{\mathbf{t}}^{(n)} \rangle = 0. \tag{1.69}$$
Thus, it remains to calculate the autocorrelation (1.62). We use the
independence of the set of variables x_k(t_k), k = 1, ..., n, apply the
bivariate PD (1.60) with y₁ = x_k(t_k), y₂ = x_k(s_k) and factorize the
result for the independent variables. Hence, we obtain

$$\langle M_{\mathbf{t}}^{(n)} M_{\mathbf{s}}^{(n)} \rangle = \prod_{k=1}^{n} \langle x_k(t_k)\, x_k(s_k) \rangle; \quad \mathbf{t} = (t_1, \ldots, t_n); \; \mathbf{s} = (s_1, \ldots, s_n).$$

The evaluation of the last line yields with (1.62)

$$\langle M_{\mathbf{t}}^{(n)} M_{\mathbf{s}}^{(n)} \rangle = \prod_{k=1}^{n} t_k \wedge s_k. \tag{1.70}$$

The relations (1.69) and (1.70) show now that the process (1.68) is an
n-WP.
In analogy to deterministic variables we can now construct with
stochastic variables curves, surfaces and hyper surfaces. Thus, a curve
in a 2-dimensional WS and a surface in a 3-dimensional WS are given by

$$C_t = M^{(2)}_{t,\, f(t)}; \qquad S_{t_1, t_2} = M^{(3)}_{t_1,\, t_2,\, g(t_1, t_2)}.$$

We give here only two interesting examples.
Example 1
Here we put

$$K_t = M^{(2)}_{a,b}; \quad a = \exp(t), \; b = \exp(-t); \quad -\infty < t < \infty.$$

This defines a stochastic hyperbola with zero mean and with the
autocorrelation

$$\langle K_t K_s \rangle = \langle x_1(e^t)\, x_1(e^s) \rangle \langle x_2(e^{-t})\, x_2(e^{-s}) \rangle = (e^t \wedge e^s)(e^{-t} \wedge e^{-s}) = \exp(-|t - s|). \tag{1.71}$$

The property (1.71) shows this process is not only a WS but also a
stationary Ornstein-Uhlenbeck process (see Section 1.6.1). ♣
Example 2
Here we define the process
K_t = \exp[-(1 + c)t]\, M^{(2)}_{a,b}; \qquad a = \exp(2t), \quad b = \exp(2ct); \qquad c > 0. \qquad (1.72)
Again we see that the stochastic variable defined in (1.72) has zero mean
and the calculation of its autocorrelation yields

\langle K_t K_s \rangle = \exp[-(1 + c)(t + s)]\, \langle x_1(e^{2t})\, x_1(e^{2s}) \rangle \langle x_2(e^{2ct})\, x_2(e^{2cs}) \rangle
= \exp[-(1 + c)(t + s)]\, (e^{2t} \wedge e^{2s})(e^{2ct} \wedge e^{2cs})
= \exp[-(1 + c)|t - s|]. \qquad (1.73)
The latter equation means that the process (1.72) is again an
Ornstein-Uhlenbeck process. Note also that because of c > 0 it is
not possible to use (1.73) to reproduce the result of the previous
example.
Just as in the case of one parameter, there exist for WS's also
scaling and translation. Thus, the stochastic variables
H_{u,v} = \frac{1}{ab}\, M^{(2)}_{a^2 u,\, b^2 v}; \qquad
L_{u,v} = M^{(2)}_{u+a,\, v+b} - M^{(2)}_{u+a,\, b} - M^{(2)}_{a,\, v+b} + M^{(2)}_{a,\, b}, \qquad (1.74)
are also WS's. The proof of (1.74) is left for EX 1.8.
We give in Figures 1(a) and 1(b) two graphs of the Brownian
motion.
At the end of this section we wish to mention that the WP is a
subclass of a Levy process L(t). The latter complies with the first
two conditions of the Definition 1.8. However, it does not possess
normally distributed increments. A particular feature of a normally
distributed process x is the vanishing of the skewness \langle x^3 \rangle / \langle x^2 \rangle^{3/2}.
However, many statistical phenomena (like hydrodynamic turbulence, the
market values of stocks, etc.) show remarkable values of the skewness.
This means that a GD (with only two parameters) is not flexible
enough to describe such phenomena and it must be replaced by
a PD that contains a sufficient number of parameters. An appropriate
choice is the normal inverted Gaussian distribution (NIGD) (see
Section 4.4). The NIGD distribution does not satisfy the Kolmogorov
criterion. This means that the Levy process L(t) is equipped with SF
that jump up and down at arbitrary instants t. To get more
information about the Levy process we refer the reader to the work
of Ikeda & Watanabe [1.6] and of Rydberg
[1.7]. In Section 4.4 we will give a short description of the application
of the NIGD in economics theories.
1.8. Stochastic Integrals
We need stochastic integrals (SI) when we attempt to solve a stochas-
tic differential equation (SDE). Hence we introduce a simple first
order ordinary SDE
\frac{dX}{dt} = a(X(t), t) + b(X(t), t)\, \xi_t; \qquad X, a, b, t \in \mathbb{R}. \qquad (1.75)
We use in (1.75) the deterministic functions a and b. The symbol \xi_t
indicates the only stochastic term in this equation. We assume

\langle \xi_t \rangle = 0; \qquad \langle \xi_t \xi_s \rangle = \delta(t - s). \qquad (1.76)

The spectrum of the autocorrelation in (1.76) is constant (see
Section 2.2) and in view of this \xi_t is referred to as white noise;
any term proportional to \xi_t is called a noisy term. These assump-
tions are based on a great variety of physical phenomena that are
met in many experimental situations.
Now we replace (1.75) by a discretization and we put

\Delta t_k = t_{k+1} - t_k > 0; \quad X_k = X(t_k); \quad \Delta X_k = X_{k+1} - X_k; \quad k = 0, 1, \ldots

The substitution into (1.75) yields

\Delta X_k = a(X_k, t_k)\, \Delta t_k + b(X_k, t_k)\, \Delta B_k; \qquad \Delta B_k = B_{k+1} - B_k; \quad k = 0, 1, \ldots \qquad (1.77)

where we used \Delta B_k = \xi_k\, \Delta t_k.
A precise derivation of (1.77) is given in Section 2.2. Thus we can
write (1.75) in terms of
X_n = X_0 + \sum_{s=0}^{n-1} \left[ a(X_s, t_s)\, \Delta t_s + b(X_s, t_s)\, \Delta B_s \right]; \qquad X_0 = X(t_0). \qquad (1.78)
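Before passing to the limit, note that the recursion (1.78) can be iterated as it stands; this is the basis of the numerical schemes treated in Chapter 5 (usually called the Euler-Maruyama method, a name the text has not yet introduced). A minimal sketch, with hypothetical coefficient functions chosen only for illustration:

```python
import numpy as np

def euler_maruyama(a, b, x0, t0, T, n_steps, rng):
    """Iterate (1.78): X_{k+1} = X_k + a(X_k, t_k) dt + b(X_k, t_k) dB_k."""
    t, x = t0, x0
    dt = (T - t0) / n_steps
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))      # Brownian increment, N[0, dt]
        x = x + a(x, t) * dt + b(x, t) * dB
        t += dt
    return x

# Hypothetical coefficients a(X, t) = -X, b(X, t) = 0.3 for illustration only.
rng = np.random.default_rng(2)
samples = [euler_maruyama(lambda x, t: -x, lambda x, t: 0.3, 1.0, 0.0, 1.0, 200, rng)
           for _ in range(5000)]
print(np.mean(samples))   # ~ exp(-1) = 0.368: the noise has zero mean, so <X> decays
```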
What happens in the limit Atk —> 0? If there is a "reasonable"
limit of the last term in (1.78) we obtain as solution of the SDE (1.75)
X(t) = X(0) + \int_0^t a(X(s), s)\, ds + \text{``}\int_0^t b(X(s), s)\, dB_s\text{''}. \qquad (1.79)
The first integral in (1.79) is a conventional integral of Riemann's
type and we put the stochastic (noisy) integral into inverted commas.
The irregularity of the noise does not allow us to calculate the stochastic
integral in terms of a Riemann integral. This is caused by the fact
that the paths of the WP are nowhere differentiable. Thus we find
that a SI depends crucially on the decomposition of the integration
interval.
We assumed in (1.75) to (1.79) that b(X,t) is a deterministic
function. We generalize the problem of the calculation of a SI and
we consider a stochastic function
I = \int_0^t f(\omega, s)\, dB_s. \qquad (1.80)
We recall that Riemann integrals of the type (g(s) is a differentiable
function)

\int_0^T f(s)\, dg(s) = \int_0^T f(s)\, g'(s)\, ds,

are discretized in the following manner

\int_0^T f(s)\, dg(s) = \lim_{n \to \infty} \sum_{k=0}^{n-1} f(s_k)\, [g(s_{k+1}) - g(s_k)].
Thus, it is plausible to introduce a discretization of (1.80) that
takes the form
I = \sum_k f(s_k, \omega)(B_{k+1} - B_k). \qquad (1.81)
In Equation (1.81) we used s^ as time-argument for the integrand f.
This is the value of s that corresponds to the left endpoint of the
discretization interval and we say that this decomposition does not
look into the future. We call this type of integral an Ito integral
and write
I_{\mathrm{I}} = \int_0^t f(s, \omega)\, dB_s. \qquad (1.82)
Another possible choice is to use the midpoint of the interval,
and with this we obtain the Stratonovich integral

I_{\mathrm{S}} = \int_0^t f(s, \omega) \circ dB_s = \sum_k f(\bar{s}_k, \omega)(B_{k+1} - B_k); \qquad \bar{s}_k = \frac{1}{2}(t_{k+1} + t_k). \qquad (1.83)
Note that the symbol "o" between integrand and the stochastic dif-
ferential is used to indicate Stratonovich integrals.
There are, of course, an uncountable infinity of other decompositions
of the integration interval that lead to different definitions of
a SI. It is, however, convenient to take advantage only of the Ito and
the Stratonovich integral. We will discuss their properties and find
out which type of integrals seems to be more appropriate for the use
in the analysis of stochastic differential equations.
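The dependence on the decomposition can be seen numerically. The sketch below (our addition; the step number and seed are arbitrary) evaluates the left-endpoint sum (1.81)/(1.82) and the midpoint-type sum (1.83) for f(s, \omega) = B_s on one path; the two prescriptions approach the different limits derived below as (1.89) and (1.96).

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, T = 100000, 1.0
dt = T / n_steps
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
dB = np.diff(B)

ito   = np.sum(B[:-1] * dB)                    # left endpoint, as in (1.81)/(1.82)
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)    # midpoint-type rule, as in (1.83)

print(ito,   0.5 * (B[-1]**2 - T))   # Ito limit:          (B_t^2 - t)/2
print(strat, 0.5 * B[-1]**2)         # Stratonovich limit:  B_t^2/2
```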
Properties of the Ito integral
(a) We have for deterministic constants a < b < c and \alpha, \beta \in \mathbb{R}

\int_a^c [\alpha f_1(s, \omega) + \beta f_2(s, \omega)]\, dB_s = \alpha I_1 + \beta I_2; \qquad I_k = \int_a^c f_k(s, \omega)\, dB_s. \qquad (1.84)
Note that (1.84) remains also valid for Stratonovich integrals. The
proof of (1.84) is trivial.
In the following we give non-trivial properties that apply, how-
ever, exclusively to Ito integrals. Now we need a definition:
Definition 1.9. (non-anticipative or adapted functions)
The function f(t, Bs) is said to be non-anticipative (or adapted,
see also Theorem 1.2) if it depends only on a stochastic variable
of the past: B_s appears only for arguments s < t. Examples of
non-anticipative functions are

f(s, \omega) = \int_0^s g(u)\, dB_u; \qquad f(s, \omega) = B_s. \quad \bullet
Now we list further properties of the Ito integrals that include non-
anticipative functions f(s, B_s) and g(s, B_s).
(b)

M_1 \equiv \left\langle \int_0^t f(s, B_s)\, dB_s \right\rangle = 0. \qquad (1.85)
Proof.
We use (1.81) and obtain

M_1 = \left\langle \sum_k f(s_k, B_k)(B_{k+1} - B_k) \right\rangle.

But we know that B_k is independent of B_{k+1} - B_k. The function
f(s_k, B_k) is thus also independent of B_{k+1} - B_k. Hence we obtain

M_1 = \sum_k \langle f(s_k, B_k) \rangle \langle B_{k+1} - B_k \rangle = 0.

This concludes the proof of (1.85).
(c) Here we study the average of a product of integrals and we show
that
M_2 = \left\langle \int_0^t f(s, B_s)\, dB_s \int_0^t g(u, B_u)\, dB_u \right\rangle = \int_0^t \langle f(s, B_s)\, g(s, B_s) \rangle\, ds. \qquad (1.86)
Proof.
M_2 = \sum_{m,n} \langle f(s_m, B_m)(B_{m+1} - B_m)\, g(s_n, B_n)(B_{n+1} - B_n) \rangle.

We have to distinguish three subclasses: (i) n > m, (ii) n < m and
(iii) n = m.
Taking into account the independence of the increments of WP's
we see that only case (iii) contributes non-trivially to M_2. This yields

M_2 = \sum_n \langle f(s_n, B_n)\, g(s_n, B_n)(B_{n+1} - B_n)^2 \rangle.

But we know that f(s_n, B_n) g(s_n, B_n) is again a function that is
independent of (B_{n+1} - B_n)^2. We use (1.62) and obtain

\langle (B_{n+1} - B_n)^2 \rangle = \langle B_{n+1}^2 - 2 B_{n+1} B_n + B_n^2 \rangle = t_{n+1} - t_n = \Delta t_n,
and thus we get

M_2 = \sum_n \langle f(s_n, B_n)\, g(s_n, B_n) \rangle \langle (B_{n+1} - B_n)^2 \rangle = \sum_n \langle f(s_n, B_n)\, g(s_n, B_n) \rangle\, \Delta t_n.

The last relation tends for \Delta t_n \to 0 to (1.86).
(d) A generalization of the property (c) is given by

M_3 = \left\langle \int_0^a f(s, B_s)\, dB_s \int_0^b g(u, B_u)\, dB_u \right\rangle = \int_0^{a \wedge b} \langle f(s, B_s)\, g(s, B_s) \rangle\, ds. \qquad (1.87)

To prove (1.87) we must distinguish two subclasses: (i) b = a + c > a
and (ii) a = b + c > b; c > 0. We consider only case (i); the proof for
case (ii) is done by analogy. We derive from (1.86) and (1.87)
M_3 = M_2 + \left\langle \int_0^a f(s, B_s)\, dB_s \int_a^b g(u, B_u)\, dB_u \right\rangle
= M_2 + \sum_n \sum_{m > n} \langle f(s_n, B_n)\, g(s_m, B_m)\, \Delta B_n\, \Delta B_m \rangle.

But we see that f(s_n, B_n) and \Delta B_n are independent of g(s_m, B_m) and
\Delta B_m. Hence, we obtain

M_3 = M_2 + \sum_n \langle f(s_n, B_n)\, \Delta B_n \rangle \sum_{m > n} \langle g(s_m, B_m)\, \Delta B_m \rangle = M_2,
where we use (1.85). This concludes the proof of (1.87) for case (i).
Now we calculate an example
I(t) = \int_0^t B_s\, dB_s. \qquad (1.88a)
First of all we obtain with the use of (1.85) and (1.86) the moments
of the stochastic variable (1.88a)
\langle I(t) \rangle = 0; \qquad \langle I(t)\, I(t + \tau) \rangle = \int_0^{\gamma} \langle B_s^2 \rangle\, ds = \int_0^{\gamma} s\, ds = \frac{\gamma^2}{2}; \qquad \gamma = t \wedge (t + \tau). \qquad (1.88b)
We calculate the integral with an Ito decomposition

I = \sum_k B_k (B_{k+1} - B_k).

But we have

\Delta B_k^2 = B_{k+1}^2 - B_k^2 = (B_{k+1} - B_k)^2 + 2 B_k (B_{k+1} - B_k) = (\Delta B_k)^2 + 2 B_k (B_{k+1} - B_k).

Hence we obtain

I(t) = \frac{1}{2} \sum_k \left[ \Delta(B_k^2) - (\Delta B_k)^2 \right].
We calculate now the two sums in the last line separately. Thus we
obtain in the first place

I_1(t) = \sum_k \Delta(B_k^2) = (B_1^2 - B_0^2) + (B_2^2 - B_1^2) + \cdots + (B_N^2 - B_{N-1}^2) = B_N^2 \to B_t^2,

where we used B_0 = 0. The second integral and its average are
given by
I_2(t) = \sum_k (\Delta B_k)^2 = \sum_k (B_{k+1}^2 - 2 B_{k+1} B_k + B_k^2); \qquad \langle I_2(t) \rangle = \sum_k \Delta t_k = t.
The relation \langle I_2(t) \rangle = t gives not only the average but also the
integral I_2(t) itself. However, the direct calculation of I_2(t) is
impractical and we refer the reader to the book of Øksendahl [1.8],
where the corresponding algebra is performed. We use instead an
indirect proof and show that the quantity z (the standard deviation
of I_2(t)) is a deterministic function with the value zero. Thus, we put
z = I_2(t) - t. The mean value is clearly \langle z \rangle = 0 and we obtain

\langle z^2 \rangle = \langle I_2^2(t) - 2t\, I_2(t) + t^2 \rangle = \langle I_2^2(t) \rangle - t^2.
But we have

\langle I_2^2(t) \rangle = \sum_k \sum_m \langle (\Delta B_k)^2 (\Delta B_m)^2 \rangle. \qquad (1.88c)

The independence of the increments of the WP's yields

\langle (\Delta B_k)^2 (\Delta B_m)^2 \rangle = \langle (\Delta B_k)^2 \rangle \langle (\Delta B_m)^2 \rangle + \delta_{km} \left[ \langle (\Delta B_k)^4 \rangle - \langle (\Delta B_k)^2 \rangle^2 \right],

hence we obtain with the use of the results of EX 1.6 (\langle (\Delta B_k)^4 \rangle = 3(\Delta t_k)^2)

\langle I_2^2(t) \rangle = \left( \sum_k \langle (\Delta B_k)^2 \rangle \right)^2 + 2 \sum_k (t_{k+1} - t_k)^2 = t^2 + 2 \sum_k (t_{k+1} - t_k)^2.

However, we have

\sum_k (t_{k+1} - t_k)^2 = \sum_k (\Delta t)^2 = t\, \Delta t \to 0 \quad \text{for } \Delta t \to 0,

and this indicates that \langle z^2 \rangle = 0.
This procedure can be pursued to higher orders and we obtain
the result that all moments of z are zero and thus we obtain I_2(t) = t.
Thus, we obtain finally

I(t) = \int_0^t B_s\, dB_s = \frac{1}{2}(B_t^2 - t). \qquad (1.89)
There is a generalization of the previous results with respect to
higher order moments. We consider here moments of a stochastic
integral with a deterministic integrand
J_k(t) = \int_0^t f_k(s)\, dB_s; \qquad k \in \mathbb{N}. \qquad (1.90)

These integrals are a special case of the ones in (1.82) and we know
from (1.85) that the mean value of (1.90) is zero. The covariance of
(1.90) is given by (see (1.86))

\langle J_k(t)\, J_m(t) \rangle = \int_0^t f_k(s)\, f_m(s)\, ds.
But we can obtain formally the same result if we put
\langle dB_s\, dB_u \rangle = \delta(s - u)\, ds\, du. \qquad (1.91)
A formal justification of (1.91) is given in Chapter 2 in connection
with formula (2.41). Here we show that (1.91) leads to a result that
is identical to the consequences of (1.86):

\langle J_k(t)\, J_m(t) \rangle = \int_0^t f_k(s) \int_0^t f_m(u) \langle dB_s\, dB_u \rangle = \int_0^t f_k(s) \int_0^t f_m(u)\, \delta(s - u)\, ds\, du = \int_0^t f_k(s)\, f_m(s)\, ds.
We also know that B_t and hence dB_t are Gaussian and Markovian.
This means that all odd moments of the integral (1.90) must vanish

\langle J_k(t)\, J_m(t)\, J_r(t) \rangle = \cdots = 0. \qquad (1.92a)
To calculate higher order moments we use the properties of
the multivariate GD and we put for the 4th order moment of the
differential
\langle dB_p\, dB_q\, dB_u\, dB_v \rangle = \langle dB_p\, dB_q \rangle \langle dB_u\, dB_v \rangle + \langle dB_p\, dB_u \rangle \langle dB_q\, dB_v \rangle + \langle dB_p\, dB_v \rangle \langle dB_q\, dB_u \rangle
= [\delta(p - q)\delta(u - v) + \delta(p - u)\delta(q - v) + \delta(p - v)\delta(q - u)]\, dp\, dq\, du\, dv.

Note that the 4th order moment of the differential of WP's has a
form similar to an isotropic 4th order tensor. Hence, we obtain

\langle J_j(t)\, J_m(t)\, J_r(t)\, J_s(t) \rangle = \int_0^t f_j(\alpha) f_m(\alpha)\, d\alpha \int_0^t f_r(\beta) f_s(\beta)\, d\beta
+ \int_0^t f_j(\alpha) f_r(\alpha)\, d\alpha \int_0^t f_m(\beta) f_s(\beta)\, d\beta
+ \int_0^t f_j(\alpha) f_s(\alpha)\, d\alpha \int_0^t f_m(\beta) f_r(\beta)\, d\beta.
This leads in a special case to

\langle J^4(t) \rangle = 3 \langle J^2(t) \rangle^2. \qquad (1.92b)

Again, this procedure can be carried out also for higher order
moments and we obtain

\langle J^{2\mu+1}(t) \rangle = 0; \qquad \langle J^{2\mu}(t) \rangle = 1 \cdot 3 \cdots (2\mu - 1)\, \langle J^2(t) \rangle^{\mu}; \qquad \mu \in \mathbb{N}. \qquad (1.92c)
Equation (1.92) signifies that the stochastic Ito-integral (1.90) with
the deterministic integrand f_k(s) is N[0, \int_0^t f_k^2(s)\, ds] distributed.
However, one can also show that the Ito-integral with the non-anticipative
integrand

K(t) = \int_0^t g(s, B_s)\, dB_s \qquad (1.93a)

is, in analogy to the stochastic integral with the deterministic
integrand,

N[0, \tau(t)]; \qquad \tau(t) = \int_0^t \langle g^2(u, B_u) \rangle\, du, \qquad (1.93b)

distributed (see Arnold [1.2]). The variable \tau(t) is referred to as
intrinsic time of the stochastic integral (1.93a). We use this variable
to show with Kolmogorov's Theorem (1.20) that (1.93a) possesses
continuous SF. The Ito integral

x_k = \int_0^{t_k} g(u, B_u)\, dB_u, \qquad t_1 = t > t_2 = s,

with

\langle x_k \rangle = 0; \quad \langle x_k^2 \rangle = \tau(t_k) = \tau_k; \quad k = 1, 2; \qquad \langle x_1 x_2 \rangle = \tau_2,

has according to (1.35a) the joint PD

p_2(x_1, x_2) = \left[ (2\pi)^2 \tau_2 (\tau_1 - \tau_2) \right]^{-1/2} \exp\left[ -\frac{x_2^2}{2\tau_2} - \frac{(x_1 - x_2)^2}{2(\tau_1 - \tau_2)} \right].

Yet, the latter line is identical with the bivariate PD of the Wiener
process (1.60) if we replace in the latter equation the t_k by \tau_k. Hence,
we obtain from Kolmogorov's criterion \langle [x_1(\tau_1) - x_2(\tau_2)]^2 \rangle = |\tau_1 - \tau_2|
and this guarantees the continuity of the SF of the Ito-integral (1.93a).
A further important feature of Ito integrals is their
martingale property. We verify this now for the case of the integral
(1.89). To achieve this, we generalize the martingale formula (1.64)
for the case of arbitrary functions of the Brownian motions

\langle f(y_2, s) \mid f(y_1, t) \rangle = \int f(y_2, s)\, p_{1|1}(y_2, s \mid y_1, t)\, dy_2 = f(y_1, t); \qquad y_k = B_{t_k}; \quad \forall s > t, \qquad (1.94)
where p_{1|1} is given by (1.53). To verify now the martingale property
of the integral (1.89) we specify (1.94) to

\langle I(y_2, s) \mid I(y_1, t) \rangle = \frac{1}{\sqrt{\pi\beta}} \int \frac{1}{2}(y_2^2 - s) \exp[-(y_2 - y_1)^2/\beta]\, dy_2; \qquad \beta = 2(s - t).

The application of the standard substitution (see EX 1.1) yields

\langle I(y_2, s) \mid I(y_1, t) \rangle = \frac{1}{2\sqrt{\pi}} \int (y_1^2 - s + 2 y_1 \sqrt{\beta}\, z + \beta z^2) \exp(-z^2)\, dz
= \frac{1}{2}(y_1^2 - s + \beta/2) = \frac{1}{2}(y_1^2 - t) = I(y_1, t). \qquad (1.95)
This concludes the proof that the Ito integral (1.89) is a martingale.
The general proof that all Ito integrals are martingales is given by
Øksendahl [1.8]. However, we will encounter the martingale property
for a particular class of Ito integrals in the next section.
To conclude this example we add here also the Stratonovich version
of the integral (1.89). This yields (the subscript S indicates a
Stratonovich integral)

I_{\mathrm{S}}(t) = \int_0^t B_s \circ dB_s = \frac{1}{2} \sum_k (B_{k+1} + B_k)(B_{k+1} - B_k) = \frac{1}{2} \sum_k (B_{k+1}^2 - B_k^2) = \frac{1}{2} B_t^2. \qquad (1.96)

The result (1.96) is the "classical" value of the integral whereas
the Ito integral gives a non-classical result. Note also the significant
differences between the Ito and Stratonovich integrals. Even
the moments do not coincide, since we infer from (1.96)

\langle I_{\mathrm{S}}(t) \rangle = \frac{t}{2} \qquad \text{and} \qquad \langle I_{\mathrm{S}}(t)\, I_{\mathrm{S}}(u) \rangle = \frac{1}{4}\left[ t u + 2 (t \wedge u)^2 \right].
It is now easy to show that the Stratonovich integral I_{\mathrm{S}} is not a
martingale. We obtain this result if we drop the term s in the second
line of (1.95)

\langle I_{\mathrm{S}}(y_2, s) \mid I_{\mathrm{S}}(y_1, t) \rangle = \frac{1}{2}(y_1^2 + \beta/2) \neq I_{\mathrm{S}}(y_1, t).
Hence, we may summarize the properties of the Ito and
Stratonovich integrals. The Stratonovich concept uses all the transformation
rules of classical integration theory and thus leads in many
applications to an easy way of performing the integration. Deviating
from the Ito integral, the Stratonovich integral does not, however,
possess the effective rules to calculate averages such as (1.85) to
(1.87), and it does not have the martingale property. In the following
we will consider both integration concepts and their application in
the solution of SDE's.

We have calculated so far only one stochastic integral and we
continue in the next section with helpful rules to perform the stochastic
integration.
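Before turning to the Ito formula, a short Monte Carlo check of the averaging rules above (our addition; sample sizes are arbitrary): for the deterministic integrand f(s) = s the integral (1.90) should have zero mean by (1.85), variance \int_0^t s^2\, ds = t^3/3 by (1.86), and fourth moment 3\langle J^2 \rangle^2 by (1.92b).

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, T = 50000, 400, 1.0
dt = T / n_steps
s = np.arange(n_steps) * dt                  # left endpoints of the subintervals

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
J = dB @ s                                   # J(T) = sum_k f(s_k) dB_k with f(s) = s

var_exact = T**3 / 3                         # int_0^T s^2 ds
print(J.mean(), J.var(), var_exact)          # mean ~ 0, variance ~ 1/3
print(np.mean(J**4), 3 * var_exact**2)       # fourth moment ~ 3 var^2, Eq. (1.92b)
```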
1.9. The Ito Formula
We begin with the differential of a function \Phi(B_t, t). Its Ito differential
takes the form

d\Phi(B_t, t) = \Phi_t\, dt + \Phi_{B_t}\, dB_t + \frac{1}{2} \Phi_{B_t B_t}\, (dB_t)^2. \qquad (1.97.1)

Formula (1.97.1) contains the non-classical term that is proportional
to the second derivative WRT B_t. We must supplement (1.97.1) by
a further non-classical relation

(dB_t)^2 = dt. \qquad (1.97.2)

Thus, we infer from (1.97.1,2) the final form of this differential

d\Phi(B_t, t) = \left( \Phi_t + \frac{1}{2} \Phi_{B_t B_t} \right) dt + \Phi_{B_t}\, dB_t. \qquad (1.98)
Next we derive the Ito differential of the function Y = g(x, t)
where x is the solution of the SDE
dx = a(x,t)dt + b(x,t)dBt. (1.99.1)
In analogy to (1.97.1) we include a non-classical term and put

dY = g_t\, dt + g_x\, dx + \frac{1}{2} g_{xx}\, (dx)^2.

We substitute dx from (1.99.1) into the last line and apply the non-
classical formula

(dx)^2 = (a\, dt + b\, dB_t)^2 = b^2\, dt; \qquad (dt)^2 = dt\, dB_t = 0; \quad (dB_t)^2 = dt, \qquad (1.99.2)
and this yields

dY = \left( g_t + a\, g_x + \frac{b^2}{2}\, g_{xx} \right) dt + b\, g_x\, dB_t. \qquad (1.99.3)

The latter equation is called the Ito formula for the total differential
of the function Y = g(x, t), given the SDE (1.99.1). (1.99.3) contains
the non-classical term b^2 g_{xx}/2 and it differs thus from the classical
(or Stratonovich) total differential

dY_c = (g_t + a\, g_x)\, dt + b\, g_x\, dB_t. \qquad (1.100)

Note that both the Ito and the Stratonovich differentials coincide if
g(x, t) is a first order polynomial of the variable x.
We postpone a sketch of the proof of (1.99) for a moment and
give an example of the application of this formula. We use (1.99.1)
in the form
dx = dB_t, \quad \text{or} \quad x = B_t \quad \text{with} \quad a = 0, \; b = 1, \qquad (1.101a)

and we consider the function

Y = g(x) = x^2/2; \qquad g_t = 0; \quad g_x = x; \quad g_{xx} = 1. \qquad (1.101b)

Thus we obtain from (1.99.3) and (1.101b)

dY = d(x^2/2) = dt/2 + B_t\, dB_t,

and the integration of this total differential yields

\int_0^t d(B_s^2/2) = B_t^2/2 = t/2 + \int_0^t B_s\, dB_s,

and the last line reproduces (1.89).
We give now a sketch of the proof of the Ito formula (1.99) and
we follow in part considerations of Schuss [1.9]. It is instructive to
perform this in detail and we do it in four consecutive steps labeled
with S_1 to S_4.
S_1
We begin with the consideration of the stochastic function x(t)
given by
x(v) - x(u) = \int_u^v a(x(s), s)\, ds + \int_u^v b(x(s), s)\, dB_s, \qquad (1.102)

where a and b are two differentiable functions. Thus, we obtain the
differential of x(t) if we put in (1.102) v = u + dt and let dt \to 0

dx(u) = a(x(u), u)\, du + b(x(u), u)\, dB_u. \qquad (1.103)
Before we pass to the next step we consider two important examples
Example 1. (integration by parts)
Here we consider a deterministic function f and a stochastic func-
tion Y and we put
Y(B_t, t) = g(B_t, t) = f(t)\, B_t. \qquad (1.104a)

The total differential is in both (Ito and Stratonovich) cases (see
(1.98) with \Phi_{B_t B_t} = 0) given by the exact formula

dY = d[f(t)\, B_t] = f(t)\, dB_t + f'(t)\, B_t\, dt. \qquad (1.104b)

The integration of this differential yields

f(t)\, B_t = \int_0^t f'(s)\, B_s\, ds + \int_0^t f(s)\, dB_s. \qquad (1.105a)

Subtracting the last line for t = u from the same relation for t = v
yields

f(v)\, B_v - f(u)\, B_u = \int_u^v f'(s)\, B_s\, ds + \int_u^v f(s)\, dB_s. \qquad (1.105b)
Example 2. (Martingale property)
We consider a particular class of Ito integrals

I(t) = \int_0^t f(u)\, dB_u, \qquad (1.106)
and show that I(t) is a martingale. First we realize that the integral
I(t) is a particular case of the class (1.93a) with g(u, B_u) = f(u).
Hence we know that the variable (1.106) is normally distributed and
possesses the intrinsic time given by (1.93b). Its transition probability
p_{1|1} is defined by (1.53) with t_j \to \tau(t_j); y_j = I(t_j); j = 1, 2. This
concludes the proof that the integral (1.106) obeys a martingale property
like (1.27) or (1.64).
S_2
Here we consider the product of two stochastic functions subjected
to two SDE with constant coefficients
dx_k(t) = a_k\, dt + b_k\, dB_t; \qquad a_k, b_k = \text{const}; \quad k = 1, 2, \qquad (1.107)

with the solutions

x_k(t) = a_k t + b_k B_t; \qquad x_k(0) = 0. \qquad (1.108)

The task to evaluate d(x_1 x_2) is outlined in EX 1.9 and we obtain
with the aid of (1.89)

d(x_1 x_2) = x_2\, dx_1 + x_1\, dx_2 + b_1 b_2\, dt. \qquad (1.109)

The term proportional to b_1 b_2 in (1.109) is non-classical and it is a
mere consequence of the non-classical term in (1.89).
The relation (1.109) was derived for constant coefficients in
(1.107). One may derive (1.109) under the assumption of step-
functions for the functions a and b in (1.107), and with that one can
approximate differentiable functions (see Schuss [1.9]).
We consider now two examples
Example 1

We put x_1 = B_t; x_2 = B_t^2. Thus, we obtain with an application
of (1.101b) and (1.109)

dB_t^3 = B_t\, dB_t^2 + B_t^2\, dB_t + 2 B_t\, dt = 3(B_t\, dt + B_t^2\, dB_t).

The use of the induction rule yields the generalization

dB_t^k = k B_t^{k-1}\, dB_t + \frac{k(k-1)}{2} B_t^{k-2}\, dt. \qquad (1.110)
Example 2
Here we consider polynomials of the Brownian motion

P_n(B_t) = c_0 + c_1 B_t + \cdots + c_n B_t^n; \qquad c_k = \text{const}. \qquad (1.111)

The application of (1.110) to (1.111) leads to

dP_n(B_t) = P_n'(B_t)\, dB_t + \frac{1}{2} P_n''(B_t)\, dt; \qquad {}' = d/dB_t. \qquad (1.112)

The relation (1.112) is also valid for all functions that can be
expanded in form of polynomials.
S_3
Here we consider the product

\Phi(B_t, t) = \varphi(B_t)\, g(t), \qquad (1.113)

where g is a deterministic function. The use of (1.109) yields

d\Phi(B_t, t) = g(t)\, d\varphi(B_t) + \varphi(B_t)\, g'(t)\, dt
= \left[ \varphi'\, dB_t + \frac{1}{2} \varphi''\, dt \right] g + \varphi\, g'(t)\, dt
= \left[ \varphi g' + \frac{1}{2} \varphi'' g \right] dt + g \varphi'\, dB_t. \qquad (1.114)

But we also have

\Phi_t = \varphi g'; \qquad \left( \frac{\partial}{\partial t} + \frac{1}{2} \frac{\partial^2}{\partial B_t^2} \right) \Phi = \varphi g' + \frac{1}{2} \varphi'' g. \qquad (1.115)

Thus, we obtain

d\Phi = \left( \frac{\partial}{\partial t} + \frac{1}{2} \frac{\partial^2}{\partial B_t^2} \right) \Phi\, dt + \frac{\partial \Phi}{\partial B_t}\, dB_t. \qquad (1.116)

Equation (1.116) applies, in the first place, only to the function
(1.113). However, the use of the expansion

\Phi(B_t, t) = \sum_{k=1}^{\infty} \varphi_k(B_t)\, g_k(t), \qquad (1.117)

shows that (1.116) is valid for arbitrary functions and this proves
(1.98).
S_4
In this last step we do not apply the separation (1.113) or (1.117)
but we use a differentiable function of the variables (x, t), where
x satisfies a SDE of the type (1.107)

\Phi(B_t, t) = g(x, t) = g(at + b B_t, t); \qquad dx = a\, dt + b\, dB_t; \quad a, b = \text{const}, \qquad (1.118)

\Phi_t = a g_x + g_t; \qquad \Phi_{B_t} = b g_x; \qquad \Phi_{B_t B_t} = b^2 g_{xx}.

Thus we obtain with (1.116)

dg = \left( g_t + a g_x + \frac{b^2}{2} g_{xx} \right) dt + b g_x\, dB_t. \qquad (1.119)

The relation (1.119) represents the Ito formula (1.99.3) (for constant
coefficients a and b). As before, we can generalize the proof and
(1.119) is valid for arbitrary coefficients a(x, t) and b(x, t).
We generalize now the Ito formula for the case of a multivariate
process. First we consider K functions of the type

y_k = y_k(B_t^1, \ldots, B_t^M, t); \qquad k = 1, 2, \ldots, K,

where B_t^1, \ldots, B_t^M are M independent Brownian motions. We take
advantage of the summation convention and obtain the generalization
of (1.97.1)

dy_k(B_t^1, \ldots, B_t^M, t) = \frac{\partial y_k}{\partial t}\, dt + \frac{\partial y_k}{\partial B_t^r}\, dB_t^r + \frac{1}{2} \frac{\partial^2 y_k}{\partial B_t^r \partial B_t^s}\, dB_t^r\, dB_t^s; \qquad (1.120)

k = 1, \ldots, K; \quad r, s = 1, \ldots, M.

We generalize (1.97.2) and put

dB_t^r\, dB_t^s = \delta_{rs}\, dt, \qquad (1.121)

and we obtain (see (1.98))

dy_k(B_t^1, \ldots, B_t^M, t) = \left( \frac{\partial y_k}{\partial t} + \frac{1}{2} \frac{\partial^2 y_k}{\partial B_t^r \partial B_t^r} \right) dt + \frac{\partial y_k}{\partial B_t^r}\, dB_t^r. \qquad (1.122)

Now we consider a set of n SDEs

dX_k = a_k(X_1, \ldots, X_n, t)\, dt + b_{kr}(X_1, \ldots, X_n, t)\, dB_t^r; \qquad k = 1, 2, \ldots, n; \quad r = 1, 2, \ldots, R. \qquad (1.123)
We wish to calculate the differential of the function

Z_k = Z_k(X_1, \ldots, X_n, t); \qquad k = 1, \ldots, K. \qquad (1.124)

The differential reads

dZ_k = \frac{\partial Z_k}{\partial t}\, dt + \frac{\partial Z_k}{\partial X_m}\, dX_m + \frac{1}{2} \frac{\partial^2 Z_k}{\partial X_m \partial X_u}\, dX_m\, dX_u
= \frac{\partial Z_k}{\partial t}\, dt + \frac{\partial Z_k}{\partial X_m}\, (a_m\, dt + b_{mr}\, dB_t^r)
+ \frac{1}{2} \frac{\partial^2 Z_k}{\partial X_m \partial X_u}\, (a_m\, dt + b_{mr}\, dB_t^r)(a_u\, dt + b_{us}\, dB_t^s); \qquad (1.125)

m, u = 1, 2, \ldots, n; \quad r, s = 1, 2, \ldots, R.

The n-dimensional generalization of the rule (1.99.2) is given by

dB_t^r\, dB_t^u = \delta_{ru}\, dt; \qquad (dt)^2 = dB_t^r\, dt = 0. \qquad (1.126)

Thus, we obtain the differential of the vector valued function (1.124)

dZ_k = \left( \frac{\partial Z_k}{\partial t} + a_m \frac{\partial Z_k}{\partial X_m} + \frac{1}{2}\, b_{mr} b_{ur} \frac{\partial^2 Z_k}{\partial X_m \partial X_u} \right) dt + b_{mr} \frac{\partial Z_k}{\partial X_m}\, dB_t^r.
Now we conclude this section with two examples.
Example 1

A stochastic process is given by

Y_1 = B_t^1 + B_t^2 + B_t^3; \qquad Y_2 = (B_t^2)^2 - B_t^1 B_t^3. \qquad (1.127)

We obtain for the SDE in the form (1.120) corresponding to the last
line

dY_1 = dB_t^1 + dB_t^2 + dB_t^3; \qquad dY_2 = dt + 2 B_t^2\, dB_t^2 - (B_t^3\, dB_t^1 + B_t^1\, dB_t^3).
Example 2

Here we study a single stochastic process under the influence of two
independent Brownian motions

dx = \alpha(x, t)\, dt + \beta(x, t)\, dB_t^1 + \gamma(x, t)\, dB_t^2. \qquad (1.128)

The differential of the function Y = g(x, t) has the form

dY = \left( g_t + \alpha g_x + \frac{\beta^2 + \gamma^2}{2}\, g_{xx} \right) dt + g_x\, (\beta\, dB_t^1 + \gamma\, dB_t^2).

We consider now the special case

g = \ln x; \quad \alpha = r x; \quad \beta = u x; \quad \gamma = \sigma x; \qquad r, u, \sigma = \text{const},

and we obtain

d(\ln x) = [r - (u^2 + \sigma^2)/2]\, dt + (u\, dB_t^1 + \sigma\, dB_t^2). \qquad (1.129)

We will use (1.129) in Section 2.1 of the next chapter.
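A quick Monte Carlo check of (1.129) (our addition; the values of r, u, \sigma and the sample sizes are arbitrary): since the coefficients of d(\ln x) are constant, \ln x integrates exactly and its mean and variance can be compared with [r - (u^2 + \sigma^2)/2]t and (u^2 + \sigma^2)t.

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps, T = 20000, 500, 1.0
dt = T / n_steps
r, u, sigma = 0.8, 0.4, 0.3              # hypothetical constants of (1.129)

dB1 = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))   # first Brownian motion
dB2 = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))   # second, independent one

# (1.129) has constant coefficients, so ln x integrates exactly:
ln_x = (r - (u**2 + sigma**2) / 2) * T + u * dB1.sum(axis=1) + sigma * dB2.sum(axis=1)

print(ln_x.mean(), (r - (u**2 + sigma**2) / 2) * T)   # drift of ln x
print(ln_x.var(), (u**2 + sigma**2) * T)              # variance of ln x
```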
We introduced in this chapter some elements of the probability
theory and added the basic ideas about SDE. For readers who wish
to get more deeply involved in the abstract theory of probability
and in particular with the measure theory we suggest they consider
the following books: Chung & AitSahlia [1.10], Ross [1.11], Malliavin
[1.12], Pitman [1.13] and Shiryaev [1.14].
Appendix: Poisson Processes
In many applications there appears a random set of countable points
driven by some stochastic system. Typical examples are arrival times
of customers (at the desk of an office, at the gate of an airport, etc.),
the birth process of an organism, the number of competing building
projects for a state budget. The randomness in such phenomena is
conveniently described by Poisson distributed variables.
First we verify that the Poisson distribution is the limit of the
Bernoulli distribution. We substitute for the argument p in the
Bernoulli distribution in Section 1.1 the value p = a/n and this
yields

b(k, n, a/n) = \binom{n}{k} \left( \frac{a}{n} \right)^k \left( 1 - \frac{a}{n} \right)^{n-k}; \qquad b(0, n, a/n) = \left( 1 - \frac{a}{n} \right)^n \to \exp(-a) \quad \text{for } n \to \infty.

Now we put

\frac{b(k+1, n, a/n)}{b(k, n, a/n)} = \frac{a}{k+1}\, \frac{n-k}{n} \left( 1 - \frac{a}{n} \right)^{-1} \to \frac{a}{k+1}, \qquad (A.1)

and this yields

b(1, n, a/n) \to a \exp(-a); \quad b(2, n, a/n) \to \frac{a^2}{2} \exp(-a); \; \ldots; \quad b(k, n, a/n) \to \frac{a^k}{k!} \exp(-a) = \pi_k(a).
Definition. (Homogeneous Poisson process (HPP))
A random point process N(t), t > 0 on the real axis is a HPP with
a constant intensity \lambda if it satisfies the three conditions
(a) N(0) = 0.
(b) The random increments N(t_k) - N(t_{k-1}); k = 1, 2, \ldots are for
any sequence of times 0 < t_0 < t_1 < \cdots < t_n < \cdots mutually
independent.
(c) The random increments defined in condition (b) are Poisson
distributed of the form

\Pr([N(t_{r+1}) - N(t_r)] = k) = \frac{(\lambda \tau_r)^k}{k!} \exp(-\lambda \tau_r); \qquad \tau_r = t_{r+1} - t_r; \quad k = 0, 1, \ldots; \; r = 1, 2, \ldots. \qquad (A.2)
To analyze the sample paths we consider the increment \Delta N(t) =
N(t + \Delta t) - N(t). Its probability has, for small values of \Delta t, the form

\Pr(\Delta N(t) = k) = \frac{(\lambda \Delta t)^k}{k!} \exp(-\lambda \Delta t) \approx \begin{cases} 1 - \lambda \Delta t & \text{for } k = 0 \\ \lambda \Delta t & \text{for } k = 1 \\ O(\Delta t^2) & \text{for } k \geq 2 \end{cases} \qquad (A.3)

Equation (A.3) means that for \Delta t \to 0 the value of N(t + \Delta t)
is most likely the one of N(t) (\Pr([N(t + \Delta t) - N(t)] = 0) \approx 1).
However, the part of (A.3) with \Pr([N(t + \Delta t) - N(t)] = 1) \approx \lambda \Delta t
indicates that there is a small chance for a jump of height unity.
The probability of jumps with greater heights k = 2, 3, \ldots, corresponding
to the third part of (A.3), is subdominantly small and such jumps
do not appear.
We calculate the moments of the HPP in two alternative ways.
(i) We use (1.5) with (A.2) to obtain

\langle x^m \rangle = \int_{-\infty}^{\infty} p(x)\, x^m\, dx = \sum_{k=0}^{\infty} k^m \Pr(x = k) = \exp(-a) \sum_{k=0}^{\infty} k^m a^k / k!; \qquad a = \lambda t, \qquad (A.4)
or we apply (ii) the concept of the generating function defined by

g(z) = \sum_{k=0}^{\infty} z^k \Pr(x = k), \quad \text{with} \quad g'(1) = \langle x \rangle; \quad g''(1) = \langle x^2 \rangle - \langle x \rangle, \ldots; \qquad {}' = \frac{d}{dz}. \qquad (A.5)
This leads in the case of an HPP to
g(z) = \sum_{k=0}^{\infty} z^k a^k \exp(-a)/k! = \exp(-a) \sum_{k=0}^{\infty} (z a)^k / k! = \exp[a(z - 1)]. \qquad (A.6)

In either case we obtain

\langle N(t) \rangle = \lambda t, \qquad \langle N^2(t) \rangle = (\lambda t)^2 + \lambda t. \qquad (A.7)
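The moments (A.7) are easy to confirm numerically (our addition; \lambda, t and the sample sizes are arbitrary). The sketch samples N(t) directly from the Poisson law (A.2) and, as an alternative path construction, uses the standard fact that the waiting times between jumps of an HPP are exponentially distributed with mean 1/\lambda.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, t, n_paths = 2.5, 3.0, 100000       # arbitrary intensity, horizon, sample size

# Sample N(t) directly from the Poisson law (A.2) over [0, t]:
N = rng.poisson(lam * t, n_paths)
print(N.mean(), lam * t)                         # <N(t)> = lambda t
print(np.mean(N**2), (lam * t)**2 + lam * t)     # <N^2(t)>, Eq. (A.7)

# Alternative: exponential waiting times between jumps (standard HPP fact).
waits = rng.exponential(1.0 / lam, size=(1000, 40))
counts = np.sum(np.cumsum(waits, axis=1) <= t, axis=1)
print(counts.mean())                             # again ~ lambda t = 7.5
```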
We calculate now the PD of the sum x_1 + x_2 of two independent
HPP's. By definition this yields

\Pr([x_1 + x_2] = k) = \Pr\left( \bigcup_{j=0}^{k} [x_1 = j, x_2 = k - j] \right) = \sum_{j=0}^{k} \Pr(x_1 = j, x_2 = k - j)
= \sum_{j=0}^{k} \exp(-\theta_1) \frac{\theta_1^j}{j!}\, \exp(-\theta_2) \frac{\theta_2^{k-j}}{(k-j)!}
= \exp[-(\theta_1 + \theta_2)] \sum_{j=0}^{k} \binom{k}{j} \theta_1^j \theta_2^{k-j} / k!
= \exp[-(\theta_1 + \theta_2)]\, (\theta_1 + \theta_2)^k / k!. \qquad (A.8)

If the two variables are IID (\theta = \theta_1 = \theta_2), (A.8) reduces to

\Pr([x_1 + x_2] = k) = \exp(-2\theta)(2\theta)^k / k!. \qquad (A.9)
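The closure property (A.8), namely that the sum of two independent Poisson variables is again Poisson with parameter \theta_1 + \theta_2, can be verified by direct sampling (our addition; the parameter values are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(7)
th1, th2, n = 1.3, 2.1, 500000           # arbitrary parameters and sample size
x = rng.poisson(th1, n) + rng.poisson(th2, n)    # sum of two independent HPP counts

k, theta = 3, th1 + th2
print(np.mean(x == k))                                   # empirical Pr(x1 + x2 = k)
print(math.exp(-theta) * theta**k / math.factorial(k))   # Poisson(theta), Eq. (A.8)
```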
Poisson HPP's play important roles in Markov processes (see
Bremaud [1.16]). In many applications these Markov chains are iterations
driven by "white noise" modeled by HPP's. Such iterations
arise in the study of the stability of continuous periodic phenomena,
in biology and economics, etc. We consider iterations of the form

x(t + s) = F(x(s), Z(t + s)); \qquad s, t \in \mathbb{N}_0, \qquad (A.10)

where t, s are discrete variables and x(t) is a discrete random variable
driven by the white noise Z(t + s). An important particular case is
Z(t + s) := N(t + s) with a PD

\Pr(N(t + s) = k) = \exp(-u)\, u^k / k!; \qquad u = \theta(t + s).

The transition probability p_{ji} is the matrix element governing the
transition from state i to state j.
Examples

(i) Random walk

This is an iteration of a discrete random variable x(t)

x(t) = x(t-1) + N(t); \qquad x(0) = x_0 \in \mathbb{N}. \qquad (A.11)

N(t) is an HPP with \Pr([N(t) = k]) = \exp(-\lambda t)(\lambda t)^k / k!. Hence, we
obtain the transition probability

p_{ji} = \Pr(x(t) = j \mid x(t-1) = i) = \Pr([i + N(t)] = j) = \Pr(N(t) = j - i).
(ii) Flip-flop processes

The iteration takes here the form

x(t) = (-1)^{N(t)}. \qquad (A.12)

The transition matrix takes the form

p_{-1,1} = \Pr(x(t+s) = 1 \mid x(s) = -1) = \Pr(N(t) \text{ odd}) = \alpha;
p_{1,1} = \Pr(x(t+s) = 1 \mid x(s) = 1) = \Pr(N(t) \text{ even}) = \beta,

with

\alpha = \sum_{k=0}^{\infty} \exp(-\lambda t)(\lambda t)^{2k+1} / (2k+1)! = \exp(-\lambda t) \sinh(\lambda t);
\beta = \sum_{k=0}^{\infty} \exp(-\lambda t)(\lambda t)^{2k} / (2k)! = \exp(-\lambda t) \cosh(\lambda t).
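The transition probabilities \alpha and \beta depend only on the parity of N(t), so they are easy to check by simulation (our addition; parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
lam, t, n_paths = 1.2, 0.9, 200000       # arbitrary parameters

N = rng.poisson(lam * t, n_paths)        # number of jumps up to time t
x = (-1) ** N                            # flip-flop state (A.12), started at x = +1

print(np.mean(x == -1), np.exp(-lam * t) * np.sinh(lam * t))   # alpha: odd N(t)
print(np.mean(x == +1), np.exp(-lam * t) * np.cosh(lam * t))   # beta:  even N(t)
```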
Another important application of HPP is given by a 1D approach
to turbulence elaborated by Kerstein [1.17] and [1.18]. This model
is based on the turbulence advection by a random map. A triplet
map is applied to a shear flow velocity profile. An individual event is
represented by a mapping that results in a new velocity profile. As
a statistical hypothesis the author assumes that the temporal rate
of the events is governed by a Poisson process and the parameters of
the map can be sampled from a given PD. Although this model was
applied to 1D turbulence, its results go beyond this limit and the
model has a remarkable power of predicting experimental data.
Exercises
EX 1.1. Calculate the mean value M_n(s, t) = \langle (B_t - B_s)^n \rangle, n \in \mathbb{N}.

Hint: Use (1.60) and the standard substitution y_2 = y_1 +
z\sqrt{2(t_2 - t_1)}, where z is a new variable. Show that this yields

M_n = \frac{[2(t_2 - t_1)]^{n/2}}{\pi} \int \exp(-v^2)\, dv \int \exp(-z^2)\, z^n\, dz.

The gamma function is defined by (see Ryshik & Gradstein [1.15])

\int \exp(-z^2)\, z^n\, dz = \Gamma((n+1)/2) \quad \forall n = 2k; \qquad = 0 \quad \forall n = 2k+1; \quad k \in \mathbb{N};
\Gamma(1/2) = \sqrt{\pi}, \qquad \Gamma(n+1) = n\, \Gamma(n).

Verify the result

M_{2n} = \pi^{-1/2}\, [2(t_2 - t_1)]^n\, \Gamma((2n+1)/2).
EX 1.2. We consider a 1D random variable X with the mean \mu and
the variance \sigma^2. Show that the latter can be written in the form
(f_X(x) is the PD; \varepsilon > 0)

\sigma^2 \geq \left( \int_{\mu+\varepsilon}^{\infty} + \int_{-\infty}^{\mu-\varepsilon} \right) f_X(x)\,(x - \mu)^2\, dx.

For x < \mu - \varepsilon and x > \mu + \varepsilon we have (x - \mu)^2 > \varepsilon^2, and this yields

\sigma^2 \geq \varepsilon^2 \left[ 1 - \int_{\mu-\varepsilon}^{\mu+\varepsilon} f_X(x)\, dx \right] = \varepsilon^2 \Pr(|X - \mu| \geq \varepsilon),

and this gives the Chebyshev inequality its final form

\Pr\{|X - \mu| \geq \varepsilon\} \leq \sigma^2/\varepsilon^2.

The inequality governing martingales (1.28) is obtained with
considerations similar to the derivation of the Chebyshev inequality.
EX 1.3.
(a) Show that we can factorize the bivariate GD (1.35a) with zero
mean and equal variance (\langle x \rangle = \langle y \rangle = 0; \sigma^2 = a = b) in the form

p(x, y) = \gamma^{-1/2}\, p(x)\, p((y - rx)/\sqrt{\gamma}); \qquad \gamma = 1 - r^2,

where p(x) is the univariate GD (1.29).
(b) Calculate the conditional distribution (see 1.17) of the bivariate
GD (1.35a). Hint: (c is the covariance matrix, D = \det c)

p_{1|1}(x \mid y) = \sqrt{c_{yy}/(2\pi D)}\, \exp[-c_{yy}(x - y c_{xy}/c_{yy})^2/(2D)].

Verify that the latter line corresponds to a N[y c_{xy}/c_{yy},\; c_{xx} -
c_{xy}^2/c_{yy}] distribution.
EX 1.4. Prove that (1.53) is a solution of the Chapman-Kolmogorov
equation (1.52).

Hint: The integrand in (1.52) is given by

T = p_{1|1}(y_2, t_2 \mid y_1, t_1)\, p_{1|1}(y_3, t_3 \mid y_2, t_2).

Use the substitution

u = t_2 - t_1 > 0, \quad v = t_3 - t_2 > 0; \quad t_3 - t_1 = v + u > 0,

introduce (1.53) into (1.52) and put

T = (4\pi^2 u v)^{-1/2} \exp(-A); \qquad A = (y_3 - y_2)^2/(2v) + (y_2 - y_1)^2/(2u) = a_2 y_2^2 + a_1 y_2 + a_0,

with a_k = a_k(y_1, y_3), k = 0, 1, 2. Use the standard substitution (see
EX 1.1) to obtain

\int T\, dy_2 = (4\pi^2 u v)^{-1/2} \exp[-F(y_3, y_1)] \int \exp(-K)\, dy_2; \qquad F = \frac{4 a_0 a_2 - a_1^2}{4 a_2}; \quad K = a_2 \left( y_2 + \frac{a_1}{2 a_2} \right)^2,

and compare the result of the integration with the right hand side
of (1.52).
EX 1.5. Verify that the solution of (1.54) is given by (1.55). Prove
also its initial condition.

Hint: To verify the initial condition use the integral

\int_{-\infty}^{\infty} \exp[-y^2/(2t)]\, H(y)\, dy,

where H(y) is a continuous function. Use the standard substitution
in its form y = \sqrt{2t}\, z.
To verify the solution (1.55) use the same substitution as in
EX 1.4.
EX 1.6. Calculate the average \langle y_1^n(t_1)\, y_2^m(t_2) \rangle; y_k = B_{t_k}, k = 1, 2;
n, m \in \mathbb{N} with the use of the Markovian bivariate PD (1.60).
Hint: Use a standard substitution of the type given in EX 1.1.
EX 1.7. Verify that the variable \bar{W}_t defined in (1.65) has the
autocorrelation \langle \bar{W}_t \bar{W}_s \rangle = t \wedge s. To perform this task we calculate for a
fixed value of a > 0

\langle \bar{W}_t \bar{W}_s \rangle = \langle B_{t+a} B_{s+a} \rangle - \langle B_{t+a} B_a \rangle - \langle B_{s+a} B_a \rangle + \langle B_a^2 \rangle
= s \wedge t + a - a - a + a = s \wedge t.
EX 1.8. Prove that the scaled and translated WS's defined in (1.74)
are WS's.

Hint: To cover the scaled WS's, put

H_{u,v} = \frac{1}{ab}\, M^{(2)}_{a^2 u,\, b^2 v} = \frac{1}{ab}\, x_1(a^2 u)\, x_2(b^2 v).

Because of \langle x_1(\alpha)\, x_2(\beta) \rangle = 0 we have \langle H_{u,v} \rangle = 0. Its autocorrelation
is given by

\langle H_{u,v} H_{p,q} \rangle = \frac{1}{(ab)^2} \langle x_1(a^2 u)\, x_1(a^2 p) \rangle \langle x_2(b^2 v)\, x_2(b^2 q) \rangle
= \frac{(a^2 u) \wedge (a^2 p)\; (b^2 v) \wedge (b^2 q)}{(ab)^2} = (u \wedge p)(v \wedge q).

For the case of the translated quantity use the considerations of
EX 1.7.
EX 1.9. Verify the differential (1.109) of two linear stochastic
functions.
Hint: According to (1.89) we have dB_t^2 = 2 B_t\, dB_t + dt \ldots
EX 1.10. Show that the "inverted" stochastic variables

Z_t = t\, B_{1/t}; \qquad H_{s,t} = s t\, M^{(2)}_{1/s,\, 1/t},

are also a WP (Z_t) and a WS (H_{s,t}).
EX 1.11. Use the bivariate PD (1.60) for a Markov process to
calculate the two-variable characteristic function of a Brownian motion.
Verify the result

G(u, v) = \langle \exp[i(u B_1 + v B_2)] \rangle = \exp\left\{ -\frac{1}{2}\left[ u^2 t_1 + v^2 t_2 + 2uv(t_1 \wedge t_2) \right] \right\}; \qquad B_k = B_{t_k},

and compare its 1D limit with (1.58a).
EX 1.12. Calculate the probability P of a particle to stay in the
interior of the circle D = \{(x, y) \in \mathbb{R}^2 \mid x^2 + y^2 < R^2\}.

Hint: Assume that the components of the vector (x, y) are statistically
independent and use the bivariate GD (1.35) with zero mean to calculate

P[B_t \in D] = \iint_D p(x, y)\, dx\, dy.
EX 1.13. Consider the Brownian motion on the perimeter of ellipses
and hyperbolas:

(i) ellipses

x(t) = \cos(B_t), \quad y(t) = \sin(B_t),

(ii) hyperbolas

x(t) = \cosh(B_t), \quad y(t) = \sinh(B_t).

Use the Ito formula to obtain the corresponding SDE and calculate
\langle x(t) \rangle and \langle y(t) \rangle.
EX 1.14. Given the variables

Z_1 = (B_t^1 - B_t^2)^4 + (B_t^1)^5; \qquad Z_2 = (B_t^1 - B_t^2)^3 + (B_t^1)^6,

where B_t^1 and B_t^2 are independent WP's. Find the SDE's governing
dZ_1 and dZ_2.
EX 1.15. The random function

R(t) = \left[ (B_t^1)^2 + \cdots + (B_t^n)^2 \right]^{1/2}

is considered as the distance of an n-dimensional vector of independent
WP's from the origin. Verify that its differential has the form

dR(t) = \sum_{k=1}^{n} B_t^k\, dB_t^k / R + \frac{n-1}{2R}\, dt.
EX 1.16. Consider the stochastic function

x(t) = \exp(a B_t - a^2 t/2); \qquad a = \text{const}.
(a) Show that
x(t) = x(t — s)x(s).
Hint: Use (1.65).
(b) Show that x(t) is a martingale.
EX 1.17. The Wiener-Levy Theorem is given by

B_t = \sum_{k=1}^{\infty} A_k \int_0^t \psi_k(z)\, dz, \qquad (E.1)

where A_k is a set of IID N(0,1) variables and \psi_k; k = 1, 2, \ldots is a set
of orthonormal functions in [0, 1]

\int_0^1 \psi_k(z)\, \psi_m(z)\, dz = \delta_{km}.

Show that (E.1) defines a WP.

Hint: The autocorrelation is given by \langle B_t B_s \rangle = t \wedge s. Show that

\frac{\partial}{\partial t} \langle B_t B_s \rangle = \frac{\partial}{\partial t}(t \wedge s) = \sum_k \psi_k(t) \int_0^s \psi_k(z)\, dz.

Multiply the last line by \psi_m(t) and integrate the resulting equation
from zero to unity.
EX 1.18. A bivariate PD of two variables x, y is given by p(x, y).
(a) Calculate the PD of the "new" variable z and its average for
(i) z = x \pm y, (ii) z = xy.
Hint: Use (1.41b).
(b) Find the PD f_{uv}(u, v) for the "new" variables u = x + y; v =
x - y.
EX 1.19. The Ito representation of a given stochastic process
F(t, \omega) has the form

F(t, \omega) = \langle F(t, \omega) \rangle + \int_0^t f(s, \omega)\, dB_s,

where f(s, \omega) is another stochastic process. Find f(s, \omega) for the
particular cases

(i) F(t, \omega) = \text{const}; \quad (ii) F(t, \omega) = B_t^n; \; n = 1, 2, 3; \quad (iii) F(t, \omega) = \exp(B_t).
EX 1.20. Calculate the PD of the sum of n identically independent
HPP's [see (A.8)].
CHAPTER 2
STOCHASTIC DIFFERENTIAL EQUATIONS
There are two classes of ordinary differential equations that contain
stochastic influences:
(i) Ordinary differential equations (ODE) with stochastic coefficient
functions and/or random initial or boundary conditions that con-
tain no stochastic differentials. We consider this type of ODE's in
Chapter 4.3 where we will analyze eigenvalue problems. For these
ODE's we can take advantage of all traditional methods of analysis.
Here we give only the simple example of a linear 1st order ODE

\frac{dx}{dt} = -p\,x; \qquad p = p(\omega), \quad x(0) = x_0(\omega),

where the coefficient function p and the initial condition are x-
independent random variables. The solution is x(t) = x_0 \exp(-pt)
and we obtain the moments of this solution in the form \langle x^m \rangle =
\langle x_0^m \exp(-p m t) \rangle. Assuming that the initial condition and the
parameter p are identically independent N(0, a) distributed, this yields

\langle x^{2m} \rangle = (2m - 1)!!\, a^m \exp(2 a m^2 t^2); \qquad \langle x^{2m+1} \rangle = 0.
(ii) We focus in this book — with a few exceptions in Chapter 4 —
exclusively on initial value problems for ordinary SDE's of the type
(1.123) that contain stochastic differentials of the Brownian motions.
The initial values may also vary randomly: x_n(0) = x_n(\omega). In this
chapter we introduce the analytical tools to reach this goal. However,
in many cases we would have to resort to numerical procedures and
we perform this task in Chapter 5.
The primary questions are:
(i) How can we solve the equations or at least approximate the
solutions and what are the properties of the latter?
(ii) Can we derive criteria for the existence and uniqueness of the
solutions?
The theory is, however, only in a state of infancy and we will be
happy if we will be able to answer these questions in case of the
simplest problems. The majority of the knowledge pertains to linear
ordinary SDE's; nonlinear problems are covered only in examples.
Partial stochastic differential equations (PSDE) will be covered in
Chapter 4 of this book.
2.1. One-Dimensional Equations
To introduce the ideas we begin with two simple problems.
2.1.1. Growth of populations
We consider here the growth of an isolated population. N(i) is the
number of members of the population at the instant t. The growth
(or decay) rate is proportional to the number of members and this
growth is, in absence of stochastic effects, exponential. We introduce
additionally a stochastic term that is also proportional to N. We
write the SDE first in the traditional way

\frac{dN}{dt} = r\,N + u\,W(t)\,N; \qquad r, u = \text{const}, \qquad (2.1)

where W(t) stands for the white noise. It is, however, convenient to
write Equation (2.1) in a form analogous to (1.99.1). Thus, we obtain

dN = a\, dt + b\, dB_t; \qquad a = r N, \quad b = u N; \quad dB_t = W(t)\, dt. \qquad (2.2)
Equation (2.2) is a first order homogeneous ordinary SDE for the
desired solution N(B_t, t). We call the function a(N, t) (the coefficient
of dt) the drift coefficient and the function b(N, t) (the coefficient of
dB_t) the diffusion coefficient. SDE's with drift coefficients that are
at most first order polynomials in N and diffusion coefficients that
are independent of N are called linear equations. Equation (2.2) is
hence a nonlinear SDE. We solve the problem with the use of the
Ito formula. Thus we introduce the function Y = g(N) = \ln N and
apply (1.99.3) (see also (1.129))

dY = d(\ln N) = (a\, g_N + b^2 g_{NN}/2)\, dt + b\, g_N\, dB_t = (r - u^2/2)\, dt + u\, dB_t. \qquad (2.3)

Equation (2.3) is now a SDE with constant coefficients. Thus we
can directly integrate (2.3) and we obtain its solution in the form

N = N_0 \exp[(r - u^2/2)\,t + u\,B_t]; \qquad N_0 = N(t = 0). \qquad (2.4)
There are two classes of initial conditions (ICs):
(i) The initial condition (here the initial population No) is a deter-
ministic quantity.
(ii) The initial condition is a stochastic variable. In this case we assume
that N_0 is independent of the Brownian motion.
The relation (2.4) is only a formal solution and does not offer
much information about the properties of the solutions. We obtain
more insight from the lowest moments of the formal solution (2.4).
Thus we calculate the mean and the variance of N and we obtain

\langle N(t) \rangle = \langle N_0 \rangle \exp[(r - u^2/2)t]\, \langle \exp(u B_t) \rangle = \langle N_0 \rangle \exp(rt), \qquad (2.5)

where we used the characteristic function (1.58a). We see that the
mean or average (2.5) represents the deterministic limit solution
(u = 0) of the SDE. We calculate the variance with the use of
(1.20) and we obtain

\mathrm{Var}(N) = \exp(2rt)\left[ \langle N_0^2 \rangle \exp(u^2 t) - \langle N_0 \rangle^2 \right]. \qquad (2.6)
An important special case is given by the combination of the
parameters r = u^2/2. This leads to

N(t) = N_0 \exp(u B_t); \qquad \langle N(t) \rangle = \langle N_0 \rangle \exp(u^2 t/2); \qquad \mathrm{Var}(N) = \langle N_0^2 \rangle \exp(2 u^2 t) - \langle N_0 \rangle^2 \exp(u^2 t). \qquad (2.7)
Even in the case of a few difficulties, such as the orbit of Mercury, Newtonian mechanics could be replaced satisfactorily by the equally deterministic general relativity. A little more than a century ago, the case for determinism was challenged. The seemingly random motion of Brownian particles suspended in a fluid was observed, as was the sudden transition of the flow of a fluid past an object or obstacle from laminar flow to chaotic turbulence. Recent studies have shown that some seemingly chaotic motion is not necessarily inconsistent with determinism (we can call this quasi-chaos). Even so, such problems are best studied using probabilistic notions. Quantum theory has shown that the motion of particles at the atomic level is fundamentally nondeterministic. Heisenberg showed that there are limits to the precision with which physical properties can be determined. One can only assign a probability to the value of a physical quantity. The consequences of this idea can be manifest even on a macroscopic scale; the third law of thermodynamics is an example.

Stochastic differential equations, the subject of this monograph, are an interesting extension of the deterministic differential equations that can be applied to Brownian motion as well as to other problems. The field arose from the work of Einstein and Smoluchowski, among others, and recent years have seen rapid advances due to the development of the calculi of Ito and Stratonovich.

We were both trained as mathematicians and scientists, and our goal is to present the ideas of stochastic differential equations in a short monograph in a manner that is useful for scientists and engineers, rather than for mathematicians, and without overpowering mathematical rigor. We presume that the reader has some, but not extensive, knowledge of probability theory. Chapter 1 provides a reminder of and an introduction to the fundamental ideas and quantities, including the ideas of Ito and Stratonovich. Stochastic differential equations and the Fokker-Planck equation are presented in Chapters 2 and 3. More advanced applications follow in Chapter 4. The book concludes with a presentation of some numerical routines for the solution of ordinary stochastic differential equations. Each chapter contains a set of exercises whose purpose is to aid the reader in understanding the material. A CD-ROM that provides MATHEMATICA and FORTRAN programs to assist the reader with the exercises, numerical routines and the generation of figures accompanies the text.

Douglas Henderson, Provo, Utah, USA
Peter Plaschko, Mexico City DF, Mexico
June, 2006
CONTENTS

Preface vii
Introduction xv
Glossary xxi

1. Stochastic Variables and Stochastic Processes 1
1.1. Probability Theory 1
1.2. Averages 4
1.3. Stochastic Processes, the Kolmogorov Criterion and Martingales 9
1.4. The Gaussian Distribution and Limit Theorems 14
1.4.1. The central limit theorem 16
1.4.2. The law of the iterated logarithm 17
1.5. Transformation of Stochastic Variables 17
1.6. The Markov Property 19
1.6.1. Stationary Markov processes 20
1.7. The Brownian Motion 21
1.8. Stochastic Integrals 28
1.9. The Ito Formula 38
Appendix 45
Exercises 49

2. Stochastic Differential Equations 55
2.1. One-Dimensional Equations 56
2.1.1. Growth of populations 56
2.1.2. Stratonovich equations 58
2.1.3. The problem of Ornstein-Uhlenbeck and the Maxwell distribution 59
2.1.4. The reduction method 63
2.1.5. Verification of solutions 65
2.2. White and Colored Noise, Spectra 67
2.3. The Stochastic Pendulum 70
2.3.1. Stochastic excitation 72
2.3.2. Stochastic damping ($\beta = \gamma = 0$; $\alpha \ne 0$) 73
2.4. The General Linear SDE 76
2.5. A Class of Nonlinear SDE 79
2.6. Existence and Uniqueness of Solutions 84
Exercises 87

3. The Fokker-Planck Equation 91
3.1. The Master Equation 91
3.2. The Derivation of the Fokker-Planck Equation 95
3.3. The Relation Between the Fokker-Planck Equation and Ordinary SDE's 98
3.4. Solutions to the Fokker-Planck Equation 104
3.5. Lyapunov Exponents and Stability 107
3.6. Stochastic Bifurcations 110
3.6.1. First order SDE's 110
3.6.2. Higher order SDE's 112
Appendix A. Small Noise Intensities and the Influence of Randomness on Limit Cycles 117
Appendix B.1. The method of Lyapunov functions 124
Appendix B.2. The method of linearization 128
Exercises 130

4. Advanced Topics 135
4.1. Stochastic Partial Differential Equations 135
4.2. Stochastic Boundary and Initial Conditions 141
4.2.1. A deterministic one-dimensional wave equation 141
4.2.2. Stochastic initial conditions 144
4.3. Stochastic Eigenvalue Equations 147
4.3.1. Introduction 147
4.3.2. Mathematical methods 148
4.3.3. Examples of exactly soluble problems 152
4.3.4. Probability laws and moments of the eigenvalues 156
4.4. Stochastic Economics 160
4.4.1. Introduction 160
4.4.2. The Black-Scholes market 162
Exercises 164

5. Numerical Solutions of Ordinary Stochastic Differential Equations 167
5.1. Random Number Generators and Applications 167
5.1.1. Testing of random numbers 168
5.2. The Convergence of Stochastic Sequences 173
5.3. The Monte Carlo Integration 175
5.4. The Brownian Motion and Simple Algorithms for SDE's 179
5.5. The Ito-Taylor Expansion of the Solution of a 1D SDE 181
5.6. Modified 1D Milstein Schemes 187
5.7. The Ito-Taylor Expansion for N-dimensional SDE's 189
5.8. Higher Order Approximations 193
5.9. Strong and Weak Approximations and the Order of the Approximation 196
Exercises 201

References 205
Fortran Programs 211
Index 213
INTRODUCTION

The theory of deterministic chaos has enjoyed during the last three decades a rapidly increasing audience of mathematicians, physicists, engineers, biologists, economists, etc. However, this type of "chaos" can be understood only as quasi-chaos, in which all states of a system can be predicted and reproduced by experiments. Meanwhile, many experiments in the natural sciences have brought about hard evidence of stochastic effects. The best known example is perhaps the Brownian motion, where pollen grains submerged in a fluid experience collisions with the molecules of the fluid and thus exhibit random motions. Other familiar examples come from fluid or plasma dynamic turbulence, optics, motions of ions in crystals, filtering theory, the problem of optimal pricing in economics, etc. The study of stochasticity was initiated in the early years of the 1900s. Einstein [1], Smoluchowski [2] and Langevin [3] wrote pioneering investigations. This work was later resumed and extended by Ornstein and Uhlenbeck [4]. But the investigation of stochastic effects in natural science became more popular only in the last three decades. Meanwhile, studies are undertaken to calculate, or at least approximate, the effect of stochastic forces on otherwise deterministic oscillators, and to investigate the stability of, or the transition to stochastic chaos of, such oscillators.

To motivate the following considerations of stochastic differential equations (SDE) we introduce a few examples from the natural sciences.

(a) Pendulum with Stochastic Excitations

We study the linearized pendulum motion x(t) subjected to a stochastic effect, called white noise,

$$\ddot{x} + x = \beta\,\xi_t,$$

where $\beta$ is an intensity constant, t is the time and $\xi_t$ stands for the white noise, with a single frequency and constant spectrum. For $\beta = 0$ we obtain the homogeneous deterministic (non-stochastic) traditional pendulum motion. We can expect that the stochastic effect disturbs this motion and destroys the periodicity of the motion in the phase space $(x, \dot{x})$. The latter has closed solutions called limit cycles. It is an interesting task to investigate whether the solutions disintegrate into scattered points (stochastic chaos). We will cover this problem later in Section 2.3 and find that the average motion (in a sense to be defined in Section 1.2 of Chapter 1) of the pendulum is determined by the deterministic limit ($\beta = 0$) of the stochastic pendulum equation.

(b) Stochastic Growth of Populations

N(t) is the number of the members of a population at the time t, $\alpha$ is the constant of the deterministic growth and $\beta$ is again a constant characterizing the intensity of the white noise. Thus we study the growth problem in terms of the linear scenario

$$\dot{N} = (\alpha + \beta\,\xi_t)\,N.$$

The deterministic limit ($\beta = 0$) of this equation describes the growth of a population living in an unrestricted area with unrestricted food supply; its solution (the number of such a population) grows exponentially. The stochastic effect, the white noise, describes a stochastically varying food supply that influences the growth of the population. We will consider this problem in Section 2.1.1 and find again that the average of the population is given by the deterministic limit.
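Example (b) can be made concrete numerically. The following Python sketch is illustrative only (the book's own routines are the FORTRAN and MATHEMATICA programs on the CD-ROM); it applies a simple Euler-Maruyama step, formally introduced only in Chapter 5, to the Ito reading of the growth equation. The parameter values alpha, beta, N0 are arbitrary assumptions.

    # Hedged sketch: Euler-Maruyama for dN = N(alpha dt + beta dW), Ito sense.
    # The ensemble average should follow the deterministic limit N0*exp(alpha*t).
    import numpy as np

    rng = np.random.default_rng(9)
    alpha, beta, N0 = 1.0, 0.5, 1.0      # assumed demonstration values
    T, n, M = 1.0, 1000, 20_000          # horizon, steps, realizations
    dt = T / n

    N = np.full(M, N0)
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=M)   # Brownian increments
        N += N * (alpha * dt + beta * dW)

    print(N.mean(), "vs deterministic limit", N0 * np.exp(alpha * T))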
(c) Diffraction of Optical Waves

The transfer function $T(\omega)$, $\omega = (\omega_1, \omega_2)$, of a two-dimensional optical device is defined by

$$T(\omega) = \frac{1}{N}\int_{-\infty}^{\infty}\!dx\int_{-\infty}^{\infty}\!dy\, F(x,y)\,F^*(x-\omega_1,\,y-\omega_2); \qquad N = \int_{-\infty}^{\infty}\!dx\int_{-\infty}^{\infty}\!dy\, |F(x,y)|^2,$$

where F is a complex wave amplitude and $F^* = \mathrm{cc}(F)$ is its complex conjugate. The parameter N denotes the normalization of $|F(x,y)|^2$ and the variables x and y stand for the coordinates of the image plane. In a simplified treatment, we assume that the wave form is given by $F = |F|\exp(-ik\Delta)$, with $|F|, k = \text{const}$, where k and $\Delta$ stand for the wave number and the phase of the waves, respectively. We suppose that the wave emerging from the optical instrument (e.g. a lens) exhibits a phase with two different deviations from a spherical structure, $\Delta = \Delta_c + \Delta_r$, with a controlled or deterministic phase $\Delta_c(x,y)$ and a random phase $\Delta_r(x,y)$ that arises from polishing of the optical device or from atmospheric influences. Thus, we obtain

$$T(\omega) = \frac{1}{K}\int_{-\infty}^{\infty}\!dx\int_{-\infty}^{\infty}\!dy\, \exp\{ik[\Delta(x-\omega_1,\,y-\omega_2) - \Delta(x,y)]\},$$

where K is used to include the normalization. In simple applications we can model the random phase using white noise with a Gaussian probability density. To evaluate the average of the transfer function $\langle T(\omega)\rangle$ we need to calculate the quantity

$$\langle \exp\{ik[\Delta_r(x-\omega_1,\,y-\omega_2) - \Delta_r(x,y)]\}\rangle.$$

We will study the Gaussian probability density and complete the task of determining the average written in the last line in Section 1.3 of Chapter 1. An introduction to random effects in optics can be found in O'Neill [5].
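The building block of this average is the mean of a complex exponential of a Gaussian phase: for a zero-mean Gaussian variable $\varphi$ with variance $s^2$ one has $\langle e^{i\varphi}\rangle = e^{-s^2/2}$ (this is derived for the Gaussian density in Chapter 1). A minimal Monte Carlo sketch in Python, not part of the book, with an assumed value of s:

    # Check <exp(i*phi)> = exp(-s^2/2) for phi ~ N(0, s^2), the quantity
    # needed for the averaged transfer function <T(omega)>.
    import numpy as np

    rng = np.random.default_rng(10)
    s = 0.8                                   # assumed phase standard deviation
    phi = rng.normal(0.0, s, size=1_000_000)

    print(np.exp(1j * phi).mean(), "vs", np.exp(-s**2 / 2))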
(d) Filtering Problems

Suppose that we have performed experiments on a stochastic problem such as the one in (a) in an interval $t \in [0, u]$ and we obtain as a result, say, A(v), $v \in [0, u]$. To improve the knowledge about the solution we repeat the experiments for $t \in [u, T]$ and we obtain A(t), $t \in [u, T]$. Yet due to inevitable experimental errors we do not obtain A(t) but a result that includes an error, A(t) + 'noise'. The question is now: how can we filter the noise away? A filter is thus an instrument to clean a result and remove the noise that arises during the observation. A typical problem is one where a signal with unknown frequency is transmitted (e.g. by an electronic device) and it suffers during the transmission the addition of a noise. If the transmitted signal is stochastic itself (as in the case of music) we need to develop a non-deterministic model for the signal with the aid of a stochastic differential equation. To study the basic ideas of filtering problems the reader is referred to the book of Stremler [6].

(e) Fluidmechanical Turbulence

This is perhaps the most challenging and most intricate application of statistical science. We consider here the continuum dynamics of a flow field influenced by stochastic effects. The latter arise from initial conditions (e.g. at the nozzle of a jet flow, or at the entry region of a channel flow) and/or from background noise (e.g. acoustic waves). In the simplest case, incompressible two-dimensional flows, there are three characteristic variables (two velocity components and the pressure). These variables are governed by the Navier-Stokes equations (NSEs). The latter are a set of three nonlinear partial differential equations that include a parameter, the Reynolds number R. The inverse of R is the coefficient of the highest derivatives of the NSEs. Since turbulence occurs at intermediate to high values of R, this phenomenon is the rule and not the exception in fluid dynamics, and it occurs in parameter regions where the NSEs are singular. Nonlinear SDEs, such as the NSEs, lead additionally to the problem of the closure, where the equation governing the statistical moment of nth order contains moments of the (n+1)th order.

Hopf [7] was the first to try to find a theoretical approach to solve the problem for the idealized case of isotropic homogeneous turbulence, a flow configuration that can be approximately realized in grid flows. Hopf assumed that the turbulence is Gaussian, an assumption that facilitates the calculation of higher statistical moments of the distribution (see Section 1.3 in Chapter 1). However, later measurements showed that the assumption of a Gaussian distribution was rather unrealistic. Kraichnan [8] studied the problem again in the 60's and 70's with the direct triad interaction theory in the idealized configuration of homogeneous isotropic turbulence. However, this rather involved analysis could only be applied to calculate the spectrum of very small eddies, where the viscosity dominates the flow. Somewhat more progress has been achieved by the investigation of Rudenko and Chirin [9]. The latter predicted, with the aid of stochastic initial conditions with random phases, broad banded spectra of a nonlinear model equation. During the last two decades intensive work was done to investigate the Burgers equation, and this research is summarized in part by Wojczinsky [10]. The Burgers equation is supposed to be a reasonable one-dimensional model of the NSEs. We will give a short account of the work done in [9] in Chapter 4.
GLOSSARY

AC  almost certainly
BC  boundary condition
$dB_t = dW_t = \xi_t\,dt$  differential of the Brownian motion (or equivalently Wiener process)
cc(a) = a*  complex conjugate of a
D  dimension or dimensional
DF  distribution function
DOF  degrees of freedom
$\delta_{ij}$  Kronecker delta function
$\delta(x)$  Dirac delta function
EX  exercise at the end of a chapter
FPE  Fokker-Planck equation
$\Gamma(x)$  gamma function
GD  Gaussian distribution
GPD  Gaussian probability distribution
HPP  homogeneous Poisson process
$H_n(x)$  Hermite polynomial of order n
IC  initial condition
IID  identically independently distributed
IFF  if and only if
IMSL  international mathematical science library
$\mathcal{L}$  Laplace transform
M  master, as in master equation
MCM  Monte Carlo method
NSE  Navier-Stokes equation
NIGD  normal inverted GD
N($\mu$, $\sigma$)  normal distribution with $\mu$ as mean and $\sigma$ as variance
$\circ$  Stratonovich theory
ODE  ordinary differential equation
PD  probability distribution
PDE  partial differential equation
PDF  probability distribution function
PSDE  partial SDE
R  Reynolds number
RE  random experiment
RN  random number
RV  random variable
Re(a)  real part of a complex number
$\mathbb{R}$, $\mathbb{C}$  sets of real and complex numbers, respectively
S  Prandtl number
SF  stochastic function
SI  stochastic integral
SDE  stochastic differential equation
SLLN  strong law of large numbers
TPT  transition probability per unit time
WP  Wiener process
WS  Wiener sheet
WKB  Wentzel, Kramers, Brillouin
WRT  with respect to
W(t)  Wiener white (single frequency) noise
$\langle a\rangle$  average of a stochastic variable a
$\sigma_a^2 = \langle a^2\rangle - \langle a\rangle\langle a\rangle$  variance
$\langle x \mid y\rangle$, $\langle x, u \mid y, v\rangle$  conditional averages
$s \wedge t$  minimum of s and t
$\forall$  for all values of
$\in$  element of
$\int f(x)\,dx$  shorthand for $\int_{-\infty}^{\infty} f(x)\,dx$
X  end of an example
•  end of definition
$  end of theorem
CHAPTER 1

STOCHASTIC VARIABLES AND STOCHASTIC PROCESSES

1.1. Probability Theory

An experiment (or a trial of some process) is performed whose outcome (result) is uncertain: it depends on chance. A collection of all possible elementary (or individual) outcomes is called the sample space (or phase space, or range) and is denoted by $\Omega$. If the experiment is tossing a pair of distinguishable dice, then $\Omega = \{(i,j) \mid 1 \le i, j \le 6\}$. For the case of an experiment with a fluctuating pressure, $\Omega$ is the set of all real functions on $(0, \infty)$. An observable event A is a subset of $\Omega$; this is written in the form $A \subset \Omega$. In the dice example we could choose an event, for example, as $A = \{(i,j) \mid i + j = 4\}$. For the case of fluctuating pressures we could use the subset $A = (p_0 > 0, \infty)$. Not every subset of $\Omega$ is observable (or interesting). An example of a non-observable event appears when a pair of dice are tossed and only their spots are counted, $\Omega = \{(i,j),\, 2 \le i + j \le 12\}$. Then elementary outcomes like (1, 2), (2, 1) or (3, 1), (2, 2), (1, 3) are not distinguished.

Let $\Gamma$ be the set of observable events for one single experiment. Then $\Gamma$ must include the certain event $\Omega$ and the impossible event $\emptyset$ (the empty set). For every $A \in \Gamma$, the complement $A^c$ of A satisfies $A^c \in \Gamma$, and for every $B \in \Gamma$ the union and intersection of events, $A \cup B$ and $A \cap B$, must pertain also to $\Gamma$. $\Gamma$ is called an algebra of events. In many cases there are countable unions and intersections in $\Gamma$. Then it is sufficient to assume that

$$\bigcup_{n=1}^{\infty} A_n \in \Gamma, \quad \text{if } A_n \in \Gamma.$$

An algebra with this property is called a sigma algebra. In measure theory, the elements of $\Gamma$ are called measurable sets and the pair $(\Gamma, \Omega)$ is called a measurable space. A finite measure Pr(A) defined on $\Gamma$ with

$$0 \le \Pr(A) \le 1, \quad \Pr(\emptyset) = 0, \quad \Pr(\Omega) = 1,$$

is called the probability, and the triple $(\Gamma, \Omega, \Pr)$ is referred to as the probability space. The set function Pr assigns to every event A the real number Pr(A). The rules for this set function are, along with the formula above,

$$\Pr(A^c) = 1 - \Pr(A); \quad \Pr(A) \le \Pr(B); \quad \Pr(B \setminus A) = \Pr(B) - \Pr(A) \quad \text{for } A \subset B \in \Gamma.$$

The probability measure on $\Omega$ is thus a function $\Pr : \Gamma \to [0, 1]$ and it is generally derived with Lebesgue integrations that are defined on Borel sets. We introduced this formal concept because it can be used as the most general way to introduce axiomatically the probability theory (see e.g. Chung [1.1]). We will not follow this procedure but we will introduce heuristically stochastic variables and their probabilities.

Definition 1.1. (Stochastic variables)
A random (or stochastic) variable $X(\omega)$, $\omega \in \Omega$, is a real valued function defined on the sample space $\Omega$. In the following we omit the parameter $\omega$ whenever no confusion is possible.

Definition 1.2. (Probability of an event)
The probability of an event equals the number of elementary outcomes divided by the total number of all elementary outcomes, provided that all cases are equally likely.

Example
For the case of a discrete sample space with a finite number of elementary outcomes we have $\Omega = \{\omega_1, \dots, \omega_n\}$ and an event is given by $A = \{\omega_1, \dots, \omega_k\}$, $1 \le k \le n$. The probability of the event A is then $\Pr(A) = k/n$.
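As a quick illustration of Definition 1.2 (a Python sketch added here for illustration, not part of the original text), one can enumerate the sample space of two distinguishable dice and count the outcomes in the event $A = \{(i,j) \mid i + j = 4\}$; the exact value is 3/36 = 1/12.

    # Probability of the event i + j = 4 for two distinguishable dice.
    from itertools import product

    omega = list(product(range(1, 7), repeat=2))    # sample space: 36 outcomes
    A = [(i, j) for (i, j) in omega if i + j == 4]  # observable event

    # Definition 1.2: favorable outcomes / all outcomes
    print(len(A), "/", len(omega), "=", len(A) / len(omega))   # 3/36 ~ 0.0833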
Definition 1.3. (Probability distribution function and probability density)
In the continuous case, the probability distribution function (PDF) $F_X(x)$ of a vectorial stochastic variable $X = (X_1, \dots, X_n)$ is defined by the monotonically increasing real function

$$F_X(x_1, \dots, x_n) = \Pr(X_1 \le x_1, \dots, X_n \le x_n), \tag{1.1}$$

where we used the convention that the variable itself is written in upper case letters, whereas the actual values that this variable assumes are denoted by lower case letters. The probability density (PD) $p_X(x_1, \dots, x_n)$ of the random variable is then defined by

$$F_X(x_1, \dots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} p_X(u_1, \dots, u_n)\, du_1 \cdots du_n \tag{1.2}$$

and this leads to

$$\frac{\partial^n F_X}{\partial x_1 \cdots \partial x_n} = p_X(x_1, \dots, x_n). \tag{1.3}$$

Note that we can express (1.1) and (1.2) alternatively if we put

$$\Pr(x_{11} < X_1 \le x_{12}, \dots, x_{n1} < X_n \le x_{n2}) = \int_{x_{11}}^{x_{12}} \cdots \int_{x_{n1}}^{x_{n2}} p_X(x_1, \dots, x_n)\, dx_1 \cdots dx_n. \tag{1.1a}$$

The conditions to be imposed on the PD are given by the positiveness and the normalization condition

$$p_X(x_1, \dots, x_n) \ge 0; \qquad \int \cdots \int p_X(x_1, \dots, x_n)\, dx_1 \cdots dx_n = 1. \tag{1.4}$$

In the latter equation we used the convention that integrals without explicitly given limits refer to integrals extending from the lower boundary $-\infty$ to the upper boundary $\infty$.

In a continuous phase space the PD may contain Dirac delta functions,

$$p(x) = \sum_k q(k)\,\delta(x - k) + \hat{p}(x); \qquad q(k) = \Pr(x = k), \tag{1.5}$$
where q(k) represents the probability that the variable x of the discrete set equals the integer value k. We also dropped the index X in the latter formula. We can interpret it to correspond to a PD of a set of discrete states of probabilities q(k) that are embedded in a continuous phase space S. The normalization condition (1.4) yields now

$$\sum_k q(k) + \int_S \hat{p}(x)\, dx = 1.$$

Examples (discrete Bernoulli and Poisson distributions)
First we consider the Bernoulli distribution

$$\text{(i)} \quad q(k) = \Pr(x = k) = b(k, n, p) = \binom{n}{k} p^k (1 - p)^{n-k}; \quad k = 0, 1, \dots$$

and then we introduce the Poisson distribution

$$\text{(ii)} \quad \pi_k(\lambda t) = \Pr(x = k) = \frac{(\lambda t)^k}{k!}\exp(-\lambda t); \quad k = 0, 1, \dots.$$

In the appendix of this chapter we will give more details about the Poisson distribution. We derive there the Poisson distribution as a limit of the Bernoulli distribution,

$$\pi_k(\lambda t) = \lim_{n \to \infty} b(k, n, p = \lambda t/n).$$

In the following we will consider in almost all cases only continuous sets.

1.2. Averages

The sample space and the PD define together completely a stochastic variable. To introduce observable quantities we consider now averages. The expectation value (or the average, or the mean value) of a function $G(x_1, \dots, x_n)$ of the stochastic variables $x_1, \dots, x_n$ is defined by

$$\langle G(x_1, \dots, x_n)\rangle = \int \cdots \int G(x_1, \dots, x_n)\, p_X(x_1, \dots, x_n)\, dx_1 \cdots dx_n. \tag{1.6}$$

In the case of a discrete variable we must replace the integral in (1.6) by a summation. We obtain then, with the use of (1.5) for p(x),

$$\langle G(x_1, \dots, x_n)\rangle = \sum_{k_1} \cdots \sum_{k_n} G(k_1, \dots, k_n)\, q(k_1, \dots, k_n). \tag{1.7}$$
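The Poisson limit of the Bernoulli distribution stated above is easy to observe numerically. The following Python snippet (an illustrative sketch, not one of the book's programs; the value of $\lambda t$ is an arbitrary assumption) compares $b(k, n, \lambda t/n)$ with $\pi_k(\lambda t)$ for growing n.

    # Numerical check of pi_k(lambda*t) = lim_{n->oo} b(k, n, p = lambda*t/n).
    from math import comb, exp, factorial

    lam_t = 2.0                      # assumed value of lambda*t
    k = 3

    def bernoulli(k, n, p):          # b(k, n, p) = C(n, k) p^k (1-p)^(n-k)
        return comb(n, k) * p**k * (1 - p)**(n - k)

    poisson = lam_t**k * exp(-lam_t) / factorial(k)   # pi_k(lambda*t)

    for n in (10, 100, 1000, 10000):
        print(n, bernoulli(k, n, lam_t / n))
    print("Poisson limit:", poisson)  # the values above converge to this number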
There are two rules for the application of the averages:

(i) a and b are two deterministic constants and $G(x_1, \dots, x_n)$ and $H(x_1, \dots, x_n)$ are two functions of the random variables $x_1, \dots, x_n$. Then we have

$$\langle a\,G(x_1, \dots, x_n) + b\,H(x_1, \dots, x_n)\rangle = a\,\langle G(x_1, \dots, x_n)\rangle + b\,\langle H(x_1, \dots, x_n)\rangle, \tag{1.8a}$$

and

(ii)

$$\langle\langle G(x_1, \dots, x_n)\rangle\rangle = \langle G(x_1, \dots, x_n)\rangle. \tag{1.8b}$$

Now we consider two scalar random variables x and y with joint PD p(x, y). If we do not have more information (observed values) of y, we introduce the two marginal PD's $p_X(x)$ and $p_Y(y)$ of the single variables x and y,

$$p_X(x) = \int p(x, y)\, dy; \qquad p_Y(y) = \int p(x, y)\, dx, \tag{1.9a}$$

where we integrate over the phase spaces $S_x$ ($S_y$) of the variables x (y). The normalization condition (1.4) yields

$$\int p_X(x)\, dx = \int p_Y(y)\, dy = 1. \tag{1.9b}$$

Definition 1.4. (Independence of variables)
We consider n random variables $x_1, \dots, x_n$; $x_1$ is independent of the other variables $x_2, \dots, x_n$ if

$$\langle x_1 x_2 \cdots x_n\rangle = \langle x_1\rangle\langle x_2 \cdots x_n\rangle. \tag{1.10a}$$

We see easily that a sufficient condition to satisfy (1.10a) is

$$p(x_1, \dots, x_n) = p_1(x_1)\, p_{n-1}(x_2, \dots, x_n), \tag{1.10b}$$

where $p_k(\dots)$, $k < n$, denotes the marginal probability distribution of the corresponding variables.
The moments of a PD of a scalar variable x are given by

$$\langle x^n\rangle = \int x^n p(x)\, dx; \quad n \in \mathbb{N},$$

where n denotes the order of the moment. The first order moment $\langle x\rangle$ is the average of x and we introduce the variance $\sigma^2$ by

$$\sigma^2 = \langle (x - \langle x\rangle)^2\rangle = \langle x^2\rangle - \langle x\rangle^2 \ge 0. \tag{1.11}$$

The random variable $x - \langle x\rangle$ is called the deviation, and $\sigma$ the standard deviation. The average of the Fourier transform of a PD is called the characteristic function,

$$G(k_1, \dots, k_n) = \langle \exp(i k_r x_r)\rangle = \int \cdots \int p(x_1, \dots, x_n)\, \exp(i k_r x_r)\, dx_1 \cdots dx_n, \tag{1.12}$$

where we applied the summation convention $k_r x_r = \sum_{j=1}^{n} k_j x_j$. This function has the properties

$$G(0, \dots, 0) = 1; \qquad |G(k_1, \dots, k_n)| \le 1.$$

Example
The Gaussian (or normal) PD of a scalar variable x is given by

$$p(x) = (2\pi)^{-1/2}\exp(-x^2/2); \quad -\infty < x < \infty. \tag{1.13a}$$

Hence we obtain (see also EX 1.1)

$$\langle x^{2n}\rangle = \frac{(2n)!}{2^n\, n!}; \qquad \sigma^2 = 1; \qquad \langle x^{2n+1}\rangle = 0. \tag{1.13b}$$

A stochastic variable characterized by N(m, s) is a normally distributed variable with the average m and the variance s. The variable x distributed with the PD (1.13a) is thus called a normally distributed variable with N(0, 1).
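A quick numerical sanity check of the even moments (1.13b), as an illustrative Python sketch (not from the book's CD-ROM):

    # Monte Carlo check of <x^(2n)> = (2n)! / (2^n n!) for the N(0, 1) density.
    import numpy as np
    from math import factorial

    rng = np.random.default_rng(0)
    x = rng.standard_normal(2_000_000)

    for n in (1, 2, 3):
        exact = factorial(2 * n) / (2**n * factorial(n))   # 3, 15, 105 / 2n = 2,4,6 -> 1, 3, 15
        print(2 * n, (x**(2 * n)).mean(), exact)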
A Taylor expansion of the characteristic function G(k) of (1.13a) yields with (1.12)

$$G(k) = \sum_{n=0}^{\infty} \frac{(ik)^n}{n!}\langle x^n\rangle. \tag{1.14a}$$

We define the cumulants $\kappa_m$ by

$$\ln G(k) = \sum_{m=1}^{\infty} \frac{(ik)^m}{m!}\kappa_m. \tag{1.14b}$$

A comparison of equal powers of k gives

$$\kappa_1 = \langle x\rangle; \quad \kappa_2 = \langle x^2\rangle - \langle x\rangle^2 = \sigma^2; \quad \kappa_3 = \langle x^3\rangle - 3\langle x^2\rangle\langle x\rangle + 2\langle x\rangle^3; \; \dots \tag{1.14c}$$

Definition 1.5. (Conditional probability)
We assume that $A, B \in \Gamma$ are two random events of the set of observable events $\Gamma$. The conditional probability of A given B (or knowing B, or under the hypothesis of B) is defined by

$$\Pr(A \mid B) = \Pr(A \cap B)/\Pr(B); \quad \Pr(B) > 0.$$

Thus only events that occur simultaneously in A and B contribute to the conditional probability. Now we consider n random variables $x_1, \dots, x_n$ with the joint PD $p_n(x_1, \dots, x_n)$. We select a subset of variables $x_1, \dots, x_s$ and we define a conditional PD of the latter variables, knowing the remaining subset $x_{s+1}, \dots, x_n$, in the form

$$p_{s|n-s}(x_1, \dots, x_s \mid x_{s+1}, \dots, x_n) = p_n(x_1, \dots, x_n)/p_{n-s}(x_{s+1}, \dots, x_n). \tag{1.15}$$

Equation (1.15) is called Bayes's rule and we use the marginal PD

$$p_{n-s}(x_{s+1}, \dots, x_n) = \int p_n(x_1, \dots, x_n)\, dx_1 \cdots dx_s, \tag{1.16}$$

where the integration is over the phase space of the variables $x_1 \cdots x_s$. Sometimes it is useful to write Bayes's rule (1.15) in the form

$$p_n(x_1, \dots, x_n) = p_{n-s}(x_{s+1}, \dots, x_n)\, p_{s|n-s}(x_1, \dots, x_s \mid x_{s+1}, \dots, x_n). \tag{1.15'}$$

We can also rearrange (1.15') and we obtain

$$p_n(x_1, \dots, x_n) = p_s(x_1, \dots, x_s)\, p_{n-s|s}(x_{s+1}, \dots, x_n \mid x_1, \dots, x_s). \tag{1.15''}$$

Definition 1.6. (Conditional averages)
The conditional average of the random variable $x_1$, knowing $x_2, \dots, x_n$, is defined by

$$\langle x_1 \mid x_2, \dots, x_n\rangle = \int x_1\, p_{1|n-1}(x_1 \mid x_2, \dots, x_n)\, dx_1 = \int x_1\, p_n(x_1, x_2, \dots, x_n)\, dx_1 \Big/ p_{n-1}(x_2, \dots, x_n). \tag{1.17}$$

Note that (1.17) is a random variable. The rules for this average are in analogy to (1.8),

$$\langle a x_1 + b x_2 \mid y\rangle = a\langle x_1 \mid y\rangle + b\langle x_2 \mid y\rangle, \qquad \langle\langle x \mid y\rangle\rangle = \langle x\rangle. \tag{1.18}$$

Example
We consider a scalar stochastic variable x with its PD p(x). An event A is given by $x \in [a, b]$. Hence we have

$$p(x \mid A) = 0 \quad \forall x \notin [a, b], \qquad p(x \mid A) = p(x) \Big/ \int_a^b p(s)\, ds; \quad x \in [a, b].$$

The conditional average is thus given by

$$\langle x \mid A\rangle = \int_a^b x\, p(x)\, dx \Big/ \int_a^b p(s)\, ds.$$

For an exponentially distributed variable x in $[0, \infty)$ we have $p(x) = \lambda\exp(-\lambda x)$. Thus we obtain for $a > 0$ the result

$$\langle x \mid x > a\rangle = \int_a^{\infty} x \exp(-\lambda x)\, dx \Big/ \int_a^{\infty} \exp(-\lambda x)\, dx = a + 1/\lambda.$$
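The shift $a + 1/\lambda$ is easy to confirm by sampling; a hedged Python sketch (illustrative, with assumed values of $\lambda$ and a):

    # Sampling check of <x | x > a> = a + 1/lambda for the exponential PD.
    import numpy as np

    rng = np.random.default_rng(1)
    lam, a = 0.5, 2.0                             # assumed parameter values
    x = rng.exponential(scale=1/lam, size=1_000_000)

    print(x[x > a].mean(), "vs", a + 1/lam)       # both close to 4.0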
1.3. Stochastic Processes, the Kolmogorov Criterion and Martingales

In many applications (e.g. in irregular phenomena like blood flow, capital investment, or motions of molecules, etc.) one encounters a family of random variables that depend on continuous or discrete parameters like the time or positions. We refer to $\{X(t, \omega),\, t \in I,\, \omega \in \Omega\}$, where I is a set of (continuous or discrete) parameters and $X(t, \omega) \in \mathbb{R}^n$, as a stochastic process (random process or stochastic (random) function). If I is a discrete set it is more convenient to call $X(t, \omega)$ a time series and to use the phrase process only for continuous sets. If the parameter is the time t then we use $I = [t_0, T]$, where $t_0$ is an initial instant. For a fixed value of $t \in I$, $X(t, \omega)$ is a random variable, and for every fixed value of $\omega \in \Omega$ (hence for every observation) $X(t, \omega)$ is a real valued function. Any observation of this process is called a sample function (realization, trajectory, path or orbit) of the process.

We consider now a finite variate PD of a process and we define the time dependent probability distribution functions (PDF) in analogy to (1.1) in the form

$$F_X(x, t) = \Pr(X(t) \le x); \quad F_{X,Y}(x, t; y, s) = \Pr(X(t) \le x,\, Y(s) \le y); \quad F_{X_1,\dots,X_n}(x_1, t_1; \dots; x_n, t_n) = \Pr(X_1(t_1) \le x_1, \dots, X_n(t_n) \le x_n), \tag{1.19}$$

where we omit the dependence of the process X(t) on the chance variable $\omega$ whenever no confusion is possible. The system of PDF's satisfies two classes of conditions:

(i) Symmetry: if $\{k_1, \dots, k_n\}$ is a permutation of $1, \dots, n$ then we obtain

$$F_{X_1,\dots,X_n}(x_{k_1}, t_{k_1}; \dots; x_{k_n}, t_{k_n}) = F_{X_1,\dots,X_n}(x_1, t_1; \dots; x_n, t_n). \tag{1.19a}$$

(ii) Compatibility:

$$F_{X_1,\dots,X_n}(x_1, t_1; \dots; x_r, t_r; \infty, t_{r+1}; \dots; \infty, t_n) = F_{X_1,\dots,X_r}(x_1, t_1; \dots; x_r, t_r). \tag{1.19b}$$
The rules to calculate averages are still given by (1.6), where the corresponding PD is derived by (1.3) and where the PDF's of (1.19) are used,

$$p(x_1, t_1; \dots; x_n, t_n) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n} F_{X_1,\dots,X_n}(x_1, t_1; \dots; x_n, t_n).$$

One would expect that a stochastic process with a high rate of irregularity (expressed e.g. by high values of intensity constants, see Chapter 2) would exhibit sample functions (SF) with a high degree of irregularity like jumps or singularities. However, Kolmogorov's criterion gives a condition for continuous SF:

Theorem 1.1. (Kolmogorov's criterion)
A bivariate distribution is necessary to give information about the possibility of continuous SF. If and only if (IFF)

$$\langle |X(t_1) - X(t_2)|^a\rangle \le c\,|t_1 - t_2|^{1+b}; \quad a, b, c > 0; \quad t_1, t_2 \in [t_0, T], \tag{1.20}$$

then the stochastic process X(t) possesses almost certainly (AC; this symbol is discussed in Chapter 5) continuous SF. The latter are, however, nowhere differentiable: the derivatives exhibit jumps, and the higher order derivatives singularities.

We will use later the Kolmogorov criterion to investigate SF of Brownian motions and of stochastic integrals.

Definition 1.7. (Stationary process)
A process x(t) is stationary if its PD is independent of a time shift $\tau$,

$$p(x_1, t_1 + \tau; \dots; x_n, t_n + \tau) = p(x_1, t_1; \dots; x_n, t_n). \tag{1.21a}$$

Equation (1.21a) implies that all moments are also independent of the time shift,

$$\langle x(t_1 + \tau)\, x(t_2 + \tau) \cdots x(t_k + \tau)\rangle = \langle x(t_1)\, x(t_2) \cdots x(t_k)\rangle; \quad \text{for } k = 1, 2, \dots. \tag{1.21b}$$

A consequence of (1.21a) is given by

$$\langle x(t)\rangle = \langle x\rangle, \text{ independent of } t; \qquad \langle x(t)\, x(t + \tau)\rangle = \langle x(0)\, x(\tau)\rangle = g(\tau). \tag{1.21c}$$
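Anticipating the Brownian motion of Section 1.7, whose increments are N(0, |t₁ − t₂|) distributed, criterion (1.20) holds with a = 4, b = 1, c = 3, since the fourth moment of a zero-mean Gaussian equals three times its squared variance (cf. (1.13b)). A hedged Python sketch of this check (illustrative only, not one of the book's FORTRAN programs):

    # Check <|B(t1) - B(t2)|^4> = 3 |t1 - t2|^2, i.e. (1.20) with a=4, b=1, c=3.
    import numpy as np

    rng = np.random.default_rng(2)
    t1, t2 = 0.7, 0.4
    # B(t1) - B(t2) ~ N(0, t1 - t2) for the Brownian motion of Section 1.7
    dB = rng.normal(0.0, np.sqrt(t1 - t2), size=1_000_000)

    print((dB**4).mean(), "vs", 3 * (t1 - t2)**2)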
The correlation matrix is defined by

$$C_{ik} = \langle z_i(t_1)\, z_k(t_2)\rangle; \qquad z_i(t) = x_i(t) - \langle x_i(t)\rangle. \tag{1.22}$$

Thus, we have

$$C_{ik} = \langle x_i(t_1)\, x_k(t_2)\rangle - \langle x_i(t_1)\rangle\langle x_k(t_2)\rangle. \tag{1.23}$$

The diagonal elements of this matrix are called autocorrelation functions (we do not employ a summation convention),

$$C_{ii} = \langle z_i(t_1)\, z_i(t_2)\rangle.$$

The nondiagonal elements are referred to as cross-correlation functions. The correlation coefficient (the nondimensional correlation) is defined by

$$r_{ik} = \frac{\langle x_i(t_1)\, x_k(t_2)\rangle - \langle x_i(t_1)\rangle\langle x_k(t_2)\rangle}{\sqrt{\langle x_i^2(t_1)\rangle - \langle x_i(t_1)\rangle^2}\,\sqrt{\langle x_k^2(t_2)\rangle - \langle x_k(t_2)\rangle^2}}. \tag{1.24}$$

For stationary processes we have

$$C_{ik}(t_1, t_2) = \langle z_i(0)\, z_k(t_2 - t_1)\rangle = C_{ik}(t_2 - t_1); \qquad C_{ki}(t_1, t_2) = \langle z_k(t_1)\, z_i(t_2)\rangle = \langle z_k(t_1 - t_2)\, z_i(0)\rangle = C_{ik}(t_1 - t_2). \tag{1.25}$$

A stochastic function with $C_{ik} = 0$ is called an uncorrelated function and we obtain

$$\langle x_i(t_1)\, x_k(t_2)\rangle = \langle x_i(t_1)\rangle\langle x_k(t_2)\rangle. \tag{1.26}$$

Note that the condition of noncorrelation (1.26) is weaker than the condition of statistical independence.

Example
We consider the process $X(t) = U_1\cos t + U_2\sin t$, where $U_1$, $U_2$ are stochastic variables independent of the time. Their moments are given by $\langle U_k\rangle = 0$, $\langle U_k^2\rangle = a = \text{const}$, $k = 1, 2$, and $\langle U_1 U_2\rangle = 0$. Hence we obtain $\langle X\rangle = 0$ and $C_{XX}(s, t) = a\cos(t - s)$.
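The stationary autocorrelation of this example is easy to reproduce with an ensemble average. The Python sketch below is illustrative; it assumes Gaussian $U_1$, $U_2$ with a = 1, although any zero-mean, uncorrelated pair would serve.

    # Ensemble estimate of C(s, t) = <X(s) X(t)> for X(t) = U1 cos t + U2 sin t.
    import numpy as np

    rng = np.random.default_rng(3)
    M = 200_000                            # realizations in the ensemble
    U1, U2 = rng.standard_normal((2, M))   # <Uk> = 0, <Uk^2> = 1, <U1 U2> = 0

    s, t = 0.5, 1.7
    Xs = U1 * np.cos(s) + U2 * np.sin(s)
    Xt = U1 * np.cos(t) + U2 * np.sin(t)

    print((Xs * Xt).mean(), "vs", np.cos(t - s))   # depends only on t - s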
Remark (Statistical mechanics and stochastic differential equations)
In Chapter 2 we will see that stochastic differential equations or "stochastic mechanics" can be used to investigate a single mechanical system in the presence of stochastic influences (white or colored noise). We use concepts that are similar to those developed in statistical mechanics, such as probability distribution functions, moments, Markov properties, ergodicity, etc. We solve the stochastic differential equation (analytically, but in most cases numerically) and one solution represents a realization of the system. Repeating the solution process we obtain another realization, and in this way we are able to calculate the moments of the system. An alternative way to calculate the moments would be to solve the Fokker-Planck equation (see Chapter 3) and then use the corresponding solution to determine the moments. To establish the Fokker-Planck equation we will use again the coefficients of the stochastic differential equation.

Statistical mechanics works with the use of ensemble averages. Rather than defining a single quantity (e.g. a particle) with a PD p(x), one introduces a fictitious set of an arbitrarily large number M of quantities (e.g. particles or thermodynamic systems) and these M non-interacting quantities define the ensemble. In the case of interacting particles, the ensemble is made up of M different realizations of the N particles. In general, these quantities have different characteristic values (temperature, or energy, or values of N) x, in a common range. The number of quantities having a characteristic value between x and x + dx defines the PD. Therefore, the PD is replaced by a density function for a large number of samples. One observes a large number of quantities and averages the results. Since, by definition, the quantities do not interact, one obtains in this way a physical realization of the ensemble. The averages calculated with this density function are referred to as ensemble averages, and a system where ensemble averages equal time averages is called an ergodic system. In stochastic mechanics we say that a process with the property that the averages defined in accordance with (1.6) equal the time averages represents an ergodic process.

Another stochastic process that possesses SF of some regularity is called a martingale. This name is related to "fair games" and we give a discussion of this expression in a moment. In everyday language, we can state that the best prediction of a martingale process X(t), conditional on the path of all Brownian motions up to s < t, is given by the previous value X(s). To make this idea precise we formulate the following theorem:

Theorem 1.2. (Adapted process)
We consider a probability space $(\Gamma, \Omega, \Pr)$ with an increasing family (of sigma algebras of $\Gamma$) of events $\Gamma_s \subseteq \Gamma_t$, $0 \le s < t$ (see Section 1.1). A process $X(s, \omega)$; $\omega \in \Omega$, $s \in [0, \infty)$, is called $\Gamma_s$-adapted if it is $\Gamma_s$-measurable. A $\Gamma_s$-adapted process can be expanded into (the limit of) a sequence of Brownian motions $B_u(\omega)$ with $u \le s$ (but not $u > s$).

Example
For $n = 2, 3, \dots$; $0 < \lambda < t$ we see that the processes

(i) $G_1(t, \omega) = B_{t/n}(\omega)$, $G_2(t, \omega) = B_{t-\lambda}(\omega)$,
(ii) $G_3(t, \omega) = B_{nt}(\omega)$, $G_4(t, \omega) = B_{t+\lambda}(\omega)$,

are $\Gamma_t$-adapted, respectively not adapted.

Theorem 1.3. (Martingale process)
A process X(t) is called a martingale IFF it is adapted and the condition

$$\langle X_t \mid \Gamma_s\rangle = X_s \quad \forall\; 0 < s < t < \infty \tag{1.27}$$

is almost certainly (AC) satisfied. If we replace the equality sign in (1.27) by $\le$ ($\ge$) we obtain a super (sub) martingale. We note that martingales have no other discontinuities than at worst finite jumps (see Arnold [1.2]).

Note that (1.27) defines a stochastic process. Its expectation $\langle\langle X_t \mid \Gamma_s\rangle\rangle = \langle X_s\rangle$, $s < t$, is a deterministic function. An interesting property of a martingale is expressed by

$$\Pr\Big(\sup_{[a,b]} |X(t)| > c\Big) \le \langle |X(b)|^p\rangle/c^p; \quad c > 0; \; p \ge 1, \tag{1.28}$$

where sup is the supremum of the embraced process in the interval [a, b]. Equation (1.28) is a particular version of the Chebyshev inequality that will be derived in EX 1.2.
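For the Brownian motion (a martingale, see Section 1.7), inequality (1.28) with p = 2 can be probed numerically. The following hedged Python sketch simulates discrete paths on [0, T] (illustrative only; it is not the derivation requested in EX 1.2, and the values of T, c are assumptions):

    # Monte Carlo probe of Pr(sup |X| > c) <= <X(b)^2>/c^2 for Brownian paths.
    import numpy as np

    rng = np.random.default_rng(4)
    M, n, T, c = 20_000, 500, 1.0, 1.5
    dB = rng.normal(0.0, np.sqrt(T / n), size=(M, n))
    B = np.cumsum(dB, axis=1)                    # Brownian paths on (0, T]

    lhs = (np.abs(B).max(axis=1) > c).mean()     # Pr(sup |B(t)| > c)
    rhs = (B[:, -1]**2).mean() / c**2            # <B(T)^2>/c^2 = T/c^2
    print(lhs, "<=", rhs)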
We apply later the concept of martingales to Wiener processes and to stochastic integrals.

Finally we give an explanation of the phrase "martingale". A gambler is involved in a fair game and he has at the start the capital X(s). Then he should possess, in the mean, at the instant t > s the original capital X(s). This is expressed in terms of the conditional mean value $\langle X_t \mid X_s\rangle = X_s$. Etymologically, this term comes from French and means a system of betting which seeks the amount to be wagered after each win or loss.

1.4. The Gaussian Distribution and Limit Theorems

In relation (1.13) we have already introduced a special case of the Gaussian (normally distributed) PD (GD) for a scalar variable. A generalization of (1.13) is given by the N(m, $\sigma^2$) PD

$$p(x) = (2\pi\sigma^2)^{-1/2}\exp[-(x - m)^2/(2\sigma^2)]; \quad \forall x \in (-\infty, \infty), \tag{1.29}$$

where m is the average and $\sigma^2 = \langle x^2\rangle - m^2$ is the variance. The multivariate form of the Gaussian PD for the set of variables $x_1, \dots, x_n$ has the form

$$p(x_1, \dots, x_n) = N\exp\Big({-\tfrac{1}{2}} A_{ik} x_i x_k - b_k x_k\Big), \tag{1.30a}$$

where we use a summation convention. The normalization constant N is given by

$$N = (2\pi)^{-n/2}\,[\mathrm{Det}(A)]^{1/2}\exp\Big({-\tfrac{1}{2}} A^{-1}_{ik} b_i b_k\Big). \tag{1.30b}$$

The characteristic function of (1.30) has the form

$$G(k_1, \dots, k_n) = \exp\Big({-i\, b_u A^{-1}_{uv} k_v - \tfrac{1}{2} A^{-1}_{uv} k_u k_v}\Big). \tag{1.31}$$

An expansion of (1.31) WRT powers of k yields the moments

$$\langle x_i\rangle = -A^{-1}_{ik} b_k, \tag{1.32a}$$

and the covariance is given by

$$C_{ik} = \langle (x_i - \langle x_i\rangle)(x_k - \langle x_k\rangle)\rangle = A^{-1}_{ik}. \tag{1.32b}$$
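Relations (1.32a,b) are easy to verify by sampling from (1.30a). The sketch below (illustrative Python, with an assumed 2x2 matrix A and vector b) uses the equivalent mean/covariance parametrization $m = -A^{-1}b$, $C = A^{-1}$:

    # Sampling check of (1.32a,b): mean = -A^{-1} b, covariance = A^{-1}.
    import numpy as np

    rng = np.random.default_rng(5)
    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])           # assumed positive definite matrix
    b = np.array([1.0, -0.5])

    C = np.linalg.inv(A)                 # predicted covariance (1.32b)
    m = -C @ b                           # predicted mean (1.32a)

    x = rng.multivariate_normal(m, C, size=500_000)
    print(x.mean(axis=0), m)             # empirical vs predicted mean
    print(np.cov(x.T), C)                # empirical vs predicted covariance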
This indicates that the GD is completely determined once the mean value and the covariance matrix are evaluated. The n variables are uncorrelated, and thus independent, if A⁻¹ and hence A itself are diagonal.

The higher moments of the n-variate GD with zero mean are particularly easy to calculate. To show this, we recall that for zero mean we have b_k = 0 and we obtain the characteristic function, with the use of (1.31) and (1.32), in the form
G = exp(−½⟨x_u x_v⟩ z_u z_v)
  = 1 − ½⟨x_u x_v⟩ z_u z_v + (1/8)⟨x_u x_v⟩⟨x_p x_q⟩ z_u z_v z_p z_q − ⋯;  u,v,p,q,… = 1,2,….   (1.33)
A comparison of equal powers of z in (1.33) and in a Taylor expansion of the exponent in (1.31) shows that all odd moments vanish
⟨x_a x_b x_c⟩ = ⟨x_a x_b x_c x_d x_e⟩ = ⋯ = 0.
We also obtain, with restriction to n = 2 (bivariate GD),
⟨x_i⁴⟩ = 3⟨x_i²⟩²;  ⟨x_i³x_p⟩ = 3⟨x_i²⟩⟨x_i x_p⟩,  i,p = 1,2;   (1.34)
⟨x_1²x_2²⟩ = ⟨x_1²⟩⟨x_2²⟩ + 2⟨x_1x_2⟩².
In the case of a trivariate PD we face additional terms of the type
⟨x_k²x_p x_r⟩ = ⟨x_k²⟩⟨x_p x_r⟩ + 2⟨x_k x_p⟩⟨x_k x_r⟩.
The higher order variates and higher order moments can be calculated in analogy to the results (1.34). We also give the explicit formula of the bivariate Gaussian (see also EX 1.3)
p(x,y) = N_2⁻¹ exp{−[ξ²/a − 2rξη/√(ab) + η²/b]/[2(1 − r²)]};   (1.35a)
ξ = x − ⟨x⟩;  η = y − ⟨y⟩;  N_2 = 2π√(ab(1 − r²));  a = σ_x²;  b = σ_y²,   (1.35b)
where r is the cross correlation coefficient defined in (1.24). For σ_x = σ_y = 1 and ⟨x⟩ = ⟨y⟩ = 0 in (1.35) we can expand the latter
formula and we obtain
p(x,y) = (2π)⁻¹ exp[−(x² + y²)/2] Σ_{k=0}^∞ (r^k/k!) H_k(x)H_k(y),   (1.36)
where H_k(x) is the k-th order Hermite polynomial (see Abramowitz and Stegun [1.3]). Equation (1.36) is the basis of the "Hermitian-chaos" expansion in the theory of stochastic partial differential equations. In EX 1.3 we show that conditional probabilities of the GD (1.35a) are Gaussian themselves.

Now we consider two limit theorems. The first of them is related to the GD and we introduce the second one for later use.

1.4.1. The central limit theorem

We consider the random variable
U = (x_1 + ⋯ + x_n)/√n;  ⟨x_k⟩ = 0,   (1.37)
where the x_k are identically independently distributed (IID) (but not necessarily normal) variables with zero mean and variance σ² = ⟨x_k²⟩. We find easily ⟨U⟩ = 0 and ⟨U²⟩ = σ². The central limit theorem says that U tends in the limit n → ∞ to a N(0,σ²) variable with a PD given by (1.13a). To prove this we use the independence of the variables x_k and we perform the calculation of the characteristic function of the variable U with the aid of (1.12)
G_U(k) = ∫dx_1 p(x_1) ⋯ ∫dx_n p(x_n) exp[ik(x_1 + ⋯ + x_n)/√n]
       = [G_x(k/√n)]^n = [1 − k²σ²/(2n) + O(n^{−3/2})]^n → exp(−k²σ²/2) for n → ∞.   (1.38)
We introduced in the second line of (1.38) the characteristic function of one of the individual random variables according to (1.14a); (1.38) is the characteristic function of a GD that corresponds indeed to N(0,σ²). Note that this result is independent of the particular form of the individual PD's p(x); it is only required that p(x) has finite moments. The central limit theorem explains why the Gaussian PD plays a prominent role in probability and stochastics.
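A quick numerical illustration of the theorem (a Python/NumPy sketch; the choice of uniform variables and the sample sizes are arbitrary): sums of n IID centered uniform variables, scaled as in (1.37), reproduce the N(0,σ²) moments with σ² = 1/12.

    import numpy as np

    rng = np.random.default_rng(1)
    n, samples = 100, 100000
    # x_k uniform on (-1/2, 1/2): zero mean, variance sigma^2 = 1/12
    x = rng.uniform(-0.5, 0.5, size=(samples, n))
    U = x.sum(axis=1)/np.sqrt(n)           # the variable (1.37)

    print(U.var(), 1/12)                   # variance close to sigma^2
    print((U**4).mean(), 3*(1/12)**2)      # 4th moment close to 3*sigma^4 (Gaussian)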
1.4.2. The law of the iterated logarithm

We give here only this theorem and refer the reader for its derivation to the book of Chow and Teicher [1.4]. y_n is the partial sum of n IID variables
y_n = x_1 + ⋯ + x_n;  ⟨x_n⟩ = β;  ⟨(x_n − β)²⟩ = σ².   (1.39)
The theorem of the iterated logarithm states that there exists AC an asymptotic limit
−σ ≤ lim_{n→∞} (y_n − nβ)/√(2n ln[ln(n)]) ≤ σ.   (1.40)
Equation (1.40) is particularly valuable in the case of estimates of stochastic functions and we will use it later to investigate Brownian motions. We will give a numerical verification of (1.40) in program F18.

1.5. Transformation of Stochastic Variables

We consider transformations of an n-dimensional set of stochastic variables x_1,…,x_n with the PD p_{x_1⋯x_n}(x_1,…,x_n). First we introduce the PD of a linear combination of random variables
z = Σ_{k=1}^n α_k x_k,   (1.41a)
where the α_k are deterministic constants. The PD of the stochastic variable z is then defined by
p_z(z) = ∫dx_1 ⋯ ∫dx_n δ(z − Σ_k α_k x_k) p_{x_1⋯x_n}(x_1,…,x_n).   (1.41b)
Now we investigate transformations of the stochastic variables x_1,…,x_n. The new variables are defined by
u_k = u_k(x_1,…,x_n),  k = 1,…,n.   (1.42)
The inversion of this transformation and the Jacobian are
x_k = g_k(u_1,…,u_n);  J = ∂(x_1,…,x_n)/∂(u_1,…,u_n).   (1.43)
We infer from an expansion of the probability measure (1.1a) that
dp_{x_1⋯x_n} = Pr(x_1 ≤ X_1 ≤ x_1 + dx_1,…, x_n ≤ X_n ≤ x_n + dx_n)
            = p_{x_1⋯x_n}(x_1,…,x_n)dx_1 ⋯ dx_n  for dx_k → 0, k = 1,…,n.   (1.44a)
Equation (1.44a) represents the elementary probability measure that the variables are located in the hyper plane
Π_{k=1}^n [x_k, x_k + dx_k].
The principle of invariant elementary probability measure states that this measure is invariant under transformations of the coordinate system. Thus, we obtain the transformation
dp_{u_1⋯u_n} = dp_{x_1⋯x_n}.   (1.44b)
This yields the transformation rule for the PD's
p_{u_1⋯u_n}(u_1(x_1,…,x_n),…,u_n(x_1,…,x_n)) = |det(J)| p_{x_1⋯x_n}(x_1,…,x_n).   (1.45)

Example (The Box-Muller method)
As an application we introduce the transformation method of Box-Muller to generate a GD. There are two stochastic variables given in an elementary cube
p(x_1,x_2) = {1 for 0 ≤ x_1 ≤ 1, 0 ≤ x_2 ≤ 1; 0 elsewhere}.   (1.46)
Note that the bivariate PD is already normalized. Now we introduce the new variables
y_1 = √(−2 ln x_1) cos(2πx_2);
y_2 = √(−2 ln x_1) sin(2πx_2).   (1.47)
The inversion of (1.47) is
x_1 = exp[−(y_1² + y_2²)/2];  x_2 = (2π)⁻¹ arctan(y_2/y_1).
According to (1.45) we obtain the new bivariate PD
p(y_1,y_2) = p(x_1,x_2)|∂(x_1,x_2)/∂(y_1,y_2)| = (2π)⁻¹ exp[−(y_1² + y_2²)/2],   (1.48)
and this is the PD of two independent N(0,1) variables. ♣
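The transformation is easy to implement; a minimal sketch (Python with NumPy assumed; sample size arbitrary) that generates Gaussian pairs from uniform deviates and checks the first moments:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100000
    x1 = 1.0 - rng.random(n)        # uniform on (0,1], avoids log(0), Eq. (1.46)
    x2 = rng.random(n)

    y1 = np.sqrt(-2*np.log(x1))*np.cos(2*np.pi*x2)   # Eq. (1.47)
    y2 = np.sqrt(-2*np.log(x1))*np.sin(2*np.pi*x2)

    # y1, y2 should be independent N(0,1) variables, Eq. (1.48)
    print(y1.mean(), y1.var())      # close to 0 and 1
    print(np.mean(y1*y2))           # close to 0 (uncorrelated)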
Until now we have only covered stochastic variables that are time-independent, or stochastic processes for the case that all variables belong to the same instant. In the next section we discuss a property that is rather typical for stochastic processes.

1.6. The Markov Property

A process is called a Markov (or Markovian) process if the conditional PD at a given time t_n depends only on the immediately prior time t_{n−1}. This means that for t_1 < t_2 < ⋯ < t_n
p_{1|n−1}(y_n,t_n | y_1,t_1; …; y_{n−1},t_{n−1}) = p_{1|1}(y_n,t_n | y_{n−1},t_{n−1}),   (1.49)
and the quantity p_{1|1}(y_n,t_n | y_{n−1},t_{n−1}) is referred to as the transition probability distribution (TPD). A Markov process is thus completely defined if we know the two functions p_1(y_1,t_1) and p_{1|1}(y_2,t_2 | y_1,t_1) for t_1 < t_2. Thus, we obtain for t_1 < t_2 (see (1.15) and note that we use a semicolon to separate coordinates that belong to different instants)
p_2(y_1,t_1; y_2,t_2) = p_1(y_1,t_1) p_{1|1}(y_2,t_2 | y_1,t_1),   (1.50.1)
and for t_1 < t_2 < t_3
p_3(y_1,t_1; y_2,t_2; y_3,t_3) = p_1(y_1,t_1) p_{1|1}(y_2,t_2 | y_1,t_1) p_{1|1}(y_3,t_3 | y_2,t_2).   (1.50.2)
We integrate equation (1.50.2) over the variable y_2 and we obtain
p_2(y_1,t_1; y_3,t_3) = p_1(y_1,t_1) ∫ p_{1|1}(y_2,t_2 | y_1,t_1) p_{1|1}(y_3,t_3 | y_2,t_2)dy_2.   (1.51)
Now we use
p_{1|1}(y_3,t_3 | y_1,t_1) = p_2(y_1,t_1; y_3,t_3)/p_1(y_1,t_1),
and we obtain from (1.51) the Chapman-Kolmogorov equation
p_{1|1}(y_3,t_3 | y_1,t_1) = ∫ p_{1|1}(y_2,t_2 | y_1,t_1) p_{1|1}(y_3,t_3 | y_2,t_2)dy_2.   (1.52)
It is easy to verify that a particular solution of (1.52) is given by
p_{1|1}(y_2,t_2 | y_1,t_1) = [2π(t_2 − t_1)]^{−1/2} exp{−(y_2 − y_1)²/[2(t_2 − t_1)]}.   (1.53)
We give in EX 1.4 hints how to verify (1.53). We can also integrate the identity (1.50.1) over y_1 and we obtain
p_1(y_2,t_2) = ∫ p_1(y_1,t_1) p_{1|1}(y_2,t_2 | y_1,t_1)dy_1.   (1.54)
The latter relation is an integral equation for the function p_1(y_2,t_2). EX 1.5 gives hints to show that the solution of (1.54) is the Gaussian PD
p_1(y,t) = (2πt)^{−1/2} exp[−y²/(2t)];  lim_{t→0+} p_1(y,t) = δ(y).   (1.55)
In Chapter 3 we use the Chapman-Kolmogorov equation (1.52) to derive the master equation, which is in turn applied to deduce the Fokker-Planck equation.
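A direct numerical check that the kernel (1.53) satisfies (1.52) (a Python/NumPy sketch; the times, endpoints and quadrature grid are arbitrary choices):

    import numpy as np

    def p(y2, t2, y1, t1):
        """Wiener transition density (1.53)."""
        dt = t2 - t1
        return np.exp(-(y2 - y1)**2/(2*dt))/np.sqrt(2*np.pi*dt)

    t1, t2, t3, y1, y3 = 0.5, 1.2, 2.0, 0.3, -0.7
    y2 = np.linspace(-20.0, 20.0, 20001)                    # quadrature grid
    dy = y2[1] - y2[0]

    lhs = p(y3, t3, y1, t1)
    rhs = np.sum(p(y2, t2, y1, t1)*p(y3, t3, y2, t2))*dy    # right side of (1.52)
    print(lhs, rhs)                                         # agree to quadrature accuracy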
1.6.1. Stationary Markov processes

Stationary Markovian processes are defined by a PD and transition probabilities that depend only on the time differences. The most important example is the Ornstein-Uhlenbeck process that we will treat in Sections 2.1.3 and 3.4. There we will prove the formulas for its PD
p_1(y) = (2π)^{−1/2} exp(−y²/2),   (1.56.1)
and the transition probability
p_{1|1}(y_2,t_2 | y_1,t_1) = [2π(1 − u²)]^{−1/2} exp[−(y_2 − uy_1)²/(2(1 − u²))];
u = exp(−τ);  τ = t_2 − t_1;  lim_{t_2→t_1} p_{1|1}(y_2,t_2 | y_1,t_1) = δ(y_2 − y_1).   (1.56.2)
The Ornstein-Uhlenbeck process is thus stationary, Gaussian and Markovian. A theorem of Doob [1.5] states that it is, apart from the trivial process where all variables are independent, the only process that satisfies all three properties listed above. We continue to consider stationary Markov processes in Section 3.1.

1.7. The Brownian Motion

Brown discovered in the year 1828 that pollen submerged in fluids shows, under the collisions with the fluid molecules, a completely irregular movement. This process is labeled y := B_t(ω), where the subscript is the time. It is also called a Wiener (white noise) process and labeled with the symbol W_t (WP); it is identical to the Brownian motion: W_t = B_t. The WP is a Gaussian [it has the PD (1.55)] and a Markov process. Note also that the PD of the WP, given by (1.55), satisfies a parabolic partial differential equation (called the Fokker-Planck equation, see Section 3.2)
∂p/∂t = ½ ∂²p/∂y².   (1.57)
We calculate the characteristic function G(u) and we obtain according to (1.12)
G(u) = ⟨exp(iuW_t)⟩ = exp(−u²t/2),   (1.58a)
and we obtain the moments in accordance with (1.13b)
⟨W_t^{2k}⟩ = (2k)! t^k/(2^k k!);  ⟨W_t^{2k+1}⟩ = 0;  k ∈ N_0.   (1.58b)
We use the Markovian properties now to prove the independence of Brownian increments. The latter are defined by
y_1, y_2 − y_1, …, y_n − y_{n−1}  with y_k := W_{t_k};  t_1 < ⋯ < t_n.   (1.59)
We calculate explicitly the joint distributions given by (1.50) and we obtain with the use of (1.53) and (1.55)
p_2(y_1,t_1; y_2,t_2) = [(2π)² t_1(t_2 − t_1)]^{−1/2} exp{−y_1²/(2t_1) − (y_2 − y_1)²/[2(t_2 − t_1)]},   (1.60)
and
p_3(y_1,t_1; y_2,t_2; y_3,t_3) = [(2π)³ t_1(t_2 − t_1)(t_3 − t_2)]^{−1/2}
  × exp{−y_1²/(2t_1) − (y_2 − y_1)²/[2(t_2 − t_1)] − (y_3 − y_2)²/[2(t_3 − t_2)]};   (1.61)
p_4(y_1,t_1; y_2,t_2; y_3,t_3; y_4,t_4) = [2π(t_4 − t_3)]^{−1/2} p_3(y_1,t_1; y_2,t_2; y_3,t_3)
  × exp{−(y_4 − y_3)²/[2(t_4 − t_3)]}.
We see that the joint PD's of the variables y_1, y_2 − y_1, y_3 − y_2, y_4 − y_3 are given in (1.60) and (1.61) in a factorized form and this implies the independence of these variables. To prove the independence of the remaining variables y_5 − y_4,…, y_n − y_{n−1} we would only have to continue the process of constructing joint PD's with the aid of (1.49). In EX 1.6 we prove the following property
⟨y_1(t_1) y_2(t_2)⟩ = min(t_1,t_2) = t_1 ∧ t_2.   (1.62)
Equation (1.62) also demonstrates that the Brownian motion is not a stationary process, since the autocorrelation does not depend on the time difference τ = t_2 − t_1 but on t_1 ∧ t_2. To apply Kolmogorov's criterion (1.20) we choose a = 2 and we obtain with (1.58b) and (1.62)
⟨[y_1(t_1) − y_2(t_2)]²⟩ = |t_2 − t_1|.
Thus we can conclude with the choice b = c = 1 that the SF of the WP are AC continuous functions. The two graphs, Figures 1(a) and 1(b), are included in this section to illustrate the continuous SF.

We also apply the law of the iterated logarithm to the WP. To this end we consider the independent increments y_k − y_{k−1}, where t_k = kΔt with a finite time increment Δt. This yields for the partial sum in (1.39)
Σ_{k=1}^n (y_k − y_{k−1}) = y_n = y_{nΔt};  β = ⟨y_k − y_{k−1}⟩ = 0;  σ² = ⟨(y_k − y_{k−1})²⟩ = Δt.
Fig. 1(a). The Brownian motion B_t versus the time axis. Included is a graph of the numerically determined temporal evolution of the mean value and the variance.

Fig. 1(b). The planar Brownian motion with x = B_t¹ and y = B_t²; B_t^k, k = 1,2, are independent Brownian motions.

We substitute the results of the last line into (1.40) and we obtain
−√Δt ≤ lim_{n→∞} y_{nΔt}/√(2n ln(ln(n))) ≤ √Δt.
The assignment t := nΔt in the last line and the approximation ln(t/Δt) → ln(t) for t → ∞ give the desired result for the AC asymptotic behavior of the WP
−1 ≤ lim_{t→∞} W_t/√(2t ln(ln(t))) ≤ 1.   (1.63)
We will verify (1.63) numerically in Chapter 5.
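The asymptotic bound can also be probed directly; in the sketch below (Python/NumPy, all parameters arbitrary) we sample W_T at a large time T and form the ratio to the envelope √(2T ln ln T). Note that this only samples the marginal distribution at one instant, whereas (1.63) is a statement about the limit along each path, so the check is a crude one.

    import numpy as np

    rng = np.random.default_rng(3)
    n, dt, M = 10**6, 1.0, 20              # n steps of size dt per path, M paths
    T = n*dt
    env = np.sqrt(2*T*np.log(np.log(T)))   # envelope of (1.63) at t = T

    # B_T as the sum of n independent N(0, dt) increments, one sum per path
    ratios = [np.sqrt(dt)*rng.standard_normal(n).sum()/env for _ in range(M)]
    print(min(ratios), max(ratios))        # samples typically fall inside (-1, 1)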
There are various equivalent definitions of a Wiener process. We use the following:

Definition 1.8. (Wiener process)
A WP has the initial value W_0 = 0 and its increments W_t − W_s, t > s, satisfy three conditions. They are (i) independent, (ii) stationary (the PD depends on t − s), and (iii) N(0, t − s) distributed. As a consequence of these three conditions a WP exhibits continuous sample functions with probability 1. ♦

There are also WP's that do not start at zero, and there is a generalization of the WP with discontinuous SF. We will return to this point at the end of Section 1.7.

Now we show that a WP is a martingale
⟨B_s | B_u⟩ = B_u;  s > u.   (1.64)
We prove (1.64) with the application of the Markovian property (1.53). We use (1.17) and write
⟨B_s | B_u⟩ = ⟨y_2,s | y_1,u⟩ = ∫ y_2 p_{1|1}(y_2,s | y_1,u)dy_2
           = [2π(s − u)]^{−1/2} ∫ y_2 exp{−(y_2 − y_1)²/[2(s − u)]}dy_2 = y_1 = B_u.
This concludes the proof of (1.64).

A WP also has the following properties. The translated quantity W̃_t and the scaled quantity Ŵ_t defined by (t, a > 0)
W̃_t = W_{t+a} − W_a  and  Ŵ_t = W_{a²t}/a   (1.65)
are also Brownian motions. To prove (1.65) we note first that the averages of both variables are zero, ⟨W̃_t⟩ = ⟨Ŵ_t⟩ = 0. Now we have to show that both variables also satisfy the condition for the autocorrelation. We prove this only for the variable Ŵ_t and leave the second part for EX 1.7. Thus, we put
⟨Ŵ_tŴ_s⟩ = ⟨B_{a²t}B_{a²s}⟩/a² = (a²t ∧ a²s)/a² = t ∧ s.

So far, we considered exclusively scalar WP's. In the study of partial differential equations we need to introduce a set of n independent WP's. Thus, we generalize the WP to the case of n independent WP's that define a vector of stochastic processes
x_1(t_1),…,x_n(t_n);  t_k ≥ 0.   (1.66)
The corresponding PD is then
p(x_1,…,x_n) = p_{x_1}(x_1)⋯p_{x_n}(x_n) = (2π)^{−n/2} Π_{k=1}^n t_k^{−1/2} exp[−x_k²/(2t_k)].   (1.67)
We have assumed independent stochastic variables (like the orthogonal basis vectors in the case of deterministic variables) and this independence is expressed by the factorized multivariate PD (1.67). We define an n-dimensional WP (or a Wiener sheet (WS)) by
M_t^{(n)} = Π_{k=1}^n x_k(t_k);  t = (t_1,…,t_n).   (1.68)
Now we investigate how we can generalize Definition 1.8 to the case of n stochastic processes. First, we prove easily that the variable (1.68) has a zero mean
⟨M_t^{(n)}⟩ = 0.   (1.69)
Thus, it remains to calculate the autocorrelation (1.62). We use the independence of the set of variables x_k(t_k), k = 1,…,n, and we obtain with the use of the bivariate PD (1.60) with y_1 = x_k(t_k),
y_2 = x_k(s_k), and factorize the result for the independent variables. Hence, we obtain
⟨M_t^{(n)} M_s^{(n)}⟩ = Π_{k=1}^n ⟨x_k(t_k)x_k(s_k)⟩;  t = (t_1,…,t_n);  s = (s_1,…,s_n).
The evaluation of the last line yields with (1.62)
⟨M_t^{(n)} M_s^{(n)}⟩ = Π_{k=1}^n t_k ∧ s_k.   (1.70)
The relations (1.69) and (1.70) show now that the process (1.68) is an n-WP. In analogy to deterministic variables we can now construct curves, surfaces and hyper surfaces with stochastic variables. Thus, a curve on the 2-dimensional WS and a surface on the 3-dimensional WS are given by
C_t = M^{(2)}_{t,f(t)};  S_{t_1,t_2} = M^{(3)}_{t_1,t_2,g(t_1,t_2)}.
We give here only two interesting examples.

Example 1
Here we put
K_t = M^{(2)}_{a,b};  a = exp(t), b = exp(−t);  −∞ < t < ∞.
This defines a stochastic hyperbola with zero mean and with the autocorrelation
⟨K_tK_s⟩ = ⟨x_1(e^t)x_1(e^s)⟩⟨x_2(e^{−t})x_2(e^{−s})⟩ = (e^t ∧ e^s)(e^{−t} ∧ e^{−s}) = exp(−|t − s|).   (1.71)
The property (1.71) shows that this process is not only a WS but also a stationary Ornstein-Uhlenbeck process (see Section 1.6.1). ♣

Example 2
Here we define the process
K_t = exp[−(1 + c)t] M^{(2)}_{a,b};  a = exp(2t), b = exp(2ct);  c > 0.   (1.72)
Again we see that the stochastic variable defined in (1.72) has zero mean, and the calculation of its autocorrelation yields
⟨K_tK_s⟩ = exp[−(1 + c)(t + s)] ⟨x_1(e^{2t})x_1(e^{2s})⟩⟨x_2(e^{2ct})x_2(e^{2cs})⟩
        = exp[−(1 + c)(t + s)] (e^{2t} ∧ e^{2s})(e^{2ct} ∧ e^{2cs}) = exp[−(1 + c)|t − s|].   (1.73)
The latter equation means that the process (1.72) is again an Ornstein-Uhlenbeck process. Note also that because of c > 0 there is no possibility to use (1.73) to reproduce the result of the previous example. ♣

Just as in the case of one parameter, there exist scaling and translation also for WS's. Thus, the stochastic variables
H_{u,v} = (ab)⁻¹ M^{(2)}_{a²u,b²v};
L_{u,v} = M^{(2)}_{u+a,v+b} − M^{(2)}_{u+a,b} − M^{(2)}_{a,v+b} + M^{(2)}_{a,b}   (1.74)
are also WS's. The proof of (1.74) is left for EX 1.8. We give in Figures 1(a) and 1(b) two graphs of the Brownian motion.

At the end of this section we wish to mention that the WP is a subclass of the Levy process L(t). The latter complies with the first two conditions of Definition 1.8; however, it does not possess normally distributed increments. A particular feature of a normally distributed process x is the vanishing of the skewness ⟨x³⟩/⟨x²⟩^{3/2}. However, many statistical phenomena (like hydrodynamic turbulence, the market values of stocks, etc.) show remarkable values of the skewness. This means that a GD (with only two parameters) is not flexible enough to describe such phenomena and it must be replaced by a PD that contains a sufficient number of parameters. An appropriate choice is the normal inverted Gaussian distribution (NIGD) (see Section 4.4). The NIGD does not satisfy the Kolmogorov criterion. This means that the sample functions of the Levy process L(t) jump up and down at arbitrary instances t. To get more information about the Levy process we refer the reader to the work of Ikeda & Watanabe [1.6] and of Rydberg
[1.7]. In Section 4.4 we will give a short description of the application of the NIGD in economic theories.

1.8. Stochastic Integrals

We need stochastic integrals (SI) when we attempt to solve a stochastic differential equation (SDE). Hence we introduce a simple first order ordinary SDE
dX/dt = a(X(t),t) + b(X(t),t)ξ_t;  X,a,b,t ∈ R.   (1.75)
We use in (1.75) the deterministic functions a and b. The symbol ξ_t indicates the only stochastic term in this equation. We assume
⟨ξ_t⟩ = 0;  ⟨ξ_tξ_s⟩ = δ(t − s).   (1.76)
The spectrum of the autocorrelation in (1.76) is constant (see Section 2.2) and in view of this ξ_t is referred to as white noise; any term proportional to ξ_t is called a noisy term. These assumptions are based on a great variety of physical phenomena that are met in many experimental situations. Now we replace (1.75) by a discretization and we put
Δt_k = t_{k+1} − t_k > 0;  X_k = X(t_k);  ΔX_k = X_{k+1} − X_k;  k = 0,1,….
The substitution into (1.75) yields
ΔX_k = a(X_k,t_k)Δt_k + b(X_k,t_k)ΔB_k;  ΔB_k = B_{k+1} − B_k;  k = 0,1,…,   (1.77)
where we used ΔB_k = ξ_kΔt_k. A precise derivation of (1.77) is given in Section 2.2. Thus we can write (1.75) in terms of
X_n = X_0 + Σ_{s=0}^{n−1} [a(X_s,t_s)Δt_s + b(X_s,t_s)ΔB_s];  X_0 = X(t_0).   (1.78)
What happens in the limit Δt_k → 0? If there is a "reasonable" limit of the last term in (1.78) we obtain as solution of the SDE (1.75)
X(t) = X(0) + ∫_0^t a(X(s),s)ds + "∫_0^t b(X(s),s)dB_s".   (1.79)
The first integral in (1.79) is a conventional integral of Riemann's type and we put the stochastic (noisy) integral into inverted commas. The irregularity of the noise does not allow us to calculate the stochastic integral in terms of a Riemann integral. This is caused by the fact that the paths of the WP are nowhere differentiable. Thus we find that a SI depends crucially on the decomposition of the integration interval.

We assumed in (1.75) to (1.79) that b(X,t) is a deterministic function. We generalize the problem of the calculation of a SI and we consider a stochastic function
I = ∫_0^T f(ω,s)dB_s.   (1.80)
We recall that Riemann integrals of the type (g(s) is a differentiable function)
∫_0^T f(s)dg(s) = ∫_0^T f(s)g′(s)ds
are discretized in the following manner
∫_0^T f(s)dg(s) = lim_{n→∞} Σ_{k=0}^{n−1} f(s_k)[g(s_{k+1}) − g(s_k)].
Thus, it is plausible to introduce a discretization of (1.80) that takes the form
I = Σ_k f(s_k,ω)(B_{k+1} − B_k).   (1.81)
In Equation (1.81) we used s_k as the time-argument of the integrand f. This is the value of s that corresponds to the left endpoint of the discretization interval, and we say that this decomposition does not
look into the future. We call this type of integral an Ito integral and write
I_I = ∫_0^T f(s,ω)dB_s.   (1.82)
Another possible choice is to use the midpoint of the interval and with this we obtain the Stratonovich integral
I_S = ∫_0^T f(s,ω) ∘ dB_s = Σ_k f(ŝ_k,ω)(B_{k+1} − B_k);  ŝ_k = (t_{k+1} + t_k)/2.   (1.83)
Note that the symbol "∘" between the integrand and the stochastic differential is used to indicate Stratonovich integrals.

There are, of course, an uncountable infinity of other decompositions of the integration interval that lead to different definitions of a SI. It is, however, convenient to take advantage only of the Ito and the Stratonovich integrals. We will discuss their properties and find out which type of integral seems to be more appropriate for use in the analysis of stochastic differential equations.

Properties of the Ito integral
(a) We have for deterministic constants α, β ∈ R
∫_a^b [αf_1(s,ω) + βf_2(s,ω)]dB_s = αI_1 + βI_2;  I_k = ∫_a^b f_k(s,ω)dB_s.   (1.84)
Note that (1.84) remains valid also for Stratonovich integrals. The proof of (1.84) is trivial. In the following we give non-trivial properties that apply, however, exclusively to Ito integrals. First we need a definition:

Definition 1.9. (non-anticipative or adapted functions)
The function f(t,B_s) is said to be non-anticipative (or adapted, see also Theorem 1.2) if it depends only on stochastic variables of the past: B_s appears only for arguments s ≤ t. Examples of non-anticipative functions are
f(s,ω) = ∫_0^s g(u)dB_u;  f(s,ω) = B_s. ♦
Now we list further properties of the Ito integrals that involve non-anticipative functions f(s,B_s) and g(s,B_s).

(b)
M_1 ≡ ⟨∫_0^t f(s,B_s)dB_s⟩ = 0.   (1.85)
Proof. We use (1.81) and obtain
M_1 = ⟨Σ_k f(s_k,B_k)(B_{k+1} − B_k)⟩.
But we know that B_k is independent of B_{k+1} − B_k. The function f(s_k,B_k) is thus also independent of B_{k+1} − B_k. Hence we obtain
M_1 = Σ_k ⟨f(s_k,B_k)⟩⟨B_{k+1} − B_k⟩ = 0.
This concludes the proof of (1.85).

(c) Here we study the average of a product of integrals and we show that
M_2 ≡ ⟨∫_0^t f(s,B_s)dB_s ∫_0^t g(u,B_u)dB_u⟩ = ∫_0^t ⟨f(s,B_s)g(s,B_s)⟩ds.   (1.86)
Proof.
M_2 = Σ_{m,n} ⟨f(s_m,B_m)(B_{m+1} − B_m) g(s_n,B_n)(B_{n+1} − B_n)⟩.
We have to distinguish three subclasses: (i) n > m, (ii) n < m and (iii) n = m. Taking into account the independence of the increments of WP's, we see that only case (iii) contributes non-trivially to M_2. This yields
M_2 = Σ_n ⟨f(s_n,B_n)g(s_n,B_n)(B_{n+1} − B_n)²⟩.
But we know that f(s_n,B_n)g(s_n,B_n) is again a function that is independent of (B_{n+1} − B_n)². We use (1.62) and obtain
⟨(B_{n+1} − B_n)²⟩ = ⟨B_{n+1}² − 2B_{n+1}B_n + B_n²⟩ = t_{n+1} − t_n = Δt_n,
and thus we get
M_2 = Σ_n ⟨f(s_n,B_n)g(s_n,B_n)⟩Δt_n.
The last relation tends for Δt_n → 0 to (1.86).

(d) A generalization of the property (c) is given by
M_3 ≡ ⟨∫_0^a f(s,B_s)dB_s ∫_0^b g(u,B_u)dB_u⟩ = ∫_0^{a∧b} ⟨f(s,B_s)g(s,B_s)⟩ds.   (1.87)
To prove (1.87) we must distinguish two subclasses: (i) b = a + c > a and (ii) a = b + c > b; c > 0. We consider only case (i); the proof for case (ii) is done by analogy. We derive from (1.86) and (1.87)
M_3 = M_2 + ⟨∫_0^a f(s,B_s)dB_s ∫_a^b g(u,B_u)dB_u⟩
    = M_2 + Σ_n Σ_{m>n} ⟨f(s_n,B_n)g(s_m,B_m)ΔB_nΔB_m⟩.
But we see that f(s_n,B_n) and ΔB_n are independent of g(s_m,B_m) and ΔB_m. Hence, we obtain
M_3 = M_2 + Σ_n ⟨f(s_n,B_n)ΔB_n⟩ Σ_{m>n} ⟨g(s_m,B_m)ΔB_m⟩ = M_2,
where we used (1.85). This concludes the proof of (1.87) for case (i).

Now we calculate an example
I(t) = ∫_0^t B_s dB_s.   (1.88a)
First of all we obtain with the use of (1.85) and (1.86) the moments of the stochastic variable (1.88a)
⟨I(t)⟩ = 0;  ⟨I(t)I(t + τ)⟩ = ∫_0^γ ⟨B_s²⟩ds = ∫_0^γ s ds = γ²/2;  γ = t ∧ (t + τ).   (1.88b)
We calculate the integral with an Ito decomposition
I = Σ_k B_k(B_{k+1} − B_k).
But we have
ΔB_k² ≡ B_{k+1}² − B_k² = (B_{k+1} − B_k)² + 2B_k(B_{k+1} − B_k) = (ΔB_k)² + 2B_k(B_{k+1} − B_k).
Hence we obtain
I(t) = ½ Σ_k [Δ(B_k²) − (ΔB_k)²].
We now calculate the two sums in the last line separately. We obtain in the first place
I_1(t) = Σ_k Δ(B_k²) = (B_1² − B_0²) + (B_2² − B_1²) + ⋯ + (B_N² − B_{N−1}²) = B_N² → B_t²,
where we used B_0 = 0. The second sum and its average are given by
I_2(t) = Σ_k (ΔB_k)² = Σ_k (B_{k+1}² − 2B_{k+1}B_k + B_k²);  ⟨I_2(t)⟩ = Σ_k Δt_k = t.
The relation ⟨I_2(t)⟩ = t gives not only the average but also the sum I_2(t) itself. However, the direct calculation of I_2(t) is impractical and we refer the reader to the book of Øksendal [1.8], where the corresponding algebra is performed. We use instead an indirect proof and show that the quantity z (the deviation of I_2(t) from t) is a deterministic function with the value zero. Thus, we put z = I_2(t) − t. The mean value is clearly ⟨z⟩ = 0 and we obtain
⟨z²⟩ = ⟨I_2²(t) − 2tI_2(t) + t²⟩ = ⟨I_2²(t)⟩ − t².
But we have
⟨I_2²(t)⟩ = Σ_k Σ_m ⟨(ΔB_k)²(ΔB_m)²⟩.   (1.88c)
The independence of the increments of the WP's yields
⟨(ΔB_k)²(ΔB_m)²⟩ = ⟨(ΔB_k)²⟩⟨(ΔB_m)²⟩ + 2δ_{km}⟨(ΔB_k)²⟩²,
hence we obtain with the use of the results of EX 1.6
⟨I_2²(t)⟩ = (Σ_k ⟨(ΔB_k)²⟩)² + 2Σ_k (t_{k+1} − t_k)² = t² + 2Σ_k (t_{k+1} − t_k)².
However, we have
Σ_k (t_{k+1} − t_k)² = Σ_k (Δt)² = tΔt → 0 for Δt → 0,
and this indicates that ⟨z²⟩ = 0. This procedure can be pursued to higher orders and we obtain the result that all moments of z are zero; thus we obtain I_2(t) = t. Finally,
I(t) = ∫_0^t B_s dB_s = ½(B_t² − t).   (1.89)

There is a generalization of the previous results with respect to higher order moments. We consider here moments of a stochastic integral with a deterministic integrand
J_k(t) = ∫_0^t f_k(s)dB_s;  k ∈ N.   (1.90)
These integrals are a special case of the ones in (1.82) and we know from (1.85) that the mean value of (1.90) is zero. The covariance of (1.90) is given by (see (1.86))
⟨J_k(t)J_m(t)⟩ = ∫_0^t f_k(s)f_m(s)ds.
But we can obtain formally the same result if we put
⟨dB_s dB_u⟩ = δ(s − u)ds du.   (1.91)
A formal justification of (1.91) is given in Chapter 2 in connection with formula (2.41). Here we show that (1.91) leads to a result that
is identical to the consequences of (1.86)
⟨J_k(t)J_m(t)⟩ = ∫_0^t f_k(s) ∫_0^t f_m(u) ⟨dB_s dB_u⟩
             = ∫_0^t ∫_0^t f_k(s)f_m(u)δ(s − u)ds du = ∫_0^t f_k(s)f_m(s)ds.
We also know that B_t, and hence dB_t, are Gaussian and Markovian. This means that all odd moments of the integral (1.90) must vanish
⟨J_k(t)J_m(t)J_r(t)⟩ = ⋯ = 0.   (1.92a)
To calculate higher order moments we use the properties of the multivariate GD and we put for the 4th order moment of the differential
⟨dB_p dB_q dB_u dB_v⟩ = ⟨dB_p dB_q⟩⟨dB_u dB_v⟩ + ⟨dB_p dB_u⟩⟨dB_q dB_v⟩ + ⟨dB_p dB_v⟩⟨dB_q dB_u⟩
 = [δ(p − q)δ(u − v) + δ(p − u)δ(q − v) + δ(p − v)δ(q − u)]dp dq du dv.
Note that the 4th order moment of the differential of WP's has a form similar to an isotropic 4th order tensor. Hence, we obtain
⟨J_j(t)J_m(t)J_r(t)J_s(t)⟩ = ∫_0^t f_j(α)f_m(α)dα ∫_0^t f_r(β)f_s(β)dβ
 + ∫_0^t f_j(α)f_r(α)dα ∫_0^t f_m(β)f_s(β)dβ + ∫_0^t f_j(α)f_s(α)dα ∫_0^t f_m(β)f_r(β)dβ.
This leads in a special case to
⟨J_k⁴(t)⟩ = 3⟨J_k²(t)⟩².   (1.92b)
Again, this procedure can be carried out also for higher order moments and we obtain
⟨J^{2ν+1}(t)⟩ = 0;  ⟨J^{2ν}(t)⟩ = 1·3·…·(2ν − 1)⟨J²(t)⟩^ν;  ν ∈ N.   (1.92c)
Equation (1.92) signifies that the stochastic Ito integral (1.90) with the deterministic integrand f_k(s) is N[0, ∫_0^t f_k²(s)ds] distributed. However, one can also show that the Ito integral with the non-anticipative
integrand
K(t) = ∫_0^t g(s,B_s)dB_s   (1.93a)
is, in analogy to the stochastic integral with the deterministic integrand,
N[0, r(t)];  r(t) = ∫_0^t ⟨g²(u,B_u)⟩du   (1.93b)
distributed (see Arnold [1.2]). The variable r(t) is referred to as the intrinsic time of the stochastic integral (1.93a). We use this variable to show with Kolmogorov's theorem (1.20) that (1.93a) possesses continuous SF. The Ito integral
x_k = ∫_0^{t_k} g(u,B_u)dB_u;  t_1 = t > t_2 = s,
with
⟨x_k⟩ = 0;  ⟨x_k²⟩ = r(t_k) = r_k;  k = 1,2;  ⟨x_1x_2⟩ = r_2,
has according to (1.35a) the joint PD
p_2(x_1,x_2) = [(2π)² r_2(r_1 − r_2)]^{−1/2} exp{−x_2²/(2r_2) − (x_1 − x_2)²/[2(r_1 − r_2)]}.
Yet, the latter line is identical with the bivariate PD of the Wiener process (1.60) if we replace in the latter equation the t_k by r_k. Hence, we obtain from Kolmogorov's criterion ⟨[x_1(r_1) − x_2(r_2)]²⟩ = |r_1 − r_2|, and this guarantees the continuity of the SF of the Ito integral (1.93a).

A further important feature of Ito integrals is their martingale property. We verify this now for the case of the integral (1.89). To achieve this, we generalize the martingale formula (1.64) to the case of arbitrary functions of the Brownian motions
⟨f(y_2,s) | f(y_1,t)⟩ = ∫ f(y_2,s) p_{1|1}(y_2,s | y_1,t)dy_2 = f(y_1,t);  y_k = B_{t_k};  ∀ s > t,   (1.94)
where p_{1|1} is given by (1.53). To verify now the martingale property of the integral (1.89) we specify (1.94) to
⟨I(y_2,s) | I(y_1,t)⟩ = (2πβ)^{−1/2} ∫ ½(y_2² − s) exp[−(y_2 − y_1)²/(2β)]dy_2;  β = s − t.
The application of the standard substitution (see EX 1.1) yields
⟨I(y_2,s) | I(y_1,t)⟩ = π^{−1/2} ∫ ½(y_1² − s + 2y_1√(2β)z + 2βz²) exp(−z²)dz
                     = ½(y_1² − s + β) = ½(y_1² − t) = I(y_1,t).   (1.95)
This concludes the proof that the Ito integral (1.89) is a martingale. The general proof that all Ito integrals are martingales is given by Øksendal [1.8]. However, we will encounter the martingale property for a particular class of Ito integrals in the next section.

To conclude this example we add here also the Stratonovich version of the integral (1.89). This yields (the subscript S indicates a Stratonovich integral)
I_S(t) = ∫_0^t B_s ∘ dB_s = ½ Σ_k (B_{k+1} + B_k)(B_{k+1} − B_k) = ½ Σ_k (B_{k+1}² − B_k²) = B_t²/2.   (1.96)
The result (1.96) is the "classical" value of the integral, whereas the Ito integral gives a non-classical result. Note also the significant differences between the Ito and Stratonovich integrals. Even the moments do not coincide, since we infer from (1.96)
⟨I_S(t)⟩ = t/2  and  ⟨I_S(t)I_S(u)⟩ = ¼[tu + 2(t ∧ u)²].
It is now easy to show that the Stratonovich integral I_S is not a martingale. We obtain this result if we drop the term s in the second line of (1.95)
⟨I_S(y_2,s) | I_S(y_1,t)⟩ = ½(y_1² + β) ≠ I_S(y_1,t). ♣
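Both discretizations are easy to compare on a single simulated path (a Python/NumPy sketch; step number and horizon are arbitrary choices). The left-endpoint sum reproduces the Ito value (1.89), the midpoint sum the Stratonovich value (1.96):

    import numpy as np

    rng = np.random.default_rng(4)
    n, T = 10**5, 1.0
    dt = T/n
    dB = np.sqrt(dt)*rng.standard_normal(n)
    B = np.concatenate(([0.0], np.cumsum(dB)))     # B_0 = 0, ..., B_T

    ito   = np.sum(B[:-1]*dB)                      # left endpoint, Eq. (1.81)
    strat = np.sum(0.5*(B[:-1] + B[1:])*dB)        # midpoint rule of Eq. (1.96)
    BT = B[-1]
    print(ito,   0.5*(BT**2 - T))                  # Ito result (1.89)
    print(strat, 0.5*BT**2)                        # Stratonovich result (1.96)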
Hence, we may summarize the properties of the Ito and Stratonovich integrals. The Stratonovich concept uses all the transformation rules of classical integration theory and thus leads in many applications to an easy way of performing the integration. Deviating from the Ito integral, the Stratonovich integral does, however, not possess the effective rules to calculate averages, such as (1.85) to (1.87), and it does not have the martingale property. In the following we will consider both integration concepts and their application in the solution of SDE's. We have calculated so far only one stochastic integral and we continue in the next section with helpful rules to perform the stochastic integration.

1.9. The Ito Formula

We begin with the differential of a function Φ(B_t,t). Its Ito differential takes the form
dΦ(B_t,t) = Φ_t dt + Φ_{B_t}dB_t + ½Φ_{B_tB_t}(dB_t)².   (1.97.1)
Formula (1.97.1) contains the non-classical term that is proportional to the second derivative WRT B_t. We must supplement (1.97.1) by a further non-classical relation
(dB_t)² = dt.   (1.97.2)
Thus, we infer from (1.97.1,2) the final form of this differential
dΦ(B_t,t) = (Φ_t + ½Φ_{B_tB_t})dt + Φ_{B_t}dB_t.   (1.98)
Next we derive the Ito differential of the function Y = g(x,t), where x is the solution of the SDE
dx = a(x,t)dt + b(x,t)dB_t.   (1.99.1)
In analogy to (1.97.1) we include a non-classical term and put
dY = g_t dt + g_x dx + ½g_{xx}(dx)².
We substitute dx from (1.99.1) into the last line and apply the non-classical formulas
(dx)² = (a dt + b dB_t)² = b²dt;  (dt)² = dt dB_t = 0;  (dB_t)² = dt,   (1.99.2)
and this yields
dY = (g_t + a g_x + ½b²g_{xx})dt + b g_x dB_t.   (1.99.3)
The latter equation is called the Ito formula for the total differential of the function Y = g(x,t), given the SDE (1.99.1). (1.99.3) contains the non-classical term b²g_{xx}/2 and it differs thus from the classical (or Stratonovich) total differential
dY_c = (g_t + a g_x)dt + b g_x dB_t.   (1.100)
Note that the Ito and the Stratonovich differentials coincide if g(x,t) is a first order polynomial in the variable x. We postpone a sketch of the proof of (1.99) for a moment and give an example of the application of this formula. We use (1.99.1) in the form
dx = dB_t, or x = B_t with a = 0, b = 1,   (1.101a)
and we consider the function
Y = g(x) = x²/2;  g_t = 0;  g_x = x;  g_{xx} = 1.   (1.101b)
Thus we obtain from (1.99.3) and (1.101b)
dY = d(x²/2) = dt/2 + B_t dB_t,
and the integration of this total differential yields
∫_0^t d(B_s²/2) = B_t²/2 = t/2 + ∫_0^t B_s dB_s,
and the last line reproduces (1.89). ♣
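As another immediate consequence of (1.98), the drift of Φ = cos B_t is −½ cos B_t (since Φ_t = 0 and Φ_{B_tB_t} = −cos B_t), so ⟨cos B_t⟩ = exp(−t/2). The following sketch (Python/NumPy; the time and the sample size are arbitrary choices) checks this by sampling B_t exactly:

    import numpy as np

    rng = np.random.default_rng(5)
    t, M = 2.0, 10**6
    B_t = np.sqrt(t)*rng.standard_normal(M)   # exact N(0,t) samples of B_t

    # Ito formula: d cos(B_t) = -0.5*cos(B_t) dt - sin(B_t) dB_t; the dB_t term
    # has zero mean, hence d<cos B_t>/dt = -<cos B_t>/2 and <cos B_t> = exp(-t/2)
    print(np.cos(B_t).mean(), np.exp(-t/2))   # the two numbers agree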
We give now a sketch of the proof of the Ito formula (1.99), following in part considerations of Schuss [1.9]. It is instructive to perform this in detail and we do it in four consecutive steps labeled S1 to S4.

S1. We begin with the consideration of the stochastic function x(t) given by
x(v) − x(u) = ∫_u^v a(x(s),s)ds + ∫_u^v b(x(s),s)dB_s,   (1.102)
where a and b are two differentiable functions. Thus, we obtain the differential of x(t) if we put in (1.102) v = u + dt and let dt → 0
dx(u) = a(x(u),u)du + b(x(u),u)dB_u.   (1.103)
Before we pass to the next step we consider two important examples.

Example 1. (integration by parts)
Here we consider a deterministic function f and a stochastic function Y and we put
Y(B_t,t) = g(B_t,t) = f(t)B_t.   (1.104a)
The total differential is in both (Ito and Stratonovich) cases (see (1.98) with Φ_{B_tB_t} = 0) given by the exact formula
dY = d[f(t)B_t] = f(t)dB_t + f′(t)B_t dt.   (1.104b)
The integration of this differential yields
f(t)B_t = ∫_0^t f′(s)B_s ds + ∫_0^t f(s)dB_s.   (1.105a)
Subtracting the last line for t = u from the same relation for t = v yields
f(v)B_v − f(u)B_u = ∫_u^v f′(s)B_s ds + ∫_u^v f(s)dB_s.   (1.105b)

Example 2. (Martingale property)
We consider a particular class of Ito integrals
I(t) = ∫_0^t f(u)dB_u,   (1.106)
and show that I(t) is a martingale. First we realize that the integral (1.106) is a particular case of the class (1.93a) with g(u,B_u) = f(u). Hence we know that the variable (1.106) is normally distributed and possesses the intrinsic time given by (1.93b). Its transition probability p_{1|1} is defined by (1.53) with t_j → r(t_j); y_j = I(t_j); j = 1,2. This concludes the proof that the integral (1.106) obeys a martingale property like (1.27) or (1.64). ♣

S2. Here we consider the product of two stochastic functions subjected to two SDE's with constant coefficients
dx_k(t) = a_k dt + b_k dB_t;  a_k, b_k = const;  k = 1,2,   (1.107)
with the solutions
x_k(t) = a_k t + b_k B_t;  x_k(0) = 0.   (1.108)
The task to evaluate d(x_1x_2) is outlined in EX 1.9 and we obtain with the aid of (1.89)
d(x_1x_2) = x_2 dx_1 + x_1 dx_2 + b_1b_2 dt.   (1.109)
The term proportional to b_1b_2 in (1.109) is non-classical and it is a mere consequence of the non-classical term in (1.89). The relation (1.109) was derived for constant coefficients in (1.107). One may derive (1.109) under the assumption of step functions for the coefficients a and b in (1.107), and with that one can approximate differentiable functions (see Schuss [1.9]). We consider now two examples.

Example 1
We put x_1 = B_t; x_2 = B_t². Thus, we obtain with an application of (1.101b) and (1.109)
dB_t³ = B_t dB_t² + B_t² dB_t + 2B_t dt = 3(B_t dt + B_t² dB_t).
The use of the induction rule yields the generalization
dB_t^k = kB_t^{k−1}dB_t + [k(k − 1)/2]B_t^{k−2}dt.   (1.110)
Example 2
Here we consider polynomials of the Brownian motion
P_n(B_t) = c_0 + c_1B_t + ⋯ + c_nB_t^n;  c_k = const.   (1.111)
The application of (1.110) to (1.111) leads to
dP_n(B_t) = P_n′(B_t)dB_t + ½P_n″(B_t)dt;  ′ = d/dB_t.   (1.112)
The relation (1.112) is also valid for all functions that can be expanded in the form of polynomials. ♣

S3. Here we consider the product
Φ(B_t,t) = φ(B_t)g(t),   (1.113)
where g is a deterministic function. The use of (1.109) yields
dΦ(B_t,t) = g(t)dφ(B_t) + φ(B_t)g′(t)dt = [φ′dB_t + ½φ″dt]g + φg′dt
          = [φg′ + ½φ″g]dt + gφ′dB_t.   (1.114)
But we also have
(∂/∂t + ½∂²/∂B_t²)Φ = φg′ + ½gφ″.   (1.115)
Thus, we obtain
dΦ = (∂Φ/∂t + ½∂²Φ/∂B_t²)dt + (∂Φ/∂B_t)dB_t.   (1.116)
Equation (1.116) applies, in the first place, only to the function (1.113). However, the use of the expansion
Φ(B_t,t) = Σ_{k=1}^∞ φ_k(B_t)g_k(t)   (1.117)
shows that (1.116) is valid for arbitrary functions and this proves (1.98).
S4. In this last step we do not apply the separation (1.113) or (1.117), but we use a differentiable function of the variables (x,t), where x satisfies a SDE of the type (1.107)
Φ(B_t,t) = g(x,t) = g(at + bB_t, t);  dx = a dt + b dB_t;  a, b = const,   (1.118)
with
Φ_t = ag_x + g_t;  Φ_{B_t} = bg_x;  Φ_{B_tB_t} = b²g_{xx}.
Thus we obtain with (1.116)
dΦ = (g_t + ag_x + ½b²g_{xx})dt + bg_x dB_t.   (1.119)
The relation (1.119) represents the Ito formula (1.99.3) (for constant coefficients a and b). As before, we can generalize the proof and (1.119) is valid for arbitrary coefficients a(x,t) and b(x,t).

We generalize now the Ito formula to the case of a multivariate process. First we consider K functions of the type
y_k = y_k(B_t¹,…,B_t^M,t);  k = 1,2,…,K,
where B_t¹,…,B_t^M are M independent Brownian motions. We take advantage of the summation convention and obtain the generalization of (1.97.1)
dy_k(B_t¹,…,B_t^M,t) = (∂y_k/∂t)dt + (∂y_k/∂B_t^r)dB_t^r + ½(∂²y_k/∂B_t^r∂B_t^s)dB_t^r dB_t^s;
k = 1,…,K;  r,s = 1,…,M.   (1.120)
We generalize (1.97.2) and put
dB_t^r dB_t^s = δ_{rs}dt,   (1.121)
and we obtain (see (1.98))
dy_k(B_t¹,…,B_t^M,t) = (∂y_k/∂t + ½∂²y_k/∂B_t^r∂B_t^r)dt + (∂y_k/∂B_t^r)dB_t^r.   (1.122)
Now we consider a set of n SDE's
dX_k = a_k(X_1,…,X_n,t)dt + b_{kr}(X_1,…,X_n,t)dB_t^r;  k = 1,2,…,n;  r = 1,2,…,R.   (1.123)
We wish to calculate the differential of the functions
Z_k = Z_k(X_1,…,X_n,t);  k = 1,…,K.   (1.124)
The differential reads
dZ_k = (∂Z_k/∂t)dt + (∂Z_k/∂X_m)dX_m + ½(∂²Z_k/∂X_m∂X_u)dX_m dX_u
     = (∂Z_k/∂t)dt + (∂Z_k/∂X_m)(a_m dt + b_{mr}dB_t^r)
       + ½(∂²Z_k/∂X_m∂X_u)(a_m dt + b_{mr}dB_t^r)(a_u dt + b_{us}dB_t^s);
m,u = 1,2,…,n;  r,s = 1,2,…,R.   (1.125)
The n-dimensional generalization of the rule (1.99.2) is given by
dB_t^r dB_t^u = δ_{ru}dt;  (dt)² = dB_t^r dt = 0.   (1.126)
Thus, we obtain the differential of the vector valued function (1.124)
dZ_k = (∂Z_k/∂t + a_m ∂Z_k/∂X_m + ½ b_{mr}b_{ur} ∂²Z_k/∂X_m∂X_u)dt + b_{mr}(∂Z_k/∂X_m)dB_t^r.   (1.127)
Now we conclude this section with two examples.

Example 1
A stochastic process is given by
Y_1 = B_t¹ + B_t² + B_t³;  Y_2 = (B_t²)² − B_t¹B_t³.
We obtain for the SDE's in the form (1.120) corresponding to the last line
dY_1 = dB_t¹ + dB_t² + dB_t³;  dY_2 = dt + 2B_t²dB_t² − (B_t³dB_t¹ + B_t¹dB_t³). ♣
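Taking the mean of dY_2, the dB-terms drop out by (1.85) and we are left with d⟨Y_2⟩ = dt, i.e. ⟨Y_2(t)⟩ = t. A Monte Carlo check (Python/NumPy sketch; time and sample size are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(6)
    t, M = 1.5, 10**6
    B1, B2, B3 = np.sqrt(t)*rng.standard_normal((3, M))   # independent samples of B_t^k

    Y2 = B2**2 - B1*B3
    print(Y2.mean(), t)          # <Y_2(t)> = t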
Example 2
Here we study a single stochastic process under the influence of two independent Brownian motions
dx = α(x,t)dt + β(x,t)dB_t¹ + γ(x,t)dB_t².   (1.128)
The differential of the function Y = g(x,t) has the form
dY = [g_t + αg_x + ½(β² + γ²)g_{xx}]dt + g_x(βdB_t¹ + γdB_t²).
We consider now the special case
g = ln x;  α = rx;  β = ux;  γ = σx;  r, u, σ = const,
and we obtain
d(ln x) = [r − (u² + σ²)/2]dt + u dB_t¹ + σ dB_t².   (1.129)
We will use (1.129) in Section 2.1 of the next chapter. ♣

We introduced in this chapter some elements of probability theory and added the basic ideas about SDE's. For readers who wish to get more deeply involved in the abstract theory of probability, and in particular with measure theory, we suggest they consider the following books: Chung & AitSahlia [1.10], Ross [1.11], Malliavin [1.12], Pitman [1.13] and Shiryaev [1.14].

Appendix: Poisson Processes

In many applications there appears a random set of countable points driven by some stochastic system. Typical examples are arrival times of customers (at the desk of an office, at the gate of an airport, etc.), the birth process of an organism, or the number of competing building projects for a state budget. The randomness in such phenomena is conveniently described by Poisson distributed variables.
First we verify that the Poisson distribution is the limit of the Bernoulli distribution. We substitute for the argument p in the Bernoulli distribution of Section 1.1 the value p = a/n and this yields
b(k,n,a/n) = (n choose k)(a/n)^k (1 − a/n)^{n−k};
b(0,n,a/n) = (1 − a/n)^n → exp(−a) for n → ∞.   (A.1)
Now we form the ratio
b(k+1,n,a/n)/b(k,n,a/n) = [a/(k + 1)]·[(n − k)/n]·(1 − a/n)⁻¹ → a/(k + 1),
and this yields
b(1,n,a/n) → a exp(−a);  b(2,n,a/n) → (a²/2) exp(−a); …;
b(k,n,a/n) → (a^k/k!) exp(−a) = π_k(a).

Definition. (Homogeneous Poisson process (HPP))
A random point process N(t), t ≥ 0, on the real axis is a HPP with a constant intensity λ if it satisfies the three conditions:
(a) N(0) = 0.
(b) The random increments N(t_k) − N(t_{k−1}); k = 1,2,… are, for any sequence of times 0 ≤ t_0 < t_1 < ⋯ < t_n < ⋯, mutually independent.
(c) The random increments defined in condition (b) are Poisson distributed, of the form
Pr([N(t_{r+1}) − N(t_r)] = k) = (λτ_r)^k exp(−λτ_r)/k!;
τ_r = t_{r+1} − t_r;  k = 0,1,…;  r = 1,2,….   (A.2) ♦

To analyze the sample paths we consider the increment ΔN(t) = N(t + Δt) − N(t). Its probability has, for small values of Δt, the form
Pr(ΔN(t) = k) = (λΔt)^k exp(−λΔt)/k! ≈ {1 − λΔt for k = 0;  λΔt for k = 1;  O(Δt²) for k ≥ 2}.   (A.3)
Equation (A.3) means that for Δt → 0 the value of N(t + Δt) is most likely that of N(t) (Pr([N(t + Δt) − N(t)] = 0) ≈ 1).
However, the part of (A.3) with Pr([N(t + Δt) − N(t)] = 1) ≈ λΔt indicates that there is a small chance of a jump with height unity. The probability of jumps with heights k = 2,3,…, corresponding to the third part of (A.3), is subdominantly small, and such jumps do not appear.

We calculate the moments of the HPP in two alternative ways.
(i) We use (1.5) with (A.2) to obtain
⟨x^m⟩ = Σ_{k=0}^∞ k^m Pr(x = k) = exp(−a) Σ_{k=0}^∞ k^m a^k/k!;  a = λt,   (A.4)
or we apply (ii) the concept of the generating function defined by
g(z) = Σ_{k=0}^∞ z^k Pr(x = k), with g′(1) = ⟨x⟩;  g″(1) = ⟨x²⟩ − ⟨x⟩; …;  ′ = d/dz.   (A.5)
This leads in the case of an HPP to
g(z) = Σ_{k=0}^∞ z^k a^k exp(−a)/k! = exp(−a) Σ_{k=0}^∞ (za)^k/k! = exp[a(z − 1)].   (A.6)
In either case we obtain
⟨N(t)⟩ = λt;  ⟨N²(t)⟩ = (λt)² + λt.   (A.7)
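The moments (A.7) are easily confirmed by sampling (a Python/NumPy sketch; λ, t and the sample size are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(7)
    lam, t, M = 2.0, 3.0, 10**6
    N = rng.poisson(lam*t, size=M)              # samples of N(t) for a HPP

    print(N.mean(), lam*t)                      # <N(t)> = lambda*t
    print((N**2).mean(), (lam*t)**2 + lam*t)    # <N^2(t)> = (lambda*t)^2 + lambda*t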
We calculate now the PD of the sum x_1 + x_2 of two independent HPP's. By definition this yields
Pr([x_1 + x_2] = k) = Pr(∪_{j=0}^k [x_1 = j, x_2 = k − j]) = Σ_{j=0}^k Pr(x_1 = j, x_2 = k − j)
 = Σ_{j=0}^k exp(−θ_1)(θ_1^j/j!) exp(−θ_2) θ_2^{k−j}/(k − j)!
 = exp[−(θ_1 + θ_2)] Σ_{j=0}^k θ_1^j θ_2^{k−j} (k choose j)/k!
 = exp[−(θ_1 + θ_2)](θ_1 + θ_2)^k/k!.   (A.8)
If the two variables are IID (θ = θ_1 = θ_2), (A.8) reduces to
Pr([x_1 + x_2] = k) = exp(−2θ)(2θ)^k/k!.   (A.9)

Poisson HPP's play important roles in Markov processes (see Bremaud [1.16]). In many applications these Markov chains are iterations driven by "white noise" modeled by HPP's. Such iterations arise in the study of the stability of continuous periodic phenomena, in biology and economics, etc. We consider iterations of the form
x(t + s) = F(x(s), Z(t + s));  s,t ∈ N_0,   (A.10)
where t, s are discrete variables and x(t) is a discrete random variable driven by the white noise Z(t + s). An important particular case is Z(t + s) := N(t + s) with a PD
Pr(N(t + s) = k) = exp(−u)u^k/k!;  u = θ(t + s).
The transition probability is the matrix governing the transition from state i to state k.

Examples
(i) Random walk
This is an iteration of a discrete random variable x(t)
x(t) = x(t − 1) + N(t);  x(0) = x_0 ∈ N,   (A.11)
where N(t) is a HPP with Pr([N(t) = k]) = exp(−λt)(λt)^k/k!. Hence, we obtain the transition probability
p_{ji} = Pr(x(t) = j | x(t − 1) = i) = Pr([i + N(t)] = j) = Pr(N(t) = j − i).
(ii) Flip-flop processes
The iteration takes here the form
x(t) = (−1)^{N(t)}.   (A.12)
The transition matrix takes the form
p_{−1,1} = Pr(x(t + s) = 1 | x(s) = −1) = Pr(N(t) = 2k + 1) = α;
p_{1,1} = Pr(x(t + s) = 1 | x(s) = 1) = Pr(N(t) = 2k) = β,
with
α = Σ_{k=0}^∞ exp(−λt)(λt)^{2k+1}/(2k + 1)! = exp(−λt) sinh(λt);
β = Σ_{k=0}^∞ exp(−λt)(λt)^{2k}/(2k)! = exp(−λt) cosh(λt). ♣

Another important application of HPP's is given by a 1D approach to turbulence elaborated by Kerstein [1.17] and [1.18]. This model is based on turbulent advection by a random map: a triplet map is applied to a shear flow velocity profile. An individual event is represented by a mapping that results in a new velocity profile. As a statistical hypothesis the author assumes that the temporal rate of the events is governed by a Poisson process, and the parameter of the map can be sampled from a given PD. Although this model was applied to 1D turbulence, its results go beyond this limit and the model has a remarkable power to predict experimental data.

Exercises

EX 1.1. Calculate the mean value M_n(s,t) = ⟨(B_t − B_s)^n⟩, n ∈ N.
Hint: Use (1.60) and the standard substitution y_2 = y_1 + z√(2(t_2 − t_1)), where z is a new variable. Show that this yields
M_n = π^{−1/2}[2(t_2 − t_1)]^{n/2} ∫_{−∞}^∞ exp(−z²)z^n dz.
The gamma function is defined by (see Ryshik & Gradstein [1.15])
∫_{−∞}^∞ exp(−z²)z^n dz = Γ((n + 1)/2) ∀ n = 2k;  = 0 ∀ n = 2k + 1;  k ∈ N;
Γ(1/2) = √π;  Γ(n + 1) = nΓ(n).
Verify the result M_{2n} = π^{−1/2}[2(t_2 − t_1)]^n Γ((2n + 1)/2).

EX 1.2. We consider a 1D random variable X with the mean μ and the variance σ². Show that the latter can be written in the form (f_X(x) is the PD; ε > 0)
σ² ≥ (∫_{−∞}^{μ−ε} + ∫_{μ+ε}^∞) f_X(x)(x − μ)²dx.
For x ≤ μ − ε and x ≥ μ + ε we have (x − μ)² ≥ ε², and this yields
σ² ≥ ε² (∫_{−∞}^{μ−ε} + ∫_{μ+ε}^∞) f_X(x)dx = ε² Pr(|X − μ| ≥ ε),
which gives the Chebyshev inequality in its final form
Pr{|X − μ| ≥ ε} ≤ σ²/ε².
The inequality governing martingales (1.28) is obtained with considerations similar to the derivation of the Chebyshev inequality.

EX 1.3. (a) Show that we can factorize the bivariate GD (1.35a) with zero mean and equal variances (⟨x⟩ = ⟨y⟩ = 0; σ² = a = b) in the form
p(x,y) = γ^{−1/2} p(x) p((y − rx)/√γ);  γ = 1 − r²,
where p(x) is the univariate GD (1.29).
(b) Calculate the conditional distribution (see (1.17)) of the bivariate GD (1.35a).
Hint: (c is the covariance matrix)
p_{1|1}(x | y) = √(c_{yy}/(2πD)) exp[−c_{yy}(x − y c_{xy}/c_{yy})²/(2D)];  D = Det(c).
Verify that the latter line corresponds to a N[y c_{xy}/c_{yy}, c_{xx} − c_{xy}²/c_{yy}] distribution.

EX 1.4. Prove that (1.53) is a solution of the Chapman-Kolmogorov equation (1.52).
Hint: The integrand in (1.52) is given by
T = p_{1|1}(y_2,t_2 | y_1,t_1) p_{1|1}(y_3,t_3 | y_2,t_2).
Use the substitution
u = t_2 − t_1 > 0;  v = t_3 − t_2 > 0;  t_3 − t_1 = v + u > 0,
introduce (1.53) into (1.52) and put
T = (4π²uv)^{−1/2} exp(−A);
A = (y_3 − y_2)²/(2v) + (y_2 − y_1)²/(2u) = a_2y_2² + a_1y_2 + a_0,
with a_k = a_k(y_1,y_3), k = 0,1,2. Use the standard substitution (see EX 1.1) to obtain
∫ T dy_2 = (4π²uv)^{−1/2} exp[−F(y_3,y_1)] ∫ exp(−K)dy_2;
F = (4a_0a_2 − a_1²)/(4a_2);  K = a_2(y_2 + a_1/(2a_2))²,
and compare the result of the integration with the right hand side of (1.52).

EX 1.5. Verify that the solution of (1.54) is given by (1.55). Prove also its initial condition.
Hint: To verify the initial condition use the integral
I = lim_{t→0} ∫_{−∞}^∞ (2πt)^{−1/2} exp[−y²/(2t)] H(y)dy,
where H(y) is a continuous function. Use the standard substitution in the form y = √(2t)z. To verify the solution (1.55) use the same substitution as in EX 1.4.

EX 1.6. Calculate the average ⟨y_1^n(t_1)y_2^m(t_2)⟩; y_k = B_{t_k}, k = 1,2; n,m ∈ N, with the use of the Markovian bivariate PD (1.60).
Hint: Use a standard substitution of the type given in EX 1.1.

EX 1.7. Verify that the variable W̃_t defined in (1.65) has the autocorrelation ⟨W̃_tW̃_s⟩ = t ∧ s. To perform this task we calculate for a
fixed value of a > 0
⟨W̃_tW̃_s⟩ = ⟨B_{t+a}B_{s+a}⟩ − ⟨B_{t+a}B_a⟩ − ⟨B_{s+a}B_a⟩ + ⟨B_a²⟩
         = (s ∧ t + a) − a − a + a = s ∧ t.

EX 1.8. Prove that the scaled and translated WS's defined in (1.74) are WS's.
Hint: To cover the scaled WS's, put
H_{u,v} = (ab)⁻¹ M^{(2)}_{a²u,b²v} = (ab)⁻¹ x_1(a²u)x_2(b²v).
Because of ⟨x_1(α)x_2(β)⟩ = 0 we have ⟨H_{u,v}⟩ = 0. Its autocorrelation is given by
⟨H_{u,v}H_{p,q}⟩ = (ab)⁻² ⟨x_1(a²u)x_1(a²p)⟩⟨x_2(b²v)x_2(b²q)⟩
               = (a²u ∧ a²p)(b²v ∧ b²q)/(ab)² = (u ∧ p)(v ∧ q).
For the case of the translated quantity use the considerations of EX 1.7.

EX 1.9. Verify the differential (1.109) of two linear stochastic functions.
Hint: According to (1.89) we have dB_t² = 2B_t dB_t + dt.

EX 1.10. Show that the "inverted" stochastic variables
Z_t = tB_{1/t};  H_{s,t} = st M^{(2)}_{1/s,1/t},
are also a WP (Z_t) and a WS (H_{s,t}).

EX 1.11. Use the bivariate PD (1.60) for a Markov process to calculate the two-variable characteristic function of a Brownian motion. Verify the result
G(u,v) = ⟨exp[i(uB_1 + vB_2)]⟩ = exp{−½[u²t_1 + v²t_2 + 2uv(t_1 ∧ t_2)]};  B_k = B_{t_k},
and compare its 1D limit with (1.58a).

EX 1.12. Calculate the probability P of a particle to stay in the interior of the circle
D = {(x,y) ∈ R² | x² + y² ≤ R²}.
Hint: Assume that the components of the vector (x,y) are statistically independent and use the bivariate GD (1.35) with zero mean to calculate
P[B_t ∈ D] = ∫∫_D p(x,y)dx dy.

EX 1.13. Consider the Brownian motion on the perimeter of ellipses and hyperbolas
(i) ellipses: x(t) = cos(B_t), y(t) = sin(B_t),
(ii) hyperbolas: x(t) = cosh(B_t), y(t) = sinh(B_t).
Use the Ito formula to obtain the corresponding SDE's and calculate ⟨x(t)⟩ and ⟨y(t)⟩.

EX 1.14. Given the variables
Z_1 = (B_t¹ − B_t²)⁴ + (B_t¹)⁵;  Z_2 = (B_t¹ − B_t²)³ + (B_t¹)⁶,
where B_t¹ and B_t² are independent WP's, find the SDE's governing dZ_1 and dZ_2.

EX 1.15. The random function
R(t) = [(B_t¹)² + ⋯ + (B_t^n)²]^{1/2}
is considered as the distance of an n-dimensional vector of independent WP's from the origin. Verify that its differential has the form
dR(t) = Σ_{k=1}^n B_t^k dB_t^k/R + [(n − 1)/(2R)]dt.

EX 1.16. Consider the stochastic function
x(t) = exp(aB_t − a²t/2);  a = const.
(a) Show that x(t) = x(t − s)x(s). Hint: Use (1.65).
(b) Show that x(t) is a martingale.
EX 1.17. The Wiener-Levy theorem is given by
B_t = Σ_{k=1}^∞ A_k ∫_0^t ψ_k(z)dz,   (E.1)
where the A_k are a set of IID N(0,1) variables and ψ_k; k = 1,2,… is a set of orthonormal functions on [0,1]
∫_0^1 ψ_k(z)ψ_m(z)dz = δ_{km}.
Show that (E.1) defines a WP.
Hint: The autocorrelation is given by ⟨B_tB_s⟩ = t ∧ s. Show that
(∂/∂t)⟨B_tB_s⟩ = (∂/∂t)(t ∧ s) = Σ_k ψ_k(t) ∫_0^s ψ_k(z)dz.
Multiply the last line by ψ_m(t) and integrate the resulting equation from zero to unity.

EX 1.18. A bivariate PD of two variables x,y is given by p(x,y).
(a) Calculate the PD of the "new" variable z and its average for (i) z = x ± y, (ii) z = xy.
Hint: Use (1.41b).
(b) Find the PD f_{uv}(u,v) for the "new" variables u = x + y; v = x − y.

EX 1.19. The Ito representation of a given stochastic process F(t,ω) has the form
F(t,ω) = ⟨F(t,ω)⟩ + ∫_0^t f(s,ω)dB_s,
where f(s,ω) is another stochastic process. Find f(s,ω) for the particular cases
(i) F(t,ω) = const;  (ii) F(t,ω) = B_t^n, n = 1,2,3;  (iii) F(t,ω) = exp(B_t).

EX 1.20. Calculate the probability of the sum of n identically independent HPP's [see (A.8)].
CHAPTER 2

STOCHASTIC DIFFERENTIAL EQUATIONS

There are two classes of ordinary differential equations that contain stochastic influences:

(i) Ordinary differential equations (ODE) with stochastic coefficient functions and/or random initial or boundary conditions that contain no stochastic differentials. We consider this type of ODE in Section 4.3, where we will analyze eigenvalue problems. For these ODE's we can take advantage of all traditional methods of analysis. Here we give only the simple example of a linear 1st order ODE
dx/dt = −px;  p = p(ω);  x(0) = x_0(ω),
where the coefficient function p and the initial condition are x-independent random variables. The solution is x(t) = x_0 exp(−pt) and we obtain the moments of this solution in the form ⟨x^m⟩ = ⟨x_0^m exp(−pmt)⟩. Assuming that the initial condition and the parameter p are identically independently N(0,a) distributed, this yields
⟨x^{2m}⟩ = [(2m)!/(2^m m!)] a^m exp(2am²t²);  ⟨x^{2m+1}⟩ = 0. ♣

(ii) We focus in this book, with a few exceptions in Chapter 4, exclusively on initial value problems for ordinary SDE's of the type (1.123) that contain stochastic differentials of the Brownian motions. The initial values may also vary randomly, x_n(0) = x_n(ω). In this chapter we introduce the analytical tools to reach this goal. However, in many cases we have to resort to numerical procedures, and we perform this task in Chapter 5.

The primary questions are:
(i) How can we solve the equations, or at least approximate the solutions, and what are the properties of the latter?
(ii) Can we derive criteria for the existence and uniqueness of the solutions?

The theory is, however, only in a state of infancy and we will be happy if we are able to answer these questions in the case of the simplest problems. The majority of the knowledge pertains to linear ordinary SDE's; nonlinear problems are covered only in examples. Partial stochastic differential equations (PSDE) will be covered in Chapter 4 of this book.

2.1. One-Dimensional Equations

To introduce the ideas we begin with two simple problems.

2.1.1. Growth of populations

We consider here the growth of an isolated population. N(t) is the number of members of the population at the instant t. The growth (or decay) rate is proportional to the number of members and this growth is, in the absence of stochastic effects, exponential. We introduce additionally a stochastic term that is also proportional to N. We write the SDE first in the traditional way
dN/dt = rN + uW(t)N;  r, u = const,   (2.1)
where W(t) stands for the white noise. It is, however, convenient to write Equation (2.1) in a form analogous to (1.99.1). Thus, we obtain
dN = a dt + b dB_t;  a = rN;  b = uN;  dB_t = W(t)dt.   (2.2)
Equation (2.2) is a first order homogeneous ordinary SDE for the desired solution N(B_t,t). We call the function a(N,t) (the coefficient of dt) the drift coefficient and the function b(N,t) (the coefficient of dB_t) the diffusion coefficient. SDE's with drift coefficients that are at most first order polynomials in N and diffusion coefficients that are independent of N are called linear equations. Equation (2.2) is hence a nonlinear SDE. We solve the problem with the use of the
Ito formula. Thus we introduce the function Y = g(N) = ln N and apply (1.99.3) (see also (1.129))
dY = d(ln N) = (ag_N + ½b²g_NN)dt + bg_N dB_t = (r − u²/2)dt + u dB_t.   (2.3)
Equation (2.3) is now a SDE with constant coefficients. Thus we can directly integrate (2.3) and we obtain its solution in the form
N = N_0 exp[(r − u²/2)t + uB_t];  N_0 = N(t = 0).   (2.4)
There are two classes of initial conditions (IC's):
(i) The initial condition (here the initial population N_0) is a deterministic quantity.
(ii) The initial condition is a stochastic variable. In this case we assume that N_0 is independent of the Brownian motion.

The relation (2.4) is only a formal solution and does not offer much information about the properties of the solutions. We obtain more insight from the lowest moments of the formal solution (2.4). Thus we calculate the mean and the variance of N and we obtain
⟨N(t)⟩ = ⟨N_0⟩ exp[(r − u²/2)t] ⟨exp(uB_t)⟩ = ⟨N_0⟩ exp(rt),   (2.5)
where we used the characteristic function (1.58a). We see that the mean or average (2.5) represents the deterministic limit solution (u = 0) of the SDE. We calculate the variance and we obtain
Var(N) = exp(2rt)[⟨N_0²⟩ exp(u²t) − ⟨N_0⟩²].   (2.6)
An important special case is given by the combination of the parameters r = u²/2. This leads to
N(t) = N_0 exp(uB_t);  ⟨N(t)⟩ = ⟨N_0⟩ exp(u²t/2);
Var(N) = ⟨N_0²⟩ exp(2u²t) − ⟨N_0⟩² exp(u²t).   (2.7)
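The moment formulas (2.5) and (2.6) can be checked directly by sampling the formal solution (2.4) (a Python/NumPy sketch with a deterministic initial population N_0; all parameter values are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(8)
    r, u, N0, t, M = 1.0, 0.5, 1.0, 1.0, 10**6
    B_t = np.sqrt(t)*rng.standard_normal(M)
    N = N0*np.exp((r - u**2/2)*t + u*B_t)        # the solution (2.4)

    print(N.mean(), N0*np.exp(r*t))              # Eq. (2.5)
    # Eq. (2.6) with <N0^2> = <N0>^2 = N0^2 for a deterministic initial condition
    print(N.var(), np.exp(2*r*t)*N0**2*(np.exp(u**2*t) - 1))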