Electronic noise
Three types of noise are commonly observed.
Shot noise. Electric currents are not continuous but are
ultimately made up from large numbers of moving
charge carriers, typically electrons. Shot noise arises
from statistical fluctuations in the flow of charge
carriers: if a single bit of data is represented by 10,000
electrons, the magnitude of the fluctuations will
typically be about 1%. When looked at as a waveform
over time, shot noise has a flat frequency spectrum.
AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.1/123
Thermal (Johnson) noise. Even though an electric
current may have a definite overall direction, the
individual charge carriers within it will exhibit random
motions. In a material at nonzero temperature, the
energy of these motions and thus the intensity of the
thermal noise they produce is essentially proportional
to temperature. (At very low temperatures, quantum
mechanical fluctuations still yield random motion in
most materials.) Like shot noise, thermal noise has a
flat frequency spectrum.
Flicker (1/f) noise. Almost all electronic devices also
exhibit a third kind of noise, whose main characteristic
is that its spectrum is not flat, but instead goes roughly
like 1/f over a wide range of frequencies. Such a
spectrum implies the presence of large low-frequency
fluctuations, and indeed fluctuations are often seen
over timescales of many minutes or even hours. Unlike
the types of noise described above, this kind of noise
can be affected by details of the construction of
individual devices. Although it has been observed since the 1920s, its origins remain somewhat mysterious.
Shot noise in electronic devices consists of random
fluctuations of the electric current in an electrical
conductor, which are caused by the fact that the current is
carried by discrete charges (electrons).
Shot noise is to be distinguished from current fluctuations
in equilibrium, which happen without any applied voltage
and without any average current flowing. These equilibrium
current fluctuations are known as thermal noise.
[Figure 1: Shot noise: a Poisson impulse train Z(t) is filtered by a system h(t) to produce S(t).]
Given a set of Poisson points ti with average density λ and a system with impulse response h(t),
S(t) = Σ_i h(t − ti)
is an SSS process known as shot noise. Equivalently,
S(t) = Z(t) ∗ h(t) = ∫_{−∞}^{∞} h(α) Z(t − α) dα
E{S(t)} = ∫_{−∞}^{∞} h(α) E{Z(t − α)} dα = ηZ ∫_{−∞}^{∞} h(α) dα = ηZ H(0)
Z(t) = Σ_i δ(t − ti) = (d/dt) Σ_i u(t − ti) = X′(t)
where X(t) = Σ_i u(t − ti) is a Poisson process.
RXX(t1, t2) = λ^2 t1 t2 + λ min(t1, t2)
RXZ(t1, t2) = (∂/∂t2) RXX(t1, t2) = λ^2 t1 + λ u(t1 − t2)
RZZ(t1, t2) = λ^2 + λ δ(t1 − t2)
SZZ(ω) = 2πλ^2 δ(ω) + λ
SSS(ω) = 2πλ^2 |H(0)|^2 δ(ω) + λ |H(ω)|^2
RSS(τ) = λ^2 |H(0)|^2 + λ ρ(τ),  ρ(τ) = h(τ) ∗ h(−τ)
CSS(τ) = λ ρ(τ)
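The mean formula E{S(t)} = λH(0) (taking ηZ = λ for a unit-amplitude Poisson impulse train) can be checked with a small Monte Carlo sketch. This is an illustrative simulation, not part of the original notes; the rate λ = 5, the window length, and the one-sided exponential h(t) = e^{−t}u(t) (so H(0) = 1) are all assumed example values.

```python
import math
import random

random.seed(1)

lam = 5.0       # assumed Poisson rate (points per unit time)
T = 40.0        # assumed observation window
t_obs = 20.0    # observe S(t) well inside the window

def h(t):
    # assumed impulse response: one-sided exponential, so H(0) = 1
    return math.exp(-t) if t >= 0.0 else 0.0

def sample_S(t):
    # generate Poisson points on [0, T] from exponential inter-arrival gaps
    s, pts = 0.0, []
    while True:
        s += random.expovariate(lam)
        if s > T:
            break
        pts.append(s)
    # shot noise: superpose one impulse response per Poisson point
    return sum(h(t - ti) for ti in pts)

runs = 4000
mean_S = sum(sample_S(t_obs) for _ in range(runs)) / runs
print(mean_S)   # should be close to lam * H(0) = 5
```

The sample mean fluctuates around λH(0) with variance λρ(0)/runs, consistent with the covariance formula CSS(τ) = λρ(τ) above.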
Markov process:
In probability theory, a Markov process is a stochastic process characterized as follows: the state ck at time k takes one of a finite number of values in {1, ···, M}. Under the assumption that the process runs only from time 0 to time N and that the initial and final states are known, the state sequence is then represented by a finite vector C = (c0, ···, cN).
Let P(ck | c0, c1, ···, ck−1) denote the probability (chance of occurrence) of the state ck at time k conditioned on all states up to time k − 1. Suppose a process were such that ck depended only on the previous state ck−1 and was independent of all earlier states. Such a process is known as a first-order Markov process.
This means that the probability of being in state ck at time
k, given all states up to time k − 1 depends only on the
previous state, i.e. ck−1 at time k − 1:
P(ck|c0, c1, . . . , ck−1) = P(ck|ck−1).
For an nth-order Markov process,
P(ck|c0, c1, . . . , ck−1) = P(ck|ck−n, . . . , ck−1).
The underlying process is assumed to be a Markov process with the following characteristics:
finite-state: the number of states M is finite.
discrete-time: going from one state to another takes the same unit of time.
observed in memoryless noise: each observation depends probabilistically only on the most recent state transition.
In probability theory, a stochastic process has the Markov
property if the conditional probability distribution of future
states of the process, given the present state, depends only
upon the current state, i.e. it is conditionally independent of
the past states (the path of the process) given the present
state. A process with the Markov property is usually called
a Markov process, and may be described as Markovian.
Mathematically, if X(t), t > 0, is a stochastic process, the
Markov property states that
Pr[X(t+h) = y|X(s) = x(s), s ≤ t] = Pr[X(t+h) = y|X(t) = x(t)], ∀h > 0
Markov processes are typically termed (time-)
homogeneous if
Pr[X(t + h) = y|X(t) = x(t)] = Pr[X(h) = y|X(0) = x(0)], ∀t, h > 0
and otherwise are termed (time-)inhomogeneous (or (time-)nonhomogeneous). Homogeneous Markov processes, usually being simpler than inhomogeneous ones, form the most important class of Markov processes.
In some cases, apparently non-Markovian processes may
still have Markovian representations, constructed by
expanding the concept of the ’current’ and ’future’ states.
Let X be a non-Markovian process. Then we define a
process Y, such that each state of Y represents a
time-interval of states of X, i.e. mathematically
Y (t) = {X(s) : s ∈ [a(t), b(t)]}.
If Y has the Markov property, then it is a Markovian representation of X. In this case, X is also called a second-order Markov process. Higher-order Markov processes are defined analogously.
An example of a non-Markovian process with a Markovian
representation is a moving average time series.
Xt = εt + Σ_{i=1}^{q} θi εt−i
where θ1, ···, θq are the parameters of the model and εt, εt−1, ··· are the error terms. A moving average model is essentially a finite impulse response filter with some additional interpretation placed on it.
The most famous Markov processes are Markov chains,
but many other processes, including Brownian motion, are
Markovian.
Markov chain:
A collection of random variables {Xt} (where the index
runs through 0, 1, . . . ) having the property that, given the
present, the future is conditionally independent of the past.
In other words,
P{Xt = j | X0 = i0, X1 = i1, ···, Xt−1 = it−1} = P{Xt = j | Xt−1 = it−1}
If a Markov sequence of random variables Xn takes the discrete values {a1, ···, aN}, then
P{xn = ain |xn−1 = ain−1 , · · · , x1 = ai1 } = P{xn = ain |xn−1 = ain−1 }
and the sequence Xn is called a Markov chain. A simple
random walk is an example of a Markov chain.
Example of Markov chains
The probabilities of weather conditions, given the weather
on the preceding day, can be represented by a transition
matrix:
P =
[ 0.9  0.5
  0.1  0.5 ]
The matrix P represents the weather model in which a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day.
The columns can be labelled "sunny" and "rainy" respectively, and the rows can be labelled in the same order. Pij is the probability that, if a given day is of type j, it will be followed by a day of type i. Notice that the columns of P sum to 1. This is because P is a stochastic matrix.
Predicting the weather
The weather on day 0 is known to be sunny. This is
represented by a vector in which the “sunny” entry is
100%, and the “rainy” entry is 0%:
x(0) =
[ 1
  0 ]
The weather on day 1 can be predicted by:
x(1) = P x(0) =
[ 0.9  0.5 ] [ 1 ]   [ 0.9 ]
[ 0.1  0.5 ] [ 0 ] = [ 0.1 ]
Thus, there is a 90% chance that day 1 will also be sunny.
The weather on day 2 can be predicted in the same way:
x(2) = P x(1) = P^2 x(0) =
[ 0.9  0.5 ]^2 [ 1 ]   [ 0.86 ]
[ 0.1  0.5 ]   [ 0 ] = [ 0.14 ]
General rules for day n are:
x(n) = P x(n−1)
x(n) = P^n x(0)
Steady state of the weather
In this example, predictions for the weather on more distant days become increasingly inaccurate and tend towards a steady state vector. This vector represents the probabilities of sunny and rainy weather on all days, and is independent of the initial weather.
The steady state vector is defined as:
q = lim_{n→∞} x(n)
but the limit exists only if P is a regular transition matrix (that is, some power P^n has all non-zero entries).
Since q is independent of the initial conditions, it must be unchanged when transformed by P. This makes it an eigenvector (with eigenvalue 1), and means it can be derived from P. For the weather example:
P =
[ 0.9  0.5
  0.1  0.5 ]
Pq = q
(I − P)q = 0 ⇒
[  0.1  −0.5 ]     [ 0 ]
[ −0.1   0.5 ] q = [ 0 ]
q1 + q2 = 1,  0.1 q1 − 0.5 q2 = 0 ⇒ q1 = 0.833, q2 = 0.167
In conclusion, in the long term about 83% of days are sunny.
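The steady-state result can be checked numerically by iterating x(n) = P x(n−1) from the sunny start. A minimal sketch in plain Python, using the weather matrix from the example above:

```python
# Hypothetical numeric check of the weather example.
P = [[0.9, 0.5],
     [0.1, 0.5]]  # column-stochastic: columns sum to 1

def step(P, x):
    # one transition: x(n) = P x(n-1)
    return [P[0][0]*x[0] + P[0][1]*x[1],
            P[1][0]*x[0] + P[1][1]*x[1]]

x = [1.0, 0.0]          # day 0: sunny with probability 1
x = step(P, x)
print(x)                # day 1: [0.9, 0.1]
for _ in range(200):    # iterate toward the steady state
    x = step(P, x)
print(x)                # approaches [0.8333..., 0.1666...] = (5/6, 1/6)
```

Convergence is fast because the second eigenvalue of P is 0.4, so the transient decays like 0.4^n.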
For a homogeneous Markov chain:
The statistics of any order can be determined in terms of the conditional PDF f(Xn|Xn−1) and f(Xn),
f(Xn|Xn−1, ···, X0) = f(Xn|Xn−1)
and if Xn is stationary then f(Xn) and f(Xn|Xn−1) are invariant to a shift of origin. A Markov process is homogeneous if the conditional PDF f(Xn|Xn−1) is invariant to a shift of origin.
Chapman-Kolmogorov Equation
f(xn|xs) = ∫_{−∞}^{∞} f(xn|xr) f(xr|xs) dxr
which gives the transitional densities of a Markov sequence. Here n > r > s are any integers.
Continuous time-discrete state Markov chain (CTDSMC)
A CTDSMC is a Markov process X(t) consisting of a family of staircase functions (discrete states) with discontinuities at random times tn.
[Figure 2: CTDSMC: a staircase sample function jumping between states ai and aj at random times.]
πij(t1, t2) = P{X(t2) = aj | X(t1) = ai}
[Figure 3: The imbedded sequence qn = X(tn+), n = 1, 2, ···, sampled just after the jump times t1, t2, ···, tk, is a Markov chain.]
The values of X(t) at these random points form a discrete-state Markov sequence called the Markov chain imbedded in the process X(t).
A discrete-state stochastic process is called semi-Markov if it is not Markov but the imbedded sequence qn is a Markov chain.
Pi(t) = P{X(t) = ai}   (state probabilities)
πij(t1, t2) = P{X(t2) = aj | X(t1) = ai}   (transition probabilities)
Σ_j πij(t1, t2) = 1,   Σ_i Pi(t1) πij(t1, t2) = Pj(t2)
A discrete-time Markov chain is a Markov chain in which Xn has a countable number of states ai. This kind of Markov process is specified by:
state probabilities:
Pi(n) = P{Xn = ai},  i = 1, 2, ···
transition probabilities:
πij(n1, n2) = P{Xn2 = aj | Xn1 = ai},   Σ_j πij(n1, n2) = 1
P{Xn2 = aj} = Σ_i P{Xn2 = aj, Xn1 = ai}
Chapman-Kolmogorov equation, discrete version (n1 < n2 < n3):
πij(n1, n3) = Σ_r πir(n1, n2) πrj(n2, n3)
If Xn is homogeneous, then the transition probabilities depend only on m = n2 − n1 ⇒
πij(n1, n2) = πij(m) = P{Xn+m = aj | Xn = ai}
and the Chapman-Kolmogorov equation
πij(n1, n3) = Σ_r πir(n1, n2) πrj(n2, n3)
becomes:
Πij(n + k) = Σ_r Πir(k) Πrj(n)
where the Πij(·) are the elements of the transition matrix Π(·), so in matrix form
Π(n + k) = Π(k) Π(n)
the right hand side giving the transition matrix at time n + k.
For a homogeneous discrete Markov chain
P{Xn = aj} = Σ_i P{X0 = ai} πij(n) ⇒ P(n) = P(0) Π^n
Π =
[ π11  π12  ···  π1N
  π21  π22  ···  π2N
   ·    ·   ···   ·
  πN1  πN2  ···  πNN ],   Σ_j πij = 1
Π^n is the state transition matrix after n steps.
Steady state probabilities:
P = PΠ,   Σ_{i=1}^{N} pi = 1
Random walk with two reflecting walls.
Π =
[  0    1    0    0    0
  0.5   0   0.5   0    0
   0   0.5   0   0.5   0
   0    0   0.5   0   0.5
   0    0    0    1    0 ]
Steady state probabilities:
P = PΠ
eig(Π^T) provides the eigenvector corresponding to the eigenvalue 1:
P = [ 1/8  1/4  1/4  1/4  1/8 ]
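The claimed steady-state vector can be verified with exact rational arithmetic: it suffices to check that P = PΠ. (For this particular chain Π^n itself does not converge, since the walk is periodic, but the stationarity equation still holds.) A minimal sketch:

```python
from fractions import Fraction as F

# transition matrix of the reflecting random walk (row i = outgoing probabilities of state i)
Pi = [[F(0),   F(1),   F(0),   F(0),   F(0)],
      [F(1,2), F(0),   F(1,2), F(0),   F(0)],
      [F(0),   F(1,2), F(0),   F(1,2), F(0)],
      [F(0),   F(0),   F(1,2), F(0),   F(1,2)],
      [F(0),   F(0),   F(0),   F(1),   F(0)]]

P = [F(1,8), F(1,4), F(1,4), F(1,4), F(1,8)]   # claimed steady state

# check P = P·Pi exactly (row vector times matrix)
PPi = [sum(P[i] * Pi[i][j] for i in range(5)) for j in range(5)]
print(PPi == P)           # True: P is stationary
print(sum(P) == 1)        # True: probabilities sum to one
```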
Continuous time Markov chain, discrete state:
Stationary homogeneous transition probabilities:
πij(t) = P{X(t + s) = aj | X(s) = ai},  i, j = 0, 1, ···, m,  s, t ≥ 0
lim_{t→0} πij(t) = 0 for i ≠ j, and 1 for i = j
lim_{t→∞} πij(t) = Pj = steady state probability, independent of the initial state probability vector
Chapman-Kolmogorov equation:
πij(t) = Σ_{ℓ=0}^{m} πiℓ(ν) πℓj(t − ν),  0 ≤ ν ≤ t
Π(τ + α) = Π(τ) Π(α)
Steady state probabilities satisfy:
Σ_{j=0}^{m} Pj = 1
Σ_{i=0}^{m} Pi πij(t) = Pj,  j = 0, 1, ···, m,  t > 0
λj = lim_{∆t→0} [1 − πjj(∆t)]/∆t = −(d/dt) πjj(t) |_{t=0}
λj = rate (intensity) of transition
λij = lim_{∆t→0} πij(∆t)/∆t = (d/dt) πij(t) |_{t=0}
λij = rate (intensity) of passage
Π′(0+) = Λ =
[ λ11  ···  λ1n
  λ21  ···  λ2n
   ·   ···   ·
  λn1  ···  λnn ]
Σ_j πij(τ) = 1 ⇒ (d/dτ) Σ_j πij(τ) = 0 ⇒ Σ_j λij = 0
λii + Σ_{j≠i} λij = 0 ⇒ µi = −λii
P{X(t + ∆t) = aj | X(t) = ai} = 1 − µi∆t for i = j, and λij∆t for i ≠ j ⇒
(d/dα){Π(τ + α) = Π(τ)Π(α)} |_{α=0}:
Π′(τ) = Π(τ)Π′(0) ⇒ Π′(τ) = Π(τ)Λ,  Π(0) = I
Π(τ) = e^{Λτ}
Σ_i Pi(t1) πij(t1, t2) = Pj(t2) ⇔ P(t)Π(τ) = P(t + τ)
For a homogeneous Markov chain (homogeneity condition):
(d/dτ){P(t)Π(τ) = P(t + τ)} |_{τ=0} ⇒
P(t)Λ = P′(t),  P(0) = P0 ⇒ P(t) = P0 e^{Λt}
Equivalently, by Laplace transform:
sP(s) − P0 = P(s)Λ ⇒ P(s) = P0 (sI − Λ)^{−1} ⇔ P(t) = P0 e^{Λt}
Telegraph signal:
[Figure 4: Telegraph signal: a two-state process switching between the levels A and −A with transition rates µ1 and µ2.]
The process takes on the values A and −A.
P′(t) = P(t) Λ,  Λ =
[ −µ1   µ1
   µ2  −µ2 ],  µ1, µ2 > 0
P1(t) = [µ2/(µ1 + µ2)] [1 − e^{−(µ1+µ2)t}] + P1(0) e^{−(µ1+µ2)t}
P2(t) = 1 − P1(t)
E{X(t)} = A[P1(t) − P2(t)] = A[2P1(t) − 1]
E{X(t1)X(t2)} = RXX(t1, t2)
By using Π′(τ) = Π(τ)Λ we have
RXX(t, t + τ) = A^2 [ P{X(t + τ) = A, X(t) = A} + P{X(t + τ) = −A, X(t) = −A}
− P{X(t + τ) = −A, X(t) = A} − P{X(t + τ) = A, X(t) = −A} ]
We then conclude that the telegraph signal is a nonstationary signal. But as t → ∞, E{X(t)} becomes constant and E{X(t1)X(t2)} becomes a function of the time lag only.
Cayley-Hamilton theorem:
Δ(λ) = |A − λI| = (−λ)^n + Σ_{i=0}^{n−1} ci λ^i = characteristic polynomial
Δ(A) = (−1)^n A^n + Σ_{i=0}^{n−1} ci A^i
A = MΛM^{−1} ⇒ A^k = MΛ^k M^{−1} ⇒
Δ(A) = M [ (−1)^n Λ^n + Σ_{i=0}^{n−1} ci Λ^i ] M^{−1} = M Δ(Λ) M^{−1} = 0
since Δ(λi) = 0 for every eigenvalue λi.
A =
[ 3  1
  1  2 ]
Δ(λ) = |A − λI| = (3 − λ)(2 − λ) − 1 = λ^2 − 5λ + 5 ⇒
Δ(A) = A^2 − 5A + 5I =
[ 0  0
  0  0 ]
In general, Δ(A) = (−1)^n A^n + Σ_{i=0}^{n−1} ci A^i = 0 ⇒
(−1)^n A^n + ··· + c1 A + c0 I = 0 ⇒ (−1)^n A^{n−1} + ··· + c1 I + c0 A^{−1} = 0
A^{−1} = −(1/c0) [ (−1)^n A^{n−1} + Σ_{i=1}^{n−1} ci A^{i−1} ]
For
A =
[ 3  1
  1  2 ]
Δ(A) = A^2 − 5A + 5I = 0 ⇒ A^{−1} = −(1/5)(A − 5I) = (1/5)
[  2  −1
  −1   3 ]
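The inverse obtained from Cayley-Hamilton can be checked with exact rational arithmetic. A minimal sketch:

```python
from fractions import Fraction as F

A = [[F(3), F(1)],
     [F(1), F(2)]]

# Cayley-Hamilton: A^2 - 5A + 5I = 0  =>  A^{-1} = -(1/5)(A - 5I)
Ainv = [[-(A[i][j] - (F(5) if i == j else 0)) / 5 for j in range(2)]
        for i in range(2)]
print(Ainv)   # [[2/5, -1/5], [-1/5, 3/5]] as Fractions

# verify A * Ainv = I exactly
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod == [[F(1), F(0)], [F(0), F(1)]])   # True
```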
Polynomial division:
P(x) = 3x^4 + 2x^2 + x + 1,  P1(x) = x^2 − 3
P(x) = P1(x)(3x^2 + 11) + R(x),  R(x) = x + 34
This is true for matrices as well:
P(A) = Δ(A)Q(A) + R(A),  Δ(A) = 0 ⇒ P(A) = R(A)
P(A) = A^4 + 3A^3 + 2A^2 + A + I,  A =
[ 3  1
  1  2 ]
Δ(x) = x^2 − 5x + 5
P(x) = Δ(x)(x^2 + 8x + 37) + (146x − 184) ⇒
P(A) = 146A − 184I
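The shortcut P(A) = 146A − 184I can be verified against a direct evaluation of the polynomial. A minimal sketch in plain Python; the small matrix helpers are defined ad hoc for the example:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matadd(*Ms):
    n = len(Ms[0])
    return [[sum(M[i][j] for M in Ms) for j in range(n)] for i in range(n)]

def scal(c, M):
    return [[c * x for x in row] for row in M]

A = [[3, 1], [1, 2]]
I = [[1, 0], [0, 1]]

A2 = matmul(A, A)
A3 = matmul(A2, A)
A4 = matmul(A3, A)

# P(A) = A^4 + 3A^3 + 2A^2 + A + I, computed directly (integer arithmetic, exact)
direct = matadd(A4, scal(3, A3), scal(2, A2), A, I)
# Cayley-Hamilton shortcut: P(A) = 146A - 184I
shortcut = matadd(scal(146, A), scal(-184, I))
print(direct == shortcut)   # True
```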
f(x) = Σ_{k=0}^{∞} γk x^k ⇒ f(x) = Δ(x) Σ_{k=0}^{∞} βk x^k + R(x)
R(x) = α0 + α1 x + ··· + α_{n−1} x^{n−1}
In order to find the {αi}, we note that Δ(λi) = 0 ⇒ R(λi) = f(λi)
Determine sin A for
A =
[ −3   1
   0  −2 ]
Δ(λ) = (−3 − λ)(−2 − λ),  R(x) = α0 + α1 x
sin(−2) = α0 + α1(−2)
sin(−3) = α0 + α1(−3)
α0 = 3 sin(−2) − 2 sin(−3),  α1 = sin(−2) − sin(−3)
sin A = α0 I + α1 A =
[ sin(−3)   sin(−2) − sin(−3)
    0       sin(−2) ]
However, if λi is a repeated root:
(d/dλ) Δ(λ) |_{λi} = 0
A =
[  0    1   0
   0    0   1
  27  −27   9 ]
Determine e^{At}.
|A − λI| = (3 − λ)^3 = 0
e^{At} = R(A) = α0 I + α1 A + α2 A^2 ⇒
e^{3t} = α0 + 3α1 + 9α2
t e^{3t} = α1 + 2α2(3)
t^2 e^{3t} = 2α2
e^{At} =
[ 1 − 3t + 4.5t^2    t − 3t^2          0.5t^2
  13.5t^2            1 − 3t − 9t^2     t + 1.5t^2
  27t + 40.5t^2      −27t − 27t^2      1 + 6t + 4.5t^2 ] e^{3t}
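The closed-form e^{At} above can be cross-checked against a truncated Taylor series of the matrix exponential. This is an illustrative numerical check; the value t = 0.1 and the 60-term truncation are assumptions chosen so the series converges well:

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [27.0, -27.0, 9.0]]

def expm_taylor(A, t, terms=60):
    # e^{At} via its Taylor series: sum_k (At)^k / k!
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = matmul(term, [[a * t / k for a in row] for row in A])
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def expm_closed(t):
    # closed form from the repeated eigenvalue 3 (Cayley-Hamilton)
    e = math.exp(3 * t)
    return [[(1 - 3*t + 4.5*t*t) * e, (t - 3*t*t) * e, 0.5*t*t * e],
            [13.5*t*t * e, (1 - 3*t - 9*t*t) * e, (t + 1.5*t*t) * e],
            [(27*t + 40.5*t*t) * e, (-27*t - 27*t*t) * e, (1 + 6*t + 4.5*t*t) * e]]

t = 0.1
E1, E2 = expm_taylor(A, t), expm_closed(t)
err = max(abs(E1[i][j] - E2[i][j]) for i in range(3) for j in range(3))
print(err)   # tiny: the two computations agree
```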
Λ =
[ −µ1   µ1
   µ2  −µ2 ] ⇒ eigenvalues λ = 0, −(µ1 + µ2)
Π(τ) = e^{Λτ} = α0 I + α1 Λ
1 = α0,  e^{−(µ1+µ2)τ} = α0 − (µ1 + µ2)α1 ⇒
Π(τ) = I + [ (1 − e^{−(µ1+µ2)τ}) / (µ1 + µ2) ] Λ
As another strategy,
Π(τ) =
[ π11(τ)  π12(τ)
  π21(τ)  π22(τ) ]
Π′(τ) = Π(τ)Λ,  Π(0) = I ⇒
π11(τ) + π12(τ) = 1,  π21(τ) + π22(τ) = 1
π′11(τ) = −µ1 π11(τ) + µ2 π12(τ),  π11(0) = 1
π′11(τ) = −µ1 π11(τ) + µ2 [1 − π11(τ)]
Taking the Laplace transform:
s π11(s) − 1 = −µ1 π11(s) + µ2/s − µ2 π11(s)
π11(s) = 1/(s + µ1 + µ2) + µ2/[s(s + µ1 + µ2)] ⇒
π11(τ) = [µ1/(µ1 + µ2)] e^{−(µ1+µ2)τ} + µ2/(µ1 + µ2)
π′22(τ) = µ1 π21(τ) − µ2 π22(τ),  π22(0) = 1 ⇒
π22(τ) = [µ2/(µ1 + µ2)] e^{−(µ1+µ2)τ} + µ1/(µ1 + µ2)
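The two strategies can be compared numerically: the Cayley-Hamilton form Π(τ) = I + [(1 − e^{−(µ1+µ2)τ})/(µ1 + µ2)]Λ against the Laplace-transform solutions for π11 and π22. The rates µ1 = 1.5, µ2 = 0.5 and τ = 0.7 are assumed example values:

```python
import math

mu1, mu2 = 1.5, 0.5     # assumed transition rates
tau = 0.7
s = mu1 + mu2

# Cayley-Hamilton closed form: Pi(tau) = I + (1 - e^{-s tau})/s * Lambda
Lam = [[-mu1, mu1], [mu2, -mu2]]
a1 = (1.0 - math.exp(-s * tau)) / s
Pi = [[(1.0 if i == j else 0.0) + a1 * Lam[i][j] for j in range(2)] for i in range(2)]

# Laplace-transform solutions for the diagonal entries
pi11 = mu1 / s * math.exp(-s * tau) + mu2 / s
pi22 = mu2 / s * math.exp(-s * tau) + mu1 / s
print(abs(Pi[0][0] - pi11) < 1e-12, abs(Pi[1][1] - pi22) < 1e-12)   # True True
print(abs(Pi[0][0] + Pi[0][1] - 1.0) < 1e-12)   # rows sum to 1: True
```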
For a discrete-time discrete-state Markov chain the transition probabilities are as follows:
Π =
[ 0     0.75  0.25
  0.25  0     0.75
  0.25  0.25  0.5  ]
The steady state and transient probabilities are as follows.
P = PΠ,  Σ_{i=1}^{3} pi = 1 ⇒
p1 = 1/5,  p2 = 7/25,  p3 = 13/25
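The steady-state vector can be recovered by power iteration on the row-stochastic Π. A minimal sketch:

```python
# power iteration for the steady state of the 3-state chain (row-stochastic Pi)
Pi = [[0.0, 0.75, 0.25],
      [0.25, 0.0, 0.75],
      [0.25, 0.25, 0.5]]

p = [1.0, 0.0, 0.0]          # arbitrary initial distribution
for _ in range(200):
    p = [sum(p[i] * Pi[i][j] for i in range(3)) for j in range(3)]

print(p)   # approaches [0.2, 0.28, 0.52] = [1/5, 7/25, 13/25]
```

Convergence is rapid because the non-unit eigenvalues of Π have magnitude 0.25.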
For transient state probabilities:
P(n) = P(n − 1)Π ⇒
P(1) = P(0)Π,  P(2) = P(1)Π, ···,  P(n) = P(0)Π^n
The first strategy uses Cayley-Hamilton:
eig(Π) = {1, −0.25, −0.25}
Π^n = α0 I + α1 Π + α2 Π^2, with n λ^{n−1} = α1 + 2α2 λ at the repeated root ⇒
1^n = α0 + α1 + α2
(−0.25)^n = α0 + (−0.25)α1 + (−0.25)^2 α2
n(−0.25)^{n−1} = α1 + 2(−0.25)α2
α0 = 0.04 + 0.96 (−1/4)^n + 0.2 n (−1/4)^{n−1}
α1 = 0.32 − 0.32 (−1/4)^n + 0.6 n (−1/4)^{n−1}
α2 = 0.64 − 0.64 (−1/4)^n − 0.8 n (−1/4)^{n−1}
P(n) = P(0)Π^n
As for the second strategy, we use the Z transform:
Z{P(n)} = Σ_{n=0}^{∞} P(n) z^n = Σ_{n=0}^{∞} P(0) Π^n z^n = P(0)(I − zΠ)^{−1}
Because |eig(Π)| ≤ 1, (I − zΠ)^{−1} exists for |z| < 1.
I − zΠ =
[ 1        −0.75z    −0.25z
  −0.25z   1         −0.75z
  −0.25z   −0.25z    1 − 0.5z ]
(I − zΠ)^{−1} =
[ 1 − z/2 − 3z^2/16   3z/4 − 5z^2/16     z/4 + 9z^2/16
  z/4 + z^2/16        1 − z/2 − z^2/16   3z/4 + z^2/16
  z/4 + z^2/16        z/4 + 3z^2/16      1 − 3z^2/16    ] / [(1 − z)(1 + 0.25z)^2]
In partial fractions,
(I − zΠ)^{−1} =
[ 5  7  13
  5  7  13
  5  7  13 ] / [25(1 − z)]
+
[ 0  −8   8
  0   2  −2
  0   2  −2 ] / [5(1 + z/4)^2]
+
[ 20   33  −53
  −5    8   −3
  −5  −17   22 ] / [25(1 + z/4)]
Π^n = (1/25)
[ 5  7  13
  5  7  13
  5  7  13 ]   (the ergodic & irreducible part)
+ (1/5)(n + 1)(−1/4)^n
[ 0  −8   8
  0   2  −2
  0   2  −2 ]
+ (1/25)(−1/4)^n
[ 20   33  −53
  −5    8   −3
  −5  −17   22 ]
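The closed form for Π^n can be checked against brute-force matrix powers. A minimal sketch reusing the three matrices from the partial-fraction expansion:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

Pi = [[0.0, 0.75, 0.25],
      [0.25, 0.0, 0.75],
      [0.25, 0.25, 0.5]]

M1 = [[5, 7, 13], [5, 7, 13], [5, 7, 13]]
M2 = [[0, -8, 8], [0, 2, -2], [0, 2, -2]]
M3 = [[20, 33, -53], [-5, 8, -3], [-5, -17, 22]]

def Pi_n(n):
    # closed form obtained from the partial-fraction expansion
    c2 = (n + 1) * (-0.25) ** n / 5.0
    c3 = (-0.25) ** n / 25.0
    return [[M1[i][j] / 25.0 + c2 * M2[i][j] + c3 * M3[i][j]
             for j in range(3)] for i in range(3)]

# compare with repeated multiplication for n = 1..6
P = Pi
ok = True
for n in range(1, 7):
    C = Pi_n(n)
    ok = ok and all(abs(P[i][j] - C[i][j]) < 1e-12 for i in range(3) for j in range(3))
    P = matmul(P, Pi)
print(ok)   # True: the closed form matches Pi^n
```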
If for n → ∞,
P(n) = P0 Π^n
has a limit which is independent of the initial state of the Markov chain, then the Markov chain is called ergodic and irreducible. In general, for an irreducible, ergodic Markov chain:
lim_{n→∞} πij(n) = Pj
These steady state probabilities Pj satisfy
Σ_{j=1}^{n} Pj = 1,  Pj = Σ_{i=1}^{n} Pi πij
Birth-Death process:
A birth is referred to as an arrival and a death as a departure from a physical system.
X(t) is the number of customers in the system at time t.
States = 0, 1, 2, ···, j, j + 1, ···
s is the number of servers.
Pj(t) = P{X(t) = j}
Pj = lim_{t→∞} Pj(t)
λn = mean arrival rate given n customers are in the system
µn = mean service rate given n customers are in the system
A birth-death process can be used to describe how X(t) changes through time. The basic assumption is Poisson arrivals, so that the probability of more than one birth or death at the same instant is zero. Because of Poisson arrivals, when X(t) = j, the PDF of the time to the next birth (arrival) is exponential with parameter λj.
P′0(t) = −λ0 P0(t) + µ1 P1(t)
P′j(t) = −(λj + µj) Pj(t) + λj−1 Pj−1(t) + µj+1 Pj+1(t),  j = 1, 2, ···
Σ_{j=0}^{∞} Pj(t) = 1
In the steady state: P′j(t) = 0
µ1 P1 = λ0 P0
λ0 P0 + µ2 P2 = (λ1 + µ1) P1
···
λj−2 Pj−2 + µj Pj = (λj−1 + µj−1) Pj−1
P1 = (λ0/µ1) P0,  Pj+1 = (λj/µj+1) Pj = [ Π_{k=0}^{j} λk / Π_{k=1}^{j+1} µk ] P0
Cj = Π_{k=0}^{j−1} λk / Π_{k=1}^{j} µk,  Pj = Cj P0,  Σ_{j=0}^{∞} Pj = 1 ⇒
P0 = 1 / (1 + Σ_{j=1}^{∞} Cj)
Specific scenarios:
λj = λ,  µ = mean service rate per busy server
µj = sµ for j ≥ s,  µj = jµ for j < s
In queueing theory, the inter-arrival times (i.e. the times between customers entering the system) are often modeled as exponentially distributed variables. The length of a process that can be thought of as a sequence of several independent tasks is better modeled by a variable following the gamma distribution (which is a sum of several independent exponentially distributed variables).
The exponential distribution is used to model Poisson processes: situations in which an object initially in state A can change to state B with constant probability per unit time λ. The time at which the state actually changes is described by an exponential random variable with parameter λ. Therefore, the integral of the PDF from 0 to T is the probability that the object is in state B at time T.
An important property of the exponential distribution is that it is memoryless. This means that if a random variable T is exponentially distributed, its conditional probability obeys
P(T > s + t | T > t) = P(T > s),  ∀ s, t ≥ 0.
This says that the conditional probability that we need to wait, for example, more than another 10 seconds before the first arrival, given that the first arrival has not yet happened after 30 seconds, is no different from the initial probability that we need to wait more than 10 seconds for the first arrival. This is often misunderstood by students taking courses on probability: the fact that P(T > 40 | T > 30) = P(T > 10) does not mean that the events T > 40 and T > 30 are independent.
To summarize: "memorylessness" of the probability distribution of the waiting time T until the first arrival means
P(T > 40 | T > 30) = P(T > 10)
It does not mean
P(T > 40 | T > 30) = P(T > 40)
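The memorylessness identity can be illustrated by simulation. The rate (mean waiting time 20 seconds) and the sample size are assumed example values:

```python
import random

random.seed(7)
rate = 1.0 / 20.0                 # assumed rate: mean waiting time 20 seconds
samples = [random.expovariate(rate) for _ in range(200_000)]

# unconditional probability of waiting more than 10 seconds
p_gt10 = sum(t > 10 for t in samples) / len(samples)

# conditional probability of waiting past 40, given we are past 30
over30 = [t for t in samples if t > 30]
p_gt40_given_gt30 = sum(t > 40 for t in over30) / len(over30)

print(p_gt10, p_gt40_given_gt30)   # both near exp(-10/20) = 0.6065
```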
The arrival and service processes follow the following PDFs:
fa(t) = λ e^{−λt},  fs(t) = µ e^{−µt}
The inter-arrival and service times in a busy channel have rates λ and µ.
Mean inter-arrival time = 1/λ
Mean time for a busy channel to complete service = 1/µ
Expected number of customers in the queueing system:
L = Σ_{j=0}^{∞} j Pj
Expected queue length (customers waiting, excluding those in service):
Lq = Σ_{j=s}^{∞} (j − s) Pj
W = expected waiting time in the system. Wq = expected waiting time in the queue (excluding service time).
L = λW,  Lq = λWq
W = Wq + 1/µ
If the λj are not all equal, λ̄ replaces λ:
λ̄ = Σ_{j=0}^{∞} λj Pj
System utilization factor:
ρ = λ/(sµ)
ρ is the fraction of time that the servers are busy.
An important case, s = 1:
We assume an unlimited queue length with exponential inter-arrivals and
λ0 = λ1 = λ2 = ··· = λ
We also assume the service times are independent with exponential distribution, and
µ1 = µ2 = µ3 = ··· = µ ⇒
Cj = (λ/µ)^j = ρ^j,  j = 1, 2, ···
Pj = ρ^j P0,  P0 = 1/(1 + Σ_{j=1}^{∞} ρ^j) = 1 − ρ
The steady state probabilities:
Pj = (1 − ρ) ρ^j,  j = 0, 1, 2, ···
Pj, the probability that there are j customers in the system, follows a geometric distribution.
The exponential distribution may be viewed as a
continuous counterpart of the geometric distribution, which
describes the number of Bernoulli trials necessary for a
discrete process to change state. In contrast, the
exponential distribution describes the time for a continuous
process to change state.
Expected number of customers in the system:
L = Σ_{j=0}^{∞} j Pj = Σ_{j=0}^{∞} j (1 − ρ) ρ^j = (1 − ρ) ρ Σ_{j=0}^{∞} (d/dρ) ρ^j = ρ/(1 − ρ)
Expected queue length:
Lq = Σ_{j=1}^{∞} (j − 1) Pj = L − (1 − P0) = λ^2/[µ(µ − λ)]
Expected waiting time in the system:
W = L/λ = ρ/[λ(1 − ρ)] = 1/(µ − λ)
Expected waiting time in the queue:
Wq = Lq/λ = λ/[µ(µ − λ)]
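The four M/M/1 formulas, together with Little's relations L = λW, Lq = λWq and W = Wq + 1/µ, can be sanity-checked numerically. The rates λ = 2, µ = 3 are assumed example values:

```python
# M/M/1 sanity check with assumed rates lam = 2, mu = 3 (so rho = 2/3)
lam, mu = 2.0, 3.0
rho = lam / mu

L  = rho / (1 - rho)              # expected number in system = 2
Lq = lam**2 / (mu * (mu - lam))   # expected queue length = 4/3
W  = 1 / (mu - lam)               # expected time in system = 1
Wq = lam / (mu * (mu - lam))      # expected time in queue = 2/3

print(L, Lq, W, Wq)
# Little's formulas tie the four quantities together:
print(abs(L - lam * W) < 1e-12,
      abs(Lq - lam * Wq) < 1e-12,
      abs(W - (Wq + 1 / mu)) < 1e-12)   # True True True
```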
Queueing Theory Basics
A good understanding of the relationship between congestion and delay is essential for designing effective congestion control algorithms. Queueing theory provides all the tools needed for this analysis.
Communication Delays
Let's understand the different components of delay in a messaging system. The total delay experienced by messages can be classified into the following categories:
Processing Delay
Queuing Delay
Transmission Delay
Propagation Delay
Retransmission Delay
Processing Delay
This is the delay between the time of receipt of a
packet for transmission to the point of putting it into the
transmission queue.
On the receive end, it is the delay between the time of
reception of a packet in the receive queue to the point
of actual processing of the message.
This delay depends on the CPU speed and CPU load
in the system.
Queuing Delay
This is the delay between the point of entry of a packet
in the transmit queue to the actual point of
transmission of the message.
This delay depends on the load on the communication
link.
Transmission Delay
This is the delay between the transmission of first bit of
the packet to the transmission of the last bit.
This delay depends on the speed of the
communication link.
Propagation Delay
This is the delay between the point of transmission of
the last bit of the packet to the point of reception of last
bit of the packet at the other end.
This delay depends on the physical characteristics of
the communication link.
Retransmission Delay
This is the delay that results when a packet is lost and
has to be retransmitted.
This delay depends on the error rate on the link and
the protocol used for retransmissions.
We will be dealing primarily with queueing delay.
Little's Theorem
Little's theorem states that the average number of customers (N) can be determined from the following equation:
N = λT
λ is the average customer arrival rate.
T is the average time a customer spends in the system.
We will focus on an intuitive understanding of the result. Consider the example of a restaurant where the customer arrival rate (λ) doubles but the customers still spend the same amount of time in the restaurant (T). This will double the number of customers in the restaurant (N). By the same logic, if the customer arrival rate remains the same but the customers' service time doubles, this will also double the total number of customers in the restaurant.
Queueing System Classification
With Little's theorem, we have developed some basic understanding of a queueing system. To further our understanding we will have to dig deeper into the characteristics of a queueing system that impact its performance.
For example, queueing requirements of a restaurant will
depend upon factors like:
How do customers arrive in the restaurant? Are
customer arrivals more during lunch and dinner time (a
regular restaurant)? Or is the customer traffic more
uniformly distributed (a cafe)?
How much time do customers spend in the restaurant?
Do customers typically leave the restaurant in a fixed
amount of time? Does the customer service time vary
with the type of customer?
How many tables does the restaurant have for
servicing customers?
The above three points correspond to the most important
characteristics of a queueing system. They are explained
under the following:
Arrival Process
Service Process
Number of Servers
Arrival Process
The probability density distribution that determines the
customer arrivals in the system.
In a messaging system, this refers to the message
arrival probability distribution.
Service Process
The probability density distribution that determines the
customer service times in the system.
In a messaging system, this refers to the message
transmission time distribution. Since message
transmission is directly proportional to the length of the
message, this parameter indirectly refers to the
message length distribution.
Number of Servers
Number of servers available to service the customers.
In a messaging system, this refers to the number of links between the source and destination nodes.
Based on the above characteristics, queueing systems can
be classified by the following convention:
A/S/n
Where A is the arrival process, S is the service process
and n is the number of servers. A and S can be any of the
following:
M (Markov) Exponential probability density
D (Deterministic) All customers have the same value
G (General) Any arbitrary probability distribution
Examples of queueing systems that can be defined with
this convention are:
1. M/M/1
2. M/D/n
3. G/G/n
M/M/1:
This is the simplest queueing system to analyze. Here the
arrivals form a Poisson process and the service times are
exponentially distributed. The system consists of only one
server. This queueing system can be applied to a wide
variety of problems, as any system with a very large number
of independent customers can be approximated as a Poisson
process. Using an exponential distribution for service
times, however, is not applicable in many applications and
is only a crude approximation.
M/D/n:
Here the arrival process is Poisson and the service time
distribution is deterministic. The system has n servers
(e.g. a ticket booking counter with n cashiers, where the
service time can be assumed to be the same for all
customers).
G/G/n:
This is the most general queueing system, in which the
arrival and service time processes are both arbitrary. The
system has n servers. No general analytical solution is
known for this queueing system.
M/M/1 Queueing System
M/M/1 refers to Poisson arrivals and exponentially
distributed service times with a single server. This is the
most widely used queueing system in analysis, as pretty
much everything is known about it. M/M/1 is a good
approximation for a large number of queueing systems.
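The behaviour of an M/M/1 queue can be checked with a minimal simulation sketch. The code below draws exponential interarrival and service times and applies the standard Lindley recursion for successive waiting times; the parameter values are illustrative assumptions. The simulated mean time in system should approach the well-known M/M/1 result 1/(μ − λ):

```python
import random

# Sketch: simulating an M/M/1 queue with the Lindley recursion
#   Wq[k+1] = max(0, Wq[k] + S[k] - A[k+1])
# where S is a service time and A an interarrival time.
random.seed(1)
lam, mu = 0.5, 1.0          # illustrative arrival and service rates (per sec)
n_customers = 200_000

wq = 0.0                    # queueing delay of the current customer
total_time_in_system = 0.0
for _ in range(n_customers):
    service = random.expovariate(mu)
    total_time_in_system += wq + service      # delay + service = time in system
    interarrival = random.expovariate(lam)
    wq = max(0.0, wq + service - interarrival)

mean_w = total_time_in_system / n_customers
print(f"simulated mean time in system: {mean_w:.2f}")  # theory: 1/(mu - lam) = 2.0
```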
Poisson Arrivals
M/M/1 queueing systems assume a Poisson arrival
process. This assumption is a very good approximation for
the arrival process in real systems that meet the following
rules:
1. The number of customers in the system is very large.
2. Impact of a single customer on the performance of the
system is very small, i.e. a single customer consumes
a very small percentage of the system resources.
3. All customers are independent, i.e. their decision to
use the system is independent of other users.
Example: Cars on a Highway
As you can see, these assumptions are fairly general, so
they apply to a large variety of systems. Let's consider
the example of cars entering a highway and see if the above
rules are met.
1. Total number of cars driving on the highway is very
large.
2. A single car uses a very small percentage of the
highway resources.
3. Decision to enter the highway is independently made
by each car driver.
The above observations mean that assuming a Poisson arrival
process will be a good approximation of the car arrivals on
the highway. If any one of the three conditions is not met,
we cannot assume Poisson arrivals. For example, if a car
rally is being conducted on the highway, we cannot assume
that the car drivers are independent of each other. In this
case all the cars have a common reason to enter the highway
(the start of the race).
Another Example: Telephony Arrivals
Consider the arrival of telephone calls at a telephone
exchange. Putting our rules to the test, we find:
1. Total number of customers that are served by a
telephone exchange is very large.
2. A single telephone call takes a very small fraction of
the system's resources.
3. Decision to make a telephone call is independently
made by each customer.
Again, if any of these rules is not met, we cannot assume
telephone arrivals are Poisson. If the telephone exchange
is a Private Branch eXchange (PBX) catering to a few
subscribers, the total number of customers is small, so we
cannot assume that rules 1 and 2 apply. If rules 1 and 2 do
apply but telephone calls are being initiated due to some
disaster, the calls cannot be considered independent of
each other. This violates rule 3.
Poisson Arrival Process
P_n(t) = ((λt)^n / n!) · e^(−λt)
This equation gives the probability of seeing n arrivals
in the period from 0 to t, where:
t defines the interval 0 to t
n is the total number of arrivals in the interval 0 to t
λ is the total average arrival rate in arrivals/sec
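The formula above can be evaluated directly. A minimal sketch, with an assumed rate of λ = 2 arrivals/sec over a 1-second interval:

```python
import math

# Evaluating the Poisson arrival probability
#   P_n(t) = ((lam * t)**n / n!) * exp(-lam * t)
def poisson_arrivals(n, lam, t):
    """Probability of exactly n arrivals in (0, t] at average rate lam."""
    return (lam * t) ** n / math.factorial(n) * math.exp(-lam * t)

lam = 2.0   # illustrative rate, arrivals/sec
t = 1.0
for n in range(4):
    print(f"P_{n}({t}) = {poisson_arrivals(n, lam, t):.4f}")

# Sanity check: probabilities over all n sum to 1.
print(f"sum = {sum(poisson_arrivals(n, lam, t) for n in range(50)):.6f}")
```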
Poisson Service Times
In an M/M/1 queueing system we assume that service times
for customers are also exponentially distributed (i.e.
generated by a Poisson process). Unfortunately, this
assumption is not as general as the arrival time
distribution, but it can still be a reasonable assumption
when no other data is available about service times. Let's
see a few examples:
Telephone Call Durations
Telephone call durations define the service time for
utilization of various resources in a telephone exchange.
Let's see if telephone call durations can be assumed to be
exponentially distributed.
1. Total number of customers that are served by a
telephone exchange is very large.
2. A single telephone call takes a very small fraction of
the system's resources.
3. Decision on how long to talk is independently made by
each customer.
From these rules it appears that exponentially distributed
call hold times are a good fit. Intuitively, the
probability of a customer making a very long call is very
small, and there is a high probability that a telephone
call will be short. This matches the observation that most
telephony traffic consists of short-duration calls. (The
only problem with using the exponential distribution is
that it predicts a high probability of extremely short
calls.)
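This intuition can be made concrete with the exponential CDF, P(T ≤ t) = 1 − e^(−μt), which concentrates probability on short calls. A minimal sketch, assuming an illustrative mean call duration of 3 minutes:

```python
import math

# Sketch: probabilities of call durations under an exponential
# distribution. The mean duration (3 minutes) is an assumption
# chosen only for illustration.
mean_duration = 3.0                 # minutes
mu = 1.0 / mean_duration            # service rate (calls/minute)

def prob_call_shorter_than(t):
    """CDF of the exponential distribution: P(T <= t) = 1 - exp(-mu*t)."""
    return 1.0 - math.exp(-mu * t)

print(f"P(call < 1 min)  = {prob_call_shorter_than(1.0):.3f}")
print(f"P(call < mean)   = {prob_call_shorter_than(mean_duration):.3f}")  # 1 - 1/e ≈ 0.632
print(f"P(call > 15 min) = {1.0 - prob_call_shorter_than(15.0):.4f}")
```

Note that nearly two thirds of calls fall below the mean duration, matching the observation that most telephony traffic consists of short calls.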
Stochastic Processes - part 4

  • 1. Electronic noise Three types of noise are commonly observed Shot noise. Electric currents are not continuous but are ultimately made up from large numbers of moving charge carriers, typically electrons. Shot noise arises from statistical fluctuations in the flow of charge carriers: if a single bit of data is represented by 10,000 electrons, the magnitude of the fluctuations will typically be about 1%. When looked at as a waveform over time, shot noise has a flat frequency spectrum. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.1/123
  • 2. Thermal (Johnson) noise. Even though an electric current may have a definite overall direction, the individual charge carriers within it will exhibit random motions. In a material at nonzero temperature, the energy of these motions and thus the intensity of the thermal noise they produce is essentially proportional to temperature. (At very low temperatures, quantum mechanical fluctuations still yield random motion in most materials.) Like shot noise, thermal noise has a flat frequency spectrum. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.2/123
  • 3. Flicker (1/f) noise. Almost all electronic devices also exhibit a third kind of noise, whose main characteristic is that its spectrum is not flat, but instead goes roughly like 1/f over a wide range of frequencies. Such a spectrum implies the presence of large low-frequency fluctuations, and indeed fluctuations are often seen over timescales of many minutes or even hours. Unlike the types of noise described above, this kind of noise can be affected by details of the construction of individual devices. Although seen since the 1920s its origins remain somewhat mysterious AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.3/123
  • 4. Shot noise in electronic devices consists of random fluctuations of the electric current in an electrical conductor, which are caused by the fact that the current is carried by discrete charges (electrons). Shot noise is to be distinguished from current fluctuations in equilibrium, which happen without any applied voltage and without any average current flowing. These equilibrium current fluctuations are known as thermal noise. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.4/123
  • 5. h(t) Z(t) × × × × S(t) Figure 1: Shot noise. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.5/123
  • 6. Given a set of Poisson points ti with average density λ and a real system h(t) S(t) = X i h(t − ti) is an SSS process known as shot noise. S(t) = Z(t) ∗ h(t) = Z ∞ −∞ h(α)Z(t − α)dα AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.6/123
  • 7. E{S(t)} = Z ∞ −∞ h(α)E{Z(t − α)}dα = ηZ Z ∞ −∞ h(α)dα = ηZH(0) Z(t) = X i δ(t − ti) = d dt X(t) z }| { X i u(t − ti), Z(t) = X′ (t) X(t) is a Poisson process. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.7/123
  • 8. RXX(t1, t2) = λ2 t1t2 + λ min(t1, t2) RXZ(t1, t2) = ∂ ∂t1 RXX(t1, t2) = λ2 t1 + λu(t1 − t2) RZZ(t1, t2) = λ2 + λδ(t1 − t2) SZZ(ω) = 2πλ2 δ(ω) + λ SSS(ω) = 2πλ2 |H(0)|2 δ(ω) + λ |H(ω)|2 RSS(τ) = λ2 |H(0)|2 + λρ(τ), ρ(τ) = h(τ) ∗ h(−τ) CSS(τ) = λρ(τ) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.8/123
  • 9. Markov process: In probability theory, a Markov process is a stochastic process characterized as follows: The state ck at time k is one of a finite number in the range {1, · · · , M} Under the assumption that the process runs only from time 0 to time N and that the initial and final states are known, the state sequence is then represented by a finite vector C = (c0, · · · , cN ). AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.9/123
  • 10. Let P(ck|c0, c1, · · · , c(k−1)) denote the probability (chance of occurrence) of the state ck at time k conditioned on all states up to time k − 1. Suppose a process was such that ck de- pended only on the previous state ck−1 and was indepen- dent of all previous states. This process would be known as a first-order Markov process. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.10/123
  • 11. This means that the probability of being in state ck at time k, given all states up to time k − 1 depends only on the previous state, i.e. ck−1 at time k − 1: P(ck|c0, c1, . . . , ck−1) = P(ck|ck−1). For an nth-order Markov process, P(ck|c0, c1, . . . , ck−1) = P(ck|ck−n, . . . , ck−1). AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.11/123
  • 12. The underlying process is assumed to be a Markov process with the following characteristics: finite-state, this means that the number M is finite. discrete-time, this means that going from one state to other takes the same unit of time. observed in memoryless noise, this means that the sequence of observations depends probabilistically only on the previous sequence transitions. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.12/123
  • 13. In probability theory, a stochastic process has the Markov property if the conditional probability distribution of future states of the process, given the present state, depends only upon the current state, i.e. it is conditionally independent of the past states (the path of the process) given the present state. A process with the Markov property is usually called a Markov process, and may be described as Markovian. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.13/123
  • 14. Mathematically, if X(t), t > 0, is a stochastic process, the Markov property states that Pr[X(t+h) = y|X(s) = x(s), s ≤ t] = Pr[X(t+h) = y|X(t) = x(t)], ∀h > 0 Markov processes are typically termed (time-) homogeneous if Pr[X(t + h) = y|X(t) = x(t)] = Pr[X(h) = y|X(0) = x(0)], ∀t, h > 0 AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.14/123
  • 15. and otherwise are termed (time-) inhomogeneous (or (time- ) nonhomogeneous). Homogeneous Markov processes, usually being simpler than inhomogeneous ones, form the most important class of Markov processes. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.15/123
  • 16. In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the ’current’ and ’future’ states. Let X be a non-Markovian process. Then we define a process Y, such that each state of Y represents a time-interval of states of X, i.e. mathematically Y (t) = {X(s) : s ∈ [a(t), b(t)]}. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.16/123
  • 17. If Y has the Markov property, then it is a Markovian representation of X. In this case, X is also called a second-order Markov process. Higher-order Markov processes are defined analogously. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.17/123
  • 18. An example of a non-Markovian process with a Markovian representation is a moving-average time series: X_t = ε_t + Σ_{i=1}^{q} θ_i ε_{t−i} where θ_1, ..., θ_q are the parameters of the model and ε_t, ε_{t−1}, ... are the error terms. A moving-average model is essentially a finite impulse response filter with some additional interpretation placed on it. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.18/123
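The MA(q) model above can be simulated directly as an FIR filter. A minimal NumPy sketch (the parameter values θ = (0.6, 0.3) are illustrative, not from the slides): the lag-k sample autocovariance should be essentially zero for k > q, which is why the process itself is not Markov of low order in the states, yet has a finite Markovian representation in terms of the recent errors.

```python
import numpy as np

def simulate_ma(theta, n, seed=0):
    """Simulate X_t = eps_t + theta_1 eps_{t-1} + ... + theta_q eps_{t-q}."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)              # error terms eps_t
    kernel = np.concatenate(([1.0], theta))   # FIR taps [1, theta_1, ..., theta_q]
    return np.convolve(eps, kernel)[:n]       # the MA model as an FIR filter

x = simulate_ma(theta=[0.6, 0.3], n=5000)
# Sample autocovariances: substantial at lag 1, near zero beyond lag q = 2.
acov = [np.cov(x[:-k], x[k:])[0, 1] for k in (1, 2, 3)]
```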
  • 19. The most famous Markov processes are Markov chains, but many other processes, including Brownian motion, are Markovian. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.19/123
  • 20. Markov chain: A collection of random variables {X_t} (where the index runs through 0, 1, ...) having the property that, given the present, the future is conditionally independent of the past. In other words, P{X_t = j | X_0 = i_0, X_1 = i_1, ..., X_{t−1} = i_{t−1}} = P{X_t = j | X_{t−1} = i_{t−1}} If a Markov sequence of random variates X_n takes the discrete values {a_1, ..., a_N}, then AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.20/123
  • 21. P{x_n = a_{i_n} | x_{n−1} = a_{i_{n−1}}, ..., x_1 = a_{i_1}} = P{x_n = a_{i_n} | x_{n−1} = a_{i_{n−1}}} and the sequence X_n is called a Markov chain. A simple random walk is an example of a Markov chain. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.21/123
  • 22. Example of Markov chains The probabilities of weather conditions, given the weather on the preceding day, can be represented by a transition matrix: P = [0.9 0.5; 0.1 0.5] The matrix P represents the weather model in which a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.22/123
  • 23. The columns can be labelled “sunny” and “rainy” respec- tively, and the rows can be labelled in the same order. Pij is the probability that, if a given day is of type j, it will be followed by a day of type i. Notice that the columns of P sum to 1. This is because P is a stochastic matrix. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.23/123
  • 24. Predicting the weather The weather on day 0 is known to be sunny. This is represented by a vector in which the “sunny” entry is 100%, and the “rainy” entry is 0%: x(0) = [1; 0] The weather on day 1 can be predicted by: AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.24/123
  • 25. x(1) = P x(0) = [0.9 0.5; 0.1 0.5][1; 0] = [0.9; 0.1] Thus, there is a 90% chance that day 1 will also be sunny. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.25/123
  • 26. The weather on day 2 can be predicted in the same way: x(2) = P x(1) = P^2 x(0) = [0.9 0.5; 0.1 0.5]^2 [1; 0] = [0.86; 0.14] General rules for day n are: x(n) = P x(n−1), x(n) = P^n x(0) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.26/123
  • 27. Steady state of the weather In this example, predictions for the weather on more distant days become increasingly insensitive to the initial condition and tend towards a steady state vector. This vector represents the probabilities of sunny and rainy weather on all days, and is independent of the initial weather. The steady state vector is defined as: q = lim_{n→∞} x(n) but the limit only exists if P is a regular transition matrix (that is, some power P^n has all non-zero entries). AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.27/123
  • 28. Since q is independent of the initial conditions, it must be unchanged when transformed by P. This makes it an eigenvector (with eigenvalue 1), and means it can be derived from P. For the weather example: P = [0.9 0.5; 0.1 0.5] Pq = q, (I − P)q = 0 ⇒ [0.1 −0.5; −0.1 0.5] q = [0; 0] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.28/123
  • 29. q1 + q2 = 1, 0.1q1 − 0.5q2 = 0 ⇒ q1 = 0.833, q2 = 0.167 In conclusion, in the long term, 83% of days are sunny. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.29/123
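The steady-state calculation above can be checked numerically. A minimal NumPy sketch (matrix and values taken from the weather example): iterate x(n) = P x(n−1), and separately extract the eigenvector of P for eigenvalue 1.

```python
import numpy as np

# Transition matrix from the weather example (columns: "from", rows: "to").
P = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# Day-n forecast x(n) = P^n x(0), starting from a sunny day.
x = np.array([1.0, 0.0])
for _ in range(50):
    x = P @ x                      # converges geometrically (second eigenvalue 0.4)

# Steady state as the eigenvector for eigenvalue 1, normalized to sum to 1.
w, v = np.linalg.eig(P)
q = np.real(v[:, np.argmin(np.abs(w - 1))])
q = q / q.sum()
```

Both routes give q ≈ (5/6, 1/6), matching q1 = 0.833, q2 = 0.167 on the slide.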
  • 30. For a homogeneous Markov chain: the statistics of any order can be determined in terms of the conditional PDF f(X_n|X_{n−1}) and f(X_n), f(X_n|X_{n−1}, ..., X_0) = f(X_n|X_{n−1}) and if X_n is stationary then f(X_n) and f(X_n|X_{n−1}) are invariant to a shift of origin. A Markov process is homogeneous if the conditional PDF f(X_n|X_{n−1}) is invariant to a shift of the origin. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.30/123
  • 31. Chapman-Kolmogorov Equation f(x_n|x_s) = ∫_{−∞}^{∞} f(x_n|x_r) f(x_r|x_s) dx_r which gives the transitional densities of a Markov sequence. Here n > r > s are any integers. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.31/123
  • 32. Continuous time-discrete state Markov chain (CTDSMC) A CTDSMC is a Markov process X(t) consisting of a family of staircase functions (discrete states) with discontinuities at random times t_n. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.32/123
  • 33. [Figure 2: CTDSMC — a staircase sample path switching between states a_i and a_j at random times.] π_ij(t_1, t_2) = P{X(t_2) = a_j | X(t_1) = a_i} AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.33/123
  • 34. [Figure 3: Imbedded sequence — the process sampled just after the jump times t_1, t_2, ..., t_k.] q_n = X(t_n^+), n = 1, 2, ... The imbedded sequence q_n is a Markov chain. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.34/123
  • 35. The values of X(t) at these random points form a discrete-state Markov sequence called the Markov chain imbedded in the process X(t). A discrete-state stochastic process is called semi-Markov if it is not Markov but the imbedded sequence q_n is a Markov chain. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.35/123
  • 36. P_i(t) = P{X(t) = a_i}, state probabilities π_ij(t_1, t_2) = P{X(t_2) = a_j | X(t_1) = a_i}, transition probabilities Σ_j π_ij(t_1, t_2) = 1, Σ_i P_i(t_1) π_ij(t_1, t_2) = P_j(t_2) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.36/123
  • 37. A discrete-time Markov chain is a Markov chain where X_n has a countable number of states a_i. This kind of Markov process is specified by: AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.37/123
  • 38. state probabilities: P_i(n) = P{X_n = a_i}, i = 1, 2, ... transition probabilities: π_ij(n_1, n_2) = P{X_{n_2} = a_j | X_{n_1} = a_i} Σ_j π_ij(n_1, n_2) = 1 P{X_{n_2} = a_j} = Σ_i P{X_{n_2} = a_j, X_{n_1} = a_i} AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.38/123
  • 39. Chapman-Kolmogorov Equation, discrete version π_ij(n_1, n_3) = Σ_r π_ir(n_1, n_2) π_rj(n_2, n_3) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.39/123
  • 40. If X_n is homogeneous, then the transition probabilities depend only on m = n_2 − n_1 ⇒ π_ij(n_1, n_2) = π_ij(m) = P{X_{n+m} = a_j | X_n = a_i} and the Chapman-Kolmogorov equation becomes: π_ij(n_1, n_3) = Σ_r π_ir(n_1, n_2) π_rj(n_2, n_3) ⇒ AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.40/123
  • 41. Π_ij(n + k) = Σ_r Π_ir(k) Π_rj(n) elementwise, or in matrix form Π(n + k) = Π(k) Π(n) where the right-hand side composes into the transition matrix over n + k steps. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.41/123
  • 42. For a homogeneous discrete Markov chain P{X_n = a_j} = Σ_i P{X_k = a_i} π_ij(k, n) ⇒ P(n) = P(0) Π^n Π = [π_11 π_12 ... π_1N; π_21 π_22 ... π_2N; ...; π_N1 π_N2 ... π_NN], Σ_j π_ij = 1 Π^n is the n-step state transition matrix. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.42/123
  • 43. Steady state probabilities: P = P Π, Σ_{i=1}^{N} p_i = 1 AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.43/123
  • 44. Random walk with two reflecting walls. Π = [0 1 0 0 0; 0.5 0 0.5 0 0; 0 0.5 0 0.5 0; 0 0 0.5 0 0.5; 0 0 0 1 0] Steady state probabilities: P = P Π AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.44/123
  • 45. eig(Π^T) provides the eigenvector corresponding to the eigenvalue 1: P = [1/8 1/4 1/4 1/4 1/8] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.45/123
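The eig(Π^T) recipe above can be sketched in NumPy for the reflecting random walk (the matrix is the one on the slide): the left eigenvector of Π for eigenvalue 1, normalized to sum to 1, is the stationary distribution.

```python
import numpy as np

# Random walk with two reflecting walls (row-stochastic Pi; steady state P = P Pi).
Pi = np.array([[0.0, 1.0, 0.0, 0.0, 0.0],
               [0.5, 0.0, 0.5, 0.0, 0.0],
               [0.0, 0.5, 0.0, 0.5, 0.0],
               [0.0, 0.0, 0.5, 0.0, 0.5],
               [0.0, 0.0, 0.0, 1.0, 0.0]])

# As on the slide: eig of Pi^T gives the left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(Pi.T)
p = np.real(v[:, np.argmin(np.abs(w - 1))])
p = p / p.sum()                     # normalize to a probability vector
```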
  • 46. Continuous time Markov chain, discrete state: Stationary homogeneous transition probabilities: π_ij(t) = P{X(t+s) = a_j | X(s) = a_i}, i, j = 0, 1, ..., m, s, t > 0 lim_{t→0} π_ij(t) = 0 for i ≠ j, and 1 for i = j lim_{t→∞} π_ij(t) = P_j = steady state probability, independent of the initial state probability vector AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.46/123
  • 47. Chapman-Kolmogorov equation: π_ij(t) = Σ_{ℓ=0}^{m} π_iℓ(ν) π_ℓj(t − ν), 0 ≤ ν ≤ t Π(τ + α) = Π(τ) Π(α) Steady state probabilities satisfy: Σ_{j=0}^{m} P_j = 1 Σ_{i=0}^{m} P_i π_ij(t) = P_j, j = 0, 1, ..., m, t > 0 AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.47/123
  • 48. λ_j = lim_{∆t→0} [1 − π_jj(∆t)]/∆t = − d/dt π_jj(t)|_{t=0} λ_j = rate (intensity) of transition λ_ij = lim_{∆t→0} π_ij(∆t)/∆t = d/dt π_ij(t)|_{t=0} λ_ij = rate (intensity) of passage AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.48/123
  • 49. Π′(0^+) = Λ = [λ_11 ... λ_1n; λ_21 ... λ_2n; ...; λ_n1 ... λ_nn] Σ_j π_ij(τ) = 1 ⇒ d/dτ Σ_j π_ij(τ) = 0 ⇒ Σ_j λ_ij = 0 λ_ii + Σ_{j≠i} λ_ij = 0 ⇒ µ_i = −λ_ii AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.49/123
  • 50. P{X(t + ∆t) = a_j | X(t) = a_i} = 1 − µ_i ∆t for i = j, and λ_ij ∆t for i ≠ j ⇒ d/dα {Π(τ + α) = Π(τ) Π(α)}|_{α=0} Π′(τ) = Π(τ) Π′(0) ⇒ Π′(τ) = Π(τ) Λ, Π(0) = I Π(τ) = e^{Λτ} Σ_i P_i(t_1) π_ij(t_1, t_2) = P_j(t_2) ⇔ P(t) Π(τ) = P(t + τ) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.50/123
  • 51. For a homogeneous Markov chain (homogeneity condition): d/dτ {P(t) Π(τ) = P(t + τ)}|_{τ=0} ⇒ P(t) Λ = P′(t), P(0) = P_0 ⇒ P(t) = P_0 e^{Λt} Taking the Laplace transform: P(t) Λ = P′(t), P(0) = P_0 ⇒ s P(s) − P_0 = P(s) Λ ⇒ P(s) = P_0 (sI − Λ)^{−1} ⇔ P(t) = P_0 e^{Λt} AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.51/123
  • 52. Telegraph signal: [Figure 4: Telegraph signal — a two-state process switching between the levels A and −A, with transition rates λ_12 = µ_2 and λ_21 = µ_1.] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.52/123
  • 53. The process takes on values A and −A P′(t) = P(t) [−µ_1 µ_1; µ_2 −µ_2], µ_1, µ_2 > 0 P_1(t) = [µ_2/(µ_1 + µ_2)] (1 − e^{−(µ_1+µ_2)t}) + P_1(0) e^{−(µ_1+µ_2)t} P_2(t) = 1 − P_1(t) E{X(t)} = A[P_1(t) − P_2(t)] = A[2P_1(t) − 1] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.53/123
  • 54. E{X(t_1)X(t_2)} = R_XX(t_1, t_2) by using Π′(τ) = Π(τ)Λ we have R_XX(t, t + τ) = A^2 [P{X(t + τ) = A, X(t) = A} + P{X(t + τ) = −A, X(t) = −A} − P{X(t + τ) = −A, X(t) = A} − P{X(t + τ) = A, X(t) = −A}] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.54/123
  • 55. Then we conclude that the telegraph signal is a nonstationary signal. But as t → ∞, E{X(t)} becomes constant and E{X(t_1)X(t_2)} becomes a function of the time lag only, i.e. the process is asymptotically wide-sense stationary. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.55/123
  • 56. Cayley-Hamilton theorem: ∆(λ) = |A − λI| = (−λ)^n + Σ_{i=0}^{n−1} c_i λ^i = characteristic polynomial ∆(A) = (−1)^n A^n + Σ_{i=0}^{n−1} c_i A^i A = M Λ M^{−1} ⇒ A^k = M Λ^k M^{−1} ⇒ ∆(A) = M [(−1)^n Λ^n + Σ_{i=0}^{n−1} c_i Λ^i] M^{−1} = M ∆(Λ) M^{−1} = 0, since ∆(λ_i) = 0 for every eigenvalue. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.56/123
  • 57. A = [3 1; 1 2] ∆(λ) = |A − λI| = (3 − λ)(2 − λ) − 1 = λ^2 − 5λ + 5 ⇒ ∆(A) = A^2 − 5A + 5I = [0 0; 0 0] ∆(A) = (−1)^n A^n + Σ_{i=0}^{n−1} c_i A^i = 0 ⇒ AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.57/123
  • 58. (−1)^n A^n + ... + c_1 A + c_0 I = 0 ⇒ (−1)^n A^{n−1} + ... + c_1 I + c_0 A^{−1} = 0 ⇒ A^{−1} = −(1/c_0) [(−1)^n A^{n−1} + ... + c_1 I] For A = [3 1; 1 2]: ∆(A) = A^2 − 5A + 5I = 0 ⇒ A^{−1} = −(1/5) [−2 1; 1 −3] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.58/123
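The Cayley-Hamilton inverse above is easy to check numerically. A minimal sketch with the slide's matrix: from A^2 − 5A + 5I = 0, multiplying by A^{−1} gives A^{−1} = (5I − A)/5.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# Cayley-Hamilton: A^2 - 5A + 5I = 0  =>  A^{-1} = (5I - A) / 5
Ainv = (5 * np.eye(2) - A) / 5
```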
  • 59. Polynomial division example: P(x) = 3x^4 + 2x^2 + x + 1, P_1(x) = x^2 − 3 P(x) = P_1(x)(3x^2 + 11) + R(x), R(x) = x + 34 This is true for matrices as well. P(A) = ∆(A) Q(A) + R(A), ∆(A) = 0 ⇒ P(A) = R(A) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.59/123
  • 60. P(A) = A^4 + 3A^3 + 2A^2 + A + I, A = [3 1; 1 2] ∆(x) = x^2 − 5x + 5 P(x) = ∆(x)(x^2 + 8x + 37) + (146x − 184) ⇒ P(A) = 146A − 184I AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.60/123
  • 61. f(x) = Σ_{k=0}^{∞} γ_k x^k ⇒ f(x) = ∆(x) Σ_{k=0}^{∞} β_k x^k + R(x) R(x) = α_0 + α_1 x + ... + α_{n−1} x^{n−1} In order to find the {α_i}, we note that ∆(λ_i) = 0 ⇒ R(λ_i) = f(λ_i) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.61/123
  • 62. A = [−3 1; 0 −2] ⇒ Determine sin A. ∆(λ) = (−3 − λ)(−2 − λ), R(x) = α_0 + α_1 x sin(−2) = α_0 + α_1(−2) sin(−3) = α_0 + α_1(−3) α_0 = 3 sin(−2) − 2 sin(−3), α_1 = sin(−2) − sin(−3) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.62/123
  • 63. sin A = α_0 I + α_1 A = [sin(−3)  sin(−2) − sin(−3); 0  sin(−2)] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.63/123
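The sin A computation above can be verified numerically: fit R(x) = α_0 + α_1 x through the eigenvalues as on the slides, then cross-check against the eigendecomposition route sin A = V diag(sin λ_i) V^{−1}.

```python
import numpy as np

A = np.array([[-3.0, 1.0],
              [0.0, -2.0]])

# Distinct eigenvalues -3 and -2: interpolation conditions R(lam) = sin(lam).
a1 = np.sin(-2) - np.sin(-3)
a0 = 3 * np.sin(-2) - 2 * np.sin(-3)
sinA = a0 * np.eye(2) + a1 * A

# Cross-check through the eigendecomposition A = V diag(lam) V^{-1}.
lam, V = np.linalg.eig(A)
sinA_eig = V @ np.diag(np.sin(lam)) @ np.linalg.inv(V)
```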
  • 64. However, if λ_i is a repeated root, then d/dλ ∆(λ)|_{λ_i} = 0, so the derivative conditions R′(λ_i) = f′(λ_i) are used as well: A = [0 1 0; 0 0 1; 27 −27 9] Determine e^{At}. |A − λI| = (3 − λ)^3 = 0 e^{At} = R(A) = α_0 I + α_1 A + α_2 A^2 ⇒ AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.64/123
  • 65. e^{3t} = α_0 + 3α_1 + 9α_2 t e^{3t} = α_1 + 2α_2(3) t^2 e^{3t} = 2α_2 e^{At} = [1 − 3t + 4.5t^2   t − 3t^2   0.5t^2; 13.5t^2   1 − 3t − 9t^2   t + 1.5t^2; 27t + 40.5t^2   −27t − 27t^2   1 + 6t + 4.5t^2] e^{3t} AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.65/123
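The repeated-root construction above can be checked against a library matrix exponential. A sketch using the slide's matrix (triple eigenvalue λ = 3) and SciPy's `expm` as the reference:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [27.0, -27.0, 9.0]])   # characteristic polynomial (3 - lam)^3

def expAt(t):
    # Solve the three interpolation conditions from the slide:
    # e^{3t} = a0 + 3 a1 + 9 a2,  t e^{3t} = a1 + 6 a2,  t^2 e^{3t} = 2 a2
    a2 = t**2 * np.exp(3 * t) / 2
    a1 = t * np.exp(3 * t) - 6 * a2
    a0 = np.exp(3 * t) - 3 * a1 - 9 * a2
    return a0 * np.eye(3) + a1 * A + a2 * (A @ A)
```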
  • 66. Λ = [−µ_1 µ_1; µ_2 −µ_2] ⇒ eigenvalues λ = 0 and λ = −(µ_1 + µ_2) Π(τ) = e^{Λτ} = α_0 I + α_1 Λ 1 = α_0, e^{−(µ_1+µ_2)τ} = α_0 − (µ_1 + µ_2)α_1 ⇒ Π(τ) = I + [(1 − e^{−(µ_1+µ_2)τ})/(µ_1 + µ_2)] Λ AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.66/123
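The closed form for Π(τ) above can also be verified numerically (the rate values µ_1 = 0.7, µ_2 = 1.3 are illustrative, not from the slides):

```python
import numpy as np
from scipy.linalg import expm

mu1, mu2 = 0.7, 1.3                 # illustrative transition rates
Lam = np.array([[-mu1, mu1],
                [mu2, -mu2]])

def Pi(tau):
    # Cayley-Hamilton closed form from the slide:
    # Pi(tau) = I + (1 - e^{-(mu1+mu2) tau}) / (mu1 + mu2) * Lam
    return np.eye(2) + (1 - np.exp(-(mu1 + mu2) * tau)) / (mu1 + mu2) * Lam
```

Each row of Π(τ) sums to 1 for every τ, since the rows of Λ sum to 0.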
  • 67. As another strategy, Π(τ) = [π_11(τ) π_12(τ); π_21(τ) π_22(τ)] Π′(τ) = Π(τ)Λ, Π(0) = I ⇒ π_11(τ) + π_12(τ) = 1, π_21(τ) + π_22(τ) = 1 π′_11(τ) = −µ_1 π_11(τ) + µ_2 π_12(τ), π_11(0) = 1 π′_11(τ) = −µ_1 π_11(τ) + µ_2 (1 − π_11(τ)) Taking the Laplace transform: s π_11(s) − 1 = −µ_1 π_11(s) + µ_2/s − µ_2 π_11(s) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.67/123
  • 68. π_11(s) = 1/(s + µ_1 + µ_2) + µ_2/[s(s + µ_1 + µ_2)] ⇒ π_11(τ) = [µ_1/(µ_1+µ_2)] e^{−(µ_1+µ_2)τ} + µ_2/(µ_1+µ_2) π′_22(τ) = µ_1 π_21(τ) − µ_2 π_22(τ), π_22(0) = 1 ⇒ π_22(τ) = [µ_2/(µ_1+µ_2)] e^{−(µ_1+µ_2)τ} + µ_1/(µ_1+µ_2) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.68/123
  • 69. For a discrete-time discrete-state Markov chain the transition probabilities are as follows: Π = [0 0.75 0.25; 0.25 0 0.75; 0.25 0.25 0.5] The steady state and transient probabilities are as follows. P = P Π, Σ_{i=1}^{3} p_i = 1 ⇒ p_1 = 1/5, p_2 = 7/25, p_3 = 13/25 AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.69/123
  • 70. For transient state probabilities: P(n) = P(n−1)Π ⇒ P(1) = P(0)Π, P(2) = P(1)Π, ..., P(n) = P(0)Π^n The first strategy, using Cayley-Hamilton: eig(Π) = {1, −0.25, −0.25} Π^n = α_0 I + α_1 Π + α_2 Π^2, and for the repeated eigenvalue the derivative condition n x^{n−1} = α_1 + 2α_2 x applies ⇒ 1 = α_0 + α_1 + α_2 (−0.25)^n = α_0 + (−0.25)α_1 + (−0.25)^2 α_2 n(−0.25)^{n−1} = α_1 + 2(−0.25)α_2 AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.70/123
  • 71. α_0 = 0.04 + 0.96(−1/4)^n + 0.2 n(−1/4)^{n−1} α_1 = 0.32 − 0.32(−1/4)^n + 0.6 n(−1/4)^{n−1} α_2 = 0.64 − 0.64(−1/4)^n − 0.8 n(−1/4)^{n−1} P(n) = P(0)Π^n As for the second strategy, we use the Z transform: AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.71/123
  • 72. Z{P(n)} = Σ_{n=0}^{∞} P(n)z^n = Σ_{n=0}^{∞} P(0)Π^n z^n = P(0)(I − zΠ)^{−1} Because |eig(Π)| ≤ 1, (I − zΠ)^{−1} exists for |z| < 1. I − zΠ = [1  −0.75z  −0.25z; −0.25z  1  −0.75z; −0.25z  −0.25z  1 − 0.5z] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.72/123
  • 73. (I − zΠ)^{−1} = [1 − z/2 − 3z^2/16   3z/4 − 5z^2/16   z/4 + 9z^2/16; z/4 + z^2/16   1 − z/2 − z^2/16   3z/4 + z^2/16; z/4 + z^2/16   z/4 + 3z^2/16   1 − 3z^2/16] / [(1 − z)(1 + 0.25z)^2] Partial fractions: (I − zΠ)^{−1} = [5 7 13; 5 7 13; 5 7 13]/[25(1 − z)] + [0 −8 8; 0 2 −2; 0 2 −2]/[5(1 + z/4)^2] + [20 33 −53; −5 8 −3; −5 −17 22]/[25(1 + z/4)] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.73/123
  • 74. Π^n = (1/25)[5 7 13; 5 7 13; 5 7 13] (the ergodic, irreducible part) + (1/5)(n + 1)(−1/4)^n [0 −8 8; 0 2 −2; 0 2 −2] + (1/25)(−1/4)^n [20 33 −53; −5 8 −3; −5 −17 22] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.74/123
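The closed form for Π^n above can be checked against direct matrix powers. A minimal NumPy sketch (matrices copied from the slides):

```python
import numpy as np

Pi = np.array([[0.0, 0.75, 0.25],
               [0.25, 0.0, 0.75],
               [0.25, 0.25, 0.5]])

E = np.array([[5, 7, 13]] * 3) / 25.0                         # ergodic part
B = np.array([[0, -8, 8], [0, 2, -2], [0, 2, -2]]) / 5.0      # (n+1)(-1/4)^n term
C = np.array([[20, 33, -53], [-5, 8, -3], [-5, -17, 22]]) / 25.0

def Pi_n(n):
    # Closed form from the Z-transform partial-fraction expansion.
    r = (-0.25) ** n
    return E + (n + 1) * r * B + r * C
```

As n grows, the (−1/4)^n terms vanish and every row of Π^n tends to the steady state (1/5, 7/25, 13/25).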
  • 75. If for n → ∞, P(n) = P_0 Π^n has a limit which is independent of the initial state of the Markov chain, then the Markov process is called ergodic and irreducible. In general, for an irreducible, ergodic Markov chain: lim_{n→∞} π_ij(n) = P_j These steady state probabilities P_j satisfy Σ_{j=1}^{n} P_j = 1 and P_j = Σ_{i=1}^{n} P_i π_ij AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.75/123
  • 76. Birth-Death process: A birth is referred to as an arrival and a death as a departure from a physical system. X(t) is the number of customers in the system at time t. States = 0, 1, 2, ..., j, j+1, ... S is the number of servers. P_j(t) = P{X(t) = j}, P_j = lim_{t→∞} P_j(t) λ_n = mean arrival rate given n customers are in the system µ_n = mean service rate given n customers are in the system AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.76/123
  • 77. A birth-death process can be used to describe how X(t) changes through time. The basic assumption is Poisson arrivals, so that the probability of more than one birth or death at the same instant is zero. Because of Poisson arrivals, when X(t) = j, the PDF of the time to the next birth (arrival) is exponential with parameter λ_j. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.77/123
  • 78. P′_0(t) = −λ_0 P_0(t) + µ_1 P_1(t) P′_j(t) = −(λ_j + µ_j)P_j(t) + λ_{j−1}P_{j−1}(t) + µ_{j+1}P_{j+1}(t), j = 1, 2, ... Σ_{j=0}^{∞} P_j(t) = 1 In the steady state: P′_j(t) = 0 AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.78/123
  • 79. µ_1 P_1 = λ_0 P_0 λ_0 P_0 + µ_2 P_2 = (λ_1 + µ_1)P_1 ... λ_{j−2}P_{j−2} + µ_j P_j = (λ_{j−1} + µ_{j−1})P_{j−1} P_1 = (λ_0/µ_1)P_0, P_{j+1} = (λ_j/µ_{j+1})P_j = [Π_{k=0}^{j} λ_k / Π_{k=1}^{j+1} µ_k] P_0 C_j = Π_{k=0}^{j−1} λ_k / Π_{k=1}^{j} µ_k, P_j = C_j P_0, Σ_{j=0}^{∞} P_j = 1 ⇒ P_0 = 1/(1 + Σ_{j=1}^{∞} C_j) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.79/123
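The product formula P_j = C_j P_0 above translates directly into code. A minimal sketch with a truncated state space (the truncation level N = 200 and the rates λ = 2, µ = 5 are illustrative assumptions); with constant rates this reproduces the geometric M/M/1 distribution (1 − ρ)ρ^j derived later in the slides.

```python
import numpy as np

def birth_death_steady_state(lam, mu, N=200):
    """Steady-state P_j = C_j P_0 on a truncated state space 0..N-1, with
    C_j = prod_{k=0}^{j-1} lam(k) / prod_{k=1}^{j} mu(k)."""
    C = [1.0]                                   # C_0 = 1
    for j in range(1, N):
        C.append(C[-1] * lam(j - 1) / mu(j))    # C_j = C_{j-1} * lam_{j-1}/mu_j
    C = np.array(C)
    return C / C.sum()                          # normalize: sum_j P_j = 1

# Constant-rate check (M/M/1): lam_j = 2, mu_j = 5, so rho = 0.4.
P = birth_death_steady_state(lambda j: 2.0, lambda j: 5.0)
```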
  • 80. Specific scenarios: λ_j = λ µ = mean service rate per busy server. µ_j = sµ for j ≥ s, µ_j = jµ for j < s AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.80/123
  • 81. In queueing theory, the inter-arrival times (i.e. the times between customers entering the system) are often modeled as exponentially distributed variables. The length of a process that can be thought of as a sequence of several independent tasks is better modeled by a variable following the gamma distribution (which is a sum of several independent exponentially distributed variables). AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.81/123
  • 82. The exponential distribution is used to model Poisson processes, which are situations in which an object initially in state A can change to state B with constant probability per unit time λ. The time at which the state actually changes is described by an exponential random variable with parameter λ. Therefore, the integral of the PDF from 0 to T is the probability that the object is in state B at time T. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.82/123
  • 83. An important property of the exponential distribution is that it is memoryless. This means that if a random variable T is exponentially distributed, its conditional probability obeys P(T > s + t | T > t) = P(T > s), ∀s, t > 0. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.83/123
  • 84. This says that the conditional probability that we need to wait, for example, more than another 10 seconds before the first arrival, given that the first arrival has not yet happened after 30 seconds, is no different from the initial probability that we need to wait more than 10 seconds for the first arrival. This is often misunderstood by students taking courses on probability: the fact that P(T > 40 | T > 30) = P(T > 10) does not mean that the events T > 40 and T > 30 are independent. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.84/123
  • 85. To summarize: “memorylessness” of the probability distribution of the waiting time T until the first arrival means P(T > 40 | T > 30) = P(T > 10) It does not mean P(T > 40 | T > 30) = P(T > 40) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.85/123
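The memorylessness identity and the non-independence caveat above follow directly from the exponential survival function P(T > t) = e^{−λt}. A small sketch (the rate λ = 0.05 per second is an illustrative assumption):

```python
import math

lam = 0.05  # illustrative arrival rate, per second

def survival(t):
    # P(T > t) for an exponential waiting time with rate lam
    return math.exp(-lam * t)

# Memorylessness: P(T > 40 | T > 30) = P(T > 10)
lhs = survival(40) / survival(30)
rhs = survival(10)

# Non-independence: P(T>40 and T>30) = P(T>40), which differs from
# the product P(T>40) * P(T>30) required for independence.
joint = survival(40)
product = survival(40) * survival(30)
```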
  • 86. The arrival and service processes follow these PDFs: f_a(t) = λe^{−λt}, f_s(t) = µe^{−µt} The inter-arrival times and the service times of a busy channel thus have rates λ and µ. Mean inter-arrival time = 1/λ Mean time for a busy channel to complete service = 1/µ AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.86/123
  • 87. Expected number of customers in the queueing system: L = Σ_{j=0}^{∞} j P_j Expected queue length: L_q = Σ_{j=s}^{∞} (j − s) P_j W = expected waiting time in the system. W_q = expected waiting time in the queue (excluding service time). L = λW, L_q = λW_q, W = W_q + 1/µ AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.87/123
  • 88. If the λ_j are not all equal, λ̄ replaces λ: λ̄ = Σ_{j=0}^{∞} λ_j P_j System utilization factor: ρ = λ/(sµ) ρ is the fraction of time that the servers are busy. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.88/123
  • 89. An important case, s = 1: We assume an unlimited queue length with exponential inter-arrivals and λ_0 = λ_1 = λ_2 = ... = λ We also assume the service times are independent with exponential distribution, and µ_1 = µ_2 = µ_3 = ... = µ ⇒ C_j = (λ/µ)^j = ρ^j, j = 1, 2, ... AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.89/123
  • 90. P_j = ρ^j P_0, P_0 = 1/(1 + Σ_{j=1}^{∞} ρ^j) = 1 − ρ The steady state probabilities: P_j = (1 − ρ)ρ^j, j = 0, 1, 2, ... P_j, the probability that there are j customers in the system, follows a geometric distribution. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.90/123
  • 91. The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. In contrast, the exponential distribution describes the time for a continuous process to change state. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.91/123
  • 92. Expected number of customers in the system: L = Σ_{j=0}^{∞} j P_j = Σ_{j=0}^{∞} j(1 − ρ)ρ^j = (1 − ρ)ρ Σ_{j=0}^{∞} d/dρ ρ^j = ρ/(1 − ρ) Expected queue length: L_q = Σ_{j=1}^{∞} (j − 1)P_j = L − (1 − P_0) = λ^2/[µ(µ − λ)] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.92/123
  • 93. Expected waiting time in the system: W = L/λ = ρ/[λ(1 − ρ)] = 1/(µ − λ) Expected waiting time in the queue: W_q = L_q/λ = λ/[µ(µ − λ)] AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.93/123
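The M/M/1 formulas above can be bundled into a small helper and cross-checked against Little's law L = λW and the relation W = W_q + 1/µ (the rate values λ = 3, µ = 5 are illustrative):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 quantities; requires lam < mu for stability."""
    rho = lam / mu
    L = rho / (1 - rho)                 # mean number in the system
    Lq = lam**2 / (mu * (mu - lam))     # mean queue length
    W = 1 / (mu - lam)                  # mean time in the system
    Wq = lam / (mu * (mu - lam))        # mean wait in the queue
    return L, Lq, W, Wq

L, Lq, W, Wq = mm1_metrics(lam=3.0, mu=5.0)   # rho = 0.6, so L = 1.5
```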
  • 94. Queueing Theory Basics A good understanding of the relationship between congestion and delay is essential for designing effective congestion control algorithms. Queueing theory provides all the tools needed for this analysis. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.94/123
  • 95. Communication Delays Let's understand the different components of delay in a messaging system. The total delay experienced by messages can be classified into the following categories: Processing Delay Queuing Delay Transmission Delay Propagation Delay Retransmission Delay AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.95/123
  • 96. Processing Delay This is the delay between the time of receipt of a packet for transmission to the point of putting it into the transmission queue. On the receive end, it is the delay between the time of reception of a packet in the receive queue to the point of actual processing of the message. This delay depends on the CPU speed and CPU load in the system. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.96/123
  • 97. Queuing Delay This is the delay between the point of entry of a packet in the transmit queue to the actual point of transmission of the message. This delay depends on the load on the communication link. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.97/123
  • 98. Transmission Delay This is the delay between the transmission of the first bit of the packet and the transmission of the last bit. This delay depends on the speed of the communication link. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.98/123
  • 99. Propagation Delay This is the delay between the point of transmission of the last bit of the packet and the point of reception of the last bit of the packet at the other end. This delay depends on the physical characteristics of the communication link. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.99/123
  • 100. Retransmission Delay This is the delay that results when a packet is lost and has to be retransmitted. This delay depends on the error rate on the link and the protocol used for retransmissions. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.100/123
  • 101. We will be dealing primarily with queueing delay. Little's Theorem Little's theorem states that the average number of customers (N) can be determined from the following equation: N = λT λ is the average customer arrival rate. T is the average time a customer spends in the system. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.101/123
  • 102. We will focus on an intuitive understanding of the result. Consider the example of a restaurant where the customer arrival rate (λ) doubles but the customers still spend the same amount of time in the restaurant (T). This will double the number of customers in the restaurant (N). By the same logic, if the customer arrival rate remains the same but the customers' time in the restaurant doubles, this will also double the total number of customers in the restaurant. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.102/123
  • 103. Queueing System Classification With Little's theorem, we have developed some basic understanding of a queueing system. To further our understanding we will have to dig deeper into the characteristics of a queueing system that impact its performance. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.103/123
  • 104. For example, queueing requirements of a restaurant will depend upon factors like: How do customers arrive in the restaurant? Are customer arrivals more during lunch and dinner time (a regular restaurant)? Or is the customer traffic more uniformly distributed (a cafe)? How much time do customers spend in the restaurant? Do customers typically leave the restaurant in a fixed amount of time? Does the customer service time vary with the type of customer? How many tables does the restaurant have for servicing customers? AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.104/123
  • 105. The above three points correspond to the most important characteristics of a queueing system: Arrival Process Service Process Number of Servers AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.105/123
  • 106. Arrival Process The probability density distribution that determines the customer arrivals in the system. In a messaging system, this refers to the message arrival probability distribution. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.106/123
  • 107. Service Process The probability density distribution that determines the customer service times in the system. In a messaging system, this refers to the message transmission time distribution. Since message transmission is directly proportional to the length of the message, this parameter indirectly refers to the message length distribution. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.107/123
  • 108. Number of Servers Number of servers available to service the customers. In a messaging system, this refers to the number of links between the source and destination nodes. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.108/123
  • 109. Based on the above characteristics, queueing systems can be classified by the following convention: A/S/n Where A is the arrival process, S is the service process and n is the number of servers. A and S can be any of the following: M (Markov) Exponential probability density D (Deterministic) All customers have the same value G (General) Any arbitrary probability distribution AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.109/123
  • 110. Examples of queueing systems that can be defined with this convention are: 1. M/M/1 2. M/D/n 3. G/G/n AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.110/123
  • 111. M/M/1: This is the simplest queueing system to analyze. Here the arrivals form a Poisson process and the service times are exponentially distributed. The system consists of only one server. This queueing system can be applied to a wide variety of problems, as any system with a very large number of independent customers can be approximated as a Poisson process. Using exponential service times, however, is not applicable in many applications and is only a crude approximation. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.111/123
  • 112. M/D/n: Here the arrival process is Poisson and the service time distribution is deterministic. The system has n servers (e.g. a ticket booking counter with n cashiers, where the service time can be assumed to be the same for all customers). AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.112/123
  • 113. G/G/n: This is the most general queueing system, where the arrival and service time processes are both arbitrary. The system has n servers. No analytical solution is known for this queueing system. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.113/123
  • 114. M/M/1 Queueing System M/M/1 refers to Poisson arrivals and exponential service times with a single server. This is the most widely used queueing system in analysis, as pretty much everything is known about it. M/M/1 is a good approximation for a large number of queueing systems. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.114/123
  • 115. Poisson Arrivals M/M/1 queueing systems assume a Poisson arrival process. This assumption is a very good approximation for the arrival process in real systems that meet the following rules: 1. The number of customers in the system is very large. 2. The impact of a single customer on the performance of the system is very small, i.e. a single customer consumes a very small percentage of the system resources. 3. All customers are independent, i.e. their decisions to use the system are independent of other users. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.115/123
  • 116. Example: Cars on a Highway As you can see, these assumptions are fairly general, so they apply to a large variety of systems. Let's consider the example of cars entering a highway and see if the above rules are met. 1. The total number of cars driving on the highway is very large. 2. A single car uses a very small percentage of the highway resources. 3. The decision to enter the highway is independently made by each car driver. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.116/123
  • 117. The above observations mean that assuming a Poisson arrival process will be a good approximation of the car arrivals on the highway. If any one of the three conditions is not met, we cannot assume Poisson arrivals. For example, if a car rally is being conducted on a highway, we cannot assume that the car drivers are independent of each other. In this case all cars have a common reason to enter the highway (the start of the race). AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.117/123
  • 118. Another Example: Telephony Arrivals Consider the arrival of telephone calls to a telephone exchange. Putting our rules to the test, we find: 1. The total number of customers that are served by a telephone exchange is very large. 2. A single telephone call takes a very small fraction of the system's resources. 3. The decision to make a telephone call is independently made by each customer. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.118/123
  • 119. Again, if all the rules are not met, we cannot assume telephone arrivals are Poisson. If the telephone exchange is a Private Branch eXchange (PBX) catering to a few subscribers, the total number of customers is small, so we cannot assume that rules 1 and 2 apply. If rules 1 and 2 do apply but telephone calls are being initiated due to some disaster, the calls cannot be considered independent of each other. This violates rule 3. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.119/123
  • 120. Poisson Arrival Process P_n(t) = [(λt)^n / n!] e^{−λt} This equation describes the probability of seeing n arrivals in a period from 0 to t, where: t defines the interval 0 to t n is the total number of arrivals in the interval 0 to t λ is the average arrival rate in arrivals/sec AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.120/123
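The arrival probability P_n(t) above is easy to compute and sanity-check: the probabilities over n should sum to 1, and the mean number of arrivals should be λt. A small sketch (the values λ = 2 arrivals/sec and t = 3 sec are illustrative):

```python
import math

def p_n_arrivals(n, lam, t):
    """Probability of exactly n arrivals in [0, t] for a Poisson process of rate lam."""
    return (lam * t) ** n / math.factorial(n) * math.exp(-lam * t)

lam, t = 2.0, 3.0   # illustrative: 2 arrivals/sec over 3 seconds, so lam*t = 6
probs = [p_n_arrivals(n, lam, t) for n in range(100)]
mean = sum(n * p for n, p in enumerate(probs))   # should equal lam * t
```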
  • 121. Poisson Service Times In an M/M/1 queueing system we assume that service times for customers are also exponentially distributed (i.e. service completions form a Poisson process). Unfortunately, this assumption is not as general as the arrival time distribution. But it can still be a reasonable assumption when no other data is available about service times. Let's see a few examples: AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.121/123
  • 122. Telephone Call Durations Telephone call durations define the service time for the utilization of various resources in a telephone exchange. Let's see if telephone call durations can be assumed to be exponentially distributed. 1. The total number of customers that are served by a telephone exchange is very large. 2. A single telephone call takes a very small fraction of the system's resources. 3. The decision on how long to talk is independently made by each customer. AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.122/123
  • 123. From these rules it appears that exponential call hold times are a good fit. Intuitively, the probability of a customer making a very long call is very small, and there is a high probability that a telephone call will be short. This matches the observation that most telephony traffic consists of short-duration calls. (The only problem with using the exponential distribution is that it predicts a high probability of extremely short calls.) AKU-EE/Stochastic/HA, 1st Semester, 85-86 – p.123/123