3. Random Variables
Consider an experiment of throwing two dice. We know that this experiment has 36 outcomes.
Let X be the sum of the numbers on the uppermost faces.
For example, X takes the value 2 if the outcome of the experiment is (1, 1), and the probability of this outcome is 1/36.
In this way we can associate a real number with each outcome (or group of outcomes) of a random experiment.
Here X is called a random variable.
4. Probability Mass Function
If X is a discrete random variable taking values x1, x2, x3, …, xn, then the probability of each value is described by a function called the probability mass function. The probability that the random variable X takes the value xi is denoted by p(xi).
Illustration: Suppose a fair coin marked 1 & 2 and a die numbered 1, 2, 3, 4, 5 & 6 are thrown simultaneously. Then the probability mass function of the random variable X, the sum of the numbers on the coin & die, is obtained as under.
The sample space is:
S={(1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (2,1), (2,2), (2,3), (2,4), (2,5), (2,6)}
Note that n(S) = 12.
X: sum of numbers on coin & die.
X = 2, 3, 4, 5, 6, 7, 8
P[X = 2] = P{(1,1)} = 1/12
P[X = 3] = P{(1,2), (2,1)} = 2/12
P[X = 4] = P{(1,3), (2,2)} = 2/12
P[X = 5] = P{(1,4), (2,3)} = 2/12
P[X = 6] = P{(1,5), (2,4)} = 2/12
P[X = 7] = P{(1,6), (2,5)} = 2/12
P[X = 8] = P{(2,6)} = 1/12
Thus the probability distribution of the random variable X is as under:

X = x      2     3     4     5     6     7     8
P[X = x]   1/12  2/12  2/12  2/12  2/12  2/12  1/12
Note that P(xi) ≥ 0 & ∑ P(xi) = 1.
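The p.m.f. above can be reproduced by enumerating the 12 equally likely (coin, die) pairs; a minimal sketch in plain Python, using exact fractions:

```python
from fractions import Fraction
from itertools import product
from collections import Counter

# Enumerate the 12 equally likely (coin, die) outcomes and tally each sum.
coin = [1, 2]
die = [1, 2, 3, 4, 5, 6]
counts = Counter(c + d for c, d in product(coin, die))
n = len(coin) * len(die)  # n(S) = 12

pmf = {x: Fraction(counts[x], n) for x in sorted(counts)}
for x, p in pmf.items():
    print(f"P[X = {x}] = {p}")

# The probabilities are non-negative and sum to 1.
assert sum(pmf.values()) == 1
```

Using `Fraction` keeps the probabilities exact (1/12, 1/6, …) rather than floating-point approximations.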
5. Discrete Random Variable
A random variable which can assume only a countable number of real values, the value taken depending on the outcome of the random experiment, is called a discrete random variable.
For example:
1. Number of misprints per page of a book.
2. Number of heads in n tosses of a fair coin.
3. Number of throws of a die to get the first 6.
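Example 3, the number of throws of a die until the first 6, can be simulated directly; a minimal sketch (plain Python, assuming a fair six-sided die):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def throws_until_six():
    """Count die throws until the first 6 appears (a discrete random variable)."""
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) == 6:
            return n

# Average over many trials; the theoretical mean of this count is 6.
trials = [throws_until_six() for _ in range(100_000)]
mean = sum(trials) / len(trials)
print(round(mean, 2))  # close to 6
```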
6. Distribution Function
Let X be a random variable. The function F defined for all real x by F(x) = P[X ≤ x] is called the distribution function.
A distribution function is also called a cumulative probability distribution function.
Properties of Distribution Function:
If F is the D.F. of a random variable X & if a < b, then
P[a < X ≤ b] = F(b) – F(a).
Values of all distribution functions lie between 0 & 1,
i.e. 0 ≤ F(x) ≤ 1 for all x.
All distribution functions are monotonically non-decreasing,
i.e. if x < y then F(x) ≤ F(y).
F(–∞) = lim_{x→–∞} F(x) = 0
F(+∞) = lim_{x→+∞} F(x) = 1
If X is a discrete random variable then F(x) = ∑_{xi ≤ x} P(xi).
If the values of a discrete random variable X are x1 < x2 < x3 < x4 …, then P(xn+1) = F(xn+1) – F(xn).
If X is a discrete random variable then its D.F. is a step function.
7. Illustration: Consider the probability distribution of a random variable X:
X = x      1    2    3    4    5    6
P[X = x]   0.1  0.2  0.3  0.2  0.1  0.1
If F(x) is the distribution function of the random variable X, then
F(1) = P[X ≤ 1] = P(1) = 0.1
F(2) = P[X ≤ 2] = P(1) + P(2) = 0.1 + 0.2 = 0.3
F(3) = 0.1 + 0.2 + 0.3 = 0.6
F(4) = 0.1 + 0.2 + 0.3 + 0.2 = 0.8
F(5) = 0.1 + 0.2 + 0.3 + 0.2 + 0.1 = 0.9
F(6) = 1
Thus the values of X & the corresponding cumulative probability distribution function are as under:

X = x   1    2    3    4    5    6
F(x)    0.1  0.3  0.6  0.8  0.9  1.0
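The cumulative sums above are just running totals of the p.m.f.; a minimal sketch:

```python
from itertools import accumulate

# P.m.f. from the illustration above.
xs = [1, 2, 3, 4, 5, 6]
ps = [0.1, 0.2, 0.3, 0.2, 0.1, 0.1]

# F(x) is the running sum of the probabilities up to and including x.
cdf = list(accumulate(ps))
for x, F in zip(xs, cdf):
    print(f"F({x}) = {round(F, 2)}")
```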
The graphical representation of the random variable X, & the graph of its D.F., are as under:
[Figure: left panel, the probability distribution P(x) plotted against x (x = 1, …, 6); right panel, the cumulative probability distribution function F(x) plotted against x, a step function rising from 0.1 to 1.0.]
8. Variance
If X is a discrete random variable then the variance of X is given by
Var (X) = E[X – E(X)]²
Note that Var (X) = µ2, the second central moment, and in terms of raw moments
Var (X) = µ2′ – (µ1′)²
Var (X) = E(X²) – [E(X)]²
The positive square root of the variance is known as the standard deviation:
S.D.(X) = +√Var (X)
Illustration: For a random variable X, E(X) = 10 and Var(X) = 5. Find Var(3X + 5), Var(X – 2), Var(4X).
Also find E(5X – 4), E(4X + 3).
Solution:
Var (3X + 5) = Var (3X) = 9 Var (X) = 9 × 5 = 45
Var (X – 2) = Var (X) = 5
Var (4X) = 16 Var (X) = 16 × 5 = 80
E(5X – 4) = 5 E(X) – 4 = 5 × 10 – 4 = 46
E(4X + 3) = 4 E(X) + 3 = 4 × 10 + 3 = 43
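These answers can be checked numerically. Any distribution with E(X) = 10 and Var(X) = 5 will do; the sketch below uses a hypothetical two-point distribution (X = 10 ± √5, each with probability 1/2), which has exactly that mean and variance:

```python
import math

# Hypothetical two-point distribution with E(X) = 10 and Var(X) = 5:
# X takes 10 - sqrt(5) and 10 + sqrt(5), each with probability 1/2.
values = [10 - math.sqrt(5), 10 + math.sqrt(5)]
probs = [0.5, 0.5]

def E(f):
    """Expectation of f(X) under the distribution above."""
    return sum(p * f(v) for v, p in zip(values, probs))

def var(f):
    """Var(f(X)) = E[f(X)^2] - (E[f(X)])^2."""
    return E(lambda x: f(x) ** 2) - E(f) ** 2

print(round(var(lambda x: 3 * x + 5)))  # Var(3X + 5) = 9 Var(X) = 45
print(round(var(lambda x: x - 2)))      # Var(X - 2)  = Var(X) = 5
print(round(var(lambda x: 4 * x)))      # Var(4X)     = 16 Var(X) = 80
print(round(E(lambda x: 5 * x - 4)))    # E(5X - 4)   = 5 E(X) - 4 = 46
print(round(E(lambda x: 4 * x + 3)))    # E(4X + 3)   = 4 E(X) + 3 = 43
```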
9. Properties of Variance:
1. Variance is independent of change of origin. This means that if X is a random variable & ‘a’ is a constant, then the variances of X and of the new variable X + a are the same, i.e.
Var (X) = Var (X + a)
Consider
Var (X + a) = E[X + a – (E(X) + a)]²
= E[X + a – E(X) – a]²
= E[X – E(X)]²
= Var (X).
2. Variance of a random variable depends upon change of scale,
i.e. Var (aX) = a² Var (X).
Consider
Var (aX) = E[aX – E(aX)]²
= E[aX – a E(X)]²
= E{a² [X – E(X)]²}
= a² E[X – E(X)]²
= a² Var (X).
3. From properties 1 and 2 we can find Var (aX + b):
Var (aX + b) = Var (aX)    (by property 1)
= a² Var (X)               (by property 2)
4. Variance of a constant is 0. Put a = 0 in aX + b; then
Var (aX + b) = Var (b),
but
Var (aX + b) = a² Var (X) = 0 × Var (X) = 0,
so Var (b) = 0.
10. Expectation
Mathematical Expectation of a Discrete Random Variable:
Once we have determined the probability mass function P(x) & the distribution function of a discrete random variable X, we want to compute the mean or variance of X. The mean or expected value of X is the weighted average of the values of X, where the corresponding probabilities are taken as weights. Thus if X takes values x1, x2, … with corresponding probabilities p(x1), p(x2), …, then the mathematical expectation of X, denoted by E(X), is given by E(X) = ∑ xi p(xi).
E(X) is also the mean of the random variable X.
E(X) exists if the series on the right-hand side is absolutely convergent.
Illustration: The p.m.f. of a random variable X is given below. Find E(X).
Hence find E(2X + 5) & E(X – 5).
X = x      1    2     3    4    5     6
P[X = x]   0.1  0.15  0.2  0.3  0.15  0.1
Solution: The expected value of the random variable is given by:
E(X) = ∑ xi P[X = xi]
= 1(0.1) + 2(0.15) + 3(0.2) + 4(0.3) + 5(0.15) + 6(0.1)
= 0.1 + 0.3 + 0.6 + 1.2 + 0.75 + 0.6
E(X) = 3.55
E(2X + 5) = 2 E(X) + 5 = 2(3.55) + 5 = 12.1
E(X – 5) = E(X) – 5 = 3.55 – 5 = –1.45
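The same computation, as a short sketch over the p.m.f. table above:

```python
# P.m.f. from the illustration above.
xs = [1, 2, 3, 4, 5, 6]
ps = [0.1, 0.15, 0.2, 0.3, 0.15, 0.1]

ex = sum(x * p for x, p in zip(xs, ps))  # E(X) = sum of x_i * P(x_i)
print(round(ex, 2))          # E(X) = 3.55
print(round(2 * ex + 5, 2))  # E(2X + 5) = 12.1
print(round(ex - 5, 2))      # E(X - 5) = -1.45
```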
11. Properties of Expectation:
If X1 & X2 are two random variables then E(X1 + X2) = E(X1) + E(X2).
This result can be generalized to n random variables X1, X2, …, Xn:
E(X1 + X2 + … + Xn) = E(X1) + E(X2) + … + E(Xn)
If X & Y are independent random variables, E(XY) = E(X) E(Y).
If X is a random variable & a is a constant then
E(aX) = a E(X) and
E(X + a) = E(X) + a.
If X is a random variable and a & b are constants then
E(aX + b) = a E(X) + b.
If X is a random variable, a & b are constants, & g(X) is a function of X (itself a random variable), then
E[a g(X) + b] = a E[g(X)] + b.
If X1, X2, X3, …, Xn are any n random variables and a1, a2, …, an are any n constants, then a1X1 + a2X2 + … + anXn is called a linear combination of the n variables, & the expectation of the linear combination is given by
E(a1X1 + a2X2 + … + anXn) = a1 E(X1) + a2 E(X2) + … + an E(Xn).
If X ≥ 0 then E(X) ≥ 0.
If X and Y are two random variables & X ≤ Y then E(X) ≤ E(Y).
If X & Y are independent random variables then
E[g(X) h(Y)] = E[g(X)] E[h(Y)],
where g(X) is a function of X (and a random variable), and h(Y) is a function of Y (and a random variable).
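The additivity and independence-product properties can be checked exactly on a concrete case; the sketch below uses two independent fair dice X and Y:

```python
from fractions import Fraction
from itertools import product

# Two independent fair dice X and Y; each of the 36 pairs has probability 1/36.
faces = range(1, 7)
p = Fraction(1, 36)

E_X = sum(Fraction(x, 6) for x in faces)  # E(X) = 7/2
E_sum = sum(p * (x + y) for x, y in product(faces, faces))
E_prod = sum(p * (x * y) for x, y in product(faces, faces))

print(E_sum == E_X + E_X)   # E(X + Y) = E(X) + E(Y)
print(E_prod == E_X * E_X)  # E(XY) = E(X) E(Y), since X and Y are independent
```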
12. Continuous Random Variables
Consider a small interval (x, x + dx) of length dx around the point x.
Let f(x) be a continuous function of x such that f(x) dx represents the probability that X falls in this very small interval.
Symbolically, P[x ≤ X ≤ x + dx] = f(x) dx.
[Figure: the curve y = f(x); the shaded area f(x) dx lies under the curve between the ordinates at x – dx/2 and x + dx/2.]
In the figure, f(x) dx represents the area bounded by the curve y = f(x), the X-axis & the ordinates at x and x + dx.
The function f(x) so defined is known as the probability density function of the random variable X & is usually abbreviated as p.d.f. The expression f(x) dx, usually written dF(x), is known as the probability differential, & the curve y = f(x) is known as the probability density curve. The probability that X lies in the interval of length δx is approximately f(x) δx. Thus the p.d.f. of the random variable X is defined as
f(x) = lim_{δx → 0} P[x ≤ X ≤ x + δx] / δx.
13. Illustration: A random variable X has the following p.d.f.:
f(x) = k / (1 + x²),  –∞ < x < ∞
     = 0,             otherwise.
Find k.
Solution:
Since X is a continuous random variable with density function f(x),
∫_{–∞}^{∞} f(x) dx = 1
∫_{–∞}^{∞} k / (1 + x²) dx = 1
k [tan⁻¹ x]_{–∞}^{∞} = 1
k [tan⁻¹(∞) – tan⁻¹(–∞)] = 1
k [π/2 + π/2] = 1
kπ = 1
k = 1/π.
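The value k = 1/π can be verified numerically: with the antiderivative k·tan⁻¹(x), the integral over (–b, b) approaches 1 as b grows. A minimal sketch:

```python
import math

k = 1 / math.pi  # the normalizing constant found above

def F(x):
    """Antiderivative of k / (1 + x^2), namely k * arctan(x)."""
    return k * math.atan(x)

# F(b) - F(-b) approaches 1 as the limits widen, confirming k = 1/pi.
for b in (10, 1_000, 1_000_000):
    print(b, F(b) - F(-b))
```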