This document provides an overview of key statistical concepts including point estimation, confidence intervals, hypothesis testing, and sample size determination. It discusses how to calculate point estimates like the sample mean. It explains how to construct confidence intervals using the normal and t-distributions. It outlines how to perform lower tail, upper tail, and two-tailed hypothesis tests on means and proportions. It also provides formulas for determining required sample sizes.
Talk 3
1. Statistics Lab
Rodolfo Metulini
IMT Institute for Advanced Studies, Lucca, Italy
Lesson 3 - Point Estimate, Confidence Interval and Hypothesis
Tests - 20.01.2015
2. Introduction
Let us start with empirical data (one variable of length N) read
from an external file; suppose we consider it to be the
population. We define a sample of size n.
Suppose we have no information on the population (or, better, we
want to check whether and how the sample can represent the
population).
In other words, we want to make inference using the information
contained in the sample, in order to obtain an estimate for the
population.
That sample is one of the many samples we could randomly draw from
the population (the sample space).
What are the instruments for obtaining information about the
population? (1) Sample mean (point estimation) (2) Confidence
interval (3) Hypothesis tests
3. Sample space
In probability theory, the sample space of an experiment or random
trial is the set of all possible outcomes or results of that
experiment.
It is common to refer to a sample space by the labels Ω (or U, or
S), where Ω is the first element of what we call a statistical model:
(Ω, A, {Pθ : θ ∈ Θ}). Each element of Ω has its corresponding θ
value.
For example, for tossing two coins, the corresponding sample space
would be {(head,head), (head,tail), (tail,head), (tail,tail)}, so
the dimension is 4: dim(Ω) = 4. This means we can obtain 4
different samples, with 4 corresponding sample means.
dim(Ω) = x^n, where x is the number of outcomes of each single
experiment and n is the number of experiments.
In practice, we deal with only one sample, taken at random from
the sample space.
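To make the sample-space idea concrete, here is a minimal Python sketch (the lab itself uses R) enumerating the two-coin sample space and the sample mean of each outcome:

```python
from itertools import product
from statistics import mean

# Two coin tosses, each coded 0 = tail, 1 = head.
# The sample space has x^n = 2^2 = 4 outcomes.
sample_space = list(product([0, 1], repeat=2))
print(len(sample_space))        # dim(Omega) = 4

# Each outcome yields its own sample mean (here: proportion of heads).
sample_means = [mean(s) for s in sample_space]
print(sorted(sample_means))     # [0, 0.5, 0.5, 1]
```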
4. Point estimate
A point estimate (or statistic) lets us summarize the
information contained in the population (dimension N) through
only 1 value constructed using the n sample values: T = t(X1, ..., Xn).
The most used, unbiased point estimator (statistic) is the sample
mean: X̄n = (1/n) Σ_{i=1}^{n} x_i
Other point estimators are: (1) Sample Median (2) Sample Mode
(3) Geometric mean.
Geometric Mean = Mg = (Π_{i=1}^{n} x_i)^{1/n} = exp[(1/n) Σ_{i=1}^{n} ln x_i]
An example of what is not an estimator: using the sample mean
after subsetting the sample by truncating it at a certain value.
P.S. A naive definition of estimator: a statistic computed using
all the n pieces of information in the sample.
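A quick illustration of these point estimators on a toy sample, sketched in Python with standard-library functions (the lab itself uses R):

```python
import math
from statistics import mean, median

x = [2.0, 4.0, 4.0, 8.0]    # a toy sample, n = 4

# Sample mean: the unbiased point estimator discussed above.
print(mean(x))              # 4.5

# Another point estimator mentioned on the slide: the sample median.
print(median(x))            # 4.0

# Geometric mean via the log identity Mg = exp((1/n) * sum(ln x_i)).
g = math.exp(sum(math.log(v) for v in x) / len(x))
print(round(g, 4))          # 4.0, since (2*4*4*8)^(1/4) = 256^(1/4)
```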
5. Efficient estimators
The BLUE (Best Linear Unbiased Estimator) is defined as
follows:
1. it is a linear function of all the sample values
2. it is unbiased (E(T) = θ)
3. it has the smallest sampling variance among all unbiased
estimators.
The sample mean is BLUE for the parameter µ.
Some estimators are biased but consistent: an estimator is
consistent when it becomes unbiased as n −→ ∞.
6. Point estimators - cases
Normal samples: X̄n is the BLUE estimator for the parameter µ
(mean).
Normal samples: s² = (1/(n−1)) Σ_{i=1}^{n} (x_i − X̄n)² is the
unbiased estimator of the variance σ².
Bernoulli samples, f(x) = ρ^x (1 − ρ)^{1−x}: X̄n is an unbiased
estimator for the parameter ρ (frequency).
Poisson samples, f(x) = e^{−k} k^x / x!: X̄n is an unbiased
estimator for the parameter k (which represents both the mean and
the variance of the distribution).
Exponential samples, f(x) = λe^{−λy}: 1/X̄n is an unbiased
estimator for the parameter λ (density at value 0).
(Chunks 1 to 4)
7. Confidence interval theory
With point estimators we use only one value to infer about the
population.
With a confidence interval we define a minimum and a maximum
value within which we expect the population parameter to lie.
Formally, we need to calculate:
µ1 = X̄n − z ∗ σ/√n
µ2 = X̄n + z ∗ σ/√n
and we end up with the interval µ̂ = {µ1; µ2}, or I = [T_α^(1), T_α^(2)].
It is used to write P{θ ∈ I} = 1 − α.
Here: X̄n is the sample mean; z is the upper (or lower) critical
value of the theoretical distribution; σ is the standard deviation of
the theoretical distribution; n is the sample size.
(See the graph)
8. Confidence interval theory - Gaussian
Remember the theorem: if X1, ..., Xn are i.i.d. with
distribution N(µ, σ²), then the distribution of X̄n is N(µ, σ²/n).
Let us assume that the sample mean is 5, the standard deviation in
the population is known and equal to 2, and the sample size is
n = 20. In the example below we use a 95 per cent confidence
level and wish to find the confidence interval.
N.B. Here, since the confidence level is 95 per cent, the z (critical
value) to consider is the one whose CDF (i.e. pnorm, not dnorm)
equals 0.975.
Equivalently, α = 0.05, or 1 − α = 0.95, or 1 − α/2 = 0.975.
(Chunk 5)
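A minimal Python sketch of this Gaussian interval, using the slide's numbers (sample mean 5, known σ = 2, n = 20, 95 per cent level); the lab itself does this in R with qnorm:

```python
from math import sqrt
from statistics import NormalDist

xbar, sigma, n = 5.0, 2.0, 20       # numbers from the slide above
z = NormalDist().inv_cdf(0.975)     # critical value, ~1.96

# Interval mu_hat = xbar +/- z * sigma / sqrt(n).
half_width = z * sigma / sqrt(n)
mu1, mu2 = xbar - half_width, xbar + half_width
print(round(mu1, 3), round(mu2, 3))
```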
9. Confidence interval theory - T-student
We use the T (Student) distribution when n is small and the sd is
unknown in the population. We need to use a sample estimate of
the standard deviation: s = √( Σ_{i=1}^{n} (x_i − X̄n)² / (n−1) )
The t-student distribution is more spread out.
In simple words, since we do not know the population sd, we need
wider intervals (a cautious approach).
The only difference from the normal case is that we use the
commands associated with the t-distribution rather than the normal
distribution. Here we repeat the procedures above, but we
assume that we are working with a sample standard deviation
rather than an exact standard deviation.
N.B. The T distribution is characterized by its degrees of freedom.
In this case the degrees of freedom are equal to n − 1, because we
use 1 estimate (1 constraint).
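As a small illustration, the sample standard deviation with the n − 1 divisor can be computed as below (a Python sketch on a toy sample; the Python standard library has no t quantile, which in R is given by qt):

```python
from statistics import stdev

x = [4.2, 5.1, 4.8, 5.6, 4.4]   # toy sample, n = 5

# statistics.stdev uses the n - 1 divisor, matching the slide's s.
s = stdev(x)
print(round(s, 4))
```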
10. Confidence interval theory - comparison of two means
In some cases we have an experiment called (for example)
case-control.
Imagine the population is divided in 2: one part is the
treated group, the other is the non-treated group.
Suppose we extract one sample from each, with the aim of testing
whether the two samples come from populations with the same mean
parameter (is the treatment effective?).
The output of this test will be a confidence interval representing
the difference between the two means.
N.B. Here, the degrees of freedom of the t-distribution are equal to
min(n1, n2) − 1.
(Chunk 7)
11. Formulas
Gaussian confidence interval:
µ̂ = {µ1, µ2} = X̄n ± z ∗ σ/√n
T (Student) confidence interval:
µ̂ = {µ1, µ2} = X̄n ± t_{n−1} ∗ s/√n
T (Student) confidence interval for the difference of two sample means:
µ̂diff = {µdiff1, µdiff2} = (X̄1 − X̄2) ± t_{n−1} ∗ s,
where s = √(s1²/n1 + s2²/n2)
Gaussian confidence interval for a proportion (Bernoulli
distribution):
ρ̂ = {ρ1, ρ2} = f̂ ± z ∗ s,
where s = √(ρ(1−ρ)/n)
12. Hypothesis testing
Researchers retain or reject hypotheses based on measurements of
observed samples.
The decision is often based on a statistical mechanism called
hypothesis testing.
A type I error is the mishap of falsely rejecting a null hypothesis
when the null hypothesis is true (see the image).
The probability of committing a type I error is called the
significance level of the hypothesis test, and is denoted by the
Greek letter α (the same used in confidence intervals).
We demonstrate the procedure of hypothesis testing in R first with
the intuitive critical value approach.
Then we discuss the popular (and very quick) p-value approach
as an alternative.
13. Hypothesis testing - lower tail
The null hypothesis of the lower tail test of the population
mean can be expressed as follows:
µ ≥ µ0, where µ0 is a hypothesized lower bound of the true
population mean µ.
Let us define the test statistic z in terms of the sample mean, the
sample size and the population standard deviation σ:
z = (X̄n − µ0) / (σ/√n)
Then the null hypothesis of the lower tail test is to be rejected if
z ≤ z_α, where z_α is the 100(α) percentile of the standard normal
distribution.
(Chunk 9)
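A Python sketch of the critical value approach for the lower tail test, with hypothetical numbers (x̄ = 14.6, µ0 = 15.4, σ = 2.5, n = 35, α = 0.05); in R the critical value would come from qnorm:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical numbers: test H0: mu >= 15.4 at significance level 0.05.
xbar, mu0, sigma, n, alpha = 14.6, 15.4, 2.5, 35, 0.05

z = (xbar - mu0) / (sigma / sqrt(n))
z_alpha = NormalDist().inv_cdf(alpha)   # 100(alpha) percentile, ~ -1.645

print(round(z, 4))
print(z <= z_alpha)                     # True -> reject H0 at level alpha
```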
14. Hypothesis testing - upper tail
The null hypothesis of the upper tail test of the population
mean can be expressed as follows:
µ ≤ µ0, where µ0 is a hypothesized upper bound of the true
population mean µ.
Let us define the test statistic z in terms of the sample mean, the
sample size and the population standard deviation σ:
z = (X̄n − µ0) / (σ/√n)
Then the null hypothesis of the upper tail test is to be rejected if
z ≥ z_{1−α}, where z_{1−α} is the 100(1 − α) percentile of the
standard normal distribution.
(Chunk 10)
15. Hypothesis testing - two tailed
The null hypothesis of the two-tailed test of the population
mean can be expressed as follows:
µ = µ0, where µ0 is a hypothesized value of the true population
mean µ. Let us define the test statistic z in terms of the sample
mean, the sample size and the population standard deviation σ:
z = (X̄n − µ0) / (σ/√n)
Then the null hypothesis of the two-tailed test is to be rejected if
z ≤ z_{α/2} or z ≥ z_{1−α/2}, where z_{α/2} is the 100(α/2) percentile of
the standard normal distribution.
(Chunk 11)
16. Hypothesis testing - lower tail with unknown variance
The null hypothesis of the lower tail test of the population
mean can be expressed as follows:
µ ≥ µ0, where µ0 is a hypothesized lower bound of the true
population mean µ.
Let us define the test statistic t in terms of the sample mean, the
sample size and the sample standard deviation s:
t = (X̄n − µ0) / (s/√n)
Then the null hypothesis of the lower tail test is to be rejected if
t ≤ t_α, where t_α is the 100(α) percentile of the Student t
distribution with n − 1 degrees of freedom.
(Chunk 12)
17. Hypothesis testing - upper tail with unknown variance
The null hypothesis of the upper tail test of the population
mean can be expressed as follows:
µ ≤ µ0, where µ0 is a hypothesized upper bound of the true
population mean µ.
Let us define the test statistic t in terms of the sample mean, the
sample size and the sample standard deviation s:
t = (X̄n − µ0) / (s/√n)
Then the null hypothesis of the upper tail test is to be rejected if
t ≥ t_{1−α}, where t_{1−α} is the 100(1 − α) percentile of the Student
t distribution with n − 1 degrees of freedom.
(Chunk 13)
18. Hypothesis testing - two tailed with unknown variance
The null hypothesis of the two-tailed test of the population
mean can be expressed as follows:
µ = µ0, where µ0 is a hypothesized value of the true
population mean µ. Let us define the test statistic t in terms of
the sample mean, the sample size and the sample standard
deviation s:
t = (X̄n − µ0) / (s/√n)
Then the null hypothesis of the two-tailed test is to be rejected if
t ≤ t_{α/2} or t ≥ t_{1−α/2}, where t_{α/2} is the 100(α/2) percentile of
the Student t distribution with n − 1 degrees of freedom.
(Chunk 14)
19. Lower Tail Test of Population Proportion
The null hypothesis of the lower tail test about the population
proportion can be expressed as follows:
ρ ≥ ρ0, where ρ0 is a hypothesized lower bound of the true
population proportion ρ.
Let us define the test statistic z in terms of the sample proportion
and the sample size:
z = (ρ̂ − ρ0) / √(ρ0(1−ρ0)/n)
Then the null hypothesis of the lower tail test is to be rejected if
z ≤ z_α, where z_α is the 100(α) percentile of the standard normal
distribution.
(Chunk 15)
20. Upper Tail Test of Population Proportion
The null hypothesis of the upper tail test about the population
proportion can be expressed as follows:
ρ ≤ ρ0, where ρ0 is a hypothesized upper bound of the true
population proportion ρ.
Let us define the test statistic z in terms of the sample proportion
and the sample size:
z = (ρ̂ − ρ0) / √(ρ0(1−ρ0)/n)
Then the null hypothesis of the upper tail test is to be rejected if
z ≥ z_{1−α}, where z_{1−α} is the 100(1 − α) percentile of the standard
normal distribution.
(Chunk 16)
21. Two Tailed Test of Population Proportion
The null hypothesis of the two-tailed test about the population
proportion can be expressed as follows:
ρ = ρ0, where ρ0 is a hypothesized true population
proportion.
Let us define the test statistic z in terms of the sample proportion
and the sample size:
z = (ρ̂ − ρ0) / √(ρ0(1−ρ0)/n)
Then the null hypothesis of the two-tailed test is to be rejected if
z ≤ z_{α/2} or z ≥ z_{1−α/2}.
(Chunk 17)
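A Python sketch of the two-tailed proportion test with hypothetical numbers (12 successes in n = 20 trials, testing ρ0 = 0.5 at α = 0.05):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical numbers: test H0: rho = 0.5 with 12 successes in 20 trials.
successes, n, rho0, alpha = 12, 20, 0.5, 0.05

rho_hat = successes / n
z = (rho_hat - rho0) / sqrt(rho0 * (1 - rho0) / n)

z_crit = NormalDist().inv_cdf(1 - alpha / 2)    # ~1.96
print(round(z, 4))
print(abs(z) >= z_crit)                         # False -> cannot reject H0
```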
22. Sample size definition
The quality of a sample survey can be improved (worsened) by
increasing (decreasing) the sample size.
The formulas below provide the sample size needed under the
requirements of a (1 − α) confidence level, a margin of error E,
and a planned parameter estimate.
Here, z_{1−α/2} is the 100(1 − α/2) percentile of the standard normal
distribution.
For a mean: n = z²_{1−α/2} ∗ σ² / E²
For a proportion: n = z²_{1−α/2} ∗ ρ(1−ρ) / E²
n is such that P(|X̄n − µ| ≤ E) = 1 − α.
23. Sample size definition - Exercises
Mean: Assume the population standard deviation σ of the student
height survey is 9.48. Find the sample size needed to achieve a
1.2 centimeter margin of error at 95 per cent confidence level.
Since there are two tails of the normal distribution, the 95 per cent
confidence level would imply the 97.5th percentile of the normal
distribution at the upper tail. Therefore, z1−α/2 is given by
qnorm(.975).
Proportion: Using a 50 per cent planned proportion estimate, find
the sample size needed to achieve 5 per cent margin of error for the
female student survey at 95 per cent confidence level.
As before, the 95 per cent confidence level implies the 97.5th
percentile of the normal distribution at the upper tail, so z1−α/2 is
given by qnorm(.975).
(Chunk 18-19)
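The two exercises can be checked with a short Python sketch (the lab uses qnorm in R; NormalDist().inv_cdf plays the same role here), rounding up to the next whole observation:

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)     # 97.5th percentile, ~1.96

# Mean exercise: sigma = 9.48, margin of error E = 1.2.
sigma, E = 9.48, 1.2
n_mean = ceil(z**2 * sigma**2 / E**2)
print(n_mean)    # 240

# Proportion exercise: planned estimate rho = 0.5, E = 0.05.
rho, E = 0.5, 0.05
n_prop = ceil(z**2 * rho * (1 - rho) / E**2)
print(n_prop)    # 385
```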
24. Homeworks
1: Confidence interval for the proportion. Suppose we have a
sample of n = 25 births, 15 of which are female. Define the
interval (at 99 per cent) for the proportion of females in the
population. HINT: apply, with the proper functions in R, the
formula in slide 11.
2: Hypothesis test to compare two proportions. Suppose we have
two schools. Sampling from the first, n = 20 and the Hispanic
students are 8. Sampling from the second, n = 18 and the Hispanic
students are 4. Can we state (at 95 per cent) that the frequency of
Hispanics is the same in the two schools? N.B.: the test here is
two-tailed.
The test statistic here is:
z = (ρ̂1 − ρ̂2) / sd, where sd = √(ρ(1−ρ)[1/n1 + 1/n2])
and ρ = (ρ̂1∗n1 + ρ̂2∗n2) / (n1 + n2).
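The pooled two-proportion statistic above can be sketched as a small Python function; the counts used below are hypothetical examples, not the homework numbers:

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic, as in the formula above."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    sd = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / sd

# Hypothetical counts: 30 successes of 100 vs 22 successes of 100.
print(round(two_prop_z(30, 100, 22, 100), 4))
```

Compare the result with z_{1−α/2} from qnorm(.975), exactly as in the mean tests.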
25. Charts - 1
Figure: Representation of the critical point for the upper tail hypothesis
test
26. Charts - 2
Figure: Representation of the critical point for the lower tail hypothesis
test
27. Charts - 3
Figure: Representation of the critical point for the two-tailed hypothesis
test