
The Simple Regression Model:

Regression Analysis:



Y and X are two variables representing a population, and we are interested in explaining Y in terms of X:

Y = β₀ + β₁X + u

where Y is the dependent variable and X is the independent variable.

How do we choose which variable is dependent and which is independent?

Income is the cause of consumption. Thus income is the independent variable, and consumption, the effect, is the dependent variable.

It is also called the two-variable linear regression model or bivariate linear regression model because it relates the two variables X and Y.

Regression analysis is concerned with the study of the dependence of one variable (the dependent variable) on one or more independent or explanatory variables, with a view to estimating or predicting the population mean of the former in terms of the known or fixed (in repeated sampling) values of the latter.

The variable u, called the error term or disturbance in the relationship, represents factors other than X that affect Y, the "unobserved" factors.

If the other factors in u are held fixed, so that the change in u is zero (Δu = 0), then X has a linear effect on Y:

ΔY = β₁ΔX.

Thus the change in Y is simply β₁ multiplied by the change in X.

Terminology and Notation:

  Dependent Variable        Independent Variable
  Explained Variable        Explanatory Variable
  Predicted                 Predictor
  Regressand                Regressor
  Response                  Stimulus
  Endogenous                Exogenous
  Outcome                   Covariate
  Controlled                Control Variable

β₀ and β₁ are two unknown but fixed parameters known as the regression coefficients.

Eg: Suppose a "total population" of 60 families lives in a community called XYZ, and their weekly income (X) and weekly consumption expenditure (Y) are both measured in dollars.

X:            80  100  120  140  160  180  200  220  240  260
Y:            55   65   79   80  102  110  120  135  137  150
              60   70   84   93  107  115  136  137  145  152
              65   74   90   95  110  120  140  140  155  175
              70   80   94  103  116  130  144  152  165  178
              75   85   98  108  118  135  145  157  175  180
               –   88    –  113  125  140    –  160  189  185
               –    –    –  115    –    –    –  162    –  191
Total        325  462  445  707  678  750  685 1043  966 1211
Conditional
means of Y,
E(Y|X):       65   77   89  101  113  125  137  149  161  173

(A dash means there is no family in that cell.)


The 60 families are divided into 10 income groups, from $80 to $260.

The values of X are "fixed", and each value of X defines one of the 10 subpopulations of Y.

There is considerable variation in consumption within each income group.
Geometrically, a population regression curve is simply the locus of the conditional means of the dependent variable for the fixed values of the explanatory variable(s).



The conditional mean is

E(Y|Xᵢ) = f(Xᵢ),

where f(Xᵢ) denotes some function of the explanatory variable X.

We assume E(Y|Xᵢ) is a linear function of Xᵢ, say of the type:

E(Y|Xᵢ) = β₀ + β₁Xᵢ.



Meaning of the term Linear:

    1. Linear in variables, i.e. in X: E(Y|Xᵢ) = β₀ + β₁Xᵢ² is not linear in X.

    2. Linear in parameters, i.e. in β₀ and β₁: E(Y|Xᵢ) = β₀ + β₁²Xᵢ is not linear in the parameters.

Eg: E(Y|Xᵢ) = β₀ + β₁Xᵢ² is linear in the parameters even though it is nonlinear in X.

From now on, whenever we refer to "linear" regression we mean only linear in the parameters, the β's.

Two-way scatter plot of income and consumption:

[Figure: scatter plot of weekly consumption against weekly income, with the Population Regression Line drawn through the conditional means.]

The Population Regression Line passes through the "average" values of consumption, E(Y|X), which are also known as conditional expected values.

The CEV tells us the expected value of weekly consumption expenditure of a family whose income is $80, $100, and so on.

Unconditional Expected Value: the unconditional expected value of weekly consumption expenditure is given by E(Y); it disregards the income levels of the various families.

E(Y) = 7272/60 = $121.20

It tells us the expected value of weekly consumption expenditure of "any" family.
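Both kinds of means can be checked directly from the 60-family table above. A minimal sketch in Python (the data is just the table retyped):

```python
# Weekly consumption (Y) observed in each weekly-income (X) group,
# taken from the 60-family table above.
groups = {
    80:  [55, 60, 65, 70, 75],
    100: [65, 70, 74, 80, 85, 88],
    120: [79, 84, 90, 94, 98],
    140: [80, 93, 95, 103, 108, 113, 115],
    160: [102, 107, 110, 116, 118, 125],
    180: [110, 115, 120, 130, 135, 140],
    200: [120, 136, 140, 144, 145],
    220: [135, 137, 140, 152, 157, 160, 162],
    240: [137, 145, 155, 165, 175, 189],
    260: [150, 152, 175, 178, 180, 185, 191],
}

# Conditional means E(Y|X): average consumption within each income group.
cond_means = {x: sum(ys) / len(ys) for x, ys in groups.items()}

# Unconditional mean E(Y): average over all 60 families, ignoring income.
all_y = [y for ys in groups.values() for y in ys]
e_y = sum(all_y) / len(all_y)

print(cond_means)  # 65, 77, 89, ..., 173
print(e_y)         # 121.2
```

The conditional means reproduce the last row of the table, and the unconditional mean reproduces $121.20.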

Thus:

The conditional mean E(Y|Xᵢ) is a function of Xᵢ, where Xᵢ = $80, $100, and so on.

It is a linear function, and is also known as the Conditional Expectation Function, Population Regression Function (PRF), or population regression:

E(Y|Xᵢ) = β₀ + β₁Xᵢ

where β₀ and β₁ are two unknown but fixed parameters known as the regression coefficients: β₀ is the intercept and β₁ is the slope.

The main objective of regression analysis is to estimate the values of the unknowns β₀ and β₁ on the basis of observations on Y and X.

We saw previously that as a family's income increases, its consumption expenditure on average increases too.

But what about an individual family?
For example, as income increases from $80 to $100, one particular family's consumption is $65, which is less than the consumption expenditure of two families whose weekly income is $80.

Thus we express the deviation of an individual Yᵢ from its conditional mean as:

uᵢ = Yᵢ − E(Y|Xᵢ), or Yᵢ = E(Y|Xᵢ) + uᵢ, or

Yᵢ = β₀ + β₁Xᵢ + uᵢ.

The expenditure of an individual family, given its income level, can be expressed as the sum of two components:

      1. E(Y|Xᵢ) — systematic or deterministic, and
      2. uᵢ — nonsystematic, which cannot be determined.

Yᵢ = E(Y|Xᵢ) + uᵢ.



Taking the expected value on both sides, conditional on Xᵢ:

E(Yᵢ|Xᵢ) = E[E(Y|Xᵢ)] + E(uᵢ|Xᵢ) = E(Y|Xᵢ) + E(uᵢ|Xᵢ),

which implies E(uᵢ|Xᵢ) = 0.

Before making any assumption about u and x, we make one important normalization: as long as we include the intercept in the equation, nothing is lost by assuming that the average value of u in the "population" is zero, i.e. E(u) = 0.



Relationship between u and x:

We could assume u and x are uncorrelated, i.e. not linearly related. But it is possible for u to be uncorrelated with x while being correlated with functions of x, such as x².

Thus the better assumption is that the expected value of u given x is zero:

E(u|x) = E(u) = 0.

This is called the zero conditional mean assumption.
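The assumption can be illustrated with a small simulation (a sketch; the data-generating process here is invented for illustration): if u is drawn independently of x with mean zero, then the average of u is near zero both overall and within any range of x.

```python
import random

random.seed(0)

n = 100_000
x = [random.uniform(0, 10) for _ in range(n)]
# u is drawn independently of x with mean zero, so E(u|x) = E(u) = 0.
u = [random.gauss(0, 1) for _ in range(n)]

mean_u = sum(u) / n
# Conditional averages of u for small and large x: both should be near zero.
low = [ui for xi, ui in zip(x, u) if xi < 5]
high = [ui for xi, ui in zip(x, u) if xi >= 5]
print(mean_u, sum(low) / len(low), sum(high) / len(high))
```

If instead u were built from x (say u = x² plus noise), the two conditional averages would differ and the assumption would fail.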

The sample regression function:

So far we have only talked about the population of Y values corresponding to the fixed X's.

When collecting data it is almost impossible to observe the entire population. Thus in most practical situations all we have is a sample of Y values corresponding to some fixed X's.

Our task is therefore to estimate the PRF on the basis of the sample information:

Ŷᵢ = β̂₀ + β̂₁Xᵢ,

where Ŷᵢ is the estimator of E(Y|Xᵢ), and β̂₀ and β̂₁ are the estimators of β₀ and β₁.

The numerical value obtained by an estimator is known as the "estimate".

Expressed in stochastic terms, the SRF can be written as:

Yᵢ = β̂₀ + β̂₁Xᵢ + ûᵢ,

where ûᵢ is the residual term.

Conceptually ûᵢ is analogous to uᵢ and can be regarded as the estimate of uᵢ.

So far:

PRF: Yᵢ = β₀ + β₁Xᵢ + uᵢ and E(Y|Xᵢ) = β₀ + β₁Xᵢ

SRF: Yᵢ = β̂₀ + β̂₁Xᵢ + ûᵢ and Ŷᵢ = β̂₀ + β̂₁Xᵢ

In terms of the SRF, an observed Yᵢ can be expressed as:

Yᵢ = Ŷᵢ + ûᵢ.

In terms of the PRF:

Yᵢ = E(Y|Xᵢ) + uᵢ.

It is almost impossible for the SRF and the PRF to coincide, due to sampling variation; thus our main objective is to choose β̂₀ and β̂₁ so that the SRF replicates the PRF as closely as possible.
But how is the SRF itself determined, since the PRF is never known?

Ordinary Least Squares:

PRF: Yᵢ = β₀ + β₁Xᵢ + uᵢ

SRF: Yᵢ = β̂₀ + β̂₁Xᵢ + ûᵢ

so that the residual is ûᵢ = Yᵢ − Ŷᵢ = Yᵢ − β̂₀ − β̂₁Xᵢ.

One might think we should choose the SRF so that the sum of the residuals,

Σûᵢ = Σ(Yᵢ − Ŷᵢ),

is as small as possible.

But if we adopt the criterion of minimizing Σûᵢ, we give equal weight to û₁, û₂, û₃, û₄, ...

In other words, all the residuals receive equal weight no matter how far (û₁, û₄) or how close (û₂, û₃) they are to the SRF, and large positive and negative residuals can cancel out.

This is avoided by adopting the least-squares criterion, which states that the SRF should be fixed in such a way that

Σûᵢ² = Σ(Yᵢ − Ŷᵢ)² = Σ(Yᵢ − β̂₀ − β̂₁Xᵢ)²

is as small as possible.

Thus our goal is to choose β̂₀ and β̂₁ in such a way that Σûᵢ² is as small as possible, which is done by OLS.

Let Q(β̂₀, β̂₁) = Σ(Yᵢ − β̂₀ − β̂₁Xᵢ)².

So we want to minimize Q. Taking partial derivatives with respect to β̂₀ and β̂₁ and setting them to zero:

∂Q/∂β̂₀ = −2Σ(Yᵢ − β̂₀ − β̂₁Xᵢ) = 0

∂Q/∂β̂₁ = −2ΣXᵢ(Yᵢ − β̂₀ − β̂₁Xᵢ) = 0.

The first condition gives:

β̂₀ = Ȳ − β̂₁X̄.

Plugging β̂₀ = Ȳ − β̂₁X̄ into the second condition:

ΣXᵢ(Yᵢ − (Ȳ − β̂₁X̄) − β̂₁Xᵢ) = 0.

Upon rearranging this gives:

ΣXᵢ(Yᵢ − Ȳ) = β̂₁ΣXᵢ(Xᵢ − X̄),

and since ΣXᵢ(Yᵢ − Ȳ) = Σ(Xᵢ − X̄)(Yᵢ − Ȳ) and ΣXᵢ(Xᵢ − X̄) = Σ(Xᵢ − X̄)²,

β̂₁ = Σ(Xᵢ − X̄)(Yᵢ − Ȳ) / Σ(Xᵢ − X̄)²,

provided that Σ(Xᵢ − X̄)² > 0.

Thus β̂₁ equals the sample covariance between X and Y divided by the sample variance of X.

Which concludes:

        If x and y are positively correlated then β̂₁ is positive, and
        if x and y are negatively correlated then β̂₁ is negative.
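The two formulas just derived can be applied directly. A sketch in Python, regressing the ten conditional means E(Y|X) from the table above on X; because these particular means happen to lie exactly on a straight line, the fitted line passes through all ten points:

```python
# Conditional means E(Y|X) from the 60-family table above.
x = [80, 100, 120, 140, 160, 180, 200, 220, 240, 260]
y = [65, 77, 89, 101, 113, 125, 137, 149, 161, 173]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# Slope: sample covariance of X and Y over sample variance of X.
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
     / sum((xi - x_bar) ** 2 for xi in x)
# Intercept from the first-order condition.
b0 = y_bar - b1 * x_bar

print(b0, b1)  # 17.0 and 0.6: the means rise by 12 for every 20 of income
```

The slope says that, across these subpopulations, average weekly consumption rises by $0.60 for every extra dollar of weekly income.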

Fitted Values and Residuals:

Suppose the intercept and slope estimates β̂₀ and β̂₁ have been obtained for a given sample of data.

Given β̂₀ and β̂₁, we can obtain the fitted value Ŷᵢ for each observation. By definition, each fitted value lies on the OLS line.

The OLS residual associated with observation i, ûᵢ, is the difference between Yᵢ and its fitted value: ûᵢ = Yᵢ − Ŷᵢ.
If ûᵢ is positive, the line underpredicts Yᵢ; if ûᵢ is negative, the line overpredicts Yᵢ.

The ideal case for observation i is ûᵢ = 0, but in most cases the residual is not equal to zero.




Algebraic Properties of OLS Statistics:

There are several useful algebraic properties of OLS estimates and their
associated statistics. We now cover the three most important of these.

(1) The sum, and therefore the sample average, of the OLS residuals is zero. Mathematically,

Σûᵢ = 0.

This follows immediately from the first OLS first-order condition.
It means the OLS estimates β̂₀ and β̂₁ are chosen to make the residuals add up to zero (for any data set). It says nothing about the residual for any particular observation.

(2) The sample covariance between the regressor and the OLS residuals is zero. This can be written as:

ΣXᵢûᵢ = 0.

(Because the sample average of the OLS residuals is zero, ΣXᵢûᵢ = 0 is equivalent to a zero sample covariance between Xᵢ and ûᵢ.)

Example: in a wage equation such as wage = β₀ + β₁·educ + u, the error u captures all the factors not included in the model, e.g. aptitude, ability, and so on.

(3) The point (X̄, Ȳ) is always on the OLS regression line.

Writing each Yᵢ as its fitted value plus its residual provides another way to interpret an OLS regression.

For each i, write: Yᵢ = Ŷᵢ + ûᵢ.
From property (1) above, the average of the residuals is zero; equivalently, the sample average of the fitted values Ŷᵢ is the same as the sample average of the Yᵢ, i.e. the mean of the Ŷᵢ equals Ȳ.

Further, properties (1) and (2) can be used to show that the sample covariance between Ŷᵢ and ûᵢ is zero.

Thus, we can view OLS as decomposing each Yᵢ into two parts, a fitted value and a residual.

The fitted values and residuals are uncorrelated in the sample.
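All three properties can be verified numerically for any data set. A sketch with made-up numbers:

```python
# Made-up sample, for illustration only.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
     / sum((xi - x_bar) ** 2 for xi in x)
b0 = y_bar - b1 * x_bar

fitted = [b0 + b1 * xi for xi in x]
resid = [yi - fi for yi, fi in zip(y, fitted)]

print(sum(resid))                                # (1) residuals sum to ~0
print(sum(xi * ui for xi, ui in zip(x, resid)))  # (2) zero covariance with x
print(b0 + b1 * x_bar - y_bar)                   # (3) (x̄, ȳ) is on the line
```

All three printed values are zero up to floating-point rounding, whatever data you start from.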



Precision or Standard Errors of Least-Squares Estimates:

Thus far we know that the least-squares estimates are functions of SAMPLE data, so our estimates β̂₀ and β̂₁ will change with each change in sample.

Therefore a proper measure of reliability and precision is needed, and such precision/reliability is measured by the STANDARD ERROR.

Define the total sum of squares (SST), the explained sum of squares (SSE), and the residual sum of squares (SSR, also known as the sum of squared residuals) as follows:

SST = Σ(Yᵢ − Ȳ)²

SSE = Σ(Ŷᵢ − Ȳ)²

SSR = Σûᵢ².

SST is a measure of the total sample variation in the Yᵢ; that is, it measures how spread out the Yᵢ are in the sample.

If we divide SST by n − 1 we obtain the sample variance of Y.
Similarly, SSE measures the sample variation in the Ŷᵢ (where we use the fact that the mean of the Ŷᵢ equals Ȳ), and SSR measures the sample variation in the ûᵢ.

The total variation in Y can always be expressed as the sum of the explained variation SSE and the unexplained variation SSR. Thus,

                          SST = SSE + SSR.
PROOF:

SST = Σ(Yᵢ − Ȳ)² = Σ[(Yᵢ − Ŷᵢ) + (Ŷᵢ − Ȳ)]²
    = Σûᵢ² + 2Σûᵢ(Ŷᵢ − Ȳ) + Σ(Ŷᵢ − Ȳ)²
    = SSR + 2Σûᵢ(Ŷᵢ − Ȳ) + SSE.

Since the covariance between the residuals and the fitted values is zero (and Σûᵢ = 0), the cross term vanishes, and we have

                          SST = SSE + SSR.
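The decomposition can also be checked numerically on any fitted line. A sketch reusing the OLS formulas on made-up data:

```python
# Made-up sample, for illustration only.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
     / sum((xi - x_bar) ** 2 for xi in x)
b0 = y_bar - b1 * x_bar
fitted = [b0 + b1 * xi for xi in x]

sst = sum((yi - y_bar) ** 2 for yi in y)                  # total variation
sse = sum((fi - y_bar) ** 2 for fi in fitted)             # explained variation
ssr = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))    # residual variation

print(sst, sse + ssr)  # equal up to floating-point rounding
```

The equality holds exactly (up to rounding) because OLS makes the cross term in the proof vanish.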



Goodness of Fit:

So far, we have no way of measuring how well the explanatory or independent variable, x, explains the dependent variable, y.

It is often useful to compute a number that summarizes how well the OLS regression line fits the data.

Assuming that the total sum of squares, SST, is not equal to zero (which is true except in the very unlikely event that all the Yᵢ equal the same value), we can divide SST = SSE + SSR through by SST to obtain:

1 = SSE/SST + SSR/SST.

The R-squared of the regression, sometimes called the coefficient of determination, is defined as

R² = SSE/SST, or equivalently R² = 1 − SSR/SST.

R² is the ratio of the explained variation to the total variation, and thus it is interpreted as the fraction of the sample variation in y that is explained by x.

R² is always between zero and one, since SSE can be no greater than SST.

When interpreting R², we usually multiply it by 100 to change it into a percent: 100·R² is the percentage of the sample variation in y that is explained by x.
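Both definitions of R² give the same number. A sketch computing it on the full 60-family data set from the table above:

```python
# Full 60-family data set from the table above.
groups = {
    80:  [55, 60, 65, 70, 75],
    100: [65, 70, 74, 80, 85, 88],
    120: [79, 84, 90, 94, 98],
    140: [80, 93, 95, 103, 108, 113, 115],
    160: [102, 107, 110, 116, 118, 125],
    180: [110, 115, 120, 130, 135, 140],
    200: [120, 136, 140, 144, 145],
    220: [135, 137, 140, 152, 157, 160, 162],
    240: [137, 145, 155, 165, 175, 189],
    260: [150, 152, 175, 178, 180, 185, 191],
}
x = [xi for xi, ys in groups.items() for _ in ys]
y = [yi for ys in groups.values() for yi in ys]

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
     / sum((xi - x_bar) ** 2 for xi in x)
b0 = y_bar - b1 * x_bar
fitted = [b0 + b1 * xi for xi in x]

sst = sum((yi - y_bar) ** 2 for yi in y)
sse = sum((fi - y_bar) ** 2 for fi in fitted)
ssr = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))

r2 = sse / sst  # identical to 1 - ssr/sst
print(round(100 * r2, 1), "percent of the variation in y is explained by x")
```

Here R² is strictly between zero and one: income explains most, but not all, of the family-to-family variation in consumption, since families with the same income still consume different amounts.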

Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 

Último (20)

[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 

2.1 The Simple Regression Model

  • 1. IMPORTANT NOTE: Most of the equations write the intercept and slope with hats (as estimates), but in the graphs the intercepts appear without hats. This is my mistake, and since I do not have time to make changes due to time constraints, kindly bear with me on this issue. For simplicity, please read the unhatted symbols as their hatted counterparts FOR THE GRAPHS ONLY. The Simple Regression Model: Regression Analysis: y = β0 + β1x + u. Y and X are two variables representing the population, and we are interested in explaining y in terms of x, where Y is the dependent variable and X is the independent variable. How do we choose between the independent and the dependent variable? Income is the cause of consumption: income is therefore the independent variable, and consumption, the effect, is the dependent variable. This is also called the two-variable linear regression model or bivariate linear regression model because it relates the two variables x and y. Regression analysis is concerned with the study of the dependence of the dependent variable on one or more independent or explanatory variables, with a view to estimating or predicting the population mean of the former in terms of the known or fixed (in repeated sampling) values of the latter, i.e. E(Y|X). The variable u, called the error term or disturbance in the relationship, represents factors other than x that affect y: the "unobserved" factors. If the other factors in u are held fixed, so that the change in u is zero (Δu = 0), then x has a linear effect on y: Δy = β1Δx.
  • 2. Thus, the change in y is simply β1 multiplied by the change in x. Terminology and notation:

    Dependent Variable   | Independent Variable
    Explained Variable   | Explanatory Variable
    Predicted            | Predictor
    Regressand           | Regressor
    Response             | Stimulus
    Endogenous           | Exogenous
    Outcome              | Covariate
    Controlled           | Control Variable

    β0 and β1 are two unknown but fixed parameters known as the regression coefficients. Example: suppose that in a "total population" we have 60 families living in a community called XYZ, with weekly income (X) and weekly consumption expenditure (Y), both in dollars:

    X:       80   100   120   140   160   180   200   220   240   260
    Y:       55    65    79    80   102   110   120   135   137   150
             60    70    84    93   107   115   136   137   145   152
             65    74    90    95   110   120   140   140   155   175
             70    80    94   103   116   130   144   152   165   178
             75    85    98   108   118   135   145   157   175   180
              -    88     -   113   125   140     -   160   189   185
              -     -     -   115     -     -     -   162     -   191
    Total   325   462   445   707   678   750   685  1043   966  1211
    E(Y|X)   65    77    89   101   113   125   137   149   161   173

    The 60 families are divided into 10 income groups from $80 to $260. The values of X are "fixed", giving 10 Y subpopulations. There is considerable variation in consumption within each income group.
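As a quick arithmetic check on the table, the conditional means E(Y|X) and the unconditional mean E(Y) can be recomputed from the raw figures. This is a minimal Python sketch; the dictionary below simply re-keys the table above by income group.

```python
# Weekly income (X) -> weekly consumption (Y) observations for the 60 families,
# taken from the table above.
data = {
    80:  [55, 60, 65, 70, 75],
    100: [65, 70, 74, 80, 85, 88],
    120: [79, 84, 90, 94, 98],
    140: [80, 93, 95, 103, 108, 113, 115],
    160: [102, 107, 110, 116, 118, 125],
    180: [110, 115, 120, 130, 135, 140],
    200: [120, 136, 140, 144, 145],
    220: [135, 137, 140, 152, 157, 160, 162],
    240: [137, 145, 155, 165, 175, 189],
    260: [150, 152, 175, 178, 180, 185, 191],
}

# Conditional mean E(Y|X) for each income group
cond_means = {x: sum(ys) / len(ys) for x, ys in data.items()}
for x, m in cond_means.items():
    print(f"E(Y|X={x}) = {m:.0f}")

# Unconditional mean E(Y) over all 60 families, ignoring income levels
all_y = [y for ys in data.values() for y in ys]
print(f"E(Y) = {sum(all_y) / len(all_y):.2f}")  # 7272 / 60 = 121.20
```

The loop reproduces the bottom row of the table (65, 77, ..., 173), and the last line reproduces E(Y) = $121.20 from slide 5.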
  • 3. Geometrically, a population regression curve is simply the locus of the conditional means of the dependent variable for the fixed values of the explanatory variable(s). The conditional mean is E(Y|Xi) = f(Xi), where f(Xi) denotes some function of the explanatory variable X. E(Y|Xi) is a linear function of Xi, say of the type: E(Y|Xi) = β0 + β1Xi. Meaning of the term "linear": 1. Linear in the variables, i.e. in X: E(Y|Xi) = β0 + β1Xi² is not linear in X. 2. Linear in the parameters, i.e. in β0 and β1: E(Y|Xi) = β0 + β1²Xi is not linear in the parameters.
  • 4. Example of a model that is linear in the parameters (though nonlinear in the variables): E(Y|Xi) = β0 + β1Xi². From now on, whenever we refer to "linear" regression we mean linear in the parameters only. [Figure: two-way scatter plot of income and consumption, with the population regression line.]
  • 5. The population regression line passes through the "average" values of consumption, E(Y|X), which is also known as the conditional expected value. The conditional expected value tells us the expected weekly consumption expenditure of a family whose income is $80, $100, and so on. Unconditional expected value: the unconditional expected value of weekly consumption expenditure, E(Y), disregards the income levels of the various families: E(Y) = 7272/60 = $121.20. It tells us the expected weekly consumption expenditure of "any" family. Thus the conditional mean E(Y|Xi) is a function of Xi, where E(Y|X = 80) = 65, E(Y|X = 100) = 77, and so on. It is a linear function, also known as the conditional expectation function, population regression function (PRF), or population regression: E(Y|Xi) = β0 + β1Xi, where β0 and β1 are two unknown but fixed parameters known as the regression coefficients; β0 is the intercept and β1 is the slope. The main objective of regression analysis is to estimate the values of these unknowns on the basis of observations on Y and X. We saw previously that as a family's income increases, its consumption expenditure on average increases too. But what about an individual family?
  • 6. For example, as income increases from $80 to $100, one particular family's consumption is $65, which is less than the consumption expenditure of two families whose weekly income is $80. We express this deviation of an individual as: ui = Yi − E(Y|Xi), or Yi = E(Y|Xi) + ui. The expenditure of an individual family, given its income level, can thus be expressed as the sum of two components: 1. E(Y|Xi) = β0 + β1Xi, which is systematic or deterministic, and 2. ui, which is nonsystematic and cannot be determined. Taking the expected value on both sides: E(Yi|Xi) = E(Y|Xi) + E(ui|Xi), which implies E(ui|Xi) = 0. Before we make any assumption about how u and x are related, we make an important assumption: as long as we include the intercept in the equation, nothing is lost by assuming that the average value of u in the "population" is zero, i.e. E(u) = 0. Relationship between u and x: we assume u and x are not correlated, i.e. u and x are not linearly related.
  • 7. It is possible for u to be uncorrelated with x while being correlated with functions of x, such as x². The better assumption therefore involves the expected value of u given x being zero: E(u|x) = E(u) = 0. This is called the zero conditional mean assumption. The sample regression function: so far we have only talked about the population of Y values corresponding to the fixed X's. When collecting data it is almost impossible to observe the entire population; for most practical situations what we have is a sample of Y values corresponding to some fixed X's. Our task is therefore to estimate the PRF on the basis of the sample information.
  • 8. The sample regression function (SRF) is: Ŷi = β̂0 + β̂1Xi, where β̂0 and β̂1 are the estimators of β0 and β1. The numerical value obtained by an estimator is known as the "estimate". Expressing the SRF in stochastic form: Yi = β̂0 + β̂1Xi + ûi, where ûi is the residual term. Conceptually, ûi is analogous to ui and can be regarded as the estimate of ui. So far: PRF: Yi = β0 + β1Xi + ui, and SRF: Yi = β̂0 + β̂1Xi + ûi. In terms of the SRF: Yi = Ŷi + ûi. In terms of the PRF: Yi = E(Y|Xi) + ui. It is almost impossible for the SRF and the PRF to be the same because of sampling problems; our main objective is thus to choose β̂0 and β̂1 so that the SRF replicates the PRF as closely as possible.
  • 9. How is the SRF itself determined, since the PRF is never known? Ordinary least squares (OLS). PRF: Yi = β0 + β1Xi + ui. SRF: Yi = β̂0 + β̂1Xi + ûi, so ûi = Yi − Ŷi = Yi − β̂0 − β̂1Xi. We might think of choosing the SRF in such a way that the sum of the residuals, Σûi, is as small as possible. But if we adopt the criterion of minimizing Σûi, then all the residuals receive equal weight no matter how far from or how close to the SRF they lie, and large positive and large negative residuals can cancel each other even when the points are widely scattered about the SRF.
  • 10. This problem is avoided by adopting the least squares criterion, which states that the SRF can be fixed in such a way that Σûi² is as small as possible, where Σûi² = Σ(Yi − Ŷi)² = Σ(Yi − β̂0 − β̂1Xi)². Thus our goal is to choose β̂0 and β̂1 so that Σûi² is as small as possible, which is done by OLS. Let S(β̂0, β̂1) = Σ(Yi − β̂0 − β̂1Xi)²; we want to minimize S. Taking partial derivatives with respect to β̂0 and β̂1 and setting them to zero: ∂S/∂β̂0 = −2Σ(Yi − β̂0 − β̂1Xi) = 0, and ∂S/∂β̂1 = −2ΣXi(Yi − β̂0 − β̂1Xi) = 0. The first condition gives β̂0 = Ȳ − β̂1X̄. Plugging this value of β̂0 into the second condition gives ΣXi(Yi − (Ȳ − β̂1X̄) − β̂1Xi) = 0.
  • 11. Upon rearranging, this gives Σ(Xi − X̄)(Yi − Ȳ) = β̂1Σ(Xi − X̄)², so that β̂1 = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)², provided that Σ(Xi − X̄)² > 0. Thus β̂1 equals the sample covariance of X and Y divided by the sample variance of X. Which concludes: if X and Y are positively correlated then β̂1 is positive, and if X and Y are negatively correlated then β̂1 is negative. Fitted values and residuals: we assume that the intercept β̂0 and slope β̂1 have been obtained for a given sample of data. Given β̂0 and β̂1, we can obtain the fitted value Ŷi = β̂0 + β̂1Xi for each observation. By definition, each fitted value is on the OLS line. The OLS residual associated with observation i is the difference between Yi and its fitted value: ûi = Yi − Ŷi.
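The slope and intercept formulas just derived can be sketched in a few lines of Python. The `ols` helper and the toy sample below are illustrative additions, not part of the original example; the numbers are chosen so the arithmetic is easy to check by hand.

```python
# A minimal sketch of the OLS formulas derived above:
#   slope = sum((Xi - Xbar)(Yi - Ybar)) / sum((Xi - Xbar)^2)
#   intercept = Ybar - slope * Xbar
def ols(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sxy / sxx          # slope: sample covariance over sample variance of x
    b0 = ybar - b1 * xbar   # intercept from the first normal equation
    return b0, b1

# Hypothetical toy sample (not the 60-family data)
x = [1, 2, 3, 4]
y = [2, 4, 5, 7]
b0, b1 = ols(x, y)
print(b0, b1)  # 0.5 1.6
```

Here Σ(Xi − X̄)(Yi − Ȳ) = 8 and Σ(Xi − X̄)² = 5, so β̂1 = 1.6 and β̂0 = 4.5 − 1.6·2.5 = 0.5, matching the formulas slide by slide.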
  • 12. If ûi is positive, the line underpredicts Yi; if ûi is negative, the line overpredicts Yi. The ideal case for an observation is ûi = 0, but in almost every case no OLS residual is exactly zero. Algebraic properties of OLS statistics: there are several useful algebraic properties of OLS estimates and their associated statistics. We now cover the three most important of these. (1) The sum, and therefore the sample average, of the OLS residuals is zero: Σûi = 0. This follows immediately from the first OLS first-order condition.
  • 13. This means OLS estimates are chosen to make the residuals add up to zero (for any data set); it says nothing about the residual for any particular observation. (2) The sample covariance between the regressor and the OLS residuals is zero: ΣXiûi = 0, which, combined with the fact that the sample average of the OLS residuals is zero, gives a zero sample covariance. Example: in a model such as wage = β0 + β1·educ + u, u captures all the factors not included in the model, e.g. aptitude, ability, and so on. (3) The point (X̄, Ȳ) is always on the OLS regression line. Writing each Yi as its fitted value plus its residual provides another way to interpret an OLS regression: for each i, write Yi = Ŷi + ûi.
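Properties (1)-(3) can be verified numerically. The sketch below fits OLS by the formulas derived earlier on a small hypothetical sample (an illustrative addition, not the textbook data) and checks each property.

```python
import math

# Checking algebraic properties (1)-(3) of OLS on a hypothetical toy sample.
x = [1, 2, 3, 4]
y = [2, 4, 5, 7]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar
resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

# (1) the OLS residuals sum to zero
print(math.isclose(sum(resid), 0, abs_tol=1e-9))                              # True
# (2) zero sample covariance between the regressor and the residuals
print(math.isclose(sum(xi * u for xi, u in zip(x, resid)), 0, abs_tol=1e-9))  # True
# (3) the point (Xbar, Ybar) lies on the OLS line
print(math.isclose(b0 + b1 * xbar, ybar))                                     # True
```

All three checks hold by construction for any data set, since they follow from the two first-order conditions of the minimization.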
  • 14. From property (1) above, the average of the residuals is zero; equivalently, the sample average of the fitted values Ŷi is the same as the sample average of the Yi. Further, properties (1) and (2) can be used to show that the sample covariance between Ŷi and ûi is zero. Thus, we can view OLS as decomposing each Yi into two parts, a fitted value and a residual, and the fitted values and residuals are uncorrelated in the sample. Precision, or standard errors, of the least squares estimates: thus far we know that the least squares estimates are functions of the SAMPLE data, and our estimates will change with each change in sample. A proper measure of reliability and precision is therefore needed, and such precision or reliability is measured by the STANDARD ERROR. Define the total sum of squares (SST), the explained sum of squares (SSE), and the residual sum of squares (SSR, also known as the sum of squared residuals) as follows: SST = Σ(Yi − Ȳ)², SSE = Σ(Ŷi − Ȳ)², SSR = Σûi². SST is a measure of the total sample variation in the Yi; that is, it measures how spread out the Yi are in the sample. If we divide SST by n − 1 we obtain the sample variance of y.
  • 15. Similarly, SSE measures the sample variation in the Ŷi (where we use the fact that the sample average of the Ŷi equals Ȳ), and SSR measures the sample variation in the ûi. The total variation in y can always be expressed as the sum of the explained variation SSE and the unexplained variation SSR; thus, SST = SSE + SSR. PROOF: Σ(Yi − Ȳ)² = Σ(ûi + Ŷi − Ȳ)² = Σûi² + 2Σûi(Ŷi − Ȳ) + Σ(Ŷi − Ȳ)² = SSR + 2Σûi(Ŷi − Ȳ) + SSE. Since the covariance between the residuals and the fitted values is zero, the cross term Σûi(Ŷi − Ȳ) vanishes.
  • 16. We therefore have SST = SSE + SSR. Goodness of fit: so far, we have no way of measuring how well the explanatory or independent variable, x, explains the dependent variable, y. It is often useful to compute a number that summarizes how well the OLS regression line fits the data. Assuming that the total sum of squares, SST, is not equal to zero (which is true except in the very unlikely event that all the Yi equal the same value), we can divide both sides by SST to obtain: 1 = SSE/SST + SSR/SST. Alternatively: SSE/SST = 1 − SSR/SST.
  • 17. The R-squared of the regression, sometimes called the coefficient of determination, can also be defined as R² = SSE/SST = 1 − SSR/SST. It is the ratio of the explained variation to the total variation, and is thus interpreted as the fraction of the sample variation in y that is explained by x. R² is always between zero and one, since SSE can be no greater than SST. When interpreting R², we usually multiply it by 100 to convert it into a percentage: 100·R² is the percentage of the sample variation in y that is explained by x.
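The decomposition SST = SSE + SSR and the resulting R² can also be verified numerically. The sketch below reuses the small hypothetical sample from the earlier OLS sketch and applies the definitions exactly as given above.

```python
# Computing SST, SSE, SSR and R-squared for a hypothetical toy sample,
# verifying the decomposition SST = SSE + SSR from the text.
x = [1, 2, 3, 4]
y = [2, 4, 5, 7]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar
fitted = [b0 + b1 * xi for xi in x]

sst = sum((yi - ybar) ** 2 for yi in y)                  # total sample variation in y
sse = sum((fi - ybar) ** 2 for fi in fitted)             # explained sum of squares
ssr = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))   # residual sum of squares
r2 = sse / sst                                           # equivalently 1 - ssr/sst
print(round(sst, 10), round(sse, 10), round(ssr, 10))    # 13.0 12.8 0.2
print(round(r2, 4))                                      # 0.9846
```

Here SSE + SSR = 12.8 + 0.2 = 13.0 = SST, and R² ≈ 0.985: about 98.5% of the sample variation in y is explained by x in this toy example.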