
Linear regression

From Wikipedia, the free encyclopedia

In statistics, linear regression is a regression method that models the relationship between a dependent variable Y, p independent variables Xi, and a random term ε. The model can be written as

Y = \beta_1  + \beta_2 X_2 +  \cdots +\beta_p X_p + \varepsilon

where β1 is the intercept ("constant" term), the βi are the respective parameters of the independent variables, and p is the number of parameters to be estimated in the linear regression. Linear regression can be contrasted with non-linear regression.

This method is called "linear" because the model is linear in the parameters: the response is assumed to be a linear combination of the explanatory variables (or of fixed transformations of them). It is often erroneously thought that the reason the technique is called "linear regression" is that the graph of Y = β1 + β2X is a straight line. But if the model is (for example)

Y = \alpha + \beta x + \gamma x^2 + \varepsilon

the problem is still one of linear regression, since the model is linear in the parameters α, β and γ (equivalently, linear in the regressors x and x2), even though the graph of Y against x by itself is not a straight line.
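
For illustration, here is a minimal sketch of fitting the quadratic model above by ordinary linear least squares, assuming NumPy (which the article itself does not mention). The fit is linear in the parameters α, β and γ even though the fitted curve is not a straight line.

    # Fitting y = alpha + beta*x + gamma*x^2 is still linear regression:
    # the model is linear in the parameters, not in x.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-2, 2, 50)
    y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(scale=0.3, size=x.size)

    # Design matrix with columns 1, x, x^2: each column is a fixed
    # function of x, so the fit is linear in (alpha, beta, gamma).
    X = np.column_stack([np.ones_like(x), x, x**2])
    params, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(params)  # estimates of (alpha, beta, gamma)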

Historical remarks

The earliest form of linear regression was the method of least squares, which was published by Legendre in 1805[1] and by Gauss in 1809.[2] The term "least squares" comes from Legendre's French term, moindres carrés. However, Gauss claimed that he had known the method since 1795.

Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the sun. Euler had worked on the same problem (1748) without success.[citation needed] Gauss published a further development of the theory of least squares in 1821,[3] including a version of the Gauss-Markov theorem.

Notation and naming convention

In order to understand the information presented in this article, the notation must be clarified. A vector of variables is denoted with an arrow over it, as in \vec X. Matrices are denoted using a bold face font, such as X, and the product of the matrix X and the parameter vector β is written Xβ. The dependent variable Y in a regression is conventionally called the "response variable". The independent variables (in vector form) are called the explanatory variables or regressors. Other terms include "exogenous variables," "input variables," and "predictor variables".

A hat, \hat{}, over a variable denotes that the variable or parameter has been estimated; for example, \hat\beta denotes the estimated values of the parameter vector β.

The linear regression model

The linear regression model can be written in vector-matrix notation as

Y = \mathbf{X}\beta + \varepsilon.

The term ε represents the unpredicted or unexplained variation in the response variable; it is conventionally called the "error" whether it is really a measurement error or not, and is assumed to be independent of \vec X. For simple linear regression, where there is only a single explanatory variable and two parameters, the above equation reduces to:

y = a+bx+\varepsilon.\,

An equivalent formulation which explicitly shows the linear regression as a model of conditional expectation can be given as

\mbox{E}(y|x) = \alpha + \beta x \,

where the conditional distribution of y given x is identical to the distribution of the error term, shifted by the mean α + βx.
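
For the simple model, the least-squares estimates have a familiar closed form: the slope estimate is the sample covariance of x and y divided by the sample variance of x, and the intercept estimate is \bar y - b \bar x. A minimal sketch, assuming NumPy (the helper name simple_fit is ours):

    import numpy as np

    def simple_fit(x, y):
        """Closed-form least-squares estimates for y = a + b*x + eps."""
        xbar, ybar = x.mean(), y.mean()
        b = ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()
        a = ybar - b * xbar
        return a, b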

Types of linear regression

There are many different approaches that can be taken to solving the regression problem, that is, determining suitable estimates for the parameters.

Least-squares analysis

Main article: Least squares


Least-squares analysis was developed by Carl Friedrich Gauss in the early 1800s (see the historical remarks above). This method uses the following Gauss-Markov assumptions:

  • the errors have expectation zero,
  • the errors all have the same (finite) variance, and
  • the errors are uncorrelated with one another.

(See also the Gauss-Markov theorem.) These assumptions imply that least-squares estimates of the parameters are optimal in a certain sense: among all linear unbiased estimators, they have the smallest variance.

A linear regression with p parameters (including the regression intercept β1) and n data points (sample size), with n ≥ p + 1 (so that standard errors can also be computed), allows construction of the following vectors and matrix:

\begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_n \end{bmatrix}= \begin{bmatrix} 1 & x_{21} & x_{31} & \dots & x_{p1} \\ 1 & x_{22} & x_{32} & \dots & x_{p2} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{2n} & x_{3n} & \dots & x_{pn} \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix} + \begin{bmatrix} \varepsilon_1\\ \varepsilon_2\\ \vdots\\ \varepsilon_n \end{bmatrix}

or, from vector-matrix notation above,

Y = \mathbf{X}\beta + \varepsilon.

Each data point can be given as (\vec x_i, y_i), i = 1, 2, \dots, n. For n = p, standard errors of the parameter estimates cannot be calculated. For n < p, the parameters themselves cannot be estimated.

The estimated values of the parameters can be given as

\widehat{\beta}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T {\vec y}

Using the assumptions provided by the Gauss-Markov theorem, it is possible to analyse the results and determine whether or not the model determined using least squares is valid. The number of degrees of freedom is given by n − p.
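
A minimal sketch of this estimator, assuming NumPy (the function name ols_fit is ours); solving the normal equations with a linear solver is numerically preferable to forming the explicit inverse:

    import numpy as np

    def ols_fit(X, y):
        """Least-squares estimate beta_hat = (X^T X)^{-1} X^T y."""
        beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # avoids explicit inverse
        dof = X.shape[0] - X.shape[1]                 # n - p degrees of freedom
        return beta_hat, dof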

The residuals, representing 'observed' minus 'calculated' quantities, are useful to analyse the regression. They are determined from

\hat{\vec\varepsilon} = \vec y - \mathbf{X} \hat\beta

The standard deviation, \hat\sigma, for the model is determined from

\hat\sigma = \sqrt{\frac{\hat{\vec\varepsilon}^T \hat{\vec\varepsilon}}{n-p}} = \sqrt{\frac{\vec y^T \vec y - \hat\beta^T \mathbf{X}^T \vec y}{n-p}}
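
A short sketch of the residuals and \hat\sigma, assuming NumPy (the helper name is ours), following the two formulas above:

    import numpy as np

    def residual_stats(X, y, beta_hat):
        """Residuals ('observed' minus 'calculated') and sigma_hat."""
        resid = y - X @ beta_hat
        n, p = X.shape
        sigma_hat = np.sqrt(resid @ resid / (n - p))
        return resid, sigma_hat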


The variance in the errors can be described using the Chi-square distribution:

\hat\sigma^2 \sim \frac{\chi^2_{n-p} \, \sigma^2}{n-p}

The 100(1 − α)% confidence interval for the parameter βi is computed as follows:

\widehat\beta_i \pm t_{\frac{\alpha}{2}, n-p} \, \hat\sigma \sqrt{(\mathbf{X}^T \mathbf{X})^{-1}_{ii}}

where t follows the Student's t-distribution with n − p degrees of freedom and (\mathbf{X}^T \mathbf{X})^{-1}_{ii} denotes the value located in the ith row and ith column of the matrix (\mathbf{X}^T \mathbf{X})^{-1}.
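
A sketch of this interval, assuming NumPy and SciPy (SciPy supplies the Student's t quantile; the helper name param_ci is ours):

    import numpy as np
    from scipy import stats

    def param_ci(X, beta_hat, sigma_hat, alpha=0.05):
        """100(1 - alpha)% confidence intervals for all parameters."""
        n, p = X.shape
        diag = np.diag(np.linalg.inv(X.T @ X))      # (X^T X)^{-1}_{ii}
        half = stats.t.ppf(1 - alpha / 2, df=n - p) * sigma_hat * np.sqrt(diag)
        return beta_hat - half, beta_hat + half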

The 100(1 − α)% confidence interval for the mean response at a point \vec x = \vec x_0 (interpolation or extrapolation) is given by:

\vec x_0 \widehat\beta \pm t_{\frac{\alpha}{2}, n-p} \, \hat\sigma \sqrt{\vec x_0 (\mathbf{X}^T \mathbf{X})^{-1} \vec x_0^T}

where \vec x_0 = (1, x_{20}, x_{30}, \dots, x_{p0}).

The 100(1 − α)% confidence interval for the predicted response (a new observation) is given by:

\vec x_0 \widehat\beta \pm t_{\frac{\alpha}{2}, n-p} \, \hat\sigma \sqrt{1 + \vec x_0 (\mathbf{X}^T \mathbf{X})^{-1} \vec x_0^T}.
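
The two intervals share the same form and differ only in the "1 +" under the square root, which accounts for the noise in a single new observation. A sketch, assuming NumPy and SciPy (the helper name is ours):

    import numpy as np
    from scipy import stats

    def response_intervals(X, beta_hat, sigma_hat, x0, alpha=0.05):
        """Mean-response and predicted-response intervals at x0 = (1, x_20, ..., x_p0)."""
        n, p = X.shape
        t_crit = stats.t.ppf(1 - alpha / 2, df=n - p)
        lev = x0 @ np.linalg.inv(X.T @ X) @ x0      # x0 (X^T X)^{-1} x0^T
        fit = x0 @ beta_hat
        mean_hw = t_crit * sigma_hat * np.sqrt(lev)        # mean response
        pred_hw = t_crit * sigma_hat * np.sqrt(1.0 + lev)  # new observation
        return (fit - mean_hw, fit + mean_hw), (fit - pred_hw, fit + pred_hw)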

The regression sum of squares SSR is given by:

SSR = \sum \left( \hat y_i - \bar y \right)^2 = \hat\beta^T \mathbf{X}^T \vec y - \frac{1}{n} \vec y^T \vec u \, \vec u^T \vec y

where \vec u is an n × 1 vector of ones.

The error sum of squares ESS is given by:

ESS = \sum \left( y_i - \hat y_i \right)^2 = \vec y^T \vec y - \hat\beta^T \mathbf{X}^T \vec y.

The total sum of squares TSS is given by

TSS = \sum \left( y_i - \bar y \right)^2 = \vec y^T \vec y - \frac{1}{n} \vec y^T \vec u \, \vec u^T \vec y = SSR + ESS.

Pearson's coefficient of regression, R2 (also called the coefficient of determination), is then given as:

R^2 = \frac{SSR}{TSS} = 1 - \frac{ESS}{TSS}
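
These sums of squares translate directly into code; a sketch assuming NumPy (the helper name is ours):

    import numpy as np

    def r_squared(X, y, beta_hat):
        """R^2 = SSR / TSS = 1 - ESS / TSS."""
        y_hat = X @ beta_hat
        ssr = ((y_hat - y.mean()) ** 2).sum()   # regression sum of squares
        ess = ((y - y_hat) ** 2).sum()          # error sum of squares
        tss = ((y - y.mean()) ** 2).sum()       # total: TSS = SSR + ESS
        return ssr / tss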

Assessing the least-squares model

Once the above values have been computed, the model should be checked for two different things:

  1. Whether the assumptions of least-squares are fulfilled and
  2. Whether the model is valid

Checking model assumptions

The model assumptions are checked by calculating the residuals and plotting them. The residuals are calculated as follows:

\hat{\vec\varepsilon} = \vec y - \hat{\vec y} = \vec y - \mathbf{X} \hat\beta

The following plots can be constructed to test the validity of the assumptions:

  1. Plotting a normal probability plot of the residuals to test normality. The points should lie along a straight line.
  2. Plotting a time series plot of the residuals, that is, plotting the residuals as a function of time.
  3. Plotting the residuals as a function of the explanatory variables, \mathbf{ X}.
  4. Plotting the residuals against the fitted values, \hat \vec y\,.
  5. Plotting the residuals against the previous residual.

In all but the first case, there should not be any noticeable pattern in the data. Two of these diagnostic plots are sketched below.
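
A minimal sketch of two of the plots listed above, assuming matplotlib and SciPy (scipy.stats.probplot produces the normal probability plot; the helper name is ours):

    import matplotlib.pyplot as plt
    from scipy import stats

    def residual_plots(y_hat, resid):
        """Residuals vs fitted values, and a normal probability plot."""
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
        ax1.scatter(y_hat, resid)               # should show no pattern
        ax1.axhline(0.0, linestyle="--")
        ax1.set(xlabel="fitted values", ylabel="residuals")
        stats.probplot(resid, dist="norm", plot=ax2)  # should be a straight line
        plt.show()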

Checking model validity

The validity of the model can be checked using any of the following methods:

  1. Using the confidence interval for each of the parameters, βi. If a confidence interval includes 0, that parameter can be removed from the model; ideally, a new regression excluding it is then performed, and this is repeated until there are no more parameters to remove.
  2. Calculating Pearson's coefficient of regression. The closer the value is to 1, the better the regression. This coefficient gives the fraction of the observed behaviour that the given variables can explain.
  3. Examining the observational and prediction confidence intervals. The smaller they are, the better.
  4. Computing the F-statistic.

Modifications of least-squares analysis

There are various ways in which least-squares analysis can be modified, including

  • weighted least squares, which is a generalisation of the least squares method
  • polynomial fitting, which involves fitting a polynomial to the given data.

Polynomial fitting

A polynomial fit is a specific type of multiple regression. The simple regression model (a first-order polynomial) can be trivially extended to higher degrees. The regression model y_i = \alpha_0 + \alpha_1 x_i + \alpha_2 x_i^2 + \cdots + \alpha_m x_i^m + \varepsilon_i is a polynomial model of degree m with coefficients \{ \alpha_0, \dots, \alpha_m \}. As before, we can express the model using the data matrix \mathbf{X}, the target vector \vec y and the parameter vector \vec\alpha. The ith row of \mathbf{X} and \vec y will contain the x and y values for the ith data sample. The model can then be written as a system of linear equations:

\begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_n \end{bmatrix}= \begin{bmatrix} 1 & x_1 & x_1^2 & \dots & x_1^m \\ 1 & x_2 & x_2^2 & \dots & x_2^m \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^m \end{bmatrix} \begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_m \end{bmatrix} + \begin{bmatrix} \varepsilon_1\\ \varepsilon_2\\ \vdots\\ \varepsilon_n \end{bmatrix}

which, in pure matrix notation, remains as before

Y = \mathbf{X} \vec \alpha + \varepsilon, \,

and the vector of polynomial coefficients is

\widehat{\vec \alpha} = (\mathbf{X}^T \mathbf{X})^{-1}\; \mathbf{X}^T Y. \,
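
A sketch of a polynomial fit as linear regression, assuming NumPy; numpy.vander with increasing=True builds exactly the design matrix shown above (the helper name is ours):

    import numpy as np

    def poly_fit(x, y, m):
        """Degree-m polynomial fit as linear regression."""
        # increasing=True gives columns 1, x, ..., x^m, as in the matrix above.
        X = np.vander(x, N=m + 1, increasing=True)
        alpha_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        return alpha_hat   # coefficients alpha_0 ... alpha_m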

Robust regression

Main article: robust regression

A host of alternative approaches to the computation of regression parameters fall into the category known as robust regression. One technique minimizes the mean absolute error, or some other function of the residuals, instead of the mean squared error as in ordinary linear regression. Robust regression is much more computationally intensive than least-squares regression and is somewhat more difficult to implement as well. While least-squares estimates are not very sensitive to violations of the assumption that the errors are normally distributed, the same cannot be said when the variance or mean of the error distribution is not bounded, or when an analyst who can identify outliers is unavailable.

In the Stata community, "robust regression" means linear regression with Huber-White standard error estimates. This relaxes the assumption of homoscedasticity for the variance estimates only; the coefficient estimates themselves are still the ordinary least squares (OLS) estimates.
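
A hedged sketch of the idea, assuming NumPy; the HC0 "sandwich" variant and the helper name are our choices, and Stata's implementation details may differ. The point estimates stay plain OLS, while the covariance uses the form (X^T X)^{-1} X^T diag(e_i^2) X (X^T X)^{-1}.

    import numpy as np

    def hc0_errors(X, y):
        """OLS point estimates with HC0 sandwich standard errors."""
        beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # still plain OLS
        resid = y - X @ beta_hat
        bread = np.linalg.inv(X.T @ X)
        meat = X.T @ (X * (resid ** 2)[:, None])       # X^T diag(e^2) X
        cov = bread @ meat @ bread
        return beta_hat, np.sqrt(np.diag(cov))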

Applications of linear regression

The trend line

For trend lines as used in technical analysis, see Trend lines (technical analysis).

A trend line represents a trend: the long-term movement in time-series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line can simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques such as linear regression. Trend lines are typically straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.

Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.

Examples

Linear regression is widely used in biological, behavioral and social sciences to describe relationships between variables. It ranks as one of the most important tools used in these disciplines.

Medicine

As one example, early evidence relating tobacco smoking to mortality and morbidity came from studies employing regression. Researchers usually include several variables in their regression analysis in an effort to remove factors that might produce spurious correlations. For the cigarette smoking example, researchers might include socio-economic status in addition to smoking to ensure that any observed effect of smoking on mortality is not due to some effect of education or income. However, it is never possible to include all possible confounding variables in a study employing regression. For the smoking example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are considered to be more trustworthy than a regression analysis.

Finance

Linear regression underlies the capital asset pricing model, and the concept of using Beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the Beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.

References

  1. ^ A.M. Legendre, Nouvelles méthodes pour la détermination des orbites des comètes (1805). "Sur la Méthode des moindres quarrés" appears as an appendix.
  2. ^ C.F. Gauss, Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientum (1809).
  3. ^ C.F. Gauss, Theoria combinationis observationum erroribus minimis obnoxiae (1821/1823).

Additional sources

  • Cohen, J., Cohen, P., West, S.G., & Aiken, L.S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Charles Darwin. The Variation of Animals and Plants under Domestication (1869). (Chapter XIII describes what was known about reversion in Galton's time. Darwin uses the term "reversion".)
  • Draper, N.R. and Smith, H. Applied Regression Analysis, Wiley Series in Probability and Statistics (1998).
  • Francis Galton. "Regression Towards Mediocrity in Hereditary Stature," Journal of the Anthropological Institute, 15:246-263 (1886). (Facsimile at: [1])
  • Robert S. Pindyck and Daniel L. Rubinfeld (1998). Econometric Models and Economic Forecasts (4th ed.), ch. 1 (introduction, including appendices on Σ operators and derivation of parameter estimates) and Appendix 4.3 (multiple regression in matrix form).

