Linear regression
From Wikipedia, the free encyclopedia
In statistics, linear regression is a regression method that models the relationship between the dependent variable Y, the independent variables Xi, i = 2, …, p, and a random term ε. The model can be written as

Y = \beta_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \varepsilon
where β1 is the intercept ("constant" term), the βi are the respective parameters of the independent variables, and p is the number of parameters to be estimated in the linear regression. Linear regression can be contrasted with non-linear regression.
This method is called "linear" because the relation of the response to the explanatory variables is assumed to be a linear function of the parameters. It is often erroneously thought that the reason the technique is called "linear regression" is that the graph of Y = β1 + β2X is a straight line. But if the model is (for example)

Y = \beta_1 + \beta_2 X + \beta_3 X^2 + \varepsilon
the problem is still one of linear regression, that is, linear in the parameters β1, β2 and β3 (the regressors being X and X²), even though the graph of Y against X by itself is not a straight line.
Historical remarks
The earliest form of linear regression was the method of least squares, which was published by Legendre in 1805,[1] and by Gauss in 1809.[2] The term "least squares" is from Legendre's term, moindres carrés. However, Gauss claimed that he had known the method since 1795.
Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the sun. Euler had worked on the same problem (1748) without success.[citation needed] Gauss published a further development of the theory of least squares in 1821,[3] including a version of the Gauss-Markov theorem.
Notation and naming convention
In order to understand the information presented in this article, the notation must be clarified. A vector of variables will be denoted with an arrow over it, such as \vec{x}. Matrices will be denoted using a bold face font, such as X, and the product of the matrix X and the parameter vector \vec{\beta} will be written X\vec{\beta}. The dependent variable, Y, in regression is conventionally called the "response variable". The independent variables (in vector form, \vec{x}) are called the explanatory variables or regressors. Other terms include "exogenous variables," "input variables," and "predictor variables".
A hat over a variable denotes that the variable or parameter has been estimated; for example, \hat{\vec{\beta}} denotes the estimated values of the parameter vector β.
The linear regression model
The linear regression model can be written in vector-matrix notation as

\vec{y} = \mathbf{X}\vec{\beta} + \vec{\varepsilon}
The term ε represents the unpredicted or unexplained variation in the response variable; it is conventionally called the "error" whether it is really a measurement error or not, and is assumed to be independent of \vec{x}. For simple linear regression, where there is only a single explanatory variable and two parameters, the above equation reduces to:

y = \beta_1 + \beta_2 x + \varepsilon
An equivalent formulation, which explicitly shows linear regression as a model of conditional expectation, can be given as

\mathrm{E}(y \mid x) = \beta_1 + \beta_2 x
with the conditional distribution of y given x identical to the distribution of the error term, shifted by the conditional mean E(y | x).
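As a concrete illustration of the simple model above, the following Python sketch simulates data from y = β1 + β2x + ε and recovers the two parameters with the closed-form least-squares estimates. The simulated data, seed, and variable names are illustrative assumptions, not part of the article.

import numpy as np

# Simulate data from y = beta1 + beta2*x + eps (values chosen for illustration only)
rng = np.random.default_rng(0)
beta1_true, beta2_true = 2.0, 0.5
x = rng.uniform(0.0, 10.0, size=100)
y = beta1_true + beta2_true * x + rng.normal(0.0, 1.0, size=100)

# Closed-form least-squares estimates for the two-parameter (simple regression) model
beta2_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta1_hat = y.mean() - beta2_hat * x.mean()
print(beta1_hat, beta2_hat)  # should land near 2.0 and 0.5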
Types of linear regression
There are many different approaches that can be taken to solving the regression problem, that is, to determining suitable estimates for the parameters.
Least-squares analysis
Least-squares analysis, whose underlying theory was developed by Carl Friedrich Gauss in the 1820s, uses the following Gauss-Markov assumptions:
- The random errors εi have expected value 0.
- The random errors εi are uncorrelated (this is weaker than an assumption of probabilistic independence).
- The random errors εi are homoscedastic, i.e., they all have the same variance.
(See also the Gauss-Markov theorem.) These assumptions imply that least-squares estimates of the parameters are optimal in the sense that they have the lowest variance among all linear unbiased estimators.
A linear regression with p parameters (including the regression intercept β1) and n data points (sample size), with n ≥ p, allows construction of the following vectors and matrix:

\vec{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad \mathbf{X} = \begin{pmatrix} 1 & x_{12} & \cdots & x_{1p} \\ 1 & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n2} & \cdots & x_{np} \end{pmatrix}, \qquad \vec{\beta} = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{pmatrix}, \qquad \vec{\varepsilon} = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}
or, in the vector-matrix notation above,

\vec{y} = \mathbf{X}\vec{\beta} + \vec{\varepsilon}.
Each data point can be given as (\vec{x}_i, y_i), for i = 1, 2, …, n. For n = p there are no degrees of freedom and standard errors of the parameter estimates cannot be calculated; for n less than p the parameters cannot be estimated at all.
The estimated values of the parameters can be given as

\hat{\vec{\beta}} = \left( \mathbf{X}^T \mathbf{X} \right)^{-1} \mathbf{X}^T \vec{y}.
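A minimal Python sketch of this estimate follows, using an illustrative design matrix X (first column of ones for the intercept) and response vector y; the data are simulated for the example. In practice a dedicated least-squares solver is preferred over forming (XᵀX)⁻¹ explicitly.

import numpy as np

# Illustrative data: n = 50 observations, p = 3 parameters (intercept plus two regressors)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(0.0, 0.3, size=n)

# Normal equations: beta_hat = (X^T X)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# A least-squares solver is numerically preferable to forming the inverse explicitly
beta_hat_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat, beta_hat_lstsq)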
Using the assumptions provided by the Gauss-Markov theorem, it is possible to analyse the results and determine whether or not the model determined using least squares is valid. The number of degrees of freedom is given by n − p.
The residuals, representing 'observed' minus 'calculated' quantities, are useful for analysing the regression. They are determined from

\hat{\vec{\varepsilon}} = \vec{y} - \mathbf{X}\hat{\vec{\beta}}.
The standard deviation, s, for the model is determined from

s = \sqrt{ \frac{ \hat{\vec{\varepsilon}}^{\,T} \hat{\vec{\varepsilon}} }{ n - p } }.
The variance in the errors can be described using the chi-square distribution:

\frac{(n - p)\, s^2}{\sigma^2} \sim \chi^2_{\,n-p}.
The 100(1 − α)% confidence interval for the parameter βi is computed as follows:

\hat{\beta}_i \pm t_{\frac{\alpha}{2},\,n-p}\; s \sqrt{ \left( \mathbf{X}^T \mathbf{X} \right)^{-1}_{ii} }

where t follows the Student's t-distribution with n − p degrees of freedom and (\mathbf{X}^T \mathbf{X})^{-1}_{ii} denotes the value located in the ith row and ith column of the matrix (\mathbf{X}^T \mathbf{X})^{-1}.
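A sketch of this interval computation in Python, assuming arrays X, y and beta_hat as in the earlier sketch; the function name and the use of scipy.stats.t for the critical value are illustrative choices, not prescribed by the article.

import numpy as np
from scipy import stats

def beta_confidence_intervals(X, y, beta_hat, alpha=0.05):
    """100(1 - alpha)% confidence bounds for each parameter estimate (illustrative sketch)."""
    n, p = X.shape
    resid = y - X @ beta_hat
    s = np.sqrt(resid @ resid / (n - p))                # model standard deviation
    se = s * np.sqrt(np.diag(np.linalg.inv(X.T @ X)))   # standard error of each beta_i
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=n - p)   # Student's t critical value
    return beta_hat - t_crit * se, beta_hat + t_crit * se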
The 100(1 − α)% mean response confidence interval for a prediction (interpolation or extrapolation) at a value \vec{x}_0 is given by

\hat{y}_0 \pm t_{\frac{\alpha}{2},\,n-p}\; s \sqrt{ \vec{x}_0^{\,T} \left( \mathbf{X}^T \mathbf{X} \right)^{-1} \vec{x}_0 }

where \hat{y}_0 = \vec{x}_0^{\,T} \hat{\vec{\beta}}.

The 100(1 − α)% predicted response confidence interval (prediction interval) for a new observation at \vec{x}_0 is given by

\hat{y}_0 \pm t_{\frac{\alpha}{2},\,n-p}\; s \sqrt{ 1 + \vec{x}_0^{\,T} \left( \mathbf{X}^T \mathbf{X} \right)^{-1} \vec{x}_0 }.
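Both intervals can be computed directly; this sketch assumes arrays X, y, beta_hat as in the earlier sketches and a regressor vector x0 (including the leading 1 for the intercept), all illustrative.

import numpy as np
from scipy import stats

def response_intervals(X, y, beta_hat, x0, alpha=0.05):
    """Mean-response and predicted-response intervals at the point x0 (illustrative sketch)."""
    n, p = X.shape
    resid = y - X @ beta_hat
    s = np.sqrt(resid @ resid / (n - p))
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=n - p)
    y0_hat = x0 @ beta_hat                              # predicted mean response at x0
    leverage = x0 @ np.linalg.inv(X.T @ X) @ x0         # x0^T (X^T X)^{-1} x0
    mean_hw = t_crit * s * np.sqrt(leverage)            # half-width, mean response interval
    pred_hw = t_crit * s * np.sqrt(1.0 + leverage)      # half-width, prediction interval
    return (y0_hat - mean_hw, y0_hat + mean_hw), (y0_hat - pred_hw, y0_hat + pred_hw)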
The regression sum of squares SSR is given by:

\mathrm{SSR} = \sum_{i=1}^{n} \left( \hat{y}_i - \bar{y} \right)^2 = \hat{\vec{\beta}}^{\,T} \mathbf{X}^T \vec{y} - \frac{1}{n}\left( \vec{1}^{\,T} \vec{y} \right)^2

where \vec{1} is an n by 1 vector of ones.
The error sum of squares ESS is given by:

\mathrm{ESS} = \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 = \vec{y}^{\,T} \vec{y} - \hat{\vec{\beta}}^{\,T} \mathbf{X}^T \vec{y}.
The total sum of squares TSS is given by

\mathrm{TSS} = \sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2 = \vec{y}^{\,T} \vec{y} - \frac{1}{n}\left( \vec{1}^{\,T} \vec{y} \right)^2 = \mathrm{SSR} + \mathrm{ESS}.
Pearson's coefficient of regression, R², is then given as:

R^2 = \frac{\mathrm{SSR}}{\mathrm{TSS}} = 1 - \frac{\mathrm{ESS}}{\mathrm{TSS}}.
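A short sketch computing the sums of squares and R² as defined above, assuming the illustrative X, y and beta_hat arrays from the earlier sketches; the function name is an assumption for the example.

import numpy as np

def sums_of_squares(X, y, beta_hat):
    """Return SSR, ESS, TSS and R^2 following the formulas above (illustrative sketch)."""
    y_hat = X @ beta_hat
    ssr = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
    ess = np.sum((y - y_hat) ** 2)          # error sum of squares
    tss = np.sum((y - y.mean()) ** 2)       # total sum of squares, equal to SSR + ESS
    return ssr, ess, tss, ssr / tss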
Assessing the least-squares model
Once the above values have been calculated, the model should be checked for two different things:
- Whether the assumptions of least-squares are fulfilled and
- Whether the model is valid
Checking model assumptions
The model assumptions are checked by calculating the residuals and plotting them. The residuals are calculated as follows:

\hat{\vec{\varepsilon}} = \vec{y} - \hat{\vec{y}} = \vec{y} - \mathbf{X}\hat{\vec{\beta}}.
The following plots can be constructed to test the validity of the assumptions:
- Plotting a normal probability plot of the residuals to test normality. The points should lie along a straight line.
- Plotting a time series plot of the residuals, that is, plotting the residuals as a function of time.
- Plotting the residuals as a function of the explanatory variables, \vec{x}.
- Plotting the residuals against the fitted values, \hat{\vec{y}}.
- Plotting the residuals against the previous residual.
In all but the first case, there should not be any noticeable pattern in the plotted residuals.
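One possible way to produce the first four diagnostic plots listed above is sketched below with matplotlib and scipy, assuming arrays X, y and beta_hat as in the earlier sketches; the layout and titles are illustrative choices.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def residual_plots(X, y, beta_hat):
    """Draw four of the diagnostic plots listed above (layout and titles are illustrative)."""
    resid = y - X @ beta_hat
    fitted = X @ beta_hat
    fig, axes = plt.subplots(2, 2, figsize=(8, 6))
    stats.probplot(resid, dist="norm", plot=axes[0, 0])   # normal probability plot
    axes[0, 0].set_title("Normal probability plot")
    axes[0, 1].plot(resid, marker="o", linestyle="")       # residuals in observation (time) order
    axes[0, 1].set_title("Residuals vs. order")
    axes[1, 0].scatter(X[:, 1], resid)                     # residuals vs. one explanatory variable
    axes[1, 0].set_title("Residuals vs. regressor")
    axes[1, 1].scatter(fitted, resid)                      # residuals vs. fitted values
    axes[1, 1].set_title("Residuals vs. fitted values")
    fig.tight_layout()
    plt.show()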
Checking model validity
The validity of the model can be checked using any of the following methods:
- Using the confidence interval for each of the parameters, βi. If the confidence interval includes 0, then the parameter can be removed from the model. Ideally, a new regression analysis excluding that parameter would then be performed, and the process repeated until there are no more parameters to remove.
- Calculating Pearson's coefficient of regression. The closer the value is to 1, the better the regression is. This coefficient gives the fraction of the observed variation that can be explained by the explanatory variables.
- Examining the observational and prediction confidence intervals. The smaller they are the better.
- Computing the F-statistic.
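A hedged sketch of the overall F-test follows, assuming an intercept in the first column of X and the illustrative X, y, beta_hat arrays from the earlier sketches; it tests whether the slope parameters jointly differ from zero.

import numpy as np
from scipy import stats

def overall_f_test(X, y, beta_hat):
    """F-statistic and p-value for H0: all slope parameters are zero (illustrative sketch)."""
    n, p = X.shape
    y_hat = X @ beta_hat
    ssr = np.sum((y_hat - y.mean()) ** 2)
    ess = np.sum((y - y_hat) ** 2)
    f_stat = (ssr / (p - 1)) / (ess / (n - p))
    p_value = stats.f.sf(f_stat, p - 1, n - p)   # upper-tail probability of the F distribution
    return f_stat, p_value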
Modifications of least-squares analysis
There are various ways in which least-squares analysis can be modified, including
- weighted least squares, which is a generalisation of the least squares method
- polynomial fitting, which involves fitting a polynomial to the given data.
Polynomial fitting
A polynomial fit is a specific type of multiple regression. The simple regression model (a first-order polynomial) can be trivially extended to higher orders. The regression model

y_i = \delta_0 + \delta_1 x_i + \delta_2 x_i^2 + \cdots + \delta_m x_i^m + \varepsilon_i

is a polynomial of order m with coefficients \delta_0, \ldots, \delta_m. As before, we can express the model using a data matrix \mathbf{X}, a target vector \vec{y} and a parameter vector \vec{\delta}. The ith rows of \mathbf{X} and \vec{y} contain the x and y values for the ith data sample. Then the model can be written as a system of linear equations:

\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^m \\ 1 & x_2 & x_2^2 & \cdots & x_2^m \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^m \end{pmatrix} \begin{pmatrix} \delta_0 \\ \delta_1 \\ \vdots \\ \delta_m \end{pmatrix} + \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}

which, using the pure vector-matrix notation, remains, as before,

\vec{y} = \mathbf{X}\vec{\delta} + \vec{\varepsilon}

and the vector of polynomial coefficients is

\hat{\vec{\delta}} = \left( \mathbf{X}^T \mathbf{X} \right)^{-1} \mathbf{X}^T \vec{y}.
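A brief sketch of a quadratic (m = 2) polynomial fit using the design matrix above; the simulated data and coefficient values are illustrative assumptions, not part of the article.

import numpy as np

# Quadratic fit (m = 2) to simulated data; the data and true coefficients are illustrative
rng = np.random.default_rng(2)
x = np.linspace(-3.0, 3.0, 40)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0.0, 0.4, size=x.size)

m = 2
X = np.vander(x, m + 1, increasing=True)        # columns: 1, x, x^2
delta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(delta_hat)  # estimates of delta_0, delta_1, delta_2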
Robust regression
A host of alternative approaches to the computation of regression parameters are included in the category known as robust regression. One technique minimizes the mean absolute error, or some other function of the residuals, instead of the mean squared error as in least-squares regression. Robust regression is much more computationally intensive than least-squares regression and is somewhat more difficult to implement as well. While least-squares estimates are not very sensitive to violations of the assumption that the errors are normally distributed, this is not true when the variance or mean of the error distribution is not bounded, or when an analyst who can screen the data for outliers is unavailable.
In the Stata culture, "robust regression" means linear regression with Huber-White standard error estimates. This relaxes the assumption of homoscedasticity for the variance estimates only; the parameter estimates themselves are still the ordinary least squares (OLS) estimates.
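A rough sketch of the Huber-White (HC0) "sandwich" variance estimate described above, written directly in NumPy rather than calling Stata or any statistics package; the function name is an assumption for illustration.

import numpy as np

def hc0_standard_errors(X, y):
    """OLS estimates with Huber-White (HC0) standard errors (illustrative sketch)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y                  # ordinary least squares estimates
    resid = y - X @ beta_hat
    meat = X.T @ (X * resid[:, None] ** 2)        # X^T diag(e_i^2) X
    cov = XtX_inv @ meat @ XtX_inv                # "sandwich" covariance estimate
    return beta_hat, np.sqrt(np.diag(cov))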
Applications of linear regression
The trend line
- For trend lines as used in technical analysis, see Trend lines (technical analysis)
A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher degree polynomials depending on the degree of curvature desired in the line.
Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.
Examples
Linear regression is widely used in biological, behavioral and social sciences to describe relationships between variables. It ranks as one of the most important tools used in these disciplines.
Medicine
As one example, early evidence relating tobacco smoking to mortality and morbidity came from studies employing regression. Researchers usually include several variables in their regression analysis in an effort to remove factors that might produce spurious correlations. For the cigarette smoking example, researchers might include socio-economic status in addition to smoking to ensure that any observed effect of smoking on mortality is not due to some effect of education or income. However, it is never possible to include all possible confounding variables in a study employing regression. For the smoking example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are considered to be more trustworthy than a regression analysis.
Finance
Linear regression underlies the capital asset pricing model, and the concept of using Beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the Beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
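As an illustrative sketch, Beta can be estimated by regressing an asset's returns on market returns; the return series, "alpha" value and seed below are simulated assumptions, not real market data.

import numpy as np

# Simulated daily return series; the numbers are illustrative, not real market data
rng = np.random.default_rng(3)
market_returns = rng.normal(0.0005, 0.01, size=250)
asset_returns = 0.0002 + 1.3 * market_returns + rng.normal(0.0, 0.005, size=250)

# Regress asset returns on market returns; the slope estimate is the Beta
X = np.column_stack([np.ones_like(market_returns), market_returns])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, asset_returns, rcond=None)
print(beta_hat)  # should be close to 1.3 for this simulated data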
References
- ^ A.M. Legendre. Nouvelles méthodes pour la détermination des orbites des comètes (1805). "Sur la Méthode des moindres quarrés" appears as an appendix.
- ^ C.F. Gauss. Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientum. (1809)
- ^ C.F. Gauss. Theoria combinationis observationum erroribus minimis obnoxiae. (1821/1823)
Additional sources
- Cohen, J., Cohen P., West, S.G., & Aiken, L.S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. (2nd ed.) Hillsdale, NJ: Lawrence Erlbaum Associates
- Charles Darwin. The Variation of Animals and Plants under Domestication. (1869) (Chapter XIII describes what was known about reversion in Galton's time. Darwin uses the term "reversion".)
- Draper, N.R. and Smith, H. Applied Regression Analysis Wiley Series in Probability and Statistics (1998)
- Francis Galton. "Regression Towards Mediocrity in Hereditary Stature," Journal of the Anthropological Institute, 15:246-263 (1886). (Facsimile at: [1])
- Robert S. Pindyck and Daniel L. Rubinfeld (1998, 4th ed.). Econometric Models and Economic Forecasts, ch. 1 (Intro, incl. appendices on Σ operators & derivation of parameter est.) & Appendix 4.3 (mult. regression in matrix form).
See also
- Econometrics
- Regression analysis
- Robust regression
- Ridge regression
- Least squares
- Median-median line
- Instrumental variable
- Hierarchical linear modeling
- Empirical Bayes methods
- General linear model
External links
- Visual Least Squares: An interactive, visual flash demonstration of how linear regression works.
- In Situ Adaptive Tabulation: Combining many linear regressions to approximate any nonlinear function.
- Earliest Known uses of some of the Words of Mathematics. See: [2] for "error", [3] for "Gauss-Markov theorem", [4] for "method of least squares", and [5] for "regression".
- Online linear regression calculator.
- Online regression by eye (simulation).
- Leverage Effect Interactive simulation to show the effect of outliers on the regression results
- Linear regression as an optimisation problem
- Visual Statistics with Multimedia
- Multiple Regression by Elmer G. Wiens. Online multiple and restricted multiple regression package.
- ZunZun.com Online curve and surface fitting.
- Analytical argumentations of probability and statistics