Generalized linear model
In statistics, the generalized linear model (GLM) is a useful generalization of ordinary least squares regression. It relates the random distribution of the measured variable of the experiment (the distribution function) to the systematic (non-random) portion of the experiment (the linear predictor) through a function called the link function.
The subject of generalized linear models was formulated by John Nelder and Robert Wedderburn as a way of putting under one framework various previous models, and finding their commonalities.
Overview
In a GLM, each outcome of the data set (Y) is assumed to be generated from a particular distribution in the exponential family (a very large class of distributions; see below). The mean μ of the distribution for a particular Y value depends on the independent variables for that point:
E(Y) = μ = g⁻¹(Xβ),
where Xβ is the linear predictor, a linear combination (depending on X, the known "independent" variables of the experiment) of unknown parameters β, and g is called the link function.
In this framework, the variance is typically a function V of the mean:
Var(Y) = V(μ) = V(g⁻¹(Xβ)).
It is convenient if the variance function V follows from the exponential family distribution, but it may simply be that the variance is a function of the predicted value.
The unknown parameters β are typically estimated with maximum likelihood, quasi-maximum likelihood, or Bayesian techniques.
Components of the model
The GLM consists of three elements:
1. A distribution function f, from the exponential family.
2. A linear predictor η = Xβ.
3. A link function g such that E(Y) = μ = g⁻¹(η).
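As a concrete illustration, the following Python sketch simulates data by assembling exactly these three components, assuming (arbitrarily) a Poisson distribution with a log link, a single covariate, and made-up coefficient values:

```python
# A minimal sketch of the three GLM components, assuming a Poisson response
# with a log link and arbitrary coefficients (illustrative values only).
import numpy as np

rng = np.random.default_rng(42)

# 2. Linear predictor eta = X beta: a design matrix X and coefficient vector beta.
n = 5
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept plus one covariate
beta = np.array([0.3, 1.1])
eta = X @ beta

# 3. Link function g: here the log link, so the mean is mu = g^(-1)(eta) = exp(eta).
mu = np.exp(eta)

# 1. Distribution from the exponential family: here Poisson, with mean mu.
y = rng.poisson(mu)
print(y)
```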
Exponential family of distributions
Generally speaking, the exponential family of distributions consists of those probability distributions, parameterized by θ and τ, whose density functions or probability mass functions (depending on whether the distribution is continuous or discrete) can be expressed in the form
f(y; θ, τ) = exp[ (a(y) b(θ) + c(θ)) / h(τ) + d(y, τ) ].
τ, called the dispersion parameter, typically is known. The functions a, b, c, d, and h are known. Many, although not all, common distributions are in this family.
If a is the identity function, then the distribution is said to be in canonical form. If in addition b is the identity, then θ is called the canonical parameter.
In the context of generalized linear models, θ and τ can be related to the mean and variance of the distribution, which in turn are linked to the linear predictor.
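For example, the Poisson distribution with mean θ belongs to this family: its probability mass function θ^y e^(−θ) / y! can be rewritten as exp(y ln θ − θ − ln y!), which matches the form above with a(y) = y (so it is in canonical form), b(θ) = ln θ, c(θ) = −θ, d(y, τ) = −ln y!, and h(τ) = 1. The short Python check below, assuming an arbitrary θ = 3 and the scipy library, verifies that the two expressions agree numerically:

```python
# Worked check (illustrative, assuming theta = 3): the Poisson pmf equals the
# exponential-family form exp(y*ln(theta) - theta - ln(y!)).
import math
from scipy import stats

theta = 3.0
for y in range(6):
    pmf = stats.poisson.pmf(y, theta)                                    # standard pmf
    expfam = math.exp(y * math.log(theta) - theta - math.lgamma(y + 1))  # ln(y!) = lgamma(y+1)
    assert abs(pmf - expfam) < 1e-12
    print(y, pmf, expfam)
```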
Linear predictors
The linear predictor is a quantity relating to the expected value of the data (thus, "predictor") through the link function. The symbol η ("eta") is typically used to denote a linear predictor.
η is expressed as a linear combination (thus, "linear") of unknown parameters β. The coefficients of the linear combination are represented as the matrix X; its elements are either fully known by the experimenters or stipulated by them in the modeling process.
Thus η can be expressed as
η = Xβ.
Link functions
The link function provides the relationship between the linear predictor and the distribution function (through its mean). There are many commonly used link functions, and their choice can be somewhat arbitrary. However, it is important to match the domain of the link function to the range of the distribution function's mean.
Following is a table of some common link functions and their inverses (sometimes referred to as the mean function) used for several distributions in the exponential family.
Distribution | Name | Link Function | Mean Function |
---|---|---|---|
Normal | Identity | Xβ = μ | μ = Xβ |
Exponential, Gamma | Inverse | Xβ = μ⁻¹ | μ = (Xβ)⁻¹ |
Poisson | Log | Xβ = ln(μ) | μ = exp(Xβ) |
Binomial, Multinomial | Logit | Xβ = ln(μ / (1 − μ)) | μ = exp(Xβ) / (1 + exp(Xβ)) |
Notice however that in the case of the exponential and gamma distributions, the domain of the link function (that is, the range of the mean function) is not the same as the permitted range of the mean. In particular, the linear predictor may be negative, which would give an illegal negative mean.
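A minimal numerical sketch of this point, assuming a few arbitrary values of the linear predictor: the reciprocal mean function of the inverse link can return a negative mean, while the log link's mean function cannot.

```python
# Sketch with arbitrary linear-predictor values: the reciprocal mean function of
# the inverse link can yield negative (inadmissible) means, while exp() cannot.
import numpy as np

eta = np.array([-1.5, -0.2, 0.1, 2.0])  # hypothetical values of the linear predictor

mu_inverse = 1.0 / eta   # mean function for the inverse link: first two entries are negative
mu_log = np.exp(eta)     # mean function for the log link: always strictly positive

print(mu_inverse)
print(mu_log)
```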
Examples
Linear regression and general linear models
A possible point of confusion is the distinction between generalized linear models and the general linear model, two broad classes of statistical models. For example, the GLM procedure in the SAS software package (SAS is a registered trademark of SAS Institute Inc.) is important in current statistical practice, and in that context GLM stands for "general linear model." A great many standard statistical methods fall under the general linear model, including bivariate and multivariate linear regression, fixed-effects analysis of variance, and analysis of covariance. While the general linear model may be viewed as a special case of the generalized linear model with identity link, most results of interest are obtained exactly only for the general linear model, which has therefore had a somewhat longer historical development. Results for the generalized linear model with non-identity link and fitted variance parameters are largely asymptotic (tending to work well with large samples).
A simple, very important example of a generalized linear model (also an example of a general linear model) is linear regression. Here the distribution function is the normal distribution with constant variance and the link function is the identity.
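The sketch below, assuming synthetic data and Python's statsmodels package, shows this equivalence: fitting by ordinary least squares and fitting the same data as a GLM with a Gaussian family (identity link) produce the same coefficient estimates.

```python
# Sketch with synthetic data: ordinary least squares and a Gaussian GLM with the
# (default) identity link give the same coefficient estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=100)
X = sm.add_constant(x)                              # design matrix: intercept + covariate
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=100)

ols_fit = sm.OLS(y, X).fit()
glm_fit = sm.GLM(y, X, family=sm.families.Gaussian()).fit()

print(ols_fit.params)   # estimates of intercept and slope
print(glm_fit.params)   # numerically the same as the OLS estimates
```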
Binomial data
When the response data (Y) are binary (taking on only values 0 and 1), the distribution function is generally chosen to be the binomial distribution and the interpretation of μᵢ is then the probability of Yᵢ taking on the value one. There are several popular link functions for binomial data; the most typical is the logit link, whose mean function is the logistic function:
μ = exp(Xβ) / (1 + exp(Xβ)).
GLMs with this setup are logistic regression models.
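As a sketch, assuming synthetic binary data and the statsmodels package (whose binomial family uses the logit link by default):

```python
# Sketch with synthetic binary data: logistic regression fitted as a GLM with a
# binomial family (the logit link is the statsmodels default for this family).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=500)
X = sm.add_constant(x)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.5 * x)))   # logistic mean function of the linear predictor
y = rng.binomial(1, p)                        # binary responses

logit_fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(logit_fit.params)   # estimates near the true intercept (-0.5) and slope (1.5)
```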
In addition, any inverse cumulative distribution function (CDF) can be used for the link, since the CDF's range is [0, 1], the range of the binomial mean. The normal CDF Φ is a popular choice and yields the probit model. Its link is
Xβ = Φ⁻¹(μ).
The identity link is also sometimes used for binomial data (this is equivalent to using the uniform distribution CDF in the above instead of the Gaussian CDF) but this can be problematic as the predicted probabilities can be greater than one or less than zero. In implementation it is possible to fix the nonsensical probabilities outside of [0,1] but interpreting the coefficients can be difficult in this model. The model's primary merit is that near p = 0.5 it is approximately a linear transformation of the probit and logit — econometricians sometimes call this the Harvard model.
The variance function for binomial data is given by:
Var(Yᵢ) = τ μᵢ (1 − μᵢ),
where the dispersion parameter τ is typically exactly one. When it is not, the model is often described as binomial with overdispersion or quasibinomial.
Count data
Another example of a generalized linear model is Poisson regression, which models count data using the Poisson distribution. The link is typically the logarithm (the log link).
The variance function is proportional to the mean:
Var(Yᵢ) = τ μᵢ,
where the dispersion parameter τ is typically exactly one. When it is not, the model is often described as Poisson with overdispersion or quasipoisson.
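A sketch assuming synthetic count data and the statsmodels package; the Pearson chi-square statistic divided by the residual degrees of freedom gives a simple estimate of the dispersion, which should be close to one for genuinely Poisson data.

```python
# Sketch with synthetic count data: Poisson regression with a log link, plus a
# simple overdispersion check (Pearson chi-square / residual degrees of freedom).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(size=300)
X = sm.add_constant(x)
mu = np.exp(0.2 + 1.0 * x)          # mean via the log link's inverse, exp(eta)
y = rng.poisson(mu)                 # count responses

pois_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(pois_fit.params)                              # estimates near (0.2, 1.0)
print(pois_fit.pearson_chi2 / pois_fit.df_resid)    # estimated dispersion, roughly 1 here
```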
References
- Peter McCullagh and John Nelder. Generalized Linear Models. London: Chapman and Hall, 1989.
- A.J. Dobson. Introduction to Generalized Linear Models, Second Edition. London: Chapman and Hall/CRC, 2001.
- James Hardin and Joseph Hilbe. Generalized Linear Models and Extensions. College Station: Stata Press, 2001, 2007.