Dickey-Fuller test
In statistics, the Dickey-Fuller test tests whether a unit root is present in an autoregressive model. It is named after the statisticians D. A. Dickey and W. A. Fuller, who developed the test in 1979.
Explanation
A simple AR(1) model is $y_t = \rho y_{t-1} + u_t$, where $y_t$ is the variable of interest, $t$ is the time index, $\rho$ is a coefficient, and $u_t$ is the error term. A unit root is present if $\rho = 1$, in which case the model is non-stationary.
The regression model can be written as $\Delta y_t = (\rho - 1) y_{t-1} + u_t = \delta y_{t-1} + u_t$, where $\Delta$ is the first-difference operator. This model can be estimated, and testing for a unit root is equivalent to testing $\delta = 0$. Under the unit-root null hypothesis, the test statistic does not follow the standard t-distribution, so the usual critical values cannot be used. The statistic, denoted $\tau$, instead has a specific distribution whose critical values are tabulated in the Dickey-Fuller table.
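To make this concrete, the following is a minimal sketch of the basic test regression in Python, assuming NumPy and statsmodels are available; the simulated random-walk series and the variable names are illustrative assumptions, not part of the article.

```python
# Minimal sketch of the basic Dickey-Fuller regression (no constant),
# assuming NumPy and statsmodels; y is a simulated random walk.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))   # random walk, so the unit-root null is true

dy = np.diff(y)                       # Delta y_t
y_lag = y[:-1]                        # y_{t-1}

# Regress Delta y_t on y_{t-1}; the t-ratio on y_{t-1} is the tau statistic,
# which must be compared with Dickey-Fuller (not Student-t) critical values.
res = sm.OLS(dy, y_lag).fit()
tau = res.tvalues[0]
print("tau =", tau)
```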
There are three main versions of the test:
1. Test for a unit root:
   $\Delta y_t = \delta y_{t-1} + u_t$
2. Test for a unit root with drift:
   $\Delta y_t = a_0 + \delta y_{t-1} + u_t$
3. Test for a unit root with drift and a deterministic time trend:
   $\Delta y_t = a_0 + a_1 t + \delta y_{t-1} + u_t$
Each version of the test has its own critical values, which depend on the size of the sample. In each case, the null hypothesis is that there is a unit root, $\delta = 0$. The tests have low power in that they often cannot distinguish between true unit-root processes ($\delta = 0$) and near-unit-root processes ($\delta$ close to zero).
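A sketch of the three regressions, under the same assumptions as above (NumPy and statsmodels, with a simulated series), might look as follows; the helper name df_statistic and the version labels are hypothetical, and the Dickey-Fuller critical values themselves are not reproduced here.

```python
# Sketch of the three Dickey-Fuller regressions; the tau statistic is the
# t-ratio on the y_{t-1} coefficient in each specification.
import numpy as np
import statsmodels.api as sm

y = np.cumsum(np.random.default_rng(1).normal(size=500))  # illustrative random walk

def df_statistic(y, version="none"):
    """Return tau for the chosen specification.
    version: 'none'  -> Delta y_t = delta*y_{t-1} + u_t
             'drift' -> Delta y_t = a0 + delta*y_{t-1} + u_t
             'trend' -> Delta y_t = a0 + a1*t + delta*y_{t-1} + u_t
    """
    dy = np.diff(y)
    X = y[:-1].reshape(-1, 1)
    if version in ("drift", "trend"):
        X = sm.add_constant(X)                 # prepend intercept a0
    if version == "trend":
        t = np.arange(1, len(dy) + 1)
        X = np.column_stack([X, t])            # append deterministic trend a1*t
    res = sm.OLS(dy, X).fit()
    idx = 0 if version == "none" else 1        # position of the y_{t-1} coefficient
    return res.tvalues[idx]

for v in ("none", "drift", "trend"):
    print(v, df_statistic(y, v))
```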
There is also an extension called the augmented Dickey-Fuller (ADF) test, which removes the structural effects (autocorrelation) in the time series by including lagged differences of the dependent variable in the regression, and then tests using the same procedure.
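A minimal sketch of the augmented test using statsmodels' adfuller function is shown below; the simulated series is again only illustrative, and selecting the lag order by AIC is just one common convention.

```python
# Sketch of the augmented Dickey-Fuller test via statsmodels; lagged differences
# of y are included automatically, with the lag order chosen by AIC here.
import numpy as np
from statsmodels.tsa.stattools import adfuller

y = np.cumsum(np.random.default_rng(2).normal(size=500))  # illustrative random walk

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="c", autolag="AIC")
print("ADF statistic:", stat)
print("p-value:", pvalue)
print("critical values:", crit)
```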
References
Dickey, D. A. and Fuller, W. A. (1979), "Distribution of the Estimators for Autoregressive Time Series with a Unit Root," Journal of the American Statistical Association, 74, pp. 427–431.
See also
- Augmented Dickey-Fuller test
- Phillips-Perron test
- Unit root