Autocovariance
From Wikipedia, the free encyclopedia
In statistics, given a time series or continuous signal Xt, the autocovariance is the covariance of the signal against a time-shifted version of itself. If each state of the series has a mean, E[Xt] = μt, then the autocovariance is given by

    C_{XX}(t,s) = \operatorname{E}[(X_t - \mu_t)(X_s - \mu_s)] = \operatorname{E}[X_t X_s] - \mu_t \mu_s
where E is the expectation operator. If Xt is second-order stationary, then the definition reduces to the more familiar

    C_{XX}(k) = \operatorname{E}[(X_i - \mu)(X_{i+k} - \mu)] = \operatorname{E}[X_i X_{i+k}] - \mu^2
with μ = μi = μj for all i, j (by second-order stationarity).
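Under stationarity, the definition above suggests a straightforward sample estimator. The sketch below is illustrative rather than from the article: the function name and the choice of the common biased estimator (dividing by n rather than n − k) are assumptions.

```python
import numpy as np

def autocovariance(x, k):
    """Sample autocovariance of a second-order stationary series at lag k.

    Uses the common biased estimator that divides by n; other conventions
    (e.g. dividing by n - k) exist -- the divide-by-n choice is an
    assumption here, not mandated by the definition.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = x.mean()  # a single mean, by second-order stationarity
    # E[(X_i - mu)(X_{i+k} - mu)], averaged over the n - k available pairs
    return np.sum((x[: n - k] - mu) * (x[k:] - mu)) / n

# At lag 0 the autocovariance is just the variance of the series.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(autocovariance(x, 0))  # equals x.var()
print(autocovariance(x, 1))
```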
Here k is the amount by which the signal has been shifted and is usually referred to as the lag. When the autocovariance is normalised by dividing by the variance σ², it becomes the autocorrelation R(k). That is,

    R(k) = \frac{C_{XX}(k)}{\sigma^2}
Note, however, that some disciplines use the terms autocovariance and autocorrelation interchangeably.
The autocovariance can be thought of as a measure of how similar a signal is to a time-shifted version of itself, with an autocovariance of σ² indicating perfect correlation at that lag. Normalising by the variance puts this measure into the range [−1, 1].
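This similarity interpretation can be seen on a periodic signal. The sketch below (the function and the sinusoidal test signal are illustrative choices, not from the article) normalises the lag-k autocovariance by the lag-0 value, i.e. the variance; shifting a sine wave by a full period realigns it with itself, while a half-period shift inverts it.

```python
import numpy as np

def autocorrelation(x, k):
    """Autocovariance at lag k normalised by the variance (the lag-0 value)."""
    x = np.asarray(x, dtype=float)
    n, mu = len(x), x.mean()
    c0 = np.sum((x - mu) ** 2) / n                       # variance sigma^2
    ck = np.sum((x[: n - k] - mu) * (x[k:] - mu)) / n    # autocovariance C(k)
    return ck / c0

# A sine wave sampled 20 points per period over many periods.
t = np.arange(400)
x = np.sin(2 * np.pi * t / 20)

# A full-period shift realigns the signal, so R(20) is close to 1; a
# half-period shift inverts it, so R(10) is close to -1. Both are pulled
# slightly toward 0 by the (n - k)/n factor of the biased estimator.
print(autocorrelation(x, 20))
print(autocorrelation(x, 10))
```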