Cramér-Rao inequality
In statistics, the Cramér-Rao inequality, named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, expresses a lower bound on the variance of an unbiased statistical estimator, based on Fisher information.
It states that the reciprocal of the Fisher information, \mathcal{I}(\theta), of a parameter θ is a lower bound on the variance of an unbiased estimator of the parameter (denoted \hat{\theta}):

\operatorname{var}(\hat{\theta}) \ge \frac{1}{\mathcal{I}(\theta)}.
In some cases, no unbiased estimator exists that realizes the lower bound.
The Cramér-Rao inequality is also known as the Cramér-Rao bound (CRB) or Cramér-Rao lower bound (CRLB) because it places a lower bound on the variance of an estimator of θ.
Example
Suppose X is a normally distributed random variable with known mean μ and unknown variance σ², and let X₁, …, Xₙ be a sample of n independent observations distributed as X. Consider the following statistic:

T = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \mu\right)^2.
Then T is unbiased for σ², as E(T) = σ². What is the variance of T?

\operatorname{var}(T) = \frac{\operatorname{var}\left[(X-\mu)^2\right]}{n} = \frac{1}{n}\left[E\left\{(X-\mu)^4\right\} - \left(E\left\{(X-\mu)^2\right\}\right)^2\right]

(the second equality follows directly from the definition of variance). The first term is the fourth moment about the mean and has value 3(σ²)²; the second is the square of the variance, or (σ²)². Thus

\operatorname{var}(T) = \frac{2(\sigma^2)^2}{n}.
Now, what is the Fisher information in the sample? Recall that the score V is defined as

V = \frac{\partial}{\partial\sigma^2}\ln L(\sigma^2, X)

where L is the likelihood function. Thus in this case,

V = \frac{\partial}{\partial\sigma^2}\ln\left[\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(X-\mu)^2/(2\sigma^2)}\right] = \frac{(X-\mu)^2}{2(\sigma^2)^2} - \frac{1}{2\sigma^2}

where the second equality is from elementary calculus. Thus, the information in a single observation is just minus the expectation of the derivative of V, or

\mathcal{I} = -E\left(\frac{\partial V}{\partial\sigma^2}\right) = -E\left(-\frac{(X-\mu)^2}{(\sigma^2)^3} + \frac{1}{2(\sigma^2)^2}\right) = \frac{\sigma^2}{(\sigma^2)^3} - \frac{1}{2(\sigma^2)^2} = \frac{1}{2(\sigma^2)^2}.
Thus the information in a sample of n independent observations is just n times this, or \frac{n}{2(\sigma^2)^2}.
The Cramér-Rao inequality states that

\operatorname{var}(T) \ge \frac{1}{\mathcal{I}} = \frac{2(\sigma^2)^2}{n}.
In this case the inequality holds with equality, showing that the estimator T is efficient.
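As an illustrative check (a sketch, not part of the original article; the values of μ, σ², n and the number of replications below are arbitrary), the following Python snippet simulates T and compares its empirical variance with the bound 2(σ²)²/n:

```python
import numpy as np

# Monte Carlo check of the worked example: T = (1/n) * sum((X_i - mu)^2)
# is unbiased for sigma^2 and its variance attains the bound 2*(sigma^2)^2 / n.
# mu, sigma2, n and reps are arbitrary illustrative choices.
rng = np.random.default_rng(0)
mu, sigma2, n, reps = 5.0, 2.0, 50, 200_000

X = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
T = np.mean((X - mu) ** 2, axis=1)   # one realization of T per simulated sample

print("empirical var(T)       :", T.var())
print("bound 2*(sigma^2)^2 / n:", 2 * sigma2**2 / n)
```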
Regularity conditions
This inequality relies on two weak regularity conditions on the probability density function, f(x;θ), and the estimator T(X):
- The Fisher information is always defined; equivalently, for all x such that f(x;θ) > 0,
  \frac{\partial}{\partial\theta}\ln f(x;\theta)
  exists and is finite.
- The operations of integration with respect to x and differentiation with respect to θ can be interchanged in the expectation of T; that is,
  \frac{\partial}{\partial\theta}\left[\int T(x)\,f(x;\theta)\,dx\right] = \int T(x)\left[\frac{\partial}{\partial\theta} f(x;\theta)\right]dx
  whenever the right-hand side is finite.
In some cases, a biased estimator can have both a variance and a mean squared error that are below the Cramér-Rao lower bound (the lower bound applies only to estimators that are unbiased). See estimator bias.
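To make this concrete, here is a small Monte Carlo sketch (illustrative only, with arbitrarily chosen μ, σ² and n) in the setting of the example above: the shrunken estimator Σᵢ(Xᵢ − μ)²/(n + 2) is biased, but its mean squared error, 2(σ²)²/(n + 2), lies below the bound 2(σ²)²/n that constrains unbiased estimators.

```python
import numpy as np

# Sketch: a biased estimator of sigma^2 whose MSE falls below the CRLB for
# unbiased estimators, 2*(sigma^2)^2 / n.  Dividing by n + 2 instead of n
# trades a little bias for a larger reduction in variance.
rng = np.random.default_rng(1)
mu, sigma2, n, reps = 0.0, 3.0, 20, 200_000   # arbitrary illustrative values

X = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
S = np.sum((X - mu) ** 2, axis=1)

mse_unbiased = np.mean((S / n - sigma2) ** 2)        # the estimator T of the example
mse_biased   = np.mean((S / (n + 2) - sigma2) ** 2)  # shrunken, biased estimator

print("CRLB for unbiased estimators:", 2 * sigma2**2 / n)
print("MSE of S/n       :", mse_unbiased)   # roughly equal to the CRLB
print("MSE of S/(n + 2) :", mse_biased)     # roughly 2*(sigma^2)^2/(n+2), below the CRLB
```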
If the second regularity condition extends to the second derivative, then an alternative form of the Fisher information can be used, yielding the Cramér-Rao inequality

\operatorname{var}(T) \ge \frac{1}{\mathcal{I}(\theta)} = \frac{1}{-E\left[\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right]}.
In some cases, it may be easier to take the expectation with respect to the second derivative than to take the expectation of the square of the first derivative.
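As an illustration of the two forms (a sketch with arbitrary parameter values, not from the original article), the snippet below estimates both expectations by simulation for a single N(μ, σ²) observation with θ = σ², where both equal 1/(2(σ²)²):

```python
import numpy as np

# Sketch: the two forms of the Fisher information for a single N(mu, sigma^2)
# observation with theta = sigma^2 agree; both equal 1/(2*sigma^4).
# mu, sigma2 and the replication count are arbitrary illustrative values.
rng = np.random.default_rng(2)
mu, sigma2, reps = 1.0, 2.5, 1_000_000

x = rng.normal(mu, np.sqrt(sigma2), size=reps)
score       = (x - mu) ** 2 / (2 * sigma2**2) - 1 / (2 * sigma2)   # d/d(sigma^2) ln f
score_deriv = -(x - mu) ** 2 / sigma2**3 + 1 / (2 * sigma2**2)     # d^2/d(sigma^2)^2 ln f

print("E[score^2]               :", np.mean(score ** 2))
print("-E[d(score)/d(sigma^2)]  :", np.mean(-score_deriv))
print("closed form 1/(2*sigma^4):", 1 / (2 * sigma2**2))
```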
Multiple parameters
Extending the Cramér-Rao inequality to multiple parameters, define a parameter column vector

\boldsymbol{\theta} = \left[\theta_1, \theta_2, \dots, \theta_d\right]^T \in \mathbb{R}^d

with probability density function (pdf), f(x; \boldsymbol{\theta}), that satisfies the above two regularity conditions.
The Fisher information matrix is a d × d matrix with element \mathcal{I}_{m,k} defined as

\mathcal{I}_{m,k} = E\left[\frac{\partial}{\partial\theta_m}\ln f(x;\boldsymbol{\theta}) \cdot \frac{\partial}{\partial\theta_k}\ln f(x;\boldsymbol{\theta})\right].

Then the Cramér-Rao inequality is

\operatorname{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) \ge \frac{\partial\boldsymbol{\psi}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\,[\mathcal{I}(\boldsymbol{\theta})]^{-1}\left(\frac{\partial\boldsymbol{\psi}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\right)^T
where

- \boldsymbol{T}(X) is a vector-valued estimator with mean \boldsymbol{\psi}(\boldsymbol{\theta}) = E[\boldsymbol{T}(X)], and
- \frac{\partial\boldsymbol{\psi}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}} is the Jacobian matrix whose (i,j) element is \partial\psi_i(\boldsymbol{\theta})/\partial\theta_j.

The matrix inequality means that the difference between the two sides is a positive-semidefinite matrix, that is,

x^T\left[\operatorname{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) - \frac{\partial\boldsymbol{\psi}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\,[\mathcal{I}(\boldsymbol{\theta})]^{-1}\left(\frac{\partial\boldsymbol{\psi}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\right)^T\right]x \ge 0 \quad \text{for all } x \in \mathbb{R}^d.
If \boldsymbol{T}(X) is an unbiased estimator of \boldsymbol{\theta} (i.e., \boldsymbol{\psi}(\boldsymbol{\theta}) = \boldsymbol{\theta}), then the Cramér-Rao inequality is

\operatorname{cov}_{\boldsymbol{\theta}}\left(\boldsymbol{T}(X)\right) \ge \mathcal{I}(\boldsymbol{\theta})^{-1}.
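The following sketch illustrates the matrix form for the (illustrative, assumed) case θ = (μ, σ²) of a normal sample, where the Fisher information matrix is diag(n/σ², n/(2(σ²)²)); it compares the simulated covariance of the unbiased estimator (sample mean, unbiased sample variance) with \mathcal{I}(\boldsymbol{\theta})^{-1} and checks that the difference is positive semidefinite up to Monte Carlo noise. All numerical values are arbitrary choices.

```python
import numpy as np

# Sketch of the multiparameter bound for theta = (mu, sigma^2) of a normal
# sample: the Fisher information matrix of n observations is
#   I(theta) = [[n/sigma^2, 0], [0, n/(2*sigma^4)]],
# and the covariance of the unbiased estimator (sample mean, unbiased sample
# variance) should dominate I(theta)^{-1} in the positive-semidefinite sense.
# mu, sigma2, n and reps are arbitrary illustrative values.
rng = np.random.default_rng(3)
mu, sigma2, n, reps = 0.0, 4.0, 30, 200_000

X = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
T = np.column_stack([X.mean(axis=1), X.var(axis=1, ddof=1)])  # (mean, variance) estimator

emp_cov = np.cov(T, rowvar=False)
crb     = np.linalg.inv(np.diag([n / sigma2, n / (2 * sigma2**2)]))

print("empirical covariance of T:\n", emp_cov)
print("Cramér-Rao bound I(theta)^{-1}:\n", crb)
# Eigenvalues of the difference should be >= 0, up to Monte Carlo noise.
print("eigenvalues of (cov - bound):", np.linalg.eigvalsh(emp_cov - crb))
```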
Single-parameter proof
First, a more general version of the inequality will be proven; namely, that if the expectation of T is denoted by ψ(θ), then for all θ

\operatorname{var}(T) \ge \frac{[\psi'(\theta)]^2}{\mathcal{I}(\theta)}.
The Cramér-Rao inequality will then follow as a consequence.
Let X be a random variable with probability density function f(x;θ). Here T = t(X) is a statistic, which is used as an estimator for θ. If V is the score, i.e.

V = \frac{\partial}{\partial\theta}\ln f(X;\theta),

then the expectation of V, written E(V), is zero. If we consider the covariance cov(V,T) of V and T, we have cov(V,T) = E(VT), because E(V) = 0. Expanding this expression we have

\operatorname{cov}(V,T) = E\left(T \cdot \frac{\partial}{\partial\theta}\ln f(X;\theta)\right).
This may be expanded using the chain rule

\frac{\partial}{\partial\theta}\ln f(X;\theta) = \frac{1}{f(X;\theta)}\,\frac{\partial f(X;\theta)}{\partial\theta},

and the definition of expectation gives, after cancelling f(x;θ),

\operatorname{cov}(V,T) = \int t(x)\left[\frac{\partial f(x;\theta)}{\partial\theta}\right]dx = \frac{\partial}{\partial\theta}\left[\int t(x)\,f(x;\theta)\,dx\right] = \psi'(\theta)
because the integration and differentiation operations commute (second condition).
The Cauchy-Schwarz inequality shows that

\sqrt{\operatorname{var}(T)\operatorname{var}(V)} \ge \left|\operatorname{cov}(V,T)\right| = \left|\psi'(\theta)\right|,

therefore, since var(V) = E(V²) = \mathcal{I}(\theta),

\operatorname{var}(T) \ge \frac{[\psi'(\theta)]^2}{\operatorname{var}(V)} = \frac{[\psi'(\theta)]^2}{\mathcal{I}(\theta)},

which proves the claim.
If T is an unbiased estimator of θ, that is, E(T) = θ, then ψ'(θ) = 1; the inequality then becomes

\operatorname{var}(T) \ge \frac{1}{\mathcal{I}(\theta)}.
This is the Cramér-Rao inequality.
The efficiency of T is defined as

e(T) = \frac{1/\mathcal{I}(\theta)}{\operatorname{var}(T)},

or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér-Rao lower bound thus gives e(T) \le 1.
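For example (an illustrative sketch not taken from the article), the sample median is an unbiased estimator of the mean of a normal distribution, but it is not efficient: its efficiency relative to the bound σ²/n tends to 2/π ≈ 0.64 for large samples. The snippet below estimates this by simulation with arbitrarily chosen values.

```python
import numpy as np

# Sketch: efficiency of the sample median as an estimator of a normal mean.
# The CRLB for the mean is sigma^2 / n; the median's large-sample variance is
# pi * sigma^2 / (2n), so its efficiency tends to 2/pi.
# mu, sigma2, n and reps are arbitrary illustrative values.
rng = np.random.default_rng(4)
mu, sigma2, n, reps = 0.0, 1.0, 101, 200_000

X = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
medians = np.median(X, axis=1)

crlb = sigma2 / n
print("empirical efficiency of the median:", crlb / medians.var())
print("asymptotic value 2/pi             :", 2 / np.pi)
```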
Multivariate normal distribution
For the case of a d-variate normal distribution

\boldsymbol{x} \sim N_d\left(\boldsymbol{\mu}(\boldsymbol{\theta}), C(\boldsymbol{\theta})\right)

with probability density function

f(\boldsymbol{x};\boldsymbol{\theta}) = \frac{1}{(2\pi)^{d/2}\left|C(\boldsymbol{\theta})\right|^{1/2}} \exp\left(-\tfrac{1}{2}\left(\boldsymbol{x}-\boldsymbol{\mu}(\boldsymbol{\theta})\right)^T C(\boldsymbol{\theta})^{-1}\left(\boldsymbol{x}-\boldsymbol{\mu}(\boldsymbol{\theta})\right)\right),

the Fisher information matrix has elements

\mathcal{I}_{m,k} = \frac{\partial\boldsymbol{\mu}^T}{\partial\theta_m}\,C^{-1}\,\frac{\partial\boldsymbol{\mu}}{\partial\theta_k} + \frac{1}{2}\operatorname{tr}\left(C^{-1}\frac{\partial C}{\partial\theta_m} C^{-1}\frac{\partial C}{\partial\theta_k}\right)
where "tr" is the trace.
Let w[n] be a white Gaussian noise sequence (a sample of N independent observations) with variance σ² and unknown mean level θ:

\boldsymbol{w} \sim N_N\left(\theta\,\boldsymbol{1}, \sigma^2\,\boldsymbol{I}\right),

where

\boldsymbol{1} = [1, 1, \dots, 1]^T

and has N (the number of independent observations) terms. Then the Fisher information matrix is 1 × 1,

\mathcal{I}(\theta) = \left(\frac{\partial\boldsymbol{\mu}(\theta)}{\partial\theta}\right)^T C^{-1} \left(\frac{\partial\boldsymbol{\mu}(\theta)}{\partial\theta}\right) = \sum_{i=1}^{N}\frac{1}{\sigma^2} = \frac{N}{\sigma^2},

and so the Cramér-Rao inequality is

\operatorname{var}\left(\hat{\theta}\right) \ge \frac{\sigma^2}{N}.
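As a numerical sketch (parameter values are arbitrary illustrative choices), the sample mean of the N observations is an unbiased estimator of θ whose variance equals the bound σ²/N, so it is efficient in this model:

```python
import numpy as np

# Sketch: estimating the constant level theta observed in white Gaussian noise
# of variance sigma^2.  The sample mean is unbiased and its variance equals
# the Cramér-Rao bound sigma^2 / N.  theta, sigma2, N and reps are arbitrary.
rng = np.random.default_rng(5)
theta, sigma2, N, reps = 2.0, 0.5, 64, 200_000

w = theta + rng.normal(0.0, np.sqrt(sigma2), size=(reps, N))   # theta * 1 + noise
theta_hat = w.mean(axis=1)

print("empirical var(theta_hat)  :", theta_hat.var())
print("Cramér-Rao bound sigma^2/N:", sigma2 / N)
```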
Further reading
- Kay, Steven M. (1993). Statistical Signal Processing, Volume I: Estimation Theory. Prentice Hall, ch. 3. ISBN 0-13-345711-7.