Egorov's theorem
In mathematics, Egorov's theorem in measure theory establishes a condition under which a pointwise convergent sequence of measurable functions converges uniformly outside a set of arbitrarily small measure.
Let (fn) be a sequence of real-valued measurable functions on some measure space (X,Σ,μ) such that fn converges μ-almost everywhere on a measurable set A of finite measure to a limit function f. Then, for every ε > 0, there exists a measurable subset B of A such that μ(B) < ε and (fn) converges to f uniformly on the relative complement A − B. Here, μ(B) denotes the measure of B.
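Symbolically, the conclusion can be transcribed into standard notation as follows (A \setminus B denotes the relative complement of B in A; the symbols are exactly those of the statement above):

\forall \varepsilon > 0 \ \exists B \in \Sigma,\; B \subseteq A,\; \mu(B) < \varepsilon: \qquad \sup_{x \in A \setminus B} \lvert f_n(x) - f(x) \rvert \longrightarrow 0 \quad \text{as } n \to \infty.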
In words, pointwise convergence almost everywhere on A implies the apparently much stronger uniform convergence everywhere except on some subset B of arbitrarily small measure. One can prove the theorem directly from the definition of uniform convergence, using the countable subadditivity of μ together with continuity from above (which is where μ(A) < ∞ is used). The type of convergence stated in the theorem is also called almost uniform convergence. The assumption μ(A) < ∞ is indeed necessary: under Lebesgue measure, consider the sequence of indicator functions
fn(x) = 1[n, n+1](x),
defined for x in the real numbers R. This sequence converges pointwise to the zero function everywhere but does not converge uniformly on R − B for any set B of finite measure, as the argument below shows.
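One way to see this is the following short argument (a sketch, using the indicator functions fn = 1[n, n+1] displayed above). Let B be any measurable set with μ(B) < ∞. Since the intervals [n, n+1] overlap only in single points,

\sum_{n \ge 1} \mu\bigl(B \cap [n, n+1]\bigr) = \mu\bigl(B \cap [1, \infty)\bigr) \le \mu(B) < \infty,

so \mu(B \cap [n, n+1]) \to 0, and in particular [n, n+1] \setminus B is non-empty for all sufficiently large n. For each such n there is a point x \in \mathbb{R} \setminus B with f_n(x) = 1, hence

\sup_{x \in \mathbb{R} \setminus B} \lvert f_n(x) - 0 \rvert = 1

for all sufficiently large n, so the convergence cannot be uniform on R − B.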
Egorov's theorem can be used along with compactly supported continuous functions to prove Lusin's theorem for integrable functions.
The theorem is named after Dmitri Egorov, a Russian mathematician and geometer.