Bayes' theorem
Bayes' theorem (also known as Bayes' rule) is a result in probability theory that concerns the conditional probabilities and marginal probability distributions of random variables. Under some interpretations of probability, Bayes' theorem (Bayesian updating) tells us how to revise existing beliefs in the light of new evidence.
In general, the probability of event A given event B is not the same as the probability of event B given event A; however, there is a definite relationship between the two, and Bayes' theorem is the statement of that relationship.
As a normative principle, Bayes' theorem is valid under all interpretations of probability; however, frequentists and Bayesians disagree about how probabilities should be assigned in applications. Frequentists assign probabilities according to the frequency of random events or to proportions within a population, whereas Bayesians assign probabilities to propositions whose truth is uncertain. One consequence is that Bayesians have more occasion to use Bayes' theorem. These disputes are discussed at greater length in this article.
Statement of Bayes' theorem
Bayes' theorem relates the conditional and marginal probabilities of random events A and B:

Pr(A|B) = Pr(B|A) Pr(A) / Pr(B) = L(A|B) Pr(A) / Pr(B),
where L(A|B) is the likelihood of A given that B has occurred.
Each term in Bayes' theorem has a conventional name:
- Pr(A) is the prior probability or marginal probability of A. It is called the "prior" because it does not take into account any information about B.
- Pr(A|B) is the conditional probability of A given that B has occurred; because it is derived from the specified value of B, it is also called the posterior probability of A.
- Pr(B|A) is the conditional probability of B given that A has occurred; because it is derived from the specified value of A, it is also called the posterior probability of B.
- Pr(B) is the prior or marginal probability of B, and acts as the normalizing constant.
In these terms, Bayes' theorem can be stated as:
- posterior probability = (likelihood × prior probability) / normalizing constant
That is, the posterior probability is proportional to the product of the prior probability and the likelihood.
In addition, the ratio Pr(B|A)/Pr(B) is sometimes called the standardised likelihood, so Bayes' theorem can also be stated as:
- posterior probability = standardised likelihood × prior probability
Deriving Bayes' theorem from conditional probability
According to the definition of conditional probability, the probability of event A given that event B has occurred is

Pr(A|B) = Pr(A ∩ B) / Pr(B).

Likewise, the probability of event B given that event A has occurred is

Pr(B|A) = Pr(A ∩ B) / Pr(A).

Rearranging and combining these two equations, we find

Pr(A|B) Pr(B) = Pr(A ∩ B) = Pr(B|A) Pr(A).

This lemma is sometimes called the product rule for probabilities. Dividing both sides by Pr(B), provided that Pr(B) is nonzero, we obtain Bayes' theorem:

Pr(A|B) = Pr(B|A) Pr(A) / Pr(B).
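As a quick sanity check, the identity can be verified numerically. The following Python sketch uses a small made-up joint distribution over two events (the four probabilities are purely illustrative) and checks that the conditional probabilities computed from their definitions satisfy Bayes' theorem:

```python
# Numerical check of Bayes' theorem on a small made-up joint distribution.
# The four joint probabilities are illustrative only; any non-degenerate
# assignment summing to 1 would work equally well.
p_joint = {
    (True, True): 0.12,    # Pr(A and B)
    (True, False): 0.18,   # Pr(A and not B)
    (False, True): 0.28,   # Pr(not A and B)
    (False, False): 0.42,  # Pr(not A and not B)
}

pr_A = p_joint[(True, True)] + p_joint[(True, False)]
pr_B = p_joint[(True, True)] + p_joint[(False, True)]

# Conditional probabilities straight from the definition Pr(X|Y) = Pr(X and Y) / Pr(Y)
pr_A_given_B = p_joint[(True, True)] / pr_B
pr_B_given_A = p_joint[(True, True)] / pr_A

# Bayes' theorem: Pr(A|B) = Pr(B|A) Pr(A) / Pr(B)
assert abs(pr_A_given_B - pr_B_given_A * pr_A / pr_B) < 1e-12
print(pr_A_given_B)  # ≈ 0.3
```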
Alternative forms of Bayes's theorem
Bayes's theorem is often embellished by noting that

Pr(B) = Pr(A ∩ B) + Pr(A^C ∩ B) = Pr(B|A) Pr(A) + Pr(B|A^C) Pr(A^C),

where A^C is the complementary event of A (often called "not A"). So the theorem can be restated as

Pr(A|B) = Pr(B|A) Pr(A) / [Pr(B|A) Pr(A) + Pr(B|A^C) Pr(A^C)].

More generally, where {A_i} forms a partition of the event space,

Pr(A_i|B) = Pr(B|A_i) Pr(A_i) / Σ_j Pr(B|A_j) Pr(A_j),

for any A_i in the partition.
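As an illustration of the partition form, the sketch below (with hypothetical priors and likelihoods) computes a posterior over a three-event partition by normalizing the products Pr(B|A_i) Pr(A_i):

```python
# Posterior over a partition {A1, A2, A3}:
#   Pr(Ai|B) = Pr(B|Ai) Pr(Ai) / sum_j Pr(B|Aj) Pr(Aj)
# The priors and likelihoods below are hypothetical, chosen only to illustrate the formula.
prior = [0.5, 0.3, 0.2]        # Pr(Ai); must sum to 1
likelihood = [0.9, 0.5, 0.1]   # Pr(B|Ai)

unnormalized = [l * p for l, p in zip(likelihood, prior)]
pr_B = sum(unnormalized)                       # law of total probability: Pr(B) = 0.62
posterior = [u / pr_B for u in unnormalized]   # Pr(Ai|B)

print(posterior)  # ≈ [0.726, 0.242, 0.032]
```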
Bayes's theorem in terms of odds and likelihood ratio
Bayes's theorem can also be written neatly in terms of a likelihood ratio Λ and odds O as

O(A|B) = O(A) · Λ(A|B),

where

O(A|B) = Pr(A|B) / Pr(A^C|B)

are the odds of A given B,

O(A) = Pr(A) / Pr(A^C)

are the odds of A by itself, and

Λ(A|B) = L(A|B) / L(A^C|B) = Pr(B|A) / Pr(B|A^C)

is the likelihood ratio.
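A small Python sketch of the odds form, using made-up numbers for Pr(A), Pr(B|A) and Pr(B|A^C); it converts the prior to odds, multiplies by the likelihood ratio, and converts back to a probability:

```python
# Bayes' theorem in odds form: O(A|B) = O(A) * Lambda(A|B).
# The three input probabilities are made up for illustration.
pr_A = 0.2             # prior probability of A
pr_B_given_A = 0.7     # Pr(B|A)
pr_B_given_notA = 0.1  # Pr(B|A^C)

prior_odds = pr_A / (1 - pr_A)                      # O(A) = 0.25
likelihood_ratio = pr_B_given_A / pr_B_given_notA   # Lambda(A|B) = 7.0
posterior_odds = prior_odds * likelihood_ratio      # O(A|B) = 1.75

# Converting odds back to a probability: Pr(A|B) = O(A|B) / (1 + O(A|B))
pr_A_given_B = posterior_odds / (1 + posterior_odds)
print(pr_A_given_B)  # ≈ 0.636, the same value the standard form of the theorem gives
```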
See also the law of total probability.
Bayes' theorem for probability densities
There is also a version of Bayes's theorem for continuous distributions. It is somewhat harder to derive, since probability densities, strictly speaking, are not probabilities, so Bayes's theorem has to be established by a limit process; see Papoulis (citation below), Section 7.3 for an elementary derivation. Bayes's theorem for probability densities is formally similar to the theorem for probabilities:

f(x|y) = f(y|x) f(x) / f(y),

and there is an analogous statement of the law of total probability:

f(y) = ∫ f(y|x) f(x) dx.
As in the discrete case, the terms have standard names. f(x, y) is the joint distribution of X and Y, f(x|y) is the posterior distribution of X given Y=y, f(y|x) = L(x|y) is (as a function of x) the likelihood function of X given Y=y, and f(x) and f(y) are the marginal distributions of X and Y respectively, with f(x) being the prior distribution of X.
Here we have indulged in a conventional abuse of notation, using f for each one of these terms, although each one is really a different function; the functions are distinguished by the names of their arguments.
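For intuition, the density version can be approximated on a grid. The sketch below assumes a purely illustrative model (uniform prior on [0, 1] for X, and Y given X = x normal with mean x and standard deviation 0.1) and approximates the integral in the law of total probability by a Riemann sum:

```python
import numpy as np

# Grid approximation of Bayes' theorem for densities: f(x|y) = f(y|x) f(x) / f(y),
# with f(y) = integral of f(y|x) f(x) dx replaced by a Riemann sum.
# The model (X ~ Uniform(0, 1), Y | X = x ~ Normal(x, 0.1)) is purely illustrative.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

prior = np.ones_like(x)          # f(x): uniform density on [0, 1]
y_obs = 0.3                      # the observed value of Y
sigma = 0.1
likelihood = np.exp(-0.5 * ((y_obs - x) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

evidence = np.sum(likelihood * prior) * dx   # f(y), law of total probability
posterior = likelihood * prior / evidence    # f(x|y) on the grid

print(np.sum(posterior) * dx)  # ≈ 1.0: the posterior density integrates to one
```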
Extensions of Bayes's theorem
Theorems analogous to Bayes's theorem hold in problems with more than two variables. For example:

Pr(A|B, C) = Pr(A) Pr(B|A) Pr(C|A, B) / [Pr(B) Pr(C|B)].

This can be derived in several steps from Bayes's theorem and the definition of conditional probability:

Pr(A|B, C) = Pr(A, B, C) / Pr(B, C) = Pr(C|A, B) Pr(A, B) / [Pr(C|B) Pr(B)] = Pr(A) Pr(B|A) Pr(C|A, B) / [Pr(B) Pr(C|B)].
A general strategy is to work with a decomposition of the joint probability, and to marginalize (integrate) over the variables that are not of interest. Depending on the form of the decomposition, it may be possible to prove that some integrals must be 1, and thus they fall out of the decomposition; exploiting this property can reduce the computations very substantially. A Bayesian network, for example, specifies a factorization of a joint distribution of several variables in which the conditional probability of any one variable given the remaining ones takes a particularly simple form (see Markov blanket).
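As a numerical check of the three-variable identity above (the joint distribution here is made up solely for illustration), the sketch below builds an arbitrary joint distribution over three binary variables and verifies that both sides agree:

```python
from itertools import product

# Verify Pr(A|B,C) = Pr(A) Pr(B|A) Pr(C|A,B) / (Pr(B) Pr(C|B)) on an arbitrary
# joint distribution over three binary variables.  The weights are made up.
weights = [3, 1, 2, 2, 1, 4, 2, 5]
outcomes = list(product([True, False], repeat=3))   # values of (A, B, C)
joint = {o: w / sum(weights) for o, w in zip(outcomes, weights)}

def pr(event):
    """Probability of the set of outcomes (a, b, c) for which `event` is true."""
    return sum(p for (a, b, c), p in joint.items() if event(a, b, c))

lhs = pr(lambda a, b, c: a and b and c) / pr(lambda a, b, c: b and c)  # Pr(A|B,C)

pr_A = pr(lambda a, b, c: a)
pr_B = pr(lambda a, b, c: b)
pr_B_given_A = pr(lambda a, b, c: a and b) / pr_A
pr_C_given_AB = pr(lambda a, b, c: a and b and c) / pr(lambda a, b, c: a and b)
pr_C_given_B = pr(lambda a, b, c: b and c) / pr_B

rhs = pr_A * pr_B_given_A * pr_C_given_AB / (pr_B * pr_C_given_B)

assert abs(lhs - rhs) < 1e-12
print(lhs)  # the two sides agree
```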
Examples
Example #1: False positives in a medical test
Suppose that a test for a particular disease has a very high success rate:
- if a tested patient has the disease, the test accurately reports this, a 'positive', 99% of the time (or, with probability 0.99), and
- if a tested patient does not have the disease, the test accurately reports that, a 'negative', 95% of the time (i.e. with probability 0.95).
Suppose also, however, that only 0.1% of the population have that disease (i.e. with probability 0.001). We now have all the information required to use Bayes's theorem to calculate the probability that, given the test was positive, that it is a false positive. This problem is discussed at greater length in Bayesian inference.
Let D be the event that the patient has the disease, and T be the event that the test returns a positive result. Then, using the second alternative form of Bayes's theorem (above), the probability of a true positive is

Pr(D|T) = Pr(T|D) Pr(D) / [Pr(T|D) Pr(D) + Pr(T|D^C) Pr(D^C)] = (0.99 × 0.001) / (0.99 × 0.001 + 0.05 × 0.999) ≈ 0.019.
Pr(T) is the probability that a given person tests positive. It combines two populations: those with the disease (who correctly test positive, contributing 0.99 × 0.001) and those without the disease (who incorrectly test positive, contributing 0.05 × 0.999),
and hence the probability that a positive result is a false positive is about (1 − 0.019) = 0.981.
Despite the apparent high accuracy of the test, the incidence of the disease is so low (one in a thousand) that the vast majority of patients who test positive (about 98 in 100) do not have the disease. This is quite common in screening tests: when the disease is rare, it is often considered more important for a screening test to have a very low false negative rate than a very low false positive rate, since positive results can be checked by a more accurate follow-up test.
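The arithmetic of this example can be reproduced in a few lines of Python (the variable names are ours, chosen only for readability):

```python
# Reproducing the medical-test arithmetic above.
p_disease = 0.001            # Pr(D): prevalence of the disease
p_pos_given_disease = 0.99   # Pr(T|D): test is positive when the disease is present
p_pos_given_healthy = 0.05   # Pr(T|not D): false positive rate of the test

# Pr(T) by the law of total probability
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1.0 - p_disease))   # 0.05094

# Pr(D|T): probability that a positive result is a true positive
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))        # 0.019
print(round(1.0 - p_disease_given_pos, 3))  # 0.981, the false positive probability
```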
Example #2: Conditional probabilities
Suppose there are two bowls full of cookies. Bowl #1 has 10 chocolate chip cookies and 30 plain cookies, while bowl #2 has 20 of each. Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?
Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes's theorem. But first, we can clarify the situation by rephrasing the question to "what's the probability that Fred picked bowl #1, given that he has a plain cookie?" Thus, to relate to our previous explanation, the event A is that Fred picked bowl #1, and the event B is that Fred picked a plain cookie. To compute Pr(A|B), we first need to know:
- Pr(A), or the probability that Fred picked bowl #1 regardless of any other information. Since Fred is treating both bowls equally, it is 0.5.
- Pr(B), or the probability of getting a plain cookie regardless of any information on the bowls. This is the overall probability of getting a plain cookie, averaged over the two bowls: the sum, over the bowls, of the probability of getting a plain cookie from that bowl multiplied by the probability of selecting it. We know from the problem statement that the probability of getting a plain cookie from bowl #1 is 0.75 and the probability of getting one from bowl #2 is 0.5, and since Fred treats both bowls equally, the probability of selecting either one of them is 0.5. Thus, the probability of getting a plain cookie overall is 0.75×0.5 + 0.5×0.5 = 0.625.
- Pr(B|A), or the probability of getting a plain cookie given that Fred has selected bowl #1. From the problem statement, we know this is 0.75, since 30 out of 40 cookies in bowl #1 are plain.
Given all this information, we can compute the probability of Fred having selected bowl #1 given that he got a plain cookie, as such:

Pr(A|B) = Pr(B|A) Pr(A) / Pr(B) = 0.75 × 0.5 / 0.625 = 0.6.
As we expected, it is more than half.
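The same computation in Python, with illustrative variable names of our choosing:

```python
# The cookie example: Pr(bowl #1 | plain) = Pr(plain | bowl #1) Pr(bowl #1) / Pr(plain)
p_bowl1 = 0.5                   # Pr(A): Fred picks either bowl with equal probability
p_plain_given_bowl1 = 30 / 40   # Pr(B|A) = 0.75
p_plain_given_bowl2 = 20 / 40   # Pr(B|not A) = 0.5

# Pr(B): overall probability of drawing a plain cookie
p_plain = p_plain_given_bowl1 * p_bowl1 + p_plain_given_bowl2 * (1 - p_bowl1)  # 0.625

p_bowl1_given_plain = p_plain_given_bowl1 * p_bowl1 / p_plain
print(p_bowl1_given_plain)  # 0.6
```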
Tables of occurrences and relative frequencies
It is often helpful when calculating conditional probabilities to create a simple table containing the number of occurrences of each outcome, or the relative frequencies of each outcome, for each of the independent variables. The tables below illustrate the use of this method for the cookies.
Number of cookies in each bowl, by type of cookie:

| | Bowl #1 | Bowl #2 | Totals |
|---|---|---|---|
| Chocolate chip | 10 | 20 | 30 |
| Plain | 30 | 20 | 50 |
| Totals | 40 | 40 | 80 |

Relative frequency of cookies in each bowl, by type of cookie:

| | Bowl #1 | Bowl #2 | Totals |
|---|---|---|---|
| Chocolate chip | 0.125 | 0.250 | 0.375 |
| Plain | 0.375 | 0.250 | 0.625 |
| Totals | 0.500 | 0.500 | 1.000 |
The second table is derived from the first by dividing each entry by the total number of cookies under consideration, or 80 cookies.
Example #3: Bayesian inference
Applications of Bayes's theorem often assume the philosophy underlying Bayesian probability that uncertainty and degrees of belief can be measured as probabilities. One such example follows. For additional worked out examples, including simpler examples, please see the article on the examples of Bayesian inference.
We describe the marginal probability distribution of a variable A as the prior probability distribution or simply the prior. The conditional distribution of A given the "data" B is the posterior probability distribution or just the posterior.
Suppose we wish to know about the proportion r of voters in a large population who will vote "yes" in a referendum. Let n be the number of voters in a random sample (chosen with replacement, so that we have statistical independence) and let m be the number of voters in that random sample who will vote "yes". Suppose that we observe n = 10 voters and m = 7 say they will vote yes. From Bayes's theorem we can calculate the probability distribution function for r using

f(r | n = 10, m = 7) = f(m = 7 | r, n = 10) f(r) / ∫₀¹ f(m = 7 | r, n = 10) f(r) dr.
From this we see that from the prior probability density function f(r) and the likelihood function L(r) = f(m = 7|r, n = 10), we can compute the posterior probability density function f(r|n = 10, m = 7).
The prior probability density function f(r) summarizes what we know about the distribution of r in the absence of any observation. We provisionally assume in this case that the prior distribution of r is uniform over the interval [0, 1]. That is, f(r) = 1. If some additional background information is found, we should modify the prior accordingly. However before we have any observations, all outcomes are equally likely.
Under the assumption of random sampling, choosing voters is just like choosing balls from an urn. The likelihood function L(r) = P(m = 7 | r, n = 10) for such a problem is just the probability of 7 successes in 10 trials for a binomial distribution:

L(r) = C(10, 7) r^7 (1 − r)^3 = 120 r^7 (1 − r)^3.
As with the prior, the likelihood is open to revision; more complex assumptions will yield more complex likelihood functions. Maintaining the current assumptions, we compute the normalizing factor,

∫₀¹ L(r) f(r) dr = ∫₀¹ 120 r^7 (1 − r)^3 dr = 120 × (7! × 3!) / 11! = 1/11,
and the posterior distribution for r is then

f(r | n = 10, m = 7) = 11 × 120 r^7 (1 − r)^3 = 1320 r^7 (1 − r)^3

for r between 0 and 1, inclusive.
One may be interested in the probability that more than half the voters will vote "yes". The prior probability that more than half the voters will vote "yes" is 1/2, by the symmetry of the uniform distribution. In comparison, the posterior probability that more than half the voters will vote "yes", i.e., the conditional probability given the outcome of the opinion poll – that seven of the 10 voters questioned will vote "yes" – is

∫ from 1/2 to 1 of 1320 r^7 (1 − r)^3 dr ≈ 0.887,
which is about an "89% chance".
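The posterior computation in this example can be reproduced numerically; the following sketch discretizes r on a grid (the grid size is an arbitrary choice) rather than evaluating the beta integrals exactly:

```python
from math import comb
import numpy as np

# The referendum example: uniform prior on r, binomial likelihood with m = 7 "yes"
# answers out of n = 10.  The posterior works out to 1320 r^7 (1 - r)^3 on [0, 1]
# (a Beta(8, 4) density); here everything is approximated on a grid.
n, m = 10, 7
r = np.linspace(0.0, 1.0, 100001)
dr = r[1] - r[0]

likelihood = comb(n, m) * r**m * (1 - r)**(n - m)   # L(r) = C(10,7) r^7 (1-r)^3
normalizer = np.sum(likelihood) * dr                # ≈ 1/11 for the uniform prior f(r) = 1
posterior = likelihood / normalizer                 # ≈ 1320 r^7 (1-r)^3

p_majority = np.sum(posterior[r > 0.5]) * dr        # Pr(r > 1/2 | m = 7, n = 10)
print(round(p_majority, 3))  # ≈ 0.887, i.e. about an "89% chance"
```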
Historical remarks
Bayes's theorem is named after the Reverend Thomas Bayes (1702–1761), who studied how to compute a distribution for the parameter of a binomial distribution (to use modern terminology). His friend, Richard Price, edited and presented the work in 1763, after Bayes' death, as An Essay towards solving a Problem in the Doctrine of Chances. Pierre-Simon Laplace replicated and extended these results in an essay of 1774, apparently unaware of Bayes' work.
One of Bayes's results (Proposition 5) gives a simple description of conditional probability, and shows that it can be expressed independently of the order in which things occur:
- If there be two subsequent events, the probability of the second b/N and the probability of both together P/N, and it being first discovered that the second event has also happened, the probability I am right [i.e., the conditional probability of the first event being true given that the second has also happened] is P/b.
Note that the expression says nothing about the order in which the events occurred; it measures correlation, not causation. His preliminary results, in particular Propositions 3, 4, and 5, imply the result now called Bayes's Theorem (as described above), but it does not appear that Bayes himself emphasized or focused on that result.
Bayes's main result (Proposition 9 in the essay) is the following: assuming a uniform distribution for the prior distribution of the binomial parameter p, the probability that p is between two values a and b is

∫ from a to b of C(n + m, m) p^m (1 − p)^n dp, divided by ∫ from 0 to 1 of C(n + m, m) p^m (1 − p)^n dp,
where m is the number of observed successes and n the number of observed failures.
What is "Bayesian" about Proposition 9 is that Bayes presented it as a probability for the parameter p. So, one can compute probability for an experimental outcome, but also for the parameter which governs it, and the same algebra is used to make inferences of either kind.
Bayes states his question in a way that might make the idea of assigning a probability distribution to a parameter palatable to a frequentist. He supposes that a billiard ball is thrown at random onto a billiard table, and that the probabilities p and q are the probabilities that subsequent billiard balls will fall above or below the first ball.
See also
- Probability theory
- Bayesian inference
- Monty Hall problem
- Occam's razor
- Prosecutor's fallacy
- Raven paradox
- Revising opinions in statistics
- Empirical Bayes method
- Bayesian spam filtering
References
Versions of the essay
- Thomas Bayes (1763), "An Essay towards solving a Problem in the Doctrine of Chances. By the late Rev. Mr. Bayes, F. R. S. communicated by Mr. Price, in a letter to John Canton, A. M. F. R. S.", Philosophical Transactions, Giving Some Account of the Present Undertakings, Studies and Labours of the Ingenious in Many Considerable Parts of the World 53:370–418.
- Thomas Bayes (1763/1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes's Essay Towards Solving a Problem in the Doctrine of Chances", Biometrika 45:296–315. (Bayes's essay in modernized notation)
- Thomas Bayes "An essay towards solving a Problem in the Doctrine of Chances". (Bayes's essay in the original notation)
Commentaries
- G. A. Barnard (1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes's Essay Towards Solving a Problem in the Doctrine of Chances", Biometrika 45:293–295. (biographical remarks)
- Daniel Covarrubias. "An Essay Towards Solving a Problem in the Doctrine of Chances". (an outline and exposition of Bayes's essay)
- Stephen M. Stigler (1982). "Thomas Bayes's Bayesian Inference," Journal of the Royal Statistical Society, Series A, 145:250–258. (Stigler argues for a revised interpretation of the essay; recommended)
- Isaac Todhunter (1865). A History of the Mathematical Theory of Probability from the time of Pascal to that of Laplace, Macmillan. Reprinted 1949, 1956 by Chelsea and 2001 by Thoemmes.
Additional material
- Pierre-Simon Laplace (1774). "Mémoire sur la Probabilité des Causes par les Événements", Savants Étranges 6:621–656; also Œuvres 8:27–65.
- Pierre-Simon Laplace (1774/1986). "Memoir on the Probability of the Causes of Events", Statistical Science 1(3):364–378.
- Stephen M. Stigler (1986). "Laplace's 1774 memoir on inverse probability", Statistical Science 1(3):359–378.
- Stephen M. Stigler (1983). "Who Discovered Bayes's Theorem?" The American Statistician 37(4):290–296.
- Jeff Miller et al. Earliest Known Uses of Some of the Words of Mathematics (B). (very informative; recommended)
- Athanasios Papoulis (1984). Probability, Random Variables, and Stochastic Processes, second edition. New York: McGraw-Hill.
- James Joyce (2003). "Bayes's Theorem", Stanford Encyclopedia of Philosophy.
- The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay provides an up to date overview of the use of Bayes's theorem in information theory and machine learning.
- Stanford Encyclopedia of Philosophy: Bayes's Theorem provides a comprehensive introduction to Bayes's theorem.
- Eric W. Weisstein. "Bayes' Theorem", from MathWorld.