Bayes' Theorem

Theorem
Let $$\Pr$$ be a probability measure on a probability space $$\left({\Omega, \Sigma, \Pr}\right)$$.

Let $$\Pr \left({A | B}\right)$$ denote the conditional probability of $A$ given $B$.

Let $$\Pr \left({A}\right) > 0$$ and $$\Pr \left({B}\right) > 0$$.

Then:
 * $$\Pr \left({B | A}\right) = \frac {\Pr \left({A | B}\right) \Pr \left({B}\right)} {\Pr \left({A}\right)}$$
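The formula can be checked numerically. Below is a minimal Python sketch with made-up numbers; the scenario, the rates, and the helper name `bayes` are illustrative assumptions, not part of the theorem. Exact `Fraction` arithmetic avoids floating-point noise.

```python
from fractions import Fraction

def bayes(p_a_given_b, p_b, p_a):
    """Pr(B | A) = Pr(A | B) * Pr(B) / Pr(A)  (requires Pr(A) > 0)."""
    return p_a_given_b * p_b / p_a

# Hypothetical numbers: B = "has condition", A = "test is positive".
p_b = Fraction(1, 100)           # prior Pr(B)
p_a_given_b = Fraction(99, 100)  # likelihood Pr(A | B)
p_a = Fraction(2, 100)           # overall Pr(A)

p_b_given_a = bayes(p_a_given_b, p_b, p_a)
print(p_b_given_a)  # 99/200
```

Note that even with a high likelihood $$\Pr \left({A | B}\right)$$, the posterior $$\Pr \left({B | A}\right)$$ here is under one half, because the prior $$\Pr \left({B}\right)$$ is small.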

Generalized Versions
The theorem can be stated in several more general forms, all of which can be derived from the basic version with the help of other fairly elementary results.

For example:

Let $$\left\{{B_1, B_2, \ldots}\right\}$$ be a partition of the sample space $$\Omega$$, with each $$B_i \in \Sigma$$ and $$\Pr \left({B_i}\right) > 0$$.

Then, for any $$B_i$$ in the partition:


 * $$\Pr \left({B_i | A}\right) = \frac {\Pr \left({A | B_i}\right) \Pr \left({B_i}\right)} {\Pr \left({A}\right)} = \frac {\Pr \left({A | B_i}\right) \Pr \left({B_i}\right)} {\sum_j \Pr \left({A | B_j}\right) \Pr \left({B_j}\right)}$$
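A sketch of the generalized version, with a hypothetical three-event partition; the function name `bayes_partition` and the chosen priors and likelihoods are illustrative assumptions. Since the $$B_i$$ partition the sample space, the posteriors must sum to $1$, which the code asserts.

```python
from fractions import Fraction

def bayes_partition(likelihoods, priors, i):
    """Pr(B_i | A), with denominator computed as the
    total probability sum_j Pr(A | B_j) * Pr(B_j)."""
    total = sum(l * p for l, p in zip(likelihoods, priors))
    return likelihoods[i] * priors[i] / total

# Hypothetical partition {B_1, B_2, B_3}: priors sum to 1.
priors = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]
likelihoods = [Fraction(1, 10), Fraction(1, 2), Fraction(9, 10)]

posteriors = [bayes_partition(likelihoods, priors, i) for i in range(3)]
assert sum(posteriors) == 1  # posteriors over a partition sum to 1
```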

Proof
From the definition of conditional probabilities, we have:


 * $$\Pr \left({A | B}\right) = \frac{\Pr \left({A \cap B}\right)} {\Pr \left({B}\right)}$$


 * $$\Pr \left({B | A}\right) = \frac{\Pr \left({A \cap B}\right)} {\Pr \left({A}\right)}$$

Multiplying each equation through by its denominator:


 * $$\Pr \left({A | B}\right) \Pr \left({B}\right) = \Pr \left({A \cap B}\right) = \Pr \left({B | A}\right) \Pr \left({A}\right)$$

Dividing both sides by $$\Pr \left({A}\right)$$ (which is non-zero by hypothesis), the result follows:
 * $$\Pr \left({B | A}\right) = \frac {\Pr \left({A | B}\right) \Pr \left({B}\right)} {\Pr \left({A}\right)}$$

Proof of Generalized Version
This follows directly by substituting the Total Probability Theorem into the denominator of the basic version:
 * $$\Pr \left({A}\right) = \sum_i \Pr \left({A | B_i}\right) \Pr \left({B_i}\right)$$
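The Total Probability Theorem itself can be illustrated with a small numeric check; the two-event partition and its rates below are made-up values, not from the source.

```python
from fractions import Fraction

# Hypothetical two-event partition {B_1, B_2} with priors summing to 1.
p_b = [Fraction(2, 5), Fraction(3, 5)]          # Pr(B_i)
p_a_given_b = [Fraction(1, 4), Fraction(1, 2)]  # Pr(A | B_i)

# Total Probability Theorem: Pr(A) = sum_i Pr(A | B_i) * Pr(B_i)
p_a = sum(l * p for l, p in zip(p_a_given_b, p_b))
print(p_a)  # 2/5
```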

Note
This result is also known as Bayes' Formula.

The formula:
 * $$\Pr \left({A | B}\right) \Pr \left({B}\right) = \Pr \left({A \cap B}\right) = \Pr \left({B | A}\right) \Pr \left({A}\right)$$

is sometimes called the product rule for probabilities.