Definition:Probability Measure

Context
Probability Theory.

Definition
Let $$\mathcal E$$ be an experiment.

Let $$\Omega$$ be the sample space of $$\mathcal E$$, and let $$\Sigma$$ be the event space of $$\mathcal E$$.

A probability measure on $$\mathcal E$$ is a mapping $$\Pr: \Sigma \to \R$$ which fulfils the Kolmogorov axioms:

First Axiom

 * $$\forall A \in \Sigma: 0 \le \Pr \left({A}\right)$$

Second Axiom

 * $$\Pr \left({\Omega}\right) = 1$$

Third Axiom
Let $$A_1, A_2, \ldots$$ be a countable (possibly countably infinite) sequence of pairwise disjoint events.

Then:
 * $$\Pr \left({\bigcup_{i \ge 1} A_i}\right) = \sum_{i \ge 1} \Pr \left({A_i}\right)$$

As an elementary and easily-digested consequence of this, we have:


 * $$\forall A, B \in \Sigma: A \cap B = \varnothing \implies \Pr \left({A \cup B}\right) = \Pr \left({A}\right) + \Pr \left({B}\right)$$.
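The axioms above can be checked concretely on a finite experiment. The following is a minimal sketch, not part of the definition: it models a fair six-sided die (a hypothetical choice of $$\mathcal E$$), takes $$\Sigma$$ to be the power set of $$\Omega$$, and verifies the first and second axioms together with the finite-additivity consequence just stated.

```python
from fractions import Fraction
from itertools import chain, combinations

# Hypothetical finite experiment: a fair six-sided die.
# Sample space Omega; the event space Sigma is taken to be all subsets.
omega = frozenset(range(1, 7))
pr_point = {w: Fraction(1, 6) for w in omega}  # equal mass on each outcome

def pr(event):
    """Probability of an event: the sum of its point masses."""
    return sum(pr_point[w] for w in event)

# Enumerate Sigma: all 2^6 = 64 subsets of Omega.
sigma = [frozenset(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

# First axiom: every event has non-negative probability.
assert all(pr(a) >= 0 for a in sigma)

# Second axiom: the whole sample space has probability 1.
assert pr(omega) == 1

# Consequence of the third axiom: additivity for disjoint events.
a, b = frozenset({1, 2}), frozenset({5, 6})
assert a & b == frozenset()          # A and B are disjoint
assert pr(a | b) == pr(a) + pr(b)    # Pr(A ∪ B) = Pr(A) + Pr(B)
```

Exact rational arithmetic via `Fraction` avoids the floating-point rounding that would otherwise make equality checks on probabilities unreliable.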