Kolmogorov's Law

Theorem
Let $P$ be a population.

Let $P$ have mean $\mu$ and finite variance.

Let $\sequence {X_n}_{n \mathop \ge 1}$ be a sequence of random variables forming a random sample from $P$.

Let:


 * $\ds {\overline X}_n = \frac 1 n \sum_{i \mathop = 1}^n X_i$

Then:


 * $\ds {\overline X}_n \xrightarrow {\text {a.s.} } \mu$

where $\xrightarrow {\text {a.s.} }$ denotes almost sure convergence.
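As a numerical illustration (not part of the proof), here is a hypothetical Monte Carlo check: sample means drawn from a uniform population on $\closedint 0 1$, which has mean $\mu = 0.5$ and finite variance, settle near $\mu$ as $n$ grows. The seed and sample sizes are arbitrary choices.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Population: uniform on [0, 1], so mu = 0.5 and the variance is finite (1/12).
mu = 0.5

n = 100_000
running_sum = 0.0
gaps = []
for i in range(1, n + 1):
    running_sum += random.random()
    if i in (100, 10_000, 100_000):
        # Record the deviation of the sample mean from mu at checkpoints.
        gaps.append(abs(running_sum / i - mu))

# The deviation |Xbar_n - mu| shrinks as n grows, as the theorem predicts.
print(gaps)
```

A single run only illustrates one sample path; almost sure convergence is the statement that such paths converge with probability $1$.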

Proof
We may assume that $X_n \ge 0$ for all $n \ge 1$.

Indeed, otherwise consider:
 * ${X_n}^+ := \max \set {X_n, 0}$

and:
 * ${X_n}^- := \max \set {- X_n, 0}$

instead of $X_n$.
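The reduction works because $X_n = {X_n}^+ - {X_n}^-$ pointwise, so the sample mean splits into the difference of the sample means of the two non-negative parts. A quick sanity check on made-up data:

```python
# Illustrative data only: any finite sample works.
xs = [1.5, -2.0, 0.0, 3.25, -0.5]

pos = [max(x, 0.0) for x in xs]   # X^+ = max(X, 0)
neg = [max(-x, 0.0) for x in xs]  # X^- = max(-X, 0)

n = len(xs)
mean_xs = sum(xs) / n

# The sample mean of X equals the mean of X^+ minus the mean of X^-.
split_mean = sum(pos) / n - sum(neg) / n
print(mean_xs, split_mean)
```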

Let $\epsilon \in \R_{>0}$.

For $k \ge 1$ let:
 * $\ell_k := \floor {\paren {1 + \epsilon}^k }$

be the floor of $\paren {1 + \epsilon}^k$.
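The subsequence $\ell_k$ grows geometrically. With an illustrative choice $\epsilon = 0.1$, the consecutive ratios $\ell_{k + 1} / \ell_k$ approach $1 + \epsilon$, a fact used later in the proof; a quick numerical check:

```python
from math import floor

eps = 0.1  # illustrative choice of epsilon > 0
ells = [floor((1 + eps) ** k) for k in range(1, 200)]

# Ratios of consecutive terms; the floor effects vanish as k grows.
ratios = [ells[j + 1] / ells[j] for j in range(len(ells) - 1)]
late_error = max(abs(r - (1 + eps)) for r in ratios[100:])
print(late_error)
```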

Let $\sigma^2$ denote the (finite) variance of $P$.

Then ${\overline X}_{\ell_k}$ has mean $\mu$ and variance $\sigma^2 / \ell_k$, so by the Chebyshev Inequality:
 * $\ds \map \Pr {\size { {\overline X}_{\ell_k} - \mu} \ge \paren {1 + \epsilon}^{-k/4} } \le \frac {\sigma^2 \paren {1 + \epsilon}^{k/2} } {\ell_k}$

Since $\ell_k \ge \paren {1 + \epsilon}^k - 1$, the right hand side decays geometrically in $k$.

Thus:
 * $\ds \sum_{k \mathop = 1}^\infty \map \Pr {\size { {\overline X}_{\ell_k} - \mu} \ge \paren {1 + \epsilon}^{-k/4} } < \infty$
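The probabilities in this sum are bounded via the Chebyshev Inequality by $\sigma^2 \paren {1 + \epsilon}^{k/2} / \ell_k$, where $\sigma^2$ is the population variance. With illustrative values $\sigma^2 = 1$ and $\epsilon = 0.1$, a numerical check that these bounds form a convergent series (the terms decay roughly like $\paren {1 + \epsilon}^{-k/2}$):

```python
from math import floor

eps = 0.1     # illustrative epsilon
sigma2 = 1.0  # illustrative population variance

# Chebyshev bound for the k-th term:
#   Pr(|Xbar_{l_k} - mu| >= (1+eps)^(-k/4)) <= sigma2 * (1+eps)^(k/2) / l_k
terms = [sigma2 * (1 + eps) ** (k / 2) / floor((1 + eps) ** k)
         for k in range(1, 500)]

partial_sum = sum(terms)
# The partial sums stay bounded and the terms decay geometrically.
print(partial_sum, terms[-1])
```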

By the Borel-Cantelli Lemma:
 * $\ds \map \Pr { \limsup_{k \mathop \to \infty} \set { \size { {\overline X}_{\ell_k} - \mu} \ge \paren {1 + \epsilon}^{-k/4} } } = 0$

That is:
 * $\ds \map \Pr { \liminf_{k \mathop \to \infty} \set { \size { {\overline X}_{\ell_k} - \mu} < \paren {1 + \epsilon}^{-k/4} } } = 1$

In particular:
 * $\ds \map \Pr { \lim_{k \mathop \to \infty} \size { {\overline X}_{\ell_k} - \mu} = 0 } = 1$

We claim:
 * $\ds \map \Pr { \limsup_{n \mathop \to \infty} \size { {\overline X}_n - \mu} \le 2 \epsilon \mu } = 1$

As:
 * $\ds \lim_{k \mathop \to \infty} \frac {\ell_{k+1} }{\ell_k} = 1 + \epsilon$

there exists a $k_0 \ge 1$ such that:
 * $\forall k \ge k_0 : \ell_{k+1} \le \paren {1 + 2 \epsilon} \ell_k$

For all $n > \ell_{k_0}$, let:
 * $k := \max \set { i \ge 1 : n > \ell_i }$

so that $\ell_k < n \le \ell_{k+1}$ and, since $k \ge k_0$, also $\ell_{k+1} \le \paren {1 + 2 \epsilon} \ell_k$.

Since the $X_i$ are non-negative:
 * $\ds \paren {1 - 2 \epsilon} {\overline X}_{\ell_k} \le \frac 1 {1 + 2 \epsilon} {\overline X}_{\ell_k} \le {\overline X}_n \le \paren {1 + 2 \epsilon} {\overline X}_{\ell_{k+1} }$

Letting $n \to \infty$, so that $k \to \infty$, and using:
 * $\ds \map \Pr { \lim_{k \mathop \to \infty} \size { {\overline X}_{\ell_k} - \mu} = 0 } = 1$

it follows that, almost surely:
 * $\ds \paren {1 - 2 \epsilon} \mu \le \liminf_{n \mathop \to \infty} {\overline X}_n \le \limsup_{n \mathop \to \infty} {\overline X}_n \le \paren {1 + 2 \epsilon} \mu$

which establishes the claim.

As $\epsilon \in \R_{>0}$ was arbitrary:
 * $\ds {\overline X}_n \xrightarrow {\text {a.s.} } \mu$

$\blacksquare$

Also see

 * Bernoulli's Theorem, also known as the Law of Large Numbers
 * Khinchin's Law, also known as the Weak Law of Large Numbers