Kolmogorov's Law
Theorem
Let $P$ be a population.
Let $P$ have mean $\mu$ and finite variance.
Let $\sequence {X_n}_{n \mathop \ge 1}$ be a sequence of random variables forming a random sample from $P$.
Let:
- $\ds {\overline X}_n = \frac 1 n \sum_{i \mathop = 1}^n X_i$
Then:
- $\ds {\overline X}_n \xrightarrow {\text {a.s.} } \mu$
where $\xrightarrow {\text {a.s.} }$ denotes almost sure convergence.
Proof
We may assume that $X_n \ge 0$ for all $n \ge 1$.
Indeed, otherwise consider:
- ${X_n}^+ := \max \set {X_n, 0}$
and:
- ${X_n}^- := \max \set {- X_n, 0}$
instead of $X_n$.
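In more detail: ${X_n}^+$ and ${X_n}^-$ are nonnegative, each forms a random sample with finite variance, and:
- $\ds X_n = {X_n}^+ - {X_n}^-$
so that if the result holds for nonnegative populations then:
- $\ds {\overline X}_n = \frac 1 n \sum_{i \mathop = 1}^n {X_i}^+ - \frac 1 n \sum_{i \mathop = 1}^n {X_i}^- \xrightarrow {\text {a.s.} } \expect { {X_1}^+} - \expect { {X_1}^-} = \mu$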
Let $\epsilon \in \R_{>0}$.
For $k \ge 1$ let:
- $\ell_k := \floor {\paren {1 + \epsilon}^k }$
be the floor of $\paren {1 + \epsilon}^k$.
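For example, with $\epsilon = 1$ this is the dyadic subsequence:
- $\ds \ell_k = \floor {2^k} = 2^k$
In general $\ell_k$ grows geometrically in $k$, which is what makes the probability bounds below summable.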
Then:
- $\ds \begin{aligned} \map \Pr {\size { {\overline X}_{\ell_k} - \mu} \ge \paren {1 + \epsilon}^{-k/4} } & \le \paren {1 + \epsilon}^{k/2} \expect {\size { {\overline X}_{\ell_k} - \mu}^2} && \text {by Chebyshev's Inequality} \\ & = \paren {1 + \epsilon}^{k/2} \var { {\overline X}_{\ell_k} } && \text {as } \expect { {\overline X}_{\ell_k} } = \mu \\ & = \frac {\paren {1 + \epsilon}^{k/2} } { {\ell_k}^2 } \sum_{i \mathop = 1}^{\ell_k} \var {X_i} && \text {as the } X_i \text { are independent} \\ & = \frac {\paren {1 + \epsilon}^{k/2} } {\ell_k} \var {X_1} && \text {as the } X_i \text { are identically distributed} \\ & \le 2 \paren {1 + \epsilon}^{-k/2} \var {X_1} && \text {as } \ell_k \ge \tfrac 1 2 \paren {1 + \epsilon}^k \end{aligned}$
Thus:
- $\ds \sum_{k \mathop = 1}^\infty \map \Pr {\size { {\overline X}_{\ell_k} - \mu} \ge \paren {1 + \epsilon}^{-k/4} } < \infty$
Hence, by the Borel-Cantelli Lemma:
- $\ds \map \Pr { \limsup_{k \mathop \to \infty} \set {\size { {\overline X}_{\ell_k} - \mu} \ge \paren {1 + \epsilon}^{-k/4} } } = 0$
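For completeness, the convergence of the series in the Borel-Cantelli step can be checked directly from the bound obtained above, by comparison with a geometric series:
- $\ds \sum_{k \mathop = 1}^\infty \map \Pr {\size { {\overline X}_{\ell_k} - \mu} \ge \paren {1 + \epsilon}^{-k/4} } \le 2 \var {X_1} \sum_{k \mathop = 1}^\infty \paren {1 + \epsilon}^{-k/2} = \frac {2 \var {X_1} } {\paren {1 + \epsilon}^{1/2} - 1} < \infty$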
That is, taking complements:
- $\ds \map \Pr { \liminf_{k \mathop \to \infty} \set {\size { {\overline X}_{\ell_k} - \mu} < \paren {1 + \epsilon}^{-k/4} } } = 1$
In particular, since $\paren {1 + \epsilon}^{-k/4} \to 0$ as $k \to \infty$:
- $\ds \map \Pr { \lim_{k \mathop \to \infty} \size { {\overline X}_{\ell_k} - \mu} = 0} = 1$
As:
- $\ds \lim_{k \mathop \to \infty} \frac {\ell_{k + 1} } {\ell_k} = 1 + \epsilon$
there exists a $k_0 \ge 1$ such that:
- $\forall k \ge k_0 : \ell_{k + 1} \le \paren {1 + 2 \epsilon} \ell_k$
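The limit of the ratios follows from $x - 1 < \floor x \le x$, which gives bounds that both tend to $1 + \epsilon$:
- $\ds \frac {\paren {1 + \epsilon}^{k + 1} - 1} {\paren {1 + \epsilon}^k} < \frac {\ell_{k + 1} } {\ell_k} < \frac {\paren {1 + \epsilon}^{k + 1} } {\paren {1 + \epsilon}^k - 1}$
so the stated bound $\ell_{k + 1} \le \paren {1 + 2 \epsilon} \ell_k$ holds once the ratio is within $\epsilon$ of its limit.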
For all $n > \ell_{k_0}$ we have:
- $\ds \paren {1 - 2 \epsilon} {\overline X}_{\ell_k} \le \frac 1 {1 + 2 \epsilon} {\overline X}_{\ell_k} \le {\overline X}_n \le \paren {1 + 2 \epsilon} {\overline X}_{\ell_{k + 1} }$
and therefore:
- $\ds \paren { {\overline X}_{\ell_k} - \mu} - 2 \epsilon {\overline X}_{\ell_k} \le {\overline X}_n - \mu \le \paren { {\overline X}_{\ell_{k + 1} } - \mu} + 2 \epsilon {\overline X}_{\ell_{k + 1} }$
where:
- $k := \max \set {i \ge 1 : n > \ell_i}$
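In more detail, the first chain of inequalities uses the nonnegativity of the $X_i$, the fact that $\ell_k < n \le \ell_{k + 1}$, and that $k \ge k_0$:
- $\ds {\overline X}_n \ge \frac 1 n \sum_{i \mathop = 1}^{\ell_k} X_i = \frac {\ell_k} n {\overline X}_{\ell_k} \ge \frac {\ell_k} {\ell_{k + 1} } {\overline X}_{\ell_k} \ge \frac 1 {1 + 2 \epsilon} {\overline X}_{\ell_k}$
and:
- $\ds {\overline X}_n \le \frac 1 n \sum_{i \mathop = 1}^{\ell_{k + 1} } X_i = \frac {\ell_{k + 1} } n {\overline X}_{\ell_{k + 1} } \le \frac {\ell_{k + 1} } {\ell_k} {\overline X}_{\ell_{k + 1} } \le \paren {1 + 2 \epsilon} {\overline X}_{\ell_{k + 1} }$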
Since $k \to \infty$ as $n \to \infty$, it follows that:
- $\ds \set {\lim_{k \mathop \to \infty} \size { {\overline X}_{\ell_k} - \mu} = 0} \subseteq \set {\limsup_{n \mathop \to \infty} \size { {\overline X}_n - \mu} \le 2 \epsilon \mu}$
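Indeed, on the event on the left hand side both ${\overline X}_{\ell_k} \to \mu$ and ${\overline X}_{\ell_{k + 1} } \to \mu$, so the two-sided bound above yields:
- $\ds \size { {\overline X}_n - \mu} \le \max \set {\size { {\overline X}_{\ell_k} - \mu} + 2 \epsilon {\overline X}_{\ell_k}, \size { {\overline X}_{\ell_{k + 1} } - \mu} + 2 \epsilon {\overline X}_{\ell_{k + 1} } } \to 2 \epsilon \mu$
as $n \to \infty$, which gives the $\limsup$ bound.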
Finally, as $\epsilon \in \R_{>0}$ was arbitrary, we have:
- $\ds \begin{aligned} \set {\lim_{k \mathop \to \infty} \size { {\overline X}_{\ell_k} - \mu} = 0} & \subseteq \bigcap_{\epsilon \mathop \in \R_{>0} } \set {\limsup_{n \mathop \to \infty} \size { {\overline X}_n - \mu} \le 2 \epsilon \mu} \\ & = \set {\limsup_{n \mathop \to \infty} \size { {\overline X}_n - \mu} = 0} \\ & = \set {\lim_{n \mathop \to \infty} {\overline X}_n = \mu} \end{aligned}$
Hence, as the event on the left hand side has probability $1$ by the above:
- $\ds \map \Pr {\lim_{n \mathop \to \infty} {\overline X}_n = \mu} = 1$
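The following is an informal numerical sketch of the statement, not part of the proof: it simulates a running sample mean for an i.i.d. sample with finite variance and reports its deviation from $\mu$ along a geometric subsequence like the one used above. The exponential population, the seed and the value of $\epsilon$ are arbitrary choices made for this illustration.

```python
# Informal numerical sketch only (not part of the proof above).
# Simulate i.i.d. draws with finite variance and watch the running sample mean
# approach the population mean mu. The Exponential(1) population, the seed and
# eps are arbitrary choices made for this illustration.
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0                                     # mean of the Exponential(1) population
samples = rng.exponential(scale=1.0, size=100_000)
running_means = np.cumsum(samples) / np.arange(1, samples.size + 1)

# Report deviations along the geometric subsequence l_k = floor((1 + eps)^k)
# that the proof works with.
eps = 0.5
k = 1
while True:
    ell = int((1.0 + eps) ** k)              # l_k = floor((1 + eps)^k)
    if ell > samples.size:
        break
    print(f"k = {k:2d}   l_k = {ell:6d}   |mean - mu| = {abs(running_means[ell - 1] - mu):.5f}")
    k += 1
```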
Also known as
Kolmogorov's Law is also known as the Strong Law of Large Numbers.
Also see
- Bernoulli's Theorem, also known as the Law of Large Numbers
- Khinchin's Law, also known as the Weak Law of Large Numbers
Source of Name
This entry was named for Andrey Nikolaevich Kolmogorov.
Sources
- 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): laws of large numbers
- 2021: Richard Earl and James Nicholson: The Concise Oxford Dictionary of Mathematics (6th ed.): strong law of large numbers