Lebesgue Differentiation Theorem

Theorem
For a Lebesgue integrable real- or complex-valued function $f$ on $\R^n$, the indefinite integral is the set function which maps a measurable set $A$ to the Lebesgue integral of $f \cdot \mathbf 1_A$, where $\mathbf 1_A$ denotes the characteristic function of the set $A$.

It is usually written:


 * $\ds A \mapsto \int_A f \rd \lambda$

with $\lambda$ the $n$-dimensional Lebesgue measure. The derivative of this integral at $x$ is defined to be:


 * $\ds \lim_{B \mathop \rightarrow x} \dfrac 1 {\size B} \int_B f \rd \lambda$

where $\size B$ denotes the volume (that is, the Lebesgue measure) of a ball $B$ centered at $x$, and $B \to x$ means that the diameter of $B$ tends to $0$.
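The convergence of these shrinking ball averages can be illustrated numerically. The following is a one-dimensional sketch with a hypothetical smooth test function $\map f t = t^2$ (not part of the theorem's statement): the average of $f$ over the ball $B = \closedint {x - r} {x + r}$ approaches $\map f x$ as $r \to 0$.

```python
# A one-dimensional sketch: the average of f over the ball B = [x - r, x + r],
# approximated by a midpoint Riemann sum, tends to f(x) as r -> 0.
# The test function f(t) = t^2 is a hypothetical example.
def ball_average(f, x, r, n=10_000):
    """Approximate (1 / |B|) * integral of f over B = [x - r, x + r]."""
    h = 2 * r / n
    return sum(f(x - r + (k + 0.5) * h) for k in range(n)) * h / (2 * r)

f = lambda t: t * t
x = 1.0
for r in (1.0, 0.1, 0.01):
    print(r, ball_average(f, x, r))  # approaches f(1.0) = 1.0 as r shrinks
```

For $\map f t = t^2$ the exact average over $\closedint {1 - r} {1 + r}$ is $1 + \dfrac {r^2} 3$, so the printed values visibly tend to $\map f 1 = 1$.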

The Lebesgue differentiation theorem states that this derivative exists and is equal to $\map f x$ at almost every point $x \in \R^n$.

In fact a slightly stronger statement is true. Note that:


 * $\ds \size {\dfrac 1 {\size B} \int_B \map f y \rd \map \lambda y - \map f x} = \size {\dfrac 1 {\size B} \int_B \paren {\map f y - \map f x} \rd \map \lambda y} \le \dfrac 1 {\size B} \int_B \size {\map f y - \map f x} \rd \map \lambda y$

The stronger assertion is that the right hand side tends to $0$ for almost every point $x$. The points $x$ for which this is true are called the Lebesgue points of $f$.
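A point can fail this stronger condition even where the function is defined. As a hypothetical one-dimensional illustration, take $f$ to be the indicator function of $\hointr 0 \infty$: at $x = 0$ the averaged deviation $\dfrac 1 {\size B} \int_B \size {\map f y - \map f 0} \rd y$ stays near $\dfrac 1 2$ instead of tending to $0$, so $0$ is not a Lebesgue point of $f$.

```python
# A one-dimensional sketch: x = 0 is not a Lebesgue point of the step
# function f = indicator of [0, oo).  The average of |f(y) - f(0)| over
# B = [-r, r] stays near 1/2 for every r, instead of tending to 0.
def step(t):
    return 1.0 if t >= 0 else 0.0

def average_deviation(f, x, r, n=10_000):
    """Midpoint-rule approximation of (1 / |B|) * integral of |f(y) - f(x)| over B."""
    h = 2 * r / n
    return sum(abs(f(x - r + (k + 0.5) * h) - f(x)) for k in range(n)) * h / (2 * r)

for r in (1.0, 0.1, 0.01):
    print(r, average_deviation(step, 0.0, r))  # stays near 0.5
```

Exactly half of each ball around $0$ lies in the region where $f$ differs from $\map f 0 = 1$, which is why the averaged deviation is pinned at $\dfrac 1 2$.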

Proof
Since the statement is local in character, $f$ can be assumed to be zero outside some ball of finite radius and hence integrable.

It is then sufficient to prove that the set


 * $\ds E_\alpha = \set {x \in \R^n : \limsup_{\size B \mathop \rightarrow 0, \, x \in B} \dfrac 1 {\size B} \size {\int_B \paren {\map f y - \map f x} \rd y} > 2 \alpha}$

has measure 0 for all $\alpha > 0$.

Let $\epsilon > 0$ be given. Using the density of continuous functions of compact support in $\map {L^1} {\R^n}$, one can find such a function $g$ satisfying:


 * $\ds \norm {f - g}_{L^1} = \int_{\R^n} \size {\map f x - \map g x} \rd x < \epsilon$

It is then helpful to rewrite the main difference as:


 * $\ds \dfrac 1 {\size B} \int_B \map f y \rd y - \map f x = \paren {\dfrac 1 {\size B} \int_B \paren {\map f y - \map g y} \rd y} + \paren {\dfrac 1 {\size B} \int_B \map g y \rd y - \map g x} + \paren {\map g x - \map f x}$

The first term can be bounded by the value at $x$ of the maximal function for $f - g$, denoted here by $\map {\paren {f - g}^*} x$:


 * $\ds \dfrac 1 {\size B} \int_B \size {\map f y - \map g y} \rd y \le \sup_{r \mathop > 0} \dfrac 1 {\size {\map {B_r} x} } \int_{\map {B_r} x} \size {\map f y - \map g y} \rd y = \map {\paren {f - g}^*} x$

where the supremum on the right hand side is the Hardy-Littlewood maximal function of $f - g$ evaluated at $x$.
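The maximal function itself can be sketched numerically. The following one-dimensional illustration (a hypothetical discretization: the supremum over all radii is replaced by a maximum over a finite set of radii, and the function $\map h t = e^{-\size t}$ is an arbitrary example) approximates the centered Hardy-Littlewood maximal function.

```python
import math

# A one-dimensional sketch of the centered Hardy-Littlewood maximal function:
# the supremum of ball averages of |h|, approximated over a finite set of radii.
# The function h(t) = exp(-|t|) is a hypothetical example.
def ball_average(f, x, r, n=2_000):
    h = 2 * r / n
    return sum(abs(f(x - r + (k + 0.5) * h)) for k in range(n)) * h / (2 * r)

def maximal(f, x, radii=(0.01, 0.1, 0.5, 1.0, 2.0, 5.0)):
    return max(ball_average(f, x, r) for r in radii)

h = lambda t: math.exp(-abs(t))
print(maximal(h, 0.0))   # close to h(0) = 1: small balls around the peak dominate
print(maximal(h, 3.0))   # larger than h(3): nearby mass raises the average
```

Note the two regimes visible in the output: at the peak the best ball is a small one, while far from the peak a large ball capturing the bulk of the mass gives a bigger average than the pointwise value.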

The second term disappears in the limit since $g$ is a continuous function, and the third term is bounded by $\size {\map f x - \map g x}$.

For the limit superior of the absolute value of the original difference to be greater than $2 \alpha$, at least one of the first and third terms must be greater than $\alpha$ in absolute value. That is:


 * $\ds E_\alpha \subseteq \set {x : \map {\paren {f - g}^*} x > \alpha} \cup \set {x : \size {\map f x - \map g x} > \alpha}$

However, the estimate on the Hardy-Littlewood maximal function (see Hardy-Littlewood Maximal Inequality) says that:


 * $\size {\set {x : \map {\paren {f - g}^*} x > \alpha} } \le \dfrac {A_n} \alpha \norm {f - g}_{L^1} < \dfrac {A_n} \alpha \epsilon$

for some constant $A_n$ depending only upon the dimension $n$. Markov's Inequality (also called Chebyshev's Inequality) says that:


 * $\size {\set {x : \size {\map f x - \map g x} > \alpha} } \le \dfrac 1 \alpha \norm {f - g}_{L^1} < \dfrac 1 \alpha \epsilon$

whence


 * $\size {E_\alpha} \le \dfrac {A_n + 1} \alpha \epsilon$

Since $\epsilon > 0$ was arbitrary, it follows that $\size {E_\alpha} = 0$ for every $\alpha > 0$. Taking the union of the sets $E_{1/k}$ over $k \in \N$, the set of points where the derivative fails to exist or to equal $\map f x$ has measure $0$, and the theorem follows.
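Markov's Inequality used above can also be checked numerically. The following is a one-dimensional sketch with a hypothetical function $\map h t = e^{-\size t}$, approximating Lebesgue measure and the $L^1$ norm on a grid over $\closedint {-10} {10}$ (the tail beyond that interval is negligible).

```python
import math

# A one-dimensional sketch of Markov's Inequality:
#   |{x : |h(x)| > alpha}| <= (1 / alpha) * ||h||_{L^1}.
# Both sides are approximated on a grid over [-10, 10];
# h(t) = exp(-|t|) is a hypothetical example.
N = 200_000
dx = 20.0 / N
xs = [-10.0 + (k + 0.5) * dx for k in range(N)]

h = lambda t: math.exp(-abs(t))
alpha = 0.5

measure = sum(dx for x in xs if abs(h(x)) > alpha)   # |{|h| > alpha}| = 2 ln 2
l1_norm = sum(abs(h(x)) * dx for x in xs)            # ||h||_1 is about 2

print(measure, l1_norm / alpha)   # the first value is below the second
```

Here $\set {\size h > \tfrac 1 2} = \openint {-\ln 2} {\ln 2}$, of measure $2 \ln 2 \approx 1.386$, while $\dfrac 1 \alpha \norm h_{L^1} \approx 4$, consistent with the inequality.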