Divergence Operator Distributes over Addition

Theorem
Let $\map {\mathbf V} {x_1, x_2, \ldots, x_n}$ be a vector space of $n$ dimensions.

Let $\tuple {\mathbf e_1, \mathbf e_2, \ldots, \mathbf e_n}$ be the standard ordered basis of $\mathbf V$.

Let $\mathbf f, \mathbf g: \mathbf V \to \mathbf V$ be vector-valued functions on $\mathbf V$:


 * $\mathbf f := \tuple {\map {f_1} {\mathbf x}, \map {f_2} {\mathbf x}, \ldots, \map {f_n} {\mathbf x} }$


 * $\mathbf g := \tuple {\map {g_1} {\mathbf x}, \map {g_2} {\mathbf x}, \ldots, \map {g_n} {\mathbf x} }$

Let $\nabla \cdot \mathbf f$ and $\nabla \cdot \mathbf g$ denote the divergence of $\mathbf f$ and $\mathbf g$ respectively.
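
For reference, the divergence is taken here in the coordinates $x_1, x_2, \ldots, x_n$ with respect to the standard ordered basis, assuming each component $f_k$ has the required partial derivatives:

 * $\nabla \cdot \mathbf f = \displaystyle \sum_{k \mathop = 1}^n \dfrac {\partial f_k} {\partial x_k}$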

Then:
 * $\nabla \cdot \paren {\mathbf f + \mathbf g} = \nabla \cdot \mathbf f + \nabla \cdot \mathbf g$
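
The identity follows from the linearity of partial differentiation. A minimal sketch of the computation, assuming each $f_k$ and $g_k$ is differentiable:

 * $\nabla \cdot \paren {\mathbf f + \mathbf g} = \displaystyle \sum_{k \mathop = 1}^n \dfrac {\partial \paren {f_k + g_k} } {\partial x_k} = \sum_{k \mathop = 1}^n \dfrac {\partial f_k} {\partial x_k} + \sum_{k \mathop = 1}^n \dfrac {\partial g_k} {\partial x_k} = \nabla \cdot \mathbf f + \nabla \cdot \mathbf g$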