Necessary Condition for Integral Functional to have Extremum for given function/Dependent on n Variables

From ProofWiki

Theorem

Let $\mathbf x$ be an $n$-dimensional vector.

Let $\map u {\mathbf x}$ be a real function.

Let $R$ be a fixed region.

Let $J$ be a functional such that

$\displaystyle J \sqbrk u = \idotsint_R \map F {\mathbf x, u, u_{\mathbf x} } \rd x_1 \cdots \rd x_n$


Then a necessary condition for $J \sqbrk u$ to have an extremum (strong or weak) for a given mapping $\map u {\mathbf x}$ is that $\map u {\mathbf x}$ satisfies Euler's equation:

$F_u - \dfrac {\partial} {\partial \mathbf x} F_{u_{\mathbf x} } = 0$

where $\dfrac {\partial} {\partial \mathbf x} F_{u_{\mathbf x} }$ is shorthand for $\displaystyle \sum_{i \mathop = 1}^n \dfrac {\partial} {\partial x_i} F_{u_{x_i} }$.
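As an illustrative special case (not part of the statement): take $n = 2$ and the Dirichlet integrand $\map F {\mathbf x, u, u_{\mathbf x} } = \dfrac 1 2 \paren {u_x^2 + u_y^2}$. Then $F_u = 0$, $F_{u_x} = u_x$ and $F_{u_y} = u_y$, and Euler's equation reduces to Laplace's equation:

$\dfrac {\partial^2 u} {\partial x^2} + \dfrac {\partial^2 u} {\partial y^2} = 0$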


Proof

By definition of increment of the functional:

$\displaystyle J \sqbrk {u + h} - J \sqbrk u = \idotsint_R \paren {\map F {\mathbf x, u + h, u_{\mathbf x} + h_{\mathbf x} } - \map F {\mathbf x, u, u_{\mathbf x} } } \rd x_1 \cdots \rd x_n$


Use multivariate Taylor's Theorem on $F$ around the point $\tuple {\mathbf x, u, u_{\mathbf x} }$:

$\map F {\mathbf x, u + h, u_{\mathbf x} + h_{\mathbf x} } = \map F {\mathbf x, u, u_{\mathbf x} } + \dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u} h + \dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u_{\mathbf x} } h_{\mathbf x} + \map {\mathcal O} {h^2, h h_{\mathbf x}, h_{\mathbf x}^2}$

where $\mathcal O$ denotes big-$\mathcal O$ notation.

Then:

$\displaystyle \Delta J \sqbrk {u, h} = \idotsint_R \paren {\dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u} h + \dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u_{\mathbf x} } h_{\mathbf x} + \map {\mathcal O} {h^2, h h_{\mathbf x}, h_{\mathbf x}^2} } \rd x_1 \cdots \rd x_n$

By definition of variation of the functional:

\(\displaystyle \delta J\) \(=\) \(\displaystyle \idotsint_R \paren {\dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u} h + \dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u_{\mathbf x} } h_{\mathbf x} } \rd x_1 \cdots \rd x_n\) $\quad$ $\quad$
\(\displaystyle \) \(=\) \(\displaystyle \idotsint_R \paren {\dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u} - \dfrac {\partial \map {F_{u_{\mathbf x} } } {\mathbf x, u, u_{\mathbf x} } } {\partial \mathbf x} } \map h {\mathbf x} \rd x_1 \cdots \rd x_n + \idotsint_R \dfrac {\partial} {\partial \mathbf x} \sqbrk {\dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u_{\mathbf x} } \map h {\mathbf x} } \rd x_1 \cdots \rd x_n\) $\quad$ as $F_{u_{\mathbf x} } h_{\mathbf x} = \dfrac {\partial} {\partial \mathbf x} \sqbrk {F_{u_{\mathbf x} } h} - \dfrac {\partial F_{u_{\mathbf x} } } {\partial \mathbf x} h$ $\quad$
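Written out in components, the identity invoked here is, for each $i \in \set {1, \ldots, n}$:

$F_{u_{x_i} } h_{x_i} = \dfrac {\partial} {\partial x_i} \sqbrk {F_{u_{x_i} } h} - \dfrac {\partial F_{u_{x_i} } } {\partial x_i} h$

Summing over $i$ from $1$ to $n$ yields the vector shorthand.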

By Green's theorem:

$\displaystyle \idotsint_R \dfrac {\partial} {\partial \mathbf x} \sqbrk {\dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u_{\mathbf x} } \map h {\mathbf x} } \rd x_1 \cdots \rd x_n = \idotsint_\Gamma \map h {\mathbf x} \map {F_{u_{\mathbf x} } } {\mathbf x, u, u_{\mathbf x} } \cdot \boldsymbol \nu \rd \sigma$

where:

$\Gamma$ denotes the boundary of $R$
$\boldsymbol \nu$ is the outward unit normal to $\Gamma$.
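In components, with $\boldsymbol \nu = \tuple {\nu_1, \ldots, \nu_n}$, this instance of the Divergence Theorem (the $n$-dimensional form of Green's Theorem) reads:

$\displaystyle \idotsint_R \sum_{i \mathop = 1}^n \dfrac {\partial} {\partial x_i} \sqbrk {F_{u_{x_i} } \map h {\mathbf x} } \rd x_1 \cdots \rd x_n = \idotsint_\Gamma \map h {\mathbf x} \sum_{i \mathop = 1}^n F_{u_{x_i} } \nu_i \rd \sigma$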

Since the region $R$ is fixed, so are its boundary points; hence every admissible mapping takes the same given values there.

Therefore the difference function $h$ has to vanish on the boundary.

In other words:

$\forall \mathbf x \in \Gamma: \map h {\mathbf x} = 0$

This leaves only the first integral:

$\displaystyle \delta J = \idotsint_R \paren {\dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u} - \dfrac {\partial \map {F_{u_{\mathbf x} } } {\mathbf x, u, u_{\mathbf x} } } {\partial \mathbf x} } \map h {\mathbf x} \rd x_1 \cdots \rd x_n$

The first variation has to vanish for arbitrary $h$.

By the Fundamental Lemma of Calculus of Variations, this holds if and only if the term in parentheses vanishes:

$\dfrac {\partial \map F {\mathbf x, u, u_{\mathbf x} } } {\partial u} - \dfrac {\partial \map {F_{u_{\mathbf x} } } {\mathbf x, u, u_{\mathbf x} } } {\partial \mathbf x} = 0$

$\blacksquare$
