Necessary Condition for Integral Functional to have Extremum for given function/Dependent on n Variables

Theorem
Let $ \mathbf x = \left ( { x_1, x_2, \ldots, x_n } \right ) $ be an $n$-dimensional real vector.

Let $ u \left ( { \mathbf x } \right ) $ be a real-valued function of $ \mathbf x $.

Let $ R $ be a fixed region of $n$-dimensional space.

Let $ J $ be a functional such that


 * $ \displaystyle J \left [ { u } \right ] = \idotsint_R F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) \mathrm d x_1 \dots \mathrm d x_n $

where $ u_{ \mathbf x } = \left ( { u_{x_1}, \dots, u_{x_n} } \right ) $ denotes the vector of partial derivatives of $ u $.

Then a necessary condition for $ J \left [ { u } \right ] $ to have an extremum (strong or weak) for a given mapping $ u \left ( { \mathbf x } \right )$ is that $ u \left ( { \mathbf x } \right )$ satisfies Euler's equation:


 * $ \displaystyle F_u - \frac{ \partial }{ \partial \mathbf x } F_{ u_{ \mathbf x } } = 0 $

where $ \displaystyle \frac{ \partial }{ \partial \mathbf x } F_{ u_{ \mathbf x } } $ is shorthand for $ \displaystyle \sum_{i \mathop = 1}^n \frac{ \partial }{ \partial x_i } F_{ u_{ x_i } } $.
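To illustrate the statement, Euler's equation can be computed symbolically for a concrete integrand. The following sketch (assuming SymPy; the Dirichlet integrand $ F = \frac 1 2 \left ( { u_{x_1}^2 + u_{x_2}^2 } \right ) $ with $ n = 2 $ is a hypothetical example, not part of the theorem) recovers Laplace's equation:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)

# Placeholder symbols standing for u, u_x, u_y inside F
U, P, Q = sp.symbols('U P Q')
F = (P**2 + Q**2) / 2        # Dirichlet integrand: F = |grad u|^2 / 2

# Substitute the actual function and its derivatives, then form
# Euler's equation: F_u - d/dx F_{u_x} - d/dy F_{u_y}
subs = {U: u, P: sp.Derivative(u, x), Q: sp.Derivative(u, y)}
euler = (sp.diff(F, U).subs(subs)
         - sp.diff(sp.diff(F, P).subs(subs), x)
         - sp.diff(sp.diff(F, Q).subs(subs), y))

print(euler)   # the negative Laplacian of u
```

Setting this expression to zero gives $ u_{x x} + u_{y y} = 0 $, that is, Laplace's equation.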

Proof
By definition of increment of the functional:


 * $ \displaystyle \Delta J \left [ { u, h } \right ] = J \left [ { u + h } \right ] - J \left [ { u } \right ] = \idotsint_R \left ( { F \left ( { \mathbf x, u + h, u_{ \mathbf x } + h_{ \mathbf x } } \right ) - F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) } \right ) \mathrm d x_1 \dots \mathrm d x_n $

Use multivariate Taylor's Theorem on $ F $ around the point $ \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) $:


 * $ \displaystyle F \left ( { \mathbf x, u + h, u_{ \mathbf x } + h_{ \mathbf x } } \right ) = F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) + \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u } h + \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u_{ \mathbf x } } h_{ \mathbf x } + \mathcal O \left ( { h^2, h h_{ \mathbf x }, h_{ \mathbf x }^2 } \right ) $

where $ \mathcal O $ stands for Big-O.

Then:


 * $ \displaystyle \Delta J \left [ { u, h } \right ] = \idotsint_R \left ( { \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u } h + \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u_{ \mathbf x } } h_{ \mathbf x } + \mathcal O \left ( { h^2, h h_{ \mathbf x }, h_{ \mathbf x }^2 } \right ) } \right ) \mathrm d x_1 \dots \mathrm d x_n $
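The Taylor-expansion step can be checked symbolically. The following sketch (assuming SymPy; the one-dimensional sample integrand $ F = x u^2 + u_x^2 $ is a hypothetical example) verifies that the part of the increment linear in $ h $ is exactly $ F_u h + F_{u_{\mathbf x}} h_{\mathbf x} $:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')
u = sp.Function('u')(x)
h = sp.Function('h')(x)

# Hypothetical sample integrand F(x, u, u_x) = x*u^2 + u_x^2
U, P = sp.symbols('U P')
F = x * U**2 + P**2
at = {U: u, P: sp.Derivative(u, x)}

# Integrand after the perturbation u -> u + eps*h
Fpert = F.subs({U: u + eps * h,
                P: sp.Derivative(u, x) + eps * sp.Derivative(h, x)})

# Linear-in-h part of the increment: coefficient of eps at eps = 0
linear = sp.diff(Fpert, eps).subs(eps, 0)

# The Taylor expansion claims this equals F_u * h + F_{u_x} * h_x
expected = (sp.diff(F, U).subs(at) * h
            + sp.diff(F, P).subs(at) * sp.Derivative(h, x))
assert sp.expand(linear - expected) == 0
```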

By definition of variation of the functional, $ \delta J $ is the principal part of $ \Delta J $ linear in $ h $:


 * $ \displaystyle \delta J \left [ { u, h } \right ] = \idotsint_R \left ( { \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u } h + \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u_{ \mathbf x } } h_{ \mathbf x } } \right ) \mathrm d x_1 \dots \mathrm d x_n $

By the Product Rule for Derivatives:


 * $ \displaystyle \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u_{ \mathbf x } } h_{ \mathbf x } = \frac{ \partial }{ \partial \mathbf x } \left [ { \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u_{ \mathbf x } } h \left ( { \mathbf x } \right ) } \right ] - \frac{ \partial F_{ u_{ \mathbf x } } \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial \mathbf x } h \left ( { \mathbf x } \right ) $

By Green's Theorem:


 * $ \displaystyle \idotsint_R \frac{ \partial }{ \partial \mathbf x } \left [ { \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u_{ \mathbf x } } h \left ( { \mathbf x } \right ) } \right ] \mathrm d x_1 \dots \mathrm d x_n = \oint_\Gamma h \left ( { \mathbf x } \right ) F_{ u_{ \mathbf x } } \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) \cdot \boldsymbol \nu \, \mathrm d \sigma $

where $ \Gamma $ denotes the boundary of $ R $, $ \boldsymbol \nu $ is the outward unit normal to $ \Gamma $, and $ \mathrm d \sigma $ is the surface element of $ \Gamma $.
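The Green's Theorem step can be sanity-checked on a concrete region. The following sketch (assuming SymPy; the field $ \left ( { V_1, V_2 } \right ) $ is a hypothetical stand-in for $ h F_{ u_{ \mathbf x } } $, and $ R $ is taken to be the unit square) confirms that the integral of the divergence equals the boundary integral:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical smooth field playing the role of h * F_{u_x} on the unit square
V1 = x**2 * sp.sin(y)
V2 = sp.cos(x) * y

# Volume integral of the divergence over R = [0, 1] x [0, 1]
lhs = sp.integrate(sp.diff(V1, x) + sp.diff(V2, y), (x, 0, 1), (y, 0, 1))

# Boundary integral: outward normals (+/-1, 0) and (0, +/-1) on the four edges
rhs = (sp.integrate(V1.subs(x, 1) - V1.subs(x, 0), (y, 0, 1))
       + sp.integrate(V2.subs(y, 1) - V2.subs(y, 0), (x, 0, 1)))

assert sp.simplify(lhs - rhs) == 0
```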

Since the region is fixed, so are its boundary points.

Hence the difference function $ h $ has to vanish on the boundary.

In other words:


 * $ \displaystyle \forall \mathbf x \in \Gamma : h \left ( { \mathbf x } \right ) = 0 $

Hence the boundary integral vanishes, and the variation reduces to:


 * $ \displaystyle \delta J = \idotsint_R \left ( { \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u } - \frac{ \partial F_{ u_{ \mathbf x } } \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial \mathbf x } } \right ) h \left ( { \mathbf x } \right ) \mathrm d x_1 \dots \mathrm d x_n $

A necessary condition for $ J \left [ { u } \right ] $ to have an extremum is that this first variation vanishes for every admissible $ h $.

By the Fundamental Lemma of Calculus of Variations, since $ h $ is arbitrary, this requires the expression in parentheses to vanish identically:


 * $ \displaystyle \frac{ \partial F \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial u } - \frac{ \partial F_{ u_{ \mathbf x } } \left ( { \mathbf x, u, u_{ \mathbf x } } \right ) }{ \partial \mathbf x } = 0 $

This is precisely Euler's equation.

$ \blacksquare $
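As a numerical sanity check of the conclusion, the following sketch (assuming NumPy; the Dirichlet integrand $ F = \frac 1 2 \left \vert { \nabla u } \right \vert^2 $ and the harmonic function $ u = x^2 - y^2 $ are hypothetical choices not in the original) confirms that a function satisfying Euler's equation extremises $ J $ against perturbations vanishing on the boundary:

```python
import numpy as np

# For F = |grad u|^2 / 2, Euler's equation is Laplace's equation, so the
# harmonic function u = x^2 - y^2 should extremise (here: minimise) J
# among functions sharing its boundary values on the unit square.
n = 201
xs = np.linspace(0.0, 1.0, n)
step = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing='ij')
u = X**2 - Y**2

def dirichlet_energy(v):
    # J[v] = integral of |grad v|^2 / 2, approximated by finite differences
    vx = np.gradient(v, step, axis=0)
    vy = np.gradient(v, step, axis=1)
    return 0.5 * np.sum(vx**2 + vy**2) * step**2

# Perturbation h vanishing on the boundary, as the proof requires
h = np.sin(np.pi * X) * np.sin(np.pi * Y)

J0 = dirichlet_energy(u)
for eps in (0.5, 0.1, -0.1, -0.5):
    assert dirichlet_energy(u + eps * h) > J0
```

Any perturbation of either sign strictly increases the energy, consistent with $ u $ being an extremising function.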