Simplest Variational Problem with Subsidiary Conditions for Curve on Surface

From ProofWiki

Theorem

Let $J \sqbrk {y, z}$ be a (real) functional of the form:

$\ds J \sqbrk {y, z} = \int_a^b \map F {x, y, z, y', z'} \rd x$

Let there exist admissible curves $y, z$ lying on the surface:

$\map g {x, y, z} = 0$

which satisfy boundary conditions:

$\map y a = A_1, \map y b = B_1$
$\map z a = A_2, \map z b = B_2$

Let $J \sqbrk {y, z}$ have an extremum for the curve $y = \map y x, z = \map z x$.


Let $g_y$ and $g_z$ not simultaneously vanish at any point of the surface $g = 0$.


Then there exists a function $\map \lambda x$ such that the curve $y = \map y x, z = \map z x$ is an extremal of the functional:

$\ds \int_a^b \paren {F + \map \lambda x g} \rd x$


In other words, $y = \map y x$ and $z = \map z x$ satisfy the differential equations:

\(\ds F_y + \lambda g_y - \frac \d {\d x} F_{y'}\) \(=\) \(\ds 0\)
\(\ds F_z + \lambda g_z - \frac \d {\d x} F_{z'}\) \(=\) \(\ds 0\)
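
As an illustration (not part of the theorem statement), take $F$ to be the arc length integrand and $g$ the unit sphere; the theorem then produces the equations for geodesics on the sphere:

```latex
% Illustrative instance: geodesics on the unit sphere.
% Here F = \sqrt{1 + y'^2 + z'^2} (arc length) and
% g(x, y, z) = x^2 + y^2 + z^2 - 1, so that
% F_y = F_z = 0, g_y = 2y, g_z = 2z.
% The two equations of the theorem become:
\[
  2 \lambda(x)\, y = \frac{\mathrm d}{\mathrm d x}
    \frac{y'}{\sqrt{1 + y'^2 + z'^2}},
  \qquad
  2 \lambda(x)\, z = \frac{\mathrm d}{\mathrm d x}
    \frac{z'}{\sqrt{1 + y'^2 + z'^2}}
\]
```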


Proof

Let $J \sqbrk {y, z}$ be a functional for which the curve $y = \map y x, z = \map z x$ is an extremal with the boundary conditions $\map y a = A_1, \map y b = B_1, \map z a = A_2, \map z b = B_2$, as well as the constraint $\map g {x, y, z} = 0$.

Choose an arbitrary point $x_1$ from the interval $\closedint a b$.

Let $\delta \map y x$ and $\delta \map z x$ be functions different from zero only in a neighbourhood of $x_1$.

Then we can exploit the definition of the variational derivative in the following way:

$\Delta J \sqbrk {y, z; \delta \map y x, \delta \map z x} = \paren {\valueat {\dfrac {\delta F} {\delta y} } {x \mathop = x_1} + \epsilon_1} \Delta \sigma_1 + \paren {\valueat {\dfrac {\delta F} {\delta z} } {x \mathop = x_1} + \epsilon_2} \Delta \sigma_2$

where:

$\ds \Delta \sigma_1 = \int_a^b \delta \map y x \rd x$
$\ds \Delta \sigma_2 = \int_a^b \delta \map z x \rd x$

and $\epsilon_1, \epsilon_2 \to 0$ as $\Delta \sigma_1, \Delta \sigma_2 \to 0$.



We now require that the varied curve $y^* = \map y x + \delta \map y x$, $z^* = \map z x + \delta \map z x$ satisfies the condition $\map g {x, y^*, z^*} = 0$.

This condition restricts the varied curves to those which still lie on the surface $g = 0$.

Since both the original and the varied curve satisfy the constraint $g = 0$, we have the following chain of equalities:

\(\ds 0\) \(=\) \(\ds \int_a^b \paren {\map g {x, y^*, z^*} - \map g {x, y, z} } \rd x\)
\(\ds \) \(=\) \(\ds \int_a^b \paren {\overline {g_y} \delta y + \overline {g_z} \delta z} \rd x\)
\(\ds \) \(=\) \(\ds \paren {\bigvalueat {g_y} {x \mathop = x_1} + \epsilon'_1} \Delta \sigma_1 + \paren {\bigvalueat {g_z} {x \mathop = x_1} + \epsilon'_2} \Delta \sigma_2\)

where:

$\epsilon_1', \epsilon_2' \to 0$ as $\Delta \sigma_1, \Delta \sigma_2 \to 0$

and the overbar indicates that the corresponding derivatives are evaluated along certain intermediate curves.



By hypothesis, either $\bigvalueat {g_y} {x \mathop = x_1}$ or $\bigvalueat {g_z} {x \mathop = x_1}$ is nonzero.

Suppose $\bigvalueat {g_z} {x \mathop = x_1} \ne 0$.

Then the previous result can be rewritten as:

\(\ds \Delta \sigma_2\) \(=\) \(\ds -\Delta \sigma_1 \frac {\bigvalueat {g_y} {x \mathop = x_1} + \epsilon'_1} {\bigvalueat {g_z} {x \mathop = x_1} + \epsilon'_2}\)
\(\ds \) \(=\) \(\ds -\Delta \sigma_1 \frac {\bigvalueat {g_y} {x \mathop = x_1} + \epsilon'_1} {\bigvalueat {g_z} {x \mathop = x_1} \paren {1 + \frac {\epsilon'_2} {\bigvalueat {g_z} {x \mathop = x_1} } } }\)
\(\ds \) \(=\) \(\ds -\Delta \sigma_1 \frac {\bigvalueat {g_y} {x \mathop = x_1} + \epsilon'_1} {\bigvalueat {g_z} {x \mathop = x_1} } \sum_{n \mathop = 0}^\infty \paren {\frac {\epsilon_2'} {\bigvalueat {g_z} {x \mathop = x_1} } }^n\) Sum of Infinite Geometric Sequence, valid for sufficiently small $\size {\epsilon_2'}$
\(\ds \) \(=\) \(\ds -\Delta \sigma_1 \frac {\bigvalueat {g_y} {x \mathop = x_1} } {\bigvalueat {g_z} {x \mathop = x_1} } - \Delta \sigma_1 \frac {\epsilon'_1} {\bigvalueat {g_z} {x \mathop = x_1} } - \Delta \sigma_1 \frac {\bigvalueat {g_y} {x \mathop = x_1} + \epsilon'_1} {\bigvalueat {g_z} {x \mathop = x_1} } \sum_{n \mathop = 1}^\infty \paren { {\frac {\epsilon_2'} {\bigvalueat {g_z} {x \mathop = x_1} } } }^n\)
\(\ds \leadsto \ \ \) \(\ds \Delta \sigma_2\) \(=\) \(\ds -\paren {\frac {\bigvalueat {g_y} {x \mathop = x_1} } {\bigvalueat {g_z} {x \mathop = x_1} } + \epsilon'} \Delta \sigma_1\)

where $\epsilon'\to 0$ as $\Delta \sigma_1 \to 0$.

Substitute this back into the equation for $\Delta J \sqbrk {y, z}$:

\(\ds \Delta J\) \(=\) \(\ds \paren {\valueat {\frac {\delta F} {\delta y} } {x \mathop = x_1} + \epsilon_1} \Delta \sigma_1 + \paren {\valueat {\frac {\delta F} {\delta z} } {x \mathop = x_1} + \epsilon_2} \Delta \sigma_2\)
\(\ds \) \(=\) \(\ds \paren {\valueat {\frac {\delta F} {\delta y} } {x \mathop = x_1} + \epsilon_1} \Delta \sigma_1 + \paren {\valueat {\frac {\delta F} {\delta z} } {x \mathop = x_1} + \epsilon_2} \paren {-\paren {\frac {\bigvalueat {g_y} {x \mathop = x_1} } {\bigvalueat {g_z} {x \mathop = x_1} } + \epsilon'} \Delta \sigma_1}\)
\(\ds \) \(=\) \(\ds \paren {\valueat {\frac {\delta F} {\delta y} } {x \mathop = x_1} - \valueat {\paren {\frac {g_y} {g_z} \frac {\delta F} {\delta z} } } {x \mathop = x_1} } \Delta \sigma_1 + \paren {\epsilon_1 - \epsilon_2 \frac {\bigvalueat {g_y} {x \mathop = x_1} } {\bigvalueat {g_z} {x \mathop = x_1} } - \epsilon' \valueat {\frac {\delta F} {\delta z} } {x \mathop = x_1} - \epsilon_2 \epsilon'} \Delta \sigma_1\)
\(\ds \) \(=\) \(\ds \paren {\valueat {\frac {\delta F} {\delta y} } {x \mathop = x_1} - \valueat {\paren {\frac {g_y} {g_z} \frac {\delta F} {\delta z} } } {x \mathop = x_1} } \Delta \sigma_1 + \epsilon \Delta \sigma_1\)

where $\epsilon \to 0$ as $\Delta \sigma_1 \to 0$.

Then the variation of the functional $J \sqbrk {y, z}$ at the point $x_1$ is:

$\delta J = \paren {\valueat {\dfrac {\delta F} {\delta y} } {x \mathop = x_1} - \valueat {\paren {\dfrac {g_y} {g_z} \frac {\delta F} {\delta z} } } {x \mathop = x_1} } \Delta \sigma_1$

A necessary condition for $\delta J$ to vanish for arbitrary $\Delta \sigma_1$ and arbitrary $x_1$ is:

$\dfrac {\delta F} {\delta y} - \dfrac {g_y} {g_z} \dfrac {\delta F} {\delta z} = F_y - \dfrac \d {\d x} F_{y'} - \dfrac {g_y} {g_z} \paren {F_z - \dfrac \d {\d x} F_{z'} } = 0$

The latter equation can be rewritten as:

$\dfrac {F_y - \dfrac \d {\d x} F_{y'} } {g_y} = \dfrac {F_z - \dfrac \d {\d x} F_{z'} } {g_z}$

Denote the common value of this ratio by $-\map \lambda x$. Then the last equation can be rewritten as the two equations presented in the theorem statement.
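
Spelled out, clearing denominators in the common ratio recovers both equations:

```latex
% From  (F_y - d/dx F_{y'})/g_y = (F_z - d/dx F_{z'})/g_z = -\lambda(x),
% clearing denominators yields the Euler equations of F + \lambda g:
\[
  F_y - \frac{\mathrm d}{\mathrm d x} F_{y'} = -\lambda(x)\, g_y
  \quad \iff \quad
  F_y + \lambda(x)\, g_y - \frac{\mathrm d}{\mathrm d x} F_{y'} = 0
\]
\[
  F_z - \frac{\mathrm d}{\mathrm d x} F_{z'} = -\lambda(x)\, g_z
  \quad \iff \quad
  F_z + \lambda(x)\, g_z - \frac{\mathrm d}{\mathrm d x} F_{z'} = 0
\]
```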

$\blacksquare$

