Simplest Variational Problem with Subsidiary Conditions for Curve on Surface


Theorem

Let $J\sqbrk{y,z}$ be a functional of the form:

$\displaystyle J \sqbrk {y, z} = \int_a^b \map F {x, y, z, y', z'} \rd x$

Let there exist admissible curves $y,z$ lying on the surface:

$\map g {x,y,z}=0$

which satisfy boundary conditions:

$\map y a= A_1,\map y b=B_1$
$\map z a=A_2,\map z b=B_2$

Let $J\sqbrk{y,z}$ have an extremum for the curve $y=\map y x,z=\map z x$.


Let $g_y$ and $g_z$ not simultaneously vanish at any point of the surface $g=0$.


Then there exists a function $\map \lambda x$ such that the curve $y=\map y x,z=\map z x$ is an extremal of the functional:

$\displaystyle \int_a^b \paren{F+\map \lambda x g}\rd x$


In other words, $y = \map y x, ~z = \map z x$ satisfy the differential equations:

\(\displaystyle F_y+\lambda g_y-\frac \d {\d x} F_{y'}\) \(=\) \(\displaystyle 0\)
\(\displaystyle F_z+\lambda g_z-\frac \d {\d x} F_{z'}\) \(=\) \(\displaystyle 0\)
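
As an illustrative sketch, not part of the theorem itself: take $\map F {x, y, z, y', z'} = \sqrt {1 + y'^2 + z'^2}$, so that $J \sqbrk {y, z}$ is the arc length of the curve, and take $\map g {x, y, z} = x^2 + y^2 + z^2 - 1$, so that admissible curves lie on the unit sphere. Then $F_y = F_z = 0$, $g_y = 2 y$, $g_z = 2 z$, and the equations above read:

$\displaystyle 2 \map \lambda x y - \frac \d {\d x} \frac {y'} {\sqrt {1 + y'^2 + z'^2} } = 0, \qquad 2 \map \lambda x z - \frac \d {\d x} \frac {z'} {\sqrt {1 + y'^2 + z'^2} } = 0$

Together with $\map g {x, y, z} = 0$, these are satisfied by arcs of great circles, that is, the geodesics of the sphere.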


Proof

Let $J \sqbrk {y, z}$ be the functional above, for which the curve $y = \map y x, ~z = \map z x$ yields an extremum, subject to the boundary conditions $\map y a = A_1, ~\map y b = B_1, ~\map z a = A_2, ~\map z b = B_2$ and the constraint $\map g {x, y, z} = 0$.

Choose an arbitrary point $x_1$ from the interval $\closedint a b$.

Let $\delta \map y x$ and $\delta \map z x$ be variations which are nonzero only in a neighbourhood of $x_1$.

Then we can apply the definition of the variational derivative in the following way:

$\displaystyle \Delta J \sqbrk {y, z; ~\delta \map y x, \delta \map z x} = \paren {\frac {\delta F} {\delta y} \bigg\rvert_{x = x_1} + \epsilon_1} \Delta \sigma_1 + \paren {\frac {\delta F} {\delta z} \bigg\rvert_{x = x_1} + \epsilon_2} \Delta \sigma_2$

where

$\displaystyle \Delta \sigma_1 = \int_a^b \delta \map y x \rd x, ~\Delta \sigma_2 = \int_a^b \delta \map z x \rd x$

and $\epsilon_1,~\epsilon_2\to 0$ as $\Delta\sigma_1,~\Delta\sigma_2\to 0$.
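
Here $\dfrac {\delta F} {\delta y}$ and $\dfrac {\delta F} {\delta z}$ are the variational derivatives of the functional, which for a functional of this form are given by:

$\displaystyle \frac {\delta F} {\delta y} = F_y - \frac \d {\d x} F_{y'}, \qquad \frac {\delta F} {\delta z} = F_z - \frac \d {\d x} F_{z'}$

This identity is invoked again at the end of the proof.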



We now require that the varied curve $y^* = \map y x + \delta \map y x$, $z^* = \map z x + \delta \map z x$ satisfies the condition $\map g {x, y^*, z^*} = 0$.

This condition restricts the varied curves to those which still lie on the surface $g = 0$.

Since both the original and the varied curves satisfy this constraint, we have the following chain of equalities:

\(\displaystyle 0\) \(=\) \(\displaystyle \int_a^b \paren{\map g{x, y^*, z^*}-\map g {x,y,z} }\rd x\)
\(\displaystyle \) \(=\) \(\displaystyle \int_a^b \paren {\overline {g_y} \delta y + \overline {g_z} \delta z} \rd x\)
\(\displaystyle \) \(=\) \(\displaystyle \paren{g_y\rvert_{x\mathop=x_1}+\epsilon'_1}\Delta\sigma_1+\paren{g_z\rvert_{x\mathop=x_1}+\epsilon'_2}\Delta\sigma_2\)

where:

$\epsilon_1',\epsilon_2'\to 0$ as $\Delta\sigma_1,\Delta\sigma_2\to 0$

and the overbar indicates that the corresponding derivatives are evaluated along certain intermediate curves.
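
In more detail, assuming $g$ is continuously differentiable, the Mean Value Theorem applied along the segment joining $\tuple {y, z}$ and $\tuple {y^*, z^*}$ gives, for each $x$, some $\theta \in \openint 0 1$ such that:

$\displaystyle \map g {x, y^*, z^*} - \map g {x, y, z} = \overline {g_y} \, \delta \map y x + \overline {g_z} \, \delta \map z x$

where $\overline {g_y} = \map {g_y} {x, y + \theta \, \delta y, z + \theta \, \delta z}$ and $\overline {g_z} = \map {g_z} {x, y + \theta \, \delta y, z + \theta \, \delta z}$.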



By hypothesis, either $g_y \rvert_{x=x_1}$ or $g_z \rvert_{x=x_1}$ is nonzero.

Suppose $g_z\rvert_{x=x_1}\ne 0$.

Then the previous result can be rewritten as

\(\displaystyle \Delta\sigma_2\) \(=\) \(\displaystyle -\Delta\sigma_1 \frac {g_y\rvert_{x\mathop=x_1}+\epsilon'_1} {g_z\rvert_{x\mathop=x_1}+\epsilon'_2}\)
\(\displaystyle \) \(=\) \(\displaystyle -\Delta\sigma_1\frac{g_y\rvert_{x=x_1}+\epsilon'_1}{g_z\rvert_{x=x_1}\paren{1+\frac{\epsilon'_2}{g_z\rvert_{x=x_1} } } }\)
\(\displaystyle \) \(=\) \(\displaystyle -\Delta\sigma_1\frac{g_y\rvert_{x=x_1}+\epsilon'_1}{g_z\rvert_{x=x_1} }\sum_{n=0}^{\infty}\paren{\frac{\epsilon_2'}{ g_z\rvert_{x=x_1} } }^n\) Sum of Infinite Geometric Progression: valid for $\size {\dfrac {\epsilon'_2} {g_z \rvert_{x = x_1} } } < 1$, which holds once $\epsilon'_2$ is sufficiently small
\(\displaystyle \) \(=\) \(\displaystyle -\Delta\sigma_1 \frac{g_y\rvert_{x=x_1} }{g_z\rvert_{x=x_1} }-\Delta\sigma_1 \frac{\epsilon'_1}{g_z\rvert_{x=x_1} }-\Delta\sigma_1 \frac{g_y\rvert_{x=x_1}+\epsilon'_1} {g_z\rvert_{x=x_1} }\sum_{n=1}^{\infty}\paren{ { \frac{\epsilon_2'}{g_z\rvert_{x=x_1} } } }^n\)
\(\displaystyle \leadsto \ \ \) \(\displaystyle \Delta\sigma_2\) \(=\) \(\displaystyle -\paren{\frac{g_y\rvert_{x=x_1} }{g_z\rvert_{x=x_1} }+\epsilon'}\Delta\sigma_1\)

where $\epsilon'\to 0$ as $\Delta\sigma_1\to 0$.
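
Here $\epsilon'$ collects all the error terms from the previous line:

$\displaystyle \epsilon' = \frac {\epsilon'_1} {g_z \rvert_{x = x_1} } + \frac {g_y \rvert_{x = x_1} + \epsilon'_1} {g_z \rvert_{x = x_1} } \sum_{n \mathop = 1}^\infty \paren {\frac {\epsilon'_2} {g_z \rvert_{x = x_1} } }^n$

each summand of which tends to $0$ as $\epsilon'_1, \epsilon'_2 \to 0$.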

Substituting this back into the equation for $\Delta J \sqbrk {y, z}$:

\(\displaystyle \Delta J\) \(=\) \(\displaystyle \paren{\frac{\delta F}{\delta y}\bigg\rvert_{x=x_1}+\epsilon_1}\Delta\sigma_1+\paren{\frac{\delta F}{\delta z }\bigg\rvert_{x=x_1}+\epsilon_2}\Delta\sigma_2\)
\(\displaystyle \) \(=\) \(\displaystyle \paren{\frac{\delta F}{\delta y}\bigg\rvert_{x=x_1}+\epsilon_1}\Delta\sigma_1+\paren{\frac{\delta F}{\delta{z} }\bigg\rvert_{x=x_1}+\epsilon_2}\sqbrk{ -\paren{\frac{g_y\rvert_{x=x_1} }{ g_z\rvert_{x=x_1} }+\epsilon'}\Delta\sigma_1}\)
\(\displaystyle \) \(=\) \(\displaystyle \paren{\frac{\delta F}{\delta y}\bigg\rvert_{x=x_1}-\paren{\frac{g_y}{g_z}\frac{\delta F}{\delta z} }\bigg\rvert_{x=x_1} }\Delta\sigma_1+\sqbrk{\epsilon_1-\epsilon_2\frac{g_y\rvert_{x=x_1} }{g_z\rvert_{x=x_1} }-\epsilon'\frac{\delta F}{\delta z}\bigg\rvert_{x=x_1}-\epsilon_2\epsilon'}\Delta\sigma_1\)
\(\displaystyle \) \(=\) \(\displaystyle \paren{\frac{\delta F}{\delta y}\bigg\rvert_{x=x_1}-\paren{\frac{g_y}{g_z}\frac{\delta F}{\delta z} }\bigg\rvert_{x=x_1} }\Delta\sigma_1+\epsilon\Delta\sigma_1\)

where $\epsilon\to 0$ as $\Delta\sigma_1\to 0$.

Then the variation of the functional $J \sqbrk {y, z}$ at the point $x_1$ is:

$\displaystyle\delta J=\paren{\frac{\delta F}{\delta y}\bigg\rvert_{x=x_1}-\paren{\frac{g_y}{g_z}\frac{\delta F}{\delta z} } \bigg\rvert_{x=x_1} }\Delta\sigma_1$

A necessary condition for $\delta J$ to vanish for arbitrary $\Delta \sigma_1$ and arbitrary $x_1$ is:

$\displaystyle\frac{\delta F}{\delta y}- \frac{g_y}{g_z}\frac{\delta F}{\delta z}=F_y-\frac \d{\d x}F_{y'}-\frac{g_y}{g_z}\paren{F_z-\frac \d {\d x}F_{z'} }=0$

The latter equation can be rewritten as

$\displaystyle\frac{F_y-\frac \d {\d x}F_{y'} }{g_y}=\frac{F_z-\frac \d {\d x}F_{z'} }{g_z}$.

Denoting the common value of this ratio by $-\map \lambda x$, it can be rewritten as the two equations presented in the theorem.
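
Explicitly, writing:

$\displaystyle \frac {F_y - \frac \d {\d x} F_{y'} } {g_y} = \frac {F_z - \frac \d {\d x} F_{z'} } {g_z} = -\map \lambda x$

and clearing the denominators yields:

$\displaystyle F_y + \map \lambda x g_y - \frac \d {\d x} F_{y'} = 0, \qquad F_z + \map \lambda x g_z - \frac \d {\d x} F_{z'} = 0$

which are the equations stated in the theorem.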


$\blacksquare$

