Derivation of Hamilton-Jacobi Equation

Theorem
Let $S \left({ x_0, x_1, \mathbf y } \right) = S \left({ x, \mathbf y } \right)$ be the geodetic distance, where the initial point $x_0$ is fixed and the endpoint $x_1 = x$ is allowed to vary.

Let $H$ be the corresponding Hamiltonian.

Then the following equation holds:


 * $ \displaystyle \frac{ \partial S}{ \partial x} + H \left({ x, \mathbf y, \nabla_{ \mathbf y}S } \right)=0$

and is known as the Hamilton-Jacobi Equation.
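As an illustrative check (not part of the proof below), consider the Lagrangian $F = \dfrac 1 2 \left\vert{ \mathbf y' } \right\vert^2$, whose Hamiltonian is $H = \dfrac 1 2 \left\vert{ \mathbf p } \right\vert^2$. The extremals are straight lines, and the geodetic distance from $\left({ x_0, \mathbf y_0 } \right)$ is:


 * $ \displaystyle S \left({ x, \mathbf y } \right) = \frac{ \left\vert{ \mathbf y - \mathbf y_0 } \right\vert^2}{2 \left({ x - x_0 } \right)}$

for which, indeed:


 * $ \displaystyle \frac{ \partial S}{ \partial x} + \frac 1 2 \left\vert{ \nabla_{ \mathbf y} S } \right\vert^2 = -\frac{ \left\vert{ \mathbf y - \mathbf y_0 } \right\vert^2}{2 \left({ x - x_0 } \right)^2} + \frac 1 2 \left\vert{ \frac{ \mathbf y - \mathbf y_0}{x - x_0} } \right\vert^2 = 0$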

Proof
Consider the increment $ \Delta S$:


 * $ \Delta S= S \left({ x+ \Delta x, \mathbf y + \Delta \mathbf y } \right)- S \left({ x, \mathbf y  } \right)$

Note that the change $\Delta \mathbf y$ of the function $\mathbf y$ depends on the manner in which $\Delta x$ is chosen, through the definition of geodetic distance.

For sufficiently smooth $S$, $\left \vert \Delta \mathbf y \right \vert \to 0$ as $\left \vert \Delta x \right \vert \to 0$.

By definition of differential, $ \Delta S$ can be written as


 * $ \Delta S \left({ x, \mathbf y; \Delta x, \Delta \mathbf y } \right) = \mathrm d S \left({ x, \mathbf y; \Delta x, \Delta \mathbf y } \right) + \epsilon \, \Delta x + \boldsymbol \epsilon \cdot \Delta \mathbf y$

where $\epsilon \to 0$ as $\Delta x \to 0$, and $\left \vert \boldsymbol \epsilon \right \vert \to 0$ as $\left \vert \Delta \mathbf y \right \vert \to 0$.
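For a concrete instance of this decomposition (illustration only), take $S \left({ x, y } \right) = x y$ with a single $y$. Then:


 * $ \Delta S = y \, \Delta x + x \, \Delta y + \Delta x \, \Delta y$

where $\mathrm d S = y \, \Delta x + x \, \Delta y$ is the differential, and the higher order remainder $\Delta x \, \Delta y$ is of the form $\epsilon \, \Delta x$ with $\epsilon = \Delta y \to 0$.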

By definition of the geodetic distance,


 * $ \Delta S = J \left[ { \gamma^* } \right] - J \left[ { \gamma } \right]$

where $\gamma$ and $\gamma^*$ are the extremal curves connecting the fixed initial point with the points $\left({ x, \mathbf y } \right)$ and $\left({ x + \Delta x, \mathbf y + \Delta \mathbf y } \right)$ respectively.
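Here $J$ denotes the underlying functional whose extremal value defines the geodetic distance; in the standard setting (that of Gelfand and Fomin, for instance) it has the form:


 * $ \displaystyle J \left[{ \gamma } \right] = \int_{x_0}^x F \left({ \xi, \mathbf y \left({ \xi } \right), \mathbf y' \left({ \xi } \right) } \right) \, \mathrm d \xi$

so that $S \left({ x, \mathbf y } \right) = J \left[{ \gamma } \right]$ when $\gamma$ is the extremal ending at $\left({ x, \mathbf y } \right)$.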

By definition of increment of functional:


 * $ J \left[ { \gamma^* } \right]- J \left[ { \gamma } \right]= \Delta J \left[{ \gamma ; \Delta \gamma } \right]$

where $ \displaystyle \Delta \gamma= \gamma^*- \gamma$.

Since $J$ is differentiable, its increment can be expressed as:


 * $ \Delta J \left[ { \gamma; \Delta \gamma } \right]= \delta J \left[ { \gamma; \Delta \gamma } \right] + \epsilon_ \gamma \cdot \left \vert \Delta \gamma \right \vert$

where $\epsilon_\gamma \to 0$ as $\left \vert \Delta \gamma \right \vert \to 0$, and $\left \vert \Delta \gamma \right \vert \to 0$ as $\left \vert \Delta x \right \vert \to 0$ for sufficiently smooth $S$.

To summarise:


 * $ \Delta S \left( { x, \mathbf y; \Delta x, \Delta \mathbf y } \right)= \Delta J \left[ { \gamma; \Delta \gamma } \right]$

Both sides contain terms linear in $\Delta x$, $\Delta \mathbf y$ and $\Delta \gamma$, as well as terms of higher order.

Since the two sides are equal for every choice of increments, and the higher order terms vanish faster than the linear ones as $\left \vert \Delta x \right \vert \to 0$, the principal linear parts must coincide:


 * $ \mathrm d S= \delta J$

By the general formula for the variation of a functional with a movable endpoint, evaluated along the extremal $\gamma$, the variation of $J$ is expressible as:


 * $ \displaystyle \delta J = \sum_{i=1}^n p_i \, \Delta y_i - H \, \Delta x$
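Here the $p_i$ and $H$ are the canonical variables built from the Lagrangian $F$; under the standard conventions (stated for completeness):


 * $ \displaystyle p_i = \frac{ \partial F}{ \partial y_i'}, \qquad H = \sum_{i=1}^n p_i y_i' - F$

The integral (Euler-Lagrange) term of the variation vanishes because $\gamma$ is an extremal, so only the contribution of the movable endpoint survives.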

Meanwhile, the differential of $S$ is:


 * $ \displaystyle \mathrm d S= \frac{ \partial S}{ \partial x} \Delta x+ \sum_{i=1}^n \frac{ \partial S}{ \partial y_i} \Delta y_i$

Substituting both expressions into $\mathrm d S = \delta J$ and collecting terms:


 * $ \displaystyle \left({ \frac{ \partial S}{ \partial x}+H } \right) \Delta x + \sum_{i=1}^n \left({ \frac{ \partial S}{ \partial y_i}-p_i  } \right) \Delta y_i=0$

Since the endpoint $\left({ x, \mathbf y } \right)$ can be displaced arbitrarily, $\Delta x$ and the $\Delta y_i$ are independent variables.

Hence the equation can hold for all such displacements only if all the coefficients in front of $\Delta x$ and $\Delta y_i$ vanish simultaneously:


 * $ \displaystyle \frac{ \partial S}{ \partial x}=-H, \quad \frac{ \partial S}{ \partial y_i}= p_i$

Since $H = H \left({ x, \mathbf y, \mathbf p } \right)$, the second set of relations gives $\mathbf p = \nabla_{\mathbf y} S$; substituting this into the first relation yields the Hamilton-Jacobi equation.
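As a final consistency check against the free-particle illustration above: along the straight-line extremal the momenta are $p_i = y_i' = \dfrac{y_i - y_{0, i}}{x - x_0}$, which is precisely $\dfrac{ \partial S}{ \partial y_i}$ for $S = \dfrac{ \left\vert{ \mathbf y - \mathbf y_0 } \right\vert^2}{2 \left({ x - x_0 } \right)}$.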