User:Julius/Sandbox

Theorem
Let $ P, Q : \R \to \R $ be real mappings such that $ P $ is smooth and positive, while $ Q $ is continuous:


 * $ \displaystyle P \left ( { x } \right ) \in C^\infty $


 * $ \displaystyle P \left ( { x } \right ) > 0 $


 * $ \displaystyle Q \left ( { x } \right ) \in C^0 $

Let the Sturm-Liouville equation, with $ w \left ( { x } \right ) = 1 $, be of the form:


 * $ - \left ( { P y' } \right )' + Qy = \lambda y $

Let it satisfy the following boundary conditions:


 * $ y \left ( { a } \right ) = y \left ( { b } \right ) = 0 $

where $ \lambda \in \R $.

Then all solutions of the Sturm-Liouville equation, together with their eigenvalues, form infinite sequences $ \{ { y^{ \left ( { n } \right ) } } \} $ and $ \{ { \lambda^{ \left ( { n } \right ) } } \} $.

Furthermore, to each $ \lambda^{ \left ( { n } \right ) } $ corresponds an eigenfunction $ y^{ \left ( { n } \right ) } $, unique up to a constant factor.
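As an illustration of the theorem (not part of the statement), take $ P \left ( { x } \right ) = 1 $, $ Q \left ( { x } \right ) = 0 $ and $ \left [ { a \,. \,. \, b } \right ] = \left [ { 0 \,. \,. \, \pi } \right ] $. Then the equation reads $ - y'' = \lambda y $, and the sequences in question are:


 * $ \displaystyle y^{ \left ( { n } \right ) } \left ( { x } \right ) = \sin n x, \quad \lambda^{ \left ( { n } \right ) } = n^2 $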

Lemma
The given Sturm-Liouville equation is the Euler equation of the following functional:


 * $ \displaystyle J \left [ { y } \right ] = \int_a^b \left ( { P y'^2 + Q y^2 } \right ) \mathrm d x $

subject to the subsidiary condition:


 * $ \displaystyle \int_a^b y^2 \mathrm d x = 1 $

Proof
According to Simplest Variational Problem with Subsidiary Conditions we have that:


 * $ F = P y'^2 + Q y^2 $


 * $ G = y^2 $
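The partial derivatives needed for the Euler equation are:


 * $ \displaystyle \frac{ \partial F }{ \partial y } = 2 Q y, \quad \frac{ \partial F }{ \partial y' } = 2 P y', \quad \frac{ \partial G }{ \partial y } = 2 y $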

Then the Euler equation reads:


 * $ \displaystyle 2 Q y - 2 \left ( { P y' } \right )' - 2 \lambda y = 0 $

or, after simplification and rearrangement of terms:


 * $ - \left ( { P y' } \right )' + Q y = \lambda y $

Thus, by Necessary Condition for Integral Functional to have Extremum for given function, if $ y $ is an extremum of $ J $, it is also a solution of the Sturm-Liouville equation.

Since $ P > 0 $, it holds that:


 * $ \displaystyle \int_a^b \left ( { Py'^2 + Qy^2 } \right ) \mathrm d x > \int_a^b Qy^2 \mathrm d x \ge M \int_a^b y^2 \mathrm d x = M $

where


 * $ \displaystyle M = \min_{ a \le x \le b } Q \left ( { x } \right )$

Thus, $ J $ is bounded from below on the set of admissible mappings.

Introduce a new variable $ \displaystyle t = \pi \frac{ x - a }{ b - a } $.

Then the interval of consideration $ \left [ { a \,. \,. \, b } \right ] $ is mapped onto $ \left [ { 0 \,. \,. \, \pi } \right ] $.

Choose the Ritz sequence $ \{ { \phi_n \left ( { t } \right ) } \} = \{ { \sin nt } \} $, where $ n \in \N $.

Lemma
The elements of the sequence $ \{ { \sin nt } \} $ are orthogonal on the interval $ \left [ { 0 \,. \,. \, \pi } \right ] $:


 * $ \displaystyle \int_0^\pi \sin \left ( { k t } \right ) \sin \left ( { l t } \right ) \mathrm d t = \frac{ \pi }{ 2 } \delta_{ k l } $

Proof
Suppose $ k = l $.

Then:


 * $ \displaystyle \int_0^\pi \sin^2 \left ( { k t } \right ) \mathrm d t = \int_0^\pi \frac{ 1 - \cos \left ( { 2 k t } \right ) }{ 2 } \mathrm d t = \frac{ \pi }{ 2 } $

Suppose $ k \ne l $.

Then:


 * $ \displaystyle \int_0^\pi \sin \left ( { k t } \right ) \sin \left ( { l t } \right ) \mathrm d t = \frac{ 1 }{ 2 } \int_0^\pi \left [ { \cos \left ( { \left ( { k - l } \right ) t } \right ) - \cos \left ( { \left ( { k + l } \right ) t } \right ) } \right ] \mathrm d t = 0 $

since $ \sin \left ( { m \pi } \right ) = 0 $ for every integer $ m $.

By Proof by Cases, the statement is proved.

Thus, the trial solution is of the following form:


 * $ \displaystyle y \left ( { x } \right ) = \sum_{ k = 1 }^n \alpha_k \sin \left ( { k t \left ( { x } \right ) } \right ) $

The trial solution has to satisfy the boundary and subsidiary conditions.

The boundary conditions are satisfied automatically, since $ \sin \left ( { k \cdot 0 } \right ) = \sin \left ( { k \pi } \right ) = 0 $.

The subsidiary condition results in an additional constraint on the coefficients $ \alpha_k $, which by the orthogonality lemma reads:


 * $ \displaystyle \int_0^\pi y^2 \mathrm d t = \frac{ \pi }{ 2 } \sum_{ k = 1 }^n \alpha_k^2 = 1 $

All the points $ \boldsymbol \alpha $ satisfying this constraint equation constitute a set which is the surface $ \sigma_n $ of an $ n $-dimensional sphere.

For the assumed trial mapping, the functional $ J $ becomes:


 * $ \displaystyle J_n \left ( { \boldsymbol \alpha } \right ) = \int_0^\pi \left [ { P \left ( { \sum_{ k = 1 }^n \alpha_k \sin k t } \right )'^2 + Q  \left ( { \sum_{ k = 1 }^n \alpha_k \sin k t  } \right )^2 } \right ] \mathrm d t $
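Expanding the squares shows that $ J_n $ is a quadratic form in the coefficients, where the matrix $ A_{ k l } $ is introduced here only for brevity:


 * $ \displaystyle J_n \left ( { \boldsymbol \alpha } \right ) = \sum_{ k = 1 }^n \sum_{ l = 1 }^n A_{ k l } \alpha_k \alpha_l, \quad A_{ k l } = \int_0^\pi \left [ { P \left ( { \sin k t } \right )' \left ( { \sin l t } \right )' + Q \sin k t \sin l t } \right ] \mathrm d t $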

$ J_n \left ( { \boldsymbol \alpha } \right ) $ is a continuous function of $ \boldsymbol \alpha $.

$ \sigma_n $ is a compact set.

Thus, by the extreme value theorem, $ J_n \left ( { \boldsymbol \alpha } \right ) $ attains a minimum on $ \sigma_n $.

Let $ y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) $ be defined as:


 * $ \displaystyle y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) = \sum_{ k = 1 }^n \alpha_k^{ \left ( { 1 } \right ) } \sin k t \left ( { x } \right ) $

for which $ J_n $ attains its minimum $ \lambda_n^{ \left ( { 1 } \right ) } $.

Then a sequence $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $ can be constructed, for which the sequence of minima of $ J $ is $ \{ { \lambda_n^{ \left ( { 1 } \right ) } } \} $.

Since $ \sigma_n $ is embedded in $ \sigma_{ n + 1 } $ as the subset where $ \alpha_{ n + 1 } = 0 $, it holds that:


 * $ \displaystyle J_n \left ( { \alpha_1, \ldots, \alpha_n } \right ) = J_{ n + 1 } \left ( { \alpha_1, \ldots, \alpha_n, 0 } \right ) $

Hence:


 * $ \displaystyle \lambda_{ n + 1 }^{ \left ( { 1 } \right ) } \le \lambda_n^{ \left ( { 1 } \right ) } $

In other words, enlarging the set of trial mappings cannot increase the minimum of $ J $.

From the last inequality and $ J $ being bounded from below it follows that the following limit exists:


 * $ \displaystyle \lambda^{ \left ( { 1 } \right ) } = \lim_{ n \to \infty } \lambda_n^{ \left ( { 1 } \right ) } $
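The construction above can also be carried out numerically. The following is a minimal sketch, not part of the proof: the sample coefficients $ P $ and $ Q $ and the helper `ritz_min_eigenvalue` are chosen here purely for illustration. Minimising the quadratic form $ J_n $ over $ \sigma_n $ amounts to finding the smallest eigenvalue of a generalised matrix eigenvalue problem.

```python
import numpy as np
from scipy.linalg import eigh

# Sample coefficients, chosen only for illustration:
# P smooth and positive, Q continuous.
def P(x): return 2.0 + np.cos(x)
def Q(x): return np.sin(3.0 * x)

def ritz_min_eigenvalue(n, m=4000):
    """Approximate lambda_n^(1), the minimum of J_n over the sphere sigma_n.

    With y = sum_k alpha_k sin(k x), J_n(alpha) = alpha^T A alpha, and the
    subsidiary condition is alpha^T B alpha = 1; the constrained minimum is
    the smallest eigenvalue of the generalised problem A a = lambda B a.
    """
    x = np.linspace(0.0, np.pi, m)
    dx = x[1] - x[0]
    w = np.full(m, dx)
    w[0] = w[-1] = dx / 2.0                    # trapezoidal quadrature weights
    k = np.arange(1, n + 1)
    S = np.sin(np.outer(k, x))                 # rows: sin(k x)
    dS = k[:, None] * np.cos(np.outer(k, x))   # rows: (sin(k x))' = k cos(k x)
    A = (dS * (P(x) * w)) @ dS.T + (S * (Q(x) * w)) @ S.T
    B = (S * w) @ S.T                          # Gram matrix, ~ (pi/2) * identity
    return eigh(A, B, eigvals_only=True)[0]

# The minima form a nonincreasing sequence converging to lambda^(1):
print([round(ritz_min_eigenvalue(n), 6) for n in (1, 2, 4, 8)])
```

For such a run the printed minima do not increase with $ n $, in line with $ \lambda_{ n + 1 }^{ \left ( { 1 } \right ) } \le \lambda_n^{ \left ( { 1 } \right ) } $.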

Lemma
The sequence $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $ contains a uniformly convergent subsequence.

Proof
The sequence


 * $ \displaystyle \lambda_n^{ \left ( { 1 } \right ) } = \int_0^\pi \left ( { P { y_n^{ \left ( { 1 } \right ) } }'^2 + Q { y_n^{ \left ( { 1 } \right ) } }^2 } \right ) \mathrm d x $

is convergent.

Hence, it is bounded:


 * $ \displaystyle \int_0^\pi \left ( { P { y_n^{ \left ( { 1 } \right ) } }'^2 + Q { y_n^{ \left ( { 1 } \right ) } }^2 } \right ) \mathrm d x \le M_0 $

Furthermore, since $ \displaystyle \int_0^\pi { y_n^{ \left ( { 1 } \right ) } }^2 \mathrm d x = 1 $:


 * $ \displaystyle \int_0^\pi P { y_n^{ \left ( { 1 } \right ) } }'^2 \mathrm d x \le M_0 + \left \vert { \int_0^\pi Q { y_n^{ \left ( { 1 } \right ) } }^2 \mathrm d x } \right \vert \le M_0 + \max_{ 0 \le x \le \pi } \left \vert { Q \left ( { x } \right ) } \right \vert = M_1 $

For positive $ P $:


 * $ \displaystyle \int_0^\pi { y_n^{ \left ( { 1 } \right ) } }'^2 \left ( { x } \right ) \mathrm dx \le \frac{ M_1 }{ \min_{ 0 \le x \le \pi } P \left ( { x } \right ) } = M_2 $

From:


 * $ y_n^{ \left ( { 1 } \right ) } \left ( { 0 } \right ) = 0 $

and Schwarz inequality:


 * $ \displaystyle \left \vert { y_n \left ( { x } \right ) } \right \vert^2 = \left \vert { \int_0^x y_n' \left ( { \zeta } \right ) \mathrm d \zeta } \right \vert^2 \le \int_0^x y_n'^2 \left ( { \zeta } \right ) \mathrm d \zeta \int_0^x \mathrm d \zeta \le M_2 \pi $

Thus, $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $ is uniformly bounded.

By Schwarz inequality:


 * $ \displaystyle \left \vert { y_n \left ( { x_2 } \right ) - y_n \left ( { x_1 } \right ) } \right \vert^2 = \left \vert { \int_{ x_1 }^{ x_2 } y_n' \left ( { x } \right ) \mathrm d x } \right \vert^2 \le \int_{ x_1 }^{ x_2 } y_n'^2 \mathrm d x \left \vert { \int_{ x_1 }^{ x_2 } \mathrm d x } \right \vert \le M_2 \left \vert { x_2 - x_1 } \right \vert $

Thus, $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $ is equicontinuous.

By Arzelà's Theorem, there exists a uniformly convergent subsequence $ \{ { y_{ n_m }^{ \left ( { 1 } \right ) } } \} $ of $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $.

Denote:


 * $ \displaystyle y^{ \left ( { 1 } \right ) } \left ( { x } \right ) = \lim_{ m \to \infty } y_{ n_m }^{ \left ( { 1 } \right ) } \left ( { x } \right ) $

As a uniform limit of continuous mappings, $ y^{ \left ( { 1 } \right ) } $ is continuous.

Lemma
Let $ y \left ( { x } \right ) $ be continuous in $ \left [ { 0 \,. \,. \, \pi } \right ] $.

Suppose:


 * $ \displaystyle \forall h \in \mathcal D_2 \left ( { 0, \pi } \right ) : h \left ( { 0 } \right ) = h \left ( { \pi } \right ) = h' \left ( { 0 } \right ) = h' \left ( { \pi } \right ) = 0 : \int_0^\pi \left [ { - \left ( { Ph' } \right )' + Q_1 h } \right ] y \mathrm d x = 0 $

Then $ y \left ( { x } \right ) \in \mathcal D_2 \left ( { 0, \pi } \right ) $ and:


 * $ - \left ( { Py' } \right )' + Q_1 y = 0 $
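For orientation, consider the special case $ P = 1 $, $ Q_1 = 0 $: the lemma then asserts that if:


 * $ \displaystyle \int_0^\pi h'' y \mathrm d x = 0 $

for all such $ h $, then $ y'' = 0 $, that is, $ y $ is linear.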

Proof
By Integration by Parts, using the conditions $ h \left ( { 0 } \right ) = h \left ( { \pi } \right ) = h' \left ( { 0 } \right ) = h' \left ( { \pi } \right ) = 0 $ to discard the boundary terms:


 * $ \displaystyle \int_0^\pi \left [ { - \left ( { Ph' } \right )' + Q_1 h } \right ] y \mathrm d x = - \int_0^\pi Ph''y \mathrm d x - \int_0^\pi P'h'y \mathrm d x + \int_0^\pi Q_1 h y \mathrm d x = \int_0^\pi \left [ { - P y + \int_0^x P' y \mathrm d \zeta + \int_0^x \left ( { \int_0^\zeta Q_1 y \mathrm d t } \right ) \mathrm d \zeta } \right ] h'' \mathrm d x = 0 $

Since this holds for all admissible $ h $, it follows from the lemma that:


 * $ \displaystyle - P y + \int_0^x P' y \mathrm d \zeta + \int_0^x \left ( { \int_0^\zeta Q_1 y \mathrm d t } \right ) \mathrm d \zeta = c_0 + c_1 x $

Since the integrals and the right-hand side are continuously differentiable, so is $ P y $; differentiation leads to:


 * $ \displaystyle - \left ( { P y } \right )' + P' y + \int_0^x Q_1 y \mathrm d \zeta = c_1 $

Since $ P $ is continuously differentiable and does not vanish, $ y' $ exists and is continuous, and the equation simplifies to:


 * $ \displaystyle - P y' + \int_0^x Q_1 y \mathrm d \zeta = c_1 $

The integral is differentiable, since its integrand is continuous.

Thus, $ \left ( { P y' } \right )' $ exists and:


 * $ \displaystyle - \left ( { P y' } \right )' + Q_1 y = 0 $

Furthermore, since $ P $ does not vanish, $ y'' $ exists and is continuous, which proves the lemma.


We now show that $ y^{ \left ( { 1 } \right ) } $ satisfies the Sturm-Liouville equation:


 * $ \displaystyle - \left ( { P{ y^{ \left ( { 1 } \right ) } }' } \right )' + Qy^{ \left ( { 1 } \right ) } = \lambda^{ \left ( { 1 } \right ) } y^{ \left ( { 1 } \right ) } $

According to the method of Lagrange multipliers, at the minimum point $ \boldsymbol \alpha^{ \left ( { 1 } \right ) } $:


 * $ \displaystyle \frac{ \partial }{ \partial \alpha_r } \{ { J_n \left ( { \alpha_1, \ldots, \alpha_n } \right ) - \lambda_n^{ \left ( { 1 } \right ) } \int_0^\pi \left ( { \sum_{ k = 1 }^n \alpha_k \sin kx } \right )^2 \mathrm d x  } \} = 0 $

Differentiation under the integral sign leads to a system of equations, one for each $ r = 1, \ldots, n $:


 * $ \displaystyle \int_0^\pi \{ { P \left ( { x } \right ) \left [ { \sum_{ k = 1 }^n \alpha_k^{ \left ( { 1 } \right ) } \left ( { \sin kx } \right )' } \right ] \left ( { \sin rx } \right )' + \left [ { Q - \lambda_n^{ \left ( { 1 } \right ) } } \right ]  \left [ { \sum_{ k = 1 }^n  \alpha_k^{ \left ( { 1 } \right ) } \sin kx } \right ] \sin rx  } \} \mathrm d x = 0 $

Multiplying each equation by an arbitrary constant $ C_r^{ \left ( { n } \right ) } $ and summing over $ r $ results in:


 * $ \displaystyle \int_0^\pi \left [ { P y_n' h_n' + \left ( { Q - \lambda_n^{ \left ( { 1 } \right ) } } \right ) y_n h_n } \right ] \mathrm d x = 0 $

where:


 * $ \displaystyle h_n \left ( { x } \right ) = \sum_{ r = 1 }^n C_r^{ \left ( { n } \right ) } \sin rx$

By Integration by parts:


 * $ \displaystyle \int_0^\pi \left [ { - \left ( { P h_n' } \right )' + \left ( { Q - \lambda_n^{ \left ( { 1 } \right ) } } \right ) h_n } \right ] y_n \mathrm d x = 0 $

If:


 * $ h \left ( { x } \right ) \in \mathcal D_2 \left ( { 0, \pi } \right ) $

satisfies the boundary conditions, then we can choose $ C_r^{ \left ( { n } \right ) } $ such that:


 * $ h_n \to h, \quad h_n' \to h', \quad h_n'' \to h'' $

where the convergence is in the mean:


 * $ \displaystyle \lim_{ n \to \infty } \int_0^\pi \left \vert h_n \left ( { x } \right ) - h \left ( { x } \right ) \right \vert \mathrm d x = 0 $

and similarly for the derivatives.

Since $ y_{ n_m }^{ \left ( { 1 } \right ) } \to y^{ \left ( { 1 } \right ) } $ uniformly on $ \left [ { 0 \,. \,. \, \pi } \right ] $:


 * $ \displaystyle \lim_{ m \to \infty } \int_0^{ \pi } \left [ { - \left ( { Ph_{ n_m }' } \right )' + \left ( { Q - \lambda_{ n_m }^{ \left ( { 1 } \right ) } } \right ) h_{ n_m } } \right ] y_{ n_m }^{ \left ( { 1 } \right ) } \mathrm d x = \int_0^\pi \left [ { -\left ( { Ph' } \right )' + \left ( { Q - \lambda^{ \left ( { 1 } \right ) } } \right ) h } \right ] y^{ \left ( { 1 } \right ) } \mathrm d x = 0 $

Hence, by the preceding lemma with $ Q_1 = Q - \lambda^{ \left ( { 1 } \right ) } $, $ y^{ \left ( { 1 } \right ) } \in \mathcal D_2 \left ( { 0, \pi } \right ) $ and the Sturm-Liouville equation above is satisfied.

Lemma
The sequence $ \{ { y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) } \} $ converges uniformly to $ y^{ \left ( { 1 } \right ) } \left ( { x } \right ) $.

Proof
Up to sign, the solution of the Sturm-Liouville equation:


 * $ - \left ( { Py' } \right )' + Qy = \lambda y $

satisfying the boundary conditions:


 * $ y \left ( { 0 } \right ) = 0, \quad y \left ( { \pi } \right ) = 0 $

and the normalisation condition:


 * $ \displaystyle \int_0^\pi y^2 \left ( { x } \right ) \mathrm d x = 1 $

is unique: any two solutions for the same $ \lambda $ vanishing at $ 0 $ are proportional, and the normalisation fixes the constant factor up to sign.

Let $ y^{ \left ( { 1 } \right ) } \left ( { x } \right ) $ be a solution corresponding to $ \lambda = \lambda^{ \left ( { 1 } \right ) } $.

Suppose:


 * $ \exists x_0 \in \left [ { 0 \,. \,. \, \pi } \right ] : y^{ \left ( { 1 } \right ) } \left ( { x_0 } \right ) \ne 0 $

Choose the sign so that $ y^{ \left ( { 1 } \right ) } \left ( { x_0 } \right ) > 0 $.

Similarly, let $ y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) $ be a solution corresponding to $ \lambda = \lambda_n^{ \left ( { 1 } \right ) } $.

Choose the signs so that:


 * $ \forall n \in \N : y_n^{ \left ( { 1 } \right ) } \left ( { x_0 } \right ) \ge 0 $

If $ y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) $ does not converge to $ y^{ \left ( { 1 } \right ) } \left ( { x } \right ) $, then one can select a subsequence of $ \{ { y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) } \} $ converging to a different solution $ \overline{ y }^{ \left ( { 1 } \right ) } $, again with $ \lambda = \lambda^{ \left ( { 1 } \right ) } $.

Because of the uniqueness of solutions:


 * $ \overline{ y }^{ \left ( { 1 } \right ) } \left ( { x } \right ) = - y^{ \left ( { 1 } \right ) } \left ( { x } \right ) $

Therefore, $ \overline{ y }^{ \left ( { 1 } \right ) } \left ( { x_0 } \right ) = - y^{ \left ( { 1 } \right ) } \left ( { x_0 } \right ) < 0 $.

This contradicts $ y_n^{ \left ( { 1 } \right ) } \left ( { x_0 } \right ) \ge 0 $ for all $ n $, since the limit of a sequence of nonnegative terms is nonnegative.

Therefore $ y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) \to y^{ \left ( { 1 } \right ) } \left ( { x } \right ) $ uniformly, provided $ y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) $ is chosen with the correct sign.

The next eigenfunction $ y^{ \left ( { 2 } \right ) } $ and the corresponding eigenvalue $ \lambda^{ \left ( { 2 } \right ) } $ can be found by minimising


 * $ \displaystyle J \left [ { y } \right ] = \int_0^\pi \left ( { Py'^2 + Qy^2 } \right ) \mathrm d x $

where the boundary conditions are supplemented by an orthogonality condition:


 * $ \displaystyle \int_0^\pi y^{ \left ( { 1 } \right ) } \left ( { x } \right ) y \left ( { x } \right ) \mathrm d x = 0 $

The new trial solution, of the form:


 * $ \displaystyle y \left ( { x } \right ) = \sum_{ k = 1 }^n \alpha_k \sin kx $

is now also orthogonal to the function:


 * $ y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) = \sum_{ k = 1 }^n \alpha_k^{ \left ( { 1 } \right ) } \sin kx $

This results in:


 * $ \displaystyle \sum_{ k = 1 }^n \alpha_k \int_0^\pi \sin kx \left ( { \sum_{ l = 1 }^n \alpha_l^{ \left ( { 1 } \right ) } \sin lx } \right ) \mathrm d x = \frac{ \pi }{ 2 } \sum_{ k = 1 }^n \alpha_k \alpha_k^{ \left ( { 1 } \right ) } = 0 $

The intersection of $ \sigma_n $ with this hyperplane is again compact, so by similar arguments:


 * $ \displaystyle \lambda_{ n + 1 }^{ \left ( { 2 } \right ) } \le \lambda_n^{ \left ( { 2 } \right ) } $


 * $ \displaystyle \lambda^{ \left ( { 2 } \right ) } = \lim_{ n \to \infty } \lambda_n^{ \left ( { 2 } \right ) } $


 * $ \displaystyle \lambda^{ \left ( { 1 } \right ) } \le \lambda^{ \left ( { 2 } \right ) } $

Let:


 * $ \displaystyle y_n^{ \left ( { 2 } \right ) } = \sum_{ k = 1 }^n \alpha_k^{ \left ( { 2 } \right ) } \sin kx $

Since $ y^{ \left ( { 1 } \right ) } $ and $ y^{ \left ( { 2 } \right ) } $ are orthogonal and normalised, they cannot be linearly dependent.

However, each eigenvalue corresponds to only one eigenfunction, unique up to a constant factor.

Thus:


 * $ \displaystyle \lambda^{ \left ( { 1 } \right ) } < \lambda^{ \left ( { 2 } \right ) }$

Repeating this procedure, each time adding an orthogonality condition with respect to all previously found eigenfunctions, yields the infinite sequences $ \{ { y^{ \left ( { n } \right ) } } \} $ and $ \{ { \lambda^{ \left ( { n } \right ) } } \} $ of the theorem.