Sturm-Liouville Problem/Unit Weight Function

Theorem
Let $ P, Q : \R \to \R $ be real mappings such that $ P $ is smooth and positive, while $ Q $ is continuous:


 * $ \displaystyle P \left ( { x } \right ) \in C^\infty $


 * $ \displaystyle P \left ( { x } \right ) > 0 $


 * $ \displaystyle Q \left ( { x } \right ) \in C^0 $

Let the Sturm-Liouville equation, with $ w \left ( { x } \right ) = 1 $, be of the form:


 * $ - \left ( { P y' } \right )' + Qy = \lambda y $

where $ \lambda \in \R $.

Let it satisfy the following boundary conditions:


 * $ y \left ( { a } \right ) = y \left ( { b } \right ) = 0 $

Then all solutions of the Sturm-Liouville equation, together with their eigenvalues, form infinite sequences $ \{ { y^{ \left ( { n } \right ) } } \} $ and $ \{ { \lambda^{ \left ( { n } \right ) } } \} $.

Furthermore, each $ \lambda^{ \left ( { n } \right ) } $ corresponds to an eigenfunction $ y^{ \left ( { n } \right ) } $, unique up to a constant factor.
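The statement can be illustrated numerically. A minimal sketch, assuming the example data $ P \left ( { x } \right ) = 1 $, $ Q \left ( { x } \right ) = 0 $ on $ \left [ { 0 \,. \,. \, \pi } \right ] $ (not part of the theorem), for which the eigenfunctions are $ \sin n x $ with eigenvalues $ n^2 $:

```python
import math

# Sanity check (not part of the proof): for the special case P(x) = 1,
# Q(x) = 0 on [0 .. pi] (example data, an assumption), the eigenfunctions
# are y^(n)(x) = sin(n x) with eigenvalues lambda^(n) = n^2.  We check
# -(P y')' + Q y = lambda y at sample points via a central difference.
def sl_residual(n: int, x: float, h: float = 1e-5) -> float:
    """Residual of -y'' - n^2 y at x, for y(x) = sin(n x)."""
    y = lambda s: math.sin(n * s)
    second_derivative = (y(x - h) - 2 * y(x) + y(x + h)) / h**2
    return -second_derivative - n**2 * y(x)

max_residual = max(abs(sl_residual(n, x))
                   for n in (1, 2, 3)
                   for x in (0.3, 1.1, 2.5))
```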

Lemma
The given Sturm-Liouville equation is an Euler equation of the following functional:


 * $ \displaystyle J \left [ { y } \right ] = \int_a^b \left ( { P y'^2 + Q y^2 } \right ) \mathrm d x $

constrained by a subsidiary condition:


 * $ \displaystyle \int_a^b y^2 \mathrm d x = 1 $

Proof
According to Simplest Variational Problem with Subsidiary Conditions, the following equation must hold:


 * $ \displaystyle F_y - \frac{ \mathrm d }{ \mathrm d x } F_{ y' } + \lambda \left ( { G_y - \frac{ \mathrm d }{ \mathrm d x } G_{ y' } } \right) = 0 $

where:


 * $ F = P y'^2 + Q y^2 $


 * $ G = y^2 $

Then the Euler equation reads:


 * $ \displaystyle 2 Q y - 2 \left ( { P y' } \right )' + 2 \lambda y = 0 $

Division by $ 2 $ and rearrangement of terms yields the desired result, upon renaming $ -\lambda $ as $ \lambda $, which is admissible since $ \lambda $ is an arbitrary real multiplier.

By Necessary Condition for Integral Functional to have Extremum for given function, if $ y $ is an extremum of $ J $, it is also a solution of the Sturm-Liouville equation.

Lemma
$ J $ is bounded from below.

Proof
Since $ Q $ is continuous on the closed interval $ \left [ { a \,. \,. \, b } \right ] $, it is bounded there.

Since $ P > 0 $, it holds that:


 * $ \displaystyle \int_a^b \left ( { Py'^2 + Qy^2 } \right ) \mathrm d x > \int_a^b Qy^2 \mathrm d x \ge M \int_a^b y^2 \mathrm d x = M $

where


 * $ \displaystyle M = \min_{ a \le x \le b } Q \left ( { x } \right )$

Therefore, $ J $ is bounded from below.
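The bound can be observed numerically. A minimal sketch, assuming the hypothetical data $ P \left ( { x } \right ) = 2 + \sin x > 0 $ and $ Q \left ( { x } \right ) = \cos x $ on $ \left [ { 0 \,. \,. \, \pi } \right ] $:

```python
import math

# Sketch of the lower bound J[y] >= M = min Q, with assumed example data
# P(x) = 2 + sin x > 0 and Q(x) = cos x on [a .. b] = [0 .. pi].
a, b = 0.0, math.pi
P = lambda x: 2.0 + math.sin(x)
Q = lambda x: math.cos(x)

def integrate(f, lo, hi, m=2000):
    """Composite trapezoidal rule."""
    h = (hi - lo) / m
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, m)))

M = min(Q(a + i * (b - a) / 1000) for i in range(1001))   # min of Q on a grid

def J(n):
    """J[y] for the normalised trial mapping y = sqrt(2/pi) sin(n x),
    which satisfies y(a) = y(b) = 0 and the subsidiary condition."""
    y = lambda x: math.sqrt(2 / math.pi) * math.sin(n * x)
    dy = lambda x: math.sqrt(2 / math.pi) * n * math.cos(n * x)
    return integrate(lambda x: P(x) * dy(x)**2 + Q(x) * y(x)**2, a, b)

values = [J(n) for n in (1, 2, 3)]
```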

Introduce a new variable $ \displaystyle t = \pi \frac{ x - a }{ b - a } $.

Then the interval of consideration $ \left [ { a \,. \,. \, b } \right ] $ is mapped onto $ \left [ { 0 \,. \,. \, \pi } \right ] $.

Choose the Ritz sequence $ \{ { \phi_n \left ( { t } \right ) } \} = \{ { \sin nt  } \} $, where $ n \in \N $.

Lemma
The elements of the sequence $ \{ { \sin nt } \} $ are orthogonal on the interval $ \left [ { 0 \,. \,. \, \pi } \right ] $:


 * $ \displaystyle \int_0^\pi \sin \left ( { k t } \right ) \sin \left ( { l t } \right ) \mathrm d t = \frac{ \pi }{ 2 } \delta_{ k l } $

Proof
The product involves two elements of the sequence $ \{ { \sin nt } \} $.

Their indices either match each other or not.

Suppose $ k = l $.

Then:


 * $ \displaystyle \int_0^\pi \sin^2 \left ( { k t } \right ) \mathrm d t = \int_0^\pi \frac{ 1 - \cos \left ( { 2 k t } \right ) }{ 2 } \mathrm d t = \frac{ \pi }{ 2 } $

Suppose $ k \ne l $.

Then:


 * $ \displaystyle \int_0^\pi \sin \left ( { k t } \right ) \sin \left ( { l t } \right ) \mathrm d t = \frac{ 1 }{ 2 } \int_0^\pi \left [ { \cos \left ( { \left ( { k - l } \right ) t } \right ) - \cos \left ( { \left ( { k + l } \right ) t } \right ) } \right ] \mathrm d t = 0 $

since $ \displaystyle \int_0^\pi \cos \left ( { m t } \right ) \mathrm d t = 0 $ for every non-zero integer $ m $.

By Proof by Cases, the statement is proved.
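The orthogonality relation can also be checked numerically; a minimal sketch using the trapezoidal rule:

```python
import math

# Numerical check of the orthogonality of the Ritz sequence {sin(nt)}.
def integrate(f, lo, hi, m=4000):
    """Composite trapezoidal rule."""
    h = (hi - lo) / m
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, m)))

def inner(k: int, l: int) -> float:
    """Integral of sin(kt) sin(lt) over [0 .. pi]."""
    return integrate(lambda t: math.sin(k * t) * math.sin(l * t), 0.0, math.pi)

# inner(k, l) should be pi/2 for k = l and 0 otherwise
```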

Let the trial solution be of the following form:


 * $ \displaystyle y \left ( { x } \right ) = \sum_{ k = 1 }^n \alpha_k \sin \left ( { k t \left ( { x } \right ) } \right ) $

The trial solution has to satisfy the boundary and subsidiary conditions.

The boundary conditions are satisfied automatically, since every $ \sin \left ( { k t } \right ) $ vanishes at $ t = 0 $ and $ t = \pi $, that is, at $ x = a $ and $ x = b $.

The subsidiary condition results in an additional constraint on the coefficients $ \alpha_k $:


 * $ \displaystyle \int_a^b y^2 \mathrm d x = \frac{ b - a }{ \pi } \int_0^\pi \left ( { \sum_{ k = 1 }^n \alpha_k \sin k t } \right )^2 \mathrm d t = \frac{ b - a }{ 2 } \sum_{ k = 1 }^n \alpha_k^2 = 1 $

All the points $ \boldsymbol \alpha $ satisfying this constraint constitute a set $ \sigma_n $, which is the surface of an $ n $-dimensional sphere.

For the assumed trial mapping the functional $ J_n \left ( { \boldsymbol \alpha } \right ) $ reads as:


 * $ \displaystyle J_n \left ( { \boldsymbol \alpha } \right ) = \frac{ \pi }{ b - a } \int_0^\pi \left [ { P \left ( { \sum_{ k = 1 }^n \alpha_k \sin k t } \right )'^2 + Q  \left ( { \sum_{ k = 1 }^n \alpha_k \sin k t  } \right )^2 } \right ] \mathrm d t $

The integrand is a second order polynomial in the components of $ \boldsymbol \alpha $.

Hence, $ J_n $ is continuous in the components of $ \boldsymbol \alpha $.

The set $ \sigma_n $ is closed and bounded.

By definition, $ \sigma_n $ is a compact set.

Thus, $ J_n \left ( { \boldsymbol \alpha } \right ) $ is continuous on the compact set $ \sigma_n $.

By Continuous Function on Compact Space is Bounded, $ J_n \left ( { \boldsymbol \alpha } \right ) $ has a minimum on $ \sigma_n $.

Let $ y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) $ be defined as:


 * $ \displaystyle y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) = \sum_{ k = 1 }^n \alpha_k^{ \left ( { 1 } \right ) } \sin k t \left ( { x } \right ) $

for which $ J_n \left ( { \boldsymbol \alpha } \right ) $ achieves its minimum $ \lambda_n^{ \left ( { 1 } \right ) } $, which is not to be confused with the $ \lambda $ of the Sturm-Liouville equation.

Then the $ n $-th element of the sequence $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $ corresponds to the $ n $-th element of the sequence of minima $ \{ { \lambda_n^{ \left ( { 1 } \right ) } } \} $ of $ J_n \left ( { \boldsymbol \alpha } \right ) $.
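The construction of $ \lambda_n^{ \left ( { 1 } \right ) } $ can be sketched numerically: in the basis $ \sin k t $ both $ J_n $ and the subsidiary condition become quadratic forms, and the constrained minimum is the smallest generalized eigenvalue of the corresponding pair of matrices. A sketch assuming numpy and the example data $ P = 1 $, $ Q = 0 $ (an assumption for illustration), for which the exact first eigenvalue is $ 1 $:

```python
import math
import numpy as np

# Sketch (not part of the proof) of the Ritz minimum lambda_n^(1): in the
# basis sin(kt) both J_n and the subsidiary condition become quadratic
# forms, and the constrained minimum is the smallest generalized eigenvalue.
# Example data (an assumption): P(t) = 1, Q(t) = 0 on [0 .. pi], for which
# the exact first eigenvalue is 1, with eigenfunction sin t.
def ritz_min(n: int, m: int = 2000) -> float:
    t = np.linspace(0.0, math.pi, m + 1)
    w = np.full(m + 1, math.pi / m)      # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    phi = np.array([np.sin(k * t) for k in range(1, n + 1)])       # sin(kt)
    dphi = np.array([k * np.cos(k * t) for k in range(1, n + 1)])  # (sin(kt))'
    A = (dphi * w) @ dphi.T    # matrix of J_n  (P = 1, Q = 0)
    B = (phi * w) @ phi.T      # Gram matrix of the subsidiary condition
    return float(np.linalg.eigvals(np.linalg.solve(B, A)).real.min())
```

Here `ritz_min(5)` is close to the exact first eigenvalue $ 1 $.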

Since $ \sigma_n \subset \sigma_{ n + 1} $, where $ \sigma_n $ is identified with the subset of $ \sigma_{ n + 1 } $ on which $ \alpha_{ n + 1 } = 0 $, it holds that:


 * $ \displaystyle J_n \left ( { \alpha_1, \ldots, \alpha_n } \right ) = J_{ n + 1 } \left ( { \alpha_1, \ldots, \alpha_n, 0 } \right ) $

By Ritz Method implies Not Worse Approximation with Increased Number of Functions:


 * $ \displaystyle \lambda_{ n + 1 }^{ \left ( { 1 } \right ) } \le \lambda_n^{ \left ( { 1 } \right ) } $

Therefore, enlarging the class of trial mappings $ y_n^{ \left ( { 1 } \right ) } $ through additional summands cannot increase the minima of $ J_n \left ( { \boldsymbol \alpha } \right ) $.

From the last inequality and $ J $ being bounded from below it follows that the following limit exists:


 * $ \displaystyle \lambda^{ \left ( { 1 } \right ) } = \lim_{ n \to \infty } \lambda_n^{ \left ( { 1 } \right ) } $
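The monotone convergence of the minima can be observed numerically; a sketch assuming numpy and the hypothetical data $ P \left ( { t } \right ) = 1 $, $ Q \left ( { t } \right ) = \cos t $:

```python
import math
import numpy as np

# Sketch of the non-increasing sequence lambda_n^(1) of Ritz minima.
# Example data (an assumption): P(t) = 1, Q(t) = cos t on [0 .. pi].
def ritz_min(n: int, m: int = 2000) -> float:
    t = np.linspace(0.0, math.pi, m + 1)
    w = np.full(m + 1, math.pi / m)      # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    phi = np.array([np.sin(k * t) for k in range(1, n + 1)])
    dphi = np.array([k * np.cos(k * t) for k in range(1, n + 1)])
    A = (dphi * w) @ dphi.T + (phi * (np.cos(t) * w)) @ phi.T  # P and Q terms
    B = (phi * w) @ phi.T                                      # Gram matrix
    return float(np.linalg.eigvals(np.linalg.solve(B, A)).real.min())

# enlarging the trial basis cannot increase the constrained minimum
minima = [ritz_min(n) for n in range(1, 7)]
```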

Lemma
The sequence $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $ contains a uniformly convergent subsequence.

Proof
The sequence


 * $ \displaystyle \lambda_n^{ \left ( { 1 } \right ) } = \frac{ \pi }{ b - a } \int_0^\pi \left ( { P { y_n^{ \left ( { 1 } \right ) } }'^2 + Q { y_n^{ \left ( { 1 } \right ) } }^2 } \right ) \mathrm d t $

is convergent with its limit being $ \lambda^{ \left ( { 1 } \right ) } $.

Hence, it is bounded:


 * $ \displaystyle \frac{ \pi }{ b - a } \int_0^\pi \left ( { P { y_n^{ \left ( { 1 } \right ) } }'^2 + Q { y_n^{ \left ( { 1 } \right ) } }^2 } \right ) \mathrm d t \le M $

Furthermore, since $ Q $ is bounded and $ \displaystyle \int_0^\pi { y_n^{ \left ( { 1 } \right ) } }^2 \mathrm d t $ is fixed by the subsidiary condition, the term $ \displaystyle \frac{ \pi }{ b - a } \int_0^\pi Q { y_n^{ \left ( { 1 } \right ) } }^2 \mathrm d t $ is bounded in absolute value.

Consequently, for some $ M_1 \in \R $:


 * $ \displaystyle \frac{ \pi }{ b - a } \min_{ a \le x \le b } P \left ( { x } \right ) \int_0^\pi { y_n^{ \left ( { 1 } \right ) } }'^2 \left ( { t } \right ) \mathrm d t \le \frac{ \pi }{ b - a } \int_0^\pi P { y_n^{ \left ( { 1 } \right ) } }'^2 \left ( { t } \right ) \mathrm d t \le M_1 $

Since $ P $ is positive, division by $ \displaystyle \min_{ a \le x \le b } P \left ( { x } \right ) > 0 $ does not affect the direction of the inequality.

It follows that:


 * $ \displaystyle \int_0^\pi { y_n^{ \left ( { 1 } \right ) } }'^2 \left ( { t } \right ) \mathrm d t \le \frac{ b - a }{ \pi } \frac{ M_1 }{ \min_{ a \le x \le b } P \left ( { x } \right ) } = M_2 $

Consider the squared absolute value of the increment of $ y_n^{ \left ( { 1 } \right ) } $.

Then, for $ 0 \le t \le \pi $, by the Cauchy-Schwarz inequality:


 * $ \displaystyle \left \vert { y_n^{ \left ( { 1 } \right ) } \left ( { t } \right ) - y_n^{ \left ( { 1 } \right ) } \left ( { 0 } \right ) } \right \vert^2 = \left \vert { \int_0^t { y_n^{ \left ( { 1 } \right ) } }' \left ( { \tau } \right ) \mathrm d \tau } \right \vert^2 \le t \int_0^t { y_n^{ \left ( { 1 } \right ) } }'^2 \left ( { \tau } \right ) \mathrm d \tau \le \pi M_2 $

In other words:


 * $ \displaystyle \forall t \in \left [ { 0 \,. \,. \, \pi } \right ], n \in \N : \left \vert { y_n^{ \left ( { 1 } \right ) } \left ( { t } \right ) - y_n^{ \left ( { 1 } \right ) } \left ( { 0 } \right ) } \right \vert \le \sqrt{ M_2 \pi } $

Thus, $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $ is uniformly bounded.

In addition to this, for $ 0 \le t_1 \le t_2 \le \pi $, the same argument yields:


 * $ \displaystyle \left \vert { y_n^{ \left ( { 1 } \right ) } \left ( { t_2 } \right ) - y_n^{ \left ( { 1 } \right ) } \left ( { t_1 } \right ) } \right \vert^2 \le \left \vert { t_2 - t_1 } \right \vert \int_0^\pi { y_n^{ \left ( { 1 } \right ) } }'^2 \left ( { \tau } \right ) \mathrm d \tau \le M_2 \left \vert { t_2 - t_1 } \right \vert $

Let $ \epsilon $ be any strictly positive real number, and put $ \displaystyle \delta = \frac{ \epsilon^2 }{ M_2 } $.

Suppose $ \left \vert { t_2 - t_1 } \right \vert < \delta $.

Then:


 * $ \displaystyle \left \vert { y_n^{ \left ( { 1 } \right ) } \left ( { t_2 } \right ) - y_n^{ \left ( { 1 } \right ) } \left ( { t_1 } \right ) } \right \vert \le \sqrt{ M_2 \left \vert { t_2 - t_1 } \right \vert } < \sqrt{ M_2 \delta } = \epsilon $

In other words:


 * $ \displaystyle \forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: \forall n \in \N: \forall t_1, t_2 \in \left [ { 0 \,. \,. \, \pi } \right ]: \left \vert t_2 - t_1 \right \vert < \delta \implies \left \vert y_n^{ \left ( { 1 } \right ) } \left ( { t_2 } \right ) - y_n^{ \left ( { 1 } \right ) } \left ( { t_1 } \right ) \right \vert < \epsilon$

where the metric is the one induced by the absolute value on $ \R $.

Thus, $ \{ { y_n^{ \left ( { 1 } \right ) } } \} $ is uniformly equicontinuous.

By Arzelà's Theorem, there exists a uniformly convergent subsequence $ \{ { y_ { n _m}^{ \left ( { 1 } \right ) } } \} $ of $ \{ { y_n^{ \left ( { 1 } \right ) }  } \} $.

Denote:


 * $ \displaystyle y^{ \left ( { 1 } \right ) } \left ( { x } \right ) = \lim_{ m \to \infty } y_{ n_m }^{ \left ( { 1 } \right ) } \left ( { x } \right ) $

Lemma
Let $ y \left ( { t } \right ) $ be continuous in $ \left [ { 0 \,. \,. \, \pi } \right ] $.

Suppose:


 * $ \displaystyle \forall h \in C^2 \left ( { 0, \pi } \right ) : h \left ( { 0 } \right ) = h \left ( { \pi } \right ) = h' \left ( { 0 } \right ) = h' \left ( { \pi } \right ) = 0 : \int_0^\pi \left [ { - \left ( { Ph' } \right )' + Q_1 h } \right ] y \mathrm d t = 0 $

Then $ y \left ( { t } \right ) \in C^2 \left ( { 0, \pi } \right ) $ and:


 * $ - \left ( { Py' } \right )' + Q_1 y = 0 $

Proof
By Integration by parts, Product Rule for Derivatives, boundary conditions for $ h $, and noticing that:

the previous integral can be rewritten as:

From lemma:


 * $ \displaystyle - P y + \int_0^t P' y \mathrm d \zeta + \int_0^x \left ( { \int_0^t Q_1 y \mathrm d t } \right ) \mathrm d \zeta = c_0 + c_1 t $

The right hand side, as well as the second and third terms on the left hand side, are differentiable with respect to $ t $.

Thus, $ \left ( { P y } \right )' $ exists.

Differentiation with respect to $ t $ leads to:


 * $ \displaystyle - \left ( { P y } \right )' + P' y + \int_0^t Q_1 y \mathrm d \zeta = c_1 $

or:


 * $ \displaystyle - P y' + \int_0^t Q_1 y \mathrm d \zeta = c_1 $

The right hand side and the second term on the left hand side are continuous and differentiable with respect to $ t $, while $ P $ is continuous and positive.

Therefore, $ y' $ exists and is continuous.

Hence, $ \left ( { P y' } \right )' $ exists and:


 * $ \displaystyle - \left ( { P y' } \right )' + Q_1 y = 0 $

Furthermore, $ P $ is continuous and differentiable, while $ Q_1 $ is continuous.

Then $ y'' $ exists and is continuous.

Lemma
$ y^{ \left ( { 1 } \right ) } $ together with $ \lambda^{ \left ( { 1 } \right ) } $ satisfy the Sturm-Liouville equation, where $ w \left ( { x } \right ) = 1 $:


 * $ \displaystyle - \left ( { P{ y^{ \left ( { 1 } \right ) } }' } \right )' + Qy^{ \left ( { 1 } \right ) } = \lambda^{ \left ( { 1 } \right ) } y^{ \left ( { 1 } \right ) } $

Proof
Let $ J_n $, together with its subsidiary condition $ \int_a^b y^2 \mathrm d x = 1 $, achieve a minimum at $ \boldsymbol \alpha = \boldsymbol \alpha^{ \left ( { 1 } \right ) } $.

Then the necessary condition for its minimum is:


 * $ \displaystyle \frac{ \partial }{ \partial \alpha_r } \left [ { J_n \left ( { \boldsymbol \alpha } \right ) - \lambda_n^{ \left ( { 1 } \right ) } \frac{ \pi }{ b - a } \int_0^\pi \left ( { \sum_{ k = 1 }^n \alpha_k \sin k t } \right )^2 \mathrm d t } \right ] = 0, \quad 1 \le r \le n $

where $ \lambda_n^{ \left ( { 1 } \right ) } $ plays the role of the Lagrange multiplier.

Notice that the differentiation may be carried out under the integral sign, and the resulting common factor $ 2 $ may be dropped.

This leads to a system of equations:


 * $ \displaystyle \int_0^\pi \left \{ { P \left ( { t } \right ) \left [ { \sum_{ k = 1 }^n \alpha_k^{ \left ( { 1 } \right ) } \left ( { \sin k t } \right )' } \right ] \left ( { \sin r t } \right )' + \left [ { Q - \lambda_n^{ \left ( { 1 } \right ) } } \right ]  \left [ { \sum_{ k = 1 }^n  \alpha_k^{ \left ( { 1 } \right ) } \sin k t } \right ] \sin r t  } \right \} \mathrm d t = 0 $

Multiplying each equation by an arbitrary constant $ C_r^{ \left ( { n } \right ) } $ and summing over $ r $ results in:


 * $ \displaystyle \int_0^\pi \left [ { P y_n' h_n' + \left ( { Q - \lambda_n^{ \left ( { 1 } \right ) } } \right ) y_n h_n } \right ] \mathrm d t = 0 $

where:


 * $ \displaystyle h_n \left ( { t } \right ) = \sum_{ r = 1 }^n C_r^{ \left ( { n } \right ) } \sin r t$


 * $ \displaystyle y_n = \sum_{ k = 1 }^n \alpha_k^{ \left ( { 1 } \right ) } \sin \left ( { k t } \right ) $

By Integration by Parts and the boundary conditions $ y_n \left ( { 0 } \right ) = y_n \left ( { \pi } \right ) = 0 $:


 * $ \displaystyle \int_0^\pi \left [ { - \left ( { P h_n' } \right )' + \left ( { Q - \lambda_n^{ \left ( { 1 } \right ) } } \right ) h_n } \right ] y_n \mathrm d t = 0 $

Consider all real mappings $ h $ such that:


 * $ h \left ( { x } \right ) \in C^2 \left ( { 0, \pi } \right ) $

and satisfying the boundary conditions.

Then $ C_r^{ \left ( { n } \right ) } $ can be chosen such that:


 * $ \displaystyle \lim_{ n \to \infty } \int_0^\pi \left \vert h_n \left ( { t } \right ) - h \left ( { t } \right ) \right \vert^2 \mathrm d t = 0 $


 * $ \displaystyle \lim_{ n \to \infty } \int_0^\pi \left \vert h_n' \left ( { t } \right ) - h' \left ( { t } \right ) \right \vert^2 \mathrm d t = 0 $


 * $ \displaystyle \lim_{ n \to \infty } \int_0^\pi \left \vert h_n'' \left ( { t } \right ) - h'' \left ( { t } \right ) \right \vert^2 \mathrm d t = 0 $
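The mean-square approximation of an admissible $ h $ by trigonometric sums can be illustrated numerically; a sketch assuming numpy and the hypothetical admissible mapping $ h \left ( { t } \right ) = t^2 \left ( { \pi - t } \right )^2 $:

```python
import math
import numpy as np

# Sketch of the mean-square approximation of an admissible h by the
# trigonometric sums h_n(t) = sum_r C_r sin(rt).  The mapping h below is a
# hypothetical example satisfying h(0) = h(pi) = h'(0) = h'(pi) = 0.
m = 4000
t = np.linspace(0.0, math.pi, m + 1)
w = np.full(m + 1, math.pi / m)       # trapezoidal quadrature weights
w[0] *= 0.5
w[-1] *= 0.5
h = t**2 * (math.pi - t)**2

def mean_square_error(n: int) -> float:
    """Squared L^2 distance between h and its n-term sine expansion."""
    err = h.copy()
    for r in range(1, n + 1):
        c = (2 / math.pi) * float(np.sum(w * h * np.sin(r * t)))  # C_r
        err = err - c * np.sin(r * t)
    return float(np.sum(w * err**2))
```

The error decreases as the number of basis mappings grows.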

Along the uniformly convergent subsequence, $ y_{ n_m }^{ \left ( { 1 } \right ) } $ converges to $ y^{ \left ( { 1 } \right ) } $ uniformly on $ \left [ { 0 \,. \,. \, \pi } \right ] $, which justifies passing to the limit:


 * $ \displaystyle \lim_{ m \to \infty } \int_0^{ \pi } \left [ { - \left ( { Ph_{ n_m }' } \right )' + \left ( { Q - \lambda_{ n_m }^{ \left ( { 1 } \right ) } } \right ) h_{ n_m } } \right ] y_{ n_m }^{ \left ( { 1 } \right ) } \mathrm d t = \int_0^\pi \left [ { -\left ( { Ph' } \right )' + \left ( { Q - \lambda^{ \left ( { 1 } \right ) } } \right ) h  } \right ] y^{ \left ( { 1 } \right ) } \mathrm d t = 0 $

By the previous lemma, with $ Q_1 = Q - \lambda^{ \left ( { 1 } \right ) } $, the mapping $ y^{ \left ( { 1 } \right ) } $ is twice continuously differentiable and satisfies the desired equation.

Lemma
$ \{ { y_n^{ \left ( { 1 } \right ) } \left ( { x } \right ) } \} $ pointwise converges to $ y^{ \left ( { 1 } \right ) } \left ( { x } \right ) $.

Proof
By Existence and Uniqueness of Solution for Linear Second Order ODE with two Initial Conditions, where $ R \left ( { x } \right ) = 0 $, the Sturm-Liouville equation


 * $ - \left ( { Py' } \right )' + Qy = \lambda y $

satisfying the boundary conditions:


 * $ y \left ( { 0 } \right ) = y \left ( { \pi } \right ) = 0 $

and the subsidiary condition:


 * $ \displaystyle \int_0^\pi y^2 \left ( { t } \right ) \mathrm d t = 1 $

is unique up to the sign of $ y $.

Let $ y^{ \left ( { 1 } \right ) } \left ( { t } \right ) $ be a solution corresponding to $ \lambda = \lambda^{ \left ( { 1 } \right ) } $

Due to the subsidiary condition, $ y^{ \left ( { 1 } \right ) } \left ( { t } \right ) $ cannot vanish identically on the interval $ \left [ { 0 \,. \,. \, \pi } \right ] $.

Hence, as a nontrivial solution of a second order linear ODE, $ y^{ \left ( { 1 } \right ) } $ has at most countably many zeros.

Then:


 * $ \exists t_0 \in \left [ { 0 \,. \,. \, \pi } \right ] : y^{ \left ( { 1 } \right ) } \left ( { t_0 } \right ) \ne 0 $

Choose the sign so that $ y^{ \left ( { 1 } \right ) } \left ( { t_0 } \right ) > 0 $

Similarly, let $ y_n^{ \left ( { 1 } \right ) } \left ( { t } \right ) $ be a solution corresponding to $ \lambda = \lambda_n^{ \left ( { 1 } \right ) } $

Choose the signs so that:


 * $ \forall n \in \N : y_n^{ \left ( { 1 } \right ) } \left ( { t_0 } \right ) \ge 0 $

Suppose $ y_n^{ \left ( { 1 } \right ) } \left ( { t } \right ) $ does not pointwise converge to $ y^{ \left ( { 1 } \right ) } \left ( { t } \right ) $.

By Arzelà's Theorem, there exists another subsequence of $ \{ { y_n^{ \left ( { 1 } \right ) } \left ( { t } \right ) } \} $, converging uniformly to another solution $ \overline{ y }^{ \left ( { 1 } \right ) } $ corresponding to $ \lambda = \lambda^{ \left ( { 1 } \right ) } $.

Because of the uniqueness of solutions, except for the sign, both solutions may differ only in their signs:


 * $ \overline{ y }^{ \left ( { 1 } \right ) } \left ( { t } \right ) = - y^{ \left ( { 1 } \right ) } \left ( { t } \right ) $

Therefore:


 * $ \overline{ y }^{ \left ( { 1 } \right ) } \left ( { t_0 } \right ) < 0 $

This is impossible, since $ \overline{ y }^{ \left ( { 1 } \right ) } \left ( { t_0 } \right ) $ is a limit of the values $ y_n^{ \left ( { 1 } \right ) } \left ( { t_0 } \right ) $, and:


 * $ \forall n \in \N : y_n^{ \left ( { 1 } \right ) } \left ( { t_0 } \right ) \ge 0 $

Therefore $ y_n^{ \left ( { 1 } \right ) } \left ( { t } \right ) $ pointwise converges to $ y^{ \left ( { 1 } \right ) } \left ( { t } \right ) $, provided $ y_n^{ \left ( { 1 } \right ) } \left ( { t } \right ) $ is chosen with the correct sign.

Lemma
Sequences $ \{ { y^{ \left ( { n } \right ) } } \} $ and $ \{ { \lambda^{ \left ( { n } \right ) }  } \} $ are infinite.

Proof
Suppose the eigenfunctions $ y^{ \left ( { 1 } \right ) }, \ldots, y^{ \left ( { r } \right ) } $ and the eigenvalues $ \lambda^{ \left ( { 1 } \right ) }, \ldots, \lambda^{ \left ( { r } \right ) } $ are known.

The next eigenfunction $ y^{ \left ( { r + 1 } \right ) } $ and the corresponding eigenvalue $ \lambda^{ \left ( { r + 1 } \right ) } $ can be found by minimising


 * $ \displaystyle J \left [ { y } \right ] = \int_0^\pi \left ( { Py'^2 + Qy^2 } \right ) \mathrm d t $

where boundary and subsidiary conditions are supplied with orthogonality conditions:


 * $ \forall m \in \N : { 1 \le m \le r } : \displaystyle \int_0^\pi y^{ \left ( { m } \right ) } \left ( { t } \right ) y^{ \left ( { r + 1 } \right ) } \left ( { t } \right ) \mathrm d t = 0 $

The new trial solution of the form:


 * $ \displaystyle y_n^{ \left ( { r + 1 } \right ) } \left ( { t } \right ) = \sum_{ k = 1 }^n \alpha_k^{ \left ( { r + 1 } \right ) } \sin k t $

is now also required to be orthogonal to the mappings:


 * $ \displaystyle y_n^{ \left ( { m } \right ) } \left ( { t } \right ) = \sum_{ k = 1 }^n \alpha_k^{ \left ( { m } \right ) } \sin k t $

This results in:


 * $ \displaystyle \sum_{ k = 1 }^n \alpha_k^{ \left ( { r + 1 } \right ) } \int_0^\pi \sin k t \left ( { \sum_{ l = 1 }^n \alpha_l^{ \left ( { m } \right ) } \sin l t } \right ) \mathrm d t = \frac{ \pi }{ 2 } \sum_{ k = 1 }^n \alpha_k^{ \left ( { r + 1 } \right ) } \alpha_k^{ \left ( { m } \right ) } = 0 $

These equations describe $ r $ distinct $ \left ( { n - 1 } \right ) $-dimensional hyperplanes, passing through the origin of coordinates in $ n $ dimensions.

These hyperplanes intersect the sphere $ \sigma_n $, resulting in an $ \left ( { n - r } \right ) $-dimensional sphere $ \hat{ \sigma }_{ n - r } $.

By definition, it is a compact set.

By Continuous Function on Compact Space is Bounded, $ J_n \left ( { \boldsymbol \alpha } \right ) $ has a minimum on $ \hat{ \sigma }_{ n - r } $.

Denote it as $ \lambda_n^{ \left ( { r + 1 } \right ) } $.
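The successive constrained minima can be sketched numerically: with the orthogonality constraints, the $ r $-th minimum of $ J_n $ is the $ r $-th smallest generalized eigenvalue of the same pair of matrices. A sketch assuming numpy and the example data $ P = 1 $, $ Q = 0 $ (an assumption), with exact eigenvalues $ 1, 4, 9, \ldots $:

```python
import math
import numpy as np

# Sketch of the successive constrained minima: with the orthogonality
# constraints, the r-th minimum of J_n is the r-th smallest generalized
# eigenvalue of the same pair of matrices.  Example data (an assumption):
# P(t) = 1, Q(t) = 0 on [0 .. pi], with exact eigenvalues 1, 4, 9, ...
def ritz_minima(n: int, r: int, m: int = 2000):
    t = np.linspace(0.0, math.pi, m + 1)
    w = np.full(m + 1, math.pi / m)      # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    phi = np.array([np.sin(k * t) for k in range(1, n + 1)])
    dphi = np.array([k * np.cos(k * t) for k in range(1, n + 1)])
    A = (dphi * w) @ dphi.T    # matrix of J_n  (P = 1, Q = 0)
    B = (phi * w) @ phi.T      # Gram matrix of the subsidiary condition
    vals = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
    return vals[:r]            # lambda^(1) <= lambda^(2) <= ... <= lambda^(r)
```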

By Ritz Method implies Not Worse Approximation with Increased Number of Functions:


 * $ \displaystyle \lambda_{ n + 1 }^{ \left ( { r + 1 } \right ) } \le \lambda_n^{ \left ( { r + 1 } \right ) } $

This, together with $ J $ being bounded from below, implies that the following limit exists:


 * $ \displaystyle \lambda^{ \left ( { r + 1 } \right ) } = \lim_{ n \to \infty } \lambda_n^{ \left ( { r + 1 } \right ) } $

Since the additional constraints restrict the set over which the minimum is taken, the new minimum cannot be smaller:


 * $ \displaystyle \lambda^{ \left ( { r } \right ) } \le \lambda^{ \left ( { r + 1 } \right ) } $

Let:


 * $ \displaystyle y_n^{ \left ( { r + 1 } \right ) } = \sum_{ k = 1 }^n \alpha_k^{ \left ( { r + 1 } \right ) } \sin k t $

As before, $ \{ { y_n^{ \left ( { r + 1 } \right ) } } \} $ contains a subsequence converging uniformly to a mapping $ y^{ \left ( { r + 1 } \right ) } $, which satisfies the Sturm-Liouville equation together with the boundary, subsidiary and orthogonality conditions.

Thus, $ y^{ \left ( { r + 1 } \right ) } $ is an eigenfunction of Sturm-Liouville equation with an eigenvalue $ \lambda^{ \left ( { r + 1 } \right ) } $.

Orthogonal mappings are linearly independent.

Each eigenvalue corresponds to only one eigenfunction, unique up to a constant factor.

Hence, if $ \lambda^{ \left ( { r } \right ) } $ and $ \lambda^{ \left ( { r + 1 } \right ) } $ were equal, the orthogonal, thus linearly independent, eigenfunctions $ y^{ \left ( { r } \right ) } $ and $ y^{ \left ( { r + 1 } \right ) } $ would correspond to the same eigenvalue, which is a contradiction.

Thus:


 * $ \displaystyle \lambda^{ \left ( { r } \right ) } < \lambda^{ \left ( { r + 1 } \right ) } $

Since this construction yields a new eigenfunction and eigenvalue for every $ r \in \N $, the sequences $ \{ { y^{ \left ( { n } \right ) } } \} $ and $ \{ { \lambda^{ \left ( { n } \right ) } } \} $ are infinite.