User:J D Bowen/Math725 HW6

1) We aim to show that if $$T\vec{v}=\lambda \vec{v} \ $$, then $$T^r\vec{v}=\lambda^r\vec{v} \ $$ for all $$r\in\mathbb{Z} \ $$ (with $$T \ $$ invertible when $$r<0 \ $$).

Suppose we have $$T\vec{v}=\lambda \vec{v} \ $$, and let $$r\in\mathbb{N} \ $$. Suppose $$\lambda^{r-1} \ $$ is an eigenvalue of $$T^{r-1} \ $$ with eigenvector $$\vec{v} \ $$, i.e., $$T^{r-1}\vec{v}=\lambda^{r-1}\vec{v} $$. Then

$$T^r\vec{v}=T(T^{r-1}\vec{v})=T(\lambda^{r-1}\vec{v})=\lambda^{r-1}T\vec{v}=\lambda^{r-1}\lambda\vec{v}=\lambda^r\vec{v} \ $$

The base case $$r=1 \ $$ is the hypothesis itself, and so the theorem follows by induction for $$r\in\mathbb{N} \ $$.

Now consider the case $$r=0 \ $$. Then $$T^0 = I, \ \lambda^0=1$$, and we have $$I\vec{v}=\vec{v} \ $$.

Now suppose $$-r\in\mathbb{N} \ $$, and $$T \ $$ is invertible. Consider that

$$\vec{v}=T^{0}\vec{v}=T^{-1} (T\vec{v})= T^{-1} (\lambda\vec{v}) = \lambda T^{-1}\vec{v} \implies \lambda^{-1}\vec{v}=T^{-1}\vec{v} \ $$.

If we let $$A=T^{-1}, \psi=\lambda^{-1} \ $$, then $$A\vec{v}=\psi\vec{v} \ $$, and the first case applied to $$A \ $$ gives $$T^{r}\vec{v}=A^{-r}\vec{v}=\psi^{-r}\vec{v}=\lambda^{r}\vec{v} \ $$ for all negative $$r \ $$.
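Though no computation is required here, the claim is easy to sanity-check numerically. The following Python sketch verifies $$T^r\vec{v}=\lambda^r\vec{v} \ $$ for several positive and negative $$r \ $$; the matrix, eigenvalue, and eigenvector are illustrative sample choices, not from the problem.

```python
import numpy as np

# Illustrative check of T^r v = lambda^r v for r in Z.
# T, lam, and v below are sample choices, not from the problem statement.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # invertible, eigenvalues 2 and 3
lam = 2.0
v = np.array([1.0, 0.0])     # eigenvector of T for lambda = 2

for r in range(-3, 4):       # negative r uses T^{-1}, matching the last case above
    Tr = np.linalg.matrix_power(T, r)
    assert np.allclose(Tr @ v, lam**r * v)
```

Note that `np.linalg.matrix_power` accepts negative exponents precisely when the matrix is invertible, mirroring the invertibility hypothesis in the proof.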

2) Let $$x\in\mathbb{R}, \ x\neq 0 \ $$. Then the matrix

$$\begin{pmatrix} 0 & x \\ -x & x \end{pmatrix}$$

has all-real entries, but complex eigenvalues $$\frac{x\pm ix\sqrt{3}}{2} \ $$ with eigenvectors

$$\begin{pmatrix} y \\ \frac{1+i\sqrt{3}}{2}y \end{pmatrix}, \ \begin{pmatrix} z \\ \frac{1-i\sqrt{3}}{2}z \end{pmatrix} \ $$.

For the matrix we've described, observe that the discriminant of the characteristic polynomial $$\lambda^2-x\lambda+x^2 \ $$ is $$ -3x^2 \ $$.

Since this is negative whenever $$x\neq 0 \ $$, and $$\det = x^2\neq 0 \ $$, we have an invertible real matrix with non-real eigenvalues

$$\lambda = \frac{x\pm ix\sqrt{3}}{2} \ $$
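This eigenvalue formula can be checked numerically; in the Python sketch below, $$x=2 \ $$ is an arbitrary nonzero sample value.

```python
import numpy as np

# Check that [[0, x], [-x, x]] has eigenvalues (x ± i x sqrt(3)) / 2.
# x = 2 is an arbitrary nonzero sample value.
x = 2.0
A = np.array([[0.0, x],
              [-x, x]])
expected = [x * (1 + 1j * np.sqrt(3)) / 2,
            x * (1 - 1j * np.sqrt(3)) / 2]
computed = np.linalg.eigvals(A)
for lam in expected:
    assert any(abs(lam - mu) < 1e-9 for mu in computed)
```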

Some eigenvectors of this matrix then are

$$\begin{pmatrix} 0 & x \\ -x & x \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} v_1\frac{x+ ix\sqrt{3}}{2} \\ v_2\frac{x+ ix\sqrt{3}}{2} \end{pmatrix} $$,

leading to

$$xv_2 = x\frac{1+i\sqrt{3}}{2}v_1 \ $$ and

$$x(v_2-v_1)=x\frac{1+i\sqrt{3}}{2}v_2 \ $$.

The first equation gives us $$v_2 = \frac{1+i\sqrt{3}}{2}v_1 \ $$; we can plug this into the second to get

$$(\frac{1+i\sqrt{3}}{2}v_1-v_1)=\frac{1+i\sqrt{3}}{2}\frac{1+i\sqrt{3}}{2}v_1 \ $$,

which is just

$$\frac{-1+i\sqrt{3}}{2}v_1 = \frac{1}{4}\left({ 1+2i\sqrt{3}-3  }\right)v_1 \ $$,

a true statement. So this eigenvector works.

An eigenvector for the other eigenvalue can be found as well:

$$\begin{pmatrix} 0 & x \\ -x & x \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} u_1\frac{x- ix\sqrt{3}}{2} \\ u_2\frac{x-ix\sqrt{3}}{2} \end{pmatrix} $$

leads to

$$u_2 = u_1\frac{1- i\sqrt{3}}{2}, \ -u_1+u_2=u_2\frac{1- i\sqrt{3}}{2} \ $$

which are consistent, since

$$-u_1+u_1\frac{1- i\sqrt{3}}{2}=u_1\frac{1- i\sqrt{3}}{2}\frac{1- i\sqrt{3}}{2} \ $$

becomes a true statement,

$$\frac{-1- i\sqrt{3}}{2}= \frac{1}{4}\left({ 1-2i\sqrt{3}-3  }\right) \ $$.
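Both eigenpairs can also be verified directly in one pass. The Python sketch below (again with the arbitrary sample value $$x=2 \ $$) checks $$A\vec{v}=\lambda\vec{v} \ $$ for each sign.

```python
import numpy as np

# Verify both claimed eigenvectors of [[0, x], [-x, x]] directly.
# x = 2 is an arbitrary nonzero sample value; v is scaled so its first entry is 1.
x = 2.0
A = np.array([[0.0, x],
              [-x, x]])
for sign in (+1, -1):
    lam = x * (1 + sign * 1j * np.sqrt(3)) / 2
    v = np.array([1.0, (1 + sign * 1j * np.sqrt(3)) / 2])
    assert np.allclose(A @ v, lam * v)
```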

3) Let $$U_1, U_2 \ $$ be finite-dimensional vector spaces over a field $$\mathbb{F} \ $$, and set $$V=U_1 \oplus U_2, \ T:V\to V \ $$ as $$(\vec{u_1},\vec{u_2})\mapsto (\vec{u_1},\vec{0}) \ $$.

Let $$\phi_j :\mathbb{F}^{\text{dim}(U_j)} \to U_j, \ j=1,2 $$ be any isomorphisms. Then we can define bases for $$U_1, U_2 \ $$ as $$B_j = \left\{{\phi_j(\vec{e_{k,j}})}\right\}_{k=1}^{\text{dim}(U_j)} \ $$, where

$$\vec{e}_{k,j}=(0,0,\dots,\underbrace{1}_{k^{th} \ \text{place}},0, \dots, \underbrace{0}_{\text{dim}(U_j)^{th} \ \text{place}}) \ $$

are the standard bases for $$U_1, U_2 \ $$.

Then $$\left\{{(\vec{b},\vec{0}) : \vec{b}\in B_1}\right\}\cup\left\{{(\vec{0},\vec{b}) : \vec{b}\in B_2}\right\} \ $$ forms a basis for $$V \ $$. Call this basis $$B \ $$. Observe that $$(\vec{w_1},\vec{w_2})\mapsto(\phi_1(\vec{w_1}),\phi_2(\vec{w_2})) \ $$ is an isomorphism $$\mathbb{F}^{\text{dim}(U_1)+\text{dim}(U_2)}\to V \ $$. Call this isomorphism $$\phi \ $$. Define the transformation $$S=\phi^{-1}T\phi \ $$.

Note that

$$\mathfrak{M}_B^B (S) = \begin{pmatrix} I_{\text{dim}(U_1)} & 0 \\ 0 & 0 \end{pmatrix} \ $$,

that is, $$\mathfrak{M}_B^B (S) = (m_{ij})$$, where $$m_{ij}= \begin{cases} 1, & \mbox{if } i=j\leq \text{dim}(U_1)  \\ 0,  & \mbox{if } i\neq j \ \text{or} \ j>\text{dim}(U_1) \end{cases} \ $$.

Hence, the eigenvalues of $$S \ $$ are precisely the solutions to the equation

$$(1-\lambda)^{\text{dim}(U_1)}\lambda^{\text{dim}(U_2)} = 0 \ $$.

Obviously, $$\lambda = 0 \ $$ is a solution with multiplicity $$\text{dim}(U_2) \ $$ and $$\lambda=1 \ $$ is a solution with multiplicity $$\text{dim}(U_1) \ $$.

The eigenspace corresponding to the eigenvalue $$1 \ $$ is of course just $$T(V)=U_1\oplus\left\{{\vec{0}}\right\}\cong U_1 \ $$, and the eigenspace of $$0 \ $$ is $$\ker T=\left\{{\vec{0}}\right\}\oplus U_2\cong U_2 \ $$.
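As a concrete sanity check, the Python sketch below builds $$\mathfrak{M}_B^B(S) \ $$ for the sample dimensions $$\text{dim}(U_1)=2, \ \text{dim}(U_2)=3 \ $$ and confirms the eigenvalue multiplicities.

```python
import numpy as np

# Matrix of S for sample dimensions dim(U1) = 2, dim(U2) = 3:
# an identity block acting on U1 and a zero block acting on U2.
d1, d2 = 2, 3
M = np.zeros((d1 + d2, d1 + d2))
M[:d1, :d1] = np.eye(d1)

# Eigenvalue 0 with multiplicity dim(U2), eigenvalue 1 with multiplicity dim(U1).
eigvals = sorted(np.linalg.eigvals(M).real)
assert np.allclose(eigvals, [0.0] * d2 + [1.0] * d1)
```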