User:GFauxPas/Sandbox

Welcome to my sandbox; you are free to play here as long as you don't track sand onto the main wiki. --GFauxPas 09:28, 7 November 2011 (CST)

We have $\rho\left({\mathbf A}\right) + \nu\left({\mathbf A}\right) = \text{number of columns} = 3$, which we might not have a page for.
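
For a concrete check (an illustrative example of my own, not from any linked page): take

 * $\mathbf A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}$

Here $\rho \left({\mathbf A}\right) = 2$ (the first two columns are independent, the third is their sum) and $\nu \left({\mathbf A}\right) = 1$ (the null space is spanned by $\begin{bmatrix} 1 & 1 & -1 \end{bmatrix}^\intercal$), and indeed $2 + 1 = 3$.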

From Null Space Contains Only Zero Vector iff Columns are Independent, $\nu\left({\mathbf A}\right) = 1$ iff the column vectors are independent.

etc.

Actually, I should add a corollary to that page: nullity is one iff column vectors are independent... --GFauxPas 23:32, 4 April 2012 (EDT)


 * Well, we do have Rank Plus Nullity Theorem, if that's what you mean. By the way, did you mean $\nu\left({\mathbf A}\right) = 0$? --abcxyz 23:47, 4 April 2012 (EDT)


 * Rank plus nullity is for transformations; the page doesn't yet tie it into matrices. Oh, and actually I'm not sure what the dimension of $\left \{ {\mathbf 0} \right \}$ is, now that I think about it. --GFauxPas 00:17, 5 April 2012 (EDT)


 * I thought that an $m \times n$ matrix $\mathbf A$ with entries in $\R$ can be viewed as the linear transformation $\mathbf A : \R^n \to \R^m$ given by $\mathbf x \mapsto \mathbf A \mathbf x$. Isn't this already noted in Definition:Matrix?
 * I'm pretty sure that $\dim \left({\left\{{\mathbf 0}\right\}}\right) = 0$, since its basis is empty. --abcxyz 00:26, 5 April 2012 (EDT)


 * Yeah, a matrix can be looked at as an LT, but I'd need a theorem that the $n$ in $\rho + \nu = n$ is the number of columns. Anyway, the idea of an empty basis is kind of interesting. Though it fits the definition of an empty sum: the linear combination of no vectors. Maybe. --GFauxPas 00:46, 5 April 2012 (EDT)
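
As a side note on the empty basis (the standard convention, spelled out here rather than taken from the discussion): the span of $\varnothing$ is $\left\{{\mathbf 0}\right\}$, since the linear combination of no vectors is the empty sum, which is $\mathbf 0$ by definition; and $\varnothing$ is vacuously linearly independent. Hence:

 * $\dim \left({\left\{{\mathbf 0}\right\}}\right) = \left\vert{\varnothing}\right\vert = 0$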


 * Well, if $\mathbf x \in \R^n$, then $\mathbf A$ must have $n$ columns for $\mathbf A \mathbf x$ to be defined ... right? So the domain of the linear transformation $\mathbf A : \R^n \to \R^m$ is $n$-dimensional; doesn't that permit the use of the rank-nullity theorem ($\rho + \nu = n$)? --abcxyz 01:00, 5 April 2012 (EDT)
 * It sure does - but it deserves a page. Thanks for the proof, shorter than I thought. --GFauxPas 01:21, 5 April 2012 (EDT)

Matrix Spaces
My linear algebra class gets to abstract vector spaces in around two weeks. In the meantime, it doesn't hurt to try to get some understanding on my own. What's the proper notation and definition of a matrix such that each column is a member of a vector space? What's the proper way to say "the matrix $\mathbf A$ that, when multiplied on the left of a vector, does the same thing as some linear transformation $T$ on the vector"? --GFauxPas 09:05, 6 April 2012 (EDT)


 * 1. Same notation as block matrices, I think: $\left[{\begin{array}{cccc}\mathbf v_1 & \mathbf v_2 & \cdots & \mathbf v_n\end{array}}\right]$.
 * 2. Probably something like "matrix of the linear transformation $T$" or "transformation matrix of $T$". --abcxyz 11:21, 9 April 2012 (EDT)


 * Surely a reference needs to be made to the bases chosen. Something like 'the matrix of $T$ with respect to the bases $e_1\ldots e_n$ and $f_1\ldots f_m$'. --Lord_Farin 11:28, 9 April 2012 (EDT)
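
To spell out the construction (a sketch of the standard definition, not quoted from any existing page): if $T: V \to W$ is linear, with $e_1, \ldots, e_n$ a basis of $V$ and $f_1, \ldots, f_m$ a basis of $W$, the matrix $\mathbf A = \left[{a_{ij}}\right]$ of $T$ with respect to these bases is determined by:

 * $T \left({e_j}\right) = \displaystyle \sum_{i \mathop = 1}^m a_{ij} f_i$

That is, the $j$th column of $\mathbf A$ lists the coordinates of $T \left({e_j}\right)$ in the basis $f_1, \ldots, f_m$.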

Theorems
On my to-do list:


 * $T\left({\mathbf x}\right) = \mathbf A \mathbf x \iff T^{-1}\left({\mathbf x}\right) = \mathbf A^{-1}\mathbf x$


 * $T\left({\mathbf x}\right)= \mathbf A \mathbf x, T \,' \left({\mathbf x}\right) = \mathbf A' \mathbf x, \left({T \circ T\,'}\right)\left({\mathbf x}\right) = \mathbf A \mathbf A' \mathbf x$. --GFauxPas 14:03, 12 April 2012 (EDT)


 * Judging by the second line, I think it should be the second alternative for the first. --Lord_Farin 14:06, 12 April 2012 (EDT)
 * Yeah it is, I looked it up, thanks. Your reasoning I assume was $\mathbf A \mathbf A^{-1} \mathbf x = \mathbf I \mathbf x$ --GFauxPas 14:08, 12 April 2012 (EDT)
 * Well, the completely equivalent $T \circ T^{-1} = \operatorname{Id}$. --Lord_Farin 14:09, 12 April 2012 (EDT)
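
A quick numerical check of both items (an example of my own choosing): take

 * $\mathbf A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}, \qquad \mathbf A^{-1} = \begin{bmatrix} \frac 1 2 & 0 \\ 0 & \frac 1 3 \end{bmatrix}$

Then $T \left({T^{-1} \left({\mathbf x}\right)}\right) = \mathbf A \mathbf A^{-1} \mathbf x = \mathbf I \mathbf x = \mathbf x$, which is the composition rule applied with $T' = T^{-1}$, consistent with $T \circ T^{-1} = \operatorname{Id}$.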

Existence of Lagrange Multipliers
Let $f: \mathbf X \to \R$, $g: \mathbf X \to \R$, $\mathbf X \subseteq \R^n$ have continuous first partial derivatives in a region containing $\mathbf x_0 \in \mathbf X$, where:


 * $\mathbf x_0 = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$

Let $f$ have an extremum at $\mathbf x_0$ such that $\mathbf x_0$ is a solution to the constraint $g\left({\mathbf x}\right) = c$, where $c \in \R$ is some constant.

Let $\nabla g \left({ \mathbf x_0 }\right) \ne \mathbf 0$.

Then there exists some real number $\lambda$ such that:


 * $\nabla f\left({\mathbf x_0}\right) = \lambda \nabla g\left({\mathbf x_0}\right)$.

$\lambda$ is called a Lagrange multiplier.
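
A worked illustration (an example of my own, not part of the theorem statement): maximise $f \left({x, y}\right) = x y$ subject to $g \left({x, y}\right) = x + y = 2$. The condition $\nabla f = \lambda \nabla g$ reads:

 * $\begin{bmatrix} y \\ x \end{bmatrix} = \lambda \begin{bmatrix} 1 \\ 1 \end{bmatrix}$

so $x = y = \lambda$, and the constraint then forces $x = y = \lambda = 1$. Indeed, on the line $x + y = 2$ we have $f = x \left({2 - x}\right)$, which is maximised at $x = 1$.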


 * A few questions:
 * 1. Did you mean $\nabla g \left({\mathbf x_0}\right) \ne \mathbf 0$?
 * 2. Is $\mathbf X$ assumed to be open in $\R^n$?
 * --abcxyz 23:51, 22 April 2012 (EDT)
 * Yes, I meant $\nabla g$ not $\nabla f$, that was a mistake.
 * Regarding $\mathbf X$, unfortunately the domains of $f$ and $g$ are not explicit; it's really annoying. I'm trying to figure it out from context. --GFauxPas 23:58, 22 April 2012 (EDT)
 * http://mathworld.wolfram.com/LagrangeMultiplier.html says it has to be open. Can you tell me what your line of thinking is? --GFauxPas 00:05, 23 April 2012 (EDT)
 * Generally, you only need an open neighbourhood around $\mathbf x_0$, but you may deem that equivalent. However, I'm not sure that this is the most general treatment of Lagrange multipliers; you can, for example, also employ them to find the local maximum of $f = r \sin \theta$ on the circle $r = 1$ (in polar coordinates) (so $g = r$). But $f$ does not have a local extremum anywhere on $\R^2$. --Lord_Farin 10:51, 23 April 2012 (EDT)
 * I was trying to imply that kind of extremum in the statement of the theorem, LF, but it seems I have failed to do so. How should I have stated it?

Not sure; approaching from this direction is a bit alien to me, and I'm not sure I wouldn't make errors if I tried. Let me try to find a reference work later tonight. --Lord_Farin 11:29, 23 April 2012 (EDT)
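
Working that example in Cartesian coordinates for concreteness (my own rephrasing: $f = r \sin \theta$ is just $y$, and the circle $r = 1$ is $x^2 + y^2 = 1$, so take $g \left({x, y}\right) = x^2 + y^2$ with $c = 1$): the condition $\nabla f = \lambda \nabla g$ reads:

 * $\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} 2 x \\ 2 y \end{bmatrix}$

which forces $x = 0$ and $y = \pm 1$, with $\lambda = \pm \frac 1 2$. The constrained maximum is at $\left({0, 1}\right)$, even though $f \left({x, y}\right) = y$ has no local extremum anywhere on $\R^2$, which is Lord_Farin's point.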

Proof
It seems Larson assumes the existence of such a function:

 * $\mathbf r \left({t}\right) = \begin{bmatrix} x_1 \left({t}\right) \\ x_2 \left({t}\right) \\ \vdots \\ x_n \left({t}\right) \end{bmatrix}$

where the $x_i$ represent continuously differentiable functions of $t$, and $\mathbf r \, ' \left({t}\right) \ne \mathbf 0$, and the $\dfrac{\mathrm d {x_i}}{\mathrm d t}$ are continuous on some open interval $I$. Or something. I could be misunderstanding him. Is such an assumption reasonable? Can I prove the existence of such a function? --GFauxPas 15:10, 22 April 2012 (EDT)
 * I think you can prove it; it's been a while since I concerned myself with Lagrange multipliers. --Lord_Farin 10:51, 23 April 2012 (EDT)
 * Is that because Lagrange multipliers aren't important? Is this stuff worth putting on PW? I'm looking for things to do on PW from my math classes. We're also doing integrals of the form $\displaystyle \iint_R f \left({x, y}\right) \, \mathrm d A$ but I don't think there's enough of a foundation on PW to do those. --GFauxPas 11:20, 23 April 2012 (EDT)
 * They are important, and get more important once you get further into applied fields like physics and engineering... My talents (as I have discovered) lie more in the abstract branches, not bothering about applications. It is definitely an important subject in applied math, so it's about time someone covered it. --Lord_Farin 11:29, 23 April 2012 (EDT)
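
If such an $\mathbf r$ exists, here is how the assumption gets used (a sketch of the usual argument in the notation above, not a quotation from Larson): take $\mathbf r$ lying in the level set $g \left({\mathbf x}\right) = c$ with $\mathbf r \left({t_0}\right) = \mathbf x_0$. Then $t \mapsto f \left({\mathbf r \left({t}\right)}\right)$ has an extremum at $t_0$ while $t \mapsto g \left({\mathbf r \left({t}\right)}\right)$ is constant, so the chain rule gives:

 * $\nabla f \left({\mathbf x_0}\right) \cdot \mathbf r \, ' \left({t_0}\right) = 0, \qquad \nabla g \left({\mathbf x_0}\right) \cdot \mathbf r \, ' \left({t_0}\right) = 0$

Both gradients are thus orthogonal to every such tangent vector, which is exactly the situation handled by the lemma at the bottom of this page.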

Found a great pdf with this proof, hooray!

Theorem:

Let $F: \R^{n + 1} \to \R$ be continuously differentiable.

Let $(\mathbf x, z)$ be an element of $\R^{n+1}$, where $\mathbf x \in \R^n, z \in \R$.

Suppose $\exists (\mathbf x_0,z_0) \in \R^{n + 1}: F(\mathbf x_0, z_0) = 0, \dfrac{\partial F(\mathbf x_0, z_0)}{\partial z} \ne 0$.

Then there exists a function $g: U \to I$, where $U \subseteq \R^n$ is an open set containing $\mathbf x_0$ and $I \subseteq \R$ is an open interval containing $z_0$, such that for all $(\mathbf x, z) \in U \times I$:


 * $F(\mathbf x, z) = 0 \iff z = g(\mathbf x)$

Moreover, $g$ is continuously differentiable.
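
A sanity check of the theorem (an example of my own, not from the pdf): take $F \left({x, z}\right) = x^2 + z^2 - 1$ and $\left({x_0, z_0}\right) = \left({0, 1}\right)$. Then $F \left({0, 1}\right) = 0$ and $\dfrac {\partial F \left({0, 1}\right)} {\partial z} = 2 z_0 = 2 \ne 0$, and with $U = \left({-\frac 1 2, \frac 1 2}\right)$ and $I = \left({\frac 1 2, \frac 3 2}\right)$:

 * $F \left({x, z}\right) = 0 \iff z = g \left({x}\right) = \sqrt {1 - x^2}$

holds for all $\left({x, z}\right) \in U \times I$, and $g$ is continuously differentiable on $U$.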

Lemma: Let $\mathbf u, \mathbf v \in \R^n: \mathbf u \ne \mathbf 0$. Let $T$ denote the set of all vectors $\mathbf x \in \R^n$ that satisfy $\mathbf x \cdot \mathbf u = 0$. If $\forall \mathbf x \in T: \mathbf x \cdot \mathbf v = 0$, then $\exists \lambda \in \R: \mathbf v = \lambda \mathbf u$.
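
A short proof sketch of the lemma (my own, not taken from the pdf): put $\lambda = \dfrac {\mathbf v \cdot \mathbf u} {\mathbf u \cdot \mathbf u}$ (defined since $\mathbf u \ne \mathbf 0$) and $\mathbf w = \mathbf v - \lambda \mathbf u$. Then $\mathbf w \cdot \mathbf u = \mathbf v \cdot \mathbf u - \lambda \left({\mathbf u \cdot \mathbf u}\right) = 0$, so $\mathbf w \in T$ and hence $\mathbf w \cdot \mathbf v = 0$ by hypothesis. Therefore:

 * $\mathbf w \cdot \mathbf w = \mathbf w \cdot \left({\mathbf v - \lambda \mathbf u}\right) = \mathbf w \cdot \mathbf v - \lambda \left({\mathbf w \cdot \mathbf u}\right) = 0$

so $\mathbf w = \mathbf 0$, that is, $\mathbf v = \lambda \mathbf u$. Applying the lemma with $\mathbf u = \nabla g \left({\mathbf x_0}\right)$ and $\mathbf v = \nabla f \left({\mathbf x_0}\right)$ gives exactly $\nabla f \left({\mathbf x_0}\right) = \lambda \nabla g \left({\mathbf x_0}\right)$.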