User:GFauxPas/Sandbox

Welcome to my sandbox, you are free to play here as long as you don't track sand onto the main wiki. --GFauxPas 09:28, 7 November 2011 (CST)

We have $\rho\left({\mathbf A}\right) + \nu\left({\mathbf A}\right) = \text{number of columns} = 3$, which we might not have a page for.

From Null Space Contains Only Zero Vector iff Columns are Independent, $\nu\left({\mathbf A}\right) = 1$ iff the column vectors are independent.

etc.

Actually, I should add a corollary to that page: nullity is one iff column vectors are independent... --GFauxPas 23:32, 4 April 2012 (EDT)


 * Well, we do have Rank Plus Nullity Theorem, if that's what you mean. By the way, did you mean $\nu\left({\mathbf A}\right) = 0$? --abcxyz 23:47, 4 April 2012 (EDT)


 * Rank plus nullity is for transformations; the page doesn't yet tie it into matrices. Oh, and actually I'm not sure what the dimension of $\left \{ {\mathbf 0} \right \}$ is, now that I think about it. --GFauxPas 00:17, 5 April 2012 (EDT)


 * I thought that an $m \times n$ matrix $\mathbf A$ with entries in $\R$ can be viewed as the linear transformation $\mathbf A : \R^n \to \R^m$ given by $\mathbf x \mapsto \mathbf A \mathbf x$. Isn't this already noted in Definition:Matrix?
 * I'm pretty sure that $\dim \left({\left\{{\mathbf 0}\right\}}\right) = 0$, since its basis is empty. --abcxyz 00:26, 5 April 2012 (EDT)


 * Yeah, a matrix can be looked at as a linear transformation, but I'd need a theorem that the $n$ in $\rho + \nu = n$ is the number of columns. Anyway, the idea of an empty basis is kind of interesting, though the linear combination of no vectors does fit the definition of an empty sum. Maybe. --GFauxPas 00:46, 5 April 2012 (EDT)


 * Well, if $\mathbf x \in \R^n$, then $\mathbf A$ must have $n$ columns for $\mathbf A \mathbf x$ to be defined ... right? So the domain of the linear transformation $\mathbf A : \R^n \to \R^m$ is $n$-dimensional; doesn't that permit the use of the rank-nullity theorem ($\rho + \nu = n$)? --abcxyz 01:00, 5 April 2012 (EDT)
 * It sure does - but it deserves a page. Thanks for the proof, shorter than I thought. --GFauxPas 01:21, 5 April 2012 (EDT)
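
Following up on the exchange above: a quick numerical sanity check of $\rho\left({\mathbf A}\right) + \nu\left({\mathbf A}\right) = n$, with $\mathbf A$ viewed as the transformation $\mathbf x \mapsto \mathbf A \mathbf x$. The matrix is an arbitrary example whose third column is the sum of the first two (so the columns are dependent), and it assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import null_space

# arbitrary 3-column example; the third column is the sum of the first two,
# so the columns are linearly dependent
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

rank = np.linalg.matrix_rank(A)      # rho(A)
nullity = null_space(A).shape[1]     # nu(A): dimension of the null space

print(rank, nullity, A.shape[1])     # 2, 1, 3
assert rank + nullity == A.shape[1]  # rank plus nullity = number of columns
```

With independent columns, the null space basis would be empty and the nullity would be $0$, matching the corollary discussed above.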

Matrix Spaces
My linear algebra class gets to abstract vector spaces in around two weeks. In the meantime, it doesn't hurt to try to get some understanding on my own. What's the proper notation and definition of a matrix such that each column is a member of a vector space? What's the proper way to say "the matrix $\mathbf A$ that, when multiplied on the left of a vector, does the same thing as some linear transformation $T$ on the vector"? --GFauxPas 09:05, 6 April 2012 (EDT)


 * 1. Same notation as block matrices, I think: $\left[{\begin{array}{cccc}\mathbf v_1 & \mathbf v_2 & \cdots & \mathbf v_n\end{array}}\right]$.
 * 2. Probably something like "matrix of the linear transformation $T$" or "transformation matrix of $T$". --abcxyz 11:21, 9 April 2012 (EDT)


 * Surely a reference needs to be made to the bases chosen. Something like 'the matrix of $T$ with respect to the bases $e_1, \ldots, e_n$ and $f_1, \ldots, f_m$'. --Lord_Farin 11:28, 9 April 2012 (EDT)
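
In the meantime, here's a rough sketch (standard bases only, with a $T$ made up purely for illustration) of the usual recipe: the $j$-th column of the matrix of $T$ is $T$ applied to the $j$-th standard basis vector.

```python
import numpy as np

def T(v):
    # an arbitrary linear transformation R^2 -> R^3, made up for illustration
    x, y = v
    return np.array([x + y, 2.0 * x, 3.0 * y])

# matrix of T with respect to the standard bases:
# the j-th column is T(e_j), with e_j the j-th standard basis vector
A = np.column_stack([T(e) for e in np.eye(2)])

v = np.array([5.0, -1.0])
assert np.allclose(A @ v, T(v))   # multiplying by A does the same thing as applying T
```

With other bases, the $j$-th column would instead hold the coordinates of $T \left({e_j}\right)$ relative to $f_1, \ldots, f_m$, as noted above.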

Theorems
On my to-do list:


 * $T\left({\mathbf x}\right) = \mathbf A \mathbf x \iff T^{-1}\left({\mathbf x}\right) = \mathbf A^{-1}\mathbf x$


 * $T\left({\mathbf x}\right)= \mathbf A \mathbf x, T \,' \left({\mathbf x}\right) = \mathbf A' \mathbf x, \left({T \circ T\,'}\right)\left({\mathbf x}\right) = \mathbf A \mathbf A' \mathbf x$. --GFauxPas 14:03, 12 April 2012 (EDT)


 * Judging by the second line, I think it should be the second alternative for the first. --Lord_Farin 14:06, 12 April 2012 (EDT)
 * Yeah it is, I looked it up, thanks. Your reasoning, I assume, was $\mathbf A \mathbf A^{-1} \mathbf x = \mathbf I \mathbf x$. --GFauxPas 14:08, 12 April 2012 (EDT)
 * Well, the completely equivalent $T \circ T^{-1} = \operatorname{Id}$. --Lord_Farin 14:09, 12 April 2012 (EDT)
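
For what it's worth, both items on the to-do list are easy to sanity-check numerically. The matrices below are arbitrary invertible examples, not anything canonical:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # stands in for the matrix of T
B = np.array([[0.0, -1.0],
              [1.0, 3.0]])   # stands in for the matrix of T'

x = np.array([4.0, -2.0])

# inverse: if T(x) = A x, then applying A^{-1} undoes T
assert np.allclose(np.linalg.inv(A) @ (A @ x), x)

# composition: (T o T')(x) = A (B x) = (A B) x
assert np.allclose(A @ (B @ x), (A @ B) @ x)
```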

Polar Coordinates
A question please, for anyone who wants to answer.

Theorem: Let

$R = \left\{{(x, y) = (r \cos \theta, r \sin \theta) \in \R^2 : 0 \le g_1(\theta) \le r \le g_2(\theta), \ \alpha \le \theta \le \beta, \ 0 \le \beta - \alpha \le 2\pi}\right\}$

where $g_1,g_2$ are real functions continuous for all $\theta \in [\alpha..\beta]$.

Let $f$ be a real-valued function continuous on $R$.

Then:


 * $\displaystyle \iint_R f(x,y) \ \mathrm dA = \int_{\alpha}^{\beta} \int_{g_1(\theta)}^{g_2(\theta)} f(r \cos \theta, r \sin \theta) \ r \ \mathrm dr \ \mathrm d\theta$

I get $x = r\cos \theta, y = r\sin \theta$, but where did that last $r$ come from? I expected it to be $\displaystyle \int_{\alpha}^{\beta} \int_{g_1(\theta)}^{g_2(\theta)} f(r\cos \theta, r\sin \theta) \ \mathrm dr \ \mathrm d\theta$. --GFauxPas 17:12, 26 April 2012 (EDT)


 * This $r$ arises from the Jacobi determinant. In fact, what is done here is a change of variables, which is closely related to Lebesgue Measure of Matrix Image.
 * I suggest a web search on Jacobi determinant and Change of Variables (this is an application of the latter). --Lord_Farin 17:21, 26 April 2012 (EDT)
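
For reference, the standard computation of that Jacobian determinant for the substitution $x = r \cos \theta$, $y = r \sin \theta$ is:

$\displaystyle \frac{\partial \left({x, y}\right)}{\partial \left({r, \theta}\right)} = \det \begin{pmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta \end{pmatrix} = r \cos^2 \theta + r \sin^2 \theta = r$

so $\mathrm d x \ \mathrm d y$ is replaced by $\left\vert{r}\right\vert \ \mathrm d r \ \mathrm d \theta$, which is where the extra factor of $r$ comes from.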


 * Will do, thanks a lot. Another question: why do we need $r \ge 0$? --GFauxPas 00:00, 27 April 2012 (EDT)


 * That's probably to ensure the injectivity of the mapping $\left({r, \theta}\right) \mapsto \left({r \cos \theta, r \sin \theta}\right)$ (except possibly at endpoints or when $r = 0$), so that extra area won't be counted twice in the integral on the right-hand side. So I think either of the requirements $\left({0 \le \beta - \alpha \le 2 \pi}\right) \land \left({r \ge 0}\right)$ or $\left({0 \le \beta - \alpha \le \pi}\right)$ will do. In the latter case (and also the former case) I think the formula reads:
 * $\displaystyle \iint_R f \left({x, y}\right) \ \mathrm d x \ \mathrm d y = \int_{\alpha}^{\beta} \int_{g_1 \left({\theta}\right)}^{g_2 \left({\theta}\right)} \left\vert{r}\right\vert \ f \left({r \cos \theta, r \sin \theta}\right) \ \mathrm d r \ \mathrm d \theta$
 * --abcxyz 01:07, 27 April 2012 (EDT)
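
As a quick sanity check of that formula, take $f = 1$, $g_1 = 0$, $g_2 = 1$, $\alpha = 0$, $\beta = 2 \pi$, so that $R$ is the unit disc and the integral should give its area. A small SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

r = sp.symbols('r', nonnegative=True)
theta = sp.symbols('theta', real=True)

# f = 1, g_1 = 0, g_2 = 1, alpha = 0, beta = 2*pi: R is the unit disc
area = sp.integrate(1 * r, (r, 0, 1), (theta, 0, 2 * sp.pi))
print(area)   # pi, the area of the unit disc
```

Dropping the factor of $r$ would give $2 \pi$ instead of $\pi$, so the factor is genuinely needed.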


 * It sometimes helps to draw pictures, and see what determines the size of the elements of area you are adding up. Once you have seen it properly with your eyes, it's straightforward to get the definition of the integral correct, and easier to prove. --prime mover 02:19, 27 April 2012 (EDT)


 * Thanks a lot, guys. abc, did you mean "injectivity of $\left({x, y}\right) \mapsto \left({r \cos \theta, r \sin \theta}\right)$"? PM, did you mean, e.g., when I encounter a theorem such as this in its raw formulation, or when applying the theorem to a specific case for homework or a test? --GFauxPas 08:24, 27 April 2012 (EDT)


 * As a general rule, if you want to understand something, particularly something like this, draw a picture; it helps to understand what's going on. --prime mover 18:57, 27 April 2012 (EDT)

 * No, I meant $\left({r, \theta}\right) \mapsto \left({r \cos \theta, r \sin \theta}\right)$. Do you see a problem with that? --abcxyz 11:18, 27 April 2012 (EDT)
 * One would need to establish injectivity of $(x,y)\mapsto (r,\theta)$ as well, of which you wrote the inverse. --Lord_Farin 13:02, 27 April 2012 (EDT)
 * For every $\left({r, \theta}\right)$ there exists exactly one $\left({r \cos \theta, r \sin \theta}\right)$ ... Sorry but I don't get your point LF. Would you mind explaining a bit more? --abcxyz 14:08, 27 April 2012 (EDT)
 * The Change of Variables Theorem requires an a.e. diffeomorphism, not simply an injective differentiable function. In this sense, one at least needs a bijective correspondence $(x,y)\leftrightarrow (r,\theta)$ except on a set of measure zero (in this case, the point $x=y=0$ is the set of measure zero, as is the set $r=0$ in the $(r,\theta)$ setting), which is what I was referring to. You wrote the a.e. differentiable mapping $(r,\theta) \mapsto (x,y)$, and I just said that its inverse also needs to be a.e. differentiable. I hope that clarifies it a bit. --Lord_Farin 14:22, 27 April 2012 (EDT)
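
(For concreteness: away from the origin the inverse map can be written explicitly as $\left({x, y}\right) \mapsto \left({\sqrt{x^2 + y^2}, \operatorname{atan2} \left({y, x}\right)}\right)$, which is differentiable except at the origin and along the single ray where the chosen angle convention jumps; both exceptional sets have Lebesgue measure zero, which is why the "almost everywhere" qualification suffices.)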


 * Thanks for the clarification. Isn't the almost everywhere differentiability of the mapping $\left({x, y}\right) \mapsto \left({r, \theta}\right)$ guaranteed by the inverse function theorem? But I don't understand how that really matters. Could you please explain if you don't mind? --abcxyz 18:50, 27 April 2012 (EDT)
 * Yes it is; I have a feeling we are on the same track here but phrasing it a bit differently. I think I simply forgot all the nice theorems of calculus in several variables when I wrote my comments. I hereby terminate this discussion on nothing; my apologies. --Lord_Farin 18:59, 27 April 2012 (EDT)
 * What's a.e.? --GFauxPas 18:36, 27 April 2012 (EDT)

 * See Definition:Almost Everywhere. The applicable measure is Definition:Lebesgue Measure (maybe it's clearer at the moment what it does from Definition:Lebesgue Pre-Measure). --Lord_Farin 18:43, 27 April 2012 (EDT)

Notation for Set
Is $S^2$ reserved for a specific type of Cartesian product? E.g., I see $\Z \times \Z$ rather than $\Z^2$, and $\R^2$ rather than $\R \times \R$, which makes me think there's a subtlety here I'm missing. Is it related to the difference between $\R$ and $\R^1$? --GFauxPas 09:57, 29 April 2012 (EDT)
 * No difference, all other things being equal. --prime mover 11:01, 29 April 2012 (EDT)