User:GFauxPas/Sandbox

Welcome to my sandbox, you are free to play here as long as you don't track sand onto the main wiki. --GFauxPas 09:28, 7 November 2011 (CST)


 * $u \ v \ \mathsf{u} \ \mathsf{v} \ \nu \ \upsilon$

Anyone else have a hard time distinguishing between $u$ and $v$? I would like it to look more like this; does it confuse anyone else? It seems PW doesn't have the upgreek package. --GFauxPas 07:49, 27 January 2012 (EST)


 * Nope. Multiple years of extensive TeX writing and reading have trained my eye. I agree that the referenced $v$ looks more distinguished, but I imagine it is hard to implement. --Lord_Farin 08:08, 27 January 2012 (EST)

Convergence
We have that $\displaystyle \lim_{n \to +\infty}a_n = 0$, by hypothesis.

To show that $\displaystyle \sum_{k=1}^n{n \choose k}\frac {{a_n}^{k-1}} {n^k}$ converges, observe that, for $n$ large enough:

Recall that $a_n \to 0$, by hypothesis.

This means that $0 \le \left\vert{a_n}\right\vert \le 1$ for sufficiently large $n$, because $a_n \to 0$.

Then, we can say:


 * $\displaystyle \left \vert{ \frac{ 1 - {a_n }^k } {1 - a_n} }\right\vert \le \left \vert{ \frac{ 1 - a_n } {1 - a_n} }\right\vert = 1$ as $n \to +\infty$, because $k \ge 1$.

Hence $\displaystyle \left \vert{\sum_{k=1}^n{n \choose k}\frac { {a_n}^{k-1} } {n^k} }\right\vert$ converges, by the Comparison Test.

That means that $\displaystyle \sum_{k=1}^n{n \choose k}\frac { {a_n}^{k-1} } {n^k}$ converges as well, because Absolutely Convergent Series is Convergent.

Is this okay? --GFauxPas 08:54, 6 March 2012 (EST)


 * Unfortunately, no. I think I got carried away when I said that the argument generalized to arbitrary complex sequences. However, it functions in the sense that we have demonstrated that the sequence is bounded. This means that we can estimate the (modulus of the) product of this thing with $a_n$ by a fixed number times the modulus of $a_n$, which then converges to zero. This will suffice to prove that the limit of the whole expression is zero because $|a_n|\to0$ does imply $a_n\to0$ (unlike when $0$ is replaced by another complex number). These tedious considerations are necessary because, as mentioned before, the sequence is not in the form of a series, hence results for series can't be applied. I hope you grasp the sketchy adapted approach. --Lord_Farin 13:56, 6 March 2012 (EST)
 * But we don't care what $\displaystyle \sum_{k=1}^n{n \choose k}\frac {{a_n}^{k-1}} {n^k}$ converges to, just that it converges to some number... Just to try to separate out what I know and what I don't know: is my proof legitimate for $a_n$ a real sequence? I'm trying to find a way to deal with this strange not-a-series sequence that kind of looks like a series but isn't, but maybe it is, just a little. I'll leave complex analysis for another time. --GFauxPas 14:08, 6 March 2012 (EST)


 * I'm not so sure anymore. The proof as it stands now seems to only be guaranteed to work when all $a_n$ are positive real. However, the approach sketched for the complex case works; a page establishing $a_n\to0, |b_n|\le B\implies a_nb_n \to 0$ should be established (if it isn't already). That's the theorem we want to invoke. --Lord_Farin 18:04, 6 March 2012 (EST)
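For what it's worth, the binomial theorem puts the sum in closed form, which makes the boundedness claim easy to sanity-check numerically. A minimal sketch, assuming a sample sequence $a_n = 1 / \sqrt n$ of my own choosing (the function names are also mine, not from the discussion):

```python
from math import comb

def partial_sum(n, a):
    """sum_{k=1}^n C(n,k) * a^(k-1) / n^k, computed term by term."""
    x = a / n
    return sum(comb(n, k) * x**k for k in range(1, n + 1)) / a

def closed_form(n, a):
    """Binomial theorem: sum_{k=1}^n C(n,k) * (a/n)^k = (1 + a/n)^n - 1,
    so the whole sum equals ((1 + a/n)^n - 1) / a."""
    return ((1 + a / n)**n - 1) / a

# For this real sample sequence a_n -> 0, the sum stays bounded
# (it in fact tends to 1), consistent with the boundedness argument above:
for n in (10, 100, 1000):
    print(n, closed_form(n, 1 / n**0.5))
```

This is only a numerical illustration for one real sequence, not a proof; but it fits the theorem sketched above: the factor is bounded, so its product with $a_n \to 0$ tends to $0$.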

Vector Stuff
Let $f_1, f_2, \ldots, f_n$ be real functions of $t$.

Let $\mathbb T \subset \R, \mathbb Y \subset \R^n$ (where usually $n \ge 2$).

Let $\mathbf r$ be a mapping $\mathbb T \to \mathbb Y$ that maps each $t \in \mathbb T$ to a vector $\langle{f_1 \left({t}\right), f_2 \left({t}\right), \ldots, f_n \left({t}\right)}\rangle \in \mathbb Y$.

Then $\mathbf r$ is said to be a vector-valued function (of the parameter $t$).

Each of $f_1, f_2, \ldots, f_n$ is said to be a component function of $\mathbf r$.

If $\mathbb T$ is not explicit, it is taken to be the intersection of the domains of $f_1, f_2, \ldots, f_n$.
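A concrete instance may help fix the definition. A small sketch with $n = 3$ and component functions $\cos t$, $\sin t$, $t$ (a circular helix, my own example), showing both the tuple form and the equivalent sum against the standard basis vectors:

```python
import math

# A hypothetical vector-valued function r: R -> R^3 with component
# functions f_1 = cos, f_2 = sin, f_3 = identity (a circular helix).
def r(t):
    return (math.cos(t), math.sin(t), t)

# The same map written as f_1(t) e_1 + f_2(t) e_2 + f_3(t) e_3,
# with e_1, e_2, e_3 the standard basis vectors of R^3.
def r_via_basis(t):
    basis = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
    fs = (math.cos(t), math.sin(t), t)
    return tuple(sum(f * e[i] for f, e in zip(fs, basis)) for i in range(3))
```

Here $\mathbb T = \R$ (each component function is defined everywhere), and the two forms agree pointwise.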

Larson treats this definition as equivalent to $\mathbf r(t) = f_1(t)\mathbf{e_1} + \cdots + f_n(t)\mathbf{e_n}$. I'm planning on adding this representation as a theorem, and the definition up there as the main definition. I think it's more consistent with PW's mentality. Thoughts? Also, in the experience of people who've seen more books than me, what are the most common notations for $\mathbf r: \R \to \R^n$? --GFauxPas 15:35, 14 March 2012 (EDT)


 * You have guessed correctly. In principle, the function $f$ may be represented without coordinate functions ('intrinsically'); it is therefore at least clear that the very explicit coordinate definition of Larson is not the way to go. It is probably too involved to really define the stuff only intrinsically, as this is not the course of action when one actually wants to compute things; so I agree with your suggested course of action. However, I dislike the notation $\Bbb Y^n$ as it may be unclear whether we take the power of $\Bbb Y$ or we just define an auxiliary symbol $\Bbb Y^n$. --Lord_Farin 17:44, 14 March 2012 (EDT)


 * I guess a quite common notation for $\mathbf r$ is $\gamma$, mentally denoting a curve. But then, this might mostly apply to at least continuous such functions. Note that I haven't touched this area in about two years, so I'm not really authoritative here. --Lord_Farin 17:52, 14 March 2012 (EDT)

Arc length
Larson has been dealing exclusively with $\R^2$ and $\R^3$; the extension to $\R^n$ was a natural generalization of what he had. But regarding arc length, I'm not sure it makes sense to define it for $\R^n$, $n > 3$. Does it? --GFauxPas 17:22, 16 March 2012 (EDT)


 * Sure, curves appear everywhere. If you want physical interpretation, consider further dimensions as parameters that can be varied. --Lord_Farin 18:14, 16 March 2012 (EDT)
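Indeed, the polygonal-approximation definition of arc length (and the formula $\displaystyle \int_a^b \left\Vert{\mathbf r' \left({t}\right)}\right\Vert \, \mathrm d t$) never mentions the dimension, so it works verbatim in $\R^n$. A rough numerical sketch, assuming a curve in $\R^4$ of my own choosing; it has constant speed $\sqrt 5$, so its length over $[0, 2\pi]$ is $2\pi\sqrt 5$:

```python
import math

def arc_length(r, a, b, n=10000):
    """Approximate the length of a curve r: [a, b] -> R^m by summing
    the lengths of n chords -- this definition works in any dimension m."""
    points = [r(a + (b - a) * i / n) for i in range(n + 1)]
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# A hypothetical curve in R^4 with constant speed sqrt(5):
# its arc length over [0, 2*pi] should be 2*pi*sqrt(5).
def curve(t):
    return (math.cos(t), math.sin(t), math.cos(2 * t), math.sin(2 * t))
```

Nothing in `arc_length` cares whether the tuples have 2, 3, or 40 coordinates.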

Cross Product

 * $\mathbf a = \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}$, $\mathbf b = \begin{bmatrix} b_x \\ b_y \\ b_z \end{bmatrix}$, $\mathbf c = \begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix}$


 * $\vdash \mathbf a \times \left({\mathbf b \times \mathbf c}\right) = \left({\mathbf{a \cdot c} }\right) \mathbf b - \left({\mathbf{a \cdot b} }\right) \mathbf c$
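The triple product expansion can be spot-checked numerically. A minimal sketch with hypothetical sample vectors (all helper names are my own; integer inputs keep the arithmetic exact):

```python
def dot(u, v):
    """Euclidean dot product."""
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    """Cross product in R^3."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

def scale(c, u):
    return tuple(c * x for x in u)

# Check a x (b x c) = (a . c) b - (a . b) c on sample vectors:
a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 9)
lhs = cross(a, cross(b, c))
rhs = sub(scale(dot(a, c), b), scale(dot(a, b), c))
```

One check on one triple is of course not a proof, but it catches sign errors in the $\left({\mathbf{a \cdot c}}\right) \mathbf b - \left({\mathbf{a \cdot b}}\right) \mathbf c$ form quickly.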