User talk:GFauxPas/Archive1

Change to MathWorld citation template
I noticed (based on One-to-One and Strictly Between) that some pages on MathWorld are credited to authors other than Eric Weisstein, and so require that author to be included in the citation.

I have fixed the template (which is now "MathWorld" not "Mathworld", that's just me tidying up) so as to be able to include the author (which, if not given, defaults to the "Weisstein, Eric W." format as per normal).

What you need to do is add "author=author-name" and "authorpage=author-pagename", where "author-name" is the display name of the author and "author-pagename" is the name of the HTML file on MathWorld (not including the full path or the extension).
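For instance, an invocation might look like this (the author name and page name below are hypothetical placeholders, and whatever other parameters the template takes are elided with "..."):

 *  {{MathWorld|...|author = Jane Doe|authorpage = JaneDoe}}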

If the page is given as written by "Weisstein, Eric W." then you should not add the "author" and "authorpage" tags.

I have included this info in the usage section of the Template:MathWorld page itself, but I'm bringing it to your attention because I know you've been active in using it.

Chx. --prime mover 02:55, 31 December 2011 (CST)

Intuitionism / Constructivism
The term which I learned as "intuitionism" seems nowadays to be the same as "constructivism". I found this fascinating article just now:
 * Constructivism is Difficult

on a website which we may want to study.

This may give some background into this whole philosophical quagmire. --prime mover 03:22, 12 February 2012 (EST)


 * Following a course on intuitionistic mathematics at the moment; the two might combine quite well. The lecturer said there will be course notes; I will refer to them if they appear in PDF. --Lord_Farin 18:59, 12 February 2012 (EST)

Theorem Holds in All Models
Anyone know the page name of the theorem that if a theorem is a theorem the theorem has to hold in all models theorem theorem theorem? I can't find it theorem --GFauxPas 08:30, 12 February 2012 (EST)


 * That usually goes by the name of 'Soundness Theorem' (i.e., anything you can prove is true (where true means 'true in all models')). --Lord_Farin 18:59, 12 February 2012 (EST)
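In symbols, the usual generic statement (not necessarily PW's wording): if $\vdash \phi$ then $\models \phi$; that is, every provable sentence holds in all models.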

Attribution of Sum of Reciprocals is Divergent/Proof 1
You left a comment in the "Historical Note" section of Sum of Reciprocals is Divergent (which has now been moved to Sum of Reciprocals is Divergent/Proof 1) to the effect that you have uncovered evidence that it wasn't Bernoulli who discovered this, but it was in fact Oresme (which would have been some 400 years earlier).

Are you able to track down where you found this evidence? It's an interesting snippet of information to add, and it would be good to find a citation for it. --prime mover 05:16, 11 March 2012 (EDT)


 * Larson says:

''One way to show that the harmonic series diverges is attributed to Jakob Bernoulli. He grouped the terms of the harmonic series as follows:''


 * $ 1 + \frac 1 2 + \underbrace{\frac 1 3 + \frac 1 4}_{> \frac 1 2} +  \underbrace{\frac 1 5 + \cdots + \frac 1 8}_{> \frac 1 2} +  \underbrace{\frac 1 9 + \cdots + \frac 1 {16}}_{> \frac 1 2} +  \underbrace{\frac 1 {17} + \cdots + \frac 1 {32}}_{> \frac 1 2} + \cdots$

Larson doesn't finish the proof; that's left as an exercise. But http://mathworld.wolfram.com/HarmonicSeries.html attributes this proof to Oresme. I don't know which is more reliable. --GFauxPas 08:57, 11 March 2012 (EDT)
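For reference, the grouping argument finishes along standard lines (a sketch, not Larson's text): since each bracketed group exceeds $\frac 1 2$, the partial sums satisfy:

 *  $\displaystyle \sum_{n = 1}^{2^{k + 1}} \frac 1 n > 1 + \frac 1 2 + k \cdot \frac 1 2 = 1 + \frac {k + 1} 2$

which is unbounded as $k \to \infty$, so the series diverges.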
 *  Just pointing out that Wolfram MathWorld doesn't say which proof Mengoli, Johann Bernoulli and Jakob Bernoulli used, only that they each had a proof. Do you have a source that they had the same proof? Is it implied in MathWorld? --GFauxPas 09:28, 11 March 2012 (EDT)


 *  I'll take a look in my copy and see what it says, but I was assuming that (since this is the proof that was being discussed on MathWorld) this is what it is. --prime mover 09:40, 11 March 2012 (EDT)


 *  It's also worth pointing out that all the proofs using calculus in some way require results which hadn't been discovered at the time. If there were another simple proof like Proof 1, it would have been eagerly documented by now. --prime mover 10:09, 11 March 2012 (EDT)

Differentiability of Functions of >1 variable
Larson's definition of differentiability for functions of more than one variable is very non-intuitive (I'm going to use $f:x,y \mapsto f(x,y)$ for ease of asking the question, though the question is for any number of variables):


 *  $f$ is differentiable at $(x_0, y_0) \iff$ the increment $\Delta z = f(x_0 + \Delta x, y_0 + \Delta y) - f(x_0, y_0)$ can be written as:


 * $\Delta z = f_x(x_0,y_0)\Delta x + f_y(x_0,y_0)\Delta y + \varepsilon_1 \Delta x + \varepsilon_2 \Delta y$

where $\varepsilon_1, \varepsilon_2 \to 0$ as $(\Delta x, \Delta y) \to (0,0)$.

Is there an equivalent definition that's more intuitive? Why not define "differentiable" as "differentiable iff all partial derivatives exist"? --GFauxPas 12:42, 28 March 2012 (EDT)


 * As to your last question: Because it isn't enough; derivatives in all directions need to exist.
 * A general definition can be given as follows:


 * A mapping $f: \R^n \to \R^p$ (or defined on some subset of $\R^n$) is said to be differentiable at $a \in \R^n$ iff:
 * There exists a linear mapping $Df(a):\R^n\to\R^p$ (that is, simply put, a matrix) such that:
 * $\displaystyle \lim_{\left\Vert{h}\right\Vert\to 0, h \in \R^n} \frac {\left\Vert{f(a+h)-f(a)-Df(a)h}\right\Vert} {\left\Vert{h}\right\Vert} = 0$


 *  This comes down to the existence of a linear approximation $Df(a)$ of $f$ near $a$ which is good enough to make the limit zero (for comparison, take $n = p = 1$: it reduces to the familiar expression for $f:\R\to\R$). Note that in the fraction, the norm in the numerator is the one on $\R^p$, while the one in the denominator is on $\R^n$. Note also that $Df(a)h$ means 'the mapping $Df(a)$ evaluated at $h \in \R^n$', not your standard multiplication (they are the same iff $n = p = 1$; in general it is matrix multiplication with a vector). Finally, note that this is different from the existence of all partial derivatives, since the $h \in \R^n$ range over a whole sphere around zero, not just along the coordinate axes. If it is not entirely clear, please say so, and I will demonstrate by means of a small example. --Lord_Farin 14:35, 28 March 2012 (EDT)
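A quick numerical illustration of this definition (a sketch only: the test function $f(x, y) = x^2 y$ and the NumPy code are illustrative choices, not part of the discussion):

 import numpy as np
 
 # A made-up test function f : R^2 -> R and its candidate derivative at a,
 # the 1 x 2 Jacobian written as a vector: grad f = (2xy, x^2).
 def f(v):
     x, y = v
     return x**2 * y
 
 a = np.array([1.0, 2.0])
 Df = np.array([2 * a[0] * a[1], a[0] ** 2])  # = (4, 1) at a = (1, 2)
 
 # The quotient ||f(a+h) - f(a) - Df(a)h|| / ||h|| should tend to 0 as ||h|| -> 0,
 # with h approaching 0 from arbitrary directions, not just along the axes.
 rng = np.random.default_rng(0)
 for scale in (1e-1, 1e-2, 1e-3, 1e-4):
     h = scale * rng.standard_normal(2)
     q = abs(f(a + h) - f(a) - Df @ h) / np.linalg.norm(h)
     print(f"||h|| ~ {scale:.0e}: quotient = {q:.2e}")

The quotient shrinks roughly in proportion to $\left\Vert{h}\right\Vert$, as expected for a smooth function.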


 *  Alternatively, see this, p. 792 --Lord_Farin 14:40, 28 March 2012 (EDT)


 * How incredibly convenient that in today's Linear Algebra class I first learned about linear maps as matrices! An example would be great. --GFauxPas 15:11, 28 March 2012 (EDT)


 * I thought that the existence of derivatives in all directions does not necessarily ensure differentiability. –Abcxyz (talk | contribs) 20:50, 28 March 2012 (EDT)
 * Correct, but they need to exist for differentiability to possibly apply. I will hopefully get to the example later today. --Lord_Farin 04:42, 29 March 2012 (EDT)

Okay, so let $f: \R^{2n}\simeq\R^n \times \R^n \to \R, (x,y)\mapsto \left\langle{x,y}\right\rangle$.

Say we want to know if $f$ is differentiable at $(a,b)\in\R^n\times\R^n$; then let $h = (h_1,h_2)\in\R^{2n}$, and compute:
 *  $f(a + h_1, b + h_2) - f(a, b) = \left\langle{a + h_1, b + h_2}\right\rangle - \left\langle{a, b}\right\rangle = \left\langle{h_1, b}\right\rangle + \left\langle{a, h_2}\right\rangle + \left\langle{h_1, h_2}\right\rangle$

Using Cauchy-Schwarz, the last term is bounded by $\left\Vert{h}\right\Vert^2$, as the norms of $h_1, h_2$ are dominated by that of $h$. What remains is linear in $h$ (a sum of inner products). Thus, putting $Df((a,b)) = (h\mapsto \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle)$, we compute the limit to be zero (by the Cauchy-Schwarz argument).

There is a theorem (not too hard) establishing that the linear mapping $Df((a,b))$ is unique; hence we conclude that it equals the given expression (compare the case $n = 1$ for further insight). Hopefully, this slightly nontrivial example gives a bit of intuition. --Lord_Farin 06:42, 29 March 2012 (EDT)
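Spelling out the Cauchy-Schwarz estimate, for completeness (standard, not part of the original message):

 *  $\dfrac {\left\vert{\left\langle{h_1, h_2}\right\rangle}\right\vert} {\left\Vert{h}\right\Vert} \le \dfrac {\left\Vert{h_1}\right\Vert \left\Vert{h_2}\right\Vert} {\left\Vert{h}\right\Vert} \le \dfrac {\left\Vert{h}\right\Vert^2} {\left\Vert{h}\right\Vert} = \left\Vert{h}\right\Vert \to 0$

so the limit in the definition is indeed zero.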


 * Also, when considering $f:\R\to\R$, the standard derivative $f'$ is obtained by the canonical identification $\operatorname{Lin}(\R,\R)\simeq \R,Df(a)\mapsto Df(a)1 = f'(a)$. Because $Df(a)1$ is also often denoted $D_af(1)$, this is the origin of the possible confusion I expressed earlier. --Lord_Farin 06:45, 29 March 2012 (EDT)


 *  This is significantly harder than what we're doing in Calc III, but I'm getting something out of it, thanks! I'm not going to say that I get it completely, but I'm okay with that; I haven't even finished Calc III yet. Is this definition equivalent to Larson's for $\R^2 \to \R$? --GFauxPas 09:21, 29 March 2012 (EDT)


 * I would say so. In matrix form, $Df(a)$ will always be the matrix of partial derivatives (the Jacobian) with respect to the chosen basis. That means, for $\R^2\to\R$, that it becomes a row matrix $(f_x(a), f_y(a))$ (which upon multiplication by the column vector $(\Delta x, \Delta y)$ becomes the first part of Larson's expression; the $\varepsilon$s correspond to the term $\left\langle{h_1,h_2}\right\rangle$ in the example). It would be rather awkward had Larson an incompatible definition of something basic like differentiation. --Lord_Farin 09:45, 29 March 2012 (EDT)
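Written out for comparison with Larson's formula (standard matrix algebra):

 *  $Df(a) \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} f_x(a) & f_y(a) \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = f_x(a) \Delta x + f_y(a) \Delta y$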


 * I have a much better understanding of Larson's def'n now after discussing it with my Linear Algebra professor.


 * Side note: Has anyone seen $f^{\,'}_x(x,y), f^{\,''}_{xy}(x,y)$ for $\dfrac {\partial z}{\partial x}, \dfrac {\partial^2 z}{\partial y \partial x}$? I keep on wanting to put a prime on it --GFauxPas 10:48, 30 March 2012 (EDT)


 *  No, that notation isn't used. You have to know what $f$ is differentiated with respect to, which is why subscripts are used instead of primes; the prime is strictly reserved for the total derivative, not partial ones. --prime mover 13:10, 30 March 2012 (EDT)


 * You mean that $f'$ is seriously used for $Df$ (or $df$, if in differential geometry)?! That's new to me. --Lord_Farin 17:09, 30 March 2012 (EDT)


 *  Think so. May be wrong. Point is, it is never used for partial derivatives. I think I met it in the context of fluid mechanics but I misremember the details. --prime mover 18:09, 30 March 2012 (EDT)

Definite Integral Definition
Regarding the "subdivision $P$" in Definition:Definite Integral, what would the subdivision be if it's a function from $\R^n$ to $\R$?

Larson's definitions all involve an alternative formulation that is disliked by ProofWiki members because convergence is more finicky:


 *  $\displaystyle \lim_{\Vert \Delta \Vert \to 0} \sum_{i = 1}^n f\left({x_i}\right) \ \Delta x_i$

What's the equivalent definition in terms of the supremum over a subdivision in higher dimensions? I.e.,


 *  $\displaystyle \iiint_Q f\left({x, y, z}\right) \ \mathrm d V = \lim_{\Vert \Delta \Vert \to 0} \sum_{i = 1}^n f\left({x_i, y_i, z_i}\right) \ \Delta V_i$

where $\Delta V_i = \Delta x_i \, \Delta y_i \, \Delta z_i$ and $Q \subset \R^3$.

How would you convert that to a definition analogous to what PW has for a single definite integral? --GFauxPas 12:22, 4 May 2012 (EDT)


 *  Take a look at Definition:Real Interval at the section that mentions multi-dimensional intervals. But I suspect that a complete analysis of the problem at the same level as done for single-dimension definite integrals may not be the correct way to go. It's a long time since I did this, but I think that beyond an intuitive level (slices, soldiers and croutons) there is no need to go into the same level of detail: having established the result in one dimension, expanding it to more dimensions is an inductive process from there, or something. --prime mover 18:07, 4 May 2012 (EDT)
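For reference, the Darboux-style analogue in three dimensions runs along these standard lines (a sketch, not PW's wording): a subdivision $P$ of a box $Q = \left[{a_1 \,.\,.\, b_1}\right] \times \left[{a_2 \,.\,.\, b_2}\right] \times \left[{a_3 \,.\,.\, b_3}\right]$ is a choice of subdivision of each of the three intervals, cutting $Q$ into sub-boxes $Q_1, \ldots, Q_m$, and one defines:

 *  $\displaystyle L \left({P}\right) = \sum_{i = 1}^m \left({\inf_{Q_i} f}\right) \operatorname{vol} \left({Q_i}\right), \quad U \left({P}\right) = \sum_{i = 1}^m \left({\sup_{Q_i} f}\right) \operatorname{vol} \left({Q_i}\right)$

with $f$ integrable over $Q$ iff $\sup_P L \left({P}\right) = \inf_P U \left({P}\right)$, that common value being the integral.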

Bases for Matrix Spaces
To prove, for example, that:


 * $\mathcal B_1 = \left({\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix},\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} }\right)$

is an ordered basis of $\mathbf M_2\left({\R}\right)$, my professor says it's enough on a test of his to say something like:

"because $T: \mathbf M_2\left({\R}\right) \to \R^4, \begin{bmatrix} a & b \\ c & d \end{bmatrix} \mapsto \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}$ is an isomorphism..."

and then use that $\left({\mathbf e_i}\right)_1^4$ is an ordered basis, and "isomorph" back.

Q1) Do we have such a theorem up on PW?

Q2) Is that handwaving by PW standards?

--GFauxPas 11:20, 6 May 2012 (EDT)
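As a side note, a quick NumPy sanity check of the vectorise-and-check idea (purely illustrative; the use of matrix rank here is my own choice, not the professor's wording):

 import numpy as np
 
 # The four matrices of B_1, flattened by the professor's map
 # T : M_2(R) -> R^4, [[a, b], [c, d]] |-> (a, b, c, d).
 B1 = [np.array([[1, 0], [0, 0]]),
       np.array([[0, 1], [0, 0]]),
       np.array([[0, 0], [1, 0]]),
       np.array([[0, 0], [0, 1]])]
 V = np.stack([M.flatten() for M in B1])  # each row is T(M) in R^4
 
 # Four vectors in R^4 form a basis iff this matrix has rank 4.
 print(np.linalg.matrix_rank(V))  # prints 4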


 * For a start, an isomorphism is with respect to one or more operations. In this context the nature of the operation(s) is unclear. Addition, yes, that's clear and the isomorphism is trivial to prove. But for multiplication this is a different matter altogether.


 * And I still contend that the concept of an "ordered basis" of $\mathbf M_2\left({\R}\right)$ is claptrap. There is no canonical ordering of the basis elements so how can you order the bloody things? --prime mover 11:24, 6 May 2012 (EDT)


 * Oh, right, I should have said $\left( { \mathbf M_2\left({\R}\right), +, \circ } \right) \to \left({\R^4, +, \circ}\right)$ where $+$ is matrix addition, and $\circ$ is multiplication by a scalar in $\R$.


 * And regarding ordered bases on $\mathbf M_2$, Fraleigh wants to do things like:


 * $\begin{bmatrix} 1 & -2 \\ 3 & \pi \end{bmatrix} = \left({1,-2,3,\pi}\right)_{\mathcal B_{1}}$


 *  I appreciate your enthusiasm to get stuff done, but if this is the language Fraleigh is using, he is introducing unnecessary confusion by being shamefully lax and imprecise. I understand that this and the equally appalling Larson are the only books you've been given for your course, but they don't do a very good job. --prime mover 12:34, 6 May 2012 (EDT)
 *  I apologize for not knowing what's considered a good text and what's not. Hopefully my judgement will become better as I'm exposed to more. Until then, I just hope I'm doing more good than harm. Please point out what you don't like about my contributions, because I can learn quickly, and I'll do my best not to make a mess in the meantime. Can you think of a way of avoiding these sorts of problems? If it means contributing less, so be it, but I think most of what I put up isn't that bad. --GFauxPas 13:24, 6 May 2012 (EDT)
 * What do you think of this online text I found? Linear Algebra Done Wrong. I haven't looked at it much, but if it's better than Fraleigh I can start using it instead. --GFauxPas 13:29, 6 May 2012 (EDT)


 * Shallow. --prime mover 15:11, 6 May 2012 (EDT)
 * Well, do you have any suggested texts? --GFauxPas 15:13, 6 May 2012 (EDT)
 *  Depends on what you want to do. If you want to learn how vectors work and how to pass exams, and use this as a boost towards the basics of applied mathematics and physics, then the ones you have are probably adequate. If, however, you want to contribute towards a website of teaching materials which provides an axiomatic derivation of the current status of pure mathematics, then I'd take a good long look at Seth Warner's Modern Algebra, Paul Halmos's Naive Set Theory, Hartley & Hawkes' Rings, Modules and Linear Algebra, and probably for some more background Clark's Elements of Modern Algebra and Steen & Seebach's Counterexamples in Topology. For something really basic and accessible on abstract algebra try Whitelaw's Introduction to Abstract Algebra, or there's R. B. Ash's Abstract Algebra. There's a large number of books referenced on the Books page of this site, and on the community portal there are plenty of links to browse. --prime mover 15:30, 6 May 2012 (EDT)
 * Alright then, I should look into those. Until then, I'd appreciate you continuing to point out when Fraleigh or Larson is being sub-PW standards --GFauxPas 15:46, 6 May 2012 (EDT)
 * No worries. I'm delighted to have been invited to let my prejudices hang out for all to see. --prime mover 16:00, 6 May 2012 (EDT)

Here is the Fraleigh / Beauregard page on Amazon:
 * http://www.amazon.com/Linear-Algebra-Third-Edition-Fraleigh/dp/0201526751/ref=cm_cr_pr_product_top

You might be interested to read the comments. There were marginally fewer 1-star comments than 5-star ones, but only because the latter were beefed up by people who think a review of a book is for expressing how happy you are with the delivery service ...

The verdict, then: a good-ish reference work, but not good for learning the subject from scratch. --prime mover 10:47, 7 May 2012 (EDT)