User talk:GFauxPas/Archive1

Change to MathWorld citation template
I noticed (based on One-to-One and Strictly Between) that some pages on MathWorld are credited to authors other than Eric Weisstein, and so require that author to be included in the citation.

I have fixed the template (which is now "MathWorld" not "Mathworld", that's just me tidying up) so as to be able to include the author (which, if not given, defaults to the "Weisstein, Eric W." format as per normal).

What you need to do is add "author=author-name" and "authorpage=author-pagename", where "author-name" is the display name of the author and "author-pagename" is the name of the HTML file on MathWorld (not including the full path, not including the extension).

An example:

which gives:



If the page is given as written by "Weisstein, Eric W." then you should not add the "author" and "authorpage" tags.

I have included this info in the usage section of the Template:MathWorld page itself, but I'm bringing it to your attention because I know you've been active in using it.

Chx. --prime mover 02:55, 31 December 2011 (CST)

Differentiability of Functions of >1 variable
Larson's definition of differentiability for functions of more than one variable is very non-intuitive (I'm going to use $f:x,y \mapsto f(x,y)$ for ease of asking the question, though the question applies to any number of variables):


 * $f$ is differentiable at $(x_0,y_0)$ iff the increment $\Delta z = f(x_0 + \Delta x, y_0 + \Delta y) - f(x_0,y_0)$ can be written as:


 * $\Delta z = f_x(x_0,y_0)\Delta x + f_y(x_0,y_0)\Delta y + \varepsilon_1 \Delta x + \varepsilon_2 \Delta y$

such that $\varepsilon_1, \varepsilon_2 \to 0$ as $(\Delta x, \Delta y) \to (0,0)$.

Is there an equivalent definition that's more intuitive? Why not define "differentiable" as "differentiable iff all partial derivatives exist"? --GFauxPas 12:42, 28 March 2012 (EDT)


 * As to your last question: Because it isn't enough; derivatives in all directions need to exist.
 * A general definition can be given as follows:


 * A mapping $f: \R^n \to \R^p$ (or defined on some subset of $\R^n$) is said to be differentiable at $a \in \R^n$ iff:
 * There exists a linear mapping $Df(a):\R^n\to\R^p$ (that is, simply put, a matrix) such that:
 * $\displaystyle \lim_{\left\Vert{h}\right\Vert\to 0, h \in \R^n} \frac {\left\Vert{f(a+h)-f(a)-Df(a)h}\right\Vert} {\left\Vert{h}\right\Vert} = 0$


 * This comes down to the existence of a linear approximation $Df(a)$ of $f$ near $a$ which is good enough to make the limit zero (for comparison, you can take $n=p=1$; it will reduce to the familiar expression for $f:\R\to\R$). Note that in the fraction, the norm in the numerator is in $\R^p$, while the one in the denominator is in $\R^n$. Note that $Df(a)h$ means 'the mapping $Df(a)$ evaluated at $h \in \R^n$', not your standard multiplication (well, they are the same iff $n=p=1$; alternatively, this is matrix multiplication with a vector). Note that this is different from the existence of all partial derivatives, since the $h \in \R^n$ range over a whole ball around zero, not just along the coordinate axes. If it is not entirely clear, please say so, and I will demonstrate by means of a small example. --Lord_Farin 14:35, 28 March 2012 (EDT)


 * Alternatively, see this, p. 792 --Lord_Farin 14:40, 28 March 2012 (EDT)
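For anyone who would like to see the limit in the definition behave numerically, here is a small sketch. The map $f$, the point $a$ and the direction are my own choices, not from the discussion; $Df(a)$ is taken to be the Jacobian matrix:

```python
import numpy as np

def f(v):
    # sample smooth map f : R^2 -> R^2 (my own choice)
    x, y = v
    return np.array([x ** 2 + y, x * y])

def Df(v):
    # Jacobian matrix of f at v: the candidate linear map in the definition
    x, y = v
    return np.array([[2 * x, 1.0],
                     [y, x]])

a = np.array([1.0, 2.0])
d = np.array([0.6, 0.8])          # a fixed unit direction

ratios = []
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * d
    num = np.linalg.norm(f(a + h) - f(a) - Df(a) @ h)
    ratios.append(num / np.linalg.norm(h))

# the difference quotient shrinks with ||h||, as the definition requires
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
```

This is of course no proof, only an illustration that the quotient in the definition goes to zero for a smooth map.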


 * How incredibly convenient that in today's Linear Algebra class I first learned about linear maps as matrices! An example would be great. --GFauxPas 15:11, 28 March 2012 (EDT)


 * I thought that the existence of derivatives in all directions does not necessarily ensure differentiability. –Abcxyz (talk | contribs) 20:50, 28 March 2012 (EDT)
 * Correct, but they need to exist for differentiability to possibly apply. I will hopefully get to the example later today. --Lord_Farin 04:42, 29 March 2012 (EDT)

Okay, so let $f: \R^{2n}\simeq\R^n \times \R^n \to \R, (x,y)\mapsto \left\langle{x,y}\right\rangle$.

Say we want to know if $f$ is differentiable at $(a,b)\in\R^n\times\R^n$; then let $h = (h_1,h_2)\in\R^{2n}$, and compute:
 * $f(a + h_1, b + h_2) - f(a,b) = \left\langle{a + h_1, b + h_2}\right\rangle - \left\langle{a,b}\right\rangle = \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle + \left\langle{h_1,h_2}\right\rangle$

Using Cauchy-Schwarz, the last term can be bounded by $\left\Vert{h}\right\Vert^2$, as the norms of $h_1,h_2$ are dominated by that of $h$. What remains is linear in $h$ (a sum of inner products). Thus, putting $Df((a,b)) = (h\mapsto \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle)$, the limit is seen to be zero (by the Cauchy-Schwarz argument).

There is a theorem (not too hard) establishing that the linear mapping $Df((a,b))$ is unique; hence conclude that it equals the given expression (compare the case that $n=1$ for further insights). Hopefully, this slightly nontrivial example gives a bit of insight. --Lord_Farin 06:42, 29 March 2012 (EDT)
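A numeric check of the worked example above, with sample vectors of my own choosing for $a$, $b$ and the direction of $h$; the candidate derivative is the linear map $h \mapsto \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle$ from the example:

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5])    # sample point (a, b) in R^3 x R^3
b = np.array([0.3, 1.0, 2.0])

def f(x, y):
    # f(x, y) = <x, y>, the inner product
    return np.dot(x, y)

def Df_ab(h1, h2):
    # the candidate linear map from the example, evaluated at h = (h1, h2)
    return np.dot(h1, b) + np.dot(a, h2)

d1 = np.array([0.1, 0.2, -0.3])   # fixed direction for h
d2 = np.array([0.4, -0.1, 0.2])

ratios = []
for t in [1e-1, 1e-2, 1e-3]:
    h1, h2 = t * d1, t * d2
    num = abs(f(a + h1, b + h2) - f(a, b) - Df_ab(h1, h2))
    den = np.linalg.norm(np.concatenate([h1, h2]))
    ratios.append(num / den)

# the remainder is exactly <h_1, h_2> = t^2 <d_1, d_2>, so the ratio ~ t
assert ratios[-1] < ratios[0]
```

The remainder term really is the inner product of the two halves of $h$, so the quotient shrinks linearly in $t$, exactly as the Cauchy-Schwarz argument predicts.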


 * Also, when considering $f:\R\to\R$, the standard derivative $f'$ is obtained by the canonical identification $\operatorname{Lin}(\R,\R)\simeq \R,Df(a)\mapsto Df(a)1 = f'(a)$. Because $Df(a)1$ is also often denoted $D_af(1)$, this is the origin of the possible confusion I expressed earlier. --Lord_Farin 06:45, 29 March 2012 (EDT)


 * This is significantly harder than what we're doing in Calc III, but I'm getting something out of it, thanks! I'm not going to say that I get it completely, but I'm okay with that; I haven't even finished Calc III yet. Is this definition equivalent to Larson's for $\R^2 \to \R$? --GFauxPas 09:21, 29 March 2012 (EDT)


 * I would say so. In matrix form, $Df(a)$ will always be the matrix of partial derivatives (the Jacobian) with respect to the chosen basis. That means, for $\R^2\to\R$, that it becomes a row matrix $(f_x(a), f_y(a))$ (which upon multiplication by the column vector $(\Delta x, \Delta y)$ becomes the first part of Larson's expression; the $\varepsilon$s correspond to the term $\left\langle{h_1,h_2}\right\rangle$ in the example). It would be rather awkward if Larson had an incompatible definition of something as basic as differentiation. --Lord_Farin 09:45, 29 March 2012 (EDT)
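The correspondence with Larson's form can be seen numerically; here the function $f(x,y) = x^2 y$ and the point are my own choices, not from Larson:

```python
import numpy as np

def f(x, y):
    # sample function R^2 -> R (my own choice)
    return x ** 2 * y

x0, y0 = 1.0, 2.0
fx, fy = 2 * x0 * y0, x0 ** 2        # partial derivatives at (x0, y0)

dx, dy = 1e-3, -2e-3
dz = f(x0 + dx, y0 + dy) - f(x0, y0)     # the increment Delta z
linear = fx * dx + fy * dy               # row matrix (f_x, f_y) times (dx, dy)

# dz - linear plays the role of eps_1 dx + eps_2 dy: it is small
# compared to ||(dx, dy)||, so the eps's go to zero
assert abs(dz - linear) < 1e-2 * np.hypot(dx, dy)
```

The leftover `dz - linear` is exactly the part Larson sweeps into $\varepsilon_1 \Delta x + \varepsilon_2 \Delta y$.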


 * I have a much better understanding of Larson's def'n now after discussing it with my Linear Algebra professor.


 * Side note: Has anyone seen $f^{\,'}_x(x,y), f^{\,''}_{xy}(x,y)$ for $\dfrac {\partial z}{\partial x}, \dfrac {\partial^2 z}{\partial y \partial x}$? I keep on wanting to put a prime on it --GFauxPas 10:48, 30 March 2012 (EDT)


 * No, that notation isn't used. You have to know what $f$ is differentiated with respect to, which is why subscripts are used instead of primes; the prime is strictly reserved for the total derivative, not partial ones. --prime mover 13:10, 30 March 2012 (EDT)


 * You mean that $f'$ is seriously used for $Df$ (or $df$, if in differential geometry)?! That's new to me. --Lord_Farin 17:09, 30 March 2012 (EDT)


 * Think so. May be wrong. Point is, it is never used for partial derivatives. I think I met it in the context of fluid mechanics but I misremember the details. --prime mover 18:09, 30 March 2012 (EDT)

Definite Integral Definition
Regarding the "subdivision $P$" in Definition:Definite Integral, what would the subdivision be if it's a function from $\R^n$ to $\R$?

Larson's treatment uses an alternative definition, one disliked by proofwiki members because its convergence is more finicky:


 * $\displaystyle \lim_{\Vert \Delta \Vert \to 0} \sum_{i \mathop = 1}^n f\left({x_i}\right) \ \Delta x_i$

What's the equivalent definition of the supremum of a subdivision in higher dimensions? I.e.,


 * $\displaystyle \int \int \int_Q f\left({x,y,z}\right) \ \mathrm dV = \lim_{\Vert \Delta \Vert \to 0} \sum_{i \mathop = 1}^n f\left({x_i, y_i, z_i}\right) \ \Delta V_i$

where $\Delta V_i = \Delta x_i \Delta y_i \Delta z_i$, $Q \subset \R^3$

How would you convert that to a definition analogous to what PW has for a single definite integral? --GFauxPas 12:22, 4 May 2012 (EDT)


 * Take a look at Definition:Real Interval at the section that mentions multi-dimensional intervals. But I suspect that a complete analysis of the problem at the same level as done for single-dimension definite integrals may not be the correct way to go. Long time since I did this, but I think beyond an intuitive level (slices, soldiers and croutons) there is no need to go into the same level of detail - having established the result in 1 dimension, expanding it to more dimensions is an inductive process from there, or something. --prime mover 18:07, 4 May 2012 (EDT)
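As a numeric illustration of the Riemann-sum definition above, here is a sketch using my own test case $Q = [0,1]^3$ and $f(x,y,z) = xyz$, whose exact integral over $Q$ is $1/8$:

```python
import numpy as np

def f(x, y, z):
    # sample integrand (my own test case); exact integral over [0,1]^3 is 1/8
    return x * y * z

def riemann_sum(n):
    # sample f at the midpoints of an n x n x n uniform subdivision of Q;
    # each cell has volume dV = (1/n)^3
    pts = (np.arange(n) + 0.5) / n
    X, Y, Z = np.meshgrid(pts, pts, pts, indexing="ij")
    return f(X, Y, Z).sum() * (1.0 / n) ** 3

approx = riemann_sum(20)
# the sums converge to the integral as the subdivision gets finer
assert abs(approx - 1 / 8) < 1e-6
```

Here $\Vert \Delta \Vert \to 0$ corresponds to $n \to \infty$, since every cell of the uniform subdivision shrinks together.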

Linear Algebra

 * Well, do you have any suggested texts? --GFauxPas 15:13, 6 May 2012 (EDT)
 * Depends on what you want to do. If you want to learn how vectors work and how to pass exams and use this as a boost towards the basics of applied mathematics and physics, then the ones you have are probably adequate. If, however, you want to contribute towards a website of teaching materials which provides an axiomatic derivation of the current status of pure mathematics, then I'd take a good long look at Seth Warner's Modern Algebra, Paul Halmos's Naive Set Theory, Hartley & Hawkes' Rings, Modules and Linear Algebra, and probably for some more background Clark's Elements of Modern Algebra and Steen & Seebach's Counterexamples in Topology. For something really basic and accessible on abstract algebra try Whitelaw's Introduction to Abstract Algebra, or there's R. B. Ash's Abstract Algebra. There's a large number of books referenced on the Books page of this site, and on the community portal there are plenty of links to browse. --prime mover 15:30, 6 May 2012 (EDT)
 * Alright then, I should look into those. Until then, I'd appreciate you continuing to point out when Fraleigh or Larson falls below PW standards --GFauxPas 15:46, 6 May 2012 (EDT)
 * No worries. I'm delighted to have been invited to let my prejudices hang out for all to see. --prime mover 16:00, 6 May 2012 (EDT)

Here is the Fraleigh / Beauregard page on Amazon:
 * http://www.amazon.com/Linear-Algebra-Third-Edition-Fraleigh/dp/0201526751/ref=cm_cr_pr_product_top

You might be interested to read the comments. There were marginally fewer 1-star comments than 5-star ones, but only because the latter were beefed up by instances of people who think a review of a book is for expressing how happy you are with the delivery service ...

The verdict, then: a good-ish reference work, but not good for learning the subject from new. --prime mover 10:47, 7 May 2012 (EDT)

"This concludes the proof"
Do we have a proofwiki ruling as to whether to state that a proof is done? I personally see and read to myself "which was to be demonstrated", but maybe that's not what other people do. --GFauxPas (talk) 16:29, 11 January 2013 (UTC)


 * It's generally good to end a multi-stage proof with a short comment to remind the reader that the proof is done. It's not required and ultimately a matter of preference. --Lord_Farin (talk) 17:27, 11 January 2013 (UTC)

Chrome
What's the method to have PW be the default for site:proofwiki.org in the search bar? --GFauxPas (talk) 14:01, 4 April 2013 (UTC)


 * Right-click the address bar, go to "edit search engines" and add  as a new search engine. You can choose a name and keyword yourself. &mdash; Lord_Farin (talk) 14:12, 4 April 2013 (UTC)

Limit question
Someone in a Calc I class asked me for help with the following limit:

$\displaystyle \lim_{x \to 0} \frac {1 - \cos x}{x^2}$

Now, the class hasn't had L'Hopital's yet. What other way is there to do it? Some trig identity with Limit of (Cosine (X) - 1) over X or Limit of Sine of X over X? I've tried several things to no avail. It could be that the student was meant to approximate it using a computer or something.

My dead ends:

$\dfrac {1 - \cos x}{x^2} = \dfrac {\sin^2 x}{x^2\left({1 + \cos x}\right)}$


 * $0 \le 1 - \cos x \le 2$ and Squeeze Theorem?

--GFauxPas (talk) 20:03, 3 June 2013 (UTC)


 * Aha! Got it.

--GFauxPas (talk) 21:05, 3 June 2013 (UTC)


 * I tried to submit an answer, but you edit conflicted me. Your way is much more elegant than mine anyway. --Dfeuer (talk) 21:14, 3 June 2013 (UTC)


 * Can you show me what you were thinking of? --GFauxPas (talk) 21:17, 3 June 2013 (UTC)


 * Limit of (Cosine (X) - 1) over X/Proof 3, but float another $x$ through till almost the end, where it'll turn a $\sin x$ into $\sin x/x$. --Dfeuer (talk) 21:57, 3 June 2013 (UTC)


 * Almost the same as my method. --GFauxPas (talk) 22:11, 3 June 2013 (UTC)


 * Since your result is strictly stronger than Limit of (Cosine (X) - 1) over X, may I suggest you add it to PW, and add a Proof 4 of said theorem that uses yours? --Dfeuer (talk) 23:14, 3 June 2013 (UTC)


 * Why do you think it's "stronger" rather than just different? I'm wary of putting it on proofwiki as it's more an exercise than a theorem. $\frac {1 - \cos x}x$ is important because it's used in a proof for $D_x\cos x$, I'm not sure this theorem is important. --GFauxPas (talk) 23:50, 3 June 2013 (UTC)


 * "Stronger" in the sense that it's straightforward to prove the other one given this one, but not the other way around. I don't personally see $\lim_{x \to 0}\frac {1-\cos x} x = 0$ as especially important; it's just another lemma. --Dfeuer (talk) 04:19, 4 June 2013 (UTC)

How can you prove the other one given this one? I'm not seeing it. --GFauxPas (talk) 05:00, 4 June 2013 (UTC)


 * $\displaystyle \lim_{x\to 0}\frac {1-\cos x} x = \lim_{x\to 0}\left(x \frac {1-\cos x} {x^2}\right)=\left(\lim_{x\to 0}x\right)\left(\lim_{x\to 0}\frac{1-\cos x}{x^2}\right)$. Basically, just knowing that the limit with denominator $x^2$ exists and is finite proves the theorem with denominator $x$. --Dfeuer (talk) 06:17, 4 June 2013 (UTC)
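A numeric sanity check (not a proof) of both limits in the argument above: $(1-\cos x)/x^2 \to 1/2$, and hence $(1-\cos x)/x = x \cdot \left[(1-\cos x)/x^2\right] \to 0$:

```python
import math

# evaluate both quotients at sample points shrinking towards 0
for x in [1e-1, 1e-2, 1e-3]:
    r2 = (1 - math.cos(x)) / x ** 2
    r1 = (1 - math.cos(x)) / x
    assert abs(r2 - 0.5) < x ** 2     # Taylor: r2 = 1/2 - x^2/24 + ...
    assert abs(r1 - x / 2) < x ** 2   # r1 = x * r2, so roughly x/2
```

The tolerances come from the cosine Taylor series; the check confirms that knowing the $x^2$ limit is finite immediately forces the $x$ limit to be zero.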

Source Review
Just a quick heads up to draw your attention to Definition:Random Variable, which has an open SourceReview call listed on your name. &mdash; Lord_Farin (talk) 11:42, 5 April 2014 (UTC)