User talk:GFauxPas/Archive1

Change to MathWorld citation template
I noticed (based on One-to-One and Strictly Between) that some pages on MathWorld are credited to different authors from Eric Weisstein, and so require that author to be included in the citation.

I have fixed the template (which is now "MathWorld" not "Mathworld", that's just me tidying up) so as to be able to include the author (which, if not given, defaults to the "Weisstein, Eric W." format as per normal).

What you need to do is add "author=author-name" and "authorpage=author-pagename" where "author-name" is the displayname of the author and "author-pagename" is the name of the html file on MathWorld (not including the full path, not including the extension).

An example:

which gives:
 *  



If the page is given as written by "Weisstein, Eric W." then you should not add the "author" and "authorpage" tags.

I have included this info in the usage section of the Template:MathWorld page itself, but I'm bringing it to your attention because I know you've been active in using it.

Chx. --prime mover 02:55, 31 December 2011 (CST)

Theorem Holds in All Models
Anyone know the page name to the theorem that if a theorem is a theorem the theorem has to hold in all models theorem theorem theorem? I can't find it theorem --GFauxPas 08:30, 12 February 2012 (EST)


 * That usually goes by the name of 'Soundness Theorem' (i.e., anything you can prove is true (where true means 'true in all models')). --Lord_Farin 18:59, 12 February 2012 (EST)

Differentiability of Functions of >1 variable
Larson's definition of differentiability for functions of more than one variable is very non-intuitive (I'm going to use $f:x,y \mapsto f(x,y)$ for ease of asking the question, though the question is for any number of variables):


 * f is differentiable at $(x,y) = (x_0,y_0) \iff \exists \Delta z:$


 * $\Delta z = f_x(x_0,y_0)\Delta x + f_y(x_0,y_0)\Delta y + \varepsilon_1 \Delta x + \varepsilon_2 \Delta y$

such that $\varepsilon_1, \varepsilon_2 \to 0$ as $(\Delta x, \Delta y) \to (0,0)$.

Is there an equivalent definition that's more intuitive? Why not define "differentiable" as "differentiable iff all partial derivatives exist"? --GFauxPas 12:42, 28 March 2012 (EDT)


 * As to your last question: Because it isn't enough; derivatives in all directions need to exist.
 * A general definition can be given as follows:


 * A mapping $f: \R^n \to \R^p$ (or defined on some subset of $\R^n$) is said to be differentiable at $a \in \R^n$ iff:
 * There exists a linear mapping $Df(a):\R^n\to\R^p$ (that is, simply put, a matrix) such that:
 * $\displaystyle \lim_{\left\Vert{h}\right\Vert\to 0, h \in \R^n} \frac {\left\Vert{f(a+h)-f(a)-Df(a)h}\right\Vert} {\left\Vert{h}\right\Vert} = 0$


 * This comes down to the existence of a linear approximation $Df(a)$ of $f$ near $a$ which is good enough to make the limit zero (for comparison, you can take $n=p=1$, it will reduce to the familiar expression for $f:\R\to\R$). Note that in the fraction, the norm in the numerator is in $\R^p$, while the one in the denominator is in $\R^n$. Note that $Df(a)h$ means 'the mapping $Df(a)$ evaluated at $h \in \R^n$', not your standard multiplication (well, they are the same iff $n=p=1$; alternatively, this is matrix multiplication with a vector). Note that this is different from existence of all partial derivatives since the $h \in \R^n$ need to be in a sphere around zero, not just on the coordinate axes. If it is not entirely clear, please say so, and I will demonstrate by means of a small example. --Lord_Farin 14:35, 28 March 2012 (EDT)
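Lord_Farin's limit condition can be checked numerically. The sketch below is my own illustration, not from the discussion: the map $f(x,y) = (x^2 y, x + y)$ and the point $a$ are arbitrary choices, the candidate $Df(a)$ is the Jacobian matrix, and the ratio $\Vert f(a+h)-f(a)-Df(a)h \Vert / \Vert h \Vert$ is watched as $h \to 0$ along a fixed direction.

```python
import math

# My own toy map f : R^2 -> R^2 (not from the discussion above):
# f(x, y) = (x^2 y, x + y), with Jacobian [[2xy, x^2], [1, 1]].
def f(x, y):
    return (x * x * y, x + y)

def jacobian(x, y):
    return ((2 * x * y, x * x),
            (1.0, 1.0))

a = (1.5, -0.5)
fa = f(*a)
J = jacobian(*a)

ratios = []
for k in range(1, 6):
    t = 10.0 ** (-k)
    h = (0.3 * t, -0.7 * t)  # shrink h along a fixed direction
    fh = f(a[0] + h[0], a[1] + h[1])
    Jh = (J[0][0] * h[0] + J[0][1] * h[1],
          J[1][0] * h[0] + J[1][1] * h[1])
    err = math.hypot(fh[0] - fa[0] - Jh[0], fh[1] - fa[1] - Jh[1])
    ratios.append(err / math.hypot(h[0], h[1]))

print(ratios)  # each entry roughly a tenth of the previous one
```

The ratio decays linearly in $\Vert h \Vert$, consistent with the approximation error being of second order for a smooth map.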


 * Alternatively, see this, pp.792 --Lord_Farin 14:40, 28 March 2012 (EDT)


 * How incredibly convenient that in today's Linear Algebra class I first learned about linear maps as matrices! An example would be great. --GFauxPas 15:11, 28 March 2012 (EDT)


 * I thought that the existence of derivatives in all directions does not necessarily ensure differentiability. –Abcxyz (talk | contribs) 20:50, 28 March 2012 (EDT)
 * Correct, but they need to exist for differentiability to possibly apply. I will hopefully get to the example later today. --Lord_Farin 04:42, 29 March 2012 (EDT)

Okay, so let $f: \R^{2n}\simeq\R^n \times \R^n \to \R, (x,y)\mapsto \left\langle{x,y}\right\rangle$.

Say we want to know if $f$ is differentiable at $(a,b)\in\R^n\times\R^n$; then let $h = (h_1,h_2)\in\R^{2n}$, and compute:
 * $f(a,b)-f(a-h_1,b-h_2) = \left\langle{a,b}\right\rangle - \left\langle{a-h_1,b-h_2}\right\rangle = \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle - \left\langle{h_1,h_2}\right\rangle$

Using Cauchy-Schwarz, the last term can be bounded by $\left\Vert{h}\right\Vert^2$, as the norms of $h_1, h_2$ are dominated by that of $h$. What remains is linear in $h$ (a sum of inner products). Thus, putting $Df((a,b)) = (h\mapsto \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle)$, we find that the limit goes to zero (by the Cauchy-Schwarz argument).

There is a theorem (not too hard) establishing that the linear mapping $Df((a,b))$ is unique; hence conclude that it equals the given expression (compare the case that $n=1$ for further insights). Hopefully, this slightly nontrivial example gives a bit of insight. --Lord_Farin 06:42, 29 March 2012 (EDT)
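Lord_Farin's inner-product example can be sanity-checked numerically. In this sketch (the particular vectors are my own choices) the difference $f((a,b)+h) - f(a,b) - Df((a,b))h$ is exactly $\left\langle{h_1,h_2}\right\rangle$, so the ratio against $\left\Vert{h}\right\Vert$ should decay linearly:

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# My own choice of fixed vectors a, b in R^3:
a = [1.0, -2.0, 0.5]
b = [0.3, 4.0, -1.0]

ratios = []
for k in range(1, 6):
    t = 10.0 ** (-k)
    h1 = [t, -2 * t, 3 * t]
    h2 = [0.5 * t, t, -t]
    # f((a,b) + h) - f(a,b) for f(x, y) = <x, y>
    diff = dot([ai + x for ai, x in zip(a, h1)],
               [bi + y for bi, y in zip(b, h2)]) - dot(a, b)
    linear = dot(h1, b) + dot(a, h2)  # candidate Df((a,b)) applied to h
    norm_h = math.sqrt(dot(h1, h1) + dot(h2, h2))
    ratios.append(abs(diff - linear) / norm_h)

print(ratios)  # decays linearly in ||h||, as the <h1, h2> remainder predicts
```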


 * Also, when considering $f:\R\to\R$, the standard derivative $f'$ is obtained by the canonical identification $\operatorname{Lin}(\R,\R)\simeq \R,Df(a)\mapsto Df(a)1 = f'(a)$. Because $Df(a)1$ is also often denoted $D_af(1)$, this is the origin of the possible confusion I expressed earlier. --Lord_Farin 06:45, 29 March 2012 (EDT)


 * This is significantly harder than what we're doing in Calc III but I'm getting something out of it, thanks! I'm not going to say that I get it completely, but I'm okay with that- I haven't even finished Calc III yet. Is this definition equivalent to Larson's for $\R^2 \to \R$? --GFauxPas 09:21, 29 March 2012 (EDT)


 * I would say so. In matrix form, $Df(a)$ will always be the matrix of partial derivatives (the Jacobian) with respect to the chosen basis. That means, for $\R^2\to\R$, that it becomes a row matrix $(f_x(a), f_y(a))$ (which upon multiplication by the column vector $(\Delta x, \Delta y)$ becomes the first part of Larson's expression; the $\varepsilon$s correspond to the term $\left\langle{h_1,h_2}\right\rangle$ in the example). It would be rather awkward if Larson had an incompatible definition of something as basic as differentiation. --Lord_Farin 09:45, 29 March 2012 (EDT)


 * I have a much better understanding of Larson's def'n now after discussing it with my Linear Algebra professor.


 * Side note: Has anyone seen $f^{\,'}_x(x,y), f^{\,''}_{xy}(x,y)$ for $\dfrac {\partial z}{\partial x}, \dfrac {\partial^2 z}{\partial y \partial x}$? I keep on wanting to put a prime on it --GFauxPas 10:48, 30 March 2012 (EDT)


 * No, that notation isn't used. You have to know what $f$ is differentiated with respect to, which is why subscripts are used instead of primes; the prime is strictly reserved for the total derivative, not partial ones. --prime mover 13:10, 30 March 2012 (EDT)


 * You mean that $f'$ is seriously used for $Df$ (or $df$, if in differential geometry)?! That's new to me. --Lord_Farin 17:09, 30 March 2012 (EDT)


 * Think so. May be wrong. Point is, it is never used for partial derivatives. I think I met it in the context of fluid mechanics but I misremember the details. --prime mover 18:09, 30 March 2012 (EDT)

Definite Integral Definition
Regarding the "subdivision $P$" in Definition:Definite Integral, what would the subdivision be if it's a function from $\R^n$ to $\R$?

Larson's definitions all involve an alternative formulation that is disliked by ProofWiki members because its convergence is more finicky:


 * $\displaystyle \lim_{\Vert \Delta \Vert \to 0} \sum_{i \mathop = 1}^n f\left({x_i}\right) \ \Delta x_i$

what's the equivalent definition of the supremum of a subdivision in higher dimensions? I.e.,


 * $\displaystyle \iiint_Q f\left({x, y, z}\right) \ \mathrm d V = \lim_{\Vert \Delta \Vert \to 0} \sum_{i \mathop = 1}^n f\left({x_i, y_i, z_i}\right) \ \Delta V_i$

where $\Delta V_i = \Delta x_i \Delta y_i \Delta z_i$, $Q \subset \R^3$

how would you convert that to a definition analogous to what PW has for a definite single integral? --GFauxPas 12:22, 4 May 2012 (EDT)


 * Take a look at Definition:Real Interval at the section that mentions multi-dimensional intervals. But I suspect that a complete analysis of the problem at the same level as done for single-dimension definite integrals may not be the correct way to go. Long time since I did this, but I think beyond an intuitive level (slices, soldiers and croutons) there is no need to go into the same level of detail - having established the result in 1 dimension, expanding it to more dimensions is an inductive process from there, or something. --prime mover 18:07, 4 May 2012 (EDT)
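For what it's worth, the limit-of-sums picture in three dimensions is easy to illustrate directly. This sketch is my own (the integrand $f(x,y,z) = xyz$ and box $Q = [0,1]^3$ are arbitrary choices): it forms the sum $\sum f(x_i, y_i, z_i) \Delta V_i$ over an $n \times n \times n$ grid of sub-boxes, sampling at midpoints, and compares against the exact value $1/8$.

```python
# Midpoint Riemann sum over an n x n x n subdivision of [0,1]^3.
def riemann_3d(f, n):
    dx = 1.0 / n
    dV = dx ** 3
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += f((i + 0.5) * dx, (j + 0.5) * dx, (k + 0.5) * dx) * dV
    return total

# \iiint_Q xyz dV over Q = [0,1]^3 equals 1/8 = 0.125.
approx = riemann_3d(lambda x, y, z: x * y * z, 20)
print(approx)  # very close to 0.125
```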

Linear Algebra

 * Well, do you have any suggested texts? --GFauxPas 15:13, 6 May 2012 (EDT)
 * Depends on what you want to do. If you want to learn how vectors work and how to pass exams and use this as a boost towards the basics of applied mathematics and physics, then the ones you have are probably adequate. If, however, you want to contribute towards a website of teaching materials which provides an axiomatic derivation of the current status of pure mathematics, then I'd take a good long look at Seth Warner's Modern Algebra, Paul Halmos's Naive Set Theory, Hartley & Hawkes' Rings, Modules and Linear Algebra, and probably for some more background Clark's Elements of Modern Algebra and Steen & Seebach's Counterexamples in Topology. For something really basic and accessible on abstract algebra try Whitelaw's Introduction to Abstract Algebra, or there's R. B. Ash's Abstract Algebra. There's a large number of books referenced on the Books page of this site, and on the community portal there are plenty of links to browse. --prime mover 15:30, 6 May 2012 (EDT)
 * Alright then, I should look into those. Until then, I'd appreciate you continuing to point out when Fraleigh or Larson is being sub-PW standards --GFauxPas 15:46, 6 May 2012 (EDT)
 * No worries. I'm delighted to have been invited to let my prejudices hang out for all to see. --prime mover 16:00, 6 May 2012 (EDT)

Here is the Fraleigh / Beauregard page on Amazon:
 * http://www.amazon.com/Linear-Algebra-Third-Edition-Fraleigh/dp/0201526751/ref=cm_cr_pr_product_top

You might be interested to read the comments. There were marginally fewer 1-star comments than 5-star ones, but only because the latter were beefed up by people who think a review of a book is for expressing how happy you are with the delivery service ...

The verdict, then: a good-ish reference work, but not good for learning the subject from new. --prime mover 10:47, 7 May 2012 (EDT)

Tableau Notation
Is there a way to write a tableau proof in such a format as this? I find it the easiest way to read a tableau proof, but maybe it's not doable in LaTeX? http://i50.tinypic.com/2urm4hf.jpg --GFauxPas 09:06, 17 June 2012 (EDT)


 * The language of LaTeX allows for infinite diversity; the main problem here is that we have to deal with MathJax's implementation, which is more limited. To employ such structures one would generally define a complete style file, defining an environment like \begin{tableauproof} or something like that.
 * Regardless of whether it is possible, I have some doubts concerning how useful this language is when multiple proof trees combine (like with $\lor$-elimination). It seems a tad hard to make such trees with equally appealing presentation. Besides even that conceptual problem, we have the necessity to be able to refer to other pages in-line, by means of hyperlinks. This means of reference is IMO at least as important as an aesthetically satisfying presentation.
 * On the positive side, I share your desire for a more appealing presentation of proof trees. I have decided that I will (in due time, when PredCalc is restyled and can take amendments) try to incorporate sequent calculus, a system I have always liked due to its expressiveness and clarity of assumptions at each point. --Lord_Farin 09:43, 17 June 2012 (EDT)


 * I discovered a source in which the presentation of tableau proofs receives a prominent place. From there I researched to work out how to write the appropriate LaTeX, and discovered how fiddly it was. I reverted to the technique given in another source, which is the pedestrian and clunky (but ultimately easy to maintain) system which I posted here.
 * IMO it's not worth the candle to try and emulate some slick technique of presenting tableau proofs, but then I won't prevent anyone from trying - as long as they explain in full detail exactly how such a presentation functions. --prime mover 12:11, 17 June 2012 (EDT)


 * It needn't be too hard in TeX (given a .sty file) but as MathJax is not a full-fledged TeX parser we are limited in both practical and pedagogical manners. --Lord_Farin 17:22, 17 June 2012 (EDT)

"This concludes the proof"
Do we have a ProofWiki ruling as to whether to state that a proof is done? I personally see and read to myself "which was to be demonstrated", but maybe that's not what other people do. --GFauxPas (talk) 16:29, 11 January 2013 (UTC)


 * It's generally good to end a multi-stage proof with a short comment to remind the reader that the proof is done. It's not required and ultimately a matter of preference. --Lord_Farin (talk) 17:27, 11 January 2013 (UTC)

Order topology thingum
FYI, I posted a very rough proof of the theorem about order topology vs. subspace topology on convex subsets over at Order Topology on Convex Subset is Subspace Topology. It's sufficiently badly written that it may or may not make sense to anyone but me, but you're welcome to try to fix that. --Dfeuer (talk) 06:09, 1 February 2013 (UTC)


 * It's not our job to improve the presentation of your own badly-written material. If you think it is substandard you should develop it to an appropriate standard yourself on your sandbox before publishing it. --prime mover (talk) 09:29, 1 February 2013 (UTC)

Can't find this theorem
I'm almost sure we have it up, anyone know the page that has the result that if $f\left({x}\right) > 0$ for all $x \in \left[{a..b}\right]$ and $a < b$ then $\int_a^b f\left({x}\right) \, \mathrm dx > 0$? --GFauxPas (talk) 17:14, 14 February 2013 (UTC)


 * We have something similar, it's in the integral calculus category. --prime mover (talk) 18:01, 14 February 2013 (UTC)


 * Closest I could find is Upper and Lower Bounds of Integral, which is roundabout. --GFauxPas (talk) 20:13, 14 February 2013 (UTC)


 * The closest I could find was the relative sizes theorem, but that doesn't quite do the trick and as stated it only applies to continuous functions. I imagine that the theorem you're after would rectify these problems. You may have to prove it from scratch. --Dfeuer (talk) 20:55, 14 February 2013 (UTC)


 * Errrr... forget that. I don't think the theorem holds if $f$ isn't continuous. I suspect the relative sizes theorem and the extreme value theorem will probably give you what you need.-Dfeuer (talk) 21:12, 14 February 2013 (UTC)


 * You have to be wary about integrability if you set out to craft pathological examples... --Lord_Farin (talk) 22:01, 14 February 2013 (UTC)

Yeah, I don't actually know that the theorem doesn't hold for non-continuous functions. But it's certainly not obvious that it holds. The function would have to have an infimum of $0$ on every non-degenerate interval in order for the lower integral to be $0$. The infimum of the upper sums would also have to be $0$. I would guess without knowing that it is possible to construct a counterexample using some form of choice, but I could be wrong. --Dfeuer (talk) 23:08, 14 February 2013 (UTC)

Googling "Lebesgue Criterion for Riemann Integrability" will reveal that a function on a bounded closed interval is Riemann integrable iff it is bounded and is continuous almost everywhere. Suppose it is continuous at $p$ and $f(p) > 0$. Then there is an open interval $I$ containing $p$ such that $x \in I \implies f(x) > f(p)/2$. But that places a lower bound of $|I|f(p)/2$ on the integral. --Dfeuer (talk) 01:13, 15 February 2013 (UTC)
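Dfeuer's lower-bound step can be illustrated numerically. The piecewise function below is my own toy example (positive, Riemann integrable, one jump at $1/2$): it is continuous at $p = 1/4$ with $f(p) = 1$, so $f > 1/2$ on $I = (0.15, 0.35)$, and the argument places the lower bound $|I| \cdot f(p)/2 = 0.1$ on the integral (whose exact value here is $0.6$).

```python
# My own toy example: positive, Riemann integrable, one jump at x = 1/2;
# continuous at p = 1/4 with f(p) = 1, and exact integral 0.6.
def f(x):
    return 1.0 if x <= 0.5 else 0.2

def lower_sum(g, a, b, n):
    # crude lower Darboux sum: min of a few samples per subinterval
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        xs = (a + i * dx, a + (i + 0.5) * dx, a + (i + 1) * dx)
        total += min(g(x) for x in xs) * dx
    return total

s = lower_sum(f, 0.0, 1.0, 1000)
print(s)  # close to the true integral 0.6, and certainly >= the bound 0.1
```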


 * Here's a specific source that goes into detail: . Note that while the function has to be continuous almost everywhere to be Riemann integrable, all we really need is for it to be continuous at a single point. I imagine a proof of the latter fact might be easier to manufacture. It would also be good, of course, to prove or disprove this convenience theorem for other sorts of integration, but I'm not currently competent to do so. --Dfeuer (talk) 01:50, 15 February 2013 (UTC)


 * I only need it for continuous functions, so behold!: Sign of Function Matches Sign of Definite Integral. Feel free to use it to build a stronger result if you'd like. --GFauxPas (talk) 03:06, 15 February 2013 (UTC)

Chrome
What's the method to have PW be the default for site:proofwiki.org in the search bar? --GFauxPas (talk) 14:01, 4 April 2013 (UTC)


 * Right-click the address bar, go to "edit search engines" and add  as a new search engine. You can choose a name and keyword yourself. &mdash; Lord_Farin (talk) 14:12, 4 April 2013 (UTC)

absolute value of x^n
Is:
 * $\forall x, n \in \R: |x^n| = |x|^n$

a theorem? What about $x, n \in \C$? --GFauxPas (talk) 12:24, 9 April 2013 (UTC)


 * I haven't studied complex numbers much, but it looks to me like the answer is yes:


 * If $x = re^{i \theta}$ then $x^n = r^n e^{i \theta n}$.


 * Thus the modulus of $x^n$ is $r^n \left\vert e^{i \theta n} \right\vert = |x|^n$.


 * Note that despite your choice of variable name, $n$ can be any real number (except of course that it cannot be zero if $x$ is). $n$, however, can only have an imaginary component if $x$ is a positive real number—can you see why I think so? --Dfeuer (talk) 17:23, 9 April 2013 (UTC)
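For the complex-base, real-exponent case, the claim is easy to spot-check with Python's principal-branch complex powers (the sample values below are my own choices):

```python
# My own sample values: complex base x, real exponent n.  Python's **
# on complex numbers uses the principal branch, i.e. x**n = exp(n * Log x).
samples = [(1 + 2j, 3.0), (-2 + 0.5j, -1.7), (0.3 - 4j, 0.5)]

for x, n in samples:
    print(x, n, abs(x ** n), abs(x) ** n)  # the last two columns agree
```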


 * "Thus the modulus of $x^n$ is $r^n \left\vert e^{i \theta n} \right\vert = |x|^n$." How'd you get that? --GFauxPas (talk) 19:24, 9 April 2013 (UTC)


 * What don't you understand? Consider parenthesizing $x^n = (r^n) e^{i(\theta n)}$. --Dfeuer (talk) 19:41, 9 April 2013 (UTC)


 * Are you assuming that the result holds for $\R$? --GFauxPas (talk) 19:51, 9 April 2013 (UTC)


 * "$n$ can be any real number (except of course that it cannot be zero if $x$ is)." My understanding is that it can, and $0^0$ is understood (by convention and the other usual reasons) as being equal to $1$. --prime mover (talk) 19:54, 9 April 2013 (UTC)


 * Not the sort of convention I'd want to rely on without an explicit statement that it is in force. --Dfeuer (talk) 01:19, 10 April 2013 (UTC)


 * It's such a widespread convention that it should not need defining. I recommend you might want to do some reading around the subject. --prime mover (talk) 05:21, 10 April 2013 (UTC)

GFauxPas asks if I'm assuming that the result holds for $\R$. I am not. I will be glad to continue to answer questions, but we do appear to be having some sort of communication problem. May I recommend that you try to draft a proof? That should give us something more concrete to discuss and, more importantly, give you a better sense of how the pieces go together. The restriction I imposed on when $n$ can have a non-zero imaginary component, for example, seems a bit arbitrary, but it arose naturally out of the process of trying to prove the theorem. --Dfeuer (talk) 02:17, 10 April 2013 (UTC)


 * I don't see how the parentheses help:
 * $x^n = (r^n) e^{i(\theta n)} \implies |x^n| = |(r^n) e^{i(\theta n)}| = r^n |e^{i(\theta n)}|$

... and then what? --GFauxPas (talk) 04:08, 10 April 2013 (UTC)


 * Well, what is $|e^{iy}|$ when $y$ is real? --Dfeuer (talk) 04:44, 10 April 2013 (UTC)


 * And while we're at it, what is $r$? The end is very much in sight. --Dfeuer (talk) 04:54, 10 April 2013 (UTC)

It looks false for general complex powers. Definition of powers when $z,w \in \C$:
 * $z^w = \exp(w\log z) = \exp(w \log |z| + i w\arg(z))$

so
 * $|z^w| = \exp(\Re(w) \log |z| - \Im(w) \arg(z))$

Now
 * $|z|^w = \exp(w\log |z|) = \exp(\Re(w)\log |z| + i \Im(w) \log |z|)$

I don't see that they're equal when $z \in \R_{\geq 0}$. Surely they're equal if
 * $- \Im(w) \arg(z) = i \Im(w) \log |z|$

That is, if $\Im(w) = 0$ or if $z = 1$.--Linus44 (talk) 13:37, 10 April 2013 (UTC)
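Linus44's modulus formula can be checked numerically against the principal branch. This is my own sketch; the choice $z = 2$, $w = 1 + i$ matches his $2^{1+i}$ example further down.

```python
import cmath
import math

z, w = 2 + 0j, 1 + 1j  # real positive base, genuinely complex exponent
lhs = abs(z ** w)
rhs = math.exp(w.real * math.log(abs(z)) - w.imag * cmath.phase(z))
print(lhs, rhs)        # both 2.0 (up to rounding), since arg(z) = 0
print(abs(z) ** w)     # 2^{1+i}: a genuinely complex number, not 2
```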


 * Thought of a counterexample:


 * $\displaystyle i^i = e^{- \frac \pi 2} \ne 1 = |i|^i$ --GFauxPas (talk) 14:09, 10 April 2013 (UTC)
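This counterexample is easy to verify with Python's principal-value complex powers:

```python
import math

val = 1j ** 1j                 # principal value: exp(i * Log i) = exp(-pi/2)
print(val)                     # essentially real, approximately 0.20788
print(math.exp(-math.pi / 2))  # the same number; |i|**i = 1**i = 1 differs
```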


 * Linus, I already said that it's only true for a complex exponent if the base is positive and real. --Dfeuer (talk) 15:03, 10 April 2013 (UTC)


 * This is my point. As I described above, for a complex exponent it is true if and only if $z = 1$. It isn't true for every positive real base. (Though I just noticed that my argument doesn't account for $z = 0$, when of course it's also true.) --Linus44 (talk) 15:53, 10 April 2013 (UTC)


 * Example: $2 = |2^{1+i}| \neq |2|^{1+i} = 2^{1+i}$ --Linus44 (talk) 16:00, 10 April 2013 (UTC)


 * Regarding DF's clue, I can't figure it out:


 * $|e^{iy}| = \sqrt{\left({\operatorname{Re}\left({e^{iy}}\right)}\right)^2 + \left({\operatorname{Im}\left({e^{iy}}\right)}\right)^2}$
 * ... so what? --GFauxPas 14:48, 10 April 2013 (UTC)


 * For real $y$, $e^{iy} = \cos y + i \sin y$, the modulus of which is &hellip;? --Dfeuer (talk) 15:03, 10 April 2013 (UTC)