User talk:GFauxPas/Archive1

Convergence and other principles of analysis
Currently I am a teaching assistant for an advanced analysis course, so if you have any questions regarding real (multidimensional or not) analysis, feel free to drop a note on my talk page. When you eventually get there, I might be able to help out on Complex Analysis as well. --Lord_Farin 14:27, 23 October 2011 (CDT)

Awesome, thanks a lot --GFauxPas 14:33, 23 October 2011 (CDT)


 * Thanks for your explanation LF. Just curious, what's the cardinality of $\R^\R$? --GFauxPas 07:18, 17 January 2012 (EST)


 * There are nice rules for cardinalities, like $\left|A^B\right| = |A|^{|B|}$, so if we let $c = |\R|, \omega = |\N|$, then $\left|\R^\R\right| = c^c = \left(2^\omega\right)^c = 2^{(\omega\times c)} = 2^c$, using $\omega \times c = c$. So it is 'only' $2^c$. --Lord_Farin 09:38, 17 January 2012 (EST)

Notation
I note from your front page that you're setting up some copypasta for yourself. Before you go too far down that route, please note the following:

1. The raw symbols ≡, · and Δ and so on are never used on ProofWiki. The $\LaTeX$ code is always used: $\equiv, \cdot, \Delta$ (or when appropriate $\triangle$ and its variants).

2. For "defined as" we use $:=$ as this is a specific symbol meaning "is defined as". The $\equiv$ symbol has plenty of other meanings and it is best kept for those.

Hope this is OK. --prime mover 16:29, 24 October 2011 (CDT)

A while back I wrote a proof that all integers are even or odd. The reason I wrote it was just to practice mathematical induction. The theorem itself seems rather unimportant, and so I don't see a reason to make a page for it, but at someone's request I can make it. --GFauxPas 15:56, 4 November 2011 (CDT)
 * I have a feeling that one might already be up, but I don't think it was proved by induction. Can't remember and I don't feel like looking at the moment. --prime mover 17:24, 4 November 2011 (CDT)

Seeing as how we're using an uncommon notation for intervals to avoid ambiguity, what's the defining criterion for whether we use an atypical "better" notation or not? For example, though the standard notation for the inverse function is $f^{-1}$, that notation is the same as that for the multiplicative inverse, which is a very different thing. There's $f^\gets$ and $\breve f$, which aren't ambiguous, but they're not common. --GFauxPas 16:41, 8 November 2011 (CST)


 * Good question. There are many notations for intervals, and none of them are very good because it's easy to mistake two numbers separated by a comma for all sorts of other usages. $[a..b]$ is not common but it's completely unambiguous and has a precedent in computer languages, so I'm sort of expecting it to catch on. Getting mathematicians to change their notation, though, is not easy.
 * As for the inverse function notation, "generally speaking" you don't mistake $f^{-1}$ for a multiplicative inverse because the contexts are different. The $f^{\gets}$ notation has been noted on the page for inverse mapping so I suppose we could start using it, if you particularly like it. --prime mover 17:05, 8 November 2011 (CST)
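As an aside on that computer-language precedent: several modern languages keep a `..` range syntax. A minimal sketch in Rust (purely illustrative, not from the discussion above):

```rust
fn main() {
    // Rust's inclusive range a..=b contains both endpoints,
    // matching the closed interval [a..b].
    let closed: Vec<i32> = (1..=5).collect();
    assert_eq!(closed, vec![1, 2, 3, 4, 5]);

    // The half-open range a..b omits the right endpoint,
    // matching the half-open interval [a..b).
    let half_open: Vec<i32> = (1..5).collect();
    assert_eq!(half_open, vec![1, 2, 3, 4]);
}
```

So the "two dots between endpoints" convention is completely unambiguous to both humans and parsers.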


 * I think I oppose that. The notation for the interval isn't really ambiguous: even if you have never seen it before, the meaning is clear. With $f^{\gets}$ I have a hunch that it will create unnecessary fuss. But that's me, and probably an instantiation of the notation change thing... --Lord_Farin 17:09, 8 November 2011 (CST)

Lord_Farin, just wanted to let you know that I figured out your explanation and PW's approach here Talk:Fundamental_Theorem_of_Calculus/Alternative_Second_Part_Proof, thank you! --GFauxPas 22:19, 5 November 2011 (CDT)


 * Glad I put up sensible stuff. HTH --Lord_Farin 09:01, 6 November 2011 (CST)

Sizing of Images
When you are creating images, there is no need to create different versions depending on what size you want. All you do in the code is add a size such as "300px" to tell the browser how many pixels wide to render the image.
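For instance, the wiki markup would be something along these lines (the file name is the one discussed below; the size of 300 is purely illustrative):

```wikitext
[[File:ConeVolumeProof.png|300px]]
```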

In order to keep the images folder tidy (it can't be organized easily) I will update the cone page to use File:ConeVolumeProof.png and size it appropriately, then delete File:ConeVolumeProof2.png.

--prime mover 04:45, 20 November 2011 (CST)

Trig integrals
Good work on all these trigonometric integrals. It's something I've been meaning to get round to doing but haven't done yet. We have the opportunity of providing the best repository of integrals on the internet. --prime mover 12:06, 24 November 2011 (CST)
 * My pleasure. I don't like proofs like the secant proof where the steps come out of nowhere, but it is what it is. Thanks for the compliment. --GFauxPas 12:09, 24 November 2011 (CST)
 * Understood, and I'll stick to x if you like x better, I apologize for wasting your time. As I said, I'm a slow learner at some things. I'm still trying to measure what needs to be said and what doesn't. --GFauxPas 14:06, 24 November 2011 (CST)
 * Okay, here's a general rule: if it's in place using notation you're not happy with, but it's sound, then leave well alone unless there's a good reason not to (e.g. it's incorrect). If you prefer using theta in the proofs you work on, fair enough, but if and when we expand the understanding to take on board complex numbers we might take the opportunity of amending the notation again. Of course, if it has a proofread and/or tidy template (or it's otherwise new by a contributor who has not yet assimilated the house style) then the above does not apply. Of course, if you really don't agree with the presentational style, raise the question in the discussion page. That's always an option. --prime mover 14:58, 24 November 2011 (CST)
 * Thank you for your patience with me, Prime.mover. I don't care what symbols I use, and if it makes you happy you can change any notation I use, I don't care. I just have accustomed myself to using theta for trig, I don't mind using x or whatever if you like it better. The main reason I edited the cosine page there was because the proof didn't address the sign of the sine. Also, can I leave it as a given that differentiation and integration are linear operators? I've been putting it in the proofs, but it's left implied in most of the proofs I see outside of PW. --GFauxPas 15:08, 24 November 2011 (CST)
 * I rarely bother to note the derivative of a minus, because it all follows with simple algebra anyway. If there's a specific need to invoke a complicated linear combination, then perhaps note that, but for a simple constant multiple I would not. Note the corollary to the derivative of the exponential, which includes the drv. of $e^{cx}$ - you might want to add something similar as a corollary for the trig functions. In fact, drv. of $\sin (ax + b) = a \cos (ax + b)$ is a really useful corollary, so when we get onto complicated substitutions in the messy integrations involving quadratics, you just need to invoke that page and it saves a lot of extra work on the substitution. --prime mover 16:26, 24 November 2011 (CST)
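The $\sin (ax + b)$ corollary mentioned above is immediate from the chain rule; as a sketch:

```latex
D_x \left( \sin (ax + b) \right)
  = \cos (ax + b) \cdot D_x \left( ax + b \right)
  = a \cos (ax + b)
```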

Certainly the integrals of tangent, secant etc. are worth adding, but would you like me to add pages for the integrals of functions like $\sec x \tan x$, $\sec^2 x$? --GFauxPas 13:40, 25 November 2011 (CST)
 * My rule of thumb is: if a result is reported in a text book as a worthwhile result, then it can probably go in. Otherwise, if it's needed in the course of a more complicated proof, then we could add it when it's needed. Otherwise I wouldn't bother. --prime mover 17:23, 25 November 2011 (CST)
 * It might be feasible to compute the indefinite integrals of $\cos^n x \sin^m x$ for $m, n \in \Z$. That page would be worthwhile I think as a reference table, and would cover all of these. --Lord_Farin 17:32, 25 November 2011 (CST)
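Such a reference table could be driven by the standard reduction formula (a sketch, valid for $m + n \ne 0$; a symmetric partner formula reduces $m$ instead of $n$):

```latex
\int \sin^m x \cos^n x \, \mathrm d x
  = \frac {\sin^{m+1} x \cos^{n-1} x} {m + n}
  + \frac {n - 1} {m + n} \int \sin^m x \cos^{n-2} x \, \mathrm d x
```

Iterating this (and its partner) strictly decreases the exponents, so any $m, n \ge 0$ eventually reduces to the elementary base cases.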
 * That's one that's been on my own list to do in due course - but there's lots of other fiddly stuff I want to get sorted out while I have the particular books in front of me. Feel free to get there first ... --prime mover 17:44, 25 November 2011 (CST)

Grammatical note
Reinstating this section because it's still relevant.

I see you are starting lines with a capital letter where one is not needed. Here is an example:


 * "The hyperbolic tangent function is defined on the complex numbers as:


 * $\tanh: X \to \C$:


 * $\displaystyle \tanh z := \dfrac {\sinh z}{\cosh z}$


 * Where $\sinh$ is the hyperbolic sine, $\cosh$ is the hyperbolic cosine, and $X = \{ z : z \in \C, \ \cosh z \ne 0 \}$."

The word "where" should not have a capital letter. The above is all (technically) one sentence, like:


 * "The best food is:
 * FISH AND CHIPS
 * where chips are made of deep-fried potato."

See? As "where" is part of the same sentence, it does not start with a capital letter.

I have been changing them consistently where I've seen them, hoping you'll pick it up by following examples, but now that I see you changing one in the other direction, I have to mention it.

I understand that Microsoft make things complicated by automatically making the first letter after every new line / return start with a capital, but Microsoft are cracked.


 * Regarding your comment in the edit page about "committed to memory": you might want to amend your subroutines to ensure it's your hard drive, not your RAM, that it gets committed to, as I notice the same thing being done on your sandbox page for Riemann Sum. --prime mover 00:19, 25 November 2011 (CST)

email citations
Citing email conversations as source works. I'm going to attempt to give a ruling, as there's no obvious reason why the general technique of citing private correspondence can't be used - within reason.

The idea of a citation is that it allows people to go back to the original source work and see what the original says. I've seen exceptions where books have been written and the citation is "private correspondence", so there's a precedent. But if there is a unique point mentioned in the page that you are unable to find anywhere in the literature at your disposal, then as a last resort you can cite your email conversation. Before that, you want to say to your correspondent: "Where did you get that from?" If he's thought it up himself, obviously you need his permission to use it, but in that case you can cite "personal correspondence" if you are indeed the only person he has shared this with. If he can't remember where he got the information, then google for it, and if you still can't find a citation for that precise piece of information, don't bother citing it at all.

If this piece of information is notable enough, then it may well merit its own page. In the case of the classical probability model (good page, btw, don't worry about notation conventions) I'd be prepared to believe that the email conversation probably (no pun intended) didn't contain anything that can't be found in books.

Citations shouldn't need you to bust a gut. I think what we're doing at the moment is adequate. Your idea of linking to Khan Academy was inspired.

While I'm about it, if you know stuff about [P. Arvanites] (and he consents to this knowledge being in the public domain), feel free to put a page up on ProofWiki and so you'll be able to link to him. --prime mover 14:45, 1 December 2011 (CST)

Change to MathWorld citation template
I noticed (based on One-to-One and Strictly Between) that some pages on MathWorld are credited to authors other than Eric W. Weisstein, and so require that author to be included in the citation.

I have fixed the template (which is now "MathWorld" not "Mathworld", that's just me tidying up) so as to be able to include the author (which, if not given, defaults to the "Weisstein, Eric W." format as per normal).

What you need to do is add "author=author-name" and "authorpage=author-pagename" where "author-name" is the displayname of the author and "author-pagename" is the name of the html file on MathWorld (not including the full path, not including the extension).

An example, using the placeholder names from above (any other parameters the template takes are unchanged):
 * <code><nowiki>{{MathWorld|author = author-name|authorpage = author-pagename}}</nowiki></code>

If the page is given as written by "Weisstein, Eric W." then you should not add the "author" and "authorpage" tags.

I have included this info in the usage section of the Template:MathWorld page itself, but I'm bringing it to your attention because I know you've been active in using it.

Chx. --prime mover 02:55, 31 December 2011 (CST)

intuition
Are people all right with my "intuition" sections, like I put here and here? --GFauxPas 06:54, 23 January 2012 (EST)
 * I am. I think some theorems benefit from a sort of colloquial phrasing of the result; this can prevent annoying misconceptions and confusion. --Lord_Farin 07:14, 23 January 2012 (EST)
 * I think so, as long as they are grammatical enough and don't use too much colloquial language. Not that colloquialism is bad in itself, but it can grate a little, particularly to those whose culture is not the same as the one from which those colloquialisms originate. For example, I can't stand "... and we're done" because I have a problem with using "done" to mean "finished". Possibly just my problem, but then others may also share this. --prime mover 08:44, 23 January 2012 (EST)

Vector Arrows
Book: Linear Algebra, 3rd edition, by John B. Fraleigh and Raymond A. Beauregard.

Context: $\R^n$ considered as a Euclidean space.

$O = \mathbf{0}$.

Visual representation in $3$-space: where the $x$, $y$ and $z$ axes intersect.

Here's the juicy part, I'll paraphrase some parts.

We are accustomed to visualizing an ordered pair or triple as a point in the plane or in space and denoting it geometrically by a dot...Physicists have found another very useful geometric interpretation in their consideration of forces acting on a body...[stuff about magnitude, direction]...It is natural to represent a force by an arrow...such an arrow is a force vector.

Using a rectangular coordinate system in the plane, note that if we consider a force vector to start from the origin $(0, 0)$, then the vector is completely determined by the coordinates of the point at the tip of the arrow. Thus we can consider each $x \in \R^2$ to represent a vector in the plane as well as a point in the plane. When we wish to regard an ordered pair as a vector, we will use $[x, y]$ instead of $(x, y)$.

''Mathematically, there is no distinction between $(1, 2)$ and $[1, 2]$. The different notations merely indicate different views of the same element of $\R^2$.'' Each $n$-tuple can be viewed both as a point and as a vector.

So there you go, it's purely a matter of perspective. He's saying that they're both legitimate ways to view an n-tuple, but ultimately there's no mathematical difference, just different connotations. --GFauxPas 13:52, 26 January 2012 (EST)


 * Brilliant. Way to go. I note the amendments to the Vector page. --prime mover 16:18, 26 January 2012 (EST)


 * For the formal part of it, I quote: ...if we consider a force vector to start from the origin (0,0), then.... Also, in many (particularly mechanical) situations, the starting point of a vector is very significant for its effect on a system (for example, forces on the rigid axle of a wheel do practically nothing; forces on some surface point of the wheel generally make the wheel turn). Therefore, this assumption is quite questionable, especially when thinking about the starting point of a vector like $\mathbf u - \mathbf v$ where $\mathbf u, \mathbf v$ are vectors... I consider this case not closed yet. --Lord_Farin 17:27, 26 January 2012 (EST)


 * At one point during my mid-teens mathematics education, the concept "position vector (of a point)" was encountered, whose meaning was "the vector from the origin to that point", so one can call $\mathbf 0$ the "position vector of the origin" if that helps. --prime mover 17:40, 26 January 2012 (EST)


 * I was under the impression that vectors are not defined by their location, i.e., the vector issuing from $(0,0)$ and ending at $(1,0)$ is the exact same vector as the one starting from $(5,5)$ and ending at $(6,5)$. Certainly if we define a vector as "magnitude and direction" we don't see "location" there. --GFauxPas 17:44, 26 January 2012 (EST)

It is precisely that approach that I am questioning, on mentioned physical grounds. --Lord_Farin 17:49, 26 January 2012 (EST)
 * Well, in Khan Academy, Khan is very sure about that, and this is what my Linear Algebra professor, er, professes. I'll let you know if I find a book that says otherwise. Oh, if you want a physics source, check out http://www.learner.org/resources/series42.html video 5, which also takes this approach, though it may be dated. --GFauxPas 17:56, 26 January 2012 (EST)

LF I looked through my books to see if I can find any clues for you. I'm paraphrasing. They're only dealing with $\R^n$.

Jewett: Physics for Scientists and Engineers
An example of a vector quantity is displacement...The direction of the arrowhead represents the direction of the displacement, and the length of the arrow represents the magnitude of the displacement.

For many purposes, two vectors $\mathbf{a}$ and $\mathbf{b}$ may be defined to be equal if they have the same magnitude and point in the same direction:


 * $\mathbf{a} = \mathbf{b} \iff \left({||\mathbf{a}|| = ||\mathbf{b}|| \land \text{the vector arrows point in the same direction}}\right)$

This property allows us to move a vector to a position parallel to itself in a diagram without affecting the vector.

Larson
Larson bolds terms that he's defining for the first time. He's using $\R^2$ for the moment, he addresses $\R^n$ later.

A directed line segment is used to represent a vector quantity. The directed line segment $\overrightarrow{PQ}$ has initial point $P$ and terminal point $Q$. Directed line segments that have the same length and direction are equivalent.

(He's defining a new term here. Note he doesn't say equals. - GFP)

The set of all directed line segments that are equivalent to a directed line segment $\overrightarrow{PQ}$ is a vector in the plane and is denoted $\mathbf{v} = \overrightarrow{PQ}$...be sure you see that a vector in the plane can be represented by many different line segments--all pointing in the same direction and all the same length.

The component form of $\mathbf{v}$ is given by:


 * $\mathbf{v} = \langle{v_1,v_2}\rangle$ with $v_1, v_2 \in \R$

where it's implied that the initial point is the origin. If both the initial point and the terminal point lie at the origin, then $\mathbf v$ is called the zero vector and is denoted by $\mathbf{0} = \langle{0,0}\rangle$.

Two vectors $\mathbf{u} = \langle{u_1,u_2}\rangle$ and $\mathbf{v} = \langle{v_1,v_2}\rangle$ are equal iff $u_1 = v_1 \land u_2 = v_2$.

FWIW, note he's defining equal at the end here, he put it in bold. Also, Equality of Ordered Pairs.

I'm not convinced he has the same approach as Khan and Fraleigh. --GFauxPas 07:35, 27 January 2012 (EST)


 * I welcome the distinction between 'directed line segment' and 'vector' (where the latter is an equivalence class of the former). If this can be implemented, I am satisfied. It is good that we have found at least one reference that calls full rigour to arms on this subject. --Lord_Farin 08:12, 27 January 2012 (EST)


 * On closer investigation, Khan treats vectors differently in the Linear Algebra videos than in the physics videos. I think the video I linked to on the definition:vector page is formal enough for your expectations. He distinguishes the arrow representation of a vector from the vector itself. A vector in $\R^n$ is, and is only, an ordered $n$-tuple of $n$ elements of $\R^1$. I guess this supports your view of physics as math without rigor. I'll have to watch his linear algebra videos much more carefully, then I'll try to fix the page.

On a side note, is it more common to say n-toople or n-tuh-pl? --GFauxPas 08:56, 27 January 2012 (EST)


 * N-tuh-pl when used as an adjective (i.e. meaning "multiple" but specifically meaning "with $n$ parts"), n-toople for an ordered set of $n$ elements. This is because in this context it comes from the word "tuple". Except in a UK English accent it would be pronounced something like "n-tyoople". (Sharp-eyed observers would then say: "Which UK English accent? There are thousands!" but y'all know wha'mean, innit?) --prime mover 04:38, 29 January 2012 (EST)


 * I always say n-toople, but that is because this is also how it is pronounced in Dutch. As I deal mostly with Dutch students, I consider this common practice. I actually enjoyed watching the Khan Academy video (even though I already knew all the material covered), especially where he takes care to warn about identifying a vector with the arrow interpretation; note that this distinction is consistent with the proposed difference between 'directed line segment' and 'vector'. --Lord_Farin 10:14, 27 January 2012 (EST)
 * I'm glad you like it. I find it's worth watching Khan's videos even if you know the material, as it gives me ideas on how to explain it to people who don't know; cf. my intuition sections. Anyway, how about this: I'll do an entry on Euclidean $n$-space, I'll consolidate the stuff about arrows and mention that this is the primary interpretation used in physics even though it's not 100% correct, and you do other vector spaces? --GFauxPas 10:18, 27 January 2012 (EST)


 * I am afraid I don't know what you mean. There has never been an occasion on which I have even thought about drawing a vector space of functions with arrows. Arrows only apply to things we can imagine (that is, mathematically, $\R^n$ for $n \le 3$; $n \le 4$ for some gifted persons). --Lord_Farin 10:32, 27 January 2012 (EST)


 * FWIW, I approve of the vector/directed line segment distinction, although I'd probably call a directed line segment a vector (physics). Honestly, I think from a mathematical standpoint, the Vector page is fine now (up to maybe reordering the material a bit); the question just arises for physics vectors. Of course, it wouldn't make sense for standard linear algebra vectors to have endpoints, since one of the central requirements in a vector space is additive commutativity, and by assigning endpoints it becomes hard to define addition at all (and impossible to preserve commutativity, as far as I can tell...) Incidentally, I'd say n-tuh-pl, and that's coming from a US English background. Of course, I'm not exactly an expert on pronunciation, but I'd say that either way is fine. --Alec (talk) 10:48, 27 January 2012 (EST)
 * I think viewing a vector as an arrow is kind of like viewing a function as the graph of the function, am I wrong? The graph and the function are distinct, but we view the graph as an interpretation of the function. E.g., if you define the definite integral as the limit of a Riemann sum, it can be represented as the signed area bounded by the graph and the $x$-axis or whatever. Similarly, the real number line is used as a geometric interpretation of $\R$. --GFauxPas 15:01, 27 January 2012 (EST)

That is quite a good analogy, I think. Be sure to look at my contribution over at Definition talk:Vector. --Lord_Farin 17:57, 27 January 2012 (EST)
 * The confusion between vectors and directed line segments seems to arise from the fact that in order to illustrate a vector, the teacher has to draw an arrow on the board somewhere. Therefore the student's natural interpretation is to think: "Aha - that's the point at which the vector is applied." In fact, a vector applies to every point in the space simultaneously, and can perhaps better be illustrated by covering the plane with arrows (making it look a bit like a weather map). As this is easier to do in a computer environment than with talk-n-chalk, perhaps this is the approach we might want to adopt. --prime mover 04:43, 29 January 2012 (EST)

New templates for you
We have new templates for the Tarski stuff as follows:
 * a) Template:AnalogueLink which is a generic template allowing an also-see into a page:
 * thatpage, an analogue of thispage in the context of thatcategory.
 * b) Template:TarskiAxiomLink which uses Template:AnalogueLink directly to create an also-see to:
 * thataxiom, an analogue of thispage in the context of Tarski's Geometry.
 * c) Template:TarskiGeometryCitation, which bags up all the complicated mess when specifying the link to the document online.

<3 --GFauxPas 05:33, 29 January 2012 (EST)

Euclidean space
I notice you say 'Euclidean $n$-space' wherever you mean $\R^n$. However, according to Definition:Euclidean Space, the former already includes a certain metric on $\R^n$ as well (the Euclidean metric). Therefore, I consider this not good practice. I recommend using just $\R^n$ or 'Real $n$-space' (I always just use $\R^n$). --Lord_Farin 06:51, 30 January 2012 (EST)
 * I was taught the definitions as synonymous. After googling it a little, it seems that both usages occur, even though the definitions are not the same: e.g. http://mathworld.wolfram.com/EuclideanSpace.html uses it as synonymous with $\R^n$, and http://tutorial.math.lamar.edu/Classes/LinAlg/EuclideanSpace.aspx uses it like the def'n already on PW. --GFauxPas 06:59, 30 January 2012 (EST)
 * I'm with LF here. "Euclidean space" implicitly bears the assumption that the "usual metric" is in place, and this is what is used in conventional physics and applied mathematics where the space is modelling the conventional spatial universe - which is most of the time, and all of the time at elementary level. However, we have to be careful here as we have approached this from the pure-mathematical direction.
 * There is already a page defining "real $n$-space" somewhere in the database (in linear algebra, I believe), so I suggest that is what is linked to. --prime mover 08:08, 30 January 2012 (EST)
 * When you say "you're with LF", you're making it sound like I was disagreeing with someone! I'll use the def'n on PW, then. --GFauxPas 08:12, 30 January 2012 (EST)
 * 'I'm with' means 'I agree with'. If you are in disagreement with what LF says, then yes. If not, then no. Apologies for having caused confusion. I'll shut up. --prime mover 09:01, 30 January 2012 (EST)
 * No worries. Must be an American thing, when I say "I'm with him" it implied he's arguing with someone. Don't shut up, I like it when you talk. --GFauxPas 09:27, 30 January 2012 (EST)

categories
As "Matrix Algebra" is already a subcategory of "Linear Algebra", how useful is it to put such results into both categories? My thinking was: matrices have a use in the field of linear algebra as a mathematical technique. As such, Matrix Algebra is also a subcategory of Algebra, in which it can be considered independently of the field of linear algebra (e.g. for results in graph theory). So results and definitions defined purely in the context of manipulation of matrices (without any restriction on what those results are to be applied to) can remain in this self-contained category called "Matrix Algebra", which can then be included directly into any other category (and I know of quite a few coming down the road) by adding the name of that category directly to the Category:Matrix Algebra page. In that way it will be unnecessary to go back and add a category name (e.g. in this case "Linear Algebra") to all the individual results in Matrix Algebra that use results / definitions in this area.

The idea is to make sure that the sizes of the categories do not grow overlarge. As (by default) only 200 entries appear on the category list page at any one time, more than that and it begins to be difficult to navigate your way through to the result you want.

What does anyone else think? --prime mover 02:37, 12 February 2012 (EST)


 * Actually I am doing this myself while processing Conway's functional analysis book. Some results are just important/well-known enough that someone browsing e.g. Functional Analysis should find them, while they also appear in, say, Hilbert Spaces. Not a problem for me; in case a category grows too large, some measures can be taken to prevent the already notoriously hard search for results already up from becoming too difficult. However, I think that in such cases, a Category:Landmarks or something like that could be invented. As for Matrix and Linear Algebra, the two are fundamentally intertwined in my mind, hence I can't usually decide whether something should be in one or the other, but not both. --Lord_Farin 18:59, 12 February 2012 (EST)

Intuitionism / Constructivism
The term which I learned as "intuitionism" seems nowadays to be the same as "constructivism". I found this fascinating article just now:
 * Constructivism is Difficult

on a website which we may want to study.

This may give some background into this whole philosophical quagmire. --prime mover 03:22, 12 February 2012 (EST)


 * Following a course on intuitionistic mathematics at the moment; the two might combine quite well. The lecturer said there will be course notes; I will refer to them if they appear in PDF. --Lord_Farin 18:59, 12 February 2012 (EST)

Theorem Holds in All Models
Anyone know the page name to the theorem that if a theorem is a theorem the theorem has to hold in all models theorem theorem theorem? I can't find it theorem --GFauxPas 08:30, 12 February 2012 (EST)


 * That usually goes by the name of 'Soundness Theorem' (i.e., anything you can prove is true (where true means 'true in all models')). --Lord_Farin 18:59, 12 February 2012 (EST)

Attribution of Sum of Reciprocals is Divergent/Proof 1
You left a comment in the "Historical Note" section of Sum of Reciprocals is Divergent (which has now been moved to Sum of Reciprocals is Divergent/Proof 1) to the effect that you have uncovered evidence that it wasn't Bernoulli who discovered this, but it was in fact Oresme (which would have been some 400 years earlier).

Are you able to find out where you found this evidence? It's an interesting snippet of information to add, and it would be good to find a citation for it. --prime mover 05:16, 11 March 2012 (EDT)


 * Larson says:

''One way to show that the harmonic series diverges is attributed to Jakob Bernoulli. He grouped the terms of the harmonic series as follows:''


 * $ 1 + \frac 1 2 + \underbrace{\frac 1 3 + \frac 1 4}_{> \frac 1 2} +  \underbrace{\frac 1 5 + \cdots + \frac 1 8}_{> \frac 1 2} +  \underbrace{\frac 1 9 + \cdots + \frac 1 {16}}_{> \frac 1 2} +  \underbrace{\frac 1 {17} + \cdots + \frac 1 {32}}_{> \frac 1 2} + \cdots$
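Each underbraced group sums to at least $\frac 1 2$ (the $k$th group has $2^k$ terms, each at least $\frac 1 {2^{k+1}}$), so the partial sums are unbounded; completing the argument that Larson leaves as an exercise:

```latex
\sum_{k = 1}^{2^{n+1}} \frac 1 k
  \ge 1 + \frac 1 2 + \underbrace {\frac 1 2 + \cdots + \frac 1 2}_{n \text { groups}}
  = \frac 3 2 + \frac n 2
  \to \infty
```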

Larson doesn't finish the proof; that's left as an exercise. But http://mathworld.wolfram.com/HarmonicSeries.html attributes this proof to Oresme. I don't know which is more reliable. --GFauxPas 08:57, 11 March 2012 (EDT)
 * Just pointing out that Wolfram MathWorld doesn't say which proof Mengoli, Johann Bernoulli and Jakob Bernoulli used, only that they had a proof. Do you have a source that they had the same proof? Is it implied in MathWorld? --GFauxPas 09:28, 11 March 2012 (EDT)


 * I'll take a look in my copy and see what it says, but I was assuming that (since this is the proof that was being discussed in MathWorld) this is what it is. --prime mover 09:40, 11 March 2012 (EDT)


 * It's also worth pointing out that all the proofs using calculus in some way require results which hadn't been discovered at the time. If there was another simple proof like Proof 1, it would have been documented eagerly by now. --prime mover 10:09, 11 March 2012 (EDT)

Vector-valued functions
Just a mental note that formally, scalar multiplication and addition of vector-valued functions have not been defined; note that it is probably an instantiation of Definition:Induced Structure. However, that page and its associates could do with a rewrite in due time. --Lord_Farin 06:07, 15 March 2012 (EDT)
 * Can we use this? Definition:Vector Sum. And do we have addition of real functions defined? --GFauxPas 09:10, 15 March 2012 (EDT)


 * In fact, Definition:Vector Sum is necessary to make sense of the right-hand $\oplus$ on Definition:Induced Structure. I think we can invoke Mappings to R-Algebraic Structure from Similar R-Algebraic Structure (with $G = \R^n$, $X=\R$), but as mentioned, this particular (intuitively very natural) part of PW needs to be cleaned and made rigorous. It may be best to leave it for now, as the stuff is intuitively overwhelmingly clear. --Lord_Farin 09:23, 15 March 2012 (EDT)


 * Sho' thing. Oh, and the problem is only going to get worse, as I add theorems for
 * $D_x(f(x)\mathbf{r}(x))$ (well, that's covered by scalar multiplication. Maybe.)
 * $D_x(\mathbf{r}(x) \cdot \mathbf{q}(x))$
 * $\mathbf{r,q}:\R \to \R^3, D_x(\mathbf{r}(x) \times \mathbf{q}(x))$ --GFauxPas 09:27, 15 March 2012 (EDT)
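Before the formal pages exist, here's a throwaway numeric sketch (my own, with made-up sample functions) checking that the dot and cross products obey the expected Leibniz form $D_x(\mathbf r \cdot \mathbf q) = \mathbf r' \cdot \mathbf q + \mathbf r \cdot \mathbf q'$, and likewise for $\times$:

```python
# Hypothetical sketch (not ProofWiki code): finite-difference check of the
# product rules for sample vector-valued functions r, q : R -> R^3.
import math

def r(x): return [math.sin(x), x * x, math.exp(x)]
def q(x): return [math.cos(x), x, 1.0 / (1 + x * x)]

def dot(u, v):
    return sum(p * s for p, s in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def num_diff(f, x, h=1e-6):
    """Central difference, componentwise if f returns a vector."""
    fp, fm = f(x + h), f(x - h)
    if isinstance(fp, list):
        return [(p - m) / (2 * h) for p, m in zip(fp, fm)]
    return (fp - fm) / (2 * h)

x0 = 0.7
dr, dq = num_diff(r, x0), num_diff(q, x0)

# dot product rule: D_x (r . q) = r' . q + r . q'
lhs = num_diff(lambda t: dot(r(t), q(t)), x0)
rhs = dot(dr, q(x0)) + dot(r(x0), dq)
assert abs(lhs - rhs) < 1e-5

# cross product rule: D_x (r x q) = r' x q + r x q'
lhs_c = num_diff(lambda t: cross(r(t), q(t)), x0)
rhs_c = [p + s for p, s in zip(cross(dr, q(x0)), cross(r(x0), dq))]
assert all(abs(p - s) < 1e-5 for p, s in zip(lhs_c, rhs_c))
```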

Yes, $[\R\to\R^n]$ is a $[\R\to\R]$-module as well (an (abelian) group with multiplication by functions $f:\R\to\R$), which indeed makes matters even worse. The inner product is rapidly approaching the realm of analysis in multiple variables, along with its advanced notions of differentiation (sensing a possible clash of use in the $D$ notation here, btw). The advantage of that theory is that it is intrinsic: in the particular case of the inner product, it is necessary to prove that the result does not depend on the particular basis chosen, generally a painstaking exercise. Again, had I limitless time to spend on PW, I would have a few more books to cover, in particular one addressing all these rigorous foundations for (real) analysis in more variables. --Lord_Farin 09:54, 15 March 2012 (EDT)
 * ...but I can/should still put up the proofs, right? --GFauxPas 09:56, 15 March 2012 (EDT)
 * Sure, all I'm saying is that I hope to eventually reach the point that everything is rigorous, and the (quite short) proofs using analysis in more variables can be added. This could be months at the least, so please, do continue. --Lord_Farin 09:59, 15 March 2012 (EDT)

Good to see this heavyweight vector calculus stuff going in. It's so easy to get bogged down in the foundations when all you want to do is plug in some 3-D vectors and watch it rip! Here's to gradient, divergence and curl ...--prime mover 14:37, 15 March 2012 (EDT)
 * Np, glad to help. I'm just glad all the derivatives of products are of the same form as $D_xf(x)g(x)$, makes it easy to remember. Which reminds me, I still have to do $D_xf(x)\mathbf{r}(x)$...Oh, and you'd help me out by completing Definition:Derivative/Vector-Valued Function, because I'm not good at transbificating. Plus it would be pretty if this definition exactly matched the other definitions of derivatives, which of course means the same author of all pages! This is of course a ridiculous excuse for me not doing it myself, but saying "I'm lazy and I'll do it later" doesn't sound nice. --GFauxPas 14:59, 15 March 2012 (EDT)
 * LF you wrote there might be a problem in the future with more than one use of $D$, what were you referring to? --GFauxPas 17:34, 15 March 2012 (EDT)
 * Currently, we write $D_x$ for $\dfrac{\mathrm d}{\mathrm dx}$. In the language of multidimensional real analysis, $D$ becomes a map 'computing the total derivative'; the derivative is total in the sense that it is independent of the direction you differentiate in (in $\R$ this doesn't arise, obviously). Effectively, it is a map $Df:\R^n\times \R^n\to\R^n$, linear in the second argument, written $(x,v)\mapsto D_v f(x) \equiv Df(x,v)$ (differentiation of $f$ at the point $x$ in the direction $v$). The final point is that for $\R$ this comes down to $D_x f(x) = D_1 f(x)$, giving obvious problems as the left-hand side can now mean two different things (differing by a factor $x$). Hope that made at least a bit of sense. --Lord_Farin 18:08, 15 March 2012 (EDT)
 * It is important to add that you can think of $Df$ as being the 'matrix of partial derivatives'; in fact, the two can be shown to be the same. It's only that a matrix requires a particular choice, namely of a basis for your vector spaces. --Lord_Farin 18:10, 15 March 2012 (EDT)
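A tiny numeric illustration (my own sketch, sample function made up) of the clash for $n = 1$: the directional derivative is linear in the direction, $D_v f(x) = v \, f'(x)$, so reading $D_x f(x)$ as 'derivative in direction $x$' picks up the extra factor $x$:

```python
# Sketch (hypothetical example): for f : R -> R, the directional derivative
# D_v f(x) = v * f'(x), approximated by a central difference along direction v.
import math

def dir_deriv(f, x, v, h=1e-6):
    """Central-difference approximation to D_v f(x)."""
    return (f(x + h * v) - f(x - h * v)) / (2 * h)

f = math.sin
x = 2.0
classical = math.cos(x)                      # f'(x) in the usual sense
assert abs(dir_deriv(f, x, 1.0) - classical) < 1e-6
# the 'direction x' reading differs by the factor x:
assert abs(dir_deriv(f, x, x) - x * classical) < 1e-5
```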
 * Out of my league, I'll wait until I learn differentiation of functions of more than one variable --GFauxPas 19:25, 15 March 2012 (EDT)

Is there such a definition of the derivative at a point for vector-valued functions?


 * $\displaystyle \lim_{x \to c} \frac {\mathbf r \left({x}\right) - \mathbf r \left({c}\right)}{x -c}$

What about for complex functions? --GFauxPas 08:25, 18 March 2012 (EDT)
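For what it's worth, a numeric sketch (my own, with a made-up curve) showing the proposed quotient converging componentwise for $\mathbf r(x) = (\cos x, \sin x, x)$:

```python
# Sketch with a hypothetical curve r(x) = (cos x, sin x, x): the quotient
# (r(x) - r(c)) / (x - c) should approach (-sin c, cos c, 1) componentwise.
import math

def r(x):
    return [math.cos(x), math.sin(x), x]

c = 1.0
exact = [-math.sin(c), math.cos(c), 1.0]
errors = []
for h in [1e-1, 1e-2, 1e-3]:
    quot = [(p - q) / h for p, q in zip(r(c + h), r(c))]
    errors.append(max(abs(p - q) for p, q in zip(quot, exact)))

# the error shrinks roughly linearly with h (one-sided difference)
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-3
```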

Oh, and should I create a category like "Vector Calculus" or "Vector-Valued Calculus" or something? --GFauxPas 08:29, 18 March 2012 (EDT)

Differentiability of Functions of >1 variable
Larson's definition of differentiability for functions of more than one variable is very non-intuitive (I'm going to use $f: (x, y) \mapsto f(x,y)$ for ease of asking the question, though the question is for any number of variables):


 * $f$ is differentiable at $(x,y) = (x_0,y_0) \iff \exists \Delta z:$


 * $\Delta z = f_x(x_0,y_0)\Delta x + f_y(x_0,y_0)\Delta y + \varepsilon_1 \Delta x + \varepsilon_2 \Delta y$

such that $\varepsilon_1, \varepsilon_2 \to 0$ as $(\Delta x, \Delta y) \to (0,0)$.
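A worked instance of this definition (my own example, not Larson's): for $f(x,y) = xy$ at $(x_0, y_0)$ the expansion falls out directly:

```latex
\Delta z = f \left({x_0 + \Delta x, y_0 + \Delta y}\right) - f \left({x_0, y_0}\right)
         = y_0 \Delta x + x_0 \Delta y + \Delta x \, \Delta y
```

Since $f_x(x_0,y_0) = y_0$ and $f_y(x_0,y_0) = x_0$, we can take $\varepsilon_1 = \Delta y$ and $\varepsilon_2 = 0$ (the split is not unique); both tend to $0$ as $(\Delta x, \Delta y) \to (0,0)$, so $f$ is differentiable everywhere.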

Is there an equivalent definition that's more intuitive? Why not define "differentiable" as "differentiable iff all partial derivatives exist"? --GFauxPas 12:42, 28 March 2012 (EDT)


 * As to your last question: Because it isn't enough; derivatives in all directions need to exist.
 * A general definition can be given as follows:


 * A mapping $f: \R^n \to \R^p$ (or defined on some subset of $\R^n$) is said to be differentiable at $a \in \R^n$ iff:
 * There exists a linear mapping $Df(a):\R^n\to\R^p$ (that is, simply put, a matrix) such that:
 * $\displaystyle \lim_{\left\Vert{h}\right\Vert\to 0, h \in \R^n} \frac {\left\Vert{f(a+h)-f(a)-Df(a)h}\right\Vert} {\left\Vert{h}\right\Vert} = 0$


 * This comes down to the existence of a linear approximation $Df(a)$ of $f$ near $a$ which is good enough to make the limit zero (for comparison, you can take $n=p=1$; it will reduce to the familiar expression for $f:\R\to\R$). Note that in the fraction, the norm in the numerator is in $\R^p$, while the one in the denominator is in $\R^n$. Note that $Df(a)h$ means 'the mapping $Df(a)$ evaluated at $h \in \R^n$', not your standard multiplication (well, they are the same iff $n=p=1$; alternatively, this is matrix multiplication with a vector). Note that this is different from the existence of all partial derivatives, since the $h \in \R^n$ need to range over a sphere around zero, not just over the coordinate axes. If it is not entirely clear, please say so, and I will demonstrate by means of a small example. --Lord_Farin 14:35, 28 March 2012 (EDT)
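To make the limit concrete, here is a numeric sketch (my own made-up example) for $f(x,y) = (x^2 + y, xy)$, with the candidate $Df(a)$ taken as the matrix of partial derivatives; the quotient shrinks linearly with $\left\Vert{h}\right\Vert$:

```python
# Hypothetical example: f : R^2 -> R^2, f(x, y) = (x^2 + y, x y).
# Candidate linear map Df(a) = matrix of partial derivatives at a.
import math
import random

def f(v):
    x, y = v
    return [x * x + y, x * y]

def Df(a):
    x, y = a
    return [[2 * x, 1.0],   # partials of x^2 + y
            [y, x]]         # partials of x y

def matvec(M, h):
    return [sum(m * c for m, c in zip(row, h)) for row in M]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

a = [1.0, 2.0]
random.seed(0)
for scale in [1e-1, 1e-3, 1e-5]:
    h = [scale * random.uniform(0.5, 1.0), scale * random.uniform(0.5, 1.0)]
    fa_h = f([a[0] + h[0], a[1] + h[1]])
    remainder = [p - q - s for p, q, s in zip(fa_h, f(a), matvec(Df(a), h))]
    quotient = norm(remainder) / norm(h)
    assert quotient < 2 * scale  # quotient -> 0 as ||h|| -> 0
```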


 * Alternatively, see this, p. 792 --Lord_Farin 14:40, 28 March 2012 (EDT)


 * How incredibly convenient that in today's Linear Algebra class I first learned about linear maps as matrices! An example would be great. --GFauxPas 15:11, 28 March 2012 (EDT)


 * I thought that the existence of derivatives in all directions does not necessarily ensure differentiability. –Abcxyz (talk | contribs) 20:50, 28 March 2012 (EDT)
 * Correct, but they need to exist for differentiability to possibly apply. I will hopefully get to the example later today. --Lord_Farin 04:42, 29 March 2012 (EDT)

Okay, so let $f: \R^{2n}\simeq\R^n \times \R^n \to \R, (x,y)\mapsto \left\langle{x,y}\right\rangle$.

Say we want to know if $f$ is differentiable at $(a,b)\in\R^n\times\R^n$; then let $h = (h_1,h_2)\in\R^{2n}$, and compute:
 * $f(a,b)-f(a-h_1,b-h_2) = \left\langle{a,b}\right\rangle - \left\langle{a-h_1,b-h_2}\right\rangle = \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle - \left\langle{h_1,h_2}\right\rangle$

Using Cauchy-Schwarz, the last term can be estimated by $\left\Vert{h}\right\Vert^2$, as the norms of $h_1,h_2$ are dominated by that of $h$. What remains is linear in $h$ (a sum of inner products). Thus, putting $Df((a,b)) = (h\mapsto \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle)$, we compute the limit to be zero (by the Cauchy-Schwarz argument).

There is a theorem (not too hard) establishing that the linear mapping $Df((a,b))$ is unique; hence we conclude that it equals the given expression (compare the case $n=1$ for further insights). Hopefully, this slightly nontrivial example gives a bit of insight. --Lord_Farin 06:42, 29 March 2012 (EDT)
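A quick numeric check of this example (my own sketch): with $Df((a,b))h = \left\langle{h_1,b}\right\rangle + \left\langle{a,h_2}\right\rangle$, the remainder is exactly $\left\langle{h_1,h_2}\right\rangle$, so by Cauchy-Schwarz the quotient is bounded by $\left\Vert{h}\right\Vert$:

```python
# Numeric sketch of the worked example: f(x, y) = <x, y> on R^n x R^n,
# candidate Df((a, b)) h = <h1, b> + <a, h2>; the remainder is <h1, h2>.
import math
import random

n = 3

def inner(u, v):
    return sum(p * q for p, q in zip(u, v))

def norm(v):
    return math.sqrt(sum(c * c for c in v))

random.seed(1)
a = [random.uniform(-1, 1) for _ in range(n)]
b = [random.uniform(-1, 1) for _ in range(n)]

for scale in [1e-1, 1e-3, 1e-5]:
    h1 = [scale * random.uniform(-1, 1) for _ in range(n)]
    h2 = [scale * random.uniform(-1, 1) for _ in range(n)]
    lhs = inner([p + q for p, q in zip(a, h1)],
                [p + q for p, q in zip(b, h2)])
    lin = inner(h1, b) + inner(a, h2)
    remainder = abs(lhs - inner(a, b) - lin)
    h_norm = norm(h1 + h2)   # norm of h = (h1, h2) in R^(2n)
    # Cauchy-Schwarz: remainder = |<h1, h2>| <= ||h1|| ||h2|| <= ||h||^2,
    # so the quotient remainder / ||h|| is bounded by ||h|| and tends to 0
    assert remainder / h_norm <= h_norm + 1e-12
```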


 * Also, when considering $f:\R\to\R$, the standard derivative $f'$ is obtained by the canonical identification $\operatorname{Lin}(\R,\R)\simeq \R,Df(a)\mapsto Df(a)1 = f'(a)$. Because $Df(a)1$ is also often denoted $D_af(1)$, this is the origin of the possible confusion I expressed earlier. --Lord_Farin 06:45, 29 March 2012 (EDT)