Talk:Weierstrass Approximation Theorem/Proof 2

Probabilistic structure
I am not very familiar with probabilistic proofs. Are you sure that such a proof does not require an assumption (trivial, natural, canonical, induced etc.) about the probability space you use? Maybe in the long term this proof should have its own page rather than being a second proof?--Julius (talk) 21:18, 17 October 2022 (UTC)


 * It's a proof of Weierstrass Approximation Theorem, and it's already got its own page, I don't understand. --prime mover (talk) 21:21, 17 October 2022 (UTC)


 * This is quite a common proof of this theorem. I think $X_k \sim \Binomial 1 p$ is enough for everything else to work. Caliburn (talk) 21:32, 17 October 2022 (UTC)


 * Random variables need a probability space, which is an additional structure on top of my assumptions (it's like comparing proofs in topological, metric and normed vector spaces: analogous but not congruent). Now I see the addition of Kolmogorov extension theorem, which is supposed to remedy this issue. I wonder whether this theorem says that function spaces belong to probability spaces like metric spaces belong to topological spaces. Then I would count such proofs as analogous but not equal. Congruent proofs would stick to function spaces but involve different lemmas.--Julius (talk) 21:43, 17 October 2022 (UTC)


 * I believe I understand your concern about the assumption. I added a remark on Kolmogorov Extension Theorem. We can also give a space explicitly as $\Omega := \set {0,1}^\N$ and $\Pr := \paren { \paren{1-p} \delta_0 + p \delta_1 }^{\otimes \N}$.
 * The proof has a solid justification but it is currently hard to add more details because Category:Probability Theory is not sufficiently developed yet. --Usagiop (talk) 21:49, 17 October 2022 (UTC)
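As a numerical sanity check of the explicit space above (a sketch only, not part of the proof): truncating $\Omega := \set {0,1}^\N$ to its first $n$ coordinates, one can sample i.i.d. Bernoulli coordinates under the product measure and watch $\map f {S_n / n}$ concentrate around $\map f p$. The test function $f$ and all parameters below are arbitrary choices for illustration.

```python
import random

random.seed(0)  # deterministic for reproducibility

def sample_mean_f(f, n, p, trials):
    """Monte Carlo estimate of E[f(S_n / n)], where S_n counts the 1s among
    the first n coordinates of an i.i.d. Bernoulli(p) sequence, i.e. a
    finite-dimensional projection of the product measure on {0,1}^N."""
    total = 0.0
    for _ in range(trials):
        s = sum(1 for _ in range(n) if random.random() < p)
        total += f(s / n)
    return total / trials

f = lambda x: x * x   # arbitrary continuous test function on [0, 1]
p = 0.3
estimate = sample_mean_f(f, n=200, p=p, trials=2000)
# estimate should be close to f(p) = 0.09, up to a bias of order p(1-p)/n
# plus Monte Carlo noise
```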


 * Of course, simplification of the proof is welcome, but I think we should leave at least a red link to be able to use random variables. I mean, a random variable is a mapping defined on a sample space, and we have not defined this space yet.--Julius (talk) 22:56, 18 October 2022 (UTC)


 * "I wonder whether this theorem says that function spaces belong to probability spaces like metric spaces belong to topological spaces." What do you mean? --Usagiop (talk) 21:54, 17 October 2022 (UTC)


 * Your last answer cleared my doubts. This was me looking for analogies: sometimes people lift the problem to a more abstract space, solve it there, and then return to the original space. Never mind.--Julius (talk) 21:59, 17 October 2022 (UTC)


 * OK, but this is not an abstraction. Actually, it is the same as Weierstrass Approximation Theorem/Proof 1, just written in another language. An advantage of this proof is the reuse of some known results. --Usagiop (talk) 22:10, 17 October 2022 (UTC)


 * If it is not an abstraction, then can the space of continuous functions (with whatever trivial supplementary structure is needed) be repackaged into a probability space? Then it would mean that we are running the same calculation in two different environments concurrently.--Julius (talk) 22:56, 18 October 2022 (UTC)


 * No, there is no such relation. This proof is just a trick; there is no essential relation to the probability space. It says only that, for each $f \in \map C {\closedint 0 1}$:
 * $\ds \map f p = \lim_{n \to \infty} \expect {\map f {\dfrac {X_n} n} }$
 * where $X_n \sim \Binomial n p$. The variable $p \in \closedint 0 1$ corresponds to the parameter $p$ of the binomial distribution. I mean, $p$ has nothing to do with $\omega \in \Omega$. --Usagiop (talk) 01:07, 19 October 2022 (UTC)
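The identity above can be checked numerically. A minimal sketch (the expectation $\expect {\map f {X_n / n} }$ for $X_n \sim \Binomial n p$ is exactly the $n$-th Bernstein polynomial of $f$ evaluated at $p$; the function $f$ and the values of $n$, $p$ below are arbitrary illustrative choices):

```python
from math import comb

def bernstein(f, n, p):
    """E[f(X_n / n)] for X_n ~ Binomial(n, p), computed exactly:
    the n-th Bernstein polynomial of f evaluated at p."""
    return sum(f(k / n) * comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n + 1))

f = lambda x: x * x   # arbitrary continuous test function on [0, 1]
p = 0.3
approximations = [bernstein(f, n, p) for n in (10, 100, 1000)]
# these values tend to f(p) = 0.09 as n grows
```

For $f := x \mapsto x^2$ the expectation evaluates exactly to $p^2 + \dfrac {p \paren {1 - p} } n$, so the error decays like $1/n$, which is the role the variance bound plays in the proof.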


 * I mean Weierstrass Approximation Theorem/Lemma 1 is the same as Expectation of Binomial Distribution, just as Weierstrass Approximation Theorem/Lemma 2 is the same as Variance of Binomial Distribution. --Usagiop (talk) 22:24, 17 October 2022 (UTC)
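As a concrete cross-check of that correspondence (an illustrative sketch; $n = 20$ and $p = 0.25$ are arbitrary choices), the binomial mean and variance can be computed directly from the probability mass function and compared with $n p$ and $n p \paren {1 - p}$:

```python
from math import comb

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def binom_moments(n, p):
    """Mean and variance of Binomial(n, p), summed directly from the pmf."""
    mean = sum(k * binom_pmf(n, p, k) for k in range(n + 1))
    var = sum((k - mean) ** 2 * binom_pmf(n, p, k) for k in range(n + 1))
    return mean, var

mean, var = binom_moments(20, 0.25)
# mean should come out as np = 5.0 and var as np(1-p) = 3.75
```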


 * (I got edit conflicted a few times, and I think you've got it now, but this may be useful to people.) The introduction of random variables is an intermediate step: the proof doesn't try to restate the claim as a probabilistic one and then prove that, it just uses a random variable on the way to obtaining an estimate. This is basically like (well, it is :p) if you were working on bounding a real function, then rewrote it as, or bounded it above by, an integral, and started using integral inequalities to get an estimate on that integral and hence on the original function. That's all the proof really is behind the scenes. Calling it "random variables" blurs it a bit, but it makes the approach accessible to people with only elementary probability knowledge. Caliburn (talk) 22:01, 17 October 2022 (UTC)


 * Yes, just a trick, which makes the proof more elegant. --Usagiop (talk) 22:19, 17 October 2022 (UTC)


 * I haven't got a clue what everyone is talking about. Utterly incomprehensible. --prime mover (talk) 22:11, 17 October 2022 (UTC)