Regression Coefficients of Normally Distributed Random Variable
Theorem
Let $X$ be a random variable.
Let $x$ be a given value of $X$.
Let $Y$ be a random variable with a normal distribution.
Let the expectation of $Y$ for a given value $x$ be the linear function $\beta_0 + \beta_1 x$.
Let the variance $\sigma^2$ of $Y$ (usually unknown) be independent of $x$.
Let $S$ be a sample of $n$ independent pairs of observations $\tuple {x_i, y_i}$ for $i = 1, 2, \ldots, n$.
Then the maximum likelihood estimators of $\beta_0$ and $\beta_1$ are given by:
\(\ds \beta_1 = \dfrac {\ds \sum_i \paren {x_i - \overline x} \paren {y_i - \overline y} } {\ds \sum_i \paren {x_i - \overline x}^2}\)
\(\ds \beta_0 = \overline y - \beta_1 \overline x\)
where:
\(\ds \overline x = \sum_i \dfrac {x_i} n\)
\(\ds \overline y = \sum_i \dfrac {y_i} n\)
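As an illustrative check (the data here are invented for the example), consider the sample $\tuple {1, 1}, \tuple {2, 2}, \tuple {3, 4}$, so that $\overline x = 2$ and $\overline y = 7/3$. Then:
\(\ds \beta_1 = \dfrac {\paren {-1} \paren {1 - 7/3} + \paren 0 \paren {2 - 7/3} + \paren 1 \paren {4 - 7/3} } {\paren {-1}^2 + 0^2 + 1^2} = \dfrac 3 2\)
\(\ds \beta_0 = \dfrac 7 3 - \dfrac 3 2 \times 2 = -\dfrac 2 3\)
The fitted residuals $1/6, -1/3, 1/6$ sum to zero, as the least squares equations require.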
Proof
Under the normality assumption, the maximum likelihood estimators coincide with those obtained by the method of least squares.
That is, the aim is to minimize $\ds \sum_i \paren {y_i - \beta_0 - \beta_1 x_i}^2$ with respect to $\beta_0$ and $\beta_1$.
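What follows is a sketch of that derivation under the stated assumptions; it is not yet a formal proof.
For the given sample, the likelihood as a function of $\beta_0$ and $\beta_1$ is:
\(\ds L = \prod_{i = 1}^n \dfrac 1 {\sqrt {2 \pi \sigma^2} } \exp \paren {-\dfrac {\paren {y_i - \beta_0 - \beta_1 x_i}^2} {2 \sigma^2} }\)
so that:
\(\ds \ln L = -\dfrac n 2 \ln \paren {2 \pi \sigma^2} - \dfrac 1 {2 \sigma^2} \sum_i \paren {y_i - \beta_0 - \beta_1 x_i}^2\)
As $\sigma^2$ does not depend on $\beta_0$ or $\beta_1$, maximizing $\ln L$ over $\beta_0$ and $\beta_1$ is equivalent to minimizing the sum of squares.
Setting the partial derivatives with respect to $\beta_0$ and $\beta_1$ to zero gives the normal equations:
\(\ds \sum_i \paren {y_i - \beta_0 - \beta_1 x_i} = 0\)
\(\ds \sum_i x_i \paren {y_i - \beta_0 - \beta_1 x_i} = 0\)
The first yields $\overline y = \beta_0 + \beta_1 \overline x$, that is, $\beta_0 = \overline y - \beta_1 \overline x$.
Substituting this into the second, and using $\ds \sum_i \paren {x_i - \overline x} = 0$, yields:
\(\ds \beta_1 = \dfrac {\ds \sum_i \paren {x_i - \overline x} \paren {y_i - \overline y} } {\ds \sum_i \paren {x_i - \overline x}^2}\)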
This theorem requires a proof. You can help $\mathsf{Pr} \infty \mathsf{fWiki}$ by crafting such a proof.
Sources
- 1998: David Nelson: The Penguin Dictionary of Mathematics (2nd ed.): regression
- 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): regression