Maximum Likelihood Estimator from Likelihood Function
Theorem
Let $\FF$ be a one-parameter family of probability distributions whose parameter is $\theta$.
Let $X$ be a continuous random variable belonging to a member of $\FF$.
Let $\map {\mathrm L} \theta$ be the likelihood function of $\theta$ with respect to $X$.
Let $\EE$ be a maximum likelihood estimator for $\theta$ with respect to $X$.
Then $\EE$ can be found by:
- calculating the values of $\theta$ for which $\map {\dfrac \d {\d \theta} } {\map \ln {\map {\mathrm L} \theta} } = 0$
- determining which of these solutions corresponds to a maximum.
It is often the case that an iterative solution is needed.
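The procedure above can be sketched numerically. As an illustrative assumption (not part of the theorem), take $\FF$ to be the family of exponential distributions with rate $\theta$, so that $\map \ln {\map {\mathrm L} \theta} = n \ln \theta - \theta \sum x_i$ for a sample $x_1, \ldots, x_n$. The score equation $\map {\dfrac \d {\d \theta} } {\map \ln {\map {\mathrm L} \theta} } = 0$ is solved here by Newton-Raphson iteration, showing the kind of iterative solution mentioned above; the sample values are hypothetical.

```python
# Sketch: maximum likelihood estimation for an assumed exponential family,
# density f(x; theta) = theta * exp(-theta * x), theta > 0.
# The sample below is hypothetical, purely for illustration.
data = [0.5, 1.2, 0.3, 2.1, 0.8, 1.5]
n = len(data)
s = sum(data)

def score(theta):
    # d/dtheta of ln L(theta) = n * ln(theta) - theta * sum(x_i)
    return n / theta - s

def score_derivative(theta):
    # d^2/dtheta^2 of ln L(theta) = -n / theta^2; negative everywhere,
    # so the stationary point is indeed a maximum.
    return -n / theta ** 2

# Newton-Raphson iteration on the score equation.
theta = 1.0  # initial guess
for _ in range(50):
    step = score(theta) / score_derivative(theta)
    theta -= step
    if abs(step) < 1e-12:
        break

print(theta)  # converges to the closed-form solution n / sum(x_i)
```

For this particular family the score equation also has the closed-form solution $\hat \theta = n / \sum x_i$, which the iteration reproduces; the iterative machinery becomes essential for families where no such closed form exists.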
Proof
This theorem requires a proof. You can help $\mathsf{Pr} \infty \mathsf{fWiki}$ by crafting such a proof. To discuss this page in more detail, feel free to use the talk page. When this work has been completed, you may remove this instance of {{ProofWanted}} from the code. If you would welcome a second opinion as to whether your work is correct, add a call to {{Proofread}} the page.
Sources
- 1998: David Nelson: The Penguin Dictionary of Mathematics (2nd ed.) ... (previous) ... (next): maximum likelihood estimation
- 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ... (next): maximum likelihood estimation