Definition:Sufficient Statistic/Definition 2
Definition
Let $X_1, X_2, \ldots, X_n$ form a random sample from a population whose probability distribution is determined by a parameter $\theta$.
Let $T$ be a sample statistic.
Let $I = \Img {\map T {X_1, X_2, \ldots, X_n} }$.
Let $D$ be the conditional joint distribution of $X_1, X_2, \ldots, X_n$ given $T = t$, for a given value of $\theta$.
Then $T$ is a sufficient statistic for $\theta$ if and only if $D$ does not depend on the value of $\theta$, for all $t \in I$.
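As a concrete illustration of this definition (not part of the original article), consider a random sample $X_1, \ldots, X_n$ of Bernoulli trials with parameter $p$, and take $T = \sum_i X_i$. The conditional probability of any particular sequence $x$ given $T = t$ is $p^t (1-p)^{n-t} / \binom n t p^t (1-p)^{n-t} = 1 / \binom n t$, which is free of $p$, so $T$ is sufficient. The sketch below verifies this numerically; the function name and the sample values of $p$ are illustrative choices, not taken from the source:

```python
from math import comb
from itertools import product

def conditional_prob(x, p):
    """P(X_1, ..., X_n = x | T = sum(x)) for an i.i.d. Bernoulli(p) sample.

    Computed as the joint probability of the sequence x divided by
    P(T = t), where T is the number of successes.
    """
    n, t = len(x), sum(x)
    joint = p ** t * (1 - p) ** (n - t)              # P(X = x)
    p_T = comb(n, t) * p ** t * (1 - p) ** (n - t)   # P(T = t), Binomial(n, p)
    return joint / p_T                               # equals 1 / C(n, t)

# For every sequence x, the conditional probability is the same
# whatever value of p we use: the distribution D is free of p.
n = 4
for x in product([0, 1], repeat=n):
    values = {round(conditional_prob(list(x), p), 12) for p in (0.2, 0.5, 0.9)}
    assert len(values) == 1
```

By contrast, a statistic such as $X_1$ alone is not sufficient here: the conditional distribution of the remaining observations given $X_1$ still depends on $p$.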
Also see
- Results about sufficient statistics can be found here.
Historical Note
The concept of a sufficient statistic was introduced by Ronald Aylmer Fisher in $1921$.
Sources
- 2011: Morris H. DeGroot and Mark J. Schervish: Probability and Statistics (4th ed.): $7.7$: Sufficient Statistics: Definition $7.7.1$