Definition:Information Matrix
Definition
Let $\theta_1, \theta_2, \ldots, \theta_p$ be the parameters of a probability distribution with frequency function $\map f {x; \theta_1, \theta_2, \ldots, \theta_p}$.
Let $S$ be a sample of $n$ observations drawn from this distribution.
The information matrix is the square matrix of order $p$ whose $\tuple {i, j}$th element is given by:
- $n \map E {\paren {\dfrac {\partial \ln f} {\partial \theta_i} } \paren {\dfrac {\partial \ln f} {\partial \theta_j} } }$
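As a concrete illustration of the definition, the expectation above can be estimated numerically. The following sketch (the Bernoulli example and all variable names are assumptions, not part of the source) uses Monte Carlo simulation to approximate the single entry of the information matrix for a Bernoulli distribution with parameter $p$, where $\dfrac {\partial \ln f} {\partial p} = \dfrac x p - \dfrac {1 - x} {1 - p}$ and the exact value is known to be $\dfrac n {p \paren {1 - p} }$:

```python
import numpy as np

# Bernoulli(p) example (assumed, for illustration only):
# estimate n * E[(d ln f / dp)^2] by simulation and compare it
# with the closed form n / (p (1 - p)).
rng = np.random.default_rng(0)
p = 0.3
n = 5                                    # sample size from the definition
draws = rng.binomial(1, p, size=200_000)

# Score: d ln f / dp evaluated at each simulated observation x.
score = draws / p - (1 - draws) / (1 - p)

info = n * np.mean(score * score)        # Monte Carlo estimate
exact = n / (p * (1 - p))                # known closed form

print(info, exact)
```

For a distribution with $p > 1$ parameters the same recipe yields a $p \times p$ matrix: compute the score vector for each simulated observation and average the outer products.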
Also see
- Results about information matrices can be found here.
Sources
- 1998: David Nelson: The Penguin Dictionary of Mathematics (2nd ed.) ... (previous) ... (next): information: 2.
- 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ... (next): information: 2.