# Determinant of Matrix Product

## Theorem

Let $\mathbf A = \sqbrk a_n$ and $\mathbf B = \sqbrk b_n$ be square matrices of order $n$.

Let $\map \det {\mathbf A}$ be the determinant of $\mathbf A$.

Let $\mathbf A \mathbf B$ be the (conventional) matrix product of $\mathbf A$ and $\mathbf B$.

Then:

$\map \det {\mathbf A \mathbf B} = \map \det {\mathbf A} \map \det {\mathbf B}$

That is, the determinant of the product is equal to the product of the determinants.
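As a numerical sanity check (independent of the proofs below), the identity can be verified on a concrete example. The sketch below uses exact integer arithmetic, with the determinant computed by the Leibniz permutation formula; the matrices and helper names are illustrative.

```python
from itertools import permutations
from math import prod

def sign(p):
    # Parity of a permutation: -1 raised to the number of inversions.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(M):
    # Leibniz formula: sum over permutations of signed products.
    n = len(M)
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def matmul(A, B):
    # Conventional matrix product of square matrices of equal order.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
B = [[1, 2, 1], [0, 1, 3], [2, 0, 1]]

assert det(matmul(A, B)) == det(A) * det(B)
```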

### General Case

Let $\mathbf A_1, \mathbf A_2, \ldots, \mathbf A_n$ be square matrices of order $n$, where $n > 1$.

Then:

$\det \left({\mathbf {A_1} \mathbf {A_2} \cdots \mathbf {A_n}}\right) = \det \left({\mathbf {A_1}}\right) \det \left({\mathbf {A_2}}\right) \cdots \det \left({\mathbf {A_n}}\right)$
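The general case follows from the two-matrix case by induction on the number of factors; a sketch of the induction step, written here for a generic number $k$ of factors:

```latex
% Induction step: assuming the result for k - 1 factors,
% group the product and apply the two-matrix case.
\begin{aligned}
\det \left({\mathbf A_1 \mathbf A_2 \cdots \mathbf A_k}\right)
  &= \det \left({\mathbf A_1 \left({\mathbf A_2 \cdots \mathbf A_k}\right)}\right) \\
  &= \det \left({\mathbf A_1}\right) \det \left({\mathbf A_2 \cdots \mathbf A_k}\right) \\
  &= \det \left({\mathbf A_1}\right) \det \left({\mathbf A_2}\right) \cdots \det \left({\mathbf A_k}\right)
\end{aligned}
```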

## Proof 1

This proof assumes that $\mathbf A$ and $\mathbf B$ are $n \times n$-matrices over a commutative ring with unity $\left({R, +, \circ}\right)$.

Let $\mathbf C = \sqbrk c_n = \mathbf A \mathbf B$.

From Square Matrix is Row Equivalent to Triangular Matrix, it follows that $\mathbf A$ can be converted into an upper triangular matrix $\mathbf A'$ by a finite sequence of elementary row operations $\hat o_1, \ldots, \hat o_{m'}$.

Let $\mathbf C'$ denote the matrix that results from using $\hat o_1, \ldots, \hat o_{m'}$ on $\mathbf C$.

From Elementary Row Operations Commute with Matrix Multiplication, it follows that $\mathbf C' = \mathbf A' \mathbf B$.

Effect of Sequence of Elementary Row Operations on Determinant shows that there exists $\alpha \in R$ such that:

$\alpha \det \left({\mathbf A'}\right) = \det \left({\mathbf A}\right)$
$\alpha \det \left({\mathbf C'}\right) = \det \left({\mathbf C}\right)$

Let $\mathbf B^\intercal$ be the transpose of $\mathbf B$.

From Transpose of Matrix Product, it follows that:

$\left({\mathbf C'}\right)^\intercal = \left({\mathbf A' \mathbf B}\right)^\intercal = \mathbf B^\intercal \left({\mathbf A'}\right)^\intercal$

From Square Matrix is Row Equivalent to Triangular Matrix, it follows that $\mathbf B^\intercal$ can be converted into a lower triangular matrix $\left({\mathbf B^\intercal}\right)'$ by a finite sequence of elementary row operations $\hat p_1, \ldots, \hat p_{m''}$.

Let $\mathbf C''$ denote the matrix that results from using $\hat p_1, \ldots, \hat p_{m''}$ on $\left({\mathbf C'}\right)^\intercal$.

From Elementary Row Operations Commute with Matrix Multiplication, it follows that:

$\mathbf C'' = \left({\mathbf B^\intercal}\right)' \left({\mathbf A'}\right)^\intercal$

Effect of Sequence of Elementary Row Operations on Determinant shows that there exists $\beta \in R$ such that:

$\beta \det \left({\left({\mathbf B^\intercal}\right)'}\right) = \det \left({\mathbf B^\intercal}\right)$
$\beta \det \left({\mathbf C''}\right) = \det \left({ \left({\mathbf C'}\right)^\intercal }\right)$

From Transpose of Upper Triangular Matrix is Lower Triangular, it follows that $\left({\mathbf A'}\right)^\intercal$ is a lower triangular matrix.

Then Product of Triangular Matrices shows that $\left({\mathbf B^\intercal}\right)' \left({\mathbf A'}\right)^\intercal$ is a lower triangular matrix whose diagonal elements are the products of the diagonal elements of $\left({\mathbf B^\intercal}\right)'$ and $\left({\mathbf A'}\right)^\intercal$.

From Determinant of Triangular Matrix, we have that $\det \left({\left({\mathbf A'}\right)^\intercal}\right)$, $\det \left({\left({\mathbf B^\intercal}\right)' }\right)$, and $\det \left({\left({\mathbf B^\intercal}\right)' \left({\mathbf A'}\right)^\intercal }\right)$ are equal to the product of their diagonal elements.

Combining these results shows that:

$\det \left({\left({\mathbf B^\intercal}\right)' \left({\mathbf A'}\right)^\intercal}\right) = \det \left({\left({\mathbf B^\intercal}\right)'}\right) \det \left({\left({\mathbf A'}\right)^\intercal }\right)$
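The triangular-matrix facts invoked above can be checked on a small example. The following sketch (with illustrative matrices; `det` is a Leibniz-formula helper, not one of the cited results) multiplies two lower triangular integer matrices and confirms the shape of the product, its diagonal, and the determinant formula.

```python
from itertools import permutations
from math import prod

def det(M):
    # Leibniz formula determinant for a small square matrix.
    n = len(M)
    def sign(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

L1 = [[2, 0, 0], [1, 3, 0], [4, 1, 5]]   # lower triangular
L2 = [[1, 0, 0], [2, 4, 0], [0, 3, 2]]   # lower triangular
P = matmul(L1, L2)

# Product of lower triangular matrices is lower triangular ...
assert all(P[i][j] == 0 for i in range(3) for j in range(3) if j > i)
# ... with diagonal entries the products of the factors' diagonal entries.
assert all(P[i][i] == L1[i][i] * L2[i][i] for i in range(3))
# Determinant of a triangular matrix is the product of its diagonal.
assert det(P) == prod(P[i][i] for i in range(3))
```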

Then:

$$\begin{aligned}
\det \left({\mathbf C}\right) &= \alpha \det \left({\mathbf C'}\right) \\
&= \alpha \det \left({\left({\mathbf C'}\right)^\intercal}\right) && \text{Determinant of Transpose} \\
&= \alpha \beta \det \left({\mathbf C''}\right) \\
&= \alpha \beta \det \left({\left({\mathbf B^\intercal}\right)' \left({\mathbf A'}\right)^\intercal}\right) \\
&= \alpha \beta \det \left({\left({\mathbf B^\intercal}\right)'}\right) \det \left({\left({\mathbf A'}\right)^\intercal}\right) \\
&= \alpha \det \left({\left({\mathbf A'}\right)^\intercal}\right) \beta \det \left({\left({\mathbf B^\intercal}\right)'}\right) && \text{Commutativity of Ring Product in } R \\
&= \alpha \det \left({\mathbf A'}\right) \det \left({\mathbf B^\intercal}\right) && \text{Determinant of Transpose} \\
&= \det \left({\mathbf A}\right) \det \left({\mathbf B}\right) && \text{Determinant of Transpose}
\end{aligned}$$

$\blacksquare$

## Proof 2

Consider two cases:

$(1): \quad \mathbf A$ is not invertible.
$(2): \quad \mathbf A$ is invertible.

### Proof of case $1$

Assume $\mathbf A$ is not invertible.

Then:

$\det \left({\mathbf A}\right) = 0$

Also, if $\mathbf A$ is not invertible then neither is $\mathbf A \mathbf B$. Indeed, suppose $\mathbf A \mathbf B$ had an inverse $\mathbf C$. Then $\mathbf A \mathbf B \mathbf C = \mathbf I$, so that $\mathbf B \mathbf C$ is a right inverse of $\mathbf A$. By Left or Right Inverse of Matrix is Inverse, $\mathbf B \mathbf C$ would then be the inverse of $\mathbf A$, contradicting the assumption that $\mathbf A$ is not invertible.

It follows that:

$\det \left({\mathbf A \mathbf B}\right) = 0$

Thus:

$0 = 0 \cdot \det\left({\mathbf B}\right)$
$\det \left({\mathbf A\mathbf B}\right) = \det \left({\mathbf A}\right) \cdot \det\left({\mathbf B}\right)$

$\Box$
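A quick illustration of case $1$ (with hypothetical matrices): when $\mathbf A$ is not invertible, both sides of the identity vanish.

```python
from itertools import permutations
from math import prod

def det(M):
    # Leibniz formula determinant for a small square matrix.
    n = len(M)
    def sign(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[1, 2], [2, 4]]   # rows proportional, so A is not invertible
B = [[3, 1], [1, 2]]

assert det(A) == 0
assert det(matmul(A, B)) == 0 == det(A) * det(B)
```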

### Proof of case $2$

Assume $\mathbf A$ is invertible.

Then $\mathbf A$ can be expressed as a product of elementary matrices:

$\mathbf A = \mathbf E_k \mathbf E_{k - 1} \cdots \mathbf E_1$

So:

$\det \left({\mathbf A\mathbf B}\right) = \det \left({\mathbf E_{k}\mathbf E_{k-1} \cdots \mathbf E_{1} \mathbf B}\right)$

It remains to be shown that for any square matrix $\mathbf D$ of order $n$:

$\det \left({\mathbf E \mathbf D}\right) = \det \left({\mathbf E}\right) \cdot \det\left({\mathbf D}\right)$

Let $e_i$ be the elementary row operation with $e_i \left({\mathbf I}\right) = \mathbf E_i$ for all $i \in \left\{ {1, 2, \ldots, k}\right\}$. Then using Elementary Row Operations by Matrix Multiplication and Effect of Sequence of Elementary Row Operations on Determinant yields:

$\det \left({\mathbf E \mathbf D}\right) = \det \left({\mathbf E_k \mathbf E_{k - 1} \cdots \mathbf E_1 \mathbf D}\right) = \det \left({e_k e_{k - 1} \cdots e_1 \left({\mathbf D}\right)}\right) = \alpha \det \left({\mathbf D}\right)$
$\det \left({\mathbf E}\right) = \det \left({\mathbf E_k \mathbf E_{k - 1} \cdots \mathbf E_1 \mathbf I}\right) = \det \left({e_k e_{k - 1} \cdots e_1 \left({\mathbf I}\right)}\right) = \alpha \det \left({\mathbf I}\right)$
$\det \left({\mathbf E}\right) = \alpha$

And so $\det \left({\mathbf E \mathbf D}\right) = \det \left({\mathbf E}\right) \cdot \det\left({\mathbf D}\right)$
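The key step $\det \left({\mathbf E \mathbf D}\right) = \det \left({\mathbf E}\right) \cdot \det \left({\mathbf D}\right)$ can be illustrated for each of the three types of elementary matrix; the matrices below are illustrative, and `det` is a Leibniz-formula helper.

```python
from itertools import permutations
from math import prod

def det(M):
    # Leibniz formula determinant for a small square matrix.
    n = len(M)
    def sign(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

E_swap = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]    # swap rows 1 and 2: det = -1
E_scale = [[1, 0, 0], [0, 3, 0], [0, 0, 1]]   # multiply row 2 by 3: det = 3
E_add = [[1, 0, 0], [0, 1, 0], [2, 0, 1]]     # add 2 * row 1 to row 3: det = 1
D = [[1, 2, 0], [3, 1, 1], [0, 2, 2]]

for E in (E_swap, E_scale, E_add):
    assert det(matmul(E, D)) == det(E) * det(D)
```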

$\Box$

Therefore:

$\det \left({\mathbf {AB}}\right) = \det \left({\mathbf A}\right) \det \left({\mathbf B}\right)$

as required.

$\blacksquare$

## Proof 3

The Cauchy-Binet Formula gives:

$\displaystyle \det \left({\mathbf A \mathbf B}\right) = \sum_{1 \mathop \le j_1 \mathop < j_2 \mathop < \cdots \mathop < j_m \le n} \det \left({\mathbf A_{j_1 j_2 \ldots j_m}}\right) \det \left({\mathbf B_{j_1 j_2 \ldots j_m}}\right)$

where:

$\mathbf A$ is an $m \times n$ matrix
$\mathbf B$ is an $n \times m$ matrix.
For $1 \le j_1, j_2, \ldots, j_m \le n$:
$\mathbf A_{j_1 j_2 \ldots j_m}$ denotes the $m \times m$ matrix consisting of columns $j_1, j_2, \ldots, j_m$ of $\mathbf A$.
$\mathbf B_{j_1 j_2 \ldots j_m}$ denotes the $m \times m$ matrix consisting of rows $j_1, j_2, \ldots, j_m$ of $\mathbf B$.
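The Cauchy-Binet Formula itself can be checked numerically in the rectangular case $m < n$; the sketch below uses illustrative $2 \times 3$ and $3 \times 2$ integer matrices and a Leibniz-formula determinant helper.

```python
from itertools import combinations, permutations
from math import prod

def det(M):
    # Leibniz formula determinant for a small square matrix.
    n = len(M)
    def sign(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def matmul(A, B):
    # Product of an m x n matrix with an n x m matrix.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def cauchy_binet(A, B):
    # Sum of det(columns J of A) * det(rows J of B) over increasing tuples J.
    m, n = len(A), len(A[0])
    total = 0
    for J in combinations(range(n), m):
        A_J = [[A[i][j] for j in J] for i in range(m)]
        B_J = [[B[j][k] for k in range(m)] for j in J]
        total += det(A_J) * det(B_J)
    return total

A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 0], [0, 1], [1, 1]]

assert cauchy_binet(A, B) == det(matmul(A, B))
```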

When $m = n$, the only tuple $j_1, j_2, \ldots, j_m$ that fulfils $1 \le j_1 < j_2 < \cdots < j_m \le n$ is $\left({1, 2, \ldots, n}\right)$, so the sum reduces to the single term $\det \left({\mathbf A}\right) \det \left({\mathbf B}\right)$.

Hence the result.

$\blacksquare$

## Proof 4

Recall that $\det$ can be interpreted as an alternating multilinear map with respect to the columns. This property suffices to prove the theorem, as follows. Let $A, B$ be two $n \times n$ matrices with coefficients in a commutative field $\mathbb K$, such as $\mathbb R$ or $\mathbb C$.

Let us denote the vectors of the canonical basis of $\mathbb K^n$ by $\boldsymbol{e}_1,\ldots,\boldsymbol{e}_n$ (where $\boldsymbol{e}_i$ is the column with $1$ in the $i$th row and zeros elsewhere).
Now we can write the matrix $B$ as a column block matrix: $$B = \begin{pmatrix}\sum_{s_1=1}^n B_{s_1,1}\boldsymbol{e}_{s_1}&\cdots& \sum_{s_n=1}^n B_{s_n,n}\boldsymbol{e}_{s_n}\end{pmatrix}$$ We can then rewrite the product $AB$ as a column block matrix: $$AB = \begin{pmatrix}\sum_{s_1=1}^n B_{s_1,1}A\boldsymbol{e}_{s_1}&\cdots& \sum_{s_n=1}^n B_{s_n,n}A\boldsymbol{e}_{s_n}\end{pmatrix}$$ Using linearity with respect to each column, we get $$\det(AB) = \sum_{1\leqslant s_1,\ldots,s_n\leqslant n} \left(\prod_{i=1}^n B_{s_i,i}\right)\det\begin{pmatrix}A\boldsymbol{e}_{s_1}&\cdots& A\boldsymbol{e}_{s_n}\end{pmatrix}$$ Now notice that $\det\begin{pmatrix}A\boldsymbol{e}_{s_1}&\cdots& A\boldsymbol{e}_{s_n}\end{pmatrix}$ vanishes whenever two of the indices coincide: since $\det$ is an alternating map, if $s_k = s_\ell$ for some $k \neq \ell$ then two columns are equal and $\det\begin{pmatrix}A\boldsymbol{e}_{s_1}&\cdots& A\boldsymbol{e}_{s_n}\end{pmatrix} = 0$. Therefore the only nonzero summands are those for which $s_1,\ldots, s_n$ are all distinct. In other words, the "selector" $s$ represents a permutation of the numbers $1,\ldots, n$.

As a result, the determinant of the product can now be expressed as a sum of precisely $n!$ terms indexed by permutations: $$\det(AB) = \sum_{\sigma \in S_n} \left(\prod_{i=1}^n B_{\sigma(i),i}\right)\det\begin{pmatrix}A\boldsymbol{e}_{\sigma(1)}&\cdots& A\boldsymbol{e}_{\sigma(n)}\end{pmatrix}$$ where $S_n$ denotes the set of permutations of the numbers $1,\ldots, n$. The determinant on the right hand side is the determinant of $A$ with permuted columns. Whenever we transpose two columns, the determinant changes by a factor $-1$. Indeed, apply some transposition $\tau_{ij}$ to a column block matrix $\begin{pmatrix}C_1&\cdots&C_n\end{pmatrix}$ and consider the matrix whose $i$th and $j$th columns are both equal to $C_i+C_j$; by multilinearity and the alternating property: $$\begin{split} 0 = \det\begin{pmatrix}C_1\cdots C_i+C_j\cdots C_j+C_i\cdots C_n\end{pmatrix} &= \det\begin{pmatrix}C_1\cdots C_i\cdots C_j\cdots C_n\end{pmatrix} + \det\begin{pmatrix}C_1\cdots C_i\cdots C_i\cdots C_n\end{pmatrix}\\ &+ \det\begin{pmatrix}C_1\cdots C_j\cdots C_j\cdots C_n\end{pmatrix} + \det\begin{pmatrix}C_1\cdots C_j\cdots C_i\cdots C_n\end{pmatrix}\\ &= \det\begin{pmatrix}C_1\cdots C_i\cdots C_j\cdots C_n\end{pmatrix} + \det\begin{pmatrix}C_1\cdots C_j\cdots C_i\cdots C_n\end{pmatrix} \end{split}$$ Hence transposing two columns reverses the sign of the determinant: $\det\begin{pmatrix}C_1\cdots C_n\end{pmatrix} = -\det\begin{pmatrix}C_{\tau_{ij}(1)}\cdots C_{\tau_{ij}(n)}\end{pmatrix}$.
Since every permutation $\sigma\in S_n$ can be written as a product of transpositions, i.e. $\sigma= \tau_m\cdots\tau_1$ for some transpositions $\tau_1,\ldots,\tau_m$ it follows that $\det(C_1\cdots C_n) = -\det(C_{\tau_1(1)}\cdots C_{\tau_1(n)}) = \det(C_{\tau_2\tau_1(1)}\cdots C_{\tau_2\tau_1(n)}) = \cdots = (-1)^m\det(C_{\sigma(1)}\cdots C_{\sigma(n)})$.
The number $(-1)^m$ is the signature of the permutation $\sigma$ (see the article about the signature of permutations) and is denoted by $\mathrm{sgn}(\sigma)$; it is well defined even though the decomposition into transpositions is not unique.

It remains to apply several transpositions of columns to $A$ to get, for any permutation $\sigma$, the equality: $$\det \begin{pmatrix}A\boldsymbol{e}_{\sigma(1)}&\cdots& A\boldsymbol{e}_{\sigma(n)}\end{pmatrix} = \mathrm{sgn}(\sigma)\det\begin{pmatrix}A\boldsymbol{e}_{1}&\cdots& A\boldsymbol{e}_{n}\end{pmatrix} = \mathrm{sgn}(\sigma)\det(A)$$ Since $\det(A)$ is a constant, we can pull this factor out of the sum and write $$\det(AB) = \det(A)\sum_{\sigma\in S_n} \mathrm{sgn}(\sigma)\prod_{i=1}^n B_{\sigma(i),i}$$ But the sum is exactly the Leibniz formula for $\det(B)$, and so $$\det(AB) = \det(A)\det(B)$$ as desired. $\square$
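To tie the last step to a computation: the sum over $S_n$ above is precisely the Leibniz formula for $\det(B)$, which can be checked directly. The matrices below are illustrative; note the column-indexed product $\prod_i B_{\sigma(i),i}$ gives the same value as the row-indexed one.

```python
from itertools import permutations
from math import prod

def sgn(p):
    # Signature of a permutation via its inversion count.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(M):
    # Row-indexed Leibniz formula.
    n = len(M)
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[2, 0, 1], [1, 1, 0], [0, 3, 1]]
B = [[1, 1, 0], [2, 0, 1], [1, 1, 1]]
n = 3

# Column-indexed Leibniz sum, exactly as it appears in the final identity.
leibniz_B = sum(sgn(p) * prod(B[p[i]][i] for i in range(n)) for p in permutations(range(n)))

assert leibniz_B == det(B)
assert det(matmul(A, B)) == det(A) * leibniz_B
```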