Determinant of Matrix Product

Theorem
Let $\mathbf A = \left[{a}\right]_n$ and $\mathbf B = \left[{b}\right]_n$ be square matrices of order $n$.

Let $\det \left({\mathbf A}\right)$ be the determinant of $\mathbf A$.

Let $\mathbf A \mathbf B$ be the (conventional) matrix product of $\mathbf A$ and $\mathbf B$.

Then:
 * $\det \left({\mathbf A \mathbf B}\right) = \det \left({\mathbf A}\right) \det \left({\mathbf B}\right)$

That is, the determinant of the product is equal to the product of the determinants.

Proof 1
Let $\mathbf C = \left[{c}\right]_n = \mathbf A \mathbf B$.

Thus:
 * $\displaystyle \forall i, j \in \left[{1 .. n}\right]: c_{i j} = \sum_{l=1}^n a_{i l} b_{l j}$
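As a small numerical illustration of this formula (a NumPy sketch, not part of the article), we can compute $\mathbf C$ entry by entry from the summation and compare against the built-in matrix product:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
n = A.shape[0]

# c_ij = sum over l of a_il * b_lj
C = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        C[i, j] = sum(A[i, l] * B[l, j] for l in range(n))

assert np.allclose(C, A @ B)
```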

Expanding $\det \left({\mathbf A \mathbf B}\right)$ directly from this formula gets messy very quickly, so we try another approach.

From Square Matrix Row Equivalent to Triangular Matrix, a square matrix can be transformed into a triangular matrix using only elementary row operations of the form "add a multiple of one row to another", which, from Effect of Elementary Row Operations on Determinant, leave its determinant unchanged.
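As a numerical sketch of this step (using NumPy, which the article itself does not assume), the following reduces a matrix to upper triangular form using only row-addition operations and checks that the determinant is unchanged:

```python
import numpy as np

def to_upper_triangular(M):
    """Reduce M to upper triangular form using only the operation
    'add a multiple of one row to another' (assumes nonzero pivots)."""
    T = M.astype(float).copy()
    n = T.shape[0]
    for j in range(n):
        for i in range(j + 1, n):
            T[i] -= (T[i, j] / T[j, j]) * T[j]
    return T

A = np.array([[2., 1.], [4., 3.]])
T_A = to_upper_triangular(A)

assert np.allclose(np.tril(T_A, -1), 0)                  # triangular
assert np.isclose(np.linalg.det(A), np.linalg.det(T_A))  # det preserved
```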

So suppose:
 * $\mathbf A$ is row equivalent to the upper triangular matrix $\mathbf T_A$
 * $\mathbf B$ is row equivalent to the upper triangular matrix $\mathbf T_B$

such that:
 * $\det \left({\mathbf A}\right) = \det \left({\mathbf T_A}\right)$
 * $\det \left({\mathbf B}\right) = \det \left({\mathbf T_B}\right)$

From Determinant of a Triangular Matrix, $\det \left({\mathbf T_A}\right)$ and $\det \left({\mathbf T_B}\right)$ are each equal to the product of the diagonal elements of the corresponding matrix.

From Product of Triangular Matrices, it also follows that $\mathbf T_A \mathbf T_B$ is an upper triangular matrix whose diagonal elements are the products of the diagonal elements of $\mathbf T_A$ and $\mathbf T_B$.
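These two facts can be checked numerically (a NumPy sketch, not part of the article): the product of two upper triangular matrices is upper triangular, and its diagonal is the elementwise product of the two diagonals, so its determinant is the product of the two determinants.

```python
import numpy as np

T_A = np.array([[2., 1., 0.], [0., 3., 4.], [0., 0., 5.]])
T_B = np.array([[1., 2., 3.], [0., 4., 5.], [0., 0., 6.]])
P = T_A @ T_B

# product is upper triangular
assert np.allclose(np.tril(P, -1), 0)
# diagonal of the product is the elementwise product of the diagonals
assert np.allclose(np.diag(P), np.diag(T_A) * np.diag(T_B))
# hence det(T_A T_B) = det(T_A) * det(T_B)
assert np.isclose(np.linalg.det(P), np.linalg.det(T_A) * np.linalg.det(T_B))
```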

Thus it follows that:
 * $\det \left({\mathbf T_A \mathbf T_B}\right) = \det \left({\mathbf T_A}\right) \det \left({\mathbf T_B}\right)$

and hence:
 * $\det \left({\mathbf A \mathbf B}\right) = \det \left({\mathbf A}\right) \det \left({\mathbf B}\right)$

Proof 2
Consider two cases:
 * $(1): \quad \mathbf A$ is not invertible.
 * $(2): \quad \mathbf A$ is invertible.

Proof of case $(1)$:

Assume $\mathbf A$ is not invertible.

Then $\det \left({\mathbf A}\right) = 0$.

Also, if $\mathbf A$ is not invertible then neither is $\mathbf A \mathbf B$, and so:
 * $\det \left({\mathbf A \mathbf B}\right) = 0$

Thus:
 * $\det \left({\mathbf A \mathbf B}\right) = 0 = 0 \cdot \det \left({\mathbf B}\right) = \det \left({\mathbf A}\right) \cdot \det \left({\mathbf B}\right)$
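Case $(1)$ can be illustrated numerically (a NumPy sketch, not part of the article): take $\mathbf A$ singular, with its second row twice its first, so that $\det \left({\mathbf A}\right) = 0$.

```python
import numpy as np

A = np.array([[1., 2.], [2., 4.]])   # not invertible: row 2 = 2 * row 1
B = np.array([[3., 1.], [0., 5.]])

assert np.isclose(np.linalg.det(A), 0.0)
assert np.isclose(np.linalg.det(A @ B), 0.0)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```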

Proof of case $(2)$:

Assume $\mathbf A$ is invertible.

Then $\mathbf A$ can be expressed as a product of elementary matrices:
 * $\mathbf A = \mathbf E^{k} \mathbf E^{k-1} \cdots \mathbf E^{1}$

So:
 * $\det \left({\mathbf A\mathbf B}\right) = \det \left({\mathbf E^{k}\mathbf E^{k-1} \cdots \mathbf E^{1} \mathbf B}\right)$

But for any elementary matrix $\mathbf E$ and any square matrix $\mathbf D$ of order $n$:
 * $\det \left({\mathbf E \mathbf D}\right) = \det \left({\mathbf E}\right) \cdot \det \left({\mathbf D}\right)$

by Effect of Elementary Row Operations on Determinant.
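This key fact can be checked numerically for one elementary matrix of each of the three types (a NumPy sketch, not part of the article):

```python
import numpy as np

D = np.array([[1., 2.], [3., 7.]])
I = np.eye(2)

E_add = I.copy(); E_add[1, 0] = 4.0   # add 4 * row 0 to row 1  (det = 1)
E_scale = np.diag([3., 1.])           # multiply row 0 by 3     (det = 3)
E_swap = I[[1, 0]]                    # swap rows 0 and 1       (det = -1)

for E in (E_add, E_scale, E_swap):
    assert np.isclose(np.linalg.det(E @ D),
                      np.linalg.det(E) * np.linalg.det(D))
```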

Therefore, applying this result repeatedly:
 * $\det \left({\mathbf A \mathbf B}\right) = \det \left({\mathbf E^{k}}\right) \det \left({\mathbf E^{k-1}}\right) \cdots \det \left({\mathbf E^{1}}\right) \det \left({\mathbf B}\right)$

Applying it again, this time to the product $\mathbf E^{k} \mathbf E^{k-1} \cdots \mathbf E^{1}$:
 * $\det \left({\mathbf A \mathbf B}\right) = \det \left({\mathbf E^{k} \mathbf E^{k-1} \cdots \mathbf E^{1}}\right) \det \left({\mathbf B}\right)$

That is:
 * $\det \left({\mathbf A \mathbf B}\right) = \det \left({\mathbf A}\right) \cdot \det \left({\mathbf B}\right)$

as required.
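Finally, an end-to-end numerical sanity check of the theorem (a NumPy sketch, not part of either proof), over random matrices of several orders:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    n = int(rng.integers(1, 6))
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    # det(AB) = det(A) det(B), up to floating-point error
    assert np.isclose(np.linalg.det(A @ B),
                      np.linalg.det(A) * np.linalg.det(B))
```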