Expansion Theorem for Determinants

Theorem
Let $$\mathbf{A} = \left[{a}\right]_{n}$$ be a square matrix of order $n$.

Let $$D = \det \left({\mathbf{A}}\right)$$ be the determinant of $$\mathbf{A}$$.

Let $$a_{pq}$$ be an element of $$\mathbf{A}$$.

Let $$A_{pq}$$ be the cofactor of $$a_{pq}$$ in $$D$$.

Then:


 * $$\forall r \in \left[{1 \, . \, . \, n}\right]: D = \sum_{k=1}^n a_{rk} A_{rk}$$;
 * $$\forall r \in \left[{1 \, . \, . \, n}\right]: D = \sum_{k=1}^n a_{kr} A_{kr}$$;

What this theorem states is that the value of a determinant can be found either by:
 * Multiplying all the elements in a row by their cofactors and adding up the products;
 * Multiplying all the elements in a column by their cofactors and adding up the products.

The identity $$D = \sum_{k=1}^n a_{rk} A_{rk}$$ is known as the expansion of $$D$$ in terms of row $$r$$, while $$D = \sum_{k=1}^n a_{kr} A_{kr}$$ is known as the expansion of $$D$$ in terms of column $$r$$.
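The row expansion lends itself directly to a recursive implementation. The following is a minimal sketch (in Python, chosen purely for illustration; indices are 0-based, so the cofactor sign for the element in position $$\left({i, j}\right)$$ is $$\left({-1}\right)^{i+j}$$):

```python
def det(a):
    """Determinant of a square matrix (list of lists) by expansion
    along the first row: D = sum_k a[0][k] * cofactor(0, k)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        # Minor: delete row 0 and column k.
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        # Cofactor sign is (-1)^(0 + k) with 0-based indices.
        total += (-1) ** k * a[0][k] * det(minor)
    return total

A = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 4]]
print(det(A))  # 2*(12 - 2) - 0 + 1*(1 - 3) = 18
```

Expanding along any other row, or along any column, yields the same value, which is the content of the theorem.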

Corollary
Let $$\delta_{rs}$$ be the Kronecker delta.

Then, using the same notation as above:


 * $$\forall r, s \in \left[{1 \, . \, . \, n}\right]: \sum_{k=1}^n a_{rk} A_{sk} = \delta_{rs} D$$;
 * $$\forall r, s \in \left[{1 \, . \, . \, n}\right]: \sum_{k=1}^n a_{kr} A_{ks} = \delta_{rs} D$$;

That is, if you multiply each element of a row or column by the cofactor of the corresponding element of a different row or column, the sum of those products is zero; if the two rows or columns coincide, the sum is $$D$$ itself, by the Expansion Theorem.
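The corollary can be checked numerically in the same spirit. This is a sketch in Python with 0-based indices; the helper `cofactor` is our own illustrative name for $$\left({-1}\right)^{i+j}$$ times the determinant of the minor:

```python
def det(a):
    """Determinant by expansion along the first row (0-based indices)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k]
               * det([row[:k] + row[k + 1:] for row in a[1:]])
               for k in range(n))

def cofactor(a, i, j):
    """Cofactor of the (i, j) element: signed determinant of the minor."""
    minor = [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]
    return (-1) ** (i + j) * det(minor)

A = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 4]]
D = det(A)
for r in range(3):
    for s in range(3):
        # Elements of row r against cofactors of row s:
        # the sum equals D when r == s, and 0 otherwise.
        print(r, s, sum(A[r][k] * cofactor(A, s, k) for k in range(3)))
```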

Proof
Because of Determinant of Transpose, it is necessary to prove only one of these identities.

Let $$D = \begin{vmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{r1} & \cdots & a_{rk} & \cdots & a_{rn} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix}$$.

First, note that from Determinant with Row Multiplied by Constant, we have:

$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{r1} & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = a_{r1} \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$

... and similarly:

$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & a_{r2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = a_{r2} \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$

and so on for the whole of row $$r$$.

By summing all these together, using Determinant as Sum of Determinants, we get:

$$\begin{vmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{r1} & \cdots & a_{rk} & \cdots & a_{rn} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix} = \sum_{k=1}^n \left({a_{rk} \begin{vmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\  0 & \cdots & 1 & \cdots & 0 \\ \vdots & \ddots & \vdots & \ddots & \vdots \\  a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix}}\right)$$

Now, let's take this determinant $$\begin{vmatrix} a_{11} & \cdots & a_{1\left({k-1}\right)} & a_{1k} & a_{1\left({k+1}\right)} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{\left({r-1}\right)1} & \cdots & a_{\left({r-1}\right)\left({k-1}\right)} & a_{\left({r-1}\right)k} & a_{\left({r-1}\right)\left({k+1}\right)} & \cdots & a_{\left({r-1}\right)n} \\ 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ a_{\left({r+1}\right)1} & \cdots & a_{\left({r+1}\right)\left({k-1}\right)} & a_{\left({r+1}\right)k} & a_{\left({r+1}\right)\left({k+1}\right)} & \cdots & a_{\left({r+1}\right)n} \\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{n\left({k-1}\right)} & a_{nk} & a_{n\left({k+1}\right)} & \cdots & a_{nn} \end{vmatrix}$$.

Let's swap rows $$r$$ and $$r-1$$, then (the new) row $$r-1$$ with row $$r-2$$, and so on, until finally row $$r$$ is at the top. Row 1 will then be in row 2, row 2 in row 3, and so on.

It can be seen that this permutes the rows by a cycle of length $$r$$, that is, an $$r$$-cycle.

Call that cycle $$\rho$$. Then from Parity of K-Cycle, $$\sgn \left({\rho}\right) = \left({-1}\right)^{r-1}$$.

So this gives us $$\begin{vmatrix} 0 & \cdots & 1 & \cdots & 0 \\ a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{\left({r-1}\right)1} & \cdots & a_{\left({r-1}\right)k} & \cdots & a_{\left({r-1}\right)n} \\ a_{\left({r+1}\right)1} & \cdots & a_{\left({r+1}\right)k} & \cdots & a_{\left({r+1}\right)n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix} = \left({-1}\right)^{r-1} \begin{vmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{\left({r-1}\right)1} & \cdots & a_{\left({r-1}\right)k} & \cdots & a_{\left({r-1}\right)n} \\ 0 & \cdots & 1 & \cdots & 0 \\ a_{\left({r+1}\right)1} & \cdots & a_{\left({r+1}\right)k} & \cdots & a_{\left({r+1}\right)n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix}$$.
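The sign picked up by this row-cycling step can also be verified numerically. A small sketch (0-based indices, so the document's row $$r$$ is index $$r - 1$$, and moving that row to the top takes $$r - 1$$ adjacent swaps):

```python
def det(a):
    """Determinant by expansion along the first row (0-based indices)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k]
               * det([row[:k] + row[k + 1:] for row in a[1:]])
               for k in range(n))

A = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 4]]
r = 2                                # 0-based index; row 3 of the document
cycled = [A[r]] + A[:r] + A[r + 1:]  # row r to the top, the rest shifted down
# r adjacent swaps were needed, so the determinant gains a factor (-1)^r.
print(det(cycled), (-1) ** r * det(A))  # the two values agree
```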

The same argument can be applied to columns.

Thus $$\begin{vmatrix} 1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ a_{1k} & a_{11} & \cdots & a_{1\left({k-1}\right)} & a_{1\left({k+1}\right)} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{\left({r-1}\right)k} & a_{\left({r-1}\right)1} & \cdots & a_{\left({r-1}\right)\left({k-1}\right)} & a_{\left({r-1}\right)\left({k+1}\right)} & \cdots & a_{\left({r-1}\right)n} \\ a_{\left({r+1}\right)k} & a_{\left({r+1}\right)1} & \cdots & a_{\left({r+1}\right)\left({k-1}\right)} & a_{\left({r+1}\right)\left({k+1}\right)} & \cdots & a_{\left({r+1}\right)n} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{nk} & a_{n1} & \cdots & a_{n\left({k-1}\right)} & a_{n\left({k+1}\right)} & \cdots & a_{nn} \end{vmatrix} = \left({-1}\right)^{k-1}\begin{vmatrix} 0 & \cdots & 1 & \cdots & 0 \\ a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{\left({r-1}\right)1} & \cdots & a_{\left({r-1}\right)k} & \cdots & a_{\left({r-1}\right)n} \\ a_{\left({r+1}\right)1} & \cdots & a_{\left({r+1}\right)k} & \cdots & a_{\left({r+1}\right)n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix}$$.

Thus $$\begin{vmatrix} 1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ a_{1k} & a_{11} & \cdots & a_{1\left({k-1}\right)} & a_{1\left({k+1}\right)} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{\left({r-1}\right)k} & a_{\left({r-1}\right)1} & \cdots & a_{\left({r-1}\right)\left({k-1}\right)} & a_{\left({r-1}\right)\left({k+1}\right)} & \cdots & a_{\left({r-1}\right)n} \\ a_{\left({r+1}\right)k} & a_{\left({r+1}\right)1} & \cdots & a_{\left({r+1}\right)\left({k-1}\right)} & a_{\left({r+1}\right)\left({k+1}\right)} & \cdots & a_{\left({r+1}\right)n} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{nk} & a_{n1} & \cdots & a_{n\left({k-1}\right)} & a_{n\left({k+1}\right)} & \cdots & a_{nn} \end{vmatrix} = \left({-1}\right)^{r+k} \begin{vmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{\left({r-1}\right)1} & \cdots & a_{\left({r-1}\right)k} & \cdots & a_{\left({r-1}\right)n} \\ 0 & \cdots & 1 & \cdots & 0 \\ a_{\left({r+1}\right)1} & \cdots & a_{\left({r+1}\right)k} & \cdots & a_{\left({r+1}\right)n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix}$$.

Now note that $$\left({-1}\right)^{r+k} \begin{vmatrix} a_{11} & \cdots & a_{1\left({k-1}\right)} & a_{1\left({k+1}\right)} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{\left({r-1}\right)1} & \cdots & a_{\left({r-1}\right)\left({k-1}\right)} & a_{\left({r-1}\right)\left({k+1}\right)} & \cdots & a_{\left({r-1}\right)n} \\ a_{\left({r+1}\right)1} & \cdots & a_{\left({r+1}\right)\left({k-1}\right)} & a_{\left({r+1}\right)\left({k+1}\right)} & \cdots & a_{\left({r+1}\right)n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{n\left({k-1}\right)} & a_{n\left({k+1}\right)} & \cdots & a_{nn} \end{vmatrix}$$ is $$A_{rk}$$, the cofactor of $$a_{rk}$$ in $$D$$.

But from Determinant with Unit Element in Otherwise Zero Row, we have:

$$\begin{vmatrix} 1 & 0 & \cdots & 0 \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nn} \end{vmatrix} = \begin{vmatrix} b_{22} & \cdots & b_{2n} \\ \vdots & \ddots & \vdots \\ b_{n2} & \cdots & b_{nn} \end{vmatrix}$$

Putting together all the pieces derived above, the result follows.

Proof of Corollary
Suppose $$r \ne s$$, and let $$D'$$ be the determinant obtained from $$D$$ by replacing row $$s$$ with a copy of row $$r$$.

Let $$a'_{ij}$$ be an element of $$D'$$.

Then:

$$a'_{ij} = \begin{cases} a_{ij} & : i \ne s \\ a_{rj} & : i = s \end{cases}$$

If we denote the cofactor of $$a'_{ij}$$ in $$D'$$ by $$A'_{ij}$$, then, since the cofactor of an element in row $$s$$ does not involve any entries of row $$s$$ itself, we get:

$$\forall k \in \left[{1 \, . \, . \, n}\right]: A'_{sk} = A_{sk}$$

So by the Expansion Theorem (above), we get:

$$D' = \sum_{k=1}^n a'_{sk} A'_{sk} = \sum_{k=1}^n a_{rk} A_{sk}$$

But the $$r$$th and $$s$$th rows are identical.

So $$D' = 0$$ by Square Matrix with Duplicate Rows has Zero Determinant.

Hence $$\sum_{k=1}^n a_{rk} A_{sk} = D' = 0$$ when $$r \ne s$$, while the case $$r = s$$ is the Expansion Theorem itself, and the result follows.

The result for columns follows from Determinant of Transpose.

Comment
This result is significant, because it provides a way of calculating the value of a determinant from the values of determinants of lower order.

This is a far more convenient and programmable process than working directly from the definition: enumerating all $$n!$$ permutations, working out which element of each row and column every permutation selects, multiplying them out and adding the products together.

However, from the derivation of this result, it can be seen that there's another possible technique to explore. We can reduce one row or column of the matrix (by adding and subtracting multiples of other rows or columns, which leaves the determinant unchanged) to one with zeroes everywhere except in one place. Then there is only one cofactor determinant left to calculate.
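A minimal sketch of that technique, using Python's `fractions` module for exact rational arithmetic so the row operations introduce no rounding error: clearing a column by row operations leaves the determinant unchanged (up to a sign for each row swap), after which only one cofactor needs to be expanded.

```python
from fractions import Fraction

def det(a):
    """Determinant via column reduction: clear column 0 (row operations
    leave the determinant unchanged), so only the pivot's cofactor remains."""
    n = len(a)
    if n == 1:
        return a[0][0]
    # Find a row with a nonzero entry in column 0 to serve as pivot.
    p = next((i for i in range(n) if a[i][0] != 0), None)
    if p is None:
        return Fraction(0)          # the whole column is zero
    a, sign = list(a), 1
    if p != 0:
        a[0], a[p] = a[p], a[0]     # a row swap flips the sign
        sign = -1
    pivot = a[0][0]
    # Subtract (a[i][0] / pivot) * (row 0) from each lower row; column 0
    # becomes zero below the pivot, leaving a single cofactor to expand.
    minor = [[a[i][j] - a[i][0] / pivot * a[0][j] for j in range(1, n)]
             for i in range(1, n)]
    return sign * pivot * det(minor)

A = [[Fraction(x) for x in row] for row in
     [[2, 0, 1],
      [1, 3, 2],
      [1, 1, 4]]]
print(det(A))  # 18, matching the full cofactor expansion
```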

This technique is taken to its logical conclusion in the exploitation of the Determinant of a Triangular Matrix.