Talk:Determinant of Matrix Product


I do not think that the elementary row operations converting a square matrix to a triangular one need introduce a minus sign in front of the determinant. In other words, multiplying one row by a constant and subtracting it from another row does not change the value of the determinant, and the conversion to a triangular matrix can be carried out using only this type of operation. The second stated proof is therefore robustly correct. A proof of this fact may be found in elementary linear algebra texts.
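For instance, in the $2 \times 2$ case, subtracting $\lambda$ times the first row from the second leaves the determinant unchanged:

$\det \begin{pmatrix} a & b \\ c - \lambda a & d - \lambda b \end{pmatrix} = a \left({d - \lambda b}\right) - b \left({c - \lambda a}\right) = a d - b c = \det \begin{pmatrix} a & b \\ c & d \end{pmatrix}$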

You are correct. The method used in Square Matrix is Row Equivalent to Triangular Matrix can be adapted to avoid the type 3 elementary operations. I will hopefully get to this later today. --Lord_Farin 11:51, 3 November 2011 (CDT)
Thx guys. I will amend the proof as appropriate. --prime mover 15:17, 3 November 2011 (CDT)
The proof of mentioned statement has been adapted. --Lord_Farin 17:33, 3 November 2011 (CDT)

missing step

From Proof 1, it is clear that:

$\det \left({\mathbf T_A \mathbf T_B}\right) = \det \left({\mathbf A}\right) \det \left({\mathbf B}\right)$


But I think we still have to prove that:

$\det \left({\mathbf T_A \mathbf T_B}\right) = \det \left({\mathbf A \mathbf B}\right)$

Could anybody clarify? Abaca

We have $\det \left({\mathbf T_A}\right) = \det \left({\mathbf A}\right)$ and $\det \left({\mathbf T_B}\right) = \det \left({\mathbf B}\right)$ from a bit further up.
Then we have $\det \left({\mathbf T_A \mathbf T_B}\right) = \det \left({\mathbf T_A}\right)\det \left({\mathbf T_B}\right)$.
Unless I'm also missing something obvious. BTW, please sign your talk page posts by pressing the signature icon (it's the one with scribble in it) from the row of icons above the edit pane. Thx. --prime mover 19:08, 14 March 2012 (EDT)
The point is that it is not immediately clear from the current phrasing that $\mathbf T_A \mathbf T_B$ is row equivalent to $\mathbf{AB}$. Probably that's immediate from the definition, but it deserves a link at the very least. --Lord_Farin 05:59, 15 March 2012 (EDT)
Exactly, that is my doubt. On the other hand, is the triangular matrix provided by Square Matrix is Row Equivalent to Triangular Matrix unique? If it is, we may prove that ${\mathbf T_A \mathbf T_B}$ is the triangular matrix equivalent to $ {\mathbf A \mathbf B}$. Thanks (now with the signature) --Abaca 16:28, 15 March 2012 (EDT)
Yes, you're right, I understand now. That does indeed seem to be a missing step. Anyone able to plug it? --prime mover 16:37, 15 March 2012 (EDT)
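To spell out the gap (a sketch, with $\mathbf E$ and $\mathbf F$ denoting the products of elementary matrices implicit in Square Matrix is Row Equivalent to Triangular Matrix): if $\mathbf T_A = \mathbf E \mathbf A$ and $\mathbf T_B = \mathbf F \mathbf B$, then:

$\mathbf T_A \mathbf T_B = \mathbf E \mathbf A \mathbf F \mathbf B$

and since $\mathbf F$ sits between $\mathbf A$ and $\mathbf B$, this does not directly exhibit $\mathbf T_A \mathbf T_B$ as the result of row operations applied to $\mathbf A \mathbf B$.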


Proof#1 Corrected

Hello, I don't know LaTeX, but I have the following argument that completes this theorem (via Proof 1):

Here is the solution to the rest of Proof 1:

Instead of looking at triangularized matrices as the result of row operations, look at diagonalized matrices as the result of further row operations (the Gauss-Jordan scheme). Once A and B have been diagonalized, denote them Ad and Bd. Now we claim the following: if we perform the same sequence of row operations used on A (to make it diagonal) on AB (denote the result AB(dA)), and then perform the row operations used to make B diagonal on the TRANSPOSE of AB(dA), then we will have diagonalized AB, and its determinant (the product of its diagonal elements) equals the product of the determinant of A (the product of the diagonal elements of Ad) and the determinant of B (the product of the diagonal elements of Bd). To demonstrate this, note the following:


$\mathbf{A B} = \begin{pmatrix} a_{11} b_{11} + a_{12} b_{21} + a_{13} b_{31} + \cdots & a_{11} b_{12} + a_{12} b_{22} + a_{13} b_{32} + \cdots & \cdots \\ a_{21} b_{11} + a_{22} b_{21} + a_{23} b_{31} + \cdots & a_{21} b_{12} + a_{22} b_{22} + a_{23} b_{32} + \cdots & \cdots \\ a_{31} b_{11} + a_{32} b_{21} + a_{33} b_{31} + \cdots & a_{31} b_{12} + a_{32} b_{22} + a_{33} b_{32} + \cdots & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}$

After performing the row operations of A on AB, we get:

$\mathbf{A B}_{(dA)} = \mathbf A_d \mathbf B = \begin{pmatrix} a_{11}' b_{11} & a_{11}' b_{12} & a_{11}' b_{13} & \cdots \\ a_{22}' b_{21} & a_{22}' b_{22} & a_{22}' b_{23} & \cdots \\ a_{33}' b_{31} & a_{33}' b_{32} & a_{33}' b_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$
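This holds because performing on $\mathbf{AB}$ the row operations that diagonalize $\mathbf A$ amounts to left multiplication by the same product of elementary matrices, so $\mathbf{A B}_{(dA)} = \mathbf A_d \mathbf B$; entrywise, since $\mathbf A_d$ is diagonal:

$\left({\mathbf A_d \mathbf B}\right)_{i j} = \sum_k \left({\mathbf A_d}\right)_{i k} b_{k j} = a_{ii}' b_{i j}$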

Now consider the transpose of AB(dA) (i.e. AB(dA)t):


$\mathbf{A B}_{(dA)}^t = \begin{pmatrix} a_{11}' b_{11} & a_{22}' b_{21} & a_{33}' b_{31} & \cdots \\ a_{11}' b_{12} & a_{22}' b_{22} & a_{33}' b_{32} & \cdots \\ a_{11}' b_{13} & a_{22}' b_{23} & a_{33}' b_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$

This is the same as:

$\mathbf{A B}_{(dA)}^t = \mathbf B^t \cdot \operatorname{diag} \left({a_{11}', a_{22}', a_{33}', \ldots}\right)$
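Indeed, since a diagonal matrix is symmetric:

$\mathbf{A B}_{(dA)}^t = \left({\mathbf A_d \mathbf B}\right)^t = \mathbf B^t \mathbf A_d^t = \mathbf B^t \mathbf A_d$

and the $(i, j)$ entry of $\mathbf B^t \mathbf A_d$ is $b_{j i} a_{jj}'$, matching the matrix displayed above.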

Now if we perform the row operations used to diagonalize B on AB(dA)t we get a diagonal matrix of the following shape:


$\begin{pmatrix} a_{11}' b_{11}' & 0 & 0 & \cdots \\ 0 & a_{22}' b_{22}' & 0 & \cdots \\ 0 & 0 & a_{33}' b_{33}' & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$

and we're done. A couple of things to note:

1) Transposing a matrix does not change its determinant.

2) Adding a multiple of one row (or column) of a matrix to another does not change its determinant.

3) We assume no pivoting (row interchange) is needed, which is reasonable because pivoting can be avoided by adding a lower row with a non-zero entry to the row containing the zero pivot, which again does not change the determinant.
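A remark on note 2): it can be justified without invoking the theorem being proved, using only that the determinant is multilinear and alternating in the rows. Writing $r_1, r_2, \ldots, r_n$ for the rows:

$\det \left({\ldots, r_i + \lambda r_j, \ldots, r_j, \ldots}\right) = \det \left({\ldots, r_i, \ldots, r_j, \ldots}\right) + \lambda \det \left({\ldots, r_j, \ldots, r_j, \ldots}\right) = \det \left({\ldots, r_i, \ldots, r_j, \ldots}\right)$

because the middle determinant has a repeated row and hence vanishes.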

PS: sorry for the messy format. Didn't know how to fix it.

Source: myself

error in Proof 2, case 2

Let $F$ be a field and $n$ a natural number. In general it is not true that every matrix $A \in GL(n,F)$ is a product of elementary matrices, since elementary matrices (in the sense of transvections) have determinant $1$. In fact, $SL(n,F) = E(n,F)$, and furthermore any element of $GL(n,F)$ can be written as the product of a number of elementary matrices and one diagonal matrix (every field is a "GE-ring", cf. Silvester, "Introduction to Algebraic K-Theory", p. 114).
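To make the obstruction concrete (a $2 \times 2$ sketch): every transvection, e.g.

$\begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix}$

has determinant $1$, so any product of transvections has determinant $1$ as well; hence for $c \ne 1$ the matrix $\operatorname{diag} \left({c, 1}\right)$ lies in $GL(2,F)$ but is not a product of such elementary matrices.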

Are you able to fix the proof?
By the way: please sign your posts. Cheers. --prime mover (talk) 07:58, 21 February 2017 (EST)
I just saw that elementary matrices are defined a bit differently here from what I thought. The proof of case 2 should be correct. But what about the proof of case 1? It uses Left or Right Inverse of Matrix is Inverse, which in turn uses the multiplicativity of the determinant. Isn't that a circular argument? --Ralle (talk) 15:43, 22 February 2017 (EST)