I've been presented with a proof that, given Ax = b, the elementary row operations can be viewed as multiplication by certain special matrices Ei. One can then prove that applying the same sequence of products to the identity yields the inverse. Suppose that A is invertible:
AB = I
En…E2E1AB = En…E2E1I
IB = En…E2E1I        (using En…E2E1A = I)
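As a concrete numeric sketch (my own 2×2 example, not part of the proof above; the matrices E1, E2, E3 are chosen by hand to row-reduce this particular A):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of row lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2.0, 1.0],
     [1.0, 1.0]]

# Elementary matrices encoding a row reduction of A:
E1 = [[1, 0], [-0.5, 1]]   # R2 <- R2 - (1/2)R1
E2 = [[1, -2], [0, 1]]     # R1 <- R1 - 2*R2
E3 = [[0.5, 0], [0, 2]]    # scale R1 by 1/2 and R2 by 2

# The same product applied to the identity (the factor I can be dropped):
B = matmul(E3, matmul(E2, E1))
# matmul(B, A) gives the identity, so B is A's inverse
```

Here B = E3·E2·E1 works out to [[1, -1], [-1, 2]], which is exactly A⁻¹.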
Each Ei is required to be invertible. Why is that needed? My guess is that if one of the Ei were not invertible, then reversing the steps could take us back to a different A, and applying the Ei's again could then produce a different B?
Answer
If we can use elementary matrices to start from A and find B = A⁻¹, we should be able to reverse the process, starting with B and finding A = B⁻¹. The reverse of each step in the process is just applying the inverse elementary matrix. If an elementary matrix were not invertible, we could not reverse that step.
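A small sketch of that reversal (a hypothetical 2×2 example of mine): the inverse of a row-addition matrix simply subtracts what was added, so applying E and then E⁻¹ returns the original matrix exactly.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of row lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2.0, 1.0],
     [1.0, 1.0]]

c = -0.5
E     = [[1, 0], [c, 1]]     # row op: R2 <- R2 + c*R1
E_inv = [[1, 0], [-c, 1]]    # inverse op: R2 <- R2 - c*R1

stepped = matmul(E, A)            # apply the elementary row operation
undone  = matmul(E_inv, stepped)  # reverse it: we are back at A
```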
Another reason that each elementary matrix must be invertible is that the determinant of a noninvertible matrix is zero, while invertible matrices have nonzero determinant. Therefore, if even one of the Ei were not invertible, then det(B) = det(En…E1I) = det(En)…det(E1)det(I) = 0, contradicting the fact that B = A⁻¹ is invertible.
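To see the determinant argument numerically (my own illustration; the matrix D below models the forbidden "multiply a row by 0" operation, which is precisely why such an operation is not elementary):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of row lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2.0, 1.0],
     [1.0, 1.0]]

D = [[1, 0],
     [0, 0]]   # "scale R2 by 0": NOT an elementary matrix, det(D) = 0

# The determinant is multiplicative, so one det-0 factor
# forces det(D*A) = 0, i.e. the product cannot be invertible:
product = matmul(D, A)
```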