Sunday, 17 November 2013

linear algebra - Assuming AB=I prove BA=I


Most introductory linear algebra texts define the inverse of a square matrix $A$ as follows:




The inverse of $A$, if it exists, is a matrix $B$ such that $AB = BA = I$.



That definition, in my opinion, is problematic. A few books (in my sample, fewer than 20%) give a different definition:



The inverse of $A$, if it exists, is a matrix $B$ such that $AB = I$. They then go on to prove that $BA = I$.



Do you know of a proof other than one that defines the inverse through determinants or through row reduction (RREF)?



Is there a general setting in algebra in which $ab = e$ implies $ba = e$, where $e$ is the identity?


Answer




Multiply both sides of $AB - I = 0$ on the left by $B$; since $B(AB - I) = BAB - B = (BA - I)B$, this gives
$$(BA - I)B = 0 \tag{1}$$


Let $\{e_j\}$ be the standard basis for $\mathbb{R}^n$. Note that the $\{Be_j\}$ are linearly independent: suppose that
$$\sum_{j=1}^n a_j Be_j = 0 \tag{2}$$

Then, multiplying $(2)$ on the left by $A$ and using $AB = I$ gives
$$\sum_{j=1}^n a_j e_j = 0$$

which implies that every $a_j = 0$, since $\{e_j\}$ is a basis. Thus, $\{Be_j\}$ is a set of $n$ linearly independent vectors in $\mathbb{R}^n$, and hence also a basis for $\mathbb{R}^n$.
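To see this step concretely: the vectors $Be_j$ are exactly the columns of $B$, so their linear independence amounts to $B$ having full rank. A minimal Python/NumPy sketch (my own illustration, not part of the original answer; the example matrices are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))   # a hypothetical example matrix
A = np.linalg.inv(B)              # chosen so that AB = I holds

assert np.allclose(A @ B, np.eye(n))

# The vectors B e_j are exactly the columns of B; they are linearly
# independent precisely when B has full rank n.
print(np.linalg.matrix_rank(B))   # prints 4
```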



Multiplying $(1)$ on the right by $e_j$ yields
$$(BA - I)Be_j = 0$$

for each $j$. So $BA - I$ annihilates every element of the basis $\{Be_j\}$, and hence annihilates all of $\mathbb{R}^n$; that is, $BA - I = 0$. Therefore, $BA = I$.
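As a quick numerical sanity check of the result itself (again a Python/NumPy sketch of my own, assuming the random matrix drawn is invertible, which holds almost surely):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))       # almost surely invertible
B = np.linalg.solve(A, np.eye(n))     # a right inverse: solves AB = I

assert np.allclose(A @ B, np.eye(n))  # B is a right inverse of A ...
assert np.allclose(B @ A, np.eye(n))  # ... and automatically a left inverse
```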




Failure in Infinite Dimensions



Let $A$ and $B$ be operators on infinite sequences: $B$ shifts the sequence right by one place, filling in the first element with $0$, and $A$ shifts the sequence left by one place, dropping the first element.



Then $AB = I$, but $BA$ sets the first element of the sequence to $0$, so $BA \ne I$.
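The two operators are easy to play with on finite prefixes of a sequence. A minimal Python sketch (the names shift_right and shift_left are mine, used purely for illustration):

```python
def shift_right(xs):
    """B: shift right by one, filling the first element with 0."""
    return [0] + xs

def shift_left(xs):
    """A: shift left by one, dropping the first element."""
    return xs[1:]

xs = [1, 2, 3, 4]
print(shift_left(shift_right(xs)))   # AB: [1, 2, 3, 4] -- the identity
print(shift_right(shift_left(xs)))   # BA: [0, 2, 3, 4] -- first element zeroed
```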



Arguments that assume $A^{-1}$ or $B^{-1}$ exists, and that make no reference to the finite-dimensionality of the vector space, usually fall to this counterexample.

