Most introductory linear algebra texts define the inverse of a square matrix $A$ as follows:
The inverse of $A$, if it exists, is a matrix $B$ such that $AB = BA = I$.
That definition, in my opinion, is problematic. A few books (fewer than 20% in my sample) give a different definition:
The inverse of $A$, if it exists, is a matrix $B$ such that $AB = I$. They then go on to prove that $BA = I$.
Do you know of a proof other than one that goes through determinants or through row reduction (rref)?
Is there a general setting in algebra in which $ab = e$ implies $ba = e$, where $e$ is the identity?
Answer
Multiply both sides of $AB - I = 0$ on the left by $B$; since $B(AB - I) = (BA - I)B$, this gives
$$(BA - I)B = 0. \tag{1}$$
Let $\{e_j\}$ be the standard basis for $\mathbb{R}^n$. Note that the vectors $\{Be_j\}$ are linearly independent: suppose that
$$\sum_{j=1}^{n} a_j \, Be_j = 0; \tag{2}$$
then multiplying (2) on the left by $A$, and using $AB = I$, gives
$$\sum_{j=1}^{n} a_j \, e_j = 0,$$
which implies that every $a_j = 0$ since $\{e_j\}$ is a basis. Thus $\{Be_j\}$ is a set of $n$ linearly independent vectors in $\mathbb{R}^n$, and is therefore also a basis; this is the step that uses finite dimensionality.
Multiplying (1) on the right by $e_j$ yields
$$(BA - I)Be_j = 0$$
for each basis vector $Be_j$. Thus $BA - I$ sends every vector of the basis $\{Be_j\}$ to $0$, so $BA - I = 0$; that is, $BA = I$.
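As a quick numerical sanity check (a sketch, not a substitute for the proof), one can impose only the one-sided condition $AB = I$ by solving for $B$ column by column, and then observe that $BA = I$ holds as well. The random matrix `A` and the use of NumPy here are illustrative assumptions, not part of the original argument:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))  # a random n x n matrix; almost surely invertible

# Impose only the one-sided condition A @ B = I by solving it for B
# (each column of B solves A x = e_j).
B = np.linalg.solve(A, np.eye(n))

print(np.allclose(A @ B, np.eye(n)))  # True: the condition we imposed
print(np.allclose(B @ A, np.eye(n)))  # True: the other side holds automatically
```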
Failure in Infinite Dimensions
Let $A$ and $B$ be operators on infinite sequences: $B$ shifts a sequence one place to the right, filling in the first element with $0$, and $A$ shifts it one place to the left, dropping the first element.
Then $AB = I$, but $BA$ sets the first element to $0$, so $BA \neq I$.
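To see the counterexample concretely, here is a minimal sketch in Python, where a sequence is modeled as a function from index to value so that each operator acts on all infinitely many terms at once (the names `A`, `B`, and `seq` are illustrative, not from the original):

```python
def B(seq):
    """Right shift: prepend 0, pushing every element one place to the right."""
    return lambda n: 0 if n == 0 else seq(n - 1)

def A(seq):
    """Left shift: drop the first element."""
    return lambda n: seq(n + 1)

seq = lambda n: n + 1  # the sequence 1, 2, 3, ...

AB = A(B(seq))  # (A o B) applied to seq
BA = B(A(seq))  # (B o A) applied to seq

print([AB(n) for n in range(5)])  # [1, 2, 3, 4, 5] -- AB acts as the identity
print([BA(n) for n in range(5)])  # [0, 2, 3, 4, 5] -- BA zeroes the first element
```

In particular, $BA$ agrees with the identity everywhere except at index $0$, exactly as described above.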
Arguments that assume $A^{-1}$ or $B^{-1}$ exists, and that make no reference to the finite dimensionality of the vector space, usually fail when tested against this counterexample.