Sunday, 17 November 2013

linear algebra - Assuming $AB=I$ prove $BA=I$

Most introductory linear algebra texts define the inverse of a square matrix $A$ as follows:

The inverse of $A$, if it exists, is a matrix $B$ such that $AB=BA=I$.



That definition, in my opinion, is problematic. A few books (in my sample less than 20%) give a different definition:



The inverse of $A$, if it exists, is a matrix $B$ such that $AB=I$. They then go on to prove that $BA=I$.



Do you know of a proof that does not rely on determinants or on row reduction (RREF)?



Is there a general setting in algebra under which $ab=e$ leads to $ba=e$ where $e$ is the identity?


Answer




Multiply both sides of $AB-I=0$ on the left by $B$ to get
$$
(BA-I)B=0\tag{1}
$$
Let $\{e_j\}$ be the standard basis for $\mathbb{R}^n$. Note that $\{Be_j\}$ are linearly independent: suppose that
$$
\sum_{j=1}^n a_jBe_j=0\tag{2}
$$
then, multiplying $(2)$ on the left by $A$ and using $AB=I$ gives
$$
\sum_{j=1}^n a_je_j=0\tag{3}
$$
which implies that $a_j=0$ since $\{e_j\}$ is a basis. Thus, $\{Be_j\}$ is also a basis for $\mathbb{R}^n$.



Multiplying $(1)$ on the right by $e_j$ yields
$$
(BA-I)Be_j=0\tag{4}
$$
for each $j$. Thus $BA-I$ annihilates every element of the basis $\{Be_j\}$, hence every vector in $\mathbb{R}^n$. Therefore, $BA=I$.
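A quick numerical sanity check of this result, sketched with NumPy. The construction of $B$ via `np.linalg.solve` is my choice of a convenient right inverse, not part of the argument:

```python
import numpy as np

# Sketch: for a square matrix A with a right inverse B (AB = I),
# BA = I as well.  A is made diagonally dominant so it is invertible.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = np.linalg.solve(A, np.eye(n))   # B is chosen so that AB = I

print(np.allclose(A @ B, np.eye(n)))  # True: AB = I by construction
print(np.allclose(B @ A, np.eye(n)))  # True: BA = I, as the proof shows
```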




Failure in Infinite Dimensions



Let $A$ and $B$ be operators on the space of infinite sequences: $B$ shifts a sequence right by one position, filling in the first element with $0$, and $A$ shifts a sequence left, dropping the first element.



$AB=I$, but $BA$ sets the first element to $0$.
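The two shift operators can be sketched in a few lines, modelling an infinite sequence by a Python list of its leading terms (the function names `A` and `B` match the operators above):

```python
def A(x):
    """Left shift: drop the first element."""
    return x[1:]

def B(x):
    """Right shift: fill in the first element with 0."""
    return [0] + x

x = [1, 2, 3, 4]
print(A(B(x)))  # [1, 2, 3, 4]  -> AB = I
print(B(A(x)))  # [0, 2, 3, 4]  -> BA zeroes the first element
```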



Arguments that assume $A^{-1}$ or $B^{-1}$ exists, or that make no reference to the finite dimensionality of the vector space, usually fail against this counterexample.

