Thursday, 21 November 2013

linear algebra - Change of basis matrix - part of a proof



I'm trying to understand a proof from Comprehensive Introduction to Linear Algebra (page 244).



I can't figure out what steps were taken to get from eq. 1 to eq. 2; it seems overcomplicated there. Other proofs of this theorem are easy and obvious to me (multiplying a matrix by its inverse produces the identity matrix, etc.), but this particular passage is giving me trouble. I'm not trying to understand why the theorem is true, because I already do; what I want is to understand the step from eq. 1 to eq. 2. Thanks for your help.




Here's what I'm talking about


Answer



The collection of equations



$$a_i = \sum_{j=1}^np_{ij}b_j\qquad i = 1, \dots, n$$



is equivalent to the matrix equation $a = Pb$ where $a^T = (a_1, \dots, a_n)$, $b^T = (b_1, \dots, b_n)$ and $P$ is the $n\times n$ matrix with $(i, j)^{\text{th}}$ element $p_{ij}$.
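For concreteness (this example is not in the original answer, it just spells out the $n = 2$ case), writing out the sums shows how they pack into the matrix equation:

$$\begin{pmatrix}a_1\\a_2\end{pmatrix} = \begin{pmatrix}p_{11} & p_{12}\\ p_{21} & p_{22}\end{pmatrix}\begin{pmatrix}b_1\\b_2\end{pmatrix},\qquad\text{i.e.}\qquad \begin{aligned} a_1 &= p_{11}b_1 + p_{12}b_2,\\ a_2 &= p_{21}b_1 + p_{22}b_2.\end{aligned}$$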



If $P$ is invertible, then we can rewrite this as $b = P^{-1}a$. Denoting the $(i, j)^{\text{th}}$ element of $P^{-1}$ by $p^{-1}_{ij}$ (not great notation in my opinion), the matrix equation $b = P^{-1}a$ is equivalent to the collection of equations




$$b_i = \sum_{j=1}^np^{-1}_{ij}a_j\qquad i = 1, \dots, n.$$
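If it helps to see this numerically, here is a small NumPy sketch (not from the original post; the particular matrix $P$ and vector $b$ are made up for illustration) checking that applying $P^{-1}$ to $a = Pb$ recovers $b$ via exactly these sums:

```python
import numpy as np

# Illustrative invertible 3x3 matrix P and coefficient vector b (made up).
P = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

# a_i = sum_j p_ij * b_j, i.e. a = P b
a = P @ b

# P_inv[i, j] plays the role of p^{-1}_{ij} in the answer.
P_inv = np.linalg.inv(P)

# b_i = sum_j p^{-1}_{ij} * a_j, i.e. b = P^{-1} a
b_recovered = P_inv @ a

print(np.allclose(b_recovered, b))  # True: the original coefficients come back
```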



Swapping the roles of the indices $i$ and $j$, you get the following collection of equations:



$$b_j = \sum_{i=1}^np^{-1}_{ji}a_i\qquad j = 1, \dots, n.$$

