Saturday 18 March 2017

linear algebra - Alternate Definition of the Change of Basis Matrix



Let $u = (u_1, \dots, u_n)$ and $v = (v_1, \dots, v_n)$ be bases for the vector space $V$. According to the definition supplied by Arturo in the answer to this question, the change of basis matrix from $u$ to $v$ is the matrix $P$ such that
for any vector $a \in V$

$$
P[a]_u = [a]_v
$$
where $[a]_u$ is the representation of $a$ with respect to the basis $u$ and $[a]_v$ is the representation of $a$ with respect to the basis $v$. On page 251 of Birkhoff & Mac Lane's Algebra (3rd ed.), the change of basis matrix from $u$ to $v$ is defined as the matrix $P$ whose components are given by
$$
P^i_j = v^i(u_j)
$$
where $(v^1, \dots, v^n)$ denotes the basis dual to $(v_1, \dots, v_n)$.



Now, I am trying to demonstrate the equivalence of these two definitions.
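
(As a plausibility check, here is a minimal numerical sketch in Python/NumPy, with two arbitrarily chosen bases of $\mathbb{R}^3$ and an arbitrary test vector, all made up purely for illustration. The dual functional $v^i$ acts as the $i$-th row of $V^{-1}$, so the matrix with entries $P^i_j = v^i(u_j)$ is $V^{-1}U$, and one can check that it does send $[a]_u$ to $[a]_v$.)

```python
import numpy as np

# Columns of U and V are the basis vectors u_j and v_j, written in the
# standard coordinates of R^3 (an arbitrary concrete example).
U = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
V = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# The dual functional v^i is represented by the i-th row of V^{-1},
# so the matrix with entries P^i_j = v^i(u_j) is V^{-1} U.
P = np.linalg.inv(V) @ U

# Pick an arbitrary vector a and compute its coordinates in each basis.
a = np.array([3.0, -1.0, 2.0])
a_u = np.linalg.solve(U, a)   # [a]_u
a_v = np.linalg.solve(V, a)   # [a]_v

# The first definition requires P [a]_u = [a]_v.
print(np.allclose(P @ a_u, a_v))   # True
```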




I'm aware of the various properties of dual bases such as
$$
v^i(v_j) = \delta^i_j
$$
which implies that $v^i(a)$ is the $i^{th}$ component of $a$ relative to the basis $v$, so that $a$ can be expressed as
$$
a = v^1(a)v_1 + \cdots + v^n(a)v_n
$$
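
To spell out why: writing $a = c^1 v_1 + \cdots + c^n v_n$ and applying $v^i$ to both sides gives
$$
v^i(a) = c^1 v^i(v_1) + \cdots + c^n v^i(v_n) = c^i,
$$
so the coefficient of $v_i$ in this expansion is indeed $v^i(a)$.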




I've tried various approaches using these facts, but the algebra always ends up a mess: I get more equations than unknowns, or I end up with rank-two tensors, and so on, to no avail. My basic strategy has been to expand the left-hand side of the expression in the original change of basis definition and then solve for the components of the matrix $P$, to show that they equal $v^i(u_j)$.



I'm thinking it might be easier to compute the inverse first, but this, of course, would require knowing that $P$ is invertible (which isn't a fact yet established in the development I'm following).
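
(One possibly useful observation: once the formula $P^i_j = v^i(u_j)$ is in hand, invertibility comes for free. The matrix $Q$ with entries $Q^i_j = u^i(v_j)$, where $(u^1, \dots, u^n)$ is the basis dual to $u$, satisfies
$$
\sum_k P^i_k Q^k_j = \sum_k v^i(u_k)\,u^k(v_j) = v^i\!\Big(\sum_k u^k(v_j)\,u_k\Big) = v^i(v_j) = \delta^i_j,
$$
using the expansion of $v_j$ in the basis $u$; the symmetric computation gives $QP = I$ as well.)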



What is the best way to proceed with this?



Update



Following azarel's hint, I believe I have a handle on this now. Consider, for example, the case of a 2-dimensional vector space, and let $(x^1, x^2)$ denote the coordinates of $a$ relative to $u$ and $(y^1, y^2)$ denote the coordinates of $a$ relative to $v$. If $a$ is one of the basis vectors $u_1$ or $u_2$, then its coordinates relative to $u$ are $(1, 0)$ or $(0, 1)$, respectively.
In the first instance, the change of basis equation implies that $P^1_1 = y^1$ and $P^2_1 = y^2$.
But, using the summation convention,
$$
y^1 = v^1(y^jv_j) = v^1(x^ju_j) = x^jv^1(u_j) = v^1(u_1)
$$
since $x^1 = 1$ and $x^2 = 0$.



Similarly,

$$
y^2 = v^2(u_1)
$$
Therefore, $P^1_1 = v^1(u_1)$ and $P^2_1 = v^2(u_1)$. The scalars $P^1_2$ and $P^2_2$ can be obtained in an analogous fashion
by applying $P$ to the coordinate vector $(0, 1)$, i.e. by taking $a = u_2$. This verifies that
in the case of $n=2$, $P^i_j = v^i(u_j)$. I believe I can now turn this into a general argument that proves the claim.
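
For the record, a sketch of that general argument: for arbitrary $a$, let $x^j$ denote its coordinates relative to $u$ and $y^i$ its coordinates relative to $v$. Then, with the summation convention,
$$
y^i = v^i(a) = v^i(x^j u_j) = x^j v^i(u_j),
$$
which is precisely the statement that the matrix with entries $P^i_j = v^i(u_j)$ sends $[a]_u$ to $[a]_v$.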


Answer



Hint:
A linear map is determined by its action on the basis.
Try applying the matrix $P$ to the coordinate vectors of the $u_i$'s.
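
Spelling the hint out a little (a sketch, not the original answerer's wording): taking $a = u_j$ in the relation $P[a]_u = [a]_v$, note that $[u_j]_u = e_j$, the $j$-th standard coordinate vector, so $Pe_j$, i.e. the $j$-th column of $P$, must equal $[u_j]_v$. By the dual-basis expansion above, the $i$-th entry of $[u_j]_v$ is $v^i(u_j)$, giving
$$
P^i_j = (Pe_j)^i = \big([u_j]_v\big)^i = v^i(u_j).
$$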



