Thursday, 26 December 2019

Linear independence in construction of Jordan canonical form basis for nilpotent endomorphisms



I am proving by construction that there is some basis in which a nilpotent endomorphism has a Jordan canonical form with only ones on the supradiagonal. I'll put down what I have so far and stop where my problem is, so that you can think about it the way I am.



What I want to prove is:




Theorem



Let $T \in \mathcal{L}(V)$ be an $r$-nilpotent endomorphism, where $V(\mathbb{C})$ is a finite-dimensional vector space. There is some basis of $V$ in which the matrix representation of $T$ is a block diagonal matrix, and the blocks have the form
$$\begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},$$
that is, blocks whose entries are all null except for the ones-filled supradiagonal.
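
(As a quick sanity check of the statement, here is a minimal NumPy sketch; the $5\times 5$ size is an arbitrary choice of mine and not part of the theorem. It verifies that a block of this form is nilpotent, with nilpotency index equal to its size.)

```python
import numpy as np

# A single 5x5 block for the eigenvalue 0: ones on the supradiagonal,
# zeros everywhere else (np.eye with k=1 builds exactly this).
J = np.eye(5, k=1)

# Each power of J pushes the nonzero diagonal one step further up,
# so J^5 is the zero matrix while J^4 is not: J is nilpotent of index 5.
for k in range(1, 6):
    print(k, np.count_nonzero(np.linalg.matrix_power(J, k)))
```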



Proof



First we have that if $T$ is an $r$-nilpotent endomorphism then $T^r = 0_{\mathcal{L}(V)}$. Then, since $U_1 = T(V) \subseteq V = \operatorname{id}(V) = T^0(V) = U_0$, therefore $U_2 = T^2(V) = T(T(V)) \subseteq T(V) = U_1$, and if we suppose that $U_k = T^k(V) \subseteq T^{k-1}(V) = U_{k-1}$ we conclude that $U_{k+1} = T^{k+1}(V) = T(T^k(V)) \subseteq T(T^{k-1}(V)) = T^k(V) = U_k$. Then we have proven by induction over $k$ that $U_k = T^k(V) \subseteq T^{k-1}(V) = U_{k-1}$, and since $T^r = 0_{\mathcal{L}(V)}$ and $U_k = T(U_{k-1})$, we get
$$\{0_V\} = U_r \subseteq U_{r-1} \subseteq \cdots \subseteq U_1 \subseteq U_0 = V,$$
and we have shown too that the $U_k$ are $T$-invariant subspaces and that $U_{r-1} \subseteq \ker T$.
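
(A small numerical illustration of this chain; a sketch assuming an example matrix of my own choosing, namely a $5\times 5$ nilpotent $T$ built from blocks of sizes 3 and 2, so that $r = 3$.)

```python
import numpy as np

# Example nilpotent endomorphism: blocks of sizes 3 and 2, so T^3 = 0 but T^2 != 0.
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = T[3, 4] = 1.0

# dim U_k = dim T^k(V) = rank(T^k); the chain shrinks from V down to {0_V}.
dims = [np.linalg.matrix_rank(np.linalg.matrix_power(T, k)) for k in range(4)]
print(dims)  # [5, 3, 1, 0]  ->  V = U_0 ⊇ U_1 ⊇ U_2 ⊇ U_3 = {0_V}
```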




In the same manner, let $W_0 = \ker T^0 = \ker \operatorname{id} = \{0_V\}$ and $W_k = \ker T^k$. It is easy to see that $T(W_0) = T(\{0_V\}) = \{0_V\}$, therefore $W_0 \subseteq W_1$; moreover $T^2(W_1) = T(T(W_1)) = T(\{0_V\}) = \{0_V\}$, therefore $W_1 \subseteq W_2$. Then suppose $W_{k-1} \subseteq W_k$, and we see that $T^{k+1}(W_k) = T(T^k(W_k)) = T(\{0_V\}) = \{0_V\}$, and therefore $W_k \subseteq W_{k+1}$, and we conclude that we have the chain of nested spaces
$$\{0_V\} = W_0 \subseteq W_1 \subseteq \cdots \subseteq W_{r-1} \subseteq W_r = V,$$
since $W_r = \ker T^r = \ker 0_{\mathcal{L}(V)} = V$.
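
(The same sketch for the kernels; again the $5\times 5$ example with blocks of sizes 3 and 2 is an assumption of mine. By rank–nullity, $\dim W_k = \dim V - \operatorname{rank}(T^k)$.)

```python
import numpy as np

# Same example: nilpotent T with blocks of sizes 3 and 2, so r = 3.
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = T[3, 4] = 1.0
n = T.shape[0]

# dim W_k = dim ker T^k = n - rank(T^k); the chain grows from {0_V} up to V.
dims = [n - np.linalg.matrix_rank(np.linalg.matrix_power(T, k)) for k in range(4)]
print(dims)  # [0, 2, 4, 5]  ->  {0_V} = W_0 ⊆ W_1 ⊆ W_2 ⊆ W_3 = V
```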



Since we have a chain of nested spaces in which the largest is $V$ itself, if we choose a basis for the smallest non-trivial one of them (supposing $U_r \subsetneq U_{r-1}$), that is, $U_{r-1}$, we can climb the chain constructing bases for the larger spaces by completing the basis we already have, which is always possible.



Now, since $U_{r-1} \subseteq \ker T$, every vector in $U_{r-1}$ is an eigenvector for the eigenvalue $0$. Then every basis we choose for $U_{r-1}$ is a basis of eigenvectors. To complete this basis $\{u_{(r-1)i}\}$ to a basis of $U_{r-2}$ (supposing $U_{r-1} \subsetneq U_{r-2}$), we can remember that $T(U_{r-2}) = U_{r-1}$, therefore every vector in $U_{r-1}$ has a preimage in $U_{r-2}$. Then there are some $u_{(r-2)i} \in U_{r-2}$ (maybe many for each $i$, since we don't know that $T$ is injective) such that $T(u_{(r-2)i}) = u_{(r-1)i}$. It is to be noted that for fixed $i$ it is not possible that $u_{(r-2)i} = u_{(r-1)i}$, since $u_{(r-1)i}$ is an eigenvector associated to the eigenvalue $0$, as is every vector in $U_{r-1}$, since they are linear combinations of the basis vectors. Since we have stated they are not unique, we can choose one and only one for every $i$. It only remains to see that they are linearly independent: take a null linear combination $\sum_i \alpha_i u_{(r-1)i} + \sum_i \beta_i u_{(r-2)i} = 0_V$ and apply $T$ on both sides: $\sum_i \alpha_i T(u_{(r-1)i}) + \sum_i \beta_i T(u_{(r-2)i}) = \sum_i \alpha_i 0_V + \sum_i \beta_i u_{(r-1)i} = \sum_i \beta_i u_{(r-1)i} = 0_V$. Since the last sum is a null linear combination of linearly independent vectors (they form a basis for $U_{r-1}$), it implies that $\beta_i = 0$ for every $i$. Therefore the initial expression takes the form $\sum_i \alpha_i u_{(r-1)i} = 0_V$, and $\alpha_i = 0$ for every $i$ by the same argument. We conclude that they are linearly independent.
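
(A sketch of this step in NumPy, with the same assumed $5\times 5$ example; the helper `col_space` and the use of least squares to pick one particular preimage are my own choices, not something forced by the argument.)

```python
import numpy as np

def col_space(A, tol=1e-10):
    """Orthonormal basis (as columns) for the column space of A."""
    U, S, _ = np.linalg.svd(A)
    return U[:, :int(np.sum(S > tol))]

# Assumed example: nilpotent T with blocks of sizes 3 and 2, so r = 3.
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = T[3, 4] = 1.0

# Basis {u_(r-1)i} of U_{r-1} = T^2(V); every such vector lies in ker T.
U_r1 = col_space(np.linalg.matrix_power(T, 2))
print(np.allclose(T @ U_r1, 0))                 # True: eigenvectors for 0

# One preimage u_(r-2)i per basis vector: solve T x = u.  Least squares
# simply picks one of the many preimages (T is not injective).
pre = np.linalg.lstsq(T, U_r1, rcond=None)[0]
print(np.allclose(T @ pre, U_r1))               # True: T(u_(r-2)i) = u_(r-1)i

# The union {u_(r-1)i} ∪ {u_(r-2)i} is linearly independent.
stacked = np.hstack([U_r1, pre])
print(np.linalg.matrix_rank(stacked) == stacked.shape[1])   # True
```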



At this moment we have $\{u_{(r-1)i}, u_{(r-2)i}\}$, a linearly independent set of vectors in $U_{r-2}$. If $\dim U_{r-2} = 2\dim U_{r-1}$, then we have finished the construction; if not ($\dim U_{r-2} \geq 2\dim U_{r-1} + 1$), then we have to choose additional vectors $u_{(r-2)j}$, with $j = \dim U_{r-1} + 1, \ldots, \dim U_{r-2} - \dim U_{r-1}$, that complete the set to a basis of $U_{r-2}$. Again, as in the construction of the $u_{(r-2)i}$, we remember that $T(U_{r-2}) = U_{r-1}$. Therefore, every vector we choose will have, under $T$, the form $T(v_{(r-2)j}) = \sum_i \mu_{ji} u_{(r-1)i}$. But since we want them to be linearly independent from the $u_{(r-1)i}$ and the $u_{(r-2)i}$, we can choose them from $\ker T$; that is, we can set $u_{(r-2)j} = v_{(r-2)j} - \sum_i \mu_{ji} u_{(r-2)i}$, and applying $T$ we obtain $T(u_{(r-2)j}) = T(v_{(r-2)j}) - \sum_i \mu_{ji} T(u_{(r-2)i}) = \sum_i \mu_{ji} u_{(r-1)i} - \sum_i \mu_{ji} u_{(r-1)i} = 0_V$. Then we only need to see that they are linearly independent with the others. Take, again, a null linear combination $\sum_i \alpha_i u_{(r-1)i} + \sum_i \beta_i u_{(r-2)i} + \sum_j \gamma_j u_{(r-2)j} = 0_V$. First we can apply $T$ to both sides: $\sum_i \alpha_i T(u_{(r-1)i}) + \sum_i \beta_i T(u_{(r-2)i}) + \sum_j \gamma_j T(u_{(r-2)j}) = \sum_i \alpha_i 0_V + \sum_i \beta_i u_{(r-1)i} + \sum_j \gamma_j 0_V = \sum_i \beta_i u_{(r-1)i} = 0_V$, and therefore $\beta_i = 0$ for every $i$, since $\{u_{(r-1)i}\}$ is a basis. Then the initial expression takes the form $\sum_i \alpha_i u_{(r-1)i} + \sum_j \gamma_j u_{(r-2)j} = 0_V$. Note that we now have two sets of vectors that are all in $\ker T$...
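
(The same assumed example, continuing the sketch: take some $v_{(r-2)j}$ in $U_{r-2}$ outside the span of what we already have, and correct it so that it lands in $\ker T$, exactly as in the construction above. Picking $v_{(r-2)j}$ as the basis vector of $U_{r-2}$ with the largest component outside the current span is just one convenient choice of mine.)

```python
import numpy as np

def col_space(A, tol=1e-10):
    """Orthonormal basis (as columns) for the column space of A."""
    U, S, _ = np.linalg.svd(A)
    return U[:, :int(np.sum(S > tol))]

# Assumed example: nilpotent T with blocks of sizes 3 and 2, so r = 3.
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = T[3, 4] = 1.0

U_r1 = col_space(np.linalg.matrix_power(T, 2))         # the u_(r-1)i
pre  = np.linalg.lstsq(T, U_r1, rcond=None)[0]         # the u_(r-2)i (preimages)

# Orthonormalize the span of what we already have.
Q = col_space(np.hstack([U_r1, pre]))

# Pick a vector of U_{r-2} = T(V) with a large component outside that span.
B = col_space(T)
resid = B - Q @ (Q.T @ B)
v = B[:, np.argmax(np.linalg.norm(resid, axis=0))]     # the v_(r-2)j

# T(v) = sum_i mu_i u_(r-1)i; recover mu and subtract the same combination
# of the preimages, so the corrected vector lands in ker T.
mu = np.linalg.lstsq(U_r1, T @ v, rcond=None)[0]
z = v - pre @ mu
print(np.allclose(T @ z, 0))                                         # True
print(np.linalg.matrix_rank(np.column_stack([U_r1, pre, z])) == 3)   # still independent
```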



This is the point where I don't see a way to show that $\alpha_i = 0$ and $\gamma_j = 0$ for every $i$ and $j$, which I need in order to say that they are linearly independent. Any kind of help (hints more than anything else) will be welcome.



Answer



Mostly, you are on the right track and everything you say is correct, though there are a few spots where a bit more thought could let you be sharper. Let me discuss them first.



You note along the way that "(supposing $U_r \subsetneq U_{r-1}$)". In fact, we know that for each $i$, $0 \leq i < r$, $U_{i+1} \subsetneq U_i$. The reason is that if we have $U_{i+1} = U_i$, then that means that $U_{i+2} = T(U_{i+1}) = T(U_i) = U_{i+1}$, and so we have reached a stabilizing point; since we know that the sequence must end with the trivial subspace, that would necessarily imply that $U_i = \{0\}$. But we are assuming that the degree of nilpotence of $T$ is $r$, so that $U_i \neq \{0\}$ for any $i < r$; hence $U_{i+1} \subsetneq U_i$ is a certainty, not an assumption.
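
(Numerically, with the same assumed $5\times 5$ example as before, the strict decrease is easy to observe.)

```python
import numpy as np

# Assumed example: nilpotent T with blocks of sizes 3 and 2, so r = 3.
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = T[3, 4] = 1.0
r = 3

dims = [np.linalg.matrix_rank(np.linalg.matrix_power(T, i)) for i in range(r + 1)]
print(dims)                                          # [5, 3, 1, 0]
print(all(dims[i + 1] < dims[i] for i in range(r)))  # True: strictly smaller at every step
```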



You also comment parenthetically: "(maybe many for each $i$, since we don't know that $T$ is injective)". Actually, we know that $T$ is definitely not injective, because $T$ is nilpotent. The only way $T$ could be both nilpotent and injective is if $V$ is zero-dimensional. And since every vector of $U_{r-1}$ is mapped to $0$, it is certainly the case that the restriction of $T$ to $U_i$ is not injective for any $i$, $0 \leq i < r$.



As to what you are doing: suppose $u_1, \ldots, u_t$ are the basis for $U_{r-1}$, and $v_1, \ldots, v_t$ are vectors in $U_{r-2}$ such that $T(v_i) = u_i$. We want to show that $\{u_1, \ldots, u_t, v_1, \ldots, v_t\}$ is linearly independent; you can do that the way you did before: take a linear combination equal to $0$,
$$\alpha_1 u_1 + \cdots + \alpha_t u_t + \beta_1 v_1 + \cdots + \beta_t v_t = 0.$$
Apply $T$ to get $\beta_1 u_1 + \cdots + \beta_t u_t = 0$ and conclude the $\beta_j$ are zero; and then use the fact that $u_1, \ldots, u_t$ is linearly independent to conclude that $\alpha_1 = \cdots = \alpha_t = 0$.




Now, this may not be a basis for $U_{r-2}$, since there may be elements of $\ker(T) \cap U_{r-2}$ that are not in $U_{r-1}$.



The key is to choose what is missing so that they are linearly independent from $u_1, \ldots, u_t$. How can we do that? Note that $U_{r-1} \subseteq \ker(T)$, so in fact $U_{r-1} \subseteq \ker(T) \cap U_{r-2}$.
So we can complete $\{u_1, \ldots, u_t\}$ to a basis for $\ker(T) \cap U_{r-2}$ with some vectors $z_1, \ldots, z_s$.
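
(This completion step can also be sketched numerically; same assumed example, and the helpers `col_space`/`null_space` are again my own. The intersection is computed as $\ker(T) \cap U_{r-2} = \{By : TBy = 0\}$ for a basis matrix $B$ of $U_{r-2}$.)

```python
import numpy as np

def col_space(A, tol=1e-10):
    """Orthonormal basis (as columns) for the column space of A."""
    U, S, _ = np.linalg.svd(A)
    return U[:, :int(np.sum(S > tol))]

def null_space(A, tol=1e-10):
    """Orthonormal basis (as columns) for the null space of A."""
    _, S, Vt = np.linalg.svd(A)
    return Vt[int(np.sum(S > tol)):].T

# Assumed example: nilpotent T with blocks of sizes 3 and 2, so r = 3.
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = T[3, 4] = 1.0

U_r1 = col_space(np.linalg.matrix_power(T, 2))   # u_1, ..., u_t  (here t = 1)
B    = col_space(T)                              # a basis of U_{r-2} = T(V)

# ker(T) ∩ U_{r-2} = { B y : T B y = 0 }.
K = B @ null_space(T @ B)
print(K.shape[1])                                # its dimension (here 2 > t)

# Complete {u_i} to a basis of the intersection: keep the part of K that is
# orthogonal to span{u_1, ..., u_t}; its columns play the role of z_1, ..., z_s.
Z = col_space(K - U_r1 @ (U_r1.T @ K))
print(Z.shape[1], np.allclose(T @ Z, 0))         # s = 1, and T(z_j) = 0
```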



The question now is how to show that $\{u_1, \ldots, u_t, v_1, \ldots, v_t, z_1, \ldots, z_s\}$ is linearly independent. The answer is: the same way. Take a linear combination equal to $0$:
$$\alpha_1 u_1 + \cdots + \alpha_t u_t + \beta_1 v_1 + \cdots + \beta_t v_t + \gamma_1 z_1 + \cdots + \gamma_s z_s = 0.$$
Apply $T$ to conclude that the $\beta_i$ are zero; then use the fact that $\{u_1, \ldots, u_t, z_1, \ldots, z_s\}$ is a basis for $\ker(T) \cap U_{r-2}$ to conclude that the $\alpha_i$ and the $\gamma_j$ are all zero as well.
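
(Putting the three families together in the running numerical sketch, with $t = 1$ and $s = 1$ in the assumed example, the rank check confirms they form a basis of $U_{r-2}$.)

```python
import numpy as np

def col_space(A, tol=1e-10):
    """Orthonormal basis (as columns) for the column space of A."""
    U, S, _ = np.linalg.svd(A)
    return U[:, :int(np.sum(S > tol))]

def null_space(A, tol=1e-10):
    """Orthonormal basis (as columns) for the null space of A."""
    _, S, Vt = np.linalg.svd(A)
    return Vt[int(np.sum(S > tol)):].T

# Assumed example: nilpotent T with blocks of sizes 3 and 2, so r = 3.
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = T[3, 4] = 1.0

U_r1 = col_space(np.linalg.matrix_power(T, 2))   # u_1, ..., u_t
Vs   = np.linalg.lstsq(T, U_r1, rcond=None)[0]   # v_1, ..., v_t with T(v_i) = u_i
B    = col_space(T)                              # basis of U_{r-2}
K    = B @ null_space(T @ B)                     # ker(T) ∩ U_{r-2}
Z    = col_space(K - U_r1 @ (U_r1.T @ K))        # z_1, ..., z_s

M = np.hstack([U_r1, Vs, Z])
# dim U_{r-2}, number of vectors, and their rank all agree: 3 3 3.
print(np.linalg.matrix_rank(T), M.shape[1], np.linalg.matrix_rank(M))
```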




And now you have a basis for $U_{r-2}$. Why? Because by the Rank-Nullity Theorem applied to the restriction of $T$ to $U_{r-2}$, we know that
$$\dim(U_{r-2}) = \dim(T(U_{r-2})) + \dim(\ker(T) \cap U_{r-2}).$$
But $T(U_{r-2}) = U_{r-1}$, so $\dim(T(U_{r-2})) = \dim(U_{r-1}) = t$; and $\dim(\ker(T) \cap U_{r-2}) = t + s$, since $\{u_1, \ldots, u_t, z_1, \ldots, z_s\}$ is a basis for this subspace. Hence, $\dim(U_{r-2}) = t + t + s = 2t + s$, which is exactly the number of linearly independent vectors you have.
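
(The dimension bookkeeping can be checked independently of the construction; a sketch with the same assumed example, computing $\dim(\ker T \cap U_{r-2})$ via $\dim A + \dim B - \dim(A + B)$ rather than via the restriction of $T$.)

```python
import numpy as np

rank = np.linalg.matrix_rank

def null_space(A, tol=1e-10):
    """Orthonormal basis (as columns) for the null space of A."""
    _, S, Vt = np.linalg.svd(A)
    return Vt[int(np.sum(S > tol)):].T

# Assumed example: nilpotent T with blocks of sizes 3 and 2, so r = 3.
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = T[3, 4] = 1.0
n = T.shape[0]

dim_U_r2 = rank(T)            # dim U_{r-2} = dim T(V)
t        = rank(T @ T)        # dim U_{r-1} = dim T(U_{r-2})

# dim(ker T ∩ U_{r-2}) = dim ker T + dim U_{r-2} - dim(ker T + U_{r-2}),
# where the sum's dimension is the rank of two stacked spanning sets.
dim_ker   = n - rank(T)
dim_sum   = rank(np.hstack([null_space(T), T]))
dim_inter = dim_ker + dim_U_r2 - dim_sum      # = t + s
s = dim_inter - t

print(dim_U_r2, t + dim_inter, 2 * t + s)     # 3 3 3: rank-nullity checks out
```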



You want to use the same idea "one step up": you will have that $u_1, \ldots, u_t, z_1, \ldots, z_s$ is a linearly independent subset of $U_{r-3} \cap \ker(T)$, so you will complete it to a basis of that intersection; after adding preimages to $z_1, \ldots, z_s$ and $v_1, \ldots, v_t$, you will get a "nice" basis for $U_{r-3}$. And so on.
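
(Finally, the whole recursion can be prototyped numerically. The sketch below is my own assembly of the procedure described above, under a few assumptions: a test matrix obtained by hiding a nilpotent matrix with blocks of sizes 3, 2 and 1 behind a random, seeded change of basis; crude absolute tolerances that are fine for this small, well-scaled example; and least squares to pick preimages inside each $U_j$. Each "chain" is stored bottom-first, so listing the chains one after another gives a basis in which the matrix of $T$ is block diagonal with ones on the supradiagonal.)

```python
import numpy as np

def col_space(A, tol=1e-9):
    """Orthonormal basis (as columns) for the column space of A."""
    U, S, _ = np.linalg.svd(A)
    return U[:, :int(np.sum(S > tol))]

def null_space(A, tol=1e-9):
    """Orthonormal basis (as columns) for the null space of A."""
    _, S, Vt = np.linalg.svd(A)
    return Vt[int(np.sum(S > tol)):].T

# Hypothetical test matrix: nilpotent blocks of sizes 3, 2, 1 hidden by a
# random (seeded) change of basis, so the Jordan structure is not visible.
rng = np.random.default_rng(0)
N0 = np.zeros((6, 6)); N0[0, 1] = N0[1, 2] = N0[3, 4] = 1.0
P0 = rng.standard_normal((6, 6))
T = P0 @ N0 @ np.linalg.inv(P0)
n = T.shape[0]

# Nilpotency index r: smallest k with T^k = 0 (numerically).
r = next(k for k in range(1, n + 1)
         if np.allclose(np.linalg.matrix_power(T, k), 0))

# Each chain is stored bottom-first: chain[0] ∈ ker T and T(chain[i+1]) = chain[i].
# Start with a basis of U_{r-1}; each of its vectors opens a chain.
chains = [[u] for u in col_space(np.linalg.matrix_power(T, r - 1)).T]

for j in range(r - 2, -1, -1):
    Bj = col_space(np.linalg.matrix_power(T, j))          # basis of U_j
    # Extend every chain: pick a preimage (inside U_j) of its current top.
    for chain in chains:
        y = np.linalg.lstsq(T @ Bj, chain[-1], rcond=None)[0]
        chain.append(Bj @ y)
    # Complete the set of chain bottoms to a basis of ker(T) ∩ U_j;
    # each new vector starts a new chain, i.e. a new Jordan block.
    bottoms = np.column_stack([c[0] for c in chains])
    Q = col_space(bottoms)
    K = Bj @ null_space(T @ Bj)                           # ker(T) ∩ U_j
    for z in col_space(K - Q @ (Q.T @ K)).T:
        chains.append([z])

# Assemble the basis and look at the matrix of T in it: Jordan blocks for
# the eigenvalue 0, with ones on the supradiagonal and zeros elsewhere.
P = np.column_stack([v for chain in chains for v in chain])
A = np.linalg.inv(P) @ T @ P
print(sorted(len(c) for c in chains))   # block sizes, here [1, 2, 3]
print(np.round(A, 6))
```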

