Thursday, 20 March 2014

linear algebra - If functions are linearly independent for one $x \in I$, are they linearly independent for all $x \in I$?




This theorem comes up when talking about ordinary differential equations. Basically, if we have a fundamental system, i.e. a basis $B = (f_1, f_2, \ldots, f_n)$ of the vector space of solutions of a differential equation $y' = A(x)y$, we can check for linear independence (if we are unsure whether it really is a basis) by checking that

$$(f_1(x_0), f_2(x_0), \ldots, f_n(x_0))$$

is linearly independent for some $x_0 \in I$. The theorem says that linear independence for some $x_0 \in I$ is equivalent to linear independence for all $x \in I$.
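The theorem can be illustrated numerically: integrate a basis of solutions of $y' = A(x)y$ and watch the determinant of the solution matrix (the Wronskian) at different points of $I$. This is a minimal sketch, not from the original post; the coefficient matrix `A` below is a hypothetical example chosen only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(x):
    # hypothetical coefficient matrix, chosen only for illustration
    return np.array([[0.0, 1.0], [-1.0, 0.0]])

def rhs(x, y):
    return A(x) @ y

x0, x1 = 0.0, 2.0

# Columns f1, f2: solutions whose initial values at x0 are e_1, e_2,
# which are linearly independent at x0 by construction.
cols = [solve_ivp(rhs, (x0, x1), e, dense_output=True,
                  rtol=1e-8, atol=1e-10).sol
        for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]

def wronskian(x):
    # Determinant of the matrix whose columns are f1(x), f2(x).
    return np.linalg.det(np.column_stack([s(x) for s in cols]))

# Independence at x0 propagates to every x in [x0, x1]:
# the Wronskian stays bounded away from zero.
print(wronskian(x0), wronskian(x1))
```

By Liouville's formula, $W(x) = W(x_0)\,\exp\int_{x_0}^{x} \operatorname{tr} A(t)\,dt$, so the Wronskian either vanishes identically or never vanishes, which is exactly the "some implies all" phenomenon the theorem describes.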



The author of the textbook omits the proof because the equivalence is supposedly trivial. Could you explain why the implication from "some" to "all" holds?



I'd actually think there would be functions for which there is an $x \in I$ where all the functions happen to be zero, and then you can find coefficients $c_1, \ldots, c_n \in \mathbb{R}$, not all zero, so that

$$c_1 f_1(x) + \ldots + c_n f_n(x) = 0.$$

So why does independence at one point imply that they must be linearly independent for all $x$?


Answer



Assume by contradiction that your functions are linearly dependent at some $x_1 \in I$, i.e. there exist constants $c_1, \ldots, c_n \in \mathbb{R}$, with at least one $c_j \neq 0$, such that

$$c_1 f_1(x_1) + \ldots + c_n f_n(x_1) = 0.$$

The function $f(x) := c_1 f_1(x) + \ldots + c_n f_n(x)$ is a solution of the equation and $f(x_1) = 0$. Since the zero function is also a solution taking the value $0$ at $x_1$, uniqueness forces $f(x) = 0$ for every $x \in I$ and, in particular,

$$c_1 f_1(x_0) + \ldots + c_n f_n(x_0) = 0.$$

But this last condition, together with linear independence at $x_0$, implies $c_1 = \cdots = c_n = 0$, a contradiction.
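The same argument can be packaged as a statement about evaluation maps (a restatement, not taken from the answer): writing $\mathcal{S}$ for the solution space, existence of solutions makes each evaluation map surjective and uniqueness makes it injective, so it is a linear isomorphism for every $x \in I$, and isomorphisms preserve linear independence.

```latex
\[
  \Phi_x \colon \mathcal{S} \to \mathbb{R}^n, \qquad \Phi_x(f) = f(x),
\]
\[
  \Phi_x \text{ isomorphism} \;\Longrightarrow\;
  \bigl( f_1, \dots, f_n \text{ independent in } \mathcal{S}
  \iff f_1(x), \dots, f_n(x) \text{ independent in } \mathbb{R}^n \bigr).
\]
```

Since this holds for every $x \in I$ simultaneously, independence at one point and independence at all points are both equivalent to independence in $\mathcal{S}$.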

