This theorem comes up when talking about ordinary differential equations. Suppose we have a fundamental system, i.e. a basis B=(f1,f2,...,fn) of the vector space of solutions of a differential equation y′=A(x)y on an interval I. The theorem says that the tuple of values
(f1(x0),f2(x0),...,fn(x0))
being linearly independent for some x0∈I is equivalent to it being linearly independent for all x∈I.
The proof is omitted because, according to the author of the textbook, this equivalence is trivial. Could you explain why the implication from "some" to "all" holds?
I would actually think there could be functions for which there is an x∈I at which all the functions happen to be zero; then one can find non-zero coefficients c1,...,cn∈R such that
c1f1(x)+...+cnfn(x)=0.
Why does linear independence at one point imply that the values are linearly independent at every x∈I?
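The worry is reasonable for arbitrary functions: for instance, f1(x)=x and f2(x)=x² (a hypothetical pair, not from the original question) both vanish at x=0, yet no non-trivial combination of them vanishes identically, so they are linearly independent as functions. The point of the theorem is that this cannot happen for a basis of solutions of y′=A(x)y. A quick symbolic check of this pair, assuming SymPy is available:

```python
from sympy import symbols, simplify, wronskian

x = symbols('x')
f1, f2 = x, x**2

# Both functions vanish simultaneously at x = 0 ...
assert f1.subs(x, 0) == 0 and f2.subs(x, 0) == 0

# ... yet they are linearly independent as functions:
# their Wronskian is x**2, which is not identically zero.
W = simplify(wronskian([f1, f2], x))
print(W)  # x**2
```

So pointwise dependence at a single x does not, in general, force dependence of the functions; the answer below shows why it does when the fi solve the same linear ODE.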
Answer
Assume by contradiction that your functions are linearly dependent at some x1∈I, i.e. there exist constants c1,…,cn∈R, with at least one cj≠0, such that
c1f1(x1)+⋯+cnfn(x1)=0.
The function f(x):=c1f1(x)+⋯+cnfn(x) is a solution of the equation (the solution space is a vector space) and f(x1)=0. The zero function is also a solution taking the value 0 at x1, so by uniqueness of solutions to the initial value problem we must have f(x)=0 for every x∈I and, in particular,
c1f1(x0)+⋯+cnfn(x0)=0.
But since the vectors f1(x0),…,fn(x0) are linearly independent, this last condition implies c1=⋯=cn=0, a contradiction.
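The conclusion can also be checked numerically. The sketch below (my own illustration, not from the answer) integrates y′=Ay for the constant matrix A=[[0,1],[−1,0]] from two linearly independent initial vectors at x0=0, and evaluates the determinant of the matrix of solution values (the Wronskian) across the interval. By the argument above it can never vanish; for this A, whose trace is zero, Abel's formula even says it stays constant.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example system y' = A y (harmonic oscillator); A is an assumption for illustration.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def rhs(x, y):
    return A @ y

# Two solutions whose initial values at x0 = 0 are linearly independent.
xs = np.linspace(0.0, 10.0, 201)
sols = [
    solve_ivp(rhs, (0.0, 10.0), y0, t_eval=xs, rtol=1e-10, atol=1e-12).y
    for y0 in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))
]

# Wronskian at each x: determinant of the matrix whose columns are the
# solution values there. It never vanishes on the interval.
W = np.array([
    np.linalg.det(np.column_stack([s[:, k] for s in sols]))
    for k in range(len(xs))
])

print(W.min(), W.max())  # both stay near 1 over the whole interval
```

Replacing A by any continuous matrix-valued A(x) gives the same qualitative picture: the Wronskian is either identically zero or never zero on I.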