Thursday 20 March 2014

linear algebra - If functions are linearly independent for one $x\in I$, are they linearly independent for all $x\in I$?




This theorem comes up when talking about ordinary differential equations. Basically, if we have a fundamental system, i.e. a basis $B=(f_1,f_2,\ldots,f_n)$ of the vector space of solutions of a differential equation
$$y'=A(x)y,$$
we can check for linear independence (if we are unsure whether it really is a basis) by checking that
$$(f_1(x_0),f_2(x_0),\ldots,f_n(x_0))$$
is linearly independent for some $x_0 \in I$. The theorem says that linear independence at some $x_0 \in I$ is equivalent to linear independence at every $x \in I$.
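For concreteness, here is a small example (my own, not from the textbook): take
$$
A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad
f_1(x) = \begin{pmatrix} \cos x \\ -\sin x \end{pmatrix}, \quad
f_2(x) = \begin{pmatrix} \sin x \\ \cos x \end{pmatrix}.
$$
At $x_0 = 0$ the vectors $f_1(0) = e_1$ and $f_2(0) = e_2$ are obviously linearly independent, and indeed $\det\big(f_1(x)\mid f_2(x)\big) = \cos^2 x + \sin^2 x = 1 \neq 0$ for every $x$, so they stay linearly independent on all of $I=\mathbb{R}$.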



The proof is omitted because, according to the author of the textbook, this equivalence is supposed to be trivial. Could you explain why the implication from "some" to "all" holds?



I'd actually think there could be functions for which there is an $x\in I$ where all of them happen to be zero, and then you can find coefficients $c_1,\ldots,c_n\in\mathbb{R}$, not all zero, such that

$$ c_1 f_1(x) + \cdots + c_n f_n(x) = 0, $$

so they would be linearly dependent at that point. Why does independence at a single point rule this out, i.e. why must they be linearly independent for all $x$?


Answer



Assume by contradiction that your functions are linearly dependent at some $x_1\in I$, i.e. there exist constants $c_1, \ldots, c_n\in\mathbb{R}$, with at least one $c_j\neq 0$, such that
$$
c_1 f_1(x_1) + \cdots + c_n f_n(x_1) = 0.
$$
The function $f(x) := c_1 f_1(x) + \cdots + c_n f_n(x)$ is a solution of the equation (the solutions form a vector space) and $f(x_1) = 0$. Since the zero function is also a solution taking the value $0$ at $x_1$, uniqueness of solutions forces $f(x) = 0$ for every $x\in I$ and, in particular,
$$
c_1 f_1(x_0) + \cdots + c_n f_n(x_0) = 0.
$$
But $(f_1(x_0),\ldots,f_n(x_0))$ is linearly independent by assumption, so this last condition implies $c_1 = \cdots = c_n = 0$, a contradiction.
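If it helps to see this numerically, here is a small sketch (my own illustration, not part of the original answer; it assumes numpy and scipy are available and uses an arbitrarily chosen constant $A$). It integrates the system once per standard-basis initial value and checks that the determinant of the matrix $(f_1(x)\mid f_2(x))$ never vanishes on the interval:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary example system y' = A y (any continuous A(x) works the same way).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def rhs(x, y):
    return A @ y

xs = np.linspace(0.0, 10.0, 201)

# One solution per standard basis vector as initial value at x_0 = 0,
# so the solutions form a fundamental system.
sols = [solve_ivp(rhs, (xs[0], xs[-1]), e, t_eval=xs, rtol=1e-9, atol=1e-12).y
        for e in np.eye(2)]

# Determinant of the solution matrix (f_1(x) | f_2(x)) at each sample point.
dets = [np.linalg.det(np.column_stack([s[:, k] for s in sols]))
        for k in range(len(xs))]

print(min(abs(d) for d in dets))  # stays (numerically) bounded away from zero
```

The printed minimum stays away from zero, matching the theorem: once the solution vectors are independent at one point, they are independent at every point of the interval.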


