I'm trying to understand a specific proof for the notion that differentiability implies continuity in multivariate space.
After having defined differentiability as:
Let $U \subset \mathbb{R}^n$ be open, $\vec f:U \rightarrow \mathbb{R}^m$ be a mapping. Now we call $\vec f$ differentiable in $\vec x_0 \in U$, if there exists a linear mapping $A: \mathbb{R}^n \rightarrow \mathbb{R}^m$ such that:
$\vec f(\vec x_0 + \vec h) - \vec f(\vec x_0) - A \vec h = \vec \phi (\vec h)$ with $\vec \phi (\vec h) = o(||\vec h||) \Leftrightarrow \lim_{\vec h \rightarrow \vec 0} \frac{||\vec \phi (\vec h)||}{||\vec h||} = 0$, which leads us to the derivative in the form $A\vec h = d \vec f(\vec x_0) \vec h$. (Which I think I understand, but am not 100% sure.)
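To convince myself the definition holds in a concrete case, I tried a small numeric sketch (the function $f$, the point $x_0$, and the candidate $A$, taken as the Jacobian, are my own example, not from any source):

```python
import numpy as np

# Hypothetical example: f(x, y) = (x*y, x^2 + y^2), checked at x0 = (1, 2).
def f(v):
    x, y = v
    return np.array([x * y, x**2 + y**2])

x0 = np.array([1.0, 2.0])

# Candidate linear map A: the Jacobian [[y, x], [2x, 2y]] evaluated at (1, 2).
A = np.array([[2.0, 1.0],
              [2.0, 4.0]])

# Check that ||phi(h)|| / ||h|| shrinks as h -> 0 along a fixed direction.
h_dir = np.array([1.0, 1.0]) / np.sqrt(2)
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * h_dir
    phi = f(x0 + h) - f(x0) - A @ h      # the remainder phi(h)
    print(t, np.linalg.norm(phi) / np.linalg.norm(h))  # ratio shrinks with t
```

The printed ratio decreases roughly linearly in $t$, which matches $\vec \phi(\vec h) = o(||\vec h||)$.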
Now for the proof of differentiability implies continuity of $\vec f: U \subset \mathbb{R}^n \rightarrow \mathbb{R}^m$ in $\vec x_0 \in U$, we have:
$\lim_{\vec x \rightarrow \vec x_0} \vec f (\vec x) = \vec f (\vec x_0) + \lim_{\vec x \rightarrow \vec x_0} A(\vec x - \vec x_0) + \lim_{\vec x \rightarrow \vec x_0} \vec \phi (\vec x - \vec x_0)$, which, according to my source, yields $\vec f(\vec x_0) + \vec 0 + \vec 0$.
Now, I understand the implication of the proof, and also that the error term goes to $\vec 0$, since that holds by definition, but I just don't understand why the term $A(\vec x - \vec x_0)$ becomes zero. Maybe I'm missing something from its definition as well? Could someone perhaps explain the definition, or the intuition behind it, any further? Thanks for any answer.
Answer
Hint:
As $A$ is a linear operator from one finite-dimensional space to another there exists a constant $C$ such that $\|A(x - x_0)\| \leqslant C \|x - x_0\|$.
This is easy to prove by expanding with respect to the standard basis vectors for $\mathbb{R}^n$ and $\mathbb{R}^m.$
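Given such a constant, $||A(\vec x - \vec x_0)|| \leqslant C ||\vec x - \vec x_0|| \rightarrow 0$ as $\vec x \rightarrow \vec x_0$, which is exactly the step the question asks about. A quick numeric sketch of the bound (the concrete matrix and the choice of $C$ as the Frobenius norm are my own illustration, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical concrete A : R^2 -> R^3 standing in for the linear map.
A = rng.standard_normal((3, 2))

# Expanding A h over the standard basis and applying Cauchy-Schwarz row by
# row shows the Frobenius norm of A is one valid constant C.
C = np.sqrt((A ** 2).sum())

# Verify ||A h|| <= C ||h|| on many random h.
for _ in range(1000):
    h = rng.standard_normal(2)
    assert np.linalg.norm(A @ h) <= C * np.linalg.norm(h) + 1e-12

# Hence A(x - x0) is squeezed to zero as x -> x0:
for t in [1e-1, 1e-3, 1e-6]:
    h = t * np.array([1.0, 1.0])
    print(t, np.linalg.norm(A @ h))   # shrinks proportionally to t
```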