Friday 15 March 2013

integration - Rigorous definition of differentials in the context of integrals.




When using the substitution rule in integration, one turns an integral from the form $$\int f(x)\,\mathrm dx \quad(*)$$ into the form $$\int f(x(t))\,\frac{\mathrm dx}{\mathrm dt}\,\mathrm dt. \quad(**)$$ This transformation is usually accomplished by differentiating the substitution $x = x(t)$, so that $\dfrac{\mathrm dx}{\mathrm dt} = \dfrac{\mathrm d x(t)}{\mathrm dt}$. Now, at this point, one turns this into a differential form by means of magic, so that $\mathrm dx = \dfrac{\mathrm dx(t)}{\mathrm dt}\mathrm dt$. This replaces the differential term $\mathrm dx$ in the original expression $(*)$ with the one in the transformed expression $(**)$.
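As a concrete sanity check of the transform itself, one can compare the two forms numerically; the integrand $f(x)=x^2$ and the substitution $x(t)=2t$ below are illustrative choices, not part of the question:

```python
# Numerically compare (*) and (**) for an illustrative example:
# f(x) = x**2 with the substitution x(t) = 2t, integrating over x in [0, 2],
# i.e. t in [0, 1].  (f and x(t) are hypothetical choices for demonstration.)

def riemann(g, a, b, n=100_000):
    """Left Riemann sum of g over [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(g(a + i * h) for i in range(n)) * h

f = lambda x: x ** 2
x_of_t = lambda t: 2 * t
dx_dt = lambda t: 2.0          # derivative of x(t) = 2t

# (*): integrate f(x) dx over x in [0, 2]
lhs = riemann(f, 0.0, 2.0)

# (**): integrate f(x(t)) * dx/dt dt over t in [0, 1]
rhs = riemann(lambda t: f(x_of_t(t)) * dx_dt(t), 0.0, 1.0)

print(abs(lhs - rhs) < 1e-3)   # the two forms agree up to discretization error
```

Both sums approximate $8/3$; the agreement is exactly what the "magic" step $\mathrm dx = \frac{\mathrm dx}{\mathrm dt}\mathrm dt$ encodes.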



I'd like to understand that magic step more rigorously. It is often explained as "multiplication" by $\mathrm dt$, which does make sense, but it does not explain the nature of differentials: when is such "multiplication" allowed? It seems there should be a more rigorous way of explaining it, perhaps by defining the "multiplication".



So, in what ways can differentials like $\mathrm dx$ and $\mathrm dt$ be formalized in this context? I've seen them compared to small numbers, which often works, but can this analogy fail? (And what prerequisites are needed to understand them?)


Answer



Here's one way:




Consider $x$ and $t$ as coordinate systems on $\mathbb{R}$. If we wish to change coordinate systems, we have to look at how they transform into one another. If we take $t$ as the reference coordinate system and define the coordinate transformation by $x(t) = 2t$, then a point with coordinate $t$ has coordinate $x = 2t$ when viewed in the $x$ system.



Now, since $(\mathbb{R}, +, \cdot)$ is a vector space, it has a dual space $\mathbb{R}^*$. Using this space, we can start defining the elements $dx$ and $dt$. Specifically, $dt$ is the basis of $\mathbb{R}^*$ dual to the basis vector $t$ of $\mathbb{R}$. The elements of the dual space are called 1-forms. 1-forms in $\mathbb{R}^*$ "eat" vectors in $\mathbb{R}$ and return a measure along that direction (only one dimension, so one direction). In this case you can think of elements of $\mathbb{R}^*$ as "row" vectors that multiply "column" vectors in $\mathbb{R}$ (which is just the dot product of two vectors).



A coordinate change gives a different basis for $\mathbb{R}$ and for $\mathbb{R}^*$. In this example, if $dt$ eats a one-dimensional vector $a$, it returns $a$; but when $dx$ eats $a$, it returns $2a$, since measurements in the $x$ coordinate are twice those in the $t$ coordinate. That is, $dx = 2\,dt$. For a general coordinate transformation, the 1-form can be described by $dx = \frac{dx}{dt}\,dt$.
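The "1-form eats a vector" picture can be sketched in a few lines of code; modeling the 1-forms as plain linear functions is an assumption of this illustration, with the transform $x(t) = 2t$ taken from the example above:

```python
# In one dimension, a 1-form is just a linear map R -> R.  Model the basis
# 1-form dt, and the transformed 1-form dx under x(t) = 2t, as functions.

def dt(a):
    """Basis 1-form dual to the coordinate t: returns the vector's t-measure."""
    return a

def dx(a):
    """Under the transform x(t) = 2t we have dx = 2 dt: measurements double."""
    return 2 * dt(a)

print(dt(3.0))  # 3.0 -- dt "eats" the vector 3 and returns its t-measure
print(dx(3.0))  # 6.0 -- the same vector measured in the x coordinate
```

The relation $dx = 2\,dt$ is visible directly: `dx` returns twice whatever `dt` returns on the same vector.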



This provides us with a way to talk about $dx$ and $dt$ meaningfully. Since $f: \mathbb{R} \to \mathbb{R}$, the expression $f(x)\,dx$ is $dx$ "eating" the vector $f(x)$ with regard to the $x$ coordinate system. Sometimes $f$ is easier to think about in a different coordinate system, so we change coordinates: $f(x)$ becomes $f(x(t))$, and $dx$ becomes $\frac{dx}{dt}\,dt$. Now $dt$ eats the vectors $f(x(t))$ in its own coordinate system.



Consider how a uniformly subdivided interval $(a,b)$ looks in a new coordinate system.
For example, $\{(0,\frac{1}{2}), (\frac{1}{2},1), (1,\frac{3}{2})\}$ in $t$ looks like $\{(0,1), (1,2), (2,3)\}$ in $x$ under the example coordinate transform $x(t)=2t$. $\frac{dx}{dt}$ tells us precisely how the intervals change under our transformation.
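This last point can be sketched directly; the transform $x(t)=2t$ and the subdivision points are the ones used in the example above:

```python
# How a uniform subdivision in t looks in x under x(t) = 2t, and how
# dx/dt = 2 accounts for the change in interval widths.
x_of_t = lambda t: 2 * t

t_points = [0.0, 0.5, 1.0, 1.5]
x_points = [x_of_t(t) for t in t_points]
print(x_points)  # [0.0, 1.0, 2.0, 3.0]

# Each t-interval of width 0.5 becomes an x-interval of width 1.0:
widths_t = [b - a for a, b in zip(t_points, t_points[1:])]
widths_x = [b - a for a, b in zip(x_points, x_points[1:])]
print([wx / wt for wx, wt in zip(widths_x, widths_t)])  # [2.0, 2.0, 2.0]
```

Every width ratio equals $\frac{dx}{dt} = 2$, which is exactly the factor that appears in the transformed integral $(**)$.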



