When using the substitution rule in integration of an integral ∫f(x)dx, one turns the integral from the form ∫f(x)dx (∗) into the form ∫f(x(t)) (dx/dt) dt (∗∗). This transformation is usually accomplished by differentiating the substitution x = x(t), so that dx/dt = dx(t)/dt. Now, at this point, one turns this into a differential form by means of magic, s.t. dx = (dx(t)/dt) dt. This substitutes the differential term dx in the original expression (∗) with the one in the transformed expression (∗∗).
I'd like to understand that magic step a bit more rigorously. It is often explained as "multiplication" by dt, which does make sense, but it does not explain the nature of differentials: when is such "multiplication" allowed? It seems there should be a more rigorous way of explaining it, perhaps by defining the "multiplication".
So, in what ways can differentials like dx and dt be formalized in this context? I've seen them compared to small numbers, which often works, but can this analogy fail? (And what prerequisites are needed to understand them?)
Answer
Here's one way:
Consider x and t to be coordinate systems on R. If we wish to change coordinate systems, we have to look at how they transform into one another. If we take t as a reference coordinate system and define the coordinate transformation by x(t) = 2t, then for any element t, the corresponding x-coordinate is twice that value.
Now, since (R, +, ⋅) is a vector space, it has a dual space R∗. Using this space, we can start defining the elements dx, dt. Specifically, dt will be a basis for R∗ if t is the basis vector for R. The elements of the dual space are called 1-forms. 1-forms in R∗ "eat" vectors in R and return a measure along that direction (in one dimension, there is only one direction). In this case you can regard elements of R∗ as "row" vectors that multiply column vectors in R (which is just the dot product of two vectors).
We can define a different basis for R and R∗ with a coordinate change. For this example, if dt eats a one-dimensional vector a, it returns a; but when dx eats a, it returns 2a in the t coordinate system. That is, dx = 2dt. For a general coordinate transform, a 1-form can be described by dx = (dx/dt) dt.
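These 1-forms can be modeled as plain linear functions on R (a toy sketch; the names dt_form and dx_form are mine): dt returns a vector's t-component, and under x(t) = 2t, dx returns twice that, so dx = 2dt holds pointwise on every vector:

```python
# 1-forms on R, represented as linear maps R -> R.
dt_form = lambda a: a                   # dt "eats" the vector a and returns a
dx_dt = 2.0                             # dx/dt for the transform x(t) = 2t
dx_form = lambda a: dx_dt * dt_form(a)  # dx = (dx/dt) dt

# dx and 2*dt agree on every vector, i.e. dx = 2 dt as 1-forms.
for a in (-1.0, 0.5, 3.0):
    assert dx_form(a) == 2.0 * dt_form(a)

print(dt_form(3.0), dx_form(3.0))  # 3.0 6.0
```

In one dimension a 1-form is determined by a single number (its value on the basis vector), which is why dx = (dx/dt) dt pins it down completely.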
This gives us a way to talk about dx and dt meaningfully. Since f : R → R, the expression f(x)dx is dx "eating" the vector f(x) with regard to the x coordinate system. Sometimes f is easier to think about in a different coordinate system, and so we wish to change it. f(x) then becomes f(x(t)) and dx becomes (dx/dt) dt. Now dt is eating the vectors f(x(t)) in its own coordinate system.
Consider how a uniform subdivision of the interval (a, b) looks in a new coordinate system.
For example, {(0, 1/2), (1/2, 1), (1, 3/2)} in t looks like {(0, 1), (1, 2), (2, 3)} in x under the example coordinate transform. dx/dt tells us precisely how the intervals change under our transformation.
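A small sketch of how that subdivision in t maps to x under x(t) = 2t (the endpoints of each interval are simply pushed through the transform):

```python
x_of_t = lambda t: 2 * t  # the example coordinate transform

t_intervals = [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5)]
x_intervals = [(x_of_t(a), x_of_t(b)) for a, b in t_intervals]

print(x_intervals)  # [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
# Each image interval is twice as wide: that stretch factor is dx/dt = 2.
```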