Tuesday 25 April 2017

Motivation for definition of logarithm in Feynman's Lectures on Physics



I'm not sure if the title is descriptive enough; feel free to change it if you come up with something better.




I've been reading through Feynman's Lectures on Physics. In the first volume, he dedicates a chapter to just math. He starts with the natural numbers and addition by 1, and builds his way to the complex numbers, with the purpose of proving Euler's formula $e^{ix} = \cos x + i \sin x$. It's a very nice read, but there's a part in the middle I'm having trouble understanding.



After having introduced irrational numbers, he begins to explain how to calculate (or define, rather) irrational powers as successive approximations of rational powers, and how to calculate logarithms, which is a related problem. In particular, he gives as an example how to find solutions to the equations $x = 10^{\sqrt{2}}$ and $x = \log_{10} 2$. To do that, he makes a table of successive square roots of ten, calculating $10^s$ for $s = 1, \frac1{2}, \frac1{4}, \frac1{8}, \cdots , \frac1{1024}$. He remarks that this is enough to calculate some logarithms, because if we have already calculated $10^s$ and we want its logarithm, it is simply $s$.
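As a quick numerical check (a sketch of my own, not part of Feynman's text), the table of successive square roots can be reproduced directly:

```python
# Feynman's table: 10^s for s = 1, 1/2, 1/4, ..., 1/1024,
# obtained in the book by repeatedly taking square roots of 10.
s = 1.0
for _ in range(11):
    power = 10 ** s
    print(f"s = {s:>12.10f}   10^s = {power:.7f}")  # log10 of the right column is s
    s /= 2
```

The last row gives $10^{1/1024} \approx 1.0022511$, the value quoted below.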



He also notices that as we take square roots of numbers that get closer and closer to $1$, there is a pattern: $\sqrt{1+\delta}$ is approximately $1+\delta/2$. So, for numbers that are already close to $1$ (such as $10^{1/1024}$, which is the last square root he calculated), instead of keeping on taking square roots we can just "guess" at the result with pretty good accuracy.
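To see how good the guess is (again my own check, assuming $\delta$ is the excess of $10^{1/1024}$ over $1$):

```python
# Compare sqrt(1 + delta) with the guess 1 + delta/2 for a small delta.
delta = 10 ** (1 / 1024) - 1        # ~ 0.0022511, the excess of 10^(1/1024)
exact = (1 + delta) ** 0.5
guess = 1 + delta / 2
print(exact, guess, abs(exact - guess))  # they agree to about 6 decimal places
```

The error is of order $\delta^2/8$, which is why the trick works so well once the numbers are near $1$.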



Now, here's the part I don't understand: Having calculated $10^{1/1024}$ to be approximately $1.0022511$, he says the following:




[...] it is clear that, to an excellent approximation, if we take another root, we shall get 1.00112 something, and rather than actually take all the square roots, we guess at the ultimate limit. When we take a small fraction $\Delta$ of $1024$ as $\Delta$ approaches zero, what will the answer be? Of course it will be some number close to $0.0022511 \Delta$. Not exactly $0.0022511 \Delta$, however -- we can get a better value by the following trick: we subtract the $1$, and then divide by the power $s$. This ought to correct all the excesses to the same value.




He then adds another column: for each $s$, in addition to $10^s$ there's $\frac{10^s-1}{s}$, and it looks like this converges to something as $s$ gets smaller. I recognize this as one of the usual formulas for the logarithm, but I don't follow why he introduced it. Later he uses this to make an approximation formula: $10^\Delta \approx 1+\Delta \ln 10$. I understand this, but I don't get where he got that from. Could someone clarify this?




I wasn't sure about asking this question because I thought it might be hard to understand if you've never read this chapter. If this is the case, let me know and I'll try to edit it a bit.


Answer



I'm not surprised you didn't understand this; it's rather uncharacteristically badly written. It should say "When we take a small fraction $\Delta$ of $1/1024$", not $1024$. The idea is that, since when we halve the exponent the distance from $1$ is roughly halved, the distance from $1$ (which he calls the "excess") is roughly proportional to the exponent, so $10^s\approx1+\alpha s$ for some $\alpha$. The "excess" is $\alpha s$, and by dividing through by $s$ he gets increasingly accurate approximations of $\alpha\approx(10^s-1)/s$.



The reason for $\alpha=\log10$ is that $10^s=\mathrm e^{s\log10}\approx1+s\log10$ for small $s$.
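This is easy to verify numerically (a sketch of my own, not from the book): the column $(10^s-1)/s$ settles on $\ln 10 \approx 2.302585$ as $s$ is halved.

```python
import math

# The ratio (10^s - 1)/s for s = 1, 1/2, 1/4, ... approaches alpha = ln 10.
s = 1.0
for _ in range(20):
    print(f"s = {s:.6e}   (10^s - 1)/s = {(10 ** s - 1) / s:.6f}")
    s /= 2
print("ln 10 =", math.log(10))
```

Each halving of $s$ roughly halves the remaining error, which is exactly the "correcting the excesses" Feynman describes.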

