Sunday 30 September 2018

multivariable calculus - What does it mean for a partial derivative to exist but not be continuous?


What does it mean for a partial derivative to exist but not be continuous?




Continuous partial derivatives imply differentiability, and differentiability implies both continuity of a function and the existence of its partial derivatives. I also understand that the reverse implications are false.

calculus - Differentiable function satisfying $f(x+a) = bf(x)$ for all $x$




This is an exercise from Apostol Calculus, (Exercise 10 on page 269).




What can you conclude about a function which has derivative everywhere and satisfies an equation of the form
$$ f(x+a) = bf(x) $$
for all $x$, where $a$ and $b$ are positive constants?




The answer in the back of the book suggests that we should conclude $f(x) = b^{x/a} g(x)$ where $g(x)$ is a periodic function with period $a$. I'm not sure how to arrive at this.




One initial step is to say, by induction,
$$ f(x+a) = bf(x) \implies f(x+na) = b^n f(x)$$
for all $x$. I'm not sure what to do with this though. I'm also not clear how to use the differentiability of $f$. (If I write down the limit definition of the derivative then I end up with a term $f(x+h)$, but I cannot use the functional equation on that since the functional equation is for a fixed constant $a$.)


Answer



One trivial solution that doesn't use the differentiability of $ f(x) $:



From $ f(x+na)=b^nf(x) $, letting $ y=x+na $, and requiring that $ x \in [0,a) $ and $ n = \left\lfloor \frac{y}{a} \right\rfloor $ we get the following equivalent definition of $ f $:



$$ f(y)=b^{\frac{y-\left(y-\left\lfloor \frac{y}{a} \right\rfloor a\right)}{a}}f\left (y- \left\lfloor \frac{y}{a} \right\rfloor a \right) $$




Letting $ g(y)=b^{-\frac{\left( y-\left\lfloor \frac{y}{a} \right\rfloor a\right)}{a}}f\left(y-\left\lfloor \frac{y}{a} \right\rfloor a \right) $, and noting that $ g $ is periodic with period $ a $, we get:



$$ f(y)=b^{\frac{y}{a}}g(y) $$
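
A quick numerical sanity check of this conclusion (a Python sketch, not part of the original answer; the constants $a=2$, $b=3$ and the periodic choice $g(y)=\sin(2\pi y/a)$ are arbitrary illustrations):

    import math

    a, b = 2.0, 3.0  # hypothetical positive constants

    def f(y):
        # any function of the form b**(y/a) * (a-periodic factor)
        return b ** (y / a) * math.sin(2 * math.pi * y / a)

    for y in [0.3, 1.7, 4.2]:
        print(f(y + a), b * f(y))  # the two columns agree, so f(y+a) = b*f(y)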


Very basic limit yet I cannot find the answer and the explanation


$$\lim\limits_{n \to \infty} \sqrt[n+1]{n+1}$$




This question looks pretty easy and basic to me, yet I can't find an answer. I am pretty sure the answer is 1, but I'm not sure of the reason.



I think that it can be written as $(n+1)^{\frac1{n+1}}$, but $\dfrac1{n+1}$ should be $0$ (because $n\to\infty$) and that would be the reason, but I'm not really sure.




Can anyone give me a proper explanation on this limit and its result?
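
A sketch of the usual argument (not part of the original post): write $(n+1)^{\frac1{n+1}} = e^{\ln(n+1)/(n+1)}$; since $\frac{\ln(n+1)}{n+1}\to 0$, the limit is $e^{0}=1$. The exponent alone tending to $0$ is not a proof, because the base grows at the same time, which is why the rewriting matters. A quick numerical check in Python:

    for n in [10, 100, 1000, 10**6]:
        print((n + 1) ** (1 / (n + 1)))
    # roughly 1.2436, 1.0468, 1.0069, 1.0000: the values approach 1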

Saturday 29 September 2018

Many other solutions of Cauchy's Functional Equation



Reading about Cauchy's functional equation on Wikipedia, it is said that




On the other hand, if no further conditions are imposed on f, then (assuming the axiom of choice) there are infinitely many other functions that satisfy the equation. This was proved in 1905 by Georg Hamel using Hamel bases. Such functions are sometimes called Hamel functions.





Could anyone give a more explicit explanation of these many other solutions?



Besides the trivial solution of the form $f(x)=C x$, where $C$ is a constant, and the solution above constructed by the Hamel Basis, are there any more solutions existing?


Answer



All solutions are "Hamel basis" solutions. Any solution of the functional equation is $\mathbb{Q}$-linear. Let $H$ be a Hamel basis. Given a solution $f$ of the functional equation, let $g(b)=f(b)$ for every $b\in H$, and extend by $\mathbb{Q}$-linearity. Then $f(x)=g(x)$ for all $x$.



There are lots of solutions because a Hamel basis has cardinality the cardinality $c$ of the continuum. A solution can assign arbitrary values to elements of the basis, and be extended to $\mathbb{R}$ by $\mathbb{Q}$-linearity. So there are $c^c$ solutions. There are only $c$ linear solutions, so "most" solutions of the functional equation are non-linear.
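
For a concrete non-linear example (a sketch, not part of the original answer): choose a Hamel basis $H$ containing $1$ and $\sqrt2$, set $g(1)=1$ and $g(b)=0$ for every other $b\in H$, and extend by $\mathbb{Q}$-linearity. Then $g(x+y)=g(x)+g(y)$ for all real $x,y$, but $g$ cannot equal any $Cx$: $g(1)=1$ forces $C=1$, while $g(\sqrt2)=0\neq\sqrt2$.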


continuity - Let $f$ be a continuous and positive function on $\mathbb{R}_{+}$ such that $\lim_{x \to \infty}\ldots$



Let $f$ be a continuous and positive function on $\mathbb{R}_{+}$ such that $\displaystyle\underset{x \to \infty}{\lim} \frac{f(x)}{x} <1$.
Prove the equation $$f(x)=x$$ has at least one solution on $\mathbb{R}_{+}$.


Answer



If $f(0)=0$, we are done. If not, let $g(x)=f(x)-x$; then $g(0)>0$. Since $\lim_{x\to\infty}f(x)/x<1$, there exists $x>0$ such that $f(x)/x<1$, which implies $g(x)=f(x)-x<0$. Now apply the IVT to $g$ on $[0,x]$.


geometry - The staircase paradox, or why $\pi\ne4$



What is wrong with this proof?






Is $\pi=4?$


Answer



This question is usually posed as the length of the diagonal of a unit square. You start going from one corner to the opposite one following the perimeter and observe the length is $2$, then take shorter and shorter stair-steps and the length is $2$ but your path approaches the diagonal. So $\sqrt{2}=2$.



In both cases, you are approaching the area but not the path length. You can make this more rigorous by breaking into increments and following the proof of the Riemann sum. The difference in area between the two curves goes nicely to zero, but the difference in arc length stays constant.



Edit: making the square more explicit. Imagine dividing the diagonal into $n$ segments and a stairstep approximation. Each triangle is $(\frac{1}{n},\frac{1}{n},\frac{\sqrt{2}}{n})$. So the area between the stairsteps and the diagonal is $n \frac{1}{2n^2}$ which converges to $0$. The path length is $n \frac{2}{n}$, which converges even more nicely to $2$.
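
A small numeric illustration of that last paragraph (a Python sketch, not part of the original answer):

    # n stairsteps against the diagonal of the unit square
    for n in [1, 10, 100, 1000]:
        area_gap = n * 1 / (2 * n**2)  # n triangles, each of area 1/(2n^2)
        path_len = n * 2 / n           # n steps, each contributing length 2/n
        print(n, area_gap, path_len)   # the area gap -> 0, the length stays 2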


calculus - How to show that $\int\limits_1^{\infty} \frac{1}{x}\,dx$ diverges (without using the harmonic series)?



I was reading up on the harmonic series,
$H=\sum\limits_{n=1}^\infty \frac{1}{n}$, on Wikipedia, and it's divergent, as can be shown by a comparison test using the fact that




$H=1+\frac{1}{2}+(\frac{1}{3}+\frac{1}{4})+(\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8})+...\geq 1+\frac{1}{2}+(\frac{1}{4}+\frac{1}{4})+(\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8})+...=1+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+...,$ where the expression on the right clearly diverges.



But after this proof idea was given, the proof idea using the integral test was given. I understand why $H_n=\sum_{k=1}^n \frac{1}{k}\geq \int_1^n \frac{dx}{x}$, but how is it shown that $\int_1^\infty \frac{dx}{x}$ is divergent without using the harmonic series in the following way:
$H_n-1\leq \int_1^n \frac{dx}{x} \leq H_n$, and then using this in the following way, by comparison test:



$\lim_{n\rightarrow\infty}H_n=\infty\Rightarrow\lim_{n\rightarrow\infty}(H_n-1)=\infty\Rightarrow\lim_{n\rightarrow\infty}\int_1^n \frac{dx}{x}=\infty$.



So to summarize, is there a way to prove that $\int_1^\infty \frac{dx}{x}$ diverges without using the fact that $H$ diverges?


Answer




Let $x = y/2.$ Then



$$\int_1^\infty\frac{dx}{x} = \int_2^\infty\frac{dy}{y}.$$



That is a contradiction unless both integrals equal $\infty$: if the common value were a finite number $L$, then splitting the first integral at $2$ would give $L = \int_1^2\frac{dx}{x} + \int_2^\infty\frac{dx}{x} = \log 2 + L$, which is impossible.


Friday 28 September 2018

convergence divergence - What is the $I_{0}(x)$ function?



While trying to calculate the following infinite sum:



$$ \sum_{k=0}^{\infty} \frac {4^k}{(k!)^{2}}$$



I got the result: $I_{0}(4) = 11.301...$



I've never encountered this function before ($ I_{0}(x) $), can someone please describe it and explain why the above infinite sum converges to an output of this function?




I expected something having to do with the exponential function since $$ \sum_{k=0}^{\infty} \frac {\mu^k}{k!} = e^\mu $$


Answer



The modified Bessel function of the first kind has a power series expansion
$$ I_{\alpha}(x)=\sum_{k=0}^{\infty}\frac{1}{k!\Gamma(k+\alpha+1)}\Big(\frac{x}{2}\Big)^{2k+\alpha} $$



Taking $\alpha=0$ and using $\Gamma(k+1)=k!$, and then setting $x=4$, we get
$$ I_0(4)=\sum_{k=0}^{\infty}\frac{4^k}{(k!)^2} $$
which is your sum.
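
A quick numerical check (a Python sketch, not part of the original answer; the scipy comparison assumes scipy is installed):

    import math

    s = sum(4**k / math.factorial(k) ** 2 for k in range(40))
    print(s)  # 11.3019219521...

    # optional cross-check against the Bessel function itself:
    # from scipy.special import iv
    # print(iv(0, 4))  # same value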


elementary number theory - How can you prove that the square root of two is irrational?



I have read a few proofs that $\sqrt{2}$ is irrational.



I have never, however, been able to really grasp what they were talking about.




Is there a simplified proof that $\sqrt{2}$ is irrational?


Answer



You use a proof by contradiction. Basically, you suppose that $\sqrt{2}$ can be written as $p/q$. Then you know that $2q^2 = p^2$. However, both $q^2$ and $p^2$ have an even number of factors of two, so $2q^2$ has an odd number of factors of 2, which means it can't be equal to $p^2$.


probability - A basic game of "roll-off", finding probabilities of events



A and B play a "roll-off". The rules of the game are simple: they'll each roll the dice (which is a standard, fair, six-sided dice with faces numbered 1 to 6), and whoever obtains the higher number wins. In the event of a tie, they'll both roll again, until there's a winner.






a. What is the probability that B wins the game on the first roll? (Give your answer as a fraction in its lowest terms.)







Here we want whatever player B gets to be greater than whatever player A gets. So we want $\Pr(B>A)$. How do I find this, though?






b. What is the probability that A wins the game, but not on the first roll? (Give your answer as a fraction in its lowest terms.)







Here I want $\Pr(A=B)\cdot\Pr(A>B)$, but again I'm not really sure how to find it.






c. Suppose A rolls first, and gets a '2'. At that moment, what is the probability that B will win the game? (Give your answer as a fraction in its lowest terms.)






Here I want to find $\Pr(B \text{ wins} \mid A = 2)$ and, you guessed it, I don't really know how to.






Answer



(a) In this case you can just enumerate the outcomes $(B,A)$ with $B>A$:
$$(2,1),(3,1),(4,1), (5,1), (6,1), \ldots $$
There are $15$ outcomes with $B>A$ and $36$ possible outcomes from rolling two dice, so the probability is $$\frac{15}{36} = \frac 5{12}.$$



(b) It's clear by symmetry that both players have an equal chance to win. So this probability is $1/2$ times the probability that the first roll is a tie (since the first roll must be a tie in order for $A$ to win but not on the first roll). So the probability is $$\frac12\cdot\frac16=\frac1{12}.$$



(c) The probability that $B$ wins on the first roll, given that $A$ rolled a 2, would be $4/6=2/3$ (since $B$ could roll $3$, $4$, $5$, or $6$). We also need to add to this $1/2$ times the probability that the first roll is a tie (which is $1/6$, i.e. when $B$ rolls a $2$ on the first roll). So the probability is $$\frac23 + \frac12\cdot\frac16=\frac34.$$
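
All three answers can be confirmed by brute force (a Python sketch, not part of the original answer):

    from fractions import Fraction
    from itertools import product

    rolls = list(product(range(1, 7), repeat=2))  # all 36 (A, B) outcomes

    # (a) P(B beats A on the first roll)
    print(Fraction(sum(b > a for a, b in rolls), 36))  # 5/12

    # (b) P(first roll ties) * P(A eventually wins) = 1/6 * 1/2
    print(Fraction(sum(a == b for a, b in rolls), 36) * Fraction(1, 2))  # 1/12

    # (c) P(B wins | A rolled 2) = P(B > 2) + P(B = 2) * 1/2
    print(Fraction(4, 6) + Fraction(1, 6) * Fraction(1, 2))  # 3/4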



calculus - Convergence of $\sum\frac{|\cos n|}{n\log n}$



I wonder if the series $$\sum_{n=1}^\infty\frac{|\cos n|}{n\log n}$$ converges.



I tried applying the condensation test, getting $$\sum\frac{2^n|\cos 2^n|}{2^n\log{2^n}}=\sum\frac{|\cos 2^n|}{n\log 2}$$ but I don't know how to show whether it converges.



Am I on the right track?


Answer



Note that




$$|\cos n| \geqslant \cos^2 n = \frac1{2} + \frac1{2}\cos(2n).$$



Hence



$$\sum_{k=2}^n \frac{|\cos k|}{k \log k}\geqslant \frac1{2}\sum_{k=2}^n \frac{1}{k \log k}+\frac1{2}\sum_{k=2}^n \frac{\cos(2k)}{k \log k}.$$



The series on the LHS diverges because the first sum on the RHS diverges by the integral test and the second sum converges by Dirichlet's test.


Thursday 27 September 2018

sequences and series - Does $\sum_{n=1}^\infty \frac{(-1)^n}{n^{1+\frac{1}{n}}}$ converge?



I want to use the alternating series test here, but I've just been told that it won't work because it's not monotonically decreasing.



However, if the alternating harmonic series converges then don't we have for $\sum_{n=1}^\infty \frac{(-1)^n}{n^{1+\frac{1}{n}}}$ that

$$ \lim_{n \to \infty} \frac{1}{n^{1+\frac{1}{n}}} = 0$$
since
$$\lim_{n \to \infty} \frac{1}{n^{1+\frac{1}{n}}} < \lim_{n \to \infty} \frac{1}{n} = 0.$$



Can someone point out where the mistake here is?


Answer



To show that it is monotonically decreasing one should show that:
$$\frac{1}{n^{1+\frac{1}{n}}} > \frac{1}{(n+1)^{1+\frac{1}{n+1}}}.$$
This is equivalent to showing that:
$$\frac{n+1}{n} > \frac{n^\frac{1}{n}}{(n+1)^\frac{1}{n+1}},$$

which is the same as
$$(1+\frac{1}{n})^n > \frac{n}{n+1}\cdot (n+1)^\frac{1}{n+1}.$$



For sufficiently large values of $n$, this must be the case, as the limit of the LHS is just $e$ and the one of the RHS is $1$.


independence - roots of prime numbers are linearly independent over $\mathbb{Q}$

How can we prove by mathematical induction that $1,\sqrt{2}, \sqrt{3}, \sqrt{5},\ldots, \sqrt{p_n}$ ($p_n$ is the $n^{\rm th}$ prime number) are linearly independent over the rational numbers ?



$\underline{\text{base case (n=1)}}$: $1,\sqrt{2}$ are linearly independent over the field $\mathbb{Q}$; otherwise $a\cdot1+b\sqrt{2}=0$ with $b\neq0$ would give $\sqrt{2}=-\frac{a}{b}\in\mathbb{Q}$, which is absurd.



Then I am stuck.

number theory - How to show $x^{p-2}+\cdots+x^2+x+1\equiv0\pmod{p}$ has $p-2$ incongruent solutions: $2, 3,\ldots, p-1$, when $p$ is an odd prime?

Let $p$ be an odd prime. How would I prove that the congruence $x^{p-2}+\cdots+x^2+x+1\equiv0\pmod{p}$ has exactly $p-2$ incongruent solutions, and they are the integers $2, 3,..., p-1?$



I found this question posted, but the answer doesn't quite make sense to me.



Here is my attempt:




Proof
As $p$ is an odd prime, by Fermat's Theorem,
\begin{align*}
x^{p-1}&\equiv1\pmod{p}\\
&\implies x^{p-1}-1\equiv0\pmod{p}
\end{align*}

and by Lagrange's Corollary, with $d=p-1, p-1\mid p-1$, this congruence has exactly $p-1$ solutions.
\begin{align*}
x^{p-1}-1 &\equiv 0 \pmod{p}\\
(x-1)(x^{p-2}+\cdots+x^2+x+1) &\equiv 0 \pmod{p}
\end{align*}

Assume $x^{p-2}+\cdots+x^2+x+1\equiv0\pmod{p}$...



And then I get stuck.



The proof I linked above used:




Now, $x^{p-1}-1\equiv 0\pmod{p}$ has exactly $p-1$ incongruent solutions modulo $p$ by Lagrange's Theorem.




Note that $g(1)=(1-1)f(1)=0\equiv 0\pmod{p}$, so $1$ is a root of $g(x)$ modulo $p$. Hence, the incongruent roots of $g(x)$ modulo $p$ are $1,2,3,\dots,p-1$.



But every root of $g(x)$ other than $1$ is also a root of $f(x)$ (This is the part I'm concerned about. Is it clear that this is the case?), hence $f(x)$ has exactly $p-2$ incongruent roots modulo $p$, which are $2,3,\dots,p-1$. $\blacksquare$




But I don't understand the part about roots. Nor do I quite get the use of creating $g(x)$ and $f(x)$. Could someone explain the proof using a different method?

calculus - A limit problem $\lim\limits_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$



This is a problem from "A Course of Pure Mathematics" by G H Hardy. Find the limit $$\lim_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$$ I had solved it long back (solution presented in my blog here) but I had to use the L'Hospital's Rule (another alternative is Taylor's series). This problem is given in an introductory chapter on limits and the concept of Taylor series or L'Hospital's rule is provided in a later chapter in the same book. So I am damn sure that there is a mechanism to evaluate this limit by simpler methods involving basic algebraic and trigonometric manipulations and use of limit $$\lim_{x \to 0}\frac{\sin x}{x} = 1$$ but I have not been able to find such a solution till now. If someone has any ideas in this direction please help me out.



PS: The answer is $1/18$ and can be easily verified by a calculator by putting $x = 0.01$


Answer




Preliminary Results:



We will use
$$
\begin{align}
\frac{\color{#C00000}{\sin(2x)-2\sin(x)}}{\color{#00A000}{\tan(2x)-2\tan(x)}}
&=\color{#C00000}{2\sin(x)(\cos(x)-1)}\cdot\frac{\color{#00A000}{1-\tan^2(x)}}{\color{#00A000}{2\tan^3(x)}}\\
&=\hphantom{\sin}\frac{-2\sin^3(x)}{\cos(x)+1}\hphantom{\sin}\frac{\cos(x)\cos(2x)}{2\sin^3(x)}\\
&=-\frac{\cos(x)\cos(2x)}{\cos(x)+1}\tag{1}
\end{align}
$$
Therefore,
$$
\lim_{x\to0}\frac{\sin(x)-2\sin(x/2)}{\tan(x)-2\tan(x/2)}=-\frac12\tag{2}
$$
Thus, given an $\epsilon\gt0$, we can find a $\delta\gt0$ so that if $|x|\le\delta$
$$
\left|\,\frac{\sin(x)-2\sin(x/2)}{\tan(x)-2\tan(x/2)}+\frac12\,\right|\le\epsilon\tag{3}
$$
Because $\,\displaystyle\lim_{x\to0}\frac{\sin(x)}{x}=\lim_{x\to0}\frac{\tan(x)}{x}=1$, we have

$$
\sin(x)-x=\sum_{k=0}^\infty2^k\sin(x/2^k)-2^{k+1}\sin(x/2^{k+1})\tag{4}
$$
and
$$
\tan(x)-x=\sum_{k=0}^\infty2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1})\tag{5}
$$
By $(3)$, each term of $(4)$ is between $-\frac12-\epsilon$ and $-\frac12+\epsilon$ times the corresponding term of $(5)$. Therefore,
$$
\left|\,\frac{\sin(x)-x}{\tan(x)-x}+\frac12\,\right|\le\epsilon\tag{6}
$$
Thus,
$$
\lim_{x\to0}\,\frac{\sin(x)-x}{\tan(x)-x}=-\frac12\tag{7}
$$
Furthermore,
$$
\begin{align}
\frac{\tan(x)-\sin(x)}{x^3}
&=\tan(x)(1-\cos(x))\frac1{x^3}\\
&=\frac{\sin(x)}{\cos(x)}\frac{\sin^2(x)}{1+\cos(x)}\frac1{x^3}\\
&=\frac1{\cos(x)(1+\cos(x))}\left(\frac{\sin(x)}{x}\right)^3\tag{8}
\end{align}
$$
Therefore,
$$
\lim_{x\to0}\frac{\tan(x)-\sin(x)}{x^3}=\frac12\tag{9}
$$
Combining $(7)$ and $(9)$ yields
$$
\lim_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16\tag{10}
$$
Additionally,
$$
\frac{\sin(A)-\sin(B)}{\sin(A-B)}
=\frac{\cos\left(\frac{A+B}{2}\right)}{\cos\left(\frac{A-B}{2}\right)}
=1-\frac{2\sin\left(\frac{A}{2}\right)\sin\left(\frac{B}{2}\right)}{\cos\left(\frac{A-B}{2}\right)}\tag{11}
$$







Finishing Up:
$$
\begin{align}
&x\sin(\sin(x))-\sin^2(x)\\
&=[\color{#C00000}{(x-\sin(x))+\sin(x)}][\color{#00A000}{(\sin(\sin(x))-\sin(x))+\sin(x)}]-\sin^2(x)\\
&=\color{#C00000}{(x-\sin(x))}\color{#00A000}{(\sin(\sin(x))-\sin(x))}\\
&+\color{#C00000}{(x-\sin(x))}\color{#00A000}{\sin(x)}\\
&+\color{#C00000}{\sin(x)}\color{#00A000}{(\sin(\sin(x))-\sin(x))}\\
&=(x-\sin(x))(\sin(\sin(x))-\sin(x))+\sin(x)(x-2\sin(x)+\sin(\sin(x)))\tag{12}
\end{align}
$$
Using $(10)$, we get that
$$
\begin{align}
&\lim_{x\to0}\frac{(x-\sin(x))(\sin(\sin(x))-\sin(x))}{x^6}\\
&=\lim_{x\to0}\frac{x-\sin(x)}{x^3}\lim_{x\to0}\frac{\sin(\sin(x))-\sin(x)}{\sin^3(x)}\lim_{x\to0}\left(\frac{\sin(x)}{x}\right)^3\\
&=\frac16\cdot\frac{-1}6\cdot1\\
&=-\frac1{36}\tag{13}
\end{align}
$$
and with $(10)$ and $(11)$, we have
$$
\begin{align}
&\lim_{x\to0}\frac{\sin(x)(x-2\sin(x)+\sin(\sin(x)))}{x^6}\\
&=\lim_{x\to0}\frac{\sin(x)}{x}\lim_{x\to0}\frac{x-2\sin(x)+\sin(\sin(x))}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-(\sin(x)-\sin(\sin(x))}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-\sin(x-\sin(x))\left(1-\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{\cos\left(\frac{x-\sin(x)}{2}\right)}\right)}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-\sin(x-\sin(x))+\sin(x-\sin(x))\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{\cos\left(\frac{x-\sin(x)}{2}\right)}}{x^5}\\
&=\lim_{x\to0}\frac{\sin(x-\sin(x))}{x^3}\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{x^2}\\[6pt]
&=\frac16\cdot\frac12\\[6pt]
&=\frac1{12}\tag{14}
\end{align}
$$
Adding $(13)$ and $(14)$ gives
$$
\color{#C00000}{\lim_{x\to0}\frac{x\sin(\sin(x))-\sin^2(x)}{x^6}=\frac1{18}}\tag{15}
$$







Added Explanation for the Derivation of $(6)$



The explanation below works for $x\gt0$; for $x\lt0$, just reverse the red inequalities.



Assume that $x\color{#C00000}{\gt}0$ and $|x|\lt\pi/2$. Then $\tan(x)-2\tan(x/2)\color{#C00000}{\gt}0$.

$(3)$ is equivalent to
$$
\begin{align}
&(-1/2-\epsilon)(\tan(x)-2\tan(x/2))\\[4pt]
\color{#C00000}{\le}&\sin(x)-2\sin(x/2)\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(\tan(x)-2\tan(x/2))\tag{16}
\end{align}
$$
for all $|x|\lt\delta$. Thus, for $k\ge0$,
$$
\begin{align}
&(-1/2-\epsilon)(2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1}))\\[4pt]
\color{#C00000}{\le}&2^k\sin(x/2^k)-2^{k+1}\sin(x/2^{k+1})\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1}))\tag{17}
\end{align}
$$
Summing $(17)$ from $k=0$ to $\infty$ yields
$$
\begin{align}
&(-1/2-\epsilon)\left(\tan(x)-\lim_{k\to\infty}2^k\tan(x/2^k)\right)\\[4pt]
\color{#C00000}{\le}&\sin(x)-\lim_{k\to\infty}2^k\sin(x/2^k)\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)\left(\tan(x)-\lim_{k\to\infty}2^k\tan(x/2^k)\right)\tag{18}
\end{align}
$$

Since $\lim\limits_{k\to\infty}2^k\tan(x/2^k)=\lim\limits_{k\to\infty}2^k\sin(x/2^k)=x$, $(18)$ says
$$
\begin{align}
&(-1/2-\epsilon)(\tan(x)-x)\\[4pt]
\color{#C00000}{\le}&\sin(x)-x\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(\tan(x)-x)\tag{19}
\end{align}
$$
which, since $\epsilon$ is arbitrary, is equivalent to $(6)$.


real analysis - A variant of $\lim_{n \to \infty }\underbrace{\sin \sin \dots\sin(t)}_{\text{$n$ compositions}}$



In this question





it's proved that $$\lim_{n \to \infty }\underbrace{\sin \sin \dots\sin(t)}_{\text{$n$ compositions}}, t\in \mathbb {R}$$ converges to the null-function.




In this case we can't use Did's argument; what happens for $$\lim_{n \to \infty }\underbrace{\sin (2\pi\sin( \dots2\pi\sin(t)\dots))}_{\text{$n$ compositions}}?$$


Answer



Behavior of sequences like these falls into the field of mathematics called dynamical systems. A dynamical system consists of a metric space $X$ and a continuous function $f:X\to X$. Here, $X=\mathbb R$, and $f(x)=2\pi\sin(x)$. One of the questions asked by those studying a dynamical system is what is the behavior of sequences of the form $\{f(x),f(f(x)),f(f(f(x))),...\}$, which is exactly what you are curious about. In this field, we denote the function composed with itself $n$ times by $f^n(x)$. Some dynamical systems are simple and predictable. Others are complicated and chaotic. This is one of the latter types. To demonstrate this, here are some plots:



[Plot of $f(x)$ omitted.]

[Plot of $f^2(x)$ omitted.]

[Plot of $f^3(x)$ omitted.]

[Plot of $f^5(x)$ omitted.]

[Plot of $f^8(x)$ omitted.]



This is chaos$^\ast$! Chaos is, in my opinion, the coolest thing about dynamical systems. Chaos means that even if $x$ and $y$ are close, $f^n(x)$ and $f^n(y)$ may be far apart. Chaos means that it is very difficult to predict how the system will behave in the long term. And chaos means that your sequence certainly doesn't converge to anything.
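
To see the sensitive dependence numerically (a Python sketch, not part of the original answer):

    import math

    x, y = 0.5, 0.5 + 1e-7  # two starting points that differ by 10^-7
    for n in range(1, 16):
        x = 2 * math.pi * math.sin(x)  # one application of f to each orbit
        y = 2 * math.pi * math.sin(y)
        print(n, abs(x - y))  # the gap rapidly grows to order 1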



Some questions that would be interesting to answer are:





  • What does the set of all points $x$ such that $0\leq f^n(x)\leq 1$ for all $n$ look like?

  • Are there any periodic points, points such that $f^n(x)=x$ for some $n$? (the answer to this is yes). What does the set of periodic points look like?

  • If we change $f$ to $f(x)=a\sin(x)$, for $1\leq a\leq 2\pi$, how does the system change? We know if $a=1$, the system is simple, so somewhere, the behavior changes.



$\ast$ It is almost certainly chaotic. We would need a proof to know for sure.


Wednesday 26 September 2018

complex analysis - Do the polynomials $(1+z/n)^n$ converge compactly to $e^z$ on $\mathbb{C}$?




The question is




Do the polynomials $p_n(x)=(1+z/n)^n$ converge compactly (or uniformly on compact subsets) to $e^z$ on $\mathbb{C}$?




I thought about expanding
$$p_n(z)=\sum_{k=0}^n a_k^{(n)}z^k$$
where

$$a_k^{(n)}=\binom{n}{k}\frac{1}{n^k}=\frac{1}{k!}\prod_{j=0}^{k-1}\left(1-\frac{j}{n}\right)$$
and trying to show that $\frac{1}{k!}-a_k^{(n)}$ decreases sufficiently fast on any closed ball. That is, I tried to show
$$\lim_{n\rightarrow\infty}\max_{z\in\overline{B_0(A)}}\left|\sum_{k=0}^n\frac{z^k}{k!}-p_n(z)\right|=0$$
for any fixed $A>0$, but I had difficulty with this approach.



Any help is appreciated.


Answer



You can use following steps.





  1. For $a, b \in \mathbb C$ and $k \in \mathbb N$ you have $$\vert a^k -b^k \vert =\vert a-b \vert \vert a^{k-1}+b a^{k-2}+\dots+b^{k-1}\vert\le \vert a - b \vert k m^{k-1} \tag{1}$$ where $m = \max (\vert a \vert, \vert b \vert)$

  2. For $u \in \mathbb C$ you have $$\left\vert e^u-(1+u) \right\vert \le \sum_{k=2}^{+\infty} \frac{\vert u \vert^k}{k!} \le \vert u \vert^2 \sum_{k=0}^{+\infty} \frac{\vert u \vert^k}{k!}=\vert u \vert^2 e^{\vert u \vert} \tag{2}$$

  3. Now taking $a=e^u,b=1+u$, we get $m=\max(\vert e^u \vert,\vert 1+u \vert) \le \max(e^{\vert u \vert},1+\vert u \vert) \le e^{\vert u \vert}$. For $k \ge 1$ applying (1) and (2) successively, we get $$\left\vert e^{ku} -(1+u)^k\right\vert \leq\frac{\vert k u \vert^2 e^{\vert ku \vert}}{k} \tag{3}$$

  4. Finally for $z \in \mathbb{C}$ and denoting $u=\frac{z}{n}$ and $k=n$, we obtain using (3) $$\left\vert e^z -\left(1+\frac{z}{n}\right)^n \right\vert \le \frac{\vert z \vert^2 e^{\vert z \vert}}{n} \tag{4}$$

  5. For $K \subset \mathbb C$ compact, one can find $M > 0$ such that $M \ge \sup\limits_{z \in K} \vert z \vert$ which implies $$\sup\limits_{ z \in K} \left\vert e^z -\left(1+\frac{z}{n}\right)^n \right\vert \le \frac{M^2 e^{M}}{n} \tag{5}$$ proving that $(p_n)$ converges uniformly to $e^z$ on every compact subset of $\mathbb C$.
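
A numerical check of the final bound $(5)$ (a Python sketch, not part of the original answer; $n=100$ and $M=2$ are arbitrary choices):

    import numpy as np

    n, M = 100, 2.0
    z = M * np.exp(1j * np.linspace(0.0, 2 * np.pi, 400))  # points on |z| = M
    observed = np.abs(np.exp(z) - (1 + z / n) ** n).max()
    print(observed, M**2 * np.exp(M) / n)  # the observed error sits below the bound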


Tuesday 25 September 2018

linear algebra - The determinant for n real eigenvalues



When the problem says "repeated according to multiplicities" does that mean there is one value of lambda that is repeated $n$ times? I'm confused on why the determinant of $A$ must be the product of the $n$ eigenvalues of $A$. Couldn't lambda be any value? I only see how the determinant of $A$ would equal the $n$ eigenvalues if lambda is 0. Why is this result true when complex eigenvalues are considered?




Prove that the determinant of an $n \times n$ matrix $A$ is the product of the eigenvalues (counted according to their algebraic multiplicities). Hint: Write the characteristic polynomial as $p(\lambda) = (\lambda_1 - \lambda)(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda)$.




Solution: If the eigenvalues of $A$ are $\lambda_1, \ldots, \lambda_n$ (counted with algebraic multiplicity), then as the hint says, the characteristic polynomial of $A$ is $\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda)$. Plugging in $\lambda = 0$ yields $\det A = \lambda_1\lambda_2\cdots\lambda_n$.



Answer



Repeated according to multiplicity




This simply means that we want a copy of $(\lambda_r-\lambda)$ for each time that $\lambda_r$ pops up as an eigenvalue. For an example, consider $A=\begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix}$. $3$ is an eigenvalue with (algebraic) multiplicity two. We want $\det(A-\lambda I) = (3- \lambda)(3-\lambda) = (3-\lambda)^2$ and NOT simply $(3-\lambda)$.



$\det(A- \lambda I)$ is a polynomial



You are absolutely correct, $\lambda$ can have any value, just as $x$ can have any value in $f(x) = x^2 - 4 = (x+2)(x-2)$. What makes eigenvalues special is that they are the roots of the polynomial $\det(A-\lambda I)$, i.e., the special values of lambda that give $\det(A-\lambda I) =0$. When factoring this polynomial as given in your question, it should be clear what the roots are, namely $\{ \lambda_1, \lambda_2, \dots, \lambda_n \}$.



$\det(A)$ = product of eigenvalues



What happens when we evaluate this polynomial at $\lambda=0$, just as we might evaluate the polynomial $f(x) = x^2-4$ at $x=0$ to get $f(0) = -4$?
\begin{align*}
\det(A - 0 I) &= (\lambda_1 - 0) (\lambda_2 - 0) \cdots (\lambda_n - 0) \\
&= \lambda_1 \cdot \lambda_2 \cdots \lambda_n
\end{align*}
However, $\det(A-0I) = \det(A - 0) = \det(A)$, so we have
$$\det(A) = \lambda_1 \cdot \lambda_2 \cdots \lambda_n.$$



Again, notice the importance of listing repeated eigenvalues multiple times. For example, $\det \left( \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix} \right) = 9$ and this matrix has a repeated eigenvalue of $3$. We see that $3\cdot 3 =9$, but if we only listed this eigenvalue once we would have $3 \neq 9$.
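
This is easy to confirm numerically (a Python sketch, not part of the original answer, using the same matrix):

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [0.0, 3.0]])
    eigenvalues = np.linalg.eigvals(A)  # [3., 3.]: the eigenvalue 3, twice
    print(np.prod(eigenvalues))         # 9.0
    print(np.linalg.det(A))             # 9.0, matching the product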


gcd and lcm - using extended euclidean algorithm to find s, t, r




I have been stuck for many hours and I don't understand how to use the extended Euclidean algorithm. I calculated the gcd using the regular algorithm, but I don't get how to run the extended version properly to obtain $s,t,r$.




I understand that from the gcd I can get a linear combination representation, but I don't get how to do it using the algorithm.



How can I find $s,t,r$ for $a=154, b=84$?



If it is of any importance, the algorithm I am referring to is from the book Cryptography: Theory and Practice.



Thank you very much; I was becoming hopeless because of it.


Answer



Using the Euclidean algorithm, we have




$$
\begin{align}
154&=1\cdot84+70\tag{1}\\
84&=1\cdot70+14\tag{2}\\
70&=5\cdot14+0
\end{align}
$$

The last nonzero remainder is 14. So $\gcd(154,84)=14$. Now
$$

\begin{align*}
14&=84-70\qquad\text{(using 2)}\\
&=84-(154-84)\qquad\text{(using 1)}\\
&=2\cdot84-1\cdot154
\end{align*}
$$

So $14=2\cdot84-1\cdot154$.
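
Assuming $s,t,r$ denote the Bézout data with $r=\gcd(a,b)=sa+tb$, here is a compact implementation of the extended algorithm (a Python sketch, not part of the original answer):

    def extended_gcd(a, b):
        """Return (r, s, t) with r = gcd(a, b) = s*a + t*b."""
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r  # same remainders as the plain algorithm
            old_s, s = s, old_s - q * s  # carry the coefficient of a along
            old_t, t = t, old_t - q * t  # carry the coefficient of b along
        return old_r, old_s, old_t

    print(extended_gcd(154, 84))  # (14, -1, 2), i.e. 14 = -1*154 + 2*84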


elementary number theory - How to find the inverse modulo m?



For example:
$$7x \equiv 1 \pmod{31} $$
In this example, the modular inverse of $7$ with respect to $31$ is $9$. How can we find out that $9$? What are the steps that I need to do?



Update
If I have a general modulo equation:
$$5x + 1 \equiv 2 \pmod{6}$$



What is the fastest way to solve it? My initial thought was:
$$5x + 1 \equiv 2 \pmod{6}$$

$$\Leftrightarrow 5x + 1 - 1\equiv 2 - 1 \pmod{6}$$
$$\Leftrightarrow 5x \equiv 1 \pmod{6}$$



Then solve for the inverse of $5$ modulo $6$. Is it the right approach?



Thanks,


Answer




  1. One method is simply the Euclidean algorithm:
    \begin{align*}

    31 &= 4(7) + 3\\\
    7 &= 2(3) + 1.
    \end{align*}

    So $ 1 = 7 - 2(3) = 7 - 2(31 - 4(7)) = 9(7) - 2(31)$. Viewing the equation $1 = 9(7) -2(31)$ modulo $31$ gives $ 1 \equiv 9(7)\pmod{31}$, so the multiplicative inverse of $7$ modulo $31$ is $9$. This works in any situation where you want to find the multiplicative inverse of $a$ modulo $m$, provided of course that such a thing exists (i.e., $\gcd(a,m) = 1$). The Euclidean Algorithm gives you a constructive way of finding $r$ and $s$ such that $ar+ms = \gcd(a,m)$, but if you manage to find $r$ and $s$ some other way, that will do it too. As soon as you have $ar+ms=1$, that means that $r$ is the modular inverse of $a$ modulo $m$, since the equation immediately yields $ar\equiv 1 \pmod{m}$.


  2. Another method is to play with fractions Gauss's method:
    $$\frac{1}{7} = \frac{1\times 5}{7\times 5} = \frac{5}{35} = \frac{5}{4} = \frac{5\times 8}{4\times 8} = \frac{40}{32} = \frac{9}{1}.$$
    Here, you reduce modulo $31$ where appropriate, and the only thing to be careful of is that you should only multiply and divide by things relatively prime to the modulus. Here, since $31$ is prime, this is easy. At each step, I just multiplied by the smallest number that would yield a reduction; so first I multiplied by $5$ because that's the smallest multiple of $7$ that is larger than $32$, and later I multiplied by $8$ because it was the smallest multiple of $4$ that is larger than $32$. Added: As Bill notes, the method may fail for composite moduli.




Both of the above methods work for general modulus, not just for a prime modulus (though Method 2 may fail in that situation); of course, you can only find multiplicative inverses if the number is relatively prime to the modulus.




Update. Yes, your method for general linear congruences is the standard one. We have a very straightforward method for solving congruences of the form $$ax \equiv b\pmod{m},$$
namely, it has solutions if and only if $\gcd(a,m)|b$, in which case it has exactly $\gcd(a,m)$ solutions modulo $m$.



To solve such equations, you first consider the case with $\gcd(a,m)=1$, in which case $ax\equiv b\pmod{m}$ is solved either by finding the multiplicative inverse of $a$ modulo $m$, or as I did in method $2$ above looking at $\frac{b}{a}$.



Once you know how to solve them in the case where $\gcd(a,m)=1$, you can take the general case of $\gcd(a,m) = d$, and from
$$ax\equiv b\pmod{m}$$
go to
$$\frac{a}{d}x \equiv \frac{b}{d}\pmod{\frac{m}{d}},$$

to get the unique solution $\mathbf{x}_0$. Once you have that unique solution, you get all solutions to the original congruence by considering
$$\mathbf{x}_0,\quad \mathbf{x}_0 + \frac{m}{d},\quad \mathbf{x}_0 + \frac{2m}{d},\quad\ldots, \mathbf{x}_0 + \frac{(d-1)m}{d}.$$
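
For quick experimentation (a Python sketch, not part of the original answer; pow with a negative exponent requires Python 3.8+):

    # modular inverse of 7 mod 31
    print(pow(7, -1, 31))  # 9

    # the updated question reduces to 5x = 1 (mod 6)
    print(pow(5, -1, 6))   # 5, so x = 5 (mod 6); check: 5*5 + 1 = 26 = 2 (mod 6)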


Monday 24 September 2018

trigonometry - Solve, for $0 \leq x < 360$, $5\sin 2x = 2\cos 2x$. What about $\cos \theta = 0$?




Solve $5\sin 2x = 2\cos 2x$ for $0 \leq x < 360^\circ$.





Let $\theta = 2x$. Then
$(5\tan \theta -2)\cos \theta = 0.$



So $\tan \theta = \dfrac 2 5$ or $\cos \theta = 0.$



Calculating the above value gives:



$x = 45.0^\circ, 135.0^\circ, 225.0^\circ, 315.0^\circ, 10.9^\circ, 100.9^\circ, 190.9^\circ$, or $280.9^\circ.$




But looking at the marking scheme strictly states:




$x = 10.9, 100.9, 190.9, 280.9$ (Allow awrt)
Extra solution(s) in range: Loses the final A mark.




As you can see, these are only the solutions coming from $\tan \theta = \dfrac 2 5$.
What am I missing here? Why is $\cos \theta = 0$ not a correct solution?



Is factorising $\cos \theta$ unnecessary because when $\cos \theta = 0$,
$\tan \theta = \dfrac {\sin \theta}{\cos \theta}$ is undefined so that renders the equation useless?




Many thanks in advance.


Answer



Another method if you are interested. $5 \sin(2x)-2 \cos(2x)=0 \\ \frac{5}{\sqrt{29}} \sin(2x)-\frac{2}{\sqrt{29}} \cos(2x)=0 \\ \sin(\theta)\sin(2x)-\cos(\theta)\cos(2x)=0 \\ \cos(\theta+2x)=0 \\ \theta+2x=\frac{\pi}{2}+n \pi \\ 2x=\frac{\pi}{2}+n \pi-\theta \\ \text{ where } \theta=\arccos(\frac{2}{\sqrt{29}})$
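
A short check, not part of the original answer, of why the $\cos\theta=0$ branch must be rejected: if $\cos 2x=0$ then $\sin 2x=\pm1$, so the original equation would read $\pm5=0$, which is false. The spurious factor appears because rewriting $5\sin\theta-2\cos\theta$ as $(5\tan\theta-2)\cos\theta$ is only valid when $\cos\theta\neq0$; hence only the $\tan\theta=\frac25$ family survives.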


Sunday 23 September 2018

probability - Expectation of $Y^{\alpha}$ with $\alpha > 0$





Let $Y$ be a positive random variable. For $\alpha>0$ show that



$E(Y^{\alpha})=\alpha \int_{0}^{\infty}t^{\alpha -1}P(Y>t)dt$.



My ideas:




$E(Y^{\alpha})= \int_{-\infty}^{\infty}t^{\alpha}f_{Y}(t)dt$



=$\int_{0}^{\infty}t^{\alpha}f_{Y}(t)dt$



=$\int_{0}^{\infty}(\int_{0}^{t^{\alpha}}dy)f_{Y}(t)dt$
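
This approach can be finished by swapping the order of integration (a sketch, not part of the original post):

$$\int_{0}^{\infty}\int_{0}^{t^{\alpha}}dy\,f_{Y}(t)\,dt=\int_{0}^{\infty}\left(\int_{\{t\,:\,t^{\alpha}>y\}}f_{Y}(t)\,dt\right)dy=\int_{0}^{\infty}P(Y>y^{1/\alpha})\,dy,$$

and the substitution $y=s^{\alpha}$, $dy=\alpha s^{\alpha-1}\,ds$ turns the last integral into $\alpha\int_{0}^{\infty}s^{\alpha-1}P(Y>s)\,ds$, as required.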


Answer



$E(Y^\alpha)=\int_0^\infty t^\alpha f_Y(t)\,dt$. Let $G_Y(t)=P(Y\gt t)=1-F_Y(t)$, so that $G_Y'(t)=-f_Y(t)$. Integrating $E(Y^\alpha)$ by parts,
$$E(Y^\alpha)=\Big[-t^\alpha G_Y(t)\Big]_0^\infty +\alpha \int_0^\infty t^{\alpha-1}G_Y(t)\,dt=\alpha \int_0^\infty t^{\alpha-1}P(Y\gt t)\,dt.$$


complex analysis - Negative integers in Euler's Reflection Formula?



Euler's reflection formula is $\Gamma \left({z}\right) \Gamma \left({1 - z}\right) = \dfrac \pi {\sin \left({\pi z}\right)}$, where $\Gamma \left({z}\right)$ is the Gamma-function, which is only defined for non-negative numbers. To me, it looks like the reflection formula must take negatives. How can this be? Is it because $z$ must be a complex number?


Answer



The $\Gamma$ function can be extended to a meromorphic function on the complex plane, with poles at $0$ and the negative integers. The integral formula
$$
\Gamma(z)=\int_0^\infty t^{z-1}e^{-t}\,dt
$$

is absolutely convergent for $z\in\mathbb{C}$ with $\Re z>0$. The formula
$$
\Gamma(z+1)=z\,\Gamma(z),\text{ or }\Gamma(z)=\frac{\Gamma(z+1)}{z},
$$
is then used to extend it to complex numbers with $\Re z\le0$.


functional analysis - Prove that the Fourier series of $\frac{1}{f}$ is absolutely convergent




I have a problem:




Let $f$ be a continuous function on the unit circle $(\Gamma)$:



$$\Gamma=\{e^{i\theta}: \theta\in [0, 2 \pi]\}$$



Assume that $f \ne 0$ on $\Gamma$, and the Fourier series of $f$ is absolutely convergent on $\Gamma$.



Prove that the Fourier series of $\dfrac{1}{f}$ is absolutely convergent on $\Gamma$.





=================================================




  • I've tried to use the definition of what it means for $\sum_{n=0}^\infty a_n$ to be absolutely convergent:




A real or complex series $\sum_{n=0}^\infty a_n$ is said to converge absolutely if $\sum_{n=0}^\infty \left|a_n\right| = L$ for some real number $L$.







So let the Fourier series of $f$ be given by:
$$f(x) = \sum_{n=-\infty}^{\infty} a_{n}e^{in \theta}$$



We want to show that if $\dfrac{1}{f}$ has Fourier series $\sum_{n=-\infty}^{\infty} b_{n}e^{in\theta}$, then $$\sum_{n=-\infty}^{\infty} |b_{n}| < +\infty.$$



==========================================




But I still have no solution :( . Can anyone help me!



Any help will be appreciated! Thanks!


Answer



The standard proof ever since Gelfand uses commutative Banach algebras (the result is Wiener's $1/f$ theorem), but this requires a certain amount of preparation. A short proof was given by Newman in 1975. The paper is only two pages long, but still too long to summarize. Anyway, the paper is freely accessible at
http://www.ams.org/journals/proc/1975-048-01/S0002-9939-1975-0365002-8/S0002-9939-1975-0365002-8.pdf


Saturday 22 September 2018

real analysis - prove that the sequence $(4n-3)/(n+43)$ converges

I got stuck while I was working out whether

$$\frac{4n-3}{n+43}$$
converges. I would be pleased if I could get a hint for the above question.
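
A sketch of the standard hint (not part of the original post): divide numerator and denominator by $n$,
$$\frac{4n-3}{n+43}=\frac{4-\frac3n}{1+\frac{43}{n}}\xrightarrow[n\to\infty]{}\frac{4-0}{1+0}=4,$$
using the algebra of limits together with $\frac1n\to0$.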

real analysis - Proof for $\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$ without complexes?




This is what I needed. Practically, a link would also be okay.




$$\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$$


Answer



Evaluating ζ(2) by Robin Chapman contains several proofs (~14 altogether). You can have a look through and find a nice one.


Friday 21 September 2018

elementary number theory - Solve the Linear Congruence Equations.



I get so frustrated with modular arithmetic. It seems like every example I look at leaves steps out. I am trying to solve this problem:



Solve the linear congruence equations for x:




$x \equiv 2 \mod 7$



$x \equiv 1 \mod 3$



Ok, so I start



We know that 1st equation has a solution when $7 \mid (x-2)$. So there exists an integer k where $x = 2 + 7k$.



Ok, great. So I substitute into the 2nd equation:




$
2+7k \equiv 1 \mod 3 \implies \\
7k \equiv -1 \mod 3 \implies \\
7k \equiv 2 \mod 3
$



Now I need to find an inverse of this last congruence. How do I do that? I know there is one solution because gcd(7,3) = 1. This is the step I'm having problems on. If I can get the solution to $7k \equiv 2 \mod 3$ into the form $k = a + bj$ where $a,b \in \mathbb{N}$ then I know how to solve it.



Thank you.


Answer




Firstly note that by CRT we know that a solution exists $\pmod{3\cdot 7}$



To find the solution, you were right: we have $x = 2 + 7k$, and then we find $7k \equiv 2 \mod 3$, that is



$$7k \equiv 2 \mod 3 \iff k \equiv 2 \mod 3 \implies k=2+3h$$



and therefore



$$x=2+7(2+3h)=16+21h \iff x\equiv16 \pmod{3\cdot 7}$$


Geometry: move of "x" centimeter on a curve

Definition:
I have two connected line segments (in black on the picture) defined by 3 points: P1 P2 and P2 P3. I subdivide the first 5 centimeters of each line segment (in grey on the picture).
I connect corresponding subdivisions (in blue on the picture).
If I subdivide the first 5 centimeters infinitely finely, I get a perfect curve (in red on the picture) formed by the connecting lines.



[Construction sketch omitted.]




Problem:
I'm at point P1 and I want to move by "x" centimeters toward P3, passing along line segment P1 P2, then along the red curve, and finally along line segment P2 P3.
How do I compute the exact coordinates of the new point after this move?



Solution:
I only have an approximate solution, where I compute the intersection points of the blue lines (in orange on the picture).
Then, I compute the lengths between the orange points.
Finally, I move from P1 to the 4th subdivision of line segment P1 P2, then to the first orange point, then to the second orange point, etc., until I have covered the "x" centimeters.

Thursday 20 September 2018

calculus - Is $(n+1)(h/R)^n$ monotonically decreasing when $0< h < R$ and how to prove it?



I have the following alternating series where $R$ is a positive constant:
$$\sum_{n=0}^{\infty}(-1)^n (n+1)\left(\dfrac{h}{R}\right)^n$$




I'm trying to use the "Alternating Series Estimation Theorem" to calculate the sum error at a certain term. To use the theorem on a series $\sum(-1)^{n-1}b_n$, two conditions have to be met first:




  1. $b_{n+1} \le b_n\qquad$ (decreasing terms)

  2. $\lim_{n\to\infty} b_n = 0$



The second condition can be proved easily by using the ratio test:
\begin{align*}
\lim_{n\to\infty} \left| \dfrac{b_{n+1}}{b_n} \right| &= \lim_{n\to\infty} \left| \dfrac{(n+2)\left(\frac{h}{R}\right)^{n+1}}{(n+1)\left(\frac{h}{R}\right)^n}\right|\\
&= \lim_{n\to\infty} \left(\dfrac{n+2}{n+1}\right) \left|\dfrac{h}{R}\right| \\
&= \dfrac{h}{R}\qquad h > 0
\end{align*}



By the ratio test the series converges absolutely when $\lim_{n\to\infty}|b_{n+1}/b_n| < 1$ therefore the series converges when $h < R$ and:
$$ \lim_{n\to\infty} (-1)^n(n+1)\left(\dfrac{h}{R}\right)^n = 0 \qquad h < R$$



Now $b_n = (n+1)\left(\dfrac{h}{R}\right)^n$ is positive and converges to $0$ when $0 < h < R$, but does that mean it's also decreasing? If not, how do I prove that it's decreasing?


Answer



You have proved that

$$\lim_{n\to\infty} \left| \dfrac{b_{n+1}}{b_n} \right|=\dfrac{h}{R},$$ with $\dfrac{h}{R}<1 $ then one can see that there exists $N$ such that for all $n\ge N$,
$$
\left| \dfrac{b_{n+1}}{b_n} \right| <1
$$ or
$$
|b_{n+1}|<|b_n|, \quad n\ge N,
$$ thus the given sequence is decreasing (from $N$ onward).


Wednesday 19 September 2018

Simplifying fraction with square root as denominator

I'm trying to find the integral of:




$$\dfrac {2\sqrt{x} - 3x + x^2}{\sqrt{x}}$$



but I first need to simplify it so I tried dividing by the $\sqrt{x}$ for each of the numbers on the top like so:
$$\dfrac {2\sqrt{x}}{\sqrt{x}}$$



and did the same for the others. For the one above it was easy to see that it just simplifies to $2$. But I am unsure how to do the same for the others, for instance $\dfrac {-3x}{\sqrt{x}}$: I know $\sqrt{x}$ should be involved, but I don't know what $\dfrac{-3x}{\sqrt{x}}$ comes out to.
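
A worked sketch (not part of the original post): using $\sqrt{x}=x^{1/2}$,
$$\frac{2\sqrt{x}-3x+x^{2}}{\sqrt{x}}=2-3x^{1/2}+x^{3/2},$$
so
$$\int\left(2-3x^{1/2}+x^{3/2}\right)dx=2x-2x^{3/2}+\frac{2}{5}x^{5/2}+C.$$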

calculus - Proving limit at infinity of a rational function








I need to prove these statements:



Let $f(x)=\sum_{j=0}^{n}a_jx^j$ , $g(x)=\sum_{j=0}^{m}b_jx^j$.




  • $$\deg(g)\gt \deg(f) \implies \lim_{x\rightarrow \infty}\frac{f(x)}{g(x)}=0$$

  • $$\deg(g)= \deg(f) \implies \lim_{x\rightarrow \infty}\frac{f(x)}{g(x)}=\frac{a_n}{b_n}$$

  • $$\deg(f)\gt \deg(g) \implies \lim_{x\rightarrow \infty}\frac{f(x)}{g(x)}=\pm \infty$$




Is there any proof that would help me in all three statements, so that my answer can be shorter? I'm pretty sure I know how to do it, but I am trying to think of a cleaver way to shorten my answer.



Thanks!
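
One uniform computation that settles all three cases at once (a sketch, not part of the original post): for $x>0$ write
$$\frac{f(x)}{g(x)}=x^{\,n-m}\cdot\frac{a_n+a_{n-1}x^{-1}+\cdots+a_0x^{-n}}{b_m+b_{m-1}x^{-1}+\cdots+b_0x^{-m}},$$
where the second factor tends to $\frac{a_n}{b_m}$ as $x\to\infty$. Then $x^{n-m}$ tends to $0$, $1$, or $\infty$ according as $n<m$, $n=m$, or $n>m$, which yields the three statements (the sign of $\frac{a_n}{b_m}$ decides $\pm\infty$ in the last case).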

elementary set theory - Do two injective functions prove bijection?



I'm trying to prove $|A| = |B|$, and I have two injective functions $f:A \to B$ and $g:B \to A$. Is this enough proof for a bijection, which would prove $|A| = |B|$? It seems logical that it is, but I can't find a definitive answer on this.




All I found is this yahoo answer:




One useful tool for proving that two sets admit a bijection between
them is a theorem which says that if there is an injective function $f: A \to B$ and an injective function $g: B \to A$ then there is a bijective
function $h: A \to B$. The theorem doesn't really tell you how to find $h$,
but it does prove that $h$ exists. The theorem has a name, but I forget
what it is.





But he doesn't name the theorem name and the yahoo answers are often unreliable so I don't dare to base my proof on just this quote.


Answer



Yes, this is true; it is called the Cantor–Bernstein–Schroeder theorem.


Is there an operation in complex numbers that can only be answered by quaternions?

The natural numbers cannot provide an answer to $1-2$.



The integers cannot provide an answer to $\frac{1}{2}$.



The rational numbers cannot provide an answer to $\sqrt{2}$.



The real numbers cannot provide an answer to $\sqrt{-1}$.



The complex numbers cannot provide an answer to what (leading to quaternions)? Is this the way it works?




This question asks similar. The accepted answer points to the Cayley-Dickson construction but that doesn't seem to address an operation between complex numbers that cannot be a complex number.

elementary number theory - A sequence divisible by 9

I was trying to prove by mathematical induction that for every $n$ from $\Bbb N$, $u_n=n4^{n+1}-(n+1)4^n+1$ is divisible by $9$.



The base case was pretty easy, but I only managed to prove $u_{n+1}=3k$ where $k$ is an integer, and I don't think divisibility by $3$ implies divisibility by $9$; does it? If not, how can I proceed to prove the divisibility? By working mod $9$ maybe? Thanks in advance for your answer.
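
A sketch of the induction step (not part of the original post): since $u_n=n4^{n+1}-(n+1)4^n+1=(3n-1)4^n+1$,
$$u_{n+1}-u_n=(3n+2)4^{n+1}-(3n-1)4^n=4^n\big(4(3n+2)-(3n-1)\big)=9(n+1)4^n,$$
so $u_{n+1}=u_n+9(n+1)4^n$ is divisible by $9$ whenever $u_n$ is, and $u_1=9$ starts the induction.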

Monday 17 September 2018

elementary set theory - How did Cantor demonstrate a bijection from $I=[0,1]$ to $I^n$?




"I See It, but I Don't Believe It."




Georg Cantor showed that sets of different dimensions can have the same cardinality; in particular, he demonstrated that there is a bijection between the interval $I= [0,1]$ and the $n$-fold product $I^{n} = I \times I \times \cdots \times I$.




Does anyone know specifically how this was done?



Answer



I am not sure if Cantor did it this way, but this argument works: any number $x$ in $[0,1]$ has an expansion to base $2$: $x=\sum \frac {a_k} {2^{k}}$ where $a_k =0$ or $a_k=1$ for each $k$. This expansion is not unique but it can be made unique by avoiding expansions with $a_k=1$ for all but finitely many $k$ (except when $x=1$). Now let $r \in \{0,1,2,...,n-1\}$ and form a sequence $(b_k^{(r)})$ using the coefficients $a_k$ with $k=r\, \pmod{n}$. Let $x_r$ be the number whose expansion to base $2$ has the coefficient sequence $(b_k^{(r)})$. Then the map $x \to (x_0,x_1,...,x_{n-1})$ is a bijection.



A correction: it has been pointed out that $x=1$ causes a problem in this argument (see the comment by Henno Brandsma). As suggested, we can use the proof to show that there is a bijection between $[0,1)$ and $[0,1) \times [0,1)\times \cdots\times [0,1)$, and then use the fact that there are bijections between $[0,1)$ and $[0,1]$, as well as between $[0,1) \times [0,1)\times \cdots \times [0,1)$ and $[0,1] \times [0,1]\times \cdots \times [0,1]$.


Sum of the series $\sum \frac{n}{2^{n}}$




I know that the series converges by d'Alembert's ratio test, where $\lim\left ( \frac{A_{n+1}}{A_{n}} \right )= \frac{1}{2}$, but I don't know how to calculate the sum of the series. Thanks for the help.


Answer




$\sum_{n=0}^\infty x^n = \frac{1}{1-x}$ for $|x|<1$.
Then
$\sum_{n=1}^\infty nx^n = x\times \left(\frac{1}{1-x}\right)' = \frac{x}{(1-x)^2}$ for $|x|<1$; evaluating at $x=\frac12$ gives $\sum_{n=1}^\infty \frac{n}{2^n} = \frac{1/2}{(1/2)^2} = 2$.


real analysis - Prove that $\frac{2^n}{n!}$ converges to 0.











Prove that $\frac{2^n}{n!}$ converges to 0.




I can see why, I just don't get how exactly to do convergence proofs. Right now I have:



For $n>6$, $|\frac{2^n}{n!}-0|=\frac{2^n}{n!}<\frac{2^n}{3^n}$



and



assuming $\frac{2^n}{3^n}<\epsilon$, $n<\frac{\ln\epsilon}{\ln\frac2 3}$



Not sure if the last step is even right...




(This was an exam question today)


Answer



I'm pretty sure that last one needs to be $n > \frac{\ln \varepsilon}{\ln \frac{2}{3}}$ (the inequality flips because $\ln \frac{2}{3}<0$). But given that, this works: for every $\varepsilon$ you give an explicit way to find $N\left(= \frac{\ln \varepsilon}{\ln \frac{2}{3}}\right)$ such that for all $n > N$ we have $|x_n - 0| < \varepsilon$. Definitions, ta da!


real analysis - Evaluate $f(x)=\sum_0^\infty\frac{x^n}{n!!}$.

A question from Introduction to Analysis by Arthur Mattuck:




Let $n!!=n(n-2)(n-4)\cdot…\cdot k$, where $k=1$ or $2$,depending on whether n is odd or even. (define $0!!=1$.)




Evaluate the sum $f(x)=\sum_0^\infty\frac{x^n}{n!!},$ using term-by-term differentiation and integration.




I think what the question asked is to give an explicit form for this sum.
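
A sketch of where this heads (not part of the original post): splitting into even and odd $n$, the even part is
$$\sum_{k=0}^{\infty}\frac{x^{2k}}{(2k)!!}=\sum_{k=0}^{\infty}\frac{(x^2/2)^k}{k!}=e^{x^2/2},$$
while the odd part $h(x)=\sum_{k=0}^{\infty}\frac{x^{2k+1}}{(2k+1)!!}$ satisfies, by term-by-term differentiation, $h'(x)=1+x\,h(x)$ with $h(0)=0$, whose solution is $h(x)=e^{x^{2}/2}\int_{0}^{x}e^{-t^{2}/2}\,dt$. Altogether
$$f(x)=e^{x^{2}/2}\left(1+\int_{0}^{x}e^{-t^{2}/2}\,dt\right).$$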

Sunday 16 September 2018

probability - Expected Payoff for Dice Game Where Six = No Payoff



In a dice game where a player's payoff is whatever number the die shows, each player can roll as many times as they want. Each payoff accumulates (e.g. roll 5 and 5, then payoff = 10). The catch is that if the player rolled a six at any point in time, they get 0 (e.g. rolled 5 and 6, then payoff = 0).



Intuitively, raising $n$ (the number of die rolls) could raise the expected payoff but also increases the probability of at least rolling one six and getting 0 as a payoff. For example if you picked a really large $n$, the likelihood of rolling at least 1 six and getting 0 payout is extremely likely. But going from $n=1$ to $n=2$ you get a little higher expected payoff (checked the math manually in lieu of a general formula).



Going with the weighted average approach to come up with a formula for expected value, one part of the formula must be the weighted average of getting a 0 payout, i.e. the probability of rolling at least one six out of $n$ dice rolls equals $1-\left(\frac{5}{6}\right)^n$. As such, we get:




$E(X) =$ weighted average for each payoff



$E(X) =$ weighted average of getting 0 + the rest of the weighted average of payoffs



$E(X) = \left(1-\left(\frac{5}{6}\right)^n\right) 0$ + the rest of the weighted average of payoffs



However, I'm having trouble wrapping my head around coming up with the rest of the equation. How can I solve this problem?


Answer



If no die equals $6$, the expected value for each die equals $\frac{1+2+3+4+5}{5} = 3$. There are $n$ dice, so the expected value equals $3n$. We thus get:




$$E(X) = \left(1 - \left(\frac{5}{6}\right)^n\right) 0 + \left(\frac{5}{6}\right)^n 3 n = \left(\frac{5}{6}\right)^n 3 n$$



The highest expected payout is achieved for $n=5$ and $n=6$, with $E(X) \approx 6.028$.
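
Tabulating the formula (a Python sketch, not part of the original answer):

    # E(X) = (5/6)^n * 3n when committing to n rolls up front
    for n in range(1, 11):
        print(n, (5 / 6) ** n * 3 * n)
    # the maximum, about 6.028, is attained at both n = 5 and n = 6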


calculus - Why does L'Hôpital's rule work for sequences?



Say, for the classic example, $\frac{\log(n)}{n}$, this sequence converges to zero, from applying L'Hôpital's rule. Why does it work in the discrete setting, when the rule is about differentiable functions?



Is it because at infinity, it doesn't matter that we relabel the discrete variable, $n$, with a continuous variable, $x$, and instead look at the limit of $\frac{\log(x)}{x}$?




But then what about the quotients of sequences that go to the indeterminate form $\frac{0}{0}$? Why is it OK to use L'Hôpital's rule, as $n$ goes to zero?



I haven't found anything on Wikipedia or Wolfram about the discrete setting.



Thanks.


Answer



There IS a L'Hôpital's rule for sequences, called the Stolz–Cesàro theorem. If you have an indeterminate form, then:



$$\lim\limits_{n\to\infty} \frac{s_n}{t_n}=\lim\limits_{n\to\infty} \frac{s_n-s_{n-1}}{t_n-t_{n-1}}$$




So for your example:



$$\lim\limits_{n\to\infty}\frac{\ln(n)}{n}=\lim\limits_{n\to\infty}\frac{\ln\left(\frac{n}{n-1}\right)}{n-n+1}=\lim\limits_{n\to\infty}\ln\left(\frac{n}{n-1}\right)=0$$



But that isn't your question. Your question is, why do people "differentiate"? Basically because the real case covers the discrete case.



Recall the definition of limits for real and discrete cases.



Definition. A sequence, $s_n\colon \Bbb{N}\to \Bbb{R},$ converges to $L$ as $n\to\infty$, written $\lim\limits_{n\to\infty} s_n=L$ iff for all $\epsilon>0$ there is some $N$ such that for all $n\in \Bbb{N}$ with $n>N$, $|s_n-L|<\epsilon$.




Definition. A function, $f(x) \colon \Bbb{R}\to \Bbb{R}$ converges to $L$ as $x\to\infty$, written $\lim\limits_{x\to\infty} f(x)=L$ iff for all $\epsilon>0$ there is some $X$ such that for all $x\in \Bbb{R}$ with $x>X$, $|f(x)-L|<\epsilon$.



So if $f(x)$ is a real valued function that agrees with a sequence, $s_n$ on integer values, then $\lim\limits_{x\to\infty} f(x)=L$ implies $\lim\limits_{n\to\infty} s_n=L$.


analysis - Limit to infinity question











Does $\lim_{x\rightarrow \infty} \frac{5x}{(1+x^2)} = 0$ or $\lim_{x\rightarrow \infty} \frac{5x}{(1+x^2)} = 1$?



I am asking because I was wondering if $\infty^2$ in the denominator is "considered" bigger than $5\infty$. Or do we just take the above as $\frac{\infty}{\infty}$?


Answer



You have (dividing through by $x$)




$$\lim_{x\to \infty} \frac{5x}{1+x^2} = \lim_{x\to \infty} \frac{5}{x^{-1} + x} = 0.$$



Or if you know about L'Hôpital's rule:



$$\lim_{x\to \infty} \frac{5x}{1+x^2} = \lim_{x\to \infty} \frac{5}{2x} = 0.$$



Note that in general we can't talk about $\frac{\infty}{\infty}$. $\infty$ is not a number so we can't really use it as such. Usually in calculus when we talk about infinity we think of it in terms of limits.


Absolutely convergent series of complex functions.



I have to do the following excercise:



Let $\{f_n(z)\}_{n\in\mathbb{N}}$ a sequence of complex functions, and let $\sum_{n=1}^\infty f_n(z)$.



Prove that: if $\sum_{n=1}^\infty |f_n(z)|$ converges, then $\sum_{n=1}^\infty f_n(z)$ converges.



I know how to prove it for a series $\sum_{n=1}^\infty z_n$ of complex numbers with $z_n=x_n+iy_n$ because if $\sum_{n=1}^\infty |z_n|$ converges, one can observe that $|x_n|<|z_n|$ and $|y_n|<|z_n|$ then by the comparison criteria the real numbers series $\sum_{n=1}^\infty |x_n|$ and $\sum_{n=1}^\infty |y_n|$ converge and we know for real series that this implies that $\sum_{n=1}^\infty x_n$ and $\sum_{n=1}^\infty y_n$ converge.




If we call $R_n=\sum_{k=1}^n x_n$, $I_n=\sum_{k=1}^n y_n$ and $S_n=\sum_{k=1}^n z_n$.



And $\lim_{n \rightarrow \infty}R_n=x$, $\lim_{n \rightarrow \infty}I_n=y$, then



$$\lim_{n \rightarrow \infty}S_n=\lim_{n \rightarrow \infty}R_n+i\lim_{n \rightarrow \infty}I_n=x+iy.$$



Then $S_n$ converges and $\sum_{n=1}^\infty z_n$ does as well.



Is it enough to call $\{w_n\}=\{f_n(z)\}$ in my original problem and just apply this proof?




Thanks in advance.


Answer



Yes, that would be correct. On the other hand, you don't have to decompose your series into real and imaginary part. Suppose that $\sum_{n=1}^\infty\lvert z_n\rvert$ converges. Take $\varepsilon>0$. Then there is a natural $N$ such than$$m\geqslant n\geqslant N\implies \sum_{k=n}^m\lvert z_k\rvert<\varepsilon,$$and therefore, by the triangle inequality,$$m\geqslant n\geqslant N\implies\left\lvert\sum_{k=n}^mz_k\right\rvert<\varepsilon.$$Therefore, by Cauchy's criterion, the series $\sum_{n=1}^\infty z_n$ converges too.


Saturday 15 September 2018

divisibility - Prove that $5\mid 8^n - 3^n$ for $n \ge 1$




I have that $$5\mid 8^n - 3^n$$



The first thing I tried is via induction:



It is true for $n = 1$; then I have to prove that it's true for $n+1$.



$$5 \mid 8(8^n -3^n)$$
$$5 \mid 8^{n+1} -8\cdot3^n$$

$$5 \mid 3(8^{n+1} -8\cdot3^n)$$
$$5 \mid 3\cdot8^{n+1} -8\cdot3^{n+1}$$



After this, I don't know how to continue. Then I saw an example using the property that $$(a+b)^n = am + b^n$$ for some integer $m$ (from the binomial theorem).



$$5 \mid 8^n -3^n$$
$$5 \mid (5+3)^n -3^n$$
$$5 \mid 5m + 3^n - 3^n$$
$$5 \mid 5m$$




So, $d \mid a$ only if $a = kd$. From this I get that $5 \mid 5 m$.



My questions:



1) Is the exercise correct?



2) Could it have been resolved via method 1?



Thanks a lot.


Answer




For induction, you have



$$\begin{align}8^{n+1} - 3^{n+1} &= 8\cdot 8^n - 3\cdot3^n\\&= 3(8^n - 3^n) + 5\cdot8^n\end{align}$$



Note that the first term is divisible by $5$ because $8^n-3^n$ is divisible by $5$ by the induction hypothesis, and the second term $5\cdot8^n$ is visibly a multiple of $5$.


real analysis - Let $f$ be a differentiable function and for all $x$ $f'(x)>x$, prove $f$ isn't uniformly continuous





Suppose $f:(0,\infty)\to \mathbb R$ is differentiable and $f'(x)>x$.



Prove that $f$ isn't uniformly continuous in $(0,\infty)$.



Hint, prove first that for all $y>x>0$ we have $f(y)-f(x)\ge (y-x)x$.




From the first line I understand the function is always increasing and never has an extremum.




The hint looks like it is directly from Lagrange's MVT, but the definition doesn't really work with $\mathbb R$ and $\infty$, and using the definition of a derivative with $\displaystyle\lim_{y\to x}\frac {f(y)-f(x)}{y-x}=f'(x)$ I don't see how to lose the limit.



The next step would probably be to show the function isn't Lipschitz, since a uniformly continuous function passes the Lipschitz definition: $f(y)-f(x)\le M(y-x)$ for some $M\ge 0$; but here that will never hold, since for all $x$ we have $f(y)-f(x)\ge x(y-x)$.


Answer



Proof of the hint



Let $y>x>0$.



By the mean value theorem, there is some $\beta\in (x,y)$ such that $\displaystyle \frac{f(y)-f(x)}{y-x}=f'(\beta)$




By assumption, $f'(\beta)>\beta>x$



Hence $\displaystyle \frac{f(y)-f(x)}{y-x}>x$



Proof of the claim



Suppose that for $\epsilon=1$, there is some $\delta$ such that $|x-y|\leq \delta\implies |f(x)-f(y)|\leq 1$



Choose $N$ an integer such that $n\geq N \implies \frac{1}n\leq \delta$




For $n\geq N$, consider $x_n=n^2$ and $y_n=n^2+\frac1n$



By the previous lemma, and since $|x_n-y_n|\leq \delta$, we have $|y_n-x_n|x_n\leq 1$



That is $n\leq 1$



Contradiction.


Minimal polynomial of product, sum, etc., of two algebraic numbers

The standard proof, apparently due to Dedekind, that algebraic numbers form a field is quick and slick; it uses the fact that $[F(\alpha) : F]$ is finite iff $\alpha$ is algebraic, and entirely avoids the (to me, essential) issue that algebraic numbers are roots of some (minimal) polynomial. This seems to be because finding minimal polynomials is hard and largely based on circumstance.



There are more constructive proofs which, given algebraic $\alpha$, $\beta$, find an appropriate poly with $\alpha \beta$, $\alpha + \beta$, etc., as a root -- but of course these are not generally minimal.




You would of course want an algorithm to compute such min. polies, but assuming this is unfeasible (as it seems), my question is a bit different:




Every algebraic number $\alpha$, $\beta$, $\alpha\beta$, $\alpha + \beta$, etc., has a unique corresponding minimal polynomial, call it $p_{\alpha}(x)$, $p_{\beta}(x)$, $p_{\alpha \beta}(x)$, etc., and these polies have other roots, the conjugates, $\alpha_1$,...,$\alpha_n$, $\beta_1$,...,$\beta_m$, etc. Suppose I want to define an operation on this set of polies in the most naive way: $p_{\alpha}(x) \star p_{\beta}(x) = p_{\alpha\beta}(x)$. (Note that this is NOT the usual, direct multiplication of polies.)



But is this even well-defined? More specifically: suppose I swap $\alpha$ with one of its conjugates, $\beta$ with one of its conjugates, and multiply those together. Is the minimal polynomial of the new product the same as before? i.e. is this new product a conjugate of the old one? Meaning, I would need $p_{\alpha_1 \beta_1}(x) = p_{\alpha_i \beta_k}(x)$ for any combination of conjugates in order for this proposed operation to even make sense. And this seems unlikely -- that would be sort of miraculous right?



What about a similar operation for $\alpha + \beta$, $\alpha - \beta$, etc?





More broadly, given two algebraic numbers $\alpha$, $\beta$, I'm interested in the set of minimal polynomials corresponding to those algebraic numbers which can be generated by performing the field operations on $\alpha$, $\beta$ -- call this the "set of minimal polies attached to the number field" or something -- and if a similar field (or even just ring) structure can be put on these polies by defining appropriate operations on them. (Not the usual operations, which will clearly give you polies with roots outside of your number field.) I'm ultimately after questions like:




(1) How do the conjugates of $\alpha \beta$, $\alpha + \beta$, etc., relate to the conjugates of $\alpha$ and $\beta$?



(2) How do the coefficients of the min. polies of $\alpha \beta$, $\alpha + \beta$, etc., relate to those of the min. polies of $\alpha$ and $\beta$? Obviously, the algebraic integers form a ring; what else can be said?



(3) Degree?





It may be impractical to calculate any one such min. poly explicitly, but maybe interesting things can be said about the collection as a whole?
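
Though the question is mostly theoretical, the well-definedness worry above can be tested concretely. A small SymPy sketch (my own illustration; `minimal_polynomial` performs the computation for us) shows the proposed $\star$ operation is indeed not well-defined, already for $\alpha=\beta=\sqrt2$:

```python
from sympy import sqrt, cbrt, minimal_polynomial, Symbol

x = Symbol('x')

# alpha = sqrt(2) has conjugates +sqrt(2) and -sqrt(2).
# Multiplying conjugate by conjugate gives products with DIFFERENT
# minimal polynomials, so p_alpha * p_beta -> p_{alpha beta}
# cannot be well-defined:
print(minimal_polynomial(sqrt(2)*sqrt(2), x))     # x - 2
print(minimal_polynomial(sqrt(2)*(-sqrt(2)), x))  # x + 2

# Minimal polynomials of sums/products are still computable, e.g.:
print(minimal_polynomial(sqrt(2) + cbrt(3), x))   # degree 6
```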

Friday 14 September 2018

geometry - The Wobbling Staircase Function



Define the wobbling staircase function as follows:





Informally: The wobbling staircase function consists of $3$ connected line segments with alternating 'size' of gradients. That is, the gradient of the second line segment is smaller than the first and that of the third is greater than the second.




Note: if you sketch this then it does look like a wobbly staircase!






Formally: Assume that the first line segment begins at the origin. If the endpoint of the first line segment is $(a,d)$, that of the second is $(b,e)$ and that of the third is $(c,f)$, then the wobbling staircase function is $$y=\begin{cases}\frac dax, & 0\le x\le a\\\frac{e-d}{b-a}x+\frac{d(b-a)-a(e-d)}{b-a}, & a<x\le b\\\frac{f-e}{c-b}x+\frac{e(c-b)-b(f-e)}{c-b}, & b<x\le c\end{cases}$$ where the gradients alternate as described, e.g. $$\frac da>\frac{e-d}{b-a}\implies \frac de>\frac ab$$





Define also the line joining the origin and $(c,f)$: $$y_{\text{tot}}=\frac fcx.$$







Question: On what condition(s) is $y_{\text{tot}}$ always above each of the line segments, except for the points $(0,0)$ and $(c,f)$?





Note that this is equivalent to asking when $y_{\text{tot}}$ does not cross the second line segment.




Bonus Question: What if I extend this is $2k+1$ connected line segments? What would the constraints then be?



Answer



Informally, what you want is for the vector $\left(\matrix{c\\f}\right)$ to lie "on the left" of the vector $\left(\matrix{a\\d}\right)$.



Formally, this is saying that the basis $\left(\left(\matrix{c\\f}\right),\left(\matrix{a\\d}\right)\right)$ is negatively oriented, which you check by computing the sign of its determinant. Hence $y_\text{tot}$ lies above all line segments if and only if $$cd-af<0.$$




For more line segments, you have more determinants to compute.
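
A throwaway numerical check of the criterion (my own sketch, not part of the answer): since both the staircase and the chord are piecewise linear, the chord lies above the whole staircase iff it lies above each interior vertex, i.e. iff each determinant is negative.

```python
def chord_above(vertices):
    """vertices = [(a, d), (b, e), (c, f)]: endpoints of the segments.
    The chord from (0,0) to the last vertex lies strictly above the
    staircase iff c*y - f*x < 0 for every interior vertex (x, y)."""
    c, f = vertices[-1]
    return all(c*y - f*x < 0 for x, y in vertices[:-1])

# slopes 0.8, 0.7, 1.8 -- a genuine wobble -- with chord slope 1:
print(chord_above([(1, 0.8), (3, 2.2), (4, 4)]))  # True
print(chord_above([(1, 1.2), (3, 2.2), (4, 4)]))  # False: first vertex pokes above
```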


elementary set theory - Let $\alpha, \beta, \gamma$ be cardinals, $\beta \leq \gamma$, prove $\alpha^{\beta}\le \alpha^{\gamma}$



Let $|A|=\alpha, |B|=\beta, |C|= \gamma$ be cardinals and $\beta \leq \gamma$. Prove $\alpha ^{\beta}\le \alpha ^{\gamma}$.



So from the given we know that there's an injection $f:B\to C$, and the elements of $A^B$ and $A^C$ are functions $h:B\to A$ and $g: C\to A$. We want to prove there's an injection $l_1:A^B\to A^C$. It appears that $f$ doesn't help here.




Trying to take representatives from $A$ and show they're in $C$ and there's an injection doesn't work so maybe the function should be $l_2: h \to g$ but I don't know how to work with it.


Answer



Given the injection $f$, to each function $h: B \to A$ you can associate a function $g: C \to A$ by setting $g(y)=h(x)$ whenever $y=f(x)\in f(B)$, and $g(y)={}$some fixed element of $A$ otherwise. Since $f$ is an injection, $g$ is well defined, and the $g$'s will be distinct whenever the $h$'s are.
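
For what it's worth, here is a finite toy version of this map (my own sketch, obviously not a proof about infinite cardinals) showing that distinct $h$'s really do go to distinct $g$'s:

```python
from itertools import product

A = ['a0', 'a1']        # |A| = alpha = 2
B = [0, 1]              # |B| = beta  = 2
C = [0, 1, 2]           # |C| = gamma = 3
f = {0: 0, 1: 2}        # an injection B -> C
f_inv = {v: k for k, v in f.items()}
default = A[0]          # the fixed "something in A"

def extend(h):
    """Send h: B -> A to g: C -> A, with g(f(x)) = h(x) and g = default off f(B)."""
    return tuple(h[f_inv[y]] if y in f_inv else default for y in C)

hs = [dict(zip(B, values)) for values in product(A, repeat=len(B))]
gs = {extend(h) for h in hs}
print(len(hs), len(gs))   # 4 4 -> the map h |-> g is injective
```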


Thursday 13 September 2018

Power series representation of $f(x) = \frac{1}{x+2}$



In attempting to find the power series representation of $f(x)$, using the fact:



$$\frac{1}{1 - t} = \sum_{n=0}^{\infty}{t^n}$$




I simply set $t = -x - 1$, which when substituting into the above formula gives $f(x)$. Therefore, I presumed that the power series representation of $f(x)$ is $\sum_{n=0}^{\infty}{(-x - 1)^n} = \sum_{n=0}^{\infty}{(-1)^n(x + 1)^n}$.



But apparently this is wrong, and I have seen a more complicated derivation, which also seems correct, that gives $\sum_{n = 0}^{\infty}{\frac{(-1)^n}{2^{n+1}}x^n}$



Can anyone provide any insight into why my more simplistic derivation is wrong?



EDIT: To clarify, I would like to point that I understand that the second derivation is correct and I understand how it is derived. What I want to know is why my derivation is wrong.


Answer



Your derivation is not wrong.




It simply gives the power series developed about a different point. The series



$$\sum_{n=0}^{\infty} \frac{(-1)^n}{2^{n+1}} x^n$$



is developed about the point $x = 0$, while



$$\sum_{n=0}^{\infty} (-1)^n (x + 1)^n$$



is developed about the point $x = -1$. Typically, one "likes" to have a series expansion of the first type, since simple powers are easier to deal with than powers involving an addition; but nothing forces the first kind of expansion, and moreover, it may not exist in many cases due to singular behavior at the origin, e.g. for the function $x \mapsto \frac{1}{x}$, or for $\log$. If you were specifically asked for the series to be developed at $x = 0$, then the answer would be wrong. If no development point is specified, and yet this is considered wrong, then the problem wasn't stated well to begin with.




ADD: from the comments, the meaning of "development point" is because the general power series has the form



$$\sum_{n=0}^{\infty} a_n (x - c)^n$$



The point "$c$" is called the "center". The reason for calling it such is two-fold: one reason is that such series, when they converge, do so within a circle (most literally in the complex plane, but on the real numbers, an interval constitutes a 1-d "circle") about this point. Another reason is that this point is the one of maximum rate of convergence (namely, infinitely fast, since at $x = c$ the series "instantly" converges to the value $a_0$).
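
If you want to see both expansions side by side, SymPy can produce them (a quick illustration, assuming a standard SymPy install):

```python
from sympy import symbols, series

x = symbols('x')
f = 1/(x + 2)

# Series about the center c = 0: coefficients (-1)^n / 2^(n+1)
print(series(f, x, 0, 4))

# Series about the center c = -1: coefficients (-1)^n, in powers of (x + 1)
print(series(f, x, -1, 4))
```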


Difficulty with circle geometry



I am working through the Senior Team Regional Maths Challenge questions from the 2007 paper (answers on this page: http://furthermaths.org.uk/files/SupervisorsBooklet.pdf).




On question 10 I am really struggling. It says 'The circle in the diagram below has radius 6cm. If the perimeter of the rectangle is 28cm, what is its area?'



The diagram is of a circle with a rectangle inside, all four corners touching the circumference (sorry for the absence of a diagram, I can't find the question paper itself).



I've figured out that the distance from the centre of the rectangle to a corner must be equal to the radius of the circle (6cm). Thus the diagonal length (diameter of the circle) is 12cm. Using Pythagoras, $x^2+y^2=144$. However, the answer for the area is 26 cm². This means that if the perimeter is 28cm, the two lengths must be 2cm and 13cm. Using these lengths in Pythagoras is not possible as $2^2+13^2 \neq 12^2$. So now I am absolutely confused and have no idea how to figure this out.


Answer



Call $a$ and $b$ the lengths of the sides of the rectangle.



We have $2(a+b) = 28$ and $a^2 + b^2 = 12^2$ by Pythagoras' Theorem.




Therefore $a+b =14$ so $(a+b)^2 = a^2 + b^2 + 2ab = 14^2$



Thus $2ab = 14^2 - 12^2 = (14-12)(14+12) = 2 \times 26$, i.e. $ab = 26$.



The area of the rectangle is indeed $26\ \mathrm{cm}^2$.
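
Note that the sides need not be integers; they are the roots of $t^2-14t+26=0$, i.e. $7\pm\sqrt{23}$. A quick numerical check (my own addition):

```python
import math

# a + b = 14 and ab = 26, so a, b are roots of t^2 - 14t + 26 = 0
disc = math.sqrt(14**2 - 4*26)
a, b = (14 + disc) / 2, (14 - disc) / 2
print(a + b)          # 14.0  (half the perimeter)
print(a**2 + b**2)    # 144.0 (the squared diameter, 12^2)
print(a * b)          # 26.0  (the area)
```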


Wednesday 12 September 2018

real analysis - Direct bijection between $C[0,1]$ and $[0,1]$




By applying the Schroeder-Bernstein theorem one can state that there exists a
bijection between $C[0,1]$ and $[0,1]$. But is it possible to construct a bijection between $C[0,1]$ and $[0,1]$? Thanks in advance for any help.


Answer



All proofs of the Bernstein-Cantor-Schroeder theorem that I know either directly or with very little work produce an explicit bijection from any given pair of injections.



There is an obvious injection from $[0,1]$
to $C[0,1]$ mapping each $t$ to the function constantly equal to $t$, so the question reduces to finding an explicit injection from $C[0,1]$ to $[0,1]$.



Here is an example:




Any $f\in C[0,1]$ is determined by its restriction to the rationals. Fix an explicit enumeration $(a_n)_{n\in\mathbb N}$ of $\mathbb Q\cap[0,1]$.



The simple continued fraction of a real $x\in(0,1)$ has the form $1/(c^x_0+1/(c^x_1+\dots))$ where the $c_i$ are positive integers. It is unique unless $x$ is rational, in which case it has exactly two representations, that differ on their last term (this is essentially a matter of convention, and some presentations pick one from the beginning); pick the shortest one if that is the case. The sequence $(c^x_0,c^x_1,\dots)$ is infinite unless $x$ is rational, in which case we will extend it to an infinite sequence by setting $c^x_i=0$ for all $i$ past the last index where the sequence was defined.



Each real $r$ corresponds to a unique sequence $(b^r_n)_{n\in\mathbb N}$ of naturals as follows:




  • $b^r_0$ is $1$ if $r=0$, it is $2$ if $r>0$, and it is $3$ if $r<0$.

  • Let $b^r_1=\lfloor |r|\rfloor+1$.

  • Let $r'=(|r|-\lfloor|r|\rfloor)/2$ if $|r|-\lfloor |r|\rfloor$ is in $(0,1)$, and let $0'$ and $1'$ be your favorite (distinct) irrationals in $(1/2,1)$. Now let $b^r_n=c^{r'}_{n-2}+1$ for all $n\ge2$.




The assignment $r\mapsto(b^r_n)_{n\in\mathbb N}$ is an injection.



Fix an explicit bijection $\tau:\mathbb N\to \mathbb N\times\mathbb N$, denote $\tau(n)$ by $(\tau(n)_0,\tau(n)_1)$.
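
For concreteness, one standard choice of such a $\tau$ is the inverse of the Cantor pairing function; a quick sketch (my own addition, any explicit bijection works):

```python
import math

def tau(n):
    """Inverse Cantor pairing: an explicit bijection N -> N x N.
    Returns (tau(n)_0, tau(n)_1)."""
    w = (math.isqrt(8 * n + 1) - 1) // 2   # largest w with w(w+1)/2 <= n
    t = w * (w + 1) // 2
    return n - t, w - (n - t)

print([tau(n) for n in range(6)])
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```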



Assign to $f\in C[0,1]$ the following infinite sequence of positive integers $(d^f_n)_{n\in\mathbb N}$:




  • $d^f_n=b^{f(a_{\tau(n)_0})}_{\tau(n)_1}$.




Since everything is explicit so far, note that from the sequence $(d^f_n)_{n\in\mathbb N}$ we can reconstruct all the values $f(a_n)$, $n\in\mathbb N$, and therefore $f$.



Finally, there is a unique irrational $y_f\in(0,1)$ whose simple continued fraction is $(d^f_n)_{n\in\mathbb N}$. The map $f\mapsto y_f$ is an explicit injection.



The explicit bijection provided by your favorite explicit proof of the Bernstein-Cantor-Schroeder theorem and the two explicit injections specified above do the trick.


Tuesday 11 September 2018

combinatorics - Evaluate $\sum_{k=1}^n\frac{1}{k}\binom{n}{k}$





I'm interested in finding a nice closed form expression for the sum $\sum_{k=1}^n\frac{1}{k}\binom{n}{k}$. I've tried using the Binomial Theorem to get
\begin{align*}
\sum_{k=1}^n\frac{1}{k}\binom{n}{k} & =\int_0^1\frac{(1+x)^n-1}{x} \, dx\\
&=\int_1^2 (1+u+\cdots+u^{n-1}) \, du
\end{align*}

using the substitution $u=1+x$, but I can't quite simplify this integral either. I have also not been able to come up with a combinatorial approach, which may not exist since the summation and its terms are in general not integers. Any help in evaluating this sum would be appreciated, thanks!


Answer



In the question, the problem is with "nice closed form expression", since
$$\sum_{k=1}^n\frac{1}{k}\binom{n}{k}x^k=n x \, _3F_2(1,1,1-n;2,2;-x)$$ where a hypergeometric function appears.



So, let us forget the $x$ and compute for a few values of $n$
$$S_n=\sum_{k=1}^n\frac{1}{k}\binom{n}{k}$$
$$\left\{1,\frac{5}{2},\frac{29}{6},\frac{103}{12},\frac{887}{60},\frac{1517}{60},\frac{18239}{420}\right\}$$ which are
$$\left\{1,\frac{5}{2},\frac{29}{6},\frac{206}{24},\frac{1774}{120},\frac{18204}{720},\frac{218868}{5040}\right\}$$ The denominators are clearly $n!$ and the numerators corresponds to sequence A103213 in OEIS.



As you will see in the link, for large $n$

$$S_n\approx \frac{2^{n+1}} n$$ It is also given that
$$S_n=-H_n-\Re(B_2(n+1,0))$$ where $H_n$ is the harmonic number and $\Re(B_2(n+1,0))$ the real part of the incomplete beta function.



Update



Concerning the asymptotic behavior, it seems that it could be slightly improved using
$$S_n\approx 2^{n+1} n^{\frac{1}{4 n}-1}$$
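
A quick exact computation confirming the listed values and the growth rate (my own addition):

```python
from fractions import Fraction
from math import comb

def S(n):
    """Exact value of sum_{k=1}^n binom(n,k)/k."""
    return sum(Fraction(comb(n, k), k) for k in range(1, n + 1))

print([S(n) for n in range(1, 8)])
# matches 1, 5/2, 29/6, 103/12, 887/60, 1517/60, 18239/420

# asymptotics: S_n * n / 2^(n+1) should drift toward 1
for n in (10, 20, 40, 80):
    print(n, float(S(n)) * n / 2**(n + 1))
```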


polynomials - Multiplicative Inverse in a $256$ Galois Field



I am working on finding the multiplicative inverse in $GF(2^8)$ using the Euclidean Algorithm, but after reading multiple sources, I feel as though I am proceeding incorrectly. Using the irreducible polynomial $m(x)=x^8+x^4+x^3+x+1=0x11B$, I am trying to find the inverse of $x^6+x^4+x+1=0x53$



I know using long division (via http://www.wolframalpha.com/widgets/view.jsp?id=f396eaca9aaccbf858652bccc972324a) I get for the first step
$$(x^8+x^4+x^3+x+1)=(x^6+x^4+x+1)*(x^2-1)+(2x^4-x^2+2x+2)$$
but do I keep the negatives and the even coefficients? I can't seem to get a reasonable answer, and all the examples I have seen use simpler numbers. I know the answer to be $x^7+x^6+x^3+x=0xCA$; I just cannot seem to get there.


Answer



Here are the steps you should obtain.




\begin{align}
&x^8+x^4+x^3+x+1 = (x^6+x^4+x+1) (x^2 + 1) + x^2\\
&x^6+x^4+x+1 = x^2 (x^4 + x^2) + x + 1\\
&x^2 = (x+1)(x+1) + 1.
\end{align}

Remember that all coefficient arithmetic happens in $GF(2)$, so signs disappear and even coefficients vanish (this is why the rational long division from Wolfram Alpha leads you astray). Back-substituting through these steps gives
$$1 = (x^7+x^6+x^3+x)(x^6+x^4+x+1) + (x^5+x^4+x^3+x^2+1)(x^8+x^4+x^3+x+1),$$
so the inverse is $x^7+x^6+x^3+x = 0xCA$.
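
As a cross-check, the whole computation can be scripted. The sketch below (my own addition) runs the extended Euclidean algorithm on polynomials over $GF(2)$ encoded as Python integers, with XOR playing the role of addition/subtraction:

```python
def poly_mul(a, b):
    """Carry-less product of two GF(2)[x] polynomials stored as ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def poly_divmod(a, b):
    """Quotient and remainder of a divided by b in GF(2)[x]."""
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        shift = a.bit_length() - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf_inverse(a, mod=0x11B):
    """Inverse of a modulo x^8+x^4+x^3+x+1 via extended Euclid."""
    r0, r1, t0, t1 = mod, a, 0, 1
    while r1:
        q, r = poly_divmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, t0 ^ poly_mul(q, t1)
    return t0  # here r0 == 1 because mod is irreducible

print(hex(gf_inverse(0x53)))                              # 0xca
print(hex(poly_divmod(poly_mul(0x53, 0xCA), 0x11B)[1]))   # 0x1, sanity check
```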


calculus - Finding the Derivative of |x| using the Limit Definition



Please Help me derive the derivative of the absolute value of x using the following limit definition.
$$\lim_{\Delta x\rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}
$$

I have no idea as to how to get started. Please help.



Thank You


Answer



Since the absolute value is defined by cases,
$$|x|=\left\{\begin{array}{ll}
x & \text{if }x\geq 0;\\
-x & \text{if }x\lt 0,
\end{array}\right.$$
it makes sense to deal separately with the cases of $x\gt 0$, $x\lt 0$, and $x=0$.




For $x\gt0$, for $\Delta x$ sufficiently close to $0$ we will have $x+\Delta x\gt 0$. So
$f(x)= |x| = x$, and $f(x+\Delta x) = |x+\Delta x| = x+\Delta x$; plugging that into the limit, we have:
$$\lim_{\Delta x\to 0}\frac{f(x+\Delta x) - f(x)}{\Delta x} = \lim_{\Delta x\to 0}\frac{|x+\Delta x|-|x|}{\Delta x} = \lim_{\Delta x\to 0}\frac{(x+\Delta x)-x}{\Delta x}.$$
You should be able to finish it now.



For $x\lt 0$, for $\Delta x$ sufficiently close to zero we will have $x+\Delta x\lt 0$; so $f(x) = -x$ and $f(x+\Delta x) = -(x+\Delta x)$. It should again be easy to finish it.



The tricky one is $x=0$. I suggest using one-sided limits. For the limit as $\Delta x\to 0^+$, $x+\Delta x = \Delta x\gt 0$; for $\Delta x \to 0^-$, $x+\Delta x = \Delta x\lt 0$; the (one-sided) limits should now be straightforward.
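
A numerical illustration of these three cases (my own addition): the difference quotient settles to $1$ for $x>0$, to $-1$ for $x<0$, and at $x=0$ the two one-sided quotients disagree, so no derivative exists there.

```python
def dq(f, x, h):
    """Difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

print(dq(abs, 2.0, 1e-6), dq(abs, -2.0, 1e-6))   # ~1.0, ~-1.0
for h in (0.1, 0.01, 0.001):
    print(dq(abs, 0.0, h), dq(abs, 0.0, -h))     # always 1.0 and -1.0
```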


Monday 10 September 2018

definite integrals - How to show that $PV \int_{-\infty}^{\infty} \frac{\tan x}{x}dx = \pi$



In a recent question, it was stated in a comment, without proof, that



$$ PV \int_{-\infty}^{\infty} \frac{\tan x}{x}dx = \pi$$




What is the easiest way to prove this? I was able to show that



$$
PV \int_{-\infty}^{\infty} \frac{\tan x}{x}dx = -PV\int_{-\infty}^{\infty} \frac{1}{x \tan x}dx \\
PV \int_{-\infty}^{\infty} \frac{1}{x \sin x}dx = 0 \\
\int_{-\infty}^{\infty} \frac{\sin x}{x}dx = \pi
$$



but failed to compute the original integral from this.



Answer



The question is: what would be a right way to define the principal value of this integral, knowing that it has infinitely many singularities at the points $\frac{\pi}{2}+\pi\Bbb{Z}$? I will propose the following
$$
PV\int_0^\infty\frac{\tan x}{x}dx~\buildrel{\rm def}\over{=}~
\lim_{\lambda\to0}\int_0^{\infty}\frac{\sin
x\cos x}{x(\cos^2 x+\lambda^2)}dx
$$

Next I will show that the limit in this definition does exist and that
its value is $\frac{\pi}{2}$.



First, note that the convergence of the integral
$\int_0^{\infty}\frac{\sin x\cos x}{x(\cos^2 x+\lambda^2)}dx$ is easy to prove
using integration by parts. Now
$$\eqalign{
\int_0^{\infty}\frac{\sin x\cos x}{x(\cos^2 x+\lambda^2)}dx
&=\frac{1}{2}\lim_{n\to\infty}\int_{-\pi n}^{\pi (n+1)}\frac{\sin x\cos x}{x(\cos^2 x+\lambda^2)}dx\cr
&=\frac{1}{2}\lim_{n\to\infty}\sum_{k=-n}^{n}\int_{\pi k}^{\pi(k+1)}\frac{\sin x\cos x}{x(\cos^2 x+\lambda^2)}dx\cr
&=\frac{1}{2}\lim_{n\to\infty}\sum_{k=-n}^{n}\int_{0}^{\pi}\frac{\sin x\cos x}{(x+\pi k)(\cos^2 x+\lambda^2)}dx\cr
&=\frac{1}{2}\lim_{n\to\infty}\int_{0}^{\pi}\left(\sum_{k=-n}^{n}\frac{1}{x+\pi k}\right)\frac{\sin x\cos x}{\cos^2x+\lambda^2}dx\cr
&=\frac{1}{2}\lim_{n\to\infty}\int_{0}^{\pi}U_n(x)\frac{\cos^2 x}{\cos^2x+\lambda^2}dx\cr
}
$$
where
$$
U_n(x)=\tan(x)\left(\sum_{k=-n}^{n}\frac{1}{x+ \pi
k}\right)
$$
But using the well-known expansion of the cotangent function, it is easy to see that
$\{U_n \}_n$ converges pointwise to $1$, and that this sequence is bounded uniformly on the interval $[0,\pi]$. Thus, we can interchange the integral and the limit in the above formula to get
$$
\int_0^{\infty}\frac{\sin x\cos x}{x(\cos^2 x+\lambda^2)}dx
=\frac{1}{2} \int_{0}^{\pi}\frac{\cos^2 x}{\cos^2x+\lambda^2}dx
= \int_{0}^{\pi/2}\frac{\cos^2 x}{\cos^2x+\lambda^2}dx
$$
Finally, taking the limit as $\lambda\to0$ we get
$$
PV\int_0^\infty\frac{\tan x}{x}dx~=~
\lim_{\lambda\to0}\int_0^{\infty}\frac{\sin x\cos x}{x(\cos^2 x+\lambda^2)}dx=\frac{\pi}{2}.
$$
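
The last limit is easy to check numerically; in fact one can compute $\int_0^{\pi/2}\frac{\cos^2x}{\cos^2x+\lambda^2}\,dx=\frac\pi2-\frac{\pi\lambda}{2\sqrt{1+\lambda^2}}$, so the error is $O(\lambda)$. A quick midpoint-rule sketch (my own addition):

```python
from math import cos, pi

def reg_integral(lam, n=100_000):
    """Midpoint rule for the regularized integral on [0, pi/2]."""
    h = (pi / 2) / n
    total = 0.0
    for k in range(n):
        c2 = cos((k + 0.5) * h) ** 2
        total += c2 / (c2 + lam * lam)
    return total * h

for lam in (0.1, 0.01, 0.001):
    exact = pi / 2 - pi * lam / (2 * (1 + lam * lam) ** 0.5)
    print(lam, reg_integral(lam), exact)
```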


calculus - Prove $\int^\infty_0 b\sin(\frac{1}{bx})-a\sin(\frac{1}{ax})\,dx = -\ln(\frac{b}{a})$ using Frullani integrals

Prove $$\int^\infty_0 \left(b\sin\Big(\frac{1}{bx}\Big)-a\sin\Big(\frac{1}{ax}\Big)\right)\mathrm dx = -\ln\Big(\frac{b}{a}\Big)$$




I'm supposed to use Frullani integrals, which state that $$\int^\infty_0 \frac{f(bx)-f(ax)}{x}\,\mathrm dx=[f(\infty)-f(0)] \ln\Big(\frac{b}{a}\Big).$$



So I need to get the first equation into the form of the Frullani integral. I can't figure out how to make this transformation though because I'm no good at them.

algebra precalculus - Compound interest compounded $n$ times per year formula. $A=P\left(1+\frac{r}{n}\right)^{nt}$: intuition behind it.

I know that the compound interest formula for interest compounded annually is given by $$A=P(1+r)^t$$
I know the intuition behind it. But why is the compound interest formula for interest compounded $n$ times per year $$A=P\left(1+\frac{r}{n}\right)^{nt}\;?$$
What's the intuition behind it, and why is it true?
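
The short version of the intuition is that each of the $nt$ compounding periods applies the per-period rate $r/n$ once, and multiplying $nt$ such growth factors gives the formula. A toy check (my own addition):

```python
def compound(P, r, n, t):
    """Closed form: P (1 + r/n)^(n t)."""
    return P * (1 + r / n) ** (n * t)

# the same amount, built up one compounding period at a time
P, r, n, t = 100.0, 0.12, 12, 2
A = P
for _ in range(n * t):
    A *= 1 + r / n       # one month's interest at rate r/n
print(A, compound(P, r, n, t))   # both ~126.97
```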

calculus - Proving $limlimits_{n to infty} frac{n^a}{c^n} = 0$ using L'Hôpital's Rule



I am trying to prove $\displaystyle \lim_{n \to \infty} \frac{n^a}{c^n} = 0$ using L'Hôpital's Rule, but I'm stuck.




Here's what I have so far:



$$ \lim_{n \to \infty} \frac{n^a}{c^n} = \lim_{n \to \infty}\frac{an^{a-1}}{c^n \ln c} = \lim_{n \to \infty}\frac{a(a-1)n^{a-2}}{c^n(\ln c)^2}$$



All three limits above seem to evaluate to $\frac{\infty}{\infty}$, so I feel like I'm not getting anywhere. Any ideas?




Edit: So, with the help of the hints below, I was able to figure out that



$$ \lim_{n \to \infty} \frac{n^a}{c^n} = \frac{a}{\ln c} \cdot \lim_{n \to \infty} \frac{n^{a-1}}{c^n} = \frac{a}{\ln c} \cdot \frac{a - 1}{\ln c} \cdot \lim_{n \to \infty} \frac{n^{a-2}}{c^n} = \cdots $$




So, disregarding the constant, it looks like the numerator keeps decreasing, while the denominator stays the same.



I can also see that if I let $a = 2$, for instance, I end up with $0$ after applying L'Hopital's $2$ times:



$$ \begin{aligned} \lim_{n \to \infty} \frac{n^2}{c^n} &\overset{LH}= \lim_{n \to \infty} \frac{2n}{c^n \ln c} \\ &= \frac{2}{\ln c} \lim_{n \to \infty} \frac{n}{c^n} \\&\overset{LH}= \frac{2}{\ln c} \lim_{n \to \infty} \frac{1}{c^n \ln c} \\ &= \frac{2}{(\ln c)^2} \lim_{n \to \infty} \frac{1}{c^n} \\ &= 0 \end{aligned} $$



So it seems reasonable to conclude that for an arbitrary $a > 0$, I will end up with $0$ after applying L'Hopital's $a$ times.



But I'm not sure how to go about using induction to prove it formally. I've only proven very simple sums by induction so far. Do I have to apply it to a product here?




Answer



I deleted my old answer, as it missed the point a bit (especially given the edits to the question). I'm going to expand on J.G.'s answer, since you seem to need a little extra help.



Let's prove $\lim_{n\to\infty} \frac{n^a}{c^n} = 0$, for $a \in \Bbb{R}$ and $c > 1$. (if $0 < c \le 1$, then the sequence does not tend to $0$, and for $c = 0$, the expression is undefined). We can tackle this in a number of cases, but the cases reduce back down to one case fairly easily, using the squeeze theorem.



Case 1: $a \in \Bbb{N}_0 = \{0, 1, 2, \ldots\}$, and $c > 1$
In this case, we use induction on $a$ (not $n$, as I originally suggested). When $a = 0$, then
$$\frac{n^a}{c^n} = \frac{1}{c^n}.$$
This tends to $0$, a fact which you seem happy to assume. If you wished to prove it, observe that the sequence $a_n = \frac{1}{c^n}$ is decreasing and bounded below by $0$, hence convergent. It also satisfies the recurrence relation $a_{n+1} = \frac{a_n}{c}$, so if $L$ is its limit, then taking the limit of both sides yields $L = \frac{L}{c} \implies (c - 1)L = 0$, and hence $L = 0$, as $c \neq 1$.




You probably could skip the above proof, but either way, the base case is established.



Now, suppose for some $k \in \Bbb{N}_0$ (and $c > 1$), we have
$$\lim_{n \to \infty} \frac{n^k}{c^n} = 0.$$
Then,
\begin{align*}
\lim_{n \to \infty} \frac{n^{k+1}}{c^n} &= \lim_{n \to \infty} \frac{(k+1)n^k}{\ln c \cdot c^n} &\text{L'Hopital's rule} \\
&= \frac{k+1}{\ln c} \lim_{n \to \infty} \frac{n^k}{c^n} \\
&= \frac{k+1}{\ln c} \cdot 0 = 0 &\text{induction hypothesis.}
\end{align*}


By induction, we now have $\lim_{n \to \infty} \frac{n^a}{c^n} = 0$ for all $a \in \Bbb{N}_0$ and $c > 1$. That is, we have completed this case.



Case 2: $a \in \Bbb{R}$, and $c > 1$
To prove this case, simply choose any natural number $k$ such that $k \ge a$ (we can do this, due to the Archimedean property). Naturally, if we take a negative value of $a$, then just choose $k = 0$ (or $1$, or anything higher really). Then, note that for all $n$,
$$0 \le \frac{n^a}{c^n} \le \frac{n^k}{c^n}.$$
The first case proved that $\frac{n^k}{c^n} \to 0$. Thus, by squeeze theorem, we have a proof for case 2.



We can even extend to $c < -1$ too!



Case 3: $a \in \Bbb{R}$, and $c < -1$
We prove this again by the squeeze theorem. Note that,
$$-\frac{n^a}{|c|^n} \le \frac{n^a}{c^n} \le \frac{n^a}{|c|^n},$$

and by case 2, both bounds tend to $0$, proving case 3.



Hope that helps, and sorry for the misleading hint.
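
As a final numerical illustration (my own addition): for $a=5$, $c=1.1$ the sequence climbs for a while before the exponential wins.

```python
a, c = 5.0, 1.1
for n in (10, 100, 500, 1000, 2000):
    print(n, n**a / c**n)   # rises at first, then collapses toward 0
```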


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...