Thursday 30 April 2015

calculus - Evaluate $\sum\limits_{n=0}^{\infty}\sum\limits_{r=0}^{n}\left(\frac{1}{(n-r)!}a^{n-r}\right)\left(\frac{1}{r!}b^{r}\right)$



I'd like to prove that $\sum\limits_{n=0}^{\infty}\sum\limits_{r=0}^{n}\left(\frac{1}{(n-r)!}a^{n-r}\right)\left(\frac{1}{r!}b^{r}\right)=\left(\sum\limits_{n=0}^{\infty}\frac{1}{n!}a^n\right)\left(\sum\limits_{n=0}^{\infty}\frac{1}{n!}b^n\right)$




I proceeded as follows:




$\sum\limits_{n=0}^{\infty}\sum\limits_{r=0}^{n}\left(\frac{1}{(n-r)!}a^{n-r}\right)\left(\frac{1}{r!}b^{r}\right)=\sum\limits_{r=0}^{0}\left(\frac{1}{(0-r)!}a^{0-r}\right)\left(\frac{1}{r!}b^{r}\right)+\sum\limits_{r=0}^{1}\left(\frac{1}{(1-r)!}a^{1-r}\right)\left(\frac{1}{r!}b^{r}\right)+\sum\limits_{r=0}^{2}\left(\frac{1}{(2-r)!}a^{2-r}\right)\left(\frac{1}{r!}b^{r}\right)+\cdots$




I wasn't able to get the right-hand side.



Any help will be appreciated! Thanks



Answer



You might find it easier to start from the RHS and show that



$$ \left(\sum_{n=0}^{\infty}\frac{1}{n!}a^n\right)\left(\sum_{n=0}^{\infty}\frac{1}{n!}b^n\right) = \sum\limits_{n=0}^{\infty}\sum_{r=0}^{n}\left(\frac{1}{(n-r)!}a^{n-r}\right)\left(\frac{1}{r!}b^{r}\right). $$



Actually, for me this is the only step. It's just how you multiply two series.



Something more interesting would be to see what you can make of



$$ \frac{1}{(n-r)!r!}a^{n - r}b^r = \frac{1}{n!} \binom{n}{r} a^{n - r}b^r $$




and the Binomial Theorem.
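Since the double sum is exactly the Cauchy product of the two exponential series, a quick numerical sanity check (my addition, not part of the original answer) is possible: the truncated double sum should approach $e^a e^b = e^{a+b}$. The test values below are arbitrary.

```python
import math

def double_sum(a, b, N):
    """Truncation of sum_{n=0}^{N} sum_{r=0}^{n} a^(n-r)/(n-r)! * b^r/r!."""
    total = 0.0
    for n in range(N + 1):
        for r in range(n + 1):
            total += a**(n - r) / math.factorial(n - r) * b**r / math.factorial(r)
    return total

a, b = 0.7, 1.3                      # arbitrary test values
approx = double_sum(a, b, 30)
exact = math.exp(a) * math.exp(b)    # the right-hand side, e^a * e^b
print(approx, exact)
```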


Calculating a real integral using complex integration



$$\int^\infty_0 \frac{dx}{x^6 + 1}$$




Does someone know how to calculate this integral using complex integrals? I don't know how to deal with the $x^6$ in the denominator.


Answer



Thankfully the integrand is even, so we have



$$
\int^\infty_0 \frac{dx}{x^6 + 1} = \frac{1}{2}\int^\infty_{-\infty} \frac{dx}{x^6 + 1}.
\tag{1}
$$



To find this, we will calculate the integral




$$
\int_{\Gamma_R} \frac{dz}{z^6+1},
$$



where $\Gamma_R$ is the semicircle of radius $R$ in the upper half-plane, $C_R$, together with the line segment between $z=-R$ and $z=R$ on the real axis.



[Figure: the contour $\Gamma_R$, consisting of the segment from $-R$ to $R$ on the real axis together with the semicircular arc $C_R$ in the upper half-plane.]



(Image courtesy of Paul Scott.)




Then



$$
\int_{\Gamma_R} \frac{dz}{z^6+1} = \int_{-R}^{R} \frac{dx}{x^6+1} + \int_{C_R} \frac{dz}{z^6+1}.
$$



We need to show that the integral over $C_R$ vanishes as $R \to \infty$. Indeed, the triangle inequality gives



$$\begin{align}

\left| \int_{C_R} \frac{dz}{z^6+1} \right| &\leq L(C_R) \cdot \max_{C_R} \left| \frac{1}{z^6+1} \right| \\
&\leq \frac{\pi R}{R^6 - 1},
\end{align}$$



where $L(C_R)$ is the length of $C_R$. From this we may conclude that



$$
\lim_{R \to \infty} \int_{\Gamma_R} \frac{dz}{z^6+1} = \int_{-\infty}^{\infty} \frac{dx}{x^6+1}.
\tag{2}
$$




The integral on the left is evaluated by the residue theorem. For $R > 1$ we have



$$
\int_{\Gamma_R} \frac{dz}{z^6+1} = 2\pi i \sum_{k=0}^{2} \operatorname{Res}\left(\frac{1}{z^6+1},\zeta^k \omega\right),
$$



where $\zeta = e^{i\pi/3}$ is a primitive sixth root of unity and $\omega = e^{i\pi/6}$. Note that this is because $\omega$, $\zeta\omega$, and $\zeta^2 \omega$ are the only poles of the integrand inside $\Gamma_R$. The sum of the residues can be calculated directly, and we find that



$$

\int_{\Gamma_R} \frac{dz}{z^6+1} = 2\pi i \sum_{k=0}^{2} \operatorname{Res}\left(\frac{1}{z^6+1},\zeta^k \omega\right) = \frac{\pi}{3 \sin(\pi/6)} = \frac{2\pi}{3}.
$$



Thus, from $(1)$ and $(2)$ we conclude that



$$
\int_{0}^{\infty} \frac{dx}{x^6+1} = \frac{\pi}{3}.
$$



In general,




$$
\int_{0}^{\infty} \frac{dx}{x^{2n}+1} = \frac{\pi}{2 n \sin\left(\frac{\pi}{2n}\right)}
$$



for $n \geq 1$.
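As a sanity check on the general formula, it can be compared against direct numerical quadrature. The sketch below (my addition; the truncation point $x=50$ is arbitrary, and the neglected tail is below $1/(5\cdot 50^5)$) uses composite Simpson's rule for the case $n=3$, i.e. the integral in the question.

```python
import math

def simpson(f, a, b, m):
    """Composite Simpson's rule on [a, b] with 2*m subintervals."""
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    for i in range(1, 2 * m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

n = 3                                                     # integrand 1/(x^6 + 1)
approx = simpson(lambda x: 1.0 / (x**(2 * n) + 1.0), 0.0, 50.0, 20000)
exact = math.pi / (2 * n * math.sin(math.pi / (2 * n)))   # = pi/3 for n = 3
print(approx, exact)
```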


real analysis - Let $f: D\rightarrow \mathbb{R}$ and assume that $x_0 \in D$ is not an accumulation point of $D$. Prove that $f$ is continuous at $x_0$.

(professor hints)



A Road Map to Glory




  • Write Down the negation of the definition of an accumulation point.

  • Prove that there exists a positive real number $\delta$ for which $$(x_0-\delta, x_0+\delta) \cap D=\{x_0\}$$

  • Prove that the only number $x$ satisfying $x\in D$ and $ |x-x_0| < \delta$ is $x=x_0$.

  • Prove that for such an $x$, $|f(x)-f(x_0)|<\epsilon$ for every positive number $\epsilon$







I have trouble starting from the second bullet point, and after that I don't see how to connect it with the third, so I'd like help with these two bullets. I understand that the negation of the definition of an accumulation point gives a neighborhood of $x_0$ containing only finitely many points of $D$; I'm just not sure how this connects to the third bullet. I understand that it is the $\delta$-neighborhood of $x_0$, but why does that neighborhood meet $D$ in only the point $x_0$ itself?

algebra precalculus - If $450^\circ<\alpha<540^\circ$ and $\cot\alpha=-\frac{7}{24}$, calculate $\cos\frac{\alpha}{2}$



The problem says:





If $450^\circ<\alpha<540^\circ$ and $\cot\alpha=-\frac{7}{24},$ calculate $\cos\frac{\alpha}{2}$




I solved it in the following way: $$\begin{align} -\frac{7}{24}&=-\sqrt{\frac{1+\cos2\alpha}{1-\cos2\alpha}}\\ \frac{49}{576}&=\frac{1+\cos2\alpha}{1-\cos2\alpha}\\ 625\cos2\alpha&=527\\ 2\cos^2\alpha-1&=\frac{527}{625}\\ \cos\alpha&=-\frac{24}{25}, \end{align}$$ therefore, $$\begin{align} \cos\frac{\alpha}{2}&=\sqrt{\frac{1-\frac{24}{25}}{2}}\\ &=\sqrt{\frac{1}{50}}\\ &=\frac{\sqrt{2}}{10}. \end{align}$$ But there is not such an answer:




A) $0.6$




B) $\frac{4}{5}$



C) $-\frac{4}{5}$



D) $-0.6$



E) $0.96$




I have checked the evaluating process several times. While I believe that my answer is correct and there is a mistake in the choices, I want to hear from you.



Answer



$$\frac {7^2}{24^2}=\frac {1+\cos 2\alpha}{1-\cos 2\alpha} \implies$$ $$\implies 7^2(1-\cos 2\alpha)=24^2(1+\cos 2\alpha)\implies$$ $$\implies 7^2- 7^2\cos 2\alpha = 24^2+ 24^2 \cos 2\alpha\implies$$ $$\implies 7^2-24^2= (7^2+24^2)\cos 2\alpha =25^2 \cos 2\alpha\implies$$ $$\implies -527=625\cos 2\alpha .$$



The missing negative sign on the LHS of the above line is your first error.



Your second error is writing $\cos \frac {\alpha}{2}=\sqrt {\frac {1+\cos \alpha}{2}}\;.$ We only have $|\cos \frac {\alpha}{2}|=\sqrt { \frac {1+\cos \alpha}{2} }\;.$ If $450^\circ<\alpha<540^\circ$ then $225^\circ<\frac {\alpha}{2}<270^\circ,$ implying $\cos \frac {\alpha}{2}<0.$



In general if $\cot x=\frac {a}{b}$ then the ratio of $\cos^2 x$ to $\sin^2 x$ is $a^2$ to $b^2$, so let $\cos^2 x=a^2y$ and $\sin^2 x=b^2y$. Since $1=\cos^2 x +\sin^2 x=(a^2+b^2)y$, we have $y=\frac{1}{a^2+b^2}$, so $\cos^2 x =\frac {a^2}{a^2+b^2}$ and $\sin^2 x =\frac {b^2}{a^2+b^2}\;$ and therefore $\;|\cos x|=\frac {|a|}{\sqrt {a^2+b^2}}$ and $|\sin x|=\frac {|b|}{\sqrt {a^2+b^2}}.$



So if $\cot \alpha =\frac {-7}{24}$ and $\cos \alpha<0$ then $\cos \alpha =-\frac {7}{\sqrt {7^2+24^2}}=-\frac {7}{25}.$
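A quick numerical check of this (my addition, not part of the original answer): pick one concrete angle with $\cot\alpha=-7/24$ in the stated range and evaluate $\cos\alpha$ and $\cos\frac{\alpha}{2}$ directly; the half-angle value should be $-0.6$, matching choice D.

```python
import math

# One concrete angle with cot(alpha) = -7/24 in (450°, 540°):
# since tan(540° - t) = -tan(t), alpha = 540° - arctan(24/7) works.
alpha = math.radians(540) - math.atan(24 / 7)
assert math.radians(450) < alpha < math.radians(540)

cot_alpha = math.cos(alpha) / math.sin(alpha)
cos_alpha = math.cos(alpha)
cos_half = math.cos(alpha / 2)
print(cot_alpha, cos_alpha, cos_half)   # -7/24, -7/25, -0.6
```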



calculus - Evaluating the following integral: $\int_{0}^{\infty }\frac{x\cos(ax)}{e^{bx}-1}\,dx$



How can we compute this integral for all $\operatorname{Re}(a)>0$ and $\operatorname{Re}(b)>0$?



$$\int_{0}^{\infty }\frac{x\cos(ax)}{e^{bx}-1}\ dx$$



Is there a way to compute it using methods from real analysis?



Answer



My attempt:



$$I=\int_{0}^{\infty }\frac{x\cos(ax)}{e^{bx}-1}\,dx=\sum_{n=1}^{\infty }\int_{0}^{\infty }x\, e^{-bnx}\cos(ax)\,dx\\
\\
\\
=\frac{1}{2}\sum_{n=1}^{\infty }\int_{0}^{\infty }x\, e^{-bnx}\,(e^{iax}+e^{-iax})\,dx=\frac{1}{2}\sum_{n=1}^{\infty }\left[\int_{0}^{\infty }x\, e^{-(bn-ia)x}\,dx+\int_{0}^{\infty }x\, e^{-(bn+ia)x}\,dx\right]\\
\\
\\
=\frac{1}{2}\sum_{n=1}^{\infty }\left(\frac{\Gamma (2)}{(bn-ai)^2}+\frac{\Gamma (2)}{(bn+ai)^2}\right)=\frac{1}{2b^2}\sum_{n=0}^{\infty }\frac{1}{(n-\frac{ai}{b})^2}+\frac{1}{2b^2}\sum_{n=0}^{\infty }\frac{1}{(n+\frac{ai}{b})^2}+\frac{1}{a^2}\\
\\$$

$$=\frac{1}{a^2}+\frac{1}{2b^2}\left(\Psi ^{(1)}\left(\frac{ai}{b}\right)+\Psi ^{(1)}\left(-\frac{ai}{b}\right)\right)$$

But we know that $\Psi ^{(1)}\left(-\frac{ai}{b}\right)=\Psi ^{(1)}\left(1-\frac{ai}{b}\right)-\frac{b^2}{a^2}$, so

$$\therefore \ I=\frac{1}{a^2}+\frac{1}{2b^2}\left ( \Psi ^{(1)}\left(1-\frac{ai}{b}\right) +\Psi ^{(1)}\left(\frac{ai}{b}\right)-\frac{b^2}{a^2}\right )$$

By the reflection formula, $\Psi ^{(1)}\left(1-\frac{ai}{b}\right)+\Psi ^{(1)}\left(\frac{ai}{b}\right)=\frac{\pi ^2}{\sin^2(\frac{i\pi a}{b})}$.



so we have
$$\therefore I=\frac{1}{2a^2}+\frac{1}{2b^2}\left ( \frac{-\pi ^2}{\sinh^2(\frac{\pi a}{b})} \right )=\frac{1}{2a^2}-\frac{\pi ^2}{2b^2\sinh^2(\frac{\pi a}{b})}\ \ \ \ \ \ , b>0$$



note that:
$$\frac{\pi ^2}{\sin^2(\frac{i\pi a}{b})}=-\frac{\pi ^2}{\sinh^2(\frac{\pi a}{b})}$$
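The closed form can be checked numerically. The sketch below (my addition) compares Simpson-rule quadrature of the integral against the formula for arbitrary test values $a=1$, $b=2$; the integrand is extended by its limit $1/b$ at $x=0$ and truncated at $x=40$, where it is negligibly small.

```python
import math

def integrand(x, a, b):
    if x == 0.0:
        return 1.0 / b      # removable singularity: limit of x*cos(ax)/(e^{bx}-1)
    return x * math.cos(a * x) / (math.exp(b * x) - 1.0)

def simpson(f, lo, hi, m):
    """Composite Simpson's rule on [lo, hi] with 2*m subintervals."""
    h = (hi - lo) / (2 * m)
    s = f(lo) + f(hi)
    for i in range(1, 2 * m):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

a, b = 1.0, 2.0             # arbitrary test values
approx = simpson(lambda x: integrand(x, a, b), 0.0, 40.0, 20000)
closed = 1 / (2 * a**2) - math.pi**2 / (2 * b**2 * math.sinh(math.pi * a / b)**2)
print(approx, closed)
```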



Wednesday 29 April 2015

calculus - A tricky integral - $\int_0^1 \sqrt{\frac{1}{(1-t^2)^2}-\frac{(n+1)^2t^{2n}}{(1-t^{2n+2})^2}}\,dt$




$$

\mathbf{\mbox{Evaluate:}}\qquad
\int_{0}^{1} \sqrt{\frac{1}{\left(1 - t^{2}\right)^2} -
\frac{\left(n + 1\right)^{2}\,t^{2n}}{\left(\, 1 - t^{2n+2}\,\,\right)^{2}}}
\,\,\mathrm{d}t
$$
where $n$ is any positive integer.




Introduction: This integral came up while studying the distribution of the roots of random polynomials - and I can't crack it. It seems impervious to methods of integration I know. Neither Mathematica nor Wolfram-Alpha could find a closed form, not only for this general integral, but any special case of $n>1$.




My attempt:




For $n=1$, the integral is pretty trivial to compute - expanding the integrand gives:
$$\int_0^1 \sqrt{\frac{1}{t^4-2 t^2+1}-\frac{4 t^2}{t^8-2 t^4+1}}\,dt$$
which simplifies quite easily to:
$$\int_0^1 \frac{dt}{t^2+1}$$
The antiderivative of the integrand is $\tan^{-1}{t}$. Evaluating at the limits gives:
$$\int_0^1 \sqrt{\frac{1}{t^4-2 t^2+1}-\frac{4 t^2}{t^8-2 t^4+1}}\,dt=\frac{\pi}{4}-0=\frac{\pi}{4}$$
However, this method does not work for $n>1$, and neither does any other method I know of.





Numerical values:
Listed below are the approximate numerical values for this integral. Neither Wolfram Alpha nor the Inverse Symbolic calculator were able to find closed forms for these numbers.




$$n=2 \qquad 1.01868$$
$$n=3 \qquad 1.17241$$
$$n=4 \qquad 1.28844$$
$$n=5 \qquad 1.38198$$

$$n=6 \qquad 1.46049$$




Any help on this integral would be greatly appreciated. Thank you!


Answer



It appears that the integral when $n=2$ can be represented in terms of elliptic integrals:



$$
I(2)=\frac{\pi}{2}-\frac{1}{\sqrt{6}}\left(\Pi\left(\frac23\mid\frac13\right)-K\left(\frac13\right)\right).
$$




Here the arguments of the elliptic integrals follow Mathematica conventions: that is,



$$
K(m)=\int^{\pi/2}_{0}\frac{d\theta}{\sqrt{1-m\sin^2\theta}}
$$
and
$$
\Pi(n\mid m)=\int^{\pi/2}_{0}\frac{d\theta}{(1-n\sin^2\theta)\sqrt{1-m\sin^2\theta}}.
$$



probability - Cdf and Pdf of independent random variables(iid)



Let $X_1, X_2,...,X_n$ be independent random variables, each having a uniform distribution over $(0,1)$. Let $Z:=\min(X_1, X_2,...,X_n)$ and $Y:=\max(X_1, X_2,...,X_n)$. I need to find the cdf and pdf of $Y$ and $Z$.



Cdf of $Y$ is $$F_Y(x)=P(Y<x)=P(X_1<x,X_2<x,\ldots,X_n<x)=P(X_1<x)P(X_2<x)\cdots P(X_n<x)=a^n,$$ where $a:=P(X_1<x)$.
Then $f_Y(x)=F'_Y(x)=na^{n-1}$.



Cdf of $Z$ is $$F_Z(x)=P(Z<x)=1-P(Z>x)=1-P(X_1>x,X_2>x,\ldots,X_n>x)=1-P(X_1>x)P(X_2>x)\cdots P(X_n>x)=1-(1-a)(1-a)\cdots(1-a)=1-[1-a]^n.$$
Then $f_Z(x)=F'_Z(x)=n[1-a]^{n-1}$.



Answer



Let $n \in \mathbb N$ and $X_1, X_2,...,X_n$ be $\sim^{iid}$ Unif$(0,1)$.



Define



$X_i:=\min(X_1, X_2,...,X_n)$



$X_a:=\max(X_1, X_2,...,X_n)$.



What are the distribution and densities of those?







Let $c \in \mathbb R$.



We have for $X_a$



$$P(X_a < c) = P(X_1 < c, X_2 < c, \ldots, X_n < c)$$

By independence, we have




$$=P(X_1 < c)P(X_2 < c)\cdots P(X_n < c)$$

By identical distribution, we have



$$=[P(X_1 < c)]^n =: [a(c)]^n$$

Hence we have



$$F_{X_a}(c) = [a(c)]^n$$




$$\to f_{X_a}(c) = n[a(c)]^{n-1}\, a'(c)$$



$$\to f_{X_a}(c) = n[a(c)]^{n-1}\, 1_{c \in [0,1]}$$






We have for $X_i$



$$P(X_i < c) = 1 - P(X_i \ge c)$$




By independence, we have



$$P(X_i \ge c)=P(X_1 \ge c)P(X_2 \ge c)...P(X_n \ge c)$$



By identical distribution, we have



$$=[P(X_1 \ge c)]^n$$



$$=[1-P(X_1 < c)]^n := [1-a(c)]^n$$




Hence we have



$$F_{X_i}(c) = 1-[1-a(c)]^n$$



$$\to f_{X_i}(c) = -n[1-a(c)]^{n-1}(-a'(c))$$



$$\to f_{X_i}(c) = n[1-a(c)]^{n-1} 1_{c \in [0,1]}$$
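A Monte Carlo check of the two distribution functions derived above (my addition, with arbitrary choices $n=5$ and $c=0.7$): for Unif$(0,1)$ samples, $P(\max<c)=c^n$ and $P(\min<c)=1-(1-c)^n$.

```python
import random

random.seed(0)
n, trials, c = 5, 200_000, 0.7

# empirical probabilities that the max / min of n uniforms falls below c
hits_max = sum(max(random.random() for _ in range(n)) < c for _ in range(trials))
hits_min = sum(min(random.random() for _ in range(n)) < c for _ in range(trials))

p_max = hits_max / trials      # should approach c^n
p_min = hits_min / trials      # should approach 1 - (1-c)^n
print(p_max, p_min)
```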


real analysis - Continuity and Differentiation on a interval

$$f(x) =

\begin{cases}
x\sin(1/x), & \text{if $x$ $\ne$ $0$} \\
0, & \text{if $x$ = $0$} \\
\end{cases}$$



Is $f$ continuous on $(-1/\pi$, 1/$\pi$)?
Is $f$ differentiable on $(-1/\pi$, 1/$\pi$)?



I have a question about this problem. I know how to prove continuity at a single point, but I'm not sure how to prove continuity on a whole interval. Also, I know there is a theorem stating that if a function is differentiable at a point then it is continuous there, but I have a feeling that $f(x)$ is continuous but not differentiable.

algebra precalculus - Sum of irrational numbers



Well, in this question it is said that $\sqrt[100]{\sqrt3 + \sqrt2} + \sqrt[100]{\sqrt3 - \sqrt2}$ is irrational, and the owner asks for "alternative proofs" which do not use the rational root theorem. I wrote an answer, but I only proved $\sqrt[100]{\sqrt3 + \sqrt2} \notin \mathbb{Q}$ and $\sqrt[100]{\sqrt3 - \sqrt2} \notin \mathbb{Q}$, not their sum. I got (fairly) downvoted, because I didn't notice that the sum of two irrationals can be either rational or irrational, and I deleted my (incorrect) answer. So, I want help in proving things like $\sqrt5 + \sqrt7 \notin \mathbb{Q}$ and $(1 + \pi) - \pi \in \mathbb{Q}$, if there is any "trick" or rule for these cases of summing two (or more) known irrational numbers (without the rational root theorem).




Thanks.


Answer



To prove that $\sqrt5+\sqrt7$ is irrational:



Suppose, for contradiction, that $\sqrt 5+\sqrt 7=\frac{a}{b}$ with $a,b$ integers and the fraction in lowest terms.



$\frac{a^2}{b^2}=12+2\sqrt{35}$



$\frac{a^2-12b^2}{2b^2}=\sqrt{35}$



$35=\frac{(a^2-12b^2)^2}{4b^4}$, i.e. $(a^2-12b^2)^2=140b^4$



$35\mid (a^2-12b^2)^2$, and since $35$ is squarefree, $35\mid a^2-12b^2$



$35^2\mid (a^2-12b^2)^2=140b^4$, so $35\mid 4b^4$ and hence $35\mid b$



Then $35\mid a^2-12b^2$ together with $35\mid b$ forces $35\mid a^2$, hence $35\mid a$. Both the numerator and the denominator are multiples of $35$, contradicting that the fraction was in lowest terms.




The method can be extended to many other sums of nth roots.


Tuesday 28 April 2015

Cauchy's functional equation real to real

Cauchy's functional equation:
$$f(x+y)=f(x)+f(y)$$
On Wikipedia (and some other websites) it says that there are non-linear solutions from the reals to the reals. But I don't quite understand additive functions and the Lebesgue measure. Can someone give me an example of a non-linear solution and explain the set of non-linear solutions thoroughly?



Thank you in advance.

real analysis - Prove that $\Gamma (x)=\int_0^\infty t^{x-1}e^{-t}dt$ is continuous at $x=1^+$.

A question from Introduction to Analysis by Arthur Mattuck:




Prove that $\Gamma (x)=\int_0^\infty t^{x-1}e^{-t}dt$ is continuous at $x=1^+$.



(Method: consider $|\Gamma(1+h)-\Gamma(1)|.$ To estimate it, break up the interval $[0,\infty)$ into two parts. Remember that you can estimate differences of the form $|f(a)-f(b)|$ by using the Mean-value Theorem, if $f(x)$ is differentiable on the relevant interval.)




$|\Gamma(1+h)-\Gamma(1)|=\left|\int_0^\infty (t^h-1)e^{-t}dt\right|$. I don't know how to apply the Mean-value Theorem to it. I haven't learned differentiating under the integral sign.

sequences and series - Mathematical Explanation of Mathematica Summation $\sum_{n=1}^{\infty}\frac{(2n-1)!}{(2n+2)!}\zeta(2n)$


From a mathematical point of view, what phenomenon did Mathematica (Wolfram) most likely encounter when calculating:
$$ \sum_{n=1}^{\infty}\frac{(2n-1)!}{(2n+2)!}\zeta(2n)\,=\,\color{red}{\frac{2\log(2\pi)-3}{8}+\frac{\zeta(3)}{8\pi^2}} $$
which is incorrect.




While calculating the sum from this question, I noticed that the Wolfram result contains ${\small\,\frac{\zeta(3)}{8\pi^2}\,}$, which is incorrect. Although I realized that this could be a bug, I started to wonder if there is any logical explanation behind this miscalculation! Has the Wolfram algorithm encountered something similar to the Riemann Rearrangement Theorem?



Doing more investigations, it turns out that Wolfram incorrectly calculates the closed form of an entire class of zeta summations, except the last case, which is correct.

$$ \small \begin{align} \sum_{n=1}^{\infty}\frac{\zeta(\alpha\,n)}{(n+a)(n+b)\dots} &= \sum_{n=1}^{\infty}\left[A\frac{\zeta(\alpha\,n)}{n+a}+B\frac{\zeta(\alpha\,n)}{n+b}+\dots\right] = \\ C+\,\sum_{n=1}^{\infty}\frac{\zeta(\alpha\,n)-1}{(n+a)(n+b)\dots} &= \color{darkgreen}{\sum_{n=1}^{\infty}\left[A\frac{\zeta(\alpha\,n)-1}{n+a}+B\frac{\zeta(\alpha\,n)-1}{n+b}+\dots\right]\,+C} \end{align} $$
And with the appearance of this case (the last, correct closed form), I believe there is a mathematical explanation: a correct summation method or algorithm that gives a systematically incorrect closed form if it is applied in a certain way. I would appreciate it if someone could explore this and alert us regarding any bug that may exist in any math app. Thanks.




combinatorics - What is the graph coloring problem that Paul Erdős offered $25 to solve circa 1979?

I know that what I am asking is not a mathematics question, but a question about a mathematics question, so if it is inappropriate and you're thinking to downvote, I would appreciate a comment first, so that I can delete it or change it (I don't have a lot of reputation here).



I heard something about a \$25 prize offered by Paul Erdős for determining the number of colors required for coloring graphene, and I wanted to know what the exact question is, because I thought Erdős didn't spend a lot of money (and \$25 back then was about \$100 now). However, the only thing I could find about it was some text in German, and I tried Google translate but the precise description of the problem is not clear:





In 1979 he solved a problem of Paul Erdős about the colouring of graphs
(for which Erdős had offered \$25). He used a computer to construct a
set of 6448 points in the plane containing no equilateral triangle of
side 1, whose associated graph (points joined whenever at distance 1)
was not colourable with three colours (chromatic number 4), contrary
to Erdős's conjecture and to his surprise.





I searched so many things such as "graphene chromatic number Erdos" and "graphene graph coloring Erdos" and similar searches but absolutely nothing came up!

calculus - Find $\lim\limits_{n\to\infty}\sum\limits_{k=1}^n \frac{2k+1}{k^2(k+1)^2}$




I have to find the limit $$\lim_{n\to\infty}\sum_{k=1}^n \frac{2k+1}{k^2(k+1)^2}.$$ I tried to make it into a telescopic series but it doesn't really work out...



$$\lim_{n\to\infty} \sum_{k=1}^n \frac{2k+1}{k^2(k+1)^2}=\sum_{k=1}^n \left(\frac{1-k}{k^2}+\frac1{k+1}-\frac1{(k+1)^2} \right)$$ so that is what I did using telescopic...



I said that:



$$\frac{2k+1}{k^2(k+1)^2}=\frac{Ak+B}{k^2}+\frac C{k+1}+\frac D{(k+1)^2}$$ but now as I look at it.. I guess I should "build up the power" with the ${k^2}$ too, right?


Answer



$$\lim_{n\rightarrow\infty}\sum^{n}_{k=1}\bigg[\frac{1}{k^2}-\frac{1}{(k+1)^2}\bigg]=\lim_{n\rightarrow\infty}\bigg[1-\frac{1}{(n+1)^2}\bigg]=1$$
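Since $\frac{2k+1}{k^2(k+1)^2}=\frac{1}{k^2}-\frac{1}{(k+1)^2}$, the partial sum telescopes to $1-\frac{1}{(n+1)^2}$. This can be verified exactly with rational arithmetic (my addition):

```python
from fractions import Fraction

def partial(n):
    """Exact n-term partial sum of sum_k (2k+1) / (k^2 (k+1)^2)."""
    return sum(Fraction(2 * k + 1, k**2 * (k + 1)**2) for k in range(1, n + 1))

# the telescoped form 1 - 1/(n+1)^2 matches exactly
assert partial(50) == 1 - Fraction(1, 51**2)
print(float(partial(500)))   # close to the limit 1
```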



number theory - $\sqrt{17}$ is irrational: the Well-ordering Principle

Prove that $\sqrt{17}$ is irrational by using the Well-ordering property of the natural numbers.



I've been trying to figure out how to go about doing this but I haven't been able to.

abstract algebra - Finding degree of a finite field extension



Let $x=\sqrt{2}+\sqrt{3}+\ldots+\sqrt{n}, n\geq 2$. I want to show that $[\mathbb{Q}(x):\mathbb{Q}]=2^{\phi(n)}$, where $\phi$ is Euler's totient function.



I know that if $p_1,\ldots,p_n$ are distinct primes then $[\mathbb{Q}(\sqrt{p_1}+\ldots+\sqrt{p_n}):\mathbb{Q}]=2^n$. But how to proceed in the above case? I could not apply induction either. Any help is appreciated.





The assertion is false. Actually $[\mathbb{Q}(x):\mathbb{Q}]=2^{\pi(n)}$, where $\pi(n)$ is the number of prime numbers less than or equal to $n$.



Answer



Let $L= \mathbb Q ( \sum _{j=1} ^n \sqrt j )$ , $k= \mathbb Q$ and $ N = \mathbb Q ( \sqrt 2, \sqrt 3 ,... , \sqrt n ) $ .



Clearly $ N|_k $ is Galois and the Galois group is of the form $ \mathbb Z_2 ^m$ for some $m$, since every $k$-automorphism of $N$ has order at most $2$. Note that each element of $Gal (N|_k)$ is completely specified by its action on $ \{ \sqrt p : \ p \ prime, \ p \leq n \} $ by the fundamental theorem of arithmetic. So this gives $$ m \leq \pi (n)$$



Now if the Galois group is $ \mathbb Z_2 ^m $ then it will have $2^m -1$ subgroups of index $2$, and hence there exist $2^m -1 $ subfields $F$ of $N $ containing $k$ such that $[F:k]=2$. But we already have $ 2^ {\pi (n)} -1$ many such subfields, obtained by adjoining the square root of the product of a nonempty subset of $ \{ p : \ p \ prime, \ p \leq n \} $, and hence we get $$ 2^ {\pi (n)} -1 \leq 2^ m -1 $$

$$ \implies \pi (n) \leq m $$



And hence $$Gal ( N|_k) = \mathbb Z_2 ^ {\pi(n)} $$



Now we just observe that the orbit of $ \sum _{j=1} ^n \sqrt j $ under the action of $Gal(N|_k) $ contains $2^ {\pi (n)} $ distinct elements by linear independence of $ \{ \sqrt {p_i }, \sqrt {p_ip_j},... \} $ and hence $N= L$



So $$Gal \left ( \mathbb Q ( \sum _{j=1} ^n \sqrt j ) |_ {\mathbb Q} \right ) \cong \mathbb Z _2 ^ {\pi (n)} $$


discrete mathematics - Need help with simple proof by mathematical induction

$$ \sum_{k=1}^n k^4 = {(6n^5+15n^4+10n^3-n) \over 30} $$



How can this be proven using mathematical induction? My teacher isn't any help, he just tells me to think about it, but I've read the textbook again and again, and there isn't much in it that would help me prove this statement.



*Sorry to ask multiple questions on the site in such a short time, but I'm just quite desperate for help and I don't have anyone to go to for it. My teacher barely speaks English, and with my poor hearing I can hardly follow the class and I need to pass this exam desperately.
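The claimed identity can be checked in a way that mirrors the induction itself (my addition, not part of the question): verify the base case and the inductive step $S(n)=S(n-1)+n^4$ for a sweep of $n$, using exact rational arithmetic.

```python
from fractions import Fraction

def closed_form(n):
    """The claimed closed form (6n^5 + 15n^4 + 10n^3 - n) / 30."""
    return Fraction(6 * n**5 + 15 * n**4 + 10 * n**3 - n, 30)

# base case, then the inductive step S(n) = S(n-1) + n^4 for n = 2..200
assert closed_form(1) == 1
assert all(closed_form(n) == closed_form(n - 1) + n**4 for n in range(2, 201))
print("formula verified for n = 1..200")
```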

Calculate limit with summation index in formula










I want to calculate the following:



$$ \lim_{n \rightarrow \infty} \left( e^{-n} \sum_{i = 0}^{n} \frac{n^i}{i!} \right) $$




Numerical calculations show it has a value close to 0.5, but I am not able to derive this analytically. My problem is that I am lacking a methodology for handling the $n$ both as a summation limit and as a variable inside the sum.


Answer



I don't want to put this down as my own solution, since I have already seen it solved on MSE.



One way is to use the sum of Poisson RVs with parameter 1, so that $S_n=\sum_{k=1}^{n}X_k, \ S_n \sim Poisson(n)$ and then apply Central Limit Theorem to obtain $\Phi(0)=\frac{1}{2}$.



The other solution is purely analytic and is detailed in the paper by Laszlo and Voros (1999) called 'On the Limit of a Sequence'.
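The limit can also be watched numerically (my addition). To avoid overflow, the partial sum is built from a running term $t_i = e^{-n}\,n^i/i!$ updated by $t_i = t_{i-1}\cdot n/i$; the value approaches $1/2$ from above as $n$ grows.

```python
import math

def p(n):
    """e^{-n} * sum_{i=0}^{n} n^i / i!, via a running term to avoid overflow."""
    term = math.exp(-n)          # i = 0 term
    total = term
    for i in range(1, n + 1):
        term *= n / i            # turns n^{i-1}/(i-1)! into n^i/i!
        total += term
    return total

print(p(100), p(400))            # both above 1/2, the second one closer
```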


real analysis - Compute $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+(x-y)^2+y^2)}dxdy$




Compute $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+(x-y)^2+y^2)}dxdy$.



I tried to do this using polar coordinates.
Let $x=r\cos t,\ y=r\sin t$; then $x^2+(x-y)^2+y^2=2r^2-2xy=r^2(2-\sin(2t))$, and



$$
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+(x-y)^2+y^2)}dxdy=\int_{0}^{2\pi}\int_{0}^{\infty}e^{-r^2(2-\sin(2t))}\,r\,drdt=\int_{0}^{2\pi}\frac{dt}{2(2-\sin(2t))}.
$$
But I have no idea how to compute $\int_{0}^{2\pi}\frac{dt}{2-\sin(2t)}$. Please give me some hint or suggestion. Thanks.


Answer




$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}

\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$




It's better to $\underline{\mbox{take advantage}}$ of the $\bbox[5px,#efe]{integrand\ symmetries}$. Namely,




\begin{align}

&\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp\pars{-\braces{x^{2} + \bracks{x - y}^{2} + y^{2}}}\,\dd x\,\dd y
\\[5mm] = &\
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp\pars{-\,{\bracks{x + y}^{2} + 3\bracks{x - y}^{2} \over 2}}\,\dd x\,\dd y
\\[5mm] = &
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp\pars{-\,{x^{2} + 3\bracks{x - 2y}^{2} \over 2}}\,\dd x\,\dd y =
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp\pars{-\,{x^{2} \over 2} - 6y^{2}}\,\dd x\,\dd y

\\[5mm] = &\
\root{2}\,{1 \over \root{6}}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\pars{-x^{2} - y^{2}}
\,\dd x\,\dd y =
{\root{3} \over 3}\pars{\int_{-\infty}^{\infty}\expo{-x^{2}}\,\dd x}^{2} =
\bbx{\ds{{\root{3} \over 3}\,\pi}}
\end{align}
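A brute-force check of the final value (my addition): nested Simpson quadrature over $[-6,6]^2$ suffices, since the integrand is bounded above by $e^{-(x^2+y^2)}$, and should reproduce $\frac{\sqrt{3}}{3}\pi \approx 1.8138$.

```python
import math

def simpson(f, a, b, m):
    """Composite Simpson's rule on [a, b] with 2*m subintervals."""
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    for i in range(1, 2 * m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def inner(x):
    # integrate over y for fixed x
    return simpson(lambda y: math.exp(-(x * x + (x - y)**2 + y * y)),
                   -6.0, 6.0, 200)

approx = simpson(inner, -6.0, 6.0, 200)
exact = math.sqrt(3) / 3 * math.pi
print(approx, exact)
```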


real analysis - A problem on countable dense subset on a metric space



Suppose that in a metric space every infinite subset has a limit point. What should my strategy be to construct a countable dense subset there? Additionally, how would I intuitively guess that with such a property the metric space has a countable dense subset?


Answer



HINT: Let $\langle X,d\rangle$ be a metric space in which every infinite subset has a limit point. For each $n\in\Bbb N$ let $D_n$ be a maximal subset of $X$ such that $d(x,y)\ge 2^{-n}$ whenever $x,y\in D_n$ with $x\ne y$. (You can use Zorn’s lemma to show that $D_n$ exists.)




  • Show that each $D_n$ is finite.

  • Show that $\bigcup_{n\in\Bbb N}D_n$ is dense in $X$.




I’m not sure how you’d guess this result. The hypothesis on $X$ does tell you that $X$ does not contain an infinite closed discrete subset, which in some sense says that the points of $X$ aren’t spread out too much, but that property alone isn’t enough to ensure that $X$ is separable: the result really does use the fact that $X$ is a metric space as well.


Monday 27 April 2015

sequences and series - Find $\frac{1}{7}+\frac{1\cdot3}{7\cdot9}+\frac{1\cdot3\cdot5}{7\cdot9\cdot11}+\cdots$ up to 20 terms



Find $S=\frac{1}{7}+\frac{1\cdot3}{7\cdot9}+\frac{1\cdot3\cdot5}{7\cdot9\cdot11}+\cdots$ up to 20 terms




I first multiplied and divided $S$ with $1\cdot3\cdot5$
$$\frac{S}{15}=\frac{1}{1\cdot3\cdot5\cdot7}+\frac{1\cdot3}{1\cdot3\cdot5\cdot7\cdot9}+\frac{1\cdot3\cdot5}{1\cdot3\cdot5\cdot7\cdot9\cdot11}+\cdots$$
Using the expansion of $(2n)!$
$$1\cdot3\cdot5\cdots(2n-1)=\frac{(2n)!}{2^nn!}$$
$$S=15\left[\sum_{r=1}^{20}\frac{\frac{(2r)!}{2^rr!}}{\frac{(2(r+3))!}{2^{r+3}(r+3)!}}\right]$$
$$S=15\cdot8\cdot\left[\sum_{r=1}^{20}\frac{(2r)!}{r!}\cdot\frac{(r+3)!}{(2r+6)!}\right]$$
$$S=15\sum_{r=1}^{20}\frac{1}{(2r+5)(2r+3)(2r+1)}$$



How can I evaluate the above expression? Or is there a simpler/faster method?



Answer



Hint:



$\frac{1}{(2r+5)(2r+3)(2r+1)}=\frac{1}{4}\left(\frac{1}{(2r+3)(2r+1)}-\frac{1}{(2r+5)(2r+3)}\right)$
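Using the hint, the 20-term sum telescopes to $\frac{15}{4}\left(\frac{1}{3\cdot5}-\frac{1}{43\cdot45}\right)=\frac{32}{129}$. A check with exact rational arithmetic (my addition):

```python
from fractions import Fraction

# direct 20-term sum of 15 / ((2r+1)(2r+3)(2r+5))
S_direct = 15 * sum(Fraction(1, (2 * r + 1) * (2 * r + 3) * (2 * r + 5))
                    for r in range(1, 21))

# telescoped via the hint: (15/4) * (1/(3*5) - 1/(43*45))
S_tel = Fraction(15, 4) * (Fraction(1, 3 * 5) - Fraction(1, 43 * 45))

assert S_direct == S_tel == Fraction(32, 129)
print(S_direct, float(S_direct))
```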


calculus - Prove using the basic properties of a sequence that $\lim \frac{n}{2^n} = 0$




Prove using the basic properties of a sequence that $\lim \frac{n}{2^n} = 0$


I tried to prove this using the standard epsilon-definition of a limit but the math got really hard, so I stopped. Then, I tried to represent the sequence as a quotient or a product of two sequences, but those sequences were not convergent. Thus, I've got no more ideas on how to tackle this problem. Any suggestions would be most appreciated.


Answer



As the comment suggests, you can notice that $2^{n}$ grows exponentially, and thus, eventually, we would have $2^{n}>n^{2}$, and thus the limit becomes sandwiched between $0$ and $\lim \frac{1}{n}$, and hence it becomes $0$.



Otherwise, you could also note that $2^{n}=e^{n\log 2}=1+n\log 2+\frac{1}{2!}(n\log 2)^{2}+\cdots$, and so $\frac{n}{2^{n}}=\frac{1}{\frac{1}{n}+\log 2+n(\cdots)}$, and when $n\rightarrow \infty$, the terms with a factor of $n$ in the denominator dominate and so the limit becomes $0$.


Sunday 26 April 2015

elementary number theory - Find all $(a,b) \in \Bbb Z^2$ such that $b \equiv 2a \pmod 5$ and $28a+10b=26$



I'm stuck with this exercise:





Find all $(a,b) \in \Bbb Z^2$ such that $b \equiv 2a \pmod 5$ and $28a+10b=26$




It's from my algebra class; we are looking into Diophantine and congruence equations.



I started by looking for the $(a,b)$ that would solve the equation $14a+5b=13$; those would be of the form $a=-13+5s$ and $b=39-14s$. Here is where I don't understand what I should do.



I've looked into $a$'s congruence mod 5 and got $5s \equiv 3 (5)$.



If $b$ should be two times $a$ then it would be: $10s\equiv1(5)$. Right?




So if I'm on the right path, I still don't see how I should combine the first solution for $b$ with this congruence requirement. What should I try? Thanks a lot.


Answer



$$28a+10b=26\iff 14a+5b=13$$



Now use mod $14$ and mod $5$: $$14a\equiv 13\pmod{5}\iff -a\equiv -2\pmod{5}$$



$$\stackrel{:(-1)}\iff a\equiv 2\pmod{5}$$



$$5b\equiv 13\pmod{14}\iff 5b\equiv 55\pmod{14}$$




$$\stackrel{:5}\iff b\equiv 11\pmod{14}$$



Therefore it's necessary that $a=5k+2$ and $b=14t+11$ for some $k,t\in\mathbb Z$. $$14a+5b=13\iff 14(5k+2)+5(14t+11)=13$$



$$\iff 70(k+t)+83=13\iff k+t=-1$$



Therefore all the solutions are given by $$(a,b)=(5k+2,14(-k-1)+11)$$



$$=(5k+2,-14k-3),\, k\in\mathbb Z$$




You're given $b\equiv 2a\pmod{5}$, i.e. $$-14k-3\equiv 2(5k+2)\pmod{5}$$



$$\iff k+2\equiv 4\pmod{5}\iff k\equiv 2\pmod{5}$$



$$\iff k=5r+2,\, r\in\mathbb Z$$



$$(a,b)=(5(5r+2)+2,-14(5r+2)-3)$$



$$=(25r+12,-70r-31),\, r\in\mathbb Z$$
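The solution family can be verified mechanically (my addition): every pair $(25r+12,\,-70r-31)$ satisfies both conditions, and a brute-force search over a window finds no other solutions.

```python
def ok(a, b):
    """Both conditions: 28a + 10b = 26 and b ≡ 2a (mod 5)."""
    return 28 * a + 10 * b == 26 and (b - 2 * a) % 5 == 0

# the claimed family works for every r
assert all(ok(25 * r + 12, -70 * r - 31) for r in range(-100, 101))

# brute force over a window: b is forced by the linear equation when integral
found = {(a, (26 - 28 * a) // 10)
         for a in range(-500, 501)
         if (26 - 28 * a) % 10 == 0 and ok(a, (26 - 28 * a) // 10)}
family = {(25 * r + 12, -70 * r - 31) for r in range(-20, 20)}
print(found == family)
```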



convergence divergence - difference of two series converges, but when does each converge?

My question below was stupid. I was meaning to ask the question in a more specific context. (Without mentioning the context, it's a stupid question!) Please ignore this question.




=====================================



Consider two sequences $a_n$ and $b_n$ in a compact set. Assume $|a_n-b_n| \leq \frac{C}{n}$. Then $a_n - b_n$ converges to zero; however, $a_n$ and $b_n$ may not converge.



My question is: again, consider two sequences $a_n$ and $b_n$ in a compact set. Assume $|a_n-b_n| \leq \frac{C}{n^2}$. Do $a_n$ and $b_n$ converge? (My intuition says yes since $\sum \frac{1}{n}$ diverges, but $\sum \frac{1}{n^2}$ converges) How do I show it? Or is it NOT true?

algebra precalculus - Time and Work



A and B can do a piece of work in 5 days; B and C can do it in 4 days. A starts the work and leaves after 4 days; then B joins and leaves after 3 days. In how many days does C finish the remaining work?



I have tried:



The part of the work A and B can do in 1 day is $1/5$;



similarly, for B and C it is $1/4$.



After that I am not able to proceed. Please can anyone guide me to the answer?


Answer



Suppose $A,B,C$ do $1/x,1/y,1/z$ of the work in $1$ day, respectively. Then



$1/x+1/y=1/5$ and $1/y+1/z = 1/4$.




Let $C$ work for $d$ days. Then by the problem, $4/x+3/y+d/z=1$.



Now find the value of $d$ using the constraints.


Saturday 25 April 2015

calculus - Logistic differential equation problem



I'm taking the AP Calculus BC Exam next week and ran into this problem with no idea how to solve it. Unfortunately, the answer key didn't provide any explanations.



I'm having trouble turning the differential equation into a normal equation. A step-by-step explanation would be wonderful.



The population $P(t)$ of a species satisfies the logistic differential equation $\frac{dP}{dt} = P\left(224 - \frac{P^2}{56}\right)$, where the initial population is $P(0) = 30$ and $t$ is the time in years. What is the limit of $P(t)$ as $t$ approaches infinity? (Calculator allowed)


Answer




Given that the population has a logistic form, at long times (large $t$) the population reaches a steady state, so $dP/dt = 0$ as $t$ approaches infinity.



Solving $P\left(224 - \frac{P^2}{56}\right) = 0$ yields $P = 0$ or $P = \pm 112$; since $P(0) = 30 > 0$, the population grows toward the positive equilibrium, so the limit is $112$.
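Although the steady-state argument already gives the answer, one can also watch the ODE converge numerically. A minimal forward-Euler sketch (my addition; the step size and time horizon are ad hoc choices):

```python
# Forward-Euler simulation of dP/dt = P * (224 - P**2 / 56), P(0) = 30.
P, dt = 30.0, 1e-4
for _ in range(200_000):          # 20 years of simulated time
    P += dt * P * (224 - P * P / 56)
print(P)                          # settles at the carrying capacity 112
```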


calculus - Proving $\int_{0}^{\infty} \mathrm{e}^{-x^2} dx = \frac{\sqrt \pi}{2}$



How to prove
$$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$$


Answer



This is an old favorite of mine.
Define $$I=\int_{-\infty}^{+\infty} e^{-x^2} dx$$
Then $$I^2=\bigg(\int_{-\infty}^{+\infty} e^{-x^2} dx\bigg)\bigg(\int_{-\infty}^{+\infty} e^{-y^2} dy\bigg)$$
$$I^2=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}e^{-(x^2+y^2)} dxdy$$

Now change to polar coordinates
$$I^2=\int_{0}^{+2 \pi}\int_{0}^{+\infty}e^{-r^2} rdrd\theta$$
The $\theta$ integral just gives $2\pi$, while the $r$ integral succumbs to the substitution $u=r^2$
$$I^2=2\pi\int_{0}^{+\infty}e^{-u}du/2=\pi$$
So $$I=\sqrt{\pi}$$ and your integral is half this by symmetry



I have always wondered if somebody found it this way, or did it first using complex variables and noticed this would work.
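A quick numerical check of the result (midpoint rule on a truncated domain; the tail beyond $x=10$ is smaller than $e^{-100}$ and so negligible):

```python
import math

def gauss_integral(upper=10.0, n=100_000):
    # Midpoint rule for the integral of e^{-x^2} over [0, upper].
    dx = upper / n
    return sum(math.exp(-((k + 0.5) * dx) ** 2) for k in range(n)) * dx

approx = gauss_integral()
exact = math.sqrt(math.pi) / 2
print(approx, exact)  # both ≈ 0.886227
```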


real analysis - Show that if f is differentiable and f'(x) ≥ 0 on (a, b), then f is strictly increasing

Show that if f is differentiable and f'(x) ≥ $0$ on (a, b), then f is strictly increasing provided there is no sub-interval (c, d) with $c < d$ on which f' is identically zero.



So so far I'm trying to do this by contradiction:




Suppose not; that is, suppose we have a function $f$ with $f\,'(x)\geq 0$ on (a,b), where $f\,'$ is not identically $0$ on any sub-interval of (a,b), and $f$ is not strictly increasing. Since $f\,'\geq 0$, $f$ is non-decreasing, so the failure of strict increase gives points $x_1$ and $x_2$ with a < $x_1$ < $x_2$ < b and f($x_1$) = f($x_2$). Then for all y $\in$ [$x_1$, $x_2$] we have $f(x_1)\leq f(y)\leq f(x_2)=f(x_1)$, which means that f is constant on [$x_1$, $x_2$] and f '(y) = 0 there.



Since f'(y)=$0$ for all y $\in$ [$x_1$, $x_2$] this means f ' is identically $0$ which is a contradiction. Thus f is strictly increasing. $\square$



I'm not sure if there is a better way to do this but any help or comments would be appreciated!

complex numbers - Finding modulus of $sqrt{6} - sqrt{6},i$

I found the real part $=\sqrt{6}$.



But I don't know how to find imaginary part. I thought it was whatever part of the function that involved $i$, with the $i$ removed? Therefore the imaginary part would be $-\sqrt{6}$.



Meaning the modulus is equal to

\begin{align}
\sqrt{ (\sqrt{6})^2 + (-\sqrt{6})^2} = \sqrt{12}.
\end{align}
The answer was $2\sqrt{3}$.
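The two results agree, since $\sqrt{12}=\sqrt{4\cdot 3}=2\sqrt{3}$. A one-line numeric check (an illustration, not part of the original question):

```python
import math

z = complex(math.sqrt(6), -math.sqrt(6))  # sqrt(6) - sqrt(6) i
modulus = abs(z)
print(modulus)  # ≈ 3.4641, equal to both sqrt(12) and 2*sqrt(3)
```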

Friday 24 April 2015

integration - Integrate by parts: $int ln (2x + 1) , dx$



$$\eqalign{
& \int \ln (2x + 1) \, dx \cr
& u = \ln (2x + 1) \cr
& v = x \cr

& {du \over dx} = {2 \over 2x + 1} \cr
& {dv \over dx} = 1 \cr
& \int \ln (2x + 1) \, dx = x\ln (2x + 1) - \int {2x \over 2x + 1} \cr
& = x\ln (2x + 1) - \int 1 - {1 \over {2x + 1}} \cr
& = x\ln (2x + 1) - (x - {1 \over 2}\ln |2x + 1|) \cr
& = x\ln (2x + 1) + \ln |(2x + 1)^{1 \over 2}| - x + C \cr
& = x\ln (2x + 1)^{3 \over 2} - x + C \cr} $$







The answer $ = {1 \over 2}(2x + 1)\ln (2x + 1) - x + C$



Where did I go wrong?



Thanks!


Answer



Starting from your second to last line (your integration was fine, minus a few $dx$'s in your integrals):



$$ = x\ln (2x + 1) + \ln |{(2x + 1)^{{1 \over 2}}}| - x + C \tag{1}$$




Good, up to this point... $\uparrow$.



So the error was in your last equality at the very end:



You made an error by ignoring the fact that the first term with $\ln(2x+1)$ as a factor also has $x$ as a factor, so we cannot simply multiply the arguments of $\ln$ to get $\ln(2x+1)^{3/2}$. What you could have done was first express $x\ln(2x+1) = \ln(2x+1)^x$ and then combine the logarithms as you did; your result will then agree with your text's solution.



Alternatively, we can factor out like terms.



$$ = x\ln(2x + 1) + \frac 12 \ln(2x + 1) - x + C \tag{1}$$
$$= \color{blue}{\bf \frac 12 }{\cdot \bf 2x} \color{blue}{\bf \ln(2x+1)} + \color{blue}{\bf \frac 12 \ln(2x+1)}\cdot {\bf 1} - x + C$$




Factoring out $\color{blue}{\bf \frac 12 \ln(2x + 1)}$ gives us



$$= \left(\dfrac 12\ln(2x + 1)\right)\cdot \left(2x +1\right) - x + C $$ $$= \frac 12(2x + 1)\ln(2x+1) - x + C$$
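One way to double-check the final antiderivative (a numerical sketch of my own, not from the original answer) is to differentiate it with a central difference and compare against the integrand:

```python
import math

def F(x):
    # Candidate antiderivative: (1/2)(2x+1)ln(2x+1) - x
    return 0.5 * (2 * x + 1) * math.log(2 * x + 1) - x

def integrand(x):
    return math.log(2 * x + 1)

h = 1e-6
x = 1.7  # arbitrary test point with 2x + 1 > 0
approx_derivative = (F(x + h) - F(x - h)) / (2 * h)
print(approx_derivative, integrand(x))  # the two values agree closely
```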


trigonometry - Simplify a quick sum of sines




Simplify $\sin 2+\sin 4+\sin 6+\cdots+\sin 88$



I tried using the sum-to-product formulae, but it was messy, and I didn't know what else to do. Could I get a bit of help? Thanks.


Answer



The angles are in arithmetic progression. Use the formula



$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$



See here for two proofs (using trigonometry, or using complex numbers).




In your case, $a=b=2$ and $n=44$.
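The angles here are in degrees. A quick check of the formula against the direct sum (assuming degrees, as the problem suggests):

```python
import math

def direct_sum():
    # sin 2 + sin 4 + ... + sin 88, angles in degrees
    return sum(math.sin(math.radians(2 * k)) for k in range(1, 45))

def formula(a=2.0, b=2.0, n=44):
    a, b = math.radians(a), math.radians(b)
    return math.sin(n * b / 2) / math.sin(b / 2) * math.sin(a + (n - 1) * b / 2)

print(direct_sum(), formula())  # the two values agree
```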


Thursday 23 April 2015

elementary number theory - Proving that $9$ is a divisor of $x in Bbb N$ if the sum of digits of $x$ is divisible by $9$.

Suppose x is a positive integer with $n$ digits, say $x = d_1d_2d_3\ldots d_n.$ If $9$ is a divisor of $d_1 + d_2 + \ldots d_n$, prove then $9$ is a divisor of $x$.



My attempt: suppose $x = 4518.$ Therefore $d_1 = 4, d_2 = 5, d_3 = 1, d_4 = 8$; these added together equal $18$, of which $9$ is a divisor.



With that in mind $4518$ can be written as $4000 + 500 + 10 + 8.$ How do you show that $9$ is a divisor of this entire number from the information given?
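The key fact is that $10^k \equiv 1 \pmod 9$ for every $k$, so a number and its digit sum differ by a multiple of $9$. A small script (illustrative, not part of the original question) checks this:

```python
def digit_sum(x):
    return sum(int(d) for d in str(x))

# x - digit_sum(x) is always a multiple of 9: each digit d at position k
# contributes d * (10^k - 1) to the difference, and 10^k - 1 = 99...9.
for x in [4518, 7, 81, 123456789, 999999]:
    assert (x - digit_sum(x)) % 9 == 0

print(digit_sum(4518), 4518 % 9)  # 18 and 0: both divisible by 9
```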

complex numbers - Question about Euler's formula



I have a question about Euler's formula



$$e^{ix} = \cos(x)+i\sin(x)$$



I want to show



$$\sin(ax)\sin(bx) = \frac{1}{2}(\cos((a-b)x)-\cos((a+b)x))$$




and



$$ \cos(ax)\cos(bx) = \frac{1}{2}(\cos((a-b)x)+\cos((a+b)x))$$



I'm not really sure how to get started here.



Can someone help me?


Answer



$$\sin { \left( ax \right) } \sin { \left( bx \right) =\left( \frac { { e }^{ aix }-{ e }^{ -aix } }{ 2i } \right) \left( \frac { { e }^{ bix }-{ e }^{ -bix } }{ 2i } \right) } =\frac { { e }^{ \left( a+b \right) ix }-e^{ \left( a-b \right) ix }-{ e }^{ \left( b-a \right) ix }+{ e }^{ -\left( a+b \right) ix } }{ -4 } \\ =-\frac { 1 }{ 2 } \left( \frac { { e }^{ \left( a+b \right) ix }+{ e }^{ -\left( a+b \right) ix } }{ 2 } -\frac { { e }^{ \left( a-b \right) ix }+{ e }^{ -\left( a-b \right) ix } }{ 2 } \right) =\frac { 1 }{ 2 } \left( \cos { \left( a-b \right) x-\cos { \left( a+b \right) x } } \right) $$




The same method works for $\cos { \left( ax \right) \cos { \left( bx \right) } } $.






Edit:
$$\int \sin(ax)\sin(bx)\,dx=\frac{1}{2}\int \left[\cos((a-b)x)-\cos((a+b)x)\right]dx=\frac{1}{2}\int \cos((a-b)x)\,dx-\frac{1}{2}\int \cos((a+b)x)\,dx$$



Now, in order to calculate $\int \cos((a+b)x)\,dx$, write
$$t=(a+b)x \;\Rightarrow\; x=\frac{t}{a+b} \;\Rightarrow\; dx=\frac{1}{a+b}\,dt,$$
$$\int \cos((a+b)x)\,dx=\frac{1}{a+b}\int \cos(t)\,dt=\frac{1}{a+b}\sin(t)=\frac{1}{a+b}\sin((a+b)x)+C.$$
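A numeric spot check of the product-to-sum identity derived above, at random points:

```python
import math
import random

random.seed(0)
for _ in range(100):
    a, b, x = (random.uniform(-5, 5) for _ in range(3))
    lhs = math.sin(a * x) * math.sin(b * x)
    rhs = 0.5 * (math.cos((a - b) * x) - math.cos((a + b) * x))
    assert abs(lhs - rhs) < 1e-12

print("identity verified at 100 random points")
```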



Wednesday 22 April 2015

Derivative in 1D as a linear transformation with remainder

There are many topics on the definition of the derivative, but I couldn't find a precise answer to my doubts. In one formulation, the derivative of a function at a given point $x_0$ is a number $a\in\mathbb{R}$ such that:



$$f(x_0+h)=f(x_0) + a\cdot h +r(x_0,h)$$



In this, the $f(x_0) + ah$ term is the "best" linear approximation of $f(x_0+h)$, and $r(x_0,h)$ is some remainder (or correction). Now, if we let $h \to 0$ we want $r(x_0,h) \to 0$. However, that alone does not give the proper definition of the derivative; we must instead require the following:



$$\lim_{h \to 0} \frac{r(x_0,h)}{h}=0$$




which means that $r(x_0,h)$ vanishes "faster" than $h$ when $h \to 0$. Is there a clear explanation why this entire fraction must vanish, rather than just the remainder itself? With many thanks.
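A concrete illustration of why $r(x_0,h)\to 0$ alone is too weak (my own example, with $f(x)=x^2$ and $x_0=1$): for the wrong slope $a=3$ the remainder still vanishes, but $r/h$ does not.

```python
def remainder(a, h, x0=1.0):
    # r(x0, h) = f(x0 + h) - f(x0) - a*h  for f(x) = x^2
    f = lambda x: x * x
    return f(x0 + h) - f(x0) - a * h

for h in [0.1, 0.01, 0.001]:
    # Wrong slope a = 3: r = h^2 - h -> 0, but r/h = h - 1 -> -1, not 0.
    # Correct slope a = 2: r = h^2, and r/h = h -> 0.
    print(h, remainder(3, h), remainder(3, h) / h, remainder(2, h) / h)
```

So every slope makes the remainder vanish; only the true derivative makes it vanish faster than $h$, which is why the fraction $r/h$ must go to zero.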

real analysis - Show that $sum_{n=1}^infty (frac{1}{a_{n+1}} - frac{1}{a_n})$ converges

Let $(a_n)_n$ be a sequence, in which $a_n\geq 0$ for all $n\in\mathbb{N}$, and $\lim_{n\rightarrow\infty} a_n = \infty$. Show that $\sum_{n=1}^\infty (\frac{1}{a_{n+1}} - \frac{1}{a_n})$ converges.



I tried to use the Cauchy Criterion but couldn't conclude anything. Can someone help?
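As a computational hint (with my own choice $a_n=n$): the sum telescopes, so the partial sums are $\frac{1}{a_{N+1}}-\frac{1}{a_1}$, which converge to $-\frac{1}{a_1}$ when $a_n\to\infty$.

```python
def partial_sum(N, a=lambda n: n):
    # sum_{n=1}^{N} (1/a_{n+1} - 1/a_n), which telescopes to 1/a_{N+1} - 1/a_1
    return sum(1 / a(n + 1) - 1 / a(n) for n in range(1, N + 1))

for N in [10, 1000, 100000]:
    print(N, partial_sum(N))  # tends to -1/a_1 = -1 for a_n = n
```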

Tuesday 21 April 2015

real analysis - Diagram used to prove if $A_i$ are countable then $bigcuplimits^{infty}A_i$ is countable



I'm working my way through the very early stages of Abbott's analysis book, and am stuck on a particular segment regarding cardinality—countable sets in particular.
He has not yet introduced the axiom of choice, yet he states in an exercise that the following diagram suffices to prove that an infinite union of a countably infinite collection of countable sets is countable.



The exercise asks the following:





How does the following diagram lead to a proof of theorem $1.5.8$ (on the infinite union of countable sets)?
$$\begin{array}{cccccc}
1 & 3 & 6 & 10 & 15 & \cdots\\
2 & 5 & 9 & 14 & \cdots\\
4 & 8 & 13 & \cdots \\
7 & 12 & \cdots \\
11 & \cdots \\
\vdots
\end{array}$$





So as I said, I'm confused how we can accept any "picture-proof" without even a discussion of the axiom of choice. This is also different from Cantor's diagonalization method, so I'm not sure how this works as a proof.


Answer



As was pointed out in the comments, there is no need for the axiom of choice to prove that there is a bijection of $\mathbb{N}$ to $\mathbb{N}^2$, call it $b$, which is obtained as in your diagram.
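The diagram's numbering can be written down explicitly. The entry in row $r$, column $c$ (both counted from 1) lies on anti-diagonal $d=r+c-1$ and equals $d(d-1)/2+c$; this gives a concrete, choice-free bijection between $\mathbb{N}$ and $\mathbb{N}^2$. A small sketch (the formula is my reconstruction of the diagram):

```python
def label(r, c):
    # Position (row r, column c), counted from 1, sits on anti-diagonal
    # d = r + c - 1 and receives label d(d-1)/2 + c.
    d = r + c - 1
    return d * (d - 1) // 2 + c

# First row and first column reproduce the diagram:
print([label(1, c) for c in range(1, 6)])  # [1, 3, 6, 10, 15]
print([label(r, 1) for r in range(1, 6)])  # [1, 2, 4, 7, 11]

# On each triangular block the labels hit 1..d(d+1)/2 exactly once:
seen = {label(r, c) for r in range(1, 51) for c in range(1, 52 - r)}
assert seen == set(range(1, 51 * 50 // 2 + 1))
```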



However, in order to prove the statement in the title of your question, one needs some kind of choice. This is because we are not talking about $\mathbb{N}^2$ in the general case, and we need to simultaneously select countably many bijections.



Explicitly, let $A_i$ be a sequence of sets, select bijections $f_i:\mathbb{N} \to A_i$, and define
$$g: \mathbb{N} \times \mathbb{N} \to \bigcup_i A_i$$
$$(m,n) \mapsto f_m(n). $$

You can prove that this is surjective, and hence that $\bigcup_i A_i$ is countable. This last argument, as written, could also be believed to depend on the axiom of choice. This can be circumvented, though. Briefly: since $g \circ b$ is surjective you can, for every set $(g \circ b)^{-1}(a)$, pick its least element in order to define a right inverse for $g \circ b$, which gives an injection from $\bigcup_i A_i$ to $\mathbb{N}$. And thus you can either appeal to Schröder–Bernstein (which does not use AC) or to the arguably simpler fact that every subset of $\mathbb{N}$ is countable, and the right inverse is a bijection to a subset of $\mathbb{N}$.



However, the AC in the bold text cannot be circumvented (and, indeed, the result is not true without some kind of choice).



Some more discussion can be seen here, for example.



Now, regarding the subjective point of your question:




So as I said, I'm confused how we can accept any "picture-proof" without even a discussion of the axiom of choice.





What follows is a personal POV:



This is arguably a pedagogic issue. Talking about necessity of the axiom of choice when students have other dire needs in mind when learning analysis is maybe not a good option, and best left for later (or simply for another context). This is true even for the need of specification and its correct usage, say. In my country (and, from what I've seen, this happens elsewhere) there is an underlying assumption by students that analysis is the subject where we learn how to prove things (i.e., everything and anything) rigorously. That is not true, imho, and also a bit damaging for the student. Analysis is the subject where we learn estimates, convergence, regularity etc. We learn to "prove things rigorously" in Mathematics as a whole, and even then it is not the ultimate goal of the subject. Nevertheless, context is more important than by-the-book rigour in determining what should go where and when.



Having said that, it would probably not hurt to make a slight mention about the AC after the statement of the exercise. It would satisfy the curious reader, and not make him stray away or overload him with information: he can simply choose to postpone this momentarily without hindering his momentum on his studies.


arithmetic - Setting up a word problem.




I cannot figure out if this is right because there is no example to verify the calculations, and I have gone over this for a few hours now. I do not need help with programming, just the correct mathematics.



"Your kid brother plans to start a lawn-mowing service this summer, and he wants to earn $10 per hour.



The input is (in feet) the length and width of a rectangular yard and the length and width of the house situated in the yard.



All input is in feet. His average speed for mowing is .20 square yards per second. The amount to charge the customer should be printed as output."



Here is what I have:




 MowYardsPerHour = .20 * 60 * 60;
MowRate = 10;

houseLength = houseLength /3;
houseWidth = houseWidth /3;

yardLength = yardLength /3;
yardWidth = yardWidth /3;

squareYdYard = yardLength * yardWidth;

squareYdHouse = houseLength * houseWidth;

mowRange = squareYdYard - squareYdHouse;

amountDue = (mowRange / MowYardsPerHour) * MowRate;


Is this the right way to calculate the rate?


Answer



The method is right. I would suggest rewriting the program so that one works purely in feet and square feet. That makes the four conversions to square yards unnecessary.

The mow rate is $(9)(0.2)(60)(60)$ square feet per hour. Let $x$ and $y$ be the overall lot dimensions in feet, and $a$ and $b$ the house dimensions in feet. Then we charge
$$\frac{xy-ab}{(9)(0.2)(3600)}\times 10.$$



Remarks: $1.$ But I am not a good one for business advice. The printout from the longer program may be more impressive.



$2.$ It is good practice to use names for the inputs that are more informative than the ones used in the answer.
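Working purely in feet, as suggested, the whole computation fits in a few lines (a sketch; the function and variable names are my own):

```python
def charge(yard_len_ft, yard_wid_ft, house_len_ft, house_wid_ft,
           mow_rate_sqyd_per_sec=0.20, dollars_per_hour=10.0):
    # Mowable area in square feet, then hours at 9 * 0.2 * 3600 sq ft/hour.
    area_sqft = yard_len_ft * yard_wid_ft - house_len_ft * house_wid_ft
    sqft_per_hour = 9 * mow_rate_sqyd_per_sec * 3600  # 9 sq ft per sq yd
    return area_sqft / sqft_per_hour * dollars_per_hour

# Example: a 120 ft x 90 ft yard with a 60 ft x 30 ft house.
print(round(charge(120, 90, 60, 30), 2))  # 9000 sq ft / 6480 sq ft per hour * $10
```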


real analysis - continuity and limit of a function.

Below is the question:





To what degree would the sequence definition of continuity need to be modified in order to be suitable as a definition for the limit of a function?



In other words, if $f$ is a function and if $(x_n)_{n=1}^{\infty}$ is any sequence of domain points such that $(x_n)_{n=1}^{\infty}$ converges to $x_o$, then $\lim_{x\to x_o}f(x)=L$ iff
$\ldots$ ?




{Here the sequence definition of continuity is:





$f(x_0)$ exists;



$\lim_{x \to x_o} f(x)$ exists; and



$\lim_{x \to x_o} f(x)$ =$f(x_o)$.




}



I cannot understand what the "iff" condition should be. Please help...

real analysis - Show the equivalence $x=1$ $⟺$ $-ε<(x-1)<ε$ for all $ε>0$

Let $x\in \mathbb{R}$. Show the equivalence $x=1 \Leftrightarrow-\epsilon\lt (x-1)\lt \epsilon$ for all $\epsilon\gt0$.



So the first thing I thought to do was to prove both sides ($\Leftarrow$ and $\Rightarrow$) since this is an equivalence question.



i) To show $\Leftarrow$:




Suppose $x=1$, we have to show $-\epsilon \lt (x-1)\lt \epsilon$ for all $\epsilon\gt0$.



If $x=1$ then $1\lt (1+\epsilon )$ for every $\epsilon \gt0$.



$(x-1)\lt \epsilon$
$(x-\epsilon )\lt 1$



If $x=1$ then $1-ε<1$




Therefore $-\epsilon \lt 0 = x-1$, since $\epsilon \gt 0$.



Then I would go on to prove the $\Rightarrow$ side, but I'm not sure if I'm on the right lines or not. Would really appreciate the help.

Sum of square root of non perfect square positive integers is always irrational?



Let $S$ be a set of positive integers such that no element of $S$ is a perfect square. Is it true that $\sum_{s_i \in S} \sqrt{s_i}$ is always irrational?




Motivation. Suppose the perimeter of a polygon whose vertices are located on lattice points is an integer. I'm trying to figure out whether this implies that the lengths of all its sides must be integers as well.



Edit: This is a slightly more general question than this one (in particular, primes versus non-squares), but appears to be answered in the same way.


Answer



The answer is "yes"; see here (page 87)


calculus - Proof of $int_0^infty left(frac{sin x}{x}right)^2 mathrm dx=frac{pi}{2}.$



I am looking for a short proof that $$\int_0^\infty \left(\frac{\sin x}{x}\right)^2 \mathrm dx=\frac{\pi}{2}.$$
What do you think?



It is kind of amazing that $$\int_0^\infty \frac{\sin x}{x} \mathrm dx$$ is also $\frac{\pi}{2}.$ Many proofs of this latter one are already in this post.


Answer



Let $f(x)=\max\{0,1-|x|\}$. It is easy to calculate the Fourier transform
$$\hat{f}(\xi)=\int_{-\infty}^{\infty}f(x)e^{-ix\xi}dx=\left(\frac{\sin(\xi/2)}{\xi/2}\right)^2.$$

Taking the inverse Fourier transform, we get
$$\int_{-\infty}^{\infty}\left(\frac{\sin(\xi/2)}{\xi/2}\right)^2e^{ix\xi}d\xi=2\pi f(x),$$
and the result follows upon setting $x=0$ and substituting $u=\xi/2$.



The second integral can be computed in a similar way. Just take $f(x)=\chi_{[-1,1]}(x)$ (the indicator function of the interval $[-1,1]$).






Edit. It might be interesting to note that there are analogous formulas for the sinc
sums

$$\sum_{n=1}^{\infty}\frac{\sin n}{n}=\sum_{n=1}^{\infty}\left(\frac{\sin n}{n}\right)^2=
\frac{\pi}{2}-\frac{1}{2}.$$



I learned about this from the note "Surprising Sinc Sums and Integrals" by Baillie, Borwein, and Borwein (can be found through a quick web search).
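Both the integral and the quoted sinc sums are easy to confirm numerically (a rough midpoint rule for the integral; the neglected tail beyond the cutoff $X$ contributes roughly $1/(2X)$):

```python
import math

def sinc2(x):
    return 1.0 if x == 0 else (math.sin(x) / x) ** 2

# Midpoint rule for the integral of (sin x / x)^2 over [0, X].
X, N = 1000.0, 100_000
dx = X / N
integral = sum(sinc2((k + 0.5) * dx) for k in range(N)) * dx

# Partial sum of the series of (sin n / n)^2, whose tail is about 1/(2N).
series = sum((math.sin(n) / n) ** 2 for n in range(1, 100_001))

print(integral, math.pi / 2)       # agree to within the tail error
print(series, (math.pi - 1) / 2)   # agree very closely
```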


probability - Confusion on Independent Probabilities and Notation

Suppose we have a fair coin. The event Heads will be marked as $H$ and Tails as $T$. So we have, for an arbitrary flip



$$P(H) = 1/2 \\
P(T) = 1/2$$



We wish to calculate the probability of flipping the coin twice and obtaining either





  • both tails

  • both heads

  • one heads, one tails $(\star)$



Now, some of my confusion begins. I'll try to keep a numbered list of the concerns I have.




  1. The problem as stated is ambiguous. For the bullet marked by $(\star)$ it's unclear whether I mean the sequence $(H, T)$, with respect to time, or the unordered pair $\{T, H\}$. Clearly these things have different probabilities.




Regarding my confusion in (1), the author of the question could easily make precise what is meant (as I did in my statement of confusion). However, the original ambiguity leads me to what I think is my deeper confusion on the definition of the probability of independent events. For example—and by Wikipedia's definition—if events $A$ and $B$ are independent, then their joint probability is the product of their individual probabilities. In other words:



$$P\left(A\cap B\right) = P\left(A\right)P\left(B\right)$$



So, this definition seems to correspond to the probability of flipping our coin and getting one of the sequences: $TH$, $HT$. Or we could discard time considerations and consider $HT = TH$, more formally in set notation as $\{T, H\} = \{H, T\}$, then add $P(HT) + P(TH)$ to get $P(H\cap T)$...?




  2. I think I should normally interpret independent probability questions like this as irrespective of time or sequence, except that the number of sequences that are "unordered equivalent" to the unordered pair in question acts as the "weight" of that pair.




Okay, that might have been confusing, but it's because I'm confused. Maybe what I mean can be made clear by building what I think is the full situation of the coin flipping, up to two flips.



\begin{align}
P(T) &= 0.5\\
P(H) &= 0.5 \\
P(TT) = P(T \cap T) &= 0.25 \\
P(HH) = P(H \cap H) &= 0.25 \\
P(HT) = P(H)P(T) = P(T)P(H) = P(TH) &=0.25 \\
P(T \cap H) = P(HT) + P(TH) &=0.5
\end{align}



Where concatenated letters refer to the sequence and the set operator refers to the unordered pair (or set).




  3. Is this notation consistent?



And lastly, I was motivated to ask these questions primarily by a problem I read in Data Science from Scratch, which essentially asks the follow up question: what is the probability of getting two Heads given that the first flip was Heads?




The answer is intuitively $0.5$—the problem becomes only dependent on the second, yet-to-be-flipped coin, but how does the notation work? To set it up, based on the very definition provided by the text, we have



\begin{align}
P(H \cap H | H) = \frac{P
\left((H \cap H)\cap H\right)}{P(H)} = \frac{P(H)P(H)P(H)}{P(H)}
\end{align}



Which doesn't make sense, because there is no third flip of the coin, and the second flip is independent of the first. But if we think of the coin flipping, and conditional probability, in terms of a sequence we could write



\begin{align}
P(HH | H) = \frac{P
\left((HH)\cap H\right)}{P(H)} = \frac{P(H)P(H)P(H)}{P(H)}
\end{align}



But it ends up being the same thing?




  4. I'm missing something here, whether it's notation, definition(s), or conceptually. Will you please guide me, in a rigorous way, to the intuitive result that: given one heads, the probability of getting two is $1/2$?




Thank you!
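The cleanest way I know to untangle this is to enumerate the sample space of ordered outcomes and apply the definition of conditional probability directly (a sketch, using exact fractions):

```python
from fractions import Fraction
from itertools import product

space = list(product("HT", repeat=2))  # ordered outcomes: HH, HT, TH, TT
p = Fraction(1, len(space))            # each sequence has probability 1/4

p_first_heads = sum(p for o in space if o[0] == "H")
p_both_and_first = sum(p for o in space if o == ("H", "H"))

cond = p_both_and_first / p_first_heads
print(cond)  # 1/2
```

Keeping every outcome ordered sidesteps the ordered-vs-unordered ambiguity: the conditioning event "first flip was Heads" is just the set {HH, HT}, and the ratio (1/4)/(1/2) gives the intuitive 1/2.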

absolute value - Complex number with z to the power of 4



I have to find all $z\in \mathbb{C}$ for which BOTH of the following are true:



1) $|z|=1$



2) $|z^4+1| = 1$



I understand that the 1) is a unit circle, but I can't find out what would be the 2).

Calculating the power of $(x+yi)$ seems wrong, because I get a function with the square root of a polynomial of the 8th degree.



I've even tried to equate the two, since their moduli are the same, like this:



$|z|= |z^4+1|$



and then separating two possibilities in which the equation holds, but I get only



$z^4\pm z+1=0$




which I don't know how to calculate.



I guess there is an elegant solution which I am unable to see...



Any help?


Answer



It seems helpful to square both sides. From the second equation, you get $$1 = (z^4 + 1)(\overline{z}^4 + 1) = (z \overline{z})^4 + z^4 + \overline{z}^4 + 1 = z^4 + \overline{z}^4 + 2.$$ That is, $z^4 + \overline{z^4} = -1.$



The first equation lets you write $z = e^{i \theta}$, so you have $$-1 = e^{4i\theta} + e^{-4i \theta} = 2\cos(4\theta);$$ in other words, $\cos(4\theta) = \frac{-1}{2}.$ The solutions of this are well-known.
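Spelling the last step out: $\cos 4\theta = -\tfrac12$ gives $4\theta = \pm\frac{2\pi}{3} + 2\pi k$, i.e. $\theta = \pm\frac{\pi}{6} + \frac{k\pi}{2}$, eight points in all. A numeric confirmation of these solutions:

```python
import cmath
import math

thetas = [s * math.pi / 6 + k * math.pi / 2 for s in (1, -1) for k in range(4)]
for theta in thetas:
    z = cmath.exp(1j * theta)
    # Both conditions hold: |z| = 1 and |z^4 + 1| = 1.
    assert abs(abs(z) - 1) < 1e-12
    assert abs(abs(z**4 + 1) - 1) < 1e-12

print(len({round(t % (2 * math.pi), 9) for t in thetas}))  # 8 distinct angles
```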


calculus - Antiderivative of unbounded function?

One way to visualize an antiderivative is that the area under the derivative is added to the initial value of the antiderivative to get the final value of the antiderivative over an interval.



The Riemann Series Theorem essentially says that you can basically get any value you want out of a conditionally convergent series by changing the order you add up the terms.




Now consider the function: $$f(x) = 1/x^3$$ The function is unbounded over the interval $(-1,1)$, so it is not integrable over this interval.



If you break $f(x)$ into Riemann rectangles over the interval $(-1,1)$ and express the area as a Riemann sum, you essentially get a conditionally convergent series. And because of the Riemann Series Theorem, you can make the sum add up to anything. In other words, you can make the rectangles add up to whatever area you want by changing the order in which you add them up. This is in fact why a function needs to be bounded to be integrable - otherwise the area has infinite values/is undefined.



So my question is, in cases like this, how does the antiderivative "choose" an area? I mean in this case the antiderivative, $\frac{1}{-2x^2}$, chose to increase by $\frac{1}{-2(1)^2} - \frac{1}{-2(-1)^2} = 0.$ In other words, the antiderivative "chose" to assign an area of zero to the area under $1/x^3$ from $-1$ to $1$, even though the Riemann Series Theorem says the area can be assigned any value.



How did the antiderivative "choose" an area of $0$ from the infinite possible values?

number theory - What is $operatorname{gcd}(a,2^a-1)$?

Intuitively, I feel it's $1$, for example $(2, 3), (3,7)$ etc. But I cannot manage to prove it. Using the formula $ax+by=c$ does not make sense because of the power.
Is it possible by induction? If I assume $a=1$, then $\operatorname{gcd}(1, 2^1-1)=1$. Assuming it to be true for $k$, then



$$\operatorname{gcd}(k,2^k-1)=1, \text{ i.e. } kx+(2^k-1)y=1 \text{ for some } x,y\in\mathbb{Z}$$




I'm stuck here. Is it even possible with this method?
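Before attempting induction it may be worth tabulating a few values; a quick experiment of my own (not from the original post) shows the gcd is not always $1$:

```python
import math

for a in range(1, 13):
    print(a, math.gcd(a, 2**a - 1))
# Most entries are 1, but a = 6 gives gcd(6, 63) = 3, since 3 divides
# both 6 and 2^6 - 1 = 63; a = 12 is another such case.
```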

Monday 20 April 2015

limits - Prove $[sin x]' = cos x$ without using $limlimits_{xto 0}frac{sin x}{x} = 1$

I came across this question: How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?




From the comments, Joren said:




L'Hopital Rule is easiest: $\displaystyle\lim_{x\to 0}\sin x = 0$ and $\displaystyle\lim_{x\to 0} x = 0$, so $\displaystyle\lim_{x\to 0}\frac{\sin x}{x} = \lim_{x\to 0}\frac{\cos x}{1} = 1$.




Which Ilya readily answered:




I'm extremely curious how will you prove then that $[\sin x]' = \cos x$





My question: is there a way of proving that $[\sin x]' = \cos x$ without using the limit $\displaystyle\lim_{x\to 0}\frac{\sin x}{x} = 1$? Also, without using anything else $E$ such that the proof of $E$ uses that limit or $[\sin x]' = \cos x$.






All I want is to be able to use L'Hopital in $\displaystyle\lim_{x\to 0}\frac{\sin x}{x}$. And for this, $[\sin x]'$ has to be evaluated first.







Alright... the definition that some requested.



Def of sine and cosine: Take the unit circle centered at the origin of Cartesian coordinates. Take a point that belongs to the circle. Your point is $(x, y)$. It relates to the angle this way: $(\cos\theta, \sin\theta)$, such that if $\theta = 0$ then your point is $(1, 0)$.



Basically, it's a geometrical one. Feel free to use trigonometric identities as you want. They are all provable from geometry.

probability theory - Does there exist a mutivariate inverse?

Let $F:\mathbb{R}\rightarrow \mathbb{R}$ be a distribution function (CDF)



In this case, we can define the inverse $X$ of $F$, and it is a random variable on $(0,1)$ such that $F_X=F$.




Hence, every distribution (CDFs) can be viewed as the cdf of a random variable on $(0,1)$.



Is there an analogous result for joint distribution functions (CDFs)?



That is, for a fixed $n$, does there exists a probability space $(\Omega,\mathscr{F},P)$ such that every joint distribution function $F:\mathbb{R}^n\rightarrow \mathbb{R}$ is $F_X$ for some $n$-dimensional random vector $X$ on $(\Omega,\mathscr{F},P)$?

elementary set theory - Ordered sets $langle mathbb{N} times mathbb{Q}, le_{lex} rangle$ and $langle mathbb{Q} times mathbb{N}, le_{lex} rangle$ not isomorphic




I'm doing this exercise:
Prove that ordered sets $\langle \mathbb{N} \times \mathbb{Q}, \le_{lex} \rangle$ and $\langle \mathbb{Q} \times \mathbb{N}, \le_{lex} \rangle$ are not isomorphic ($\le_{lex}$ means lexicographic order).



I don't know how to start (I know that to prove that ordered sets are isomorphic I would make a monotonic bijection, but how to prove they aren't isomorphic?).


Answer



Recall that the lexicographic order on $A\times B$ is essentially to take $A$ and replace each point with a copy of $B$.



So in only one of these lexicographic orders, namely $\langle \mathbb{Q} \times \mathbb{N}, \le_{lex} \rangle$, does every element have an immediate successor. And having an immediate successor is preserved under isomorphisms.



(Generally, to show two orders are not isomorphic you need to show either there are no bijections between the sets (e.g. $\Bbb Q$ and $\Bbb R$ are not isomorphic because there is no bijection between them) or that there are properties true for one ordered and set and not for the other that are preserved under isomorphisms, like having minimum or being a linear order, or having immediate successors.)


complex numbers - Multiplication of angles on the unit circle?

Preliminary: One reason that in the (algebro-geometric) theory of $\mathbb{Q}$ arbitrary powers $p^q$ are not considered is that there is no finite Euclidean construction which would yield $p^q$ for arbitrary lengths $p$ and $q$. This already and most prominently holds for $q = \frac{1}{3}$: While $\sqrt[2]{p}$ can be constructed, $\sqrt[3]{p}$ cannot.






On the unit circle the sum of two angles $\alpha = \angle NOP$ and $\beta = \angle NOQ$ (= two arc lengths $\overline{NP}$, $\overline{NQ}$) can be constructed:




[figure: construction of the sum of two angles on the unit circle]



which - by the way - has a strong similarity (by the use of parallels) with the multiplication of straight lengths:



[figure: construction of the product of two lengths using parallels]



It turns out that the addition of (arc) lengths on the circle corresponds to the multiplication of two Gaussian numbers in the plane $\mathbb{Q}(i)$: $\alpha \oplus \beta := e^{i\alpha} \cdot e^{i\beta} = e^{i(\alpha + \beta)} $.



If it were possible to unroll the circle onto the straight line, we could add two angles (= arc length) as straight line segments and roll the sum back onto the circle (modulo $2\pi$). Even though unrolling is not possible (by Euclidean constructions), addition of angles is possible (taking something like a magic shortcut, see above).




What would the multiplication of arc lengths on the circle correspond to?
If unrolling were possible we could multiply angles in the same way that we add them: unroll the arc lengths $\alpha, \beta$ onto the straight line, multiply them here (see above) and roll the product back onto the circle (modulo $2\pi$).




How can it be seen that there is no "magic geometric shortcut" - as in the case
of addition - that yields the product of two arbitrary angles modulo $2\pi$?




The only (?) way we can construct multiplication on the unit circle is by restricting the numbers/points/angles/arc lengths to some roots of unity $\omega_m^n = e^{i2\pi n/m}$. We then can multiply two numbers by $\omega_m^p \otimes \omega_m^q := \omega_m^q \oplus \dots \oplus \omega_m^q = \omega_m^{pq}$.

elementary number theory - Proving that $text{gcd}(a,b)=text{gcd}(b,r)$




Let $0\neq a,b\in \mathbb{Z}$. There are integers $q,r$ such that $a=bq+r$ and $0\leq r<|b|$. Prove that $\text{gcd}(a,b)=\text{gcd}(b,r)$.




My attempt:



$$\text{gcd}(a,b)=\varphi$$



So



$$\exists \,m,n \in \mathbb{Z}\,\,\,\,\,\;\;\;\;\;\;\;\;\;\varphi=ma+nb$$







$$\text{gcd}(b,r)=\psi$$



So



$$\exists \,m,n \in \mathbb{Z}\,\,\,\,\,\;\;\;\;\;\;\;\;\;\psi=mb+nr$$



We know that $a=bq+r$ so $r=a-bq$



So gcd$(b,\underbrace{a-bq}_{=r})=\psi=mb+n(\underbrace{a-bq}_{=r})=$





I'm stuck here, and I don't know if my attempt is correct or not



Answer



You have to use the fact that the gcd is the smallest positive linear combination.



Suppose $(a,b)=ma+nb$. Then $(a,b)=m(bq+r)+nb = mr+(qm+n)b$.



Similarly, if $(b,r)=m'b+n'r$ then $(b,r) = m'b+n'(a-bq) = n'a + (m'-n'q)b$.




Therefore you know that $(a,b)$ divides $(b,r)$ and $(b,r)$ divides $(a,b)$, hence they are equal.
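This identity is exactly what powers the Euclidean algorithm: repeatedly replace $(a,b)$ by $(b,r)$. A short sketch, checked against `math.gcd`:

```python
import math

def euclid(a, b):
    # Repeatedly apply gcd(a, b) = gcd(b, r), where a = bq + r with 0 <= r < b.
    while b:
        a, b = b, a % b
    return a

for a, b in [(252, 105), (1071, 462), (17, 5), (100, 75)]:
    assert euclid(a, b) == math.gcd(a, b)

print(euclid(252, 105))  # 21
```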


Evaluation of a series (possibly related to Binomial Theorem)

I have the following series:





$$1 + \frac{2}{3}\cdot\frac{1}{2} + \frac{2\cdot5}{3\cdot6}\cdot\frac{1}{2^2} + \frac{2\cdot5\cdot8}{3\cdot6\cdot9}\cdot\frac{1}{2^3} + \ldots$$




I have to find the value of this series, and I have four options:
(A) $2^{1/3}$ (B) $2^{2/3}$ (C) $3^{1/2}$ (D) $3^{3/2}$



I can't seem to find a general term for this. I tried:



$$S = 1 + \frac{(1 - \frac{1}{3})}{1!}(\frac{1}{2}) + \frac{(1 - \frac{1}{3})(2 - \frac{1}{3})}{2!}(\frac{1}{2})^2 + \frac{(1 - \frac{1}{3})(2 - \frac{1}{3})(3 - \frac{1}{3})}{3!}(\frac{1}{2})^3 + \ldots$$



But this doesn't seem to get me anywhere.




Any help?






This may be a telescoping series, because there was a similar question we solved in class which ended up being telescoping:




$$ \frac{3}{2^3} + \frac{4}{2^4\cdot3} + \frac{5}{2^6\cdot3} + \frac{6}{2^7\cdot5} + \ldots$$




$=\displaystyle\sum\limits_{r=1}^\infty\frac{r+2}{2^{r+1}r(r+1)}$



$=\displaystyle\sum \bigg(\frac{1}{2^r r} - \frac{1}{2^{r+1}(r+1)}\bigg) = \frac{1}{2}$




$P.S:$ This problem was included in my set of questions for Binomial Theorem, which is why I thought it might be related to it.
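For what it's worth, the rewritten terms above are exactly those of the generalized binomial series $(1-x)^{-\alpha}=\sum_k \frac{\alpha(\alpha+1)\cdots(\alpha+k-1)}{k!}x^k$ with $\alpha=2/3$ and $x=1/2$, so the partial sums can be compared against the options numerically (my own check, not part of the original question):

```python
# Each term is the previous one times ((3k - 1) / (3k)) * (1/2),
# matching 1 + (2/3)(1/2) + (2*5)/(3*6)(1/2)^2 + ...
partial = term = 1.0
for k in range(1, 60):
    term *= (3 * k - 1) / (3 * k) * 0.5
    partial += term

print(partial)       # ≈ 1.5874
print(2 ** (2 / 3))  # option (B): 2^{2/3} ≈ 1.5874
```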

sequences and series - What's the closed form of this :$sum_{n=1}^{+infty}frac{(-1)^nphi(n)}{n}$



I have checked some links related to the sum below, which involves the Euler totient function, to see whether it has any known closed form, but I didn't find anything. So my question here is:





Question:
What is the closed form of this :$\sum_{n=1}^{+\infty}\frac{(-1)^n\phi(n)}{n}$ , where $\phi(n)$ is Euler totient function ?





Answer



As stated by reuns in the comments, for any $s$ with a large enough real part we have



$$ \sum_{n\geq 1}\frac{\varphi(n)}{n^s} = \prod_{p}\left(1+\frac{\varphi(p)}{p^s}+\frac{\varphi(p^2)}{p^{2s}}+\frac{\varphi(p^3)}{p^{3s}}+\ldots\right)= \prod_{p}\frac{p^s-1}{p^s-p}$$
by Euler's product, hence
$$ \sum_{n\geq 1}\frac{\varphi(n)}{n^s} = \prod_p \frac{1-\frac{1}{p^{s}}}{1-\frac{1}{p^{s-1}}}=\frac{\zeta(s-1)}{\zeta(s)}$$
$$ \sum_{\substack{n\geq 1\\n\text{ odd}}}\frac{\varphi(n)}{n^s} = \prod_{p>2} \frac{1-\frac{1}{p^{s}}}{1-\frac{1}{p^{s-1}}}=\frac{\zeta(s-1)}{\zeta(s)}\cdot\frac{2^s-2}{2^s-1}$$
$$ \sum_{n\geq 1}\frac{(-1)^n \varphi(n)}{n^s}=\frac{\zeta(s-1)}{\zeta(s)}\left(1-2\cdot\frac{2^s-2}{2^s-1}\right) =-\frac{\zeta(s-1)}{\zeta(s)}\cdot\frac{2^s-3}{2^s-1}$$
but the series in the LHS is convergent only for $\text{Re}(s)>2$.
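The identity $\sum_{n\ge 1}\varphi(n)/n^s=\zeta(s-1)/\zeta(s)$ can be sanity-checked numerically in the region of convergence, say at $s=3$, where both sides equal $\zeta(2)/\zeta(3)$ (a rough truncation of my own):

```python
import math

N = 100_000

# Sieve Euler's totient up to N.
phi = list(range(N + 1))
for p in range(2, N + 1):
    if phi[p] == p:  # p is prime: no smaller prime has touched it
        for m in range(p, N + 1, p):
            phi[m] -= phi[m] // p

lhs = sum(phi[n] / n**3 for n in range(1, N + 1))
zeta2 = math.pi**2 / 6
zeta3 = sum(1 / n**3 for n in range(1, N + 1))
print(lhs, zeta2 / zeta3)  # the truncated sum matches zeta(2)/zeta(3) closely
```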



Sunday 19 April 2015

elementary number theory - find the last digit - should I use $\bmod\ 10$ to show this?



When asked to find the last digit of a number given in $a^b$ or $a^{b^c}$ form, it is easy to do using basic congruences, Fermat's little theorem, or Euler's phi function. But in the exam the question is of this type: $$1!+2!+\cdots+99!$$ and I did it this way.



$1!=1\equiv 1 \pmod{10}$
$2!=2 \equiv 2 \pmod{10}$
$3!=6 \equiv 6 \pmod{10}$
$4!=24 \equiv 4 \pmod{10}$, and
$n!=n\cdot(n-1)\cdot(n-2)\cdots 4\cdot3\cdot2\cdot1 \equiv 0 \pmod{10}$ for $n \geq 5$



Hence adding up all the factorials and the remainders we get



$1!+2!+3!+4!+\cdots+99! \equiv 1+2+6+4 \equiv 13 \equiv 3 \pmod{10}$




Hence, $$1!+2!+\cdots+99! \equiv 3 \pmod{10}$$



My question is: should I use $\bmod\ 10$ here, given that no other conditions are stated in the problem? The remainder we find is the digit in the units place of the number, I think. Any help is appreciated.


Answer



Yes, of course. In this case it is very simple, as you noticed, since



$$1!+2!+\cdots+99!= 1!+2!+3!+4!+10\cdot N = 3 +10\cdot M$$
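A brute-force check in Python (just a sketch to confirm the congruence):

```python
from math import factorial

# Every n! with n >= 5 is divisible by 10, so only 1!+2!+3!+4! = 33
# contributes to the units digit of the full sum.
assert all(factorial(n) % 10 == 0 for n in range(5, 100))

total = sum(factorial(n) for n in range(1, 100))
assert total % 10 == 3
```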


set theory - For every infinite $S$, $|S|=|S\times S|$ implies the Axiom of choice

How to prove the following conclusion:




[For any infinite set $S$,there exists a bijection $f:S\to S \times S$] implies the Axiom of choice.



Can you give a proof without the theory of ordinal numbers?

algebra precalculus - exponential equation



$$\sqrt{(5+2\sqrt6)^x}+\sqrt{(5-2\sqrt6)^x}=10$$



So I have squared both sides and got:



$$(5-2\sqrt6)^x+(5+2\sqrt6)^x+2\sqrt{1^x}=100$$



$$(5-2\sqrt6)^x+(5+2\sqrt6)^x+2=100$$




I don't know what to do now


Answer



You don't have to square the equation in the first place.



Let $y = \sqrt{(5+2\sqrt{6})^x}$, then $\frac{1}{y} = \sqrt{(5-2\sqrt{6})^x}$. Hence you have $y + \frac{1}{y} = 10$ i.e. $y^2 + 1 = 10y$ i.e. $y^2-10y+1 = 0$.



Hence, $(y-5)^2 =24 \Rightarrow y = 5 \pm 2 \sqrt{6}$.



Hence, $$\sqrt{(5+2\sqrt{6})^x} = 5 \pm 2\sqrt{6} \Rightarrow x = \pm 2$$




(If you plug in $x = \pm 2$, you will get $5+2\sqrt{6} + 5-2\sqrt{6} $ which is nothing but $10$)
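A quick numerical check (a Python sketch, not part of the original answer) that both roots satisfy the original equation:

```python
import math

def lhs(x):
    # left-hand side: sqrt((5+2*sqrt(6))^x) + sqrt((5-2*sqrt(6))^x)
    return math.sqrt((5 + 2 * math.sqrt(6))**x) + math.sqrt((5 - 2 * math.sqrt(6))**x)

for x in (2, -2):
    assert abs(lhs(x) - 10) < 1e-9
```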


convergence divergence - Prove that $x^n/n!$ converges to $0$ for all $x$

Prove that $a_n=x^n/n! \to 0$ for all $x$



Here is what I tried, but it seems to lead nowhere.



Choose $\epsilon > 0$. We need to show that there exists $N\in \mathbb{N}$ such that for all $n>N$ we have $|a_n| < \epsilon$



So $|x^n/n!| < \epsilon \implies |x|^n < n!\cdot \epsilon$ (since $n!$ is positive we can drop its absolute value). Hence $|x|/\epsilon^{1/n} < (n!)^{1/n}$.

Now I am stuck on solving this for $n$, and hence on finding $N$ ...
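Not a proof, but a numerical illustration (a Python sketch) of why the claim should hold: once $n > |x|$ the ratio $|a_{n+1}/a_n| = |x|/(n+1)$ drops below $1$, so the terms first grow and then collapse.

```python
from math import factorial

x = 10.0
terms = [x**n / factorial(n) for n in range(101)]

assert terms[10] > 1000    # terms grow while n < x
assert terms[100] < 1e-50  # ... and then decay rapidly toward 0
```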

geometry - What can be learned for number theory from geometrical constructions (and vice versa)?

Even though this question of mine was not so well received at MO I'd like to pick two examples and make a question out of them here.




Consider these two pairs of geometrical constructions which yield the same arithmetical results:





  1. Constructing the half $x/2$ for one given positive real $x$ in two different ways.


  2. Constructing the product $mn$ for two given positive integers $m, n$ in two different ways.






It's noteworthy that in each of these pairs one of the constructions does make use of circles while the other one doesn't.



Example 1: Constructing the half $x/2$



You can create the half $\mathsf{X}/2$ of a positive real $\mathsf{X}$ by two different Euclidean constructions:





  1. (figure: first construction of $x/2$)



  2. (figure: second construction of $x/2$)





Example 2: Constructing the product $mn$



You can create the product $nm$ of two positive integers $n, m$ by two different Euclidean constructions:




  1. creating a rectangle
    (figure: rectangle construction)

    and counting the number of unit squares that fit into the rectangle


  2. creating a line segment
    (figure: line segment construction)

    and counting the number of unit lengths that fit into the line segment








In both cases it's not obvious that the two constructions always yield the same result, but for the sake of the theories (i.e. Euclid's – later Descartes' – geometry and number theory, resp. arithmetic geometry) it's essential.






My questions are:






  1. How did (possibly) Euclid formulate the two statements above, i.e. that the two pairs of constructions always yield the same results?


  2. How did (possibly) Euclid prove these statements?


  3. What are the deep insights which we gain from understanding why these two pairs of constructions always yield the same result? (The
    same point in Example 1, the same positive integer in Example 2.)



calculus - Deconstructing $0^0$












It is well known that $0^0$ is an indeterminate form. One way to see this is to notice that



$$\lim_{x\to0^+}\;0^x = 0\quad,$$



yet,



$$\lim_{x\to0}\;x^0 = 1\quad.$$



What if we make both terms go to $0$, that is, how much is




$$L = \lim_{x\to0^+}\;x^x\quad?$$



By taking $x\in \langle 1/k\rangle_{k\in\mathbb{N*}}\,$, I concluded that it equals $\lim_{x\to\infty}\;x^{-1/x}$, but that's not helpful.


Answer



This is, unfortunately, not very exciting. Rewrite $x^x$ as $e^{x\log x}$ and take that limit. One l'Hôpital later, you get 1.
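Numerically (a Python sketch, not part of the original answer), $x^x$ indeed approaches $1$ at the rate $x\log x$ suggests:

```python
import math

# x^x = exp(x * log x), and x * log x -> 0 as x -> 0+, so x^x -> e^0 = 1.
for x in (1e-2, 1e-4, 1e-8):
    assert abs(x**x - 1.0) < 5 * abs(x * math.log(x))
```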


combinatorics - Give a combinatorial proof for a multiset identity



I'm asked to give a combinatorial proof of the following: $\binom{\binom n2}{2} = 3\binom{n}{4} + n\binom{n-1}{2}$.



I know $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ and $\left(\!\binom{n}{k}\!\right) = \binom{n+k-1}{k}$, but I'm at a loss as to what to do with the $\binom{\binom n2}{2}$.



Can someone point me in the right direction as to how to proceed with writing a combinatorial proof for this identity?



Answer



LHS can be interpreted as counting the ways in which you can create two different pairs of two different elements, taken from a set of $n$ toys; while the pairs must be different, they need not be disjoint: they might have a toy in common.



RHS counts the same collection of couples of pairs in a different way: a couple of pairs is of one of two kinds, disjoint, $(a,b)(c,d)$, or overlapping, $(a,b)(a,c)$. Remember that within each pair the two toys are different, and the two pairs themselves are different. A disjoint couple can be chosen as follows: first choose $4$ toys in $\binom{n}{4}$ ways, then choose one of the $3$ ways to partition the $4$ toys into two pairs (to see that there are $3$ ways, focus on one of the four: it must be paired with one of the three remaining ones, and that determines the partition). An overlapping couple can be chosen by first picking the toy $a$ common to both pairs in $n$ ways, then choosing $2$ toys from the remaining $n-1$ to be $\{b,c\}$; the order does not matter here, so this choice can be made in $\binom{n-1}{2}$ ways.
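The identity is also easy to confirm by brute force for small $n$ (a Python sketch, independent of the combinatorial argument):

```python
from math import comb

# C(C(n,2), 2) == 3*C(n,4) + n*C(n-1,2) for a range of n.
for n in range(2, 60):
    assert comb(comb(n, 2), 2) == 3 * comb(n, 4) + n * comb(n - 1, 2)
```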


Saturday 18 April 2015

complex analysis - A weird value obtained by using Cauchy Principal Value on $\int_{-\infty}^{\infty}\frac{1}{x^2}dx$

so I'm trying to evaluate the integral in the title,




$$\int_{-\infty}^{\infty}\frac{1}{x^2}dx$$



by using complex plane integration. I've chosen my contour to be an infinite half circle with its diameter on the real axis (integration is performed ccw).



When $R$ tends to infinity, the arc part of the contour yields zero, and so we are left with the part along the real axis, which is the one I'm trying to evaluate.



There are no other poles inside my contour, only a second-order pole at $z=0$ lying on it. The residue at this pole is $0$, so the integral sums to zero (using the Cauchy principal value).



However, the integrand is always strictly positive, so this doesn't make sense.




Any help would be appreciated

Friday 17 April 2015

real analysis - Find limit $\lim_{x\rightarrow 0}\frac{2^{\sin(x)}-1}{x} = 0$



Can someone provide me with a hint on how to evaluate this limit?
$$\lim_{x\rightarrow 0}\frac{2^{\sin(x)}-1}{x} = 0 $$
Unfortunately, I can't use l'Hôpital's rule.
I was thinking about something like this:
$$\lim_{x\rightarrow 0}\frac{2^{\sin(x)}-1}{x} =\\\lim_{x\rightarrow 0}\frac{\ln(e^{2^{\sin(x)}})-1}{\ln(e^x)} $$ but I don't see how to continue this line of thinking (of course, if it is correct).



Answer



Hint:



For $\sin x\ne0$



$$\dfrac{2^{\sin x}-1}x=\dfrac{2^{\sin x}-1}{\sin x}\cdot\dfrac{\sin x}x$$



$$\implies\lim_{x\to0}\dfrac{2^{\sin x}-1}x=\lim_{x\to0}\dfrac{2^{\sin x}-1}{\sin x}\cdot\lim_{x\to0}\dfrac{\sin x}x$$
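Following the hint, the first factor tends to $\ln 2$ and the second to $1$, so the limit should be $\ln 2$. A quick numerical check (a Python sketch, not part of the original answer):

```python
import math

def f(x):
    return (2**math.sin(x) - 1) / x

# The product of the two limits in the hint is ln(2) * 1 = ln(2).
for h in (1e-2, 1e-4, 1e-6):
    assert abs(f(h) - math.log(2)) < 10 * h
```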


real analysis - Calculate $\lim_{n\to{+}\infty}{(\sqrt{n^{2}+n}-n)}$











Could someone help me through this problem?
Calculate $\displaystyle\lim_{n \to{+}\infty}{(\sqrt{n^{2}+n}-n)}$


Answer



We have:



$$\sqrt{n^{2}+n}-n=\frac{(\sqrt{n^{2}+n}-n)(\sqrt{n^{2}+n}+n)}{\sqrt{n^{2}+n}+n}=\frac{n}{\sqrt{n^{2}+n}+n}$$
Therefore:




$$\sqrt{n^{2}+n}-n=\frac{1}{\sqrt{1+\frac{1}{n}}+1}$$



And since: $\lim\limits_{n\to +\infty}\frac{1}{n}=0$



It follows that:



$$\boxed{\,\,\lim\limits_{n\to +\infty}(\sqrt{n^{2}+n}-n)=\dfrac{1}{2}\,\,}$$
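A numerical check of both the algebraic rewriting and the limit (a Python sketch, not part of the original answer):

```python
import math

def a(n):
    return math.sqrt(n**2 + n) - n

for n in (10, 1_000, 1_000_000):
    # matches the rewritten form 1 / (sqrt(1 + 1/n) + 1) ...
    assert abs(a(n) - 1 / (math.sqrt(1 + 1 / n) + 1)) < 1e-9
    # ... and approaches 1/2 (the gap shrinks like 1/(8n))
    assert abs(a(n) - 0.5) < 1 / n
```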


Thursday 16 April 2015

elementary set theory - Order type in finite sets



This is a general question regarding the power (cardinal number) and type (ordinal number) of a set. These definitions are taken from Kolmogorov and Fomin (1970). I can see why, in the infinite case, a given power admits (uncountably) many different order types. For instance, $\aleph_0$ corresponds to the usual order $\omega$ of $\mathbb{N}$, which is
$$
1,2,3,\dots,
$$
but another order type can be written as

$$
1,3,5,\dots,2,4,6\dots.
$$
However, it is claimed that in the finite case there is a unique type for a given power. I do not understand this, because we can follow the same argument. For instance, take a set with power $2n$; the usual order type is
$$
1,2,\dots,2n
$$
in which $2n$ is the maximal element. However, another order type is
$$
1,3,5,\dots,2n-1,2n,\dots 2

$$
in which $2$ is the maximal element. I do not see why this argument should not hold. Thanks for any help


Answer



Order type does not care about the naming of the elements; i.e., the orders



$$1,2,\dots,2n \>\>\>\>\text{ and }\>\>\>\>1,3,5,\dots,2n-1,2n,\dots 2$$
are the same. Just create the natural bijection (mapping least element to least, second least to second least, etc.) between the sets, and it is straightforward to check that it is an order-preserving bijection.



Your question should really be why the order types of
$$

1,2,3,\dots
\>\>\>\>\text{ and }\>\>\>\>
1,3,5,\dots,2,4,6\dots
$$
are different.



Answer: The easy way to see this is to observe that in $1,2,3,\dots$ each element has only finitely many elements less than it. However, in $1,3,5,\dots,2,4,6\dots$ the element $2$ has infinitely many elements less than it. Thus any order-preserving bijection between $1,2,3,\dots$ and $1,3,5,\dots,2,4,6\dots$ faces the impossible task of mapping some element of the domain to the element $2$ in the co-domain.



Especially if we map least element to least, second least to second least etc. we will not create a function which is onto.




Formal proof that there is no order-preserving bijection: Assume $f$ is an order-preserving bijection from $1,2,3,\dots$ to $1,3,5,\dots,2,4,6\dots$, and assume that $f(n)=2$. As $1,3,5,\ldots,2n-1$ are $n$ numbers less than $2$ (in the right-hand order type), we see that $f^{-1}(1),f^{-1}(3),\ldots,f^{-1}(2n-1)$ are $n$ distinct positive integers which are less than $n$. This is a contradiction, since there are only $n-1$ positive integers less than $n$.


algebra precalculus - Verify trigonometric equation $\frac{(\sec{A}-\csc{A})}{(\sec A+\csc A)}=\frac{(\tan A-1)}{(\tan A+1)}$




How would I verify the following identity?



$$\frac{(\sec{A}-\csc{A})}{(\sec A+\csc A)}=\frac{(\tan A-1)}{(\tan A+1)}$$



I simplified it to



$$\frac{(\sin{A}-\cos{A})}{(\sin{A} \cos{A})}\div\frac{(\sin{A}+\cos{A})}{(\sin{A}\cos{A})}$$


Answer



Hint: Start by multiplying top and bottom on the left by $\sin A$.



Wednesday 15 April 2015

probability theory - Does $\lim_{n\to\infty}\frac{\eta_n}{n}$ for a Poisson distribution exist?

How can I determine whether $\lim_{n\to\infty}\frac{\eta_n}{n}$ exists, where $\eta_n$ has a Poisson distribution with $\lambda = n$?

calculus - Extreme values of a continuous function on a closed connected domain



Suppose a one-variable continuous function has only one extreme value on a closed interval and it is a local minimum; then we can prove it is the global minimum on the interval.



Suppose a one-variable continuous function has only two extreme values on a closed interval; then we can prove that one of them is a local maximum and the other is a local minimum, and that the local minimum is strictly less than the local maximum.



In the two-variable case there are counterexamples.




$f(x,y) = {x^2} + {y^3} - 3y$ is a continuous function which has only one extreme value on the plane and it is a local minimum, but it is not the global minimum on the plane.



$f(x,y) = {x^4} + {y^4} - {(x + y)^2}$ is a continuous function which has only two extreme values on the plane, but both of them are local minima.



I wonder if there exists a continuous function on a connected closed domain which has only two extreme values, one of them a local maximum and the other a local minimum, but with the local minimum strictly greater than the local maximum.



In the one-variable case I use this proof. Suppose $a < b$, $f(a)$ is a local maximum, $f(b)$ is a local minimum, and $f(a) \leqslant f(b)$. Consider the global minimum $f(c)$ on $[a,b]$. If $c = a$, then $f$ is constant near $a$, which is a contradiction. If $c = b$, then $f(b) = f(a)$, a contradiction again. So $a < c < b$, and then $f(c)$ is a third extreme value. Hence the local minimum is less than the local maximum.



But in the two-variable case this doesn't work. I think there may exist a counterexample, but I don't know how to find it. Thanks to John for giving a nice example!




$f(x, y) = x + 2 \sin x + x y^2$ (on a properly chosen domain)


Answer



$$
f(x,y)=1/x+x+1/y+y
$$



According to Wolfram Alpha, the local min/max are:
(figure: plot of the min and max points)




You can see that the local minimum is greater than the local maximum. :)
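A quick check of the claimed critical values (a Python sketch; the critical points $(\pm1,\pm1)$ come from setting the partial derivatives $1 - 1/x^2$ and $1 - 1/y^2$ to zero):

```python
def f(x, y):
    return 1 / x + x + 1 / y + y

local_min = f(1, 1)    # the local minimum value
local_max = f(-1, -1)  # the local maximum value

assert local_min == 4.0 and local_max == -4.0
assert local_min > local_max  # the min value strictly exceeds the max value
```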


elementary set theory - Fun quiz: where did the infinitely many candies come from?



Story 1:




Let there be a bowl $A$ with countably infinite many of candies indexed by $\mathbb{N}$. Let bowl $B$ be empty.




  • After $1/2$ unit of time, we take candy number 1 and 2 from $A$ and put them in $B$. Then we eat candy 1 from bowl $B$.

  • After $1/4$ unit of time, we take candy number 3 and 4 from $A$ and put them in $B$. Then we eat candy 2 from bowl B.

  • After $(1/2)^n$ unit of time, we take candy number $2n-1$ and $2n$ from $A$ and put them in $B$. Then we eat candy $n$ from bowl $B$.



What happens after 1 unit of time? How many candies are there left in $A$, how many in $B$, how many have you eaten? Answer:





There are no candies in $A$ left. For any candy corresponding to a given natural number $k$, one can compute the time it was eaten. Similarly there are no candies in $B$. We ate as many candies as the cardinality of $\mathbb{N}$.




Story 2:



Let there be a bowl $A$ with countably infinite many of candies indexed by $\mathbb{N}$. Let bowl $B$ be empty.




  • After $1/2$ unit of time, we take candy number 1 and 2 from $A$ and put them in $B$. Then we eat candy 1 from bowl $B$.


  • After $1/4$ unit of time, we take candy number 3 and 4 from $A$ and put them in $B$. Then we eat candy 3 from bowl B.

  • After $(1/2)^n$ unit of time, we take candy number $2n-1$ and $2n$ from $A$ and put them in $B$. Then we eat candy $2n-1$ from bowl $B$.



What happens after 1 unit of time? How many candies are there left in $A$, how many in $B$, how many have you eaten? Answer:




As before, there are no candies in $A$ left. All the candies labelled by even numbers are in $B$, so there are countably many. We ate all the candies labelled by odd numbers, so we ate countably many.





Question:



The main question is, in both stories we do essentially the same thing: take two candies from $A$ to $B$, then eat one from $B$. The difference is that in the second case we have infinitely many candies left in $B$, but in the first case it was empty after 1 unit of time. So how did this happen?



Imagine this situation: let $X$ be conducting the eating according to story 1 with his labeling, let $Y$ be watching. But $Y$ secretly has a different labeling scheme in his mind, where the candy number $k$ in $X$'s labeling is candy number $2k-1$ in $Y$'s labeling. Then after unit time, according to $Y$, there should be infinitely many candies in $A$ but according to $X$ it should be empty. So the reality depends on the spectator?


Answer



So, as you said, according to person X there will be no candies left. Let's take a look, then, at what Y observes.



From person Y's point of view:
First X moves candies 1 and 3 (candies 1 and 2 in X's labeling) from bowl A to B, then eats candy 1. Next, X moves candies 5 and 7 (3 and 4 in X's labeling) to B and eats candy 3. This continues for infinitely many steps. At that point person Y has seen every odd candy eaten. Note that according to Y, the even candies never existed. So Y, as well, sees that there are no candies left in the bowls.




You may also be interested in these two links:
A strange puzzle having two possible solutions
Ross–Littlewood paradox


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...