Saturday 31 October 2015

general topology - Is a local diffeomorphism with nice boundary values a diffeomorphism?



Let $f:\mathbb{D}=\{z\in\mathbb{C}\mid |z|<1\}\rightarrow\mathbb{C}$ be a local diffeomorphism (i.e. an immersion) from an open disk in the plane to the plane.




The only situation I can imagine where $f$ is not injective is that $f$ sends $\mathbb{D}$ to a "self-overlapping" region, in which case $f$ cannot have continuous injective boundary values. But it seems non-trivial to me whether nice boundary values can guarantee injectivity:



Question. Assume that $f$ extends to a continuous map $\overline{\mathbb{D}}\rightarrow\mathbb{C}$ such that the boundary restriction $f|_{\partial\mathbb{D}}$ is injective and continuous, so that (by the Jordan Curve Theorem) it maps $\partial\mathbb{D}$ homeomorphically onto a Jordan curve which is the boundary of a simply connected domain $\Omega\subset\mathbb{C}$. Is it then true that $f$ is a homeomorphism from $\mathbb{D}$ to $\Omega$?


Answer



Yes, this is true. The argument uses that the map $f : \mathbb{D} \to \Omega$ is a proper local diffeomorphism. It follows that $f$ is a covering map, by a theorem of elementary topology which says that every proper local homeomorphism from a locally compact space to a Hausdorff space is a covering map. Since $\Omega$ is simply connected and $\mathbb{D}$ is connected, any covering map from $\mathbb{D}$ onto $\Omega$ is a homeomorphism.


algebra precalculus - Find the $n^{th}$ term and the sum to $n$ terms of the following series




Find the $n^{th}$ term and sum to $n$ terms of the following series.
$$1\cdot3+2\cdot4+3\cdot5+\cdots$$



My Attempt:



Here,
$n^{th}$ term of $1+2+3+\cdots=n$



$n^{th}$ term of $3+4+5+\cdots=n+2$




Thus,



$n^{th}$ term of the series $1\cdot3+2\cdot4+3\cdot5+\cdots=n(n+2)$



$$t_n=n^2+2n$$



If $S_n$ is the sum to $n$ terms of the series, then
$$S_n=\sum t_n$$
$$=\sum (n^2+2n)$$




How do I proceed?


Answer



\begin{align}\sum_{i=1}^n (i^2+2i) &=\sum_{i=1}^n i^2 + 2\sum_{i=1}^n i\\
&= \frac{n(n+1)(2n+1)}{6}+2\cdot \frac{n(n+1)}{2} \end{align}



You might want to factorize the terms to simplify things.
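As a quick check of the algebra (my own addition, not part of the original answer): factorizing the two terms above over the common denominator gives $\frac{n(n+1)(2n+7)}{6}$, which can be compared against the brute-force sum.

```python
# Check (not from the original answer): the sum above factors as n(n+1)(2n+7)/6.
def s_closed(n):
    # n(n+1)(2n+1)/6 + n(n+1), combined over the common denominator 6
    return n * (n + 1) * (2 * n + 7) // 6

def s_brute(n):
    # direct sum of t_k = k^2 + 2k = k(k+2)
    return sum(k * k + 2 * k for k in range(1, n + 1))
```

For example, $S_3 = 1\cdot3 + 2\cdot4 + 3\cdot5 = 26$ matches $\frac{3\cdot4\cdot13}{6} = 26$.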


integration - How can I evaluate $\int_0^\infty \frac{\sin x}{x} \,dx$? [possibly a duplicate]




How can I evaluate $\displaystyle\int_0^\infty \frac{\sin x}{x} \, dx$? (Let $\displaystyle \frac{\sin0}{0}=1$.)




I proved that this integral exists using the Cauchy criterion.



However, I can't determine the exact value of this integral.


Answer



This is the famous Dirichlet integral; its value is $\frac{\pi}{2}$.
http://en.wikipedia.org/wiki/Dirichlet_integral
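A brute-force numerical check (added here for illustration; the truncation point and step count are arbitrary choices of mine) agrees with the value $\pi/2$:

```python
import math

# Midpoint-rule approximation of the Dirichlet integral int_0^inf sin(x)/x dx.
# Truncating at x = 400 leaves a tail of magnitude at most about 2/400 = 0.005.
def sinc_integral(upper=400.0, steps=800_000):
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h  # midpoint avoids x = 0 (where sin(x)/x -> 1)
        total += math.sin(x) / x * h
    return total
```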


calculus - Find $\lim_{x \to 0} \frac{\tan(\tan x) - \sin (\sin x)}{ \tan x - \sin x}$



Find $$\lim_{x\to 0} \dfrac{\tan(\tan x) - \sin (\sin x)}{ \tan x - \sin x}$$



$$= \lim_{x \to 0} \dfrac{\frac{\tan x \tan (\tan x)}{\tan x}- \frac{\sin x \sin (\sin x)}{\sin x}}{ \tan x - \sin x} = \lim_{x \to 0} \dfrac{\tan x - \sin x}{\tan x - \sin x} = 1$$



But the correct answer is $2$. Where am I wrong$?$



Answer



This is a nice case for composition of Taylor series. Using
$$\tan(x)=x+\frac{x^3}{3}+\frac{2 x^5}{15}+O\left(x^7\right)$$
$$\sin(x)=x-\frac{x^3}{6}+\frac{x^5}{120}+O\left(x^7\right)$$
$$\tan(\tan(x))=x+\frac{2 x^3}{3}+\frac{3 x^5}{5}+O\left(x^7\right)$$
$$\sin(\sin(x))=x-\frac{x^3}{3}+\frac{x^5}{10}+O\left(x^7\right)$$ then
$$\frac{\tan(\tan( x)) - \sin (\sin( x))}{ \tan (x) - \sin (x)}=\frac {x^3+\frac{x^5}{2}+O\left(x^7\right) } {\frac{x^3}{2}+\frac{x^5}{8}+O\left(x^7\right) }=2+\frac{x^2}{2}+O\left(x^4\right)$$ which shows the limit and also how it is approached.
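The expansion can also be confirmed numerically; per the series above, the ratio behaves like $2 + x^2/2$ near $0$ (a quick check I added, not part of the original answer):

```python
import math

# The ratio from the question; by the Taylor computation it is 2 + x^2/2 + O(x^4).
def ratio(x):
    num = math.tan(math.tan(x)) - math.sin(math.sin(x))
    den = math.tan(x) - math.sin(x)
    return num / den
```

At $x = 0.05$ the ratio is close to $2 + 0.05^2/2 = 2.00125$.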


Friday 30 October 2015

For $ngeq 6$, the smallest composite number that is not a factor of $n!$ is $2p$, where $p$ is the smallest prime bigger than $n$?



I'm pretty sure the following statement is true:




For $n\geq 6$, the smallest composite number that is not a factor of
$n!$ is $2p$, where $p$ is the smallest prime bigger than $n$.





But I'm having trouble proving it.



Here is an attempt by induction. The property is true when $n=6$, and assume it's true for $n$. If $n+1$ isn't prime, the induction step is trivial, for the smallest prime bigger than $n+1$ is equal to the smallest prime bigger than $n$; call this prime $p$. But by hypothesis all composites smaller than $2p$ divide $n!$, hence $(n+1)!$.



The harder case is when $n+1$ is prime. Let $q$ denote the next prime, i.e. the smallest prime bigger than $n+1$. We know by hypothesis that all composites smaller than $2(n+1)$ divide $n!$, hence $(n+1)!$. We also know $2(n+1)$ divides $(n+1)!$. To finish, we need to show that all composites $m$ strictly between $2(n+1)$ and $2q$ divide $(n+1)!$.



This is where I get stuck. It certainly helps that the ratio $\frac{q}{n+1}$ can't be larger than 2 (by Bertrand's postulate; I imagine the bound can be sharpened but I know embarrassingly little number theory). It's also obvious that the prime factors of any such composite $m$ are all smaller than $q$. What I don't quite see is an argument to ensure the powers of those prime factors aren't too large.



Feel free to give alternative approaches, rather than by induction, if there is a much simpler proof I've overlooked.



Answer



A composite $m$ with $2(n+1)\lt m \lt 2q$ cannot be twice a prime because $q$ is the next prime after $n+1$, so must be able to be factored into $ab$ with $a,b \lt n+1$. Then $a,b$ are separately factors of $(n+1)!$ and $ab$ divides $(n+1)!$ unless $a=b$. We will have $a^2$ divide $(n+1)!$ unless $a \gt \frac {n+1}2$ because $a,2a$ are both factors. But then $a^2 \gt \frac {(n+1)^2}4 \ge 4(n+1) \ge 2q$ as long as $n \ge 15$ by Bertrand's postulate. We can check the cases up to $15$ by hand to complete the proof.
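The statement is easy to confirm by brute force for small $n$ (an independent check I added; it does not replace the proof above):

```python
# For each n, find the smallest composite not dividing n! and compare with 2p.
def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def smallest_composite_not_dividing(n):
    fact = 1
    for i in range(2, n + 1):
        fact *= i
    m = 4
    while is_prime(m) or fact % m == 0:
        m += 1
    return m

def next_prime(n):
    p = n + 1
    while not is_prime(p):
        p += 1
    return p
```

For $n = 6$ the search returns $14 = 2 \cdot 7$, as expected.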


elementary number theory - If $p=1\cdot 3 \cdot 5 \cdot 7 \cdot 9 \cdots 2011$, then the units digit of $p$ is five



I know there is a $5$ in the sequence, but I don't know how and why its presence determines the final units digit of the product.



Answer



Hint: All odd numbers divisible by $5$ end in $5$.
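To see the hint in action (a one-liner I added): multiplying the odd numbers up to $2011$ and reducing mod $10$ at each step, the units digit locks at $5$ as soon as the factor $5$ appears.

```python
# Units digit of p = 1 * 3 * 5 * ... * 2011: once a factor of 5 enters,
# the product is an odd multiple of 5, so its last digit stays 5.
p_units = 1
for k in range(1, 2012, 2):
    p_units = (p_units * k) % 10
```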


calculus - Series - $\sum_{i=1}^\infty \left(\frac{5}{12}\right)^i$ - geometric series?



I have to solve - $$\sum_{i=1}^\infty \left(\frac{5}{12}\right)^i$$ - geometric series?



The geometric series formula I know is - $$\sum_{i=0}^\infty x^i= \frac{1}{1-x}$$ (for $|x|<1$)




However in my assignment, the series starts from $i=1$.



The solution I have is - $$\sum_{i=1}^\infty \left(\frac{5}{12}\right)^i = \frac{1}{1-\frac{5}{12}}-1$$



Can you explain please why is that the solution?


Answer



HINT:
$$\sum_{i=0}^\infty x^i= \frac{1}{1-x} =x^0 + \sum_{i=1}^\infty x^i$$
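Numerically (my own check): the partial sums approach $\frac{1}{1-5/12} - 1 = \frac{5}{7}$, which is exactly the given solution.

```python
# Partial sums of the geometric series with ratio 5/12, starting at i = 1.
r = 5 / 12
partial = sum(r ** i for i in range(1, 60))  # 60 terms is plenty: r^60 is tiny
closed = 1 / (1 - r) - 1                     # subtract the i = 0 term, which is 1
```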


multivariable calculus - Confused about directional derivatives



I have the following question:





Calculate the directional derivative of the function at the point and in direction indicated.



$f(x, y) = \arctan(xy)$ at $(1, 2)$ along the line $y = 2x$ in the direction of increasing $x$.




When I looked at the solution, I was confused about the way they solved it:
(solution image not reproduced)



I understand that we need a gradient vector at (1,2) and some unit vector to give the direction.




I also understand that since it is in the direction of increasing $x$, the $x$-component will be positive.
However, how did they get that the direction is given by the vector $\langle 1,2\rangle$?
Did they set $x = t$ and then get the parametric equations $x = t$, $y = 2t$?
If so, are the coefficients of $t$ the vector that gives us the direction?



Also, I'm so confused about the use of parametric equations with directional derivatives. Could someone explain relationship between parametric equations and directional derivatives?


Answer



If you are going along the line $y = 2x$ then you find its direction vector by looking at $\vec{OP}$ where $O = (0,0)$ and $P$ is a point on the line. Take $P = (1,2)$; then the direction vector is $\vec{OP} = \langle1,2 \rangle$.
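A finite-difference check I added (the numbers here are mine, not from the solution image): the gradient of $f(x,y)=\arctan(xy)$ at $(1,2)$ dotted with the unit vector along $\langle 1,2\rangle$ agrees with a direct difference quotient along the line $y=2x$.

```python
import math

def f(x, y):
    return math.atan(x * y)

u = (1 / math.sqrt(5), 2 / math.sqrt(5))  # unit vector in the direction <1, 2>
grad = (2 / 5, 1 / 5)                     # (y, x) / (1 + (xy)^2) evaluated at (1, 2)
deriv = grad[0] * u[0] + grad[1] * u[1]   # directional derivative = 4 / (5*sqrt(5))

h = 1e-6                                  # small step along the line y = 2x
fd = (f(1 + h * u[0], 2 + h * u[1]) - f(1, 2)) / h
```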


Thursday 29 October 2015

calculus - Sum : $\sum \sin \left( \frac{(2\lfloor \sqrt{kn} \rfloor +1)\pi}{2n} \right)$.



Calculate : $$ \sum_{k=1}^{n-1} \sin \left( \frac{(2\lfloor \sqrt{kn} \rfloor +1)\pi}{2n} \right).$$


Answer



Lemma (Summation by Parts) (1)
$$\sum_{k=a}^b f_k\Delta g_k=f_kg_k\Bigg|_{k=a}^{b+1}-\sum_{k=a}^b g_{k+1}\Delta f_k=f_{b+1}g_{b+1}-f_ag_a-\sum_{k=a}^b g_{k+1}\Delta f_k$$
$\displaystyle \text{with difference operator }\Delta :\qquad \Delta f_k=f_{k+1}-f_k$



My solution




Let $j=\left\lfloor \sqrt{kn}\right\rfloor$, so that $j\le \sqrt{kn}<j+1$.

Therefore if $\left\lfloor\dfrac{j^2}{n}\right\rfloor+1\le k\le \left\lfloor\dfrac{(j+1)^2}{n}\right\rfloor$ then $\lfloor\sqrt{kn}\rfloor=j$



Note:



$j=0\Rightarrow \left\lfloor\dfrac{j^2}{n}\right\rfloor+1=1$



$j=n-2\Rightarrow \left\lfloor\dfrac{(j+1)^2}{n}\right\rfloor=n-2$




$j=n-1\Rightarrow \left\lfloor\dfrac{(j+1)^2}{n}\right\rfloor=n>n-1$



So the sum becomes



\begin{align*}S&=\sum_{k=1}^{n-1}\sin\left(\dfrac{\left(2\lfloor\sqrt{kn}\rfloor+1\right)\pi}{2n}\right)\\ &=\sin\left(\dfrac{\left(2\lfloor\sqrt{(n-1)n}\rfloor+1\right)\pi}{2n}\right)+\sum_{k=1}^{n-2}\sin\left(\dfrac{\left(2\lfloor\sqrt{kn}\rfloor+1\right)\pi}{2n}\right)\\ &=\sin\left(\frac{(2n-1)\pi}{2n}\right)+\sum_{j=0}^{n-2}\left(\left\lfloor\frac{(j+1)^2}{n}\right\rfloor-\left\lfloor\frac{j^2}{n}\right\rfloor\right)\sin\left(\frac{(2j+1)\pi}{2n}\right)\\ &=\sin\left(\frac{\pi}{2n}\right)+\sum_{j=0}^{n-2}\Delta\left(\left\lfloor\frac{j^2}{n}\right\rfloor\right)\sin\left(\frac{(2j+1)\pi}{2n}\right)\end{align*}
Using (1), we get
\begin{align*}S&=\sin\left(\frac{\pi}{2n}\right)+\left[\sin\left(\frac{(2j+1)\pi}{2n}\right)\left\lfloor\frac{j^2}{n}\right\rfloor\right]_{j=0}^{n-1}\\ &{}\quad -\sum_{j=0}^{n-2}\left\lfloor\frac{(j+1)^2}{n}\right\rfloor\left(\sin\left(\frac{(2j+3)\pi}{2n}\right)-\sin\left(\frac{(2j+1)\pi}{2n}\right)\right)\\ &=(n-1)\sin\left(\frac{\pi}{2n}\right)-2\sin\left(\frac{\pi}{2n}\right)\sum_{j=0}^{n-2}\left\lfloor\frac{(j+1)^2}{n}\right\rfloor\cos\left(\frac{(j+1)\pi}{n}\right)\\ &=(n-1)\sin\left(\frac{\pi}{2n}\right)-2\sin\left(\frac{\pi}{2n}\right)\underbrace{\sum_{j=1}^{n-1}\left\lfloor\frac{j^2}{n}\right\rfloor\cos\left(\frac{j\pi}{n}\right)}_{=A}\end{align*}



For the sum $A$, reversing the order of summation ($j\mapsto n-j$) gives
\begin{align*}A&=\sum_{j=1}^{n-1}\left\lfloor\frac{j^2}{n}\right\rfloor\cos\left(\frac{j\pi}{n}\right)\\ &=\sum_{j=1}^{n-1}\left\lfloor\frac{(n-j)^2}{n}\right\rfloor\cos\left(\frac{(n-j)\pi}{n}\right)\\ &=-\sum_{j=1}^{n-1}\left\lfloor n-2j+\frac{j^2}{n}\right\rfloor\cos\left(\frac{j\pi}{n}\right)\\ &=-A+\sum_{j=1}^{n-1}(2j-n)\cos\left(\frac{j\pi}{n}\right)\\ \Rightarrow A&=\frac{1}{2}\sum_{j=1}^{n-1}(2j-n)\cos\left(\frac{j\pi}{n}\right)\end{align*}




We get



$\displaystyle\cos\left(\frac{j\pi}{n}\right)=\dfrac{1}{2\sin\left(\frac{\pi}{2n}\right)}\left[\sin\left(\frac{(2j+1)\pi}{2n}\right)-\sin\left(\frac{(2j-1)\pi}{2n}\right)\right]=\dfrac{1}{2\sin\left(\frac{\pi}{2n}\right)}\Delta\left[\sin\left(\frac{(2j-1)\pi}{2n}\right)\right]$



Continuing with (1):



$\displaystyle A =\left.\dfrac{(2j-n)}{4\sin\left(\frac{\pi}{2n}\right)}\sin\left(\frac{(2j-1)\pi}{2n}\right)\right|_{j=1}^{n}-\dfrac{1}{4\sin\left(\frac{\pi}{2n}\right)}\sum_{j=1}^{n-1}2\sin\left(\frac{(2j+1)\pi}{2n}\right)$



\begin{align*}A&=\frac{n-1}{2}+\dfrac{1}{4\sin^2\left(\frac{\pi}{2n}\right)}\sum_{j=1}^{n-1}\Delta\left[\cos\left(\frac{j\pi}{n}\right)\right]\\ &=\frac{n-1}{2}+\dfrac{1}{4\sin^2\left(\frac{\pi}{2n}\right)}\cdot\left.\cos\left(\frac{j\pi}{n}\right)\right|_{j=1}^n\\ &=\frac{n-1}{2}-\dfrac{1+\cos\left(\frac{\pi}{n}\right)}{4\sin^2\left(\frac{\pi}{2n}\right)}\\&=\frac{n-1}{2}-\dfrac{2\cos^2\left(\frac{\pi}{2n}\right)}{4\sin^2\left(\frac{\pi}{2n}\right)}\end{align*}




Therefore:



\begin{align*}S&=(n-1)\sin\left(\frac{\pi}{2n}\right)-2\sin\left(\frac{\pi}{2n}\right)A\\ &=(n-1)\sin\left(\frac{\pi}{2n}\right)-(n-1)\sin\left(\frac{\pi}{2n}\right)+\dfrac{\cos^2\left(\frac{\pi}{2n}\right)}{\sin\left(\frac{\pi}{2n}\right)}\\ &=\boxed{\displaystyle\cot\left(\frac{\pi}{2n}\right)\cos\left(\frac{\pi}{2n}\right)}\end{align*}
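The boxed closed form can be checked numerically against the original sum (a verification I added, using exact integer square roots for the floor):

```python
import math

# Direct evaluation of the sum versus cot(pi/2n) * cos(pi/2n) = cos^2 / sin.
def s_direct(n):
    return sum(
        math.sin((2 * math.isqrt(k * n) + 1) * math.pi / (2 * n))
        for k in range(1, n)
    )

def s_closed(n):
    t = math.pi / (2 * n)
    return math.cos(t) ** 2 / math.sin(t)
```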


elementary set theory - There exists an injection from $X$ to $Y$ if and only if there exists a surjection from $Y$ to $X$.



Theorem. Let $X$ and $Y$ be sets with $X$ nonempty. Then (P) there exists an injection $f:X\rightarrow Y$ if and only if (Q) there exists a surjection $g:Y\rightarrow X$.



For the P $\implies$ Q part, I know you can get a surjection $Y\to X$ by mapping $y$ to $x$ if $y=f(x)$ for some $x\in X$ and mapping $y$ to some arbitrary $\alpha\in X$ if $y\in Y\setminus f(X)$. But I don't know about the Q $\implies$ P part.



Could someone give an elementary proof of the theorem?


Answer



There is no truly elementary proof, since this is in fact independent of the "constructive" part of the usual axioms of set theory.




However if one has a basic understanding of the axiom of choice then one can easily construct the injection. The axiom of choice says that if we have a family of non-empty sets then we can choose exactly one element from each set in our family.



Suppose that $g\colon Y\to X$ is a surjection then for every $x\in X$ there is some $y\in Y$ such that $g(y)=x$. I.e., the set $\{y\in Y\mid g(y)=x\}$ is non-empty.



Now consider the family $\Bigg\{\{y\in Y\mid g(y)=x\}\ \Bigg|\ x\in X\Bigg\}$, by the above sentence this is a family of non-empty sets, and using the axiom of choice we can choose exactly one element from every set. Let $y_x$ be the chosen element from $\{y\in Y\mid g(y)=x\}$. Let us see that the function $f(x)=y_x$ is injective.



Suppose that $y_x=y_{x'}$, in particular this means that both $y_x$ and $y_{x'}$ belong to the same set $\{y\in Y\mid g(y)=x\}$ and this means that $x=g(y_x)=g(y_{x'})=x'$, as wanted.







Some remarks:



The above proof uses the full power of the axiom of choice; we in fact construct a right inverse to the surjection $g$. However we are only required to construct an injection from $X$ into $Y$, which need not be an inverse of $g$ -- this is known as The Partition Principle:




If there exists a surjection from $Y$ onto $X$ then there exists an injection from $X$ into $Y$




It is still open whether or not the partition principle implies the axiom of choice, so it might be possible with a bit less than the whole axiom of choice.




However the axiom of choice is definitely needed. Without the axiom of choice it is consistent that there exist two sets $X$ and $Y$ such that $Y$ has both an injection into $X$ and a surjection onto $X$, but there is no injection from $X$ into $Y$.
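For finite sets no choice axiom is needed, and the construction in the answer can be carried out literally; the sketch below (my own illustration) picks one element from each fibre of a surjection and checks that the result is injective:

```python
# A surjection g: Y -> X; choosing one preimage y_x per x gives an injection f.
Y = ["a", "b", "c", "d", "e"]
X = [0, 1, 2]
g = {"a": 0, "b": 1, "c": 0, "d": 2, "e": 1}

f = {}
for x in X:
    fibre = [y for y in Y if g[y] == x]  # nonempty because g is onto
    f[x] = fibre[0]                      # the "choice" of one element
```

Distinct $x$ have disjoint fibres, so $f$ is automatically injective, and $g(f(x)) = x$ shows $f$ is a right inverse of $g$.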


algebra precalculus - Trouble with algebraic manipulation: proving and factorisation

I have been trying to prove the following for more than a day, but to no avail. I have tried several approaches, partial/full expansion, factorisation, etc., but I just cannot prove it. I would love indirect nudges and hints rather than direct answers, since I wish to work it out myself, as I'm sure many questioners prefer.



I am supposed to prove, starting from LHS that



$\frac{1}{4}(m)(m+1)(m^2+m+2)+(m+1)[(m+1)^2+1]=\frac{1}{4}(m+1)(m+2)(m^2+3m+4)$



I zoomed in on the fact that $\frac{1}{4}$ and $(m+1)$ can be factored out, i.e.



$LHS = \frac{1}{4}(m)(m+1)(m^2+m+2)+(m+1)[(m+1)^2+1]$




= $\frac{1}{4}(m+1)[m(m^2+m+2)+4(m^2+2m+2)]$



Seeing that $(m+2)$ was also factorised on the RHS (what I'm working towards), I sought to also factorise $(m+2)$ out. I mistakenly thought I saw an opportunity to do this:



= $\frac{1}{4}(m+1)[m(m^2+m+2)+2(2)(m^2+2m+2)]$



= $\frac{1}{4}(m+1)[m(m^2+m+2)+2(2m^2+4m+4)]$



And now, unfortunately, I am unable to factorise because $(m^2+m+2)\not= (2m^2+4m+4)$. In any case, none of them are equal to the expression $(m^2+3m+4)$ that I'm trying to obtain (see RHS).




I feel like I'm missing something really obvious yet crucial here, some technique of manipulation or something. Would really appreciate any hints or nudges in the right direction. Thank you!
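Not a hint toward the factorisation, just reassurance that the identity is true (my own addition): checking both sides with exact rational arithmetic for many values of $m$ suffices, since two degree-4 polynomials that agree at more than 5 points are equal.

```python
from fractions import Fraction

def lhs(m):
    return Fraction(1, 4) * m * (m + 1) * (m * m + m + 2) + (m + 1) * ((m + 1) ** 2 + 1)

def rhs(m):
    return Fraction(1, 4) * (m + 1) * (m + 2) * (m * m + 3 * m + 4)
```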

Wednesday 28 October 2015

number theory - Partitioning integers into sets

Try to find out if it is possible to partition 6 consecutive positive integers into two sets, $A_1$ and $A_2$ such that the product of the elements in $A_1$ is equal to the product of the elements in $A_2$?



Also, check if this is possible for 20 consecutive positive integers?



I said:




Let $K=$ positive integer, then we need to partition:



$$K, K+1, K+2, K+3, K+4, K+5 $$



This wasn't helping me, so I worked with numbers.



I started out with:



$$ 1,2,3,4,5,6 $$




and I can't seem to partition it.



So I worked with:



$$ 2,3,4,5,6,7 $$



I tried:



$A_1=\{4,7,3\}$, with product $84$
$A_2=\{2,5,6\}$, with product $60$




and I tried more cases, but trial and error does not seem to be effective. Are there any strategies or patterns anybody else notices?
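Trial and error can at least be automated (my own brute force, not an answer to the "why"): equal products force the total product to be a perfect square, so it suffices to test every subset.

```python
from itertools import combinations
from math import prod

# For 6 consecutive integers starting at k: does any subset have product
# equal to the product of its complement? Equivalent to (subset product)^2
# equalling the total product.
def has_equal_split(k):
    nums = range(k, k + 6)
    total = prod(nums)
    for r in range(1, 6):
        for subset in combinations(nums, r):
            if prod(subset) ** 2 == total:
                return True
    return False
```

No split exists for any starting value $k \le 200$, consistent with the fact that a product of consecutive integers greater than 1 is never a perfect square.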

limits - Find $\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}$




Find $\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}$




If I divide the whole expression by the maximum power, i.e. $x^2$, I get $$\lim_{x\to -\infty} \frac{(1-\frac1x)^2}{\frac1x+\frac{1}{x^2}}$$

Numerator tends to $1$ ,Denominator tends to $0$



So I get the answer as $+\infty$



But when I plot the graph it tends to $-\infty$



What am I missing here? "Can someone give me the precise steps that I should write in such a case." Thank you very much!



NOTE: I cannot use L'hopital for finding this limit.


Answer




Hint: Write $$\frac{x^2\left(1-\frac{1}{x}\right)^2}{x\left(1+\frac{1}{x}\right)}$$
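Evaluating the expression for increasingly negative $x$ makes the sign visible (a numeric illustration I added): the numerator grows like $x^2 > 0$ while the denominator $x+1$ is negative, so the quotient goes to $-\infty$.

```python
# f(x) = (x - 1)^2 / (x + 1) at a few large negative inputs.
def f(x):
    return (x - 1) ** 2 / (x + 1)

samples = [f(-10.0), f(-100.0), f(-10_000.0)]  # roughly -13.4, -103.0, -10003.0
```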


discrete mathematics - Proof by Induction: $n! > 2^{n+1}$ for all integers $n \geq 5.$

I have to answer this question for my math class and am having a little trouble with it.



Use mathematical induction to prove that $n! > 2^{n+1}$ for all integers $n \geq 5.$




For the basis step: $(n = 5)$



$5! = 120$



$2^{5+1} = 2^6 = 64$



So $120 > 64$, which is true.



For the induction step, this is as far as I've gotten:




Prove that $(n! > 2^{n+1}) \rightarrow \left((n+1)! > 2^{(n+1)+1}\right)$



Assume $n! > 2^{(n+1)}$
Then $(n+1)! = (n+1) \cdot n!$



After this, I'm stuck. Any assistance would be appreciated.



Thanks!
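While the induction gives the real proof, a direct numerical check for a range of $n$ (my addition) is quick reassurance; note the inequality genuinely fails at $n = 4$.

```python
import math

# n! > 2^(n+1) should hold for all n >= 5 and fail at n = 4 (24 < 32).
holds_from_5 = all(math.factorial(n) > 2 ** (n + 1) for n in range(5, 101))
fails_at_4 = math.factorial(4) < 2 ** 5
```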

geometry - Interesting property of triangle and ellipses formed on its edges

I need to prove the following theorem. It seems to hold, and it seems intuitive, but I think I lack the geometry-proof skills to prove it.



Clearer version of the theorem: For any triangle $ABC$ on a plane, we draw a circle $O$ such that $AB$ is its diameter. Placing a point $P$ inside $O$ will result in a triangle (either $PAB$, $PBC$ or $PAC$) of smaller perimeter than $ABC$.



Previous (base theorem) version: For a triangle on a plane, we can draw an ellipse based on each of its edges such that the endpoints of that edge are the focal points and the common point of the two other edges lies on that ellipse (it describes every triangle with the same "base" edge and every combination of the two other edges such that they form a triangle with the same perimeter as the first one). We can draw 3 such ellipses for a triangle - let's call their union $E$. Prove that the circles circumscribed on each edge (the edge being the diameter of its circle) of that triangle lie inside $E$.



Where did it come from?

I've found an interesting answer to the problem of finding 3 points from a set of points such that the triangle built on them has as small a perimeter as possible - it's here. I want to prove that this method works, as it's not "clear" to me at all. As it's easier to prove anything about a Delaunay triangulation by referencing Voronoi diagrams, I tried that method - with no good results.



The theorem above is a bit more than this, but it seems to hold (I used visualization software to check it and couldn't create a counterexample).



EDIT: Here are examples. As you can see, without loss of generality, we choose the $AB$ edge and draw the circle such that $AB$ is its diameter. Now this circle is always inside the union of ellipses.




  1. Example 1

  2. Example 2

  3. Example 3


Tuesday 27 October 2015

elementary set theory - Are two segments order isomorphic



I thought $[0,2]$ and $[0,1] \bigcup (2,3]$ are order isomorphic, since I can write the isomorphism $f:[0,2]\rightarrow [0,1] \bigcup (2,3]$ : $f(x)=x $ when $x\in [0,1]$
$f(x)=x+1$ when $x\in (1,2]$.



On the other hand, in $[0,2]$ every element has an immediate following element, and in $[0,1] \bigcup (2,3]$ the element 1 doesn't have an immediate following element.



I am a bit confused. Are they order isomorphic or not, and why?


Answer



No, in a real interval no number ever has an immediately following element. No matter which number you try to nominate as the immediately following element, there will be another one that is even closer.




$[0,2]$ and $[0,1]\cup(2,3]$ are indeed order isomorphic, for example by the correspondence you define.


complex analysis - A definite integral of the exponential of cos

During some calculs, I came across the following definite integral,
$$\int_0^{2\pi} \frac{\sin^2 \theta}{1+b\cos\theta} \exp(a\cos\theta) d\theta$$
with $a$ and $b$ constants. I tried to look it up in Gradshteyn and Ryzhik, for example in Section 3.93: Trigonometric and exponential functions of trigonometric functions, but found nothing helpful. I also tried the Poisson integral via complex analysis; apparently the exponential function makes it a little particular - had it been a $\log$ function instead of $\exp$, that would work, but with the exponential I have not yet found a solution.



Thanks in advance if anyone has any idea :)

elementary number theory - Uniqueness of Extended Euclidean Algorithm



I'm doing a bit of extra reading on the Extended Euclidean Algorithm and had a side-thought that I couldn't find an answer to in the book.



I understand that the Extended Euclidean Algorithm can express the GCD of two numbers as a linear combination of those two numbers.




My question is: is the linear combination acquired unique? (My gut is telling me that it is not, but I'd like some verification, as I cannot produce a proof of uniqueness.)



If the answer is 'No', then my follow-up question is: "What is so special about the specific linear combination acquired by the EEA?"


Answer



Given two integers $a$ and $b$, the Extended Euclidean algorithm calculates the $\gcd$ and the coefficients $x$ and $y$ of Bézout's identity: $ax+by=\gcd(a,b)$. These coefficients are not unique (see linked article).



The specific coefficients created by the algorithm satisfy these conditions: $$|x|<|\frac{b}{\gcd(a,b)}|$$
$$|y|<|\frac{a}{\gcd(a,b)}|$$


number theory - Can the extended euclidean algorithm be used to calculate a multiplicative inverse in this case?

$e = 503456131$ is a prime number. It is relatively prime to the number $b = 10000123400257488$



If I use the extended euclidean algorithm (using this python implementation) to calculate the coefficients on the linear combination of e and b that gives 1, I obtain $-906226286492069\cdot e + 45623955 \cdot b = 1$.



Is it not correct that the coefficient on e, namely $-906226286492069$, is a multiplicative inverse of e (mod b)?



I thought this was correct, but $-906226286492069\cdot e$ divided by $b$ gives remainder $10000096$.



What am I not seeing here?
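One way to investigate (my own suggestion, not an answer from the original thread): recompute the Bezout coefficients from scratch with an iterative extended Euclid rather than trusting the transcribed pair. If $\gcd(e,b)=1$, the coefficient $x$ reduced mod $b$ is the inverse of $e$, and $ex \bmod b$ must equal $1$; a remainder of $10000096$ would indicate the coefficients (or their transcription) are off.

```python
# Iterative extended Euclid: maintains old_r = e*old_s + b*old_t throughout.
def egcd(a, b):
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

e = 503456131
b = 10000123400257488
g, x, y = egcd(e, b)
inv = x % b if g == 1 else None  # multiplicative inverse of e mod b, if coprime
```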

algebra precalculus - Power summation of $n^3$ or higher











If I want to find the formula for $$\sum_{k=1}^n k^2$$
I would do the following: $$(n+1)^2 = a(n+1)^3 + b(n+1)^2 + c(n+1) - an^3 - bn^2 - cn$$
After calculations I will match the coefficients.
What equation should I use, if I want to calculate the coefficients of the formula that gives the sum $$\sum_{k=1}^n k^3$$ or higher?



EDIT: Please do not refer me to other places; not trying to blame anyone or to be rude, but I find your sources too complicated for my level, and I am unable to learn from them.



If you can, please explain how I am supposed to find an equation that matches my needs; not the equation itself, only the technique I should be working with.




Regards, Guy


Answer



I think I know what strategy was used for the proof you are referring to for sum of squares. The same idea works for sum of cubes, but is more painful. We try to find numbers $a$, $b$, $c$, and $d$ such that
$$(n+1)^3=[a(n+1)^4 +b(n+1)^3+c(n+1)^2+d(n+1)]-[an^4+bn^3+cn^2+dn].$$
To find these numbers, there are some shortcuts. But ultimately you will probably need to compute $(n+1)^2$ (familiar), $(n+1)^3=n^3+3n^2+3n+1$, and $(n+1)^4=n^4+4n^3+6n^2+4n+1$.



We get a system of $4$ equations in $4$ unknowns, but the solution turns out to be surprisingly uncomplicated. To give a start, on the left the coefficient of $n^3$ is $1$. On the right it is $4a$, so $a=\frac{1}{4}$. As a further check when you do the work, it should turn out that $d=0$.



After you have found the remaining constants $b$ and $c$, do the "collapsing" or "telescoping" argument that you seem to have seen for $\sum_{k=1}^n k^2$.
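For reference (my completion of the exercise, so treat these numbers as claimed rather than derived in the answer): the system yields $a=\frac14$, $b=\frac12$, $c=\frac14$, $d=0$, and the telescoping then reproduces the classical formula $\sum_{k=1}^n k^3 = \left(\frac{n(n+1)}{2}\right)^2$. Exact rational arithmetic confirms both:

```python
from fractions import Fraction

# Candidate coefficients for the telescoping identity for cubes.
a = Fraction(1, 4)
b = Fraction(1, 2)
c = Fraction(1, 4)
d = Fraction(0)

def poly(n):
    return a * n ** 4 + b * n ** 3 + c * n ** 2 + d * n

# (n+1)^3 = poly(n+1) - poly(n), and summing telescopes to poly(n) = sum of cubes.
identity_ok = all(poly(n + 1) - poly(n) == (n + 1) ** 3 for n in range(0, 30))
sum_ok = all(poly(n) == sum(k ** 3 for k in range(1, n + 1)) for n in range(1, 30))
```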



discrete mathematics - Find a formula for $\sum\limits_{k=1}^n \lfloor \sqrt{k} \rfloor$




I need to find a clear formula (without summation) for the following sum:



$$\sum\limits_{k=1}^n \lfloor \sqrt{k} \rfloor$$



Well, the first few elements look like this:



$1,1,1,2,2,2,2,2,3,3,3,...$



In general, we have $(2^2-1^2)$ $1$'s, $(3^2-2^2)$ $2$'s etc.




Still, I have absolutely no idea how to generalize it for the first $n$ terms...


Answer



Hint



We have
$$p\le\sqrt k< p+1\iff p^2\le k<(p+1)^2\Rightarrow \lfloor \sqrt{k} \rfloor=p$$
so



$$\sum\limits_{k=1}^n \lfloor \sqrt{k} \rfloor=\sum\limits_{p=1}^{\lfloor \sqrt{n+1}\rfloor-1} \sum_{k=p^2}^{(p+1)^2-1}\lfloor \sqrt{k} \rfloor=\sum\limits_{p=1}^{\lfloor \sqrt{n+1}\rfloor-1}p(2p+1)$$
Now use the fact

$$\sum_{k=1}^n k=n(n+1)/2$$
and
$$\sum_{k=1}^n k^2=n(n+1)(2n+1)/6$$
to get the desired closed form.
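Putting the hint together into a complete formula (my own assembly, which also covers the partial last block when $n+1$ is not a perfect square): with $m=\lfloor\sqrt n\rfloor$, the full blocks contribute $\sum_{p=1}^{m-1}p(2p+1)$ and the leftover terms contribute $m(n-m^2+1)$.

```python
import math

def s_closed(n):
    m = math.isqrt(n)
    # sum_{p=1}^{m-1} p(2p+1) = 2*sum p^2 + sum p, both in closed form
    full = (m - 1) * m * (2 * m - 1) // 3 + (m - 1) * m // 2
    return full + m * (n - m * m + 1)

def s_direct(n):
    return sum(math.isqrt(k) for k in range(1, n + 1))
```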


abstract algebra - Prove that all groups with three elements are isomorphic



The Cayley table clearly shows that there is only one possible "blueprint" of a group with three elements. However, in order to prove that two groups are isomorphic, we need to exhibit a bijective homomorphism, and so if we take a function $f:G_1\rightarrow G_2$, we need
$$f(x *_1y) = f(x)*_2f(y)$$
What kind of operations should we define on the three-element groups to prove that they are in fact isomorphic?


Answer




If you want to show that two arbitrary three-element groups are isomorphic, then you cannot choose what operations they have. However that doesn't mean that you can't "discover" their operations. Suppose $G:=\{e,x,y\}$ is a three-element group. What are the possibilities for $xy$?



If $xy=x$, then $y=e$ is a contradiction. Similarly $xy=y$ gives the contradiction $x=e$. Thus $xy=e$, which also gives $yx=e$.



Let's do the same thing for $x^2$:



If $x^2=e$, then $x=x^{-1}$ is a contradiction to $x^{-1}=y$ obtained above. If $x^2=x$, then $x=e$ is a contradiction. Thus $x^2=y$.



Similarly we can obtain that $y^2=x$. Therefore we know exactly how the operation on $G$ behaves. Let's use this to define an isomorphism.




Let $G_1:=\{e_1,x_1,y_1\}$ and $G_2:=\{e_2,x_2,y_2\}$ be two three-element groups. Define $f:G_1\to G_2$ by $f(e_1)=e_2$, $f(x_1)=x_2$, and $f(y_1)=y_2$.
Then $f$ is clearly a bijection. To see that it is a homomorphism, we must verify



$$
f(x_1y_1)=f(x_1)f(y_1), \quad
f(y_1x_1)=f(y_1)f(x_1),\quad f(x_1^2)=f(x_1)^2,\quad\text{and}\quad f(y_1^2)=f(y_1)^2,
$$
as well as the (more obvious) identities involving $e_1$. It should be easy to convince yourself that each of these are true using what we discovered about the group operation on three-element groups. Therefore $f$ is an isomorphism.



To test your understanding, try answering the following question: If we had defined $f$ to be $f(e_1)=e_2$, $f(x_1)=y_2$, and $f(y_1)=x_2$, would it still be an isomorphism?







As the comments hint, once you learn about cyclic groups and the order of groups, you can come up with a simpler (but more sophisticated) argument that proves a much more general version of this problem.
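The uniqueness of the "blueprint" can even be checked exhaustively (a brute-force companion to the argument above): among all $3^4$ ways to fill in the four free products of a table with identity $e$, exactly one is associative with inverses.

```python
from itertools import product

elems = ["e", "x", "y"]

def tables():
    # e is forced to act as the identity; only x*x, x*y, y*x, y*y are free.
    for vals in product(elems, repeat=4):
        op = {}
        for a in elems:
            op[("e", a)] = a
            op[(a, "e")] = a
        op[("x", "x")], op[("x", "y")], op[("y", "x")], op[("y", "y")] = vals
        yield op

def is_group(op):
    assoc = all(
        op[(op[(a, b)], c)] == op[(a, op[(b, c)])]
        for a in elems for b in elems for c in elems
    )
    inverses = all(any(op[(a, b)] == "e" for b in elems) for a in elems)
    return assoc and inverses

groups = [op for op in tables() if is_group(op)]
```

The single surviving table is exactly the one derived in the answer: $xy=yx=e$, $x^2=y$, $y^2=x$.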


calculus - Intuitive explanation for formula of maximum length of a pipe moving around a corner?

For one of my homework problems, we had to try and find the maximum possible length $L$ of a pipe (indicated in red) such that it can be moved around a corner with corridor lengths $A$ and $B$ (assuming everything is 2d, not 3d):



(corner diagram not reproduced)



My professor walked us through how to derive a formula for the maximum possible length of the pipe, ultimately arriving at the equation $L = (A^{2/3} + B^{2/3})^{3/2}$.



The issue I have is understanding intuitively why this formula works, and exactly what it's doing. I understand the steps taken to get to this point, but there's an odd symmetry to the end result -- for example, is the fact that $\frac{2}{3}$ and its reciprocal $\frac{3}{2}$ are the only constants used just a coincidence, or indicative of some deeper relationship?



I also don't quite understand how the formula relates, geometrically, to the diagram. If I hadn't traced the steps myself, I would have never guessed that the formula was in any way related to the original problem.




If possible, can somebody give an intuitive explanation as to why this formula works, and how to interpret it geometrically?






Here's how he found the formula, if it's useful:



The formula is found by expressing the length of the longest pipe that fits in terms of the angle $\theta$ formed between the pipe and the wall, and by taking the derivative to find where $\frac{dL}{d\theta} = 0$, which locates the minimum of $L$ over $0 < \theta < \frac{\pi}{2}$; the tightest clearance while rounding the corner is this minimal $L$:



$$
L = \min_{0 \leq \theta \leq \frac{\pi}{2}} \frac{A}{\cos{\theta}} + \frac{B}{\sin{\theta}} \\
0 = \frac{dL}{d\theta} = \frac{A\sin{\theta}}{\cos^2{\theta}} - \frac{B\cos{\theta}}{\sin^2{\theta}} \\
0 = \frac{A\sin^3{\theta} - B\cos^3{\theta}}{\sin^2{\theta}\cos^2{\theta}} \\
0 = A\sin^3{\theta} - B\cos^3{\theta} \\
\frac{B}{A} = \tan^3{\theta} \\
\theta = \arctan{\left( \frac{B}{A} \right)^{\frac{1}{3}}}
$$



At this point, we can substitute $\theta$ back into the original equation for $L$ by interpreting $A^{1/3}$ and $B^{1/3}$ as sides of a triangle with angle $\theta$ and hypotenuse $\sqrt{A^{2/3} + B^{2/3} }$:




$$
\cos{\theta} = \frac{A^{1/3}}{ \sqrt{A^{2/3} + B^{2/3} }} \\
\sin{\theta} = \frac{B^{1/3}}{ \sqrt{A^{2/3} + B^{2/3} }} \\
\therefore L = A^{2/3} \sqrt{A^{2/3} + B^{2/3} } + B^{2/3} \sqrt{A^{2/3} + B^{2/3} } \\
L = (A^{2/3} + B^{2/3}) \sqrt{A^{2/3} + B^{2/3} } \\
L = (A^{2/3} + B^{2/3})^{3/2} \\
$$



The equation for the formula for the maximum length of the pipe is therefore $L = (A^{2/3} + B^{2/3})^{3/2}$.
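The closed form can be sanity-checked against a direct numerical minimisation of $L(\theta)$ (grid size and tolerance are my own choices):

```python
import math

# Minimise L(theta) = A/cos(theta) + B/sin(theta) on a fine grid in (0, pi/2)
# and compare with the closed form (A^(2/3) + B^(2/3))^(3/2).
def min_length(A, B, steps=100_000):
    best = float("inf")
    for i in range(1, steps):
        t = (math.pi / 2) * i / steps
        best = min(best, A / math.cos(t) + B / math.sin(t))
    return best

def closed_form(A, B):
    return (A ** (2 / 3) + B ** (2 / 3)) ** 1.5
```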

Monday 26 October 2015

Why $0.999$... isn't the largest number before 1?

Why isn't it called that? It seems fair: $1$ is called $1$, while $0.999\ldots$ would be the largest number before $1$, yet it is not called $1$ and does not look like it is. Let's say it isn't the largest number before $1$; then what would that number look like?

calculus - Definition of convergence of a nested radical $\sqrt{a_1 + \sqrt{a_2 + \sqrt{a_3 + \sqrt{a_4+\cdots}}}}$?




In my answer to the recent question Nested Square Roots, @GEdgar correctly raised the issue that the proof is incomplete unless I show that the intermediate expressions do converge to a (finite) limit. One such quantity was the nested radical
$$
\sqrt{1 + \sqrt{1+\sqrt{1 + \sqrt{1 + \cdots}}}} \tag{1}
$$



To assign a value $Y$ to such an expression, I proposed the following definition. Define the sequence $\{ y_n \}$ by:
$$
y_1 = \sqrt{1}, y_{n+1} = \sqrt{1+y_n}.
$$

Then we say that this expression evaluates to $Y$ if the sequence $y_n$ converges to $Y$.



For the expression (1), I could show that the $y_n$ converges to $\phi = (\sqrt{5}+1)/2$. (To give more details, I showed, by induction, that $y_n$ increases monotonically and is bounded by $\phi$, so that it has a limit $Y < \infty$. Furthermore, this limit must satisfy $Y = \sqrt{1+Y}$.) Hence we could safely say (1) evaluates to $\phi$, and all seems to be good.



My trouble. Let us now test my proposed idea with a more general expression of the form
$$\sqrt{a_1 + \sqrt{a_2 + \sqrt{a_3 + \sqrt{a_4+\cdots}}}} \tag{2}$$
(Note that the linked question involves one such expression, with $a_n = 5^{2^n}$.) How do we decide if this expression converges? Mimicking the above definition, we can write:
$$
y_1 = \sqrt{a_1}, y_{n+1} = \sqrt{a_{n+1}+y_n}.
$$

However, unrolling this definition, one gets the sequence
$$
\sqrt{a_1}, \sqrt{a_{2}+ \sqrt{a_1}}, \sqrt{a_3 + \sqrt{a_2 + \sqrt{a_1}}}, \sqrt{a_4+\sqrt{a_3 + \sqrt{a_2 + \sqrt{a_1}}}}, \ldots
$$
but this seems to have little to do with the expression (2) that we started with.



I could not come up with any satisfactory ways to resolve the issue. So, my question is:




How do I rigorously define when an expression of the form (2) converges, and also assign a value to it when it does converge?





Thanks.


Answer



I would understand it by analogy with continued fractions and look for a limit of $\sqrt{a_1}$, $\sqrt{a_1+\sqrt{a_2}}$, $\sqrt{a_1+\sqrt{a_2+\sqrt{a_3}}}$, ..., $\sqrt{a_1+\sqrt{a_2 \cdots + \sqrt{a_n}}}$, ...



Each of these is not simply derivable from the previous one, but neither are continued fraction approximants.
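The analogy can be made concrete: each approximant $\sqrt{a_1+\sqrt{a_2+\cdots+\sqrt{a_n}}}$ is evaluated from the inside out. A Python sketch (using the constant sequence $a_n=1$, where the limit should be the golden ratio):

```python
import math

def approximant(a):
    # evaluate sqrt(a[0] + sqrt(a[1] + ... + sqrt(a[-1]))) from the inside out
    y = 0.0
    for v in reversed(a):
        y = math.sqrt(v + y)
    return y

phi = (math.sqrt(5) + 1) / 2
for n in (1, 5, 10, 20, 40):
    print(n, approximant([1.0] * n))   # tends to phi = 1.6180...
print(phi)
```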


Closed-form expression of sum of series $\sum\limits_{n=0}^\infty\sum\limits_{k=1}^\infty\frac{x^k}{(k+2n)!}$




I would like a closed-form expression of the following series:



$\left(\frac{x}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots\right)+\left(\frac{x}{3!}+\frac{x^2}{4!}+\frac{x^3}{5!}+\frac{x^4}{6!}+\cdots\right)+\left(\frac{x}{5!}+\frac{x^2}{6!}+\frac{x^3}{7!}+\frac{x^4}{8!}+\cdots\right)+\cdots$





Clearly the first bracket is $e^x-1$, the second bracket is $x^{-2}(e^x-1-x-\frac{x^2}{2!})$, the third bracket is $x^{-4}(e^x-1-x-\frac{x^2}{2!}-\frac{x^3}{3!}-\frac{x^4}{4!})$, and so on. Is there a way to combine these infinitely many terms into a closed form expression?


Answer



For any formal Laurent series in $t$, $f(t) = \sum\limits_{\ell=-\infty}^\infty \alpha_\ell t^\ell$, we will use the notation $[t^\ell] f(t)$ to denote the coefficient $\alpha_\ell$ in front of the monomial $t^\ell$. When $f(t)$ defines a
function holomorphic over an annular region $\mathcal{A} \subset \mathbb{C}$, $[t^\ell] f(t)$ can be reinterpreted as a contour integral over some suitably chosen circle $C_R = \{ t : |t| = R \}$ lying within $\mathcal{A}$.
$$[t^\ell]f(t) \quad\longleftrightarrow\quad\frac{1}{2\pi i}\oint_{C_R} \frac{f(t)}{t^{\ell+1}} dt$$



The series at hand can be rewritten as




$$\begin{align}
\mathcal{S} \stackrel{def}{=}\sum_{\ell=0}^\infty \sum_{k=1}^\infty \frac{x^k}{(k+2\ell)!}
&= \sum_{\ell=0}^\infty \sum_{k=1}^\infty \sum_{m=0}^\infty \delta_{m,k+2\ell} \frac{x^k}{m!}
= \sum_{\ell=0}^\infty \sum_{k=1}^\infty \sum_{m=0}^\infty \delta_{m-k,2\ell} \frac{x^k}{m!}\\
&= \sum_{\ell=0}^\infty \left\{ \sum_{k=1}^\infty \sum_{m=0}^\infty [t^{2\ell}]\left(\frac{x}{t}\right)^k \frac{t^m}{m!}\right\}\tag{*1}
\end{align}
$$
Over the complex plane, the sum
$\displaystyle\;\sum\limits_{k=1}^\infty \sum\limits_{m=0}^\infty \left(\frac{x}{t}\right)^k \frac{t^m}{m!}\;$
converges absolutely for $|t| > |x|$.
If we interpret the expression inside curly braces of $(*1)$ as contour integrals over circle $C_R$ with $R > |x|$, we can switch the order of summation and $[\ldots]$.

We find



$$\mathcal{S} = \sum_{\ell=0}^\infty [t^{2\ell}]
\sum\limits_{k=1}^\infty \sum\limits_{m=0}^\infty \left(\frac{x}{t}\right)^k \frac{t^m}{m!}
= \sum_{\ell=0}^\infty [t^{2\ell}] \frac{xe^t}{t-x}
= \sum_{\ell=0}^\infty [t^0] t^{-2\ell} \frac{xe^t}{t-x}
$$



Over the complex plane, the term $\sum\limits_{\ell=0}^\infty t^{-2\ell}$ converges absolutely when $|t| > 1$. If we choose $R > \max\{ |x|, 1 \}$,
we can change the order of summation and $[\cdots]$ again and get




$$\mathcal{S}
= [t^0]\left[\left(\sum_{\ell=0}^\infty t^{-2\ell}\right)\frac{xe^t}{t-x}\right]
= [t^0] \left(\frac{t^2}{t^2-1}\frac{xe^t}{t-x}\right)
= \frac{1}{2\pi i}\oint_{C_R} \frac{tx e^t}{(t^2-1)(t-x)} dt
$$
Since $R > \max\{ |x|, 1 \}$, there are 3 poles $x, \pm 1$ inside $C_R$. If one
sums over the contributions from these poles, one obtains



$$\begin{align}

\mathcal{S}
&= \frac{x^2 e^x}{(x^2-1)} + \frac{xe}{2(1-x)} + \frac{-xe^{-1}}{(-2)(-1-x)}\\
&= \frac{x}{2(x^2-1)}\left[ 2xe^x - e (x+1) - e^{-1}(x-1)\right]\\
&= \frac{x}{2(x^2-1)}\left[ x(2e^x - (e+e^{-1})) - (e-e^{-1})\right]
\end{align}
$$
This reproduces what Did mentioned in the comments.
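A numerical cross-check of the final closed form against the original double sum (a Python sketch; the factorial argument is capped at $170$ to stay within double-precision range, which costs nothing at this accuracy):

```python
import math

def series(x, L=40):
    # direct double sum  S = sum_{l>=0} sum_{k>=1} x^k / (k+2l)!
    total = 0.0
    for l in range(L):
        for k in range(1, 171 - 2 * l):   # cap k+2l at 170: factorial(171) exceeds float range
            total += x ** k / math.factorial(k + 2 * l)
    return total

def closed_form(x):
    e = math.e
    return x / (2 * (x * x - 1)) * (x * (2 * math.exp(x) - (e + 1 / e)) - (e - 1 / e))

for x in (-0.5, 0.5):
    print(x, series(x), closed_form(x))   # the two columns agree
```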


calculus - Find Derivative of $y=\tan^{-1}\left(\sqrt{\frac{x+1}{x-1}}\right)$ for $|x|>1$?




Question. If $y=\tan^{-1}\left(\sqrt{\frac{x+1}{x-1}}\right)$ for $|x|>1$ then $\frac{dy}{dx}=?$





Answer: $\displaystyle\frac{dy}{dx}=-\frac{1}{2|x|\sqrt{x^2-1}}$



My 1st attempt: I followed the simple method, taking the derivative of the inverse tangent and then applying the chain rule, and I got the answer $-\frac{1}{2x\sqrt{x^2-1}}$, which is not the same as the correct answer above. The 2nd method is to substitute $x=\sec\left(\theta\right)$; solving, in the last step we get $\sec^{-1}\left(x\right)$, whose derivative contains $\left|x\right|$. But even after searching I still don't understand why the derivative has $\left|x\right|$.



Here's my attempt stepwise



$\displaystyle\frac{dy}{dx}=\frac{1}{1+\left(\sqrt{\frac{x+1}{x-1}}\right)^2}\cdot\frac{1}{2\sqrt{\frac{x+1}{x-1}}}\cdot\frac{\left(x-1\right)-\left(x+1\right)}{\left(x-1\right)^2}$



$\displaystyle=\frac{\left(x-1\right)}{\left(x-1\right)+\left(x+1\right)}\cdot\frac{\sqrt{x-1}}{2\sqrt{x+1}}\cdot-\frac{2}{\left(x-1\right)^2}$




$\displaystyle=-\frac{1}{2x}\cdot\frac{\left(x-1\right)\sqrt{x-1}}{\left(x-1\right)^2}\cdot\frac{1}{\sqrt{x+1}}$



$\displaystyle=-\frac{1}{2x\sqrt{x-1}\sqrt{x+1}}$



$\displaystyle=-\frac{1}{2x\sqrt{x^2-1}}$



Can you tell what i am doing wrong in my 1st attempt?


Answer



You start right:

\begin{align}
\frac{dy}{dx}
&=\frac{1}{1+\left(\sqrt{\dfrac{x+1}{x-1}}\right)^2}
\frac{1}{2\sqrt{\dfrac{x+1}{x-1}}}
\frac{(x-1)-(x+1)}{(x-1)^2}\\
&=\frac{1}{2}\frac{x-1}{(x-1)+(x+1)}
\sqrt{\dfrac{x-1}{x+1}}
\frac{-2}{(x-1)^2}
\end{align}

but then make a decisive error in splitting the square root in the middle and then use the wrong fact that $t=\sqrt{t^2}$, which only holds for $t\ge0$.




You can go on with
$$
=-\frac{1}{2x}\sqrt{\dfrac{x-1}{x+1}}\frac{1}{x-1}
$$

and now you have to split into the cases $x>1$ and $x<-1$ or observe that $x(x-1)=|x|\,|x-1|$ (when $|x|>1$) so you can write
$$
=-\frac{1}{2|x|}\sqrt{\dfrac{x-1}{x+1}}\frac{1}{|x-1|}
=-\frac{1}{2|x|}\sqrt{\dfrac{x-1}{x+1}\frac{1}{(x-1)^2}}
$$


and finish up.






You might try simplifying the expression, before plunging in the computations.



You have, by definition,
$$
\sqrt{\frac{x+1}{x-1}}=\tan y
$$


so that
$$
x+1=x\tan^2y-\tan^2y
$$

that yields
$$
x=\frac{1+\tan^2y}{\tan^2y-1}=\frac{1}{\cos^2y}\frac{\cos^2y}{-\cos2y}=-\frac{1}{\cos2y}
$$

Hence $\cos2y=-1/x$ and $y=\frac{1}{2}\arccos(-1/x)$. Thus
$$

\frac{dy}{dx}=-\frac{1}{2}\frac{-1}{\sqrt{1-\dfrac{1}{x^2}}}\frac{-1}{x^2}
=-\frac{1}{2}\frac{-\sqrt{x^2}}{\sqrt{x^2-1}}\frac{-1}{x^2}=
-\frac{1}{2|x|\sqrt{x^2-1}}
$$

because
$$
\frac{\sqrt{x^2}}{x^2}=\frac{|x|}{|x|^2}=\frac{1}{|x|}
$$
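The $|x|$ in the final answer can be confirmed with a finite-difference check on both branches $x>1$ and $x<-1$ (a Python sketch using central differences):

```python
import math

def y(x):
    return math.atan(math.sqrt((x + 1) / (x - 1)))

def dy_closed(x):
    return -1 / (2 * abs(x) * math.sqrt(x * x - 1))

h = 1e-6
for x in (2.0, -2.0, 5.0, -5.0):
    numeric = (y(x + h) - y(x - h)) / (2 * h)   # central difference
    print(x, numeric, dy_closed(x))             # columns agree on both branches
```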


trigonometry - geometric meaning of a trigonometric identity

It follows from the law of cosines that if $a,b,c$ are the lengths of the sides of a triangle with respective opposite angles $\alpha,\beta,\gamma$, then
$$
a^2+b^2+c^2 = 2ab\cos\gamma + 2ac\cos\beta + 2bc\cos\alpha.
$$



For a cyclic (i.e. inscribed in a circle) polygon, consider the angle "opposite" a side to be the angle between adjacent diagonals whose endpoints are those of that side (it doesn't matter which vertex of the polygon serves as the vertex of that angle because of the Inscribed Angle Theorem). Then for a cyclic quadrilateral with sides $a,b,c,d$ and opposite angles $\alpha,\beta,\gamma,\delta$, one can show that
$$

\begin{align}
a^2+b^2+c^2+d^2 = {} & 2ab\cos\gamma\cos\delta + 2ac\cos\beta\cos\delta + 2ad\cos\beta\cos\gamma \\
& {} +2bc\cos\alpha\cos\delta+2bd\cos\alpha\cos\gamma+2cd\cos\alpha\cos\beta \\
& {}-4\frac{abcd}{(\text{diameter})^2}
\end{align}
$$
And for a cyclic pentagon, with sides $a,b,c,d,e$ and respective opposite angles $\alpha,\beta,\gamma,\delta,\varepsilon$,
$$
\begin{align}
a^2 + \cdots + e^2 = {} & 2ab\cos\gamma\cos\delta\cos\varepsilon+\text{9 more terms} \\& {} - 4\frac{abcd}{(\text{diameter})^2}\cos\varepsilon+ \text{4 more terms}

\end{align}
$$
And for a cyclic $n$-gon with sides $a_i$ and opposite angles $\alpha_i$,
$$
\begin{align}
\sum_{i=1}^n a_i^2 = {} & \text{a sum of }\binom{n}{2}\text{ terms each with coefficient 2} \\
& {} - \text{a sum of }\binom{n}{4}\text{ terms each with coefficient 4} \\
& {} + \text{a sum of }\binom{n}{6}\text{ terms each with coefficient 6} \\
& {} - \cdots
\end{align}

$$
The number of terms depends on $n$ and the power of the diameter on the bottom is in each case what is needed to make the term homogeneous of degree $2$ in the side lengths ("dimensional correctness" if you like physicists' language), and the alternation of signs continues.



I showed this by induction. It should work for infinitely many sides, too, by taking limits. Each term would then have a product of infinitely many cosines.



My question is: Is there some reasonable geometric interpretation of the sum of squares of sides of a polygon inscribed in a circle?

sequences and series - $\frac{n}{\frac{1}{n} \sum_{i=1}^{n} \frac{1}{1-t^i}} + \frac{t}{1-t} \left( 1 - t^n \right) \leq n$?



​Hi there,



I bumped into a weird sequence, and I know for a fact that the following holds.
(Also, I ran MATLAB simulations to double-check.) This sequence comes up in research on computing the reliability of a signal reaching its destination when sent through parallel links, and the benefit associated with it. (The details are rather long; they might help in finding a solution, but I think those who are very good at dealing with sequences won't necessarily need them.)



For any positive integers $n$ and a real number $t \in (0, 1)$, the following holds:



$\frac{n}{\frac{1}{n} \sum_{i=1}^{n} \frac{1}{1-t^i}} + \frac{t}{1-t} \left( 1 - t^n \right) \leq n$.




I'm struggling to prove this analytically, though. A few techniques that I have tried usually aim at showing an upper bound of LHS is less than or equal to RHS:



The fact that $1 - t \leq 1 - t^i \leq 1 - t^n$ holds for all $i = 1, \dots, n$ implies that $\frac{1}{1-t^n} \leq \frac{1}{n} \sum_{i=1}^{n} \frac{1}{1-t^i} \leq \frac{1}{1-t}$.



Then, LHS is less than or equal to



$n(1-t^n) + \frac{t}{1-t} (1-t^n)$.



Subtracting the upper bound above from the RHS, we get




$\frac{t}{1-t} \left( -1 + n(1-t)t^{n-1} + t^n \right)$.



The terms in the parentheses imply that it is less than zero: consider a binomial distribution with parameters $n$ and $t$.



A few other attempts similar to the above one failed. I also tried to use Bernoulli's inequality $(1+x)^n \geq 1+nx$, but it didn't help. Maybe I applied the techniques sloppily.



I feel like we should either prove it by induction (when $n=1$, LHS = RHS. Assuming LHS $\leq$ RHS for $n$, we could show it also is true for $n+1$.), or we begin with defining a sequence, say $S_n = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{1-t^i}$, and show that the forward difference of LHS is less than or equal to the forward difference of RHS (which is 1).



But, even after hours of effort, I couldn't seem to find a clue. Could anyone help? Thanks!



Answer



Jensen's Inequality and the fact that $\frac1{1-x}$ is convex on $(0,1)$ say that
$$
\frac1n\sum_{k=1}^n\frac1{1-t^k}
\ge\frac1{1-\frac1n\sum\limits_{k=1}^nt^k}\tag{1}
$$
Therefore,
$$
\begin{align}
\frac1{\frac1n\sum\limits_{k=1}^n\frac1{1-t^k}}+\frac t{1-t}\frac{1-t^n}n

&\le\left(1-\frac1n\sum\limits_{k=1}^nt^k\right)+\frac1n\sum\limits_{k=1}^nt^k\\
&=1\tag{2}
\end{align}
$$
Multiplying inequality $(2)$ by $n$ gives the desired inequality.
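The inequality is also easy to spot-check numerically (a Python sketch sampling a few values of $n$ and $t$; equality holds at $n=1$):

```python
def lhs(n, t):
    # n / ((1/n) sum_{i=1}^n 1/(1-t^i))  +  t/(1-t) * (1 - t^n)
    mean_inv = sum(1.0 / (1 - t ** i) for i in range(1, n + 1)) / n
    return n / mean_inv + t / (1 - t) * (1 - t ** n)

for n in (1, 2, 5, 20, 100):
    for t in (0.05, 0.3, 0.5, 0.9, 0.99):
        assert lhs(n, t) <= n + 1e-9, (n, t)
print("inequality holds at all sampled (n, t)")
```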


elementary number theory - Divisibility Tests for Palindromes?



Let $n$ be a positive integer. My question, loosely, is




  • Which palindromes are divisible by $n$?



Of course, the question has the easy answer "those that are divisible by $n$" unless I specify in what terms I want to characterize the divisibility. I'm not quite sure myself of how to answer this, so I will give some examples and general thoughts here, which will hopefully convey the spirit of the question.




Example of what I'm looking for:




  • Every palindrome with an even amount of digits is divisible by $11$



Example of what I'm not looking for:




  • A non-zero palindrome is divisible by $5$ if and only if it starts with a $5$




Both of these examples are found by applying some well known divisibility rules. The example of what I'm looking for is the case of the divisibility rule for $11$, which lets on simply check the alternating sum of the digits. But the nice thing is that one doesn't need to know any specific digits of the palindrome; only its length and the fact that it is a palindrome are needed. It is also a little less obvious than the example of what of I'm not looking for (at least to me.) Are there any other of these "cute" rules which let one easily see that a palindrome is or is not divisible by a certain integer? Has anyone ever studied this question in more detail?



I understand that the question of checking whether a certain number is divisible by another is already hard by itself, so maybe the question should be rephrased as




  • For which integers $n$ and $P$ we can use the fact that $P$ is palindromic to our advantage in determining whether $n$ divides $P$?




Finally, as a sort of bonus question if anything in this direction is possible, I am particularly interested in the case in which $n$ is itself palindromic, i.e. determining whether a palindrome possesses a palindromic factor.



Thanks in advance for any thoughts whatsoever on the issue.


Answer



An interesting case is $n=101$.



The $3$-digit palindrome $aba$ is divisible by 101 iff $b=0$.



The $4$-digit palindrome $abba$ is divisible by 101 iff $a=b$.




The $5$-digit palindrome $abcba$ is divisible by 101 iff $c=2a$.



The $6$-digit palindrome $abccba$ is divisible by 101 iff $a+b=c$.



The $7$-digit palindrome $abcdcba$ is divisible by 101 iff $d=2b$.



The $8$-digit palindrome $abcddcba$ is divisible by 101 iff $a+d=b+c$.



The $9$-digit palindrome $abcdedcba$ is divisible by 101 iff $e = 2(c-a)$.




The $10$-digit palindrome $abcdeedcba$ is divisible by 101 iff $a+b+e=c+d$.



...
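Rules of this kind are easy to confirm by brute force. A Python sketch checking the $5$- and $6$-digit cases (the other lengths can be verified the same way):

```python
def is_pal(n):
    s = str(n)
    return s == s[::-1]

# 5-digit palindromes abcba: divisible by 101  <=>  c == 2a
for a in range(1, 10):
    for b in range(10):
        for c in range(10):
            n = 10001 * a + 1010 * b + 100 * c
            assert is_pal(n)
            assert (n % 101 == 0) == (c == 2 * a)

# 6-digit palindromes abccba: divisible by 101  <=>  a + b == c
for a in range(1, 10):
    for b in range(10):
        for c in range(10):
            n = 100001 * a + 10010 * b + 1100 * c
            assert is_pal(n)
            assert (n % 101 == 0) == (a + b == c)

print("5- and 6-digit rules verified")
```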


elementary number theory - Find solutions of linear congruences: $x\equiv 0 \pmod 2$, $x\equiv 0 \pmod 3$, $x\equiv 1 \pmod 5$, $x\equiv 6 \pmod 7$


$x\equiv 0 \pmod 2$
$x\equiv 0 \pmod 3$
$x\equiv 1 \pmod5$
$x\equiv 6 \pmod7$
Find all the solutions of each of the following systems of linear congruences.




I know how to find solutions of a system of three congruences, but
I don't know how to solve a system of four equations.
I can't find such an example in my textbook, but it appears in the exercises. Please help.
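The same Chinese Remainder Theorem construction works for any number of pairwise-coprime moduli; for this system it gives $x\equiv 6\pmod{210}$. A Python sketch (the three-argument `pow` used for the modular inverse needs Python 3.8+):

```python
from math import prod

def crt(residues, moduli):
    # Chinese Remainder Theorem for pairwise-coprime moduli
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M, M

x, M = crt([0, 0, 1, 6], [2, 3, 5, 7])
print(x, M)   # → 6 210
for r, m in zip([0, 0, 1, 6], [2, 3, 5, 7]):
    assert x % m == r
```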

calculus - Show this $\int_0^\infty \frac{t\ln(2\sinh t)}{\left(3t^2+\ln^2(2\sinh t)\right)^2}~dt=0$



While evaluating the integral

$$
I_1=\int_{0}^\infty\frac{\sin\pi x~dx}{x\prod\limits_{k=1}^\infty\left(1-\frac{x^3}{k^3}\right)},\tag{1}
$$
I came to this integral of elementary function
$$
I_2=\int_0^\infty \frac{dt}{\left(i t\sqrt{3}+\ln(2\sinh t)\right)^2}.\tag{2}
$$
In fact $I_2$ is real and
$$
I_1=-2\pi I_2.

$$
These formulas imply the closed form
$$
\int_{0}^{\infty}\frac{t\ln\left(\,2\sinh\left(\,t\,\right)\,\right)}{\left[\,3t^{2} + \ln^{2}\left(\,2\sinh\left(\,t\,\right)\,\right)\right]^{\,2}}\,{d}t = 0,\tag{3}
$$
or alternatively
$$
\text{Im}\int_0^\infty \frac{dt}{\left(i t\sqrt{3}+\ln(2\sinh t)\right)^2}=0.
$$




Brief outline of proof is as follows. Write the infinite product in terms of Gamma functions, apply reflection formula for Gamma function to get rid of $\sin\pi x$, then use integral representation for Beta function and change the order of integration. Then one can integrate over $x$ to obtain the desired formula.
It seems that this should have a simple proof, but I don't see it.




Q: Can anybody provide a direct proof?




Such a direct proof may shed light on possible routes to calculation or simplification of $(2)$.



Here is a numerical demonstration using Mathematica that the integral under consideration is $0$ up to at least $100$ digits:

(Screenshot of the Mathematica computation omitted.)



The integrand for $t>w$ has been replaced by $\frac{1}{16t^2}$, resulting in the term $\frac{1}{16w}$.
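The numerical claim can also be reproduced without Mathematica. A Python sketch using composite Simpson's rule, with the same tail replacement $\frac{1}{16t^2}$ for $t>w$ described above:

```python
import math

def f(t):
    # integrand: t*ln(2 sinh t) / (3t^2 + ln^2(2 sinh t))^2
    L = math.log(2 * math.sinh(t))
    return t * L / (3 * t * t + L * L) ** 2

def simpson(g, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

w = 15.0
total = (simpson(f, 1e-8, 1.0, 200_000)   # integrand -> 0 as t -> 0, so the cut at 1e-8 is harmless
         + simpson(f, 1.0, w, 200_000)
         + 1 / (16 * w))                  # tail: integrand ~ 1/(16 t^2) for t > w
print(total)                              # ≈ 0
```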


Answer



This answer directly proves that:



$$\text{Im}\int_0^\infty \frac{dt}{\left(i t\sqrt{3}+\ln(2\sinh t)\right)^2}=0$$



First, we make a change of variable:




$$x=e^{-2t}$$



Which transforms the identity to:



$$\text{Im} \int_0^1 \frac{dx}{x \left(\ln(1-x)-e^{\pi i/3} \ln x \right)^2}=0$$



Finding the imaginary part explicitly, we now need to prove:



$$ \int_0^1 \frac{\ln x \ln(1-x)-\frac{1}{2} \ln^2 x}{x \left(\ln^2 x+\ln^2(1-x)-\ln x \ln (1-x)\right)^2}dx=0$$







Let's introduce a function:




$$f(x)=f(1-x)=\ln^2 x+\ln^2(1-x)-\ln x \ln (1-x)$$




As we have a difference of two positive definite functions under the integral, the identity is equivalent to:





$$\int_0^1 \frac{\ln x \ln(1-x)}{x f(x)^2}dx=\frac{1}{2}\int_0^1 \frac{ \ln^2 x}{x f(x)^2}dx$$




Let's denote the integrals $J_1$ and $J_2$. We need to prove that $J_1=J_2$.



Using the substitution $x \to 1-x$ we can prove the following identities:



$$J_1=\int_0^1 \frac{\ln x \ln(1-x)}{(1-x) f(x)^2}dx=\frac{1}{2} \int_0^1 \frac{\ln x \ln(1-x)}{x (1-x) f(x)^2}dx$$




$$J_2=\frac{1}{2}\int_0^1 \frac{ \ln^2 (1-x)}{(1-x) f(x)^2}dx=\frac{1}{4}\int_0^1 \frac{ (1-x)\ln^2 x+x\ln^2 (1-x)}{x(1-x) f(x)^2}dx$$



Subtracting the two forms of $J_2$ gives us another set of identities:



$$J_3=\int_0^1 \frac{ \ln^2 x}{x (1-x)f(x)^2}dx=\int_0^1 \frac{ \ln^2 (1-x)}{x (1-x)f(x)^2}dx=\int_0^1 \frac{ \ln^2 x+\ln^2 (1-x)}{x f(x)^2}dx$$






From the above follows a relation:




$$J_3-J_1=\int_0^1 \frac{ 1}{x f(x)}dx$$



Now we use integration by parts with:



$$u(x)=\frac{ 1}{f(x)}, \qquad v(x)= \ln x$$



The limits for $u(x)v(x)$ at $0$ and $1$ are both equal to zero. After simplifications, we can write:



$$J_3-J_1=\int_0^1 \frac{(2-x) \ln^2 x-(1+x) \ln x \ln (1-x)}{x(1-x)f(x)^2}dx$$




Making a substitution $x \to 1-x$ and adding the two results, we obtain a symmetric form of the integral:



$$J_3-J_1=\frac{1}{2} \int_0^1 \frac{(2-x) \ln^2 x+(1+x) \ln^2 (1-x)-3 \ln x \ln (1-x)}{x(1-x)f(x)^2}dx$$



From the other identities above, it can be finally seen that:




$$J_3-J_1=J_3+2J_2-3J_1$$





Or immediately:



$$J_1=J_2$$



The proof is finished.






Remark. This doesn't use the other identity shown by FDP in the comments. To me it looks very difficult to prove.


Convergence of complex series $\sum_{n=1}^{\infty}\frac{i^n}{n}$




Prove that the series $\displaystyle \sum_{n=1}^{\infty}\frac{i^n}{n}$ converges.



Optional. find it's sum, if possible.





Comments. I am aware of the general result about the convergence (not absolute)
of $\displaystyle \sum_{n=1}^{\infty}\frac{z^n}{n}$, for $z\neq 1$. But i feel that the answer to the above problem can be more trivial, although have not made any interesting approach so far.



Thank you in advance!


Answer



We have $\frac{1}{1-z}=\sum_{n\geq 0}z^n$ for any $z\in\mathbb{C}$ such that $|z|<1$. The convergence is uniform over any compact subset of $\{z\in\mathbb{C}:|z|<1\}$, hence we are allowed to state



$$ \int_{0}^{i}\frac{dz}{1-z} = \sum_{n\geq 0}\frac{i^{n+1}}{n+1} = \sum_{n\geq 1}\frac{i^n}{n} $$
where the LHS equals
$$ -\log(1-i) = -\log\left(\sqrt{2}\, e^{-\frac{\pi i}{4}}\right) = \color{red}{-\frac{\log 2}{2}+\frac{\pi i}{4}}.$$

The convergence of the original series is granted by Dirichlet's test, as already remarked by other users. You may also notice that the real part is given by the terms of the original series with $n\in 2\mathbb{N}$ and the imaginary part is given by the terms of the original series with $n\in 2\mathbb{N}+1$. On the other hand,
$$ \sum_{n\geq 1}\frac{(-1)^n}{n}=-\log(2),\qquad \sum_{n\geq 0}\frac{(-1)^n}{2n+1}=\frac{\pi}{4} $$
are pretty well-known. They can be proved with the same approach, restricted to the real line only.
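The value $-\frac{\log 2}{2}+\frac{\pi i}{4}$ can be checked against partial sums, which converge at rate $O(1/N)$ (a Python sketch):

```python
import math

target = complex(-math.log(2) / 2, math.pi / 4)

N = 10_000
s = 0j
for n in range(1, N + 1):
    s += 1j ** n / n     # powers of i cycle exactly through i, -1, -i, 1

print(s, target, abs(s - target))   # error is O(1/N)
```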


sequences and series - How to find the area. Linked with another question.








In this question we discussed why the fake proof is wrong.



But, what about the area?



Does the process converge to the same area as the circle ($\frac{\pi}{4}$)?



What's the area when the process repeats infinitely?



How can we prove the result using calculus and limits?




Pi is equal to 4

sequences and series - Is 1 divided by 3 equal to 0.333...?




I have been taught that $\frac{1}{3}$ is 0.333.... However, I believe that this is not true, as 1/3 cannot actually be represented in base ten; even if you had infinite threes, as 0.333... is supposed to represent, it would not be exactly equal to 1/3, as 10 is not divisible by 3.



0.333... = 3/10 + 3/100 + 3/1000...



This occurred to me during a discussion of one of Zeno's paradoxes. We were talking about one potential solution to the race between Achilles and the Tortoise. The solution stated that it would take Achilles $11\frac{1}{9}$ seconds to pass the tortoise, since $0.111\ldots = \frac{1}{9}$. However, this isn't the case, as, no matter how many ones you add, $0.111\ldots$ will never equal precisely $\frac{1}{9}$.



Could you tell me if this is valid, and if not, why not? Thanks!



I'm not arguing that $0.333...$ isn't the closest that we can get in base 10; rather, I am arguing that, in base 10, we cannot accurately represent $\frac{1}{3}$


Answer



Here is a simple argument that $1/3=0.3333\ldots$.




Lets denote $0.333333......$ by $x$. Then



$$x=0.33333.....$$
$$10x=3.33333...$$



Subtracting we get $9x=3$. Thus $x=\frac{3}{9}=\frac{1}{3}$.



Since $x$ was chosen as $0.3333....$ it means that $0.3333...=\frac{1}{3}$.


Sunday 25 October 2015

summation - Calculating the sum $\sum_{n=1}^\infty \frac{n}{(n-1)!}x^n$?



So I've got the sum $$\sum_{n=1}^\infty \frac{n}{(n-1)!}x^n$$




To show that it converges for all real numbers, I used the ratio test. And found the convergence radius to be $$R = \frac{1}{L}, \qquad R = \infty$$
The next task is to calculate the sum, and I feel sort of lost. I think I want the sum to look like a geometric series, or to substitute it with something else.


Answer



Recall that
$$e^x=\sum_{n=0}^\infty \frac{x^n}{n!}.$$
First way. Note that
$$xe^x=x(e^x)'=x\sum_{n=0}^\infty n\frac{x^{n-1}}{n!}=\sum_{n=1}^\infty \frac{x^{n}}{(n-1)!}.$$
Try to differentiate again and compare the result with your series.



Second way. we have that

$$\sum_{n=1}^\infty \frac{n}{(n-1)!}x^n=\sum_{m=0}^\infty \frac{m+1}{m!}x^{m+1}=x^2\sum_{m=1}^\infty \frac{m}{m!}x^{m-1}+x\sum_{m=0}^\infty \frac{x^{m}}{m!}.$$
What then?
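For reference, both routes lead to the same answer, $(x^2+x)e^x$, which a quick numerical comparison with partial sums supports (a Python sketch):

```python
import math

def partial(x, N=80):
    # partial sums of sum_{n>=1} n x^n / (n-1)!
    return sum(n * x ** n / math.factorial(n - 1) for n in range(1, N))

def closed(x):
    return (x * x + x) * math.exp(x)

for x in (-2.0, 0.5, 3.0):
    print(x, partial(x), closed(x))   # the two columns agree
```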


exponentiation - What do we actually mean by raising some number to an imaginary power?

It seems quite intuitive when we say that some number $a$ is raised to a power $b$ where $a \in \mathbb{C} $ and $b \in \mathbb{Z}$ and can be expressed as
$$a^b = a \times a \times a ... \text{($b$ times)}$$
Extending the argument such that $b \in \mathbb{R}$ then if $b$ is rational, it can be expressed in the form $\dfrac{p}{q}$ such that $ p,q \in \mathbb{Z}$ and $q \ne 0$ and $a^b$ is defined as
$$a^{\frac{p}{q}} = \sqrt[q]{a^p}$$
If $b$ is irrational then $a^b$ is a transcendental number as stated by Gelfond- Schneider theorem ($a$ and $b$ are algebraic numbers). Agreed.




Now, here is the problem: What happens when $b$ is an imaginary number? What is an intuitive idea behind saying $i\theta$ times in the expression (I may be wrong in saying that)
$$e^{i\theta} = e\times e\times e...\text{($i\theta$ times)} = \cos \theta + i\sin \theta$$
Yes, thats the Euler's theorem.

How to proceed with alternating series

Given some alternating series, the first step is to check whether it's absolutely convergent. Say it's not. Then you use the alternating series test. That test tells you if the series is convergent, but not necessarily if it's divergent (I think). So if it doesn't meet the conditions for that, what is your recourse? How do you confirm whether the series is conditionally convergent or not? All of the other tests I know require all positive terms -- so I don't know any other test to use.

gamma function - Proof that the factorial is nonelementary



Is there a proof that the factorial function $!:\mathbb N\to\mathbb N$ is nonelementary?




If it were equal to an elementary function (call it $P(n)$), then it would extend the factorial function to the real and complex numbers. This sounds like the Gamma function, but we have $\dfrac{\Gamma(n+1)}{\Gamma(n)}=n$ for all real $n$. It's entirely possible that $\dfrac{P(n)}{P(n-1)}$ isn't $n$, but rather something like $n+\sin(\pi n)$ which is only equal to $n$ at the integers. (Also, I've never found a proof that Gamma is nonelementary, either. I do know that the incomplete Gamma function is nonelementary, due to differential Galois theory.)



Also, the fact that $\pi$ appears in limits involving factorials isn't a proof, by the way. For example, the fact that:
$$\lim_{n\to\infty}\frac{(n!)^2(n+1)^{2n^2+n}}{n^{2n^2+3n+1}}=2\pi,$$
which comes from Stirling's approximation, doesn't prove that it can't be elementary; we also have:
$$\lim_{n\to\infty}n(-1)^{1/n}-n=i\pi$$
so it's possible for elementary functions to have $\pi$ as a limiting value.



So, is there any proof that the factorial function is nonelementary?


Answer




We will use the following facts:




(i) The extension, to a larger domain, of a non-elementary function is also non-elementary;



(ii) The derivative of an elementary function is also elementary;



(iii) The product of finitely many elementary functions is also elementary;



(iv) The product of an elementary function times a non-elementary function is non-elementary.





Claim 1: $\Gamma(x)$ is a non-elementary function.



Proof. Assume the contrary.
By (i) $n!$ must be elementary, and by (ii) so is $\Gamma'(x)=\Gamma(x)\psi^{(0)}(x)$, which by (iii) implies the same for $\psi^{(0)}(x)$ and all of its derivatives. But we have $$\psi^{(n)}(x)=(-1)^{n+1}\ n!\ \zeta(n+1,x),$$ where $\zeta(a,s)$ is the non-elementary Hurwitz zeta function, so combining (iii) and (iv) yields that $\psi^{(n)}(x)$ is a non-elementary function, contradiction. $ \ \ \ \text{QED} $



Claim 2: $n!$ is a non-elementary function.



Proof. The Riemann zeta function satisfies $$\begin{align} 2\ \pi^{-s/2}\Gamma\left(\frac{s}{2}\right)\zeta(s)&= \int_0 ^ \infty \left(\vartheta(0,it) -1\right)t^{s/2-1}dt,\end{align}$$ where $\vartheta(z,q)$ is the non-elementary Jacobi theta function. So let $s=2n$ to obtain $$\begin{align}2\pi^{-n}\Gamma(n)\zeta(2n) &= \int_0 ^ \infty \left(\vartheta(0,it) -1\right)t^{n-1}dt \\ 2\ \pi^{-n} (n-1)! \frac{ (-1)^{n+1} B_{2n}(2\pi)^{2n}}{2(2n)!} &= \int_0 ^ \infty \left(\vartheta(0,it) -1\right)t^{n-1}dt \\ - (-\pi)^n 2^{2n} B_{2n}\frac{(n-1)!}{(2n)!}&=\int_0 ^ \infty \left(\vartheta(0,it) -1\right)t^{n-1}dt.\end{align}$$ Now, by (i) the Bernoulli numbers are elementary, due to being a restriction of the Bernoulli polynomials, which are elementary. But the RHS is non-elementary by (ii), therefore by (iii) the ratio of factorials is non-elementary as well, and the claim follows combining this and (iii). $ \ \ \ \text{QED} $







Of course Claim 1 directly follows from Claim 2 by (i), but I wanted to give two different and independent proofs.


Saturday 24 October 2015

Integration with Polar Coordinates

I want to integrate this integral with polar coordinates:



$\int \sin x \ dA$ on the region bounded by $ y=x, y=10-x^2, x=0$.




So far I've got that $$\int_{\frac{\pi}{4}}^{\frac{\pi}{2}} \int_{l}^{10} f(r\cos\theta, r\sin\theta) \ r\,dr\,d\theta$$



Where $l =\sqrt{2}\frac{(\sqrt{41}-1)}{2}$.



I'm most curious about how I should represent $\sin x$, though I could directly use $\sin(r\cos\theta)$. I feel like there's a better way though.



Thanks for helping.

geometry - Overlapping circles. Distance to move 1 circle along specific line to remove overlap

I have two lines AB and CD that are not parallel.
There is a circle centered on each of points A and C.
The circles overlap as the distance between A and C is less than the two radii combined. The circles have the same radius.




To remove the overlap the circle at point C needs to be moved along the line towards D until the distance is greater than the radii of the circles.



I know the distance AC can be worked out which means I'd have:




  • the coordinates for both AB and CD

  • the distance AC, and the distance the center points need to be away from each other, radii of the circles.




The lines are almost but not quite parallel. Treating them as parallel (I've tried) doesn't produce accurate enough results. I don't know any of the angles.



How do I go from this information to moving the circle in line CD to coordinates on that line where the circles no longer overlap?

complex analysis - Show that $\int_{0}^{\infty} \frac{\cosh(ax)}{\cosh(\pi x)} dx=\frac{1}{2}\sec(\frac{a}{2})$ using Residue Calculus



Show that the following expression is true



$$\int_{0}^{\infty} \frac{\cosh(ax)}{\cosh(\pi x)} dx=\frac{1}{2}\sec(\frac{a}{2})$$



Edit: I forgot to mention that $|a|<\pi$




Specifically,
using Residue Calculus and a rectangular contour with corners at $\pm R$ and $\pm R+i$



However, I'm unsure how to approach this given the bound from $(0,\infty)$, where I usually see the bound $(- \infty, \infty )$. How does this change the problem, and how should I begin to approach it from here?



Edit: Given the tip that the integrand is an even function, I can use the following relation:



$$\int_{0}^{\infty} \frac{\cosh(ax)}{\cosh(\pi x)} dx= \frac{1}{2} \int_{- \infty}^{\infty} \frac{\cosh(ax)}{\cosh(\pi x)} dx$$




Next I proceed by the standard procedure



$$\oint_C f(z) \,dz=(\int_{C_{R}}^{}+\int_{C_{T}}^{}+\int_{C_{L}}^{}+\int_{C_{B}}^{})f(z)dz=2 \pi i \sum_{j}\text{Res}(f(z);z_j)$$



where $f(z)=\frac{\cosh(az)}{\cosh(\pi z)}$ and R, T, L, and B denote the right, top, left, and bottom sides of the rectangular contour. Furthermore, I can bound each $C_i$ integral and determine what happens as R approaches $\infty$ to ultimately simplify the above expression.



In fact, the side contour integrals do disappear as R approaches $\infty$, and the bottom integral becomes our integral of interest.



$$\oint_C f(z) \,dz=(\int_{C_{T}}^{}+\int_{C_{B}}^{})f(z)dz=2 \pi i \sum_{j}^{}\text{Res}(f(z);z_j)$$




However, I am left clueless as to how to deal with the top integral.


Answer



We assume that $|a|<\pi$. Note that we have



$$\int_0^\infty \frac{\cosh(ax)}{\cosh(\pi x)}\,dx=\frac12 \int_{-\infty}^\infty \frac{\cosh(ax)}{\cosh(\pi x)}\,dx$$



Now, we analyze the contour integral $I(a)$ given by



$$I(a)=\oint_C \frac{\cosh(az)}{\cosh(\pi z)}\,dz$$




where $C$ is the rectangular contour with corners at $\pm R$ and $\pm R+i$. Thus, we can write



$$\begin{align}
I(a)&=\int_{-R}^R \frac{\cosh(ax)}{\cosh(\pi x)}\,dx\\\\
&+\int_{0}^1\frac{\cosh(a(R+iy))}{\cosh(\pi (R+iy))}\,i\,dy\\\\
&+\int_{R}^{-R}\frac{\cosh(a(x+i))}{\cosh(\pi (x+i))}\,dx\\\\
&+\int_{1}^0\frac{\cosh(a(-R+iy))}{\cosh(\pi (-R+iy))}\,i\,dy \tag 1
\end{align}$$



As $R\to \infty$, the second and fourth integrals approach zero. Using the residue theorem, $I(a)$ is




$$\begin{align}
I(a)&=2\pi i \text{Res}\left( \frac{\cosh(az)}{\cosh(\pi z)}, z=i/2\right)\\\\
&=2\cos(a/2) \tag 2
\end{align}$$



Now, we have using $(1)$ and $(2)$



$$\begin{align}
\int_{-\infty}^\infty \frac{\cosh(ax)}{\cosh(\pi x)}\,dx&=2\cos(a/2)+ \int_{-\infty}^\infty\frac{\cosh(a(x+i))}{\cosh(\pi (x+i))}\,dx\\\\

&=2\cos(a/2)- \int_{-\infty}^\infty\frac{\cosh(ax)\cos(a)+i\sinh(ax)\sin(a)}{\cosh(\pi x)}\,dx \tag 3\\\\
&=2\cos(a/2)-\cos(a) \int_{-\infty}^\infty\frac{\cosh(ax)}{\cosh(\pi x)}\,dx \tag 4\\\\
&=\frac{2\cos(a/2)}{1+\cos(a)}\\\\
&=\frac{1}{\cos(a/2)}
\end{align}$$



where in going from $(3)$ to $(4)$ we exploited the fact that $\frac{\sinh(ax)}{\cosh(\pi x)}$ is an odd function, and the integral of an odd function over anti-symmetric limits is zero.



Therefore, the integral of interest is found to be




$$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{\cosh(ax)}{\cosh(\pi x)}\,dx=\frac{1}{2\cos(a/2)}}$$



as was to be shown!
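Not part of the residue argument, but the boxed result is easy to sanity-check numerically. Below is an illustrative Python sketch (the truncation point and step count are my own choices); since the integrand decays like $e^{(|a|-\pi)x}$, cutting the integral off at $x=40$ is harmless for moderate $a$.

```python
import math

def integral_numeric(a, upper=40.0, n=400000):
    """Trapezoid-rule approximation of the integral of
    cosh(a*x)/cosh(pi*x) over [0, upper]."""
    h = upper / n
    # endpoint terms get weight 1/2; at x = 0 the integrand equals 1
    total = 0.5 * (1.0 + math.cosh(a * upper) / math.cosh(math.pi * upper))
    for k in range(1, n):
        x = k * h
        total += math.cosh(a * x) / math.cosh(math.pi * x)
    return h * total

a = 1.0
print(integral_numeric(a))            # numerical value of the integral
print(1.0 / (2.0 * math.cos(a / 2)))  # the closed form; the two agree
```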


elementary number theory - Proof that $n! > 3n$ for $n\ge4$ using the Principle of Mathematical Induction


Use induction to prove that $n! > 3n$ for $n\ge4 $.




I have verified the base case: for $n=4$ the inequality reads $24>12$.
However, when doing the inductive step I can't seem to manipulate the expression into the required form on the right-hand side.




So far I have:



Need to show: $(n+1)!>3(n+1)$.



When doing the inductive step:



$(n+1)! = (n+1)n!$



and since the inductive hypothesis gives that $n!$ is larger than $3n$, multiplying by $n+1>0$ yields




$(n+1)n! >(n+1)3n$.



Here is where I don't know what to do next, could anyone shed some insight on how to continue after this part? Thanks.
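For what it is worth, one standard way to finish from exactly the point reached above (a sketch, using only that $n\ge 1$, so $3n\ge 3$):

```latex
(n+1)! = (n+1)\,n! > (n+1)\cdot 3n \ge (n+1)\cdot 3 = 3(n+1).
```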

calculus - Evaluate $\int_{0}^{\pi}\sin^5{\theta}\cos^2{\theta}\, d\theta$

I'm trying to find the mass of a spherical object with a given density function, and to do so I must solve this integral $$\int_{0}^{\pi}\sin^5{\theta}\cos^2{\theta}\ d\theta,$$



but no matter which method I choose (integration by parts, substitution, etc.) I can't for the life of me figure out the antiderivative.
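As a cross-check while hunting for the antiderivative: substituting $u=\cos\theta$ reduces the integral to $\int_{-1}^{1}u^2(1-u^2)^2\,du=\frac{16}{105}$, and a direct numerical evaluation agrees. An illustrative Python sketch (the `midpoint` helper is my own):

```python
import math

def midpoint(f, a, b, n=200000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

val = midpoint(lambda t: math.sin(t) ** 5 * math.cos(t) ** 2, 0.0, math.pi)
print(val, 16 / 105)  # both approx 0.1523809...
```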

sequences and series - Inequality for finite harmonic sum


For a positive integer $n$ let
$$A(n) = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4}+\dots +\frac{1}{2^n - 1}$$
Then prove that $A(200) > 100 > A(100)$.




I tried some standard tools such as the AM-GM-HM inequalities and some algebraic manipulations to reduce the series, but was unable to solve it.
Please help me solve this.
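A standard line of attack (a sketch, in case it helps): group the sum into dyadic blocks. Since $A(n)$ has $2^n-1$ terms,

```latex
A(n) \;=\; \sum_{k=1}^{n}\;\sum_{j=2^{k-1}}^{2^{k}-1}\frac{1}{j},
\qquad
\frac{1}{2} \;=\; 2^{k-1}\cdot\frac{1}{2^{k}} \;<\; \sum_{j=2^{k-1}}^{2^{k}-1}\frac{1}{j} \;\le\; 2^{k-1}\cdot\frac{1}{2^{k-1}} \;=\; 1,
```

so each of the $n$ blocks exceeds $\tfrac12$ and is at most $1$ (with equality only in the block $k=1$). Hence $A(200)>\tfrac{200}{2}=100$ and $A(100)<100$.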

How is $\zeta(0)=-1/2$?







Fermat's Dream by Kato et al. gives the following:




  1. $\zeta(s)=\sum\limits_{n=1}^{\infty}\frac{1}{n^s}$ (the standard Zeta function) provided the sum converges.



  2. $\zeta(0)=-1/2$




Thus, $1+1+1+...=-1/2$ ? How can this possibly be true? I guess I'm under the impression that $\sum 1$ diverges.

Friday 23 October 2015

integration - Where does $\Gamma\left(\frac{3}{2}\right) = \int_0^{+\infty}\!\mathrm e^{-x^2}\,\mathrm dx$ come from?



My teacher solved this problem in class but I don't get how one step is justified.





Prove that $$\int_0 ^{+\infty} \! \mathrm e ^{- x^2 } \, \mathrm d x = \dfrac{\sqrt{\pi}}{2}$$ using this relation $$\int_0 ^{+\infty} \! \int_0^{+\infty} \! y \,\mathrm e ^{-(1+ x^2 )y} \, \mathrm d y \, \mathrm d x = \dfrac{\pi}{4}.$$




Using Fubini's theorem we switch integrals:
$$\int_0 ^{+\infty} \! y\, \mathrm e ^{-y} \left( \int_0 ^{+\infty} \! \mathrm e ^{-x^2 y} \, \mathrm d x \right) \mathrm d y = \dfrac{\pi}{4}.$$



Let us compute first:
$$\int_0 ^{+\infty} \! \mathrm e ^{-x^2 y} \, \mathrm d x =\int _0 ^{+\infty} \! \dfrac{\mathrm e ^{-t ^2}}{\sqrt y}\, \mathrm d t= \dfrac{\mathcal E}{\sqrt y},$$
where we have made the change $x\sqrt y =t $ and $\mathcal E$ is the integral that we want to compute.




Then $$\int _0 ^{+\infty} \! y \, \mathrm e ^{-y} \dfrac{\mathcal E}{\sqrt y} \, \mathrm d y = \mathcal E \int _0 ^{+\infty} y ^{\frac{1}{2}} \, \mathrm e ^{-y} \, \mathrm d y = $$
$$=\mathcal E \int _0 ^{+\infty} y ^{\frac{3}{2}-1} \, \mathrm e ^{-y} \, \mathrm d y = \mathcal E \, \Gamma \left( \frac{3}{2} \right) \color{red}{\stackrel{?}{=}} $$
$$\color{red}{\stackrel{?}{=}} \mathcal E \int_0 ^{+\infty} \mathrm e ^{-s ^2} \, \mathrm d s = \mathcal E ^2 = \dfrac{\pi}{4},$$
therefore $$\mathcal E = \dfrac{\sqrt \pi}{2} = \int _0 ^{+\infty}\! \mathrm e ^{-x^2} \, \mathrm d x.$$



What I don't get is how does he relate $\mathcal E$ with the gamma function, that is, $$\Gamma \left( \frac{3}{2} \right) = \int _0 ^{+\infty}\! \mathrm e ^{-x^2} \, \mathrm d x = \mathcal E.$$
I have seen that $\Gamma \left( \frac{3}{2} \right) = \dfrac{\sqrt \pi }{2}$, but since we don't know the value of $\mathcal E$ yet (as this is what we are trying to prove), this is not a way to relate them.



Thank you for your help.



Answer



One definition of the Gamma function is $\Gamma(s)=\int_0^\infty x^{s-1}\exp (-x)\, dx=2\int_0^\infty x^{2s-1}\exp (-x^2)\, dx$ (substitute $x\mapsto x^2$ in the first integral), so your integral is $\frac12\Gamma(\frac12)$. One then need only use the identity $\Gamma(s+1)=s\Gamma(s)$. Indeed, the difference is $\int_0^\infty (x^s-sx^{s-1})\exp (-x)\, dx=[-x^s\exp (-x)]_0^\infty=0$.
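Both facts in the answer are easy to confirm numerically (illustrative Python; `math.gamma` is the standard-library Gamma function):

```python
import math

# Gamma(3/2) equals sqrt(pi)/2
print(math.gamma(1.5), math.sqrt(math.pi) / 2)

# and 2 * (integral of exp(-t^2) over [0, inf)) reproduces Gamma(1/2) = sqrt(pi)
n, upper = 100000, 10.0
h = upper / n
approx = h * sum(math.exp(-((k + 0.5) * h) ** 2) for k in range(n))  # midpoint rule
print(2 * approx, math.gamma(0.5))
```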


elementary set theory - Prove ${1,2,4,8,16,32,ldots}$ is countably infinite

I've been working on this question for a while now and, despite scouring my notes and the internet, I still haven't been able to come up with a good answer...



Prove that the set of numbers which are powers of 2 (i.e. $\{1,2,4,8,16,32,\ldots\}$) is a countably infinite set.




How would I go about proving this? Could I use proof by induction?



Any help would be greatly appreciated :)
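A proof would typically exhibit a bijection with $\mathbb{N}\cup\{0\}$ rather than use induction, namely $k\mapsto 2^k$. The pairing (not a proof, just an illustration; the helper names are my own) looks like this in Python:

```python
def to_power(k):
    """Candidate bijection from {0, 1, 2, ...} onto {1, 2, 4, 8, ...}."""
    return 2 ** k

def from_power(m):
    """Inverse map on the powers of two: m.bit_length() - 1 recovers the exponent."""
    return m.bit_length() - 1

powers = [to_power(k) for k in range(10)]
print(powers)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
print(all(from_power(m) == k for k, m in enumerate(powers)))  # True
```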

elementary number theory - Prove that $\sqrt 3$ is irrational

I have to prove that $\sqrt 3$ is irrational.
Let us assume that $\sqrt 3$ is rational. This means that for some integers $p$ and $q$ ($q\neq 0$) having no common factor other than 1,




$$\frac{p}{q} = \sqrt3$$



$$\Rightarrow \frac{p^2}{q^2} = 3$$



$$\Rightarrow p^2 = 3 q^2$$



This means that 3 divides $p^2$. This means that 3 divides $p$ (because every factor must appear twice for the square to exist). So we have, $p = 3 r$ for some integer $r$. Extending the argument to $q$, we discover that they have a common factor of 3, which is a contradiction.



Is this proof correct?

elementary number theory - Uniqueness of Extended Euclidean Algorithm



I'm doing a bit of extra reading on the Extended Euclidean Algorithm and had a side-thought that I couldn't find an answer to in the book.



I understand that the Extended Euclidean Algorithm can express the GCD of two numbers as a linear combination of those two numbers.



My question is: is the linear combination acquired unique? (My gut tells me it is not, but I'd like some verification, as I cannot produce a proof of uniqueness.)




If the answer is 'No', then my follow-up question is "What is so special about the specific linear combination acquired by the EEA?"


Answer



Given two integers $a$ and $b$, the Extended Euclidean algorithm calculates the $\gcd$ and the coefficients $x$ and $y$ of Bézout's identity: $ax+by=\gcd(a,b)$. These coefficients are not unique (see linked article).



The specific coefficients created by the algorithm satisfy these conditions: $$|x|<|\frac{b}{\gcd(a,b)}|$$
$$|y|<|\frac{a}{\gcd(a,b)}|$$
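To see the non-uniqueness concretely: if $(x,y)$ is one Bézout pair for $(a,b)$ with $g=\gcd(a,b)$, then so is $\bigl(x+t\,\tfrac{b}{g},\,y-t\,\tfrac{a}{g}\bigr)$ for every integer $t$. An illustrative Python sketch (`egcd` is my own helper, not a library call):

```python
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

a, b = 240, 46
g, x, y = egcd(a, b)
print(g, x, y)  # the specific Bezout pair the algorithm produces
for t in range(-2, 3):
    xt, yt = x + t * (b // g), y - t * (a // g)
    assert a * xt + b * yt == g  # every shifted pair works too
```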


Power series solution of $ f(x+y) = f(x)f(y) $ functional equation



Here on StackExchange I read a lot of interesting questions and answers about functional equations, for example a list of properties and links to questions is Overview of basic facts about Cauchy functional equation.



I'm interested in the following problem:
if $f:\mathbb{R} \rightarrow \mathbb{R}$ is a continuous function verifying the functional equation $f(x+y)=f(x)f(y), \ \forall x,y\in \mathbb{R}$, find its non identically zero solution using power series.




My attempt so far using power series:
let
$$f(x) = \sum_{n=0}^{\infty} a_{n} \, x^{n}$$
so
$$f(y) = \sum_{n=0}^{\infty} a_{n} \, y^{n} $$
and
$$f(x+y) = \sum_{n=0}^{\infty} a_{n} \, (x+y)^{n}$$



The functional equation $f(x+y)=f(x)f(y)$ leads to
$$\sum_{n=0}^{\infty} a_{n} \, (x+y)^{n}=\sum_{n=0}^{\infty} a_{n} \, x^{n}\sum_{n=0}^{\infty} a_{n} \, y^{n}$$




Using the binomial theorem
$$(x+y)^{n} = \sum_{k=0}^{n}\binom{n}{k}x^ky^{n-k}$$
and the Cauchy product of series
$$\sum_{n=0}^{\infty} a_{n} \, x^{n}\sum_{n=0}^{\infty} a_{n} \, y^{n} = \sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$
it follows
$$\sum_{n=0}^{\infty} a_{n} (\sum_{k=0}^{n}\binom{n}{k}x^ky^{n-k})=\sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$
$$\sum_{n=0}^{\infty}(\sum_{k=0}^{n} a_{n} \binom{n}{k}x^ky^{n-k})=\sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$



Now I need to equate the coefficients:
$$\forall n\in\mathbb N, \;\;\;\; \; a_{n} \binom{n}{k} = a_k a_{n-k} \;\; \textrm{for } k= 0,1,...,n $$




The first equation, for $n=0$, is $a_0=a_0a_0$, that is $a_0(a_0-1)=0$ with solutions $a_0=0$ and $a_0=1$. If $a_0=0$ every coefficient would be zero, so we have found the first term of the power series: $a_0=1$.



Now the problem is to determine the remaining coefficients. I tried, but it's too difficult to me.


Answer



From $a_n{n\choose n-1} = a_{n-1}a_1$ we have $a_n = a_{n-1}\dfrac{a_1}{n}$. So $a_n = \dfrac{a_1^n}{n!}$.



We know that the functional equation has as solutions the exponential functions $f(x) = a^x$ for some positive real number $a$. We are interested in whether there is a relation between $a$ and the coefficient $a_1$.



Let us call $f_{a_1}(x)$ the solution of the functional equation where the coefficients are $(a_1)^n/n!$ and let $e$ be the real number defined by $f_1(x)$, i.e. $f_1(x) = e^x$. Then the series expansion tells us that $f_1(a_1x) = f_{a_1}(x)$, i.e. $e^{a_1x} = a^x$. For $x = 1$ we have that $e^{a_1} = a$.




From the series expansion, one sees that $e^x$ is a strictly increasing function and it's continuous by definition. Thus it has a continuous inverse. Let us call $\ln(x) = f^{-1}_{1}(x)$. Then $a_1 = \ln(a)$.
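The coefficient formula can also be checked numerically: partial sums of $\sum_n a_1^n x^n/n!$ match $e^{a_1 x}$ (illustrative Python; `f_series` is my own name):

```python
import math

def f_series(a1, x, terms=60):
    """Partial sum of the power-series solution: sum of (a1*x)^n / n!."""
    return sum((a1 * x) ** n / math.factorial(n) for n in range(terms))

print(f_series(0.7, 2.0), math.exp(0.7 * 2.0))  # the two values agree
```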


real analysis - Disprove uniform convergence of sequence of functions $f_n: \mathbb{R} \to \mathbb{R}^2$, $f_n(x) = (\sin\frac{x}{n}, \cos\frac{x}{n})$




I have already found the pointwise limit: $f_n(x) \to f(x) = (0, 1)$ for every $x$.



I have a theorem that states "Let $D\subset\mathbb{R}^q$, and $D$ compact. Let $f, f_n:D\to\mathbb{R}^p$ and $f_n$ continuous for all $n\in\mathbb{N}$. Then $f_n$ converges uniformly to $f$ if and only if $\lim_{n\to\infty}\left\lVert f_n - f \right\rVert_D = 0$."



So, following an example from my professor, I let $f = (0, 1)$ and found that $\lim_{n\to\infty}\left\lVert f_n - f \right\rVert = \lim_{n\to\infty}\left\lVert (\sin\frac{x}{n}, \cos\frac{x}{n}) - (0, 1) \right\rVert = \lim_{n\to\infty}\left\lVert (\sin\frac{x}{n}, \cos\frac{x}{n} - 1) \right\rVert = \lim_{n\to\infty} \sqrt{\sin^2\frac{x}{n} + (\cos\frac{x}{n} - 1)^2} = \sqrt{0 + (1-1)^2} = 0$



Shouldn't this prove that in fact the function DOES converge uniformly? I'm supposed to prove that it does not. What am I missing here?


Answer



What you have proved is pointwise convergence, not uniform convergence. Suppose the convergence is uniform. Then there must be an integer $m$ such that $\|(\sin (\frac x n),\cos (\frac x n))-(0,1)\| <\frac 1 2$ for all $x \in \mathbb R$ and all $n \geq m$. Take $x=\frac {m\pi} 2$ and $n=m$ to get a contradiction.
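Numerically (an illustrative check, not part of the argument): at the witness points $x_n=\frac{n\pi}{2}$ the deviation from $(0,1)$ is always $\sqrt2$, so $\lVert f_n-f\rVert_{\mathbb R}$ never drops below $\frac12$.

```python
import math

def deviation(n, x):
    """Euclidean distance between (sin(x/n), cos(x/n)) and (0, 1)."""
    return math.hypot(math.sin(x / n), math.cos(x / n) - 1.0)

for n in range(1, 20):
    x = n * math.pi / 2           # the witness point from the answer
    assert abs(deviation(n, x) - math.sqrt(2)) < 1e-12
print("sup norm over R stays at least sqrt(2) at the witness points")
```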


Thursday 22 October 2015

number theory - How to find sum of powers from 1 to r



Say I have a number $n$ and an exponent $r$. How can we find the sum of all the powers $n^1 + n^2 + \cdots + n^r$? For example, if $n = 3$ and $r = 3$, then we can calculate it manually like this:



3 ^ 3 = 27
3 ^ 2 = 9
3 ^ 1 = 3

Sum = 39



Can we express this with a formula? I mean, can we create a function which takes $n$ and $r$ and returns this sum?



I have a background in programming but don't know much maths :-). I know that in any programming language we can write a function which calculates the sum using loops or recursion, but is there any other solution, so that I can find the sum without loops or recursion?



Thanks in advance


Answer



As I said in the comment, it is called a geometric series:




$$a^0+a^1+a^2+\ldots + a^n = \sum_{k=0}^n a^k = \frac{a^{n+1}-1}{a-1}$$



So in your case we do not begin with the exponent $0$ but with $1$, so we just subtract $a^0=1$:



$$a^1 + a^2+a^3 + \ldots + a^n = \frac{a^{n+1}-1}{a-1} - 1$$



In your concrete case $a=3$ and $n=3$:



$$3^1+3^2+3^3 = \frac{3^{4}-1}{3-1} -1 = 39$$




You can derive it as follows:



Let $$S = a^0 + a^1 + \ldots a^n.$$ Therefore



$$ a\cdot S = a^1 + a^2 \ldots + a^{n+1}.$$



So $$(a-1)S = aS-S = a^{n+1}-a^0 = a^{n+1} -1,$$ which, after dividing by $(a-1)$, results in:



$$S = \frac{a^{n+1}-1}{a-1}$$
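In programming terms, the closed form is exactly the loop-free function the question asks for. A sketch in Python (exact integer arithmetic; the $n=1$ case is special because the formula's denominator vanishes):

```python
def sum_of_powers(n, r):
    """Return n^1 + n^2 + ... + n^r via the geometric-series formula."""
    if n == 1:
        return r  # 1 + 1 + ... + 1, r times; the formula would divide by zero
    return (n ** (r + 1) - 1) // (n - 1) - 1  # subtract the n^0 = 1 term

print(sum_of_powers(3, 3))  # 39
```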


summation - Easy question regarding this proof




I do not understand a small step in a proof I'm reading at the moment. Why are the following things equal?



$$\sum_{k=1}^{n} \frac{1}{2k-1} - \frac{1}{2} \sum_{k=1}^{n} \frac{1}{k} = \sum_{k=1}^{2n} \frac{1}{k} - \sum_{k=1}^{n} \frac{1}{k}$$


Answer



Separating the terms with odd and even denominators,
$$\sum_{k=1}^{2n}\frac1k=\sum_{k=1}^n\left(\frac1{2k-1}+\frac1{2k}\right)$$



$$=\sum_{k=1}^n\frac1{2k-1}+\sum_{k=1}^n\frac1{2k}$$




$$=\sum_{k=1}^n\frac1{2k-1}+\frac12\sum_{k=1}^n\frac1k$$
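A quick numerical spot-check of the identity in question (illustrative):

```python
def lhs(n):
    # sum of 1/(2k-1) minus half the n-th harmonic number
    return sum(1 / (2 * k - 1) for k in range(1, n + 1)) - 0.5 * sum(1 / k for k in range(1, n + 1))

def rhs(n):
    # H_{2n} - H_n
    return sum(1 / k for k in range(1, 2 * n + 1)) - sum(1 / k for k in range(1, n + 1))

for n in (1, 5, 50):
    assert abs(lhs(n) - rhs(n)) < 1e-12
print("identity holds numerically")
```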


proof writing - Question on Induction (Very Simple)

I've just started a course in mathematics at university, and our current topic is mathematical induction.



I've been given the following question:




$$1+4+4^2+....+4^{n-1}=\frac{4^{n}-1}{3}.$$



I get the first step - showing $P(1)$: both sides equal $1$.



I get the second step - assume the statement holds for $n = k$.



It's this third step that gets me. I've seen it done a few ways.. but this is what I've got so far:



$$\begin{align} 1+4+4^2+...+4^{k-1}+4^{k} & = [1+4+4^2+...+4^{k-1}] +4^{k} \\ & = \frac{4^{k}-1}{3} + 4^{k} \end{align}$$




I've no idea where to go after this. I'm assuming I want it to look something like:



$$\frac{4^{k+1}-1}{3}.$$



However, no idea how to get there, or if that's even the direction I want to be heading in.



I would appreciate any help. Especially if there's an easier way of doing this.



Cheers. =)
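Your guess is the right target; one way to finish the algebra from the point reached above (a sketch: put the terms over the common denominator $3$):

```latex
\frac{4^{k}-1}{3} + 4^{k}
= \frac{4^{k}-1+3\cdot 4^{k}}{3}
= \frac{4\cdot 4^{k}-1}{3}
= \frac{4^{k+1}-1}{3}.
```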

analysis - Example of bijection from $\mathbb{Q} \to \mathbb{Q} \times \mathbb{Q}$

What would be an example of bijection between $\mathbb{Q}$ and $\mathbb{Q}\times \mathbb{Q}$.



I can think of one: $x \mapsto (x,x+1)$. Does this work? I am not sure.

Contour integration - complex analysis




I am trying to solve the following integral using a contour (large semi-circle connected to smaller semi-circle in the upper-half plane):
$$\int_0^{\infty} \frac{\log^4(x)}{1+x^2} dx.$$





I have split the contour into 4 parts - the large semi-circle, the small semi-circle, the part on the negative real axis and the part on the positive real axis.



The integral of the function over the contour is $2\pi i \sum Res(f)$, which is $\pi^5$. The function has poles: $i$ and $-i$, each of order $1$, but $i$ is the only pole contained in the contour.



The integral over the large semi-circle is $0$ as the large radius approaches infinity and the integral over the small semi-circle is $0$ as the small radius approaches $0$.



I take the real part of both sides and the following is left:



$$
\pi^5 = 2\int_0^{\infty} \frac{\log^4(x)}{1+x^2} dx

+ \int_{-\infty}^0 \frac{-6\pi^2\log^2(x) + \pi^4}{1+x^2} dx
$$



My final answer is $5\pi^5/8$, but the correct answer is $5\pi^5/32$.



Any suggestions? Thank you!


Answer



Feynman's trick isn't necessarily antagonistic to contour integration. They sometimes work together well.



$$J=\int^\infty_0\frac{\log^4(x)}{1+x^2}dx=I^{(4)}(0)$$




where
$$I(a)=\int^\infty_{0}\frac{x^{a}}{1+x^2}dx$$ with $-1<a<1$.






Take $C$ as a keyhole contour, centered at the origin, avoiding the positive real axis.



Let $f(z)=z^a(1+z^2)^{-1}$ with branch cut on the positive real axis, implying $z^a\equiv \exp(a(\ln|z|+i\arg z))$ where $\arg z\in[0,2\pi)$.







Firstly, by residue theorem,
$$\oint_C f(z)dz=2\pi i\bigg(\operatorname*{Res}_{z=i}f(z)+\operatorname*{Res}_{z=-i}f(z)\bigg)$$



We have
$$\operatorname*{Res}_{z=i}f(z)=\frac{\exp(a(\ln|i|+i\arg i))}{i+i}=\frac{e^{\pi ia/2}}{2i}$$
$$\operatorname*{Res}_{z=-i}f(z)=\frac{\exp(a(\ln|-i|+i\arg -i))}{-i-i}=-\frac{e^{3\pi ia/2}}{2i}$$



Thus,

$$\oint_C f(z)dz=\pi(e^{\pi ia/2}-e^{3\pi ia/2})$$






On the other hand,
$$\oint f(z)dz=K_1+K_2+K_3+K_4$$



where



$$

K_1=\lim_{R\to\infty}\int^{2\pi}_0 f(Re^{it})iRe^{it}dt
=\lim_{R\to\infty}2\pi f(Re^{ic})iRe^{ic}=0 \qquad{c\in[0,2\pi]}$$



$$K_2=\lim_{r\to0^+}\int_{2\pi}^0 f(re^{it})ire^{it}dt
=\lim_{r\to0^+}2\pi f(re^{ic})ire^{ic}=0 \qquad{c\in[0,2\pi]}$$



$$K_3=\int^\infty_0 f(te^{i0})dt=\int^\infty_0\frac{e^{i0}t^a}{t^2+1}dt=I$$



$$K_4=\int_\infty^0 f(te^{i2\pi})dt=-\int^\infty_0\frac{e^{2\pi ia}t^a}{t^2+1}dt=-e^{2\pi ia}I$$




For $K_1$ and $K_2$, note respectively the asymptotics $f(z)=O(z^{a-2})$ for large $|z|$ and $f(z)\sim z^{a}$ for small $|z|$.






Therefore,
$$I-e^{2\pi ia}I=\pi(e^{\pi ia/2}-e^{3\pi ia/2})$$
$$\implies I=\pi\frac{e^{\pi ia/2}-e^{3\pi ia/2}}{1-e^{2\pi ia}}
=\pi\frac{e^{-\pi ia/2}-e^{\pi ia/2}}{e^{-\pi i a}-e^{\pi ia}}
=\pi\frac{\sin(\pi a/2)}{\sin(\pi a)}
=\frac{\pi}2\sec\left(\frac{\pi a}2\right)

$$






Let $T=\tan(\pi a/2)$, $S=\sec(\pi a/2)$.
$$I^{(4)}(a)=\frac{\pi}2\cdot\frac{\pi^4\,S\left(T^4+18S^2T^2+5S^4\right)}{16}$$



Hence,
$$J=I^{(4)}(0)=\frac{\pi}2\frac{\pi^4(0+0+5\cdot 1)}{16}=\color{red}{\frac{5\pi^5}{32}}$$
The tedious differentiation is done by calculator. :)
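An independent numerical confirmation of the boxed value (illustrative Python; substituting $x=e^t$ turns the integral into $\int_{-\infty}^{\infty}\frac{t^4}{2\cosh t}\,dt$, which converges rapidly):

```python
import math

def integral_numeric(upper=40.0, n=400000):
    """Trapezoid rule for the integral of t^4 / (2 cosh t) over [-upper, upper]."""
    h = 2 * upper / n
    total = 0.0
    for k in range(n + 1):
        t = -upper + k * h
        w = 0.5 if k in (0, n) else 1.0  # half-weight endpoints
        total += w * t ** 4 / (2 * math.cosh(t))
    return h * total

print(integral_numeric(), 5 * math.pi ** 5 / 32)  # both approx 47.8156
```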



elementary number theory - Prove that any set of integers that are relatively prime in pairs are relatively prime




It seems pretty obvious, but how to prove?



I thought that maybe the way to go was by contradiction. So suppose that a set of integers is not relatively prime but pairs of members are coprime. We know for the set
$$S=\{a_1,a_2,a_3,...,a_{n-1},a_n\}$$
$(a_i,a_j)=1, \forall i,j, i\neq j$. If the set of integers were not coprime, then
$$(a_1,a_2,a_3,...,a_{n-1},a_n)=k$$
for some integer $k>1$. By the definition of the greatest common divisor, we know that $$k|a_i, \forall a_i\in S$$
However, for every pair $a_i, a_j$, the only positive integer that divides both is $1$, since $(a_i,a_j)=1$. Thus, no two members of $S$ share the common divisor $k>1$, which is a contradiction. Therefore, a set of integers that are relatively prime in pairs is also relatively prime.




Is this logical? I know there is a theorem that states $(a_1,a_2,...,a_{n-1},a_n)=((a_1,a_2,...,a_{n-1}),a_n)$, which might help the cause, but that exercise has not yet been reached in my textbook, and I think the linear fashion of the text should be upheld... Thoughts?


Answer



Pretty much as you have said. To put it more concisely, . . .



Let $d$ be a positive common factor of $a_1,\ldots,a_n$. Then $d$ is a common factor of $a_1,a_2$. Since by assumption the numbers are relatively prime in pairs, $d$ can only be $1$.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without using L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...