Thursday 31 May 2018

sequences and series - How to resolve this indeterminate form when calculating a limit?



I'm stuck trying to find the limit of the sequence $$\frac{\sqrt{12 + a_n} - \sqrt{4a_n}}{a_n^2 - 2a_n - 8}$$



where I'm given that $a_n > 4$ and $a_n \rightarrow 4$.



Both the numerator and the denominator tend to 0, and I can't find how to resolve this indeterminate form. I tried multiplying and dividing by the conjugate of the numerator to get rid of the square roots in the numerator, but that doesn't seem to lead anywhere. What else can I try?


Answer



Hint:




$$b^2-2b-8=(b-4)(b+2)$$



$$\sqrt{12+b}-\sqrt{4b}=-\dfrac{3(b-4)}{\sqrt{12+b}+\sqrt{4b}}$$



If $b\to4$ with $b\ne4$, then $b-4\ne0$, hence the factor $b-4$ can be cancelled safely.
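Combining the two hints, after cancelling $b-4$ the fraction becomes $\dfrac{-3}{\left(\sqrt{12+b}+\sqrt{4b}\right)(b+2)}$, which at $b=4$ equals $-\dfrac{3}{48}=-\dfrac{1}{16}$. A quick numerical sanity check (my own sketch, not part of the hint):

```python
# Evaluate the original expression for a_n slightly above 4;
# the values should approach -1/16 = -0.0625.
from math import sqrt

def f(b):
    return (sqrt(12 + b) - sqrt(4 * b)) / (b**2 - 2 * b - 8)

for b in [4.1, 4.01, 4.001, 4.0001]:  # a_n > 4, tending to 4
    print(b, f(b))
```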


Wednesday 30 May 2018

Modular congruence not adding up



I am having trouble really understanding modular congruence.



I understand it intuitively very well. However, I fear that my background in development is not helping.




Writing:



$x \equiv y \pmod m$



means that $x$ and $y$ have the same remainder after being divided by $m$.
For example:



$17 ≡ 20 \pmod 3$



As they both belong to the same "class" of numbers with a remainder of $2$ when divided by $3$.




MATHEMATICAL CRYPTOLOGY by Keijo Ruohonen confirms this:




The congruence $x \equiv y \pmod m$ says that when dividing $x$ and $y$ by $m$ the remainder is the same, or in other words, $x$ and $y$ belong to the same residue class modulo $m$




Then, a specific case comes by.



$59 ≡ -1 \pmod{60}$




Here my understanding breaks down. They both clearly belong to the same class (the numbers being "behind" a multiple of $60$, intuitively speaking). However, "dividing $x$ and $y$ by $m$ the remainder is the same" (Ruohonen) is no longer true, since $59 \% 60 = 59$, and $-1 \% 60 = -1$.



What am I missing?


Answer



You're misinterpreting the mathematical definition of division with remainder when it's extended to negative integers. Your statement -1 % 60 = -1 is NOT true.



Quoting from the Wikipedia article on Remainder:





If $a$ and $d$ are integers, with $d$ non-zero, it can be proven that there exist unique integers $q$ and $r$, such that $a=qd+r$ and $0\le r<|d|$. The number $q$ is called the quotient, while $r$ is called the remainder.




Note that by definition, the remainder can NOT be negative. That's one reason why your example is wrong: the remainder can't be "$-1$".



Here's one way to look at it (somewhat informally). For example, you said that $20$ has a remainder of $2$ when divided by $3$. Yes, that's true, but why? I bet you were taught to look for the largest multiple of $3$ that doesn't exceed $20$. This is going to be $18$, and then the remainder is $20-18=2$.



Well, all you gotta do now is apply exactly the same logic to negative numbers too! Let's find the remainder of $-20$ modulo $3$. What is the largest multiple of $3$ that doesn't exceed $-20$? It is NOT $-18$, because $-18>-20$, not less. Instead, the largest multiple of $3$ that doesn't exceed $-20$ is $-21$, and the remainder is $(-20)-(-21)=1$.



In terms of the definition, $a=\color{blue}{q}d+\color{red}{r}$, where $\color{blue}{q}$ is the quotient and $\color{red}{r}$ is the remainder, $0\le\color{red}{r}<|d|$, for these two examples we have: $20=\color{blue}{6}\cdot3+\color{red}{2}$ for the first one, and $-20=\color{blue}{(-7)}\cdot3+\color{red}{1}$ for the second one.




Same for your last example. What is the largest multiple of $60$ that doesn't exceed $-1$? It is NOT $0$, because $0>-1$, not less. Instead, the largest multiple of $60$ that doesn't exceed $-1$ is $-60$, and the remainder is $(-1)-(-60)=59$. In terms of the definition: $-1=\color{blue}{(-1)}\cdot60+\color{red}{59}$.
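Incidentally, since the question mentions a background in development: what `%` returns for negative operands is language-dependent. Here is a small illustration in Python, whose `%` happens to follow the mathematical convention above (C and Java would instead give `-1 % 60 == -1`):

```python
# Python's % always returns a remainder in [0, |d|) for a positive modulus.
print(-1 % 60)         # 59, matching -1 = (-1)*60 + 59
print(-20 % 3)         # 1, matching -20 = (-7)*3 + 1
print(divmod(-20, 3))  # (-7, 1): quotient and remainder together
```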


Tuesday 29 May 2018

calculus - How to compute the limit $\lim_{x \to \infty}\left[x\left(1+\frac{1}{x}\right)^x-ex\right]$?



How can I compute this limit? My first instinct was to convert the expression into a fraction and use l'Hopital's rule, but that didn't seem to be going anywhere. Are there any better approaches to evaluating this limit?




$$\lim_{x \to \infty}\left[x\left(1+\frac{1}{x}\right)^x-ex\right]$$


Answer



$$\ln\left[\left(1+\frac1x\right)^x\right]
=x\ln\left(1+\frac1x\right)=1-\frac1{2x}+O(x^{-2})$$

so
$$\left(1+\frac1x\right)^x
=e\exp\left(-\frac1{2x}+O(x^{-2})\right)=e\left(1-\frac1{2x}+O(x^{-2})
\right)$$

and so

$$x\left(1+\frac1x\right)^x-ex\to-\frac e2$$
as $x\to\infty$.
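A quick numeric check of this asymptotic result, as a sketch ($-e/2 \approx -1.35914$):

```python
from math import e

for x in [1e2, 1e3, 1e4]:
    print(x, x * (1 + 1/x)**x - e * x)  # tends to -e/2 ≈ -1.35914
```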


Monday 28 May 2018

Prove there are no other solutions of functional equation $f(x+y) = \frac{f(x)+f(y)}{1-f(x)f(y)}$



I have the following functional equation. Find all continuous functions $f:(-1,1) \to \mathbb R$ such that
$$
f(x+y)=\frac{f(x) + f(y)}{1 - f(x)f(y)}
$$
The first obvious solution is $f(x) \equiv 0$. Another, which I guessed, is $f(x) = \pm \tan x$. I suspect that there are no more solutions. The problem is that I don't know how to prove that.




Since we have a rational equation, I have no idea how to make any substitutions in order to get expression for $\tan x$ (as it is not a rational expression).



P.S. I can also show that it is true that (a) $f(0) = 0$ (take $x=y=0$), and (b) $f(-x) = -f(x)$ (take $y=-x$).



Update: there is a solution of this problem also here: https://artofproblemsolving.com/community/c6h386060


Answer



Define $g(x)=\arctan(f(x))$, so that $f(x)=\tan(g(x))$. Then $g:(-1,1)\to(-\pi/2,\pi/2)$ is continuous. The functional equation becomes
$$
\tan(g(x+y))=\frac{\tan(g(x))+\tan(g(y))}{1-\tan(g(x))\tan(g(y))}=\tan(g(x)+g(y)).

$$
The latter equality used the tangent angle addition formula. It follows that $g(x+y)-g(x)-g(y)\in \mathbb{Z}\cdot\pi$. The function $w(x,y):=g(x+y)-g(x)-g(y)$ is continuous and discretely-valued on the connected set $\{(x,y)\in(-1,1)^2:x+y\in(-1,1)\}$, so $w$ must be constant. Since $w(0,0)=0$, we have $w\equiv 0$, and we conclude that
$$
g(x+y)=g(x)+g(y),
$$
for all $x$, $y\in(-1,1)$ such that $x+y\in(-1,1)$. It is well-known that every continuous real-valued function on $\mathbb{R}$ that preserves addition is given by multiplication by a constant, and essentially the same proof works for functions on $(-1,1)$. I won't write out the details.



Since $g$ takes $(-1,1)$ into $(-\pi/2,\pi/2)$, we must have $g(x)=Cx$ for some $C\in [-\pi/2,\pi/2]$. We conclude that the only solutions to the original equation are
$$
f(x)=\tan(Cx)

$$
for $C\in[-\pi/2,\pi/2]$. The cases $C=0,\pm1$ are the solutions you found.
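As a sanity check (a sketch of mine, not part of the proof), one can confirm numerically that $f(x)=\tan(Cx)$ satisfies the functional equation for an arbitrary $C\in[-\pi/2,\pi/2]$:

```python
# Check the functional equation at random points, away from the poles of tan.
from math import tan, pi
import random

C = 0.9 * pi / 2
for _ in range(5):
    x = random.uniform(-0.4, 0.4)
    y = random.uniform(-0.4, 0.4)
    lhs = tan(C * (x + y))
    rhs = (tan(C*x) + tan(C*y)) / (1 - tan(C*x) * tan(C*y))
    print(abs(lhs - rhs) < 1e-9)  # True
```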


Sunday 27 May 2018

elementary number theory - how to start a proof of a system of congruences




Let $a$, $b$, $m$, and $n$ be integers with $m > 0$, $n > 0$, and $\gcd(m,n) = 1$. Then the system $x\equiv a \pmod{n}$, $x\equiv b \pmod{m}$ has a unique solution modulo $mn$.



This is not the Chinese Remainder Theorem just yet. That is the next proof. This is a proof leading up to it.



Help please!


Answer



Hint $\ $ If $\,x',x\,$ are two solutions then $\,m,n\mid x'-x\,$ so $\ \ldots\mid x'-x$
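The hint above settles uniqueness. For existence, here is a constructive sketch under my own naming (it assumes Python 3.8+, where `pow(m, -1, n)` computes a modular inverse): with $\gcd(m,n)=1$, take $u \equiv m^{-1} \pmod n$ and $v \equiv n^{-1} \pmod m$; then $x = aum + bvn$ satisfies both congruences.

```python
def crt(a, n, b, m):
    """Unique x mod m*n with x = a (mod n) and x = b (mod m); needs gcd(m,n)=1."""
    u = pow(m, -1, n)  # u*m = 1 (mod n)
    v = pow(n, -1, m)  # v*n = 1 (mod m)
    return (a * u * m + b * v * n) % (m * n)

x = crt(2, 5, 3, 7)
print(x, x % 5, x % 7)  # 17 2 3
```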


real analysis - I want to show that $f(x)=x\cdot f(1)$ where $f:\mathbb{R}\to\mathbb{R}$ is additive.











I know that if $f$ is continuous at one point then it is continuous at every point.
From this I want to show that $f(x)=xf(1)$.
Can anybody help me prove this?


Answer



HINTS:





  1. Look at $0$ first: $f(0)=f(0+0)=f(0)+f(0)$, so $f(0)=0=0\cdot f(1)$.


  2. Use induction to prove that $f(n)=nf(1)$ for every positive integer $n$, and use $f(0)=0$ to show that $f(n)=nf(1)$ for every negative integer as well.


  3. $f(1)=f\left(\frac12+\frac12\right)=f\left(\frac13+\frac13+\frac13\right)=\dots\;$.


  4. Once you’ve got it for $f\left(\frac1n\right)$, use the idea of (2) to get it for all rationals.


  5. Then use continuity at a point.



elementary number theory - Do we need to apply the Euclidean Algorithm before applying the Extended Euclidean Algorithm?



As the title says, do we need to apply the Euclidean Algorithm before applying the Extended Euclidean Algorithm?



For example, we have $\gcd(24,17)$, so we can find $x,y$ such that $24x+17y=1$. Applying the Euclidean algorithm:



$$\gcd(24,17)=\gcd(7,17)=\gcd(7,3)=\gcd(1,3)=1$$




Applying the Extended Euclidean algorithm:



\begin{align*} 1 &= 7-2\cdot3 \\ &= 7-2\cdot(17-2\cdot7) \\ &= 5\cdot7-2\cdot17 \\ &=5\cdot(24-17)-2\cdot17 \\ &=5\cdot24-7\cdot17 \end{align*}



Is there a way to do this without first applying the Euclidean algorithm?


Answer



As the name suggests, the extended Euclidean algorithm is an extension of the classical algorithm: the normal Euclidean algorithm is part of the extended one.

  • normal: provides $\gcd(n,m)$

  • extended: provides $\gcd(n,m)$ plus the two numbers $x,y$ such that $\gcd(n,m) = nx + my$, where $n,m$ are the input numbers




So to answer your original question: you don't have to run the classical Euclidean algorithm separately, since the extended version computes the gcd along the way.
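For concreteness, here is a minimal one-pass sketch (my own code, in the standard iterative form): it runs the Euclidean division steps once, carrying the coefficients along, and returns $\gcd(n,m)$ together with $x,y$ such that $nx+my=\gcd(n,m)$.

```python
def extended_gcd(n, m):
    old_r, r = n, m
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y  # gcd, x, y

print(extended_gcd(24, 17))  # (1, 5, -7), i.e. 5*24 - 7*17 = 1
```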



Look at this simple example from Wikipedia.



I hope this is useful for you.


calculus - What does the set of the geometric images of $z$, such that $z+\overline{z}+|z^2|=0$, define?




What does the set of the geometric images of $z$, such that
$z+\overline{z}+|z^2|=0$, define?



A) A circumference of center (0,-1) and radius 1.




B) Two lines, of equations $y=x+2$ and $y=x$.



C)A circumference of center (1,0) and radius 1.



D)A circumference of center (-1,0) and radius 1.




I tried to simplify the expression:




$$z+\overline{z}+|z^2|=0 \Leftrightarrow \\
x+yi+x-yi+|(x+yi)^2|=0\Leftrightarrow \\
2x+\sqrt{(x^2-y^2)^2+(2xy)^2}=0 \Leftrightarrow \\
\sqrt{(x^2-y^2)^2+(2xy)^2}=-2x \Leftrightarrow \\
(x^2-y^2)^2+(2xy)^2=4x^2 \Leftrightarrow \\
x^4 - 2 x^2 y^2 + y^4+4x^2y^2 = 4x^2 \Leftrightarrow \\
x^4+y^4+2x^2y^2 = 4x^2 \Leftrightarrow \\
???$$



How do I continue from here?




I have also been thinking that if the sum of those three numbers is zero then they could be the vertices of a triangle. I rewrote the expression:



$$\rho \cdot cis(\theta)+\rho cis(-\theta)+|\rho^2 \cdot cis(2\theta)|=0 \Leftrightarrow \\
\rho \cdot cis(\theta)+\rho cis(-\theta)+\rho^2 =0 \Leftrightarrow \\
\rho(cis(\theta)+cis(-\theta)+\rho) = 0 \Leftrightarrow \\
\rho = 0 \lor cis(\theta)+cis(-\theta)+\rho = 0 \Leftrightarrow \\
cis(\theta)+cis(-\theta) = -\rho \Leftrightarrow \\
\cos(\theta)+\sin(\theta)i+\cos(-\theta)+\sin(-\theta)i = -\rho \Leftrightarrow \\
\cos(\theta)+\cos(\theta) = -\rho \Leftrightarrow \\

2\cos(\theta) = -\rho \Leftrightarrow \\
\rho = -2\cos(\theta)$$



This means that $\rho$ will be between 0 and 2. If $|z^2|=\rho^2 = \rho^2 cis(0)$, then one of the vertices is $4$.



But what do I do next? How do I solve this?


Answer



That solution is very laboured: $|z^2|=x^2+y^2$ so the curve's equation is
$$x^2+y^2+2x=0$$
or

$$(x+1)^2+y^2=1.$$
I'm sure you can identify the curve now.
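That is option D: the circle of center $(-1,0)$ and radius $1$. A quick numerical confirmation (a sketch): points on that circle satisfy the original equation.

```python
import math

for k in range(8):
    t = 2 * math.pi * k / 8
    z = complex(-1 + math.cos(t), math.sin(t))   # on (x+1)^2 + y^2 = 1
    print(abs(z + z.conjugate() + abs(z * z)) < 1e-12)  # True
```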


Saturday 26 May 2018

calculus - continued square root function



A solution of the following equation
$$x=\sqrt{2+{\sqrt{2-{\sqrt{2+{\sqrt{2-x}}}}}}}$$
is $\frac{1+\sqrt5}{2}$.
This is solution of
$$x=\sqrt{2+{\sqrt{2-x}}}$$




Do all equations of this type (repeating the same pattern) always have the same solutions?
Can you explain why?


Answer



Note the following identities: $$2+2\sin(\theta) = 4\sin^2(\theta/2+\pi/4)$$ $$2-2\sin(\theta) = 4\sin^2(\pi/4-\theta/2)$$



Consider $$x=\sqrt{2+{\sqrt{2-{\sqrt{2+{\sqrt{2-x}}}}}}}$$



where $x = 2\sin(\theta)$.



Using the above identities, it must be that




$$2\sin(\theta)=\sqrt{2+{\sqrt{2-{\sqrt{2+2\sin(\pi/4-\theta/2)}}}}}$$



$$2\sin(\theta)=\sqrt{2+{\sqrt{2-2\sin(3\pi/8-\theta/4)}}}$$



$$2\sin(\theta)=\sqrt{2+2\sin(\theta/8+\pi/16)}$$



$$\sin(\theta) = \sin(\theta/16 + 9\pi/32)$$



$$\theta = 3\pi/10$$




Note that $$2\sin\left(\dfrac{3\pi}{10}\right) = \dfrac{1+\sqrt{5}}{2}.$$
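A fixed-point iteration makes this concrete (a sketch of mine): iterating the map $x \mapsto \sqrt{2+\sqrt{2-x}}$ converges to the golden ratio.

```python
from math import sqrt, sin, pi

x = 1.0
for _ in range(50):
    x = sqrt(2 + sqrt(2 - x))
print(x)                     # 1.618033988749895
print((1 + sqrt(5)) / 2)     # the same value
print(2 * sin(3 * pi / 10))  # the same value
```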



I would conjecture that, yes, equations in that general form will have a solution in the form of the sine or cosine of some rational multiple of $\pi$.



It is worth noting that $\sin\!\left(\!\dfrac{p}{q}\pi\!\right)$ will be algebraic per this thread.


calculus - Convergence of $\sum_{n=1}^{\infty} \frac{\sin(n)}{n}$




I am trying to argue that
$$
\sum_{n=1}^{\infty} \frac{\sin(n)}{n}
$$
is divergent. My feeling is that it must be divergent because $\sin(n)$ is bounded and there is an $n$ on the bottom. But I have to use one of the tests in Stewart's Calculus book and I can't figure it out. I can't use the Comparison Tests or the Integral Test because they require positive terms. I can't take absolute values; that would only show that it is not absolutely convergent (and so it might still be convergent). The Divergence Test also doesn't work.



I see from this question:



Evaluate $ \sum_{n=1}^{\infty} \frac{\sin \ n}{ n } $ using the fourier series




that the series is actually convergent, but using some math that I don't know anything about. My questions are



(1) Is this series really convergent?



(2) Can this series be handled using the tests in Stewart's Calculus book?


Answer



One may apply the Dirichlet test, noticing that




  • $\displaystyle \frac1{n+1} \le \frac1{n}$



  • $\displaystyle \lim_{n \rightarrow \infty}\frac1{n} = 0$


  • $\displaystyle \left|\sum^{N}_{n=1}\sin
    n\right|=\left|\text{Im}\sum^{N}_{n=1}e^{in}\right| \leq
    \left|e^i\frac{1-e^{iN}}{1-e^i}\right| \leq \frac{2}{|1-e^i|}<\infty,\qquad N\ge1,$



giving the convergence of the given series.
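For what it's worth, a numeric sketch of the partial sums (using the known closed form $\frac{\pi-1}{2}$ for comparison, which the linked Fourier-series question derives):

```python
from math import sin, pi

s = 0.0
for n in range(1, 200001):
    s += sin(n) / n
print(s, (pi - 1) / 2)  # both ≈ 1.0708
```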


Friday 25 May 2018

real analysis - Prove the convergence of a sequence based on sub-sequence



Suppose that every convergent sub-sequence of $a_n$ converges to zero. Prove that $a_n$ converges to zero.



I tried to prove this statement using the definition of limit. From the question, I know that $a_{n_k} \to 0$ for every convergent subsequence $(a_{n_k})$. I want to show that $\forall \epsilon > 0$, $\exists N \in \mathbb{N}$ such that $\forall n > N$, we have $|a_n - 0| < \epsilon$.


Answer



First of all, $(a_{n})$ must be bounded: if not, one can find a subsequence $(a_{n_{k}})$ such that $|a_{n_{k}}|\rightarrow\infty$; if your notion of convergence includes convergence to infinity, this violates the assumption.



Now choose a subsequence $(a_{n_{k}})$ with $\lim_{k\rightarrow\infty}|a_{n_{k}}|=\limsup_{n\rightarrow\infty}|a_{n}|$. Since $(a_n)$ is bounded, $(a_{n_{k}})$ has a further convergent subsequence, which by assumption tends to zero, so $\limsup_{n\rightarrow\infty}|a_{n}|=0$. Hence $0\leq\liminf_{n\rightarrow\infty}|a_{n}|\leq\limsup_{n\rightarrow\infty}|a_{n}|=0$, and we then have $\lim_{n\rightarrow\infty}|a_{n}|=0$.


real analysis - Monotonicity of $\ell_p$ norm



Consider an $n$-dimensional space. It is known (Wikipedia) that for $p>r>0$, we have




$$
\|x\|_p\leq\|x\|_r\leq n^{(1/r-1/p)}\|x\|_p.
$$



I have two questions about the above inequality.



$(\bf 1)$. The first is how to show $\|x\|_p\leq\|x\|_r$ when $p,r\leq1$. When $p>r\geq1$, we can define $$f(s)=\|x\|_s,\,\,s\geq1$$ and find out that $$f'(s)=\|x\|_s\left\{-\frac{1}{s^2}\log(\sum_i|x_i|^s)+\frac{1}{s}\frac{\sum_i|x_i|^s\log(|x_i|)}{\sum_i|x_i|^s}\right\}.$$



Then by the concavity of the $\log$ function, we can see that $$\frac{\sum_i|x_i|^s\log(|x_i|)}{\sum_i|x_i|^s}\leq \log\left(\sum_i\frac{|x_i|^s}{\sum_j|x_j|^s}\cdot|x_i|\right).$$
Let $$y_i=\frac{|x_i|^s}{\sum_j|x_j|^s},$$ it is easy to see $\|y\|_{s^*}\leq1$, where $s^*\geq1$ and $1/s+1/s^*=1$. Then, the Hölder's inequality leads to

$$\frac{\sum_i|x_i|^s\log(|x_i|)}{\sum_i|x_i|^s}\leq \log\left(\sum_i\frac{|x_i|^s}{\sum_j|x_j|^s}\cdot|x_i|\right)= \log\left(\sum_iy_i\cdot|x_i|\right)\leq\log(\|x\|_s\|y\|_{s^*})\leq\log\|x\|_s.$$
Therefore, we can conclude $f'(s)\leq0$ and $\|x\|_p\leq\|x\|_r$ is satisfied. However, when $p,r<1$, we do not have $s^*\geq1$ and $\|y\|_{s^*}\leq1$. The last step does not work any more.



(${\bf 2}$). My second question is how to show $\|x\|_r\leq n^{(1/r-1/p)}\|x\|_p.$ In fact, I was trying to show this by solving the following optimization problem:



$$
\max_{\|x\|_p\leq1} \|x\|_r.
$$
But it seems difficult to derive a closed-form solution. The objective function is non-smooth. Is there any elegant way to solve the above optimization problem?




Can anyone give me a hint? Thanks a lot.


Answer



This answers your first question



As for the second question: consider the Hölder inequality
$$
\sum\limits_{i=1}^n |a_ib_i|\leq \left(\sum\limits_{i=1}^n |a_i|^{s/(s-1)}\right)^{1-1/s}\left(\sum\limits_{i=1}^n |b_i|^s\right)^{1/s}
$$
with $a_i=1$, $b_i=|x_i|^r$, $s=p/r$.
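Both inequalities are easy to test empirically; here is a small sketch (my own, with $r<1$ allowed, where $\|\cdot\|_r$ is only a quasi-norm but the inequality still holds):

```python
import random

def pnorm(x, p):
    return sum(abs(t)**p for t in x)**(1/p)

n, p, r = 10, 3.0, 0.5
for _ in range(5):
    x = [random.uniform(-1, 1) for _ in range(n)]
    a, b = pnorm(x, p), pnorm(x, r)
    print(a <= b <= n**(1/r - 1/p) * a)  # True
```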


real analysis - Is there a continuous bijection from open unit interval to open unit disc?



I came across a problem asking the reverse of the titular question:




Prove that there is no continuous bijection from $B_1(\mathbb{0}) = \{ (x,y) \in \mathbb{R}^2 : x^2 + y^2 < 1\}$ to $(0,1)$.





This I did as follows:



Suppose for the sake of contradiction that there is such a map $f$. Consider the map $g : B_1(\mathbb{0}) \setminus \{ \mathbb{0} \} \to (0,1) \setminus \{ f(\mathbb{0}) \}$ given by $g(\mathbb{x}) = f(\mathbb{x})$. Then $g$ is also a continuous bijection. The continuous image of a connected set must be connected, and the domain is connected, so the image must be contained in either $(0,f(\mathbb{0}))$ or $(f(\mathbb{0}),1)$, which contradicts surjectivity. Hence, proved.






I then decided to try and prove the claim in the other direction:





Q. Is there a continuous bijection $f : (0,1) \to B_1(\mathbb{0})$?




But, the same idea failed to work.



After removing a point from the domain and its image from the codomain, I have two disjoint intervals on the one hand and a connected space on the other. I tried taking a path in the codomain connecting two points, each lying in the image of the two components of the domain. But since the inverse of $f$ is not assumed to be continuous, I couldn't proceed further.



I'm not sure what to do next. Can someone help me prove or disprove the claim? Thanks!


Answer



Note that $(0,1)$ is the countable union of the compact sets $\left[\frac1n,1-\frac1n\right]$ ($n>2$). It turns out that the interior of $C_n=f\left(\left[\frac1n,1-\frac1n\right]\right)$ is empty. Suppose otherwise. Since $\left[\frac1n,1-\frac1n\right]$ is compact and $B_1(0)$ is Hausdorff, $f$ induces a homeomorphism from $\left[\frac1n,1-\frac1n\right]$ onto $C_n$. Therefore, the restriction of $f^{-1}$ to an open ball contained in $C_n$ is a homeomorphism from that open ball in $B_1(0)$ onto a subset of $(0,1)$. That cannot be by the connectedness argument that you mentioned.




So, $f\bigl((0,1)\bigr)$ is a countable union of closed sets with empty interior. Now, the Baire Category Theorem tells us that $f\bigl((0,1)\bigr)$ cannot be $B_1(0)$.


Thursday 24 May 2018

abstract algebra - Simple proof for independence of square roots of two primes



Consider the following problem:





Let $p$ and $q$ be two distinct prime numbers. Show that $\sqrt{p}$
and $\sqrt{q}$ are independent over $\mathbb{Q}$, which means that:



$\forall a,b \in \mathbb{Q}: a\sqrt{p} + b\sqrt{q} = 0 \Rightarrow a = b = 0$




I'm well aware how to prove this for a sequence $p_i$ of primes and thus a sequence $\sqrt{p_i}$ of prime roots using Galois theory, but I want to show some students a very elemental and short proof for just two prime roots. Those students are only at the beginning of an elemental algebra course and did not learn about anything like field traces. Is this possible?


Answer



I wanted to construct a proof of this using as elementary means as possible, avoiding if at all feasible "big gun" results such as the fundamental theorem of arithmetic, which in the following has been supplanted by repeated application of Bezout's identity:




If $\sqrt p$ and $\sqrt q$ are dependent over $\Bbb Q$, they satisfy a relation of the form



$r\sqrt p + s\sqrt q = 0, \; 0 \ne r, s \in \Bbb Q; \tag 0$



by clearing the denominators of $r$ and $s$ we find there exist $0 \ne a, b \in \Bbb Z$ with



$a\sqrt p + b\sqrt q = 0, \tag 1$



and we may clearly further assume




$\gcd(a, b) = 1; \tag 2$



from (1) we have, upon multiplication by $\sqrt p$,



$ap + b\sqrt{pq} = 0, \tag 3$



whence



$ap = -b\sqrt{pq}; \tag 4$




we square:



$a^2 p^2 = b^2 pq, \tag 5$



and divide through by $p$:



$a^2 p = b^2 q \Longrightarrow p \mid b^2 q; \tag 6$



now since $p, q \in \Bbb P$ are distinct, $p \ne q$, we have




$\gcd(p, q) = 1, \tag 7$



and thus



$\exists x, y \in \Bbb Z, \; xp + yq = 1, \tag 8$



which further implies



$xpb^2 + yqb^2 = b^2 \Longrightarrow p \mid b^2, \tag 9$




since



$p \mid pb^2, \; p \mid qb^2; \tag{10}$



now with $p \in \Bbb P$,



$p \not \mid b \Longrightarrow \gcd(p, b) = 1, \tag{11}$



whence




$\exists z, w \in \Bbb Z, \; zp + wb = 1, \tag{12}$



and so



$zpb + wb^2 = b \Longrightarrow
p \mid b \Rightarrow \Leftarrow p \not \mid b, \tag{13}$



as assumed in (11); thus in fact



$p \mid b \Longrightarrow \exists c \in \Bbb Z, \; b = pc \Longrightarrow b^2 = p^2c^2, \tag{14}$




and thus (6) becomes



$a^2 p = c^2p^2 q \Longrightarrow a^2 = c^2pq \Longrightarrow p \mid a^2; \tag{15}$



now repeating in essence the argument of (11)-(13) proves that $p \mid a$, which is of course precluded by (2), lest $p \mid \gcd(a, b) = 1$.



We thus see that there can be no relation of the form (0) for distinct $p, q \in \Bbb P$; that is, $\sqrt p$ and $\sqrt q$ are independent over $\Bbb Q$.



The informed reader, upon careful scrutiny, will note that this demonstration also has much in common with the classic proof that $\sqrt 2 \notin \Bbb Q$, which truth be told inspired my conception of this answer.



Wednesday 23 May 2018

linear algebra - Determining a matrix given the characteristic and minimal polynomial



Let $p_a=(x-2)^2(x-7)^4x$ be the characteristic polynomial of the matrix $A$ and $(x-2)^2(x-7)x$ the minimal polynomial. Determine the matrix $A$.




My work: I know the matrix has to be $7\times 7$ and its diagonal must contain two $2$s, four $7$s and one $0$, so:



\begin{bmatrix}{}
2& & & & & & \\
& 2& & & & &\\
& & 7 & & & &\\
& & & 7 & & &\\
& & & & 7& & \\
& & & & & 7 &\\
& & & & & & 0\\ \end{bmatrix}




I don't know how to follow, what information gives me the minimal polynomial?


Answer



The minimal polynomial in this case gives you the information about the relevant Jordan blocks. Since it has $(x-2)^2$ as a factor, you must have one $2 \times 2$ Jordan block associated to the eigenvalue $2$ (and not two $1 \times 1$ Jordan blocks). To see why, note that the minimal polynomial of



$$ \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} $$



is $(x - 2)$ while the minimal polynomial of



$$ \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} $$




is $(x - 2)^2$.



Similarly, since the minimal polynomial has $(x-7)$ as a factor only to the first power, all the Jordan blocks associated to the eigenvalue $7$ must be $1 \times 1$. Hence, $A$ is similar to the matrix



$$ \begin{pmatrix} 2 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 7 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 7 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 7 & 0 & 0 \\

0 & 0 & 0 & 0 & 0 & 7 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{pmatrix}. $$
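If sympy is available, the claim about this matrix can be checked directly (a sketch; I verify the minimal polynomial by evaluating candidate annihilating polynomials rather than calling a dedicated routine):

```python
import sympy as sp

A = sp.diag(sp.Matrix([[2, 1], [0, 2]]), 7, 7, 7, 7, 0)  # the matrix above
I = sp.eye(7)
x = sp.symbols('x')

print(sp.factor(A.charpoly(x).as_expr()))              # x*(x - 2)**2*(x - 7)**4
print((A - 2*I)**2 * (A - 7*I) * A == sp.zeros(7, 7))  # True: (x-2)^2 (x-7) x kills A
print((A - 2*I) * (A - 7*I) * A == sp.zeros(7, 7))     # False: (x-2)^2 really is needed
```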


Tuesday 22 May 2018

calculus - how to integrate $\int_{-\infty}^{+\infty} \frac{\sin(x)}{x}\,dx$?











How can I do this integration using only calculus?
(not Laplace transforms or complex analysis)



$$
\int_{-\infty}^{+\infty} \frac{\sin(x)}{x} \,dx
$$



I searched for solutions not involving Laplace transforms or complex analysis but I could not find any.


Answer




Putting rigor aside, we may do like this:
$$\begin{align*}
\int_{-\infty}^{\infty} \frac{\sin x}{x} \; dx
&= 2 \int_{0}^{\infty} \frac{\sin x}{x} \; dx \\
&= 2 \int_{0}^{\infty} \sin x \left( \int_{0}^{\infty} e^{-xt} \; dt \right) \; dx \\
&= 2 \int_{0}^{\infty} \int_{0}^{\infty} \sin x \, e^{-tx} \; dx dt \\
&= 2 \int_{0}^{\infty} \frac{dt}{t^2 + 1} \\
&= \vphantom{\int}2 \cdot \frac{\pi}{2} = \pi.
\end{align*}$$
The defects of this approach are as follows:





  1. Interchanging the order of two integral needs justification, and in fact this is the hardest step in this proof. (There are several ways to resolve this problem, though not easy.)

  2. It is nothing but a disguise of Laplace transform method. So this calculation contains no new information on the integral.
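Setting rigor aside once more, the value is easy to confirm numerically (a sketch assuming scipy is available), via the sine integral $\operatorname{Si}(x)=\int_0^x \frac{\sin t}{t}\,dt \to \frac{\pi}{2}$:

```python
import numpy as np
from scipy.special import sici

si, _ = sici(1e6)      # Si(1e6) ≈ pi/2
print(2 * si, np.pi)   # both ≈ 3.14159...
```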


calculus - Why doesn't using the approximation $\sin x\approx x$ near $0$ work for computing this limit?




The limit is
$$\lim_{x\to0}\left(\frac{1}{\sin^2x}-\frac{1}{x^2}\right)$$
which I'm aware can be rearranged to obtain the indeterminate $\dfrac{0}{0}$, but in an attempt to avoid L'Hopital's rule (just for fun) I tried using the fact that $\sin x\approx x$ near $x=0$. However, the actual limit is $\dfrac{1}{3}$, not $0$.



In this similar limit, the approximation reasoning works out.


Answer



If we take one more term in the Taylor expansion:



\begin{align}

\sin x&\approx x-\frac{x^3}6+\cdots\\
\sin^2 x&\approx x^2-2x\frac{x^3}6+\cdots\\
\frac 1{\sin^2 x}&\approx\frac 1{x^2-x^4/3}\\
&=\frac 1{x^2}\cdot\frac 1{1-x^2/3}\\
\lim_{x\to 0}\left[\frac 1{\sin^2 x}-\frac 1{x^2}\right]&=\lim_{x\to 0}\left[\frac 1{x^2}\left(\frac 1{1-x^2/3}-1\right)\right]\\
&=\lim_{x\to 0}\left[\frac 1{x^2}\cdot\frac{1-1+x^2/3}{1-x^2/3}\right]\\
&=\lim_{x\to 0}\frac 1{3-x^2}\\
&=\frac 1 3
\end{align}







To see where the first-order expansion went wrong, it's necessary to keep track of where the error term goes:



\begin{align}
\sin x&= x+\text{O}(x^3)\\
\sin^2 x&=x^2+2x\text{O}(x^3)+\text{O}(x^3)^2\\
&=x^2+\text{O}(x^4)+\text{O}(x^6)\\
&=x^2+\text{O}(x^4)\\
\frac 1{\sin^2 x}&=\frac 1{x^2+\text{O}(x^4)}\\

&=\frac 1{x^2}\cdot\frac 1{1+\text{O}(x^2)}\\
\frac 1{\sin^2 x}-\frac 1{x^2}&=\frac 1{x^2}\left[\frac 1{1+\text{O}(x^2)}-1\right]\\
&=\frac 1{x^2}\cdot\frac{1-1+\text{O}(x^2)}{1+\text{O}(x^2)}\\
&=\frac{\text{O}(x^2)}{x^2}\cdot\frac 1{1+\text{O}(x^2)}\\
&=\text{O}(1)
\end{align}



Thus the $\sin x\approx x$ approximation is not accurate enough to estimate even the constant term of the expression in the limit. (Note that it does allow us to say that there are no $\text{O}(x^{-1})$ or bigger terms, so the limit probably won't diverge.)
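A quick numeric check of the limit itself (a sketch):

```python
from math import sin

for x in [0.1, 0.01, 0.001]:
    print(x, 1 / sin(x)**2 - 1 / x**2)  # tends to 1/3
```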


calculus - Calculating this limit without l'Hospital/Taylor/derivatives



Let's consider $$a_n = n^2 \log \left(\cos \frac 1n\right)$$




It's easy to calculate $$\lim_{n\to\infty} a_n$$ by using l'Hospital/Taylor. But how to do it without using anything that involves derivatives? (Pure sequences!)


Answer



I'll use two facts: 1. $\lim_{x\to 0}\sin x/x = 1$. 2. $\lim_{x\to 0}(1+ax +o(x))^{1/x} = e^a$ for any constant $a$.



From 1. we get, as $x\to 0,$



$$\frac{1-\cos x}{x^2} = \frac{1}{1+\cos x}\frac{1-\cos^2 x}{x^2} = \frac{1}{1+\cos x}\frac{\sin^2 x}{x^2} \to \frac{1}{2}\cdot 1^2 = \frac{1}{2}.$$



This shows $\cos x = 1 - (1/2)x^2 + o(x^2).$ Therefore




$$[\cos(1/n)]^{n^2} = [1+(-1/2)/n^2 + o(1/n^2)]^{n^2} \to e^{-1/2},$$



where we have used 2. above. Now apply $\ln$ to see the desired limit is $-1/2.$
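A numeric sketch confirming the value:

```python
from math import cos, log

for n in [10, 100, 1000]:
    print(n, n**2 * log(cos(1 / n)))  # tends to -1/2
```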


polynomials - Why is it that the roots of a quadratic is closest when the discriminant is smallest (but non-zero)?



My teacher told me earlier today that the roots of a quadratic are the closest when the discriminant is smallest, but non-zero. I wasn't able to understand why that is the case, however.



Is it correct to say that the discriminant is the distance between the roots? And since you can't have negative distance, that's why the roots are imaginary when the discriminant is negative?


Answer



If the roots coincide the discriminant is $0$ ($(x-r)^2=x^2-2rx+r^2$ has discriminant $0$) so it is natural to imagine that a very small discriminant means that the roots are close. However, the relation between the difference and the discriminant is not as simple as you might like.




If the quadratic is $ax^2+bx+c$ with discriminant $\Delta =b^2-4ac$ then the roots are $r_{\pm}=\frac {-b\pm \sqrt {\Delta}}{2a}$ so the difference is $$r_+-r_-=\frac {\sqrt {\Delta}}a$$



Thus $$\boxed {\Delta = a^2(r_+-r_-)^2}$$
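A numeric illustration of the boxed identity on a sample quadratic (a sketch; the coefficients are chosen arbitrarily):

```python
a, b, c = 2.0, 3.0, -5.0
disc = b * b - 4 * a * c            # 49.0
rp = (-b + disc**0.5) / (2 * a)
rm = (-b - disc**0.5) / (2 * a)
print(disc, a**2 * (rp - rm)**2)    # 49.0 49.0
```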


Monday 21 May 2018

abstract algebra - How to solve system of equations with mod?



I'm trying to solve for $a$ and $b$:
$$5 \equiv (4a + b)\bmod{26}\quad\text{and}\quad22\equiv (7a + b)\bmod{26}.$$
I tried looking it up online, and the thing that seemed most similar was the Chinese remainder theorem; however, I couldn't find an instance where it fit something more like what I want to solve. A simple explanation or a reference to one would be most appreciated.




With my (limited) knowledge of algebra, I figured out that $x\ \textrm{mod}\ 26 = x - 26\lfloor\frac{x}{26}\rfloor$, so I tried substituting that into my equations:
$$5=(4a+b)-26\left\lfloor\frac{4a+b}{26}\right\rfloor\quad\text{and}\quad 22=(7a+b)-26\left\lfloor\frac{7a+b}{26}\right\rfloor.$$



And I figured I could do something with that since I got rid of the mod, but... I have never solved an equation with a floor function before.


Answer



Well, mod is easier to handle. We have only $m$ numbers $\pmod m$: $0,1,\dots,m-1$ and already $m\equiv 0$ (also, $-1\equiv m-1$), it goes in a cycle just like the hours in a day $\pmod{12}$.



Precisely, $a\equiv b \pmod m$ means $\ m|(a-b)$, and the arithmetic operations such as $+,-,\cdot$ are very friendly with it, $\equiv$ acts just like an equation.



You can try to solve it, like $b\equiv 22-7a \pmod{26}$, then substitute it back to the other, $5\equiv -3a+22 $, so $3a\equiv 17$, but $\pmod{26}$ this $17$ can be substituted by $-9$ for example (because $17\equiv -9 \pmod{26}$), and $3$ is coprime to $26$ so one can divide by $3$.
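Carried to the end (a sketch; `pow(3, -1, 26)` needs Python 3.8+): dividing by $3$ means multiplying by its inverse mod $26$, which gives $a \equiv 23$ and then $b \equiv 17 \pmod{26}$.

```python
a = (pow(3, -1, 26) * 17) % 26             # from 3a = 17 (mod 26)
b = (22 - 7 * a) % 26
print(a, b)                                # 23 17
print((4 * a + b) % 26, (7 * a + b) % 26)  # 5 22: both congruences hold
```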



elementary number theory - Proving $\gcd\left(\frac{a}{\gcd(a,b)},\frac{b}{\gcd(a,b)}\right)=1$



How would you go about proving that $$\gcd \left(\frac{a}{\gcd (a,b)},\frac{b}{\gcd (a,b)}\right)=1$$



for any two integers $a$ and $b$?



Intuitively it is true because when you divide $a$ and $b$ by $\gcd(a,b)$ you cancel out any common factors between them resulting in them becoming coprime. However, how would you prove this rigorously and mathematically?


Answer



Very simply it can be done like this: let $\gcd(a,b)=d$.




Now we ask: can $\gcd\left(\frac{a}{d},\frac{b}{d}\right)=e$ for some $e>1$?



Well, this implies $e\mid\frac{a}{d},e\mid\frac{b}{d} \Rightarrow em=\frac{a}{d}, en=\frac{b}{d} \Rightarrow dem=a,den=b \Rightarrow de$ is a common divisor of $a,b$ which is greater than $d$, thus a contradiction as $d$ by definition was supposed as the $\gcd$. Hence, $e=1$.


calculus - Why does the power rule work?



If $$f(x)=x^u$$ then the derivative function will always be $$f'(x)=u\cdot x^{u-1}$$
I've been trying to figure out why that makes sense and I can't quite get there.




I know it can be proven with limits, but I'm looking for something more basic, something I can picture in my head.



The derivative should be the slope of the function.
If $$f(x)=x^3=x^2\cdot x$$ then the slope should be $x^2$. But it isn't. The power rule says it's $3x^2$.



I understand that it has to do with having variables where in a more simple equation there would be a constant. I'm trying to understand how that exactly translates into the power rule.


Answer



First let's try to understand why the derivative of the function $f$ given by $f(x) = x^2$ is equal to $2x$ and not to $x$. (The product rule and the power rule are both generalizations of this.)




Imagine that you have a square whose sides have length $x$. Now imagine what happens to its area if we increase the length of each side by a small amount $\Delta x$. We can do this by adding three regions to the picture: two thin rectangles measuring $x$ by $\Delta x$ (say one on the right of the square and another on the top) and one small square measuring $\Delta x$ by $\Delta x$ (say added in the top right corner.) So the change in the area $x^2$ is equal to $2x \cdot \Delta x + (\Delta x)^2$. If we divide this by $\Delta x$ and take the limit as $\Delta x$ approaches zero, we get $2x$.



So geometrically what is happening is that the small square in the corner is too small to matter, but you have to count both rectangles. If you only count one of them, you will get the answer $x$; however, this only tells you what happens when you lengthen, say, the horizontal sides and not the vertical sides of your square to get a rectangle. This is a different problem than the one under consideration, which asks (after we put it in geometrical terms) how the area varies as we lengthen all the sides.
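The picture matches a quick finite-difference computation (a sketch): the change in area divided by $\Delta x$ is $2x + \Delta x$, which tends to $2x$, not $x$.

```python
f = lambda x: x**2
x = 3.0
for dx in [0.1, 0.01, 0.001]:
    print(dx, (f(x + dx) - f(x)) / dx)  # approaches 6.0 = 2*x
```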


Sunday 20 May 2018

sequences and series - Matrix Exponential equality



I was reading about the matrix exponential function and I came across this:




If $xy = yx$ then



$$ \exp(x+y) = \exp(x)\cdot\exp(y) $$



My textbook gives a proof as follows:



$$ \exp(x+y) = \sum\limits_{k=0}^{\infty}\frac{1}{k!}(x+y)^{k} = \sum\limits_{k=0}^{\infty}\left(\sum\limits_{l=0}^{k}\frac{x^{l}y^{k-l}}{l!(k-l)!}\right) = \left(\sum\limits_{p=0}^{\infty}\frac{x^{p}}{p!}\right)\cdot\left(\sum\limits_{l=0}^{\infty}\frac{y^{l}}{l!}\right)$$



I have trouble understanding the last equality. I guess it has something to do with Fubini but I do not understand how an infinite and finite summation got changed into two infinite summations.




Any help will be appreciated.



EDIT: The sole comment was enough to get me to the answer. This is nothing but the convolution product, and the result follows from Mertens' theorem.


Answer



So that this question doesn't remain in the unanswered queue:






We note that
$$

\sum\limits_{k=0}^{\infty}\left(\sum\limits_{l=0}^{k}\frac{x^{l}y^{k-l}}{l!(k-l)!}\right)
$$
Is simply the Cauchy Product (aka the convolution product) of the two series
$
\sum_{p=0}^{\infty}\frac{x^{p}}{p!}
$
and
$
\sum_{l=0}^{\infty}\frac{y^{l}}{l!}
$

. By Mertens' theorem, we can deduce that
$$
\sum\limits_{k=0}^{\infty}\left(\sum\limits_{l=0}^{k}\frac{x^{l}y^{k-l}}{l!(k-l)!}\right) =
\left(\sum\limits_{p=0}^{\infty}\frac{x^{p}}{p!}\right)\cdot\left(\sum\limits_{l=0}^{\infty}\frac{y^{l}}{l!}\right)
$$
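A numeric illustration (a sketch assuming scipy is available): the identity holds for commuting matrices and generally fails otherwise.

```python
import numpy as np
from scipy.linalg import expm

x = np.array([[1.0, 2.0], [0.0, 1.0]])
y = 3.0 * x                              # commutes with x
print(np.allclose(expm(x + y), expm(x) @ expm(y)))   # True

z = np.array([[0.0, 1.0], [1.0, 0.0]])   # does not commute with x
print(np.allclose(expm(x + z), expm(x) @ expm(z)))   # False
```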


elementary number theory - Expression for the highest power of 2 dividing $3^a\left(2b-1\right)-1$


Question: I am wondering if an expression exists for the highest power of 2 that divides $3^a\left(2b-1\right)-1$, in terms of $a$ and $b$, or perhaps a more general expression for the highest power of 2 dividing some even $n$?




EDIT: This is equivalent to finding an expression for the highest power of 2 dividing some even $n$, however finding it in terms of $a$ and $b$ may make it more useful for what I have described below.







Motivation For Asking:






The reason for this particular question is that, having had a look at the Collatz conjecture, I have found that positive odd integers $n$ of the form $2^a\cdot\left(2b-1\right)-1$ will iterate to the even value of $3^a\cdot\left(2b-1\right)-1$ after $a$ iterations of $\frac{3n+1}{2}$, then being divided by the highest power of 2 dividing it. Since I have also found that numbers of the form ${\left(\frac{2}{3}\right)}^c\cdot\left(2^{3^{c-1}\cdot(2d-1)}+1\right)-1$ become powers of 2 after $c$ iterations, I am hoping to find an expression for the number of iterations for $n$ to become a power of 2, and hence reach the 4-2-1 loop.






Updates:







EDIT 1: I have found a post on mathoverflow at https://mathoverflow.net/questions/29828/greatest-power-of-two-dividing-an-integer giving a rather long expression for the highest power of 2 dividing any odd positive integer $n$ which may be useful.






EDIT 2: Here is an image displaying patterns I find quite interesting. It is from a program I wrote which displays the exponent of the highest power of 2 dividing each value of $3^a\left(2b-1\right)-1$, represented by lighter pixels for higher values and darker pixels for lower values. It is plotted on a 2d grid such that each pixel on the x-axis represents the $b$ value, starting from one, and the same for $a$ on the y-axis:



[image omitted: lighter pixels mark larger exponents of the highest power of 2 dividing $3^a(2b-1)-1$]




This seems to suggest that if $a$ and $b$ are both either odd or even, the highest power of 2 dividing $3^a\left(2b-1\right)-1$ is 2 (I am currently having a look at this further).
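A brute-force sketch of this observation (my own code; `v2` extracts the exponent of the highest power of 2 via the lowest set bit):

```python
def v2(n):
    return (n & -n).bit_length() - 1  # exponent of the highest power of 2 in n

for a in range(1, 30):
    for b in range(1, 30):
        if a % 2 == b % 2:  # a, b both odd or both even
            assert v2(3**a * (2 * b - 1) - 1) == 1  # highest power of 2 is 2^1
print("observation holds for all a, b < 30")
```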






EDIT 3: Defining $2^{b_s}$ as the highest power of 2 dividing $1.5^{a_s}(n_{s}+1)-1$, I know that any odd $n_s=2^{a_s}(2x_s-1)-1$ will reach:



$$n_{s+1}=2^{a_{s+1}}(2x_{s+1}-1)-1=\frac{3^{a_s}(2x_s-1)-1}{2^{b_s}}=\frac{1.5^{a_s}(n_{s}+1)-1}{2^{b_s}}$$



This has led me to find an expression for the $z$th iteration, $T_z(n_1)$, starting from some odd $n_1$, of:




$$T(n_s)=\frac{1.5^{a_s}\left(n_s+1\right)-1}{2^{b_s}}=n_{s+1}$$



The expression I found is as follows:



$$T_z(n_1)=\frac{1.5^{\sum_{c=2}^{z}a_c}\left(1.5^{a_1}\left(n_1+1\right)-1\right)}{2^{\sum_{d=1}^{z}b_d}}+\frac{1.5^{a_z}-1}{2^{b_z}}+\sum_{e=2}^{z-1}\frac{1.5^{\sum_{f=e+1}^{z}a_f}\left(1.5^{a_e}-1\right)}{2^{\sum_{g=e}^{z}b_g}}$$



Hence, for the Collatz conjecture to be true:



$$T_z(n_1)=\frac{1.5^{\sum_{c=2}^{z}a_c}\left(1.5^{a_1}\left(n_1+1\right)-1\right)}{2^{\sum_{d=1}^{z}b_d}}+\frac{1.5^{a_z}-1}{2^{b_z}}+\sum_{e=2}^{z-1}\frac{1.5^{\sum_{f=e+1}^{z}a_f}\left(1.5^{a_e}-1\right)}{2^{\sum_{g=e}^{z}b_g}}=1$$




must have a unique solution for any odd $n_1$ such that $\left\{z,a_{1...z},b_{1...z}\in\mathbb{N^+}\right\}$



This raises a separate question - namely what can be said of $a_{s+1}$ and $b_{s+1}$ from the values of $a_{s}$ and $b_{s}$? If there turns out to be some connection, the values of $\sum_{i=1}^{z}a_{i}$ and $\sum_{i=1}^{z}b_{i}$ may be expressed simply in terms of $a_1$ and $b_1$, hence turning the problem into a (diophantine equation? I have not yet studied Mathematics beyond secondary school, so I am unsure of correct Mathematical terminology or notation - please feel free to correct me).






EDIT 4: As suggested by Gottfried Helms, when $3^a\left(2b-1\right)-1$ is written as $3^ab-\left(3^a+1\right)$, factoring out the highest power of 2 shows that for $a \equiv b \pmod 2$, the highest power of 2 dividing $3^a\left(2b-1\right)-1$ is 2, and for $a \equiv 1 \pmod 2$ where $b \equiv 0 \pmod 4$, or $a \equiv 0 \pmod 2$ where $b \equiv 3 \pmod 4$ it must be 4. In other cases it seems to be no longer dependant on $a$ or $b$ and becomes pseudo-random. This helps to explain the patterns found above, but not all of them.




abstract algebra - Is every multiplicative function from the matrix ring $\text{Mat}_n(R)$ to $R$ a function of $\text{det}$?



Suppose that $R$ is a commutative ring with $1$, and for $n \in \mathbb{N}$, let $\text{Mat}_n(R)$ be the set of $n \times n$ matrices with entries in $R$.



It is well known that the determinant function $\text{det} : \text{Mat}_n(R) \rightarrow R$ is multiplicative, i.e.



$$
\text{det}(AB) = \text{det}(A) \text{det}(B)
$$




$\text{det}$ is certainly not unique in this respect; there are lots of functions $g : \text{Mat}_n(R) \rightarrow R$ which are multiplicative. For a start, there are the constant $1$ and constant $0$ functions, as well as the indicator function of "is invertible". More strangely, for $R = \mathbb{R}$, there are the functions



$$
g(A) =
\begin{cases}
e^{f(\log(\left|\det(A)\right|))} &\text{if } \det(A) \neq 0 \\
0 &\text{if } \det(A) = 0
\end{cases}
$$




where $f : \mathbb{R} \rightarrow \mathbb{R}$ is any solution of the Cauchy functional equation $f(x+y) = f(x)+f(y)$ : non-continuous solutions to this equation are really badly behaved.



However, I have not found any examples of multiplicative functions which are not themselves a function of $\det$.



Is there any such function which is not a function of det?



That is, is there a ring $R$, an $n \in \mathbb{N}$ and $g : \text{Mat}_n(R) \rightarrow R$ which is multiplicative, and not a function of det, i.e. there exist $A,B \in \text{Mat}_n(R)$ such that



$$
\begin{align}

\det(A) &= \det(B) \\
g(A) &\neq g(B)
\end{align}
$$


Answer



Suppose $R$ satisfies the following conditions.




  • $R$ has an element $x\neq 1$ with $x^2=1$.

  • There is a surjective ring homomorphism $q:R\to\mathbb{F}_2$.




For example, $R=\mathbb{Z}$ and $x=-1$, or $R=S\times\mathbb{F}_2$ where $S$ has an element $s$ of multiplicative order $2$, and $x=(s,1)$.



Then $q$ induces a ring homomorphism $\text{Mat}_2(R)\to\text{Mat}_2(\mathbb{F}_2)$,
which I'll also denote by $q$.



An element $X$ of $\text{GL}_2(\mathbb{F}_2)$ acts on the three non-zero vectors of $\mathbb{F}_2^2$, and I'll say $X$ is even or odd depending on whether it acts by an even or odd permutation.



There is a multiplicative function $g:\text{Mat}_2(R)\to R$ with

$$g(A)=
\begin{cases}
0&\mbox{if $q(A)$ is not invertible}\\
1&\mbox{if $q(A)$ is invertible and even}\\
x&\mbox{if $q(A)$ is invertible and odd.}
\end{cases}$$



But $g(A)$ is not a function of $\det(A)$, since for $A_1=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ and $A_2=\begin{pmatrix}1&1\\0&1\end{pmatrix}$, $\det(A_1)=\det(A_2)$ but $g(A_1)=1\neq x=g(A_2)$.
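This can be confirmed by brute force over all $16$ matrices in $\text{Mat}_2(\mathbb{F}_2)$ (a sketch with $R=\mathbb{Z}$ and $x=-1$; the helper names are mine):

```python
from itertools import product

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def apply(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) % 2 for i in range(2))

VECS = [(0, 1), (1, 0), (1, 1)]  # the nonzero vectors of F_2^2

def g(A):
    if (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % 2 == 0:
        return 0                        # q(A) not invertible
    perm = [VECS.index(apply(A, v)) for v in VECS]
    # parity of the permutation on the three nonzero vectors
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if perm[i] > perm[j])
    return 1 if inv % 2 == 0 else -1    # even -> 1, odd -> x = -1

MATS = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
assert all(g(mul(A, B)) == g(A) * g(B) for A in MATS for B in MATS)
A1, A2 = ((1, 0), (0, 1)), ((1, 1), (0, 1))
print(g(A1), g(A2))  # 1 -1: same determinant, different g
```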


real analysis - A continuous onto function from $[0,1)$ to $(-1,1)$




How can I construct a continuous onto function from $[0,1)$ to $(-1,1)$?




I know that such a function exists, and I have the example $\displaystyle f(x)=x^2\sin\frac{1}{1-x}$, which is continuous and onto from $[0,1)$ to $(-1,1)$. But I don't know how to construct this type of function.
There are many functions which are continuous and onto from $[0,1)$ to $(-1,1)$. I can construct a continuous onto function from $(0,1)$ to $(-1,1)$. But when the domain interval is closed on the left, how can I construct one?




Please give me an idea for constructing such a function.


Answer



I suggest to start by drawing: draw the bounding box $[0,1)\times (-1,1)$, place your pencil at $(0,0)$, and trace different functions. The following are observations and hints, not logical proofs.



A first idea is that, since you want a continuous function, a monotone one might be troublesome in mapping a semi-open interval like $[0,1)$ to an open one like $(-1,1)$. You can find one location in $[0,1)$ where $f(x)$ approaches $1$ and another location in $[0,1)$ where $f(x)$ approaches $-1$. If these locations are in some $[\epsilon,1-\epsilon]$ ($\epsilon >0$), continuity might impose that the values $-1$ or $1$ will be reached strictly inside $[0,1)$, which you do not want.



One remaining choice is that the values $y=-1$ and $y=1$ are both approached near the open end at $x=1$. So you might need a function that oscillates infinitely often close to $x=1$. In other words, it can be wise to open up $[0,1)$ to something like $[0,+\infty)$.



To summarize, three main ingredients could be useful, in the shape of a fellowship of continuous functions that you could compose. Many choices are possible, here is one (borrowing from Tolkien's Ring poem):





  1. three functions for the unit-interval $[0,1)$ up to the sky $[0,+\infty)$ (hereafter $f_0$, $f_1$, $f_\phi$),

  2. one for the oscillation lords in their infinite $[0,+\infty)$ hall of sound (hereafter $f_2$),

  3. one function to bring them all and in $(-1,1)$ bind them (hereafter $f_3$),



in the Land of functions where continuity lies (indeed, continuous functions tend to cast continuous shadows to connected intervals).



The first ingredient $f_1(x)$ is easy with functions you know, defined on some $[a,b)$ with a singularity at $b$. It is easy to remap $[a,b)$ to $[0,1)$, so let us stick to that interval $[0,1)$. Examples: $\frac{1}{1-x}$, $-\log (1-x)$, $\tan (\pi x/2)$, and many more.




If you want more flexibility, you can start with any function $f_0$ that maps $[0,1)$ to $[0,1)$: $x^p$ with $p>0$, $\sin(\pi x/2)$, $\log(1+x(e-1))$.



For the second ingredient $f_2$ on $[0,+\infty)$, the sine is a nice pick, and a lot of combinations of sines and cosines, like a chirping sound. But you have a lot of fancy alternatives. And you can easily plug in a function $f_\phi$ that maps $[0,+\infty)$ to $[0,+\infty)$ (for instance $\exp(x)-1$, $x^p$).



The choice of $f_2$ is possibly the most sensitive, since you will need to strictly bound it afterward inside $(-1,1)$, therefore you will need a third ingredient: a function $f_3$ that compensates (as a product for instance) the envelope of $f_2$ so that the result does not exceed $1$ in absolute value. So for the sine, $x^p$ or $\frac{\exp{x}-1}{e-1}$ will do the job. A function whose magnitude is strictly less than $1$, and tends to $1$ as $x\to 1$, is likely to work.



Finally, a function can be obtained by composing $f(x)= f_3(x)\times f_2\left( f_\phi\left( f_1\left( f_0\left( x\right)\right)\right)\right)$.



In your case, you have for instance $f_0(x)=x$, $f_1(x)=\frac{1}{1-x}$, $f_\phi(x)=x$, $f_2(x)=\sin(x)$, $f_3(x)=x^2$.




While not fully generic, you can cook of lot of recipes with those ingredients. For instance (see image below):$$f(x)=\sin(\pi x^7/2)\sin \left( \tan \left(\pi \sqrt{x}/2\right)\right)\,.$$



[image omitted: plot of this fancy chirping continuous function]


calculus - What is the solution to the following limit?

How do I solve the following limit problem?



$$\lim_{x\to\infty} \frac{3x^2+\sin x}{x^2+(\sin x)^2}$$ I have tried the following and arrived at a solution, but I am not sure if it is correct.




Applying L'Hopital's rule, taking derivatives three times, I get the following expression:



$$\lim_{x\to\infty} \frac{\cos x}{8\cos 2x} = 0$$
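For comparison (not part of the original attempt): dividing the numerator and denominator by $x^2$ avoids L'Hopital entirely, giving $\frac{3+\sin x/x^2}{1+(\sin x/x)^2}\to\frac{3+0}{1+0}=3$, since $\sin$ is bounded. A quick numeric sketch agrees with $3$, not $0$:

```python
from math import sin

for x in [1e2, 1e4, 1e6]:
    print(x, (3 * x**2 + sin(x)) / (x**2 + sin(x)**2))  # tends to 3
```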

Saturday 19 May 2018

number theory - Show $ 3x^2 + 2 = y^2 $ has no solution in integers.





Show $ 3x^2 + 2 = y^2 $ has no solution in integers.



I've seen from similar problems, the idea is to reduce the equation to a congruence $ \mod{3} $ and show that the congruence $ y^2 \equiv 2 \pmod{3} $ has no solutions.



Why is one able to reduce the problem in this manner?


Answer



Start from basics,




What does the representation $a \equiv b \pmod c$ mean in the first place?



Answer : It means $(b-a)$ is divisible by $c$, or in a fancy way, it's written as $$c \mid (b-a)$$



For your question, you can clearly see that if $3x^2 + 2 = y^2$ is true, then $y^2-2=3x^2$, which implies that $y^2-2$ is a multiple of $3$.



Therefore $3 \mid y^2-2 \implies y^2 \equiv 2 \pmod{3}$



So if you could prove, somehow, that this ain't possible, it would prove that the equation has no solution in integers.
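The last step is a finite check, since squares mod $3$ can be enumerated (a sketch):

```python
print(sorted({(y * y) % 3 for y in range(3)}))  # [0, 1]: 2 is never a square mod 3
```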



Friday 18 May 2018

Question in Elementary Number Theory

In the book "Elementary Number Theory", 6th edition (David M. Burton), I don't know how to solve this problem.



P.58 number 18 (a)



If $p$ is a prime and $b$ is not divisible by $p$, prove that in the
arithmetic progression $a,\ a+b,\ a+2b,\ \ldots$
every $p$th term is divisible by $p$.
(Hint: Because $\gcd(p, b)=1$, there exist integers $r$ and $s$ satisfying $pr+bs=1$. Put $n_k=kp-as$ for $k = 1,2,3,\ldots$ and show that $a+n_k b$ is divisible by $p$.)



By the above hint, I can show that $p \mid (a+n_k b)$, but after that I don't know how to continue. Thank you very much if you can give me some help.
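For intuition while working on the proof, here is a concrete sketch (my own numbers): with $p=5$, $b=3$, $a=7$, the indices $n$ with $p \mid a+nb$ are spaced exactly $p$ apart.

```python
p, b, a = 5, 3, 7
hits = [n for n in range(40) if (a + n * b) % p == 0]
print(hits)  # [1, 6, 11, 16, 21, 26, 31, 36]: every 5th term
```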

real analysis - $0^0$ -- indeterminate, or $1$?




One of my teachers argued today that $0^0 = 1$. However, WolframAlpha, intuition(?) and various other sources say otherwise... $0^0$ doesn't really "mean" anything.



Can anyone clear this up with some rigorous explanation?



Answer



Short answer: It depends on your convention and how you define exponents.



Long answer: There are a number of ways of defining exponents. Usually these definitions coincide, but this is not so for $0^0$: some definitions yield $0^0=1$ and some don't apply when both numbers are zero (leaving $0^0$ undefined).



For example, given nonnegative whole numbers $m$ and $n$, we can define $m^n$ to be the number of functions $A \to B$, where $A$ is a set of size $n$ and $B$ is a set of size $m$. This definition gives $0^0=1$ because the only set of size $0$ is the empty set $\varnothing$, and the only function $\varnothing \to \varnothing$ is the empty function.



However, an analyst might not want $0^0$ to be defined. Why? Because of the limits of the following functions:
$$\lim_{x \to 0^+} 0^x = 0, \qquad \lim_{x \to 0} x^0 = 1, \qquad \lim_{t \to 0^+} (e^{-1/t^2})^{-t} = \infty$$
All three limits look like $0^0$. So in such settings you might want to leave $0^0$ undefined, so that it's a lack of definition rather than a discontinuity.




Typically this is resolved by:




  • If you're in a discrete setting, e.g. considering sets, graphs, integers, and so on, then you should take $0^0=1$.

  • If you're in a continuous setting, e.g. considering functions on the real line or complex plane, then you should take $0^0$ to be undefined.



Sometimes these situations overlap. For example, usually when you define functions by infinite series
$$f(x) = \sum_{n=0}^{\infty} a_nx^n$$

problems occur when you want to know the value of $f(0)$. It is normal in these cases to take $0^0=1$, so that $f(0)=a_0$; the reason being that we're considering what happens as $x \to 0$, and this corresponds with $\lim_{x \to 0} x^0 = 1$.


congruences - Confused about a neither/nor statement and modular arithmetic



I am currently learning proofs involving congruences of integers, using direct proofs, contrapositives, and proofs by cases. However, I am quite confused by the following statement:




Let $a,b\in Z$. Prove that if $a^2+2b^2\equiv 0\ (\mathrm{mod\ 3})$,

then either $a$ and $b$ are both congruent to $0$ modulo $3$ or neither is congruent to $0$ modulo $3$.




I am trying to do this with a proof by contrapositive where I let $$q(a,b): \text{either } a \text{ and } b \text{ are both
congruent to } 0 \text{ modulo } 3 \text{ or neither is congruent to }
0 \text{ modulo 3}.$$
$$p(a,b): a^2 + 2b^2 \equiv 0 (\text{ mod 3 )}$$



However, by using the negation of $q$, I am left with $(3 \nmid a \lor 3\nmid b) \land (3 \mid a \lor 3 \mid b)$. Since I interpreted the negation of $q$ as being $$\neg ((3\mid a \land 3 \mid b) \lor (3\nmid a \land 3\nmid b))=\neg (3\mid a \land 3 \mid b) \land \neg(3\nmid a \land 3\nmid b)$$




Am I doing something wrong here? I think it is the negation of $q$ that I am having trouble with, and it would be nice if someone could help explain.



Thank you.


Answer



I don't think using formal proofs is the way to go here, but anyway... (I suggest you read from where I put a $\star$ on, closer to the bottom of the answer.)



Consider the usual axiom for congruence modulo $3$, with variables $a,b,c,\ldots$. The problem you have is to prove that
$$\vdash (a^2+2b^2=0)\rightarrow(((a=0)\land(b=0))\lor((a\neq 0)\land(b\neq 0))) $$
(where equality means congruence modulo $3$). By contraposition, this is the same as proving
$$\vdash(((a\neq 0)\lor(b\neq 0))\land((a=0)\lor(b=0)))\rightarrow(a^2+2b^2\neq 0)$$

Now, note that $(a=0)\lor(a\neq 0)$ is a tautology, so this is the same as proving
\begin{align*}
\vdash&\left((a=0)\lor(a\neq 0)\right)\land\left((((a\neq 0)\lor(b\neq 0))\land((a=0)\lor(b=0)))\rightarrow(a^2+2b^2\neq 0)\right)\\
&=\left(((a=0)\lor(a\neq 0))\land\left((((a\neq 0)\lor(b\neq 0))\land((a=0)\lor(b=0)))\right)\right)\to(a^2+2b^2\neq 0)
\end{align*}
Now look at the term on the left. Using distributivity, it is the same as
$$\left((a=0)\land(((a\neq 0)\lor(b\neq 0))\land((a=0)\lor(b=0)))\right)\lor\left((a\neq 0)\land(((a\neq 0)\lor(b\neq 0))\land((a=0)\lor(b=0)))\right)$$



Now look just at the first term above, use associativity and distributivity and get
$$(((a=0)\land(a\neq 0))\lor((a=0)\land(b\neq 0)))\land((a=0)\lor(b=0))$$

But $(a=0)\land(a\neq 0)$ is always false, so the first term is equivalent to $(a=0)\land(b\neq 0)$, and this expression becomes
$$(a=0)\land(b\neq 0)\land((a=0)\lor(b=0))$$
Now you use distributivity, associativity, and what-nots, and see after a few minutes that this is equivalent to
$$(a=0)\land (b\neq 0)$$
Now look at the second term, do a bunch more of stuff and see it is equivalent to $(a\neq 0)\land(b=0)$.



So the problem now becomes to prove
\begin{align*}
\vdash&(((a=0)\land(b\neq 0))\lor((a\neq 0)\land(b=0))\rightarrow(a^2+2b^2\neq 0)\\
&=(((a=0)\land(b\neq 0))\rightarrow(a^2+2b^2\neq 0))\land(((a\neq 0)\land(b=0))\rightarrow(a^2+2b^2\neq 0))

\end{align*}



Now we have a conjuntion on the right-hand side, so this is equivalent to proving the two problems
$$\vdash((a=0)\land(b\neq 0))\rightarrow(a^2+2b^2\neq 0)$$
$$\vdash((a\neq0)\land(b=0))\rightarrow(a^2+2b^2\neq 0)$$
Now for each of these problems, you do a bunch more stuff and finally prove it, somehow, hopefully...






$\star$ Do you see why logic is not very well-suited to prove this problem? Instead if we stated problem in a manner which we can easily understand: The contrapositive becomes





Suppose $3$ does not divide $a$ and $b$ simultaneously, and that $3$ divides at least one of $a$ and $b$. Then $3$ does not divide $a^2+2b^2$. (This is just a translation of the contrapositive you got.)




There are a bunch of ways to solve this. The second phrase in the hypothesis means that $3$ divides at least one between $a$ and $b$. Here we have two cases (yes, we can verify cases separately):



Case 1: $3$ divides $a$.



Then by the first phrase in the hypothesis, $3$ does not divide $b$, so either $b\equiv 1\mod{3}$ or $b\equiv 2\mod{3}$. In either case (yes, we take subcases and verify them separately again), $b^2\equiv 1\mod{3}$ and $a^2+2b^2\equiv 2\mod{3}$.




Case 2: $3$ does not divide $a$.



By the second phrase in the hypothesis, $3$ divides $b$, and therefore $a^2+2b^2\equiv 1\mod{3}$ (here, again, we have two subcases: $a\equiv 1$ or $2\mod{3}$, and in each subcase $a^2\equiv 1\mod{3}$)






Remark 1: There are other ways of separating the problem in cases. For example, one of the hypothesis is that $3$ divides at least one of $a$ and $b$, so we could separate in the cases that $3$ divides $a$ (as above), or that $3$ divides $b$ (and then verify things in this case).







Remark 2: One can also prove the theorem directly by separating in the cases where $3$ divides $a$ or not.






Remark 3: To avoid having to consider subcases in several parts, you can prove, as a lemma, that for every $x$ not divisible by $3$, $x^2\equiv 1\mod{3}$. Then use this when $3$ does not divide $a$ or $b$. However, the proof of this lemma will probably be divided in two cases.
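Since everything depends only on residues mod $3$, the whole statement can also be checked by brute force (a sketch):

```python
for a in range(3):
    for b in range(3):
        if (a * a + 2 * b * b) % 3 == 0:
            both = (a % 3 == 0) and (b % 3 == 0)
            neither = (a % 3 != 0) and (b % 3 != 0)
            assert both or neither
print("statement verified for all residues mod 3")
```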


calculus - Should L'Hopital's Rule be used for this limit?



Question



$$\lim_{x \to 0} \left(\dfrac{\sin x}{x}\right)^{\dfrac{1}{1 - \cos x}}$$



Attempt




This limit is equal to,



$$\lim_{x \to 0} \exp\left({\dfrac{1}{1 - \cos x}\ln \left(\dfrac{\sin x}{x}\right)}\right).$$



I hope to solve this by focusing on the limit,



$$\lim_{x \to 0} {\dfrac{1}{1 - \cos x}\ln \left(\dfrac{\sin x}{x}\right)}.$$



which is indeterminate of the form "$\frac{0}{0}$". However, repeated application of L'Hopital's Rule seems to lead to an endless spiral of trigonometric terms, but Wolfram Alpha says the limit I am focusing on exists.




Is there another way to solve this?


Answer



It should be used. There may just be some small mistakes in your computation.



$$\begin{align*}
\lim_{x\rightarrow0}\frac{1}{1-\cos x}\ln\frac{\sin x}{x}
&=\lim_{x\rightarrow0}\frac{\frac{\cos x}{\sin x}-\frac{1}{x}}{\sin x}\\
&=\lim_{x\rightarrow0}\frac{x\cos x-\sin x}{x\sin^{2}x}\\
&=\lim_{x\rightarrow0}\frac{-x\sin x}{\sin^{2}x+2x\sin x\cos x}\\

&=-\lim_{x\rightarrow0}\frac{1}{\frac{\sin x}{x}+2\cos x}\\
&=-\frac{1}{3}
\end{align*}$$
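So the original limit is $e^{-1/3} \approx 0.71653$. A numeric sketch:

```python
from math import cos, exp, sin

for x in [0.1, 0.01, 0.001]:
    print(x, (sin(x) / x) ** (1 / (1 - cos(x))))
print(exp(-1/3))  # ≈ 0.716531
```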


Thursday 17 May 2018

Converse statements for convolutions of probability distributions




There is a list of convolutions of probability distributions on Wikipedia. Using that list we can find the distribution of the sum of given independent random variables.



From this online tutorial I found that the converse statement is true for the binomial distribution, i.e. if $Y \sim \mathrm{Bin}(n,p)$ then there exist i.i.d $X_1, \ldots ,X_n \sim \mathrm{Bern}(p)$ such that $\displaystyle Y = \sum_{i=1}^n X_i$.



So, are there similar converse statements for other convolutions from the above-mentioned list (excluding the last one)?



For example consider r.v. $Y \sim \mathrm{Pois}(\lambda), \, \lambda \gt 0$. Can we claim that there exist independent random variables $X_1 \sim \mathrm{Pois}(\lambda_1), X_2 \sim \mathrm{Pois}(\lambda_2), X_3 \sim \mathrm{Pois}(\lambda_3)$ such that $\lambda_1 + \lambda_2 + \lambda_3 = \lambda$ and $Y = X_1 + X_2 + X_3$?



I know that there is a list of infinitely divisible distributions.

And some distributions are present in both of the above-mentioned lists. Unfortunately, the property of infinite divisibility doesn't indicate the distribution law of the summands (it only says that the summands exist and are i.i.d.).


Answer



A slight tangent, but perhaps an angle you may find interesting is to look into Infinitely Divisible distributions.



A probability distribution $X$ is said to be infinitely divisible if for any $N \geq 1$ there exists a collection $(Y_n)_{n = 1}^N$ of independent identically distributed random variables such that:



$$ X = \sum_{n = 1}^N Y_n,$$



where the equality stands for equality in distribution.




As an example, from the fact that $\text{Poi}(\lambda) + \text{Poi}(\mu) = \text{Poi}(\mu + \lambda)$, from the comments above, then we can see that in fact for any fixed $N \geq 1$, and $\lambda > 0$:



$$ \text{Poi}(\lambda) = \sum_{n=1}^N \text{Poi}\left( \frac{\lambda}{N} \right),$$



so that Poisson variables are infinitely divisible. Similar proofs work for Normal distributions, as well as for Gamma distributions.
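A quick simulation makes the Poisson case concrete (a sketch assuming NumPy is available; the values of $\lambda$, $N$ and the sample size are arbitrary choices):

    import numpy as np

    # Empirically compare Poi(lam) with a sum of N i.i.d. Poi(lam/N) variables.
    rng = np.random.default_rng(0)
    lam, N, samples = 4.0, 10, 100_000

    direct = rng.poisson(lam, size=samples)
    summed = rng.poisson(lam / N, size=(samples, N)).sum(axis=1)

    # Both should have mean and variance close to lam = 4.
    print(direct.mean(), direct.var())
    print(summed.mean(), summed.var())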



A simple example of a non-infinitely divisible distribution would be the Bernoulli distribution (see proof below). Similarly the Uniform distribution is not infinitely divisible.



Bernoulli is not Infinitely Divisible
Consider the case of the Bernoulli distribution, $X$, with $P(X = 1) = p$, with $0 < p < 1$ (the cases $p = 0,1$ are trivially infinitely divisible).




Fix $N = 2$; then, if $X$ were infinitely divisible, there would exist $Y_1, \,Y_2$ independent and identically distributed such that



$$X = Y_1 + Y_2$$



Since $X \in \{0,1\}$ the i.i.d. assumption forces $Y_n \in \left\{0, \frac12\right\}$.



But then the support of $Y_1 + Y_2$ is $\{0,\frac12, 1\}$; since $p \neq 0,1$ we have



$$P\left(Y_1+Y_2 = \frac12\right) > 0$$




whereas $P(X = \frac12) = 0$, so we have a contradiction.


summation - Summing series of the form $\sum_{n=-\infty}^{\infty}f(n)$ via Complex Methods?



$(1.)$



$$\sum_{n=-\infty}^{\infty}f(n)$$



$(2.)$



$$\sum_{n=1}^{\infty}f(n)=\lim_{N\to\infty}\sum_{n=1}^{N}f(n)$$




How would one, using the tools of complex analysis, approach the summation of series of the form defined in $(1.)$? If possible, provide applicable examples.



EDIT



Adding to our original question: how would one handle the upper and lower bounds of the sum defined in $(1.)$ as $n \rightarrow \pm\infty$, and how would one generalize the limit-of-partial-sums approach used in real-variable methods, as defined in $(2.)$?


Answer



I don't know which $f(n)$ you are interested in. There are Cauchy's Residue Theorem, Fourier series with complex exponentials, Parseval's (Plancherel's) identity, and the Poisson summation formula. Have a look at the answers proving $\zeta(2)=\frac{\pi^2}{6}$ in each of the links below: Complex Analysis Solution to the Basel Problem ($\sum_{k=1}^\infty \frac{1}{k^2}$) for the Residue Theorem, http://math.cmu.edu/~bwsulliv/basel-problem.pdf for Fourier series and Parseval's identity, and http://www.libragold.com/blog/2014/12/poisson-summation-formula-and-basel-problem/ for Poisson summation.
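For a concrete instance of the residue-theorem approach (not taken from the links above): pairing $f(z)=\frac{1}{z^2+a^2}$ with the kernel $\pi\cot(\pi z)$ gives the classical identity $\sum_{n=-\infty}^{\infty}\frac{1}{n^2+a^2}=\frac{\pi}{a}\coth(\pi a)$, which a short sketch in Python can check numerically (the value of $a$ and the truncation are arbitrary):

    import math

    # Truncated doubly infinite sum versus the closed form (pi/a) coth(pi a).
    a = 1.5
    partial = sum(1.0 / (n * n + a * a) for n in range(-100_000, 100_001))
    closed = (math.pi / a) / math.tanh(math.pi * a)
    print(partial, closed)  # agree to roughly the size of the discarded tail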


integration - A sine integral

The integral
\begin{align}
\int_{0}^{\pi/2} \frac{ \sin(n\theta) }{ \sin(\theta) } \ d\theta

\end{align}
is claimed to not have a closed-form expression. In this light, find a series solution of the integral as a series in $n$.



Editorial note:
As described in the problem, several series may be obtained, all of which appear to be valid. As a particular case, from notes made a long while ago, the formula
\begin{align}
\int_{0}^{\pi/2} \frac{ \sin(n\theta) }{ \sin(\theta) } \ d\theta = \sum_{r=1}^{\infty} (-1)^{r-1} \ \ln\left(\frac{2r+1}{2r-1}\right) \ \sin(r n \pi)
\end{align}
is stated, but left unproved. Can this formula be proven, along with other series depending on $n$?
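One can at least probe the stated formula numerically. Note that for integer $n$ every term $\sin(rn\pi)$ vanishes, so the right-hand side is $0$ there, and the identity can only be meaningful for non-integer $n$. A sketch in Python (assuming SciPy is available; the choice $n=2.5$ and the truncation are arbitrary, and the partial sum converges slowly since the series is only conditionally convergent):

    import math
    from scipy.integrate import quad

    # Compare the integral against a partial sum of the claimed series.
    n = 2.5
    integral, _ = quad(lambda t: math.sin(n * t) / math.sin(t), 0, math.pi / 2)
    series = sum((-1) ** (r - 1) * math.log((2 * r + 1) / (2 * r - 1))
                 * math.sin(r * n * math.pi) for r in range(1, 100_000))
    print(integral, series)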

Tuesday 15 May 2018

real analysis - Show that $\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}$ converges to 2




Consider the sequence defined by
$a_1 = \sqrt{2}$, $a_2 = \sqrt{2 + \sqrt{2}}$, so that in general, $a_n = \sqrt{2 + a_{n - 1}}$ for $n > 1$.
I know 2 is an upper bound of this sequence (I proved this by induction). Is there a way to show that this sequence converges to 2? What I think is that the key step is to prove 2 is the least upper bound of this sequence. But how?


Answer



Let $ x = \sqrt {2 + \sqrt {2 + \sqrt {2 + \cdots}}} $. Then, note that $$ x^2 = 2 + \sqrt {2 + \sqrt {2 + \cdots}} = 2 + x \implies x^2 - x - 2 = 0. $$Note that the two solutions to this equation are $x=2$ and $x=-1$, but since this square root cannot be negative, it must be $2$. (This computation presupposes that the limit exists; that follows from your work: the sequence is increasing and, as you proved by induction, bounded above by $2$, so the monotone convergence theorem gives convergence.)
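A quick numerical look at the iteration supports this (a sketch in Python; ten terms are shown, but the count is arbitrary):

    import math

    # a_1 = sqrt(2), a_n = sqrt(2 + a_{n-1}): increasing and approaching 2.
    a = math.sqrt(2)
    for n in range(1, 11):
        print(n, a)
        a = math.sqrt(2 + a)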


calculus - How to prove that the sequence defined by $a_1=0$, $a_2=1$, $a_n=\frac{a_{n-1}+a_{n-2}}{2}$ converges to $\frac23$?


How to prove that the sequence defined by $a_1=0$, $a_2=1$, $a_n=\frac{a_{n-1}+a_{n-2}}{2}$ converges to $\frac23$?





If we analyse terms:
$$
0,1,\frac{1}{2},\frac{3}{4},\frac{5}{8},\cdots.
$$



I'm asked to do this using a previously proved theorem which says that




if you have two sequences ${b_n}$ and ${c_n}$ converging both to the same limit $L$, then the sequence $a_n$ defined as $b_1,c_1,b_2,c_2,b_3,\dots$ converge to $L$.





In this case, $b_n$ would be $0,\cfrac{1}{2},\cfrac{5}{8},\cfrac{21}{32},\dots$ and $c_n$ would be $1,\cfrac{3}{4},\cfrac{11}{16},\cfrac{43}{64},\dots$



From here it's seems all I need to do is prove that $b_n$ and $c_n$ converge to $\cfrac{2}{3}$ and then, by the theorem, $a_n$ converges to $\cfrac{2}{3}$.



I need to define them because I need to prove $b_n$ and $c_n$ are monotone and bounded, in order to use the monotone convergence theorem. I've run into trouble when trying to define $b_n$ and $c_n$. Can anyone help me define them?
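As a computational aside (a sketch in Python): the recurrence has characteristic equation $2x^2 - x - 1 = 0$ with roots $1$ and $-\frac12$, which yields the closed form $a_n = \frac23 + \frac43\left(-\frac12\right)^n$; the snippet below verifies this exactly and makes the two interleaved monotone subsequences visible:

    from fractions import Fraction

    # Generate a_n exactly and compare with a_n = 2/3 + (4/3) * (-1/2)^n.
    a, b = Fraction(0), Fraction(1)  # a_1, a_2
    for n in range(1, 11):
        closed = Fraction(2, 3) + Fraction(4, 3) * Fraction(-1, 2) ** n
        assert a == closed
        print(n, a)
        a, b = b, (a + b) / 2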

Monday 14 May 2018

big list - $2=1$ Paradoxes repository

I really like to use paradoxes in my math classes, in order to awaken the interest of my students. Concretely, these last few weeks I have been proposing paradoxes that reach the conclusion that $2=1$. After one week, I explain the solution on the blackboard and I propose a new one. For example, I posted the following one some months ago: What is wrong with the sum of these two series?
I would like to increase my repertoire of fake proofs. I would be glad to read your proposals and discuss them! My students are 18 years old, so don't be too cruel :) Here is my own contribution:



\begin{equation}
y(x) = \tan x
\end{equation}
\begin{equation}
y^{\prime} = \frac{1}{\cos^{2} x}
\end{equation}

\begin{equation}
y^{\prime \prime} = \frac{2 \sin x}{\cos^{3} x}
\end{equation}
This can be rewritten as:
\begin{equation}
y^{\prime \prime} = \frac{2 \sin x}{\cos^{3} x} = \frac{2 \sin x}{\cos x \cdot \cos^{2} x} = 2 \tan x \cdot \frac{1}{\cos^{2} x} = 2yy^{\prime} = \left( y^{2} \right)^{\prime}
\end{equation}
Integrating both sides of the equation $y^{\prime \prime} = \left( y^{2} \right)^{\prime}$:
\begin{equation}
y^{\prime} = y^{2}

\end{equation}
And therefore
\begin{equation}
\frac{1}{\cos^{2} x} = \tan^{2} x
\end{equation}
Now, evaluating this equation at $x = \pi / 4$
\begin{equation}
\frac{1}{(\sqrt{2}/2)^{2}} = 1^{2}
\end{equation}
\begin{equation}

2 = 1
\end{equation}
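(A symbolic check, sketched in Python with SymPy, pinpoints the leak: the step $y''=(y^2)'$ is genuine, but integrating it only gives $y' = y^2 + C$, and here the forgotten constant is $1$.)

    import sympy as sp

    x = sp.symbols('x')
    y = sp.tan(x)
    print(sp.simplify(sp.diff(y, x, 2) - sp.diff(y**2, x)))  # 0: the step is valid
    print(sp.simplify(sp.diff(y, x) - y**2))                 # 1: the lost constant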

real analysis - Finding a bijective Map




I need to find a bijective map from $A=[0,1)$ to $B=(0,1).$ Is there a standard method for coming up with such a function, or does one just try different functions until one fits the requirements?



I've considered some "variation" of the floor function, but I'm not sure $\left \lfloor{x}\right \rfloor$ is bijective. Now I'm thinking of maybe some trig function that takes elements of $A$ and maps them to $B$.



I know how to show that a function is injective and surjective, but how do I find such a function? Am I overthinking this or am I under thinking this?



Any help would be greatly appreciated!!!



Lastly, since we are to find a bijective map from $A$ to $B$, assuming one exists, is this equivalent to saying that these two sets have the same cardinality?




Thank you


Answer



Consider the function
$\psi: A \rightarrow B$ given by
$$
\psi(x) = \begin{cases} x & \text{if}\;\;\; x \neq 0 \;\;\text{and}\;\; x \neq \frac{1}{n} \text{ for any } n \in \mathbb{N} \\
\frac{1}{2} & \text{if} \;\; x = 0 \\
\frac{1}{n+1} & \text{if } \; \; x= \frac{1}{n}, \; n \in \mathbb{N}
\end{cases}

$$
It is not difficult to verify that $\psi$ is bijective. The problem with constructing a bijection using our favorite elementary functions is that no continuous bijection between the sets exists. This is due to the differences in character (or topology) of the two sets. So unfortunately one must turn their attention to different functions.



The function above relies on the fact that we can get this bijection by taking a sequence in $(0,1)$, say $\{x_n\}$, mapping the extra point $0$ to $x_1$, and then mapping $x_n$ to $x_{n+1}$ for every $n$. Hopefully this trick helps with your future work.
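A direct implementation of $\psi$ (a sketch in Python; the floating-point equality test for $x = \frac1n$ is for illustration only):

    def psi(x):
        """Bijection from [0, 1) onto (0, 1), as described above."""
        if x == 0:
            return 0.5
        n = round(1 / x)
        if n >= 2 and 1 / n == x:   # x = 1/n for some integer n >= 2
            return 1 / (n + 1)
        return x

    for x in [0, 0.5, 1 / 3, 0.25, 0.7]:
        print(x, '->', psi(x))      # 0 -> 1/2, 1/2 -> 1/3, 1/3 -> 1/4, ...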


sequences and series - Weird limit $\lim\limits_{n\to\infty}\frac{1}{e^n}\sum\limits_{k=0}^n\frac{n^k}{k!}$




$$\lim \limits_{n\mathop\to\infty}\frac{1}{e^n}\sum \limits_{k\mathop=0}^n\frac{n^k}{k!} $$




I thought this limit was obviously $1$ at first, but approximations in Mathematica tell me it's $1/2$. Why is this?


Answer



In this answer, it is shown that
$$
\begin{align}
e^{-n}\sum_{k=0}^n\frac{n^k}{k!}
&=\frac{1}{n!}\int_n^\infty e^{-t}\,t^n\,\mathrm{d}t\\
&=\frac12+\frac{2/3}{\sqrt{2\pi n}}+O(n^{-1})
\end{align}
$$
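A high-precision numerical check of this expansion (a sketch assuming the mpmath library; the values of $n$ are arbitrary):

    from mpmath import mp, mpf, factorial, exp

    # e^{-n} * sum_{k=0}^n n^k/k! should behave like 1/2 + (2/3)/sqrt(2 pi n).
    mp.dps = 50
    for n in [10, 100, 1000]:
        s = sum(mpf(n) ** k / factorial(k) for k in range(n + 1))
        print(n, s * exp(-n))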



Sunday 13 May 2018

calculus - Gaussian type integral $\int_{-\infty}^{\infty} \frac{\mathrm{e}^{-a^2 x^2}}{1 + x^2} \mathrm{d}x$



When working a proof, I reached an expression similar to this:



$$\int_{-\infty}^{\infty} \frac{\mathrm{e}^{-a^2 x^2}}{1 + x^2} \mathrm{d}x$$




I've tried the following:



1. I tried squaring and combining and converting to polar coordinates, like one would solve a standard Gaussian. However, this yielded something which seems no more amenable to a solution:



$$\int_{\theta=0}^{\theta=2\pi} \int_{0}^{\infty} \frac{r \mathrm{e}^{-a^2 r^2}}{(1 + r^2 \sin^2(\theta))(1 + r^2 \cos^2(\theta))} \mathrm{d}r \mathrm{d}\theta$$



2. I tried doing the trig substitution $x = \tan u$, and I have no idea what to do from there.



$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \mathrm{e}^{-a^2 \tan^2(u)} \mathrm{d}u$$




3. I looked into doing $u^2 = 1 + x^2$, but this gives an ugly $\mathrm{d}x$ that I don't know how to handle; moreover, I think I'm breaking my limits of integration (because Mathematica no longer solves it):



$$u^2 = 1 + x^2$$



$$2 u \mathrm{d}u = 2 x \mathrm{d}x$$



$$\mathrm{d}x = \frac{u}{\sqrt{u^2 - 1}}\,\mathrm{d}u$$
$$\mathrm{e}^{a^2} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{-a^2 u^2}}{u \sqrt{u^2 - 1}} \mathrm{d}u$$



4. I looked into some form of differentiation under the integral sign, but that didn't seem to yield anything promising. (I tried parameterizing $x^2$ to $x^b$ in both places, and in either place alone, and nothing canceled cleanly.)




I have a solution from Mathematica, it's:



$$\pi e^{a^2} \text{erfc}(a)$$



But I'd like to know how to arrive at this. I'm sure it's something simple I'm missing.


Answer



Let $F$ be the function
$$F(a)=\int_{-\infty}^{\infty}\frac{e^{-a^{2}x^{2}}}{1+x^2}dx$$
We take the derivative w.r.t $a$

$$F^{\prime}(a)=\frac{d}{da}\left(\int_{-\infty}^{\infty}\frac{e^{-a^{2}x^{2}}}{1+x^2}dx\right)=\int_{-\infty}^{\infty}\frac{d}{da}\left(\frac{e^{-a^{2}x^{2}}}{1+x^2}\right)dx
=\int_{-\infty}^{\infty}\frac{-2ax^{2}e^{-a^{2}x^{2}}}{1+x^2}dx$$
$$=\int_{-\infty}^{\infty}\frac{-2a\big((x^{2}+1)-1\big)e^{-a^{2}x^{2}}}{1+x^2}dx
=-2a\int_{-\infty}^{\infty}e^{-a^{2}x^{2}}dx+2aF(a)
=-2a\sqrt{\frac{\pi}{a^2}}+2aF(a)$$
Then
$$F^{\prime}(a)=2a\left(F(a)-\sqrt{\pi}\,\frac{1}{\vert{a}\vert}\right)
=2aF(a)-2\sqrt{\pi}\mathrm{sign}(a).$$
Then you have a differential equation:
$$ F^{\prime}(a)-2a\,F(a)=-2\sqrt{\pi}\mathrm{sign}(a) $$

with initial condition $F(0)=\pi$.
This first-order ODE has integrating factor:
$$\mu(a)=\displaystyle{e^{\displaystyle{\int{-2ada}}}}=e^{-a^2}$$
Then
$$
\left(e^{-a^2}F(a)\right)^{\prime}=-2\sqrt{\pi}\mathrm{sign}(a) e^{-a^2} $$
this implies
$$ e^{-a^2}F(a)=-2\sqrt{\pi}\int{\mathrm{sign}(a) e^{-a^2}}da+C $$
Finally
$$F(a)=e^{a^2}\left(C-2\sqrt{\pi}\,\mathrm{sign}(a)\int{e^{-a^2}da}\right).$$
Choosing the antiderivative $\int_0^a \mathrm{sign}(t)\,e^{-t^2}\,dt=\frac{\sqrt{\pi}}{2}\,\mathrm{erf}(|a|)$ and using the initial condition $F(0)=\int_{-\infty}^{\infty}\frac{dx}{1+x^2}=\pi$ gives $C=\pi$, hence
$$F(a)=\pi e^{a^2}\left(1-\mathrm{erf}(|a|)\right)=\pi e^{a^2}\,\mathrm{erfc}(|a|),$$
which is exactly the Mathematica result for $a>0$.
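A numerical confirmation of the closed form (a sketch assuming SciPy; the sample values of $a$ are arbitrary):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erfc

    # Compare the integral with pi * e^{a^2} * erfc(a) for a few a > 0.
    for a in [0.5, 1.0, 2.0]:
        val, _ = quad(lambda x: np.exp(-a**2 * x**2) / (1 + x**2),
                      -np.inf, np.inf)
        print(a, val, np.pi * np.exp(a**2) * erfc(a))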



Saturday 12 May 2018

arithmetic - Detecting that a fraction is a repeating decimal




Given any fraction where the numerator (N) and the denominator (D) are both positive whole numbers.



Without manually dividing N by D, is it possible to pre-determine if the resulting value represented in decimal would be a repeating value? (e.g. 44÷33 is 1.3333333333....)



I believe the value of N ÷ D will NOT be a repeating decimal if and only if D is any of the following




  1. D is equal to 1
    OR


  2. D's prime factors only consist of 2's and/or 5's. (includes all multiples of 10)



Otherwise, if neither of the two rules above holds, then dividing the positive whole numbers N and D will produce a repeating decimal.



Correct, or am I missing a case?


Answer



Correct, with one caveat: the rule should be applied after the fraction is reduced to lowest terms (for example, $3/3 = 1$ terminates even though $D = 3$).



From wikipedia:





A decimal representation written with a repeating final 0 is said to terminate before these zeros. Instead of "1.585000…" one simply writes "1.585". The decimal is also called a terminating decimal. Terminating decimals represent rational numbers of the form $k/(2^n5^m)$.




http://en.wikipedia.org/wiki/Repeating_decimals
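The test is easy to mechanize (a sketch in Python; note the reduction to lowest terms discussed in the caveat above):

    from math import gcd

    def terminates(n, d):
        """True iff n/d has a terminating decimal expansion (n, d positive)."""
        d //= gcd(n, d)          # reduce to lowest terms first
        for p in (2, 5):
            while d % p == 0:
                d //= p
        return d == 1

    print(terminates(44, 33))      # False: 44/33 = 1.333...
    print(terminates(1585, 1000))  # True: 1.585
    print(terminates(3, 3))        # True: reduces to 1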


big list - Primer on complex analysis and Riemann surfaces for undergraduate physics / theoretical physics majors



Ref: The Road to Reality: a complete guide to the laws of the universe, (Vintage, 2005) by Roger Penrose [Chap. 7: Complex-number calculus and Chap. 8: Riemann surfaces and complex mappings]



I'm searching for an easily readable and understandable book (or resource of any kind; but preferably textbook with many worked-out problems and solutions and problem sets) to learn complex analysis and basics of Riemann surfaces - and applications to theoretical physics. (Particularly: material geared towards / written with undergraduate-level physics / theoretical physics students in view).




Any suggestions?



My math background: I have a working knowledge of single- and multivariable calculus, linear algebra, and differential equations; also some rudimentary real analysis.


Answer



I don't think you'll be able to read a book on Riemann surfaces before you study complex analysis. Once you've finished learning the rudiments of complex analysis, I recommend Rick Miranda's book "Algebraic Curves and Riemann Surfaces". It probably is the gentlest introduction I know.


Conditions for measure being finite and $\sigma$-finite




I'm working on a problem for measure theory and wanted to see if I'm on the right track.



I've shown that over a measurable space $(X,\mathcal M)$, $\sum_{j=1}^\infty a_j\mu_j$ is a measure, where $\{\mu_j\}_{j=1}^\infty$ is a sequence of measures and $\{a_j\}_{j=1}^\infty \subset (0,\infty)$.



Now I'm trying to find the conditions for which a specific example of this measure is finite, and $\sigma$-finite, the measure being $$\sum_{j=1}^\infty a_j\delta_{x_j}$$ where $\delta_x$ is the Dirac measure.



$\left(\sum_{j=1}^\infty a_j\delta_{x_j}\right)(X) = \sum_{j=1}^\infty a_j$, since $\delta_{x_j}(X) = 1$ for each $j$, so the condition for finiteness of this measure should be that the sum $\sum_{j=1}^\infty a_j$ is finite. Is this correct?



Secondly, for $\sigma$-finiteness, my intuition tells me that X should be countable, since the Dirac measure is similar to the counting measure, but I'm not sure how to formally show conditions for finiteness or $\sigma$-finiteness.




Any help would be great! Thank you.


Answer



The finiteness part is correct. The measure is always $\sigma$-finite (assuming the points $x_j$ are distinct, so that each singleton $\{x_j\}$ has finite measure, namely $a_j$): we can write $X = (\cup_j \{x_j\})\cup (X\setminus \cup_j \{x_j\})$, where $X\setminus \cup_j \{x_j\}$ is measurable with measure $0$, so $X$ is a countable union of sets of finite measure.


elementary number theory - Solving a congruence -- where to start?

For which positive integers $n$ is it true that
$$1^2 + 2^2 + \cdots + (n − 1)^2 \equiv 0 \,(\text{mod } n)$$






I have no idea where to start. I'm just looking for a nudge in the right direction. Any idea how to go about solving? Thanks for any help.
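As a nudge rather than a solution: the left-hand side has the closed form $\frac{(n-1)n(2n-1)}{6}$, and a quick scan (a sketch in Python) shows which small $n$ satisfy the congruence, from which a pattern modulo small primes can be conjectured:

    # List the n below 50 for which 1^2 + ... + (n-1)^2 is divisible by n.
    good = [n for n in range(1, 50)
            if sum(k * k for k in range(1, n)) % n == 0]
    print(good)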

Friday 11 May 2018

probability - Expected time to roll all 1 through 6 on a die




What is the average number of times it would take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter.



I had this question explained to me by a professor (not a math professor), but the explanation was not clear. We were given the answer $(1-(\frac56)^n)^6 = .5$, or $n = 12.152$.



Can someone please explain this to me, possibly with a link to a general topic?


Answer



The time until the first result appears is $1$. After that, the random time until a second (different) result appears is geometrically distributed with parameter of success $5/6$, hence with mean $6/5$ (recall that the mean of a geometrically distributed random variable is the inverse of its parameter). After that, the random time until a third (different) result appears is geometrically distributed with parameter of success $4/6$, hence with mean $6/4$. And so on, until the random time of appearance of the last and sixth result, which is geometrically distributed with parameter of success $1/6$, hence with mean $6/1$. This shows that the mean total time to get all six results is
$$\sum_{k=1}^6\frac6k=\frac{147}{10}=14.7.$$







Edit: This is called the coupon collector problem. For a fair $n$-sided die, the expected number of attempts needed to get all $n$ values is
$$n\sum_{k=1}^n\frac1k,$$ which, for large $n$, is approximately $n\log n$. This stands in contrast with the mean time needed to complete only some proportion $cn$ of the whole collection, for some fixed $c$ in $(0,1)$, which, for large $n$, is approximately $-\log(1-c)n$. One sees that most of the $n\log n$ steps needed to complete the full collection are actually spent completing the last one per cent, or the last one per thousand, or the last whatever percentage of the collection.
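A short simulation agrees with the value $14.7$ (a sketch in Python; the number of trials is an arbitrary choice):

    import random

    def rolls_to_collect(sides=6):
        """Number of rolls of a fair die until every face has appeared."""
        seen, count = set(), 0
        while len(seen) < sides:
            seen.add(random.randint(1, sides))
            count += 1
        return count

    trials = 100_000
    print(sum(rolls_to_collect() for _ in range(trials)) / trials)  # ~ 14.7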


ordinary differential equations - A nice way to do Euler's method on a calculator?

As part of the calculator paper for IB (International Baccalaureate), we may be asked to do Euler's method for, say, 10 iterations.



While it is feasible to do with a calculator (slightly easier if you have 2 calculators), it is quite a nuisance, and it is quite easy to make a mistake when I have to retype the numbers back into the calculator.




The problem here is that calculators can only store 1 answer (Ans) at a time, and any attempt to store your answer into a memory slot would delete your calculation line, which you would then have to type in again.



On the other hand, you can easily use the calculator to execute recursive functions which only rely on 1 variable.



For example, I can calculate the $n$th term of the following recursive sequence by repeatedly pressing the = button:



$$T_{n+1}=T_n^3+\sqrt{T_n}-\frac{1}{T_n},\, T_0=5$$



In this example, I would first press 5 = and then type in ${Ans}^3+\sqrt{Ans}-\frac{1}{Ans}$ and press = $n$ times to get $T_n$.




However, this is not possible with Euler's method which has recursive formulas:



$$x_{n+1}=x_n+h$$
$$y_{n+1}=y_n+h\frac{dy_n}{dx_n}$$



where $h$ is the step (which can be pre-stored in a memory slot) and $\frac{dy_n}{dx_n}=f(x_n, y_n)$.



We are allowed to use a graphical calculator (GDC) which has several functions. I have found a way to do this using the spreadsheet function which is quite fiddly, but at least avoids copying mistakes when retyping numbers into my calculator. I was wondering if there were yet easier methods to do this, so I shall ask for the following methods, if possible:



Method A




I would prefer it if a method with the following restrictions could be used to execute Euler's method:



You have an ordinary scientific calculator which has the ability to store the previous answer (Ans). You have the ability to type in a whole expression in terms of Ans, as shown in the first example. Is it possible to type in a one-liner which would do Euler's method by repeatedly pressing =?



*Note: I am aware that this method is probably possible but may be quite complicated to type in/learn. Therefore I would prefer Method B if it is simpler, as I would be under the time pressure of an exam. Nevertheless, I would still be interested in a method A which can do Euler's method for me as I personally prefer my ordinary scientific calculator to my GDC.



Method B



In addition to the Ans tool given in Method A, you also have the ability to type in $m\times n$ matrices in your calculator. The calculator can take in matrices in its input, and can output matrices which will be stored to Ans. Matrix multiplication (and raising to a power) can be carried out along with transpose, determinant and inverse functions (where possible) on matrices. Note that I cannot "get" cell $(i,j)$ of a matrix - I have to use the matrix as a whole.




However, I can still do something like the following:



$$\pi\begin{bmatrix}\sqrt{\det\left({Ans}^{T} \times \begin{bmatrix}3 & 4\\12 & \det(Ans)\end{bmatrix}^{-1}\right)} & 4\\\det(Ans)^2 & -\det({Ans}^2)\end{bmatrix}^3$$



Similarly, is there a one-liner which would be able to do Euler's method?



EDIT:



Method C




Unfortunately, it turns out that my calculator cannot accept matrices within matrices, which is a real shame; my mistake for thinking it could. It can only manipulate matrix answers, and I can only type in constant matrices. Nevertheless, I'm sure there is a way to circumvent this problem with some matrix multiplication.



So my calculator can only do something like:



$$\begin{bmatrix}1 & 2 \\ 8 & 2\end{bmatrix}^{-2}\times {Ans}^T \times {(Mat\, A)}^{-1}$$



(Where Mat A is previously stored in the memory. Note that Ans can still be a matrix)



Note that my calculators can store values (and matrices, in the case of Method B) into their memory, but those would only be initial values for variables (like $h$, $x_0$ and $y_0$), as the one-liner must be deleted before I can type in the line which stores to memory.




In addition to these methods, if there is some function/tool I am completely missing on these calculators which I could use to simplify my task, I would be glad to hear it.
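One idea worth mentioning for Method A (a sketch, not calculator syntax; whether a given calculator exposes complex numbers with real/imaginary extraction is an assumption): pack the pair $(x_n, y_n)$ into a single complex value $z_n = x_n + i\,y_n$, so one Ans register carries both variables and a single repeated keystroke performs one Euler step. In Python the state update looks like this, with a hypothetical right-hand side $f(x,y)=x+y$, step $h=0.1$ and initial condition $(0,1)$:

    # Euler's method with the state (x, y) packed into one complex number.
    h = 0.1
    f = lambda x, y: x + y        # hypothetical example of dy/dx = f(x, y)
    z = 0.0 + 1.0j                # x_0 = 0, y_0 = 1

    for _ in range(10):           # ten presses of "="
        z = z + h + 1j * h * f(z.real, z.imag)   # x += h; y += h * f(x, y)

    print(z.real, z.imag)         # x_10 and the Euler approximation y_10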

discrete mathematics - Proving the sum of the first $n$ natural numbers by induction




I am currently studying proving by induction but I am faced with a problem.




I need to prove the following by induction.



$$1+2+3+\ldots+n=\frac{1}{2}n(n+1)$$



for all $n \geq 1$.



Any help on how to solve this would be appreciated.







This is what I have done so far.



Show truth for $N = 1$



Left Hand Side = 1



Right Hand Side = $\frac{1}{2} (1) (1+1) = 1$



Suppose truth for $N = k$




$$1 + 2 + 3 + ... + k = \frac{1}{2} k(k+1)$$



Proof that the equation is true for $N = k + 1$



$$1 + 2 + 3 + ... + k + (k + 1)$$



Which is Equal To



$$\frac{1}{2} k (k + 1) + (k + 1)$$




This is where I'm stuck, I don't know what else to do. The answer should be:



$$\frac{1}{2} (k+1) (k+1+1)$$



Which is equal to:



$$\frac{1}{2} (k+1) (k+2)$$



Right?




By the way sorry about the formatting, I'm still new.


Answer



Basic algebra is what's causing the problems: you reached the point



$$\frac{1}{2}K\color{red}{(K+1)}+\color{red}{(K+1)}\;\;\;\:(**)$$



Now just factor out the red terms:



$$(**)\;\;\;=\color{red}{(K+1)}\left(\frac{1}{2}K+1\right)=\color{red}{(K+1)}\left(\frac{K+2}{2}\right)=\frac{1}{2}(K+1)(K+2)$$
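As a final empirical check of the formula itself (a minimal sketch in Python):

    # Verify 1 + 2 + ... + n = n(n+1)/2 for small n.
    for n in range(1, 20):
        assert sum(range(1, n + 1)) == n * (n + 1) // 2
    print("formula verified for n = 1..19")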


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...