Tuesday 31 October 2017

geometry - I have X feet of rope. What should the maximum size of the triangle be?

Since we're approaching the Christmas season, I'm calculating how many feet of lights I need for a few decorations.



Let's say I have X feet of lights; is it possible to calculate the height/width of an isosceles triangle so that it's outlined completely in lights? I can always divide the isosceles triangle into two right triangles, and then cover each one separately.



So in terms of angles, the triangle should either be 70, 90, and 20 degrees (if divided into two right triangles) or 70, 70, and 40.



Three questions:

If I have X feet of string, how high & wide should the isosceles triangle be so that it's covered completely by the string?



If I have X feet of string, how high & wide should the right triangle be so that it's covered completely by the string?



Instead of using one piece of string for the isosceles triangle, would I use less string if I divide it into two right triangles (which have 6 sides in total)? I would need 2 strings, but maybe the sum of these two strings is less than the one string I would use if I covered the isosceles triangle.



Anyways, I can buy the lights in 30 or 60 feet lengths. And if I buy the lights, I don't want to be short or over-buy. And since they're a specific length, I want the triangle to use as much of the lights as possible (i.e. not buy a 60' string so that the whole triangle only uses 50').



Thanks.
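
(For what it's worth, a quick computational sketch of question 1, assuming the base angles are the 70° mentioned above, so the apex is 40°: with equal sides $s$, the base is $2s\cos 70^\circ$, the height is $s\sin 70^\circ$, and the perimeter $P = 2s(1+\cos 70^\circ)$ pins down $s$. The function below is hypothetical, purely to illustrate.)

```python
from math import cos, sin, radians

# Sketch for question 1, assuming base angles of 70 degrees (apex 40):
# perimeter P = base + 2*side, base = 2*side*cos(70 deg), height = side*sin(70 deg).
def isosceles_dims(P, base_angle_deg=70):
    a = radians(base_angle_deg)
    side = P / (2 * (1 + cos(a)))
    return 2 * side * cos(a), side * sin(a)  # (width, height)

for P in (30, 60):
    w, h = isosceles_dims(P)
    print(f"{P} ft of lights -> base {w:.2f} ft, height {h:.2f} ft")
# 30 ft -> base 7.65 ft, height 10.50 ft; 60 ft -> base 15.29 ft, height 21.01 ft
```

(Question 3 has a clean answer in any case: outlining the two right triangles traces the altitude twice on top of the original outline, so it always uses more string than outlining the isosceles triangle once.)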

combinatorics - combinatorial proof for binomial identity



Consider the following binomial identity:
$$
\sum_{k=0}^n(-1)^k\binom{n}{k}g(k)=0

$$
for every polynomial $g(k)$ with degree less than $n$.



My Proof
Every polynomial $g(k)$ of degree $t$ can be represented in the following form
$$
g(k) = \sum_{l=0}^tc_l(k)_l,
$$
where
$$

(k)_l=k(k-1)\ldots(k-l+1),
$$
and $c_l$ are some coefficients.



For every $l<n$,
\begin{multline}
\sum_{k=0}^n(-1)^k\binom{n}{k}(k)_l=
\sum_{k=l}^n(-1)^k\binom{n}{k}(k)_l=
\sum_{k=l}^n(-1)^k\frac{n!\,k!}{k!\,(n-k)!\,(k-l)!}=\\
\frac{n!}{(n-l)!}\sum_{k=l}^n(-1)^k\frac{(n-l)!}{(n-k)!(k-l)!}=
\frac{n!}{(n-l)!}\sum_{k=l}^n(-1)^k\binom{n-l}{n-k}=0,
\end{multline}
therefore,
$$
\sum_{k=0}^n(-1)^k\binom{n}{k}g(k)=0.
$$



Question
Do you know some other proofs of this identity? I'm most interested in a combinatorial proof.



Answer



Let $n$ be a positive integer and $E$ the real vector space of polynomials with degree $<n$.

Consider the linear map $\varphi:E\to E,P(X)\mapsto P(X)-P(X+1)$.



For every nonzero $P(X)\in E$, we observe that $\deg(\varphi(P(X)))<\deg(P(X))$. This leads to $\varphi^n=0$ (of course $\varphi^k$ means $\varphi\circ\cdots\circ\varphi$ with $k$ copies of $\varphi$).



Now consider the map $\psi:E\to E,P(X)\mapsto P(X+1)$, so that $\varphi=id_E-\psi$.



Since $\psi$ and $id_E$ commute, we can apply Newton's binomial formula and get :




$$\sum_{k=0}^n{n\choose k}(-1)^k\psi^k=\left(id_E-\psi\right)^n=\varphi^n=0$$



So, for any $P(X)\in E$ :



$$\sum_{k=0}^n{n\choose k}(-1)^kP(X+k)=0$$

Evaluating at $0$, we finally get :



$$\sum_{k=0}^n{n\choose k}(-1)^kP(k)=0$$
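
(A quick numerical sanity check of the identity, as a sketch; `g` below is just one arbitrary polynomial of degree $3$, so the sum vanishes for every $n \ge 4$.)

```python
from math import comb

# sum_{k=0}^n (-1)^k C(n,k) g(k) should vanish for any polynomial g of degree < n.
def alternating_sum(n, g):
    return sum((-1)**k * comb(n, k) * g(k) for k in range(n + 1))

g = lambda k: k**3 + 2*k + 1                           # degree 3
print([alternating_sum(n, g) for n in range(4, 8)])    # [0, 0, 0, 0]
```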


Monday 30 October 2017

real analysis - Prove that the sequence $(b_n)$ converges



Prove that if $(a_n)$ converges and $|a_n - nb_n| < 2$ for all $n \in \mathbb N^+$ then $(b_n)$ converges.



Is the following proof valid?



Proof
Since $(a_n)$ converges, $(a_n)$ must be bounded, i.e. $\exists M \in \mathbb R^+$ such that for each $n \in \mathbb N^+$, we have $|a_n| < M$.
Now, by the triangle inequality, $|nb_n|$ = $|nb_n - a_n + a_n| \le |nb_n - a_n| + |a_n| < 2 + M$.
Hence, $|b_n - 0| < \frac{2 + M}{n}$.
Let $\epsilon > 0$ be given, and by the Archimedean Property of $\mathbb R$, we can choose $K \in \mathbb N^+$ such that $K > \frac {2+M}{\epsilon}$.
Then, $n \ge K \implies n > \frac {2 + M}{\epsilon} \implies |b_n - 0| < \epsilon$.
Therefore $(b_n)$ converges, and its limit is $0$.


Answer



Your proof is correct. In fact, once you have $$|b_n|<\frac{M+2}{n}$$ it is clear that, since the numerator is bounded, one can make the right-hand side $<\epsilon$ for any $\epsilon >0$ by taking $n$ large enough.


calculus - Evaluating $\lim_{x \to 0^+} (e^x-1)^{\frac{(\tan{x})^2}{\sqrt[3]{x^2}}}$



I cannot figure this limit out.




$$\lim_{x \to 0^+} (e^x-1)^{\frac{(\tan{x})^2}{\sqrt[3]{x^2}}}$$



I've used the e to the ln trick and multiplied by 1 ($\frac{x^2}{x^2}$) and arrived at



$$\lim_{x \to 0^+} \exp({x^{4/3}} \ln ({e^x-1})) $$



However I failed at getting further. I tried adding and subtracting $\ln x$ but that got me nowhere.



I cannot use l'Hospital or Taylor series (only the "known" limits for $\sin$, $\cos$, $e^x$, $\ln$, such as $\lim_{x \to 0}\frac{\sin x}{x}=1$, which are really only Taylor series).




Thanks for help!


Answer



Hint: $$\lim_{x\to 0}\frac{e^x-1}x=\lim_{x\to 0}\frac{\tan x}x=1$$
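
For completeness, a sketch of how the hint finishes the computation, starting from the reduction $\exp\left(x^{4/3}\ln(e^x-1)\right)$ obtained in the question:

$$x^{4/3}\ln(e^x-1)=x^{4/3}\ln\frac{e^x-1}{x}+x^{1/3}\cdot x\ln x\ \xrightarrow[x\to 0^+]{}\ 0\cdot\ln 1+0\cdot 0=0,$$

so the original limit is $e^0=1$ (using the standard limit $x\ln x\to 0$ as $x\to 0^+$).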


elementary number theory - Solving linear congruences by hand: modular fractions and inverses



When I am faced with a simple linear congruence such as
$$9x \equiv 7 \pmod{13}$$
and I am working without any calculating aid handy, I tend to do something like the following:



"Notice" that adding $13$ on the right and subtracting $13x$ on the left gives:
$$-4x \equiv 20 \pmod{13}$$




so that $$x \equiv -5 \equiv 8 \pmod{13}.$$



Clearly this process works and is easy to justify (apart from not having an algorithm for "noticing"), but my question is this: I have a vague recollection of reading somewhere this sort of process was the preferred method of C. F. Gauss, but I cannot find any evidence for this now, so does anyone know anything about this, or could provide a reference? (Or have I just imagined it all?)



I would also be interested to hear if anyone else does anything similar.


Answer



Generally, if $\,b\,$ is coprime to the modulus $m$ then (by Bezout) it is invertible $\!\bmod m,\,$ so scaling $\,bx\equiv a\,$ by $\,b^{-1}\,$ we obtain the unique solution $\,x\equiv b^{-1}a.\,$ We can quickly compute $\,b^{-1}\pmod{\!m}\,$ by the extended Euclidean algorithm, but there are often more convenient ways for smaller numbers. We describe a few of these methods below, where we view $\, x\equiv b^{-1}a \equiv a/b\,$ as a modular fraction.







The first, Gauss's algorithm, is based on Gauss's proof of Euclid's lemma via the descent $\,p\mid ab\,\Rightarrow\, p\mid a(p\bmod b).\,$ Generally it only works for prime moduli, but we can also execute the general extended Euclidean algorithm in fraction form too (using multi-valued "fractions").



It works by repeatedly scaling $\rm\:\color{#C00}{\frac{A}B}\overset{\times\ N} \to \frac{AN}{BN}\: $ by the least $\rm\,N\,$ with $\rm\, BN \ge 13,\, $ then reducing mod $13$



$$\rm\displaystyle \ mod\ 13\!:\,\ \color{#C00}{\frac{7}9} \,\overset{\times\ 2}\equiv\, \frac{14}{18}\, \equiv\, \color{#C00}{\frac{1}5}\,\overset{\times \ 3}\equiv\, \frac{3}{15}\,\equiv\, \color{#C00}{\frac{3}2} \,\overset{\times\ 7}\equiv\, \frac{21}{14} \,\equiv\, \color{#C00}{\frac{8}1}\qquad\!\! $$



Denominators of the $\color{#c00}{\rm reduced}$ fractions decrease $\,\color{#C00}{ 9 > 5 > 2> \ldots}\,$ so reach $\color{#C00}{1}\,$ (not $\,0\,$ else the denominator would be a proper factor of the prime modulus; it may fail for composite modulus)
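
A minimal sketch of this descent in code (assuming a prime modulus, per the caveat above; the function name is mine):

```python
# Solve b*x = a (mod m), m prime, by the fraction descent on a/b described above.
def mod_fraction(a, b, m):
    a, b = a % m, b % m
    while b != 1:
        n = -(-m // b)                 # least N with b*N >= m (ceiling division)
        a, b = (a * n) % m, (b * n) % m
    return a

print(mod_fraction(7, 9, 13))          # 8, matching the worked chain above
```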



Or, simpler, allowing negative residues $\displaystyle\ \ \frac{7}9\,\equiv\, \frac{7}{\!-4\!\ \,}\,\equiv\,\frac{21}{\!\!-12\ \ \ \!\!}\,\equiv\, \frac{8}1$




This optimization using least-magnitude residues $0,\pm 1, \pm 2,\ldots$ often simplifies modular arithmetic. Here we can also optimize by (sometimes) cancelling obvious common factors, or by pulling out obvious factors of denominators, etc. For example



$$\frac{7}9\,\equiv\, \frac{\!-6\,}{\!-4\,}\,\equiv\frac{\!-3\,}{\!-2\,}\,\equiv\frac{10}{\!-2\,}\,\equiv\,-5$$



$$\frac{7}9\,\equiv\,\frac{\!-1\cdot 6}{\ \ 3\cdot 3}\,\equiv\,\frac{\!\,12\cdot 6\!}{\ \ \,3\cdot 3}\,\equiv\, 4\cdot 2$$






Or as you did: $ $ check if the quotient $\rm\,a/b\equiv (a\pm\!13\,i)/(b\pm\!13\,j)\,$ is exact for small $\rm\,i,j,\,$ e.g.




$$ \frac{1}7\,\equiv \frac{\!-12}{-6}\,\equiv\, 2;\ \ \ \frac{5}7\,\equiv\,\frac{18}{\!-6\!\,}\,\equiv -3$$



When working with smaller numbers there is a higher probability of such optimizations being applicable (the law of small numbers), so it's well-worth looking for such in manual calculations.



More generally we can make the quotient exact by using Inverse Reciprocity.



$\bmod 13\!:\ \dfrac{a}{b}\equiv \dfrac{a-13\left[\color{#0a0}{\dfrac{a}{13}}\bmod b\right]}b\,\ $ e.g. $\,\ \dfrac{8}9\equiv \dfrac{8-13\overbrace{\left[\dfrac{8}{\color{#c00}{13}}\bmod 9\right]}^{\large\color{#c00}{ 13\ \,\equiv\,\ 4\ }}}9\equiv\dfrac{8-13[2]}9\equiv-2$



Note that the value $\,\color{#0a0}{x\equiv a/13}\,$ is what is needed to make the numerator divisible by $b,\,$ i.e.




$\qquad\quad\bmod b\!:\,\ a-13\,[\color{#0a0}x]\equiv 0\iff 13x\equiv a\iff \color{#0a0}{x\equiv a/13}$



This can be viewed as an optimization of the Extended Euclidean Algorithm in the case when it terminates in two steps.
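
The same two-step trick as a code sketch (the helper name is mine; `pow(m, -1, b)` needs Python 3.8+):

```python
# x = a/b (mod m) via inverse reciprocity: subtract the multiple of m that
# makes the numerator divisible by b, then divide exactly.
def inverse_reciprocity(a, b, m):
    t = (a * pow(m, -1, b)) % b        # t = a/m (mod b)
    return ((a - m * t) // b) % m

print(inverse_reciprocity(8, 9, 13))   # 11, i.e. -2 (mod 13), as in the example
```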



Note $ $ Gauss' algorithm is my name for a special case of the Euclidean algorithm that's implicit in Gauss' Disquisitiones Arithmeticae, Art. 13, 1801. I don't know if Gauss explicitly used this algorithm elsewhere (apparently he chose to avoid use or mention of the Euclidean algorithm in Disq. Arith.).



The reformulation in terms of fractions does not occur in Gauss' work as far as I know. I devised it in my youth before I had perused Disq. Arith. It is likely very old but I don't recall seeing it in any literature. I'd be very grateful for any historical references.



See here for further discussion, including a detailed comparison with the descent employed by Gauss, and a formal proof of correctness of the algorithm.




Beware $ $ Modular fraction arithmetic is valid only for fractions with denominator coprime to the modulus. See here for further discussion.


Sunday 29 October 2017

sequences and series - Is the sum of natural numbers equal to $-\frac{1}{8}$?

I came across the following video on YouTube: Sum of all natural numbers (- 1/8).



Basically what happens is:
\begin{align*}
1+2+3+\dotsb &= N \\
1+(2+3+4)+(5+6+7)+\dotsb &= N\\
1+9+18+27+\dotsb &= N\\
1+9(1+2+3+4+\dotsb)&= N\\

1+9N &= N
\end{align*}
and therefore $N=-\frac{1}{8}$.



This is directly in contradiction with the well-known result of $-\frac{1}{12}$.



What is the problem with this reasoning? Was this result discovered earlier? Is this a consequence of Riemann's Rearrangement Theorem? Thanks in advance.



This was a repost of my previous post, because some people said it was a duplicate of "Why is the sum of natural numbers $-1/12$?"

functional analysis - Convolutions Support.



It is known that:

$$
\operatorname{supp}(u *v) \subset \operatorname{supp}(u) + \operatorname{supp}(v)
$$

Where:
$$
\operatorname{supp}(u) = \overline{\{x \in \mathbb{R}^n: u(x) \neq 0\}}
$$

And:
$$
(u*v)(x)=\int\limits_{\mathbb{R}^n} u(x-y)v(y)\, dy
$$

I need an example of two functions $u,v$ such that $u$ has compact support, but $u*v$ has no compact support. Does anyone know any examples of this?


Answer



Let $u$ be any non-negative function with compact support which has the value $1$ on some open set. Let $v(x)=e^{-x^{2}}$. Then $(u*v)(x) >0$ for all $x$. So $u*v$ does not have compact support.



For an explicit example, let $u(x)=1$ for $0 \leq x \leq 1$, $u(x)=0$ for $x \geq 1+\frac 1 n$ as well as for $x \leq -\frac 1 n$, $u(x)=1+nx$ for $-\frac 1 n \leq x \leq 0$, and $u(x)=1-n(x-1)$ for $1 \leq x \leq 1+\frac 1 n$.



There is no such example where both $u$ and $v$ have compact support. If $u$ has support $K$ and $v$ has support $H$, then $u*v$ vanishes on the complement of $K+H$. Since the sum of two compact sets is compact, it follows that $u*v$ has compact support.
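
A quick numerical illustration of the first construction (a sketch: here $u$ is simplified to the indicator of $[0,1]$, which is compactly supported though not continuous, and then $(u*v)(x)=\int_0^1 e^{-(x-y)^2}\,dy=\frac{\sqrt\pi}{2}\left(\operatorname{erf}(x)-\operatorname{erf}(x-1)\right)$):

```python
from math import erf, sqrt, pi

# (u*v)(x) for u = indicator of [0,1] and v(x) = exp(-x^2), in closed form.
def conv(x):
    return sqrt(pi) / 2 * (erf(x) - erf(x - 1))

for x in (0, 2, 4):
    print(x, conv(x))   # strictly positive at every x, so no compact support
```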


integration - Why do we treat differential notation as a fraction in u-substitution method




How did we come to know that treating the differential notation as a fraction will help us in finding the integral. And how do we know about its validity?

How can $\frac{dy}{dx}$ be treated as a fraction?


I want to know about how did u-substitution come about and why is the differential treated as a fraction in it?


Answer



It doesn't necessarily need to be.



Consider a simple equation $\frac{dy}{dx}=\sin(2x+5)$ and let $u=2x+5$. Then
$$\frac{du}{dx}=2$$
Traditionally, you will complete the working by using $du=2\cdot dx$, but if we were to avoid this, you could instead continue with the integral:
$$\int\frac{dy}{dx}dx=\int\sin(u)dx$$
$$\int\frac{dy}{dx}dx=\int\sin(u)\cdot\frac{du}{dx}\cdot\frac{1}{2}dx$$
$$\int\frac{dy}{dx}dx=\frac{1}{2}\int\sin(u)\cdot\frac{du}{dx}dx$$

$$y=c-\frac{1}{2}\cos(u)$$
$$y=c-\frac{1}{2}\cos(2x+5)$$



But why is this? Can we prove that the separation of the differentials is justified? As Gerry Myerson has mentioned, it's a direct consequence of the chain rule:



$$\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$$
$$\int\frac{dy}{dx}dx=\int\frac{dy}{du}\frac{du}{dx}dx$$
But then if you 'cancel', it becomes
$$\int\frac{dy}{dx}dx=\int\frac{dy}{du}du$$
Which is what you desired.



Saturday 28 October 2017

real analysis - Show that $\lim_{n \rightarrow \infty}m(O_n)=m(E)$ when $E$ is compact.



Below is an attempt at a proof of the following problem. Any feedback would be greatly appreciated. Thx!







Let $E$ be a set and $O_n = \{x: d(x, E) < \frac{1}{n}\}$.



Show




  • If $E$ is compact then $m(E) = \lim_{n \rightarrow \infty} m(O_n)$.


  • This is false for $E$ closed and unbounded or $E$ open and bounded.




Note that all sets are real and measurable refers to Lebesgue measurable.







Suppose $E$ is compact. Then $m(E) < \infty$. Also, since each $O_n$ is a bounded open set of $\mathbb{R}^d$, $m(O_n) < \infty$. If either of the following is true, we have that $\lim_{n \rightarrow \infty}m(O_n)=m(E)$:



$O_n \nearrow E$



$O_n \searrow E$



Now $\bigcap_{n=1}^{\infty} O_n = \{ x:d(x,E) = 0\}$.




That is, $x \in \bigcap_{n=1}^{\infty}O_n \iff d(x,E) < \frac{1}{n}$ for every $n \in \mathbb{N} \iff d(x,E) = 0$.



Then we have that $\bigcap_{n=1}^{\infty}O_n = E$ and $O_n \searrow E$. Thus $\ m(E) = \lim_{n \rightarrow \infty} m(O_n)$.



Suppose $E$ is closed and unbounded. Let $E = \{(x,y): y = 2 \}$ and $O_n = \{(x,y): d(x,E) < \frac{1}{n}\}$.



Since $E$ is a line in $\mathbb{R}^2$, $m(E) = 0$.



Also, since the measure of a rectangle in $\mathbb{R}^d$ is its volume, $m(O_n) = \left| O_n \right| = \infty$, since the rectangle $O_n$ is unbounded.




It is therefore apparent that $m(E) \neq \lim_{n \rightarrow \infty} m(O_n)$.



Suppose $E$ is open and bounded.
Let $E=(0,1)$ and $O_n = \{x \mid d(x,E) < \frac{1}{n}\}$.



$\mathbb{R}-E$ is closed and $O_n \in \mathbb{R}-E$.



Since $\mathbb{R}-E$ contains all its limit points, $\lim_{n \rightarrow \infty}O_n = O \in \mathbb{R}-E$.




Thus $\lim_{n \rightarrow \infty} m(O_n) \neq m(E)$.


Answer



Your example for $E$ open and bounded is not correct. I really can't tell what you are doing, but it should be clear that $m((0,1)) = \lim_{n\to \infty}m(O_n)$ in this case.



Hint for an example: Let $E$ be an open dense subset of $(0,1)$ such that $m(E)<1/2$.


calculus - Relation between integral, gamma function, elliptic integral, and AGM

The integral $\displaystyle\int\limits_0^{\infty}\frac {\mathrm dx}{\sqrt{1+x^4}}$ is equal to $\displaystyle \frac{\Gamma \left(\frac{1}{4}\right)^2}{4 \sqrt{\pi }}$.



It is calculated or verified with a computer algebra system that $\displaystyle \frac{\Gamma \left(\frac{1}{4}\right)^2}{4 \sqrt{\pi }} = K\left(\frac{1}{2}\right)$ , where $K(m)$ is the complete elliptic integral of the first kind. This is in relation to what is called the elliptic integral singular value.



It is also known or verified that
$\displaystyle K\left(\frac{1}{2}\right) =\displaystyle \int_0^{\frac{\pi }{2}} \frac{1}{\sqrt{1-\frac{\sin ^2(t)}{2}}} \, dt= \frac{1}{2} \int_0^{\frac{\pi }{2}} \frac{1}{\sqrt{\sin (t) \cos (t)}} \, dt$.




Can one prove directly or analytically that



$\displaystyle\int\limits_0^{\infty}\frac {\mathrm dx}{\sqrt{1+x^4}} =\frac{1}{2} \int_0^{\frac{\pi }{2}} \frac{1}{\sqrt{\sin (t) \cos (t)}} \, dt =\displaystyle \int_0^{\frac{\pi }{2}} \frac{1}{\sqrt{1-\frac{\sin ^2(t)}{2}}} \, dt = K\left(\frac{1}{2}\right) $ ?
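
The chain of equalities can at least be confirmed numerically (a sketch; note that SciPy's `ellipk(m)` uses the parameter convention, matching the $K(m)$ of the question):

```python
from math import gamma, pi, sqrt
from scipy.integrate import quad
from scipy.special import ellipk

integral, _ = quad(lambda x: 1 / sqrt(1 + x**4), 0, float("inf"))
print(integral)                          # 1.8540746773...
print(gamma(0.25)**2 / (4 * sqrt(pi)))   # 1.8540746773...
print(ellipk(0.5))                       # 1.8540746773...
```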

Sum of a complex, finite geometric series and its identity



I have the formula for summing a finite geometric series as $$1+z+z^2\cdots +z^n = \frac{1-z^{n+1}}{1-z},$$ where $z\in\mathbb{C}$ and $n=0,1,...$. I am asked to infer the identity $$1+\cos\theta+\cos 2\theta+\cdots+\cos n\theta = \frac{1}{2}+\frac{\sin(n+1/2)\theta}{2\sin\theta /2}.$$ Now, I understand that on the left hand side I'm going to get $$1+\cos\theta +\cdots + \cos n\theta + i[\sin\theta + \sin 2\theta +\cdots + \sin n\theta]$$
using $z=e^{i\theta}$ for any complex $z$. However, when I make that substitution on the right hand side, a monstrous expression occurs and I cannot simplify it down to the desired result. For instance, I get $$\frac{1-e^{i(n+1)\theta}}{1-e^{i\theta}}$$ and using identities I get $$\frac{1-\cos [(n+1)\theta]-i\sin[(n+1)\theta]}{1-\cos \theta -i\sin \theta}.$$ From here I did a whole lot of manipulating, but never got any closer to the identity asked.



If anyone could shed some light it would be greatly appreciated!
~Dom


Answer




Hint: factor out $e^{i (n+1) \theta/2}$ from the numerator and $e^{i \theta/2}$ from the denominator, then take the real part of the complex expression.
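
Spelled out, the hint gives

$$\frac{1-e^{i(n+1)\theta}}{1-e^{i\theta}}
=\frac{e^{i(n+1)\theta/2}\left(e^{-i(n+1)\theta/2}-e^{i(n+1)\theta/2}\right)}{e^{i\theta/2}\left(e^{-i\theta/2}-e^{i\theta/2}\right)}
=e^{in\theta/2}\,\frac{\sin\frac{(n+1)\theta}{2}}{\sin\frac{\theta}{2}},$$

whose real part, by the product-to-sum formula $2\cos A\sin B=\sin(A+B)-\sin(A-B)$, is

$$\frac{\cos\frac{n\theta}{2}\,\sin\frac{(n+1)\theta}{2}}{\sin\frac{\theta}{2}}
=\frac{\sin\left(n+\frac12\right)\theta+\sin\frac{\theta}{2}}{2\sin\frac{\theta}{2}}
=\frac12+\frac{\sin\left(n+\frac12\right)\theta}{2\sin\frac{\theta}{2}}.$$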


Thursday 26 October 2017

complex analysis - Why is the integral $\int_{-\infty}^\infty dx\, \frac{1}{(x-i)^2}\, \frac{1}{(x+i)^2}$ not zero?



This might be silly but I am almost pulling my hair.



Why the following integral (along the real axis) is not zero:
$$\int_{-\infty}^\infty dx \frac{1}{(x-i)^2} \frac{1}{(x+i)^2}$$



The integral has no residues and can be completed by a semicircle either in the upper complex plane or the lower complex plane. According to the residue theorem, it should be zero. (Mathematica says it's $\pi/2$)




What's wrong with my logic here?



Edit:
I was somehow given the wrong impression that higher-order poles do not contribute to the residue.


Answer



Oh, but the residue theorem does give a nonzero value. There are second-order poles at $\pm i$, and the integrand's residue at such a pole is $$\lim_{x\to\pm i}\frac{d}{dx}\frac{1}{(x\pm i)^2}=\frac{-2}{(\pm 2i)^3}=\mp\frac{i}{4}.$$As I think your only mistake was in misunderstanding how higher-order poles provide residues, I'll leave it to you to understand why we can calculate the integral from the pole at $i$ alone, viz.$$\int_{\Bbb R}\frac{dx}{(x^2+1)^2}=2\pi i\times\frac{-i}{4}=\frac{\pi}{2},$$just as your software said. (It can also be proven with $x=\tan t$.)



Edit: let me motivate the way higher-order poles work. Suppose $f(c)$ is neither $0$ nor infinite. Then only the $(z-c)^{n-1}$ term of the Taylor expansion of $f$ about $c$ survives the integration: $$\oint\frac{f(z)}{(z-c)^n}dz=\oint\frac{f^{(n-1)}(c)}{(n-1)!(z-c)}dz=2\pi i\cdot\lim_{z\to c}\frac{f^{(n-1)}(z)}{(n-1)!}.$$
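
A quick symbolic check of the residue computation (a sketch using SymPy):

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / ((z - sp.I)**2 * (z + sp.I)**2)
res = sp.residue(f, z, sp.I)     # residue at the second-order pole z = i
print(res)                       # -I/4
print(2 * sp.pi * sp.I * res)    # pi/2, the value of the real integral
```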


real analysis - Understanding expansion on base $k$

There are some times when one might need to use expansion of real numbers on some base $k$. One example is when dealing with Cantor's set, one uses expansion of the numbers inside $[0,1]$ on base $3$. The point is that I simply can't understand these expansions. I'm asking here on the more general context of expansion on base $k$ in order to try to make the question more useful.




As I understood, expansion of a number $a\in \mathbb{R}$ in base 3 means to write



$$a = \sum_{n=1}^\infty \dfrac{a_n}{3^n} \quad a_n \in \{0,1,2\}.$$



In that setting, I imagine expansion of $a$ in base $k$ would mean to write



$$a = \sum_{n=1}^\infty \dfrac{a_n}{k^n} \quad a_n \in \{0,1,\dots, k-1\}.$$



Now, what do those expansions really mean? I simply can't understand; we are decomposing numbers as certain series, but what do those series really mean? Why would anyone consider doing these expansions? What are the coefficients $a_n$?




I believe this is related to decimal expansions, that is when we write a number $a = a_0.a_1a_2a_3\dots$ but I'm unsure how to make this connection rigorous. Also, I believe this would be true just for $k=10$, so for the other cases it would still be something hard to grasp.



In truth, I've seen this used quite a few times in proofs, the Cantor set being the most well-known example. But up to now I never understood correctly what these expansions are and how to work with them.
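
Concretely, the coefficients $a_n$ come out of the usual long-division process: multiply by $k$, peel off the integer part, repeat. A small sketch (the function name is mine):

```python
# First digits a_1, a_2, ... of a in [0,1) in base k, so that a = sum a_n / k^n.
def base_k_digits(a, k, n_terms=8):
    digits = []
    for _ in range(n_terms):
        a *= k
        d = int(a)          # the next digit a_n
        digits.append(d)
        a -= d
    return digits

print(base_k_digits(0.25, 3))   # [0, 2, 0, 2, 0, 2, 0, 2]: 1/4 = 0.020202..._3
```

For $k=10$ this is exactly the decimal expansion $a = 0.a_1a_2a_3\dots$; the Cantor set consists of the numbers in $[0,1]$ admitting a base-$3$ expansion whose digits avoid $1$.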

real analysis - Questions on the integration $int_0^infty e^{-(x^2+a^2/x^2)}dx$





Let $F(a):=\int_0^\infty e^{-(x^2+a^2/x^2)}dx$ with $a>0$. My questions are as follows:




(1) Calculate $\lim_{a \to 0^+}F(a)$.




(2) Show that $F'(a)=-2F(a)$.



(3) Calculate $F(a)$.




It seems to me that the well-known Gaussian integral $\int_0^\infty e^{-x^2}dx=\sqrt \pi/2$ plays some role here. However, if I set $t=x+a/x$ then $dt=(1-a/x^2)dx$. I'm stuck in finding a function $f$ so that $1-a/x^2=f(t).$ For part (2), I don't quite understand how to find the derivative of $F(a)$. It seems that we need to find the closed-form of $F(a)$ first. Assume that I am given part (2) to solve part (3). Is it true that $F(a)=e^{-2a+\ln(\sqrt \pi/2)}$?



Any help would be much appreciated.


Answer



Consider

\begin{align}
x^{2} + \frac{a^2}{x^{2}} = \left( x - \frac{a}{x} \right)^{2} +2a
\end{align}
for which
\begin{align}
I = \int_{0}^{\infty} e^{-\left(x^{2} + \frac{a^2}{x^{2}}\right)} \, dx = e^{-2a} \, \int_{0}^{\infty} e^{-\left(x - \frac{a}{x}\right)^{2}} \, dx.
\end{align}
Now make the substitution $t=\frac{a}{x}$ to obtain
\begin{align}
e^{2a} I = a\int_{0}^{\infty} e^{- \left( t - \frac{a}{t} \right)^{2}} \, \frac{dt}{t^{2}}.

\end{align}
Adding the two integral forms leads to
\begin{align}
2 e^{2a} I = \int_{0}^{\infty} e^{- \left( t - \frac{a}{t} \right)^{2}} \left(1 + \frac{a}{t^{2}} \right) \, dt = \int_{-\infty}^{\infty} e^{- u^{2}} \, du = 2 \int_{0}^{\infty} e^{- u^{2}} \, du = \sqrt{\pi},
\end{align}
where the substitution $u = t - \frac{a}{t}$ was made. It is now seen that
\begin{align}
\int_{0}^{\infty} e^{-\left(x^{2} + \frac{a^2}{x^{2}}\right)} \, dx = \frac{\sqrt{\pi}}{2}e^{-2a}.
\end{align}




Note: Above is a modified version of Evaluate $\int_{0}^{\infty} \mathrm{e}^{-x^2-x^{-2}}\, dx$ 's accepted answer.



Answers :



(1) $\lim_{a \to 0^+}F(a) = \lim_{a \to 0^+}\frac{\sqrt{\pi}}{2}e^{-2a} = \frac{\sqrt{\pi}}{2}.$



(2) $F'(a)=\frac{\sqrt{\pi}}{2}e^{-2a}(-2)=-2F(a).$



(3) $F(a)=\frac{\sqrt{\pi}}{2}e^{-2a}$.
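
A numerical spot check of the closed form (a sketch):

```python
from math import exp, pi, sqrt
from scipy.integrate import quad

a = 1.0
F, _ = quad(lambda x: exp(-(x**2 + a**2 / x**2)), 0, float("inf"))
print(F)                            # 0.1199377...
print(sqrt(pi) / 2 * exp(-2 * a))   # 0.1199377...
```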


complex analysis - Cauchy Principal Value of $\int_{-\infty}^{\infty} \dfrac{\sin x}{x(x^2+1)}\,\mathrm dx$




How to evaluate this integral
$$
\int_{-\infty}^\infty\frac{\sin x}{x(x^2 + 1)}dx
$$

I am having a problem to solve this because of two poles when I solve it by integration first from $-R$ to $R$ and then along a semicircle in the upper half plane.


Answer



According to your choice of the contour: $C$ is the upper half-plane and $\Gamma$ is the semicircular arc of radius $R$ (say).



$$\int_C \frac{\sin z}{z(z^2+1)}\mathrm dz=\text{P.V.} \int_{-\infty}^{+\infty} \frac{\sin x}{x(x^2+1)}\mathrm dx+\lim_{R \to \infty}\int_{\Gamma}\frac{\sin z}{z(z^2+1)}\mathrm dz$$




Now, use the fact that if $f(z)=\frac{g(z)}{h(z)}$ where $g$ and $h$ are analytic near $z_0$ and $h$ has a simple zero at $z_0$, then $\text{Res}(f(z), z_0) = \frac{g(z_0)}{h'(z_0)}$. Note that $g$ is $\sin z$ and $h$ is $z(z^2+1)$. For the evaluation of the integral over the arc $\Gamma$, use the ML-inequality and you should be good to go.


linear algebra - Eigenvalue decomposition of $A = I - xx^T$





Let $A = I - xx^T$, where $x \in \mathbb{R}^n$ and $I$ is the identity matrix of $\mathbb{R}^n$



We know that $A$ is a real symmetric matrix, therefore there exists an eigenvalue decomposition of $A$ such that



$$A = Q^T\Lambda Q$$




Is it possible to find $Q$, $\Lambda$?



$I - xx^T = Q^TQ - xQ^TQx^T = Q^TQ - (Q^Tx)^T(x^TQ)^T...$


Answer



Consider
$$
Ax=(I-xx')x=(1-x'x)x
$$
so $x$ itself is an eigenvector with eigenvalue $1-x'x$. In fact, if $v$ is an eigenvector with some eigenvalue $\alpha$, we have

$$
\alpha v=Av=(I-xx')v=v-(x'v)x\implies(1-\alpha)v=(x'v)x.
$$
This means if $\alpha\neq 1$, then $v$ is proportional to $x$ so in fact $v$ is an eigenvector with eigenvalue $1-x'x$. If $\alpha=1$, then $x'v=0$. Conversely, if $v'x=0$, then $v$ is an eigenvector with eigenvalue $1$:
$$
Av=(I-xx')v=v-(x'v)x=v.
$$
Conclusion: $I-xx'$ has eigenvalues $1-x'x$ and $1$ where $1$ has multiplicity $n-1$. The eigenvectors for $1-x'x$ are parallel to $x$ and the eigenvectors of $1$ are any vector in the space orthogonal to the space spanned by $x$. So you can take $Q'=\Big(\frac{x}{|x|}\;r_1\;\cdots\;r_{n-1}\Big)$ where each $r_i$ is $n\times 1$ and $\{r_1,\ldots,r_{n-1}\}$ is some orthonormal basis of $[\text{span}(x)]^\perp$.
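
A quick numerical confirmation of the spectrum (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
A = np.eye(4) - np.outer(x, x)           # A = I - x x'
print(np.sort(np.linalg.eigvalsh(A)))    # 1 - x'x once, then 1 three times
print(1 - x @ x)
```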


Finding roots of complex number




The problem is specific as an example from hw. But it is more the concept/process I could use clarification on. Given a complex number



$$\Big(\frac{-2}{1-i\sqrt3}\Big)^{\frac{1}{4}}$$



Find all possible roots. I know the method is to change into the exponential form, solving for magnitude (r) and theta. Which I did and got $e^{i4\pi/3}$. Do I then multiply theta by $4$, or $1/4$ and then add $\pi/2$ ? By either multiplying or dividing I get the same 4 roots, only with different starting points. But, if all I want are the roots, does it matter what the starting point is?


Answer



In general, we can write



$$z^c=e^{c\log(z)}$$




where the complex logarithm is the multi-valued function and can be written as



$$\log(z)=\log(|z|)+i(\arg(z)+2n\pi)$$



for all integer values of $n$.



For the specific problem in the OP, $z=\frac{-2}{1-i\sqrt 3}=\frac{2e^{-i\pi}}{2e^{-i\pi/3}}=e^{-i2\pi/3}$ and $c=1/4$. Therefore, we have



$$\begin{align}
\left(\frac{-2}{1-i\sqrt 3}\right)^{1/4}&=e^{\frac14\log(1)+i\frac14(-2\pi/3+2n\pi)}\\\\
&=e^{i(-\pi/6+n\pi/2)}\\\\
&=(i)^ne^{-i\pi/6}\\\\
&=
\begin{cases}
ie^{-i\pi/6}&,n=1\\\\
-e^{-i\pi/6}&,n=2\\\\
-ie^{-i\pi/6}&,n=3\\\\
e^{-i\pi/6}&,n=4
\end{cases}
\end{align}$$



real analysis - Calculating $\lim_{n \rightarrow \infty} \int_{[0, \infty)} \frac{\sin(e^x)}{1+nx^2}\,dx$



I want to calculate the limit of following Lebesgue-integral:



$$ \lim_{n \rightarrow \infty} \int_{[0, \infty)} \frac{\sin(e^x) }{1+nx^2}\,\mathrm dx$$



Therefore I wanted to apply Lebesgue's dominated convergence theorem.
$f_n(x)$ is measurable and $ f_n \rightarrow 0$ pointwise. Now it holds:




$$ \left|\frac{\sin(e^x) }{1+nx^2}\right| \leq \frac{1}{1+x^2} :=g(x) $$



The improper integral of $g$ over $[0,\infty)$ does converge. That means $g$ is Lebesgue integrable.
Therefore
$$ \lim_{n \rightarrow \infty} \int_{[0, \infty)} \frac{\sin(e^x) }{1+nx^2}\,\mathrm dx = \int_{[0, \infty)} \lim_{n \rightarrow \infty} \frac{\sin(e^x) }{1+nx^2}\,\mathrm dx =0$$
But consider: $f_n(0) = \sin 1$ does not converge to $0$. So I can't apply the theorem, can I?


Answer



You are on the right track: apply Lebesgue's dominated convergence with $g(x)=\frac{1}{1+x^2}$ which is Lebesgue integrable in $[0,+\infty)$. Since $f_n(x)=\frac{\sin(e^x) }{1+nx^2}\to 0$ for all $x>0$ the sequence $(f_n)_n$ converges to zero almost everywhere on $[0,+\infty)$, that's enough for dominated convergence, and we may conclude that the limit of $\int_0^{\infty} f_n(x)\,dx$ is zero.



Alternative way (without dominated convergence):

$$\begin{align}\left|\int_{[0, \infty)} \frac{\sin(e^x) }{1+nx^2}\, dx \right|&\leq \int_0^{+\infty} \frac{|\sin(e^x) |}{1+nx^2}\, dx\\
&\leq \int_0^{+\infty} \frac{dx}{1+nx^2}=\left[\frac{\arctan(\sqrt{n}x)}{\sqrt{n}}\right]_0^{+\infty}=\frac{\pi}{2\sqrt{n}}.\end{align}$$

So, again, the limit as $n\to\infty$ is zero.


Wednesday 25 October 2017

complex analysis - Calculating winding number



Let
$$
\begin{align}
\gamma= \gamma_1 +\gamma_2+\gamma_3,\\

\gamma_1(t)=e^{it}, t\in[0,2\pi] \\
\gamma_2(t)=-1+2e^{-2it}, t\in [0,2\pi]\\
\gamma_3(t)=1-i+e^{it},t\in [\frac{\pi}{2},\frac{9\pi}{2}]
\end{align}
$$



Calculate the value of $n(\gamma,z)$ as $z$ takes its value in $\mathbb{C}\backslash\gamma.$



$\gamma$ is the image of the closed curve.




as $\gamma$ is closed and piecewise smooth, we have that



$$
n(\gamma,z)= \frac{1}{2\pi i}\int\limits_\gamma\frac{dc}{c-z} = \frac{1}{2\pi i} \left[ \int\limits_{\gamma_1}\frac{dc}{c-z}+ \int\limits_{\gamma_2}\frac{dc}{c-z} + \int\limits_{\gamma_3}\frac{dc}{c-z}\right]
$$



as $\gamma_1'(t) = ie^{it}$, $\gamma_2'(t) = -4ie^{-2it}$ and $\gamma_3'(t) = ie^{it}$



I can compute those integrals as




$$
n(\gamma,z)=\frac{1}{2\pi i} \left[ \int\limits_{0}^{2\pi}\frac{ie^{it}dt}{e^{it}-z}+ \int\limits_{0}^{2\pi}\frac{-4ie^{-2it}dt}{-1+2e^{-2it}-z} + \int\limits_{\pi /2}^{9\pi /2}\frac{ie^{it}}{1-i+e^{it}-z}\right]
$$



but when I compute them for any point I am finding $0$; did I make any mistake, or am I making one when computing those integrals?


Answer



The value depends on $z$. Note that $\gamma_1$ runs once counterclockwise around the circle with center $0$ and radius $1$, $\gamma_2$ runs twice clockwise around the circle with center $-1$ and radius $2$, $\gamma_3$ runs twice counterclockwise around the circle with center $1-i$ and radius $1$. Let $D_i$ be the open disk bounded by $\gamma_i$. Note that $D_1 \subset D_2$. Drawing a picture is helpful. For a point $z \in \mathbb C \setminus \gamma$ we get $n(\gamma,z) = n(\gamma_1,z) + n(\gamma_2,z) + n(\gamma_3,z)$. We have $n(\gamma_1,z) = 1$ for $z \in D_1$, $n(\gamma_1,z) = 0$ for $z \notin D_1$, $n(\gamma_2,z) = -2$ for $z \in D_2$, $n(\gamma_2,z) = 0$ for $z \notin D_2$, $n(\gamma_3,z) = 2$ for $z \in D_3$, $n(\gamma_3,z) = 0$ for $z \notin D_3$. Thus:




  1. If $z \in D_1 \cap D_3$, then $n(\gamma,z) = 1 -2 + 2 = 1$.



  2. If $z \in D_1 \setminus D_3$, then $n(\gamma,z) = 1- 2 + 0 = -1$.


  3. If $z \in D_2 \setminus (D_1 \cup D_3)$, then $n(\gamma,z) = 0 -2 + 0 = -2$.


  4. If $z \in (D_2 \cap D_3) \setminus D_1 $, then $n(\gamma,z) = 0 -2 + 2 = 0$.


  5. If $z \in D_3 \setminus D_2$, then $n(\gamma,z) = 0 + 0 + 2 = 2$.


  6. If $z \notin D_1 \cup D_2 \cup D_3$, then $n(\gamma,z) = 0 + 0 + 0 = 0$.
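
These values are easy to double-check by discretizing the defining integral (a sketch; the test point $z=\tfrac{i}{2}$ lies in $D_1\setminus D_3$, so case 2 predicts $-1$):

```python
import numpy as np

# n(gamma, z0) = (1 / 2 pi i) * integral of dz / (z - z0), one piece at a time.
def winding(z0, pieces, samples=20000):
    total = 0j
    for f, (t0, t1) in pieces:
        g = f(np.linspace(t0, t1, samples))
        total += np.sum(np.diff(g) / (0.5 * (g[:-1] + g[1:]) - z0))
    return (total / (2j * np.pi)).real

pieces = [
    (lambda t: np.exp(1j * t),           (0, 2 * np.pi)),
    (lambda t: -1 + 2 * np.exp(-2j * t), (0, 2 * np.pi)),
    (lambda t: 1 - 1j + np.exp(1j * t),  (np.pi / 2, 9 * np.pi / 2)),
]
print(round(winding(0.5j, pieces)))      # -1, matching case 2
```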



real analysis - Proving that if $f$ is measurable with $f(x+y)= f(x)+f(y)$ then $f(x) =Ax$ for some $A\in\Bbb R$




How can one show, with the given hints, that if $f$ is measurable with $f(x+y)= f(x)+f(y)$, then $f(x) =Ax$ for some $A\in\Bbb R$?




The following exercise is from the Analysis book by Elliott Lieb (second edition, page 76):





Assume $f:\Bbb R\to \Bbb R$ is measurable such that $f(x+y)= f(x)+f(y).$ Prove that $f(x) =Ax$ for some $A\in\Bbb R.$



Hints:
a)Prove the result when $f$ is continuous.



b) Next: if $f$ is not continuous, consider $f_\varepsilon(x) =\exp(if) \star j_\varepsilon(x)$, where $j_\varepsilon(x) =\varepsilon j(\varepsilon x)$ is a mollifier sequence. That is, $j$ is smooth of compact support with $$\int_\Bbb R j(x)\, dx = 1.$$




I was able to solve the first question in the hint. But I don't know how to use hint (b). Would anyone help?




Also, I would like to know how to compute
$$\lim_{\varepsilon \to 0 } f_\varepsilon(x)=?$$
I failed to use the Dominated Convergence Theorem.



This question has been asked in one of my previous questions:
Let $g:\mathbb{R}\to\mathbb{R}$ be a measurable function such that $g(x+y) =g(x)+g(y).$ Then $g(x) = g(1)x$ .



But the solution there does not use the hints in this post.


Answer




Let's write $g \colon x \mapsto \exp\bigl(if(x)\bigr)$ and $g_{\varepsilon} = g \ast j_{\varepsilon}$. By standard theory, we know that $g_{\varepsilon}$ is continuous for every $\varepsilon > 0$. Also we know that for all $x,y\in \mathbb{R}$ we have



$$g_{\varepsilon}(x+y) = \int_{\mathbb{R}} g(x+y-t) j_{\varepsilon}(t)\,dt = \int_{\mathbb{R}} g(x)\cdot g(y-t)j_{\varepsilon}(t)\,dt = g(x)\cdot g_{\varepsilon}(y).$$



Thus, if $g_{\varepsilon}(y) \neq 0$ we have



$$g(x) = \frac{g_{\varepsilon}(x+y)}{g_{\varepsilon}(y)}$$



for all $x$, hence the continuity of $g$. This implies $g(x) = \exp(icx)$ for some $c\in \mathbb{R}$, and hence




$$f(x) = cx + h(x),$$



where $h \colon \mathbb{R} \to 2\pi \mathbb{Z} \subset \mathbb{R}$ is additive. That implies $h \equiv 0$: by additivity $h(x) = n\,h(x/n) \in 2\pi n\mathbb{Z}$ for every $n \in \mathbb{N}$, so $h(x)$ is a multiple of arbitrarily large numbers and must be $0$.



So all that remains is to show that for suitable $y$ and $\varepsilon$ we have $g_{\varepsilon}(y) \neq 0$. That follows since $g_{\varepsilon} \xrightarrow{\varepsilon \downarrow 0} g$ in $L^1_{\text{loc}}(\mathbb{R})$, so there is a sequence $\varepsilon_k \to 0$ such that $g_{\varepsilon_k} \to g$ pointwise almost everywhere.


fake proofs - imaginary number $i$ equals $-6/3.4641$?

$$(-4)^3 = -64$$
so the third root of $-64$ should be $-4$ then.
$$\sqrt[3]{-64} = -4$$
But if you calculate the third root of -64
with WolframAlpha( http://www.wolframalpha.com/input/?i=third+root+of+-64 )
you get a complex number with an imaginary part of $$3.4641016151 i$$ and a real part of $$2$$



so if the third root of $-64$ equals $-4$ AND $2 + 3.46410162 i$ (which I know is a bit foolish) then you could actually reform it like this
$$
\sqrt[3]{-64} \approx 2 + 3.46410162 i \quad |\ {-2}$$
$$
\sqrt[3]{-64} -2 \approx -6 \approx 3.46410162 i \quad |\ {/3.46410162}$$
$$
\frac{\sqrt[3]{-64} -2}{3.46410162} \approx \frac{-6}{3.46410162} \approx i$$



and this has to be totally wrong, so my question is: where exactly is the mistake?
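
For reference: $-64$ has three distinct cube roots, $-4$ and $2\pm 2\sqrt3\,i$ (note $2\sqrt 3 \approx 3.46410162$), and WolframAlpha reports the principal one; setting two different roots equal to each other is where the manipulation goes astray. A quick check:

```python
import numpy as np

# The three cube roots of -64 are the roots of z^3 + 64 = 0:
print(np.roots([1, 0, 0, 64]))
# [-4.+0.j  2.+3.46410162j  2.-3.46410162j]  (up to ordering)
```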

sequences and series - Blocks of Pyramid Pattern Expression





There is a pattern following, and trying to find the algebraic expression



Each layer (from the top).



Diagram.




[diagram: a block pyramid whose layers, from the top, contain 1, 4, 9, and 16 blocks]



So the first layer has 1, second has 4, third has 9, and the fourth has 16.



That's how the sequence is increasing.



What I'm looking for is,



When the second layer is added with the first layer,




Third layer is added with the second and first,



Fourth is added with third,second and first.



So something like this.



[diagrams: the cumulative block totals after each layer — 1, 5, 14, 30, …]




I am trying to find the algebraic expression for this pattern.



Any ideas??



Thank you


Answer



There is a well-known formula for the sum of the first $n$ squares, but I don't want to spoil your investigation, so I will give you some hints.



First, compute some more terms of the sequence. Three or four more should do.




Multiply all the terms by six, and factor the results. Notice that all of them are multiples of $n$ and $n+1$.


combinatorics - What is the probability of getting each side at least once in $n$ throws of fair dice?

Let's say I have $n$ fair (six-sided) dice ($n \ge 6$) and I want to figure out the probability of getting each side at least once. I figured out a formula through elementary combinatorial reasoning: $$\frac{6!\cdot\binom{n}{6}\cdot6^{n-6}}{6^n}$$



First of all, is this formula correct?



Second, somewhere I read that I could do this with the sieve formula. How do I use the sieve formula for this one?
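
For what it's worth, a sketch comparing the proposed formula with the sieve (inclusion–exclusion) count of surjections: the two agree at $n=6$ but part ways for $n>6$ (the proposed formula even exceeds $1$ for larger $n$, since each qualifying outcome is counted once for every way of designating six positions showing the six faces):

```python
from math import comb

def p_sieve(n):
    # P(every face appears) = sum_j (-1)^j C(6,j) (6-j)^n / 6^n
    return sum((-1)**j * comb(6, j) * (6 - j)**n for j in range(7)) / 6**n

def p_proposed(n):
    return 720 * comb(n, 6) * 6**(n - 6) / 6**n

for n in (6, 7, 10):
    print(n, p_sieve(n), p_proposed(n))
# 6: 0.0154 vs 0.0154;  7: 0.0540 vs 0.1080;  10: 0.2718 vs 3.24 (> 1!)
```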

Tuesday 24 October 2017

calculus - What is the sum of the series $\sum\limits_{k=1}^\infty \frac{1}{k^2}$?







What is $\lim \limits_{n\to\infty} \sum\limits_{k=1}^n \frac{1}{k^2}$ as an exact value?
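
For reference, this is the Basel problem, with the well-known value $\pi^2/6$; numerically the partial sums creep up to it:

```python
from math import pi

print(sum(1 / k**2 for k in range(1, 10**6 + 1)))   # 1.6449330668...
print(pi**2 / 6)                                    # 1.6449340668...
```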

real analysis - Prove that if $sum{a_n}$ converges absolutely, then $sum{a_n^2}$ converges absolutely



I'm trying to re-learn my undergrad math, and I'm using Stephen Abbot's Understanding Analysis. In section 2.7, he has the following exercise:



Exercise 2.7.5 (a) Show that if $\sum{a_n}$ converges absolutely, then $\sum{a_n^2}$ also converges absolutely. Does this proposition hold without absolute convergence?



I'm posting about this here because my answer to that last question about whether the proposition holds without absolute convergence is "Yes", but my suspicions are raised by him merely asking the question. Usually questions like this are asked to point out that certain conditions are necessary in the statement of propositions, theorems, etc. I just want to see if I'm missing something here.



Anyway, here's how I prove the absolute convergence of $\sum{a_n^2}$, and note that I never use the fact that $\sum{a_n}$ is absolutely convergent:




Proof: Let $s_n = \sum_{i=1}^n{a_i^2}$. I want to show that $(s_n)$ is a Cauchy sequence. So, let $\epsilon > 0$ and $n > m$, and consider,
\begin{equation}
\begin{aligned}
|s_n - s_m| &= |(a_1^2 + a_2^2 + \cdots a_n^2) - (a_1^2 + a_2^2 + \cdots a_m^2)|\\
&= |(a_{m+1})^2 + (a_{m+2})^2 + \cdots + a_n^2|\\
&= |a_{m+1}|^2 + |a_{m+2}|^2 + \cdots + |a_n|^2\\
\end{aligned}
\end{equation}
Now, since $\sum{a_m}$ converges, the sequence $(a_m)$ has limit 0. Therefore I can choose an $N$ such that $|a_m| < \sqrt{\epsilon/n}$ for all $m > N$. So for all $n > m> N$, we have,

\begin{equation}
\begin{aligned}
|s_n - s_m| &= |a_{m+1}|^2 + |a_{m+2}|^2 + \cdots + |a_n|^2\\
&< \left(\sqrt{\frac{\epsilon}{n}}\right)^2 + \left(\sqrt{\frac{\epsilon}{n}}\right)^2 + \cdots + \left(\sqrt{\frac{\epsilon}{n}}\right)^2\\
&< \left(\sqrt{\frac{\epsilon}{n-m}}\right)^2 + \left(\sqrt{\frac{\epsilon}{n-m}}\right)^2 + \cdots + \left(\sqrt{\frac{\epsilon}{n-m}}\right)^2\\
&= \epsilon
\end{aligned}
\end{equation}
Therefore $(s_n)$ is Cauchy and $\sum{a_n^2}$ converges. And since $\sum{a_n^2} = \sum{|a_n^2|}$, the series $\sum{a_n^2}$ is absolutely convergent. ∎




Am I doing something wrong here? Am I right in thinking that $\sum{a_n}$ need not be absolutely convergent for $\sum{a_n^2}$ to be absolutely convergent?


Answer



The problem with your proof as written is that $n$ is arbitrary, so when you choose $N$ you are not choosing a fixed value - you need a different $N$ for each $n$.



You could approach this by noting that there is an $N$ with $|a_m|\lt 1$ for $m\gt N$, and therefore $|a_m|^2\lt |a_m|$, which sets up a comparison using the absolute convergence.


Monday 23 October 2017

trigonometry - solve $\,\cos\left(\,2\arctan\left(1/2\right)\,\right)\,$

How would I go on about solving:
$\,\quad\cos\Big(\,2\cdot \arctan\big(\frac{1}{2}\big)\Big)$ ?



My attempt was to find $\,v\,$ in $\,\frac{\sin\left(v\right)}{\cos\left(v\right)} = \frac{1}{2}\,$ and substitute that with $\,\arctan\left(\frac{1}{2}\right),\,$ but I end up with $\,\sin\left(v\right) = \sqrt{\frac{1}{5}}\,$ which I can't solve.



Is there any other way to approach the problem?
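
One standard route, for reference: the double-angle identity $\cos 2v=\frac{1-\tan^2 v}{1+\tan^2 v}$ avoids solving for $v$ altogether. With $\tan v = \frac12$,

$$\cos\Big(2\arctan\tfrac12\Big)=\frac{1-\left(\tfrac12\right)^2}{1+\left(\tfrac12\right)^2}=\frac{3/4}{5/4}=\frac35.$$

(Equivalently, from $\sin v = \sqrt{1/5}$ one gets $\cos 2v = 1-2\sin^2 v = 1-\tfrac25 = \tfrac35$.)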

Sunday 22 October 2017

permutations - How many factors of $10$ in $100!$











How many factors of 10 are there in $100!$ (IIT Question)?



Is it $26$, $25$, $24$, or some other value?



Please tell how you have done it



Answer



So first, we want to find how many factors of $5$ there are in $100!$. There are $20$ numbers divisible by $5$ from $1$ to $100$, so we start off our count at $20$. Then, we count how many numbers are divisble by $5^2$. There are four: $25, 50, 75, 100$, and so we add four to our count to get $24$ factors of $5$. (Note that we don't add eight fives - if we did so, we would be counting the first factors of five twice!)



Since $5^3>100$, we don't have to worry about third powers of five. There are at least $100/2=50$ factors of $2$ in $100!$, but we're only going to use $24$ of them to get our $24$ multiples of $10$, so we don't have to calculate the exact number of factors of $2$ in $100!$.



So basic method: To find how many factors of $a$ there are in $b!$, first decompose $a$ into its primes $p_n$, and then find out how many factors of each prime $p_n$ are in $b!$, by using the method I described of checking for divisibility by $p_n$, then $p_n^2$, etc. Then, from this pool of factors, figure out how many copies of $a$ you can assemble. In our example, to make the most factors of $10$ we could take a maximum of $24$ fives and $24$ twos. If we wanted to find how many factors of $40$ ($=2^3\cdot 5$) there are in $100!$, we would have needed to find out exactly how many factors of $2$ are in $100!$, and then either take $24\cdot 3$ twos if there are enough, or fewer, if there aren't.



See also: youtube Factors of Factorials Part 1
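
The prime-counting step is Legendre's formula, $v_p(n!)=\sum_{k\ge 1}\lfloor n/p^k\rfloor$; a short sketch:

```python
# Exponent of the prime p in n!, by Legendre's formula.
def prime_exponent(n, p):
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

print(prime_exponent(100, 5))   # 24 = floor(100/5) + floor(100/25)
print(prime_exponent(100, 2))   # 97, so the fives are the bottleneck for tens
```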


integration - When does a function NOT have an antiderivative?




I know this question may sound naïve, but why can't we write $\int e^{x^2} dx$ as $\int e^{2x} dx$? The former has no elementary antiderivative, while the latter has one.




In light of this question, what are sufficient conditions for a function NOT to have an elementary antiderivative? That is, do we need a careful examination of a function to say it does not have one, or is there any way that, once you see the function, you can right away say it does not have an elementary antiderivative?


Answer



As you might have realised, exponentiation is not associative:



$$\left(a^b\right)^c \ne a^\left(b^c\right)$$



So what should $a^{b^c}$ mean? The convention is that exponentiation is right associative:



$$a^{b^c} = a^\left(b^c\right)$$




Because the otherwise left-associative exponentiation is just less useful and redundant, as it can be represented by multiplication inside the power (again as you might have realised):



$$a^{bc} = \left(a^b\right)^c$$



Wikipedia on associativity of exponentiation.


elementary number theory - Euclid Algorithm to Find Muliplicative Inverse




Here I am trying to find the multiplicative inverse of 19 respect to 29.



$$19x \equiv 1 \pmod{29} $$



What I tried



\begin{align*}
29 &= 1(19) + 10\\\
19 &= 1(10) + 9\\\

10 &= 1(9) + 1.
\end{align*}



From backtracking, I came up with the



\begin{align*}
1 &= 2(29) - 3(19)\\\
\end{align*}



However, $3$ is not a multiplicative inverse of $19$ modulo $29$. Where am I making a mistake?




I looked at many answers, including this answer; however, I couldn't figure out my mistake.


Answer



What you have found indeed is that $-3\equiv 26$ is the multiplicative inverse of $19$ $\mod 29$.
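
For reference, a sketch of the extended Euclidean computation in code; it reproduces the coefficients $2$ and $-3$ found by backtracking, and Python's built-in `pow(b, -1, m)` (3.8+) confirms the reduced inverse:

```python
# Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(19, 29)
print(g, x, y)            # 1 -3 2, i.e. 19*(-3) + 29*2 = 1
print(x % 29)             # 26, the inverse of 19 mod 29
print(pow(19, -1, 29))    # 26
```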


integration - Rigorous definition of differentials in the context of integrals.




When using the substitution rule in the integration of an integral
$\displaystyle \int f(x)\,\mathrm dx$, one turns the integral from the form $$\displaystyle \int f(x(t))\,\mathrm dx \quad(*)$$ into the form $$\int f(x(t))\,\frac{\mathrm dx}{\mathrm dt}\mathrm dt \quad(**).$$ This transform is usually accomplished by means of differentiating the substitution $x = x(t)$, such that $\dfrac{\mathrm dx}{\mathrm dt} = \dfrac{\mathrm d x(t)}{\mathrm dt}$. Now, at this point, one turns this into a differential form by means of magic, s.t. $\mathrm dx = \dfrac{\mathrm dx(t)}{\mathrm dt}\mathrm dt$. This now substitutes the differential term $\mathrm dx$ in the original expression $(*)$ with the one in the transformed expression $(**)$.




I'd like to learn that magic step a bit more rigorously – so that I can better understand it. It is often explained by "multiplication" by $\mathrm dt$, which does make sense, but it does not explain the nature of differentials; when is "multiplication" allowed? It seems there should be a more rigorous way of explaining it, perhaps by defining the "multiplication".



So, in what ways can differentials like $\mathrm dx$ and $\mathrm dt$ be formalized in this context? I've seen them being compared to small numbers, which often work, but can this analogy fail? (And what are the prerequisites needed to understand them?)


Answer



Here's one way:



Consider $x$ and $t$ are coordinate systems of $\mathbb{R}$. If we wish to change coordinate systems, we have to look at how they transform into one another. If we consider $t$ to be a reference coordinate system and let the coordinate transformation be defined as $x(t) = 2t$ then for any $t$ element, $x$ is twice that (under $x$ view).



Now, since $(\mathbb{R}, + , \cdot)$ is a vector space, it has a dual $\mathbb{R}^*$. Using this space, we can start defining the elements $dx, dt$. Specifically, $dt$ will be a basis for $\mathbb{R}^*$ if $t$ is the basis vector for $\mathbb{R}$ . The elements of the dual space are called 1-forms. 1-forms of $\mathbb{R}^*$ "eat" vector elements of $\mathbb{R}$ and return a measure along that direction (only 1 dimension, so one direction). In this case you can consider elements of $\mathbb{R}^*$ as "row" vectors and multiply column vectors in $\mathbb{R}$ (which is the dot product of two vectors).




We can define a different basis for $\mathbb{R}$ and $\mathbb{R}^*$ with a coordinate change. For this example, if $dt$ eats a one dimensional vector $a$, it will return $a$. But when $dx$ eats $a$ it returns $2a$ in the $t$ coordinate system. That is $dx = 2dt$. For a general coordinate transform, a 1-form can be describe by $dx = \frac{dx}{dt} dt$.



This provides us with a way to talk about $dx$ and $dt$ meaningfully. Since $f: \mathbb{R} \to \mathbb{R}$ then $f(x)dx$ is $dx$ "eating" the vector $f(x)$ with regards to the $x$ coordinate system. Sometimes $f$ is easier to think of in a different coordinate system and so we wish to change it. $f(x)$ then becomes $f(x(t))$ and $dx$ becomes $\frac{dx}{dt}dt$. Now $dt$ is eating vectors $f(x(t))$ in its own coordinate system.



Consider how a uniformly subdivided interval $(a,b)$ looks in a new coordinate system.
For example, $\{(0,\frac{1}{2}), (\frac{1}{2},1), (1,\frac{3}{2})\}$ in $t$ looks like $\{(0,1), (1,2), (2,3)\}$ in $x$ under the example coordinate transform. $\frac{dx}{dt}$ tells us precisely how the intervals change under our transformation.


combinatorics - Prove $sum_{k=0}^{n-2}{n-k choose 2} = {n+1 choose 3}$




Prove
$$\sum_{k=0}^{n-2}{n-k \choose 2} = {n+1 \choose 3}$$




Is there a relation I can use that easily yields above equation?


Answer



$$\sum_{k=0}^{n-2}\frac{k^2+k(1-2n)+n^2-n}{2}\tag{Simplify what @dixv said}$$

now, $$\sum_{k=1}^nk=\frac{n(n+1)}{2}\text{ and }\sum_{k=1}^nk^2=\frac{n(n+1)(2n+1)}{6}$$



Hope it helps?
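
A brute-force check of the identity for small $n$ (a sketch):

```python
from math import comb

for n in range(2, 9):
    lhs = sum(comb(n - k, 2) for k in range(n - 1))   # k = 0, ..., n-2
    print(n, lhs, comb(n + 1, 3), lhs == comb(n + 1, 3))
```

(Reindexing with $j = n-k$ turns the left side into $\sum_{j=2}^{n}\binom{j}{2}$, so this is the hockey-stick identity.)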


Saturday 21 October 2017

complex numbers - Polar coordinates - issue with direction denoted by angle




Convert $1-\sqrt{3}i$ to polar coordinates $(r,\varphi)$.





I started by computing $r=|1-\sqrt{3}i|=\sqrt{1^2+\sqrt{3}^2}=\sqrt{4}=2$. When I tried to compute the angle I did something like



$$\varphi=\arctan\left|\frac{y}{x}\right|=\arctan\left|\frac{-\sqrt{3}}{1}\right|=\arctan\sqrt{3}=\frac{\pi}{3}.$$



Although this answer seems plausible to me, I am unsure, because the angle should be $-\frac{\pi}{3}$ otherwise the resulting coordinates would be the first quadrant rather than in the fourth. How do I have to compute $\varphi$ to match the right quadrant?


Answer



Why did you do



$$\arg z=\arctan\left|\frac{y}{x}\right|??$$




It should be



$$\arg z=\arctan\frac{y}{x}=\arctan(-\sqrt 3)=-\frac{\pi}{3}\,,\,\frac{2\pi}{3}$$



Since in $\,z=1-\sqrt 3i\,\;$ the real part is positive and the imaginary part is negative, the vector(=the complex number) is in the fourth quadrant, so the answer must be $\,-\dfrac{\pi}{3}\,$ , or if a positive number is wanted, $\,\dfrac{5\pi}{3}\,$


real analysis - Give an example of a monotonic increasing function which does not satisfy intermediate value property.




Give an example of a monotonic increasing function which does not satisfy intermediate value property.




I have no idea whether that function would be continuous or discontinuous. Please give me the example(s) with proper reason. Thank you very much.



Answer



Well, the intermediate value theorem says that any continuous $f:I \to \mathbb{R}$, where $I \subset \mathbb{R}$ is an interval (and we allow notations like $(-\infty,\infty)=\mathbb{R}$) has the intermediate value property. So in order to find a function which fails to have the intermediate value property, it had better either be




  • discontinuous, or


  • defined on a set which is not an interval in $\mathbb{R}$



It turns out you can get a counterexample with either strategy. The idea of the intermediate value property is that the function can't "jump" from one value to the next: it has to take every value in between.



In fact, we'll use essentially the same function for both properties. Consider the function $f: \mathbb{R} \to \mathbb{R}$ defined by
$$f(x)=\cases{0 \quad \text{ if }x<0\\1\quad \text{ if }x>0}$$
and $f(0)=0$. Then $f$ is continuous everywhere except $0$, but it does not have the intermediate value property: $f$ takes the value $0$ (at $-1$), and it takes the value $1$ (at $1$), but there is no point $x$ with $-1 < x < 1$ such that $f(x) = \frac{1}{2}$.

If, instead of taking $f$ as a function $\mathbb{R} \to \mathbb{R}$, we choose to look at it as a function $\mathbb{R} \setminus \{0\} \to \mathbb{R}$ (and do not define it at $x=0$), then it is continuous at every point at which it is defined, but fails to have the intermediate value property for the same reason. So both hypotheses of the theorem are necessary!




EDIT: There may be a slight confusion over whether this function is monotone increasing - this function is (according to the definition I'm used to) increasing, but not strictly increasing. It is pretty straightforward to use it to generate an example of a strictly increasing function, though.


elementary set theory - Proof that a subset of R has same cardinality as R



I am sitting with a task where I have to prove the following:



Claim:
Every subset of $\mathbb{R}$ that contains an interval $I = \,]a; b[$ with $a < b$ has the same cardinality as $\mathbb{R}$.



So I think that I should prove that there exist a bijection from $I$ to $\mathbb{R}$? I'm kinda lost, and don't know how to start.




There is a Lemma 3 in the book saying:
Let $a,b \in \mathbb{R}$ with $-\infty\neq a < b \neq \infty$. There exists an $f\colon ]-1; 1[ \to ]a; b[$ that is bijective. The intervals $]-1; 1[$ and $]a; b[$ have the same cardinality.



Another Lemma 4 says:
$f\colon ]0; 1[ \to ]1; \infty[$, $x\mapsto \frac{1}{x}$, is bijective. The intervals $]0; 1[$ and $]1; \infty[$ have the same cardinality.



(There is an image added to lemma4)







The above interval $]-1; 1[$ has the same cardinality as $\mathbb{R}$, and the $f$ is bijective.
Any help is highly appreciated.


Answer



Hints:



First prove that any two non-empty open intervals have the same cardinality.



Second, pass now to use the nice interval $\;(-\pi/2\,,\,\pi/2)\;$ and a rather nice, simple trigonometric function to show equipotency with $\;\Bbb R\;$


Friday 20 October 2017

elementary number theory - How do I find the remainder for the following?



I know this is a very typical question for modular arithmetic but still I haven't found a comprehensive explanation for this question, so I'm posting it here. So here goes:



I need to find the remainder when $19^{38}$ is divided by $38$.
Here is my attempt:-



$$19\equiv 19\pmod {38}$$
$$19^2\equiv 19^2\pmod {38} \implies 19^2\equiv 19\pmod {38}$$
$$\implies (19^2)^2\equiv 19^2\pmod {38} $$

$$\implies 19^4\equiv 19\pmod {38}$$
$$\implies 19^8\equiv 19\pmod {38}$$
$$\implies 19^{16}\equiv 19\pmod {38}$$
$$\implies 19^{32}\equiv 19\pmod {38}$$



And carrying on I do get the answer that $$19^{38}\equiv 19\pmod {38}$$
But this seems a very cumbersome task; for higher powers it may be hard, and if $19$ were not a factor of $38$, then probably I wouldn't have been able to develop the pattern. Is there an easier and more methodical way to solve this using number theory/modular arithmetic?



I think I may have seen a solution involving Euler's totient function, but it was a while ago and I simply can't seem to relate it to this question and can't remember what the solution was. Can that be used to simplify this question?


Answer




Precisely, Euler's totient function comes in very handy for such problems. Euler's theorem states that if $\gcd(a,m)=1$, then $a^{\phi(m)} \equiv 1 \pmod{m}$. In your case, since $\gcd(19,38) \neq 1$, the fact that $19^2 \equiv 19 \pmod{38}$ is crucial, and from that one gets $19^n \equiv 19 \pmod{38}$ for every $n \in \mathbb{N}$. For instance, if you want to calculate the remainder of $17^{38}$ when divided by $38$, first observe that $\gcd(17,38) = 1$ (here we can use Euler's theorem), and that $\phi(38)=18$. So, since $38= 2 \times 18 +2$, we have
$$
17^{38}=17^{2 \times 18 +2}=(17^{18})^217^2 \equiv 17^2 \equiv 23 \pmod{38}.
$$

That's the "canonical" way.
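
Both computations are easy to confirm with fast modular exponentiation, e.g. Python's three-argument `pow`:

```python
print(pow(19, 38, 38))   # 19
print(pow(17, 38, 38))   # 23
```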


logarithms - Graph of a Log Function




I am curious as to why Wolfram|Alpha is graphing a logarithm the way that it is. I was always taught that a graph of a basic logarithm function $\log{x}$ should look like this:



[graph: $y = \log x$, defined only for $x > 0$]



However, Wolfram|Alpha is graphing it like this:



[Wolfram|Alpha plot: a real part (blue) extending over $(-\infty, 0)$ as well, with the imaginary part shown as an orange line]



As you can see, there is a "real" range in the region $(-\infty, 0)$, and an imaginary part indicated by the orange line. Is there a part about log graphs that I am missing which would explain why Wolfram|Alpha shows the range of the log function as $\mathbb{R}$?


Answer




$\ln(x)$ is formally defined as the solution to the equation $e^y=x$.



If $x$ is positive, this equation has an unique real solution, anyhow if $x$ is negative this doesn't have a real solution. But it has complex roots.



Indeed, $\ln(x)= a+ib$ is equivalent to



$$x= e^{a+ib}= e^{a} (\cos(b)+i \sin (b)) \,.$$



If $x <0$ we need $e^{a}=|x|$, $\cos(b)=-1$ and $\sin(b)=0$.




Thus, $a= \ln(|x|)$ and $b=\pi+2k\pi$....
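
This is the branch Wolfram|Alpha plots: for $x<0$ the principal value is $\ln|x| + i\pi$, which any complex-logarithm implementation confirms:

```python
import cmath

print(cmath.log(-2))   # (0.6931471805599453+3.141592653589793j) = ln 2 + i*pi
```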


Thursday 19 October 2017

multivariable calculus - Show that the series $\sum_{n,m=1}^\infty \frac{1}{(n+m)!}$ is absolutely convergent and find its sum

Show that the series $$\sum_{n,m=1}^\infty \dfrac{1}{(n+m)!}$$ is absolutely convergent and find its sum.



This comes from a chapter called interchange of limit operations. I tried using the ratio test but wasn't sure if this was the correct route.
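
A possible first step (a sketch, not necessarily the route the book intends): since all terms are positive, one may group along the diagonals $s=n+m$, each of which contains $s-1$ pairs, giving $\sum_{s\ge 2}(s-1)/s!$; numerically this is $1$:

```python
from math import factorial

print(sum((s - 1) / factorial(s) for s in range(2, 40)))   # ~1.0
```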

real analysis - Prove that $\lim_{u\to \infty}{\frac{u^m}{e^u}}=0$




I'm trying to prove that $\lim_{u\to \infty}{\frac{u^m}{e^u}}=0$ for any integer $m$. I tried using the Ratio Test For Limits to prove that the limit converges to $0$ for all real values of $m$, and therefore it must converge to $0$ for all integer values of $m$ since $\mathbb{Z} \subset \mathbb{R}$. However, using the Ratio Test only seems to complicate things. Is there a better way to show this?



Additionally, I need to prove that $\lim_{x\to 0^+}{x\log x}=0$ by setting $u=-\log x$ and using the above limit. So far, I have done the following:




$$\lim_{-\log x\to \infty}{\frac{(-\log x)^m}{e^{-\log x}}}=\lim_{x\to 0^+}x{(-\log x)^m}$$



but I don't know where to go from here. Also, I cannot use L'Hopital's rule, since I haven't covered it in lectures. Thanks in advance.


Answer



If you accept that $e^{u}=\sum_{n=0}^{\infty} \frac{u^n}{n!}$, then



$$
\frac{u^m}{e^u}=\frac{u^m}{\sum_{n=0}^{\infty} \frac{u^n}{n!}}\le (m+1)!\frac{u^m}{u^{m+1}}\to 0$$



as $u\to\infty$.
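
For the second part of the question, the same hint applies with $m=1$: setting $u=-\log x$, so that $x=e^{-u}$ and $u\to\infty$ as $x\to 0^+$,

$$x\log x = e^{-u}\cdot(-u) = -\frac{u}{e^u}\ \longrightarrow\ 0.$$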



Wednesday 18 October 2017

divisibility - Show that the number $n$ is divisible by $7$

How can I prove that $n = 8709120$ is divisible by $7$?
I have tried a couple of methods, but I can't show it. Can somebody please help me?
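
One route, for reference: since $1001 = 7\cdot 11\cdot 13$, i.e. $10^3 \equiv -1 \pmod 7$, the alternating sum of three-digit groups preserves divisibility by $7$:

$$8\,709\,120:\quad 120-709+008=-581=-7\cdot 83,$$

so $7 \mid 8709120$ (indeed $8709120 = 7\cdot 1244160$).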

How do I prove that the limit as x approaches 0 of (sin(x)/x) = 1 using the definition of a limit?

If I let $r$ be a positive number, I have to show that there exists a positive number $u$ such that $|x| < u$ implies that $\left|\frac{\sin x}{x} - 1\right| < r$. Now I've gone about this two different ways, but hit dead ends both times.




My first approach was to use the triangle inequality and the fact that $-1 \le \sin x \le 1$: $$\left|\frac{\sin x}{x} - 1\right| \le \left|\frac{\sin x}{x}\right| + 1 = \frac{|\sin x|}{|x|} + 1 \le \frac{1}{|x|} + 1.$$ I knew that if I could show that $\frac{1}{|x|} + 1 < r$, that would mean $\left|\frac{\sin x}{x} - 1\right| < r$. However, the issue is that if $\frac{1}{|x|} + 1 < r$, then $1 < r$. But I need to prove it with $r$ being any positive number.



My second approach was to split it into left-hand and right-hand limits and use the squeeze theorem. I knew that if I could show that each limit was $1$, then the entire limit was $1$. I decided to start with the left-hand limit. For $x<0$, $\frac{1}{x} \le \frac{\sin x}{x} \le -\frac{1}{x}$. However, since $\lim_{x \to 0^-} \frac{1}{x} = -\infty$ and $\lim_{x \to 0^-} \left(-\frac{1}{x}\right) = \infty$, the squeeze theorem really can't be applied.



What is it that I'm doing wrong?

Limit of a sequence involving root of a factorial: $\lim_{n \to \infty} \frac{n}{\sqrt[n]{n!}}$





I need to check if
$$\lim_{n \to \infty} \frac{n}{ \sqrt [n]{n!}}$$ converges or not. Additionally, I wanted to show that the sequence is monotonically increasing in $n$, so that the limit exists. Any help is appreciated. I tried taking logs and manipulating the sequence, but I could not prove monotonicity this way.


Answer



Use Stirling's approximation:
$ n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n $

and you'll get
$$
\lim_{n \rightarrow \infty} \frac{n}{(n!)^{1/n}}
=\lim_{n \rightarrow \infty} \frac{n}{\left(\sqrt{2 \pi n} \left(\frac{n}{e}\right)^n\right)^{1/n}}
=\lim_{n \rightarrow \infty} \frac{n}{({2 \pi n})^{1/(2n)} \left(\frac{n}{e}\right)}
=\lim_{n \rightarrow \infty} \frac{e}{({2 \pi n})^{1/(2n)}}=e,
$$
because $\lim_{n\to \infty} ({2 \pi n})^{1/(2n)}=1$ (take logarithms: $\frac{\ln(2\pi n)}{2n}\to 0$).
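A quick numerical check (my addition), using lgamma to evaluate $\ln n!$ without overflow:

    from math import lgamma, exp, e

    for n in [10, 100, 1000, 100000]:
        print(n, n / exp(lgamma(n + 1) / n))  # approaches e
    print("e =", e)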


arithmetic - False proof $m = 1$ for any $m$



Is it possible to find a fake proof that results in $$ 1 = m, $$ where $m$ is any number you choose?



The idea is to find a generalization to this fallacy:





\begin{align}a &= b\\
&\Downarrow\\
a^2 - b^2 &= ab - b^2\\
(a+b) (a-b) &= b (a-b)\\
a+b &= b\\
2b &= b \\
2 &= 1 \end{align}




You can use division by zero or anything a little bit elaborated you can think of.



Answer



Using $2=1$, you can prove using induction that $$\forall n\in\mathbb N,\,n=1.$$



Indeed, it is true for $1$ and also for $2$ by assumption. Assume that it's true for a certain $n\ge 2$.



\begin{align}
n+1&=n-1+2\\
&=n-1+1\\
&=n\\
&=1.

\end{align}



Edit:



Another method, which looks like a real generalisation of your given proof, is as follows: let $n\in\mathbb N$. We have:



\begin{align}
a&=b\\
a^n-b^n&=ab^{n-1}-b^n\\
(a-b)\sum_{k=0}^{n-1}a^kb^{n-1-k}&=b^{n-1}(a-b)\\

\sum_{k=0}^{n-1}a^kb^{n-1-k}&=b^{n-1}\\
nb^{n-1}&=b^{n-1}\\
n&=1.
\end{align}


Extension of the additive Cauchy functional equation



Let $f\colon (0,\alpha)\to \def\R{\mathbf R}\R$ satisfy $f(x + y)=f(x)+f(y)$
for all $x,y,x + y \in (0,\alpha)$, where $\alpha$ is a positive real number. Show that there exists an additive function $A \colon \R \to \R$ such that $A(x) = f(x)$ for all $x \in (0, \alpha)$.
Simply put, I want to define a function $A$ in a specific form as an extension of the function $f$, which satisfies the additive functional equation. I tried to define the function $A$.


Answer




Let $x > 0$. Choose $n \in \def\N{\mathbf N}\N$ with $\frac xn < \alpha$. Define $A(x) := nf(\frac xn)$. Note that this is well-defined: If $m\in \N$ is another natural number such that $\frac xm < \alpha$, we have
\begin{align*}
mf\left(\frac xm\right) &= mf\left(\sum_{k=1}^n \frac x{mn}\right)\\
&= m\sum_{k=1}^n f\left(\frac x{mn}\right)\\
&= \sum_{l=1}^m n f\left(\frac x{mn}\right)\\
&= nf\left(\sum_{l=1}^m \frac x{mn}\right)\\
&= nf\left(\frac x{n}\right).
\end{align*}
For $x < 0$ choose $n \in \N$ with $\frac xn > -\alpha$ and define $A(x) := -nf(-\frac xn)$; finally, let $A(0) = 0$. Then $A$ is an extension of $f$. To show that it is additive, let $x,y \in \def\R{\mathbf R}\R$. Choose $n \in \N$ such that $\frac xn, \frac yn, \frac{x+y}n \in (-\alpha, \alpha)$. If $x,y \ge 0$, we have:
\begin{align*}

A(x+y) &= nf\left(\frac{x+y}n\right)\\
&= nf\left(\frac xn\right) + nf\left(\frac yn\right)\\
&= A(x) + A(y)
\end{align*}
If both $x,y \le 0$, we argue along the same lines. Now suppose $x \ge 0$, $y \le 0$, $x+y \ge 0$. We have $A(y) = -A(-y)$ by definition of $A$. Hence
\begin{align*}
-A(y) + A(x+y) &= A(-y) + A(x+y)\\
&= A(-y+x+y)\\
&= A(x).
\end{align*}

If $x \ge 0$, $y \le 0$, $x+y \le 0$, we have $-x \le 0$ and
\begin{align*}
-A(x) + A(x+y) &= A(-x) + A(x+y)\\
&= A(-x+x+y)\\
&= A(y),
\end{align*}
so again $A(x+y) = A(x) + A(y)$. Hence $A$ is additive in every case.


Assigning values to divergent series




I have been looking at divergent series on Wikipedia and other sources, and it seems people give finite "values" to specific ones. I understand that these values sometimes reflect the algebraic properties of the series in question, but do not actually represent what the series converges to, which is infinity. Why is it useful to assign values to divergent series?



The only theory I could come up with, is this:



Say you have 2 divergent series, series' A and B, and you assign each a value,



Series ($A=
\sum_{n=0}^\infty a_n$), which I assigned the value Q




and series ($B=
\sum_{n=0}^\infty b_n$ ), which I assigned the value P



But it just so happens that series $C=A-B=
\sum_{n=0}^\infty (a_n-b_n)$ converges.
Could that imply that the actual value of series $C$ is the difference of the two assigned values to $A$ and $B$, that is $\sum_{n=0}^\infty (a_n-b_n)=Q-P$ ?



If so, then that would make some sense to me, as to why people sometimes assign values to divergent series.


Answer



The most common situation with a divergent series is this: an infinite series with a variable $z$ is given, which converges for some values of $z$ in the complex plane $\mathbb C.$ On the region of convergence, the series defines a holomorphic function, call it $f(z).$ Then the analytic continuation of the function $f(z)$ is correctly described. As a result, there is a well-defined value $f(z)$ for $z$ values that would cause the original series to diverge.




The best example is ZETA. I guess Euler found values of $\zeta(-n)$
at negative integers, and wrote these down in the style of divergent series. So people get an impression that one assigns a value to a divergent series by clever manipulation. This is not the general case, however. When the radius of convergence is strictly exceeded, we are simply reporting the value given by the analytic continuation. Not assigning.



Here is an elementary example: Let us take
$$ f(z) = \frac{1}{1 + z^2}. $$ Now, for $|z| < 1,$ we know
$$ f(z) = 1 - z^2 + z^4 - z^6 + z^8 - z^{10} \cdots $$
If I wrote
$$ 1 - 9 + 81 - 729 + 6561 - 59049 \cdots = \frac{1}{10} $$ you would have every reason to be suspicious as the series obviously diverges. But if I instead wrote
$$ f(3) = \frac{1}{10} $$ you would think that was probably alright.
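To make the distinction concrete, a small sketch (my addition): inside the radius of convergence the partial sums track $f$; outside it, only the function value, i.e. the analytic continuation, is meaningful.

    def f(z):
        return 1 / (1 + z**2)

    def partial_sum(z, N):
        # first N terms of 1 - z^2 + z^4 - z^6 + ...
        return sum((-1)**n * z**(2 * n) for n in range(N))

    print(partial_sum(0.5, 40), f(0.5))  # agree: both 0.8
    print(f(3))                          # 0.1, the "value" attached to 1 - 9 + 81 - ...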



Tuesday 17 October 2017

calculus - Induction Proof on the existence of Injections and Bijections from Natural numbers into arbitrary sets



I found a note in my old Calc I script, where it was stated that: for any set




$M\neq \emptyset$ and $\mathbb{N}_m=\{\, n \in \mathbb{N} : n\le m \,\}$ for $m \in \mathbb{N}$ (with $\mathbb{N}$ denoting the natural numbers),



there either exists a fixed $m \in \mathbb{N}$ and a bijective map $\varphi : \mathbb{N}_m \rightarrow M$,



or an injective map $\varphi : \mathbb{N}_m \rightarrow M$



for all $m \in \mathbb{N}$.



I tried to prove this as a refreshing exercise. It is fairly easy if you use countability arguments, but I tried another way, a proof by induction, where I got completely stuck. My question is now whether it is even possible to prove it by induction, and if yes, how?



Answer



Well, clearly if $a \in M$ then $f:\{1\} \to M$ via $f(1) = a$ is injective.



Suppose the statement holds for $k$. If there is an $m \le k$ such that there is a bijection from $N_m \to M$, we are done and there is nothing left to prove.



Assume there is not but there is an injection $\varphi: N_k \to M$. Since $\varphi$ is not a bijection but is an injection, $\varphi$ is not surjective. So there is an $a \in M$ so that $a \ne \varphi(j)$ for any $j \in N_k$.



Let $\varphi': N_{k+1}\to M$ via $\varphi'(k+1) = a$ and $\varphi'(j) = \varphi(j)$ for $j \le k$. Then $\varphi'$ is clearly an injection[$*$]. It might even be a bijection (in which case we won't need to do any further induction); if it isn't, we can continue the induction.



==== after edit ===




In a recent edit there is a condition I have not addressed: that if there is an $m$ such that there is a bijection $\varphi:N_m \to M$, then that $m$ is fixed; in other words, there is precisely one $m$ that admits a bijection from $N_m$ to $M$, and no other natural number does.



Claim: If a bijection exists between $N_n$ and $M$, then no bijection exists between $N_k$ and $M$ for any $k \ne n$.



Pf: Suppose $k < n$ and bijections $\varphi_n:N_n\to M$ and $\varphi_k:N_k\to M$ exist.



Then $f = \varphi_n\circ\varphi_k^{-1}\circ\varphi_n:N_n\to M$ is a bijection, as it is a composition of bijections.



But then for each $j$ with $k < j \le n$ we must have an $i$ with $f(i) = \varphi_n(j)$, i.e. $\varphi_n(\varphi_k^{-1}(\varphi_n(i))) = \varphi_n(j)$. But $\varphi_n$ is a bijection, so $\varphi_k^{-1}(\varphi_n(i)) = j$, which would mean $j \in N_k$, a contradiction.




......



[$*$] If $\varphi'(b) = \varphi'(c)$ either $\varphi'(c) = a$ or $\varphi'(c) \ne a$.



If $\varphi'(c) = a$ then $c \not \le k$ so $c = b = k+1$.



If $\varphi'(c) \ne a$ then $c \ne k+1$ and $b \ne k+1$, so $\varphi'(c) = \varphi(c)$ and $\varphi'(b) = \varphi(b)$, and, as $\varphi$ is an injection, $b = c$.



Either way $b = c$ and $\varphi'$ is an injection.



Sunday 15 October 2017

probability - What makes a random variable absolutely continuous?



Consider this question:





If $X$ is a random variable with $F_X$ (cumulative distribution function), then $X$ is absolutely continuous if:



a) $F_X(x)$ is differentiable for all $x$ in the real numbers.



b) $f_X(x)$ (Probability density function) is differentiable for all $x$ in the real numbers.



c) If the integral of $f_X(x)$ over all the real numbers is equal to $1$.




d) None of the above.




I know that the answer is d, but I don't know why or what even makes $X$ absolutely continuous.


Answer



In probability language, $F_X$ is absolutely continuous if there exists an associated probability density function $f_X$. This means there is the relationship $F_X(x)=\int_{-\infty}^x f_X(y) dy$. In this case we usually just say that $X$ is continuous, not absolutely continuous (though we may say something even more precise, like "the distribution of $X$ is absolutely continuous with respect to Lebesgue measure", if there is some possibility of confusion).



This does not require that $F_X$ be differentiable at every point; for a familiar example, you can look at $F_X(x)=\begin{cases} 0 & x<0 \\ 1-e^{-x} & x \geq 0 \end{cases}$, which is not differentiable at $0$. It certainly doesn't require $f_X$ to be differentiable at every point; indeed, $f_X$ can even be nowhere continuous in principle.



As for property (c), I don't entirely understand what is meant, seeing as absolute continuity is equivalent to $f_X$ merely existing (and whenever a density $f_X$ exists, its integral over $\mathbb R$ is automatically $1$).



Saturday 14 October 2017

discrete mathematics - Solving Simple Linear Congruence



I'm given the equation $2x \equiv 5 \pmod 9$ and am told to solve the linear congruence. I attempted this with only the methods taught to me and no more, so I cannot use Bézout's identity or a Diophantine equation. I can only use the extended Euclidean algorithm and modular arithmetic from there.




My attempt:



GCD(2,9)



$ 9 = (2)4+1$ (Find GCD)



$1 = (1)9 -4(2)$ (Solve 1 in terms of 9 and 2)



$1 \equiv (1)9 - 4(2) \pmod 9$, i.e. $-4$ is an inverse of $2$ (rewrite the equivalence with the new inverse)



$-4 \equiv 9 - 4 = 5 \pmod 9$ (finding the equivalent positive multiplicative inverse)



$(5)2x \equiv (5)5 \pmod 9$ (go back to the original equation and multiply each side by the inverse)



$x \equiv 25 \pmod 9$ (the left side becomes $x$ because $5\cdot 2 \equiv 1 \pmod 9$)



$x \equiv 7 \pmod 9$ (result)



I cannot find a single calculator online that confirms this answer. Quite honestly, different calculators are giving me different answers, so I'm not even sure whether mine is wrong or not. Is this right, or is there something wrong in my steps?


Answer




This particular example is quite easy:
$$
2x \equiv 5 \equiv -4\bmod 9
\implies
x \equiv -2 \equiv 7 \bmod 9
$$
This is easy because we can just divide the two sides of $2x \equiv -4\bmod 9$ by $2$ since $\gcd(2,9)=1$.



Note that $2x \equiv -4\bmod 9$ means that $9$ divides $2x+4=2(x+2)$, and so $9$ divides $x+2$.
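A quick machine check of both solutions (my addition; pow(2, -1, 9) requires Python 3.8+):

    inv2 = pow(2, -1, 9)        # modular inverse of 2 mod 9
    print(inv2)                 # 5
    print(inv2 * 5 % 9)         # 7, the solution x
    print(2 * 7 % 9)            # 5, confirming 2*7 = 5 (mod 9)
    print([x for x in range(9) if 2 * x % 9 == 5])  # [7]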


number theory - Given integers $x,\,y$ s.t. $x^2-16=y^3$, show that $x+4$ and $x-4$ are perfect cubes

Suppose $x$ and $y$ are some integers satisfying $$x^2-16=y^3.$$ I'm trying to show that $x+4$ and $x-4$ are both perfect cubes.



I know that the greatest common divisor of $x+4$ and $x-4$ must divide $8$, but I don't know where to go from there. Would anyone be able to help?

exponentiation - Positive rationals satisfying: $a^2+a^2=c^2$?




If there are none why not?



Thanks in advance.


Answer




Note that this can be written as $$2=\left(\frac{c}a\right)^2$$



Is $\sqrt 2$ rational?


Wednesday 11 October 2017

calculus - Proving $\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$




How to prove
$$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$$


Answer



This is an old favorite of mine.
Define $$I=\int_{-\infty}^{+\infty} e^{-x^2} dx$$
Then $$I^2=\bigg(\int_{-\infty}^{+\infty} e^{-x^2} dx\bigg)\bigg(\int_{-\infty}^{+\infty} e^{-y^2} dy\bigg)$$
$$I^2=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}e^{-(x^2+y^2)} dxdy$$
Now change to polar coordinates
$$I^2=\int_{0}^{+2 \pi}\int_{0}^{+\infty}e^{-r^2} rdrd\theta$$
The $\theta$ integral just gives $2\pi$, while the $r$ integral succumbs to the substitution $u=r^2$
$$I^2=2\pi\int_{0}^{+\infty}e^{-u}du/2=\pi$$
So $$I=\sqrt{\pi}$$ and your integral is half this by symmetry




I have always wondered if somebody found it this way, or did it first using complex variables and noticed this would work.
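A numeric cross-check (my addition; assumes SciPy is available):

    from math import exp, pi, sqrt
    from scipy.integrate import quad

    value, abserr = quad(lambda x: exp(-x * x), 0, float("inf"))
    print(value, sqrt(pi) / 2)  # both ~0.8862269254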


real analysis - Showing the existence of a continuous, strictly increasing function $f$ on $\mathbb{R}$ such that $f'(x) = 0$ almost everywhere




Problem: Show that there exists a continuous strictly increasing function $f$ on $\mathbb{R}$ such that $f'(x) = 0$ almost everywhere.




Attempt:



Perhaps we could modify the Cantor Function which is non-decreasing, continuous, and constant on each interval in the complement of the Cantor Set in $[0,1]$.




[image: graph of the Cantor function]



Of course, we'd need to do the following:




  1. Make the Cantor function strictly increasing (since the Cantor Function is constant on certain intervals, it's certainly not strictly increasing as is).


  2. Extend our modified Cantor Function from $[0,1]$ to all of $\mathbb{R}$.


  3. Verify that $f'(x) = 0$ for all irrational $x$ (or for all but a countable subset of $\mathbb{R}$).




Answer



Aside. Your item 3 sets an unattainable goal. If a continuous function $f$ satisfies $f'=0$ for all but countably many points of $\mathbb R$, then $f$ is a constant function. See Set of zeroes of the derivative of a pathological function.



Here is a more explicit construction than in the post linked in the comments. Fix $r\in (0,1/2)$. Let $f_0(x)=x$. Inductively define $f_{n+1}$ so that




  1. $f_{n+1} (x) = f_n(x)$ when $x\in 2^{-n}\mathbb Z$

  2. If $x\in 2^{-n}\mathbb Z$, let $f_{n+1}(x+2^{-n-1}) = r f_n(x) + (1-r) f_n(x+2^{-n})$.

  3. Now that $f_{n+1}$ has been defined on $2^{-n-1}\mathbb Z$, extend it to $\mathbb R$ by linear interpolation.

  4. Let $f=\lim f_n$.




Here is a piece of this function with $r=1/4$:



[image: the resulting function for $r=1/4$; its derivative is $0$ a.e.]



and for clarity, here is the family $f_0$,$f_1$,... shown together. All the construction does is replace each line segment of previous step with two; sort of like von Koch snowflake, but less pointy. (It's a monotonic version of the blancmange curve).



[image: the approximating family $f_0, f_1, \dots$ shown together]




To check that the properties hold, you can proceed as follows:




  1. Since $\sup|f_{n+1}-f_{n}| \le 2^{-n}$, it follows that $f_n$ is a uniformly convergent sequence. So the limit is continuous.

  2. The values of $f$ at every dyadic grid $2^{-n} \mathbb Z$ are strictly increasing, by direct inspection. Since $f$ eventually stabilizes at dyadic rationals, it is strictly increasing.

  3. Being increasing, $f$ is differentiable almost everywhere. Let $x$ be a point of differentiability (note that $x$ is not a dyadic rational). Then
    $$f'(x) = \lim_{n\to\infty} 2^{n} (f_n(x_n+2^{-n})-f_n(x_n)) \tag{1}$$
where $x_n\in 2^{-n}\mathbb Z$ is such that $x\in (x_n,x_n+2^{-n})$. The expression inside the limit in (1) is $2^n$ times a product of the form $r^{i}(1-r)^j$ where $i+j=n$. More precisely, $i$ and $j$ are the numbers of $1$s and $0$s in the first $n$ digits of the binary expansion of the fractional part of $x$. It follows that $f'(x)=0$ unless $x$ has a lot more $0$s than $1$s in its binary expansion; but the set of such points $x$ has measure zero.




(It may be more enjoyable to prove part 3 using the Law of Large Numbers from probability.)
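A small computational sketch of the construction (my addition): it tabulates $f_n$ on the dyadic grid of $[0,1]$ via rule 2 and confirms strict monotonicity at the grid points.

    def dyadic_values(r=0.25, levels=12):
        # values of f_levels at the points k / 2**levels in [0, 1]
        vals = [0.0, 1.0]  # f_0 on {0, 1}
        for _ in range(levels):
            refined = []
            for a, b in zip(vals, vals[1:]):
                refined += [a, r * a + (1 - r) * b]  # rule 2: new midpoint value
            refined.append(vals[-1])
            vals = refined
        return vals

    v = dyadic_values()
    print(all(x < y for x, y in zip(v, v[1:])))  # True: strictly increasing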


Tuesday 10 October 2017

Sum of alternating harmonic series

Edit: As @FamousBlueRaincoat pointed out below, this question was based on an error in a wikipedia article.



The Wikipedia article on the harmonic series gives the following "proof without words" that the alternating harmonic series $1-1/2 +1/3 -1/4 + \cdots $ converges to $\log (2) $:

\begin{equation}
(1/1)(1/1 - 1/2) + (1/2)(2/3 -2/4) + (1/4)(4/5 - 4/6 + 4/7 - 4/8) + \cdots
= \log (2).
\end{equation}



Can anyone explain why the sum on the left is $\log (2) $? I don't see it right now.



Edit: my goal is specifically to understand this "proof without words" that the alternating harmonic series converges to $\log(2)$.
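(My addition, for what it's worth: as quoted, each block expands into consecutive terms of the alternating harmonic series, e.g. $\frac14\left(\frac45-\frac46+\frac47-\frac48\right)=\frac15-\frac16+\frac17-\frac18$, and numerically the series does approach $\log 2$.)

    from math import log

    s = sum((-1) ** (k + 1) / k for k in range(1, 200001))
    print(s, log(2))  # ~0.693145 vs 0.693147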

logic - Paradox - What is wrong with this proof that proves a false assertion?




Theorem: Let $a_{n}=a_{n-1}+1, a_1=1$. For any $n$, in order to compute $a_n$, it is necessary to compute $a_i$ for each $i=1,\dots,n-1$, which takes $\Theta(n)$ time.



Proof: This is vacuously true for $n=1$. Assume true for $n=k-1$. Prove true for $n=k$. In order to compute $a_{k-1}+1$, it is necessary to compute $a_{k-1}$. Then since $a_k=a_{k-1}+1$, in order to compute $a_k$, it is necessary to compute $a_{k-1}$. By the induction hypothesis, in order to compute $a_{k-1}$, it is necessary to compute $a_i$ for each $i=1,\dots,k-2$. Hence, in order to compute $a_k$, it is necessary to compute $a_i$ for each $i=1\dots,k-1$. QED



What is wrong with this proof? It seems valid to me, even though the theorem is false.


Answer



In your induction step, you made the additional assumption (beyond the inductive hypothesis) that it was necessary to compute $a_{k-1}$ in order to compute $a_k$. That's hardly the case, as simple inspection of the recursive definition gives us the closed-form definition $a_n:=n$ for all $n$.
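In code (my addition), the contrast between the literal recurrence and the closed form:

    def a_closed(n):
        return n            # closed form: a_n = n, constant time

    def a_recurrence(n):
        v = 1               # a_1 = 1
        for _ in range(2, n + 1):
            v += 1          # a_k = a_{k-1} + 1: Theta(n) only if done this way
        return v

    print(a_closed(10**6) == a_recurrence(10**6))  # True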


calculus - What is the sum of $\sum\limits_{i=1}^{n}ip^i$?



What is the sum of $\sum\limits_{i=1}^{n}ip^i$ and does it matter, for finite n, if $|p|>1$ or $|p|<1$ ?



Edit:




Why can I integrate, take the sum, and then take the derivative? I think that kind of trick is not always allowed.



P.S. I've tried this approach but I made a mistake when taking the derivative, so I've asked; maybe I should use some program (or online app) for symbolic computation.


Answer



$$\sum_{k=1}^n kp^k=\frac{p(np^{n+1}-(n+1)p^n+1)}{(p-1)^2}\tag{*}$$



Proof by induction:





  1. $n=1$: $ p=\frac{p(p^{1+1}-(1+1)p^1+1)}{(p-1)^2}=\frac{p(p^{2}-2p+1)}{(p-1)^2}=p$







  2. $$
    \begin{eqnarray}
    (n+1)p^{n+1}+\sum_{k=1}^n kp^k&=&(n+1)p^{n+1} + \frac{p(np^{n+1}-(n+1)p^n+1)}{(p-1)^2}\\
    &=&\frac{(n+1)p^{n+1}(p^2-2p+1)}{(p-1)^2} + \frac{p(np^{n+1}-(n+1)p^n+1)}{(p-1)^2}\\

    &=&\frac{np^{n+3}+p^{n+3}-np^{n+2}-2p^{n+2}+p}{(p-1)^2}\\
    &=&\frac{p((n+1)p^{n+2}-(n+1+1)p^{n+1}+1)}{(p-1)^2}\\
    &=&\sum_{k=1}^{n+1} kp^k
    \end{eqnarray}
    $$



If $p=1$, we expect $\sum_{k=1}^n k\cdot 1^k= \frac12 n(n+1)$.
Since the RHS of $(*)$ gives $\frac00$ when we insert $p=1$, we apply L'Hospital's rule twice:
$$

\lim_{p\to 1} \frac{p(np^{n+1}-(n+1)p^n+1)}{(p-1)^2}
=\lim_{p\to 1} \frac{n(n+2)p^{n+1}-(n+1)^2p^{n}+1}{2(p-1)}\\
=\frac12 \lim_{p\to 1} (n(n+1)(n+2)p^{n}-n(n+1)^2p^{n-1})\\
=\frac12 n(n+1) \underbrace{\lim_{p\to 1} p^{n-1}((n+2)p-(n+1))}_{=1}\\
$$
If $n\to \infty$, the series converges by the ratio test ($\lim_{k\to \infty}\left|\frac{k+1}{k}\,p\right|=|p|<1$) when $|p|<1$. You'll get
$$
\sum\limits_{k=1}^\infty kp^k = \frac p{(p-1)^2}+\underbrace{\lim_{n\to \infty} \frac{np^{n+2}-(n+1)p^{n+1}}{(p-1)^2}}_{=0}
= \frac p {(p-1)^2}
$$
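A numerical spot-check of the closed form and the limiting sum (my addition; the sample values of p and n are arbitrary):

    # finite sum, |p| > 1
    p, n = 3.0, 10
    lhs = sum(k * p**k for k in range(1, n + 1))
    rhs = p * (n * p**(n + 1) - (n + 1) * p**n + 1) / (p - 1) ** 2
    print(lhs, rhs)  # agree

    # infinite sum, |p| < 1
    p = 0.5
    print(sum(k * p**k for k in range(1, 200)), p / (p - 1) ** 2)  # both 2.0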



In Taxicab Geometry, what is the solution to d(P, A) = 2 d(P, B) for two points, A and B?



Taxicab and Euclidean geometry differ a great deal, due to the modified metric function:




$$d_T(A,B)=|x_a-x_b|+|y_a-y_b|$$



(Note that this means when measuring distance, it is not the length of the hypotenuse, but the sum of the legs of the same right triangle.)



My Main Problem



In Euclidean geometry, the answer to the question "Find the locus of points $X$ such that: $d(X, A) = 2 * d(X, B)$" yields a regular, Euclidean circle. A little bit of algebra makes this very trivial.



But what is the answer to the same problem for $d_T$?




What I Know So Far



This kind of geometry actually has a very interesting property, namely that as things rotate, their measures change. Consider the cases where points share either one of their coordinates. Many times, those situations yield the same answers as do their Euclidean counterparts.



Some things are noticeably different, though. For instance, a circle, as defined as the set of points a fixed distance from one point, actually comes out as a square, rotated 45 degrees. It is also trivial to illustrate that.



It did occur to me that the answer to this problem could be analogous to Euclidean geometry, and the solution may simply be a Taxicab circle (a square). But this didn't seem to work out. Plus, when I worked out the solution for points sharing an x or y coordinate, I ended up with two mirror-image line segments. But the general case, where the two points are corners of any rectangle, still eludes me. My second educated guess was that the solution could be a Euclidean circle, but this didn't work out either.



Lastly, some constructions seemed to differ depending on whether the points I chose formed the opposite diagonal corners of a general rectangle or of a square. E.g. $(0,0)$ and $(3,3)$ seem to be yet another type of exception.




Any thoughts on this problem would be greatly appreciated!


Answer



EDIT: alright, got it. There is a special case, when $AB$ are both on a horizontal or a vertical line. Then the shape asked for is a convex kite shape, two orthogonal edges of equal length with slopes $\pm 1,$ the other two edges with slopes out of $3,-3, \frac{1}{3},\frac{-1}{3}, $ the whole figure symmetric across the $AB$ line, as one would expect. Otherwise, draw the rectangle with $AB$ as the endpoints of one diagonal. If this rectangle is a square, or the longer edge is no more than twice the length of the other, the shape is an isosceles trapezoid, as described below. If the longer edge is more than twice as long as the other, the shape is a nonconvex hexagon, as in Rahul's image. Furthermore, if we fix the longer edge and call it length $1,$ as the shorter edge goes to length $0$ the resulting hexagon shape comes to resemble the kite shape, which is its limit. Anyway, there are just the three possibilities. Note that, in the cases when $AB$ are not in a line parallel to the $x$ or $y$ axis, we can color the nine regions of the tic-tac-toe board as a chess or checkerboard; then regions the same color as the rectangle have segments (if any) of slope $\pm 1,$ while any segments in the regions that are the other color have slope among $3,-3, \frac{1}{3},\frac{-1}{3}. $ It is all pretty rigid.



ORIGINAL: A little to add to what Ross and Rahul pointed out. One segment is automatically there, the segment inside the rectangle passing through the 2/3 point along the $AB$ diagonal, but with slope $\pm 1,$ and closer to $B.$ This segment is part of a "circle" around $A$ as well as a "circle" around $B.$ There is another such with the same slope, passing through a point we might as well call $A',$ which is the reflection of $A$ along the line $AB,$ the same distance from $B$ as $A$ is but on the other side. This can be the longest edge involved, as it continues through one of Ross's tic-tac-toe regions.



There may also be common "circle" segments rotated $90^\circ$ from those, as in Rahul's diagram. So one ought to check for those first, in the nine regions, any segments with slope $\pm 1$ that are the overlap of a circle around $A$ and circle around $B.$ Rahul has shown that you can get three such segments, and there are automatically at least two, but I do not see how one could get four such. So I think the diagram you are asking about is likely to be either a quadrilateral (an isosceles trapezoid) or a non-convex hexagon, being an isosceles trapezoid with one vertex replaced by an extra triangle, one edge of which is orthogonal to the two parallel edges of the trapezoid part. For that matter, there are really only two genuinely distinct slopes allowed, $\pm 1$ and $3,-3, \frac{1}{3},\frac{-1}{3}. $


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hopital's rule? I know when I use L'Hopital I easily get $$ \lim_{h\rightarrow 0}...