Saturday 28 February 2015

number theory - Solving quadratic equations in modular arithmetic

Is there a general way to solve quadratic equations modulo n? I know how to use Legendre and Jacobi symbols to tell me if there's a solution, but I don't know how to get a solution without resorting to guesswork. Some examples of my problems:



$x^2 \equiv 8 \pmod{2009}$
$x^2 + 3x + 1 \equiv 0 \pmod{13}$



Thanks
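
Both examples are small enough that exhaustive search over the residues settles them; a minimal Python sketch (for large moduli one would want Tonelli-Shanks and Hensel lifting instead):

```python
def solve_quadratic_mod(a, b, c, n):
    """All x in [0, n) with a*x^2 + b*x + c ≡ 0 (mod n), by brute force."""
    return [x for x in range(n) if (a * x * x + b * x + c) % n == 0]

print(solve_quadratic_mod(1, 0, -8, 2009))  # x^2 ≡ 8 (mod 2009)
print(solve_quadratic_mod(1, 3, 1, 13))     # x^2 + 3x + 1 ≡ 0 (mod 13)
```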

real analysis - Prove that a sequence converges to a finite limit iff lim inf equals lim sup

This problem is purely for my own benefit, so I'd appreciate it if you offer help but don't spoil the proof for me. I've worked out the following solution, but I want to make sure that my reasoning doesn't have any holes in it:




Suppose we have the sequence $\{a_n\}$, and $\lim_{n \rightarrow \infty}\{a_n\}=L$. Then given $\epsilon>0$, we can choose $N \in \mathbb{N}$ such that
$$\left| a_n-L \right|<\frac{\epsilon}{2}$$
for every term $a_n \in \{a_n\}$ such that $n>N$. Now for $\inf_{n>N}\{a_n\}$, (that is, to my understanding, the infimum of the sequence with the first N terms truncated) we must have some $a_{low} \in \{a_n|n>N\}$ such that
$$a_{low}-\inf_{n>N}\{a_n\}<\frac{\epsilon}{2}$$
because otherwise $\inf_{n>N}\{a_n\}-\frac{\epsilon}{2}$ would be a greater lower bound for $\{a_n|n>N\}$. Similar reasoning demonstrates that we must have some $a_{high}$ such that
$$\sup_{n>N}\{a_n\}-a_{high}<\frac{\epsilon}{2}$$
Now since $a_{high} \in \{a_n|n>N\}$, we must have
$$-\frac{\epsilon}{2}<a_{high}-L<\frac{\epsilon}{2}$$ and, adding this inequality to the above one gives
$$-\epsilon<\sup_{n>N}\{a_n\}-L<\epsilon$$

for $n>N$, and therefore $\lim_{N \rightarrow \infty}\sup_{n>N}\{a_n\}=L$. Likewise, since $a_{low} \in \{a_n|n>N\}$ we must have
$$\left|a_{low}-L\right|=\left|L-a_{low}\right|<\frac{\epsilon}{2}$$
So
$$-\frac{\epsilon}{2}<L-a_{low}<\frac{\epsilon}{2}$$ and adding this to the second inequality gives
$$-\epsilon<L-\inf_{n>N}\{a_n\}<\epsilon$$
for $n>N$. So we have $\lim_{N \rightarrow \infty}\inf_{n>N}\{a_n\}=\lim_{N \rightarrow \infty}\sup_{n>N}\{a_n\}=L$. Now conversely assume that $\lim \inf=\lim \sup$. We have
$$\inf_{n>N}\{a_n\} \leq a_n \leq \sup_{n>N}\{a_n\}$$
for $a_n \in \{a_n|n>N\}$. Since the limits of the left and right sides of the inequality are equal, the limit of the middle exists and equals that of the other two by the squeeze theorem (I've only seen the squeeze theorem proven for functions, but I believe it applies here since sequences are just functions with domains on the naturals). Therefore the sequence $\{a_n\}$ converges iff its $\lim \inf$ equals its $\lim \sup$ (and, as a corollary, the limit of the sequence always equals that of the infima and suprema).




Please tell me if you see any holes or things I could've done better, as I've made many stupid mistakes in proofs before. Thanks for your help!

limits - Is there a way to simplify this expression?

While this particular question is from a calculus class I'm taking, this issue has plagued me for some time; I simply didn't care enough to bother figuring it out. Now, however, it'll cost me if I don't get it, so I'd like to know how to simplify something like this:



$$\frac{\sqrt{2a + 2h + 1} - \sqrt{2a + 1}}{h}$$




This particular problem is attempting to find the derivative of $\sqrt{2x+1}$ (using limits, not just doing the derivative). So, one finds the derivative using limits through the limit as $h \to 0$ of $\frac{f(a+h)-f(a)}{h}$. However, when I put in the function $\sqrt{2x+1}$, I get the above expression. Is there a way to simplify this (and other difficult square roots) or am I just doing this wrong?



Thank you!
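
If a mechanical check helps, a CAS reproduces the simplification; a small sketch assuming SymPy is available:

```python
import sympy as sp

a, h = sp.symbols('a h', positive=True)
expr = (sp.sqrt(2*a + 2*h + 1) - sp.sqrt(2*a + 1)) / h

# The limit as h -> 0 should match the derivative of sqrt(2x+1) at x = a.
print(sp.limit(expr, h, 0))  # 1/sqrt(2*a + 1)
```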

discrete mathematics - using Gauss' algorithm (for linear congruences) for A > B



To solve $Bx \equiv A \pmod{m}$, use Gauss' algorithm.




The algorithm works perfectly when $A < B$. For example, to solve $6x \equiv 5 \pmod{11}$: $$x \equiv \frac{5}{6} \equiv \frac{5(2)}{6(2)} \equiv \frac{10}{12} \equiv \frac{10}{1}$$
so $x \equiv 10$



But when $A > B$, I run into problems. For example, trying to solve $7x \equiv 13 \pmod{100}$: $$x \equiv \frac{13}{7} \equiv \frac{13(15)}{7(15)} \equiv \frac{195}{105} \equiv \frac{95}{5} \equiv \frac{95(21)}{5(21)} \equiv \frac{1995}{105} \equiv \frac{95}{5}$$ and it continues to be $\frac{95}{5}$. Am I missing a step?



PS: I was applying the algorithm on random linear congruence problems I could find. The second example comes from http://www.johndcook.com/blog/2008/12/10/solving-linear-congruences/, which says the answer is $x \equiv 59$.





Answer



This answer resolved my question: it explains that Gauss' algorithm works only for prime moduli.
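
For composite moduli, the extended Euclidean algorithm (equivalently, Python's built-in modular inverse) always works when $\gcd(B, m)=1$; a minimal sketch:

```python
def solve_linear_congruence(b, a, m):
    """Solve b*x ≡ a (mod m); requires gcd(b, m) == 1.

    pow(b, -1, m) (Python 3.8+) computes the modular inverse via the
    extended Euclidean algorithm and raises ValueError if none exists.
    """
    return (pow(b, -1, m) * a) % m

print(solve_linear_congruence(6, 5, 11))    # 10, matching the first example
print(solve_linear_congruence(7, 13, 100))  # 59, matching the linked blog post
```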


logarithms - Prove that $5/2 < e < 3$?




Prove that $$\dfrac{5}{2} < e < 3$$




By the definition of $\log$ and $\exp$,
$$1 = \log(e) = \int_1^e \dfrac{1}{t} dt$$

where $e = \exp(1)$.



So given that $e$ is unknown, how could I prove this problem? I think I'm missing some important facts that could somehow help me rewrite $e$ in some form of $3$ and $5/2$. Any idea would be greatly appreciated.


Answer



$e=\lim_{n\to \infty}(1+\frac1n)^n$



the $r$th term is $t_r=\frac{n(n-1)(n-2)\cdots(n-r+1)}{r!\,n^r}=\frac1{r!}\prod_{0\le s<r}\left(1-\frac{s}{n}\right)$

So, $\lim_{n\to \infty}t_r=\frac1{r!}$




So, $e=1+\frac1{1!}+\frac1{2!}+\frac1{3!}+\cdots$



But $1+\frac1{1!}+\frac1{2!}+\frac1{3!}+\cdots>1+1+0.5=2.5$



Again,



$3!=1\cdot2\cdot3>1\cdot2\cdot2=2^2$



$4!=1\cdot2\cdot3\cdot4>1\cdot2\cdot2\cdot2=2^3$




So,



$e=1+\frac1{1!}+\frac1{2!}+\frac1{3!}+\cdots$
$<1+1+\frac12+\frac1{2^2}+\frac1{2^3}+\cdots$
$=1+(1+\frac12+\frac1{2^2}+\frac1{2^3}+\cdots)=1+\frac{1}{1-\frac12}=3$
as the terms inside the parentheses form an infinite geometric series with common ratio $\frac12$ and first term $1$
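
Both bounds are easy to observe numerically from the factorial series; a quick sketch:

```python
import math

partial = sum(1 / math.factorial(k) for k in range(20))  # 1 + 1/1! + 1/2! + ...
print(partial)            # 2.718281828..., e to within float precision
print(5/2 < partial < 3)  # True; the partial sums increase to e and stay below 3
```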


linear algebra - Basic concepts about matrices and their decompositions



I am studying the basics of linear algebra and I have some questions that I cannot settle on my own.




Let $A \in \Bbb R^{m \times n} $




  • Can $A$ always be expressed as an LU decomposition?

  • Can $A$ always be expressed in reduced echelon form?

  • Can $A$ be expressed as a QR decomposition?


Answer





  • First one: by definition a triangular matrix is square (wait, that sounds funny)...so the first one is not really applicable in general to $\mathbb{R}^{m\times n}$ - it is also not true that all square matrices have an LU decomposition. For necessary and sufficient conditions see this article: http://arxiv.org/pdf/math/0506382v1.pdf

  • Yes, row equivalence is an equivalence relation, and every equivalence class contains one matrix in reduced echelon form. You can prove it by induction on $m$. As a reference you can refer to Matrices and linear transformations, theorem 1.18 [Cullen].

  • Yes, ref: Proposition 16.11 in The linear algebra a beginning graduate student ought to know [Golan]
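
A quick numerical illustration, assuming NumPy and SciPy: SciPy's pivoted LU ($A = PLU$) accepts rectangular input by allowing trapezoidal factors (sidestepping the squareness caveat of the first point), and NumPy's QR covers the third point.

```python
import numpy as np
from scipy.linalg import lu

A = np.random.rand(4, 3)          # a generic rectangular matrix

P, L, U = lu(A)                   # pivoted LU: A = P @ L @ U
print(np.allclose(P @ L @ U, A))  # True

Q, R = np.linalg.qr(A)            # reduced QR: Q is 4x3 with orthonormal columns
print(np.allclose(Q @ R, A))      # True
```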


indeterminate forms - Limit $\frac{e^{-x^2/2}-\cos(x)}{x^3\sin(x)}$ as $x\to 0$



We have to find the limit:




$$\lim_{x\to 0}\dfrac{e^\frac{-x^2}{2}-\cos(x)}{x^3\sin(x)}$$



I was stuck, but was able to find the limit using series expansion as $\dfrac{1}{4}$.



How can we calculate the limit with standard limits like



$$\lim_{x\to 0}\dfrac{e^x-1}{x}=1\\\lim_{x\to 0}\dfrac{\sin(x)}{x}=1$$



etc.




Also, I didn't try L'Hôpital's rule, as that would be too complicated.


Answer



Using Taylor polynomials, you have
$$
\frac {1-\frac {x^2}2+\frac {x^4}{8}+O (x^6)-(1-\frac {x^2}2+\frac {x^4}{24}+O (x^6))}{x^3\sin x}
= \frac {\frac {x^4}{12}+O (x^6)}{x^3\sin x}\to\frac1 {12}.
$$
You cannot expect to use limits as simple as those in your question, because this limit depends on the terms of degree two in the expansion, while the two limits you quote depend on the terms of degree one.
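
The expansion is easy to double-check with a CAS; a one-line SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = (sp.exp(-x**2 / 2) - sp.cos(x)) / (x**3 * sp.sin(x))
print(sp.limit(f, x, 0))  # 1/12
```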


Friday 27 February 2015

calculus - Evaluating $\int\limits_0^\infty \! \frac{x^{1/n}}{1+x^2}\, \mathrm{d}x$



I've been trying to evaluate the following integral from the 2011 Harvard PhD Qualifying Exam. For all $n\in\mathbb{N}^+$ in general:



$$\int\limits_0^\infty \! \frac{x^{1/n}}{1+x^2} \ \mathrm{d}x$$



However, I'm not quite sure where to begin, even. There is a possibility that it is related to complex analysis, so I tried going at it with Cauchy's and also with residues, but I still haven't managed to get any further in solving it.


Answer



Beta function




I see @robjohn beat me to it.
The substitution is slightly different, so here it is.



Here's a simple approach using the beta function.



First, notice the integral diverges logarithmically for $n=1$, since the integrand goes like $1/x$ for large $x$.



Let $t=x^2$.
Then
$$\begin{eqnarray*}

\int_0^\infty dx\, \frac{x^{1/n}}{1+x^2}
&=& \frac{1}{2}\int_0^\infty dt\, \frac{t^{(1-n)/(2n)}}{1+t} \\
&=& \frac{1}{2}
\textstyle B\left(\frac{1}{2} + \frac{1}{2n}, \frac{1}{2} - \frac{1}{2n}\right) \\
&=& \frac{1}{2}
\textstyle\Gamma\left(\frac{1}{2} + \frac{1}{2n}\right)
\Gamma\left(\frac{1}{2} - \frac{1}{2n}\right) \\
&=& \frac{\pi}{2 \sin\left(\frac{\pi}{2} + \frac{\pi}{2n}\right)} \\
&=& \frac{\pi}{2} \sec \frac{\pi}{2n}.
\end{eqnarray*}$$




Some details



Above we use the integral representation for the beta function
$$B(x,y) = \int_0^\infty dt\, \frac{t^{x-1}}{(1+t)^{x+y}}$$
for $\mathrm{Re}(x)>0$, $\mathrm{Re}(y)>0$.
We also use Euler's reflection formula,
$$\Gamma(1-z)\Gamma(z) = \frac{\pi}{\sin\pi z}.$$



Addendum: A method with residue calculus




Let $t = x^{1/n}$. Then
$$\begin{eqnarray*}
I &=& \int_0^\infty dx\, \frac{x^{1/n}}{1+x^2} \\
&=& n\int_0^\infty dt\, \frac{t^n}{t^{2n}+1}
\end{eqnarray*}$$
Notice the last integral has no branch cuts for integer $n$.
The residues are at the roots of $t^{2n}+1=0$.
Consider the pie-shaped contour with one edge along the positive real axis, another edge along the line $e^{i\pi/n}t$ with $t$ real and positive, and the "crust" at infinity.




[Figure: pie-shaped contour]



There is one residue in the contour at $t_0 = e^{i\pi/(2n)}$.
The integral along the real axis is just $I$.
The integral along the other edge of the pie is
$$\begin{eqnarray*}
I' &=& n\int_\gamma dz\,\frac{z^n}{z^{2n}+1} \\
&=& n \int_\infty^0 dt e^{i\pi/n} \frac{t^n e^{i\pi}}{t^{2n}+1} \\
&=& -e^{i(n+1)\pi/n} I.
\end{eqnarray*}$$

The integral along the crust goes like $1/R^{2n-1}$ as the radius of the pie goes to infinity, and so vanishes in the limit.
Therefore,
$$\begin{eqnarray*}
I + I'
&=& 2\pi i \,\mathrm{Res}_{t=t_0}\, \frac{t^n}{t^{2n}+1} \\
&=& 2\pi i \frac{t_0^n}{2n t_0^{2n-1}}.
\end{eqnarray*}$$
Using this and the formula for $I'$ in terms of $I$ above we find
$$I = \frac{\pi}{2} \sec \frac{\pi}{2n},$$
as before.
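
A numerical spot-check of the closed form for a few $n$, assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

for n in (2, 3, 5):
    val, _ = quad(lambda x: x**(1 / n) / (1 + x**2), 0, np.inf)
    print(n, val, np.pi / 2 / np.cos(np.pi / (2 * n)))  # the two values should agree
```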



linear algebra - Matrices and their inverses

Having a bit of an issue answering this question. The answer seems simple but I am quite unsure. I think the answer is true, as it is a true/false question. The question is:



If a square matrix $B$ is invertible, then its inverse is also invertible - True or False? This is all the information I have. I do not know how to go about this; I've just been trying to read up on the laws of invertible matrices, but I am really confused. Going to extra classes tomorrow for it. As I said, I joined the class late and now I suddenly have an assignment to hand in, so I was just looking for an answer!



Please help me out! Thanks

real analysis - Proving an alternating Euler sum: $\sum_{k=1}^{\infty} \frac{(-1)^{k+1} H_k}{k} = \frac{1}{2} \zeta(2) - \frac{1}{2} \log^2 2$



Let $$A(p,q) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H^{(p)}_k}{k^q},$$
where $H^{(p)}_n = \sum_{i=1}^n i^{-p}$, the $n$th $p$-harmonic number. The $A(p,q)$'s are known as alternating Euler sums.




Can someone provide a nice proof that
$$A(1,1) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} H_k}{k} = \frac{1}{2} \zeta(2) - \frac{1}{2} \log^2 2?$$





I worked for a while on this today but was unsuccessful. Summation by parts, swapping the order of summation, and approximating $H_k$ by $\log k$ were my best ideas, but I could not get any of them to work. (Perhaps someone else can?) I would like a nice proof in order to complete my answer here.



Bonus points for proving $A(1,2) = \frac{5}{8} \zeta(3)$ and $A(2,1) = \zeta(3) - \frac{1}{2}\zeta(2) \log 2$, as those are the other two alternating Euler sums needed to complete my answer.





Added: I'm going to change the accepted answer to robjohn's $A(1,1)$ calculation as a proxy for the three answers he gave here. Notwithstanding the other great answers (especially the currently most-upvoted one, the one I first accepted), robjohn's approach is the one I was originally trying. I am pleased to see that it can be used to do the $A(1,1)$, $A(1,2)$, and $A(2,1)$ derivations.

Answer



$A(1,1)$:
$$
\begin{align}

\sum_{n=1}^N\frac{(-1)^{n-1}}{n}H_n
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\sum_{n=2}^N\frac{(-1)^{n-1}}{n}H_{n-1}\\
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{n=2}^N\sum_{k=1}^{n-1}\frac{(-1)^{n-1}}{n}\left(\frac1k+\frac1{n-k}\right)\\
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{n=2}^N\sum_{k=1}^{n-1}\frac{(-1)^{n-1}}{k(n-k)}\\
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{k=1}^{N-1}\sum_{n=k+1}^N\frac{(-1)^{n-1}}{k(n-k)}\\
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{k=1}^{N-1}\sum_{n=1}^{N-k}\frac{(-1)^{n+k-1}}{kn}\\
&=\color{#00A000}{\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}}
-\color{#0000FF}{\frac12\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=1}^{N-1}\frac{(-1)^{n-1}}{n}}\\
&+\color{#C00000}{\frac12\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}}\tag{1}
\end{align}

$$
where, using the Alternating Series Test, we have
$$
\begin{align}
&\color{#C00000}{\frac12\left|\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|}\\
&\le\frac12\left|\sum_{k=1}^{N/2}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|
+\frac12\left|\sum_{k=N/2}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|\\
&\le\frac12\cdot1\cdot\frac2N+\frac12\cdot\frac2N\cdot1\\
&=\frac2N\tag{2}
\end{align}

$$
Applying $(2)$ to $(1)$ and letting $N\to\infty$, we get
$$
\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}H_n=\color{#00A000}{\frac12\zeta(2)}-\color{#0000FF}{\frac12\log(2)^2}\tag{3}
$$
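
A quick numerical confirmation of $(3)$; a sketch:

```python
import math

N = 200_000
H = s = 0.0
for n in range(1, N + 1):
    H += 1.0 / n                   # harmonic number H_n
    s += (-1) ** (n - 1) * H / n   # partial sum of A(1,1)
print(s)                           # error is O(H_N / N) by the alternating series test
print(math.pi**2 / 12 - math.log(2)**2 / 2)
```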


Proof with induction on a sequence

Let $\{a_k\}_{k=0}^\infty$ be a sequence where

  • $a_0 = 0$
  • $a_1 = 0$
  • $a_2 = 2$
  • $\forall k \geq 3, a_k = a_{\lfloor k/2 \rfloor} + 2$


    Show that every element of this sequence is even.




    I am stuck on the induction step, and can't seem to prove that $a_n$ even $\implies a_{n+1}$ even. Could someone please give me some hints?


    number theory - Square and cubic roots in $\mathbb Q(\sqrt n)$



    Here is my question :





    Let $n$ a squarefree positive integer, $m \ge 2$ an integer and $a+b \sqrt n \in\mathbb Q (\sqrt n).$ What (sufficient or necessary) conditions should $a$ and $b$ satisfy so that $a+b \sqrt n$ has a $m$-th root in $\mathbb Q (\sqrt n)$?




    Here is my attempt :
    I tried the case $m=2$. If $\sqrt{a+b \sqrt n} = c+d\sqrt n$ with $c,d \in \mathbb Q$ then
    $$ a=c^2+d^2n, \qquad b=2cd. $$ Assuming $b \neq 0$, I get $c^2 + n\left(\frac{b}{2c}\right)^2 = a$, and for instance $c = \pm \sqrt{\frac{a+\sqrt{a^2-nb^2}}{2}}$, so it is necessary that $\frac{a+\sqrt{a^2-nb^2}}{2}$ be a square in $\mathbb Q$ (and then $d$ is also rational).



    We may find better conditions than this one. But I don't know how to handle the cases $m \ge 3$, because the calculations become difficult. Is there some theoretical approach (e.g. Galois theory) to treat this problem?




    Thank you !


    Answer



    As suggested by @franz lemmermeyer, a theoretical approach would certainly consist in an adequate global-local principle (i.e. ultimately CFT), but there could be technical difficulties when ramification comes into play. Take a general number field $K$, but to avoid petty trouble, assume that the given integer $m$ is odd. The global-local principle for $m$-th powers consists in studying the kernel of the natural map from the global group $K^* / (K^*)^m$ to the direct sum of all the local groups $K_v^{*} / (K_v^{*})^m$. Given a finite set $S$ of primes of $K$, an element of $K^*$ which is not divisible by any prime $\mathcal L_{v}$ outside $S$ will be called an $S$-unit.

    The following global-local principle is valid: "an $S$-unit $\alpha$ of $K$ is a global $m$-th power iff, for any $\mathcal L_v$ outside $S$, $\alpha$ is an $m$-th power in the local field $K_v$" (Artin-Tate, chap. IX, thm. 1). The finite set $S$ is meant to give us a certain "room" adapted to the problem under study. Here we'll choose $S$ so that it contains all the infinite primes, all the primes dividing the given $m$ and the given $\alpha \in K^*$, as well as all the primes dividing $\operatorname{disc}(K)$. To decide if $\alpha$ is a global $m$-th power, we have only to look at its natural image in $K_v^* / (K_v^*)^m$ for any $\mathcal L_v$ outside $S$.

    Using the Chinese remainder theorem, we can suppose that $m = p^r$ for some rational prime $p$. Let $l$ be the rational prime under such an $\mathcal L_v$. By our choice of $S$, $l \neq p$, $K_v$ is an unramified extension of $\mathbf Q_l$, and $\alpha \in U_v$, the group of units of $K_v$. Let $\kappa_v$ be the residue field of $K_v$, a finite field whose degree over $\mathbf F_l$ equals the inertia index, which here equals the local degree $[K_v : \mathbf Q_l]$. It is classically known that $U_v$ is the direct product of a group $\cong(\kappa_{v})^*$ (via Hensel's lemma) and the group of principal units $U_1 = 1 + \mathcal L_v$. Since $l \neq p$, raising to a $p$-primary power is an automorphism of $U_1$, hence in the end $U_v / (U_v)^{p^r} \cong \kappa_v^* / (\kappa_v^*)^{p^r}$.



    Let us now switch to the case at hand, where $K$ is a quadratic field. We have only to consider two cases: either $l$ is inert in $K$, or $l$ is split. In the first case, $\kappa_l^*$ is cyclic of order $l^2-1$; in the second, $\kappa_v^*$ is cyclic of order $l-1$ for either of the two $v$'s above $l$. Define $W_r(l)$ to be the quotient $\kappa_l^*$ mod $p^r$, or the product of the two quotients $\kappa_v^*$ mod $p^r$. We know $W_r(l)$ explicitly (without feeling like writing it down!), and the conclusion is: let $\alpha \in K^*$; choose $S$ as above; then $\alpha$ is a $p^r$-th power in $K^*$ iff for any $l$ outside $S$, the natural image of $\alpha$ in $W_r(l)$ is trivial.


    real analysis - Mean value theorem, second derivative, in $mathbf{R}^n$.



    Why is the following statement true:



    Suppose that $f$ is twice continuously differentiable on open set $U \subset \mathbf{R}^n$. Then for all $x, y \in U$ there exists $t \in [0, 1]$ such that $z = x + t(y - x)$ and such that

    $$
    f(y) = f(x) + \langle \nabla f(x), y- x \rangle + (1/2) \langle y - x, \nabla^2f(z)(y-x)\rangle.
    $$



    It vaguely looks like an application of the mean value theorem, but I can't seem to show it.


    Answer



    General domains



    The claim is false for a general open connected set $U$. For a counterexample, let $U=\mathbb{R}^2\setminus \{(x_1, 0):x_1\ge 0\}$ be a slit plane. Define
    $$

    f(x) = \begin{cases} x_1^3\quad &\text{if }x_1,x_2>0\\ 0 &\text{otherwise} \end{cases}
    $$
    This is a twice continuously differentiable function: e.g., the second derivative in the $x_1$ direction is
    $$
    \frac{\partial^2 f}{\partial x_1^2}(x) = \begin{cases} 6x_1\quad &\text{if }x_1,x_2>0\\ 0 &\text{otherwise} \end{cases}
    $$
    which is continuous in $U$. The other second partials are identically zero.



    Choose $x=(1, -1)$ and $y=(1, 1)$. Then the claim takes the form
    $$

    1 = (1/2) \langle y - x, \nabla^2f(z)(y-x)\rangle = 2 \frac{\partial^2 f}{\partial x_2^2}(z) = 0
    $$
    which is false.






    Convex domains



    The correct statement would assume that $U$ is open and convex, not merely connected. Then the proof follows by considering the function $g(t) = f(x+t(y-x))$ of a real variable $t\in [0, 1]$ and applying Taylor's theorem with the Lagrange form of the remainder.


    galois theory - Proving that $\mathbb{Q}$ adjoin the square root of every prime is an infinite extension

    How would one show that $[\mathbb{Q}(\sqrt2, \sqrt3,\ldots,\sqrt{p_n},\ldots):\mathbb{Q}]=\infty$? I know that we want to show there is no finite basis over the rationals, but I'm not sure how one would determine that such a basis does not exist.

    Valid proof by induction for modulus of a product of complex numbers



    I want to prove that $$|z_1 z_2 z_3\cdots z_n|=|z_1| | z_2| |z_3|\cdots|z_n|$$



    Now this really feels like something I can just throw induction at.




    Base case: $n=2$.



    $|z_1 z_2|=|z_1| |z_2|$ follows directly from expressing these numbers in exponential form.



    Induction hypothesis:



    Now assume that for some fixed $k \ge 2$ we have



    $|z_1 z_2 z_3\cdots z_k|=|z_1| | z_2| |z_3|\cdots|z_k|$.




    Inductive step:



    Call the product of the first $k$ factors $w$:



    $$|z_1 z_2 z_3\cdots z_k z_{k+1}|=| w \space z_{k+1}|=|w||z_{k+1}|$$



    Now use the induction hypothesis:



    $$|w||z_{k+1}|=|z_1| | z_2| |z_3|\cdots|z_k||z_{k+1}|.$$




    The aforementioned theorem holds by the principle of induction. QED



    Written like this, the modulus really feels like a homomorphism. Did I do my proof correctly? Should I elaborate? Is my writing clear? Any tips or recommendations? I really feel induction proofs are a bit rigid sometimes; the interesting ones are the ones where the inductive step isn't straightforward at all and requires some extra skill or inspiration.


    Answer



    Your solution is perfect. There is nothing to add. Perhaps only this for $n=2$:



    $$ |zw| = \sqrt{zw \cdot \overline{zw}} = \sqrt{z\overline{z}}\cdot \sqrt{w\overline{w}} = |z||w|$$


    number theory - Prove that $M_n\mid (i+n)(i-n)$ and $M_n\mid(i+n-1)(i-n+1)$




    My friend gave me a problem:




    Prove that if $M_n$ is the $n^\text{th}$ odd number, then for all integers $i$, $$M_n\mid (i+n)(i-n)\quad\text{and}\quad M_n\mid(i+n-1)(i-n+1)\tag*{$[1]$}$$




    The following was my attempt at a proof:







    Attempt.



    Firstly, $(i+n)(i-n)=i^2-n^2$ and $(i+n-1)(i-n+1)=i^2-(n-1)^2$. Note that $$\begin{align}i^2-(n-1)^2&=i^2-(n^2+1-2n^2) \\ &=i^2-n^2+(2n^2-1) \\\therefore M_n &{\;\ }|\,\,\,2n^2-1.\tag*{$\big(\because M_n\mid i^2-n^2\big)$}\end{align}$$



    Now since $M_n$ is the $n^\text{th}$ odd number, then $M_n=2n-1$. It thus suffices to prove that $$2n-1\mid 2n^2-1.$$ However, there are cases where $2n^2-1$ is prime, and clearly, $2n-1\neq 2n^2-1$ unless $n\in\{0,1\}$ so I must have done something wrong... however, I didn't think I did, and I thought the question was wrong. So I asked my friend for the proof and he did this:






    Proof.




    Ignore the case where $n=1$ since that would mean $M_n=1$ and $1$ divides everything.



    Suppose that $3\mid i^2-1=(i+1)(i-1)$. Notice that $3\mid i^2-1-3=i^2-4=(i+2)(i-2)$. $$\therefore 3\mid (i+2)(i-2)\Leftrightarrow 3\mid (i+1)(i-1)$$ or $$M_n\mid (i+n)(i-n)\quad\text{and}\quad M_n\mid(i+n-1)(i-n+1).\tag{$n=2$}$$ Suppose that $5\mid i^2-4=(i+2)(i-2)$. Notice that $5\mid i^2-4-5=i^2-9=(i+3)(i-3)$. $$\therefore 5\mid (i+3)(i-3)\Leftrightarrow 5\mid(i+2)(i-2)$$ or $$M_n\mid (i+n)(i-n)\quad\text{and}\quad M_n\mid(i+n-1)(i-n+1).\tag{$n=3$}$$ Suppose that $7\mid i^2-9=(i+3)(i-3)$. Notice that $7\mid i^2-9-7=i^2-16=(i+4)(i-4)$. $$\therefore 7\mid (i+4)(i-4)\Leftrightarrow 7\mid(i+3)(i-3)$$ or $$M_n\mid (i+n)(i-n)\quad\text{and}\quad M_n\mid(i+n-1)(i-n+1).\tag{$n=4$}$$ Clearly, this would continue ad infinitum if (and only if!) $$\sum_{j=1}^k (2j-1)=k^2\tag{for some $k$}$$ which is a well-known theorem. Since this is true, $[1]$ is therefore true. $\;\bigcirc$






    His proof looks correct, I don't doubt... but what is wrong with my own attempt? It's late for me now, as well as my friend, so he went to bed, and unfortunately he didn't have the time to tell me.



    Hence, I didn't go to bed, and I came to the MSE!




    May someone please tell me where my mistake is in my attempt of the proof? I cannot find it, and surely there must be at least one... right?



    Thank you in advance.


    Answer



    Your mistake is that $(n-1)^2=n^2-2n+1$, but you write
    $$(n-1)^2=n^2-2n^2+1.$$
    You get stuck trying to prove that $2n-1\mid 2n^2-1$, which indeed fails for some $n$, when in fact this should be $2n-1\mid2n-1$, which is obviously true.



    Of course, this does not yet prove that $M_n$ divides both
    $$(i-n)(i+n)\qquad\text{ and }\qquad(i-n+1)(i+n-1),$$

    but only that $M_n$ divides their difference. You will have a hard time proving the statement however, because it is false (as noted in the comment below). This is illustrated very clearly by taking $i=0$; it is clear that $2n-1$ divides $-n^2$ if and only if $n=1$ (or $n=0$ if that is allowed).
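
A two-line brute-force search makes the failure concrete; a sketch:

```python
for n in range(2, 6):
    M = 2 * n - 1
    bad = [i for i in range(M) if (i + n) * (i - n) % M != 0]
    print(n, M, bad)  # e.g. n=2, M=3: already i=0 fails, since 3 does not divide -4
```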


    Thursday 26 February 2015

    number theory - Are there infinitely many primes with digit sums of the form $n^2+1$?



    The motivation for this question is from one of well known Landau Problems which asks for proof of the statement:




    Are there infinitely many primes $p$ such that $p − 1$ is a perfect square? In other words: Are there infinitely many primes of the form $n^2 + 1$.





    But this is not what I am asking. Before I ask my question let me explain the scenario.



    A prime $p$ is called $\color{blue}{\text{Nice prime}} \ $if sum of its digits is of the form $n^2+1$ For example $37\rightarrow 3+7=3^2+1$ . The primes sum of whose digits is of the form $n^2$ (Eg $31\rightarrow 4=2^2$) or $n^2-1$ (Eg $71\rightarrow 8=3^2-1$) will be called Almost Nice primes.



    The question is are there infinitely many Nice primes?



    Now, I tried to find Nice and Almost Nice primes by hand till $400$ and here is what I've got:




    Nice primes are:




    $5,11,19,23,37,41,73,89,101,109,113,127,131,163,179,181,197,271,307,311,359,373$.



    While Almost Nice primes are:



    $n^2\rightarrow 1,3,31,79,97,103,211,277,367,349$



    $n^2-1\rightarrow 3,17,53,71,107,233,251$





    There is a reason why I called primes whose sum of digits is of the form $n^2$ and $n^2-1$ Almost nice primes.



    If you have an Almost Nice Prime of the form $n^2-1$ then adding $2\times10^k$ to it (here $k$ is the highest power of $10$ in decimal expansion of $n^2-1$) will give you a Nice prime if it is a prime (by it I mean $n^2-1+2\times10^k$). In a similar manner, if a prime $p$ is an Almost Nice prime of the form $n^2$ then if $n^2+10^k$ is a prime then it will be a Nice prime.



    But introducing Almost Nice prime is not much helpful as we have to make sure that $\text{Almost Nice prime (of the form $n^2$)}+10^k$ $\text{or Almost Nice prime (of the form $n^2-1$)+$2\times10^k$}$ is a prime before concluding that it met our Nice prime criteria.



    Since the post is long, I again remind you the question. Are there infinitely many Nice primes?



    Thanks.


    Answer




    Even the set of positive integers with digit sum $101$, only having the digits $0$ and $1$ and ending with a $1$, contains $$\binom{n-2}{99}$$ $n$-digit numbers.



    This means, that we have , for example, about $\large \color\red {10^{438}}$ numbers with a million digits in the set. Plenty of them should be primes, since they share no common factor.



    If $n$ increases, the binomial coefficient grows much faster than $n$ itself. So there is overwhelming statistical evidence that infinitely many Nice primes exist.



    Of course, such evidence is not a proof.
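
For what it's worth, the hand computation is easy to reproduce by machine; a sketch:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def digit_sum(n):
    return sum(int(c) for c in str(n))

squares = {k * k for k in range(1, 10)}  # more than enough: digit sums below 400 are at most 21

nice = [p for p in range(2, 400) if is_prime(p) and digit_sum(p) - 1 in squares]
print(nice)  # close to the hand-computed list (it also catches p = 2, digit sum 2 = 1^2 + 1)
```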


    abstract algebra - Numbers of elements in field



    The question is list all those integers $n$ such that $1 \leq n \leq 10$ and there exists a field with $n$ elements.



    My approach:




    Of course $n$ can take values $2,3,5,7$ since they are prime and we know $ \mathbb Z_p$ is a field iff $p$ is a prime.



    Next I will use the fact that if $p(x)$ is an irreducible polynomial then the ideal $\langle p(x)\rangle$ is maximal, and that if $I$ is a maximal ideal of a ring $R$ then $R/I$ is a field.



    Now I consider the ideal $\langle x^2+x+1\rangle$ in $\mathbb F_2[x]$; since $x^2+x+1$ is irreducible over $\mathbb F_2$, the quotient $\mathbb F_2[x]/\langle x^2+x+1\rangle$ is a field. Any element of this field has the form $ax+b$ with $a,b \in \mathbb F_2$, which implies this field has $4$ elements.



    By the same type of argument, taking the quotient of $\mathbb F_2[x]$ by an irreducible cubic and of $\mathbb F_3[x]$ by an irreducible quadratic, I have found fields with $8$ and $9$ elements respectively.



    However I cannot find any field containing $6$ or $10$ elements. How to confirm whether there really are fields containing $6$ or $10$ elements?




    Now I have found that there are fields with $2,3,4,5,7,8,9$ elements. My method is a bit tedious. Is there a simpler, more direct computation?


    Answer



    A finite field has prime characteristic.



    If $R$ is any unital ring then there is a unique unital ring homomorphism
    $\phi:\Bbb Z\to R$. Either $\phi$ is injective ($R$ has characteristic zero) or $\ker \phi=n\Bbb Z$ ($n\in\Bbb N$); in this case $R$ has
    characteristic $n$. Then $\Bbb Z/n\Bbb Z$ is a subring of $R$.



    If $R$ has composite characteristic $n=ab$ (a proper factorisation)
    then $n1_R=0$ in $R$ but $n1_R=(a1_R)(b1_R)$ and $a1_R\ne0$

    and $b1_R\ne0$. Therefore $R$ has zero-divisors and is not a field.



    Therefore every finite field has prime characteristic. It
    has a subfield $\Bbb Z/p\Bbb Z$, and as it is a vector space
    over this subfield, it has $p^d$ elements for some $d\in\Bbb N$.
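
So for $1\le n\le 10$ the upshot is "exactly the prime powers"; a quick check, assuming SymPy for the factorization:

```python
from sympy import factorint

def field_of_order_exists(n):
    # A finite field of order n exists iff n is a power of a single prime.
    return n > 1 and len(factorint(n)) == 1

print([n for n in range(1, 11) if field_of_order_exists(n)])  # [2, 3, 4, 5, 7, 8, 9]
```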


    sequences and series - Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)




    As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem)
    $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$
    However, Euler was Euler and he gave other proofs.



    I believe many of you know some nice proofs of this, can you please share it with us?


    Answer



    OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from the book" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9
    (EDIT: ...which is actually the proof that I read in Aigner & Ziegler).



    When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus

    $$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$
    Note that $1/\tan^2 x = 1/\sin^2 x - 1$.
    Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum
    the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$:
    $$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$
    Denoting the sum on the right-hand side by $S_n$, we can write this as
    $$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$



    Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with,
    $$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$

    Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short,
    $$S_n = 4 S_{n-1} + 2.$$
    Since $S_1=2$, the solution of this recurrence is
    $$S_n = \frac{2(4^n-1)}{3}.$$
    (For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)



    We now have
    $$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$
    Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
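
The squeeze is pleasant to watch numerically; a small sketch:

```python
import math

for n in (5, 10, 15):
    S = 2 * (4**n - 1) / 3                           # closed form for S_n
    partial = sum(1 / k**2 for k in range(1, 2**n))  # sum over k = 1 .. 2^n - 1
    scale = math.pi**2 / 4**(n + 1)
    print((S - (2**n - 1)) * scale, partial, S * scale)  # both bounds close in on pi^2/6
```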


    trigonometry - Trigonometric AP relation on sides of a triangle

    The sides of a triangle are in AP (Arithmetic Progression) and the greatest angle exceeds the least angle by $90$ degrees. Prove that the sides are proportional to $7^{\frac{1}{2}}+1$, $7^{\frac{1}{2}}$ and $7^{\frac{1}{2}}-1$.



    I tried to use the following property: "when the sides of a triangle are in AP then the $\cot$ of half the angles are also in AP", but I couldn't get anywhere. Please help.

    Wednesday 25 February 2015

    linear algebra - Given $A in mathbb{R}^{n times n}$ real, symmetric, positive semidefinite matrix, find all eigenvalue and eigenvector pairs



    Let $A \in \mathbb{R}^{n \times n}$ be a real, symmetric, positive semidefinite matrix with $n$ eigenvalues between $[0,1]$



    Suppose we have the following information:





    • $\lambda_{min}(A) = 0, \lambda_{max}(A) = 1$

    • $\det(A) = 0$

  • $A\mathbf{1} = 0$ (the all-ones vector of $\mathbb{R}^n$ is an eigenvector
    associated with the eigenvalue $0$)



    Since $A$ is symmetric, we know that all eigenvalues are going to be real.



    Is it possible to find all the other eigenvalues and eigenvectors?




    This sounds like an impossible question because the information presented is minimal, but if there is some result that the distribution of the eigenvalues and eigenvectors are..."uniformly spaced" for this kind of matrix, then perhaps we can find all the other eigenvalue-eigenvector pairs.



    Anyone has any ideas how to approach this question?


    Answer



    There is no way to know what the other eigenvectors or eigenvalues are. Consider a matrix defined as $$A=\lambda_1v_1v_1^T + ... + \lambda_nv_nv_n^T$$
    Where $\{v_1,...,v_n\}$ is an orthonormal basis for $\mathbb{R}^n$. Then clearly every $\lambda_i$ is an eigenvalue with eigenvector $v_i$. You can fix $\lambda_1=0$, $v_1=(1,...,1)$ and $\lambda_2=1$, but any choices over the other eigenvalues and eigenvectors on the definition of $A$ will give you a matrix that will hold all the constraints.
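
A concrete instance of the construction in this answer, assuming NumPy (QR completes the all-ones direction to an orthonormal basis):

```python
import numpy as np

n = 4
v1 = np.ones(n) / np.sqrt(n)   # unit all-ones vector, to carry the eigenvalue 0
M = np.column_stack([v1, np.random.randn(n, n - 1)])
Q, _ = np.linalg.qr(M)         # first column of Q spans the same line as v1

lams = [0.0, 1.0, 0.3, 0.7]    # lambda_min = 0, lambda_max = 1, the rest arbitrary in [0,1]
A = sum(l * np.outer(Q[:, i], Q[:, i]) for i, l in enumerate(lams))

print(np.round(np.linalg.eigvalsh(A), 6))  # [0, 0.3, 0.7, 1]
print(np.round(A @ np.ones(n), 6))         # the zero vector: A·1 = 0
```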


    calculus - How to Evaluate $\int^\infty_0\int^\infty_0 e^{-(x+y)^2}\, dx\, dy$

    How do you get from$$\int^\infty_0\int^\infty_0e^{-(x+y)^2} dx\ dy$$to

    $$\frac{1}{2}\int^\infty_0\int^u_{-u}e^{-u^2} dv\ du?$$ I have tried using a change of variables formula but to no avail.
    Edit: OK, as suggested, I set $u=x+y$ and $v=x-y$, so I can see this gives $dx\,dy=\frac{1}{2}\,du\,dv$, but I still can't see how to get the new integration limits. Sorry if I'm being slow.
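
Comparing the two forms numerically at least confirms the change of variables; a sketch assuming SciPy (`dblquad` treats its first lambda argument as the inner variable):

```python
import numpy as np
from scipy.integrate import dblquad

# Original: inner variable from 0 to inf, outer variable from 0 to inf.
orig, _ = dblquad(lambda x, y: np.exp(-(x + y)**2), 0, np.inf, 0, np.inf)

# After u = x+y, v = x-y: x,y > 0 becomes u > 0 and -u < v < u
# (from x = (u+v)/2 > 0 and y = (u-v)/2 > 0), with Jacobian factor 1/2.
new, _ = dblquad(lambda v, u: 0.5 * np.exp(-u**2), 0, np.inf, lambda u: -u, lambda u: u)

print(orig, new)  # both ≈ 0.5
```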

    complex analysis - Contour method to show that $\int_0^\infty\frac{x-\sin x}{x^3}\,dx=\frac\pi4$




    Show that $$\int_{0}^{\infty} \frac{x - \sin(x)}{x^3} \, dx = \frac{\pi}{4}$$





    My attempt is as follows: Let $$f(z) = \frac{z - i e^{iz}}{z^3}$$ and consider the contour on $[\epsilon, R]$ followed by a semicircular arc in the counter clockwise direction, then on $[-R, -\epsilon]$, then the semicircular clockwise contour avoiding the origin. We have, then, that



    $$0 = \int_{\Gamma} f(z) dz = \int_{[\epsilon, R]} f(t) dt + \int_{C_R}f(Re^{it})Rie^{it}dt + \int_{[-R, -\epsilon]}f(t)dt + \int_{C_{\epsilon}}f(\epsilon e^{-it})\epsilon i e^{-it} dt$$



    Then the first and third integrals ( $I_1$ and $I_3$) combine so that



    $$I_1 + I_3 = 2\int_{\epsilon}^R \frac{t - \sin{t}}{t^3}\,dt$$



    Further,




    $$|I_{C_R}| \leq \int_0^\pi \left|\frac{Re^{it} - ie^{-R\sin{t}}e^{iRcos{t}}}{R^2 e^{2it}} \right|dt \rightarrow 0 \text{ as } R\rightarrow \infty$$



    (I've omitted the details, it isn't too bad to bound)



    However, I'm having trouble computing the limit



    $$\lim_{\epsilon \rightarrow 0}\int_{C_{\epsilon}} f(\epsilon e^{-it})\epsilon i e^{-it} dt$$



    No matter which way I look at it, it seems like this limit does not exist. Perhaps I'm seeing something wrong or have I chosen a bad $f(z)$?


    Answer




    METHODOLOGY $1$: Straightforward Approach



    We begin by letting $I$ be the integral of interest given by



    $$\begin{align}
    I&=\int_0^\infty \frac{x-\sin(x)}{x^3}\,dx\\\\
    &=\frac12 \text{Re}\left(\lim_{\varepsilon\to 0^+,R\to \infty}\left(\int_{-R}^{-\varepsilon} \frac{x+ie^{ix}}{x^3}\,dx+\int_\varepsilon^R \frac{x+ie^{ix}}{x^3}\,dx\right)\right)
    \end{align}$$







    Next, we analyze the contour integral $J_{\varepsilon,R}$



    $$\begin{align}
    J_{\varepsilon,R}&=\oint_{C_{\varepsilon,R}}\frac{z+ie^{iz}}{z^3}\,dz\\\\
    &=\int_{-R}^{-\varepsilon} \frac{x+ie^{ix}}{x^3}\,dx+\int_\varepsilon^R \frac{x+ie^{ix}}{x^3}\,dx\\\\
    &+\int_\pi^0 \frac{\varepsilon e^{i\phi}+ie^{i\varepsilon e^{i\phi}}}{(\varepsilon e^{i\phi})^3}\,i\varepsilon e^{i\phi}\,d\phi+\int_0^\pi \frac{Re^{i\phi}+ie^{iR e^{i\phi}}}{(R e^{i\phi})^3}\,iR e^{i\phi}\,d\phi
    \end{align}$$







    Expanding $e^{i\varepsilon e^{i\phi}}$ as



    $$e^{i\varepsilon e^{i\phi}}=1+i\varepsilon e^{i\phi}-\frac12 \varepsilon^2e^{i2\phi}+O\left(\varepsilon^3\right)$$



    reveals that the integration over the semicircle of radius $\epsilon$ is



    $$\begin{align}
    \int_\pi^0 \frac{\varepsilon e^{i\phi}+ie^{i\varepsilon e^{i\phi}}}{(\varepsilon e^{i\phi})^3}\,i\varepsilon e^{i\phi}\,d\phi&=\frac1{\varepsilon^2}\underbrace{\int_0^\pi e^{-i2\phi}\,d\phi}_{=0}-\frac12 \int_0^\pi (1)\,d\phi +O(\varepsilon)\\\\
    &=-\frac\pi2 +O(\varepsilon)

    \end{align}$$






    Furthermore, it is easy to show that the integration over the semicircle of radius $R$ is



    $$\begin{align}
    \int_0^\pi \frac{Re^{i\phi}+ie^{iR e^{i\phi}}}{(R e^{i\phi})^3}\,iR e^{i\phi}\,d\phi=O\left(\frac1R\right)
    \end{align}$$







    Since $\frac{z+ie^{iz}}{z^3}$ is analytic in and on $C_{\varepsilon,R}$, Cauchy's integral theorem guarantees that $J_{\varepsilon,R}=0$. Putting everything together, we see that



    $$\begin{align}
    0&=J_{\varepsilon,R}\\\\
    &=\int_{-R}^{-\varepsilon} \frac{x+ie^{ix}}{x^3}\,dx+\int_\varepsilon^R \frac{x+ie^{ix}}{x^3}\,dx\\\\
    &-\frac\pi2+O\left(\varepsilon\right)+O\left(\frac1R\right)
    \end{align}$$




    whereupon taking the limit as $\varepsilon\to 0^+$ and $R\to \infty$ yields



    $$I=\frac\pi4$$



    And we are done!






    METHODOLOGY $2$: Simplifying Using Integration by Parts




    We can make our life much easier if we apply successive integration by parts. We now proceed accordingly.



    Let $I$ be the integral given by



    $$\begin{align}
    I&=\int_0^\infty \frac{x-\sin(x)}{x^3}\,dx\tag1
    \end{align}$$



    Integrating by parts the integral on the right-hand side of $(1)$ with $u=x-\sin(x)$ and $v=-\frac{1}{2x^2}$, we find that




    $$I=\frac12\int_0^\infty \frac{1-\cos(x)}{x^2}\,dx \tag2$$



    Integrating by parts the integral on the right-hand side of $(2)$ with $u=1-\cos(x)$ and $v=-\frac1x$ reveals



    $$\begin{align}
    I&=\frac12 \int_0^\infty \frac{\sin(x)}{x}\,dx\tag3
    \end{align}$$



    We will evaluate the integral in $(3)$ using contour integration.







    We analyze the contour integral $J(\varepsilon,R)$, where $R>0$ and $\varepsilon>0$, as given by



    $$\begin{align}
    J(\varepsilon,R)&=\oint_{C_{\varepsilon,R}}\frac{e^{iz}}{z}\,dz\\\\
    &=\int_{-R}^{-\varepsilon} \frac{e^{ix}}{x}\,dx+\int_\pi^0 \frac{e^{i\varepsilon e^{i\phi}}}{\varepsilon e^{i\phi}}\,i\varepsilon e^{i\phi}\,d\phi+\int_{\varepsilon}^R \frac{e^{ix}}{x}\,dx+\int_\pi^0 \frac{e^{iR e^{i\phi}}}{R e^{i\phi}}\,iR e^{i\phi}\,d\phi\tag4
    \end{align}$$



    Since $\frac{e^{iz}}{z}$ is analytic in and on the contour defined by $C_{\varepsilon,R}$, Cauchy's Integral Theorem guarantees that $J(\varepsilon,R)=0$.




    First, note from symmetry that



    $$\int_{-R}^{-\varepsilon} \frac{e^{ix}}{x}\,dx+\int_{\varepsilon}^R \frac{e^{ix}}{x}\,dx=i2\int_{\varepsilon}^R \frac{\sin(x)}{x}\,dx$$



    Furthermore, we have



    $$\lim_{\varepsilon\to 0,R\to \infty}\int_{\varepsilon}^R \frac{\sin(x)}{x}\,dx=\int_0^\infty \frac{\sin(x)}{x}\,dx\tag5$$







    Second, it is easy to see that



    $$\lim_{\varepsilon\to 0}\int_\pi^0 \frac{e^{i\varepsilon e^{i\phi}}}{\varepsilon e^{i\phi}}\,i\varepsilon e^{i\phi}\,d\phi=-i\pi \tag6$$






    Third, noting that $\sin(\phi)\ge \frac{2\phi}{\pi}$ for $\phi\in [0,\pi/2]$, we see that



    $$\begin{align}

    \left|\int_\pi^0 \frac{e^{iR e^{i\phi}}}{R e^{i\phi}}\,iR e^{i\phi}\,d\phi\right|&=\left|\int_0^\pi ie^{iR\cos(\phi)}e^{-R\sin(\phi)}\,d\phi\right|\\\\
    &\le\int_0^\pi e^{-R\sin(\phi)}\,d\phi\\\\
    &=2\int_0^{\pi/2}e^{-R\sin(\phi)}\,d\phi\\\\
    &\le 2\int_0^{\pi/2}e^{-2R\phi/\pi}\,d\phi\\\\
    &=\frac{\pi(1-e^{-R})}{R}
    \end{align}$$



    Hence, we see that



    $$\lim_{R\to \infty}\int_\pi^0 \frac{e^{iR e^{i\phi}}}{R e^{i\phi}}\,iR e^{i\phi}\,d\phi=0\tag 7$$







    Finally, using $(5)-(7)$ in $(4)$ yields



    $$\int_0^\infty \frac{\sin(x)}{x}\,dx=\frac{\pi}{2}$$



    whence we find that



    $$\int_0^\infty \frac{x-\sin(x)}{x^3}\,dx=\frac\pi4$$
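
A direct numerical check (a sketch, assuming SciPy; the integrand is swapped for its Taylor polynomial near $0$ to avoid catastrophic cancellation):

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # (x - sin x)/x^3; for tiny x use the series 1/6 - x^2/120 + O(x^4)
    return (x - np.sin(x)) / x**3 if x > 1e-3 else 1/6 - x**2 / 120

val, _ = quad(f, 0, np.inf, limit=500)
print(val, np.pi / 4)  # ≈ 0.785398...
```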



    statistics - Mean concentration implies median concentration



    Exercise 2.14 in Wainwright, "High-Dimensional Statistics", states that if $X$ is such that $$P[|X-\mathbb{E}[X]|\geq t] \leq c_1 e^{-c_2t^2},$$ for $c_1, c_2$ positive constants, $t\geq 0$, then for any median $m_X$ it holds that $$P[|X-m_X|\geq t] \leq c_3 e^{-c_4t^2},$$ with $c_3=4c_1$ and $c_4=c_2/8$.



    I can get some loose concentration around the median using $|\mathbb{E}[X]-m_X|\leq \sqrt{\mathbb{V}[X]}$, but this does not achieve the constants proposed. Any ideas for how to get the suggested bound, or any other bound resembling it?


    Answer



    Let $\Delta = |EX - m_X|$. The idea of the proof is that if $X$ is too far from the mean, then $X$ is far from the median as well; if it is close, the upper bound becomes a triviality.




    Now consider the two cases for $t$:




    1. $t \ge 2\Delta$:



      which implies that $\frac{t}{2} \geq \Delta$.
      By the reverse triangle inequality, $|X - EX| \geq |X-m_x| - \Delta$. Thus
      \begin{align}
      P(|X - m_X|\geq t) \leq P(|X - m_X|\geq \frac{t}{2} + \Delta ) \leq P( |X - EX| \geq \frac{t}{2}) \leq c_1e^{-\frac{c_2t^2}{4}}.
      \end{align}



    2. $t < 2\Delta$:



      By the definition of median: $\frac{1}{2}\leq P(|X - EX| \geq \Delta ) \leq c_1e^{-c_2\Delta^2}$. Therefore, $2c_1e^{-c_2\Delta^2} \geq 1$.
      \begin{align}
      2c_1e^{-\frac{c_2}{4}t^2} \geq 2c_1e^{-\frac{c_2}{4}(2 \Delta)^2} = 2c_1e^{-c_2\Delta^2} \geq 1
      \end{align}

      Therefore, the required condition holds trivially.




    The final constants are $c_3 = 2c_1$ and $c_4 = \frac{c_2}{4}$. I don't know why I am getting better constants. Let me know if you spot an error.



    algebra precalculus - Find the fraction that creates a repeating decimal that repeats certain digits



    Is there any way to find the fraction $x/y$ that, when converted to a decimal, repeats a series of digits $z$? For example: ${x}/{y} = z.zzzzzzzz...$ or with actual numbers, $x/y = 234.234234234...$ (z is 234)



    If this is impossible, is there a way that does the same but the value to the left of the decimal is not $z$?



    Answer



    There's a nice fact (derived from the expression demonstrated by TonyK): The repeating decimal $0.zzz\dots$ can be represented by
    $$\frac{z}{10^{l(z)}-1}$$
    where $l(z)$ here denotes the number of digits of $z$.



    Now if we want instead $z.zzz\dots$, all we have to do is multiply the above expression by $10^{l(z)}$ (or as Joffan points out - simply add $z$), getting
    $$\frac{10^{l(z)}z}{10^{l(z)}-1}$$
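
A sketch of the same computation with exact rational arithmetic:

```python
from fractions import Fraction

def repeating(z):
    """The fraction equal to z.zzz... for a positive integer z."""
    l = len(str(z))                        # l(z), the number of digits of z
    return Fraction(10**l * z, 10**l - 1)

print(repeating(234))         # 26000/111  (= 234000/999 reduced)
print(float(repeating(234)))  # 234.23423423...
```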


    Monday 23 February 2015

    lambert w - Solving an exponential function

    I have the below exponential function which I wish to solve it for $b$. Other than resorting to the Lambert W function, is there alternative way of representing the solution?



    $$ \frac{(1+a)(1-b)}{ab +a -b +1} = \exp \left({\frac{2a(ab-a-b)}{ab +a -b +1}}\right)$$ where $a,b \in (0,1)$.



    Thanks a lot.

    number theory - Problem in proof of: Show the order $d$ of $a$ modulo $m$ exists and $d\mid\phi(m)$



    Theorem: Let $m\in\mathbb{N}$ and $a\in\mathbb{Z}$ satisfy $(a,m)=1$. Then the order $d$ of $a$ modulo $m$ exists, and $d\mid\phi(m)$.



    Proof: By Euler's theorem, one has $a^{\phi(m)}\equiv 1\pmod{m}$, and so the order of $a$ modulo $m$ clearly exists.



    Suppose then that $d$ is the order of $a$ modulo $m$, and further that $a^k\equiv 1\pmod{m}$.



    Then it follows from the division algorithm that there exist integers $q$ and $r$ with $k=dq+r$ and $0\leq r<d$.


    But then we obtain $a^k=(a^d)^qa^r\equiv a^r\equiv 1\pmod{m}$,



    therefore $r=0$.



    Thus we have $d\mid k$ and in particular $d\mid\phi(m)$



    Point of contention: I understand that $a^k=(a^d)^qa^r$ and that $a^d\equiv 1\pmod{m}$ since it is the order of $a$ modulo $m$, therefore $(a^d)^q\equiv 1\pmod{m}$.



    But I don't understand how $a^r\equiv 1\pmod{m}$ follows.




    I understand the rest of the argument after this.


    Answer



    Since $(a^d)^q \equiv 1 \pmod{m}$ it follows that $a^r \equiv a^k \equiv 1 \pmod{m}$, where the second $\equiv$ is by hypothesis.
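
A small computational illustration of the theorem; a sketch with a naive search for the order:

```python
from math import gcd

def order(a, m):
    """Least d >= 1 with a^d ≡ 1 (mod m); assumes gcd(a, m) == 1."""
    d, x = 1, a % m
    while x != 1:
        x = x * a % m
        d += 1
    return d

def phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

m = 36
assert all(phi(m) % order(a, m) == 0 for a in range(1, m) if gcd(a, m) == 1)
print("every order divides phi(36) =", phi(m))  # d | phi(m), as the theorem says
```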


    matrices - Determinant of a symmetric Toeplitz matrix

    Let $A_n = (a_{ij})$ be an $n\times n$ matrix such that $a_{ii} = 0$, $a_{ij} = 2$ when $|j - i| = 1$, and $a_{ij} = 1$ otherwise. The question is to find the determinant in terms of $n$. I computed the first six terms, depending on $n$, but, unfortunately, no clear relationship was found. Here they are:
    \begin{align}

    \det A_1&=0,\\
    \det A_2&=-4,\\
    \det A_3&=8,\\
    \det A_4&=-7,\\
    \det A_5&=0,\\
    \det A_6&=7.
    \end{align}
    Laplace expansion turned out to be useful only if $n$ is known. How can one derive a formula for $\det A_n$ in terms of $n$?
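
Generating more data by machine makes pattern-hunting easier; a NumPy sketch:

```python
import numpy as np

def A(n):
    M = np.ones((n, n))
    np.fill_diagonal(M, 0)             # a_ii = 0
    for i in range(n - 1):
        M[i, i + 1] = M[i + 1, i] = 2  # a_ij = 2 when |j - i| = 1
    return M

for n in range(1, 11):
    print(n, round(np.linalg.det(A(n))))  # 0, -4, 8, -7, 0, 7, ...
```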

    analysis - Is $C_0(\mathbb R)$ separable?




    Let $C_0$ be the Banach space of all continuous real value functions whose limits in $\pm \infty$ is zero, with the supremum norm. Is this space separable?


    Answer



    There are continuous bijections between $\Bbb R$ and $(-1, 1)$. One example is
    $$
    h(x) = \frac{2}{\pi}\arctan(x)
    $$
    This means there is an isometry between your space and the space of continuous, real-valued functions $f$ on $(-1, 1)$ with $\lim_{x \to -1^+}f(x) = \lim_{x \to 1^-} f(x)= 0$ (still with the sup norm). The set of polynomials with rational coefficients (and with $(x^2 - 1)$ as a factor) is dense in this space.


    combinatorics - Number of r-tuples with integer entries in a given interval



    This is probably an easy question, but I couldn't figure it out.



    Let $n$ and $r$ be positive integers with $n \geq r$. Then the number of elements of the set
    \begin{align}
    \{(a_1,a_2,...,a_r) : 0 \leq a_1 \leq a_2\leq ... \leq a_r \leq n,\,\, a_i \in \mathbb{Z}\}
    \end{align}


    is equal to the following $r$-fold sum,
    \begin{align}
    \sum\limits_{a_1=0}^{n} \sum\limits_{a_2=a_1}^{n} .... \sum\limits_{a_r=a_{r-1}}^{n} 1
    \end{align}



    My question: Is there any simple expression for the above sum in terms of some binomial terms?


    Answer



    We are looking for number of tuples
    \begin{align}
    \{(a_1,a_2,...,a_r) : 0 \leq a_1 \leq a_2\leq ... \leq a_r \leq n,\,\, a_i \in \mathbb{Z}\} .
    \end{align}


    Let us say that you take $r$ random elements (repetitions allowed) from the set $$\left\{0,1,2,...,n \right\} $$
    and you got
    $$ a_1,a_2,...,a_r$$
    In how many ways can you put them into a tuple such that $ 0 \leq a_1 \leq a_2\leq ... \leq a_r \leq n$?

    There is exactly $1$ way. So you are just looking for the number of possible ways to take $r$ elements from $\left\{0,1,2,...,n \right\} $.



    From stars and bars you can do that in exactly
    $$ \binom{(n+1) + r - 1}{r} $$
    I put $n+1$ instead of $n$ because you have $n+1$ elements in $\left\{0,1,2,...,n \right\} $



    Example




    $n=2, r=3$
    $$2,2,2\\
    1,2,2\\
    1,1,2\\
    1,1,1\\
    0,1,1\\
    0,0,1\\
    0,0,0\\
    0,1,2\\
    0,0,2\\

    0,2,2 $$

    and it is equal to $\binom{(2+1)+3-1}{3} = 10$
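
Direct enumeration agrees with the stars-and-bars formula; a small sketch:

```python
from itertools import combinations_with_replacement
from math import comb

n, r = 4, 3
count = sum(1 for _ in combinations_with_replacement(range(n + 1), r))
print(count, comb((n + 1) + r - 1, r))  # both 35
```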


    Sunday 22 February 2015

    Linear transformation between vector spaces over $\mathbb{Z}_5$



    Let $f:\mathbb{Z}_{5}^{3} \rightarrow \mathbb{Z}_{7}^{2}$ be a transformation such that $$f\left(\begin{bmatrix}x\\y\\z\end{bmatrix}\right):=\begin{bmatrix}x+2z\\x+y-z\end{bmatrix}$$




    My homework first asks me to show this is a linear transformation and decide whether or not it's surjective. This $f$ transformation is then used in several questions which suggest its linearity (finding its Kernel, finding its transformation matrix, etc.).



    My confusion is that the question doesn't specify what field these vector spaces are over. So the question would be: if, say, I decide to pick $\mathbb{Z}_{5}$ as the field I work with, is there any way to make sense of this? After all, the operations in my two vector spaces are defined differently (namely, using their respective mod equivalence classes).



    If it helps, the context of this homework is diagonalization and triangularization of transformation matrices.


    Answer



    I think there is probably a typo, as I don't see how to talk about a linear transformation when the codomain is a vector space over a different field than the domain.



    Let's say $f:\Bbb Z_5^3\to\Bbb Z_5^2$.




    The matrix is $\begin{pmatrix}1&0&2\\1&1&-1\end{pmatrix}$.



    Row-reduce: $\to\begin {pmatrix}1&0&2\\0&1&2\end {pmatrix}$. It has rank $2$.



    Thus it's surjective.



    The kernel is $\{\begin{pmatrix}-2t\\-2t\\t\end{pmatrix}\,,t\in\Bbb Z_5\}$.
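
Everything here is cheap to verify by brute force over the $5^3 = 125$ vectors; a sketch (using the $\Bbb Z_5 \to \Bbb Z_5$ reading of the map, as in this answer):

```python
from itertools import product

p = 5
f = lambda x, y, z: ((x + 2 * z) % p, (x + y - z) % p)

vectors = list(product(range(p), repeat=3))
kernel = [v for v in vectors if f(*v) == (0, 0)]
image = {f(*v) for v in vectors}
print(kernel)              # {(-2t, -2t, t) mod 5 : t in Z_5}
print(len(image) == p**2)  # True, so f is surjective
```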


    complex analysis - Calculating integral using Cauchy integral formula in two variables



    I want to compute the integral: $\iint_{\partial_0P}\frac{1}{1-4zw}dzdw$ (or any similar integral) using Cauchy integral formula for two complex variables over polydiscs. The distinguished boundary is given by: $\partial_0P={\{(z,w):|z|=1, |w|=1}\}$.




    Nowhere online have I found an example of how to calculate such an integral. Would be really grateful if someone could show me how to do it, or give a link to a text with examples.


    Answer



    Usually, one evaluates such integrals by iterated integration. Sometimes it is considerably simpler to evaluate in a specific order, but here the situation is completely symmetric with respect to $z$ and $w$, so the order is irrelevant not only for the result but also for the way there. We evaluate the inner integral per Cauchy's integral formula/the residue theorem:



    $$\int_{\lvert z\rvert = 1} \frac{dz}{1-4zw} = -\frac{1}{4w}\int_{\lvert z\rvert = 1} \frac{dz}{z-\frac{1}{4w}} = \frac{\pi}{2iw},$$



    since we have $\lvert w\rvert = 1$ and hence the singularity $\frac{1}{4w}$ is enclosed by the unit circle. The outer integral becomes



    $$\frac{\pi}{2i}\int_{\lvert w\rvert = 1} \frac{dw}{w} = \pi^2.$$



    real analysis - How to calculate the improper integral $\int_{0}^{\infty}\Big(\frac{x}{\mathrm{e}^x-\mathrm{e}^{-x}}-\frac{1}{2}\Big)\frac{\mathrm{d}x}{x^2}$

    How to calculate the improper integral
    $$\int_{0}^{\infty}\Big(\frac{x}{\mathrm{e}^x-\mathrm{e}^{-x}}-\frac{1}{2}\Big)\frac{\mathrm{d}x}{x^2}$$
    Do you have any ideas?

    sequences and series - Using the Root Test to Solve $\sum_{k=1}^\infty\left(\frac{k^{2}}{2^{k}}\right)$




    $\sum_{k=1}^\infty(\frac{k^{2}}{2^{k}})$



    So I've taken the $kth$ root of the numerator and denominator and got:



    $\sum_{k=1}^\infty(\frac{k^{\frac{2}{k}}}{2})$



    What happens to $k^{\frac{2}{k}}$ as $k$ approaches infinity? What do I do now?


    Answer



    $$\lim_{k \to \infty} k^{\frac{2}{k}}$$
    $$\lim_{k \to \infty} e^{ \frac{2 \ln k}{k}} = 1$$




    as the exponent tends to $0$ as $k \to \infty$ by L'Hospital's Rule.



    Dividing by two, we get that the limit is $\frac{1}{2} < 1$ and thus the series converges by the root test.


    Finding the sum of squares of roots of a quartic polynomial.



    What is the sum of the squares of the roots of $ x^4 - 8x^3 + 16x^2 - 11x + 5 $ ?




    This question is from the 2nd qualifying round of last year's Who Wants to be a Mathematician high school competition which can be seen here:



    I know the answer (32) because that is also given in the link, and I have checked by brute force that the given answer is correct.



    However, I have made no progress at all in figuring out how to calculate the sum of squares of the roots - either before or after knowing the answer! I was expecting there to be a nice "trick" analagous to the situation if they had given a quadratic and asked the same question -- in that case I know how to get the sum and product of the roots directly from the coefficients, and then a simple bit of algebraic manipulation to arrive at the sum of squares of the roots.



    In this case (the quartic) I have no idea how to approach it, and I have not spotted any way to simplify the problem (e.g. I cannot see an obvious factorisation, which might have helped me).



    I've looked on the web at various articles which discuss the relationships between the coefficients of polynomials and their roots and - simply put - I found nothing which gave me inspiration for this puzzle.




    Given the audience for this test, it should be susceptible to elementary methods ... I would appreciate any hints and/or solutions!



    Thank you.


    Answer



    We have that



    $$(x-a)(x-b)(x-c)(x-d)=$$
    $$=x^4-(a+b+c+d)x^3+(ab+ac+ad+bc+bd+cd)x^2\\-(abc+abd+acd+bcd)x+abcd$$



    then by





    • $S_1=a+b+c+d$

    • $S_2=ab+ac+ad+bc+bd+cd$

    • $S_3=abc+abd+acd+bcd$

    • $S_4=abcd$



    $$a^2+b^2+c^2+d^2=S_1^2-2S_2$$




    and more in general by Newton's sums we have that




    • $P_1=a+b+c+d=S_1$

    • $P_2=a^2+b^2+c^2+d^2=S_1P_1-2S_2$

  • $P_3=a^3+b^3+c^3+d^3=S_1P_2-S_2P_1+3S_3$

    • $P_4=a^4+b^4+c^4+d^4=S_1P_3-S_2P_2+S_3P_1-4S_4$
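
A numerical cross-check of both routes (a sketch, assuming NumPy):

```python
import numpy as np

roots = np.roots([1, -8, 16, -11, 5])
print(sum(r**2 for r in roots).real)  # 32.0 (imaginary parts cancel in the sum)
print(8**2 - 2 * 16)                  # S1^2 - 2*S2 = 32, directly from the coefficients
```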


    geometry - Rectangle inscribed in semicircle, find perimeter and more

    Consider this image:

    [Figure: a rectangle of base $x$ inscribed in a semicircle of radius $1$]




    A rectangle is inscribed in a semicircle and the radius is $1$. The base of the rectangle is $x$. Write an expression for the rectangle's perimeter and determine the value of $x$ that gives the largest possible perimeter. Also, what is that largest perimeter?
    Well... I fooled around with the unit circle, but still no progress. I've found several instructions on how to find the area, but no luck with the perimeter.



    I've been trying to solve this one for days now; I have completely given up solving it by myself. If someone could provide me with both answers and solutions, I would be happier than a child in a candy store :)



    Could any kind soul help me out? :)



    EDIT:




    Please read new update:



    http://imgur.com/a/adATx
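
Under the usual reading of the figure (the base of length $x$ lies on the diameter, so the height is $\sqrt{1-x^2/4}$), the optimization goes through symbolically; a SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
P = 2 * x + 2 * sp.sqrt(1 - x**2 / 4)  # perimeter: two bases plus two heights

crit = sp.solve(sp.Eq(sp.diff(P, x), 0), x)
print(crit)                             # [4*sqrt(5)/5], i.e. x = 4/sqrt(5)
print(sp.simplify(P.subs(x, crit[0])))  # 2*sqrt(5), the corresponding perimeter
```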

    Saturday 21 February 2015

    trigonometry - How to prove that $\csc x - 2\cos x \cot 2x = 2 \sin x$



    My idea was...



    \begin{align}
    & \text{LHS} \\[10pt]
    = {} & \frac 1 {\sin x} - \frac{2\cos x}{\tan2x} \\[10pt]

    = {} & \frac{\tan2x-\sin2x}{\sin x\tan2x}
    \end{align}



    From here I don't know how to continue. Please help! Thanks.



    ps, please teach me how to use the "divide" symbol


    Answer



    Somewhere you'll need a double-angle formula:
    $$
    \tan(2x) = \frac{2\tan x}{1 - \tan^2 x}

    $$
    or
    $$
    \tan(2x) = \frac{\sin(2x)}{\cos(2x)} = \frac{2\sin x\cos x}{\cos^2 x - \sin^2 x}.
    $$


    calculus - Every point takes local maximum value



    Suppose $f:\mathbb{R}\to\mathbb{R}$ attains a local maximum at every point. It's a fact that the set of local maximum values of a real function is at most countable, so if we also assume $f$ is continuous, then $f$ must be constant. My question is: if $f$ isn't continuous, can we prove there must be some interval on which $f$ is constant?


    Answer



    I think your conclusion is right. I've written a proof, please help me check if it's right.



    Since "local maximum values can only be countable", we assume they are $\{a_n\}_n$. And let $F_n=\{f=a_n\}$. Then $\mathbb{R}=\bigcup_{n\geq1}F_n$.



    By Baire's theorem, there is an $n_0$ such that $F_{n_0}$ is dense in some open interval (call it $U$).




    Because $\{f=a_{n_0}\}$ is dense in $U$, it's easy to prove that $f(x)\geq a_{n_0}$ in $U$.



    Assume that $x_0\in\{f=a_{n_0}\}$ is not an interior point of $\{f=a_{n_0}\}$ in $U$. In other words, there exists a sequence $\{x_n\}_n$ with $\{x_n\}_n\cap\{f=a_{n_0}\}=\emptyset$ such that $x_n\to x_0$. However, this cannot happen, because $x_0$ is a local maximum.



    Then we know $\{f=a_{n_0}\}$ has an interior point $x_0$, and we arrive at your conclusion. What's more, since $x_0$ was arbitrary, we know that $F_{n_0}\cap U$ is open too.


    functional analysis - Norm of an $l_p$ -operator



    Let $F:l_1\rightarrow l_1$ with $F(a_1,a_2,...)=((1-\frac{1}{1})a_1,(1-\frac{1}{2})a_2,(1-\frac{1}{3})a_3,...)$.




    a. How do I show that $F$ is bounded?
    b. How do I show that $||F||=1$?
    c. Is there an $a\in l_1$ such that $||a||=1$ and $||F(a)||=||F||$?





    Things I know so far:

    a. Boundedness has the following definition: there exists a constant $K$ such that $||F(a)||\leq K||a||$ for all $a\in l_1$. Further, the norm is $||a||=\sum_{i=1}^\infty|a_i|<\infty$.
    This means that we should have $||F(a)||=\sum_{i=1}^\infty|(1-\frac{1}{i})a_i|\leq K \sum_{i=1}^\infty|a_i|$. Which constant $K$ do we need?



    b. The norm of the operator $F$ is $||F||=\sup\{||F(a)||:||a||\leq1\}$, and this must equal $1$. How can I show this?



    c. My intuition tells me there is, but I can't think of a way of making it concrete.


    Answer



    a)



    $||F(a)||=\sum_{i=1}^\infty|(1-\frac{1}{i})a_i|\leq \sum_{i=1}^\infty|a_i|$,




    since $0 \le |1-\frac{1}{i}| \le 1$. Thus take $K=1$, and so $||F|| \le 1$.



    b)



    Let $e_n:=(0,\dots,0,1,0,\dots)$ ($1$ in the $n$-th place). Show that $F(e_n)=(1-\frac{1}{n})e_n$.



    Thus $1-\frac{1}{n} =||F(e_n)|| \le ||F|| \le 1$ for all $n$. This gives $||F||=1$.



    c)




    Now it's your turn to show that there is no(!) $a\in l_1$ such that $||a||=1$ and $||F(a)||=1$.
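    A small numerical sketch of parts (a) and (b), using truncated sequences as stand-ins for elements of $l_1$:

    def norm1(a):
        return sum(abs(x) for x in a)

    def F(a):
        # Coordinate i (1-based) is scaled by (1 - 1/i).
        return [(1 - 1/(i + 1)) * x for i, x in enumerate(a)]

    for n in (1, 2, 10, 100, 10000):
        e_n = [0.0] * n
        e_n[-1] = 1.0            # the basis vector e_n, truncated at n entries
        print(n, norm1(F(e_n)))  # prints 1 - 1/n, creeping up to (never reaching) 1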


    real analysis - limit of nth root of factorial divided by n








    I had a little discussion with a friend about the following limit:
    $$\lim_{n\to\infty} \frac{\sqrt[n]{n!}}{n}$$
    I would solve it like this: $$\lim_{n\to\infty} \sqrt[n]{\frac{n!}{n^{n}}} =0$$
    or
    $$\lim_{n\to\infty} \frac{\sqrt[n]{n}\sqrt[n]{n-1}\cdots\sqrt[n]{1}}{n}=\frac{1*1*1\cdots}{\infty}=0$$
    and in this second way, would there be a problem with $1^{\infty}$? I would say no, because there are no functions involved; as far as I know, this indeterminate form exists because one wants to avoid situations such as $f(x)^{g(x)}$ where $f(x)\to1$ and $g(x)\to\infty$. Could anyone clarify this for me?
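    A quick numerical sketch (using $\log n!$ via lgamma to avoid overflow) is a useful sanity check here; the ratio appears to settle near $1/e\approx 0.3679$ rather than $0$, which is also what Stirling's formula predicts:

    import math

    def ratio(n):
        # (n!)^(1/n) / n, computed via logarithms to avoid overflow.
        return math.exp(math.lgamma(n + 1)/n - math.log(n))

    for n in (10, 100, 1000, 100000):
        print(n, ratio(n))
    # 0.4529..., 0.3800..., 0.3695..., ... -> 1/e = 0.36788...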

    Friday 20 February 2015

    elementary number theory - What is the last digit of $2003^{2003}$?




    What is the last digit of this number?



    $$2003^{2003}$$





    Thanks in advance.



    I don't have any kind of idea how to solve this. Except that $3^3$ is $27$.


    Answer



    When calculating the last digit of $a\cdot b$ you only need to find the product of the last digit of $a$ and the last digit of $b$.



    So $2003^{2003}$ ends in the same digit as $3^{2003}$.




    $3^1$ ends in $3$



    $3^2$ ends in $9$



    $3^3$ ends in $7$



    $3^4$ ends in $1$



    $3^5$ ends in $3$




    $3^6$ ends in $9$



    $3^7$ ends in $7$



    $3^8$ ends in $1$.



    So $3^{4k}$ ends in $1$.



    From here $3^{2000}$ ends in $1$ (since $2000$ is $4\cdot 500$).




    Therefore $3^{2001}$ ends in $3$, $3^{2002}$ ends in $9$, and $3^{2003}$ ends in $7$.
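    As a sanity check, Python's three-argument pow (fast modular exponentiation) reproduces both the cycle and the final digit:

    # Last digits of 3^1 .. 3^8 cycle with period 4.
    print([pow(3, k, 10) for k in range(1, 9)])  # [3, 9, 7, 1, 3, 9, 7, 1]
    print(pow(2003, 2003, 10))                   # 7, since 2003 = 4*500 + 3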


    convergence divergence - Infinite series of positive and negative terms both divergent



    Let $x$ be the infinite series



    $$\sum_{k=0}^\infty f(k)$$



    Let $p$ be the sum of all positive terms and $n$ the sum of all negative terms of $x$. My question is: if $p$ and $n$ are both divergent, does $x$ have no absolute sum? Can it be rearranged to produce any arbitrary value? I believe it is true, because if at least one of $p$ and $n$ were convergent, then $x$ would have an absolutely determinable sum.


    Answer



    In a word, yes. In more detail, let $(a_{k})_{k=0}^{\infty}$ be a real sequence, and put

    \begin{align*}
    a_{k}^{+} &= \phantom{-}\max(a_{k}, 0) = \tfrac{1}{2}\bigl(|a_{k}| + a_{k}\bigr), \\
    a_{k}^{-} &= -\min(a_{k}, 0) = \tfrac{1}{2}\bigl(|a_{k}| - a_{k}\bigr),
    \end{align*}
    so that
    $$
    |a_{k}| = a_{k}^{+} + a_{k}^{-},\qquad
    a_{k} = a_{k}^{+} - a_{k}^{-}.
    $$
    The following are equivalent:





    1. $\sum\limits_{k=0}^{\infty} |a_{k}|$ converges (i.e., $(a_{k})_{k=0}^{\infty}$ is absolutely summable);


    2. Each of $\sum\limits_{k=0}^{\infty} a_{k}^{\pm}$ converges (absolutely).




    Further,




    3. If $\sum\limits_{k=0}^{\infty} a_{k}$ converges conditionally (i.e., $(a_{k})_{k=0}^{\infty}$ is summable, but not absolutely), then each of $\sum\limits_{k=0}^{\infty} a_{k}^{\pm}$ diverges.




    Proofs:



    1. implies 2.: Comparison, plus $a_{k}^{\pm} = |a_{k}^{\pm}| \leq |a_{k}|$.



    2. implies 1.: $|a_{k}| = a_{k}^{+} + a_{k}^{-}$.



    3. If both $(a_{k})$ and $(a_{k}^{-})$ are summable, then so is $(a_{k}^{+})$, since $a_{k}^{+} = a_{k} + a_{k}^{-}$, so $(a_{k})$ is absolutely summable since $|a_{k}| = a_{k}^{+} + a_{k}^{-}$. Contrapositively, if $(a_{k})$ is summable but not absolutely summable, then $(a_{k}^{-})$ is not summable.




    An entirely similar argument shows that if $(a_{k})$ is conditionally summable, then $(a_{k}^{+})$ is not summable.



    As you say, if $(a_{k})$ is conditionally summable, the series can be rearranged to converge to an arbitrary sum (or so that the partial sums diverge "in a more-or-less arbitrary way").
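    To make that last remark concrete, here is a small sketch of the standard greedy rearrangement, applied (as an assumed example) to the alternating harmonic series $1-\frac12+\frac13-\cdots$: while the running sum is below the target, append unused positive terms; while above it, append unused negative terms. Since both tails diverge, any target can be reached.

    def rearranged_partial_sum(target, n_terms=100000):
        # Greedy rearrangement of 1 - 1/2 + 1/3 - 1/4 + ... toward `target`.
        total = 0.0
        next_pos, next_neg = 1, 2          # next unused odd/even denominator
        for _ in range(n_terms):
            if total <= target:
                total += 1.0 / next_pos    # take a positive term 1/odd
                next_pos += 2
            else:
                total -= 1.0 / next_neg    # take a negative term -1/even
                next_neg += 2
        return total

    for target in (-1.0, 0.5, 3.0):
        print(target, rearranged_partial_sum(target))  # lands near each target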


    calculus - Why is $\lim\limits_{x\to0+}x\cot x=1$?



    Why is $\lim\limits_{x\to0+}x\cot x=1$?



    Since both $x$ and $\cot x$ are continuous at zero and both equal to zero at $x=0$ why is the limit of both of them $1$?



    i.e., why isn't it: $\lim\limits_{x\to0+}x\cot x=0\cdot 0 = 0$?



    PS: I know how to find the limit: $\displaystyle\lim_{x\to0}x\cot x=\lim_{x\to0}\frac {x\cos x} {\sin x}=\lim_{x\to0} \cos x = 1$ and it's the same with LHR too but I just find it strange since both of them are supposed to be $0$.


    Answer




    The cotangent is the reciprocal (the multiplicative inverse) of the tangent, that is $1/ \tan x$. The tangent is $0$ at $0$ so its reciprocal has a pole at $0$.



    It is important to note that while the cotangent is $(\tan x)^{-1}$, this is not the same as $\tan^{-1}(x)$, the inverse function (the functional inverse) of the tangent, also called the arctangent. The latter would be $0$ at $0$.
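    Numerically the distinction is easy to see; a tiny sketch (nothing beyond the standard math module):

    import math

    for x in (0.1, 0.01, 0.001):
        cot_x  = 1 / math.tan(x)   # reciprocal: blows up as x -> 0
        atan_x = math.atan(x)      # inverse function: vanishes as x -> 0
        print(x, cot_x, atan_x, x * cot_x)   # last column tends to 1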


    algebra precalculus - Counting the number of squares on an $ntimes n$ Board



    Yesterday I was asked by a friend how many squares there are on an $8\times 8$ chess board.
    I thought of $64$ immediately, but of course that is not the answer; there are many more.



    So the number of squares is: $$8\times 8 + 7\times 7 + 6\times 6 + 5\times 5 + 4\times 4 + 3\times 3 + 2\times 2 + 1\times 1=1^2 + 2^2 + 3^2 + \cdots + 8^2$$



    I came across this formula: $$\frac{n(n + 1)(2n + 1)} 6$$



    It produces the sum of squares on an $n\times n$ board.




    My question is, how did he reach this formula? Did he just guess patterns until he found a match, or is there a mathematical process behind it?



    If there is a mathematical process, can you please explain line by line?



    Thanks very much.



    Btw: Couldn't find matching tags for this question, says I can't create.


    Answer



    The first step is to recognize that there are $8^2$ squares of size $1$ by $1$, $7^2$ squares of size $2$ by $2$, and so on. That justifies the total number being, as you say, $1^2+2^2+3^2+\ldots+8^2$.

    Sums of powers are calculated by Faulhaber's formula, and there are several ways to derive them. One way is to know or suspect that $\sum_{k=1}^n k^p$ should be a polynomial of degree $p+1$. So for squares, we want $\sum_{k=1}^n k^2=an^3+bn^2+cn+d$. Evaluating at $n+1$ gives $\sum_{k=1}^{n+1} k^2=a(n+1)^3+b(n+1)^2+c(n+1)+d$. Subtracting, we get $(n+1)^2=a((n+1)^3-n^3)+b((n+1)^2-n^2)+c((n+1)^1-n^1)$, and equating coefficients yields the desired formula. You can then prove the formula rigorously by induction.
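    The coefficient matching at the end can be mechanized. Here is a sketch that recovers $a,b,c,d$ exactly by solving $an^3+bn^2+cn+d=\sum_{k\le n}k^2$ at the four sample points $n=1,\dots,4$ with rational arithmetic (four points determine a cubic uniquely):

    from fractions import Fraction

    ns = [1, 2, 3, 4]
    sums = [sum(k*k for k in range(1, n + 1)) for n in ns]   # 1, 5, 14, 30

    # Rows [n^3, n^2, n, 1] and right-hand sides, in exact arithmetic.
    A = [[Fraction(n**3), Fraction(n**2), Fraction(n), Fraction(1)] for n in ns]
    y = [Fraction(s) for s in sums]

    # Gauss-Jordan elimination (no pivoting needed for this tiny system).
    for col in range(4):
        piv = A[col][col]
        A[col] = [v / piv for v in A[col]]
        y[col] /= piv
        for row in range(4):
            if row != col:
                f = A[row][col]
                A[row] = [u - f*v for u, v in zip(A[row], A[col])]
                y[row] -= f*y[col]

    print(y)  # [Fraction(1, 3), Fraction(1, 2), Fraction(1, 6), Fraction(0, 1)]
    # i.e. n^3/3 + n^2/2 + n/6, which factors as n(n+1)(2n+1)/6.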



    Proving that the sequence $a_n=\frac{\ln(n+7)}{n}$ converges

    As the title says, I'm trying to prove that the sequence $$a_n=\dfrac{\ln(n+7)}{n}$$ converges. I'm trying to use the squeeze theorem and l'Hôpital's rule, but I'm not making any progress. I just ran a quick simulation, and it seems that $a_n$ converges to $0$.



    How do I go about proving this?

    Thursday 19 February 2015

    real analysis - I want to show that $f(x)=xf(1)$ where $f:\mathbb R\to\mathbb R$ is additive.











    I know that if $f$ is continuous at one point then it is continuous at every point.
    From this I want to show that $f(x)=xf(1)$.
    Can anybody help me prove this?


    Answer




    HINTS:




    1. Look at $0$ first: $f(0)=f(0+0)=f(0)+f(0)$, so $f(0)=0=0\cdot f(1)$.


    2. Use induction to prove that $f(n)=nf(1)$ for every positive integer $n$, and use $f(0)=0$ to show that $f(n)=nf(1)$ for every negative integer as well.


    3. $f(1)=f\left(\frac12+\frac12\right)=f\left(\frac13+\frac13+\frac13\right)=\dots\;$.


    4. Once you’ve got it for $f\left(\frac1n\right)$, use the idea of (2) to get it for all rationals.


    5. Then use continuity at a point.



    linear algebra - Intuition/Understanding of Inverse and Determinants



    This is not homework, but extends from a proof in my book.




    EDIT
    We're given an $m \times m$ nonsingular matrix $B$.



    According to the definition of an inverse, we can calculate each element of a matrix $B^{-1}$, for a given matrix $B$, such that each element is equal to an $(m-1) \times (m-1)$ determinant divided by an $m \times m$ nonzero determinant.



    Could someone please elaborate on this? I'm not sure why I don't see this, so I may need a lot of explanation.


    Answer



    Perhaps one way to understand it: look up Cramer's rule, e.g. http://en.wikipedia.org/wiki/Cramer%27s_rule, and think about what the rule means when you want to find the $x$ that solves $Ax = e_i$, where $e_i$ is a basis vector (all $0$, except $1$ at position $i$ in the vector).

    If you understand why Cramer's rule is true, then you're basically there: just note that if you solve the equations $Ax = e_i$ for all basis vectors (i.e., giving the identity matrix when you put the basis vectors and solutions into one matrix equation, $AX = I$), then you get the determinant and sub-determinant based formula for the matrix inverse $X = A^{-1}$ you described.
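    Here is a sketch of that determinant/sub-determinant formula in code, on a small assumed example with exact rational arithmetic: entry $(i,j)$ of $B^{-1}$ is the $(j,i)$ cofactor (a signed $(m-1)\times(m-1)$ determinant) divided by $\det B$.

    from fractions import Fraction

    def minor(M, i, j):
        # Delete row i and column j.
        return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

    def det(M):
        # Laplace expansion along the first row (fine for small matrices).
        if len(M) == 1:
            return M[0][0]
        return sum((-1)**j * M[0][j] * det(minor(M, 0, j))
                   for j in range(len(M)))

    def inverse(B):
        m, d = len(B), det(B)
        # Entry (i, j) of the inverse is the (j, i) cofactor over det(B).
        return [[Fraction((-1)**(i + j) * det(minor(B, j, i)), d)
                 for j in range(m)] for i in range(m)]

    B = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
    Binv = inverse(B)
    # Sanity check: B times its inverse is the identity.
    print([[sum(B[i][k] * Binv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)])   # identity matrix (entries as Fractions 1 and 0)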


    real analysis - Existence of a Bijective Map



    I am having a little trouble with this question.



    Prove that there does not exist a bijective map $f:\mathbb{R}^2 \to \mathbb{R}^3$ such that $f$ and $f^{-1}$ are both differentiable.



    Thanks for any help.


    Answer



    I'd be pretty surprised if this question wasn't already answered somewhere on this site... but here's a sketch.




    Suppose $f : \mathbb{R}^2 \to \mathbb{R}^3$ is bijective with both $f$ and $f^{-1}$ differentiable. In particular, $f$ and $f^{-1}$ are continuous i.e. $f$ is a homeomorphism. So the question is: "why is $\mathbb{R}^2$ not homeomorphic to $\mathbb{R}^3$?" The simplest approach is probably to note that $\mathbb{R}^2$ minus a point is not simply connected, but $\mathbb{R}^3$ minus a point is simply connected. Since the property




    there exists a point $x \in X$ such that $X \setminus \{x\}$ is not simply connected




    is invariant under homeomorphism, we are done.


    calculus - Evaluate $\int_{0}^{\infty}\!\frac{1-e^{-2x^2}}{xe^{x^4}}\,\mathrm{d}x$



    How can one evaluate the following improper integral?
    \begin{equation}
    \int_{0}^{\infty} \!\frac{1-e^{-2x^2}}{xe^{x^4}}\mathrm{d}x
    \end{equation}

    It seems that one can consider the function
    \begin{equation}
    I(\alpha)=\int_{0}^{\infty} \!\frac{1-e^{-\alpha x^2}}{xe^{x^4}}\mathrm{d}x
    \end{equation}

    and try to evaluate $I'(\alpha)=\int_{0}^{\infty} xe^{-\alpha x^2-x^4}\mathrm{d}x$. But this approach did not work for me.


    Answer



    $$
    \int_0^\infty \frac{1-e^{-2x^2}}{xe^{x^4}}dx=\int_0^\infty \frac{1-e^{-2u}}{2ue^{u^2}}du
    =\int_0^\infty e^{-u^2}\sum_{k=0}^\infty(-1)^k \frac{(2u)^k}{(k+1)!}\,du\\
    =\sum_{k=0}^\infty(-1)^k\frac{2^{k-1}\Gamma\left(\frac{k+1}{2}\right)}{(k+1)!}
    =\frac{\sqrt\pi}2\sum_{n=0}^\infty\frac1{(2n+1)n!}-\sum_{n=0}^\infty\frac{2^n}{(n+1)(2n+1)!!}\\
    =\frac{\sqrt\pi}2 {}_1F_1\left(\small\frac12;\small\frac32;1\right)-\frac12 {}_2F_2\left(1,1;\small\frac32,2;1\right).
    $$



    According to WA, the first summand can be represented as:
    $$
    \frac\pi4 \text{Erfi}(1).
    $$



    Further simplification does not seem possible.
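    That said, the closed form can be cross-checked numerically; a sketch assuming the mpmath library (its quad, hyp1f1, and hyp2f2 routines) gives agreement, both values coming out to roughly $0.573\ldots$:

    from mpmath import mp, mpf, quad, exp, sqrt, pi, hyp1f1, hyp2f2

    mp.dps = 30  # working precision in decimal digits

    # The integrand, guarded at x = 0 where it extends continuously by 0.
    f = lambda x: (1 - exp(-2*x**2)) / (x * exp(x**4)) if x > 0 else mpf(0)
    integral = quad(f, [0, mp.inf])

    closed = (sqrt(pi)/2 * hyp1f1(mpf(1)/2, mpf(3)/2, 1)
              - mpf(1)/2 * hyp2f2(1, 1, mpf(3)/2, 2, 1))

    print(integral)  # ~ 0.573...
    print(closed)    # same digits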


    calculus - How to show $\lim_{n \to \infty}a_n$ where $a_n=\frac{5^{3\cdot n}}{2^{\left(n+1\right)^2}}$



    Find $$\lim_{n \to \infty}a_n \quad\text{where}\quad a_n=\dfrac{5^{3\cdot n}}{2^{\left(n+1\right)^2}}$$




    I am trying to solve it using the squeeze theorem.
    I have expanded the expression to $$a_n=\dfrac{(5^3)^n}{2^{n^2}\cdot2^{2n}\cdot2}$$
    I think that the lower (left-hand) expression should be $$b_n=\dfrac{(2^3)^n}{2^{n^2}\cdot2^{2n}\cdot2}$$
    But as for the upper (right-hand) expression, I cannot find a suitable larger one; any ideas?


    Answer



    Hint: $0<\dfrac{125^n}{2^{(n+1)^2}} < \dfrac{125^n}{2^{n^2}} = \left(\dfrac{125}{2^n}\right)^n < \left(\dfrac{1}{2}\right)^n$

    since $\dfrac{125}{2^n} < \dfrac{1}{2}$ when $n \geq 8$.


    combinatorics - Algebraic proof of combinatorial identity




    I would like to obtain an algebraic proof of the following identity. I already know the combinatorial proof, but the algebraic proof is evading me.



    $$\sum_{r=0}^n\binom{n}{r}\binom{2n}{n-r}=\binom{3n}{n}$$



    Thanks.


    Answer



    We make use of the Binomial Theorem. Observe that:
    \begin{align*}
    \sum_{k=0}^{3n} \binom{3n}{k}x^k &= (1+x)^{3n} \\

    &= (1+x)^n(1+x)^{2n} \\
    &= \left[ \sum_{i=0}^{n} \binom{n}{i}x^i \right] \left[ \sum_{j=0}^{2n} \binom{2n}{j}x^j \right] \\
    &= \sum_{k=0}^{3n} \left[\sum_{r=0}^n\binom{n}{r}\binom{2n}{k-r}\right]x^k \\
    \end{align*}



    Hence, by setting $k=n$, we compare the coefficients of $x^n$ of both sides to obtain:
    $$
    \binom{3n}{n} = \sum_{r=0}^n\binom{n}{r}\binom{2n}{n-r}
    $$
    as desired.
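    The identity is also easy to spot-check with math.comb; a quick sketch:

    from math import comb

    for n in (1, 2, 5, 10, 20):
        lhs = sum(comb(n, r) * comb(2*n, n - r) for r in range(n + 1))
        assert lhs == comb(3*n, n), n
    print("sum_r C(n,r) C(2n,n-r) = C(3n,n) for the sampled n")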



    Wednesday 18 February 2015

    elementary number theory - Multiplicative property of Euler's Totient function



    Prove: if $m > 1$, $n > 1$, and $\gcd(m, n) > 1$, then $\phi(mn) \neq \phi(m)\phi(n)$.




    Proof



    Let $m,n \in \mathbb{Z}$ s.t. $\gcd(m,n) > 1$. Consider the congruences:



    $x \equiv b \mod m$



    $x \equiv c \mod n$



    This is where I am stuck. If I were to prove the opposite of this statement, I would appeal to the Chinese remainder theorem and state that distinct solutions correspond to distinct pairs. So I think this is in the same vein: we do not have a distinct correspondence, so somehow we should end up with $\phi(mn) \neq \phi(m)\phi(n)$.


    Answer




    Let us use an inductive argument on the number of distinct primes dividing $N = mn$. If $N$ is composed of a single prime, say $N=p^k$ partitioned into $m=p^i$ and $n=p^j$ (with $i,j\geq1$), then
    $$\phi(N) = p^k\left(1-\frac{1}{p}\right) > \phi(m)\phi(n) = p^k\left(1-\frac{1}{p}\right)^2$$
    so the base case holds. Now suppose the inequality holds for all $N$ composed of at most $k$ distinct primes. Consider $$N=p_1^{a_1}\cdots p_{k+1}^{a_{k+1}}$$ Suppose without loss of generality that $N$ is partitioned into $m$ and $n$ where $p_1$ is shared:
    $$m=p_1^{b_1}\cdot m'$$
    $$n=p_1^{c_1}\cdot n'$$
    where $a_1 = b_1 + c_1$. Then we have
    $$\phi(N) = \phi(p_1^{a_1}m'n') = \phi(p_1^{a_1})\phi(m'n')$$
    From the inductive hypothesis, $m'$ and $n'$ are composed of at most $k$ distinct primes, so
    $$\phi(m'n')\geq\phi(m')\phi(n')$$
    (with equality when $m'$ and $n'$ happen to be coprime, by multiplicativity). Combining this with the strict inequality $\phi(p_1^{a_1})>\phi(p_1^{b_1})\phi(p_1^{c_1})$ from the prime-power case, we get

    $$\phi(p_1^{a_1})\phi(m'n')>\left[\phi(p_1^{b_1})\phi(m')\right]\left[\phi(p_1^{c_1})\phi(n')\right] = \phi(m)\phi(n)$$
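    A brute-force sketch (with a deliberately naive totient) confirms the strict inequality on a small range:

    from math import gcd

    def phi(n):
        # Naive Euler totient: count integers in [1, n] coprime to n.
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    for m in range(2, 40):
        for n in range(2, 40):
            if gcd(m, n) > 1:
                assert phi(m*n) > phi(m) * phi(n), (m, n)
    print("phi(mn) > phi(m)phi(n) for every sampled pair with gcd(m, n) > 1")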


    complex numbers - Solve: $z^4 +2sqrt3 +2i = 0$



    Solve: $z^4 +2\sqrt3 +2i = 0$



    I've been trying to solve this exercise for $20$ minutes now, with no luck. I got up to here:



    $z^4 = -2i -2\sqrt3 = -2(\sqrt3 + i)$, but I don't see how to proceed from here.



    Also I tried: $z^8 = -2\sqrt3 -2i \rightarrow$ $z^8 = (4\cdot3) -4 = 8 \rightarrow $ $z^8 = 8$




    $z^8 = {{1}\over{8}}\text{Cis}(\pi)$ and from there to solve.


    Answer



    Hint:
    $$z^{4}=4\left(-\frac{\sqrt{3}}{2}-\frac{1}{2}i\right)=4e^{-\frac{5}{6}\pi i}$$
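    Unwinding the hint numerically (a sketch): the four fourth roots are $\sqrt{2}\,e^{i\left(-\frac{5\pi}{24}+\frac{k\pi}{2}\right)}$ for $k=0,1,2,3$, and each leaves essentially zero residual in the original equation:

    import cmath, math

    roots = [math.sqrt(2) * cmath.exp(1j * (-5*math.pi/24 + k*math.pi/2))
             for k in range(4)]
    for z in roots:
        residual = z**4 + 2*math.sqrt(3) + 2j
        print(z, abs(residual))   # residuals on the order of 1e-15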


    summation - Intuition behind the formula for $sum i^{2}$



    I've been trying to figure out the intuition behind the closed formula:



    $\sum i^{2} = \frac{(n)(n+1)(2n+1)}{6}$



    This is not hard to prove via induction, so I'm not interested in the proof that this is true. Rather, I'm more interested in why somebody would even make this guess for an inductive step. What's so confusing to me is that



    $\sum i = \frac{(n)(n+1)}{2}$ makes sense using the Gaussian trick of taking the number line, folding it in half, and realizing that $(1+n) = (2+(n-1)) = (3 +(n-2))\ldots$, so that you end up with this form.




    But, what's the intuition for $i^{2}$? Looking at these two formulas, one realizes very quickly that



    $\sum i^{2} = \sum i \cdot \frac{(2n+1)}{3}$



    But, why is that true intuitively? What's the intuition for this?


    Answer



    Since you asked for an intuitive explanation, consider the simple case of $1^2+2^2+3^2+4^2$, using a set of children's blocks to build a pyramid-like structure.



    First you arrange $16$ blocks in a $4\times4$ square. Next you place on top of this arrangement, $9$ blocks in a $3\times3$ square aligning the blocks of the upper left corner of each square, one above the other. On top of this construction place $4$ blocks in a $2\times2$ square, similarly aligned. Finally crown the upper left corner with a single block for the $1\times1$ square.




    The following table represents the number of blocks in each column of the arrangement, viewed from above:



    $\begin{array}{cccc}
    4&3&2&1\\
    3&3&2&1\\
    2&2&2&1\\
    1&1&1&1
    \end{array}$



    We can find the total number of blocks in the arrangement by adding the number of columns containing a single block to twice the number of columns containing two blocks then adding three times the number of columns containing three blocks and finally adding four times the one column containing four blocks.




    \begin{eqnarray}
    \sum_{i=1}^4i^2=1\cdot(4+3)+2\cdot(3+2)+3\cdot(2+1)+4\cdot1
    \end{eqnarray}



    If we do the same thing with a pyramid of blocks having $n\times n$ blocks on its base then the summation would look like the following:



    \begin{eqnarray}
    \sum_{i=1}^{n}i^2&=&1\cdot(n+n-1)+2\cdot(n-1+n-2)+3\cdot(n-2+n-3)\\
    &+&\cdots+(n-1)\cdot(2+1)+n\cdot1\\

    &=&1\cdot(2n-1)+2\cdot(2n-3)+3\cdot(2n-5)+\cdots+(n-1)\cdot3+n\cdot1\\
    &=&\sum_{i=1}^{n}i(2n-2i+1)\\
    &=&2n\sum_{i=1}^{n}i-2\sum_{i=1}^{n}i^2+\sum_{i=1}^{n}i\\
    \sum_{i=1}^{n}i^2&=&(2n+1)\sum_{i=1}^{n}i-2\sum_{i=1}^{n}i^2\\
    3\sum_{i=1}^{n}i^2&=&(2n+1)\sum_{i=1}^{n}i\\
    &=&\dfrac{n(n+1)(2n+1)}{2}\\
    \sum_{i=1}^{n}i^2&=&\dfrac{n(n+1)(2n+1)}{6}
    \end{eqnarray}
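    A quick sketch confirming the column-count decomposition against both the direct sum and the closed form:

    for n in (1, 2, 3, 4, 10, 100):
        columns = sum(i * (2*n - 2*i + 1) for i in range(1, n + 1))
        direct = sum(i * i for i in range(1, n + 1))
        closed = n * (n + 1) * (2*n + 1) // 6
        assert columns == direct == closed, n
    print("column count = sum of squares = n(n+1)(2n+1)/6 for the sampled n")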


    differential geometry - Intuitive way to understand Gauss-Bonnet Theorem

    There are quite a lot of intuitive explanations for the Levi-Civita connection, exterior derivatives, and other concepts in differential and Riemannian geometry, but I couldn't find any resource with an intuitive explanation of the Gauss-Bonnet theorem. The proof I am aware of (given in John Lee's Riemannian Manifolds) seems like a trick of notation and an application of Stokes' theorem. I tried to come up with a more visual way of thinking about this theorem using the Gauss map, but so far I wasn't able to do so.



    What is the intuition behind this theorem? How did Gauss come up with it himself (if it was him)? So far it seems to me that the way one would come up with such a theorem is to notice the pattern for several shapes, write down the formula, and try to prove it using the available techniques. Yet that seems unsatisfying.



    All references are welcome.



    Thank you.

    Proof Involving Modular Arithmetic and Fermat's Theorem



    I'm trying, but struggling, to prove this statement about congruences. The title is perhaps uninformative, which I apologize for, so feel free to edit or comment if you have a better description.



    Theorem: Let $p$ and $r$ be distinct primes greater than $2$. Then, $p^{r-1} + r^{p-1} \equiv 1 \ \text{(mod $pr$)}$.



    Here's my best attempt at a proof. It doesn't quite prove the statement, but rather gives a string of facts I was able to piece together which, unless I missed something important, may be sufficient to write the proof; I'm struggling to piece them together correctly.




    Proof Attempt:



    Since $p$ and $r$ are prime, it's clear that they are relatively prime to one another, so $\gcd(p,r) = 1$. Thus, we are free to apply Fermat's little theorem to $p^{r-1}$ and $r^{p-1}$.



    Since $p$ and $r$ are relatively prime, Fermat's little theorem tells us that $p^{r-1} \equiv 1 \ \text{(mod $r$)}$. Similarly, we have $r^{p-1} \equiv 1 \ \text{(mod $p$)}$.



    Now, by definition, taking $p$ and $r$ as fixed, we have
    \begin{align*}
    p^{r-1} \equiv 1 \ \text{(mod $r$)} \iff \exists a \in \mathbb{Z}, p^{r-1} -1 = ar
    \end{align*}
    and
    \begin{align*}
    r^{p-1} \equiv 1 \ \text{(mod $p$)} \iff \exists b \in \mathbb{Z}, r^{p-1} - 1 = bp
    \end{align*}
    So now I can't quite figure out how to string these together. We could add the two equations so that we have $p^{r-1} + r^{p-1}$ on the same side of an equation, but then we have a $-2$ term that we can't quite eliminate, nor can we factor $pr$ out of the right-hand side, though we can factor a $p$ out of one term and an $r$ out of the other. The only other thing that occurred to me was that $p$ and $r$ being relatively prime implies invertibility. Similarly, since their $\gcd$ is $1$, we can write $1$ as a linear combination of $p$ and $r$ with integer coefficients, but that still didn't quite help with factoring $pr$ out of the right-hand side, even if it does seem to isolate $p^{r-1} + r^{p-1} - 1$ on one side of the equation.



    I'd greatly appreciate any helpful comments or hints. I'd really like to figure as much of this out myself, as I'd like to think I'm reasonably close, so any guidance in the right direction is especially appreciated. I'll edit this question later, hopefully with an updated, correct proof of this and credit whoever provided any thoughts.



    Thanks.



    Answer



    Are you familiar with the property



    $a \equiv b \pmod{m_1}$



    $a \equiv b \pmod{m_2}$



    $\vdots$



    $a \equiv b \pmod{m_n}$




    $\implies a \equiv b \pmod{\text{lcm}(m_1, m_2, \ldots, m_n)}$?



    If not, try to prove it! This gives a direct solution to the problem you're facing. And I believe $p$ and $r$ must be distinct in order for this solution to work. Otherwise, I'm pretty sure $p^{r-1} + r^{p-1} \equiv 2 p^{p-1} \equiv 0 \not\equiv 1 \pmod{p^2}$.
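    A quick sketch checking the congruence over all pairs of distinct odd primes below $60$ (using Python's built-in modular pow):

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

    primes = [p for p in range(3, 60) if is_prime(p)]
    for p in primes:
        for r in primes:
            if p != r:
                m = p * r
                assert (pow(p, r - 1, m) + pow(r, p - 1, m)) % m == 1, (p, r)
    print("p^(r-1) + r^(p-1) = 1 (mod pr) for all sampled distinct odd primes")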


    Tuesday 17 February 2015

    trigonometry - Proof of a summation



    Here is the problem I am stuck with: Is it true that for every positive integer $n > 1$,
    $$\sum\limits_{k=1}^n \cos \left(\frac {2 \pi k}{n} \right) =0= \sum \limits_{k=1}^n \sin \left(\frac {2 \pi k}{n} \right)$$
    I'm imagining the unit circle and adding up the values of both trig functions separately, but I cannot see how their sums add up to $0$.



    EDIT: I updated the question from earlier and realized that the issue was my fault because of a major typo. Better to point it out late than never. So how would I go about solving the question as it stands now?



    Originally the question was: $$\sum\limits_{k=1}^n \frac {\cos(2πk)}{n} =0= \sum\limits_{k=1}^n \frac {\sin(2πk)}{n}$$



    Answer



    Old answer to old question:






    Multiply both sides by $n$ to get



    $$\sum_{k=1}^n\cos(2\pi k)\stackrel?=0\stackrel?=\sum_{k=1}^n\sin(2\pi k)$$



    It's easy enough to see that for positive integers $k$, $\cos(2\pi k)=1$ and $\sin(2\pi k)=0$, thus,




    $$\sum_{k=1}^n\sin(2\pi k)=0$$



    $$\sum_{k=1}^n\cos(2\pi k)=n\ne0$$
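    And for the corrected question, a quick numerical sketch shows both sums vanishing (up to floating-point noise), which is what one expects since the terms are the real and imaginary parts of the $n$-th roots of unity, which sum to $0$ for $n>1$:

    import math

    for n in (2, 3, 5, 8, 12):
        cos_sum = sum(math.cos(2*math.pi*k/n) for k in range(1, n + 1))
        sin_sum = sum(math.sin(2*math.pi*k/n) for k in range(1, n + 1))
        print(n, round(cos_sum, 12), round(sin_sum, 12))   # both 0.0 (or -0.0)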


    set theory - Automorphisms on $(\mathbb R,+)$ and the Axiom of Choice



    We know that the algebraic automorphisms of the real numbers under addition are not in $\text{1:1}$ correspondence with $\mathbb R \setminus \{0\}$; see here.



    The argument uses the AOC.



    Suppose we drop the AOC from $\text{ZFC}$, replacing it with



    Axiom (GR):




    The injective mapping



    $\quad \Phi: \mathbb R \setminus \{0\} \to \text{AutomorphismGroup(} \mathbb R ,+ \text{)}$



    is surjective.






    Has this $\text{ZF+GR}$ been tried and/or does this lead to $1 = 0$?







    Update:



    Added descriptive set theory tag after looking over links in Noah's answer.


    Answer



    It is indeed consistent, and in fact is a consequence of the extremely powerful axiom of determinacy.



    Specifically, AD implies that every homomorphism from $(\mathbb{R},+)$ to itself is continuous, and in particular of the form $a\mapsto ar$ for some $r\in\mathbb{R}$. See here for some discussion of how nasty any other endomorphism would have to be; AD rules out such sets (e.g. implies that every set of reals is measurable).




    Of course, as Asaf observes below, AD is truly massive overkill (like nuking a mosquito); I'm mentioning it because AD is a natural alternative to AC which you may independently want to know about.






    Now, AD isn't actually cheap: the theory ZF+AD proves the consistency of ZF; that is, the axiom of determinacy is of high consistency strength. We can prove the consistency of ZF+GR relative to ZF alone; however, this is a bit more technical.


    calculus - Evaluating $\sum_{k=1}^{\infty}\frac{1}{k(3k-1)}$

    I am wondering if the sum
    $$S=\sum_{k=1}^{\infty}\frac{1}{k(3k-1)}$$
    has an exact expression. When I plugged it into Wolfram Alpha, it spat out:
    $$S=\frac{1}{6}\Big(-\sqrt{3}\,\pi + 9 \ln(3)\Big)$$
    I am wondering how this answer is obtained. Is there a simple way, not using math beyond second-year university, to arrive at that answer?
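    A quick numerical sketch comparing a large partial sum against Wolfram Alpha's closed form (both come out to roughly $0.74102$):

    import math

    partial = sum(1 / (k * (3*k - 1)) for k in range(1, 10**6 + 1))
    closed = (-math.sqrt(3) * math.pi + 9 * math.log(3)) / 6
    print(partial, closed)   # agree to about six decimal places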

    Cumulative Distribution Fit

    I am looking for a monotonically decreasing function to fit to a cumulative distribution. The distribution is the number of values of a random variable $X$ that are greater than $Y$, as a function of $Y$. In total there are only a few hundred values of $X$, so the distribution does not decrease smoothly. I am thinking of fitting it with a monotonically decreasing polynomial to obtain a smoothly decreasing function of $Y$, and I would like to choose the degree of the polynomial to get a satisfactory fit. Can someone suggest a monotonically decreasing polynomial to use?

    real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

    How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know that when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...