Thursday 31 August 2017

real analysis - The convergence of $\sqrt{2+\sqrt{2+\sqrt{2+\ldots}}}$

I would like to know if this sequence converges $\sqrt {2+\sqrt {2+\sqrt {2+\ldots}}}$.



I know this sequence is monotonically increasing, but I couldn't prove that it's bounded.



Thanks

elementary number theory - Finding a linear combination of integers of an infinite set $A$ that gives the gcd of $A$




This is a follow-up to my previous question, where I asked for a proof that for any nonempty set of integers $A$, not all zero, the greatest common divisor of $A$ exists.



If $A$ is finite, with $A=\{a_1,\dots, a_k\}$ and $k = \#A$, then we can easily show that there are $x_1, \dots, x_k\in \mathbb{Z}$ with $\gcd(A) = \sum_{i=1}^k x_ia_i$, and we can construct these integers. We use the extended Euclidean algorithm and, if $k\geq 2$, the fact that $\gcd(a_1, \dots, a_k) = \gcd(a_1, \gcd(a_2, \dots, a_k))$.
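For concreteness, here is a minimal Python sketch of this finite-case construction (the function names are mine, not from the question): it folds the extended Euclidean algorithm over the list, assuming the inputs are positive integers.

def ext_gcd(a, b):
    # Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y.
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    # g = b*x + (a % b)*y and a % b = a - (a // b)*b
    return g, y, x - (a // b) * y

def gcd_combination(nums):
    # Return (g, coeffs) with g = gcd(nums) = sum of coeffs[i]*nums[i].
    g, coeffs = nums[0], [1]
    for a in nums[1:]:
        g, x, y = ext_gcd(g, a)
        coeffs = [c * x for c in coeffs] + [y]
    return g, coeffs

print(gcd_combination([12, 18, 10]))  # (2, [-2, 2, -1]): 12*(-2) + 18*2 + 10*(-1) = 2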



Now suppose $A$ is infinite:




  1. How can we show that there are $k\in \mathbb{N}$, $a_1,\dots, a_k \in A$ and $x_1,\dots, x_k\in \mathbb{Z}$ with $\gcd(A) = \sum_{i=1}^k x_ia_i$?
    Alternatively:

    How can we prove the existence of some finite set $A'$, with $A'\subseteq A$ and $\gcd(A')=\gcd(A)$?

  2. In what cases (for what sets) can we actually compute integers $k$ and $a_1, \dots, a_k$ with said property? (I'm looking for relatively general examples, not the exact class of sets that satisfy this property.)

  3. When and how can we determine the least possible $k$? When and how can we determine the set of sets $A'$ with $A'\subseteq A$ and $\gcd(A')=\gcd(A)$, such that $\#A'$ is minimal?


Answer



Assume that $\gcd(A)=1$ (if not, just divide by the common factor). Let $a_0\in A$. If $a_0=1$, then we're done. Else, say $a_0=p_1^{e_1}\cdots p_n^{e_n}$. Since $A$ has gcd $1$, there exist $a_i\in A$ such that $p_i\nmid a_i$. Then $\gcd(a_0, a_1,\ldots, a_n)=1$.



To optimize this, select $a_0$ to have the least number of prime factors, and choose each $a_i$ to minimize the number of prime factors of $\gcd (a_0, a_1, \ldots, a_i)$.
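A hedged Python sketch of this construction, assuming sympy for factoring and taking a finite sample of $A$ with gcd $1$ as input (the names are illustrative, not part of the original answer):

from math import gcd
from sympy import factorint

def finite_subset_with_gcd_one(A):
    # Follow the answer: take a0, then for each prime p dividing a0
    # pick some element of A not divisible by p (it exists since gcd(A) = 1).
    A = list(A)
    a0 = A[0]
    subset = [a0]
    for p in factorint(a0):
        subset.append(next(a for a in A if a % p != 0))
    assert gcd(*subset) == 1  # multi-argument gcd needs Python 3.9+
    return subset

print(finite_subset_with_gcd_one([12, 18, 35, 10]))  # [12, 35, 35], whose gcd is 1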


contest math - 3 variable symmetric inequality



Show that for positive reals $a,b,c$,
$\frac{a^2}{b+c}+\frac{b^2}{c+a}+\frac{c^2}{a+b}\geq \frac{3a^3+3b^3+3c^3}{2a^2+2b^2+2c^2}$



What I did was WLOG set $a+b+c=1$ (since the inequality is homogeneous).
Then I substituted into the LHS to get $\sum_{\text{cyc}} \frac{a^2}{1-a}\geq \frac{3a^3+3b^3+3c^3}{2a^2+2b^2+2c^2}$. Now I'm not sure if I should clear the denominators, expand, and try to use Muirhead + Schur (clearing the denominators seems quite tedious).




Any ideas are appreciated.


Answer



Your inequality is equivalent to
$$\left(a^3-2 a b c+b^3+c^3\right) \left(2 a^3-a^2 b-a^2 c-a b^2-a c^2+2 b^3-b^2
c-b c^2+2 c^3\right)\geq 0.$$
The first factor is positive since $$a^3+b^3+c^3\geq 3abc>2abc,$$
and the second factor is also positive since
$$2(a^3+b^3+c^3)\geq ab(a+b)+ac(a+c)+bc(b+c),$$
which follows from $a^3+b^3=(a+b)(a^2+b^2-ab)\geq ab(a+b)$ etc.


elementary number theory - Find all integer solutions to linear congruences



Find all integer solutions to linear congruences:



\begin{align}
&(a) &3x &\equiv 24 \pmod{6},\\
&(b) &10x &\equiv 18 \pmod{25}.
\end{align}



What I have so far:



$$(a) \gcd(3,6)=3$$
And we know $3|24$ so there are $3$ solutions. By inspection we know that $x=8$ is a solution.



One question I have is: even though $x=8$ is a solution, I can also see that $x=2$ is a solution, among others. Does it matter which I choose, or do I randomly choose a solution I see? From here I'm a tad confused. I'm missing something very simple.



$$(b) \gcd (10,25)=5$$




And $5 \nmid 18$ so there are no solutions. Right?


Answer



Using this, $(10,25)=5$ must divide $9$ (after cancelling the factor $2$, which is invertible mod $25$, from $10x\equiv 18\pmod{25}$) to admit any solution, which it does not, so there is no solution.



$3x\equiv 24\pmod 6\implies 2\mid x$, i.e., any even value (not only $8$) of $x$ will satisfy the first congruence.



So, the system of linear congruences has no solution, as the second congruence is not solvable.







Alternatively, for the 2nd congruence, $10x-18=25y$ for some integer $y$,



So $5(2x-5y)=18$, i.e., $\frac{18}{5}=2x-5y$, which would be an integer: contradiction.
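A brute-force check of both congruences (two Python one-liners; purely illustrative, not part of the original answer):

print([x for x in range(6) if (3 * x - 24) % 6 == 0])     # [0, 2, 4]: exactly the even residues
print([x for x in range(25) if (10 * x - 18) % 25 == 0])  # []: no solution, as argued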


calculus - Is a function $f$ integrable iff it satisfies the IVT?



I was having a conversation with another MSE user today and he mentioned that he had heard that "IVT was sufficient for a function to be integrable". When I asked if he meant it was "if-and-only-if" or just "if", he said "if-and-only-if".



This made no sense to me - I imagined a step-function which was 0 for $x<1$ and was 1 for $x\geq1$. This seemed to me to be integrable (with definite integral 1 between 0 and 2) and yet clearly did not satisfy the IVT.



He countered with "step functions are NOT integrable" and further said that he had found the place in the textbook which mentioned this and had verified that it WAS if-and-only-if... Before I could say anything, a movie we wanted to see started.



Where did I go wrong?




I assume he's correct because:
a) He's usually correct and b) I just started calculus and he's been doing it for years.



However, I thought that my step function was integrable, by the definition of integration that I learnt from Spivak's Calculus a la lower and upper sums.


Answer



You are correct: step functions don't satisfy the intermediate value property (unless there's only one step…), but they are always integrable. On the other hand, there are differentiable functions $f$ such that $f'$ is not Riemann-integrable. But, since it is a derivative, $f'$ has the intermediate value property (by Darboux's theorem).


modular arithmetic - Proofs using linear congruences



We have just covered solving linear congruences, and I am confused about how to use them in proofs. I understand that the linear congruence $$cx \equiv b \pmod m$$ has a unique solution $\bmod m$ if $\gcd(c,m) = 1$, but a general approach to problems escapes me.



Sample Questions




Prove that if $x^2 \equiv n \pmod {65}$ has a solution, then so does $x^2 \equiv -n \pmod {65}$.




Show that if $n \equiv 7 \pmod 8$, then $n$ is not the sum of three squares.







My Work



For the first one, $x^2 \equiv n \pmod {65}$ implies that $65 | (x^2 - n)$, so (I believe) that means that $n = b^2$ for some $b$, so $65 | (x^2 - b^2)$. We proved a result that said that if $a^2 \equiv b^2 \pmod p$ for some prime $p$, then $a \equiv b$ or $a \equiv - b \pmod p$. However, I don't think that is the right track to go down here because $65$ is not prime, and I'm unsure about assuming $n = b^2$ for some $b$.



For the second one, suppose to the contrary that $n = a^2 + b^2 + c^2$ for some $a, b, c$. Then $n \equiv 7 \pmod 8$ implies that $8 | (n - 7)$ so $n = 8k + 7$ for some $k$. So substituting in gives me $a^2 + b^2 + c^2 = 8k + 7$, and I'm unsure how to proceed from here.



Answer



Hint $\rm(1)\ \ \ x^2 \equiv n,\,\ y^2\equiv -1\:\Rightarrow\: -n\equiv x^2y^2\equiv (xy)^2.\ $ But $\rm\:mod\ 65\!:\ {-}1 \equiv 64\equiv (\_ )^2.$



$\rm(2)\ \ mod\ 4\!:\ x^2\!+\!y^2\!+\!z^2 \equiv\, 3\:\Rightarrow\:x,y,z\:$ odd, by $\rm\:odd^2\equiv 1,\ even^2\equiv 0.\:$ Therefore we deduce $\rm\phantom{(2)\ \ } mod\ 8\!:\ x^2\!+\!y^2\!+\!z^2\equiv\:7\:\Rightarrow\:x,y,z\:$ odd $\rm\:\Rightarrow\:x^2\!+\!y^2\!+\!z^2\equiv 3,\:$ by $\rm\:odd^2\!\equiv\{\pm1,\pm3\}^2\equiv 1.$
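A quick numeric confirmation of the mod-$8$ hint (Python, illustrative only): squares mod $8$ lie in $\{0,1,4\}$, and no three of them sum to $7$ mod $8$.

squares_mod_8 = {x * x % 8 for x in range(8)}
print(squares_mod_8)  # {0, 1, 4}
print(any((a + b + c) % 8 == 7 for a in squares_mod_8
          for b in squares_mod_8 for c in squares_mod_8))  # False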


Wednesday 30 August 2017

integration - how to calculate the integral of $\sin^2(x)/x^2$











$\int_{-\infty}^{\infty}\sin^2(x)/x^2\,\mathrm{d}x=\pi$ according to Wolfram Alpha. That is such a beautiful result! But how do I calculate this integral by hand?



Thanks in advance.


Answer



The answer may be found by using complex analysis, specifically the residue theorem. A full derivation may be found here.



I know of an easy way to derive this result using real analysis alone.


real analysis - Is there a bijective map from the open interval $(0,1)$ to $\mathbb{R}^2$?

I couldn't find a bijective map from the open interval $(0,1)$ to $\mathbb{R}^2$. Is there any example?

sequences and series - Generalized Euler sum $\sum_{n=1}^\infty \frac{H_n}{n^q}$



I found the following formula



$$\sum_{n=1}^\infty \frac{H_n}{n^q}= \left(1+\frac{q}{2} \right)\zeta(q+1)-\frac{1}{2}\sum_{k=1}^{q-2}\zeta(k+1)\zeta(q-k)$$




and it is cited that Euler proved the formula above, but how?



Do there exist other proofs?



Can we have a general formula for the alternating form



$$\sum_{n=1}^\infty (-1)^{n+1}\frac{H_n}{n^q}$$


Answer



$$

\begin{align}
&\sum_{j=0}^k\zeta(k+2-j)\zeta(j+2)\\
&=\sum_{m=1}^\infty\sum_{n=1}^\infty\sum_{j=0}^k\frac1{m^{k+2-j}n^{j+2}}\tag{1}\\
&=(k+1)\zeta(k+4)
+\sum_{\substack{m,n=1\\m\ne n}}^\infty\frac1{m^2n^2}
\frac{\frac1{m^{k+1}}-\frac1{n^{k+1}}}{\frac1m-\frac1n}\tag{2}\\
&=(k+1)\zeta(k+4)
+\sum_{\substack{m,n=1\\m\ne n}}^\infty\frac1{nm^{k+2}(n-m)}-\frac1{mn^{k+2}(n-m)}\tag{3}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\sum_{n=m+1}^\infty\frac1{nm^{k+2}(n-m)}-\frac1{mn^{k+2}(n-m)}\tag{4}\\

&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{(n+m)m^{k+2}n}-\frac1{m(n+m)^{k+2}n}\tag{5}\\
&=(k+1)\zeta(k+4)\\
&+2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{m^{k+3}n}-\frac1{(m+n)m^{k+3}}\\
&-2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{m(n+m)^{k+3}}+\frac1{n(n+m)^{k+3}}\tag{6}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{n=1}^\infty\sum_{m=1}^\infty\frac1{n(n+m)^{k+3}}\tag{7}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}

-4\sum_{n=1}^\infty\sum_{m=n+1}^\infty\frac1{nm^{k+3}}\tag{8}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{n=1}^\infty\sum_{m=n}^\infty\frac1{nm^{k+3}}+4\zeta(k+4)\tag{9}\\
&=(k+5)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{m=1}^\infty\sum_{n=1}^m\frac1{nm^{k+3}}\tag{10}\\
&=(k+5)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}\tag{11}\\

&=(k+5)\zeta(k+4)
-2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}\tag{12}
\end{align}
$$
Letting $q=k+3$ and reindexing $j\mapsto j-1$ yields
$$
\sum_{j=1}^{q-2}\zeta(q-j)\zeta(j+1)
=(q+2)\zeta(q+1)-2\sum_{m=1}^\infty\frac{H_m}{m^q}\tag{13}
$$
and finally

$$
\sum_{m=1}^\infty\frac{H_m}{m^q}
=\frac{q+2}{2}\zeta(q+1)-\frac12\sum_{j=1}^{q-2}\zeta(q-j)\zeta(j+1)\tag{14}
$$






Explanation



$\hphantom{0}(1)$ expand $\zeta$
$\hphantom{0}(2)$ pull out the terms for $m=n$ and use the formula for finite geometric sums on the rest
$\hphantom{0}(3)$ simplify terms
$\hphantom{0}(4)$ utilize the symmetry of $\frac1{nm^{k+2}(n-m)}+\frac1{mn^{k+2}(m-n)}$
$\hphantom{0}(5)$ $n\mapsto n+m$ and change the order of summation
$\hphantom{0}(6)$ $\frac1{mn}=\frac1{m(m+n)}+\frac1{n(m+n)}$
$\hphantom{0}(7)$ $H_m=\sum_{n=1}^\infty\frac1n-\frac1{n+m}$ and use the symmetry of $\frac1{m(n+m)^{k+3}}+\frac1{n(n+m)^{k+3}}$
$\hphantom{0}(8)$ $m\mapsto m-n$
$\hphantom{0}(9)$ subtract and add the terms for $m=n$
$(10)$ combine $\zeta(k+4)$ and change the order of summation
$(11)$ $H_m=\sum_{n=1}^m\frac1n$
$(12)$ combine sums



real analysis - Prove linear combinations of logarithms of primes over $\mathbb{Q}$ are independent



Suppose we have a set of primes $p_1,\dots,p_t$. Prove that $\log p_1,\dots,\log p_t$ are linearly independent over $\mathbb{Q}$. That is, $ \sum_{j=1}^{t}x_j\log(p_j)=0 \iff x_1=\dots=x_t=0$.



I think I have to use the fact that every positive $q\in\mathbb{Q}$ can be written as $\prod_{p\in\mathcal{P}} p^{n_p}$, where $(n_p)=(n_2,n_3,\dots)$ is a unique integer-valued sequence with all but finitely many terms zero. Here, $\mathcal{P}$ denotes the set of all primes.



Now how can I use this to prove the linear independence?


Answer



If $\sum_{j=1}^{t}x_j\log(p_j)=0$
then

$\sum_{j=1}^{t}y_j\log(p_j)=0$ where $y_j \in \Bbb Z$ is the product of $x_j$ by the common denominator of the $x_j$'s.



Therefore $\log\left(\prod_{j=1}^t p_j^{y_j}\right) = 0$,
which implies $\prod_{j=1}^t p_j^{y_j} = 1$, and this is only possible if $y_j=0$ for all $j$. Indeed, you have
$$ \prod\limits_{\substack{1 \leq j \leq t\\ y_j \geq 0}} p_j^{y_j} =
\prod\limits_{\substack{1 \leq i \leq t\\ y_i < 0}} p_i^{-y_i}$$
and uniqueness of prime powers decomposition implies $y_j=0$ for all $j$.







The converse is easy to see: if $x_j=0$ for all $j$, then $\sum_{j=1}^{t}x_j\log(p_j)=0$.


elementary set theory - The natural numbers as the intersection of all inductive sets

I am currently trying to understand a little bit of axiomatic set theory, following Enderton's fun book "Elements of Set Theory" and am a little unclear about the set of natural numbers as defined in Chapter 4.



Firstly, let me note that the axioms in the book preceding the study of the natural numbers are the axiom of extensionality, the empty set axiom, pairing, unions, power sets and the axiom for constructing subsets.



At the beginning of Chapter 4 is given the definition of the successor $a^{+}=a \cup \{a\}$ of a set $a$. Enderton then defines $0=\emptyset$, $1=0^{+}$, $2=1^{+}$ and so on.




A set $A$ is called inductive if $\emptyset \in A$ and $a \in A \implies a^{+} \in A$. The axiom of infinity then asserts the existence of an inductive set, and the set of natural numbers $\omega$ is defined to be the intersection of all inductive sets. The existence of this set follows from the axioms mentioned.



Now clearly each set $0,1,2 \ldots$ belongs to the set $\omega$ since it contains $0=\emptyset$ and is closed under successors.



My question: is the converse true? Is every element of $\omega$ obtained from $0=\emptyset$ by applying the successor operation to $0$ finitely many times?



I presume that this can be deduced, but as far as I can tell it is not addressed in the book.



Note: If I had a way of constructing the "set" $X=\{0,1,2 ,\ldots n,n^{+},\ldots \}$ then it would be inductive, by construction, and therefore contain $\omega$ but I do not know how to construct the aforementioned set from the axioms given. So an equivalent question is: how can I construct $X$? (Perhaps the later axiom of replacement might help somehow?)




Grateful for any help!

Real analysis Limits and continuous functions

Suppose that $f:\mathbb{R}\to \mathbb{R}$ is continuous on $\mathbb{R}$ and that $$
\lim_{x\to -\infty} f(x)=\lim_{x\to +\infty} f(x) =k$$ Prove that $f$ is bounded and if there exist a point $x_0 \in\mathbb{R}$ such that $f(x_0)>k$, then $f$ attains a maximum value on $\mathbb{R}$.



Edit by non OP. The OP seems to be a new user that posted the same question twice in less than 2 hours, both on MSE.
Real analysis continuous functions

Tuesday 29 August 2017

real analysis - Sequence Convergence in $\mathbb{R}^n$



Let {$\mathbf u_k$} be a sequence in $\mathbb{R}^n$ and let $\mathbf u$ be a point in $\mathbb{R}^n$. Suppose that for every $\mathbf v$ in $\mathbb{R}^n$, $$\lim_{k\to \infty} \langle\mathbf u_k, \mathbf v\rangle = \langle\mathbf u, \mathbf v\rangle.$$




Prove that {$\mathbf u_k$} converges to $\mathbf u$.

Using the Triangle Inequality I get



$$\begin{align}\text{dist}(\mathbf u,\mathbf v) &\le \text{dist}(\mathbf u,\mathbf u_k)+ \text{dist}(\mathbf u_k,\mathbf v),\ \quad \forall k \\
\\ \therefore \text{dist}(\mathbf u,\mathbf v) &\le \lim_{k\to \infty}[\text{dist}(\mathbf u,\mathbf u_k)+ \text{dist}(\mathbf u_k,\mathbf v)] \end{align}$$

and
$$\lim_{k\to \infty}\text{dist}(\mathbf u,\mathbf u_k) = 0 $$ because {$\mathbf u_k$} $ \to \mathbf u$.



But I don't know what to do next.


Answer



Since you are working on $\mathbb{R}^n$ with the standard scalar product (I suppose?), you can just insert the $i$-th standard-basis vector $e^i$ for $v$. Then your equation reads

$$
\lim_{k \to \infty} (u_k)^i = u^i
$$
(the upper index $i$ denotes the $i$-th component). This already implies that $u_k$ converges to $u$, since a sequence of vectors converges if and only if all of its component sequences converge.
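Written out, the estimate behind that last sentence is the following (a one-line sketch, not part of the original answer):

$$\|\mathbf u_k-\mathbf u\|^2=\sum_{i=1}^n\langle \mathbf u_k-\mathbf u, e^i\rangle^2=\sum_{i=1}^n\big(\langle \mathbf u_k, e^i\rangle-\langle \mathbf u, e^i\rangle\big)^2\xrightarrow{k\to\infty}0.$$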


sequences and series - Sums like $1+2+...=-1/12$ which give $1/3$? [Not a duplicate]

This isn't a duplicate: I am looking for a sum or sums which are similar to $1+2+3+\cdots=-\frac{1}{12}$ but which have an answer (via analytic continuation) of $\pm \frac{1}{3}$ rather than $-\frac{1}{12}$. By similar I mean it is an infinite series of whole numbers.

real analysis - Does $1/n$ satisfy the Cauchy criterion?



I have been told that if a series satisfies the Cauchy criterion, it converges. I was not quite convinced on this, however, since some of the posts on this website seemed to imply otherwise. I came up with what I thought was a counterexample: I argued that $1/n$ satisfies the Cauchy criterion but diverges. However, I've been told afterwards that $1/n$ doesn't satisfy the Cauchy criterion. Now, I've tried to show that the $1/n$ does not satisfy the Cauchy criterion but couldn't come up with a proof. Can someone provide me a proof that shows that the series $1/n$ does not satisfy the Cauchy criterion?



Answer



The sequence $\{1/n : n=1,2,3,\ldots\}$ satisfies the Cauchy criterion and converges.



The series $\displaystyle\sum_{n=1}^\infty \frac1n$ diverges. That is the same as saying that its sequence of partial sums
$$
\left\{ \sum_{n=1}^m \frac1n : m = 1,2,3,\ldots \right\}
$$
diverges. The latter sequence is not a Cauchy sequence. To see that it's not a Cauchy sequence without first considering whether it converges, consider the difference between the $m_1$th term and the $m_2$th term, where $m_2>m_1$:
\begin{align}
& \phantom{={}} \left| \sum_{n=1}^{m_1} \frac1n - \sum_{n=1}^{m_2} \frac1n \right| \\[6pt]

& =\left| \sum_{n=m_1+1}^{m_2} \frac1n \right| \ge \left| \int_{m_1+1}^{m_2} \frac1x\,dx \right| \\[6pt]
& = \left|\log\frac{m_2}{m_1+1}\right|\to\infty\text{ as }m_2\to\infty.
\end{align}



Don't confuse sequences with series.


fake proofs - imaginary number $i$ equals $-6/3.4641$?

$$(-4)^3 = -64$$
so the third root of $-64$ should be $-4$ then.
$$\sqrt[3]{-64} = -4$$
But if you calculate the third root of -64
with WolframAlpha( http://www.wolframalpha.com/input/?i=third+root+of+-64 )
you get a complex number with an imaginary part of $$3.4641016151 i$$ and a real part of $$2$$




so if the third root of $-64$ equals $-4$ AND $2 + 3.46410162 i$ (which I know is a bit foolish) then you could actually rearrange it like this
$$
\sqrt[3]{-64} \approx 2 + 3.46410162 i | -2$$
$$
\sqrt[3]{-64} -2 \approx -6 \approx 3.46410162 i |/3.46410162$$
$$
\frac{\sqrt[3]{-64} -2}{3.46410162} \approx \frac{-6}{3.46410162} \approx i$$



and this has to be totally wrong, so my question is: where exactly is the mistake?

real analysis - Establishing convergence of $\int_0^1\sqrt{\frac{x^2+1}{x}}\,dx$

I am tasked with determining whether the following improper integral converges:
$$\int_0^1\sqrt{\frac{x^2+1}{x}}dx$$Deemed improper as $\lim_{x\to0^+}\sqrt{(x^2+1)/x}=\infty$. After a while of struggling I got Wolfram alpha to tell me that it evaluates to some expression involving hypergeometric functions (whatever they are - but not important!) - i.e: it converges, so it was left to me to demonstrate why.




Seeing as the function doesn't have an elementary antiderivative, it won't be possible to just show $\lim_{x\to0^+}\int_x^1\sqrt{(t^2+1)/t}\;dt$ exists; rather, something like comparison to a larger function would be necessary, but I don't really know how to do that for functions of this form. I was also thinking of sort of using a reverse of the integral test for infinite series, by performing some change of variables on the integrand to get something like $\int_1^\infty f(t)\,dt$, which exists iff $\sum_{n=1}^\infty f(n)$ converges, but I don't know whether (a) this would be valid or (b) this would be any quicker/easier.



Any help regarding this specific integral, or further general methods for dealing with improper integrals, would be much appreciated.

proof writing - If $ac\equiv bc \pmod m$ and $\gcd(c,m)=d$, show that $a\equiv b \pmod{\frac{m}{d}}$




How would you show that if $ac\equiv bc \pmod m$ and $\gcd(c,m)=d$, then $a\equiv b \pmod{\frac{m}{d}}$?



Any help would be much appreciated!


Answer



You have marked this under proof writing. So I will attempt a model proof.



So $ac \equiv bc \mod m$, which means that $m | ac-bc$, and hence $m | c(a-b)$. Hence, $c(a-b)=km$ for some integer $k$. Now, given that $\gcd(c,m) =d$, it follows that $c=xd$ and $m=yd$ for some co-prime integers $x$ and $y$. Hence,
$$
c(a-b) = km \implies xd(a-b) = kyd \implies x(a-b) = ky

$$
Note that $x | ky$ from the above. Now, because $x$ is co-prime to $y$, it follows that $x |k$ and hence $\frac{k}{x}$ is an integer. Now, $$
(a-b) = \frac{k}{x}y
$$
and hence $a-b$ is a multiple of $y$. Hence $a \equiv b \mod y$. But $y = \frac{m}{d}$, hence $a \equiv b \mod \dfrac{m}{d}$.
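A brute-force verification of the statement for all small moduli (Python, illustrative only, not part of the model proof):

from math import gcd

for m in range(2, 20):
    for c in range(1, m):
        d = gcd(c, m)
        for a in range(m):
            for b in range(m):
                if (a * c) % m == (b * c) % m:
                    assert (a - b) % (m // d) == 0
print("verified for all moduli m < 20")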


matrices - What do real eigenvalues imply for a matrix

Suppose we have a matrix $A \in \mathbb{R}^{n \times n}$ with $\textrm{eig}(A)=\{ \lambda_1, \lambda_2, \ldots, \lambda_n\}$ such that $\lambda_i \in \mathbb{R}$.



Does the realness of the eigenvalues imply some structure in $A$?




I know that $A=A^{\sf T}$ (real symmetric) implies all the eigenvalues will be real, but I'm wondering about implications in the other direction.
Is there some additional condition + realness of eigenvalues which would imply the matrix is symmetric?



Context: I'm trying to classify special types of matrices according to the properties of their eigenvalues. e.g. $\lambda_i \geq 0$, then $A$ is positive semidefinite, etc. If anyone can point me to a list of such correspondences between eigenvalue constraints and matrix properties it would be much appreciated.

elementary number theory - Divisibility criteria for $7,11,13,17,19$



A number is divisible by $2$ if it ends in $0,2,4,6,8$. It is divisible by $3$ if sum of ciphers is divisible by $3$. It is divisible by $5$ if it ends $0$ or $5$. These are simple criteria for divisibility.




I am interested in simple criteria for divisibility by $7,11,13,17,19$.


Answer



$(1)$



The formulae for $2,3,5,9,11$ can be derived from $\sum_{0\le r\le n}{a_r10^r}$



Observe that $\sum_{0\le r\le n}{a_r10^r}\equiv a_0\pmod 2$



$\sum_{0\le r\le n}{a_r10^r}\equiv a_0\pmod 5$




$\sum_{0\le r\le n}a_r10^r\equiv \sum_{0\le r\le n}a_r\pmod 3$ as $9\mid(10^r-1)$



$\sum_{0\le r\le n}a_r10^r\equiv \sum_{0\le r\le n}(-1)^ra_r\pmod {11}$ as $10^r\equiv(-1)^r\pmod{11}$



$\sum_{0\le r\le n}a_r10^r\equiv(a_0+a_2+a_4+\cdots)-(a_1+a_3+a_5+\cdots)\pmod{11}$



$(2)$



$N=\sum_{0\le r\le n}a_r10^r\equiv \sum_{0\le r\le m-1}a_r10^r\pmod {10^m}\equiv \sum_{0\le r\le m-1}a_r10^r\pmod {2^m}$ as $2^s\mid 10^s$ where integer $s\ge0$




This explains why $2^m\mid N\iff $ the number formed by the last $m$ digits of $N$ is divisible by $2^m$



For example, $2524$ will be divisible by $2^2=4$ as $24$ is, but $2514$ will not be divisible by $2^2=4$ as $14$ is not.



Similarly for $5^m$



$(3)$



For any number $y$ co-prime with $10,$ we can have a reduction formula as follows:




If a number is $10a+b,$ we can find integers $u,v$ such that $10u+y\cdot v=1$ (using Bézout's Identity)



So, $u(10a+b)+v\cdot y\cdot a=a(10u+y\cdot v)+u\cdot b=a+u\cdot b\implies 10a+b$ will be divisible by $y\iff y\mid(a+u\cdot b)$



For example if $y=7, $ we find $3\cdot7+(-2)10=1\implies u=-2,v=3$



So, $(a+u\cdot b)$ becomes $a-2b$



If $y=19,$ we find $2\cdot10+(-1)19=1\implies u=2\implies a+u\cdot b=a+2b$




We can always use convergent property of continued fractions to find $u,v$.



There is no strong reason why this cannot be generalized to any positive integer base.
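Here is a small Python sketch of this reduction test (the function name and stopping rule are mine; $u=-2$ for $y=7$ and $u=2$ for $y=19$, as derived above):

def divisible_by(n, y, u):
    # Repeatedly map 10a + b -> a + u*b; y | n iff y divides the final value.
    while abs(n) >= 10 * abs(u):
        a, b = divmod(n, 10)
        n = a + u * b
    return n % y == 0

print(divisible_by(1729, 7, -2))  # True: 1729 = 7 * 247
print(divisible_by(1729, 19, 2))  # True: 1729 = 19 * 91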


derivatives - Steps of finding an absolute extremum on an open interval



$$f(x)=\cot x-\sqrt 2 \csc x,\quad I=(0,\pi)$$



Show that the function $f$ has an absolute extremum on the given interval $I$ and find that value.



I've found the local maximum point from the first derivative, and I've shown that the second derivative at that point is less than zero.
I will find the values at the endpoints, but what points should I take as the endpoints on an open interval?
I'll also find the value at the critical point, note that it is the only critical point on the interval, and then compare all values to determine which one is the absolute maximum value. Are these steps right?


Answer




Sounds about right! To find the absolute extrema of a differentiable (!) function on an interval, one should indeed check the critical points (where the first derivative is zero) and the boundary points, then compare all found values and pick the largest (smallest).



As you pointed out, sometimes intervals are open and boundaries may be $+\infty$ or $-\infty$. In this case, you should resort to using limits instead of just evaluating at the boundary points. In your case, calculate
$$
\lim_{x\to 0^{+}} f(x) \quad \text{ and } \quad \lim_{x \to \pi^{-}} f(x)
$$
If one of these values (which may be infinite) is larger (smaller) than the values attained in the critical points, then your function has an extremum at the boundaries. If the boundaries are not included in your domain, your function then does not have an absolute maximum (minimum) within the given interval.



Let me know if anything is still unclear.
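As a concrete illustration of these steps on the question's function, a hedged sympy sketch (sympy's output formatting may differ slightly):

from sympy import symbols, cot, csc, sqrt, diff, solve, limit, pi, simplify

x = symbols('x')
f = cot(x) - sqrt(2) * csc(x)

print(solve(diff(f, x), x))                       # includes pi/4, the critical point in (0, pi)
print(simplify(f.subs(x, pi / 4)))                # -1, the value at the critical point
print(limit(f, x, 0, '+'), limit(f, x, pi, '-'))  # -oo -oo, so -1 is the absolute maximum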


Does the series $\sum_{n=1}^{\infty}\ln\left(\frac{n\left(n+2\right)}{\left(n+1\right)^2}\right)$ converge?

Does series
$\sum _{n=2}^{\infty \:}\ln\left(\frac{n\left(n+2\right)}{\left(n+1\right)^2}\right)$ converge?



My idea is to apply the Cauchy test, but I don't know how to simplify it further.




Thanks

integration - How to show that $\int_0^1 \sin \pi t \,\left( \zeta\left(\frac12, \frac{t}{2}\right)-\zeta\left(\frac12, \frac{t+1}{2}\right) \right) dt=1$?



I've been trying to prove Fresnel integrals by real methods and encountered an interesting problem.



Let's start with the known result:



$$\int_0^\infty \sin y^2 dy = \sqrt{\frac{\pi}{8}}$$




Can we prove it without complex methods?



I have tried to do the following:



$$\int_0^\infty \sin y^2 dy =\frac{1}{2} \int_0^\infty \sin x \frac{dx}{\sqrt{x}} =\frac{1}{2} \sum_{n=0}^\infty (-1)^n \int_0^\pi \sin x \frac{dx}{\sqrt{x+\pi n}}$$



This directly follows from the properties of the sine, except the series only converges conditionally, not absolutely, so there's a question of if we can bring it inside the integral.



I will do it without proper justification for now, but if anyone has a comment on this, I would appreciate it.




So we obtain, after a simple change of variables:



$$\int_0^\infty \sin y^2 dy = \frac{\sqrt{\pi}}{2} \int_0^1 \sin \pi t ~ dt \sum_{n=0}^\infty \frac{(-1)^n}{\sqrt{n+t}} $$



Wolfram gives for this series:



$$\sum_{n=0}^\infty \frac{(-1)^n}{\sqrt{n+t}}=\frac{1}{\sqrt{2}}\left( \zeta \left(\frac12, \frac{t}{2}\right)-\zeta \left(\frac12, \frac{t+1}{2}\right) \right)$$



Which is easy enough to show using the definition of Hurwitz zeta function.




Surprisingly enough, this brings us exactly the known value of the Fresnel integral as a coefficient:



$$\int_0^\infty \sin y^2 dy =\sqrt{\frac{\pi}{8}} \int_0^1 \sin \pi t ~ \left( \zeta \left(\frac12, \frac{t}{2}\right)-\zeta \left(\frac12, \frac{t+1}{2}\right) \right) dt$$



Which means that we need to prove the identity in the title of the question:




$$\int_0^1 \sin \pi t ~ \left( \zeta \left(\frac12, \frac{t}{2}\right)-\zeta \left(\frac12, \frac{t+1}{2}\right) \right) dt=1$$




Can we prove this by real methods, not using the Fresnel integral?




Mathematica confirms this identity numerically.
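For readers without Mathematica, a cross-check with mpmath (whose zeta(s, a) is the Hurwitz zeta; illustrative only):

from mpmath import mp, sin, pi, zeta, quad

mp.dps = 20
f = lambda t: sin(pi * t) * (zeta(0.5, t / 2) - zeta(0.5, (t + 1) / 2))
print(quad(f, [0, 1]))  # ~ 1.0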


Answer



I usually don't answer my own questions, but I literally just derived the solution and I think it's beautiful.



So we know from the link in the OP that:



$$\zeta(s,a)=\frac{1}{\Gamma(s)} \int_0^\infty \frac{z^{s-1} dz}{e^{a z} (1-e^{-z})}$$




Which makes our expression:



$$\zeta \left(\frac12, \frac{t}{2}\right)-\zeta \left(\frac12, \frac{t+1}{2}\right)=\frac{1}{\sqrt{\pi}} \int_0^\infty \frac{e^{- t z/2}(1-e^{-z/2}) dz}{(1-e^{-z}) \sqrt{z}}=$$



$$=\frac{2}{\sqrt{\pi}} \int_0^\infty \frac{e^{- t u^2/2}(1-e^{-u^2/2}) du}{1-e^{-u^2}}$$



Now we can see that it's very easy to take the integral over $t$ (integration by parts):



$$\int_0^1 \sin \pi t~ e^{- t u^2/2} dt=\frac{4 \pi}{4 \pi^2 +u^4} (1+e^{-u^2/2})$$




Now we substitute this into the second integral to get (this is the beautiful part):



$$8 \sqrt{\pi} \int_0^\infty \frac{(1+e^{-u^2/2})(1-e^{-u^2/2}) du}{(1-e^{-u^2})(4 \pi^2 +u^4)}=8 \sqrt{\pi}\int_0^\infty \frac{du}{4 \pi^2 +u^4}$$



After a change of variables we have:



$$\frac{2 \sqrt{2}}{\pi} \int_0^\infty \frac{dv}{1 +v^4}=\frac{2 \sqrt{2}}{\pi} \frac{\pi}{2 \sqrt{2}}=1$$



Just as it was supposed to be.




God, Mathematics is simply perfect sometimes.






Appendix



Trying to prove in a simple way that the integral formula for $\zeta(1/2,a)$ equals the series definition.



$$ \int_0^\infty \frac{z^{-1/2} dz}{e^{a z} (1-e^{-z})}=2 \int_0^\infty \frac{e^{-a u^2}du}{ 1-e^{-u^2}}=2 \sum_{n=0}^\infty \int_0^\infty e^{-(a+n) u^2} du=$$




$$=2 \sum_{n=0}^\infty \frac{1}{\sqrt{a+n}} \int_0^\infty e^{-w^2} dw= \sqrt{\pi} \sum_{n=0}^\infty \frac{1}{\sqrt{a+n}} $$



Formally, this exactly fits the series definition, but it doesn't converge (it's alright, as the function for $s=1/2$ is defined by analytic continuation).



On the other hand, for the particular function in my case, the proof works well, as the series inside the integral becomes alternating due to the factor $(1-e^{-u^2/2})$ in the numerator, so everything converges.



I would say that all of this constitutes a nice real proof of the Fresnel integrals, especially since the Poisson integral also has a few real proofs.



Though I'm definitely not claiming this is a new result. It was new for me, but a two second google search found me this paper https://www.jstor.org/stable/2320230, and I'm sure there's plenty more.



Monday 28 August 2017

3 digit numbers whose one digit is the average of the other two


Question: How many three-digit numbers are composed of three distinct digits such that one digit is the average of the other two?





How can this be solved with an arithmetic sequence?



I can see that the two digits being averaged must have the same parity, so that their sum is even. If $\overline{abc}$ is the number, then $a+b=2c$ is one case, and doing that for all three digits gives three equations. That's as far as I can get.

calculus - Compute this limit $\lim_{x\to0}\frac{\sin(x^2+\frac{1}{x})-\sin\frac{1}{x}}{x}$ using L'Hôpital's rule



I have asked about this problem before, but I couldn't understand the explanation: how a difference of sines turns into a product of a sine and a cosine of $\frac{A-B}{2}$ and $\frac{A+B}{2}$, $$\sin(A)-\sin(B)=2\sin\left(\frac{A-B}{2}\right)\cos\left(\frac{A+B}{2}\right),$$ and I don't understand in this step how/why $\frac{A-B}{2}$ and $\frac{A+B}{2}$ were replaced by $\frac{x^2}{2}$ and $\frac{x^2}{2}+\frac{1}{x}$:

$$\lim_{x\to0}\frac{\sin\left(x^2+\frac1x\right)-\sin\left(\frac1x\right)}{x}= \lim_{x\to0}\frac{2\sin\left(\frac{x^2}{2}\right)\cos\left(\frac{x^2}{2}+\frac1x\right)}{x}.$$


Answer



Hint: it's easy to prove $$\sin(x+y)+\sin(x-y)=2\sin(x)\cos(y);$$
then put $y=\frac{A+B}{2}$, $x=\frac{A-B}{2}$.
Also, $$\sin(x)\sim x, \qquad \cos(x)\sim 1-\frac{x^2}{2},$$ because $$\lim_{x\to0}\frac{\sin x}{x}=\lim_{x\to0}\frac{\cos x}{1-\frac{x^2}{2}}=1.$$
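Spelling out the substitution the question asks about (a short sketch): with $A=x^2+\frac1x$ and $B=\frac1x$,

$$\frac{A-B}{2}=\frac{x^2}{2},\qquad \frac{A+B}{2}=\frac{x^2}{2}+\frac1x,$$

so the identity gives $\sin A-\sin B=2\sin\left(\frac{x^2}{2}\right)\cos\left(\frac{x^2}{2}+\frac1x\right)$; since $2\sin\left(\frac{x^2}{2}\right)\sim x^2$ and the cosine factor is bounded, the quotient behaves like $x\cos\left(\frac{x^2}{2}+\frac1x\right)\to 0$.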


calculus - $\frac{f(t+T)}{f(T)} = f(t)$ then $f$ is an exponential function

If $f$ is a function such that for all $(T, t) \in \mathbb{R}^2_+$ we have : $$\frac{f(t+T)}{f(T)} = f(t)$$




Then how can I prove that $f$ is an exponential function ?

Sunday 27 August 2017

calculus - Closed form of integral using contour integration



Here is the integral I am interested in evaluating using contour integration:




Prove that: $$\int_0^\infty \frac{{\rm d}x}{(1+x^2)(1+x^r)}=\frac{\pi}{4}$$





That is, the above integral is independent of $r$, which is assumed to be a positive real number.



I have a couple of approaches using real analysis. For instance,



$$\begin{align*}
\int_{0}^{\infty}\frac{{\rm d}x}{\left ( 1+x^2 \right )\left ( 1+x^r \right )} &=\int_{0}^{1}\frac{{\rm d}x}{\left ( 1+x^2 \right )\left ( 1+x^r \right )}+ \int_{1}^{\infty}\frac{{\rm d}x}{\left ( 1+x^2 \right )\left ( 1+x^r \right )} \\
&\overset{u=1/x}{=\! =\! =\!}\int_{0}^{1}\frac{{\rm d}x}{\left ( 1+x^2 \right )\left ( 1+x^r \right )} +\int_{0}^{1}\frac{x^r}{\left ( x^2+1 \right )\left ( 1+x^r \right )}\, {\rm d}x \\
&= \require{cancel} \int_{0}^{1}\frac{\cancel{x^r+1}}{\left ( x^2+1 \right )\cancel{\left ( x^r+1 \right )}}\, {\rm d}x \\

&= \int_{0}^{1}\frac{{\rm d}x}{x^2+1}\\
&= \arctan 1 = \frac{\pi}{4}
\end{align*}$$



or by applying the sub $x=\tan u$ we have that:



$$\begin{align*}
\int_{0}^{\infty}\frac{{\rm d}x}{\left ( 1+x^2 \right )\left ( 1+x^r \right )} &\overset{x=\tan u}{=\! =\! =\! =\!}\int_{0}^{\pi/2}\frac{\sec^2 u}{\left ( 1+\tan^2 u \right )\left ( 1+\tan^r u \right )}\, {\rm d}u \\
&=\int_{0}^{\pi/2}\frac{{\rm d}u}{1+\tan^r u} \\
&= \int_{0}^{\pi/2}\frac{{\rm d}u}{1+\cot^r u}

\end{align*}$$



Since $\cot u = 1/\tan u$, adding the last two integrals gives $\int_0^{\pi/2}\mathrm{d}u=\pi/2$, so each of them again evaluates to $\pi/4$.



A third approach would be to kill it directly with the sub $u=1/x$ which leads to, if we name the initial integral $I$,



$$I= \frac{\pi}{2}-I$$



and the result follows.




Now, what I am interested in here is to evaluate this using contour integration. I have a feeling that the contour will be a wedge-shaped contour with some angle, let me call that $\omega$, but that $r$ causes some problems. Therefore I don't know how to integrate it using complex analysis. Any help appreciated.


Answer



(Edited to allow for singularity at $z=-1$.)



We wish to find
$$\int_0^\infty \frac{1}{(1 + x^2)(1 + x^r)} \mathrm{d}x$$
by contour integration. We can see that the integral along an arc with $|z|=R$ will tend to zero as $R\rightarrow\infty$. We also know we want the integral along the real axis, and we see that we may be able to draw symmetry between the negative and positive real axis, so we may consider the arc of angle $0\leq\theta\leq\pi$ joined by the line segment $[-R,R]$. However, for $r=3,5,7\ldots$ we see there is a singularity at $z=-1$, so we instead integrate along the arc of radius $R$ and angle $0\leq\theta\leq\pi-\epsilon$ where $\epsilon\ll1$, and the line segments $[-Re^{-\epsilon i},0]$ and $[0,R]$. We call the union of these lines $\Gamma$ as shown below and we later take the limits $R\rightarrow\infty$ and $\epsilon\rightarrow0$.
Contour illustration



Since we wish to draw symmetries between the positive and negative real line we consider the integral

$$\int_\Gamma f(z) \mathrm{d}z, \qquad f(z)=\frac{1}{(1+z^2)\big( 1 + (z^2)^{r/2} \big)}.$$



The singularities of $f$ are at $z^2=-1$ and $z^r=-1$, which gives the points $z=i$ (shown on diagram), $z=-i$, $z=\exp(i\pi(1+2k)/r)$ for $k=0,1,\ldots,r-1$. We calculate the residues of the singularities that lie inside the domain bounded by $\Gamma$:
$$\begin{align}
\mathrm{res}(f; i) &= \frac{1}{2i\big( 1 + (-1)^{r/2} \big)} \\
&= \frac{1}{2i\big( 1 + e^{i\pi r/2} \big)} \\
&= \frac{1 + e^{-i\pi r/2}}{4i( 1 + \cos(\pi r/2))} \\
\mathrm{res}\Big( f; \exp\Big( \frac{i\pi}{r}(1+2k) \Big) \Big) &= \frac{1}{\big( 1+e^{2i\pi(1+2k)/r} \big) re^{i\pi(r-1)(1+2k)/r}} \\
&= \frac{1}{-2r \cos(\pi(1+2k)/r)}
\end{align}$$




Note that all of the residues arising from $z^r=-1$ are real, so by Cauchy's residue theorem we obtain
$$\int_\Gamma f(z) \mathrm{d}z = \frac{\pi}{2} + i\xi$$
where $\xi\in\mathbb{R}$, and we should in fact find that the real parts of the residues cancel out when added, giving $\xi=0$.



We next parameterise the arc (which we call $\gamma$) by $z=Re^{i\theta}$, to see that
$$\begin{align}
\left| \int_\gamma f(z) \mathrm{d}z \right| &= \left| \int_0^{\pi-\epsilon} \frac{iRe^{i\theta}}{(1 + R^2e^{2i\theta})(1 + R^re^{ir\theta})} \mathrm{d}\theta \right| \\
&\leq (\pi-\epsilon) R \frac{1}{R^2 - 1} \rightarrow 0 \quad \mathrm{as} \quad R\rightarrow\infty,
\end{align}$$

where we have used the fact the arc length is $(\pi-\epsilon) R$, and the reverse triangle inequality in the last inequality.



Finally, taking $R\rightarrow\infty$ and parameterising $z$ along the line intervals as $z=e^{-\epsilon i}x$ and $z=x$, we are left with
$$\begin{align}
\frac{\pi}{2} + i\xi &= \int_\Gamma f(z) \mathrm{d}z \\
&= \int_{-\infty}^0 \frac{e^{-\epsilon i}}{\big( 1 + \big( e^{-\epsilon i}x \big)^2 \big) \big( 1 + \big( e^{-2\epsilon i}x^2 \big)^{r/2} \big)} \mathrm{d}x + \int_{0}^\infty \frac{1}{(1 + x^2)\big( 1 + (x^2)^{r/2} \big)} \mathrm{d}x.
\end{align}$$
The last step is to take the limit $\epsilon\rightarrow0$ to obtain
$$\int_{-\infty}^0 \frac{1}{(1 + x^2)\big( 1 + (x^2)^{r/2} \big)} \mathrm{d}x + \int_{0}^\infty \frac{1}{(1 + x^2)\big( 1 + (x^2)^{r/2} \big)} \mathrm{d}x = \frac{\pi}{2} + i\xi,$$
where we must have $\xi=0$ since the left-hand side is real, and substituting $-x$ for $x$ in the first integral gives

$$2\int_{0}^\infty \frac{1}{(1 + x^2)\big( 1 + (x^2)^{r/2} \big)} \mathrm{d}x = \frac{\pi}{2}$$
and the required result follows.
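A quick numerical spot-check that the value really is $\pi/4$ independently of $r$ (Python with scipy; illustrative only, not part of the original answer):

import numpy as np
from scipy.integrate import quad

for r in (0.5, 2.0, 3.0, 7.3):
    val, _ = quad(lambda x: 1 / ((1 + x**2) * (1 + x**r)), 0, np.inf)
    print(r, val, np.pi / 4)  # each val ~ 0.785398...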


real analysis - Is this proof correctly written? Show that the sum of two uniformly continuous functions on $A$ is uniformly continuous on $A$



If $f$ and $g$ are uniformly continuous functions in $A$ show that $f+g$ is uniformly continuous in $A$.



Proof: because $f$ and $g$ are uniformly continuous on $A$ we can write




$$\forall\varepsilon>0,\exists\delta_1>0,\forall x,y\in A:|x-y|<\delta_1\implies|f(x)-f(y)|<\frac{\varepsilon}2$$
$$\forall\varepsilon>0,\exists\delta_2>0,\forall x,y\in A:|x-y|<\delta_2\implies|g(x)-g(y)|<\frac{\varepsilon}2$$



Then if I call $h(x)=f(x)+g(x)$ we can see that



$$|h(x)-h(y)|\le|f(x)-f(y)|+|g(x)-g(y)|$$



and if I took $\delta_0=\min\{\delta_1,\delta_2\}$ then I can finally write



$$\forall\varepsilon>0,\exists\delta_0>0,\forall x,y\in A:|x-y|<\delta_0\implies|f(x)-f(y)|+|g(x)-g(y)|<\frac{\varepsilon}2+\frac{\varepsilon}2=\varepsilon$$




My question: is the proof correctly written? I'm unsure about the use of "$\frac{\varepsilon}2$" to make the proof work. Thank you in advance.


Answer



The idea is correct, but since you seem to be learning to write out formal proofs I will add a few critiques:




  • You say that "we can see that $|h(x)-h(y)|$...", but this feels like it's missing justification. Just a brief nod towards the triangle inequality would go a long way towards distinguishing this writeup from that of someone who was just guessing, and in a truly formal setting you would make explicit use of the triangle inequality to justify this.


  • You are right to point out that introducing $\epsilon/2$ right from the beginning is a bit odd. It's very easy to deduce that from the formal definition, but in a fully-fleshed out proof this deduction should be explicit. Starting from $\epsilon > 0$ you know that $\epsilon/2 > 0$ and thus you can apply the uniform continuity definition to $\epsilon/2$. I believe this should be spelled out (of course, if this was a journal paper, I would apply a different standard of writing).


  • There is a small stylistic disconnect between the second-last line and the conclusion. In the second-last line you write "$\delta_0 = \min\{\delta_1, \delta_2\}$" which implicitly creates a context where you have already chosen a particular $\epsilon > 0$, since otherwise $\delta_1$ and $\delta_2$ don't exist. Then in the conclusion you use "$\forall \epsilon$", which feels like you've escaped that implicit context without actually saying so.





I believe ideally you should make this context very explicit: first explain that you are choosing an $\epsilon > 0$, then deduce the existence of $\delta_1, \delta_2$ corresponding to that choice of $\epsilon$ (this would be a more traditional place at which to introduce the $\epsilon/2$), then perform the calculation showing that $|h(x)-h(y)| < \epsilon$. Having closed that figurative bracket, you are then free to say $\forall \epsilon>0$ because the preceding argument works in generality.


real analysis - Graph of discontinuous linear function is dense



$f:\mathbb{R}\rightarrow\mathbb{R}$ is a function such that for all $x,y$ in $\mathbb{R}$, $f(x+y)=f(x)+f(y)$. If $f$ is continuous, then of course it has to be linear. But here $f$ is NOT continuous. Then show that the set $\{(x,f(x)) : x \in \mathbb{R}\}$ is dense in $\mathbb{R}^2$.


Answer



Let $\Gamma$ be the graph.



If $\Gamma$ is contained in a $1$-dimensional subspace of $\mathbb R^2$, then it in fact coincides with that line. Indeed, the line will necessarily be $L=\{(\lambda,\lambda f(1)):\lambda\in\mathbb R\}$, and for all $x\in\mathbb R$ the line $L$ contains exactly one element whose first coordinate is $x$, so that $\Gamma=L$. This is impossible, because it clearly implies that $f$ is continuous.



We thus see that $\Gamma$ contains two points of $\mathbb R^2$ which are linearly independent over $\mathbb R$, call them $u$ and $v$.




Since $\Gamma$ is a $\mathbb Q$-subvector space of $\mathbb R^2$, it contains the set $\{au+bv:a,b\in\mathbb Q\}$, and it is obvious that this is dense in the plane.


linear algebra - $A\in M_n(\mathbb{R})$ symmetric matrix, $A-\lambda I$ is positive-definite matrix - prove: $\det A\geq a^n$



Let $a>1$ and $A\in M_n(\mathbb{R})$ a symmetric matrix such that $A-\lambda I$ is a positive-definite matrix (all eigenvalues $> 0$) for every $\lambda < a$. Prove that $\det A\geq a^n$.


First, I'm not sure what it means that $A-\lambda I$ is positive definite for every $\lambda < a$: is it that for every such $\lambda$ the determinant is $> 0$ and all eigenvalues are bigger than $0$, or is it not?



Then, if it's symmetric I can diagonalize it, but I'm not sure what to do...



Thanks!


Answer



$A-\lambda I$ being positive definite for every $\lambda < a$ means in particular that for every $\epsilon>0$, $A-(a-\epsilon) I$ is positive definite. That means each eigenvalue of $A$ is larger than $a-\epsilon$, thus their product $\det A\ge \prod (a-\epsilon)=(a-\epsilon)^n$; let $\epsilon$ go to zero and you get what you want.


Saturday 26 August 2017

summation - Summing over Subsets of $\{2, 3, 4, 5, 6, \ldots, n \}$



I came across the function which counts the distinct prime factors of a number. I then generalized the function $w$ to take as argument a set instead of a number and return the number of prime factors that are shared among its elements.



If $\underline{n} = \{2, 3, 4, 5, 6, \ldots, n \}$, then $w(\;\underline{6}\;) = 0$. Some other examples: $w(\{2, 4, 6, 8 \}) = 1$, $w(\{15, 30, 45, 60 \}) = 2$.



Now, I'd like to sum all subsets of $\underline{n}$ by applying the function $w(J)$ to each subset $J$ of $\underline{n}$, where $|J|>1$. To put this into math form, let $\mathcal{P}(\underline{n})$ be the power set of $\underline{n}$. Then the set of sets $D(\underline{n}) = \mathcal{P}(\underline{n}) \backslash \{J; J \subseteq \underline{n}, |J| \le 1 \}$.



The quantity I would like to find is:
$$\sum_{J \in D(\underline{n})} w(J)$$




in terms of $w$. $J$ ranges across the sets in $D(\underline{n})$. I have tried using the binomial coefficient to get all possible subsets of one specific type of set, for instance those sets sharing one factor, two factors, etc., but I think I'm missing a basic trick for finding the relevant sets of $D(\underline{n})$ because there are always some subsets I've missed.



What is the best way to calculate the above sum in terms of $w$? I do not want to evaluate $w$, since I'd like to be able to apply the strategy you use in your answer to a variety of functions that take a set as argument and return an integer based on the factors of the elements in the set. I want to know how to break up $D(\underline{n})$ into all the relevant subsets to count them.


Answer



Calculating your desired sum is easier if you "partition" the sets in $D(\underline{n})$ not into those with zero, one, two, etc. prime factors but into those divisible by $2$, $3$, $5$, $7$, and so forth. ("Partition" is quoted here because obviously this isn't a true partition; some sets will reoccur.) Let $w_p(J)$ be $1$ if every element in $J$ is divisible by $p$, and $0$ otherwise. Let $S_p(n) = \sum_{J \in D(\underline{n})} w_p(J)$. The number of subsets of $\{1, \ldots, n\}$ (starting from $2$ instead of $1$ makes no difference) in which every element is divisible by $p$ is the same as the number of subsets of $\{1, \ldots, \left\lfloor \frac{n}{p} \right\rfloor\}$, namely $2^{\lfloor n/p \rfloor}$. Eliminating the zero- and one-element sets gives $$S_p(n) = 2^{\lfloor n/p \rfloor} - \left\lfloor \frac{n}{p} \right\rfloor- 1$$ and of course $$S(n) = \sum_{\substack{2 \leq p \leq n/2 \\ p\, \text{prime}}} S_p(n)$$ which is a sum for which I wouldn't expect a convenient closed form to exist.
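A brute-force check of $S(n)$ against this formula for a small $n$ (Python with sympy; the helper names are mine, and the multi-argument gcd needs Python 3.9+):

from itertools import combinations
from math import gcd
from sympy import primefactors, primerange

def w(J):
    # number of primes dividing every element of J
    return len(primefactors(gcd(*J)))

n = 12
direct = sum(w(J) for r in range(2, n) for J in combinations(range(2, n + 1), r))
formula = sum(2 ** (n // p) - n // p - 1 for p in primerange(2, n // 2 + 1))
print(direct, formula)  # both equal 69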


calculus - Find the limit: $\lim\limits_{n\rightarrow\infty}\int_{n}^{n+7}\frac{\sin{x}}{x}\,\mathrm dx$



I have this exercise I don't know how to approach :





Find the limit : $$\lim_{n\rightarrow\infty}\int_{n}^{n+7}\frac{\sin x}{x}\,\mathrm dx$$




I can see that as $n\rightarrow\infty$ the area under the graph of this function becomes really small, since $\sin{x} \leq 1$ and so $\dfrac{\sin{x}}{x}\to 0$ as $x\to\infty$, but can I get something from it?


Answer



Hint: $\def\abs#1{\left|#1\right|}$
$$\abs{\int_n^{n+7}\frac{\sin x} x \, dx}\le \int_n^{n+7}\frac{\abs{\sin x}}{x}\, dx\le \frac 1n \int_n^{n+7}\abs{\sin x}\, dx \le \frac{7}{n}\xrightarrow[n\to\infty]{}0 $$


combinatorics - Sum identity using Stirling numbers of the second kind




Experimenting with series representations of $e^{x e^x}$ I came across the two seemingly different power series
$$e^{x e^x} = \sum_{n=0}^{\infty} x^n \sum_{k=0}^{n} \frac{(n-k)^k}{(n-k)! \cdot k!}$$
and
$$e^{x e^x} = \sum_{n=0}^{\infty} x^n \sum_{k=0}^{n} \frac{1}{(n-k)!} \sum_{i=0}^{k} \frac{1}{(k-i)!} {k-i \brace i}\,.$$
They should be and are in fact identical. By equating the coefficients it is obvious that
$$\sum_{k=0}^{n} \frac{(n-k)^k}{(n-k)! \cdot k!} = \sum_{k=0}^{n} \frac{1}{(n-k)!} \sum_{i=0}^{k} \frac{1}{(k-i)!} {k-i \brace i}\,.$$
Going through some values of $n$ reassures the correctness of the equation, however I have no idea how one could show this algebraically. An equivalent equation would be
$$\sum_{k=0}^{n} {n \choose k} (n-k)^k = \sum_{k=0}^{n} {n \choose k} \sum_{i=0}^{k} {k \choose i} \sum_{j=1}^{i} (-1)^{i-j} {i \choose j} j^{k-i}\,,$$
which is obtained through multiplication of both sides by $n!$ and an explicit formula for the Stirling numbers. No matter what I try, I seem to run into a brick wall. How can I (algebraically) show that the equation is correct for arbitrary $n$?



Answer




Here is another variation of the theme. In order to prove the binomial identity
\begin{align*}
\sum_{k=0}^{n} \binom{n}{k} (n-k)^k
=\sum_{k=0}^{n} \binom{n}{k} \sum_{i=0}^{k}\binom{k}{i}
\sum_{j=0}^{i}\binom{i}{j} (-1)^{i-j} j^{k-i}\tag{1}
\end{align*}





we take a look at a binomial inverse pair. Let $F(x)=\sum_{n=0}^\infty f_n \frac{x^n}{n!}$ and $G(x)=\sum_{n=0}^\infty g_n \frac{x^n}{n!}$ be exponential generating functions. We consider the following




Binomial inverse pair:
\begin{align*}
F(x)&=G(x)e^x&G(x)&=F(x)e^{-x}\\
f_n&=\sum_{k=0}^{n}\binom{n}{k}g_k&g_n&=\sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}f_k\qquad \tag{2}
\end{align*}





This simple one and many other examples of binomial inverse pairs can be found e.g. in the classic Combinatorial Identities by John Riordan ($1968$).




We obtain from (2) by successively substituting $f_n$ resp. $g_n$
\begin{align*}
f_n&=\sum_{k=0}^{n}\binom{n}{k}g_k\tag{3}\\
&=\sum_{k=0}^{n}\binom{n}{k}\sum_{i=0}^k\binom{k}{i}(-1)^{k-i}f_i\\
&=\sum_{k=0}^{n}\binom{n}{k}\sum_{i=0}^k\binom{k}{i}(-1)^{k-i}\sum_{j=0}^{i}\binom{i}{j}g_j\tag{4}
\end{align*}





Setting $g_k=k^{n-k}$ we obtain from (3) and (4)




The following is valid for $n\geq 0$:
\begin{align*}
\sum_{k=0}^{n}\binom{n}{k}k^{n-k}=\sum_{k=0}^{n}\binom{n}{k}\sum_{i=0}^k\binom{k}{i}(-1)^{k-i}\sum_{j=0}^{i}\binom{i}{j}j^{i-j}\tag{5}
\end{align*}



Observe the LHS of (5) corresponds to the LHS of (1) by replacing $k\rightarrow n-k$. So, we are nearly finished.




Finally we have to show the RHS of (5) is equal to the RHS of (1). Be careful: although they look similar, they are not the same!



The validity of
\begin{align*}
\sum_{k=0}^{n} \binom{n}{k} \sum_{i=0}^{k}\binom{k}{i}\sum_{j=0}^{i}\binom{i}{j} (-1)^{i-j} j^{k-i}
=\sum_{k=0}^{n}\binom{n}{k}\sum_{i=0}^k\binom{k}{i}\sum_{j=0}^{i}\binom{i}{j}(-1)^{k-i}j^{i-j}
\end{align*}
is shown in this question and the claim follows.





Note: During analysis of the identity, when I had the association with binomial inverse pairs I was pretty sure that the expressions (4) and (5) should already result in the RHS of (1). It was really amazing to see that this was not the case and that it needed a considerable amount of effort to complete the answer.


Does the integral $\int\limits_0^{\infty}\frac{\sin^{2}x}{x^{2}\ln(1+\sqrt x)}\, dx$ converge?



Does the integral




$$\int\limits_0^\infty \frac{\sin^{2}x}{x^{2}\ln(1+\sqrt x)} dx$$



converge?



It's easy to check that $\int\limits_1^{\infty}\frac{\sin^{2}x}{x^{2}\ln(1+\sqrt x)} dx$ does converge, but I couldn't find the right method for either proving or disproving that $\int\limits_0^1 \frac{\sin^{2}x}{x^{2}\ln(1+\sqrt x)} dx$ converges.


Answer



We might look for an equivalent of $x\mapsto \frac{\sin^2(x)}{x^2 \ln(1+\sqrt{x})}$ near zero.



Hint: $$ \ln(1+x) = x + o(x)$$


elementary number theory - Test about prime gaps: which conclusions can be drawn from the results?



I did the following test:





For every prime, take the gap $dp$ to the previous prime $pp$ and the gap $dn$ to the next prime $np$, then calculate $a=(pp\ \mathrm{mod}\ dp)$ and $b=(np\ \mathrm{mod}\ dn)$. If $a$ or $b$ $\in \Bbb P$ or equal to $1$ then the result is successful; if not, the result is an error.




I have tested it with Python in the interval $[1,10^8]$ and the results are very interesting, the error percentage is very low, close to 1%:



Total errors: 62157 of 5761453 total primes. 
Successful primes: 5758741 in the interval [1,100000000]
%Error vs Total primes: 1.0788424378364276%
Family mod 3 errors: 59445

Uncategorized errors: 2712


Let $E$ be the set of prime numbers that are the error results of the test. I would like to isolate them at least partially, so I am trying to find a non basic commonality between them.



For instance, this is the error set $E$ for the interval $[1,57829]$, $1409$ is the first prime that does not succeed:




$E=\{1409,2039,3109,10799,13669,16519,20369,27631,31859,42359,42989,44221,46549,47609,54959,55259, 55579, 55589, 57829\}$





When the interval is extended, other primes like $68863$ or $71647$ will appear, so not all of them will finish with $9$ or $1$.



I have tried checking at OEIS, and some basic modularity reviews, but without luck: I cannot see any special relationship between them, apart from being primes, that they belong to $1+6k$ or $5+6k$ (as usual, because they are prime numbers), and that most of them finish with $9$ (this could be something important, but I cannot see the reason behind it). Maybe it does not exist at all, but maybe there is something I cannot see.



Regarding the value of the congruences, I have been able to verify that, at least in the tested interval, it is in most cases a multiple of $3$ (I called them "family mod 3" errors), but I guess that as the interval gets bigger, other multiples ("uncategorized errors") of a prime number will appear more often.




The reason for this test is just that I wanted to learn if the prime gaps are following some kind of pattern, at least for some cases. The idea came from a very different test I did some days ago (link here) for a quite different topic.





From those results, I thought that if there is such symmetry, maybe I could try to find similar patterns in between primes, so I started making the same test I wrote in the present question.



Originally I was using the minimum distance $d$ that applied to the prime $p$ makes $p-d$ and $p+d$ primes. Just by chance I realized that in most of the cases $(p\mod\ d)$ was also prime or $1$, so I tried to make a longer test. Finally, after some tests, I did the version of the test that I wrote in this question.



If somebody is interested to try or modify the test, here is the Python code:



def distance_primality():
    from sympy import nextprime, prevprime
    from gmpy2 import is_prime

    n = 5
    errorcount = 0
    family_mod_3_errorcount = 0
    totalpr = 0
    test_limit = 100000000

    print("RESULT\tP\tPP\tD1\tMod\tpr/1?\tNP\tD2\tMod\tpr/1?")
    print("------\t-\t--\t--\t---\t-----\t--\t--\t---\t-----")

    while n < test_limit:
        totalpr += 1

        pp = prevprime(n)      # previous prime and ...
        np = nextprime(n + 1)  # ... next prime (both are prime by construction)

        d1 = n - pp  # gap to the previous prime
        d2 = np - n  # gap to the next prime

        # a side succeeds when the remainder is 1 or a prime
        a_ok = pp % d1 == 1 or is_prime(pp % d1)
        b_ok = np % d2 == 1 or is_prime(np % d2)

        row = "\t".join(str(v) for v in
                        (n, pp, d1, pp % d1, "Yes" if a_ok else "No",
                         np, d2, np % d2, "Yes" if b_ok else "No"))

        if a_ok or b_ok:
            print("Success\t" + row)
        elif (pp % d1) % 3 == 0 or (np % d2) % 3 == 0:
            print("ErrF3\t" + row)
            family_mod_3_errorcount += 1
        else:
            print("Error\t" + row)
            errorcount += 1

        n = nextprime(n)

    total_errors = errorcount + family_mod_3_errorcount
    print()
    print("Total errors: %d of %d total primes. Successful primes: %d in the interval [1,%d]"
          % (total_errors, totalpr, totalpr - errorcount, test_limit))
    print("%%Error vs Total primes: %s%%" % (total_errors / totalpr * 100))
    print("Family mod 3 errors: %d" % family_mod_3_errorcount)
    print("Uncategorized errors: %d" % errorcount)

distance_primality()



I would like to share the following questions:





  1. The error percentage seems very low, I have tested through $10^8$, but could it be just a result of the Strong Law of Small Numbers?


  2. Is there a way to isolate at least a subset of the primes at $E$?


  3. Given these results, is there any hint about other type of test that I could do to know more about the observed behavior?






Thank you!


Answer



The initial low density of the "errors" and the fact that initially most of them end in $9$ can readily be explained. I don't know what you mean by "isolate" in question $2$, and I don't think there's any need for further tests as referred to in question $3$.



Except for the prime $2$ and the subsequent prime gap of $1$, primes are odd and prime gaps are even. Thus primes modulo prime gaps are odd. Of the odd numbers up to $13$, only $9$ is not either $1$ or a prime. Thus, until you get up to numbers that have prime gaps of at least $16$ on either side, the only way to get an "error" is to have a remainder of $9$ on both sides, and even that requires a prime gap of at least $10$ on either side, and even then there's at most a $1$ in $25$ chance for both remainders to be $9$. The typical prime gap of $\log x$ is $10$ at $\mathrm e^{10}\approx22026$. We can estimate the probability of two successive prime gaps of length $10$ at your first "error" value of $1409$ by considering the prime gaps as very roughly Poisson-distributed with parameter $\log x$; then the probability for a gap of size at least $10$ is



$$
\sum_{k=10}^\infty\frac{\log^kx\mathrm e^{-\log x}}{k!}=\sum_{k=10}^\infty\frac{\log^kx}{k!\cdot x}\;,
$$




which for $x=1409$ is about $0.2$, so the overall probability is about $0.2^4=.0016$ (two factors of $0.2$ on either side, one for the probability of a prime gap at least $10$, the other for a remainder of $9$ modulo $10$). So the low initial density of your numbers requires no further systematic explanation.
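The $0.2$ and $0.0016$ figures are easy to reproduce (a Python snippet, illustrative only):

from math import exp, factorial, log

lam = log(1409)  # the Poisson parameter log x at the first "error"
tail = 1 - sum(lam ** k * exp(-lam) / factorial(k) for k in range(10))  # P(gap >= 10)
print(tail, tail ** 4)  # roughly 0.2 and 0.0016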



This also explains the frequent $9$s in the last decimal digit: The most likely prime gap to produce initial "errors" is $10$; the only (in)admissible remainder in that case is $9$; and this constellation only has to occur on either side in order to force the prime in the middle to end in $9$; that is, for the prime in the middle not to end in $9$, we need to have prime gaps of at least $12$ on both sides, which takes even higher primes to become likely.



From an appropriate song:



I've looked at primes from both sides now
From norms and forms and still somehow
It's prime confusions I recall
I really don't know primes at all.


Friday 25 August 2017

Proving by induction that $| x_1 + x_2 + ... + x_n | \leq | x_1 | + | x_2| + ... + | x_n |$



This is the first question in Thomas' Calculus Appendix on Proof by Induction Exercise (Exercise A.1).



As the title suggests, I'd like to prove by induction that $| x_1 + x_2 + ... + x_n | \leq | x_1 | + | x_2| + \ldots + | x_n |$ is true for any n numbers.




You are told to assume that the triangle inequality $|a+b| \leq |a| + |b|$ is true.



I'm 17 and I've only done Proof By Induction in Further Pure 1 (A first year module in Further Pure Mathematics at College, in the UK), so sorry if this seems incredibly simple. So far I have this:



$$
\text{Let n = 2 } \\
\implies LHS = | 1 + 2 | = |3| = 3 \\
\implies RHS = | 1 | + | 2| = 1 + 2 = 3 \\
\text{Therefore I have proved it for n = 2 } \\
\text{Assume that $n=k$ is true, Let $n = k+1$}

$$



And that's about it :).



I know how to say it in words: the LHS is the absolute value of the sum of all $n$ numbers, so when some $x \in \mathbb{R}^-$ the actual value of the sum could be negative, whereas the RHS is the sum of the absolute values of each $x$... but I don't know how to prove it...



Thanks.


Answer



In the induction step you have $|x_1+\ldots+x_n|\le |x_1|+\ldots+|x_n|$ and want to show $|x_1+\ldots+x_n+x_{n+1}|\le |x_1|+\ldots+|x_n|+|x_{n+1}|$. (The base case $n=1$ is trivial.)
Simply plug $a=x_1+\ldots+x_n$, $b=x_{n+1}$ into the triangle inequality to obtain

$$\begin{aligned}|x_1+\ldots+x_n+x_{n+1}|&=|a+b|\le|a|+|b|\\&= |x_1+\ldots+x_n|+|x_{n+1}|\\&\le |x_1|+\ldots+|x_n|+|x_{n+1}|.\end{aligned}$$


proof verification - Counting measure $\sigma$-finite / not $\sigma$-finite for different sets



Let $\zeta\colon\mathcal{P}(\Omega)\to [0,\infty]$ be the counting measure. Show that it is not $\sigma$-finite for $\Omega:=\mathbb{R}$, but that it is for $\Omega:=\mathbb{N}$.





Hello and good afternoon! Here are my attempts:



(1) Consider $\Omega:=\mathbb{N}$.




Define $A_n:=\left\{1,\ldots,n\right\}$ for $n\geq 1$. Then $A_n\in\mathcal{P}(\mathbb{N})$, $A_n\nearrow\mathbb{N}$ and $\zeta(A_n)=n<\infty$ for all $n\geq 1$. So the measure $\zeta$ is $\sigma$-finite.



(2) Now let $\Omega:=\mathbb{R}$. First of all, $\zeta(\mathbb{R})=\infty$, because the counting measure of every infinite set is $\infty$.



Assume there are $A_i\in\mathcal{P}(\mathbb{R})$, $i\geq 1$, with $A_i\nearrow \mathbb{R}$ and $\zeta(A_i)<\infty$ for all $i\geq 1$. Then each $A_i$ is a finite set, and for $A:=\bigcup_{i\geq 1}A_i$ the $\sigma$-additivity and subtractivity of $\zeta$ give
$$
\zeta(A)=\zeta\Bigl(A_1\uplus\biguplus_{k\geq 2}(A_k\setminus A_{k-1})\Bigr)=\zeta(A_1)+\sum_{k\geq 2}\zeta(A_k\setminus A_{k-1})=\zeta(A_1)+\sum_{k\geq 2}\bigl(\zeta(A_k)-\zeta(A_{k-1})\bigr),
$$
which by the continuity of the measure is just
$$
\zeta(A_i)\nearrow\zeta(A).
$$
On the other hand, $A$ is a countable union of finite sets and therefore countable, while $\mathbb{R}$ is uncountable. So $\mathbb{R}\neq A$.



Contradiction: it was assumed that $A_i\nearrow\mathbb{R}$, i.e. $A_i\nearrow A=\mathbb{R}$.



So there do not exist $A_i\in\mathcal{P}(\mathbb{R})$, $i\geq 1$, with $A_i\nearrow\mathbb{R}$ and $\zeta(A_i)<\infty$, i.e. $\zeta$ is not $\sigma$-finite for $\Omega:=\mathbb{R}$.







Perhaps you can tell me whether this is right.



Miro

finite fields - Factorization of $x^8-x$ over $F_2$ and $F_4$



How can I factorize $x^8-x$ over the fields $F_2$ and $F_4$?


Answer



Hint: We have that $x^8-x$ splits as
$$ x(x+1)(x^3+x^2+1)(x^3+x+1)$$
over $\mathbb{F}_2$ and $\gcd(x^8-x,x^4-x)=x^2+x=x(x+1)$.
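If SymPy is available, the $\mathbb{F}_2$ part of the hint can be checked directly (a sketch; SymPy's modulus option only covers prime fields, so a check over $\mathbb{F}_4$ itself would need e.g. Sage or the galois package):

from sympy import symbols, factor_list, Poly

x = symbols('x')
print(factor_list(x**8 - x, modulus=2))
# (1, [(x, 1), (x + 1, 1), (x**3 + x + 1, 1), (x**3 + x**2 + 1, 1)])
f = Poly(x**8 - x, x, modulus=2)
g = Poly(x**4 - x, x, modulus=2)
print(f.gcd(g))  # x**2 + x, i.e. x(x+1)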


calculus - Calculate $\lim_{n \to \infty} \ln \frac{n!^{\frac{1}{n}}}{n}$




How can I calculate the following limit? I was thinking of applying Cesaro's theorem, but I'm getting nowhere. What should I do?



$$\lim_{n \to \infty} \ln \frac{n!^{\frac{1}{n}}}{n}$$


Answer



First let's write $$\ln \frac{n!^{\frac{1}{n}}}{n} = \frac{1}{n} (\ln n! - n\ln n) = \frac{a_n}{b_n}$$ where $a_n = \ln n! - n\ln n$ and $b_n = n$.



Then




$$\begin{align*} \frac{a_{n+1} - a_n}{b_{n+1} - b_n} & = \quad \frac{\ln (n+1)! - (n+1) \ln(n+1) - (\ln n! - n\ln n)}{1} \\ & = \quad \ln(n+1) - (n+1)\ln(n+1) + n \ln n \\ & = \quad -\ln(1 + 1/n)^n \\ & \longrightarrow -1 \ \text{ as } n \to \infty \end{align*}$$



Hence by the Cesàro theorem (a.k.a. Stolz-Cesàro theorem)



$$ \lim_{n\to\infty} \ln \frac{n!^{\frac{1}{n}}}{n} = \lim_{n\to\infty} \frac{a_n}{b_n}= \lim_{n\to\infty} \frac{a_{n+1} - a_n}{b_{n+1} - b_n} = -1$$
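A quick numeric sanity check (a sketch, standard library only; note that math.lgamma(n + 1) computes $\ln n!$):

import math

for n in (10, 1000, 10**6):
    print(n, math.lgamma(n + 1) / n - math.log(n))
# The printed values approach -1, matching the Stolz-Cesàro computation.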


sequences and series - How to prove $\sum\limits_{k=0}^{N} \frac{(-1)^k {N \choose k}}{(k+1)^2} = \frac{1}{N+1} \sum\limits_{n=1}^{N+1} \frac{1}{n}$

In the process of proving a more complicated relation, I've come across the following equality that I'm having trouble proving:




$$
\sum\limits_{k=0}^{N} \frac{(-1)^k {N \choose k}}{(k+1)^2} = \frac{1}{N+1} \sum\limits_{n=1}^{N+1} \frac{1}{n}
$$



I was already able to prove the following similar equality:



$$
\sum\limits_{k=0}^N \frac{(-1)^k {N \choose k}}{k+1} = \frac{1}{N + 1}
$$




but I'm unsure how to proceed with the first one. I assume it has something to do with the fact that every term in the left hand side of the first equality is $\frac{1}{k+1}$ times a term in the left hand side of the second equality. Any help would be greatly appreciated.
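In the meantime, the identity is easy to test exactly with rational arithmetic (a sketch; the helper names lhs and rhs are ad hoc):

from fractions import Fraction
from math import comb

def lhs(N):
    return sum(Fraction((-1)**k * comb(N, k), (k + 1)**2) for k in range(N + 1))

def rhs(N):
    return Fraction(1, N + 1) * sum(Fraction(1, n) for n in range(1, N + 2))

print(all(lhs(N) == rhs(N) for N in range(25)))  # True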

Thursday 24 August 2017

Natural and Real sets of numbers, which one is bigger than another?

For years I have had this question in my mind; a high-school teacher mentioned it in class, but I never found its correct answer.



We have the set of natural numbers $\{1,2,3,4,5,\ldots\}$ and the set of real numbers.



We have two competing arguments for whether these sets are equal in size or one is smaller than the other.



Concept 1



For any number we choose from the natural numbers, we can choose a corresponding number from the set of real numbers, so the sets are equal in size.




Concept 2



The natural numbers are a subset of the real numbers, so the set of natural numbers is smaller than the set of real numbers.



Which concept is correct?

Calculate the limit of the sequence $\lim_{n\rightarrow\infty} a_n,\ n\geqslant1$



Calculate the limit of the sequence




$$\lim_{n\rightarrow\infty}\ a_n, n\geqslant1 $$



knowing that



$$\ a_n = \frac{3^n}{n!},n\geqslant1$$



Choose the right answer:



a) $1$




b) $0$



c) $3$



d) $\frac{1}{3}$



e) $2$



f) $\frac{1}{2}$


Answer




Using d'Alembert's criterion (the ratio test), we can see that



$$ \lim_{n \to \infty}\frac{a_{n+1}}{a_n}=\lim_{n \to \infty} \frac{3^{n+1}n!}{3^{n}(n+1)!}= \lim_{n \to \infty} \frac{3}{n+1}=0$$



Thus, $\lim\limits_{n \to \infty} a_n =0$.
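A tiny numeric illustration of the ratio argument (a sketch, standard library only):

from math import factorial

def a(n):
    return 3**n / factorial(n)

for n in (1, 5, 10, 20):
    print(n, a(n), a(n + 1) / a(n))  # the ratio 3/(n+1) -> 0, so a_n -> 0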


exponentiation - Proof that irrational exponents exist, and are unique?



Can someone provide a proof that for any given irrational number $b$, exponentiation by that number, defined as a limit of rational powers, always converges, and that for a fixed base $a$, no two values of $b$ yield the same limit? (I suppose in the case where $a$ is not $0$ or $1$.)


Answer



We can restrict to the case $a>1$ (noting that $0<a<1$ reduces to this case via $a^x=(1/a)^{-x}$, and that $a=1$ is trivial).

Note that $\frac pq<\frac rs$ (with $q,s>0$) implies $ps<rq$, hence $a^{ps}<a^{rq}$, and taking $qs$-th roots gives $a^{p/q}<a^{r/s}$. So $x\mapsto a^x$ is strictly increasing when looking only at rational $x$. If we take the convergence for granted, this implies that $x\mapsto a^x$ is injective as a function defined on all of $\mathbb R$: if $b<b'$ are real, pick rationals $\frac pq,\frac rs$ with $b<\frac pq<\frac rs<b'$; then almost all fractions in a sequence converging to $b$ are $<\frac pq$, hence their powers are $<a^{p/q}$, so $a^b\le a^{p/q}<a^{r/s}\le a^{b'}$.

Now for the convergence: For rational exponents, the usual power laws hold, and as we have just seen order is also respected (for bases $a>1$). Hence whenever $\left|\frac pq-\frac rs\right|<\frac 1N$, we have $$\left|a^{\frac pq}-a^{\frac rs}\right|=a^{\frac rs}\cdot \left|a^{\frac pq-\frac rs}-1\right|\le a^{\frac rs}\cdot\max\left(\sqrt[N]a-1,\;1-\sqrt[N]{1/a}\right).$$
We may assume wlog that $\frac rs<\lceil b\rceil +1=:M$, so that $$\left|a^{\frac pq}-a^{\frac rs}\right|\le a^{M}\cdot\max\left(\sqrt[N]a-1,\;1-\sqrt[N]{1/a}\right).$$ As $\sqrt[N]a\to 1$ and $\sqrt[N]{1/a}\to 1$, we conclude that any sequence $\{\frac{p_k}{q_k}\}_{k\in\mathbb N}$ of fractions converging to $b$ produces a Cauchy sequence $\{a^{\frac{p_k}{q_k}}\}_{k\in\mathbb N}$.


linear algebra - Vector space basics: scalar times a nonzero vector = zero implies scalar = zero?



I'm working through a linear algebra text, starting right from the axioms. So far I've understood and proved for myself that, for some vector space $V$ over a field $\mathbb{F}$:





  • the additive identity (zero vector) $\mathbf{0}$ in a vector space is unique

  • the additive inverses in a vector space are unique

  • scalar zero times any vector is the zero vector: $\forall \mathbf{v} \in V: 0 \mathbf{v} = \mathbf{0}$

  • any scalar times the zero vector is the zero vector: $\forall a \in \mathbb{F}: a \mathbf{0} = \mathbf{0}$



However, I'm stuck on the converse of the third statement: suppose $a \mathbf{v} = \mathbf{0}$ and $\mathbf{v} \neq \mathbf{0}$. Show that $a$ must be equal to $0$. In other words, show that the scalar zero is the only element of $\mathbb{F}$ that allows for rule of inference number 3 in the above list.



It seems like such a simple thing but I'm not used to proving super basic statements like this axiomatically. My attempt so far is something like:




$$\begin{align}a\mathbf{v} &= \mathbf{0} \quad &\text{(1)} \\
a \mathbf{v} + \mathbf{0} &= \mathbf{0} \quad &\text{(vector additive identity)} \\
a \mathbf{v} + a \mathbf{v} &= \mathbf{0} \quad &\text{(substitute from 1)} \\
(a + a) \mathbf{v} &= \mathbf{0} \quad &\text{(distributive property)} \\
(2a)\mathbf{v} &= \mathbf{0} \\
(2a)\mathbf{v} &= a \mathbf{v} \quad &\text{(substitute from 1)} \\
\mathrm{``therefore"} \quad 2a &= a \\
a &= 0
\end{align}
$$




But I'm not sure I'm "allowed" to do that second-to-last step yet, given the things proved so far. I think it might just be a circular argument. Is it?



EDIT:



I figured it out with some prodding; turned out I had all the pieces in front of me already but didn't realize it. Here it is for completeness:



Suppose $a \in \mathbb{F}$, $\mathbf{v} \in V$, and $a \mathbf{v} = \mathbf{0}$. Either $a = 0$ or $a \neq 0$. In the case where $a \neq 0$:



$$\begin{align}
a \mathbf{v} &= \mathbf{0} \\
\frac{1}{a} ( a \mathbf{v} ) &= \frac{1}{a} \mathbf{0} \\
\left(\frac{1}{a} a\right) \mathbf{v} &= \mathbf{0} \\
1\mathbf{v} &= \mathbf{0} \\
\mathbf{v} &= \mathbf{0}
\end{align}$$



But then suppose further that $\mathbf{v} \neq \mathbf{0}$. Then $a = 0$ by the contrapositive.


Answer



Hint: There was that axiom that said that $1v=v$, was there not?



calculus - Evaluating $int_0^{pi/2} int_0^{pi/2} frac{cos(x)}{ cos(a cos(x) cos(y))} dx dy $

Can we avoid the use of the geometric interpretation combined with polar coordinates change of variable for proving that



$$\int_0^{\pi/2} \int_0^{\pi/2} \frac{\cos(x)}{ \cos(a \cos(x) \cos(y))} d x dy =\frac{\pi}{ 2a}\log\left( \frac{\displaystyle 1+ \tan \left(\frac{a}{2}\right)}{ \displaystyle 1-\tan\left(\frac{a}{2}\right)}\right)$$ ?
EDIT
What if we go further and also consider the case



$$\int_0^{\pi/2}\int_0^{\pi/2} \int_0^{\pi/2} \frac{\cos(x)}{ \cos(a \cos(x) \cos(y) \cos(z))} \, dx \, dy \, dz\;?$$
What can we say about the closed form of this one?
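For what it's worth, the first claimed closed form can be checked numerically (a sketch assuming SciPy is installed; here for the concrete value $a=1/2$):

import math
from scipy.integrate import dblquad

a = 0.5
val, err = dblquad(
    lambda y, x: math.cos(x) / math.cos(a * math.cos(x) * math.cos(y)),
    0, math.pi / 2,                       # x-range
    lambda x: 0, lambda x: math.pi / 2)   # y-range
rhs = math.pi / (2 * a) * math.log((1 + math.tan(a / 2)) / (1 - math.tan(a / 2)))
print(val, rhs)  # the two values agree to quadrature accuracy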

Wednesday 23 August 2017

sequences and series - Proof of the formula $1+x+x^2+x^3+ \cdots +x^n =\frac{x^{n+1}-1}{x-1}$











Proof of the formula
$$1+x+x^2+x^3+\cdots+x^n = \frac{x^{n+1}-1}{x-1}.$$


Answer



Let $S=1+x+x^2+...+x^n$. Then $xS=x+x^2+...+x^{n+1}=1+x+x^2+...+x^n+(x^{n+1}-1)=S+x^{n+1}-1$. So $xS-S=x^{n+1}-1$, and hence $S=\frac{x^{n+1}-1}{x-1}$. (This requires $x\neq 1$; for $x=1$ the sum is simply $n+1$.)
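A quick exact check of the telescoping argument with rational arithmetic (a sketch; geom_sum is an ad-hoc helper):

from fractions import Fraction

def geom_sum(x, n):
    return sum(x**k for k in range(n + 1))

x, n = Fraction(3, 2), 10
assert geom_sum(x, n) == (x**(n + 1) - 1) / (x - 1)
print("verified for x =", x, "and n =", n)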


elementary number theory - Prove that $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$

For all $a, m, n \in \mathbb{Z}^+$,



$$\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$$
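A quick brute-force check of the statement over small values (a sketch, standard library only):

from math import gcd

ok = all(gcd(a**n - 1, a**m - 1) == a**gcd(n, m) - 1
         for a in range(2, 8) for n in range(1, 8) for m in range(1, 8))
print(ok)  # True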

Tuesday 22 August 2017

elementary set theory - Proving $\mathrm{card}(\Bbb Z)=\mathrm{card}(\Bbb N)$




So I'm trying to prove that the set of integers has the same cardinality as the set of naturals just using the definition; that is, I'm trying to find a bijective function between the two sets. I found the question "Why do the rationals, integers and naturals all have the same cardinality?", but I couldn't quite find the answer there.

Thanks for any help.


Answer



HINT(ish):



Let $f:\mathbb Z\to\mathbb N$ be the bijection in question, $f(0)=0$, and for any natural number $n$, let $f(n)=2n$. Can you complete the construction for what the negative integers are mapped to?
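For completeness, here is a sketch of one way to finish the construction, via the standard interleaving and with the convention that 0 is a natural number (this goes slightly beyond the hint):

def f(n: int) -> int:
    # nonnegative integers go to the even naturals, negatives to the odd ones
    return 2 * n if n >= 0 else -2 * n - 1

print(sorted(f(n) for n in range(-5, 6)))  # 0..10, each hit exactly once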


number theory - Prove $x = \sqrt[100]{\sqrt{3} + \sqrt{2}} + \sqrt[100]{\sqrt{3} - \sqrt{2}}$ is irrational



Prove $x = \sqrt[100]{\sqrt{3} + \sqrt{2}} + \sqrt[100]{\sqrt{3} - \sqrt{2}}$ is irrational.




I can prove that $x$ is irrational by showing that it's a root of a polynomial with integer coefficients and use rational root theorem to deduce it must be either irrational or an integer and then show it's not an integer, therefore, it must be irrational.



I was wondering what are the other methods to prove $x$ is irrational. I'd be very interested in seeing alternative proofs.


Answer



Let $y = \sqrt[100]{\sqrt{3} + \sqrt{2}}$. Then $x = y + {1 \over y}$. Suppose $x$ were some rational number $q$. Then $y^2 - qy + 1 = 0$. This means ${\mathbb Q}(y)$ is a field extension of ${\mathbb Q}$ of degree two, and every rational function of $y$ is in this field extension. This includes $y^{100} = \sqrt{3} + \sqrt{2}$, and $y^{-100} = \sqrt{3} - \sqrt{2}$. So their sum and difference is also in ${\mathbb Q}(y)$. Hence ${\mathbb Q}(\sqrt{2},\sqrt{3}) \subset {\mathbb Q}(y)$. But ${\mathbb Q}(\sqrt{2},\sqrt{3})$ is an extension of ${\mathbb Q}$ of degree 4, a contradiction.



You can make the above more elementary by showing successive powers of $y$ are always of the form $q_1y + q_2$ with $q_1$ and $q_2$ rational and eventually showing some $q_3\sqrt{2} + q_4\sqrt{3}$ must be rational, a contradiction.
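If SymPy is available, the degree-$4$ claim for ${\mathbb Q}(\sqrt{2},\sqrt{3})$ can be verified directly (a sketch):

from sympy import sqrt, minimal_polynomial, Symbol

t = Symbol('t')
print(minimal_polynomial(sqrt(2) + sqrt(3), t))  # t**4 - 10*t**2 + 1, degree 4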


combinatorics - Combinatorial proof that $\sum\limits_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} (-1)^k = 2^n \binom{n}{n/2}$ when $n$ is even



In my answer here I prove, using generating functions, a statement equivalent to
$$\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} (-1)^k = 2^n \binom{n}{n/2}$$
when $n$ is even. (Clearly the sum is $0$ when $n$ is odd.) The nice expression on the right-hand side indicates that there should be a pretty combinatorial proof of this statement. The proof should start by associating objects with even parity and objects with odd parity counted by the left-hand side. The number of leftover (unassociated) objects should have even parity and should "obviously" be $2^n \binom{n}{n/2}$. I'm having trouble finding such a proof, though. So, my question is




Can someone produce a combinatorial proof that, for even $n$, $$\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} (-1)^k = 2^n \binom{n}{n/2}?$$





Some thoughts so far:



Combinatorial proofs for $\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} = 4^n$ are given by Phira here and by Brian M. Scott here. The proofs are basically equivalent. In Phira's argument, both sides count the number of paths of length $2n$ starting from $(0,0)$ using steps of $(1,1)$ and $(1,-1)$. By conditioning on the largest value of $2k$ for which a particular path returns to the horizontal axis at $(2k,0)$ and using the facts that there are $\binom{2k}{k}$ paths from $(0,0)$ to $(2k,0)$ and $\binom{2n-2k}{n-k}$ paths of length $2n-2k$ that start at the horizontal axis but never return to the axis we obtain the left-hand side.



With these interpretations of the central binomial coefficients $2^n \binom{n}{n/2}$ could count (1) paths that do not return to the horizontal axis by the path's halfway point of $(n,0)$, or (2) paths that touch the point $(n,0)$. But I haven't been able to construct the association that makes these the leftover paths (nor do all of these paths have even parity anyway). So perhaps there's some other interpretation of $2^n \binom{n}{n/2}$ as the number of leftover paths.






Update. Some more thoughts:




There's another way to view the identity $\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} = 4^n$. Both sides count the number of lattice paths of length $n$ when north, south, east, and west steps are allowed. The right side is obvious. The left side has a similar interpretation as before: $\binom{2k}{k}$ counts the number of NSEW lattice paths of length $k$ that end on the line $y=0$, and $\binom{2n-2k}{n-k}$ counts the number of NSEW lattice paths of length $n-k$ that never return to the line $y =0$. So far, this isn't much different as before. However, $2^n \binom{n}{n/2}$ has an intriguing interpretation: It counts the number of NSEW lattice paths that end on the diagonal $y = x$ (or, equivalently, $y = -x$). So maybe there's an involution that leaves these as the leftover paths. (Proofs of all of these claims can be found on this blog post, for those who are interested.)


Answer



Divide by $4^n$ so that the identity reads (again, for $n$ even)



$$ \sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n} (-1)^k = \frac{1}{2^n} \binom{n}{n/2}. \tag{1}$$



Claim 1: Select a permutation $\sigma$ of $[n]$ uniformly at random. For each cycle $w$ of $\sigma$, color $w$ red with probability $1/2$; otherwise, color it blue. This creates a colored permutation $\sigma_C$. Then $$\binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n}$$ is the probability that exactly $k$ of the $n$ elements of a randomly-chosen permutation $\sigma$ are colored red. (See proof of Claim 1 below.)



Claim 2: Select a permutation $\sigma$ of $[n]$ uniformly at random. Then, if $n$ is even, $$\frac{1}{2^n} \binom{n}{n/2}$$ is the probability that $\sigma$ contains only cycles of even length. (See proof of Claim 2 below.)




Combinatorial proof of $(1)$, given Claims 1 and 2: For any colored permutation $\sigma_C$, find the smallest element of $[n]$ contained in an odd-length cycle $w$ of $\sigma_C$. Let $f(\sigma_C)$ be the colored permutation for which the color of $w$ is flipped. Then $f(f(\sigma_C)) = \sigma_C$, and $\sigma_C$ and $f(\sigma_C)$ have different parities for the number of red elements but the same probability of occurring. Thus $f$ is a sign-reversing involution on the colored permutations for which $f$ is defined. The only colored permutations $\sigma_C$ for which $f$ is not defined are those that have only even-length cycles. However, any permutation with an odd number of red elements must have at least one odd-length cycle, so the only colored permutations for which $f$ is not defined have an even number of red elements. Thus the left-hand side of $(1)$ must equal the probability of choosing a colored permutation that contains only even-length cycles. The probability of selecting one of the several colored variants of a given uncolored permutation $\sigma$, though, is that of choosing an uncolored permutation uniformly at random and obtaining $\sigma$, so the left-hand side of $(1)$ must equal the probability of selecting a permutation of $[n]$ uniformly at random and obtaining one containing only cycles of even length. Therefore,
$$\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n} (-1)^k = \frac{1}{2^n} \binom{n}{n/2}.$$



(Clearly, $$\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n} = 1,$$
which gives another combinatorial proof of the unsigned version of $(1)$ mentioned in the question.)






Proof of Claim 1: There are $\binom{n}{k}$ ways to choose which $k$ elements of a given permutation will be red and which $n-k$ elements will be blue. Given $k$ particular elements of $[n]$, the number of ways those $k$ elements can be expressed as the product of $i$ disjoint cycles is $\left[ {k \atop i} \right]$, an unsigned Stirling number of the first kind. Thus the probability of choosing a permutation $\sigma$ that has those $k$ elements as the product of $i$ disjoint cycles and the remaining $n-k$ elements as the product of $j$ disjoint cycles is $\left[ {k \atop i} \right] \left[ {n-k \atop j}\right] /n!$, and the probability that the $i$ cycles are colored red and the $j$ cycles are colored blue as well is $\left[ {k \atop i} \right] \left[ {n-k \atop j}\right]/(2^i 2^j n!).$ Summing up, the probability that exactly $k$ of the $n$ elements in a randomly chosen permutation are colored red is
\begin{align}
\frac{\binom{n}{k}}{n!} \sum_{i=1}^k \sum_{j=1}^{n-k} \frac{\left[ {k \atop i} \right] \left[ {n-k \atop j} \right]}{2^i 2^j} = \frac{\binom{n}{k}}{n!} \sum_{i=1}^k \frac{\left[ {k \atop i} \right]}{2^i} \sum_{j=1}^{n-k} \frac{\left[ {n-k \atop j} \right]}{2^j}.
\end{align}
The two sums are basically the same, so we'll just do the first one.
$$\sum_{i=1}^k \frac{\left[ {k \atop i} \right]}{2^i} = \left( \frac{1}{2} \right)^{\overline{k}} = \prod_{i=0}^{k-1} \left(\frac{1}{2} + i\right) = \frac{1 (3) (5) \cdots (2k-1)}{2^k} = \frac{1 (2) (3) \cdots (2k-1)(2k)}{2^k 2^k k!} = \frac{(2k)!}{4^k k!}.$$
(The first equality is the well-known property that Stirling numbers of the first kind are used to convert rising factorial powers to ordinary powers. This property can be proved combinatorially. For example, Vol. 1 of Richard Stanley's Enumerative Combinatorics, 2nd ed., pp. 34-35 contains two such combinatorial proofs.)



Thus the probability that exactly $k$ of the $n$ elements of a randomly chosen permutation are colored red is $$\frac{\binom{n}{k}}{n!} \frac{(2k)!}{4^k k! } \frac{(2n-2k)!}{4^{n-k} (n-k)!} = \binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n}.$$







Proof of Claim 2: Since there can be no odd cycles, $\sigma(1) \neq 1$. Thus there are $n-1$ choices for $\sigma(1)$. We have already chosen the element that maps to $\sigma(1)$, but otherwise there are no restrictions on the value of $\sigma(\sigma(1))$, and so we have $n-1$ choices for $\sigma(\sigma(1))$ as well.



Now $n-2$ elements are unassigned. If $\sigma(\sigma(1)) \neq 1$, then we have an open cycle. We can't assign $\sigma^3(1) = 1$, as that would close the current cycle at an odd number of elements. Also, $\sigma(1)$ and $\sigma^2(1)$ are already taken. Thus there are $n-3$ choices for the value of $\sigma^3(1)$. If $\sigma(\sigma(1)) = 1$, then we have just closed an even cycle. Selecting any unassigned element in $[n]$, say $j$, we cannot have $\sigma(j) = j$, as that would create an odd cycle, and $1$ and $\sigma(1)$ are already taken. Thus we have $n-3$ choices for $\sigma(j)$ as well.



In general, if there are $i$ elements unassigned and $i$ is even, there is either one even-length open cycle or no open cycles. If there is an open cycle, we cannot close it, and so we have $i-1$ choices for the next element in the cycle. If there is not an open cycle, we select the smallest unassigned element $j$. Since we cannot have $\sigma(j) = j$, there are $i-1$ choices for $\sigma(j)$. Either way, we have $i-1$ choices. If there are $i$ elements unassigned and $i$ is odd, though, there must always be an odd-length open cycle. Since we can close it, there are $i$ choices for the next element in the cycle.



All together, then, if $n$ is even then the number of permutations of $[n]$ that contain only cycles of even length is $$(n-1)^2 (n-3)^2 \cdots (1)^2 = \left(\frac{n!}{2^{n/2} (n/2)!}\right)^2 = \frac{n!}{2^n} \binom{n}{n/2}.$$ Thus the probability of choosing a permutation uniformly at random and obtaining one that contains only cycles of even length is $$\frac{1}{2^n} \binom{n}{n/2}.$$







(I've been thinking about this problem off and on for the two months since I first posted it. What finally broke it open for me was discovering the interpretation of the unsigned version of the identity mentioned as #60 on Richard Stanley's "Bijective Proof Problems" document.)
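For readers who want to double-check identity $(1)$, here is an exact test of its integer form (a sketch, standard library only):

from math import comb

def lhs(n):
    return sum((-1)**k * comb(2*k, k) * comb(2*n - 2*k, n - k)
               for k in range(n + 1))

for n in range(13):
    expected = 2**n * comb(n, n // 2) if n % 2 == 0 else 0
    assert lhs(n) == expected
print("verified for n = 0..12")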


trigonometry - If $A+B+C=π$, prove that




If $A+B+C=π$, prove that
$$\cos^2A+\cos^2B+\cos^2C=1-2\cos A\cdot\cos B\cdot\cos C.$$




ATTEMPT:
Given $$A+B+C=π,$$
$$A+B=π-C$$
Taking "cos" on both sides

$$\cos(A+B)=-\cos C.$$



Now,
\begin{align*}
L.H.S&=\cos^2A+\cos^2B+\cos^2C\\
&=\frac{1+\cos 2A}{2}+\frac{1+\cos 2B}{2}+\cos^2 C\\
&=\frac{2+\cos 2A+\cos 2B}{2}+\cos^2C.
\end{align*}



How should I proceed from here?



Answer



The identity




$$1 - \cos^2 A - \cos^2 B - \cos^2 C - 2 \cos A \cos B \cos C = 0$$




can be seen as a re-writing of the relation $\cos C = - \cos(A+B)$ without sines of $A$ and $B$:



$$\cos C = - \cos(A+B) = \sin A \sin B - \cos A \cos B$$

$$\cos C + \cos A \cos B = \sin A \sin B$$
$$\left(\;\cos C + \cos A \cos B\;\right)^2 = \sin^2 A \sin^2 B = \left(1-\cos^2 A\right)\left(1-\cos^2 B\right)$$
$$\cos^2 C + 2 \cos A \cos B \cos C \color{red}{+ \cos^2A \cos^2 B} = 1 - \cos^2 A - \cos^2 B \color{red}{+ \cos^2 A \cos^2 B}$$
and the result follows.
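A quick floating-point spot check of the identity on random triangles (a sketch, standard library only):

import math, random

for _ in range(3):
    A = random.uniform(0.1, 2.0)
    B = random.uniform(0.1, math.pi - A - 0.05)
    C = math.pi - A - B
    val = (math.cos(A)**2 + math.cos(B)**2 + math.cos(C)**2
           + 2 * math.cos(A) * math.cos(B) * math.cos(C))
    print(round(val, 12))  # prints 1.0 each time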


algebra precalculus - Summation of natural number set with power of $m$




Does anyone know a closed form for the sum
$$\sum\limits_{i=1}^{n}i^m,$$
where $m$ is a constant and $m\in \mathbb{N}$?

Thanks



Answer



Look up Faulhaber's formula. See for example http://en.wikipedia.org/wiki/Faulhaber%27s_formula.
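If SymPy is available, it reproduces the first few of these polynomials directly (a sketch):

from sympy import symbols, summation, factor

i, n = symbols('i n', integer=True, positive=True)
for m in range(1, 5):
    print(m, factor(summation(i**m, (i, 1, n))))
# m=1: n(n+1)/2, m=2: n(n+1)(2n+1)/6, m=3: n**2(n+1)**2/4, ...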


Monday 21 August 2017

real analysis - Can there be only one extension to the factorial?



Usually, when someone says something like $\left(\frac12\right)!$, they are probably referring to the Gamma function, which extends the factorial to any value of $x$.



The usual definition of the factorial is $x!=1\times2\times3\times\dots\times x$, but for $x\notin\mathbb{N}$, the Gamma function results in $x!=\int_0^\infty t^xe^{-t}\,dt$.




However, back a while ago, someone mentioned that there may be more than one way to define the factorial for non-integer arguments, and so, I wished to disprove that statement with some assumptions about the factorial function.






the factorial




  1. is a $C^\infty$ function for $x\in\mathbb{C}$ except at $\mathbb{Z}_{<0}$ because of singularities, which we will see later.


  2. is a monotone increasing function that is concave up for $x>1$.


  3. satisfies the relation $x!=x(x-1)!$



  4. and lastly $1!=1$







From $3$ and $4$, one can define $x!$ for $x\in\mathbb{N}$, and we can see that for negative integer arguments, the factorial is undefined. We can also see that $0!=1$.



Since we assumed $2$, we should be able to sketch the factorial for $x>1$, using our points found from $3,4$ as guidelines.



At the same time, when sketching the graph, we remember $1$, so there can be no jumps or gaps from one value of $x$ to the next.




Then we reapply $3$, correcting values for $x\in\mathbb{R}$, since all values of $x$ must satisfy this relationship.



Again, because of $1$, we must re-correct our graph, since having $3$ makes the derivative of $x!$ for $x\in\mathbb N$ undefined.



So, because of $1$ and $3$, I realized that there can only be one way to define the factorial for $x\in\mathbb R$.



Is my reasoning correct? And can there be only one extension to the factorial?







Oh, and here is a 'link' to how I almost differentiated the factorial only with a few assumptions, like that it is even possible to differentiate.



Putting that in mind, it could be possible to define the factorial with Taylor's theorem?


Answer



First, for a fixed $c\in\mathbb{C}$, let $$F_c(z):=\Gamma(z+1)\cdot\big(1+c\,\sin(2\pi z)\big)$$ for all $z\in\mathbb{C}\setminus \mathbb{Z}_{<0}$, which defines an analytic function $F_c:\mathbb{C}\setminus\mathbb{Z}_{<0}\to\mathbb{C}$ such that $$F_c(z)=z\cdot F_c(z-1)$$
for all $z\in\mathbb{C}\setminus\mathbb{Z}_{\leq 0}$ and that $F_c(0)=F_c(1)=1$ (whence $F_c(n)=n!$ for every $n\in\mathbb{Z}_{\geq 0}$). Excluding the essential singularity at $\infty$, the negative integers are the only singularities of $F_c$, which are simple poles.



Here are some results I checked with Mathematica. If $c$ is a positive real number less than $0.022752$, then $F_c'(z)>0$ for all $z>1$ and $F_c''(z)>0$ for all $z>-1$, making $F_c$ monotonically increasing on $(1,\infty)$ and convex on $(-1,\infty)$. Such an $F_c$ therefore satisfies all four of your assumptions while differing from $\Gamma(z+1)$ wherever $\sin(2\pi z)\neq 0$, so the extension is not unique.
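A small numerical illustration (a sketch; $c=0.01$ lies in the range quoted above). Up to floating-point error, $F_c$ agrees with $\Gamma(z+1)$ at the integers yet deviates wherever $\sin(2\pi z)\neq 0$:

import math

def F(z, c=0.01):
    return math.gamma(z + 1) * (1 + c * math.sin(2 * math.pi * z))

for z in (1, 2, 3, 0.25, 1.25):
    print(z, F(z), math.gamma(z + 1))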

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know that when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...