Tuesday 31 January 2017

elementary set theory - Ordered sets $\langle \mathbb{N} \times \mathbb{Q}, \le_{lex} \rangle$ and $\langle \mathbb{Q} \times \mathbb{N}, \le_{lex} \rangle$ not isomorphic



I'm doing this exercise:
Prove that the ordered sets $\langle \mathbb{N} \times \mathbb{Q}, \le_{lex} \rangle$ and $\langle \mathbb{Q} \times \mathbb{N}, \le_{lex} \rangle$ are not isomorphic ($\le_{lex}$ means lexicographic order).



I don't know how to start (I know that to prove that ordered sets are isomorphic I would construct a monotone bijection, but how do I prove they aren't isomorphic?).


Answer



Recall that the lexicographic order on $A\times B$ is essentially to take $A$ and replace each point with a copy of $B$.



So in only one of these lexicographic orders does every element have an immediate successor. And having an immediate successor is preserved under isomorphisms.
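To see this concretely: in $\langle \mathbb{Q} \times \mathbb{N}, \le_{lex} \rangle$ the immediate successor of $(q,n)$ is $(q,n+1)$, while in $\langle \mathbb{N} \times \mathbb{Q}, \le_{lex} \rangle$ no element $(n,q)$ has one, since any $(n,q')$ with $q<q'$ has $(n,\frac{q+q'}{2})$ strictly in between, and the elements above $(n,q)$ have no minimum.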




(Generally, to show two orders are not isomorphic you need to show either that there is no bijection between the sets (e.g. $\Bbb Q$ and $\Bbb R$ are not isomorphic because there is no bijection between them), or that there is a property, preserved under isomorphisms, that holds for one ordered set and not for the other, like having a minimum, being a linear order, or having immediate successors.)


probability - Random variable with expectation equal to itself




I read from a book that
$E(X) = X$ iff there is a constant $C$ such that $P(X=C) = 1$. But how to prove it? Thanks a lot for your help!


Answer



I am sorry, but $E(X) = X$ does not make sense as written: $E(X)$ is a number, while $X$ is a random variable. It could only make sense if the random variable $X$ took just one possible value.
Perhaps this is also partially the answer to your question.
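(Presumably the book's statement is that $X = E(X)$ almost surely, i.e. $P(X=E(X))=1$, iff there is a constant $C$ with $P(X=C)=1$. Both directions are then short: if $P(X=C)=1$ then $E(X)=C$, so $P(X=E(X))=1$; conversely $X=E(X)$ a.s. exhibits the constant $C=E(X)$. A closely related fact: $\operatorname{Var}(X)=E[(X-E(X))^2]=0$ if and only if $X=E(X)$ almost surely.)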


Monday 30 January 2017

What is the mathematical formula representing this series?



I have the following sequences that form a relationship:



x: 1 2 3 4 5
y: 1 2 4 8 16




Where each number in the second sequence is twice the previous number.



Is there a formula that will give me the correct value of y for a given value of x? Does it have a name, and how could I have gone about working this out for myself?



Many thanks in advance.


Answer



You have $y=2^{x-1}$: each term is twice the previous one, so the sequence is a geometric progression with first term $1$ and common ratio $2$.
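A one-line Python check of the formula against the table (my illustration):

    # verify y = 2**(x - 1) against the given sequence
    for x, y in zip([1, 2, 3, 4, 5], [1, 2, 4, 8, 16]):
        assert 2 ** (x - 1) == y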


probability - Relation between the distribution functions of random variables $Y$ and $-Y$



I'm having trouble understanding a certain property of CDFs for negative random variables.



Let $Y$ be an exponential random variable and let $f_y, F_Y$ denote the PDF and CDF respectively.




My book claims that
$$f_{-Y}(y) = f_{Y}(-y)$$



I realized that I'm stuck on two parts.




  1. Firstly, I'm having trouble understanding the relationship between $-Y$ and $Y$.


  2. I can't visualize the CDFs of $F_{-Y}$ and $F_{Y}$.





Any help would be appreciated. Thanks.


Answer



Let $Y$ have exponential distribution. It looks as if we are defining a new random variable $-Y$. We want the cumulative distribution function of $-Y$. The interesting part of the distribution function $F_{-Y}(w)$ is when $w$ is negative.



We have
$$F_{-Y}(w)=\Pr(-Y\le w)=\Pr(Y\ge -w)=1-F_Y(-w).\tag{1}$$



Note that this is different from the book's claim as described in the OP.



Now differentiate to find $f_{-Y}(w)$. The differentiation introduces two cancelling minus signs, and from (1) we get $f_{-Y}(w)=f_Y(-w)$. Perhaps the book mistakenly used $F$ instead of $f$.
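A Monte Carlo sanity check of $(1)$, a sketch of my own using numpy (the rate parameter $1$ is an arbitrary choice; for $Y\sim\text{Exp}(1)$ and $w\le 0$ we have $1-F_Y(-w)=e^{w}$):

    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.exponential(scale=1.0, size=1_000_000)   # samples of Y ~ Exp(1)
    w_samples = -y                                   # samples of -Y
    for w in (-2.0, -1.0, -0.5):
        empirical = np.mean(w_samples <= w)          # estimate of F_{-Y}(w)
        theoretical = np.exp(w)                      # 1 - F_Y(-w) for w <= 0
        print(w, empirical, theoretical)             # agree to ~3 decimals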



Commutative unitary ring without maximal ideal without axiom of choice



I want to find a semi-constructive example of a commutative unitary ring without any maximal ideals, assuming that the axiom of choice fails, and/or a model of $\sf ZF$ in which we have such a concrete ring.



This question is similar to the questions Vector space bases without axiom of choice and A confusion about Axiom of Choice and existence of maximal ideals.



What I tried is to use $\mathbb{R}$ as a $\mathbb{Q}$ vector space without a basis and try to construct some chains of ideals on a related ring and try to show that a maximal ideal corresponds to a basis but didn't achieve much.



Thank you in advance!



Answer



Let $k$ be a field, let $I\subset k^{\mathbb{N}}$ be the ideal of sequences that are eventually zero, and let $S=k^{\mathbb{N}}/I$. Then maximal ideals in $S$ are in bijection with nonprincipal ultrafilters on $\mathbb{N}$. In particular, in any model of ZF in which there are no nonprincipal ultrafilters on $\mathbb{N}$, there will be no maximal ideals in $S$.
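(A sketch of the correspondence, as I understand it: for $x\in k^{\mathbb{N}}$ with zero set $Z(x)$ there is a $y$ with $xy=1-e_{Z(x)}$, where $e_A$ denotes the idempotent supported on $A$; so a maximal ideal $\mathfrak{m}$ of $k^{\mathbb{N}}$ is generated by the idempotents it contains, and since $e_A\,e_{\mathbb{N}\setminus A}=0\in\mathfrak{m}$ forces exactly one of $e_A, e_{\mathbb{N}\setminus A}$ into $\mathfrak{m}$, the family $\{A : e_A\notin\mathfrak{m}\}$ is an ultrafilter on $\mathbb{N}$. The ideal $\mathfrak{m}$ contains $I$ exactly when this ultrafilter is nonprincipal, and maximal ideals of $S$ correspond to maximal ideals of $k^{\mathbb{N}}$ containing $I$.)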


real analysis - About an exercise in Rudin's book

In the book of Walter Rudin, Real and Complex Analysis, page 31, exercise 10 says:
Suppose $\mu(X) < \infty$, $\{f_n\}$ is a sequence of bounded complex measurable functions on $X$, and $f_n \rightarrow f $ uniformly on $X$. Prove that $$ \lim_{n \rightarrow \infty} \int_X f_n \,d \mu = \int_X f \,d \mu $$

And the answer is the following



Let $\epsilon > 0$. Since $f_n \rightarrow f$ uniformly, there exists $n_0 \in \mathbb{N}$
such that
$$|f_n (x) - f (x)| < \epsilon \quad \forall\, n > n_0,\ \forall\, x \in X.$$
Therefore $|f (x)| < |f_{n_0} (x)| + \epsilon$. Also $|f_n (x)| < |f (x)| + \epsilon$. Combining both
inequalities, we get
$$|f_n (x)| < |f_{n_0}(x)| + 2\epsilon \quad \forall\, n > n_0.$$
Define $g(x) = \max(|f_1 (x)|, \dots, |f_{n_0 -1} (x)|, |f_{n_0} (x)| + 2\epsilon)$; then $|f_n (x)| \leq g(x)$
for all $n$. Also $g$ is bounded. Since $\mu(X)< \infty$, it follows that $g \in \mathcal{L}^1(\mu)$. Now

apply DCT to get
$$ \lim_{n \rightarrow \infty} \int_X f_n d \mu = \int_X f d \mu $$



What I didn't understand is why the condition $f_n \rightarrow f$ uniformly on $X$ is necessary. I proceeded as follows:



For every $x \in X$ we have
$$ |f_n(x)| \leq h(x)= \max_i (|f_i(x)|)$$
since every $f_i$ is bounded, $h$ is too; on the other hand we have $\mu(X) < \infty$, therefore $h \in \mathcal{L}^1(\mu)$, and then we can apply the DCT.



Where am I wrong, please? Are there any counterexamples?
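(A standard counterexample, for what it's worth: on $X=[0,1]$ with Lebesgue measure, take $f_n = n\,\chi_{(0,1/n)}$. Each $f_n$ is bounded and $f_n \to 0$ pointwise, but $\int_X f_n \,d\mu = 1 \not\to 0$. The gap in the argument above is that $h=\max_i |f_i|$ is a supremum over infinitely many functions: each $f_i$ being bounded does not make $h$ bounded. Here $h(x)\approx 1/x$, which is not in $\mathcal{L}^1$, so the DCT does not apply; uniform convergence is exactly what keeps $h$ bounded.)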

polynomials - Show that $x^5-x^2+1$ is irreducible in $\mathbb{Q}[x]$.

Show that $x^5-x^2+1$ is irreducible in $\mathbb{Q}[x]$.



I tried to use the Eisenstein criterion (with a change of variable) but I have not succeeded.




Thanks for your help.
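(A sketch of one standard route, in case it helps: reduce modulo $2$. In $\mathbb{F}_2[x]$ the polynomial becomes $x^5+x^2+1$; it has no roots in $\mathbb{F}_2$ (its value is $1$ at both $0$ and $1$), and long division by $x^2+x+1$, the only irreducible quadratic over $\mathbb{F}_2$, leaves remainder $1$. So there is no factor of degree $1$ or $2$, hence the polynomial is irreducible over $\mathbb{F}_2$, and therefore over $\mathbb{Q}$.)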

Multidimensional Induction for n Variables




I am wanting to use a proof technique known as multidimensional induction, and I am having a hard time finding a situation that explains how to use this technique clearly for three or more variables. I want to use this technique properly and have listed the steps below which would make sense to me, but they could be wrong. Please, feel free to copy the code as it is a lot of typing and post answers as to how to conduct this proof technique properly.



Let us say I have a statement that is $P(x_1, x_2, x_3, ..., x_n)$ for some $n\in \mathbb{N}$ and want to show that $\forall x_1, x_2,..., x_n\in \mathbb{N}$, $P(x_1, x_2, x_3, ..., x_n)$ is true.





  1. We let $x_1, x_2, x_3, ..., x_n\in \mathbb{N}$ and confirm the base case is true, namely $P(1, 1, 1, ..., 1)$.


  2. Inductive step over $x_1$: We let $k_1\in \mathbb{N}$ and assume the statement $P(k_1, 1,1,...,1)$ is true. Our goal here is to show $P(k_1+1, 1, 1, ..., 1)$ is true.


  3. Inductive step over $x_2$: We let $k_2\in \mathbb{N}$ and assume $P(h_1, k_2, 1, ..., 1)$ is true for some $h_1\in \mathbb{N}$. Our goal here is to show $P(h_1, k_2+1, 1, ..., 1)$ is true.



  4. Inductive step over $x_3$: We let $k_3\in \mathbb{N}$ and assume $P(h_1, h_2, k_3, ..., 1)$ is true for some $h_1, h_2\in \mathbb{N}$. Our goal here is to show $P(h_1, h_2, k_3+1, ..., 1)$ is true.





We continue our inductive steps until we have expended all variables through our inductive steps except $x_n$.





  1. Inductive step over $x_n$: We let $k_n\in \mathbb{N}$ and assume $P(h_1, h_2, h_3, ..., k_n)$ is true for some $h_1, h_2, ... \in \mathbb{N}$. Our goal here is to show $P(h_1, h_2, h_3, ..., k_n+1)$ is true.





We have shown $\forall h_1, h_2, ..., h_n\in \mathbb{N}$, $P(h_1, h_2, h_3, ..., h_n)$ is true. Therefore, $\forall x_1, x_2,..., x_n\in \mathbb{N}$, $P(x_1, x_2, x_3, ..., x_n)$ is true.




Answer



In practice I would hope that the statement $P(x_1,\dotsc,x_n)$ would be highly symmetrical, and then it would suffice to just prove the $i$th case and be done with it.



In principle, however, without any symmetry assumptions, I agree that induction will need to be done separately in each variable.




Perhaps you should also consider doing induction on the number of variables: I have an inductive proof of $\forall x_1\, P(x_1,\cdots,1)$, and if I have an inductive proof of $\forall x_1,\cdots,x_i\,P(x_1,\cdots,x_i,1,\cdots,1),$ then I can derive a proof of $\forall x_1,\cdots,x_i,x_{i+1}\,P(x_1,\cdots,x_i,x_{i+1},1,\cdots,1),$ for $i<n$.

sequences and series - Proof of convergence of Dirichlet's Eta Function

I'd like to check directly the convergence of Dirichlet's Eta Function, also known as the Alternating Zeta Function or even Alternating Euler's Zeta Function:



$$\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^s}\;\;,\;\;\;s=\sigma+it\;,\;\;\sigma\,,\,t\in\Bbb R\;,\;\;\color{red}{\sigma > 0}.$$




Now, there seems to be a complete absence of any direct proof of this on the web (at least I didn't find one) that doesn't use the theory of general Dirichlet series and the like.



I was thinking of the following direct, more elementary approach:



$$n^{it}=e^{it\log n}:=\cos (t\log n)+i\sin(t\log n)$$



and then we can write



$$\frac1{n^s}=\frac1{n^\sigma n^{it}}=\frac{\cos(t\log n)-i\sin(t\log n)}{n^\sigma}$$




and since a complex sequence converges iff its real and imaginary parts converge, we're left with the real series



$$\sum_{n=1}^\infty\frac{(-1)^{n-1}\cos(t\log n)}{n^\sigma}\;\;,\;\;\;\;\sum_{n=1}^\infty\frac{(-1)^{n-1}\sin(t\log n)}{n^\sigma}$$



Now, I think it is enough to prove only one of the above two series' convergence, since for example $\;\sin(t\log n)=\cos\left(\frac\pi2-t\log n\right)\;$



...and here I am stuck. It seems obvious both series are alternating but not necessarily
elementwise.



For example, if $\;t=1\;$, then $\;\cos\log n>0\;$ for $\;n=1,2,3,4\;$, and then $\;\cos\log n<0\;$ for $\;n=5,6,\ldots,23\;$. This behaviour confuses me, and any help will be much appreciated.
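(A possible way past this point: instead of the alternating series test, use Abel summation / Dirichlet's test on the original series. The partial sums $A_N=\sum_{n\le N}(-1)^{n-1}$ are bounded, and
$$\left|n^{-s}-(n+1)^{-s}\right| = \left|\int_n^{n+1} s\,t^{-s-1}\,dt\right| \le \frac{|s|}{n^{\sigma+1}},$$
which is summable for $\sigma>0$; hence $\sum_{n\ge1} (-1)^{n-1}n^{-s}$ converges for every $\sigma>0$, with no need to track the signs of $\cos(t\log n)$.)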

elementary set theory - Sum and product of Cardinal numbers

Define the sum and the product of two cardinal numbers and show that these are well-defined operations.



That's what I have tried:



Let $A,B$ be sets with $A \cap B=\varnothing$, $card(A)=m$, $card(B)=n$.




We define the sum $m+n$ of the cardinal numbers $m,n$ as the cardinal number of the union of $A$ and $B$, i.e.



$$m+n=card(A \cup B)$$



We will show that the sum of two cardinal numbers is well-defined.



It suffices to show that if $A_1 \sim B_1, A_2 \sim B_2$ with $A_1 \cap A_2=\varnothing, B_1 \cap B_2=\varnothing$ then $A_1 \cup A_2 \sim B_1 \cup B_2$.



We know that there are bijective functions $f: A_1 \to B_1, g:A_2 \to B_2$.




We want to show that there is a bijective function $h: A_1 \cup A_2 \to B_1 \cup B_2$.



We set $ h(x)=f(x)$ if $x \in A_1, h(x)=g(x)$ if $x \in A_2$.



We will show that $h$ is 1-1.



Let $x_1, x_2 \in A_1 \cup A_2$ with $h(x_1)=h(x_2)$.



If $x_1, x_2 \in A_1$ then $h(x_1)=f(x_1), h(x_2)=f(x_2)$ and so $f(x_1)=f(x_2) \Rightarrow x_1=x_2$




If $x_1, x_2 \in A_2$ then $h(x_1)=g(x_1), h(x_2)=g(x_2)$ and so $g(x_1)=g(x_2) \Rightarrow x_1=x_2$ since $g$ is injective.



If $x_1 \in A_1, x_2 \in A_2$ then $h(x_1)=h(x_2) \Rightarrow f(x_1)=g(x_2)$, which cannot be true because $f(x_1) \in B_1$, $g(x_2) \in B_2$ and $B_1 \cap B_2=\varnothing$.



We will show that $h$ is surjective, i.e. that $\forall y \in B_1 \cup B_2, \exists x \in A_1 \cup A_2$ such that $h(x)=y$.



If $y \in B_1$ then we know that there will be an $x \in A_1$ such that $h(x)=f(x)=y$, since $f$ is surjective.



If $y \in B_2$ then we know that there will be an $x \in A_2$ such that $h(x)=g(x)=y$, since $g$ is surjective.




We define the product $m \cdot n$ of the cardinal numbers $m,n$ as the cardinal number of the cartesian product of $A$ and $B$, i.e.



$$m \cdot n=card(A \times B)$$



We will show that the product of two cardinal numbers is well-defined.



It suffices to show that if $A_1 \sim B_1, A_2 \sim B_2$ with $A_1 \cap A_2=\varnothing, B_1 \cap B_2=\varnothing$ then $A_1 \times A_2 \sim B_1 \times B_2$.



We know that there are bijective functions $f: A_1 \to B_1, g:A_2 \to B_2$.




We want to show that there is a bijective function $h: A_1 \times A_2 \to B_1 \times B_2$.



We define $h: A_1 \times A_2 \to B_1 \times B_2$ by $\langle m,n \rangle \mapsto \langle f(m),g(n) \rangle$.



We will show that $h$ is 1-1.



Let $\langle m_1, n_1 \rangle, \langle m_2, n_2 \rangle \in A_1 \times A_2$ with $h(\langle m_1, n_1 \rangle)=h(\langle m_2, n_2 \rangle) \rightarrow \langle f(m_1),g(n_1) \rangle=\langle f(m_2),g(n_2) \rangle \rightarrow f(m_1)=f(m_2) \wedge g(n_1)=g(n_2) \overset{\text{f,g: } 1-1 }{\rightarrow} m_1=m_2 \wedge n_1=n_2 \rightarrow \langle m_1,n_1 \rangle=\langle m_2,n_2 \rangle$



From the surjectivity of $f,g$ we can conclude that $h$ is surjective.




Could you tell me if it is right?

Linear independence of $\sin(x)$ and $\cos(x)$



In the vector space of $f:\mathbb R \to \mathbb R$, how do I prove that functions $\sin(x)$ and $\cos(x)$ are linearly independent. By def., two elements of a vector space are linearly independent if $0 = a\cos(x) + b\sin(x)$ implies that $a=b=0$, but how can I formalize that? Giving $x$ different values? Thanks in advance.


Answer




Hint: If $a\cos(x)+b\sin(x)=0$ for all $x\in\mathbb{R}$ then it
is especially true for $x=0,\frac{\pi}{2}$
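Indeed, $x=0$ gives $a\cos(0)+b\sin(0)=a=0$, and $x=\frac{\pi}{2}$ then gives $b=0$.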


Sunday 29 January 2017

elementary number theory - What can be said about the convergence of this series?

What can be said about the convergence of the following modification of the hyperharmonic series ($\sum_{n=1}^{\infty} \frac{1}{n^{s}}$, which is convergent for any s>1):
$$\sum \frac{1}{n^{s_n}}$$ with $s_n$ strictly monotonically approaching $1$ from above? In case both convergence and divergence are still possible under this condition, is it possible to give a specific criterion for convergence, e.g. in terms of the rate of convergence of $s_n$?
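(Both behaviors are possible, I believe. Writing $s_n=1+\varepsilon_n$ gives $n^{-s_n}=\frac{1}{n}\,e^{-\varepsilon_n\ln n}$, so everything depends on how $\varepsilon_n\ln n$ compares with $\ln\ln n$: for example $s_n=1+\frac{1}{\ln n}$ yields terms $\frac{1}{en}$ and divergence, while $s_n=1+\frac{2\ln\ln n}{\ln n}$ yields terms $\frac{1}{n(\ln n)^2}$ and convergence, by comparison with the Bertrand series $\sum \frac{1}{n(\ln n)^c}$.)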

polynomials - How do you find roots of an equation which is a sum of quadratic equations?



So the questions says -



Let $f(x), g(x)$ and $h(x)$ be quadratic polynomials having positive leading coefficients and real and distinct roots. If each pair of them has a common root, then find roots of $f(x)+g(x)+h(x)=0$.




What I did -



Let,



$$
f(x) = a_1 (x-\alpha) (x-\beta),
\\
g(x) = a_2 (x-\beta)(x-\gamma),
\\

h(x) = a_3 (x-\gamma) (x-\alpha),
\\
F(x):=f(x)+g(x)+h(x)$$
Now,



$$ F(\alpha) = a_2 (\alpha-\beta) (\alpha-\gamma)
\\
F(\beta) = a_3 (\beta-\gamma) (\beta-\alpha)
\\
F(\gamma) = a_1 (\gamma-\alpha) (\gamma-\beta)$$




I don't know how to proceed further. I referred to the solution, it just multiplies $F(\alpha), F(\beta) \text{, and } F(\gamma)$ and it comes out to be negative. And hence it concludes that roots of $F(x)=0$ are real and distinct. Can anyone explain why?



Thanks.


Answer



Suppose, without loss of generality, that $\alpha<\beta<\gamma$: it is easy to check that $F(\alpha)>0$, $F(\beta)<0$ and $F(\gamma)>0$. But $F(x)$ is a quadratic polynomial, hence a continuous function: it follows that $F(x)=0$ for some $x$ between $\alpha$ and $\beta$, and also for some $x$ between $\beta$ and $\gamma$.
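To spell out the step the OP asked about: multiplying the three values computed above,
$$F(\alpha)F(\beta)F(\gamma) = a_1a_2a_3\,(\alpha-\beta)(\beta-\alpha)(\beta-\gamma)(\gamma-\beta)(\gamma-\alpha)(\alpha-\gamma) = -\,a_1a_2a_3\,(\alpha-\beta)^2(\beta-\gamma)^2(\gamma-\alpha)^2 < 0,$$
since the leading coefficients are positive and the roots are pairwise distinct. A negative product means at least one of the three values is negative, while $F(x)\to+\infty$ for large $|x|$, so the quadratic $F$ must cross zero twice; this is why the roots of $F(x)=0$ are real and distinct.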


integration - Evaluate Fourier integral: $\int_{-\infty}^\infty \frac{e^{isx}\, ds}{(s-\tfrac{7i}{2})^2 + (1/2)^2}$



$$
\mbox{Evaluate}\quad
\int_{-\infty}^{\infty}{\mathrm{e}^{\mathrm{i}sx} \over
\left(\,{s - 7\mathrm{i}/2}\,\right)^{2} + \left(\,{1/2}\,\right)^2}\,\mathrm{d}s
$$




Here I am having trouble: simply using the Fourier inverse of $2\left\vert a\right\vert/\left(s^{2} + a^{2}\right)$, which is $\mathrm{e}^{-\left\vert ax\right\vert}$, gives the wrong answer, since the denominator has $s-7\mathrm{i}/2$ instead of $s$.



I can do convolution theorem and then find inverse, but I want to directly evaluate from this expression.



Thank you!



And if this requires the residue theorem, where can I learn about it?


Answer



Since you are not familiar with complex analysis, we shall resort to using differential equations (these two are, in fact, the most used methods when dealing with integrals that depend on parameters).




Using Lebesgue's dominated convergence theorem, we have that



$$\int _{-\infty} ^\infty \frac {\mathrm e^{\mathrm i s x}} {(s - \frac {7 \mathrm i} 2)^2 + (\frac 1 2)^2} \ \mathrm d s = \lim _{R \to \infty} \int _{-R} ^R \frac {\mathrm e^{\mathrm i s x}} {(s - \frac {7 \mathrm i} 2)^2 + (\frac 1 2)^2} \ \mathrm d s \ .$$



Next, notice that



$$\frac 1 {(s - \frac {7 \mathrm i} 2)^2 + (\frac 1 2)^2} = \frac 1 {(s - \frac {7 \mathrm i} 2)^2 - (\frac {\mathrm i} 2)^2} = \frac 1 {(s - 3 \mathrm i) (s - 4 \mathrm i)} = \frac 1 {\mathrm i} \left( \frac 1 {s - 4 \mathrm i} - \frac 1 {s - 3 \mathrm i} \right) ,$$



therefore your integral becomes




$$\frac 1 {\mathrm i} \lim _{R \to \infty} \int _{-R} ^R \frac {\mathrm e^{\mathrm i s x}} {s - 4 \mathrm i} \ \mathrm d s - \frac 1 {\mathrm i} \lim _{R \to \infty} \int _{-R} ^R \frac {\mathrm e^{\mathrm i s x}} {s - 3 \mathrm i} \ \mathrm d s \ .$$



For fixed $a > 0$ and $R>0$ let



$$I(x) = \int _{-R} ^R \frac {\mathrm e^{\mathrm i s x}} {s - a \mathrm i} \ \mathrm d s \ .$$



Using again Lebesgue's dominated convergence theorem, we may differentiate with respect to $x$ inside the integral, so



$$I'(x) = \int _{-R} ^R \frac {\mathrm i s \ \mathrm e^{\mathrm i s x}} {s - a \mathrm i} \mathrm d s = \int _{-R} ^R \frac {\mathrm i (s - a \mathrm i + a \mathrm i) \ \mathrm e^{\mathrm i s x}} {s - a \mathrm i} \mathrm d s = \mathrm i \int _{-R} ^R \mathrm e ^{\mathrm i s x} \mathrm d s - a \int _{-R} ^R \frac {\mathrm e^{\mathrm i s x}} {s - a \mathrm i} \mathrm d s = 2 \mathrm i \frac {\sin R x} x - a I(x) \ ,$$




which is a nonhomogeneous linear differential equation of order $1$ in $I$, which is easily solvable with the usual method and has for (unique) solution



$$I(x) = 2 \mathrm i \mathrm e ^{-ax} \int _{-\infty} ^x \mathrm e ^{a y} \frac {\sin R y} y \ \mathrm d y = 2 \mathrm i \mathrm e ^{-ax} \int _{-\infty} ^{Rx} \mathrm e ^{\frac {au} R} \frac {\sin u} u \ \mathrm d u \ .$$



For $x>0$ we deduce, using Lebesgue's dominated convergence theorem for the third time (this should convince you about its importance), that



$$\lim _{R \to \infty} \int _{-R} ^R \frac {\mathrm e^{\mathrm i s x}} {s - a \mathrm i} \ \mathrm d s \ = \lim _{R \to \infty} I(x) = 2 \mathrm i \mathrm e ^{-ax} \lim _{R \to \infty} \int _{-\infty} ^{Rx} \mathrm e ^{\frac {au} R} \frac {\sin u} u \ \mathrm d u = 2 \mathrm i \mathrm e ^{-ax} \int _{-\infty} ^\infty \frac {\sin u} u \ \mathrm d u = 2 \pi \mathrm i \ \mathrm e ^{-ax}$$



whence, for $x>0$, the integral that you are looking for is found to be




$$2 \pi (\mathrm e^{-4x} - \mathrm e^{-3x}) \ .$$



(For various methods of computing $\int _{-\infty} ^\infty \frac {\sin u} u \mathrm d u$, you may look here (and at the link therein) and here.)



If $x<0$ then, using the same approach as above, we get that $\lim _{R \to \infty} \int _{-\infty} ^{Rx} = \int _{-\infty} ^{-\infty} = 0$, therefore the desired integral is $0$.



Finally, for $x=0$, we could devise some complicated method to compute the corresponding integral, but it is easier to notice that the inverse Fourier transform will be continuous at $0$, and since the lateral limits are $0$ (using the result obtained so far), then the integral itself must be $0$ for $x=0$.



To conclude, the desired result is $\begin{cases} 2 \pi (\mathrm e^{-4x} - \mathrm e^{-3x}), & x>0 \\ 0, & x \le 0 \end{cases}$.
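As a numeric sanity check of this closed form (a sketch of my own, using a plain midpoint rule over a truncated range, so agreement is only expected to about $10^{-3}$ because of the $O(1/S)$ tail):

    import numpy as np

    x = 1.0
    S, N = 2000.0, 2_000_000                 # truncation and grid size
    ds = 2 * S / N
    s = -S + (np.arange(N) + 0.5) * ds       # midpoints of the grid
    vals = np.exp(1j * s * x) / ((s - 3.5j) ** 2 + 0.25)
    approx = vals.sum() * ds                 # midpoint-rule integral
    exact = 2 * np.pi * (np.exp(-4 * x) - np.exp(-3 * x))
    print(approx, exact)                     # approx ~ -0.1977 plus a tiny imaginary part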



algebra precalculus - $a,b$ are roots of $x^2-3cx-8d = 0$ and $c,d$ are roots of $x^2-3ax-8b = 0$. Then $a+b+c+d =$



(1) If $a,b$ are the roots of the equation $x^2-10cx-11d=0$ and $c,d$ are the roots of the equation



$x^2-10ax-11b=0$. Then find the value of $\displaystyle \sqrt{\frac{a+b+c+d}{10}}$, where $a,b,c,d$ are distinct real numbers.



(2) If $a,b,c,d$ are distinct real numbers such that $a,b$ are the roots of the equation $x^2-3cx-8d = 0$




and $c,d$ are the roots of the equation $x^2-3ax-8b = 0$. Then $a+b+c+d = $



$\bf{My\; Try}:$ (1) Using Vieta's formulas,



$a+b=10c......................(1)$ and $ab=-11d......................(2)$



$c+d=10a......................(3)$ and $cd=-11b......................(4)$



Now $a+b+c+d=10(a+c)..........................................(5)$




and $abcd=121bd\Rightarrow bd(ab-121)=0\Rightarrow bd=0$ or $ab=121$



Now I do not understand how I can calculate $a$ and $c$.



Help Required



Thanks


Answer



The answer for (1) is $11$.




$$abcd=121bd\Rightarrow bd(ac-121)=0\Rightarrow bd=0\ \text{or}\ ac=121.$$
(Note that you have a mistake here too.)



1) The $bd=0$ case : If $b=0$, we have $x(x-10a)=0$. This forces $c=0$ or $d=0$, which is a contradiction. The $d=0$ case also leads to a contradiction.



2) The $ac=121$ case : We have $$c=\frac{121}{a},\quad b=\frac{1210}{a}-a,\quad d=10a-\frac{121}{a}.$$ Hence, we have
$$1210-a^2+11\left(10a-\frac{121}{a}\right)=0$$
$$\Rightarrow a^3-110a^2-1210a+121\times 11=0$$
$$\Rightarrow a=-11,\ \frac{11(11\pm 3\sqrt{13})}{2}.$$




If $a=-11$, then $c=\frac{121}{a}=-11=a$, which is a contradiction. Hence, we have
$$(a,c)=\left(\frac{11(11\pm 3\sqrt{13})}{2},\frac{11(11\mp 3\sqrt{13})}{2}\right).$$



Hence, we have
$$\sqrt{\frac{a+b+c+d}{10}}=\sqrt{a+c}=\sqrt{121}=11.$$



I think you can get an answer for (2) in the same way as above.
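The corrected cubic and the final value can be double-checked with sympy (my addition, not part of the original answer):

    from sympy import symbols, factor, solve

    a = symbols('a')
    cubic = a**3 - 110*a**2 - 1210*a + 121*11
    print(factor(cubic))    # (a + 11)*(a**2 - 121*a + 121)
    print(solve(cubic, a))  # -11 and 11*(11 +- 3*sqrt(13))/2
    # The two conjugate roots sum to 121, so sqrt((a+b+c+d)/10) = sqrt(a+c) = 11.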


Trivial zeros of the Riemann Zeta function



A question that has been puzzling me for quite some time now:




Why is the value of the Riemann Zeta function equal to $0$ for every even negative number?



I assume that even negative refers to the real part of the number, while its imaginary part is $0$.



So consider $-2$ for example:



$$f(-2) =
\sum_{n=1}^{\infty}\frac{1}{n^{-2}} =
\frac{1}{1^{-2}}+\frac{1}{2^{-2}}+\frac{1}{3^{-2}}+\dots =
1^2+2^2+3^2+\dots =
\infty$$



What am I missing here?


Answer



The Zeta function is defined as $\zeta(s)=\sum_{n\ge1}n^{-s}$ only for $s\in\mathbb{C}$ with $\Re(s)>1$!



The function on the whole complex plane (except for a simple pole at $s=1$) is the analytic continuation of that function.



On the Wikipedia page, you can find the formula:
$$\zeta(s)=\frac{2^{s-1}}{s-1}-2^s\int_0^\infty\frac{\sin(s\arctan t)}{(1+t^2)^{\frac{s}{2}}(e^{\pi t}+1)}dt$$

for $s\neq 1$. Maybe working on this integral for $s$ a negative integer will give you the result.
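(For what it's worth, the standard way to see the trivial zeros is the functional equation
$$\zeta(s)=2^{s}\,\pi^{s-1}\sin\left(\frac{\pi s}{2}\right)\Gamma(1-s)\,\zeta(1-s):$$
at $s=-2k$ with $k=1,2,3,\dots$ the factor $\sin(\pi s/2)$ vanishes while the other factors are finite and nonzero, so $\zeta(-2k)=0$. At positive even integers the zero of the sine is cancelled by a pole of $\Gamma(1-s)$, which is why only the negative even integers give zeros.)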


algebra precalculus - Continuous function and function relationship

I have a functional relationship and I know that the function is continuous at a specific $x_0$. I want to know how to prove that this function is continuous on its whole domain $D_f$.



I have the function relationship:

$$f(x+y)=f(x)\cdot f(y)-\sin(x)\cdot \sin(y), \quad \forall x,y\in\mathbb{R}.$$ This function is continuous at $x_0=0$ and I want to prove that $f$ is continuous on $\mathbb{R}$. But for $x=y=0$ I get two different possible values for $f(0)$. So how do I prove what I want?
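(A possible resolution: $x=y=0$ gives $f(0)=f(0)^2$, so $f(0)\in\{0,1\}$; these are the two candidate values, but only one survives. If $f(0)=0$, then taking $y=0$ gives $f(x)=f(x)f(0)-\sin(x)\sin(0)=0$ for all $x$, i.e. $f\equiv 0$, which contradicts the relationship at $x=y=\frac{\pi}{2}$, where it would force $0=-1$. Hence $f(0)=1$, and then $f(x+h)-f(x)=f(x)\bigl(f(h)-1\bigr)-\sin(x)\sin(h)\to 0$ as $h\to 0$ by continuity at $0$, which gives continuity at every $x\in\mathbb{R}$.)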

Saturday 28 January 2017

calculus - Finding the area of the top half of a circle



Alright, I'm trying to calculate the area of the top half of a circle of radius $a$. Here's what I did so far:



$$\int_{-a}^a \sqrt{(a^2 - x^2) }dx$$



So I wrote $x$ as $a \cdot \sin \theta$:




$$\int_{-a}^a \sqrt{(a^2 - a^2\sin^2 \theta )}$$



$$\int_{-a}^a a \sqrt{( 1 - \sin^2 \theta)}$$



$$\int_{-a}^a [a \cdot \cos \theta]$$



$$2 \sin(a) a$$



The problem is that my textbook states that the area is actually:




$$\frac{\pi a^2}{2}$$



I've done this calculation over and over and I'm sure there are no mistakes, so what is going on here?


Answer



$$x=a\sin t\implies dx=a\cos t\,dt$$



and from here



$$\int_{-a}^a\sqrt{a^2-x^2}\,dx=a\int_{-\frac\pi2}^\frac\pi2\sqrt{1-\sin^2 t}\,a\cos t\,dt=a^2\int_{-\frac\pi2}^\frac\pi2\cos^2t\,dt=$$




$$=\left.\frac{a^2}2(t+\cos t\sin t)\right|_{-\frac\pi2}^{\frac\pi2}=\frac{a^2\pi}2$$
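The place where the original attempt went wrong, I believe, is the substitution step: replacing $x$ by $a\sin\theta$ also requires changing the limits of integration from $x=\pm a$ to $\theta=\pm\frac{\pi}{2}$ and replacing $dx$ by $a\cos\theta\,d\theta$; keeping the limits $\pm a$ and integrating in $\theta$ is what produced the incorrect $2a\sin(a)$.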


calculus - Prove without using graphing calculators that $f: \mathbb R\to \mathbb R,\,f(x)=x+\sin x$ is both one-to-one and onto (bijective).



Prove that the function $f:\mathbb R\to \mathbb R$ defined by $f(x)=x+\sin x$ for $x\in \mathbb R$ is a bijective function.






The codomain of the $f(x)=x+\sin x$ is $\mathbb R$ and the range is also $\mathbb R$. So this function is an onto function.
But I am confused in proving this function is one-to-one.

I know about its graph and I know that if a function passes the horizontal line test (i.e horizontal lines should not cut the function at more than one point), then it is a one-to-one function. The graph of this function looks like the graph of $y=x$ with sinusoids going along the $y=x$ line.

If I use a graphing calculator at hand, then I can tell that it is a one-to-one function and $f(x)=\frac{x}{2}+\sin x$ or $\frac{x}{3}+\sin x$ functions are not, but in the examination I need to prove this function is one-to-one theoritically, without graphing calculators.

I tried the method which we generally use to prove a function is one-to-one but no success.
Let $f(x_1)=f(x_2)$; we have to prove that $x_1=x_2$ in order for the function to be one-to-one.
Let $x_1+\sin x_1=x_2+\sin x_2$
But I am stuck here and could not proceed further.


Answer



You can prove that this function is strictly increasing :




It's a $C^1$ function and $f'(x) = 1+\cos(x) \geq 0$, so the function is increasing.



$\{x \mid f'(x) = 0 \} = \pi + 2\pi \Bbb Z$ is a discrete set, so $f$ is strictly increasing (if $f$ were locally constant somewhere, there would be an interval $]a,b[$ where $f'(x)=0$).
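(To complete the bijectivity argument: strict monotonicity gives one-to-one, and since $x-1\le f(x)\le x+1$, we have $f(x)\to\pm\infty$ as $x\to\pm\infty$; continuity and the intermediate value theorem then give onto.)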


summation - How to find this: $\sum_{n=1}^{\infty}\frac{(-1)^{n-1}\zeta_{n}(3)}{n}=?$



Question:




show that
$$\sum_{n=1}^{\infty}\dfrac{(-1)^{n-1}\zeta_{n}(3)}{n}=\dfrac{19\pi^4}{1440}-\dfrac{3}{4}\zeta{(3)}\ln{2}?$$





where $$\zeta_{n}(3)=\sum_{k=1}^{n}\dfrac{1}{k^3}$$



But when I check this with a computer I seem to get a different value. Is my result wrong? Thank you!


Answer



We want to show that
$$\sum_{n = 1}^{\infty}\frac{(-1)^{n - 1}\zeta_{n}(3)}{n}
=\frac{19\pi^{4}}{1440} - \frac{3}{4}\,\zeta(3)\ln(2),
\qquad \zeta_{n}(3) = \sum_{k = 1}^{n}\frac{1}{k^{3}}.$$





\begin{align}
\sum_{n = 1}^{\infty}\frac{(-1)^{n - 1}\zeta_{n}(3)}{n}
&=\sum_{n = 1}^{\infty}\frac{(-1)^{n - 1}}{n}\sum_{k = 1}^{n}\frac{1}{k^{3}}
=\sum_{k = 1}^{\infty}\frac{1}{k^{3}}\sum_{n = k}^{\infty}\frac{(-1)^{n - 1}}{n}
\end{align}




\begin{align}
\sum_{n = k}^{\infty}\frac{(-1)^{n - 1}}{n}
&=\sum_{n = k}^{\infty}(-1)^{n - 1}\int_{0}^{1}x^{n - 1}\,dx
=\int_{0}^{1}\sum_{n = k}^{\infty}(-x)^{n - 1}\,dx
=\int_{0}^{1}\frac{(-x)^{k - 1}}{1 - (-x)}\,dx
\\[3mm]&=\int_{0}^{1}\frac{(-x)^{k - 1}}{1 + x}\,dx
\end{align}




\begin{align}
\sum_{n = 1}^{\infty}\frac{(-1)^{n - 1}\zeta_{n}(3)}{n}
&=\sum_{k = 1}^{\infty}\frac{1}{k^{3}}\int_{0}^{1}\frac{(-x)^{k - 1}}{1 + x}\,dx
=-\int_{0}^{1}\sum_{k = 1}^{\infty}\frac{(-x)^{k}}{k^{3}}\,\frac{1}{x(1 + x)}\,dx
\\[3mm]&=-\int_{0}^{1}\frac{\operatorname{Li}_{3}(-x)}{x(1 + x)}\,dx
=\int_{-1}^{0}\frac{\operatorname{Li}_{3}(x)}{x(1 - x)}\,dx
=\int_{-1}^{0}\frac{\operatorname{Li}_{3}(x)}{x}\,dx
+\int_{-1}^{0}\frac{\operatorname{Li}_{3}(x)}{1 - x}\,dx
\\[3mm]&=-\operatorname{Li}_{4}(-1) + \operatorname{Li}_{3}(-1)\ln(2)
+\int_{-1}^{0}\ln(1 - x)\,\operatorname{Li}_{3}'(x)\,dx
\\[3mm]&=-\operatorname{Li}_{4}(-1) + \operatorname{Li}_{3}(-1)\ln(2)
-\int_{-1}^{0}x\,\operatorname{Li}_{2}'(x)\,\frac{\operatorname{Li}_{2}(x)}{x}\,dx
\end{align}




\begin{align}
\sum_{n = 1}^{\infty}\frac{(-1)^{n - 1}\zeta_{n}(3)}{n}
&=-\operatorname{Li}_{4}(-1) + \operatorname{Li}_{3}(-1)\ln(2)+\frac12\operatorname{Li}_{2}^{2}(-1)
\\[3mm]\mbox{and}&\qquad
\left\lbrace\begin{array}{rcl}
\operatorname{Li}_{4}(-1) & = & -\,\dfrac{7\pi^{4}}{720}
\\
\operatorname{Li}_{3}(-1) & = & -\,\dfrac{3}{4}\,\zeta(3)
\\
\operatorname{Li}_{2}(-1) & = & -\,\dfrac{\pi^{2}}{12}
\end{array}\right.
\end{align}





$$\sum_{n = 1}^{\infty}\frac{(-1)^{n - 1}\zeta_{n}(3)}{n}
=\frac{19\pi^{4}}{1440} - \frac{3}{4}\,\zeta(3)\ln(2)
\approx 0.6604$$
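Since the OP's computer experiment was inconclusive, here is a quick numeric check of the identity (my addition; the alternating partial sums converge like $O(1/N)$, and the loop takes a second or two):

    import math

    z3, total = 0.0, 0.0
    for n in range(1, 2_000_001):
        z3 += 1.0 / n**3                   # zeta_n(3)
        total += (-1) ** (n - 1) * z3 / n
    zeta3 = 1.2020569031595943             # zeta(3), Apery's constant
    closed = 19 * math.pi**4 / 1440 - 0.75 * zeta3 * math.log(2)
    print(total, closed)                   # both ~ 0.6604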



Is it possible to intuitively explain, how the three irrational numbers $e$, $i$ and $\pi$ are related?



I read a bit about this equation: $e^{i\pi}=-1$
For someone knowing high school maths this perplexes me. How are these three irrational numbers so seemingly smoothly related to one another? Can this be explained in a somewhat intuitive manner?
From my perspective it is hard to comprehend why these almost arbitrary looking irrational numbers have such a relationship to one another. I know the meanings and origins behind these constants.



Answer



Just think about it this way: $\pi$ is related to the circle, whose equation is $x^2+y^2=r^2$. Euler's number $e$ is related to the hyperbola, whose equation is $x^2-y^2=r^2$. In order to turn $y^2$ into $-y^2$ we need a substitution of the form $y\mapsto iy$.


When is Leibniz' notation for derivatives useful?



So Lagrange's $y'$ and Leibniz' $\frac{d}{dx}y$ seem to be the two most common notations for differentiation, but it seems puzzling to me that there are two notations for this. I've been taught Lagrange's notation, and haven't really used Leibniz' notation. In most cases it seems to me like Lagrange's really is the best. But I'm pretty sure that since Leibniz' is so widespread and common, there must be some use for it. For instance I find it easier to write:



$$
\begin{align}
\sin'x&= \cos x &&\text{sine rule}\\[0.5em]
(uv)' &= uv'+u'v &&\text{multiplication rule}\\[0.5em]

(u(v))' &= u'(v)\times v' &&\text{chain rule}
\end{align}
$$



than:



$$
\begin{align}
\frac{d}{dx}\sin x&= \cos x &&\text{sine rule}\\[1em]
\frac{d(uv)}{dx} &= \frac{du}{dx}v+\frac{dv}{dx}u &&\text{multiplication rule}\\[1em]

\frac{d(u(v))}{dx} &= \frac{du}{dv}\times\frac{dv}{dx} &&\text{chain rule}
\end{align}
$$



This are just some cases where I find it a lot easier to use Lagrange's notation, so when is Leibniz' notation the best?


Answer



In many concrete situations, you may have some specific expression on hand. Consider the density of the normal distribution, $f(x,\mu,\sigma)=(2\pi\sigma^2)^{-1/2}\exp(-|x-\mu|^2/2\sigma^2)$. If you simply write
$$
\left((2\pi\sigma^2)^{-1/2}\exp(-|x-\mu|^2/2\sigma^2)\right)',
$$

it will of course be completely unclear what you mean. If you want, you can define $f_{\mu,\sigma}(x)=f(x,\mu,\sigma)$, and write $f_{\mu,\sigma}'$, but it quickly becomes cumbersome to define new functions every time you wish to take a derivative. It is much simpler to simply write, say
$$
\left.\frac{d}{dx}\right|_{x=1}(2\pi\sigma^2)^{-1/2}\exp(-|x-\mu|^2/2\sigma^2)
$$
in place of first defining $f_{\mu,\sigma}$ and then writing $f_{\mu,\sigma}'(1)$.



This is much the same reason that it is useful to have the $dx$ appearing somewhere when you integrate, instead of a more general $d\mu$, where $\mu$ is a measure. It is notationally less pretty, but much more flexible.


Friday 27 January 2017

integration - Frullani 's theorem in a complex context.



It is possible to prove that $$\int_{0}^{\infty}\frac{e^{-ix}-e^{-x}}{x}dx=-i\frac{\pi}{2}$$ and in this case Frullani's theorem does not apply, since for the function $f(x)=e^{-x}$ it concerns $$\int_{0}^{\infty}\frac{e^{-ax}-e^{-bx}}{x}dx$$ where $a,b>0$. But if we apply the theorem anyway, we get $$\int_{0}^{\infty}\frac{e^{-ix}-e^{-x}}{x}dx=\log\left(\frac{1}{i}\right)=-i\frac{\pi}{2}$$ which is the right result.




Questions: is it only a coincidence? Is it possible to generalize the theorem to complex numbers? Is it a known result? And if it is, where can I find a proof of it?




Thank you.



Answer




The following development provides a possible way forward to generalizing Frullani's Theorem for complex parameters.




Let $a$ and $b$ be complex numbers such that $\arg(a)\ne \arg(b)+n\pi$, $ab\ne 0$, and let $\epsilon$ and $R$ be positive numbers.



In the complex plane, let $C$ be the closed contour defined by the line segments (i) from $a\epsilon$ to $aR$, (ii) from $aR$ to $bR$, (iii) from $bR$ to $b\epsilon$, and (iv) from $b\epsilon$ to $a\epsilon$.



Let $f$ be analytic in and on $C$ for all $\epsilon$ and $R$. Using Cauchy's Integral Theorem, we can write




$$\begin{align}
0&=\oint_{C}\frac{f(z)}{z}\,dz\\\\
&=\int_\epsilon^R \frac{f(ax)-f(bx)}{x}\,dx\\\\
&+\int_0^1 \frac{f(aR+(b-a)Rt)}{a+(b-a)t}\,(b-a)\,dt\\\\
&-\int_0^1 \frac{f(a\epsilon+(b-a)\epsilon t)}{a+(b-a) t}\,(b-a)\,dt\tag1
\end{align}$$



Rearranging $(1)$ reveals that




$$\begin{align}
\int_\epsilon^R \frac{f(ax)-f(bx)}{x}\,dx&=\int_0^1 \frac{f(a\epsilon+(b-a)\epsilon t)}{a+(b-a) t}\,(b-a)\,dt\\\\ &-\int_0^1 \frac{f(aR+(b-a)Rt)}{a+(b-a)t}\,(b-a)\,dt \tag 2
\end{align}$$



If $\lim_{R\to \infty}\int_0^1 \frac{f(aR+(b-a)Rt)}{a+(b-a)t}\,(b-a)\,dt=0$, then we find that



$$\begin{align}
\int_0^\infty \frac{f(ax)-f(bx)}{x}\,dx&=f(0)(b-a)\int_0^1\frac{1}{a+(b-a)t}\,dt\\\\
&=f(0)\log(|b/a|)\\\\
&+if(0)\left(\arctan\left(\frac{\text{Re}(a\bar b)-|a|^2}{\text{Im}(a\bar b)}\right)-\arctan\left(\frac{|b|^2-\text{Re}(a\bar b)}{\text{Im}(a\bar b)}\right)\right) \tag 3

\end{align}$$



Since $(a-b)\int_0^1 \frac{dt}{a+(b-a)t}$ is continuous in $a$ and $b$ for $ab\ne 0$, $(3)$ remains valid when $\arg(a)=\arg(b)+n\pi$ as well.







Note that the tangent of the term in large parentheses on the right-hand side of $(3)$ is



$$\begin{align}

\frac{\text{Im}(\bar a b)}{\text{Re}(\bar a b)}&=\tan\left(\arctan\left(\frac{\text{Re}(a\bar b)-|a|^2}{\text{Im}(a\bar b)}\right)-\arctan\left(\frac{|b|^2-\text{Re}(a\bar b)}{\text{Im}(a\bar b)}\right)\right)\\\\
&=\tan\left(\arctan\left(\frac{\text{Im}(b)}{\text{Re}(b)}\right)-\arctan\left(\frac{\text{Im}(a)}{\text{Re}(a)}\right)\right)
\end{align}$$
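As a sanity check of $(3)$ against the motivating example, take $f(x)=e^{-x}$, $a=i$, $b=1$: then $f(0)=1$, $a\bar b=i$, $|a|^2=|b|^2=1$ and $\log|b/a|=0$, so the bracket equals $\arctan(-1)-\arctan(1)=-\frac{\pi}{2}$, recovering
$$\int_0^\infty \frac{e^{-ix}-e^{-x}}{x}\,dx=-i\,\frac{\pi}{2}.$$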



real analysis - Infinite Series $\sum_{n=1}^\infty\frac{H_n}{n^3 2^n}$



I'm trying to find a closed form for the following sum

$$\sum_{n=1}^\infty\frac{H_n}{n^3\,2^n},$$
where $H_n=\displaystyle\sum_{k=1}^n\frac{1}{k}$ is a harmonic number.



Could you help me with it?


Answer



In the same spirit as Robert Israel's answer and continuing Raymond Manzoni's answer (both of them deserve the credit because of inspiring my answer) we have
$$
\sum_{n=1}^\infty \frac{H_nx^n}{n^2}=\zeta(3)+\frac{1}{2}\ln x\ln^2(1-x)+\ln(1-x)\operatorname{Li}_2(1-x)+\operatorname{Li}_3(x)-\operatorname{Li}_3(1-x).
$$
Dividing equation above by $x$ and then integrating yields

\begin{align}
\sum_{n=1}^\infty \frac{H_nx^n}{n^3}=&\zeta(3)\ln x+\frac12\color{red}{\int\frac{\ln x\ln^2(1-x)}{x}\ dx}+\color{blue}{\int\frac{\ln(1-x)\operatorname{Li}_2(1-x)}x\ dx}\\&+\operatorname{Li}_4(x)-\color{green}{\int\frac{\operatorname{Li}_3(1-x)}x\ dx}.\tag1
\end{align}
Using IBP to evaluate the green integral by setting $u=\operatorname{Li}_3(1-x)$ and $dv=\frac1x\ dx$, we obtain
\begin{align}
\color{green}{\int\frac{\operatorname{Li}_3(1-x)}x\ dx}&=\operatorname{Li}_3(1-x)\ln x+\int\frac{\ln x\operatorname{Li}_2(1-x)}{1-x}\ dx\qquad x\mapsto1-x\\
&=\operatorname{Li}_3(1-x)\ln x-\color{blue}{\int\frac{\ln (1-x)\operatorname{Li}_2(x)}{x}\ dx}.\tag2
\end{align}
Using Euler's reflection formula for dilogarithm
$$

\operatorname{Li}_2(x)+\operatorname{Li}_2(1-x)=\frac{\pi^2}6-\ln x\ln(1-x),
$$
then combining the blue integral in $(1)$ and $(2)$ yields
$$
\frac{\pi^2}6\int\frac{\ln (1-x)}{x}\ dx-\color{red}{\int\frac{\ln x\ln^2(1-x)}{x}\ dx}=-\frac{\pi^2}6\operatorname{Li}_2(x)-\color{red}{\int\frac{\ln x\ln^2(1-x)}{x}\ dx}.
$$
Setting $x\mapsto1-x$ and using the identity $H_{n+1}-H_n=\frac1{n+1}$, the red integral becomes
\begin{align}
\color{red}{\int\frac{\ln x\ln^2(1-x)}{x}\ dx}&=-\int\frac{\ln (1-x)\ln^2 x}{1-x}\ dx\\
&=\int\sum_{n=1}^\infty H_n x^n\ln^2x\ dx\\

&=\sum_{n=1}^\infty H_n \int x^n\ln^2x\ dx\\
&=\sum_{n=1}^\infty H_n \frac{\partial^2}{\partial n^2}\left[\int x^n\ dx\right]\\
&=\sum_{n=1}^\infty H_n \frac{\partial^2}{\partial n^2}\left[\frac {x^{n+1}}{n+1}\right]\\
&=\sum_{n=1}^\infty H_n \left[\frac{x^{n+1}\ln^2x}{n+1}-2\frac{x^{n+1}\ln x}{(n+1)^2}+2\frac{x^{n+1}}{(n+1)^3}\right]\\
&=\ln^2x\sum_{n=1}^\infty\frac{H_n x^{n+1}}{n+1}-2\ln x\sum_{n=1}^\infty\frac{H_n x^{n+1}}{(n+1)^2}+2\sum_{n=1}^\infty\frac{H_n x^{n+1}}{(n+1)^3}\\
&=\frac12\ln^2x\ln^2(1-x)-2\ln x\left[\sum_{n=1}^\infty\frac{H_{n+1} x^{n+1}}{(n+1)^2}-\sum_{n=1}^\infty\frac{x^{n+1}}{(n+1)^3}\right]\\&+2\left[\sum_{n=1}^\infty\frac{H_{n+1} x^{n+1}}{(n+1)^3}-\sum_{n=1}^\infty\frac{x^{n+1}}{(n+1)^4}\right]\\
&=\frac12\ln^2x\ln^2(1-x)-2\ln x\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^2}-\sum_{n=1}^\infty\frac{x^{n}}{n^3}\right]\\&+2\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^3}-\sum_{n=1}^\infty\frac{x^{n}}{n^4}\right]\\
&=\frac12\ln^2x\ln^2(1-x)-2\ln x\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^2}-\operatorname{Li}_3(x)\right]\\&+2\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^3}-\operatorname{Li}_4(x)\right].
\end{align}
Putting all together, we have

\begin{align}
\sum_{n=1}^\infty \frac{H_nx^n}{n^3}=&\frac12\zeta(3)\ln x-\frac18\ln^2x\ln^2(1-x)+\frac12\ln x\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^2}-\operatorname{Li}_3(x)\right]\\&+\operatorname{Li}_4(x)-\frac{\pi^2}{12}\operatorname{Li}_2(x)-\frac12\operatorname{Li}_3(1-x)\ln x+C.\tag3
\end{align}
Setting $x=1$ to obtain the constant of integration,
\begin{align}
\sum_{n=1}^\infty \frac{H_n}{n^3}&=\operatorname{Li}_4(1)-\frac{\pi^2}{12}\operatorname{Li}_2(1)+C\\
\frac{\pi^4}{72}&=\frac{\pi^4}{90}-\frac{\pi^4}{72}+C\\
C&=\frac{\pi^4}{60}.
\end{align}
Thus

\begin{align}
\sum_{n=1}^\infty \frac{H_nx^n}{n^3}=&\frac12\zeta(3)\ln x-\frac18\ln^2x\ln^2(1-x)+\frac12\ln x\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^2}-\operatorname{Li}_3(x)\right]\\&+\operatorname{Li}_4(x)-\frac{\pi^2}{12}\operatorname{Li}_2(x)-\frac12\operatorname{Li}_3(1-x)\ln x+\frac{\pi^4}{60}.\tag4
\end{align}
Finally, setting $x=\frac12$, we obtain
\begin{align}
\sum_{n=1}^\infty \frac{H_n}{2^nn^3}=\color{purple}{\frac{\pi^4}{720}+\frac{\ln^42}{24}-\frac{\ln2}8\zeta(3)+\operatorname{Li}_4\left(\frac12\right)},
\end{align}
which matches Cleo's answer.







References :



$[1]\ $ Harmonic number



$[2]\ $ Polylogarithm
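A numeric confirmation with mpmath (my addition, not part of the original answer):

    from mpmath import mp

    mp.dps = 25
    closed = (mp.pi**4 / 720 + mp.log(2)**4 / 24
              - mp.log(2) / 8 * mp.zeta(3) + mp.polylog(4, mp.mpf(1) / 2))
    direct = mp.nsum(lambda n: mp.harmonic(n) / (n**3 * 2**n), [1, mp.inf])
    print(closed)   # ~ 0.558240...
    print(direct)   # matches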


calculus - How does $\exp(x+y) = \exp(x)\exp(y)$ imply $\exp(x) = [\exp(1)]^x$?




In Calculus by Spivak (1994), the author states in Chapter 18 p. 341 that
$$\exp(x+y) = \exp(x)\exp(y)$$ implies
$$\exp(x) = [\exp(1)]^x$$

He refers to the discussion in the beginning of the chapter where we define a function $f(x + y) = f(x)f(y)$; with $f(1) = 10$, it follows that $f(x) = [f(1)]^x$. But I don't get this either. Can anyone please explain this? Many thanks!


Answer



I think you should assume $x$ is an integer here (for general real $x$, the power $a^x$ is itself defined using $\exp$). You can write $\exp(x) = \exp(\underbrace{1+1+1+1+\cdots+1}_x)$.



Using the property of $\exp$, you find that $\exp(x) = \exp(1)\exp(1)\dots\exp(1) = (\exp(1))^x$.


complex analysis - Choosing a contour for this integral



$$\int_{0}^\infty \frac{x^{\frac{1}{2}}}{x^2+1} \ dx$$
by taking the branch cut of $x^{\frac{1}{2}}$ along the positive real axis.




I wasn't sure what contour to choose, so I chose this keyhole contour:



Where the upper line segment is $C_1$ and the lower line segment is $C_2$, with the circular arc of radius $R$ being $C_R$ and the smaller circular arc of radius $r$ be $C_r$.



Can someone please show me how to use my contour to evaluate this?
This is my attempt






Note that $$\int_{C} \frac{z^{\frac{1}{2}}}{z^2+1} \ dz = 2\pi i \mathrm{Res}(f,i) + 2\pi i \mathrm{Res}(f,-i)$$

where $C = C_1 + C_2 + C_R + C_r$.
First consider
$$\left|\int_{C_R} \frac{z^{\frac{1}{2}}}{z^2+1}\,dz\right| \leq ML$$
where $M = \mathrm{max}\left\{\left|\frac{z^{\frac{1}{2}}}{z^2+1}\right|:z\in C_R\right\} = \frac{R^{\frac{1}{2}}}{R^2 - 1}$
and $L = \mathrm{length}(C_R) = 2\pi R$.
So that expression goes to zero as $R\rightarrow \infty$.
Similarly,
$$\left|\int_{C_r} \frac{z^{\frac{1}{2}}}{z^2+1}\,dz\right| \rightarrow 0$$
as $r\rightarrow 0$.



So it seems that all that's left is to parametrize the remaining segments... but I can't seem to have an expression which has upper limits going to infinity.


Answer



You have, by the definition of your branch cut, $$(x+ir)^{\frac12}\approx\sqrt{x}+\frac{ir}{2\sqrt{x}}$$ and $$(x-ir)^{\frac12}\approx-\sqrt{x}+\frac{ir}{2\sqrt{x}}$$ for $x>r$. Thus the integrals over the segments from $r+ir$ to $R+ir$ and from $R-ir$ to $r-ir$ together approximate twice the desired integral.







For a more boring region substitute $x=e^u$ to get the transformed integral
$$
\int_{-\infty}^\infty \frac{e^{\frac32 u}}{1+e^{2u}}du
$$
Now integrate along the contour of the box $[-R,R]+i[0,\pi]$. The integration along the upper side has the integrand for $w=u+i\pi$ so that
$$
f(w)=\frac{-i\,e^{\frac32 u}}{1+e^{2u}}
$$
plus orientation reversal,
the left and right sides are of size $e^{-R/2}$, so that the limit $R\to\infty$ of the contour integral is $(1+i)$ times the required value, and the only pole is at $w=i\frac\pi2$.
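To finish the original keyhole computation (my sketch, with the branch $z^{1/2}=e^{\frac12\log z}$, $\arg z\in(0,2\pi)$): the two sides of the cut together contribute twice the desired integral $I$, and the residues are $\operatorname{Res}_{z=i}=\frac{e^{i\pi/4}}{2i}$ and $\operatorname{Res}_{z=-i}=-\frac{e^{3i\pi/4}}{2i}$, so
$$2I = 2\pi i\left(\frac{e^{i\pi/4}}{2i}-\frac{e^{3i\pi/4}}{2i}\right) = \pi\left(e^{i\pi/4}-e^{3i\pi/4}\right)=\pi\sqrt{2},
\qquad I=\frac{\pi}{\sqrt 2}.$$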



rational numbers - How can I explain $0.999ldots=1$?











I have to explain $0.999\ldots=1$ to people who don't know limits.



How can I explain $0.999\ldots=1$?




The common procedure is as follows



\begin{align}
x&=0.999\ldots\\
10x&=9.999\ldots
\end{align}



$9x=9$ so $x=1$.


Answer



What I always find to be the simplest explanation is:

$$
\frac{1}{3} = 0.333\ldots \quad \Longrightarrow \quad 1 = 3 \cdot \frac{1}{3} = 3 \cdot 0.333\ldots = 0.999\ldots
$$


Thursday 26 January 2017

nonstandard analysis - About 0.999... = 1




I've just happened to read this question on MO (that of course has been closed) and some of the answers to a similar question on MSE.



I know almost nothing of nonstandard analysis and was asking myself if something like the sentence « $1- 0.999 \dots$ is a nonzero positive infinitesimal» could be easily expressed and proved in nonstandard analysis.



First of all, what is 0.999... ? If we take the usual definition as a series or as a limit of a sequence of rationals, then it will still be a real number and equal to $1$ (I guess by "transfer principle", but please correct me if I'm wrong).



Instead, let's define



$$0.9_N:=\sum_{i=1}^N 9\cdot 10^{-i} $$




where $N\in{}^*\mathbb{N}\setminus\mathbb{N}$ is an infinite nonstandard natural number. This $0.9_N$ is a legitimate element of ${}^*\mathbb{R}$, expressed as $0.$ followed by an infinite number of "$9$" digits.



What can be said about $\epsilon_N:=1-0.9_N$? Is there an elementary proof that $\epsilon_N$ is a positive infinitesimal of ${}^*\mathbb{R}$? (By "elementary" I mean just the order and field axioms and the intuitive facts about infinitesimals, like that $1/x$ is infinitesimal for $x$ infinite, etc.; no nonprincipal ultrafilters and the like.)


Answer



We can use the geometric series formula:



$$0.9_N = \sum_{i=1}^N 9 \cdot 10^{-i} = 9 \cdot 10^{-1} \cdot \frac{1 - 10^{-N}}{1 - 10^{-1}} = (1 - 10^{-N})$$



Since $N$ is infinite, $\epsilon_N = 10^{-N} = 1 / 10^N$ is infinitesimal.



inequality - How to show $x_k \in \mathbb R$, $\frac{\sum x_k}{n} \leq \left(\frac{\sum x_k^2}{n}\right)^n$?

Prove that, for arbitrary real numbers $x_1,x_2,x_3,\ldots,x_n$:



$$\frac{x_1+x_2+x_3...+x_n}{n} \leq \left(\frac{x_1^2+x_2^2+x_3^2...+x_n^2}{n}\right)^n$$



What theorem would you use to prove this inequality?
I would also like to know how to learn more about inequalities. Thanks.

number theory - Prove that one integer among $m$ consecutive integers is divisible by $m$



Show that of any $m$ consecutive integers, exactly one is divisible by $m$. I am finding it difficult to prove that there is only one number among $m$ consecutive integers that is divisible by $m$.



Answer



Let the numbers be $b_r=a+r, 0\le r\le m-1$



Existence:



We can apply Pigeonhole Principle to prove the existence by contradiction.



Suppose none of them is divisible by $m$; then they can leave only $m-1$ distinct remainders $(r)$, namely $1\le r\le m-1$.



But, as there are $m$ numbers, at least two of them leave the same remainder.




Let $b_u,b_v$ leave the same remainder, where $0\le u<v\le m-1$.

Then $m$ divides $b_v-b_u=v-u$



But $0<v-u<m$, which is impossible. This contradiction proves existence.

Uniqueness:



If $m$ divides both $b_s,b_t$ with $0\le s<t\le m-1$, then




$m$ must divide $b_t-b_s=t-s$, which lies in $(0,m-1]$; this is impossible.



So, there can be at most one $b_r$ divisible by $m$.


calculus - Faster way to find Taylor series



I'm trying to figure out if there is a better way to teach the following Taylor series problem. I can do the problem myself, but my solution doesn't seem very nice!



Let's say I want to find the first $n$ terms (small $n$ - say 3 or 4) in the Taylor series for




$$
f(z) = \frac{1}{1+z^2}
$$



around $z_0 = 2$ (or more generally around any $z_0\neq 0$, to make it interesting!) Obviously, two methods that come to mind are 1) computing the derivatives $f^{(n)}(z_0)$, which quickly turns into a bit of a mess, and 2) making a change of variables $w = z-z_0$, then computing the power series expansion for



$$
g(w) = \frac{1}{1+(w+z_0)^2}
$$ and trying to simplify it, which also turns into a bit of a mess. Neither approach seems particularly rapid or elegant. Any thoughts?


Answer




Let $g(w) = \sum_{n=0}^{\infty} a_n w^n$.



Then
$(w^2+4w+5) \; g(w) = 1$ implies
$$\begin{align}
5 a_0 &= 1 \\
4 a_0 + 5 a_1 &= 0 \\
a_0 + 4 a_1 + 5 a_2 &= 0 \\
a_1 + 4 a_2 + 5 a_3 &= 0 \\
\text{etc.}

\end{align}$$



which you can then solve for the $a_n$'s in a stepwise fashion.
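The resulting coefficients can be checked with sympy (my addition):

    from sympy import symbols, series

    z = symbols('z')
    print(series(1/(1 + z**2), z, 2, 4))
    # coefficients 1/5, -4/25, 11/125, -24/625 in powers of (z - 2),
    # matching the recursion above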


contest math - Find the coefficient of $x^{19}$ in the expression $(x+1)(x+2)(x+3)\cdots (x+400)$

Find the coefficient of $x^{19}$ in the expression $(x+1)(x+2)(x+3)\cdots (x+400)$



I have no clue how to start. Any kind of help will be appreciated.

Problem with square roots of complex number



I have to tackle a question related with complex number and its square roots. My thoughts so far are below it.




(a) Given that $(x+iy)^2=-5+12i$, where $x$ and $y$ are real numbers, show that:

(i) $x^2-y^2=-5$
(ii) $xy=6$


It is easy for me to solve this first part: all I need to do is expand the left side of the equation and compare real and imaginary parts.
But what really confuses me is the rest of the question:



(b) Hence find the two square roots of $-5+12i$.



How can I find a square root of a complex number? Using Vieta's formulas?
All I know is how to find $z$ from $z^n=r(\cos\theta + i\sin\theta)$.



Then it asks:



(c) For any complex number $z$, show that $(z^*)^2=(z^2)^*$.

(d) Hence write down the two square roots of $-5-12i$.



I thought (c) is easy to prove: maybe I can suppose $z=a+bi$ and $z^*=a-bi$ and plug them into the equation provided. But how does it help to find the roots?



How can I solve this problem? Help!


Answer



Let $z=x+iy$. For $(b)$, you need to solve $x^2-y^2=-5$ and $xy=6$. This is not too difficult to solve using Theo Bendit's answer but a nice trick is to remark that



$$x^2+y^2=|z|^2=|z^2|=|-5+12i|=13$$




Call this equation (iii). (i) and (iii) are linear in $x^2$ and $y^2$, so the system they form is easy to solve. We obtain $x^2=4$ and $y^2=9$, so $x=\pm2$ and $y=\pm3$.



This gives four possibilities for $(x,y)$, but only 2 are solutions of the problem since $xy=6$. Therefore, there are two solutions: $(x,y)=(2,3)$ and $(x,y)=(-2,-3)$. In other words, the square roots of $-5+12i$ are $2+3i$ and $-2-3i$.
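For part (d): by (c), $(z^*)^2=(z^2)^*$, so if $z^2=-5+12i$ then $(z^*)^2=(-5+12i)^*=-5-12i$. Hence the square roots of $-5-12i$ are the conjugates of the roots just found, namely $2-3i$ and $-2+3i$.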


Wednesday 25 January 2017

real analysis - Continuous and additive implies linear



The following problem is from Golan's linear algebra book. I have posted a solution in the comments.



Problem: Let $f(x):\mathbb{R}\rightarrow \mathbb{R}$ be a continuous function satisfying $f(x+y)=f(x)+f(y)$ for all $x,y\in \mathbb{R}$. Show $f$ is a linear transformation.


Answer



The only property of linear transformations that we still need to verify is that $f(tx)=tf(x)$ for all $x,t\in \mathbb{R}$. It is enough to establish this result just for rational numbers. Indeed, suppose the rational case is done, let $j$ be irrational and $x\in \mathbb{R}$, and choose a sequence of rationals $r_n \to j$. Then $r_n x \to jx$, so by continuity $f(r_n x)\to f(jx)$; on the other hand, the rational case gives $f(r_n x)=r_n f(x)\to jf(x)$. Comparing the two limits yields $f(jx)=jf(x)$, the desired result.



To verify the property for rationals, we first verify it for integers. If $n\in \mathbb{N}$, then



$nf(x)=f(x)+f(x)+...+f(x)=f(nx)$



by hypothesis. Also,




$f(x)=f(x/n)+f(x/n)+\cdots+f(x/n)=nf(x/n)$



so $\frac{1}{n}f(x)=f(\frac{x}{n})$. Combining the above shows we have scalar multiplication for all positive rationals.



Noting that $f(0)=f(0)+f(0)$ gives $f(0)=0$, and



$f(0)=f(-x)+f(x)\Rightarrow -f(-x)=f(x)$. Using this allows us to extend scalar multiplication to negative rationals and completes the proof.


Solution of functional equation

I know the solutions of the well-known Cauchy functional equation



$f(x+y)=f(x)+f(y)$



But what changes if I have the following form:




$f(x+g(y))=f(x)+f(g(y))$



?
What can I say about $g$?



Thanks.

measure theory - Suppose $X, X'$ are variables on different probability spaces with equal distributions. Do they have the same expectation?




Suppose $X: \Omega \to \mathbb{R}$ is a random variable on $(\Omega, \mathcal{F}, \mathbb{P})$ and $X': \Omega' \to \mathbb{R}$ is a random variable on $(\Omega', \mathcal{F}', \mathbb{P}')$. Assume that $\mathbb{P}_X = \mathbb{P'}_{X'}$ (the distribution of $X$ is equal to the distribution of $X'$). Is it true that



$$\int_\Omega X d \mathbb{P} = \int_{\Omega'} X' d \mathbb{P'}$$



I.e. is the $\mathbb{P}$-expectation of $X$ equal to the $\mathbb{P'}$-expectation pf $X'$?



Intuitively, this ought to be true but how can I formally show this?



I tried the approach where you first show this for indicator functions, then for positive functions, etc., but this doesn't work directly because we work on different probability spaces.




Maybe I can argue in the following way, if $X\geq 0$:



$$\int_\Omega X d \mathbb{P} = \int_0^\infty \mathbb{P}(X \geq t)dt = \int_0^\infty \mathbb{P}'(X'\geq t)dt = \int_{\Omega'}X'd\mathbb{P'}$$



and in the general case, the result then follows if we can prove that $X^+=XI_{\{X \geq 0\}}$ and $(X')^+ = X' I_{\{X' \geq 0\}}$ have equal distribution (and similarly for $X^-$ and $(X')^-$.



Any ideas?


Answer



$EX=\int_{\mathbb R} x dP_X(x)$ and $EX'=\int_{\mathbb R} x dP_{X'}(x)$, so the answer is YES. $EX$ exists iff $EX'$ exist and they are equal when they exist.
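(The identity $EX=\int_{\mathbb R} x \,dP_X(x)$ is the change-of-variables formula, sometimes called the law of the unconscious statistician. It is proved by exactly the standard machine the OP mentions, only applied on $(\mathbb{R},\mathcal{B}(\mathbb{R}),P_X)$ to the single function $g(x)=x$ rather than across the two sample spaces: for indicators $g=\mathbf{1}_B$ it is the definition of $P_X$, and one then extends to simple, nonnegative, and general integrable $g$.)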


Tuesday 24 January 2017

elementary number theory - Is every non-square integer a primitive root modulo some odd prime?

This question often comes in my mind when doing exercices in elementary number theory:




Is every non-square integer a primitive root modulo some odd prime?





This would make many exercices much easier. Unfortunately I seem unable to discover anything interesting which may lead to an answer.



It seems likely to me that this is true. If $n\equiv2\pmod3$ then it's a primitive root modulo $3$. If $n\equiv2,3\pmod5$, it's a primitive root modulo $5$. If we would continue like this, my guess is that any non-square $n$ will satify at least one of these congruences.



This being difficult, I began considering a simplified question:




Is every non-square integer a quadratic non-residue modulo some prime?





Or equivalently,




If an integer is a square modulo every prime, then is it a square itself?




The second form seems easier to approach, however I still can't find anything helpful.
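(For the simplified question the answer is yes; this is classical. If $n$ is not a perfect square, one can combine quadratic reciprocity, the Chinese remainder theorem and Dirichlet's theorem on primes in arithmetic progressions to produce infinitely many primes $p$ with $\left(\frac{n}{p}\right)=-1$; equivalently it follows from Chebotarev's density theorem applied to $\mathbb{Q}(\sqrt n)$. The original question, by contrast, is in the territory of Artin's primitive root conjecture, so I would not expect an easy unconditional answer.)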

algebra precalculus - How to determine answers to problems dealing with undefined values?



So we know that undefined values exist, like $\ln 0$. However, $e^{\ln 0}$ brings a problem: you can get $0$, if you use the fact that $e^x$ and $\ln{x}$ undo each other, or you will get undefined, because $\ln 0$ is undefined and $e$ raised to an undefined value is undefined. So which is correct?



Another problem I have is with exponents. We know, for example, that if $a-b=c$, then $\dfrac{n^a}{n^b}=n^c$; for instance, with $a=3$, $b=2$, and $n=2$, $\dfrac{2^3}{2^2}=2$. But if $n$ is $0$, then $0^{a-b}$ would be the same as $\dfrac{0^a}{0^b}$, where both values are $0$ and the quotient is undefined (or if you want to say $\dfrac{0}{0}$ is $1$, $1$ still isn't $0$), which would show that $0^c$ (where $c$ is any number) is undefined, since every number can be represented as a difference of two numbers.




In short:
My question is when do you simplify problems with undefined values (e.g $\dfrac{0^2}{0^1}$ is not $\dfrac{0}{0}$ but instead just $0^1$) and when is leaving it undefined correct? (e.g $e^{\ln0}$ being undefined, not $0$)


Answer



By the traditional meaning of equality, $e^{\ln0}$ is undefined. You cannot 'cancel' $e^x$ and $\ln{x}$ because $\ln{0}$ itself is undefined. It's analogous to rational functions like this:



$$\dfrac{(x-4)(x^2+x+3)}{(x-4)}$$



In the above function, you can cancel the term $(x-4)$ on both the top and the bottom, but that doesn't change the fact the original function has a hole at $x=4$ because you're dividing by zero. Same concept here: you can 'cancel' $e^x$ and $\ln{x}$, but that doesn't change the fact that it's undefined.



As you begin to venture into calculus and observe function behavior as $x$ approaches some constant $c$, you can use limits to state that:




$$\lim_{x\to0}\ln{x}=-\infty \Rightarrow \lim_{x\to-\infty}e^x=0$$



So as $x$ approaches $0$ in $e^{\ln{x}}$, the expression tends to $0$, but isn't equal in the traditional sense.



As for your concern about the power subtraction rule, it's simply restricted to nonzero values:



$$\dfrac{x^a}{x^b} = x^{a-b} \;\;\; x\neq0$$



So it's invalid to say:




$$\dfrac{0^2}{0^1} = 0^1 = 0$$


proof verification - Number of positive integer solutions of inequality



I was preparing a class on polynomials (high school level). The handbook I use always contains some questions from math olympiads. The following question is asked:




What is the number of positive integer solutions of the following inequality:
$$(x-\frac{1}{2})^1(x-\frac{3}{2})^3\ldots(x-\frac{4021}{2})^{4021} < 0.$$



I found this number to be 1005. Here is my reasoning:



For a positive integer to satisfy this inequality, an odd number of factors should be negative. There are $\frac{4021 +1}{2} =2011$ distinct factors. Each distinct factor appears an odd number of times, hence we can forget about the exponents. We also only need to consider positive integers smaller than $\frac{4021}{2} = 2010.5$, for any bigger number would make all factors positive.



In order for the product to be negative, we need to look for positive integers making an even number of the first factors positive. This happens if we pick even integers: the constants $\frac{1}{2}, \frac{3}{2},\ldots$ are one unit from the next, so if we pick an even integer, there is an even number of such constants smaller than that integer.




Hence we look for the number of even positive integers smaller than $2010.5$, which is $\frac{2011 - 1}{2} = 1005$.



Is this solution correct? (sorry for the bad english by the way. I hope this does not make my explanation unclear).



Note I am also interested in other (quicker, slicker) solutions.


Answer



This may look shorter, but then again it is not really different from your argument:



The polynomial changes its sign at $\frac12, \frac32,\ldots, \frac{4021}2$ and is positive as $x\to+\infty$, hence positive for $x>\frac{4021}2$. In the $2010$ intervals determined this way, the polynomial is alternatingly positive and negative, hence negative in $1005$ of them. Each such interval contains exactly one positive integer (whereas all other positive integers are $>\frac{4021}2$), hence the answer is $1005$.
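As an independent sanity check, here is a minimal brute-force sketch in Python; since every exponent is odd, only the parity of the number of negative factors matters:

```python
# count positive integers x making the product negative; all exponents are odd,
# so the sign is determined by the parity of the number of negative factors
N = 2011  # number of distinct factors (x - (2j-1)/2), j = 1..N
count = 0
for x in range(1, 2 * N):  # integers above (2N-1)/2 = 2010.5 make all factors positive
    negatives = sum(1 for j in range(1, N + 1) if x < (2 * j - 1) / 2)
    if negatives % 2 == 1:
        count += 1
print(count)  # prints 1005
```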


Monday 23 January 2017

How to calculate limit of the sequence $e^{-(n^frac{1}{2})}{(n+1)^{100}}$





Does the sequence $e^{-(n^\frac{1}{2})}{(n+1)^{100}}$ converge? If yes what is the limit?




What I tried: Expanding
$$(n+1)^{100}= 1+\binom{100}{1}n+\binom{100}{2}n^2+\binom{100}{3}n^3+ \dots + \binom{100}{100}n^{100}.$$
Multiplying each term by $e^{-\sqrt{n}}$ and taking limits using L'Hospital's rule, we get that the limit of the sequence is $0$, but that is lengthy and I think it is not a proper approach. [Actually, applying L'Hospital's rule twice to each term gives the preceding term (I checked it up to $n^3e^{-\sqrt{n}}$), and hence ultimately the limit will be $0$ because $\lim e^{-\sqrt{n}}=0$.]
Can anyone please tell me whether this is a correct method to solve the problem, and please suggest a proper method if there is one. Thank you.



The answer: the limit of the sequence is $0$.


Answer



Yes, the final limit is zero. Note that as $n\to +\infty$

$$e^{-\sqrt{n}}{(n+1)^{100}}=\exp\left({-\sqrt{n}\underbrace{\left(1-\frac{100\ln(n+1)}{\sqrt{n}}\right)}_{\to 1}}\right)\to0$$
because, for example by using L'Hopital,
$$\lim_{n\to +\infty}\frac{\ln(n+1)}{\sqrt{n}}=0.$$
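A numerical look agrees, though the terms first grow to astronomical size before the $\sqrt{n}$ in the exponent wins; the sketch below therefore tracks the logarithm $-\sqrt{n}+100\ln(n+1)$ of the $n$-th term rather than the term itself, which would overflow or underflow double precision:

```python
import math

# log of the n-th term: -sqrt(n) + 100*ln(n+1); it only turns negative
# once sqrt(n) outgrows 100*ln(n+1)
for n in [10**4, 10**6, 10**8]:
    print(n, -math.sqrt(n) + 100 * math.log(n + 1))
```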


Sunday 22 January 2017

calculus - Evaluating $limlimits_{ntoinfty} e^{-n} sumlimits_{k=0}^{n} frac{n^k}{k!}$



I'm supposed to calculate:



$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$



By using W|A, I may guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.


Answer




Edited. I justified the application of the dominated convergence theorem.



By a simple calculation,



$$ \begin{align*}
e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!}
&= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\
(1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\
&= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\
(2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\

&= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\
(3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du.
\end{align*}$$



We remark that




  1. In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.

  2. In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.

  3. In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.




Then, in view of Stirling's formula, it suffices to show that



$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$



The idea is to introduce the function



$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$




and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then



$$ \log g_n (u)
= n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u
= -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$



From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,



$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
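For a quick numerical cross-check, the partial sums can be evaluated with a running product for the terms $e^{-n}n^k/k!$ (a standard-library sketch; $n$ is capped at $500$ so that $e^{-n}$ stays within double-precision range):

```python
import math

# check that e^{-n} * sum_{k=0}^{n} n^k / k! approaches 1/2 (slowly)
for n in [10, 100, 500]:
    term = math.exp(-n)   # term = e^{-n} n^k / k!, starting at k = 0
    total = 0.0
    for k in range(n + 1):
        total += term
        term *= n / (k + 1)
    print(n, total)
```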


summation - Sum of fourth powers in terms of sum of squares

The sum of the fourth powers of the first $n$ integers can be expressed as a multiple of the sum of squares of the first $n$ integers, i.e.



$$\begin{align}

\sum_{r=1}^n r^4&=\frac {n(n+1)(2n+1)(3n^3+3n-1)}{30}\\
&=\frac{3n^2+3n-1}5\cdot \frac {n(n+1)(2n+1)}6 \\
&=\frac{3n^2+3n-1}5\sum_{r=1}^nr^2 \end{align}$$




Question: Is it possible to show this, purely by manipulating the summand, and without first expressing the summation in closed form and then factoring the sum of squares?




From the question here, we see that




$$\sum_{r=1}^n r^4=\left(\sum_{r=1}^n r^2\right)^2-2\sum_{r=1}^n r^2\sum_{j=1}^{r-1}j^2\\
=\sum_{r=1}^n r^2 \left(\sum_{i=1}^n i^2-2\sum_{j=1}^{r-1}j^2\right)$$
but this does not appear to lead anywhere closer to answering the question.



Another approach might be to use Abel's summation formula
$$\sum_{r=1}^n f_r (g_{r+1}-g_r)=\left[f_{n+1}g_{n+1}-f_1g_1\right]-\sum_{r=1}^n g_{r+1}(f_{r+1}-f_r)$$
Putting $f_r=r^2$ and $g_r=\frac {r(r-1)(2r-1)}6$ gives
$$\sum_{r=1}^n r^4=(n+1)^2\cdot \frac {n(n+1)(2n+1)}6-\sum_{r=1}^n \frac {r(r+1)(2r+1)}6\cdot (2r+1)$$
but again this does not seem to get us any further.







1st Edit



Putting $T_m=\sum_{r=1}^n r^m$, the original problem can be restated as an attempt to prove that



$$5T_4=(6T_1-1)T_2$$







2nd Edit



This paper might be useful.



In section 4 (p$206$), it is stated that
$$\frac{\sigma_4}{\sigma_2}=\frac {6\sigma_1-1}5$$
which is derived from the Faulhaber polynomials.
$\sigma_m$ has the same definition as our $T_m$ as defined above.
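As a quick symbolic check of the restated identity $5T_4=(6T_1-1)T_2$ (using the known closed forms, so this is verification rather than the summand-level manipulation asked for), a SymPy sketch:

```python
from sympy import symbols, simplify

n = symbols('n')
# Faulhaber closed forms for T_1, T_2, T_4
T1 = n * (n + 1) / 2
T2 = n * (n + 1) * (2 * n + 1) / 6
T4 = n * (n + 1) * (2 * n + 1) * (3 * n**2 + 3 * n - 1) / 30
print(simplify(5 * T4 - (6 * T1 - 1) * T2))  # prints 0
```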

real analysis - A point a is a cluster point of a set $A subset mathbb R$ iff there exists a sequence ${a ^{(k)}} subset Asetminus{a}$ converging to $a$.




Prove: a point $a$ is a cluster point of a set $A \subset \mathbb R$ iff there exists a sequence $\{a^{(k)}\} \subset A\setminus\{a\}$ converging to $a$.





My thoughts:



I know that the definition of a cluster point $a$ of a set $A \subset \mathbb R$ is: for every $\delta > 0$, the ball $B_{\delta}(a)$ contains at least one point of $A$ other than $a$ itself. But I do not know how to use this definition to prove what is required.



Could anyone show me how to prove this please?


Answer



Hint:



Take $\delta = \frac{1}{k}$ in the definition of a cluster point to find the $k$-th term of a sequence. For the other direction, the fact that there is a sequence in $A \setminus \{a\}$ converging to $a$ already tells you something about the intersection of open balls around $a$ with $A$.



real analysis - Compute $lim_{n to infty} int_0^{frac{pi}{2}} sum_{k=1}^n (sin{x})^kdx$




Compute $$\displaystyle \lim_{n \to \infty} \int_0^{\frac{\pi}{2}} \sum_{k=1}^n (\sin{x})^kdx.$$





I tried writing $\displaystyle \lim_{n \to \infty} \int_0^{\frac{\pi}{2}} \sum_{k=1}^n (\sin{x})^kdx = \lim_{n \to \infty} \int_0^{\frac{\pi}{2}} \frac{\sin x\left(1-(\sin{x})^{n}\right)}{1-\sin{x}}dx$, but I don't know how to continue from here.



I know that the limit diverges to $\infty$, but I don't know how to prove it.


Answer



The wanted limit is $+\infty$. Indeed, by letting $I_k=\int_{0}^{\pi/2}\left(\sin x\right)^k\,dx$ we have that $\{I_k\}_{k\geq 1}$ is a decreasing sequence, but it is also log-convex by the Cauchy-Schwarz inequality, and by integration by parts
$$ I_{k}^2\geq I_k I_{k+1} = \frac{\pi}{2(k+1)}\tag{1}$$
such that
$$ \sum_{k=1}^{n}I_k \geq \sqrt{\frac{\pi}{2}}\sum_{k=1}^{n}\frac{1}{\sqrt{k+1}}\geq \sqrt{\frac{\pi}{2}}\sum_{k=1}^{n}2\left(\sqrt{k+2}-\sqrt{k+1}\right)=\sqrt{2\pi}\left(\sqrt{n+2}-\sqrt{2}\right).\tag{2}$$
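This growth is easy to watch numerically via the standard Wallis recursion $I_k=\frac{k-1}{k}I_{k-2}$ (with $I_0=\pi/2$, $I_1=1$); the partial sums track $\sqrt{2\pi n}$ up to a bounded offset:

```python
import math

# Wallis recursion for I_k = integral of sin^k(x) over [0, pi/2]
I = [math.pi / 2, 1.0]
for k in range(2, 10001):
    I.append((k - 1) / k * I[k - 2])

n = 10000
print(sum(I[1:n + 1]), math.sqrt(2 * math.pi * n))  # same order of growth
```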


limits - $lim_{x to c} {f(x)} = 0 Rightarrow lim_{x to c} {1 over f(x)} = infty$



Suppose $f$ is a real-valued function of a real variable, $c \in \operatorname{cl}(\operatorname{Domain}(f))$, and $\lim_{x \to c} {f(x)} = 0$. How does one prove that this implies $\lim_{x \to c} {1 \over f(x)} = \infty$?




It seems logical that the values will always get bigger, but when I tried to construct a contradiction using the $\epsilon$-criterion I got stuck at: $\exists \epsilon > 0: f(x)>0 \ \forall x \in [c-\epsilon,c+\epsilon]$.


Answer



This is problematic, even if you consider $1/|f(x)|$ instead of $1/f(x)$. For example, let $$f(x)=\begin{cases}x\sin(1/x) & x\ne0\\ 0 & \text{otherwise.}\end{cases}$$ This is everywhere defined and continuous on $\Bbb R$, and $$\lim_{x\to 0}f(x)=0,$$ but since there is no $x$-interval around $0$ on which $1/|f(x)|$ is defined, then it is problematic to talk about $$\lim_{x\to0}\frac1{|f(x)|}.$$ It's even more problematic to talk about it if we were to let $f$ be the constant zero function.



We must make some extra assumptions to take care of your problems. In particular, you need to show the following:



Suppose that $E\subseteq\Bbb R$ and $f:E\to\Bbb R.$ Let $F=\{x\in E:f(x)\ne0\}.$ Suppose further that $c\in\Bbb R$ is a limit point of both $E$ and $F,$ and that for all $\epsilon>0$ there is some $\delta>0$ such that $|f(x)|<\epsilon$ whenever $x\in E$ with $0<|x-c|<\delta$. Then for all $M,$ there exists $\delta>0$ such that $1/|f(x)|>M$ whenever $x\in F$ with $0<|x-c|<\delta.$


calculus - Limit of an expression



$$\lim\limits_{n\to\infty}\frac{1}{e^n\sqrt{n}}\sum\limits_{k=0}^{\infty}\frac{n^k}{k!}|n-k|=\sqrt{2/\pi}$$
Is this limit correct? I need to show that it is; using computer programs to explore the limit is allowed.
Thanks for your help...



Answer



This question has a nice probabilistic interpretation. Given that $X$ is a Poisson distribution with parameter $\lambda=n$, we are essentially computing the expected value of the absolute difference between $X$ and its mean $n$. The central limit theorem gives that $Y\sim N(n,n)$ (a normal distribution with mean and variance equal to $n$) is an excellent approximation of our distribution for large values of $n$, hence:
$$\begin{eqnarray*}\frac{1}{e^n \sqrt{n}}\sum_{k=0}^{+\infty}\frac{n^k}{k!}|n-k|&\approx&\frac{1}{\sqrt{n}}\cdot\frac{1}{\sqrt{2\pi n}}\int_{-\infty}^{+\infty}|x-n|\exp\left(-\frac{(x-n)^2}{2n}\right)\,dx\\&=&\frac{2}{n\sqrt{2\pi}}\int_{0}^{+\infty}x\exp\left(-\frac{x^2}{2n}\right)\,dx\\&=&\color{red}{\sqrt{\frac{2}{\pi}}},\end{eqnarray*}$$
so the limit is not zero.
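A direct numerical check supports this (standard library only; $n$ is capped at $500$ so that $e^{-n}$ stays within double-precision range):

```python
import math

# check that (1/(e^n sqrt(n))) * sum_k n^k/k! * |n-k| approaches sqrt(2/pi)
for n in [10, 100, 500]:
    term = math.exp(-n)       # e^{-n} n^k / k! at k = 0
    total = 0.0
    for k in range(10 * n):   # the Poisson tail beyond ~10n is negligible
        total += term * abs(n - k)
        term *= n / (k + 1)
    print(n, total / math.sqrt(n), math.sqrt(2 / math.pi))
```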


In how many ways 1387 can written in the sum of $n,(n>2)$ Consecutive natural numbers



In how many ways can $1387$ be written as the sum of $n$ ($n>2$) consecutive natural numbers?




1.$2$



2.$3$



3.$4$




4.$5$




First, we can see that it can be written as the sum of two consecutive natural numbers. Try the other cases. The answer is supposed to be $3$, but how can we prove it? And if we find all three cases, how can we be sure that there isn't any other?


Answer



HINT:



If $a$ is the first term of the $n$ consecutive natural numbers,
we have $$\dfrac n2\{2a+(n-1)\}=1387$$




$$\iff n^2+(2a-1)n-2\cdot1387=0$$



As $n$ is a natural number, the discriminant $(2a-1)^2+8\cdot1387$ has to be a perfect square.



Let $(2a-1)^2+8\cdot1387=(2b+1)^2$ where integer $a\ge1,2b+1>\sqrt{8\cdot1387}$



$\iff(b+a)(b-a+1)=2\cdot1387=2\cdot19\cdot73$



So, $b+a$ must divide $2\cdot19\cdot73$



Saturday 21 January 2017

number theory - proof - infinite pair integers $a$ and $b$ such that $a + b = 100$ and $gcd(a, b) = 5$



Prove that there are infinitely many pairs of integers $a$ and $b$ such that $a + b = 100$ and $\gcd(a, b) = 5$.



I don't know how to proceed with this, especially proving that there are infinitely many. There was another problem where the GCD was $3$ and I had to prove that no such combination existed, which I did with ease.




I have tried but have made no progress. Any help would be appreciated.


Answer



Thanks to Wojowu, I am able to present my proof. If you find any error or flaws please comment.



Proof:



For every $k \in \mathbb{Z}^+$, take $a = 100k+5$ and $b = -(100k - 95) = 95-100k$; then $a+b=100$ and $\gcd(a, b) = 5$.



Indeed, $\gcd(100k + 5, 100k - 95) = \gcd(100k + 5, (100k - 95) - (100k + 5)) = \gcd(5(20k+1), 5\cdot 20)$, which is equal to $5$ since $20k +1$ is relatively prime to $20$. Thus there are infinitely many pairs of integers that satisfy the given criteria.
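A quick spot-check of the family in Python (with $b = 95 - 100k$, as in the proof):

```python
from math import gcd

# a = 100k + 5, b = 95 - 100k gives a + b = 100 and gcd(a, b) = 5 for every k
for k in range(1, 6):
    a, b = 100 * k + 5, 95 - 100 * k
    print(k, a, b, a + b, gcd(a, b))
```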


analysis - Unclear equation in my scriptum (perhaps cauchy-product)

[image of the unclear equation from the lecture notes — not reproduced here]



Hi, the following equation in my lecture notes seems unclear to me. I think it has something to do with the Cauchy product, but I don't know.

calculus - What is the limit of the sequence n!/4^n?





I am trying to find the limit of the sequence by using the root test, and I don't understand why the limit is not zero (the answer is $\infty$).


Answer



By the root test, together with Stirling's approximation $\ln n! = n\ln n - n + o(n)$:



$$\begin{array}{rcl}
\displaystyle \limsup_{n\to\infty} \sqrt[n]{a_n} &=& \displaystyle \limsup_{n\to\infty} \sqrt[n]{\dfrac{n!}{4^n}} \\
&=& \displaystyle \dfrac14 \limsup_{n\to\infty} \sqrt[n]{n!} \\
&=& \displaystyle \dfrac14 \limsup_{n\to\infty} \exp\left(\frac{\ln n!}{n}\right) \\
&=& \displaystyle \dfrac14 \limsup_{n\to\infty} \exp\left(\ln n - 1 + o(1)\right) \\
&=& \displaystyle \dfrac1{4e} \limsup_{n\to\infty} n \\
&=& \infty
\end{array}$$



Hence the sequence diverges to infinity.
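A few terms computed directly confirm the growth:

```python
import math

# the ratio of consecutive terms is (n+1)/4, which exceeds 1 from n = 4 onward
for n in [10, 20, 50]:
    print(n, math.factorial(n) / 4**n)
```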


trigonometry - Rewriting an expression in the form of $A sin(x + C)$



The problem asks to rewrite $$\sin(x) - \cos(x)$$ in the form of $A\sin(x + C)$, using the reduction formula.



The answer is supposed to be $\sqrt{2}\sin(x - \pi/4)$, or $\sqrt{2}\sin(x - 45^\circ)$ using degrees.



But from what the book is doing, I don't know what C is supposed to be or how to get it.


Answer




This is an interesting question, because you wouldn't necessarily think that the sum of two sine waves would give another sine wave and not some other wavy thing. The key to solving this problem is using two identities. The first is the complementary angle one: $\cos x = \sin (\pi/2-x)$ using radians. The second (which you may be calling the reduction formula) is the sum-to-product formula $\sin a - \sin b = 2 \cos ((a+b)/2) \sin ((a-b)/2)$. Applying these,



$$
\sin x - \cos x = \sin x - \sin (\pi/2 -x) = 2\cos(\pi/4) \sin (x-\pi/4) = \sqrt{2} \sin (x-\pi/4)
$$



which is exactly the answer you gave.



The "$C$" is essentially the phase shift of the new sine function that occurs because you are making a new function. Alternatively, you could take Simon's hint (which may or may not be more obvious) by expanding $\sqrt{2} \sin (x-\pi/4)$ and working backwards.
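A quick numerical spot-check of the identity in Python:

```python
import math

# sin(x) - cos(x) should equal sqrt(2) * sin(x - pi/4) for every x
for x in [0.0, 0.7, 2.0, -1.3]:
    lhs = math.sin(x) - math.cos(x)
    rhs = math.sqrt(2) * math.sin(x - math.pi / 4)
    print(x, lhs - rhs)  # differences of roughly machine-epsilon size
```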


calculus - Reduction formula for $intfrac{dx}{(ax^2+b)^n}$



I recently stumbled upon the following reduction formula on the internet which I am so far unable to prove.
$$I_n=\int\frac{\mathrm{d}x}{(ax^2+b)^n}\\I_n=\frac{x}{2b(n-1)(ax^2+b)^{n-1}}+\frac{2n-3}{2b(n-1)}I_{n-1}$$

I tried the substitution $x=\sqrt{\frac ba}t$, and it gave me
$$I_n=\frac{b^{1/2-n}}{a^{1/2}}\int\frac{\mathrm{d}t}{(t^2+1)^n}$$
To which I applied $t=\tan u$:
$$I_n=\frac{b^{1/2-n}}{a^{1/2}}\int\cot^{n-1}u\ \mathrm{d}u$$
I then used the $\cot^nu$ reduction formula to find
$$I_n=\frac{-b^{1/2-n}}{a^{1/2}}\bigg(\frac{\cot^{n-2}u}{n-2}+\int\cot^{n-3}u\ \mathrm{d}u\bigg)$$
$$I_n=\frac{-b^{1/2-n}\cot^{n-2}u}{a^{1/2}(n-2)}-b^2I_{n-2}$$
Which is a reduction formula, but not the reduction formula.



Could someone provide a derivation of the reduction formula? Thanks.



Answer



Hint The appearance of the term in $\frac{x}{(a x^2 + b)^{n - 1}}$ suggests applying integration by parts with $dv = dx$ and thus $u = (a x^2 + b)^{-n}$. Renaming $n$ to $m$ we get
$$I_m = u v - \int v \,du = \frac{x}{(a x^2 + b)^m} + 2 m \int \frac{a x^2 \,dx}{(a x^2 + b)^{m + 1}} .$$
Now, the integral on the right can be rewritten as a linear combination $p I_{m + 1} + qI_m$, so we can solve for $I_{m + 1}$ in terms of $I_m$ and replace $m$ with $n - 1$.
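To gain confidence in the resulting formula, a small SymPy sketch can verify the differentiated form of the recursion for a particular $n$ (here $n = 2$):

```python
from sympy import symbols, diff, simplify

x, a, b = symbols('x a b', positive=True)
n = 2
# d/dx of the boundary term, plus (2n-3)/(2b(n-1)) times the (n-1)-st
# integrand, should reproduce the n-th integrand 1/(a x^2 + b)^n
boundary = x / (2 * b * (n - 1) * (a * x**2 + b)**(n - 1))
check = (diff(boundary, x)
         + (2*n - 3) / (2*b*(n - 1)) / (a * x**2 + b)**(n - 1)
         - 1 / (a * x**2 + b)**n)
print(simplify(check))  # prints 0
```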


Friday 20 January 2017

sequences and series - Evaluating $ 1+frac{cos(theta)}{1!}+frac{cos(2theta)}{2!}+frac{cos(3theta)}{3!}+cdots $



Problem: We are asked to evaluate the infinite sum
$$
1+\frac{\cos(\theta)}{1!}+\frac{\cos(2\theta)}{2!}+\frac{\cos(3\theta)}{3!}+\cdots
$$




Thoughts on the problem: At first the infinite sum looks very similar to the power series expansion of the exponential function. However, instead of $\cos(\theta)^n$, we have $\cos(n\theta)$. I tried to use trig identities for $\cos(n\theta)$, but nothing seemed to jump out at first.



I know that $\cos(n\theta)$ is the $n$th Chebyshev polynomial evaluated at $\cos(\theta)$, and I was hoping that would allow us to simplify the expression somewhat. However, I do not know of any identities relating sums of these polynomials, so this did not get me very far.



Can anyone share a useful way to use the Chebyshev polynomials or give a reason why we should not use them? If not, what would be another good starting point? Thanks.


Answer



Hint



$$\cos(n\theta)+i\sin(n\theta)=e^{in\theta}$$
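Following the hint through: summing $e^{in\theta}/n!$ gives $e^{e^{i\theta}}$, whose real part is $e^{\cos\theta}\cos(\sin\theta)$. A numerical sketch checking this closed form at an arbitrary angle:

```python
import math

t = 0.9  # an arbitrary test angle
partial = sum(math.cos(n * t) / math.factorial(n) for n in range(40))
closed = math.exp(math.cos(t)) * math.cos(math.sin(t))
print(partial, closed)  # the two values agree to machine precision
```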



integration - Evaluating $int_0^infty sin x^2, dx$ with real methods?



I have seen the Fresnel integral



$$\int_0^\infty \sin x^2\, dx = \sqrt{\frac{\pi}{8}}$$



evaluated by contour integration and other complex analysis methods, and I have found these methods to be the standard way to evaluate this integral. I was wondering, however, does anyone know a real analysis method to evaluate this integral?



Answer



Let $u=x^2$, then
$$
\int_0^\infty \sin(u) \frac{\mathrm{d} u}{2 \sqrt{u}}
$$
The real analysis way of evaluating this integral is to consider a parametric family:
$$\begin{eqnarray}
I(\epsilon) &=& \int_0^\infty \frac{\sin(u)}{2 \sqrt{u}} \mathrm{e}^{-\epsilon u} \mathrm{d} u = \frac{1}{2} \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}\int_0^\infty u^{2n+\frac{1}{2}} \mathrm{e}^{-\epsilon u} \mathrm{d} u \\ &=& \frac{1}{2} \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} \Gamma\left(2n+\frac{3}{2}\right) \epsilon^{-\frac{3}{2}-2n} \\
&=& \frac{1}{2 \epsilon^{3/2}} \sum_{n=0}^\infty \left(-\frac{1}{\epsilon^2}\right)^n\frac{\Gamma\left(2n+\frac{3}{2}\right)}{\Gamma\left(2n+2\right)} \\
&\stackrel{\Gamma-\text{duplication}}{=}&\frac{1}{2 \epsilon^{3/2}} \sum_{n=0}^\infty \left(-\frac{1}{\epsilon^2}\right)^n\frac{\Gamma\left(n+\frac{3}{4}\right)\Gamma\left(n+\frac{5}{4}\right)}{\sqrt{2} n! \Gamma\left(n+\frac{3}{2}\right)} \\

&=& \frac{1}{(2 \epsilon)^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)\Gamma\left(\frac{5}{4}\right)}{\Gamma\left(\frac{3}{2}\right)} {}_2F_1\left(\frac{3}{4}, \frac{5}{4}; \frac{3}{2}; -\frac{1}{\epsilon^2}\right) \\
&\stackrel{\text{Euler integral}}{=}& \frac{1}{(2 \epsilon)^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)\Gamma\left(\frac{5}{4}\right)}{\Gamma\left(\frac{3}{2}\right)} \frac{1}{\operatorname{B}\left(\frac{5}{4}, \frac{3}{2}-\frac{5}{4}\right)} \int_0^1 x^{\frac{5}{4}-1} (1-x)^{\frac{3}{2}-\frac{5}{4} -1} \left(1+\frac{x}{\epsilon^2}\right)^{-3/4} \mathrm{d} x \\
&=& \frac{1}{2^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)\Gamma\left(\frac{5}{4}\right)}{\Gamma\left(\frac{3}{2}\right)} \frac{\Gamma\left(\frac{3}{2}\right)}{\Gamma\left(\frac{5}{4}\right) \Gamma\left(\frac{1}{4}\right)} \int_0^1 x^{\frac{5}{4}-1} (1-x)^{\frac{1}{4} -1} \left(\epsilon^2+x\right)^{-3/4} \mathrm{d} x
\end{eqnarray}
$$
Now we are ready to compute $\lim_{\epsilon \to 0} I(\epsilon)$:
$$\begin{eqnarray}
\lim_{\epsilon \to 0} I(\epsilon) &=& \frac{1}{2^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)} \int_0^1 x^{\frac{1}{2}-1} \left(1-x\right)^{\frac{1}{4}-1} \mathrm{d} x = \frac{1}{2^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)} \frac{\Gamma\left(\frac{1}{2}\right) \Gamma\left(\frac{1}{4}\right)}{\Gamma\left(\frac{3}{4}\right)} \\ &=& \frac{1}{2^{3/2}} \Gamma\left(\frac{1}{2}\right) = \frac{1}{2} \sqrt{\frac{\pi}{2}}
\end{eqnarray}
$$
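For a numerical confirmation one can lean on SciPy's Fresnel integrals, assuming SciPy is available (`fresnel(z)` returns $(S(z), C(z))$ with $S(z)=\int_0^z \sin(\pi t^2/2)\,dt$, so a rescaling recovers $\int_0^Z \sin x^2\,dx$):

```python
import math
from scipy.special import fresnel

# substituting x = t*sqrt(pi/2): integral of sin(x^2) over [0, Z]
# equals sqrt(pi/2) * S(Z * sqrt(2/pi))
for Z in [10, 100, 1000]:
    S, C = fresnel(Z * math.sqrt(2 / math.pi))
    print(Z, math.sqrt(math.pi / 2) * S, math.sqrt(math.pi / 8))
```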



number theory - The smallest positive integer $n$ satisfying a given condition

Given any positive integer $g$, what is the smallest positive integer $n$ such that
$$\left\lceil \dfrac{(n-3)(n-4)}{12}\right\rceil>g\,?$$
Here $\lceil x\rceil$ denotes the ceiling function of $x$.

How to convert a series to an equation?











I don't know the technical language for what I'm asking, so the title might be a little misleading, but hopefully I can convey my purpose to you just as well without.



Essentially I'm thinking of this: the series $4^n + 4^{n-1} + \cdots + 4^{n-n}$.



I suppose this is the summation of the series $4^n$ from $n$ to 0.



But is there any way to express this as a pure equation, not as a summation of a series?




If so, how do you figure out how to convert it?


Answer



In general, for $x\neq 1$ it is true that
$$\sum_{k=0}^nx^k=1+x+\cdots+x^n=\frac{x^{n+1}-1}{x-1}.$$
So, in your case in particular, we have that
$$\sum_{k=0}^n4^{n-k}=4^n+\cdots+4+1=1+4+\cdots+4^n=\sum_{k=0}^n4^k=\frac{4^{n+1}-1}{3}.$$
Alternatively, one could pull out a factor of $4^n$ from all terms, and compute
$$\sum_{k=0}^n4^{n-k}=4^n\sum_{k=0}^n(\tfrac{1}{4})^k=4^n\cdot\frac{(\frac{1}{4})^{n+1}-1}{(\frac{1}{4})-1}=4^n\cdot\frac{\frac{4^{n+1}-1}{4^{n+1}}}{\frac{3}{4}}=4^{n+1}\cdot\frac{\frac{4^{n+1}-1}{4^{n+1}}}{3}=\frac{4^{n+1}-1}{3}.$$
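A quick sanity check of the closed form against the direct sum:

```python
# (4^(n+1) - 1) / 3 should match the direct sum 1 + 4 + ... + 4^n
for n in range(6):
    direct = sum(4**k for k in range(n + 1))
    print(n, direct, (4**(n + 1) - 1) // 3)
```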


Limit involving $arctan$ without L'Hopital rule and series expansions



Find $$\lim_{x\to 0^{+}}\frac{p\arctan(\sqrt{x}/p)-q\arctan(\sqrt{x}/q)}{x\sqrt{x}}$$ without L'Hopital rule and series expansions.



Could someone help me , I did not understand how to find it, thanks.


Answer




Recall $\arctan y = \int_0^y 1/(1+t^2)\, dt.$ Verify that



$$1-t^2 \le \frac{1}{1+t^2}\le 1 -t^2 + t^4$$



for all $t.$ Thus for $y>0,$



$$y-y^3/3 \le \arctan y \le y - y^3/3 + y^5/5.$$



It follows that our expression is bounded below by




$$\frac{p(\sqrt x/p - x^{3/2}/(3p^3)) - q(\sqrt x/q - x^{3/2}/(3q^3)+x^{5/2}/(5q^5))}{x^{3/2}}.$$



Simplify to see this $\to 1/(3q^2) - 1/(3p^2).$ There is a similar estimate from above, giving the same limit. By the squeeze theorem, the limit is $1/(3q^2) - 1/(3p^2).$
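A numerical spot-check, with sample values $p=2$, $q=3$ chosen purely for illustration:

```python
import math

p, q = 2.0, 3.0
target = 1 / (3 * q**2) - 1 / (3 * p**2)
for x in [1e-2, 1e-4, 1e-6]:
    val = (p * math.atan(math.sqrt(x) / p)
           - q * math.atan(math.sqrt(x) / q)) / (x * math.sqrt(x))
    print(x, val, target)  # val approaches 1/(3q^2) - 1/(3p^2)
```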


Thursday 19 January 2017

calculus - Convergence of $sum_{n=0}^infty n^{1/n}-1$ and $sum_{n=0}^infty (1/n!)^{1/n}$

$$\sum_{n=1}^\infty \left(n^{1/n}-1\right)$$
$$\sum_{n=1}^\infty \left(\frac{1}{n!}\right)^{1/n}$$



Hi. I am working on calculus now. While studying the convergence-test part, I ran into these problems... Wolfram Alpha says they both diverge by the comparison test, but I cannot think of the series to compare with... I tried $\sum 1/n$ and $\sum 1/n^2$ but failed. Can you give me any clue? I'd really appreciate your help.
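For what it's worth, the comparison with $\sum \frac1n$ does work: $n!\le n^n$ gives $(1/n!)^{1/n}\ge 1/n$, and $n^{1/n}-1=e^{(\ln n)/n}-1\ge (\ln n)/n\ge 1/n$ for $n\ge 3$, so both series diverge. A Python sketch comparing the summands with $1/n$:

```python
import math

# both summands stay at least as large as 1/n
for n in [10, 100, 1000]:
    first = n**(1 / n) - 1                      # n^(1/n) - 1
    second = math.exp(-math.lgamma(n + 1) / n)  # (1/n!)^(1/n) via log-gamma
    print(n, first, second, 1 / n)
```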

Are geometric proofs less reliable than others?

When I submit homework with a proof that uses a graph, a ball, a shape, etc., most of the time the professors are not happy with it. They respond with a statement like:




"The proof you made seems very true but why don't you just make a usual proof without drawing anything?"



Of course this is something I can do, but I don't like proving something without any visualization.



So, is it because geometric proofs are more likely to be misleading?



Edit: For example: An open ball $B(x,\epsilon)$ is open.

elementary number theory - Solving a Linear Congruence



I've been trying to solve the following linear congruence with not much success:
$$19\equiv 21x\pmod{26}$$




If anyone could point me to the solution I'd be grateful, thanks in advance


Answer



Hint: $26 = 2 \cdot 13$ and the Chinese remainder theorem. Modulo $2$ we have to solve $1 \equiv x \pmod 2$, that is, $x = 2k + 1$ for some $k$; now solve $19 \equiv 42k + 21 \pmod{13}$.
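A one-line brute-force check in Python confirms the unique residue:

```python
# all x in 0..25 with 21*x congruent to 19 (mod 26)
print([x for x in range(26) if (21 * x) % 26 == 19])  # prints [17]
```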


real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...