Monday 31 August 2015

calculus - Compute $\sum_{n=1}^\infty\frac{H_n^3}{n^4}-3\sum_{n=1}^\infty\frac{H_nH_n^{(2)}}{n^4}$

How to prove, without calculating each sum separately, that




$$\sum_{n=1}^\infty\frac{H_n^3}{n^4}-3\sum_{n=1}^\infty\frac{H_nH_n^{(2)}}{n^4}=24\zeta(7)-4\zeta(2)\zeta(5)-15\zeta(3)\zeta(4)\ ?$$
where $H_n^{(r)}=\sum_{k=1}^n\frac{1}{k^r}$ is the generalized harmonic number and $\zeta$ is Riemann zeta function.








This identity is already proved by Cornel in his book Almost Impossible Integrals, Sums and Series, page 299, where he used only pure series manipulations.



The question here is: can we prove it in a different way?







If you are curious about the result of each sum, you can find them in the same book, pages 301-302.



$$\sum_{n=1}^\infty\frac{H_n^3}{n^4}=\frac{231}{16}\zeta(7)+2\zeta(2)\zeta(5)-\frac{51}{4}\zeta(3)\zeta(4)$$



$$\sum_{n=1}^\infty\frac{H_nH_n^{(2)}}{n^4}=-\frac{51}{16}\zeta(7)+2\zeta(2)\zeta(5)+\frac{3}{4}\zeta(3)\zeta(4)$$
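As a sanity check (a sketch of mine, not from the book), one can compare partial sums of the left-hand side against the claimed closed form, computing the zeta values by direct summation with a first-order tail correction:

```python
import math

def zeta(s, N=200_000):
    # Partial sum plus the leading tail correction, integral from N to infinity of x^(-s).
    return sum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1)

def lhs(N=2000):
    # Partial sums of sum H_n^3/n^4 - 3 * sum H_n H_n^(2) / n^4.
    H = H2 = total = 0.0
    for n in range(1, N + 1):
        H += 1 / n          # harmonic number H_n
        H2 += 1 / n ** 2    # generalized harmonic number H_n^(2)
        total += (H ** 3 - 3 * H * H2) / n ** 4
    return total

rhs = 24 * zeta(7) - 4 * zeta(2) * zeta(5) - 15 * zeta(3) * zeta(4)
```

With 2000 terms the two sides agree to within about $10^{-7}$, since the summand decays like $\ln^3 n/n^4$.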

sequences and series - Limit of a non-monotonic recurrence relation



I want to show that the limit of the recurrence $a_{n+1}=1/(1+a_n)$ with $a_1= 0.5$ exists. I can find the limit, and I can see that the sequence is oscillating, but proving convergence is difficult for me. In general, is there a method to prove the convergence of such non-monotonic sequences, without bringing in matrices and eigenvectors, so that I can apply it to other converging oscillating sequences?



Answer



One strategy to solve a recurrence relation is the "Guess and Verify" method.




Guess the answer, and then verify it is correct by whatever means.




For this problem, assume a limit $\lambda$ exists. The recurrence
$a_{n+1} = \frac{1}{1+a_n}$ tells us $\lambda$ "should" satisfy




$$\lambda = \frac{1}{1+\lambda} \iff \lambda^2 + \lambda - 1 = 0 \implies \lambda = \phi^{-1} \text{ or } -\phi$$
where $\phi = \frac{1+\sqrt{5}}{2}$ is the golden ratio. It is clear that, starting from any positive $a_1$, all remaining $a_n$ will be positive. So the desired limit, if it exists, cannot be $-\phi$. This makes $\lambda = \phi^{-1}$ our candidate for the limit.



To verify that $\phi^{-1}$ is indeed the limit, one common approach is to compare the absolute difference between it and successive terms of the sequence and see what one can get. For any $n \ge 1$, we have



$$\left|a_{n+1} - \phi^{-1}\right| =
\left|\frac{1}{1+a_n} - \frac{1}{1+\phi^{-1}}\right|
= \left|\frac{a_n - \phi^{-1}}{(1+a_n)(1+\phi^{-1})}\right|
\le \left|\frac{a_n - \phi^{-1}}{1+\phi^{-1}}\right|
= \phi^{-1}\left|a_n - \phi^{-1}\right|
$$
Applying this inequality repeatedly, we find for any $n > 1$,



$$|a_n - \phi^{-1}| \le \phi^{-1}|a_{n-1}-\phi^{-1}|
\le \phi^{-2}|a_{n-2}-\phi^{-1}| \le \cdots \le \phi^{1-n}|a_1 - \phi^{-1}|$$



Since $|\phi^{-1}| < 1$, $\lim_{n\to\infty} \phi^{1-n} = 0$, which verifies our guess:
$$\lim_{n\to\infty} a_n = \phi^{-1} = \frac{\sqrt{5}-1}{2}$$



Update




If one cannot figure out the limit, there are other approaches one can try.



For this particular problem, one can use the contraction mapping theorem.




Given any function $f : [a,b] \to [a,b]$, if for some $K < 1$ we have
$|f(x)-f(y)| \le K|x-y|$ for all $x, y \in [a,b]$, then





  • $f(x)$ has a unique fixed point $\lambda$ over $[a,b]$.

  • Starting from any $c \in [a,b]$, the sequence $(c_n)_{n\ge 1}$ obtained by repeated iteration of $f(x)$ $$c_1 = c\quad\text{ and }\quad c_n = f(c_{n-1})\text{ for } n > 1$$
    converges to this $\lambda$.




Take $f(x) = \frac{1}{1+x}$; it is easy to see that $f\left(\left[\frac12,1\right]\right) = \left[\frac12,\frac23\right] \subset \left[\frac12,1\right]$. Notice for any $x,y \in [\frac12, 1]$, we have
$$|f(x) - f(y)| = \left|\frac{1}{1+x}-\frac{1}{1+y}\right|
= \left|\frac{x-y}{(1+x)(1+y)}\right| \le \frac49|x-y|$$
Thus $f(x)$ is a contraction mapping over $[\frac12,1]$.

Starting from any $c \in [\frac12,1]$, the sequence obtained by repeated iteration of $f(x)$ converges to the unique fixed point of $f(x)$. This includes the sequence $(a_n)$ at hand, which corresponds to $c = \frac12$.
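As a quick numerical illustration (my own sketch, not part of the original answer), iterating $f(x)=\frac{1}{1+x}$ from $a_1=\frac12$ converges to $\phi^{-1}=\frac{\sqrt5-1}{2}$ at the geometric rate $\phi^{-1}\approx 0.618$:

```python
import math

def iterate(c, steps=100):
    """Iterate a_{n+1} = 1/(1 + a_n), starting from a_1 = c."""
    a = c
    for _ in range(steps):
        a = 1 / (1 + a)
    return a

phi_inv = (math.sqrt(5) - 1) / 2  # the fixed point 1/phi
approx = iterate(0.5)
```

Any starting point in $[\frac12,1]$ (in fact, any positive start) lands on the same fixed point, matching the contraction argument.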


calculus - What exactly is the relationship between integrals and derivatives? + Is my thought process correct?

For a while I've been struggling with why the antiderivative should give the area under a curve, and a little more generally, how exactly integration and differentiation are inverses.



So recently, in learning more about the applications of integrals (specifically to do with volumes), the notion that what we're doing with the integral to find volume is adding up infinitely many 'slices' of the solid was explained to me. And seeing as taking a derivative is looking at a curve's slope point-by-point (broken up), my question is:




Is the intuition of going from 2D to 3D and vice versa a good way to think about the relationship between integrals and derivatives? Especially since the summing-up bit about integrals is what makes the solid, well, solid. It also seems to reflect the loss of information in taking the derivative, in the way that when you only look at one dimension of a two-dimensional shape, you don't get the whole idea.



Any corrections or thoughts would be helpful. Thanks!



EDIT
It seems like the dimension bit isn't so clear, so I'll try to give more info. See, when my class was learning about volumes, the way my teacher described the integral was similar to this: since it's an infinite sum, it's taking all the slivers of area and adding them up to get a full solid. Thinking about integrals in that light, I focused on how we use the 2D area of a slice and integrate it to get the 3D volume of a solid. This put the idea in my brain that the process of integrating is similar to adding another dimension to a function. (In this case, integrating gave us a z-axis, kind of.)



In that same thought process, differentiation can easily be thought of as the reverse: taking a 2D curve and homing in on one dimension, just a line.
Also, I'll put a little more into my note on losing info while taking a derivative. As I said above, differentiation is like losing a dimension, or an axis in a way. So when we try to bring back another dimension through antidifferentiation, it's downright impossible to know where our line was in reference to the other axis without more information, since we dropped the axis and all the information it contained. That's why indefinite integrals require the constant added on: to make up for the lost y-axis information.

I hope I cleared that up more, and thanks for any responses. :)

integration - Integral $\int \frac{\sqrt{\sin x}}{\sqrt{\sin x} + \sqrt{\cos x}}\, dx$



We have to evaluate the following integral:



$$\int \frac{\sqrt{\sin x}}{\sqrt{\sin x} + \sqrt{\cos x}} dx$$




I tried this:



I multiplied both the numerator and denominator by $\sec x$
and substituted $\tan x = t$.



But after that I got stuck.



The book where this is taken from gives the following as the answer: $$\ln(1+t)-\frac14\ln(1+t^4)+\frac1{2\sqrt2}\ln\frac{t^2-\sqrt2t+1}{t^2+\sqrt2t+1}-\frac12\tan^{-1}t^2+c$$ where $t=\sqrt{\cot x}$


Answer



$\displaystyle \mathcal{I} = \int\frac{\sqrt{\sin x}}{\sqrt{\sin x}+\sqrt{\cos x}}dx = \int \frac{\sqrt{\tan x}}{1+\sqrt{\tan x}}dx$




substitute $\tan x= t^2$, so that $\sec^2 x \, dx = 2t\,dt$, i.e. $\displaystyle dx = \frac{2t}{1+t^4}dt$




$\displaystyle \mathcal{I}= \int\frac{2t^2}{(1+t)(1+t^4)}dt$




Since $2t^2-(1+t^4)=-(t^2-1)^2=-(t-1)^2(t+1)^2$, dividing by $1+t$ gives the partial fraction decomposition




$\displaystyle \frac{2t^2}{(1+t)(1+t^4)} = \frac{1}{1+t}+\frac{t+(t^2-1)-t^3}{1+t^4}$




$\displaystyle \mathcal{I} = \int\frac{dt}{1+t}+\frac{1}{2}\int\frac{2t}{1+t^4}dt+\int\frac{t^2-1}{1+t^4}dt-\frac{1}{4}\int\frac{4t^3}{1+t^4}dt$




$\displaystyle = \ln|1+t|+\frac{1}{2}\tan^{-1}(t^2)-\frac{1}{4}\ln(1+t^4)+\int\frac{t^2-1}{1+t^4}dt+\mathcal{C}$




all integrals are easy except $\displaystyle \mathcal{J} = \int\frac{t^2-1}{1+t^4}dt = \int\frac{1-t^{-2}}{\left(t+t^{-1}\right)^2-2}dt = \int\frac{\left(t+t^{-1}\right)'}{\left(t+t^{-1}\right)^2-2}dt$
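One can finish $\mathcal{J}$ with the substitution $u=t+t^{-1}$, $du=\left(1-t^{-2}\right)dt$ (this completion is mine, not part of the original answer); multiplying numerator and denominator by $t$ at the end recovers the form quoted from the book:

```latex
\mathcal{J}=\int\frac{du}{u^2-2}
 =\frac{1}{2\sqrt{2}}\ln\left|\frac{u-\sqrt{2}}{u+\sqrt{2}}\right|+\mathcal{C}
 =\frac{1}{2\sqrt{2}}\ln\left|\frac{t^2-\sqrt{2}\,t+1}{t^2+\sqrt{2}\,t+1}\right|+\mathcal{C}
```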


logic - Who first proved that the second-order theory of real numbers is categorical?

The second-order theory of real numbers is obtained by taking the axioms of ordered fields and adding a (Dedekind) completeness axiom, which states that every set which has an upper bound has a least upper bound. This theory is categorical, meaning that it has a unique model up to isomorphism.



My question is, who first managed to prove this fact? Let me give a proof here, so we can be clear as to what we're talking about. Let us first define the natural numbers in our theory. A hereditary set is a set which contains x+1 whenever it contains x, and a natural number is a real number which is an element of every hereditary set containing 0. Let me show that these natural numbers satisfy the axioms of second-order Peano arithmetic.





  1. To prove that 0 is a natural number, we must prove that 0 belongs to every hereditary set containing 0. That's trivial because 0 belongs to every set containing 0.


  2. We must prove that whenever x is a natural number, so is x+1. Consider any hereditary set X containing 0. Then by definition of natural number, x must belong to X, and thus by definition of hereditary, x+1 must belong to X. Since X was arbitrary, x+1 belongs to every hereditary set containing 0, and thus x+1 is a natural number.


  3. The fact that x+1=y+1 implies x=y follows directly from the field axioms, which are part of the second-order theory of real numbers.


  4. We must prove that there exists no natural number x such that x+1=0, which in our case is equivalent to the statement that -1 is not a natural number. Well, just consider the set of all natural numbers greater than or equal to 0 (which is of course all of them, but we don't know that yet). Then this is a hereditary set containing 0, and yet -1 doesn't belong to it, so -1 is not a natural number.


  5. The induction axiom says that if X contains 0 and contains x+1 whenever it contains x, then X contains all the natural numbers. But that is just saying that if X is a hereditary set containing 0, all natural numbers belong to it, which is trivial given our definition of natural number.


  6. The defining axioms of addition and multiplication in Peano arithmetic follow easily from the field axioms.





Since the axioms of second-order Peano arithmetic are categorical, it follows that the sets of natural numbers in any two models of the second-order theory of real numbers must be isomorphic, and thus the sets of rational numbers in the two models must also be isomorphic. (The isomorphism can be extended straightforwardly to the rational numbers.) Let's say we have two models R and S, with sets of rational numbers Q_R and Q_S, and with the isomorphism g from Q_R to Q_S. Then let us define the map f from R to S as follows: f(x) is the least upper bound of the set g({q in Q_R: q < x}). In other words, f(x) is the least upper bound of the Dedekind cut in Q_S which corresponds to the Dedekind cut of x in Q_R. We can easily check that f is an isomorphism, so any two models of the second-order theory of real numbers are isomorphic.



Who was the first person to write down this proof or something like it?



Any help would be greatly appreciated.



Thank You in Advance.

Sunday 30 August 2015

real analysis - Is $\sum\limits_{n = 1}^\infty (-1)^n\frac{H_n}{n}=\frac{6\ln^2(2)-\pi^2}{12}$ a valid identity?

A while ago I asked a question about an identity that I found while playing with series involving harmonic numbers. However, since the methods I used were not the focus of my question, I never explained how I got this result. So here it goes...




NOTE: It may be of help to work backwards through this problem instead. This is just how I did it.



We start by defining $H_n$ as the $n^{th}$ harmonic number.
We also take note of the identities:



$$ \sum_{n=1}^\infty \frac{(-1)^n}{n} = -\ln(2)$$
$$ \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$$



Now we note that:




$$ \tag{1}H_n = \sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+n}\right) = \sum_{k=1}^\infty \frac{n}{k(k+n)}$$



since $\sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+n}\right)$ is a telescoping series.



Dividing by $n$ we get:
$$\tag{2} \frac{H_n}{n} = \sum_{k=1}^\infty \frac{1}{k(k+n)}$$



Since $ \sum_{n=1}^\infty\frac{H_n}{n}$ is obviously divergent, I took a look at the alternating series:



$$ \tag{3}\sum_{n=1}^\infty(-1)^n\frac{H_n}{n} = \sum_{n=1}^\infty(-1)^n\left(\sum_{k=1}^\infty \frac{1}{k(k+n)}\right)$$




We must now multiply this series by 2 and add $\sum_{n=1}^\infty\frac{1}{n^2} $ which is necessary for the next step.



So we have:



$$\begin{align}\tag{4}\sum_{n=1}^\infty\frac{1}{n^2} + 2 \sum_{n=1}^\infty(-1)^n\left(\sum_{k=1}^\infty \frac{1}{k(k+n)}\right) & = \left(\frac{1}{1}\cdot\frac{1}{1} + \frac{1}{2}\cdot\frac{1}{2} + \frac{1}{3}\cdot\frac{1}{3}+\cdots\right) \\ & -2 \left(\frac{1}{1}\cdot\frac{1}{2} + \frac{1}{2}\cdot\frac{1}{3} + \frac{1}{3}\cdot\frac{1}{4}+\cdots\right)
\\ & +2 \left(\frac{1}{1}\cdot\frac{1}{3} + \frac{1}{2}\cdot\frac{1}{4} + \frac{1}{3}\cdot\frac{1}{5}+\cdots\right)
\\ & -2 \left(\frac{1}{1}\cdot\frac{1}{4} + \frac{1}{2}\cdot\frac{1}{5} + \frac{1}{3}\cdot\frac{1}{6}+\cdots\right)
\\ & + \cdots \end{align}$$




With a very careful (and possibly illegal) rearrangement of this expanded sum, we can finally obtain: $\\$
(NOTE: It may be easier to work backwards through this step)



$$ \begin{align} \tag{5}\sum_{n=1}^\infty\frac{1}{n^2} + 2 \sum_{n=1}^\infty(-1)^n\left(\sum_{k=1}^\infty \frac{1}{k(k+n)}\right) &= \left(-\frac{1}{1} + \frac{1}{2} - \frac{1}{3} + \cdots\right)\left(-\frac{1}{1} + \frac{1}{2} - \frac{1}{3} + \cdots\right)
\\ & = \ln^2(2)\end{align}$$



Thus, we have:



$$\tag{6}\sum_{n=1}^\infty(-1)^n\frac{H_n}{n} = \frac{6\ln^2(2) - \pi^2}{12} = -0.5822...$$




Here are 3 graphs from computing the actual value of the sum. The first graph is of the difference between the actual sum and $\frac{6\ln^2(2)-\pi^2}{12}$ for $k$ up to 14000. As you can see, it seems to converge to 0. You can ignore the 2nd and 3rd graphs.
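The graphs are not reproduced here, but the comparison is easy to redo numerically (my own sketch): the series converges slowly, so averaging two consecutive partial sums is used to damp the alternating oscillation.

```python
import math

def partial_sums(N):
    """Partial sums of sum_{n>=1} (-1)^n H_n / n."""
    H = s = 0.0
    out = []
    for n in range(1, N + 1):
        H += 1 / n                 # harmonic number H_n
        s += (-1) ** n * H / n
        out.append(s)
    return out

target = (6 * math.log(2) ** 2 - math.pi ** 2) / 12
ps = partial_sums(20_000)
# Averaging two consecutive partial sums accelerates the alternating series.
approx = 0.5 * (ps[-1] + ps[-2])
```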



QUESTIONS:



Is the method used from step 4 to 5 (or vice versa) legal even though it involves conditionally convergent series? If not, are there any restrictions and/or rearrangements I can use to make it legal?



Are there any known generalizations of $\sum_{n=1}^\infty(-1)^n\frac{H_n}{n^q}$ that extend this result?



Recommended readings that cover this topic would be greatly appreciated as well.




Thanks,
Dom

calculus - Finding value of given integral



Find the value of the following integral. $$\int_{1/e}^{\tan x} \cfrac{tdt}{1+t^2} + \int_{1/e}^{\cot x} \cfrac{dt}{1+t^2}$$



As a start, I've done this -



$$\int_{1/e}^{\tan x} \cfrac{tdt}{1+t^2} - \int_{\cot x}^{1/e} \cfrac{dt}{1+t^2} \\
=\int_{\cot x}^{\tan x} \cfrac{t-1}{1+t^2} dt $$




Is this valid? If yes, then how to proceed?


Answer



$$\int_{1/e}^{\tan x} \cfrac{tdt}{1+t^2} + \int_{1/e}^{\cot x} \cfrac{dt}{1+t^2}$$



For the first integral, substitute $1+t^2=s\implies t\,dt= \dfrac {ds}2$; the limits $t=1/e$ and $t=\tan x$ become $s=1+e^{-2}$ and $s=\sec^2 x$. Also, we know that $\int\frac 1{1+t^2} dt =\arctan t +c$. We thus get: $$\int_{1+e^{-2}}^{\sec^2 x} \dfrac{ds}{2s} + \arctan (\cot x)-\arctan\left (\dfrac 1e\right )$$ $$=\ln|\sec x|-\dfrac 12\ln\left(1+e^{-2}\right)+\arctan (\cot x)-\arctan\left (\dfrac 1e\right )$$
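Evaluating both integrals directly gives $\ln|\sec x|-\frac12\ln\left(1+e^{-2}\right)+\arctan(\cot x)-\arctan\frac1e$; here is a numerical sanity check of that closed form (my own sketch; the test point and quadrature size are arbitrary):

```python
import math

def trap(f, a, b, n=20_000):
    """Composite trapezoidal rule; crude but ample for these smooth integrands."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

x = 1.1  # arbitrary test point with tan x > 0 and cot x > 0
numeric = (trap(lambda t: t / (1 + t * t), 1 / math.e, math.tan(x))
           + trap(lambda t: 1 / (1 + t * t), 1 / math.e, 1 / math.tan(x)))
closed = (math.log(1 / math.cos(x)) - 0.5 * math.log(1 + math.e ** -2)
          + math.atan(1 / math.tan(x)) - math.atan(1 / math.e))
```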


calculus - Calculate $\lim_{n \to \infty}\binom{2n}{n}$ without using L'Hôpital's rule.



Questions:



(1) Calculate



$$\lim_{n \to \infty}\binom{2n}{n}$$




(2) Calculate



$$\lim_{n \to \infty}\binom{2n}{n} 2^{-n}$$



without using L'Hôpital's rule.



Attempted answers:



(1)




Here I start by using the definition of a binomial:



$$\lim_{n \to \infty}\binom{2n}{n} = \lim_{n \to \infty} \frac{(2n)!}{n!(2n-n)!} = \lim_{n \to \infty} \frac{(2n)!}{n!n!} = \lim_{n \to \infty} \frac{2n \cdot (2n-1) \cdot (2n-2) \cdot ... \cdot (n + 1) \cdot n!}{n! \cdot n!}$$



Canceling one n! gives:



$$\lim_{n \to \infty} \frac{2n \cdot (2n-1) \cdot (2n-2) \cdot ... \cdot (n + 1)}{n!}$$



Here I argue that, surely, the numerator grows faster than the denominator and so the result must be that the limit grows towards $\infty$. Is this sufficiently mathematically rigorous?




(2)



My first attempt looked at the binomial theorem in an effort to get the expression to look something like the general form:



$$(a+b)^{n} = \sum_{k=0}^{n} \binom{n}{k} a^{k} b^{n-k}$$



but it does not quite seem to match, since $k$ would have to be $0$ so that the factor $a^{k}$ becomes $1$ and does not interfere, but then $n-k$ would be $n$ instead.



My second attempt was to do what I did in (1), but then multiply it by $2^{-n}$:




$$\lim_{n \to \infty} \frac{2n \cdot (2n-1) \cdot (2n-2) \cdot ... \cdot (n + 1)}{n!} \cdot \frac{1}{2^{n}}$$



Now we can cancel one of the $2$s in $2^{n}$ against every second factor in the numerator, since those factors (i.e. $2n$, $2n-2$) are divisible by $2$. But there are $n$ factors in the numerator, of which only every second one is even, so at most $\frac{n}{2}$ of the $2$s can be canceled.



Issues:



(a) is my answer to (1) sufficiently mathematically rigorous? Is it really "obvious" that the numerator grows faster or are there additional arguments that should be provided for the argument to be convincing?



(b) what are some more productive approaches to question (2)?


Answer




For part (1), letting $a_n:=\binom{2n}n,$ you should be able to show that $$\frac{a_{n+1}}{a_n}=\frac{(2n+2)(2n+1)}{(n+1)^2}=\frac{2(2n+1)}{n+1}=3+\frac{n-1}{n+1}\ge3.$$ Put another way, $a_{n+1}\ge3a_n$ for all $n,$ which (since $a_1=2$) should allow you to conclude readily that $a_n\ge 2\cdot3^{n-1}$ for all $n$, at which point you're pretty much done.



As for (2), I would delay cancellation, working from $$\frac{(2n)!}{2^nn!n!}$$ instead. You can transform the numerator into the product of the first $n$ odd positive integers, and the denominator into $n!,$ then proceed from there in a similar fashion to part (1).



Added: More straightforwardly, part (1) allows you to conclude that $$\binom{2n}n\cdot2^{-n}\ge\left(\frac32\right)^{n-1}$$ for all $n,$ whence (2) is easy.
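An empirical check of the bounds above (my own sketch, using Python's `math.comb`):

```python
import math

a = [math.comb(2 * n, n) for n in range(1, 26)]  # central binomial coefficients
# a_{n+1} >= 3 a_n, hence a_n >= 2 * 3^(n-1), and binom(2n,n) * 2^(-n) >= (3/2)^(n-1).
ratio_ok = all(a[i + 1] >= 3 * a[i] for i in range(len(a) - 1))
growth_ok = all(a[n - 1] >= 2 * 3 ** (n - 1) for n in range(1, 26))
part2_ok = all(a[n - 1] >= 2 ** n * (3 / 2) ** (n - 1) for n in range(1, 26))
```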


probability - Evaluating $\int_{-\infty}^\infty e^{tx^2} \frac{1}{\sqrt{2\pi} \sigma} \exp \left[ \frac{-x^2}{2\sigma^2} \right] dx$

I have so far $$M(t) = E(e^{tX^2}) = \int_{-\infty}^\infty e^{tx^2} \frac{1}{\sqrt{2\pi} \sigma} \exp \left[ \frac{-x^2}{2\sigma^2} \right ] \ dx = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} \exp \left [ \frac{-x^2(1-2\sigma^2 t)}{2\sigma^2} \right ] \ dx$$




I've tried numerous substitutions and playing around but with no luck. There is a hint to use the fact that the integral of any density of a normal dist. is $1$.



Any help please



edit: answer should be $$ M(t) = (1-2\sigma^2 t)^{-1/2}$$
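Following the hint (this completion is mine; it assumes $t<\frac{1}{2\sigma^2}$ so the exponent stays negative), set $\tilde\sigma^2=\frac{\sigma^2}{1-2\sigma^2 t}$ and factor the integrand into a normal density, which integrates to $1$:

```latex
M(t) = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\,\sigma}
  \exp\left[\frac{-x^2(1-2\sigma^2 t)}{2\sigma^2}\right] dx
  = \frac{1}{\sqrt{1-2\sigma^2 t}}
    \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\,\tilde\sigma}
    e^{-x^2/(2\tilde\sigma^2)}\, dx
  = (1-2\sigma^2 t)^{-1/2},
\qquad \tilde\sigma = \frac{\sigma}{\sqrt{1-2\sigma^2 t}}.
```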

Having trouble understanding Series and Sequences

So all I could get from my teacher's thick accent in class today is that:



A sequence is finite and converges when bounded by x?



and



A series is infinite and diverges because no matter how small the function gets, it will never reach zero?



I'm sorry. I'm really having a hard time understanding the concept. For example, this picture:

$$\begin{align}
s_1 &= a_1\\
s_2 &= a_1+a_2\\
s_3 &= a_1+a_2+a_3\\
s_4 &= a_1+a_2+a_3+a_4\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\vdots\\
s_n &=a_1+a_2+a_3+a_4+\cdots+a_n=\sum_{i=1}^na_i
\end{align}$$



I don't understand the last part involving the sigma. How does it relate to the statements before it?

complex numbers - Trigonometrical approach for negative discriminant of second order polynomial.



This is in continuation to my earlier post, here



For the trigonometric approach to solving a second-order polynomial with complex discriminant (derived either directly, or resolved to be of that form), one needs to consider the discriminant. $D$ can be positive either directly, or, if its angle is in the 3rd quadrant, it can be converted, as in the second answer to my linked (earlier) post, to lie in the first.
But for the cases where the discriminant is of mixed sign (say, $-8 +6i$ or $8-6i$), with angle in either the 2nd or 4th quadrant, it cannot be manipulated to represent an angle in the first quadrant.




In the 2nd quadrant, $\sin$ is positive while $\cos$ is negative, whereas in the 3rd quadrant both $\sin$ and $\cos$ are negative.



But beyond that I am hesitant to proceed.


Answer



Suppose we want to solve $$z^2=-8+6i,$$



Suppose we have found $z_1$ with $z_1^2=-8+6i$; then we also have $(-z_1)^2=-8+6i$.



Write $$-8+6i=r\exp(i\theta)$$




where $r$ is a positive number and $\theta$ is a real number.



$$r=\sqrt{6^2+8^2}=10.$$



That is we have



$$-8+6i=10\exp(i\theta)=10\cos(\theta)+10i\sin(\theta)$$



We have




$$\cos(\theta)=-\frac{8}{10}=-\frac{4}{5}$$
$$\sin(\theta)=\frac{6}{10}=\frac{3}{5}$$



Angle $\theta$ has negative cosine and positive sine, so it is an angle in the second quadrant.



$$2\cos^2\left( \frac{\theta}2\right)-1=-\frac45 \implies \cos^2\left( \frac{\theta}2\right)=\frac1{10} $$



$$2\sin\left( \frac{\theta}2\right) \cos\left(\frac{\theta}2 \right)=\frac3{5}$$



If $\cos\left(\frac{\theta}{2} \right)=\frac1{\sqrt{10}}$, then we have $\sin\left(\frac{\theta}2\right)=\frac{3}{{\sqrt{10}}}$.




If $\cos\left(\frac{\theta}{2} \right)=-\frac1{\sqrt{10}}$, then we have $\sin\left(\frac{\theta}2\right)=-\frac{3}{{\sqrt{10}}}$.



The square roots are



$$\pm\sqrt{10}\left(\frac1{\sqrt{10}}+\frac{3i}{\sqrt{10}}\right)=\pm(1+3i)$$
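A quick check with Python's `cmath` (my addition) confirms both roots:

```python
import cmath

z = complex(-8, 6)
w = cmath.sqrt(z)       # principal square root, with nonnegative real part
square = (1 + 3j) ** 2  # candidate root squared, should give -8 + 6i back
```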


Sum of series $\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots$

What is the sum of the series



$$\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots?$$



I know how to check whether a series is convergent or not. Is there any technique to find the sum of a series like this, where each term follows a pattern?
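One standard technique is to match the terms against a binomial series. Here the $k$-th term is $\frac{4\cdot7\cdots(3k+1)}{10\cdot20\cdots 10k}=\prod_{j=1}^k(3j+1)\big/\big(10^k\,k!\big)$, which lines up with the expansion of $(1-3x)^{-4/3}$ at $x=\frac1{10}$, suggesting the value $\left(\frac{10}{7}\right)^{4/3}-1$. A numerical sketch of mine, treating that closed form as a conjecture to check:

```python
def partial_sum(K):
    """Sum of the first K terms 4/10 + (4*7)/(10*20) + ..."""
    term = 4 / 10
    s = term
    for k in range(2, K + 1):
        term *= (3 * k + 1) / (10 * k)  # next factor: 7/20, 10/30, ...
        s += term
    return s

conjecture = (10 / 7) ** (4 / 3) - 1
approx = partial_sum(200)
```

The term ratio tends to $\frac{3}{10}$, so the partial sums converge geometrically.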

summation - The sum of squares of the first $n$ natural numbers.



My basic question is this: how to find the sum of squares of the first $n$ natural numbers?



My thoughts have led me to an interesting theorem: Faulhaber's formula. It is known that $$1^k+2^k+\ldots+n^k=P_{k+1}(n)$$ is a polynomial in $n$ of degree $k+1$ (!). For my problem: $$1^2+2^2+\ldots+n^2=a+bn+cn^2+dn^3.$$ Further, solving an uncomplicated system of linear equations:
$$\left\{
\begin{aligned}
0=&a\\

1^2=&a+b\cdot1+c\cdot1^2+d\cdot1^3\\
1^2+2^2=&a+b\cdot2+c\cdot2^2+d\cdot2^3\\
1^2+2^2+3^2=&a+b\cdot3+c\cdot3^2+d\cdot3^3\\
\end{aligned}
\right.$$
Thus we get $a=0,\,b=\frac16,\,c=\frac12,\,d=\frac13$, i.e. $$P_3(n)=\frac{n(n+1)(2n+1)}{6}.$$
My further questions:



1) What are some ways to find the sum of the squares of the first n natural numbers?




2) (!) How to prove that $1^k+2^k+\ldots+n^k$ is a polynomial in $n$ of degree $k+1$?


Answer



An integer-valued polynomial is a polynomial with complex coefficients taking values in $\mathbb{Z}$ when all the variables take integer values. For example, $\frac{x^2+3x}{2}$ and $\frac{13x^3+5xy^2}{6}$ are integer-valued polynomials. Clearly, the set of integer-valued polynomials with variables $x_1,x_2,\ldots,x_n$ forms a subring of $\mathbb{Q}\left[x_1,x_2,\ldots,x_n\right]$. A result by Pólya states that the ring of integer-valued polynomials in one variable $x$ is a free abelian group with basis elements $\binom{x}{k}=\frac{x(x-1)(x-2)\cdots(x-k+1)}{k!}$ for $k=0,1,2,\ldots$.



To answer your question, $x^k$ is an integer-valued polynomial. Therefore, $x^k=\sum_{r=0}^k \,a_r\binom{x}{r}$ for some $a_0,a_1,\ldots,a_k\in\mathbb{Z}$ (obviously, $a_k\neq 0$). Now, $\sum_{m=0}^n\,m^k=\sum_{m=0}^n\,\sum_{r=0}^k\,a_r\binom{m}{r}=\sum_{r=0}^k\,a_r\,\sum_{m=0}^n\,\binom{m}{r}$. By the Hockey-Stick Identity (see http://www.artofproblemsolving.com/wiki/index.php/Combinatorial_identity#Hockey-Stick_Identity), $\sum_{m=0}^n\,m^k=\sum_{r=0}^k\,a_r\,\binom{n+1}{r+1}$. Hence, $\sum_{m=0}^n\,m^k$ is a polynomial in $n$ of degree $k+1$, as the coefficient of $n^{k+1}$ is $\frac{a_k}{(k+1)!}\neq 0$. (In fact, $a_k=k!$, so we know that $\sum_{m=0}^n\,m^k=\frac{n^{k+1}}{k+1}+\mathcal{O}\left(n^k\right)$.)
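The degree claim can be illustrated empirically (my own sketch): a sequence agrees with a polynomial of degree $d$ iff its $(d+1)$-th finite difference vanishes, so the $(k+2)$-th difference of $n\mapsto 1^k+2^k+\ldots+n^k$ should be identically zero.

```python
def power_sum(n, k):
    """1^k + 2^k + ... + n^k."""
    return sum(m ** k for m in range(1, n + 1))

def diff(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

k = 5
vals = [power_sum(n, k) for n in range(k + 10)]
d = vals
for _ in range(k + 2):  # (k+2)-th finite difference of a degree-(k+1) polynomial
    d = diff(d)
squares_ok = all(power_sum(n, 2) == n * (n + 1) * (2 * n + 1) // 6
                 for n in range(100))
```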


elementary number theory - If $d$ divides $k$ and $d$ divides $n$, then $d$ divides $(8k - 3n)$

Suppose that $k$, $n$, and $d$ are integers and $d$ is not $0$. Prove: If $d$ divides $k$ and $d$ divides $n$, then $d$ divides $(8k - 3n)$. You may not use the theorem stating the following:



Let $m$, $n$, and $d$ be integers.



(a) If $d\mid m$ and $d\mid n$, then $d\mid (m + n)$.



(b) If $d\mid m$ and $d\mid n$, then $d\mid (m - n)$.




(c) If $d\mid m$, then $d\mid mn$.



I am not sure what the basis step is with this proof. Thank you for any help.

Saturday 29 August 2015

real analysis - The $ l^{infty} $-norm is equal to the limit of the $ l^{p} $-norms.





If we are in a sequence space, then the $ l^{p} $-norm of the sequence $ \mathbf{x} = (x_{i})_{i \in \mathbb{N}} $ is $ \displaystyle \left( \sum_{i=1}^{\infty} |x_{i}|^{p} \right)^{1/p} $.



The $ l^{\infty} $-norm of $ \mathbf{x} $ is $ \displaystyle \sup_{i \in \mathbb{N}} |x_{i}| $.



Prove that the limit of the $ l^{p} $-norms is the $ l^{\infty} $-norm.




I saw an answer for $ L^{p} $-spaces, but I need one for $ l^{p} $-spaces. Besides, I didn’t really understand the $ L^{p} $-answer either.



Thanks for your help!


Answer



Let me state the result properly:




Let $x=(x_n)_{n \in \mathbb{N}} \in \ell^q$ for some $q \geq 1$. Then $$\|x\|_{\infty} = \lim_{p \to \infty} \|x\|_p. \tag{1}$$





Note that $(1)$ may, in general, fail to hold if $x=(x_n)_{n \in \mathbb{N}} \notin \ell^q$ for all $q \geq 1$ (consider for instance $x_n := 1$ for all $n \in \mathbb{N}$).



Proof of the result: Since $$|x_k| \leq \left(\sum_{j=1}^{\infty} |x_j|^p \right)^{\frac{1}{p}}=\|x\|_p$$ for all $k \in \mathbb{N}$, $p \geq 1$, we have $\|x\|_{\infty} \leq \|x\|_p$. Thus, in particular $$\|x\|_{\infty} \leq \liminf_{p \to \infty} \|x\|_p. \tag{2}$$



On the other hand, we know that $$\|x\|_p = \left( \sum_{j=1}^{\infty} |x_j|^{p-q} \cdot |x_j|^q \right)^{\frac{1}{p}} \leq \|x\|_{\infty}^{\frac{p-q}{p}} \cdot \left( \sum_{j=1}^{\infty} |x_j|^q \right)^{\frac{1}{p}} = \|x\|_{\infty}^{1-\frac{q}{p}} \cdot \|x\|_q^{\frac{q}{p}}$$ for all $q < p$, and therefore

$$ \limsup_{p \to \infty} \|x\|_p \leq \limsup_{p \to \infty} \left( \|x\|_{\infty}^{1-\frac{q}{p}} \cdot \|x\|_q^{\frac{q}{p}}\right) = \|x\|_{\infty} \cdot 1. \tag{3}$$



Hence, $$\limsup_{p \to \infty} \|x\|_p \leq \|x\|_{\infty} \leq \liminf_{p \to \infty} \|x\|_p.$$ This shows that $\lim_{p \to \infty} \|x\|_p$ exists and equals $\|x\|_{\infty}$.
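A small numerical illustration of the statement (my own sketch) with the square-summable sequence $x_n=\frac1n$, truncated for computation:

```python
def lp_norm(x, p):
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

x = [1.0 / n for n in range(1, 5000)]
sup_norm = max(abs(v) for v in x)  # the l^infinity norm, here 1
norms = [lp_norm(x, p) for p in (2, 4, 8, 16, 32, 64)]
```

The $p$-norms decrease monotonically toward the sup norm, as the two inequalities in the proof predict.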



trigonometry - What is the integral value of $\frac{\tan 20^\circ+\tan 40^\circ+\tan 80^\circ-\tan 60^\circ}{\sin 40^\circ}$?

I have tried possibly all approaches.



I first expressed $80$ as $60+20$ and $40$ as $60-20$ and then used trig identities. I later used conditional identities, expressing $\tan 20^\circ+\tan40^\circ+\tan120^\circ$ as $\tan 20^\circ \tan40^\circ \tan120^\circ$. But I really can't get to the end of it.




Please help.

Friday 28 August 2015

Sum of the series $\frac{2}{5\cdot10}+\frac{2\cdot6}{5\cdot10\cdot15}+\frac{2\cdot6\cdot10}{5\cdot10\cdot15\cdot20 }+\cdots$

How do I find the sum of the following infinite series:
$$\frac{2}{5\cdot10}+\frac{2\cdot6}{5\cdot10\cdot15}+\frac{2\cdot6\cdot10}{5\cdot10\cdot15\cdot20 }+\cdots$$
I think the sum can be converted to a definite integral and calculated, but I don't know how to proceed from there.
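The terms can be matched to a known generating function (my own sketch; treat the closed form as a conjecture): the $k$-th term is $\frac{2\cdot6\cdots(4k-2)}{5\cdot10\cdots 5(k+1)}=\frac{C_k}{5^{k+1}}$, where $C_k$ is the $k$-th Catalan number, and $\sum_{k\ge0}C_kx^k=\frac{1-\sqrt{1-4x}}{2x}$ at $x=\frac15$ gives the value $\frac{3-\sqrt5}{10}$.

```python
import math

def partial_sum(K):
    """Sum of the first K terms 2/(5*10) + (2*6)/(5*10*15) + ..."""
    term = 2 / 50
    s = term
    for k in range(2, K + 1):
        term *= (4 * k - 2) / (5 * (k + 1))  # next factor: 6/15, 10/20, ...
        s += term
    return s

conjecture = (3 - math.sqrt(5)) / 10
approx = partial_sum(300)
```

Here the term ratio tends to $\frac45$, so convergence is slower than in the companion series above, but still geometric.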

calculus - How to get the same answer for integral $\int\frac{dz}{\sqrt{1+z^2}}$ as in textbook



I have some answer:
$$\int\frac{dz}{\sqrt{1+z^2}}=\ln ( z + \sqrt {1 + z^2})+C$$
But I can't get the same expression. So here what I've done and got stuck. How can I get the same answer?



\begin{align}

& \int\frac{dz}{\sqrt{1+z^2}}=\left\{z=\tan t, \ dz=\frac{dt}{\cos^2 t} \right\} = \int\frac{dt}{\cos^2 t\sqrt{1+\tan^2 t}} \\[10pt]
= {} & \left\{\sqrt{1+\tan^2 t}=\frac{1}{\cos t}\right\}=\int\frac{\cos t \ dt}{\cos^2 t} \\[10pt]
= {} & \left\{\cos t \ dt= d\sin t, \cos^2 t=1-\sin^2 t\right\}=\int\frac{d\sin t}{1-\sin^2 t} \\[10pt]
= {} &\{\sin t=u\}=\int\frac{d u}{1-u^2}=-\int\frac{du}{u^2-1}=-\frac{1}{2} \int\left(\frac{1}{u-1}-\frac{1}{1+u}\right)\,du \\[10pt]
= {} &-\frac{1}{2}(\ln |u-1|-\ln|u+1|)+C
\end{align}



From here I don't understand how to get the answer above. Need help


Answer



You're almost there. Note that with $u=\sin(t)$ and $z=\tan(t)$, we have




$$\begin{align}
-\frac12\left(\log(|u-1|)-\log(|u+1|) \right)&=\frac12\log\left(\left|\frac{1+\sin(t)}{1-\sin(t)}\right|\right)\\\\
&=\frac12\log\left(\left|\frac{(1+\sin(t))^2}{1-\sin^2(t)}\right|\right)\\\\
&=\log\left(\left|\frac{1+\sin(t)}{\cos(t)}\right|\right)\\\\
&=\log(|\sec(t)+\tan(t)|)\\\\
&=\log(|z+\sqrt{1+z^2}|)
\end{align}$$



as was to be shown!



real analysis - If the condition of differentiability holds for the rationals then the function is differentiable?




$f:\mathbb{R}\rightarrow\mathbb{R}$ continuous, $a\in\mathbb{R}$. Suppose that there exists $L\in\mathbb{R}$ such that for every $\varepsilon>0$ there exists $r(\varepsilon)>0$ such that $\left|\frac{f(x)-f(a)}{x-a}-L\right|<\varepsilon$ for every $x\in\mathbb{Q}$ with $0<|x-a|<r(\varepsilon)$. Show that $f$ is differentiable at $a$ with $f'(a)=L$.

Answer



Fix a point $x\neq a$ such that $0 < |x - a| < r(\epsilon/2)$. Then, at the point x, the function $$g(y) = \frac{f(y) - f(a)}{y - a}$$ is continuous. Now, pick a point $p \in Q$ near $x$ such that $0< |p-a| < r(\epsilon/2)$ and $|g(p) - g(x)| < \epsilon/2$. This is possible by continuity. Now, use your condition to conclude via the triangle inequality, that
$$ \left|\frac{f(x) - f(a)}{x - a} - L\right| = |g(x) - L| \le |g(x) - g(p)| + |g(p) - L| < \epsilon/2 + \epsilon/2 = \epsilon.$$



It follows by definition that $f$ is differentiable at $a$ with $f'(a) = L$.


calculus - Show that function is differentiable but its derivative is discontinuous.




Let $g(x)=x^2\sin(1/x)$ if $x \neq 0$, and $g(0)=0$. If $\{r_n\}$ is an enumeration of all rational numbers in $[0,1]$, define
$$
f(x)=\sum_{n=1}^\infty \frac{g(x-r_n)}{n^2}
$$

Show that $f:[0,1] \rightarrow \mathbb{R}$ is differentiable at each point of $[0,1]$, but $f'(x)$ is discontinuous at each $r_n$. Is it possible that the set of all points of discontinuity of $f'$ is precisely $\{r_n\}$?




I'm not seeing how this function works. I could not even differentiate it. Do I need to fix some $n$ to work with, and then look at the discontinuity of $f'(x)$ after that? Can anyone give me any tips? I don't know how to approach this exercise or where to start.


Answer



First a stylistic comment: you should use the word "differentiable" in place of "derivable." Second: you should show that $f$ is well-defined on $[0,1]$ so that you can take its derivative (this is easy). We want to consider the difference quotient $\frac{f(x)-f(y)}{x-y}$ and what happens as $x\rightarrow y$ (I will leave it to you to argue why you can interchange sum and limit.. Weierstrass M-test and uniform convergence are your friends). I'll help out with part of the solution.



So we want to evaluate the limit of the difference quotient. To do so, we consider two cases: when $y$ is irrational and when $y$ is rational.



Case 1: $y$ is irrational.




What we have is



$$\lim_{x\rightarrow y}\frac{f(x)-f(y)}{x-y} = \sum_{n=1}^{\infty}\frac{1}{n^2}\lim_{x\rightarrow y}\frac{g(x-r_n)-g(y-r_n)}{x-y}.$$



The important part here is the limit, so I'll focus on that. The limit looks eerily close to a difference quotient (and after a clever introduction of $0$, we see that it is):



$$\lim_{x\rightarrow y}\frac{g(x-r_n)-g(y-r_n)}{x-y} = \lim_{x\rightarrow y}\frac{g(x-r_n)-g(y-r_n)}{(x-r_n)-(y-r_n)}.$$



This is just the derivative of $g$ evaluated at $y-r_n$! Which is equal to $2(y-r_n)\sin\left(\frac{1}{y-r_n}\right)-\cos\left(\frac{1}{y-r_n}\right)$. This is well-defined for all $r_n$ since $y$ is irrational. Since this function is bounded for all $r_n$ and $y$ by the value of $4$, the series converges and so $f'(y)$ is well-defined if $y$ is irrational.




Case 2: $y$ is rational.



If $y$ is a rational in $[0,1]$, then $y=r_k$ for some $k$. Then our difference quotient is



$$\lim_{x\rightarrow r_k}\frac{f(x)-f(r_k)}{x-r_k} = \lim_{x\rightarrow r_k}\left(\frac{1}{k^2}\frac{g(x-r_k)-g(r_k-r_k)}{x-r_k}+\sum_{n\neq k}\frac{1}{n^2}\frac{g(x-r_n)-g(r_k-r_n)}{x-r_k}\right).$$



We could not haphazardly apply the trick from above in this case because we required that $y$ be irrational above (else the denominator in the trigonometric functions will be ill-defined) which is why we split off the term in the series corresponding to $y$ in this case. Notice that the remainder of the series is now susceptible to the trick we did above and the term we pulled out is easy to handle. This gives us:



$$\lim_{x\rightarrow r_k} \frac{f(x)-f(r_k)}{x-r_k} = \lim_{x\rightarrow r_k}\left(\frac{1}{k^2}\frac{g(x-r_k)}{x-r_k} + \sum_{n\neq k}\frac{1}{n^2}\frac{g(x-r_n)-g(r_k-r_n)}{(x-r_n)-(r_k-r_n)}\right).$$




The first part is simply $\frac{1}{k^2}g'(0)$ and the second part is exactly like above.



Do you see how this is also well-defined making $f'$ differentiable everywhere? Can you take it from here?


number theory - Elementary proof that $4$ never divides $n^2 - 3$



I would like to see a proof that for all integers $n$, $4$ never divides $n^2 - 3$. I have searched around and found some things about quadratic reciprocity, but I don't know anything about that. I am wondering if there is a more elementary proof.



For example, I managed to show that $4$ never divides $x^2 - 2$ by saying that if $4$ does divide $x^2 - 2$, then $x^2 - 2$ is even. And then $x^2$ is even, which means that $x$ is even. So $x = 2m$ for some integer $m$, and so $x^2 - 2 = 4m^2 - 2$ is not divisible by $4$. So I would like to see a similar proof that $4$ doesn't divide $n^2 -3$.


Answer



$n $ is odd $\implies n=2k+1\implies n^2-3=2(2k^2+2k-1)$ where $2k^2+2k-1$ is odd and hence can't have $2$ as a factor.




In order for $4$ to divide $n^2-3$ it should have $4=2.2$ as a factor but note that $2$ appears as a factor only once if $n$ is odd.



$n$ is even $\implies n^2-3=4k^2-3$ which is odd
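Since $n^2-3 \bmod 4$ depends only on $n \bmod 4$, the claim can also be checked exhaustively; a quick sketch (not part of the original answer):

```python
# n^2 - 3 mod 4 depends only on n mod 4, so four cases suffice.
residues = {(n * n - 3) % 4 for n in range(4)}
print(residues)
assert 0 not in residues  # 4 never divides n^2 - 3
```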


fraction simplification, solving for one variable



I'm having some problems understanding a simplification. I know the correct answer, but I'd like to understand why my method is wrong, i.e. what rule I'm breaking.



I have an equation like the following:




$$Fr \cdot \left( \frac1{Sr} + \frac1{Sp} \right) = \frac{Ft}{Sp}$$



Where the variable I'm looking for is $Fr$. My tendency was to do:



$$ Fr = \frac{Ft}{Sp} \cdot ( Sr + Sp ) $$



But it seems to be wrong, I should have do:



$$ Fr \cdot \left( \frac{Sp + Sr}{Sr \cdot Sp} \right) = \frac{Ft}{Sp} $$




and then:



$$ Fr = \frac{Sr \cdot Ft}{Sp + Sr} $$



Why can I not simply move the fraction to the other side and invert it? Thanks so much!!


Answer



Because



$$ \frac1a + \frac1b \neq \frac1{a+b} $$




This is already false for $a=b=1$: on the left side you get $2$ and on the right side you get $\frac12$.
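The counterexample is easy to check in exact arithmetic; a one-line sketch with Python fractions (my addition, not part of the original answer):

```python
from fractions import Fraction

a, b = Fraction(1), Fraction(1)
lhs = 1 / a + 1 / b   # 1/a + 1/b
rhs = 1 / (a + b)     # 1/(a+b)
assert lhs != rhs     # the two expressions disagree already at a = b = 1
```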


trigonometry - How is $Asintheta +Bcostheta = Csin(theta + phi)$ derived?



I have come across this trig identity and I want to understand how it was derived. I have never seen it before, nor have I seen it in any of the online resources including the many trig identity cheat sheets that can be found on the internet.



$A\cdot\sin(\theta) + B\cdot\cos(\theta) = C\cdot\sin(\theta + \Phi)$



Where $C = \pm \sqrt{A^2+B^2}$



$\Phi = \arctan(\frac{B}{A})$




I can see that Pythagorean theorem is somehow used here because of the C equivalency, but I do not understand how the equation was derived.



I tried applying the sum of two angles identity of sine, i.e. $\sin(a \pm b) = \sin(a)\cdot\cos(b) \pm \cos(a)\cdot\sin(b)$



But I am unsure what the next step is, in order to properly understand this identity.



Where does it come from? Is it a normal identity that mathematicians should have memorized?


Answer



The trigonometric angle identity $$\sin(\alpha + \beta) = \sin\alpha \cos\beta + \cos\alpha \sin\beta$$ is exactly what you need. Note that $A^2 + B^2 = C^2$, or $$(A/C)^2 + (B/C)^2 = 1.$$ Thus there exists an angle $\phi$ such that $\cos\phi = A/C$ and $\sin\phi = B/C$. Then we immediately obtain from the aforementioned identity $$\sin(\theta+\phi) = \frac{A}{C}\sin \theta + \frac{B}{C}\cos\theta,$$ with the choice $\alpha = \theta$; after which multiplying both sides by $C$ gives the claimed result.




Note that $\phi = \tan^{-1} \frac{B}{A}$, not $\tan^{-1} \frac{B}{C}$.
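A numeric sanity check of the identity (a sketch, my addition): note that `atan2(B, A)` picks the quadrant so that $\cos\phi = A/C$ and $\sin\phi = B/C$ hold simultaneously, which a bare $\arctan(B/A)$ does not when $A<0$.

```python
import math, random

random.seed(0)
for _ in range(100):
    A, B = random.uniform(-5, 5), random.uniform(-5, 5)
    C = math.hypot(A, B)        # C = +sqrt(A^2 + B^2)
    phi = math.atan2(B, A)      # ensures cos(phi) = A/C and sin(phi) = B/C
    theta = random.uniform(0, 2 * math.pi)
    assert math.isclose(A * math.sin(theta) + B * math.cos(theta),
                        C * math.sin(theta + phi), abs_tol=1e-9)
```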


Thursday 27 August 2015

divisibility - Prove $gcd(a,b)=gcd(a,2a+b)$



Call $gcd(a,b)=d$. Then $d|a$ and $d|b$. And if $c|a$ and $c|b$, then $c|d$.



It's simple to show that $d$ is SOME divisor of $a$ and $2a+b$, since we already know $d|a$ and $d|b$, so it divides the linear combination $2a+b$. However, how can I show that $d$ is the GREATEST common divisor of $a$ and $2a+b$?



Edit: Let $e$ be some other common divisor of $a$ and $2a+b$. Do we know if it's true that $e|b$? If so, we're finished since anything that divides both $a$ and $b$ will also divide $d$, since $d=gcd(a,b)$.



Answer



If $x$ divides both $a$ and $2a+b$ then $x$ divides both $a$ and $b$. Therefore $x$ divides $d$, so $x \leq d$.
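The identity is easy to spot-check numerically; a small sketch (my addition, not part of the proof):

```python
import math, random

# gcd(a, b) == gcd(a, 2a + b) for many random pairs
random.seed(1)
for _ in range(1000):
    a, b = random.randint(1, 10**6), random.randint(1, 10**6)
    assert math.gcd(a, b) == math.gcd(a, 2 * a + b)
```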


elementary number theory - On the inequality $p^{alpha +1}q^{beta +1}-2p^{alpha+1}q^{beta}-2p^{alpha}q^{beta +1}+2p^{alpha}q^{beta}+p^{alpha+1}+q^{beta+1}-1 > 0$



I'm working on some equations in number theory and I stuck on below inequality :



$$p^{\alpha +1}q^{\beta +1}-2p^{\alpha+1}q^{\beta}-2p^{\alpha}q^{\beta +1}+2p^{\alpha}q^{\beta}+p^{\alpha+1}+q^{\beta+1}-1 > 0$$




Here $p$ and $q$ are distinct prime numbers and $p,q >2$ and $\alpha\,,\beta$ are positive integer numbers.



Can somebody help me to prove that or find counterexample , although I believe that the inequality is true.


Answer



The LHS is $$x y (\;(p-2)(q-2)-2\;)+p x+q y-1$$ where $x=p^\alpha$ and $y=q^\beta$. Since $p,q$ are distinct odd primes, one of them is at least $5$, so $(p-2)(q-2)\geq 3$, and therefore the LHS is at least $xy+px+qy-1>0$.
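A brute-force check over small odd primes and exponents supports the inequality (a sketch of evidence, my addition, not a proof):

```python
# Evaluate the left-hand side of the inequality directly.
def lhs(p, q, a, b):
    return (p**(a + 1) * q**(b + 1) - 2 * p**(a + 1) * q**b
            - 2 * p**a * q**(b + 1) + 2 * p**a * q**b
            + p**(a + 1) + q**(b + 1) - 1)

primes = [3, 5, 7, 11, 13]
for p in primes:
    for q in primes:
        if p == q:
            continue
        for a in range(1, 6):
            for b in range(1, 6):
                assert lhs(p, q, a, b) > 0
```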


real analysis - Use definition of limit to prove $lim_ {t to a} f(t^2) =L$



Suppose:





$$\lim_ {x \to a^2} f(x) =L $$



Use this to prove $\lim_{t \to a} f(t^2) =L$.




This is evident from the limit substitution rule, but in this course we have not covered this and we should use the formal definition of the limit. That is:



$$ \forall \epsilon>0 \quad \exists \delta \quad s.t. \quad |x-a^2|< \delta \implies |f(x)-L|< \epsilon$$




must be used to prove:



$$ \forall \epsilon>0 \quad \exists \delta \quad s.t. \quad |t-a|< \delta \implies |f(t^2)-L|< \epsilon$$



how do I make this connection.


Answer



$|t-a| <\delta'$ implies $|t^{2}-a^{2}|=|t-a||t+a| <\delta' (|t|+|a|)<\delta' (\delta'+2|a|)$. So choose $\delta'$ such that $\delta' (\delta'+2|a|) <\delta$. It is enough to take $\delta' <1$ and $\delta' <\frac {\delta} {1+2|a|}$. Then $|t-a| <\delta'$ implies $|f(t^{2})-L| <\epsilon$.


integration - Integral over the real axis using residues

I have to evaluate the following integral
$$
\int_{-\infty}^\infty \frac{x\sin{x}}{x^2-4x+8}dx

$$

I know that for such integrals we can use residues and
$$
\int_{-\infty}^\infty \frac{x\sin{x}}{x^2-4x+8}dx = 2\pi i \sum_{k=0}^n \text{res}[f(z),a_k] - \lim_{R\to\infty}\int_{\Gamma_R} f(z)dz
$$

I see that $f(z) = \frac{z\sin{z}}{z^2-4z+8}$ has one singularity in the upper half-plane, namely $z = 2+2i$, and I have computed that res$[f(z),2+2i] = \left(\frac{1}{2}-\frac{i}{2}\right)\sin{(2+2i)}$, but I have problems evaluating the line integral. Usually we show that this goes to $0$ as $R$ gets large, but in this case it doesn't seem to be so. How can I go about evaluating the line integral? Thanks for any advice.

summation - Sum to the nth term of an arithmetic-geometric series

We have to find the sum to nth term in the following series:




$$1-\frac{2}{2}+\frac{3}{2^2}-\frac{4}{2^3}+\cdots$$ up to the $n$th term.



I tried using the common method of successive differences. It lead me to an answer that was:



$$\frac{3}{2}S=\frac{(-2)^n2+4}{3}+\frac{2-3n}{(-2)^n}$$



Where S is the sum of the series.



I'm not sure if this is the answer. Could anyone help me out by checking this answer and recommend a better, not so sophisticated method.
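One standard route for cross-checking a closed form here is the identity $\sum_{k=1}^n k r^{k-1}=\frac{1-(n+1)r^n+nr^{n+1}}{(1-r)^2}$, used with $r=-\frac12$. A sketch verifying it against brute-force partial sums in exact arithmetic (my addition; the helper names are mine):

```python
from fractions import Fraction

def partial_sum(n):
    # S_n = sum_{k=1}^n k * (-1/2)^(k-1) = 1 - 2/2 + 3/4 - 4/8 + ...
    r = Fraction(-1, 2)
    return sum(k * r**(k - 1) for k in range(1, n + 1))

def closed_form(n):
    # sum_{k=1}^n k r^{k-1} = (1 - (n+1) r^n + n r^(n+1)) / (1-r)^2
    r = Fraction(-1, 2)
    return (1 - (n + 1) * r**n + n * r**(n + 1)) / (1 - r) ** 2

for n in range(1, 21):
    assert partial_sum(n) == closed_form(n)
```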

calculus - What is? $ lim_{x to infty} left(frac{e}{left( 1 + frac {1}{x} right)^x} right)^x $





Find $$ \lim_{x \to \infty} \left( \dfrac{e}{\left( \left( 1 + \frac {1}{x} \right)^x \right)} \right)^x. $$







$ \lim_{x \to \infty} \left( \dfrac{e}{\left( \left( 1 + \frac {1}{x} \right)^x \right)} \right)^x = \lim_{x \to \infty} \left( \dfrac {e}{e} \right)^x = \lim_{x \to \infty} 1^x = 1 $



Why is this incorrect?


Answer



Take the logarithm,




$$\begin{align}
\log \left(\frac{e}{\left(1+\frac1x\right)^x}\right)^x &= x\log \frac{e}{\left(1+\frac1x\right)^x}\\
&= x\left(1 - x \log \left(1+\frac1x\right)\right)\\
&= x\left(1 - x\left(\frac1x - \frac{1}{2x^2} + O(x^{-3})\right) \right)\\
&= x\left(1 - 1 + \frac{1}{2x} + O(x^{-2})\right)\\
&= \frac12 + O(x^{-1}).
\end{align}$$



And hence




$$\lim_{x\to\infty} \left(\frac{e}{\left(1+\frac1x\right)^x}\right)^x = e^{1/2}.$$
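A numeric check is consistent with $e^{1/2}$; using `log1p` avoids the cancellation in $x\log(1+\frac1x)$ for large $x$ (a sketch, my addition):

```python
import math

def f(x):
    # (e / (1 + 1/x)^x)^x = exp(x * (1 - x*log(1 + 1/x)))
    exponent = x * (1.0 - x * math.log1p(1.0 / x))
    return math.exp(exponent)

for x in (1e3, 1e4, 1e5):
    print(x, f(x))  # approaches sqrt(e) ~ 1.64872
```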


functional equations - Let $f(x)$ be a 3rd degree polynomial such that $f(x^2)=0$ has exactly $4$ distinct real roots



Problem :




Let $f(x)$ be a 3rd degree polynomial such that $f(x^2)=0$ has exactly four distinct real roots, then which of the following options are correct :



(a) $f(x) =0$ has all three real roots



(b) $f(x) =0$ has exactly two real roots



(c) $f(x) =0$ has only one real root



(d) none of these




My approach :



Since $f(x^2)=0$ has exactly four distinct real roots, two roots remain [as $f(x^2)=0$ is a degree-six polynomial].



How can we say whether the remaining two roots will be real? The polynomial cannot have exactly one more real root, as non-real roots come in conjugate pairs.



So, option (c) is incorrect. I think the answer lies in option (a) or (b), but I am not sure which one is correct. Please suggest. Thanks.


Answer



No requirement for $f$ to have real coefficients was stated. So you could have

e.g. $f(x) = (x-1)(x-4)(x+i)$ which has two real roots, and $f(x^2)$ has the four distinct real roots $\pm 1$ and $\pm 2$.
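The counterexample is easy to verify numerically; a sketch using Python's complex literals (my addition):

```python
# f(x) = (x - 1)(x - 4)(x + i), a cubic with a non-real coefficient
def f(x):
    return (x - 1) * (x - 4) * (x + 1j)

# f(x^2) vanishes at the four distinct real points +-1 and +-2 ...
for x in (1.0, -1.0, 2.0, -2.0):
    assert abs(f(x * x)) < 1e-12

# ... while f itself has exactly the two real roots 1 and 4.
assert f(1) == 0 and f(4) == 0
```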


Wednesday 26 August 2015

proof writing - If $n$ is a positive integer and is not a perfect square



If $n$ is a positive integer and is not a perfect square, how do you prove that $n^{1/2}$ is irrational?


Answer



Since $n$ is not a perfect square, there exists at least one prime number $p$ such that
$$n=p^\alpha q$$
where $q\in\mathbb N$ is coprime to $p$ and $\alpha\ge 1$ is odd.



Now, suppose that $n^{1/2}$ is a rational number, namely,
$$n^{1/2}=\frac{b}{a}$$

where $a,b$ are natural numbers and they are coprime to each other.



Then, we have
$$n=\frac{b^2}{a^2}\Rightarrow a^2n=b^2.$$



By the fundamental theorem of arithmetic, this implies that the exponent of $p$ on the left-hand side is odd, while the exponent of $p$ on the right-hand side is even. This is a contradiction.



Hence, $n^{1/2}$ is an irrational number.


real analysis - How to prove that if $sum|a_j|<infty$ then $sum|a_j|log(|a_j|^{-1})<infty$



I would like to prove the following statement about series.




Suppose that $\{a_j:j\ge1\}$ is a real sequence and $a_j\ne0$ for each $j\ge1$. If the series $\sum_{j=1}^\infty|a_j|<\infty$, then the series
$$
\sum_{j=1}^\infty|a_j|\log(|a_j|^{-1})<\infty.
$$





It is not difficult to show that this statement is true for some particular sequences $\{a_j:j\ge1\}$. For example, suppose that $a_j=j^{-2}$ and use the integral test for convergence. But I have no idea how to prove the statement for a general sequence $\{a_j:j\ge1\}$.



Any help would be much appreciated!


Answer



Try $a_j=1/(j(\log j)^c)$ for every $j\geqslant3$, then $\log(1/a_j)\sim\log j$ hence $\sum\limits_ja_j$ converges exactly when $c\gt1$ and $\sum\limits_ja_j\log(1/a_j)$ converges exactly when $c\gt2$.



Thus, any sequence defined by $a_j=1/(j(\log j)^c)$ for every $j\geqslant3$, for some $1\lt c\leqslant2$, disproves your claim.


Tuesday 25 August 2015

general topology - 1-1 correspondence between [0,1] and [0,1)

I wonder how to build a 1-1 correspondence between [0,1] and [0,1). My professor offers an example such that 1 in the first set corresponds to 1/2 in the second set, and 1/2 in the first set corresponds to 1/4 in the second. I don't quite understand it. Does it mean every element in the first set corresponds to its half value in the second set? wouldn't that make some elements left out in the second set? Does it still count as 1-1 correspondence? Does it connect to Schroder-Bernstein theorem?

multivariable calculus - Limit of $frac{sin(xy^2)}{xy}$




I found this interesting problem on calculating the limit of $\frac{\sin(xy^2)}{xy}$ on the positive coordinate axes $x$ and $y$. That is, compute the limit on the points $(x_0, 0)$ and $(0,y_0)$ when $x_0 > 0$ and $y_0 > 0$.




My approach was this:



If we first calculate the limit for $x$ axis, the the $x$ is a constant $x=x_0$ and therefore the function is essentially a function of one variable:




$$f(y) = \frac{\sin(x_0y^2)}{x_0y}$$



Using L'Hospital's rule:



$$\lim_{y\to0}f(y)=\frac{\lim_{y\to0}\frac{d\sin(x_0y^2)}{dy}}{\lim_{y\to0}\frac{dx_0y}{dy}}$$



$$=\frac{\lim_{y\to0}2yx_0\cos(x_0y^2)}{\lim_{y\to0}x_0}=0$$



The same idea applies to the other limit.




But in the sheet where this problem was listed, this was listed as a "tricky" limit to calculate. It seemed quite simple, so I would like to hear whether this approach is correct. Thank you!


Answer



It is correct, the limit is $0$ and the reasoning holds.


real analysis - Is my proof that $limlimits_{nto +infty}dfrac{u_{n+1}}{u_n}=1$ correct?



I'm doing an exercise where $(u_n)$ is a numerical sequence which is decreasing and strictly positive. Since $(u_n)$ is decreasing and bounded below by $0$, it is convergent and its limit, which we denote by $l$, is nonnegative. Assume that $l\ne 0$.




I have to prove that $\lim\limits_{n\to +\infty}\dfrac{u_{n+1}}{u_n}=1$. I'm not sure if my proof is correct or not. Can you please check it? Thank you very much!
Please excuse my English. We don't study Maths in English.



Let $\varepsilon\in ]0;l[$.



So $\exists N\in\mathbb{N},\,\forall n\in\mathbb{N},\,n>N\Longrightarrow |u_n-l|<\varepsilon$



Let $n\in\mathbb{N}$ such that $n>N$. We also have $n+1>n>N$.
Then:




$|u_{n+1}-u_n|=|(u_{n+1}-l)-(u_n-l)|\le |u_{n+1}-l|+|u_n-l|<2\varepsilon$ $(1)$



And we have $|u_n-l|<\varepsilon$ so $0<l-\varepsilon<u_n$, hence $\dfrac{1}{u_n}<\dfrac{1}{l-\varepsilon}$ $(2)$

Then $(1)\times (2)$ gives:



$\left|\dfrac{u_{n+1}}{u_n}-1\right|<\dfrac{2\varepsilon}{l-\varepsilon}$



We put $\varepsilon '=\dfrac{2\varepsilon}{l-\varepsilon}>0$. Then $\varepsilon=\dfrac{l\varepsilon '}{2+\varepsilon '}>0$.




While $\varepsilon '>0$ then $\dfrac{\varepsilon '}{2+\varepsilon '}<1$ and because $l>0$ we then have $\varepsilon=\dfrac{l\varepsilon '}{2+\varepsilon '}<l$, so indeed $\varepsilon\in ]0,l[$.

And so $\forall\varepsilon '\in\mathbb{R}^{+*},\,\exists\varepsilon\in ]0,l[,\,\varepsilon=\dfrac{l\varepsilon '}{2+\varepsilon '}$ and so $\varepsilon '$ covers $\mathbb{R}^{+*}$ where $\mathbb{R}^{+*}$ is the set of strictly positive real numbers. As a result we have then:$$\forall\varepsilon '\in\mathbb{R}^{+*},\,\exists N\in\mathbb{N},\,\forall n\in\mathbb{N},\, n>N\Longrightarrow\left|\dfrac{u_{n+1}}{u_n}-1\right| <\varepsilon '$$



And so $\lim\limits_{n\to +\infty}\dfrac{u_{n+1}}{u_n}=1$



Edit: $\mathbb{R}^{+*}$ is the set of strictly positive real numbers.



Edit2: Assume that $l\ne 0$.


Answer




As an exercise, we give a detailed argument directly from the definition. Suppose that the sequence $(u_n)$ has limit $a\gt 0$. We want to show that for any $\epsilon\gt 0$, there is an $N$ such that
$$1-\epsilon\lt \frac{u_{n+1}}{u_n}\le 1\tag{1}$$
if $n\gt N$. Note that
$$\frac{u_{n+1}}{u_n}\ge \frac{a}{u_n},$$ so it suffices to make $\frac{a}{u_n}\gt 1-\epsilon$. This will be the case automatically if $\epsilon\ge 1$, so we may suppose that $\epsilon\lt 1$.



For $0\lt \epsilon\lt 1$ we have
$$\frac{a}{u_n}\gt 1-\epsilon \quad\text{iff}\quad u_n \lt \frac{a}{1-\epsilon} \quad\text{iff}\quad u_n-a\lt \frac{a\epsilon}{1-\epsilon}.$$



Since the sequence $(u_n)$ converges to $a$, there is an $N$ such that if $n\gt N$, then $u_n-a\lt \frac{a\epsilon}{1-\epsilon}$. For any such $n$, Inequality (1) will hold.




Remark: Informally, this is simpler than it looks. We can scale the sequence $(u_n)$ so that it has limit $1$. That does not change ratios.


functions - Is this a valid proof of $f(S cap T) subseteq f(S) cap f(T)$?

$f(S \cap T) \subseteq f(S) \cap f(T)$



Suppose there is an $x$ that is in $S$, but not in $T$; then there is a value $y = f(x)$ that is in $f(S)$, but not in $f(S \cap T)$. Suppose there is an $x$ that is in $T$, but not in $S$; then there is a value $y = f(x)$ that is in $f(T)$, but is not in $f(S \cap T)$.



*correction made
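The inclusion (and the fact that it can be strict) is easy to check on a concrete example, here with $f(x)=x^2$ (a sketch, my addition; helper names are mine):

```python
def image(f, A):
    return {f(x) for x in A}

f = lambda x: x * x
S, T = {-1, 0, 2}, {1, 0, 3}

lhs = image(f, S & T)            # f(S ∩ T)
rhs = image(f, S) & image(f, T)  # f(S) ∩ f(T)
assert lhs <= rhs                # the inclusion always holds

# Strictness: with S = {-1}, T = {1} we get f(S ∩ T) = ∅ but f(S) ∩ f(T) = {1}.
assert image(f, {-1} & {1}) == set()
assert image(f, {-1}) & image(f, {1}) == {1}
```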

combinatorics - Binomial upper bound for the bi-color Ramsey numbers (Erdős-Szekeres)





The question: How did Erdős and Szekeres come up with a closed form with a binomial for the upper bound? Where does the idea behind $R(2,2)=\binom{2+2-2}{2-1}$ - I do see that $R(2,2)=2$ - or $\binom{s+t-3}{s-1}\left(\text{or }\binom{s+t-3}{s-2}\right)$ come from? And how does the induction over $s$ and $t$ work?



What I understand:




  1. I see that $R(s,t) \leq R(s-1,t)+R(s,t-1)$

  2. I understand that ${\displaystyle {\binom {r+s-3}{r-2}}+{\binom {r+s-3}{r-1}}={\binom {r+s-2}{r-1}}}$ - Pascal's triangle.

  3. I also see that $\forall s, t ∈ \mathbb N,$ the relationship $R(s, t) = R(t, s)$ holds.


  4. And I get it that $R(s,2)=R(2,s)=s.$



The problem: There are tons of sites where the proof of the inequality above is readily available, including one of the answers to this post. However, when the inequality is proven, the binomial formula seems to appear out of thin air like it is self-evident, typically with a short justification such as: easily proven by induction on $s$ and $t.$ But how does this work? How did they come up with this binomial to begin with? This binomial coefficient appears before testing the base cases.







Background info:




For instance, in here:



Since $R(r, s) ≤ R(r − 1, s) + R(r, s − 1)$, this automatically gives an upper bound, although not in the closed form that we expect.



The closed form expression is ${\displaystyle R(r,s)\leq {\binom {r+s-2}{r-1}}}.$ To derive this use double induction on $r$ and $s.$ The base case $r = s = 2$ is easily established as $${\displaystyle R(2,2)=2\leq {\binom {2+2-2}{2-1}}=2}.$$



Now assume the expression holds for $R(r − 1, s)$ and $R(r, s − 1).$ Then



$${\displaystyle R(r,s)\leq R(r-1,s)+R(r,s-1)\leq {\binom {r+s-3}{r-2}}+{\binom {r+s-3}{r-1}}={\binom {r+s-2}{r-1}}}$$ gives us our upper bound. Note that we have used Pascal's relation in the last equivalence.




But why did they start already applying the binomial formula they intend to prove in ${\displaystyle R(2,2)=2\leq {\binom {2+2-2}{2-1}}=2},$ and how does the inductive process proceed from that point?






I see there are related questions, and in fact, I have tried to contribute with a possible answer as to the proof of a finite Ramsey number for every combination of two natural numbers here to get feedback.



However, I still have problems with the immediately related proof of the inequality (theorem of Erdős-Szekeres):



$$R(s,t) \leq \binom{s+t-2}{s-1}$$




as in here:



[image of the proof omitted]



I see that this inequality is fulfilled by the base cases, as well as $s+t<5,$ but I presume other inequalities could also be fulfilled by the first Ramsey numbers.






In the following two answers that I found online it seems as though the Ramsey number on, say $(r,t),$ i.e. $R(r,t)$ is somewhat just replaced by $r$ and $t$ in the combinatorics solution. So I don't get the analogy to Pascal's triangle...




Solution 1:



The answer can be found here:



$$R(k,l) \leq \binom{k+l-2}{k-1}$$



because the recurrence $$R(k,l) \leq R(k-1,l) + R(k,l-1) $$ can be seen as the paths from a point $R(k,l)$ on the grid below to $(1,1):$



[image of the grid omitted]




and the number of ways to get to a point on a lattice $(x,y)$ taking off from $(0,0)$ are:



$$\binom{x+y}{x}$$



Here we are moving in the opposite direction, and stopping at $(1,1),$ which reduces the count to:



$$\binom{(x-1)+(y-1)}{x-1}=\binom{x+y-2}{x-1}$$







"We’ve placed the value $1$ at each position $(k, 1)$ or $(1, l)$ in this grid, corresponding to the
base case $r(k, 1) = r(1, l) = 1$ of our induction. At the point $(k, l)$ in the grid, we know
that the value $r(k, l)$ at that point is upper-bounded by the sum of the values immediately
below and immediately to the left. Applying this same recurrence to these adjacent nodes,
we see that every left/down path from $(k, l)$ to the boundary will contribute $1$ in the final
sum (corresponding to the value $1$ at the boundary points). Thus, $r(k, l)$ is upper-bounded
by the number of left/down paths to the boundary, which is in turn equal to the number of
left/down paths from $(k, l)$ to $(1, 1),$ which is exactly
$\binom{k+l-2}{k-1}."$







Solution 2:



From here:



[images omitted]


Answer




To see the upper bound, you are closest with your solution 1.



We have:



$$R(r,b)\le R(r-1,b) + R(r,b-1)$$



(I am using $r$ for red and $b$ for blue - I find it easier to remember!).



Using the idea of Pascal's triangle, we can extend this into:




$$R(r,b)\le \left(R(r-2,b) + R(r-1,b-1)\right) + \left(R(r-1,b-1) + R(r,b-2)\right)$$



or:



$$R(r,b)\le R(r-2,b) + 2R(r-1,b-1) + R(r,b-2)$$



The next step takes us to:



$$R(r-3,b)+3R(r-2,b-1)+3R(r-1,b-2)+R(r,b-3)$$




with the next step involving $1,4,6,4,1$, and continue using binomial coefficients, except where we hit the boundaries at $R(1,b)=R(r,1)=1$ and then $R(0,b)=R(r,0)=0$, and this leaves the binomial in question.



From Known Ramsey numbers, you can see the pattern by looking at the anti-diagonals.
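The path-counting picture can be checked directly: with boundary values $1$ (mirroring $r(k,1)=r(1,l)=1$), the Pascal-style recurrence produces exactly the binomial coefficient. A Python sketch (my addition; the helper name `U` is mine):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def U(r, s):
    # Upper-bound recurrence with boundary U(r,1) = U(1,s) = 1,
    # the same lattice-path count as in Solution 1.
    if r == 1 or s == 1:
        return 1
    return U(r - 1, s) + U(r, s - 1)

for r in range(1, 12):
    for s in range(1, 12):
        assert U(r, s) == comb(r + s - 2, r - 1)
```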


calculus - Finding the limit of $frac{Q(n)}{P(n)}$ where $Q,P$ are polynomials



Suppose that $$Q(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots+a_{1}x+a_{0} $$and $$P(x)=b_{m}x^{m}+b_{m-1}x^{m-1}+\cdots+b_{1}x+b_{0}.$$ How do I find $$\lim_{x\rightarrow\infty}\frac{Q(x)}{P(x)}$$ and what does the sequence $$\frac{Q(k)}{P(k)}$$ converge to?



For example, how would I find what the sequence $$\frac{8k^2+2k-100}{3k^2+2k+1}$$ converges to? Or what is $$\lim_{x\rightarrow\infty}\frac{3x+5}{-2x+9}?$$



This is being asked in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.



and here: List of abstract duplicates.



Answer



Short Answer:



The sequence $\displaystyle\frac{Q(k)}{P(k)}$ will converge to the same limit as the function $\displaystyle\frac{Q(x)}{P(x)}.$ There are three cases:



$(i)$ If $n>m$ then it diverges to either $\infty$ or $-\infty$ depending on the sign of $\frac{a_{n}}{b_{m}}$.



$(ii)$ If $n<m$ then it converges to $0$.

$(iii)$ If $n=m$ then it converges to $\frac{a_{n}}{b_{n}}$.
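Case $(iii)$ covers the question's first example: the ratio tends to the ratio of leading coefficients, $\frac{8}{3}$. A quick exact-arithmetic check (a sketch, my addition; the helper name is mine):

```python
from fractions import Fraction

def ratio(k):
    # the example sequence (8k^2 + 2k - 100) / (3k^2 + 2k + 1)
    return Fraction(8 * k * k + 2 * k - 100, 3 * k * k + 2 * k + 1)

# For large k the ratio is close to 8/3, the ratio of leading coefficients.
assert abs(ratio(10**6) - Fraction(8, 3)) < Fraction(1, 10**5)
```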



Monday 24 August 2015

calculus - Proving bounded variation function is continuous



My question is how to prove that a function f:[a,b]→R with bounded variation and the intermediate value property (IVP) is continuous.



I have seen Bounded Variation $+$ Intermediate Value Theorem implies Continuous but it is not clear to me.




So I think there must be an easy way using the fact that $f = g- h$ where $g$ and $h$ are monotonic increasing. A monotonic function only has jump discontinuities, so $f$, $g$ and $h$ all have this property. Also a monotonic function with IVP must be continuous. But I can’t be sure $g$ and $h$ have IVP even if $f$ does. Furthermore $f$ has IVP and only jump discontinuities but because it may not be monotonic I can’t immediately say it is continuous.


Answer



Suppose $f$ is discontinuous at $c$.



Since a BV function is the difference of increasing functions, the one-sided limits $f(c+)$ and $f(c-)$ exist. WLOG we have $\Delta = f(c+) - f(c-) > 0$ and



$$f(c+) - \frac{\Delta}{3} = f(c-) + \frac{2\Delta}{3} > f(c-) + \frac{\Delta}{3}.$$



But there exists $\delta > 0$ such that $f(x) < f(c-)+ \frac{\Delta}{3}$ when $c-\delta < x < c$ and $f(x) > f(c+) - \frac{\Delta}{3}$ when $c < x < c+\delta$.




Consequently, if $f(c-) + \frac{\Delta}{3} < K < f(c+)- \frac{\Delta}{3}$ and $K \neq f(c)$, then there exists no $x$ such that $f(x) = K$, contradicting the intermediate value property.


decimal expansion - I'm puzzled with 0.99999











After reading all the kind answers for this previous question question of mine,
I wonder... How do we get a fraction whose decimal expansion is the simple $0.\overline{9}$?



I don't mean to look like kidding or joking (of course, one can teach math with fun so it becomes more interesting), but this series has really raised a flag here, because $\frac{9}{9}$ won't solve this case, although it solves for all other digits (e.g. $0.\overline{8}=\frac{8}{9}$ and so on).




Thanks!
Beco.


Answer



The number $0.9999\cdots$ is in fact equal to $1$, which is why you get $\frac{9}{9}$. See this previous question.



To see it is equal to $1$, you can use any number of ideas:




  1. The hand-wavy but convincing one: Let $x=0.999\cdots$. Then $10x = 9.999\cdots = 9 + x$. So $9x = 9$, hence $x=1$.



  2. The formal one. The decimal expansion describes an infinite series. Here we have that
    $$ x = \sum_{n=1}^{\infty}\frac{9}{10^n}.$$
This is a geometric series with common ratio $\frac{1}{10}$ and initial term $\frac{9}{10}$, so
    $$x = \sum_{n=1}^{\infty}\frac{9}{10^n} = \frac{\quad\frac{9}{10}}{1 - \frac{1}{10}} = \frac{\frac{9}{10}}{\quad\frac{9}{10}\quad} = 1.$$




In general, a number whose decimal expansion terminates (has a "tail of 0s") always has two decimal expansions, one with a tail of 9s. So:
$$\begin{align*}
1.0000\cdots &= 0.9999\cdots\\
2.480000\cdots &= 2.4799999\cdots\\

1938.01936180000\cdots &= 1938.019361799999\cdots
\end{align*}$$
etc.
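In exact arithmetic the partial sums make the point concrete: each one falls short of $1$ by exactly $10^{-n}$. A sketch with Python's Fraction (my addition):

```python
from fractions import Fraction

# Partial sums of 9/10 + 9/100 + ... differ from 1 by exactly 10^(-n).
s = Fraction(0)
for n in range(1, 30):
    s += Fraction(9, 10**n)
    assert 1 - s == Fraction(1, 10**n)
```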


calculus - Existence of Improper Integrals



I've tried to do the following question:



Spivak Chapter 14 Question 27:
The improper integral $\int_{-\infty}^{a} f$ is defined in the obvious way, as $\lim_{N\to -\infty} \int_{N}^a f$. But another kind of improper integral $\int_{-\infty}^{\infty} f$ is defined in a nonobvious way: it is $\int^{\infty}_{0} f +\int_{-\infty}^{0} f$, provided these improper integrals both exist.



a) Explain why $\int_{-\infty}^\infty \frac{1}{x^2+1}$ exists




b)Explain why $\int_{-\infty}^\infty x d{x}$ does not exist. (But notice that $\lim_{N\to \infty} \int_{-N}^N xd{x}$ does exist)



What I did:
For the first item, I just said that using the Fundamental Theorem of Calculus, we know that integral is $\tan^{-1} (\infty)-\tan^{-1} (-\infty)=\pi$. But I'm not sure it's correct and it seems, from the questions around it, that the book is looking for something from more fundamental stuff maybe (?).



For the second one, I don't understand how that's possible as it seems to be the very definition of this kind of integral from $-\infty \to \infty$. I've seen questions related to this item, but it's still not clear to me why the integral in question doesn't exist.


Answer



For the first integral, by definition $\int_{-\infty}^\infty\frac{dx}{x^2+1}$ exists if and only if the integral $\int_{0}^\infty\frac{dx}{x^2+1}$ exists because the function $x\mapsto \frac{1}{x^2+1}$ is even. To check this, we can use the fundamental theorem of calculus like you did:
\begin{align*}

\int_0^\infty\frac{dx}{x^2+1} \stackrel{\rm def}{=}\lim_{b\to\infty}\int_0^b\frac{dx}{x^2+1} = \lim_{b\to\infty}[\arctan(b)-\arctan(0)] = \pi/2.
\end{align*}
Or we can justify it with the integral test because the series $\sum_{n=0}^\infty\frac{1}{n^2+1}$ converges.



For the second one, by our definition of what $\int_{-\infty}^\infty x\,dx$ means, this integral exists if and only if both the integrals $\int_{-\infty}^0 x\,dx$ and $\int_{0}^\infty x\,dx$ are finite, but neither of these integrals is finite. However, that doesn't stop the following limit from existing:
\begin{align*}
\lim_{N\to\infty}\int_{-N}^Nx\,dx = \lim_{N\to\infty}\bigg[\frac{x^2}{2}\bigg]_{-N}^N = \lim_{N\to\infty}0 = 0.
\end{align*}
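A crude numeric check of the first integral is also possible: a midpoint rule on $[0,100]$ lands within the tail bound $\int_{100}^\infty \frac{dx}{x^2+1} < \frac{1}{100}$ of $\pi/2$ (a sketch, my addition; helper names are mine):

```python
import math

def integral(f, a, b, steps=200_000):
    # simple midpoint rule
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

approx = integral(lambda x: 1.0 / (x * x + 1.0), 0.0, 100.0)
# pi/2 - arctan(100) < 1/100, so the truncated integral is close to pi/2
assert abs(approx - math.pi / 2) < 0.011
```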


real analysis - Let $f: [0,1] rightarrow mathbb{R} $ be continuous with $f(0) = f(1)$ *note, there is a part b*

(a) Show that there must exist $x,y \in [0,1] $ satisfying $|x-y| = \frac{1} {2}$ and $f(x) = f(y)$



I can start by defining a function $g(x) = f(x + \frac{1} {2}) - f(x)$ to guarantee an $x,y$ so that $|x-y| = \frac{1} {2}$ But how do I show that $f(x) = f(y)$?



(b) Show that for each $n \in \mathbb{N}$ $\exists x_n ,y_n \in [0,1]$ with $|x_n - y_n| = \frac{1} {n}$, and $f(x_n) = f(y_n)$



Actually I'm not sure where to start here. Any help is greatly appreciated.
Thanks!

calculus - Mean Value theorem problem?(inequality)



I'm trying to solve this mean value theorem problem but confused where to start,




If $0<a<b$, show that $1-\dfrac{a}{b}<\ln\dfrac{b}{a}<\dfrac{b}{a}-1$.




Can someone please lend me a hand?


Answer



Apply the mean value theorem to the function $x\mapsto \ln x$ on the interval $[a,b]$: $\exists c\in(a,b)$ such that
$$1-\frac a b=\frac {b-a} b<\ln b-\ln a=\ln \frac{b} {a}=(b-a)\frac 1 c<\frac {b-a} a=\frac b a-1$$
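A quick numeric spot check of the double inequality for random $0<a<b$ (a sketch, my addition, not part of the proof):

```python
import math, random

random.seed(2)
for _ in range(1000):
    a = random.uniform(0.01, 10.0)
    b = a + random.uniform(0.01, 10.0)  # ensures 0 < a < b
    assert 1 - a / b < math.log(b / a) < b / a - 1
```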


abstract algebra - Find the Addition and Multiplication tables for GF(7)

Find the Addition and Multiplication tables for GF(7).



I know in order to do this, I have to express it in terms of $Z_7[x]/(x+1)$, or some other irreducible polynomial of order 1. But I'm not sure how to proceed from here. Any help would be great, thank you in advance!
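For a prime $p$, GF($p$) is just $\mathbb{Z}_p$, so the tables are plain arithmetic mod $7$ (the quotient $Z_7[x]/(x+1)$ is isomorphic to $Z_7$ itself). A sketch printing both tables (my addition):

```python
p = 7
add_table = [[(i + j) % p for j in range(p)] for i in range(p)]
mul_table = [[(i * j) % p for j in range(p)] for i in range(p)]

print("addition mod 7:")
for row in add_table:
    print(row)
print("multiplication mod 7:")
for row in mul_table:
    print(row)
```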

Sunday 23 August 2015

integration - Limit of an integral



I am trying to compute the following limit,
$$
\lim_{k \to \infty} \int_0^1 \left| \frac{\sin(kx)}{x}\right| dx
$$
After some numerical calculations, it looks like the limit is $\infty$. To prove it, I tried to use Riemann sums, but that approach is not working.



Any hints on how one should prove this ?




Thanks in advance.


Answer



Changing variables ($y=kx$) this is
$$\lim_{k\to\infty}\int_0^k\frac{|\sin y|}{y}\,dy
=\int_0^\infty\frac{|\sin y|}{y}\,dy.$$
This improper integral is well-known to diverge. For instance
$$\int_{(n-1)\pi}^{n\pi}\frac{|\sin y|}{y}\,dy
\ge\frac{1}{n\pi}\int_0^\pi\sin y\,dy=\frac{2}{n\pi}.$$
Adding these up, and considering the behaviour of the harmonic

series shows the integral diverges.
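The divergence can also be seen numerically: the partial integrals keep growing roughly like $\frac{2}{\pi}\log b$. A rough midpoint-rule sketch (my addition; names are mine):

```python
import math

def I(b, steps_per_unit=2000):
    # midpoint rule for the integral of |sin y|/y over (0, b]
    steps = int(b * steps_per_unit)
    h = b / steps
    return h * sum(abs(math.sin((i + 0.5) * h)) / ((i + 0.5) * h)
                   for i in range(steps))

# Growth between b = 10*pi and b = 100*pi is about (2/pi)*log(10) ~ 1.47.
assert I(100 * math.pi) - I(10 * math.pi) > 1.0
```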


measure theory - Example of a Lebesgue measurable set that can't be constructed from Borel sets and projections?

The Borel sigma algebra on $\mathbb{R}^n$ is obtained by starting with open sets and repeatedly applying the operations of complement, countable union, countable intersection. Now Henri Lebesgue famously made the mistake of thinking that the projection of a Borel set is always a Borel set. In reality, the projection of a Borel set need not be a Borel set, although it is still Lebesgue measurable. So that is a way of constructing a Lebesgue measurable set that is not a Borel set.




But my question is, what is an example of a Lebesgue measurable set that cannot be constructed in this way? That is, what is a Lebesgue measurable set that cannot be constructed by starting with open sets in $\mathbb{R}^n$ and repeatedly applying the operations of complement, countable union, countable intersection, and projection?

integration - Prove equality with product of improper integrals



I have to prove the following equality:
$$
\int\limits_0^{+\infty} \frac{dx}{\sqrt{\cosh x}} \cdot \int\limits_0^{+\infty} \frac{dx}{\sqrt{\cosh^3 x}} = \pi.

$$



The first integral Wolfram Mathematica somehow evaluates to
$$
4\sqrt{\frac{2}{\pi}}\Gamma\left(\frac 54\right)^2.
$$



But I don't know how to simplify it to such form and deal with the second integral.


Answer



We have:

$$ I_n = \int_{0}^{+\infty}\frac{dx}{\cosh^{n/2} x} = \int_{1}^{+\infty}\frac{dt}{t^{n/2}\sqrt{t^2-1}} =\int_{0}^{1}\frac{t^{n/2-1}}{\sqrt{1-t^2}}\,dt\tag{1}$$
so, by Euler's Beta function:



$$ I_n = \frac{1}{2}\int_{0}^{1}t^{n/4-1}(1-t)^{-1/2}\,dt = \frac{\Gamma(n/4)\,\Gamma(1/2)}{2\, \Gamma(n/4+1/2)}=\frac{\sqrt{\pi}}{2}\cdot\frac{\Gamma(n/4)}{\Gamma(n/4+1/2)}\tag{2}$$
and through $\Gamma(z+1)=z\,\Gamma(z)$ we have:



$$ I_1\cdot I_3 = \frac{\pi}{4}\cdot\frac{\Gamma\left(\frac{1}{4}\right)}{\Gamma\left(\frac{5}{4}\right)}=\color{red}{\pi}.\tag{3}$$
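As a numeric cross-check of $(3)$ (a rough midpoint rule; since the integrands decay like $e^{-x/2}$ and $e^{-3x/2}$, truncating at $x=60$ loses a negligible tail):

```python
from math import cosh, pi

def half_line_integral(power, upper=60.0, steps=200000):
    """Midpoint rule for the integral of cosh(x)**(-power/2) over [0, upper]."""
    h = upper / steps
    return sum(cosh((k + 0.5) * h) ** (-power / 2) for k in range(steps)) * h

I1 = half_line_integral(1)
I3 = half_line_integral(3)
assert abs(I1 * I3 - pi) < 1e-4   # product is pi, as derived above
```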


limits - Evaluating $\lim_{x \to 0} \frac{1}{x}\ln(1 + x)$

$$\lim_{x \to 0} \frac{1}{x} \ln(1 + x) = 1 $$
Limit is of type $+\infty \cdot 1$, so must be $+\infty$, but answer is natural exponential to the power $1/2$.

calculus - Where did I go wrong? Calculating $k(t)$ for $\mathbf r(t) = \langle\cos(2t), -\sin(2t), 4t\rangle$.

I'm calculating the curvature k(t) of the function




$$\mathbf r(t) = \langle \cos(2t), -\sin(2t), 4t\rangle.$$



The formula I'm using is



$$k(t) = \frac{||\mathbf r'(t) \times\mathbf r''(t)|| }{ ||\mathbf r'(t)||^3}.$$



Here's my work:



\begin{align*}
\mathbf r'(t) &= \langle -2\sin(2t), -2 \cos(2t), 4\rangle;\\

\mathbf r''(t) &= \langle-4\cos(2t), 4\sin(2t), 0\rangle
\end{align*}



Therefore:



$$\mathbf r'(t)\times\mathbf r''(t) = (0 - 16\sin(2t))\mathbf i - (0 + 16\cos(2t))\mathbf j + (-8)\mathbf k$$



(Note that I got $-8\mathbf k$ using the Pythagorean identity.)



So

\begin{align*}
||\mathbf r'(t)\times\mathbf r''(t)|| &= \sqrt{16^2 + 8^2}\qquad \text{(Pythagorean again)}\\
&= 64\sqrt 5.
\end{align*}



Then, I got
\begin{align*}
||\mathbf r'(t)|| &= \sqrt{4 + 4}\qquad \text{(also Pythagorean identity)}\\
&= 2\sqrt 2,
\end{align*}

so $||\mathbf r'(t)||^3 = 2(\sqrt 2)^3 = 16\sqrt 2.$



So finally, I have
\begin{align*}
k(t)&= \frac{64\sqrt 5}{16\sqrt 2}\\
\implies \;\;k(t)&= 2\sqrt{10}.
\end{align*}



Thoughts? I probably just misdifferentiated somewhere or took the cross-product incorrectly. Thanks!
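One way to pinpoint the slip (an editor's sketch; the sample point $t$ is arbitrary) is to evaluate the quoted formula numerically and compare each intermediate quantity against the printed work:

```python
from math import sin, cos, sqrt

t = 0.7  # any sample point works, since this curve is a helix
rp  = (-2 * sin(2 * t), -2 * cos(2 * t), 4.0)   # r'(t)
rpp = (-4 * cos(2 * t),  4 * sin(2 * t), 0.0)   # r''(t)

# cross product r'(t) x r''(t)
cross = (rp[1] * rpp[2] - rp[2] * rpp[1],
         rp[2] * rpp[0] - rp[0] * rpp[2],
         rp[0] * rpp[1] - rp[1] * rpp[0])

def norm(v):
    return sqrt(sum(c * c for c in v))

k = norm(cross) / norm(rp) ** 3
print(norm(cross), norm(rp), k)
```

Comparing `norm(cross)` and `norm(rp)` with the hand-computed $64\sqrt5$ and $2\sqrt2$ reveals where the arithmetic drifted.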

complex numbers - How to solve $\sqrt{35 - 5i}$

Need some hints on how to solve $\sqrt{35 - 5i}$.



Attempt.



I factorized 5 out and it became $\sqrt {5(7-i)}$




I just want to know if it can be solved further.



Thanks.
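It can: writing $\sqrt{35-5i}=a+bi$ and solving $a^2-b^2=35$, $2ab=-5$ gives the answer in nested radicals. As a quick numeric illustration (not a closed form), Python's `cmath` gives the principal square root:

```python
import cmath

z = 35 - 5j
r = cmath.sqrt(z)            # principal root; the other square root is -r
assert abs(r * r - z) < 1e-9 # squaring recovers the original number
print(r)                     # roughly 5.931 - 0.421j
```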

Using induction to show associativity on $x_1+dots + x_n$

I want to use induction to show that the sum $x_1 + \dots + x_n$ of real numbers is defined independently of parentheses to specify order of addition.



I know how to apply induction (base case, inductive hypothesis, then the $k+1$ step using the hypothesis). Here I am not sure what the base would be. I have two ideas:



1) First case is $(x_1 + x_2)+x_3+\dots+x_n$ and work through to $x_1+x_2+\dots+x_{n-2} + (x_{n-1} + x_n)$



2) Start with $(x_1+x_2)+x_3=x_1+(x_2+x_3)$ and work up in number of elements to the full case.




Both seem wrong, I have no idea what to actually do.



I imagine the above is sufficient effort, although I have shown no working. Before you downvote, please tell me why you are planning to, and I will edit.

real analysis - Convergence of a series of translations of a Lebesgue integrable function




Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a Lebesgue integrable function. Prove that $$\sum_{n=1}^{\infty} \frac{f(x-\sqrt{n})}{\sqrt{n}}$$ converges for almost every $x \in \mathbb{R}$.




My tactic here (which of course might lead me to nowhere) is to express this series as an integral or sum of integrals and use in some way the fact that $f$ is integrable along with a convergence theorem of course.




From integrability of $f$ we know that $f$ is finite almost everywhere which I believe would help me in my proof.



But I don't know exactly how to start with this.



Any useful hint would be appreciated. I don't want a full solution to this.



Thank you in advance.


Answer



A brute-force method: Define $g(x):= \sum_{n=1}^{\infty} \frac{|f(x-\sqrt n)|}{\sqrt n}$, which a priori might be infinite at many points. Fix some $j \in \Bbb Z$. By Tonelli's theorem we may interchange sum and integral to compute \begin{align*}\int_j^{j+1} g(x)dx &=\sum_{n=1}^{\infty} \int_j^{j+1}\frac{|f(x-\sqrt n)|}{\sqrt n} dx \\ &=\sum_{n=1}^{\infty} n^{-1/2}\int_{j-\sqrt n}^{j+1-\sqrt n} |f(x)|dx \\ &=\sum_{k=1}^{\infty} \sum_{n=k^2}^{(k+1)^2-1} n^{-1/2}\int_{j-\sqrt n}^{j+1-\sqrt n} |f(x)|dx\\&= \sum_{k=1}^{\infty} \int_{\Bbb R}\bigg(\sum_{n=k^2}^{(k+1)^2-1}n^{-1/2}\cdot1_{[j-\sqrt n,\;j+1-\sqrt n)}(x) \bigg)|f(x)|dx\end{align*}

Now if $k^2 \leq n <(k+1)^2$, then $n^{-1/2} \leq k^{-1}$ and also $[j-\sqrt n,j+1-\sqrt n) \subset [j-k-1,j-k+1)$.




Therefore we find that \begin{align*}\sum_{n=k^2}^{(k+1)^2-1}n^{-1/2}\cdot 1_{[j-\sqrt n,\;j+1-\sqrt n)}(x) &\leq \big[(k+1)^2-k^2\big]\cdot k^{-1} \cdot 1_{[j-k-1,j-k+1)}(x) \\ &=(2k+1) \cdot k^{-1} \cdot 1_{[j-k-1,j-k+1)}(x) \\ & \leq 3 \cdot 1_{[j-k-1,j-k+1)}(x)\end{align*}
Consequently, \begin{align*}\int_j^{j+1} g(x)dx & \leq 3\sum_{k=1}^{\infty} \int_{j-k-1}^{j-k+1}|f(x)|dx \\ & = 3\sum_{k=1}^{\infty} \int_{j-k-1}^{j-k}|f(x)|dx+3\sum_{k=1}^{\infty} \int_{j-k}^{j-k+1}|f(x)|dx \\ &= 3\int_{-\infty}^{j-1} |f(x)|dx + 3\int_{-\infty}^j |f(x)|dx \\ &\leq 6 \|f\|_{L^1}<\infty\end{align*} We conclude that $g(x)<\infty$ for a.e. $x \in [j,j+1]$. But $j\in \Bbb Z$ was arbitrary, so we find that $g(x)<\infty$ for a.e. $x \in \Bbb R$. With a little more effort, this argument may be generalized to show that $\sum_n n^{\alpha-1}|f(x-n^{\alpha})|<\infty$ for any $\alpha \in (0,1]$. This was the special case $\alpha=\frac{1}{2}$.



number theory - Bezout's Identity proof and the Extended Euclidean Algorithm

I am trying to learn the logic behind the Extended Euclidean Algorithm and I am having a really difficult time understanding all the online tutorials and videos out there. To make it clear, though, I understand the regular Euclidean Algorithm just fine. This is my reasoning for why it works:



If $g = \gcd(a,b)$, then $gi = a$ and $gj = b$ with $\gcd(i,j)=1$. If you subtract the equations, then $g(i-j) = a-b$, which means $g$ divides $a-b$ too.



This implies that $\gcd(a,b) = \gcd(a-kb,b)$ and so the maximum $k$ is $a - kb = 0$ or $a = kb$ or $\lfloor a/b \rfloor = k$, and the value of $a-\lfloor a/b \rfloor b$ is the same as $a$ mod $b$.



But I do not understand the Extended algorithm or the Identity.





  1. Why is there always an $x,y$ such that $ax + by = \gcd(a,b)$ for nonzero (positive?) $a,b$? I don't understand the intuition behind this claim.


  2. I also don't understand how the Extended Euclidean algorithm works at all. Is it correct to say that the Extended Euclidean algorithm is what solves Bezout's Identity? It returns nonzero (positive?) $x,y$ such that $ax + by = \gcd(a,b)$?


  3. Is the idea behind modular inverses included here too? If I want to solve for the inverse of $a$ modulo $m$ then this is the same as solving for $x$ in $ax = 1 \bmod m$ for known integers $a,m$, the same as $ax = 1 - my$, the same as $ax + my = 1$, so this is like using the Extended algorithm on $a,m$ and checking if the gcd is equal to $1$. If so, then the answer is $x$. Is my understanding correct?
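To make question 2 concrete, here is a minimal recursive sketch of the extended Euclidean algorithm (names are illustrative). It returns a triple $(g, x, y)$ with $ax + by = g = \gcd(a,b)$; note that $x$ and $y$ need not be positive:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = extended_gcd(b, a % b)
    # b*x1 + (a % b)*y1 == g, and a % b == a - (a // b) * b,
    # so rearranging gives coefficients for the pair (a, b):
    return g, y1, x1 - (a // b) * y1

g, x, y = extended_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == 2
```

And yes, your understanding of question 3 is the standard use: run this on $(a, m)$, and if $g = 1$ then $x \bmod m$ is the inverse of $a$.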


Saturday 22 August 2015

number theory - Solve the congruence $M^{49}equiv 2pmod{19}$.




Solve the congruence $M^{49}\equiv 2\pmod{19}$.





I don't know how to solve this one. I can get it down to $M^{13}\equiv 2$ using Fermat's little theorem, but after that I'm stumped.


Answer



Solve the Diophantine equation $18x+49y = 1$ and take the smallest positive solution for $y$. This turns out to be $y = 7$. Raise both sides of your congruence to the $7$th power



$$(M^{49})^7 \equiv 2^7 \pmod{19}.$$



Now you know that $49\cdot 7 \equiv 1 \pmod{18}$ so you have



$$M^1 \equiv 2^7 \equiv 14 \pmod{19}.$$
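Each step of this answer is easy to verify numerically with Python's built-in modular exponentiation (a sanity check only):

```python
assert (49 * 7) % 18 == 1      # 49*7 = 1 (mod 18), so M^(49*7) = M (mod 19)
assert pow(2, 7, 19) == 14     # 2^7 = 128 = 14 (mod 19)
assert pow(14, 49, 19) == 2    # the claimed M = 14 satisfies the original congruence
```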


number theory - $A$ must contain two relatively prime elements




Let $A$ be a set of positive integers such that no positive integer greater than $1$ divides all the elements of $A$. Prove or disprove that $A$ must contain two relatively prime elements.




This seems to be true, but I wasn't sure how to prove it. Maybe we can prove it by a proof by contradiction?


Answer



$\{6,10,15\}$ is a counterexample, or more generally $\{p_1p_2,p_1p_3,p_2p_3\}$ for any primes $p_1,p_2,p_3$.
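The counterexample is quickly verified with the standard library (`reduce(gcd, A)` computes the gcd of the whole set):

```python
from math import gcd
from functools import reduce

A = [6, 10, 15]
assert reduce(gcd, A) == 1                 # no integer > 1 divides all of A
assert all(gcd(A[i], A[j]) > 1             # yet no two elements are coprime
           for i in range(len(A)) for j in range(i + 1, len(A)))
```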


real analysis - Prove that $\sum\limits_{n=0}^{\infty} \frac{n!}{(n+1)!+(n+2)!} = \frac{3}{4}$



I was playing around with factorials on Wolfram|Alpha, when I got this amazing result :



$$\sum \limits_{n=0}^{\infty} \dfrac{n!}{(n+1)!+(n+2)!} = \dfrac{3}{4}.$$




Evaluating the first few partial sums makes it obvious that the sum converges to $\approx 0.7$. But I am not able to prove this result algebraically. I tried manipulating the terms and introducing Gamma Function, but without success.



Can anyone help me with this infinite sum ? Is there some well-known method of evaluating infinite sums similar to this ?



Any help will be gratefully acknowledged.



Thanks in advance ! :-)



EDIT : I realized that $(n!)$ can be cancelled out from the fraction and the limit of the remaining fraction as $n \to \infty$ can be calculated very easily to be equal to $0.75$. Very silly of me to ask such a question !!! Anyways you can check out the comment by @Did if this "Edit" section does not help.



Answer



Thanks to pjs36 and Did,



Notice that:



$$\begin{align}a_n&=\frac{n!}{(n+1)!+(n+2)!}\\&=\frac1{(n+1)+(n+1)(n+2)}\\&=\frac1{(n+1)(n+3)}\\&=\frac12\left(\frac1{n+1}-\frac1{n+3}\right)\end{align}$$



Thus, we get a telescoping series, leaving us with:



$$\sum_{n=0}^\infty\frac{n!}{(n+1)!+(n+2)!}=\frac12\left(\frac1{0+1}+\frac1{1+1}\right)=\frac34$$
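Both the algebraic simplification and the telescoping limit are easy to confirm numerically (a sanity check on the steps above):

```python
from math import factorial

# the simplification: (n+1)! + (n+2)! == n! * (n+1) * (n+3)
for n in range(10):
    assert factorial(n + 1) + factorial(n + 2) == factorial(n) * (n + 1) * (n + 3)

# partial sums of 1/((n+1)(n+3)) creep up to 3/4
s = sum(1.0 / ((n + 1) * (n + 3)) for n in range(100000))
assert abs(s - 0.75) < 1e-4
```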



Calculate this limit without L'Hospital's rule or series expansion

Calculating the limit as $x$ approaches $0$ using L'Hospital's rule or series expansion is straightforward, but how to evaluate the limit without either of those techniques.



How to calculate the following limit as $x$ approaches $0$:



$\dfrac{\ln(x+1)+1-e^x}{x^2}$
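Not a substitute for the requested technique-free proof, but a quick numeric probe (an editor's sanity check) indicates the value a proof should target:

```python
from math import log, exp

# the ratio settles near -1 as x -> 0
for x in [1e-2, 1e-3, 1e-4]:
    val = (log(1 + x) + 1 - exp(x)) / x**2
    assert abs(val + 1) < 0.1
```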

elementary number theory - What is a modular inverse?

I realize that this question has been asked before; please just bear with me. I looked across the Internet on here, Khan Academy, and many other sites in order to understand what an inverse mod is and how to calculate it.



The closest explanation I got is



[image: screenshot of the explanation, omitted]



However, I can't for the life of me figure out how we get the equation in the second bullet point. I tried testing it out in Python programming language by representing ^ as both xor and "to the power of" symbol ** (example for those who have not used it: 2 ** 4 = 16) and replacing mod with % (modulo).




To anyone who knows, please explain exactly what a modular inverse is and how to solve it.



As someone said, I should probably tell why I am saying "the" modular inverse. I came across this while doing Problem 451 of Project Euler and am trying to find greatest modular inverse (if I have function l(n) then the modular inverse I am calculating must be less than n-1)
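For concreteness: a modular inverse of $a$ modulo $m$ is any $x$ with $ax \equiv 1 \pmod m$, and it exists exactly when $\gcd(a,m)=1$. As an aside (not from the screenshot above), in Python 3.8+ the built-in `pow` accepts a negative exponent and computes exactly this, raising `ValueError` when no inverse exists:

```python
a, m = 3, 11
inv = pow(a, -1, m)          # modular inverse of a modulo m
assert (a * inv) % m == 1    # the defining property
print(inv)                   # 4, since 3 * 4 = 12 = 1 (mod 11)
```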

elementary number theory - Quadratic congruences in which modulus is divisible by constant term



There are two similar congruences:
$$
x^2-6x\equiv16\pmod{512}\\
y^2-y\equiv16\pmod{512}
$$
It is easy to see that the $\gcd$ of all three parts in both of them is $16$, $x$ and $x-6$ are even and one of $y$ and $y-1$ is odd. After also noticing $x\not\equiv x-6\pmod{4}$ we have

$$
x \equiv 0 \pmod{8}\quad\text{or}\quad x-6 \equiv 0 \pmod{8}
$$
$$
y \equiv 0 \pmod{16}\quad\text{or}\quad y-1 \equiv 0 \pmod{16}
$$

Making substitutions $x=8n,x=8n+6,y=16m,y=16m+1$ I obtained congruences with modulus $32$, but couldn't make it any further. I also discovered that the first congruence can be rewritten as $(x+2)(x-8) \equiv 0\pmod{512}$ while $-2$ and $8$ appear to be its solutions. However, it's still not clear to me whether there are more of them.
What am I missing? How can such congruences be solved?


Answer



The first one is easy. You need $(x-8)(x+2)$ divisible by $512$. Both $x-8$ and $x+2$ are even, but only one of them can have higher powers of $2$ as factors. Thus, $x\equiv 8\pmod {256}$ or $x\equiv -2\pmod{256}$.



The second requires more care.



First assuming $y\equiv 0\pmod{16}$ you have $y=16y_0$ and $$16y_0^2\equiv y_0+1\pmod{32}$$



Since $16y_0^2$ is even, $y_0+1$ is even, so $y_0$ is odd. When $y_0$ is odd, $16y_0^2\equiv 16\pmod{32}$, so $y_0\equiv 15\pmod{32}$ and $y\equiv 16\cdot 15=240\pmod{512}$.




Second, assume $y=16y_0+1$. Then $$16y_0^2 \equiv 1-y_0\pmod{32}$$ So $y_0$ is odd, $1-y_0\equiv 16\pmod {32}$, or $y_0\equiv 17\pmod{32}$. So $y\equiv 16\cdot 17+1=273\pmod{512}$



So the two solutions are $y\equiv 240,273\pmod{512}$.



Note that the sum of these two values is $1\pmod{512}$, as befits the two roots of any quadratic equation of the form $x^2-x+C=0$.



Indeed, we have $(y-240)(y-273)\equiv y^2-y-16\pmod{512}$, so these are the only solutions, since only one of $y-240$ and $y-273$ can be even.
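Since the modulus is small, both conclusions can be checked by brute force over one full period (a verification, not part of the argument):

```python
# all residues solving each congruence modulo 512
sols_x = [x for x in range(512) if (x * x - 6 * x - 16) % 512 == 0]
sols_y = [y for y in range(512) if (y * y - y - 16) % 512 == 0]

assert sols_y == [240, 273]
print(sols_x)  # the residues congruent to 8 or -2 modulo 256
```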


Friday 21 August 2015

discrete mathematics - Prove that $n^3 + 5n$ is divisible by $6$ by using induction




Prove that for integers $n > 0$, $n^3 + 5n$ is divisible by $6$.



Here is what I have done:



Base Step: $n=1$, $1^3+5(1)=6$



Inductive Step:



$p(k)=k^3 + 5k =6m$, $m$ is some integer




$p(k+1)=(k+1)^3+5(k+1)=6m$ $m$ is some integer



Since both are equal to $6m$ I set them equal to each other.



$k^3 + 5k=(k+1)^3+5(k+1)$



$k^3+5k=(k+1)[(k+1)^2+5]$



Proven that $p(k)=p(k+1)$




I do not think this is the correct way of proving this problem but I couldn't think of anything else.


Answer



Hint: $(k+1)^3+5(k+1) = (k^3 + 5k) + 3k^2 + 3k + 6$



Since $(k^3 + 5k)$ is divisible by $6$, all you have to prove is that $3k^2 + 3k + 6$ is divisible by $6$ too. Since adding $6$ does not change divisibility, you just have to prove $3k(k+1)$ is divisible by $6$. Think about even and odd numbers.
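A brute-force check of both the claim and the hint's identity (a sanity check, not a replacement for the induction):

```python
for k in range(1, 1000):
    assert (k**3 + 5 * k) % 6 == 0                                   # the statement
    assert (k + 1)**3 + 5 * (k + 1) == (k**3 + 5 * k) + 3 * k**2 + 3 * k + 6  # the hint
```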


logarithms - Prove that $\log X < X$ for all $X > 0$

I'm working through Data Structures and Algorithm Analysis in C++, 2nd Ed, and problem 1.7 asks us to prove that $\log X < X$ for all $X > 0$.



However, unless I'm missing something, this can't actually be proven. The spirit of the problem only holds true if you define several extra qualifiers, because it's relatively easy to provide counter examples.




First, it says that $\log_{a} X < X$ for all $X > 0$, in essence.



But if $a = -1$, then $(-1)^{2} = 1$. Therefore $\log_{-1} 1 = 2$. Thus, we must assume
$a$ is positive.



if $a$ is $< 1$, then $a^2 < 1$. Therefore we must assume that $a \geq 1$.



Now, the book says that unless stated otherwise, it's generally speaking about base 2 for logarithms, which are vital in computer science.




However, even then - if $a$ is two and $X$ is $\frac{1}{16}$, then $\log_{a} X$ is $-4$. (Similarly for base 10, try taking the log of $\frac{1}{10}$ on your calculator: It's $-1$.) Thus we must assume that $X \geq 1$.



...Unless I'm horribly missing something here. The problem seems quite different if we have to prove it for $X \geq 1$.



But even then, I need some help solving the problem. I've tried manipulating the equation as many ways as I could think of but I'm not cracking it.
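One point worth noting: for base $2$ (which the book assumes) the inequality actually survives on $0 < X < 1$ as well, since there $\log_2 X$ is negative while $X$ is positive; e.g. $\log_2 \frac{1}{16} = -4 < \frac{1}{16}$. A quick numeric spot-check of that:

```python
from math import log2

# log2(x) < x holds on both sides of 1 when the base exceeds 1
xs = [2.0**-k for k in range(1, 20)] + [1.0, 2.0, 3.0, 10.0, 1000.0]
assert all(log2(x) < x for x in xs)
```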

The probability of a single die vs many dice



Say I have a fair six-sided die. I throw it 6 times. What's the probability the results will be 1,2,3,4,5,6 in that order?




And imagine I have 6 dice and threw them at the same time. What's the probability that I'll get those numbers 1,2,3,4,5,6 when I arrange them?



I'm struggling to understand the difference here.


Answer



Case 1: The chance you throw a $1$ on the first die is $1/6$; the chance you throw a $2$ on the second die is also $1/6$; and likewise for all 6 of them. Thus the probability you get all these independent events is $P = \left( \frac{1}{6} \right)^6$.



Case 2: You roll the first die and it comes up with some number. The probability that the next ("neighbor") or "second" die is some different number is $5/6$. The probability that the third die is a number different from the prior two is $4/6$.



Can you continue?
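Both cases can also be confirmed by exhaustive enumeration of all $6^6$ equally likely outcomes (a brute-force check):

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=6))   # all 6^6 = 46656 ordered rolls
total = len(outcomes)

# Case 1: the throws read exactly 1,2,3,4,5,6 in that order
in_order = sum(1 for o in outcomes if o == (1, 2, 3, 4, 5, 6))

# Case 2: the six dice show 1..6 in some order (all values distinct)
any_order = sum(1 for o in outcomes if sorted(o) == [1, 2, 3, 4, 5, 6])

assert in_order == 1      # probability 1/6^6
assert any_order == 720   # probability 6!/6^6 = (6/6)(5/6)(4/6)(3/6)(2/6)(1/6)
```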



How to evaluate the series $1+\frac34+\frac{3\cdot5}{4\cdot8}+\frac{3\cdot5\cdot7}{4\cdot8\cdot12}+\cdots$



How can I evaluate the following series:

$$1+\frac{3}{4}+\frac{3\cdot 5}{4\cdot 8}+\frac{3\cdot 5\cdot 7}{4\cdot 8\cdot 12}+\frac{3\cdot 5\cdot 7\cdot 9}{4\cdot 8\cdot 12\cdot 16}+\cdots$$
In one book I saw this sum is equal to $\sqrt{8}$, but I don't know how to evaluate it.


Answer



$$a_n = \dfrac{1 \cdot 3 \cdot 5 \cdot 7 \cdots (2n+1)}{4 \cdot 8 \cdot 12 \cdots (4n)} = \dfrac{1 \cdot 3 \cdot 5 \cdot 7 \cdots (2n+1)}{4^n \cdot n!} = \dfrac{(2n+1)!}{2^n \cdot 4^n \cdot n! \cdot n!} = \dfrac1{8^n} \dfrac{(2n+1)!}{n! \cdot n!}$$
Now consider $f(x) = \dfrac1{(1-4x)^{3/2}}$. The Taylor series of $f(x)$ is
$$f(x) = \sum_{n=0}^{\infty} \dfrac{(2n+1)!}{n! \cdot n!} x^n$$ and is valid for $\vert x \vert < \dfrac14$. Hence, we get that
$$\sum_{n=0}^{\infty}\dfrac1{8^n} \dfrac{(2n+1)!}{n! \cdot n!} = f(1/8) = \dfrac1{(1-4\cdot 1/8)^{3/2}} = 2^{3/2} = \sqrt{8} = 2\sqrt{2}$$



EDIT




Below is a way on how to pick the appropriate function. First note that
$$\dfrac{1 \cdot 3 \cdot 5 \cdot 7 \cdots (2n+1)}{2^n} = (-1)^n \times \left(-\dfrac32\right) \times \left(-\dfrac32-1\right) \times \left(-\dfrac32-2\right) \times \cdots \times \left(-\dfrac32-n+1\right)$$
$$a_n = \dfrac1{2^n}\dfrac{(-1)^n \times \left(-\dfrac32\right) \times \left(-\dfrac32-1\right) \times \left(-\dfrac32-2\right) \times \cdots \times \left(-\dfrac32-n+1\right)}{n!}$$
Hence, $a_n = \left(\dfrac{-1}{2}\right)^n \dbinom{-3/2}{n}$. Hence, the idea is to consider $$g(x) = \sum_{n=0}^{\infty} \dbinom{-3/2}{n} x^n = (1+x)^{-3/2}$$
The motivation to choose such a $g(x)$ comes from the fact that $$(1+x)^{\alpha} = \sum_{n=0}^{\infty} \dbinom{\alpha}{n} x^n$$ where $\dbinom{\alpha}{n}$ is to be interpreted as $\dfrac{\alpha(\alpha-1)(\alpha-2)\cdots(\alpha-n+1)}{n!}$ for all real $\alpha$.



Now set $x=-1/2$ to get that $$g(-1/2) = (1/2)^{-3/2} = 2^{3/2} = \sqrt{8} = 2\sqrt{2}$$



Once we have $g(x)$, we could either have it as such or we can play around with some nice scaling factor, to choose the function $f(x) = (1-4x)^{-3/2}$ to get the Taylor series $$\sum_{n=0}^{\infty} \dfrac{(2n+1)!}{n! \cdot n!}x^n$$
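A quick numeric check of the closed form, rebuilding the series from the consecutive-term ratio $a_n/a_{n-1} = \frac{2n+1}{4n}$ (a sanity check on the answer above):

```python
from math import sqrt

term, total = 1.0, 1.0          # a_0 = 1
for n in range(1, 200):
    term *= (2 * n + 1) / (4 * n)   # a_n = a_{n-1} * (2n+1)/(4n)
    total += term

assert abs(total - sqrt(8)) < 1e-9  # the series sums to sqrt(8) = 2*sqrt(2)
```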


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...