Thursday 28 February 2019

inequality - For any natural number $n$, prove that $\prod^{n}_{r=1}\bigg(r+\frac{1}{n}\bigg)\leq 2(n!)$




For any natural number $n$, prove that $$\displaystyle \prod^{n}_{r=1}\bigg(r+\frac{1}{n}\bigg)\leq 2(n!)$$




Trial Solution: Using $\displaystyle \frac{1}{n}\leq 1,2,3,\cdots, n$, so that $r+\frac{1}{n}\leq 2r$ for each $r$:



$\displaystyle \prod^{n}_{r=1}\bigg(r+\frac{1}{n}\bigg)\leq 2\cdot 4\cdot 6\cdots 2n$




$$\prod^{n}_{r=1}\bigg(r+\frac{1}{n}\bigg)\leq 2^n\cdot n!$$



Could someone help me prove my original inequality? Thanks.


Answer



We can verify directly for $n=1,2$. Suppose $n\geq3$. Then
$$
\sum_{r=1}^n\log\left(1+\frac1{rn}\right)
\leq\sum_{r=1}^n\frac1{rn}
\leq\frac{1+1/2+(n-2)/3}{n}
\leq\frac{11}{18}<\log 2.
$$
Now exponentiate and multiply by $n!$.
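
As a quick numerical sanity check of both the product inequality and the answer's logarithmic bound (a minimal Python sketch, not part of the original argument):

    import math

    # Check prod_{r=1}^n (r + 1/n) <= 2 * n!, and the bound
    # sum_{r=1}^n log(1 + 1/(r*n)) < log 2 used for n >= 3.
    for n in range(1, 30):
        prod = 1.0
        for r in range(1, n + 1):
            prod *= r + 1.0 / n
        assert prod <= 2 * math.factorial(n)
        if n >= 3:
            log_sum = sum(math.log1p(1.0 / (r * n)) for r in range(1, n + 1))
            assert log_sum < math.log(2)
    print("checks pass for n = 1..29")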


calculus - Determine the following limit as $x$ approaches $0$: $\frac{\ln(1+x)}{x}$



$$\lim_{x\to 0} \frac{\ln(1+x)}x$$



The approach I want to take is to use the definition of the derivative (without l'Hôpital's rule), but I am getting confused.



$$\lim_{h \to 0} \frac{f(x+h) - f(x)}h$$



$$\lim_{h \to 0} \frac{\frac{\ln (1+x+h)}{x+h} - \frac{\ln(1+x)}x}h$$




$$\lim_{h \to 0} \frac{x\ln(1+x+h) - (x+h)\ln (1+x)}{hx(x+h)}$$



At this point I get confused because I know the answer is $1$, but I am not getting this answer through simplification of my formula.


Answer



You are talking about L'Hôpital's rule, so I assume you already know how to differentiate the logarithm. Now note that



$$\frac{\log(x+1)}x = \frac{\log(x+1)-\log(1)}{(x+1)-1}$$



Thus




$$\lim_{x\to0}\frac{\log(x+1)}x = \lim_{x\to0}\frac{\log(x+1)-\log(1)}{(x+1)-1}=\left(\log(x)\right)^\prime_{x=1}=\left.\frac{1}x\right|_{x=1}=1$$



(This is not by using L'Hôpital's rule but only by using the definition of derivative and knowing the derivative of $\log(x)$)


calculus - Which version of Rolle's theorem is correct?



According to my textbook:



Rolle's theorem states that if a function $f$ is continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$ such that $f(a) = f(b)$, then $f′(x) = 0$ for some $x$ with $a ≤ x ≤ b$.



According to Wikipedia:




If a real-valued function $f$ is continuous on a proper closed interval $[a, b]$, differentiable on the open interval $(a, b)$, and $f(a) = f(b)$, then there exists at least one $c$ in the open interval (a, b) such that
$f'(c)=0$.



So one definition says that $c$ should belong in closed interval $[a,b]$ but the other says that $c$ should be in open interval $(a,b)$.



Which definition is correct? Why?


Answer



These are theorems, not definitions, and both of them are correct. Notice that if Wikipedia is correct, then your textbook is automatically correct as well: if there exists $c\in (a,b)$ such that $f'(c)=0$, then there also exists $x\in [a,b]$ such that $f'(x)=0$, since you can take $x=c$ (since $(a,b)$ is a subset of $[a,b]$). On the other hand, you can't (in any obvious way) deduce Wikipedia's statement from your textbook's, so Wikipedia's statement is stronger: it tells you more information. So you could say Wikipedia's statement is more useful or more powerful, and is "correct" in that you might as well use it instead of your textbook's version.




As for which one is "correct" in the sense of being the "standard" statement of Rolle's theorem, I would say the Wikipedia version is probably more standard. But mathematical theorems quite often do not have universally accepted "standard" versions and instead have several different versions that are closely related but may be slightly different and all tend to be referred to with the same name. It's not like there's some committee of mathematicians who gets together and declares "this is the statement we will call Rolle's theorem"; everyone just refers to theorems independently and so there ends up being some minor variation.


calculus - Integral of a gradient over a plane area

Let $A$ be a plane area bounded by a curve $\partial A$. Then, is



$$ \iint_A \nabla f\, \textrm{d}x \textrm{d}y = \oint_{\partial A} f\ \hat{\mathbf{n}}\ dl $$



where $f=f(x,y)$ and $\mathbf{\hat n}$ is the outward unit normal?
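
For reference, the identity does hold for sufficiently smooth $f$ and $\partial A$; it is the gradient form of the divergence theorem, applied componentwise. A numerical sketch on the unit disk, with the test function $f(x,y)=x^2y$ chosen purely for illustration:

    import numpy as np

    # Compare iint_A grad f dA with oint_{dA} f n dl on the unit disk,
    # using the illustrative test function f(x, y) = x^2 * y.
    f = lambda x, y: x**2 * y
    fx = lambda x, y: 2 * x * y   # partial f / partial x
    fy = lambda x, y: x**2       # partial f / partial y

    # Area integral in polar coordinates (dA = r dr dtheta), crude Riemann sum.
    r, t = np.meshgrid(np.linspace(0, 1, 500), np.linspace(0, 2 * np.pi, 500))
    dA = (1.0 / 499) * (2 * np.pi / 499) * r
    x, y = r * np.cos(t), r * np.sin(t)
    lhs = np.array([np.sum(fx(x, y) * dA), np.sum(fy(x, y) * dA)])

    # Boundary integral: on the unit circle the outward normal is (cos t, sin t).
    tb = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
    fb = f(np.cos(tb), np.sin(tb))
    rhs = 2 * np.pi * np.array([np.mean(fb * np.cos(tb)), np.mean(fb * np.sin(tb))])

    print(lhs, rhs)  # both approximately (0, pi/4) = (0, 0.7854)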

Wednesday 27 February 2019

summation - Having trouble understanding why $\sum_{i=1}^ni^2= \frac{n(n+1)(2n+1)}{6}$

So I understand that $\sum\limits_{i=1}^{n}i^2 = \frac{n(n+1)(2n+1)}{6}$, but I'm not sure how to arrive at that conclusion, and I'm having trouble understanding it.
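
A quick mechanical check of the identity for small $n$ (a sanity test only; the usual derivations go by induction or by telescoping $(k+1)^3-k^3$):

    # Verify sum_{i=1}^n i^2 == n(n+1)(2n+1)/6 for small n.
    for n in range(1, 101):
        assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    print("formula verified for n = 1..100")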

complex analysis - Calculating this integral using Residue Theorem



$$I=\int_{C}\frac{z}{(\sinh z) ^2} \ dz$$
where $C$ is the circle $|z-i\pi| = 1$ taken once anti-clockwise.




I found that the only singularity of $f(z):= \frac{z}{(\sinh z)^2}$ is $z= i\pi$ which is a pole of order $2$.



So then by Residue Theorem,
$$I = 2\pi i\mathrm{Res}(f,i\pi).$$
However, I'm unsure if there's a nice way to calculate this residue:
$$\mathrm{Res}(f,i\pi) = \lim_{z\rightarrow i\pi} \frac{d}{dz} \frac{z}{(\sinh z)^2}(z-i\pi)^2$$



Is there a trick for this integral? Or does the residue calculation simplify nicely? I feel like I'd have to apply L'Hopital's a couple of times.


Answer



Since\begin{align}\sinh(z)&=\sinh(z-\pi i+\pi i)\\&=-\sinh(z-\pi i)\\&=-(z-\pi i)-\frac{(z-\pi i)^3}{3!}-\frac{(z-\pi i)^5}{5!}+\cdots,\end{align}we have$$\sinh^2(z)=(z-\pi i)^2+\frac{(z-\pi i)^4}3+\frac{2(z-\pi i)^6}{45}+\cdots$$Therefore,$$\frac z{\sinh^2(z)}=\frac1{(z-\pi i)^2}(a_0+a_1(z-\pi i)+a_2(z-\pi i)^2+\cdots),$$which means that\begin{multline}z=\pi i+(z-\pi i)=\\=(a_0+a_1(z-\pi i)+a_2(z-\pi i)^2+\cdots)\left(1+\frac{(z-\pi i)^2}3+\frac{2(z-\pi i)^4}{45}+\cdots\right).\end{multline}Therefore, $a_1=1$, which means that the residue is $1$.
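
A numerical cross-check of the residue, integrating around the contour directly (a sketch; the trapezoidal sum over the circle $|z-i\pi|=1$ should return approximately $2\pi i$):

    import numpy as np

    # Integrate z / sinh(z)^2 once anticlockwise around |z - i*pi| = 1.
    theta = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
    z = 1j * np.pi + np.exp(1j * theta)   # points on the contour
    dz = 1j * np.exp(1j * theta)          # dz/dtheta
    I = 2 * np.pi * np.mean(z / np.sinh(z) ** 2 * dz)
    print(I, 2j * np.pi)                  # both approximately 6.2832j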


Tuesday 26 February 2019

sequences and series - Why does this double infinite sum $\sum_{n=1}^\infty \sum_{k=n}^\infty\frac{1}{k!}$ converge to $e$?




I can't seem to come to grips with the result below:
$$S=\sum_{n=1}^\infty \sum_{k=n}^\infty\frac{1}{k!}=e$$
which is given by Mathematica (code below) and (numerically) verified by WolframAlpha.



In[65]:= Sum[1/k!, {n, 1, Infinity}, {k, n, Infinity}]

Out[65]= E



I've attempted to work it out in the following way:
$$\begin{align*}S&=\sum_{n=1}^\infty\sum_{k=n}^\infty \frac{1}{k!}\\[1ex]
&=\sum_{n=1}^\infty\left(\frac{1}{n!}+\frac{1}{(n+1)!}+\frac{1}{(n+2)!}+\cdots\right)\\[1ex]
&=\sum_{n=1}^\infty\frac{1}{n!}+\sum_{n=1}^\infty\frac{1}{(n+1)!}+\sum_{n=1}^\infty\frac{1}{(n+2)!}+\cdots\\[1ex]
&=\sum_{n=1}^\infty\frac{1}{n!}+\sum_{n=2}^\infty\frac{1}{n!}+\sum_{n=3}^\infty\frac{1}{n!}+\cdots\\[1ex]
&=(e-1)+\left(e-1-\frac{1}{2}\right)+\left(e-1-\frac{1}{2}-\frac{1}{6}\right)+\cdots\end{align*}$$
which doesn't appear to me to follow a telescoping pattern, but I might be wrong about that. It's not obvious to me if this actually does telescope.



Edit: Changing the order of summation does wonders, as shown in the accepted answer, but I'm currently wondering if there is any possibility that the last line admits any neat telescoping argument?


Answer




Reverse the order of summation and this becomes



\begin{align*}
\sum_{k = 1}^{\infty} \sum_{n = 1}^k \frac{1}{k!} &= \sum_{k = 1}^{\infty} \frac 1 {k!} \sum_{n = 1}^k 1\\
&= \sum_{k = 1}^{\infty} \frac{1}{k!} \cdot k \\
&= \sum_{k = 1}^{\infty} \frac{1}{(k - 1)!} = e
\end{align*}







To understand the change of order, note that all sums here are very convergent (and positive), so I'm not going to worry about technical issues. The original sum is about fixing $n$ and summing over $k \ge n$. If you imagine writing out all the pairs of natural numbers in a grid with $k$ running horizontally and $n$ vertically, this is fixing a column and adding up every pair below the main diagonal. That is, the lower left half of the grid.



On the other hand, we can also describe this as summing over every row, but stopping when we get to the main diagonal.
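
A quick numeric confirmation of the double sum and the reversed form (a sketch; the truncation at $k=30$ is an arbitrary cutoff, and the tail is negligible):

    import math

    K = 30  # truncation point
    double = sum(1 / math.factorial(k)
                 for n in range(1, K + 1) for k in range(n, K + 1))
    reversed_form = sum(k / math.factorial(k) for k in range(1, K + 1))
    print(double, reversed_form, math.e)  # all approximately 2.718281828...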


combinatorics - Nested summations and their relation to binomial coefficients



As the festive season is coming to a close and we've seen out all twelve days of Christmas, I found myself thinking about the famous carol of the same name. Namely, how many presents in total did my true love give to me? This can be given by the nested summation
$\sum \limits_{i=1}^{12} (\sum \limits_{j=1}^i j)$ ,
where the inside sum is the number of presents received each day (i.e a partridge in a pear tree on the first day, two turtle doves and a partridge in a pear tree on the second day, etc.) and the outside summation sums over the days. This can be found to be 364, but what about in general for $n$ days of Christmas?



Well we can generalise the summation by just inserting $n$ into the summation and solving it.
$\sum \limits_{i=1}^{n} (\sum \limits_{j=1}^i j)$
Using the well known solution of the sum $~ \sum \limits_{i=1}^{n}i = \frac{n(n+1)}{2} ~$ we can obtain a non-nested sum,
$ = \sum \limits_{i=1}^{n} \frac{i(i+1)}{2} = \sum \limits_{i=1}^{n} (\frac{1}{2}i^2 + \frac{1}{2}i )$ ,
which we can split into separate summations
$ = \frac{1}{2} \sum \limits_{i=1}^{n} i^2 + \frac{1}{2}\sum \limits_{i=1}^{n}i = \frac{n(n+1)(2n+1)}{12}+ \frac{n(n+1)}{4} = \frac{n(n+1)(2n+1) + 3n(n+1)}{12} = \frac{2n^3 + 6n^2 + 4n}{12} = \frac{n(n+1)(n+2)}{6}$
which we notice can be written as the binomial coefficient,
$ = {n+2\choose3}$.
This doesn't really come as any surprise since this kind of problem seems like the prime candidate for one which could be solved using some fancy combinatorics to yield a binomial coefficient. But if we notice now that we can write the standard arithmetic series sum as a binomial coefficient, eg.,
$~ \sum \limits_{i=1}^{n}i = \frac{n(n+1)}{2} = {n+1\choose2} ~$
we might notice a similarity between them.



Similarly it can be shown that for a twice nested sum we have,
$\sum \limits_{i=1}^{n} (\sum \limits_{j=1}^i (\sum \limits_{k=1}^j k)) = {n+3\choose4} $.




In general it seems for an $(m-1)$-th nested sum (i.e, one which has $m$ summation symbols) we have,
$\sum \limits_{i_1=1}^{n} \sum \limits_{i_2=1}^{i_1} \sum \limits_{i_3=1}^{i_2} \dots \sum \limits_{i_m=1}^{i_{m-1}}i_m = {n+m\choose1+m} $
and this is likely provable with induction.



My question to you is: how can this relationship between binomial coefficients and summations be thought of? That is to say, why would taking the number of possible pairs of $n+1$ elements happen to give you the sum from $1$ to $n$? And how does this somehow seem to extend to nested summations: the number of ways you can take 3 elements from $n+2$ gives you the first nested summation, and so on? I'm sure some combinatorics whizz will be able to rearrange the way we count these sums somehow to yield the answer. Anyway, at the very least it's interesting to think about - happy holidays!


Answer



Suppose you want the number of non-negative integer solutions of the equation
$$x_1+\ldots+x_n = k+1$$
The answer is $\binom{k+n}{k+1}$ (a standard application of Stars and Bars).



Now, look at the case $k=1$ for simplicity. We construct a solution as follows: initialize each component of the vector $(x_1,\ldots, x_n)$ to zero; then, to make their sum $2$, we add $1$ to one of the components at each of two steps, one by one.





  • In the first step, we have $n$ choices where to add the $1$. Suppose we added it at the $i$-th position, i.e. $x_j=0$ for $j\neq i$, and $x_i=1$ after this step.


  • At the next step, we add the second $1$ to the $j$-th position, imposing the condition $j\leq i$ to avoid counting the same solution twice.




We can do this in $\sum_{i=1}^n\sum_{j=1}^i1$ ways. The result obtained from the formula must be the same, so
$$\binom{n+1}{2} = \sum_{i=1}^n\sum_{j=1}^i1$$



For higher values of $k$, we just need to remember that the first $1$ is added at the rightmost position, and at each later step the tail of the vector (to the right of the latest addition) remains empty. Then the formulas are apparent.
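
Both the two-level identity and one more level of nesting can be checked mechanically (a sketch):

    from math import comb

    # binom(n+1, 2) counts the pairs enumerated by sum_{i<=n} sum_{j<=i} 1,
    # and one more level of nesting gives binom(n+2, 3).
    for n in range(1, 30):
        double = sum(1 for i in range(1, n + 1) for j in range(1, i + 1))
        triple = sum(1 for i in range(1, n + 1)
                       for j in range(1, i + 1)
                       for k in range(1, j + 1))
        assert double == comb(n + 1, 2)
        assert triple == comb(n + 2, 3)
    print("identities verified for n = 1..29")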


discrete mathematics - Solving systems of basic congruences



I'm having some difficulty with a problem and I was hoping I could find some help here.



We've been covering congruences in my Discrete Math class, and, while I understand what they mean, I can't seem to solve systems of congruences greater than 2 equations in size.



What I mean is, I can solve problems that look like this:




$x \equiv 4 \mod 5$



$x \equiv 7 \mod 8$



I understand that I can solve this by doing something along the lines of:



$x = 5k + 4$



$5k + 4 \equiv 7 \mod 8$




$5k \equiv 3 \mod 8$



$k = 7$



$x = 5*7 + 4$



therefore $x = 39$



But I can't seem to figure out how to expand this to three (or more) equations, like so:




$x \equiv 4 \mod 5$



$x \equiv 7 \mod 8$



$x \equiv 3 \mod 6$



Disclaimer: I made these numbers up and I'm assuming there's always an answer. If this system doesn't work, any example problem will do. I'm just confused about the process.



Thank you!


Answer




To construct an $x$ equal to $x_1 \mod m_1$, $x_2 \mod m_2$ and $x_3 \mod m_3$,



we would like some expression $x = A + B + C$ such that reduction mod $m_1$ kills $B$ and $C$, reduction mod $m_2$ kills $A$ and $C$, etc., and gives the correct values ($x_1$, $x_2$, ...).



For the things to get killed in the right way we should have $x = m_2 m_3 A' + m_1 m_3 B' + m_1 m_2 C'$; then e.g. reduction mod $m_1$ gives $m_2 m_3 A'$, so we should set $A' = x_1 \cdot \text{inverse of (}m_2 m_3\text{) mod }m_1$, and similarly for the other two.
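
A sketch of this construction in code. Note the answer implicitly assumes pairwise coprime moduli (otherwise the inverses need not exist); since the question's moduli $8$ and $6$ share a factor, the three-congruence demo below swaps $6$ for the hypothetical modulus $9$:

    from math import prod

    def crt(residues, moduli):
        # x = sum_i x_i * M_i * (inverse of M_i mod m_i), where M_i = M / m_i;
        # requires pairwise coprime moduli so the inverses exist.
        M = prod(moduli)
        x = 0
        for xi, m in zip(residues, moduli):
            Mi = M // m                    # killed by reduction mod every other modulus
            x += xi * Mi * pow(Mi, -1, m)  # pow(a, -1, m) is the inverse (Python 3.8+)
        return x % M

    print(crt([4, 7], [5, 8]))        # 39, matching the worked two-congruence example
    print(crt([4, 7, 3], [5, 8, 9]))  # 39 again, since 39 = 3 (mod 9) as well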


real analysis - Differentiability of $\sum_{k=0}^{\infty}c_{k}e^{ikt}$ given $\lim_{k\to\infty}k^{m}c_{k}=0$



Consider a Fourier Series



$$S(t) = \sum_{k=0}^{\infty}c_{k}e^{ikt}$$



where $c_{k}$ are complex coefficients such that the sum $\sum |c_{k}|$ is finite. I am also given that $\lim_{k\to\infty}k^{m}c_{k}=0$ for some fixed $m>0$.




Question: What can we say about the differentiability of $S$?



What I tried: If I can prove that $\sum k|c_{k}|<\infty$, then the sequence of derivatives of the partial sums
$$f_{n}'(t)=\sum_{k=0}^{n}ikc_{k}e^{ikt}$$
must converge uniformly to a continuous limit, then $S$ would be differentiable. However, I am not sure how to apply the fact that $\lim_{k\to\infty}k^{m}c_{k}=0$ for some fixed $m>0$ - certainly $\sum k|c_{k}|<\infty$ implies the terms should go to 0 but how do I know the converse is true?


Answer



Before I type anything else, I would like to point out that the assumption (8) below is insufficient to establish that



$\displaystyle \sum_0^\infty k \vert c_k \vert < \infty, \tag 0$




even when the condition (9) binds.  A counterexample is provided by the series



$R(t) = \displaystyle \sum_1^\infty \dfrac{1}{k^2} e^{ikt}; \tag{0.1}$



it is well-known that



$\displaystyle \sum_1^\infty \dfrac{1}{k^2} = \dfrac{\pi^2}{6}, \tag{0.2}$



see Showing $\sum_{k=1}^\infty 1/k^2 = \pi^2/6$; however, with




$c_k = \dfrac{1}{k^2}, \tag{0.3}$



we find



$\displaystyle \sum_1^\infty kc_k = \sum_1^\infty k \dfrac{1}{k^2} = \sum_1^\infty \dfrac{1}{k} = \infty; \tag{0.4}$



nevertheless, if we adopt the stronger hypothesis that



$\displaystyle \sum_1^\infty k^m \vert c_k \vert < \infty, \tag{0.5}$




we will discover its sufficiency, as is shown below.



I assume



$0 < m \in \Bbb Z,$



that is, $m$ is a positive integer.



Consider the sequence of partial sums




$S_n(t) = \displaystyle \sum_{k = 0}^n c_ke^{ikt} \tag 1$



of the series



$S(t) = \displaystyle \sum_{k = 0}^\infty c_ke^{ikt}; \tag 2$



it is easy to see that $S_n(t)$ is a $C^\infty$ function for every $n \in \Bbb N$; indeed, the $S_n(t)$ are analytic, each being the sum of a finite number of analytic functions $c_ke^{ikt}$; furthermore for $n > p$ we have



$S_n(t) - S_p(t) = \displaystyle \sum_{p + 1}^n c_k e^{ikt}, \tag 3$




whence



$\vert S_n(t) - S_p(t) \vert = \vert \displaystyle \sum_{p + 1}^n c_k e^{ikt} \vert \le \sum_{p + 1}^n \vert c_k \vert, \tag 4$



since



$\vert e^{ikt} \vert = 1; \tag 5$



now taking $p$ and $n$ sufficiently large we may affirm that




$\displaystyle \sum_{p + 1}^n \vert c_k \vert < \epsilon \tag 6$



for any real



$\epsilon > 0; \tag 7$



this assertion of course follows easily from the hypothesis



$\displaystyle \sum_0^\infty \vert c_k \vert < \infty. \tag 8$




In light of these remarks, we conclude that the sequence of functions $S_n(t)$ converges uniformly in $t$; thus the limit function $S(t)$ is indeed continuous.



Note that we have not yet called upon the hypothesis that



$\displaystyle \lim_{k \to \infty} k^m c_k = 0.  \tag 9$



Now observe that the $S_n(t)$ (1), being finite sums, are each in fact differentiable functions of $t$; indeed,



$S_n'(t) = \displaystyle \sum_{k = 0}^n ikc_ke^{ikt}; \tag{10}$




also,



$\vert S_n'(t) - S_p'(t) \vert = \vert \displaystyle \sum_{p + 1}^n ikc_k e^{ikt} \vert \le \sum_{p + 1}^n k\vert c_k \vert < \epsilon \tag{11}$



for $n$, $p$ sufficiently large in light of our added assumption (0.5) with $m = 1$; thus the sequence $S_n'(t)$ is Cauchy, and hence it is also uniformly convergent, since $\epsilon$ is independent of $t$. These facts in concert are sufficient for the existence of a function $S'(t)$ such that



$ S'(t) = \displaystyle \lim_{n \to \infty} S_n'(t) = ( \lim_{n \to \infty} S_n(t))' = (S(t))', \tag{12}$



in accord with the standard theorem on convergence of sequences of derivatives.




The process described in the above may be continued for larger values of $m$, the result being similar to
that attained so far, provided of course that (0.5) binds for the chosen value of $m$. Indeed, we may write



$S''(t) = \displaystyle \sum_1^\infty -k^2c_k e^{ikt}, \tag{13}$



and so forth. Higher derivatives of $S(t)$ may be expressed in an analogous manner, assuming (0.5) holds for appropriate values of $m$.


Proving that even numbers equal the sum of two odd numbers.

Define $E$ to be the set of even integers; $E = \{x \in \mathbb{Z} : x = 2k, \text{ where } k \in \mathbb{Z}\}$.



Define $F$ to be the set of integers that can be expressed as the sum of two odd numbers.



Prove $E = F$.



My attempt: The only way I can figure out the solution is by providing numbers and examples. It's easy to see that the sum of two odd numbers will always be an even integer. I just don't know how to write the proof for it.
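
A sketch of both inclusions, in case it helps: the two identities
$$2k = (2k-1) + 1 \qquad\text{and}\qquad (2a+1)+(2b+1) = 2(a+b+1)$$
show, respectively, that every even integer is a sum of two odd numbers (so $E\subseteq F$) and that every sum of two odd numbers is even (so $F\subseteq E$).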

calculus - Find the value of: $\lim_{n\to\infty} \left( \left(\sum_{k=n+1}^{2n}2\sqrt[2k]{2k}-\sqrt[k]{k}\right)-n\right)$



Find $$\lim_{n\to\infty} \left( \left(\sum_{k=n+1}^{2n}2\sqrt[2k]{2k}-\sqrt[k]{k}\right)-n\right).$$




I have tried rewriting the sum in a clever way, applying the Mean Value Theorem or Stolz-Cesaro Lemma somehow but haven't found anything fruitful.



Can someone please share a hint/trick to evaluate this?



Thank you.


Answer



Hint: use the power series expansion
$$
\sqrt[y]y = \exp\bigg(\frac{\log y}y\bigg) = 1 + \frac{\log y}y + O\bigg( \frac{(\log y)^2}{y^2} \bigg)
$$

(or upper and lower bounds for $\exp(\cdot)$ that express the same idea).



Reality check: the answer is a tidy number about 4% less than $\frac12$.


number theory - Proving $a\mid b^2,\,b^2\mid a^3,\,a^3\mid b^4,\ldots\implies a=b$ - why is my approach incorrect



https://math.stackexchange.com/questions/704048/theory-number-problems

After I saw that post I wanted to solve the first one, which is:
$a\mid b^2,b^2\mid a^3,a^3\mid b^4,b^4\mid a^5\cdots$ Prove that $a=b$



Now I started by proving that $a$ and $b$ have the same prime divisors (proving that is trivial). After doing so, I checked whether those factors have to have the same powers. I tried with
$$a=p^2,b=p^3$$
and noticed it holds for exactly 3 terms, that is, up to $a^3\mid b^4$. Then, trying
$$a=p^{n-1},b=p^n\\p^{n-1}\mid p^{2n}\\p^{2n}\mid p^{3n-3}\\p^{3n-3}\mid p^{4n}\\p^{4n}\mid p^{5n-5}\\\cdots\\p^{n(n-1)}\mid p^{n(n-1)}\\p^{n(n-1)}\mid p^{n(n+1)}\\p^{n(n+1)}\not\mid p^{(n-1)(n+1)}$$
Now yeah, I guess that would be a proof, but wouldn't replacing $n$ by $n+1$ infinitely many times make every term divisible?


Answer



Hint $\ b(b/a),\,b(b/a)^3,b(b/a)^5\ldots\,$ are all integers, i.e. $\,b\,$ is a common denominator for $\rm\color{#c00}{unbounded}$ powers of $\,b/a,\,$ hence $\,b/a\,$ is an integer (prove this!), therefore $\,a\mid b.\,$ Similarly $\,b\mid a.\,$




Further hint: $ $ let $\, r = \frac{b}a = \frac{c}d,\ (c,d)=1.\,$ Then $\,br^k = n\in\Bbb Z\,\Rightarrow\, bc^k = n d^k\,$ so $\,d^k\mid b c^k\Rightarrow\, d^k\mid b\,$ by Euclid's Lemma and $\,(c,d)=1.\,$ $\,d^k\mid b\,$ for $\,\rm\color{#c00}{unbounded}$ $\,k\,$ $\,\Rightarrow\,d=\pm1,\,$ so $\,r = \frac{c}d = \pm c\in\Bbb Z.$



Remark $\ $ The above property, that unbounded powers of proper fractions cannot have a common denominator (such as $b$ above), is a special property of $\,\Bbb Z\,$ that needn't be true in general domains. When it holds true, a domain is called completely integrally closed.


Monday 25 February 2019

If it is an arithmetic progression, then find the value of $s$



The value of $s$ is given by a series that was posted as an image (not reproduced here); per the answer below, its general term is $\frac{1}{(5r-3)(5r+2)}$.
I started this question by treating it as an A.P., since the common difference is the same, and found that I need the number of terms to proceed further; but my value for the number of terms comes out as a fraction, which is not possible. I tried many times but ended up with the same value, so I can't proceed, as I don't know the value of $n$ to put into the sum of the arithmetic progression.


Answer



Hint:




The general term is $$\frac{1}{(5r-3)(5r+2)}$$



We use partial fraction decomposition, so:



$$\frac{1}{(5r-3)(5r+2)}=\frac{A}{5r-3}+\frac{B}{5r+2}$$



Can you find $A$ and $B$? (Hint 2: $A+B=0$)



Because $A+B=0$, can you see that between the decomposition of $$\frac{1}{(5r-3)(5r+2)}$$ and $$\frac{1}{(5(r+1)-3)(5(r+1)+2)}=\frac{1}{(5r+2)(5r+7)}$$




some terms are cancelled? Figure out which ones will not be cancelled and sum them.



This method is known as telescoping
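
A sympy sketch of the decomposition and the telescoping partial sum (assuming the series runs over $r=1,2,\dots$):

    import sympy as sp

    r, n = sp.symbols('r n', positive=True, integer=True)
    term = 1 / ((5 * r - 3) * (5 * r + 2))

    print(sp.apart(term, r))            # 1/(5*(5*r - 3)) - 1/(5*(5*r + 2))
    partial = sp.summation(term, (r, 1, n))
    print(sp.simplify(partial))         # simplifies to n/(10*n + 4)
    print(sp.limit(partial, n, sp.oo))  # 1/10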


Sunday 24 February 2019

number theory - Calculate 67−1 (mod 119) and use this to calculate 43/67 (mod 119). Euclidean GCD algorithm



Hello, I am wondering if anyone can help me. I am trying to figure out how to



Calculate 67−1 (mod 119) and use this to calculate 43/67 (mod 119).



I have an exam coming up and this will be one style of question. Can anyone please walk me through how it is done? I know I need to use the extended Euclidean GCD algorithm, but don't know how.




119 = 67 + 52
67 = 52 + 15
52 = (3 × 15) + 7
15 = (2 × 7) + 1

52 = 119 − 67
15 = 67 − 52 = 67 − 119 + 67 = (2 × 67) − 119
7 = 52 − (3 × 15) = 119 − 67 − (6 × 67) + (3 × 119) = (4 × 119) − (7 × 67)
1 = 15 − (2 × 7) = (2 × 67) − 119 − (8 × 119) + (14 × 67) = (16 × 67) − (9 × 119)



I can't figure out how to get past this point.


Answer



I think you meant $67^{-1} \mod 119$. If we denote the inverse of $67 \mod 119$ by $a$, then by definition we wish to find $a$ such that $67\cdot a\equiv 1 \mod 119$, or, find an integer solution $(a,x)$ to $67a+119x=1$. We will first find the GCD of $67$ and $119$ by Euclid's algorithm.
\begin{align}
119&=67\cdot 1+52\\
67&=52\cdot 1+15\\
52&=15\cdot 3+7\\
15&=7\cdot 2+1\\
7&=1\cdot 7+0
\end{align}
Now we can retrace our steps to find the solution to $67a+119x=1$. We write $1=15-7\cdot 2$ (from the fourth line). By substituting $7=52-15\cdot 3$ (from the third line), we obtain $1=15-(52-15\cdot 3)\cdot 2=15\cdot 7-52\cdot 2$. Substituting again, this time $15=67-52\cdot 1$ (from the second line), we get $1=(67-52\cdot 1)\cdot 7-52\cdot 2=67\cdot 7-52\cdot 9$. Substituting the last time, $52=119-67\cdot 1$ (from the first line), we finally get $1=67\cdot 7-(119-67\cdot 1)\cdot 9=67\cdot 16-119\cdot 9$. We find the solution $(a,x)$ is $(16,-9)$, and so $a=16$, making $67^{-1}\equiv 16\mod 119$. To now get $43/67\mod 119$, we just need to multiply by $43$; we see $43/67\equiv 43\cdot 16\equiv 688\equiv 93\mod 119$.



Hope this helped!
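
The back-substitution can also be automated. A sketch of the extended Euclidean algorithm confirming $67^{-1}\equiv 16 \pmod{119}$ and $43/67\equiv 93 \pmod{119}$:

    def extended_gcd(a, b):
        # Returns (g, s, t) with g = gcd(a, b) = s*a + t*b.
        if b == 0:
            return a, 1, 0
        g, s, t = extended_gcd(b, a % b)
        return g, t, s - (a // b) * t

    g, s, t = extended_gcd(67, 119)
    print(g, s, t)         # 1 16 -9, i.e. 1 = 16*67 - 9*119
    print(s % 119)         # 16, the inverse of 67 mod 119
    print(43 * s % 119)    # 93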


Saturday 23 February 2019

algebra precalculus - How do I show that $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$



According to wolfram alpha this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$




But how do you show this? I know of no rules that work with addition inside square roots.



I noticed I could do this:



$\sqrt{24} = 2\sqrt{3}\sqrt{2}$



But I still don't see how I should show this since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition


Answer



Hint: Since they are both positive numbers, they are equal if, and only if, their squares are equal.
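
Following the hint, a single squaring settles it (a worked check):
$$\left(\sqrt{3}+\sqrt{2}\right)^2 = 3 + 2\sqrt{3}\sqrt{2} + 2 = 5 + 2\sqrt{6} = 5+\sqrt{24},$$
and since $\sqrt{5+\sqrt{24}}$ and $\sqrt{3}+\sqrt{2}$ are both positive, the equality follows.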



elementary number theory - Prove that $n^2 + n +1$ is not divisible by $5$ for any $n$


Prove that $n^2 + n +1$ is not divisible by $5$ for any $n$.




I believe this might be tried using the division algorithm, or modular arithmetic. I don't see exactly how to start this... Please help.
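
A sketch along the modular-arithmetic line the question suggests: run $n$ through the five residues modulo $5$,
$$n \equiv 0,1,2,3,4 \pmod 5 \;\Longrightarrow\; n^2+n+1 \equiv 1,\,3,\,2,\,3,\,1 \pmod 5,$$
and since $0$ never appears among these values, $5 \nmid n^2+n+1$ for every integer $n$.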

summation - Find the sum of the series: $\cos^3 \alpha +\cos^3 {3\alpha} + \cos^3 {5\alpha}+\cdots+\cos^3 {(2n-1)\alpha}$.




Question: Find the sum of the series:
$$\cos^3 \alpha +\cos^3 {3\alpha} + \cos^3 {5\alpha}+....+\cos^3 {(2n-1)\alpha}$$




The book from which this question was taken says that the answer is $\frac{3\sin{n\alpha}\cos{n\alpha}}{4\sin\alpha}+\frac{\sin{3n\alpha}\cos{3n\alpha}}{4\sin{3\alpha}}$.



My attempt to solve this question:
Let $S$ be the given trigonometric series. Recall that

$$\cos {3\theta} = 4\cos^3\theta-3\cos\theta \implies 4\cos^3 \theta=\cos{3\theta}+3\cos\theta$$
Applying this formula to each $\cos^3$ term:
$$4S = 3\cos\alpha + \cos3\alpha +3\cos3\alpha + \cos9\alpha+...+3\cos{(2n-1)\alpha}+\cos{(6n-3)\alpha}$$
$$4S= 3(\cos \alpha + \cos 3\alpha+\cos5\alpha+...)+(\cos3\alpha + \cos9\alpha+..)$$
Applying the summation of cosine formula,
$$4S= 3\frac{\sin{n\alpha}}{\sin\alpha}\cdot\cos{(\alpha+(n-1)\alpha)}+?$$



So my problem is that I don't know how to apply the formula to the second series (denoted by '?'): should I use a fixed number of terms $n$, or should I treat it as a general and separate series?



To clarify my doubt again: what I am asking is whether the $(2n-1)$ would affect the formula for the second series (denoted by '?').




My working here looks most probably correct, but if there is any mistake please correct it.


Answer



If $\sin3\alpha=0$, the sum can be evaluated directly.



But for $\sin3\alpha\neq0$ by the telescopic sum we obtain:
$$\sum_{k=1}^n\cos^3(2k-1)\alpha=\frac{1}{4}\sum_{k=1}^n(\cos3(2k-1)\alpha+3\cos(2k-1)\alpha)=$$
$$=\frac{\sum\limits_{k=1}^n2\sin3\alpha\cos(6k-3)\alpha}{8\sin3\alpha}+\frac{3\sum\limits_{k=1}^n2\sin\alpha\cos(2k-1)\alpha}{8\sin\alpha}=$$
$$=\frac{\sum\limits_{k=1}^n(\sin6k\alpha-\sin(6k-6)\alpha)}{8\sin3\alpha}+\frac{3\sum\limits_{k=1}^n(\sin2k\alpha-\sin(2k-2)\alpha)}{8\sin\alpha}=$$
$$=\frac{\sin6n\alpha}{8\sin3\alpha}+\frac{3\sin2n\alpha}{8\sin\alpha}.$$



Euler's formula for complex $z$



Given Euler's formula $e^{ix} = \cos x + i \sin x$ for $x \in \mathbb{R}$, how does one extend this definition for complex $z$? I.e for $z \in \mathbb{C}, $ show that $e^{iz} = \cos(z) + i\sin(z)$?




I can re-express $z = x+iy$ for $x,y \in \mathbb{R}$ and then substitute into the above to get $e^{ix}e^{-y} = (\cos x + i\sin x)e^{-y}$, but I am unsure of how to progress. I am hoping this can be done without appealing to the complex $\sin$ and $\cos$.



Thanks.


Answer



Let $z=x+iy$, and you want to show that $e^{iz}=\cos(z)+i\sin(z)$ i.e. Euler's formula applies to complex $z$.



I will prove the formula starting from the right hand side.



I don't think you can get away from complex trigonometry, where the cosine/sine of a purely imaginary argument is defined via hyperbolic functions as follows:




$\cos(ix)=\cosh(x)$



$\sin(ix)=i\sinh(x)$



Using the above definitions in conjunction with the sum of angles identities, we have



$\cos(x+iy)=\cos(x)\cos(iy)-\sin(x)\sin(iy)=\cos(x)\cosh(y)-i\sin(x)\sinh(y)$



$\sin(x+iy)=\sin(x)\cos(iy)+\cos(x)\sin(iy)=\sin(x)\cosh(y)+i\cos(x)\sinh(y)$




So we have



$\cos(x+iy)+i\sin(x+iy)\\=\cos(x)\cosh(y)-\cos(x)\sinh(y)+i(\sin(x)\cosh(y)-\sin(x)\sinh(y))\\=\cos(x)e^{-y}+i\sin(x)e^{-y}=(\cos(x)+i\sin(x))e^{-y}=e^{ix}e^{+i^2y}=e^{i(x+iy)}=e^{iz}$



where we have used the fact that $\cosh(y)-\sinh(y)=e^{-y}$


Friday 22 February 2019

functional analysis - If $f \in L^{\infty}$ and $\exists r < \infty$ so that $\|f\|_r < \infty$, show $\lim_{p \rightarrow \infty} \|f\|_p = \|f\|_{\infty}$




Question:
This is the last part of a 5 part question I am working on.
Let $(X,\mu)$ be a possibly infinite measure space. Assume $\exists r < \infty$ with $\|f\|_r < \infty$ and that $\|f \|_{\infty} < \infty$. Show that $\lim_{p \rightarrow \infty} \|f\|_p = \|f\|_{\infty}$.



This is from Real and Complex by Rudin, chapter 3 exercise 14.



Progress:

I have shown that $\|f\|_{\infty} \le \lim_{p \rightarrow \infty} \|f\|_p$ as follows,



Fix $\epsilon > 0$. Let $E = \{x : |f(x)| > \|f\|_{\infty} - \epsilon \}$. Then observe
$$ \|f\|_p \ge \left( \int_{E} |f|^p d\mu \right)^{1/p} > \left( \int_{E}(\| f \|_{\infty} - \epsilon)^{p} d\mu \right)^{1/p} = \left( \|f\|_{\infty} - \epsilon \right) \mu(E)^{1/p}, $$
thus, $\lim_{p \rightarrow \infty} \|f\|_p \ge \|f\|_{\infty} - \epsilon$ since $\mu(E) < \infty$.



I attempted something similar for the other direction, but could not say the measure of a set was finite, as (I think) I need for this argument to work. Here is what I tried:



Since $\|f\|_r < \infty$, $\exists R$ so that $|x| > R \implies |f(x)| < \frac{1}{2}$. Let $A = \{ x : |x| \le R \}$ and $B = \{x : |x| > R \}$. Then,
$$\|f\|_{p} \le \left( \int_{A} |f|^p d \mu + \int_{B} \frac{1}{2^p} d\mu\right)^{1/p} = \left( \int_{A} |f|^p d\mu + \frac{1}{2^p} \mu( B ) \right)^{1/p}.$$




If $\mu(B) < \infty$ this can easily show the desired result. Moreover, if I could show that there is a family of sets $\{B_p\}$ that act similarly so that $\mu(B_p)$ grows slower than $e^p$, then I can also complete the proof.



Thoughts?


Answer



For the first part you have to use $\liminf$, as you don't still know that the limit exists.



For the second part, you are thinking as if $X$ was $\mathbb R^n$, which it might not be. One way of attacking the problem along your line of thought would be to assume $\|f\|_\infty=1$ (i.e., work with $f/\|f\|_\infty$). Then, for $p>r$,
$$
\left(\int_X|f|^pd\mu\right)^{1/p}\leq\left(\int_X|f|^rd\mu\right)^{1/p}=\|f\|_r^{r/p}
$$
Then
$$
\limsup_{p\to\infty}\|f\|_p\leq1.
$$
Now you can scale back with $\|f\|_\infty$ to get
$$
\limsup_{p\to\infty}\|f\|_p\leq\|f\|_\infty.
$$




Another way of doing this second part is to use Hölder's inequality:
$$
\|f\|_p^p=\int_X|f|^pd\mu=\int_X|f|^r|f|^{p-r}d\mu\leq \|f\|_\infty^{p-r}\,\|f\|_r^r.
$$
So
$$
\limsup_{p\to\infty}\|f\|_p\leq\limsup_{p\to\infty}\|f\|_\infty^{(p-r)/p}\,\|f\|_r^{r/p}=\|f\|_\infty.
$$


sequences and series - Intuition for sum of triangular numbers and significance for ${3\choose k}$

Specific question: according to my calculation based on Timbuc's answer to this question,



$$\sum_{k=0}^n\frac{k(k+1)}{2}=\frac{n(n+1)(2n+4)}{12}=\frac{n(n+1)(n+2)}{6}$$



[Edit: RHS simplified based on suggestion from herbsteinberg.]



If this is right, is there an intuitive or geometric proof of this?




Background and motivation: I'm trying to relate the concepts of integration and summation using reasoning which I find simple and intuitive.



As part of that, I'm trying to understand the process of summing sequences in this rotated section of Pascal's triangle:



$(0)\;\;01\;\;01\;\;01\;\;01\;\;01\;\;01\;\;...\frac{k^0}{0!} \\ (1)\;\;01\;\;02\;\;03\;\;04\;\;05\;\;06\;\;...\frac{k+0}{1!} \\ (2)\;\;01\;\;03\;\;06\;\;10\;\;15\;\;21\;\;...\frac{k(k+1)}{2!} \\ (3)\;\;01\;\;04\;\;10\;\;20\;\;35\;\;56\;\;...\frac{k(k+1)(k+2)}{3!}$



I see that summing each line seems to increase the degree of the expression by $1$ and I can imagine that rearrangement, simplification and/or a limit process could later reduce these terms to $\frac{k^2}{2}$, $\frac{k^3}{3}$ etc., but for now I'm interested in how/why each line has the exact expression it does.



Line $(1)$ is just counting.




Line $(2)$ I can picture and understand as in Fig. 1 below.



Fig. 1



Triangular numbers



Line $(3)$ [Edited to add the following, which may start to answer my question] I can picture and understand as a stepped version of the right-angled tetrahedra in Fig. 2 below.



Fig. 2




Tetragonal numbers

linear algebra - General expression for determinant of a block-diagonal matrix




Consider having a matrix whose structure is the following:



$$
A =
\begin{pmatrix}
a_{1,1} & a_{1,2} & a_{1,3} & 0 & 0 & 0 & 0 & 0 & 0\\
a_{2,1} & a_{2,2} & a_{2,3} & 0 & 0 & 0 & 0 & 0 & 0\\
a_{3,1} & a_{3,2} & a_{3,3} & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & a_{4,4} & a_{4,5} & a_{4,6} & 0 & 0 & 0\\
0 & 0 & 0 & a_{5,4} & a_{5,5} & a_{5,6} & 0 & 0 & 0\\
0 & 0 & 0 & a_{6,4} & a_{6,5} & a_{6,6} & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & a_{7,7} & a_{7,8} & a_{7,9}\\
0 & 0 & 0 & 0 & 0 & 0 & a_{8,7} & a_{8,8} & a_{8,9}\\
0 & 0 & 0 & 0 & 0 & 0 & a_{9,7} & a_{9,8} & a_{9,9}\\
\end{pmatrix}
$$



Question.



What about its determinant $|A|$?




Another question



I was wondering that maybe matrix $A$ can be expressed as a product of particular matrices to have such a structure... maybe using these matrices:



$$
A_1 =
\begin{pmatrix}
a_{1,1} & a_{1,2} & a_{1,3}\\
a_{2,1} & a_{2,2} & a_{2,3}\\
a_{3,1} & a_{3,2} & a_{3,3}\\
\end{pmatrix}
$$



$$
A_2 =
\begin{pmatrix}
a_{4,4} & a_{4,5} & a_{4,6}\\
a_{5,4} & a_{5,5} & a_{5,6}\\
a_{6,4} & a_{6,5} & a_{6,6}\\
\end{pmatrix}
$$



$$
A_3 =
\begin{pmatrix}
a_{7,7} & a_{7,8} & a_{7,9}\\
a_{8,7} & a_{8,8} & a_{8,9}\\
a_{9,7} & a_{9,8} & a_{9,9}\\
\end{pmatrix}
$$



I can arrange $A$ as a combination of those: $A = f(A_1,A_2,A_3)$.



Kronecker product



One possibility can be the Kronecker product:



$$
A=
\begin{pmatrix}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0\\
\end{pmatrix} \otimes A_1 +
\begin{pmatrix}
0 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 0\\
\end{pmatrix} \otimes A_2 +
\begin{pmatrix}
0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 1\\
\end{pmatrix} \otimes A_3
$$



But what about the determinant? There are sums in this case, which is not good...


Answer



As stated in a comment, the result is here. It really makes block-diagonal matrices wonderful, hence finding canonical forms important.
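
The linked result is that the determinant of a block-diagonal matrix is the product of the determinants of its blocks, $|A| = |A_1|\,|A_2|\,|A_3|$. A quick random-matrix sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    A1, A2, A3 = (rng.standard_normal((3, 3)) for _ in range(3))

    # Assemble the 9x9 block-diagonal matrix from the three 3x3 blocks.
    A = np.zeros((9, 9))
    for i, B in enumerate((A1, A2, A3)):
        A[3 * i:3 * i + 3, 3 * i:3 * i + 3] = B

    print(np.linalg.det(A))
    print(np.linalg.det(A1) * np.linalg.det(A2) * np.linalg.det(A3))  # same value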



Thursday 21 February 2019

calculus - Usage of mean value theorem ; bounded derivative and open interval




Let $f : (0,1) \to \mathbb{R}$ be a function such that $ |f'(x)| \leq 5 $ on the open interval $(0,1)$. Prove that $\lim_{x \to 1^-} f(x)$ exists.



It involves the derivative and the actual function itself, so I think I have to somehow use the mean value theorem.. Also, $f$ is continuous on $(0,1)$ and differentiable on $(0,1)$ ( because the derivative exists there ).



But then, the function is defined on an open interval, so the requirements for the mean value theorem aren't satisfied. I'm guessing we have to consider intervals of the form $(a,b)$ with $a > 0$ and $b < 1$.



Lastly, I don't see the significance of the $5$... Is it only there to establish that the derivative is bounded, or does the number itself have some significance (would the same thing hold if we had $3$, for example)?



Please give me a hint, not the solution. Something like "consider the mean value theorem on intervals of the form ... " would be very helpful.


Answer




Pick a sequence $(x_{n})\subseteq(0,1)$ such that $x_{n}\rightarrow 1$. Then
\begin{align*}
|f(x_{n})-f(x_{m})|=|f'(\eta_{n,m})||x_{n}-x_{m}|\leq 5|x_{n}-x_{m}|,
\end{align*}
where $\eta_{n,m}$ is chosen by Mean Value Theorem. So $(f(x_{n}))$ is convergent. For other sequence $(y_{n})$ such that $y_{n}\rightarrow 1$, consider the sequence $(z_{n})$ defined by $z_{2n}=x_{n}$, $z_{2n+1}=y_{n}$ to claim that the limits of $(f(x_{n}))$ and $(f(y_{n}))$ are the same. So $\lim_{x\rightarrow 1^{-}}f(x)$ exists.


calculus - limit $\lim_{x\to 0}\frac{\tan x-x}{x^2\tan x}$ without l'Hospital



Is it possible to find $$\lim_{x\to 0}\frac{\tan x-x}{x^2\tan x}$$ without l'Hospital's rule?



I have $\lim_{x\to 0}\frac{\tan x}{x}=1$ proved without H. but it doesn't help me with this complicated limit (however I'm sure I have to use it somehow).



I know the answer is $\frac{1}{3}$, so I tried to estimate: $0<\frac{\tan x-x}{x^2\tan x}\le\frac{1}{3}\cdot\frac{\tan x}{x}+g(x)$ with various $g$ and prove that $g(x)\to 0$, but with no result.


Answer




$$L=\lim_{x\to 0}\frac{x-\tan x}{x^2 \tan x}=\lim_{x\to 0}\frac{\cos x-\frac{\sin x}{x}}{x^2}\cdot\frac{x}{\sin x}=\lim_{x\to 0}\frac{1-2\sin^2\frac{x}{2}-\cos\frac{x}{2}\frac{\sin(x/2)}{(x/2)}}{x^2} $$
gives $L=A+B$ where:
$$ A = -\frac{1}{2}+\lim_{x\to 0}\frac{1-\cos\frac{x}{2}}{x^2} = -\frac{1}{2}+\lim_{x\to 0}\frac{2\sin^2\frac{x}{4}}{x^2} = -\frac{1}{2}+\frac{1}{8}=-\frac{3}{8}$$
and
$$ B = \lim_{x\to 0}\frac{1}{x^2}\left(1-\frac{\sin(x/2)}{x/2}\right)=\frac{1}{4}\lim_{x\to 0}\frac{1}{x^2}\left(1-\frac{\sin x}{x}\right)$$
(assuming such a limit exists) fulfills:
$$ 3B = 4B-B = \lim_{x\to 0}\left(\frac{\sin(x/2)}{x/2}-\frac{\sin x}{x}\right)=\lim_{x\to 0}\frac{1}{x^2}\left(\frac{2\sin(x/2)}{\sin (x)}-1\right)$$
so that:
$$ B = \frac{1}{3}\lim_{x\to 0}\frac{1}{x^2}\left(\frac{1}{\cos\frac{x}{2}}-1\right)=\frac{1}{3}\lim_{x\to 0}\frac{1-\cos\frac{x}{2}}{x^2}=\frac{1}{3}\lim_{x\to 0}\frac{2\sin^2\frac{x}{4}}{x^2}=\frac{1}{24}$$
and $L=A+B=-\frac{3}{8}+\frac{1}{24}=\color{red}{\large-\frac{1}{3}}$.
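
Note that the displayed $L$ is the limit of $\frac{x-\tan x}{x^2\tan x}$, whose numerator is the negative of the one in the question; the limit originally asked for is therefore $-L=\frac{1}{3}$, matching the expected answer.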



Wednesday 20 February 2019

algebra precalculus - How to explain what is wrong in this?




My friend showed this to me and I instantly knew that it was wrong. However, I cannot explain why it is wrong to my friend.




Question. Prove $\displaystyle \frac{100-100}{100-100} = 2.$



Answer.
$$\begin{align*}
\frac{100-100}{100-100} &= \frac{(10)^2 - (10)^2}{10(10-10)}\\
&= \frac{(10+10)(10-10)}{10(10-10)}\\
&= \frac{20}{10} = 2.

\end{align*}$$




My argument is that in the third step, where it goes like this:
$$\frac{(10+10)(10-10)}{10(10-10)}$$
you cannot just cancel out the $(10-10)$; it doesn't seem right. However, I am at a loss to explain exactly why you cannot do that, and my friend has the argumentative power (is that even a word? I mean he is good with arguments, even if they are not facts), and he has me confused to the point that I am starting to think it can be done.



Can anyone please explain why this is wrong?



Thanks.



Answer



There is no such thing as a "cancel" operation. This is, rather, shorthand for factoring out $1$, and simplifying. In other words, suppose you have
$$\frac{x^3+x}{x^2+1}$$
You could factor out
$$\frac{x^2+1}{x^2+1}$$
to yield
$$\frac{x(x^2+1)}{x^2+1}$$
However, since, everywhere
$$\frac{x^2+1}{x^2+1}=1$$
You simply replace the former with the latter, to yield

$$x.$$



At a fundamental level, where your proof goes wrong is that it skips over this subtlety and uses the "shortcut" of canceling without respect for the conditions under which that shortcut is valid.



In the end, what it comes down to is that
$$\frac{(10-10)}{(10-10)}=\frac{0}{0}\ne1,$$
which is the condition you need in order to make that cancellation.


soft question - How do you make less mistakes in math?




How do you make less mistakes in math? Do you try to be more alert, do you take your time more, or what? Usually I don't make that many mistakes, but sometimes (like now) I do math as I imagine I would do it if I was ever drunk. I just did a couple of problems and I'm confusing addition with multiplication, $\lt$ with $\le$ , and other stuff like this. I make most of my mistakes when I think about how to approach/solve a more open-ended or abstract problem, not when I actually write it down on paper.



Answer



I think that checking your work frequently is one of the best ways to deal with this. You can also check your work with varying degrees of formality: you can carefully go back through every computation, but you can also just look back and forth between your current line of work and the previous one to check that things seem to line up. This more informal form of checking is a good one to cultivate, because it doesn't take too much time, and can catch a lot of mistakes.



One example I use a lot: when multiplying out two expressions in parentheses, each with a lot of terms, I make sure that in the expanded expression I have the right number of terms; e.g. multiplying $($two terms added together$)($three terms added together$)$ will give $($six terms added together$)$, and it's pretty quick to check that you have six terms after multiplying out (quicker than computing the expansion all over again).



Related to this, another thing to try to practice is to read what you actually wrote (or what is actually written in the question), rather than what you think, or expect, to see there. (This is the basic problem with all proof-reading!) Concentration is important here, obviously, but simply being aware of the issue helps.



I think if you find that you really have trouble with concentration/alertness at a particular moment, taking a break and coming back to you work can save time in the long-term. Again, trying to cultivate a sense of what your current alertness and concentration level is can help. In general, just trying to be self-aware as you're working is helpful, and is something that you can get better at with practice.


Tuesday 19 February 2019

abstract algebra - Rules for divisibility based on digit sum in general basis.



When I was a kid I learned about the rule which says that if a number has a decimal digit sum divisible by $3$, then it is divisible by $3$. For example $123$ has digit sum $6$ and is $3\times 41$ whereas $321$ also has digit sum $6$ and is $3\times 107$.



Now to my question...






Does there exist any general result where, working in an integer base $b$ with




$$n = \sum_k d_k b^k$$



the sum of digits $\sum_k d_k$ can tell us whether $n$ is divisible by some integer (except for the trivial $b^m$, of course)?


Answer



Observe that $b^k-1$ is divisible by $b-1$, so $b-1$ divides $n$ if and only if $b-1$ divides $\sum_k d_k$.



Not that useful for $b=2$. In the case $b=10$ you get standard divisibility by $9$.



Similarly, $b+1$ divides $n$ if and only if $b+1$ divides

$$
\sum_k (-1)^kd_k
$$
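
Both criteria are easy to test in code for an arbitrary base (a sketch; the base $b=7$ below is an arbitrary illustrative choice, making the tests divisibility by $6$ and $8$):

    def digits(n, b):
        # Base-b digits of n, least significant first.
        ds = []
        while n:
            n, d = divmod(n, b)
            ds.append(d)
        return ds

    b = 7
    for n in range(1, 5000):
        ds = digits(n, b)
        assert (n % (b - 1) == 0) == (sum(ds) % (b - 1) == 0)
        alt = sum((-1) ** k * d for k, d in enumerate(ds))
        assert (n % (b + 1) == 0) == (alt % (b + 1) == 0)
    print("digit-sum tests agree for n < 5000 in base", b)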


sequences and series - How to calculate the sum of $(n-1)^2+(n-2)^2+...+1$?

How to calculate the sum of the following series? $$(n-1)^2+(n-2)^2+...+1$$Thank you in advance
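
A pointer, assuming the sum runs down to $1^2$: the left side is just $\sum_{k=1}^{n-1}k^2$, so substituting $n-1$ into the standard formula $\sum_{k=1}^{m}k^2=\frac{m(m+1)(2m+1)}{6}$ gives
$$(n-1)^2+(n-2)^2+\cdots+1 = \frac{(n-1)n(2n-1)}{6}.$$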

probability - Random distribution of colored balls into boxes.



This is an abstraction of a real problem I have:




I have a large number of balls that are either Red or Blue ($n = 9*10^7$) and a bunch of containers ($c = 3*10^7$). I've calculated that the probability of a Red ball occurring is $p_r=0.12$.



Next, I place three balls in each container randomly. I need to find out the number of containers on average that contain at least one Red ball. But I need to find out a formula that will work for any number of containers and any number of balls per container.



Even better, I have a set of Boxes as well ($b = 10^7$), and I place three containers in each Box. If a container has at least one red Ball, then it is a red Container, and if a Box has at least one red Container, then it is a red Box. How many red Boxes should I have?


Answer



For $i=1$ to $c$, define random variable $X_i$ by $X_i=1$ if container $i$ contains at least one red ball, and by $X_i=0$ otherwise. Let $Y=\sum X_i$. We want $E(\sum X_i)$, which by the linearity of expectation is $\sum E(X_i)$.



We can calculate $\Pr(X_i=1)$, that is, $E(X_i)$, either approximately or exactly. Since $c$ is large, approximately is good enough. Place the balls one at a time in container $i$. The probability of no red is approximately $(0.88)^3$, and therefore an excellent approximation to $E(X_i)$ is $1-(0.88)^3$. For the expectation of $Y$, multiply by $c$.




The expected number of red boxes is calculated using a similar strategy.
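
Plugging the question's numbers into this strategy (a sketch; the box count reuses the same independence approximation one level up, so each box behaves like $3\times3=9$ independently placed balls):

    p_red = 0.12            # probability that a single ball is red
    balls_per_container = 3
    containers = 3 * 10**7
    containers_per_box = 3
    boxes = 10**7

    p_container_red = 1 - (1 - p_red) ** balls_per_container
    print(containers * p_container_red)   # expected red containers, ~9.56e6

    p_box_red = 1 - (1 - p_red) ** (balls_per_container * containers_per_box)
    print(boxes * p_box_red)              # expected red boxes, ~6.84e6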


Monday 18 February 2019

exponential function - Clever equalities proven similarly to Euler's Identity

From How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?, a very elegant proof of Euler's Identity was given. Namely, observing $f(z)=g(z)h(z)=e^{-iz}(\cos(z)+i\sin(z))$, we can see that $f'(z)=g(z)h'(z)+h(z)g'(z)=0$, showing $f(z)$ is constant, and $f(0)=1$, showing $f(z)=1$ and thus $e^{iz}=\cos(z)+i\sin(z)$.



My question is whether there are other interesting results that can be garnered by finding $g(z)$ and $h(z)$ such that $g(z)h'(z)=-h(z)g'(z)$.




Are there other interesting equalities that are known?

Sunday 17 February 2019

calculus - If $\lim_{n \to +\infty} f(\alpha n)$ exists, does this imply $\lim_{x \to +\infty} f(x)$ exists?



Let $f:\mathbb{R} \to \mathbb{R}$. Suppose $f$ has the following property: for all $\alpha >0$, the sequence $\{f(\alpha n)\}_{n \geq 0}$ converges either to a finite or infinite extended real number as $n \to +\infty$. In other words, we assume



$$\forall \alpha \in \mathbb{R}_+,\lim_{n \to +\infty} f(\alpha n) = L(\alpha) $$ where $L(\alpha)$ is a value depending on $\alpha$. Note that $L(\alpha)$ may be $\pm \infty$.




Does it follow that $$\lim_{x \in \mathbb{R}, x \to +\infty} f(x)$$ exists as a finite or infinite extended real number?





Note that if the above claim is true, then $L(\alpha)$ is in fact independent of $\alpha$. However, we do not assume this.



We can also play around with this to get stronger and weaker statements. Does it hold if we just assume $\alpha \in \mathbb{Q}_+$? Does it hold if we assume $f$ is continuous?



I haven't made too much progress. However, this is quite similar to a classic little theorem which states that for continuous $f$, if for all $\alpha>0$, $f(\alpha n)\xrightarrow[n \in \mathbb{N}, n\to+\infty]{} 0$ then $f(x)\xrightarrow[x \in \mathbb{R}, x\to+\infty]{} 0$ which is proven with the Baire category theorem. However, my 'conjecture' only assumes the existence of the limit (rather than giving an explicit value, like $0$) and also does not assume continuity. I can't quite modify that proof to make it work here.


Answer



It is true if we further assume that $f$ is continuous!



First we shall use the following Lemma:




Lemma. Let $I_n=[a_n,b_n]\subset(0,\infty)$ be an increasing sequence of nontrivial closed intervals in the sense that
$$
a_n<b_n<a_{n+1}\ \text{ for all }n\in\mathbb N, \qquad a_n\to\infty.
$$
Then the set
$$
S=\Big\{x\in(0,\infty) : \text{$nx\in \bigcup_{k\in\mathbb N} I_k$
for infinitely many $n\in\mathbb N$}\Big\},
$$
is dense in $(0,\infty)$. In particular, if $\mathbb N=K\cup L$, with $|K|=|L|=\aleph_0$

and $K\cap L=\varnothing$, then
$$
S=\Big\{x\in(0,\infty) :
\text{$mx\in \bigcup_{k\in K} I_k$ and $nx\in\bigcup_{\ell\in L} I_\ell$
for infinitely many $m$ and infinitely many $\,n\in\mathbb N$}\Big\},
$$
is also dense in $(0,\infty)$.



We postpone the proof of this fact. Clearly, for every $\,x_0\in(0,\infty)$
$$
\liminf_{x\to\infty}\, f(x) \le
\lim_{n\to\infty}\,f(nx_0)\le
\limsup_{x\to\infty}\, f(x).
$$
Hence, if $\lim_{x\to\infty}\,f(x)$ does NOT exist, then we can pick
$\,A,B \in\mathbb R$, such that
$$
\liminf_{x\to\infty}\, f(x) < A < B < \limsup_{x\to\infty}\, f(x).
$$
Due to the continuity of $f$ it is possible to define intervals $I_n=[a_n,b_n]$ and $J_n=[c_n,d_n]$, $n\in\mathbb N$, such that $\,a_n<b_n<a_{n+1}$, $\,c_n<d_n<c_{n+1}$, with $a_n\to\infty$ and $c_n\to\infty$,
and
$$
f\,\big|_{I_n} < A \quad\text{and}\quad f\,\big|_{J_n} > B,
\quad \text{for all $n\in\mathbb N$}.
$$
By the Lemma, there exists a dense set of points $x$ with the property
that, for infinitely many $n$'s the multiple $nx$ belongs to $\bigcup_{k\in\mathbb N} I_k$, and for infinitely many $n$'s the multiple $nx$ belongs to $\bigcup_{k\in\mathbb N} J_k$.



This in turn implies that $\lim_{n\to\infty}f(nx)$ does not exist.




Proof of the Lemma. Let $[c,d]\subset(0,\infty)$. We shall prove that
there exists an $x\in [c,d]$, such that for infinitely many $n$ the multiple
$nx$ belongs to an interval of the form $I_k$, $\,k\in K$, and for
infinitely many $n$ the multiple $nx$ belongs to an interval of the form
$I_\ell$, $\ell\in L$. This is based on the observation that, if $a_n$ is
sufficiently large, then
$$
[c',d']=\frac{1}{N}[a_n,b_n]\cap[c,d]\quad\text{is a nontrivial interval},
$$
for $N=\lfloor a_n/d\rfloor+1$. This allows us to recursively define a sequence of nontrivial

closed intervals $J_n=[c_n,d_n]$, where $[c_0,d_0]=[c,d]$, $[c_1,d_1]=[c',d']$, and we pick
the $a_n$'s, so that the first one is in $K$, the next in $L$, the next in $K$ and so on. In
this way we have a sequence of closed intervals
$$
J_0\supset J_1\supset\cdots\supset J_n\supset J_{n+1}\supset\cdots.
$$
Clearly $S=\bigcap_{n\in\mathbb N} J_n\ne\varnothing$, and for each $x\in S$, infinitely
many multiples belong to $I_k$'s, $k\in K$, while infinitely
many multiples belong to $I_\ell$'s, $\ell\in L$.


elementary number theory - Simple Mersenne prime divisibility proofs



From "Fundamentals of Number Theory" by William J. LeVeque:




$M_n=2^n-1$.




Show that if $n=rs$, $M_r$ divides $M_n$.




My proof is:




From $M_n=2^n-1$ and $n=rs$ we get $M_{rs}=2^{rs}-1=(2^r)^s-1$.



$M_r=2^r-1 \Rightarrow M_r+1=2^r$




$M_{rs}=(M_r+1)^s-1$



In the expansion of $(M_r+1)^s$, all terms are of the form $M_rX$, except for the product of all ones, which equals $1$; this follows from the binomial expansion. Therefore $M_n$ is of the form



$M_n=M_{rs}=M_r(X_1+X_2+...)+1-1$.



Thus $M_r|M_n$.








I have two questions:



1) Is this proof correct/acceptable?



2) In what other ways can this problem be solved? Specifically I am interested in what other kinds of algebraic/logical/mathematical manipulations could be used.



Thanks in advance!


Answer



Note $\rm\ f_n\! :=\ a^n\!-1\ =\ a^{n-m} \: (a^m\!-1) + a^{n-m}\!-1\:$,$\ $ i.e. $\rm\ f_n = k\ f_m + f_{n-m}\equiv f_{n-m}\pmod{f_m}\:$ so




THEOREM $\: $ Let $\rm\ f_n\: $ be an integer sequence with $\rm\ f_{\:0} =\: 0\:\ $ and $\rm\ \: f_n\equiv f_{n-m}\ (mod\ f_m)\ $ for $\rm\: n > m\:$. Then $\rm\:\ (f_n,f_m)\ =\ f_{(n,\:m)}\ \: $ where $\rm\ (i,\:j)\ $ denotes $\rm\ gcd(i,\:j)\:$.



Proof $\ $ By induction on $\rm\:n + m\:$. The theorem is trivially true if $\rm\ n = m\ $ or $\rm\ n = 0\ $ or $\rm\: m = 0\:$.
So we may assume $\rm\:n > m > 0\:$.$\ $ Note $\rm\ (f_n,f_m)\: =\ (f_{n-m},f_m)\ $ follows from the hypothesis.
Since $\rm\:\ (n-m)+m \ <\ n+m\: ,\ $ induction yields $\rm\ \ (f_{n-m},f_m)\ =\ f_{(n-m,\:m)}\ =\ f_{(n,\:m)}\ \ $ QED



In particular $\rm\:(M_R,M_{RS}) = M_{(R,RS)} = M_R\ $ so $\rm\ M_R\:|\: M_{RS}$



See also this post for a conceptual proof exploiting the innate structure - an order ideal, and look up divisibility sequence to learn more about the essence of the matter.
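
Both the divisibility and the stronger gcd theorem are easy to test numerically (a sketch):

    from math import gcd

    for n in range(1, 80):
        for m in range(1, n):
            # Theorem: gcd(2^n - 1, 2^m - 1) = 2^gcd(n, m) - 1.
            assert gcd(2**n - 1, 2**m - 1) == 2**gcd(n, m) - 1
            # In particular, M_r divides M_{rs} whenever m divides n:
            if n % m == 0:
                assert (2**n - 1) % (2**m - 1) == 0
    print("verified for all 1 <= m < n < 80")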


Saturday 16 February 2019

number theory - Using the fundamental theorem of arithmetic to prove $\sqrt{\text{prime}}\notin\Bbb Q$




I need to use the fundamental theorem of arithmetic to show that





if $p$ is prime then $\sqrt p$ is irrational.




So far I've stated that $\sqrt p=m/n$ where $m,n$ are positive integers, then $pn^2=m^2$. Now I can factor $m$ and $n$ into primes but I don't know where to go on from there.


Answer



Given a prime $p$ and some $n\in\mathbb{N}^*$, let we define:
$$\nu_p(n)=\max\{k\in\mathbb{N}: p^k\mid n\}.\tag{1}$$
Since $\mathbb{Z}$ is a UFD we have $\nu_p(ab)=\nu_p(a)+\nu_p(b)$. In particular, $\nu_p$ of a square is always an even number. If we assume that $\sqrt{p}=\frac{m}{n}$ with $m,n\in\mathbb{N}^*$, we also have
$$ p n^2 = m^2. \tag{2} $$

However, such identity cannot hold, since $\nu_p(\text{RHS})$ is an even number and $\nu_p(\text{LHS})$ is an odd number. It follows that $\sqrt{p}\not\in\mathbb{Q}$ as wanted.


integration - Compute an integral involving the error function: $\int_{-\infty}^{\infty} \frac{e^{-k^2}}{1-k} \mathrm{d}k$



There is an integral
$$
\mathcal{P}\int_{-\infty}^{\infty} \frac{e^{-k^2}}{1-k} \mathrm{d}k
$$

where $\mathcal{P}$ means Cauchy principal value.



Mathematica gives the result (as the screenshot shows)
$$
\mathcal{P}\int_{-\infty}^{\infty} \frac{e^{-k^2}}{1-k} \mathrm{d}k = \frac{\pi}{e}\mathrm{erfi}(1) = \frac{\pi}{e}\cdot \frac{2}{\sqrt{\pi}} \int_0^1 e^{u^2}\mathrm{d}u
$$

Mathematica screen shot



where $\mathrm{erfi}(x)$ is imaginary error function define as
$$
\mathrm{erfi}(z) = -\mathrm{i}\cdot\mathrm{erf}(\mathrm{i}z)
$$

$$
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^{x} e^{-t^2}\mathrm{d}t
$$

How can we get the right hand side from left hand side?


Answer



For $a \in \mathbb{R}$ define
\begin{align}
f(a) &\equiv \mathrm{e}^{a^2} \mathcal{P} \int \limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-k^2}}{a-k} \, \mathrm{d} k = \mathrm{e}^{a^2} \mathcal{P} \int \limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-(x-a)^2}}{x} \, \mathrm{d} x= \lim_{\varepsilon \to 0^+} \left[\int \limits_{-\infty}^{-\varepsilon} \frac{\mathrm{e}^{-x^2 + 2 a x}}{x} \, \mathrm{d} x + \int \limits_\varepsilon^\infty \frac{\mathrm{e}^{-x^2 + 2 a x}}{x} \, \mathrm{d} x\right] \\
&= \lim_{\varepsilon \to 0^+} \int \limits_\varepsilon^\infty \frac{\mathrm{e}^{-x^2 + 2 a x} - \mathrm{e}^{-x^2 - 2 a x}}{x} \, \mathrm{d} x = \int \limits_0^\infty \frac{\mathrm{e}^{-x^2 + 2 a x} - \mathrm{e}^{-x^2 - 2 a x}}{x} \, \mathrm{d} x \, .
\end{align}

In the last step we have used that the integrand is in fact an analytic function (with value $4a$ at the origin). The usual arguments show that $f$ is analytic as well and we can differentiate under the integral sign to obtain

$$ f'(a) = 2 \int \limits_0^\infty \left[\mathrm{e}^{-x^2 + 2 a x} + \mathrm{e}^{-x^2 - 2 a x}\right]\, \mathrm{d} x = 2 \int \limits_{-\infty}^\infty \mathrm{e}^{-x^2 + 2 a x}\, \mathrm{d} x = 2 \sqrt{\pi} \, \mathrm{e}^{a^2} \, , \, a \in \mathbb{R} \, .$$
Since $f(0) = 0$,
$$ f(a) = 2 \sqrt{\pi} \int \limits_0^a \mathrm{e}^{t^2} \, \mathrm{d} t = \pi \operatorname{erfi}(a)$$
follows for $a \in \mathbb{R}$. This implies
$$ \mathcal{P} \int \limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-k^2}}{a-k} \, \mathrm{d} k = \pi \mathrm{e}^{-a^2} \operatorname{erfi}(a) \, , \, a \in \mathbb{R} \, .$$
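
A numerical confirmation with mpmath (a sketch; the principal value is computed by folding the integral symmetrically about the pole at $k=1$, which removes the singularity):

    import mpmath as mp

    # PV of exp(-k^2)/(1-k): substitute k = 1 -/+ t and combine the two halves.
    pv = mp.quad(lambda t: (mp.e**(-(1 - t)**2) - mp.e**(-(1 + t)**2)) / t,
                 [0, mp.inf])
    print(pv)
    print(mp.pi * mp.e**-1 * mp.erfi(1))  # same value, ~1.9075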


fraction as index number?




given these inputs x = 4, S = [1 2 3 4 5 6 7 8 9 10], and n = 10



    search (x, S, n) {
        i = 1
        j = n
        while i < j {
            m = [(i+j)/2]
            if x > S_m then i = m + 1
            else j = m
        }
        if x = S_i then location = i
        else location = 0
    }


This algorithm is from my discrete math homework (note the original handout had `i = n + 1`, corrected above to the textbook's `i = m + 1`). I'm confused as to what $S_m$ would equal on the first iteration, because $m$ would be $\frac{11}{2}$. If I use a fraction as the index, do I round down? Is there a general rule for this? Am I making any sense? Help pls


Answer



Yes, you should round down (or in other words, take only the integer part).



$$[11/2]=[5.5]=5$$




This is known as the floor function in mathematics (usually there is a floor(x) function in programming languages).


Friday 15 February 2019

trigonometry - Proving $\left(\tan x +\frac{1}{\tan x}\right) \left(\sin x+\cos x\right)=\left(\frac{1}{\cos x}\right)+\left(\frac{1}{\sin x}\right)$

$$
\left(\tan x +\frac{1}{\tan x}\right) \left(\sin x+\cos x\right)=\left(\frac{1}{\cos x}\right)+\left(\frac{1}{\sin x}\right)
$$




Can someone show me how to solve this identity and also explain the steps?
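
Assuming the square on the tangent in the original post was a typo (with $\tan^2 x$ the claim fails, e.g. at $x=\pi/6$), a sketch of the computation:
$$\tan x+\frac{1}{\tan x}=\frac{\sin x}{\cos x}+\frac{\cos x}{\sin x}=\frac{\sin^2x+\cos^2x}{\sin x\cos x}=\frac{1}{\sin x\cos x},$$
so multiplying by $\sin x+\cos x$ gives $\dfrac{\sin x+\cos x}{\sin x\cos x}=\dfrac{1}{\cos x}+\dfrac{1}{\sin x}$.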

real analysis - Show that $\lim\limits_{n \to \infty} \frac{(n!)^{1/n}}{n}= \frac{1}{e}$





Show that $$\lim_{n \to \infty} \left\{\frac{(n!)^{1/n}}{n}\right\} = \frac{1}{e}$$




What I did is to let $U_n = \dfrac{(n!)^{\frac{1}{n}}}{n}$ and $U_{n+1} = \dfrac{(n+1)!^{\frac{1}{n+1}}}{n+1}$. Then



$$\frac{ U_{n+1} }{U_n } = \frac{\frac{(n+1)!^{\frac{1}{n+1}}}{n+1}}{\frac{(n!)^{\frac{1}{n}}}{n}}$$




Next I just got stuck. Am I on the right track, or am I going about this type of sequence the wrong way?


Answer



Let $v_n = \frac{n!}{n^n } $ then $$ \frac{v_{n+1}}{v_n } =\frac{(n+1)! }{(n+1)^{n+1}} \cdot \frac{n^n }{n!} =\frac{n^n}{(n+1)^n }=\frac{1}{\left(1+\frac{1}{n}\right)^n}\to \frac{1}{e}$$ hence $$\frac{\sqrt[n]{n!} }{n} =\sqrt[n]{v_n} \to\frac{1}{e} .$$


Thursday 14 February 2019

complex analysis - Uniform convergence of the series $\sum_{n = 1}^{\infty} \frac{(-1)^{n-1}}{n + |z|}$




Show that the following series is uniformly convergent in $\mathbb{C}$:
$$\sum_{n = 1}^{\infty}\frac{(-1)^{n-1}}{n + |z|}$$
Is this series absolutely convergent in $\mathbb{C}$?



I showed that the series is not absolutely convergent: just consider $z = 0$ and we have the harmonic series. In addition, using the Leibniz test for alternating series I also showed that the series is pointwise convergent, but I can't show that the series is uniformly convergent. Can someone help me? Thank you.


Answer



The series converges uniformly for all $z \in \mathbb{C}$ by the Dirichlet test which states that $\sum a_n(z)b_n(z)$ converges uniformly if the partial sums $\sum_{k=1}^n a_k(z)$ are uniformly bounded in modulus and $b_n(z) \downarrow 0$ as $n \to \infty$ monotonically and uniformly.



Here we have, for all $n \in \mathbb{N}$ and $z \in \mathbb{C}$,




$$a_k(z) = (-1)^{k-1} \implies \left|\sum_{k=1}^n a_k(z) \right| \leqslant 1$$



We also have



$$b_n(z) = \frac{1}{n + |z|} \leqslant \frac{1}{n} \to 0,$$



where it is clear that the convergence to $0$ is uniform and monotone since $b_{n+1}(z) \leqslant b_n(z)$.


elementary set theory - Is the class of subsets of integers countably infinite?



I am trying to figure out whether the class of subsets of the integers is countably infinite or not. I know that the collection of all finite subsets of the integers is countable. I believe I need to use diagonalization to prove whether it's true or not, but I'm not sure how to approach it.



All Help is greatly appreciated!


Answer



No, the set of all subsets of the integers is not countable. Since $\mathbb{Z}$ has the same cardinality as $\mathbb{N}$, it suffices to consider all subsets of $\mathbb{N}$.




For each subset $X$ of $\mathbb{N}$ consider the characteristic function $\chi_X$ defined by



$\chi_X(z) = \begin{cases}
1 & \quad z \in X \\
0 & \quad z \notin X
\end{cases}$



In this way you associate injectively and surjectively each subset $X$ of $\mathbb{N}$ with a function in $2^{\mathbb{N}}$, and $2^\mathbb{N}$ has cardinality strictly larger than $\mathbb{N}$. This is proved by the typical Cantor diagonalization argument.



Also, Cantor Diagonalization and the function I wrote above can be used to show more generally that the set of all subsets of a given set has cardinality strictly greater than the given set.




In response to comment :



You can think of a function from $\mathbb{N} \rightarrow 2$ as an infinite binary string of $0$'s and $1$'s. Assume that $2^{\mathbb{N}}$ is countable, that is, there is a bijection $\sigma$ from $\mathbb{N}$ to $2^\mathbb{N}$. Then define the function $h : \mathbb{N} \rightarrow 2$ as follows



$h(n) = \begin{cases}
1 & \quad (\sigma(n))(n) = 0 \\
0 & \quad (\sigma(n))(n) = 1
\end{cases}$




Informally, this is the familiar argument: form a new binary string by going down the diagonal and switching $0$ for $1$ and $1$ for $0$. Now this is a perfectly good binary string, hence it should appear as $\sigma(k)$ for some $k$ if $\sigma$ is indeed a bijection. However, it cannot be $\sigma(k)$ for any $k$, since it differs from $\sigma(k)$ in the $k^\text{th}$ entry.



I hope this helps.
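To see the construction concretely, here is a tiny Python sketch of the diagonal flip on a finite table (illustrative only — the actual argument works with infinite strings):

    # Row n is (a finite prefix of) the binary string sigma(n).
    table = [
        [0, 1, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
        [1, 0, 1, 0],
    ]

    # h(n) = 1 - sigma(n)(n): flip each diagonal entry.
    h = [1 - table[n][n] for n in range(len(table))]
    print(h)  # [1, 0, 1, 1] -- differs from row n in position n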


limits - How to compute $lim_{nrightarrowinfty}e^{-n}left(1+n+frac{n^2}{2!}cdots+frac{n^n}{n!}right)$

There is a probabilistic method to solve it, but I am not familiar with probability. I am trying to compute it by analytic methods, such as L'Hôpital's rule or the Stolz–Cesàro theorem, but they are not working.

Finding the modulus of a complex number $1-e^{itheta }$



I have an answer for this question, but I'm not very confident with it:



$\left | 1-e^{i\theta } \right |^{2}$, i.e. find the square of the modulus of $1-e^{i\theta}$.



$e^{i\theta} = R(\cos\theta + i\sin\theta)$, with $R = 1$,



$= \cos\theta + i\sin\theta$




In Cartesian form:



$a = R\cos\theta \Rightarrow \cos\theta$



$b = R\sin\theta \Rightarrow \sin\theta$



$\therefore 1-e^{i\theta} = 1-\cos\theta-i\sin\theta$



$\Rightarrow \left | 1-e^{i\theta} \right | = \left | 1-\cos\theta-i\sin\theta \right |$




Where the real part, $a$, $= 1-\cos\theta$
and the imaginary part, $b$, $= -\sin\theta$



The modulus of a complex number being $\sqrt{a^{2} +b^{2}}$,



$\therefore \left | 1-\cos\theta-i\sin\theta \right | = \sqrt{(1-\cos\theta)^{2} + (-\sin\theta)^{2}}$



$\Rightarrow \left | 1-\cos\theta-i\sin\theta \right |^{2} = (1-\cos\theta)^{2} + (-\sin\theta)^{2} = 1-2\cos\theta +\cos^{2}\theta +\sin^{2}\theta = 2-2\cos\theta$, since $\cos^{2}\theta +\sin^{2}\theta = 1$.



Is this correct? Or is my approach incorrect? Thanks in advance for any feedback/help :)



Answer



Your argument is lengthy, but sound.



If just the square of the modulus is needed, you can consider that $|z|^2=z\bar{z}$, so
$$
|1-e^{i\theta}|^2=(1-e^{i\theta})(1-e^{-i\theta})=
1-(e^{i\theta}+e^{-i\theta})+1=2-2\cos\theta
$$



If also the argument is needed, the trick with $1-e^{i\theta}$ (but also $1+e^{i\theta}$) is to set

$$
\theta=2\alpha
$$
and rewrite
$$
1-e^{i\theta}=1-e^{2i\alpha}=
-e^{i\alpha}(e^{i\alpha}-e^{-i\alpha})=-2i(\sin\alpha)e^{i\alpha}
$$
Since $0\le\theta<2\pi$, we have $0\le\alpha<\pi$, so $\sin\alpha\ge0$. Hence the modulus is $2\sin\alpha$ and, from $-i=e^{3i\pi/2}$, the argument is
$$
\alpha+\frac{3\pi}{2}
$$
(up to an integer multiple of $2\pi$).



Thus the square of the modulus is
$$
4\sin^2\alpha=4\sin^2\frac{\theta}{2}=4\frac{1-\cos\theta}{2}=
2(1-\cos\theta)
$$
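A quick numerical spot check of the equivalent expressions (a throwaway Python sketch):

    import cmath, math

    theta = 1.2345
    print(abs(1 - cmath.exp(1j * theta)) ** 2)   # |1 - e^{i theta}|^2
    print(2 - 2 * math.cos(theta))               # 2 - 2 cos(theta)
    print(4 * math.sin(theta / 2) ** 2)          # 4 sin^2(theta/2)

All three lines print the same value.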


Wednesday 13 February 2019

integration - Calculating a Complicated Integral of Two Variables



I have encountered the following integral
$$I=\int_{0}^{2\pi} \int_{0}^{2\pi} \frac{c_1 + c_2 \cos\theta_2}{-a + \cos\theta_1 + \cos\theta_2} \cos(m\theta_1) \cos(n\theta_2) d\theta_1 d\theta_2$$

where $a>2$ and I am having a hard time moving forward. I would be happy with an analytic solution, converting this to some non-elementary but known special function, or finding the asymptotic behavior of the integral as $a \rightarrow 2$ from the positive reals.



I was able to perform the $\theta_1$ integration by looking in Gradshteyn and Ryzhik and this yields something proportional to
$$\int_{0}^{2\pi} \frac{c_1 + c_2\cos\theta_{2}}{\sqrt{a^2 - 2 a\cos(\theta_{2})-\sin(\theta_{2})^2}} \cos\left(n\theta_{2}\right)\\
\times \left(\sqrt{a^2 - 2 a\cos(\theta_{2})-\sin(\theta_{2})^2} -a + \cos(\theta_{2}) \right)^{m}d\theta_{2}.$$
Trying to find this form in a table was hopeless, so I changed variables to $x=\cos\theta_2$. Having to expand $\cos(n\theta_2)$ introduced a sum, and the resulting integral is proportional to
$$\int_{-1}^{1} \frac{dx}{\sqrt{1-x^2}} \frac{c_1 + c_2 x}{\sqrt{\left(-a + x \right)^2 -1 }} \\
\times \left(\sqrt{\left(-a + x\right)^2 -1} -a + x \right)^{m}\sum\limits_{\substack{r=0 \\ 2r \leq n}} (-1)^r \binom{n}{2r} x^{n-2r} (1-x^2)^r.$$
I am pretty stumped at this point.




I tried doing some further expansions, but I got infinite sums of infinite sums, so this didn't seem like progress (the hypergeometric ${}_{3}F_{2}$ and the regularized ${}_{2}\tilde{F}_{1}$ appeared in this approach).



There are reduction formulas for integrals, but many of them don't seem to apply, as I have two distinct square roots ($\sqrt{1-x^2}$ and $\sqrt{\left(-a + x\right)^2 -1}$). However, my integrand is a rational function of these square roots and $x$.



There is a small parameter in the problem, namely $\epsilon>0$ in $a = 2 +\epsilon$. The integral diverges for $a=2$, so this is a singular perturbation problem (if this approach is even helpful).



I would appreciate any suggestions or advice on this problem!


Answer



Extracting the leading order is not hard. Indeed, let $f : \mathbb{R}^2 \to \mathbb{C}$ be a continuous function such that $f(k_1, k_2) = f(k_1 + 2\pi, k_2) = f(k_1, k_2 + 2\pi)$ for all $k_1, k_2 \in \mathbb{R}$. Then with $\|f\| = \sup |f|$,




$$ \int_{0}^{2\pi}\int_{0}^{2\pi} \frac{f(k)}{a-\cos k_1 - \cos k_2} \, dk_1 dk_2
= \int_{B(0,\pi)} \frac{f(k)}{a - 2 + \frac{1}{2}|k|^2} \, dk + \mathcal{O}(\|f\|). $$



Now applying the polar coordinates change followed by the substitution $r=\sqrt{2(a-2)u}$,



\begin{align*}
\int_{B(0,\pi)} \frac{f(k)}{a - 2 + \frac{1}{2}|k|^2} \, dk
& = \int_{0}^{\pi} \frac{r}{a-2+\frac{1}{2}r^2} \left( \int_{S^1} f(r\omega) \, d\omega \right) \, dr \\
&= \int_{0}^{\frac{\pi^2}{2(a-2)}} \left( \int_{S^1} f(\omega\sqrt{2(a-2)u}) \, d\omega \right) \, \frac{du}{u+1} \\
&\sim 2\pi f(0) \log \left( \frac{\pi^2}{2(a-2)} \right) \\
&\sim -2\pi f(0) \log(a-2).
\end{align*}



As an alternative direction, I guess that a probabilistic interpretation may perhaps help analyze the behavior of $I$. Indeed,



$$ G^p(x,y) = \frac{1}{2\pi^2} \int_{0}^{2\pi}\int_{0}^{2\pi} \frac{e^{-ik\cdot y}}{\frac{2}{1-p} - \cos(k_1) - \cos(k_2)} \, dk $$



is the Green function of the 2-D simple random walk on $\mathbb{Z}^2$ killed at rate $p$.
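For what it's worth, the leading-order asymptotics above can be sanity-checked numerically (a slow, throwaway Python sketch with $f \equiv 1$; expect agreement only up to the $\mathcal{O}(1)$ terms that were dropped):

    import numpy as np
    from scipy.integrate import dblquad

    a = 2.05
    val, _ = dblquad(lambda k2, k1: 1.0 / (a - np.cos(k1) - np.cos(k2)),
                     0, 2 * np.pi, lambda _: 0, lambda _: 2 * np.pi)
    print(val)                                           # numeric integral
    print(2 * np.pi * np.log(np.pi**2 / (2 * (a - 2))))  # 2*pi*f(0)*log(pi^2/(2(a-2)))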


calculus - continuity of $f ' $

If $f$ is differentiable on $(a,b)$, can we say that $f'$ is continuous on $(a,b)$?



I tried some functions, and it seems that we can say so, but I'm not sure.

calculus - Meaning of differentials when treated separately from the Leibniz notation dy/dx

I’ve heard people say $dy/dx$ is not a fraction with $dy$ as the numerator and $dx$ as the denominator; that it is just notation representing the derivative of the function $y$ with respect to the variable $x$. The calculus book I am reading (Calculus: A Complete Course - Adams & Essex) says that even though $dy/dx$, defined as the limit of $\Delta y / \Delta x$, appears to be “meaningless” if we treat it as a fraction, it can still be “useful to be able to define the quantities $dy$ and $dx$ in such a way that their quotient is the derivative $dy/dx$”. It then goes on to define the differential $dy$ as “a function of $x$ and $dx$”, as follows: $$dy = \frac{dy}{dx}\,dx = f'(x)\,dx$$

What is the meaning of $dx$ here? It is now an independent variable, yet it seemingly is not one supplied by most functions I would work with. In a later chapter, focused on using differentials to approximate change, the book gives the following: $$\Delta y = \frac{\Delta y}{\Delta x}\,\Delta x \approx \frac{dy}{dx}\,\Delta x = f'(x)\,\Delta x$$ This makes sense to me: the change in $y$, $\Delta y$, at/near a point can be approximated by multiplying the derivative at that point by some change in $x$, $\Delta x$, at/near the point. Here $\Delta y$ and $\Delta x$ are real numbers, so no leap in understanding is necessary.

The problem with the definition of $dy$ is that the multiplication is not between a derivative and a real number, as in the approximation of $\Delta y$, but between a derivative and an object that is not explicitly defined. Because I do not understand what $dx$ is, I can’t use it to build an understanding of what $dy$ is. I also have no real understanding of what $dy$ is meant to be, so I cannot work backwards to attach some meaning to $dx$. Is $dy$ meant to be some infinitesimal quantity? If so, how can we be justified in using it when most of the book is restricted to the real numbers? Are $dy$ and $dx$ still related to the limits of functions, or are they detached from that meaning?

Later in the chapter on using differentials to approximate change, the book says it is convenient to “denote the change in $x$ by $dx$ instead of $\Delta x$”. We can just switch out $\Delta x$ for $dx$? Why is it convenient to do this? More importantly, how are we justified in doing this?



The question What exactly is a differential? and the blog post https://math.blogoverflow.com/2014/11/03/more-than-infinitesimal-what-is-dx/ both contain discussions that are beyond my understanding. My problem arose in an introductory textbook; I find it strange that we can just swap out different symbols when we go to great lengths to say they are different entities.




In What is the practical difference between a differential and a derivative? Arturo Magidin’s answer says that it is not “literally true” that $dy = \frac{dy}{dx}\,dx$ and that replacing $\Delta x$ with $dx$ is an “abuse of notation”. If that is the case, then the quotient of $dy$ and $dx$ would not be $\frac{dy}{dx}$, but $\frac{dy}{dx}\frac{dx}{\Delta x}$, right?

Tuesday 12 February 2019

Proof a sequence decreases by induction

I have a question regarding a proof by induction.
We have to determine whether or not the following sequence converges.
$$U_n = \frac{1 \cdot 4 \cdot 7 \cdots (3n - 2)}{2 \cdot 5 \cdot 8 \cdots (3n-1)}$$
I was trying to do this by proving that the sequence is bounded below by $0$ and is decreasing. It's easy enough to see that it is bounded below by $0$, but proving (by induction) that it is decreasing has turned out to be difficult. I understand that I need to prove that $U_n > U_{n+1}$, but I have no idea how to go about doing this.

abstract algebra - Two number fields with trivial intersection, not linearly disjoint but....



Do there exist two finite extensions of $\mathbb{Q}$, $L=\mathbb{Q}(\alpha)$ and $K=\mathbb{Q}(\beta)$ such that $L\cap K=\mathbb{Q}$, the minimal polynomial $\mu_{\alpha}(X)\in \mathbb{Q}[X]$ of $\alpha$ over $\mathbb{Q}$ is not irreducible over $K$, but $\mu_\alpha(\beta)\neq 0$ ?



Answer



Sure. Let $\alpha=\root3\of2$, let $\gamma$ be a nonreal cube root of $2$, and let $\beta=\gamma+1$. Then $K=\mathbb{Q}(\beta)$ contains $\gamma$, so $\mu_\alpha(X)=X^3-2$ has the root $\gamma$ in $K$ and is therefore not irreducible over $K$, while $\mu_\alpha(\beta)=(\gamma+1)^3-2\neq 0$. Finally, $L\cap K=\mathbb{Q}$: it is a proper subfield of $K$ (since $L\subset\mathbb{R}$ but $K\not\subset\mathbb{R}$), and $[K:\mathbb{Q}]=3$ is prime.


Real Analysis Monotone Convergence Theorem Question

I am working on the following exercise for an introductory Real Analysis course.




Suppose that $x_0 \geq 2$ and $x_n = 2 + \sqrt{x_{n-1}-2}$ for $n \in \mathbb{N}$. Use the Monotone Convergence Theorem to prove that either $\lim_{n \to \infty}x_n=2$ or $3$ or $\infty$.




I need some help to move in the right direction. I'll start with what I know so far. My first approach was to break the question into different cases.




I've found that if $x_0=2$, then $x_n =2$ for all $n \in \mathbb{N}$. So, if this is the case, the sequence is neither decreasing nor increasing. Furthermore, I've found that if $x_0 = 3$ then $x_n =3$ for all $n \in \mathbb{N}$. For $2 < x_0 < 3$, the sequence is monotone increasing, and converges to $3$. For $x_0 >3$, the sequence is monotone decreasing and converges to $3$. You will notice none of the cases I've given result in $\lim_{n \to \infty}x_n= \infty$. I have not proven any of the above.



Now, the Monotone Convergence Theorem states, according to my textbook:




A monotone increasing sequence that is bounded above converges.



A monotone decreasing sequence that is bounded below converges.





Now, if I can show that the sequence is monotone increasing/decreasing and bounded above/below for the different cases, I believe I can use the following.



$$L=\lim_{n \to \infty} x_{n+1}=\lim_{n \to \infty} 2+\sqrt{x_n -2}=2 + \sqrt{\lim_{n \to \infty}x_n-2}=2+ \sqrt{L-2}$$



So,



$$L=2+\sqrt{L-2}$$



Which, solving for $L$, yields the solutions $L=2$ and $L=3$. This makes sense according to what I claimed earlier. Once again, I have not found under what conditions the sequence is divergent. Any help would be appreciated.




Also, any advice as to proving that the sequence is either monotone increasing and bounded above, or monotone decreasing and bounded below, under the different conditions for $x_0$, would be appreciated.

Monday 11 February 2019

calculus - Find a Continuous Function











I am having a problem with this exercise. Could someone help?



Suppose $a \in (0,1)$ is a real number which is not of the form $\frac{1}{n}$ for any natural number $n$. Find a function $f$ which is continuous on $[0, 1]$ and such that $f (0) = f (1)$ but which does not satisfy $f (x) = f (x + a)$ for any $x$ with $x$, $x + a \in [0, 1]$.




I noticed that this condition is satisfied if and only if $f(x) \geq f(0)$



Thank you in advance


Answer



Look at $f(x) = \sin(2\pi x)$. For which values of $a$ can you find an $x \in [0,1]$ with $x+a \in [0,1]$ and $f(x) = f(x+a)$? In particular, if you additionally require $a > \frac{1}{2}$, can such an $a$ exist at all?



Once you've answered that you've solved your problem for some values of $a$. Which are those?



A general solution can be found in the answers to Universal Chord Theorem (link found by the user who asked the question). To quote, $$
f(x) = \sin^2\left(\frac{\pi x}{a}\right) - x \, \sin^2\left(\frac{\pi}{a}\right)
$$



is a solution. This works because $f(x) = f(x+a)$ implies $a \sin^2\left(\frac{\pi}{a}\right) = 0$ and thus $a=\frac{1}{n}$ for some $n \in \mathbb{N}$. The answers to the linked questions also prove that $a \neq \frac{1}{n}$ for every $n \in \mathbb{N}$ is a necessary condition for a solution to exist.
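To spell the key step out (a small verification of the quoted formula, nothing more): since $\sin^2$ has period $\pi$,
$$
f(x+a)-f(x) = \sin^2\left(\frac{\pi x}{a}+\pi\right) - \sin^2\left(\frac{\pi x}{a}\right) - a\,\sin^2\left(\frac{\pi}{a}\right) = -a\,\sin^2\left(\frac{\pi}{a}\right),
$$
which vanishes only when $\sin\left(\frac{\pi}{a}\right)=0$, i.e. only when $a=\frac{1}{n}$ for some $n \in \mathbb{N}$.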


linear algebra - Every element in $SU(2)$ has the form $begin{bmatrix} alpha& beta\ -bar{beta} & bar{alpha}end{bmatrix}$ with $alpha, betain mathbb{C}$

I want to prove that every element in $\operatorname{SU}(2)$ has the form
$$
\begin{bmatrix} \alpha& \beta\\ -\bar{\beta} & \bar{\alpha}\end{bmatrix},
$$

with $\alpha, \beta\in \mathbb{C}$. For this I took an arbitrary element
$$
\begin{bmatrix} \alpha& \beta\\ \gamma & \delta\end{bmatrix},
$$

and I arrive at the following equalities
$$
\alpha\delta-\beta\gamma=1, \quad \alpha\bar{\alpha}+\beta\bar{\beta}=1, \quad \bar{\alpha}\gamma+\bar{\beta}\delta=0, \quad \gamma\bar{\gamma}+\delta\bar{\delta}=1
$$



but I don't know from here how to prove that $\gamma=-\bar{\beta}$ and $\delta=\bar{\alpha}$, any idea? Thank you.

Sunday 10 February 2019

What are some fast ways to generate random numbers?

Many programming languages come with a function that returns random numbers. I wonder how they implement that. Also, assuming the language doesn't have a random function, is there a way to generate them quickly? Another related question: if the language (and its included libraries) had this random function, could someone write a faster one using just the "primitives" of the language? I don't remember ever hearing about this in college, but for probability simulation I have been using random numbers a lot, and now I am wondering how they do it so that there is no bias and no repeating pattern of numbers. Do they just reference some internal clock of the computer? If so, wouldn't that bias the numbers somewhat?



I should clarify: what if I only needed random numbers from 0 to 255 (8 bits) to simulate most card hands, die rolls, and so on? Is there a way I can write my own, or has anyone on this site tinkered around with writing their own successfully?



It would be fun on a very fast computer to have some algorithm that gives me random numbers as quickly as possible (even storing them in memory or streaming them). For example, if I wanted 1 trillion random numbers to run a simulation without spending a lot of CPU cycles generating them so there is more available CPU speed to actually run the simulation.
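For illustration, one classic tiny generator to tinker with is Marsaglia's xorshift32 (sketched here in Python; it is fast and has period $2^{32}-1$, but it is not cryptographically secure, and the seed must be nonzero):

    def xorshift32(seed=2463534242):
        # Marsaglia's xorshift32; yields pseudorandom bytes in 0..255.
        x = seed & 0xFFFFFFFF
        while True:
            x ^= (x << 13) & 0xFFFFFFFF
            x ^= x >> 17
            x ^= (x << 5) & 0xFFFFFFFF
            yield x & 0xFF  # keep the low byte for the 0..255 use case

    gen = xorshift32()
    print([next(gen) for _ in range(10)])

For serious simulations one would reach for a library generator (e.g. a Mersenne Twister or a PCG variant) rather than the low byte of a homemade xorshift.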

elementary number theory - Finding inverse of polynomial in a field



I'm having trouble with the procedure to find an inverse of a polynomial in a field. For example, take:





In $\mathbb{Z}_3[x]/(m(x))$, where $m(x) = x^3 + 2x +1$, find the inverse of $x^2 + 1$.




My understanding is that one needs to use the (Extended?) Euclidean Algorithm and Bezout's Identity. Here's what I currently have:



Proceeding with Euclid's algorithm:



$$ x^3 + 2x + 1 =(x^2 + 1)(x) + (x + 1) \\
x^2 + 1 = (x + 1)(2 + x) + 2$$



We stop here because 2 is invertible in $\mathbb{Z}_3[x]$. We rewrite it using a congruence:



$$(x+1)(2+x) \equiv 2 \mod{(x^2+1)}$$



I don't understand the high level concepts sufficiently well and I'm lost from here. Thoughts?



Wikipedia has a page on this with a decent explanation, but it's still not clear in my mind.




Note that this question has almost the same title, but it's a level of abstraction higher. It doesn't help me, as I don't understand the basic concepts.



Thanks.


Answer



Write $f := x^3+2x+1$ and $g := x^2+1$. We want to find the inverse of $g$ in the field $\mathbb F_3[x]/(f)$ (I prefer to write $\mathbb F_3$ instead of $\mathbb Z_3$ to avoid confusion with the $3$-adic integers), i.e. we are looking for a polynomial $h$ such that $gh \equiv 1 \pmod f$, or equivalently $gh+kf=1$ for some $k\in \mathbb F_3[x]$. The Euclidean algorithm can be used to find $h$ and $k$:
\begin{align}
f &= x\cdot g+(x+1)\\
g &= (x+2)\cdot(x+1) + 2\\
(x+1) &= (2x)\cdot2 + 1
\end{align}

Working backwards, we find
\begin{align}
1 &= (x+1)-(2x)\cdot 2\\
&= (x+1)-(2x)(g-(x+2)(x+1))\\
&= (2x^2+x+1)(x+1)-(2x)g\\
&= (2x^2+x+1)(f-xg)-(2x)g\\
&= (2x^2+x+1)f - (2x^3+x^2)g\\
&= (2x^2+x+1)f + (x^3+2x^2)g.
\end{align}

So, the inverse of $g$ modulo $f$ is $h = x^3+2x^2 \equiv 2x^2+x+2 \pmod f$.
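One can double-check this result with a few lines of Python (a throwaway sketch: polynomials as coefficient lists, lowest degree first, arithmetic mod 3):

    P = 3  # the prime modulus

    def polymul(u, v):
        # Multiply coefficient lists (lowest degree first) mod P.
        out = [0] * (len(u) + len(v) - 1)
        for i, a in enumerate(u):
            for j, b in enumerate(v):
                out[i + j] = (out[i + j] + a * b) % P
        return out

    def polyrem(u, m):
        # Remainder of u modulo the monic polynomial m, coefficients mod P.
        u = u[:]
        while len(u) >= len(m):
            c = u[-1]
            for k in range(len(m)):
                u[len(u) - len(m) + k] = (u[len(u) - len(m) + k] - c * m[k]) % P
            u.pop()
        return u

    f = [1, 2, 0, 1]   # x^3 + 2x + 1
    g = [1, 0, 1]      # x^2 + 1
    h = [2, 1, 2]      # 2x^2 + x + 2, the claimed inverse
    print(polyrem(polymul(g, h), f))  # [1, 0, 0], i.e. the constant 1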


linear algebra - Relation between the rank of the matrix and its characteristic polynomial?



Is there some kind of relation between the rank of a matrix and its characteristic polynomial? After searching through various posts, I still can't tell.



Say $A \in M_{5}(\Bbb{R})$ and its characteristic polynomial is $\alpha x^5 + \beta x^4 + \gamma x^3$; then what is the rank of the matrix $A$?



I am unable to establish the relation. I know that from the characteristic polynomial I can obtain the eigenvalues, and hence the trace and determinant of the matrix. The question now is: if I know the trace and determinant of the matrix, can I obtain some information about the rank of the matrix (the number of linearly independent rows in the RREF)?




I was looking at this question, but I am still not aware of any trick or relation.


Answer



If the matrix is diagonalizable, the rank equals the degree of the characteristic polynomial minus the multiplicity of the root $0$ (in the example, the rank of the matrix is $5 - 3 = 2$).



In fact, in this case, writing $M=PDP^{-1}$ with $D$ a diagonal matrix with $n-r$ zeros, and transforming it into $MP=PD$, it means that if the columns of $P$ are denoted $P_k$, we have $MP_k=\lambda_k P_k$ with, say, the last $n-r$ vectors associated with eigenvalue $0$ (and only them), i.e. we have exactly $n-r$ independent vectors belonging to the kernel.
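A cautionary example for the non-diagonalizable case (not covered by the rule above): the nilpotent matrix
$$N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$
has characteristic polynomial $x^2$, the same as the zero matrix, yet $\operatorname{rank}(N)=1$ while $\operatorname{rank}(0)=0$. So without diagonalizability, the characteristic polynomial alone does not determine the rank.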


linear algebra - Diagonalizability of $A in M_{n}(Bbb{R})$ (Upper traingular matrix) with all diagonal entries 1 and $A neq I$?




If $A \in M_{n}(\Bbb{R})$ is an upper triangular matrix with all diagonal entries $1$ such that $A \neq I$, then what can we say about the diagonalizability of $A$?



I know that if the matrix has distinct eigenvalues, or if it has a linearly independent set of $n$ eigenvectors, then the matrix is diagonalizable.



And that the eigenvalues of triangular matrices are given by the diagonal elements, like here and here, but those arguments work nicely only if we have distinct elements on the main diagonal.



But in my case I have the same value $1$ everywhere on the main diagonal; how can I approach the diagonalizability of the matrix?


Answer



It isn't diagonalizable. Your matrix is of the form $I+N$ where $I$ is the identity matrix and $N$ is strictly upper triangular. Being strictly upper triangular means that $N^n=0$: if the $e_i$ are the basis vectors, and $E_m$ is the subspace spanned by $e_1,\ldots,e_m$, with $E_0=0$, then you can check that $NE_m \subseteq E_{m-1}$, so $N^nE_n = E_0=0$. However, since $A\ne I$, we have $N\ne 0$.

Now suppose $M$ diagonalizes $A$, so $MAM^{-1} = D$ for $D$ a diagonal matrix. Then $M(I+N)M^{-1}=MIM^{-1}+MNM^{-1} = D$, but $MIM^{-1}=I$, so in fact $MNM^{-1}=D-I$, and thus $N$ must be diagonalizable. However, we still have $(MNM^{-1})^n=0$, so if $D-I$ has elements $\lambda_1,\ldots,\lambda_n$ on the diagonal, then $\lambda_i^n=0$, which implies $\lambda_i=0$, hence $D-I=0$. Thus we would have $D=I$ and $N=0$, contradicting the assumption that $A\ne I$ (which forces $N\ne 0$).
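The smallest concrete instance of this: $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ has the single eigenvalue $1$, and its eigenspace is only the span of $e_1$; equivalently, if $A$ were diagonalizable it would be similar to $I$, and the only matrix similar to $I$ is $I$ itself, contradicting $A \neq I$.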


Friday 8 February 2019

summation - Mathematical induction proof that $sumlimits_{k=2}^{n-1} {k choose 2} = {n choose 3} $



I have a problem with a mathematical equation; I can't arrive at the given solution.



This is the equation: $\sum\limits_{k=2}^{n-1} {k \choose 2} = {n \choose 3} $



I should show by induction that the expression is correct for every $n \geq 3$.



I started with the base case of the induction, $n = 3$:




$\sum\limits_{k=2}^{2} {k \choose 2} = {2 \choose 2} = 1 = {3 \choose 3}$
That’s correct.



Now I don’t know how to proceed with the induction step. The solution should be:



$\sum\limits_{k=2}^{n-1} {k \choose 2} + {n \choose 2} = {n \choose 3} + {n \choose 2} = {n + 1 \choose 3} $



Why ${n \choose 2}$? It would be awesome if someone could tell me the steps or give some tips on how I could reach that solution.



Edit:

Well, this is the point I have reached:




  1. Beginning of the induction: $\sum\limits_{k=2}^{2} {k \choose 2} = {2 \choose 2} = 1 = {3 \choose 3}$


  2. Induction step $n + 1$: $\sum\limits_{k=2}^{n} {k \choose 2} = \sum\limits_{k=2}^{n-1} {k \choose 2} + {n \choose 2}$.
    That's my assumption: $\sum\limits_{k=2}^{n-1} {k \choose 2} = {n \choose 3}$.


  3. Replace the first addend with the assumption, so I get ${n \choose 2} + {n \choose 3}$


  4. Written out with the binomial coefficient formula: $\frac{n!}{2!(n-2)!} + \frac{n!}{3!(n-3)!}$





I have it like @david-mitra. What's wrong with that approach?



Thanks.


Answer



We wish to prove:
$$
\sum_{k=2}^{n-1} {k\choose 2}= {n\choose3},\quad n\ge3.\tag{1}
$$



If you want to prove that equation (1) holds for all $n\ge 3$ using induction:




1) Show that the equation is true for $n=3$:



$$
\sum_{k=2}^{3-1} {k\choose 2}= {2\choose2}= {3\choose3}.
$$



2) Assume that the equation is true for $n=m$:
$$
\color{maroon}{\sum_{k=2}^{m-1} {k\choose 2}}= \color{maroon}{m\choose3 }.
$$



3) Now show that the equation must be true for $n=m+1$:



$$\eqalign{
\sum_{k=2}^{(m+1)-1} {k\choose 2}
&=\sum_{k=2}^{m } {k\choose 2}\cr
&=\sum_{k=2}^{m-1 } {k\choose2} +\color{darkgreen}{\sum_{k=m}^m {k\choose2}}\cr
&=\biggl[\ \color{maroon}{ \sum_{k=2}^{m-1 } {k\choose 2}}\ \biggr]+\color{darkgreen}{m\choose2}\cr
&= \color{maroon}{ m\choose 3} +{m\choose2}\cr
&= {m!\over (m-3)!\, 3!}+ {m!\over (m-2)!\, 2!}\cr
&={ m(m-1)(m-2)\over 3!} +{ m(m-1) \over 2!} \cr
&= m(m-1)\left({m-2\over 6}+{3\over 6 }\right)\cr
&= { m(m-1)(m+1)\over 6} \cr
&={m+1\choose 3}.
}
$$
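In hindsight, the factorial computation in the last few lines is exactly Pascal's rule, ${m \choose 3} + {m \choose 2} = {m+1 \choose 3}$, which finishes the induction step in one stroke.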



real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital's rule I easily get $$ \lim_{h\rightarrow 0}...