Friday 31 March 2017

limits - Calculate $\lim\limits_{x \to 0^+}{ e^{\frac{1}{x^2}}\sin x}$

I want to investigate the value of $\lim\limits_{x \to 0^+}{e^{\frac{1}{x^2}}\sin x}$. Since the exponential tends to infinity really fast while the sine tends to $0$ quite slowly in comparison, I believe the limit to be infinity, but I cannot find a way to prove it. I tried rewriting using the standard limit $\frac{\sin x}{x}$, as $\frac{\sin x}{x}\cdot xe^{\frac{1}{x^2}}$, but I still get an indeterminate form "$1 \cdot 0 \cdot \infty$".
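One way to make the heuristic precise (a sketch, using the elementary bounds $\sin x\ge\frac{2}{\pi}x$ on $[0,\frac{\pi}{2}]$ and $e^t\ge t$):

$$e^{\frac{1}{x^2}}\sin x \;\ge\; \frac{1}{x^2}\cdot\frac{2x}{\pi} \;=\; \frac{2}{\pi x}\;\longrightarrow\;+\infty \quad\text{as } x\to 0^+,$$

so the limit is indeed $+\infty$.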

algebra precalculus - Why is $\sqrt[3]{18+5\sqrt{13}} + \sqrt[3]{18-5\sqrt{13}} = 3$?





How to show that $$\sqrt[3]{18+5\sqrt{13}} + \sqrt[3]{18-5\sqrt{13}} = 3?$$




This equality comes from solving $$t^3 - 15 t - 4 = 0$$ using Cardano's formula and knowing the solution $t_1=4$.



I have attempted multiplying the whole expression by $(\sqrt[3]{18+5\sqrt{13}})^2 - (\sqrt[3]{18-5\sqrt{13}})^2$, but with no success. Then I solved for one cube root and raised everything to the third power; also no success.


Answer



Let $(a + b\sqrt{13})^3 = 18 + 5\sqrt{13}$ for $a, b \in \Bbb Q$.



Expanding the LHS and moving everything to one side gives




$$(a^3 + 39 ab^2 - 18 ) +\sqrt{13}(3a^2 b + 13 b^3 - 5) = 0.$$



From this we get,



$$\begin{cases}a^3 + 39 ab^2 - 18 = 0 \\ 3a^2 b + 13 b^3 - 5 = 0\end{cases}$$



Solving the system gives $ a = \dfrac 32$ and $ b = \dfrac12$.
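As a quick check that these values work, substitute them back into the system:

$$a^3 + 39ab^2 = \tfrac{27}{8} + 39\cdot\tfrac{3}{2}\cdot\tfrac{1}{4} = \tfrac{27+117}{8} = 18, \qquad 3a^2b + 13b^3 = 3\cdot\tfrac{9}{4}\cdot\tfrac{1}{2} + \tfrac{13}{8} = \tfrac{27+13}{8} = 5.$$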



Therefore




$$\sqrt[3]{(18 + 5\sqrt{13})} = \dfrac 32 +\dfrac12\sqrt{13}$$



Similarly,



$$\sqrt[3]{(18 - 5\sqrt{13})} = \dfrac 32 -\dfrac12\sqrt{13}$$



Hence the sum is $3$.


discrete mathematics - What's the shortest way to prove $\sum_{t=1}^n t^2=\frac{n(n+1)(2n+1)}6$ by mathematical induction?




Is there a quick way to prove the following? The method I used seems a bit too long and lengthy, which I'd rather not post here.






$$\sum_{t=1}^n t^2=1^2+2^2+3^2+\dots+t^2=\frac{n(n+1)(2n+1)}6$$


Answer



I assume the equation is meant to be $\sum_{t=1}^{n}t^2 = 1^2 + 2^2 + \dots + \color{red}{n}^2$ instead of $\sum_{t=1}^{n}t^2 = 1^2 + 2^2 + \dots + \color{red}{t}^2$.



The equation holds for $n=1$ because

$$\sum_{t=1}^{1} t^2 = 1^2 = 1$$
and
$$\frac{1(1+1)(2\times 1+1)}{6} = 1$$
so
$$\sum_{t=1}^{1} t^2 = \frac{1(1+1)(2\times 1+1)}{6}$$



Now if the equation holds for $n = k$ where $k \in \mathbb{N}_{\ge 1}$, then
$$\sum_{t=1}^{k} t^2 = 1^2 + 2^2 + \dots + k^2 = \frac{k(k+1)(2k+1)}{6} = \frac{2k^3 + 3k^2 + k}{6}$$
and hence
$$\sum_{t=1}^{k+1} t^2 = 1^2 + 2^2 + \dots + k^2 + (k+1)^2 = \frac{k(k+1)(2k+1)}{6} + (k+1)^2 = \frac{2k^3 + 3k^2 + k}{6} + (k^2 + 2k +1) = \frac{2k^3 + 3k^2 + k}{6} + \frac{6k^2 + 12k+6}{6} = \frac{2k^3+9k^2+13k+6}{6}$$




Also
$$\frac{[k+1]([k+1]+1)(2[k+1]+1)}{6} = \frac{(k+1)(k+2)(2k+3)}{6} = \frac{2k^3+9k^2+13k+6}{6}$$
so
$$\sum_{t=1}^{k+1}t^2 = \frac{[k+1]([k+1]+1)(2[k+1]+1)}{6}$$
and thus the equation holds for $n=k+1$.



Therefore by induction
$$\sum_{t=1}^{n}t^2 = 1^2 + 2^2 + \dots + n^2 = \frac{n(n+1)(2n+1)}{6}$$




As a straightforward induction proof, it cannot get any shorter.



Hope it helps. :)


Thursday 30 March 2017

calculus - How to find $\lim_{n\to\infty}\frac{\sin(1)+\sin(\frac{1}{2})+\dots+\sin(\frac{1}{n})}{\ln(n)}$


$$ \lim_{n\to\infty}\frac{\sin(1)+\sin(\frac{1}{2})+\dots+\sin(\frac{1}{n})}{\ln(n)}
$$




I tried applying Cesàro–Stolz and found it to be $\sin\left(\frac{1}{n+1}\right)\Big/\ln\frac{n+1}{n}$ (where $\ln$ is $\log_e$), which would be $1$, so the limit is $1$; but in my book the answer is $2$. Am I doing something wrong, or can't Cesàro–Stolz be applied here?
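For reference, carrying the Stolz step through (a sketch, using $\sin t\sim t$ and $\ln(1+t)\sim t$ as $t\to 0$):

$$\frac{a_{n+1}-a_n}{b_{n+1}-b_n}=\frac{\sin\frac{1}{n+1}}{\ln(n+1)-\ln n}=\frac{\sin\frac{1}{n+1}}{\ln\left(1+\frac{1}{n}\right)}\;\longrightarrow\;1,$$

so Stolz–Cesàro does apply here and gives $1$ for the original limit.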

fractions - Let $a = \frac{9+\sqrt{45}}{2}$. Find $\frac{1}{a}$



I've been wrapping my head around this question lately:




Let



$$a = \frac{9+\sqrt{45}}{2}$$



Find the value of



$$\frac{1}{a}$$



I've done it like this:




$$\frac{1}{a}=\frac{2}{9+\sqrt{45}}$$



I rationalize the denominator like this:



$$\frac{1}{a}=\frac{2}{9+\sqrt{45}} \times (\frac{9-\sqrt{45}}{9-\sqrt{45}})$$



This is what I should get:



$$\frac{1}{a} = \frac{2(9-\sqrt{45})}{81-45} \rightarrow \frac{1}{a}=\frac{18-2\sqrt{45}}{36}$$




Which I can simplify to:



$$\frac{1}{a}=\frac{2(9-\sqrt{45})}{36}\rightarrow\frac{1}{a}=\frac{9-\sqrt{45}}{18}$$



However, this answer can't be found among the options of my multiple-choice question (the options were given in an image).






Any hints on what I'm doing wrong?


Answer




$$\frac { 1 }{ a } =\frac { 9-\sqrt { 45 } }{ 18 } =\frac { 3\left( 3-\sqrt { 5 } \right) }{ 18 } =\frac { 3-\sqrt { 5 } }{ 6 } $$


probability - Is $\mathbb{P}( X > Y) = \mathbb{P}( X+k > Y+k )$ true, where $X$ and $Y$ are random variables?

My intuition tells me that $\mathbb{P}(X+k > Y+k) = \mathbb{P}(X > Y)$ should be true, since there should be a bijection between the outcomes underlying these two probabilities. But clearly there's a misunderstanding somewhere, and I'm having trouble pinning it down.




A counterexample would be:
Suppose $X \sim \operatorname{Bin}(n, 0.5)$ and $Y \sim \operatorname{Bin}(n+1, 0.5)$.
$\mathbb{P}(X < Y) = \mathbb{P}(n-X < n+1-Y)$, since the probability distribution of $\operatorname{Bin}(n, 0.5)$ and $n - X$ should be exactly the same.



Thanks in advance for your help!

elementary number theory - $a\equiv b\pmod {m_1}, a\equiv b\pmod {m_2} \implies a\equiv b\pmod L$





If $\exists a, b, m_1, m_2 \in \mathbb{Z}, a\equiv b\pmod {m_1}$ and $a\equiv b\pmod {m_2}$, then $\exists k \in \mathbb{Z}, a=b+{m_1m_2k\over \gcd(m_1,m_2)} \implies a\equiv b\pmod L$, where $L$ is the l.c.m. of $m_1$ and $m_2$.




I am trying to derive this, starting with the first part, with no success:




  1. $\exists l \in \mathbb{Z}, a -b = l\cdot m_1 \implies a =b +l\cdot m_1$

  2. $\exists m \in \mathbb{Z}, a-b = m\cdot m_2 \implies a =b +m\cdot m_2$




Equating both, $b +l\cdot m_1 = b +m\cdot m_2$



I am unable to get the proof started.






Based on a comment by @saulspatz, the new attempt at a proof is:
$\exists r_1, r_2 \in \mathbb{Z},$ s.t.
(i) $m_1 = r_1\gcd(m_1, m_2),$ (ii) $m_2 = r_2\gcd(m_1, m_2)$
Multiplying (i) by (ii), we get:
$m_1m_2 = r_1r_2(m_1, m_2)^2$
If, for a suitable integer $k$, we take $r_1r_2={1\over k}$, that is not possible, as $r_1, r_2$ are themselves integers.




So, attempting something anyway:
Let $r_1r_2=k'$, so that $m_1m_2 = k'(m_1, m_2)^2 \implies (m_1, m_2) = {m_1m_2 \over {k'\cdot (m_1, m_2)}}$



But, $k'$ is an integer, and the final form has an integer $k$ in numerator.


Answer



Let $M = mn$ be a multiple of $n$.



Then $a \equiv b \pmod n \implies a \equiv b + kn \pmod M$ for some $k$ with $0 \le k < m$.

So suppose we have $L = \text{lcm}(m_1,m_2) = k_1m_1 = k_2m_2$.




Then if $a \equiv b \pmod{m_1}$ and $a\equiv b \pmod{m_2}$, we get $a \equiv b + l_1m_1 \pmod L$ and $a \equiv b + l_2m_2 \pmod L$ where $0 \le l_1 < k_1$ and $0 \le l_2 < k_2$.



So $l_1m_1 \equiv l_2m_2 \pmod L$. Now $0\le l_1m_1 < k_1m_1 = L$ and $0\le l_2m_2 < k_2m_2 = L$, so $l_1m_1 = l_2m_2$, and this common value is a common multiple of $m_1$ and $m_2$.



But $L$ is the least common multiple and $l_1m_1 = l_2m_2 < L$, so $l_1m_1 = l_2m_2 = 0$, i.e. $a \equiv b \pmod L$.



======



$a \equiv b \pmod{m_1}$ means $a = b + j\cdot m_1 = b+ j\cdot\gcd(m_1,m_2)\cdot\frac {m_1}{\gcd(m_1,m_2)}$.




And $a\equiv b \pmod{m_2}$ means $a = b + l\cdot m_2 = b + l\cdot\gcd(m_1,m_2)\cdot\frac {m_2}{\gcd(m_1,m_2)}$



So $j\cdot\gcd(m_1,m_2)\cdot\frac {m_1}{\gcd(m_1,m_2)}=l\cdot\gcd(m_1,m_2)\cdot\frac {m_2}{\gcd(m_1,m_2)}$



So $j\cdot\frac {m_1}{\gcd(m_1,m_2)}=l\cdot\frac {m_2}{\gcd(m_1,m_2)}$



But $\frac {m_1}{\gcd(m_1,m_2)}$ and $\frac {m_2}{\gcd(m_1,m_2)}$ are relatively prime.



So $\frac {m_1}{\gcd(m_1,m_2)}|l$ and $\frac {m_2}{\gcd(m_1,m_2)}|j$




So $j\cdot\frac {m_1}{\gcd(m_1,m_2)}=l\cdot\frac {m_2}{\gcd(m_1,m_2)}= k\cdot\frac {m_1}{\gcd(m_1,m_2)}\cdot\frac {m_2}{\gcd(m_1,m_2)}$



And $j\cdot\gcd(m_1,m_2)\cdot\frac {m_1}{\gcd(m_1,m_2)}=l\cdot\gcd(m_1,m_2)\cdot\frac {m_2}{\gcd(m_1,m_2)}= k\cdot\gcd(m_1,m_2)\cdot\frac {m_1}{\gcd(m_1,m_2)}\cdot\frac {m_2}{\gcd(m_1,m_2)}=k\cdot\frac {m_1m_2}{\gcd(m_1,m_2)}$



So $a = b+ k\cdot\frac {m_1m_2}{\gcd(m_1,m_2)}$



And $a \equiv b \pmod{\frac {m_1m_2}{\gcd(m_1,m_2)}}$



And $L = \frac {m_1m_2}{\gcd(m_1,m_2)} = $ lowest common multiple of $m_1, m_2$.
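A brute-force numerical check of the statement, for reassurance (a throwaway script; not part of the proof):

from math import gcd
from itertools import product

# If a = b (mod m1) and a = b (mod m2), then a = b (mod lcm(m1, m2)).
# Testing d = a - b over one full period of length L suffices.
for m1, m2 in product(range(1, 30), repeat=2):
    L = m1 * m2 // gcd(m1, m2)  # lcm(m1, m2)
    for d in range(L):
        if d % m1 == 0 and d % m2 == 0:
            assert d % L == 0, (m1, m2, d)
print("ok")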


abstract algebra - Monic Irreducible Polynomials over Finite Field



Let $F=\mathbb{F}_{q}$ be a finite field (so $q=p^k$ for some prime $p$ and positive integer $k$), and let $\varphi(d)$ denote the number of monic irreducible polynomials of degree $d$ in $F[X]$. I'm supposed to show that $\displaystyle{\sum_{d \mid n} d \varphi(d) = q^n}$.



I see there are previous questions about this topic and even a paper, but all (save one) seem to employ the use of the Möbius function and Möbius inversion - both topics I have not covered yet in class. There is also this answer, but it appears to hinge upon the extension having prime degree. Is there some way to show this without explicitly coming up with a formula for the number of irreducible monic polynomials of a given degree in $F[X]$?



Any help would be greatly appreciated.


Answer




The splitting field of $X^{q^n}-X$ is ${\bf F}_{q^n}$. Every irreducible $\pi(X)$ of degree $d\mid n$ splits in ${\bf F}_{q^n}$ (its roots generate ${\bf F}_{q^d}\subseteq{\bf F}_{q^n}$), and every element of ${\bf F}_{q^n}$ is a root of $X^{q^n}-X$, and thus $\pi(X)\mid(X^{q^n}-X)$. Furthermore $X^{q^n}-X$ has no repeated roots, so each irreducible $\pi(X)$ of degree $d\mid n$ must appear in its factorization precisely once. Therefore we have the conclusion



$$X^{q^n}-X=\prod_{d\mid n}\prod_{\deg\pi=d}\pi(X).$$



Taking degrees yields $\displaystyle q^n=\sum_{d\mid n}d\varphi(d)$ (and from here Möbius inversion yields $\varphi(d)$).
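A small concrete case, to see the count in action: take $q=2$ and $n=2$. Over $\mathbb{F}_2$,

$$X^{4}-X = X(X+1)(X^2+X+1),$$

and indeed $q^2 = 4 = 1\cdot\varphi(1) + 2\cdot\varphi(2) = 1\cdot 2 + 2\cdot 1$: the monic irreducibles of degree $1$ are $X$ and $X+1$, and $X^2+X+1$ is the only one of degree $2$.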


Wednesday 29 March 2017

functional analysis - Convergence in a norm vs convergence in a strong sense



Let $X = \text{BUC}(\mathbb{R})$ with supremum norm, where BUC is space of functions which are bounded and uniformly continuous.
Let's define an operator:
$$T_n: X \to X$$

$$(T_nf)(x) = f\bigg(x + \frac{1}{n} \bigg).$$
We are to prove that:
$$\lim_{n \to \infty} ||T_nf - f||_{\infty} = 0 \tag{1}$$
but
$$\lim_{n \to \infty} ||T_{n} - I||_{op} \neq 0 \tag{2},$$
where $||T||_{op}$ is the operator norm.






My solution




Firstly let's prove $(1)$



Because $f \in X$ we can fix $\epsilon >0$ and find $\delta > 0$ such that:
$$\forall x,y \in \mathbb{R}:\ |x-y| < \delta \implies |f(x) - f(y)| < \epsilon.$$
Of course $$\forall \delta>0\ \exists N \in \mathbb{N}\ \forall n>N:\ \frac{1}{n} < \delta.$$
Thus
$$\lim_{n \to \infty} \bigg| \bigg| f \bigg(x + \frac{1}{n} \bigg) - f(x) \bigg| \bigg|_{\infty} = \lim_{n \to \infty} \sup_{x \in \mathbb{R}} \bigg|f \bigg(x + \frac{1}{n} \bigg) - f(x) \bigg| = 0.$$



Now let's think about $(2)$




Let's define a sequence of functions: $g_n(x) = \sin(n \pi x)$.
It's obvious that $\forall_{n \in \mathbb{N}} g_n \in X$. Moreover $\forall_{n \in \mathbb{N}} ||g_n||_{\infty} = 1.$
$$||T_n - I||_{op} = \sup_{||f||_{\infty} = 1} ||T_nf - f||_{\infty} \tag{3}.$$
Because we take the supremum over all $f \in X$ with $||f||_{\infty} = 1$, $(3)$ has to be greater than or equal to the following:
$$||T_ng_n - g_n||_{\infty} = \sup_{x \in \mathbb{R}} |-2 \sin(n \pi x)| = 2.$$
In conclusion
$$\liminf_{n \to \infty} ||T_n - I||_{op} \ge 2, \quad\text{so}\quad ||T_n - I||_{op} \not\to 0.$$
I wonder if my attempts are correct?
If they are I would like to ask why the sequence $g_n$ works in $(2)$ but doesn't in $(1)$? I know that the two norms are something different but honestly I can't see any difference in the calculations. I mean that putting $g_n$ into $(1)$ would lead to something very similar. Where's the difference?


Answer



If by $BUC(\mathbb R)$ you mean the space of bounded and uniformly continuous functions $f:\mathbb R\to\mathbb R$ (or $\mathbb R\to\mathbb C$, whichever you prefer), then the proofs are correct.




I think your confusion comes from the distinction between strong convergence and norm convergence of operators. Think of strong convergence of operators as similar to pointwise convergence of functions from introductory real analysis, and norm convergence of operators as similar to uniform convergence of functions. In pointwise convergence (resp. strong convergence), rates of convergence can vary wildly from point to point (resp. function to function), while in uniform convergence (resp. norm convergence), points (resp. functions) must tend to the limit in a uniform manner.


calculus - Prove $\sum_{n=2}^{\infty}\ln\left(1+\frac{(-1)^n}{n^p}\right)$ converges if and only if $p>\frac 12$



I am trying to prove that the following series converges:
$$\sum_{n=2}^{\infty}\ln\left(1+\frac{(-1)^n}{n^p}\right)$$

if and only if $p>\frac 12$.



I've seen solutions to this exact problem here, but I am not looking for a general solution. I've tried to solve this problem, and could not continue my solution, so I came here to ask for your help on how to continue.



My solution



$(\star)$ I have proven that given a sequence $a_n$, such that $\lim_{n\to\infty}a_n=0$, if: $\sum_{n=1}^{\infty}(a_{2n}+a_{2n+1})$ converges or diverges to infinity, then $\sum_{n=2}^{\infty}a_n$ converges or diverges to infinity, respectively.



I have also proven that $\sum_{n=2}^{\infty}\ln\left(1+\frac{(-1)^n}{n^p}\right)$ converges absolutely for every $p>1$, converges for $p=1$, and diverges for $p=\frac 12$. So what I have left, essentially, is to prove that the series converges for every $\frac 12 < p < 1$.




We can see that:



$$\sum_{n=1}^{\infty}(a_{2n}+a_{2n+1})\equiv\sum_{n=1}^{\infty}\left(\ln(1+\frac{1}{(2n)^p})+\ln(1-\frac{1}{(2n+1)^p})\right)$$



Since the terms are negative, we can use the limit comparison test with (after using logarithm rules):



$$\frac{(2n)^p-(2n+1)^p+1}{(4n^2+2n)^p}$$



Now, on the one hand, I don't have the logarithm anymore; But on the other hand, I don't know how to deal with this series. I tried to use the limit test again with $\frac{1}{n^{2p}}$, but to no avail.




I would be very glad to hear how to continue my solution, or rather simplify it. I prefer using the claim I've proven (marked with $(\star)$).



Thank you very much!


Answer



Your limit test can be made to work.
$$
\lim_{x\rightarrow\infty}\frac{\frac{(2x)^p-(2x+1)^p+1}{(4x^2+2x)^p}}{\frac{1}{x^{2p}}}=\lim_{x\rightarrow\infty}\frac{(2x)^p-(2x+1)^p+1}{(4+\frac{2}{x})^p}=\frac{1}{4^p}
$$

Here I used the fact that $\lim_{x\rightarrow\infty}((2x+1)^p-(2x)^p)=0$. You can quickly prove this by noting that $f:x\rightarrow x^p$ is a concave function for $p<1$ ($f''<0$), so you can say that $(2x+1)^p-(2x)^p < 2p(2x)^{p-1}$. But $p-1<0$, so this last expression goes to $0$.


Polynomial roots of degree 2



Every polynomial of degree $n$ has $n$ complex roots.



Then what about $P(x)=x^2$?




Isn't $x=0$ the only possible root of this polynomial ?


Answer



A polynomial of degree $n$ has $n$ roots counting multiplicity. That means that if we have a factor $$(x - a)^k$$



we count the root $a$ a total of $k$ times, once per linear factor. In particular, $x^2$ has a root at $0$ of multiplicity $2$.


calculus - Starting index for geometric series test



I was just wondering if the geometric series test for series of the form $\sum ar^{n}$ needs the index to start at $0$ or $1$. From my understanding of the proof using partial sums, calculating the convergent value as $\frac{a}{1-r}$ requires the series to start at $0$. I ask because I've been seeing a lot of posts, and even my assignment solutions, neglecting whether it starts at $1$ or $0$. If my assignment assumes I do not know how to change indices when it starts at $n=1$, and this happens to not be negligible, how do I go about using the geometric series test and calculating the convergent value?


Answer



Recall that for the geometric series



$$ S_n = \sum_{j=0}^n ar^j = \frac{ar^{n+1}-a}{r-1} \implies S_{\infty}=\frac{a}{1-r} \quad\text{for } |r|<1,
$$



then if we start from $n=1$




$$ S_n = \sum_{j=1}^n ar^j = \frac{ar^{n+1}-a}{r-1}-a \implies S_{\infty}=\frac{a}{1-r}-a \quad\text{for } |r|<1.$$


calculus - Why can we treat infinitesimals as real numbers in integration by substitution?



During integration by substitution we normally treat infinitesimals as real numbers, though I have been made aware that they are not real numbers but merely symbolic, and yet we still can, apparently, treat them as real numbers. For instance, suppose we want to integrate the expression $3x^3(x^4+1)^3$. A common way to do this is to let $u=x^4+1$, where $\frac{du}{dx}=4x^3$, and thus $du=4x^3\,dx$, which is appropriately used in our substitution to obtain $\frac{3}{4}\int u^3\, du$, and then we simply directly integrate this new integrand. However, while I understand the process and why we do it in such a manner, I am perplexed as to why we can still rigorously treat the infinitesimals as real numbers. So, my question is if anyone can elaborate on exactly why it is logically rigorous to treat infinitesimals as real numbers during substitution for integration.



(Note: My question does not concern as to what "dx" means in integration simply because my question is defined in the prospect of treating infinitesimal derivatives as ratios specifically in integration by substitution, where other questions do not specifically address. )


Answer




The issue is that there is a whole bunch of measure theory hidden behind those "legitimate" calculations. If you just learned how to integrate elementary functions, you have too much to learn before you get there.



The key in this is the Radon-Nikodym Theorem, which says that if you have two measures $\mu,\nu$ on a measurable space $X$ such that $\mu$ is absolutely continuous with respect to $\nu$, then there exists a unique density function (which we denote by $\frac{d\mu}{d\nu}$) such that for any function $g$ integrable with respect to both $\mu$ and $\nu$,
$$
\int_X g \, d\mu = \int_X g \frac{d\mu}{d\nu} \, d\nu.
$$
The function $\frac{d\mu}{d\nu} : X \to \mathbb R$ is called the Radon-Nikodym derivative of $\mu$ with respect to $\nu$.



So in other words, in the rigorous treatment, the infinitesimals are not treated as infinitely small quantities, but rather as measures; while doing a change of variables, it can be shown that the Radon-Nikodym derivative you obtain is in fact the derivative of the function you use to change variables (i.e. in $du = f'(x) dx$, $\, f'(x) = \frac{du}{dx}$ where $du$ and $dx$ are two measures on the real line and $u = f(x)$ ; $f$ must be a diffeomorphism).




Now let me assume that this was not satisfactory for you; there is also the field of mathematics called non-standard analysis, which defines hyperreal numbers; I suggest you look at the Wikipedia page on this for more details. It allows the treatment of infinitesimals as quantities, so you can play around with them.



Hope that helps,


complex analysis - Laurent Series Coefficients Problem.



I am struggling with a question regarding Laurent's Theorem and the coefficients of a Laurent series. The question is attached above. I know the general formula for coefficients. I have set $z=e^{i \theta}$. I have plugged in also $f(z)$ into the coefficient formula given by Laurent's Theorem. I however, cannot get the answer to be or look like the integral for the coefficients with the cosine etc, given in the question, which is what we are trying to prove or show. Do you have any pointers or suggestions? That would really help. Thank you.


Answer



The function $e^{z+\frac{1}{z}}$ is holomorphic everywhere in $0 < |z| < \infty$ and, therefore, has a Laurent series expansions
$$
e^{z+\frac{1}{z}} = \sum_{n=-\infty}^{\infty}a_n z^n, \;\;\; 0 < |z| < \infty.

$$
The Laurent series coefficients $a_n$ are given by
\begin{align}
a_n&=\frac{1}{2\pi i}\oint_{|z|=1}e^{z+\frac{1}{z}}\frac{1}{z^{n+1}}dz \\
&= \frac{1}{2\pi }\int_{-\pi}^{\pi}e^{e^{i\theta}+e^{-i\theta}}e^{-i(n+1)\theta}e^{i\theta}d\theta \\
&= \frac{1}{2\pi }\int_{-\pi}^{\pi}e^{2\cos\theta}e^{-in\theta}d\theta \\
&= \frac{1}{2\pi }\left(\int_{-\pi}^{0}+\int_{0}^{\pi}\right)e^{2\cos\theta}e^{-in\theta}d\theta \\
&= \frac{1}{2\pi }\left(
- \int_{\pi}^{0}e^{2\cos(-\theta)}e^{in\theta}d\theta+\int_{0}^{\pi}e^{2\cos\theta}e^{-in\theta}d\theta\right) \\
&= \frac{1}{2\pi }\int_{0}^{\pi}e^{2\cos\theta}(e^{in\theta}+e^{-in\theta})d\theta \\

&= \frac{1}{\pi}\int_{0}^{\pi}e^{2\cos\theta}\cos(n\theta)d\theta.
\end{align}


Tuesday 28 March 2017

geometry - Probability that the circle will intersect the square.



We randomly pick two points inside a square, according to a uniform probability distribution, draw the line segment which connects those two points, and use the line segment as a diameter for our circle.





What is the probability that the boundary of the circle will intersect the boundary of the square?




I cannot come up with a correct and clear idea for the solution. Here is what I tried to do:
Let the square sides have length $2a$. Let the points be $(x_1, y_1)$ and $(x_2, y_2)$, where we have $(0, 0)$ in the leftmost lower corner.


Answer



Let $P_1,P_2$ be the picked points and $M$ be the midpoint of $P_1 P_2$.
Our random circle intersects the square iff the distance of $M$ from the boundary of the square is less than the length of $MP_1$. Thus, assuming that the square is given by $[-1,1]^2$ and $P_1=(x_1,y_1)$, $P_2=(x_2,y_2)$, we want the probability of the event




$$ \min\left(1-\left|\frac{x_1+x_2}{2}\right|,1-\left|\frac{y_1+y_2}{2}\right|\right)\leq \frac{1}{2}\sqrt{(x_1-x_2)^2+(y_1-y_2)^2} $$
with $x_1,x_2,y_1,y_2$ being independent and uniformly distributed random variables over the interval $[-1,1]$. The complementary event is given by
$$\left\{\begin{array}{rcl} \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}&\leq& 2-|x_1+x_2|\\\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}&\leq& 2-|y_1+y_2|\end{array}\right.$$
which is equivalent to
$$\left\{\begin{array}{rcl} (y_1-y_2)^2&\leq& 4+4x_1x_2-4|x_1+x_2|\\(x_1-x_2)^2&\leq& 4+4y_1y_2-4|y_1+y_2|\end{array}\right.$$
An efficient way for computing this probability is probably to assume that $x_1,x_2$ have already been fixed, compute the area of the subset of the $y_1 y_2$ plane described by the previous inequalities as a function of $x_1$ and $x_2$, then integrate such function over $[-1,1]^2$ with respect to $dx_1\,dx_2$.
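For a numerical cross-check, here is a minimal Monte Carlo sketch that estimates the probability by testing the displayed inequality directly (a rough estimate only, not the closed form):

import random

def estimate(trials=1_000_000):
    """Estimate P(the circle with diameter P1P2 meets the boundary of [-1,1]^2)."""
    hits = 0
    for _ in range(trials):
        x1, y1 = random.uniform(-1, 1), random.uniform(-1, 1)
        x2, y2 = random.uniform(-1, 1), random.uniform(-1, 1)
        r = 0.5 * ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5   # circle radius
        # distance from the midpoint M to the boundary of the square
        d = min(1 - abs((x1 + x2) / 2), 1 - abs((y1 + y2) / 2))
        if d <= r:
            hits += 1
    return hits / trials

print(estimate())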


integration - Multiple integral over a disc



I would need some help on this integration problem:



$$I=\int_0^{2\pi}\int_0^{R}\int_0^{2\pi}\int_0^{R}\exp(-a\ r_{12}) \ r_1 \ r_2 \,\mathrm{d}r_1\,\mathrm{d}\phi_1\,\mathrm{d}r_2\,\mathrm{d}\phi_2$$



Here, $r_1, r_2, \phi_1$ and $\phi_2$ are the polar coordinates for the integral on a disc with radius R. And $r_{12}$ is the distance between the coordinates, which can be calculated by



$$r_{12}=\sqrt{r_1^2+r_2^2-2r_1r_2\cos(\phi_1-\phi_2)}.$$




Below was a plot of the problem for $a=1$, $R=1$ and fixed coordinates of the first point at $(0,0)$ (plot omitted).


Answer



This integral has a closed form in terms of special functions. First, start with the integral in the form
$$ I(\alpha, R) = \int_{D(0,R)}da\int_{D(0,R)}db e^{-\alpha|a-b|} $$
with parameters $\alpha$, $R$, and differentiate with respect to $R$:
$$
\begin{aligned}
\frac{dI}{dR}(\alpha, R) &= \lim_{\epsilon\to0}

\frac{2}{\epsilon}\int_{D(0,R)}da\int_{D(0,R+\epsilon)\setminus D(0,R)}db\, e^{-\alpha|a-b|}
\\&=
4\pi R\int_{D(0,R)}da\, e^{-\alpha|a - R e_1|}
\\&= 4\pi R^3\int_{D(0,1)}da\, e^{-\alpha R|a-e_1|}.
\end{aligned}$$
Here $e_1=(1,0)$ is a unit vector on the unit circle, and the integral simplified because $\int da\,e^{-\alpha|a-b|}$ is independent of where $b$ is on the circle $\partial D(0,R)$ of radius $R$. The factor of $2$ came from the two occurrences of $D(0,R)$ that are symmetric, and $2\pi R$ is the area of the strip $D(0,R+\epsilon)\setminus D(0,R)$ divided by $\epsilon$.



To evaluate this integral, write $a$ in polar coordinates, but with the origin at the point $(1,0)$, so that the unit disk has the form
$$\pi/2 \leq \phi \leq 3\pi/2, \qquad 0 \leq r \leq -2\cos\phi = \rho(\phi). $$
Then the integral for $\frac{dI}{dR}$ becomes

$$ \begin{aligned}
\frac{dI}{dR} &= 4\pi R^3\int_{\pi/2}^{3\pi/2}d\phi\int_0^{-2\cos\phi}r\,dr\, e^{-\alpha R r}
\\&= 4\pi R^3\int_{\pi/2}^{3\pi/2}d\phi\frac{1-(1+\alpha R\rho)e^{-\alpha R\rho}}{\alpha^2 R^2}.
\end{aligned} $$
The integral over $\phi$ can be done with the help of computer algebra:
$$ \frac{dI}{dR} =
\frac{4\pi^2 R}{\alpha^2}\big(1-I_0(2\alpha R)+2\alpha R I_1(2\alpha R)-2\alpha R \mathbf{L}_{-1}(2\alpha R)+\mathbf{L}_0(2\alpha R)\big), $$
where $I$ are the modified Bessel functions, and $\mathbf{L}$ are the modified Struve functions.



Then the integral $I(\alpha,R) = \int \frac{dI}{dR}$ can also be done by a computer in terms of Struve and hypergeometric functions, which yields

$$ I(\alpha,R) = \frac{2\pi R}{\alpha^3}\big(
-2+\alpha \pi R-\alpha \pi R \,{}_0\mathbf{F}_1(2,\alpha^2R^2)+2 \alpha^3 \pi R^3 \,{}_0\mathbf{F}_1(3,\alpha^2 R^2)-2 \alpha \pi R \mathbf{L}_{-2}(2 a R)+\pi \mathbf{L}_1(2\alpha R)\big) $$
where ${}_0\mathbf{F}_1(a;z) = {}_0F_1(a;z)/\Gamma(a)$ is the regularized hypergeometric function.



This is the Mathematica expression for the above formula:



1/a^3 2 \[Pi] R (-2 + a \[Pi] R - 
a \[Pi] R Hypergeometric0F1Regularized[2, a^2 R^2] +
2 a^3 \[Pi] R^3 Hypergeometric0F1Regularized[3, a^2 R^2] -
2 a \[Pi] R StruveL[-2, 2 a R] + \[Pi] StruveL[1, 2 a R])



EDIT Differentiating the integral. Let $D=D(0,R)$, $C=D(0,R+\epsilon)\setminus D(0,R)$. Then
$$
\left(\int_{D}+\int_C\,da\right)\left(\int_D+\int_C\,db\right) - \int_Dda\int_C db
= \int_Cda\int_D db + \int_Dda\int_Cdb + \int_C\int_C\,da\,db.$$
The term $\int_C\int_C$ is on the order of $O(\epsilon^2)$ because the area of $C$ is $2\pi R\epsilon$, so it may be ignored.



Because the function $e^{-\alpha|a-b|}$ is symmetric in $a,b$,
$$\int_Cda\int_D db + \int_Dda\int_Cdb = 2\int_Dda\int_Cdb. $$

Also, the integral $\int_D e^{-\alpha|a-b|}\,da$ depends only on the distance between $b$ and the origin, so picking a special value of $b=Re_1$ gives
$$ 2\int_D da\int_C db\,e^{-\alpha|a-b|} = 2\mathop{\mathrm{area}}(C)\int_Dda\,e^{-\alpha|a-Re_1|}. $$


sequences and series - How can we show $\frac{\pi^2}{8} = 1 + \frac1{3^2} +\frac1{5^2} + \frac1{7^2} + \dots$?

Let $f(x) = \frac4\pi \cdot (\sin x + \frac13 \sin (3x) + \frac15 \sin (5x) + \dots)$. For $x=\frac\pi2$ we have
$$f(x) = \frac{4}{\pi} ( 1 - \frac13 +\frac15 - \frac17 + \dots) = 1,$$
so obviously:
$$ 1 - \frac13 +\frac15 - \frac17 + \dots=\frac{\pi}{4}$$
Now how can we prove that:
$$\frac{\pi^2}{8} = 1 + \frac1{3^2} +\frac1{5^2} + \frac1{7^2} + \dots$$
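One standard route, assuming the Basel sum $\sum_{n\geq 1}\frac{1}{n^2}=\frac{\pi^2}{6}$ may be used: split the full sum into even and odd terms,

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \sum_{k=1}^{\infty}\frac{1}{(2k)^2} + \sum_{k=0}^{\infty}\frac{1}{(2k+1)^2} = \frac{1}{4}\sum_{n=1}^{\infty}\frac{1}{n^2} + \sum_{k=0}^{\infty}\frac{1}{(2k+1)^2},$$

so that $\sum_{k\geq 0}\frac{1}{(2k+1)^2} = \frac{3}{4}\cdot\frac{\pi^2}{6} = \frac{\pi^2}{8}$. Alternatively, Parseval's identity applied to the square-wave series above gives the same value directly.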

logic - How do I know what we already know in a proof when the assumptions we can use are even more complex?



I think one reason why I find proofs so difficult is that I don't know what it is we're permitted to know and what we assume we don't know.



For example, "proving $0x = 0$" seems so incredibly obvious and yet I have to prove it with "simpler" assumptions and try to find some clever workaround:




$$\begin{array} {rcll} 0x &=& 0x &\text{ by reflexivity $x=x$}\\
(0+0)x &=& 0x &\text{ by additive identity $x+0=x$ and... substitution?}\\
0x + 0x &=& 0x &\text{ by distributive axiom}\\
0x + 0x &=& 0x + 0 &\text{ by additive identity again}\\
0x &=& 0 &\text{ by cancellation law}\\
\end{array}$$



But anyways these proofs already feel unsatisfactory to me. How do I know what "rules" I'm allowed to work with? For instance the distributive law being an axiom I never would have guessed in a hundred years was an axiom because it seems far more "complicated" than the idea of proving $0x=0$.




Is there a general way to know what we can and cannot use in order to prove something? Why are these "more complicated" assumptions like distributive law (which seem to encapsulate several rules at once, like addition, multiplication, restructuring of the statement, etc) allowed?


Answer



As others have said in the comments, you know what rules and axioms you can use because you are given or choose a particular collection. One of the most common misunderstandings I see is the thought that there is one set of rules and axioms that all mathematicians agree on. This is especially pernicious in logic but exists in other areas too. Usually what happens is a student sees one definition and assumes that that is the definition. It usually takes a while before they realize that all of these things have a variety of different but equivalent definitions, and in most cases also have genuinely different definitions. In the case of logic, just looking at the sheer number of entries under "logic" in the Stanford Encyclopedia of Philosophy, many of which correspond to different logics, gives an indication. And it is not comprehensive by any means.



For your example, all the reasoning is equational, so which logic we're using is not as critical. We do need the laws for equality which could be roughly described as "a congruence with respect to everything". First, equality is an equivalence relation meaning it is reflexive, $x=x$; symmetric, if $x = y$ then $y = x$; and transitive, if $x = y$ and $y = z$ then $x = z$. Then what makes equality equality is the indiscernibility of identicals which is usually expressed as a rule rather than an axiom and states: if $x = y$ and $P$ is some predicate with free variable $z$, then if $P[x/z]$ is provable so is $P[y/z]$ where $P[x/z]$ means $P$ with all free occurrences of $z$ replaced with $x$, i.e. substituting $x$ for $z$, and similarly for $P[y/z]$. (Again, there are other ways of presenting these rules and axioms. Indeed, this set is redundant...)



The proof of a statement like $0x=0$ is common in the theory of rings. For example, using the definition of a ring given on Wikipedia, this is a theorem but not an axiom. The aspect I mentioned before strikes here too. There are other choices you could take for the axioms of a ring, including ones where $0x=0$ is taken axiomatically. Also, the term "ring" is ambiguous as many authors consider "rings without unit" (i.e. which don't necessarily have an element that behaves like $1$). The definition Wikipedia gives is a ring with unit. These definitions are not equivalent.



Anyway, using the definition on Wikipedia, one way to prove $0x=0$ is the following: $$\begin{align}
&0x-0x=0 \tag{additive inverse}

\\ \iff & (0+0)x-0x=0 \tag{additive identity}
\\ \iff & (0x+0x)-0x = 0 \tag{left distributivity}
\\ \iff & 0x+(0x-0x) = 0 \tag{additive associativity}
\\ \iff & 0x + 0 = 0 \tag{additive inverse}
\\ \iff & 0x = 0 \tag{additive identity}
\end{align}$$



Each $\iff$ is hiding a use of the indiscernibility of identicals. For example, the first step is: let $P$ be $zx-0x=0$, the additive identity axiom for $0$ states $0+0=0$ or, by symmetry, $0=0+0$, if $P[0/z]$ is provable, then $P[(0+0)/z]$ is provable. This gives the $\Rightarrow$ direction, the $\Leftarrow$ direction just uses the same equality the other way.



So why don't we just take $0x=0$ as an axiom? Well, we could. However, doing so wouldn't let us derive the other axioms and given the other axioms we can derive this one. Using a minimal collection of axioms makes it easier to verify if something is a ring (or a ring homomorphism). We would have to explicitly verify $0x=0$ while having it be a theorem means we can derive it once and for all for all rings. Another factor affecting the choice of axioms is also evident in Wikipedia's presentation. We often want to build our definitions in a modular fashion (which often leads to non-minimal lists of axioms). In this case, Wikipedia's definition starts with the axioms of a commutative group. That is, a ring is a commutative group and simultaneously a monoid whose "multiplication" distributes over the group operation. This way of presenting rings allows us to "import" theorems about commutative groups and monoids and apply them to rings. We could, of course, still do this if we had a different presentation of the axioms of a ring, but we'd have to derive the commutative group/monoid structure first, and this structure may not be obvious from the alternate presentation.




If you really want to get a visceral feel for all of this, I recommend getting familiar with a proof assistant like Agda, Coq, LEAN, or several others. I particularly recommend Agda as it puts all the gory details of the proofs right in your face. Most other proof assistants use a tactics-based approach which means you typically write "proof scripts" which are little programs that search for proofs for you. You don't typically see the proofs in those systems. Nevertheless, any of them will make it very apparent what it means to work with a given definition, what is and is not available at any point in time, and why structuring definitions one way versus another may be desirable. They all have pretty steep learning curves though and LEAN and Coq have better introductory material than Agda.


probability dice experiment

I am currently taking a probability course and I am stuck on a supposedly easy discrete probability question here:



Problem: Consider the experiment of rolling a fair die independently until the same number/face occurs 2 successive times and let $X$ be the trial on which the repeat occurs, e.g. if the rolls are $2,3,4,5,1,2,4,5,5$, then $X=9$.



a. find the probability function $f(x) = P(X=x)$




b. compute $E[X]$



Attempt at a solution: I know $X$ has a discrete probability distribution, and that we are dealing with independent events. However, $X$ can be anything, up to infinity, or it may never happen that there are two successive equal values. Here's what I got:



Obviously, the distribution is geometric, so the answer should be:



$P(X=x) = f(x) = (5/6)^{x-1} \cdot (1/6)$ for $x = 0,1,2,3,\dots$ and $0$ otherwise, but I'm stuck here. Please help.
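A quick simulation can be used to sanity-check a candidate $f(x)$; a minimal sketch (note that the earliest possible repeat is at trial $x=2$, which suggests the exponent should be $x-2$ rather than $x-1$, and that $E[X]=1+6=7$):

import random
from collections import Counter

def sample_X():
    """Roll a fair die until the same face occurs twice in a row;
    return the (1-based) trial on which the repeat occurs."""
    prev, trial = random.randint(1, 6), 1
    while True:
        trial += 1
        cur = random.randint(1, 6)
        if cur == prev:
            return trial
        prev = cur

N = 200_000
counts = Counter(sample_X() for _ in range(N))
for x in range(2, 8):
    # empirical P(X=x) next to the candidate (5/6)^(x-2) * (1/6)
    print(x, counts[x] / N, (5 / 6) ** (x - 2) * (1 / 6))
print("mean:", sum(k * v for k, v in counts.items()) / N)  # should be near 7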

calculus - Relationship between two integrals



I'm reviewing for an intro calculus exam, and the following problem appears on a past final exam:



If: $$\int^3_1 \frac{1}{x^4\sqrt{1+x}}\, dx = k$$



What is:
$$\int^3_1 \frac{1}{x^5\sqrt{1+x}}\, dx $$ (I'm assuming the answer will be in terms of $k$)



It seems that most basic integration techniques (substitution, integration by parts, trig substitution, etc.) will not allow the solution of the integral, and I'm not sure how else to approach this problem at my level. I've run this by both my lecturers, and they cannot find a solution in a reasonable amount of time either. I'm curious because it seems there would be a simple solution or rule I'm ignorant of (considering this is on an intro calc exam), but I'm stumped. Where am I going wrong? Thanks!



Answer



Let $I_4$ be the given integral (i.e. $k$) and let $I_5$ be the integral you want. I am just expanding on Zach Stone's comment, so credit is due to him: one can integrate by parts and write



$$I_4 = \int^3_1 \frac{1}{x^4}\frac{1}{\sqrt{1+x}}\, dx $$ which will give
$$I_4 = \left. \frac{2\sqrt{1+x}}{x^4}\right|_1^3 + \int^3_1 \frac{8\sqrt{1+x}}{x^5}dx$$



The trick is now to multiply and divide the second term within the integral by $\sqrt{1+x}$. The equation then easily simplifies to



$$I_4 = \frac{4}{81} - 2\sqrt{2} +8I_4 +8I_5$$ which gives the required relation between $I_4$ and $I_5$. I verified that it agrees with Brian Tung's calculations from Wolfram.




So, in short, yes, this was possible using simple integration by parts. Once again, credit to Zach Stone
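A numeric check of the derived relation, using scipy.integrate.quad (a sanity check only; nothing here is needed for the argument):

from math import sqrt
from scipy.integrate import quad

I4, _ = quad(lambda x: 1 / (x**4 * sqrt(1 + x)), 1, 3)
I5, _ = quad(lambda x: 1 / (x**5 * sqrt(1 + x)), 1, 3)

# The relation derived above: I4 = 4/81 - 2*sqrt(2) + 8*I4 + 8*I5
print(I4, 4 / 81 - 2 * sqrt(2) + 8 * I4 + 8 * I5)  # should agree closely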


Prove that if $n$ is not the square of a natural number, then $sqrt{n}$ is irrational.












I have this homework problem that I can't seem to be able to figure out:

Prove: If $n\in\mathbb{N}$ is not the square of some other $m\in\mathbb{N}$, then $\sqrt{n}$ must be irrational.

I know that a number being irrational means that it cannot be written in the form $\displaystyle\frac{a}{b}: a, b\in\mathbb{N}$ $b\neq0$ (in this case, ordinarily it'd be $a\in\mathbb{Z}$, $b\in\mathbb{Z}\setminus\{0\}$) but how would I go about proving this? Would a proof by contradiction work here?



Thanks!!


Answer



Let $n$ be a positive integer such that there is no $m$ such that $n = m^2$. Suppose $\sqrt{n}$ is rational. Then there exists $p$ and $q$ with no common factor (beside 1) such that



$\sqrt{n} = \frac{p}{q}$



Then




$n = \frac{p^2}{q^2}$.



However, $n$ is a positive integer and $p$ and $q$ have no common factors besides $1$, so $q = 1$ (any prime dividing $q$ would divide $p^2$, and hence $p$). This gives that



$n = p^2$



Contradiction since it was assumed that $n \neq m^2$ for any $m$.


linear algebra - Showing that a Matrix is Nonsingular




Let $n$ be a positive integer and let $F$ be a field. Let $A \in M_{n×n}(F )$ be a matrix for which there exists a matrix $ B \in M_{n×n}(F )$ satisfying $I + A + AB = O$. Show that $A$ is nonsingular.



Since $I + A + AB = O$, we can get
$$I+A(I+B)=0$$
$$A[-(I+B)]=I$$



I know it seems $-(I+B)$ is the inverse of $A$; however, I am not sure how to get
$-(I+B)A=I$. I may have chosen a wrong way to solve this problem. I am really stuck on this question.


Answer




\begin{align*} I_n + A + AB & = O \\
I_n + A(I_n + B) & = O \\
A(I_n + B) & = -I_n.
\end{align*} Taking the determinant of both sides gives $$\det(A)\det(I_n+B) = (-1)^n,$$ hence $\det(A)$ is non-zero and $A$ is invertible.


Monday 27 March 2017

complex analysis - Evaluate $\sum_{n=1}^\infty \frac{(-1)^{n+1}n^2}{n^4+1}$




Evaluate

$$\sum_{n=1}^\infty \frac{(-1)^{n+1}n^2}{n^4+1}$$




Does anyone have any smart ideas how to evaluate such a sum? I know one solution with complex numbers and complex analysis but I'm looking for some more smart or sophisticated methods.


Answer



I would not say that it is elegant, but:



The form $n^4+1$ in the denominator suggests that one should be able to get this series by expanding a combination of a hyperbolic and trigonometric function in a Fourier series.



Indeed, after some trial and error, the following function seems to work:




$$
\begin{gathered}
\left(\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)-\sin
\left(\frac{\pi }{\sqrt{2}}\right) \cosh \left(\frac{\pi }{\sqrt{2}}\right)\right)\cos \left(\frac{x}{\sqrt{2}}\right) \cosh \left(\frac{x}{\sqrt{2}}\right) \\
+ \left(\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)+\sin \left(\frac{\pi }{\sqrt{2}}\right) \cosh
\left(\frac{\pi }{\sqrt{2}}\right)\right)\sin \left(\frac{x}{\sqrt{2}}\right) \sinh
\left(\frac{x}{\sqrt{2}}\right)
\end{gathered}
$$




It is even, and its cosine coefficients are
$$
\frac{\sqrt{2}\bigl(\cos(\sqrt{2}\pi)-\cosh(\sqrt{2}\pi)\bigr)(-1)^{n+1} n^2}{\pi(1+n^4)},\quad n\geq 1.
$$
(The zero:th coefficient is also zero). Evaluating at $x=0$ (the series converges pointwise there) gives
$$
\sum_{n=1}^{+\infty}\frac{(-1)^{n+1}n^2}{1+n^4}=
\frac{\pi\left(\sin
\left(\frac{\pi }{\sqrt{2}}\right) \cosh \left(\frac{\pi }{\sqrt{2}}\right)-\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)\right)}{\sqrt{2}\bigl(\cosh(\sqrt{2}\pi)-\cos(\sqrt{2}\pi)\bigr)}\approx 0.336.

$$


sequences and series - Finding the $n^{\text{th}}$ term of $\frac{1}{4}+\frac{1\cdot 3}{4\cdot 6}+\frac{1\cdot 3\cdot 5}{4\cdot 6\cdot 8}+\ldots$



I need help on finding the $n^{\text{th}}$ term of this infinite series?
$$
s=\frac{1}{4}+\frac{1\cdot 3}{4\cdot 6}+\frac{1\cdot 3\cdot 5}{4\cdot 6\cdot 8}+\ldots
$$

Could you help me in writing the general term/solving?


Answer



The first thing you can do is start with $a_1=\frac{1}{4}$ and then realize that $$a_{n+1} =a_n\,\frac{2n+1}{2n+4};$$
that doesn't seem to get you anywhere, however.



As commentator Tenali notes, you can write the numerator as $$1\cdot 3\cdots (2n-1) = \frac{1\cdot 2 \cdot 3 \cdots (2n)}{2\cdot 4\cdots (2n)}=\frac{(2n)!}{2^nn!}$$



The denominator, on the other hand, is $$ 4\cdot 6\cdots \left(2(n+1)\right) = 2^n\left(2\cdot 3\cdots (n+1)\right) = 2^n(n+1)!$$



So this gives the result:




$$a_n = \frac{(2n)!}{2^{2n} n!(n+1)!} = \frac{1}{2^{2n}}\frac{1}{n+1}\binom{2n}{n} = \frac{1}{2^{2n}}C_n$$



where $C_n$ is the $n^{\text{th}}$ Catalan number.



If all you want is the $n^{\text{th}}$ term, that might be enough; you can even skip the part about Catalan numbers and just write it as $a_n=\frac{1}{4^n(n+1)}\binom{2n}n$.



As it turns out, the Catalan numbers have a generating function (see the link above:)



$$\frac{2}{1+\sqrt{1-4x}} = \sum_{n=0}^\infty C_nx^n$$




So, if the series converges when $x=\frac{1}{4}$, then $\sum_{n\ge 0}C_n\left(\frac14\right)^n = 2$; subtracting the $n=0$ term, the sum $s$ asked about (which starts at $n=1$) equals $2-1=1$.



(It does converge, since $C_n \sim \frac{2^{2n}}{n^{3/2}\sqrt{\pi}}$.)
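A short script checking the closed form against the product definition, and the generating-function value (a sanity check only; convergence of the partial sums is slow, of order $n^{-1/2}$):

from math import comb

def a(n):
    return comb(2 * n, n) / (4 ** n * (n + 1))   # a_n = C_n / 4^n

# a_n should match the product form 1*3*...*(2n-1) / (4*6*...*(2n+2))
num = den = 1
for n in range(1, 10):
    num *= 2 * n - 1
    den *= 2 * n + 2
    assert abs(a(n) - num / den) < 1e-12

# Partial sums of sum_{n>=0} C_n / 4^n creep up towards 2 (so s = 2 - 1 = 1).
total, term = 0.0, 1.0                  # term = a_0 = 1
for n in range(0, 100_000):
    total += term
    term *= (2 * n + 1) / (2 * n + 4)   # a_{n+1} = a_n * (2n+1)/(2n+4)
print(total)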


real analysis - Help proving elementray properties of the Riemann Integral using only the Riemann sums definition of the Riemann integral



Let $f:[a,b]\rightarrow\mathbb{R}$. A tagged partition $\mathcal{P}$ of $[a,b]$ is a set of ordered pairs defined as $$\mathcal{P}:=\{([x_{k-1},x_k],t_k)\}^n_{k=1},$$ where $a=x_0<x_1<\dots<x_n=b$ and $t_k\in[x_{k-1},x_k]$ for each $k$. Write $\|\mathcal{P}\|:=\max_{k}(x_k-x_{k-1})$ for the mesh of $\mathcal{P}$, $S(f,\mathcal{P}):=\sum_{k=1}^{n}f(t_k)(x_k-x_{k-1})$ for the associated Riemann sum, and $\mathbb{P}_{[a,b]}$ for the set of all tagged partitions of $[a,b]$. We say $f$ is Riemann integrable on $[a,b]$ with integral $L$ if $$(\forall\epsilon>0)(\exists\delta>0)(\forall \mathcal{P}\in\mathbb{P}_{[a,b]})\bigg(\|\mathcal{P}\|<\delta\Rightarrow |S(f,\mathcal{P})-L|<\epsilon\bigg)$$




$\mathfrak{R}[a,b]$ will denote set of all Riemann integrable functions on $[a,b]$.



A Cauchy integrability criterion, which is a straightforward exercise, says that $f\in\mathfrak{R}[a,b]$ iff
$$(\forall\epsilon>0)(\exists\delta>0)(\forall \mathcal{P}\in\mathbb{P}_{[a,b]})(\forall \mathcal{Q}\in\mathbb{P}_{[a,b]})\bigg(\|\mathcal{P}\|<\delta\wedge\|Q\|<\delta\Rightarrow |S(f,\mathcal{P})-S(f,\mathcal{Q})|<\epsilon\bigg).$$



And in fact $f\in\mathfrak{R}[a,b]$ iff for any sequence of tagged partitions $\mathcal{P}_n$ on $[a,b]$ s.t. $\|\mathcal{P}_n\|\rightarrow 0$, the sequence of Riemann sums $S(f,\mathcal{P}_n)$ is a Cauchy sequence.



Another easy equivalent condition is the following "Squeeze Theorem" for Riemann integrable functions, which states that $f\in \mathfrak{R}[a,b]$ iff
$$(\forall\epsilon>0)(\exists g_\epsilon,h_\epsilon\in\mathfrak{R}[a,b])\bigg((\forall x\in[a,b])\ g_\epsilon(x)\leq f(x)\leq h_\epsilon(x)\ \wedge\ \int_a^b (h_\epsilon-g_\epsilon)<\epsilon\bigg).$$




Using the above definition and results as well as any other relevant results, please could anyone help me show the following elementary properties of Riemann integrable functions.



(i) If $f\in\mathfrak{R}[a,b]$ then $f\in\mathfrak{R}[c,d]$ for any $a\leq c\leq d \leq b$.



(ii) If $f$ is monotonic on $[a,b]$ then $f\in\mathfrak{R}[a,b]$.



(iii) If $f\in\mathfrak{R}[a,b]$ then $f^2\in\mathfrak{R}[a,b]$.



I know that the following properties are very easy to show when one uses the Darboux definition of the Riemann integral and the upper and lower Riemann sums, and in fact it is quite straightforward to show that the Darboux definition and the above Riemann sum definition are equivalent; but for completeness I want to see if these results can be shown using just the Riemann sums definition and associated results. I have pondered on these and have just hit a brick wall, as the definition is not very amenable, so any help and ideas will be greatly appreciated and needed. Thanks in advance.


Answer




Hint for (i)



You have to observe two things:



1) you can extend every partition of $[c,d]$ to a partition of $[a,b]$ with the same mesh;



being $f_1$ the restriction of $f$ to $[c,d]$



2) if you extend two tagged partitions $\mathcal {P}_1$ and $\mathcal {Q}_1$ of $[c,d]$ to tagged partitions $\mathcal {P}$ and $\mathcal {Q}$ of $[a,b]$ by the same additional points and tags, then you get $$S(f_1,\mathcal {P}_1)-S(f_1,\mathcal {Q}_1)=S(f,\mathcal {P})-S(f,\mathcal {Q})$$




So a clever use of the Cauchy integrability criterion gives the proof.


analysis - Measure theory limit question





Let $(X, \cal{M},\mu)$ be a finite positive measure space and $f$ a $\mu$-a.e. strictly positive measurable function on $X$. If $E_n\in\mathcal{M}$, for $n=1,2,\ldots $ and $\displaystyle \lim_{n\rightarrow\infty} \int_{E_n}f d\mu=0$, prove that $\displaystyle\lim_{n\rightarrow\infty}\mu(E_n)=0$.



Answer



Since $f$ is almost everywhere strictly positive, the increasing sequence of sets $$A_n=\{x\in X:f(x)>1/n\}$$
has the property that $$\lim_{n\to\infty} \mu(A_n)=\mu(X).$$ Now $\int_E f \,d\mu \ge \frac{1}{n}\,\mu(E\cap A_n)$ for every measurable set $E$ and every $n>0$.



So let $\epsilon>0$. Choose $n$ so that $\mu(X\backslash A_n)<\epsilon/2$. For $N$ large enough, $\int_{E_N}f~d\mu<\epsilon/(2n)$ and hence $$\mu(E_N\cap A_n) \le n\int_{E_N}f\,d\mu < \frac{\epsilon}{2},$$ so that $\mu(E_N)\le\mu(E_N\cap A_n)+\mu(X\backslash A_n)<\epsilon$.

Sunday 26 March 2017

real analysis - Show that $\lim_{p \to \infty} \| f \|_p = \| f \|_\infty$

In a space with measure $1$, $||f||_p$ is an increasing function with respect to $p$. To show that $\lim_{p \rightarrow \infty} ||f||_p=||f||_{\infty}$ we have to show that $||f||_{\infty}$ is the supremum, right??




To show that, we assume that $||f||_{\infty}-\epsilon$ is the supremum.



From the essential supremum we have that $m(\{|f|>||f||_{\infty}-\epsilon\})=0$.



So, we have to show that $m(\{|f|>||f||_{\infty}-\epsilon\})>0$.



Let $A=\{|f|>||f||_{\infty}-\epsilon\}$.



We have that $\int_A |f|^p \leq \int |f|^p \leq ||f||_{\infty}^p$.




$\int_A |f|^p >\int_A (||f||_{\infty}-\epsilon)^p=(||f||_{\infty}-\epsilon)^p m(A)$



So, $m(A)^{1/p} (||f||_{\infty}-\epsilon)<||f||_p \leq ||f||_{\infty}$



How could we continue to show that $m(A)>0$??



EDIT:



Is it as follows?




We have that $0<||f||_{\infty}-\epsilon<||f||_{\infty}$ for some $\epsilon>0$.



$||f||_{\infty}$ is the essential supremum. So, from the definition we have that $m\left ( \{|f|>||f||_{\infty}-\epsilon\} \right )>0$.



Let $A=\{|f|>||f||_{\infty}-\epsilon\}$.



We have that $\int_A |f|^p \leq \int |f|^p \leq ||f||_{\infty}^p$.



$\int_A |f|^p >\int_A (||f||_{\infty}-\epsilon)^p=(||f||_{\infty}-\epsilon)^p m(A)$




So, $m(A)^{1/p} (||f||_{\infty}-\epsilon)<||f||_p \leq ||f||_{\infty}$



Taking the limit $p \rightarrow +\infty$, and using that $m(A)>0$ implies $\lim_{p \rightarrow +\infty}m(A)^{1/p}=1$, we get the following:

$$||f||_{\infty}-\epsilon\leq\lim_{p \rightarrow +\infty} ||f||_p$$



Is this correct?? How do we conclude that $\lim_{p \rightarrow +\infty} ||f||_p=||f||_{\infty}$ ??

sequences and series - Does $\zeta(-1)=-1/12$ or $\zeta(-1) \to -1/12$?

I saw the Numberphile channel on YouTube, and they proved $1+2+3+\cdots=-1/12$. Also, I read this.







So, which one is correct



$$\zeta(-1)=-1/12\\ \text{or} \\\zeta(-1) \to -1/12$$



Equivalent to:



$$1+2+3+\cdots=-1/12\\ \text{or} \\1+2+3+\cdots \to -1/12$$







My question: Does it "equal" or "converge"?






Question Explanation:



I mean by "$\to$" "approaches to", like $x\to a $ means $\forall \epsilon>0, |x-a|<\epsilon.$

number theory - Given any $10$ consecutive positive integers, does there exist one integer which is relatively prime to the product of the rest?

Given any $10$ consecutive positive integers, does there exist one integer which is relatively prime to the product of the rest?

elementary number theory - Solving congruence system with no multiplicative inverse




I am trying to find a way of solving congruence systems of the form:



$$ bx \equiv a \pmod y $$




Where $b$ and $y$ are not prime to each other.



My current way of solving congruence systems where $b$ and $y$ are prime to each other is to find the multiplicative inverse of $b$ modulo $y$ and multiply $b$ (which will make it $1$) and $a$ by this value.



Example:



$$ 13x \equiv 3 \pmod{17} $$



I calculate the multiplicative inverse of $13$, multiply both $13$ and $3$ by that value, and thus I have solved this equation.




But I don't know how I can do that when $b$ and $y$ are not prime to each other.



Example:



$$3x \equiv 3 \pmod 9$$



How would I solve this ?


Answer



A congruence

$$ax\equiv b\pmod{n}\tag{1}$$
is soluble iff $\gcd(a,n)\mid b$. In this case (1) is equivalent to
$$\frac agx\equiv \frac bg\pmod{\frac ng}\tag{2}$$
where $g=\gcd(a,n)$. As $a/g$ is coprime to $n/g$ you may solve (2)
by multiplicative inverses if you like.
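The recipe in this answer translates directly into code; a minimal sketch (using Python's built-in pow(a, -1, n) for the modular inverse, available since Python 3.8):

from math import gcd

def solve_congruence(a, b, n):
    """Solve a*x = b (mod n).  Returns (x0, m) meaning x = x0 (mod m),
    or None when gcd(a, n) does not divide b (no solution)."""
    g = gcd(a, n)
    if b % g != 0:
        return None
    a, b, n = a // g, b // g, n // g   # reduce to the coprime case (2)
    return (b * pow(a, -1, n)) % n, n  # a is now invertible mod n

print(solve_congruence(13, 3, 17))  # first example: x = 12 (mod 17)
print(solve_congruence(3, 3, 9))    # second example: x = 1 (mod 3)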


calculus - Evaluate: $\int_{\gamma}\mathbf{x}\cdot\mathbf{n}(\mathbf{x})\, ds(\mathbf{x})$

Let $\mathbf{x}=(x,y)\in\mathbb{R}^2$. Let $\mathbf{n}(\mathbf{x})$ denote the unit outward normal to the ellipse $\gamma$ whose equation is given by $$\frac{x^2}{4}+\frac{y^2}{9}=1$$ at the point $\mathbf{x}$ on it. Evaluate: $$\int_{\gamma}\mathbf{x}\cdot\mathbf{n}(\mathbf{x})\, ds(\mathbf{x}).$$
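A hint in the form of a worked identity (a sketch using the divergence theorem, with $E$ the region enclosed by $\gamma$, which has semi-axes $2$ and $3$):

$$\int_{\gamma}\mathbf{x}\cdot\mathbf{n}(\mathbf{x})\, ds(\mathbf{x}) = \iint_{E}\nabla\cdot\mathbf{x}\; dA = \iint_{E}2\; dA = 2\,\mathrm{area}(E) = 2\cdot\pi\cdot 2\cdot 3 = 12\pi.$$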

complex analysis - Proving that $\sum_{|j| < n} (n-|j|) \exp(ij\lambda)= \frac{\sin^2(\frac 1 2 n\lambda)}{\sin^2(\frac 1 2 \lambda)}$




I want to show that $\sum_{|j| < n} (n-|j|) \exp(ij\lambda)= \dfrac{\sin^2(\frac 1 2 n\lambda)}{\sin^2(\frac 1 2 \lambda)}$.





I know from Proving $\sum\limits_{k=0}^{n}\cos(kx)=\frac{1}{2}+\frac{\sin(\frac{2n+1}{2}x)}{2\sin(x/2)}$
\begin{equation}
(1)\sum_{j=1}^{n-1} \cos(j\lambda) = -\frac 1 2 + \frac{\sin(\frac{2n-1} 2 \lambda)}{2\sin(\frac \lambda 2)}.
\end{equation}



and from the Hint in the first answer in How to show that $\frac{1}{\tan(x/2)}=2 \sum_{j=1}^{\infty}\sin(jx)$ in Cesàro way/sense?
$$
(2)\frac d {d\lambda} \sum_{j=1}^{n-1} \sin(j\lambda) =\frac d {d\lambda} \frac{\cos({\frac \lambda 2})-\cos(\frac{n+1}2 \lambda)}{2\sin(\frac\lambda 2)} = \frac{n\sin(\frac{n+1} 2 \lambda)\sin(\frac \lambda 2)+\cos(\frac{n\lambda}2) - 1}{4\sin^2(\frac \lambda 2)}.
$$




This is what I did so far:
\begin{align}
\sum_{|j| < n} (n-|j|) \exp(ij\lambda) &= n + 2\sum_{j=1}^{n-1}(n-j) \cos(j\lambda) \\
&=n + 2n \sum_{j=1}^{n-1} \cos(j\lambda) - \frac d {d\lambda}\sum_{j=1}^{n-1} \sin(j\lambda)\\
&= \frac{n\sin(\frac{2n-1} 2 \lambda)\sin(\frac \lambda 2) + \frac 1 2 + \frac 1 2 n \sin(\frac {n+1} 2 \lambda)\sin(\frac \lambda 2) + \frac 1 2 \cos( \frac{n\lambda}2)}{\sin^2(\frac \lambda 2)}.
\end{align}



If this and the proposition is true, it should be true that
$$
n\sin(\frac{2n-1} 2 \lambda)\sin(\frac \lambda 2) + \frac 1 2 + \frac 1 2 n \sin(\frac {n+1} 2 \lambda)\sin(\frac \lambda 2) + \frac 1 2 \cos( \frac{n\lambda}2) - \sin^2(\frac {n\lambda} 2) = 0.

$$



But for $n=2$, I get
$$
3\sin(\frac {3\lambda} 2) \sin(\frac \lambda 2) + \frac 1 2 + \frac 1 2 \cos(\lambda) - \sin^2 \lambda \ne 0.
$$
(checked with Wolframalpha)



So there is probably something wrong with my calculations. But I checked everything twice and don't see my mistake. Do you see it?




EDIT



Trying to prove it as proposed in the second answer.



Let $z:=\exp(i\lambda)$ and $p(z):=\sum_{j=0}^{n-1} z^j$.
First I want to show that $p(z)p(z^{-1})=\sum_{|j|<n}(n-|j|)z^j$.

It is
\begin{align}
\sum_{j=1}^{n-1} z^{j}&= p(z) - 1 = \frac{1-z^n}{1-z} - 1.\\

\sum_{j=1}^{n-1} z^{-j}&= p(z^{-1})-1 = \frac{1-z^{-n}}{1-z^{-1}} - 1.\\
\sum_{j=1}^{n-1} jz^{-j} &= i \frac d {d\lambda} p(z^{-1}) = \frac{-nz^{-n}+(n-1)z^{-n-1}+z^{-1}}{(1-z^{-1})^2}.\\
\sum_{j=1}^{n-1} jz^{j} &= -i \frac d {d\lambda} p(z) = \frac{-nz^{n}+(n-1)z^{n+1}+z}{(1-z)^2}.\\
\end{align}
So I have
$$\sum_{|j|<n}(n-|j|)z^j = n + n\sum_{j=1}^{n-1}\left(z^{j}+z^{-j}\right) - \sum_{j=1}^{n-1}j\left(z^{j}+z^{-j}\right).$$
If I now put in the values from above and subtract $p(z)p(z^{-1})$ I should get $0$. But Wolfram Alpha doesn't agree, so I am again on the wrong track. What did I do wrong?




The last step $p(z)p(z^{-1})= \frac{(z^{n/2}-z^{-n/2})^2}{(z^{1/2}-z^{-1/2})^2}$ was easy to show.



EDIT2



I did it now.
It was a lot easier to show directly that $p(z)p(z^{-1})=\sum_{|j|<n}(n-|j|)z^j$ by expanding the product.

Answer



Letting $\alpha=\frac{\lambda}{2}$ to make the expressions cleaner, we get:
$$\begin{align}
\sum_{|j| < n} (n-|j|) \exp(ij\lambda) &= n + 2\sum_{j=1}^{n-1}(n-j) \cos j\lambda \\

&=n + 2n \sum_{j=1}^{n-1} \cos j\lambda - 2\frac d {d\lambda}\sum_{j=1}^{n-1} \sin j\lambda\\
&= \frac{n\sin(2n-1)\alpha\sin\alpha + \frac 1 2 - \frac 1 2 n \sin (n+1)\alpha \sin \alpha - \frac 1 2 \cos n\alpha}{\sin^2\alpha}.
\end{align}$$
Note the minus signs in the numerator. I'm not sure if that solves the problem - there might be an error elsewhere. Wolfram Alpha says when $n=2$ the numerator is $\sin^2 2\alpha$, but $n=1$ doesn't agree.



The easier way to do this sort of problem is via the comment I gave above, using $z=e^{i\lambda}$.



Let $p(z)=\sum_{j=0}^{n-1} z^j$. Show that: $$p(z)p(z^{-1})=\sum_{|j|<n}(n-|j|)z^j.$$

Now, $p(z)=\frac{z^n-1}{z-1}$, and $p(z^{-1})=z^{-(n-1)}p(z)$. Now show: $$p(z)p(z^{-1})= \frac{(z^{n/2}-z^{-n/2})^2}{(z^{1/2}-z^{-1/2})^2}$$




Then let $z=e^{i\lambda}$.


elementary number theory - induction proof of the fact that $n^2-9n+10$ is even



I have the following problem: I need to prove that $n^2-9n+10$ is even, by induction.
I started with $n^2-9n+10 = 2k $ for some integer $k$.
$n=1$: $1-9+10 = 2$ which is even
for $n=k$ : $k^2-9k+10 = 2k $
for $n=k+1$: $(k+1)^2 - 9(k+1) +10 = 2(k+1) $
I do not know how to continue, and would like to ask if you could help me with this matter.
Thanks in advance!!!


Answer



When $n=1$, $1^2-9 \cdot 1 +10 = 2$ is even. Now, suppose there is some $n$ for which $n^2-9 \cdot n + 10$ is even. That is, there is a $l \in \mathbb{Z}$ with $n^2-9 \cdot n + 10=2l$. We need to show that $(n+1)^2-9 \cdot (n+1) +10$ is again even. Note:

\begin{equation}
(n+1)^2-9 \cdot (n+1) +10=n^2+2n+1-9n-9+10\\
=(n^2-9n+10)+2n-8 \\
=2l+2(n-4) \\
=2(l+n-4).
\end{equation}


Saturday 25 March 2017

calculus - $\lim_{x\to 0}\frac{\sin 3x+A\sin 2x+B\sin x}{x^5}$ without series expansion or L'Hospital's rule




If $$f(x)=\frac{\sin 3x+A\sin 2x+B\sin x}{x^5},$$ $x\neq 0$, is continuous at $x=0$, then find $A,B$ and $f(0)$. Do not use series expansion or L'Hospital's rule.




As $f(x)$ is continuous at $x=0$, its limit at $x=0$ should equal its value.
Note that this question is to be solved without series expansion or L'Hospital's rule.

I tried to find the limit $\lim_{x\to 0}\frac{\sin 3x+A\sin 2x+B\sin x}{x^5}$
$\lim_{x\to 0}\frac{\sin 3x+A\sin 2x+B\sin x}{x^5}=\lim_{x\to 0}\frac{3\sin x-4\sin^3x+2A\sin x\cos x+B\sin x}{x^5}=\lim_{x\to 0}\frac{3-4\sin^2x+2A\cos x+B}{x^4}\times\frac{\sin x}{x}$
$=\lim_{x\to 0}\frac{3-4\sin^2x+2A\cos x+B}{x^4}$
As the denominator tends to zero, the numerator has to tend to zero in order for the limit to be finite.
So, $3+2A+B=0. \quad (1)$

I tried but I could not get the second equation between $A$ and $B$. I am stuck here. How do I continue?


Answer



Using trigonometric identities we have

\begin{align}
3-4\sin^2 x+2A\cos x+B&=4(1-\sin^2 x)+2A\cos\left(2\cdot\frac{x}{2}\right)+B-1\\
&=4\cos^2\left(2\cdot\frac{x}{2}\right)+2A\cos\left(2\cdot\frac{x}{2}\right)+B-1\\
&=4\left(1-2\sin^2\frac{x}{2}\right)^2+2A\left(1-2\sin^2 \frac{x}{2}\right)+B-1\\
&=16\sin^4\frac{x}{2}-16\sin^2\frac{x}{2}-4A\sin^2\frac{x}{2}+2A+B+3\\
&=16\sin^4\frac{x}{2}-4(A+4)\sin^2\frac{x}{2}+2A+B+3\\
\end{align}
In order to make the limit finite we must have
$$A+4=0\quad\text{and}\quad 2A+B+3=0\qquad\iff\qquad \color{blue}{A=-4}\quad\text{and}\quad \color{blue}{B=5}$$
By taking those values we get

\begin{align}
\lim_{x\to 0}\frac{\sin 3x\color{blue}{-4}\sin 2x+\color{blue}{5}\sin x}{x^5}&=\left(\lim_{x\to 0}\frac{16\sin^4\frac{x}{2}}{x^4}\right)\left(\lim_{x\to 0}\frac{\sin x}{x}\right)\\
&=\left(\lim_{x\to 0}\frac{\sin \frac{x}{2}}{\frac{x}{2}}\right)^4\left(1\right)\\
&=\color{blue}{1}
\end{align}
Since $f$ is continuous at $0$ it follows $f(0)=1$.


Multiples of 2 numbers that differ by 1

I have two known positive integers, $a$ and $b$. Is there a 'standard' formula to find the lowest (if possible) positive integers $x$ and $y$, so that the following is true?





$$xa = yb + 1$$
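For what it's worth, this is the linear Diophantine equation $xa - yb = 1$, which is solvable exactly when $\gcd(a,b)=1$ (Bézout); a minimal sketch using Python's extended-Euclid-backed modular inverse (the function name is mine):

from math import gcd

def smallest_xy(a, b):
    """Smallest positive x (with matching y) such that x*a == y*b + 1,
    or None when gcd(a, b) != 1 and no solution exists."""
    if gcd(a, b) != 1:
        return None
    x = pow(a, -1, b) or b   # inverse of a mod b; pow(..) is 0 only when b == 1
    y = (x * a - 1) // b
    return x, y

print(smallest_xy(5, 3))   # (2, 3): 2*5 == 3*3 + 1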


discrete mathematics - Proof by Induction - How can I get familiar with it?

I'm taking Discrete Structures now and I can't seem to get comfortable with proof by induction. I understand the concept, and the general procedure...but it all just seems like random algebra manipulation and playing around until you get what you want. There are no specific "rules" or guidelines to follow or trends to notice that I am aware of. And if these do exist, please tell me!



For example: Let $P(n)$ be the statement that $$\forall n\in\Bbb{N}:\ 1^3+2^3+···+n^3=\left(\frac{n(n+1)}{2}\right)^{2}$$



Here is the proof for it. How am I supposed to figure out if the algebraic manipulation I'm attempting is even going to lead me in the right direction?
Base case and inductive hypothesis:



(image omitted)




Proof:



(image omitted)

complex numbers - Calculate Laurent series for $1/\sin(z)$



How can I calculate the Laurent series for




$$f(z)=1/\sin(z)\,?$$



I searched for it and found only the final result; is there a simple way to explain it?


Answer



Using the series for $\sin(z)$ and the formula for products of power series, we can get
$$
\begin{align}
\frac1{\sin(z)}
&=\frac1z\frac{z}{\sin(z)}\\
&=\frac1z\left(1-\frac{z^2}{3!}+\frac{z^4}{5!}-\frac{z^6}{7!}+\cdots\right)^{-1}\\

&=\frac1z\left(1+\frac{z^2}{6}+\frac{7z^4}{360}+\frac{31z^6}{15120}+\cdots\right)\\
&=\frac1z+\frac{z}{6}+\frac{7z^3}{360}+\frac{31z^5}{15120}+\cdots
\end{align}
$$






Using the formula for products of power series



As given in the Wikipedia article linked above,

$$
\left(\sum_{k=0}^\infty a_kz^k\right)\left(\sum_{k=0}^\infty b_kz^k\right)
=\sum_{k=0}^\infty c_kz^k\tag{1}
$$
where
$$
c_k=\sum_{j=0}^ka_jb_{k-j}\tag{2}
$$
Set
$$

c_k=\left\{\begin{array}{}
1&\text{for }k=0\\
0&\text{otherwise}
\end{array}\right.\tag{3}
$$
and
$$
a_k=\left\{\begin{array}{}
\frac{(-1)^j}{(2j+1)!}&\text{for }k=2j\\
0&\text{for }k=2j+1

\end{array}\right.\tag{4}
$$
Using $(2)$, $(3)$, and $(4)$, we can iteratively compute $b_k$.






For example, to compute the coefficient of $z^8$:
$$
\begin{align}
c_8=0

&=b_8-\frac16b_6+\frac1{120}b_4-\frac1{5040}b_2+\frac1{362880}b_0\\
&=b_8-\frac16\frac{31}{15120}+\frac1{120}\frac7{360}-\frac1{5040}\frac16+\frac1{362880}1\\
&=b_8-\frac{127}{604800}
\end{align}
$$
Thus, $b_8=\dfrac{127}{604800}$.
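The iteration above is easy to mechanize. A minimal sketch (using exact rational arithmetic; the function name is mine) that reproduces the coefficients, including $b_8=\frac{127}{604800}$:

from fractions import Fraction
from math import factorial

def z_over_sin_coeffs(N):
    # a_k: coefficients of sin(z)/z = 1 - z^2/3! + z^4/5! - ...
    a = [Fraction(0)] * (N + 1)
    for j in range(N // 2 + 1):
        a[2 * j] = Fraction((-1) ** j, factorial(2 * j + 1))
    # b_k: coefficients of z/sin(z), from c_0 = 1 and c_k = 0 for k >= 1
    b = [Fraction(0)] * (N + 1)
    b[0] = Fraction(1)
    for k in range(1, N + 1):
        b[k] = -sum(a[j] * b[k - j] for j in range(1, k + 1))   # uses a_0 = 1
    return b

print(z_over_sin_coeffs(8)[::2])
# -> [1, 1/6, 7/360, 31/15120, 127/604800]; then 1/sin(z) = (1/z) * sum b_k z^k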


cryptography - Question based on the RSA algorithm

The RSA system was used to encrypt the message M into the cipher-text C = 6. The
public key is given by n = p q = 187 and e = 107. In the following, we will try to crack
the system and to determine the original message M.



(a) What parameters comprises the public key and what parameters the private key?



(b) What steps are necessary to determine the private key from the public key?



(c) Determine the private key for the given system.




(d) What is the original message M?



How would I solve this?
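For (c) and (d) the attack is mechanical once $n$ is factored. A minimal sketch (the function name is mine; pow(e, -1, phi) needs Python 3.8+):

def crack_rsa(n, e, c):
    # (b): factor n, compute phi(n), invert e mod phi(n), then decrypt
    p = next(d for d in range(2, n) if n % d == 0)   # trial division, fine for toy n
    q = n // p
    phi = (p - 1) * (q - 1)       # phi(n) for n = p*q with distinct primes p, q
    d = pow(e, -1, phi)           # private exponent: d * e ≡ 1 (mod phi(n))
    return d, pow(c, d, n)        # (d, M) where M = C^d mod n

print(crack_rsa(187, 107, 6))     # -> (3, 29): 187 = 11*17, phi = 160, d = 3, M = 29

So for (a): the public key is the pair $(n,e)$ and the private key is $d$ (with $p$, $q$, $\varphi(n)$ kept secret); here $M = 29$.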

real analysis - An arctan exponential integral

Prove the following for $\Re(z) >0$



$$ \log(\Gamma(z)) = 2\int_{0}^{\infty} \tan^{-1} \left(\dfrac{t}{z}\right)\dfrac{\mathrm{d}t}{e^{2\pi t} - 1} + \dfrac{\log(2z)}{2} + \left( z - \dfrac{1}{2} \right)\log (z) -z $$




I found this identity on Wolfram Functions here. I can't see where to start solving this problem.



I'm looking for elementary solution, I haven't learned about Complex Analysis or Abel Plana formula yet.



Please help.
Thanks.

Friday 24 March 2017

elementary number theory - Prove that $7^n+2$ is divisible by $3$ for all $n \in \mathbb{N}$




Use mathematical induction to prove that $7^{n} +2$
is divisible by $3$ for all $n ∈ \mathbb{N}$.




I've tried to do it as follow.




If $n = 1$ then $7^1 + 2 = 9$, which is divisible by $3$.
Assume it is true when $n = p$. Therefore $7^{p} +2= 3k $ where $k \in \mathbb{N} $. Consider now $n=p+1$. Then
\begin{align}
7^{p+1} + 2 = 7^p\cdot 7 + 2
\end{align}

I reached a dead end from here. If someone could help me in the direction of the next step it would be really helpful. Thanks in advance.


Answer



Hint




If $7^n+2=3k$ then $$7^{n+1}+2=7(\color{red}{7^n})+2=7(\color{red}{3k-2})+2.$$
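Spelling the hint out (my completion, not part of the original answer):

$$7^{n+1}+2=7(3k-2)+2=21k-12=3(7k-4),$$

which is divisible by $3$, completing the induction.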


combinatorics - Prove for any positive integer $n$, $(4n)!$ is divisible by $2^{3n}\cdot 3^n$



Problem: Prove for any positive integer $n$, $(4n)!$ is divisible by $2^{3n}\cdot 3^n$



Solution given by the professor: $$4! = 2^3\cdot 3$$

$$(4!)^n = 2^{3n}\cdot 3^n$$
$$\frac{(4n)!}{(4!)^n}=\frac{(4n)!}{2^{3n}\cdot 3^n}$$



My question: The steps are pretty straightforward, but I don't understand the last and most crucial step. For $\frac{(4n)!}{2^{3n}\cdot 3^n}$ to be an integer, we need $(4!)^n$ to divide $(4n)!$. Is that a clear property of the factorial? How is it obvious?


Answer



It is true that $(4!)^n\mid(4n)!$, but it is not obvious (at least, not to me). You can use the fact that, for any natural number $m$, $4!\mid m(m+1)(m+2)(m+3)$; after all$$\frac{m(m+1)(m+2)(m+3)}{4!}=\binom{m+3}4.$$So:




  • $4!\mid1\times2\times3\times4$;

  • $4!\mid5\times6\times7\times8$;


  • $\vdots$

  • $4!\mid(4n-3)\times(4n-2)\times(4n-1)\times(4n)$



and therefore $(4!)^n\mid(4n)!$.
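A quick numeric sanity check of both divisibility claims (a sketch, not a proof):

from math import factorial

for n in range(1, 10):
    assert factorial(4 * n) % factorial(4) ** n == 0          # (4!)^n divides (4n)!
    assert factorial(4 * n) % (2 ** (3 * n) * 3 ** n) == 0    # 2^(3n) * 3^n divides (4n)!
print("verified for n = 1..9")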


real analysis - Is a function globally Lipschitz continuous and $\mathcal{C}^1$ if and only if it is $\mathcal{C}^1$ and its total derivative is bounded?



Is $f:\mathbb{R}^n\rightarrow\mathbb{R}^n$ globally Lipschitz continuous, i.e. there exists an $L>0$ such that




$\frac{|f(x)-f(y)|}{|x-y|}\leq L$ for all $x,y\in\mathbb{R}^n$,



and $\mathcal{C}^1$ if and only if $f$ is $\mathcal{C}^1$ and its total derivative is bounded?



Based on intuition alone, I'm strongly inclined to believe that the answer is yes. However I'm having trouble coming up with a proof (probably because my grasp of multivariable calculus is far from great). Could someone give one if the statement is true, or provide a counter example if it is false?



Thanks.


Answer



If $f$ is $\mathscr{C}^1$, then $f(x) - f(y) = \int_0^1 Df(y + t(x-y)).(x-y) dt$, by the fundamental theorem of calculus.




Hence, $$\begin{aligned} \| f(x) - f(y) \| &\le \int_0^1 \|Df(y+t(x-y)).(x-y) \| \,dt \\ &\le \left( \int_0^1 \| Df( y + t(x-y) )\| \,dt \right) \| x-y \| \le \sup_{z \in \mathbb{R}^n} \, \|Df(z) \| \; \| x-y \| \end{aligned}$$



If $\sup_{z \in \mathbb{R}^n} \, \| Df(z) \| = C$ is finite, we get $\| f(x) - f(y) \| \le C \|x - y \|$ for all $x,y$.



Conversely, suppose that your function is $\mathscr{C}^1$ and globally Lipschitz, with constant $C$.



Then, for all $x \in \mathbb{R}^n$, and all $h \in \mathbb{R}^n$, we know that $$Df(x).h = \lim_{t \to 0} \frac{f(x+th) - f(x)}{t}$$



But, by assumption, $\| f(x +th) - f(x) \| \le C \|th \| = C |t| \|h\|$, and we finally get $\|Df(x).h \| \le C \|h \|$ for all $h$, which by definition implies $\| Df(x) \| \le C$. Hence the total derivative is bounded all over $\mathbb{R}^n$.




All of this also works on a convex open subset of $\mathbb{R}^n$, instead of the whole space (convexity ensures the segment from $x$ to $y$ stays in the set, which the first part uses).



Remark also that you don't need to assume $f$ to be $\mathscr{C}^1$, but only differentiable. The second part of my proof works as well, and for the first part, instead of applying fundamental theorem of calculus, you can use the mean value theorem.


proof verification - Thomae's function variant



let $f:[0,1]\rightarrow \mathbb{R}$ be defined as
$f=\begin{cases}1, x=0 \\ x, x\in \mathbb{R}\setminus \mathbb{Q} \\ \frac{1}{q}, x=\frac{p}{q}, gcd(p,q)=1 \end{cases}$



$(1)$ I believe this function is continuous at the irrationals, but discontinuous at the rationals.



attempt:




to show it is discontinuous at the rationals, consider first $x=0$ and a sequence $(x_n)$ of irrationals converging to $0$, say $(x_n)=(\frac{\pi}{n})$: as
$n \rightarrow \infty$, $x_n\rightarrow 0$, and $f(x_n)=x_n$ since every term of the sequence is irrational, so $f(x_n)\rightarrow 0$; but $f(0)=1$, so the function is not continuous there.



For other rational numbers, I switch to an $\epsilon$-$\delta$ argument:



assume that $f$ is continuous at $x=\frac{p}{q}$; then by definition of continuity, $|f(y)-f(x)|<\epsilon$ whenever $|y-x|<\delta$. Now choose $\epsilon < \frac{1}{q}$, which implies $|f(y)-\frac{1}{q}|<\frac{1}{q}$ whenever $|y-x|<\delta$. But since the irrationals are dense, we can find an irrational point within the $\delta$-ball around $x$, i.e. in $(x-\delta,x+\delta)$; denote it $y'$. Then $f(y')=y'$, so $|y'-\frac{1}{q}|<\frac{1}{q}$.



This is where I get stuck; I don't know where to take it from here. I am pretty sure I need to change my $\epsilon$, but I am not seeing what I should change it to in order to make the proof work.



$(2)$




To show $f$ is continuous at the irrationals, we can argue by the Archimedean property that $\exists n\in \mathbb{N}$ with $\frac{1}{\epsilon}<n$; only finitely many rationals $\frac{p}{q}$ with $q\le n$ lie in a neighbourhood of $x$, and, writing $\delta_1,\dots,\delta_n$ for their distances to $x$, we may define $\delta=\min_{i=1,...,n}\delta_{i}$. Clearly $\delta>0$, and if $x$ is irrational, we have $|f(x)-f(x)|=0$, so continuity at the irrationals is established.


Answer



This function is not continuous at the irrationals. Consider $x=\frac e3$ and the sequence $(x_n)_{n\in\mathbb N}$ with:



$$x_n=\sum_{k=0}^n\frac1{3k!}\overset{n\rightarrow\infty}\longrightarrow\frac e3.$$



Then, $f(\frac e3)=\frac e3$ since $e$ is transcendental, but $f(x_n)\leq\frac 1n\overset{n\rightarrow\infty}\longrightarrow0,$ because




$$\sum_{k=0}^n\frac1{3k!}=\frac1{3n!}\sum_{k=0}^n\frac{n!}{k!}$$



with the sum on the right-hand side being an integer that is congruent to $1$ modulo $n$ (all summands except the last are multiples of $n$, and the last one equals $1$). Hence the factor $n$ in the denominator never cancels when the fraction is reduced, so the reduced denominator of $x_n$ is at least $n$, giving $f(x_n)\le\frac1n$.
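One can confirm the estimate $f(x_n)\le\frac1n$ with exact rational arithmetic (a small sketch; the helper name is mine):

from fractions import Fraction
from math import factorial

def f_of_partial_sum(n):
    # x_n = sum_{k=0}^{n} 1/(3*k!); Fraction keeps it in lowest terms,
    # and f(p/q) = 1/q for a reduced fraction p/q
    x = sum(Fraction(1, 3 * factorial(k)) for k in range(n + 1))
    return Fraction(1, x.denominator)

for n in [3, 5, 10]:
    print(n, f_of_partial_sum(n), f_of_partial_sum(n) <= Fraction(1, n))   # True each time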


Thursday 23 March 2017

induction - Inductive proof of the degree of a polynomial

Here is the problem:



Assume that there is a polynomial $P(x)$ of degree 4 such that for all $N \in \mathbb{N}$,




$$P(N) = \sum\limits_{n=0}^N n^3$$



Find the polynomial. Use induction to prove that the formula is correct.



...............



Not sure where to start on this, but for the base case I checked $n=0$, which gives $0^3=0$. How can I prove the polynomial has degree $4$, given that, for example, $0^5=0$ as well? Also, how can I prove it for $N$ in the inductive step?



Also... before I even get there, I'm puzzled about the polynomial. I know it's something like:




$$ax^4+bx^3+cx^2+dx+e = x^3 + (x-1)^3 + (x-2)^3 + \cdots + 1$$



and you get an $x^4$ term on the RHS because you are summing roughly $x$ terms each of size up to $x^3$, but I don't know where to go from there to find the polynomial.



Thank you!
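One practical way to guess the polynomial before proving anything (a sketch; the induction itself is a separate step): fit a degree-4 polynomial through the first few partial sums.

import numpy as np

N = np.arange(0, 8)
S = [sum(k ** 3 for k in range(n + 1)) for n in N]   # 0, 1, 9, 36, 100, ...
coeffs = np.polyfit(N, S, 4)                         # highest-degree coefficient first
print(np.round(coeffs, 6))
# -> [0.25 0.5 0.25 0. 0.], i.e. P(N) = N^4/4 + N^3/2 + N^2/4 = (N(N+1)/2)^2

With the candidate formula in hand, the inductive step reduces to checking $\left(\frac{N(N+1)}{2}\right)^2+(N+1)^3=\left(\frac{(N+1)(N+2)}{2}\right)^2$.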

modular arithmetic - Why is $-145 \bmod 63 = 44$?



When I enter $-145 \mod 63$ into google and some other calculators, I get $44$. But when I try to calculate it by hand I get that $-145/63$ is $-2$ with a remainder of $-19$. This makes sense to me, because $63\cdot (-2) = -126$, and $-126 - 19 = -145$.




So why do the calculators give that the answer is $44$?


Answer



I think you have to start with the more basic question, "What does $\text{mod}$ mean?"



When we say "$\pmod{63}$" what we really mean is: Pretend that the "number line" is bent around in a circle so that when counting up, after you hit $62$ you return to $0$. On such a "number circle", the numbers $5,68, 131, 194, \dots$ are all equal to each other. And you can count downwards, too: $68, 5, -58, -121, \dots$ are also all equal.



It's common to interpret $a \pmod{63}$ to mean "Find the number between $0$ and $62$ that is equal to $a$, mod $63$." You can always find such a number by repeatedly adding or subtracting 63 to your given number until you get it into the desired range.



In this case, $-145 = -82 = -19 = 44 = 107 = \dots$. The only result that lies between $0$ and $62$ is $44$.




Note, though, that you are not wrong in thinking that $-145 \pmod{63} = -19$. When working mod $63$, the numbers $-19$ and $44$ are identical.
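Incidentally, this is exactly the point where programming languages disagree (a small illustration):

print(-145 % 63)    # 44 in Python: the result takes the sign of the divisor
print(-145 // 63)   # -3, since -145 = 63 * (-3) + 44 (floor division)
# C, C++ and Java instead truncate the quotient toward zero,
# so there -145 / 63 == -2 and -145 % 63 == -19.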


The integer sum jump series for x[x]

I am trying to remember what the sum $1+2+3+4+5+\dots+n$ is equal to, in order to determine the series of breaks within the graph of $x[x]$. I know it obviously diverges as $n$ goes to infinity, but what is the formula when $n$ is finite?



The sequence for the series goes 1,3,6,10,15,21,28,36,45,55,66,78,...
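For reference, the finite partial sum is the $n$-th triangular number:

$$1+2+3+\cdots+n=\frac{n(n+1)}{2},$$

which reproduces the listed sequence (for example, $\frac{10\cdot 11}{2}=55$).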

calculus - How to find $\lim\limits_{x\to0}\frac{e^x-1-x}{x^2}$ without using l'Hopital's rule nor any series expansion?

Is it possible to determine the limit



$$\lim_{x\to0}\frac{e^x-1-x}{x^2}$$




without using l'Hopital's rule nor any series expansion?



For example, suppose you are a student who has not studied derivatives yet (and so neither Taylor's formula nor Taylor series).
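Numerically the limit appears to be $\frac12$ (evidence only; this is not a proof meeting the question's constraints):

import math

for x in [0.1, 0.01, 0.001]:
    print(x, (math.exp(x) - 1 - x) / x ** 2)   # 0.5171, 0.5017, 0.5002 -> tends to 1/2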

real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

Wednesday 22 March 2017

number theory - Now am I doing induction correctly?



Recursion: $L_n = L_{n-1} + n$ where $L_0 = 1$.



We guess that solution is $L_n = \frac{n(n+1)}{2} + 1$.



Base case: $L_0 = \frac{0(0+1)}{2} + 1 = 1$ is true.




Inductive step: Assume $L_n = \frac{n(n+1)}{2} + 1$ is true for some $n$. We will show that $L_{n+1} = \frac{(n+1)(n+2)}{2} + 1$ given that $L_n = L_{n-1} + n$ is true.



$L_{n+1} = \frac{(n+1)(n+2)}{2} + 1 = L_n + (n+1)$



$L_n = \frac{(n+1)(n+2)}{2} + 1 - (n+1)$



$L_n = \frac{(n+1)(n+2)}{2} + \frac{2}{2} - \frac{2n+2}{2} = \frac{n^2+3n+2 + 2 - 2n - 2}{2}$



$L_n = \frac{n^2+n+2}{2} = \frac{n^2+n}{2} + 1 = \frac{n(n+1)}{2} + 1$




This completes the proof.



Is everything in place for a correct induction proof? Is anything wrong? Backwards? Unclear? Awkward?


Answer




Base case: $L_0 = \frac{0(0+1)}{2} + 1 = 1$ is true.



Inductive step: Assume $L_n = \frac{n(n+1)}{2} + 1$ is true for some $n$. We will show that $L_{n+1} = \frac{(n+1)(n+2)}{2} + 1$ given that $L_n = L_{n-1} + n$ is true.





Fine.




$L_{n+1} = \frac{(n+1)(n+2)}{2} + 1 = L_n + (n+1)$




Don't start with $L_{n+1}=\frac{(n+1)(n+2)}{2}+1$ which is what you have to prove.







$$\begin{align}L_{n+1}&=L_n+n+1\\&=\frac{n(n+1)}{2}+1+n+1\\&=\frac{n(n+1)}{2}+\frac{2(n+1)}{2}+1\\&=\frac{n+1}{2}(n+2)+1\\&=\frac{(n+1)(n+2)}{2}+1\end{align}$$
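A two-line check of the closed form against the recursion (just a sanity test, separate from the induction proof):

def L(n):
    return 1 if n == 0 else L(n - 1) + n    # L_0 = 1, L_n = L_{n-1} + n

assert all(L(n) == n * (n + 1) // 2 + 1 for n in range(100))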


number theory - What is $\operatorname{gcd}(a,2^a-1)$?

Intuitively, I feel it's $1$; for example $(2, 3)$, $(3,7)$, etc. But I cannot see how to prove it. Using the formula $ax+by=c$ does not make sense because of the power.
Is it possible by induction? If I assume $a=1$, then $\operatorname{gcd}(1, 2^1-1)=1$. Assuming it to be true for $k$, then



$$\operatorname{gcd}(k,2^k-1)=1 = kx+(2^k-1)y=1$$



I'm stuck here. Is it even possible with this method?
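Before hunting for a proof, it is worth testing the conjecture numerically; it is actually false in general (a quick search, sketch):

from math import gcd

print([a for a in range(1, 30) if gcd(a, 2 ** a - 1) > 1])
# -> [6, 12, 18, 20, 21, 24]; e.g. gcd(6, 2^6 - 1) = gcd(6, 63) = 3

So no induction can prove the gcd is always $1$: it holds in special cases (e.g. $a$ prime, by Fermat's little theorem) but not for all $a$.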

Induction Proof: Fibonacci Numbers Identity with Sum of Two Squares

Using induction, how can I show the following identity about the Fibonacci numbers? I'm having trouble with the simplification in the induction step.



Identity: $$f_n^2 + f_{n+1}^2 = f_{2n+1}$$



I get to:




$$f_{n+1}^2 + f_{n+2}^2$$



Should I replace $f_{n+2}$ using the recursion? When I do that, I end up with a product of terms, and that just doesn't seem right. Any guidance on how to manipulate the expression during the induction step?



Thanks!
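One common trick (a suggestion, not from the original thread): strengthen the statement and prove the pair $f_{2n}=2f_nf_{n+1}-f_n^2$ and $f_{2n+1}=f_n^2+f_{n+1}^2$ by simultaneous induction, so each step may use both identities. A quick check that both hold (sketch, with the convention $f_0=0$, $f_1=1$):

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib(n) ** 2 + fib(n + 1) ** 2 == fib(2 * n + 1) for n in range(30))
assert all(2 * fib(n) * fib(n + 1) - fib(n) ** 2 == fib(2 * n) for n in range(30))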

integration - Help on simplifying $\int_0^1\frac{(x\phi)^n-(-1)^n}{\phi^{2n}-(-1)^n}\cdot(x^n-\phi^n)\,dx$



$\phi=\frac{1+\sqrt5}{2}$



Fibonacci numbers



$F_1=1$, $F_2=1$ ; $F_{n+2}=F_{n+1}+F_{n}$



The first few values of $F_n$ for $n=1,2,3,4,\dots$ are $1,1,2,3,\dots$




Lucas numbers



$L_1=1$, $L_2=3$ ; $L_{n+2}=L_{n+1}+L_n$



The first few values of $L_n$ for $n=1,2,3,4,\dots$ are $1,3,4,7,\dots$



Show that,



$$\int_0^1\frac{(x\phi)^n-(-1)^n}{\phi^{2n}-(-1)^n}\cdot(x^n-\phi^n)dx=\left[\frac{1}{2n+1}-\frac{L_n}{n+1}+(-1)^n\right]\cdot\frac{1}{\sqrt5F_n}$$







$$\int_0^1\frac{(x\phi)^n-(-1)^n}{\phi^{2n}-(-1)^n}\cdot(x^n-\phi^n)dx=\frac{1}{\phi^{2n}-(-1)^n}\left[\frac{\phi^n}{2n+1}-\frac{\phi^{2n}}{n+1}-\frac{(-1)^n}{n+1}+(-1)^n\phi^n\right]$$



I am stuck and can't simplify to get the proposed result. Can anybody help me, please?



Well, we know that $\phi^n={\phi}F_n+F_{n-1}$. This didn't help; it makes things more complicated. There must be an easier way, but I can't figure it out yet.



Binet's formula $F_n=\frac{\phi^n-(-\phi)^{-n}}{\sqrt5}$ and

$L_n=\phi^n+(-\phi)^{-n}$


Answer



Let's write $\psi = \dfrac{-1}{\phi} = \dfrac{1-\sqrt{5}}{2}$.



The trick is to multiply numerator and denominator of $\dfrac{(x\phi)^n - (-1)^n}{\phi^{2n} - (-1)^n}$ with $\phi^{-n}$ to get



$$\frac{(x\phi)^n - (-1)^n}{\phi^{2n} - (-1)^n} = \frac{x^n - \psi^n}{\phi^n - \psi^n} = \frac{x^n - \psi^n}{\sqrt{5}\,F_n}.$$



Now we can multiply the numerator with the remaining factor:




$$(x^n - \psi^n)(x^n - \phi^n) = x^{2n} - x^n\cdot (\phi^n + \psi^n) + (\phi\cdot \psi)^n = x^{2n} - L_n x^n + (-1)^n,$$



and the rest is straightforward.
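A numerical spot check of the stated identity (a sketch using a midpoint rule; the helper name is mine):

from math import sqrt

phi = (1 + sqrt(5)) / 2

def both_sides(n, steps=200000):
    F = (phi ** n - (-phi) ** (-n)) / sqrt(5)    # Binet's formula for F_n
    L = phi ** n + (-phi) ** (-n)                # and for L_n
    g = lambda x: ((x * phi) ** n - (-1) ** n) / (phi ** (2 * n) - (-1) ** n) * (x ** n - phi ** n)
    lhs = sum(g((k + 0.5) / steps) for k in range(steps)) / steps   # midpoint rule on [0, 1]
    rhs = (1 / (2 * n + 1) - L / (n + 1) + (-1) ** n) / (sqrt(5) * F)
    return lhs, rhs

print(both_sides(3))   # both approx -0.41527
print(both_sides(4))   # both approx -0.04306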


soft question - Why is $1$ raised to infinity not defined, and not "$1$"?





$1$ squared is $1$, and so is $1$ raised to the power $123434234$.



My maths teacher claims that $1$ raised to infinity is not $1$, but not defined. Is there any reason for this?



I know that any number raised to infinity is not defined, but shouldn't $1$ be an exception?


Answer



What $1^\infty$ is, or is not, is merely a matter of definition. Normally, one would only define $a^b$ for some specific class of pairs of $a,b$ - say $b$ - positive integer, $a$ - real number.



When extending the definition of exponentiation to more general pairs, the key thing people keep in mind is that various nice properties are preserved. For instance, for $ b$ - positive integer, you want to put $a^{-b} = \frac{1}{a^b}$ so that the rule $a^ba^c = a^{b+c}$ is preserved.




It may make sense in some context to speak of infinities in the context of limits, but this is usually more a rule of thumb than rigorous mathematics. This may be seen as extending the rule that $(a,b) \mapsto a^b$ is continuous (i.e. if $\lim_n a_n = a$ and $\lim_n b_n = b$, then $\lim_n a_n^{b_n} = a^b$) to allow for $b_n \to \infty$. For instance, you may risk saying that:
$$\lim_{n} (2+\frac{1}{n})^n = 2^{\infty} = \infty$$
If you agree to use rules of this kind, you might be tempted to also say:
$$\lim_{n} (1+\frac{1}{n})^n = 1^{\infty} = 1$$
but this would lead you astray, since in reality:
$$\lim_{n} (1+\frac{1}{n})^n = e \neq 1$$
Thus, it is safer to leave $1^\infty$ undefined.
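Numerically (a quick illustration of why the rule of thumb fails here):

for n in [10, 1000, 100000]:
    print(n, (1 + 1 / n) ** n)   # 2.5937..., 2.7169..., 2.7182... -> e, not 1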







A more thorough discussion can be found on Wikipedia.


Tuesday 21 March 2017

elementary number theory - What is the remainder of $18!$ divided by $437$?





What is the remainder of $18!$ divided by $437$?



I'm getting a little confused in the solution. It uses Wilson's theorem



Wilson's Theorem:
If $p$ is prime then $(p-1)!\equiv-1(\text{mod } p)$




So it first factors $437$ into primes: $437 = 19 \cdot 23$. Then from Wilson's theorem it notes that $18!\equiv-1(\text{mod } 19)$, so we're part way there, but it also says $22!\equiv22\,(\text{mod }23)$ by Wilson's theorem (I really don't know how they got this from $22!\equiv-1(\text{mod }23)$).



Also I'm confused how solving this leads to finding the remainder for $18!$ divided by $437$? I understand getting $18!$ from $19$ but not the $23$ part.


Answer



By Wilson's theorem, $18!\equiv-1\mod 19$ and $22!\equiv-1\mod 23$. Now



$22!=22\times21\times20\times19\times18!\equiv(-1)(-2)(-3)(-4)18!\equiv(24)18!\equiv(1)18!=18!\mod 23.$



Therefore $18!\equiv-1\mod19$ and $18!\equiv-1\mod 23$.




By the constant case of the Chinese remainder theorem, therefore,



$18! \equiv-1\equiv436\mod 437=19\times23$.


calculus - Prove that function has no finite limit using $epsilon$ - $delta$ definition




I want to prove, using the negation of the limit definition,
$$(\exists \varepsilon > 0)(\forall \delta > 0)(\exists x)\left(0 < \left| {x - {x_0}} \right| < \delta \ \wedge\ \left| {f(x) - L} \right| \ge \varepsilon \right)$$
that the function $$f(x) = {x \over {\left( {x - \left\lfloor {\sin x} \right\rfloor } \right)}}$$ has no finite limit when $x_0 = 0$,
and I can't seem to find a way to start.
I know, using the Heine (sequential) criterion, that the one-sided limits are $1$ from the right and $0$ from the left, so I'm guessing that if I choose $\varepsilon=1/2$ I might be able to show this, but I'm not sure how.


Answer



Hint: You need to show that, for each $L \in \mathbb{R}$, there exists an $\varepsilon > 0$ with the property you mentioned.



Suggestion: Take $\varepsilon = 1/3$. For $L \geq 1/2$, consider points $-\delta < x < 0$; for $L \leq 1/2$, consider points $0 < x < \delta$.



combinatorics - is there a formula for this binomial sum?

Are there any formulas to calculate the below sum?



$$\sum_{n=1}^{1918}n\binom{2017-n}{99}$$



Or, more generally,



$$\sum_{n=1}^{1918}n\binom{2017-n}{k}$$
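There is a closed form. Writing $n=\sum_{j=1}^{n}1$ and swapping the order of summation turns each inner sum into a hockey-stick sum, which suggests $$\sum_{n=1}^{M-k} n\binom{M-n}{k}=\binom{M+1}{k+2}.$$ A quick check of this conjectured form (sketch):

from math import comb

def lhs(M, k):
    return sum(n * comb(M - n, k) for n in range(1, M - k + 1))

assert all(lhs(M, k) == comb(M + 1, k + 2) for M in range(3, 25) for k in range(1, M - 1))
# for the original sum: M = 2017, k = 99 gives C(2018, 101)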

Convergence of the series $\sum\limits_{n=3}^\infty (\log\log n)^{-\log\log n}$



I am trying to test the convergence of this series from exercise 8.15(j) in Mathematical Analysis by Apostol:



$$\sum_{n=3}^\infty \frac{1}{(\log\log n)^{\log\log n}}$$



I tried every kind of test. I know it should be possible to use the comparison test, but I have no idea how to proceed. Could you just give me a hint?


Answer




Note that, for every $n$ large enough, $$(\log\log n)^{\log\log n}\leqslant(\log n)^{\log\log n}=\exp((\log\log n)^2)\leqslant\exp(\log n)=n,$$ provided, for every $k$ large enough, $$\log k\leqslant\sqrt{k},$$ an inequality you can probably show, used for $k=\log n$. Hence, for every $n$ large enough, $$\frac1{(\log\log n)^{\log\log n}}\geqslant\frac1n,$$ and the series...




...diverges.



calculus - How can you sketch this limit as n approaches infinity?

This was the difficult question on last year's first-year differential calculus exam. I thought I was pretty good with limits, but am stumped by this one. I tried multiple approaches: first, I used the natural log to pull down the exponent, so it became the form infinity over infinity, meaning I could use L'Hospital's Rule. The final form, however, was not really helpful for sketching purposes.
After that, I tried to use the definition of the derivative, setting $t$ equal to $1/n$ so the limit would go to zero instead. This, too, yielded a form that was no better than the former; I could not ascertain what $x$ is. I have not yet learned power series or any other such methods; just differentiation.
This is also my first time attempting to use LaTeX, so I'm very sorry if it doesn't come out right.
$$\lim_{n\to \infty} \left(1+x^{n-1}+x^n\right)^{1/2}$$

real analysis - A divergent series whose underlying sequence converges faster to $0$ than $n^{-1}$ and slower than $n^{-1-\epsilon}$ for any $\epsilon > 0$




Let $\left(a_n\right)_{n \in \mathbb N}$ be a monotone sequence of positive real numbers, such that $$\liminf_{n \to \infty} na_n = 0$$ Does it always follow that $$\sum_{n = 0}^\infty a_n < \infty$$
It is easy to show that $\sum_{n=0}^\infty a_n < \infty$ implies $\liminf_{n \to \infty} na_n = 0$, even if $(a_n)_{n \in \mathbb N}$ is not a monotone sequence. Moreover, one can easily come up with plenty of non-monotone sequences $(a_n)_{n \in \mathbb N}$, satisfying $\liminf_{n \to \infty} na_n = 0$, but $\sum_{n=0}^\infty a_n = \infty$. It is also easy to see that the existence of some $\epsilon > 0$, such that $\limsup_{n \to \infty} n^{1+\epsilon} a_n < \infty$, implies the convergence of $\sum_{n = 0}^\infty a_n$, again regardless of whether $(a_n)_{n \in \mathbb N}$ is monotone or not. However, I am stuck at this particular question. If there exists a counterexample, it should, more or less, be a sequence $(a_n)_{n \in \mathbb N}$ converging to $0$, but simultaneously slower than any sequence of the form $n^{-(1+\epsilon)}$ for all $\epsilon > 0$ and faster than $n^{-1}$.


Answer



You can take $\frac 1{n \log(n)}$, for example. The sum diverges. You can also take $\frac 1{n(\log (n))^2}$, for which the sum converges.


abstract algebra - $\mathbb Q(\zeta_m)\cap\mathbb Q(\zeta_n)=\mathbb Q(\zeta_d)$



Prove that $\mathbb Q(\zeta_m)\cap\mathbb Q(\zeta_n)=\mathbb Q(\zeta_d)$ where $d=\gcd(m,n)$.




I want to solve this problem without Galois theory.



I know only about field extension. For example, algebraic extension, cyclotomic extension, splitting field and algebraic closure.



Can I solve it without Galois theory?

functions - Let $f(x) = 1$ for rational numbers $x$, and $f(x)=0$ for irrational numbers. Show $f$ is discontinuous at every $x$ in $mathbb{R}$




I am working on this proof, and wanted someone to check it and to help me understand what is happening in case (ii). The proof:




Let $f(x) = 1$ for rational numbers $x$, and $f(x)=0$ for irrational numbers. Show $f$ is discontinuous at every $x$ in $\mathbb{R}$.



We will consider two cases.



(i) $x \in \mathbb{Q}$




Consider the sequence $x_n = x + \frac{\sqrt{2}}{n}$. We have that $(x_n) \rightarrow x$, and each $x_n$ is irrational: suppose, for contradiction, that some $x_n \ \in \ \mathbb{Q}$.

$\Rightarrow x_n - x \in \ \mathbb{Q}$, since $x \in \mathbb{Q}$.

$\Rightarrow n(x_n-x) = \sqrt{2} \in \mathbb{Q}$.$\ \rightarrow \leftarrow$

This is a contradiction, so every $x_n$ is irrational. Therefore $f(x_n) = 0 \ \forall \ n \Rightarrow \lim f(x_n) =0 \neq 1 =f(x)$.



Therefore, $f(x)$ is not continuous at any $x \in \mathbb{Q}$.




(ii) $x \not\in \mathbb{Q}$



Given the density of the rationals, there exists a subsequence of rational numbers that must converge to $x$. Call this subsequence $(x_{n_r})$. Therefore, we have that:



$f(x_{n_r})=1 \ \forall \ r \Rightarrow \lim f(x_{n_r}) = 1 \neq 0 = f(x).$



$\therefore \ f(x)$ is not continuous at any $x \notin \mathbb{Q}$.



By cases (i) and (ii), we have that $f(x)$ is not continuous at any $x \in \mathbb{R}$.





For case (i), I used the hint in the back of my textbook to generate a sequence I could work with. Part (ii) is almost identical to the solution in my text; I don't fully understand why we are using the denseness of the rationals, rather than just working with the sequence made in case (i).


Answer



When $x$ is irrational, the sequence defined in case (i) might not consist of only rationals. For example, if $x = \sqrt 2$, then $x + \frac{\sqrt 2}{n} = \frac{n+1}{n}\sqrt 2$ is irrational. (It will converge to $x$, but it doesn't accomplish what's needed.)



For (ii), you need a sequence of rationals converging to the irrational $x$. In theory, we already know one: consider the decimal expansion of $x$. When $x$ is irrational, the expansion is necessarily infinite and never becomes periodic. Suppose $$x = m + 0.d_1 d_2 \dotso$$ where $m$ is an integer. Let
$$x_n = m + 0. d_1 \dotso d_n$$
In other words, $x_n$ is the decimal representation of $x$ cut off at the $n^{th}$ digit after the decimal point ($x_0 = m$). Then every $x_n$ is rational, and:

$$\lim_{n \to \infty} x_n = x$$


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hopital's rule? I know that when I use l'Hopital I easily get $$ \lim_{h\rightarrow 0}...