Thursday 31 October 2019

Limit $\lim_{x \rightarrow \infty, y \rightarrow \infty} \left( \frac{xy}{x^2 + y^2}\right)^{x^2} $

Given the following limit:
$$ \lim_{x \rightarrow \infty, y \rightarrow \infty} \left( \frac{xy}{x^2 + y^2}\right)^{x^2} $$



To find the limit I took the following steps:




  1. Let $x = y$; then the limit equals $0$.

  2. Let $x > y$; then consider the limit:




$$ \lim_{x \rightarrow \infty, y \rightarrow \infty} \left( \frac{x^2}{x^2 + y^2}\right)^{x^2} = \lim_{x \rightarrow \infty, y \rightarrow \infty} \left( \frac{1}{1 + \frac{y^2}{x^2}}\right)^{x^2} = 0$$
with respect to $0 < y^2/x^2 < \text{const}$.




  3. Let $y > x$; then consider the limit:



$$ \lim_{x \rightarrow \infty, y \rightarrow \infty} \left( \frac{x^2}{x^2 + y^2}\right)^{y^2} = \lim_{x \rightarrow \infty, y \rightarrow \infty} \left( \frac{1}{\frac{x^2}{y^2} + 1}\right)^{x^2} = 0$$
with respect to $0 < x^2/y^2 < \text{const}$.




What could you say about my solution?
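A quick numerical sanity check of the three cases (a minimal Python sketch, assuming only the standard library; it works in logarithms so the huge exponent cannot overflow):

    # log of ((x*y)/(x^2+y^2))^(x^2) is x^2 * log(x*y/(x^2+y^2))
    from math import log, exp

    def value(x, y):
        return exp(x * x * log(x * y / (x * x + y * y)))

    for x, y in [(50.0, 50.0), (100.0, 50.0), (50.0, 100.0)]:
        print(x, y, value(x, y))   # all three cases underflow to 0.0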

algebra precalculus - Partial Fractions continued...



Hi, I asked the following question yesterday: Obtaining the sum of a series



Given the answers to that question by wj32, I am now trying to solve the following problem:



Consider the series
$$\sum_{n=1}^{\infty}\frac{n}{n^4+n^2+1}$$
Use partial fractions to write the general term
$$u_n=\frac{n}{n^4+n^2+1}$$
as a difference of two simpler terms




My attempt at a Solution:
The partial fractions are
$$\begin{align}
\frac{n}{n^4+n^2+1}&=\frac{n+n-n}{n^4+n^2+1}\\
&=\frac{n+n}{n^4+n^2+1}-\frac{n}{n^4+n^2+1}\\
&=\frac{2}{n^3+n+\frac{1}{n}}-\frac{2}{2(n^3+n+\frac{1}{n})}
\end{align}$$
Then
$$\begin{align}
S_n&=\left[\sum_{n=1}^k\frac{2}{n^3+n+\frac{1}{n}}-\sum_{n=1}^k\frac{2}{2(n^3+n+\frac{1}{n})}\right]\\
&=\left[\left(\frac{2}{3}+\frac{4}{21}+\frac{6}{91}+...+\frac{2}{n^3+n+\frac{1}{n}}\right)-\left(\frac{1}{3}+\frac{2}{21}+\frac{3}{91}+...+\frac{2}{2(n^3+n+\frac{1}{n})}\right)\right]

\end{align}$$



Here the terms on the left do not cancel the terms on the right?



I'm guessing I need simpler/different partial fractions up at the top?


Answer



First note that we have a nice factorization of $n^4 + n^2 + 1$.
$$n^4 + n^2 + 1 =(n^2+n+1)(n^2-n+1)$$
Hence, $n = \dfrac{(n^2+n+1) - (n^2-n+1)}2$. This gives us $$u_n = \dfrac{(n^2+n+1) - (n^2-n+1)}{2(n^2+n+1)(n^2-n+1)} = \dfrac12 \left(\dfrac1{n^2-n+1} - \dfrac1{n^2+n+1}\right) = \dfrac12 \left(\dfrac1{(n-1)n+1} - \dfrac1{n(n+1)+1}\right)$$
Now telescopic summation should do the job for you.

Hence,
\begin{align}
S_N = \sum_{n=1}^{N} u_n & = \dfrac12 \sum_{n=1}^N\left(\dfrac1{(n-1)n+1} - \dfrac1{n(n+1)+1}\right)
\end{align}
\begin{align}
2S_N & = \sum_{n=1}^N\left(\dfrac1{(n-1)n+1} - \dfrac1{n(n+1)+1}\right)\\
& = \left(1 - \dfrac13 \right) + \left(\dfrac13 - \dfrac17 \right) + \left(\dfrac17 - \dfrac1{13} \right) + \cdots\\
& + \left(\dfrac1{(N-2)(N-1)+1} - \dfrac1{(N-1)N+1}\right) + \left(\dfrac1{(N-1)N+1} - \dfrac1{N(N+1)+1}\right)\\
& = 1 - \dfrac1{N(N+1)+1}
\end{align}
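Since $2S_N = 1 - \frac{1}{N(N+1)+1} \to 1$, the series sums to $\frac12$. A quick numerical check of this conclusion (a minimal Python sketch):

    # Partial sums of n/(n^4+n^2+1) against the telescoped closed form.
    S, N = 0.0, 10000
    for n in range(1, N + 1):
        S += n / (n**4 + n**2 + 1)
    print(S)                                   # ~0.4999999...
    print(0.5 * (1 - 1 / (N * (N + 1) + 1)))   # closed form for S_N, same value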



functional analysis - $f_n\to f $ in $L^1$ $\implies$ $\sqrt{f_n}\to\sqrt{f}$ in $L^2$?




Suppose that $\{f_n\}$ is a sequence of measurable functions converging to $f$ in $L^1(\mathbb{R}^n)$. Is it true that $\sqrt{f_n}$ converges to $\sqrt{f}$ in $L^2(\mathbb{R}^n)$?



If this is true then I would need to show that $$\int \sqrt{f_n}\sqrt{f}\to\int f.$$



Could someone help?


Answer



$$\left(\sqrt{f_n}-\sqrt{f}\right)^2\leqslant|f_n-f|$$
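To see why this pointwise bound holds (a one-line justification, assuming $f_n, f \geq 0$ so that the square roots are defined): with $a=f_n(x)$ and $b=f(x)$,
$$\left(\sqrt{a}-\sqrt{b}\right)^2=\left|\sqrt{a}-\sqrt{b}\right|\cdot\left|\sqrt{a}-\sqrt{b}\right|\leqslant\left|\sqrt{a}-\sqrt{b}\right|\left(\sqrt{a}+\sqrt{b}\right)=|a-b|.$$
Integrating then gives $\|\sqrt{f_n}-\sqrt{f}\|_{L^2}^2\leqslant\|f_n-f\|_{L^1}\to0$.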


Wednesday 30 October 2019

integration - How to integrate $\int\limits_0^\infty e^{-a x^2}\cos(b x)\, dx$ where $a>0$



How to integrate



$$\int\limits_0^\infty e^{-a x^2}\cos(b x) dx$$




where $a>0$



The real problem is this integral



$$\lim\limits_{\alpha\rightarrow 2}\int\limits_0^\infty e^{-a x^\alpha}\cos(b x) dx$$



I tried integration by parts and then the change of variable $z=x^2$ but it does not work.


Answer



Using Euler's identity, we get:




$$
\int\limits_0^\infty e^{-a x^2}\cos(b x) dx=Re \left( \int\limits_0^\infty e^{-a x^2} e^{ibx} dx \right)
$$



$$
\int\limits_0^\infty e^{-a x^2} e^{ibx} dx = \int\limits_0^\infty e^{-a x^2+ibx} dx
$$



Let's forget about the imaginary unit and take $ib=\beta$ for simplicity:




$$
-ax^2+\beta x=-a (x^2-\frac{\beta}{a}x+\frac{\beta^2}{4a^2})+\frac{\beta^2}{4a}=-a(x-\frac{\beta}{2a})^2+\frac{\beta^2}{4a}
$$



$$
\int\limits_0^\infty e^{-a x^2+\beta x} dx=e^{\frac{\beta^2}{4a}} \int\limits_0^\infty e^{-a(x-\frac{\beta}{2a})^2} dx
$$



I believe you will not have trouble with the rest.




Hints:



$dx=d(x-\frac{\beta}{2a})$



$\beta^2=-b^2$.
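Carrying the hints through yields $\int_0^\infty e^{-ax^2}\cos(bx)\,dx=\frac{1}{2}\sqrt{\pi/a}\,e^{-b^2/(4a)}$. A numerical spot check of that closed form (a sketch assuming SciPy is available):

    from math import sqrt, pi, exp, cos
    from scipy.integrate import quad

    a, b = 2.0, 3.0
    numeric, _ = quad(lambda x: exp(-a * x * x) * cos(b * x), 0, float("inf"))
    closed = 0.5 * sqrt(pi / a) * exp(-b * b / (4 * a))
    print(numeric, closed)   # the two values agree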


complex numbers - $1/\sqrt{-a/b} = i \sqrt{b/a}$ or $-i\sqrt{b/a}$?



In a book I am reading, I'm following an equation that has the line:



$$ \frac{1}{\sqrt{-\frac{a}{b}}} = \sqrt{\frac{-b}{a}} = i\sqrt{\frac{b}{a}}$$



but while I was working ahead I did:




$$ \frac{1}{\sqrt{ -\frac{a}{b}}} = \frac{1}{i \sqrt{\frac{a}{b}}} = -i\sqrt{\frac{b}{a}}$$



Which is correct? Both?


Answer



The second one is correct. Implicit in the first is the use of an identity like



$$\frac{1}{\sqrt x} = \sqrt{\frac 1 x}.$$



Although this is correct for $x \in \mathbb{R}^+$, it does not extend to negative or to complex numbers. There are quite a few false proofs based on the premise that $\sqrt{ab} = \sqrt a \cdot \sqrt{b}$ holds unconditionally!
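A concrete check with principal square roots (a Python sketch using the standard cmath module), taking $a=b=1$ so that $-a/b=-1$:

    import cmath

    a, b = 1.0, 1.0
    print(1 / cmath.sqrt(-a / b))   # -1j, i.e. -i*sqrt(b/a): the second computation
    print(cmath.sqrt(-b / a))       #  1j: the identity 1/sqrt(x) = sqrt(1/x) fails here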



real analysis - Proving that a point on the boundary of a closed ball in a metric space cannot be interior.



The idea of this proof is quite clear but I'm having some trouble making it rigorous. Suppose we have a metric space $(X, d)$ and a closed ball $U := \{x \in X : d(x, a) \leq t\}$ for some fixed $a$ and $t$. I want to prove that a point on the boundary of this ball is not an interior point. Here is my "proof":



Let $x$ satisfy $d(x, a) = t$ (i.e. let $x$ be a boundary point). Suppose also that $x$ is interior. Then $\exists \, r > 0$ such that the open ball $D_r(x)$ is contained within $U$. This an immediate contradiction, because some points in this open ball are outside $U$.




My problem is with the very last statement, which relies entirely upon geometrical intuition and is not very rigorous. I suppose I could try a bit harder with this idea: along the line connecting $a$ and $x$, we can go a bit further along the line still inside the $r$ -ball and find a point outside of $U$. But this still doesn't sound very rigorous, with things like lines only really applying to Euclidean space.



How can I make this rigorous?



EDIT: Thanks for the answers and comments, I now realize that this cannot be proven at all.


Answer



In a general metric space the boundary of the set $U = \{x : d(x,a) \le t\}$ is not the set $\{x : d(x,a) = t\}$.



The (usual) definition of boundary point of a set implies that the boundary and interior of a set are disjoint.


Tuesday 29 October 2019

math induction of $\sin(x)-\sin(3x)\ldots$

how would you use induction to prove this:



$\sin(x)-\sin(3x)+\sin(5x)-\cdots+(-1)^{n+1}\sin[(2n-1)x] = \frac{(-1)^{n+1}\sin(2nx)}{2\cos x} $



I know how you assume it's true for $n=k$, and then prove for $n=k+1$, but I get to




Left-hand side: $\frac{(-1)^{k+1}\sin(2kx)}{2\cos x}+(-1)^{k+2}\sin[(2k+1)x]$, but I'm not sure what step to take next.



Any help would be appreciated.
Cheers
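One way to take the next step (a sketch, using the product-to-sum identity $2\cos x\sin[(2k+1)x]=\sin[(2k+2)x]+\sin(2kx)$): put both terms over $2\cos x$,
$$\frac{(-1)^{k+1}\sin(2kx)}{2\cos x}+(-1)^{k+2}\sin[(2k+1)x]=(-1)^{k+1}\,\frac{\sin(2kx)-2\cos x\sin[(2k+1)x]}{2\cos x}=(-1)^{k+2}\,\frac{\sin[2(k+1)x]}{2\cos x},$$
which is exactly the claimed right-hand side for $n=k+1$.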

Monday 28 October 2019

real analysis - Verification of $\lim_{n \rightarrow \infty} \sqrt[n]{n^3}=1$




I am interested in the limit




$$ \lim_{n \rightarrow \infty} \sqrt[n]{n^3}$$




Can we simply conclude that:
$$ \lim_{n \rightarrow \infty} (\sqrt[n]{n})^3= 1^3=1.$$

I have proven earlier in this textbook that $\sqrt[n]{n}\rightarrow1$; moreover, the limit of a power is the power of the limit.


Answer



Yes, we can simply do that. Since the exponent $3$ is a constant natural number (meaning we may interpret it as a fixed number of multiplications), we can move the limit inside. So if you already know $\lim_{n\to\infty}\sqrt[n]n=1$, then that's a full proof.


calculus - Multiple integrals involving product of gamma functions

The following integral was posted a few days back on Integrals and Series forum:

$$\int_0^{2\pi} \int_0^{2\pi} \int_0^{2\pi} \frac{dk_1\,dk_2\,dk_3}{1-\frac{1}{3}\left(\cos k_1+\cos k_2+ \cos k_3\right)}=\frac{\sqrt{6}}{4}\Gamma\left(\frac{1}{24}\right)\Gamma\left(\frac{5}{24}\right)\Gamma\left(\frac{7}{24}\right)\Gamma\left(\frac{11}{24}\right)$$



I am curious if there is a closed form solution for:



$$\int_{\large[0,2\pi]^n} \frac{dk_1\,dk_2\,dk_3\,\cdots \,dk_n}{1-\frac{1}{n}\left(\cos k_1+\cos k_2+\cos k_3+\cdots +\cos k_n\right)}$$






Since $\left|\dfrac{\cos k_1 + \cos k_2 + \cos k_3}{3}\right|<1$,




$$\int_0^{2\pi} \int_0^{2\pi} \int_0^{2\pi} \frac{dk_1\,dk_2\,dk_3}{1 - \frac 1 3 \left( \cos k_1 + \cos k_2 + \cos k_3 \right)}$$
$$=8\int_0^{\pi} \int_0^{\pi} \int_0^{\pi} \frac{dk_1\,dk_2\,dk_3}{1 - \frac 1 3 \left( \cos k_1 + \cos k_2 + \cos k_3 \right)}$$
$$=8\sum_{n=0}^{\infty} \frac{1}{3^n} \int_0^{\pi} \int_0^{\pi} \int_0^{\pi} \left( \cos k_1 + \cos k_2 + \cos k_3 \right)^n\,dk_1\,dk_2\,dk_3 $$



We can ignore the odd values of $n$ as the integral is zero for them. Also, for even values of $n$, the exponents of cosines in the expansion of $\left( \cos k_1 + \cos k_2 + \cos k_3 \right)^{2n}$ must be even. Hence, from the multinomial theorem, we can write:



$$8\sum_{n=0}^{\infty}\,\,\sum_{m_1+m_2+m_3=n} \frac{1}{3^{2n}}\frac{(2n)!}{(2m_1)! (2m_2)! (2m_3)!} \int_0^{\pi} \int_0^{\pi} \int_0^{\pi} \cos^{2m_1}k_1\cos^{2m_2}k_2 \cos^{2m_3}k_3\,dk_1\,dk_2\,dk_3$$



$$ = 16\sum_{n=0}^{\infty}\,\,\sum_{m_1+m_2+m_3=n} \frac{1}{3^{2n}}\frac{(2n)!}{(2m_1)! (2m_2)! (2m_3)!} \int_0^{\pi/2} \int_0^{\pi/2} \int_0^{\pi/2} \cos^{2m_1}k_1\cos^{2m_2}k_2 \cos^{2m_3}k_3\,dk_1\,dk_2\,dk_3$$




Using the result: $\int_0^{\pi/2} \cos^{2k}x\,dx=\frac{(2k)!}{4^k (k!)^2}\frac{\pi}{2}$, the integral is,



$$2\pi^3 \sum_{n=0}^{\infty}\,\,\sum_{m_1+m_2+m_3=n} \frac{1}{36^n}\frac{(2n)!}{(m_1!)^2 (m_2!)^2 (m_3!)^2}$$



I am stuck here.



Any help is appreciated. Thanks!
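For what it's worth, the derived series can be tested against the quoted closed form numerically (a brute-force Python sketch; the terms decay only like $n^{-3/2}$, so the partial sum merely creeps toward the target):

    from math import factorial, gamma, pi, sqrt

    def term(n):
        # inner sum over m1+m2+m3 = n of (2n)! / (m1! m2! m3!)^2, scaled by 36^-n
        f2n, tot = factorial(2 * n), 0
        for m1 in range(n + 1):
            for m2 in range(n - m1 + 1):
                m3 = n - m1 - m2
                tot += f2n // (factorial(m1) * factorial(m2) * factorial(m3)) ** 2
        return tot / 36.0 ** n

    partial = 2 * pi ** 3 * sum(term(n) for n in range(80))
    closed = sqrt(6) / 4 * gamma(1/24) * gamma(5/24) * gamma(7/24) * gamma(11/24)
    print(partial, closed)   # the partial sum increases toward the closed form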

Sunday 27 October 2019

elementary number theory - How to show that $\gcd(ab,n)=1$ if $\gcd(a,n)=\gcd(b,n)=1$?



Let $a,b,n$ be integers such that $\gcd(a,n)=\gcd(b,n)=1$. How to show that $\gcd(ab,n)=1$?



In other words, how to show that if two integers $a$ and $b$ each have no non-trivial common divisor with an integer $n$, then their product does not have a non-trivial common divisor with $n$ either.



This is a problem that is an exercise in my course.




Intuitively it seems plausible and it is easy to check in specific cases but how to give an actual proof is not obvious.


Answer



HINT $\rm\ \ (n,ab)\ =\ (n,nb,ab)\ =\ (n,(n,a)\:b)\ =\ (n,b)\ =\ 1\ $ using prior said GCD laws.



Such exercises are easy on applying the basic GCD laws that I mentioned in your prior questions, viz. the associative, commutative, distributive and modular law $\rm\:(a,b+c\:a) = (a,b)\:.$ In fact, to make such proofs more intuitive one can write $\rm\:gcd(a,b)\:$ as $\rm\:a\dot+ b\:$ and then use familiar arithmetic laws, e.g. see this proof of the GCD Freshman's Dream $\rm\:(a\:\dot+\: b)^n =\: a^n\: \dot+\: b^n\:.$



NOTE $\ $ Also worth emphasis is that not only are proofs using GCD laws more general, they are also more efficient notationally, hence more easily comprehensible. As an example, below is a proof using the GCD laws, followed by a proof using the Bezout identity (from Gerry's answer).



$$\begin{align}
1 &= (a,\ n)\,(b,\ n) = (ab,\ n\,(a,\ b,\ n)) = (ab,n) \\
1 &= (ar\!+\!ns)\,(bt\!+\!nu) = ab\,(rt)+n\,(aru+bst+nsu)\ \text{ so }\ (ab,n)=1
\end{align}$$



Notice how the first proof using GCD laws avoids all the extraneous Bezout variables $\rm\:r,s,t,u\:,\:$ which play no conceptual role but, rather, only serve to obfuscate the true essence of the matter. Further, without such noise obscuring our view, we can immediately see a natural generalization of the GCD-law based proof, namely



$$\rm\ (a,\ b,\ n)\ =\ 1\ \ \Rightarrow\ \ (ab,\:n)\ =\ (a,\ n)\:(b,\ n) $$



This quickly leads to various refinement-based views of unique factorizations, e.g. the Euclid-Euler Four Number Theorem (Vierzahlensatz) or, more generally, Schreier refinement and Riesz interpolation. See also Paul Cohn's excellent 1973 Monthly survey Unique Factorization Domains.
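A brute-force sanity check of that generalization (a small Python sketch):

    from math import gcd

    for a in range(1, 25):
        for b in range(1, 25):
            for n in range(1, 25):
                if gcd(gcd(a, b), n) == 1:
                    assert gcd(a * b, n) == gcd(a, n) * gcd(b, n)
    print("ok")   # nothing fires: (ab,n) = (a,n)(b,n) whenever (a,b,n) = 1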


modular arithmetic - Finding the least significant digit of a large exponential.



I am trying to find the least significant digit of $17^{{17}^{17}}$. I know that I need to use to use the properties of modular arithmetic and mod base 10, but I am not sure how to go about it.



Please provide some hints/first few steps to help me get started.



Answer



Start off by looking at $17^n$ mod 10. (In your case, n will end up being $17^{17}$, but that's way too big to calculate yet.)



$17^0$ ends in a $1$, $17^1$ ends in a $7$, $17^2$ is congruent to $7 \times 7$ so it ends in a $9$, $17^3$ likewise is congruent to $9 \times 7$ so it ends in a $3$, and finally $17^4$ is congruent to $3 \times 7$ so it ends in a $1$.



Since $17^0$ and $17^4$ are congruent mod 10, it follows that $17^n$ mod 10 will repeat every time the exponent $n$ goes up by 4.



Therefore, to solve your problem, you now need to calculate the exponent $17^{17}$ mod 4. Then you can use that along with the pattern I just described to get the final answer. Since this is homework, I'll let you calculate $17^{17}$ mod 4 yourself... hint, use the same idea that I used above!
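For completeness, Python's built-in modular arithmetic confirms the whole chain ($17^{17}$ has only 21 digits, so it can even be formed exactly):

    e = 17 ** 17
    print(e % 4)           # 1, because 17 = 1 (mod 4)
    print(pow(17, e, 10))  # 7: exponents that are 1 mod 4 land on the digit 7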


Saturday 26 October 2019

calculus - Show that $\int_{0}^{\infty }\frac {\ln x}{x^4+1}\, dx =-\frac{\pi^2 \sqrt{2}}{16}$



I could prove it using the residues but I'm interested to have it in a different way (for example using Gamma/Beta or any other functions) to show that
$$

\int_{0}^{\infty}\frac{\ln\left(x\right)}{x^{4} + 1}\,{\rm d}x
=-\frac{\,\pi^{2}\,\sqrt{\,2\,}\,}{16}.
$$



Thanks in advance.


Answer



One possible way is to introduce
$$ I(s)=\frac{1}{16}\int_0^{\infty}\frac{y^{s-\frac34}dy}{1+y}.\tag{1}$$
The integral you are looking for is obtained as $I'(0)$ after the change of variables $y=x^4$.




Let us make in (1) another change of variables: $\displaystyle t=\frac{y}{1+y}\Longleftrightarrow y=\frac{t}{1-t},dy=\frac{dt}{(1-t)^2}$. This gives
\begin{align}
I(s)&=\frac{1}{16}\int_0^1t\cdot\left(\frac{t}{1-t}\right)^{s-\frac74}\cdot \frac{dt}{(1-t)^2}=\\
&=\frac{1}{16}\int_0^1t^{s-\frac34}(1-t)^{-s-\frac{1}{4}}dt=\\&
=\frac{1}{16}B\left(s+\frac14,-s+\frac34\right)=\\&
=\frac{1}{16}\Gamma\left(s+\frac14\right)\Gamma\left(-s+\frac34\right)=\\
&=\frac{\pi}{16\sin\pi\left(s+\frac14\right)}.
\end{align}
Differentiating this with respect to $s$, we indeed get
$$I'(0)=-\frac{\pi^2\cos\frac{\pi}{4}}{16\sin^2\frac{\pi}{4}}=-\frac{\pi^2\sqrt{2}}{16}.$$
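A numerical confirmation of the result (a sketch assuming SciPy; the range is split at $x=1$ to keep the logarithmic singularity mild):

    from math import log, pi, sqrt
    from scipy.integrate import quad

    f = lambda x: log(x) / (x ** 4 + 1)
    numeric = quad(f, 0, 1)[0] + quad(f, 1, float("inf"))[0]
    print(numeric, -pi ** 2 * sqrt(2) / 16)   # both ~ -0.87236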



factorial - Stirling's Formula normalization



An obvious way to get estimates on $n!$ is to compare $\sum\log k$ to $\int\log t$. If one could get Stirling's formula this way that would strike me as the "right" proof, because it would be clear why it works.




This morning I came much closer to this than I have in the past; in fact fairly straightforward comparisons of sums to integrals show that $$n!\sim c\sqrt n\left(\frac ne\right)^n.$$



Question: I wonder if there's some cheap trick to show that if $n!\sim c\sqrt n(n/e)^n$ then $c=\sqrt{2\pi}$.


Answer



You could calculate the normalization for the Gaussian approximation to the binomial distribution.
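A quick numeric illustration of the constant (a standard-library Python sketch): the ratio $n!/\left(\sqrt{n}\,(n/e)^n\right)$ settles at $\sqrt{2\pi}\approx2.5066$. Working in logarithms avoids overflow:

    from math import lgamma, log, sqrt, pi, exp

    for n in (10, 100, 1000, 10000):
        log_ratio = lgamma(n + 1) - (0.5 * log(n) + n * (log(n) - 1))
        print(n, exp(log_ratio))   # tends to 2.5066... = sqrt(2*pi)
    print(sqrt(2 * pi))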


sequences and series - How do I evaluate this sum: $\sum_{n=1}^{\infty}\frac{(-1)^{n^2}}{(i\pi)^{n}}$?

I'm interested to know how to evaluate the sum $$\sum_{n=1}^{\infty}\frac{(-1)^{n^2}}{(i\pi)^{n}}.$$ I tried splitting it into two partial sums, over odd $n$ and over even $n$, but I couldn't make that work since it's an alternating series. I would also like to know whether it's a well-known series, and whether its value is real or complex.




Note: Wolfram Alpha showed that it is a convergent series by the root test.



Thank you for any help
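One observation settles all three questions: $n^2$ and $n$ have the same parity, so $(-1)^{n^2}=(-1)^n$ and the series is geometric with ratio $r=-1/(i\pi)=i/\pi$, $|r|<1$. It therefore converges to $r/(1-r)=\frac{-1+i\pi}{\pi^2+1}$, a genuinely complex value. A quick check (Python sketch):

    import cmath

    r = 1j / cmath.pi
    closed = r / (1 - r)
    partial = sum((-1) ** (n * n) / (1j * cmath.pi) ** n for n in range(1, 60))
    print(partial, closed)   # both ~ -0.0920 + 0.2890j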

Friday 25 October 2019

calculus - Limit of $\sin(1/x)$ - why is there no limit?



$$ \lim_{x\to 0+} \sin\left(\frac{1}{x}\right)$$

I know that there is no limit.



But why is there no limit?
I tried $x=0.4$, $x=0.3$, $x=0.1$, and it looks like the limit is $0$.



And how can I show that there is no limit? I tried to calculate it like all the other functions, and I got a wrong result and I don't know why:



$$\lim_{x \to 0+} \sin\left(\frac{1}{x}\right) = \sin\left(\frac{1}{0^+}\right) = \sin\left(\frac{1}{\infty}\right) = \sin(0) = 0.$$


Answer



Why is there no limit?




The graph can help you understand why, and suggest some approaches for the proof:



[graph of $\sin(1/x)$, oscillating more and more rapidly as $x\to 0^+$]



Remark: You have to be careful with tables of values because they can be misleading:



\begin{array}{ c | c c c c }
x & \frac{1}{2\pi} & \frac{1}{3\pi} & \frac{1}{4\pi} &\frac{1}{5\pi} \\ \hline
\sin\left(\frac{1}{x}\right) & 0 & 0 & 0 & 0 \\

\end{array}



\begin{array}{ c | c c c c }
x & \frac{2}{5\pi} & \frac{2}{9\pi} & \frac{2}{13\pi} &\frac{2}{17\pi} \\ \hline
\sin\left(\frac{1}{x}\right) & 1 & 1 & 1 & 1 \\
\end{array}



(The tables above are a sketch of the proof - see Theorem 2.4 here.)


Thursday 24 October 2019

natural numbers - Why is the sum over all positive integers equal to -1/12?





Recently, sources for mathematical infotainment, for example Numberphile, have given some information on the interpretation of divergent series as real numbers, for example



$\sum_{i=0}^\infty i = -{1 \over 12}$



This equation in particular is said to have some importance in modern physics, but, being infotainment, there is not much detail beyond that.




As an IT Major, I am intrigued by the implications of this, and also the mathematical backgrounds, especially since this equality is also used in the expansion of the domain of the Riemann-Zeta function.



But how does this work? Where does this equation come from, and how can we think of it in more intuitive terms?


Answer



Basically, the video is very disingenuous as they never define what they mean by "=."



This series does not converge to -1/12, period. Now, the result does have meaning, but it is not literally that the sum of all naturals is -1/12. The methods they use to show the equality are invalid under the normal meanings of series convergence.



What makes me dislike this video is when the people explaining it essentially say it is wrong to say that this sum tends to infinity. This is not true. It tends to infinity under normal definitions. They are the ones using the new rules which they did not explain to the viewer.




This is how you get lots of shares and likes on YouTube.


elementary number theory - If $n$ is an odd integer prove that $n - 2^k$ is divisible by $3$




So let $n$ be an odd integer. Show that $n - 2^k$ is divisible by $3$ for SOME SPECIFIC integer $k \ge 0$; so there only has to exist one. For example:



$$7 - 2^2 = 3$$ is divisible by $3$



The approach is modular arithemetic, but it is hard since,



$$2 \equiv 2 \pmod{3}$$



$$n \equiv p \pmod{3}$$




I find it hard to combine these. What should I do?


Answer



Hint $\ {\rm mod}\ 3\!:\ 2^2\equiv 1\,$ so $\,2^k\equiv 2^0\equiv 1$ or $\,2^k\equiv 2^1 \equiv 2$. Thus $\,n\equiv 2^k\iff 3\nmid n$
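A small scan illustrating the hint (a Python sketch): for $3\nmid n$ some $k\leq1$ already works, while for $3\mid n$ no $k$ can ever work, since $2^k$ is never $\equiv 0 \pmod 3$:

    for n in range(1, 20, 2):   # odd n
        ks = [k for k in range(8) if (n - 2 ** k) % 3 == 0]
        print(n, ks if ks else "no k works")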


contest math - Help with inequality problem






Given $a$ , $b$ , $c \ge 0$ show that
$$\frac{a^2}{(a+b)(a+c)} + \frac{b^2}{(b+a)(b+c)}+ \frac{c^2}{(c+a)(c+b)} \ge \frac{3}{4}.$$




I tried using Titu's lemma on it, resulting in




$$\frac{a^2}{(a+b)(a+c)}+\frac{b^2}{(b+a)(b+c)}+ \frac{c^2}{(c+a)(c+b)}\ge \frac{(a+b+c)^2}{a^2+b^2+c^2 + 3(ab + bc + ca)} $$



And I am stuck here.


Answer



By C-S $$\sum_{cyc}\frac{a^2}{(a+b)(a+c)}\geq\frac{(a+b+c)^2}{\sum\limits_{cyc}(a+b)(a+c)}\geq\frac{3}{4},$$ where the last inequality is equivalent to
$$4\sum_{cyc}(a^2+2ab)\geq3\sum_{cyc}\left(a^2+3ab\right)$$ or
$$\sum_{cyc}(a^2-ab)\geq0$$ or
$$\sum_{cyc}(2a^2-2ab)\geq0$$ or
$$\sum_{cyc}(a^2+b^2-2ab)\geq0$$ or $$\sum_{cyc}(a-b)^2\geq0.$$



abstract algebra - Finite fields and primitive elements



Let $\mathbb F_9$ be a finite field of size $9$ obtained via the irreducible polynomial $x^2 + 1$ over the base field $\mathbb F_3$.




  1. How can you find a primitive element?

  2. Make a list of the elements of $\mathbb F_9$ together with a primitive element and all the powers of the primitive element.


Answer





  1. I assume that you are looking for a generator of $F^*$. You can just go through all the $8$ elements of $F^*=\{1,-1,x,x+1,x-1,-x,-x+1,-x-1\}$ and compute their multiplicative orders. But with a little bit of thought you can avoid most of these computations. The following method is also applicable in other finite fields. We have the Frobenius automorphism $a \mapsto a^3$. Remark that $a$ is a generator iff the order is $8$ iff $a^4=-1$. This already excludes $1,-1,x,-x$. So we should try $a=x+1$, and compute $a^3=x^3+1=x(-1)+1$, and $a^4=(-x+1)(x+1)=1-x^2=-1$. This shows that $x+1$ is a generator. The other generators are $x-1,-x+1,-x-1$.


  2. Again you can do this easily with the help of the Frobenius.
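To make the computation concrete, here is a tiny model of $\mathbb F_9=\mathbb F_3[x]/(x^2+1)$ (a Python sketch; an element $a+bx$ is stored as the pair $(a,b)$ and $x^2$ is replaced by $-1$):

    def mul(p, q):
        a, b = p
        c, d = q
        return ((a * c - b * d) % 3, (a * d + b * c) % 3)   # uses x^2 = -1

    g = (1, 1)        # the claimed generator x + 1
    p = (1, 0)        # the identity element 1
    for k in range(1, 9):
        p = mul(p, g)
        print(k, p)   # p first returns to (1, 0) at k = 8, so x + 1 has order 8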



Tuesday 22 October 2019

calculus - Prove or disprove: Two statements about Cauchy-sequences


Prove or disprove:





  1. Every cauchy-sequence in $\mathbb{R}$ includes a subsequence which is monotonic.

  2. Every monotonic increasing cauchy-sequence in $\mathbb{R}$ converges to its supremum.





  1. I would say it's true because a main attribute of Cauchy sequences is that their terms always get closer and closer to each other, so each one will be monotone.


  2. I say it's false but I cannot reason it :p








What do you think?

calculus - How to prove that a recursive sequence converges?

For this question, I'm not sure how the fact that $|a_n|<6$ effects the result of this question. I'm not really sure how to complete the proof. Here is what I have so far. Can anyone please help me out?



Consider the sequence defined by $a_{n+1} = \sqrt{2+a_n}$ if $n \ge 1$ and $a_1 = 1$.
Suppose that $|a_n| < 6$ for all $n \in \mathbb{N}$. Does $\{a_n\}$ converge or diverge? Make sure to
fully prove your claim and cite appropriate theorem(s) and hypothesis conditions.



$a_2 = \sqrt{2+1} = \sqrt{3}$



$a_3 = \sqrt{2+\sqrt{3}}$



Want to show: there exists an $M$ such that $a_n \le M$.




Choose $M = 2$, since the sequence eventually approaches $2$.



Base Case: $a_1 = 1 < 2$



Induction Hypothesis: Let k ∈ N be arbitrary.



Assume $a_k \le 2$



Induction Step: from $a_k$ to $a_{k+1}$:




$a_{k+1} = \sqrt{2+a_k} \le \sqrt{2+2} = 2$



Therefore by induction, the sequence is bounded above



$a_n^2 - a_{n+1}^2 = a_n^2 - \sqrt{{2+a_n}}^2 = a_n^2 - a_n - 2 = (a_n-2)(a_n+1)$



$a_n^2 - a_{n+1}^2 <0$



$a_n^2 < a_{n+1}^2$




$a_n < a_{n+1}$



Therefore by the difference test, $\{a_n\}$ is strictly increasing.



Therefore, by the bounded monotone convergence theorem, $\{a_n\}$ converges.
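A quick iteration shows the limit numerically; note that the limit $L$ must satisfy $L=\sqrt{2+L}$, i.e. $(L-2)(L+1)=0$, so $L=2$ (a Python sketch):

    from math import sqrt

    a = 1.0
    for _ in range(30):
        a = sqrt(2 + a)
    print(a)   # 1.9999999..., approaching 2 from below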

Sunday 20 October 2019

elementary number theory - Extended Euclidean Algorithm, what is our answer?




I am learning Euclidean Algorithm and the Extended Euclidean Algorithm. The problem I have is:



Find the multiplicative inverse of 33 modulo n, for n = 1023, 1033, 1034, 1035.


Now I learned that a multiplicative inverse only exists if the gcd of the two numbers is 1. Thus, there doesn't exist a multiplicative inverse for n = 1023, 1034, 1035 because their gcds are not 1.



gcd(33, 1023) = 33
gcd(33, 1033) = 1
gcd(33, 1034) = 11

gcd(33, 1035) = 3


So we move forward with n = 1033 using the Euclidean Algorithm:



33x ≡ 1 (mod 1033)
x ≡ 1/33 (mod 1033)
1033 = 33 * 31 + 10
33 = 10 * 3 + 3
10 = 3 * 3 + 1
gcd(33, 1033) = 1


Now we work our way back up using the Extended Euclidean Algorithm



//EEA
1 = 10 + 3(-3)
= 10 + (33 + 10(-3))(-3)
= 33(-3) + 10(10)
= 33(-3) + (1033 + 33(-31))(10)

1 = 33(-313) + 1033(10)


So now how do we get the multiplicative inverse from this final equation that we have here?



Also, I used this video as a reference to running through EA and EEA: https://www.youtube.com/watch?v=hB34-GSDT3k and I was wondering how he was able to use EEA when his gcd was 6, and not 1?


Answer



From the last line of your calculation $1 = 33(-313) + 1033(10)$, reduce mod $1033$ to see that



$$

1 \equiv 33(-313) \pmod{1033}.
$$



So the inverse of $33$ modulo $1033$ is the equivalence class of $-313$, the least non-negative representative of which is $-313+1033 = 720$.
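The whole computation can be double-checked in two lines (three-argument pow with exponent $-1$ needs Python 3.8+):

    print(pow(33, -1, 1033))   # 720
    print(33 * 720 % 1033)     # 1, confirming that 720 is the inverse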


limits - calculus continuity of a hard question?

How do I determine the continuity of a function that is defined piecewise on a given interval? I tried doing it but I epically failed… Please help! How do I solve it?
When I tried to solve this I got that this function was not continuous, but I'm not sure if that is right. Thanks!
$$w(x)=\cases{
48+3.64x+.6363x^2, & if $\ 1\le x \le 28$\cr
-1004+65.8x, & if $\ 28 \le x \le 56$}
$$
Thank you so much!
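Evaluating the two branches at the joint $x=28$ settles the question (a quick Python check; continuity would require the one-sided values to agree there):

    left = 48 + 3.64 * 28 + 0.6363 * 28 ** 2   # 648.7792...
    right = -1004 + 65.8 * 28                  # 838.4
    print(left, right)   # the values differ, so w has a jump at x = 28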

combinatorics - Simplify the Expression $\sum_{k=0}^{n}\binom{n}{k} i^{k}3^{k-n}$



I should simplify the following expression (for a complex number):
$$\sum _{ k=0 }^{ n }{ \binom{n}{k}}i^{k}3^{k-n} $$



The solution is $(i+\frac{1}{3})^n$, but I don't quite get the steps. It would be nice if someone could explain.








The Binomial Theorem:
$(x+y)^{n}=\sum_{k=0}^{n}\binom{n}{k}x^{n-k}y^{k}$



Answer



$$
\sum_{k=0}^n \binom{n}{k}i^k 3^{k-n}

= \sum_{k=0}^n \binom{n}{k}i^k \left(3^{-1}\right)^{n-k}
= \sum_{k=0}^n \binom{n}{k}i^k \left(1/3\right)^{n-k}
= (i+1/3)^n
$$
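A direct check of the simplification for a few values of $n$ (a Python sketch using complex literals; math.comb needs Python 3.8+):

    from math import comb

    for n in (1, 2, 5, 8):
        s = sum(comb(n, k) * 1j ** k * 3.0 ** (k - n) for k in range(n + 1))
        print(n, s, (1j + 1 / 3) ** n)   # the two columns agree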


Saturday 19 October 2019

galois theory - Distribution of the sumset of two GF($q$) subsets

First, a simple definition. The sumset of two subsets $\mathcal{S}_1$ and $\mathcal{S}_2$ containing $GF(q)$ elements is defined as:



$$\mathcal{S}_1 + \mathcal{S}_2 = \left\{ s_1 + s_2:s_1 \in \mathcal{S}_1,s_2 \in \mathcal{S}_2 \right\}.$$



I will also use the common definition: $g \cdot \mathcal{S}_1 = \left\{ g \cdot s_1:s_1 \in \mathcal{S}_1 \right\}$ for some $g \in GF(q)$. Note that $GF(q)$ arithmetic is assumed.



For example, assume that $q=4$ and that $\alpha$ is a primitive element GF($4$). Then:





  1. $$\{ 0,\alpha \} + \{ 0,\alpha \} = \{ 0 + 0,0 + \alpha ,\alpha + 0,\alpha + \alpha \} = \{ 0,\alpha \}.$$


  2. $$\{ 0,\alpha \} + \{ 0,1,\alpha \} = \{ 0 + 0,0 + 1,0 + \alpha ,\alpha + 0,\alpha + 1,\alpha + \alpha \} = \{ 0,1,\alpha,\alpha ^2 \}.$$


  3. $$\alpha \cdot \{ 0,1,\alpha \} = \{ 0,\alpha,\alpha ^2 \}.$$




My question is as follows. Assume that the sets $\mathcal{S}_1$ and $\mathcal{S}_2$ are iid random variables containing the symbol $0$. The additional elements of each set are drawn at random (without repetition), such that each $GF(q)$ subset (containing $0$) of a given size is equiprobable. For example, if $q=4$ then there exist probabilities $p_1,p_2,p_3,p_4$ such that:



I. $\Pr ( S_1 = \{ 0 \} ) = p_1$,




II. $\Pr \left( {{S_1} = \left\{ {0,1} \right\}} \right) = \Pr \left( {{S_1} = \left\{ {0,\alpha } \right\}} \right) = \Pr \left( {{S_1} = \left\{ {0,{\alpha ^2}} \right\}} \right) = {p_2},$



III. $\Pr \left( {{S_1} = \left\{ {0,1,\alpha } \right\}} \right) = \Pr \left( {{S_1} = \left\{ {0,1,{\alpha ^2}} \right\}} \right) = \Pr \left( {{S_1} = \left\{ {0,\alpha ,{\alpha ^2}} \right\}} \right) = {p_3},$



IV. $\Pr \left( {{S_1} = \left\{ {0,1,\alpha ,{\alpha ^2}} \right\}} \right) = {p_4}.$



In addition, $h_1$ and $h_2$ are iid random variables, distributed uniformly over the set $\left\{ {1,\alpha ,{\alpha ^2},...,{\alpha ^{q - 2}}} \right\}$ (i.e., $h_1$ and $h_2$ can be any non-zero element of GF($q$) with probability $1/(q-1)$). Denote: $\mathcal{S}_{\rm out} = {h_1} \cdot {\mathcal{S}_1} + {h_2} \cdot {\mathcal{S}_2}.$



I want to prove that $\mathcal{S}_{\rm out}$ is also distributed such that each GF($q$) subset of a given size containing $0$ is equiprobable. (As a special case, it means that I-IV above hold for $\mathcal{S}_{\rm out}$ with some $p_1', p_2',p_3',p_4'$).




What I tried:



So far I observed that $\mathcal{S}_{\rm out}$ sets that differ by a multiplicative factor are equiprobable, as $g \cdot {\mathcal{S}_{\rm out}} = g \cdot {h_1} \cdot {\mathcal{S}_1} + g \cdot {h_2} \cdot {\mathcal{S}_2}$ (for some $g \in$ GF($q$)) has the same distribution as $\mathcal{S}_{\rm out}$, since $g \cdot h_1, g \cdot h_2$ remain uniformly distributed. In fact, this is independent of the distribution of $\mathcal{S}_1$ and $\mathcal{S}_2$. However, I couldn't come up with an idea how to extend this relation to sets of the same size that do not differ by a multiplicative factor. In such cases one needs to show that all element-wise translations of $\mathcal{S}_{\rm out}$ of a given size has the same probability.

Friday 18 October 2019

calculus - Calculate the limit.

I am given the limit $ \lim_{x \to 0^{+}}\frac{\ln x}{\sqrt{x}} $.



As far as I can tell, $ \ln x \rightarrow - \infty $ and $\sqrt{x}\rightarrow 0^+ $, so I got $[\frac{-\infty}{0}]$. What should I do next? Should I transform it and then use l'Hospital's rule? $[\frac{-\infty}{0}]$ is not an indeterminate form, so how should I deal with this?

linear algebra - Prove that a square matrix commutes with its inverse




The Question:




This is a very fundamental and commonly used result in linear algebra, but I haven't been able to find a proof or prove it myself. The statement is as follows:




let $A$ be an $n\times n$ square matrix, and suppose that $B=\operatorname{LeftInv}(A)$ is a matrix such that $BA=I$. Prove that $AB=I$. That is, prove that a matrix commutes with its inverse, that the left-inverse is also the right-inverse




My thoughts so far:



This is particularly annoying to me because it seems like it should be easy.




We have a similar statement for group multiplication, but the commutativity of inverses is often presented as part of the definition. Does this property necessarily follow from the associativity of multiplication? I've noticed that from associativity, we have
$$
\left(A\operatorname{LeftInv}(A)\right)A=A\left(\operatorname{LeftInv}(A)A\right)
$$
But is that enough?



It might help to talk about generalized inverses.


Answer



Your notation $A^{-1}$ is confusing because it makes you think of it as a two-sided inverse, but we only know it's a left-inverse.




Let's call $B$ the matrix so that $BA=I$. You want to prove $AB=I$.



First, you need to prove that there is a $C$ so that $AC=I$. To do that, you can use the determinant but there must be another way. [EDIT] There are several methods here. The simplest (imo) is the one using the fact the matrix has full rank.[/EDIT]



Then you have that $B=BI=B(AC)=(BA)C=IC=C$ so you get $B=C$ and therefore $AB=I$.


analysis - Prove that region under graph of function is measurable



In the measure theory book that I am studying, we consider the 'area' under (i.e. the product measure of) the graph of a function as an example of an application of Fubini's Theorem for integrals (with respect to measures).




The setting: $(X,\mathcal{A}, \mu)$ is a $\sigma$-finite measure space, $\lambda$ is Lebesgue measure on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ (Borel $\sigma$-algebra), $f:X \to [0,+\infty]$ is $\mathcal{A}$-measurable, and we are considering the region under the graph of $f$,



$E=\{(x,y)\in X \times \mathbb{R}|0\leq y < f(x)\}$.



I need to prove $E \in \mathcal{A} \times \mathcal{B}(\mathbb{R})$. I thought to write $E=g^{-1}((0,+\infty])\cap(X \times [0,+\infty])$ where $g(x,y)=f(x)-y$ but I can't see why $g$ must be $\mathcal{A} \times \mathcal{B}(\mathbb{R})$-measurable. Any help would be appreciated.


Answer



$g=k\circ h$ where $h(x,y)=(f(x),y)$ and $k(a,b)=a-b$. [ Here $h:X\times \mathbb R \to \mathbb R^{2}$ and $k:\mathbb R^{2} \to \mathbb R$]. $k:\mathbb R^{2} \to \mathbb R$ is Borel measurable because it is continuous. To show that $h$ is measurable it is enough to show that $h^{-1} (A \times B) \in \mathcal A \times B(\mathbb R)$ for $A,B \in \mathcal B(\mathbb R)$. This is clear because $h^{-1} (A \times B)=f^{-1}(A) \times B$.



I have assumed that $f$ takes only finite values. To handle the general case let $g(x)=f(x)$ if $f(x) <\infty$ and $0$ if $f(x)=\infty$. Let $F=\{(x,y):0\leq y < g(x)\}$. Then $E=(f^{-1}\{\infty\}\times [0,\infty)) \cup [(f^{-1}(\mathbb R)\times \mathbb R) \cap F]$.



Thursday 17 October 2019

calculus - Functional equation $f(xy)=f(x)+f(y)$ and continuity




Prove that if $f:(0,\infty)→\mathbb{R}$ satisfying $f(xy)=f(x)+f(y)$, and if $f$ is continuous at $x=1$, then $f$ is continuous for $x>0$.





I let $x=1$ and I find that $f(y)=f(1)+f(y)$, which implies that $f(1)=0$. So, $\lim_{x\to1}f(x)=0$, but how can I use this to prove continuity of $f$ for every $x > 0$?



Any help would be appreciated. Thanks.


Answer



Given $x_0>0$,
$$f(x)-f(x_0)=f\left(x_0\cdot\frac{x}{x_0}\right)-f(x_0)=f\left(\frac{x}{x_0}\right),$$
by $f$ is continuous at $x=1$, when $x\to x_0$, $\frac{x}{x_0}\to1$, then
$$\lim\limits_{x\to x_0}f(x)=f(x_0).$$


algebra precalculus - Evaluating $\sum_{n=1}^{99}\sin(n)$




I'm looking for a trick, or a quick way to evaluate the sum $\displaystyle{\sum_{n=1}^{99}\sin(n)}$. I was thinking of applying a sum to product formula, but that doesn't seem to help the situation. Any help would be appreciated.


Answer



Hint: compute $\sum_{n=0}^{99} (\cos(n) + i\sin(n))$.
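Spelling the hint out numerically: $\sin n$ is the imaginary part of $e^{in}$, so the sum collapses to a finite geometric series (a Python sketch):

    from math import sin
    import cmath

    direct = sum(sin(n) for n in range(1, 100))
    geometric = sum(cmath.exp(1j * n) for n in range(100))   # n = 0..99, as in the hint
    print(direct, geometric.imag)   # the same number twice (sin 0 contributes nothing)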


Wednesday 16 October 2019

puzzle - Ant on a square




There is a square of side $1\,m$ and an ant has to cross it diagonally. However, it chooses to walk along the boundary, so the distance covered by it is $2\,m$ and not $\sqrt{2}\,m$. This it does in two moves. Now the ant decides to walk along one side and stop at $\frac{1}{2}$, that is $0.5\,m$, then go $0.5\,m$ up, $0.5\,m$ forward and another $0.5\,m$ upward to reach its destination. It does the same with $0.25\,m$ ($\frac{1}{4}$), etc. Assume now that the ant divides the segment $n$ times, that is, into steps of $\frac{1}{n}$, and covers the same distance in $2n$ moves. As $n$ tends to infinity, the path of the ant seems to become the straight-line path that measures $\sqrt{2}\,m$. But if we treat this as a normal limit problem as $n$ tends to infinity, the answer is always $2$. Where is the contradiction? I have tried drawing the paths and checking them for large values of $n$ but I still couldn't spot the contradiction.



Answer



Basically, there is no contradiction. Let's see what you have done. You have constructed a line in space, $\Gamma$. It has a simple parametrisation of $\gamma:t\mapsto(t,t)$ for $t\in[0,m]$.



You then construct a sequence of paths $\Gamma_n$ which also have some parametrisation. For example, $\Gamma_1$ has parametrisation $\gamma_1:t\mapsto(2t,0)$ for $t\in[0,m/2]$ and $\gamma_1:t\mapsto(m, 2t-m)$ for $t\in[m/2,m]$. The other parametrisations are increasingly hard to write, but you basically just cut the two orthogonal lines and keep their parameter intact.



Now, it can be shown that the parametrisations $\gamma_i$ converge towards $\gamma$, and in fact do so uniformly. Your example therefore shows that even if $\Gamma$ is the limit of a family of curves, its length may not be the limit of their lengths.



So what is the problem? Why is the length not converging? In my mind, the most natural way of thinking is that the derivatives do not converge. A length of a line parametrised by $\gamma$ is defined as the integral of $|\dot\gamma|$ over the parametrising interval. If you have $\dot \gamma_n\rightarrow \dot\gamma$, then the integrals will converge. If you do not (and in your case you don't), there is nothing forcing the lengths to converge.


sequences and series - Generalized Euler sum $\sum_{n=1}^\infty \frac{H_n}{n^q}$




I found the following formula



$$\sum_{n=1}^\infty \frac{H_n}{n^q}= \left(1+\frac{q}{2} \right)\zeta(q+1)-\frac{1}{2}\sum_{k=1}^{q-2}\zeta(k+1)\zeta(q-k)$$



and it is cited that Euler proved the formula above , but how ?



Do there exist other proofs ?



Can we have a general formula for the alternating form




$$\sum_{n=1}^\infty (-1)^{n+1}\frac{H_n}{n^q}$$


Answer



$$
\begin{align}
&\sum_{j=0}^k\zeta(k+2-j)\zeta(j+2)\\
&=\sum_{m=1}^\infty\sum_{n=1}^\infty\sum_{j=0}^k\frac1{m^{k+2-j}n^{j+2}}\tag{1}\\
&=(k+1)\zeta(k+4)
+\sum_{\substack{m,n=1\\m\ne n}}^\infty\frac1{m^2n^2}
\frac{\frac1{m^{k+1}}-\frac1{n^{k+1}}}{\frac1m-\frac1n}\tag{2}\\
&=(k+1)\zeta(k+4)
+\sum_{\substack{m,n=1\\m\ne n}}^\infty\frac1{nm^{k+2}(n-m)}-\frac1{mn^{k+2}(n-m)}\tag{3}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\sum_{n=m+1}^\infty\frac1{nm^{k+2}(n-m)}-\frac1{mn^{k+2}(n-m)}\tag{4}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{(n+m)m^{k+2}n}-\frac1{m(n+m)^{k+2}n}\tag{5}\\
&=(k+1)\zeta(k+4)\\
&+2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{m^{k+3}n}-\frac1{(m+n)m^{k+3}}\\
&-2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{m(n+m)^{k+3}}+\frac1{n(n+m)^{k+3}}\tag{6}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{n=1}^\infty\sum_{m=1}^\infty\frac1{n(n+m)^{k+3}}\tag{7}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{n=1}^\infty\sum_{m=n+1}^\infty\frac1{nm^{k+3}}\tag{8}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{n=1}^\infty\sum_{m=n}^\infty\frac1{nm^{k+3}}+4\zeta(k+4)\tag{9}\\
&=(k+5)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{m=1}^\infty\sum_{n=1}^m\frac1{nm^{k+3}}\tag{10}\\
&=(k+5)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}\tag{11}\\
&=(k+5)\zeta(k+4)
-2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}\tag{12}
\end{align}
$$
Letting $q=k+3$ and reindexing $j\mapsto j-1$ yields
$$
\sum_{j=1}^{q-2}\zeta(q-j)\zeta(j+1)

=(q+2)\zeta(q+1)-2\sum_{m=1}^\infty\frac{H_m}{m^q}\tag{13}
$$
and finally
$$
\sum_{m=1}^\infty\frac{H_m}{m^q}
=\frac{q+2}{2}\zeta(q+1)-\frac12\sum_{j=1}^{q-2}\zeta(q-j)\zeta(j+1)\tag{14}
$$







Explanation



$\hphantom{0}(1)$ expand $\zeta$
$\hphantom{0}(2)$ pull out the terms for $m=n$ and use the formula for finite geometric sums on the rest
$\hphantom{0}(3)$ simplify terms
$\hphantom{0}(4)$ utilize the symmetry of $\frac1{nm^{k+2}(n-m)}+\frac1{mn^{k+2}(m-n)}$
$\hphantom{0}(5)$ $n\mapsto n+m$ and change the order of summation
$\hphantom{0}(6)$ $\frac1{mn}=\frac1{m(m+n)}+\frac1{n(m+n)}$
$\hphantom{0}(7)$ $H_m=\sum_{n=1}^\infty\frac1n-\frac1{n+m}$ and use the symmetry of $\frac1{m(n+m)^{k+3}}+\frac1{n(n+m)^{k+3}}$
$\hphantom{0}(8)$ $m\mapsto m-n$
$\hphantom{0}(9)$ subtract and add the terms for $m=n$
$(10)$ combine $\zeta(k+4)$ and change the order of summation
$(11)$ $H_m=\sum_{n=1}^m\frac1n$
$(12)$ combine sums


calculus - Why don't graphing tools represent holes in a graph?



Why don't graphing tools represent holes in the graph of a function? A hole at a point in a graph is point where function is not defined. Suppose there is a function



$$\frac{x}{\sqrt{x+1}-1}$$




It should be like this:



Graph 1



But online tools and even my android graphing tool app shows graph like this



Graph 2



What I'm saying is that, apart from an open circle, there should be some sort of mark representing that the function is not defined there.


Answer




Programs/apps don't usually (if ever) show the hole because it is very small. It's just one point on the graph. Less than a pixel in size, making it impossible to actually show it on your screen.



Now, as for why these programs don't indicate where point-discontinuities are? I don't know. You'd have to talk to the developer of the program. I suppose these apps could implement a way of indicating it to the user. There are ways of doing it yourself, but you'd have to symbolically program your own function/method in the language (if the app is actually a programming language).


Tuesday 15 October 2019

summation - induction proof: $\sum_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$




I encountered the following induction proof on a practice exam for calculus:



$$\sum_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$$



I have to prove this statement with induction.



Can anyone please help me with this proof?



Answer



If $P(n): \sum_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6},$



we see $P(1): 1^2=1$ and $\frac{1(1+1)(2\cdot1+1)}{6}=1$ so, $P(1)$ is true



Let $P(m)$ is true, $$\sum_{k=1}^mk^2 = \frac{m(m+1)(2m+1)}{6}$$



For $P(m+1),$



$$ \frac{m(m+1)(2m+1)}{6}+(m+1)^2$$




$$=\frac{m(m+1)(2m+1)+6(m+1)^2}6$$



$$=\frac{(m+1)\{m(2m+1)+6(m+1)\}}6$$



$$=\frac{(m+1)(m+2)\{2(m+1)+1\}}6$$ as $m(2m+1)+6(m+1)=2m^2+7m+6=(m+2)(2m+3)$



So, $P(m+1)$ is true if $P(m)$ is true


analysis - How to show $\textrm{supp}(f*g)\subseteq \textrm{supp}(f)+\textrm{supp}(g)$?



Let $f, g\in C_0(\mathbb R^n)$ where $C_0(\mathbb R^n)$ is the set of all continuous functions on $\mathbb R^n$ with compact support. In this case $$(f*g)(x)=\int_{\mathbb R^n} f(x-y)g(y)\ dy,$$ is well defined.



How can I show $\textrm{supp}(f*g)\subseteq \textrm{supp}(f)+\textrm{supp}(g)$?



This should be easy but I can't prove it.



I tried to proceed by contradiction as follows: Let $x\in \textrm{supp}(f*g)$. If $x\not\in \textrm{supp}(f)+\textrm{supp}(g)$ then $(x-\textrm{supp}(f))\cap \textrm{supp}(g)=\emptyset$. This should give me a contradiction but I can't see it.



Answer



If $f*g(x)\neq 0$ then $\int_{\Bbb R^n}f(x-y)g(y)dy\neq 0$, so there exists $y\in \Bbb R^n$ such that $f(x-y)g(y)\neq 0$, hence $g(y)\neq 0$ and $f(x-y)\neq 0$, take $z=x-y$ then $x=z+y$ with $f(z)\neq 0$ and $g(y)\neq 0$. Now we get
$\{f*g\neq 0\}\subset \{f\neq 0\}+\{g\neq 0\}\subset \text{supp}(f)+\text{supp}(g)$, so
$\text{supp}(f*g)\subset \text{supp}(f)+\text{supp}(g)$.


Monday 14 October 2019

analysis - How to find this limit: $\lim_{n\to\infty}\left(\sin{\frac{\ln{2}}{2}}+\sin{\frac{\ln{3}}{3}}+\cdots+\sin{\frac{\ln{n}}{n}}\right)^{1/n}$


Find this limit
$$\lim_{n\to\infty}\left(\sin{\dfrac{\ln{2}}{2}}+\sin{\dfrac{\ln{3}}{3}}+\cdots+\sin{\dfrac{\ln{n}}{n}}\right)^{1/n}$$





My idea: use
$$x=e^{\ln{x}}$$
so we only need to find
$$\lim_{n\to \infty}\dfrac{\ln{\left(\sin{\dfrac{\ln{2}}{2}}+\sin{\dfrac{\ln{3}}{3}}+\cdots+\sin{\dfrac{\ln{n}}{n}}\right)}}{n}$$
then
$$\lim_{n\to\infty}\dfrac{\ln{\left(\sin{\dfrac{\ln{2}}{2}}+\sin{\dfrac{\ln{3}}{3}}+\cdots+\sin{\dfrac{\ln{(n+1)}}{n+1}}\right)}-\ln{\left(\sin{\dfrac{\ln{2}}{2}}+\sin{\dfrac{\ln{3}}{3}}+\cdots+\sin{\dfrac{\ln{n}}{n}}\right)}}{(n+1)-n}=\ln{\left(\sin{\dfrac{\ln{2}}{2}}+\sin{\dfrac{\ln{3}}{3}}+\cdots+\sin{\dfrac{\ln{(n+1)}}{n+1}}\right)}-\ln{\left(\sin{\dfrac{\ln{2}}{2}}+\sin{\dfrac{\ln{3}}{3}}+\cdots+\sin{\dfrac{\ln{n}}{n}}\right)}$$
but from here I can't make it work. Thank you!

Sunday 13 October 2019

Complex equation

The problem says to solve the given complex equation:



$$
z^4-\left[
\frac{\sqrt3}{2}i^{21}+
\frac{\sqrt3}{2}i^{9}+
\frac{8}{(1+i)^6}\right]^9=0
$$



The solution is this:



$$
\cos\left(
\frac{\pi}{8}+\frac{k\pi}{2}
\right)+i\sin\left(
\frac{\pi}{8}+\frac{k\pi}{2}
\right),
\quad k=0,1,2,3
$$



My problem is that I can't get this solution. I've tried multiple times, but I always end up with:
$$z^4=(i+\sqrt3\,i)^9$$



Could you help me with this? Thanks in advance.

Saturday 12 October 2019

number theory - Euclidean algorithm on $(a^n-1,a^m-1)$



I'm having trouble applying the Euclidean algorithm to

$$\gcd(a^n-1,a^m-1)=a^{\gcd(n,m)}-1$$
I've seen others do it such as in this solution but I'm having trouble figuring out what they are doing in each step.



Could someone help explain this to me?


Answer



Assume that $n=qm+r$. Then $ a^n-1 = a^r\cdot(a^m)^q-1 $, and since
$$ a^m \equiv 1 \pmod{a^m-1} \tag{1}$$
we have:
$$ (a^n-1) \equiv (a^r-1)\pmod{a^m-1}\tag{2} $$
proving:

$$ \gcd(a^n-1,a^m-1) = \gcd(a^r-1,a^m-1).\tag{3}$$
Now, repeat. The involved exponents transform like in the computation of
$$ \gcd(n,m) = \gcd(r,m) = \ldots \tag{4} $$
hence at last we get:
$$ \gcd(a^n-1,a^m-1) = a^{\gcd(n,m)}-1\tag{5} $$
qed.
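A spot check of the final identity (a Python sketch):

    from math import gcd

    for a in (2, 3, 10):
        for n in range(1, 12):
            for m in range(1, 12):
                assert gcd(a ** n - 1, a ** m - 1) == a ** gcd(n, m) - 1
    print("ok")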


How do I do combinatoric algebra without tedious factorial multiplication and division?

I find combinatoric algebra very non-intuitive. I'm talking about Pascal's identity, for $n\geq r$:
$$
\binom{n+1}{r}=\binom{n}{r}+\binom{n}{r-1}.
$$




I understand the tedious proof of the theorem but what's a trick for understanding combinatoric algebra in general? I can't eyeball and decompose a binomial without memorizing the formulas or doing tedious factorial multiplication and division.



It's never obvious how combinatoric algebra works:



Example: [image]

real analysis - Does there exist a bijective differentiable function $f:\mathbb{R^+}\rightarrow \mathbb{R^+}$, whose derivative is not a continuous function?





Does there exist a bijective differentiable function $f:\mathbb{R^+}\rightarrow \mathbb{R^+}$, whose derivative is not a continuous function?




$x^2\sin \dfrac{1}{x}$ is a good example of a function with a non-continuous derivative, but that will not work here, I guess.


Answer



The function
$$f(x):=x^2\left(2+\sin{1\over x}\right)+8x\quad(x\ne0), \qquad f(0):=0,$$
is differentiable and strictly increasing for $x\geq-1$, and its derivative is not continuous at $x=0$. Translate the graph of $f$ one unit $\to$ and eight units $\uparrow$, and you have your example.


Friday 11 October 2019

trigonometry - Proving sine of sum identity for all angles



Could anyone present a proof of sine of sum identity for any pair of angles $a$, $b$?




$$\sin(a+b) = \sin(a) \cos(b) + \cos(a) \sin(b)$$



Most proofs are based on a geometric approach (angles are $<90^\circ$ in that case). But please note the formula is supposed to work for any pair of angles.



The other derivation I know is using Euler's formula, namely this one.



There's one thing I don't feel comfortable with - we know that we add angles when multiplying two complex numbers. This is proven with sine of sum identity. So first we prove how multiplication of two complex exponentials works using sine of sum identity, and then use multiplication of complex exponentials to prove sine of sum identity. Can you tell me how it's not a circular argument?


Answer



here is a geometric proof i saw in an old american mathematics monthly which uses the unit circle. first show that the square of the chord connecting $(1,0)$ and $(\cos t, \sin t)$ is $2(1-\cos t)$ using the distance formula. now reinterpret:

$$\text{ the squared length of the chord making an angle $t$ at the center is } 2 - 2\cos t $$



now compute the squared length between $(\cos t, \sin t)$ and $(\cos s, \sin s)$ in two different ways:



(i) the distance formula gives you $2 - 2\cos t \cos s - 2\sin t \sin s$



(ii) the chord making an angle $t - s$ gives $2 - 2\cos(t-s)$



equating the two gives you $$\cos (t-s) = \cos t \cos s + \sin t \sin s \tag 1$$




now use the fact $\cos \pi/2 = 0$ to derive $\cos (\pi/2 - s) = \sin s$ by putting $t = \pi/2$ in $(1)$



put $t=0$ to derive that $\cos$ is an even function. put $t = -\pi/2$ to show that $\sin$ is an odd function. after all these you derive
$$\sin(t-s) = \sin t \cos s - \cos t \sin s $$ and the two formulas for sums.


calculus - A sine integral $\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$



The following question comes from Some integral with sine post
$$\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$$

but now I'd be curious to know how to deal with it by methods of complex analysis.
Some suggestions, hints? Thanks!!!



Sis.


Answer



Here's another approach.



We have
$$\begin{eqnarray*}
\int_0^\infty dx\, \left(\frac{\sin x}{x}\right)^n
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \int_{-\infty}^\infty dx\,
\left(\frac{\sin x}{x-i\epsilon}\right)^n \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \int_{-\infty}^\infty dx\,
\frac{1}{(x-i\epsilon)^n}
\left(\frac{e^{i x}-e^{-i x}}{2i}\right)^n \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \frac{1}{(2i)^n} \int_{-\infty}^\infty dx\,
\frac{1}{(x-i\epsilon)^n}
\sum_{k=0}^n (-1)^k {n \choose k} e^{i x(n-2k)} \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^n (-1)^k {n \choose k}
\int_{-\infty}^\infty dx\, \frac{e^{i x(n-2k)}}{(x-i\epsilon)^n}.
\end{eqnarray*}$$
If $n-2k \ge 0$ we close the contour in the upper half-plane and pick up the residue at $x=i\epsilon$.
Otherwise we close the contour in the lower half-plane and pick up no residues.
The upper limit of the sum is thus $\lfloor n/2\rfloor$.
Therefore, using the Cauchy differentiation formula, we find
$$\begin{eqnarray*}
\int_0^\infty dx\, \left(\frac{\sin x}{x}\right)^n
&=& \frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k}
\frac{2\pi i}{(n-1)!}
\left.\frac{d^{n-1}}{d x^{n-1}} e^{i x(n-2k)}\right|_{x=0} \\
&=& \frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^{\lfloor n/2\rfloor}
(-1)^k {n \choose k}
\frac{2\pi i}{(n-1)!} (i(n-2k))^{n-1} \\
&=& \frac{\pi}{2^n (n-1)!}
\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k} (n-2k)^{n-1}.
\end{eqnarray*}$$
The sum can be written in terms of the hypergeometric function but the result is not particularly enlightening.
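Still, the formula is easy to test against the classical values $\int_0^\infty\frac{\sin x}{x}\,dx=\int_0^\infty\left(\frac{\sin x}{x}\right)^2dx=\frac{\pi}{2}$ and $\int_0^\infty\left(\frac{\sin x}{x}\right)^3dx=\frac{3\pi}{8}$ (a Python sketch; math.comb needs Python 3.8+):

    from math import comb, factorial, pi

    def closed(n):
        return pi / (2 ** n * factorial(n - 1)) * sum(
            (-1) ** k * comb(n, k) * (n - 2 * k) ** (n - 1)
            for k in range(n // 2 + 1))

    print(closed(1), closed(2), pi / 2)   # pi/2 three times
    print(closed(3), 3 * pi / 8)          # 3*pi/8 twice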


Thursday 10 October 2019

real analysis - Finite Measure Space integration

Let $(\Omega, \mathcal{A}, \mu)$ be a measure space with finite $\mu$. Let $f,f_n:\Omega \to \overline{\mathbb{R}}$ be a $\mathcal{A}-$measurable function $(n \in \mathbb{N})$.




For every $\varepsilon >0$,



$$\lim_{n \to \infty} \mu \left(\bigcup_{m \geq n} \left\{ x \in \Omega : f_m(x) > f(x)+ \varepsilon \right\}\right)=0$$



it means we may choose an integer $N$ large enough so that



$$\mu \left(\bigcup_{m \geq N} \left\{ x \in \Omega : f_m(x) > f(x)+ \varepsilon \right\}\right)<\delta\,?$$

calculus - Why can't the indefinite integral $\int\frac{\sin(x)}{x}\,\mathrm dx$ be found?




I came across a list of functions in my calculus textbook whose indefinite integral cannot be found. It was written that the integral $$\int \frac{\sin(x)}{x} dx$$ cannot be evaluated without any explanation as to why.




I did some research over the internet and found out that the definite integral $$\int_{0}^{\infty} \frac{\sin(x)}{x} dx$$ can be evaluated using the Laplace transform and is equal to $\pi /2$. But I still couldn't find an answer to my original question. I read somewhere that the integral cannot be expressed using 'elementary functions'. A little help is appreciated; I'm in Calc 1, going beyond my course, and I am sorry if my post shows a lack of research. Thank you!


Answer



Since every continuous real function $ f $ has an indefinite integral $ F $ on its domain, by the Newton-Leibniz formula
$$ F(x)=\int_a^x f(t)\,dt, \quad x\in [a, b]. $$



But we cannot find an expression for $ F(x) $ as a composition of elementary functions built from a finite number of arithmetic operations $(+,\ -,\ \times,\ \div)$, exponentials, logarithms, constants, and solutions of algebraic equations. However, this doesn't mean that we cannot calculate such integrals: by certain methods, such as complex analysis, we can compute the exact value of the definite integral over a suitable interval.


linear algebra - Is there always a mapping from invertible $A$ to any $B \in M_n(\Bbb R)$?





Let $A, B$ be $n\times n$ matrices, then
$1)$ If $A$ is invertible then for every $B$ exists a matrix $X \in M_n(\Bbb R)$ such that $AX = B$.
$2)$ If for every $B$ there exists a matrix $X \in M_n(\Bbb R)$ such that $AX = B$ then $A$ is invertible.




For $1)$ I started with: $AX=B \implies X=A^{-1}B$



But I'm not sure I can do this:
$$A(A^{-1}B)=B$$
$$(AA^{-1})B=B$$ --> not sure about this
$$ IB = B $$



Am I right about the first part?




How should I prove the second part? thanks


Answer



The steps $A(A^{-1}B) = (A A^{-1})B = IB = B$ are okay because matrix multiplication is associative.



For the second part choose $B = I$. Now there exist $X$ s.t. $AX = I$, so $A$ is invertible.


real analysis - $f$ is convex, increasing then $\lim_{x\rightarrow -\infty }f(x)/x$ exists



I am stuck with the following question:



Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be convex and increasing and $\displaystyle\lim_{x\rightarrow -\infty}f(x)=-\infty$. Prove that there exists $\alpha_{0}\in\mathbb{R}$ such that
$$
\lim_{x\rightarrow -\infty}\dfrac{f(x)}{x}=\alpha_{0}.
$$




Can someone give me a hint?



Thanks.


Answer



We can show that the limit
$$
\lim_{x\to-\infty} \frac{f(0)-f(x)}{0-x}
$$

exists.
On the one hand, $\frac{f(0)-f(x)}{0-x} >0$ because $f$ is an increasing function. On the other hand, the convexity implies that $\frac{f(0)-f(x)}{0-x}$ decreases (or at least does not increase) for $x\to -\infty$. A non-increasing function which is bounded from below has a limit. We define

$$
a_0 = \lim_{x\to-\infty} \frac{f(0)-f(x)}{0-x}
$$

and we have
$$
\lim_{x\to-\infty} \frac{f(x)}{x} \\
= \lim_{x\to-\infty} \left(\frac{f(0)-f(x)}{0-x} - \frac{f(0)}{-x}\right) \\
= \lim_{x\to-\infty} \frac{f(0)-f(x)}{0-x} - \lim_{x\to-\infty}\frac{f(0)}{-x} \\
= a_0 - 0 \\
= a_0

$$


Wednesday 9 October 2019

number theory - Prove that $\sum_{i=0}^{63}f_{i}\cdot\left(n+i\right)^{5}=0$


For $n \geq 1$, let $f_m = (-1)^s$ where $s$ is the digital sum modulo $2$ of the binary representation of $m$. Prove that $$ \sum_{i=0}^{63}f_{i}\cdot\left(n+i\right)^{5}=0.$$




Since $s$ is the digital sum taken modulo $2$ we know that $s \in \{0,1\}$. I don't see a pattern in digital sum of the binary representation modulo $2$: $0,1,1,0,1,0,0,1,1,0,0,1,0,1,\ldots$. How do we prove the sum is equal to zero?
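A quick experiment confirms the identity for several $n$ (a Python sketch): the signs $f_i=(-1)^{\mathrm{popcount}(i)}$ are the first $64=2^6$ terms of the Thue-Morse sequence, and Prouhet's observation is that they annihilate every polynomial of degree at most $5$:

    def f(i):
        return -1 if bin(i).count("1") % 2 else 1

    for n in (1, 7, 123):
        print(n, sum(f(i) * (n + i) ** 5 for i in range(64)))   # 0 every time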

algebra precalculus - Is this an incorrect proof of $\cot (x)+\tan(x)=\csc(x)\sec(x)$?

If you input the trig identity:
$$\cot (x)+\tan(x)=\csc(x)\sec(x)$$
Into WolframAlpha, it gives the following proof:



Expand into basic trigonometric parts:
$$\frac{\cos(x)}{\sin(x)} + \frac{\sin(x)}{\cos(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$
Put over a common denominator:




$$\frac{\cos^2(x)+\sin^2(x)}{\cos(x)\sin(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$



Use the Pythagorean identity $\cos^2(x)+\sin^2(x)=1$:



$$\frac{1}{\sin(x)\cos(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$



And finally simplify into



$$1\stackrel{?}{=} 1$$




The left and right side are identical, so the identity has been verified.



However, I take some issue with this. All this is doing is manipulating a statement that we don't know the veracity of into a true statement. And I've learned that any false statement can prove any true statement, so if this identity was wrong you could also reduce it to a true statement.



Obviously, this proof can be easily adapted into a proof by simply manipulating one side into the other, but:



Is this proof correct on its own? And can the steps WolframAlpha takes be justified, or is it completely wrong?

Tuesday 8 October 2019

calculus - Proof of sum formula, no induction



$$\sum_{k=1}^n k=\frac{n(n+1)}2$$




So I was trying to prove this sum formula without induction. I got some tips from my textbook and got this.



Let $S=1+2+\cdots+n-1+n$ be the sum of integers and $S=n+(n+1)+\cdots+2+1$ written backwards. If I add these $2$ equations I get $2S=(1+n)+(1+n)\cdots(1+n)+(1+n)$ $n$ times.



This gives me $2S=n(n+1) \Rightarrow S=\frac{n(n+1)}2$ as wanted.



However if I changed this proof so that n was strictly odd or strictly even, how might I got about this. I realize even means n must be $n/2$. But I haven't been able to implement this in the proof correctly.



Edit: error in question fixed, also by $n/2$ I mean should I implement this idea somewhere in the proof, cause even means divisible by $2$.



Answer



Method 1: (requires you to consider whether $n$ is odd or even.)



$S = 1 + 2 + ...... + n$.



Join up the first to term to the last term and second to second to last and so on.



$S = \underbrace{1 + \underbrace{2 + \underbrace{3 +....+(n-2)} + (n-1)} + n}$.



$= (n+1) + (n+1) + .....$.




If $n$ is even then:



$S = \underbrace{1 + \underbrace{2 + \underbrace{3 +..+\underbrace{\frac n2 + (\frac n2 + 1)}+..+(n-2)} + (n-1)} + n}$



And you have $\frac n2$ pairs that add up to $n+1$. So the sum is $S= \frac n2(n+1)$.



If $n$ is odd then:



$S = \underbrace{1 + \underbrace{2 + \underbrace{3 +..+\underbrace{\frac {n-1}2 + [\frac {n+1}2] + (\frac {n+1}2 + 1)}+..+(n-2)} + (n-1)} + n}$




And you have $\frac {n-1}2$ pairs that also add up to $n+1$ and one extra number $\frac {n+1}2$ which didn't fit into any pair. So the sum is $\frac {n-1}2(n+1) + \frac {n+1}2 =(n-1)\frac {n+1}2 + \frac {n+1}2 = (n-1 + 1)\frac {n+1}2=n\frac {n+1}2$.



Method 1$\frac 12$ (same as above but waves hands over doing the two cases).



$S = \text{average}\times\text{number of terms} = \text{average}\times n$.



Now the average of $1$ and $n$ is $\frac {n+1}2$ and the average of $2$ and $n-1$ is $\frac {n+1}2$ and so on. So the average of all of them together is $\frac {n+1}2$. So $S = \frac {n+1}2n$.



Method 2: (doesn't require considering whether $n$ is odd or even).




$S = 1 + 2 + 3 + ...... + n$



$S = n + (n-1) + (n-2) + ...... + 1$.



$2S = S+S = (n+ 1) + (n+1) + \cdots + (n+1) = n(n+1)$.



$S = \frac {n(n+1)}2$.



Note that by adding $S$ to itself this doesn't matter whether $n$ is even or odd.




And lest you are wondering why we can be so sure that $n(n+1)$ must be even (we constructed it, so it must be true... but why?), we simply note that one of $n$ or $n+1$ must be even.



So no problem.
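
A quick sanity check with $n=5$:
$$1+2+3+4+5=15=\frac{5\cdot6}{2}.$$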


complex analysis - Prove that $lim_{R rightarrow infty} int_{sigma_1}f(z) dz=0$.

Let $$f(z)=\frac{e^{\pi iz}}{z^2-2z+2}$$
and $\gamma_R$ is the closed contour made up of the semi-circular arc $\sigma_1$ given by $\sigma_1(t)=Re^{it}$, $0\le t\le\pi$, and the straight line $\gamma_2$ from $-R$ to $R$ (so the contour $\gamma_R$ is a closed semicircle).




Prove that $$\lim_{R \rightarrow \infty} \int_{\sigma_1}f(z) dz=0.$$
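
A sketch of the standard estimate, assuming $\sigma_1$ is the upper semicircle $0\le t\le\pi$: on $\sigma_1$ we have $|e^{\pi iz}|=e^{-\pi\operatorname{Im} z}\le 1$, and by the reverse triangle inequality $|z^2-2z+2|\ge R^2-2R-2>0$ once $R$ is large enough. The estimation lemma then gives
$$\left|\int_{\sigma_1}f(z)\,dz\right|\le \pi R\cdot\frac{1}{R^2-2R-2}\longrightarrow 0 \quad\text{as } R\to\infty.$$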

integration - Calculating improper integral $\int\limits_{0}^{\infty}\frac{\mathrm{e}^{-x}}{\sqrt{x}}\,\mathrm{d}x$





I want to calculate the improper integral $\displaystyle \int
\limits_{0}^{\infty}\dfrac{\mathrm{e}^{-x}}{\sqrt{x}}\,\mathrm{d}x$

$\DeclareMathOperator\erf{erf}$




Therefore
\begin{align}
I&=\lim\limits_{b\to0^+}I(b)=\lim\limits_{b\to0^+}\int \limits_{b}^{\infty}\dfrac{\mathrm{e}^{-x}}{\sqrt{x}}\,\mathrm{d}x \qquad (b\in\mathbb{R},\ b>0)\\
&=\lim\limits_{b\to0^+}\sqrt{\pi}\left(1-\erf(\sqrt{b})\right)=\sqrt{\pi}\left(1-\erf(0)\right)=\sqrt{\pi}
\end{align}



This looks way too easy. Is this correct or am I missing something? Do you know a better way using the following equation from our lectures?: $$\displaystyle \int\limits_0^\infty e^{-x^2}\,\mathrm{d}x=\frac{1}{2}\sqrt{\pi}$$


Answer



Hint:



Just substitute $x= u^2$. So, you get
$$\int \limits_{0}^{\infty}\dfrac{\mathrm{e}^{-x}}{\sqrt{x}}\,\mathrm{d}x =2\int_0^{\infty}e^{-u^2}du$$
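
Carrying the hint through: with $x=u^2$ we get $\mathrm{d}x=2u\,\mathrm{d}u$ and $\sqrt{x}=u$, so the integrand transforms as $\frac{e^{-u^2}}{u}\cdot 2u\,\mathrm{d}u=2e^{-u^2}\,\mathrm{d}u$, and by the lecture formula quoted in the question,
$$\int\limits_0^\infty\frac{\mathrm{e}^{-x}}{\sqrt{x}}\,\mathrm{d}x = 2\int_0^\infty e^{-u^2}\,\mathrm{d}u = 2\cdot\frac{1}{2}\sqrt{\pi} = \sqrt{\pi}.$$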


Monday 7 October 2019

group theory - Prove that $\operatorname{Aut}(\mathbf{Z_{n}})\simeq \mathbf{Z_{n}^{*}}$



I am writing another exam in Algebra this week, and this time the main topic is automorphisms. I was again going through the example exercises and exams from previous years, and this problem is giving me a hard time:




Prove that for the group of automorphisms $\def\Aut{\operatorname{Aut}}\Aut(\mathbf{Z_{n}})$ we have $\Aut(\mathbf{Z_{n}})\simeq \mathbf{Z_{n}^{*}}$, where $\mathbf{Z_{n}^{*}}=\{1 \leq k \leq n \mid \gcd(k,n)=1\}$.



My main issue with this problem is that it seems very "general" and I don't really know how to address it with any of the techniques we have used.



Could you please show me some ways this could be approached? I appreciate your help.


Answer



Observe that a homomorphism is determined by the image of $\overline{1}$.



Now consider which restrictions you get from the fact that the homomorphism is injective.
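
Slightly more explicitly, a sketch along these lines: every endomorphism of $\mathbf{Z_n}$ has the form $\varphi_k(\overline{x})=k\overline{x}$ with $k$ determined by $\varphi_k(\overline{1})=\overline{k}$. Such a map is an automorphism exactly when $\overline{k}$ generates $\mathbf{Z_n}$, i.e. when $\gcd(k,n)=1$. Since $\varphi_k\circ\varphi_l=\varphi_{kl}$, the correspondence $k\mapsto\varphi_k$ is an isomorphism $\mathbf{Z_{n}^{*}}\to\operatorname{Aut}(\mathbf{Z_{n}})$.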



Sunday 6 October 2019

real analysis - Continuity of a ruler function.

A question from Introduction to Analysis by Arthur Mattuck:



Define the "ruler function" $f(x)$ as follows:



$$f(x)=\begin{cases} {1/2^n}, & \text{if $x={b/2^n}$ for some odd integer $b$;} \\ 0, & \text{otherwise}. \end{cases}$$



(a) Prove that $f(x)$ is discontinuous at the points $b/2^n$, ($b$ odd).



(b) Prove $f(x)$ is continuous at all other points.




I proved (a) by constructing a sequence $\{x_k\}$ of irrational numbers whose limit is $b/2^n$. Then the limit of $\{f(x_k)\}$ is $0$, since $f(x_k)=0$ for all $k$. But $f(b/2^n)\ne 0$, so a discontinuity occurs. I don't know how to prove (b).
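
A possible route for (b): fix $c$ not of the form $b/2^n$, so $f(c)=0$, and let $\varepsilon>0$. Choose $N$ with $1/2^N<\varepsilon$. The points of the form $b/2^n$ with $n\le N$ are all multiples of $1/2^N$, hence spaced at least $1/2^N$ apart, so some interval $(c-\delta,c+\delta)$ contains none of them. For every $x$ in that interval, either $f(x)=0$ or $f(x)=1/2^n$ with $n>N$, and in both cases $|f(x)-f(c)|<\varepsilon$.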

improper integrals - Evaluating $\int_0^{\infty} \frac{\sin xt \sin yt \cos zt}{t^2} \, dt$




The problem is to evaluate the improper integral $I = \int_0^{\infty} \frac{\sin xt \sin yt \cos zt}{t^2} dt$.



This can be written as $\int_0^{\infty} dt \int_0^y \frac{\sin xt \cos st \cos zt}{t} ds$, noting that $\int_0^y \cos(st) ds = \frac{\sin yt}{t}$.



I want to interchange the order of the two integrals for $I$ to show that:



$$I = \int_0^{\infty} dt \int_0^y \frac{\sin xt \cos st \cos zt}{t} ds = \int_0^y ds \int_0^{\infty} \frac{\sin xt \cos st \cos zt}{t} dt$$



I know how to evaluate the second integral. The integrand can be rewritten using the product to sum trigonometric identities to get:




$$f(s) = \int_0^{\infty} \frac{\sin xt \cos st \cos zt}{t} dt =
\frac14 \int_0^{\infty} \frac{\sin((x+||s|-|z||)t)}{t} + \frac{\sin((x-||s|-|z||)t)}{t} +$$
$$\frac{\sin((x+||s|+|z||)t)}{t} + \frac{\sin((x-||s|+|z||)t)}{t} dt$$



It is well known that $\int_0^{\infty} \frac{\sin xt}{t}\, dt =\frac{\pi}{2}\operatorname{sgn} x$, so we can provide an explicit expression for the integral.



In order to interchange the integrals for $I$, don't I have to show that $f(s) = \int_0^{\infty} \frac{\sin xt \cos st \cos zt}{t} dt$ converges uniformly on the interval $\{0 \le s \le y\}$? But $g(x) = \int_0^{\infty} \frac{\sin xt}{t} dt$ does not converge uniformly on any interval that contains $0$, and $f(s)$ contains four integrals of this form.



So I don't know how to show that I can interchange the order of the integrals, or if it is even valid to do so.


Answer




We can use the addition formulas for $\sin$ and $\cos$, or more easily the relations $\sin(z) = \frac{e^{iz}-e^{-iz}}{2i}$ and $\cos(z) = \frac{e^{iz}+e^{-iz}}{2}$, to show that



$$\begin{aligned}4\sin(xt)\sin(yt)\cos(zt) &= -\cos((x+y+z)t) + \cos((x-y-z)t) \\&\quad- \cos((x+y-z)t)+\cos((x-y+z)t)\end{aligned}$$



Using this the integral we are after can be written in terms of



$$I(w) \equiv \int_0^\infty \frac{\cos(wt)-1}{t^2}{\rm d}t = |w|\int_0^\infty \frac{\cos(t)-1}{t^2}{\rm d}t = -\frac{\pi}{2}|w|$$



which is derived below as




$$\int_0^\infty\frac{\sin(xt)\sin(yt)\cos(zt)}{t^2}{\rm d}t = \frac{1}{4}\left[-I(x+y+z) + I(x-y-z) - I(x+y-z) + I(x-y+z)\right]\\ = \frac{\pi}{8}\left[\left| x+y+z\right|-\left| x-y-z\right|+\left| x+y-z\right| -\left| x-y+z\right| \right]$$






To compute the integral $\int_0^\infty \frac{\cos(t)-1}{t^2}{\rm d}t$ we generalize it by adding a $e^{-wt}$ term to the integrand, i.e. we study



$$f(w) = \int_0^\infty \frac{\cos(t)-1}{t^2}e^{-wt}{\rm d}t$$



We now expand $\cos(t)$ in a Taylor-series to get




$$f(w) = \sum_{k=1}^\infty \frac{(-1)^k}{(2k)!}\int_0^\infty t^{2k-2}e^{-wt}{\rm d}t = w\sum_{k=1}^\infty \frac{(-1/w^2)^k}{(2k)(2k-1)}$$



where I have used the definition of the $\Gamma$-function to perform the middle integral. The justification for exchanging the summation and integration can be found in this answer. Splitting $\frac{1}{2k(2k-1)}$ into partial fractions we get



$$f(w) = w\sum_{k=1}^\infty \frac{(-1/w^2)^k}{2k-1} - w\sum_{k=1}^\infty \frac{(-1/w^2)^k}{2k} = -\arctan\left(\frac{1}{w}\right) + \frac{w}{2}\log\left(1 +\frac{1}{w^2}\right)$$



where we have used the Taylor series for $\log(1+x)$ and $\arctan(x)$ to evaluate the sums. Taking the limit $w\to 0^+$ gives the desired result $-\frac{\pi}{2}$: the arctangent term tends to $-\frac{\pi}{2}$, while $\frac{w}{2}\log\left(1+\frac{1}{w^2}\right)\to 0$ (using e.g. L'Hôpital's rule).


Saturday 5 October 2019

complex analysis - Why is $\sin x$ the imaginary part of $e^{ix}$?

Most of us who are studying mathematics are familiar with the famous $e^{ix}=\cos(x)+i\sin(x)$. Why is it that we have $e^{ix}=\cos(x)+i\sin(x)$ and not $e^{ix}=\sin(x)+i\cos(x)$? I haven't studied Complex Analysis enough to know the answer to this question. It pops up in Linear Algebra, Differential Equations, Multivariable Calculus and many other fields. But I feel like textbooks and teachers just expect us students to take it as given without explaining it to a certain extent. I also couldn't find any good article that explains this.
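
One way to see it is through power series: plugging $ix$ into the exponential series and separating real and imaginary terms,
$$e^{ix}=\sum_{n=0}^\infty\frac{(ix)^n}{n!}=\sum_{k=0}^\infty\frac{(-1)^k x^{2k}}{(2k)!}+i\sum_{k=0}^\infty\frac{(-1)^k x^{2k+1}}{(2k+1)!}=\cos(x)+i\sin(x),$$
since $i^{2k}=(-1)^k$ is real and $i^{2k+1}=(-1)^k i$ is imaginary: the even powers form the cosine series and the odd powers the sine series. A quick consistency check: at $x=0$ we need $e^0=1$, which $\cos(0)+i\sin(0)=1$ satisfies, while $\sin(0)+i\cos(0)=i$ does not.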

arithmetic - Could one argue that $10 cdot 10 cdot 10 cdot 10 cdots$ is equal to 0?




The thing about this is, if we assign a variable: $$x=10 \cdot 10 \cdot 10 \cdot 10 \cdots$$ and then enclose all but one multiplicand in brackets (as multiplication is associative), we get: $$x=10 \cdot (10 \cdot 10 \cdot 10 \cdot 10\cdots)$$ We can now see that the expression within the brackets is equal to $x$, so we get: $$x=10 \cdot x$$ The value of $x$ could clearly be shown as $0$, because $0=10\cdot 0$. This can’t be right, because if we do the same thing with $1$ we can show that every number equals every other: $$x=1\cdot x$$$$4=1 \cdot 1 \cdot 1 \cdot 1 \cdots=3$$




My question is where is the flaw in this logic. Where does this method go wrong. Any insight would be helpful.



Answer



First, we have to remind ourselves what it would mean for an infinite product to be equal to anything or to converge to something - much like the infinite summation, it would be the limit of the partial products, right? Well, the partial products for $\prod_{k=1}^\infty 10$ are $10, 10^2, 10^3, 10^4, 10^5$... obviously divergent. So, before I even address what you said: no, the product absolutely does not converge to anything.



Now, generally, this is a flaw not unlike what Ramanujan ran into when showing that the summation $1+2+3+4+... = -1/12$. There's probably a proper formalization of this that someone else can elaborate on in the case of products, but I imagine the idea is the same.




Ramanujan's flaw was that the summation of the natural numbers is divergent. Thus, just "assigning" a variable to the value of the sum, i.e. saying $x = \sum_{k=1}^\infty k$, and then performing manipulations based on that to try to derive a value just is not kosher. The reason is because that summation is divergent - you can check the limit of the partial sums, and they visibly approach $\infty$.



Thus, I imagine an analogous idea holds here: you cannot say $x = \text{some infinite product}$ and perform manipulations as you did if that same product is divergent.



It's like a commenter said - to assume the product has a value and you can assign it to some constant $x$ is nonsense given it clearly has no value, and something reasonable cannot follow from nonsense.



Edit: As noted by a commenter, this is all under the assumptions of us working in the usual topology we normally work with. We could define an alternate topology in which these manipulations for this product make sense. So in a way, you're right - just not in our usual number system. :P


real analysis - A problem related to intermediate value property of continuous function.










Let $f:[0,1] \to \mathbb R$ be a real-valued continuous function satisfying $f(0)=f(1)$. Then using the intermediate value theorem we know that for every $n \in \mathbb N$ there exist two points $a,b \in [0,1]$ at a distance $1/n$ satisfying $f(a)=f(b)$.



Now my question is: for every $r\in [0,1]$, is it possible to find two points $a,b\in [0,1]$ at a distance $r$ satisfying $f(a)=f(b)$, provided $f:[0,1] \to \mathbb R$ is a real-valued continuous function satisfying $f(0)=f(1)$?




As there is a counterexample for $r>1/2$, please consider the case when $r<1/2$.


Answer



Hint: Consider $r=\frac23$ and
$$f(x)=\begin{cases}x&\mathrm{if\ }x\le \frac13\\
1-2x &\mathrm{if\ } \frac13<x<\frac23\\
x-1 &\mathrm{if\ }x\ge\frac23
\end{cases}$$
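
To check that this is indeed a counterexample for $r=\frac23$: $f$ is continuous with $f(0)=0=f(1)$, and for every $x\in[0,\frac13]$,
$$f\left(x+\tfrac23\right)-f(x)=\left(x+\tfrac23-1\right)-x=-\tfrac13\neq 0,$$
so no two points at distance exactly $\frac23$ take the same value.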


Friday 4 October 2019

How can I solve this summation?



I've been trying to solve this problem for quite some time now and I can't think of how to reduce the inner summation to a smaller problem. Usually when I have variables in the upper and lower bound of the summation, I just do



$$(\text{upper bound} - \text{lower bound} + 1)\cdot a$$ where $a$ is the value inside the summation. But this won't work here because I'll still have the $j$ variable inside the outer sigma. Is there an easier way to do this that I don't know about?



$$\sum_{i=0}^{n}\,\,\sum_{j = i} ^ {n-1}(j -i +1 )$$



According to WolframAlpha, the solution should be:




$$\frac 16 n(n^2 + 3n + 2)$$


Answer



This method doesn't use combinatorics explicitly, but reduces the sum to more well-known ones.



The terms $-i$ and $1$ don't depend on the inner summation variable $j$, so you can take them out using the method you describe:



$$\begin{align}\sum_{i=0}^n \sum_{j=i}^{n-1} (j - i + 1) &= \sum_{i=0}^n \left( (n - i)(1-i) + \sum_{j=i}^{n-1}j\right)\end{align}$$



Also, we evaluate the inner summation as $T(n-1) - T(i-1)$ where $T(x)$ is the $x$th triangular number, and $T(-1)=T(0)=0$




$$\begin{align}\sum_{i=0}^n \sum_{j=i}^{n-1} (j - i + 1) &= \sum_{i=0}^n \left( (n - i)(1-i) + \sum_{j=i}^{n-1}j\right) \\
&= \sum_{i=0}^n \left( n - i(n+1) + i^2 + \sum_{j=i}^{n-1}j\right) \\
&= n \sum_{i=0}^n 1 - (n+1) \sum_{i=0}^n i + \sum_{i=0}^n i^2 + \sum_{i=0}^n \sum_{j=i}^{n-1}j \\
&= n (n+1) - (n+1) T(n) + \sum_{i=0}^n i^2 + \sum_{i=0}^n \sum_{j=i}^{n-1}j \\
&= n (n+1) - (n+1) T(n) + \sum_{i=0}^n i^2 + \sum_{i=0}^n (T(n-1) - T(i-1)) \\
&= n (n+1) - (n+1) T(n) + \sum_{i=0}^n i^2 + (n+1)T(n-1) - \sum_{i=0}^n T(i-1) \\
&= n (n+1) - (n+1) T(n) + \sum_{i=0}^n i^2 + (n+1)T(n-1) - \sum_{i=1}^{n-1} T(i) \\
&= (n+1) \underbrace{\left( n - (T(n) - T(n-1)) \right)}_0 + \sum_{i=0}^n i^2 - \sum_{i=1}^{n-1} T(i) \\
&= \sum_{i=0}^n i^2 - \sum_{i=1}^{n-1} T(i) \\
\end{align}$$




Now the summation is expressed in terms of more standard summations whose values are well known.



$$\sum_{i=0}^n i^2 = \frac 1 6 n(n+1)(2n+1)$$
$$\sum_{i=1}^{n-1} T(i) = \frac 1 6 (n-1)n(n+1)$$



And we can evaluate:



$$\begin{align}\sum_{i=0}^n i^2 - \sum_{i=1}^{n-1} T(i) &= \frac 1 6 n (n+1) \left( (2n+1) - (n-1)\right)\\
&= \frac 1 6 n (n+1) (n+2)\\

&= \frac 1 6 n (n^2 + 3n + 2)\\
\end{align}$$


Common term between arithmetic progression



How do I mathematically show that the common terms between the series $3+7+11+....$ and $1+6+11+....$ form an arithmetic progression without actually finding all the individual terms. How does the LCM of the common differences of the given series becomes the common difference of the new series?




My Attempt:
$$
3+(n-1)4=1+(m-1)5\implies 4n-1=5m-4\implies5m-4n=3
$$
But this does not tell me the above statement unless I try all the integer combinations of $m$ and $n$.


Answer



Continuing from your work, you can see that the final equation is a simple linear Diophantine equation. Solving it for $m$ (you could also solve it with respect to $n$) you get $m = 4t + 3$. Plugging this into the corresponding form of the sequence, we get that the common terms are given by:



$$1 + 5(m-1) = 1 + 5(4t + 3 - 1) = 20t + 11$$




Hence the common terms form an arithmetic progression, given by $c_n = 20n + 11$.
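
A quick check: $t=0,1,2$ gives $11,31,51$, and indeed $11=3+4\cdot2=1+5\cdot2$, $31=3+4\cdot7=1+5\cdot6$, $51=3+4\cdot12=1+5\cdot10$. The common difference is $20=\operatorname{lcm}(4,5)$: a term shifted by any smaller amount cannot stay in both progressions, since the shift must be a multiple of both $4$ and $5$. This answers the question about the LCM as well.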


sequences and series - Need to know why $\sum_{k=0}^{\infty}kr^{k} = \frac{r}{(1-r)^{2}}$

Working on a Stat problem where I must find $E(x)$ of $f(x)=\left(\frac{1}{2}\right)^{x+1}$ for $x=0,1,2,\cdots$



I have,



$$E(x)=\sum_{x=0}^{\infty}x\left(\frac{1}{2}\right)^{x+1}=\frac{1}{2}\sum_{x=0}^{\infty}x\left(\frac{1}{2}\right)^{x}$$



I'm pretty sure this is a geometric series, but it would defeat the purpose of doing this problem if I didn't know why the following sum converges:



$$\sum_{k=0}^{\infty}kr^{k} = \frac{r}{(1-r)^{2}}$$




Can anyone explain why this is? I've tried using derivative but keep going in circles.
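
One standard derivation, which may be what the differentiation attempt was circling around: start from the geometric series, valid for $|r|<1$,
$$\sum_{k=0}^{\infty}r^{k}=\frac{1}{1-r},$$
differentiate both sides with respect to $r$ (term-by-term differentiation is justified inside the radius of convergence),
$$\sum_{k=1}^{\infty}kr^{k-1}=\frac{1}{(1-r)^{2}},$$
and multiply by $r$:
$$\sum_{k=0}^{\infty}kr^{k}=\frac{r}{(1-r)^{2}}.$$
With $r=\frac12$ this gives $\sum_{k}k\left(\frac12\right)^{k}=2$, so $E(x)=\frac12\cdot2=1$.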

Thursday 3 October 2019

summation - The value of $\sum_{n=1}^{\infty} q^n \sin(na)$, $|q|<1$

I need to solve this sum, but unfortunately I have no idea how to start. I would be grateful for any advice.




$$\sum_{n=1}^{\infty} q^n \sin(na), \qquad |q|<1$$
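
One standard approach: treat the sum as the imaginary part of a geometric series. Since $|qe^{ia}|=|q|<1$,
$$\sum_{n=1}^{\infty}q^{n}\sin(na)=\operatorname{Im}\sum_{n=1}^{\infty}\left(qe^{ia}\right)^{n}=\operatorname{Im}\frac{qe^{ia}}{1-qe^{ia}}=\frac{q\sin a}{1-2q\cos a+q^{2}},$$
where the last step multiplies numerator and denominator by the conjugate $1-qe^{-ia}$.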


combinatorics - Combinatorial Intuition Behind Binomial Identity



It isn't hard to algebraically show that:
$${n \choose k} = \sum_{j = r}^{n + r - k}{j - 1 \choose r - 1}{n - j \choose k - r}, 1 \leq r \leq k$$ I'm trying to find some sort of combinatorial "proof"/intuition as to why this is true. My thought is that it is similar to Pascal's Rule, but I'm not sure.



Thanks!


Answer



Clearly $\binom{n}k$ is the number of $k$-element subsets of $[n]=\{1,\ldots,n\}$. We can categorize these subsets according to their $r$-th smallest element. How many of them have $r$-th smallest element $j$? There are $j-1$ members of $[n]$ less than $j$, and we have to choose $r-1$ of them for our set; this can be done in $\binom{j-1}{r-1}$ ways. There are $n-j$ members of $[n]$ bigger than $j$, and we have to choose $k-r$ of them for our set; this can be done in $\binom{n-j}{k-r}$ ways. Thus, there are




$$\binom{j-1}{r-1}\binom{n-j}{k-r}$$



ways to choose a $k$-element subset of $[n]$ whose $r$-th smallest element is $j$. Summing over the possible values of $j$ yields the desired identity,



$$\sum_{j=r}^{n+r-k}\binom{j-1}{r-1}\binom{n-j}{k-r}=\binom{n}k\;.$$
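
A small concrete check: take $n=4$, $k=2$, $r=1$, so the sum categorizes the $2$-element subsets of $\{1,2,3,4\}$ by their smallest element $j$:
$$\sum_{j=1}^{3}\binom{j-1}{0}\binom{4-j}{1}=3+2+1=6=\binom{4}{2}\;.$$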


Wednesday 2 October 2019

Ratio test for convergent series - Does that mean that the series $1+\frac{1}{2}+\frac{1}{3}+\dotsb$ converges?





  1. An infinite series is convergent if from and after some fixed term, the ratio of each term to the preceding term is numerically less than some quantity which is itself numerically less than unity.
    Let the series beginning from the fixed term be denoted by
    $$u_1+u_2+u_3+u_4+\dotsb,$$
    and let
$$\frac{u_2}{u_1}<r,\quad \frac{u_3}{u_2}<r,\quad \frac{u_4}{u_3}<r,\quad\dotsc$$ where $r<1$.
    Then
\begin{align*}
&u_1+u_2+u_3+u_4+\dotsb\\
&=u_1\left(1+\frac{u_2}{u_1}+\frac{u_3}{u_2}\cdot\frac{u_2}{u_1}+\frac{u_4}{u_3}\cdot\frac{u_3}{u_2}\cdot\frac{u_2}{u_1}+\dotsb\right)\\
&<u_1\left(1+r+r^2+r^3+\dotsb\right);
\end{align*}
that is, $<\dfrac{u_1}{1-r}$, since $r<1$.
    Hence the given series is convergent.




Does that mean that the series $1 +\frac{1}{2} + \frac{1}{3} +\dotsb$ should be a convergent one?
$$S_n = \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\dotsb$$

Here, we have
\begin{gather}
\frac{u_n}{u_{n-1}} = \frac{1/n}{1/(n-1)} = \frac{n-1}{n} = 1-\frac{1}{n}\\
∴\boxed{\frac{u_n}{u_{n-1}}<1}\\
∴\text{Series should be convergent}
\end{gather}


Answer



You need $\frac{u_n}{u_{n-1}} < r < 1$ for some $r < 1$ and all $n $.



For any $r<1$ we can find some $n$ with $r < \frac {u_n}{u_{n-1}} < 1$, since $\frac{u_n}{u_{n-1}} = 1-\frac1n \to 1$. So we cannot find an appropriate $r<1$.




So we failed the hypothesis. The ratio test fails.
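
In fact the harmonic series diverges, which can be seen directly by grouping:
$$S_{2n}-S_{n}=\frac{1}{n+1}+\dotsb+\frac{1}{2n}\ge n\cdot\frac{1}{2n}=\frac12,$$
so the partial sums grow without bound. This is the standard example showing that the condition "ratios less than some fixed $r<1$" cannot be weakened to merely $\frac{u_n}{u_{n-1}}<1$.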


prime numbers - How to recognise the digit multiplication, subtraction or addition when checking for divisibility by 7, 11, 13, 17 and 19?



I was studying this page, Divisibility by prime numbers under 50, to check for divisibility by 7, 11, 13, 17, 19, etc. Is there any way to recognise whether to add or subtract the given multiple of the unit digit to or from the truncated number, and to know what multiple of the unit digit I have to use?



For example:




to check for the divisibility by 7: subtract 2 times the unit digit from truncated number.



for 11: subtract 1 times



for 13: add 4 times the unit digit etc.


Answer



Yes, we can. Suppose we want to test a number $n$ for divisibility by $d$. Notice that we can expand $n$ with a result of $x$ after truncating the last digit $y$:
$$n=10x+y.$$
And we want to find some number $n'$ of the form $n'=x+\alpha y$ such that $n'$ is divisible by $d$ exactly when $n$ is. For numbers not divisible by $2$ or $5$, there is an elegant solution to this; in particular, there is some number $\alpha$ such that $d$ divides $10\alpha - 1$. Why this matters is that we can then write

$$n'=\alpha n = 10\alpha x + \alpha y$$
which will be divisible by $d$ exactly when $n$ was ($\alpha$ and $d$ must be coprime, meaning that $k\alpha$ is divisible by $d$ exactly when $k$ was) but since $10\alpha = cd + 1$ for some integer $c$ (because $10\alpha -1$ is a multiple of $d$), we can write this as:
$$n'=(cd+1)x + \alpha y$$
and we can get rid of the $cd$ term, since it is divisible by $d$ and clearly if $n$ is divisible by $d$, so is $n-kd$ for any integer $k$. Thus, for our purposes, we can replace $n$ by $x+\alpha y$, getting rid of the $cdx$ term, without changing issues of divisibility by $d$.



This $\alpha$ term is the multiplicative inverse of $10$ mod $d$, which can be computed by the Extended Euclidean algorithm, as described on the Wikipedia page. Trial and error would work too (you just need to find the first positive multiple of $d$ with a $9$ in the one's place or a negative multiple with a $1$ in the one's place). Notice that your existing identities follow this pattern; for $d=7$, we choose $\alpha=-2$, and $10\cdot (-2) - 1 = -21$, which is a multiple of $7$. Similarly, for $11$, we have $\alpha=-1$ and $10\cdot (-1) - 1 = -11$, which is a multiple of $11$; and for $13$, we have $10\cdot 4 - 1 = 39$, which is a multiple of $13$.



So, for instance, to find a rule for $17$, we would notice that $-3\cdot 17= -51 = 10\cdot (-5)-1$, so $\alpha=-5$. Thus, to compute divisibility by $17$, we chop off the unit digit $y$ and subtract $5y$ from the remaining number. To be very explicit, we can give the following general rules for divisibility tests of this form:





  • If $d=10k + 1$, then subtract $k$ times the unit digit from the rest.

  • If $d=10k + 3$, then add $3k+1$ times times the unit digit to the rest.

  • If $d=10k + 7$, then subtract $3k+2$ times the unit digit from the rest.

  • If $d=10k + 9$, then add $k+1$ times the unit digit to the rest.


Real and imaginary part of a complex sinusoid $y(t)=\sin(wt)$



I'm trying to understand the plots on this page. It's a book about the Discrete Fourier Transform, and it's discussing how a function $x(t)=\cos(w_0t)$ or $y(t)=\sin(w_0t)$ is composed of a positive and a negative frequency component. I get why the spectrum of $\cos(wt)$ has two real components and none imaginary. But I don't get why $\sin(wt)$ has two imaginary components, as in (b) of the following image.




Here is the link to the image, from the page mentioned above, that I don't understand.



I think I get how $x(t)=\cos(wt)$ is the sum of two complex sinusoids of frequencies of opposite signs that results in a zero imaginary part:



$$x(t)=\cos(wt)=\frac{e^{jwt}+e^{-jwt}}{2}$$
$$ x(t)=\frac{\cos(wt)+j\sin(wt)+\cos(-wt)+j\sin(-wt)}{2} $$



Since
$$\cos(-x)=\cos(x)$$

$$\sin(-x)=-\sin(x)$$



follows



$$ x(t)=\frac{\cos(wt)+j\sin(wt)+\cos(wt)-j\sin(wt)}{2} $$



so



$$Re\{ \ x(t)\ \} = \frac{\cos(wt)+\cos(wt)}{2}=\cos(wt)$$




and



$$Im\{ \ x(t)\ \} = \frac{\sin(wt)-\sin(wt)}{2}=0$$



That explains why $\cos(wt)$ have two real parts on the graph, of same amplitude and "opposite" frequencies.



I will try to to the same with $\sin(wt)$:



$$y(t)=\sin(wt)=\frac{e^{jwt}-e^{-jwt}}{2j}$$




Using $\cos(-x)=\cos(x)$



$$y(t)=\frac{ \cos(wt)+j\sin(wt) -(\cos(wt)+j\sin(-wt)) }{ 2j }$$
$$y(t)=\frac{ \cos(wt)+j\sin(wt) -\cos(wt)-j\sin(-wt) }{ 2j }$$



$$Re\{ \ y(t) \ \}=\frac{\cos(wt)-\cos(wt)}{2j}=0$$



$$Im\{ \ y(t) \ \}=\frac{\sin(wt)-\sin(-wt)}{2j}$$



I'm not sure how to proceed from there. How come an imaginary part contains $j$? Or maybe $j$ should not be included? But in the case of




$$Im\{ \ y(t) \ \}=\frac{\sin(wt)-\sin(-wt)}{2}$$



Where did that $j^{-1}$ go? This looks wrong to me because
$$j \cdot Im\{ \ y(t) \ \}\neq\frac{\sin(wt)-\sin(-wt)}{2j}$$



What did I do wrong here? This looks so silly, I'm sorry.


Answer



I think that the answer is less complex than you are making it.
Once you have:

$$\cos(\omega t)=\frac{e^{j \omega t}+e^{j (-\omega) t}}{2} $$
that shows that there is a '$\frac{1}{2}$' magnitude at '$\omega$' and a '$\frac{1}{2}$' magnitude at '$-\omega$', both in the positive real direction.



Taking the same logic:
$$\sin(\omega t)=\frac{e^{j \omega t}-e^{j (-\omega) t}}{2j} $$
Multiply both numerator and denominator of the fraction by $\frac{j}{2}$:
$$\sin(\omega t)=\frac{j\frac{1}{2} e^{j \omega t} - j\frac{1}{2} e^{j (-\omega) t}}{\frac{1}{2}\times 2j^2}$$
Where $j^2=-1$. This eliminates the denominator and multiplies the numerator by $-1$:
$$\sin(\omega t)=-j\frac{1}{2} e^{j \omega t} + j\frac{1}{2} e^{j (-\omega) t}$$
This shows that there is a '$-\frac{1}{2}j$' point at '$\omega$' and a '$(+)\frac{1}{2}j$' point at '$-\omega$'. In this case, both are in the imaginary plane. The one at positive $\omega$ has a negative sense, and the one at negative $\omega$ has a positive sense.




Does that help?


Tuesday 1 October 2019

real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

calculus - If a function such that $f(x+y)=f(x)+f(y)$ is continuous at $0$, then it is continuous on $\mathbb R$

Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a function such that $f(x+y)=f(x)+f(y)$. If $f$ is continuous at zero, how can I prove that it is continuous on $\mathbb{R}$?
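
A sketch of the standard argument: first, $f(0)=f(0+0)=f(0)+f(0)$ forces $f(0)=0$. Now fix any $x\in\mathbb{R}$. As $h\to0$,
$$f(x+h)-f(x)=f(h)\longrightarrow f(0)=0$$
by continuity at $0$, so $f$ is continuous at $x$.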

real analysis - Let $q \in \mathbb{Q}$ and $x \in \mathbb{R}-\mathbb{Q}$. Prove that $q+x \in \mathbb{R}-\mathbb{Q}$



This is a homework problem for my Real Analysis course and I am having trouble getting started in the right direction. I understand the definition of the set of rational numbers and how $\mathbb{R}-\mathbb{Q}$ is the set of irrational numbers, but I am having trouble making the leap to where $q+x$ is an element of the set of irrational numbers. Is there something that I'm missing?



I started with $a,b∈Q$ where either $a=0$ or $b=0$ which then gives us either $a+b=a$ or $a+b=b$. We know from this that $a+b∈Q$. It's the next part I'm struggling with.


Answer



You know $q$ is rational, so you can write $q = \dfrac{a}{b}$ for some integers $a,b$ where $b \neq 0$.



You also know that $x$ is irrational, so you cannot write $x$ as the ratio of two integers.




You need to show that $q+x$ is irrational. Suppose $q+x$ is rational, for the sake of contradiction.



Then, you can write $q+x = \dfrac{c}{d}$ for some integers $c,d$ where $d \neq 0$.



What does this tell you about $x$?
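
Finishing in the spirit of the hint: it tells us
$$x=\frac{c}{d}-\frac{a}{b}=\frac{cb-ad}{db},$$
a ratio of integers with $db\neq0$, so $x$ would be rational, contradicting $x\in\mathbb{R}-\mathbb{Q}$. Hence $q+x$ must be irrational.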


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...