Friday 30 November 2018

trigonometry - Real and imaginary part from trigonometric form

In school I am learning about complex numbers in trigonometric form.
$$z = a + bi = r ( \cos(\alpha) + i\sin(\alpha) )$$
In a problem I have to find the real and imaginary part from the trigonometric form of
$$1 + \cos(\alpha) + i\sin(\alpha)$$
For which I think the solution is




  • Real part = $1+ \cos(\alpha)$

  • Imaginary part = $\sin(\alpha)$




Part 2: $\sin(\alpha)+i\cos(\alpha)$
Update:
For which I think the solution is:




  • Real part = $\sin(\alpha)$

  • Imaginary part = $\cos(\alpha)$
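
As a quick numerical sanity check (my own addition, not part of the original post), Python's cmath module confirms that for $1+\cos(\alpha)+i\sin(\alpha)$ the real part is $1+\cos(\alpha)$ and the imaginary part is $\sin(\alpha)$:

import cmath, math

alpha = 0.7                            # arbitrary test angle
z = 1 + cmath.exp(1j * alpha)          # 1 + cos(alpha) + i*sin(alpha)
print(z.real, 1 + math.cos(alpha))     # both ~ 1.7648
print(z.imag, math.sin(alpha))         # both ~ 0.6442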

trigonometry - Simple expressions for $\sum_{k=0}^n\cos(k\theta)$ and $\sum_{k=1}^n\sin(k\theta)$?











I'm curious if there is a simple expression for
$$
1+\cos\theta+\cos 2\theta+\cdots+\cos n\theta
$$
and
$$
\sin\theta+\sin 2\theta+\cdots+\sin n\theta.
$$
Using Euler's formula, I write $z=e^{i\theta}$, hence $z^k=e^{ik\theta}=\cos(k\theta)+i\sin(k\theta)$.

So it should be that
$$
\begin{align*}
1+\cos\theta+\cos 2\theta+\cdots+\cos n\theta &= \Re(1+z+\cdots+z^n)\\
&= \Re\left(\frac{1-z^{n+1}}{1-z}\right).
\end{align*}
$$
Similarly,
$$
\begin{align*}

\sin\theta+\sin 2\theta+\cdots+\sin n\theta &= \Im(z+\cdots+z^n)\\
&= \Im\left(\frac{z-z^{n+1}}{1-z}\right).
\end{align*}
$$
Can you pull out a simple expression from these, and if not, is there a better approach? Thanks!


Answer



Take the expression you have and multiply the numerator and denominator by $1-\bar{z}$, and using $z\bar z=1$:
$$\frac{1-z^{n+1}}{1-z} = \frac{1-z^{n+1}-\bar{z}+z^n}{2-(z+\bar z)}$$



But $z+\bar{z}=2\cos \theta$, so the real part of this expression is the real part of the numerator divided by $2-2\cos \theta$. But the real part of the numerator is $1-\cos {(n+1)\theta} - \cos \theta + \cos{n\theta}$, so the entire expression is:




$$\frac{1-\cos {(n+1)\theta} - \cos \theta + \cos{n\theta}}{2-2\cos\theta}=\frac{1}{2} + \frac{\cos {n\theta} - \cos{(n+1)\theta}}{2-2\cos \theta}$$



for the cosine case. You can do much the same for the case of the sine function.
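
A short numerical check of the cosine closed form (my own sketch, assuming nothing beyond the formula above):

import math

n, theta = 7, 0.9                      # arbitrary test values
lhs = sum(math.cos(k * theta) for k in range(n + 1))
rhs = 0.5 + (math.cos(n * theta) - math.cos((n + 1) * theta)) / (2 - 2 * math.cos(theta))
print(lhs, rhs)                        # agree to machine precision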


Wednesday 28 November 2018

calculus - Double Integral from Polar to Cartesian Coordinates

I know that when converting a double integral from Cartesian to Polar Coordinates, the Jacobian is equal to r and so we get $\iint dxdy$ $=$ $\iint rdrd\theta$. But what if I wanted to go from Polar to Cartesian Coordinates? What would the Jacobian be?

What is wrong with this circle's area problem?



My solution and my book's solution don't match.



Is something wrong with my solution?
If so, where and why?




My book says:




The radius r of a circle increases by 50%.
In terms of r, what is the area of the circle
with the increased radius?




My solution:





  1. A = $\pi r^2\ $ => Area of any circle

  2. ir = $\ 3r/2 \ $ => Increased radius

  3. A$\ _{ir} = \pi (ir)^{2} \ $ => Area of circle with increased radius

  4. A$\ _{ir} = \pi (3r/2 )^{2} \ $ => Substituting ir with its value

  5. A$\ _{ir} = \pi (9r^2/4 ) \ $ => Square

  6. A$\ _{ir} = \ (9\pi r^2 )/4 \ $ => Result



Is the "in terms of $r$" part tricky?


Answer




There is nothing wrong with your answer!
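
Indeed, a one-line check of the arithmetic (my own addition): increasing $r$ by 50% scales the area by $(3/2)^2 = 9/4$, exactly as in step 6.

import math

r = 2.0                                # any radius works
print(math.pi * (1.5 * r) ** 2)        # area with increased radius
print(9 * math.pi * r ** 2 / 4)        # same value: 9*pi*r^2/4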


Tuesday 27 November 2018

calculus - Derivative has finite, unequal left and right limits at a point; is the function non-differentiable at this point?

I have a short question, related to the ongoing search of mathematics instructors for counter-examples to common undergraduate mistakes.



The classical example of a function that is differentiable everywhere but has discontinuous derivative is
\begin{equation}
f(x)=\left\{
\begin{array}{cc}
x^2\sin(1/x) &(x\neq0), \\
0 &(x=0),
\end{array}\right.
\end{equation}

which has derivative
\begin{equation}
f'(x)=\left\{
\begin{array}{cc}
2x\sin(1/x)-\cos(1/x) &(x\neq0), \\
0 &(x=0).
\end{array}\right.
\end{equation}
$f'$ fails to be continuous at $0$ purely because its left- and right-hand limits do not even exist at $0$.




However, suppose that we have found a function $g$ whose derivative $g'$ has finite but unequal left- and right-hand limits at some cluster point $x_0$ in its domain. May we conclude that $g$ is not differentiable at $x_0$?



If this is not the case, is there a simple counter-example? (I'm guessing such a counter-example ought to be more complicated than the $f$ I have given above, as $f$ is sometimes claimed to be the simplest example of a differentiable function with discontinuous derivative.)



Thanks in advance!

real analysis - About the $\lim_{x \rightarrow \infty}(\ln{x}-x)$




Find the $\lim_{x \rightarrow \infty}(\ln{x}-x)$.





We know that $\ln{x}=o(x)$ as ${x \rightarrow \infty}$ therefore we can guess that the limit will be $-\infty$.



Intuitively $x$ goes to infinity way faster than $\ln{x}$.



Here it is my formal proof of this:



We have that $\lim_{x \rightarrow \infty} \frac{x}{2\ln{x}}=+ \infty$, thus from the definition, $\exists\, a>0$ such that $x> 2\ln{x}$ for all $x>a$.



Now from this, $\forall x>a$, we deduce that $x- \ln{x} > 2 \ln{x}-\ln{x}=\ln{x} \Rightarrow \ln{x}-x < - \ln{x}$




Finally we have $\limsup_{x \rightarrow \infty} (\ln{x}-x) \leqslant - \infty$



Thus $\limsup_{x \rightarrow \infty} (\ln{x}-x)=\liminf_{x \rightarrow \infty} (\ln{x}-x)= -\infty$ .



Is my argument correct?



Thank you in advance!


Answer



for $x>0$,




$$\ln (x)-x=x (\frac {\ln (x)}{x}-1)$$



and $$\lim_{x\to+\infty}\frac {\ln (x)}{x}=0$$



thus



$$\lim_{+\infty}(\ln (x)-x)=-\infty $$
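
To see the divergence numerically (my own illustration, not part of the original answer):

import math

for x in [10, 100, 1000, 10**6]:
    print(x, math.log(x) - x)          # -7.70, -95.39, -993.09, ~ -1e6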


multivariable calculus - Example of discontinuous function having all partial derivatives

Is it possible for a real-valued function of two variables, defined on an open set, to have partial derivatives of all orders and to be discontinuous at some point, or maybe at each point?

Show that two numbers divided by their GCD are coprime




Let $a, b \in \mathbb{Z} \setminus \{0\}$ and $d = \gcd(a, b)$. Show that $\gcd(\frac{a}{d}, \frac{b}{d}) = 1$.



I tried proving this by contradiction and showing that otherwise $d$ isn't the gcd of $a$ and $b$, but it didn't work. Could someone please give me a hint on what the proof should look like?


Answer



If $d'>1$ divides both $\frac{a}{d}$ and $\frac{b}{d}$ then $dd'> d$ divides $a$ and $b$, contradicting the fact that $d$ is the greatest common divisor of $a$ and $b$.


calculate the absolute value of a complex number




I have to calculate the absolute value of $\lvert{i(2+3i)(5-2i)\over(2-3i)^3}\rvert$ solely using properties of modulus, not actually calculating the answer. I know that I could take the absolute value of the numerator and denominator and then take the absolute value of each factor, giving me $\lvert i \rvert \lvert2+3i\rvert\lvert5-2i\rvert\over \lvert(2-3i)^3\rvert$. I'm not sure what to do after this, without calculating it. Using $\lvert z \rvert=\sqrt{x^2 + y^2}$ I can plug each factor into the equation and simplify, but that would be calculating it, and I'm not sure if I'm supposed to go that far. Help please.


Answer



It seems to me that at some point one is going to want to use



$\vert a + bi \vert^2 = a^2 + b^2; \tag{1}$



that is, if an answer expressed as a single, non-negative real number is the ultimate goal. What we can do, however, is to carry things as far as we possibly can without employing (1), thus minimizing (hopefully) the amount of arithmetic to be done; we do this by working with more abstract properties of the modulus, such as



$\vert z_1 z_2 \vert = \vert z_1 \vert \vert z_2 \vert \tag{2}$




and



$\vert z \vert = \vert \bar z \vert\, \tag{3}$



etc. Bearing this intention in mind, we may proceed. Our OP cele has already taken a solid first step with



$\vert \dfrac{(i)(2 + 3i)(5 -2i)}{(2- 3i)^3} \vert = \dfrac{\vert i \vert \vert 2 + 3i \vert \vert 5 - 2i \vert}{\vert (2 - 3i)^3 \vert}; \tag{4}$



proceeding from ($4$), we note that several simplifications can be made before we actually invoke ($1$). For example, $\vert i \vert = 1$; this of course follows almost trivially from ($1$), but we might also observe that $\vert 1 \vert = 1$ by virtue of ($2$): $\vert 1 \vert = \vert 1^2 \vert = \vert 1 \vert^2$, whence $\vert 1 \vert \ne 0$ implies $\vert 1 \vert = 1$; from this logic we have $\vert -1 \vert = 1$ as well, since $1 = \vert 1 \vert = \vert (-1)^2 \vert = \vert -1 \vert^2$ and we must have $\vert -1 \vert > 0$, so $\vert -1 \vert = 1$; then




$\vert i \vert^2 = \vert i^2 \vert = \vert -1 \vert = 1, \tag{5}$



so



$\vert i \vert = 1; \tag{6}$



$\vert -i \vert = 1$ follows similarly. We thus deduce, without invoking ($1$), that the factor $\vert i \vert$ in the numerator of the right-hand side of ($4$) is of no consequence; we may ignore it. Turning next to the factors $\vert 2 + 3i \vert$ and $\vert (2 - 3i)^3 \vert$, we have, again from ($2$), that



$\vert (2 - 3i)^3 \vert = \vert 2 - 3i \vert^3 \tag{7}$




and from ($3$)



$\vert 2 + 3i \vert = \vert 2 - 3i \vert; \tag{8}$



(6), (7) and (8) lead to the conclusion that



$\dfrac{\vert i \vert \vert 2 + 3i \vert \vert 5 - 2i \vert}{\vert (2 - 3i)^3 \vert} = \dfrac{\vert 5 - 2i \vert}{\vert 2 - 3i \vert^2}. \tag{9}$



I can't see how to take this any further without ($1$); using it yields




$\vert \dfrac{(i)(2 + 3i)(5 -2i)}{(2- 3i)^3} \vert = \dfrac{\vert i \vert \vert 2 + 3i \vert \vert 5 - 2i \vert}{\vert (2 - 3i)^3 \vert} = \dfrac{\sqrt{29}}{13}, \tag{10}$



in agreement with Adrian's answer.
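
(A numerical aside of my own, not part of the original answer: Python confirms the value.)

import math

z = (1j * (2 + 3j) * (5 - 2j)) / (2 - 3j) ** 3
print(abs(z))                          # 0.41425...
print(math.sqrt(29) / 13)              # same: sqrt(29)/13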



I think one point of this exercise is to illustrate the differences encountered when considering $\vert z \vert$ as a homomorphism from the group of non-zero complex numbers $\Bbb C^\ast$ to the positive reals $\Bbb R_+^\ast$ versus a norm on the vector space $\Bbb C$. In the former case we start with "axioms" like (2) and (3) and then deduce algebraic facts about $\vert z \vert$; in the latter we start with (1) and prove things like (2)-(3). Ultimately, of course, the two views are equivalent, but they often appear different in practice.



Finally, it would be interesting to know if a homomorphism $\phi: \Bbb C^\ast \to \Bbb R^\ast$ ($2$) which is conjugate-invariant, that is $\phi(z) = \phi(\bar z)$ ($3$), (I think "$\phi$ factors through the conjugation involution" is the way the group theorists like to say it?) shares many properties in common with $\vert \cdot \vert$ as defined by (1). Do we need more axioms than (2) and (3) to attain the equivalence? I'd be glad to hear from anyone who knows.



Hope this helps. Cheers,




and as always,



Fiat Lux!!!


Monday 26 November 2018

Is there any functional equation $f(ab+cd)= f(a)+f(b)+f(c)+f(d)$?



I am looking for a real, continuous function that satisfies the functional equation
$$
f(ab+cd)= f(a)+f(b)+f(c)+f(d)
$$

where $a,b,c,d$ are real.

This is equivalent to a function satisfying these two functional equations:
$$
f(x+y)=f(x)+f(y)
$$

$$
f(xy)=f(x)+f(y)
$$

or Cauchy's functional equation and the logarithmic functional equation, respectively. I have the suspicion that the only function satisfying this is $f(x) = 0$, but I am very confused as to how a solution may be found analytically. Thank you in advance.



EDIT: I saw in the comments that if f(x) is defined at zero the answer is trivial. If the domain were restricted to not include zero or even to not include all nonpositive numbers, would the answer change? If so, how? What if the function were discontinuous? Thank you again.



Answer



If $f$ has the positive numbers as its domain, it is zero regardless of continuity.



Note first that $f$ satisfies the relation $f(x+y)=f(xy)$ for all $x,y$ in its domain. Now, fix any two positive numbers $u$ and $v$. If $u^2> 4v$, then $u=x+y$ and $v=xy$ for $x,y=\frac{u \pm \sqrt{u^2-4v}}{2}$, and these $x,y$ are positive. So $f(u)=f(v)$.



Similarly, if $v^2 > 4u$, then $f(u)=f(v)$.



But if $u^2 \le 4v$ and $v^2\le4u$, then
$u^2\le4(2\sqrt{u})$, which implies that $u\le 4$. So $4^2 \ge 4u$, which means that $f(u)=f(4)$. Since $u$ and $v$ are symmetric, we also have $f(v)=f(4)$. So $f(u)=f(v)$.




Since $u$ and $v$ are arbitrary, $f$ is constant, which means that it is zero.


calculus - Determine $\lim\limits_{x\to 0, x\neq 0}\frac{e^{\sin(x)}-1}{\sin(2x)}=\frac{1}{2}$ without using L'Hospital



How to prove that




$$\lim\limits_{x\to 0, x\neq 0}\frac{e^{\sin(x)}-1}{\sin(2x)}=\frac{1}{2}$$



without using L'Hospital?



Using L'Hospital, it's quite easy. But without, I don't get this. I tried different approaches, for example writing $$e^{\sin(x)}=\sum\limits_{k=0}^\infty\frac{\sin(x)^k}{k!}$$
and
$$\sin(2x)=2\sin(x)\cos(x)$$
and get
$$\frac{e^{\sin(x)}-1}{\sin(2x)}=\frac{\sin(x)+\sum\limits_{k=2}^\infty\frac{\sin(x)^k}{k!} }{2\sin(x)\cos(x)}$$

but it seems to be unrewarding. How can I calculate the limit instead?



Any advice will be appreciated.


Answer



From the known limit
$$
\lim\limits_{u\to 0}\frac{e^u-1}{u}=1,
$$ one gets
$$
\lim\limits_{x\to 0}\frac{e^{\sin x}-1}{\sin(2x)}=\lim\limits_{x\to 0}\left(\frac{e^{\sin x}-1}{\sin x}\cdot\frac{\sin x}{\sin(2x)}\right)=\lim\limits_{x\to 0}\left(\frac{e^{\sin x}-1}{\sin x}\cdot\frac{1}{2\cos x}\right)=\color{red}{1}\cdot\frac{1}{2}.
$$
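
A quick numerical check of the limit (my own addition):

import math

for x in [0.1, 0.01, 0.001]:
    print((math.exp(math.sin(x)) - 1) / math.sin(2 * x))   # -> 0.5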


linear algebra - Given any two $n \times n$ matrices $A$ and $B$ can we always find $c \neq 0$ and $Y \neq 0$ such that $AY = cBY$ is true?

Given any two $n \times n$ matrices $A$ and $B$ can we always find a scalar $c \neq 0$ and $n \times 1$ vector $Y \neq 0$ such that $AY = cBY$ is true ?

Sunday 25 November 2018

elementary number theory - Prove that if $\gcd(a,b)=1$ then $\gcd(ab,c) = \gcd(a,c) \gcd(b,c)$




Let $a, b, c$ be integers. Prove that if $\gcd(a,b)=1$ then $\gcd(ab,c) = \gcd(a,c) \gcd(b,c)$



First time asking here. I'm not sure what your policies are on general homework help but I truly am stuck.



So far I have shown $\gcd(a,c) \gcd(b,c)$ as an integer combination of $ab$ and $c$. So if I can show that $\gcd(a,c) \gcd(b,c)$ divides $ab$ and $c$ I can use the proof that if an integer $d$ is a common divisor of $a$ and $b$, and $d=ax+by$ for some $x$ and $y$, that $d=\gcd(a,b)$. However I don't really know where to start with this. Any help would be appreciated.


Answer



I'm going to write $(m,n)$ for $\text{gcd}(m,n)$ throughout what follows.



You want to show that $(a,c)(b,c) \mid ab, c$. From the definition of $(m,n)$, it's easy to show that $(a,c)(b,c) \mid ab$ (since $(a,c) \mid a$ and $(b,c) \mid b$). So it really boils down to showing that $(a,c)(b,c) \mid c$.




Since $(a,c), (b,c) \mid c$, you can write $c = r(a,c) = s(b,c)$ for some integers $r$ and $s$. Therefore, to show that $(a,c)(b,c) \mid c$, it's enough to show that $(b,c) \mid r = c/(a,c)$. This is equivalent to showing that $p \nmid (a,c)$ for any prime number dividing $(b,c)$. This follows from $a$ and $b$ being coprime. (Why?)
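
A small empirical check of the identity (my own sketch, not part of the original answer):

from math import gcd
from random import randint

a, b = 9, 20                           # any coprime pair works
for _ in range(5):
    c = randint(1, 10**6)
    assert gcd(a * b, c) == gcd(a, c) * gcd(b, c)
print("identity holds on the samples")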


Thursday 22 November 2018

elementary set theory - Intersection of images is empty implies intersection of preimages is empty.

Let $f:(X,\tau_X) \rightarrow (Y,\tau_Y)$ be a continuous function between topological spaces. Can you show that
$$U,V\in \tau_Y, ~U\cap V = \emptyset \implies f^{-1}(U)\cap f^{-1}(V) = \emptyset.$$
It is stated as a fact in a proof that path-connectedness implies connectedness.
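
For what it's worth (my own one-line sketch, not part of the original post), the fact follows from preimages commuting with intersections, with no continuity needed:
$$f^{-1}(U)\cap f^{-1}(V) = f^{-1}(U\cap V) = f^{-1}(\emptyset) = \emptyset.$$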

sequences and series - How to calculate $S_{n}=\sum_{i=1}^{n}\frac{1}{i^{2}}$

In math lesson, my teacher told me that Euler once used a very delicate method to calculate $\displaystyle S_{n}=\sum_{i=1}^{n}\frac{1}{i^{2}}$ and wrote a paper about it.



I wonder how to calculate this. I need a precise answer, not the answer that it's less than a number such as 2.

elementary number theory - Linear diophantine equation $100x - 23y = -19$



I need help with this equation: $$100x - 23y = -19.$$ When I plug this into Wolfram|Alpha, one of the integer solutions is $x = 23n + 12$ where $n$ ranges over the integers, but I can't seem to figure out how they got to that answer.


Answer



$100x -23y = -19$ if and only if $23y = 100x+19$, if and only if $100x+19$ is divisible by $23$. Using modular arithmetic, you have
$$\begin{align*}
100x + 19\equiv 0\pmod{23}&\Longleftrightarrow 100x\equiv -19\pmod{23}\\
&\Longleftrightarrow 8x \equiv 4\pmod{23}\\
&\Longleftrightarrow 2x\equiv 1\pmod{23}\\

&\Longleftrightarrow x\equiv 12\pmod{23}.
\end{align*}$$
so $x=12+23n$ for some integer $n$.
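
A quick check of the solution family (my own addition):

x = 12                                 # representative solution
y = (100 * x + 19) // 23               # y = 53
assert 100 * x - 23 * y == -19
for n in range(5):                     # the whole family x = 12 + 23n
    assert (100 * (12 + 23 * n) + 19) % 23 == 0
print(x, y)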


Wednesday 21 November 2018

sequences and series - How can I evaluate $\sum_{n=0}^\infty(n+1)x^n$?



How can I evaluate
$$\sum_{n=1}^\infty\frac{2n}{3^{n+1}}$$?
I know the answer thanks to Wolfram Alpha, but I'm more concerned with how I can derive that answer. It cites tests to prove that it is convergent, but my class has never learned these before. So I feel that there must be a simpler method.




In general, how can I evaluate $$\sum_{n=0}^\infty (n+1)x^n?$$


Answer



No need to use Taylor series, this can be derived in a similar way to the formula for geometric series. Let's find a general formula for the following sum: $$S_{m}=\sum_{n=1}^{m}nr^{n}.$$



Notice that
\begin{align*}
S_{m}-rS_{m} & = -mr^{m+1}+\sum_{n=1}^{m}r^{n}\\
& = -mr^{m+1}+\frac{r-r^{m+1}}{1-r} \\
& =\frac{mr^{m+2}-(m+1)r^{m+1}+r}{1-r}.
\end{align*}
Hence

$$S_m = \frac{mr^{m+2}-(m+1)r^{m+1}+r}{(1-r)^2}.$$
This equality holds for any $r \neq 1$, but in your case we have $r=\frac{1}{3}$ and a factor of $\frac{2}{3}$ in front of the sum. That is
\begin{align*}
\sum_{n=1}^{\infty}\frac{2n}{3^{n+1}}
& = \frac{2}{3}\lim_{m\rightarrow\infty}\frac{m\left(\frac{1}{3}\right)^{m+2}-(m+1)\left(\frac{1}{3}\right)^{m+1}+\left(\frac{1}{3}\right)}{\left(1-\left(\frac{1}{3}\right)\right)^{2}} \\
& =\frac{2}{3}\frac{\left(\frac{1}{3}\right)}{\left(\frac{2}{3}\right)^{2}} \\
& =\frac{1}{2}.
\end{align*}



Added note:




We can define $$S_m^k(r) = \sum_{n=1}^m n^k r^n.$$ Then the sum above considered is $S_m^1(r)$, and the geometric series is $S_m^0(r)$. We can evaluate $S_m^2(r)$ by using a similar trick, and considering $S_m^2(r) - rS_m^2(r)$. This will then equal a combination of $S_m^1(r)$ and $S_m^0(r)$, which we already have formulas for.



This means that given a $k$, we could work out a formula for $S_m^k(r)$, but can we find $S_m^k(r)$ in general for any $k$? It turns out we can, and the formula is similar to the formula for $\sum_{n=1}^m n^k$, and involves the Bernoulli numbers. In particular, the denominator is $(1-r)^{k+1}$.
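
A numerical check of the original series (my own sketch):

s = sum(2 * n / 3 ** (n + 1) for n in range(1, 200))
print(s)                               # 0.49999... ~ 1/2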


Tuesday 20 November 2018

integration - Integrating $\int_0^{\frac{\pi}{2}} \frac{x}{\cos(x)+\sin(x)} \mathrm dx$ using complex analysis

I am struggling with the integration of $$ \text{I} = \int_0^{\frac{\pi}{2}} \frac{x}{\cos(x)+\sin(x)} \mathrm dx$$ using complex analysis. I used the substitution $z=e^{ix} \rightarrow \mathrm dx = \frac{1}{iz}\mathrm dz$ and hence $\cos(x)+\sin(x)= \frac{1}{2}\left((z+\frac{1}{z})-i (z-\frac{1}{z})\right)$ because we have $ \cos(x)=\frac{1}{2}(e^{ix}+ e^{-ix})$, and $ \sin(x)=\frac{1}{2i}(e^{ix} -e^{-ix})$, and $x= -i \ln(z)$. However I am not successful in obtaining the correct result found using real analysis, which gives $$\text{I}= \displaystyle\int_{0}^{\frac{\pi}{2}}\dfrac{x}{\sin\,x + \cos\,x}\,dx = 0.978959918 $$



Any idea how to calculate this integral using complex analysis and Cauchy integral theorem?

elementary number theory - ${\rm gcd}(x,y)$ dividing linear combination of $x$, $y$



All variables are integers:



conclude that if, in addition, $ad − bc = \pm 1$, then
$\gcd(x, y) = \gcd(ax + by, cx + dy)$.
The fact that $\gcd(x, y) = \gcd(x + ky, y)$ used for the Euclidean Algorithm is a special case of this exercise.




I was also told to consider the idea of a matrix $\begin{pmatrix} a & b\\ c & d\end{pmatrix}$ as well as its inverse.



I understand that $ad-bc$ is the determinant and that the determinant of the inverse is still equal to $ad-bc$ as well but I'm not able to put the pieces together about how this helps me to show the two gcds are equal.


Answer



Let $A := \begin{pmatrix}a & b\\c & d\end{pmatrix}$. The condition that $ad-bc = \pm 1$ means that $A^{-1}$ also has integer coefficients. If you have integers $u,v$ with $ux+vy = {\rm gcd}(x,y)=: z$, then letting $B:= (u,v)$, we have $B\cdot (x, y)^T = z$. But then also
$$
z = BA^{-1} A\cdot \begin{pmatrix} x \\ y \end{pmatrix} = BA^{-1}\cdot \begin{pmatrix} ax + by\\ cx + dy\end{pmatrix},
$$
so, if $BA^{-1}=: (n,m)$, then $n\cdot (ax+by) + m\cdot (cx+dy) = z$, which shows that ${\rm gcd}(ax+by, cx+dy)$ divides $z = {\rm gcd}(x,y)$.
Interchanging the roles of $\begin{pmatrix} x\\ y\end{pmatrix}$ and $\begin{pmatrix} ax+by\\ cx+dy\end{pmatrix}$ as well as of $A$ and $A^{-1}$ shows that ${\rm gcd}(x,y)$ divides ${\rm gcd}(ax+by, cx+dy)$, which shows that both are equal.


limits - Find $\lim_{n \to \infty} \sum_{k=0}^{n} \frac{e^{-n}n^k}{k!}$





We need to find out the limit of,



$\lim_{n \to \infty} \sum_{k=0}^{n} \frac{e^{-n}n^k}{k!}$




One can see that $\frac{e^{-n}n^k}{k!}$ is the pmf of a Poisson distribution with parameter $n$, so the sum is its cdf evaluated at $n$.



Please give some hints on how to find out the limit.


Answer



It's a good start to try to solve it in a probabilistic way: notice that the Poisson random variable has the reproducibility property, that is, if $X_{k} \sim \text{Poisson}(1)$, $k = 1, 2, \ldots, n$ independently, then
$$S_n = \sum_{k = 1}^n X_{k} \sim \text{Poisson}(n),$$
whose distribution function $F_{S_n}$ satisfies:
$$F_{S_n}(n) = P[S_n \leq n] = \sum_{k = 0}^n e^{-n} \frac{n^k}{k!},$$
which is exactly the expression of interest. Hence this suggests linking this problem to central limit theorem.




By the classic CLT, we have
$$\frac{S_n - n}{\sqrt{n}} \Rightarrow \mathcal{N}(0, 1).$$
Hence
$$P[S_n \leq n] = P\left[\frac{S_n - n}{\sqrt{n}} \leq 0\right] \to P[Z \leq 0] = \frac{1}{2}$$
as $n \to \infty$.
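
One can watch the convergence numerically (my own sketch; the terms are evaluated in log space via math.lgamma to avoid overflow for large $n$):

import math

for n in [10, 100, 1000]:
    s = sum(math.exp(k * math.log(n) - n - math.lgamma(k + 1)) for k in range(n + 1))
    print(n, s)                        # 0.583..., 0.526..., 0.508... -> 0.5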


Monday 19 November 2018

sequences and series - Prove that $x\mapsto \sum^{\infty}_{n=0}f_n(x)=\sum^{\infty}_{n=0}\frac{x^2}{(1+x^2)^n}$ does not converge uniformly on $[-1,1]$



For each $n,$ we define \begin{align} f_n:\Bbb{R}\to \Bbb{R} \end{align}
\begin{align} x\mapsto f_n(x)=\frac{x^2}{(1+x^2)^n}\end{align}
We consider the function
\begin{align} f:\Bbb{R}\to \Bbb{R} \end{align}

\begin{align} x\mapsto f(x)=\sum^{\infty}_{n=0}f_n(x)=\sum^{\infty}_{n=0}\frac{x^2}{(1+x^2)^n}\end{align}



I want to show that the series does not converge uniformly on $[-1,1]$ but I'm finding it difficult to do that.



First of all, I considered the Weierstrass M test.



MY TRIAL



\begin{align}\left|f_n(x)\right|=\left|\frac{x^2}{(1+x^2)^n}\right|\leq \frac{1}{(1+x^2)^n} ,\;\;\forall \;x\in[-1,1],\;\forall\;n\in\Bbb{N}\end{align}




I'm also thinking that the $\beta_n=\sup\limits_{x\in[-1,1]}|\sum^{n}_{i=0}f_i(x)-\sum^{\infty}_{i=0}f_i(x)|$ approach could be very helpful too!


Answer



The convergence is not uniform.



For $x \ne 0$ this is a geometric series:



$$\sum^{\infty}_{n=0}\frac{x^2}{(1+x^2)^n} = x^2\cdot \frac{1}{1-\frac1{1+x^2}}= x^2+1$$



and for $x = 0$ the sum is $0$.




Therefore



$$f(x) = \begin{cases}
x^2+1, &\text{ if } x \ne 0\\
0, &\text{ if }x = 0
\end{cases}$$



Since $f$ is not continuous, the convergence cannot be uniform.
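
Numerically (my own sketch): for $x\neq 0$ the tail after $N$ terms equals $(1+x^2)^{-N}$, which stays near $1$ for $x$ close to $0$ no matter how large $N$ is, so the sup of the remainder never shrinks:

N = 1000
x = 1e-4                               # close to 0 but nonzero
partial = sum(x**2 / (1 + x**2) ** n for n in range(N + 1))
f = x**2 + 1                           # the pointwise limit for x != 0
print(f - partial)                     # ~ 0.99999: the tail is still ~ 1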


integration - Where is the mistake in proving 1+2+3+4+… = -1/12?

https://www.youtube.com/watch?v=w-I6XTVZXww#t=30




As I watched the video on YouTube of proving sum of $$1+2+3+4+\cdots= \frac{-1}{12}$$



Even though we know that the series does not converge.



First, I still can't see what is wrong with the infinite sum $$1-1+1-1+1+\cdots= \frac{1}{2}$$



and I want to know more about of what mistake did they make in the video.



I did research on our forum about this topic, but I still don't clearly understand that proof.




Thank you.
(Sorry about my bad English )

elementary number theory - $\gcd(b^x - 1, b^y - 1, b^z - 1,\ldots) = b^{\gcd(x, y, z,\ldots)} - 1$







Dear friends,




Since $b$, $x$, $y$, $z$, $\ldots$ are integers greater than 1, how can we prove that
$$
\gcd (b ^ x - 1, b ^ y - 1, b ^ z - 1 ,\ldots)= b ^ {\gcd (x, y, z, \ldots)} - 1
$$
?



Thank you!



Paulo Argolo
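
A numerical illustration of the identity (my own addition, not part of the original post):

from math import gcd

b, x, y, z = 2, 12, 18, 30
lhs = gcd(gcd(b**x - 1, b**y - 1), b**z - 1)
rhs = b ** gcd(gcd(x, y), z) - 1       # gcd(12,18,30) = 6, so 2^6 - 1 = 63
print(lhs, rhs)                        # 63 63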

Sunday 18 November 2018

calculus - General Cesaro summation with weight




Assume that $a_n\to \ell $ is a convergent sequence of complex numbers and $\{\lambda_n\}$ is a sequence of positive real numbers such that $\sum\limits_{k=0}^{\infty}\lambda_k = \infty$





Then, show that,
$$\lim_{n\to\infty} \frac{1}{\sum\limits_{k=0}^{n}\lambda_k} \sum\limits_{k=0}^{n}\lambda_k a_k=\ell =\lim_{n\to\infty} a_n$$



(Note: this is more general than the special case where $\lambda_n = 1$.)



Answer



Let $\varepsilon >0$ and let $N$ be such that $|a_k-\ell|\le \varepsilon$ for all $k>N$.
Then, for $n>N$ we have,

\begin{align*}
\left| \frac{\sum\limits_{k=0}^{n}\lambda_k a_k}{\sum\limits_{k=0}^{n}\lambda_k} -\ell\right|
&= \left| \frac{\sum\limits_{k=0}^{n}\lambda_k (a_k - \ell)}{\sum\limits_{k=0}^{n}\lambda_k} \right|\\
&= \left| \frac{\sum\limits_{k=0}^{N}\lambda_k (a_k - \ell)+\sum\limits_{k=N+1}^{n}\lambda_k (a_k - \ell)}{\sum\limits_{k=0}^{n}\lambda_k} \right|\\
&\le \frac{M}{\sum\limits_{k=0}^{n}\lambda_k} + \frac{\sum\limits_{k=N+1}^{n}\lambda_k \underbrace{\left| a_k - \ell\right|}_{\le\varepsilon}}{\sum\limits_{k=0}^{n}\lambda_k} \\
&\le \frac{M}{\sum\limits_{k=0}^{n}\lambda_k} + \varepsilon,
\end{align*}

where $M= \left|\sum\limits_{k=0}^{N}\lambda_k( a_k-\ell)\right|$. The first term tends to $0$ as $n\to\infty$, since $\sum\limits_{k=0}^{n}\lambda_k\to \infty$.
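
A quick numerical illustration of the weighted average (my own sketch, with hypothetical weights $\lambda_k = k+1$ and $a_k = 1 + \frac{1}{k+1} \to 1$):

n = 10**6
lam = [k + 1 for k in range(n)]        # positive weights with divergent sum
a = [1 + 1 / (k + 1) for k in range(n)]
print(sum(l * v for l, v in zip(lam, a)) / sum(lam))   # ~ 1.000002 -> 1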


Approaching modular arithmetic problems

I'm a little stumbled on two questions.



How do I approach a problem like $41x \equiv 1 \pmod{99}$?



And given two congruences, $7x+9y \equiv 0 \pmod{31}$ and $2x-5y \equiv 2 \pmod{31}$, how do I solve for $x$ only?



When I solve for $x$ for the latter, I got a fraction as the answer and I'm not sure if I can have a fraction as an answer? I'm not sure how to approach the first problem either.
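
For the first congruence, a hedged illustration (my addition): the answer is the modular inverse of 41 mod 99, which Python 3.8+ computes with pow using a negative exponent (no fractions arise; division is replaced by multiplication with an inverse):

x = pow(41, -1, 99)                    # modular inverse, Python 3.8+
print(x, (41 * x) % 99)                # 29 1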

analysis - Let $f$ be continuous real-valued function on $[0,1]$. Then, $F(x)=max{f(t):0leq tleq x}$ is continuous



Let $f$ be continuous real-valued function on $[0,1]$ and



\begin{align} F(x)=\max\{f(t):0\leq t\leq x\}. \end{align}
I want to show that $F(x)$ is also continuous on $[0,1]$.




MY WORK



Let $\epsilon> 0$ be given and $x_0\in [0,1].$ Since $f$ is continuous at $x_0\in [0,1]$, there exists $\delta>0$ such that $\forall x\in [0,1]$ with $|x-x_0|<\delta,$ it implies $|f(x)-f(x_0)|<\epsilon.$



Also, \begin{align} |f(t)-f(x_0)|<\epsilon, \text{whenever}\; |t-x_0|<\delta,\;\forall\; t\in[0,x]\end{align}



Taking max over $t\in[0,x]$, we have
\begin{align} \max|f(t)-f(x_0)|<\epsilon, \text{whenever}\; |t-x_0|<\delta,\;\forall\; t\in[0,x]\end{align}
\begin{align} |\max f(t)-\max f(x_0)|<\epsilon, \text{whenever}\; \max|t-x_0|<\delta,\;\forall\; t\in[0,x]\end{align}
\begin{align} |F(x)-f(x_0)|<\epsilon, \text{whenever}\; |x-x_0|<\delta\end{align}

which implies that $F(x)$ is continuous on $[0,1].$



I am very skeptical about this proof of mine. Please, is this proof correct? If no, a better proof is desired. Thanks!


Answer



Your proof is wrong because you cannot take the maximum over $t \in [0,x]$ in an inequality which is valid only for $|t-x_0| <\delta$. Here are some hints for a correct proof. Verify that $|F(x)-F(y)| \leq \max \{|f(t)-f(s)|:x\leq t \leq y,\ x\leq s \leq y\}$ for $x \leq y$.

Why do we take the axiom of induction for natural numbers (Peano arithmetic)?

More precisely, when we define the set of natural numbers $\mathbb{N}$ using the Peano axioms, we assume the following:




  1. $0\in\mathbb{N}$

  2. $\forall n\in\mathbb{N} (S(n)\in\mathbb{N})$

  3. $\forall n\in\mathbb{N}(0\neq S(n))$


  4. $\forall m,n (m\neq n\to S(m)\neq S(n))$

  5. If $P(n)$ denotes the fact that $n$ has property $P$, then $\Big(P(0)\wedge \forall n\in\mathbb{N}\big(P(n)\to P(S(n))\big)\Big)\implies \forall n\in \mathbb{N} (P(n))$



I understand that using these axioms we can derive everything about the natural numbers, but I also think it's helpful to know why the axioms were chosen the way they are. So my question is why we choose to accept the axiom of induction ((5.) above), which in a way makes this more of a metamathematical question.



For example in Tao's Analysis I, it says that the axiom of induction keeps unwanted elements (such as half-integers) from entering the set.



Wikipedia says, "Axioms [1], [2], [3] and [4] imply that the set of natural numbers is infinite, because it contains at least the infinite subset $\{ 0, S(0), S(S(0)), \ldots \}$, each element of which differs from the rest. To show that every natural number is included in this set requires an additional axiom, which is sometimes called the axiom of induction. This axiom provides a method for reasoning about the set of all natural numbers."---But I find this tautological: $\mathbb{N}$ is defined as the set of natural numbers so "$n$ is a natural number" means "$n\in\mathbb{N}$", right? So isn't every natural number included in $\mathbb{N}$ by definition?




Suppose we want to show $\mathbb{N}=\{0,1,2,3,\ldots\}$ using all five of the Peano axioms.



If we let $P(n)$ denote $n\in\{0,1,2,3,\ldots\}$, then $P(0)$ is true. Suppose $n$ is in $\{0,1,2,3,\ldots\}$. Then (informally) the dots indicate that $S(n)$ is in $\{0,1,2,3,\ldots\}$. So $\mathbb{N}\subseteq\{0,1,2,3,\ldots\}$, i.e., our defined set contains no "extra" elements (as in Tao's Analysis I).



Yet I still do not see how to show $\{0,1,2,3,\ldots\}\subseteq\mathbb{N}$ (in order to complete the "proof" that $\mathbb{N}=\{0,1,2,3,\ldots\}$) without just assuming it. (I think this is what the Wikipedia article was doing(?))



Thanks in advance for any help and I apologize if this kind of question is unsuitable for this site.

Friday 16 November 2018

calculus - Testing Correctness of Divergence Theorem




Let $E$ denote the ellipse
$$E=\{(x, y):\frac{x^2}{a^2}+\frac{y^2}{b^2}=1\}$$
The question is, by direct computation, showing that
$$\iint_{\text{inside } E} \operatorname{div} X \, dA = \int_E X \cdot n \, dl $$
where $l$ denotes length, and $n$ denotes the outward pointing normal vector
at point $(x, y)$ on the ellipse, and $X=(y, x)$. First of all,
since $\operatorname{div} X=0$, the integral should be zero, but I'm not sure how to compute
$\int_E X \cdot n \, dl $ in this case. In particular, what should the expression for $n$ be?
For the ones involving three coordinates, $(x, y, z)$, I heard that you use cross product, but
what should be done for this case? Any help would be greatly appreciated. Once I figure out how to compute $n$ for this kind of cases, I think I will be set.



Answer



The integral is a line integral that can be evaluated by parameterization. The usual choice on an ellipse is
$$
\mathbf r(t) = \langle x(t),y(t) \rangle = \langle a \cos t, b \sin t \rangle, \quad 0 \le t \le 2\pi, \quad a,b > 0.
$$
The tangent vector is $$\mathbf r'(t) = \langle -a \sin t, b \cos t \rangle,$$
and the arclength element is
$$
dl = |\mathbf r'(t)| dt = \sqrt{a^2 \sin^2 t + b^2 \cos^2 t} \, dt.$$

There are various ways to find an exterior normal vector, but in this case note that $\langle b \cos t, a \sin t \rangle$ is orthogonal to $\mathbf r'(t)$ and points away from the origin. The unit exterior normal is thus
$$ \mathbf n(t) = \frac{1}{\sqrt{a^2 \sin^2 t + b^2 \cos^2 t}} \langle b \cos t, a \sin t \rangle.$$

Your vector field is
$$\mathbf X(t) = \langle y(t),x(t) \rangle = \langle b \sin t, a \cos t \rangle$$
so that
$$ \mathbf X(t) \cdot \mathbf n(t) = \frac{(a^2 + b^2) \sin t \cos t}{\sqrt{a^2 \sin^2 t + b^2 \cos^2 t}}.$$

Put it all together to get
$$ \oint_E \mathbf X \cdot \mathbf n \, dl = \int_0^{2\pi} \mathbf X(t) \cdot \mathbf n(t) |\mathbf r'(t)| \, dt = (a^2 + b^2) \int_0^{2\pi} \sin t \cos t \, dt.$$
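
A numerical check that the flux really vanishes (my own sketch, using a midpoint rule on the parameterization above):

import math

a, b = 3.0, 2.0                        # arbitrary semi-axes
N = 100000
h = 2 * math.pi / N
total = 0.0
for i in range(N):                     # integrand after cancellation
    t = (i + 0.5) * h
    total += (a**2 + b**2) * math.sin(t) * math.cos(t) * h
print(total)                           # ~ 0, matching div X = 0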


integration - How to show $\int_{0}^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}$




I am trying to show $\displaystyle{\int_{0}^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}}.$




Any help?
(I am having troubles using the half circle infinite contour)



Or more specifically, what is the residue $\text{res} \left(\frac{1}{z^3+1},z_0=e^\frac{\pi i}{3} \right )$



Thanks!


Answer



There are already two answers showing how to find the integral using just calculus. It can also be done by the Residue Theorem:




It sounds like you're trying to apply RT to the closed curve defined by a straight line from $0$ to $A$ followed by a circular arc from $A$ back to $0$. That's not going to work, because there's no reason the integral over the semicircle should tend to $0$ as $A\to\infty$.



How would you use RT to find $\int_0^\infty dt/(1+t^2)$? You'd start by noting that $$\int_0^\infty\frac{dt}{1+t^2}=\frac12\int_{-\infty}^\infty\frac{dt}{1+t^2},$$and apply RT to the second integral.



You can't do exactly that here, because the function $1/(1+t^3)$ is not even. But there's an analogous trick available.



Hint: Let $$f(z)=\frac1{1+z^3}.$$If $\omega=e^{2\pi i/3}$ then $$f(\omega z)=f(z).$$(Now you're going to apply RT to the boundary of a certain sector of opening $2\pi/3$... be careful about the "$dz$"...)
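
(A numerical cross-check of the target value, my own addition: a midpoint rule with the tail truncated at $x=1000$.)

import math

N = 10**6
h = 1000.0 / N
s = sum(h / (((i + 0.5) * h) ** 3 + 1) for i in range(N))
print(s, 2 * math.pi / (3 * math.sqrt(3)))   # both ~ 1.2092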


calculus - why do I have to double an exponential growth and half an exponential decay function to find out how fast/slow the process is?



Let’s take the exponential growth function to be $ y= A_0e^{kt}$ and exponential decay function to be $ y= A_0e^{-kt} $ .




I'm just told that to find out how fast the process is, you find the doubling time of the exponential growth function and the half-life of the exponential decay function. Why is this so? I never seem to understand it. Is there a way to prove it? Or is my tutor right by just telling me the fact and nothing else?


Answer



Let's take the exponential growth function $f(t)=A_0e^{kt}$. It is simple to find $A_0$ - this is just the value of $f$ at time $0$. But how can we find the value of $k$ ?



Suppose $f(t)$ has value $y_0$ at time $t_0$ and value $2y_0$ at time $t_1$. Then



$2y_0=2A_0e^{kt_0}=A_0e^{kt_1}$



$\Rightarrow 2e^{kt_0}=e^{kt_1}$




$\Rightarrow \ln(2) + kt_0=kt_1$



$\Rightarrow k=\frac{\ln(2)}{t_1-t_0}$



The time interval $t_1-t_0$ during which $f(t)$ doubles is called the "doubling time". Notice that it does not depend on when it is measured. Once you know the doubling time you can find the growth constant $k$.



You can do the same calculation with exponential decay except you measure the time taken for $f(t)$ to halve, not double, which is called the "half life".
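
A tiny numerical illustration (my own sketch, with an arbitrary growth constant):

import math

k = 0.23                               # arbitrary growth constant
t_double = math.log(2) / k             # doubling time ~ 3.014
A0 = 5.0
f = lambda t: A0 * math.exp(k * t)
print(f(t_double) / f(0))              # 2.0
print(f(10 + t_double) / f(10))        # 2.0: independent of when we measure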


Thursday 15 November 2018

sequences and series - comparison or limit comparison test..?

I need to determine whether the series converges or diverges using either the comparison test or the limit comparison test. Given the $n$th power, I was thinking I would use the limit comparison test, but that left me with the $\sin(n)$ still afterwards. Is that OK? What should I do next?



$$\sum_{n=1}^\infty \frac{7\cdot 3^{n+1}(3+\sin(n))}{5^n} $$

Sum of series $\frac{4}{10}+\frac{4\cdot7}{10\cdot20}+\frac{4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots$

What is the sum of the series



$$\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots?$$



I know how to check if a series is convergent or not. Is there any technique to find out the sum of a series like this, where each term increases by a pattern?

Wednesday 14 November 2018

limits - How to deal with alternating series when the index $\to +\infty$ and $\to -\infty$?



This question stems from some exercises in summing the alternating series of iterations of (simple) functions. I get some sort of paradox...



Assume $f(x) = x^2 - 0.5$ which has the inverse $ g(x) = \sqrt{0.5+x} $. Let's write the iterates using $h$ as "iteration-height" as second parameter:
$ f(x,h+1)= f(f(x,h)) \qquad f(x,0)=x $ and $ \qquad g(x,h)$ analoguously.




The iterations, if started at $x_0=1$, have two fixpoints: $ x_\omega=\lim_{h\to\infty} f(x_0,h)={1-\sqrt 3 \over 2} \approx -0.366 $ and $ x_{-\omega} = \lim_{h\to\infty} f(x_0,-h) = \lim_{h\to\infty} g(x_0,h) ={1+\sqrt 3 \over 2} \approx 1.366 $.



Let's denote the iterates from $x_0$ by the index $x_1 = f(x_0,1), x_2=f(x_0,2),\ldots,x_\omega $ and $x_{-1} = f(x_0,-1), x_{-2}=f(x_0,-2),\ldots, x_{-\omega} $



Now I compute the alternating sums:
\begin{align}
\operatorname{sp}(x_0)&= \sum_{k=0}^\infty (-1)^k x_k \\\
\operatorname{sn}(x_0)&= \sum_{k=0}^\infty (-1)^k x_{-k} \\\
y(x_0) &= \operatorname{sp}(x_0) + \operatorname{sn}(x_0) - x_0 \\\

&= x_\omega \cdots +x_2 -x_1 + x_0 - x_{-1}+ x_{-2} - \cdots + \cdots x_{-\omega}
\end{align}



Because the iterations converge to the fixpoint and the sum is alternating each series can be Abel- or Euler summed and I can simply use the sumalt-function in Pari/GP to get numerical values, for instance,



\begin{align}
\operatorname{sp}(1)&= \sum_{k=0}^\infty (-1)^k f(1,k) &\approx 0.7098 \\\
\operatorname{sn}(1)&= \sum_{k=0}^\infty (-1)^k f(1,-k)&\approx 0.41976 \\\
y(1) &= \operatorname{sp}(1) + \operatorname{sn}(1) - 1 &\approx\ 0.1296 \\\
\end{align}




Intuitively I'd assume, that taking $x_{-4},x_{-2},x_0,x_2,x_4,\ldots$ as central values for the two alternating sums (in steps of two iterations) the results $y(x_{0+2k})$ should be identical for integer $k$ - but apparently this is not true: Pari/GP as well as some checks with Euler-summation give an inconclusive empirical result. Here is a table of some alternating sums centered at the iterates from $x_0=1$
$ \qquad \qquad \small
\begin{array} {rrrr}
x & \operatorname{sp}(x) & \operatorname{sn}(x) & y(x) \\
1 & 0.709801988103 & 0.419756033790 & 0.129558021893 \\
0.500000000000 & 0.290198011897 & 0.0802439662097 & -0.129558021893 \\
-0.250000000000 & 0.209801988103 & -0.330243966210 & 0.129558021893 \\
-0.437500000000 & -0.459801988103 & -0.361385096486 & -0.383687084589 \\
-0.308593750000 & 0.0223019881028 & -0.348680460029 & -0.0177847219265 \\
-0.404769897461 & -0.330895738103 & -0.364204951981 & -0.290330792623 \\

-0.336161330109 & -0.0738741593581 & -0.355478984338 & -0.0931918135864 \\
-0.386995560139 & -0.262287170751 & -0.363410316203 & -0.238701926815 \\
-0.350234436433 & -0.124708389388 & -0.358352789732 & -0.132826742687 \\
-0.377335839537 & -0.225526047045 & -0.362477669053 & -0.210667876562 \\
\vdots & \vdots & \vdots & \vdots \\
-0.366025403784 & -0.183012701892 & -0.361005561798 & -0.177992859906 \\
-0.366025403784 & -0.183012701892 & -0.361005561798 & -0.177992859906 \\
-0.366025403784 & -0.183012701892 & -0.361005561798 & -0.177992859906 \\
-0.366025403784 & -0.183012701892 & -0.361005561798 & -0.177992859906
\end{array}

$



This confuses me much: I think, that the summation is valid, but I also think, that the two-way-infinite sum should be invariant to the centering in steps of two iterations.



Where do I go wrong here?



[Update] I've seen one source of the problem: it is the two-valuedness of the inverse of the $f(x)$-function (the taking of the square root). From this follows the infinite multifoldness of the trajectory if going with negative height. As simple as it is: on the trajectory is $x_0=1, x_1=0.5, x_2=-0.25, x_3=-0.4375,\ldots$ but if we go back from $x_3$ to $x_2$ by $x_2 = f(x_3,-1)= g(x_3) $ we need a correction of sign after taking the square root in $g(x)$. This inherits to the full trajectory as far as it is in the negative domain.
There is some more interesting stuff following from this - indeterminacies, or better: multivaluedness of the iteration height of the values on the curve $y=x^2-0.5=f(x)$ near the negative fixpoint - we have a continuous set mapped to itself (with contraction), which may have more consequences also in general, which I cannot yet fully recognize.

Hmm, while this solves at least the "practical" or "obvious" problem before the eyes here, I'm still scared/insecure about the more general problem of summing series if they are assumed with infinite index at both ends (even if things are generally easier if the series have alternating signs). But this question may then be too broad/unspecified for math.SE so maybe I'll later formally "accept" some null-answer or I'll retract the whole question.



[end update]



For convenience, here is Pari/GP-code to reproduce the behaviour:

\\ f can be positively or negatively be iterated:
f(x,h=0) = for(k=1,h,x=x^2-0.5); for(k=1,-h,x=sqrt(0.5+x)); x

\\ helper functions for sumalt. sumalt asks for consecutive iterations
\\ so we do not need to compute the full iteration at each function call
\\ instead we use a global variable and do only one step of iteration
fsa(k,x)=if(k==0, gl_f=x, gl_f = gl_f^2 - 0.5 ); return(gl_f);

gsa(k,x)=if(k==0, gl_f=x, gl_f = sqrt(gl_f + 0.5) ); return(gl_f);

fmt(400)
\\ check one solution using central value x0=1.0
x=1.0
sp=sumalt(k=0,(-1)^k*fsa(k,x))
sn=sumalt(k=0,(-1)^k*gsa(k,x))
sp+sn-x

\\ check for sequence of solutions for central values in steps of 1 iteration

vsumaltp=vectorv(n,r,[x=f(1,r-1),sp=sumalt(k=0,(-1)^k*fsa(k,x)),sn=sumalt(k=0,(-1)^k*gsa(k,x)),sp+sn-x])
vsumaltn=vectorv(n,r,[x=f(1,1-r),sp=sumalt(k=0,(-1)^k*fsa(k,x)),sn=sumalt(k=0,(-1)^k*gsa(k,x)),sp+sn-x])
Mat(vsumaltp) \\ list as documented , central values towards f(x,+inf)
Mat(vsumaltn) \\ central values towards f(x,-inf)

Answer



(to make a formally "acceptable" answer I restate the diagnosis of the problem here in a shorter manner)



I found the -nearly trivial- source of the problem; however no solution seems to be possible.




This is simply due to the fact, that $ \small f(x) $ is not bijective in the full range between the two fixed points. So for a subrange of $\small x$ it is $ \small f(f(x,h),-h) \ne x $ for some h and the shift of the initial point in the different $\small fsa$-calls (see the Pari/GP-code) introduces errors by unresolved ambiguity.



An example, where this does not occur and the function is in fact bijective in the range between the two fixpoints is $ \small f(x) = 1/16 x^2 + x - 1 $; here the described problem does not occur.



This problem cannot be resolved in generality. But it calls for deeper consideration: in an analogue manner the doubly infinite series of iterates of the exponential (which is my longer term main topic) suffers this ambiguity (multivaluedness of the log), even if the base-variable b for exponentiation is in the range of convergence of the infinite powertower.


elementary matrices and row operations



I am studying for an exam tomorrow and this is one of the problems given. The instructor gave the solution but I do not understand how he found the solution. The question is "write down the elementary matrices E1, E2 that correspond to the row operations you performed in part (a) in the order you performed them. Are these matrices 3x3 or 5x5?" In that answer, he says that they are 3x3. The first row of the matrix E1 is 1 0 0. The second row is 1 1 0. The third row is 0 0 1. The first row of the matrix E2 is 1 0 0. The second row is 0 1 0. The third row is 0 -1 1. How did he get this solution? The first row operation from part a was R1 + R2. The second row operation from part a was -R2 + R3. I do not understand the concept of an elementary matrix.


Answer



If I decode your question correctly, you say that you're told that the correct answer is
$$E_1=\pmatrix{1&0&0\\1&1&0\\0&0&1},\quad E_2=\pmatrix{1&0&0\\0&1&0\\0&-1&1}$$

But you don't tell us the complete story of what this is the complete answer to, right?



I imagine that you must have had a $3\times 5$ matrix such as
$$A=\pmatrix{1&2&0&4&5\\-1&-2&4&1&1\\0&0&4&8&16}$$
After the first row operation you had another matrix
$$B=\pmatrix{1&2&0&4&5\\0&0&4&5&6\\0&0&4&8&16}$$



You describe this operation as





The first row operation from part a was $R_1 + R_2$.




but that is not a complete description of a row operation; it describes how to make a new row (namely, take the sum of the first and second rows), but not what you do with that new row. You should have said




The first row operation was to replace $R_2$ with $R_1+R_2$.




The way the elementary matrix works is that it encodes your row operation as a matrix multiplication from the left:

$$E_1 A = B$$
(Write this out and compute the entries of the product matrix to see how it works!)



The reason it works is that the rows of the elementary matrix are all equal to the corresponding rows of the 3×3 identity matrix, except for the second row, which is the one your row operation modifies. Thus the other rows will be left unchanged by the operation. The second row of $E_1$ is $(1\;1\;0)$, which has ones in the first two positions and encodes how you make the new second row, namely as $R_1+R_2$, or a bit more verbosely, $1\cdot R_1+1\cdot R_2+0\cdot R_3$. Note how the coefficients here match the row from the elementary matrix exactly.



In the second row operation the row of $E_2$ that corresponds to the "new" row is $(0\;-1\;1)$, which means that we're replacing $R_3$ with $0\cdot R_1+(-1)\cdot R_2+1\cdot R_3$, which is the same as $R_3-R_2$. So we get
$$\pmatrix{1&2&0&4&5\\0&0&4&5&6\\0&0&0&3&10} = E_2B = E_2 E_1 A$$
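
A quick check with NumPy (my own addition; the matrix $A$ is the one I guessed above):

import numpy as np

E1 = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1]])
E2 = np.array([[1, 0, 0], [0, 1, 0], [0, -1, 1]])
A = np.array([[1, 2, 0, 4, 5], [-1, -2, 4, 1, 1], [0, 0, 4, 8, 16]])
print(E1 @ A)                          # row 2 replaced by row 1 + row 2
print(E2 @ E1 @ A)                     # then row 3 replaced by row 3 - row 2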


discrete mathematics - Using the Cantor Schroeder Bernstein Theorem



I am asked to show that if $a < b$, then


(i) $(a,b)$ ~ $(0,1)$



(ii) $[a,b]$ ~ $(0,1)$



using the Cantor Schroeder Bernstein Theorem.



Now, I know that $(0,1)$ ~ $\mathbb{R}$, so I think I would need to use this somehow to create my injections. Also, I am thinking I might need to use decimal expansions to define my functions, but I am not too sure how to handle the fact that for (i) $a,b$ are not included but for (ii) they are. Would it be less complicated to just give an actual function rather than go into decimal expansions?



Any suggestions on how to go about finding the injections for (i) and (ii)? Also, I would think we need to use that assumption that the decimal does not end in infinitely many $9$'s.


Answer




For the first one, you can give the bijection directly: $$f(x) = \frac{x-a}{b-a}$$



For the second one, use the inverse of the above bijection as an injection from $(0, 1)\to (a,b)\subset [a,b]$. For the reverse injection, define $g: [a, b]\to [0.1, 0.9]\subset(0,1)$.



Edit 1: For part one, this is how I find the bijection. Every open finite interval can be mapped bijectively to another open finite interval with a linear function. This is also true for closed finite intervals. All you need to do is to find an equation for a line that maps the endpoints of one interval to the other. In my case I wanted a line $f(x) = \alpha\, x + \beta$ such that $f(a) = 0$ and $f(b) = 1$. You have two equations and two unknowns:
$$
\begin{cases}
\alpha\, a +\beta = 0\\
\alpha\, b +\beta = 1
\end{cases}

$$
Solving for $\alpha$ and $\beta$, we have $\alpha = 1/(b-a)$ and $\beta = -a/(b-a)$. This gives $f(x) = (x-a)/(b-a)$.



Of course, I didn't do any of this when I wrote the answer. I just said I want $f(x)$ that maps $a$ to $0$, therefore, I'm going to have $x-a$ multiplies by something, and that something should cancel out the value of $x-a$ at $x=b$ so that it maps $b$ to $1$. So it should be $1/(b-a)$.



Edit 2: For part 2, we are using Schröder–Bernstein theorem, which by the way I did not know it had a name until today. It states that to prove that there exists a bijection, you don't need to find the bijection itself. All you need is to show that there is an injection from the first set into the second, and another from the second set into the first.



Note that for an injection, you do not need to cover the whole interval in the second set. You just need a one to one map from the first set with an image that is a subset of the second. And that is why you do not have to worry about the endpoints.



Injection from $(0,1)$ into $[a,b]$: If you find an injection from $(0,1)$ into $(a,b)$, it is also an injection from $(0,1)$ into $[a,b]$ because an injection does not need to be onto. That is, it does not need to cover the whole interval $[a,b]$.




Injection from $[a,b]$ into $(0,1)$: If you find an injection from $[a,b]$ into $[0.1, 0.9]$ that is also an injection from $[a,b]$ to $(0,1)$ because, again, you don't need to cover the whole interval $(0,1)$ for an injection. I will leave it to you to find the second injection.


Tuesday 13 November 2018

trigonometry - How to prove this trigonometric identity of sine of n angles as sum?

How can we prove
$$\sin (A_1 + A_2 + \cdots + A_n) = \cos A_1 \cos A_2 \cdots \cos A_n \left[\, S_1 - S_3 + S_5 - \cdots \right]$$
where $S_k$ denotes the sum of the tangents of the angles taken $k$ at a time?
I tried proving it but failed. I can derive it easily for $n = 2$ and $3$ but not for the general case. Wikipedia has the same kind of formula for the tangent but it is not derived.
https://en.m.wikipedia.org/wiki/List_of_trigonometric_identities
Please give a very simple detailed proof.

Monday 12 November 2018

calculus - How to evaluate $\int_{0}^{1} \frac{\ln x}{x+c}\, dx$



For $c=-1$ , it can be evaluated using the Taylor Series for $\ln x$ centered at $1$ to get $\zeta (2)$. For $c=1$ you enforce the substitution $x=1-u$ then use the Taylor series centered at $0$.





Can we generalize for $c \in \mathbb{R}$?




I've completed calc 1 and 2 but currently taking no math classes besides AP Statistics. I try to use this website to expand my math toolbox and learn tricks here and there. So can someone incorporate a technique that's not too far away for me to understand?



Thanks.



Idea for $c<0$:




Taylor series for $\ln x$ about $x=-c$ is:



$$\ln (x)= \ln (-c)+ \sum_{n=1}^{\infty} \frac{(n-1)!(-1)^{n+1}}{(-c)^n} \frac{(x+c)^n}{n!}$$



I'm good from there... divide by $(x+c)$ and integrate.. and I'm left with a pretty weird sum, which I see as acceptable. Here's what I've got from this method:
$$\ln(-c) \int_{0}^{1} \frac{dx}{x+c}-\sum_{n=1}^{\infty} \frac{(1+c)^n-c^n}{n^2c^n}$$



Which can be simplified if $c \notin (-1,0)$




Now all that is left is $c \geq 0$


Answer



The Dilogarithm function can be defined as $$-\int_{0}^{1}\frac{\log\left(1-xt\right)}{x}dx=\textrm{Li}_{2}\left(t\right)$$ so in your case, taking $t=-\frac{1}{c}$, $c\notin\left(-1,0\right]$, and integrating by parts, we have $$\textrm{Li}_{2}\left(-\frac{1}{c}\right)=-\int_{0}^{1}\frac{\log\left(1+x/c\right)}{x}dx$$ $$=-\left[\log\left(x\right)\log\left(1+x/c\right)\right]_{0}^{1}+\int_{0}^{1}\frac{\log\left(x\right)}{x+c}dx=\int_{0}^{1}\frac{\log\left(x\right)}{x+c}dx.$$
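
A numerical check with mpmath (my own sketch; mpmath provides both the quadrature and the polylogarithm):

from mpmath import mp, quad, polylog, log

mp.dps = 15
c = 2                                  # any c outside (-1, 0]
lhs = quad(lambda x: log(x) / (x + c), [0, 1])
print(lhs, polylog(2, -1 / c))         # both ~ -0.448414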


calculus - Proving $\sum_{k=1}^n{k^2}=\frac{n(n+1)(2n+1)}{6}$ without induction





I was looking at: $$\sum_{k=1}^n{k^2}=\frac{n(n+1)(2n+1)}{6}$$



It's pretty easy proving the above using induction, but I was wondering what is the actual way of getting this equation?


Answer



$$n^{3}-(n-1)^{3}=3n^{2}+3n+1$$

$$(n-1)^{3}-(n-2)^{3}=3(n-1)^{2}+3(n-1)+1$$
$$\vdots$$
$$2^{3}-1^{3}=3(1)^{2}+3(1)+1$$



Now use telescopic cancellation.



Here are some "proofs without words" (I find them more elegant):



Sum of squares




Sum of Squares(2)



Finally a more generalized form:$$1^{k}+2^{k}+\cdots+n^{k}=\sum\limits_{i=1}^{k}S(k,i)\binom{n+1}{i+1}i!$$
where $S(k,i)$ represents the Stirling number of the second kind.
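
A one-line check of the closed form (my own addition):

n = 100
print(sum(k * k for k in range(1, n + 1)))   # 338350
print(n * (n + 1) * (2 * n + 1) // 6)        # 338350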


summation - Induction proof of $sum_{i=0}^{n}(5i+3) = frac{n(5n+11)}{2}+3$ for every natural $n$




$$S_{n} = \sum_{i=0}^{n}(5i+3)$$



I received a homework problem that instructed me to use induction to prove that for all natural numbers n



$$S_{n} = \frac{n(5n+11)}{2}+3$$



First I proved that my base case of $S_{0}$ holds, because substituting $0$ for $n$ in both the top formula and the following formula makes both equal to $3$. The next step is to form my inductive hypothesis. My hypothesis is that



$$\sum_{i=0}^{n}(5i+3) = \frac{n(5n+11)}{2}+3$$ for all natural numbers $n$. Then I'm assuming that $$\sum_{i=0}^{k}(5i+3) = \frac{k(5k+11)}{2}+3$$ holds when $n$ = some arbitrary natural number $k$ (I've since been told not to do $n=k$ for some reason).




Next step is to prove that $S_{k+1}$ holds, because if it does, knowing that my base case holds will tell me that $S_{1}$ holds, telling me that $S_{2}$ holds, etc.



To prove this, I took the equation from my assumption and substituted $k+1$ for $k$. Evaluating the left hand side, $\frac{(k+1)(5(k+1)+11)}{2}+3$, eventually yielded $\frac{5k^2+21k+22}{2}$, and solving the right hand side, $\sum_{i=0}^{k+1}(5i+3)$, using Gauss's(?) sum and splitting the terms of the sum (I don't know what to call it) led to the same result. Since both sides of the equation reduced to the same expression, I reasoned that this proves that my original assumption holds, therefore the statement at the top has been proven.



I've gone wrong somewhere above, since I was told that I proved the original assertion with a direct proof rather than by induction. Where did I go wrong? I thought that after making my assumption and learning the case that needs to hold to make such assumption true, all I need to do is see if both sides of the equation equal each other. Has doing a direct proof of the original statement caused me to make too many assumptions? Or have I done something else inappropriate?


Answer



Typically, you want to remember that, for proof by induction, you have to make use of the induction assumption. You assume some case greater than your base case holds, and then show it implies the succeeding step - that gives you the whole "$S_1 \implies S_2 \implies S_3 \implies ...$" chain.



So our assumption is




$$\sum_{i=0}^{k}(5i+3) = \frac{k(5k+11)}{2}+3$$



We seek to show



$$\sum_{i=0}^{k+1}(5i+3) = \frac{(k+1)(5(k+1)+11)}{2}+3 = \frac{(k+1)(5k+16)}{2}+3$$



Starting with the sum at the left, we can pull out the $(k+1)^{th}$ term:



$$\sum_{i=0}^{k+1}(5i+3) = 5(k+1) + 3 + \sum_{i=0}^{k}(5i+3) = 5k+8 + \sum_{i=0}^{k}(5i+3)$$




As it happens, this new summation is precisely what we assume holds. So we substitute the corresponding expression and do some algebra:



$$\begin{align}
5k+8 + \sum_{i=0}^{k}(5i+3) &= 5k+8 + \frac{k(5k+11)}{2}+3\\
&=\frac{10k+16 + 5k^2 + 11k}{2} + 3\\
&=\frac{5k^2+21k+16}{2} + 3\\
&= \frac{(k+1)(5k+16)}{2}+3
\end{align}$$




Thus, the case for $(k+1)$ holds, completing the induction step.
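
For good measure, a brute-force check of the closed form (my own addition):

for n in range(10):
    assert sum(5 * i + 3 for i in range(n + 1)) == n * (5 * n + 11) // 2 + 3
print("formula verified for n = 0..9")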


Sunday 11 November 2018

Mathematical Induction help

Ok, so I'm not very good with these proving by induction thingies. Need a little help please.



How do I prove



$$\sum_{j=0}^n\left(-\frac12\right)^j=\frac{2^{n+1}+(-1)^n}{3\cdot2^n}$$



when $n$ is a non-negative integer.




I got the basis step and such down, but I'm pretty bad with exponents so I am having a difficult time in the induction step.

Arithmetic progression with complex common difference?



Suppose we have the following sequence:



$$\{0,i,2i,3i,4i,5i\}$$



Can we call this sequence an arithmetic progression with first term $0$ and common difference of $i$ ?



Clarification: Here, $i$ is referring to the imaginary unit, i.e., $i=\sqrt{-1}$




In general, I want to know if the common difference of an AP can be any complex value and not just real value.



Thanks!


Answer



You can define an arithmetic progression in any monoid $(M,+)$. It is then defined by a starting element $a\in M$ and an increment $b\in M$ and the recursion
$$a_0 = a\\
a_{n+1} = a_n + b$$



There is no reason to restrict to reals $(\mathbb R,+)$ or complex numbers $(\mathbb C, +)$. For some results about arithmetic progressions, you might want $M$ to be an (abelian) group or even a field (both is true for the two settings mentioned here).







For a complex finite arithmetic progression $\{z,z+w, \ldots, z+nw\}$ to have a real sum, you must actually force
$$\Im \sum_{k=0}^n (z+kw) = \Im \left((n+1)z + \frac{n(n+1)}2w\right) = (n+1)\Im z + \frac{n(n+1)}2 \Im w \stackrel!=0$$
In other words you can freely pick the real parts of $z$ and $w$, but the imaginary parts must be related by
$$\Im w = - \frac2n \Im z$$
for some $n\in\mathbb N$ which will double as the number of terms minus one (since we sum from $k=0$ to $n$, which has $n+1$ summands)


calculus - Show that $a_n



We define $a_1=1$ and $a_{n+1}=\dfrac{1}{2}\left(a_n+\dfrac{2}{a_n}\right)$.



I want to prove that $a_n$ converges. But first I'd like to show that $a_n<\sqrt{2}$ for every $n\in\mathbb{N}$.



I try to use induction. For a fixed $n$, I suppose that $a_n<\sqrt{2}$.



Any hint to prove that $a_{n+1}=\dfrac{1}{2}\left(a_n+\dfrac{2}{a_n}\right)<\sqrt{2}$?




Thanks.


Answer



We have:
$$ 2 a_n a_{n+1} = a_n^2 + 2, $$
from which it follows that:
$$ a_{n+1}^2-2 = (a_{n+1}-a_n)^2 = \left(\frac{1}{a_n}-\frac{a_n}{2}\right)^2 = \left(\frac{a_n^2-2}{2a_n}\right)^2 > 0, $$
so $a_n > \sqrt{2}$ for every $n > 1$. If we set $b_n = a_{n+1}^2-2$, we have $b_1=\frac{1}{4},b_n>0$ and:
$$ b_n = \frac{b_{n-1}^2}{4a_n^2} = \frac{b_{n-1}^2}{4(b_{n-1}+2)} \leq \frac{b_{n-1}^2}{8}, $$
from which it follows that the sequence $\{b_n\}_{n\geq 1}$ decreases very fast towards zero. For instance, $b_1=\frac{1}{4}$ and the last inequality imply, by induction:
$$ b_n \leq \frac{1}{2^{5\cdot 2^{n-1}-3}}.$$
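
The quadratic collapse of $b_n = a_{n+1}^2-2$ is easy to watch numerically (my own sketch):

a = 1.0
for n in range(6):
    a = 0.5 * (a + 2 / a)              # the given recursion
    print(n + 1, a, a * a - 2)         # a -> sqrt(2) from above; b_n -> 0 fast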



calculus - Is there a pattern to expression for the nested sums of the first $n$ terms of an expression?




Apologies for the confusing title but I couldn't think of a better way to phrase it. What I'm talking about is this:



$$ \sum_{i=1}^n \;i = \frac{1}{2}n \left(n+1\right)$$
$$ \sum_{i=1}^n \; \frac{1}{2}i\left(i+1\right) = \frac{1}{6}n\left(n+1\right)\left(n+2\right) $$
$$ \sum_{i=1}^n \; \frac{1}{6}i\left(i+1\right)\left(i+2\right) = \frac{1}{24}n\left(n+1\right)\left(n+2\right)\left(n+3\right) $$



We see that this seems to indicate:




$$ \sum_{n_m=1}^{n}\sum_{n_{m-1}=1}^{n_m}\ldots \sum_{n_1=1}^{n_2} \; n_1 = \frac{1}{(m+1)!}\prod_{k = 0}^{m}(n+k) $$



Is this a known result? If so how would you go about proving it? I have tried a few inductive arguments but because I couldn't express the intermediate expressions nicely, I didn't really get anywhere.


Answer



You should have
$$\sum_{i=1}^{n} 1 = n$$
$$\sum_{i=1}^{n} i = \frac{1}{2} n(n+1)$$
$$\sum_{i=1}^{n} \frac{1}{2} i(i+1) = \frac{1}{6} n(n+1)(n+2)$$
$$\sum_{i=1}^{n} \frac{1}{6} i(i+1)(i+2) = \frac{1}{24} n(n+1)(n+2)(n+3)$$




In particular, the first sum of yours was wrong and the things you were adding should depend on $i$, not on $n$.



But, to answer the question, yes! This is a known result, and actually follows quite nicely from properties of Pascal's triangle. Look at the first few diagonals of the triangle and see how they match up to your sums, and see if you can explain why there's such a relation, and why the sums here can be written in terms of binomial coefficients. Then, the hockey-stick identity proves your idea nicely.
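
To spell out the connection: since $\frac{1}{m!}\,i(i+1)\cdots(i+m-1)=\binom{i+m-1}{m}$, the nested-sum identity above is exactly the hockey-stick identity
$$\sum_{i=1}^{n}\binom{i+m-1}{m}=\sum_{j=m}^{n+m-1}\binom{j}{m}=\binom{n+m}{m+1}=\frac{1}{(m+1)!}\prod_{k=0}^{m}(n+k).$$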


discrete mathematics - How would I show this bijection and also calculate its inverse of the function?

I want to show that $f(x)$ is bijective and calculate its inverse.




Let $$f : \mathbf{R} \to \mathbf{R} $$ be defined by $f (x) = \frac{3x}{5} + 7$




I understand that a bijection must be injective and surjective but I don't understand how to show it for a function.
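
A sketch of the standard argument: for injectivity, $f(a)=f(b)$ gives $\frac{3a}{5}+7=\frac{3b}{5}+7$, hence $a=b$. For surjectivity (which also produces the inverse), solve $y=\frac{3x}{5}+7$ for $x$:
$$x=\frac{5(y-7)}{3},$$
so every $y\in\mathbf{R}$ has a preimage and $f^{-1}(y)=\frac{5(y-7)}{3}$; one checks directly that $f(f^{-1}(y))=y$ and $f^{-1}(f(x))=x$.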

Saturday 10 November 2018

calculus - Antiderivative of $y = frac {x+22} {x^{2}+2x-8}$




I'm taking the AP Calculus BC Exam next week and ran into this problem with no idea how to solve it. Unfortunately, the answer key didn't provide explanations, and I'd really, really appreciate it if someone could explain how to solve this problem - I'm having trouble with the antiderivative of the curve. It's a non-calculator question.




Which of the following is equal to the area of the region bounded by the lines $x = -3$, $x = 1$, $y = 0$, and the curve $y = \dfrac {x+22} {x^{2}+2x-8}$?




a) $\ln 5$



b) $6\ln 5$




c) $7\ln 5$



d) $8\ln 5$



e) $10\ln 5$



Thank you!


Answer



The answer is (c). Let me explain. The function is negative on the given interval, so we need to find
$$ -\int_{-3}^{1} f(x)\,dx. $$
Note that $x^2 +2x -8 = (x+4)(x-2)$.
So, $$\frac{x+22}{x^2 +2x -8} =\frac{x+22}{(x+4)(x-2)} $$
Now $$\frac{x+22}{(x+4)(x-2)} = \frac{A}{x+4} + \frac{B}{x-2}$$
After some calculation (e.g. the cover-up method: $A=\frac{-4+22}{-4-2}$, $B=\frac{2+22}{2+4}$) you will get that $A = -3$ and $B = 4$.
So we need to find
$$ -\int_{-3}^{1}\left(\frac{-3}{x+4} + \frac{4}{x-2}\right)dx $$
and that equals $$ \Big[\,3\ln(|x+4|) - 4\ln(|x-2|)\Big]_{-3}^{1} = \big(3\ln 5 - 0\big)-\big(0 - 4\ln 5\big) = 7\ln(5) $$


Friday 9 November 2018

linear algebra - Roots of a polynomial with real cofficients

Good evening,



Let $\alpha, \beta \in\mathbb{R}$, $n\in\mathbb{N}$. Please can you help me to prove that every polynomial of the form



$$ f(x)=x^{n+3}+\alpha x+\beta $$



admits at most 3 real roots. Thank you for your help.
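
One standard approach, sketched: $f''(x)=(n+3)(n+2)x^{n+1}$ vanishes only at $x=0$, so $f''$ has at most one real zero. By Rolle's theorem, between two consecutive real roots of $f'$ there must lie a root of $f''$, so $f'(x)=(n+3)x^{n+2}+\alpha$ has at most two real roots; applying Rolle's theorem once more, $f$ has at most three real roots.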

modular arithmetic - Solving the congruence $7x + 3 \equiv 1 \pmod{31}$?




I am having a problem when the LHS has an addition function; if the question is just a multiple of $x$ it's fine.



But when I have questions like $3x+3$ or $4x+7$, I don't seem to get the right answer at the end.


Answer



We have that



$$7x + 3 \equiv 1 \mod 31 \implies 7x\equiv -2\mod 31$$



Then we need to evaluate by Euclidean algorithm the inverse of $7 \mod 31$, that is





  • $31=4\cdot \color{red}7 +\color{blue}3$


  • $\color{red}7=2\cdot \color{blue}3 +1$




then




  • $1=7-2\cdot 3=7-2\cdot (31-4\cdot 7)=-2\cdot 31+9\cdot 7$




that is $9\cdot 7\equiv 1 \mod 31$ and then




$$9\cdot 7x\equiv 9\cdot (-2)\mod 31 \implies x\equiv 13 \mod 31$$
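
If you want to check such computations mechanically, here is a small Python sketch of the extended Euclidean algorithm applied to this congruence (the function name is just illustrative):

    def ext_gcd(a, b):
        # returns (g, s, t) with s*a + t*b == g == gcd(a, b)
        if b == 0:
            return a, 1, 0
        g, s, t = ext_gcd(b, a % b)
        return g, t, s - (a // b) * t

    g, s, _ = ext_gcd(7, 31)   # s = 9 is the inverse of 7 mod 31
    x = (s * (1 - 3)) % 31     # solve 7x + 3 = 1 (mod 31)
    print(x)                   # prints 13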



calculus - How to evaluate the integral $\int_0^{+\infty} \frac{\sin^4{x}}{x^4}\,dx$?

We know that $\int_0^{+\infty} \frac{\sin{x}}{x}\,dx=\pi/2$; but how does one evaluate the integral $\int_0^{+\infty} \frac{\sin^4{x}}{x^4}\,dx$?
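
For reference, one route, sketched: three integrations by parts (all boundary terms vanish, since $\sin^4 x\sim x^4$ near $0$ and the derivatives of $\sin^4 x$ are bounded) give
$$\int_0^{+\infty}\frac{\sin^4 x}{x^4}\,dx=\frac16\int_0^{+\infty}\frac{(\sin^4 x)'''}{x}\,dx.$$
Writing $\sin^4 x=\frac{3-4\cos 2x+\cos 4x}{8}$ gives $(\sin^4 x)'''=-4\sin 2x+8\sin 4x$, and since $\int_0^{+\infty}\frac{\sin(ax)}{x}\,dx=\frac\pi2$ for every $a>0$,
$$\int_0^{+\infty}\frac{\sin^4 x}{x^4}\,dx=\frac16\left(-4\cdot\frac\pi2+8\cdot\frac\pi2\right)=\frac\pi3.$$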

integration - Find $\lim\limits_{n\to+\infty}U_n$ where $U_n=\frac{1}{n}\int\limits_0^n\sin^2(t)\,dt$



Writing $$\int\limits_0^n\sin^2(t)dt=\int\limits_0^{\pi E(\frac{n}{\pi})}\sin^2(t)dt + \int\limits_{\pi E(\frac{n}{\pi})}^n\sin^2(t)dt$$
where $E(x)$ denotes the floor function of $x$.




Use the squeeze theorem to find $\lim\limits_{n\to+\infty}U_n$



I tried to evaluate the integral directly, but the problem specifically asks to use $\pi E(\frac{n}{\pi})$.


Answer



As we have:



$$\sin^2(t)=\frac12(1-\cos(2t))$$



It is clear that the function being integrated has a periodicity of $\pi$. Hence every integral through a whole $\pi$ period will have the same value. So, we have that:




$$\int\limits_0^\pi\sin^2(t)dt=\int\limits_0^\pi\frac12(1-\cos(2t))dt=\frac\pi2$$



Then, if we break down the integration as the function is suggesting:



$$\int\limits_0^n\sin^2(t)dt=\int\limits_0^{\pi \lfloor\frac{n}{\pi}\rfloor}\sin^2(t)dt + \int\limits_{\pi \lfloor\frac{n}{\pi}\rfloor}^n\sin^2(t)dt=\frac\pi2 \lfloor\frac{n}\pi\rfloor + \int\limits_{\pi \lfloor\frac{n}{\pi}\rfloor}^n\sin^2(t)dt$$



Notice that the integrand is always positive, so we can easily find a lower and an upper bound by dropping the last integral or letting it run over another full period, which means that:



$$\frac\pi2 \lfloor\frac{n}\pi\rfloor<\frac\pi2 \lfloor\frac{n}\pi\rfloor + \int\limits_{\pi \lfloor\frac{n}{\pi}\rfloor}^n\sin^2(t)dt<\frac\pi2 \lfloor\frac{n}\pi\rfloor+\frac\pi2$$




$$\frac1n \frac\pi2 \left\lfloor\frac{n}\pi\right\rfloor < U_n < \frac1n\left(\frac\pi2 \left\lfloor\frac{n}\pi\right\rfloor+\frac\pi2\right)$$



So, if we let $n \to \infty$, as both sides of the inequality tend to $1/2$, we have that



$$\lim_{n\to\infty}U_n=\frac12$$


sequences and series - The sum $1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\cdots-(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots)$ does not exist.

What argument(s) can I use to prove that



$$1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\cdots-(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots)$$



does not exist.



The question was:



Find a rearrangement of $\sum\frac{(-1)^{n-1}}{n}$ for which the new sum does not exist (not even as $+\infty$ or $-\infty$).
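
A sketch of one such arrangement: both $1+\frac13+\frac15+\cdots$ and $\frac12+\frac14+\frac16+\cdots$ diverge, so one can take positive terms until the partial sum first exceeds $1$, then negative terms until it first drops below $0$, and repeat forever. The partial sums then rise above $1$ and fall below $0$ infinitely often, so they approach no limit, finite or infinite.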

elementary number theory - Fermat's Last Theorem - A query



Problem Statement:
In Fermat's Last Theorem $$x^n + y^n = z^n$$ $x,y,z$ are considered integers. But upon closer inspection it is seen that it is also true for any rational numbers $x,y,z$. And that FLT is not applicable only when $x,y,z$ are irrational.



Query : Why is it that then it is always and only mentioned that Fermat's theorem is true when $x,y,z$ are integers and not rational numbers ? Is my perception correct? Can this be proven or disproved ?


Answer




As MTurgeon points out, the two problems are equivalent.



More exactly, for some $n$, the equation $x^n+y^n=z^n$ has non trivial integer solutions if and only if $x^n+y^n=z^n$ has non trivial rational solutions.
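
Indeed, writing a rational solution over a common denominator $d$ and clearing it shows the two statements are the same:
$$\left(\frac{a}{d}\right)^n+\left(\frac{b}{d}\right)^n=\left(\frac{c}{d}\right)^n\iff a^n+b^n=c^n.$$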



Anyhow, many of the techniques used to attempt a proof, both in general and in the particular cases, work for the integer version. For example, the case $n=3$ relies on the fact that $\mathbb{Z}[\omega]$ is a UFD, and the case $n=4$ is based on the fact that one gets a contradiction by building a smaller positive solution (infinite descent).



Since the two problems are equivalent, and the integer version is easier to approach, it is typically stated as an equation over the integers.


calculus - Show that $\int^{\infty}_{0}\left(\frac{\sin(x)}{x}\right)^2 < 2$



I'm trying to show that this integral converges and is $<2$:
$$\int^{\infty}_{0}\left(\frac{\sin(x)}{x}\right)^2dx < 2$$
What I did is split the integral:
$$\int^{1}_{0}\left(\frac{\sin(x)}{x}\right)^2dx + \int^{\infty}_{1}\left(\frac{\sin(x)}{x}\right)^2 dx$$
Second expression:
$$\int^{\infty}_{1}\left(\frac{\sin(x)}{x}\right)^2 dx < \int^{\infty}_{1}\frac{1}{x^2}\,dx = \lim\limits_{b\to \infty} \left[-\frac{1}{x}\right]_1^b = 1 $$
Now for the first expression I need some explanation of why it is $<1$, and then I will prove it.

I would like to get some advice for the first expression. Thanks!



Answer



Hint: $$\lim_{x\to0}\frac{\sin x}{x}=1.$$
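
To spell the hint out: $0<\sin x<x$ on $(0,1]$, so the integrand is bounded by $1$ there (and extends continuously to $x=0$ with value $1$), giving $\int_0^1\left(\frac{\sin x}{x}\right)^2 dx<1$ and hence a total $<2$.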


Thursday 8 November 2018

logarithms - Find the sum of the geometric sequence

$$1+ \frac12 + \frac14+\dots+\frac1{2^n}$$



To find the sum you have to know the number of terms in the geometric sequence, and I don't know how to find it...




The answer in the text book is $2-\frac1{2^n}$.
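
For reference: the sum has $n+1$ terms (exponents $0$ through $n$), so the finite geometric series formula with ratio $r=\frac12$ gives
$$\sum_{k=0}^{n}\frac{1}{2^k}=\frac{1-\left(\frac12\right)^{n+1}}{1-\frac12}=2-\frac{1}{2^n},$$
which matches the textbook answer.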

calculus - Find the limit of exponent/factorial sequence











I don't know how to even start it...



$$ \lim_{n\to \infty } \frac{2^n}{n!} = $$


Answer



HINT



Prove that $$0 < \dfrac{2^n}{n!} \leq \dfrac4n$$ for all $n \in \mathbb{Z}^+$ using induction and then use squeeze theorem.
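
A sketch of the suggested induction: the base cases $n=1,2$ give $2\le4$ and $2\le2$, and for the step with $n\ge2$,
$$\frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\cdot\frac{2^n}{n!}\le\frac{2}{n+1}\cdot\frac4n=\frac{8}{n(n+1)}\le\frac{4}{n+1},$$
since $\frac8n\le4$ for $n\ge2$. Then $0<\frac{2^n}{n!}\le\frac4n\to0$ forces the limit to be $0$.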


abstract algebra - A field of order $32$



I was working on this problem from an old qual exam and here is the
question. In particular this is not for homework.




True or False: There are no fields of order 32. Justify your answer.





Attempt: From general theory I know that any finite field has prime power order
and conversely given any prime power there exists a finite field of that order. So of course such fields exist. But now I need to explicitly construct such a field.
If I could somehow construct $\mathbb{Z}_2[x]/(p(x))$ where $p(x)$ is a polynomial of degree 5 which is irreducible over $\mathbb{Z}_2$, I am done. But wait, how do I come up with a degree 5 polynomial that is irreducible over $\mathbb{Z}_2$? My usual methods don't work here because $p(x)$ does not have degree 2 or 3, in which case it would be easy to check for irreducibility by looking for roots.



My question is in these kinds of situations, is there a general way to proceed.



Note: I have not learnt Galois' theory or anything like that. Does this problem require more machinery to solve?




Please help.


Answer



No more machinery. Make yourself a table of irreducible polynomials of degrees up to 5 by thinking about how to recognize polynomials over $\mathbb Z_2$ with $0$ or $1$ as a root, then proceeding by doing a sieve of Eratosthenes (crossing out polynomials that are divisible by lower-degree irreducibles).
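
For instance, this sieve turns up $p(x)=x^5+x^2+1$: it has no roots in $\mathbb Z_2$ (both $p(0)$ and $p(1)$ equal $1$), and the only irreducible quadratic $x^2+x+1$ does not divide it either, since $x^3\equiv1\pmod{x^2+x+1}$ gives $x^5+x^2+1\equiv x^2+x^2+1=1$. A quintic with no factor of degree $1$ or $2$ is irreducible, so $\mathbb Z_2[x]/(x^5+x^2+1)$ is a field of order $32$.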


real analysis - Is there a holomorphic and bijective function from $\mathbb{D}$ onto $\mathbb{D}\setminus(0,1]$?

Is there a holomorphic and bijective function from $\mathbb{D}$ onto $\mathbb{D}\setminus(0,1]$, where $\mathbb{D}$ denotes the open unit disk in the complex plane?



Idea: I really need to prove that there is a holomorphic and bijective function from $R={\{z\in\mathbb{C}: Re(z)>0 , Im(z)>0}\}$ onto $\mathbb{D}\setminus(0,1]$. It would suffice to find a function with such properties from $\mathbb{D}$ onto $\mathbb{D}\setminus(0,1]$, and then compose it with the Cayley transform and the function $f(z)=z^2$.



Thanks for the help!!

Wednesday 7 November 2018

general topology - "Basis" Version of a Local Connectedness Theorem





A space $X$ is locally connected if and only if for every open set $U$ of $X$, each component of $U$ is open in $X$.




The above is a theorem in Munkres' Topology book (theorem 25.3). As usual, I always wonder whether a given theorem is true when open sets are replaced with basis elements. I just proved that $X$ is locally connected if and only if $X$ has a basis entirely comprised of connected sets, thinking that this might help. But I cannot quite connect the pieces. So my question is




Does a "basis" version of the above theorem hold?



Answer




I don't imagine the proof to be that different from the original version: Let $X$ be a topological space with basis $\mathcal{B}$. Suppose for every $B \in \mathcal{B}$ we have that each component of $B$ is open in $X$. Now take any $x \in X$ and any open neighbourhood $U$ of $x$. By definition there is a $B \in \mathcal{B}$ such that $x \in B \subset U$. Now let $C$ be the component of $B$ containing $x$. Then $x \in C \subset U$ and $C$ is open and connected. So $X$ is locally connected.



To prove the converse, assume $X$ is locally connected. By the above theorem, for every open $U \subset X$, each component of $U$ is open. Since every $B \in \mathcal{B}$ is open, each component of each $B \in \mathcal{B}$ must be open too.


integration - Simple, yet evasive integral from zero to $pi/2$




Q: Evaluate$\newcommand{\dx}{\mathrm dx}\newcommand{\du}{\mathrm du}\newcommand{\dv}{\mathrm dv}\newcommand{\dtheta}{\mathrm d\theta}\newcommand{\dw}{\mathrm dw}$$$I=\int\limits_0^{\pi/2}\dx\,\frac {\left(\log\sin x\log\cos x\right)^2}{\sin x\cos x}$$





I'm out of ideas on what to do. I tried the reflection substitution $u=\frac\pi2-x$, but that led nowhere, because it simply returns the same integral, i.e. adding the two together doesn't yield a simplification whatsoever$$I=\int\limits_0^{\pi/2}\dx\,\frac {\log^2\sin x\log^2\cos x}{\sin x\cos x}=\int\limits_0^{\pi/2}\du\,\frac {\log^2\cos u\log^2\sin u}{\cos u\sin u}$$I've looked into making a u-substitution, but have no idea where to begin. I plan to use a trigonometric identity, namely$$\sec x\csc x=\cot x+\tan x$$but I'm not sure what to do afterwards because we get$$I=\int\limits_0^{\pi/2}\dx\,\cot x\log^2\sin x\log^2\cos x+\int\limits_0^{\pi/2}\dx\,\tan x\log^2\sin x\log^2\cos x$$I would appreciate it if you guys gave me an idea on where to begin!



Also, as a side note, I've used \newcommand on \dx,\du,\dv, and \dw to automatically change to $\dx$, $\du$, $\dv$, and $\dw$ respectively, if you guys don't mind! Just to make it easier for the differential operator!


Answer



Change variable to $t = \sin^2 x$ and notice
$$\begin{align}\frac{dx}{\sin x\cos x} &= \frac{\sin x\cos x dx}{\sin^2 x\cos^2 x} = \frac12\frac{dt}{t(1-t)}\\
\log\sin x \log\cos x &= \frac14\log\sin^2 x \log \cos^2 x = \frac14 \log t\log(1-t)
\end{align}$$
The integral at hand can be rewritten as
$$\mathcal{I} \stackrel{def}{=} \int_0^{\pi/2} \frac{(\log\sin x\log\cos x)^2}{\sin x\cos x} dx = \frac{1}{32}\int_0^1 \frac{\log^2 t \log^2(1-t)}{t(1-t)} dt$$
Since $\displaystyle\frac{1}{t(1-t)} = \frac1t + \frac{1}{1-t}$, by replacing $t$ by $1-t$ in part of the integral, we find



$$\begin{align}
\mathcal{I}
&= \frac{1}{16}\int_0^1 \frac{\log^2 t\log^2(1-t)}{t} dt
= \frac{1}{48}\int_0^1 \log^2(1-t) d\log^3 t\\
&\stackrel{\text{I.by.P}}{=} \frac{1}{48}\left\{
\left[\log^2(1-t)\log^3 t\right]_0^1 + 2
\int_0^1 \log^3 t\frac{\log(1-t)}{1-t}dt\right\}\\
&= -\frac{1}{24}\int_0^1 \frac{\log^3 t}{1-t}\sum_{n=1}^\infty\frac{t^n}{n} dt
= -\frac{1}{24}\int_0^1 \log^3 t\sum_{n=1}^\infty H_nt^n dt\\
&\stackrel{t = e^{-y}}{=} \frac{1}{24} \int_0^\infty y^3 \sum_{n=1}^\infty H_n e^{-(n+1)y} dy
= \frac14 \sum_{n=1}^\infty \frac{H_n}{(n+1)^4}
\end{align}
$$
where $H_n$ are the $n^{th}$ harmonic number. By rearranging its terms, the last sum should be expressible in terms of zeta functions. I'm lazy, I just ask WA to evaluate the sum. As expected, last sum equals to $2\zeta(5) - \frac{\pi^2}{6}\zeta(3)$${}^\color{blue}{[1]}$.




As a consequence, the integral at hand equals to:



$$\mathcal{I}
= \frac{12\zeta(5) - \pi^2\zeta(3)}{24}
\approx 0.024137789997360933616411382857235691008...$$



Notes




  • $\color{blue}{[1]}$ It turns out we can compute this sum using an

    identity by Euler.
    $$2\sum_{n=1}^\infty \frac{H_n}{n^m} = (m+2)\zeta(m+1) - \sum_{n=1}^{m-2}\zeta(m-n)\zeta(n+1),\quad\text{ for } m = 2, 3, \ldots$$
    In particular, for $m = 4$, this identity becomes
    $$\sum_{n=1}^\infty \frac{H_n}{n^4}
    = 3\zeta(5) - \zeta(2)\zeta(3)$$
    and we can evaluate our sum as
    $$\begin{align}\mathcal{I}
    &= \frac14\sum_{n=1}^{\infty}\frac{H_{n}}{(n+1)^4}
    = \frac14\sum_{n=1}^{\infty}\left(\frac{H_{n+1}}{(n+1)^4}-\frac{1}{(n+1)^5}\right)
    = \frac14\left(\sum_{n=1}^\infty \frac{H_n}{n^4} - \zeta(5)\right)\\
    &= \frac14(2\zeta(5) - \zeta(2)\zeta(3))
    = \frac{12\zeta(5) - \pi^2\zeta(3)}{24}
    \end{align}
    $$


Matrices: left inverse is also right inverse?

If $A$ and $B$ are square matrices, and $AB=I$, then I think it is also true that $BA=I$. In fact, this Wikipedia page says that this "follows from the theory of matrices". I assume there's a nice simple one-line proof, but can't seem to find it.



Nothing exotic, here -- assume that the matrices have finite size and their elements are real numbers.




This isn't homework (if that matters to you). My last homework assignment was about 45 years ago.
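
One short argument: if $AB=I$, then $Bx=0$ forces $x=ABx=0$, so the square matrix $B$ is injective, hence (as a linear map of a finite-dimensional space to itself) invertible. Then
$$A=A\left(BB^{-1}\right)=(AB)B^{-1}=B^{-1},$$
and therefore $BA=BB^{-1}=I$. Finiteness is essential here: for operators on infinite-dimensional spaces, $AB=I$ does not force $BA=I$.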

integration - Evaluate $\int_0^{\infty}\frac{\log x}{1+e^x}\,dx$



Evaluate
$$\int_0^{+\infty}\frac{\log x}{1+e^x}\,dx.$$



I have tried using Feynman's Trick (in several ways, for example by introducing a variable $a$ such that $I(a)=\int_0^{+\infty}\frac{\log ax}{1+e^x}\,dx$), but that doesn't seem to work. Also, integration by parts and all kinds of substitutions make things worse (I have no idea how to substitute so that $\log$ and $\exp$ both become simpler).




(Source: Dutch Integration Championship 2013 - Level 5/5)


Answer



By the inverse Laplace transform
$$ \sum_{n\geq 1}\frac{(-1)^{n+1}}{n^s} = \frac{1}{\Gamma(s)}\int_{0}^{+\infty}\frac{x^{s-1}}{e^x+1}\,dx $$
and by differentiating both sides with respect to $s$
$$ \sum_{n\geq 1}\frac{(-1)^n \log n}{n^s} = -\frac{\Gamma'(s)}{\Gamma(s)^2}\int_{0}^{+\infty}\frac{x^{s-1}}{e^x+1}\,dx + \frac{1}{\Gamma(s)}\int_{0}^{+\infty}\frac{x^{s-1}\log(x)}{e^x+1}\,dx $$
so by evaluating at $s=1$
$$\int_{0}^{+\infty}\frac{\log x}{e^x+1}\,dx = \sum_{n\geq 1}\frac{(-1)^n\log n}{n}+\underbrace{\Gamma'(1)}_{-\gamma}\underbrace{\int_{0}^{+\infty}\frac{dx}{e^x+1}}_{\log 2} $$
and it just remains to crack the mysterious series $\sum_{n\geq 1}\frac{(-1)^n\log n}{n}$. On the other hand by Frullani's integral, the inverse Laplace transform or Feynman's trick we have $\log(n)=\int_{0}^{+\infty}\frac{e^{-x}-e^{-nx}}{x}\,dx$, so

$$\sum_{n\geq 1}\frac{(-1)^n\log n}{n}=\int_{0}^{+\infty}\frac{\log(1+e^{-x})-e^{-x}\log 2}{x}\,dx=\gamma\log(2)-\frac{1}{2}\log^2(2)\tag{J}$$
where the last identity follows from the integral representation for the Euler-Mascheroni constant, got by applying the inverse Laplace transform to the series definition $\gamma=\sum_{n\geq 1}\left[\frac{1}{n}-\log\left(1+\frac{1}{n}\right)\right]$. Summarizing, we simply have
$$ \int_{0}^{+\infty}\frac{\log(x)}{e^x+1}\,dx = \color{red}{-\frac{1}{2}\log^2(2)}.$$
It is possible to prove the equality between the LHS and the RHS of $(J)$ by summation by parts and Euler sums, too.


Tuesday 6 November 2018

sequences and series - Convergence of $\sum^{\infty}_{n=1}\frac{2}{\sqrt{n}+2}$



In the Comparison Tests section of my textbook, I am tasked with determining the convergence of the series $$\sum^{\infty}_{n=1}\frac{2}{\sqrt{n}+2}$$



I will argue the series diverges using the "Limit" Comparison test.




Consider $$a_n=\frac{2}{\sqrt{n}+2}$$ and $$b_n=\frac{1}{\sqrt{n}}$$ Note: The latter series $b_n$ is a divergent $p$-series with $p=\frac{1}{2}$.



If $$\lim_{n\rightarrow\infty}\frac{a_n}{b_n}=c$$ where $c>0$, then either both $a_n$ and $b_n$ converge, or both $a_n$ and $b_n$ diverge.



Thus $$\lim_{n\rightarrow\infty}\frac{a_n}{b_n}=\lim_{n\rightarrow\infty}\frac{2\sqrt{n}}{\sqrt{n}+2}=\lim_{n\rightarrow\infty}\frac{2}{1+\frac{2}{\sqrt{n}}}=2$$



Since the above limit existed as a finite value $c=2>0$ and since $\sum b_n$ diverges ($p$-series with $p=\frac{1}{2}$):





$$\sum^{\infty}_{n=1}\frac{2}{\sqrt{n}+2}\space\text{diverges}$$




However, from the beginning I can see the asymptotic equivalence: $$\frac{2}{\sqrt{n}+2}\sim\frac{1}{\sqrt{n}}$$ Why do I need the limit comparison test when I can deduce the result quickly upon inspection? Thanks in advance!


Answer



Note that $$\frac{2}{\sqrt{n}+2}\geq \frac{1}{\sqrt{n}}\quad\text{for all }n\geq 4,$$ so the series diverges by direct comparison with the divergent $p$-series $\sum\frac{1}{\sqrt{n}}$ (dropping finitely many initial terms does not affect divergence).


Monday 5 November 2018

elementary number theory - Tiling rectangle with squares using Euclid's algorithm



Is there a proof that tiling an $n\times m$ rectangle with squares using Euclid's algorithm (that is, always choosing the biggest square that fits in the remaining space) results in a minimum sum of the sizes (length of one side) of the resulting squares?



Answer



Let's call the "sum of the sizes (length of one side)" SS, and assume that $m < n$, since for a square everything is clear.




  1. It's not hard to prove that SS(m,n) of euclidean algo is $m + n - gcd(m,n)$

  2. We are to prove that it's the minimum of SS(m,n). We'll succeed in it if we prove that the biggest size of tiles for optimum tiling is $m$.

  3. If this is not true, we can assign to each of the 4 corners a distinct square containing it.

  4. Let's now count the perimeter of the rectangle. We consider only those squares which are adjacent to the border:




$$2(m+n) = \sum_{adjacent}l_i + \sum_{corner}l_j,$$



last sum contains the sizes of 4 squares assigned to the corners. Ok, then we get:



$$2(m+n) \le \sum_{all~ squares}l_i + \sum_{corner}l_j \le \sum_{all ~squares}l_i + 2m, $$



as "corneric" squares are paired and every pair have total size $\le m$



$$\sum_{all~ squares}l_i \ge 2n \ge n+m-\gcd(n,m).$$




So the covering is not optimal, a contradiction.
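
A quick numerical check of claim 1, as a Python sketch (names are illustrative):

    from math import gcd

    def greedy_side_sum(m, n):
        # total side length of the squares cut off by the greedy (Euclid-style) tiling
        total = 0
        while m > 0 and n > 0:
            if m > n:
                m, n = n, m
            total += m      # remove an m-by-m square
            n -= m          # an m-by-(n-m) rectangle remains
        return total

    for m, n in [(6, 10), (5, 8), (7, 7)]:
        assert greedy_side_sum(m, n) == m + n - gcd(m, n)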


Sunday 4 November 2018

algebra precalculus - Why is Wolfram plotting a wrong graph for $f(x) =8^{\log_{8}({x-3})}$?




I'm manually plotting various functions' graphs and use desmos and wolfram to validate whether I've analyzed the function in a correct way. But then I came to the following function and it seems that wolfram is showing a wrong result:



$$
f(x) =8^{\log_{8}({x-3})}
$$



After defining the range of the arguments the function may be reduced to $f(x) = x-3$ where $x \gt 3$, which eventually appears to be a linear function.



It's clear that the domain is restricted to $x>3$ in $\mathbb R$, since $\log(x)$ is not defined for $x \le 0$. But Wolfram Alpha extends the line below the X-axis and shows the function as defined for $x < 3$ as well.




Am I missing something or is that just wolfram reducing the function and plotting the graph for the result?


Answer



Because Wolfram can deal with complex numbers. $$\log_8(-|x|)=\log_8(|x|e^{i\pi})=\log_8|x|+\log_8e^{i\pi}=\log_8|x|+i\pi\frac{1}{\ln 8}$$
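
Spelling this out for the plot: for $x<3$,
$$8^{\log_8(x-3)}=8^{\log_8|x-3|}\cdot 8^{i\pi/\ln 8}=|x-3|\,e^{i\pi}=-(3-x)=x-3,$$
so Wolfram, working over $\mathbb C$, simplifies the expression to $x-3$ on the whole real line.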


Saturday 3 November 2018

Why are the elements of a galois/finite field represented as polynomials?



I'm new to finite fields - I have been watching various lectures and reading about them, but I'm missing a step. I can understand what a group, ring field and prime field is, no problem.




But when we have a prime extension field, suddenly the elements are no longer numbers; they are polynomials. I'm sure there is some great mathematical trick which shows that we can (or must?) use polynomials to satisfy the rules of a field within a prime extension field, but I haven't been able to find a coherent explanation of this step. People I have asked in person don't seem to know either; it's just assumed that that is the way it is.



So I have two questions:



What is a clear explanation of "why polynomials?".



Has anyone tried using constructs other than polynomials to satisfy the same field rules?



Thanks in advance.



Answer



In any ring, finite or infinite, we have two operations: $+$ and $\cdot$. The idea of a ring extension is this: let $R$ be a ring and $x$ be some element that we want to add to $R$ (maybe $R\subset S$ and $x\in S$, or $x$ is some formal element).




We need $R[x]$ to be closed under addition and multiplication.




This means that any element that can be formed by manipulating elements of the set $R\cup\{x\}$ by $+$ and $\cdot$ must be an element of $R[x]$.





Polynomials are a general way of expressing such manipulations.




An arbitrary polynomial in $x$ is a completely general manipulation of $x$ using only the operations valid in a ring. Moreover, any element of a ring can be expressed as a polynomial in terms of the generators of the ring.



Let's see how this works: start with some element $a\in R$. We can add or multiply by any other element of $R$, but this just gives us some $a'\in R$. Or we can multiply by $x$ any number of times to get $a'x^n$ for some $n$. And given different elements of this form, we can add them together to get a polynomial.



In many rings, because the elements $x$ satisfy non-trivial relations (e.g. in $\mathbb C=\mathbb R[i]$, $i^2+1=0$), there are neater ways to express elements, and we can avoid polynomials, even though they lurk in the background. In finite fields, polynomials happen to be the easiest and most intuitive way to express what's going on.
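
A concrete illustration: build $\mathbb F_4=\mathbb Z_2[x]/(x^2+x+1)$ and write $\omega$ for the class of $x$. The elements are the polynomials $\{0,\,1,\,\omega,\,\omega+1\}$, added coefficientwise mod $2$ and multiplied using the relation $\omega^2=\omega+1$. For example,
$$\omega\cdot(\omega+1)=\omega^2+\omega=(\omega+1)+\omega=1,$$
so $\omega$ and $\omega+1$ are inverses of each other and every nonzero element is a unit; the polynomial representation makes the field structure completely explicit.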


Friday 2 November 2018

calculus - Proof that $\int_1^x \frac{1}{t}\, dt$ is $\ln(x)$




A logarithm of base $b$ for $x$ is defined as the number $u$ such that $b^u=x$. Thus, the logarithm with base $e$ gives us a $u$ such that $e^u=x$.



In the presentations that I have come across, the author starts with the fundamental property $f(xy) = f(x)+f(y)$ and goes on to construct the natural logarithm as $\ln(x) = \int_1^x \frac{1}{t} dt$.



It would be surprising if these two definitions ended up the same, as is the case. How do we know that they are? The best that I can think of is that they share the property $f(xy) = f(x)+f(y)$ and coincide at certain obvious values (e.g. $x=1$). This seems weak. Is there a proof?


Answer



If you want the logarithm to be expressed as a function $f$ then the most important properties have to hold, which are



$$\log(xy) =\log(x)+\log(y)$$




$$\log(x^a) =a\log(x)$$



$$f(1)=0$$



And... suppose $f$ admits a derivative. Then fixing $y$ and differentiating the first equation gives:



$$yf'(xy) =f'(x)$$



Putting $x=1$ gives




$$f'(y) =\dfrac{f'(1)}{y}$$ for each $y\neq0$



From this equation we see that the derivative is monotonic on each interval not containing the origin. Moreover, $f'$ is continuous on $(c,x)$ with $c>0$, so we can apply the second $\mathcal{FTC}$:



$$f(x) - f(c) = \int_c^x f'(t) dt =f'(1) \int_c^x\frac{1}{t}dt$$



If $x>0$ this equation is valid for each positive $c$, so choosing $c=1$ gives:



$$f(x) = f'(1) \int_1^x \frac{dt}{t}$$




You can readily check from this equation that the previous properties are met. Moreover, you can check that the logarithm is the unique function satisfying the above requirements together with $f'(1)=1$, which gives you the desired definition:



$$\log x = \int_1^x \frac{dt}{t}$$



What I gave you is essentially a sketch from Apostol's Calculus, pages 278-281, 2nd ed.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...