Tuesday 31 July 2018

calculus - Calculate the limit: $\lim_{x\to\infty} \sqrt[x]{3^x+7^x}$



Calculate the limit: $$\lim_{x\to\infty} \sqrt[x]{3^x+7^x}$$



I'm pretty much clueless on how to approach this. I've tried using the identity $c^x = e^{x \cdot \ln(c)}$, but that led me nowhere. I've also tried replacing $x$ with $t=\frac{1}{x}$, so that I would end up with $\lim_{t\to 0^+} (3^{1/t} + 7^{1/t})^{t}$, but I've reached yet again a dead end.




Any suggestions or even hints on what I should do next?


Answer



Note that



$$\sqrt[x]{3^x+7^x}=7\sqrt[x]{1+(3/7)^x}=7\cdot e^{\frac{\log\left(1+(3/7)^x\right)}{x}}\to7$$
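To spell out the last step (a small addition): for $x\ge 1$ we have $0\le (3/7)^x\le 3/7<1$, so
$$0\le\frac{\log\left(1+(3/7)^x\right)}{x}\le\frac{\log 2}{x}\to 0,$$
and the exponential factor tends to $e^0=1$.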


multivariable calculus - Why does $\left(\int_{-\infty}^{\infty}e^{-t^2} dt \right)^2= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2 + y^2)}dx\,dy$?


Why does $$\left(\int_{-\infty}^{\infty}e^{-t^2}dt\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2 + y^2)}dx\,dy ?$$





This came up while studying Fourier analysis. What's the underlying theorem?
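In case a pointer helps (this note is not part of the original post): the key fact is the Fubini–Tonelli theorem. The square of the integral is a product of two independent copies, and for non-negative integrands the product of integrals equals the double integral:
$$\left(\int_{-\infty}^{\infty}e^{-t^2}\,dt\right)^2=\int_{-\infty}^{\infty}e^{-x^2}\,dx\int_{-\infty}^{\infty}e^{-y^2}\,dy=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-x^2}e^{-y^2}\,dx\,dy,$$
and $e^{-x^2}e^{-y^2}=e^{-(x^2+y^2)}$.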

Monday 30 July 2018

calculus - how to evaluate integral: $\int_{0}^{10} x^2 e^{-x^2/2}\, dx$

I need to evaluate this integral $$\int_{0}^{10} x^2 e^{-x^2/2}\, dx$$ but I'm not sure how. The problem also states that $$\int_{0}^{\infty}x^2 e^{-x^{\alpha}/\beta}\, dx=\frac{\beta^{3/\alpha}\,\Gamma\!\left(\tfrac{3}{\alpha}\right)}{\alpha},$$ but I don't know what that means. Can somebody point me in the right direction?

real analysis - Prove: if $\lim\limits_{x \rightarrow x_0} f(x) = L >0$, then $\lim\limits_{x \rightarrow x_0} \sqrt{f(x)}= \sqrt{L}$




Prove: if $\lim\limits_{x \rightarrow x_0} f(x) = L >0$, then $\lim\limits_{x \rightarrow x_0} \sqrt{f(x)}= \sqrt{L}$ (using the original definition of limit)





We assume that $f$ is defined in some deleted neighborhood of $x_0$. For every $\epsilon>0$, there is a $\delta>0$ such that

$$|f(x)-L|<\epsilon$$
whenever
$$ 0<|x-x_0|<\delta.$$



As $L>0$,



$$|f(x)-L|
=| \sqrt{f} \sqrt{f} - \sqrt{L} \sqrt{L}|

= |\sqrt{f} - \sqrt{L}| \cdot |\sqrt{f} + \sqrt{L}|
<\epsilon$$



It follows that for the same $\delta>0$, the following statement is true:
$$|\sqrt{f} - \sqrt{L}| < \frac{\epsilon}{ |\sqrt{f} + \sqrt{L}|} < \epsilon$$






It shows that if $\sqrt{f}$ is defined on some deleted neighborhood of $x_0$, then for every $\epsilon>0$ there is a $\delta>0$ such that

$$|\sqrt{f}-\sqrt{L}|<\epsilon$$
whenever
$$ 0<|x-x_0|<\delta.$$



Am I having the right reasoning? Can it be improved?



Any input is much appreciated.


Answer




${\epsilon}/{|\sqrt{f}+\sqrt{L}|}$ does not necessarily have to be smaller than $\epsilon$, because $|\sqrt{f}+\sqrt{L}|$ could be less than $1$. Instead, look for a bound for $1/{|\sqrt{f}+\sqrt{L}|}$:



Since $L>0$, we have $L/2>0$, so there is (by definition of the limit of $f$ at $x_0$) some $\delta_1>0$ such that if $0<|x-x_0|<\delta_1$, we must have $$|f(x)-L|<L/2 \implies -L/2<f(x)-L<L/2.$$ So we have $f(x)>L/2>0$ for all $x$ satisfying $0<|x-x_0|<\delta_1$. Then
$|\sqrt{f}+\sqrt{L}|>\sqrt{L/2}+\sqrt{L}$. Denote $\sqrt{L/2}+\sqrt{L}$ by $M$. Hence $1/{|\sqrt{f}+\sqrt{L}|}<1/M$ for all $0<|x-x_0|<\delta_1$.



Now we can show the desired limit:



Let $\epsilon>0$, then there would be some $\delta_2>0$ such that if $0<|x-x_0|<\delta_2$, we must have$$|f(x)-L|<\epsilon$$

Take $\delta=\min(\delta_1,\delta_2)$, then if $x$ is any number satisfying $0<|x-x_0|<\delta$, we must have$$|\sqrt{f}-\sqrt{L}|<\epsilon/|\sqrt{f}+\sqrt{L}|<\epsilon/M$$
Since $M$ is fixed and $\epsilon$ is arbitrary, $\lim_{x\to x_0}\sqrt{f}=\sqrt{L}$.


calculus - Consider the increasing, concave function $x^{0.5}$ on $[0, 1]$.

Consider the increasing, concave function:
$$ g(x) = \sqrt x, x ∈ [0, 1]. $$



Can you state a continuous function:
$$ f(x), x ∈ [0, 1] $$




such that $f(0) = 0, f(x)$ is twice continuously differentiable on $(0, 1]$ and:



$$ 0 < f'< g', f'' > |g''| $$



for all $x ∈ (0,1]$ ?



So basically I want an increasing function $f(x)$ which has a lower slope than $g(x)$ everywhere but is more convex than $g(x)$ is concave everywhere.

calculus - Prove that $E(X) = \int_{0}^{\infty} P(X>x)\,dx = \int_{0}^{\infty} (1-F_X(x))\,dx$.




Let $X$ be a continuous non-negative random variable (i.e. $R_x$ has only non-negative values). Prove that $$E(X) = \int_{0}^{\infty} P(X>x)\,dx = \int_{0}^{\infty} (1-F_X(x))\,dx$$ where $F_X(x)$ is the CDF for $X$. Using this result, find $E(X)$ for an exponential ($\lambda$) random variable.




I know that by definition, $F_X(x) = P(X \leq x)$ and so $1 - F_X(x) = P(X>x)$



The solution is:
$$\int_{0}^{\infty} \int_{x}^{\infty} f(y)\,dy\, dx
= \int_{0}^{\infty} \int_{0}^{y} f(y)\,dx\, dy
= \int_{0}^{\infty} yf(y)\, dy.$$



I'm really confused as to where the double integral came from. I'm also rusty on multivariate calc, so I'm confused about the swapping of $x$ and $\infty$ to $0$ and $y$.



Any help would be greatly appreciated!


Answer



Observe that for a continuous random variable (well, absolutely continuous, to be rigorous):



$$\mathsf P(X> x) = \int_x^\infty f_X(y)\operatorname d y$$




Then taking the definite integral (if we can):



$$\int_0^\infty \mathsf P(X> x)\operatorname d x = \int_0^\infty \int_x^\infty f_X(y)\operatorname d y\operatorname d x$$



To swap the order of integration we use Tonelli's Theorem, since a probability density is strictly non-negative.



Observe that we are integrating over the domain where $0< x< \infty$ and $x< y< \infty$, which is to say $0<x<y<\infty$:

$$\begin{align}\int_0^\infty \mathsf P(X> x)\operatorname d x = & ~ \iint_{0< x< y< \infty} f_X(y)\operatorname d (x,y)

\\[1ex] = & ~ \int_0^\infty \int_0^y f_X(y)\operatorname d x\operatorname d y\end{align}$$



Then since $\int_0^y f_X(y)\operatorname d x = f_X(y) \int_0^y 1\operatorname d x = y~f_X(y)$ we have:



$$\begin{align}\int_0^\infty \mathsf P(X> x)\operatorname d x = & ~ \int_0^\infty y ~ f_X(y)\operatorname d y \\[1ex] = & ~ \mathsf E(X \mid X\geq 0)~\mathsf P(X\geq 0) \\[1ex] = & ~ \mathsf E(X) & \textsf{when $X$ is non-negative} \end{align}$$



$\mathcal {QED}$
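As for the last part of the question (a sketch, since the answer above stops at the general identity): for $X\sim\text{Exponential}(\lambda)$ we have $\mathsf P(X>x)=e^{-\lambda x}$, so
$$\mathsf E(X)=\int_0^\infty e^{-\lambda x}\,\mathrm dx=\frac1\lambda.$$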


real analysis - Is there a function which is "midpoint linear" but not linear?




Let a function $f:\mathbb{R}\rightarrow\mathbb{R}$ be midpoint linear if for all $x,y\in\mathbb{R}$, $f(\frac{x+y}{2})=\frac{f(x)+f(y)}{2}$. If a midpoint linear function $f$ is continuous, then I believe you can show through a density argument that $f$ is also linear. However, if we drop the assumption of continuity, I wonder if we can construct (or at least describe) a counterexample that is midpoint linear function but is not in fact linear. Because midpoint convexity does not imply convexity without the additional assumption of continuity, I believe that such a counterexample should exist, and that it will likely be through a Vitali set sort of argument that involves taking a basis for $\mathbb{R}$ over $\mathbb{Q}$. I am not sure of the details though.


Answer



I think you're exactly right: take a Hamel basis for $\Bbb R$ over $\Bbb Q$, and then you can define $f$ arbitrarily on each basis element and extend it linearly to all of $\Bbb R$.



For example, if $\pi$ is a member of your Hamel basis, then every real number $x$ can be written uniquely as $x = r\pi + {}$(a rational linear combination of finitely many other basis elements), where $r$ is rational (possibly $0$). Then you can define $f(x) = r$. This function is midpoint-linear (indeed, rational-linear) but not linear: it takes the value $0$ on a whole lot of real numbers, including some between $\pi$ and $2\pi$ for instance.


limits - Evaluating $\lim\limits_{x\to\infty}\frac{\tan x}{x}$



I need to find $$\lim\limits_{x\mathop\to\infty}\frac{\tan x}{x}$$

For some reason Mathematica just returns my input without evaluating it.



For what it's worth, $\dfrac{\tan(10^{100})}{10^{100}}\approx -4\times10^{-101}$, so the limit is probably $0$. (...)



I'm guessing this has been asked before but I can't find it.


Answer



The limit does not exist: Since the tangent function has poles at every point of the form $\left(n + \frac 1 2\right) \pi$, the quantity



$$\frac{\tan x}{x}$$




is unbounded on every interval of length greater than $\pi$.
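To make this concrete (a small addition to the answer): at $x_n=n\pi$ the ratio is $0$, while at $y_n=\left(n+\frac12\right)\pi-\frac1{n^2}$ we get $\tan(y_n)=\cot(1/n^2)\sim n^2$, so $\frac{\tan(y_n)}{y_n}\to\infty$. Hence the ratio has no limit, finite or infinite.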


Sunday 29 July 2018

calculus - Solve integral without partial fractions or integration by parts



I've been trying this integral



$$\int \frac{x^2+x}{(e^x+x+1)^2}dx$$




For quite some time now but I am stuck on it.
The things I tried include factoring the numerator as $x(x+1)$ and expanding the denominator as
$$\big( e^x +(x+1) \big)^2,$$
but I'm unable to solve it. I found a solution online which used integration by parts; however, according to my teacher, this question can supposedly be solved by substitution and does not involve partial fractions either.


Answer



The following development may seem unmotivated, but it does work. Let
$$ v := e^x, \;\; w := 1 + x + v, \;\; w' = 1 + v = w - x. $$
We are trying to integrate
$$ u := \frac{x^2+x}{(e^x+x+1)^2} $$ Notice the equalities
$$ u = \frac{x (1 + x)}{w^2} = \frac{x (w - v)}{w^2} = \frac{x}{w} - \frac{x v}{w^2} =

1 - \frac{w - x}w - \frac{x v}{w^2} = 1 - \frac{w'}w - \frac{ x v}{w^2}$$

We now look for some unknown $\,t\,$ such that
$$ y := \frac{t}{w} \;\; \text{ and } \;\; y' = \frac{t'\, w - t\, w'}{w^2} =
\frac{t'\, w - t\, (w-x)}{w^2} = \frac{(t'-t)w + t x}{w^2} = -\frac {x v}{w^2}. $$

To solve this last equation, notice that
$$ (t' - t)(1 + x + v) + t x = - x v $$ solved by algebra
implies that $\,t = 1+x\,$ and $\, t' = 1.\,$ Now
$$ \int u\,dx = C + x - \ln(w) + \frac{1+x}{w}. $$
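As a sanity check (not in the original answer), differentiating the result recovers $u$: since $w-(1+x)w' = w-(1+x)(w-x) = x(1+x-w) = -xv$, we get
$$\frac{d}{dx}\left[x-\ln(w)+\frac{1+x}{w}\right] = 1-\frac{w'}{w}+\frac{w-(1+x)w'}{w^2} = 1-\frac{w'}{w}-\frac{xv}{w^2} = u.$$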



In a situation like this, it helps to have lots of practice with integrals ranging from simple to more complicated ones. It is also very helpful if you

already know what the answer is using Computer Algebra Systems and thus you can plot a path towards that goal. You specified




without partial fractions or integration by parts




but these kinds of tools are
always in the back of the mind. They help to guide the thought process but they
do not need to be written down explicitly. You may be able to detect hints of
partial fractions and integration by parts even if they are not made explicit.



combinatorics - Inductive Proof for Vandermonde's Identity?



I am reading up on Vandermonde's Identity, and so far I have found proofs for the identity using combinatorics, sets, and other methods. However, I am trying to find a proof that utilizes mathematical induction. Does anyone know of such a proof?



For those who don't know Vandermonde's Identity, here it is:




For every $m \ge 0$, and every $0 \le r \le m$, if $r \le n$, then



$$ \binom{m+n}r = \sum_{k=0}^r \binom mk \binom n{r-k} $$


Answer



Using the recursion formula for binomial coefficients, we have the following for the induction step:
\begin{align*}
\binom{m + (n+1)}r &= \binom{m+n}r + \binom{m+n}{r-1}\\
&= \sum_{k=0}^r \binom mk\binom n{r-k} + \sum_{k=0}^{r-1} \binom mk\binom{n}{r-1-k}\\
&= \binom mr + \sum_{k=0}^{r-1} \binom mk\biggl(\binom n{r-k} + \binom n{r-1-k}\biggr)\\
&= \binom mr\binom{n+1}0 + \sum_{k=0}^{r-1} \binom mk\binom{n+1}{r-k}\\

&= \sum_{k=0}^r \binom mk \binom{n+1}{r-k}
\end{align*}
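A small note to complete the induction: the base case $n=0$ holds because $\binom{0}{r-k}=0$ unless $k=r$, so
$$\sum_{k=0}^r \binom mk \binom 0{r-k} = \binom mr = \binom{m+0}r.$$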


complex numbers - If I pick $-1 = \sqrt{1}$, then why $\sqrt{zw}= \sqrt{z}\sqrt{w}$ for only $z, w \le 0$?

This Reddit comment expatiates on why the third equality (colored in red) is the one that's wrong in $\color{limegreen}{1 = \sqrt{1}} = \sqrt{(-1)(-1)} \color{red}{=} \sqrt{-1} \sqrt{-1} = i^2 = -1$.




Can someone kindly explain the two sentences colored in red beneath?






Caution: To all those criticizing me for the apparently unnecessarily long and complicated response...



The answer to the question is not as simple as "there are two square roots for 1" or "



$\sqrt{zw}= \sqrt{z}\sqrt{w} \tag{@}$




holds only for positive z and w", although the second explanation is getting closer. It's true that $(@)$ doesn't always hold, but why? The OP's false proof deals with square roots of negative numbers, so an explanation will involve complex numbers.



The subtleties of the problem become even more apparent when you realize the statement "$(@)$ holds only for positive z and w" is actually WRONG! For instance, choose the branch of the square root such that $\sqrt{1} = -1$. Then $\sqrt{(1)(1)} = \sqrt{1} = -1$, but $\sqrt{1}\sqrt{1} = (-1)(-1) = 1.$ They are not equal!



The only way to reconcile all these statements is to understand the elements of the false proof properly, in terms of complex numbers.






Shorter answer with fewer equations:




Every complex number has two square roots. If we want to make $f(z) = \sqrt{z}$ a function (i.e., single-valued), then for each complex number z we must choose one of the two possible square roots. If we also want the function to be continuous (in fact, it turns out to be analytic), then we must choose the square root in the same way for each complex number z.



For instance, if we were dealing just with positive real numbers, then we could say that √(x) is always the positive square root, or we could say √(x) is always the negative square root. But we must choose the same type of root for each x.



For complex numbers, the two square roots are not as simple as "the positive one" and "the negative one", but there is a way to make clear what the two choices are. (The choices are also called branches of the square root function, a term that is relevant in more advanced treatments, involving Riemann surfaces and branch cuts.) It turns out that even if we choose the same type of square root for each complex number, $(@)$ still isn't true for all complex numbers z and w. In general, a particular branch will make $(@)$ true only for a strict subset of complex numbers.



Most people are familiar with the usual square-root function on the non-negative real numbers which always chooses the positive root. (This is also called the principal root.) It turns out that choosing this branch of the square root makes $(@)$ true for all non-negative real numbers, but not for negative real numbers. This is why OP's false proof is tricky for many people. We are already used to a square root function which satisfies $(@)$, so it is difficult for many people to pinpoint which exact step in the false proof is wrong.



Like it or not, $(@)$ just isn't true for all complex numbers, no matter how you choose the square root.




$\color{red}{\text{Now if you happen to choose the branch of the square root such that $\sqrt{1} = -1$ (let's call it the }}\\\color{red}{\text{non-primary branch), then $(@)$ holds for only non-positive real numbers.}}$ The first equality of the false proof shows that the primary branch has been chosen [I colored this in green]. $\color{red}{\text{So then the third equality is wrong because $(@)$ doesn't hold for this branch unless z and w are }}\\\color{red}{\text{non-negative real numbers.}}$



If the false proof had chosen the non-primary branch in the first equality, then the "proof" would go through, but you would just end up getting -1 = -1 or 1 = 1 anyway, which isn't interesting.






Longer answer with more equations. (UPDATED 3:50pm EST 9-9-15 to make the presentation clearer.):



[I omitted.]

complex analysis - How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?



Could you provide a proof of Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?


Answer



Assuming you mean $e^{ix}=\cos x+i\sin x$, one way is to use the MacLaurin series for sine and cosine, which are known to converge for all real $x$ in a first-year calculus context, and the MacLaurin series for $e^z$, trusting that it converges for pure-imaginary $z$ since this result requires complex analysis.



The MacLaurin series:
\begin{align}
\sin x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}x^{2n+1}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots

\\\\
\cos x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots
\\\\
e^z&=\sum_{n=0}^{\infty}\frac{z^n}{n!}=1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots
\end{align}



Substitute $z=ix$ in the last series:
\begin{align}
e^{ix}&=\sum_{n=0}^{\infty}\frac{(ix)^n}{n!}=1+ix+\frac{(ix)^2}{2!}+\frac{(ix)^3}{3!}+\cdots
\\\\

&=1+ix-\frac{x^2}{2!}-i\frac{x^3}{3!}+\frac{x^4}{4!}+i\frac{x^5}{5!}-\cdots
\\\\
&=1-\frac{x^2}{2!}+\frac{x^4}{4!}+\cdots +i\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots\right)
\\\\
&=\cos x+i\sin x
\end{align}


Saturday 28 July 2018

elementary set theory - finite and infinite subsets of positive integers

Prove that the set consisting of all subsets (both finite and infinite) of the set $P$ of positive integers is uncountable, using the following approach. First, $P$ is clearly countable, so we can list all the integers in $P$ and map each element to the subscript of its location in this list. Write down the first 10 elements of this list. Now suppose we can list all subsets of $P$ in another list. Write down an example of the first 10 elements of this list. Now neatly complete the argument to prove uncountability using both lists.

sequences and series - Solve the arithmetic progression given the sum of first 4 elements, and the sum of the first and 5th element



I'm trying to solve an arithmetic progression (find the first few elements: $a_1, a_2, a_3, \dots$).



I'm given $S_4$ (the sum of the first 4 elements of the sequence) $= 14$, and that the sum of the first element $a_1$ and twice the 5th element, $2a_5$, is $0$.



Simply put: $S_4 = 14$, $a_1 + 2a_5=0$.



I tried using the formula (plugging in the values) for the sum of the first $n$ terms of an arithmetic sequence, but I can't solve it since I have neither $a_1$ nor $d$.




What's the correct way of solving this?


Answer



$$S_4 = 14 \implies 2a + 3d = 7 \quad\quad \text{(1.) } $$



$$a+ 2a_5 = 0 \implies a + 2a + 8d = 0\implies a = -\frac{8d}{3} \quad\quad \text{(2.) }$$



Substituting $\text{(2.) }$ in $\text{(1.) }$



$$-\frac{16d}{3} + 3d = 7 \implies -\frac {7d}3 = 7 \implies d =-3 \quad\quad\text{(3.) }$$




From $\text{(2.) }$ and $\text{(3.) }$
$$a = -\frac83 \cdot (-3) \implies a=8$$



Hence the A.P. is $\,\,8,\,5,\,2,\,-1,\,-4,\,\dots$
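As a quick check (my addition): $S_4 = 8+5+2-1 = 14$ and $a_1+2a_5 = 8+2\cdot(-4) = 0$, as required.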


real analysis - Evaluate $\lim_{n\to\infty}\frac{x_n}{\sqrt n}$




Question: $x_1>0$, $x_{n+1}=x_n+\dfrac1{x_n}$, $n\in\Bbb N$. Evaluate
$$\lim_{n\to\infty}\frac{x_n}{\sqrt n}.$$





What I know now: $\dfrac1{x_n}\le\dfrac12$ when $n\ge2$;
$\{x_n\}$ is monotonically increasing; and $x_n\ge 2$ when $n\ge 2$.



I have tried to use the Stolz theorem, and I found I could not use the Squeeze theorem.



Could you please give some instructions? Thank you!


Answer



We have

$$x_{n+1}^2=\left(x_n+\frac1{x_n}\right)^2=x_n^2+\frac1{x_n^2}+2\implies x_{n+1}^2-x_n^2=\frac1{x_n^2}+2.$$



Obviously, $x_n$ is increasing and $x_n\to\infty$ as $n\to\infty$. Apply the Stolz theorem,
\begin{align*}
\left(\lim_{n\to\infty}\frac{x_n}{\sqrt n}\right)^2&=\lim_{n\to\infty}\frac{x_n^2}{n}\\
(\text{Stolz})&=\lim_{n\to\infty}\frac{x_n^2-x_{n-1}^2}{n-(n-1)}\\
&=\lim_{n\to\infty}\left(\frac1{x_{n-1}^2}+2\right)=0+2=2.
\end{align*}

$$\therefore \lim_{n\to\infty}\frac{x_n}{\sqrt n}=\sqrt 2.$$


complex analysis - How to evaluate $\int_{-\infty}^\infty {e^{ax} \over 1 +e^x } \; dx$











Given that $0 < a < 1$ how to evaluate by the method of residues
$$ \int_{-\infty}^\infty {e^{ax} \over 1 +e^x } \; dx $$


Answer



Substituting $u=e^x$, so that $dx = \dfrac{du}{u}$, the integral becomes



$$\int_0^{\infty} \dfrac{u^{a-1}}{1+u}du$$




How might you solve this? You've tagged the question as homework, so I'll leave it here for now, but if you're still stuck, post in the comments.
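For reference (not part of the original answer): this standard integral can be evaluated with a keyhole contour around the branch cut of $u^{a-1}$, picking up the simple pole at $u=-1$, giving
$$\int_0^{\infty} \dfrac{u^{a-1}}{1+u}\,du=\frac{\pi}{\sin(\pi a)}, \qquad 0<a<1.$$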


calculus - Is There a General Form to Find the Length of a Line Given by a Function?




Consider a function, $f(x)$, and its plot on a graph ($f(x)=y$). This plot is usually represented by a line, e.g. $f(x)=x$ is represented by a straight line. Now consider a section of this line, e.g. $f(x)=x$ for $0 \leq x \leq 10$. Measuring the length of this line is trivial by using the Pythagorean Theorem or the Distance Formula, etc.



Now consider a graph of a semicircle. In this instance, we can also find the length of the line: we use the radius to find half the circumference.



Now consider a different graph, e.g. $f(x)=x^2$. I can think of no direct way to find the length of this line for some interval on $x$. Is there a way?



I can propose a way, which, to me, seems intuitively related to Riemann Integration. In Riemann Integration, we try to approximate the area under the graph for some interval by fitting arbitrarily small rectangles between the curve and the x-axis. Similarly, one ought to be able to find the length of a line on some interval
by fitting arbitrarily many right-angled triangles with equal width on the curve: this way, we can partition a curve into arbitrarily many lines that are approximately straight, which we can measure by finding the hypotenuses of our triangles. Then to find the length of the curve, we just sum those hypotenuses. Diagrams for illustration below:




[figure omitted]



[figure omitted]



As stated: in each image, the approximated length of the line is the sum of the lengths of the green straight lines. Clearly, as we let the size of the triangle base $a$ become arbitrarily small, our approximation of the length of the curve will become more accurate.



On a final note, obviously this method has something in common with how we 'intuitively' take a derivative.



Any Thoughts?


Answer




Your intuition is on point. I do not know your math background, but it sounds like you are at least familiar with basic calculus. The name for what you describe is arc length. We can pretty much approximate the arc length of any function, and obtain the exact value for quite a few types of functions. There are some pathological cases for which we cannot find exact values once we get into more advanced material. For example, in general it is very difficult to obtain the arc length of a Fourier series restricted to a suitable domain.



Here is an edited version of the explanation of arc length for single-variable functions. This is directly from James Stewart's Calculus book, page 540, $7^{th}$ edition.
The picture here makes your intuition a little more precise; I hope this helps.



[figure omitted]



To answer your specific question about $f(x)=x^2.$



$$L=\int_ a^b \sqrt{1+(2x)^2}dx$$

$$=\frac{1}{2} x \sqrt{1+4 x^2}+\frac{1}{4} \text{ArcSinh}(2 x)$$
$$ = \frac{1}{2} x \sqrt{1+4 x^2}+\frac{1}{4} \log \left(\left|2x+\sqrt{(2x)^2+1} \right|\right)$$



So if you want to know the length of "the line" as you refer to it, just choose the interval $[a,b]$ for which you want to know it and plug in the endpoint values accordingly. "Log" here has base $e$.
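For a concrete instance (my addition): on $[0,1]$ this gives
$$L=\frac{\sqrt{5}}{2}+\frac14\log\left(2+\sqrt{5}\right)\approx 1.4789.$$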


Friday 27 July 2018

matrices - Determinant matrix $3 times 2$

I need to verify the linear dependence or independence of a $3 \times 2$ complex matrix; how do I compute the determinant? I would use the row reduced echelon form, but I have no idea how to do that with complex numbers. Can I divide by a complex number when row reducing, or must my row reduction operations be limited to real numbers?

Thursday 26 July 2018

Complex sum of sine and cosine functions


By using the complex representations of sine and cosine, show that $$\sum_{m=0}^n\sin m\theta =\frac{\sin\frac{n}{2}\theta\sin\frac{n+1}{2}\theta}{\sin\frac{1}{2}\theta}$$





So I am not too sure how to go about this proof. I tried to substitute the complex representation for sine, but I can't see how to evaluate the sum.
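A hint, in case it helps (this is not from the original post): write $\sin m\theta = \operatorname{Im} e^{im\theta}$ and sum the geometric series, factoring out half-angle exponentials:
$$\sum_{m=0}^n e^{im\theta}=\frac{e^{i(n+1)\theta}-1}{e^{i\theta}-1}=e^{in\theta/2}\,\frac{\sin\frac{(n+1)\theta}{2}}{\sin\frac{\theta}{2}}.$$
Taking imaginary parts gives the stated identity.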

elementary number theory - Showing $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$



Given that n is a positive integer show that $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$.



I'm thinking that I should be using the property of gcd that says if $a$ and $b$ are integers then $\gcd(a,b) = \gcd(a+cb,b)$. So I can do things like decide that $\gcd(n^3 + 1, n^2 + 2) = \gcd((n^3+1) - n(n^2+2),n^2+2) = \gcd(1-2n,n^2+2)$, and then using Bézout's theorem I can get $\gcd(1-2n,n^2+2)= r(1-2n) + s(n^2 +2)$, which I can expand to $r(1-2n) + s(n^2 +2) = r - 2rn + sn^2 + 2s$. However, after some time of chasing this path using various substitutions and factorings, I've gotten nowhere.




Can anybody provide a hint as to how I should be looking at this problem?


Answer



As you note, $\gcd(n^3+1,n^2+2) = \gcd(1-2n,n^2+2)$.



Now, continuing in that manner,
$$\begin{align*}
\gcd(1-2n, n^2+2) &= \gcd(2n-1,n^2+2)\\
&= \gcd(2n-1, n^2+2+2n-1)\\
&= \gcd(2n-1,n^2+2n+1)\\

&= \gcd(2n-1,(n+1)^2).
\end{align*}$$



Consider now $\gcd(2n-1,n+1)$. We have:
$$\begin{align*}
\gcd(2n-1,n+1) &= \gcd(n-2,n+1) \\
&= \gcd(n-2,n+1-(n-2))\\
&=\gcd(n-2,3)\\
&= 1\text{ or }3.
\end{align*}$$

Therefore, the gcd of $2n-1$ and $(n+1)^2$ is either $1$, $3$, or $9$. Hence the same is true of the original gcd.
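For what it's worth, all three values occur: $n=1$ gives $\gcd(2,3)=1$, $n=2$ gives $\gcd(9,6)=3$, and $n=5$ gives $\gcd(126,27)=9$.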


divisibility - Help - remainders when number is divided



Please, give me hints, I've no idea ;):





Find greatest number $x$ such that $x<1000$ and $x$ divided by $4$ gives remainder $3$, divided by $5$ gives remainder $4$, and divided by $6$ gives remainder $5$.




I already know that for some natural $a, b, c$, $x=4a+3=5b+4=6c+5$, but what next? Help please.


Answer



You know that $x+1$ is divisible by $4$, $5$, and $6$ so $x+1$ is divisible by $60$. The largest number under $1000$ that is divisible by $60$ is $960$. So $x=959$.
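As a check: $959 = 4\cdot239+3 = 5\cdot191+4 = 6\cdot159+5$.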


Wednesday 25 July 2018

calculus - Bounding $\ln(1+1/x)$ for $x>0$



I am hoping to somewhat rigorously establish the bound
$$
\frac{1}{x+1/2}<\ln(1+1/x)<1/x

$$
for any $x>0$. The upper bound is clear since
$$
1+1/x<e^{1/x}.
$$
The lower bound seems dicier. A Taylor expansion argument wouldn't seem to work near zero, since $1/x$ blows up.



Geometrically at least it seems clear near zero: $\ln(1+1/x)$ is decreasing and convex, $\lim_{x\rightarrow 0^+}\ln(1+1/x)=+\infty$, while the left-hand side stays bounded,
$$
\lim_{x\to 0^+}\frac{1}{x+1/2}=\frac{1}{1/2}=2,
$$
so I can establish the lower bound near the origin. Should I then take over with a Taylor expansion for larger $x$? From the graph, it seems like a pretty fine lower bound.



I still feel like I should be able to make a neater argument using the convexity and decreasing properties of $\ln(1+1/x)$ and any pointers would be appreciated!



edit: Not a duplicate, the term in the denominator on the lhs has a 1/2.


Answer



Hint: Define
$$f_1\colon \mathbf R_{>0} \to \mathbf R, \qquad f_1(t):=\ln \left(1+ \frac{1}{t} \right) - \frac{1}{t}$$
and

$$f_2\colon \mathbf R_{>0} \to \mathbf R, \qquad f_2(t):=\ln \left(1+ \frac{1}{t} \right) - \frac{1}{t+\frac{1}{2}}$$
and use the intermediate value theorem twice.
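One way to carry this out (a sketch using monotonicity in place of the intermediate value theorem, filling in details not in the original hint): both functions tend to $0$ as $t\to\infty$, and
$$f_1'(t)=-\frac{1}{t(t+1)}+\frac{1}{t^2}>0, \qquad f_2'(t)=-\frac{1}{t(t+1)}+\frac{1}{\left(t+\frac12\right)^2}<0,$$
since $t^2<t(t+1)<\left(t+\frac12\right)^2$. So $f_1$ increases to $0$ and $f_2$ decreases to $0$, forcing $f_1<0<f_2$ on $\mathbf R_{>0}$, which is exactly the claimed two-sided bound.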


proof verification - Prove that $1^2+2^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}$ for $n \in \mathbb{N}$.




Problem: Prove that $1^2+2^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}$ for $n \in \mathbb{N}$.



My work: So I think I have to do a proof by induction and I just wanted some help editing my proof.



My attempt:



Let $P(n)=1^2+2^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}$ for $n \in \mathbb{N}$.
Then $$P(1)=1^2=\frac{1(1+1)(2+1)}{6}$$
$$1=\frac{6}{6}.$$

So $P(1)$ is true.



Next suppose that $P(k)=1^2+2^2+\cdots+k^2=\frac{k(k+1)(2k+1)}{6}$ for $k \in \mathbb{N}$. Then adding $(k+1)^2$ to both sides of $P(k)$ we obtain the following:
$$1^2+2^2+\cdots+k^2+(k+1)^2=\frac{k(k+1)(2k+1)}{6}+(k+1)^2$$
$$=\frac{2k^3+3k^2+k+6(k^2+2k+1)}{6}$$
$$=\frac{2k^3+9k^2+13k+6}{6}$$
$$=\frac{(k^2+3k+2)(2k+3)}{6}$$
$$=\frac{(k+1)(k+2)(2k+3)}{6}$$
$$=\frac{(k+1)((k+1)+1)(2(k+1)+1)}{6}$$
$$=P(k+1).$$

Thus $P(k+1)$ is true whenever $P(k)$ is true.
Hence by mathematical induction, $1^2+2^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}$ is true for all $n \in \mathbb{N}$.


Answer



I am going to provide what I think is a nice way of writing up a proof, both in terms of accuracy and in terms of communication. You be the judge(s).






Claim: For $n\geq 1$, let $S(n)$ be the statement
$$
S(n) : 1^2+2^2+3^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}.

$$



Base step $(n=1)$: The statement $S(1)$ says $1^2=1(2)(3)/6$ which is true.



Inductive step $(S(k)\to S(k+1))$: Fix some $k\geq 1$ and suppose that
$$
S(k) : 1^2+2^2+3^2+\cdots+k^2=\frac{k(k+1)(2k+1)}{6}
$$
holds. To be shown is that
$$

S(k+1) : 1^2+2^2+3^2+\cdots+k^2+(k+1)^2=\frac{(k+1)(k+2)(2(k+1)+1)}{6}
$$
follows. Starting with the left-hand side of $S(k+1)$,
\begin{align}
\text{LHS} &= 1^2+2^2+3^2+\cdots+k^2+(k+1)^2\tag{definition}\\[1em]
&= \frac{k(k+1)(2k+1)}{6}+(k+1)^2\tag{by $S(k)$}\\[1em]
&= (k+1)\left[\frac{k(2k+1)}{6}+(k+1)\right]\\[1em]
&= (k+1)\frac{k(2k+1)+6(k+1)}{6}\\[1em]
&= (k+1)\frac{2k^2+k+6k+6}{6}\\[1em]
&= (k+1)\frac{2k^2+7k+6}{6}\\[1em]

&= (k+1)\frac{(k+2)(2k+3)}{6}\\[1em]
&= \frac{(k+1)(k+2)(2(k+1)+1)}{6}\\[1em]
&= \text{RHS},
\end{align}
the right-hand side of $S(k+1)$ follows. This completes the inductive step.



Thus, by mathematical induction, for every $n\geq 1, S(n)$ is true. $\Box$


limits - Evaluate $\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}$.











Evaluate

$$\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}.$$



Can anyone help me with this? I have no idea how to start with. Thank you.


Answer



Let's work it out elementarily by wisely applying the Cauchy–d'Alembert criterion:



$$\lim_{n\to\infty} \frac{n!^{\frac{1}{n}}}{n}=\lim_{n\to\infty}\left(\frac{n!}{n^n}\right)^{\frac{1}{n}} = \lim_{n\to\infty} \frac{(n+1)!}{(n+1)^{(n+1)}}\cdot \frac{n^{n}}{n!} = \lim_{n\to\infty} \frac{n^{n}}{(n+1)^{n}} =\lim_{n\to\infty} \frac{1}{\left(1+\frac{1}{n}\right)^{n}}=\frac{1}{e}. $$



Also notice that by applying the Stolz–Cesàro theorem you get the celebrated limit:




$$\lim_{n\to\infty} (n+1)!^{\frac{1}{n+1}} - (n)!^{\frac{1}{n}} = \frac{1}{e}.$$



The sequence $L_{n} = (n+1)!^{\frac{1}{n+1}} - (n)!^{\frac{1}{n}}$ is called the Lalescu sequence, after the great Romanian mathematician Traian Lalescu.



Q.E.D.


elementary number theory - How to sum this infinite series



How to sum this series:



$$\frac{1}{1}+\frac{1}{11}+\frac{1}{111}+\frac{1}{1111}+\cdots$$




My attempt:



Multiply and divide the series by $9$



$$9\left(\frac{1}{9}+\frac{1}{99}+\frac{1}{999}+\frac{1}{9999}+\cdots\right)$$



$$9\left(\frac{1}{10-1}+\frac{1}{10^2-1}+\frac{1}{10^3-1}+\frac{1}{10^4-1}+\cdots\right)$$



Now let $a_N$ denote the number of divisors of $N$; after some simplification the series becomes:




$$9\left(1+\sum{\frac{a_N}{10^N}}\right)$$



This is where I am stuck...



PS: Please rectify my mistakes along the way


Answer



Your approach is very nice but, as pointed out by Pranav Arora, for the summation up to term $n$, a CAS leads to $$S_n =\frac{9}{\log (10)}\left(\psi _{\frac{1}{10}}^{(0)}(n+1)-\psi _{\frac{1}{10}}^{(0)}(1)\right)$$ and for the infinite summation, it becomes $$S=\frac{9 \left(\log \left(\frac{10}{9}\right)-\psi
_{\frac{1}{10}}^{(0)}(1)\right)}{\log (10)} \simeq 1.100918190836200736379855,$$ where $\psi_q^{(0)}$ denotes the $q$-digamma function.



Tuesday 24 July 2018

elementary number theory - Proof verification: Prove $\sqrt{n}$ is irrational.

Let $n$ be a positive integer and not a perfect square. Prove $\sqrt{n}$ is irrational.





Consider proving by contradiction. If $\sqrt{n}$ is rational, then there exist two coprime integers $p,q$ such that $$\sqrt{n}=\frac{p}{q},$$ which implies $$p^2=nq^2.$$
Moreover, since $p, q$ are coprime, by Bézout's theorem, there exist two integers $a,b$ such that $$ap+bq=1.$$
Thus

$$p=ap^2+bpq=anq^2+bpq=(anq+bp)q,$$ which implies $$\sqrt{n}=\frac{p}{q}=anq+bp \in \mathbb{N^+},$$ so $n$ is a perfect square, contradicting the assumption.

Monday 23 July 2018

Evaluate $\sum_{n=1}^{\infty} \frac{\sin n}{n}$ using the Fourier series




I am a beginner with Fourier series and I have to evaluate the sum



$$\sum_{n =1}^{\infty}{\sin\left(n\right) \over n}$$



I don't know which function I have to take to evaluate the Fourier series...
Can someone give me a hint?



Thanks in advance!


Answer



$\displaystyle{\sum_{n = 1}^{\infty}{\sin(n) \over n} = \frac12\left(\sum_{n = -\infty}^{\infty}{\sin(n) \over n} - 1\right).}\quad$ See details
over here.



\begin{align}
\sum_{n = -\infty}^{\infty}{\sin(n) \over n}&=
\int_{-\infty}^{\infty}{\sin(x) \over x}\sum_{n = -\infty}^{\infty}e^{2n\pi x\mathrm i}
\,\mathrm dx
=
\int_{-\infty}^{\infty}\frac12\int_{-1}^{1}e^{\mathrm ikx}\,\mathrm dk
\sum_{n = -\infty}^{\infty}e^{-2n\pi x\mathrm i}\,\mathrm dx
\\[3mm]&=
\pi\sum_{n = -\infty}^{\infty}\int_{-1}^{1}\mathrm dk
\int_{-\infty}^{\infty}e^{\mathrm i(k - 2n\pi)x}\,{\mathrm dx \over 2\pi}
=
\pi\sum_{n = -\infty}^{\infty}\int_{-1}^{1}\delta(k - 2n\pi)\,\mathrm dk
\\[3mm]&=
\pi\sum_{n = -\infty}^{\infty}\Theta\left({1 \over 2\pi} - |n|\right)
= \pi\,\Theta\left({1 \over 2\pi}\right) = \pi
\end{align}


Then,
$$\sum_{n = 1}^{\infty}{\sin(n) \over n} = \frac12\,(\pi - 1).
$$


trigonometry - Where is this trig equation derived from?




In the first, top-rated comment on the original post linked below, the author wrote the function $r = \dfrac {\cos(x)} {1-\cos(x)}$. I do not understand how they got to this equation. I get how they solved for $x$ in their first step. Can you please explain how they solved for $r$?



This is the picture of the problem, where we are solving for $r$, given $x = \dfrac {n-2}{2n}$



[figure omitted]



Original Post
Numbers of circles around a circle


Answer



In the right triangle, we can write $\cos x$ as the base divided by the hypotenuse:



$$\cos x=\frac{r}{r+1}$$



$$\frac{1}{\cos x}=1+\frac{1}{r}$$



$$\frac{1}{r}=\frac{1}{\cos x}-1$$
$$\frac{1}{r}=\frac{1-\cos x}{\cos x}$$
$$r=\frac{\cos x}{1-\cos x }$$


integration - What is the simplest technique to evaluate the following definite triple integral?



Consider the following definite triple integral:



$$ \int_0^\pi \int_0^\pi \int_0^\pi \frac{x\sin x \cos^4y \sin^3z}{1 + \cos^2x} ~dx~dy~dz $$



According to Wolfram Alpha, this evaluates to $\frac{\pi^3}{8}$, but I have no idea how to obtain this result. The indefinite integral $$ \int \frac{x \sin x}{1 + \cos^2 x}~dx $$ appears to not be expressible in terms of elementary functions. Thus, I am at a loss as to what sort of techniques might be used to evaluate this integral. For context, this is from a past year's vector calculus preliminary exam at my graduate school, so while I'm sure there are some advanced integration techniques that can be used here, I'm particularly interested in what elementary techniques might be used to evaluate the integral, as I don't think something like, for instance, residue techniques would be considered pre-requisite knowledge for taking this exam.


Answer




First off, note that the integrals w.r.t. $y$ and $z$ are quite trivial to evaluate. Then, consider $x\mapsto\pi-x$, since trig functions are symmetric about $\pi/2$.



$$I=\int_0^\pi\frac{x\sin(x)}{1+\cos^2(x)}~\mathrm dx=\int_0^\pi\frac{(\pi-x)\sin(x)}{1+\cos^2(x)}~\mathrm dx$$



Add these together and apply $\cos(x)\mapsto x$.



$$\begin{align}\frac2\pi I&=\int_0^\pi\frac{\sin(x)}{1+\cos^2(x)}~\mathrm dx\\&=\int_{-1}^1\frac1{1+x^2}~\mathrm dx\\&=\arctan(1)-\arctan(-1)\\&=\frac\pi2\end{align}\\\implies I=\frac{\pi^2}4$$
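To finish (the answer leaves the easy factors implicit): $\int_0^\pi\cos^4(y)\,\mathrm dy=\frac{3\pi}{8}$ and $\int_0^\pi\sin^3(z)\,\mathrm dz=\frac43$, so the triple integral equals
$$\frac{\pi^2}{4}\cdot\frac{3\pi}{8}\cdot\frac43=\frac{\pi^3}{8},$$
matching the Wolfram Alpha value.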


Sunday 22 July 2018

limits - Summation of exponential series




Evaluate the limit:

$$
\lim_{n \to \infty}e^{-n}\sum_{k = 0}^n \frac{n^k}{k!}
$$
It is not as easy as it seems and the answer is definitely not 1.
Please help in solving it.


Answer



Given an event whose frequencies follow the Poisson distribution and which occurs an average of $n$ times per trial, the probability that it occurs $k$ times in a given trial is



$e^{-n} \frac{n^k}{k!}$.



So, the sum in the limit is the probability that the event (which now must have an integer average) occurs no more than the mean number of times. For large $n$, the Poisson distribution is well-approximated by the normal distribution (this can be made into a precise limiting statement). The normal distribution is symmetric about its mean, so the limit of the sum is the probability that a normally distributed random variable is less than the mean of the variable, namely $\frac{1}{2}$.



Saturday 21 July 2018

functions - Is this a correct bijection between $(0,1]$ and $[0,1]$?



I need to give an explicit bijection between $(0, 1]$ and $[0,1]$ and I'm wondering if my bijection/proof is correct. Using the hint that was given, I constructed the following function $f: (0, 1] \to [0,1]$:
$$
x \mapsto \left\{ \begin{array}{ll} 2 - 2^{-i} - 2^{-i-1} - x & \text{if } x \in (1-2^{-i}, 1-2^{-i-1}]\text{ for an } i \in \mathbb{N}_0 \\
1 & \text{if } x = 1 \end{array}\right.
$$



It's easy to see that for every $x \in (0, 1)$, there exists such an $i$.




Now define $\tilde{f}: [0,1] \to (0,1]$ with
$$
x \mapsto \left\{ \begin{array}{ll} 2 - 2^{-i} - 2^{-i-1} - x & \text{if } x \in [1-2^{-i}, 1-2^{-i-1})\text{ for an } i \in \mathbb{N}_0 \\
1 & \text{if } x = 1 \end{array}\right.
$$



I want to prove that $\tilde{f}(f(x)) = f(\tilde{f}(x)) = x$, so it has an inverse and therefore is a bijection. The case $x=1$ is trivial, so assume that $x \in (0,1)$ with $x \in (1-2^{-i}, 1-2^{-i-1}]$ for some $i \in \mathbb{N}_0$. This interval has length $1-2^{-i-1} - (1-2^{-i}) = 2^{-i-1}$, so we can write $x = 1-2^{-i} + \epsilon\cdot 2^{-i-1}$ for some $\epsilon \in (0, 1]$. We now calculate $f(x)$:
\begin{align*}
f(x)

&= 2 - 2^{-i} - 2^{-i-1} - x\\
&= 2 - 2^{-i} - 2^{-i-1} - (1-2^{-i} + \epsilon\cdot 2^{-i-1})\\
&= 1 - 2^{-i-1}(1+\epsilon).
\end{align*}
We conclude that $f(x) \in [1-2^{-i}, 1-2^{-i-1})$. We now use the definition of $\tilde{f}$, so if we calculate $\tilde{f}(f(x))$, we get
\begin{align*}
\tilde{f}(f(x))
&= 2 - 2^{-i} - 2^{-i-1} - f(x) \\
&= 2 - 2^{-i} - 2^{-i-1} - (2-2^{-i} - 2^{-i-1} - x) \\
&= x.

\end{align*}



We conclude that $f$ has an inverse. Using exactly the same reasoning, we get that $f(\tilde{f}(x)) = x$ for all $x \in [0,1]$. Therefore its inverse exists and it has to be a bijection.



I know there are less cumbersome methods of proving this fact, but as of now this is the only thing I can come up with.


Answer



It seems fine to me, I think, although it's a complicated enough construction that I'm not totally convinced of my surety.



If you want an easier method, by the way: let $f: (0, 1] \to [0, 1]$ by the following construction. Order the rationals in $(0, 1]$ as $q_1, q_2, \dots$. Then define $f(x)$ by "if $x$ is irrational, let $f(x) = x$; otherwise, let $f(q_i) = q_{i-1}$, and let $f(q_1) = 0$". We basically select some countable subset, and prepend 0 to it. You can do this with any countable subset: it doesn't have to be, or be a subset of, $\mathbb{Q} \cap (0, 1]$. If you prefer, for instance, you could take $\frac{1}{n}$ as the $q_n$.


elementary number theory - Difficult diophantine equation



Solve for integers:



$4n^4+7n^2+3n+6=m^3.$




Hi, this is a problem from a Bulgarian olympiad which I have no idea how to solve.



I figured out using Wolfram Alpha that $16\cdot m^3-47$ must be a square number.



I would appreciate any solutions. Thank you in advance!


Answer



The idea is going modulo $9$.



Indeed, we will prove that $4n^4 + 7n^2+3n+6$ leaves only remainders $2,5,6$ modulo $9$. None of these are cubes modulo $9$(only $0,1,8$ are), completing the proof that no such integers $n,m$ exist.




For this, we note that if $n \equiv 0 \pmod{3}$ then $4n^4 + 7n^2+3n+6 \equiv 6\pmod{9}$.



If $n \equiv 1 \pmod{3}$ then $4n^4 + 7n^2+3n+6 \equiv 2 \pmod{9}$.



Finally, if $n \equiv - 1 \pmod{3}$ then $4n^4+7n^2+3n+6 \equiv 5 \pmod{9}$.


ordinary differential equations - How to solve this implicit differentiation problem concerning arcsin?

My overarching question is about differentiating when you have these inverse trig functions, but listed below is the specific question I am trying to solve. If you help me with the problem, it'll help me (and others) apply it to similar questions.



Problem: $y = \arcsin(x) - \sqrt{1 - x^2}$. Find $dy/dx$.




Answer Choices: $\dfrac{1}{2\sqrt{1-x^2}}$ or $\dfrac{2}{\sqrt{1-x^2}}$ or $\dfrac{1+x}{\sqrt{1-x^2}}$ or $\dfrac{x^2}{\sqrt{1-x^2}}$ or $\dfrac{1}{\sqrt{1+x}}$



The arcsin(x) is primarily what is getting me stuck. To try to solve the problem I moved the root to the other side by adding it to both sides.




  1. $y + \sqrt{1 - x^2} = \arcsin(x)$



Then I converted the equation into a sin equation...I don't feel like this is correct





  2. $\sin\left(y + \sqrt{1 - x^2}\right) = x$



From here, if I take dy/dx of both sides, it seems utterly confusing and on the wrong track. (I believe I applied chain rule correctly, but I could be wrong)




  3. $\cos\left(y + \sqrt{1 - x^2}\right) \cdot \left[\dfrac{dy}{dx} + \dfrac{1}{2\sqrt{1-x^2}}\cdot(-2x)\right] = 1$




I also examined the square root in the problem carefully because I noticed it had a striking resemblance to another problem I saw earlier in a book:



Differentiate $y = \arcsin(x)$:
1. $\sin(y) = x$
2. $\cos(y)\,\dfrac{dy}{dx} = 1$
3. $\dfrac{dy}{dx} = \dfrac{1}{\cos(y)} = \dfrac{1}{\sqrt{1- \sin^2(y)}} = \dfrac{1}{\sqrt{1-x^2}}$, because of the trig identity $\sin^2(x) + \cos^2(x) = 1$, and because subtracting $\sin^2(y)$ is the same as subtracting $x^2$ by the Step One conversion.



So because I saw the sqrt(1-x^2) in the tougher problem I'm doing right now, I tried to find a way to utilize the technique from the earlier one, but I couldn't. So perhaps that could be the key to solving it.



Thanks in advance for your help, I appreciate it.
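For what it's worth (a sketch, not from the original thread): implicit differentiation can be avoided entirely by differentiating term by term,
$$\frac{d}{dx}\arcsin(x)=\frac{1}{\sqrt{1-x^2}}, \qquad \frac{d}{dx}\left(-\sqrt{1-x^2}\right)=\frac{x}{\sqrt{1-x^2}},$$
so $\dfrac{dy}{dx}=\dfrac{1+x}{\sqrt{1-x^2}}$, the third answer choice.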

Friday 20 July 2018

real analysis - Evaluate $\lim_{n\rightarrow\infty} \int_0^1 f(x^n)\, dx$



Let $f$ be a real, continuous function defined for all $0\leq x\leq 1$ such that $f(0)=1$, $f(1/2)=2$, and $f(1)=3$. Show that



$$\lim_{n\rightarrow\infty} \int_0^1 f(x^n) dx$$




exists and compute the limit.



Attempt:



Since $f$ is real-valued continuous on $0\leq x\leq 1$ and the boundaries $f(0)=1$ and $f(1)=3$, $f$ is bounded on the interval and the integral $\displaystyle\int_0^1 f(x^n)dx$ exists for any positive $n$. Thus, we can interchange the limit and the integral and compose the limit,
\begin{align*}
\lim_{n\rightarrow\infty} \int_0^1 f(x^n)dx&=\int_0^1\lim_{n\rightarrow\infty}f(x^n)dx\\
&=\int_0^1 f\left(\lim_{n\rightarrow\infty} x^n \right)dx\\
&=\int_0^1 f(0)dx=1.

\end{align*}



Questions: So, my first question concerns whether interchanging the limit and the integral is correct. It seems it would be justified by the dominated convergence theorem, where $f(x^n)$ is dominated by the constant $g=\displaystyle\sup_{0\leq x\leq 1} |f(x)|$, and $f_n(x)=f(x^n)$, on $0\leq x\leq 1$, converges pointwise to a function that takes value 1 for $x\neq 1$ and 3 for $x=1$.



My next question is whether the interchange between the limit and composition of the function is allowed. Since the function is continuous and the limit exists at that point, it seems the interchange would be justified.


Answer



Let $f_n(x) = f(x^n)$, and note that the $f_n$ are uniformly bounded (since $f$ is continuous on $[0,1]$)
and $f_n(x) \to f(0) = 1$ for all $x <1$.
Then, by the bounded convergence theorem, $\lim_n \int f_n = \int \lim_n f_n = 1$.


analysis - Where does Jacobi's accessory equation come from?

I'm reading Charles Fox's An Introduction to the Calculus of Variations and in section 2.4 he just suddenly introduces Jacobi's accessory equation and I don't understand where it's coming from.



Jacobi's accessory equation (which oddly doesn't have a Wikipedia page) is $$\left[\frac{\partial^2 F(x,s,s')}{\partial s^2}-\frac d{dx}\frac{\partial^2 F(x,s,s')}{\partial s\partial s'}\right]u-\frac d{dx}\left(\frac{\partial^2 F(x,s,s')}{\partial s'^2}\frac {du}{dx}\right)=0$$ and has something to do with the second variation of $\int_a^b F(x,y,y')dx$ evaluated near a stationary path $y=s(x)$.



Could someone either derive (or give the main idea of the derivation) the equation or suggest a good source that does?



P.S. I checked in Gelfand and Fomin's book which I'm not reading but have laying around. The problem with their derivation is that it's in the middle of their book (as opposed to near the beginning of Fox's) and thus seems to make use of stuff I haven't gotten to yet in Fox's book.







Here's the context in Fox's book:



Note that for brevity the notation $F_{00} := \frac{\partial^2 F}{\partial s^2}$, $F_{01} := \frac{\partial^2 F}{\partial s\partial s'}$, $F_{11} := \frac{\partial^2 F}{\partial s'^2}$ is used in the following.



We just started looking at the second variation and trying to derive the conditions for extremizing the functional subject to weak variation. In the previous section we derived that if $t(a)=t(b)=0$, then $$\int_a^b \left(t^2F_{00} +2tt'F_{01}+t'^2F_{11}\right)dx = \int_a^b\left[t^2F_{00}-t^2\frac d{dx}(F_{01})-t\frac d{dx}(t'F_{11})\right]dx$$



Then this section starts off with:




On solving the [Euler-Lagrange equation], the equation of the extremal $y=s(x)$ which passes through the given points $A$ and $B$ can be determined.




Thus the quantities $F_{00}$, $F_{01}$, $F_{11}$, and $\frac d{dx} F_{01}$ can all be expressed in terms of $x$ and the differential equation $$\left[F_{00}-\frac d{dx}F_{01}\right]u-\frac d{dx}\left(F_{11}\frac {du}{dx}\right)=0 \tag{1}$$



can then be solved for $u$ as a function of $x$. This is an ordinary linear differential equation of the second order and is known as the subsidiary or Jacobi's equation or, more frequently, as the accessory equation.



On taking $x$ to be independent and $t(=t(x))$ the dependent variable in the integral $I_2 [= \int_a^b \left(t^2F_{00} +2tt'F_{01}+t'^2F_{11}\right)dx]$, it is easily seen that $(1)$ is the [Euler-Lagrange equation] for minimizing $I_2$ with $t$ replaced by $u$.




Note that other than that last line he doesn't explain what the function $u$ is.

calculus - Why Does The Taylor Remainder Formula Work?




I've been studying calculus on my own and have come across Taylor series. It was very intuitive until I came across the remainder part of the formula, where things got fuzzy. I understand why the remainder exists, but not the mathematical description. Why is the value being plugged into the derivative of the remainder some number between $x$ and $a$? What is the connection to the mean value theorem? Is the remainder used to bound the function? Lastly, and my most important question: what is the intuition (for me, most likely, a geometrical approach would be great) for how this remainder is derived? I've spent a long time trying to figure it out for myself and looking online, but it seems I'm missing something, for there are virtually no questions being asked about this.



Thanks for your time,
Jackson


Answer



Perhaps not quite the way you are looking for, but:



You can derive Taylor's theorem with the integral form of the remainder by repeated integration by parts:
$$ f(x)-f(a) = \int_a^x f'(t) \, dt = \left[-(x-t)f'(t) \right]_a^x + \int_a^x (x-t) f''(t) \, dt \\
= (x-a)f'(a) + \int_a^x (x-t) f''(t) \, dt, $$

and so on, integrating the $(x-t)$ and differentiating the $f$ each time, to arrive at
$$ R_N = \int_a^x \frac{(x-t)^N}{N!} f^{(N+1)}(t) \, dt. \tag{1} $$
Interpretation for this is simply that integrating by parts in the other direction will give you back precisely $f(x) - f(a) - \dotsb - \frac{1}{N!}(x-a)^N f^{(N)}(a)$.



Now, we can get from (1) to the Lagrange and Cauchy forms of the remainder by using the Mean Value Theorem for Integrals, in the form:




Let $g,h$ be continuous, and $g>0$ on $(a,b)$. Then $\exists c \in (a,b)$ such that
$$ \int_a^b h(t) g(t) \, dt = h(c) \int_a^b g(t) \, dt. $$





(this is easy if you think about weighted averages and the usual Mean Value Theorem).



Applying this to (1) with $h=f^{(N+1)}$, $g(t)=(x-t)^N/N!$ gives
$$ R_N = f^{(N+1)}(c)\frac{(x-a)^{N+1}}{(N+1)!}, $$
which is the Lagrange form of the remainder; using $h(t)=f^{(N+1)}(t)(x-t)^N/N!$, $g(t)=1$ gives
$$ R_N = f^{(N+1)}(c')\frac{(x-c')^N}{N!}(x-a), $$
which is the Cauchy form of the remainder.



The weighted averages mentioned above are a way to think about what we did here: we take an average of $\frac{(x-t)^N}{N!} f^{(N+1)}(t)$ over $[a,x]$, and use the MVT to equate this to a value of the function at a specific point; how much of the integrand we absorb into the weighting affects which form of the remainder we obtain.



sequences and series - How many terms are 'missing'?

I know in this particular indeterminate partial sum S = $3^n - 3^{n+1} + 3^{n+2} - 3^{n+3} + \cdots + 3^{3n}$ where $a=3^n$ and $r=-3$. So I know if $3^1$ were the first term, there would be $3n$ terms. But I am missing $3^1,3^2,3^3, \cdots , 3^{n-1}$ (namely $n-1$ terms). Therefore $3n-(n-1) = 2n+1$ terms.



Now here's what I am having trouble with trying to apply the same process and find how many terms there are missing of this



$3^k + 3^{k-1} + 3^{k-2} + \cdots + 3^{-2k}$



it's hard for me now because it's counting down by 1. I believe that if $3^1$ were the first term there would be $-2k$ terms. But how do you find how many terms come before that? I was thinking $3^{k+1-1}$ comes before $3^k$, but I am having trouble expanding the expression to make it clear what's happening, in order to find how many terms are missing and thus the number of terms there are. Also, I don't know exactly what the common ratio is, but I see that the sign doesn't alternate, so it must be a positive value of $r$.



Basically, I want to apply the process used in the $3^n$ expression to that of the $3^k$ expression. Please help
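A sketch of the count, in case it helps (not from the original post): listing the exponents $k, k-1, \dots, -2k$ gives $k-(-2k)+1=3k+1$ terms, and consecutive terms satisfy $3^{j-1}/3^{j}=\frac13$, so here $a=3^k$ and $r=\frac13$.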

Thursday 19 July 2018

probability - How to find the cumulative distribution function and the expected value of a random variable.




I've seen a lot of questions about the cumulative distribution function (CDF) when you already have the PDF, but I was wondering how you find it when you're not given the PDF.



E.g., let $X:([0,1], \mathcal{B}([0,1])) \to \Bbb{R}$ be a random variable, and let $\mathbb{P} = \lambda$, where $\lambda$ is the Lebesgue measure ($\lambda([a,b]) = b-a$).



$X$ is defined by:
$$X(\omega) =
\begin{cases}
2\omega,\text{ if $\omega \in [0,1/2)$}\\
1, \text{ if $\omega \in [1/2,1]$}
\end{cases} $$

By eye, I think the CDF is given by
$$F_X(x) =
\begin{cases}
0,\text{if $x<0$}\\
\frac{1}{2}x, \text{ if $x \in [0,1)$}\\
1, \text{ if $x\geq 1$}
\end{cases} $$



However, I don't know any standard technique to find it and I was hoping someone could point me to a source which explains how to find this.




Once I have the CDF, I want to use this to find $\mathbb{E}[X]$. I know $\mathbb{E}[X] = \int_{[0,1]} X(\omega)d\mathbb{P}(\omega) = \int_\mathbb{R}xdF_X(x)$.



Clearly,
$$dF_X(x) =
\begin{cases}
1/2dx,\text{ if $x\in[0,1)$}\\
0dx, \text{ otherwise}
\end{cases} $$



So using the above formula, I would get:

$$\mathbb{E}[X] = \int_\mathbb{R}xdF_X(x) = \int_0^1x\frac{1}{2} dx = 1/4$$



But this seems wrong, because isn't this the expectation value we'd get if $X$ was uniformly distributed between 0 and 1/2, and didn't include the extra part between 1/2 and 1?



Please let me know if I've misunderstood how to calculate CDFs or expected values, and if possible give me a constructive way to find them in general situations.
Thanks!


Answer



$X(\omega) = 2\omega\mathbf 1_{\omega\in[0;1/2)}+\mathbf 1_{\omega\in[1/2;1)}$



$\mathsf P\{\omega:X(\omega)\leq x\} ~{= \lambda[0;x/2)\mathbf 1_{x\in[0;1)}+\mathbf 1_{x\in[1;\infty)} \\[1ex] =\begin{cases}0&:& x < 0\\ x/2 &:& x\in[0;1)\\ 1&:& x\in[1;\infty)\end{cases}}$




Either include the point mass from the step discontinuity



$$\mathsf E(X)=\int_0^1 x\,\frac{\partial (x/2)}{\partial x}\,\mathsf d x+1\cdot\frac12 = \frac14+\frac12 = \frac34$$



Or use the alternative definition:



$$\mathsf E(X) = \int_0^1 \mathsf P(X>x)\mathsf d x =\int_0^1(1-x/2)\mathsf d x={[x-x^2/4]}_{x=0}^{x=1}=3/4$$


Wednesday 18 July 2018

integration - Evaluating $\int_0^{\infty} \text{sinc}^m(x)\, dx$



How do I evaluate $$I_m = \displaystyle \int_0^{\infty} \text{sinc}^m(x) dx,$$ where $m \in \mathbb{Z}^+$?




For $m=1$ and $m=2$, we have the well-known result that this equals $\dfrac{\pi}2$. In general, WolframAlpha suggests that it seems to be a rational multiple of $\pi$.



\begin{array}{c|c|c|c|c|c|c|c}
m & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
\hline
I_m & \dfrac{\pi}2 & \dfrac{\pi}2 & \dfrac{3\pi}8 & \dfrac{\pi}3 & \dfrac{115\pi}{384} & \dfrac{11\pi}{40} & \dfrac{5887 \pi}{23040}\\
\end{array}



$(1)$. Can we prove that $I_m$ is a rational multiple of $\pi$ always?




$(2)$. If so, is there a nice formula, i.e., if $I_m = \dfrac{p(m)}{q(m)} \pi$, where $p(m),q(m) \in \mathbb{Z}^+$, are there nice expressions for $p(m)$ and $q(m)$?



P.S: This integral came up when I was trying my method to answer this question, by writing $\dfrac{\sin(x)}{x+\sin(x)}$ as $$\dfrac{\sin(x)}{x+\sin(x)} = \text{sinc}(x) \cdot \dfrac1{1+\text{sinc}(x)} = \sum_{k=0}^{\infty} (-1)^k \text{sinc}^{k+1}(x)$$


Answer



Notice $\frac{\sin x}{x}$ is bounded near $x = 0$ (the singularity at $x=0$ is removable),



$$\begin{align}\int_0^{\infty} \left(\frac{\sin x}{x}\right)^m dx
&= \frac12 \int_{-\infty}^{\infty} \left(\frac{\sin x}{x}\right)^m dx\tag{*1}\\
&= \lim_{\epsilon\to 0} \frac12 \left(\frac{1}{2i}\right)^m \oint_{C_{\epsilon}} \left(\frac{e^{ix} - e^{-ix}}{x}\right)^m dx\tag{*2}

\end{align}$$
We can evaluate the integral $(*1)$ as a limit of an integral over a deformed
contour $C_{\epsilon}$ which has a little half-circle of radius $\epsilon$ at the origin:



$$C_{\epsilon} = (-\infty,-\epsilon) \cup \left\{ \epsilon e^{i\theta} : \theta \in [\pi,2\pi] \right\} \cup ( +\epsilon, +\infty)$$



We then split the integrand in $(*2)$ in two pieces, those contains exponential factors $e^{ikx}$ for $k \ge 0$ and those for $k < 0$.



$$(*2) = \lim_{\epsilon\to 0} \frac12 \left(\frac{1}{2i}\right)^m \oint_{C_{\epsilon}} \left( \sum_{k=0}^{\lfloor\frac{m}{2}\rfloor} + \sum_{k=\lfloor\frac{m}{2}\rfloor+1} ^{m} \right) \binom{m}{k} \frac{(-1)^k e^{i(m-2k)x}}{x^m} dx$$




To evaluate the $1^{st}$ piece, we need to complete the contour in upper half-plane. Since the completed contour contains the pole at $0$, we get:



$$\begin{align}
\sum_{k=0}^{\lfloor\frac{m}{2}\rfloor} \text{ in }(*2)
&= \frac12 \left(\frac{1}{2i}\right)^m (2\pi i)\sum_{k=0}^{\lfloor\frac{m}{2}\rfloor} \binom{m}{k} \frac{(-1)^k i^{m-1}(m-2k)^{m-1}}{(m-1)!}\\ &= \frac{\pi m}{2^m} \sum_{k=0}^{\lfloor\frac{m}{2}\rfloor} \frac{(-1)^k (m-2k)^{m-1}}{k!(m-k)!}\tag{*3}\end{align}$$



To evaluate the $2^{nd}$ piece, we need to complete the contour in lower half-plane instead. Since the completed contour no longer contains any pole, it contributes nothing and hence $I_m$ is just equal to R.H.S of $(*3)$.
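As a numerical check of $(*3)$ (my addition): for $m=5$,
$$I_5=\frac{5\pi}{2^5}\left(\frac{5^4}{0!\,5!}-\frac{3^4}{1!\,4!}+\frac{1^4}{2!\,3!}\right)=\frac{5\pi}{32}\cdot\frac{625-405+10}{120}=\frac{115\pi}{384},$$
matching the table in the question.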



Update




About the question whether $I_m$ is decreasing. Aside from the exception $I_1 = I_2$, it is strictly decreasing.



For $m \ge 1$, it is clear $I_{2m} > I_{2m+1}$ because the difference of corresponding integrands is non-negative and not identically zero. For the remaining cases, we have:



$$\begin{align}&I_{2m+1}-I_{2m+2}\\
= & \int_{0}^{\infty} \left(\frac{\sin x}{x}\right)^{2m+1}\left(1 - \frac{\sin x}{x}\right) dx\\
= & \left(\sum_{n=0}^{\infty} \int_{2n\pi}^{(2n+1)\pi}\right) \left(\frac{\sin x}{x}\right)^{2m+1}\left[1 - \frac{\sin x}{x} - \left(\frac{x}{x+\pi}\right)^{2m+1}\left(1 + \frac{\sin x}{x + \pi}\right)\right] dx
\end{align}$$



Over the range $\cup_{n=0}^{\infty} (2n\pi,(2n+1)\pi)$, the factor $\left(\frac{\sin x}{x}\right)^{2m+1}$ is positive. The other factor $\Big[\cdots\Big]$ in above integral is bounded below by:




$$
\begin{cases}
1 - \frac{\sin x}{x} - \left(\frac{x}{x+\pi}\right)^3\left(1 + \frac{\sin x}{x + \pi}\right), & \text{ for } x \in (0,\pi)\\
1 - \frac{1}{x} - \frac{x}{x+\pi}\left(1 + \frac{1}{x}\right)
= \frac{(\pi - 2)x - \pi}{x(x+\pi)} & \text{ for } x \in \cup_{n=1}^{\infty}(2n\pi,(2n+1)\pi)
\end{cases}
$$
A simple plot will convince you both bounds are positive in the corresponding range. This
implies the integrand in the above integral is positive and hence $I_{2m+1} > I_{2m+2}$.



calculus - How to solve the limit of this sequence $\lim_{n \to \infty} \left(\frac{1}{3\cdot 8}+\dots+\frac{1}{6(2n-1)(3n+1)} \right)$

$$\lim_{n \to \infty} \left(\frac{1}{3\cdot 8}+\dots+\frac{1}{6(2n-1)(3n+1)} \right)$$
I have tried to split the summand into a telescoping series, but got no result.
I have also tried to use the squeeze theorem by bounding $a_n$ between $\frac{1}{(2n-1)(2n+1)}$ and $\frac{1}{(4n-1)(4n+1)}$, but it doesn't work.

Limit of Stirling's approximation as n goes to infinity.

I would like to see some detailed solution for $$\frac{n!}{\sqrt{2\pi n} \left(\frac{n}{e}\right)^n}$$ as $n\to\infty$. I know that the answer is $1$, but I am not sure why. Here is what I tried:



I rewrote Stirling's formula like this:
$$\frac{(e/n)\cdot (2e/n)\cdot (3e/n)\cdots (ne/n)}{\sqrt{2\pi n}}\to 0$$ as $n\to \infty$. I am not sure where I went wrong.
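A quick numerical experiment (a plain-Python sketch) suggests where the rewriting goes astray: the ratio tends to $1$, not $0$:

```python
from math import factorial, sqrt, pi, e

# the ratio n! / (sqrt(2*pi*n) * (n/e)^n) approaches 1 from above
for n in (1, 5, 10, 50, 100):
    print(n, factorial(n) / (sqrt(2 * pi * n) * (n / e)**n))
```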

Tuesday 17 July 2018

abstract algebra - GCD of two polynomials without Euclidean algorithm



The book gives this example of greatest common divisor:



The quadratic polynomials $2x^{2}+7x+3$ and $6x^{2}+x-1$ in $\mathbb{Q}[x]$ have GCD $x+\frac{1}{2}$ since $$2x^{2}+7x+3=(2x+1)(x+3)=2\left(x+\frac{1}{2}\right)(x+3),\\6x^{2}+x-1=(2x+1)(3x-1)=2\left(x+\frac{1}{2}\right)(3x-1).$$




I understand that $2x+1$ is a common divisor, and we divide out $2$ to make it monic. I understand that $\left(x+\frac{1}{2}\right)$ is a common divisor because you can multiply it to the polynomials $2(x+3)$ and $2(3x-1)$ to get the two original polynomials.



My questions are:



1) How did they know $\left(x+\frac{1}{2}\right)$ would be divisible by all the other common divisors? I started by saying let $p(x)$ be another common divisor. But I don't know why $p(x)$ would have to divide $\left(x+\frac{1}{2}\right)$.



2) These two polynomials were easy to factor by hand. What if we had polynomials that weren't so easy to factor? How would you find a common divisor to start with?



(Note: I am self-learning. This is from the book Groups, Rings, and Fields by Wallace. I say "without Euclidean algorithm" because I tried looking up stuff about this but got answers saying use the Euclidean algorithm which is covered in the next section of the book.)



Answer



1) By definition, the GCD includes all common factors. If the factorizations of the polynomials into first-degree binomials are available, it is trivial to find it.



2) If you may not use Euclid, then there are special methods for the factorization of certain polynomials (e.g. https://en.wikipedia.org/wiki/Factorization_of_polynomials#Factoring_univariate_polynomials_over_the_integers). But in the general case, polynomial factorization can only be achieved numerically with root finders.



The wonderful thing with Euclid is that it doesn't require any factorization to deliver the GCD.
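As a computational aside, a CAS reproduces the monic GCD over $\mathbb{Q}$ directly (a sketch assuming `sympy` is installed; note that internally it relies on Euclidean-type algorithms, which is exactly the point above):

```python
from sympy import symbols, Poly, QQ

x = symbols('x')
p = Poly(2*x**2 + 7*x + 3, x, domain=QQ)
q = Poly(6*x**2 + x - 1, x, domain=QQ)
print(p.gcd(q))   # Poly(x + 1/2, x, domain='QQ'), the monic gcd over QQ
```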


Monday 16 July 2018

real analysis - Limit of integral expression approaches maximum of function

So I've been trying to find a solution for this all afternoon, but haven't found a good place to start:





Prove that if $f:[a,b]\to\mathbf{R}^+$ is a continuous function with maximum value $M$, then
$$
\ \lim_{n\to\infty}\left(\int_a^b f(x)^n\,dx\right)^{1/n} = M
$$




Here are some of the paths I've considered, though none have been very successful:



(1) Considering the sequence of functions for all increasing integer $n$ and trying to show that the sequence converges. We've had plenty of work on converging sequences, but with the integral expression, I am not sure how to simplify.




(2) Showing that the sequence is increasing (again, how?) and then showing its supremum to be $M$. I'm not sure how the maximum of the function enters this problem.



(3) Mean value theorems for integrals



If anyone could give me a solid place to start or perhaps point me to a place where this question has been asked before (I can't seem to find it), I would be very grateful.
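Not an answer, but a numerical illustration of the statement (a sketch assuming `scipy`; $f(x)=1+\sin x$ on $[0,2\pi]$ is a hypothetical test function with $M=2$):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 + np.sin(x)     # continuous, positive, max M = 2
a, b = 0.0, 2 * np.pi

for n in (1, 5, 20, 100, 500):
    val, _ = quad(lambda x: f(x)**n, a, b, limit=200)
    print(n, val**(1.0 / n))      # creeps up towards M = 2
```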

probability distributions - Given a CDF, obtaining a random variable with said CDF



I'm working on the following problem.



We're given a CDF, $F_y(t)$, and a uniformly distributed random variable $X$ on the interval $[0,1]$. We define $Y = f(X)$ where $f(u) = \inf\{t \in \mathbb{R} \mid F_y(t) \geq u\}$. We want to prove that $Y$ has the desired CDF $F_y(t)$. (Note that $Y$ won't necessarily be unique.)



Our professor gave us the following hint, but I'm not sure how it's helpful.




Hint: First show that the following two sets are equal: $(-\infty, F_y(t)] = \{u \in \mathbb{R}: f(u) \leq t\}$.



What I'm thinking is that our CDF $F_y(t)$ need not be continuous, only necessarily continuous from the right, so we need to proceed by cases. I found a similar question here, but I feel like this one is a bit different. Any hints or advice would be much appreciated.


Answer



Why the hint is helpful:



\begin{align}
P(Y \le t)
&= P(f(X) \le t)\\

&= P(X \in \{u \in \mathbb{R} : f(u) \le t\})
\\
&= P(X \in (-\infty, F_y(t)]) & \text{used hint here}
\\
&= F_y(t) & \text{$X$ is uniform on $[0,1]$}
\end{align}






Proving the hint:





If $u$ satisfies $f(u) \le t$, then using the definition of $f$ and the fact that $F_y$ is monotone nondecreasing implies that $F_y(t) \ge u$.

Conversely, if $u \le F_y(t)$, then using the same two facts (definition of $f$, $F_y$ is monotone nondecreasing) implies $f(u) \le t$.
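To make the construction concrete, here is a minimal simulation sketch (the exponential distribution is a hypothetical target whose generalized inverse has a closed form, $f(u) = -\log(1-u)$):

```python
import numpy as np

# X ~ Uniform[0,1]; Y = f(X) with f(u) = inf{t : F(t) >= u}
rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
y = -np.log(1 - u)   # generalized inverse of F(t) = 1 - exp(-t)

# empirical CDF of Y vs. the target F at a few points
for t in (0.5, 1.0, 2.0):
    print(t, (y <= t).mean(), 1 - np.exp(-t))
```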



svd - How to find the Takagi decomposition of a symmetric (unitary) matrix?

The Takagi decomposition is a special case of the singular value decomposition for symmetric matrices. More exactly:





Let $U$ be a symmetric matrix, then Takagi tells us there is a unitary
$V$ such that $U = VDV^T$ (with $D>0$ diagonal).




My question is basically: how to construct this $V$? Preferably I am looking for the `easiest'/most straight-forward way (which probably won't be the most efficient way!)



Note: For the case I am interested in, $U$ is in fact unitary (in which case Takagi gives $U = VV^T$). I'm happy to specialize to that special case if that makes the algorithm easier.
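One concrete construction for the unitary case, sketched below (assuming `numpy`): if $U$ is unitary and symmetric, then $A=\operatorname{Re}(U)$ and $B=\operatorname{Im}(U)$ are commuting real symmetric matrices, so a single real orthogonal $Q$ diagonalizes both; halving the resulting phases gives $V$. The simultaneous diagonalization below uses a random real combination $A + cB$, which works for generic $c$ (degenerate collisions would need more care):

```python
import numpy as np

def takagi_unitary(U):
    # Takagi factorization U = V V^T for a symmetric unitary U (sketch).
    # Re(U) and Im(U) commute, so diagonalize a generic real combination.
    A, B = U.real, U.imag
    c = np.random.default_rng(1).standard_normal()
    _, Q = np.linalg.eigh(A + c * B)   # real orthogonal Q
    d = np.diag(Q.T @ U @ Q)           # diagonal entries e^{i*theta_j}
    return Q @ np.diag(np.exp(0.5j * np.angle(d)))

# check on a random symmetric unitary U = W W^T with W unitary
rng = np.random.default_rng(2)
W, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
U = W @ W.T
V = takagi_unitary(U)
print(np.allclose(U, V @ V.T), np.allclose(V.conj().T @ V, np.eye(4)))  # True True
```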

Cauchy functional equation three variables

If I have a function from $\mathbb{R}^3$ to $\mathbb{R}$ satisfying



$f(x_1,x_2,x_3)+f(y_1,y_2,y_3) = f(x_1+y_1,x_2+y_2,x_3+y_3)$



is it necessarily linear?




$f(z_1,z_2,z_3) = \lambda _1 z_1+\lambda _2 z_2+\lambda _3 z_3$



Wasn't sure if this was a direct consequence of Cauchy's theorem or not.

Find the number of terms in the geometric series, given the first and the third term and the sum

In a geometric series, the first term is $12$, the third term is $92$, and the sum of all of the terms of the series is $62 813$. How many terms are in the series?



According to the answer sheet of the pre-calculus 11 book, the number of terms in the series is $13$. Can anyone explain how that happens?

asymptotics - Inequality with little-o notation

I'm having trouble justifying the following:
For large $n$,

\begin{align*}
-\log f(n) & < \log n + o(\log n)\\
\implies f(n) &> n^{-1} \log^3(n) \log(10)
\end{align*}



I think basically for large $n$ they claim $e^{-o(\log n)} > \log^3(n) \log(10)$?



Edit: the first inequality should have been strict, corrected

real analysis - Compute $sumlimits_{n=1}^inftyfrac{1}{(n(n+1))^p}$ where $pgeq 1$

I was recently told to compute some integral, and the result turned out to be a scalar multiple of the series $$\sum\limits_{n=1}^\infty\frac{1}{(n(n+1))^p},$$ where $p\geq 1$. I know it converges by comparison for
$$\dfrac{1}{(n(n+1))^p}\leq\dfrac{1}{n(n+1)}<\dfrac{1}{n^2},$$
and we know thanks to Euler that $$\sum\limits_{n=1}^\infty\frac1{n^2}=\frac{\pi^2}6.$$ I managed to work out the cases $p=1$ and $p=2$: the case $p=1$ is a telescoping sum, and my solution for $p=2$ is $$\frac13\pi^2-3,$$ which I obtained based on Euler's solution to the Basel Problem. I see no way to generalize the results to arbitrary values of $p$, however. Any advice on where to start would be much appreciated.




Also, in the absence of another formula, is the series itself a valid answer, given that it converges?
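For what it's worth, the $p=2$ value is easy to confirm numerically (a sketch assuming `mpmath` is available):

```python
from mpmath import mp, nsum, inf, pi

mp.dps = 30
s = nsum(lambda n: 1 / (n * (n + 1))**2, [1, inf])
print(s)              # 0.289868...
print(pi**2 / 3 - 3)  # agrees with pi^2/3 - 3
```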

Sunday 15 July 2018

elementary number theory - Proving If and only if gcd(a,b) = gcd(b,c) = 1, then gcd(ab,c) = 1



I have tried to show this using the proposition that if $\gcd(a,b)=1$, there exist two integers $x$ and $y$ such that $1 = ax + by$.




I first tried to prove the implication starting with gcd(a,b) = gcd(b,c) = 1.



$1 = ax + by$
and
$1 = bk + cm$



I then tried to manipulate these to the form
$1 = abl + cj$
(with l and j being integers)



I ran into trouble multiplying by $1$ in various places and substituting either of the above identities, because $b$ appears in both; I kept ending up with factors of $b$ that I couldn't get rid of.



I also tried using the definition of $\gcd(a,b) = d$, namely:
$i)$ $d > 0$;
$ii)$ $d$ divides both $a$ and $b$;
$iii)$ any divisor of both $a$ and $b$ also divides $d$.




I got equally stuck using this method.


Answer



COUNTEREXAMPLE!



You cannot prove the title statement; it is false.

Your title condition can be satisfied with $a=c$, as long as the gcd with $b$ is one.



$$ a = 5,b=6, c = 5 $$




Then $$ \gcd(a,b) = 1$$
$$ \gcd(b,c) = 1$$



BUT
$$ \gcd(ab,c) = \gcd(30,5) = 5 $$
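The arithmetic is easy to confirm directly (a one-line sketch in Python):

```python
from math import gcd

a, b, c = 5, 6, 5
print(gcd(a, b), gcd(b, c), gcd(a * b, c))   # 1 1 5
```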


sequences and series - A cubic nonlinear Euler sum

Any idea how to solve the following Euler sum




$$\sum_{n=1}^\infty \left( \frac{H_n}{n+1}\right)^3 =
-\frac{33}{16}\zeta(6)+2\zeta(3)^2$$




I think it can be solved using contour integration, but I am interested in solutions using real methods.

algebra precalculus - Help with complicated functional equation





Problem: Let $T=\{(p,q,r)\mid p,q,r \in \mathbb{Z}_{\geq0}\}$. Find all functions $f:T\to \mathbb{R}$ such that:
$$f(p,q,r)=
\begin{cases}
0 & \text{if } pqr = 0, \\
1 + \frac{1}{6}\big(f(p+1,q-1,r)+f(p+1,q,r-1)+f(p,q+1,r-1)+f(p,q-1,r+1)+f(p-1,q+1,r)+f(p-1,q,r+1)\big) & \text{otherwise.}
\end{cases}$$





Progress so far: It's not hard to see that $f$ is symmetric in $p,q,r$, which is useful to know. From the recursive definition one can also infer that $f:T\to \Bbb{Q}^+$, so no trig functions or logs. That's all I could observe from the get-go. I've tried calculating some values of $f$ to have an idea on how the functions look like (if there are any) but having trouble calculating even small values of $f$, for example $f(1,2,3)$ or $f(2,2,2)$. All I know is that $f(0,a,b)=0$ and $f(1,1,1)=1$. I could guess a solution based on my initial observations but I can't see any obvious candidates.



Any help would be appreciated, thanks.


Answer



When I worked on this problem back in 2002, showing uniqueness was really easy through the "average of neighbors" observation (albeit on a slanted hexagonal board, instead of the regular chessboard).



Proof of uniqueness: Suppose we have 2 solutions $ f(p,q,r)$ and $ g(p,q,r)$. Let $ h(p,q,r) = f(p,q,r) - g(p,q,r)$. Then, we get that




$$ 6 h(p,q,r) = h(p+1, q-1, r) + h(p-1, q+1, r) + h( p, q+1, r-1) + h( p, q-1, r+1) + h( p+1, q, r-1) + h(p-1, q, r+1). $$



Consider the plane $p+q+r = N$. Observe that the neighbors of the cell $(p,q,r)$ are the 6 other cells with coordinates as given above. Hence, every cell is the average of its neighbors. Through the standard argument (extremal principle), this implies that all cells on this finite board are equal.



We also have the boundary conditions that $h(p,q,r ) = 0$ for $pqr=0$, hence $h(p,q,r) = 0$. Thus, the function is unique $_\square$



Finding the solution was harder, but still motivated from the conditions.
Note: It is important to bear in mind that as an ('easy') Olympiad problem, it often has a nice solution that can be motivated.



Finding function: From the boundary condition that $pqr=0 \Rightarrow f(p,q,r) = 0$, we guess the initial function $ F( p,q,r) = pqr$.




Observe that $(p-1)(q+1) r + (p+1)(q-1)r = 2pqr - 2r$ (and similarly for the other two pairs of neighbors), so this guess gives us:



$$F(p,q,r) = \frac{p+q+r}{3} + \frac{1}{6} \left[ F(p-1, q+1, r) + F(p+1, q-1, r) + F(p, q-1, r+1) + F(p, q+1, r-1) + F(p-1, q, r+1) + F(p+1, q, r-1) \right].$$



Observe that since $p+q+r$ is a constant for all of these 7 terms, we should look at
$$ f(p,q,r) = \frac{ F(p,q,r) } { \frac{p+q+r} {3} } = \frac{3 pqr} { p+q+r}.$$



Indeed, this works. $_\square$
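One can also verify the recursion mechanically with exact rational arithmetic (a minimal sketch; the small search box is arbitrary):

```python
from fractions import Fraction

def f(p, q, r):
    # candidate solution f(p,q,r) = 3pqr/(p+q+r), with the boundary value 0
    if p * q * r == 0:
        return Fraction(0)
    return Fraction(3 * p * q * r, p + q + r)

def rhs(p, q, r):
    nbrs = [(p+1, q-1, r), (p+1, q, r-1), (p, q+1, r-1),
            (p, q-1, r+1), (p-1, q+1, r), (p-1, q, r+1)]
    return 1 + Fraction(1, 6) * sum(f(*t) for t in nbrs)

# exact check of the recursion at all interior points of a small box
print(all(f(p, q, r) == rhs(p, q, r)
          for p in range(1, 8) for q in range(1, 8) for r in range(1, 8)))  # True
```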



Note: Had $F(p,q,r) = pqr$ not worked, the next guess would have been $ F(p,q,r) = p^2q^2r^2$



Saturday 14 July 2018

trigonometry - Proof of $sin^2 x+cos^2 x=1$ using Euler's Formula



How would you prove $\sin^2x + \cos^2x = 1$ using Euler's formula?




$$e^{ix} = \cos(x) + i\sin(x)$$



This is what I have so far:



$$\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$$



$$\cos(x) = \frac{1}{2} (e^{ix}+e^{-ix})$$


Answer



Multiply $\mathrm e^{\mathrm ix}=\cos(x)+\mathrm i\sin(x)$ by the conjugate identity $\overline{\mathrm e^{\mathrm ix}}=\cos(x)-\mathrm i\sin(x)$ and use that $\overline{\mathrm e^{\mathrm ix}}=\mathrm e^{-\mathrm ix}$ hence $\mathrm e^{\mathrm ix}\cdot\overline{\mathrm e^{\mathrm ix}}=\mathrm e^{\mathrm ix-\mathrm ix}=1$.
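Concretely, squaring the two exponential formulas from the question and adding them gives

$$\sin^2(x)+\cos^2(x)=\frac{e^{2ix}-2+e^{-2ix}}{(2i)^2}+\frac{e^{2ix}+2+e^{-2ix}}{4}=\frac{-(e^{2ix}-2+e^{-2ix})+(e^{2ix}+2+e^{-2ix})}{4}=1,$$

which is the same computation as multiplying $e^{ix}$ by its conjugate.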



integration - Why do we treat differential notation as a fraction in u-substitution method




How did we come to know that treating the differential notation as a fraction would help us find the integral? And how do we know it is valid?

How can $\frac{dy}{dx}$ be treated as a fraction?

I want to know how u-substitution came about, and why the differential is treated as a fraction in it.


Answer



It doesn't necessarily need to be.



Consider a simple equation $\frac{dy}{dx}=\sin(2x+5)$ and let $u=2x+5$. Then
$$\frac{du}{dx}=2$$
Traditionally, you will complete the working by using $du=2\cdot dx$, but if we were to avoid this, you could instead continue with the integral:

$$\int\frac{dy}{dx}dx=\int\sin(u)dx$$
$$\int\frac{dy}{dx}dx=\int\sin(u)\cdot\frac{du}{dx}\cdot\frac{1}{2}dx$$
$$\int\frac{dy}{dx}dx=\frac{1}{2}\int\sin(u)\cdot\frac{du}{dx}dx$$
$$y=c-\frac{1}{2}\cos(u)$$
$$y=c-\frac{1}{2}\cos(2x+5)$$



But why is this? Can we prove that the separation of the differentials is justified? As Gerry Myerson has mentioned, it's a direct consequence of the chain rule:



$$\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$$
$$\int\frac{dy}{dx}dx=\int\frac{dy}{du}\frac{du}{dx}dx$$

But then if you 'cancel', it becomes
$$\int\frac{dy}{dx}dx=\int\frac{dy}{du}du$$
Which is what you desired.
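For completeness, one can check the worked example above with a CAS (a quick sketch, assuming `sympy` is available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.integrate(sp.sin(2*x + 5), x)
print(y)              # -cos(2*x + 5)/2
print(sp.diff(y, x))  # sin(2*x + 5), recovering the integrand
```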


Friday 13 July 2018

linear algebra - Two vectors are linearly independent?

Let $x, y, z$ be vectors in a vector space $V$. Suppose $z \notin L(x,y)$, where $L(x,y)$ is the linear span of $x, y$.

Show that $x, y$ are linearly independent iff x+z, y+z are linearly independent.

I can easily show that linear independence of $x, y$ implies linear independence of $x+z, y+z$. But I am having trouble with the converse; I would appreciate any help.

trigonometry - Proving $cos(w_1t) + cos(w_2t) = 2 cosleft(frac{1}{2}(w_1+w_2)tright) cos left(frac12 (w_1-w_2)tright)$




I am trying to prove :




$$\cos(w_1t) + \cos(w_2t) = 2 \cos\left(\frac{1}{2}(w_1+w_2)t\right) \cos \left(\frac12 (w_1-w_2)t\right)$$




My working so far uses the complex exponential identity:
$$\cos \theta = \frac12 \left(e^{i \theta} + e^{-i \theta}\right)$$



$$\begin{align}

\frac12\left(e^{i w_1t} + e^{-i w_1t}\right) +
\frac12\left(e^{i w_2t} + e^{-i w_2t}\right)
&= \frac12\left(e^{it(w_1 + w_2)} + e^{-it(w_1 + w_2)}\right) \\
&= \cos\left((w_1 + w_2) t\right) \\
&= \cos\left(w_1t + w_2t\right)
\end{align}$$



I cannot understand how to proceed further. Please help.


Answer



Remember that $$\cos(\alpha+\beta)=\cos(\alpha)\cos(\beta)-\sin(\alpha)\sin(\beta)$$ and $$\cos(\alpha-\beta)=\cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)$$ so you can add up both equalities to get $$\cos(\alpha+\beta)+\cos(\alpha-\beta)=2\cos(\alpha)\cos(\beta)$$ so now you want that $\alpha+\beta=\omega_{1}t$ and $\alpha-\beta=\omega_{2}t$ so you have to solve the system of equations $$\begin{array}{l}

\alpha+\beta=\omega_{1}t\\
\alpha-\beta=\omega_{2}t
\end{array}$$
Which gives you $$\begin{array}{l}
\alpha=\dfrac{1}{2}\left(\omega_{1}t+\omega_{2}t\right)\\
\beta=\dfrac{1}{2}\left(\omega_{1}t-\omega_{2}t\right)
\end{array}$$ which is exactly what you wanted to prove


functional equations - If a function is like $f(f(y))=a^2+y$, does it imply that $f$ is surjective?

If a function is like $f(f(y))=a^2+y$, does it imply that $f$ is surjective?




Just for an example, consider this:




Find all functions $f:\mathbb{R}\mapsto \mathbb{R}$ such that $$f(xf(x)+f(y))=(f(x))^2+y$$ for all real values of $x,y$.
Its solution begins as follows:
Let $f(0)=a$. Setting $x=0$ we get $$f(f(y))=a^2+y ~ \forall y\in \mathbb{R}$$




Now we can say that the range of $a^2+y$ is all real numbers, so $f$ is surjective.




What if the range of $f$ were contained in some smaller interval $(a,b)$?

linear algebra - Understanding the formula for a diagonal matrix $A = P^{-1}DP$.



I'm trying to understand the procedure for finding the powers of a matrix using the diagonal relation $A^n = P^{-1}D^nP$. Here's what I understand so far.





• We find the eigenvalues of $A$. The matrix $D$ is formed with the eigenvalues on the diagonal and zeros everywhere else. The order of entering the diagonal values doesn't matter.

• The matrix $P$ is a matrix that contains the eigenvectors of $A$. Again, the order does not matter.



Is this right? I'm assuming the matrix is nice (diagonalizable, etc.). Am I right in thinking that the diagonal matrix by itself isn't useful (i.e. it doesn't give you $A^2$ unless you find $P$ and $P^{-1}$ too)?


Answer



Indicating the eigenvalues along the diagonal of $D$ by $\lambda_i=D_{ii}$, we need the corresponding eigenvector $\vec v_i$ to be placed as the $i$-th column of the matrix $P$.

Therefore the order of the eigenvalues in $D$ doesn't matter, but the corresponding eigenvectors must be placed in $P$ accordingly, and vice versa. (With this column convention $AP = PD$, i.e. $A = PDP^{-1}$ and $A^n = PD^{n}P^{-1}$; in the convention $A = P^{-1}DP$ of the question, the eigenvectors appear as the columns of $P^{-1}$.)
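A small numerical check of the column convention (a sketch assuming `numpy`; the $2\times 2$ matrix is a hypothetical example with distinct eigenvalues):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
evals, P = np.linalg.eig(A)   # eigenvectors are the *columns* of P
n = 5

# with this convention A = P D P^{-1}, hence A^n = P D^n P^{-1}
An = P @ np.diag(evals**n) @ np.linalg.inv(P)
print(np.allclose(An, np.linalg.matrix_power(A, n)))   # True
```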


abstract algebra - Given $K(alpha)/K$ and $K(beta)/K$ abelian extensions, prove that $K(alpha + beta)/K$ is an abelian extension.




Problem:



Let $K(\alpha)/K$ and $K(\beta)/K$ be algebraic field extensions such that their respective Galois groups are abelian.




Prove that the Galois group of the field extension $K(\alpha + \beta)/K$ is also abelian.




My attempt:



I've tried considering whether the towers



$$K(\alpha,\beta)/K(\alpha)/K \qquad K(\alpha,\beta)/K(\beta)/K$$



are somehow related to




$$K(\alpha,\beta)/K(\alpha+\beta)/K \qquad K(\alpha,\beta)/K(\alpha - \beta)/K$$



But I don't know how to relate this to the fact that the quotients are abelian, or whether the statement is even true.


Answer



$K(\alpha, \beta)$ is Galois over $K$, and the corresponding Galois group embeds (via restriction) into $\text{Gal}(K(\alpha)/K)\times \text{Gal}(K(\beta)/K)$. So $K(\alpha, \beta)/K$ is abelian. Thus, any intermediate Galois extension must be abelian, since the corresponding Galois group would be a quotient of $\text{Gal}(K(\alpha,\beta)/K)$.


Wednesday 11 July 2018

sequences and series - Convergence of $sum_{n=3}^infty frac {1}{n ln n}$

I know I have seen something similar and there is a telescoping trick to the convergence but it is eluding me.

elementary set theory - Does $mathbb R^2$ contain more numbers than $mathbb R^1$?











Does $\mathbb R^2$ contain more numbers than $\mathbb R^1$? I know that there are the same number of even integers as integers, but those are both countable sets. Does the same type of argument apply to uncountable sets? If there exists a 1-1 mapping from $\mathbb R^2$ to $\mathbb R^1$, would that mean that 2 real-valued parameters could be encoded as a single real-valued parameter?


Answer



Indeed $\mathbb R^2$ has the same cardinality as $\mathbb R$, as the answers in this thread show.



And indeed it means that functions of two variables can be encoded as functions of one variable. However do note that such encoding cannot be continuous, but can be measurable.



Lastly, to extend this result to all infinite sets one needs the axiom of choice. In fact the assertion "For every infinite $A$ there is a bijection between $A$ and $A^2$" is equivalent to the axiom of choice. If one requires that $A$ is well-ordered then this is true without the axiom of choice, but for many "sets of interest" (e.g. the real numbers) one cannot prove the existence of a well-ordering without some form of choice.




Despite the last sentence, the existence of a bijection between $\mathbb R$ and $\mathbb R^n$ does not require the axiom of choice (for $n>0$, of course).


Tuesday 10 July 2018

real analysis - How to show $|f|_{p}rightarrow |f|_{infty}$?







I was asked to show:



Assume $|f|_{r}<\infty$ for some $r<\infty$. Prove that $$

|f|_{p}\rightarrow |f|_{\infty}
$$ as $p\rightarrow \infty$.



I am stuck in the situation where $|f|_{p}<\infty$ for all $r<p<\infty$ while $|f|_{\infty}=\infty$: could $|f|_{p}$ be fluctuating while $|f|_{\infty}=\infty$? I have proved the claim in the case $|f|_{\infty}<\infty$.

Monday 9 July 2018

education - What mistakes, if any, were made in Numberphile's proof that $1+2+3+cdots=-1/12$?

This is not a duplicate question because I am looking for an explanation directed to a general audience as to the mistakes (if any) in Numberphile's proof (reproduced below). (Numberphile is a YouTube channel devoted to pop-math and this particular video has garnered over 3m views.)



By general audience, I mean the same sort of audience as the millions who watch Numberphile. (Which would mean, ideally, making little or no mention of things that a general audience will never have heard of - e.g. Riemann zeta functions, analytic continuations, Casimir forces; and avoiding tactics like appealing to the fact that physicists and other clever people use it in string theory, so therefore it must be correct.)



Numberphile's Proof.



The proof proceeds by evaluating each of the following:




$S_1 = 1 - 1 + 1 - 1 + 1 - 1 + \ldots$



$S_2 = 1 - 2 + 3 - 4 + \ldots $



$S = 1 + 2 + 3 + 4 + \ldots $



"Now the first one is really easy to evaluate ... You stop this at any point. If you stop it at an odd point, you're going to get the answer $1$. If you stop it at an even point, you get the answer $0$. Clearly, that's obvious, right? ... So what number are we going to attach to this infinite sum? Do we stop at an odd or an even point? We don't know, so we take the average of the two. So the answer's a half."



Next:




$S_2 \ \ = 1 - 2 + 3 - 4 + \cdots$



$S_2 \ \ = \ \ \ \ \ \ \ 1 - 2 + 3 - 4 + \cdots$



Adding the above two lines, we get:



$2S_2 = 1 - 1 + 1 - 1 + \cdots$



Therefore, $2S_2=S_1=\frac{1}{2}$ and so $S_2=\frac{1}{4}$.




Finally, take



\begin{align}
S - S_2 & = 1 + 2 + 3 + 4 + \cdots
\\ & - (1 - 2 + 3 - 4 + \cdots)
\\ & = 0 + 4 + 0 + 8 + \cdots
\\ & = 4 + 8 + 12 + \cdots
\\ & = 4S
\end{align}




Hence $-S_2=3S$ or $-\frac{1}{4}=3S$.



And so $S=-\frac{1}{12}$. $\blacksquare$

Saturday 7 July 2018

Geometric series and complex numbers




I'm new to this site, english is not my mother tongue, and I'm just learning LaTeX. I'm basically a noob, so please be indulgent if I break any rule or habits.



I'm stuck at proving the following equation. I suppose I should use the formula for the geometric series ($\sum\limits_{k=0}^{n}q^k=\frac{1-q^{n+1}}{1-q}$), and also use somewhere that $e^{i\theta}=\cos(\theta)+i\sin(\theta)$.



So here is the equation I have to prove :
$$\frac{1}{2}+\cos(\theta)+\cos(2\theta)+\dots+\cos(n\theta)=\frac{\sin\left(\left(n+\frac{1}{2}\right)\theta\right)}{2\sin\left(\frac{\theta}{2}\right)}$$




Thanks for your help


Answer




$$ \begin{aligned} \dfrac{1}{2} + \sum_{r=1}^{n} \cos(r\theta) & = \Re \left\{ \dfrac{1}{2} + \sum_{r=1}^{n} e^{ir\theta} \right\} \\ & = \Re \left\{ -\dfrac{1}{2} + \sum_{r=0}^{n} e^{ir\theta} \right\} \\ & = \Re \left\{ -\dfrac{1}{2} + \dfrac{1-e^{i(n+1)\theta}}{1-e^{i\theta}} \right\} \\ & = \Re \left\{ \dfrac{1+e^{i\theta}-2e^{i(n+1)\theta}}{2(1-e^{i\theta})} \right\} \\ & = \Re \left\{ \dfrac{e^{-i\theta/2}+e^{i\theta/2}-2e^{i(n+1/2)\theta}}{2(e^{-i\theta/2}-e^{i\theta/2})} \right\} \\ & = \Re \left\{ \dfrac{2\cos\left(\frac{\theta}{2}\right)-2\cos\left(\left(n+\frac{1}{2}\right)\theta\right)-2i\sin\left(\left(n+\frac{1}{2}\right)\theta\right)}{-4i\sin\left(\frac{\theta}{2}\right)} \right\} \end{aligned} $$




Multiply through by $1=\frac{i}{i}$, take the real part and there you have it. ^_^
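If in doubt, the identity is easy to spot-check numerically (a sketch; $\theta$ and $n$ are arbitrary test values):

```python
import numpy as np

theta, n = 0.7, 6
lhs = 0.5 + sum(np.cos(r * theta) for r in range(1, n + 1))
rhs = np.sin((n + 0.5) * theta) / (2 * np.sin(theta / 2))
print(lhs, rhs)   # the two values agree
```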


sequences and series - The Lerch transcendent evaluation for the parameters HurwitzLerchPhi[z,-4 s,0]



I want to evaluate the Lerch transcendent
$$ \Phi (z, -4s, 0)= \sum_{k=1}^\infty \frac{z^k}{k^{-4s}} = \sum_{k=1}^\infty k^{4s} z^k,$$
where $|z|<1$ and $s>1$.
I want to find an upper bound for it.
I even tried Wolfram Mathematica (to get a hint of a possible closed form), but without success, since I give parameters and not numbers as input.


Answer



The OP is asking about the upper bound of the function:




$$\Phi (z, -4s, 0)= \sum_{k=1}^\infty k^{4s} z^k$$



$$|z|<1 \qquad s>1$$



First, it's obvious that:



$$\sum_{k=1}^\infty k^{4s} z^k \leq \sum_{k=1}^\infty k^{4s} |z|^k$$



So let us consider only $z>0$.




For a fixed $z$ we can see that:



$$p>q \\ \sum_{k=1}^\infty k^{4p} z^k>\sum_{k=1}^\infty k^{4q} z^k$$



This means that, for fixed $z$, $\Phi$ is increasing in $s$, so any upper bound valid for all $s$ would have to dominate the limit:



$$\lim_{s \to +\infty} \sum_{k=1}^\infty k^{4s} z^k = \infty$$







Now let us fix a finite $s$ and see what happens for $z \to 1$. For $z=1$ the series obviously diverges, which automatically means that for $z$ close to $1$ the value can be as large as we want, which means there's no upper bound for a fixed $s$ either.



More rigorously, we need to prove that for any $N>0$ there exists $\epsilon >0$ such that:



$$ \sum_{k=1}^\infty k^{4s} (1-\epsilon)^k > N$$



It's rather easy; we can just compare with the geometric series:



$$\sum_{k=1}^\infty k^{4s} (1-\epsilon)^k>\sum_{k=1}^\infty (1-\epsilon)^k=\frac{1-\epsilon}{\epsilon}=\frac{1}{\epsilon}-1$$




Now pick $\epsilon=\frac{1}{N+1}$ and the proof is finished.
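A small numerical experiment (a sketch assuming `mpmath`; $s = 3/2$ is an arbitrary test value) illustrates the blow-up as $z \to 1$ for fixed $s$:

```python
from mpmath import mp, mpf, nsum, inf

mp.dps = 20
s = 1.5  # any fixed s > 1
for z in (0.5, 0.9, 0.99):
    print(z, nsum(lambda k: k**(4 * s) * mpf(z)**k, [1, inf]))
# the values grow without bound as z -> 1
```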


real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...