Wednesday 30 November 2016

calculus - Limit with natural log in the denominator: $\lim_{x\to1}{\frac{x^2 - 1}{\ln x}}$




Value of $\displaystyle\lim_{x\to1}{\frac{x^2 - 1}{\ln x}}$




The answer is given to be $2$. I'd appreciate an explanation.


Answer



Since simple substitution of $x:=1$ would yield the indeterminate form $\frac{0}{0}$,



L'Hôpital's rule to the rescue:



$$\lim_{x\rightarrow 1}\frac{f(x)}{g(x)}=\lim_{x\rightarrow 1}\frac{f'(x)}{g'(x)}$$



So, take the derivative of the top and the bottom (not the derivative of the top divided by the bottom).




$$\lim_{x\rightarrow 1}\frac{x^2-1}{\ln x} = \lim_{x\rightarrow 1}\frac{2x}{1/x}=\lim_{x\rightarrow 1}2x^2= 2$$


trigonometry - Prove by mathematical induction, or otherwise, that for all integers $n\ge 1$

$$\cos(1)+\cos(2)+\cdots+\cos(n-1)= \frac{\cos(n)-\cos(n-1)}{2\cos(1)-2} -\frac12$$



Here is my attempt:




Let $P(n)$ be this statement.



$P(1)$ is true since $0=\frac{\cos(1)-\cos(0)}{2\cos(1)-2} -\frac12$



Suppose $P(k)$ is true for some integer $k$. Then I have to prove $P(k+1)$ is also true. That is :



$$\cos(1)+\cos(2)+\cdots+\cos(k-1)+\cos(k)=\frac{\cos(k+1)-\cos(k)}{2\cos(1)-2} -\frac12.$$



By the inductive hypothesis, the left-hand side equals $\frac{\cos(k)-\cos(k-1)}{2\cos(1)-2} -\frac12 +\cos(k)$. But how is this equal to the right-hand side? Can someone help me with this question please? Thank you!
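One way to finish the induction step (a sketch): by the product-to-sum identity $2\cos(k)\cos(1)=\cos(k+1)+\cos(k-1)$, we have $\cos(k)\left(2\cos(1)-2\right)=\cos(k+1)+\cos(k-1)-2\cos(k)$, so
$$\frac{\cos(k)-\cos(k-1)}{2\cos(1)-2} -\frac12 +\cos(k)
=\frac{\cos(k)-\cos(k-1)+\cos(k+1)+\cos(k-1)-2\cos(k)}{2\cos(1)-2} -\frac12
=\frac{\cos(k+1)-\cos(k)}{2\cos(1)-2} -\frac12,$$
which is exactly $P(k+1)$.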

real analysis - Purely "algebraic" proof of Young's Inequality




Young's inequality states that if $a, b \geq 0$, $p, q > 0$, and $\frac{1}{p} + \frac{1}{q} = 1$, then $$ab\leq \frac{a^p}{p} + \frac{b^q}{q}$$ (with equality only when $a^p = b^q$). Back when I was in my first course in real analysis, I was assigned this as homework, but I couldn't figure it out. I kept trying to manipulate the expressions algebraically, and I couldn't get anywhere. But every proof that I've seen since uses calculus in some way to prove this. For example, a common proof is based on this proof without words and integration. The proof on Wikipedia uses the fact that $\log$ is concave, which I believe requires the analytic definition of the logarithm to prove (correct me if I'm wrong).



Can this be proven using just algebraic manipulations? I know that that is a somewhat vague question, because "algebraic" is not well-defined, but I'm not sure how to make it more rigorous. But for example, the proof when $p = q = 2$ is something I would consider to be "purely algebraic":



$$0 \leq (a - b)^2 = a^2 + b^2 - 2ab,$$ so $$ab \leq \frac{a^2}{2} + \frac{b^2}{2}.$$


Answer



This proof is from "Mathematical Toolchest", published by the Australian Mathematics Trust.





Example. If $p$ and $q$ are positive rationals such that $\frac1p + \frac1q = 1$, then for positive $x$ and $y$ $$\frac{x^p}p + \frac{y^q}q \ge xy.$$



Since $\frac1p + \frac1q = 1$, we can write $p = \frac{m+n}m$, $q = \frac{m+n}n$ where $m$ and $n$ are positive integers. Write $x = a^{1/p}$, $y = b^{1/q}$. Then $$\frac{x^p}p + \frac{y^q}q = \frac a{\frac{m+n}m} + \frac b{\frac{m+n}n} = \frac{ma + nb}{m + n}.$$



However, by the AM–GM inequality, $$\frac{ma + nb}{m + n} \ge (a^m \cdot b^n)^{\frac1{m+n}} = a^{\frac1p} b^{\frac1q} = xy,$$ and thus $$\frac{x^p}p + \frac{y^q}q \ge xy.$$
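For a concrete instance of the argument (illustrative numbers of my choosing): $p=3$, $q=\frac32$ correspond to $m=1$, $n=2$, so with $x=a^{1/3}$, $y=b^{2/3}$ the AM–GM step reads
$$\frac{x^3}{3}+\frac{y^{3/2}}{3/2}=\frac{a+2b}{3}\ \ge\ \left(a b^2\right)^{1/3}=a^{1/3}b^{2/3}=xy.$$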



elementary number theory - Prove that there exist $135$ consecutive positive integers so that the $n$th least is divisible by a perfect $n$th power greater than $1$

Prove that there exist 135 consecutive positive integers so that the second least is divisible by a perfect square $> 1$, the third least is divisible by a perfect cube $> 1$, the fourth least is divisible by a perfect fourth power $> 1$, and so on.



How should I go about doing this?




I thought perhaps I should use Fermat's little theorem, or its corollary?



Thanks!
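One way to sanity-check the Chinese Remainder Theorem idea on a small case (a sketch, not the requested proof; the use of sympy is my own choice): solving $x \equiv -i \pmod{p_i^{\,i}}$ for distinct primes $p_i$ makes $x+i$ divisible by the $i$-th power $p_i^{\,i} > 1$, and the moduli are pairwise coprime, so a solution exists.

from sympy import prime
from sympy.ntheory.modular import crt

n = 6  # small demonstration; the problem asks for n = 135
moduli = [prime(i)**i for i in range(1, n + 1)]           # 2, 3^2, 5^3, ...
residues = [-i % m for i, m in zip(range(1, n + 1), moduli)]
x, _ = crt(moduli, residues)                              # CRT solution
for i in range(1, n + 1):
    # the i-th of the consecutive integers is divisible by the i-th power p_i^i
    assert (x + i) % (prime(i)**i) == 0
print(x + 1, "through", x + n)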

sequences and series - Arithmetic progression & Geometric progression

If the $m$-th, $n$-th, and $p$-th terms of an A.P. and a G.P. are equal and are $x,y,z$ respectively, prove that $x^{y-z}\cdot y^{z-x}\cdot z^{x-y}= 1$. To solve this question, I simply substituted the values of $x,y,z$ from the G.P. (i.e. $x = ar^{m-1}$, and so on).
Can you help me solve this in a more interesting way?

limits - $\lim_{n \to \infty} \left| \cos \left( \frac{\pi}{4(n-1)} \right) \right|^{2n-1}$




I need help evaluating the limit $$\lim_{n \to \infty} \left| \cos \left( \frac{\pi}{4(n-1)} \right) \right|^{2n-1} = L$$



I already know that $L = 1$, but I need help showing it.



The best idea I could come up with was to take the series representation of cosine.
$$\lim_{j,n \to \infty} \left| 1 - \left( \frac{\pi}{4(n-1)} \right)^2 \frac{1}{2!} + \cdots + \frac{(-1)^j}{(2j)!}\left( \frac{\pi}{4(n-1)} \right)^{2j} \right|^{2n-1} = L$$



All lower order terms go to zero leaving:




$$\lim_{j,n \to \infty} \left|\frac{1}{(2j)!}\left( \frac{\pi}{4(n-1)} \right)^{2j} \right|^{2n-1} = L$$



But this doesn't really seem like I am any closer. How do I proceed? Obviously L'Hôpital's rule will occur eventually. Hints?


Answer



Put $$L = \lim_{n \to \infty} \left| \cos \left( \frac{\pi}{4(n-1)} \right) \right|^{2n-1}$$
Then $$\log(L) = \lim_{n \to \infty} (2n-1)\log\left( \cos \left( \frac{\pi}{4(n-1)} \right) \right).$$
This is a $0\cdot \infty$ indeterminate form. Put the $2n -1$ in the denominator as $\frac{1}{2n-1}$ and invoke L'Hôpital.
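For completeness, a sketch of how that computation finishes: using $\log\cos u = -\frac{u^2}{2}+O(u^4)$ as $u\to 0$ with $u=\frac{\pi}{4(n-1)}$,
$$\log(L) = \lim_{n \to \infty}\, (2n-1)\left(-\frac{\pi^2}{32(n-1)^2}+O\!\left(n^{-4}\right)\right)=0,$$
so $L=e^0=1$.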


Tuesday 29 November 2016

Limit involving exponentials and arctangent without L'Hôpital



$$\lim_{x\to0}\frac{\arctan x}{e^{2x}-1}$$
How to do this without L'Hôpital and similar tools? With $\arctan x=y$ we can rewrite it as $\lim_{y\to0}\frac y{e^{2\tan y}-1}$, but from here I'm stuck.


Answer



I thought it might be instructive to present a way forward that goes back to "basics." Herein, we rely only on elementary inequalities and the squeeze theorem. To that end, we proceed with a primer.




PRIMER ON A SET OF ELEMENTARY INEQUALITIES:




In THIS ANSWER, I showed using only the limit definition of the exponential function and Bernoulli's Inequality that the exponential function satisfies the inequalities



$$\bbox[5px,border:2px solid #C0A000]{1+x\le e^x\le \frac{1}{1-x}} \tag 1$$



for $x<1$.



And in THIS ANSWER, I showed using only elementary inequalities from geometry that the arctangent function satisfies the inequalities



$$\bbox[5px,border:2px solid #C0A000]{\frac{|x|}{\sqrt{1+x^2}}\le |\arctan(x)|\le |x|} \tag 2$$




for all $x$.







Using $(1)$ and $(2)$ we can write for $0<x<\frac12$ (so that $(1)$ applies with $2x<1$)



$$\frac{x}{\sqrt{1+x^2}\left(\frac{2x}{1-2x}\right)}\le \frac{\arctan(x)}{e^{2x}-1}\le \frac{x}{2x} \tag 3$$



whereupon applying the squeeze theorem to $(3)$, we find that




$$\lim_{x\to 0^+}\frac{\arctan(x)}{e^{2x}-1}=\frac12$$



Similarly, using $(1)$ and $(2)$ for $x<0$ we can write



$$\frac{x}{\sqrt{1+x^2}\,\left(2x\right)}\le \frac{\arctan(x)}{e^{2x}-1}\le \frac{x}{\left(\frac{2x}{1-2x}\right)} \tag 4$$



whereupon applying the squeeze theorem to $(4)$, we find that



$$\lim_{x\to 0^-}\frac{\arctan(x)}{e^{2x}-1}=\frac12$$





Inasmuch as the limits from the right and left sides are equal we can conclude that



$$\bbox[5px,border:2px solid #C0A000]{\lim_{x\to 0}\frac{\arctan(x)}{e^{2x}-1}=\frac12}$$



calculus - Finding the $p, r, q$ for which the series converges



I'm dealing with the series: $$\sum_{n=3}^{\infty} \frac{1}{n^p(\ln n)^q(\ln(\ln n))^r},$$ looking for the set of all $p,q,r$ such that the series converges. Is there a way to determine this without use of the integral test? In that case I would substitute $u=\ln x$ and so on, but I'm wondering if there is a method using ratio test, root test, comparison test, condensation criteria etc. Any help is appreciated.


Answer



First, suppose $p=q=1$. We can then use the integral test.




If $r=1$ we have



$$
\int_3^{y} \frac{dx}{x \ln(x) (\ln(\ln(x)))} = \left[\ln(\ln(\ln(x)))\right]_3^y
=\ln(\ln(\ln(y))) -\ln(\ln(\ln(3)))
$$



which diverges as $y \to \infty$. For $r \neq 1$ we have



$$

\int_3^{y} \frac{dx}{x \ln(x) (\ln(\ln(x)))^r} = \frac{1}{1-r}\left[\ln(\ln(x))^{1-r}\right]_3^y = \frac{1}{1-r}\left(\ln(\ln(y))^{1-r} - \ln(\ln(3))^{1-r}\right)
$$



which diverges as $y \to \infty$ if and only if $r < 1$. So when $p = q =1$, the series converges when $ r>1$ and diverges when $r \le 1$.



When $p \neq 1$, we can use the comparison test. If $a_n = \frac{1}{n^p (\ln(n))^q (\ln(\ln(n)))^r}$ with $p < 1$, then let $b_n = \frac{1}{n (\ln(n)) (\ln(\ln(n)))}$. Then



$$
\frac{a_n}{b_n} = \frac{n (\ln(n)) (\ln(\ln(n)))}{n^p (\ln(n))^q (\ln(\ln(n)))^r} = n^{1-p} (\ln(n))^{1-q} (\ln(\ln(n)))^{1-r}
$$




Since $1-p$ is positive, $n^{1-p}$ goes to infinity as $n$ goes to infinity, and will dominate the other two terms, so the ratio $\frac{a_n}{b_n}$ goes to infinity. Since $\sum_3^\infty b_n$ diverges by the integral test, so does $\sum_3^\infty a_n$ by the comparison test.



Similarly, if $a_n = \frac{1}{n^p (\ln(n))^q (\ln(\ln(n)))^r}$ with $p > 1$, then let $b_n = \frac{1}{n (\ln(n)) (\ln(\ln(n)))^2}$. Then



$$
\frac{a_n}{b_n} = \frac{n (\ln(n)) (\ln(\ln(n)))^2}{n^p (\ln(n))^q (\ln(\ln(n)))^r} = n^{1-p} (\ln(n))^{1-q} (\ln(\ln(n)))^{2-r}
$$



Since $1-p$ is negative, $n^{1-p}$ goes to zero as $n$ goes to infinity, and will dominate the other two terms, so the ratio $\frac{a_n}{b_n}$ goes to zero. Since $\sum_3^\infty b_n$ converges by the integral test, so does $\sum_3^\infty a_n$ by the comparison test.




A similar argument shows that the series converges when $p=1, q > 1$ and diverges when $p=1, q < 1$.



Putting it all together, the series converges if either:



$p>1$



$p=1, q > 1$



$p=q=1, r > 1$




and diverges otherwise.
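As an aside on the original request to avoid the integral test: in the boundary case $p=1$ one can instead use Cauchy condensation (a sketch). Condensing $a_n = \frac{1}{n(\ln n)^q(\ln\ln n)^r}$ gives
$$2^k a_{2^k}=\frac{1}{(k\ln 2)^q\left(\ln(k\ln 2)\right)^r},$$
so the series behaves like $\sum_k \frac{1}{k^q(\ln k)^r}$: the same family with one logarithm peeled off. Iterating (condense once more when $q=1$) recovers the conditions $q>1$, or $q=1,\ r>1$.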


Given an inductive definition, how do I calculate elements?



I'm having difficulty with a mathematical problem.




I've got the following;



The basis is:



 -1 ∈ V


And the induction is;



x ∈ V → x/(1-x) ∈ V



Now, I have made 4 statements and I want to get to know if they're either true, or false.



The statements are;



1. All elements are negative. (under zero)
2. All elements are between -2 and 0
3. -1/7 ∈ V
4. -2/3 ∈ V



There is also a closure clause:
$V$ contains no elements other than those obtained from the basis by (repeatedly) applying the induction rule.



Now, I'm not really sure how to start here. How can I effectively work out which elements belong to an inductively defined set?


Answer



You can proceed as follows :





  1. Start by computing the first few terms : $x_0 = -1$, then
    $$
    x_1 = \frac{-1}{1-(-1)} = -\frac12
    $$
    $$
    x_2 = \frac{-1/2}{1+1/2} = -\frac13
    $$
    $$
    x_3 = \frac{-1/3}{1+1/3} = -\frac14
    $$

    This gives you an idea as to what $x_n$ should be, i.e. $x_n = -\frac{1}{n+1}$.


  2. Prove your guess by induction :




a) $x_0 = -1 = -1/(0+1)$. Hence your guess is true when $n=0$



b) Assume $x_{n-1} = -\frac1n$; then
$$
x_n = \frac{x_{n-1}}{1-x_{n-1}} = \frac{-1/n}{1+1/n} = -\frac{1}{n+1}
$$

Hence, your guess is true for all $n\in \mathbb{N}$.



By the closure clause, it follows that
$$
V = \{-1, -1/2, -1/3, \ldots \}
$$
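A small computational sketch of the above (the code and names are mine, just to make the claims checkable):

from fractions import Fraction

def elements_of_V(count):
    # start from the basis element -1 and repeatedly apply x -> x/(1-x)
    x = Fraction(-1)
    out = [x]
    for _ in range(count - 1):
        x = x / (1 - x)
        out.append(x)
    return out

V = elements_of_V(10)
print(V)                       # [-1, -1/2, -1/3, ..., -1/10]
print(Fraction(-1, 7) in V)    # True:  statement 3 holds
print(Fraction(-2, 3) in V)    # False: statement 4 fails

This also makes statements 1 and 2 plausible at a glance: every element is of the form $-1/(n+1)$, hence negative and between $-2$ and $0$.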


Monday 28 November 2016

Why does my calculus textbook imply that differentials can't be manipulated algebraically?

The textbook defines differentials like this.



Let $y=f(x)$ be a differentiable function of $x$. The differential of $x$ (denoted by $dx$) is any nonzero real number. The differential of $y$ (denoted by $dy$) is equal to $f'(x)dx$.



It goes on to say that the derivative rules can be written in differential form using Leibniz notation. For example, it says the chain rule in differential form is




$$\frac{dy}{dx} = \frac{dy}{du} \frac{du}{dx}$$



The book says it appears to be true because the $du$'s would divide out, and although the reasoning is incorrect, it helps you remember the chain rule.



Why is the reasoning incorrect? Given those definitions of differentials, what's stopping you from manipulating them algebraically?

divisibility by (n-1) in base n

I just found out that to check whether a number written in base $n$ is divisible by $n-1$, I just need to sum its digits, again and again, until I get to a single digit, and if this digit is $n-1$, then the number is divisible by $n-1$.



For example, 45 in base 10 is divisible by 9, because the digits 4 + 5 = 9.



Why does this happen?




I'm trying to prove it for base 16, and can't seem to get it right.
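A sketch of the standard argument, which works in any base $n\ge 2$ (so base $16$ included): since $n\equiv 1 \pmod{n-1}$, every power satisfies $n^i\equiv 1 \pmod{n-1}$, hence a number with digits $d_i$ in base $n$ satisfies
$$\sum_i d_i\, n^i \equiv \sum_i d_i \pmod{n-1}.$$
So the number and its digit sum leave the same remainder mod $n-1$, and the digit sum may be reduced repeatedly. In base $16$, for example, $\mathrm{E1}_{16}=225$ has digit sum $14+1=15$, and indeed $15 \mid 225$.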

calculus - Evaluating $\lim_{x\to0}\frac{x\sin{(\sin{x})}-\sin^2{x}}{x^6}$






Evaluate:
$$\lim_{x\to0}\frac{x\sin{(\sin{x})}-\sin^2{x}}{x^6}$$




I have been trying to solve this for $15$ minutes, but the $\sin(\sin{x})$ part has me stuck.



My attempt:



I tried multiplying with $x$ inside the $\sin$ as $\sin{(\frac{x\sin{x}}{x})}$. No leads.



Answer



Use $\sin(u)=u-\frac{u^3}{6}+\frac{u^5}{120}+o(u^6)$ (three times).
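Carrying that hint out (a sketch; the coefficients are worth re-checking): one finds
$$\sin(\sin x)=x-\frac{x^3}{3}+\frac{x^5}{10}+O(x^7),\qquad \sin^2 x = x^2-\frac{x^4}{3}+\frac{2x^6}{45}+O(x^8),$$
so
$$x\sin(\sin x)-\sin^2 x=\left(\frac{1}{10}-\frac{2}{45}\right)x^6+O(x^8)=\frac{x^6}{18}+O(x^8),$$
and the limit is $\frac{1}{18}$.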


linear algebra - Similar matrices proof



Good evening, I'm stuck on how to proceed in the following question.



Let $A$ and $B$ be $n\times n$ matrices, and show that if there is a $\lambda \in \mathbb{R}$ such that $A-\lambda I$ is similar to $B-\lambda I$, then $A$ is similar to $B$.



I thought to use determinant for both sides, but I'm not sure if it's the right way.




Thanks in advance!


Answer



Suppose $A-\lambda I$ and $B-\lambda I$ are similar.
By definition, we can find $S$ such that
$$A-\lambda I=S^{-1}(B-\lambda I)S = S^{-1} B S - \lambda I$$
Adding $\lambda I$ to the leftmost and rightmost sides of this equality reveals that $A$ and $B$ are also similar.


calculus - Why do second or higher derivatives work for finding concavity and inflection points?

Say we have the function $f(x)=(x-2)^3+3$ (graph omitted),



and we want to find on which regions $f$ has positive/negative concavity, and where the inflection points are.



I learned to answer these questions doing:
\begin{align*}
f^{'}(x) &= 3(x-2)^2 \\
f^{''}(x) &= 2\cdot 3(x-2)^1 \\

f^{''}(x) &= 0 \implies x = 2 \\
f(2)&= 3
\end{align*}

$\therefore$
Concavity is positive within $(2, \infty)$, negative within $(- \infty, 2)$
Inflection point(s): $(2, 3)$



But why does this work? Will I have issues when the function has multiple inflection points or do I just have to be more careful? And what if the degree of a function were very high, say of degree 6? Would I have to keep computing the derivative until I get a derivative of degree 1 or does it only take until the second derivative?
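(A remark that may help frame this, independent of the example: whatever the degree of the function, concavity is read off from $f''$ alone. Find the zeros of $f''$, test the sign of $f''$ on each interval between consecutive zeros, and record an inflection point exactly where that sign changes; a function with several inflection points just produces several sign changes to check. Here $f''(x)=6(x-2)$ changes from negative to positive at $x=2$, giving the stated concavity and the single inflection point $(2,3)$. Derivatives beyond the second are not needed for concavity questions.)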

real analysis - Prove that the graph of a continuous function on $(-\infty, \infty)$ is completely determined once one knows a certain countable set of points on it.



A question from Introduction to Analysis by Arthur Mattuck:




Prove that the graph of a continuous function on $(-\infty, \infty)$ is completely determined once one knows a certain countable set of points on it.





I have no idea.


Answer



Hint: $\mathbb Q$ is countable and dense in $\mathbb R$.


limits - "Proving" that $0^0 = 1$




I know that $0^0$ is one of the seven common indeterminate forms of limits, and I found on Wikipedia two very simple examples in which one limit equates to $1$, and the other to $0$. I also saw here: Prove that $0^0 = 1$ using binomial theorem,
that you can define $0^0$ as $1$ if you'd like.




Even so, I was curious, so I did some work and seemingly demonstrated that $0^0$ always equals 1.



My Work:



$$y=\lim_{x\rightarrow0^+}{(x^x)}$$



$$\ln{y} = \lim_{x\rightarrow0^+}{(x\ln{x})} $$



$$\ln{y}= \lim_{x\rightarrow0^+}{\frac{\ln{x}}{x^{-1}}} = -\frac{\infty}{\infty} $$




$\implies$ Use L'Hôpital's Rule



$$\ln{y}=\lim_{x\rightarrow0^+}\frac{x^{-1}}{-x^{-2}} $$
$$\ln{y}=\lim_{x\rightarrow0^+} -x = 0$$
$$y = e^{0} = 1$$



What is wrong with this work? Does it have something to do with using $x^x$ rather than $f(x)^{g(x)}$? Or does it have something to do with using operations inside limits? If not, why is $0^0$ considered indeterminate at all?


Answer



Someone said that $0^0=1$ is correct, and got a flood of downvotes and a comment saying it was simply wrong. I think that someone, me for example, should point out that while saying $0^0=1$ is correct is an exaggeration, calling that "simply wrong" isn't quite right either. There are many contexts in which $0^0=1$ is the standard convention.




Two examples. First, power series. If we say $f(t)=\sum_{n=0}^\infty a_nt^n$ that's supposed to entail that $f(0)=a_0$. But $f(0)=a_0$ depends on the convention that $0^0=1$.



Second, elementary set theory: say $|A|$ is the cardinality of $A$. The cardinality of the set of all functions from $A$ to $B$ should be $|B|^{|A|}$. Now what if $A=B=\emptyset$? There as well we want to say $0^0=1$; otherwise we could just say the cardinality of the set of all maps is $|B|^{|A|}$ unless $A$ and $B$ are both empty.



(Yes, there is exactly one function $f:\emptyset\to\emptyset$...)



Edit: Seems to be a popular answer, but I just realized that it really doesn't address what the OP said. For the record, of course the OP is nonetheless wrong in claiming to have proved that $0^0=1$. It's often left undefined, and in any case one does not prove definitions...


Determine the sum of the given series



I'm trying to find the sum of this series, but I'm not sure how to go about doing that. Based on the Direct Comparison Test, I've found that this series converges, but I don't know how to find its sum. Any help would greatly be appreciated.



Rusty $$\sum_{n=1}^\infty \dfrac6{n(n+3)}.$$



Answer



Hint



Use partial fraction decomposition to get $$\frac{6}{n (n+3)}=2(\frac{1}{n} -\frac{1}{n+3})$$ and check if, by chance, they do not telescope.
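Following the hint through (a sketch): the partial sums telescope,
$$\sum_{n=1}^{N}2\left(\frac1n-\frac1{n+3}\right)=2\left(1+\frac12+\frac13-\frac1{N+1}-\frac1{N+2}-\frac1{N+3}\right)\xrightarrow[N\to\infty]{}2\cdot\frac{11}{6}=\frac{11}{3}.$$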


Sunday 27 November 2016

real analysis - Does the series $\sum_{n=2}^\infty \frac {\sin(n+\frac 1n)}{(\ln n)^2}$ converge?



Could you give me some hint how to deal with this series ?




I could not conclude about absolute convergence because
$\frac {\left|\sin\left( n+\frac 1n \right)\right|}{(\ln n)^2}\le \frac 1{(\ln n)^2}$
does not get me anywhere.



I tried to use the Dirichlet's test to prove convergence of this series:
$\sum_{n=2}^\infty \frac {\sin\left( n+\frac 1n \right)}{(\ln n)^2}$
but I could not prove that the partial sums $\sum_{n=2}^N \sin(n+\frac 1n)$ are bounded.



Thanks.


Answer




$\sin (n+\frac{1}{n}) = \sin n \cos \frac{1}{n} + \sin \frac{1}{n} \cos n$



First, $\sum_{n=2}^\infty \frac{\sin \frac{1}{n} \cos n}{\ln ^2 n}$ is absolutely convergent by the comparison test, because



$\sum_{n=2}^\infty \left| \frac{\sin \frac{1}{n} \cos n}{\ln ^2 n} \right| \le \sum_{n=2}^\infty \frac{\frac{1}{n} \cdot 1}{\ln ^2 n} < \infty$



Convergence of the last series follows, for example, from Cauchy's condensation test: http://en.wikipedia.org/wiki/Cauchy%27s_condensation_test



Secondly, observe that the function $f(x)=\frac{\cos \frac{1}{x}}{\ln ^2 x}$ is decreasing for big $x$, because $f'(x)= \frac{\ln x \sin \left( \frac{1}{x} \right) -2x \cos \left( \frac{1}{x} \right)}{x^2 \ln ^3 x}$.




It is easy to see that $f'(x)<0$ for large $x$ (the numerator behaves like $\frac{\ln x}{x}-2x \to -\infty$, while the denominator is positive), therefore there is $M>0$ such that the sequence $\left( f(n) \right)_{\mathbb{N} \ni n>M}$ is decreasing and tends to $0$. If we can show that $\left|\sum_{k=1}^n \sin k \right| $ is bounded, we can then apply the Dirichlet test to conclude that $\sum_{n=2}^\infty \frac{\sin n \cdot \cos \frac{1}{n}}{\ln ^2 n}$ converges.



It's well known that $\sum_{k=1}^n \sin ka = \frac{\sin \left(\frac{na}{2} \right) \sin \left( \frac{(n+1)a}{2} \right)}{\sin \left( \frac{a}{2}\right)}$ (you can prove it, e.g., with complex numbers), so



$\left| \sum_{k=1}^n \sin k \right| \le \frac{1}{\left| \sin \frac{1}{2}\right|}$.



To sum up: $\sum_{n=2}^\infty \frac{\sin (n+ \frac{1}{n})}{\ln ^2 n}$ converges as a sum of two convergent series.


integration - Show that $\int\limits_2^{+\infty}\frac{\sin{x}}{x\ln{x}}\,\mathrm dx$ is conditionally convergent



The integral $\displaystyle\int_{2}^{+\infty}\frac{\sin{x}}{x\ln{x}}\,dx$ is conditionally convergent.



I know that $\displaystyle\int_{2}^{+\infty}\frac{\sin{x}}{x}\, dx$
is conditionally convergent and that, for all $p > 1$,
$\displaystyle\int_{2}^{+\infty}\frac{\sin{x}}{x^p}\, dx$ is absolutely convergent,

but $x\ln{x}$ grows between $x$ and $x^p$, so how can one prove that $\displaystyle\int_{2}^{+\infty}\frac{\sin{x}}{x\ln{x}}\, dx$ is conditionally convergent?


Answer



This is roughly the integral analogue of the alternating series test. Since proving its generalization causes little harm, let me actually show




Proposition 1. Suppose that $f : [a, \infty) \to \mathbb{R}$ satisfies the following two conditions:




  1. $f$ is monotone-decreasing, i.e., $f(x) \geq f(y)$ for all $a \leq x \leq y$.

  2. $\lim_{x\to\infty} f(x) = 0$.




Then



$$ \int_{a}^{\infty} f(x)\sin(x) \, \mathrm{d}x = \lim_{b\to\infty} \int_{a}^{b} f(x)\sin(x) \, \mathrm{d}x $$



converges. Moreover, this integral is absolutely convergent if and only if $\int_{a}^{\infty} f(x) \, \mathrm{d}x < \infty$.




The proof is quite simple. We first prove that the integral converges. Let $n$ be an integer so that $\pi n \geq a$. Then for $ b \geq \pi n$,




\begin{align*}
\int_{a}^{b} f(x)\sin(x) \, \mathrm{d}x
&= \int_{a}^{\pi n} f(x)\sin(x) \, \mathrm{d}x + \sum_{k=n}^{\lfloor b/\pi\rfloor - 1} \int_{\pi k}^{\pi(k+1)} f(x)\sin(x) \, \mathrm{d}x \\
&\quad + \int_{\pi\lfloor b/\pi\rfloor}^{b} f(x)\sin(x) \, \mathrm{d}x.
\end{align*}



Writing $N = \lfloor b/\pi \rfloor$ and defining $a_k$ by $a_k = \int_{0}^{\pi} f(x+\pi k)\sin(x) \, \mathrm{d}x$, we find that





  1. $a_k \geq 0$, since $f(x+\pi k) \geq 0$ for all $x \in [0, \pi]$.


  2. $a_{k+1} \leq a_k$ since $f(x+\pi k) \geq f(x+\pi(k+1))$ for all $x \in [0, \pi]$.


  3. $a_k \to 0$ as $k\to\infty$, since $a_k \leq \int_{0}^{\pi} f(\pi k) \sin (x) \, \mathrm{d}x = 2f(\pi k) \to 0$ as $k \to \infty$.


  4. By a similar computation as in step 3, we check that $\left| \int_{\pi N}^{b} f(x) \sin (x) \, \mathrm{d}x \right| \leq 2f(\pi N)$, and so, $\int_{\pi N}^{b} f(x) \sin (x) \, \mathrm{d}x \to 0$ as $b\to\infty$.


  5. We have



    $$ \sum_{k=n}^{N - 1} \int_{\pi k}^{\pi(k+1)} f(x)\sin(x) \, \mathrm{d}x = \sum_{k=n}^{N-1} (-1)^k a_k. $$



    So, by the alternating series test, this converges as $N\to\infty$, hence as $b \to \infty$.





Combining altogether, it follows that $\int_{a}^{b} f(x)\sin(x) \, \mathrm{d}x $ converges as $b\to\infty$.



To show the second assertion, let $n$ still be an integer with $\pi n \geq a$. Then for $k \geq n$, integrating each side of the inequality $f(\pi(k+1))|\sin x| \leq f(x)|\sin x| \leq f(\pi k)|\sin x|$ for $x \in [\pi k, \pi(k+1)]$ gives



$$ 2f(\pi(k+1))
\leq \int_{\pi k}^{\pi(k+1)} f(x)|\sin(x)| \, \mathrm{d}x
\leq 2f(\pi k) $$



and a similar argument shows




$$ \pi f(\pi(k+1))
\leq \int_{\pi k}^{\pi(k+1)} f(x) \, \mathrm{d}x
\leq \pi f(\pi k). $$



From this, we easily check that



$$ \frac{2}{\pi} \int_{\pi(n+1)}^{\infty} f(x) \, \mathrm{d}x
\leq \int_{\pi n}^{\infty} f(x)|\sin x| \, \mathrm{d}x
\leq 2f(\pi n) + \frac{2}{\pi} \int_{\pi n}^{\infty} f(x) \, \mathrm{d}x. $$




Therefore the second assertion follows.


trigonometry - Can $A\sin^2 t + B\sin t\cos t + C\sin t + D\cos t + E = 0$ be solved algebraically?



This started out much more complex, but I've reduced an equation to this (it's for finding intersections of ellipses):



$$A\sin^2(t)+B\sin(t)\cos(t)+C\sin (t)+D\cos(t)+E=0$$




I want to solve for t where A/B/C/D/E are constants. Is this solvable algebraically, or is only numeric approximation possible?



Using trig identities and the formula for phase shifting, I can further simplify it down to this form:



$$\sin(2t+F) + G\sin(t+H) = I$$



Where F/G/H/I are constants. The formula is much simpler, but this may be a dead end, because now we have two angles to deal with.


Answer



Yes: use
$$\sin(t)=\frac {2\tan \left( t/2 \right) }{1+ \tan^2 \left( t/2 \right)}$$
$$\cos(t)=\frac {1- \tan^2 \left( t/2 \right) }{1+ \tan^2 \left( t/2 \right)}$$
and after this you can substitute $\tan(t/2)=z$. Clearing the common denominator $(1+z^2)^2$ then leaves a polynomial equation of degree at most $4$ in $z$, and quartics are solvable in radicals.


Saturday 26 November 2016

complex analysis - Series of a subsequence which converges in $\mathbb{C}$



I came across an old question in the analysis course I am studying which goes like this:



Assume that $(x_n)_{n \in \mathbb{N}}$ is a sequence which converges to $0$ in $\mathbb{C}$. Is there always a subsequence $(x_{n_k})_{k \in \mathbb{N}}$ such that the series $\sum_k x_{n_k}$ converges absolutely? If yes, prove it; if not, give a counterexample.



My first intuition looking at the question is that the answer is yes, but I'm not really sure how to go about proving it. I know that in $\mathbb{R}$, if a sequence converges, then every subsequence converges to the same limit. Thus, given that the original sequence converges to $0$, each subsequence must also converge to $0$, and the sum of any of these subsequences would also converge.




Is this correct, or am I wrong from the start? Any help is appreciated.


Answer



Since $x_n$ converges towards zero, we can pick indices $n_1<n_2<\cdots$ such that $|x_{n_k}|<{1\over 2^k}$ for every $k$. Then $\sum_k x_{n_k}$ converges absolutely, by comparison with the geometric series $\sum_k 2^{-k}$.


calculus - Show that $\lim_{n \rightarrow \infty} \frac{n!}{2^{n}} = \infty$

Show that $ \lim_{n \rightarrow \infty} \frac{n!}{2^{n}} = \infty $



I know what happens intuitively....



$n!$ grows a lot faster than $2^{n}$ which implies that the limit goes to infinity, but that's not the focus here.



I'm asked to show this algebraically and use the definition for a limit of a sequence.



"Given an $\epsilon>0$ , how large must $n$ be in order for $\frac{n!}{2^{n}}$ to be greater than this $\epsilon$ ?"




My teacher recommends using an inequality to prove it but I'm feeling completely lost...
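One inequality that works (a sketch, not necessarily the one your teacher had in mind): for $n\ge 4$, every factor $\frac k2$ with $k\ge 2$ is at least $1$, so
$$\frac{n!}{2^n}=\frac{n}{2}\cdot\frac{(n-1)!}{2^{n-1}}\ \ge\ \frac{n}{2}\cdot\frac{3!}{2^3}\ \ge\ \frac{n}{4}.$$
Hence, given any $M>0$, taking $n>\max(4,\,4M)$ forces $\frac{n!}{2^n}>M$, which is exactly the definition of the limit being $\infty$.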

probability theory - Let $X$ be a positive random variable with distribution function $F$. Show that $E(X)=\int_0^\infty(1-F(x))dx$

Let $X$ be a positive random variable with distribution function $F$. Show that $$E(X)=\int_0^\infty(1-F(x))dx$$



Attempt




$\int_0^\infty(1-F(x))\,dx= \int_0^\infty(1-F(x))\cdot 1\,dx = x (1-F(x))\big|_0^\infty + \int_0^\infty x\, dF(x) $ (integration by parts)



$=0 + E(X)$ where boundary term at $\infty$ is zero since $F(x)\rightarrow 1$ as $x\rightarrow \infty$



Is my proof correct?
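One caveat (a remark on the proof rather than a correction): $F(x)\to 1$ alone does not force the boundary term $x\,(1-F(x))$ to vanish. That step is justified when $E(X)<\infty$, since
$$x\,(1-F(x))=x\int_x^\infty dF(t)\le\int_x^\infty t\, dF(t)\xrightarrow[x\to\infty]{}0.$$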

Friday 25 November 2016

real analysis - Find the limit of this sequence $\lim_{n\to \infty}\frac{n}{1 + \frac{1}{n}} - n$



Find the limit of this sequence $$\lim_{n\to \infty}\frac{n}{1 + \frac{1}{n}} - n$$



First I tried dividing everything by $n$ but that would leave me with $$\lim_{n\to \infty}\frac{1}{\frac{1}{n} + \frac{1}{n^2}} - 1$$



and as $n\to \infty$ i'd be left with $\frac{1}{0} - 1$. Would I be correct in saying that the limit is -1 or does the $\frac{1}{0}$ mess that up?


Answer



The limit is $-1$ but the $\frac{1}{0} $ does mess it up.




Just add the fractions:



you get $$\frac{n^2}{n+1} - n = \frac{n^2 - n^2 - n}{n+1} = \frac{-n}{n+1}$$



Finding the limit should now be easy


partial fraction for complex roots

While solving Laplace transforms using partial fraction expansion, I get confused when solving the partial fractions for complex roots.



I have this equation $$ \frac {2s^2+5s+12} {(s^2+2s+10)(s+2)}$$



Could anyone please help me understand the steps for solving a partial fraction decomposition when there are complex roots?
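For this particular example, a worked sketch (worth re-deriving yourself): look for
$$\frac{2s^2+5s+12}{(s^2+2s+10)(s+2)}=\frac{As+B}{s^2+2s+10}+\frac{C}{s+2}.$$
Setting $s=-2$ gives $C=\frac{8-10+12}{4-4+10}=1$; subtracting $\frac{1}{s+2}$ leaves $\frac{s^2+3s+2}{(s^2+2s+10)(s+2)}=\frac{s+1}{s^2+2s+10}$, so $A=B=1$. Completing the square, $s^2+2s+10=(s+1)^2+3^2$, so the inverse Laplace transform is $e^{-t}\cos(3t)+e^{-2t}$: complex roots are handled by completing the square rather than factoring over $\mathbb{C}$.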

solution verification - Bounding a complex integral over a square



I'm solving the following exercise:





Use the estimate lemma to prove that $$\left|\oint_\gamma \frac{z-2}{z-3}\,{\rm d}z\right| \leq 4\sqrt{10},$$where $\gamma$ is the square with vertices $\pm 1 \pm i$.




Clearly the perimeter of the square is $8$. Starting from $-1-i$, counterclockwise, call the sides of the square $\gamma_1,\ldots,\gamma_4$.




  • For $\gamma_1$ and $\gamma_3$, we have $|z-2| \leq \sqrt{10}$ and $|z-3| \geq \sqrt{5}$, whence: $$ \left|\frac{z-2}{z-3}\right|\leq \frac{\sqrt{10}}{\sqrt{5}} = \sqrt{2}. $$


  • For $\gamma_2$, we have $|z-2| \leq \sqrt{2}$ and $|z-3| \geq 2$, so: $$\left|\frac{z-2}{z-3}\right|\leq \frac{\sqrt{2}}{2}. $$



  • For $\gamma_4$, we have $|z-2| \leq \sqrt{10}$ and $|z-3| \geq 4$, so: $$\left|\frac{z-2}{z-3}\right| \leq \frac{\sqrt{10}}{4} $$
    The greatest of these upper bounds is $\sqrt{2}$. So that inequality is good on all of $\gamma$. We get:$$ \left|\oint_\gamma \frac{z-2}{z-3}\,{\rm d}z\right| \leq 8\sqrt{2}. $$






I got these inequalities geometrically, looking at maximum and minimum distances from $2$ and $3$ to said curves $\gamma_j$. I don't see how he got $4\sqrt{10}$. Can someone help me please?



Here's a figure to make your life easier:




(figure omitted)


Answer



I'll convert my comment into an answer:



Since $8\sqrt{2} < 4\sqrt{10}$, you have found a stronger upper bound for $\left|\oint_\gamma \frac{z-2}{z-3}\,{\rm d}z\right|$ than what was required by the problem.



We can get the weaker upper bound that the problem is asking for as follows:



For all $z$ on the curve, we have $|z-2| \le \sqrt{10}$ and $|z-3| \ge 2$.




Hence, $\left|\dfrac{z-2}{z-3}\right| \le \dfrac{\sqrt{10}}{2}$, and thus, $\displaystyle\left|\oint_\gamma \frac{z-2}{z-3}\,{\rm d}z\right| \le 8 \cdot \dfrac{\sqrt{10}}{2} = 4\sqrt{10}$.



This gives us a weaker upper bound since we didn't have to consider $3$ separate cases.



We can get an even stronger upper bound by using the Residue Theorem to get $\displaystyle\left|\oint_\gamma \frac{z-2}{z-3}\,{\rm d}z\right| = 0$, (because the only pole $z = 3$ is outside $\gamma$), but that's overkill.


linear algebra - Decompose $A$ to the product of elementary matrices. Use this to find $A^{-1}$



Decompose $A$ to the product of elementary matrices. Use this to find $A^{-1}$
$$
A =
\begin{bmatrix}
3 & 1\\

1 & -2
\end{bmatrix}
$$



I understand how to reduce this into row echelon form, but I'm not sure what it means to decompose $A$ into a product of elementary matrices. I know roughly what elementary matrices are (the identity matrix with a single row operation applied to it), but I am not sure what a product of them means here. Could someone demonstrate an example please? It'd be very helpful.


Answer



An elementary matrix $E$ is a square matrix that is the result of performing a single elementary row operation on $I$. For example, $\left[ \begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix}\right]$ is an elementary matrix because adding the first row of $I$ to the second row of $I$ gives us this matrix. Moreover, matrix multiplication on the left by an elementary matrix is equivalent to performing the corresponding elementary row operation. Thus, with $A$ as in your question and $E$ as above, we have that
$$EA=\left[ \begin{matrix} 1& 0 \\ 1 & 1 \end{matrix}\right]\left[ \begin{matrix} 3 & 1 \\ 1 & -2 \end{matrix}\right]=\left[ \begin{matrix} 3 & 1 \\ 3+1 & 1+(-2) \end{matrix}\right] =\left[ \begin{matrix} 3 & 1 \\ 4 & -1 \end{matrix}\right]$$



A square matrix $A$ is invertible if and only if it can be reduced to the identity matrix, which is to say that by multiplying by finitely-many elementary matrices on the left we get the identity: $$E_nE_{n-1}\cdots E_2E_1A=I$$ so that $$A=E_1^{-1}E_2^{-1}\cdots E_{n-1}^{-1}E_n^{-1}.$$

The above is well-defined since every elementary matrix is invertible (its inverse corresponds to the elementary row operation that reverses the elementary row operation corresponding to the original elementary matrix).



Thus, the first step is to row-reduce $A$ to the identity $I$, keeping track of what operations we used, and then multiplying the corresponding inverses in the opposite order as indicated above.



For example, take $A=\left[ \begin{matrix} 1 & 2 \\ 2 & 1 \end{matrix}\right]$. We can reduce this to $I$ by subtracting two times the second row from the first row, giving us $\left[ \begin{matrix} -3 & 0 \\ 2 & 1 \end{matrix}\right]$. We can then add $2/3$ the first row to the second to give us $\left[ \begin{matrix} -3 & 0 \\ 0 & 1 \end{matrix}\right]$. Finally, we multiply the first row by $-1/3$. This gives us the decomposition
$$\left[ \begin{matrix} 1 & 2 \\ 2 & 1 \end{matrix}\right] =\left[ \begin{matrix} 1 & 2 \\ 0 & 1 \end{matrix}\right] \left[ \begin{matrix} 1 & 0 \\ -2/3 & 1 \end{matrix}\right] \left[ \begin{matrix} -3 & 0 \\ 0 & 1 \end{matrix}\right]$$
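A quick computational check of the worked example (a sketch; using sympy is my own choice of tool):

from sympy import Matrix, Rational, eye

A  = Matrix([[1, 2], [2, 1]])
E1 = Matrix([[1, -2], [0, 1]])               # row1 -> row1 - 2*row2
E2 = Matrix([[1, 0], [Rational(2, 3), 1]])   # row2 -> row2 + (2/3)*row1
E3 = Matrix([[Rational(-1, 3), 0], [0, 1]])  # row1 -> (-1/3)*row1

assert E3 * E2 * E1 * A == eye(2)            # the row reduction reaches I
assert E1.inv() * E2.inv() * E3.inv() == A   # the claimed decomposition
assert E3 * E2 * E1 == A.inv()               # and the product gives A^(-1)

The last line is the point of the exercise: once $E_n\cdots E_1A=I$, the matrix $E_n\cdots E_1$ is $A^{-1}$.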


normal distribution - Formula for the probability of being within $\epsilon$ of the mean.



It should be possible to restate that as $P(\mu-\sigma \Phi^{-1}(\frac{p+1}{2})\leq X\leq \mu+\sigma \Phi^{-1}(\frac{p+1}{2}))=p$.



In this answer, it says:




For a normal distribution, the probability of being within $\Phi^{-1}\left(\frac{p +1}{2}\right)$ standard deviations of the mean is $p$, where $\Phi^{-1}$ is the inverse of the cumulative distribution of a standard normal.





I tried expressing $\Phi^{-1}\left(\frac{p +1}{2}\right)$ in terms of $\operatorname{erf^{-1}}$, but then again I can't get rid of the error function.



Also taking $\Phi$ on both sides would give $\Phi(p)=(p+1)/2$, but a simulation with MATLAB for the case described in the linked question shows it checks out.



(All this provided I interpreted the linked answer correctly.)


Answer



If $X\sim N(\mu,\sigma)$, then $Y=\frac{X-\mu}{\sigma}\sim N(0,1) $ and



$$ \mathbb{P}[\mu-k\sigma \leq X \leq \mu+k\sigma] = \mathbb{P}[-k\leq Y \leq k].$$
Can you recognize $\Phi$ in the RHS now?
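Spelling the hint out (a short sketch): for $Y\sim N(0,1)$,
$$\mathbb{P}[-k\leq Y \leq k]=\Phi(k)-\Phi(-k)=2\Phi(k)-1,$$
so setting $2\Phi(k)-1=p$ gives $k=\Phi^{-1}\left(\frac{p+1}{2}\right)$, the claimed number of standard deviations.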



Prescribing norm and trace of elements in a finite field.

Let $\mathbb{F}_{q}$ be the finite field with $q$ elements, where $q$ is a prime power.



Let $n \geq 1$ be an integer and consider $\mathbb{F}_{q^n}|\mathbb{F}_{q}$.



There is a theorem that says the following:



Theorem: There is always an element $\alpha \in \mathbb{F}_{q^n}$ that is primitive and normal over $\mathbb{F}_{q}$.




We say that one can prescribe the norm and the trace of a primitive and normal (over $\mathbb{F}_{q}$) element $\alpha \in \mathbb{F}_{q^n}$ if, for every $a,b \in \mathbb{F}_{q}^\ast$, with $b$ primitive, there is a primitive and normal element $\alpha \in \mathbb{F}_{q^n}$ such that $Tr_{\mathbb{F}_{q^n}|\mathbb{F}_{q}}(\alpha) = a$ and $N_{\mathbb{F}_{q^n}|\mathbb{F}_{q}}(\alpha) = b$.



The assumption that $a$ is non-zero is because normal elements cannot have zero trace and a primitive element $\alpha \in \mathbb{F}_{q^n}$ must have norm a primitive element of $\mathbb{F}_{q}$.



My question is: the article I'm reading asserts that if $n \leq 2$, such an $\alpha$ can always be found with prescribed trace and norm, but I cannot see this. Can anyone help me?



In the case $\mathbb{F}_{q^2}|\mathbb{F}_{q}$ we have $Tr(\alpha) = \alpha + \alpha^q$ and $N(\alpha) = \alpha^{q+1}$. I cannot see why all possible values for the norm and trace in $\mathbb{F}_{q}$ are achieved by primitive normal elements of $\mathbb{F}_{q^2}$.



Edit: I'm trying to think about it's minimal polynomial. There is a fact (I will not prove here but it is true): In $\mathbb{F}_{q^2} | \mathbb{F}_{q}$, every primitive element is also normal. So, the minimal polynomial of a primitive normal element of $\mathbb{F}_{q^2}$ must be




$$X^2 -aX + b,$$ where $a = Tr(\alpha)$ and $b = N(\alpha)$. Still cannot see why every possible value for $N(\alpha)$ (any primitive element of $\mathbb{F}_{q}$) and $Tr(\alpha)$ (any non-zero element of $\mathbb{F}_{q}$) can be achieved.

Thursday 24 November 2016

algorithms - Modulo over rational numbers?



Consider two irreducible fractions:



$r_1 = \frac{p_1}{q_1}$



$r_2 = \frac{p_2}{q_2}$



with $r_1 \ge 0$ and $r_2 \ge 0$.




How is the modulo operation $\%$ defined over the rational numbers (I think $r_1 \% r_2$ is the $r_3$ such that $r_1 = r_2 \times n + r_3$ with $n$ a non-negative integer, but I am not sure of that), and how can one compute the numerator $p_3$ and the denominator $q_3$ from $p_1, q_1, p_2, q_2$ using only the following operations on integers: $+, -, \times, /, \%, \text{gcd}(), \text{lcm}()$?


Answer



If you don't impose any condition on $n$ then clearly
$$r_3 = r_1 - n r_2$$
is a solution as $n$ ranges over all the integers. If you want the smallest non-negative $r_3$ then choose
$$ n = \left\lfloor \frac{r_1}{r_2}\right\rfloor$$
If you want the $r_3$ that is smallest in magnitude then round the ratio instead of flooring it, i.e.
$$ n = \left\lfloor \frac{r_1}{r_2} + \frac{1}{2}\right\rfloor$$




If you program in C/C++ or a similar programming language, the code would be



n = (p1 * q2) / (p2 * q1);                    // using floor
n = (p1 * q2 + ((p2 * q1) >> 1)) / (p2 * q1); // using rounding
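In exact integer arithmetic, the reduced result $(p_3, q_3)$ can be computed like this (a Python sketch of the floor variant, with names of my own choosing):

from math import gcd

def rational_mod(p1, q1, p2, q2):
    # r3 = r1 - n*r2 with n = floor(r1/r2); inputs r1 = p1/q1 >= 0, r2 = p2/q2 > 0
    n = (p1 * q2) // (p2 * q1)
    p3 = p1 * q2 - n * p2 * q1   # numerator over the common denominator q1*q2
    q3 = q1 * q2
    g = gcd(p3, q3)
    return p3 // g, q3 // g

print(rational_mod(7, 3, 3, 4))  # 7/3 mod 3/4 -> (1, 12), i.e. 1/12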

real analysis - Does there exist a continuous onto function from $\mathbb{R}-\mathbb{Q}$ to $\mathbb{Q}$?



Does there exist a continuous onto function from $\mathbb{R}-\mathbb{Q}$ to $\mathbb{Q}$?
(where domain is all irrational numbers)



I have found many answers proving that there does not exist a continuous function which maps the rationals to the irrationals and vice versa.



But proving that was easier, since there the domain of the function was a connected set: we could use that connectedness, or the fact that the rationals are countable and the irrationals are uncountable.




But in this case those properties are not useful. I somehow think the Baire category theorem might be useful, but I am not good at using it.


Answer



Yes. Say $E_n$, for $n\in\Bbb Z$, is the set of irrationals in the interval $(n,n+1)$. Say $(q_n)_{n\in\Bbb Z}$ is an enumeration of $\Bbb Q$. Define $f(x)=q_n$ for $x\in E_n$.
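(Why this works, briefly, as I read the hint: each $E_n$ is open in $\Bbb R\setminus\Bbb Q$ because the endpoints of $(n,n+1)$ are rational, so $f$ is locally constant and hence continuous; and $f$ is onto because $(q_n)$ enumerates $\Bbb Q$.)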


calculus - On Vanishing Riemann Sums and Odd Functions





Let $ f: [-1,1] \to \mathbb{R} $ be a continuous function. Suppose that the $ n $-th midpoint Riemann sum of $ f $ vanishes for all $ n \in \mathbb{N} $. In other words,
$$
\forall n \in \mathbb{N}: \quad \mathcal{R}^{f}_{n} := \sum_{k=1}^{n} f \left( -1 + \frac{2k - 1}{n} \right) \cdot \frac{2}{n} = 0.
$$
Question: Is it necessarily true that $ f $ is an odd function?




It is easy to verify that if $ f $ is an odd continuous function, then $ \mathcal{R}^{f}_{n} = 0 $ for all $ n \in \mathbb{N} $. However, is the converse true?



This is part of an original research problem, so unfortunately, there is no other source except myself. With someone else, I managed to obtain the following partial result.





Theorem If $ f $ is a polynomial function and $ \mathcal{R}^{f}_{n} = 0 $ for all $ n \in \mathbb{N} $, then $ f $ has only odd powers, which immediately implies that $ f $ is an odd function.




The proof relies on properties of Bernoulli polynomials and Vandermonde matrices.



For the general case, I was thinking that Fourier-analytic tools might help, such as Poisson summation. A Fourier-analytic approach seems promising, but it has limitations and might not be able to fully resolve the question.



Would anyone care to offer some insight into the problem? Thanks!



Answer



Take the function $f(x)=\sum_{j\geq1} \alpha_j \cos(\pi j x)$. Then its $n$-th midpoint Riemann sum is
$$\begin{align}
0 = R_n f &=
\sum_{j\geq1}\alpha_j \sum_{1\leq k\leq n}\frac{2}{n} \cos\left( \pi j\left( -1 + \frac{2k-1}{n}\right)\right)
\\&= \frac{2}{n}\sum_{j\geq1}\alpha_j \sum_{1\leq k\leq n} (-1)^j\cos\left(\pi j(2k-1)/n\right)
\\&= \frac{2}{n} \sum_{j\geq1} \alpha_j \frac{\sin \pi j}{\sin (\pi j/n)}
\end{align}$$
where (by Mathematica)
$$ \sum_{1\leq k\leq n}\cos\frac{\pi j(2k-1)}{n} = \frac{\cos \pi j\sin \pi j}{\sin (\pi j/n)}$$

and when I write $\sin \pi j/\sin(\pi j/n)$ I mean the limit as $j$ approaches its integer value (so no division by zero).



Now, when $n=1$, the condition is
$$ 0 = \sum_{j\geq1} \alpha_j $$
and when $n>1$, the condition is
$$ 0 = \sum_{j\geq1} \alpha_j(-1)^{(j/n)}[n\backslash j]. $$



The condition $R_n f=0$ is only nontrivial when there are $j$ such that $n\backslash j$ and $\alpha_j\neq0$. So suppose that $\alpha_j\neq0$ only when $j$ is a power of 2, so that the function is
$$ f(x) = \sum_{k\geq0} \beta_k \cos(\pi 2^k x). $$
Then the only $n$ that impose any conditions on the $\beta_k$ are the powers of 2.




If $n=2^m$, $m>0$, then the condition is
$$ \beta_m - \beta_{m+1}-\beta_{m+2}-\cdots = 0, $$
and for $n=1$ the condition is
$$ \sum_{k\geq0} \beta_k = 0. $$



Pick $\beta_0 = -1, \beta_k = 2^{-k}$ ($k\geq1$).
The condition for each $n=2^m$ and also $n=1$ will be satisfied, the function
$$ f(x) = -\cos\pi x+\sum_{k\geq1} 2^{-k} \cos(\pi 2^k x) $$
is clearly even and nonzero, and $R_nf=0$ for every $n$.




If the Fourier series is finite, the function must then be zero.
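A numerical sanity check of this construction (a sketch; truncating the series at $K=40$ terms and the sample values of $n$ are arbitrary choices of mine):

import math

def f(x, K=40):
    # f(x) = -cos(pi x) + sum_{k=1..K} 2^(-k) cos(pi 2^k x), the even counterexample
    return -math.cos(math.pi * x) + sum(
        2.0**-k * math.cos(math.pi * 2**k * x) for k in range(1, K + 1)
    )

def midpoint_sum(n):
    # the n-th midpoint Riemann sum R_n f on [-1, 1]
    return sum(f(-1 + (2*k - 1)/n) * 2/n for k in range(1, n + 1))

for n in (1, 2, 3, 4, 5, 8, 16):
    print(n, midpoint_sum(n))   # each value should vanish up to roundoff/truncation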


calculus - Limits without L'Hôpital's Rule

Evaluate the limit without using L'Hôpital's rule



a)$$\lim_{x \to 0} \frac {(1+2x)^{1/3}-1}{x} $$




I got the answer as $l=\frac 23$... but I used L'hopitals rule for that... How can I do it another way?



b)$$\lim_{x \to 5^-} \frac {e^x}{(x-5)^3}$$



$l=-\infty$



c)$$\lim_{x \to \frac {\pi} 2} \frac{\sin x}{\cos^2x} - \tan^2 x$$



I don't know how to work with this at all




So basically I was able to find most of the limits through L'Hôpital's rule... BUT how do I find the limits without using it?
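Possible L'Hôpital-free routes (sketches). For (a), set $u=(1+2x)^{1/3}$ and use $u^3-1=(u-1)(u^2+u+1)$ together with $u^3-1=2x$:
$$\frac{(1+2x)^{1/3}-1}{x}=\frac{u^3-1}{x\,(u^2+u+1)}=\frac{2}{u^2+u+1}\xrightarrow[x\to 0]{}\frac{2}{3}.$$
For (b), there is no indeterminacy at all: the numerator tends to $e^5>0$ while $(x-5)^3\to 0^-$, so the quotient tends to $-\infty$. For (c), combine into a single fraction:
$$\frac{\sin x}{\cos^2 x}-\tan^2 x=\frac{\sin x-\sin^2 x}{\cos^2 x}=\frac{\sin x\,(1-\sin x)}{(1-\sin x)(1+\sin x)}=\frac{\sin x}{1+\sin x}\xrightarrow[x\to \pi/2]{}\frac12.$$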

calculus - Finding the limit of $\lim_{k \rightarrow \infty} \left(\frac{2^k + 1}{2^{k-1} + 3}\right)$



$$ \lim_{k \rightarrow \infty} \left(\frac{2^k + 1}{2^{k-1} + 3}\right) $$



I'm trying to prove that the limit of the sequence is $2$ using the squeeze theorem, but with no success.
Thanks


Answer



HINT: Multiply the fraction by $1$ in the carefully chosen disguise




$$\frac{1/2^{k-1}}{1/2^{k-1}}\;.$$


Proof for Elementary Number Theory

Prove that for any positive integer $n$, there exist $n$ consecutive positive integers $a_1, a_2,...,a_n$ such that $p_i$ divides $a_i$ for each $i$, where $p_i$ denotes the $i$-th prime.



I'm not sure how to prove this. Could we possibly use the Chinese Remainder theorem? If so how?
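A sketch of the Chinese Remainder Theorem route: the primes $p_1,\dots,p_n$ are pairwise coprime, so the system
$$x\equiv -i \pmod{p_i},\qquad i=1,\dots,n,$$
has a solution $x$ (which can be taken positive by adding multiples of $p_1\cdots p_n$); then $a_i=x+i$ gives $n$ consecutive positive integers with $p_i\mid a_i$ for each $i$.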

Wednesday 23 November 2016

calculus - Prove ${\large\int}_0^\infty\frac{\ln x}{\sqrt{x}\,\sqrt{x+1}\,\sqrt{2x+1}}dx\stackrel?=-\frac{\pi^{3/2}\,\ln2}{2^{3/2}\,\Gamma^2\left(\tfrac34\right)}$




I discovered the following conjecture by evaluating the integral numerically and then using some inverse symbolic calculation methods to find a possible closed form:
$$\int_0^\infty\frac{\ln x}{\sqrt{x\vphantom{1}}\ \sqrt{x+1}\ \sqrt{2x+1}}dx\stackrel{\color{#808080}?}=-\frac{\pi^{3/2}\,\ln2}{2^{3/2}\,\Gamma^2\left(\tfrac34\right)}.\tag1$$
The equality holds numerically with a precision of at least $1000$ decimal digits. But so far I was not able to find a proof of it.



Because the integral can be represented as a derivative of a hypergeometic function with respect to its parameter, the conjecture can be rewritten as
$$\frac{d}{da}{_2F_1}\left(a,\ \tfrac12;\ 1;\ \tfrac12\right)\Bigg|_{a=\frac12}\stackrel{\color{#808080}?}=\frac{\sqrt\pi\,\ln2}{2\,\Gamma^2\left(\tfrac34\right)}\tag2$$
or, using a series expansion of the hypergeometric function, as
$${\large\sum}_{n=0}^\infty\frac{H_{n-\frac12}\ \Gamma^2\left(n+\tfrac12\right)}{2^n\ \Gamma^2\left(n+1\right)}\stackrel{\color{#808080}?}=-\frac{3\,\pi^{3/2}\,\ln2}{2\,\Gamma^2\left(\tfrac34\right)}\tag3,$$
where $H_q$ is the generalized harmonic number, $H_q=\gamma+\psi_0\left(q+1\right).$




Could you suggest any ideas how to prove this?


Answer




$$I:=\int_{0}^{\infty}\frac{\ln{(x)}}{\sqrt{x}\,\sqrt{x+1}\,\sqrt{2x+1}}\mathrm{d}x.$$




After first multiplying and dividing the integrand by 2, substitute $x=\frac{t}{2}$:



$$I=\int_{0}^{\infty}\frac{2\ln{(x)}}{\sqrt{2x}\,\sqrt{2x+2}\,\sqrt{2x+1}}\mathrm{d}x=\int_{0}^{\infty}\frac{\ln{\left(\frac{t}{2}\right)}}{\sqrt{t}\,\sqrt{t+2}\,\sqrt{t+1}}\mathrm{d}t.$$




Next, substituting $t=\frac{1}{u}$ yields:



$$\begin{align}
I
&=-\int_{0}^{\infty}\frac{\ln{(2u)}}{\sqrt{u}\sqrt{u+1}\sqrt{2u+1}}\mathrm{d}u\\
&=-\int_{0}^{\infty}\frac{\ln{(2)}}{\sqrt{u}\sqrt{u+1}\sqrt{2u+1}}\mathrm{d}u-\int_{0}^{\infty}\frac{\ln{(u)}}{\sqrt{u}\sqrt{u+1}\sqrt{2u+1}}\mathrm{d}u\\
&=-\int_{0}^{\infty}\frac{\ln{(2)}}{\sqrt{u}\sqrt{u+1}\sqrt{2u+1}}\mathrm{d}u-I\\
\implies I&=-\frac{\ln{(2)}}{2}\int_{0}^{\infty}\frac{\mathrm{d}x}{\sqrt{x}\sqrt{x+1}\sqrt{2x+1}}.
\end{align}$$




Making the sequence of substitutions $x=\frac{u-1}{2}$, then $u=\frac{1}{t}$, and finally $t=\sqrt{w}$, puts this integral into the form of a beta function:



$$\begin{align}
\int_{0}^{\infty}\frac{\mathrm{d}x}{\sqrt{x}\sqrt{x+1}\sqrt{2x+1}}
&=\int_{1}^{\infty}\frac{\mathrm{d}u}{\sqrt{u-1}\sqrt{u+1}\sqrt{u}}\\
&=\int_{1}^{\infty}\frac{\mathrm{d}u}{\sqrt{u^2-1}\sqrt{u}}\\
&=\int_{1}^{0}\frac{t^{3/2}}{\sqrt{1-t^2}}\frac{(-1)}{t^2}\mathrm{d}t\\
&=\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{t}\,\sqrt{1-t^2}}\\
&=\frac12\int_{0}^{1}\frac{\mathrm{d}w}{w^{3/4}\,\sqrt{1-w}}\\
&=\frac12\operatorname{B}{\left(\frac14,\frac12\right)}\\

&=\frac12\frac{\Gamma{\left(\frac12\right)}\Gamma{\left(\frac14\right)}}{\Gamma{\left(\frac34\right)}}\\
&=\frac{\pi^{3/2}}{2^{1/2}\Gamma^2{\left(\frac34\right)}}
\end{align}$$



Hence,



$$I=-\frac{\ln{(2)}}{2}\frac{\pi^{3/2}}{2^{1/2}\Gamma^2{\left(\frac34\right)}}=-\frac{\pi^{3/2}\,\ln{(2)}}{2^{3/2}\,\Gamma^2{\left(\frac34\right)}}.~~~\blacksquare$$







Possible Alternative: You could also derive the answer from the complete elliptic integral of the first kind instead of from the beta function by making the substitution $t=z^2$ instead of $t=\sqrt{w}$.



$$\begin{align}
\int_{0}^{\infty}\frac{\mathrm{d}x}{\sqrt{x}\sqrt{x+1}\sqrt{2x+1}}
&=\int_{1}^{\infty}\frac{\mathrm{d}u}{\sqrt{u-1}\sqrt{u+1}\sqrt{u}}\\
&=\int_{1}^{\infty}\frac{\mathrm{d}u}{\sqrt{u^2-1}\sqrt{u}}\\
&=\int_{1}^{0}\frac{t^{3/2}}{\sqrt{1-t^2}}\frac{(-1)}{t^2}\mathrm{d}t\\
&=\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{t}\,\sqrt{1-t^2}}\\
&=2\int_{0}^{1}\frac{\mathrm{d}z}{\sqrt{1-z^4}}\\
&=2\,K{(-1)}\\

&=\frac{\Gamma^2{\left(\frac14\right)}}{2\sqrt{2\pi}}\\
&=\frac{\pi^{3/2}}{2^{1/2}\Gamma^2{\left(\frac34\right)}}.
\end{align}$$


real analysis - Show $1-1/x<\ln(x)<x-1$ for all $x>1$


Show that $1-\frac1x <\ln(x)< x-1$ for all $x>1$.





My attempt:



(1) $\ln (x)<x-1$:



Suppose $h: \mathbb{R}^+ \to \mathbb{R},\ t\mapsto t-1-\ln(t)$. Then $h'(t)=1-\frac1t$, so $h'(t)=0$ iff $t=1$, and $h'(t)>0$ for all $t>1$. Since $h(1)=0$, this gives $h(t)>0$, i.e. $\ln(t)<t-1$, for all $t>1$.



(2) $1-1/x<\ln(x)$



Suppose $f: \mathbb{R}^+\to \mathbb{R},\ t\mapsto \ln(t)-1+\frac{1}{t}$. Then $f'(t)=\frac1t-\frac1{t^2}=\frac{t-1}{t^2}$, so $f'(t)=0$ iff $t=1$, and $f'(t)>0$ for all $t>1$. Since $f(1)=0$, this gives $f(t)>0$, i.e. $1-\frac1t<\ln(t)$, for all $t>1$.




Therefore both inequalities are true.

soft question - Examples of mathematical induction

What are the best examples of mathematical induction available at the secondary-school level---totally elementary---that do not involve expressions of the form $\bullet+\cdots\cdots\cdots+\bullet$ where the number of terms depends on $n$ and you're doing induction on $n$?



Postscript three years later: I see that I phrased this last part in a somewhat clunky way. I'll leave it there but rephrase it here:






--- that are not instances of induction on the number of terms in a sum?





Evaluating a real integral using complex methods

Let's say I want to evaluate the following integral using complex methods -



$$\displaystyle\int_0^{2\pi} \frac {1}{1+\cos\theta}d\theta$$



So I assume this is not very hard to be solved using real analysis methods, but let's transform the problem for the real plane to the complex plane, and instead calculate -



$$\begin{aligned}\displaystyle\int_0^{2\pi} \frac {1}{1+\cos\theta}d\theta \quad&\Longrightarrow \quad [ z=e^{i\theta} , |z| =1]\\
&\Longrightarrow \quad\displaystyle\int_{|z|=1} \frac {1}{1+\frac{z+\frac{1}{z}}{2}}\frac{dz}{iz}\end{aligned}$$



So now, after a few algebraic manipulations, this is very easily solvable using the residue theorem.




My question is why I can just decide that I want to change the integration bounds from $[0,2\pi]$ to $|z|=1$. If I wanted to change the integration variable to $z=e^{i\theta}$, aren't the integration bounds supposed to transform to $[1,1]$ (because $e^{i2\pi k}=1$)? I'm just having a hard time figuring out why this is mathematically a valid transformation.



Thanks in advance!
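A sketch of the resolution: the substitution is best read backwards. By the definition of a contour integral along the unit circle parametrized by $z(\theta)=e^{i\theta}$, $\theta\in[0,2\pi]$,
$$\oint_{|z|=1}g(z)\,dz=\int_0^{2\pi}g\!\left(e^{i\theta}\right)ie^{i\theta}\,d\theta,$$
so nothing is being transformed into a degenerate interval $[1,1]$: the real integral is recognized as the parametrized form of a closed contour integral, whose "bounds" are the closed curve $|z|=1$ itself.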

real analysis - Show $\lim_{m \to \infty ,n \to \infty } f(\frac{\left\lfloor mx \right\rfloor}{m},\frac{\left\lfloor ny \right\rfloor}{n}) = f(x,y)$



Suppose $f(x,y)$ is defined on $[0,1]\times[0,1]$ and continuous on each dimension, i.e. $f(x,y_0)$ is continuous with respect to $x$ when fixing $y=y_0\in [0,1]$ and $f(x_0,y)$ is continuous with respect to $y$ when fixing $x=x_0\in [0,1]$. Show



$$\lim_{m \to \infty ,n \to \infty } f\left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = f(x,y)$$




My attempt:



First, I know $$\lim\limits_{m \to \infty ,n \to \infty } \left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = (x,y)$$



Secondly, it looks like

$$\lim\limits_{m \to \infty }\lim\limits_{n \to \infty } f\left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = \lim \limits_{m \to \infty } f\left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},y\right) = f(x,y)$$



and
$$\lim\limits_{n \to \infty } \lim\limits_{m \to \infty } f\left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = \lim\limits_{n \to \infty } f\left(x,\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = f(x,y)$$



since $f(x,y)$ is continuous on each dimension.



However, I am not sure if this can infer $\lim\limits_{m \to \infty ,n \to \infty } f(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}) = f(x,y)$.



Can anyone provide some help? Thank you!




Added:



I am now sure $\lim\limits_{m \to \infty } \lim\limits_{n \to \infty } {a_{mn}} = \lim\limits_{n \to \infty } \lim\limits_{m \to \infty } {a_{mn}} = L$ does not imply $\lim\limits_{m \to \infty ,n \to \infty } {a_{mn}} =L$ in general. Hope someone can help solve the problem.

In the definition of Carmichael number, why is it necessary to have $(b, n) = 1$?

In number theory, a Carmichael number is a composite number $n$ which satisfies the modular arithmetic congruence relation $$b^{n-1}\equiv 1\pmod{n}$$



for all integers $b$ with $1 < b < n$ and $(b, n) = 1$.

In the definition of Carmichael number, why is it necessary to have $(b,n) = 1$?



I need to understand this point, please.
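A sketch of the reason: if $(b,n)=d>1$, pick a prime $p\mid d$. Then $p\mid b$, so $b^{n-1}\equiv 0\pmod p$; but if $b^{n-1}\equiv 1\pmod n$ held, reducing mod $p$ would give $b^{n-1}\equiv 1\pmod p$, a contradiction. So without the restriction $(b,n)=1$, no integer $n>1$ could satisfy the defining congruence for all $b$.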

proof by mathematical induction with the summation operator?

$$
\sum_{k=1}^n k^3 = \left( \sum_{k=1}^n k \right)^2
$$
I can't quite understand this expression, and in fact this is my biggest difficulty in finding a solution. Can someone please explain to me ?
$$
\sum_{k=1}^n k = \frac{n(n+1)} 2
$$
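In words (a restatement, not a proof): $\sum_{k=1}^n k^3$ means $1^3+2^3+\cdots+n^3$, and the claim is that this equals the square of $1+2+\cdots+n$. For example, with $n=3$,
$$1^3+2^3+3^3=1+8+27=36=(1+2+3)^2,$$
and the second formula, $\sum_{k=1}^n k=\frac{n(n+1)}2$, is the closed form for the sum being squared on the right-hand side; it is what you substitute when carrying out the induction step.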

logic - Infinite processes riddle

A train with infinitely many seats, one for each rational number, stops in countably many villages, one for each positive integer, in increasing order, and then finally arrives at the city.



At the first village, two women board the train.



At the second village, one woman leaves the train to go visit her cousin, and two other women board the train.



At the third village, one woman leaves the train to go visit her cousin, and two other women board the train.




At the fourth village, and in fact at every later village, the same thing keeps happening: one woman off to visit her cousin, two new women on board the train. How many women arrive at the city?

contest math - Find a polynomial with integer coefficients



Find a polynomial $p$ with integer coefficients for which $a = \sqrt{2} + \sqrt[3]{2}$ is a root. That is find $p$ such that for some non-negative integer $n$, and integers $a_0$, $a_1$, $a_2$, ..., $a_n$, $p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n$, and $p(a) = 0$.



I do not know how to solve this. It is very challenging. Also, if you name any theorem please describe it in a way that is easy to understand. If you just name it, I won't be able to understand it. (My math might not be/is not as good as yours.)



Thanks for any help!


Answer



We have
$$

a=\sqrt2+\sqrt[3]2\\
a-\sqrt2=\sqrt[3]2\\
(a-\sqrt2)^3=2\\
a^3-3\sqrt2a^2+6a-2\sqrt2=2\\
a^3+6a-2=\sqrt2(3a^2+2)\\
(a^3+6a-2)^2=2(3a^2+2)^2
$$
which has all integer coefficients, once you expand the brackets.
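Expanding the brackets explicitly (a quick sketch worth re-checking) gives
$$p(x)=(x^3+6x-2)^2-2(3x^2+2)^2=x^6-6x^4-4x^3+12x^2-24x-4,$$
a polynomial with integer coefficients satisfying $p(a)=0$.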


Tuesday 22 November 2016

general topology - Constructing a circle from a square





I have seen a picture like this several times:

(image of the "$\pi=4$" staircase construction omitted)



featuring a "troll proof" that $\pi=4$. Obviously the construction does not yield a circle, starting from a square, but how to rigorously and demonstratively prove it?




For reference, we start with a circle inscribed in a square with side length 1. A step consists of reflecting each corner of figure $F_i$ so that it lies precisely on the circle and yielding figure $F_{i+1}$. $F_0$ is the square with side length 1. After infinitely many steps we have a figure $F_\infty$. Prove that it isn't a circle.



Possible ways of thinking:




  1. Since the perimeter of figure $F_i$ indeed does not change during a step, it is invariant. Since it does not equal the perimeter of the circle, $\pi\neq4$, it cannot be a circle.



While it seems to work, I do not find this proof demonstrative enough - it does not show why $F_\infty$ which looks very much like a circle to us, is not one.





  1. Consider one corner of the square $F_0$. Let $t$ be a coordinate along the edge of this corner, $0 \leq t \leq 1$ and $t=0, t=1$ being the points of tangency for this corner of $F_0$ and the circle.
    By construction, all points $t \in A=\{ \frac{n}{2^m} | (n,m\in \mathbb{N}) \& (n<2^m)\}$ of $F_\infty$ lie on the circle. I think it can be shown that the rest of the points, $\bar{A}=[0;1] \backslash A$, lie in an $\varepsilon$-neighbourhood $U$ of the circle. I also think that in the limit $\varepsilon \to 0$, points $ t\in\bar{A}$ also lie on the circle. Am I wrong in thinking this? Can we get a contradiction from this line of thought?



Any other elucidating proofs and thoughts are also welcomed, of course.


Answer



You have rigorously defined $F_i$, but how do you define $F_\infty$? You cannot say: "after infinitely many steps...".



In this case you could define $F_\infty = \bigcap_i F_i$ (i.e. the intersection of all $F_i$), since $F_i$ is a decreasing sequence this is a good notion of limit. Notice however that $F_\infty$ is a circle! But this does not mean that the perimeter of $F_i$ should converge to the perimeter of $F_\infty$.




You could also choose a metric on subsets of the plane to define some sort of convergence $F_i \to F_\infty$ as $i\to \infty$. In any case, if you choose any good metric you find that either $F_\infty$ is the circle or that the sequence does not converge.



The point here is that the perimeter is not continuous with respect to the convergence of sets... so even if $F_i\to F_\infty$ (in any decent notion of convergence) you cannot say that $P(F_i)\to P(F_\infty)$ (where $P$ is the perimeter).


linear algebra - Find all solutions of the following congruence: $52x \equiv 15 \pmod{91}$





Find all solutions $x \in \mathbb{Z}_{m}$ of the following congruence,
whereby $m$ is the modulus. If there isn't a solution, state why.
$$52x \equiv 15 (\text{ mod } 91)$$




I'm not sure how to solve it, because if we look at $52$ and $91$, we see that they aren't coprime: $\gcd(52,91)=13\neq 1$. So we cannot use the extended Euclidean algorithm to compute an inverse of $52$ modulo $91$.



Does that mean that there won't exist a solution? Or there is another way of solving it?



Answer



Hints:



Fill in details



$$52x=15+91k\;,\;\;k\in\Bbb Z\implies15=13(4x-7k)$$



So how many solutions can you find?
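(A sketch of the conclusion the hint is pointing at: $13$ divides the right-hand side $13(4x-7k)$, but $13\nmid 15$, so the displayed equation is impossible and the congruence has no solutions.)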


convergence divergence - Limit of the function $\lim_{x\to 0}\left(\frac{e^{-x}-1}{x}\right)$



I'm trying to solve this limit without the use of L'Hospital, but I'm doing something wrong. The limit should be:



$$\lim_{x\to 0}\left(\frac{e^{-x}-1}{x}\right) = -1$$



My attempted proof:
$$
\lim_{x\to 0}(\frac{e^{-x}-1}{x}) = \lim_{n\to \infty}(\frac{e^{-(\frac{1}{n})}-1}{\frac{1}{n}}) = \lim_{n\to \infty}(\frac{n \cdot ( e^{-(\frac{1}{n})}-1)}{1}) =

\lim_{n\to \infty}(\frac{n \cdot ( e^{0}-1)}{1}) = \lim_{n\to \infty}(\frac{0}{1}) = 0
$$



I assume the mistake is that I've used the continuity of the $\exp$ function.


Answer



A variation of Bernard's answer:



$$\frac{e^{-x}-1}x=\frac{1-e^x}{xe^x}=-\frac1{e^x}\cdot\frac{e^x-1}x\xrightarrow[x\to0]{}-\frac11\cdot(e^x)'_{x=0}=-e^0=-1$$


limits - How to evaluate $\lim_{\Delta t\to0}\left[ \frac{v\sin\Delta\theta}{\Delta t} \right]$?

This is more a mathematical question, but comes from physics during the derivation of centripetal acceleration.



We resolve the velocity vector at an arbitrary point up the circle from the initial position into two components, one parallel to the initial velocity and one perpendicular.



\begin{align*}

\parallel:\quad v\cos\Delta\theta\\
\perp:\quad v\sin\Delta\theta
\end{align*}



Since $a = \frac{dv}{dt}$, we subtract the corresponding components to find the difference in velocity at an arbitrary $\Delta\theta$ and divide by the corresponding arbitrary $\Delta t$, and then consider what happens as $\Delta t \to 0$ in the limit to find the differential.



The components parallel to the initial velocity are straightforward. As $\Delta t \to 0$, so does $\Delta\theta$. Since $\cos0 = 1$, both numerator and denominator in the fraction tend to $0$, so the limit intuitively evaluates to $0$.



$$\lim_{\Delta t \to 0}\left[\frac{v\cos\Delta\theta - v}{\Delta t}\right] = 0$$




However, I cannot get it with the the perpendicular components. The corresponding limit is:



$$\lim_{\Delta t \to 0}\left[\frac{v\sin\Delta\theta - 0}{\Delta t}\right] = \text{?}$$



My reasoning is that since $\sin0 = 0$, the numerator tends to $0$, and the denominator tends to $0$, so the fraction tends to $0$. Can someone please point out where the error is in this reasoning?



The textbook, however, makes a claim that as $\Delta t$ approaches $0$, $\sin\Delta\theta$ would approach $\Delta\theta$ - why would that be?



It then goes on to conclude that the limit evaluates to $v\frac{\Delta\theta}{\Delta t}$, which is a well-known and correct result. But I cannot understand the evaluation above and it is troubling me.
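A note that may help (my addition, not part of the original post): a ratio whose numerator and denominator both tend to $0$ is an indeterminate form, so nothing follows from that observation alone. The textbook's claim is the small-angle approximation, which comes from the Taylor series of $\sin$:

$$\sin\Delta\theta = \Delta\theta - \frac{(\Delta\theta)^3}{3!} + \cdots \quad\implies\quad \frac{\sin\Delta\theta}{\Delta\theta}\xrightarrow[\Delta\theta\to0]{}1,$$

so that $\frac{v\sin\Delta\theta}{\Delta t}=v\cdot\frac{\sin\Delta\theta}{\Delta\theta}\cdot\frac{\Delta\theta}{\Delta t}\to v\frac{d\theta}{dt}$. (Similarly, $\cos\Delta\theta = 1-\frac{(\Delta\theta)^2}{2}+\cdots$ is what actually justifies the parallel limit being $0$.)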

Monday 21 November 2016

abstract algebra - Congruency and Congruent Classes



so studying for my midterm on Tuesday (intro to abstract algebra). The topics on the exam are Division Algorithm, Divisibility, Prime Numbers, FTA, Congruency, Congruent Classes and very brief introduction to rings.




I was reading a few theorems about Congruency and have a couple of questions.



I want to know what a "congruent class" is. My notes say "the congruence class of a modulo n" is a set:



$ \left\{ b \in \mathbb{Z} \mid b \equiv a \pmod{n} \right\} $ which is the same as saying
$ \left\{ a + kn \mid k \in \mathbb{Z} \right\} $



Okay, so got that. I just wrote it out for people who might need a refresher (it is a 3rd-year undergrad course, after all).



So in my notes our professor has a following example:

$\left[ 60 \right]_{17} = \left[ 43\right]_{17}$



1) The way I figured this out is that to check whether they are equivalent, we compute $60-43$ and see if that is a multiple of $n = 17$. Is this how you can check if they are equal classes? If not, is there a better way to do so?



2) A certain theorem states: let $n \in \mathbb{Z}_+$; $a, b \in \mathbb{Z}$ and $\gcd(a,n) = d$; then, provided $d \mid b$, the equation $[a]x=[b]$ has exactly $d$ solutions. My question here is: is $x$ a congruence class or a random integer? What is $x$, and how do I solve for it?



3) Is it true that if we are in $\mathbb{Z}_{12}$ then $[7]x=[11]$ can be rewritten as $ 7x \equiv 11 \pmod{12}$? If so, would finding the solution be similar to the solution in this question?



Thank you. I am just very confused about congruency and such. I understand the theorems, but I am hoping someone can give me an "easy" explanation of what is going on. I still don't know the difference between circle-plus and regular plus, except that circle-plus has to satisfy certain axioms. Am I right?


Answer




There is a much more general definition of congruence classes, but I shall restrict to the one that is sufficient for your course. Given any positive integer $n$, the only possible remainders that we can get when we divide an integer $a$ by $n$ are $0,1,...,n-1$.
Each set of integers which leave the same remainder on division by $n$ forms what is called a congruence class modulo $n$. All such congruence classes are mutually disjoint, since a number can leave only one remainder on division by $n$. Their union is the set of all integers. Each integer in any given congruence class is said to be a representative of the class.



To check whether two integers are in the same class, we check their difference and see if it is divisible by $n$ (since the remainders of these integers cancel out when we divide by $n$). So your approach to your first question is right.



For the second question, $x$ is indeed a congruence class, since otherwise the equation does not make sense. We can define operations on the congruence classes by the corresponding operations on their representatives. It's easy to check these are well defined.
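For your third question (an addition of mine): yes, in $\mathbb{Z}_{12}$ the equation $[7]x=[11]$ means exactly $7x \equiv 11 \pmod{12}$, and since $\gcd(7,12)=1$ there is exactly one solution class. A small brute-force sketch, with a hypothetical helper name:

```python
# Solve [a] x = [b] in Z_n by testing every representative.
def solve_congruence(a, b, n):
    return [x for x in range(n) if (a * x - b) % n == 0]

print(solve_congruence(7, 11, 12))  # [5], since 7 * 5 = 35 = 11 (mod 12)
```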


Prove trigonometric identity for $\sec x - \sin x$



I'm trying to prove this identity but I'm stuck at the second step.



Please give me some hints or other ways to proceed.




$$\frac{\tan^2 x + \cos^2 x}{\sin x + \sec x} \equiv \sec x - \sin x$$

Let $\sin x = x$ and $\cos x = y$. Then:

\begin{align*}
\frac{\frac{x^2}{y^2}+\frac{y^4}{y^2}}{\frac{xy}{y}+\frac{1}{y}} &\equiv \frac{1}{y}-x = \frac{1-xy}{y} \tag{1}\\[6pt]
\frac{\left(\frac{x^2+y^4}{y^2}\right)}{\left(\frac{xy+1}{y}\right)} &\equiv \cdots \tag{2}\\[6pt]
\frac{x^2+y^4}{y(xy+1)} &\equiv \cdots \tag{3}
\end{align*}


Answer



The key is to see that $(x+y)(x-y) = x^2-y^2$, applied here as $\sec^2 x-\sin^2 x=(\sec x+\sin x)(\sec x-\sin x)$, together with $\tan^2 x+1=\sec^2 x$.
\begin{align*}
\frac{\tan^2{x} + \cos^2{x}}{\sin x + \sec x} &= \frac{\tan^2{x} + 1 - 1 + \cos^2{x}}{\sin x + \sec x} \\
&= \frac{\sec^2{x} - \sin^2{x}}{\sin x + \sec x} \\
&= \sec x - \sin x.
\end{align*}


functions - Conditions Equivalent to Injectivity





Let $A$ and $B$ be sets, where $f : A \rightarrow B$ is a function. Show that the following properties are equivalent*:




  1. $f$ is injective.

  2. For all $X, Y \subset A$: $f(X \cap Y)=f(X)\cap f(Y)$.

  3. For all $Y \subset X \subset A$: $f(X \setminus Y)=f(X) \setminus f(Y)$.





I do know what injective means, but I thought properties (2) and (3) were valid for any kind of function. Just to see if I understood this right:



$f(X\cap Y)$ means: first take the intersection of $X$ and $Y$, then map it into $B$ via $f$.



$f(X)\cap f(Y)$ means: first take the images $f(X)$ and $f(Y)$, then intersect them.



Aren't both properties valid for all functions? I can't think of a counterexample. Thanks in advance, guys!



*edited by SN.



Answer



No they are not true for any function:



Take a function $f:\{0,1\}\to\{0,1\}$ such that $f(0)=1$ and $f(1)=1$. Then $f[\{0\}\cap\{1\}]=f[\varnothing]=\varnothing$ but $f[\{0\}]\cap f[\{1\}]=\{1\}\cap\{1\}=\{1\}$. This function provides a counterexample for the second case as well: $f[\{0,1\}\setminus\{0\}]=\{1\}$, while $f[\{0,1\}]\setminus f[\{0\}]=\{1\}\setminus\{1\}=\varnothing$.
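An exhaustive check of this counterexample (my own sketch, not part of the original answer):

```python
from itertools import chain, combinations

f = {0: 1, 1: 1}                       # f(0) = f(1) = 1: not injective
image = lambda s: {f[x] for x in s}

A = {0, 1}
subsets = [set(c) for c in chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))]

# Property 2 fails for some X, Y:
print(any(image(X & Y) != image(X) & image(Y) for X in subsets for Y in subsets))   # True
# Property 3 fails for some Y contained in X:
print(any(image(X - Y) != image(X) - image(Y) for X in subsets for Y in subsets if Y <= X))  # True
```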



Note that for any function $f$ it is true that $f[X\cap Y]\subseteq f[X]\cap f[Y]$ and it is also true that $f[X]\setminus f[Y]\subseteq f[X\setminus Y]$.



As for the equivalence, a function $f$ is called injective exactly when $x\neq y$ implies $f(x)\neq f(y)$ (or equivalently $f(x)=f(y)$ implies $x=y$). They are sometimes called $1-1$:



$1\Rightarrow 2$. Let $f:A\to B$ be injective. We just need to show that $f[X]\cap f[Y]\subseteq f[X\cap Y]$. Let $x\in f[X]\cap f[Y]$. Then there is some $a\in X$ and some $b\in Y$ such that $f(a)=f(b)=x$. By the definition of injective functions $a=b$ thus $a\in X\cap Y$ or $x\in f[X\cap Y]$.




$2\Rightarrow 1$. Now let $f[X]\cap f[Y]=f[X\cap Y]$. Let $f(a)=f(b)$. We have that $f[\{a\}\cap\{b\}]=f[\{a\}]\cap f[\{b\}]$. Thus $f[\{a\}\cap\{b\}]$ is not empty (since it is equal to $f[\{a\}]\cap f[\{b\}]$ which is not). Therefore $\{a\}\cap\{b\}$ is not empty, which means that $a=b$.



$1\Rightarrow 3$. Let $f:A\to B$ be injective. We just need to show that $f[X\setminus Y]\subseteq f[X]\setminus f[Y]$. Let $x\in f[X\setminus Y]$. Of course $x\in f[X]$. We have that there is some $a\in X\setminus Y$ such that $f(a)=x$. For every $b\in Y$ we have that $a\neq b$ thus $f(a)\neq f(b)$. Thus $x\notin f[Y]$ and thus $x\in f[X]\setminus f[Y]$.



$3\Rightarrow 1$. Conversely assume that $f[X\setminus Y]= f[X]\setminus f[Y]$. Let $f(a)=f(b)$. Then $f[\{a,b\}\setminus\{b\}]=f[\{a,b\}]\setminus f[\{b\}]$. The second set is empty, thus $f[\{a,b\}\setminus\{b\}]$ is empty. Then $\{a,b\}\setminus\{b\}$ is empty, which means $a=b$.


calculus - Why $\sum_{n=0}^{\infty}(n+1)5^nx^n=\frac{1}{(1-5x)^2}$



Why $$\sum_{n=0}^{\infty}(n+1)5^nx^n=\frac{1}{(1-5x)^2}?$$




I know that $\sum_{n=0}^{\infty}x^n=\dfrac{1}{1-x}$, so by the same token, $\sum_{n=0}^{\infty}5^nx^n=\dfrac{1}{1-5x}$.



Thus
$$
\left(\frac{1}{1-5x}\right)^2=\frac{1}{(1-5x)^2} = \left(\sum_{n=0}^{\infty}5^nx^n\right)^2.
$$



But why is $\big(\sum_{n=0}^{\infty}5^nx^n\big)^2=\sum_{n=0}^{\infty}(n+1)5^nx^n$?



Assuming $x$ is small enough so that the sum converges.



Answer



Note that
$$
\frac{1}{1-x}=\sum_{n=0}^\infty x^n,
$$
and thus
$$
\frac{1}{(1-x)^2}=\left(\frac{1}{1-x}\right)'=\sum_{n=1}^\infty nx^{n-1}=\sum_{n=0}^\infty (n+1)x^{n},
$$
and hence

$$
\frac{1}{(1-5x)^2}=\sum_{n=0}^\infty (n+1)(5x)^{n}.
$$
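To address the question as literally asked (a step my edit adds, since the answer above takes a different route): squaring the series is a Cauchy product, and the coefficient of $x^n$ in $\left(\sum_{n=0}^\infty 5^nx^n\right)^2$ is
$$
\sum_{k=0}^{n}5^k\cdot 5^{n-k}=(n+1)5^n,
$$
which gives $\left(\sum_{n=0}^{\infty}5^nx^n\right)^2=\sum_{n=0}^{\infty}(n+1)5^nx^n$ directly.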


multivariable calculus - Missing continuity condition in theorem?



I'm going through the proof that all partials continuous $\implies$ $f$ is differentiable. Here's what my book says:




[image: the book's proof of the theorem, omitted]



What I'm wondering about is how we can use the mean value theorem in step $2$. Doesn't the MVT require that $\mathbf f$ (or its components $f_i$) be continuous on $[x_{k-1},x_k]$ for all $k$? That's not part of the suppositions for this theorem. Did the author just forget to add that $\mathbf f$ needs to be continuous on $U$, or is there something I'm missing?


Answer



The mean value theorem is applied to the real function



$$t \mapsto f(x+\sum_{i=1}^{k-1}h_ie_i+te_k),$$
which is continuous since it is differentiable, as its derivative is given by the partial derivative of the function $f$ (just apply the definition of derivative).



For your edit, what he is using is the fact that $\Vert h_k \Vert \leq \Vert h \Vert$, and then factoring out $\Vert h \Vert$.



Sunday 20 November 2016

calculus - proving that $\sup\limits_{x > 0} \frac{x\sin x}{x+1}=1$




I am having trouble proving that $$\sup ~ \Big\{ \frac{x\sin x}{x+1} \,:\, x>0 \Big\}=1$$ for my homework assignment. I have managed to prove that there is no $x$ such that $f(x) >1$, but I can't seem to prove that there is no number smaller than $1$ for which that is true.



Can someone please help me out? Thanks.


Answer



Obviously, when $x>0$ we have $|\frac{x}{x+1}\sin x| \leq |\sin x| \leq 1$, therefore we have the inequality $\frac{x}{x+1}\sin x \leq 1$.



Now, to prove that the supremum is indeed 1, we need to find a sequence $x_n$ such that the expression evaluated at $x_n$ tends to $1$. This can be done using the sequence defined in Austin Mohr's answer, but I think it is important to understand why that is the sequence you need. When you look at the expression $\frac{x}{x+1}\sin x$, you should notice that $\lim_{x \to \infty}\frac{x}{x+1}=1$. The problem is that $\lim_{x \to \infty} \sin x$ does not exist. (In fact, for every $\alpha \in [-1,1]$ there exists a sequence $x_n \to \infty$ such that $\sin x_n \to \alpha$.)



Now, since $\sin$ is periodic, there exists a sequence $x_n \to \infty$ such that $\sin x_n=1$, which is exactly the sequence chosen in the answer referred to above.



Property of modular arithmetic.


(a / b) % c = ((a % c) * (b^{-1} % c)) % c




How do I calculate b^{-1}? I know it is not 1/b. Is there more than one way to calculate it?
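Since no answer is recorded here, a sketch of mine: b^{-1} means the modular inverse of b modulo c, which exists exactly when gcd(b, c) = 1. Two standard ways to compute it are the extended Euclidean algorithm and, when c is prime, Fermat's little theorem ($b^{c-2} \bmod c$):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(b, c):
    g, x, _ = ext_gcd(b, c)
    if g != 1:
        raise ValueError("inverse exists only when gcd(b, c) == 1")
    return x % c

print(mod_inverse(3, 7))  # 5, since 3 * 5 = 15 = 1 (mod 7)
print(pow(3, -1, 7))      # 5, built-in modular inverse (Python 3.8+)
print(pow(3, 7 - 2, 7))   # 5, Fermat's little theorem (7 is prime)
```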

Saturday 19 November 2016

sequences and series - Why does $\sum_{k=1}^{\infty}\frac{\sin(k)}{k}=\frac{\pi-1}{2}$?




Inspired by this question (and far more straightforward, I am guessing), Mathematica tells us that $$\sum_{k=1}^{\infty}\dfrac{{\sin(k)}}{k}$$ converges to $\dfrac{\pi-1}{2}$.



Presumably, this can be derived from the similarity of the Leibniz expansion of $\pi$ $$4\sum_{k=1}^{\infty}\dfrac{(-1)^{k+1}}{2k-1}$$to the expansion of $\sin(x)$ as $$\sum_{k=0}^{\infty}\dfrac{(-1)^k}{(2k+1)!}x^{2k+1},$$ but I can't see how...



Could someone please explain how $\dfrac{\pi-1}{2}$ is arrived at?


Answer



Here is one way, but it does not use the series you mention so much. I hope that's OK.



The series is:




$$\sin(1)+\frac{\sin(2)}{2}+\frac{\sin(3)}{3}+\cdot\cdot\cdot $$



$$\Im\left[e^{i}+\frac{e^{2i}}{2}+\frac{e^{3i}}{3}+\cdot\cdot\cdot \right]$$



Let $\displaystyle x=e^{i}$.



$$\Im\left[x+\frac{x^2}{2}+\frac{x^3}{3}+\cdot\cdot\cdot \right]$$



differentiate:




$$\Im \left[1+x+x^{2}+x^{3}+\cdot\cdot\cdot \right]$$



This is a geometric series, $\displaystyle \frac{1}{1-x}$



$$\Im [\frac{1}{1-x}]$$



Integrate:



$$-\Im[\ln(x-1)]=-\Im [\ln(e^{i}-1)]$$




Now, suppose $$\ln(e^{i}-1)=a+bi$$,



$$e^{i}-1=e^{a}e^{bi}$$



$$\cos(1)-1+i\sin(1)=e^{a}\left[\cos(b)+i\sin(b)\right]$$



Equate real and imaginary parts:



$$\cos(1)-1=e^{a}\cos(b)\\ \sin(1)=e^{a}\sin(b)$$




Divide the second equation by the first:

$$\frac{\sin(1)}{\cos(1)-1}=\frac{e^{a}\sin(b)}{e^{a}\cos(b)}$$



$$-\cot(1/2)=\tan(b)$$



$$b=\tan^{-1}(-\cot(1/2))=\frac{1}{2}-\frac{\pi}{2}$$



But we need the negative of this, so finally:




$$\frac{\pi}{2}-\frac{1}{2}$$
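A numerical sanity check (my addition): the partial sums do approach $\frac{\pi-1}{2}\approx 1.0708$.

```python
import math

# Partial sums of sin(k)/k converge (slowly) to (pi - 1)/2.
N = 10**6
s = sum(math.sin(k) / k for k in range(1, N + 1))
print(s, (math.pi - 1) / 2)  # both approximately 1.0708
```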


number theory - What is the simplest way to prove that the logarithm of any prime is irrational?



What is the simplest way to prove that the logarithm of any prime is irrational?



I can get very close with a simple argument: if $p \ne q$ and $\frac{\log{p}}{\log{q}} = \frac{a}{b}$, then because $q^\frac{\log{p}}{\log{q}} = p$, $q^a = p^b$, but this is impossible by the fundamental theorem of arithmetic. So the ratio of the logarithms of any two primes is irrational. Now, if $\log{p}$ is rational, then since $\frac{\log{p}}{\log{q}}$ is irrational, $\log{q}$ is also irrational. So, I can conclude that at most one prime has a rational logarithm.




I realize that the rest follows from the transcendence of $e$, but that proof is relatively complex, and all that's left to show is that no integer power of $e$ is a prime power (because if $\log p$ is rational, then $e^a = p^b$ has a solution). It is easy to prove that $e$ is irrational (suppose $e = \frac{a}{b} = \sum{\frac{1}{n!}}$, multiply by $b!$ and separate the sum into integer and fractional parts), but I can't figure out how to generalize this simple proof to show that $e^x$ is irrational for all integers $x$; it introduces an $x^n$ term to the sum, and the integer and fractional parts can no longer be separated. How can I complete this argument, or what is a different elementary way to show that $\log{p}$ is always irrational?


Answer



A proof of the irrationality of rational powers of $e$ is given on page 8 of Keith Conrad's notes.


real analysis - Showing that a function is uniformly continuous but not Lipschitz

If $g(x):= \sqrt x $ for $x \in [0,1]$, show that there does not exist a constant $K$ such that $|g(x)| \leq K|x|$ $ \forall x \in [0,1]$



Conclude that the uniformly continuous function $g$ is not a Lipschitz function on interval $[0,1]$.



Necessary definitions:



Let $A \subseteq \Bbb R$. A function $f: A \to \Bbb R$ is uniformly continuous when:
Given $\epsilon > 0$ there is a $\delta(\epsilon) > 0$, depending only on $\epsilon$, such that $\forall x, u \in A$, $|x - u| < \delta(\epsilon)$ $\implies$ $|f(x) - f(u)| < \epsilon$.




A function $f$ is considered Lipschitz if $ \exists$ a constant $K > 0$ such that $ \forall x,u \in A$ $|f(x) - f(u)| \leq K|x-u|$.



Here is the beginning of my proof; I am having some difficulty showing that such a constant does not exist. Intuitively it makes sense; however, showing this geometrically evades me.



Proof (attempt):



Suppose $g(x) := \sqrt x$ for $x \in [0,1]$.
Assume $g(x)$ is Lipschitz. $g(x)$ Lipschitz $\implies$ $\exists$ a constant $K > 0$ such that $|g(x) - g(u)| \leq K|x-u|$ $\forall x,u \in [0,1]$.




Evaluating geometrically:



$\frac{|g(x) - g(u)|}{|x-u|} = \frac{|\sqrt x - \sqrt u|}{|x-u|} \leq K$



I was hoping to assume the function is Lipschitz and encounter a contradiction; however, this is where I'm stuck.



Can anyone nudge me in the right direction?
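A nudge, added here since no answer is recorded: take $u = 0$. The Lipschitz condition would give $\sqrt{x} = |g(x)-g(0)| \leq K|x - 0| = Kx$ for all $x \in (0,1]$, i.e. $K \geq \frac{1}{\sqrt{x}}$, and letting $x \to 0^{+}$ shows no finite $K$ works. On the other hand, $g$ is continuous on the closed bounded interval $[0,1]$, hence uniformly continuous there.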

An inequality involving two probability densities

I cannot prove the following inequality, which I state below:



Let $p, q$ be two positive real numbers such that $p+q=1$. Let $f$ and $g$ be two probability density functions. Then, show that:



$$\int_{\mathbb{R}} \frac{p^2 f^2 + q^2 g^2}{pf + qg} \geq p^2+q^2~.$$



I tried to use Cauchy-Schwarz and even Titu's lemma, but got nowhere. Any help will be greatly appreciated. Thanks!
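One possible route (a sketch of mine, not part of the original post): apply Cauchy–Schwarz to each term separately. Since $\int pf = p$ and $\int (pf+qg) = p + q = 1$,
$$
p^2=\left(\int \frac{pf}{\sqrt{pf+qg}}\cdot\sqrt{pf+qg}\right)^2\leq\int\frac{p^2f^2}{pf+qg}\cdot\int(pf+qg)=\int\frac{p^2f^2}{pf+qg},
$$
and similarly $q^2\leq\int\frac{q^2g^2}{pf+qg}$. Adding the two bounds gives the claim.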

sequences and series - Estimating $\sum n^{-1/2}$



Could someone please explain me how does one obtain the following estimate:
$$
\sum_{n \leq X} n^{-1/2} = \frac12 X^{1/2} + c + O(X^{-1/2}),
$$
where $c$ is some constant.




Thank you very much!



PS As pointed out in the comments, $1/2$ in front of $X^{1/2}$ is a typo... I would like an answer with the correct coefficient here.


Answer



You can use the Euler–Maclaurin formula, which gives us the estimate



$$ 2 \sqrt{n} + K + \frac{1}{2\sqrt{n}} + \mathcal{O}(n^{-3/2})$$



It can be shown by other means that $K = \zeta(\frac{1}{2})$.
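A quick numerical check of the constant (my addition): $\zeta(\tfrac12)\approx-1.4603545$.

```python
import math

# Compare sum_{n<=N} n^(-1/2) - 2*sqrt(N) with zeta(1/2) ~ -1.4603545.
N = 10**7
s = sum(1 / math.sqrt(n) for n in range(1, N + 1))
print(s - 2 * math.sqrt(N))  # about -1.4602 (off by roughly 1/(2*sqrt(N)))
```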


Friday 18 November 2016

probability - question about the conditional expectation.



I found the following in He, Wang, Yan's "Semimartingales":



It is well known that for any non-negative random variable $X$ one can define conditional expectation by $\mathbb E\left[X\mid\mathcal G\right]=\lim_{n\to\infty} \mathbb E\left[X\wedge n\mid\mathcal G\right]$. However, even if $X$ takes finite values, $\mathbb E\left[X\mid\mathcal G\right]$ may be $+\infty$ on a set with positive probability.



I wonder why this is true: since conditional expectation is an "average", if $X$ takes finite values, shouldn't the average be finite too? Can anyone give me a counterexample?


Answer




If the question is how an expectation of a random variable that only takes finite values can be infinite, one of the most well known examples is this:
$$
X =2^n\text{ with probability } \frac 1 {2^n} \text{ for }n=1,2,3,\ldots
$$
This just means every time you toss the coin your winnings double. Toss the penny. If you get "heads", you win $1$ cent. If tails, toss again, and if you get heads you win $2$ cents. If tails the first two times, toss a third time, and if you win you get $4$ cents. And then $8$, and so on. Would you pay $\$100$ for each coin toss to play this? If so, you'll ultimately come out ahead, because some day you'll hit one of those rare instances where you get so many consecutive tails that your winnings will exceed all your vast losses up to that point. And the same is true if you pay $\$1$ trillion each time. No matter how big the amount you pay each time, you ultimately come out ahead.


real analysis - Question about the existence of a Lebesgue measurable set



Question: Does there exist any Lebesgue measurable set $E \subset [0,1]$ such that for any $x \in \mathbb{R}$, there exists a $y \in E$ satifying $x - y \in \mathbb{Q}$?




I guess there does not exist such a measurable set $E$, but I failed to prove that. Can anyone give a proof suitable for a beginner on Lebesgue integration?


Answer



Take $E=[0,1]$ and, for each $x\in\mathbb R$, take $y=x-\lfloor x\rfloor$. Then $x-y\in\mathbb Z\subset\mathbb Q$.


limits - Strange equality involving a geometric series and gamma and zeta function

I saw someone do this (in a YouTube video):



$$\sum_{\text{n}=1}^\infty\frac{\Gamma\left(\text{s}\right)}{\text{n}^\text{s}}=\Gamma\left(\text{s}\right)\sum_{\text{n}=1}^\infty\frac{1}{\text{n}^\text{s}}=\Gamma\left(\text{s}\right)\zeta\left(\text{s}\right)=\sum_{\text{n}=1}^\infty\left\{\int_0^\infty\text{u}^{\text{s}-1}e^{-\text{n}\text{u}}\space\text{d}\text{u}\right\}=$$
$$\int_0^\infty\text{u}^{\text{s}-1}\left\{\sum_{\text{n}=1}^\infty e^{-\text{n}\text{u}}\right\}\space\text{d}\text{u}=\int_0^\infty\text{u}^{\text{s}-1}\cdot\frac{1}{e^\text{u}-1}\space\text{d}\text{u}$$



I can follow all the steps he did, but the last integral seems not to converge, because the geometric series only holds when the real part of $\text{u}$ is bigger than $0$, yet the lower bound of the integral equals $0$. So why are those two things equal?



Or can we assign a value to:




$$\lim_{u\to0}\text{u}^{\text{s}-1}\cdot\frac{1}{e^\text{u}-1}$$
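A clarifying note (my addition, since no answer is recorded here): near $u=0$ we have $e^u-1\sim u$, so the integrand behaves like $u^{s-2}$, and $\int_0^1 u^{s-2}\,du$ converges precisely when $\Re(s)>1$, the same region where $\sum n^{-s}$ converges. The geometric series is applied for each fixed $u>0$ inside the integral, where it is valid; the single endpoint $u=0$ does not affect the integral, so the limit above need not exist for the integral to converge.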

finite fields - How to find the minimal polynomial for an element in $\mbox{GF}(2^m)$?

I'm new to finite field theory. Can someone please explain how minimal polynomials are generated for each element in $\mbox{GF}(2^m)$? I searched the website but didn't find any clue.
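Since no answer is recorded here, a sketch of the standard recipe (my addition): the conjugates of $\beta\in\mbox{GF}(2^m)$ are $\beta,\beta^2,\beta^4,\dots$, and the minimal polynomial of $\beta$ over $\mbox{GF}(2)$ is $m_\beta(x)=\prod_{i=0}^{r-1}\left(x-\beta^{2^i}\right)$, where $r$ is the least integer with $\beta^{2^r}=\beta$. Writing $\beta=\alpha^j$ for a primitive element $\alpha$, the exponents of the conjugates form the cyclotomic coset of $j$ modulo $2^m-1$:

```python
def cyclotomic_cosets(m):
    """Cosets {j, 2j, 4j, ...} mod 2^m - 1; each coset is the exponent set
    of the conjugates sharing one minimal polynomial over GF(2)."""
    n = 2**m - 1
    seen, cosets = set(), []
    for j in range(n):
        if j in seen:
            continue
        coset, k = [], j
        while k not in coset:
            coset.append(k)
            k = 2 * k % n
        seen.update(coset)
        cosets.append(coset)
    return cosets

print(cyclotomic_cosets(3))  # [[0], [1, 2, 4], [3, 6, 5]]
# e.g. in GF(8), alpha, alpha^2, alpha^4 share one degree-3 minimal polynomial
```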

calculus - Find the value of $\lim_{n\to \infty}\left(1+\frac{1}{n}\right)\left(1+\frac{2}{n}\right)^{1/2}\ldots(2)^{1/n}$



Find the value of
$$\lim_{n\to \infty}\bigg(1+\dfrac{1}{n}\bigg)\bigg(1+\dfrac{2}{n}\bigg)^{\frac12}\ldots(2)^{\frac{1}{n}}$$



My work:
$\bigg(1+\dfrac{1}{n}\bigg)=\bigg\{\bigg(1+\dfrac{1}{n}\bigg)^n\bigg\}^{\frac{1}{n}}=e^{\frac{1}{n}}$
$\bigg(1+\dfrac{2}{n}\bigg)^{\frac12}=\bigg\{\bigg(1+\dfrac{2}{n}\bigg)^{\frac{n}{2}}\bigg\}^{\frac{1}{n}}=e^{2\cdot\frac12\cdot\frac{1}{n}}=e^\frac{1}{n}$
$~~~~~~~~~~~~\vdots$
$~~~~~~~~~~~~\vdots$
$\bigg(1+\dfrac{n}{n}\bigg)^{\frac{1}{n}}=e^{\frac{1}{n}}$
So, $L=e$
But, the answer says $L=e^{\frac{\pi^2}{12}}$.
I do not know where I am going wrong; is the answer a typo, or am I doing something wrong? Please help.



Answer



This seems to be the reasoning in your argument
$$
\begin{align}
\lim_{n\to\infty}\prod_{k=1}^n\left(1+\frac kn\right)^{1/k}
&=\lim_{n\to\infty}\left(\prod_{k=1}^n\left(1+\frac kn\right)^{n/k}\right)^{1/n}\tag{1}\\
&=\lim_{n\to\infty}\left(\prod_{k=1}^n\lim_{n\to\infty}\left[\left(1+\frac kn\right)^{n/k}\right]\right)^{1/n}\tag{2}\\
&=\lim_{n\to\infty}\left(\prod_{k=1}^n\ e\right)^{1/n}\tag{3}\\[12pt]
&=\ e\tag{4}
\end{align}
$$
All of the steps are fine except $(2)$. It is not, in general, allowed to take the limit of an inner part like that. For example consider
$$
\begin{align}
\lim_{n\to\infty}\left(\frac1n\cdot n\right)
&=\lim_{n\to\infty}\left(\lim_{n\to\infty}\left[\frac1n\right] n\right)\tag{5}\\
&=\lim_{n\to\infty}\left(0\cdot n\right)\tag{6}\\[3pt]
&=\lim_{n\to\infty}\ 0\tag{7}\\[2pt]
&=\ 0\tag{8}
\end{align}
$$
Step $(5)$ is the same as step $(2)$, but that step allows us to show that $1=0$.



To see why this affects your limit adversely, notice that no matter how big $n$ gets in the limit, when $k$ is near $n$, $\left(1+\frac kn\right)^{n/k}$ is close to $2$, not $e$. Thus, the terms of the product are between $2$ and $e$. Not all of them tend to $e$.






What we need to do is use the continuity of $\log(x)$ as viplov_jain suggests.
$$
\begin{align}
\log\left(\lim_{n\to\infty}\prod_{k=1}^n\left(1+\frac kn\right)^{1/k}\right)
&=\lim_{n\to\infty}\log\left(\prod_{k=1}^n\left(1+\frac kn\right)^{1/k}\right)\tag{9}\\
&=\lim_{n\to\infty}\sum_{k=1}^n\frac1k\log\left(1+\frac kn\right)\tag{10}\\
&=\lim_{n\to\infty}\sum_{k=1}^n\frac nk\log\left(1+\frac kn\right)\frac1n\tag{11}\\
&=\int_0^1\frac1x\log(1+x)\,\mathrm{d}x\tag{12}\\
&=\int_0^1\sum_{k=0}^\infty(-1)^k\frac{x^k}{k+1}\,\mathrm{d}x\tag{13}\\
&=\sum_{k=0}^\infty\frac{(-1)^k}{(k+1)^2}\tag{14}\\
&=\frac{\pi^2}{12}\tag{15}
\end{align}
$$

Step $(12)$ recognizes the sum in $(11)$ as a Riemann sum for the integral. $(15)$ tells us that
$$
\lim_{n\to\infty}\prod_{k=1}^n\left(1+\frac kn\right)^{1/k}=e^{\pi^2/12}\tag{16}
$$
Notice that
$$
2\lt2.27610815162573\doteq e^{\pi^2/12}\lt e\tag{17}
$$
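A numerical check (my addition) that the product settles near $e^{\pi^2/12}\approx2.2761$ rather than $e$:

```python
import math

# log of the product equals sum_{k=1}^{n} (1/k) * log(1 + k/n)
n = 10**5
log_prod = sum(math.log(1 + k / n) / k for k in range(1, n + 1))
print(math.exp(log_prod), math.exp(math.pi**2 / 12))  # both approximately 2.276
```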


Thursday 17 November 2016

complex analysis - Prove that when $z = p$ is a solution of $az^3+bz^2+cz+d=0$, $z=-p^*$ is also a solution



Given that $z = p$ is a solution of the equation:



$$az^3+bz^2+cz+d=0$$



where $a$ and $c$ are real constants while $b$ and $d$ are purely imaginary constants.
Show algebraically that $z = -p^*$ (the negative conjugate) is another solution.



I tried to solve this question by letting $p = x + iy$, so $-p^* = -(x-iy) = -x + iy$.




So I tried to find connections between $x + iy$ and $-x + iy$ by substituting them into the equation.



When I substitute $x + iy$, the equation becomes



$$a(x+iy)(x^2 - y^2 + 2xyi) + b(x^2 - y^2 + 2xyi) + c(x + iy) + d = 0$$



However, I failed to proceed with this question. By substituting $-x+iy$ into the equation, I cannot show that it is also a solution.



Is there a simpler method to solve this question?



Answer



Given the original polynomial:



$$ az^3 + bz^2 + cz + d = 0 $$



for some complex root $z$, we can take the complex conjugate of both sides:



$$ \bar{a} \bar{z}^3 + \bar{b} \bar{z}^2 + \bar{c} \bar{z} + \bar{d} = 0 $$



Here I've used just that complex conjugation preserves complex arithmetic (and that zero is its own complex conjugate).




Now use the fact that $a,c$ are real constants, while $b,d$ are purely imaginary:



$$ a \bar{z}^3 - b \bar{z}^2 + c \bar{z} - d = 0 $$



Comparing these coefficients to the original ones, we see only the even-degree terms have changed sign. Throwing in an extra factor of $-1$, we have:



$$ a (-\bar{z})^3 + b (-\bar{z})^2 + c (-\bar{z}) + d = 0 $$



This shows, since we are back to the original coefficients now, that $-\bar{z}$ is a root whenever $z$ is a root.
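A quick numerical illustration (my addition, with arbitrarily chosen coefficients satisfying the hypotheses: $a,c$ real, $b,d$ purely imaginary):

```python
import numpy as np

a, b, c, d = 1.0, 2.0j, 3.0, 4.0j
roots = np.roots([a, b, c, d])

# The set of roots should be closed under z -> -conj(z).
original = sorted(roots, key=lambda z: (z.real, z.imag))
reflected = sorted(-roots.conj(), key=lambda z: (z.real, z.imag))
print(np.allclose(original, reflected))  # True
```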



real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...