Thursday 28 February 2013

analysis - Prove: $f$ is surjective $\Rightarrow$ $f(f^{-1}(S)) = S$



I have to prove this exercise for my math-study:



Let $f: X \rightarrow Y$ be a function and $S \subseteq Y$



Prove: $f$ is surjective $\Rightarrow$ $f(f^{-1}(S)) = S$




I divided this exercise into two parts,
first proving that $S \subseteq f(f^{-1}(S))$.
This is what I did:



Assume $f$ is surjective $\Rightarrow$ $\forall s$ $\in S$ $\exists x \in$ $f^{-1}(S)$ such that $f(x) = s \Rightarrow s$ $\in$ $f(f^{-1}(S))$ $\Rightarrow$ $$S \subseteq f(f^{-1}(S))$$



Is this part right, or did I make any mistakes?



For the second part, I have to prove that $f(f^{-1}(S)) \subseteq S$




I began with this:



Assume $x \in f(f^{-1}(S))$, where $f^{-1}(S) = \{x \in X \mid f(x) \in S\}$.



But I don't know how to prove from that that $x \in S$. Could you please help me with these two questions? Thanks in advance!


Answer



If $x\in f(f^{-1}(S))$, then $x = f(y)$ for some $y\in f^{-1}(S)$. So $f(y)\in S$, i.e., $x\in S$.



Conversely, if $x\in S$, then since $f$ is surjective, there exists a $u\in X$ such that $f(u) = x$. So $f(u)\in S$, which implies $u\in f^{-1}(S)$. Therefore $x = f(u)\in f(f^{-1}(S))$.
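
As a quick sanity check of both inclusions on a finite example (a sketch, not part of the proof; the map $x \mapsto x \bmod 2$ is one arbitrary choice of surjection), one can verify $f(f^{-1}(S)) = S$ for every $S \subseteq Y$ in Python:

from itertools import chain, combinations

X = [0, 1, 2, 3]
f = lambda x: x % 2                      # surjective onto Y = {0, 1}
Y = {f(x) for x in X}

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for S in map(set, subsets(Y)):
    preimage = {x for x in X if f(x) in S}    # f^{-1}(S)
    image = {f(x) for x in preimage}          # f(f^{-1}(S))
    assert image == S
print("f(f^{-1}(S)) = S for every S, as proved above")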



Nowhere continuous real valued function which has an antiderivative

My question:



Is there a function $f: \mathbb{R} \rightarrow \mathbb{R}$ that is nowhere continuous on its domain, but has an antiderivative?



If there is no such function, is it correct to conclude that, in order to have an antiderivative, $f$ must be continuous at least at one point of its domain?



Any comments/ inputs are highly appreciated. Thanks in advance.

calculus - Evaluating $\int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2}dx$




How can we prove $$\int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2}\mathrm{d} x=\frac{2\pi}{3\sqrt 3}?$$




Thought 1
It cannot be solved by contour integration directly. If we replace $-1/3$ with $-2/3$ or $1/3$ or something else, we can use contour integration directly to solve it.
Thought 2
I have tried the substitutions $x=t^3$ and $x=1-t$. Neither of them worked. But I noticed that the form of $1-x+x^2$ does not change under $x=1-t$.
Thought 3
Recalling the integral representation of the $_2F_1$ function, I was able to convert it into a formula involving $_2F_1\left(2/3,1;4/3; e^{\pi i/3}\right)$. But I think that will only make the integral more "complex". Moreover, I prefer an elementary approach (but I also appreciate a hypergeometric one).


Answer



The solution heavily exploits symmetry of the integrand.




Let $$I = \int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2} dx $$
Replacing $x$ by $1-x$ and summing the two forms gives
$$\tag{1} 2I = \int_0^1 \frac{x^{2/3}(1-x)^{-1/3} + (1-x)^{2/3}x^{-1/3}}{1-x+x^2} dx = \int_0^1 \frac{x^{-1/3}(1-x)^{-1/3}}{1-x+x^2} dx$$






Let $\ln_1$ be the complex logarithm with branch cut along the positive real axis, and $\ln_2$ the one whose cut is along the negative real axis. Denote
$$f(z) = \frac{2}{3}\ln_1(z) - \frac{1}{3}\ln_2 (1-z)$$
Then $f(z)$ is discontinuous along the positive real axis, but has different jumps in $\arg$ across the intervals $[0,1]$ and $[1,\infty)$.




Now integrate $g(z) = e^{f(z)}/(1-z+z^2)$ over a keyhole contour. Let $\gamma_1$ be the path slightly above $[0,1]$ and $\gamma_4$ the one below; let $\gamma_2$ be the path slightly above $[1,\infty)$ and $\gamma_3$ the one below. It is easily checked that
$$\int_{\gamma_1} g(z) dz = I \qquad \qquad \int_{\gamma_4} g(z) dz = I e^{4\pi i/3}$$
$$\int_{\gamma_2} g(z) dz = e^{\pi i/3} \underbrace{\int_1^\infty \frac{x^{2/3}(x-1)^{-1/3}}{1-x+x^2} dx}_J\qquad \int_{\gamma_3} g(z) dz = e^{\pi i} J$$



If we perform $x\mapsto 1/x$ on $J$, we get $\int_0^1 x^{-1/3}(1-x)^{-1/3}/(1-x+x^2)dx$, thus $J = 2I$ by $(1)$.



Therefore $$I(1-e^{4\pi i/3}) + 2I(e^{\pi i / 3} - e^{\pi i}) = 2\pi i\times \text{Sum of residues of } g(z) \text{ at } e^{\pm 2\pi i /3}$$
From which I believe you can work out the value of $I$.
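
As a numerical sanity check of the claimed value (a sketch assuming SciPy is available; the integrable singularity at $x=1$ is left to the adaptive quadrature):

from scipy.integrate import quad
import numpy as np

I, err = quad(lambda x: x**(2/3) * (1 - x)**(-1/3) / (1 - x + x**2),
              0, 1, limit=200)
print(I, 2 * np.pi / (3 * np.sqrt(3)))   # both ≈ 1.2092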


Convert complex number to polar coordinates - round 2





This is a follow-up problem to this. I am posting a new question related to the previous question, since not all of the problems were solved. I have new questions/problems that came up after I closed the previous post.



Problem



Solve for $x \in \mathbb{C}$:
$$ x^2-4ix-5-i=0 $$
and express the output in polar coordinates.



Attempt to solve




In the last post I solved the equation and got the answer:



$$ x=2i \pm \sqrt{i+1} $$



After that I tried to convert this into polar form, with little success. Someone provided a solution showing how to express $\sqrt{i+1}$ in a way where the $\text{Re}$ and $\text{Im}$ parts are separated:



$$ \sqrt{i+1}=\sqrt{\frac{1+\sqrt{2}}{2}}+i\sqrt{\frac{\sqrt{2}-1}{2}} $$



More about how this was accomplished is in the previous post.





Not in the previous post



Now I tried to find the radius and angle of this complex number, for $x \in \mathbb{C}$:



$$ ||x||=\sqrt{\text{Re}(x)^2+\text{Im}(x)^2} $$



$$ \text{Re}(x_1)=\sqrt{\frac{1+\sqrt{2}}{2}} \text{ and } \text{Re}(x_2)=-\sqrt{\frac{1+\sqrt{2}}{2}}$$




$$ \text{Im}(x_1) = 2+ \sqrt{\frac{\sqrt{2}-1}{2}} \text{ and } \text{Im}(x_2) = 2 - \sqrt{\frac{\sqrt{2}-1}{2}}$$



Both complex solutions should have the same radius, so it is only necessary to compute the radius for one of them.



$$ ||x||=\sqrt{(\sqrt{\frac{1+\sqrt{2}}{2}})^2+( 2+\sqrt{\frac{\sqrt{2}-1}{2}})^2} $$



since $$ (a+b)^2=a^2+2ab+b^2 $$



$$= 2^2+ 2 \cdot 2 \cdot \sqrt{\frac{\sqrt{2}-1}{2}} + (\sqrt{\frac{\sqrt{2}-1}{2}})^2$$




$$= 4+4\sqrt{\frac{\sqrt{2}-1}{2}} + \frac{\sqrt{2}-1}{2} $$



$$||x||= \sqrt{\frac{\sqrt{2}-1}{2}+4+4\sqrt{\frac{\sqrt{2}-1}{2}}+\frac{\sqrt{2}-1}{2}} $$



$$||x||= \sqrt{4\sqrt{\frac{\sqrt{2}-1}{2}}+2\frac{\sqrt{2}-1}{2}+4} $$



$$ \text{let } a = \frac{\sqrt{2}-1}{2}$$



$$ ||x|| = \sqrt{4\sqrt{a}+2a+4} $$




I tried to simplify this expression, but it doesn't look like it is easily simplified.



I probably have to just stick with this:



$$||x||= \sqrt{4\sqrt{\frac{\sqrt{2}-1}{2}}+2\frac{\sqrt{2}-1}{2}+4} $$



Now for the argument of the complex number:



$$ \text{arg}(x)=\arctan(\frac{2+ \sqrt{\frac{\sqrt{2}-1}{2}}}{\sqrt{\frac{1+\sqrt{2}}{2}}}) $$




which again probably doesn't simplify that much.



Now I can express this in polar form, which doesn't look that nice:



$$ \sqrt{4\sqrt{\frac{\sqrt{2}-1}{2}}+2\frac{\sqrt{2}-1}{2}+4} \cdot \exp(i\cdot \arctan(\frac{2+ \sqrt{\frac{\sqrt{2}-1}{2}}}{\sqrt{\frac{1+\sqrt{2}}{2}}})) $$



The problem is that this is a quite complicated expression, and I am not sure if it is right. I would like to have it in a much simpler form, if that is possible.


Answer



The solution you have provided is correct. The conversion to polar is wrong.




Your mistake is here:




Both complex solutions should have the same radius, so it is only necessary to compute the radius for one of them.
$$||x||=\sqrt{(\sqrt{\frac{1+\sqrt{2}}{2}})^2+( 2+\sqrt{\frac{\sqrt{2}-1}{2}})^2}$$




This is not true. The first radius is the above one, the second is



$$||x||=\sqrt{(\sqrt{\frac{1+\sqrt{2}}{2}})^2+( 2-\sqrt{\frac{\sqrt{2}-1}{2}})^2}$$




Also, there is a calculation mistake when you arrive here:




$$||x||= \sqrt{4\sqrt{\frac{\sqrt{2}-1}{2}}+2\frac{\sqrt{2}-1}{2}+4}$$




Here is how you do it:



\begin{equation}

||x||=\sqrt{(\sqrt{\frac{1+\sqrt{2}}{2}})^2+( 2+\sqrt{\frac{\sqrt{2}-1}{2}})^2}
\end{equation}

becomes
\begin{equation}
||x||=
\sqrt{\frac{1+\sqrt{2}}{2} + 4 + 4\sqrt{\frac{\sqrt{2}-1}{2}} + \frac{\sqrt{2}-1}{2}}
\end{equation}

that is
\begin{equation}
||x||=

\sqrt{\sqrt{2} + 4 + 4\sqrt{\frac{\sqrt{2}-1}{2}}}
\neq
\sqrt{4\sqrt{\frac{\sqrt{2}-1}{2}}+2\frac{\sqrt{2}-1}{2}+4}
\end{equation}

The first solution has the magnitude given above, which is not the same as the magnitude of the second solution.
The angle $\theta$ would be the one you have provided,
$$\theta=\arctan(\frac{2+ \sqrt{\frac{\sqrt{2}-1}{2}}}{\sqrt{\frac{1+\sqrt{2}}{2}}})$$
So, in polar form, you'd have
$$\sqrt{\sqrt{2} + 4 + 4\sqrt{\frac{\sqrt{2}-1}{2}}} \exp(i \arctan(\frac{2+ \sqrt{\frac{\sqrt{2}-1}{2}}}{\sqrt{\frac{1+\sqrt{2}}{2}}}))$$
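
For a numerical cross-check of the two roots, their (distinct) radii, and the angle (a sketch assuming NumPy is available):

import numpy as np

roots = np.roots([1, -4j, -5 - 1j])       # x^2 - 4i x - 5 - i = 0
for x in roots:
    print(x, abs(x), np.angle(x))         # note the two radii differ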


Solving a limit with radicals without l'Hopital



I've been trying to solve this particular expression by rationalizing the numerator and the denominator (multiplying by conjugates), squaring, and multiplying/dividing by $x/x$; nothing seems to work. I would appreciate any input.




$$\lim_{x\to 0} \frac{\sqrt{1+x}-\sqrt{1+x^2}}{\sqrt{1+x}-1}$$


Answer



\begin{align}
\frac{\sqrt{1+x}-\sqrt{1+x^2}}{\sqrt{1+x}-1}
&=
\frac{\sqrt{1+x}-\sqrt{1+x^2}}{\sqrt{1+x}-1}
\frac{\sqrt{1+x}+\sqrt{1+x^2}}{\sqrt{1+x}+1}
\frac{\sqrt{1+x}+1}{\sqrt{1+x}+\sqrt{1+x^2}}\\[6px]
&=
\frac{(1+x)-(1+x^2)}{(1+x)-1}

\frac{\sqrt{1+x}+1}{\sqrt{1+x}+\sqrt{1+x^2}}\\[6px]
&=
\frac{x(1-x)}{x}
\frac{\sqrt{1+x}+1}{\sqrt{1+x}+\sqrt{1+x^2}}\\[6px]
\end{align}
Now it's easy, isn't it?


number theory - Easy explanation of analytic continuation

Today, as I was flipping through my copy of Higher Algebra by Barnard and Child, I came across a theorem which said,




The series $$ 1+\frac{1}{2^p} +\frac{1}{3^p}+...$$ diverges for $p\leq 1$ and converges for $p>1$.




But later I found out that the zeta function is defined for all complex values other than 1. Now I know that Riemann analytically continued this function to fit all complex values, but how do I explain, to a layman, that $\zeta(0)=1+1+1+...=-\frac{1}{2}$?



The Wiki articles on these topics go way over my head. I'd appreciate it if someone could explain to me what analytic continuation actually is, and which functions can be analytically continued.







Edit



If the series diverges for $p\leq1$, how is WolframAlpha able to compute $\zeta(1/5)$? Shouldn't it give infinity as the answer?

Wednesday 27 February 2013

congruences - Modular inverse question

I am trying to find the $x$ in $$113x\equiv 311 \mod 653,$$ but using the Euclidean algorithm I only get as far as $$(-152)(113)\equiv 1 \mod 653.$$
This negative number is confusing me. How can I go further to find the inverse?
Or how can I rewrite $$x\equiv -152 \mod 653$$ so that there isn't any negative number?

Can I simplify the question using Chinese remainder theorem?
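
A minimal computational sketch (standard-library Python 3.8+; the specific values are recomputed rather than taken from the work above): any representative $r$ of the inverse, negative or not, can be shifted into $\{0,\dots,652\}$ with r % 653, after which $x$ is just $311 \cdot r \bmod 653$.

inv = pow(113, -1, 653)        # modular inverse of 113 mod 653
x = 311 * inv % 653            # Python's % always returns a nonnegative result here
assert 113 * x % 653 == 311
print(inv, x)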

Tuesday 26 February 2013

combinatorics - Prove by a combinatorial argument that $(n-r){n choose r}=n{n-1 choose r}$




Prove by a combinatorial argument $$(n-r){n \choose r}=n{n-1 \choose r}$$




My attempt:



We have two ways of counting the number of ways to form a committee from a group of $n$ people.




Here I'm a little confused, because I don't know how interpret the multiplication by $(n-r)$ here. Can someone help me?


Answer



On the RHS




  • we choose one president ($n$ choices) and then form a committee of $r$ out of the remaining $n-1$



On the LHS





  • we form a committee of $r$ out of $n$ and then choose a president from the remaining $n-r$


summation - Find closed form of sum of fraction of binomial coefficients



Can somebody give me a hint for this exercise, where I have to find the specific closed form?




$\sum_{k=0}^m \frac{\binom{m}{k}}{\binom{n}{k}}, m,n\in\mathbb{N}$ and $m\leq n$





What I have done so far:



$\sum_{k=0}^m \frac{\binom{m}{k}}{\binom{n}{k}} = \sum_{k=0}^m \frac{\frac{m!}{(m-k)!\cdot k!}}{\frac{n!}{(n-k)!\cdot k!}} = \sum_{k=0}^m \frac{m!}{n!}\cdot \frac{(n-k)!}{(m-k)!} = \frac{m!}{n!}\cdot \sum_{k=0}^m \frac{(n-k)!}{(m-k)!}$



Look at example 1: $n=8, m=5$



$\frac{5!}{8!}\cdot(\frac{8!}{5!} + \frac{7!}{4!} +\frac{6!}{3!} +\frac{5!}{2!} + \frac{4!}{1!} +\frac{3!}{0!}) = \\\frac{5!}{8!} \cdot (8\cdot7\cdot6+7\cdot6\cdot5+6\cdot5\cdot4+5\cdot4\cdot3+4\cdot3\cdot2+3\cdot2\cdot1) = \\
\frac{5!}{8!} \cdot (336+210+120+60+24+6) = \frac{5!}{8!}\cdot 756 = 2.25$




I can't find a pattern.



Edit 1: Maybe there is an approach with recursion.



Edit 2: Okay I found the solution.



$\frac{m!}{n!}\cdot \sum_{k=0}^m \frac{(n-k)!}{(m-k)!}= \frac{m!}{n!}\cdot\frac{1}{4}\cdot t\cdot(t+1)\cdot(t+2)\cdot(t+3)$ with $t=(n-m)!$



Edit 3: This formula works well for example 1, but fails for example 2: $n=9, m=3$




$\frac{3!}{9!}\cdot(\frac{9!}{3!} + \frac{8!}{2!} +\frac{7!}{1!} +\frac{6!}{0!}) = \\\frac{3!}{9!} \cdot (9\cdot8\cdot7\cdot6\cdot5\cdot4+8\cdot7\cdot6\cdot5\cdot4\cdot3+7\cdot6\cdot5\cdot4\cdot3\cdot2+6\cdot5\cdot4\cdot3\cdot2\cdot1) = \\
\frac{3!}{9!} \cdot (60480+20160+5040+720) = \frac{3!}{9!}\cdot 86400 = 1.428$



So I still have no general solution. Can someone help?


Answer



Let



$$S(m,n):=\sum_{k=0}^m \frac{\binom{m}{k}}{\binom{n}{k}}$$



We are going to prove:

$$S(m,n)=\frac{n+1}{n+1-m}.\tag{1}$$



Obviously (1) is valid for $m=0$ and arbitrary $n\ge 0$. Further, if (1) is valid for $(m-1,n-1)$, it is valid for $(m,n)$ as well:
$$
S(m,n):=1+\sum_{k=1}^m \frac{\frac mk\binom{m-1}{k-1}}{\frac nk\binom{n-1}{k-1}}
=1+\frac{m}{n} S(m-1,n-1)\\
\stackrel{I.H.}{=}1+\frac{m}{n}\frac{n}{n+1-m}=\frac{n+1}{n+1-m}.
$$



Thus by induction (1) is valid for arbitrary $0\le m \le n$.
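
The closed form is easy to confirm with exact rational arithmetic (a sketch using only the Python standard library):

from fractions import Fraction
from math import comb

def S(m, n):
    return sum(Fraction(comb(m, k), comb(n, k)) for k in range(m + 1))

for n in range(12):
    for m in range(n + 1):
        assert S(m, n) == Fraction(n + 1, n + 1 - m)
print("S(m, n) = (n+1)/(n+1-m) verified for all 0 <= m <= n < 12")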



functions - Show $f^{-1}(A^c)=(f^{-1}(A))^c$





Let $f: X \to Y$, and $A\subseteq Y$. Show that $f^{-1}(A^c)=(f^{-1}(A))^c$



I know how to prove that $f^{-1}(A^c)\subseteq(f^{-1}(A))^c$, but stuck on proving $(f^{-1}(A))^c\subseteq f^{-1}(A^c)$. Could someone help with this step please? Thanks.


Answer



Suppose $x\in (f^{-1}(A))^c$. Then $x\in X$ and $x\notin f^{-1}(A)$. Thus $f(x)\notin A$, and of course $f(x)\in Y$, so $f(x)\in A^c$. Therefore $x\in f^{-1}(A^c)$.


Monday 25 February 2013

complex numbers - How can I compute the limit of this sequence: $sqrt[n]{sin n}$?



I need to calculate the limit of the following sequence:



$$\lim _ {n \to \infty} \sqrt[n]{\sin(n)}$$



where the $n$-th root of a negative number is defined as the principal complex root.



I suspect the answer to be $1$, but I do not know how to prove it.


Answer




The problem boils down to proving that $\sin(n)$ cannot be too close to zero relative to the size of $n$.



We know that $\pi$ is a transcendental number with a finite irrationality measure. In particular, the inequality
$$ \left| \pi-\frac{p}{q}\right| \leq \frac{1}{q^{10}} $$
may hold only for a finite number of rational numbers $\frac{p}{q}$, hence (since $\left|\sin x\right|\geq K\left|x-k\pi\right|$ when $x$ is close to $k\pi$, thanks to Adayah) in the general case $\left|\sin(n)\right|$ is greater than $\frac{C}{n^9}$ for some constant $C$. That is enough to ensure that the wanted limit is $1$ by squeezing.


integration - How to evaluate the limit $\int_{0}^{\frac{\pi}{2}}Re^{-R\sin\theta}d\theta$ (as $R \rightarrow \infty$)



While doing a mathematical exercise (Stein, Complex Analysis, Chapter 2, Exercise 3), I managed to reduce the problem to the following one:




$$\int_{0}^{\omega}Re^{-R\cos\theta}d\theta \rightarrow 0 \quad (\text{as } R \rightarrow \infty)$$ where $0\le \omega <\frac{\pi}{2}$.





I can prove this without much difficulty:
$$\int_{0}^{\omega}Re^{-R\cos\theta}d\theta \le \int_{0}^{\omega}Re^{-R\cos\omega}d\theta =\omega Re^{-R\cos\omega} \rightarrow 0 \quad (\text{as } R \rightarrow \infty)$$
It is crucial that $\omega$ is strictly less than $\frac{\pi}{2}$. This led me to raise another interesting problem: what will the limit be if we replace $\omega$ by $\frac{\pi}{2}$? After changing $\cos\theta$ to $\sin\theta$ (this doesn't matter), my question now is:

$$\lim_{R \rightarrow \infty}\int_{0}^{\frac{\pi}{2}}Re^{-R\sin\theta}d\theta = \,?$$

I have no idea how to calculate it; I don't even know if the limit exists.


Answer



Let $I(R)$ be your integral, and set $J(R)=\int_{0}^{\pi/2}R\cos(\theta)^2\exp(-R\sin(\theta))d\theta$ and $K(R)=\int_{0}^{\pi/2}R\sin(\theta)^2\exp(-R\sin(\theta))d\theta$. We have $I(R)=J(R)+K(R)$. Note that the function $u\exp(-u)$ is positive and bounded on $[0,+\infty[$, say by $M$.




a) For $K(R)$: writing $R\sin(\theta)^2\exp(-R\sin(\theta))=\sin(\theta)\cdot(R\sin(\theta))\exp(-R\sin(\theta))$, we have $R\sin(\theta)^2\exp(-R\sin(\theta))\leq M$ for all $\theta$, and this function goes to $0$ pointwise as $R\to +\infty$. By the dominated convergence theorem, $K(R)\to 0$ as $R\to +\infty$.



b) For $J(R)$, we integrate by parts:
$$J(R)=\left[\cos(\theta)\left(-\exp(-R\sin(\theta))\right)\right]_0^{\pi/2}-\int_0^{\pi/2}\sin(\theta)\exp(-R\sin(\theta))d\theta$$
Hence $J(R)=1-\int_0^{\pi/2}\sin(\theta)\exp(-R\sin(\theta))d\theta$. Now apply the dominated convergence theorem to $\int_0^{\pi/2}\sin(\theta)\exp(-R\sin(\theta))d\theta$, and you are done: $J(R)\to 1$, so $I(R)\to 1$.
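
A quick numerical illustration of the limit (a sketch assuming SciPy is available):

from scipy.integrate import quad
import numpy as np

for R in [1, 10, 100, 1000]:
    I, _ = quad(lambda t: R * np.exp(-R * np.sin(t)), 0, np.pi / 2)
    print(R, I)    # tends to 1 as R grows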


Sunday 24 February 2013

calculus - Find $lim_{xto0}frac{sin5x}{sin4x}$ using $lim_{thetato0}frac{sintheta}{theta}=1$.

I am trying to find $$\lim_{x\to0}\frac{\sin5x}{\sin4x}$$



My approach is to break up the numerator into $4x+x$. So,




$$\lim_{x\to0}\frac{\sin(4x+x)}{\sin4x}=\lim_{x\to0}\frac{\sin4x\cos x+\cos4x\sin x}{\sin4x}=\lim_{x\to0}\left(\cos x +\cos4x\cdot\frac{\sin x}{\sin4x}\right)$$



Now the problem is with $\frac{\sin x}{\sin4x}$. If I use the double angle formula twice, it is going to complicate the problem.



The hint says that you can use $\lim_{\theta\to0}\frac{\sin\theta}{\theta}=1$.



I have little clue how I can make use of the hint.




Any help is greatly appreciated. Thanks!

calculus - $n^2 int_0^1 (1-x)^n sin(pi x) mathrm{d}x$



I would like to find :



$$\lim_{n \to \infty} n^2 \int_0^1 (1-x)^n \sin(\pi x) \mathrm{d}x $$




We have :
$$n^2 \int_0^1 (1-x)^n \sin(\pi x) \mathrm{d}x = \int_0^1 n^2(1-x)^n \sin(\pi x) \mathrm{d}x$$



Moreover we have $\forall x \in [0, 1]$ :



$$\lim_{n \to \infty} n^2(1-x)^n \sin(\pi x) = 0$$



So by the dominated convergence theorem we can deduce that :



$$ \lim_{n \to \infty} \int_0^1 n^2(1-x)^n \sin(\pi x) \mathrm{d}x = 0$$




Yet here my book says the answer is actually $\pi$, and I don't understand why what I've done is wrong, and how I can actually find that the value is $\pi$.


Answer



You cannot apply the DCT because there is not an integrable function $g$ (independent of $n$) such that $n^2(1-x)^n\sin(\pi\,x)\le g(x)$.



Integrating by parts twice we get
$$
\int_0^1(1-x)^n\sin(\pi\,x)\,dx=\frac{\pi}{(n+1)(n+2)}-\frac{\pi^2}{(n+1)(n+2)}\int_0^1(1-x)^{n+2}\sin(\pi\,x)\,dx.
$$
Multiply by $n^2$ and let $n\to\infty$: the first term tends to $\pi$ because $\frac{n^2}{(n+1)(n+2)}\to 1$, and the DCT can be applied to the integral in the second term (its integrand is dominated by $\sin(\pi x)$), so that term tends to $0$.
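
A numerical sketch of the limit (assuming SciPy is available):

from scipy.integrate import quad
import numpy as np

for n in [10, 100, 1000, 10000]:
    I, _ = quad(lambda x: (1 - x)**n * np.sin(np.pi * x), 0, 1, limit=200)
    print(n, n**2 * I)    # approaches pi ≈ 3.14159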



algebra precalculus - Converting $(1+...+n)^2*(n+1)^3$ to $(2+...+2n)^2$



I'm currently going through Calculus by Spivak by myself, and came across a proof by induction requiring to prove $1^3+...+n^3 = (1+...+n)^2$



Naturally, to prove this, I need to somehow convert $(1+...+n)^2+(n+1)^3$ to $(2+...+2n)^2$.
After quite a bit of thinking, I'm still not sure how to do this. I think I may be forgetting about some property of squares that we're supposed to be using.



Note: Please only provide a hint, not the complete answer.
Edit: I was mistakenly taking $(1+...+n)^2*(n+1)^3$ rather than $(1+...+n)^2+(n+1)^3$. Corrected.


Answer




Previous Answer



Hint:
$$1 + 2 + \ldots + n = \frac{n(n+1)}{2}$$



Can you take it from here?



Revised Answer



My apologies for not bothering to check more closely earlier, but I think your original equation

$$\left(1 + 2 + \ldots + n\right)^2\left(n + 1\right)^3 = \left(2 + 4 + \ldots + 2n\right)^2$$
is not an identity.



To see why, via a quick inspection, the highest power of $n$ on the LHS is $n^7$, while the highest power of $n$ on the RHS is $n^4$, a contradiction.



Therefore, your proposed equation is not an identity.


trigonometry - If $sintheta+sinphi=a$ and $costheta+ cosphi=b$, then find $tan frac{theta-phi}2$.



I'm trying to solve this problem:




If $\sin\theta+\sin\phi=a$ and $\cos\theta+ \cos\phi=b$, then find $\tan \dfrac{\theta-\phi}2$.





So seeing $\dfrac{\theta-\phi}2$ in the argument of the tangent function, I first thought of converting the left-hand sides of the givens to products which gave me:
$$2\sin\frac{\theta+\phi}2\cos\frac{\theta-\phi}2=a\quad,\quad2\cos\frac{\theta+\phi}2\cos\frac{\theta-\phi}2=b$$



But then, on dividing the two equations (assuming $b\ne0$), I just get the value of $\tan\dfrac{\theta+\phi}2$.



I don't know how else to proceed.
Any help would be appreciated!


Answer




Method $1:$



Squaring & adding what you have derived $$4\cos^2\frac{\theta-\phi}2=a^2+b^2$$



$$\implies \sec^2\frac{\theta-\phi}2=\frac4{a^2+b^2}$$



$$\implies \tan^2\frac{\theta-\phi}2=\frac4{a^2+b^2}-1=\frac{4-a^2-b^2}{a^2+b^2}$$



Method $2:$




$$\text{As }\quad2\cos\frac{\theta+\phi}2\cos\frac{\theta-\phi}2=b,$$



$$\implies \sec\frac{\theta-\phi}2=\frac2{b\sec\frac{\theta+\phi}2}$$



$$\implies \sec^2\frac{\theta-\phi}2=\frac4{b^2\sec^2\frac{\theta+\phi}2} =\frac4{b^2\left(1+\tan^2\frac{\theta+\phi}2\right)}=\frac4{b^2\left(1+\frac{a^2}{b^2}\right)}$$ as $\tan\frac{\theta+\phi}2=\frac ab$



$$\implies \sec^2\frac{\theta-\phi}2=\frac4{b^2+a^2}$$



$$\text{Now, }\tan^2\frac{\theta-\phi}2=\sec^2\frac{\theta-\phi}2-1$$


Saturday 23 February 2013

combinatorics - Double summation limits



Is there a way to "see" that $$\sum\limits_{r=0}^\infty \sum\limits_{x=r+1}^\infty \mathbb P(X=x)=\sum\limits_{x=1}^\infty\sum\limits_{r=0}^{x-1}\mathbb P(X=x)\; ?$$ Thanks.



Answer



It doesn't matter what you sum (as long as the sums are convergent). The points in the $(r,x)$ plane that are being summed over can be illustrated by a diagram like this:



x

5 *****
4 ****
3 ***
2 **
1 *

012345 r


The two sides in the identity correspond to summing by columns first or rows first.


Roots of a Polynomial Minus Its Constant Term




Suppose we have a sequence of integers $a_1,\dots,a_n$. Is there any way to determine the roots of the polynomial



$$P(x) = (x+a_1)\dots(x+a_n) - a_1\dots a_n$$



Clearly $P(0) = 0$, but can anything be said about the other roots? Can they be expressed in some way related to the original integers $a_1,\dots,a_n$?



Any answer or reference would be appreciated.



Edit: If need be, you may assume $a_i|a_{i+1}$ for all $i$.




Comment: The new roots need not be integers. I would be satisfied with finding complex roots.


Answer



It doesn't appear to be easier than solving a polynomial of degree $n-1$ "from scratch". For example, if $Q(x) = (x+1)(x+2)(x+4)(x+8)(x+16)(x+32)$, the Galois group of $$P(z)/z = (Q(z)-Q(0))/z = z^5 + 63 z^4 + 1302 z^3 + 11160 z^2 + 41664 z + 64512$$ is $S_5$, so this is not solvable by radicals.



On the other hand, it may be interesting to look at the roots of $Q(x) - t$ as functions of $t$: these are analytic except at the points where they collide (the roots of the discriminant of $Q(x)-t$), which can be branch points.
In the above example, the root that is $-32$ at $t=0$ has the Maclaurin series
$$ -32-\frac{1}{9999360}\,t+\frac{1189}{578592599703552000}\,t^{2}-\frac{5752091}{84743927083111118222131200000}\,t^{3}+\frac{28255922633}{10460874099698222880422457709166592000000}\,t^{4}-\frac{8129746966487}{68399931666747186847737020565295881781248000000000}\,t^{5}+\ldots$$
I believe this has radius of convergence approximately $1.61741 \times 10^7$ (one of the roots of that discriminant), so it should certainly converge quite nicely at $t=Q(0)$.


Friday 22 February 2013

sequences and series - What is the partial sum of $\sum_{n=1}^{+\infty}\zeta(n)e^{-n^2}$ and what about its closed form?



I selected the sum $\sum_{n=1}^{+\infty} \zeta(n)e^{-n^2}$ for evaluation. My weak assumptions suggested that it is convergent, for this reason: the term $e^{-n^2}$ vanishes when $n \to +\infty$, yielding the form $+\infty\cdot 0$, which is almost $0$, because $\zeta(n)$ diverges for large $n$. Wolfram Alpha says it is a convergent series; so what are its partial sums and its closed form?


Answer



The given series diverges because of the first term, as $\zeta(1)$ represents the harmonic series, and the harmonic series diverges.




I will consider a convergent series instead:



$$S=\sum_{n=1}^\infty \zeta(n+1) e^{-n^2}$$



No closed form is expected to exist, but there's an interesting way to rewrite the series.



First, we rewrite it as a double sum:



$$S=\sum_{n=1}^\infty \sum_{k=1}^\infty \frac{1}{k^{n+1}} e^{-n^2}$$




Now we consider the term with $k=1$ separately, because otherwise it would cause us problems later. In any case it has a closed form in terms of Jacobi theta functions:




$$\sum_{n=1}^\infty e^{-n^2}=\frac{1}{2} \left(\vartheta _3\left(0,\frac{1}{e}\right)-1\right)$$




Now let's move on to the rest of the series:



$$S_2=\sum_{k=2}^\infty \sum_{n=1}^\infty \frac{1}{k^{n+1}} e^{-n^2}$$




There's no closed form for the series in the form $\sum_{n=1}^\infty e^{-n^2} x^n$ (as far as I know or Mathematica knows), but we can rewrite it.



The following might be missing some rigour, but it works. Let's represent the exponential part as an integral:




$$e^{-n^2}= \frac{1}{2 \sqrt{ \pi}} \int_{- \infty} ^\infty e^{-x^2/4+inx} dx$$




The imaginary part of the integral is zero, because it is the integral of an odd function over a symmetric interval. But it is beneficial to write it in exponential form anyway.




Getting back to the series:



$$S_2=\frac{1}{2 \sqrt{ \pi}} \sum_{k=2}^\infty \frac{1}{k} \int_{- \infty} ^\infty e^{-x^2/4} \sum_{n=1}^\infty \frac{1}{k^n} e^{inx} dx$$



Because $|e^{inx}| \leq 1$ and $k \geq 2$ we can write the closed form for the geometric series under the integral:



$$S_2=\frac{1}{2 \sqrt{ \pi}} \sum_{k=2}^\infty \frac{1}{k} \int_{- \infty} ^\infty e^{-x^2/4}\frac{ e^{ix}}{k-e^{ix}} dx$$



We can get rid of the imaginary part, which is again an odd function (as it should be):




$$\frac{ e^{ix}}{k-e^{ix}}=\frac{k \cos x-1+i k \sin x}{k^2-2k \cos x+1}$$



Now we can rewrite the series as a series of real valued integrals:




$$S_2=\frac{1}{2 \sqrt{ \pi}} \sum_{k=2}^\infty \frac{1}{k} \int_{- \infty} ^\infty e^{-x^2/4} \frac{k \cos x-1}{k^2-2k \cos x+1} dx$$




The series under the integral has a closed form too, in terms of digamma functions:





$$\sum_{k=2}^\infty \frac{1}{k} \frac{k \cos x-1}{k^2-2k \cos x+1}=1- \gamma-\frac{1}{2} \left( \psi (2- \cos x+ i \sin x) +\psi (2- \cos x- i \sin x) \right)$$




where $\gamma$ is the Euler-Mascheroni constant; the value is real because the arguments are conjugate.



So we can write:



$$S_2=1- \gamma-\frac{1}{4 \sqrt{ \pi}} \int_{- \infty} ^\infty e^{-x^2/4} \left( \psi (2- \cos x+ i \sin x) +\psi (2- \cos x- i \sin x) \right) dx$$




Naming the integral (which I believe has no closed form) and getting its numerical value from Mathematica, we have:




$$I=\int_{- \infty} ^\infty e^{-x^2/4} \left( \psi (2- \cos x+ i \sin x) +\psi (2- \cos x- i \sin x) \right) dx= \\ =1.2890375247789216487\dots$$




Getting back to the full series we obtain:




$$S=\frac{1}{2} \left(\vartheta _3\left(0,\frac{1}{e}\right)+1\right)-\gamma-\frac{I}{4 \sqrt{ \pi}}=0.62728755144118062\dots$$





This is the same as the numerical value Mathematica obtains for the original series.
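
The series itself is also quick to sum to high precision (a sketch assuming mpmath is available):

from mpmath import mp, zeta, exp, inf, nsum

mp.dps = 25
S = nsum(lambda n: zeta(n + 1) * exp(-n**2), [1, inf])
print(S)    # 0.6272875514411806...

This agrees with the value assembled from the theta-function term and the integral $I$ above.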






We can also use the integral form of the digamma function to rewrite $I$ as a double integral and get a curious formula:




$$S=\frac{1}{2} \left(\vartheta _3\left(0,\frac{1}{e}\right)+1\right)-\frac{1}{2 \sqrt{ \pi}} \int_{- \infty} ^\infty \int_0^1 \frac{1-t^{1-\cos (x)} \cos (\log (t) \sin (x))}{1-t}~e^{-x^2/4}~ dt~dx$$




real analysis - Proving that a point on the boundary of a closed ball in a metric space cannot be interior.



The idea of this proof is quite clear but I'm having some trouble making it rigorous. Suppose we have a metric space $(X, d)$ and a closed ball $U := \{x \in X : d(x, a) \leq t\}$ for some fixed $a$ and $t$. I want to prove that a point on the boundary of this ball is not an interior point. Here is my "proof":



Let $x$ satisfy $d(x, a) = t$ (i.e. let $x$ be a boundary point). Suppose also that $x$ is interior. Then $\exists \, r > 0$ such that the open ball $D_r(x)$ is contained within $U$. This is an immediate contradiction, because some points in this open ball are outside $U$.




My problem is with the very last statement, which relies entirely upon geometrical intuition and is not very rigorous. I suppose I could try a bit harder with this idea: along the line connecting $a$ and $x$, we can go a bit further along the line still inside the $r$ -ball and find a point outside of $U$. But this still doesn't sound very rigorous, with things like lines only really applying to Euclidean space.



How can I make this rigorous?



EDIT: Thanks for the answers and comments, I now realize that this cannot be proven at all.


Answer



In a general metric space the boundary of the set $U = \{x : d(x,a) \le t\}$ is not the set $\{x : d(x,a) = t\}$. For instance, with the discrete metric ($d(x,y)=1$ whenever $x\neq y$) and $t=1$, $U$ is the whole space, which is open, so every point with $d(x,a)=1$ is interior.



The (usual) definition of boundary point of a set implies that the boundary and interior of a set are disjoint.



calculus - Evaluating $lim_{xtofrac{pi}{4}}frac{1-tan x}{1-sqrt{2}sin x}$



How can I evaluate $$\lim_{x\to\frac{\pi}{4}}\frac{1-\tan x}{1-\sqrt{2}\sin x}$$ without L'Hopital's rule? Using L'Hopital's rule, it evaluates to $2$. Is there a way to do it without using L'Hopital?



Answer



Multiply by the conjugate and use trig identities, factoring appropriately:
\begin{align*}
\lim_{x\to\frac{\pi}{4}}\frac{1-\tan x}{1-\sqrt{2}\sin x}
&= \lim_{x\to\frac{\pi}{4}}\frac{1-\tan x}{1-\sqrt{2}\sin x} \cdot \frac{1 + \sqrt{2}\sin x}{1 + \sqrt{2}\sin x} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{(1-\tan x)(1 + \sqrt{2}\sin x)}{1 - 2\sin^2 x} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{(1-\frac{\sin x}{\cos x})(1 + \sqrt{2}\sin x)}{(1 - \sin^2 x) - \sin^2 x} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{(1-\frac{\sin x}{\cos x})(1 + \sqrt{2}\sin x)}{\cos^2 x - \sin^2 x} \cdot \frac{\cos x}{\cos x} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{(\cos x - \sin x)(1 + \sqrt{2}\sin x)}{\cos x(\cos x - \sin x)(\cos x + \sin x)} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{1 + \sqrt{2}\sin x}{\cos x(\cos x + \sin x)} \\

&= \frac{1 + \sqrt{2}\sin \frac{\pi}{4}}{\cos \frac{\pi}{4}(\cos \frac{\pi}{4} + \sin \frac{\pi}{4})} \\
&= \frac{1 + \sqrt{2}(\frac{1}{\sqrt 2})}{\frac{1}{\sqrt 2}(\frac{1}{\sqrt 2} + \frac{1}{\sqrt 2})}
= \frac{1 + 1}{\frac{1}{\sqrt 2}(\frac{2}{\sqrt 2})}
= \frac{2}{2/2} = 2
\end{align*}


algebra precalculus - Solve $sqrt{3x}+sqrt{2x}=17$

This is what I did:
$$\sqrt{3x}+\sqrt{2x}=17$$
$$\implies\sqrt{3}\sqrt{x}+\sqrt{2}\sqrt{x}=17$$
$$\implies\sqrt{x}(\sqrt{3}+\sqrt{2})=17$$
$$\implies x(5+2\sqrt{6})=289$$
I don't know how to continue. And when I went to Wolfram Alpha, I got:

$$x=-289(2\sqrt{6}-5)$$
Could you show me the steps to get the final result?
Thank you.

Thursday 21 February 2013

matrices - Find matrix from Eigenvectors and Eigenvalues




A matrix $A$ has eigenvectors
$v_1 = \left(\begin{array}{c} 2 \\ 1 \\ \end{array}\right)$
$v_2 = \left(\begin{array}{c} 1 \\ -1 \\ \end{array}\right)$

with corresponding eigenvalues $\lambda_1 = 2$ and $\lambda_2 = -3$, respectively.



Determine $Ab$ for the vector $b = \left(\begin{array}{c} 1 \\ 1 \\ \end{array}\right)$



I know how to find eigenvalues and eigenvectors from a given matrix $A$, but not the other way around; the vector $b$ is a $2\times 1$ matrix and the determinant does not exist here, so how do I find the matrix $A$ as stated in the question?


Answer



By definition of eigenvalue and eigenvector, we have
$$\tag{1}A\left(\begin{array}{c} 2 \\ 1 \\ \end{array}\right)=2\left(\begin{array}{c} 2 \\ 1 \\ \end{array}\right)\mbox{ and }A\left(\begin{array}{c} 1 \\ -1 \\ \end{array}\right)=-3\left(\begin{array}{c} 1 \\ -1 \\ \end{array}\right).$$
Now, since
$$\left(\begin{array}{c} 1 \\ 1 \\ \end{array}\right)=\frac{2}{3}\left(\begin{array}{c} 2 \\ 1 \\ \end{array}\right)-\frac{1}{3}\left(\begin{array}{c} 1 \\ -1 \\ \end{array}\right),$$
we have
$$A\left(\begin{array}{c} 1 \\ 1 \\ \end{array}\right)=\frac{2}{3}A\left(\begin{array}{c} 2 \\ 1 \\ \end{array}\right)-\frac{1}{3}A\left(\begin{array}{c} 1 \\ -1 \\ \end{array}\right)=....\mbox{(using $(1)$)}$$
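
As a numerical cross-check (a sketch assuming NumPy; it reconstructs $A = PDP^{-1}$ from the given eigenpairs and applies it to $b$):

import numpy as np

P = np.array([[2.0, 1.0], [1.0, -1.0]])   # columns are v1 and v2
D = np.diag([2.0, -3.0])                  # matching eigenvalues
A = P @ D @ np.linalg.inv(P)
b = np.array([1.0, 1.0])
print(A @ b)    # the same vector the decomposition above yields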


real analysis - Prob. 9, Chap. 6, in Baby Rudin: Which one of these two improper integrals converges absolutely and which one does not?

Here are the links to three earlier posts of mine on Prob. 9, Chap. 6, in Baby Rudin here on Math SE.



Prob. 9, Chap. 6, in Baby Rudin: Integration by parts for improper integrals




Prob. 9, Chap. 6, in Baby Rudin: Integration by parts for improper integrals with one infinite limit



Prob. 9, Chap. 6 in Baby Rudin: Integration by parts for an improper integral



Now my question is the following.




Which one of the integrals $\int_0^\infty \frac{ \cos x }{ 1+x } \ \mathrm{d} x$ and $\int_0^\infty \frac{\sin x}{ (1+x)^2 } \ \mathrm{d} x$ converges absolutely, and which one does not?





My Attempt:




For any $b > 0$, and for all $x \in [0, b]$, the following inequality holds:
$$ \left\lvert \frac{ \sin x }{ (1+x)^2 } \right\rvert \leq \frac{1}{(1+x)^2}, $$
which implies (by virtue of Theorem 6.12 (b) in Baby Rudin) that
$$ \int_0^b \left\lvert \frac{ \sin x }{ (1+x)^2 } \right\rvert \ \mathrm{d} x \leq \int_0^b \frac{1}{(1+x)^2} \ \mathrm{d} x = - \frac{1}{1+b} - \left( - \frac{1}{1+0} \right) = 1 - \frac{1}{1+b}; $$
moreover,
$$ \lim_{b \to \infty} \left( 1 - \frac{1}{1+b} \right) = 1. $$
So we can conclude that
$$ \int_0^\infty \left\lvert \frac{ \sin x }{ (1+x)^2 } \right\rvert \ \mathrm{d} x = \lim_{ b \to \infty} \int_0^b \left\lvert \frac{ \sin x }{ (1+x)^2 } \right\rvert \ \mathrm{d} x \leq 1. $$

That is, the improper integral $\int_0^\infty \left\lvert \frac{ \sin x }{ (1+x)^2 } \right\rvert \ \mathrm{d} x $ converges, which is the same as saying that the integral $\int_0^\infty \frac{ \sin x }{ (1+x)^2 } \ \mathrm{d} x $ converges absolutely.




Am I right?



If so, then, as suggested by Rudin, the other integral does not converge absolutely.



But how to show this directly?

probability - Coupon collector expectation using definition

I want to do coupon collector's for a dice roll, or the expected number of rolls to get all 6 numbers from definition of expected value as opposed to using linearity of expectation.



$\tau = \min\{t \mid \text{every face } 1,\dots,6 \text{ appears among } X_1,\dots,X_t\}$, and

$\mathbb{E}[\tau] = \sum_{j = 1}^{\infty} j\,\mathbb{P}(\tau = j)$ is the definition.



I try to compute $\mathbb{P}(\tau = j)$, and for j rolls, you must have 5 numbers in the first $j-1$ rolls, and a 6th number in the $j$th spot, and the 6th number can't be in any of the first $j-1$ spots. So



$\mathbb{P}(\tau = j) = 6\left[\frac{1}{6^6}\,5!\,\binom{j-1}{5} \left(\frac{5}{6}\right)^{j-6}\right]$.




$\frac{1}{6^6}$ is for the six numbers being in those spots (say, 1-5 being in the first $j-1$ spots and 6 being in the last), $5!$ is the number of ways of arranging the 5 numbers, $\binom{j-1}{5}$ chooses the spots for $1,2,3,4,5$ among the first $j-1$ positions, and $(\frac{5}{6})^{j-6}$ is so that the first $j-1$ spots where you didn't place 1 to 5 don't contain a 6. Then multiply by 6, since 1 through 5 can also be at the $j$th position.



This doesn't work out to be the right sum. Where is this wrong?
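
A numerical sketch that makes the failure visible (standard-library Python): if the proposed $\mathbb{P}(\tau = j)$ were a probability distribution its values would sum to $1$, and the expectation would match a Monte Carlo estimate of $\mathbb{E}[\tau] = 6H_6 = 14.7$.

import random
from math import comb

def p(j):   # the formula proposed above
    return 6 * (1 / 6**6) * 120 * comb(j - 1, 5) * (5 / 6)**(j - 6)

print(sum(p(j) for j in range(6, 2000)))     # far from 1

def rolls_until_all_six():
    seen, t = set(), 0
    while len(seen) < 6:
        seen.add(random.randrange(6)); t += 1
    return t

trials = 100_000
print(sum(rolls_until_all_six() for _ in range(trials)) / trials)  # ≈ 14.7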

Summation of simple series $sum_{r=5}^n (2r + 4r^2) $

I am trying to teach myself, but I am confused by one question. It says: "for the following summation, give an equivalent equation without the summation:"



$$\sum_{r=5}^n (2r + 4r^2) $$ where $r$ takes values from $5$ to $n$.

Wednesday 20 February 2013

analysis - Solving Limits: Why must I multiply by the Conjugate? $\lim\limits_{n \to \infty} \sqrt{n+1} - \sqrt{n}$



Sorry if the question strikes as you as dumb, but the last time I had math in school was 15 years ago and now I'm taking a course, requiring me to do some math.



So in the course material, the teacher solved this one, $\lim\limits_{n \to \infty} \sqrt{n+1} - \sqrt{n}$, by multiplying with a conjugate, and she got the result $0$.




I tried solving it without looking at her solution first, and what I did was use the limit theorems as follows:

$$\lim_{n \to \infty} (\sqrt{n+1} - \sqrt{n}) = \lim_{n \to \infty} \sqrt{n+1} - \lim_{n \to \infty} \sqrt{n} = \lim_{n \to \infty} \sqrt{n} + \lim_{n \to \infty} \sqrt{1} - \lim_{n \to \infty} \sqrt{n} = \lim_{n \to \infty} \sqrt{n} + 1 - \lim_{n \to \infty} \sqrt{n} = 1$$



Why is that? Why must I multiply by the conjugate? And when should one multiply by the conjugate; are there any specific rules or tips?



I'd be grateful, if you could make your answer as detailed and simple as possible, since as I mentioned before, it's been a while since I had anything to do with math.


Answer



Let's take a step by step look at what you did.



1. $\lim_n (\sqrt{n+1}-\sqrt{n}) = \lim_n \sqrt{n+1}-\lim_n \sqrt n$




This is wrong, since the right hand side is in the indeterminate form $\infty - \infty$. It can be any real number and it can be $\pm\infty$. Thus, there is no sensible way to define it, just like there is no sensible way to define $\frac 00$.



To demonstrate more vividly why this is wrong, let $a$ be any real number. Then $$ a = \lim_n a = \lim_n (n+a-n) = \lim_n (n+a) - \lim_n n = \infty - \infty.$$ If $b$ is another real number, by the same reasoning as above, we get $b = \infty - \infty$. Thus, $a = \infty - \infty = b$, so any two real numbers are equal.



Whoops, we just broke math.



2. $\lim_n \sqrt{n+1} = \lim_n \sqrt n + \lim_n \sqrt 1.$



Ah, this one is not as wrong since contrary to the case $\infty - \infty$, one can actually make sense of $\infty + a = \infty$, for any real number $a$. However $\infty$ is still not a number and you can't do arithmetic with it as usual. For example, $\infty + a = \infty = \infty + b$, for any real $a$ and $b$. Notice that you can't cancel $\infty$ on both sides like you would with some real number $x$ to conclude $a = b$. That would break math once again.




So, if one is careful and knows what they are doing, $\infty + 1 = \infty$ is a fair game. However, in the same line you have $-\lim_n \sqrt n = -\infty$ and you remember what we said about subtracting infinity from infinity: it breaks math.



Thus, your line is still wrong.



Unfortunately, it's likely that it's even more wrong. I'm afraid that you wrote that thinking that $\sqrt{n+1} = \sqrt n + \sqrt 1$. No. Please, no. Nicolas FRANCOIS and I are both having nightmares tonight. Just plug in $n = 8$ and calculate both sides. Doesn't work.



3. $\lim_n \sqrt n + 1 - \lim_n \sqrt n = 1$.



Looks correct, but as I said before, subtracting infinity from infinity breaks math. It would have been correct if it said $\lim_n (\sqrt n -\sqrt n) + 1 = 1$. But it doesn't say that.







TL;DR Please use the rule $\lim_n (a_n+b_n) = \lim_n a_n + \lim_n b_n$ only if it doesn't produce $\infty - \infty$. If $\lim_n a_n = \infty$ and $\lim_n b_n$ is $\infty$ or finite, then $\lim_n (a_n+b_n) = \infty$. If $\lim_n a_n = \infty$ and $\lim_n b_n = -\infty$, then $\lim_n (a_n+b_n)$ can be anything under the heavens.






So, hopefully I've stressed enough that $\infty - \infty$ is a no no. So, how to deal with it?



Observe that $a-b = \frac{(a-b)(a+b)}{(a+b)} = \frac{a^2-b^2}{a+b}$ and so $$\lim_n(\sqrt{n+1}-\sqrt n) = \lim_n\frac{(n+1)-n}{\sqrt{n+1}+\sqrt n} = \lim_n\frac 1{\sqrt{n+1}+\sqrt n} = \left[ \frac 1{\infty + \infty} = \frac 1\infty\right] = 0.$$




This is actually a great strategy whenever you have $\infty - \infty$.


real analysis - Pathological, continuous functions

Today in my introduction to measure theory course, the professor said that often when we think of continuity, what we're actually thinking about is smooth functions. We've studied the Cantor set and its variations, and he said we ought to think of continuous functions like the Cantor-Lebesgue function more often when we think of continuity.



I was wondering: what are other examples of "pathological" yet continuous functions? Functions that really help enforce the idea of continuity as distinct from smoothness, or even just differentiability?

Tuesday 19 February 2013

Can the axiom of choice be explicitly proved in (intuitionistic) predicate logic, or is something like intuitionistic type theory necessary?



In intuitionistic mathematics, an axiom of choice of the form



$$
\forall x \exists y R(x,y) \rightarrow \exists f \forall x R(x, fx)

$$



is valid by the meaning of the quantifiers (comp. Dummett, Elements of Intuitionism, 2000).



In intuitionistic type theory, it is possible to actually prove the axiom of choice. For example,



$$
(\lambda z)((\lambda x) p_{left}(z(x)),(\lambda x)p_{right}(z(x)))
$$




is a proof-object for an axiom of choice of the form



$$
(\Pi x:A)(\Sigma y:B)R \rightarrow (\Sigma f :(\Pi x:A)B)(\Pi x:A)R(f(x)/y),
$$

where $p_{left}$ and $p_{right}$ are the projection-functions (comp. Martin-Löf, "Constructive Mathematics and Computer Programming", 1982).



What I am interested in is the precise relationship between predicate logic and type theory. Can the axiom of choice be proved in the former, or is the more expressive language of type theory necessary, which can refer to proof-objects and constructions directly? According to Dummett, in intuitionistic logic the axiom of choice is true due to the constructivist meaning of the quantifiers. But this does not correspond to a formal proof within the system; it is a meta-theoretical result.



Now, my feeling is the following: predicate logic cannot properly represent constructions or proof-objects. But in Martin-Löf's proof of the choice axiom, proof-objects are directly operated on. Therefore, while in intuitionistic predicate logic, the axiom of choice is an axiom properly so-called, i.e. an unprovable principle that we accept due to our informal understanding, it becomes provable in the more expressive system of intuitionistic type theory. Am I correct here, or is this a misunderstanding?



Answer



I think that you're basically correct. The key difference here is the difference in languages. In the very simplest kinds of intuitionistic higher-order predicate logic, we don't have the ability to create new functions "internally to the logic" by using lambda terms.



In slightly more complicated versions, there are term-forming operators that allow for lambda abstraction, but these systems still do not prove the axiom of choice. This is because the system doesn't see $(\forall x)(\exists y)R(x,y)$ as expressing a function.



In these settings, we can sometimes prove the axiom of choice as a kind of metatheorem, that if $(\forall x)(\exists y)R(x,y)$ is provable then $(\forall x)R(x,tx)$ is provable for some term $t$. This kind of metatheorem gives a precise interpretation of comments like the one by Dummett. It also shows how these systems align with the BHK interpretation of constructive reasoning, although the alignment is only visible in the metatheory.



Intuitionistic type theory is quite different from predicate logic, not least because the quantifiers $\forall$ and $\exists$ are often ignored and the internal type operations $\Sigma$ and $\Pi$ are studied instead. In this way, type-theoretic systems mix the metatheory and object theory more than predicate logic does. (It takes some work, actually, to avoid inconsistency, which strong type theories are vulnerable to via Curry's paradox and similar issues.)



Systems like Martin-Löf type theory manage to internalize enough of the BHK interpretation to prove analogues of the axiom of choice (again, with quantifiers replaced by type constructors) without internalizing enough to become inconsistent.



soft question - Mathematical Telescoping



Bill Dubuque has answered several questions by indicating that some form of "telescoping" is taking place. See this post and the links provided by Bill for more information.



I have never heard of "telescoping" until I read a few answers on here by Bill which refer to this notion. It seems fairly straightforward, basically you expand some expression using basic arithmetic, there is a minor miracle and lots of terms cancel out in a systematic way, and we are then able to solve the problem.



I suppose "telescoping" in this sense was something I always took for granted, and considered a low level "trick" to keep in my back pocket. However, considering the importance Bill seems to attach to this notion of telescoping, and considering that I have a great deal of respect for Bill based on the post's by him I have read, I was wondering if I'm not missing something about telescoping. There is no wiki article on the subject, and a Google search directs me to Bill's answers on SE!




Therefore I would like to ask:



1) What unintuitive results can I achieve with telescoping?



2) Is there a good reference which only discusses telescoping and applications, or is this concept too narrow for anyone to write a paper/book like this?



3) More trivially, am I missing something about what telescoping actually means? If not, then why is this called telescoping, because I don't see what this has to do with a telescope?


Answer



1, 2) Telescoping is one of the ideas behind modern algorithms to automatically prove hypergeometric identities. These algorithms allow you, for example, to automatically prove binomial coefficient identities. The standard reference here is Petkovsek, Wilf, and Zeilberger's A=B.




3) The name comes from the process of collapsing a telescope, which is analogous to the collapsing of a telescoping sum.



Philosophically telescoping is the same as "discrete integration": telescoping a sum $\sum f(n)$ is the same as finding $g(n)$ such that $f(n) = g(n+1) - g(n)$. In that sense it is part of the theory of finite differences, although people probably don't call it "telescoping" in this context. The context in which I hear the term "telescoping" being used is high school math competitions. It's one of those basic ideas that everyone has in the back of their head, I suppose. It's elementary and effective when it applies, but usually there are more sophisticated methods available.



Edit: Some specific examples. The ur-example of a telescoping sum is probably



$$\sum_{k=1}^n \frac{1}{k(k+1)} = \sum_{k=1}^{n} \left( \frac{1}{k} - \frac{1}{k+1} \right) = 1 - \frac{1}{n+1}$$



and many people have seen this application, but probably far fewer have seen its generalization:




$$\sum_{k=1}^n \frac{1}{k(k+1)...(k+r)} = \frac{1}{r} \sum_{k=1}^n \left( \frac{1}{k(k+1)...(k+r-1)} - \frac{1}{(k+1)...(k+r)} \right) = \frac{1}{r} \left( \frac{1}{r!} - \frac{n!}{(n+r)!} \right).$$



The other classic example I remember from my competition days is



$$\sum_{k=1}^n \frac{k}{k^4 + k^2 + 1} = \frac{1}{2} \sum_{k=1}^n \left( \frac{1}{k^2 - k + 1} - \frac{1}{(k+1)^2 - (k+1) + 1} \right) = \frac{1}{2} \left( 1 - \frac{1}{n^2 + n + 1} \right)$$



although I have to admit I always found it a little contrived. Finally, telescoping was put to good use to solve this math.SE question I posed.
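
All three identities are easy to confirm with exact rational arithmetic (a sketch using only the Python standard library):

from fractions import Fraction
from math import factorial, prod

n, r = 20, 4
assert sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1)) == 1 - Fraction(1, n + 1)
assert sum(Fraction(1, prod(range(k, k + r + 1))) for k in range(1, n + 1)) == \
    Fraction(1, r) * (Fraction(1, factorial(r)) - Fraction(factorial(n), factorial(n + r)))
assert sum(Fraction(k, k**4 + k**2 + 1) for k in range(1, n + 1)) == \
    Fraction(1, 2) * (1 - Fraction(1, n**2 + n + 1))
print("all three telescoping identities verified for n =", n, "and r =", r)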


combinatorics - Combinatorial proof problem

I'm having trouble finding a combinatorial argument for



$\sum_{k=m}^n {k \choose m} = {n+1 \choose m+1}$



The right side is just choosing m+1 things from a set of n+1 things, but I can't see any way to relate this to the left side, where you're choosing m from m things, m from m+1 things, m from m+2 things and so on...

elementary set theory - How to prove $Acap B subseteq overline{A triangle B}$




An exercise asks me to prove the following:
$$A\cap B \subseteq \overline{A \triangle B}$$



Unlike most other exercises, this one involves a symmetric difference, with which I am unfamiliar in this kind of proof. There was little I could do; here it is:



The statement can be rewritten as the following:
$$A\cap B \subseteq \overline{(A-B)\cap (B-A)}$$
$$A\cap B \subseteq \overline{(A-B)}\cap \overline{(B-A)}$$
$$A\cap B \subseteq (\overline{A} - \overline{B}) \cap (\overline{B} - \overline{A})$$

I rewrote it because the symmetric difference doesn't seem "primitive" enough for me to operate with. Then my proof begins:
$$x \in A \cap B \implies x\in A \land x \in B$$
$$\implies x \in (A \cap B) \land x \in (B \cap A)$$
$$\implies (x \in A \land x \in B) \land (x \in B \land x \in A)$$
And then I got stuck. I don't see how $(x \in A \land x \in B) \land (x \in B \land x \in A)$ can become what I need at all.


Answer



You’ve some serious errors in your first calculations: it is not true in general that $$\overline{(A\setminus B)\cap(B\setminus A)}=\overline{A\setminus B}\cap\overline{B\setminus A}$$ or that $$\overline{A\setminus B}\cap\overline{B\setminus A}=(\overline A\setminus\overline B)\cap(\overline B\setminus\overline A)\;.$$ In fact,



$$\overline{(A\setminus B)\cap(B\setminus A)}=\overline{A\setminus B}\cup\overline{B\setminus A}$$




by one of the de Morgan laws, and $\overline{A\setminus B}=\overline A\cup B$, also by de Morgan.



Here’s an approach that does work.



Suppose that $x\in A\cap B$; you want to show that $x$ is not in $A\triangle B$. Judging by the work in your question, your definition of $A\triangle B$ is $(A\setminus B)\cup(B\setminus A)$, so you want to show that



$$x\notin(A\setminus B)\cup(B\setminus A)\;.$$



To do this, you must show that $x\notin A\setminus B$ and $x\notin B\setminus A$. But that’s easy: if $x$ were in $A\setminus B$, then by definition we’d have $x\in A$, which is fine, and $x\notin B$, which is not fine: since $x\in A\cap B$, we know that $x$ is in $B$. Thus, $x$ cannot belong to $A\setminus B$: $x\notin A\setminus B$. A virtually identical argument shows that $x\notin B\setminus A$, and hence that $x\notin(A\setminus B)\cup(B\setminus A)$.




Another approach is to show that your definition of $A\triangle B$ is equivalent to another common definition: $$A\triangle B=(A\cup B)\setminus(A\cap B)\;.$$ That makes it very obvious that nothing can belong both to $A\triangle B$ and $A\cap B$.


Monday 18 February 2013

An expression for the definite integral $I_n = \int_0^{\pi/4}{\tan^n{x}\,\mathrm{d}x}$



I have the following definite integral:



$$I_n = \int_0^{\pi/4}{\tan^n{x}\,\mathrm{d}x}\quad ,\forall n \in \mathbb{N}$$




  1. Calculate $I_0$ and $I_1$.


  2. Calculate $I_n + I_{n+2}$.

  3. Can we deduce $I_n$?






Here is my solution:



$$I_0 = \int_0^{\pi/4}{dx}=\pi/4$$
$$I_1 = \int_0^{\pi/4}{\tan{x}\,dx}=\int_0^{\pi/4}{\dfrac{\sin{x}}{\cos{x}}\,dx}$$ we put $u = \cos{x} \rightarrow du = -\sin{x}dx $




I found that: $I_1 = \ln{\sqrt{2}} $






for the second question:



$$I_n+I_{n+2} =\int_0^{\pi/4}{\tan^n{x}\left( 1+\tan^2{x}\right)\,dx} $$ we put $u = \tan{x} \rightarrow du = (1+\tan^2{x})dx$, that leads to:
$$I_n+I_{n+2} = \dfrac{1}{n+1}$$




My question is: now, can we deduce the expression of $I_n$? I think it will be a recurrence relation. Am I right?



Thank you


Answer



You are almost there; just observe that $$I_{n+2} = \frac{1}{n+1} - I_n,$$ so that $$I_n = \frac{1}{n-1} - I_{n-2} = \frac{1}{n-1} - \frac{1}{n-3} + I_{n-4} = \ldots,$$ which we can split into even and odd cases: in the even case, you would obtain $$I_n = \frac{1}{n-1} - \frac{1}{n-3} + \cdots + (-1)^{n/2} I_0,$$ and in the odd case, $$I_n = \frac{1}{n-1} - \frac{1}{n-3} + \cdots + (-1)^{(n-1)/2} I_1.$$


Sunday 17 February 2013

Complex analysis ~ Binomial theorem



Given the identity





$ \binom {2n} {n} = \frac{1}{2\pi i} \int_{C_r} \frac{(1+z)^{2n}}{z^{n+1}}dz,$




with $C_r$ the unit circle, prove that $\forall n \in \mathbb{N}$: $\binom {2n} {n} \leq 4 ^n$.



I've tried expanding the integral with the binomial theorem, but unfortunately, that doesn't seem to get me anywhere.



I'm really stuck, so all help would be dearly appreciated.




EDIT/FOLLOW-UP: I was wondering: I think the identity, as well as the result, still holds when $C_r$ is an arbitrarily chosen circle. Can anyone confirm this, and if so, explain to me why?


Answer



$\def\abs#1{\left|#1\right|}$Note, that for $z \in C_r$, we have
$$ \abs{\frac{(1+z)^{2n}}{z^{n+1 }}} = \frac{\abs{1+z}^{2n}}{\abs z^{n+1}} = \abs{1+z}^{2n} \le 2^{2n} = 4^n $$
hence
\begin{align*}
\binom{2n}n &= \abs{\frac 1{2\pi i} \int_{C_r} \frac{(1+z)^{2n}}{z^{n+1}}\, dz}\\
&\le \frac 1{2\pi} \int_{C_r} \abs{ \frac{(1+z)^{2n}}{z^{n+1}}}\, \abs{dz}\\
&\le \frac 1{2\pi} 4^{n} \cdot 2\pi\\

&= 4^n.
\end{align*}


Saturday 16 February 2013

trigonometry - Sum of $\sin x+\sin 3x+\sin 5x+\cdots+\sin(2n-1)x$

Options are





  1. $\frac{n}{2}\cos x - \frac{\sin(nx)}{2\sin x}\cos(n+2)x$

  2. $\frac{n}{2}\sin x - \frac{1}{2}\sin(nx)$

  3. $\frac{n}{2}\cos x - \cos(n+2)x$

  4. $\sin x + \sin(nx)$

elementary number theory - Prove that $(ma, mb) = |m|(a, b) $ [GCD Distributive Law]




I'm trying to prove that $(ma, mb) = |m|(a, b)$, where $(ma, mb)$ is the greatest common divisor of $ma$ and $mb$.



My thoughts:



If $(ma, mb) = d$, then $d \mid ma$ and $d \mid mb$ → $d \mid (max + mby)$ → $d \mid m(ax+by)$. This implies that $d \mid m$ or $d \mid (ax+by)$. This is the same as $d \mid m$, or $d \mid a$ and $d \mid b$; so $d \mid m$ or $d \mid (a,b)$. This is the same as $d \mid |m|$ or $d \mid (a,b)$, so $d \mid |m|(a,b)$.



I don't know what to do.



Thanks



Answer



Below are sketches of four proofs of the GCD Distributive Law $\rm\:(ax,bx) = (a,b)x\:$ by various approaches: Bezout's identity, the universal gcd property, unique factorization, and induction.






First we show that the gcd distributive law follows immediately from the fact that, by Bezout, the gcd may be specified by linear equations. Distributivity follows because such linear equations are preserved by scalings. Namely, for naturals $\rm\:a,b,c,x \ne 0$



$\rm\qquad\qquad \phantom{ \iff }\ \ \ \:\! c = (a,b) $



$\rm\qquad\qquad \iff\ \: c\:\ |\ \:a,\:b\ \ \ \ \ \ \&\ \ \ \ c\ =\ na\: +\: kb,\ \ \ $ some $\rm\:n,k\in \mathbb Z$




$\rm\qquad\qquad \iff\ cx\ |\ ax,bx\ \ \ \&\ \ \ cx = nax + kbx,\ \,$ some $\rm\:n,k\in \mathbb Z$



$\rm\qquad\qquad { \iff }\ \ cx = (ax,bx) $



The reader familiar with ideals will note that these equivalences are captured more concisely in the distributive law for ideal multiplication $\rm\:(a,b)(x) = (ax,bx),\:$ when interpreted in a PID or Bezout domain, where the ideal $\rm\:(a,b) = (c)\iff c = gcd(a,b)$






Alternatively, more generally, in any integral domain $\rm\:D\:$ we have




Theorem $\rm\ \ (a,b)\ \approx\ (ax,bx)/x\ \ $ if $\rm\ (ax,bx)\ $ exists in $\rm\:D\ \,$ [$c\approx d := c,d\,$ associate: $\,c\mid d\mid c$]



Proof $\rm\quad c\ |\ a,b \iff cx\ |\ ax,bx \iff cx\ |\ (ax,bx) \iff c\ |\ (ax,bx)/x$



The above proof uses the universal definitions of GCD and LCM, which often serve to simplify proofs, e.g. see this proof of the GCD * LCM law.






Alternatively, comparing powers of primes in unique factorizations, it reduces to the following

$$\begin{eqnarray} \min(a+x,\,b+x) &\,=\,& \min(a,b) + x\\
\rm expt\ analog\ of\ \ \ \gcd(a \,* x,\,b \,* x)&=&\rm \gcd(a,b)\,*x\end{eqnarray}\qquad\qquad\ \ $$



The proof is precisely the same as the prior proof, replacing gcd by min, and divides by $\,\le,\,$ and using the universal property of min instead of that of gcd, i.e.



$$\begin{eqnarray} {\rm employing}\quad\ c\le a,b&\iff& c\le \min(a,b)\\
\rm the\ analog\ of\quad\ c\ \, |\, \ a,b&\iff&\rm c\ \,|\,\ \gcd(a,b) \end{eqnarray}$$



Then the above proof translates as below, $\ $ with $\,\ m(x,y) := {\rm min}(x,y)$




$c \le a,b \!\iff\!c\!+\!x \le a\!+\!x,b\!+\!x\!$ $\!\iff\! c\!+\!x \le m(a\!+\!x,b\!+\!x)\!$ $\!\iff\!\! c \le m(a\!+\!x,b\!+\!x)\!-\!x$






Theorem $\ \ $ If $\ a,b,x\ $ are positive naturals then $\ (ax,bx) = (a,b)x $



Proof $\ $ We induct on $\color{#0a0}{{\rm size}:= a\!+b}.\,$ If $\,a=b,\,$ $\,(ax,bx) = (ax,ax) = (a)x = (a,b)x\,$ so it is true. $ $ Else $\,a\neq b;\,$ wlog, by symmetry, $\,a > b\,$ so $\,(ax,bx) = (ax\!-\!bx,bx) = \color{}{((a\!-\!b)x,bx)}\,$ with smaller $\rm\color{#0a0}{size}$ $\,(a\!-\!b) + b = a < \color{#0a0}{a\!+b},\,$ therefore $\,((a\!-\!b)x,bx)\!\underset{\rm induct}=\! (a\!-\!b,b)x = (a,b)x$.
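
A quick exhaustive check of the law on a sample range (a sketch using only the Python standard library; math.gcd returns nonnegative values):

from math import gcd

for m in range(-10, 11):
    for a in range(1, 30):
        for b in range(1, 30):
            assert gcd(m * a, m * b) == abs(m) * gcd(a, b)
print("(ma, mb) = |m|(a, b) verified on the sample range")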


contour integration - Example of Improper integral in complex analysis



I'm doing this example of a Cauchy principal value:



$$ \int_0^\infty \frac{dx}{x^3+1}=\frac{2\pi}{3\sqrt{3}} $$



After some steps I got




$$ \int_{[0,R]+C_R} \frac{dz}{z^3+1}=2\pi i(B_1)\text{
where }B_1= \operatorname{Res}_{z=z_0}\frac{1}{z^3+1} $$



also I got that $\displaystyle \bigg|\int_{C_R} \frac{dz}{z^3+1}\bigg|\to 0 \text{ as } R \to \infty$



There is a problem in finding the residue at $z_0=\displaystyle \frac{1+\sqrt{3}i}{2}$.



Here I am considering the following contour:



[figure: the wedge contour]




Please help me. Thanks in advance.


Answer



The contour is good. Two things though:



1) You have to consider the integral along the angled line of the wedge contour. The angle of the contour was chosen to preserve the integrand.

2) Write $z=e^{i 2 \pi/3} x$ and get that the contour integral is



$$\left(1-e^{i 2 \pi/3}\right) \int_0^{\infty} \frac{dx}{x^3+1} = i 2 \pi \frac{1}{3 e^{i 2 \pi/3}}$$



The term on the right is the residue at the pole $z=e^{i\pi/3}$ times $i 2\pi$. I used the fact that, if $f(z)=a(z)/b(z)$, then the residue of a simple pole $z_k$ of $f$ is $a(z_k)/b'(z_k)$.




Note that $e^{i 2 \pi/3}-e^{i 4 \pi/3}=i \sqrt{3}$. The result follows.
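
A direct numerical check of the result (a sketch assuming SciPy is available):

from scipy.integrate import quad
import numpy as np

I, err = quad(lambda x: 1 / (x**3 + 1), 0, np.inf)
print(I, 2 * np.pi / (3 * np.sqrt(3)))   # both ≈ 1.2092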


Friday 15 February 2013

functions - Given $e=\sqrt[y]{x}$, isolate $y$



I have a problem trying to create a function in a programming language that does not support any functions other than those of basic arithmetic (addition, subtraction, exponentiation, division...). This function should replicate the natural logarithm, i.e. the $\ln(x)$ function. Currently, I have $e=\sqrt[y]{x}$ (from $e^y=x$). As of now, I have not found any way to isolate $y$ in that equation without also using the $\log()$ function. Is this possible to do without using said function? Once I have the $\ln(x)$ function, I can construct the $\log_{base}(x)$ function as $$\log_{base}(x)=\dfrac{\ln(x)}{\ln(base)}.$$ And, with Taylor series, I can create dynamic functions for $\sin,\cos,$ and $\tan$ as well as their inverses. From there, I've got a more or less functional programming space.


Answer



Obviously, since $y=\ln(x)$ is the solution, this is equivalent to writing $\log$ in terms of "elementary" functions. This can't be done - a simple argument to this fact would be that there is no way to, in terms of the map $x\mapsto e^x$ and rational functions, create a function tending to infinity, but having vanishing derivative. (A more sophisticated argument could proceed by noting that $\ln(x)$ is as "well-defined" as the other operations, since $\ln(1)$ could be argued to be $2\pi i$ since $e^{2\pi i}=1$ - then tossing the word "holomorphic" around a few times, we'd get a result).



However, if you're okay with Taylor series, then the following may be useful:

$$\ln(x+1)=x-\frac{x^2}2+\frac{x^3}3-\frac{x^4}4+\frac{x^5}5-\cdots$$
and, if you want to limit the number of terms you must compute, you could precompute $\alpha=\sqrt{e}$ (or a higher root) together with $\ln(\alpha)$. Before computing $\ln(x)$, write $x=y\alpha^n$ for an integer $n$ chosen so that $y$ lies in $[\alpha^{-\frac{1}2},\sqrt{\alpha})$; then $\ln(x)=\ln(y\alpha^n)=\ln(y)+n\ln(\alpha)$, and if you use the Taylor series for $\ln(y)$ near $y=1$, you should be able to get reasonable accuracy on any input, since you're only using the Taylor series on a relatively small interval.
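Here is a rough sketch of that recipe in Python (an illustration only, assuming nothing beyond arithmetic operators; the constants $\alpha=\sqrt e$ and $\ln\alpha=1/2$ are precomputed):

    # Sketch: ln(x) from basic arithmetic, via range reduction + Taylor series.
    ALPHA = 1.6487212707001282   # sqrt(e), precomputed
    LN_ALPHA = 0.5               # ln(sqrt(e)), precomputed

    def ln(x, terms=30):
        if x <= 0:
            raise ValueError("ln requires x > 0")
        n = 0
        while x > ALPHA ** 0.5:      # reduce until x < sqrt(ALPHA)
            x /= ALPHA
            n += 1
        while x < ALPHA ** -0.5:     # reduce until x >= ALPHA**(-1/2)
            x *= ALPHA
            n -= 1
        t = x - 1                    # now |t| is small
        total, power = 0.0, t
        for k in range(1, terms + 1):   # ln(1+t) = t - t^2/2 + t^3/3 - ...
            total += power / k if k % 2 == 1 else -power / k
            power *= t
        return total + n * LN_ALPHA

    print(ln(10.0))   # ~2.302585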



Another method would be to abuse the identity $\ln(x^a)=a\ln(x)$ and to use a linear approximation of $\ln$. In particular, if you choose some large integer $n$, then you could compute
$$\ln(x)=n\ln(\sqrt[n]x)\approx n(\sqrt[n]x-1).$$
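A sketch of that approximation (illustrative only; it leans on the exponentiation operator for the $n$-th root):

    # ln(x) ~ n * (x**(1/n) - 1); accuracy improves as n grows.
    def ln_linear(x, n=2**20):
        return n * (x ** (1.0 / n) - 1.0)

    print(ln_linear(10.0))  # ~2.30259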



You could also use Newton's method or something similar to numerically solve the equation - you could use one of the above methods to approximate a good place to start, and then iterate Newton's method until you reach acceptable accuracy.
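A sketch of the Newton route (again just an illustration): solve $e^y=x$ for $y$, computing $e^y$ from its Taylor series so that only basic arithmetic is used; seeding with the linear approximation above would cut the iteration count.

    def exp_taylor(y, terms=40):
        # e**y via its Taylor series: sum of y^k / k!
        total, term = 0.0, 1.0
        for k in range(terms):
            total += term
            term *= y / (k + 1)
        return total

    def ln_newton(x, iters=50):
        y = 0.0                                # crude starting guess
        for _ in range(iters):
            y = y - 1.0 + x * exp_taylor(-y)   # Newton step for f(y) = e^y - x
        return y

    print(ln_newton(10.0))  # ~2.302585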


abstract algebra - polynomial remainder theorem proof, is it legit?



"In algebra, the polynomial remainder theorem or little Bézout's theorem is an application of Euclidean division of polynomials. It states that the remainder of the division of a polynomial f(x) by a linear polynomial x-a is equal to f(a). In particular, x-a is a divisor of f(x) if and only if f(a)=0." -wikipedia



The polynomial remainder theorem





Let $p(x)$ be any polynomial, and $d(x)=x-c$ for any $c.$ Then the remainder of the division $$R(\frac{p(x)}{d(x)})=p(c).$$




So I tried to prove this theorem before looking for the proof on the web.
Looking through the web, I only found one traditional way to prove this, so I was wondering if my proof using polynomial division below is legitimate:



Let $p(x)= \sum_{i=0}^n a_i x^i ,\ d(x)=x-c.$




Applying long division we have:



$$\begin{array}{l}
\phantom{x-c\ \big)\ \ }a_n x^{n-1}+(a_{n-1}+ca_n)x^{n-2}+ \cdots \\
x-c\ \big)\ \overline{\,a_n x^n+a_{n-1} x^{n-1} +\cdots +a_0\,} \\
\phantom{x-c\ \big)\ \ }\underline{a_n x^n-ca_nx^{n-1}} \\
\phantom{x-c\ \big)\ \ \quad}(a_{n-1} + ca_n)x^{n-1} + a_{n-2} x^{n-2} \\
\phantom{x-c\ \big)\ \ \quad}\underline{(a_{n-1} + ca_n)x^{n-1}-c(a_{n-1} + ca_n)x^{n-2}} \\
\phantom{x-c\ \big)\ \ \quad\quad}(a_{n-2}+ca_{n-1}+c^2a_n)x^{n-2}+\cdots \\
\phantom{x-c\ \big)\ \ \quad\quad\quad}\vdots \\
\phantom{x-c\ \big)\ \ \quad\quad}a_0+a_1c+a_2c^2+ \cdots + a_nc^n = p(c)
\end{array}$$



Thus the remainder of the division is equal to $p(c)$ as the theorem states.


Answer



Well, for starters, you would have to assume you have proved that the polynomial long division algorithm itself is correct (not just your use of it). Assuming that, your proof looks correct, but it would not be considered fully rigorous by many people since it is so visual.



Here is the proof I prefer:




Using the division algorithm for polynomials over a field:



$$\text{If } f,g \in F[x] \text{ and } g \neq 0, \text{ then there exists unique } q,r \in F[x] \text{ such that } f = qg+r \\ \text{ and deg } r < \text{ deg }g \text{ or } r = 0 $$






Set $g = x-c$ and note that $\deg g = 1$. Choose $q,r$ such that $f = qg+r$ and $r = 0$ or $\deg r < \deg g = 1$. Thus we must have either $r = 0$ or $\deg r = 0$; in both cases, $r$ is just a constant.




Now use $f(x) = q(x)g(x) + r(x) = q(x)(x-c) + r(x) \implies f(c) = q(c) \times 0 + r(c) = r(c)$



In particular, $f(c) = r(c) = r \in F$ and hence $f(x) = q(x)(x-c) + f(c)$ and you're done.
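As a quick sanity check of the theorem (an illustration, not part of the proof), synthetic division (Horner's scheme) performs exactly the long division above and exposes the remainder:

    def divide_by_linear(coeffs, c):
        """Divide p(x) by (x - c); coeffs = [a_n, ..., a_1, a_0], highest first.
        Returns (quotient coefficients, remainder)."""
        acc, out = 0, []
        for a in coeffs:
            acc = acc * c + a     # Horner step: fold in the next coefficient
            out.append(acc)
        return out[:-1], out[-1]

    # p(x) = x^3 - 2x + 1 divided by (x - 2): remainder equals p(2) = 5.
    q, r = divide_by_linear([1, 0, -2, 1], 2)
    print(q, r)   # [1, 2, 2] 5, i.e. quotient x^2 + 2x + 2, remainder 5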


trigonometry - Limit of $\frac{1-\cos x}{\sin x}$



I don't understand the rewriting that's being done in this limit:



$$\lim_{x\to0} \frac{1−\cos x}{\sin x} = \lim_{x\to0} \frac{\sin x}{\cos x} $$




Why doesn't this simplify to $\frac{\sin x}{\sin x}$?


Answer



It doesn't; the rewrite is not an algebraic simplification. It comes from l'Hospital's rule, which differentiates the numerator and the denominator separately:
$$\lim_{x\to0} \frac{1-\cos x}{\sin x} = \lim_{x\to0} \frac{\sin x}{\cos x} = \tan 0 = 0$$
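Alternatively, without l'Hospital, multiply by the conjugate:

$$\frac{1-\cos x}{\sin x} = \frac{(1-\cos x)(1+\cos x)}{\sin x\,(1+\cos x)} = \frac{\sin^2 x}{\sin x\,(1+\cos x)} = \frac{\sin x}{1+\cos x} \xrightarrow{\ x\to0\ } \frac{0}{2}=0$$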


Thursday 14 February 2013

calculus - How to prove absolute summability of sinc function?



We know that $$\int_0^\infty \left(\frac{\sin x}{x}\right)^2 dx=\int_0^\infty \frac{\sin x}{x} dx=\frac{\pi}{2}.$$



How do I show that $$\int_0^\infty \left\vert\frac{\sin x}{x}\right\vert dx$$ converges?


Answer



It doesn't. Using the convexity of $1/x$ (Jensen's inequality with weight $\left\vert\sin x\right\vert$, which is symmetric about the midpoint $(k+\tfrac12)\pi$ of each interval),




$$\int_0^\infty \left\vert\frac{\sin x}{x}\right\vert \,\mathrm{d}x=\sum_{k=0}^\infty\int_{k\pi}^{(k+1)\pi}\left\vert\frac{\sin x}{x}\right\vert \,\mathrm{d}x>\sum_{k=0}^\infty\int_{k\pi}^{(k+1)\pi}\frac{\left\vert\sin x\right\vert}{(k+1/2)\pi} \,\mathrm{d}x=\frac{2}{\pi}\sum_{k=0}^\infty\frac{1}{k+1/2}\;,$$



which diverges since the harmonic series diverges.
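The only computation used implicitly there is the area under one arch of the sine curve,

$$\int_{k\pi}^{(k+1)\pi}\left\vert\sin x\right\vert \,\mathrm{d}x=\int_0^{\pi}\sin x \,\mathrm{d}x=2\;,$$

which is where the factor $\frac{2}{\pi}$ comes from.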


Formation of new Quadratic Equation by changing the roots of a given Quadratic Equation.

If $\alpha$ and $\beta$ are the root of equation $ax^2+bx+c=0$.

Prove that the equation whose roots are $\alpha^n$ and $\beta^n$ is $$a(x^\frac 1n)^2+b(x^\frac 1n)+c=0$$



I had already found the equation whose roots are $\alpha^n$ and $\beta^n$ by using $$x^2-(\text{sum of roots})x+\text{product of roots}=0\implies x^2-(\alpha^n+\beta^n)x+\alpha^n\beta^n=0$$

Is this equation the same as what is given to prove?

limits - Why is this sum wrong?

$$\lim_{n\rightarrow\infty}\sum_{r=1}^n \sin\left(\frac{r}{n} \right)=\lim_{n\rightarrow\infty} \left[\sin\frac{1}{n}+\sin\frac{2}{n}+\cdots+\sin(1)\right]=0+0+\cdots+\sin(1)=1$$



Could anybody explain why this is wrong? I've tried to see why this doesn't work but I don't see why not. Thank you.

Wednesday 13 February 2013

Algebra: What allows us to do the same thing to both sides of an equation?

I understand that the expressions on both sides of an equal sign are the same entity, and I know that when you modify one side, the other must be changed because it is referring to the same thing. What I do not understand is why making a new equation (adding or taking away from an expression) allows one to know what an unknown represents. What about equations lets us do this?

geometry - Perimeter of an equilateral triangle drawn with respect to a square.



Here's a question that was asked in the International Kangaroo Math Contest 2016. The question goes like this:




If the perimeter of the square in the figure is 4 units, then what is the perimeter of the equilateral triangle?







What I did:



Well, I tried something very naïve: I supposed that the equilateral triangle cuts the top side of the square at its midpoint, which gives the following result.





By Pythagoras' Theorem,

$$\overline{AB} = \overline{MC} = \sqrt{\overline{BC}^2 + \overline{BM}^2} = \sqrt{\left(1\right)^2 + \left(\frac12\right)^2} = \frac{\sqrt5}{2}$$



So perimeter of the triangle is:
$$\begin{align}
P&=\overline{AF} +\overline{FM}+\overline{MC}+\overline{CD}+\overline{DE}+\overline{EA}\\
&= \frac12+\frac12+\frac{\sqrt5}2+1+\frac12+\frac{\sqrt5}2\\
&= \frac52+\sqrt5
\end{align}$$



However this is not the correct answer and I know that the problem is with the supposition that $M$ is the midpoint of $\overline{AB}$. So what is the correct method and answer?




Thanks for the attention.


Answer



We only need to know that $\angle ABC=30°$; the rest is straightforward.

[Figure: construction showing the $30°$ angle and the resulting side lengths.]


Tuesday 12 February 2013

linear algebra - Change in eigenvalues if row and column added to highly symmetric matrix



I have a symmetric matrix like the following:$$\begin{bmatrix}a&a&a&a\\a&b&b&b\\a&b&b&b\\a&b&b&b\end{bmatrix}$$It's a symmetric real matrix with only 3 unique eigenvalues. Given its highly symmetric nature, I was wondering how much the eigenvalues would change if I added another row and column, keeping the symmetric property intact; specifically, adding $[a, b, b, b, b]$ as a column and a row at the end.



Is there any bound for the change in eigenvalues given these sort of highly symmetric matrices?


Answer



Let $\mathbf{1}$ denote the all-ones column vector of length $n$, and $I$ and $J$ the identity and all-ones matrices of order $n$, respectively. $\newcommand{\one}{\mathbf 1}$




Theorem
Let $M$ be the $(n + 1) \times (n + 1)$ matrix of the form
$\begin{bmatrix}a & a\one^T\\ a\one & bJ\end{bmatrix}$, where $a \ne 0$ and $b$ are distinct real numbers. Then the eigenvalues of $M$ are:




  1. $0$ with multiplicity $n - 1$.

  2. The two roots of the equation $\lambda^2 - (a + nb)\lambda - na(a - b) = 0$, each with multiplicity $1$.



Proof. Since $M$ is symmetric and has rank $2$, i.e., nullity $n - 1$, it has $0$ as an eigenvalue with multiplicity $n - 1$.




Now, let $\lambda$ be a root of
\begin{align}
\lambda^2 - (a + nb)\lambda - na(a - b) = 0 \tag{1}\label{eq:lambda}
\end{align}

and define the vector $x = \begin{bmatrix}\lambda - nb \\ a \one\end{bmatrix}$ of length $n + 1$. Then



\begin{align*}
Mx & = \begin{bmatrix}
(\lambda - nb)a + a^2 \one^T \one\\
(\lambda - nb)a \one + ab J \one
\end{bmatrix}\\
&= \begin{bmatrix}
\lambda a + na(a - b)\\
\lambda a \one
\end{bmatrix}
\end{align*}

where the last step follows from $\one^T \one = n$ and $J \one = n \one$. Now observe that on rearranging \eqref{eq:lambda}, we get $\lambda(\lambda - nb) = \lambda a + na(a - b)$, which shows that $Mx = \lambda x$. Thus, $x$ is an eigenvector of $M$ corresponding to the eigenvalue $\lambda$, for each root $\lambda$ of \eqref{eq:lambda}. $\quad\square$
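A quick numerical check of the theorem (a sketch with arbitrary test values, not part of the proof):

    import numpy as np

    n, a, b = 4, 2.0, 3.0
    one = np.ones((n, 1))
    M = np.block([[np.array([[a]]), a * one.T],
                  [a * one,         b * np.ones((n, n))]])

    eigs = np.linalg.eigvalsh(M)
    # Predicted: 0 with multiplicity n-1, plus the two roots of
    # lambda^2 - (a + n*b)*lambda - n*a*(a - b) = 0.
    roots = np.roots([1, -(a + n * b), -n * a * (a - b)])
    print(np.sort(eigs))    # three ~0 eigenvalues and the two roots
    print(np.sort(roots))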


combinatorics - Number of solutions of $x_1 + 2x_2 + 3x_3 + cdots + n x_n = p$


Given a non-negative integer $p$. What is the number of solutions of $x_1+2x_2+3x_3 + \cdots + nx_n = p$, where the $x_i$'s are non-negative integers.




Can we answer this by using number of solutions of $x_1+x_2+x_3 + \cdots + x_m = q$ for any $m,q$?

trigonometry - Sum of powers of primitive root of unity- Trig Proof



I'm trying to prove that if $z=\operatorname{cis}(2\pi/n) = \cos(2\pi/n) + i\sin(2\pi/n)$, that is, $z$ is a primitive $n$-th root of unity, for any integer $n\geq 2$, $1+z+z^2+\cdots+z^{n-1}=0$. I've already come across a nice and concise proof here, and that same link also has a comment pointing out that it's just a geometric sum which can be expressed as $\dfrac{1-\operatorname{cis}^n(2\pi/n)}{1-\operatorname{cis}(2\pi/n)}$ which is just $0$ in the numerator. However, I was wondering if I could do it just using trig functions. It's an inefficient way of proving it, but I was fixated on this approach for so long I was wondering if someone knew how to do it.



Proving that the imaginary part is $0$ is easy: you just use the identity $\sin(a)+\sin(b)=2\sin(\frac{a+b}{2})\cos(\frac{a-b}{2})$, and for each integer $j$ with $0< j < n/2$ you pair the terms $\sin(2\pi j/n)$ and $\sin(2\pi (n-j)/n)$; each pair has $\frac{a+b}{2}=\pi$, so it contributes $2\sin(\pi)\cos(\frac{a-b}{2})=0$.

This same approach doesn't work for the real part: using the identity $\cos(a)+ \cos(b) =2\cos(\frac{a+b}{2})\cos(\frac{a-b}{2})$ and adding the same pairs gives $2\cos(\pi)\cos(\pi(n-2j)/n)=2\cos(2\pi j/n)$, so this gets $1+2\sum_{j}\cos(2\pi j/n)$ over the pairs, with $\cos(\pi)=-1$ added if $n$ is even. Then I need to show that that sum is $0$ if $n$ is even and $-1/2$ if $n$ is odd. Is there a clean way of doing this? The only thing I can think to do is repeat the sum of $\cos$ identity, and that doesn't seem too helpful.


Answer



Use the identity $$\displaystyle\sum\limits_{m=0}^{n-1} \cos(mx+y)=\frac{\cos\left(\dfrac{n-1}{2}x+y\right)\sin\left(\dfrac{n}{2}\, x\right)}{\sin\left(\dfrac{x}{2}\right)}$$




and evaluate where $x=2\pi/n$ and $y=0$ to deduce that the real part is zero.
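Explicitly, at $x=2\pi/n$ and $y=0$ the numerator contains $\sin\left(\frac{n}{2}\cdot\frac{2\pi}{n}\right)=\sin(\pi)=0$ while the denominator $\sin(\pi/n)\neq0$ for $n\geq2$, so

$$\sum_{m=0}^{n-1}\cos\left(\frac{2\pi m}{n}\right)=\frac{\cos\left(\frac{(n-1)\pi}{n}\right)\sin(\pi)}{\sin(\pi/n)}=0.$$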


Monday 11 February 2013

linear algebra - Prove that the set of commuting matrices is a vector space



Prove that the set of real matrices commuting with the matrix $A= \begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 \\
\end{bmatrix}$ is a vector space relative to the standard operations on matrices. Find the dimension and a basis for that space.



Question: Is it necessary to check subtraction for commuting matrices?



What are the steps for proving the given statement?


Answer



You just have to show it is a subspace of the vector space of $n\times n$ matrices (below, $M$ denotes the given matrix).




Clearly the zero matrix commutes with $M$.



Suppose $A$ and $B$ commute with $M$. Then $(A+B)M=AM+BM=MA+MB=M(A+B)$.



We also have $(cA)M=c(AM)=c(MA)=M(cA)$, so the matrices that commute with $M$ contain the zero matrix and are closed under addition and scalar multiplication, therefore they form a subspace of the vector space of matrices as desired.
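To see the dimension and a basis concretely, here is a sketch using sympy (a check of my own, not part of the answer above): it solves $AX=XA$ entrywise and reports the solution space, which comes out as the $4$-parameter family $aI+bA+cA^2+dA^3$, so the dimension is $4$ with basis $\{I,A,A^2,A^3\}$.

    import sympy as sp

    A = sp.Matrix(4, 4, lambda i, j: 1 if j == i + 1 else 0)  # the given matrix
    X = sp.Matrix(4, 4, sp.symbols('x0:16'))

    # AX = XA is a linear system in the 16 unknown entries of X.
    eqs = list((A * X - X * A).reshape(16, 1))
    print(sp.linsolve(eqs, list(X)))  # 4 free parameters remain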


Solving a congruence/modular equation : $(((ax) \bmod M) + b) \bmod M = (ax + b) \bmod M$

I've been trying to prove this equation for my homework.



$$(((ax) \bmod M) + b) \mod M = (ax + b) \bmod M$$



We have that $a,x,b,M > 0$, and $a \equiv b \pmod M$






Reading KhanAcademy website, I found the following properties that looked promising.




- Multiplication property: $(A \cdot B) \bmod C = ((A \bmod C) \cdot (B \bmod C)) \bmod C$
- Addition property: $(A + B) \bmod C = ((A \bmod C) + (B \bmod C)) \bmod C$




I tried developping the left side of the Equation :



$(((ax) \bmod M) + b) \bmod M \rightarrow((a \bmod M \cdot x \bmod M) \bmod M + b) \bmod M$ (multiplication property)





And if I develop the right side of the Equation :



$$(ax + b) \bmod M = (ax \bmod M + b \bmod M) \mod M$$ (addition property)




Which gives this after applying the multiplication property :



$$(((a \bmod M \cdot x \bmod M)\bmod M) + b \bmod M) \bmod M$$




So I have



$$((a\bmod M\cdot x \bmod M)\bmod M+b) \bmod M = (((a \bmod M \cdot x \bmod M)\bmod M) + b \bmod M) \bmod M$$






It's almost the answer, but one side has $b \bmod M$ and the other only has $b.$ I've been looking for more congruence properties but I can't seem to find one that would allow me to solve my problem. Have I been tackling this problem from the correct angle? Or did I make a mistake from the beginning (by applying the addition and multiplication properties)?
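As a sanity check, a quick brute-force test (a throwaway sketch with arbitrary ranges) suggests the identity itself always holds, so the missing step should be a general property rather than something relying on $a \equiv b$:

    import random

    # Check (((a*x) % M) + b) % M == (a*x + b) % M on random positive inputs.
    for _ in range(10_000):
        M = random.randint(1, 100)
        a, x, b = (random.randint(1, 1000) for _ in range(3))
        assert (((a * x) % M) + b) % M == (a * x + b) % M
    print("identity holds on all samples")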





Any help would be greatly appreciated! Thanks

real analysis - Geometric series in closed form.



$$f=\sum_{k=3}^{\infty}(-1)^kx^{(2k-2)}$$



I would like to write this as a geometric power series!



Is there a ritual you have to do to solve this? Thanks in advance.


Answer



$$\sum_{k\geq3}\left(-1\right)^{k}x^{2k-2}=\frac{1}{x^{2}}\sum_{k\geq3}\left(-x^{2}\right)^{k}=-x^{4}\sum_{k\geq0}\left(-x^{2}\right)^{k}.
$$
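For $|x|<1$ the remaining geometric series converges, giving the closed form

$$-x^{4}\sum_{k\geq0}\left(-x^{2}\right)^{k}=\frac{-x^{4}}{1+x^{2}}.$$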



linear algebra - Significance of row reduction

I hadn't realized it until I read the most shocking line while studying linear algebra:



"Row operations CAN change the column space of a Matrix."



Basically, I have accustomed myself to viewing the solution of matrix equations as a transformation. Say, for example, I come across an equation $Ax=b$; then I would interpret this as: the vector $x$ was transformed by the matrix $A$, so what would that vector $x$ be?



Normally, in linear algebra we solve such equations by reducing the matrix to echelon form or reduced echelon form. Note that the column space is the span of the columns of the matrix $A$.




Then, this is where the confusion lies. Say $A$ spanned a space where $b$ was in the span of $A$, so $Ax=b$ has a solution. Then we row reduce $A$ to an echelon form $B$; possibly the span has changed, and it may no longer include $b$, on top of which the vector $b$ itself has been row reduced equivalently. What guarantee is there that $b$ is in the span of the row-equivalent matrix $B\sim A$?



Now there is a second confusion: if row reductions do change the span of a matrix, then do the columns of $A$ that correspond to the pivots of the reduced row echelon form not span the same space as $A$? Then how are these a basis for the space spanned by $A$?



EDIT: more confusion here:
when we're finding the null space of $A$, we row reduce it and express the pivot variables as linear combinations of the free variables, and this gives the null space of $A$. But how does this all make sense, or how is this even equivalent, when the span of $A$ has changed?

Sunday 10 February 2013

real analysis - Evaluate $lim_{n to infty} sum_{k=n}^{2n} frac{1}{n+sqrt{k}}$

Evaluate $$\lim_{n \to \infty} \sum_{k=n}^{2n} \frac{1}{n+\sqrt{k}}$$



I tried to write the sum above as $$\lim_{n \to \infty} \frac{1}{n+\sqrt{n}} + \frac{1}{n+\sqrt{n+1}}+\dotsb+\frac{1}{n+\sqrt{2n}}$$
Now if I take the limit term by term the result is $0$, but obviously that's wrong. I think it's because this is a sum of terms that each go to zero, while the number of terms grows with $n$, so the sum need not go to zero. However, I don't know how to solve it. Any advice?

real analysis - If $f$ measurable and $B$ a Borel set, then $f^{-1}(B)$ measurable




Let $f$ be a Lebesgue measurable function and $B$ be a Borel set. Show that $f^{-1}(B)$ is also measurable.




Attempt at the proof:




Suppose $f$ is Lebesgue measurable. Then $f^{-1}((\alpha,\infty))$ is measurable as well (by definition of a measurable function), $\forall\alpha\in\mathbb{R}$. We note that $(\alpha,\infty)\in\mathcal{B}$, where $\mathcal{B}$ is the Borel $\sigma$-algebra, since:




  • $\emptyset\in (\alpha,\infty)$

  • $(\alpha,\infty)^c=\mathbb{R}\setminus(\alpha,\infty)\in\mathcal{B}$

  • the infinite union of open sets is once again an open set



So, $(\alpha,\infty)$ is a Borel set. So let $B=(\alpha,\infty)$. Since $f$ is measurable, $f^{-1}((\alpha,\infty))$ is measurable and so $f^{-1}(B)$ is measurable.




However I'm doubtful that this is correct, since I didn't choose an arbitrary Borel set $B$. Can anyone nudge me in the right direction? Thanks.


Answer



$\{A\mid f^{-1}(A) \text{ is Lebesgue measurable}\}$ is a $\sigma$-algebra containing all open intervals. What can you say now?


Saturday 9 February 2013

complex numbers - For $z\in\mathbb{C}$, $|z|=1$, prove that $\Big(\frac{1+ia}{1-ia}\Big)^4=z$ has all roots real and distinct




Given $z\in\mathbb{C}$ with $|z|=1$, then prove that the equation $\Big(\dfrac{1+ia}{1-ia}\Big)^4=z$ has all roots real and distinct





My Attempt
$$
z=e^{i\theta}\implies \frac{1+ia}{1-ia}=z^{1/4}=e^{i\theta/4}\cdot 1^{1/4}=e^{i\theta/4}\cdot(1,-1,i,-i)
$$

Is it correct to introduce the fourth roots of unity to get the complete solution?



Case 1: $1^{1/4}=1$
$$
\frac{1}{ia}=\frac{e^{i\theta/4}+1}{e^{i\theta/4}-1}=\frac{2\cos^2\theta/8+2i\sin\theta/8\,\cos\theta/8}{-2\sin^2\theta/8+2i\sin\theta/8\,\cos\theta/8}\\
\frac{1}{ia}=i\cot\theta/8\\\boxed{a=-\tan\theta/8}
$$

Case 2: $1^{1/4}=-1$
$$
\frac{1}{ia}=\frac{-e^{i\theta/4}+1}{-e^{i\theta/4}-1}=\frac{2\sin^2\theta/8-2i\sin\theta/8.\cos\theta/8}{-2\cos^2\theta/8-2i\sin\theta/8.\cos\theta/8}\\
\frac{1}{ia}=i\tan\theta/8\implies\boxed{a=-\cot\theta/8}
$$

Case 3: $1^{1/4}=-i$
$$
\frac{1}{ia}=\frac{ie^{i\theta/4}+1}{ie^{i\theta/4}-1}=\frac{1-\sin\theta/4+i\cos\theta/4}{-1-\sin\theta/4+i\cos\theta/4}\\
=\frac{1-\cos(\pi/2-\theta/4)+i\sin(\pi/2-\theta/4)}{-1-\cos(\pi/2-\theta/4)+i\sin(\pi/2-\theta/4)}\\
=\frac{2\sin^2(\pi/4-\theta/8)+2i\sin(\pi/4-\theta/8)\cos(\pi/4-\theta/8)}{-2\cos^2(\pi/4-\theta/8)+2i\sin(\pi/4-\theta/8)\cos(\pi/4-\theta/8)}\\
\frac{1}{ia}=-i\tan(\pi/4-\theta/8)\\
\boxed{a=\cot(\pi/4-\theta/8)=\tan(\pi/4+\theta/8)}
$$

Case 4: $1^{1/4}=i$
$$
\frac{1}{ia}=\frac{-ie^{i\theta/4}+1}{-ie^{i\theta/4}-1}=\frac{1+\sin\theta/4-i\cos\theta/4}{-1+\sin\theta/4-i\cos\theta/4}\\
=\frac{1+\cos(\pi/2-\theta/4)-i\sin(\pi/2-\theta/4)}{-1+\cos(\pi/2-\theta/4)-i\sin(\pi/2-\theta/4)}\\
=\frac{2\cos^2(\pi/4-\theta/8)-2i\sin(\pi/4-\theta/8)\cos(\pi/4-\theta/8)}{-2\sin^2(\pi/4-\theta/8)-2i\sin(\pi/4-\theta/8)\cos(\pi/4-\theta/8)}\\
=i\cot(\pi/4-\theta/8)\\
\frac{1}{ia}=i\tan(\pi/4+\theta/8)\\
\boxed{a=-\cot(\pi/4+\theta/8)}
$$

Is it the right way to approach the problem ? and whats the easiest way to solve this ?


Answer



For the reality of the roots, take the modulus on both sides:
$$\left|\frac{i-a}{-i-a}\right|=|z|^{1/4}=1$$
so if $A$ is the point with affix $a$, $I$ with affix $i$, and $I'$ with affix $-i$, this says that $AI=AI'$, so $A$ belongs to the perpendicular bisector of the segment $[II']$, which is the real axis.




For distinct roots, take arguments: $(\overrightarrow{AI'};\overrightarrow{AI})=\frac14\arg(z)+k\frac\pi2$ with $k\in\{0,1,2,3\}$; this gives 4 distinct points.



(There could be a better argument for last point).



With some calculus (which doesn't explain why the roots are real): let $z=e^{i\theta}$ and, for $k\in\{0,1,2,3\}$, $\theta_k=\theta/4+k\pi/2$. The equation solves as
$$a=\frac{e^{i\theta_k}-1}{i(e^{i\theta_k}+1)} = \frac{e^{i\theta_k/2}-e^{-i\theta_k/2}}{i(e^{i\theta_k/2}+e^{-i\theta_k/2})} = \tan(\theta_k/2) = \tan(\theta/8+k\pi/4)$$
OK, so it's real, but why?
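A quick numerical confirmation of that formula (a sketch of my own, with an arbitrary test angle): for each $k$, $a_k=\tan(\theta/8+k\pi/4)$ should satisfy the original equation.

    import cmath, math

    theta = 1.234                      # arbitrary test angle
    z = cmath.exp(1j * theta)
    for k in range(4):
        a = math.tan(theta / 8 + k * math.pi / 4)
        w = ((1 + 1j * a) / (1 - 1j * a)) ** 4
        print(k, abs(w - z))           # ~0 for each of the four real roots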


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...