Saturday 30 January 2016

limits - Why $\lim\limits_{n\to \infty}\left(\frac{n+3}{n+4}\right)^n \neq 1$?





Why doesn't $\lim\limits_ {n\to \infty}\ (\frac{n+3}{n+4})^n$ equal $1$?




So this is the question.



I actually found that it equals $e^{-1}$, and I could prove it using some reordering and cancelling.



However another way I took was this:




$$\lim_ {n\to \infty}\ \left(\frac{n}{n+4}+\frac{3}{n+4}\right)^n$$



with the limit of the first term going to $1$ and the second to $0$. So $(1+0)^n=1$ not $e^{-1}$.


Answer



Because $1^\infty$ is a tricky beast. Perhaps the power overwhelms the quantity that's
just bigger than $1$, but approaching $1$, and the entire expression is large. Or perhaps not...



Perhaps the power overwhelms the quantity that's just smaller than $1$, but approaching $1$, and the entire expression tends to $0$ . Or perhaps not...



In your case,

$$
{n+3\over n+4} = 1-{1\over n+4}.
$$
And, as one can show (as you did): $$\lim\limits_{n\rightarrow\infty}(1-\textstyle{1\over n+4})^n =
\lim\limits_{n\rightarrow\infty}\Bigl[ (1-\textstyle{1\over n+4})^{n+4}\cdot (1-{1\over n+4})^{-4}\Bigr] =
e^{-1}\cdot1=e^{-1}.$$



Here, the convergence of $1-{1\over n+4}$ to 1 is too fast for the $n^{\rm th}$ power to drive it back down to $0$.
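A quick numerical sanity check (my own addition, not part of the answer; plain Python standard library): the sequence settles near $e^{-1}\approx 0.36788$, not $1$.

```python
# Evaluate ((n+3)/(n+4))^n for growing n and compare with 1/e.
import math

for n in [10, 1_000, 100_000]:
    print(n, ((n + 3) / (n + 4)) ** n, math.exp(-1))
```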


taylor expansion - Proving an inequality



I'm trying to prove that
$$ \frac{3-2\sqrt{1-15 m^2}}{1+12 m^2}\geq 1+3 m^2$$



I have obtained the Taylor expansion at $m=0$ in CAS software:
[image of the Taylor expansion]




One possibility to prove the inequality is to show that the coefficients in the Taylor expansion are non-negative, but I don't see how.



Really I only want to obtain the inequality. Any ideas?



EDIT



$m$ must be between $0$ and $\frac{1}{\sqrt{15}}$.

Answer



$$ \frac{3-2\sqrt{1-15m^2}}{1+12m^2} \geq 1+3m^2 \iff $$
$$ 3-2\sqrt{1-15m^2} \geq (1+3m^2)(1+12m^2) \iff $$

$$ 2\sqrt{1-15m^2} \leq 3-(1+3m^2)(1+12m^2) \iff $$
$$ \sqrt{1-15m^2} \leq \frac{3-(1+3m^2)(1+12m^2)}{2} \iff $$
$$ \sqrt{1-15m^2} \leq \frac{2-15m^2-36m^4}{2} $$
Note that on the interval you're concerned about, the right hand side is always positive. Proof: it's obviously decreasing on $\left(0,\frac{1}{\sqrt{15}}\right)$, and is equal to $\frac{21}{50}$ at the right endpoint. Therefore squaring both sides is legal here with an $\iff$ statement.
$$ \sqrt{1-15m^2} \leq \frac{2-15m^2-36m^4}{2} \iff $$
$$ 1-15m^2 \leq \left(\frac{2-15m^2-36m^4}{2}\right)^2 \iff $$
$$ 1-15m^2 \leq 324m^8 + 270m^6 + \frac{81}{4}m^4 - 15m^2 + 1 \iff $$
$$ 0 \leq 324m^8 + 270m^6 + \frac{81}{4}m^4 $$



This last statement is clearly true.
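As a side check (mine, not the answerer's; `sympy` assumed available), the expansion behind the last two displayed lines can be confirmed symbolically:

```python
# Verify that ((2 - 15m^2 - 36m^4)/2)^2 - (1 - 15m^2) expands to
# 324m^8 + 270m^6 + (81/4)m^4, i.e. the squaring step above is correct.
import sympy as sp

m = sp.symbols('m')
diff = ((2 - 15*m**2 - 36*m**4) / 2)**2 - (1 - 15*m**2)
print(sp.expand(diff))   # 324*m**8 + 270*m**6 + 81*m**4/4
```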



elementary number theory - fastest method to solve a quadratic equation modulo p^N




What is the fastest method to solve a quadratic equation



$ax^2 + bx + c \equiv 0 \pmod{p^N}$



where $p$ is prime?


Answer



I'd start by multiplying by the inverse of $a$, to make your lead coefficient $1$: $x^2+Bx+C\equiv 0$. Then you can complete the square, replacing your coefficient $B$ with $B-p^N$ if that helps you see how to understand $\frac{B}{2}$.



This will tell you what number has to be a quadratic residue, and you can work on finding its square root. For that, you probably want to find the square root mod $p$ first, and then use Hensel's lifting lemma until you get to $p^N$.




Does that answer your question, or do you need more details on any of those steps?
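Here is a minimal Python sketch of that recipe (my own illustration, not a reference implementation), under loud assumptions: $p$ is an odd prime, $p\nmid a$, and the completed-square quantity is a nonzero quadratic residue mod $p$; the degenerate cases need separate care. `sqrt_mod` from sympy supplies the square root mod $p$, and the lifting is a standard Hensel/Newton step whose $p$-adic precision doubles each pass.

```python
# Sketch: normalize to x^2 + Bx + C, complete the square, take a square
# root mod p, then Hensel-lift it to p^N.  Assumes p odd, p does not
# divide a, and the completed-square quantity D is a nonzero residue mod p.
from sympy.ntheory import sqrt_mod

def solve_quadratic_mod_prime_power(a, b, c, p, N):
    pN = p ** N
    a_inv = pow(a, -1, pN)                 # needs gcd(a, p) = 1
    B, C = (b * a_inv) % pN, (c * a_inv) % pN
    half = pow(2, -1, pN)                  # inverse of 2, needs p odd
    D = (half * half * B * B - C) % pN     # (x + B/2)^2 must equal D
    r = sqrt_mod(D % p, p)                 # square root mod p, or None
    if r is None or r == 0:
        return []                          # non-residue, or degenerate p | D
    k = 1
    while k < N:                           # Hensel lifting, doubling precision
        k = min(2 * k, N)
        mod = p ** k
        r = (r - (r * r - D) * pow(2 * r, -1, mod)) % mod
    return sorted({(r - half * B) % pN, (-r - half * B) % pN})

# x^2 - 2 = 0 (mod 7^3): 2 is a residue mod 7 (3^2 = 2), so roots exist.
print(solve_quadratic_mod_prime_power(1, 0, -2, 7, 3))   # [108, 235]
```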


Friday 29 January 2016

calculus - Prove that $\lim_{x\rightarrow 1} \frac{\int_0^xg(t)dt-\int_0^1g(t)dt-\int_0^1f(t)dt\,(x-1)}{(x-1)^2}=\frac{f(1)}{2}$




Prove that



$$\lim_{x\rightarrow 1} \frac{\int_0^xg(t)dt-\int_0^1g(t)dt-\int_0^1f(t)dt\,(x-1)}{(x-1)^2}=\frac{f(1)}{2}$$



Now I know this is a limit of the form $\frac{"0"}{"0"}$, which means I can use L'Hôpital along with the fundamental theorem of calculus. This is the first time that I've done something with two variables, $t$ and $x$. Which one am I differentiating with respect to in this case?


Answer



The limit says $x \to 1$, so take the derivative with respect to $x$. Note that the $t$ variables are all bound variables inside integrals, so taking the derivative with respect to $t$ doesn't make sense.


field theory - $\sqrt{p_1}$ is not in $Q[\sqrt{p_2},...,\sqrt{p_n}]$




How does one show $\sqrt{p_1}$ is not in $Q[\sqrt{p_2},...,\sqrt{p_n}]$ if $p_1,...,p_n$ are distinct primes? Intuitively, this is pretty clear, but it makes me very uncomfortable to just believe it. Any idea how to prove this rigorously? I want this result because I am trying to compute the Galois group of $(X^2-p_1)...(X^2-p_n)$. If I know the statement is true, then the Galois group of this polynomial will be the direct product of the separate Galois groups.



Answer



You may also go through the following lines: by quadratic reciprocity and Dirichlet's theorem, there is some uber-huge prime $P$ for which $p_2,p_3,\ldots,p_n$ are quadratic residues, while $p_1$ is not. It follows that the algebraic numbers $\sqrt{p_1}$ and $\sqrt{p_2}+\ldots+\sqrt{p_n}$ have different degrees over $\mathbb{F}_P$ ($2$ and $1$, respectively), so they cannot be linearly dependent over $\mathbb{Q}$.
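To make the first step concrete, here is a small search (my own illustration, assuming `sympy`) for such a prime $P$ in the case $p_1=2$, $p_2=3$, $p_3=5$; quadratic reciprocity and Dirichlet's theorem guarantee the loop terminates.

```python
# Find a prime P modulo which 3 and 5 are quadratic residues but 2 is not.
from sympy import isprime
from sympy.ntheory import legendre_symbol

p1, rest = 2, [3, 5]
P = 5
while not (isprime(P)
           and legendre_symbol(p1, P) == -1
           and all(legendre_symbol(p, P) == 1 for p in rest)):
    P += 2
print(P)   # 11: the residues mod 11 are {1,3,4,5,9}, so 3,5 are QRs, 2 is not
```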


elementary number theory - Proof that $3\mid n^3 - 4n$

Prove that $n^3 − 4n$ is divisible by $3$ for every positive integer $n$.




I am not sure how to start this problem. Any help would be appreciated

integration - Leibniz's Rule for differentiation under the integral.



If we have




$$F\left( \alpha \right) = \int\limits_a^b {f\left( {\alpha ,x} \right)dx} $$



Then



$$\frac{{F\left( {\alpha + \Delta \alpha } \right) - F\left( \alpha \right)}}{{\Delta \alpha }} = \frac{{\Delta F}}{{\Delta \alpha }} = \int\limits_a^b {\frac{{f\left( {\alpha + \Delta \alpha ,x} \right) - f\left( {\alpha ,x} \right)}}{{\Delta \alpha }}dx} $$



and



$$\mathop {\lim }\limits_{\Delta \alpha \to 0} \frac{{\Delta F}}{{\Delta \alpha }} = \frac{{dF}}{{d\alpha }} = \mathop {\lim }\limits_{\Delta \alpha \to 0} \int\limits_a^b {\frac{{f\left( {\alpha + \Delta \alpha ,x} \right) - f\left( {\alpha ,x} \right)}}{{\Delta \alpha }}dx} $$




However, this doesn't always mean



$$\mathop {\lim }\limits_{\Delta \alpha \to 0} \frac{{\Delta F}}{{\Delta \alpha }} = \frac{{dF}}{{d\alpha }} = \int\limits_a^b {\mathop {\lim }\limits_{\Delta \alpha \to 0} \frac{{f\left( {\alpha + \Delta \alpha ,x} \right) - f\left( {\alpha ,x} \right)}}{{\Delta \alpha }}dx} $$



$$\mathop {\lim }\limits_{\Delta \alpha \to 0} \frac{{\Delta F}}{{\Delta \alpha }} = \frac{{dF}}{{d\alpha }} = \int\limits_a^b {\frac{{\partial f\left( {\alpha ,x} \right)}}{{\partial \alpha }}dx} $$



I know that in other cases, for example in the integration of a series of functions or in sequences of functions, if $s_n(x) \to s(x)$ or $f_n(x) \to f(x)$ uniformly then we can integrate term by term (in the series) or change the order of integration and of taking the limit (in the sequence), i.e.:



If




$${s_n}\left( x \right) = \sum\limits_{k = 0}^n {{f_k}\left( x \right)} $$



then



$$\mathop {\lim }\limits_{n \to \infty } \int\limits_a^b {{s_n}\left( x \right)dx} = \int\limits_a^b {s\left( x \right)dx} $$



and for the other case:



$$\mathop {\lim }\limits_{n \to \infty } \int\limits_a^b {{f_n}\left( x \right)dx} = \int\limits_a^b {\mathop {\lim }\limits_{n \to \infty } {f_n}\left( x \right)dx} $$




However Leibniz's rule is used in cases such as:



$$\int\limits_0^1 {\frac{{{x^\alpha } - 1}}{{\log x}}dx} $$



which isn't even continuous on $[0,1]$. How can we then justify this procedure?



ADD:



One particular example is




$$f(t) = \int\limits_0^\infty {\frac{{\sin \left( {xt} \right)}}{x}} dx =\frac{\pi}{2}$$



Which wrongly yields:



$$f'\left( t \right) = \int\limits_0^\infty {\cos \left( {xt} \right)dx} = 0$$


Answer



Take a look at http://en.wikipedia.org/wiki/Differentiation_under_the_integral_sign



For your integral

$$
\int_0^1 {\frac{{{x^\alpha } - 1}}{{\log x}}dx},
$$
I guess you need $\alpha>1$ (at least to apply the theorem the way it appears in the Wikipedia article). Be careful that $x$ in the article is your $\alpha$.



A more general result is Lebesgue's Dominated Convergence Theorem, where you can replace the continuity assumption with boundedness (since $(x,\alpha)$ will be staying within a rectangle).
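As a numeric illustration (my own, with parameter values I picked; `scipy` assumed): differentiating under the integral gives $F'(\alpha)=\int_0^1 x^\alpha\,dx=\frac{1}{\alpha+1}$ for $F(\alpha)=\int_0^1\frac{x^\alpha-1}{\log x}dx$, and with $F(0)=0$ this integrates to $F(\alpha)=\log(1+\alpha)$:

```python
# Compare F(a) computed by quadrature with log(1 + a).
import numpy as np
from scipy.integrate import quad

def F(a):
    val, _ = quad(lambda x: (x**a - 1) / np.log(x), 0, 1)
    return val

for a in [0.5, 1.0, 2.0]:
    print(a, F(a), np.log(1 + a))   # the last two columns agree
```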


linear algebra - Let $A$ and $B$ be two $3×3$ matrices with real entries such that $rank(A) = rank(B) =1$.

Let $A$ and $B$ be two $3×3$ matrices with real entries such that $rank(A) = rank(B) =1$. Let $N(A)$ and $R(A)$ stand for the null space and range space of $A$. Define $N(B)$ and $R(B)$ similarly. Then which of the following is necessarily true ?



$(A) \dim(N(A) ∩ N(B)) ≥ 1$.



$(B) \dim(N(A) ∩ R(A)) ≥ 1.$




$(C) \dim(R(A) ∩ R(B)) ≥ 1.$



$(D) \dim(N(B) ∩ R(B)) ≥ 1.$



I feel that option (A) is true. Can anyone help me with this?

Thursday 28 January 2016

calculus - Compute $\int_0^{\pi/2}\frac{\cos{x}}{2-\sin{2x}}dx$



How can I evaluate the following integral?





$$I=\int_0^{\pi/2}\frac{\cos{x}}{2-\sin{2x}}dx$$







I tried it with Wolfram Alpha, it gave me a numerical solution: $0.785398$.
Although I immediately know that it is equal to $\pi /4$, I fail to obtain the answer with pen and paper.
I tried to use substitution $u=\tan{x}$, but I failed because the upper limit of the integral is $\pi/2$ and $\tan{\pi/2}$ is undefined.
So how are we going to evaluate this integral? Thanks.


Answer



Hint:




Knowing that $\sin2x=2\sin x\cos x$ and $\sin^2x+\cos^2x=1$, the integral can be expressed as



\begin{equation}
I=\int_0^{\pi/2}\frac{\cos x}{1+(\sin x-\cos x)^2}\ dx
\end{equation}



then use substitution $x\mapsto\frac{\pi}{2}-x$, we have



\begin{equation}

I=\int_0^{\pi/2}\frac{\sin x}{1+(\sin x-\cos x)^2}\ dx
\end{equation}



Add the two $I$'s and let $u=\sin x-\cos x$.
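Following the hint through: with $u=\sin x-\cos x$ the sum becomes $2I=\int_{-1}^{1}\frac{du}{1+u^2}=\frac{\pi}{2}$, so $I=\frac{\pi}{4}$. A direct numerical check (mine, assuming `scipy`) agrees:

```python
# Quadrature of the original integral; prints ~0.7853981633974483 twice.
import numpy as np
from scipy.integrate import quad

I, _ = quad(lambda x: np.cos(x) / (2 - np.sin(2 * x)), 0, np.pi / 2)
print(I, np.pi / 4)
```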


logic - Relative Consistency Lemma with finitistic proof



In set theory, one uses the following lemma in order to provide relative consistency proofs. I have a question concerning the proof of this lemma. First, here is the statement:



Suppose that $S$ and $T$ are theories over the language $\mathcal{L}(\in)$ of set theory and let $P$ be a class (or, if you want, a new symbol added to $\mathcal{L}(\in)$). If $S\vdash\exists x P(x)$ and for all $\varphi\in T$, $S\vdash \varphi^{P}$, then the consistency of $S$ implies the consistency of $T$.



As far as I understand, the proof goes like this: assume that $T$ is inconsistent and let $\psi$ be such that $T\vdash\psi\wedge\neg\psi$.



Then one proves that this implies $S\vdash\psi^P\wedge\neg\psi^P$. In fact, one proves that for every $\psi$ such that $T\vdash\psi$, there holds $S\vdash\psi^P$ (that $S\vdash\psi^P\wedge\neg\psi^P$ proves $S$ inconsistent is clear to me).




Now in order to prove the statement
\begin{align*}
T\vdash\psi \Rightarrow\ S\vdash\psi^P\ \ \ (*)
\end{align*}
one actually shows
\begin{align*}
T\vdash\psi \Rightarrow\ T^P\cup\{\exists xP(x)\}\vdash\psi^P\ \ \ (**),
\end{align*}
where $T^P:=\{\varphi^P\vert\ \varphi\in T\}$. By assumption, $(**)$ implies $(*)$, so it is enough to prove $(**)$. I have trouble understanding the proof of $(**)$. Going through the literature (mainly Kunen's book, but also some lecture notes), the idea for $(**)$ looks like this: given a deduction, i.e., a formal proof

\begin{align*}
\varphi_1\dots\varphi_{n-1}\psi
\end{align*}
from $T$ for $\psi$, one shows that
\begin{align*}
\exists xP(x)\varphi_1^P\dots\varphi_{n-1}^P\psi^P
\end{align*}
is a deduction from $T^P\cup\{\exists xP(x)\}$ for $\psi^P$.
Now, here's my question: where exactly do you need the premise $\exists xP(x)$ and how is it used to make sure that the deduction stays correct? It is clear to me that one needs it and I even worked out a proof for $(**)$ using Goedel's completeness theorem, i.e. using $\models$ instead of $\vdash$. There, I could clear see where you need $\exists xP(x)$, because you relativize the class $P$ to a set model and they need to be nonempty, therefore $\exists x P(x)$. So, intuitively, it is clear to me that the proof working with $\vdash$ must have the above structure, but I guess I oversaw something and implicitly used $\exists xP(x)$ where I didn't see it. However, I want to understand the syntactical proof without using completeness. The search in the literatur was quite unsatisfying for me so far because I always just found sketches of the proof.




By the way, I was working in Shoenfield's calculus/ deductive system of predicate logic. If someone has literature hints or links to a more detailed proof, I would be very grateful. Thank you very much.



P.S: Sorry for the long article!


Answer



You are asking where the assumption $\exists x P(x)$ is deployed in translating a proof of a sentence $\psi$ with assumptions drawn from a set of sentences $T$ into a proof of $\psi^P$ with assumptions drawn from $T^P \cup \{\exists x P(x)\}$. Here $\psi^P$, the relativisation of $\psi$ at $P$, means the formula obtained from $\psi$ by replacing subformulas of the form $\forall x \chi$ by $\forall x (P(x) \Rightarrow \chi)$ and subformulas of the form $\exists x \chi$ by $\exists x(P(x)\land \chi)$.



To answer the question, I need to describe the translation.
A natural deduction style presentation of first-order logic is convenient for this.
(I don't have Kunen's book to hand and I have not read Shoenfield's book, so you will have to adapt this to the formalisations that those books use.)
What follows works for any first-order language (such as the language of set theory) that has only predicate symbols and no constant or function symbols, so that terms are just variables.




We will define by induction a function that maps a proof $\Pi$
with conclusion $\alpha$ and assumptions $\alpha_1, \ldots, \alpha_n$
into a proof $\Pi^P$ with conclusion $\alpha^P$ and assumptions
$\alpha_1^P, \ldots, \alpha_n^P, P(x_1), \ldots, P(x_k)$ for some
variables $x_1, \ldots, x_k$ that occur somewhere in $\Pi$.



Base case: we have a single formula
$\alpha$, and we translate it into $\alpha^P$.




Propositional rules: relativisation commutes with the propositional connectives, so a propositional rule translates into an instance of the same rule.
E.g., for $\land$-introduction, the input is
$$
\begin{array}{ccc}
\vdots & & \vdots \\
\alpha & &\beta
\\\hline
&\alpha \land \beta&
\end{array}
$$

and we have already translated the proofs of $\alpha$ and $\beta$
into proofs of $\alpha^P$ and $\beta^P$. The $\land$-introduction then
translates into another $\land$-introduction:
$$
\begin{array}{ccc}
\vdots & & \vdots \\
\alpha^P & &\beta^P
\\\hline
&(\alpha \land \beta)^P&
\end{array}

$$



$\forall$-introduction:
the input is
$$
\begin{array}{c}
\vdots \\
\alpha
\\\hline
\forall x \alpha

\end{array}
$$
where $x$ does not appear free in any assumption.
We have already translated the proof of $\alpha$.
However, the translation may have added extra assumptions
$P(x)$. We do an $\Rightarrow$-introduction to discharge
all those assumptions, so that $x$ no longer appears
free in any assumption, and follow that by a $\forall$-introduction:
$$
\begin{array}{c}

\vdots \\
\alpha^P
\\\hline
P(x) \Rightarrow \alpha^P
\\\hline
(\forall x \alpha)^P
\end{array}
$$



$\forall$-elimination: the input is

$$
\begin{array}{c}
\vdots \\
\forall x \alpha
\\\hline
\alpha[y/x]
\end{array}
$$
and we have already translated the proof of $\forall x \alpha$.
A $\forall$-elimination gives us an implication, which

we eliminate by adding an assumption $P(y)$:
$$
\begin{array}{ccc}
P(y) & &
\begin{array}{c}
\vdots \\
(\forall x \alpha)^P
\\\hline
P(y) \Rightarrow (\alpha[y/x])^P
\end{array}

\\\hline
& (\alpha[y/x])^P
\end{array}
$$



$\exists$-introduction: the input is
$$
\begin{array}{c}
\vdots \\
\alpha

\\\hline
\exists x \alpha
\end{array}
$$
and we have already translated the proof of $\alpha$. To prove
$(\exists x\alpha)^P$ we have to add the assumption $P(x)$.
$$
\begin{array}{ccc}
&& \vdots \\
P(x) && \alpha^P

\\\hline
&
\begin{array}{c}
P(x) \land \alpha^P
\\\hline
(\exists x \alpha)^P
\end{array}
\end{array}
$$




$\exists$-elimination: the input is
$$
\begin{array}{ccc}
& & \alpha[y/x] \\
\vdots && \vdots \\
\exists x\alpha && \beta
\\\hline
& \beta &
\end{array}
$$

where $y$ does not appear free in $\beta$ or any assumption
other than the assumptions $\alpha[y/x]$, which are discharged
by this rule.
Renaming $y$ if necessary in the right-hand subproof, we may
assume that $y$ does not appear anywhere in the left-hand subproof.
We have already translated the subproofs. The translation of
the right-hand subproof may have additional assumptions
in which $y$ appears free, but these will all have the form $P(y)$.
We put $\land$-eliminations over the assumptions $(\alpha[y/x])^P$ and $P(y)$ in the translation of the right-hand subproof and then do an $\exists$-elimination:
$$

\begin{array}{ccc}
\begin{array}{c}
\vdots
\\(\exists x\alpha)^P
\end{array}
& &
\begin{array}{ccc}
\begin{array}{c}
P(y) \land (\alpha[y/x])^P
\\\hline

(\alpha[y/x])^P
\\\vdots
\end{array}
& &
\begin{array}{c}
P(y) \land (\alpha[y/x])^P
\\\hline
P(y)
\\\vdots
\end{array} \\\hline

& \beta^P &
\end{array}
\\\hline
& \beta^P
\end{array}
$$



That completes the definition of the translation. If we apply it to
a proof $\Pi$ of a sentence $\psi$ with assumptions drawn from the set
of sentences $T$, the result is a proof $\Pi^P$ of $\psi^P$ with assumptions

drawn from $T^P$ and possibly additional assumptions of the form $P(x)$.
The variables $x$ in these additional assumptions are the only free
variables in the assumptions of $\Pi^P$. We can therefore use
$\exists$-elimination to eliminate the additional assumptions in favour
of the assumption $\exists x P(x)$, resulting in a proof of $\psi^P$
with assumptions drawn from $T^P \cup \{\exists x P(x)\}$, which was what we wanted. It is this final step where the assumption $\exists x P(x)$ is deployed.


Wednesday 27 January 2016

diophantine approximation - Finding irrational numbers in given interval



If $~\xi~$ is irrational number then it is known that the set $~\{ p \xi + q ~ | ~ p,q \in \mathbb{Z} \}~$ is dense in $~\mathbb{R}$. Thus given some reals $~a~$ and $~b~$ one can find integers $~p~$ and $~q~$ such that $~ a \leq p \xi + q < b~$. But how?




Being precise, I have $~a,b,\xi > 0~$ and I'm searching for an algorithm to find a pair $~(p,q)~$ with $~p~$ positive and least possible, $~q~$ negative.



I don't expect anything much more efficient than a brute-force search, but at least, which bounds can we put on $~p~$ and $~q~$ to narrow the search space?


Answer



Let $[x]$ denote the largest integer not exceeding $x.$ Let $e \in R-Q.$



For $k\in N,$ define $d_k= e k-[e k]$ and define $k'$ as follows :



Let $l_k\in N$ where $l_k d_k<1<(1+l_k)d_k.$ Let $k'=-l_k k$ if $1-l_k d_k<(1+l_k)d_k-1.$ Otherwise let $k'=(1+l_k)k.$ $$\text {Observe that }\quad 0< d_{k'}<d_k/2.$$


Let $k_1=1$ and let $k_{n+1}=k_n'.$ Now choose $a^*\in (a,b)-Z.$ Let $M$ be the least (or any) $n$ such that $d_{k_n}<\min (b-a^*,a^*-[a^*]).$ For brevity let $C=k_M$. $$\text {We have }\quad 0< C e-[C e]<\min (b-a^*,a^*-[a^*]).$$

Let $D\in N$ where $$(D-1)(C e-[C e])\leq a^*-[a^*]< D(C e-[C e]).$$ Then $a<[a^*]+D(Ce-[Ce])<b$, so $p=DC$ and $q=[a^*]-D[Ce]$ satisfy $a< pe+q<b$.

The use of $a^*$ was to remove the need to treat the cases $a\in Z$ and $a\not \in Z$ separately.
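For completeness, a brute-force sketch (mine, not the answer's construction) of the search the question asks for: for each candidate $p$ there is at most one useful integer $q$, namely $q=\lceil a-p\xi\rceil$, so the scan is linear in $p$ and the first hit gives the least positive $p$.

```python
# Least positive p with an integer q (here also q < 0) and a <= p*xi + q < b.
import math

def find_pq(xi, a, b):
    p = 1
    while True:
        q = math.ceil(a - p * xi)     # smallest q with p*xi + q >= a
        if p * xi + q < b and q < 0:
            return p, q
        p += 1

print(find_pq(math.sqrt(2), 10.0, 10.001))   # (985, -1383)
```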


calculus - Prove $\int_{0}^\infty \mathrm{d}y\int_{0}^\infty \sin(x^2+y^2)\,\mathrm{d}x=\int_{0}^\infty \mathrm{d}x\int_{0}^\infty \sin(x^2+y^2)\,\mathrm{d}y=\pi/4$



How can we prove that
\begin{aligned}
&\int_{0}^\infty \mathrm{d}y\int_{0}^\infty \sin(x^2+y^2)\mathrm{d}x\\
=&\int_{0}^\infty \mathrm{d}x\int_{0}^\infty \sin(x^2+y^2)\mathrm{d}y\\=&\cfrac{\pi}{4}
\end{aligned}



I can prove these two are integrable but how can we calculate the exact value?



Answer



I do not know if you are supposed to know this. So, if I am off-topic, please forgive me.



All the problem is around Fresnel integrals. So, using the basic definitions,$$\int_{0}^t \sin(x^2+y^2)dx=\sqrt{\frac{\pi }{2}} \left(C\left(\sqrt{\frac{2}{\pi }} t\right) \sin
\left(y^2\right)+S\left(\sqrt{\frac{2}{\pi }} t\right) \cos
\left(y^2\right)\right)$$ where the sine and cosine Fresnel integrals appear. $$\int_{0}^\infty \sin(x^2+y^2)dx=\frac{1}{2} \sqrt{\frac{\pi }{2}} \left(\sin \left(y^2\right)+\cos
\left(y^2\right)\right)$$ Integrating a second time,$$\frac{1}{2} \sqrt{\frac{\pi }{2}}\int_0^t \left(\sin \left(y^2\right)+\cos
\left(y^2\right)\right)dy=\frac{\pi}{4} \left(C\left(\sqrt{\frac{2}{\pi }}
t\right)+S\left(\sqrt{\frac{2}{\pi }} t\right)\right)$$ $$\frac{1}{2} \sqrt{\frac{\pi }{2}}\int_0^\infty \left(\sin \left(y^2\right)+\cos
\left(y^2\right)\right)dy=\frac{\pi}{4} $$
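A high-precision check of the first identity at a finite upper limit (my own, using `mpmath`'s Fresnel functions; the values of $y$ and $t$ are arbitrary):

```python
# LHS: direct quadrature; RHS: the Fresnel-integral expression above.
import mpmath as mp

y, t = 0.7, 2.5
lhs = mp.quad(lambda x: mp.sin(x**2 + y**2), [0, t])
s = mp.sqrt(2 / mp.pi) * t
rhs = mp.sqrt(mp.pi / 2) * (mp.fresnelc(s) * mp.sin(y**2)
                            + mp.fresnels(s) * mp.cos(y**2))
print(lhs, rhs)   # agree to working precision
```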



Representation of a complex number in polar form

Question: [image of the textbook question]



The answer given in the textbook is option d.
What if I take iota outside the bracket which gives



(iota)^4 (cos theta + iota*sin theta)^4




1*(cos 4theta + iota*sin 4theta)



Which means option c.
What am I doing wrong?

limits - $\lim_{x\rightarrow 0^{+}}\frac{\sin ^{2}x\tan x-x^{3}}{x^{7}}=\frac{1}{15}$



Can someone show me how it is possible to prove that
\begin{equation*}
\lim_{x\rightarrow 0^{+}}\frac{\sin ^{2}x\tan x-x^{3}}{x^{7}}=\frac{1}{15}
\end{equation*}

but without Taylor series. One can use L'Hôpital's rule if necessary. I was not able to do it.


Answer



Let the desired limit be denoted by $L$.



We have via LHR $$\lim_{x \to 0}\frac{x - \sin x}{x^{3}} = \lim_{x \to 0}\frac{1 - \cos x}{3x^{2}} = \frac{1}{6}\tag{1}$$ and $$\lim_{x \to 0}\frac{\tan x - x}{x^{3}} = \lim_{x \to 0}\frac{\sec^{2} x - 1}{3x^{2}} = \frac{1}{3}\tag{2}$$ and we also have $$\lim_{x \to 0}\frac{\sin x}{x} = 1\tag{3}$$ Multiplying the 3 limits above we get
\begin{align}
&\lim_{x \to 0}\frac{(x - \sin x)(\tan x - x)\sin x}{x^{7}} = \frac{1}{18}\notag\\ &\Rightarrow\lim_{x \to 0}\frac{(x\sin x\tan x - x^{2}\sin x - \sin^{2}x\tan x + x\sin^{2}x)}{x^{7}} = \frac{1}{18}\notag\\
&\Rightarrow\lim_{x \to 0}\frac{x\sin x\tan x - x^{2}\sin x - x^{3} + x^{3} - \sin^{2}x\tan x + x\sin^{2}x}{x^{7}} = \frac{1}{18}\notag\\
&\Rightarrow\lim_{x \to 0}\frac{x\sin x\tan x - x^{2}\sin x + x\sin^{2}x - x^{3}}{x^{7}} - \frac{\sin^{2}x\tan x - x^{3}}{x^{7}} = \frac{1}{18}\notag\\
&\Rightarrow\lim_{x \to 0}\frac{\sin x\tan x - x\sin x + \sin^{2}x - x^{2}}{x^{6}} - L = \frac{1}{18}\notag\\

\end{align}
Our job is done if we can show that $$\lim_{x \to 0}\frac{\sin x\tan x - x\sin x + \sin^{2}x - x^{2}}{x^{6}} = \frac{11}{90}\tag{4}$$ Multiplying $(1)$ and $(2)$ we get $$\lim_{x \to 0}\frac{x\tan x - x^{2} - \sin x\tan x + x\sin x}{x^{6}}= \frac{1}{18}\tag{5}$$ Adding $(4)$ and $(5)$ we see that our proof is complete if we show that $$\lim_{x \to 0}\frac{x\tan x + \sin^{2}x - 2x^{2}}{x^{6}} = \frac{8}{45}\tag{6}$$ Squaring $(1)$ we get $$\lim_{x \to 0}\frac{\sin^{2}x + x^{2} - 2x\sin x}{x^{6}} = \frac{1}{36}\tag{7}$$ Subtracting $(7)$ from $(6)$ we see that proof is complete if we show that $$\lim_{x \to 0}\frac{\tan x + 2\sin x - 3x}{x^{5}}= \frac{3}{20}\tag{8}$$ It is this limit which we will calculate using LHR as follows
\begin{align}
A &= \lim_{x \to 0}\frac{\tan x + 2\sin x - 3x}{x^{5}}\notag\\
&= \lim_{x \to 0}\frac{\sec^{2} x + 2\cos x - 3}{5x^{4}}\text{ (apply LHR)}\notag\\
&= \lim_{x \to 0}\frac{1 + 2\cos^{3} x - 3\cos^{2}x}{5x^{4}\cos^{2}x}\notag\\
&= \frac{1}{5}\lim_{x \to 0}\frac{1 + 2\cos^{3} x - 3\cos^{2}x}{x^{4}}\notag\\
&= \frac{1}{5}\lim_{x \to 0}\frac{(\cos x - 1)^{2}(2\cos x + 1)}{x^{4}}\notag\\
&= \frac{3}{5}\lim_{x \to 0}\left(\frac{1 - \cos x}{x^{2}}\right)^{2}\notag\\
&= \frac{3}{5}\cdot\frac{1}{2}\cdot\frac{1}{2} = \frac{3}{20}

\end{align}



Thus the proof is complete by application of LHR three times (once in proof of $(1)$, $(2)$, $(8)$ each). Also note that if you know the result $(1)$ then result $(2)$ can be derived from $(1)$ by subtraction and noting that the limit $$\lim_{x \to 0}\frac{\tan x - \sin x}{x^{3}}$$ can be calculated without LHR very easily. See this question. So in reality we only need two application of LHR for this problem.



Update: While dealing with limit expressions of type $\lim_{x \to 0}f(x)/x^{n}$ for large $n$ (here $n = 7$), I have often found it useful to multiply several well-known limits of type $g(x)/x^{m}$ with smaller values of $m$ to get something like $h(x)/x^{n}$. The expectation is that some terms of $f(x)$ match those of $h(x)$, and a subtraction cancels these terms. Also it is expected that the resulting expression simplifies to $p(x)/x^{r}$ with $r < n$. Continue this until the exponent of $x$ in the denominator is very small. Here, for example, I have reduced an expression with $x^{7}$ to, finally, an expression with $x^{5}$ in the denominator. See this technique applied to $$\lim_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$$ here. Another application of the same technique can be found here as well.
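A high-precision numerical check of the value $\frac{1}{15}$ (my own addition; plain double precision is useless here because the numerator is $O(x^{7})$ and cancellation destroys all significant digits, hence `mpmath`):

```python
import mpmath as mp

mp.mp.dps = 50                      # 50 significant digits
x = mp.mpf('1e-5')
print((mp.sin(x)**2 * mp.tan(x) - x**3) / x**7)   # ~ 0.0666... = 1/15
```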


derivatives - partial differentiation with log function.

Can someone please help as I am stuck,



I need to show that




$$\phi=\frac{k}{2\pi}\log(x^{2}+y^{2})^{1/2}$$



satisfies Laplaces equation, however I cannot seem to differentiate this function. Note $k$ is a constant.



How do I go about partially differentiating



$$\log(\sqrt{x^{2}+y^{2}})$$



I was thinking, using chain rule, just call




$$\sqrt{x^{2}+y^{2}}=r$$



so $$\frac{1}{r}\log r+\log r$$



Any help is appreciated.
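Not a pen-and-paper derivation, but a symbolic check (my own, assuming `sympy`) confirms that $\phi$ satisfies Laplace's equation away from the origin:

```python
# Laplacian of phi = k/(2*pi) * log(sqrt(x^2 + y^2)) should simplify to 0.
import sympy as sp

x, y, k = sp.symbols('x y k')
phi = k / (2 * sp.pi) * sp.log(sp.sqrt(x**2 + y**2))
print(sp.simplify(sp.diff(phi, x, 2) + sp.diff(phi, y, 2)))   # 0
```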

Tuesday 26 January 2016

calculus - Why Cauchy's definition of infinitesimal is not widely used?

Cauchy defined infinitesimal as a variable or a function tending to zero, or as a null sequence.



Yet I found that the definition is not so popular, and nearly discarded in math, according to the following statements.



(1). Infinitesimal entry in Wikipedia:





Some older textbooks use the term "infinitesimal" to refer to a
variable or a function tending to zero




Why are the textbooks that use this definition said to be old?



(2). Robert Goldblatt, Lectures on the Hyperreals: An Introduction to Nonstandard Analysis, p. 15
(His = Cauchy's)
[image of the quoted passage]

Why does it say 'Even'?



(3). Abraham Robinson, Non-standard Analysis, p. 276
[image of the quoted passage]
Why was Cauchy's definition of infinitesimal, along with his 'basic approach', superseded?



Besides, I found that most real analysis or calculus textbooks, such as Principles of Mathematical Analysis (Rudin) and Introduction to Calculus and Analysis (Richard Courant, Fritz John), don't introduce Cauchy's definition of infinitesimal. Why?
Why was Cauchy's definition of infinitesimal unpopular, not widely used, and nearly discarded?



P.S. I referred to some papers but still cannot find the answer.

real analysis - Dominated a.e. convergence implies almost uniform convergence





Let $(f_n)$ be a sequence of measurable functions that converges almost everywhere to a measurable function $f$. Assume that there is an integrable function $g$ such that $|f_n|\leq g$ for all $n$ almost everywhere. Show that $(f_n)$ converges almost uniformly to $f$.




Now I don't know of any sufficient condition for almost uniform convergence other than Egorov's theorem. However, to apply it I need my space to be of finite measure, and somehow the existence of a dominating integrable function $g$ has to play a role.
But I wouldn't know where to start, so I'd like some hint on how to start working on the problem. Thank you in advance!


Answer



Hint: Work through the proof of Egoroff's theorem. In order to apply continuity from above, one needs that one of the sets has finite measure. In the usual setting, this is given by the fact that the space has finite measure. Here, we instead use the fact that the sequence is dominated.



After you finish proving this, you can redo the proof a couple more times to prove the following results.





1) If $\sum_{n=1}^\infty \|f_n-f\|_1 < \infty$, then $f_n\to f$ almost uniformly.



2) If $\sum_{n=1}^\infty \mu[|f_n-f|>1/n] < \infty$, then $f_n\to f$ almost uniformly.




They show that, in particular, if $f_n\to f$ in $L^1$ or in measure, then there is a subsequence such that $f_{n_k}\to f$ almost uniformly.


Monday 25 January 2016

linear algebra - If Brauer characters are $\bar{\mathbb{Q}}$-linearly independent, why are they $\mathbb{C}$-linearly independent?




If Brauer characters are $\bar{\mathbb{Q}}$-linearly independent, why are they $\mathbb{C}$-linearly independent?



I think this is a linear algebra fact showing up when proving the irreducible Brauer characters on a finite group are linearly independent over $\mathbb{C}$. The proof I've seen observes that the characters take values in the ring of algebraic integers, and then proves linear independence over $\bar{\mathbb{Q}}$.



Why is it sufficient to only check linear independence over $\bar{\mathbb{Q}}$? It seems like something could go wrong when extending the field all the way up to $\mathbb{C}$.



The proof I'm reading is Theorem 15.5 in Isaacs' Character Theory of Finite Groups.



[image of the theorem statement]


Answer




If $E/F$ is a field extension, we have $F^n\subset E^n$, and if a subset of $F^n$ is $F$-linearly independent, then it is also $E$-linearly independent. A nice, super easy way to see it: extend the subset to a basis for $F^n$. Form the matrix whose columns are elements of this basis. Its determinant is nonzero. But this shows that the columns form a basis for $E^n$ since the determinant has the same formula regardless of the field you work over.



The space of class functions of a finite group can be identified with $F^n$ in an obvious way ($n$= number of conjugacy classes).


elementary number theory - Observation about multiples of $999$



I think I have observed an interesting pattern in the multiples of $999$: the sum of the digits of each of the first $30$ multiples of $999$ is $27$. I have read this

but it doesn't prove it. So, is it true that the sum of the digits of every multiple of $999$ is a multiple of $27$? Please prove or disprove. What is the minimum multiple of $999$ that has a digit sum other than $27$?


Answer



Here are some ideas that might help answer the question:




What is the minimum multiple of 999 that has a sum other than 27?




For some integer $k$, $999 \cdot k = 1000k - k$. The digit sum of $999k$ should be greater than $27$ for at least one four-digit number $k$. If we take $k = 9999$, the digit sum of $1000k$ is already $36$, which is much greater than $27$ already, and in fact $9999 \cdot 999$ has a digit sum greater than $27$.




Glaringly obviously, with $k=1001$ we have $999999$, which has a digit sum of $54$. This proves that if there is any smaller $k$, it must be three digits or shorter.



But according to Szeto's answer, the sum of digits for any three-digit $k$ is $27$. For one-digit $k$, we can construct a similar argument: with $k = a$, $999k = \overline{a000} - a$. Then we have the following subtraction:



$$\begin{array}{r}
&^1a \quad \quad \quad ^10\quad \ \ \ \ ^10 \quad \ \ \ \ ^10\\
-\!\!\!\!\!\!&a\\
\hline
&a-1 \quad \quad \quad 9 \quad \quad \ \ 9 \ \ 10-a
\end{array}$$




and the sum of digits is $(a-1) + 9 + 9 + (10-a) = 27$.



For two-digit $k$, we can construct a similar argument:
$$\begin{array}{r}
&a \quad \quad \quad ^1b \quad \quad \quad ^10\quad \ \ \ \ ^10 \quad \ \ \ \ ^10\\
-\!\!\!\!\!\!&a \quad \quad b\\
\hline
&a \quad \quad \quad b-1 \quad \quad \ \ 9 \quad \quad 9-a \quad 10-b
\end{array}$$




and the sum of digits is $a + (b-1) + 9 + (9-a) + (10-b)= 27$.



Therefore $k = 1001$ should be the smallest $k$.



What still needs to be done is to prove this rigorously for all $k$, instead of just by guessing and checking.
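A quick exhaustive scan (my own addition) confirms that $k=1001$ is the first multiplier whose product with $999$ has digit sum other than $27$:

```python
def digit_sum(n):
    return sum(map(int, str(n)))

k = next(k for k in range(1, 2000) if digit_sum(999 * k) != 27)
print(k, 999 * k, digit_sum(999 * k))   # 1001 999999 54
```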


real analysis - How to define a bijection between $(0,1)$ and $(0,1]$?





How to define a bijection between $(0,1)$ and $(0,1]$?
Or any other open and closed intervals?




If the intervals are both open like $(-1,2)\text{ and }(-5,4)$ I do a cheap trick (don't know if that's how you're supposed to do it):
I make a function $f : (-1, 2)\rightarrow (-5, 4)$ of the form $f(x)=mx+b$ by
\begin{align*}
-5 = f(-1) &= m(-1)+b \\

4 = f(2) &= m(2) + b
\end{align*}
Solving for $m$ and $b$ I find $m=3\text{ and }b=-2$ so then $f(x)=3x-2.$



Then I show that $f$ is a bijection by showing that it is injective and surjective.


Answer



Choose an infinite sequence $(x_n)_{n\geqslant1}$ of distinct elements of $(0,1)$. Let $X=\{x_n\mid n\geqslant1\}$, hence $X\subset(0,1)$. Let $x_0=1$. Define $f(x_n)=x_{n+1}$ for every $n\geqslant0$ and $f(x)=x$ for every $x$ in $(0,1)\setminus X$. Then $f$ is defined on $(0,1]$ and the map $f:(0,1]\to(0,1)$ is bijective.



To sum up, one extracts a copy of $\mathbb N$ from $(0,1)$ and one uses the fact that the map $n\mapsto n+1$ is a bijection between $\mathbb N\cup\{0\}$ and $\mathbb N$.
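A concrete instance of the construction (mine), taking $x_n=\frac{1}{n+1}$, so $f$ sends $1\mapsto\frac12\mapsto\frac13\mapsto\cdots$ and fixes every other point of $(0,1)$; exact rationals are used so membership in the chain is tested reliably:

```python
from fractions import Fraction

def f(x):
    """Bijection (0,1] -> (0,1) along the chain 1, 1/2, 1/3, ..."""
    if x == 1:
        return Fraction(1, 2)
    if isinstance(x, Fraction) and x.numerator == 1:
        return Fraction(1, x.denominator + 1)
    return x                                # every other point is fixed

print(f(1), f(Fraction(1, 2)), f(Fraction(3, 4)))   # 1/2 1/3 3/4
```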


trigonometry - have trouble with this limit question



[figure: unit circle with triangle $OAD$, sector $OAC$, and triangle $OBC$]




a) By considering the areas of the triangle OAD, the sector OAC and the triangle OBC,
show that
$(\cos \theta)(\sin \theta) < \theta < \frac{\sin\theta}{\cos\theta}$
I found:
Area of $OAD=\frac{1}{2}OD\cdot AD \cdot \sin \theta$
Area of $OAC=\frac{1}{2}OC^2 \theta$
Area of $OBC=\frac{1}{2}OC\cdot BC \cdot\sin\theta$
Now I'm stuck on how to apply this to prove the inequality. How can I prove it?




(b) Use (a) and the Squeeze Theorem to show that
$\displaystyle\lim_{\theta\to 0^+}\frac{\sin\theta}{\theta}= 1$


Answer



Hint: WORK IN RADIANS!
$a)$ $$\text{Area of }\Delta OAD=\dfrac{1}{2}\cdot OD\cdot AD\\
\text{Area of sector }OAC=\dfrac{\theta}{2}(OA)^2\\
\text{Area of }\Delta OBC=\dfrac{1}{2}\cdot OC\cdot BC$$
See that
$$\text{Area of }\Delta OAD<\text{Area of sector }OAC<\text{Area of }\Delta OBC\\

\implies \dfrac{1}{2}\cdot \cos\theta\cdot \sin\theta<\dfrac{\theta}{2}(OA)^2<\dfrac{1}{2}\cdot 1\cdot BC$$
Now,
$$DC=1-\cos\theta\\
BC=\tan\theta$$



$b)$ Then, after doing $a)$, use the fact that
$$\dfrac{1}{2}\sin\theta\cos\theta<\theta/2\\
\theta/2<\dfrac{1}{2}\tan\theta$$
Then use the squeeze theorem. The limit follows.


Sunday 24 January 2016

linear algebra - Calculate the rank of the following matrices



Question: Calculate the rank of the following matrices:




$A = \left( \begin{array}{cc} 1 & n \\ n & 1 \end{array} \right), n \in \mathbb{Z}$ and $B = \left( \begin{array}{ccc} 1 & x & x^{2} \\ 1 & y & y^{2} \\ 1 & z & z^{2} \end{array} \right)$, $x,y,z \in \mathbb{R}$.



So the way I understand rank($A$), is the number of pivots in an echelon form of $A$. To put $A$ into echelon form I would subtract $n$ times the first row from the second row: $A \sim \left( \begin{array}{cc} 1 & n \\ n & 1 \end{array} \right) \sim \left( \begin{array}{cc} 1 & n \\ 0 & 1 - n^{2} \end{array} \right) \Rightarrow $rank$(A) = 2$.



With $B$ I would have done pretty much the same thing, subtracting row 1 from both row 2 and row 3: $B \sim \left( \begin{array}{ccc} 1 & x & x^{2} \\ 1 & y & y^{2} \\ 1 & z & z^{2} \end{array} \right) \sim \left( \begin{array}{ccc} 1 & x & x^{2} \\ 0 & y - x & y^{2} - x^{2} \\ 0 & z - x & z^{2} - x^{2} \end{array} \right)$ (at this point I could multiply row 2 by $-(\frac{z-x}{y-x})$ and add it to row 3 which ends up being a long polynomial....) However, with both parts, I am pretty confident that it is not so simple and that I am missing the point of this exercise. Could somebody please help point me in the right direction?


Answer



You seem to be assuming that because "$1-n^2$" doesn't look like $0$, then it cannot be zero. That is a common, but often fatal, mistake.



Remember that $n$ stands for some integer. Once you get to
$$A = \left(\begin{array}{cc}

1 & n\\
0 & 1-n^2
\end{array}\right),$$
you cannot just jump to saying there are two pivots: your next step would be to divide the second row by $1-n^2$ to make the second pivot, but whenever you divide by something, that little voice in your head should be whispering in your ear: "Wait! Are you sure you are not dividing by zero?" (remember, if you divide by zero, the universe explodes!). And the thing is, you aren't sure you are not dividing by zero. It depends on what $n$ is! So, your answer should be that it will be rank $2$ if $1-n^2\neq 0$, and rank $1$ if $1-n^2 = 0$. But you don't want the person who is grading/reading to have to figure out when that will happen. You want them to be able to glance at the original matrix, and then be able to immediately say (correctly) "Rank is 1" or "Rank is 2". So you should express the conditions in terms of $n$ alone, not in terms of some computation involving $n$. So your final answer should be something like "$\mathrm{rank}(A)=2$ if $n=\text{something}$, and $\mathrm{rank}(A)=1$ if $n=\text{something else}$."



The same thing happens with the second matrix: in order to be able to multiply by $-(\frac{z-x}{y-x})$, that little voice in your head will whisper "Wait! are you sure you are not dividing by zero?", which leads you to consider what happens when $y-x=0$. But more: even if you are sure that $y-x\neq 0$, that meddlesome little voice should be whispering "Wait! Are you sure you are not multiplying the row by zero?" (because, remember, multiplying a row by zero is not an elementary row operation). (And be careful: if you don't pay attention to that voice, it's going to start yelling instead of whispering...) So that means that you also need to worry about what happens when $z-x=0$. The answer on the rank of $B$, then, will depend on how $x$, $y$, and $z$ relate, and so your solution should reflect that.
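A quick symbolic check of the first matrix (my own, assuming `sympy`): the determinant $1-n^2$ vanishes exactly at $n=\pm1$, which is where the rank drops:

```python
import sympy as sp

n = sp.symbols('n')
A = sp.Matrix([[1, n], [n, 1]])
print(sp.factor(A.det()))                          # -(n - 1)*(n + 1)
print(A.subs(n, 1).rank(), A.subs(n, 5).rank())    # 1 2
```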


complex analysis - A definite integral of the exponential of cos

During some calculations, I came across the following definite integral,
$$\int_0^{2\pi} \frac{\sin^2 \theta}{1+b\cos\theta} \exp(a\cos\theta) d\theta$$

with $a$ and $b$ constants. I tried to look it up in Gradshteyn and Ryzhik, for example in Section 3.93: Trigonometric and exponential functions of trigonometric functions, but found nothing helpful. I also tried the Poisson integral via complex analysis; apparently the exponential function is a little particular: if it were a $\log$ function instead of $\exp$, it could be done, but with the exponential I have not yet found the solution.



Thanks in advance if anyone has any idea :)

calculus - Evaluating $\int\limits_0^\infty{\frac{1}{1+x^2+x^\alpha}dx}$

I'm trying to evaluate$$f(\alpha)=\int\limits_0^\infty{\frac{1}{1+x^2+x^\alpha}dx}$$
I proved:
$f(\alpha)$ converges when $\alpha\in\mathbb{R}$
$f(2-\alpha)=f(\alpha)$
$f(0)=f(2)=\frac{\pi}{2\sqrt{2}}$
$f(1)=\frac{2\pi}{3\sqrt{3}}$
$f(-\infty)=f(\infty)=\frac{\pi}{4}$
Similar question:$$\int\limits_0^\infty{\frac{1}{1+x^\alpha}dx}=\frac{\pi}{\alpha}\csc\frac{\pi}{\alpha}$$
I tried all of the techniques that can be used in evaluating this integral, but I still cannot get the answer.
When I was using complex analysis, I found that the poles of $\frac{1}{1+x^2+x^\alpha}$ are hard to find.

calculus - A limit problem $\lim\limits_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$



This is a problem from "A Course of Pure Mathematics" by G H Hardy. Find the limit $$\lim_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$$ I had solved it long back (solution presented in my blog here) but I had to use the L'Hospital's Rule (another alternative is Taylor's series). This problem is given in an introductory chapter on limits and the concept of Taylor series or L'Hospital's rule is provided in a later chapter in the same book. So I am damn sure that there is a mechanism to evaluate this limit by simpler methods involving basic algebraic and trigonometric manipulations and use of limit $$\lim_{x \to 0}\frac{\sin x}{x} = 1$$ but I have not been able to find such a solution till now. If someone has any ideas in this direction please help me out.




PS: The answer is $1/18$ and can be easily verified by a calculator by putting $x = 0.01$


Answer



Preliminary Results:



We will use
$$
\begin{align}
\frac{\color{#C00000}{\sin(2x)-2\sin(x)}}{\color{#00A000}{\tan(2x)-2\tan(x)}}
&=\underbrace{\color{#C00000}{2\sin(x)(\cos(x)-1)}\vphantom{\frac{\tan^2(x)}{\tan^2(x)}}}\underbrace{\frac{\color{#00A000}{1-\tan^2(x)}}{\color{#00A000}{2\tan^3(x)}}}\\
&=\hphantom{\sin}\frac{-2\sin^3(x)}{\cos(x)+1}\hphantom{\sin}\frac{\cos(x)\cos(2x)}{2\sin^3(x)}\\

&=-\frac{\cos(x)\cos(2x)}{\cos(x)+1}\tag{1}
\end{align}
$$
Therefore,
$$
\lim_{x\to0}\frac{\sin(x)-2\sin(x/2)}{\tan(x)-2\tan(x/2)}=-\frac12\tag{2}
$$
Thus, given an $\epsilon\gt0$, we can find a $\delta\gt0$ so that if $|x|\le\delta$
$$
\left|\,\frac{\sin(x)-2\sin(x/2)}{\tan(x)-2\tan(x/2)}+\frac12\,\right|\le\epsilon\tag{3}

$$
Because $\,\displaystyle\lim_{x\to0}\frac{\sin(x)}{x}=\lim_{x\to0}\frac{\tan(x)}{x}=1$, we have
$$
\sin(x)-x=\sum_{k=0}^\infty2^k\sin(x/2^k)-2^{k+1}\sin(x/2^{k+1})\tag{4}
$$
and
$$
\tan(x)-x=\sum_{k=0}^\infty2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1})\tag{5}
$$
By $(3)$ each term of $(4)$ is between $-\frac12-\epsilon$ and $-\frac12+\epsilon$ of the corresponding term of $(5)$. Therefore,

$$
\left|\,\frac{\sin(x)-x}{\tan(x)-x}+\frac12\,\right|\le\epsilon\tag{6}
$$
Thus,
$$
\lim_{x\to0}\,\frac{\sin(x)-x}{\tan(x)-x}=-\frac12\tag{7}
$$
Furthermore,
$$
\begin{align}

\frac{\tan(x)-\sin(x)}{x^3}
&=\tan(x)(1-\cos(x))\frac1{x^3}\\
&=\frac{\sin(x)}{\cos(x)}\frac{\sin^2(x)}{1+\cos(x)}\frac1{x^3}\\
&=\frac1{\cos(x)(1+\cos(x))}\left(\frac{\sin(x)}{x}\right)^3\tag{8}
\end{align}
$$
Therefore,
$$
\lim_{x\to0}\frac{\tan(x)-\sin(x)}{x^3}=\frac12\tag{9}
$$

Combining $(7)$ and $(9)$ yield
$$
\lim_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16\tag{10}
$$
Additionally,
$$
\frac{\sin(A)-\sin(B)}{\sin(A-B)}
=\frac{\cos\left(\frac{A+B}{2}\right)}{\cos\left(\frac{A-B}{2}\right)}
=1-\frac{2\sin\left(\frac{A}{2}\right)\sin\left(\frac{B}{2}\right)}{\cos\left(\frac{A-B}{2}\right)}\tag{11}
$$







Finishing Up:
$$
\begin{align}
&x\sin(\sin(x))-\sin^2(x)\\
&=[\color{#C00000}{(x-\sin(x))+\sin(x)}][\color{#00A000}{(\sin(\sin(x))-\sin(x))+\sin(x)}]-\sin^2(x)\\
&=\color{#C00000}{(x-\sin(x))}\color{#00A000}{(\sin(\sin(x))-\sin(x))}\\
&+\color{#C00000}{(x-\sin(x))}\color{#00A000}{\sin(x)}\\

&+\color{#C00000}{\sin(x)}\color{#00A000}{(\sin(\sin(x))-\sin(x))}\\
&=(x-\sin(x))(\sin(\sin(x))-\sin(x))+\sin(x)(x-2\sin(x)+\sin(\sin(x)))\tag{12}
\end{align}
$$
Using $(10)$, we get that
$$
\begin{align}
&\lim_{x\to0}\frac{(x-\sin(x))(\sin(\sin(x))-\sin(x))}{x^6}\\
&=\lim_{x\to0}\frac{x-\sin(x)}{x^3}\lim_{x\to0}\frac{\sin(\sin(x))-\sin(x)}{\sin^3(x)}\lim_{x\to0}\left(\frac{\sin(x)}{x}\right)^3\\
&=\frac16\cdot\frac{-1}6\cdot1\\

&=-\frac1{36}\tag{13}
\end{align}
$$
and with $(10)$ and $(11)$, we have
$$
\begin{align}
&\lim_{x\to0}\frac{\sin(x)(x-2\sin(x)+\sin(\sin(x)))}{x^6}\\
&=\lim_{x\to0}\frac{\sin(x)}{x}\lim_{x\to0}\frac{x-2\sin(x)+\sin(\sin(x))}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-(\sin(x)-\sin(\sin(x))}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-\sin(x-\sin(x))\left(1-\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{\cos\left(\frac{x-\sin(x)}{2}\right)}\right)}{x^5}\\

&=\lim_{x\to0}\frac{(x-\sin(x))-\sin(x-\sin(x))+\sin(x-\sin(x))\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{\cos\left(\frac{x-\sin(x)}{2}\right)}}{x^5}\\
&=\lim_{x\to0}\frac{\sin(x-\sin(x))}{x^3}\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{x^2}\\[6pt]
&=\frac16\cdot\frac12\\[6pt]
&=\frac1{12}\tag{14}
\end{align}
$$
Adding $(13)$ and $(14)$ gives
$$
\color{#C00000}{\lim_{x\to0}\frac{x\sin(\sin(x))-\sin^2(x)}{x^6}=\frac1{18}}\tag{15}
$$







Added Explanation for the Derivation of $(6)$



The explanation below works for $x\gt0$ and $x\lt0$. Just reverse the red inequalities.



Assume that $x\color{#C00000}{\gt}0$ and $|x|\lt\pi/2$. Then $\tan(x)-2\tan(x/2)\color{#C00000}{\gt}0$.

$(3)$ is equivalent to
$$

\begin{align}
&(-1/2-\epsilon)(\tan(x)-2\tan(x/2))\\[4pt]
\color{#C00000}{\le}&\sin(x)-2\sin(x/2)\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(\tan(x)-2\tan(x/2))\tag{16}
\end{align}
$$
for all $|x|\lt\delta$. Thus, for $k\ge0$,
$$
\begin{align}
&(-1/2-\epsilon)(2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1}))\\[4pt]

\color{#C00000}{\le}&2^k\sin(x/2^k)-2^{k+1}\sin(x/2^{k+1})\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1}))\tag{17}
\end{align}
$$
Summing $(17)$ from $k=0$ to $\infty$ yields
$$
\begin{align}
&(-1/2-\epsilon)\left(\tan(x)-\lim_{k\to\infty}2^k\tan(x/2^k)\right)\\[4pt]
\color{#C00000}{\le}&\sin(x)-\lim_{k\to\infty}2^k\sin(x/2^k)\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)\left(\tan(x)-\lim_{k\to\infty}2^k\tan(x/2^k)\right)\tag{18}

\end{align}
$$
Since $\lim\limits_{k\to\infty}2^k\tan(x/2^k)=\lim\limits_{k\to\infty}2^k\sin(x/2^k)=x$, $(18)$ says
$$
\begin{align}
&(-1/2-\epsilon)(\tan(x)-x)\\[4pt]
\color{#C00000}{\le}&\sin(x)-x\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(\tan(x)-x))\tag{19}
\end{align}
$$

which, since $\epsilon$ is arbitrary, is equivalent to $(6)$.
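A high-precision numerical check of $(15)$ (my own addition; double precision fails here because of catastrophic cancellation in the $O(x^{6})$ numerator, hence `mpmath`):

```python
import mpmath as mp

mp.mp.dps = 50
x = mp.mpf('1e-6')
print((x * mp.sin(mp.sin(x)) - mp.sin(x)**2) / x**6)   # ~ 0.0555... = 1/18
```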


elementary number theory - Question about Paul Erdős' proof on the infinitude of primes



I was reading Julian Havil’s book Gamma, where he talks about a short proof by Paul Erdős on the infinitude of primes.



As I understand it, here are the steps:



(1) Let $N$ be any positive integer and $p_1, p_2, p_3, \dots, p_n$ be the complete set of primes less than or equal to $N$




(2) Each $1 \le x \le N$ can be written as $p_1^{e_1}p_2^{e_2}p_3^{e_3}\ldots p_n^{e_n} \times m^2$ where $e_i \in \left\{0,1\right\}$



(3) So, there are $2^n$ ways of choosing square-free numbers and $m^2 \le N$



(4) Since $m \le \sqrt{N}$, each $2 \le x \le N$ can be chosen in at most $2^n \times \sqrt{N}$ ways.



(5) Thus, $N \le 2^n \times \sqrt{N}$ and $2^n \ge \sqrt{N}$ so that $n \ge \dfrac{1}{2}\log_2 N$



I am confused by step #4. Why does it follow that $x$ can be chosen in at most $2^n \times \sqrt{N}$ ways. I would think that it would be chosen in $2^n \times m^2$ ways. Why is he allowed to replace $m^2$ with $m$ in this case?







Edit: I figured out my misunderstanding.



When I reread the proof in the book this morning, I noticed the following sentence in the paragraph before the proof:




"In 1938 the consummate practitioner Paul Erdos (1913-1996) gave the
one that follows, which uses a counting technique and a neat device

used by number theorists: that any integer can be written as the
product of a square and a square-free integer"




This device is easily proven. Let $u = p_1^{e_1}p_2^{e_2}\ldots p_n^{e_n}$ so that for any $x \le N$, $x = um^2$:




  • Let $p^v | x$ where $p^{v+1} \nmid x$ and $v \ge 1$

  • If $v \equiv 0 \pmod 2$, then $p \nmid u$ and $p^v | m^2$

  • If $v \equiv 1 \pmod 2$, then $p | u$ and $p^{v-1} | m^2$




So that it follows that $m$ is an integer. Now, the full proof works for me.


Answer



The number of values $m^2$ can have is given by $m$. If you used $m^2$, it would look as if it could have any value from $1$ to $m^2$, which isn't true.



You "replace" $m^2$ by $m$ in the same sense you replaced $p_1^{e_1}\ldots p_n^{e_n}$ by $2^n$ (and not by $p_1\ldots p_n$).


calculus - Random Poisson Sum of Random Variables with known distribution




I am trying to get a closed form expression for the expected value of the following summation of RVs: $\sum_{i=1}^{Y} X_{i}$, where $Y$ is Poisson distributed with parameter $\lambda$ and $ X_{i} $ follows some known distribution $f_X(x)$. Are there any means to drop the RV $Y$ using $\lambda$?



Thank you for your time and patience.


Answer



Yes if your $X_i$'s are iid, then you can use iterated conditioning to prove the Wald's identity.



\begin{align}
\mathbb{E}_X[\sum_{i=1}^Y X_i] &= \mathbb{E}_Y[\mathbb{E}_X[\sum_{i=1}^y X_i | Y=y]]\\
&=\mathbb{E}_Y[ y \mathbb{E}_X[X_1]]\\
&= \sum_{y=0}^{\infty} y \mathbb{E}_X[X_1] e^{-\lambda} \frac{\lambda^y}{y !}\\

&= \lambda \mathbb{E}[X_1] e^{-\lambda} \sum_{y=1}^{\infty} \frac{\lambda^{y-1}}{(y-1) !}\\
&= \lambda \mathbb{E}X_1
\end{align}



since the sum is a series expansion for $e^{\lambda}$. We're using the iid property to go from the first to the second line. Then it's just the expectation of the Poisson distribution.
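A Monte-Carlo sanity check of this identity (my own; I take $X_i\sim\operatorname{Exp}(1)$ so $\mathbb{E}X_1=1$, and $\lambda=3$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, trials = 3.0, 100_000
ys = rng.poisson(lam, trials)
totals = np.array([rng.exponential(1.0, y).sum() for y in ys])
print(totals.mean(), lam * 1.0)            # both ~ 3.0
```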


Saturday 23 January 2016

algebra precalculus - Finding the points of intersection of a circle and a line

In a test (of math, in Arabic) we were asked to find the points of intersection of a circle and a line. Their equations are given.



In the test I solved the system of equations made of their equations, and in the process I explained my line of thought using the words therefore, then, so, etc. (in Arabic). In describing that process I acknowledge that the structure of my proof is done by equivalence, which can easily be seen from the context: we're solving an equation, so we proceed by equivalence.



But my professor said that it was all wrong and that I should have used the statement "is equivalent" (in Arabic) in each part of my proof, and I got all of my exercises marked wrong. There were some exercises in the test where in the process we had to solve an equation, so he also said that I was wrong for the same reason, and I only got 4 out of 20.



Is the professor wrong or I am?







My computer crashed so I couldn't edit, so I made a new question. I've added the equations and systems of equations with my solutions.






Exercise 1



Circle has equation $x^2+y^2+(m+2)x-2my+m^2-36=0$, find center, radius




I found it to be: center $(-\frac{m}{2}-1,m)$, radius $r= \sqrt{\frac{m^2}{4}+m+37}$.






Exercise 2



Circle has equation $x^2+y^2+2 x - 2y - 4 = 0$ line has equation $x+y-1=0$ find points of intersection



I found $\left(\frac{-1+\sqrt{11}}{2},\frac{3-\sqrt{11}}{2}\right)$ and $\left(\frac{-1-\sqrt{11}}{2},\frac{3+\sqrt{11}}{2}\right)$.
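A quick symbolic check of Exercise 2 (my own, assuming `sympy`), which reproduces those two points:

```python
import sympy as sp

x, y = sp.symbols('x y')
sols = sp.solve([x**2 + y**2 + 2*x - 2*y - 4, x + y - 1], [x, y])
print(sols)   # the two points ((-1 ± sqrt(11))/2, (3 ∓ sqrt(11))/2)
```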




For exercises 3 and 4 it is the same; I checked my calculations again and again with the calculator and there's no error, yet all of that was marked with 0 for not saying "is equivalent" (I didn't use the symbol $\Rightarrow$). All I got was 4, from exercise 5, which had the calculation of a dot product and of the area of a triangle where we are given the coordinates of two vectors in the plane.

real analysis - Showing that $\int_{0}^{\infty} u^{-\alpha} \sin(u) \, du >0$ for $0<\alpha<1$

Does anyone know how to show $\int_{0}^{\infty} u^{-\alpha} \sin(u) \, du >0$ for $0<\alpha<1$ (without explicitly having to calculate the exact value)?

calculus - Computing $\lim\limits_{x\to\infty}\ln(x)\cdot \ln(1-e^{-x})$

Limit as $x$ approaches infinity of $\ln(x)\cdot \ln(1-e^{-x})$:
$$
\lim_{x\to\infty}\ln(x)\cdot \ln(1-e^{-x})
$$



The only thing I can think to do is rewrite the product with $(\ln x)^{-1}$ on the bottom and use L'Hôpital's rule, but I've done two iterations now and it keeps getting back to the $0/0$ or $\infty\cdot 0$ indeterminate case. Any help on how to proceed will be much appreciated! Thanks

calculus - How difficult exactly is $\int\tan(x^2)\, dx$?



How difficult exactly is $\int\tan(x^2)\ dx$ ?



Is it possible to express this integral in terms of elementary functions?



If not, is there anything one could say about it, that would be in some way helpful?




I have not done anything to answer this question myself. (Well, I googled it,
Wolfram alpha tells me no result found in terms of standard mathematical functions, so it seems safe to assume that no such result exist.)



This integral looks somewhat similar to $\int e^{x^2} dx$ (which cannot be expressed in terms of elementary functions) but I just need some reassurance (possibly with a link or an explanation) specifically for $\int\tan(x^2)\ dx$ .



Just in case, here is the Taylor series expansion
$\tan(x) = x+x^3/3+2x^5/15+17x^7/315+62x^9/2835+O(x^{11})$ and
$\tan(x) = \sum_{n=0}^\infty \dfrac{(-1)^{(n-1)}2^{2n}(2^{2n}-1) B(2n)}{(2n)!} x^{2n-1}$, where $B(n)$ are the Bernoulli numbers.
Someone asked me about this integral and I realized I couldn't say much about it.


Answer



If the question is : "Is it possible to express the integral $\int \tan(x^2)dx$ in terms of elementary functions ?" the answer is : Yes, on the form of infinite series of elementary functions.




If the question is : "Is it possible to express the integral $\int \tan(x^2)dx$ in terms of the combination of a finite number of elementary functions ?" the answer is : No. (as it was already pointed out in a preceeding answer).



If the question is : "How difficult exactly is $\int \tan(x^2)dx$ ?" the answer is : No more difficut than the integrals : $$\int \sin(x^2)dx=\sqrt{\frac{\pi}{2}}\ S
\left( \sqrt{\frac{2}{\pi}}\ x\right)+constant$$
where $S(X)$ is defined as a special function, namely the Fresnel S integral : http://mathworld.wolfram.com/SineIntegral.html



and no more difficult than the integral : $$\int \cos(x^2)dx=\sqrt{\frac{\pi}{2}}\ C
\left( \sqrt{\frac{2}{\pi}}\ x\right)+constant$$
where $C(X)$ is defined as a special function, namely the Fresnel C integral: http://mathworld.wolfram.com/CosineIntegral.html




The only difference is that in : $$\int \tan(x^2)dx=\sqrt{\frac{\pi}{2}}\ T
\left( \sqrt{\frac{2}{\pi}}\ x\right)+constant$$
the special function $T(X)$ is not referenced among the standard special functions, doesn't appear in the handbooks of special functions, and is not implemented in math software.



One could say that just giving a name to an integral is no more than a clever trick. Nevertheless, one should think about it. A paper for the general public on the subject: https://fr.scribd.com/doc/14623310/Safari-on-the-country-of-the-Special-Functions-Safari-au-pays-des-fonctions-speciales
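To see the infinite-series answer concretely (my own sketch, using the Taylor coefficients of $\tan$ quoted in the question), term-by-term integration agrees with direct quadrature on a small interval:

```python
import mpmath as mp

X = mp.mpf('0.5')
direct = mp.quad(lambda t: mp.tan(t**2), [0, X])
# tan u = u + u^3/3 + 2u^5/15 + 17u^7/315 + ...  with u = t^2,
# so each term c*u^k integrates to c*X^(2k+1)/(2k+1).
coeffs = [(mp.mpf(1), 1), (mp.mpf(1)/3, 3), (mp.mpf(2)/15, 5), (mp.mpf(17)/315, 7)]
series = sum(c * X**(2*k + 1) / (2*k + 1) for c, k in coeffs)
print(direct, series)   # agree to roughly 8 digits; more terms tighten this
```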


calculus - Differentiability of this picewise function




$$f(x,y) = \left\{\begin{array}{cc}
\frac{xy}{x^2+y^2} & (x,y)\neq(0,0) \\
0 & (x,y)=(0,0)
\end{array}\right.$$



In order to verify if this function is differentiable, I tried to prove it by the theorem that says that if $\frac{∂f}{∂x}$ and $\frac{∂f}{∂y}$ exist and are continuous at the point $(x_0,y_0)$, then the function is differentiable at this point. So I did:



$$\frac{\partial f}{\partial x}(0,0) = \lim_{h\to 0}\frac{f(0+h,0)-f(0,0)}{h} = 0$$
$$\frac{\partial f}{\partial y}(0,0) = \lim_{h\to 0}\frac{f(0,0+h)-f(0,0)}{h} = 0$$




so we have that the partial derivatives at point $(0,0)$ is $0$. Now, if we take the derivative at $(x,y)\neq (0,0)$ and then take the limit of it as $(x,y)\to(0,0)$, we can see if the derivatives are continuous or not. So here it is:



$$\frac{\partial f}{\partial x}(x,y) = \frac{y(y^2-x^2)}{(x^2+y^2)^2}$$



but



$$\lim_{(x,y)\to(0,0)} \frac{y(y^2-x^2)}{(x^2+y^2)^2} $$
does not exist (according to Wolfram Alpha... but can anybody tell me an easy way to prove this limit does not exist, easier than taking the limit in different directions?), therefore the derivative is not continuous at $(0,0)$, so we can't say $f$ is differentiable at $(0,0)$. But for $(x,y)\neq (0,0)$ the partial derivatives are continuous, as they are quotients of continuous functions with nonvanishing denominator, so $f$ is at least differentiable at $(x,y)\neq (0,0)$.




Now, to verify differentiability at $(0,0)$ I think we must use the limit definition of differentiablity:
A function is differentiable at $(0,0)$ iff:
$$\lim_{(h,k)\to (0,0)} \frac{f(0+h,0+k)-f(0,0)-\frac{\partial f}{\partial x}(0,0)h-\frac{\partial f}{\partial y}(0,0)k}{\|(h,k)\|} = 0$$



Let's calculate this limit:



$$\lim_{(h,k)\to (0,0)} \frac{f(0+h,0+k)-f(0,0)-\frac{\partial f}{\partial x}(0,0)h-\frac{\partial f}{\partial y}(0,0)k}{\|(h,k)\|} = \\ \lim_{(h,k)\to (0,0)} \frac{\frac{hk}{h^2+k^2}}{\sqrt{h^2+k^2}} = \\ \lim_{(h,k)\to (0,0)} \frac{hk}{(h^2+k^2)\sqrt{h^2+k^2}}$$



which, I think, is a limit that does not exist; therefore the function isn't differentiable at $(0,0)$.


Answer




As has been pointed out in comments, your function $f$ is not continuous at the origin, since taking the limit from $y=ax$ yields:
$$\lim_{x\to0}f(x,ax)=\lim_{x\to0}\frac{xax}{x^2+a^2x^2}=\lim_{x\to0}\frac{a}{a^2+1}\frac{x^2}{x^2}=\frac{a}{a^2+1},$$
which depends on $a$.



Let me remark on a couple of things. Regarding the limit you are looking for an "easy way" of computing: what is easier than taking limits along directions? I can't think of anything easier than that. Can you? If I think of limits, I think of polar coordinates or asymptotics, and of taking limits along directions. What else?



The final limit you can prove not to exist by saying that for it to exist you need $\frac{hk}{h^2+k^2}$ to tend to zero, otherwise the remaining $\frac{1}{\sqrt{h^2+k^2}}$ will make it shoot to infinity, and as I have remarked above this limit does not exist. Otherwise you take limits along directions, getting that along $k=ah$ the limit is that of $\frac{ah^2}{(a^2+1)h^2\sqrt{a^2+1}\sqrt{h^2}}=\frac{a}{(a^2+1)\sqrt{a^2+1}}\frac{h^2}{h^2|h|}$, which depends on $a$, and almost always goes to infinity.



As a bonus, you can check that the function with $x^2y^2$ instead of $xy$ would have been differentiable.
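The directional-limit computation can also be checked symbolically (my own addition, assuming `sympy`); the result depends on the slope $a$, so no limit exists at the origin:

```python
import sympy as sp

x, a = sp.symbols('x a')
f = (x * (a * x)) / (x**2 + (a * x)**2)    # f restricted to the line y = a*x
print(sp.simplify(sp.limit(f, x, 0)))      # a/(a**2 + 1)
```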


probability theory - Finite expectation of a random variable

$X \geq 0$ be a random variable defined on $(\Omega,\mathcal{F},P)$. Show that $\mathbb{E}[X]<\infty \iff \Sigma_{n=1}^\infty P(X>n) < \infty $.




I got the reverse direction but I am struggling with the $"\implies"$ direction. So far, I have the following worked out:



$\mathbb{E}[X]<\infty$



$\implies \int_0^\infty (1-F(x)) dx < \infty$ (where $F$ is the distribution function of the random variable X)



$\implies \int_0^\infty (1-P(X\leq x)) dx < \infty$



$\implies \int_0^\infty P(X>x) dx < \infty$




Consider $\int_0^\infty P(X>x) dx$



$= \Sigma_{n=1}^\infty \int_{n-1}^n P(X>x) dx$



This is the point I am stuck at. Any help will be deeply appreciated!
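A standard next step (my suggestion, not part of the original post) is to use that $x\mapsto P(X>x)$ is non-increasing, so on each interval $[n-1,n]$,
$$\int_{n-1}^{n} P(X>x)\,dx \;\geq\; P(X>n),$$
and summing over $n$ gives $\sum_{n=1}^\infty P(X>n) \leq \int_0^\infty P(X>x)\,dx < \infty$.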

real analysis - $b_n$ bounded, $\sum a_n$ converges absolutely, then $\sum a_nb_n$ also



a) Prove that if $\sum a_n$ converges absolutely and $b_n$ is a bounded sequence, then also $\sum a_nb_n$ converges absolutely.



I wanted to use the comparison test to show it's true, but I think I got something mixed up. Here's what I have so far.



$b_n$ is bounded $\Rightarrow \exists c > 0: |b_k| < c, \forall k \in \mathbb N$




$\sum |a_nb_n| \le \sum |a_n|\,c = c\sum |a_n|$. I'm very tempted to use the comparison test here and say, as $\sum a_n$ converges absolutely, then so does $c\sum a_n$, but for that I would need the inverted relation, right? I would need $\sum |a_n| > c\sum |a_n|$, which is not true. However, isn't it obvious that something absolutely convergent multiplied by a constant is also absolutely convergent? Is there a mathematical way to write this? Thanks a lot in advance.



b) Refute with a counter example: if $\sum a_n$ converges and $b_n$ is a bounded sequence, then also $\sum a_nb_n$ converges.



Is this even possible? I mean, it has to be, but it doesn't make sense to me. If something converges, it means it's bounded, right? And I thought, well, since bounded + bounded = bounded, then bounded*bounded would also get me something bounded again.



Anyway, my idea would be to use for $a_n$ an alternating series that is only conditionally convergent, for example $(-1)^n \frac 1 {n}$. But something tells me I'm trying to prove $\sum a_n b_n$ is not absolutely convergent.



Thanks a lot in advance guys!



Answer



Recall that if $\sum a_n$ is finite then $c\cdot\sum a_n = \sum ca_n$. This is true even if the series is not absolutely convergent.



The reason is simple: $\displaystyle\sum a_n = \lim_{k\to\infty}\sum_{n\le k} a_n$, and the limit commutes with multiplication by a constant, so $c\sum a_n=\lim_{k\to\infty}\sum_{n\le k}c\,a_n=\sum c\,a_n$.

Now your reasoning is true. If $|b_n|\le c$ then $\sum |a_nb_n|\le \sum c\,|a_n| = c\sum |a_n|$, and the latter converges.



For the second question, in the first one the assumption was the series is absolutely convergent. Take a sequence which is not of this form.



For example $a_n = \dfrac{(-1)^n}{n}$, and $b_n=(-1)^n$. Now what is $\sum a_nb_n$?



combinatorics - Closed form for the sum $\sum_{l=0}^k(-1)^l \binom{m}{k-l}\binom{n+k-1}{l}$



I try to find a closed form of the following sum of binomials:



$$\sum_{l=0}^k(-1)^l \binom{m}{k-l}\binom{n+k-1}{l},$$



where $k$, $m$, $n$ are all non-negative integers but do not have any other relation.



Is there any identity that can be useful here?


Answer




If we start with a Chebyshev polynomial of the second kind
$$ x^{n+k}\cdot U_{n+k}\left(\frac{x}{2}\right) = x^{2n}\sum_{2l\leq (n+k)}(-1)^l\binom{n+k-l}{l}(x^2)^{k-l}\tag{1}$$
we get that the initial sum is the coefficient of $x^{2k}$ in the product
$$ (1+x^2)^m\cdot x^{k-n}\cdot U_{n+k}\left(\frac{x}{2}\right)\tag{2}$$
or the coefficient of $x^{n+k}$ in $(1+x^2)^m\cdot U_{n+k}\left(\frac{x}{2}\right)$. I fear this does not simplify much further.
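For anyone who wants to experiment, a small symbolic sketch (the helper name S is my own) tabulates the sum for small parameters:

    from sympy import binomial

    # Tabulate the sum for small m, n, k to look for patterns.
    def S(m, n, k):
        return sum((-1)**l * binomial(m, k - l) * binomial(n + k - 1, l)
                   for l in range(k + 1))

    for m in range(4):
        for n in range(1, 4):
            print(f"m={m}, n={n}:", [S(m, n, k) for k in range(6)])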


limits - $\lim _{x\to -\infty }\left(\frac{e^x-1}{e^{2x}+1}\right)$

How can I calculate the following limit?



$\lim _{x\to -\infty }\left(\frac{\left(e^x-1\right)}{\left(e^{2x}+1\right)}\right)$




If the limit is



$\lim _{x\to +\infty }\left(\frac{\left(e^x-1\right)}{\left(e^{2x}+1\right)}\right)$



then it is quiet easy, as I just need to make something like



$\lim _{x\to +\infty }\left(\frac{e^x\left(1-\frac{1}{e^x}\right)}{e^{2x}\left(1+\frac{1}{e^{2x}}\right)}\right)$



and it evident it is 0.




But with the limit to negative infinity, I cannot do the same, as I go back to an indeterminate form, like

$\frac{0\cdot\infty}{0\cdot\infty}$



So I don't know what I should do. Any suggestions?
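One observation that may help (my note, not part of the original post): as $x\to-\infty$ we have $e^x\to0$ and $e^{2x}\to0$, so the unfactored form is not indeterminate at all, and direct substitution gives
$$\lim _{x\to -\infty }\frac{e^x-1}{e^{2x}+1}=\frac{0-1}{0+1}=-1.$$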

multivariable calculus - Looking for a specific function from $mathbb R^2 rightarrow mathbb R$ ,something with directional derivatives




This function has to be continuous at the origin and have finite directional derivatives there, but the directional derivatives must not be bounded
(meaning that for some vectors $v$ with $|v|=1$ the directional derivatives at $0$ can be as large as we want).



I first thought about $(x^2+y^2)^{1/3}$ but here the directional derivatives are infinite.



Any ideas would be welcomed,
thanks~


Answer




Here's a geometric answer that should lead to a formula after some thought. Imagine the graph of a function $f(x)$ which changes from $0$ to $x_0$ basically linearly as $x$ varies from $0$ to $x_0^2$, and then exponentially decays back down to 0 as $x \to \infty$. Clearly this can be made into a continuous family of functions as $x_0$ varies, with the limit of the zero function as $x_0$ approaches zero. Also the derivatives become arbitrarily large at $0$ as $x_0$ approaches 0. So just define your two-argument function this way, with the family of functions extended radially from the origin, with $\sin \theta$ being the value of $x_0$ for a given point at angle $\theta$ from the origin. The directional derivatives will be unbounded at 0 as $\sin \theta \to 0$ but be zero (the derivative of the zero function) for $\sin \theta = 0$.
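To make this concrete, here is one possible formula in that spirit (my own formalization of the sketch above, so treat the details as an assumption): writing points in polar coordinates $(r,\theta)$ and setting $s=|\sin\theta|$, define
$$f(r,\theta)=\left\{\begin{array}{ll} r/s & 0\le r\le s^2,\ s\neq0 \\ s\,e^{-(r-s^2)} & r>s^2,\ s\neq0 \\ 0 & s=0. \end{array}\right.$$
Then $|f|\le\sqrt{r}$ everywhere (for $r\le s^2$ one has $s\ge\sqrt r$, so $r/s\le\sqrt r$), which gives continuity at the origin, while the directional derivative in the direction with angle $\theta$ is $1/s$: finite for every direction, but unbounded as $\sin\theta\to0$.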


real analysis - Is there a bijection between $[0, \infty)$ and $(0,1)$

I tried $\tan(x)$ and $\log(x)$, but it seems they do not work, so I wonder: is there a bijection or not?
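One standard construction (my sketch, since the question is left open here): $g(x)=\frac{1}{1+x}$ is a bijection from $[0,\infty)$ onto $(0,1]$, and the endpoint $1$ can be removed by shifting a countable sequence: let $h(1/n)=1/(n+1)$ for integers $n\ge1$ and $h(x)=x$ otherwise, a bijection from $(0,1]$ onto $(0,1)$. Then $h\circ g:[0,\infty)\to(0,1)$ is a bijection. (No continuous bijection can work: removing $0$ leaves $[0,\infty)$ connected, while removing its image point would disconnect $(0,1)$.)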

Friday 22 January 2016

algebra precalculus - What does the notation min max mean?



Min clearly means minimum and max maximum, but I am confused about a question that says "With $x, y, z$ being positive numbers, let $xyz=1$; use the AM-GM inequality to show that min max $[x+y,$ $x+z,$ $y+z]=2$." What does this mean? (I am not looking for the answer to this particular question, but just for what "min max" means.)



Answer



The meaning will depend on context. Here it means that for each triple $\langle x,y,z\rangle$ such that $xyz=1$ we find the maximum of $x+y,x+z$, and $y+z$, and then we find the smallest of those maxima: it’s



$$\min\Big\{\max\{x+y,x+z,y+z\}:xyz=1\Big\}\;.$$



In general it will be something similar: you’ll be finding the minimum of some set of maxima.
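For instance (an illustration of the notation, not the full solution): the triple $x=y=z=1$ satisfies $xyz=1$ and gives $\max\{x+y,x+z,y+z\}=2$, so the min max is at most $2$; the AM-GM part of the exercise is to show that no admissible triple achieves a smaller maximum.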


solid geometry - Volume and surface area of a sphere by polyhedral approximation



Exposition:



In two dimensions, there is a (are many) straightforward explanation(s) of the fact that the perimeter (i.e. circumference) and area of a circle relate to the radius by $2\pi r$ and $\pi r^2$ respectively. One argument proceeds by approximating these quantities using regular cyclic polygons (equilateral, equiangular, on the circle of radius $r$), noting that such a polygon with $n$ sides can be decomposed into $n$ isosceles triangles with peak angle $\frac{2\pi}{n}$, base length $~2r\sin\frac{\pi}{n}$, and altitude $~r \cos \frac{\pi}{n}$ . Then, associating the circle with the limiting such polygon, we have,
$$
P = \lim_{n\to\infty} n \cdot \text{base length } = \lim_{n\to\infty}2r \cdot \pi \frac{n}{\pi} \sin \frac{\pi}{n} = 2\pi r ~~,
$$
and similarly, (via trig identity)
$$
A = \lim_{n\to\infty} n\left(\frac{1}{2} \text{ base } \times \text{ altitude }\right) = \lim_{n\to\infty}\frac{r^2\cdot 2\pi}{2} \frac{n}{2\pi} \sin \frac{2\pi}{n} = \pi r^2 ~~.
$$
Question:



Could someone offer intuition, formulas, and/or solutions for performing a similarly flavored construction for the surface area and volume of a sphere?




Images and the spatial reasoning involved are crucial here, as there are only so many platonic solids, so I am not seeing immediately the pattern in which the tetrahedra (analogous to the 2D triangles) will be arranged for arbitrarily large numbers of faces. Thus far my best result has been a mostly-rigorous construction relying on this formula (I can write up this proof on request). What I'd like to get out of this is a better understanding of how the solid angle of a vertex in a polyhedron relates to the edge-edge and dihedral angles involved, and perhaps a "dimension-free" notion for the ideas used in this problem to eliminate the need to translate between solid (2 degrees of freedom) and planar (1 degree) angles.


Answer



Alright, I've come up with a proof in what I think is the right flavor.



Take a sphere with radius $r$, and consider the upper hemisphere. For each $n$, we will construct a solid out of stacks of pyramidal frustums with regular $n$-gon bases. The stack will be formed by placing $n$ of the $n$-gons perpendicular to the vertical axis of symmetry of the sphere, centered on this axis, inscribed in the appropriate circular slice of the sphere, at the heights $\frac{0}{n}r, \frac{1}{n}r, \ldots,\frac{n-1}{n}r $ . Fixing some $n$, we denote by $r_\ell$ the radius of the circle which the regular $n$-gon is inscribed in at height $\frac{\ell}{n}r$ . Geometric considerations yield $r_\ell = \frac{r}{n}\sqrt{n^2-\ell^2}$ .



As noted in the question, the area of this polygonal base will be $\frac{n}{2}r_\ell^2 \sin\frac{2\pi}{n}$ for each $\ell$ . I am not sure why (formally speaking) it is reasonable to assume, but it appears visually (and appealing to the 2D case) that the sum of the volumes of these frustums should approach the volume of the hemisphere.



So, for each $\ell = 1,2,\ldots,n-1$, the term $V_\ell$ we seek is $\frac{1}{3}B_1 h_1 - \frac{1}{3}B_2 h_2 $, the volume of some pyramid minus its top. Using similarity of triangles and everything introduced above, we can deduce that
$$
B_1 = \frac{n}{2}r_{\ell-1}^2 \sin\frac{2\pi}{n}~,~B_2 = \frac{n}{2}r_\ell^2 \sin\frac{2\pi}{n} ~,~h_1 = \frac{r}{n}\frac{r_{\ell-1}}{r_{\ell-1}-r_{\ell}}~,~h_2=\frac{r}{n}\frac{r_{\ell}}{r_{\ell-1}-r_{\ell}} ~~.
$$
So, our expression for $V_\ell$ is
$$
\frac{r}{6} \sin\frac{2\pi}{n} \left\{ \frac{r_{\ell-1}^3}{r_{\ell-1}-r_{\ell}} - \frac{r_{\ell}^3}{r_{\ell-1}-r_{\ell}} \right\} = \frac{\pi r}{3n} \frac{\sin\frac{2\pi}{n}}{2\pi/n} \left\{ r_{\ell-1}^2 + r_\ell^2 + r_{\ell-1}r_\ell \right\}
$$ $$
= \frac{\pi r^3}{3n^3} \frac{\sin\frac{2\pi}{n}}{2\pi/n} \left\{ (n^2 - (\ell-1)^2) + (n^2-\ell^2) + \sqrt{(n^2-\ell^2)(n^2-(\ell-1)^2)} \right\} ~~.
$$
So, we consider $ \lim\limits_{n\to\infty} \sum_{\ell=1}^{n-1} V_\ell$ . The second factor involving sine goes to 1, and we notice that each of the three terms in the sum is quadratic in $\ell$, and so the sum over them should intuitively have magnitude $n^3$. Hence, we pass the $\frac{1}{n^3}$ into the sum and evaluate each sum and limit individually, obtaining 2/3, 2/3, and 2/3 respectively (the first two are straightforward, while the third comes from the analysis in this answer).




Thus, we arrive at $\frac{\pi r^3}{3} (2/3+2/3+2/3) = \frac{2}{3}\pi r^3$ as the volume of a hemisphere, as desired.



So was this too excessive or perhaps worth it? I'll leave that to all of you. :)
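For anyone who wants to double-check the construction numerically, here is a small sketch (the function name and test values are my own) that sums the exact frustum volumes $V_\ell$ derived above:

    import math

    # Sum the frustum volumes V_l for the hemisphere; the total should
    # approach (2/3)*pi*r^3 as n grows.
    def hemisphere_frustums(n, r=1.0):
        radii = [r / n * math.sqrt(n * n - l * l) for l in range(n)]
        s = math.sin(2 * math.pi / n)
        total = 0.0
        for l in range(1, n):
            r0, r1 = radii[l - 1], radii[l]
            total += (r / 6) * s * (r0 * r0 + r1 * r1 + r0 * r1)
        return total

    for n in (10, 100, 1000):
        print(n, hemisphere_frustums(n), 2 * math.pi / 3)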


Summation operation for precalculus




Studying Spivak's Calculus, I came across a relation I find hard to grasp. In particular, I want to understand it without using proofs by induction. So please prove or explain the following relationship without using induction.



$$ \sum_{j=0}^{n}\binom{n}{j}a^{n-j}b^{j+1}=\sum_{j=1}^{n+1}\binom{n}{j-1}a^{n+1-j}b^{j} $$



Thanks in advance.


Answer



The identity you've given appears to be an index shift. Instead of beginning to sum at $i=0$, we wish to begin at $1$. In order to advance the summation index ahead by $1$, we have to take away $1$ from every instance of the index variable inside the summand.



$$\sum_{i=0}^{n}\binom{n}{i}a^{n-i}b^{i+1}$$




The index shift becomes clear if you let $j = i + 1$ and substitute.



$$= \sum_{j=0+1}^{n+1}\binom{n}{j-1}a^{n-(j-1)}b^{(j-1)+1}$$
$$= \sum_{j=1}^{n+1}\binom{n}{j-1}a^{n-j+1}b^{j}$$
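As a quick check (my addition), take $n=1$: the left-hand side is $\binom{1}{0}ab+\binom{1}{1}b^{2}=ab+b^{2}$, and the right-hand side is $\sum_{j=1}^{2}\binom{1}{j-1}a^{2-j}b^{j}=\binom{1}{0}ab+\binom{1}{1}b^{2}=ab+b^{2}$ as well.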


Thursday 21 January 2016

Find a bijective function between two sets

I want to find a bijective function from $(\frac{1}{2},1]$ into $[0,1]$. So, What is a bijective function $f:(\frac{1}{2},1]\to[0,1]$?
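One possible construction (a sketch of the standard trick; my addition since the question is open here): the affine map $g(x)=2x-1$ is a bijection from $(\frac{1}{2},1]$ onto $(0,1]$, and the endpoint can be absorbed by shifting a countable sequence: define $h:(0,1]\to[0,1]$ by $h(1)=0$, $h(1/n)=1/(n-1)$ for integers $n\ge2$, and $h(x)=x$ otherwise. Then $f=h\circ g$ is a bijection from $(\frac{1}{2},1]$ onto $[0,1]$.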

summation - Control ratio of geometric series through its sum

A geometric series $S_n$ is the sum of the first $n$ elements of a geometric sequence $u_n$:



$$u_n = ar^n \quad \forall n \in \mathbb{N}$$

with $u_0 = a$, and:



$$S_n = \sum_{k = 0}^{n - 1}u_k=a\,\frac{1 - r^n}{1 - r}\qquad (r \neq 1)$$



Then, is there a way to determine the ratio $r$ analytically through a given finite $n$ and finite sum $S_n$?
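A brief note in the meantime (my own sketch, not from the original post): clearing denominators in $S_n(1-r)=a(1-r^n)$ gives the polynomial equation
$$a r^n - S_n\, r + (S_n - a) = 0,$$
which always has the spurious root $r=1$. For $n\ge5$ there is in general no solution in radicals, but the remaining roots are easy to find numerically, e.g.:

    import numpy as np

    # Hypothetical example values (my choice): a = 2, n = 6, S_n = 20.
    a, n, S = 2.0, 6, 20.0

    # Coefficients of a*r**n - S*r + (S - a) = 0, highest degree first.
    coeffs = np.zeros(n + 1)
    coeffs[0] = a
    coeffs[-2] = -S
    coeffs[-1] = S - a

    roots = np.roots(coeffs)
    # Keep the real roots, discarding the spurious root r = 1.
    real = [z.real for z in roots if abs(z.imag) < 1e-9 and abs(z.real - 1) > 1e-9]
    print(real)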

calculus - Show that $\int_{0}^{\pi/2}\frac {\log^2\sin x\log^2\cos x}{\cos x\sin x}\mathrm{d}x=\frac14\left( 2\zeta (5)-\zeta(2)\zeta (3)\right)$





Show that :
$$
\int_{0}^{\Large\frac\pi2}
{\ln^{2}\left(\vphantom{\large A}\cos\left(x\right)\right)
\ln^{2}\left(\vphantom{\large A}\sin\left(x\right)\right)
\over
\cos\left(x\right)\sin\left(x\right)}\,{\rm d}x
={1 \over 4}\,
\bigg[2\,\zeta\left(5\right) - \zeta\left(2\right)\zeta\left(3\right) \bigg]
$$




I can only do the non-squared one. Does anyone have a clue?


Answer



Related problems: (I), (II), (III), (IV), (V), (6). Use the change of variables $\ln(\cos(x))=t$ to transform the integral to




$$ I = \int_{0}^{\frac{\pi }{2}}{\frac{{{\ln }^{2}}\cos x{{\ln }^{2}}\sin x}{\cos x\sin x}}\text{d}x = \frac{1}{4}\,\int _{-\infty }^{0}\!{\frac {{t}^{2} \left( \ln \left( 1-{
{\rm e}^{2\,t}} \right)\right) ^{2}}{1-{{\rm e}^{2t}}}}{dt}.$$





Following it with another change of variables, $ 1-e^{2t}=z $, gives



$$\frac{1}{4}\,\int _{-\infty }^{0}\!{\frac {{t}^{2} \left( \ln \left( 1-{
{\rm e}^{2\,t}} \right) \right) ^{2}}{1- {{\rm e}^{2t}}
}}{dt}= \frac{1}{32}\,\int _{0}^{1}\!{\frac { \left( \ln \left( 1-z \right)
\right) ^{2} \left( \ln \left( z \right) \right) ^{2}}{z \left( 1-
z\right) }}{dz}$$




$$= \frac{1}{32}\,\int _{0}^{1}\!{\frac { \left( \ln \left( 1-z \right)
\right) ^{2} \left( \ln \left( z \right) \right) ^{2}}{z }}{dz}+\frac{1}{32}\,\int _{0}^{1}\!{\frac { \left( \ln\left( 1-z \right)
\right) ^{2} \left( \ln \left( z \right) \right) ^{2}}{ \left( 1-
z\right) }}{dz} $$




$$ \implies I = \frac{1}{16}\,\int _{0}^{1}\!{\frac { \left( \ln \left( 1-z \right)
\right) ^{2} \left( \ln \left( z \right) \right) ^{2}}{z }}{dz}\longrightarrow (1). $$





Getting the exact result: Integral (1) can be evaluated as



$$ \frac{1}{16}\,\int _{0}^{1}\!{\frac { \left( \ln \left( 1-z \right)
\right)^{2} \left( \ln \left( z \right) \right)^{2}}{z }}{dz}=\frac{1}{16} \lim_{w\to 0}\lim_{s\to 0^+}\frac{d^2}{dw^2}\frac{d^2}{ds^2}\int_{0}^{1} (1-z)^{w}z^{s-1}dz $$



$$ = \frac{1}{16}\lim_{w\to 0}\lim_{s\to 0^+}\frac{d^2}{dw^2}\frac{d^2}{ds^2}\beta(s,w+1)=\frac{1}{16}\lim_{w\to 0}\lim_{s\to 0^+}\frac{d^2}{dw^2}\frac{d^2}{ds^2}\frac{\Gamma(s)\Gamma(w+1)}{\Gamma(s+w+1)}$$




$$ I=\frac{1}{4}\left( 2\zeta \left( 5 \right)-\zeta \left( 2 \right)\zeta \left( 3 \right) \right) \longrightarrow (*), $$





where $\beta(u,v)$ is the beta function.



Other forms for the solution 1: Using integration by parts with $u=\ln^2(1-z)$, integral $(1)$ can be written as



$$ \frac{1}{16}\,\int _{0}^{1}\!{\frac { \left( \ln \left( 1-z \right)
\right)^{2} \left( \ln \left( z \right)\right)^{2}}{z }}{dz}=\frac{1}{24}\,\int _{0}^{1}\!{\frac{ \ln\left( 1-z \right)\left( \ln \left( z \right) \right)^{3}}{1-z}}{dz} $$



$$ = -\sum_{n=0}^{\infty}(\psi(n+1)+\gamma)\int_{0}^{1}z^n\ln^3(z)dz = \frac{1}{4}\sum_{n=0}^{\infty}\frac{\psi(n+1)+\gamma}{(n+1)^4}. $$





$$ I= \frac{1}{4}\sum_{n=1}^{\infty}\frac{\psi(n)}{n^4}+\frac{\gamma}{4}\zeta(4)\sim 0.02413779000 \longrightarrow (**). $$




You can use the identity $ H_{n-1}=\psi(n)+\gamma $, where $H_n$ are the harmonic numbers, to write the result as




$$ I=\frac{1}{4}\sum_{n=1}^{\infty}\frac{H_{n-1}}{n^4} \longrightarrow (***). $$





Other forms for the solution 2: We can have the following form for the solution




$$ I=\frac{1}{16}\sum_{n=1}^{\infty}\frac{H^2_{n}}{n^3}+\frac{1}{16}\sum_{n=1}^{\infty}\frac{\psi'(n+1)}{n^3}-\frac{1}{16}\zeta(2)\zeta(3)\longrightarrow (****). $$




Note 1: we used the power series expansion of the function $ \frac{\ln(1-z)}{1-z}, $




$$\frac{\ln(1-z)}{1-z}= -\sum _{n=0}^{\infty } \left( \psi \left( n+1 \right) + \gamma \right){z}^{n}=-\sum _{n=0}^{\infty } H_{n}{z}^{n}. $$





Note 2: Try to tackle integral $(1)$ using the technique used in solving your previous question.
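As a final sanity check (a sketch using the mpmath library; not part of the original answer), the closed form $(*)$ can be confirmed numerically:

    from mpmath import mp, quad, log, cos, sin, zeta, pi

    mp.dps = 25
    # Integrand of the original integral on (0, pi/2).
    f = lambda x: (log(cos(x))**2 * log(sin(x))**2) / (cos(x) * sin(x))
    numeric = quad(f, [0, pi/2])
    closed = (2*zeta(5) - zeta(2)*zeta(3)) / 4
    print(numeric, closed)  # both ~ 0.0241377900...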


convergence divergence - Cauchy in measure implies uniqueness of almost everywhere limits for subsequences

I tried this exercise all last week, but I can't do it.



We know that if $\{f_n\}$ is Cauchy in measure, then, there is a measurable function $h$ such that $\{f_n\}$ converges in measure to $h$. Moreover, there is a subsequence that converges to $h$ almost uniformly and almost everywhere to $h$, but...



If $\{f_n\}$ is Cauchy in measure and there are subsequences $\{f_{n_k}\}$ and $\{f_{m_k}\}$ and measurable functions $f,g$ such that $\{f_{n_k}\}$ and $\{f_{m_k}\}$ converge to $f$ and $g$ almost everywhere, respectively, then $f=g$ almost everywhere.



I tried to show that $f=h=g$ a.e., but I can't solve it. Any help (or hint) is welcome.
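A possible hint (my suggestion, not part of the original post): every subsequence of a sequence converging in measure to $h$ also converges in measure to $h$; in particular $\{f_{n_k}\}$ does, so some further subsequence of $\{f_{n_k}\}$ converges to $h$ almost everywhere. That further subsequence still converges to $f$ almost everywhere, so $f=h$ a.e.; the same argument applied to $\{f_{m_k}\}$ gives $g=h$ a.e., hence $f=g$ a.e.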

real analysis - Proof |1/cosh(x)| ≥ |ln(cosh(x)/(1+cosh(x)))|




I am doing a problem where I need to prove that $$\left|\frac{1}{\cosh(x)}\right|≥\left|\ln\left(\frac{\cosh(x)}{1+\cosh(x)}\right)\right|$$
Without differentiation, and I can't find a way to prove it. Can anyone prove it without differentiation?



Answer



Using
$$
\ln (1+x) \le x \text { for } x > -1
$$ (see for example how to prove that $\ln(1+x)< x$) the following holds for $y = \cosh x \ge 1$:
$$
\left\lvert\ln \frac{y}{1+y}\right\rvert = - \ln \frac{y}{1+y} = \ln \frac{1+y}{y} = \ln\left(1 + \frac 1y\right) \le \frac 1y
$$


calculus - How to show that the series of $\frac{\sin(n)}{\log(n)}$ converges?



Edit: I am seeking a solution that uses only calculus and real analysis methods -- not complex analysis. This is an old advanced calculus exam question, and I think we are not allowed to use any complex analysis that could make the problem statement a triviality.



Show that the series



$$\sum_{n=2}^{\infty} \frac{\sin(n)}{\log(n)}$$




converges.



Any hints or suggestions are welcome.



Some thoughts:



The integral test is not applicable here, since the summands are not positive.



The Dirichlet test does not seem applicable either, since if I let $1/\log(n)$ be the decreasing sequence, then the series of $\sin(n)$ does not seem to have bounded partial sums.




Thanks,


Answer



Note that



$$\left|\sum_{k=1}^n \sin k\right|= \frac{|\sin(n/2)\sin[(n+1)/2]|}{\sin(1/2)}\leqslant \frac{1}{\sin(1/2)}.$$



Derivation:



$$\begin{align} 2 \sin(1/2)\sum_{k=1}^n\sin k &= \sum_{k=1}^n2 \sin(1/2)\sin k \\ &= \sum_{k=1}^n2 \sin(1/2)\cos (k -\pi/2) \\ &= \sum_{k=1}^n[\sin(k + 1/2 - \pi/2)-\sin (k -1/2 - \pi/2)] \\ &= \sin(n + 1/2 - \pi/2) - \sin(1/2 - \pi/2) \\ &= 2 \sin(n/2)\cos[(n+1)/2 -\pi/2] \\ &= 2 \sin(n/2)\sin[(n+1)/2] \end{align} \\ \implies \sum_{k=1}^n\sin k = \frac{\sin(n/2)\sin[(n+1)/2]}{\sin(1/2)}$$

Since the partial sums of $\sin n$ are bounded and $1/\log(n)$ decreases monotonically to $0$, Dirichlet's test now gives the convergence of the series.



calculus - L'Hopital's Rule to determine limit as x approaches infinity

I'm struggling a bit with solving a limit problem using L'Hopital's Rule:



$$\lim_{x\to\infty} \left(1+\frac{1}{x}\right)^{2x}$$



My work:



$$y = \left(1+\frac{1}{x}\right)^{2x}$$

$$\ln y = \ln \left(1+\frac{1}{x}\right)^{2x} = 2x \ln \left(1+\frac{1}{x}\right)$$
$$=\frac{\ln \left(1+\frac{1}{x}\right)}{(2x)^{-1}}$$
Taking derivatives of both the numerator and the denominator:
$$f(x) = \ln\left(1+\frac{1}{x}\right)$$
$$f'(x) = \left(\frac{1}{1+\frac{1}{x}}\right)\left(-\frac{1}{x^2}\right) = (x+1)\left(-\frac{1}{x^2}\right) = -\frac{(x+1)}{x^2}$$
$$g(x) = (2x)^{-1}$$
$$g'(x) = (-1)(2x)^{-2}(2) = -\frac{2}{(2x)^{2}} = -\frac{1}{2x^{2}}$$



Implementing the derivatives:




$$\lim_{x\to\infty} \frac{-\frac{(x+1)}{x^2}}{-\frac{1}{2x^2}} = -\frac{(x+1)}{x^2} \cdot \left(-\frac{2x^2}{1}\right) = 2(x+1) = 2x+2$$



However, I'm not sure where to go from here. If I evaluate the limit, it still comes out to infinity plus 2, and I don't know how much further to take the derivative or apply L'Hopital's Rule.



Any suggestions would be appreciated!
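One remark that may unstick things (my note, not part of the original question): since $\frac{1}{1+\frac1x}=\frac{x}{x+1}$ rather than $x+1$, the derivative of the numerator is actually
$$f'(x) = \frac{x}{x+1}\cdot\left(-\frac{1}{x^2}\right) = -\frac{1}{x(x+1)},$$
so the quotient becomes $\frac{-1/(x(x+1))}{-1/(2x^2)}=\frac{2x}{x+1}\to 2$ as $x\to\infty$, giving $\ln y\to2$ and hence the limit $e^2$.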

real analysis - Lebesgue measure, Borel sets and Axiom of choice



I cannot proceed my study on measure theory since it seems my measure theory is really unstable. I desperately need someone to briefly answer below 3 questions...



For convenience, I will write Lebesgue Measurable set to mean the usual Lebesgue Measurable set, and write Codable-Lebesgue Measurable set to mean the Lebesgue Measurable set defined by Codable-Borel sets.




$1$. It is well-known that "Existence of Non-Lebesgue measurable set is unprovable in ZF". WHAT Lebesgue measurable set?



My definition of Lebesgue measurable set is via the Riesz Representation Theorem (whose existence is guaranteed by the Axiom of Countable Choice, hence it is undefinable without choice). However, I heard that the usual construction of Lebesgue measure is by using Caratheodory's existence theorem. In that way, can Lebesgue measurable sets be defined without the Axiom of Choice? That is, it really doesn't make sense to me that 'the existence of a non-Lebesgue measurable set is unprovable in ZF', since Lebesgue measure cannot even be defined without choice (in my construction of Lebesgue measure).



$2$. Can "Existence of a set that is Lebesgue measurable but non-Borel" be proved without choice? I saw the standard example (Luzin's example using continued fraction), but i'm not sure what Lebesgue measurable set stated there. Moreover, if it is not true in ZF, what about "Existence of a set that is Codable-Lebesgue Measurable but non-Borel"?



$3$. (This is closely related to 1) Why is the Lebesgue measure unique? Assuming the Axiom of Choice, it is a theorem that "If $\mu_1$ and $\mu_2$ are translation-invariant measures on sigma algebras $\mathfrak{M}_1$ and $\mathfrak{M}_2$ respectively, containing all Borel sets, and $\mu_1(K), \mu_2(K) <\infty$ for every compact set $K$, then there exists a constant $c$ such that $\mu_1(E)=c\mu_2(E)$ for all Borel sets $E$". You can see that this theorem does not imply that $\mathfrak{M}_1=\mathfrak{M}_2$ in the hypothesis.



Thank you in advance.


Answer




For the first question, note that if we assume Dependent Choice then all $\sigma$-additivity arguments can be carried out perfectly. In such setting using the Caratheodory theorem makes perfect sense, and indeed the Lebesgue measure is the completion of the Borel measure.



It was proved by Solovay that $\mathsf{ZF+DC}$ is consistent with "Every set is Lebesgue measurable", relative to an inaccessible cardinal. This shows that assuming large cardinals are not inconsistent, we cannot prove the existence of a non-measurable set (Vitali sets, Bernstein sets, ultrafilters on $\omega$, etc.) without appealing to more than $\mathsf{ZF+DC}$.



Under the assumption of $\mathsf{DC}$ it is almost immediate that there are only continuum-many Borel sets, but if we agree to remove this assumption then it is consistent that every set is Borel, where the Borel sets are the sets in the $\sigma$-algebra generated by open intervals with rational endpoints. For example in models where the real numbers are a countable union of countable sets; but not only in such models. Do note that in such bizarre models the Borel measure is no longer $\sigma$-additive.



For sets with Borel codes the proof follows as in the usual proof in $\mathsf{ZFC}$. There are only $2^{\aleph_0}$ possible codes, but there are $2^{2^{\aleph_0}}$ subsets of the Cantor set, all of which are Codable-Lebesgue measurable.



Lastly, we have a very good definition for the Lebesgue measure, it is simply the completion of the Borel measure. The Borel measure itself is unique, it is the Haar measure of the additive group of the real numbers. The Lebesgue measure is simply the completion of the Borel measure which is also unique by definition. This is similar to the case where the rational numbers could have two non-isomorphic algebraic closures, but there is always a canonical closure. Even if you can find two ways to complete a measure, "the completion" would usually refer to the definable one (adding all subsets of null sets).


probability - Intuitive/heuristic explanation of Polya's urn

Suppose we have an urn with one red ball and one blue ball. At each step, we take out a single ball from the urn and note its color; we then put that ball back into the urn, along with an additional ball of the same color.



This is Polya's urn, and one of the basic facts about it is the following: the number of red balls after $n$ draws is uniform over $\{1, \ldots, n+1\}$.



This is very surprising to me. While it's not hard to show this by direct calculation, I wonder if anyone can give an intuitive/heuristic explanation of why this distribution should be uniform.




There are quite a few questions on Polya's urn on math.stackexchange, but none of them seem to be asking exactly this. The closest is this question, where there are some nice explanations for why, assuming as above that we start with one red and one blue ball, the probability of drawing a red ball at the $k$'th step is $1/2$ for every $k$ (it follows by symmetry).
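For what it's worth, the uniformity is easy to see empirically; here is a small Monte Carlo sketch (the function name, n and trial count are my own choices):

    import random
    from collections import Counter

    # Simulate Polya's urn: start with 1 red and 1 blue ball, draw n times,
    # and return the final number of red balls.
    def polya(n):
        red, blue = 1, 1
        for _ in range(n):
            if random.random() < red / (red + blue):
                red += 1
            else:
                blue += 1
        return red

    n, trials = 10, 100_000
    counts = Counter(polya(n) for _ in range(trials))
    for k in sorted(counts):
        print(k, counts[k] / trials)  # each should be near 1/(n+1)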

Wednesday 20 January 2016

real analysis - Series convergence/divergence



I'm trying to figure out whether the following series diverges or converges by using d'Alembert's ratio test, Cauchy's integral and root tests, and the Leibniz convergence test for alternating series, as well as direct comparison tests.




$$ \sum_{k=2}^\infty \frac{1}{(\ln(k!))^2} $$



I'm very unclear on how to handle this series. My guess is that it converges, so I would have to find a convergent series that dominates it.



$$ \ln(k!)\geq \ln(k) \;\Rightarrow\; \frac{1}{\ln(k)}\geq \frac {1}{\ln(k!)}$$



But after trying the ratio test:

$$ \frac{(\ln(k))^2}{(\ln(k+1))^2} $$




I find it unclear how to continue.


Answer



Of course it converges. You just have to show that $\ln(k!)^2$ grows fast enough.



For example:
We have $k!\ge (k/2)^{k/2}$, so for $k\ge 6$,



$$\ln(k!)^2\ge \frac{k^2}4 \ln(k/2)^2\ge \frac{k^2}{4}$$



and $\sum\limits_{k=2}^\infty \frac1{k^2}$ converges, so by comparison your series also does.



real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hopital's rule? I know that when I use l'Hopital I easily get $$ \lim_{h\rightarrow 0}...