Wednesday 30 September 2015

summation - Proof without words for $\sum_{i=0}^\infty(-1)^i\frac{1}{2i+1}$

$$\sum_{i=0}^\infty(-1)^i\frac{1}{2i+1}$$



$$1-\frac13+\frac15-\frac17+\frac19-\cdots=\frac\pi4$$



Does anyone know of a proof without words for this? I am not looking for just any proof, since I can prove it myself. What I am looking for is an elegant physical interpretation, or anything else that matches that kind of beauty.



Just to clear up, the proof I already know is



$$\int_0^1\left(1-x^2+x^4-x^6+x^8-\cdots\right)\,\mathrm{d}x = \int_0^1\frac{\mathrm{d}x}{1+x^2}$$




So I am interested in anything that is not obviously related to this.

real analysis - Convergence in $L^{\infty}$ norm implies convergence in $L^1$ norm




Let $\{f_n\}_{n\in \mathbb{N}}$ be a sequence of measurable functions on a measure space and $f$ measurable. Assume the measure space $X$ has finite measure. If $f_n$ converges to $f$ in $L^{\infty}$-norm , then $f_n$ converges to $f$ in $L^{1}$-norm.




This is my approach:



We know $||f_n-f||_{\infty} \to 0 $ and by definition $||f_n-f||_{\infty} =\inf\{M\geq 0: |f_n-f|\leq M \}.$ Then
\begin{align}
||f_n-f||_1 &= \int |f_n-f|\, dm\\
&\leq \int|f_n|\,dm+\int|f|\,dm
\end{align}



I don't know how to proceed after that, any help would be appreciated.


Answer



For any function $g$, $||g||_1 = \int_X|g(m)|dm \leq \int_X||g||_\infty dm = \mu(X)*||g||_\infty$ (as $|g(m)| \leq ||g||_\infty$ almost everywhere); $||g||_\infty \geq \frac{||g||_1}{\mu(X)}$, so if $||f_n-f||_\infty$ tends to zero, then $||f_n-f||_1$ tends to zero as well.


calculus - Exponential Decay Help

Hi, I don't mean to sound like a clueless student who should already know this subject, but I am really stumped on the topic of exponential growth and decay. Namely, this one question:



"When zombies finally take over, the population of the Earth will decrease exponentially. Every HOUR that goes by the human population will decrease by 5%. The population today is 6,000,000,000, find a function P that gives the population of the earth d DAYS after the beginning of the zombie takeover."



This really puts me between a rock and a hard place. As you can imagine, I researched this question and various methods of exponential growth everywhere online, in my textbook, and in my class notes, but I still come full circle and find myself back at square one, with no clue whatsoever how this works. I would really appreciate it if someone could at least point me in the right direction, like how to model a decay with exponential variables involving HOURS and DAYS as the question says. I already know my initial formula being 6,000,000,000(0.85); I just need to know the exponents used! Is it 6,000,000,000(0.85)^d/60, 6,000,000,000(0.85)^d/1? Or something like that? Please help, I really want to grasp this concept!
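
For what it's worth, here is a minimal sketch of the model in Python, assuming the stated 5% decrease per hour; note that a 5% hourly decrease corresponds to a factor of $0.95$ per hour, hence $0.95^{24}$ per day.

```python
# Sketch of the decay model: 5% loss per hour means a factor 0.95 per hour,
# i.e. 0.95**24 per day, so P(d) = 6_000_000_000 * 0.95**(24*d).

def population(d: float) -> float:
    """Population of the Earth d days after the zombie takeover."""
    return 6_000_000_000 * 0.95 ** (24 * d)

for d in (0, 1, 7, 30):
    print(d, round(population(d)))
```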

linear algebra - Positive semidefinite versus all the eigenvalues having non-negative real parts





  1. Suppose matrix $A$ with all its eigenvalues having non-negative real parts, can we get that $x^TAx\geq0$ holds for any vector $x$?


  2. Suppose matrix $A$ is positive semidefinite, $B$ is a positive definite diagonal matrix with the same dimension as $A$. Do all the eigenvalues of $AB$ have nonnegative real parts?



Answer



For your first question, the answer is negative. A counterexample is as follows.



Consider $A = \begin{bmatrix}1 & -100\\0 & 1\end{bmatrix}$, whose eigenvalues both have non-negative real parts, but we have $x^{\mathrm T}Ax = -98$ when $x = \begin{bmatrix}1\\1\end{bmatrix}$.
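
A quick numerical sanity check of this counterexample, as a sketch using numpy:

```python
import numpy as np

A = np.array([[1.0, -100.0],
              [0.0,    1.0]])
x = np.array([1.0, 1.0])

print(np.linalg.eigvals(A))  # [1. 1.]: both eigenvalues have non-negative real part
print(x @ A @ x)             # -98.0, so x^T A x < 0 and A is not positive semidefinite
```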


calculus - Proving $\int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t} \,\mathrm dt=-\sqrt{\pi}(\gamma+\ln{4})$



I would like to prove that:




$$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t} \mathrm dt=-\sqrt{\pi}(\gamma+\ln{4})$$



I tried to use the integral $$\int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt$$



$$\int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt \;{\underset{\small n\to\infty}{\longrightarrow}}\; \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t} \mathrm dt$$ (dominated convergence theorem)



Using the substitution $t\to\frac{t}{n}$, I get:



$$ \int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt=\sqrt{n}\left(\ln(n)\int_{0}^{1} \frac{(1-t)^n}{\sqrt{t}} \mathrm dt+\int_{0}^{1} \frac{\ln(t)(1-t)^n}{\sqrt{t}} \mathrm dt\right) $$




However I don't know if I am on the right track for these new integrals look quite tricky.


Answer



Consider integral representation for the Euler $\Gamma$-function:
$$
\Gamma(s) = \int_0^\infty t^{s-1} \mathrm{e}^{-t} \mathrm{d} t
$$
Differentiate with respect to $s$:
$$
\Gamma(s) \psi(s) = \int_0^\infty t^{s-1} \ln(t) \mathrm{e}^{-t} \mathrm{d} t
$$

where $\psi(s)$ is the digamma function.
Now substitute $s=\frac{1}{2}$. So
$$
\int_0^\infty \frac{ \ln(t)}{\sqrt{t}} \mathrm{e}^{-t} \mathrm{d} t = \Gamma\left( \frac{1}{2} \right) \psi\left( \frac{1}{2} \right)
$$
Now use duplication formula:
$$
\Gamma(2s) = \Gamma(s) \Gamma(s+1/2) \frac{2^{2s-1}}{\sqrt{\pi}}
$$
Differentiating this with respect to $s$ gives the duplication formula for $\psi(s)$, and substitution of $s=1/2$ gives $\Gamma(1/2) = \sqrt{\pi}$.

$$
\psi(2s) = \frac{1}{2}\psi(s) + \frac{1}{2} \psi(s+1/2) + \log(2)
$$
Substitute $s=\frac{1}{2}$ and use $\psi(1) = -\gamma$ to arrive at the result.
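
As a sanity check, the value can be confirmed numerically; a quick sketch using scipy:

```python
import numpy as np
from scipy import integrate, special

lhs, _ = integrate.quad(lambda t: np.log(t) / np.sqrt(t) * np.exp(-t), 0, np.inf)
print(lhs)                                             # ~ -3.48023
print(special.gamma(0.5) * special.digamma(0.5))       # same value
print(-np.sqrt(np.pi) * (np.euler_gamma + np.log(4)))  # same value
```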


Tuesday 29 September 2015

analysis - Proof for Strong Induction Principle



I am currently studying analysis and I came across the following exercise.








Proposition 2.2.14
Let $m_0$ be a natural number and let $P(m)$ be a property pertaining to an arbitrary natural number $m$. Suppose that for each $m\geq m_0$, we have the following implication: if $P(m')$ is true for all natural numbers $m_0\leq m'< m$, then $P(m)$ is also true. (In particular, this means that $P(m_0)$ is true, since in this case the hypothesis is vacuous.) Then we can conclude that $P(m)$ is true for all natural numbers $m\geq m_0$.




Prove Proposition 2.2.14. (Hint: define $Q(n)$ to be the property that $P(m)$ is true for all $m_0\leq m < n$; note that $Q(n)$ is vacuously true when $n\leq m_0$.)





I have difficulty understanding how I should use the hint and in general what the framework of this proof would look like (probably an inductive proof; but on what variable do we induct, what will be the induction hypothesis and how would I go about proving the inductive step etc.?). Could anyone please provide me with some hints to help me get started?


Answer



Let $B$ be the subset of $N=\{m_0,m_0+1,\dots\}$ such that $P(m)\iff m\in B$. This $B$ is not empty, since for all $m_0\le m'<m_0$ the hypothesis is vacuously true, hence $P(m_0)$ holds and $m_0\in B$. Now suppose $B\neq N$. Then the non-empty set $N\setminus B$ has a minimal element $m$, and $P(m')$ is true for all $m_0\le m'<m$; by the assumed implication $P(m)$ is then true, contradicting $m\notin B$. Hence $B=N$, i.e. $P(m)$ holds for all $m\geq m_0$.

Remark: This works for all sets $N$ where each non-empty subset has a minimal element with respect to a relation $R$. These sets are called well-founded.



If you want to use the hint, show that $Q(m_0)$ holds and that $Q(n)$ implies $Q(n+1)$: $Q(m_0)$ is vacuously true, since there is no $m$ with $m_0\le m< m_0$. Now assume $n$ is a natural number $\ge m_0$ such that $Q(n)$ holds. This means that $P(m)$ is true for all $m_0\le m<n$, so by the hypothesis of the proposition $P(n)$ is true as well, and therefore $Q(n+1)$ holds. By ordinary induction, $Q(n)$ holds for all $n\ge m_0$, which gives $P(m)$ for all $m\ge m_0$.

So we can prove the strong induction principle via the ordinary induction principle. However, the ordinary induction principle itself requires a proof, and that is essentially the proof I wrote in the first paragraph. As mentioned, it works for all well-founded sets ($\mathbb N$ is such a set).



complex analysis - Prove $\int_0^\infty \frac{\sin^4x}{x^4}dx = \frac{\pi}{3}$

I need to show that
$$
\int_0^\infty \frac{\sin^4x}{x^4}dx = \frac{\pi}{3}
$$



I have already derived the result $\int_0^\infty \frac{\sin^2x}{x^2} = \frac{\pi}{2}$ using complex analysis, a result which I am supposed to start from. Using a change of variable $ x \mapsto 2x $ :



$$

\int_0^\infty \frac{\sin^2(2x)}{x^2}dx = \pi
$$



Now using the identity $\sin^2(2x) = 4\sin^2x - 4\sin^4x $, we obtain



$$
\int_0^\infty \frac{\sin^2x - \sin^4x}{x^2}dx = \frac{\pi}{4}
$$
$$
\frac{\pi}{2} - \int_0^\infty \frac{\sin^4x}{x^2}dx = \frac{\pi}{4}

$$
$$
\int_0^\infty \frac{\sin^4x}{x^2}dx = \frac{\pi}{4}
$$



But I am now at a loss as to how to make $x^4$ appear at the denominator. Any ideas appreciated.



Important: I must start from $ \int_0^\infty \frac{\sin^2x}{x^2}dx $, and use the change of variable and identity mentioned above
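
The target value itself is easy to confirm numerically; a quick sketch using scipy:

```python
import numpy as np
from scipy import integrate

val, _ = integrate.quad(lambda x: (np.sin(x) / x) ** 4, 1e-12, np.inf, limit=200)
print(val, np.pi / 3)  # both approximately 1.0472
```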

definite integrals - Finding the value of a sum using Riemann sum theorem





Question: Find the value of $\sum_{i=1}^{n}(\frac{1}{n-i})^{c}$ for large $n$.




\begin{align}
\sum_{i=1}^{n}(\frac{1}{n-i})^{c}
& = \sum_{i=1}^{n}(\frac{1}{n})^{c}(\frac{1}{1-\frac{i}{n}})^{c}
\\ & = \frac{n}{n} \times \sum_{i=1}^{n}(\frac{1}{n})^{c}(\frac{1}{1-\frac{i}{n}})^{c}
\\ & = n(\frac{1}{n})^{c} \sum_{i=1}^{n}\frac{1}{n}(\frac{1}{1-\frac{i}{n}})^{c} \qquad(1)
\end{align}




Let $f(x) = (\frac{1}{1-x})^{c}$, by using Riemann-sum theorem, we have
\begin{align}
\lim_{n\rightarrow \infty}\sum_{i=1}^{n}\frac{1}{n}(\frac{1}{1-\frac{i}{n}})^{c}
& = \int_{0}^{1} (\frac{1}{1-x})^{c} = A \qquad(2)
\end{align}
By using $(1)$ and $(2)$, for sufficiently large $n$, we have
$$\bbox[5px,border:2px solid #C0A000]{\sum_{i=1}^{n}(\frac{1}{n-i})^{c} = A\times n(\frac{1}{n})^{c}}$$





The presented proof has a problem: $f(x)$ is not defined on the whole closed interval $[0,1]$ (it blows up at $x=1$). How can I solve this problem?







Definition (Riemann-sum theorem) Let $f(x)$ be a function defined on a closed interval $[a, b]$. Then, we have
$$\lim_{n\rightarrow \infty}\sum_{i=1}^{n}f\Big(a +\Big(\frac{b - a}{n}\Big)i\Big)\frac{b-a}{n}=\int_{a}^{b}f(x)dx$$


Answer



\begin{align}
& \frac{2}{\sqrt{n-i} + \sqrt{n-i+1}} \leq \frac{1}{\sqrt{n-i}} \leq \frac{2}{\sqrt{n-i} + \sqrt{n-i-1}} \\
& \qquad \Rightarrow 2(\sqrt{n-i+1} - \sqrt{n-i}) \leq \frac{1}{\sqrt{n-i}} \leq 2(\sqrt{n-i} - \sqrt{n-i-1}) \\
& \qquad \Rightarrow 2 \sum_{i=1}^{n-1}(\sqrt{n-i+1} - \sqrt{n-i}) \leq \sum_{i=1}^{n-1} \frac{1}{\sqrt{n-i}} \leq 2 \sum_{i=1}^{n-1}(\sqrt{n-i} - \sqrt{n-i-1}) \\
& \qquad \Rightarrow 2 (\sqrt{n}-1) \leq \sum_{i=1}^{n-1} \frac{1}{\sqrt{n-i}} \leq 2 \sqrt{n-1}
\end{align}
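
The bounds are easy to see numerically as well; a quick sketch in Python:

```python
import math

for n in (10, 100, 1000, 10000):
    s = sum(1 / math.sqrt(n - i) for i in range(1, n))  # i = 1, ..., n-1
    print(2 * (math.sqrt(n) - 1), s, 2 * math.sqrt(n - 1))
```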


real analysis - convergence radius of two specific power series




I did two calculations that I think are wrong but I am not sure why.



I have to compute the convergence radius of the following power series



a) $\sum_0^{\infty} \ln(k!)x^k$,



b) $\sum_0^{\infty} k8^kx^{3k}$.



Here's my attempt:




a)



$$ \limsup_{k \rightarrow \infty} \sqrt[k]{a_k} = \limsup_{k \rightarrow \infty} \sqrt[k]{\ln(k!)} = \ln (\limsup_{k \rightarrow \infty} \sqrt[k]{k!})= \infty. $$



Therefore the convergence radius should be $0$ and the series only converges for the value $x=0$. Unfortunately, Wolfram alpha gives me another answer. Where's the mistake ?



b)



$$ \limsup_{k \rightarrow \infty} \sqrt[k]{a_k} = \limsup_{k \rightarrow \infty} \sqrt[k]{k8^k} = \limsup_{k \rightarrow \infty} \sqrt[k]{k} \sqrt[k]{8^k} = 8 \limsup_{k \rightarrow \infty} \sqrt[k]{k} = 8. $$




I would now conclude that the convergence radius is $\frac{1}{8}$, but it appears to be $\frac{1}{2}$, what did I miss ?



Thank you for your help.


Answer



For a) you have made the error of assuming that the logarithm and the $k$-th root commute. That is $$\ln \sqrt[k]{k!} \ne \sqrt[k]{ \ln k!}$$ and hence $$ \limsup_{k \to \infty} \sqrt[k]{ \ln k!} \ne \ln \left( \limsup_{k \to \infty} \sqrt[k]{k!} \right)$$
I have to disagree with the other answer on how to evaluate this limit (after the error). Using Stirling's approximation gives $$ \sqrt[k]{k!} \sim \sqrt[k]{ \sqrt{2 \pi k} \left( \frac{k}{e} \right)^k } = \left(2 \pi k \right)^{\frac{1}{2k}} \frac{k}{e} \to \infty$$



Of course the actual problem was the one I pointed out before. Here's how I would attack it.
$$\lim_{k \to \infty} \sqrt[k]{ \ln k!}=\lim_{k \to \infty} \frac{ \ln (k+1)!}{\ln (k!)}=\lim_{k \to \infty} \frac{\sum_{j=1}^{k+1} \ln j}{\sum_{j=1}^{k} \ln j}=\lim_{k \to \infty} \frac{\ln (k+1)}{\ln k}=1$$ Note I used the Stolz–Cesàro theorem in the third equality.




I agree with the other answer for (b). You missed the $3k$ in the exponent: the series is a power series in $x^3$, so it converges when $8|x|^3<1$, giving radius $(1/8)^{1/3}=\frac{1}{2}$.


calculus - How to find the infinite sum

I need to find the infinite sum of the following series expansion



$$1/3 + 2/3^2 + 3/3^3 + 4/3^4 + \dots + k/3^k + \dots$$



I know that



$$x/(1 - x) = x + x^2 + x^3 + \dots + x^k + \dots$$



We need to find the $x$ value in order to find the infinite sum. What could the $x$ value be? I am not sure.
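
As a numerical hint, the partial sums settle at $3/4$, which is consistent with the closed form $\sum_{k\ge1}kx^k=\frac{x}{(1-x)^2}$ evaluated at $x=\frac13$; a quick sketch in Python:

```python
total = 0.0
for k in range(1, 60):
    total += k / 3 ** k
print(total)  # 0.75
```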

elementary number theory - Show via direct proof that $k(k+1)(k+2)$ is divisible by $6$.



How do I show via direct proof that $k(k+1)(k+2)$ is divisible by $6$. I showed it was divisible by $2$ because at least one of the multiples is even but could not figure out how to show it is divisible by $3$. I tried making $k$ even or odd and substituting $2q$ or $2q+1$ but have not made much progress. Does anyone have any tips as to what direction I should take? Thanks!


Answer



By the division algorithm, $k$ divided by $3$ yields a remainder of $0$, $1$, or $2$. In other words, there are some integers $q,r$ such that $k=3q+r$ where $r=0,1,$ or $2$.



If $r=0$, then $k=3q$ is divisible by $3$. If $r=1$, then $k+2=(3q+1)+2=3(q+1)$ is divisible by $3$. If $r=2$, then $k+1=(3q+2)+1=3(q+1)$ is divisible by $3$. Therefore, in all cases, at least one of $k$, $k+1$, and $k+2$ is divisible by $3$.


contest math - Polynomial $P(a)=b,P(b)=c,P(c)=a$



Let $a,b,c$ be $3$ distinct integers, and let $P$ be a polynomial with integer coefficients. Show that in this case the conditions $$P(a)=b,P(b)=c,P(c)=a$$ cannot be satisfied simultaneously.



Any hint would be appreciated.


Answer



Hint: If $P(a)=b$ and $P(b)=c$ then $a-b$ divides $b-c$.


complex analysis - Higher dimensional analogues of the argument principle?



I know there are higher dimensional analogues of the argument principle.



(See http://en.wikipedia.org/wiki/Variation_of_argument)




But I do not have books about it and I cannot find anything of value on the internet for free.



Please give me references or explanations.



I would like an example of how one can find the zeros of functions in three dimensions with an integral.



I believe it works as follows :



1) The function needs to be "3D-analytic" , and by that I mean the derivative can be given by the limit




$$ \lim_{h\to 0} \frac{f(x+h)-f(x)}{h} $$



Where any infinitesimal $h$ will give the same limit as any other infinitesimal $h_2$.
(By "any" I mean in any direction of the dimension.)



This is the natural analogue of complex differentiable.



2) The integral is taken over a closed surface that contains a volume.
The surface must be topologically isomorphic to a sphere (no holes, no self-intersections, continuous).




3) Just like the argument principle, the direction of the integral matters.
Hence we Riemann-sum the points with respect to the direction we are going in.
(Compare: for contour integration, if our (piecewise) path goes from $-$ to $+$ we add; if it goes from $+$ to $-$ we subtract.)



4) The "3D argument principle" then computes $CZ$.
$C$ is some constant and $Z$ is the number of zero's - the number of poles within the volume.



Is this correct ?







The analytic proof of the PNT constructs a counting function for the primes based on the zeros of the Riemann zeta function.



An important part of that proof is the argument principle.



I wonder if there are some similar applications of the "3D argument principle " as described above that have been used in number theory.



I wonder what functions in 3D are associated with (number-theoretical) counting functions*.




(* such as used in the proof of PNT with the argument principle (used on the complex plane).)


Answer



Of course, the Argument Principle is in the context of meromorphic functions. However, the higher-dimensional analogue in the context of smooth functions comes from the theory of the degree of maps and winding numbers.



In particular, suppose $W$ is a compact $n$-dimensional oriented manifold with boundary, $f\colon W \to \Bbb R^n$ is smooth, and $f\big|_{\partial W} \ne 0$. If $0$ is a regular value of $f$, then $f^{-1}(0)$ consists of a finite number of points $x_1,\dots,x_k\in W$, each of which appears with a sign $\epsilon_j$ ($+1$ if $df_{x_j}\colon T_{x_j}W\to\Bbb R^n$ is orientation-preserving, and $-1$ if it is orientation-reversing). Then
$$\sum_{j=1}^k \epsilon_j = \text{deg}\left(\frac{f}{\|f\|}\Bigg|_{\partial W}\colon \partial W\to S^{n-1}\right)\,.$$



Comment: In the case $n=2$, $W\subset\Bbb R^2$, and $f$ holomorphic, the $\epsilon_j$ are always $+1$, and so we merely count the roots in $W$. The degree on the right-hand side is the winding number of $f(\partial W)$, which is exactly what $\dfrac1{2\pi i}\displaystyle\int_{\partial W} \frac{f'(z)}{f(z)}dz$ computes.



The best reference I know on such matters is Guillemin and Pollack's Differential Topology. See pp. 110-111 and 144, in particular.




EDIT: To respond to your additional comments/questions, the one-variable calculus definition of derivative extends to $\Bbb C$ only because $\Bbb C$ is a field. Once you move on to $\Bbb R^n$, the derivative becomes a linear transformation with a limit property but cannot itself be given by the sort of limit you desire. Only directional derivatives have the single-variable calculus definition. (In time, you will learn some rigorously done multivariable calculus.) The degree calculation I have given above can indeed be represented by an integral (but, once again, you'll have a bit to learn to understand this): Choose an $(n-1)$-form $\omega$ on $S^{n-1}$ with $\displaystyle\int_{S^{n-1}}\omega=1$, and compute $\displaystyle\int_{\partial W} \left(\tfrac{f}{\|f\|}\right)^*\omega$. Alternatively, compute
$$\tfrac1{\text{vol}(S^{n-1})}\int_{\partial W} f^*\left(\tfrac{x_1\, dx_2\wedge\dots\wedge dx_n - x_2\, dx_1\wedge dx_3\wedge \dots \wedge dx_n + (-1)^{n-1} x_n\, dx_1\wedge\dots\wedge dx_{n-1}}{(x_1^2+x_2^2+\dots+x_n^2)^{n/2}}\right)\,.$$
This is in fact an immediate generalization of the integral appearing in the argument principle.



There are generalizations of the Residue Theorem (which is where the Argument Principle comes from) to meromorphic functions in $\Bbb C^n$. This is also a rather rich subject; some comments were made in this post.


sequences and series - Show $(1+\frac{z}{n})^{n} \underset{n \to +\infty}{\longrightarrow} e^{z}$


To show that $$z_n=\left(1+\dfrac{z}{n} \right)^{n} \underset{n \to +\infty}{\longrightarrow} \exp(z)$$
the author of the textbook uses the following method, but there are some steps I'm not sure I got right, so would someone elaborate on them?




Let $x = \Re(z)$ and $y=\Im(z)$





  • ${\displaystyle r_n=\sqrt{\left(1+\dfrac{x}{n} \right)^{2}+\dfrac{y^{2}}{n^2}}=\sqrt{1+\dfrac{2x}{n}+o\left(\dfrac{1}{n}\right)}=1+\dfrac{x}{n}+o\left(\dfrac{x}{n}\right)}$

  • why if $r_n=1+\dfrac{x}{n}+o\left(\dfrac{x}{n} \right)$ then $\ln(r_n)\sim \dfrac{x}{n}$

  • how we can get the expression of $\tan(\alpha_n)$

  • why if $\tan\left( \alpha_n \right)\sim \dfrac{y}{n}$ then $\alpha_{n}\sim \dfrac{y}{n}$






My thoughts:




First, I think there is a typo: in $z_n=r_n^{n}e^{n\alpha_n}$ we should write $z_n=r_n^{n}e^{i n\alpha_n}$ instead.




  • ${\displaystyle r_n=\sqrt{\left(1+\dfrac{x}{n}
    \right)^{2}+\dfrac{y^{2}}{n^2}}=\sqrt{1+\dfrac{2x}{n}+o\left(\dfrac{1}{n}\right)}=1+\dfrac{x}{n}+o\left(\dfrac{x}{n}\right)}$



note that





$$\left(1+x \right)^{\alpha}\underset{x\to 0}{=}1+\alpha x +o\left( x\right) $$




$$\begin{cases} \left(1+\dfrac{x}{n} \right)^{2}=1+2\dfrac{x}{n}+o\left( \dfrac{1}{n}\right) \\ \dfrac{y^{2}}{n^{2}}=o\left(\dfrac{1}{n}\right)\end{cases}\implies 1+\dfrac{2x}{n}+o\left(\dfrac{1}{n}\right)$$



on the other hand
\begin{aligned}\sqrt{1+\dfrac{2x}{n}+o\left(\dfrac{1}{n}\right)}=\left(1+\dfrac{2x}{n}+o\left(\dfrac{1}{n}\right) \right)^{\frac{1}{2}}&=1+\dfrac{1}{2}\left( \dfrac{2x}{n}+o\left( \dfrac{1}{n}\right)\right)+o\left(o\left(\dfrac{1}{n}\right)\right) \\
&=1+\dfrac{x}{n}+o\left( \dfrac{1}{n}\right)+o\left(\dfrac{1}{n}\right)\\
&=1+\dfrac{x}{n}+o\left( \dfrac{1}{n}\right)

\end{aligned}
then $$\fbox{$r_{n}=1+\dfrac{x}{n}+o\left( \dfrac{1}{n}\right)$}$$




  • why if $r_n=1+\dfrac{x}{n}+o\left(\dfrac{x}{n} \right)$ then $\ln(r_n)\sim \dfrac{x}{n}$



note that :





if $u_n\sim v_n $ and $v_n \sim w_n $ then $u_n \sim w_n$
$u_n\sim v_n \iff u_n=v_n+o(v_n)$




I can't show this.




  • how we can get the expression of $\tan(\alpha_n)$





If $a+ib=\rho e^{i\theta}$ with $a>0$ then $\tan(\theta)=\frac b a$




since $\left( 1+\dfrac{z}{n}\right)=\left( 1+\dfrac{x}{n}\right)+i\dfrac{y}{n}=r_{n}e^{i\alpha_n} $ then
$$\tan(\alpha_n)=\dfrac{\dfrac{y}{n}}{1+\dfrac{x}{n}}=\dfrac{y}{x+n}$$
$$\fbox{$\tan(\alpha_n)=\dfrac{y}{x+n} $} $$




  • why if $\tan\left( \alpha_n \right)\sim \dfrac{y}{n}$ then $\alpha_{n}\sim \dfrac{y}{n}$





if $u_n\sim v_n $ and $v_n \sim w_n $ then $u_n \sim w_n$




So we should show that $\alpha_n \underset{n \to +\infty}{\overset{}{\longrightarrow}}0$ to be able to say that $\tan(\alpha_n)\sim \alpha_n$



we've $\tan(\alpha_n)=\dfrac{y}{x+n}$ then $\alpha_n=\arctan\left(\dfrac{y}{x+n}\right) $
$$\lim_{n\to +\infty}\alpha_n=\lim_{n\to +\infty} \arctan\left(\dfrac{y}{x+n}\right)=\arctan\left(\dfrac{y}{x+\lim_{n\to +\infty} n}\right)=\arctan(0)=0 $$

then $$\begin{cases}\tan(\alpha_n) \sim \alpha_n \\ \tan(\alpha_n)\sim \dfrac{y}{n} \end{cases} \implies \alpha_n\sim \dfrac{y}{n}$$




  • If my proof is wrong, would you elaborate on the steps?
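
The statement itself is easy to check numerically for a particular $z$; a quick sketch in Python, taking for instance $z=1+2i$:

```python
import cmath

z = 1 + 2j
for n in (10, 100, 1000, 100000):
    print(n, (1 + z / n) ** n)   # approaches exp(z) as n grows
print("exp(z) =", cmath.exp(z))
```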

integration - Does $\int_{0}^{\infty} \cos(\cosh x) \cosh (\alpha x) \, \mathrm dx$ converge for $0 \le \alpha < 1$?



I highly suspect that $$\int_{0}^{\infty} \cos(\cosh x) \cosh (\alpha x) \, \mathrm dx$$ converges for $0\le \alpha <1$



(If true, it obviously also converges for $-1 < \alpha <0$.)




I can show that the integral converges for $\alpha=0$:
$$\int_{0}^{\infty} \cos(\cosh x) \, \mathrm dx= \int_{1}^{\infty} \frac{\cos (u)}{\sqrt{u^{2}-1}} \, \mathrm du$$
which converges by Dirichlet's test



I can also show that the integral doesn't converge for $\alpha=1$:



$$\int_{0}^{\infty} \cos(\cosh x) \cosh(x) \, \mathrm dx = \int_{1}^{\infty} \frac{u \cos (u)}{\sqrt{u^{2}-1}} \, \mathrm du$$
which doesn't converge since $\frac{u \cos (u)}{\sqrt{u^{2}-1}} \sim \cos (u)$ for large values of $u$



For other values of $\alpha$ between $0$ and $1$, I'm not sure what to do. I don't know how to express $\cosh (\alpha x)$ in terms of $\cosh (x)$.



Answer



For $\alpha \in [0, 1)$, let $t = e^x$. Then



$$ \int_{0}^{R} \cos(\cosh x)\cosh(\alpha x) \, dx
= \int_{1}^{e^R} \cos\left( \frac{t+t^{-1}}{2} \right)\frac{t^{\alpha}+t^{-\alpha}}{2t} \, dt. $$



Noticing that



$$ \cos\left( \frac{t+t^{-1}}{2} \right) = \cos\left( \frac{t}{2} \right) + \mathcal{O}\left( \frac{1}{2t} \right) \quad \text{as } t\to\infty, $$




we find that the integral converges as $R\to\infty$ by Dirichlet's test.


calculus - Calculating $\lim_{x \rightarrow 0} \frac{\tan x - \sin x}{x^3}$.



I have a difficulty in calculating this limit:



$$\lim_{x \rightarrow 0} \frac{\tan x - \sin x}{x^3},$$



I have tried $\tan x = \frac{\sin x}{\cos x}$, then combined the fractions in the numerator of the given limit; finally I got $$\lim_{x \rightarrow 0} \frac{\sin x}{x^{3} \cos x} - \lim_{x \rightarrow 0} \frac{ \sin x}{x^3},$$



Then I got stuck; could anyone help me solve it?


Answer




For $x\ne0,$



$${\tan x-\sin x\over x^3}=\left({\sin x\over x}\right)^3\dfrac1{\cos x \,(1+\cos x)}$$



Now as $x\to0$, $x\ne0$, the first factor tends to $1$ and the second to $\dfrac{1}{1\cdot(1+1)}$, so the limit is $\dfrac12$.


Monday 28 September 2015

A logarithm-like functional equation



Suppose we are given that a monotonically decreasing smooth function $f$ on $(0,\infty)$ obeys the functional equation $f(x) = -f(\frac{1}{x})$, and satisfies $f(\frac{1}{3}) = \frac{1}{2}$ and $f(\frac{1}{2}) = \frac{1}{3}$. Furthermore, $\lim\limits_{x\rightarrow0} f(x) = 1$. Is there a way to infer information about the function from these data alone, or even classify all functions satisfying them? I see that a function proportional to $\log x$ satisfies the functional equation, but cannot satisfy the special values.



I now found a function satisfying these data: $f(x) = \dfrac{1-x}{1+x}$.


Answer



The following Möbius transformation has the desired properties:




$$f(x):=\frac{1-x}{1+x}$$



$$f(0)=1,f(x)=-f \left(\frac{1}{x}\right),f \left(\frac{1}{3} \right)=\frac{1}{2},f \left(\frac{1}{2} \right)=\frac{1}{3}$$



EDIT:



I did not pay attention to the modification of the original question that added the right answer while I was typing my answer to the original question above.



Here I post a general method to deal with such a problem, by providing a new function $g(x)$ which satisfies the desired requirements.




Define
$$g(x):=\frac{a(x-x^{-1})+b(x-x^{-1})^3+c(x-x^{-1})^5}{u x^5+v x^{-5}}$$



Then



$$g(x)+g(x^{-1})=0 \text{ requires that } u=v$$.



$$\lim_{x\to 0}g(x)=-\frac{c}{v}=1 \text{ requires that } v=-c$$



$$\lim_{x\to \infty}g(x)=\frac{c}{v}=-1 \text{ requires that } v=-c$$




We can then solve
$$g(1/3)=\frac{1}{2}\text { and } g(1/2)=\frac{1}{3}$$
for $a$ and $b$ and obtain:



$$a=\frac{2683}{504}c \text{ and } b=-\frac{61}{42}c \text{ }(c\not = 0)$$


calculus - Real roots of a polynomial



Let $p$ be an even degree polynomial with real coefficients such that the product of the constant term and the leading coefficient is negative. Show that $p$ has at least two real roots.



Thanks!


Answer



Hint: Take a look at $p(0)$ and the limits of $p$ as $x$ approaches $\pm\infty$.


complex analysis - Find $\lim_{n \to \infty} n(\frac{1+i}{2})^n$

Find $\lim_{n \to \infty} n(\frac{1+i}{2})^n$.



I don't know how to solve this limit. Should I use the fact that $n(\sqrt{2}/2)^n\cos(n \pi / 4)$ and $n(\sqrt{2}/2)^n\sin(n \pi / 4)$ are the real and imaginary parts of $n(\frac{1+i}{2})^n$?



Can anyone give me a hint to solve the problem?
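
A numerical sketch (not a proof): since $\left|\frac{1+i}{2}\right|=\frac{\sqrt2}{2}<1$, the geometric decay beats the factor $n$ and the terms shrink to $0$.

```python
q = (1 + 1j) / 2
for n in (10, 50, 100, 300):
    print(n, abs(n * q ** n))   # magnitudes tend to 0
```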

calculus - Verification of the limit of $(x-\sin(x))/(\tan(x)-x)$ as $x\to 0$



I would appreciate if someone could verify to me my answers.



$$\lim_{x\rightarrow0}\frac{x-\sin(x)}{\tan(x)-x}$$I used L'Hopital's rule twice and got answer $1/2$.
$$\lim_{x\rightarrow0^+}\frac{1}{\sin(4x)}-\frac{1}{4x}$$ also I used L'Hopital's rule twice and got $0$.
$$\lim_{x\rightarrow2^-}(x^2-4)\ln(2-x)$$I used L'Hopital's rule once and got $0$.



Thanks.


Answer




Since the answer to the first problem has been given by GTX OC, let us focus on the second problem.
$$(x^2-4) \log(2-x) = (x-2)(x+2)\log(2-x)=-(x+2)(2-x)\log(2-x)$$
I suppose you know that $x \log(x)$ goes to 0 when $x$ goes to zero. Then, ... Are you able to continue with this ?


trigonometry - Expressions of $\sin \frac{A}{2}$ and $\cos \frac{A}{2}$ in terms of $\sin A$



I am trying to understand the interpretation of $\sin \frac{A}{2}$ and $\cos \frac{A}{2}$ in terms of $\sin A$ from my book, here is how it is given :



We have $ \bigl( \sin \frac{A}{2} + \cos \frac{A}{2} \bigr)^{2} = 1 + \sin A $ and $ \bigl( \sin \frac{A}{2} - \cos \frac{A}{2} \bigr)^{2} = 1 - \sin A $




By adding and subtracting we have $ 2 \cdot \sin \frac{A}{2} = \pm \sqrt{ 1 + \sin A } \pm \sqrt{ 1 - \sin A } $ ---- (1) and $ 2 \cdot \cos \frac{A}{2} = \mp \sqrt{ 1 + \sin A } \mp \sqrt{ 1 - \sin A } $ ---(2)



I have understood everything up to this point well.



Now they have broken them into quadrants:



In 1st quadrant :



$$ 2 \cdot \sin \frac{A}{2} = \sqrt{ 1 + \sin A } - \sqrt{ 1 - \sin A } $$

$$ 2 \cdot \cos \frac{A}{2} = \sqrt{ 1 + \sin A } + \sqrt{ 1 - \sin A } $$



In 2nd quadrant :



$$ 2 \cdot \sin \frac{A}{2} = \sqrt{ 1 + \sin A } + \sqrt{ 1 - \sin A } $$
$$ 2 \cdot \cos \frac{A}{2} = \sqrt{ 1 + \sin A } - \sqrt{ 1 - \sin A } $$



In 3rd quadrant :



$$ 2 \cdot \sin \frac{A}{2} = \sqrt{ 1 + \sin A } - \sqrt{ 1 - \sin A } $$

$$ 2 \cdot \cos \frac{A}{2} = \sqrt{ 1 + \sin A } + \sqrt{ 1 - \sin A } $$



In 4th quadrant :



$$ 2 \cdot \sin \frac{A}{2} = - \sqrt{ 1 + \sin A } - \sqrt{ 1 - \sin A } $$
$$ 2 \cdot \cos \frac{A}{2} = - \sqrt{ 1 + \sin A } + \sqrt{ 1 - \sin A } $$



Now, I know the ALL-SINE-TAN-COSINE rule, but I am still not able to figure out how the respective signs are computed in these (above) cases.


Answer



The easiest way of computing the signs is to make them match; we know that $\sin x > 0$ if $0 < x < \pi$ and that $\cos x > 0$ if $-\pi/2 < x < \pi/2$. Knowing whether $\sin A$ is greater than or less than zero tells you whether $\sqrt{1-\sin A}$ is greater or less than $\sqrt{1+\sin A}$; that in turn lets you figure out what the overall sign on all of the right-hand terms is, and each quadrant corresponds to one of the four positive/negative pairs on the right-hand terms.



linear algebra - If an endomorphism satisfies $\alpha^* = -\alpha$, then its eigenvalues are purely imaginary



This is another exercise from Golan's book.



Problem: Let $V$ be an inner product space over $\mathbb{C}$ and let $\alpha$ be an endomorphism of $V$ satisfying $\alpha^*=-\alpha$, where $\alpha^*$ denotes the adjoint. Show that every eigenvalue of $\alpha$ is purely imaginary.



My proposed solution is below.


Answer




Let me show another argument which applies to a more general setting: if $\alpha$ is a linear operator on a Hilbert space satisfying $\alpha^*=-\alpha$, then the spectrum of $\alpha$ is purely imaginary (i.e. real part equal zero).



Indeed, one simply needs to notice that $\alpha-\lambda\,\text{id}$ is invertible if and only if $(\alpha-\lambda\,\text{id})^*$ is invertible. As $$(\alpha-\lambda\,\text{id})^*=\alpha^*-\overline\lambda\,\text{id}=-\alpha-\overline\lambda\,\text{id}=-(\alpha+\overline\lambda\,\text{id}),$$ we conclude that any $\lambda$ in the spectrum of $\alpha$ satisfies $\overline\lambda=-\lambda$.


Intuition behind logarithm inequality: $1 - \frac1x \leq \log x \leq x-1$



One of fundamental inequalities on logarithm is:
$$ 1 - \frac1x \leq \log x \leq x-1 \quad\text{for all $x > 0$},$$
which you may prefer write in the form of
$$ \frac{x}{1+x} \leq \log{(1+x)} \leq x \quad\text{for all $x > -1$}.$$



The upper bound is very intuitive -- it's easy to derive from Taylor series as follows:
$$ \log(1+x) = \sum_{n=1}^\infty (-1)^{n+1}\frac{x^n}{n} \leq (-1)^{1+1}\frac{x^1}{1} = x.$$




My question is: "what is the intuition behind the lower bound?" I know how to prove the lower bound of $\log (1+x)$ (maybe by checking the derivative of the function $f(x) = \frac{x}{1+x}-\log(1+x)$ and showing it's decreasing) but I'm curious how one can obtain this kind of lower bound. My ultimate goal is to come up with a new lower bound on some logarithm-related function, and I'd like to apply the intuition behind the standard logarithm lower-bound to my setting.


Answer



Take the upper bound:
$$
\ln {x} \leq x-1
$$
Apply it to $1/x$:
$$
\ln \frac{1}{x} \leq \frac{1}{x} - 1

$$
This is the same as
$$
\ln x \geq 1 - \frac{1}{x}.
$$


Proof by induction help. I seem to be stuck and my algebra is a little rusty




Stuck on a homework question with mathematical induction, I just need some help factoring and am getting stuck.



$\displaystyle \sum_{1 \le j \le k} j^3 = \left[\frac{k(k+1)}{2}\right]^2$



The induction step, $\displaystyle \left[\frac{k(k+1)}{2}\right]^2
+(k+1)^3$, is where I am having a problem.




If you could give me some hints as to where to go since I keep getting stuck or writing the wrong equation.



I'll get to $\displaystyle \left[{k^2+2k\over2}\right]^2 + 2{(k+1)^3\over2}$



Any push in the right direction will be appreciated.


Answer



$(\frac{k(k+1)}{2})^2+(k+1)^3$



$=\frac{k^2(k+1)^2}{4}+(k+1)(k+1)^2$

$=\frac{(k+1)^2}{4}\left(k^2+4(k+1)\right)$




$=\frac{(k+1)^2}{4}(k^2+4k+4)$



$=\frac{(k+1)^2}{4}(k+2)^2$


Sunday 27 September 2015

abstract algebra - How to prove that $z\gcd(a,b)=\gcd(za,zb)$




I need to prove that $z\gcd(a,b)=\gcd(za,zb)$.




I tried a lot, for example looking at the set of common divisors of the two sides, but I can't conclude anything from that. Can you please give me some advice on how I can handle this problem? Note that $a,b,z \in \mathbb{Z}$.



Answer



Below are a few proofs of the gcd distributive law $\rm\:(ax,bx) = (a,b)x\:$ using Bezout's identity, universal gcd laws, and unique factorization. In each proof the first line serves as a hint.






First we show that the gcd distributive law follows immediately from the fact that, by Bezout, the gcd may be specified by linear equations. Distributivity follows because such linear equations are preserved by scalings. Namely, for naturals $\rm\:a,b,c,x \ne 0$



$\rm\qquad\qquad \phantom{ \iff }\ \ \ \:\! c = (a,b) $



$\rm\qquad\qquad \iff\ \: c\:\ |\ \:a,\:b\ \ \ \ \ \ \&\ \ \ \ c\ =\ na\: +\: kb,\ \ \ $ some $\rm\:n,k\in \mathbb Z$




$\rm\qquad\qquad \iff\ cx\ |\ ax,bx\ \ \ \&\ \ \ cx = nax + kbx,\ \,$ some $\rm\:n,k\in \mathbb Z$



$\rm\qquad\qquad { \iff }\ \ cx = (ax,bx) $



The reader familiar with ideals will note that these equivalences are captured more concisely in the distributive law for ideal multiplication $\rm\:(a,b)(x) = (ax,bx),\:$ when interpreted in a PID or Bezout domain, where the ideal $\rm\:(a,b) = (c)\iff c = gcd(a,b)$






Alternatively, more generally, in any integral domain $\rm\:D\:$ we may employ the universal definitions of GCD, LCM to generalize the above proof.




Theorem $\rm\ \ (a,b)\ =\ (ax,bx)/x\ \ $ if $\rm\ (ax,bx)\ $ exists in $\rm\:D.$



Proof $\rm\quad\: c\ |\ a,b \iff cx\ |\ ax,bx \iff cx\ |\ (ax,bx) \iff c\ |\ (ax,bx)/x\ \ \ $ QED



Such universal definitions often serve to simplify proofs, e.g. see this proof of the GCD * LCM law.






Alternatively, comparing powers of primes in unique factorizations, it reduces to the following

$$ \min(a+c,\,b+c)\ =\ \min(a,b) + c$$



The proof is precisely the same as the prior proof, replacing gcd by min, and divides by $\le$, and



$$\begin{eqnarray} {\rm employing}\quad\ c\le a,b&\iff& c\le \min(a,b)\quad&&\rm[universal\ definition\ of\ \ min]\\
\rm the\ analog\ of\quad\ c\ \, |\, \ a,b&\iff&\rm c\ \ |\ \ gcd(a,b)\quad&&\rm[universal\ definition\ of\ \ gcd] \end{eqnarray}$$


Number of zero digits in factorials



Here is a riddle someone has been asked in a job interview: How many zero digits are there in $100!$?



Well, I found the first $24$ quite fast by counting how many times five divides $100!$ ($5$ divides $20$ times and $25$ divides it $4$ times).




However, there are more zero digits in the middle of the number (these can be found by hand, by typing factorial(100) in sage).



My question is whether there is a smart way to determine the number of zero digits in $100!$, and more generally in $n!$.



By the way, this will not affect the job interview as it was finished some time ago.


Answer



You can get a very good estimate by (a) calculating the number of powers of ten in the factorial, (b) estimating the total number of decimal digits (using Stirling's approximation), and (c) assuming all digits except the trailing zeroes are equally likely to have any value. Since there are plenty of powers of $2$ to go around, the number of trailing zeroes is equal to the number of powers of five, plus the number of powers of twenty-five, etc.
$$
T_n=\sum_{k=1}^{\infty}\left\lfloor{\frac{n}{5^{k}}}\right\rfloor.
$$

The total length as estimated by Stirling's approximation is
$$
L_n=\log_{10}n!=n\log_{10} n - \frac{n}{\ln 10}+O(\ln n).
$$
Combining these, our estimate of the total number of zeroes is
$$
Z_{n}\sim T_n + \frac{1}{10}\left(L_n - T_n\right)=\frac{9}{10}\sum_{k=1}^{\infty}\left\lfloor{\frac{n}{5^{k}}}\right\rfloor+\frac{1}{10}n\log_{10}n-\frac{n}{10\ln 10}+O(\ln n).
$$
This turns out to be pretty good. Using WolframAlpha to get the exact values:
$$

\begin{matrix}
\text{n} & \text{Estimate} & \text{Exact} & \text{Abs. Error}\\
\hline
1000 & 481 & 472 & 9\\
2000 & 1022 & 1025 & 3\\
4000 & 2166 & 2143 & 23\\
8000 & 4573 & 4645 & 72 \\
16000 & 9631 & 9560 & 71 \\
32000 & 20226 & 20227 & 1
\end{matrix}

$$
The result for $n=32000$ is fortuitously precise...
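
For comparison, the exact zero count and the same kind of estimate (using the exact digit count in place of Stirling's approximation) can be computed directly; a sketch in Python:

```python
from math import factorial

def trailing_zeros(n: int) -> int:
    """Number of trailing zeros of n!, i.e. powers of 5 in n!."""
    t, p = 0, 5
    while p <= n:
        t += n // p
        p *= 5
    return t

for n in (100, 1000, 2000):
    digits = str(factorial(n))
    t = trailing_zeros(n)
    estimate = t + (len(digits) - t) / 10   # trailing zeros + 1/10 of the rest
    print(n, digits.count("0"), round(estimate))
```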


algebra precalculus - Find the sum to n terms of the series $\frac{1}{1\cdot2\cdot3\cdot4} + \frac{1}{2\cdot3\cdot4\cdot5} + \frac{1}{3\cdot4\cdot5\cdot6}\ldots$




Find the sum to n terms of the series $\frac{1} {1\cdot2\cdot3\cdot4} + \frac{1} {2\cdot3\cdot4\cdot5} + \frac{1} {3\cdot4\cdot5\cdot6}\ldots $



Please suggest an approach for this task.


Answer



$\dfrac{1}{k(k+1)(k+2)(k+3)} = \dfrac{1}{3} (\dfrac{k+3}{k(k+1)(k+2)(k+3)} - \dfrac{k}{k(k+1)(k+2)(k+3)})$
$ = \dfrac{1}{3}(\dfrac{1}{k(k+1)(k+2)} - \dfrac{1}{(k+1)(k+2)(k+3)})$



$\sum_{k=1}^{\infty}\dfrac{1}{k(k+1)(k+2)(k+3)} = \dfrac{1}{3} \dfrac{1}{2*3} = \dfrac{1}{18}$




@moron Yes, you are right. For the first n terms:



$\sum_{k=1}^{n}\dfrac{1}{k(k+1)(k+2)(k+3)} = \dfrac{1}{3} (\dfrac{1}{1*2*3} - \dfrac{1}{(n+1)(n+2)(n+3)})$



[edit] more details



$\sum_{k=1}^{n}\dfrac{1}{k(k+1)(k+2)(k+3)} = \sum_{k=1}^{n} \dfrac{1}{3} (\dfrac{1}{k(k+1)(k+2)} - \dfrac{1}{(k+1)(k+2)(k+3)})$
$= \dfrac{1}{3} [\sum_{k=1}^{n} \dfrac{1}{k(k+1)(k+2)} - \sum_{k=1}^{n} \dfrac{1}{(k+1)(k+2)(k+3)}]$
$= \dfrac{1}{3} [\sum_{k=0}^{n-1} \dfrac{1}{(k+1)(k+2)(k+3)} - \sum_{k=1}^{n} \dfrac{1}{(k+1)(k+2)(k+3)}]$
$= \dfrac{1}{3} (\dfrac{1}{1*2*3} - \dfrac{1}{(n+1)(n+2)(n+3)})$


real analysis - Show that $f, f^{-1}$ are continuous




Let $A,B \subset \mathbb{R}$ be open, and $f:A\rightarrow B$ be surjective and strictly monotonic increasing. Show that $f,f^{-1}$ are continuous.




Proof: I first show $f$ is injective. Let $x,y \in A$ with $x\neq y.$ This means either $x<y$ or $y<x$; since $f$ is strictly increasing, either $f(x)<f(y)$ or $f(y)<f(x)$, so $f(x)\neq f(y)$. Hence $f$ is injective and, being surjective, bijective.


To show $f$ is continuous, let $D \subset B$ be open. I need to show $f^{-1}(D)$ is open in $A$. Suppose not, i.e., $(f^{-1}(D))^{c}$ is not closed. $\quad\Rightarrow \exists$ a sequence $(x_n)_{n\in\mathbb{N}}$ in $(f^{-1}(D))^{c}$ that converges to $x$ which is in $f^{-1}(D)$. As $f$ is bijective, $\exists$ a unique $y_n,y$ for each $x_n$ such that $f(x_n)=y_n,f(x)=y,\forall n\in\mathbb{N}$, where $(y_n)_{n\in\mathbb{N}}$ is in $D^{c}, y\in D$. Here, $y_n\rightarrow y$. If it does not, this means $f^{-1}(y_n)=x_n$ does not converge to $f^{-1}(y)=x$, contradiction. $\quad \Rightarrow D^c$ is not closed. $\Rightarrow D$ is not open. Contradiction.



$f^{-1}$ can also be proved to be continuous in the same way as above.



I somehow get the feeling that I am not allowed to argue $y_n \rightarrow y$, because my argument, I think, implicitly assumes $f$ is continuous, which I have not proven yet!



Is my proof correct, or is my doubt justified? Please help me get out of this situation!


Answer



A conceptual proof follows from the material in $\S$ 6.3 of my honors calculus notes:




Step 1: We are given that $f$ is bijective and increasing. So $f^{-1}$ exists and is moreover increasing: suppose not; then there are $y_1 < y_2 \in B$ with $f^{-1}(y_1) \geq f^{-1}(y_2)$. Then applying $f$ we get $y_1 \geq y_2$, contradiction. Thus the situation is perfectly symmetrical with respect to $f$ and $f^{-1}$, so it suffices to show that $f$ is continuous.



Step 2: We use the fact that every monotone function defined on $A$ has a left hand limit at every $c \in A$ -- namely $\sup \{ f(x) \mid x < c\}$ and a right hand limit -- namely $\inf \{ f(x) \mid x > c\}$, and the value $f(x)$ lies in between. (This is part of the Monotone Jump Theorem in $\S$ 6.3 of my notes.) Thus the only way we can have a discontinuity is if $\lim_{x \rightarrow c^-} f(x) < f(c)$ or $f(c) < \lim_{x \rightarrow c^+} f(c)$. But if either of these occurs, then $f(c)$ is not an interior point of $f(A)$, contradicting the hypothesis that $f(A)$ is open.



This answers the OP's question. I claim that it also proves that the inverse of a continuous function is continuous, at least in the case that the domain of $f$ is an interval. This is because every injective continuous function $f: I \rightarrow \mathbb{R}$ must be monotone: see $\S$ 5.6.3 of the notes. (There is a bit of combinatorial trickiness here.) Using the fact that every open subset of $\mathbb{R}$ is a disjoint union of open intervals -- which is not in the notes (I don't do any explicit topology whatsoever there) but is well known and not hard to show -- and that for every open interval $I$ and continuous $f$, $f(I)$ is an interval ($\S$ 6.2 of the notes) one sees that this extends to continuous functions on any open subset of $\mathbb{R}$, but this seems to be the longer way around this particular question.



Added: It is certainly not the case that any continuous bijection $f: X \rightarrow Y$ of topological spaces must have a continuous inverse. To get a counterexample, let $Y$ be your favorite non-discrete topological space, let $X$ be the same set endowed with the discrete topology, and let $f$ be the identity map. From this perspective the "automatic continuity" property of the inverse for continuous bijections on open subsets of $\mathbb{R}$ is surprising. It can be generalized to open subsets of $\mathbb{R}^n$ and then becomes a quite famous (and rather deep) theorem, Brouwer's Invariance of Domain. This can be generalized to topological manifolds. There are other "Open Mapping Theorems" in mathematics -- famously in complex analysis and Banach space theory -- but such results are highly prized, as they are the exception rather than the rule.


Saturday 26 September 2015

calculus - What does $dx$ mean?



$dx$ appears in differential equations, such us derivatives and integrals.



For example, for a function $f(x)$, its first derivative is $\dfrac{d}{dx}f(x)$ and its integral is $\displaystyle\int f(x)dx$. But I don't really understand what $dx$ is.


Answer



Formally, $dx$ does not mean anything. It's just a syntactical device to tell you the variable to differentiate with respect to or the integration variable.


Differentiability implies continuous derivative?

We know differentiability implies continuity, and in the case of two independent variables both partial derivatives $f_x$ and $f_y$ must be continuous functions in order for the function $f(x,y)$ to be defined as differentiable.



However, in the case of one independent variable, is it possible for a function $f(x)$ to be differentiable throughout an interval $R$ but for its derivative $f'(x)$ to not be continuous?

probability - Expected value question.



This is a question from my lecture notes.



"""
Persons arrive at a copy machine according to a Poisson process with rate λ=1 per minute.

The number of copies made is uniformly distributed between 1 and 10. Each copy requires
3 seconds. Find the average waiting time in the queue and the average system waiting time.
"""



I know how to do the problem, but I am having trouble understanding why the following calculation of an expectation is correct.
$$
E[X^2] = \sum_{i = 1}^{10}(3i)^2 * \space Pr(X = 3i)
$$



I don't understand why there is a $(3i)^2$ and why there is a $Pr(X =3i)$ term in this expected value. I would set this up as:




$$
3E[X^2] = \sum_{i = 1}^{10}(i)^2 * \space Pr(X = i)
$$
Can anyone please provide some intuition? I've looked online and through my textbooks and I can't find anything that helps me with this intuition. I know it has something to do with the 3 seconds in the problem statement, but I can't figure out how to make sense of this and therefore generalize this problem.



Thanks in advance!


Answer



I am guessing that you are trying to compute variance of $X$.




The problem is we did not define $X$.



In your lecture note, $X$ seems to be defined as the waiting time. The waiting time comes in multiples of 3 seconds in this setting.



We have $Var[X]=E[X^2]-E[X]^2$



On the other hand, it seems that you are trying to work with the number of copies. To avoid confusion, let's define it to be $Y$ instead. In particular, we have the relationship $$X=3Y$$ and if we square them and take expectations, we have



$$E[X^2]=3^2E[Y^2].$$




Again, we can compute $Var[Y]=E[Y^2]-E[Y]^2$



A point to consider: suppose you really prefer to work with $Y$ (the number of copies); can you compute $Var[X]$?



$$Var[X]=E[(3Y)^2]-E[3Y]^2=9(E[Y^2]-E[Y]^2)=3^2Var[Y]$$
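
A tiny numerical check of the last identity, as a sketch assuming $Y$ uniform on $\{1,\dots,10\}$ and $X=3Y$ seconds of copying:

```python
ys = range(1, 11)
E_Y  = sum(ys) / 10
E_Y2 = sum(y * y for y in ys) / 10
var_Y = E_Y2 - E_Y ** 2                       # variance of the number of copies

E_X  = 3 * E_Y
E_X2 = sum((3 * y) ** 2 for y in ys) / 10
var_X = E_X2 - E_X ** 2                       # variance of the waiting time

print(var_Y, var_X, 9 * var_Y)                # var_X equals 9 * var_Y
```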


Friday 25 September 2015

trigonometry - Show that $\sin^2 \theta \cdot \cos^2\theta = (1/8)[1 - \cos(4 \theta)]$.

I have these two problems I'm working on!



First, the double angle formula! I attempted this formula a lot but couldn't get to the identity:
$$\sin^2 \theta \cdot \cos^2\theta = \tfrac18[1 - \cos(4 \theta)]$$



For the above question you can only use the following:

\begin{align}
\sin^2\theta &= \tfrac12 (1-\cos(2 \theta)) \\
\cos^2\theta &= \tfrac12(1 + \cos( 2 \theta ))
\end{align}
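
For reference, a sketch of one route that uses only these two identities (the second one applied once more with $2\theta$ in place of $\theta$):

$$\sin^2\theta\cdot\cos^2\theta=\tfrac12\bigl(1-\cos2\theta\bigr)\cdot\tfrac12\bigl(1+\cos2\theta\bigr)=\tfrac14\bigl(1-\cos^22\theta\bigr)=\tfrac14\Bigl(1-\tfrac12\bigl(1+\cos4\theta\bigr)\Bigr)=\tfrac18\bigl(1-\cos4\theta\bigr).$$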



And lastly this Sum And Difference Formula! I tried this one so much, I'm leaning toward it being impossible (it's obviously not.... because it's a question):
$$\cos(a-b) \cdot \cos(a + b) = (\cos^2a - \sin^2b).$$

limits - Why $\lim\limits_{n\to \infty}\left(\frac{n+3}{n+4}\right)^n \neq 1$?




Why doesn't $\lim\limits_ {n\to \infty}\ (\frac{n+3}{n+4})^n$ equal $1$?




So this is the question.




I actually found that it equals $e^{-1}$. I could prove it using some reordering and cancelling.



However another way I took was this:



$$\lim_ {n\to \infty}\ \left(\frac{n}{n+4}+\frac{3}{n+4}\right)^n$$



with the limit of the first term going to $1$ and the second to $0$. So $(1+0)^n=1$ not $e^{-1}$.


Answer



Because $1^\infty$ is a tricky beast. Perhaps the power overwhelms the quantity that's
just bigger than $1$, but approaching $1$, and the entire expression is large. Or perhaps not...




Perhaps the power overwhelms the quantity that's just smaller than $1$, but approaching $1$, and the entire expression tends to $0$ . Or perhaps not...



In your case,
$$
{n+3\over n+4} = 1-{1\over n+4}.
$$
And, as one can show (as you did): $$\lim\limits_{n\rightarrow\infty}(1-\textstyle{1\over n+4})^n =
\lim\limits_{n\rightarrow\infty}\Bigl[ (1-\textstyle{1\over n+4})^{n+4}\cdot (1-{1\over n+4})^{-4}\Bigr] =
e^{-1}\cdot1=e^{-1}.$$




Here, the convergence of $1-{1\over n+4}$ to 1 is too fast for the $n^{\rm th}$ power to drive it back down to $0$.


induction - Show the Fibonacci numbers satisfy $F(n) \ge 2^{(n-1)/2}$



Use induction to show that the Fibonacci numbers satisfy $F(n) \ge 2^{(n-1)/2}$ for all $n \ge 3$.




My work thus far:




Base Case: F(3) $\ge$ $2 ^ {(3-1) / 2}$ => F(3) $\ge$ $2 ^ {1}$



Induction Hypothesis: Assume F(n) is true for all 3 < n < k



Inductive step: for $k + 1$, $F(k + 1) \ge 2^{(k + 1 - 1)/2} \Rightarrow
F(k+1) \ge 2^{k/2}$





I'm not sure where to go from here.


Answer



You seem to be misunderstanding how a proof by induction works. Say you have a proposition $P(n)$ to be verified for all natural numbers $n=0,1,2,\dots$. The base case is to show that $P(0)$ is true, which is usually the easiest. The inductive hypothesis is that for some $k$, $P(n)$ is true for all $n=0,1,2,\dots,k$. You seem to get the idea until now. The inductive step is to show, assuming $P(0),P(1),\dots,P(k)$, that $P(k+1)$ is also true. This is what allows the domino reaction to occur: once $P(k+1)$ is true, so is $P(k+2)$, and so on for every natural number.



In this case, $F(k+1)\geq2^{(k+1-1)/2}$ is what you want to show. You do not a priori know it to be true. Thus, it makes no sense to start by assuming it, as you seem to have done. Instead, start with the assumptions that
$$ F(k-1)\geq 2^{(k-2)/2}\quad\text{and}\quad F(k)\geq 2^{(k-1)/2}.$$
Note that by definition, $F(k+1)=F(k)+F(k-1)$. Summing the above two inequalities,
$$F(k+1)=F(k)+F(k-1)\geq 2^{(k-2)/2}+2^{(k-1)/2}.$$

The big question, now, is whether it is true that $2^{(k-2)/2}+2^{(k-1)/2}$ is greater than or equal to $2^{k/2}$. It turns out that it is true. To show this is the inductive step that you have to make and the conclusion that would complete the proof. Can you finish now?






Try completing the inductive step on your own first. If you need a hint, look below:




Note that $2^{(k-1)/2}>2^{(k-2)/2}$, so $2^{(k-1)/2}+2^{(k-2)/2}>2\cdot2^{(k-2)/2}=2^{(k-2)/2+1}=2^{k/2}$.



trigonometry - Problem in the properties of limit: $\lim\limits_{x \to\frac{\pi}{3}}\frac{\sin\left(x-\frac{\pi}{3}\right)}{1-2\cos\left(x\right)}$




$$\lim\limits_{x \to\frac{\pi}{3}}\frac{\sin\left(x-\frac{\pi}{3}\right)}{1-2\cos\left(x\right)}$$





I used the following property:
if $$\lim\limits_{\large x \to\frac{\pi}{3}}f(x)=L$$
then $$\lim\limits_{x \to\frac{\pi}{3}}\frac{1}{f\left(x\right)}=\frac{1}{L}$$



where $L$ is a real number and nonzero,hence we have:



$$\lim\limits_{\large x \to\frac{\pi}{3}}\frac{1-2\cos\left(x\right)}{\sin\left(x-\frac{\pi}{3}\right)}$$



substititute $x-\frac{\pi}{3}=u$:




$$\lim\limits_{\large u \to 0}\frac{1-2\cos\left(u+\frac{\pi}{3}\right)}{\sin\left(u\right)}$$$$=\lim\limits_{\large u \to 0}\frac{1-\cos\left(u\right)+\sqrt{2}\sin\left(u\right)}{\sin\left(u\right)}=\lim\limits_{\large u \to 0}\frac{1-\cos\left(u\right)}{\sin\left(u\right)}+\sqrt{2}$$$$=\lim\limits_{\large u \to 0}\frac{\sin\left(u\right)}{1+\cos\left(u\right)}+\sqrt{2}=\sqrt{2}$$



hence the main limit should be $\frac{1}{\sqrt{2}}$, which is wrong, but I don't know why. Also, is there any way to solve the problem without using Taylor series or L'Hopital's rule?


Answer



Your derivation is absolutely fine and right, but we have that



$$1-2\cos\left(u+\frac{\pi}{3}\right)=1-2\frac12\cos u+2\frac {\sqrt 3} 2\sin u=1-\cos u+\color{red}{\sqrt 3}\sin u$$



therefore




$$\lim\limits_{x \to\frac{\pi}{3}}\frac{\sin\left(x-\frac{\pi}{3}\right)}{1-2\cos\left(x\right)}=\frac1{\sqrt 3}$$



Note also that we don't need to invert the expression, indeed in the same way we have



$$\lim\limits_{\large u \to 0}\frac{\sin\left(u\right)}{1-2\cos\left(u+\frac{\pi}{3}\right)}=\lim\limits_{\large u \to 0}\frac{\sin\left(u\right)}{1-\cos u+\sqrt 3\sin u}=\lim\limits_{\large u \to 0}\frac{1}{\frac{1-\cos u}{\sin u}+\sqrt 3}=\frac1{\sqrt 3}$$



since



$$\frac{1-\cos u}{\sin u}=u\frac{1-\cos u}{u^2}\frac{u}{\sin u}\to 0$$


probability - Fair dice thrown until each number is obtained at least once



A fair die is thrown until each number is rolled at least once, but we throw at most seven times. I need to find the probability function of $X$, which describes how many fours are thrown.




My problem is that I can either throw 6 times (when I get one of the 6! permutations of (1,2,3,4,5,6)) or 7 times (when one number appears at least twice).



I think the sample space must be sth. like this (where $\omega_i\in\{1,..,6\} \forall i)$:



$\Omega=\{\omega : \omega=(\omega_1,...,\omega_6), \omega_i \not= \omega_j\ \forall i\not=j \}\cup A$ ... $A$ is the set of all $(\omega_1,...,\omega_7)$ where the first six components $\omega_1,...,\omega_6$ are not all different numbers, because otherwise we would be in the first set again. Maybe this is even too complicated, I don't know, but what does $P(X=k)$ look like?


Answer




Throw the die seven times whatever happens during the six first throws and compare $X$ to the number $Y$ of fours during these seven throws.





One knows that $Y$ is binomial Bin$(7,\frac16)$, that is, $p_k=P(Y=k)$ is $p_k={7\choose k}\left(\frac16\right)^k\left(\frac56\right)^{7-k}$ for every $0\leqslant k\leqslant 7$.



The only case when $Y\ne X$ is if $X=1$ and $Y=2$. This happens if the six first throws produce every result exactly once and if the seventh throw produces a four, thus $P(X\ne Y)=P(X=1,Y=2)$ is $q=\frac{6!}{6^6}\cdot\frac16$.



Finally, $P(X=1)=p_1+q$, $P(X=2)=p_2-q$, and, for every $0\leqslant k\leqslant 7$ not $1$ or $2$, $P(X=k)=p_k$.
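
A Monte Carlo sketch of the resulting distribution, simulating the game exactly as described above:

```python
import random
from collections import Counter

def one_game() -> int:
    rolls = [random.randint(1, 6) for _ in range(6)]
    if len(set(rolls)) < 6:              # not every number yet: throw a seventh time
        rolls.append(random.randint(1, 6))
    return rolls.count(4)                # X = number of fours thrown

counts = Counter(one_game() for _ in range(200_000))
total = sum(counts.values())
print({k: round(counts[k] / total, 4) for k in sorted(counts)})
```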


sequences and series - Evaluate $1+\left(\frac{1+\frac12}{2}\right)^2+\left(\frac{1+\frac12+\frac13}{3}\right)^2+\left(\frac{1+\frac12+\frac13+\frac14}{4}\right)^2+...$

Evaluate:



$$S_n=1+\left(\frac{1+\frac12}{2}\right)^2+\left(\frac{1+\frac12+\frac13}{3}\right)^2+\left(\frac{1+\frac12+\frac13+\frac14}{4}\right)^2+...$$



a_n are the individual terms to be summed.



My Try :
\begin{align}
&a_1=1\\
&a_2=\left(\frac{3}{4}\right)^2=\frac{9}{16}\\

&a_3=\left(\frac{11}{18}\right)^2\\
&a_4=\left(\frac{25}{48}\right)^2
\end{align}
Now what?
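
A numerical sketch (not an evaluation): the partial sums creep up slowly toward roughly $4.6$, which is consistent with the classical Euler-sum value $\frac{17\pi^4}{360}\approx 4.59987$ for this series.

```python
H = 0.0
total = 0.0
for n in range(1, 200_001):
    H += 1.0 / n                 # H_n, the n-th harmonic number
    total += (H / n) ** 2        # a_n = (H_n / n)^2
print(total)                     # ~ 4.599, still slightly below the limit
```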

Thursday 24 September 2015

elementary set theory - Why is $|Y^{\emptyset}|=1$ but $|\emptyset^Y|=0$ where $Y\neq \emptyset$




I have a question about the set of functions from a set to another set. I am wondering about the degenerate cases. Suppose $X^Y$ denotes the set of functions from a set $Y$ to a set $X$, why is $|Y^{\emptyset}|=1$ but $|\emptyset^Y|=0$ where $Y\neq \emptyset$?


Answer



The definition of $A^B$ is "the set of all functions with domain $B$ and codomain $A$".



A function $f$ from $B$ to $A$ is a set of ordered pairs such that:




  1. If $(x,y)\in f$, then $x\in B$ and $y\in A$.

  2. For every $b\in B$ there exists $a\in A$ such that $(b,a)\in f$.


  3. If $(b,a)$ and $(b,a')$ are in $f$, then $a=a'$.



Now, what happens if $B=\emptyset$? Well, then there can be no pair in $f$, because you cannot have $x\in B$. But notice that in that case, 2 is satisfied "by vacuity" (if it were false, you would be able to exhibit a $b\in\emptyset$ for which there is no $a\in A$ with $(b,a)\in f$; but there are no $b\in\emptyset$, so you cannot make such an exhibition; the statement is true because the premise, "$b\in\emptyset$", can never hold). Likewise 3 holds by vacuity. So it turns out that if we take $f=\emptyset$, then $f$ satisfies 1, 2, and 3, and therefore it is by all rights a "function from $\emptyset$ to $A$". But this is the only possible function from $\emptyset$ to $A$, because only the empty set works.



By contrast, if $A=\emptyset$, but $B\neq\emptyset$, then no set $f$ can satisfy both 1 and 2, so no set can be a function from $B$ to $A$.



That means that $Y^{\emptyset}$ always contains exactly one element, namely the "empty function", $\emptyset$. But if $Y\neq\emptyset$, then $\emptyset^Y$ contains no elements; that is, it is empty.



Therefore, since $Y^{\emptyset}$ has exactly one element, $|Y^{\emptyset}|=1$ regardless of what $Y$ is. But if $Y\neq\emptyset$, then $\emptyset^{Y}$ is empty, so $|\emptyset^{Y}| = 0$.



Wednesday 23 September 2015

integration - What's the value of $\int_0^1\frac{1}{2y} \ln(y) \ln^2(1-y) \, dy$?



I came across this integral while doing a different problem:




$$ \int_0^1\frac{1}{2y} \ln (y)\ln^2(1-y) \, dy$$





I think we can evaluate this integral by differentiating the common integral representation of the beta function, but it seems to get a bit messy.


Answer



We start by introducing the integral
$$
I(a,b)=\frac{1}{2}\int_0^1y^{a-1}(1-y)^b\,dy=\frac{1}{2}B(a,1+b),
$$
where $B$ denotes the beta function. Note that this integral is singular at $a=0$ and $b=-1$. Since $\partial_a y^a=y^a\ln y$ we are
led to calculate

$$
\partial_{a,b,b}I(a,b)=\frac{1}{2}\int_0^1 y^{a-1}(1-y)^b\ln y\bigl(\ln(1-y)\bigr)^2\,dy
$$
as $a$ and $b$ tend to $0$. We will below insert the "non-dangerous" point $b=0$. In other words, we want to calculate
$$
\partial_{a,b,b}B(a,1+b)\mid_{a\to 0^+,b\to 0}.
$$
When differentiating the beta function, polygammas appear. Indeed,
$$
\begin{aligned}

\partial_bB(a,1+b)&=B(a,1+b)\bigl(\psi_0(1+b)-\psi_0(1+a+b)\bigr)\\
\partial_{b,b}B(a,1+b)&=B(a,1+b)\Bigl(\bigl(\psi_0(1+b)-\psi_0(1+a+b)\bigr)^2
+\psi_1(1+b)-\psi_1(1+a+b)\Bigr).
\end{aligned}
$$
Next, we can actually insert $b=0$ before we differentiate with respect to
$a$ and take the limit $a\to 0$. We should differentiate the function (here we have used the facts that $\psi_0(1)=-\gamma$ (Euler's constant) and that $\psi_1(1)=\pi^2/6$)
$$
f(a)=B(a,1)\Bigl(\bigl(\gamma+\psi_0(1+a)\bigr)^2+\frac{\pi^2}{6}-\psi_1(1+a)\Bigr)
$$

and calculate $\lim_{a\to 0^+}f'(a)$. We get that
$$
\begin{aligned}
f'(a)&=B(a,1)\bigl(\psi_0(a)-\psi_0(1+a)\bigr)\Bigl(\bigl(\gamma+\psi_0(1+a)\bigr)^2+\frac{\pi^2}{6}-\psi_1(1+a)\Bigr)\\
&\quad+B(a,1)\Bigl(2\bigl(\gamma+\psi_0(1+a)\bigr)\psi_1(1+a)-\psi_2(1+a)\Bigr)
\end{aligned}
$$
Next, we use the (non-obvious) expansions around $a=0$
$$
\begin{aligned}

B(a,1)&=\frac{1}{a}+O(1)\\
\psi_0(a)&=-\frac{1}{a}-\gamma+O(a)\\
\psi_0(1+a)&=-\gamma+\frac{\pi^2}{6}a+O(a^2)\\
\psi_1(1+a)&=\frac{\pi^2}{6}+\psi_2(1)a+\frac{\pi^4}{30}a^2+O(a^3)\\
\psi_2(1+a)&=\psi_2(1)+\frac{\pi^4}{15}a+O(a^2).
\end{aligned}
$$
to find that, as $a\to0^+$,
$$
\begin{aligned}

f'(a)&\approx -\frac{1}{a^2}\Bigl(\bigl(\frac{\pi^2}{6}a\bigr)^2-\psi_2(1)a-\frac{\pi^4}{30}a^2\Bigr)+\frac{1}{a}\Bigl(2\frac{\pi^2}{6}a\frac{\pi^2}{6}-\psi_2(1)-\frac{\pi^4}{15}a\Bigr)+O(a)\\
&=-\frac{\pi^4}{180}+O(a)
\end{aligned}
$$
as $a\to 0^+$. We conclude that
$$
\partial_{a,b,b}B(a,1+b)\mid_{a\to 0^+,b\to 0}=-\frac{\pi^4}{180}.
$$
Finally, dividing by $2$ (remember, we had a one-half in front of the beta function in the beginning), we get that
$$

\int_0^1\frac{1}{2y}\ln y(\ln(1-y))^2\,dy=-\frac{1}{360}\pi^4.
$$
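
A quick numerical confirmation of the result, as a sketch using scipy:

```python
import numpy as np
from scipy import integrate

val, _ = integrate.quad(lambda y: np.log(y) * np.log(1 - y) ** 2 / (2 * y),
                        1e-12, 1 - 1e-12)
print(val, -np.pi ** 4 / 360)  # both approximately -0.2706
```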


linear algebra - Row Equivalent Matrices



If I have a matrix $A$ where there are zeros everywhere apart from the first row, what are the matrices that are not row equivalent to $A$?



I know that if two matrices are row equivalent, we can get from one to the other using elementary row operations.



My idea so far is that it would be all the matrices with $1$ or more zeroes in a row, but not all zeroes in the same row. I'm unsure how to put this into a formal argument.


Answer



Equivalently, two matrices are row equivalent if they have the same
row space. The matrix you describe can be written

$$
A=\left(\begin{array}{c}
a\\
0\\
\vdots\\
0
\end{array}\right)
$$
where $a$ is a row vector. The row space of $A$ is all vectors of
the form $\alpha a$, where $\alpha$ is a (possibly zero) constant.



complex numbers - How to break $frac{1}{z^2}$ into real and imaginary parts?



$$ \frac{1}{(x+iy)^2}=\frac{1}{x^2+i2xy-y^2}=\frac{x^2}{(x^2+y^2)}-\frac{2ixy}{(x^2+y^2)}-\frac{y^2}{(x^2+y^2)}$$
So I thought I could just say:
$$ Re(\frac{1}{z^2})=\frac{x^2}{(x^2+y^2)}-\frac{y^2}{(x^2+y^2)}$$
and
$$ Im(\frac{1}{z^2})=-\frac{2ixy}{(x^2+y^2)}$$
But I know that is wrong because it looks nothing like the graph of the real part of 1/z^2 on wolfram alpha found here: http://www.wolframalpha.com/input/?i=1%2F(x%2Bi*y)%5E2




Then I thought I could just multiply $1/z^2$ by $z/z$ to get $\frac{x}{z^3}$ and $\frac{iy}{z^3}$, however graphing these again shows that they are not the real and imaginary parts of $\frac{1}{z^2}$.


Answer




Notice, when $z\in\mathbb{C}$:



$$z=\Re[z]+\Im[z]i$$








So, we get (in steps):




  • $$z^2=\left(\Re[z]+\Im[z]i\right)^2=\Re^2[z]-\Im^2[z]+2\Re[z]\Im[z]i$$

  • $$\overline{z^2}=\overline{\Re^2[z]-\Im^2[z]+2\Re[z]\Im[z]i}=\Re^2[z]-\Im^2[z]-2\Re[z]\Im[z]i$$

  • $$z^2\cdot\overline{z^2}=|z|^4=\left(\sqrt{\Re^2[z]+\Im^2[z]}\right)^4=\left(\Re^2[z]+\Im^2[z]\right)^2$$



Now, we get:




$$\frac{1}{z^2}=\frac{\overline{z^2}}{z^2\cdot\overline{z^2}}=\frac{\Re^2[z]-\Im^2[z]-2\Re[z]\Im[z]i}{\left(\Re^2[z]+\Im^2[z]\right)^2}$$



So:




  • $$\color{red}{\Re\left[\frac{1}{z^2}\right]=\frac{\Re^2[z]-\Im^2[z]}{\left(\Re^2[z]+\Im^2[z]\right)^2}}$$

  • $$\color{red}{\Im\left[\frac{1}{z^2}\right]=-\frac{2\Re[z]\Im[z]}{\left(\Re^2[z]+\Im^2[z]\right)^2}}$$
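
A quick numerical spot-check of the two boxed formulas (a minimal Python sketch; the sample point is arbitrary):

# Compare 1/z^2 computed directly with the closed-form real and imaginary parts.
x, y = 1.3, -0.7                      # arbitrary test point, z = x + i*y
w = 1 / complex(x, y) ** 2

denom = (x**2 + y**2) ** 2
print(w.real, (x**2 - y**2) / denom)  # should agree
print(w.imag, -2 * x * y / denom)     # should agree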


calculus - Convergence of the series $\sum\limits_{n=1}^{\infty}\frac{\cos(\alpha\sqrt{n})}{n^q}$



Determine for which real values of $\alpha$ and $q$ the following series converges $\sum\limits_{n=1}^{\infty}\frac{\cos(\alpha\sqrt{n})}{n^q}$?
So far I managed to prove that 1) for $q\leqslant0,\alpha\in\mathbb{R}$ the series diverges; 2) for $q>1,\alpha\in\mathbb{R}$ the series converges absolutely; 3) for $0<q\leqslant1$ and $\alpha=0$ the series diverges.


How should one approach the problem for the case $0<q\leqslant1$ and $\alpha\neq0$?

Answer



By the Euler-MacLaurin formula, we have



$$S(k) := \sum_{n = 1}^k \cos (\alpha\sqrt{n}) = \int_1^k \cos (\alpha \sqrt{t})\,dt + O(1).$$



Further,



\begin{align}

\int_1^k \cos (\alpha \sqrt{t})\,dt
&= 2\int_1^{\sqrt{k}} u\cos (\alpha u)\,du\\
&= \frac{2}{\alpha}\bigl[u\sin (\alpha u)\bigr]_1^{\sqrt{k}} - \frac{2}{\alpha}\int_1^{\sqrt{k}} \sin (\alpha u)\,du
\end{align}



shows $\lvert S(k)\rvert \leqslant \frac{4}{\lvert\alpha\rvert}\sqrt{k} + O(1) \leqslant C\sqrt{k}$ for some constant $C \in (0,+\infty)$.



Then a summation by part shows



\begin{align}

\Biggl\lvert\sum_{n = m}^k \frac{\cos (\alpha \sqrt{n})}{n^q}\Biggr\rvert
&= \Biggl\lvert\sum_{n = m}^k \bigl(S(n) - S(n-1)\bigr) n^{-q}\Biggr\rvert\\
&\leqslant \frac{\lvert S(k)\rvert}{k^q} + \frac{\lvert S(m-1)\rvert}{m^q} + \Biggl\lvert \sum_{n = m}^{k-1} S(n) \biggl(\frac{1}{n^q} - \frac{1}{(n+1)^q}\biggr)\Biggr\rvert\\
&\leqslant C\Biggl( k^{\frac{1}{2}-q} + m^{\frac{1}{2}-q} + \sum_{n = m}^{k-1}\frac{q}{n^{q+\frac{1}{2}}}\Biggr)\\
&\leqslant \tilde{C}\cdot m^{\frac{1}{2}-q}
\end{align}



for $q > \frac{1}{2}$.



So as D. Thomine expected, the series converges for $q > \frac{1}{2}$ and $\alpha \in \mathbb{R}\setminus \{0\}$. The argument given in the comment shows (when the details are carried out) that the series is divergent for $q \leqslant \frac{1}{2}$.
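
A rough numerical illustration of this dichotomy (a minimal Python sketch; the choices of $\alpha$, $q$ and the cut-offs are arbitrary, and convergence near $q=\frac12$ is far too slow to see this way):

import numpy as np

def partial_sum(alpha, q, N):
    n = np.arange(1, N + 1, dtype=float)
    return float(np.sum(np.cos(alpha * np.sqrt(n)) / n**q))

# alpha = 1 is an arbitrary choice; q = 0.9 is above 1/2, q = 0.1 is below.
for q in (0.9, 0.1):
    print(q, [round(partial_sum(1.0, q, N), 3) for N in (10**4, 10**5, 10**6)])
# For q = 0.9 the successive values stay close (the tail is O(N^{1/2 - q}));
# for q = 0.1 they keep changing, consistent with divergence.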



elementary number theory - Find the last digit of $3^{1999}$




Problem: Find the last digit of $3^{1999}$.




My answer is $3$, but the answer sheet says $7$.




Here is what I did:




  • $3^{1999}=(3^9)^{222}\cdot3$

  • Using Fermat's Little Theorem: $3^9\equiv1\pmod{10}$

  • Therefore, $3^{1999}\equiv(3^9)^{222}\cdot3\equiv1^{222}\cdot3\equiv3\pmod{10}$

  • Therefore, the last digit should be $3$




Where did I go wrong?


Answer



Here's a straightforward alternative that does not require Euler's or Fermat's, and only requires noticing that



$$3^2 \equiv -1 \pmod {10}$$ so that
$$\begin{align}3^{1999} &= (3^2)^{999}\cdot3\\&\equiv (-1)^{999}\cdot3\pmod{10}\\&\equiv-3\pmod{10}\\&\equiv{7}\pmod{10}\end{align}$$
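
For what it's worth, the claim is easy to confirm with modular exponentiation (a one-line Python sketch):

print(pow(3, 1999, 10))    # prints 7, the last digit of 3**1999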


modular arithmetic - General solution for $x^2 \equiv x \bmod p$




I want to find a general solution for $x^2 \equiv x \bmod p$, where $p$ is a prime.



It is easy to see that if $x \equiv 0 \bmod p$ or $x \equiv 1 \bmod p$ , the equation holds.



I got these solutions, but I was not able to figure out any other solutions, although intuitively further solutions would not necessarily seem impossible.



Is it true that there is never any other solution? And if so, how can this be proved?



Thank you,




V.


Answer



Following the comment of @SystematicDisintegration: if $x(x-1)$ is divisible by a prime $p$, then one of the two factors must be a multiple of $p$, since a prime has no divisors other than $1$ and itself (this is Euclid's lemma).



So either $x=pk$ or $x-1=pk$ for some integer $k$.



The point of the question is that there exists no other solution, since $x(x-1)$ cannot be split in any other way into a pair of linear integer factors.



So, by considering the two cases above, we conclude that $x\equiv 0,1 \pmod{p}$ are the only solutions.
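
A brute-force check over a few small primes is consistent with this conclusion (a minimal Python sketch):

# For each small prime p, list all residues x with x^2 = x (mod p).
for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
    print(p, [x for x in range(p) if (x * x - x) % p == 0])    # always [0, 1]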


Tuesday 22 September 2015

calculus - Find the limit $\lim_{x \to \frac{\pi}{2}}\left(\frac{\cos(5x)}{\cos(3x)}\right)$ without using L'Hospital's rule



I'm trying to find the limit:



$$\lim_{x \to \frac{\pi}{2}}\left(\frac{\cos(5x)}{\cos(3x)}\right)$$



By L'Hospital's rule it is $-\frac{5}{3}$ but I'm trying to solve it without using L'Hospital rule.




What I tried:




  1. Write $\frac{\cos(5x)}{\cos(3x)}$ as $\frac{\cos(4x+x)}{\cos(4x-x)}$ and then using the formula for $\cos(A+B)$.


  2. Write $\cos(x)$ as $\sin\left(x - \frac{\pi}{2}\right)$.




But I didn't have success with those methods (e.g. in the first one I got the same expression $\frac{\cos(5x)}{\cos(3x)}$ again ).


Answer




$$\cos(5x)=\sin \left(\frac52 \pi-5x\right)=\sin5\left(\frac{\pi}{2}-x\right)$$
And
$$\cos(3x)=-\sin\left(\frac32 \pi-3x\right)=-\sin3\left(\frac{\pi}{2}-x\right)$$



So we set $\frac{\pi}{2}-x=w$



as $x\to \frac{\pi}{2}$ we have $w\to 0$



The given limit can be written as




$$\lim_{w\to 0}\frac{\sin 5w}{-\sin 3w}=-\frac{5}{3}\lim_{w\to 0}\frac{3w\sin 5w}{5w\sin 3w}=-\frac{5}{3}\lim_{w\to 0}\left(\frac{\sin 5w}{5w}\cdot \frac{3w}{\sin3w}\right)=-\frac{5}{3}$$
Hope this can be useful
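
Numerically, the ratio indeed approaches $-5/3$ as $x\to\pi/2$ (a minimal Python sketch):

import math

for eps in (1e-1, 1e-3, 1e-5):
    x = math.pi / 2 + eps
    print(math.cos(5 * x) / math.cos(3 * x))    # tends to -5/3 = -1.666...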


Monday 21 September 2015

matrices - Eigenvalue Decomposition of a Tridiagonal Matrix



I need to compute explicitly an eigenvalue decomposition of the following $N\times N$ tridiagonal matrix,



$T = \begin{bmatrix} -2\left(\frac{1}{h^2}+\frac{1}{k^2}\right) & \frac{1}{h^2} & & & \\ \frac{1}{h^2} & -2\left(\frac{1}{h^2}+\frac{1}{k^2}\right) & \frac{1}{h^2} & & \\ & \ddots & \ddots & \ddots & \\ & & \frac{1}{h^2} & -2\left(\frac{1}{h^2}+\frac{1}{k^2}\right) & \frac{1}{h^2} \\ & & & \frac{1}{h^2} & -2\left(\frac{1}{h^2}+\frac{1}{k^2}\right) \end{bmatrix}$



which appears when solving the Poisson problem numerically on a uniform rectangular mesh in two-dimensional space.



I know $T$ is a Toeplitz matrix. Hence the eigenvalues of $T$ are



$\lambda_i(T) = -2\left(\frac{1}{h^2}+\frac{1}{k^2}\right) + \frac{2}{h^2}\cos\frac{i\pi}{N+1},\qquad i = 1,2,\ldots,N$



and the corresponding eigenvectors of $T$ are



$x_i = \left[x_{i,1},x_{i,2},\ldots,x_{i,N}\right]^T, \qquad x_{i,j} = \sin\frac{ij\pi}{N+1}, \qquad i,j = 1,2,\ldots,N$



1. I now need to compute the inverse of $X = \left\{ {\sin \frac{{ij\pi }}{{N + 1}}} \right\}_{i,j = 1}^N$ to obtain an eigenvalue decomposition of $T$. How can I compute this $X^{-1}$?



2. I also notice that $T$ is normal. Hence, it has a singular value decomposition of the form $T = U\,\mathrm{diag}\left\{\lambda_i(T)\right\}U^*$, where $U$ is unitary. How can I find this $U$?




Thanks in advance.


Answer



We have $X^{-1}=\frac{2}{N+1}X$.



If $u\neq v$, $\left< x_u,x_v\right>=\sum_{i=1}^N \sin(\frac{ui\pi}{N+1})\sin(\frac{vi\pi}{N+1})$.



$\sin(a)\sin(b)=\frac{1}{2}(\cos(a-b)-\cos(a+b))$.



So $\left< x_u,x_v\right>=\frac{1}{2}\sum_{i=1}^N \left(\cos(\frac{(u-v)i\pi}{N+1})-\cos(\frac{(u+v)i\pi}{N+1})\right)$.




Now, for any integer $m$ with $0<|m|<2(N+1)$, the full sum $\cos(0)+\sum_{i=1}^N\cos(\frac{m i\pi}{N+1})=\sum_{i=0}^N\cos(\frac{m i\pi}{N+1})$ equals $0$ when $m$ is even and $1$ when $m$ is odd (take the real part of the corresponding finite geometric series).

Both $m=u-v$ and $m=u+v$ satisfy $0<|m|<2(N+1)$, and they have the same parity, so the two full sums are equal. Since the $i=0$ terms are also equal ($\cos 0=1$), the difference of the sums over $i=1,\dots,N$ vanishes.

So $\left< x_u,x_v\right>=0$.



$\left< x_u,x_u\right>=\frac{1}{2}\sum_{i=1}^N \left(\cos(0)-\cos(\frac{2ui\pi}{N+1})\right)=\frac{1}{2}(N+1)$ because $0<2u<2(N+1)$.



So $X^*X=\frac{1}{2}(N+1)I_N$.




But $X^*=X$. So $X^{-1}=\frac{2}{N+1}X$.
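
As a quick numerical sanity check of both the identity $X^{-1}=\frac{2}{N+1}X$ and the stated eigenpairs, here is a minimal Python sketch (the values of $N$, $h$, $k$ are arbitrary):

import numpy as np

N, h, k = 7, 0.1, 0.2                              # arbitrary size and spacings
i = np.arange(1, N + 1)
X = np.sin(np.outer(i, i) * np.pi / (N + 1))       # X[i-1, j-1] = sin(i*j*pi/(N+1))
print(np.allclose(X @ X, 0.5 * (N + 1) * np.eye(N)))   # so X^{-1} = 2/(N+1) * X

d = -2 * (1 / h**2 + 1 / k**2)
T = d * np.eye(N) + (1 / h**2) * (np.eye(N, k=1) + np.eye(N, k=-1))
lam = d + (2 / h**2) * np.cos(i * np.pi / (N + 1))
print(np.allclose(T, X @ np.diag(lam) @ (2 / (N + 1) * X)))   # T = X diag(lambda) X^{-1}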


I'm struggling to understand and implement this part on finite difference derivatives



I'm trying to implement a finite difference derivative algorithm, as I'm learning how to perform numerical analysis for computational physics. I'm following the book found here. I'm struggling to implement the algorithm used. Here's a bit of background:



As you may know, the domain of the function is divided into finite segments of width $h$, and each discrete sample of the function is labelled $x_i$. However, the book also defines an arbitrary value called $\zeta$ where $\zeta\in[x_i,x_{i+1})$. It also defines another variable $\varepsilon$ such that $\zeta=x_i+\varepsilon h$; note that $\varepsilon\in[0,1)$. I know that they lie in between the discrete values, but I can't tell what they are.



And to show some conventions used, $f^{(n)}(x_i)\equiv f^{(n)}_i$; and $f^{(n)}_{i+\varepsilon}=f^{(n)}(\zeta)$.



Now if you're familiar with finite difference differentiation, you may know that to perform it you expand the general term in a Taylor series so as to control the order of the truncation error. So what the book did was expand to a certain number of terms and play around with the forward difference, backward difference, and central difference approximations to get the following equation (I omitted the details to keep the question short):



$f'_i=\frac{1}{6h}(f_{i-1}+8f_{i+\frac{1}{2}}-8f_{i-\frac{1}{2}}-f_{i+1}) + \frac{h^4}{360}f^{(5)}_{i+\varepsilon}$




Now when I tried to punch it into my computer, the approximation I got was way off. I didn't enter the second term with the fourth power of $h$, as I can't tell what $\varepsilon$ or $\zeta$ is. Here is an example of code where I try to approximate the derivative of $x^2$ at $x=2$:



let derivative = 1.0/(6.0*h) * (1.0+8*(2.5*2.5)-8.0*(1.5*1.5)-9.0)
// The above line gives an approximation of 400, when it should be close to 4


What am I doing wrong here? If there's anything more you need to know, please say so. I'm sorry that the question is so long and wordy.


Answer



Can you be explicit about what you used for $h$? It looks like you used $h = 1$? That's likely going to give an inaccurate result. The quality of the approximation gets better as $h$ gets smaller. Try a sequence of decreasing values of $h$. If you take $h$ "too large" you still get a number out, but it's not a useful number.




EDIT
After a couple of exchanges in the comments, it's clear that the value of $h$ in the factor $1/(6h)$ at the front of the expression is not the same value of $h$ you used in the function evaluations. You claimed to use $h=0.01$; however, looking at the values you used, $f_{i-1} = 1$ (as well as the other specific values you showed for the other evaluations), shows that you were actually spacing the function evaluations by $h=1$. You need to use the same value of $h$ throughout.



Also, as I pointed out in the comments, the value of a good $h$ varies a lot from problem to problem. Knowing that some specific value works on a problem that you solved before does not necessarily mean that it will work on the next problem.



Edit 2
The variables $\zeta$ and $\epsilon$ are giving the error estimate based on the Lagrange form of the remainder for the Taylor series that you're using to construct the finite difference. In practice you will just drop that term when you do most calculations, but if you want to analyse the numerical errors introduced, you will want it. Even then you usually only use the factor in front that's proportional to $h$, since you usually want to know how the numerical errors scale with the discretization length. If you wanted a different type of bound, you might need the whole term. In general you don't know the exact values of $\zeta$ and $\epsilon$, and you would try to bound that term rather than calculate it exactly.



See https://en.wikipedia.org/wiki/Finite_difference_method#Accuracy_and_order
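
To illustrate the point about using one consistent $h$, here is a minimal Python sketch (Python rather than the asker's original language) of the same five-point formula applied to $f(x)=x^2$ at $x=2$, with the remainder term simply dropped:

def fprime(f, x, h):
    # f'(x) ~ [f(x-h) - 8 f(x-h/2) + 8 f(x+h/2) - f(x+h)] / (6h), error O(h^4)
    return (f(x - h) - 8 * f(x - h / 2) + 8 * f(x + h / 2) - f(x + h)) / (6 * h)

f = lambda x: x**2
for h in (1.0, 0.1, 0.01):
    print(h, fprime(f, 2.0, h))    # 4.0 every time: exact for a quadratic

For a quadratic the formula happens to be exact for every $h$ (the remainder involves the fifth derivative), so the value of roughly $400$ reported above comes purely from mixing $h=0.01$ in the prefactor $1/(6h)$ with function evaluations spaced by $h=1$.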


real analysis - Which topology do we need to assume to get this result: Every open set $V$ in the plane is a countable union of rectangles

I am self-studying Real and Complex Analysis by W. Rudin, and in a proof it is stated that every open set $V$ in the plane is a countable union of rectangles $R$.



I have the following definitions at hand and am trying to proof this result.



1) A set $S$ is open iff $S$ is a member of the topology on X.



2) The topology of $\mathbb{R}$ is the set of all unions of segments in $\mathbb{R}$, e.g. sets of the form $(a,b)$.




3) The topology of $\mathbb{R^2}$ is the set of all unions of open circular discs.



4) A rectangle is defined as following $R=I_1\times I_2$, where $I_1,I_2$ are segments in $\mathbb{R}$



I can prove the result by assuming the Euclidean metric and the topology resulting from the usual definition of open balls $\{x \in \mathbb{R}: |a-x| < r\}$.

Help would be very much appreciated.

complex analysis - Riemann explicit formula for $\psi^*(x)$ in the region $0\leq x\leq 1$.



Is it possible to extend this Riemann explicit formula to the interval $0\leq x\leq 1$?
$$\psi^*(x)=x-\sum_{\rho} \frac {x^{\rho}}{\rho}-\frac{\zeta'(0)}{\zeta(0)}$$




The sum over the trivial zeros of zeta, $\sum _{n=1}^{\infty } \frac{x^{-2 n}}{-2 n}=\frac{1}{2} \log \left(\frac{x^2-1}{x^2}\right)$, diverges for $|x|\leq 1$; that is why the formula does not work in this region. But the rest of the formula is still meaningful there, i.e. the sum over the non-trivial zeros does converge even for $0\leq x\leq 1$.


Answer



The first answer is simply that $\psi(x) = 0$ for $x \le 1$; it is as plain as that. Still, there is more to say, and it reveals the role of the domain of convergence of Laplace/Mellin transforms:



For every $s, \Re(s) \ne \Re(\rho)$



$$\frac{1}{s-\rho} = \int_0^\infty x^{-s-1} x^\rho u_{\Re(s-\rho)}(x) dx, \qquad\qquad
u_a(x) = \begin{cases}1_{x > 1} \text{ if } a > 0, \\ - 1_{x < 1} \text{ otherwise}\end{cases}$$
In the same way, the Riemann explicit formula (residue theorem + density of zeros, or Weierstrass factorization theorem for entire functions of order $\le 1$)




shows that for $s$ on a vertical strip with no zeros or poles
$$\frac{-1}{s}\frac{\zeta'(s)}{\zeta(s)} = \int_0^\infty x^{-s-1}\psi_{\Re(s)}(x)dx$$
Where
$$\psi_c(x)= \frac{1}{2i\pi} \int_{c-i\infty}^{c +i\infty}\frac{-1}{s}\frac{\zeta'(s)}{\zeta(s)} x^s ds$$ $$=x^1 u_{c-1}(x) -\sum_{\rho \text{ non-trivial}} \frac {x^{\rho}}{\rho} u_{c-\Re(\rho)}(x)-\sum_{k=1}^\infty \frac {x^{-2k}}{-2k} u_{c+2k}(x)-\frac{\zeta'(0)}{\zeta(0)} u_{c}(x)$$


radicals - Direct Irrationality Proof for $sqrt{3}$ and $sqrt{6}$

I am having trouble with proving this directly. I am currently learning about greatest common divisors and know that this has a role in the proof. However, I can only prove the two through contradiction and not directly.

soft question - List of interesting math videos / documentaries

This is an offshoot of the question on Fun math outreach/social activities. I have listed a few videos/documentaries I have seen. I would appreciate if people could add on to this list.



$1.$ Story of maths Part1 Part2 Part3 Part4



$2.$ Dangerous Knowledge Part1 Part2



$3.$ Fermat's Last Theorem




$4.$ The Importance of Mathematics



$5.$ To Infinity and Beyond

probability - Expectation of nonnegative Random Variable





[Image from the original post: the relation to prove is $E[X]=\int_0^\infty\bigl(1-F(x)\bigr)\,dx$ for a nonnegative random variable $X$ with distribution function $F$.]



Can someone give me some pointers as to how to prove this relation?


Answer



Let $p$ be the probability measure. We have
$$\int_{0}^{\infty}\left[1-F\left(x\right)\right]dx=\int_{0}^{\infty}\Pr\left[X>x\right]dx=\int_{0}^{\infty}\left[\int1_{X>x}\,dp\right]dx.$$
Using Fubini's theorem,
$$\int_{0}^{\infty}\left[\int1_{X>x}\,dp\right]dx=\int\left[\int_{0}^{\infty}1_{X>x}\,dx\right]dp=\int X\,dp=E\left[X\right].$$
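
As a quick numerical illustration of $E[X]=\int_0^\infty(1-F(x))\,dx$, here is a minimal Python sketch using an exponential distribution (an arbitrary choice) and its empirical survival function:

import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.exponential(scale=2.0, size=10**6))    # X >= 0 with E[X] = 2

grid = np.linspace(0.0, 60.0, 2001)
survival = 1.0 - np.searchsorted(x, grid) / x.size     # empirical 1 - F(t)
dt = grid[1] - grid[0]
print(x.mean(), survival.sum() * dt)                   # both approximately 2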


continuity - No bijective continuous $f:(0,1) \to [0,1]$ using facts from topology?

In the context of learning topology, I'm trying to see how to show that there is no bijective continuous function $f:(0,1) \to [0,1]$ using some basic topology facts such as the definition of continuity.



In this setting, $f$ is continuous if for all open $U$ in $[0,1]$, the set $f^{-1}(U)$ is open in $(0,1)$. Equivalently, we could swap "closed" for "open" in this definition.



As a first thought, I don't see any problem with the fact that, if such an $f$ exists, it would satisfy $f^{-1}([0,1]) = (0,1).$ I think this is OK because the set $(0,1)$ is closed in $(0,1)$. Is this correct? If so, what basic topological theorems do we use to prove this statement? Do we need to go to compactness?

abstract algebra - Why is $n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7}$ never zero?

Here the $n_i$ are integers, and not all of them are zero.



It is natural to conjecture that similar statement holds for even more prime numbers. Namely,



$$ n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7} + n_5 \sqrt{11} +n_6 \sqrt{13} $$ is never zero too.



I am asking because this is used in some numerical algorithm in physics.

discrete mathematics - Solve the following recurrence relation: $a_{n}=10a_{n-2}$



I'm trying to solve the following recurrence equation $a_{n}=10a_{n-2}$.



Initial conditions: $a_0=1$, $a_1=10$



I have tried to use the characteristic polynomial and generating functions, but both methods lead to a contradiction.



I think I should treat $\mathbb{N}_{even}$ and $\mathbb{N}_{odd}$ as different cases, but I don't know how to do it formally.




Any help would be appreciated.


Answer



We need initial values for $ a_0 $ and for $a_1$ to start with.



Notice that $$a_2 = 10a_0, \qquad a_4 = 10a_2 = 10^2a_0, \qquad a_6 = 10a_4 = 10^3 a_0.$$



With the given initial values we get $$a_{2k}=10^ka_0 = 10^k $$



Similarly you get $$a_{2k+1}=10^{k+1} $$
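
A short check of the closed form against the recurrence (a minimal Python sketch):

# a_0 = 1, a_1 = 10, a_n = 10 * a_{n-2}
a = [1, 10]
for n in range(2, 12):
    a.append(10 * a[n - 2])

closed = [10**(n // 2) if n % 2 == 0 else 10**((n + 1) // 2) for n in range(12)]
print(a == closed)    # True: a_{2k} = 10^k and a_{2k+1} = 10^{k+1}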



Sunday 20 September 2015

Complex hyperbolic Trigonometry



When faced with the equation




$\cos{z}=\sqrt{2}$



I want to solve for z so I break it up into a sum $z=x+iy$ and get:



$\cos{z}=\cos{x}\cosh{y}-i \sin{x} \sinh{y}$



equating real and imaginary parts I am faced with



$\cos{x}\cosh{y}=\sqrt{2}$ and $\sin{x}\sinh{y}=0$




How do I go about solving from here? I can't seem to get out of this loop where I end up having to use some ugly form of the inverse hyperbolic functions.



EDIT: $\cos^{-1}{z}$ is defined in Churchill as



$\cos^{-1}{z}=-i\log{[z+i(1-z^2)^{1/2}]}$



Am I better off just plugging it in to here?


Answer



Avoid squaring whenever possible, as it immediately introduces extraneous root(s).




We have $\displaystyle\frac{e^{iz}+e^{-iz}}2=\sqrt2$



$$\iff(e^{iz})^2-2\sqrt2(e^{iz})+1=0$$



$$\implies e^{iz}=\dfrac{2\sqrt2\pm\sqrt{8-4}}2=\sqrt2\pm1$$



$$\iff iz=\log(\sqrt2\pm1)$$



$$\iff z=-i\log(\sqrt2\pm1)$$
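
A quick numerical check of one representative from each family of solutions (a minimal Python sketch; the general solution also allows adding integer multiples of $2\pi$ to $z$):

import cmath, math

for s in (math.sqrt(2) + 1, math.sqrt(2) - 1):
    z = -1j * cmath.log(s)
    print(z, cmath.cos(z))    # cos(z) is approximately sqrt(2) = 1.41421...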


modular arithmetic - Given $L_1 =\{w: |w| \bmod 3 >0\}$ and $L_2 =\{w: |w| \bmod 5 =0\}$, what is $L=L_1 \cap L_2$ and the grammar that produces it?



$\mid w \mid$ is the length of the string.




I know that the elements in common are something like this...



The left hand side gives the elements in $L_1$ and the right hand side gives the corresponding element in $L_2$



$ 5 \bmod 3 \Rightarrow 5 \bmod 5$



$ 10 \bmod 3 \Rightarrow 10 \bmod 5$



$ 20 \bmod 3 \Rightarrow 20 \bmod 5$




$ 25 \bmod 3 \Rightarrow 25 \bmod 5$



$ 35 \bmod 3 \Rightarrow 35 \bmod 5$



$ 40 \bmod 3 \Rightarrow 40 \bmod 5$



$ 50 \bmod 3 \Rightarrow 50 \bmod 5$



$ 55 \bmod 3 \Rightarrow 55 \bmod 5$




and so on...



So the lengths $\{5,10,20,25,35,40,50,55,\ldots\}$ are all in $L=L_1 \cap L_2$



Starting from length $5$, the gap between consecutive lengths alternates between $5$ and $10$: $5,10,5,10,5,10,\ldots$



Let's say the alphabet is $\Sigma = \{a\}$.



How does one conjecture $L$ and the grammar that produces it? I'm stuck and have been at it for a while; any help would be appreciated.



Answer



The first part of your question boils down to a standard exercise in arithmetic.
You can use the Chinese remainder theorem to show that the conjunction of the conditions $|w| \bmod 3 > 0$ and $|w| \bmod 5 = 0$ is equivalent to
$${|w| \bmod {15} = 5} \quad \text{or}\quad |w| \bmod {15} = 10 $$
If $A$ is the alphabet, the corresponding language is $(A^{15})^*(A^5 + A^{10})$.
Writing a grammar for this language should now be an easy, if somewhat tedious, exercise.
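
A small Python sketch confirming that the two original conditions on $|w|$ coincide with the residues $5$ and $10$ modulo $15$:

# |w| mod 3 > 0 and |w| mod 5 == 0   <=>   |w| mod 15 in {5, 10}
for n in range(1000):
    assert ((n % 3 > 0) and (n % 5 == 0)) == (n % 15 in (5, 10))
print("the two conditions agree for all lengths up to 999")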


calculus - Show that $\int^\infty_1\frac{1}{1+x^3}dx = \int^1_0\frac{x}{1+x^3}dx$



Consider the integral $$J = \int^\infty_0\frac{1}{1+x^3}dx$$show that $$ \int^\infty_1\frac{1}{1+x^3}dx = \int^1_0\frac{x}{1+x^3}dx $$and then deduce that $$J = \int^1_0f(x) dx $$ where f is a function to be determined.



I'm specifically stuck on the second part of the question. It is easy to miss, but the bounds for $J$ are $0$ and $\infty$, not $1$ and $\infty$ as in the first part of the question.



Answer



Let



$x = \frac{1}{t}$ , $dx = \frac{-1}{t^2}dt$



at $x = \infty, t = 0$



at $x = 1, t= 1$



$I = \int^0_1\frac{1}{1+\frac{1}{t^3}}\cdot\frac{-dt}{t^2} = - \int^0_1\frac{t^3\,dt}{(t^3 + 1)t^2}$




Changing the limits,



$I = \int^1_0 \frac{tdt}{1+t^3} = \int^1_0\frac{xdx}{1+x^3}$ (Replacing t by x)



$J = \int^\infty_0\frac{dx}{1+x^3} = \int^1_0\frac{dx}{1+x^3}+ \int^\infty_1\frac{dx}{1+x^3} = \int^1_0\frac{dx}{1+x^3} + \int^1_0\frac{xdx}{1+x^3} $ (From I)



$J = \int^1_0\frac{(x+1)dx}{1+x^3}$



Thus, $f(x) = \frac{x+1}{1+x^3}$
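
Numerically, both integrals agree; a minimal SciPy sketch (both values are close to $2\pi/(3\sqrt3)\approx 1.2092$):

import numpy as np
from scipy.integrate import quad

J, _ = quad(lambda x: 1.0 / (1.0 + x**3), 0.0, np.inf)
F, _ = quad(lambda x: (x + 1.0) / (1.0 + x**3), 0.0, 1.0)
print(J, F, 2.0 * np.pi / (3.0 * np.sqrt(3.0)))    # all three are about 1.2092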



congruences - Number of $k$th Roots modulo a prime?




So I'm having a bit of trouble proving the following theorem:



Suppose that $p$ is a prime, $k \in \mathbb{N}$ and $b \in \mathbb{Z}$. Show that the number of $k$th roots of $b$ modulo $p$ is either $0$ or $(k, p - 1)$ (note that $(\cdot,\cdot) \equiv \gcd(\cdot,\cdot)$).



Here is my attempt.



If there are $0$ roots then we are done.



Suppose now that the congruence $x^k \equiv b \mod{p}$ has a solution, say $x \equiv r \mod{p}$ for some $r \in \{0, 1, 2, \dots, p - 1\}$. We know that $p$ has exactly $\phi(p - 1)$ primitive roots and that $a^{v\phi(n) + 1} \equiv a \mod{p}$ for any $a \in \mathbb{Z}$. Now, if $(r,p) \ne 1$, then either $r = 1$ or $p \vert r$. If $r = 1$, then we must have $x^k \equiv 1 \mod{p}$. We know from a previous theorem that the order of any number modulo a prime must divide $p - 1$, so $(k,p - 1) = k$ (I'm not sure where to go from here). If $p \vert r$ , then $x \equiv 0 \mod{p} \Rightarrow x^k \equiv 0 \mod{p} \Rightarrow b \equiv 0 \mod{p}$ (again no idea if I'm headed in the right direction)...




Suppose that $(r,p) = 1$. Then $r^k \equiv b \mod{p} \Rightarrow r^{k\phi(p)} \equiv b^{\phi(p)} \Rightarrow b^{p - 1} \equiv 1 \mod{p}$. I know that there are $\phi(p - 1)$ such $b$ that satisfy this, but again, I have no idea where this is going..



I think it's likely that I'm just completely venturing out in the wrong direction here. Even a nudge in the right direction would be appreciated.


Answer



If $b \not\equiv 0 \pmod p$ and $x^k \equiv b \pmod p$ has one solution $x=a$, then the full set of solutions is $x= a\zeta$ where $\zeta^k = 1$. The equation $\zeta^k=1$ has $$ \# \{ \zeta \bmod p : \text{ord}(\zeta)\ |\ k\}=\# \{ \zeta \bmod p : \text{ord}(\zeta)\ |\ (k,p-1)\}= (k,p-1)$$ solutions, since $(\mathbb{Z}/p \mathbb{Z})^\times$ is cyclic with $p-1$ elements: it contains an element $\mu$ of order $(k,p-1)$, and $\zeta^k=1 \implies \zeta = \mu^n$ for some $n$.
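
A brute-force check of the count for a few small primes (a minimal Python sketch; num_roots is an ad-hoc helper name):

from math import gcd

def num_roots(k, b, p):
    # number of solutions x (mod p) of x^k = b (mod p)
    return sum(pow(x, k, p) == b % p for x in range(p))

for p in (5, 7, 11, 13):
    for k in (2, 3, 4, 6):
        counts = {num_roots(k, b, p) for b in range(1, p)}    # b not divisible by p
        assert counts <= {0, gcd(k, p - 1)}
print("each nonzero b has either 0 or gcd(k, p-1) k-th roots mod p")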


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...