Tuesday, 31 December 2013

integration - Using Complex Analysis to Compute $\int_0^\infty \frac{dx}{x^{1/2}(x^2+1)}$



I am aware that there is a theorem which states that for $0

My attempt is the following: Let's take $f(z)=\frac{1}{z^{1/2}(z^2+1)}$ as the complexification of our integrand. Define $\gamma_R=\gamma _R^1+\gamma_R^2+\gamma_R^3$, where $\gamma_R^1(t)=t$ for $t$ from $1/R$ to $R$; $\gamma_R^2(t)=\frac{1}{R}e^{it}$, where $t$ goes from $\pi /4$ to $0$; and $\gamma_R^3(t)=e^{\pi i/4}t,$ where $t$ goes from $R$ to $1/R$ (see drawing).



The poles of the integrand are at $0,\pm i$, but those are not contained in the contour, so by the residue theorem $\int_{\gamma_R}f(z)dz=0$. On the other hand, $\int_{\gamma_R}=\int_{\gamma_R^1}+\int_{\gamma_R^2}+\int_{\gamma_R^3}$. As $R\to \infty$, $\int_{\gamma_R^1}f(z)dz\to \int_0 ^\infty \frac{1}{x^{1/2}(x^2+1)}dx$. Also, $\vert \int_{\gamma_R^2}f(z)dz\vert \le \frac{\pi }{4R}\cdot \frac{1}{R^2-1}$ and the latter expression tends to $0$ as $R\to \infty$. However, $\int_{\gamma_R^3}f(z)dz=i\int_R ^{1/R}tdt=\frac{i/R^2-iR^2}{2}$, which is unbounded in absolute value for large $R$.



Is there a better contour to choose? If so, what is the intuition for finding a good contour in this case?



Answer



For this, you want the keyhole contour $\gamma=\gamma_1 \cup \gamma_2 \cup \gamma_3 \cup \gamma_4$, which passes along the positive real axis ($\gamma_1$), circles the origin at a large radius $R$ ($\gamma_2$), and then passes back along the positive real axis $(\gamma_3)$, then encircles the origin again in the opposite direction along a small radius-$\epsilon$ circle ($\gamma_4$). Picture (borrowed from this answer):



(picture of the keyhole contour)



$\gamma_1$ is green, $\gamma_2$ black, $\gamma_3$ red, and $\gamma_4$ blue.



It is easy to see that the integrals over the large and small circles tend to $0$ as $R \to \infty$, $\epsilon \to 0$, since the integrand times the length is $O(R^{-3/2})$ and $O(\epsilon^{1/2})$ respectively. The remaining integral tends to
$$ \int_{\gamma_1 \cup \gamma_3} = \int_0^{\infty} \frac{x^{-1/2}}{1+x^2} \, dx + \int_{\infty}^0 \frac{(xe^{2\pi i})^{-1/2}}{1+(xe^{2\pi i})^2} \, dx, $$
because we have walked around the origin once, and chosen the branch of the square root based on this. This simplifies to

$$ (1-e^{-\pi i})\int_0^{\infty} \frac{x^{-1/2}}{1+x^2} \, dx = 2I. $$
Now you need to compute the residues of the function at the two poles, using the same branch of the square root. The poles of $1/(1+z^2) = \frac{1}{(z+i)(z-i)}$ are at $z=e^{\pi i/2}$ and $z=e^{3\pi i/2}$, so you find
$$ 2I = 2\pi i \left( \frac{(e^{\pi i/2})^{-1/2}}{2i} +\frac{(e^{3\pi i/2})^{-1/2}}{-2i} \right) = 2\pi \sin{\frac{1}{4}\pi} = \frac{2\pi}{\sqrt{2}}, $$
so $I = \dfrac{\pi}{\sqrt{2}}$.






However, I do recommend that you don't attempt to use contour integration for all such problems: imagine trying to do
$$ \int_0^{\infty} \frac{x^{s-1}}{(a+bx^n)^m} \, dx, $$
for general $a,b,s,m,n$ such that it converges, using that method! No, the useful thing to know is that
$$ \frac{1}{A^n} = \frac{1}{\Gamma(n)}\int_0^{\infty} \alpha^{n-1}e^{-\alpha A} \, d\alpha, $$

which enables you to do more general integrals of this type. Contour integration's often a quick and cheap way of doing simple integrals, but becomes impractical in some general cases.
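As a quick numerical sanity check of the result above (a sketch with SciPy; the quadrature call is tooling I'm assuming, not part of the original argument), one can compare the integral against $\pi/\sqrt{2}$:

```python
# Numerical sanity check (sketch): the integral should equal pi/sqrt(2) ~ 2.2214.
import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: 1.0 / (np.sqrt(x) * (x**2 + 1)), 0, np.inf)
print(value, np.pi / np.sqrt(2))
```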


probability - $f(x)=\alpha e^{-x^2-\beta x}$. Find $\alpha$ and $\beta$. Expectation is given.




Let the probability density function of a random variable $X$ be given
by




$f(x)=\alpha e^{-x^2-\beta x},\qquad -\infty < x < \infty$



If $E(X)= -\dfrac{1}{2}$ then



$(A) \ \alpha =\dfrac{e^{\frac{-1}{4}}}{\sqrt \pi}$ and $\beta=1$



$(B) \ \alpha =\dfrac{e^{\frac{-1}{4}}}{\sqrt \pi}$ and $\beta=-1$



$(C) \ \alpha =e^\frac{-1}{4}\sqrt \pi$ and $\beta=-1$




$(D) \ \alpha =e^\frac{-1}{4}\sqrt \pi$ and $\beta=1$




The way I did this question:



$\int_{-\infty}^{\infty}f(x)dx=\int_{-\infty}^{\infty}\alpha e^{-(x^2+\beta x +\frac{\beta^2}{4}-\frac{\beta^2}{4})}dx=\int_{-\infty}^{\infty}\alpha e^{-(x+\frac{\beta}{2})^2}e^{\frac{\beta^2}{4}}dx=1 $



put $y=(x+\frac{\beta}{2})$



$\int_{-\infty}^{\infty}\alpha e^{-y^2}e^{\frac{\beta^2}{4}}dy=\alpha\sqrt{\pi}e^{\frac{\beta^2}{4}}=1\implies \alpha=\dfrac{e^{\frac{-\beta^2}{4}}}{\sqrt{\pi}}$




Now using $E(X)=\int_{-\infty}^{\infty}x\alpha e^{-(x+\frac{\beta}{2})^2}e^{\frac{\beta^2}{4}}dx$



put $y=(x+\frac{\beta}{2})$



$E(X)=\int_{-\infty}^{\infty}(y-\frac{\beta}{2})\alpha e^{-y^2}e^{\frac{\beta^2}{4}}dy=\underbrace{\alpha e^{\frac{\beta^2}{4}} \int_{-\infty}^{\infty}y\, e^{-y^2}dy}_{\text {odd function}}-\frac{\beta}{2}\underbrace{\int_{-\infty}^{\infty}\alpha e^{-y^2}e^{\frac{\beta^2}{4}}dy}_{\text{pdf}}$



$0-\dfrac{\beta}{2}=-\dfrac{1}{2} \implies \fbox{$\beta=1$}$ and $\fbox{$\alpha=\dfrac{e^{-1/4}}{\sqrt{\pi}}$}$



After doing these rigorous calculations I got the answer, but this question was worth only 2 marks, so I am looking for a quick way out. Is there an alternate solution which is less time-consuming?



Answer



$f(x)$ is in the form of a normal PDF: $\frac{1}{\sqrt{2\pi\sigma^2}}\exp(-(x-\mu)^2/2\sigma^2)$, which should be equal to $\alpha\exp(-x^2-\beta x)$. Coefficients should match in LHS and RHS, so $2\sigma^2=1$ and $E[X]=\mu=-1/2$. Then, LHS becomes
$$\frac{1}{\sqrt{\pi}}\exp(-x^2+2\mu x - \mu^2)=\alpha\exp(-x^2-\beta x)$$
which gives you $\beta=1$ and $\alpha=\frac{\exp(-1/4)}{\sqrt{\pi}}$.
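For completeness, a symbolic check of this matching (a sketch with SymPy; not part of the original answer):

```python
# Symbolic check (sketch): with beta = 1 and alpha = exp(-1/4)/sqrt(pi),
# f is a valid pdf with mean -1/2.
import sympy as sp

x = sp.symbols('x', real=True)
alpha = sp.exp(sp.Rational(-1, 4)) / sp.sqrt(sp.pi)
beta = 1
f = alpha * sp.exp(-x**2 - beta * x)
print(sp.integrate(f, (x, -sp.oo, sp.oo)))      # 1
print(sp.integrate(x * f, (x, -sp.oo, sp.oo)))  # -1/2
```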


calculus - How to evaluate the limit $\lim_{x \to \infty}\left(\left(x+\frac{1}{x}\right)\arctan(x)-\frac{\pi}{2}x\right)$?




$$\lim\limits_{x \to \infty}\left(\left(x+\frac{1}{x}\right)\arctan(x)-\frac{\pi}{2}x\right)$$




From graphs, I know that the limit is exactly $-1$ (this limit is the "$b$" in $f(x)=ax+b$, where $f(x)$ is the asymptote of another function).




Using l'Hospital's rule I managed to get to a point where my limit equals $0$. I've checked the calculations a few times and can't find where the mistake is.



Could you please show me the steps to evaluate this limit?


Answer



Since
$$\lim\limits_{x \to \infty}\frac{\arctan(x)}{x}=0$$
and
$$\lim\limits_{x \to \infty}\left(\arctan(x)-\frac{\pi}{2}\right)=0$$




you have
$$\lim\limits_{x \to \infty}\left(\left(x+\frac{1}{x}\right)\arctan(x)-\frac{\pi}{2}x\right)$$
$$=\lim\limits_{x \to \infty}\left(x\arctan(x)-\frac{\pi}{2}x\right)$$
$$=\lim\limits_{x \to \infty}\left(x \left(\arctan(x)-\frac{\pi}{2}\right)\right)$$
$$=\lim\limits_{x \to \infty}\frac{\arctan(x)-\frac{\pi}{2}}{1/x}$$
which has the indeterminate form $0/0.$ Now use l'Hospital
$$=\lim\limits_{x \to \infty}\frac{1/(1+x^2)}{-1/x^2}$$
$$=\lim\limits_{x \to \infty}\frac{-x^2}{1+x^2}=-1$$
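A quick numerical check (a sketch) that the expression indeed tends to $-1$:

```python
# Numerical check (sketch): (x + 1/x) * arctan(x) - (pi/2) x tends to -1.
import math

for x in (1e2, 1e4, 1e6):
    print(x, (x + 1/x) * math.atan(x) - math.pi / 2 * x)
```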


ELEMENTARY PROOF: Prove $a^2b^2(a^2-b^2)$ is divisible by $12$.



My first thought was to see if $a^2 b^2(a^2-b^2)$ is divisible by $2$ and $3$ since they are the prime factors. But I cannot seem to get anywhere. Please give me initial hints. We did not learn about modular arithmetic, so please try not to use it in the proof.



Thanks


Answer



First, let us see how squares behave modulo 3:



$$ n^2\, \text{mod}\, 3$$




We know n is either 0, 1, or 2 mod 3. Squaring this gives 0, 1, and 4 = 1 mod 3. In other words,
$$ n^2\, \text{mod}\, 3 = 0$$



or



$$ n^2\, \text{mod}\, 3 = 1$$



Now, consider the different possible cases (both are 0 mod 3; both are 1 mod 3; one is 0 and the other is 1).




Next, do the same thing but under mod 2. You should notice that if a or b (or both) are even, the result follows easily. The only case left to consider is if a and b are odd... how can we factor the expression $a^2 - b^2$?
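For the skeptical reader, a brute-force check of the statement on small integers (a sketch, of course not a proof):

```python
# Brute-force check (sketch): 12 divides a^2 b^2 (a^2 - b^2) for small a, b.
ok = all((a**2 * b**2 * (a**2 - b**2)) % 12 == 0
         for a in range(-30, 31) for b in range(-30, 31))
print(ok)  # True
```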


Monday, 30 December 2013

elementary number theory - the exponent of the highest power of p dividing n!

The formula for the exponent of the highest power of a prime $p$ dividing $n!$ is $\sum_{k\ge 1} \left\lfloor \frac{n}{p^k}\right\rfloor$, but the question asks about $n=1000!$ (really, it has the factorial) and $p=5$.



When I used Wolfram Alpha, I panicked because the number has $2{,}567$ decimal digits.




I think if I write this number I'd need paper all the way to the Amazon.



Perhaps I misunderstand the formula?
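For reference, the formula (Legendre's formula) is easy to evaluate for an ordinary argument; a Python sketch, which makes it clear that the only difficulty in the question is that $n=1000!$ is itself enormous:

```python
# Legendre's formula (sketch): exponent of the prime p in n!.
def legendre(n: int, p: int) -> int:
    e = 0
    while n:
        n //= p
        e += n
    return e

print(legendre(1000, 5))  # 249, the exponent of 5 in 1000!
```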

calculus - Need some hint or technique for this integral



What technique should we use for this integral:
$$\int_0^1\frac{\ln x\ln (1-x^2)}{1-x^2}\text{d}x$$
Can anyone give a brief way to evaluate this?


Answer



One possible, although not very short, way is to reduce the integral to derivatives of the beta integral:
$$\begin{eqnarray}
\int_0^1 \frac{\log(x) \log(1-x^2)}{1-x^2} \mathrm{d}x &\stackrel{u=x^2}{=}& \frac{1}{4} \int_0^1 \frac{\log(u)}{\sqrt{u}} \frac{\log(1-u)}{1-u} \mathrm{d}u \\

&=& \frac{1}{4} \lim_{\alpha\to \frac{1}{2}} \lim_{\beta \downarrow 0^+} \frac{\mathrm{d}}{\mathrm{d} \alpha} \frac{\mathrm{d}}{\mathrm{d} \beta} \int_0^1 u^{\alpha-1} (1-u)^{\beta-1}\mathrm{d}u
\\ &=& \frac{1}{4} \lim_{\alpha\to \frac{1}{2}} \lim_{\beta \downarrow 0^+} \frac{\mathrm{d}}{\mathrm{d} \alpha} \frac{\mathrm{d}}{\mathrm{d} \beta} \frac{\Gamma(\alpha) \Gamma(\beta)}{\Gamma(\alpha+\beta)}
\\
\\ &=& \frac{1}{4} \lim_{\alpha\to \frac{1}{2}} \lim_{\beta \downarrow 0^+} \frac{\mathrm{d}}{\mathrm{d} \alpha} \frac{\mathrm{d}}{\mathrm{d} \beta} \frac{\Gamma(\alpha) \Gamma(\beta+1)}{\beta \, \Gamma(\alpha+\beta)}
\end{eqnarray}
$$
Now we would do a series expansion around $\beta=0$:
$$
\frac{\Gamma(\alpha) \Gamma(\beta+1)}{\beta \, \Gamma(\alpha+\beta)} = \frac{1}{\beta} + \left(-\gamma - \psi(\alpha)\right) + \frac{\beta}{2} \left( \frac{\pi^2}{6} + \gamma^2+2 \gamma \psi(\alpha) + \psi(\alpha)^2 -\psi^{(1)}(\alpha) \right) + \mathcal{o}(\beta)
$$

Differentiation with respect to $\alpha$ kills the singular term, and we find:
$$
\int_0^1 \frac{\log(x) \log(1-x^2)}{1-x^2} \mathrm{d}x = \frac{1}{4} \lim_{\alpha \to \frac{1}{2}} \left( \gamma \psi^{(1)}(\alpha) + \psi(\alpha) \psi^{(1)}(\alpha) - \frac{1}{2} \psi^{(2)}(\alpha) \right)
$$
Now, using $\psi(1/2) = -\gamma - 2 \log(2)$, $\psi^{(1)}(1/2) = \frac{\pi^2}{2}$ and $\psi^{(2)}(1/2) = -14 \zeta(3)$ we finally arrive at the answer:
$$
\int_0^1 \frac{\log(x) \log(1-x^2)}{1-x^2} \mathrm{d}x = \frac{1}{4} \left( 7 \zeta(3) - \pi^2 \log(2) \right)
$$



Notice also that

$$
\frac{\log(1-x^2)}{1-x^2} = -\sum_{n=1}^\infty H_n x^{2n}
$$
This leads, together with $\int_0^1 x^{2n} \log(x) \mathrm{d}x = -\frac{1}{(2n+1)^2}$:
$$
\int_0^1 \frac{\log(x) \log(1-x^2)}{1-x^2} \mathrm{d}x = \sum_{n=1}^\infty \frac{H_n}{(2n+1)^2}
$$
Sums of this type have been discussed before.



Edit: In fact exactly this sum was asked about by the OP, as noted by @MhenniBenghorbal in the comments, and @joriki provided an excellent answer.
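A numerical cross-check of the closed form (a sketch using mpmath; tolerances are my assumption):

```python
# Cross-check (sketch): the integral vs. (7 zeta(3) - pi^2 log 2) / 4.
from mpmath import mp, quad, log, pi, zeta

mp.dps = 30
integral = quad(lambda x: log(x) * log(1 - x**2) / (1 - x**2), [0, 1])
closed_form = (7 * zeta(3) - pi**2 * log(2)) / 4
print(integral, closed_form)  # both ~ 0.3933
```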



complex analysis - How to finish proof of ${1 \over 2}+ \sum_{k=1}^n \cos(k\varphi ) = {\sin({n+1 \over 2}\varphi)\over 2 \sin {\varphi \over 2}}$



I'm trying to prove the identity $$ {1 \over 2}+ \sum_{k=1}^n \cos(k\varphi ) = {\sin({n+1 \over 2}\varphi)\over 2 \sin {\varphi \over 2}}$$



What I've done so far:



From the geometric series $\sum_{k=0}^{n-1}q^k = {1-q^n \over 1 - q}$ for $q = e^{i \varphi}$, taking the real part on both sides, I got




$$ \sum_{k=0}^{n-1}\cos (k\varphi ) = {\sin {\varphi \over 2} - \cos (n\varphi) + \cos (n-1)\varphi \over 2 \sin {\varphi \over 2}}$$



I checked all my calculations twice and found no mistake. Then I used trigonometric identities to get



$$ {1 \over 2} + \sum_{k=1}^{n-1}\cos (k\varphi ) + {\cos n\varphi \over 2 } = {\sin \varphi \sin (n \varphi ) \over 2 \sin {\varphi \over 2}}$$




How to finish this proof? Is there a way to rewrite $\sin \varphi \sin (n \varphi )$ as $\sin({n+1 \over 2}\varphi) - \sin {\varphi \over 2} \cos (n \varphi)$?



Answer



There is a mistake in the real part.



$$
\frac{q^n - 1}{q - 1} = \frac{e^{in\phi} - 1}{e^{i\phi} - 1}

= \frac{e^{i(n-1/2)\phi} - e^{-i\phi/2}}{e^{i\phi/2} - e^{-i\phi/2}}
= \frac{- i e^{i(n-1/2)\phi} + i e^{-i\phi/2}} {2\sin{\phi/2}}
$$
the real part is
$$
\frac{\sin ((n-1/2)\phi) + \sin(\phi/2)} {2\sin{\phi/2}}
$$
yielding the right result.







However, there is a simpler solution:
$$
1 + 2\sum_{k=1}^n \cos k\phi
= 1+ \sum_{k=1}^n \left(e^{i k\phi} + e^{-i k\phi}\right) = \sum_{k=-n}^n e^{i k\phi}
$$
which simplifies to
$$
e^{-i n\phi} \frac{1 - e^{i (2n+1)\phi}}{1 - e^{i\phi}}
= \frac{e^{-i (n + 1/2)\phi} - e^{i (n + 1/2)\phi}}

{ e^{-i\phi/2} - e^{i\phi/2}} = \frac{\sin((n + 1/2)\phi)}{\sin(\phi/2)}
$$
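A quick numerical check of the identity (a sketch):

```python
# Numerical check (sketch): 1/2 + sum_{k=1}^n cos(k phi)
# against sin((n + 1/2) phi) / (2 sin(phi / 2)).
import math

n, phi = 7, 0.9
lhs = 0.5 + sum(math.cos(k * phi) for k in range(1, n + 1))
rhs = math.sin((n + 0.5) * phi) / (2 * math.sin(phi / 2))
print(lhs, rhs)  # agree to machine precision
```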


References for modular linear algebra



I am looking for references for dealing with modular linear algebra. Specifically, let $A$ be an $m \times n$ matrix with entries in $\mathbb{Z}_k$, where $k \in \mathbb{N}$, $k>1$, can generally be non-prime. Given a system of linear equations



$$ A x = b \mod k, $$



how can I methodically determine





  • Whether the system has a solution.

  • The dimension of $\text{Col}(A)$, the $\mathbb{Z}_k$-span of the columns of $A$. I understand that if $k$ is not prime then $\text{Col}(A)$ is only a module, and not a vector space. We can still talk about its dimension, correct?

  • How to compute a minimal set of vectors $u_1, \dots, u_r$ such that $$\text{Col}(A) \oplus \text{Span}_{\mathbb{Z}_k}\{u_1,\dots, u_r \} = \mathbb{Z}_k^m $$



These questions are easy to deal with when $k$ is prime, since then we just do linear algebra. But when $k$ is not prime, funny things happen since we are not talking about vector spaces anymore, but about modules over rings. For example, matrices may not be reducible to row-echelon form.



Ideally, I would like to understand better how much of the linear algebra I am familiar with goes through. For example, do we still get a rank-nullity theorem in this case?


Answer



Linear algebra is not so simple over rings which are not fields. For instance, dimension is no longer a sufficient concept to grasp questions like yours about the column space of a matrix. Since this could be any module, one instead needs to specify which one, using a classification of $\mathbb{Z}_k$-modules as the $k$-torsion abelian groups. Dimension is only well defined for the free modules. In particular, it doesn't make sense to ask for a rank-nullity theorem.




If we try to define dimension as you did in the comments, then the $1\times 1$ matrix $2$ over $\mathbb{Z}/4$ violates the rank-nullity theorem.



Since the classification of finitely generated abelian groups is simple, the above isn't that severe a complication. But you'll be in more trouble on finding a complement to the column space, as you ask in your last question. Again, in the "matrix" $2$ over $\mathbb{Z}/4$, the column space is $2\mathbb{Z}/4$, which is not a direct summand of $\mathbb{Z}/4$. In general, submodules over a ring need not have complements.



As for solving equations, as suggested in the comments one can lift the coefficients to $\mathbb{Z}$, compute the Smith normal form up there, and then reduce modulo $k$. One can also use the Chinese Remainder Theorem to work over fields if $k$ happens to be a product of distinct primes-just solve your problem separately modulo each prime factor.
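As an illustration of the lifting approach, here is a sketch using SymPy (assuming a version that ships smith_normal_form; the matrix is a made-up example):

```python
# Sketch: lift a matrix over Z_k to Z and compute its Smith normal form.
# Mathematically D = U*A*V with U, V unimodular over Z; reducing D mod k
# decouples A x = b (mod k) into scalar congruences d_i y_i = c_i (mod k).
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 4], [6, 8]])  # hypothetical example matrix
print(smith_normal_form(A, domain=ZZ))
```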


trigonometry - Express the trigonometric identity $\cos nx$ as an infinite series of $\sin^2 x/2$.



The trigonometric function $\cos nx$ is expressed as a series only in terms of $\sin^2 \frac{x}{2}$ as follows.

$$ \cos nx = 1 + \sum_{l=1}^n (-1)^l \frac{2^{2l}}{(2l)!} \prod_{k=0}^{l-1} (n^2 - k^2) \sin^{2l} \frac{x}{2}$$
This is given in the literature but the authors have not provided any proof of this equation. I have tried to prove it using the Euler formula and the binomial theorem, but could not succeed. Can anyone please provide a proof of this equation, or at least a guide on how to prove it?



Thank You.


Answer



The Chebyshev polynomials satisfy $T_n(\cos x) = \cos nx$. This means that $\cos nx$ can be written as a finite sum of powers of $\cos x$. Now simply use the identity $\cos x = 1-2\sin^2(x/2)$ to obtain that $\cos nx$ is a polynomial in $\sin^2(x/2)$.
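A small symbolic check (a sketch) that the finite series in the question agrees with the Chebyshev route for a sample $n$:

```python
# Symbolic check (sketch): cos(nx) as a polynomial in s = sin^2(x/2).
from math import prod
import sympy as sp

s = sp.symbols('s')
n = 4
via_chebyshev = sp.chebyshevt(n, 1 - 2*s)  # T_n(cos x) with cos x = 1 - 2 sin^2(x/2)
series = 1 + sum((-1)**l * sp.Integer(2)**(2*l) / sp.factorial(2*l)
                 * prod(n**2 - k**2 for k in range(l)) * s**l
                 for l in range(1, n + 1))
print(sp.expand(via_chebyshev - series))  # 0
```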


real analysis - Improper Riemann integral questions and determining whether they exist.



Explain why
\begin{equation*}

\int_{0}^{1} \frac{dx}{x}, \quad \int_{1}^{\infty} \frac{dx}{x}
\end{equation*}

are improper Riemann integrals, and determine whether the limits they represent exist.
The improper Riemann integrals are the limits
\begin{equation*}
\begin{split}
\int_{0}^{1} \frac{dx}{x} &= \lim_{b\to 0^+} \int_{b}^{1} \frac{dx}{x} \\
&= \lim_{b\to 0^+} \left[\ln{(x)}\right]_{b}^{1} \\
&= \lim_{b\to 0^+} \left(-\ln{(b)}\right) \\
&= -\infty
\end{split}

\end{equation*}

and
\begin{equation*}
\begin{split}
\int_{1}^{\infty} \frac{dx}{x} &= \lim_{b\to \infty} \int_{1}^{b} \frac{dx}{x} \\
&= \lim_{b\to\infty} \left[\ln{(x)}\right]_{1}^{b} \\
&= \lim_{b\to\infty} \left(\ln{(b)}\right) \\
&= \infty.
\end{split}
\end{equation*}

So neither of these integrals exist.
My question is how do I explain why these are improper Riemann integrals? Thanks!



Answer



Despite the integrand becoming $\infty$ at, say, $x=a$, one way an integral is improper but convergent is when $0<p<1$ in $$I=\int_{a}^{b} \frac{dx}{(x-a)^p}<\infty.$$ For instance
$$\int_{0}^{1} \frac{dx}{\sqrt{x}}=2$$ is improper but convergent. The integral $$\int_{0}^{1} \frac{dx}{x^{1.01}} =\infty$$ is improper and divergent.



Next, one more way an integral becomes improper but convergent is when $q>1$ in $$J=\int_{1}^{\infty} \frac{dx}{x^q}.$$ For instance $$\int_{1}^{\infty} \frac{dx}{x^{1.01}}=100, \qquad \int_{0}^{\infty} \frac{dx}{\sqrt{x}}=\infty.$$



Finally, both of your integrals are improper but divergent: the first because $p=1$ and the second because $q=1$.



Notice that the well known integral $$\pi=\int_{-1}^{1} \frac{dx}{\sqrt{1-x^2}}=\int_{-1}^{1} \frac{dx}{(1-x)^{1/2} (1+x)^{1/2}}$$ is improper but convergent, because at both $x=\pm 1$ we have $p=1/2<1.$




The integral $$\int_{0}^{1} \ln x~ dx =x\ln x-x\,\Big|_{0}^{1}=-1$$ is improper but convergent. In this case one has to take the limit $x\rightarrow 0$ of the anti-derivative.



Improper integrals do not connect well to the notion of area under the curve. This may be seen as a defect in the theory of integration; such integrals are called improper, and they may or may not be convergent.



There are many other ways an integral can be improper but convergent, for instance
$$\int_{0}^{\infty} \frac{\sin x}{x}\, dx=\pi/2,$$ despite the integrand existing only as a limit when $x \rightarrow 0.$



The integral $$\int_{0}^{\infty} \frac {dx}{(1+x^4)^{1/4}} \sim \int^{\infty} \frac{dx}{(x^4)^{1/4}} =\infty$$ is divergent; we know this without actually solving the integral!




Most often, studying the integrand near the point of singularity (discontinuity) or near $\infty$, and working out the value of $p$ or $q$, helps in determining whether it is convergent.


real analysis - Is there a function on a compact interval that is differentiable but not Lipschitz continuous?



Consider a function $f:[a,b]\rightarrow \mathbb{R}$, does there exist a differentiable function that is not Lipschitz continuous?



After discussing this with friends we have come to the conclusion that none exist. However there is every chance we are wrong. If it is true that none exist how could we go about proving that? It is true that if $f$ is continuously differentiable then $f$ is Lipschitz, but what if we don't assume the derivative is continuous?


Answer



The map $f : [0,1] \to \mathbb{R}$, $f(0) = 0$ and $f(x) = x^{3/2} \sin(1/x)$ is differentiable on $[0,1]$ (in particular $f'(0) = \lim_{x \to 0^+} f(x)/x = 0$), but it is not Lipschitz (the derivative $f'(x)$ is unbounded).
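Spelling out the last claim (an added step, not in the original answer): for $x>0$,
$$ f'(x) = \tfrac{3}{2}\sqrt{x}\,\sin(1/x) - \tfrac{1}{\sqrt{x}}\cos(1/x), $$
and along $x_n = \frac{1}{2\pi n}$ we get $f'(x_n) = -\sqrt{2\pi n} \to -\infty$, so $f'$ is unbounded and $f$ cannot be Lipschitz.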



Sunday, 29 December 2013

integration - Finding the integral $\int_0^\pi\frac{d\theta}{(2+\cos\theta)^2}$ by complex analysis

Trying to find the integral $$\int_0^\pi\frac{d\theta}{(2+\cos\theta)^2}$$ by complex analysis.



I let $z = \exp(i\theta)$, $dz = i \exp(i\theta)d\theta$, so $ d\theta=\dfrac{dz}{iz}$. Since $(2+\cos\theta)^2$ is symmetric about $\theta=\pi$, we have $\int_0^\pi = \frac{1}{2}\int_0^{2\pi}$, so I am trying therefore to find the integral $$\frac{1}{2} \oint_C \frac{1}{\left(2 + \frac{z}{2} + \frac{1}{2z}\right)^2}\,\frac{dz}{iz}.$$ I am unsure of which contour I should use, and how to proceed besides that. Could anyone help?

real analysis - Finding one-to-one correspondence between two infinite sets




Suppose $f:A\rightarrow B$ is a surjective map of infinite sets. Let $C$ be a countable subset of $B$. Suppose that $f^{-1}(y)$ has two elements if $y\in C$, and one element if $y\in B-C$. Show that there is a one-to-one correspondence between $A$ and $B$.



So I thought to define a function $\phi:B\rightarrow A$. It's given that for $y\in B-C$, $f^{-1}(y)$ consists of a single element, so for those elements we are fine. The only thing to do is find a correspondence for the elements in $C$. But we know that there are two elements of $A$ corresponding to each of these elements in $C$; how can this possibly be one-to-one?



Thanks for any help offered!


Answer



The trick is to exploit the countability of $C$: enumerate it as $c_1, c_2, c_3, \ldots$. Rewrite $f$ as follows. Leave $f$ alone over $B - C$. For each pair of elements of $A$ mapped to the same element of $C$, arbitrarily designate one of them as 'odd' and the other as 'even'. Then remap them by parity: for the $i$'th pair in $A$ (the one mapped to $c_i$), send the 'odd' element to $c_{2i-1}$ and the 'even' element to $c_{2i}$.
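In symbols (an added summary, assuming $C$ is countably infinite): with the enumeration above, write $f^{-1}(c_i)=\{x_i,y_i\}$. Then
$$ g(a)=\begin{cases} f(a), & f(a)\in B\setminus C,\\ c_{2i-1}, & a=x_i,\\ c_{2i}, & a=y_i, \end{cases} $$
is a one-to-one correspondence between $A$ and $B$.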


Saturday, 28 December 2013

calculus - Solving limit without L'Hôpital



I need to solve this limit without L'Hôpital's rule. These questions always seem to have some algebraic trick which I just can't see this time.



$$ \lim_{x\to0} \frac{5-\sqrt{x+25}}{x}$$



Could someone give me a hint as to what I need to do to the fraction to make this work? Thanks!


Answer



$$\lim_{x\to0} \frac{5-\sqrt{x+25}}{x}=\lim_{x\to0} \frac{(5-\sqrt{x+25})(5+\sqrt{x+25})}{x(5+\sqrt{x+25})}=\lim_{x\to0} \frac{25-(x+25)}{x(5+\sqrt{x+25})}=\lim_{x\to0} \frac{-1}{5+\sqrt{x+25}}=-\frac{1}{10}$$


geometry - The Sine Law: A Simplified Criterion for the Ambiguous Case?

Here is my suggestion for an issue that doesn't seem to be handled well in any online notes that I have seen. Can anyone give a counter-example?





If you are given $a,b,$ and $B$ in $\triangle ABC$ (using the standard trig naming conventions) and you are using the Sine Law to solve for $A$ in $\frac{a}{\sin A}=\frac{b}{\sin B}$, then you would have a so-called Ambiguous Case if and only if $b \lt a$.



In that case, calculate the first value $A_1=\sin^{-1}(\frac{a \cdot \sin B}{b})$. Calculate the second value $A_2=180^\circ - A_1$.






Follow-up



Based on discussions here and elsewhere, here is my corrected version of the criterion:





If you are given $a,b,$ and $B$ in $\triangle ABC$ (using the standard trig naming conventions) and you are using the Sine Law to solve for $A$ in $\frac{a}{\sin A}=\frac{b}{\sin B}$, then you would obtain at least one solution $A_1=\sin^{-1}(\frac{a \cdot \sin B}{b})$. And you would have a so-called Ambiguous Case if and only if $A_1\ne 90^\circ$ and $b\lt a$.




In that case, you would have a second solution $A_2=180^\circ - A_1$.

set theory - A question concerning on the axiom of choice and Cauchy functional equation



The Cauchy functional equation:
$$f(x+y)=f(x)+f(y)$$

has solutions called 'additive functions'. If no conditions are imposed on $f$, there are infinitely many functions that satisfy the equation, called 'Hamel' functions. This is considered valid if and only if Zermelo's axiom of choice is accepted as valid.



My question is: suppose we do not accept the axiom of choice as valid; does this mean that we have a finite number of solutions? Or are the 'Hamel' functions still valid?



Thanks for any hints or answers.


Answer



What you wrote is not true at all. The argument is not valid "if and only if the axiom of choice holds".




  1. Note that there are always continuous functions of this form; they all look like $f(x)=ax$ for some real number $a$. There are infinitely many of those.



  2. The axiom of choice implies that there are discontinuous functions like this, furthermore a very very weak form of the axiom of choice implies this. In fact there is very little "choice" which can be inferred from the existence of discontinuous functions like this, namely the existence of non-measurable sets.


  3. Even if the axiom of choice is false, it can still hold for the real numbers (i.e. the real numbers can be well-ordered even if the axiom of choice fails badly in the general universe). However even if the axiom of choice fails at the real numbers it need not imply that there are no such functions in the universe.


  4. We know that there are models in which all functions which have this property must be continuous, for example models in which all sets of real numbers have the Baire property. There are models of ZF in which all sets of reals have the Baire property, but there are non-measurable sets. So we cannot even infer the existence of discontinuous solutions from the existence of non-measurable sets.


  5. Observe that if there is one discontinuous solution then there are many different ones, since if $f,g$ are two additive functions then $f\circ g$ and $g\circ f$ are also additive functions. The correct question is to ask whether or not the algebra of additive functions is finitely generated over $\mathbb R$, but to this I do not know the answer (and I'm not sure if it is known at all).








integration - How can one check that a definite integral has been evaluated correctly?



In many problems we can check our solution by "solving the problem backwards". E.g. we can plug the solutions of an equation into the equation to get identities, or differentiate the antiderivative of a function we (indefinitely) integrated to get the original function.




But if we've calculated a definite integral, we generally only get a number, which we can't plug anywhere to check that it's the correct result.



Is there any generally applicable way of checking whether the result we got from definite integration is correct?


Answer



Some simple sanity checks off the top of my head:




  • If the function you integrated was positive, did you get a positive answer?

  • More generally, can you write down a bound (an easy one is that $\int_a^b f(x) \, dx$ lies between $(b - a) \min_{x \in [a, b]} f(x)$ and $(b - a) \max_{x \in [a, b]} f(x)$) and check that your answer satisfies it?


  • Can you do the integral a completely different way and get the same answer? (This is a general-purpose way to check your answer to a problem.)

  • As in Landuros' comment, can you solve the indefinite integral, then differentiate it?



Another general but less simple strategy that comes to mind is to see if whatever method you used to compute the integral can also compute the integral with an additional parameter in the integrand; then you can check whether the answer makes sense as a function of the parameter, or at least whether your method is handling the parameter sensibly.






Here's a simple example. We have the result




$$\int_{-\infty}^{\infty} \frac{1}{1 + x^2} \, dx = \pi$$



which can be obtained by knowing that the antiderivative of $\frac{1}{1 + x^2}$ is $\arctan x$, but let's pretend for a moment that we don't know that, since it can also be obtained by contour integration. What sanity checks can we do?



For starters, the integrand is positive and so is the answer, so that's good. We can also upper bound the integrand $\frac{1}{1 + x^2}$ crudely by $1$ on $[-1, 1]$ and $\frac{1}{x^2}$ elsewhere, giving the upper bound



$$\int_{-\infty}^{-1} \frac{dx}{x^2} + 2 + \int_1^{\infty} \frac{dx}{x^2} = 4$$



which is bigger than $\pi$, so that's good. Similarly, we can lower bound the integrand by $\frac{1}{2}$ on $[-1, 1]$ and $\frac{1}{2x^2}$ elsewhere, giving exactly half the above upper bound, namely $2$, as a lower bound, which is less than $\pi$, so that's also good; this is already enough to confirm that we're not off by a factor of $2$, which is reasonably common in integrals whose answer involves $\pi$.




More interestingly, the contour integral argument generalizes to give



$$\int_{-\infty}^{\infty} \frac{1}{1 + tx^2} \, dx = \frac{\pi}{\sqrt{t}}$$



(which we can also get by substituting $x \mapsto \sqrt{t} x$, but again pretend for a minute not to have noticed this), and we can ask ourselves if the two sides behave the same way as a function of $t$. Well, on the LHS bigger values of $t$ make the integrand decay faster, so the integral should be a decreasing function of $t$, which is the case on the RHS. Moreover the integral should go to $0$ as $t \to \infty$ and should go to $\infty$ as $t \to 0$, which is also the case on the RHS. You can try to adapt the crude upper and lower bounds from above to this case as well, and they'll continue to match. This is not a check that the numerical answer is right, but it's a check that the way you're using the contour integration method is giving answers that behave sensibly.






A more complicated example is the Gaussian integral




$$\int_{-\infty}^{\infty} e^{-x^2} \, dx = \sqrt{\pi}.$$



Here the indefinite integral is unavailable, so we have to do something else. Again we have a positive integrand and a positive answer, so that's good. We can bound $e^{-x^2}$ by $1$ on $[-1, 1]$ and by $|x| e^{-x^2}$ elsewhere (chosen because $xe^{-x^2}$ has antiderivative $-\frac{1}{2} e^{-x^2}$), giving the upper bound



$$2 + 2 \int_1^{\infty} xe^{-x^2} \, dx = 2 + e^{-1} \approx 2.37$$



which is bigger than $\sqrt{\pi} \approx 1.77$, so that's good. This is enough to verify that we're not off by a factor of $2$, and also it's just barely enough to verify that we're not off by a factor of $\sqrt{2}$, since we have $\sqrt{2 \pi} \approx 2.51$.



Now, one way to compute the Gaussian integral is famously to compute its square, namely




$$\int_{\mathbb{R}^2} e^{-x^2 - y^2} \, dx \, dy = \pi$$



using polar coordinates. We can sanity check this method by checking that it continues to give sensible answers in higher dimensions, namely it should give



$$\int_{\mathbb{R}^n} e^{-x_1^2 - \dots - x_n^2} \,dx_1 \dots \, dx_n = \pi^{n/2}.$$



We'll compute this integral in spherical coordinates by integrating over all spheres of radius $r$ for all $r \ge 0$; this gives that the integral becomes



$$\int_0^{\infty} S_{n-1}(r) e^{-r^2} \, dr$$




where $S_{n-1}(r)$ is the "hypersurface area" of an $(n-1)$-sphere of radius $r$. This integral is easiest to compute when $n = 2$ where $S_1(r) = 2 \pi r$ and we have an elementary antiderivative available, which gives $\pi$ as expected. In general $S_{n-1}(r) = r^{n-1} S_{n-1}(1)$ and so the identity we want to check is that



$$\boxed{ S_{n-1}(1) \int_0^{\infty} r^{n-1} e^{-r^2} \, dr = \pi^{n/2} }.$$



Depending on which quantity is more mysterious to you, you can think of this as either a formula for the surface area $S_{n-1}(1)$ of an $(n-1)$-sphere of radius $1$, or a formula for the integral $\int_0^{\infty} r^{n-1} e^{-r^2} \, dr$. Both of these quantities are slightly mysterious, but in any case formulas for both are known and you can verify that they multiply to $\pi^{n/2}$, so at least all of the slightly mysterious things behave consistently with each other.



For starters, when $n = 3$ we have $S_2(1) = 4 \pi$, so the identity to check is that



$$4 \pi \int_0^{\infty} r^2 e^{-r^2} \, dr = \pi^{3/2}.$$




You can check this by using a change of coordinates as above to compute that



$$\int_0^{\infty} e^{-tr^2} \, dr = \frac{\sqrt{\pi}}{2} t^{-1/2}$$



and then differentiating both sides with respect to $t$ to get



$$\int_0^{\infty} r^2 e^{-tr^2} \, dr = \frac{\sqrt{\pi}}{4} t^{-3/2}$$



then substituting $t = 1$ to get $4 \pi \left( \frac{\sqrt{\pi}}{4} \right) = \pi^{3/2}$. So everything checks out. Of course, at this point you're checking integrals by evaluating other integrals and if you're worried about making mistakes while integrating you might worry that your original answer is correct but that your checks are wrong... so keep it simple if you can. (For example, I had accidentally written $t^{1/2}$ instead of $t^{-1/2}$ in the second to last equation, and got confused because I was missing a minus sign, then did the sanity check to see if $t^{1/2}$ had the right qualitative behavior as a function of $t$; of course it didn't and I found my mistake!)
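These sanity checks are also easy to automate; a sketch with SciPy for the two model integrals above:

```python
# Numerical sanity checks (sketch) for the arctan and Gaussian integrals.
import numpy as np
from scipy.integrate import quad

v1, _ = quad(lambda x: 1 / (1 + x**2), -np.inf, np.inf)
v2, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(v1, np.pi)            # ~ 3.14159 each
print(v2, np.sqrt(np.pi))   # ~ 1.77245 each
```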


complex analysis - Analytic continuation commuting with series



Suppose $f_1,f_2,...$ are entire functions, and there is an open subset $U \subseteq \mathbb{C}$ such that the series $F(z) = \sum_{n=1}^{\infty} f_n(z)$ converges normally on $U$. Also suppose that $F$ can be analytically continued to an entire function.



I have a situation where all $f_n$ vanish at some point $z_0$, but unfortunately $z_0 \notin U.$ Can we still say that $F(z_0) = 0$?



I would guess not, since it feels like bending the rules of analytic continuation in a way that shouldn't be allowed. But I didn't think of a counterexample.


Answer



The answer is NO.




Example. Let $U$ be the unit disk, and
$$
f_n(z)=(1-z)z^n.
$$
Then $F(z)=\sum f_n(z)=z$, and while $f_n(1)=0$, we have that $F(1)=1$.
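Indeed (adding the one-line verification): for $|z|<1$,
$$ \sum_{n=1}^\infty (1-z)z^n = (1-z)\,\frac{z}{1-z} = z, $$
so $F$ extends to the entire function $z$, yet $F(1)=1$ while $f_n(1)=0$ for every $n$.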


Friday, 27 December 2013

proof writing - Proving sequence statement using mathematical induction, $d_n = \frac{2}{n!}$



I'm stuck on this homework problem. I must prove the statement using mathematical induction



Given: A sequence $d_1, d_2, d_3, \dots$ is defined by letting $d_1 = 2$ and, for all integers $k \ge 2$,

$$
d_k = \frac{d_{k-1}}{k}
$$



Show that for all integers $n \ge 1$ , $$d_n = \frac{2}{n!}$$






Here's my work:




Proof (by mathematical induction). For the given statement, let the property $p(n)$ be the equation:



$$
d_n = \frac{2}{n!}
$$



Show that $P(1)$ is true:
The left hand side of $P(1)$ is $d_n$ , which equals $2$ by definition of the sequence.
The right hand side is:




$$ \frac{2}{(1)!} =2 $$



Show that for all integers $k \geq 1$, if $P(k)$ is true, then $P(k+1)$ is true.
Let $k$ be any integer with $k \geq 1$, and suppose $P(k)$ is true. That is, suppose (this is the inductive hypothesis):



$$ d_{k} = \frac{2}{k!} $$



We must show that $P(k+1)$ is true. That is, we must show that:



$$ d_{k+1} = \frac{2}{(k+1)!} $$




(I thought I was good until here.)



But the left hand side of $P(k+1)$ is:



$$ d_{k+1} = \frac{d_k}{k+1} $$



By inductive hypothesis:



$$ d_{k+1} = \frac{(\frac{2}{2!})}{k+1} $$




$$ d_{k+1} = \frac{2}{2!}\frac{1}{k+1} $$



but that doesn't seem to equal what I needed to prove: $ d_n = \frac{2}{n!}$


Answer



The following is not true: $$d_{k+1} = \frac{\left(\frac{2}{2!}\right)}{k+1}$$ since $d_k=\frac{2}{k!}$, not $\frac{2}{2!}$. You actually have $$d_{k+1} = \frac{\left(\frac{2}{k!}\right)}{k+1}=\frac{2}{k!\,(k+1)}=\frac{2}{(k+1)!}$$


calculus - Prove $\lim_{x\to\infty} \frac{x}{(\ln x)^3} = \infty$



$$\lim_{x \to \infty} \frac{x}{(\ln x)^3} = \infty$$




One way to think of this problem is in terms of the relative growth rates between the numerator and denominator. I know that $x$ grows asymptotically faster than $(\ln x)^3$ according to WolframAlpha. How can I prove this?


Answer



$$\lim_{x \to \infty} \frac{x}{(\ln x)^3}\quad\left(\tfrac{\infty}{\infty}\text{ form, using L'Hospital's rule}\right)\\ =\lim_{x \to \infty}\frac{x}{3(\ln x)^2}\quad\left(\tfrac{\infty}{\infty}\text{ form, using L'Hospital's rule}\right)\\ =\lim_{x \to \infty}\frac{x}{6\ln x}\quad\left(\tfrac{\infty}{\infty}\text{ form, using L'Hospital's rule}\right)\\ =\lim_{x \to \infty}\frac{x}{6}=\infty$$


discrete mathematics - Are these sets equipotent?



I need to decide which of these three sets are equipotent:



$M_1=\{(n_1,n_2,n_3)\in\mathbb{N}\times\mathbb{N}\times\mathbb{N}\ |\ n_1+n_2=n_3\}$



$M_2 = \{M\in P(\mathbb{Z})\ |\ 0\in M\}$




$M_3 = \cup _{a\in\mathbb{Z}}\{x\in\mathbb{R}\ |\ a\leq x < \frac{2a+1}{2}\}$



I want to prove (or disprove) the equipotency by finding injections to and from $\mathbb{N}$, $P(\mathbb{N})$ and $\mathbb{R}$ (Cantor-Schroeder-Bernstein).



I've already proven that $M_1$ is equipotent to $\mathbb{N}$:



1) $M_1\rightarrow\mathbb{N}$, $(n_1,n_2,n_3)\mapsto 2^{n_1}\cdot 3^{n_2}\cdot 5^{n_3}$



2) $\mathbb{N}\rightarrow M_1$, $n\mapsto (n,n,2n)$




I'm stuck finding injections like this for $M_2$ and $M_3$.



It already seems that $M_2$ is equipotent to $P(\mathbb{N})$ and $M_3$ is equipotent to $\mathbb{R}$, but what are the corresponding injections?


Answer



Since for any set $\;X\in P(\Bbb N)\;$ (for me the naturals do not contain zero) , we have that $\;X\cup\{0\}\in M_2\;$ , so we have that



$$\mathfrak c=|P(\Bbb N)|\le|M_2|\le|P(\Bbb Z)|=\mathfrak c\implies |M_2|=\mathfrak c$$


combinatorics - Why does $\binom{n}{2} = [(n-1) + (n-2) + (n-3) + \cdots + 1]$?



I was recently doing a homework problem that involved finding the number of lines used to connect a given number of points on a circle. Looking at it logically, I saw that for the first point there would be $n-1$ lines you could draw (where $n$ is the number of points on the circle), and the next point would have $n-2$ lines because you're not repeating the line between point $1$ and point $2$. It made sense that this would continue until point $n$, at which point there would be zero lines left to draw.



This meant that if you just add up all of those, you'd get the number.



For example, in the picture below there are $12$ points, so



$11+10+9+8+7+6+5+4+3+2+1 = 66$ lines.




Image of how the lines are connected



That's all fine, but the weird thing was, when I looked in the back of the book, the answer was given as $\binom{n}{2}$, which also equals $66$. What's the relationship? Why are the two equal?


Answer



$$\sum_{k=0}^n k = \frac{(n+1)n}{2}$$



And



$$\binom nk = \frac{n!}{k!(n-k)!} = \frac{n\cdot(n-1)\cdots(n-k+1)}{k!}$$




In particular



$$\binom n2 = \frac{n(n-1)}{2} = \sum_{k=0}^{n-1} k$$



Another way to see it is by induction. In particular $\binom 1 2 = 0$, and $\binom {n+1}{2} = \binom{n}{1} + \binom{n}{2}$, which probably makes the identity seem a bit less "random".
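A trivial numerical confirmation (a sketch) for the $n=12$ example from the question:

```python
# Check (sketch): C(12, 2) equals 11 + 10 + ... + 1 = 66.
from math import comb
print(comb(12, 2), sum(range(12)))
```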


combinatorics - power of 2 involving floor function divides cyclic product



If $S_n=\{a_1,a_2,a_3,\dots,a_{2n}\}$, where $a_1,a_2,a_3,\dots,a_{2n}$ are all distinct integers, denote by $T$ the product
$$T=\prod_{1\le i<j\le 2n}(a_i-a_j).$$
Prove that $2^{(n^2-n+2[\frac{n}{3}])}\times \operatorname{lcm}(1,3,5,\dots,(2n-1))$ divides $T$ (where $[\,]$ is the floor function).



I have tried many approaches. I tried using the fact that either the number of odd integers or the number of even integers is $\ge(n+1)$, and that in $T$ every integer is paired with every other integer only once. Since the difference of two evens or of two odds is even, $2^{\frac{n(n+1)}{2}}$ divides $T$. As for the $\operatorname{lcm}(1,3,5,\dots,(2n-1))$ I have no idea what to do. Please help; I am quite new to NT. Thank you.


Answer




HINT:



You're not quite correct for the powers of $2$. There are $2n$ numbers total, so there can in fact be exactly $n$ odds and $n$ evens (i.e. neither has to be $\ge n+1$.)



Instead: Let there be $k$ odds, and $2n-k$ evens. This gives you ${k \choose 2}$ factors of $2$ from the (odd - odd) terms and ${2n-k \choose 2}$ factors of $2$ from the (even - even) terms.




  • First you can show that their sum is minimized at $k=n$ and that sum is $n(n-1) = n^2 - n$.


  • So now you need another $2[{n \over 3}]$ factors of $2$. These come from splitting the evens further into $4k$ vs $4k+2$, and the odds into $4k+1$ vs $4k+3$, because some of the differences will provide a factor of $4$, i.e. an additional factor of $2$ on top of the factor of $2$ you already counted.





    • Proof sketch: e.g. suppose there are $n$ evens, and for simplicity of example let's say $n$ is itself even. In the worst split exactly $n/2$ will be multiples of $4$, which gives another $\frac12 {\frac n 2}({\frac n 2}-1)$ factors of $2$ from these numbers of form $4k$, and a similar thing happens with the $4k+1$'s, the $4k+2$'s, and the $4k+3$'s. The funny thing is that if you add up everything, $n({\frac n 2}-1) \ge 2[{\frac n 3}]$ (you can try it, and you will need to prove it). In other words $2[{\frac n 3}]$ is not a tight bound at all, but rather a short-hand that whoever wrote the question settled on just to make you think about the splits into $4k + j$. In fact a really tight bound would involve thinking about splits into $8k+j, 16k+j$ etc.


  • As for $\operatorname{lcm}(1, 3, 5, \dots, 2n-1)$, first note that the lcm dividing $T$ just means each of the odd numbers divides $T$. You can prove this by the pigeonhole principle.




    • Further explanation: E.g. consider $lcm(3, 9) = 9$, so this lcm divides $T$ if both odd numbers divide $T$. In general, the lcm can be factorized into $3^{n_3} 5^{n_5} 7^{n_7} 11^{n_{11}} \dots$ and it divides $T$ if every term $p^{n_p}$ divides $T$. but in your case $p^{n_p}$ itself must be one of the numbers in the list $(1, 3, 5, \dots, 2n-1)$ or else the lcm wouldn't contain that many factors of $p$.





Can you finish from here?


abstract algebra - What is $(x^7-x) \bmod (x^6-3)$ equal to?



I'm trying to use Rabin's test for irreducibility over finite fields, but part of the test requires calculating $\gcd(f,\,x^{p^{n_i}}-x \bmod f)$, where in my case $p=7$ and $n_i=6,3,2$, as I'm testing whether $f(x)=x^6-3$ is irreducible over $GF(7)$.



My trouble is I don't know how to calculate this remainder; I know how to do it for integers, and I know that in my case it implies $x^6=3$. But after this I'm stuck.



Could anyone walk me through how to find what $(x^7-x) \bmod (x^6-3)$ is equal to?



Also, is Rabin's test a good go-to for testing whether a polynomial is irreducible over a finite field? Or are there perhaps less cumbersome methods for higher degrees of $f(x)$, where $\deg f(x)>3$ and so $f(x)$ doesn't strictly need to be factored into linear polynomials in order to be irreducible? (Just suggestions would suffice.)



Answer



Division algorithm:



$$x^7 - x = (x^6 - 3) (x) + (2x)$$



and this is valid because $\deg (2x) < \deg (x^6 - 3)$



So the remainder is $2x$.
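The same remainder can be checked with SymPy (a sketch):

```python
# Check (sketch): (x^7 - x) mod (x^6 - 3) over GF(7) is 2x.
import sympy as sp

x = sp.symbols('x')
f = sp.Poly(x**7 - x, x, modulus=7)
g = sp.Poly(x**6 - 3, x, modulus=7)
print(f.rem(g))  # Poly(2*x, x, modulus=7)
```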


complex analysis - How to show that the entire function $f(z) = z^2 + cos{z}$ has range all of $mathbb{C}$?



I have been thinking about the following exercise from an old complex analysis qualifier exam for some days but I still don't know how to solve it. The problem is as follows:




Show that the entire function $f(z) := z^2 + \cos{z}$ has range all of $\mathbb{C}$.





At first I thought that maybe I could use Picard's Little Theorem to get a contradiction. I thought that maybe by considering the function $e^{f(z)}$ I could get the contradiction by assuming that $f(z)$ misses one point, so that the exponential would miss two points, contradicting Picard's Little Theorem; but since the exponential is periodic this argument doesn't work.




So my question is how can this be proved?




Thank you very much for any help.


Answer



I believe a similar proof as in the question What is the image near the essential singularity of z sin(1/z)? would work here too.




The function $g(z) = z + \cos (\sqrt{z})$ is of order $\frac{1}{2}$ (someone should verify that...) and so does not miss any points by Picard's first theorem (the link is a google books link).


Thursday, 26 December 2013

multivariable calculus - Show that $\frac{\sin(xy)}{y}$ is differentiable at $(0,0)$.

How do I show that the following function is differentiable at $(0,0)$?
$$
\begin{cases}
\dfrac{\sin(xy)}{y}, & \text{if }y \neq 0 \\
\\
0, & \text{if }y = 0
\end{cases}
$$
I calculated the partial derivatives and





  • $f_x = \cos(xy)$ exists near $(0,0)$ and is continuous

  • $f_y = \dfrac{xy \cos(xy) - \sin(xy)}{y^2}$ exists, but how do I show that it is continuous?

real analysis - Calculate and provide some justification to get an approximation of $Re(int_0^1(log x)^{-operatorname{li}(x)}dx)$




I was playing with the Wolfram Alpha online calculator about possible variations of an integral, the so-called Sophomore's dream. I agree that my example is artificial and isn't related to the previous nice integral, but I'm curious to know how we can obtain and justify a good approximation of the next integral.




Question. Can you provide a justification for a good approximation (I mean a few correct digits, four or six) of
$$\Re\left(\int_0^1\frac{1}{(\log x)^{\operatorname{li}(x)}}dx\right),$$
where $\operatorname{li}(x)$ is the logarithmic integral (see MathWorld)? Thanks in advance.




I prefer a strategy using analysis, but if you want to explain how to get it using a good strategy from numerical analysis (or by combining a numerical method with some facts deduced from the analysis of our integral), that is also welcome.




I also add the plot of our function, which you can see using this Wolfram Alpha online calculator:



plot Re ((log(x))^(-li(x))), from x=0 to 1





Answer



Solution:



This combination of integrals $$\boxed{A=\int_0^{0.6266}\left(2-e^x+\frac29\sin(6x)\right)\,dx\\B=-\frac9{25}\int_{0.5829}^{0.8685}\cos\left(11x-1.7\right)\,dx\\C=-\frac1{40}\int_{0.8736}^{0.9297}\sin(56x-1.8)\,dx}$$ gives the answer of $$A+B+C=0.3846\cdots$$ which does the job with an error of around $1.3\times10^{-4}$.




For a visualisation see here.






Strategy:



We split the function into three parts: one between $0$ and around $0.58$ (call this $A$), one between around $0.58$ and around $0.87$ (call this $B$), and finally one between around $0.87$ and around $0.93$ (call this $C$).



We see that for $A$, the function starts at $1$, rises a little and drops to $0$ at $x=0.58$.




For $B$, the function starts at $0$ at $x=0.58$, reaches a minimum of about $-0.36$ and rises back to $0$ at $0.87$.



For $C$, the function starts at $0$ at $x=0.87$, rises to around $0.02$ and falls back down to $0$ at $0.93$.



These little rises and falls can be modelled simply by either quadratic or trigonometric functions.



Experimenting with functions in Desmos show that the trig ones are better, and further trial and error gives the coefficients shown in the Solution.


real analysis - Regarding a proof involving geometric series



Someone asked this question about how many ways there are to prove $0.999\dots = 1$ and I posted this:




$$ 0.99999 \dots = \sum_{k = 1}^\infty \frac{9}{10^k} = 9 \sum_{k = 1}^\infty \frac{1}{10^k} = 9 \Big ( \frac{1}{1 - \frac{1}{10}} - 1\Big ) = \frac{9}{9} = 1$$



The question was a duplicate so in the end it was closed but before that someone wrote in a comment to the question: "Guys, please stop posting pseudo-proofs on an exact duplicate!" and I got down votes, so I assume this proof is wrong.



Now I would like to know, of course, why this proof is wrong. I have thought about it but somehow I can't seem to find the mistake.



Many thanks for your help. The original can be found here.


Answer



The problem is that you are assuming 1) that multiplication by constants distributes over infinite sums, and 2) the validity of the geometric series formula. Most of the content of the result is in 2), so it doesn't make much sense to me to assume it in order to prove the result. Instead you should prove 2), and if you really want to be precise you should also prove 1).



definite integrals - Real-Analysis Methods to Evaluate $\int_0^\infty \frac{x^a}{1+x^2}\,dx$, $|a|<1$




In THIS ANSWER, I used straightforward contour integration to evaluate the integral $$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{x^a}{1+x^2}\,dx=\frac{\pi}{2}\sec\left(\frac{\pi a}{2}\right)}$$for $|a|<1$.




An alternative approach is to enforce the substitution $x\to e^x$ to obtain




$$\begin{align}
\int_0^\infty \frac{x^a}{1+x^2}\,dx&=\int_{-\infty}^\infty \frac{e^{(a+1)x}}{1+e^{2x}}\,dx\\\\
&=\int_{-\infty}^0\frac{e^{(a+1)x}}{1+e^{2x}}\,dx+\int_{0}^\infty\frac{e^{(a-1)x}}{1+e^{-2x}}\,dx\\\\
&=\sum_{n=0}^\infty (-1)^n\left(\int_{-\infty}^0 e^{(2n+1+a)x}\,dx+\int_{0}^\infty e^{-(2n+1-a)x}\,dx\right)\\\\
&=\sum_{n=0}^\infty (-1)^n \left(\frac{1}{2n+1+a}+\frac{1}{2n+1-a}\right)\\\\
&=2\sum_{n=0}^\infty (-1)^n\left(\frac{2n+1}{(2n+1)^2-a^2}\right) \tag 1\\\\
&=\frac{\pi}{2}\sec\left(\frac{\pi a}{2}\right)\tag 2
\end{align}$$




Other possible ways forward include writing the integral of interest as



$$\begin{align}
\int_0^\infty \frac{x^a}{1+x^2}\,dx&=\int_{0}^1 \frac{x^{a}+x^{-a}}{1+x^2}\,dx
\end{align}$$



and proceeding similarly, using $\frac{1}{1+x^2}=\sum_{n=0}^\infty (-1)^nx^{2n}$.




Without appealing to complex analysis, what are other approaches one can use to evaluate this very standard integral?





EDIT:




Note that we can show that $(1)$ is the partial fraction representation of $(2)$ using Fourier series analysis. I've included this development for completeness in the appendix of the solution I posted on THIS PAGE.



Answer



I'll assume $\lvert a\rvert < 1$. Letting $x = \tan \theta$, we have




$$\int_0^\infty \frac{x^a}{1 + x^2}\, dx = \int_0^{\pi/2}\tan^a\theta\, d\theta = \int_0^{\pi/2} \sin^a\theta \cos^{-a}\theta\, d\theta$$



The last integral is half the beta integral $B((a + 1)/2, (1 - a)/2)$. Thus



$$\int_0^{\pi/2}\sin^a\theta\, \cos^{-a}\theta\, d\theta = \frac{1}{2}\frac{\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right)}{\Gamma\left(\frac{a+1}{2} + \frac{1-a}{2}\right)} = \frac{1}{2}\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right)$$



By Euler reflection,



$$\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right) = \pi \csc\left[\pi\left(\frac{1+a}{2}\right)\right] = \pi \sec\left(\frac{\pi a}{2}\right)$$




and the result follows.



Edit: For a proof of Euler reflection without contour integration, start with the integral function $f(x) = \int_0^\infty u^{x-1}(1 + u)^{-1}\, du$, and show that $f$ solves the differential equation $y''y - (y')^2 = y^4$, $y(1/2) = \pi$, $y'(1/2) = 0$. The solution is $\pi \csc \pi x$. On the other hand, $f(x)$ is the beta integral $B(x,1-x)$, which is equal to $\Gamma(x)\Gamma(1-x)$. I believe this method is due to Dedekind.
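A numerical spot-check of the boxed formula at a sample exponent (a sketch with mpmath):

```python
# Spot-check (sketch): integral of x^a / (1 + x^2) vs (pi/2) sec(pi a / 2).
from mpmath import mp, quad, pi, sec, inf, mpf

mp.dps = 25
a = mpf('0.3')
lhs = quad(lambda x: x**a / (1 + x**2), [0, inf])
rhs = pi / 2 * sec(pi * a / 2)
print(lhs, rhs)  # agree
```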


Wednesday, 25 December 2013

limits - Proof of $\lim_{n\to\infty}\frac{e^n n!}{n^n\sqrt{n}}=\sqrt{2\pi}$



I'm looking for a proof of the following limit:

$$\lim_{n\to\infty}\frac{e^nn!}{n^n\sqrt{n}}=\sqrt{2\pi}$$
This follows from Stirling's Formula, but how can it be proven?


Answer



Existence of the Limit



The ratio of two consecutive terms is
$$
\left.\frac{e^nn!}{n^{n+1/2}}\middle/\frac{e^{n-1}(n-1)!}{(n-1)^{n-1/2}}\right.
=e\left(1-\frac1n\right)^{n-1/2}\tag{1}
$$

Using the Taylor approximation
$$
\log\left(1-\frac1n\right)=-\frac1n-\frac1{2n^2}-\frac1{3n^3}+O\left(\frac1{n^4}\right)\tag{2}
$$
we can compute the logarithm of the right hand side of $(1)$:
$$
1+\left(n-\frac12\right)\log\left(1-\frac1n\right)
=-\frac1{12n^2}+O\left(\frac1{n^3}\right)\tag{3}
$$
Therefore,

$$
\begin{align}
\lim_{n\to\infty}\frac{e^nn!}{n^{n+1/2}}
&=e\prod_{n=2}^\infty\left(\frac{e^nn!}{n^{n+1/2}}\middle/\frac{e^{n-1}(n-1)!}{(n-1)^{n-1/2}}\right)\\
&=e\prod_{n=2}^\infty\left(e\left(1-\frac1n\right)^{n-1/2}\right)\\
&=\exp\left(1+\sum_{n=2}^\infty\left(1+\left(n-\frac12\right)\log\left(1-\frac1n\right)\right)\right)\\
&=\exp\left(1+\sum_{n=2}^\infty\left(-\frac1{12n^2}+O\left(\frac1{n^3}\right)\right)\right)\tag{4}
\end{align}
$$
Since the sum on the right hand side of $(4)$ converges, so does the limit on the left hand side.







Log-Convexity of $\boldsymbol{\Gamma(x)}$



Since $\Gamma(x)$ is log-convex,
$$
\begin{align}
\sqrt{n}\,\Gamma(n+1/2)
&\le\sqrt{n}\,\Gamma(n)^{1/2}\,\Gamma(n+1)^{1/2}\\

&=\Gamma(n+1)\tag{5}
\end{align}
$$
and
$$
\begin{align}
\Gamma(n+1)
&\le\Gamma(n+1/2)^{1/2}\,\Gamma(n+3/2)^{1/2}\\
&=\sqrt{n+1/2}\,\Gamma(n+1/2)\tag{6}
\end{align}

$$
Therefore,
$$
\sqrt{\frac{n}{n+1/2}}\le\sqrt{n}\frac{\Gamma(n+1/2)}{\Gamma(n+1)}\le1\tag{7}
$$
Thus, by the Squeeze Theorem,
$$
\lim_{n\to\infty}\sqrt{n}\frac{\Gamma(n+1/2)}{\Gamma(n+1)}=1\tag{8}
$$







Using the Recursion for $\boldsymbol{\Gamma(x)}$



Using the identity $\Gamma(x+1)=x\Gamma(x)$,
$$
\begin{align}
\sqrt{n}\color{#C00000}{\frac{(2n)!}{2^nn!}}\color{#00A000}{\frac1{2^nn!}}
&=\sqrt{n}\frac{\color{#C00000}{1}}{\color{#00A000}{2}}\cdot\frac{\color{#C00000}{3}}{\color{#00A000}{4}}\cdot\frac{\color{#C00000}{5}}{\color{#00A000}{6}}\cdots\frac{\color{#C00000}{2n-1}}{\color{#00A000}{2n}}\\
&=\sqrt{n}\frac{\color{#C00000}{1/2}}{\color{#00A000}{1}}\cdot\frac{\color{#C00000}{3/2}}{\color{#00A000}{2}}\cdot\frac{\color{#C00000}{5/2}}{\color{#00A000}{3}}\cdots\frac{\color{#C00000}{n-1/2}}{\color{#00A000}{n}}\\

&=\sqrt{n}\color{#C00000}{\frac{\Gamma(n+1/2)}{\Gamma(1/2)}}\color{#00A000}{\frac{\Gamma(1)}{\Gamma(n+1)}}\tag{9}
\end{align}
$$






Value of the Limit



Combining $(8)$ and $(9)$ yields
$$

\begin{align}
\lim_{n\to\infty}\left(\frac{e^nn!}{n^{n+1/2}}\right)^{-1}
&=\lim_{n\to\infty}\frac{e^{2n}(2n)!}{(2n)^{2n+1/2}}\left(\frac{n^{n+1/2}}{e^nn!}\right)^2\\
&=\lim_{n\to\infty}\frac{\sqrt{n}}{\sqrt2\,4^n}\frac{(2n)!}{(n!)^2}\\
&=\lim_{n\to\infty}\frac1{\sqrt2}\frac{\Gamma(1)}{\Gamma(1/2)}\sqrt{n}\frac{\Gamma(n+1/2)}{\Gamma(n+1)}\\
&=\frac1{\sqrt{2\pi}}\tag{10}
\end{align}
$$
Therefore,
$$

\lim_{n\to\infty}\frac{e^nn!}{n^{n+1/2}}=\sqrt{2\pi}\tag{11}
$$
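A numerical illustration of the limit (a sketch; lgamma is used to avoid overflow):

```python
# Illustration (sketch): e^n n! / n^(n + 1/2) approaches sqrt(2 pi) ~ 2.5066.
import math

for n in (10, 100, 1000):
    print(n, math.exp(n + math.lgamma(n + 1) - (n + 0.5) * math.log(n)))
print(math.sqrt(2 * math.pi))
```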


calculus - Trying to teach supremum and infimum.

I'm helping out my former calculus teacher as a volunteer calculus advisor, and I have under my supervision 5 students. They've already had an exam and... well, they failed.

I read their exams and I noticed that they did understand the concepts, but when it comes to writing mathematically they lose it; even if they know exactly what to do informally, when they try to write it down it seems like they are trying to speak Russian without knowing the language, yet insisting on speaking. Just to give you an example of what I mean, one of them said that, given a $w\in\Bbb R$, because something happened then $w$ was countable, even though many times in class I couldn't have been more emphatic that being countable is a property of sets, and they kept repeating it each time all the same.



I'm not trying to mock them, not at all; I'm really concerned about them, and since the class has started to get very mathematical, I'm afraid that they might get depressed or something.

What's next, class-wise, is to get used to the concepts of supremum and infimum. They already have the definitions, and they can repeat them perfectly; however, I don't think they know what the definitions mean. I want to show them some flashier examples, but I'm afraid that they might not understand, so what do you recommend?



Marginal note: I'm an advisor, meaning that I'm not the teacher; I'm like a helper, but in the words of my teacher, "more reachable and with fewer responsibilities", since I'm also a student. Also, when we see each other, they supposedly have already had the professor's great class.

probability theory - Find the almost sure limit of $X_n/n$, where each random variable $X_n$ has a Poisson distribution with parameter $n$




The $X_{n}$ are independent and $X_n \sim \mathcal{P}(n)$, meaning that $X_{n}$ has a Poisson distribution with parameter $n$. What is $\lim\limits_{n\to \infty} \frac{X_{n}}{n}$ almost surely?





I think we can write $X_n \sim Y_1+Y_2+\cdots+Y_n$, where the $Y_i$ are independent and identically $\mathcal P(1)$-distributed, and then use the law of large numbers. But I am not sure whether that is correct. Can anyone give me some hints? Thank you in advance!


Answer



If $X_n\sim\mathsf{Pois}(n)$ for $n=1,2,\ldots$, then $n^{-1}X_n\stackrel{\mathrm{a.s.}}\longrightarrow 1$. It suffices to show that for all $\varepsilon>0$,
$$\mathbb P\left(\liminf_{n\to\infty}\left|n^{-1}X_n-1\right|<\varepsilon\right)=1. $$



We have $$\left\{\left|n^{-1}X_n-1\right|<\varepsilon\right\}^c = \left\{X_n\leqslant n(1-\varepsilon) \right\}\cup\left\{X_n\geqslant n(1+\varepsilon) \right\}, $$
so the Chernoff bounds yield $$\mathbb P(X_n\leqslant n(1-\varepsilon)) \leqslant \frac{e^{-n}(ne)^{n(1-\varepsilon)}}{(n(1-\varepsilon))^{n(1-\varepsilon)}}= \left(e^{\varepsilon}(1-\varepsilon)^{1-\varepsilon} \right)^{-n} $$
and
$$\mathbb P(X_n\geqslant n(1+\varepsilon)) \leqslant \frac{e^{-n}(ne)^{n(1+\varepsilon)}}{(n(1+\varepsilon))^{n(1+\varepsilon)}}=

\left(e^{-\varepsilon}(1+\varepsilon)^{1+\varepsilon} \right)^{-n} . $$



Now



$$e^\varepsilon(1-\varepsilon)^{1-\varepsilon} = e^{\varepsilon + (1-\varepsilon)\log(1-\varepsilon)}=e^{\frac{\varepsilon^2}2+O(\varepsilon^3)}>1 $$
and
$$e^{-\varepsilon}(1+\varepsilon)^{1+\varepsilon} = e^{-\varepsilon+(1+\varepsilon)\log(1+\varepsilon)}=e^{\frac{\varepsilon^2}2+O(\varepsilon^3)}>1, $$
so $$\sum_{n=1}^\infty \mathbb P(X_n\leqslant n(1-\varepsilon))\leqslant\sum_{n=1}^\infty \left(e^{\varepsilon}(1-\varepsilon)^{1-\varepsilon} \right)^{-n}<\infty $$ and
$$\sum_{n=1}^\infty \mathbb P(X_n\geqslant n(1+\varepsilon))\leqslant\sum_{n=1}^\infty \left(e^{-\varepsilon}(1+\varepsilon)^{1+\varepsilon} \right)^{-n}<\infty.$$
It follows from the first Borel-Cantelli lemma that $$\mathbb P\left(\limsup_{n\to\infty} \{X_n\leqslant n(1-\varepsilon)\}\right)=\mathbb P\left(\limsup_{n\to\infty} \{X_n\geqslant n(1+\varepsilon)\}\right)=0,$$

and so, since $\varepsilon>0$ was arbitrary, we conclude that $n^{-1}X_n\to 1$ almost surely.
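
As a quick numerical sanity check (my addition, not part of the proof, and assuming NumPy is available): the sketch below samples one Poisson variate per $n$ and watches $X_n/n$ settle near $1$, consistent with the geometric decay of the tail bounds above.

```python
import numpy as np

# A minimal sketch: sample X_n ~ Poisson(n) independently for a range of n
# and inspect how far X_n / n strays from 1. The Chernoff bounds above decay
# geometrically in n, so large deviations should quickly become rare.
rng = np.random.default_rng(0)
ns = np.arange(1, 100001)
ratios = rng.poisson(lam=ns) / ns

for n in (10, 100, 1000, 10000, 100000):
    print(n, ratios[n - 1])

print("max |X_n/n - 1| over n >= 1000:", np.abs(ratios[999:] - 1).max())
```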


calculus - Compute $\lim\limits_{x \to +\infty}\dfrac{\ln x}{\int_0^x \frac{|\sin t|}{t}\,{\rm d}t}$.





Compute

$$\lim\limits_{x \to +\infty}\dfrac{\ln x}{\displaystyle \int_0^x \dfrac{|\sin t|}{t}{\rm d}t}.$$





Maybe we can solve it by L'Hospital's rule, but there is still a difficulty here: though $x \to +\infty$ implies $\ln x \to +\infty$, we do not know the behaviour of the denominator. How can we solve it?


Answer



Thanks to @Alex B.'s hint, I have completed the solution. Please correct me if I'm wrong.



For any $x>0$, we can choose some $n \in \mathbb{N}$ such that $n \pi\leq x<(n+1)\pi$. Thus, we obtain
$$\int_0^{n\pi}\frac{|\sin t|}{t}{\rm d}t \leq \int_0^x \frac{|\sin t|}{t}{\rm d}t<\int_0^{(n+1)\pi}\frac{|\sin t|}{t}{\rm d}t.$$




On one hand, notice that
\begin{align*}
\int_0^{n \pi} \frac{|\sin t|}{t}{\rm d}t&=\int_0^\pi \frac{|\sin t|}{t}{\rm d}t+\sum_{k=1}^{n-1}\int_{k\pi}^{(k+1) \pi} \frac{|\sin t|}{t}{\rm d}t\\
&> \sum_{k=1}^{n-1}\int_{k\pi}^{(k+1) \pi} \frac{|\sin t|}{t}{\rm d}t\\
& > \sum_{k=1}^{n-1}\int_{k\pi}^{(k+1) \pi} \frac{|\sin t|}{(k+1)\pi}{\rm d}t\\
&=\sum_{k=1}^{n-1}\frac{\int_{k\pi}^{(k+1)\pi}|\sin t|{\rm d}t}{(k+1)\pi}\\
&=\frac{2}{\pi}\sum_{k=2}^{n}\frac{1}{k}.
\end{align*}




On the other hand, likewise,
\begin{align*}
\int_0^{(n+1) \pi} \frac{|\sin t|}{t}{\rm d}t&=\int_0^\pi \frac{|\sin t|}{t}{\rm d}t+\sum_{k=1}^{n}\int_{k\pi}^{(k+1) \pi} \frac{|\sin t|}{t}{\rm d}t\\
&< \int_0^\pi {\rm d}t+\sum_{k=1}^{n}\int_{k\pi}^{(k+1) \pi} \frac{|\sin t|}{k\pi}{\rm d}t\\
&=\pi+\sum_{k=1}^{n}\frac{\int_{k\pi}^{(k+1)\pi}|\sin t|{\rm d}t}{k\pi}\\
&=\pi+\frac{2}{\pi}\sum_{k=1}^{n}\frac{1}{k}.
\end{align*}



Therefore
$$\frac{2}{\pi}\sum_{k=2}^{n}\frac{1}{k} <\int_0^x \frac{|\sin t|}{t}{\rm d}t<\pi+\frac{2}{\pi}\sum_{k=1}^{n}\frac{1}{k}.$$




Since
$$\ln (n\pi)\leq \ln x<\ln((n+1)\pi),$$
we have
$$\dfrac{\ln n\pi}{\pi+\dfrac{2}{\pi}\sum\limits_{k=1}^{n}\dfrac{1}{k}}<\dfrac{\ln x}{\int_0^x \dfrac{|\sin t|}{t}{\rm d}t}<\dfrac{\ln(n+1)\pi}{\dfrac{2}{\pi}\sum\limits_{k=2}^{n}\dfrac{1}{k}}.$$



Applying the expansion
$$\sum_{k=1}^n \frac{1}{k}=\ln n+\gamma+\varepsilon_n$$
(in fact, we only need to recall that $\sum\limits_{k=1}^n \dfrac{1}{k}$ and $\ln n$ are asymptotically equivalent), we can readily infer that both sides of the last expression tend to $\dfrac{\pi}{2}$ as $n \to \infty$ (i.e. $x \to +\infty$). Hence, by the squeeze theorem, we conclude that
$$\frac{\ln x}{\int_0^x \frac{|\sin t|}{t}{\rm d}t} \to \frac{\pi}{2}\quad(x \to +\infty),$$which is what we wanted to evaluate.
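
As a hedged numerical check (my addition, not part of the argument, assuming NumPy): approximate the integral by the trapezoidal rule and compare the ratio with $\pi/2\approx 1.5708$; the convergence is logarithmically slow, so the agreement improves only gradually.

```python
import numpy as np

# Sketch: approximate F(x) = int_0^x |sin t|/t dt on a fine grid and watch
# ln(x) / F(x) drift toward pi/2. The integrand extends continuously by 1
# at t = 0.
def ratio(x, n=2_000_001):
    t = np.linspace(0.0, x, n)
    f = np.ones_like(t)
    f[1:] = np.abs(np.sin(t[1:])) / t[1:]
    dt = t[1] - t[0]
    integral = dt * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoidal rule
    return np.log(x) / integral

for x in (1e2, 1e3, 1e4, 1e5):
    print(x, ratio(x))
print("pi/2 =", np.pi / 2)
```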



sequences and series - Find number of terms in arithmetic progression




In an arithmetic progression, the sum of the first four terms is
$$a_1+a_2+a_3+a_4=124$$



and the sum of the last four terms is
$$a_n+a_{n-1}+a_{n-2}+a_{n-3}=156,$$
while the sum of the whole progression is
$$S_n=350.$$



$$n=?$$




How can I find $n$? I tried using the arithmetic progression sum formulas but kept getting negative or fractional numbers.


Answer



One has $a_1+a_2+a_3+a_4+a_{n-3}+a_{n-2}+a_{n-1}+a_n=280$. In an arithmetic progression, terms equidistant from the two ends have the same sum, so the mean of those eight terms, namely $35$, is also the mean of $a_1,\ldots,a_n$. The sum of those $n$ terms is $n$ times their mean, and is $350$. So now you can read off $n$.
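
As a small sanity check (my addition, going one step beyond the hint): with $n$ read off from $35n=350$, the common difference and first term are pinned down by the two given sums, and all three conditions can be verified directly.

```python
from fractions import Fraction

# Sketch: with n = 10 (from 35 * n = 350), solve
#   4*a1 + 6*d          = 124   (first four terms)
#   4*a1 + (4*n - 10)*d = 156   (last four terms)
n = 10
d = Fraction(156 - 124, (4 * n - 10) - 6)  # d = 32/24 = 4/3
a1 = (124 - 6 * d) / 4                     # a1 = 29

terms = [a1 + k * d for k in range(n)]
print(sum(terms[:4]), sum(terms[-4:]), sum(terms))  # 124 156 350
```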


summation - Sum of Squares in terms of Sum of Integers



We know that the sum of squares can be expressed as a multiple of the sum of integers as follows: $$\begin{align}
\sum_{r=1}^n r^2
&=\frac 16 n(n+1)(2n+1)\\
&=\frac {2n+1}3\cdot \frac {n(n+1)}2\\
&=\frac {2n+1}3\sum_{r=1}^nr\end{align}$$



Is there a simple direct proof to express the sum of squares as $\dfrac {2n+1}3$ multiplied by the sum of integers, without first deriving the formula for the sum of squares and then breaking it down as shown above?



Answer



New Solution



I have just found a proof which does not require prior knowledge of the closed form of the sum of integers.



$$\begin{align}
\sum_{i=1}^n i^2&=\sum_{i=1}^n\sum_{j=1}^i(2j-1)&& ...(1)\\
\sum_{i=1}^n i^2&=\sum_{i=1}^n\sum_{j=1}^i 2(n-i)+1&& ...(2)\\
\sum_{i=1}^n i^2&=\sum_{i=1}^n\sum_{j=1}^i 2(i-j)+1&& ...(3)\\
(1)+(2)+(3):\\

3\sum_{i=1}^n i^2
&=\sum_{i=1}^n\sum_{j=1}^i \bigl[(2j-1)+2(n-i)+1+2(i-j)+1\bigr]\\
&=\sum_{i=1}^n\sum_{j=1}^i (2n+1)\\
&=(2n+1)\sum_{i=1}^n i\\
\sum_{i=1}^n i^2&=\frac{2n+1}3\sum_{i=1}^n i\qquad\blacksquare
\end{align}$$



This proof is a transcription of the diagrammatic proof of the same as shown on the wikipedia page here.



See also the nice diagrams in this solution here.







Earlier post shown below



This is too long for a comment so it's being posted in the solution section.



A synthetic and rather cumbersome approach might be as follows:



$$\begin{align}

(2m+1)\sum_{r=1}^mr-(2m-1)\sum_{r=1}^{m-1}r
&=(2m+1)\frac {m(m+1)}2-(2m-1)\frac{m(m-1)}2\\
&=\frac m2\left[(2m+1)(m+1)-(2m-1)(m-1)\right]\\
&=3m^2\end{align}$$

Summing $m$ from $1$ to $n$ and telescoping LHS gives



$$(2n+1)\sum_{r=1}^nr=3\sum_{m=1}^nm^2\color{lightgray}{=3\sum_{r=1}^n r^2}\\
\sum_{r=1}^nr^2=\frac {2n+1}3\sum_{r=1}^nr\qquad\blacksquare$$
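
Both the three double-sum identities and the final formula are easy to spot-check numerically; here is a hedged sketch (my addition, standard library only):

```python
from fractions import Fraction

# Sketch: verify identities (1)-(3) and sum i^2 = (2n+1)/3 * sum i for small n.
for n in range(1, 30):
    s2 = sum(i * i for i in range(1, n + 1))
    s1 = sum(range(1, n + 1))
    pairs = [(i, j) for i in range(1, n + 1) for j in range(1, i + 1)]
    assert s2 == sum(2 * j - 1 for i, j in pairs)        # (1)
    assert s2 == sum(2 * (n - i) + 1 for i, j in pairs)  # (2)
    assert s2 == sum(2 * (i - j) + 1 for i, j in pairs)  # (3)
    assert s2 == Fraction(2 * n + 1, 3) * s1
print("all identities verified for n = 1..29")
```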


elementary number theory - if $a,b$ are both integers and coprime, prove that the $gcd(a^2 b^3,a+b) =1$




I'm trying to solve this problem. I should be able to do it using simple divisibility properties but I don't know how.




Let a and b be integers such that they are coprime. Prove that $\gcd(a^2b^3,a+b)=1$




For instance... I thought that the gcd divides both $a^2b^3$ and $a+b$, so it must divide any integer combination of them. I've tried going this way but it's not clear to me where it should lead. Any hint will be welcome. Thanks.


Answer



Suppose that $p$ is a prime number such that $p\mid a^2b^3$; then $p\mid a$ or $p\mid b$. Say $p\mid a$. If also $p\mid(a+b)$, then we would have $p\mid b$, which is impossible because $a,b$ are coprime. Hence no prime divides both $a^2b^3$ and $a+b$, so $\gcd(a^2b^3,a+b)=1$.
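
A quick empirical check of the statement (my addition, using only `math.gcd`):

```python
from math import gcd
from random import randint

# Sketch: sample random coprime pairs (a, b) and confirm gcd(a^2*b^3, a+b) == 1.
checked = 0
while checked < 10_000:
    a, b = randint(1, 10**6), randint(1, 10**6)
    if gcd(a, b) != 1:
        continue  # the claim only covers coprime pairs
    assert gcd(a**2 * b**3, a + b) == 1
    checked += 1
print("verified for", checked, "coprime pairs")
```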



Tuesday, 24 December 2013

calculus - Square integrable and related limit



Let $x:[0,\infty)\rightarrow\mathbb{R}$ be a continuous function. Does $x\in\mathcal{L}_2[0,\infty)$ (square integrable, i.e. $\lim_{t\rightarrow\infty}\int_0^t{x^2(s)ds}=c<\infty$) imply $\lim_{t\rightarrow\infty}\int_0^t{e^{-\lambda(t-s)}x(s)ds}=0$ for every $\lambda>0$?



I can prove this if $x$ is bounded, but does it also hold for unbounded $x$?



Note that $\int_0^t{e^{-\lambda(t-s)}x(s)ds}$ is bounded and also square integrable whenever $x$ is square integrable, and no boundedness assumption on $x$ is needed for this.



Answer



Hint:



For any $\delta>0$, there exists $b>0$ such that $\int_{b}^\infty x^2(s)\,ds<\delta^2$. Denote $M=\max_{[0, b]} |x(s)|$.



As you noted (on $[0,b]$ the integrand is bounded by $Me^{-\lambda(t-b)}$), we have
$$\int_0^b e^{-\lambda(t-s)} x(s) ds\to 0 \text{ when $t\to \infty$.}$$



On the other hand, for $t>b$, the Cauchy–Schwarz inequality gives




$$\left|\int_b^t e^{-\lambda(t-s)} x(s)\, ds\right|\le \sqrt{\int_b^t e^{-2\lambda(t-s)}\,ds}\;\sqrt{\int_b^t x^2(s)\,ds} \le \frac{\delta}{\sqrt{2\lambda}}.$$



Since $\delta>0$ was arbitrary, it follows that the desired limit must be $0$.
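
A numerical illustration of the hint (my sketch, assuming NumPy), using the square-integrable $x(s)=1/(1+s)$:

```python
import numpy as np

# Sketch: for the square-integrable x(s) = 1/(1+s), approximate
#   I(t) = int_0^t exp(-lam*(t-s)) * x(s) ds
# by the trapezoidal rule and watch it decay toward 0 as t grows.
lam = 0.5
s = np.linspace(0.0, 2000.0, 2_000_001)
ds = s[1] - s[0]
x = 1.0 / (1.0 + s)

for t in (10.0, 100.0, 1000.0, 2000.0):
    m = s <= t
    g = np.exp(-lam * (t - s[m])) * x[m]
    print(t, ds * (g.sum() - 0.5 * (g[0] + g[-1])))
```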


Monday, 23 December 2013

diophantine equations - Can $3p^4-3p^2+1$ be a square number?

I know $4p^4+1$ can't be a square number, because $4p^4<4p^4+1<4p^4+4p^2+1$ for every natural number $p$, i.e. it lies strictly between the consecutive squares $(2p^2)^2$ and $(2p^2+1)^2$.
But I don't know how to prove that $3p^4-3p^2+1$ can(not) be a square number. Is there a well-known way to prove it?

number theory - $1^k+2^k+\cdots+n^k \bmod n$ where $n=p^a$.

My friend said that for any $n=p^a$, where $p$ is an odd prime and $a$ is a positive integer, the following holds: if $k$ is divisible by $p-1$ then $1^k+2^k+\cdots+n^k\equiv -p^{a-1}\pmod{p^a}$. I am quite sure that his result is wrong. My thought is simple: use a primitive root of $p^a$. But I have failed to construct a counterexample for him. So my question is: can we construct an example of $k$ such that $p-1\mid k$ and $1^k+2^k+\cdots+n^k\not\equiv -p^{a-1}\pmod{p^a}$?



The second question is: does the following hold: $1^k+2^k+\cdots+n^k\equiv -p^{a-1}\pmod{p^a}$ if and only if $p-1\mid k$?
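
No answer is recorded here, but the claim is easy to probe computationally; the hedged sketch below (my addition) simply prints the comparison for small odd primes.

```python
# Sketch: compare 1^k + 2^k + ... + (p^a)^k with -p^(a-1) modulo p^a
# for small p, a and for k a multiple of p - 1.
def power_sum_mod(n, k, mod):
    return sum(pow(i, k, mod) for i in range(1, n + 1)) % mod

for p in (3, 5, 7):
    for a in (1, 2, 3):
        n = p ** a
        for k in (p - 1, 2 * (p - 1), 3 * (p - 1)):
            lhs = power_sum_mod(n, k, n)
            rhs = (-p ** (a - 1)) % n
            print(f"p={p} a={a} k={k}: sum = {lhs}, -p^(a-1) mod p^a = {rhs}")
```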

complex analysis - How to prove $\int_{-\infty}^\infty dx\, \frac{e^{ixz}}{x-i\epsilon} = 2\pi i\, \Theta(z)$




where $\Theta(z)$ is the Heaviside step function.



Let's call $f(x) = \frac{e^{ixz}}{x-i\epsilon}$. My first step is to close the contour so that the pole at $x=i\epsilon$ is contained:



$\int_C f(x) dx = \int_{-R}^R f(x) dx + \int_{C_R} f(x) dx$,



where $C_R$ is the semi-circle of radius $R$ in the upper half of the complex plane, parameterized by $z = Re^{i\theta}$, $0<\theta<\pi$.



The term on the LHS gives $2\pi i$ by the residue theorem, as $\epsilon \rightarrow 0$. The first term on the RHS gives the desired integral as $R \rightarrow \infty$. Thus I need to show that




$\int_{C_R} f(x) dx = 0$ as $R \rightarrow \infty$ for $z>0$. How?


Answer



Note that for $0<\epsilon<R$ and $z>0$,

$$\begin{align}
\left|\int_0^\pi \frac{e^{izRe^{i\phi}}}{Re^{i\phi}-i\epsilon}\,iRe^{i\phi}\,d\phi\right|&\le\int_0^\pi \left|\frac{e^{izRe^{i\phi}}}{Re^{i\phi}-i\epsilon}\,iRe^{i\phi}\right|\,d\phi\tag1\\\\
&\le \frac{R}{R-\epsilon}\int_0^\pi e^{-zR\sin(\phi)}\,d\phi\tag2\\\\
&=\frac{2R}{R-\epsilon}\int_0^{\pi/2}e^{-zR\sin(\phi)}\,d\phi\tag3\\\\
&\le\frac{2R}{R-\epsilon}\int_0^{\pi/2}e^{-2zR\phi/\pi}\,d\phi\tag4\\\\
&=\frac{2R}{R-\epsilon}\left(\frac{1-e^{-zR}}{2zR/\pi}\right)
\end{align}$$



which approaches $0$ as $R\to \infty$ as was to be shown!






NOTES:



In arriving at $(1)$, we used the triangle inequality $\left|\int_a^b f(x)\,dx\right|\le \int_a^b |f(x)|\,dx$ for complex valued functions $f$ and $a\le b$.




In going from $(1)$ to $(2)$, we used the reverse triangle inequality $|z_1-z_2|\ge ||z_1|-|z_2||$, so that for $R>\epsilon>0$ we have $|Re^{i\phi}-i\epsilon|\ge ||Re^{i\phi}|-|i\epsilon||=R-\epsilon$.



In arriving at $(3)$, we made use of the even symmetry of the sine function around $\pi/2$.



In going from $(3)$ to $(4)$, we used Jordan's inequality $\sin(\phi)\ge 2\phi/\pi$ for $\phi\in [0,\pi/2]$.
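
A hedged numerical cross-check (my addition, assuming NumPy): truncate the integral to a large symmetric interval with a small $\epsilon$ and compare with $2\pi i\,\Theta(z)$. For finite $\epsilon$ the exact full-line value for $z>0$ is $2\pi i e^{-\epsilon z}$, which tends to $2\pi i$ as $\epsilon\to 0^+$.

```python
import numpy as np

# Sketch: approximate int_{-R}^{R} e^{i x z} / (x - i*eps) dx by the
# trapezoidal rule; expect ~2*pi*i for z > 0 and ~0 for z < 0.
eps, R, n = 0.01, 1000.0, 2_000_001
x = np.linspace(-R, R, n)
dx = x[1] - x[0]

for z in (2.0, -2.0):
    f = np.exp(1j * x * z) / (x - 1j * eps)
    val = complex(dx * (f.sum() - 0.5 * (f[0] + f[-1])))
    print(f"z = {z:+.0f}: integral ~ {val:.4f};  2*pi*i = {2j * np.pi:.4f}")
```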


soft question - Analogy between linear basis and prime factoring



I recall learning that we can define linear systems such that any vector in the system can be represented as a weighted sum of basis vectors, as long as we have 'suitable' definitions for addition and multiplication operators (i.e. fulfilling certain properties.) This turned out to be extremely useful, as we could prove general things about linear operations over vector spaces and apply them to a surprisingly wide array of systems with linear properties. One of the interesting things about this was that you could define a series of unique real number coordinates for a given series of independent basis vectors in the system.



It struck me the other day that there is an interesting, albeit slightly different pattern in the natural numbers. Any natural number can be written as the product of natural powers of primes. In a sense, it seems like the primes form a kind of 'basis' for the natural numbers, with the series of powers being a kind of 'coordinate'.





  1. Is there is a name for this pattern?


  2. If so, are there are other kinds of sets that can be decomposed as products of powers in this way, with similar generic results we can deduce for how these 'products of independent factors' behave?




I apologize for the lack of clarity here, but my unfamiliarity with the terminology makes it difficult to describe.


Answer



Here's some terminology that captures the 'multiplicative basis' part of your analogy: the positive integers under multiplication form a free commutative monoid on the infinite generating set of prime numbers.
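
To make the 'coordinates' concrete, here is a small sketch (my addition) mapping a positive integer to its prime-exponent vector; under this map, multiplication of numbers becomes coordinatewise addition of exponents.

```python
from collections import Counter

# Sketch: the exponent vector of n over the prime 'basis' is its 'coordinate'.
def exponent_vector(n: int) -> Counter:
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

a, b = 360, 98
# Multiplying numbers adds their exponent vectors, like adding coordinates.
assert exponent_vector(a * b) == exponent_vector(a) + exponent_vector(b)
print(dict(exponent_vector(a)), dict(exponent_vector(b)))
```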


Sunday, 22 December 2013

linear algebra - Prove that if the set of vectors is linearly independent, then the arbitrary subset will be linearly independent as well.



This one is quite straightforward, but I just want to make sure that my reasoning is clear.



I have following proposition:




Proposition. If $S = \{\mathbf{v_{1}},\mathbf{v_{2}},\ldots,\mathbf{v_{n}}\}$ is linearly independent then any subset $T = \{\mathbf{v_{1}},\mathbf{v_{2}},\ldots,\mathbf{v_{m}}\}$, where $m < n$, is also linearly independent.





My attempt:



We prove the proposition by contrapositive.



Suppose $T$ is linearly dependent. We have



$$\tag1 k_{1}\mathbf{v_{1}} + k_{2}\mathbf{v_{2}}+\cdots+ k_{j}\mathbf{v_{j}} +\cdots+ k_{m}\mathbf{v_{m}} = \bf O $$



where the scalars are not all zero; say $k_{j} = a$ with $a \neq 0$.




Since $T$ is a subset of $S$, we can extend $(1)$ to a linear combination of all the vectors in $S$:



$$\bigl(k_{1}\mathbf{v_{1}} + k_{2}\mathbf{v_{2}}+\cdots+ k_{j}\mathbf{v_{j}} +\cdots+ k_{m}\mathbf{v_{m}}\bigr) + k_{m+1}\mathbf{v_{m+1}}+\cdots +k_n\mathbf{v_{n}} = \bf O$$



Keep the scalars $k_1,\ldots,k_m$ from $(1)$ and set all the new scalars $k_{m+1},\ldots,k_n$ to zero:



$$\underbrace{\bigl(k_1\mathbf{v_{1}} + k_2\mathbf{v_{2}}+\cdots+ a \cdot \mathbf{v_{j}} +\cdots+ k_m\mathbf{v_{m}}\bigr)}_{\mathbf{= O} \text{ by } (1)} + \underbrace{0\cdot\mathbf{v_{m+1}}+\cdots +0\cdot\mathbf{v_{n}}}_{\mathbf{= O} \text{ because all scalars = 0}}= \bf O$$



We can see that this linear combination of the vectors of $S$ equals zero while at least one scalar ($k_j = a \neq 0$) is non-zero, which implies that $S$ is not linearly independent, which is a contradiction. Therefore, if $S$ is linearly independent, an arbitrary subset $T$ must be linearly independent as well. $\Box$







Is it correct?


Answer



I don't see anything wrong with your proof. Just be careful with the claim that you get a contradiction. The contrapositive of a statement is logically equivalent to the statement itself, so you don't get any contradiction whatsoever when proving a contrapositive.



If you were to use a proof by contradiction, you would start off by assuming that $S$ is linearly independent but $T$ is not, and show that it leads to some impossibility.
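
A concrete numerical companion to the proposition (my sketch, assuming NumPy): a set of row vectors is linearly independent exactly when its rank equals the number of rows, and every subset of an independent set passes the same test.

```python
import numpy as np

# Sketch: check that every leading subset of an independent set of vectors
# is itself independent, via matrix rank.
rng = np.random.default_rng(1)
V = rng.normal(size=(5, 8))              # 5 random vectors in R^8
assert np.linalg.matrix_rank(V) == 5     # independent (almost surely)

for m in range(1, 6):
    T = V[:m]                            # subset {v_1, ..., v_m}
    assert np.linalg.matrix_rank(T) == m
print("every subset of the independent set is independent")
```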


real analysis - Direct proof that if $p$ is prime, then $\sqrt{p}$ is irrational.




Does anyone know of a simple direct proof that if $p$ is prime, then $\sqrt{p}$ is irrational?




I have always seen this proved by contradiction and have been trying unsuccessfully to prove it constructively. I searched this site and could not find the question answered without using contradiction.


Answer



Here is an elementary proof that I gave in a more general context; I can't find it on the site, so I'll adapt it to this case.



Set $n=\lfloor \sqrt p\rfloor$. Suppose $\sqrt p$ is rational and let $m$ be the smallest positive integer such that $m\sqrt p$ is an integer. Consider $m'=m(\sqrt p-n)$; it is an integer, and
$$ m'\sqrt p=m(\sqrt p-n)\sqrt p=mp-nm\sqrt p $$
is an integer too.



However, since $0\le \sqrt p-n <1$, we have $0\le m'<m$. Since $m$ was the smallest positive integer such that $m\sqrt p$ is an integer, this forces $m'=0$, which means $\sqrt p=n$, hence $p=n^2$, which contradicts $p$ being prime.



probability - What is the expected number of swaps performed by Bubblesort?

The well-known Bubblesort algorithm sorts a list $a_1, a_2, \ldots, a_n$ of numbers by repeatedly swapping adjacent numbers that are inverted (i.e., in the wrong relative order) until there are no remaining inversions. (Note that the number of swaps required does not depend on the order in which the swaps are made.) Suppose that the input to Bubblesort is a random permutation of the numbers $a_1, a_2, \ldots, a_n$, so that all $n!$ orderings are equally likely, and that all the numbers are distinct. What is the expected number of swaps performed by Bubblesort?
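
No answer is recorded here; as a hedged empirical probe (my addition), note that the swap count equals the number of inversions, and each of the $\binom n2$ pairs is inverted with probability $1/2$ by symmetry, suggesting a mean of $n(n-1)/4$. A Monte Carlo estimate agrees:

```python
from random import shuffle

# Sketch: count inversions (= Bubblesort swaps) of random permutations and
# compare the empirical mean with n*(n-1)/4.
def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

n, trials = 20, 2000
perm, total = list(range(n)), 0
for _ in range(trials):
    shuffle(perm)
    total += inversions(perm)
print("empirical mean:", total / trials, " n(n-1)/4 =", n * (n - 1) / 4)
```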

Proving that a function is additive in a functional equation

I have the equation
$$h(x,k(y,z))=f(y,g(y,x)+t(y,z)), \tag{*}\label{equation}$$
where




  • $h,k,f,g,t$ are continuous real valued functions.


  • $h,f$ are strictly monotone in their second argument.

  • all functions are "sensitive" to each of their arguments - in some natural sense (if $x,y,z$ are real numbers, just suppose that all functions are strictly monotone in each of their arguments).

  • $x,y,z$ are from some "well behaved" topological space (connected, ...) - you can assume that they are from some real connected interval.



I want to show that $g$ is additive, in the sense that: $g(y,x)=g_{y}(y)+g_{x}(x)$ (and that $h$ and $f$ are strictly monotone transformations of an additive function).



The reason I think this is true is that in the right-hand side of \eqref{equation}, $x$ is only "tied to" $y$ not $z$, while on the left hand side it is "tied to" a function of both $y$ and $z$.



More specifically, by \eqref{equation} we have:




$$k(y,z)=h^{-1}(x,f(y,g(y,x)+t(y,z))) \tag{**}\label{equation_a}$$



(where $h^{-1}$ is the inverse of $h$ in the second argument, that is: $h(a,h^{-1}(a,v))=v$).



So, the right-hand side of \eqref{equation_a} must be independent of $x$. Is there any option but that $g$ is additive? I do not know how to go about attacking this. What theorems/tools are available?



Thanks,

Proving Continuity & Adding Discontinuous Functions

I've been wondering: how exactly do you prove that a function is continuous everywhere (or within the domain in which the function is defined)? Given some curve, my current approach would be to try to think of discontinuities and then find a single counter-example.



But I can imagine that this method has its own limitations for strange functions.



Is there a set 'method' or 'technique' to prove that a function is continuous?






My second question is involving the sum of two discontinuous functions.




If I have a curve with point discontinuity (where at that point, it takes an arbitrary value not equal to the left and right limit), I can easily find another discontinuous curve that when added together, forms a continuous function.



However, let's say the discontinuity is there because there is an actual 'hole' in the number line, as for example in $y=\frac{x^2-1}{x-1}$: is it possible to find another discontinuous function such that, upon addition, the sum is continuous?



My gut tells me that the answer is 'No', because if the point didn't exist in the first place, then it cannot suddenly appear out of nowhere. But I would like some confirmation of this.

elementary number theory - Let $a,m,n \in \mathbf{N}$. Show that if $\gcd(m,n)=1$, then $\gcd(a,mn)=\gcd(a,m)\cdot\gcd(a,n)$.




Let $a,m,n\in\mathbf{N}$. Show that if $\gcd(m,n)=1$, then $\gcd(a,mn)=\gcd(a,m)\cdot\gcd(a,n)$.



Proof:




Let $u,v\in\mathbf{Z}$ such that $\gcd(m,n)=um+vn=1$. Let $b,c\in\mathbf{Z}$ such that $\gcd(m,a)=ab+cm$. Let $d,e\in\mathbf{Z}$ such that $\gcd(a,n)=ad+en$.



So $\gcd(a,m)\cdot\gcd(a,n)=a^2bd+cmen+aben+cmad$.



Where do I go from here?


Answer



$\gcd(a,m)\cdot \gcd(a,n) = a(abd+ben+cmd)+(mn)(ce) \ge \gcd(a,mn)$



Say $\gcd(\gcd(a,m),\gcd(a,n)) = p$ where $p>1$.
Then $p\mid\gcd(a,m)$ and $p\mid\gcd(a,n)$, which means $p\mid m$ and $p\mid n$. So $p$ is a common divisor of $m$ and $n$, and $\gcd(m,n) \ge p$. But this is impossible since $\gcd(m,n)=1$ and $p>1$. Thus $p=1$.




If $\gcd(a,m) = x$, this means $x\mid a$ and $x\mid m$. If $x\mid m$, then $x\mid mn$; thus $x$ is a common divisor of $a$ and $mn$, so $x\mid\gcd(a,mn)$.
If $\gcd(a,n) = y$, this means $y\mid a$ and $y\mid n$. If $y\mid n$, then $y\mid mn$; thus $y$ is a common divisor of $a$ and $mn$, so $y\mid\gcd(a,mn)$.
Because $\gcd(x,y) = 1$, we get $xy\mid\gcd(a,mn)$, and so $\gcd(a,m) \cdot \gcd(a,n) \le \gcd(a,mn)$.



Therefore, $$\gcd(a,m) \cdot \gcd(a,n) = \gcd(a,mn)$$
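
A quick empirical check of the result (my addition, using only `math.gcd`):

```python
from math import gcd
from random import randint

# Sketch: whenever gcd(m, n) == 1, confirm gcd(a, m*n) == gcd(a, m) * gcd(a, n).
checked = 0
while checked < 10_000:
    a, m, n = (randint(1, 10**5) for _ in range(3))
    if gcd(m, n) != 1:
        continue  # the hypothesis gcd(m, n) = 1 is required
    assert gcd(a, m * n) == gcd(a, m) * gcd(a, n)
    checked += 1
print("verified for", checked, "triples")
```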


Saturday, 21 December 2013

modular arithmetic - Remainder is less than divisor



I'm reading a book and it says the equation
$$ a \bmod n = a - n \left\lfloor\frac{a}{n}\right\rfloor$$
follows that $$ 0 \leq a \bmod n \lt n. $$



I understand that the remainder is less than the divisor, but I can't understand how the author got it from the first equation. Could someone please explain it to me?



Answer



As $\lfloor x\rfloor \le x<\lfloor x\rfloor +1$, we have
$$ 0\le \frac an-\left\lfloor \frac an\right\rfloor <1,$$
and multiplying through by $n>0$ gives the claim.
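
Incidentally, Python's `%` operator uses exactly this floored-division definition, so the identity and the bound can be checked directly (my addition):

```python
import math

# Sketch: a mod n = a - n*floor(a/n) lies in [0, n) for n > 0 and agrees
# with Python's % operator. (Float division is exact for these small values.)
for a in (-7, -1, 0, 1, 7, 123):
    for n in (1, 3, 10):
        r = a - n * math.floor(a / n)
        assert r == a % n and 0 <= r < n
print("identity and bound verified")
```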


elementary number theory - Proof that $\sqrt[3]{17}$ is irrational



Consider $\sqrt[3]{17}$. As in the famous proof that $\sqrt2$ is irrational, I wish to prove that this number is irrational. Suppose it is rational; then we can write:



$$ 17 = \frac{p^3}{q^3}.$$
and then

$$ 17q^3 = p^3$$



In the proof for $\sqrt2$ we used the fact that we got an even number at this step, and that $p$ and $q$ were in lowest terms. However, $17$ is a prime number; somehow we should be able to use this fact, together with the fact that every number has a unique prime factorisation, to arrive at a contradiction, but I don't quite see it yet.


Answer



The argument that works with $2$ also works with $17$. Since $17q^3=p^3$, $17\mid p^3$ and therefore $17\mid p$. Can you take it from here?


calculus - Double integral conversions to Polar



Problem 1




Convert to polar form and solve



$$\int^{2}_{0}\int_{0}^{\sqrt{2x-x^2}}((x-1)^2+y^2)^{5/2}\text{ dy dx}$$



$$x^2+y^2=2x,\qquad r^2=2r\cos\theta,\qquad r=2\cos\theta$$



$$\int^{\pi/2}_{0}\int^{2\cos\theta}_{0}(r^2-2r\cos\theta+1)^{5/2}\,r\,dr\,d\theta$$



Problem 2




Convert to polar form and solve



$$\int^{1}_{-1}\int_{0}^{\sqrt{3+2y-y^2}}\cos(x^2+(y-1)^2)\text{ dx dy}$$



$$x^2=3+2y-y^2 \implies x^2+y^2 = 3+2y \implies r^2 =3+2r\sin\theta$$



Is the first problem's polar integral set up correctly? For number 2, I am stuck on finding the bounds.


Answer




In the case of problem 1, the natural substitution is $x=1+r\cos\theta$ and $y=r\sin\theta$, thereby getting$$\int_0^\pi\int_0^1r\times r^5\,\mathrm dr\,\mathrm d\theta.$$
The same idea applies to problem 2: do $x=r\cos\theta$ and $y=1+r\sin\theta$.
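
As a hedged check of problem 1 (my addition, assuming SciPy is available): the polar form evaluates to $\int_0^\pi\int_0^1 r^6\,dr\,d\theta=\pi/7$, and direct quadrature of the Cartesian integral agrees.

```python
import numpy as np
from scipy.integrate import dblquad

# Sketch: problem 1 should equal pi/7 ~ 0.4488. dblquad integrates
# func(y, x) with y-limits given as functions of x.
val, err = dblquad(
    lambda y, x: ((x - 1) ** 2 + y ** 2) ** 2.5,
    0, 2,                                         # x from 0 to 2
    lambda x: 0.0,                                # y from 0 ...
    lambda x: np.sqrt(max(2 * x - x ** 2, 0.0)),  # ... to sqrt(2x - x^2)
)
print(val, np.pi / 7)
```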


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...