Saturday 31 August 2013

real analysis - Integral $I=\int_0^\infty \frac{\ln(1+x)\ln(1+x^{-2})}{x}\, dx$



Hi I am stuck on showing that
$$
\int_0^\infty \frac{\ln(1+x)\ln(1+x^{-2})}{x} dx=\pi G-\frac{3\zeta(3)}{8}
$$
where $G$ is the Catalan constant and $\zeta(3)$ is the Riemann zeta function. Explicitly, they are given by
$$
G=\beta(2)=\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^2}, \qquad \zeta(3)=\sum_{n=1}^\infty \frac{1}{n^3}.
$$
I have tried using
$$
\ln(1+x)=\sum_{n=1}^\infty \frac{(-1)^{n+1} x^n}{n},
$$
but didn't get very far.
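For what it's worth, the claimed value can be sanity-checked numerically. Here is a short sketch of my own using Python's mpmath (a check, not a proof):

```python
from mpmath import mp, quad, log, pi, catalan, zeta, inf

mp.dps = 20
f = lambda x: log(1 + x) * log(1 + 1 / x**2) / x
print(quad(f, [0, 1, inf]))            # splitting at 1 helps the quadrature
print(pi * catalan - 3 * zeta(3) / 8)  # both ≈ 2.4268191
```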


Answer



The infinite sum in Chen Wang's answer, that is, $ \displaystyle \sum_{n=1}^{\infty} \frac{H_{4n}}{n^{2}}$, can be evaluated using contour integration by considering the function $$f(z) = \frac{\pi \cot(\pi z) [\gamma + \psi(-4z)]}{z^{2}}, $$



where $\psi(z)$ is the digamma function and $\gamma$ is the Euler-Mascheroni constant.




The function $f(z)$ has poles of order $2$ at the positive integers, simple poles at the negative integers, simple poles at the positive quarter-integers, and a pole of order $4$ at the origin.



The function $\psi(-4z)$ does have simple poles at the positive half-integers, but they are cancelled by the zeros of $\cot( \pi z)$.



Now consider a square on the complex plane (call it $C_{N}$) with vertices at $\pm (N + \frac{1}{2}) \pm i (N +\frac{1}{2})$.



On the sides of the square, $\cot (\pi z)$ is uniformly bounded.



And when $z$ is large in magnitude and not on the positive real axis, $\psi(-4z) \sim \ln(-4z)$.




So $ \displaystyle \int_{C_{N}} f(z) \ dz $ vanishes as $N \to \infty$ through the positive integers.



Therefore,



$$\sum_{n=1}^{\infty} \text{Res} [f(z), n] + \sum_{n=1}^{\infty} \text{Res}[f(z),-n] + \text{Res}[f(z),0] + \sum_{n=0}^{\infty} \text{Res}\Big[f(z), \frac{2n+1}{4} \Big] =0 .$$



To determine the residues, we need the following Laurent expansions.



At the positive integers,




$$ \gamma + \psi (-4z) = \frac{1}{4} \frac{1}{z-n} + H_{4n} + \mathcal{O}(z-n) $$



and



$$ \pi \cot (\pi z) = \frac{1}{z-n} + \mathcal{O}(z-n) .$$



At the origin,



$$ \gamma+ \psi(-4z) = \frac{1}{4z} -4 \zeta(2) z -16 \zeta(3) z^{2} + \mathcal{O}(z^{3})$$




and
$$ \pi \cot (\pi z) = \frac{1}{z} - 2 \zeta(2) z + \mathcal{O}(z^{3}) .$$



And at the positive quarter-integers,



$$ \gamma + \psi(-4z) = \frac{1}{4} \frac{1}{z-\frac{2n+1}{4}} + \mathcal{O}(1)$$



and




$$ \pi \cot (\pi z) = (-1)^{n} \pi + \mathcal{O}\Big(z- \frac{2n+1}{4} \Big) .$$



Then at the positive integers,



$$f(z) = \frac{1}{z^{2}} \Big( \frac{1}{4} \frac{1}{(z-n)^{2}} + \frac{H_{4n}}{z-n} + \mathcal{O}(1) \Big), $$



which implies



$$\begin{align} \text{Res} [f(z),n] &= \text{Res} \Big[ \frac{1}{4z^{2}} \frac{1}{(z-n)^{2}} , n \Big] + \text{Res} \Big[ \frac{1}{z^{2}} \frac{H_{4n}}{z-n}, n \Big] \\ &= - \frac{1}{2n^{3}} + \frac{H_{4n}}{n^{2}} .\end{align}$$




At the negative integers,



$$ \text{Res}[f(z),-n] = \frac{\gamma + \psi(4n)}{n^{2}} = \frac{H_{4n-1}}{n^{2}} = \frac{H_{4n}}{n^{2}} - \frac{1}{4n^{3}} . $$



At the origin,



$$ f(z) = \frac{1}{z^{2}} \Big( \frac{1}{4z^{2}} - \frac{\zeta(2)}{2} - 4 \zeta(2) - 16 \zeta(3) z + \mathcal{O}(z^{2}) \Big),$$



which implies




$$\text{Res}[f(z),0] = -16 \zeta(3) .$$



And at the positive quarter-integers,



$$ f(z) = \frac{\pi}{4z^{2}} \frac{(-1)^{n}}{z- \frac{2n+1}{4}} + \mathcal{O}(1),$$



which implies



$$ \begin{align} \text{Res} \Big[ f(z),\frac{2n+1}{4} \Big] &= \text{Res} \Big[\frac{\pi}{4z^{2}} \frac{(-1)^n}{z- \frac{2n+1}{4}}, \frac{2n+1}{4} \Big] \\ &= 4 \pi \ \frac{(-1)^{n}}{(2n+1)^{2}} . \end{align} $$




Putting everything together, we have



$$ - \frac{1}{2} \sum_{n=1}^{\infty} \frac{1}{n^{3}} + 2 \sum_{n=1}^{\infty} \frac{H_{4n}}{n^{2}} - \frac{1}{4} \sum_{n=1}^{\infty} \frac{1}{n^{3}} - 16 \zeta(3) + 4 \pi \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)^{2}} $$



$$ = - \frac{1}{2} \zeta(3) + 2 \sum_{n=1}^{\infty} \frac{H_{4n}}{n^{2}} - \frac{1}{4} \zeta(3) - 16 \zeta(3) + 4 \pi G = 0 .$$



Therefore,



$$ \sum_{n=1}^{\infty} \frac{H_{4n}}{n^{2}} = \frac{67}{8} \zeta(3) - 2 \pi G .$$
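As a numerical sanity check of this closed form (my own sketch; mpmath's `nsum` accelerates the slowly converging sum):

```python
from mpmath import mp, nsum, harmonic, zeta, pi, catalan, inf

mp.dps = 15
s = nsum(lambda n: harmonic(4 * n) / n**2, [1, inf])
print(s)                                          # ≈ 4.3120458
print(mp.mpf(67) / 8 * zeta(3) - 2 * pi * catalan)
```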




EDIT:



I found the Laurent expansion of $\psi(-4z)$ at the positive integers by using the functional equation of the digamma function to express $\psi(4z)$ as



$$ \psi(4z) = \psi(4z+4n+1) - \frac{1}{4z+4n} - \frac{1}{4z+4n-1} - \ldots - \frac{1}{4z} .$$



Then I evaluated the limit $$\lim_{z \to -n} (z+n) \psi(4z) = - \frac{1}{4}$$ and the limit $$\lim_{z \to -n} \Big(\psi(4z) + \frac{1}{4} \frac{1}{z+n} \Big) = - \gamma +H_{4n} .$$



Replacing $z$ with $-z$, this leads to the expansion $$\gamma + \psi (-4z) = \frac{1}{4} \frac{1}{z-n} + H_{4n} + \mathcal{O}(z-n) .$$




I did something similar to find the expansion at the positive quarter-integers.


Theory of removing alternate digits



It has been many years since I noticed something bizarre in our number system, and it has caught my eye ever since. In the beginning, I dismissed it as something irrelevant, perhaps obvious, a mere illusion.



Start with a number; any four digit number will explain it best and easiest. For example, $6173$ (any random number). First remove the alternate digits from the right side, i.e. the number becomes $67$ (removing $3$ and $1$). Write $6173$ in front of it again. It becomes $617367$. Repeat the steps.
1. $6173 \to 67$
2. $617367 \to 676$
3. $6173676 \to 137$
4. $6173137 \to 133$
5. $6173133 \to 133$




In four steps, we got the number $133$, which will then repeat forever. I like to call this the 'purest form' of a number. This works for a number with any count of digits, any number whatsoever.



Further, I observed that single digit numbers cannot be made any more 'pure'. They reach their purest form in one step. 2 digit numbers reach their purest form in 2 steps, 3 digit numbers in 4 steps, 4 digit numbers also in 4 steps. 5 digit ones in 5, 6 in 6, 7 in 6...



Writing them up in a series:
$$1, 2, 4, 4, 5, 6, 6...$$



Alternate odd-digit numbers don't purify in as many steps.




Now, 1-digit numbers have a 1-digit pure number, 2-digit numbers have 1, 3 have 2, 4 have 3, 5 have 4, 6 have 5, 7 have 6 and so on.



This pattern becomes:
$$1, 1, 2, 3, 4, 5, 6...$$



I studied removing alternate digits from the left side too; it is almost the same. Also, in the second-to-last step of reaching the pure form, the numbers get very close in every instance; in the above example, $137$ and $133$.



My question is:



(a) Is this actually a good observation or just an obvious fact?

(b) In any case, why are we sticking to a number which doesn't change after a few steps?


Answer



For the number of digits, suppose your original number has $n$ digits and the pure number has $p$ digits. When you prepend the original number you get $n+p$ digits. If $p=n-1$ or $p=n$ you will then strike out $n$ digits, leaving you again with $p$. On the first strike you bring the number of digits below $n$ and can never get back. That justifies (with $n=1$ a special case) your number of digits observation.



Once you get to $p$ digits there are only $10^p$ numbers to choose from, so you will eventually cycle. We can easily find the only 1-cycle. We will show an example for a starting seven digit number. The starting number might as well be $0123456$. We know the pure number will have six digits. Let them be $abcdef$. When you prepend the original number you get $0123456abcdef$. After you strike out the digits you have $135ace$, which we can equate to $abcdef$. This gives $a=1, b=3, c=5, d=a=1, e=c=5, f=e=5$, for a final answer of $135155$ for seven digit numbers. You can do the same for any length you like.
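To make the process concrete, here is a short Python sketch of my own (the function names are invented) implementing the strike-then-prepend iteration; it reproduces both the $6173 \to 133$ example and the seven-digit fixed point $135155$:

```python
def strike(s: str) -> str:
    """Remove alternate digits counting from the right: the rightmost digit
    goes, the next stays, and so on; kept digits keep their original order."""
    kept = [d for i, d in enumerate(reversed(s)) if i % 2 == 1]
    return "".join(reversed(kept))

def pure_form(start: str, max_steps: int = 100) -> str:
    """Iterate: strike, then prepend the starting number and strike again,
    until the result stops changing."""
    current = strike(start)
    for _ in range(max_steps):
        nxt = strike(start + current)
        if nxt == current:
            return current
        current = nxt
    raise RuntimeError("no fixed point reached")

print(pure_form("6173"))     # 133, as in the example above
print(pure_form("0123456"))  # 135155, the seven-digit case from the answer
```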



Each number goes through two phases in getting to the pure number. The first is to grow the prepended version in length up to the required amount. That takes about $\log_2 n$ iterations. In the seven digit example it goes from $7$ to $10$ because you strike four the first time, then to $12$, then to $13$. After that the length stays constant and the numbers shift in from the left, again taking about $\log_2 n$ iterations. The first $n$ shift in the first time, then the next $n/2$ are shifted in, then the next $n/4$. The total number of iterations is about $2 \log_2 n$. I haven't thought about how the rounding affects this, which is where "about" comes from.



The pure number will start with the digits from the odd positions of the starting number, followed by the digits that are $1$ or $3 \pmod 4$ depending on whether $n$ is even or odd, and so on.


elementary number theory - Prove that $N - \lfloor N/p \rfloor = \lfloor \frac{p-1}{p}\left(N + 1\right) \rfloor$ for positive $N$ and prime $p$



I am counting the number of positive integers less than or equal to some positive integer $N$ and not divisible by some prime $p$. This gets generalized for $k$ primes where I use the principle of inclusion-exclusion for this result. The simple result for a single prime is $N - \lfloor{\frac{N}{p}}\rfloor$. However, I have noticed by experiment that this is also equal to $\lfloor{\frac{p-1}{p} \left({N + 1}\right)}\rfloor$. I am looking for a proof of this relation if true.



Answer



Consider that $N$ may be expressed as $N = ap + b$ for an integer $a$ and integer $0 \le b \le p - 1$. Using this, the left side of your equation becomes



$$N - \lfloor N/p \rfloor = ap + b - a \tag{1}\label{eq1} $$



Consider the numerator of the right side of your equation, i.e.,



$$\left(p - 1\right)\left(N + 1\right) = \left(p - 1\right)\left(ap + b + 1\right) = ap^2 + \left(b + 1 - a\right)p - \left(b + 1\right) \tag{2}\label{eq2}$$



Since $0 \leq b \leq p - 1$, then $1 \leq b + 1 \leq p$. As such, for any integer $M$, including $M = 0$, we have that




$$\lfloor \frac{Mp - \left(b + 1\right)}{p} \rfloor = M - 1 \tag{3}\label{eq3}$$



Using $\eqref{eq2}$ and $\eqref{eq3}$ in the right side of your equation gives



$$\lfloor \frac{\left(p - 1\right)\left(N + 1\right)}{p} \rfloor = ap + \left(b + 1 - a\right) + \lfloor \frac{-\left(b + 1\right)}{p} \rfloor = ap + b - a \tag{4}\label{eq4} $$



As such, the LHS (from \eqref{eq1}) is always equal to the RHS (from \eqref{eq4} above), so your equation always holds. Note, this is true not only for primes $p$, but for all positive integers $p$, as I didn't use any particular properties of primes anywhere in the proof above.
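A quick brute-force confirmation of this (my own sketch), over all positive integers $p$ as the proof allows, not just primes:

```python
# Check N - floor(N/p) == floor((p-1)(N+1)/p) over a small grid.
for p in range(1, 50):
    for N in range(1, 200):
        assert N - N // p == ((p - 1) * (N + 1)) // p
print("identity holds for all tested N, p")
```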


Friday 30 August 2013

polynomials - Confusing Sequence-Notation

I recently encountered the following definition:





  • there exists a sequence of polynomials $(p_n)_{n\in\mathbb{N}}$ of fixed
    degree
    $\sigma$ such that for every $x\in U$,
    $\left|f(n+x) - p_n(n+x) \right| \longrightarrow 0
    \quad \text{as} \;n\to+\infty$





What would one of these polynomials look like?






I have seen polynomial sequences of the form $p_n(n^2)$, which would expand as $1, 4, 9, \ldots$, as well as polynomials of the form $p_n(x) = \sum_{k=0}^n a_k x^k$.
In these cases $n$ was always the degree of the polynomial, though here we have polynomials of a fixed degree, as well as polynomials in terms of $n+x$ as opposed to just $x$.



I would assume that the polynomials in question would be in one of the following forms:



$$\begin{align}

&1)\qquad p_n(n+x) = \sum_{k=0}^{\sigma}a_{n+x}(n+x)^k\\
&2)\qquad p_n(n+x) = \sum_{k=0}^{\sigma}a_{n+x}(n)^k\\
&3)\qquad p_n(n+x) = \sum_{k=0}^{\sigma}a_{n}(n+x)^k
\end{align}
$$



However, I am not quite sure which (if any) of these forms I should be looking at, which is hurting my understanding of the topic in general.



The paper from which this comes claims that the natural logarithm can be approximated in this fashion by a sequence of degree $0$ using the above definition, and I am trying to find such a sequence. If anyone could provide an explicit example, that would be great as well!
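For what it's worth, here is one explicit candidate (my own example, assuming $U$ is bounded): take the constant, degree-$0$ polynomials $p_n(x) = \ln n$. Then for every $x \in U$,

$$\left|\ln(n+x) - p_n(n+x)\right| = \left|\ln\left(1 + \frac{x}{n}\right)\right| \longrightarrow 0 \quad \text{as} \; n\to+\infty,$$

which fits the quoted definition with $\sigma = 0$.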




As noted in the comments, the paper in question (as well as additional constraints on $f$, which maps $\mathbb{C} \to \mathbb{C}$) can be found in the linked question, though I do not include such constraints here, as I find them unnecessary to my specific question.

Math induction ($n^2 \leq n!$) help please

I'm having trouble with a math induction problem. I've been doing other proofs (summations of the integers etc) but I just can't seem to get my head around this.





Q. Prove using induction that $n^2 \leq n!$ for all integers $n \geq 4$ (the inequality fails for $n = 2, 3$).




So, assume that $P(k)$ is true: $k^2 \leq k!$



Prove that $P(k+1)$ is true: $(k+1)^2 \leq (k+1)!$



I know that $(k+1)! = (k+1)k!$ so: $(k+1)^2 \leq (k+1)k!$ but where can I go from here?




Any help would be much appreciated.

Sequences and series (Arithmetic and Geometric progression)

Can anyone help me solve this question?




The first three of four integers are in an A.P. and the last three are in a G.P. Find these four numbers, given that the sum of the first and last integers is 37 and the sum of the two middle integers is 36.



Thanks!
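A small brute-force sketch of my own (not part of the question) that searches for the quadruple directly:

```python
# Integers a, b, c, d with a, b, c in A.P., b, c, d in G.P.,
# a + d = 37 and b + c = 36.
for b in range(1, 36):
    c = 36 - b
    a = 2 * b - c          # A.P. condition: 2b = a + c
    d = 37 - a
    if c * c == b * d:     # G.P. condition: c^2 = b*d
        print(a, b, c, d)  # prints: 12 16 20 25
```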

Thursday 29 August 2013

elementary number theory - How to solve this congruence $17x \equiv 1 \pmod{23}$?



Given $17x \equiv 1 \pmod{23}$



How to solve this linear congruence?

All hints are welcome.



edit:
I know the Euclidean algorithm and how to solve the equation $17m+23n=1$,
but I don't know how to compute $x$ using $m$ or $n$.


Answer



To do modular division I do this:



$an - bm = c$, where $c$ is the dividend, $b$ is the modulus, and $a$ is the divisor; then $n$ is the quotient.




$17n - 23m = 1$



Then, using the Euclidean algorithm, reduce to $\gcd(a,b)$ and record each calculation.



As described by http://mathworld.wolfram.com/DiophantineEquation.html



$$\begin{array}{cc|cc}
17 & 23 & 14 & 19 \\
17 & 6 & 14 & 5 \\
11 & 6 & 9 & 5 \\
5 & 6 & 4 & 5 \\
5 & 1 & 4 & 1 \\
1 & 1 & 0 & 1
\end{array}$$

The left pair of columns is the Euclidean algorithm; the right pair is the reverse procedure.




Therefore $17\cdot19 - 23\cdot14 = 1$, i.e. $n=19$ and $m=14$.



The result is that $17^{-1} \equiv 19 \pmod{23}$.



This method might not be as quick as those in the other posts, but it is what I have implemented in code. The others could be implemented too; I just thought I would share my method.
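For comparison, here is a compact Python version of the same idea (my own sketch) using the extended Euclidean algorithm; Python 3.8+ also exposes this directly as `pow(17, -1, 23)`:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("inverse does not exist")
    return x % m

print(mod_inverse(17, 23))   # 19
print(pow(17, -1, 23))       # 19 (built-in since Python 3.8)
```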


calculus - Show that $S_n(x)=\sum\limits^{n}_{k=1}\frac{\sin(kx)}{k}$ is uniformly bounded.




Show that

$$S_n(x)=\sum^{n}_{k=1}\frac{\sin(kx)}{k},$$
is uniformly bounded in $\mathbb{R}.$







I have
$$|S_n(x)|=\Bigg|\sum^{n}_{k=1}\frac{\sin(kx)}{k}\Bigg|\leq\sum^{n}_{k=1}\frac{|\sin(kx)|}{k},$$
but that doesn't help, as it doesn't even imply the sequence is bounded. I also tried rewriting a follows
$$S_n=\int\limits^{x}_{0}\sum^{n}_{k=1}\cos(kt)\,dt,$$

but once again I don't see a way to use this to show anything about the boundedness of $S_n.$






Could you please help me with a hint, only a hint, on how to get started on this problem?



Thank you for your time and appreciate any help or feedback provided.


Answer



By periodicity it is enough find a uniform bound for $x \in [0,\pi]$.




We have



$$\tag{*}\left|\sum_{k=1}^n \frac{\sin kx}{k}\right| = \left|\sum_{k=1}^m \frac{\sin kx}{k}+ \sum_{k=m+1}^n \frac{\sin kx}{k}\right| \leqslant \sum_{k=1}^m \frac{|\sin kx|}{k}+ \left|\sum_{k=m+1}^n \frac{\sin kx}{k}\right|$$



where $m = \lfloor \frac{1}{x}\rfloor$ and $m \leqslant \frac{1}{x} < m+1$.



Since $|\sin kx| \leqslant k|x| = kx$, we have for the first sum on the RHS of (*),



$$ \sum_{k=1}^m \frac{|\sin kx|}{k} \leqslant mx \leqslant 1$$




Hint for remainder of proof



The second sum can be handled using summation by parts and a well-known bound for $\sum_{k=1}^n \sin kx$.
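Numerical evidence for the uniform bound (my own sketch; the grid and the choices of $n$ are arbitrary):

```python
import math

def S(n, x):
    return sum(math.sin(k * x) / k for k in range(1, n + 1))

# Scan a grid of x in (0, pi) for several n; the largest |S_n(x)|
# found stays below about 1.86.
worst = max(abs(S(n, j * math.pi / 200))
            for n in (10, 100, 1000)
            for j in range(1, 200))
print(worst)
```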


sum of the series $\frac{2x-1}{1-x+x^2}+\frac{4x^3-2x}{1-x^2+x^4}+\frac{8x^7-4x^3}{1-x^4+x^8}+\cdots$




If $|x|<1,$ Then the sum of the series $$ \frac{2x-1}{1-x+x^2}+\frac{4x^3-2x}{1-x^2+x^4}+\frac{8x^7-4x^3}{1-x^4+x^8}+\cdots \cdots $$




Try: Let $$A= \frac{2x-1}{1-x+x^2}+\frac{4x^3-2x}{1-x^2+x^4}+\frac{8x^7-4x^3}{1-x^4+x^8}+\cdots $$




$\displaystyle \int Adx $



$$= \int \bigg[\frac{2x-1}{1-x+x^2}+\frac{4x^3-2x}{1-x^2+x^4}+\frac{8x^7-4x^3}{1-x^4+x^8}+\cdots \cdots \bigg]dx$$



$$\int Adx = \ln\bigg[(1-x+x^2)\cdot (1-x^2+x^4)\cdot (1-x^4+x^8)\cdots \bigg]$$



Now it seems that the expression under the $\ln$ on the right side must have a closed form for $-1 < x < 1$.




But I could not see how to find it.

Could someone help me? Thanks.


Answer



Hint:



$$(1+x+x^2)(1-x+x^2)=(1+x^2)^2-x^2=?$$



$$\implies\prod_{r=0}^n\left(1-x^{2^r}+\left(x^{2^r}\right)^2\right)=\dfrac{1+x^{2^{n+1}}+\left(x^{2^{n+1}}\right)^2}{1+x+x^2}$$




Now $2^{n+1}\to\infty$ as $n\to\infty$, and since $|x|<1$, $\lim_{m\to\infty}x^m=0$; hence the product converges to $\dfrac{1}{1+x+x^2}$.
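Consequently the series should sum to $-\frac{1+2x}{1+x+x^2}$ (my inference from the hint, differentiating $-\ln(1+x+x^2)$). A quick numerical check of my own:

```python
# Each term of the series is d/dx log(1 - x^(2^r) + x^(2^(r+1))).
def term(r, x):
    y = x ** (2 ** r)
    num = 2 ** (r + 1) * x ** (2 ** (r + 1) - 1) - 2 ** r * x ** (2 ** r - 1)
    return num / (1 - y + y * y)

x = 0.3
partial = sum(term(r, x) for r in range(20))
closed = -(1 + 2 * x) / (1 + x + x * x)
print(partial, closed)  # both ≈ -1.151079
```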


elementary number theory - Taking modulo of both sides

On the following page at the bottom there is an algorithm for calculating modular inverses. In the proof I am confused with the line 'Taking both sides modulo $m$'. How does that work getting a congruence from the above equation?

real analysis - Let $a_n$ be a sequence s.t. $a_1 > 0$ and $a_{n+1} = a_n + \dfrac{1}{a_n}$. Prove that $a_n$ is increasing and tends to infinity




Let $a_n$ be a sequence such that $a_1 > 0$ and $a_{n+1} = a_n + \frac{1}{a_n}$. Prove that $a_n$ is increasing and tends to infinity.



Proof:




Consider $a_{n+1} - a_n$:



$a_{n+1} - a_n = a_n + \frac{1}{a_n} - a_n = \frac{1}{a_n}$ This is greater than $0$. Thus, $a_n$ is increasing.



Now this is where I need some help. I would like to say that $a_n$ is unbounded and then conclude that monotone and unbounded implies tending to infinity.



Maybe by contradiction?


Answer



You proved that $a_n$ is increasing. Assume that it is bounded. Then it would follow that $a_n$ converges to a real number $L \geq a_1 > 0$. But letting $n \to \infty$ in the recurrence relation gives
$$ L+\frac{1}{L}=L$$

which is a contradiction. Therefore $a_n$ is unbounded and it follows that $a_n \to \infty$.


Wednesday 28 August 2013

Modular arithmetic three variables



Show that if the integers $x, y,$ and $z$ satisfy $x^3 + 3y^3 = 9z^3$
then $x = y = z = 0.$
How should I interpret this question, and how should I proceed?
I am thinking about the Euclidean algorithm, but it becomes confusing when $x,y,z$ enter as variables.


Answer




First notice that if $d=\mbox{gcd}(x,y,z)$ then $d^3$ can be factored out of the equation. So we can assume that $d=1$. Then $x^3 = 9z^3-3y^3$, so $3$ divides $x$, say $ x=3k$. So we have $3^3k^3 = 9z^3-3y^3$ and we can divide everything by $3$ to get $9k^3= 3 z^3-y^3$. A similar argument shows $3$ divides $y$. Repeat to show $3$ divides $z$. This contradicts that $d=1$.
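The descent argument can be illustrated with a small exhaustive search (my own sketch):

```python
# Only the trivial solution appears in any finite box, as the
# infinite-descent argument above predicts.
sols = [(x, y, z)
        for x in range(-20, 21)
        for y in range(-20, 21)
        for z in range(-20, 21)
        if x**3 + 3 * y**3 == 9 * z**3]
print(sols)  # [(0, 0, 0)]
```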


complex analysis - Intuition behind Euler's formula











Hi, I've been curious for quite a long time whether it is actually possible to have an intuitive understanding of Euler's apparently magical formula: $$e^{ \pm i\theta } = \cos \theta \pm i\sin \theta$$



I've obviously seen the Taylor series/differential equation based proofs, and perhaps I'm just going to have to accept that it's not possible to have an intuition on what it means to raise a number to an imaginary power. I obviously realise that the formula implies that an exponential with a variable imaginary part can be visualised as a complex function going around in a unit circle about the origin of the complex plane. But WHY is this? And why is $e$ so special that it moves at just a fast enough rate so that the argument of the exponential is equal to the arc length of the path made by the locus (i.e. the angle in radians we've moved around the circle)? Is there any way anyone out there can 'understand' this?



Thank you!



Answer



If I recall from reading Analysis of the Infinite (very nice book, at least Volume $1$ is), Euler got it from looking at
$$\left(1+\frac{i}{\infty}\right)^{\infty}$$
whose expansion is easy to find using the Binomial Theorem with exponent $\infty$.
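One can watch this limit emerge numerically (my own sketch; here $\theta = 1$):

```python
import cmath

theta = 1.0
for n in (10, 1000, 100000):
    print(n, (1 + 1j * theta / n) ** n)  # spirals in on cos(1) + i sin(1)
print(cmath.exp(1j * theta))             # (0.5403... + 0.8414...j)
```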



There is a nice supposed quote from Euler, which can be paraphrased as "Sometimes my pencil is smarter than I am." He freely accepted the results of his calculations. But of course he was Euler.


Calculate limit with summation index in formula












I want to calculate the following:



$$ \lim_{n \rightarrow \infty} \left( e^{-n} \sum_{i = 0}^{n} \frac{n^i}{i!} \right) $$



Numerical calculations show it has a value close to 0.5, but I am not able to derive this analytically. My problem is that I am lacking a methodology for handling $n$ both as a summation limit and as a variable in the summand.


Answer



I don't want to put this down as my own solution, since I have already seen it solved on MSE.



One way is to use a sum of Poisson random variables with parameter 1, so that $S_n=\sum_{k=1}^{n}X_k$ with $S_n \sim \text{Poisson}(n)$, and then apply the Central Limit Theorem to obtain $\Phi(0)=\frac{1}{2}$.




The other solution is purely analytic and is detailed in the paper by Laszlo and Voros (1999) called 'On the Limit of a Sequence'.
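A quick numerical look of my own at how slowly the expression drifts toward $\tfrac12$:

```python
from math import exp

def f(n):
    # Accumulate the Poisson(n) terms e^{-n} n^i / i! iteratively
    # to avoid huge intermediate powers and factorials.
    term, total = exp(-n), 0.0
    for i in range(n + 1):
        total += term
        term *= n / (i + 1)
    return total

for n in (10, 100, 500):
    print(n, f(n))  # ≈ 0.583, 0.527, 0.512, approaching 0.5
```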


measure theory - Let $f$ be a measurable function, then $f^2$ is a measurable function, $f:X\rightarrow\bar{\mathbb{R}}$

Let $f$ be a measurable function; then $f^2$ is a measurable function, where $f:X\rightarrow\bar{\mathbb{R}}$ and
$\mathbb{A}$ is a sigma-algebra of sets.



My attempt



Note that $x\in(f^2)^{-1}(c,\infty)=\{x:f^2(x)>c\}=\{x:|f(x)|>\sqrt{c}\}=\{x:f(x)>\sqrt{c}\}\cup\{x:f(x)<-\sqrt{c}\}$ (for $c \geq 0$).

Here I'm stuck. Can someone help me?

calculus - Show that $e+\pi$ is not an integer.



It was suggested Taylor series for that.



$e^1 = \sum_{k=0}^\infty \frac{1}{k!}$




I don't know how to prove the convergence of this series, so I tried to set the upper limit to 5 (I'm doing all this with a very simple calculator, it's basically by hand). Then $e \approx 2.71$



Since $4\arctan(1)= \pi$



$4\arctan(1)=4 \sum_{k=0}^\infty \frac{(-1)^k}{2k+1}$



Again doing an approximation, setting the upper limit to 5, $\pi\approx2.96$ (which I think is pretty bad, but with my calculator it's the best I could do).



Then $e+\pi \approx 5.67$. But this only proves that the approximation I did is not an integer, not the exact value of $e+\pi$. Is there a way to prove that $e+\pi$ is not an integer without relying on approximations?



Answer



With your approximations and using interval arithmetic, you just have to show that $5 < e + \pi < 6$.

Tuesday 27 August 2013

functions - Precision in $f : A \to B$ notation

Here I have a nice function (also known as $f(x)=\sin (\pi x/2)+1$) that is continuous on the interval $[0,4]$:



(graph of $f$ on $[0,4]$ omitted)



According to the definitions I've found, $f:A\rightarrow B$ means that $\forall n\in A:f(n)\in B$. Therefore, in describing this function’s domain and range, the most precise thing to do would be to say $f:[0,4]\rightarrow [0,2]$. However, from what I understand, this notation is not the most precise, and the following would also be true:





  • $f:[1,4]\rightarrow [-5,5]$

  • $f:[1,4]\rightarrow\Bbb{R}$

  • $f:[1,2]\rightarrow\Bbb{R}$



Basically, $A$ must consist of numbers from $f$’s domain (though not necessarily all of it) and must not include any number for which $f$ is not defined, while $B$ can include any numbers so long as it contains the range of $f$ on $A$, which makes sense when looking at the definition of $f:A\rightarrow B$.



All of this research I’ve been doing (I’m only a junior in high school who just started AP Calculus BC) was originally to determine a concise and symbolic way to express that a function is continuous on a certain interval, but now my question has evolved to include something else: how can one clearly and symbolically define the full domain and range of a function without any ambiguity and in one fell swoop?







Update:



It has come to my attention that codomain and range are actually different. From what I understand at this point, codomain is a kind of restriction on the range. Here's an example:



Let $f:x\mapsto\sin\left(\dfrac{\pi x}{2}\right) +1$, as shown in red:
(graph in red omitted)




If one wishes to restrict the domain, simply graph $f:[0,4]\rightarrow\Bbb R$ (as shown in blue):
(graph in blue omitted)



Want to restrict the range, too? Go ahead and meddle with the codomain. Here's $f:[0,4]\rightarrow [0.25,1.75]$ in green:
(graph in green omitted)
As of right now, I'm confused as to whether you have to restrict the domain to exclude those inputs $x$ whose images would be excluded anyway by restricting the codomain.

Monday 26 August 2013

probability - Sum of weighted normal distributions, how to solve $P(X < x) = y$



How do I solve the following equation for $x$



$$\newcommand{\erf}{\operatorname{erf}}\frac{1}{2}\left((f-1)\cdot\erf\left(\dfrac{c-x}{\sqrt2\,b}\right)-f\erf\left(\dfrac{r-x}{\sqrt2\,d}\right)\right)=y$$



I need this to solve the following problem:




I have a distribution built from the sum of two normal distributions, one multiplied by $(1-f)$ and the other multiplied by $f$; the sum of both is multiplied by $g$, like this (where $0 \leq f < 1$):



$$\left(\frac{(1-f) e^{-\frac{1}{2} \left(\frac{x-c}{b}\right)^2}}{\sqrt{2 \pi } b}+\frac{f e^{-\frac{1}{2} \left(\frac{x-r}{d}\right)^2}}{\sqrt{2
\pi } d}\right)
$$



I need to know the $x$ where $P(X < x) = y$.



Is the first equation the cumulative distribution, or did I make a mistake somewhere?


Answer




If you must do this analytically then you need a convolution. I would not.



If your two normal distributions $X_1\sim \mathcal N(a,b^2)$ and $X_2\sim \mathcal N(c,d^2)$ are independent then an easier approach is to say $fX_1\sim \mathcal N\left(fa,f^2 b^2\right)$ and $(1-f)X_2\sim \mathcal N\left((1-f)c,(1-f)^2d^2\right)$ so $$X=fX_1+(1-f)X_2 \sim \mathcal N\left(fa+(1-f)c,f^2 b^2+(1-f)^2d^2\right)$$ and given the mean and variance of $X$ you can solve $P(X\le x)=y$ in the usual way for a normal distribution.
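A minimal sketch of that "usual way" in Python (my own; the parameter values are placeholders), using the standard library's `NormalDist`:

```python
from statistics import NormalDist

f, a, b, c, d, y = 0.3, 1.0, 2.0, 5.0, 1.5, 0.9  # placeholder values
mean = f * a + (1 - f) * c
var = f**2 * b**2 + (1 - f) ** 2 * d**2
x = NormalDist(mean, var**0.5).inv_cdf(y)  # solves P(X <= x) = y
print(x)
```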


integration - Prove $\gamma = \frac{1}{2} + 2 \int_0^\infty \frac{\sin(\arctan(x))}{(e^{2 \pi x} - 1) \sqrt{1 + x^2}}\, dx$

I've found the following integral on the Wikipedia page for the Euler–Mascheroni constant and I want to prove it.



$\gamma = \frac{1}{2} + 2 \cdot \int_0^\infty \frac{\sin(\arctan(x))}{(e^{2 \pi x} - 1) \cdot \sqrt{1 + x^2} } dx$



I know that

$\sin(\arctan(x)) = \frac{x}{\sqrt{x^2 + 1}}$. I tried to apply the Abel–Plana formula to the first derivative of the digamma function, but it does not work.



Any help would be appreciated. Thanks in advance.
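Not a proof, but the identity is easy to check numerically (a sketch of my own with mpmath):

```python
from mpmath import mp, quad, sin, atan, sqrt, exp, pi, inf, euler

mp.dps = 20
f = lambda x: sin(atan(x)) / ((exp(2 * pi * x) - 1) * sqrt(1 + x**2))
print(mp.mpf("0.5") + 2 * quad(f, [0, inf]))  # ≈ 0.5772156649...
print(euler)                                   # Euler–Mascheroni constant
```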

elementary set theory - Is $\mathbb R^2$ equipotent to $\mathbb R$?

I know that $\mathbb N^2$ is equipotent to $\mathbb N$ (by drawing a zig-zag path joining all the lattice points in the $xy$-plane). Can this method be used to prove $\mathbb R^2$ is equipotent to $\mathbb R$?

If two functions are mirror images of each other about the line $y=x$, are they inverses of each other?



I know that for a function $f$ there exists an inverse $f^{-1}$ when $f$ is one-one and onto in its domain. I also know that a function $f$ and its inverse $f^{-1}$ are mirror images about the line $y=x$.




Now, can we say that two functions which are exact mirror images of each other about the line $y=x$ are inverses of each other? Or, in other words, is the converse of the statement "a function and its inverse are mirror images of each other about the line $y=x$" always true? If it is not always true, kindly give me circumstances where the converse fails.






Edit:



From this Quora answer, it is said that two functions having the same graph need not necessarily be equal. Then how can we conclude that the function whose graph is the mirror image about the line $y=x$ is definitely the inverse?


Answer



Suppose the two functions whose graphs are reflections over the line $y=x$ are named $f$ and $g$. What does it even mean to say the graphs are reflections over the line $y=x$? It means that if $(a,b)$ is some point on $f$'s graph, then $(b,a)$ is a point on $g$'s graph.




So take any $a$ in $f$'s domain, and let $b=f(a)$. Then the point $(a,b)$ is on $f$'s graph. So $(b,a)$ is on $g$'s graph. So $g(b)=a$. So $g(f(a))=a$. And this was for an arbitrary $a$ in $f$'s domain. The argument is symmetric for showing that for any $b$ in $g$'s domain, $f(g(b))=b$. So the conclusion is yes, $f$ and $g$ are inverses.


Definition of logarithm in complex domain

My first question is:





What is the proper definition of the logarithmic function $f(z)=\ln{z}$, where $z\in \mathbb{C}$?




Quoting Wikipedia:




a complex logarithm function is an "inverse" of the complex
exponential function, just as the natural logarithm $\ln{x}$ is the

inverse of the real exponential function $e^x$.




In the book Calculus, Vol. 1, by Tom M. Apostol, the function




$\ln{x}$ is defined as $\ln{x}=\int_{1}^{x}{\frac1t\;dt}$ $\color{blue}{\star}$




and the function





$e^x$ is defined to be its inverse




(rather than the opposite).



Here are some reasons why, as per the book and what I have understood.



We can define $e^2 = e\times e$. But how can we give such a definition to $e^{\sqrt{2}}$, or $\large e^{\sqrt{2+\sqrt[3]{3+\sqrt[5]{5}}}}$, or more generally to the function $a^x$ when the domain is $\mathbb{R}$? Hence, as the function $a^x$ is not properly defined on a real domain, how can we think about defining its inverse (the way Wikipedia and some other books define the natural logarithm)?




So if we are to define $\ln{x}$ as in $\color{blue}{\star}$, it solves all the problems: a proper definition of $\ln{x}$ on the real domain, a definition of the exponential function on the real domain, and getting rid of otherwise-circular proofs of some basic limit theorems involving the logarithm and exponential functions.



Thinking in the same way, $e^z$ has the same problems of definition. How can we define its inverse?






Doing a bit of research on the internet, I found some bits of information.



Wikipedia entry: Natural Logarithm says:





The first mention of the natural logarithm was by Nicholas Mercator in
his work Logarithmotechnia published in 1668,[2] although the
mathematics teacher John Speidell had already in $\color{red}{1619}$ compiled a table
on the natural logarithm.[3] It was formerly also called hyperbolic
logarithm,[4] as it corresponds to the area under a hyperbola. It is
also sometimes referred to as the Napierian logarithm, although the
original meaning of this term is slightly different.





Wikipedia entry:Exponential Function




The exponential function arises whenever a quantity grows or decays at
a rate proportional to its current value. One such situation is
continuously compounded interest, and in fact it was this that led
Jacob Bernoulli in $\color{red}{1683}$[4] to the number



now known as e. Later, in 1697, Johann Bernoulli studied the calculus

of the exponential function




The dates of the discoveries (as per the sources) suggest such a definition ($\color{blue}{\star}$).






So summing up I have two questions.



1. A proper definition of $\ln{z}$ when $z\in \mathbb{C}$.




2. Isn't the definition of the logarithm in the real domain that I mentioned ($\color{blue}{\star}$) the best/correct one? (I ask because I have only seen it defined this way in a few places.)

general topology - Can a set be infinite and bounded?




I don't understand a statement in my math course book. I was restudying the compact sets part of the chapter when I came across a corollary saying:



'every infinite and bounded part of $\mathbb{R^n}$ admits at least one accumulation point'



This confuses me because, to me, a set is either bounded and therefore finite, or infinite and therefore unbounded.



I can accept the fact that, without a metric, bounds make no sense in topology, but here $\mathbb{R^n}$ is clearly a metric space.



Thank you for your help.



Answer



When we say that a set is finite or infinite, we are referring to the number of elements in the set, not to the "extent" (putting it roughly) of those elements. You can think of it in the following way. Any set, all of whose elements lie between (for example) $0$ and $1$, is bounded, because no part of the set can possibly "go to infinity". But clearly it is possible to have an infinite number of elements in such a set. For example, all reals between $0$ and $1$, or all rationals between $0$ and $1$, or simply all the numbers $\{\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots\,\}\,$.


algebra precalculus - What is $\sqrt{-4}\sqrt{-9}$?





I assumed that since $a^c \cdot b^c = (ab)^{c}$, then something like $\sqrt{-4} \cdot \sqrt{-9}$ would be $\sqrt{-4 \cdot -9} = \sqrt{36} = \pm 6$ but according to Wolfram Alpha, it's $-6$?


Answer



The property $a^c \cdot b^c = (ab)^{c}$ that you mention holds for nonnegative real bases (and for integer exponents with nonzero bases), but not in general. Since $\sqrt{-4} = (-4)^{1/2}$ involves a negative base and a fractional exponent, you cannot use this property here.



Instead, use imaginary numbers to evaluate your expression:



$$
\begin{align*}
\sqrt{-4} \cdot \sqrt{-9} &= (2i)(3i) \\
&= 6i^2 \\

&= \boxed{-6}
\end{align*}
$$
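The same computation with principal square roots in Python's `cmath` (a one-line check of my own):

```python
import cmath

print(cmath.sqrt(-4) * cmath.sqrt(-9))  # (2j)*(3j) = (-6+0j)
```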


analysis - Showing triangle inequality for a norm



I want to determine whether the following is a norm or not:
\begin{equation}
\max\{|x_1-x_2|, |x_1+x_2|, |x_3|, |x_4|, \ldots,|x_n|\}
\end{equation}
Specifically, I want to know whether the triangle inequality holds:
\begin{equation}
||x+y|| \leq ||x||+||y||
\end{equation}
I also noted that this is very similar to the $L^\infty$ norm:
\begin{equation}
||x||_\infty = \max_{1\leq i\leq n}|x_i|
\end{equation}
Since this norm is identical to the $L^\infty$ norm except for the first two elements, it suffices to consider the cases when $|x_1+x_2|$ or $|x_1-x_2|$ is the maximum element.
I am stuck on this thought, not being able to cover all the cases. (I have been thinking about what happens if $x_1$ and $x_2$ are large but $y_1$ and $y_2$ are small.) Anyway, this leads me to believe that there is a simpler way.




  1. How do I determine whether the triangle inequality holds?

  2. What is a good strategy in general when dealing with maxima?


Answer



Consider the map $f:(x_1,x_2,x_3,\ldots,x_n)\mapsto(x_1-x_2,x_1+x_2,x_3,\ldots,x_n)$. It is easy to show that this is linear and invertible. So, you have got a linear bijection on your hands, and your candidate norm is $|x| = |f(x)|_\infty$.




Then the problem reduces to "given a linear bijection $f:\mathbb{R}^n\to\mathbb{R}^n$ and a known norm $|\cdot|$, show that $|f(\cdot)|$ is also a norm" which (I think) is true and much easier to tackle.
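A quick random spot-check of the triangle inequality for this candidate norm (my own sketch; evidence, not a proof):

```python
import random

def cand(v):
    return max(abs(v[0] - v[1]), abs(v[0] + v[1]), *map(abs, v[2:]))

random.seed(0)
for _ in range(10000):
    x = [random.uniform(-5, 5) for _ in range(4)]
    y = [random.uniform(-5, 5) for _ in range(4)]
    s = [a + b for a, b in zip(x, y)]
    assert cand(s) <= cand(x) + cand(y) + 1e-12
print("no counterexample found")
```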


abstract algebra - If $p$ is prime and $|G|

Can anyone help me with the following exercise from Rotman's Advanced Modern Algebra:



Exercise: Prove that if $p$ is prime and $G$ is a finite group such that every element has order a power of $p$ then $G$ is a $p$-group.



Hint: Use Cauchy's theorem.



Recall a finite group $G$ is a $p$-group if its order $|G|=p^n$ for some $p$ prime and some integer $n\geq 0$.



Thanks




Attempt: I've just written a sketch.



Suppose $|G|=p_1^{\alpha_1}\cdots p_s^{\alpha_s}$ where $p_1, \ldots, p_s$ are distinct primes and $\alpha_j\in\mathbb Z^+$. We know (by Lagrange and the hypothesis) that $$pp^{n-1}=p^n\mid |G|,$$ so that $$p\mid p_1^{\alpha_1}\cdots p_s^{\alpha_s}.$$ In particular $p\mid p_i^{\alpha_i}$ for (at least) one $i\in\{1, \ldots, s\}$. For simplicity suppose $i=1$. Then $p\mid p_1$, hence $p=p_1$, for both $p$ and $p_1$ are primes. Therefore, $$|G|=p^{\alpha_1}p_2^{\alpha_2}\cdots p_s^{\alpha_s}.$$ The factors $p_2^{\alpha_2}, \ldots, p_s^{\alpha_s}$ cannot occur above, for otherwise by Cauchy's theorem $p_j\mid |G|$ for some $j\in\{2, \ldots, s\}$ would give an $a_j\in G$ with $o(a_j)=p_j\neq p$, a contradiction. Therefore $|G|=p^{\alpha_1}$ and $G$ is a $p$-group.



Obs: I didn't like my last argument, but I don't know how to write it better for now.

Sunday 25 August 2013

limits - Find the value of the series $\sum\limits_{n=1}^\infty \frac{n}{2^n}$

Find the value of the series $\sum\limits_{n=1}^ \infty \dfrac{n}{2^n}$



The series, on expanding, comes out as $\dfrac{1}{2}+\dfrac{2}{2^2}+\cdots$



I tried using the form $(1+x)^n=1+nx+\dfrac{n(n-1)}{2}x^2+\cdots$ and then differentiating it, but it is still not working out. What should I do with this?

calculus - How to calculate $\int_0^{\pi/2} \sin^a x \cos^b x \,\mathrm{d} x$



$$\int_0^{\pi/2} \sin^a x \cos^b x \,\mathrm{d} x = \frac{\Gamma\left(\frac{1+a}{2}\right)\Gamma\left(\frac{1+b}{2}\right)}{2 \Gamma\left(1 + \frac{a + b}{2}\right)} \quad\quad\text{for } a, b > -1$$
according to Mathematica. Wikipedia also lists a recursive expression for the indefinite integral when $a, b > 0$. My question is how to derive the explicit formula given by Mathematica (preferably without using esoteric special functions, but complex analysis is fine).


Answer



Okay first of all we make the substitution $t=\sin^2x$. Thus the integral becomes
$$\frac12\int_0^1t^{\frac{a-1}2}(1-t)^{\frac{b-1}2}\mathrm dt=\frac12\int_0^1t^{\frac{a+1}2-1}(1-t)^{\frac{b+1}2-1}\mathrm dt$$
Then we recall the definition of the Beta function:

$$B(x,y)=\int_0^1t^{x-1}(1-t)^{y-1}\mathrm dt=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$$
So we have our integral at
$$\frac{\Gamma(\frac{a+1}2)\Gamma(\frac{b+1}2)}{2\Gamma(\frac{a+b}2+1)}$$
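A spot-check of the closed form for sample exponents (my own sketch):

```python
from mpmath import mp, quad, gamma, sin, cos, pi

mp.dps = 15
a, b = 0.5, 2.3   # any a, b > -1
lhs = quad(lambda x: sin(x)**a * cos(x)**b, [0, pi / 2])
rhs = gamma((1 + a) / 2) * gamma((1 + b) / 2) / (2 * gamma(1 + (a + b) / 2))
print(lhs, rhs)   # the two values agree
```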


definite integrals - Closed form $\int_0^\infty\left(\frac{\tanh x}{x^2}-\frac{1}{xe^{2x}}\right)dx=12\log A-\frac{4}{3}\log 2$




Evaluate



$$\int_0^\infty\left(\frac{\tanh x}{x^2}-\frac{1}{xe^{2x}}\right)dx$$



I haven't been able to find references to the indefinite integral of the $\tanh$ term, except for some similar forms that had solutions; see here and here.



Edit:
Following Random Variable's result we have the form
$$12\log A-\frac{4}{3}\log 2$$



Answer



Let $$I(z) =\frac{z}{a}\int_0^\infty \left(\frac{1}{2}-\frac{z}{at}+\frac{1}{e^{at/z}-1}\right) \frac{1-e^{-at}}{t^2} dt$$
Then Binet's first formula says
$$I(z) = \int_0^z \left[\ln\Gamma(x) - (z-\frac{1}{2})\ln z + z - \frac{\ln(2\pi)}{2}\right] dx $$






Letting $a=2,z=1/2$ gives
$$4I(\frac{1}{2}) =\int_0^\infty \left(\frac{1}{2}-\frac{1}{4t}+\frac{1}{e^{4t}-1}\right) \frac{1-e^{-2t}}{t^2} dt$$
and $a=4,z=1$ gives

$$4I(1) =\int_0^\infty \left(\frac{1}{2}-\frac{1}{4t}+\frac{1}{e^{4t}-1}\right) \frac{1-e^{-4t}}{t^2} dt$$



Some algebraic manipulation yields
$$4I(1)-4I(\frac{1}{2}) = \underbrace{\int_0^\infty \left[\frac{1}{2t^2}-\frac{e^{-2t}}{2t} - (\frac{1}{2}-\frac{1}{4t})\left(\frac{e^{-2t}-e^{-4t}}{t^2}\right)\right] dt}_{J} - \frac{I}{2}$$



With $I$ your desired integral. Surprisingly, $J$ has an elementary primitive (even if it did not, we would still have a systematic way to crack it), with value $5/8$.






Hence it remains to evaluate $$\int_0^1 \ln \Gamma(x)\, dx \quad \text{and} \quad \int_0^{1/2} \ln \Gamma(x)\, dx$$

The former is just $\ln(2\pi)/2$, for the latter, we can use the integral representation of Barnes G function:
$$\int_0^z \ln \Gamma(x) dx = \frac{z(1-z)}{2}+\frac{z}{2}\ln(2\pi) + z \ln\Gamma(z) - \log G(1+z)$$



and the special value $$\ln G(\frac{3}{2}) = -\frac{3}{2}\ln A + \frac{\ln \pi}{4}+\frac{1}{8}+\frac{\ln 2}{24}$$



with $A$ being the Glaisher-Kinkelin constant.



Alternatively, use Fourier expansion of $\ln \Gamma(x)$, integrate termwise, and remember the relation between $A$ and $\zeta'(2)$ also gives the value of the integral $$\int_0^{1/2} \ln\Gamma(x)dx = \frac{3}{2}\ln A + \frac{5}{24}\ln 2 + \frac{\ln \pi}{4}$$
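The closed form checks out numerically (my own sketch; mpmath ships the Glaisher–Kinkelin constant as `glaisher`):

```python
from mpmath import mp, quad, tanh, exp, log, glaisher, inf

mp.dps = 30
I = quad(lambda x: tanh(x) / x**2 - 1 / (x * exp(2 * x)), [0, 1, inf])
print(I)
print(12 * log(glaisher) - mp.mpf(4) / 3 * log(2))  # the two values agree
```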


logarithms - Series for $\log 3$

I have the following series:



$$\sum_{k=0}^{\infty} \left(\frac{1}{3k+1}+\frac{1}{3k+2}-\frac{2}{3k+3}\right)$$



Wolfram says this is just $\log 3$. I have been trying to figure out how this works purely through series manipulation (without integrals etc.).




I've tried splitting it up into several series, but nothing seems to fit nicely because the pattern has period 3. The series I know for $\log$ which I tried first was:



$$\log(1+x)=\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}x^k$$
perhaps with $x=-\frac{2}{3}$, but this introduces powers which don't seem natural to derive from the original expression.



Any help would be great.
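A numerical check of my own that the sum really is $\log 3$ (the tail after $N$ terms is $O(1/N)$, so a long partial sum gets close):

```python
from math import log

s = sum(1 / (3 * k + 1) + 1 / (3 * k + 2) - 2 / (3 * k + 3)
        for k in range(200000))
print(s, log(3))  # both ≈ 1.09861
```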

calculus - Evaluating series of zeta values like $\sum_{k=1}^{\infty} \frac{\zeta(2k)}{k16^{k}}=\ln(\pi)-\frac{3}{2}\ln(2)$



Somehow I derived these values a few years ago but I forgot how.
It cannot be very hard (certainly doesn't require "advanced" knowledge) but I just don't know where to start.




Here are the sums:
$$
\begin{align}
\sum_{k=1}^{\infty} \frac{\zeta(2k)}{4^{k}}&=\frac{1}{2}
\\\\
\sum_{k=1}^{\infty} \frac{\zeta(2k)}{16^{k}}&=\frac{4-\pi}{8}
\\\\
\sum_{k=1}^{\infty} \frac{\zeta(2k)}{k4^{k}}&=\ln(\pi)-\ln(2)
\\\\

\sum_{k=1}^{\infty} \frac{\zeta(2k)}{k16^{k}}&=\ln(\pi)-\frac{3}{2}\ln(2).
\end{align}
$$


Answer



Hint. One may start with the classic series expansion, which may come from the Weierstrass infinite product of the sine function,




$$
\sum _{n=1}^{\infty } \frac{x^2}{n^2+x^2}=\frac{1}{2} (-1+\pi x \cot (\pi x)) , \quad|x|<1. \tag1
$$





Expanding the left hand side of $(1)$ one deduces




$$
\sum_{k=1}^{\infty } \zeta(2k)\:x^{2k}=\frac{1}{2} (1-\pi x \cot (\pi x)) , \quad|x|<1. \tag2
$$





By dividing $(2)$ by $x$ and integrating one gets




$$
\sum_{k=1}^{\infty } \zeta(2k)\:\frac{x^{2k}}k=\log \left(\frac{\pi x}{\sin(\pi x)}\right) , \quad|x|<1. \tag3
$$




Your equalities are now obtained by putting $x:=\dfrac12,\, \dfrac14$ in $(2)$ and in $(3)$.
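All four values can also be confirmed numerically (my own sketch; the terms decay like $4^{-k}$ or $16^{-k}$, so a short partial sum suffices):

```python
from mpmath import mp, zeta, pi, log

mp.dps = 25

def S(base, divide_by_k):
    return sum(zeta(2 * k) / base**k / (k if divide_by_k else 1)
               for k in range(1, 200))

print(S(4, False), mp.mpf(1) / 2)
print(S(16, False), (4 - pi) / 8)
print(S(4, True), log(pi) - log(2))
print(S(16, True), log(pi) - mp.mpf(3) / 2 * log(2))
```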


Friday 23 August 2013

algebra precalculus - General Principles of Solving Radical Equations



What are the general ways to solve radical equations similar to questions like




$\sqrt{x+1}+\sqrt{x-1}-\sqrt{x^2 -1}=x$

$\sqrt{3x-1}+\sqrt{5x-3}+\sqrt{x-1}=2\sqrt2$

$\sqrt{\frac{4x+1}{x+3}}-\sqrt{\frac{x-2}{x+3}}=1$





Are there just a few known ways to solve them? How do you know the best way to solve such questions? I have trouble with a lot of square root equations, and when I ask them on this site, I get good answers, but only for that one question. I was wondering if there were any general principles for solving such questions.


Answer



One rather general strategy is to replace each new root $\sqrt[k]{expression}$ in the equation by a new variable, $r_j$, together with a new equation
$r_j^k = expression$ (so now you will have $m+1$ polynomial equations in $m+1$ unknowns, where $m$ is the number of roots). Then eliminate variables from the system, ending with a single polynomial equation in one unknown, such that your original variable can be expressed in terms of the roots of this polynomial. This procedure can introduce spurious solutions if you only want the principal branch of the $k$'th root, so don't forget to check whether the solutions you get are valid.



For example, in your second equation,
we get the system
$$ \eqalign{r_1 + r_2 + r_3 - 2 \sqrt{2} &= 0\cr
r_1^2-(3x-1) &= 0\cr r_2^2-(5x-3) &= 0\cr
r_3^2-(x-1) &=0\cr}$$

Take the resultant of the first two polynomials with respect to $r_1$, then the resultant of this and the third with respect to $r_2$, and the resultant of this and the fourth with respect to $r_3$. We get
$$ 121 x^4-4820 x^3+28646 x^2-45364 x+21417$$
which happens to factor as
$$ \left( x-1 \right) \left( x-33 \right) \left( 121\,{x}^{2}-706\,x+
649 \right)
$$
However, only the solution $x=1$ turns out to satisfy the original equation.
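The resultant cascade is easy to reproduce with SymPy (my own sketch; up to a constant factor, the final resultant is the quartic above):

```python
from sympy import symbols, sqrt, resultant, factor

x, r1, r2, r3 = symbols('x r1 r2 r3')
e0 = r1 + r2 + r3 - 2 * sqrt(2)
e1 = r1**2 - (3 * x - 1)
e2 = r2**2 - (5 * x - 3)
e3 = r3**2 - (x - 1)

p = resultant(e0, e1, r1)   # eliminate r1, then r2, then r3
p = resultant(p, e2, r2)
p = resultant(p, e3, r3)
print(factor(p))  # contains (x - 1)(x - 33)(121x^2 - 706x + 649)
```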


Sum of infinite series involving $\cos\theta$



Find the sum of the following series:



$$1 + \frac{1}{4!} \cos 4\theta + \frac{1}{8!} \cos 8\theta + ...$$




My attempt:



I need a hint to start. Thanks!


Answer



HINT
$$\frac 12(\cosh(x)+\cos(x))=\frac 12\left(\sum_{k=0}^\infty \frac {x^{2k}}{(2k)!}+\sum_{k=0}^\infty \frac {(-1)^kx^{2k}}{(2k)!}\right)=\sum_{k=0}^\infty \frac {x^{4k}}{(4k)!}$$
and you have:
$$\sum_{k=0}^\infty \frac{\cos(4k\theta)}{(4k)!}$$
and $\cos(x)=\Re(e^{ix})$





The answer turns out to be: $\dfrac 12 \left(\cos(\sin(\theta))\cosh(\cos(\theta))+ \cos(\cos(\theta))\cosh(\sin(\theta))\right)$
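A numerical spot-check of that closed form (my own sketch):

```python
import math

theta = 0.7
lhs = sum(math.cos(4 * k * theta) / math.factorial(4 * k) for k in range(10))
rhs = 0.5 * (math.cos(math.sin(theta)) * math.cosh(math.cos(theta))
             + math.cos(math.cos(theta)) * math.cosh(math.sin(theta)))
print(lhs, rhs)  # agree to machine precision
```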



approximation - Power series for $\ln(1+x^2)$




In the problem I am asked to use a power series representation of $\ln(1+x)$ to approximate the integral from $0$ to $0.5$ of $\ln(1+x^2)$ to within 4 decimal places.
So far I have found a series for $\ln(1+x^2)$ by manipulating the known series for $\frac{1}{1-r}$:



$\ln (1+x)=x−x^2/2+(2x^3)/3−(3x^4)/4+(4x^5)/5 \cdots$



Substituting x for $ x^2$ yields:



$\ln (1+(x^2 ))=(x^2 )−(x^2 )^2/2+(2(x^2 )^3)/3−(3(x^2 )^4)/4+(4(x^2 )^5)/5−…=x^2−x^4/2+(2x^6)/3−(3x^8)/4+(4x^{10})/5 - \cdots$



In order to determine accuracy I have to figure out how many terms to take, and to figure this out I must be able to define a general term in the series. I have tried for several hours to determine the general term, and the closest I have come to success is a general term which describes the entire series that I can imagine but not the first term. Something like: $\sum_{n=1}^\infty \frac{(-1)^n \cdot 2(n^2+n)\, x^{2n+3}}{(n+1)(2n+3)}$




Can anybody tell me if/where there is an error in my arithmetic/approach that is causing problems, and point me in the right direction? Or, if anybody knows of any standard procedure to obtain the general term from this point, that would be helpful as well.


Answer



The TeX is not entirely clear, and the answer obtained may not be correct. So we do the calculation. We start from the expansion
$$\frac{1}{1+t}=1-t+t^2-t^3+ t^4-t^5+\cdots,$$
valid when $|t|\lt 1$. Integrating term by term we get
$$\ln(1+t)=t-\frac{t^2}{2}+\frac{t^3}{3}-\frac{t^4}{4}+\frac{t^5}{5}-\cdots. $$
Mechanically replacing $t$ by $x^2$, we get
$$\ln(1+x^2)=x^2-\frac{x^4}{2}+\frac{x^6}{3}-\frac{x^8}{4}+\frac{x^{10}}{5}-\cdots. $$
Integrating from $0$ to $w$, we get

$$\int_0^w\ln(1+x^2)\,dx=\frac{w^3}{1\cdot 3}-\frac{w^5}{2\cdot 5}+\frac{w^7}{3\cdot 7}-\frac{w^9}{4\cdot 9}+\frac{w^{11}}{5\cdot 11}-\frac{w^{13}}{6\cdot 13}+\cdots.$$



Now for the evaluation, say at $w=1/2$, note that we have an alternating series (signs alternate, terms go down in absolute value, and have limit $0$).



So the error when we truncate the calculation has absolute value less than the absolute value of the first omitted term. A little playing with the calculator, or more reliably, mental arithmetic, shows we don't have to go very far to have error less than $5\times 10^{-5}$.



Remark: If we want an expression for the coefficients of powers of $w$, note that the powers are all odd, and the odd power $w^{2k+1}$ has, apart from sign, coefficient $\frac{1}{k(2k+1)}$. If we wish to take sign into account, it is positive if $k$ is odd and negative if $k$ is even. That can be captured by $(-1)^{k+1}$.



We can also use summation notation. However, there is a risk of losing contact with the ground. Even if we use summation notation, it is useful to write down the first few terms explicitly, as a check.
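In code, the truncated series at $w = 1/2$ looks like this (my own sketch; five terms already beat the $5\times10^{-5}$ target):

```python
w = 0.5
# Terms (-1)^(k+1) * w^(2k+1) / (k(2k+1)) from the expansion above.
series = sum((-1) ** (k + 1) * w ** (2 * k + 1) / (k * (2 * k + 1))
             for k in range(1, 6))
print(series)  # ≈ 0.0388683; the next omitted term is below 2e-6
```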


calculus - Solve $\lim_{x\to 0} \frac{\sin x-x}{x^3}$




I'm trying to solve this limit




$$\lim_{x\to 0} \frac{\sin x-x}{x^3}$$




Solving using L'hopital rule, we have:




$$\lim_{x\to 0} \frac{\sin x-x}{x^3}= \lim_{x\to 0} \frac{\cos x-1}{3x^2}=\lim_{x\to 0} \frac{-\sin x}{6x}=\lim_{x\to 0} \frac{-\cos x}{6}=-\frac{1}{6}.$$



Am I right?



I'm trying to solve this using a change of variables; I need help.



Thanks



EDIT




I didn't understand the answer and the comments; I'm looking for an answer using a change of variables.


Answer



I suppose the below counts as a change of variable.



Assuming that the limit exists, then you can compute the limit as follows:



Replace $x$ by $3x$, then the limit (say $L$) is



$$L = \lim_{x\to 0}\frac{\sin 3x - 3x}{27x^3} = \lim_{x\to 0}\frac{3\sin x - 3x - 4\sin^3 x}{27x^3} = $$

$$\lim_{x\to 0}\frac{1}{9}\left(\frac{\sin x - x}{x^3}\right) - \lim_{x\to 0}\frac{4}{27}\left(\frac{\sin^3 x}{x^3}\right)$$



(we used the formula $\sin 3x = 3\sin x - 4 \sin^3 x$).



Thus we get



$$L = \frac{L}{9} - \frac{4}{27} \implies L = -\frac{1}{6}$$



Of course, we still need to prove that the limit exists.
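For reference, a computer algebra system confirms the value (my own sketch):

```python
from sympy import limit, sin, symbols

x = symbols('x')
print(limit((sin(x) - x) / x**3, x, 0))  # -1/6
```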


sequences and series - how to interpret this partition identity?



Use the symbol $P(N)$ to denote the set of all partitions of a positive integer $N$, and denote by $P_k$ the number of occurrences of $k$ in the partition $P \in P(N)$, so that
$$
N = \sum kP_k

$$



by equating coefficients in the identity:
$$
\frac1{1-x} = e^{-\ln(1-x)}=e^{\sum_{k=1}^{\infty}\frac{x^k}{k}}
$$
we see that
$$
\sum_{P \in P(N)}\left( \prod_{P_k\gt 0}k^{P_k}P_k!\right)^{-1} = 1 \tag{1}
$$

Question (a) does this identity have any well-known combinatorial interpretation? (b) is there a simple direct proof of (1) which does not invoke power series?


Answer



Hint: look at http://lipn.univ-paris13.fr/~duchamp/Books&more/Macdonald/%5BI._G._Macdonald%5D_Symmetric_Functions_and_Hall_Pol%28BookFi.org%29.pdf, p. 24, (2.14), which introduces the $z_\lambda$; the product in your expression is exactly $z_\lambda$, and $1/z_\lambda$ is the size of the corresponding conjugacy class of the symmetric group $S_n$ divided by $n!$.
Example: for $S_5$ we get class sizes 24, 30, 20, 20, 15, 10, 1, adding to $5!$ or 120 (the order of the group). Divide them by $5!$ and the sum is 1.
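Identity (1) is also easy to verify directly for small $N$ (a sketch of my own, enumerating partitions):

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Yield partitions of n as dicts {part k: multiplicity P_k}."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield {}
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            p = dict(rest)
            p[k] = p.get(k, 0) + 1
            yield p

for N in range(1, 9):
    total = Fraction(0)
    for p in partitions(N):
        denom = 1
        for k, m in p.items():
            denom *= k ** m * factorial(m)
        total += Fraction(1, denom)
    print(N, total)  # prints 1 for every N
```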


Combinatorics question about six letter sequences with repetition

The question I'm trying to answer is as follows:



"How many six-letter “words” (sequences of letters with repetition) are there in which the first and last letter are vowels? In which vowels appear only (if at all) as the first and last letter?"



For the first part of the problem, I got an answer of $5\cdot 26\cdot26\cdot26\cdot26\cdot5 = 11424400$.



I'm not sure this is correct because I'm not sure if some solutions are being double counted.



For the second part, I'm having trouble finding an answer. For solutions with the vowels, I think it would be $5\cdot21\cdot21\cdot21\cdot21\cdot5$, but then I'm not sure how to account for those solutions that do not have vowels.




Any help is appreciated. Thank you!
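One way to gain confidence in this kind of count is to brute-force a shrunken version of the problem (my own sketch: 2 "vowels", 3 "consonants", words of length 4):

```python
from itertools import product

vowels, consonants = "ae", "bcd"
alphabet = vowels + consonants
count = sum(1 for w in product(alphabet, repeat=4)
            if w[0] in vowels and w[-1] in vowels)
print(count, 2 * 5 * 5 * 2)  # 100 100, the same pattern as 5*26^4*5
```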

probability - How does $E(|X|)=\int_0^{\infty}P[|X|\ge x]\,dx$?





As the title states, how does $E(|X|)=\int_0^{\infty}P[|X|\ge x]\,dx$? The only assumption is that $E(|X|) < \infty$.



Maybe I can use the indicator function in some way, since $E[1_{X\ge x}]=P[X\ge x]$?



Thanks in advance!



Answer



This is, at its heart, a consequence of Tonelli's Theorem, which is a lot like Fubini's Theorem. For any non-negative random variable $Y$ with finite expectation, you can write
$$
\begin{align*}
\int_0^{\infty}P(Y\geq y)\,d\mu(y)&=\int_0^{\infty}\int_{\Omega}1_{\{Y(\omega)\geq y\}}\,dP(\omega)\,d\mu(y)\\
&=\int_{\Omega}\int_0^{\infty}1_{\{Y(\omega)\geq y\}}\,d\mu(y)\,dP(\omega)\\
&=\int_{\Omega}Y(\omega)\,dP(\omega)\\
&=\mathbb{E}[Y],
\end{align*}
$$

where $(\Omega,\mathcal{F},P)$ is our probability space and $\mu$ is Lebesgue measure on $\mathbb{R}$.



(Note that we can definitely apply Tonelli's Theorem here, as $P$ and $\mu$ are both $\sigma$-finite and $1_{\{Y(\omega)\geq y\}}$ is a non-negative function.)
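A Monte Carlo illustration of my own for $Y = |X|$ with $X$ standard normal, where $\mathbb{E}|X| = \sqrt{2/\pi} \approx 0.7979$:

```python
import bisect
import random

random.seed(1)
xs = sorted(abs(random.gauss(0, 1)) for _ in range(20000))
mean = sum(xs) / len(xs)

# Riemann sum of the tail probability P(|X| >= t) over t.
dx = 0.005
tail = sum(dx * (len(xs) - bisect.bisect_left(xs, t)) / len(xs)
           for t in (i * dx for i in range(1400)))
print(mean, tail)  # both ≈ 0.80
```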


Thursday 22 August 2013

linear algebra - How to find the bases for two spanning sets and for their sum?

Let $u_1=(1,2,0,-1)$, $u_2=(0,2,-1,1)$, $u_3=(3,4,1,-4)$ and $v_1=(-2,-2,1,3)$, $v_2=(2,3,2,-6)$, $v_3=(-1,4,6,-2)$.

Let $H =span\{u_1,u_2,u_3\}$ and $K = span\{v_1,v_2,v_3\}$.



Here I have to find bases for $H$, $K$ and $H+K$, and I can't understand how to do it.
I wrote the vectors in $H$ and $K$ as linear combinations; then I think I have to prove that those vectors are linearly independent, but I don't know how to do it.
Can you help me find an answer to this question?
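One concrete route (my own sketch, not necessarily the intended hand method): put the vectors into columns and read off the pivot columns with SymPy:

```python
from sympy import Matrix

u = [Matrix([1, 2, 0, -1]), Matrix([0, 2, -1, 1]), Matrix([3, 4, 1, -4])]
v = [Matrix([-2, -2, 1, 3]), Matrix([2, 3, 2, -6]), Matrix([-1, 4, 6, -2])]

# The pivot columns of each matrix give a basis of its column space;
# stacking all six columns handles H + K.
for name, cols in [("H", u), ("K", v), ("H+K", u + v)]:
    M = Matrix.hstack(*cols)
    pivots = M.rref()[1]
    print(name, [list(M.col(j)) for j in pivots])
```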

The probability that x birthdays lie within n days of each other



This is a question that has bugged me for quite some time: what is the chance that x people happen to have their birthdays within n days of each other?



To be a bit more specific, since this is how a colleague once phrased it: what is the probability that 5 people have their birthdays within 40 days?



Both the birthdays and the "distance" are supposed to be random: there is no fixed time span (e.g., April 1 to May 10) that the birthdays are to lie within. The birthdays should be such that any two birthdays are within 40 days of each other.




The thing that bugs me is that it seems to be some kind of recursive calculation, and that I can't find a way to put it into a straightforward mathematical formulation.



To explain that, consider 2 people: the first person is free to have his or her birthday $b_1$ any day of the year, and the second person has 81 days to pick a birthday date $b_2$ from (the 40 day timespan is inclusive, so up to 40 days before $b_1$, plus up to 40 days after $b_1$, plus one on $b_1$ itself. This may be more logically phrased as 41 days for some; I don't know what is best, so please be clear about it in your answer).



Now, for the third person, the number of birthdays he or she can have is limited by the second person's birthday: if $b_2 = b_1$, then $b_3$ can be among 81 days, but if $b_2 = b_1 + 1$ or $b_2 = b_1 - 1$, there are only 80 days for each option, and 79 for $|b_1 - b_2| = 2$, etc.



For the fourth person, the limitation is given by persons 2 and 3, complicating things; the fifth person makes things even more complicated.



I've also tried to go the "exclusion" way (what is the chance that 5 people do not share their birthdays within 40 days of each other), but I didn't get anywhere that way.




But perhaps I'm going entirely the wrong way about this.



By now, I've computed it in various ways, and I'm quite confident of the answer, but I'm still looking for the mathematical formulation of the general ($x$ birthdays, $n$ days) problem.



The answer I've got, btw, is $7.581428 \cdot 10^{-4}$, or $\frac{13456201}{365^4}$.



NB: this obviously assumes no leap years.
NB2: Extension of the Birthday Problem appears related, though I can't readily see if I can use any of that formulation here.


Answer



Let $B_1,\dots,B_x$ be the birthdays of persons $1,\dots,x$ and consider for all $m \in [0,364]$ the event "all birthdays happen between days $m$ and $m+n$, and $m$ is one of the birthdays", that is

$$
E_m = \{\forall i,\; B_i\in [m, m + n]\}\cap \{\exists i,\; B_i = m\}.
$$
(interpret $m$ as some kind of "minimal" birthday)



If $n < 365/2$, the events $E_m$ are mutually exclusive: at most one of them can happen. On the other hand, the probability that all $x$ birthdays are contained in a block of $n+1$ consecutive days is the probability that at least one of these events $E_m$ happens. This probability is therefore
$$
\sum_{m=0}^{364} \Pr(E_m) = \sum_{m=0}^{364} \left[\left(\frac{n+1}{365}\right)^x- \left(\frac{n}{365}\right)^x\right] = \frac{(n+1)^x - n^x}{365^{x-1}}.
$$




Indeed, $\Pr(E_m)$ is obtained by a simple inclusion-exclusion counting: it is the probability that the birthdays are contained in the block $[m,m+n]$ of size $n+1$ but not all of them in the subblock $[m+1,m+n]$ of size $n$.
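This formula reproduces the value quoted in the question exactly, with $x = 5$ people and $n = 40$ days (a check of my own):

```python
from fractions import Fraction

def prob_within(x, n, days=365):
    return Fraction((n + 1) ** x - n ** x, days ** (x - 1))

p = prob_within(5, 40)
print(p)          # 13456201/17748900625, i.e. 13456201/365^4
print(float(p))   # ≈ 7.581428e-04
```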






What can be said when $n \geq \frac{365}{2}$? The events $E_m$ are not mutually exclusive anymore so we should use the Inclusion–exclusion formula:
$$
\sum_m \Pr(E_m) - \sum_{m_1 < m_2} \Pr(E_{m_1}\cap E_{m_2})+ \dots + (-1)^{k+1} \sum_{m_1 < \dots < m_k} \Pr(E_{m_1}\cap \dots\cap E_{m_k}) + \dots
$$


trigonometry - Express $sin 3theta$ and $cos 3theta$ as functions of $sin theta$ and $cos theta$ using Euler's identity

Using Euler's identity ($e^{in\theta}=\cos n\theta+i \sin n\theta$), express $\sin 3\theta$ and $\cos 3\theta$ as functions of $\sin \theta$ and $\cos \theta$.



Any ideas?
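One standard route (a sketch): expand $e^{3i\theta} = (e^{i\theta})^3$ with the binomial theorem and compare real and imaginary parts:

$$(\cos\theta + i\sin\theta)^3 = \cos^3\theta + 3i\cos^2\theta\sin\theta - 3\cos\theta\sin^2\theta - i\sin^3\theta,$$

so $\cos 3\theta = \cos^3\theta - 3\cos\theta\sin^2\theta$ and $\sin 3\theta = 3\cos^2\theta\sin\theta - \sin^3\theta$.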

discrete mathematics - Proof of easy matching condition for Hall's theorem

I was studying with the recitations provided in the course 6.042 "Mathematics for Computer Science" of MIT OCW and while studying the proof of Hall's marriage problem, I understood the first proof where the bottleneck condition comes in.



However, since that is not efficient (you would need to check a billion subsets for a set of size 30), he talks of an easy matching condition, for which he provides this theorem.




Theorem - "Let $G$ be a bipartite graph with vertex partition $L, R$ where $|L| \leq |R|$. If $G$ is degree-constrained, then there is a matching that covers $L$."



and he has provided this definition for the term "degree-constrained".



Definition (for reference) - "A bipartite graph $G$ with vertex partition $L, R$ where $|L| \leq |R|$ is degree-constrained if $\deg(l) \geq \deg(r)$ for every $l \in L$ and $r \in R$."



He has proved the matching theorem by contradiction, but I am having problems understanding the proof!
Can you provide a better or simpler explanation of it?




EDIT 1:
The following is the proof provided in their "readings" section:



The proof is by contradiction. Suppose that G is degree constrained but that there is no matching that covers L. This means that there must be a bottleneck $S \subseteq L$. Let x be a value such that $deg(l) \geq x \geq deg(r)$ for every $l \in L$ and $r \in R$.



Since every edge incident to a node in S is incident to a node in N(S), we know that
$$|N(S)|x \geq |S|x$$
and thus that
$$|N(s)| \geq |S|$$




This means that S is not a bottleneck, which is a contradiction. Hence G has a matching that covers L.



My problem is how they obtained
$$|N(S)|x \geq |S|x$$
in the first place! Please also provide a brief intuition for it, if possible.

sequences and series - Study the convergence of $\sum_n\left|\frac{\sin(\alpha_n-\alpha_m)}{\alpha_n-\alpha_m}\right|$.



In many questions available on Math.SE:



When does $\sum_{n=1}^{\infty}\frac{\sin n}{n^p}$ absolutely converge?



Does $\sum_{n=1}^{\infty} \frac{\sin(n)}{n}$ converge conditionally?



How to prove that $ \sum_{n \in \mathbb{N} } | \frac{\sin( n)}{n} | $ diverges?




the convergence of the series
$$\sum_n\left|\frac{\sin n}{n}\right|$$
is studied. In this question, let us consider a real sequence $(\alpha_n)$. About this sequence there are no hypotheses, except that
$$|\alpha_n-\alpha_{n-1}|\geq \gamma>0.$$
Study the convergence of
$$\sum_n\left|\frac{\sin(\alpha_n-\alpha_m)}{\alpha_n-\alpha_m}\right|$$



I'm not able to work it out. Any suggestions, please?


Answer




Suppose $a_n = \pi n/2$. Then
$$|a_n-a_m| = \frac{\pi|n-m|}{2}$$
and
$$|\sin(a_n-a_m)| = \left|\sin\frac{\pi(n-m)}{2}\right| = 1$$
whenever $n-m$ is odd, and zero otherwise.



Therefore
$$S(m) = \sum_{n \ne m} \left| \frac{\sin(a_n-a_m)}{a_n-a_m} \right|$$
diverges for all $m$, by comparison with the harmonic series.
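A quick numerical illustration of this divergence (a Python sketch; $m=1$ is fixed, and $\ln(N)/\pi$ is shown only for comparison):

```python
# Nonzero terms are 2/(pi*|n-m|) when n-m is odd, so the partial sums
# grow logarithmically in N, like the harmonic series.
import math

m = 1
for N in (10**2, 10**4, 10**6):
    s = sum(abs(math.sin(math.pi * (n - m) / 2)) / (math.pi * abs(n - m) / 2)
            for n in range(1, N + 1) if n != m)
    print(f"N = {N:>7}: partial sum = {s:.4f},  ln(N)/pi = {math.log(N) / math.pi:.4f}")
```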


sequences and series - Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)



As I have heard, people did not trust Euler when he first discovered the formula (solution of the Basel problem)

$$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$
However, Euler was Euler and he gave other proofs.



I believe many of you know some nice proofs of this, can you please share it with us?


Answer



OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from THE BOOK" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9
(EDIT: ...which is actually the proof that I read in Aigner & Ziegler).



When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus
$$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$

Note that $1/\tan^2 x = 1/\sin^2 x - 1$.
Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum
the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$:
$$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$
Denoting the sum on the right-hand side by $S_n$, we can write this as
$$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$



Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with,
$$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$
Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short,

$$S_n = 4 S_{n-1} + 2.$$
Since $S_1=2$, the solution of this recurrence is
$$S_n = \frac{2(4^n-1)}{3}.$$
(For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)



We now have
$$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$
Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
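A numerical check of the squeeze (a Python sketch, not part of the original answer):

```python
# Both bounds (pi^2/4^(n+1)) * (2(4^n-1)/3 - (2^n-1)) and
# (pi^2/4^(n+1)) * 2(4^n-1)/3 trap the partial sums and tend to pi^2/6.
import math

for n in (2, 5, 10, 20):
    Sn = 2 * (4**n - 1) / 3
    lo = (math.pi**2 / 4**(n + 1)) * (Sn - (2**n - 1))
    hi = (math.pi**2 / 4**(n + 1)) * Sn
    partial = sum(1 / k**2 for k in range(1, 2**n))   # k = 1 .. 2^n - 1
    print(f"n={n:2d}: {lo:.8f} <= {partial:.8f} <= {hi:.8f}")
print("pi^2/6 =", math.pi**2 / 6)
```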


modular arithmetic - Divisibility tests, finding the value of two missing numbers in a message.

I was checking the following divisibility tests exercise:




You receive a message from an extraterrestrial alien, who is calculating
$43434343^{2}$. The answer is $18865ab151841649$, where the two digits represented
as $a$ and $b$ were lost in transmission. Use congruences $\bmod\ 9$ and
$\bmod\ 11$ to determine the answer to this fundamental problem.




I've been trying to work out the missing digits of $18865ab151841649$. The sum of the known digits is $67$, so we need

$a+b = 5$ to reach $72$, a multiple of $9$; that gives six couples:

$(3,2)$ $(2,3)$ $(4,1)$ $(1,4)$ $(5,0)$ $(0,5)$



After that, I tried to check each couple with the divisibility test for $11$, by subtracting the sum of the digits in even positions from the sum of those in odd positions, but none of them worked. Any help will be really appreciated.
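Such hand computations are easy to cross-check by brute force; here is a Python sketch (illustrative, not part of the exercise) that compares each candidate's residues with those of $43434343^2$ directly:

```python
# Try all 100 digit pairs (a, b) and keep those matching mod 9 and mod 11.
target = 43434343**2
for a in range(10):
    for b in range(10):
        n = int(f"18865{a}{b}151841649")
        if n % 9 == target % 9 and n % 11 == target % 11:
            print(a, b, "exact match:", n == target)
```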

polynomials - $\deg(P) = 4$, then $P'$ doesn't have all zeros real if $P$ has 2 complex zeros and 2 real zeros




Let $P = a_4X^4+a_3X^3+a_2X^2+a_1X+a_0 \in \mathbb{R}[X]$ be such that $P$ has two complex zeros, $x_1 = a+bi$, $x_2 = a-bi$ with $a,b \in \mathbb{R}$, and two real zeros, $x_3 = a$ and $x_4 \in \mathbb{R}$.



Prove that $P'$ doesn't have all zeros real.




I have tried writing Viete's formulas for $P$ and the reciprocal of $P$ to tie those zeros to the zeros of $P'$, but I didn't manage to solve the problem. I also thought to assume that $P'$ has all zeros real and achieve a contradiction, but I couldn't.



Answer



As explained in Olivier Begassat's comment, you can assume $P(x)=x(x^2+1)(x-c)$ for some $c\in{\mathbb R}$. Then $P(x)=x^4-cx^3+x^2-cx$ and $P'(x)=4x^3-3cx^2+2x-c=(4x^3+2x)-c(1+3x^2)$. Since $P'$ has odd degree, it has at least one real root $\lambda$. We then have
$c=\frac{4\lambda^3+2\lambda}{1+3\lambda^2}$, and



$$
P'(x)=4x^3+2x-\Bigg(\frac{4\lambda^3+2\lambda}{1+3\lambda^2}\Bigg)(1+3x^2)=
(x-\lambda) \frac{Q(x)}{1+3\lambda^2}
$$



where




$$
Q(x)= (12\lambda^2 + 4)x^2 - 2\lambda x + (4\lambda^2 + 2)
$$



The discriminant of $Q$ is $-(192\lambda^4 + 156\lambda^2 + 32) \lt 0$, so $Q$ has no real root as wished.
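The algebra is easy to double-check symbolically; a short sympy sketch (assuming sympy is available):

```python
# Verify that (1 + 3*lam^2) * P'(x) factors as (x - lam) * Q(x),
# and recompute the discriminant of Q.
from sympy import symbols, expand, div, discriminant, factor

x, lam = symbols('x lam')
lhs = expand((4*x**3 + 2*x)*(1 + 3*lam**2) - (4*lam**3 + 2*lam)*(1 + 3*x**2))
Q, rem = div(lhs, x - lam, x)
print(rem)                          # 0: the division is exact
print(expand(Q))                    # (12*lam**2 + 4)*x**2 - 2*lam*x + 4*lam**2 + 2
print(factor(discriminant(Q, x)))   # -4*(48*lam**4 + 39*lam**2 + 8), always negative
```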


real analysis - Derivative of a piecewise defined function

I want to ask something about the definition of the derivative. If we have a function like this $ f(x) =
\begin{cases}

x^2+1 & \quad \text{if } x<0 \\
\cos x & \quad \text{if } x\ge 0\\
\end{cases}
$
and we want to compute the derivative at $x_0=0$, why is it necessary to compute it with the definition of the derivative? Would it be wrong to find the derivatives of $x^2+1$ and $\cos x$ by the usual formulas and evaluate them at $x_0=0$? Could you give me an example that can only be computed with the limit definition and not with the formulas? Thanks in advance.

Wednesday 21 August 2013

Induction proof verification



$P(n)$: in a line of $n$ people, where the first person is always a woman and the last person is always a man, show that somewhere in the line a woman is directly in front of a man.




I wanted to prove this with induction and I was a little unsure of how to express this problem in the inductive step and whether I had the right cases and if I had shown them correctly.



Base case: n=2:



P(2) holds as a woman (front of line) will be directly in front of the man at the back of line since there are only 2 people.



Inductive hypothesis:



Assume $P(k)$ holds for some integer $k \geq 2$: there is a woman directly in front of a man somewhere in the line, where the $k$th person is at the back of the line.




Inductive step:



Assuming P(k) holds, show P(k+1) holds too:



In a group of k+1 people consider the cases: (this is where I am slightly unsure)



(1) There are k males and 1 female



(2) There are k females and 1 male






If there are $k$ males and $1$ female, the $k$ males will always be behind the $1$ female at the front, by the rule in the statement. So the male second from the front is directly behind the female at the front, and there is at least one place where a female is directly in front of a male in a group of $k+1$ people.





If there are $k$ females and $1$ male, the $k$ females will always be in front of the $1$ male at the back of the line, by the rule of the statement. So the $k$th person, who is a female, is directly in front of the $(k+1)$th person, who is a male.



So in both cases a female is directly in front of a male somewhere in the line of $k+1$ people, which concludes the inductive step.




As mentioned, I'm unsure whether I've included all the cases necessary in the inductive step and whether I've explained them correctly for this proof. Could you verify and provide any pointers?


Answer



Your two cases are far from exhausting the possibilities. If $k=10$, for instance, the number of women can be anywhere from $1$ through $10$, and so can the number of men. You need to come up with an argument that covers all possibilities.



Try this: remove the man at the end of the line. Either the new line has a man at the end, in which case you can apply your induction hypothesis to it, or it has a woman at the end. How do you finish the argument in that second case?


related rates calculus question

A balloon is at a height of 20 meters, and is rising at the constant rate of 5 m/sec. A bicyclist passes beneath it, traveling in a straight line at the constant speed of 10 m/sec. How fast is the distance between the bicyclist and the balloon increasing 2 seconds later?
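As a sketch of the standard approach (assuming the bicyclist passes directly beneath the balloon at $t = 0$): with $D(t)$ the distance between them,
$$
D(t) = \sqrt{(20+5t)^2 + (10t)^2}, \qquad D'(t) = \frac{5(20+5t) + 100t}{D(t)},
$$
so at $t = 2$ the balloon is $30$ m high, the bicyclist is $20$ m away horizontally, $D = \sqrt{1300} = 10\sqrt{13}$, and
$$
D'(2) = \frac{150 + 200}{10\sqrt{13}} = \frac{35}{\sqrt{13}} \approx 9.71 \text{ m/sec}.
$$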

algebraic topology - Wedge sum of circles and Hawaiian earring



The (countably infinite) wedge sum of circles is the quotient of a countable disjoint union of circles $\amalg S_i$, with one point $x_i\in S_i$ from each circle identified to a single point, while the Hawaiian earring $H$ is the topological space defined by the union of the circles in the Euclidean plane $\mathbb{R}^2$ with center $(1/n, 0)$ and radius $1/n$ for $n = 1, 2, 3, \ldots$.



In the definition of the countably infinite wedge sum of circles, the sizes of the circles and the points to be identified are not specified. So we can take a disjoint union of circles of radius $1/n$ and identify one point from each to a common single point, apparently obtaining the Hawaiian earring.



I couldn't understand the difference between these two topological spaces. Can someone explain more precisely the difference between these two spaces?



Answer



You have two very nice answers discussing the difference between the topologies on these spaces. However, I thought I'd mention one slightly higher-level difference between them. The fundamental group of the wedge of infinitely many circles is the free group on countably many generators, one for each circle. This is a rather uncomplicated countable group. The fundamental group of the Hawaiian earring, however, is truly bizarre. In fact, it is uncountable and has many rather complicated relations in it.



When I first learned about this, I was shocked that closed subsets of the plane could have uncountable fundamental groups.



A nice paper that discusses this (and contains a good bibliography of earlier work) is "The combinatorial structure of the Hawaiian earring group" by Cannon and Conner, which appeared in Topology and its Applications, Volume 106, Issue 3, 6 October 2000, Pages 225-271.


logarithms - Using log tables for exponential solutions

I understand how to use a log table to solve something such as $\log(0.00000000453)$, where we would put $0.00000000453$ into scientific notation, $4.53 \times 10^{-9}$. Then we can use the log table to find the mantissa of the log, which is $0.6561$, and add the characteristic, $-9$, to get $0.6561+(-9)=-8.3439$, so $\log (0.00000000453)=-8.3439$.




However, if I am given $1.64^{28}$, how would I use the log table? I can use log properties and get $28 \times \log 1.64 = 28 \times 0.2148$ (value from log table). But this gives me $6.0144$ which is not $1.64^{28}=1036639.481$. How do I take my log table calculation and get back to the exponential answer?
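The missing step is the antilogarithm; as a sketch:
$$
1.64^{28} = 10^{6.0144} = 10^{0.0144} \cdot 10^{6} \approx 1.034 \times 10^{6},
$$
where $10^{0.0144} \approx 1.034$ is found by reading the log table in reverse (locate the mantissa $0.0144$ in the body of the table). The gap between $1.034 \times 10^6$ and the true value $1036639.481$ is rounding error from the four-place mantissa $0.2148$.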

calculus - Problem about continuous functions and the intermediate value theorem




Let $S^{1} := \lbrace(\cos\alpha, \sin\alpha) \in \mathbb{R}^{2} \mid \alpha \in \mathbb{R}\rbrace$ be the circle of radius $1$ and $f: S^{1} \to \mathbb{R}$ a continuous function. Prove that there exist two diametrically opposed points at which $f$ assumes the same value.




My idea for solution: Define $\varphi: S^{1} \to \mathbb{R}$ as$$\varphi(\cos\alpha, \sin\alpha) = f(\cos\alpha, \sin\alpha) - f(-\cos\alpha, -\sin\alpha).$$ If $f(\cos\alpha, \sin\alpha) = f(-\cos\alpha, -\sin\alpha)$ for all $\alpha \in \mathbb{R}$, the result follows. Otherwise, there exist $\alpha_{1},\alpha_{2}$ such that $$f(\cos\alpha_{1}, \sin\alpha_{1}) - f(-\cos\alpha_{1}, -\sin\alpha_{1}) > 0$$ and $$f(\cos\alpha_{2}, \sin\alpha_{2}) - f(-\cos\alpha_{2}, -\sin\alpha_{2}) < 0.$$ Applying the intermediate value theorem, we prove the result.




Is this a correct idea? I appreciate your corrections. Thanks!


Answer



The idea is good, but the
intermediate value theorem
applies to functions mapping a closed interval $I \subset \Bbb R$ to $\Bbb R$.



It would be possible to formulate a similar statement for functions $\phi: S^1 \to \Bbb R$, but it is simpler to consider
$$
\phi: [0, \pi] \to \Bbb R, \quad

\phi(\alpha) = f(\cos\alpha, \sin\alpha) - f(-\cos\alpha, -\sin\alpha)
$$
instead, so that the IVT can be applied directly.



The remaining argument can also be simplified.
It suffices to observe that $\phi(0) = - \phi(\pi)$, so that




  • either $\phi(0) = \phi(\pi) = 0$,

  • or $\phi(0)$ and $\phi(\pi)$ have opposite signs, and the intermediate value theorem states that there is some $\alpha \in (0, \pi)$ with $\phi(\alpha) = 0$.


probability - Expected value of a non-negative random variable

How do I prove that $\int_0^\infty \Pr(Y\geq y)\, dy = E[Y]$ if $Y$ is a non-negative random variable?
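A Monte Carlo illustration of the identity (a Python sketch; for an exponential(1) sample both sides equal $1$):

```python
# Compare the sample mean of Y with a numerical integral of the
# empirical survival function P(Y >= y).
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=100_000)

grid = np.linspace(0, 20, 2001)
survival = np.array([(y >= t).mean() for t in grid])
print(f"sample mean:            {y.mean():.4f}")
print(f"integral of P(Y >= y):  {np.trapz(survival, grid):.4f}")
```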

Tuesday 20 August 2013

calculus - Show that $\lim_{n \rightarrow \infty} \frac{n!}{2^{n}} = \infty$

Show that $ \lim_{n \rightarrow \infty} \frac{n!}{2^{n}} = \infty $



I know what happens intuitively....



$n!$ grows a lot faster than $2^{n}$ which implies that the limit goes to infinity, but that's not the focus here.




I'm asked to show this algebraically and use the definition for a limit of a sequence.



"Given an $\epsilon>0$ , how large must $n$ be in order for $\frac{n!}{2^{n}}$ to be greater than this $\epsilon$ ?"



My teacher recommends using an inequality to prove it but I'm feeling completely lost...
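One inequality that makes this work (a sketch, not from the original thread): for $n \geq 4$,
$$
\frac{n!}{2^{n}} = \frac{1\cdot 2\cdot 3}{2^{3}}\prod_{k=4}^{n}\frac{k}{2} \geq \frac{3}{4}\cdot 2^{n-3},
$$
since each factor $k/2$ with $k \geq 4$ is at least $2$. So for any bound $M>0$ (the $\epsilon$ in the teacher's phrasing), taking $n > 3 + \log_2(4M/3)$ guarantees $\frac{n!}{2^{n}} > M$.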

linear algebra - Determinant of rank-one perturbation of a diagonal matrix



Let $A$ be a rank-one perturbation of a diagonal matrix, i.e. $A = D + s^T s$, where $D = \operatorname{diag}\{\lambda_1,\ldots,\lambda_n\}$ and $s = [s_1,\ldots,s_n] \neq 0$. Is there a way to easily compute its determinant?



On the one hand, $s^Ts$ has rank one, so it has only one non-zero eigenvalue, which is equal to its trace $|s|^2 = s_1^2+\cdots+s_n^2$. On the other hand, if $D$ were a scalar operator (i.e. all the $\lambda_i$'s were equal to some $\lambda$) then all eigenvalues of $A$ would be shifts of the eigenvalues of $s^T s$ by $\lambda$. Thus one eigenvalue would equal $\lambda+|s|^2$ and the others $\lambda$, so in this case we would obtain $\det A = \lambda^{n-1} (\lambda+|s|^2)$. But is it possible to generalize these considerations to the case of diagonal non-scalar $D$?



Answer



As developed in the comments, for positive diagonal entries:



$$\det(D + s^Ts) = \prod\limits_{i=1}^n \lambda_i + \sum_{i=1}^n s_i^2 \prod\limits_{j\neq i} \lambda_j $$



Its general validity can be deduced by extension from the positive cone of $\mathbb{R}^n$ by analytic continuation. Alternatively, we can give a slightly modified argument for all nonzero diagonal entries. The determinant is a polynomial in the $\lambda_i$'s, so proving the formula for nonzero $\lambda_i$'s enables us to prove it for all $D$ by a brief continuity argument.



First assume all $\lambda_i \neq 0$, and define vector $v$ by $v_i = s_i/\lambda_i$. Similar to the OP's observations:



$$ \det(D+s^Ts) = \det(I+s^Tv)\det(D) = (1 + \sum\limits_{i=1}^n s_i^2/\lambda_i) \prod\limits_{i=1}^n \lambda_i $$




where $\det(I+s^Tv)$ is the product of $(1 + \mu_i)$ over all the eigenvalues $\mu_i$ of $s^Tv$. As the OP noted, at most one of these eigenvalues is nonzero, so the product equals $1$ plus the trace of $s^T v$, i.e. the potentially nonzero eigenvalue, and that trace is the sum of entries $s_i^2/\lambda_i$.



Distributing the product of the $\lambda_i$'s over that sum gives the result at top. If some of the $\lambda_i$'s are zero, the formula can be justified by taking a sequence of perturbed nonzero $\lambda_i$'s whose limit is the required $n$-tuple. By continuity of the polynomial the formula holds for all diagonal $D$.
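A numerical spot check of the formula (a NumPy sketch of mine):

```python
# det(D + s^T s) = prod(lambda_i) + sum_i s_i^2 * prod_{j != i} lambda_j
import numpy as np

rng = np.random.default_rng(42)
lam = rng.standard_normal(5)          # arbitrary signs, no positivity needed
s = rng.standard_normal(5)

A = np.diag(lam) + np.outer(s, s)     # s^T s as an outer product
formula = np.prod(lam) + sum(s[i]**2 * np.prod(np.delete(lam, i))
                             for i in range(5))
print(np.linalg.det(A), formula)      # agree up to rounding
```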


Circular logic in evaluation of elementary limits



Our calculus book states elementary limits like $\lim_{x\to0} \frac{\sin x}{x}=1$, or $\lim_{x\to0} \frac{\ln (1+x)}{x}=1$, $\lim_{x\to0} \frac{e^x-1}{x}=1$ without proof.



At the end of the chapter of limits, it shows that these limits can be evaluated by using series expansion (which is not in our high school calculus course).




However, series expansion of a function can only be evaluated by repeatedly differentiating it.



And to calculate the derivative of $\sin x$, one must use $\lim_{x\to0} \frac{\sin x}{x}=1$.



So this seems to end up in circular logic, and the same goes for the other such limits.



I found that $\lim_{x\to0} \frac{e^x-1}{x}=1$ can be proved using the binomial theorem.



How can one evaluate the other elementary limits without series expansions or L'Hôpital's rule?




This answer does not explain how that limit can be evaluated.


Answer



Hint to prove it yourself:



[Figure: unit-circle diagram showing triangle $ABC$, the circular sector $ABC$, and triangle $ABD$.]



Let $A_1$ be the area of triangle $ABC$, $A_2$ the area of the circular sector $ABC$, and $A_3$ the area of triangle $ABD$. Then we have:



$$A_1 < A_2 < A_3.$$

Try to find expressions for $A_1, A_2,$ and $A_3$, substitute them into the inequality, and finally use the squeeze theorem.


linear algebra - Matrices which are both unitary and Hermitian



Matrices such as



$$
\begin{bmatrix}
\cos\theta & \sin\theta \\

\sin\theta & -\cos\theta
\end{bmatrix}
\text{ or }
\begin{bmatrix}
\cos\theta & i\sin\theta \\
-i\sin\theta & -\cos\theta
\end{bmatrix}
\text{ or }
\begin{bmatrix}
\pm 1 & 0 \\

0 & \pm 1
\end{bmatrix}
$$



are both unitary and Hermitian (for $0 \le \theta \le 2\pi$). I call the latter type trivial, since its columns are plus/minus columns of the identity matrix.




Do such matrices have any significance (in theory or practice)?





In the answer to this question, it is said that "for every Hilbert space except $\mathbb{C}^2$, a unitary matrix cannot be Hermitian and vice versa." It was commented that identity matrices are always both unitary and Hermitian, and so this rule is not true. In fact, all trivial matrices (as defined above) have this property. Moreover, matrices such as



$$
\begin{bmatrix}
\sqrt {0.5} & 0 & \sqrt {0.5} \\
0 & 1 & 0 \\
\sqrt {0.5} & 0 & -\sqrt {0.5}
\end{bmatrix}
$$




are both unitary and Hermitian.
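A quick numerical confirmation for the $3\times 3$ example (a NumPy sketch of mine):

```python
# Check Hermitian-ness, unitarity, and that the eigenvalues are +-1.
import numpy as np

r = np.sqrt(0.5)
U = np.array([[r, 0.0, r],
              [0.0, 1.0, 0.0],
              [r, 0.0, -r]])

print(np.allclose(U, U.conj().T))              # True: Hermitian
print(np.allclose(U @ U.conj().T, np.eye(3)))  # True: unitary
print(np.linalg.eigvalsh(U))                   # [-1.  1.  1.]
```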



So, the general rule in the aforementioned question seems to be pointless.




It seems that, for any $n > 1$, infinitely many matrices over the Hilbert space $\mathbb{C}^n$ are simultaneously unitary and Hermitian, right?



Answer



Unitary matrices are precisely the matrices admitting a complete set of orthonormal eigenvectors such that the corresponding eigenvalues are on the unit circle. Hermitian matrices are precisely the matrices admitting a complete set of orthonormal eigenvectors such that the corresponding eigenvalues are real. So unitary Hermitian matrices are precisely the matrices admitting a complete set of orthonormal eigenvectors such that the corresponding eigenvalues are $\pm 1$.




This is a very strong condition. As George Lowther says, any such matrix $M$ has the property that $P = \frac{M+1}{2}$ admits a complete set of orthonormal eigenvectors such that the corresponding eigenvalues are $0, 1$; thus $P$ is a Hermitian idempotent, or as George Lowther says an orthogonal projection. Of course such matrices are interesting and appear naturally in mathematics, but it seems to me that in general it's more natural to start from the idempotence condition.



I suppose one could say that Hermitian unitary matrices precisely describe unitary representations of the cyclic group $C_2$, but from this perspective the fact that such matrices happen to be Hermitian is an accident coming from the fact that $2$ is too small.


real analysis - How to prove that derivatives have the Intermediate Value Property




I'm reading a book which gives this theorem without proof:




If $a$ and $b$ are any two points in an interval on which $f$ is differentiable, then $f'$
takes on every value between $f'(a)$ and $f'(b)$.




As far as I can tell, the theorem means that the fact that $f'$ is the derivative of another function $f$ on $[a, b]$ implies that $f'$ is continuous on $[a, b]$.




Is my understanding correct? Is there a name for this theorem that I can use to find a proof of it?


Answer



The result is commonly known as Darboux’s theorem, and the Wikipedia article includes a proof.


optimizing prime number algorithm



I am writing a function to return a list of the prime numbers up to $n$. One way to optimize the algorithm is the following:



"The next most obvious improvement would probably be limiting the testing process to only checking if the potential prime can be factored by those primes less than or equal to the square root of the potential prime, since primes larger than the square root of the potential prime will be complementary factors of at least one prime less than the square root of the potential prime.(taken from http://en.wikibooks.org/wiki/Efficient_Prime_Number_Generating_Algorithms)



Can someone explain this in simpler terms with an example?

Thanks


Answer



Let's say you're trying to find the primes below $150$. The statement says that you only need to test divisibility by the primes up to $\sqrt{150} \approx 12.2$, i.e. $2, 3, 5, 7, 11$. Why is that? If a number below $150$ is composite, it factors as $p \cdot q$ with $p \leq q$, and then $p \leq \sqrt{150} < 13$, so $p$ is one of $2, 3, 5, 7, 11$.

In other words, if a candidate below $150$ had no prime factor among $2, 3, 5, 7, 11$, its smallest possible factorization would involve two factors of at least $13$, and $13 \cdot 13 = 169 > 150$. So every composite number below $150$ is divisible by one of the primes up to $11$.
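A direct implementation of this idea (a sketch; the function name is illustrative):

```python
# Test each candidate only against primes found so far, stopping once
# a prime exceeds the candidate's square root.
def primes_up_to(n):
    primes = []
    for candidate in range(2, n + 1):
        is_prime = True
        for p in primes:
            if p * p > candidate:       # past sqrt(candidate): stop early
                break
            if candidate % p == 0:      # divisible by a smaller prime
                is_prime = False
                break
        if is_prime:
            primes.append(candidate)
    return primes

print(primes_up_to(150))
```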


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How do I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...