Tuesday 30 April 2013

derivative of binomial probability



I saw the following claim in some book without a proof and couldn't prove it myself.




$\dfrac{d}{dp}\mathbb{P}\left(\text{Bin}\left(n,\,p\right)\leq d\right)=-n\cdot\mathbb{P}\left(\text{Bin}\left(n-1,\,p\right)=d\right)$



So far I got:



$\begin{array}{l}
\dfrac{d}{dp}\mathbb{P}\left(\text{Bin}\left(n,\,p\right)\leq d\right)=\\
\dfrac{d}{dp}\sum\limits _{i=0}^{d}\left(\begin{array}{c}
n\\
i
\end{array}\right)p^{i}\left(1-p\right)^{n-i}=\\
-n\cdot\left(1-p\right)^{n-1}+\sum\limits _{i=1}^{d}\left(\begin{array}{c}
n\\
i
\end{array}\right)\left[ip^{i-1}\left(1-p\right)^{n-i}-p^{i}\left(n-i\right)\left(1-p\right)^{n-i-1}\right]
\end{array}$



But I am not very good at playing with binomial coefficients and don't know how to proceed.


Answer



Consider the derivative of the logarithm:
$$ \frac{d}{dp} \left[\log \Pr[X = x \mid p]\right]
= \frac{d}{dp}\left[x \log p + (n-x) \log (1-p)\right]
= \frac{x}{p} - \frac{n-x}{1-p},
$$
hence $$\frac{d}{dp}\left[\Pr[X = x \mid p]\right]
= \binom{n}{x} p^x (1-p)^{n-x} \left(\frac{x}{p} - \frac{n-x}{1-p}\right) $$ and $$\begin{align*}
\frac{d}{dp}\left[\Pr[X \le x \mid p] \right] &= \sum_{k=0}^x \binom{n}{k} p^k (1-p)^{n-k} \left(\frac{k}{p} - \frac{n-k}{1-p}\right) \\
&= \sum_{k=0}^x \binom{n}{k} k p^{k-1} (1-p)^{n-k} - \binom{n}{k} (n-k) p^k (1-p)^{n-1-k}.
\end{align*}$$
But observe that $$\binom{n}{k}(n-k) = \frac{n!}{k!(n-k-1)!} = \frac{(k+1) n!}{(k+1)!(n-(k+1))!} = (k+1)\binom{n}{k+1},$$ hence the second term can be written $$(k+1) \binom{n}{k+1} p^{(k+1)-1} (1-p)^{n-(k+1)},$$ which is the same as the first term except the index of summation has been shifted by $1$. Therefore, the sum is telescoping, leaving $$\frac{d}{dp}\left[\Pr[X \le x \mid p]\right] = 0 - \binom{n}{x} (n-x) p^x (1-p)^{n-1-x}.$$ All that remains is to observe $$\binom{n}{x}(n-x) = \frac{n!}{x!(n-x-1)!} = \frac{n(n-1)!}{x!(n-1-x)!} = n \binom{n-1}{x},$$ therefore $$\frac{d}{dp}\left[\Pr[X \le x \mid p] \right] = -n \Pr[X^* = x \mid p],$$ where $X^* \sim \operatorname{Binomial}(n-1,p)$, as claimed.
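As a quick numerical sanity check of the identity (my addition, assuming SciPy is available), a central finite difference of the CDF in $p$ matches $-n\,\Pr[X^*=d]$:

```python
# Finite-difference check of d/dp P(Bin(n,p) <= d) = -n * P(Bin(n-1,p) = d).
from scipy.stats import binom

n, d, p, h = 10, 3, 0.37, 1e-6
lhs = (binom.cdf(d, n, p + h) - binom.cdf(d, n, p - h)) / (2 * h)
rhs = -n * binom.pmf(d, n - 1, p)
print(lhs, rhs)  # the two values agree, up to finite-difference error
```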


abstract algebra - How to prove by the Well Ordering Principle that the equation $4a^3+2b^3=c^3$ has no solution over $\mathbb{Z}^+$?



I've already tried to define a set of positive integers that contains all such numbers $c$, i.e. $D= \{c : c^3 = 4a^3 + 2b^3,\ a, b \in \mathbb{Z}^+ \}$. Assuming, for the sake of contradiction, that $D$ is a nonempty set of nonnegative integers, the WOP says it must have a minimum element, which I called $m$. But I can't find a way to prove that no positive integer raised to the third power can be represented by an expression like the given one with $a,b$ also being positive integers. I need help with this one. Thanks beforehand.


Answer




Let $(a,b,c)$ be a solution with $c$ the smallest possible. Then $c^3$ is even, hence $c$ itself is even. Put $c=2c_0$; then $2a^3+b^3=4c_0^3$. This time, we see that $b$ must be even. Put $b=2b_0$; then $a^3+4b_0^3=2c_0^3$, thus $a=2a_0$ must be even, and $4a_0^3+2b_0^3=c_0^3$, i.e. $(a_0,b_0,c_0)$ is another solution. But $c_0=c/2<c$, contradicting the minimality of $c$.
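For readers who want to see the emptiness of the solution set concretely, here is a brute-force search over a small box (my addition; it finds nothing, consistent with the descent argument):

```python
# Search for solutions of 4a^3 + 2b^3 = c^3 with 1 <= a, b <= 60.
cubes = {c**3: c for c in range(1, 200)}  # 199^3 exceeds every 4a^3 + 2b^3 below
hits = [(a, b, cubes[4*a**3 + 2*b**3])
        for a in range(1, 61) for b in range(1, 61)
        if 4*a**3 + 2*b**3 in cubes]
print(hits)  # [] -- no solutions in this range
```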


abstract algebra - How to prove the uniqueness



I'm trying to solve this question from Fulton's algebraic curves:





I've already easily solved (a) and the existence part of (b). I'm having problems to prove the uniqueness of part (b).




I need help.



Thanks in advance.


Answer



If $a\in R$ is invertible and $b\in R$ are such that $a+bt=0$, then $a=b=0$: if $b\ne0$, then $a\in\mathfrak m$, a contradiction.


elementary number theory - Find the prime-power decomposition of 999999999999

I'm working on an elementary number theory book for fun and I have come across the following problem:



Find the prime-power decomposition of 999,999,999,999 (Note that $101 \mid 1000001$.).



Other than just mindlessly guessing primes that divide it, how should I go about finding the solution? I am curious as to how this hint about 101 dividing 1000001 helps. There is also a factor table for integers less than 10,000 in the back of the book, so really the objective is to get 999,999,999,999 down to a product of numbers with less than 5 digits, then I can just use the table.



Thank you!

analysis - Clarification of the proof that if $f$ is continuous on $[a,b]$ then $f$ is uniformly continuous on $[a,b]$



Claim: If $f$ is continuous on $[a,b]$ then $f$ is uniformly continuous on $[a,b]$.



Proof: Suppose that $f$ is not uniformly continuous on $[a,b]$, then there exists $\epsilon > 0 $ such that for each $\delta > 0$ there must exist $x,y \in [a,b]$ such that $|x-y| < \delta$ and $|f(x)-f(y)| \geq \epsilon$ .

Thus for each $n \in \Bbb N$ there exist $x_n,y_n$ such that $|x_n-y_n| < \frac {1} {n}$ and $|f(x_n)-f(y_n)| \geq \epsilon$. By B-W, $x_n$ has a subsequence $x_{n_k}$ with limit $x_0$ that belongs to $[a,b]$.

Now here is where I am confused: the author next states that "Clearly we also have $x_0$ is the limit of the sub-sequence $y_{n_k}$". Are $x_n$ and $y_n$ not potentially different sequences? Why would the same indices chosen to form a sub-sequence give the same limit?



Answer



For every $c>0$, there exists $k_0$ such that $k>k_0$ implies $|x_0-x_{n_k}|<\frac{c}{2}$. There also exists $k'_0>k_0$ such that $\frac{1}{n_{k'_0}}<\frac{c}{2}$. Then for every $k>k'_0$, $$|x_0-y_{n_k}|\leq |x_0-x_{n_k}|+|x_{n_k}-y_{n_k}|\leq |x_0-x_{n_k}|+{1\over n_k}<c,$$ so $y_{n_k}\to x_0$ as well.

Monday 29 April 2013

probability - Coupon Collector's Problem with X amount of coupons already collected.




I am having an issue with understanding how to calculate a specific case of the Coupon Collector's Problem. Say I have a set of 198 coupons. I learned how to find the expected number of draws to see all 198 coupons, using the following equation:



$$n \sum_{k=1}^n\frac1k$$



It turns out that for $n = 198$, the expected number of draws is approximately 1162. Let's assume, however, that I already have some of the coupons, say 50. How should I go about solving the same problem, given that I've already collected $X$ of them?


Answer



Based on the corresponding article on Wikipedia: the expected time to draw all $n$ coupons equals




$$E[T] = E[t_1] + E[t_2] + \ldots + E[t_n]$$



with $t_i$ the time needed to collect the $i^{th}$ coupon once $i-1$ coupons have been drawn. Once $i-1$ distinct coupons have been drawn, there are $n-i+1$ unseen coupons left. The probability $p_i$ of selecting a new coupon thus equals $\frac{n-i+1}{n}$, and the expected number of draws needed to draw a new coupon equals $\frac{1}{p_i} = \frac{n}{n-i+1}$. As such, the expected value for the time needed to draw all $n$ coupons can be calculated as:



$$E[T] = \frac{n}{n} + \frac{n}{n-1} + \ldots + \frac{n}{1} = n \sum_{k=1}^{n}{\frac{1}{k}}$$



In this case, however, we have already drawn $X$ unique coupons. As such, the estimated number of draws needed to find all $n$ coupons equals:



$$E[T] = E[t_{X+1}] + E[t_{X+2}] + \ldots + E[t_n] = n \sum_{k=1}^{n-X} \frac{1}{k}$$
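A sketch of this formula in code (my addition; the function name is made up), with the numbers from the question:

```python
# Expected draws to finish collecting n coupons when x are already held:
# E[T] = n * sum_{k=1}^{n-x} 1/k.
from fractions import Fraction

def expected_remaining_draws(n, x):
    return float(n * sum(Fraction(1, k) for k in range(1, n - x + 1)))

print(expected_remaining_draws(198, 0))   # ~1161.9, the ~1162 from the question
print(expected_remaining_draws(198, 50))  # ~1104.4 once 50 coupons are held
```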


algebra precalculus - What is the plot of $f(x) = 1 + \sqrt{\log_{10}\cos(2\pi x)}$



I'm inspecting the function in the title; I tried to plot it and then compare my result with graphing tools, which led me to confusion.




Is it true that the plot of $f(x) = 1 + \sqrt{\log_{10}\cos(2\pi x)}$ is just the set of points $(x, y) = (n, 1),\; n \in \mathbb Z$?




Inspecting the domain one may see that
$$
\log_{10}\cos(2\pi x) \ge 0 \iff \cos(2\pi x) \ge 1
$$



But $\cos x \in [-1, 1]$, therefore $\cos(2\pi x)$ must be equal to $1$ in order to satisfy the above, and that is only possible for $x \in \mathbb Z$.



The reason I'm asking is that neither Desmos nor W|A plots it the way I expected.


Answer



You are right: the domain of $f$ is the set $ \mathbb Z$ and the graph of $f$ is given by



$ \{(x,f(x)): x \in \mathbb Z\}=\{(n,1): n \in \mathbb Z\}$.



real analysis - What is the most rigorous proof of the irrationality of the square root of 3?



I am currently trying to self-study Stephen Abbott's Understanding Analysis. The first exercise asks to prove the irrationality of $\sqrt{3}$, and I understand the general idea of the contradiction by finding that the relatively prime integers $p$ and $q$ have a common factor. However, I am stuck on the idea that if $p^2$ is divisible by $3$, then $p$ is divisible by $3$. Abbott's solution assumes this, but I have also seen proofs that analyze the situations where $a$ and $b$ are even or odd (such as NASA's). Even or odd really is just saying multiple of $2$, which confuses me as to why the even/odd method (which is much less concise) would be used.



Sorry for the block of rambling text, I just want to start writing proofs the right way. I guess my real questions are:



If $p^2$ is divisible by a prime number, is $p$ also divisible by that prime number? Can this just be assumed, or is there a theorem I have to mention in the proof?
Why do some proofs analyze the even/odd situations of $a$ and $b$? Are they more rigorous, and if they are not, why are they used, considering their added length and complexity? Finally, am I simply overthinking the idea of being rigorous and missing the big picture?



Answer



Personally, I prefer to prove these results by contrapositive. If $p,q$ are coprime positive integers with $q>1$ then
$$
p^{k}/q^{k}
$$
is not an integer for any $k>0.$ This immediately implies the irrationality of all roots of $3.$ In fact, it proves that any root of an integer that is not an integer is irrational.



The key, as everyone has already said, is that $p,q$ coprime implies that $p^{k}$ and $q^k$ are also coprime, and so $p^k/q^k$ is not an integer.



This is implied by that if a prime $m$ divides $ab$ then it divides $a$ or $b.$ So if it divides $p^k$ it divides $p$ and if it divides $q^k$ it divides $q.$ So no prime will divide both of them unless it divides $p$ and $q$ which we ruled out.




How do we prove the result that $m$ must divide $a$ or $b$? Either it divides $a$ and we are done or $m$ and $a$ are co-prime since $m$ is prime. We then have that there exist $\alpha $ and $\beta$ such that
$$
\alpha m + \beta a =1.
$$
(this follows from the Euclidean algorithm.)
So
$$
b\alpha m + \beta ab =b
$$

Since $m$ divides $ab$ it divides the LHS, so it divides the RHS too.



(See my book "proof patterns" for more discussion.)


summation - How does the sum of the series “$1 + 2 + 3 + 4 + 5 + 6\ldots$” to infinity = “$-1/12$”?

(I was requested to edit the question to explain why it is different that a proposed duplicate question. This seems counterproductive to do here, inside the question it self, but that is what I have been asked by the site and moderators. There is no way for me to vote against their votes. So, here I go: Please stop voting this as a duplicate so quickly, which will eventually lead to this question being closed off. Yes, the other question linked to asks the same math, but any newcomer to the problem who was exposed to it via physics, as I was, will prefer this question instead of the one that is purely mathematically. I beg the moderators to not be pedantic on this one. This question spills into physics, which is why I did the cross post to the physics forum as well.)



How does the sum of the series “1 + 2 + 3 + 4 + 5 + 6…” to infinity = “-1/12”, in the context of physics?



I heard Lawrence Krauss say this once during a debate with Hamza Tzortzis (http://youtu.be/uSwJuOPG4FI). I found a transcript of another debate between Krauss and William Lane Craig which has the same sum. Here is the paragraph in full:




Let’s go to some of the things Dr. Craig talked about. In fact, the

existence of infinity, which he talked about which is
self-contradictory, is not self-contradictory at all. Mathematicians
know precisely how to deal with infinity; so do physicists. We rely on
infinities. In fact, there’s a field of mathematics called “Complex
Variables” which is the basis of much of modern physics, from
electro-magnetism to quantum mechanics and beyond, where in fact we
learn to deal with infinity; without the infinities we couldn’t do the
physics. We know how to sum infinite series because we can do complex
analysis. Mathematicians have taught us how. It’s strange and very
unappetizing, and in fact you can sum things that look ridiculous. For

example, if you sum the series, “1 + 2 + 3 + 4 + 5 + 6…” to infinity,
what’s the answer? “-1/12.” You don’t like it? Too bad! The
mathematics is consistent if we assign that. The world is the way it
is whether we like it or not.




-- Lawrence Krauss, debating William Lane Craig, March 30, 2011



Source: http://www.reasonablefaith.org/the-craig-krauss-debate-at-north-carolina-state-university




CROSS POST: I'm not sure if I should post this in mathematics or physics, so I posted it in both. Cross post: https://physics.stackexchange.com/questions/92739/how-does-the-sum-of-the-series-1-2-3-4-5-6-to-infinity-1-12



EDIT: I did not mean to begin a debate on why Krauss said this. I only wished to understand this interesting math. He was likely trying to showcase Craig's lack of understanding of mathematics or logic or physics or something. Whatever his purpose can be determined from the context of the full script that I linked to above. Anyone who is interested, please do. Please do not judge him out of context. Since I have watched one of these debates, I understand the context and do not hold the lack of a full breakdown as being ignorant. Keep in mind the debate I heard this in was different from the debate above.

Sunday 28 April 2013

real analysis - Evaluating the nested radical $\sqrt{1 + 2 \sqrt{1 + 3 \sqrt{1 + \cdots}}}$.



How does one prove the following limit?
$$
\lim_{n \to \infty}
\sqrt{1 + 2 \sqrt{1 + 3 \sqrt{1 + \cdots \sqrt{1 + (n - 1) \sqrt{1 + n}}}}}
= 3.
$$



Answer



This is the special case $\rm\ x,\:n,\:a = 2,\:1,\:0\ $ in Ramanujan's second notebook, chapter XII, entry 4:



$$\rm x + n + a\ =\ \sqrt{ax + (n+a)^2 + x\sqrt{a(x+n) + (n+a)^2 + (x+n) \sqrt{\cdots}}} $$



Below is Ramanujan's solution of the given special case - which was submitted to a journal in April 1911. Note that his solution is incomplete (exercise: why?). For further discussion see this 1935 Monthly article, Herschfeld: On infinite radicals. It also appeared as Problem A6 on the 27th Putnam competition, 1966. Vijayaraghavan proved that a sufficient criterion for the convergence of the following sequence $\ \sqrt{a_1 + \sqrt{a_2 +\:\cdots\: +\sqrt{a_n}}}\ \ $ is that $\rm\displaystyle\ \ {\overline \lim}_{n\to\infty}\frac{\log{a_n}}{2^n}\ < \infty\:.\ $



[image: Ramanujan's solution]
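A numerical sketch (my addition, not part of Ramanujan's argument) that evaluates the finite nesting from the inside out shows how fast the limit $3$ is approached:

```python
# Evaluate sqrt(1 + 2 sqrt(1 + 3 sqrt(1 + ... + (n-1) sqrt(1 + n)))).
import math

def nested(n):
    v = math.sqrt(1 + n)           # innermost radical
    for k in range(n - 1, 1, -1):  # peel outward: k = n-1, ..., 2
        v = math.sqrt(1 + k * v)
    return v

for n in (5, 10, 20, 40):
    print(n, nested(n))  # approaches 3 rapidly
```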


limits - Find $\lim\limits_{n\to+\infty}\frac{\sqrt[n]{n!}}{n}$




I tried using Stirling's approximation and d'Alembert's ratio test but can't get the limit. Could someone show how to evaluate this limit?


Answer




Use equivalents:
$$\frac{\sqrt[n]{n!}}n\sim_{\infty}\frac{\bigl(\sqrt{2\pi n}\bigr)^{\tfrac 1n}}{n}\cdot\frac n{\mathrm{e}}=\frac 1{\mathrm{e}}\bigl({2\pi n}\bigr)^{\tfrac 1{2n}}$$
Now $\;\ln\bigl({2\pi n}\bigr)^{\tfrac 1{2n}}=\dfrac{\ln\pi+\ln 2n}{2n}\xrightarrow[n\to\infty]{}0$, hence
$$\frac{\sqrt[n]{n!}}n\sim_{\infty}\frac 1{\mathrm{e}}. $$
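Checking the answer numerically (my addition; `lgamma` avoids overflowing $n!$):

```python
# (n!)^(1/n) / n should tend to 1/e ~ 0.3678794...
import math

for n in (10, 100, 1000, 10_000):
    print(n, math.exp(math.lgamma(n + 1) / n) / n)
print(1 / math.e)
```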


real analysis - Prove $\lim_{n\rightarrow\infty}x^{n}=0$



I would like to know if my proof is valid, because I did it differently from the solution in my textbook (which uses Bernoulli's inequality).



If $|x|<1$, then $\lim_{n\rightarrow\infty}x^{n}=0$.




Proof



For $x=0$ it is trivial, so we suppose that $0<|x|<1$. Let $N>\dfrac{\log(x\varepsilon)}{\log(x)}$, then
\begin{align*}
|x|^{n}<\varepsilon\quad\text{for all }n\geq N,
\end{align*}
which implies that $\lim_{n\rightarrow\infty}x^{n}=0$ when $|x|<1$.


Answer



Suppose $0<x<1$, and write

$$x=\frac1r\;,\qquad 1<r\;,$$

so that $x^n=\dfrac1{r^n}$; it then suffices to show that $r^n\to\infty$.

Hint for the last part: assume it is false and use the archimedean property of the reals...


real analysis - Find a bijection between 2 intervals



So I'm supposed to find a bijection between $[0,1) \longrightarrow [0,1]$. My attempt of solution is the function defined as




$$f(x):=\begin{cases}
2x, & \text{if} \,\, x=\dfrac{1}{2^n} \text{ for some } n\in\mathbb{N} \\
x, &\text{otherwise}
\end{cases}$$



Using this we have that if $x=\frac{1}{2}$, then $f(1/2)=1$ and so we got that covered. And for $x=\frac{1}{4}$ we have $f(1/4)=1/2$ and so on. Is this correct? Is it possible to assume that there is a bijection, when $n$ goes to infinity?



I also got another question: define a bijection from $[0,1) \longrightarrow [0,1) \times [0,1) \times \ \dots \ \times [0,1)$
$(n-$times$)$. My solution is just to define the value of $f(x)$ as a vector, i.e take $x \in [0,1)$ and $$f(x):=(x,x,x,\dots , x)$$ Is this correct?




Thank you in advance!


Answer



For the first function you either compute the inverse and show that it is both a right and a left inverse, or you show that it is injective and surjective (that is, take two elements and show that they are mapped to different images, and show that every element of $[0,1]$ has a pre-image).



The function is correct. But why do you say "assume there is a bijection"? What do you mean by "$n$ goes to infinity"? You should take care that $f$ is well-defined, which it is, since "$x= \frac{1}{2^n}$ for some $n$" is either true or false.



For the second function: is $(0,\frac{1}{2}, \cdots, 0)$ in the image? You should also say $n$-tuple instead of vector, since the codomain is not equipped with a vector space structure.


real analysis - Simple bijection: help please



I am trying to show that if two sets $A,B$ have $n$ and $m$ distinct elements respectively then $A \times B$ has $nm$ elements. I assumed that there are bijections $f:\{1, ...,n\} \to A, k \mapsto a_k$ and $g: \{1,...,m\} \to B, k \mapsto b_k$,
which I wanted to use to define a bijection $F: A \times B \to \{1,...,mn\}$, but no matter how I think about it, I can't construct an $F$ that is both surjective and injective. My try, $F$ mapping $(a_k, b_j)$ to $kj$, fails.



Please help. Is it possible to define $F$ that it is bijective?


Answer



Recall that $mn = \underbrace{n+n+\dots+n}_{m ~\text{times}}$.



So you can count from $1$ to $mn$ like this (I paired up every $n$ numbers): $$(1,\dots,n),(n+1,\dots,2n),\dots,((m-1)n+1,\dots,nm)$$




Can you now see your bijection?
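One explicit pairing that realizes the hint (my own choice, not spelled out in the answer) is $F(a_k, b_j) = (j-1)n + k$; a quick check for small $n, m$:

```python
# F(a_k, b_j) = (j-1)*n + k maps {1..n} x {1..m} bijectively onto {1..mn}.
n, m = 4, 3
images = sorted((j - 1) * n + k for j in range(1, m + 1) for k in range(1, n + 1))
print(images == list(range(1, m * n + 1)))  # True
```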


calculus - Infinite sum convergence test.



I have to tell if the sum $$\sum_{n=1}^{\infty} (-1)^{n} \frac{1}{n + (-1)^{n+1}}$$
Converges or not. Can I use a comparison test here? To assume that my sum is bigger than $\sum_{n=1}^{\infty} (-1)^{n} \frac{1}{n + 1}$? I don't think that's OK, but that's the only way I can think of right now.


Answer



How about this? Write out the series:
$$-1(\frac{1}{1 + 1}) + 1(\frac{1}{2-1})-1(\frac{1}{3 + 1})+1(\frac{1}{4-1})-1(\frac{1}{5+1})....$$
$$=\frac{-1}{2} + 1 -\frac{1}{4}+\frac{1}{3}-\frac{1}{6}+\frac{1}{5}-\frac{1}{8}....$$

Then reorder terms...
$$1 - \frac{1}{2} + \frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}...$$
And it was the alternating harmonic series all along! Which has been proven to equal
$$\sum_{n=1}^{\infty}(-1)^{n+1}\frac{1}{n}=\ln(2)$$


Product of Power Series of Different Powers




I am trying to find the product $M$ of two power series of the form



\begin{equation}
M=\left(\sum_{n=0}^{\infty}a_{n}\, x^{2n}\right)
\left(\sum_{n=0}^{\infty}b_{n}\, x^{n}\right)
\end{equation}



where, $a_{n}=\frac{(-ag^{2})^{n}}{n!}$, and $b_{n}=\frac{(2ag)^{n}}{n!}$.



The product of the two series could be found with the standard formula (discrete convolution) if both series contained powers of $x^{n}$. I have tried to find a way to calculate the product but am not making progress. One potential issue is that $a_{n}$ is alternating and would become imaginary if the square root is taken. How can I calculate this product?




P.S- I suspect the final answer will be an infinite sum over confluent hypergeometric functions.



Additional Information



I am working on an integral of the form



\begin{equation}
\int_{0}^{\infty} x\, e^{-a(gx-b)^{2}}\, e^{-\mu x}\, {_{1}}F_{1}[-\alpha,-\beta,\lambda x] \ dx
\end{equation}




If I keep my limits of integration and write the exponential as a power series I can solve the integral. There is no way I can find to solve the integral if I substitute $u=x-b$. I tried tackling this by writing the exponential in question as:



\begin{equation}
\begin{aligned}
e^{-a(gx-b)^{2}} &= \sum_{n=0}^{\infty}\frac{(-a)^{n}(gx-b)^{2n}}{n!}\\
&= \sum_{n=0}^{\infty}\frac{(-a)^{n}}{n!}\sum_{k=0}^{2n}\binom{2n}{k}(-b)^{2n-k}(gx)^{k}
\end{aligned}
\end{equation}




Switching the order of summation allows for a solution as a single sum:



\begin{equation}
e^{-a(gx-b)^{2}} =\sum_{k=0}^{\infty}\,
\frac{(-a)^{k/2}(-g)^{k}}{\frac{k}{2}!}\,{_{1}}F_{1}\left(\frac{k+1}{2};\frac{1}{2},-ab^{2}\right)\, x^{k}
\end{equation}



This sum has imaginary terms for odd $k$ and is not particularly useful for my purposes.


Answer



We can use the standard formula with a slight variation:





We obtain
\begin{align*}
\left(\sum_{k=0}^\infty a_kx^{2k}\right)\left(\sum_{l=0}^\infty b_lx^l\right)
&=\sum_{n=0}^\infty\left(\sum_{{2k+l=n}\atop{k,l\geq 0}}a_kb_l\right)x^n\tag{1}\\
&=\sum_{n=0}^\infty\left(\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}a_kb_{n-2k}\right)x^n\tag{2}
\end{align*}





Comment:




  • In (1) the condition for the inner sum is $2k+l=n$ to respect the even powers $x^{2k}$ and all powers $x^l$.


  • In (2) we use the floor function to set the upper limit of the inner sum and use $l=n-2k$.
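A small sketch (my addition) testing formula (2) against brute-force truncated multiplication, with made-up parameter values $a=0.5$, $g=1.3$:

```python
from math import factorial

a, g, N = 0.5, 1.3, 12  # N = truncation order; the values are illustrative only
A = [(-a * g**2) ** n / factorial(n) for n in range(N + 1)]  # a_n
B = [(2 * a * g) ** n / factorial(n) for n in range(N + 1)]  # b_n

# Coefficients via (2): c_n = sum_{k=0}^{floor(n/2)} a_k b_{n-2k}.
c = [sum(A[k] * B[n - 2 * k] for k in range(n // 2 + 1)) for n in range(N + 1)]

# Brute force: accumulate a_k * b_l into the coefficient of x^(2k+l).
d = [0.0] * (N + 1)
for k in range(N // 2 + 1):
    for l in range(N + 1 - 2 * k):
        d[2 * k + l] += A[k] * B[l]

print(max(abs(u - v) for u, v in zip(c, d)))  # ~1e-16
```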



Saturday 27 April 2013

real analysis - When can the order of limit and integral be exchanged?




  1. I was wondering, for a real-valued function of two real variables, whether there are some theorems/conclusions that can be used to decide the exchangeability of the order of
    taking the limit with respect to one variable and taking the integral (Riemann integral,
    or even more generally Lebesgue integral) with respect to the other variable,
    like $$\lim_{y\rightarrow a} \int_A f(x,y) \, dx = \int_A \lim_{y\rightarrow a} f(x,y) \,dx \text{ ?}$$

  2. If $y$ approaches $a$ as a countable sequence $\{y_n, n\in
    \mathbb{N}\}$, is the order exchangeable when $f(x,y_n), n \in \mathbb{N}$ is uniformly convergent in some subset for $x$ and $y$?

  3. How shall one tell if the limit and integral can be exchanged in the following examples? If not, how would you compute the values of the integrals:




    • $$\lim_{y\rightarrow 3} \int_1^2 x^y \, dx$$

    • $$ \lim_{y\rightarrow \infty} \int_1^2 \frac{e^{-xy}}{x} \, dx$$





Thanks and regards!


Answer



The most useful results are the Lebesgue dominated convergence and monotone convergence theorems.
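To see this at work on the examples from the question (my numerical sketch, assuming SciPy): on $[1,2]$, $x^y$ is dominated by the integrable $x^4$ for $y$ near $3$, and $e^{-xy}/x$ is dominated by $e^{-x}/x$ for $y\ge 1$, so in both cases the limit passes inside the integral, giving $\int_1^2 x^3\,dx = 15/4$ and $0$ respectively.

```python
import math
from scipy.integrate import quad

for y in (2.9, 2.99, 3.0):
    print(y, quad(lambda x: x**y, 1, 2)[0])                  # -> 15/4 = 3.75

for y in (1, 10, 100):
    print(y, quad(lambda x: math.exp(-x * y) / x, 1, 2)[0])  # -> 0
```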


real analysis - Let $f:(\mathbb{R}\setminus\mathbb{Q})\cap [0,1]\to \mathbb{Q}\cap [0,1]$. Prove there exists a continuous $f$.



I'm working on the following problem from N.L. Carother's Real Analysis:





Let $I=(\mathbb{R}\setminus\mathbb{Q})\cap [0,1]$ with its usual metric. Prove that there is a continuous function $g$ mapping $I$ onto $\mathbb{Q}\cap[0,1]$.




My thoughts:



I feel the preimage of open sets definition of continuity will be the easiest way to prove this. If I could show $V\subset \mathbb{Q}\cap [0,1]$ is open for all open sets $V$, and I could show that $f^{-1}(V)$ is open as well, then that would mean $f$ is continuous. I've considered trying to prove that $(\mathbb{Q}\cap [0,1])^c$ is closed, but that doesn't seem much easier. I know $\mathbb{Q}$ is dense in $\mathbb{R}$, and so maybe I can use that to say that $B_{\epsilon}(x)\setminus\{x\}\cap(\mathbb{Q}\cap[0,1])\neq\emptyset$, which would mean every $x\in\mathbb{Q}\cap[0,1]$ is a limit point of $\mathbb{Q}\cap[0,1]$, but I still don't see how this could be helpful.



Any hints on how to proceed would be appreciated. Thanks.



Answer



Hint: let $a_1=0, a_2=1/2, a_3=2/3, \ldots, a_i = 1-1/i$ and define $f$ to be constant on $(\mathbb R \setminus \mathbb Q) \cap [a_i, a_{i+1}]$.


summation - Induction proof concerning a sum of binomial coefficients: $\sum_{j=m}^n\binom{j}{m}=\binom{n+1}{m+1}$

I'm looking for a proof of this identity, but with the sum starting at $j=m$ rather than $j=0$:




http://www.proofwiki.org/wiki/Sum_of_Binomial_Coefficients_over_Upper_Index



$$\sum_{j=m}^n\binom{j}{m}=\binom{n+1}{m+1}$$

limits - How to evaluate $\lim_{x \to 0} (\ln(1 - \sin x) + x)/x^2$ without using l'Hôpital?

How to evaluate
$$\lim_{x \to 0} \frac{\ln(1 - \sin x) + x} {x^2}$$

without using l'Hôpital?
I am not able to substitute the right infinitesimal. Is there a substitute?



Background




  1. We have yet not done Taylor expansions.

  2. I know that $\ln$ around 1 tends to 0 and $\sin$ around 0 tends to 0.

elementary number theory - Proving that $\gcd(ac,bc)=|c|\gcd(a,b)$

Let $a$, $b$ be elements of $\mathbb{Z}$ with $a$ and $b$ not both zero, and let $c$ be a nonzero integer. Prove that $$(ca,cb) = |c|(a,b)$$

calculus - How to evaluate this indefinite integral $\int\frac{\cos(x)}{1+\mathrm{e}^x}\mathrm{d}x$

One of my students asked me to help her evaluate the indefinite integral $$\int\dfrac{\cos x}{1+e^x}\mathrm{d}x,$$
and I tried for several minutes but finally had to give up, as I think it is very possible that the primitive of $\dfrac{\cos x }{1+e^x}$ cannot be expressed in terms of elementary functions. I then resorted to Maple and Mathematica, but these two computer algebra systems could not give me an answer, which makes it fairly certain that the indefinite integral is irreducible. But since I am not very familiar with differential Galois theory, I do not know how to explain this to my student. Can anyone help me?

calculus - How does one prove $\int_0^\infty \prod_{k=1}^\infty \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \mathrm{d} t = 2 \pi$



Looking into the distribution of a Fabius random variable:
$$
X := \sum_{k=1}^\infty 2^{-k} u_k
$$
where $u_k$ are i.i.d. uniform variables on a unit interval, I encountered the following expression for its probability density:

$$
f_X(x) = \frac{1}{\pi} \int_0^\infty \left( \prod_{k=1}^\infty \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \right) \cos \left( t \left( x- \frac{1}{2} \right) \right) \mathrm{d} t
$$
It seems, numerically, that $f\left(\frac{1}{2} \right) = 2$, but my several attempts to prove this were not successful.



Any ideas how to approach this are much appreciated.


Answer



From Theorem 1 (equation (19) on page 5) of Surprising Sinc Sums and Integrals, we have
$$\frac{1}{\pi} \int_0^\infty \left( \prod_{k=1}^N \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \right) \mathrm{d} t=2$$
for all $N<\infty$. I suppose you can justify letting

$N\to \infty$ to get your result.






One of the surprises in that paper concerns a similar integral
$$ \int_0^\infty \left( \prod_{k=0}^N \operatorname{\rm sinc}\left( \frac{t}{2k+1} \right) \right) \mathrm{d} t.$$ This turns out to be equal to $\pi/2$ when $0\leq N\leq 6$, but is slightly less than $\pi/2$ when $N=7$.


Friday 26 April 2013

calculus - To show that $f(x) = |\cos x| + |\sin x|$ is not one-one and onto and not differentiable




Let $f : \mathbb{R} \longrightarrow [0,2]$ be defined by $f (x) = | \cos x | + |\sin x |$. I need to show that $f$ is not one-one and not onto. I have only the intuitive idea that, since $\cos x$ is an even function, the images of $x$ and $-x$ are the same, so $f$ is not one-to-one; but how do I properly check the other claims? Hints? Thanks


Answer



(i) $f(0) = f(2\pi) = f(4\pi) = \cdots \Rightarrow$ not one to one.



(ii) Can we find $x \in \mathbb{R}$ so that $f(x) = 0 \in \operatorname{codom}f$, i.e. does $\left| \cos x \right| + \left| \sin x \right| =0$ have any real solutions?



(iii) Does $ f'(0) =\displaystyle \lim_{x \to 0} \dfrac{|\cos x| + |\sin x| - |\cos 0|-|\sin 0|}{x-0}=\lim_{x \to 0} \dfrac{|\cos x| + |\sin x| - 1}{x}$ exist?



Edit:




We have



$$
\lim_{x \to 0^-} \dfrac{|\cos x| + |\sin x| - 1}{x}
= \lim_{x \to 0^-} \dfrac{\cos x - \sin x - 1}{x}
\stackrel{\mathcal{L}}{=} \lim_{x \to 0^-} -\sin x - \cos\ x
= -1.
$$




But,



$$
\lim_{x \to 0^+} \dfrac{|\cos x| + |\sin x| - 1}{x}
= \lim_{x \to 0^+} \dfrac{\cos x + \sin x - 1}{x}
\stackrel{\mathcal{L}}{=} \lim_{x \to 0^+} -\sin x + \cos\ x
= 1.
$$



Therefore,




$$
\lim_{x \to 0} \dfrac{|\cos x| + |\sin x| - |\cos 0|-|\sin 0|}{x-0}
$$



does not exist. Hence, $f$ is not differentiable on its entire domain.


calculus - How can I solve these trigonometric limits without using L'Hopital's Rule?



[image: problems with corresponding solutions]




I don't understand why #1 is "does not exist" as opposed to $0$. I also don't know how to begin solving #2 and #3. When I look it up, the only solutions involve L'Hôpital's Rule, but my teacher hasn't taught it to us yet, so I can't use it. Any help is appreciated!


Answer



For the first one the expression under the radical sign is negative for $\theta \ne \pi/2 $ so it is not real.



For the second one multiply top and bottom by $1-\cos \pi x$ and turn the top into $\sin^2 \pi x$



Then write the $\tan ^2 \pi x$of the bottom in terms of $ \sin \pi x$ and $\cos \pi x$ and cancel the $\sin ^2 \pi x$ from top and bottom.



The rest is easy.




The third one is similar to the second one.


real analysis - Is there a way to show that $e^x=\lim_{n\to \infty }\left(1+\frac{x}{n}\right)^n$?



I know that $$e:=\lim_{n\to \infty }\left(1+\frac{1}{n}\right)^n,$$
by definition. Knowing that, I proved successively that $$e^{k}=\lim_{n\to \infty }\left(1+\frac{k}{n}\right)^n,$$
when $k\in \mathbb N$, $k\in \mathbb Z$ and $k\in\mathbb Q$. Now, I was wondering: how can I extend this result over $\mathbb R$? I tried to prove that $f_n(x):=(1+\frac{x}{n})^n$ converges uniformly on $\mathbb R$ but unfortunately it failed (I'm not sure that it's even true). Any idea?







My idea was to define the function $x\longmapsto e^x$ as $$e^x=\begin{cases}e^x& x\in \mathbb Q\\ \lim_{n\to \infty }e^{k_n}&\text{if }k_n\to x \text{ and }(k_n)\subset \mathbb Q\end{cases}.$$
But to conclude that $$e^x=\lim_{n\to \infty }\left(1+\frac{x}{n}\right)^n,$$
I need to prove that $f_n(x)=\left(1+\frac{x}{n}\right)^n$ converges uniformly on a neighborhood of $x$, but I can't do it. I set $$g_n(x)=f_n(x)-e^x,$$
but I can't find the maximum on a compact set that contains $x$, and thus can't conclude.


Answer



We can use the fact that there exist $p_n, q_n \in \mathbb{Q}$ such that $p_n,q_n \to x$ and $p_n\le x\le q_n$; therefore



$$\left(1+\frac{p_n}{n}\right)^n\le \left(1+\frac{x}{n}\right)^n\le \left(1+\frac{q_n}{n}\right)^n$$




and



$$\left(1+\frac{p_n}{n}\right)^n=\left[\left(1+\frac{p_n}{n}\right)^\frac{n}{p_n}\right]^{p_n}\to e^x$$



$$\left(1+\frac{q_n}{n}\right)^n=\left[\left(1+\frac{q_n}{n}\right)^\frac{n}{q_n}\right]^{q_n}\to e^x$$



indeed for $\frac{n}{p_n}\in (m,m+1)$ with $m\in \mathbb{N}$ we have



$$\left(1+\frac1{m+1}\right)^m\le \left(1+\frac{p_n}{n}\right)^\frac{n}{p_n}\le \left(1+\frac1m\right)^{m+1}$$




and therefore $\left(1+\frac{p_n}{n}\right)^\frac{n}{p_n}\to e$.
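A quick numerical illustration of the squeeze for an irrational exponent (my addition):

```python
# (1 + x/n)^n approaches e^x; try x = sqrt(2).
import math

x = math.sqrt(2)
for n in (10, 1000, 100_000):
    print(n, (1 + x / n) ** n)
print(math.exp(x))  # ~4.11325
```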


linear algebra - Determinant of $n\times n$ matrix with parameter



Problem:





Let $\delta \in \mathbb{R}^+$ and $n\in \mathbb{N}$. The matrix $A_n = (a_{i,j}) \in \mathbb{R}^{n\times n}$ is defined as



$$
a_{i,j} = \prod_{k=0}^{i-2}\left((j-1)\delta +n-k\right)
$$
Prove that
$$\det A_n = \delta ^{\frac{1}{2}n(n-1)}\prod^{n-1}_{k=0}k!$$




So there is




$$
A_1 =
\pmatrix{
1\\
}
$$



$$
A_2 =
\pmatrix{
1&1\\
2&\delta+2\\
}
$$



$$
A_3 =
\pmatrix{
1&1&1\\
3&\delta+3&2\delta+3\\
6&(\delta+3)(\delta+2)&(2\delta+3)(2\delta+2)\\
}
$$



$$\vdots$$



I eventually managed to prove it by converting the matrix to upper triangular using elementary row operations, but the proof is just too complicated, involving things like $\left((k-2)\delta+n-(i-(k-1))+1\right)$-th multiples of certain rows (for that matter it is quite long, so I am not fully including it here). So it somehow feels like not the best possible way to do this.



What are some other ways to prove this?



Answer



I finally found this in the literature, namely Calculation of some determinants using the s-shifted factorial by Jean-Marie Normand. In Lemma $1$ he proves equation $(3.5)$:




For complex numbers $z_j, s, b_i$ we have$$\tag{1}\det [(b_i+z_j)_{s;i}]_{i,j=0,\dots,n-1}=\prod_{0\leq i < j \leq n-1}(z_j-z_i)$$ where
$(z)_{s;i}=z(z+s)\dots(z+(i-1)s)$ is called the $s$-shifted factorial and $\prod_{0\leq i<j\leq n-1}(z_j-z_i)$ is a Vandermonde determinant.




Furthermore Appendix B shows in equation (B$.5$) that for special case $b_i=0, z_j=b+aj$ one has





$$
\det[(b_i+z_j)_{s;i}]_{i,j=0,\dots,n-1}=\det[(b+aj)_{s;i}]_{i,j=0,\dots,n-1}=a^{n(n-1)/2}\prod_{j=0}^{n-1}j!.\tag{2}
$$




Before applying to our problem, we first need to re-index the range to $i,j=0,\dots,n-1$, then we have $a_{i,j}=(j\delta+n)_{1;i}$. Now choosing $b_i=0, a=\delta, b=n, s=1$ in $(2)$ gives exactly the statement we wanted to prove.
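For reassurance, here is a short exact check of the claimed formula for small $n$ (my sketch, assuming SymPy; the rational value of $\delta$ is arbitrary):

```python
from sympy import Matrix, Rational, factorial

def entry(i, j, n, delta):  # i, j are 0-based here
    v = 1
    for k in range(i):      # product over k = 0, ..., i-1 (i factors)
        v *= j * delta + n - k
    return v

def check(n, delta):
    A = Matrix(n, n, lambda i, j: entry(i, j, n, delta))
    expected = delta ** (n * (n - 1) // 2)
    for k in range(n):
        expected *= factorial(k)
    return A.det() == expected

print(all(check(n, Rational(3, 7)) for n in range(1, 6)))  # True
```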


integration - Finding indefinite integral by partial fractions



$$\displaystyle \int{dx\over{x(x^4-1)}}$$



Can this integral be calculated using the partial fractions method?


Answer



HINT:



We need to use Partial Fraction Decomposition




Method $1:$



As $x^4-1=(x^2-1)(x^2+1)=(x-1)(x+1)(x^2+1),$



$$\text{Put }\frac1{x(x^4-1)}=\frac Ax+\frac B{x-1}+\frac C{x+1}+\frac {Dx+E}{x^2+1}$$






Method $2:$

$$I=\int \frac1{x(x^4-1)}dx=\int \frac{xdx}{x^2(x^4-1)} $$



Putting $x^2=y,2xdx=dy,$



$$I=\frac12\int \frac{dy}{y(y^2-1)}$$



$$\text{ Now, put }\frac1{y(y^2-1)}=\frac A y+\frac B{y-1}+\frac C{y+1}$$







Method $3:$



$$I=\int \frac1{x(x^4-1)}dx=\int \frac{x^3dx}{x^4(x^4-1)} $$



Putting $x^4=z,4x^3dx=dz,$



$$I=\frac14\int \frac{dz}{z(z-1)}$$



$$\text{ Now, put }\frac1{z(z-1)}=\frac Az+\frac B{z-1}$$




$$\text{ or by observation, }\frac1{z(z-1)}=\frac{z-(z-1)}{z(z-1)}=\frac1{z-1}-\frac1z$$



Observe that the last method is susceptible to generalization.



$$J=\int\frac{dx}{x(x^n-a)}=\int\frac{x^{n-1}dx}{x^n(x^n-a)}$$



Putting $x^n=u,nx^{n-1}dx=du,$



$$J=\frac1n\int \frac{du}{ u(u-a)}$$
$$\text{ and }\frac1{u(u-a)}=\frac1a\cdot\frac{u-(u-a)}{u(u-a)}=\frac1a\left(\frac1{u-a}-\frac1u\right)$$
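If one just wants to confirm the decomposition and the antiderivative, a computer algebra sketch works too (my addition, assuming SymPy):

```python
from sympy import symbols, apart, integrate, simplify

x = symbols('x', positive=True)
f = 1 / (x * (x**4 - 1))

print(apart(f))                 # the partial fraction decomposition of Method 1
F = integrate(f, x)
print(simplify(F.diff(x) - f))  # 0, so F is an antiderivative
```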



arithmetic - When asked to solve a question without using a calculator, how much mental computation is reasonable?

Recently, the following question was asked: Without calculator, find out what is larger: $60^\frac{1}{3}$ or $2+7^\frac{1}{3}$. (Apologies; I don't know how to link to that question, but it is not essential for the question I am asking.)



Most people would not be able to extract cube roots without a calculator, unless the numbers were particularly easy, such as $64^\frac{1}{3}$ or $2+8^\frac{1}{3}$. But not using a calculator does not rule out doing some calculation.



As it turns out, the numbers in this case lend themselves to reasonably calculable approximations, which many people could perform in their heads, but might prove daunting to less experienced individuals.



So my question is, are the calculations I made reasonably within the intention of the restriction "without calculator?"



Please consider the difficulty of the calculations, not the total amount of calculation performed. Here is what I did, emphasizing the arithmetic calculation aspects: I cubed both quantities, leaving me to compare $60$ with $8+12(7^\frac{1}{3})+6(7^\frac{2}{3})+7$.




Collecting terms and rearranging, the original question becomes one about a quadratic equation:



Is $$x^2+2x-7.5$$ greater or less than $0$ when $x=7^\frac{1}{3}$?



This in turn becomes: Is $7^\frac{1}{3}$ greater or less than $r$, the positive root of $$x^2+2x-7.5=0$$.



By the quadratic formula $$r=(-2+\sqrt{4+30})/2$$.



Although the square root of $34$ may look like one of those calculations that would require a calculator, it turns out that determining a precise value would just make subsequent calculations dependent on a calculator as well.




By good fortune (or the cleverness of the original poser of the question), $34$ is close to $36$, so we may approximate $\sqrt{34}$ as $(6-a)$.



Thus we look for $$34=36-12a+a^2$$.



But since $a$ will be small compared to $6$, we can approximate by ignoring the $a^2$ term and calculate $a=\frac{1}{6}$. It is easy to see that $(6-\frac{1}{6})^2$ exceeds $34$ by $\frac{1}{36}$. Again, by seeming good fortune, the next reasonable fraction greater than $\frac{1}{6}$ is $\frac{17}{100}$.



$(6-\frac{17}{100})^2$ is also calculable, as $36-\frac{204}{100}+\frac{289}{10000}$. Since the second term decrements $36$ by $2.04$ and the third term only restores $0.0289$, we see that $(6-\frac{17}{100})^2$ is less than $34$. So $$(6-\frac{1}{6})>\sqrt{34}>(6-\frac{17}{100}),$$ hence $$(2-\frac{1}{12})>r>(2-\frac{17}{200}).$$



What remains is to cube the numerical values bracketing $r$ and compare the results to $7$.




$$(2-\frac{1}{12})^3=8-1+\frac{1}{24}-\frac{1}{1728}$$ which is greater than $7$ by observation.



$$(2-\frac{17}{200})^3=8-\frac{204}{200}+6(\frac{289}{40000})-(\frac{17}{200})^3=8-\frac{204}{200}+(\frac{1734}{40000})-(\frac{17}{200})^3$$.



The arithmetic is a little harder here, but the first and second terms are less than $7$ by $0.02$ and the third term is reasonably seen to be greater than $0.04$, making the sum of the first three terms greater than $7$ by at least $0.02$. The last term is certainly smaller than $(\frac{20}{200})^3$ which is $0.001$, so the sum of the terms is greater than 7.



This means that $r^3>7$ or $$r>7^\frac{1}{3}$$. From this, the original question can be answered. In performing calculations, no roots were extracted, but binomial expressions up to cubes involving fractions were calculated. I personally found the numbers in the numerators and denominators tractable, but would this be considered by the community as being in the spirit of "without calculator?"
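For the record, the machine comparison that the exercise forbids (my addition) confirms the chain of estimates:

```python
print(60 ** (1 / 3))     # 3.914867...
print(2 + 7 ** (1 / 3))  # 3.912931...  (so the cube root of 60 is larger)
```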

complex numbers - correctly found the angle $\phi$ ($\arg z$)? $\left(\frac{(\sqrt2-i\sqrt2)(1-i\sqrt3)}{4}\right)$

$$1) \left(\frac{(\sqrt2-i\sqrt2)(1-i\sqrt3)}{4}\right) = \left(\frac{(\sqrt2-\sqrt2\sqrt3)+i(\sqrt2\cdot(-\sqrt3) - \sqrt2)}{4}\right) = \left(\frac{(\sqrt2-\sqrt6)+i(-\sqrt6-\sqrt2)}{4}\right) = \left(\frac{\sqrt2-\sqrt6}{4}+\frac{i(-\sqrt6-\sqrt2)}{4}\right)$$



$$ 2) \tan \phi = \frac{y}{x} $$
$$ 3) \tan\phi = \frac{-\sqrt6-\sqrt2}{\sqrt2-\sqrt6}=\frac{(-\sqrt6-\sqrt2)(\sqrt2+\sqrt6)}{2-6}= \frac{-(\sqrt6+\sqrt2)(\sqrt2+\sqrt6)}{-4} = \frac{(\sqrt6+\sqrt2)^2}{4}=\frac{6+2\sqrt{12}+2}{4}= \frac{8+2\sqrt{12}}{4}=\frac{8+4\sqrt3}{4} = 2+\sqrt3 $$
$$4) \tan\phi = 2+\sqrt3$$

$$5) \phi = \arctan(2+\sqrt3)$$



Is everything correct?

If not, tell me where the error is.

Also, how do I find $\arg z$?

algebra precalculus - How do I show that $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$



According to wolfram alpha this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$



But how do you show this? I know of no rules that works with addition inside square roots.



I noticed I could do this:




$\sqrt{24} = 2\sqrt{3}\sqrt{2}$



But I still don't see how I should show this since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition


Answer



Hint: Since they are both positive numbers, they are equal if, and only if, their squares are equal.
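Carrying the hint out explicitly (added for completeness): both sides are positive, and
$$\left(\sqrt{3}+\sqrt{2}\right)^2 = 3 + 2\sqrt{3}\sqrt{2} + 2 = 5 + 2\sqrt{6} = 5 + \sqrt{24} = \left(\sqrt{5+\sqrt{24}}\right)^2,$$
so the two numbers are equal.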


Thursday 25 April 2013

elementary number theory - Why does $\lfloor\frac{n}{x}\rfloor$ have at most $2\sqrt{n}$ values when $x=1,2,\dots, n$?



The question is very short and clear:



Why does $\lfloor\frac{n}{x}\rfloor$ (floor of $\frac nx$) have at most $2\sqrt{n}$ values when $x = 1, 2,\dots, n $?



I saw this statement in the tutorial for problem 449A on Codeforces. I really want to know how we get that, and since the number of values should be an integer, why $2\sqrt{n}$?



And I don't know the specific category of this problem so I just tag it as elementary number theory.


Answer




Over the range $1 \le x \le \sqrt{n}$, $x$ can take on only $\sqrt{n}$ distinct integer values. Thus, $\left\lfloor\dfrac{n}{x}\right\rfloor$ can only take on $\sqrt{n}$ distinct values, one for each distinct value of $x$.



Over the range $\sqrt{n} \le x \le n$, we have that $1 \le \dfrac{n}{x} \le \sqrt{n}$. Thus, $\left\lfloor\dfrac{n}{x}\right\rfloor$ can only take on $\sqrt{n}$ distinct values, one for each integer between $1$ and $\sqrt{n}$.



This adds up to $2\sqrt{n}$ values over the entire range $1 \le x \le n$.



Note that this is an upper bound and not an exact number. For instance, if $n = 6$, then we have at most $2\sqrt{6} \approx 4.89$ values. Indeed, $\lfloor\frac{6}{1}\rfloor = 6$, $\lfloor\frac{6}{2}\rfloor = 3$, $\lfloor\frac{6}{3}\rfloor = 2$, $\lfloor\frac{6}{4}\rfloor = \lfloor\frac{6}{5}\rfloor = \lfloor\frac{6}{6}\rfloor = 1$, so we have $4$ distinct values.
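A short count (my addition) showing how the actual number of distinct values compares with the $2\sqrt{n}$ bound:

```python
import math

for n in (6, 100, 10_000, 1_000_000):
    distinct = len({n // x for x in range(1, n + 1)})
    print(n, distinct, 2 * math.sqrt(n))  # the count never exceeds 2*sqrt(n)
```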


Alternate complex binomial series sum




Calculate $\displaystyle \sum^{2n-1}_{r=1}(-1)^{r-1}\cdot r\cdot \frac{1}{\binom{2n}{r}}$.




My Try: Using $$\int^{1}_{0}x^m(1-x)^ndx = \frac{1}{(m+n+1)}\cdot \frac{1}{\binom{m+n}{n}}$$



So $\displaystyle \int^{1}_{0}x^{2n-r}(1-x)^r\,dx=\frac{1}{2n+1}\cdot \frac{1}{\binom{2n}{r}}$




The sum converts into $\displaystyle (2n+1)\sum^{2n-1}_{r=1}(-1)^{r-1}r\int^{1}_{0}x^{2n-r}(1-x)^r\,dx$



$\displaystyle \Longrightarrow (2n+1) \int^{1}_{0}x^{2n}\sum^{2n-1}_{r=1}(-1)^{r-1}\cdot r \cdot \bigg(\frac{1}{x}-1\bigg)^r\,dx$



Could someone help me to solve it? Thanks


Answer



Here is a more elementary method to find the sum which gives a more general result. Let $\displaystyle f(n)=\sum^{n-1}_{r=1}(-1)^{r-1}\frac{r}{\binom{n}{r}}$. We have been asked to find $\displaystyle f(2n)$. Note that $\displaystyle \frac{1}{\binom{n+1}{k+1}}+\frac{1}{\binom{n+1}{k}}=\frac{n+2}{n+1}\frac{1}{\binom{n}{k}}$. We will use this identity twice to telescope the sum.



$$f(n)=\sum^{n-1}_{r=0}(-1)^{r-1}\frac{r}{\binom{n}{r}}=\frac{n+1}{n+2}\sum^{n-1}_{r=0}\bigg((-1)^{r-1}\frac{r}{\binom{n+1}{r}}-(-1)^{r}\frac{r}{\binom{n+1}{r+1}}\bigg)$$$$=\frac{n+1}{n+2}\sum^{n-1}_{r=0}\bigg((-1)^{r-1}\frac{r}{\binom{n+1}{r}}-(-1)^{r}\frac{r+1}{\binom{n+1}{r+1}}+\frac{(-1)^r}{\binom{n+1}{r+1}}\bigg)=\frac{n+1}{n+2}\bigg((-1)^n\frac{n}{n+1}+\sum^{n}_{r=1}\frac{(-1)^{r-1}}{\binom{n+1}{r}}\bigg)$$$$=\frac{n+1}{n+2}\Bigg((-1)^n\frac{n}{n+1}+\frac{n+2}{n+3}\sum^{n}_{r=1}\bigg(\frac{(-1)^{r-1}}{\binom{n+2}{r}}-\frac{(-1)^r}{\binom{n+2}{r+1}}\bigg)\Bigg)=\frac{n+1}{n+2}\Bigg((-1)^n\frac{n}{n+1}+\frac{1-(-1)^n}{n+3}\Bigg)$$




$$\implies f(n)=\frac{n}{n+2}(-1)^n+\frac{n+1}{(n+2)(n+3)}\big(1-(-1)^n\big)=\left \{
\begin{aligned}
&\ \ \ \ \ \frac{n}{n+2}, && \text{if}\ n \text{ is even} \\
&-\frac{n-1}{n+3}, && \text{if } n \text{ is odd}
\end{aligned} \right.$$

$$\therefore f(2k+1)=-\frac{k}{k+2}\text{ and }f(2k)=\frac{k}{k+1}\text{, as desired.}$$
$\blacksquare$


calculus - Find $\lim\limits_{n\to\infty}\left(\frac{a_1}{a_0S_1}+\frac{a_2}{S_1S_2}+...+\frac{a_n}{S_{n-1}S_n}\right)$

Find $\lim\limits_{n\to\infty}\left(\frac{a_1}{a_0S_1}+\frac{a_2}{S_1S_2}+...+\frac{a_n}{S_{n-1}S_n}\right)$ where $n=0,1,2,...$ and $a_n=2015^n,S_n=\sum\limits_{k=0}^{n}a_k$



$S_n$ can be written as the geometric sum $S_n=\frac{2015^{n+1}-1}{2014}$.




Applying the values for $a_k$ and $S_k$ can't give a closed form in the limit.



How can the sequence in the limit be transformed so that it gives a closed form (if possible)?
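One possible route to a closed form (a hint, not from the original post): since $a_k = S_k - S_{k-1}$ and $S_0 = a_0$, each term telescopes,
$$\frac{a_k}{S_{k-1}S_k}=\frac{S_k-S_{k-1}}{S_{k-1}S_k}=\frac{1}{S_{k-1}}-\frac{1}{S_k},$$
so the partial sum equals $\frac{1}{S_0}-\frac{1}{S_n}=1-\frac{2014}{2015^{n+1}-1}$, which tends to $1$.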

convergence divergence - Examples where series converges but product diverges and vice versa

Our professor gives us the following ungraded exercise for our analytic number theory class:




Suppose $ (b_n) $ is a sequence with $ |b_n| \leq \lambda < 1 $, and let $ a_n = 1 + b_n $.



1) Find $ (b_n) $ so that $ \sum b_n $ converges, but $ \prod a_n $ diverges.



2) Find $ (b_n) $ so that $ \prod a_n $ converges, but $ \sum b_n $ diverges.



I am not sure how to do this problem. Any help is appreciated.

elementary set theory - Show that $\mathbb{N}$ is infinite



I've tried to show that $\mathbb N$ is an infinite set.



Proof. By contradiction: assume that there exists a bijection $f$ from $\mathbb N$ to $I_n = \{1, \dots, n-1\}$. We know that $f{\restriction_{I_n}}$ is injective and its image is a proper subset of $I_n$. From here, we can show that, by restricting the codomain of $f{\restriction_{I_n}}$ to $f{\restriction_{I_n}}(I_n)$, we obtain a bijection from a proper subset of $I_n$, that is, $f{\restriction_{I_n}}(I_n)$, to $I_n$ itself. But this is a contradiction. $\square$



Is it a correct proof?



EDIT: I'm using Peano's definition for $\mathbb N$, where $I_n$ is the set $\{0, S(0), \dots, S^{n-1}(0)\}$. For infinite set, I mean a set for which there is no bijection between it and any $I_n$, $n \in \mathbb N$.


Answer




Yes. Your proof is correct.



Indeed, if $f\colon\Bbb N\to\{0,\ldots,n-1\}$ is injective, then restricting the function to $\{0,\ldots,n\}$ would be an injection from a finite set into a proper subset, which is a contradiction to the pigeonhole principle.



One can argue in several other ways, e.g. using Peano axioms, or other axioms from which you can obtain (in a second-order) the natural numbers in a relatively natural way. These ways lend themselves to slightly different proofs. But your suggested proof is correct, and is a fairly standard proof in elementary set theory courses.


Wednesday 24 April 2013

real analysis - Must an unbounded set in a metric space be infinite?



I'm new to real analysis and topology; recently I've been reading Baby Rudin. A question occurred to me: does a set being unbounded imply that the set is infinite in a metric space? I think the statement is right, but I can't prove it.



Please give a rigorous proof. Thanks in advance.


Answer



Yes (if metric spaces are assumed to be non-empty).



Let $\langle X,d\rangle$ denote a metric space.




Suppose that a set $S\subseteq X$ is finite and let $x\in X$.



If we take $r>\max\{d(x,y)\mid y\in S\}$ then $S\subseteq B(x,r)=\{y\in X\mid d(x,y)<r\}$, so $S$ is bounded.

That means that unbounded sets cannot be finite, hence are infinite.






Note: this answer preassumes that metric spaces are not empty. If $X=\varnothing$ then the finite set $\varnothing$ is unbounded since $X=\varnothing$ is not contained in any ball centered at some $x\in X$. This because balls like that simply do not exist in that situation.


general topology - Order type between two sets and bijection?

I want to show that $$ \{1,2\}\times Z_+\ \text{and} \ Z_+ \times\{1,2\}\ \text{have different order type}$$



If we define $$f(i,j)=(j,i)\ \text{for}\ i\ \text{in }\{1,2\}\ \text{and} \ j\ \text{in}\ Z_+$$



It seems that this is a bijective map between the two sets.



However, to show that they are not order isomorphic, how shall I start showing that this bijection does not preserve the ordering?




It also seems that the way I defined the bijection is not the only possible one.



I am wondering: if there exists a bijection between two sets and that bijection does not preserve order, can I conclude that they have different order types?

calculus - Prove that $\lim_{n \to \infty}3nx_n^3 = 1$.



If $\{x_n\}$ is a given sequence satisfying
$$\lim_{n \to \infty}x_n\sum_{k=1}^{n}x_k^2=1,$$
prove that
$$\lim_{n \to \infty}3nx_n^3 = 1.$$
I have tried to use Stolz–Cesàro theorem but failed. Can anyone give a hint? Thanks.


Answer




Question: let $a_{n}$ be a sequence of real numbers such that $\lim_{n\to\infty}a_{n}\sum_{k=1}^{n}a^2_{k}=1$, show that:$$\lim_{n\to\infty}3na^3_{n}=1$$





Solution: Set $S_{n}=\sum_{k=1}^{n}a^2_{k}$; then the condition $a_{n}S_{n}\to 1$ implies that
$S_{n}\to \infty$ and $a_{n}\to 0$ as $n\to\infty$. Hence we also have that $a_{n}S_{n-1}\to 1$ as $n\to\infty$. Therefore
$$S^3_{n}-S^3_{n-1}=a^2_{n}(S^2_{n}+S_{n}S_{n-1}+S^2_{n-1})\to 3,\quad n\to \infty.$$
Thus by the Stolz-Cesaro lemma, $\dfrac{S^3_{n}}{n}\to 3$ as $n\to \infty$; since
$a_{n}S_{n}\to 1$, it follows that
$$\lim_{n\to\infty}3na^3_{n}=1$$


real analysis - Give an example of a function uniformly differentiable with unbounded derivative




Give an example of a function $$f:U\longrightarrow \mathbb{R}$$ uniformly differentiable with unbounded derivative $f'$, $U\subset\mathbb{R}$ open set.



Any hints would be appreciated.


Answer



Take $f(x) = x^{3/2}$ on $U = [0,\infty).$ Then $f'(x) = (3/2)x^{1/2},$ which is uniformly continuous on $U.$ By the MVT, $f$ is uniformly differentiable on $[0,\infty).$ Since $f'$ is unbounded, we have an example.


Tuesday 23 April 2013

real analysis - Nowhere continuous function for every equivalence class



Since our calculus lectures, we know that there are nowhere continuous functions (like the indicator function of the rationals). However, if we change this Dirichlet function on a set of measure zero, then we get a continuous function (the zero function). So my question is:





Does there exist a function $f: \mathbb{R}\rightarrow \mathbb{R}$ such that every function which equals $f$ almost everywhere (with respect to the Lebesgue measure) is nowhere continuous?
Is it possible to choose $f$ Borel-measurable?



Answer



See my answer to this question for the construction of an $F_\sigma$ set $M\subset\mathbb R$ such that $0\lt m(M\cap I)\lt m(I)$ for every finite interval $I,$ where $m$ is the Lebesgue measure.



Let $f$ be the indicator function of $M.$ Clearly $f$ is Borel-measurable. If $g(x)=f(x)$ almost everywhere, then $g^{-1}(0)$ and $g^{-1}(1)$ are everywhere dense, whence $g$ is nowhere continuous.


real analysis - The Dirichlet Function.



The Dirichlet function is defined by $f(x)=\begin{cases} 1 &\text{ if } x\in \mathbb{Q}\\0 &\text{ if } x\notin \mathbb{Q}.\end{cases}$ Let $g(x)=\begin{cases}0 &\text{ if } x\in \mathbb{Q}\\ 1 &\text{ if } x\notin \mathbb{Q} \end{cases}$ be its evil twin.



(1) Prove that $f$ is discontinuous at every $x\in \mathbb{R}$.




(2) Prove that $g$ is continuous at exactly one point $x_1\in \mathbb{R}$.



(3) Prove that $f+ g$ is continuous at every $x\in \mathbb{R}$.



Definition (Continuous at a point): A function $f: D \longrightarrow \mathbb{R}$ is continuous at $x_0\in D$ if for every $\epsilon > 0$ there exists some $\delta > 0$ such that $|f(x) - f(x_0)| < \epsilon$ whenever $|x-x_0|< \delta$.



Response: I don't have any work yet, hints and answers would be extremely helpful.


Answer



$f+g$ is a constant function, so (3) is easy. (2) is false, so don't waste your time trying to prove it. For (1), set $\epsilon=1$, take any $x_0\in\Bbb R$ and any $\delta>0$, and use the fact that both $\Bbb Q$ and $\Bbb R\smallsetminus\Bbb Q$ are dense in $\Bbb R$ to show that there is some $x\in \Bbb R$ such that $|x-x_0|<\delta$ and $|f(x)-f(x_0)|=\epsilon$.




As a side note, if we'd defined $g(x):=xf(x)$, then $g$ would be continuous at exactly one point--namely $0$.


complex analysis - Inequality involving Mobius transformation

I have the following simple-looking inequality I have to show:



Let $z, w \in \mathbb D$, where $\mathbb D$ is the open unit disc in $\mathbb C$. Show that
$$\left| \frac{z-w}{1-\overline{z}w} \right| \geq \left| \frac{|z|-|w|}{1-|z||w|} \right|.$$



It looks pretty straightforward, but I just can't seem to get it, and I think I might be missing something obvious. I've tried putting $z=|z|e^{i \alpha}$ and $w=|w|e^{i \beta}$ to get
$$\left| \frac{z-w}{1-\overline{z}w} \right| = \left| \frac{|z|-|w|e^{i \theta}}{1-|z||w|e^{i \theta}} \right|$$
where $\theta = \beta - \alpha$, and can't get much out of this. I've tried squaring both sides etc., and a few other things. If anyone has any ideas, I'd be very grateful, thanks.

real analysis - Does $f(n\theta) \to 0$ for all $\theta>0$ and $f$ Darboux imply $f(x) \to 0$ as $x \to \infty$?



Recall that a Darboux function $f:\mathbb{R} \to \mathbb{R}$ is one which satisfies the conclusion of the intermediate value theorem (i.e., connected sets are mapped to connected sets). Being Darboux is a weaker condition than continuity. If a theorem about continuous functions only uses the intermediate value theorem, then chances are it also holds for the entire class of Darboux functions. I find it interesting to study which theorems about continuous functions also hold for Darboux functions.




We have the following theorem, which is fairly well known and hinges on the Baire Categoery Theorem.




If $f:\mathbb{R} \to \mathbb{R}$ is continuous and $f(n\theta) \xrightarrow[n \in \mathbb{N}, \ n\to\infty]{} 0$ for every $\theta \in (0, \infty)$, then $f(x) \xrightarrow[x \in \mathbb{R}, \ \ x\to\infty]{} 0$.




A counterexample if we drop continuity is $f(x) = \mathbf{1}_{\{ \exp(n) : n \in \mathbb{N}\}}$. However, this counterexample isn't Darboux, and I haven't been able to come up with any counterexample which is Darboux. Thus, this leads me to my question.




Can the continuity condition in the theorem stated above be relaxed to Darboux?





In searching for counterexamples of this sort, one approach is playing around with $\sin \frac{1}{x}$. An alternative approach is considering highly pathological functions with the property that every nonempty open set is mapped to $\mathbb{R}$ (for instance, Conway Base-13, or Brian's example here) and modifying these in such a way that they satisfy the hypotheses of the problem.


Answer



Non-measurable example



By the axiom of choice there is a $\mathbb Q$-linear basis of $\mathbb R.$ This basis has the same cardinality as $\mathbb R$ so can be indexed as $a_r$ for $r\in\mathbb R.$ Define $f$ by setting $f(x)=r$ if $x$ is of the form $a_0+qa_r$ for some rational $q$ and real $r,$ and set $f(x)=0$ for $x$ not of this form. Then $f$ is Darboux because the set $\{a_0+qa_r\mid q\in\mathbb Q\}$ is dense for each $r.$ But for each $\theta>0,$ we can only have $f(q\theta)\neq 0$ for at most one rational $q$ - the reciprocal of the $a_0$ coefficient of $\theta.$ In particular $f(n\theta)\to 0$ as $n\to\infty$ with $n\in\mathbb N.$



Measurable example




For $n\geq 2$ let $b_n=n!(n-1)!\dots 2!.$ Each real has a unique "mixed radix" expression as
$x=\lfloor x\rfloor + \sum_{n\geq 2}\frac{x_n}{b_n}$ where $x_n$ is the unique representative of $\lfloor b_n x\rfloor$ modulo $n!$ lying in $\{0,1,\dots,n!-1\}.$ For non-negative $x$ define $f(x)=\lim_{n\to\infty} \tfrac{1}{n}\sum_{m=2}^n x_m$ if this limit exists and $x_n\leq 1$ for all sufficiently large $n,$ and take $f(x)=0$ otherwise. For negative $x$ define $f(x)=f(-x).$ Note $f(x)\in[0,1].$ It is straightforward to see that $f$ takes all values in $[0,1]$ in every interval and is hence Darboux.



Now consider a real $x>0$ with $f(x)\neq 0$ and let $0<q<1$ be rational. We will show that $f(qx)=0.$ We know there exists $N$ such that $x_n\leq 1$ for all $n>N.$ Increasing $N$ if necessary we can assume that $qN!$ is an integer. We also know that $x_n=1$ for infinitely many $n>N$ - otherwise we would have $\lim_{n\to\infty} \tfrac{1}{n}\sum_{m=2}^n x_m=0.$
For each such $n,$ write $x=x'/b_{n-1}+1/b_n+\epsilon/b_{n+1}$ where $x'$ is an integer and $0\leq\epsilon< 2.$ So $qx b_{n+1}=qx'n!(n+1)!+q(n+1)!+q\epsilon.$ The first term is a multiple of $(n+1)!$ because $qn!$ is an integer, and the second term $q(n+1)!$ is an integer, and $q\epsilon<2.$ So $(qx)_{n+1}$ is either $q(n+1)!$ or $q(n+1)!+1$ (note this is less than $(n+1)!$). Since $q(n+1)!>1$ and there are infinitely many such $n,$ we get $f(qx)=0.$



This shows that for each $\theta>0,$ the sequence $f(n\theta)$ takes at most one non-zero value, and in particular $f(n\theta)\to 0.$



Remark: this $f$ appears to be a counterexample to https://arxiv.org/abs/1003.4673 Theorem 4.1.


Monday 22 April 2013

abstract algebra - Homework: Sum of the cubed roots of polynomial




Given $7X^4-14X^3-7X+2 = f\in R[X]$, find the sum of the cubed roots.
Let $x_1, x_2, x_3, x_4\in R$ be the roots. Then the polynomial $X^4-2X^3-X+ 2/7$ would have the same roots. If we write the polynomial as $X^4 + a_1X^3 + a_2X^2 +a_3X + a_4$ then per Viete's theorem:



$a_k = (-1)^k\sigma _k(x_1,x_2,x_3,x_4), k\in \{1,2,3,4\}$, where $\sigma _k$ is the $k$-th elementary symmetrical polynomial. Therefore:



$x_1+x_2+x_3+x_4 = 2$
$x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4 = 0\ (*)$
$x_1x_2x_3 +x_1x_2x_4+x_1x_3x_4+x_2x_3x_4 = 1$
$x_1x_2x_3x_4 = 2/7$



Now how to determine the sum of the cubed roots?
$2^3 = 8= (x_1+x_2+x_3+x_4)(x_1+x_2+x_3+x_4)^2 = (x_1+x_2+x_3+x_4)(x_1^2+x_2^2+x_3^2+x_4^2 + 2(*))$



Here's where things go out of hand:
$(x_1+x_2+x_3+x_4)(x_1^2+x_2^2+x_3^2+x_4^2) = (x_1^3 + x_2^3 + x_3^3+x_4^3) + x_1^2(x_2+x_3+x_4)+x_2^2(x_1+x_3+x_4)+x_3^2(x_1+x_2+x_4)+x_4^2(x_1+x_2+x_3) = 8$
What should I do here?



Answer



Let
$$A=x_1+x_2+x_3+x_4=2$$
$$B=x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4=0$$
$$C=x_1x_2x_3+x_1x_2x_4+x_1x_3x_4+x_2x_3x_4=1$$
$$D=x_1x_2x_3x_4=\frac 27.$$
$$E=x_1^2x_2+x_1x_2^2+x_1^2x_3+x_1x_3^2+x_1^2x_4+x_1x_4^2+x_2^2x_3+x_2x_3^2+x_2^2x_4+x_2x_4^2+x_3^2x_4+x_3x_4^2$$



We have
$$A^3=x_1^3+x_2^3+x_3^3+x_4^3+3E+6C$$

and
$$AB=E+3C.$$



So,
$$x_1^3+x_2^3+x_3^3+x_4^3=A^3-3(AB-3C)-6C=\color{red}{11}.$$
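A numerical confirmation of the value $11$ (my addition, assuming NumPy):

```python
import numpy as np

roots = np.roots([7, -14, 0, -7, 2])  # coefficients of 7x^4 - 14x^3 - 7x + 2
print(sum(r**3 for r in roots))       # ~ (11+0j)
```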


abstract algebra - Find an example about splitting field




Let $f(x)$ be an irreducible polynomial in $\mathbb{Q}[x]$, and let $K$ be the splitting field of $f(x)$ over $\mathbb{Q}$.



Now, suppose that $E$ is a splitting field of some polynomial in $\mathbb{Q}[x]$ with $\mathbb{Q}\subseteq E\subseteq K$, and that $\alpha_{1},\alpha_{2},\ldots,\alpha_{n}$ are roots of $f(x)$ in $K$.



In this situation, prove or disprove : $$\deg(\alpha_{1},E)=\cdots=\deg(\alpha_{n},E)$$





I think that the splitting field $K=\mathbb{Q}(\sqrt[8]{2},i)$ of $f(x)=x^{8}-2\in\mathbb{Q}[x]$ over $\mathbb{Q}$ is a counterexample for this question when $E=\mathbb{Q}(\sqrt{2})$.



Am I correct? Or is there a better example than this one?



If it is, under what conditions does the relation $\deg(\alpha_{1},E)=\cdots=\deg(\alpha_{n},E)$ hold?



Give some comment or advice. Thank you!


Answer



Since $E$ is a splitting field, it is a Galois extension of $\mathbf{Q}$. Thus $N=\mathrm{Gal}(K/E)$ is a normal subgroup of $G=\mathrm{Gal}(K/\mathbf{Q})$. Since $f$ is irreducible, the group $G$ acts transitively on its roots. Let $\alpha$ and $\beta$ be roots of $f$, and choose $g \in G$ with $g(\alpha)=\beta$. Using normality of $N$ in $G$, the stabilizers $N_\alpha$ and $N_\beta$ of $\alpha$ and $\beta$ in $N$ are related by
$$g N_\alpha g^{-1}=N_\beta,$$ and in particular they have the same cardinality. Since $K/E$ is Galois, the degree of a root of $f$ over $E$ equals the size of its $N$-orbit, which by the orbit-stabilizer theorem is $[N:N_\alpha]$ resp. $[N:N_\beta]$; hence the degrees over $E$ of $\alpha$ and $\beta$ are equal.
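
(Editorial check of the claim on the example $K=\mathbb{Q}(\sqrt[8]{2},i)$, $E=\mathbb{Q}(\sqrt{2})$ from the question. A sympy sketch, assuming minimal_polynomial accepts the domain keyword, as recent sympy versions do; the computation may be slow.)

```python
from sympy import I, QQ, Rational, Symbol, degree, exp, minimal_polynomial, pi, sqrt

# every root of x^8 - 2 should have the same degree over E = Q(sqrt(2)), namely 4
x = Symbol('x')
E = QQ.algebraic_field(sqrt(2))
for k in range(8):
    root = 2 ** Rational(1, 8) * exp(I * pi * k / 4)
    print(k, degree(minimal_polynomial(root, x, domain=E), x))   # expect 4 each time
```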



calculus - Evaluation of $\int_0^{\pi/3} \ln^2\left(\frac{\sin x }{\sin (x+\pi/3)}\right)\,\mathrm{d}x$



I plan to evaluate



$$\int_0^{\pi/3} \ln^2\left(\frac{\sin x }{\sin (x+\pi/3)}\right)\, \mathrm{d}x$$
and I need a starting point for both real and complex methods. Thanks!




Sis.


Answer



It turns out that this integral takes on a very simple form amenable to analysis via residues. Let $u = \sin{x}/\sin{(x+\pi/3)}$. We may then find that (+)



$$\tan{x} = \frac{(\sqrt{3}/2)u}{1-(u/2)}$$



A little bit of algebra reveals a very nice form for the differential:



$$dx = \frac{\sqrt{3}}{2} \frac{du}{1-u+u^2}$$




so the original integral takes on a much simpler-looking form:



$$\frac{\sqrt{3}}{2} \int_0^1 du \frac{\log^2{u}}{1-u+u^2}$$



This is not ready for contour integration yet. We may transform this into such an integral by substituting $u=1/v$ and observing that



$$\int_0^1 du \frac{\log^2{u}}{1-u+u^2} = \int_1^\infty du \frac{\log^2{u}}{1-u+u^2} = \frac{1}{2} \int_0^\infty du \frac{\log^2{u}}{1-u+u^2}$$



We may now analyze that last integral via the residue theorem. Consider the integral




$$\oint_C dz \frac{\log^3{z}}{1-z+z^2}$$



where $C$ is a keyhole contour that passes up and back along the positive real axis. It may be shown that the integrals along the large and small circular arcs vanish as the radii of the arcs go to $\infty$ and $0$, respectively. We may then write the integral in terms of positive contributions just above the real axis and negative contributions just below. The result is



$$\oint_C dz \frac{\log^3{z}}{1-z+z^2} = \begin{array}\\ i \left ( - 6 \pi \int_0^\infty du \frac{\log^2{u}}{1-u+u^2} + 8 \pi^3 \int_0^\infty du \frac{1}{1-u+u^2} \right ) \\ + 12 \pi^2 \int_0^\infty du \frac{\log{u}}{1-u+u^2} \end{array}$$



We set this equal to $i 2 \pi$ times the sum of the residues of the poles of the integrand within $C$. The poles are $z \in \{e^{i \pi/3},e^{i 5\pi/3}\}$. The residues are



$$\mathrm{Res}_{z=e^{i \pi/3}} = -\frac{\pi^3}{27 \sqrt{3}}$$




$$\mathrm{Res}_{z=e^{i 5\pi/3}} = \frac{125 \pi^3}{27 \sqrt{3}}$$



$i 2 \pi$ times the sum of these residues is then



$$i \frac{248 \pi^4}{27 \sqrt{3}}$$



Equating imaginary parts of the integral to the above quantity, we see that



$$ - 6 \pi \int_0^\infty du \frac{\log^2{u}}{1-u+u^2} + 8 \pi^3 \int_0^\infty du \frac{1}{1-u+u^2} = \frac{248 \pi^4}{27 \sqrt{3}}$$




Now, I will state, with the proof deferred to the end, that (++)



$$\int_0^\infty du \frac{1}{1-u+u^2} = \frac{4 \pi}{3 \sqrt{3}}$$



Then with a little arithmetic, we find that



$$\int_0^\infty du \frac{\log^2{u}}{1-u+u^2} = \frac{20 \pi^3}{81 \sqrt{3}}$$



The integral we want is $\sqrt{3}/4$ times this value; therefore





$$\int_0^{\pi/3} dx \log^2{\left [ \frac{\sin{x}}{\sin{(x + \pi/3)}}
\right ]} = \frac{5\pi^3}{81}$$
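
(Editorial check: the value is easy to confirm numerically; the $\log^2$ endpoint singularities are integrable and quad copes with them here.)

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: np.log(np.sin(t) / np.sin(t + np.pi / 3)) ** 2
val, err = quad(f, 0, np.pi / 3)
print(val, 5 * np.pi ** 3 / 81)   # both ≈ 1.91396
```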




Proof of (++)



Now, to prove (++), we go right back to the observation (+) that




$$ x = \int \frac{du}{1-u+u^2} \implies \tan{\left ( \frac{\sqrt{3}}{2} x \right )} = \frac{(\sqrt{3}/2)u}{1-(u/2)}$$



Therefore



$$\int_0^1 \frac{du}{1-u+u^2} = \frac{2}{\sqrt{3}} \left [ \arctan \left ( \frac{(\sqrt{3}/2) u}{1-(u/2)} \right ) \right ]_0^1 = \frac{2}{\sqrt{3}} \frac{\pi}{3}$$



and we showed that this is $1/2$ the integral over $[0,\infty)$, and



$$\int_0^\infty \frac{du}{1-u+u^2} = \frac{4 \pi}{3 \sqrt{3}}$$




QED


calculus - Limit $\lim_{n \rightarrow \infty} (A_1^n + \dots + A_k^n)^{1/n}= \max\{A_1, \dots, A_k\}$




I have the following question. Let $A_1, \dots, A_k$ be positive numbers; does the following limit exist?




$$ \lim_{n \rightarrow \infty} (A_1^n + \dots + A_k^n)^{1/n} $$
My work:
WLOG let $A_1= \max\{A_1, \dots, A_k\}$, so I have
$$ A_1^n \leq A_1^n + \dots + A_k^n \leq kA_1^n $$
so that



$$ A_1 = \lim_{n \rightarrow \infty} (A_1^n)^{1/n} \leq \lim_{n \rightarrow \infty}(A_1^n + \dots + A_k^n)^{1/n} \leq \lim_{n \rightarrow \infty} (kA_1^n)^{1/n} = kA_1 $$



Can I do something else to sandwich the limit?



Answer



You have made a little mistake. Correction:
$$ A_1 = \lim_{n \rightarrow \infty} (A_1^n)^{1/n} \leq \lim_{n \rightarrow \infty}(A_1^n + \dots + A_k^n)^{1/n} \leq \lim_{n \rightarrow \infty} (kA_1^n)^{1/n} = \lim_{n \rightarrow \infty} A_1{\color{blue}{k^{1/n}}}=A_1 $$
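
(Editorial illustration: the correction can also be seen numerically; factoring out $M=\max A_i$ mirrors the sandwich argument and avoids overflow for large $n$.)

```python
from math import fsum

A = [1.5, 3.0, 2.0]
M = max(A)
for n in (1, 10, 100, 1000):
    s = fsum((a / M) ** n for a in A)   # lies between 1 and k
    print(n, M * s ** (1.0 / n))        # tends to M = 3.0, not k*M
```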


limit of $(\cos x)^{\tan x}$



As far as I know, $0^\infty$ is not an indeterminate form and it goes to zero. Then the limit of $(\cos x)^{\tan x}$ as $x$ goes to $\pi/2^-$ should equal $0$.




But after a log transformation, I seem to get that its limit is $\infty$. I am not sure which one is correct, zero or infinity?



Thanks for your help.



I slightly changed the question: I realized that $x$ goes to $\pi/2^-$ (a left-hand limit) in the original question.


Answer



Suppose that $f$ and $g$ are functions such that as $x \to a$, $f(x) \to 0^+$ and $g(x) \to \infty$, and consider $h(x) = f(x)^{g(x)}$. Since $\ln f(x) \to -\infty$ while $g(x) \to +\infty$, we know that
$$
\lim_{x \to a} \ln h(x) = \lim_{x \to a} g(x) \ln f(x) = -\infty.
$$
Thus,
$$
\lim_{x \to a} h(x) = 0.
$$
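
(Editorial numeric check of the conclusion:)

```python
import numpy as np

# cos(x)^tan(x) -> 0 as x -> (pi/2)^-
for eps in (1e-1, 1e-2, 1e-3):
    x = np.pi / 2 - eps
    print(eps, np.cos(x) ** np.tan(x))   # shrinks to 0 extremely fast
```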


sequences and series - Evaluation of a dilogarithmic integral





Problem. Prove that the following dilogarithmic integral has the indicated value:
$$\int_{0}^{1}\mathrm{d}x \frac{\ln^2{(x)}\operatorname{Li}_2{(x)}}{1-x}\stackrel{?}{=}-11\zeta{(5)}+6\zeta{(3)}\zeta{(2)}.$$







My attempt:




I began by using the polylogarithmic expansion in terms of generalized harmonic numbers,



$$\frac{\operatorname{Li}_r{(x)}}{1-x}=\sum_{n=1}^{\infty}H_{n,r}\,x^n;~~r=2.$$



Then I switched the order of summation and integration and used the substitution $u=-\ln{x}$ to evaluate the integral:



$$\begin{align}
\int_{0}^{1}\mathrm{d}x \frac{\ln^2{(x)}\operatorname{Li}_2{(x)}}{1-x}
&=\int_{0}^{1}\mathrm{d}x\ln^2{(x)}\sum_{n=1}^{\infty}H_{n,2}x^n\\
&=\sum_{n=1}^{\infty}H_{n,2}\int_{0}^{1}\mathrm{d}x\,x^n\ln^2{(x)}\\

&=\sum_{n=1}^{\infty}H_{n,2}\int_{0}^{\infty}\mathrm{d}u\,u^2e^{-(n+1)u}\\
&=\sum_{n=1}^{\infty}H_{n,2}\frac{2}{(n+1)^3}\\
&=2\sum_{n=1}^{\infty}\frac{H_{n,2}}{(n+1)^3}.
\end{align}$$



So I've reduced the integral to an Euler sum, but unfortunately I've never quite got the knack for evaluating Euler sums. How to proceed from here?
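
(Editorial note: before evaluating the Euler sum in closed form, one can at least check it numerically against the conjectured value:)

```python
from math import pi

# 2 * sum_{n>=1} H_{n,2}/(n+1)^3 vs. -11*zeta(5) + 6*zeta(3)*zeta(2)
zeta3, zeta5 = 1.2020569031595943, 1.0369277551433699
H2 = s = 0.0
for n in range(1, 200000):
    H2 += 1.0 / n ** 2
    s += H2 / (n + 1) ** 3
print(2 * s, -11 * zeta5 + 6 * zeta3 * (pi ** 2 / 6))   # both ≈ 0.45757
```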


Answer



$\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\half}{{1 \over 2}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}$
$\ds{\int_{0}^{1}{\ln^2\pars{x}{\rm Li}_2\pars{x} \over 1 - x}\,\dd x\
\stackrel{?}{=}\ -11\zeta\pars{5} + 6\zeta\pars{3}\zeta\pars{2}:\

{\large ?}}$.

$\ds{\large\tt\mbox{The above result is correct !!!}}$.




\begin{align}&\color{#c00000}{\int_{0}^{1}%
{\ln^2\pars{x}{\rm Li}_2\pars{x} \over 1 - x}\,\dd x}
=\int_{0}^{1}{\ln^2\pars{x} \over 1 - x}
\sum_{n = 1}^{\infty}{x^{n} \over n^{2}}\,\dd x
\\[3mm]&=\int_{0}^{1}{\ln^2\pars{x} \over 1 - x}\bracks{%
\sum_{n = 1}^{\infty}{1 \over n^{2}}-
\sum_{n = 1}^{\infty}{1 - x^{n} \over n^{2}}}\,\dd x

\\[3mm]&=\zeta\pars{2}
\int_{0}^{1}{\ln^2\pars{x} \over 1 - x}\,\dd x
-\int_{0}^{1}{\ln^2\pars{x} \over 1 - x}
\sum_{n = 1}^{\infty}{1 - x^{n} \over n^{2}}\,\dd x
\end{align}




However,
\begin{align}
\color{#00f}{\int_{0}^{1}{\ln^2\pars{x} \over 1 - x}\,\dd x}&=

\int_{0}^{1}\ln\pars{1 - x}\,\bracks{2\ln\pars{x}\,{1 \over x}}\,\dd x
=-2\int_{0}^{1}{\rm Li}_{2}'\pars{x}\ln\pars{x}\,\dd x
\\[3mm]&=2\int_{0}^{1}{\rm Li}_{2}\pars{x}\,{1 \over x}\,\dd x
=2\int_{0}^{1}{\rm Li}_{3}'\pars{x}\,\dd x=2{\rm Li}_{3}\pars{1}
=\color{#00f}{2\zeta\pars{3}}
\end{align}
such that




\begin{align}&\color{#c00000}{\int_{0}^{1}%

{\ln^2\pars{x}{\rm Li}_2\pars{x} \over 1 - x}\,\dd x}
=2\zeta\pars{2}\zeta\pars{3}
-\color{#00f}{\int_{0}^{1}{\ln^2\pars{x} \over 1 - x}
\sum_{n = 1}^{\infty}{1 - x^{n} \over n^{2}}\,\dd x}\tag{1}
\end{align}




Also,
\begin{align}&\color{#00f}{\int_{0}^{1}{\ln^2\pars{x} \over 1 - x}
\sum_{n = 1}^{\infty}{1 - x^{n} \over n^{2}}\,\dd x}

=\sum_{n = 1}^{\infty}{1 \over n^{2}}
\int_{0}^{1}\ln^2\pars{x}\,{1 - x^{n} \over 1 - x}\,\dd x
\\[5mm]&=\sum_{n = 1}^{\infty}{1 \over n^{2}}
\int_{0}^{1}\ln^2\pars{x}\sum_{k = 1}^{n}x^{k - 1}\,\dd x
\\[3mm]&=\sum_{n = 1}^{\infty}{1 \over n^{2}}
\sum_{k = 1}^{n}\ \overbrace{\int_{0}^{1}\ln^2\pars{x}x^{k - 1}\,\dd x}
^{\ds{=\ {2 \over k^{3}}}}\ =\
2\sum_{n = 1}^{\infty}{H_{n}^{\rm\pars{3}} \over n^{2}}\tag{2}
\end{align}





The last sum can be evaluated with the generating function
$\ds{\sum_{n = 1}^{\infty}x^{n}H_{n}^{\rm\pars{3}}
={{\rm Li}_{3}\pars{x} \over 1 - x}}$. Namely
\begin{align}
\sum_{n = 1}^{\infty}{x^{n} \over n}\,H_{n}^{\rm\pars{3}}
&=\int_{0}^{x}{{\rm Li}_{3}\pars{t} \over t}\,\dd t
+\int_{0}^{x}{{\rm Li}_{3}\pars{t} \over 1 - t}\,\dd t
\\[3mm]&={\rm Li}_{4}\pars{x} - \ln\pars{1 - x}{\rm Li}_{3}\pars{x}
+ \int_{0}^{x}\ln\pars{1 - t}{\rm Li}_{3}'\pars{t}\,\dd t

\\[3mm]&={\rm Li}_{4}\pars{x} - \ln\pars{1 - x}{\rm Li}_{3}\pars{x}
+ \int_{0}^{x}\ln\pars{1 - t}\,{{\rm Li}_{2}\pars{t} \over t}\,\dd t
\\[3mm]&={\rm Li}_{4}\pars{x} - \ln\pars{1 - x}{\rm Li}_{3}\pars{x}
- \int_{0}^{x}{\rm Li}_{2}\pars{t}{\rm Li}_{2}'\pars{t}\,\dd t
\\[3mm]&={\rm Li}_{4}\pars{x} - \ln\pars{1 - x}{\rm Li}_{3}\pars{x}
- \half\,{\rm Li}_{2}^{2}\pars{x}
\\[5mm]\sum_{n = 1}^{\infty}{H_{n}^{\rm\pars{3}} \over n^{2}}
&=\int_{0}^{1}{{\rm Li}_{4}\pars{t} \over t}\,\dd t
- \int_{0}^{1}{\ln\pars{1 - t}{\rm Li}_{3}\pars{t} \over t}\,\dd t
-\half\int_{0}^{1}{{\rm Li}_{2}^{2}\pars{t} \over t}\,\dd t

\\[3mm]&=\zeta\pars{5} + {\rm Li}_{2}\pars{1}{\rm Li}_{3}\pars{1}
-\int_{0}^{1}{\rm Li}_{2}\pars{t}\,{{\rm Li}_{2}\pars{t} \over t}\,\dd t
-\half\int_{0}^{1}{{\rm Li}_{2}^{2}\pars{t} \over t}\,\dd t
\\[3mm]&=\zeta\pars{5} + \zeta\pars{2}\zeta\pars{3}
-{3 \over 2}\color{#c00000}{\int_{0}^{1}{{\rm Li}_{2}^{2}\pars{t} \over t}\,\dd t}
\\[3mm]&=\zeta\pars{5} + \zeta\pars{2}\zeta\pars{3}
-{3 \over 2}\bracks{\color{#c00000}{-3\zeta\pars{5} + 2\zeta\pars{2}\zeta\pars{3}}}
\end{align}
The $\color{#c00000}{\mbox{red result}}$ has been derived elsewhere, so that:

$$
\sum_{n = 1}^{\infty}{H_{n}^{\rm\pars{3}} \over n^{2}}
={11 \over 2}\,\zeta\pars{5} - 2\zeta\pars{2}\zeta\pars{3}
$$




Expression $\pars{2}$ becomes:
$$
\color{#00f}{\int_{0}^{1}{\ln^2\pars{x} \over 1 - x}
\sum_{n = 1}^{\infty}{1 - x^{n} \over n^{2}}\,\dd x}

=11\zeta\pars{5} - 4\zeta\pars{2}\zeta\pars{3}
$$
which we replace in $\pars{1}$:
$$\color{#66f}{\large%
\int_{0}^{1}{\ln^2\pars{x}{\rm Li}_2\pars{x} \over 1 - x}\,\dd x\
=-11\zeta\pars{5} + 6\zeta\pars{3}\zeta\pars{2}}
\approx {\tt 0.4576}
$$
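
(Editorial verification of the final value with mpmath:)

```python
import mpmath as mp

lhs = mp.quad(lambda t: mp.log(t) ** 2 * mp.polylog(2, t) / (1 - t), [0, 1])
rhs = -11 * mp.zeta(5) + 6 * mp.zeta(3) * mp.zeta(2)
print(lhs, rhs)   # both ≈ 0.4576
```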


calculus - Evaluating $\int_{0}^{\pi}\ln (1+\cos x)\, dx$





The problem is



$$\int_{0}^{\pi}\ln (1+\cos x)\ dx$$



What I tried was using standard substitutions like changing $x$ to $\pi - x$, and I also tried integration by parts on it, to no avail. Please help. Also, this is my first question, so please tell me if I am wrong somewhere.


Answer



$$\begin{eqnarray*}\int_{0}^{\pi}\log(1+\cos x)\,dx &=& \int_{0}^{\pi/2}\log(1+\cos x)\,dx+\int_{0}^{\pi/2}\log(1+\cos(\pi-x))\,dx\\ &=& \int_{0}^{\pi/2}\log(\sin^2 x)\,dx=\int_{0}^{\pi}\log(\sin x)\,dx \tag{1}\end{eqnarray*}$$
And by a notorious identity:
$$ \prod_{k=1}^{n-1}\sin\frac{k\pi}{n} = \frac{2n}{2^n},\tag{2}$$
hence the RHS of $(1)$ can be computed as a Riemann sum:

$$ \int_{0}^{\pi}\log(\sin x)\,dx = \lim_{n\to +\infty}\frac{\pi}{n}\sum_{k=1}^{n-1}\log\sin\frac{\pi k}{n}=\color{red}{-\pi \log 2}.\tag{3}$$
There is also a well-known proof through symmetry:
$$ \begin{eqnarray*}I=\int_{0}^{\pi}\log(\sin x)&=&2\int_{0}^{\pi/2}\log(\sin(2t))\,dt=2\int_{0}^{\pi/2}\log(2\sin t\cos t)\,dt\\&=&\pi\log 2+2\int_{0}^{\pi/2}\log(\sin t)\,dt+2\int_{0}^{\pi/2}\log(\cos t)\,dt\\&=&\pi \log 2 + 2I\tag{4}\end{eqnarray*}$$
from which $I=-\pi\log 2$ immediately follows.
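
(Editorial numeric check; the log singularity at $x=\pi$ is integrable, though quad may emit a warning:)

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.log(1 + np.cos(x)), 0, np.pi)
print(val, -np.pi * np.log(2))   # both ≈ -2.17759
```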


real analysis - Beppo Levi implies monotone convergence theorem?



For Lebesgue measurable functions, is it true that the Beppo Levi theorem implies the monotone convergence theorem?




Beppo-Levi: Suppose $ \sum_{k=1}^{\infty} \int |f_k |\, dm $ is finite. Then the series $\sum f_k(x) $ converges for almost all $x$, its sum is integrable and
$$ \int \sum f_k \,dm = \sum \int f_k\, dm $$



Monotone convergence theorem: if $\{ f_n \}$ is a sequence of non-negative measurable functions and $\{ f_n(x) : n \geq 1 \}$ increases monotonically to $f(x)$ for each $x$ (i.e. $f_n \to f$ pointwise), then $$ \lim \int\limits_E f_n\, dm = \int\limits_E f\, dm $$


Answer



The monotone convergence theorem handles infinities gracefully, which can only be done for functions that are positive (or otherwise reasonably controlled from below). In particular, nowhere does it assume that $f_n$, $f$, or the integrals, are finite. This seems to be beyond the scope of Beppo Levi, so I'm not sure that fixing this issue alone is considerably easier than proving everything from scratch. But let me try.



Depending on your version of definitions, it may or may not be trivial that for a positive function $f$ the following special case of monotone convergence holds:




$$\intop_E f dm = \lim_{C \to +\infty, E_n \uparrow E} \intop_{E_n} \min(f, C) dm$$



where $E_n$ are sets of finite measure that approximate $E$ (I assume $\sigma$-finiteness; if it fails then we should restrict to $\{f > 0\}$; if it fails even there then monotone convergence holds almost trivially with both sides infinite).



Now in order to make use of Beppo Levi we should make the limit finite. I would do that by replacing $f_n$ by $f_{n,C,k} := (f_n \wedge C) \mathsf{1}[E_k]$ and $f$ by $f_{C,k} := (f \wedge C) \mathsf{1}[E_k]$ for some fixed $k$. Now we can safely apply Beppo Levi to the successive differences $f_{n+1,C,k} - f_{n,C,k}$ to obtain



$$\intop f_{C,k} dm = \lim_{n \to \infty} \intop f_{n,C,k} dm$$



Now take $C$ and $k$ to $\infty$ and interchange the limits. You can always do this with monotonely increasing limits (this is equivalent to rearrangement of terms in positive series, or Fubini on $\mathbb{N} \times \mathbb{N}$, or whatever you prefer). On the other hand, monotone convergence itself is about rearrangement or Fubini on $E \times \mathbb{N}$, so I'm not even sure you would view the things that I rely on as more basic than those that you prove...


Sunday 21 April 2013

real analysis - Does the series $\sum_{n=4}^\infty \frac{(-1)^n}{\log \log n}$ converge?

Does the series $\sum_{n=4}^\infty \frac{(-1)^n}{\log \log n}$ converge?



I thought about the alternating series test, but for some reason this seems too easy. Why does it start with $n=4$? And how do I prove that $\frac1{\log \log n}$ is decreasing?

real analysis - Let $f:[a,b]\to\Bbb{R}$ be continuous. Does $\max\{|f(x)|:a\leq x\leq b\}$ exist?



Let $f:[a,b]\to\Bbb{R}$ be continuous. Does \begin{align}\max\{|f(x)|:a\leq x\leq b\} \end{align} exist?



MY WORK



I believe it does and I want to prove it.




Since $f:[a,b]\to\Bbb{R}$ is continuous, $f$ is uniformly continuous. Let $\epsilon> 0$ be given; then $\exists\, \delta>0$ such that $\forall x,y\in [a,b]$ with $|x-y|<\delta,$ we have $|f(x)-f(y)|<\epsilon.$



Then, for $a\leq x\leq b,$



\begin{align} f(x)=f(b)+[f(x)-f(b)]\end{align}
\begin{align} |f(x)|\leq |f(b)|+|f(x)-f(b)|\end{align}
\begin{align} \max\limits_{a\leq x\leq b}|f(x)|\leq |f(b)|+\max\limits_{a\leq x\leq b}|f(x)-f(b)|\end{align}



I am stuck at this point. Please, can anyone show me how to continue from here?


Answer




Because $[a,b]$ is compact, every sequence in $[a,b]$ has a subsequence that limits to a point in $[a,b]$. Pick a sequence $x_n \in [a,b]$ such that $\lim_{n \rightarrow \infty} |f(x_n)| = \sup_{[a,b]}|f|$. Now get a convergent subsequence $x_{n_k}$ that converges to some $x \in [a,b]$.



By continuity of $|f|$,



$|f(x)| = |f( \lim_{k\rightarrow \infty}x_{n_k})| = \lim_{k\rightarrow \infty} |f(x_{n_k})| = \lim_{n \rightarrow \infty} |f(x_n)| = \sup_{[a,b]}|f|$.


Saturday 20 April 2013

Real valued function which is continuous only on transcendental numbers

First of all, I am sorry for asking this question.




We know that $\mathbb{R}$ is uncountable, and also the set of all transcendental numbers is uncountable.
How can I construct a function $f(x)$ on $\mathbb{R}$ which is continuous only at the transcendental numbers? Is it possible?



Thanks in advance.

calculus - Proof that the function is uniformly continuous




Prove that the function $f: [0, \infty) \ni x \mapsto \frac{x^{2}}{x + 1} \in \mathbb{R}$ is uniformly continuous.




On the internet I found that a function is uniformly continuous when



$\forall \varepsilon > 0 \ \exists \delta > 0: \left | f(x)-f(x_{0}) \right | < \varepsilon$ whenever $\left | x - x_{0} \right | < \delta .$




Because I don't know how to prove it by computation, I have drawn the function and illustrated its uniform continuity that way.
But I'd like to know how to do it the other, more professional and efficient way. I've watched some videos but still couldn't find a solution. I also tried myself for almost 2 hours, but nothing came of it.



For the drawing, I think there is actually a mistake: at the beginning the epsilon (the first one, which I haven't drawn) seems smaller than the others, while the others have the same size.
(In addition, I skipped the other part of the function because it's trivial.)



Here is the picture: [hand-drawn sketch omitted]


Answer




The best way to start these types of problems is by messing with the part $|f(x) - f(y)| < \epsilon$ of the definition. Note that, by combining fractions and multiplying everything out, we have



$f(x) - f(y) = \frac{x^2}{x+1} - \frac{y^2}{y+1} = \frac{x^2y-y^2x+x^2-y^2}{xy+x+y+1}$.



After playing around with some grouping I found that this can be rewritten as



$\frac{xy(x-y)+(x^2-y^2)}{xy+x+y+1} = \frac{(x-y)(xy+x+y)}{xy+x+y+1}$.



As a hint for where to go from here, it is important to remember that $x, y \in [0, \infty)$.
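
(Editorial spot-check of the hinted bound: since $x,y\ge 0$ makes the factor $\frac{xy+x+y}{xy+x+y+1}$ less than $1$, we expect $|f(x)-f(y)|\le |x-y|$; a random-sampling sketch:)

```python
import random

f = lambda t: t * t / (t + 1)
for _ in range(100000):
    x, y = random.uniform(0, 1e6), random.uniform(0, 1e6)
    # tiny tolerance for floating-point rounding
    assert abs(f(x) - f(y)) <= abs(x - y) + 1e-9
print("bound held on all samples")
```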


calculus - Evaluate $\lim (n!)^{-1/(n \ln n)}$




Problem :



Evaluate $$\lim_{n\to\infty} \left( \frac{1}{n!}\right)^\frac{1}{n \ln n}$$






My Attempts:



Suppose
\begin{align}

&L=\lim_{n\to\infty} \left( \frac{1}{n!}\right)^\frac{1}{n \ln n}\\
&\ln L=\lim_{n\to\infty}\frac{1}{n\ln n} \ln \left(\frac{1}{n!} \right) \\
& =-\lim_{n\to\infty}\frac{\ln n!}{n\ln n}\\
& =-\lim_{n\to\infty}\frac{\ln n + \ln(n-1) + \cdots+\ln1}{n\ln n} \\
& = 0 \\
&\Leftrightarrow L=1
\end{align}







But this is not the correct answer. Where am I wrong?
And how can I solve this without using Stirling's approximation?


Answer



You can use a Riemann sum to handle the limit. In fact,
\begin{eqnarray}
\frac{\ln(n!)}{n\ln n}&=&\frac{\sum_{k=1}^n\ln k}{n\ln n}=1+\frac1{\ln n} \sum_{k=1}^n\frac1n\ln(\frac{k}{n}).
\end{eqnarray}

Since
$$ \lim_{n\to\infty}\frac1{\ln n}=0,\lim_{n\to\infty}\sum_{k=1}^n\frac1n\ln(\frac{k}{n})=\int_0^1\ln x dx=-1 $$
one has

\begin{eqnarray}
\lim_{n\to\infty}\frac{\ln(n!)}{n\ln n}=\lim_{n\to\infty}\bigg[1+\frac1{\ln n} \sum_{k=1}^n\frac1n\ln(\frac{k}{n})\bigg]=1.
\end{eqnarray}
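
(Editorial numeric check: the limit is $e^{-1}\approx 0.3679$; note the convergence is slow, of order $1/\ln n$.)

```python
from math import exp, lgamma, log

# lgamma(n+1) = ln(n!) avoids computing huge factorials
for n in (10, 10 ** 3, 10 ** 6):
    print(n, exp(-lgamma(n + 1) / (n * log(n))))   # creeps toward 1/e ≈ 0.3679
```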


Friday 19 April 2013

summation - Prove that for $n \in \mathbb{N}$, $\sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n$



I'm learning the basics of proof by induction and wanted to see if I took the right steps with the following proof:



Theorem: for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $



Base Case:



Let $n = 1$. Then $$2\cdot 1+1 = 1^{2}+2\cdot 1, $$ which proves that the base case is true.




Inductive Hypothesis:



Assume $$\sum_{k=1}^{n} (2k+1) = n^{2} + 2n $$



Then $$\sum_{k=1}^{n+1} (2k+1) = (n+1)^{2} + 2(n+1) $$
$$\iff (2(n+1) +1)+ \sum_{k=1}^{n} (2k+1) = (n+1)^{2} + 2(n+1) $$
Using inductive hypothesis on summation term:
$$\iff(2(n+1) +1)+ n^{2} + 2n = (n+1)^{2} + 2(n+1) $$
$$\iff 2(n+1) = 2(n+1) $$




Hence for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $ Q.E.D.



Does this prove the theorem? Or was my use of the inductive hypothesis circular logic?


Answer



Your proof looks fine, but if you know that
$$1+2+...+n=\frac{n(n+1)}{2}$$
then you can evaluate
$$\sum_{k=1}^n(2k+1)=2\sum_{k=1}^n k+\sum_{k=1}^n1=\rlap{/}2\frac{n(n+1)}{\rlap{/}2}+n=n^2+2n$$
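
(Editorial quick check of the identity for small $n$:)

```python
for n in range(1, 10):
    assert sum(2 * k + 1 for k in range(1, n + 1)) == n ** 2 + 2 * n
print("ok")
```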


real analysis - Function Satisfying $f(x)=f(2x+1)$



If $f: \mathbb{R} \to \mathbb{R}$ is a continuous function and satisfies $f(x)=f(2x+1)$, then it's not too hard to show that $f$ is constant.




My question is: suppose $f$ is continuous and satisfies $f(x)=f(2x+1)$; can the domain of $f$ be restricted so that $f$ need not be constant? If yes, give an example of such a function.


Answer



Let $f$ have value $1$ on $[0,\infty)$ and value $0$ on $(-\infty,-1]$. This function is not constant (although it is locally constant), and satisfies $f(x)=f(2x+1)$ whenever $x$ is in its domain.
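
(Editorial spot-check of the example on its domain $(-\infty,-1]\cup[0,\infty)$:)

```python
def f(x):
    assert x >= 0 or x <= -1, "outside the restricted domain"
    return 1 if x >= 0 else 0

for x in (0, 0.5, 3, -1, -2.5, -10):
    assert f(x) == f(2 * x + 1)
print("f(x) == f(2x+1) holds on all samples")
```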


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How do I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...