Monday 31 December 2012

soft question - Interesting real life applications of elementary mathematics

If you teach mathematics to future high school teachers, you often feel that they are bored because what they learn at university does not have much to do with what they will have to teach in school, and what they will have to teach in school will be boring for their students because it has nothing to do with real life. To remedy this situation, we sometimes offer a course, "Mathematics in Real Life", to explain how often mathematics "just happens" in real life but goes unnoticed. Some sample topics in that course are






I am looking for more examples like those above. They should be accessible to undergraduates (some of the topics above are actually a bit too hard), have some impact on real life, and (ideally) be somewhat surprising for someone who hasn't heard of them yet. Thus, I am neither looking for recreational mathematics as in this question nor for "typical" applications of maths in computer simulation, statistics or finance, which usually involve some deeper mathematics.

Sunday 30 December 2012

real analysis - Prob. 3, Chap. 3 in Baby Rudin: If $s_1 = \sqrt{2}$, and $s_{n+1} = \sqrt{2 + \sqrt{s_n}}$, what is the limit of this sequence?

Here's Prob. 3, Chap. 3 in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition:




If $s_1 = \sqrt{2}$, and $$s_{n+1} = \sqrt{2 + \sqrt{s_n}} \ \ (n = 1, 2, 3, \ldots),$$ prove that $\left\{ s_n \right\}$ converges, and that $s_n < 2$ for $n = 1, 2, 3, \ldots$.





My effort:



We can show that $\sqrt{2} \leq s_n \leq 2$ for all $n = 1, 2, 3, \ldots$. [Am I right?]



Then we can also show that $s_n < s_{n+1}$ for all $n = 1, 2, 3, \ldots$. [Am I right?]



But how can one calculate the exact value of the limit? And where does this sequence occur in applications?
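For a numerical handle on the limit: iterating the recurrence shows it stabilizes quickly, and the limit $L$ must satisfy $L = \sqrt{2+\sqrt{L}}$, i.e. $L^2 - \sqrt{L} - 2 = 0$. A quick sketch (the variable names and iteration count are mine):

```python
import math

# iterate s_{n+1} = sqrt(2 + sqrt(s_n)) starting from s_1 = sqrt(2)
s = math.sqrt(2)
for _ in range(100):
    s = math.sqrt(2 + math.sqrt(s))

print(s)  # ≈ 1.8312, the real root of L^2 - sqrt(L) - 2 = 0 in [sqrt(2), 2]
print(s * s - math.sqrt(s) - 2)  # ≈ 0
```

The iteration is a contraction on $[\sqrt2, 2]$, so a hundred steps is far more than enough for machine precision.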

soft question - More unknown / underappreciated results of Euler



What are some of the more unknown and/or underappreciated things that Euler discovered? The man has done so much that there's bound to be notable results that most people aren't aware of.



This could be, say, a constant or theorem that not many people are aware of, or possibly a well-known theorem that is named after someone else even though he was actually the first one to discover it.



I'd request that, in your answer, you explain what makes the result particularly remarkable despite its unfamiliarity to the general math public. This could be the wide range of applications, or the fact that Euler lived over $200$ years ago, etc.




Of course, please stay away from well-known results such as $e^{i \pi} = -1$ or $V - E + F = 2$.


Answer



It is well-known that Euler calculated the values of $\zeta(2n)$ for $n>0$. However, it is less known that he essentially discovered the functional equation of the Riemann zeta function. Euler was interested in the divergent sums



$$1+2^n+3^n + \dots$$



when $n>0$. He noticed that



$$\frac{t}{1+t} = t-t^2+t^3 - \dots$$




so, with $t=1$ we find $$1-1+1-1+\dots = \frac{1}{2},$$
an answer which is of course very silly if we take the left hand side literally. But $1/2$ is indeed the "average" of the partial sums on the left hand side, so maybe it is not completely silly. If we continue with this insanity we find:



$$t\frac{d}{dt}\frac{t}{1+t} = \frac{t}{(1+t)^2} = t - 2t^2 + 3t^3 - \dots$$



and therefore $$1-2+3-\dots = \frac{1}{4}.$$ In general, we have



$$\left(t\frac{d}{dt}\right)^n\frac{t}{1+t}\biggr|_{t=1} = 1-2^n+3^n-4^n+\dots.$$




On the other hand, if we put $t=e^x$, then $t \frac{d}{dt} = \frac{d}{dx}$, and $t=1$ becomes $x=0$:



$$\frac{d^n}{dx^n}\frac{e^x}{1+e^x}\biggr|_{x=0} = 1-2^n+3^n-4^n+\dots.$$



Now for $s>1$ a simple calculation shows that



$$1-2^{-s} + 3^{-s} - \dots = (1-2\times 2^{-s})\zeta(s),$$ hence in analogy, he defined $\zeta(-n)$ as



$$\zeta(-n):= (1-2^{n+1})^{-1} \frac{d^n}{dx^n}\frac{e^x}{1+e^x}\biggr|_{x=0}$$




For instance, for $n=1$ we find



$$\zeta(-1) = (1-4)^{-1} \frac{1}{4} = -1/12.$$



Anyway, it turns out that $e^x/(1+e^x)$ is essentially the generating function of the Bernoulli numbers, and after some simple manipulations, Euler obtained that for $n>1$,



$$\zeta(1-n) = -\frac{B_n}{n}.$$



(Notice in passing that this reveals the "trivial zeroes" of the Riemann zeta function (since the odd Bernoulli numbers vanish).) Hence, putting this together with his work on the values $\zeta(2n)$, Euler found the following explicit formulas:




$$\zeta(1-2n) = -\frac{B_{2n}}{2n}$$
$$\zeta(2n) = \frac{(-1)^{n+1}B_{2n}(2\pi)^{2n}}{2(2n)!}.$$



This is already enough to conjecture the form the functional equation should take, and Euler had all of the ingredients for it. However, it seems that Riemann was the first to explicitly write it down (and to prove it).



(I picked this up from Hida's book Elementary theory of $L$-functions and Eisenstein series (which is, for the most part, far from being elementary...).)
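Euler's formula $\zeta(1-n) = -B_n/n$ is easy to check with exact rational arithmetic. Here is a sketch (the recurrence used and the helper name are mine, not from Hida's book):

```python
from fractions import Fraction
from math import comb

def bernoulli_upto(n):
    """B_0..B_n via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0, B_0 = 1
    (convention B_1 = -1/2)."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

B = bernoulli_upto(8)
print(-B[2] / 2)  # zeta(-1) = -B_2/2 = -1/12
print(-B[4] / 4)  # zeta(-3) = -B_4/4 = 1/120
```

This reproduces Euler's $\zeta(-1) = -1/12$ computed above, with no floating point involved.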


Saturday 29 December 2012

calculus - How to prove that an infinite limit means the limit does not exist, using epsilon and delta

So I was recently taught that if $\lim_{x→0}f(x)=∞$, then the limit does not exist. Can anyone explain that using epsilon and delta, if it's possible? But honestly, any sort of explanation would be fine.

complex analysis - Is it possible for $z = x+iy$ to have more than two Arg(z) values?




I have:



$$z = -4 - 4i$$
Then I know that $\theta = \arctan(y/x)$ which gives me:
$$\arctan(1) = \pi/4$$
But, if we draw the angle for $z = -4 - 4i$, we can observe that we are in the third quadrant.



So, one can just take:
$$\frac{\pi}{4} - \pi = -\frac{3\pi}{4}$$

and I have my $Arg(z)$, which is $\in [-\pi, \pi]$.



My result, which is in polar form is:
$$4\sqrt{2} e^{-\frac{3\pi}{4}i}$$
What I did (and I don't know if it's correct) is:
$$z = -4 - 4i = -4(1 + i) = -4\sqrt{2}(\dfrac{1}{\sqrt{2}} + \dfrac{1}{\sqrt{2}}i)$$
and my final result, which is in polar form is:
$$-4\sqrt{2} e^{\frac{\pi}{4}i}$$



My question: Is it possible to have more than two Arg(z) values?




Note that $\frac{\pi}{4}$ is also $\in [-\pi, \pi]$.



Thank you very much for your help!


Answer



Both your results are correct and represent the same number:
$$4\sqrt{2}e^{-\frac{3\pi}4 i} = 4\sqrt{2} e^{-\pi i + \frac{\pi}4i}=4\sqrt{2}\times(-1) \times e^{\frac\pi4 i}$$



In general, the "polar form" is the first one you wrote (since we want the radius to be positive).




Using the atan function is a "bad good idea": you run into problems because the result of this function always lies in $(-\pi/2,\pi/2)$. You should instead use well-known values of both the sine and cosine functions to determine the argument: for instance, writing
$$z=4\sqrt{2}\left(-\frac{1}{\sqrt{2}} - \frac 1{\sqrt{2}}i\right)$$
we know that $\cos(\theta)=-\frac1{\sqrt{2}}=\sin(\theta)$. Hence (draw a circle if needed) $\theta=-\frac{3\pi}4$.






EDIT:
Hmm, well... the problem is not really in the atan function itself. When you calculate $y/x$, if $x$ and $y$ are both negative then $y/x$ is positive, and you've lost the information about the signs of $x$ and $y$.



We can still use the atan function, but you have to check the sign of $x$ (and, when $x<0$, the sign of $y$) first. For instance, a "good algorithm" would be: $$\arg(x+yi) =\left\{\begin{matrix}\arctan(y/x) \quad\text{ if }x>0 \\ \arctan(y/x)+\pi \quad\text{if } x<0,\ y\ge0 \\ \arctan(y/x)-\pi \quad\text{if } x<0,\ y<0\end{matrix}\right.$$
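This two-argument case analysis is built into most programming languages as `atan2`. A quick check for the example $z=-4-4i$ (a sketch; the variable names are mine):

```python
import cmath
import math

z = -4 - 4j
naive = math.atan(z.imag / z.real)       # atan(1) = pi/4: wrong quadrant
principal = math.atan2(z.imag, z.real)   # -3*pi/4, the principal argument
print(naive, principal, cmath.phase(z))  # cmath.phase agrees with atan2
```

`cmath.phase` returns the principal argument in $(-\pi, \pi]$, exactly the $\operatorname{Arg}(z)$ discussed above.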



real analysis - Finding the limit of a sequence involving n-th roots.



We're asked to find the limit (as $n$ goes to infinity) of the following sequence: $$a_n = \frac {\sqrt[3n]{4}\ -6 \ \sqrt[3n]{2} \ + \ 9}{\sqrt[2n]{9} \ - \ 4 \ \sqrt[2n]{3}+ 4} $$.



I thought that since the limit of the numerator exists, and since the limit of the denominator also exists and is non-zero, then the limit of the fraction should be $\frac{9}{4}$.



But apparently, the limit of this sequence is $4$ as $n \rightarrow +\infty$.




I don't understand why my approach is incorrect, nor why the limit of the sequence is $4$.


Answer



It's just $$\frac{1-6+9}{1-4+4}=4$$ because for example $4^{\frac{1}{3n}}\rightarrow4^0=1$.
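A numerical sanity check of this limit (a sketch; the helper name is mine):

```python
def a(n):
    # the sequence from the question, with each k-th root written as a power
    num = 4 ** (1 / (3 * n)) - 6 * 2 ** (1 / (3 * n)) + 9
    den = 9 ** (1 / (2 * n)) - 4 * 3 ** (1 / (2 * n)) + 4
    return num / den

print(a(10), a(1000), a(10**6))  # approaches 4 from above
```

Each root tends to $1$, so the numerator tends to $1-6+9=4$ and the denominator to $1-4+4=1$, matching the answer.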


Friday 28 December 2012

real analysis - Why do we need $x_0$ to be a cluster point if we take the limit $\lim_{x \to x_0} f(x)$?

We did limits of functions recently and I am wondering why we always required that $x_0$ is a cluster point of the domain. Why would taking the limit not work if $x_0$ is not a cluster point?




Our definition of a limit of a function is




Let $D \subseteq \mathbb R$ be a subset, $x_0$ a cluster point of $D$ and $f: D \to \mathbb R$ a function. We say $f$ converges to $L \in \mathbb R$ and write $\lim_{x \to x_0} f(x) = L$ $\iff \forall \varepsilon \gt 0 \, \exists \delta \gt 0 \, \forall x \in D \setminus \{x_0\}: |x-x_0| \lt \delta \implies |f(x)-L| \lt \varepsilon$




Our definition of a cluster point is





Let $D \subseteq \mathbb R$ be a subset and $x_0\in \mathbb {R}$. We say $x_0$ is a cluster point of $D$ $\iff$ for every $\delta \gt 0$ we have $D \cap (x_0-\delta, x_0+\delta) \setminus \{x_0\} \neq \emptyset$


integration - Calculate $\int_0^{2\pi} \frac{\sin(t) + 4}{\cos(t) + \frac{5}{3}}\,dt$



I have to calculate $\int_0^{2\pi} \frac{\sin t + 4}{\cos t + \frac{5}{3}}dt$ using complex analysis.



I was thinking of setting $z(t) = re^{it}$, but I'm not sure what $r$ to pick — or can I just pick any, and is this even useful? Do I have to worry about the numerator of the integrand? Before this I only had to calculate integrals around some curve, then look at the singular values and use the residue theorem. Now it seems I have to do it the other way around?



Answer



HINT: split the integral into two summands:
$$\int_0^{2\pi} \frac{\sin t + 4}{\cos t + \frac{5}{3}} dt = \int_0^{2\pi} \frac{\sin t}{\cos t + \frac{5}{3}} dt + 4\int_0^{2\pi} \frac{dt}{\cos t + \frac{5}{3}} =$$
$$=\left. -\log \left( \cos t + \frac{5}{3} \right) \right|_0^{2\pi} + 4\int_{|z|=1} \frac{1}{\frac{z+z^{-1}}{2} + \frac{5}{3}} \frac{dz}{iz}$$



Where you substitute $z=e^{it}$, so that $dz=ie^{it} dt= iz dt$ and $\cos t = \frac{e^{it}+e^{-it}}{2}=\frac{z+z^{-1}}{2}$.



Continuing, you get
$$0+ \frac{24}{i} \int_{|z|=1} \frac{dz}{(z+3)(3z+1)} = \frac{24}{i} \left(2\pi i \operatorname{Res} \left( \frac{1}{(z+3)(3z+1)} , -\frac{1}{3} \right)\right) = 48 \pi \frac{1}{8} = 6 \pi$$
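The trapezoidal rule converges extremely fast for smooth periodic integrands, so the value $6\pi$ is easy to confirm numerically (a sketch):

```python
import math

# periodic trapezoidal rule for the original real integral over [0, 2*pi]
N = 10_000
h = 2 * math.pi / N
total = h * sum(
    (math.sin(k * h) + 4) / (math.cos(k * h) + 5 / 3) for k in range(N)
)
print(total, 6 * math.pi)  # both ≈ 18.8495559
```

For a periodic integrand the left-endpoint sum and the trapezoidal rule coincide, and the error decays faster than any power of $h$.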


Thursday 27 December 2012

summation - Proving this inequality by mathematical induction




I want to prove this inequality:



$$ \sum\limits_{i=1}^{2^n} \frac{1}{i} \geq 1 + \frac{n}{2}.$$



So, if I suppose that the inequality holds for a natural number $k$, then



$$ \sum\limits_{i=1}^{2^{k+1}} \frac{1}{i} = {\sum\limits_{i=1}^{2^k} \frac{1}{i}} + {\sum\limits_{i=2^{k}+1}^{2^{k+1}} \frac{1}{i}}.$$



Thus, I just have to prove that $\sum\limits_{i=2^{k}+1}^{2^{k+1}} \frac{1}{i} \geq \frac{1}{2}$, but I'm stuck on this. I know there are $2^{k}$ natural numbers between $2^k$ and $2^{k+1}$, but I'm not sure how to use it. I'd appreciate your help.



Answer



It's enough to prove that
$$1+\frac{n}{2}+\frac{1}{2^n+1}+\frac{1}{2^n+2}+...+\frac{1}{2^{n+1}}\geq1+\frac{n+1}{2}$$ or
$$\frac{1}{2^n+1}+\frac{1}{2^n+2}+...+\frac{1}{2^{n+1}}\geq\frac{1}{2},$$
which is true because
$$\frac{1}{2^n+1}+\frac{1}{2^n+2}+...+\frac{1}{2^{n+1}}\geq\frac{1}{2^{n+1}}+\frac{1}{2^{n+1}}+...+\frac{1}{2^{n+1}}=\frac{2^n}{2^{n+1}}=\frac{1}{2}.$$
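The inequality can also be checked exactly for small $n$ with rational arithmetic (a sketch; the helper name is mine):

```python
from fractions import Fraction

def H(m):
    # partial harmonic sum 1 + 1/2 + ... + 1/m, exactly
    return sum(Fraction(1, i) for i in range(1, m + 1))

for n in range(12):
    assert H(2 ** n) >= 1 + Fraction(n, 2)
print("sum_{i=1}^{2^n} 1/i >= 1 + n/2 holds for n = 0..11")
```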


calculus - Factorial in power series; intuitive/combinatorial interpretation?




It is well known that the terms of the power series of exponential and trigonometric functions often involve the factorial function, essentially as a consequence of iterating the power rule.



My question is whether there is another way to view the occurrence of factorials so often; factorials are often involved in counting/combinatorics, so is there a combinatorial interpretation of why this happens, or some other interesting interpretation?



More generally, is there any way I can look at the power series of $e^x$ or some other function and intuitively understand why factorials are involved, rather than just thinking of the power rule and derivatives?


Answer



We have $\sin^2t+\cos^2t=1$ and $\cosh^2t-\sinh^2t=1$, both of which are born of the implicit algebraic equations of the circle and hyperbola, $x^2\pm y^2=r^2$. In other words, these two geometric shapes, studied since the time of the ancient Greeks, are defined by constant or bounded sums of powers. And where are factorials known to occur ? In the expression of binomial coefficients, which famously characterize the binomial theorem, which expands the power of a sum into a sum of powers. So the intrinsic umbilical link between trigonometric or hyperbolic functions and factorials or binomial coefficients is as natural and intuitive as can be. Thus, it should come as no surprise that the beta and $\Gamma$ functions of argument $1/n$ are inherently connected to geometric figures described by equations of the form $x^n+y^n=r^n$, yielding, for instance, the celebrated identities $\Gamma\bigg(\dfrac12\bigg)=\sqrt\pi$ and $B\bigg(\dfrac12~,~\dfrac12\bigg)=\pi$.
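The two identities at the end are easy to check numerically (a sketch; `math.gamma` is the standard-library Gamma function, and the Beta value is computed from $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$):

```python
import math

g_half = math.gamma(0.5)
print(g_half, math.sqrt(math.pi))  # both ≈ 1.7724539

# B(1/2, 1/2) = Gamma(1/2) * Gamma(1/2) / Gamma(1) = pi
beta_half_half = g_half * g_half / math.gamma(1.0)
print(beta_half_half, math.pi)
```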


Tuesday 25 December 2012

real analysis - Construct a Borel set on $\mathbb R$ that intersects every open interval with non-zero, non-"full" measure



This is from problem $8$, Chapter II of Rudin's Real and Complex Analysis.



The problem asks for a Borel set $M$ on $R$, such that for any interval $I$, $M \cap I$ has measure greater than $0$ and less than $m(I)$.




I was thinking of taking the Cantor approach: taking $\mathbb R$ to be the union of intervals $[a,b]$ with $a$ and $b$ rational, and constructing a Cantor set inside each $[a,b]$. During the construction of each Cantor set, in order for it to have positive measure, we need to remove smaller and smaller intervals, so that the proportion removed goes to $0$. As a result, these Cantor sets are extremely "dense" near their ends. If an interval $I$ intersects the Cantor set on $[a,b]$ while $b-a \gg m(I)$, we should expect the measure of the intersection to be rather close to $m(I)$, and then we lose control in these cases.



Is there any way to fix this or shall I consider other approaches?



Thank you


Answer



Instead of trying to understand your approach (which sounds complicated), let me tell you how I'd do it. I guess you know how to construct a closed nowhere dense set of positive measure inside a given interval, right? Enumerate all the rational intervals in a sequence $I_1,I_2,I_3,\dots$. Now construct an infinite sequence $M_1,N_1,M_2,N_2,M_3,N_3,\dots$ of pairwise disjoint closed nowhere dense sets of positive measure, with $M_k,N_k\subset I_k$. [1] The $F_\sigma$-set $M=M_1\cup M_2\cup M_3\cup\cdots$ does what you want. [2]



[1] Note that, at each step of the construction, you have constructed a finite number of closed nowhere dense sets, whose union is therefore a closed nowhere dense set. Hence the interval $I_k$ you are currently working in will contain a subinterval which is disjoint from that nowhere dense set. Construct the next closed nowhere dense set inside that subinterval.




[2] Any interval $I$ contains some rational interval $I_k$. Since $N_k\subseteq I_k\subseteq I$ and $N_k\cap M=\emptyset$, it follows that $M\cap I\subseteq I\setminus N_k$ and $m(M\cap I)\le m(I\setminus N_k)=m(I)-m(N_k)\lt m(I)$.


intuition - Math Courses involving clever integration techniques

I am a third year undergraduate mathematics student.



I learned some basic techniques for simplifying sums in high school algebra, but I have encountered some of the more interesting techniques in my combinatorics classes and math contests. Many of my favorite techniques involve showing some sort of bijection between things.




However, I feel that I have learned almost no new cool integration technique since I took the AP Calculus exam in high school. The first combinatorics book I remember reading had a large chunk devoted to interesting techniques for evaluating summations, preferably with bijective techniques. I have yet to encounter a satisfying analog for integrals.



There are two main things I have had difficulty finding out much about:




  1. What "subject" (perhaps a course I can take, or a book I can look up) might I look into for finding a plethora of interesting techniques for calculating integrals (e.g. for summations I might take a course in combinatorics or read "Concrete Mathematics" by Knuth et al)?


  2. I am particularly interested in analogs of "bijective proofs" for integrals. Perhaps there are techniques that look for a geometric interpretation of integrals that makes this possible? I often love "bijective proofs" because there is often almost no error-prone calculation involved. In fact, I often colloquially define "bijective proofs" this way: as any method of proof in which the solution becomes obvious from interpreting the problem in more than one way.




I don't know how useful it would be to calculate interesting (definite or indefinite) integrals, but I feel like it would be a fun endeavor to look into, and as a start I'd like to know what is considered "commonly known".

real analysis - Need help to evaluate $I_n=\int_{0}^{\infty}e^{-x}\sin(n\ln(x))\,dx$

For $n\in\mathbb{N}$, I'm trying to find a closed form for the following integrals :
$$I_n=\int_{0}^{\infty}e^{-x}\sin(n\ln(x))\text{d}x$$



My real objective is to evaluate $\sum\limits_{n=1}^{\infty}\frac{I_n}{n}$, and since interchanging the sum and the integral didn't lead anywhere, I suppose that finding a closed form expression for $I_n$ is the way to go, but I'm lost as of how to proceed...



Maybe the residue theorem/contour integration could help, but I'm not familiar with complex analysis so I haven't tried it - feel free to use it though.



Any suggestions?
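Not an answer, but one way to generate data to test candidate closed forms against: substituting $x=e^u$ turns the integral into $\int_{-\infty}^{\infty} e^{u-e^u}\sin(nu)\,du$, whose integrand decays rapidly in both directions, so it is easy to evaluate numerically (a sketch; the truncation bounds and helper name are mine):

```python
import math

def I(n, a=-30.0, b=6.0, N=72_000):
    # I_n as the integral of e^{u - e^u} * sin(n*u) du over the real line,
    # truncated where the weight e^{u - e^u} is negligibly small
    h = (b - a) / N
    s = sum(math.exp(u - math.exp(u)) * math.sin(n * u)
            for u in (a + k * h for k in range(1, N)))
    return h * s

print(I(1))  # ≈ -0.155
```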

Monday 24 December 2012

calculus - Is there a general form for $a_n=\sqrt{2+a_{n-1}}$?




Can I find the general term of the sequence $a_n=\sqrt{2+a_{n-1}}$, $a_1=\sqrt2$? I have proved the convergence and found its limit. But is there any general form for it?


Answer



Hint: If $a_n=2 \cos(\theta_n)$ with $0 < \theta_n<\pi/2$, then:




$\sqrt{2+a_n}=\sqrt{2+2 \cos(\theta_n)}$



$=2 \sqrt{(1+\cos(\theta_n))/2}$



$=2 \cos((\theta_n)/2)$ using the appropriate half-angle formula.
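Unwinding the hint: since $a_1=\sqrt2 = 2\cos(\pi/4)$ and the recurrence halves the angle, one gets the closed form $a_n = 2\cos\left(\pi/2^{\,n+1}\right)$. A quick numerical check of this (a sketch):

```python
import math

a = math.sqrt(2)  # a_1 = 2*cos(pi/4)
for n in range(1, 25):
    # closed form 2*cos(pi / 2^(n+1)) should match the recurrence
    assert abs(a - 2 * math.cos(math.pi / 2 ** (n + 1))) < 1e-12
    a = math.sqrt(2 + a)  # a_{n+1}
print("a_n = 2*cos(pi / 2^(n+1)); the limit is 2*cos(0) = 2")
```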


calculus - limit of $x cot x$ as $xto 0$.



I was asked to calculate $$\lim_{x \to 0}x\cot x $$
I did it as following (using L'Hôpital's rule):
$$\lim_{x\to 0} x\cot x = \lim_{x\to 0} \frac{x \cos x}{\sin x} $$ We can now use L'Hospital's rule since the limit has indeterminate form $\frac{0}{0}$. Hence $$\begin{align}\lim_{x\to 0}\frac{(x \cos x)'}{(\sin x)'} &= \lim_{x\to 0}\frac{-x\sin x + \cos x}{\cos x} \\ &= \lim_{x\to 0}\frac{-x\sin x}{\cos x} + 1 \\[4pt ]&= \lim_{x\to 0} - x \tan x + 1 \\[4pt] &= 1 \end{align}$$
I think that the result is correct but are the arguments correct?



Answer



Welcome to MSE! As best as I can tell, your reasoning is sound, although as other users pointed out, you could have done this in fewer steps. Regardless, your work is clear and your answer is correct. As an aside, I am happy to see that you formatted your question with $\LaTeX$. I added some additional formatting to clean things up a bit, but overall, nicely done.
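A quick numerical confirmation that $x\cot x\to 1$ (a sketch):

```python
import math

for x in (1e-1, 1e-3, 1e-5):
    # x*cot(x) = x/tan(x) = 1 - x^2/3 + O(x^4)
    print(x, x / math.tan(x))  # tends to 1 as x -> 0
```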


calculus - Evaluate $int_0^inftyfrac{ln x}{1+x^2}dx$



Evaluate $$\int_0^\infty\frac{\ln x}{1+x^2}\ dx$$
I don't know where to start with this so either the full evaluation or any hints or pushes in the right direction would be appreciated. Thanks.


Answer



Hint



Write $$\int_0^\infty\frac{\ln x}{(1+x^2)}dx=\int_0^1\frac{\ln x}{(1+x^2)}dx+\int_1^\infty\frac{\ln x}{(1+x^2)}dx$$ For the second integral make a change of variable $x=\frac{1}{y}$ and see the beauty of the result.




I am sure that you can take it from here.
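The substitution $x=1/y$ sends $\int_1^\infty$ to $-\int_0^1$, so the two halves cancel exactly and the integral is $0$. Numerically, each half is $\mp G$ where $G\approx 0.9159656$ is Catalan's constant (a sketch; the midpoint rule dodges the integrable $\ln$ singularity at $0$):

```python
import math

def f(x):
    return math.log(x) / (1 + x * x)

# midpoint rule on (0, 1)
N = 200_000
h = 1.0 / N
first_half = h * sum(f((k + 0.5) * h) for k in range(N))
print(first_half)  # ≈ -0.9159656, minus Catalan's constant
# x -> 1/x shows the (1, inf) half equals -first_half, so the total is 0
```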


Sunday 23 December 2012

integration - $\int\limits_{-\infty}^{\infty}\frac{\sin(x)}{x^2}\,dx$



I have seen in my complex analysis class that $\lim_{\epsilon\to 0}\int\limits_{|x|\geq \epsilon}\frac{1-e^{ix}}{x^2}dx=\pi$.



From here, by taking the real part, we concluded that $\int\limits_{-\infty}^{\infty}\frac{1-\cos(x)}{x^2}dx=\pi$.



I thought that now, by taking the imaginary part, one should get $\int\limits_{-\infty}^{\infty}\frac{\sin(x)}{x^2}dx=0$.




However, it does not seem to me that it even converges, since near $0$ it looks like $1/x$ from both sides.



On the other hand, it is an odd function, so maybe it does exist and is $0$?



You can see the same argument in the following notes: Example 2



Thanks!


Answer



Through complex Analysis you may only show that
$$\color{red}{\text{PV}}\int_{-\infty}^{+\infty}\frac{\sin x}{x^2}\,dx = 0 \tag{1} $$

since we are not allowed to integrate through a pole, and $\int_{-\infty}^{+\infty}\frac{\sin x}{x^2}\,dx$ is not convergent in the usual sense (as an improper Riemann integral). On the other hand $(1)$ is trivial since $\frac{\sin x}{x^2}$ is an odd function.


combinatorics - Combinatorial Proof of Derangement Identity



Let $D_n$ be the number of derangements of n objects. Find a combinatorial proof of the following identity:



$n! = D_n + \dbinom{n}{1} \cdot D_{n-1}+ \dbinom{n}{2} \cdot D_{n-2} + \cdots + \dbinom{n}{n-1} \cdot D_1 + \dbinom{n}{n} \cdot D_{0}$



This question has been somewhat asked and answered before here

Finding a combinatorial proof of this identity: $n!=\sum_{i=0}^n \binom{n}{n-i}D_i$



However, I need to come up with a combinatorial proof. I don't think the answer provided in the linked page is a combinatorial proof. When I learned about combinatorial proofs I was told they need to be more 'real life.' For example, in a committee of $n$ people, a subcommittee and a president must be chosen...



How can I write a true combinatorial proof for this identity?


Answer



Every permutation of $N$ objects keeps between $0$ and $N$ objects in their original positions. So partition the permutations by that count, count each case, and add them all up:




  • Pick zero objects to keep in their original position. Scramble the remaining $N$ objects.


  • Pick one object to keep in its original position. Scramble the remaining $N-1$ objects.

  • Pick two objects to keep in their original positions. Scramble the remaining $N-2$ objects.

  • ($N-4$ similar statements ...)

  • Pick $N-1$ objects to keep in their original position. Scramble the remaining $1$ object.

  • Pick $N$ objects to keep in their original positions. Scramble the remaining $0$ objects.



All of these cases have the same form: "Pick $n$ objects out of $N$" has $\binom{N}{n}$ choices, and "Scramble $m$ objects" has $D_m$ choices.
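The identity is easy to confirm for small $n$ (a sketch; it uses the standard derangement recurrence $D_m = mD_{m-1}+(-1)^m$, which is not part of the proof above):

```python
from math import comb, factorial

def D(m):
    # derangement numbers via D_m = m*D_{m-1} + (-1)^m, with D_0 = 1
    d = 1
    for i in range(1, m + 1):
        d = i * d + (-1) ** i
    return d

# n! = sum_k C(n, k) * D_{n-k}: choose the k fixed points, derange the rest
for n in range(10):
    assert factorial(n) == sum(comb(n, k) * D(n - k) for k in range(n + 1))
print([D(m) for m in range(7)])  # [1, 0, 1, 2, 9, 44, 265]
```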


real analysis - Is there an analytic function which is not monotone on any interval?



I am in search of an analytic function $f:\mathbb{R} \to \mathbb{R}$ which is not monotone on any nonempty open interval. Does one exist, or is there a proof that no such function exists?



If there does not exist such a function, is there an example of an infinitely differentiable function which is not monotone on any interval?


Answer



If $f$ is continuously differentiable, so in particular if it is twice differentiable, then $\{x:f'(x)>0\}$ and $\{x:f'(x)<0\}$ are open, and unless $f$ is constant at least one of the sets is nonempty. On an open interval in one of these sets, $f$ is monotone.




For differentiable functions that are not monotone in any interval, see the question "Differentiable+Not monotone."


complex analysis - Definition of a simply connected region

I am reading Bak and Newman's Complex Analysis, and I am having a difficult time understanding the definition of a simply connected region. Here's the definition:




A region $D$ is simply connected if its complement is “connected within $\epsilon$ to $\infty$.” That is, if for any $z_0 \in D^{c}$ and $\epsilon > 0$, there is a continuous curve $\gamma (t), 0 \leq t < \infty$, such that:



i) $d(\gamma(t), D^c) < \epsilon$ for all $t \geq 0$,



ii) $\gamma(0) = z_0$,



iii) $\lim_{t \rightarrow \infty } \gamma(t) = \infty$.





I understand that, intuitively, the last two conditions state that $D^c$ is unbounded in the sense that any point in the complement can be "joined to $\infty$" by a curve that lies within $D^c$. But what about the first condition? Also, it'd be great if someone could motivate this definition as well, without invoking algebraic topological notions.

Saturday 22 December 2012

integration - Prove that $intlimits_{-infty}^{infty} frac{e^{-x}}{1+e^{-2pi x}},dx=frac1{2sinleft(frac{1}{2}right)}$



I need to prove that $\displaystyle\int_{-\infty}^{\infty} \dfrac{e^{-x}}{1+e^{-2\pi x}}\,dx=\dfrac{1}{2\sin\left(\frac{1}{2}\right)}$. This is an exercise from Basic Complex Analysis by Marsden and Hoffman. My attempt:



First, Marsden says that we need to consider the complex function $f(z)=\dfrac{e^{-z}}{1+e^{-2\pi z}}$ and the next curve



[figure: the rectangular contour with vertices $\pm r$ and $\pm r+\pi i$; $\gamma_1$ is the bottom side from $-r$ to $r$]



$\gamma_2=r+t\pi i$ with $t\in [0,1]$




$\gamma_3=(\pi i+r)-2tr$ with $t\in [0,1]$



$\gamma_4=(1-t)(\pi i-r)-tr$ with $t\in [0,1]$



Note that the only poles of the function $f(z)$ in the rectangle are when $z=\dfrac{i}{2}$, $z=\dfrac{3i}{2}$ and $z=\dfrac{5i}{2}$. From a direct calculation we obtain that $\text{Res}\left(f(z),\dfrac{i}{2}\right)=\dfrac{e^{-i/2}}{2\pi}$, $\text{Res}\left(f(z),\dfrac{3i}{2}\right)=\dfrac{e^{-3i/2}}{2\pi}$ and $\text{Res}\left(f(z),\dfrac{5i}{2}\right)=\dfrac{e^{-5i/2}}{2\pi}$.



After a lot of calculations we obtain a bound for the integral over $\gamma_2$: $$\dfrac{e^{-r}}{\sqrt{(1-e^{-2\pi r})^2}}\geq \dfrac{|e^{-r-t\pi i}|}{|1+e^{-2\pi(r+t\pi i)}|}\geq 0$$Thus $$\displaystyle\int_{\gamma_2}^{}f(\gamma_2)\cdot d\gamma\leq \dfrac{e^{-r}}{\sqrt{(1-e^{-2\pi r})^2}}$$When $r\to\infty$, $\displaystyle\int_{\gamma_2}^{}f(\gamma_2)\cdot d\gamma\to 0$.



I think it is the same for $\gamma_4$, but I have trouble with $\gamma_3$. After a lot of calculations we obtain the next bound: $$\dfrac{e^{-r+2rt}}{\sqrt{1+2e^{-2\pi}\cos(-2\pi^2)+e^{-4\pi r}}}\geq \dfrac{|e^{-r+2rt-\pi i}|}{|1+e^{-2\pi(\pi i+r-2rt)}|}$$ but when $r\to\infty$ we obtain that $\dfrac{e^{-r+2rt}}{\sqrt{1+2e^{-2\pi}\cos(-2\pi^2)+e^{-4\pi r}}}\to\infty$, and I need this limit to exist (ideally, to be zero).




Clearly I want the integrals over the curves, because if $\gamma=\gamma_1\cup\gamma_2\cup\gamma_3\cup\gamma_4$ then $\displaystyle\int_{\gamma} f(\gamma)\cdot d\gamma=\displaystyle\int_{\gamma_1} f(\gamma_1) \cdot d\gamma_1+\displaystyle\int_{\gamma_2} f(\gamma_2) \cdot d\gamma_2+\displaystyle\int_{\gamma_3} f(\gamma_3) \cdot d\gamma_3+\displaystyle\int_{\gamma_4} f(\gamma_4) \cdot d\gamma_4$, and I want to use the Residue Theorem. But is the rectangle the correct curve?


Answer



For your question, you will have to consider a rectangular contour with vertices $\pm R$ and $\pm R+i$. I'm guessing that by "same technique" the authors meant a rectangular contour situated in the upper half of the complex plane. Calling our integrand $f(z)$,



$$f(z)\equiv\frac {e^{-z}}{1+e^{-2\pi z}}$$



and parametrizing about all four sides of the contour gives us




$$\oint\limits_{\mathrm C}dz\, f(z)=\int\limits_{-R}^{R}dx\, f(x)+i\int\limits_0^1dy\, f(R+yi)-\int\limits_{-R}^{R}dx\, f(x+i)-i\int\limits_0^1dy\, f(-R+yi)$$



When we take the limit as $R\to\infty$, it can be shown that the integrals over the vertical sides (i.e. the second and fourth integrals) vanish. This can be justified using the estimation lemma. Both paths have length $L=1$, while their upper bounds $M$ can be computed using the reverse triangle inequality $|1+w|\geq |1-|w||$ together with $|e^{-yi}|= 1$ and $|e^{\pm R}|=e^{\pm R}$.



$$\begin{align*}M_1 & =\sup_{0\le y\le 1}\left|\,\frac {e^{-R}e^{-yi}}{1+e^{-2\pi R}e^{-2\pi yi}}\,\right|\leq\frac {e^{-R}}{1-e^{-2\pi R}}\xrightarrow{R\,\to\,\infty}0\\M_2 & =\sup_{0\le y\le 1}\left|\,\frac {e^{R}e^{-yi}}{1+e^{2\pi R}e^{-2\pi yi}}\,\right|\leq\frac {e^{R}}{e^{2\pi R}-1}\xrightarrow{R\,\to\,\infty}0\end{align*}$$



Applying the estimation lemma to the vertical sides, calling them $\Gamma_{1}$ and $\Gamma_{2}$ respectively,



$$\begin{align*} & \left|\,\int\limits_{\Gamma_{1}}dz\,\frac {e^{-z}}{1+e^{-2\pi z}}\,\right|\leq M_1L\xrightarrow{R\,\to\,\infty}0\\ & \left|\,\int\limits_{\Gamma_{2}}dz\,\frac {e^{-z}}{1+e^{-2\pi z}}\,\right|\leq M_2L\xrightarrow{R\,\to\,\infty}0\end{align*}$$




That is, the side integrals vanish as $R\to\infty$. Now, what's left is



$$\oint\limits_{\mathrm C}dz\, f(z)=(1-e^{-i})\int\limits_{-\infty}^{\infty}dx\, f(x)$$



The contour integral is also equal to $2\pi i$ times the sum of the residues at the poles inside the contour. Fortunately, there is only one pole inside, at $z=i/2$. Therefore



$$\operatorname*{Res}_{z\,=\, i/2}\frac {e^{-z}}{1+e^{-2\pi z}}=\lim\limits_{z\,\to\, i/2}\frac {(z-i/2)e^{-z}}{1+e^{-2\pi z}}=\frac {e^{-i/2}}{2\pi}$$



Hence the contour integral is




$$\oint\limits_{\mathrm C}dz\, f(z)=ie^{-i/2}$$



Putting everything together and isolating our integral $I$, we get



$$\int\limits_{-\infty}^{\infty}dx\,\frac {e^{-x}}{1+e^{-2\pi x}}=\frac {ie^{-i/2}}{1-e^{-i}}\color{blue}{=\frac 12\csc\left(\frac 12\right)}$$
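A direct numerical check of the final value (a sketch; the truncation limits are mine, chosen where the integrand is negligible since it decays like $e^{(2\pi-1)x}$ as $x\to-\infty$ and like $e^{-x}$ as $x\to+\infty$):

```python
import math

def f(x):
    return math.exp(-x) / (1 + math.exp(-2 * math.pi * x))

# composite trapezoidal rule on a truncated interval
a, b, N = -15.0, 40.0, 200_000
h = (b - a) / N
total = h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, N)))
print(total, 1 / (2 * math.sin(0.5)))  # both ≈ 1.042914
```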


Limit of goniometric function without l'Hospital's rule




I'm trying to figure this out without l'Hospital's rule, but I don't know how I should start. Any hint, please?



$$\lim_{x\to \frac{\pi}2} \frac {1-\sin x}{\left(\frac\pi2 -x\right)^2 }$$


Answer



Set $t=\frac \pi2 - x,$
$$\lim_{x\to {\pi\over 2}} \frac {1-\sin x}{(\frac\pi2 -x)^2}=\lim_{t\to {0}} \frac {1-\cos t}{t^2}=\lim_{t\to {0}} \frac {2 \sin ^2(t/2)}{4(t/2)^2}={1\over 2}$$
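A numerical check of the substituted limit (a sketch):

```python
import math

for t in (1e-1, 1e-2, 1e-3):
    # (1 - cos t)/t^2 = 1/2 - t^2/24 + O(t^4)
    print(t, (1 - math.cos(t)) / t ** 2)  # tends to 1/2
```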


real analysis - Differentiable and Not Continuous

Find a function $f\colon\mathbb{R}\to\mathbb{R}$ that is differentiable at one point and not continuous at any other point.

number theory - Find all positive integers $n$ s.t. $3^n + 5^n$ is divisible by $n^2 - 1$


As is the question in the title, I am wishing to find all positive integers $n$ such that $3^n + 5^n$ is divisible by $n^2 - 1$.




I have so far shown both expressions are divisible by $8$ for odd $n\ge 3$, so trivially a solution is $n=3$. I'm not quite sure how to proceed now, though. I have conjectured that the only solution is $n=3$ and have tried to prove it, but have had little luck. Can anyone point me in the right direction? Thanks!
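A finite search (of course not a proof) is easy with exact big-integer arithmetic and supports the conjecture (a sketch; the search bound is mine):

```python
# which n in [2, 1000) make n^2 - 1 divide 3^n + 5^n?
solutions = [n for n in range(2, 1000) if (3 ** n + 5 ** n) % (n * n - 1) == 0]
print(solutions)
```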

abstract algebra - Elementary proof for $\sqrt{p_{n+1}} \notin \mathbb{Q}(\sqrt{p_1}, \sqrt{p_2}, \ldots, \sqrt{p_n})$ where $p_i$ are different prime numbers.

Take $p_1, p_2, \ldots, p_n, p_{n+1}$ to be $n+1$ prime numbers in $\mathbb{P} \subseteq \mathbb{N}$. That $\sqrt{p_{n+1}} \notin \mathbb{Q}(\sqrt{p_1}, \sqrt{p_2}, \ldots, \sqrt{p_n})$ seems quite clear, but it still needs a proof. I know some proofs involve Galois theory, which is not what I want.

linear algebra - How to find characteristic polynomial of this matrix?




Let, $A=\begin{bmatrix} 4&0&1&0\\1&1&1&0\\0&1&1&0 \\0&0&0&4 \end{bmatrix}$. Knowing that $4$ is one of its eigenvalues, find the characteristic polynomial of $A$.





Well, if $4$ is an eigenvalue of $A$, one should have $|A-4I_{4}|=0$.
And so,



$\begin{vmatrix} 0&0&1&0\\1&-3&1&0\\0&1&-3&0 \\0&0&0&0 \end{vmatrix}=0$



It's clear that the previous equation is true (the determinant of $(A-4I_{4})$ is $0$). Now that the factor $(\lambda-4)$ was pulled out, one gets a new matrix by removing the null row and null column.



$A'=\begin{bmatrix} 0&0&1\\1&-3&1\\0&1&-3&\end{bmatrix}$




The characteristic polynomial of $A'$ will be a third-degree polynomial, whose product with $(\lambda-4)$ equals a fourth-degree polynomial.



Now, in order to find the characteristic polynomial of $A'$, one must solve the characteristic equation:



$\begin{vmatrix} -\lambda&0&1\\1&-3-\lambda&1\\0&1&-3-\lambda&\end{vmatrix}=0$



My doubt is in finding this determinant. I already tried elementary row and column operations in order to produce a null row or column, but I couldn't do it.



Can you give me a clue? Thanks.


Answer




...and how to find the charateristic polynomial of the original matrix, $A$
$$\begin{align}\mathrm{det}(A - \lambda \mathrm{I}) &= 0 \tag{1}\\
\begin{vmatrix}(4-\lambda)&0&1&0 \\
1&(1-\lambda)&1&0\\
0&1&(1-\lambda)&0\\
0&0&0&(4-\lambda) \end{vmatrix} &= 0 \tag{2}\\
(4-\lambda)\begin{vmatrix}(4-\lambda)&0&1\\
1&(1-\lambda)&1\\
0&1&(1-\lambda)
\end{vmatrix} &= 0 \tag{3}\\

(4-\lambda) \left[(4-\lambda) \left[(1-\lambda)^2 -1 \right] + 1\right] &= 0\tag{4} \\
(4-\lambda) \left[(4-\lambda)(\lambda^2 -2\lambda) + 1\right] &= 0\tag{5} \\
(4-\lambda) \left[(4\lambda^2 -8\lambda -\lambda^3+2\lambda^2) + 1\right] &= 0 \tag{6}\\
(4-\lambda)(-\lambda^3 + 6\lambda^2 - 8\lambda +1 ) &= 0 \tag{7}\\
(\lambda -4)(\lambda^3 - 6\lambda^2 + 8\lambda - 1)&= 0\tag{8}\\
\end{align}$$
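As a quick numerical sanity check (an editorial addition, not part of the original answer), one can confirm that $\det(A-\lambda I)$ agrees with the factored form $(4-\lambda)(-\lambda^3+6\lambda^2-8\lambda+1)$ at arbitrary test values of $\lambda$; the small determinant helper below uses plain cofactor expansion:

```python
# Sanity check: det(A - lam*I) should equal (4-lam)(-lam^3 + 6lam^2 - 8lam + 1).
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += ((-1) ** j) * M[0][j] * det(minor)
    return total

A = [[4, 0, 1, 0],
     [1, 1, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 4]]

def char_poly(lam):
    return det([[A[i][j] - (lam if i == j else 0) for j in range(4)]
                for i in range(4)])

def factored(lam):
    return (4 - lam) * (-lam**3 + 6 * lam**2 - 8 * lam + 1)

for lam in [0.0, 0.5, 1.0, 2.5, -3.0]:
    assert abs(char_poly(lam) - factored(lam)) < 1e-9
```

Agreement at more points than the degree certifies that the two degree-4 polynomials are identical.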


elementary number theory - show that if $a | c$ and $b | c$, then $ab | c$ when $a$ is coprime to $b$.



Given two numbers $a$ and $b$, where $a$ is co-prime to $b$,




  • Show that for any number $c$, if $a|c$ and $b | c$ then $ab| c$.

  • Is the reverse also true? In other words, if $ab |c$ and $a$ is co-prime to $b$, then do we have $a | c$ as well as $b|c$?



Answer



The second property holds, but the assumption that $\gcd(a,b)=1$ is superfluous. In general, if $x|y$ and $y|z$, then $x|z$. Since $a|ab$ and $b|ab$, then $ab|c$ implies $a|c$ and $b|c$, without any conditions on $\gcd(a,b)$.



On the other hand, $\gcd(a,b)=1$ is required for the first.



There are a number of ways of doing the proof. One is to use the Bezout identity: for any integers $a$ and $b$, there exist integers $x$ and $y$ such that $\gcd(a,b)=ax+by$. If $\gcd(a,b)=1$, then we can write $1 = ax+by$. Multiplying through by $c$, we get $c=axc + byc$. Since $a|c$, we can write $c=ak$; and since $b|c$, we can write $c=b\ell$. So we have
$$c = axc+byc = ax(b\ell) + by(ak) = ab(x\ell + yk),$$
so $ab|c$.




Another is to use the following property:



Lemma. If $a|xy$ and $\gcd(a,x)=1$, then $a|y$.



This can be done using the Bezout identity, but here is a proof that avoids it and only uses the properties of the gcd (so it is valid in a larger class of rings than Bezout rings):
$$\begin{align*}
r|\gcd(a,xy) & \iff
r|a,xy\\
&\iff r|ay, xy,a\\
&\iff r|\gcd(ay,xy),a\\

&\iff r|y\gcd(a,x),a\\
&\iff r|y,a\\
&\iff r|\gcd(a,y)
\end{align*}$$
In particular, $a=\gcd(a,xy)$ divides $\gcd(a,y)$, which divides $y$, hence $a|y$.



With that Lemma on hand, we obtain that the result you want as follows: If $a|c$, then $c=ak$ for some $k$. Then $b|ak$, $\gcd(a,b)=1$. so $b|k$. Hence $k=b\ell$; thus, $c=ak=ab\ell$, so $ab|c$.
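Both arguments are constructive, so they are easy to check by machine. The sketch below (my own illustration, not part of the answer) computes Bezout coefficients with the extended Euclidean algorithm and brute-forces the claim for small coprime pairs:

```python
# Check Bezout's identity and the claim: gcd(a,b)=1, a|c, b|c together imply ab|c.
from math import gcd

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

for a in range(1, 20):
    for b in range(1, 20):
        g, x, y = extended_gcd(a, b)
        assert a * x + b * y == g == gcd(a, b)   # Bezout's identity
        if g == 1:
            for c in range(0, 200):
                if c % a == 0 and c % b == 0:
                    assert c % (a * b) == 0      # the conclusion ab | c
```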


Friday 21 December 2012

Prove the Binomial Theorem using Induction




I'm trying to prove the Binomial Theorem using induction. I know that I am supposed to use ${n\choose k} + {n\choose k - 1} = {n + 1 \choose k}$. I just really want to know how to use this equation for the inductive step. I've already verified that the base step of $n = 1$ is true.



Also, I'm not quite sure what the inductive hypothesis is for this theorem.



I appreciate any and all help on this inquiry.



Thank you in advance!


Answer



Hint: you write $(x+y)^{n+1}=(x+y)^n(x+y)$, then use the binomial formula for $(x+y)^n$ as induction hypothesis, expand and use the identity which you wrote.
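The two ingredients of the induction, Pascal's identity and the expansion itself, can also be checked numerically (an illustrative aside, not part of the hint):

```python
# Check Pascal's identity C(n,k-1) + C(n,k) = C(n+1,k), the key step of the
# induction, and that the binomial formula reproduces (x+y)^n at sample values.
from math import comb

for n in range(0, 15):
    for k in range(1, n + 1):
        assert comb(n, k) + comb(n, k - 1) == comb(n + 1, k)

def binomial_expansion(x, y, n):
    return sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))

for x, y, n in [(2, 3, 5), (1, -1, 7), (10, 4, 6)]:
    assert binomial_expansion(x, y, n) == (x + y) ** n
```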



Thursday 20 December 2012

complex analysis - Finding real and imaginary parts of $\frac{1-e^{2i \pi x}}{R \left(1-e^{\frac {2i \pi x}{R}} \right)}$



I have a function given by



$$\frac{1-e^{2i \pi x}}{R \left(1-e^{\frac {2i \pi x}{R}} \right)}$$



Using Euler's formula, I expand into real and complex components:



$$\frac{1-\cos 2 \pi x-i\sin2 \pi x}{R \left(1-\cos \frac{2 \pi x}{R}-i\sin\frac{2 \pi x}{R} \right)}$$




But for some reason, here I come unstuck. It seems obvious to me that the real part should be



$$\frac{1-\cos 2 \pi x}{R \left(1-\cos \frac{2 \pi x}{R} \right)}$$



but this appears not to be the case. In fact, it's pretty obvious from the plots below (with $R=3$), that I'm wrong:



[plot omitted]



And it's not even as simple as multiplying through by $\frac{1}{R}$, as this plot charting the variance after multiplying through shows:




[plot omitted]



Could someone please explain what I'm doing wrong?



(I'd also appreciate help with the complex component.)


Answer



You are right in your use of Euler's formula to convert to rectangular coordinates, but you seemed to just divide the real part of the numerator by the real part of the denominator. To rectify the issue, you should multiply by the complex conjugate of the denominator on top and bottom (so that way the denominator becomes a real scalar that can be applied to both the imaginary and real parts of the numerator):



$$F = \frac{1-\cos 2 \pi x-i\sin2 \pi x}{R \left(1-\cos \frac{2 \pi x}{R}-i\sin\frac{2 \pi x}{R} \right)} \cdot \frac{1-\cos \frac{2 \pi x}{R}+i\sin\frac{2 \pi x}{R}}{1-\cos \frac{2 \pi x}{R}+i\sin\frac{2 \pi x}{R}}$$




Letting the following substitutions take place:



$\quad a = 1 - \cos2\pi x$



$\quad b = \sin 2\pi x$



$\quad c = 1 - \cos \frac{2\pi x}{R}$



$\quad d = \sin\frac{2\pi x}{R}$




Then we get:



$$F = \frac{a-bi}{R(c-di)} \cdot \frac{c+di}{c+di} = \frac{ac+bd + i(ad-bc)}{R(c^2 + d^2)} $$



Thus:



$$Re(F) = \frac{ac+bd}{R(c^2 + d^2)}$$



Appropriate trig identities and algebraic manipulation can massage the above into a nicer form if you desire.
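As a numerical cross-check of the closed form (my own addition, not part of the answer), one can compare $\frac{ac+bd}{R(c^2+d^2)}$ with a direct complex evaluation of the original expression:

```python
# Compare the closed-form real part (ac+bd)/(R(c^2+d^2)) against a direct
# complex evaluation of F; the values of x and R are arbitrary test points.
import cmath
import math

def F_direct(x, R):
    return (1 - cmath.exp(2j * math.pi * x)) / (R * (1 - cmath.exp(2j * math.pi * x / R)))

def F_real_closed(x, R):
    a = 1 - math.cos(2 * math.pi * x)
    b = math.sin(2 * math.pi * x)
    c = 1 - math.cos(2 * math.pi * x / R)
    d = math.sin(2 * math.pi * x / R)
    return (a * c + b * d) / (R * (c * c + d * d))

for x in [0.1, 0.25, 0.7, 1.3]:
    assert abs(F_direct(x, 3).real - F_real_closed(x, 3)) < 1e-12
```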


Proving Summations



I'm unsure of how to continue in my proof. How can I prove the follow through induction:



$\sum\limits_{k=66}^n {k-1 \choose 65} = {n \choose 66}$ where $n \geq k \geq 66$



Basis:Let $n=66$.
$$\sum\limits_{k=66}^{66} {66-1 \choose 65} = {66 \choose 66}$$

$$1 = 1$$
The basis holds.



Induction Hypothesis: Suppose the statement holds for some $n=m$ with $m\geq 66$.



Induction Step: Consider $m+1$.
$$\sum\limits_{k=66}^{m+1} {k-1 \choose 65} = {m+1 \choose 66}$$


Answer



$$\sum\limits_{k=66}^{m+1} {k-1 \choose 65} ={m\choose 65} + \sum\limits_{k=66}^{m} {k-1 \choose 65} \stackrel{\star}{=} {m\choose 65}+{m\choose 66} = {m+1 \choose 66}$$




Where $\star$ holds because the identity holds for $m$






However, there is a little (well, not even) error in your "basis-step". $\sum\limits_{k=66}^{66} {k-1 \choose 65}$ is of course equal to ${65\choose 65}$. Indeed this is the same as ${66\choose 66}$
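The identity being proved is an instance of the hockey-stick identity, and it is easy to spot-check numerically (an editorial aside, not part of the answer):

```python
# Spot-check sum_{k=66}^{n} C(k-1, 65) = C(n, 66) for a range of n.
from math import comb

for n in range(66, 121):
    assert sum(comb(k - 1, 65) for k in range(66, n + 1)) == comb(n, 66)
```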


elementary set theory - Bijective Function between Uncountable Set of Real numbers and a set of all functions





Let $S$ be the set of all real numbers in $(0, 1)$ having a decimal representation which only uses the digits $0$ and $1$. So for example, the number $1/9$ is in $S$ because $1/9 = 0.1111\ldots$, the number $1/100$ is in $S$ because we could write $1/100 = 0.010000\ldots$, but $1/2$ is not in $S$ because $1/2 = 0.5$ which contains the digit $5$ (or $1/2 = 0.49999\ldots$ which contains $4$ and $9$).




(a) Prove that $S$ is uncountable. [Hint: use a proof similar to the proof of Theorem 10.8.]



(b)Let $T =\{1,2\}^{\mathbb{N}}$ be the set of all functions $f :\mathbb{N}\to\{1,2\}$,where $\mathbb{N}$ is the set of all positive integers. Find a bijection between the sets $S$ and $T$, and thus conclude that $T$ is uncountable.



(c) Prove that the set $\mathbb{N}^{\{1,2\}}$ of all functions $f : \{1, 2\} → \mathbb{N}$ is denumerable.







We solved question (a) and know $S$ is uncountable, we are looking to do (b) and if anyone wants to give a hint for (c) that would be great.
I'm having trouble with notation, but we think:



$$T=\{\{(i,a_{i}+1):i \in \mathbb{N}\}: \text{the digits } a_i \text{ correspond to some real number } c \in S\}$$
$g: S \rightarrow T$, $g = \{(c_{m},\{(i,a_{i}+1):i \in \mathbb{N}\})\}$, where $c_{m}$ is an arbitrary element of $S$.



Then we tried to show $g$ is one-to-one and onto and didn't make much progress.




Alternatively, we thought of defining:
$$g = \{(c,f(i))\},$$ where $c \in S$ and $$f(i) = \begin{cases}2\text{ if }a_{i}=0\\ 1\text{ if }a_{i}=1\end{cases}$$


Answer



Thank you everyone for the feedback and suggestions. Andre, your suggestions for question 2(b) were very helpful, but not so much for 2(c) and we did something completely different.



Our solutions for 2(b) and 2(c) were:



2(b)
Prove that: $T=\{1,2\}^{\mathbb{N}}$ is uncountable.




For $c \in S$, we know $c=0.a_{1}a_{2}a_{3}\ldots$, where $a_{i}$ is the $i^{th}$ digit of $c$, $i \in \mathbb{N}$. For $T =\{1,2\}^{\mathbb{N}}$, we map $c$ into the subset $\mathbb{B}=T-\{\{(1,1),(2,1),(3,1),\ldots\}\}$ of $T$, to ensure that $0.000\ldots$ does not have an image in $\mathbb{B}$, since $0.000\ldots \notin S$. Define the elements $f \in \mathbb{B}$ as $f=\{(i,b_{i})\mid i \in \mathbb{N}, b_{i} \in \{1,2\}\}$, where $b_{i}=a_{i}+1$. By Result 2, we know $S$ is uncountable, so if we can show that there exists a bijective function $g:S \rightarrow \mathbb{B}$, then $\mathbb{B}$ must be uncountable. We now show this for $g=\{(c,f)\mid c \in S, f \in \mathbb{B}\}$; and since $\mathbb{B} \subseteq T$, by Theorem 10.9 $T$ would then be uncountable.

To see that $g$ is onto, take an arbitrary element $f \in \mathbb{B}$, where $f=\{(1,b_{1}),(2,b_{2}),(3,b_{3}),\ldots\}$, which can also be written as $\{b_{1},b_{2},b_{3},\ldots\}$ or $f=\{b_{i}\}_{i=1}^{\infty}$, where $b_{i} \in \{1,2\}$. Then, for $c=0.a_{1}a_{2}a_{3}\ldots$, the $i^{th}$ digit $a_{i}$ is given by $a_{i}=b_{i}-1$. So $c=0.(b_{1}-1)(b_{2}-1)(b_{3}-1)\ldots$, and since all $c \in S$ have unique decimal representations, for any arbitrary $f \in \mathbb{B}$ there exists a $c \in S$ with $g(c)=f$; hence $g:S \rightarrow \mathbb{B}$ is onto.

To see that $g$ is one-to-one, assume for $c_{1},c_{2} \in S$ that $g(c_{1})=g(c_{2})$, where $c_{1}=0.x_{1}x_{2}x_{3}\ldots$ and $c_{2}=0.y_{1}y_{2}y_{3}\ldots$, with $x_{i},y_{i} \in \{0,1\}$. So $g(0.x_{1}x_{2}x_{3}\ldots)=g(0.y_{1}y_{2}y_{3}\ldots)$, and then $\{x_{1}+1,x_{2}+1,x_{3}+1,\ldots\}=\{y_{1}+1,y_{2}+1,y_{3}+1,\ldots\}$. Since for every digit $x_{i}$ of $c_{1}$ and every digit $y_{i}$ of $c_{2}$ we have $x_{i}+1=y_{i}+1$, it follows that $x_{i}=y_{i}$. So every digit of $c_{1}$ equals the corresponding digit of $c_{2}$, and all $c \in S$ have unique decimal representations, so $c_{1}=c_{2}$. Thus, $g$ is one-to-one.

Since $g: S \rightarrow \mathbb{B}$ is one-to-one and onto, it is bijective, and so $|S|=|\mathbb{B}|$. Since $\mathbb{B} \subseteq T$ and $\mathbb{B}$ is uncountable, by Theorem 10.9, $T$ is uncountable.



2(c)
There is a table and figure included in our proof, but I'll list out some of what we had:



Let $f$ be an arbitrary function $f \in \mathbb{N}^{\{1,2\}}$ so that $f$ is of the form $f=\{(1,a),(2,b)|a,b \in \mathbb{N}\}$. We list the entries for $a$ and $b$ as their own ordered pair in the following table:
If we traverse this table as shown, we hit the ordered pair that gives the values for $a$ and $b$ for every possible function $f=\{(1,a),(2,b)\}$. So, the set of all functions $f:\{1,2\} \rightarrow \mathbb{N}$ can be listed in a sequence as:
$$
\mathbb{N}^{\{1,2\}} = \{\{(1,1),(2,1)\},\{(1,1),(2,2)\},\{(1,2),(2,1)\},

\{(1,1),(2,3)\},\{(1,2),(2,2)\},\{(1,3),(2,1)\},\{(1,1),(2,4)\},...\}
$$
Therefore, $\mathbb{N}^{\{1,2\}}$ is denumerable.
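The diagonal traversal described above can be mirrored in code (my own sketch, not part of the original proof; each function $f$ is encoded as a set of pairs):

```python
# Enumerate the functions f:{1,2} -> N (encoded as pairs (a, b)) along the
# diagonals a + b = 2, 3, 4, ..., mirroring the table traversal in the proof.
def diagonal_enumeration(count):
    out = []
    s = 2                        # smallest possible value of a + b
    while len(out) < count:
        for a in range(1, s):    # pairs with a + b == s
            out.append({(1, a), (2, s - a)})
            if len(out) == count:
                break
        s += 1
    return out

first = diagonal_enumeration(7)
# Every function appears exactly once in the listing so far.
assert all(first.count(f) == 1 for f in first)
```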


Tuesday 18 December 2012

calculus - How to evaluate $\int\sin ^3 x\cos^3 x\,dx$ without a reduction formula?




We have the integral $$\displaystyle\int \sin ^3 x \cos^3 x \:dx.$$ You can do this using the reduction formula, but I wonder if there's another (perhaps simpler) way to do this, like for example with a substitution?


Answer



Hint. You may write
$$
\sin^3 x \cos^3 x= \sin x(1 - \cos^2x)\cos^3 x=\sin x(\cos^3x - \cos^5x)
$$
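Following the hint to its end (an illustrative check, not part of the original answer): with $u=\cos x$ the antiderivative is $\frac{\cos^6 x}{6}-\frac{\cos^4 x}{4}+C$, which can be verified against a crude midpoint Riemann sum:

```python
# With u = cos x, the antiderivative is cos^6(x)/6 - cos^4(x)/4 + C.
# Check it against a midpoint Riemann sum on [0, 1].
import math

def antiderivative(x):
    return math.cos(x) ** 6 / 6 - math.cos(x) ** 4 / 4

def riemann(f, a, b, n=200000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

integrand = lambda x: math.sin(x) ** 3 * math.cos(x) ** 3
exact = antiderivative(1.0) - antiderivative(0.0)
approx = riemann(integrand, 0.0, 1.0)
assert abs(exact - approx) < 1e-6
```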


calculus - Proving $\frac2\pi x \le \sin x \le x$ for $x\in [0,\frac {\pi} 2]$




Prove $\frac2\pi x \le \sin x \le x$ for $x\in [0,\frac {\pi} 2]$.





I tried to do this in two ways, I'm not sure about CMVT and I have a problem with the other way.






Using Cauchy's MVT:



RHS:
$\sin x \le x \implies \frac {\sin x}{x}\le 1$
So define:
$f(x)=\sin x, \ g(x)=x$ then from CMVT:

$\dfrac {f(\frac {\pi} 2)}{g(\frac {\pi} 2)}=\dfrac {f'(c)} {g'(c)}=\cos c$
and from the fact that $c$ is between $0$ and $\pi/2 \implies \cos c \le 1$.



LHS: In the same manner but here I run into some trouble:
$\frac2\pi x \le \sin x\implies \frac {2x}{\pi\sin x}\le 1$
So:
$\dfrac {f(\frac {\pi} 2)}{g(\frac {\pi} 2)}=\dfrac {f'(c)} {g'(c)}\implies\frac {1}{\sin {\frac {\pi}{2}}}=\frac {2}{\pi \cos c}$
Here actually
$\frac {1}{\sin {\frac {\pi}{2}}}=1$ so it's also $\le 1$




Is it correct to use CMVT like this ?






The other way:



We want to show $f(x)=\sin x - x \le 0$ and $g(x)=\frac {2x}{\pi}-\sin x \le 0$. By differentiating both, it's easy to show that the inequality holds for $f$, but for $g$ it isn't so obvious that $g'(x)=\frac {2}{\pi}-\cos x$ is negative. In fact, for $x=\frac {\pi} 2$ it's positive. Please help me figure this out.







This is the same as The sine inequality $\frac2\pi x \le \sin x \le x$ for $0 \le x \le \frac{\pi}{2}$, but all the answers there are partial or hints, and I want to avoid convexity.



Note: I can't use integrals.


Answer



To show that $\sin x\le x$ you can apply the Cauchy mean value theorem. (Note that you want to show the inequality for any $x\in\left[0,\frac{\pi}{2}\right].$ )Consider, as you have done, $f(x)=\sin x$ and $g(x)=x.$ Apply the theorem in the interval $[0,x]$ and you will get the inequality, as a consequence of $\cos c\le 1.$ Indeed, there exists $c\in(0,x)$ such that $$\sin x=g'(c)(f(x)-f(0))=f'(c)(g(x)-g(0))=(\cos c)\cdot x\le x.$$



To show the other inequality consider $f(x)=\sin x-\frac{2}{\pi}x.$ We have that $f(0)=f(\pi/2)=0.$ Since $f$ is continuous and $[0,\pi/2]$ is compact it attains a global minimum. If the minimum is not achieved at the extrema of the interval then it belongs to the open interval $(0,\pi/2).$ Let $c$ be the point where $f$ achieves its global minimum. Then $f''(c)\ge 0,$ but $f''(c)=-\sin c<0$ for any $c\in(0,\pi/2).$ So the minimum value is $f(0)=f(\pi/2)=0,$ from where $0\le f(x)=\sin x-\frac{2}{\pi}x,$ which shows the inequality.


elementary number theory - Proof of Bezout's Lemma using Euclid's Algorithm backwards

I've seen it said that you can prove Bezout's Identity using Euclid's algorithm backwards, but I've searched google and cannot find such a proof anywhere. I found another proof which looks simple but which I don't understand.



Bezout's lemma is:




For every pair of integers a & b there are 2 integers s & t such that as + bt = gcd(a,b)


Euclid's algorithm is:



1. Start with (a,b) such that a >= b
2. Take the remainder r of a/b
3. Set a := b, b := r so that a >= b
4. Repeat until b = 0



So here's the proof by induction that I found on the internet:



Base case:

b = 0
gcd (a,b) = a
s = 1
t = 0


Inductive Step:

Assume Bezout's Lemma is true for all pairs (a, b) with b < k.

a = qb + r with 0 <= r < b = k

gcd (a,b) = gcd (b,r)

By the inductive hypothesis there are integers x and y with bx + ry = gcd(b,r) = gcd(a,b)


bx + ry = bx + (a - qb)y = b(x - qy) + ay = gcd(a,b)

Set t = (x - qy) and s = y. This establishes the identity for a pair (a,b) with b = k and completes the induction.


I only followed the proof up to the base case. I don't see how it proved b = k from b < k. Could you please explain this to me?



Thanks.



EDIT: After two days of trying to figure it out I still don't understand the proof. I conclude that either I lack the requisite knowledge or the intelligence or both. Thanks for trying to help.

Is the notation $[x,\to[$ common?



I recently started reading Topology and Groupoids by Ronald Brown, and this notation came up. The notation is $$[x,\to[ \; =\{z \mid x \leq z\}$$ with a similar notation for other types of intervals. I have never seen this before, and I was baffled, wondering if this was a funny $\LaTeX$ macro mistake where "2" was used in place of "\to" in some situations. I am wondering if it is at all common? Is it common in some area of mathematics? If it is not clear, I am asking about the "$\to$" arrow, not really the brackets (although I rarely see the bracket notation).


Answer



It is not uncommon.




When discussing (partially) ordered sets, people sometimes use $\leftarrow$ and $\to$ rather than $-\infty$ and $+\infty$, so the interval $(\leftarrow,b)$ means the same as what one otherwise writes as $(-\infty,b)$, that is, $\{z\mid z<b\}$.



measure theory - Fast $L^{1}$ Convergence implies almost uniform convergence



Suppose $\sum_{n \in \mathbb{N}} \|f_{n}-f\|_{1} < \infty$. How can I show that $f_{n}$ converges almost uniformly to $f$?




EDIT: Egorov's theorem is available. I have been able to show pointwise a.e. convergence using Chebyshev and Borel-Cantelli, I am having trouble trying to pass to almost uniform convergence using the absolute summability condition...


Answer



Put $g_n:=|f_n-f|$, and fix $\delta>0$. We have $\sum_{n\in\mathbb N}\lVert g_n\rVert_{L^1}<\infty$ so we can find a strictly increasing sequence $N_k$ of integers such that $\sum_{n\geq N_k}\lVert g_n\rVert_1\leq \delta 4^{-k}$. Put $A_k:=\left\{x\in X:\sup_{n\geq N_k}g_n(x)>2^{1-k}\right\}$. Then $A_k\subset\bigcup_{n\geq N_k}\left\{x\in X: g_n(x)\geq 2^{-k}\right\}$ so
$$2^{-k}\mu(A_k)\leq \sum_{n\geq N_k}2^{-k}\mu\left\{x\in X: g_n(x)\geq 2^{-k}\right\}\leq \sum_{n\geq N_k}\lVert g_n\rVert_1\leq \delta 4^{-k},$$
so $\mu(A_k)\leq \delta 2^{-k}$. Put $A:=\bigcup_{k\geq 1}A_k$. Then $\mu(A)\leq \sum_{k\geq 1}\mu(A_k)\leq \delta\sum_{k\geq 1}2^{-k}=\delta$, and if $x\notin A$ we have for all $k$: $\sup_{n\geq N_k}g_n(x)\leq 2^{1-k}$, so $\sup_{n\geq N_k}\sup_{x\notin A}g_n(x)\leq 2^{1-k}$. It proves that $g_n\to 0$ uniformly on $A^c$, since for a fixed $\varepsilon>0$ we can take $k$ such that $2^{1-k}\leq\varepsilon$, so for $n\geq N_k$ we have $\sup_{x\notin A}g_n(x)\leq\varepsilon$.


Monday 17 December 2012

calculus - How to compute this limit $\lim_{n\to \infty}\frac{1}{n}\log{{n\choose 2\alpha n}}$

$$\lim_{n\to \infty}\frac{1}{n}\log{{n\choose 2\alpha n}}=-\big((1-2\alpha)\log(1-2\alpha)+2\alpha\log 2\alpha\big)$$



such that $2\alpha n\le n$



I tried to use Stirling formula and we get



$$\lim_{n\to \infty}\frac{1}{n}\log{{n\choose 2\alpha n}}=\lim_{n\to \infty}\frac{1}{n}\log\frac{n^{n+\frac12}}{\sqrt{2\pi}\,(n-2\alpha n)^{n-2\alpha n+\frac12}\,(2\alpha n)^{2\alpha n+\frac12}}=$$

$$=\lim_{n\to \infty}\log{\frac{n}{(n-2\alpha n)^{1-2\alpha}\,(2\alpha n)^{2\alpha}}}$$



but I couldn't continue
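For what it's worth, the limit can be probed numerically (my own aside, not from the original post): for $0<2\alpha<1$, Stirling's formula gives the binary entropy $-2\alpha\log 2\alpha-(1-2\alpha)\log(1-2\alpha)$, and the log-gamma function makes the check cheap even for large $n$:

```python
# Check that (1/n)*log C(n, 2*alpha*n) approaches the binary entropy
# H(2*alpha) = -2a*log(2a) - (1-2a)*log(1-2a), using lgamma for log-factorials.
import math

def log_binom(n, k):
    """log of C(n, k) via log-gamma, stable for large n."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def binary_entropy(p):
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

alpha = 0.2                                   # so 2*alpha = 0.4 < 1
for n in [10**3, 10**4, 10**6]:
    k = round(2 * alpha * n)
    gap = abs(log_binom(n, k) / n - binary_entropy(2 * alpha))
    assert gap < 10 * math.log(n) / n         # the gap shrinks like (log n)/n
```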

proof writing - Mathematical induction (sum of the first few even numbers)

So the question was basically "
Suppose that there are n teams in a rugby league competition. Every team A
plays every other team B twice, once at the home ground for team A, and the other time
at the home ground for team B."



$2(n-1) + 2(n-2) + 2(n-3) + \cdots + 6 + 4 + 2$ is given



a) Write the expression in summation notation.

b) Use mathematical induction to prove it, n>=2



So I got this expression for (a): $\sum_{i=1}^{n} 2(n-i)$, where $n$ is the number of teams.



Part B



Proof:



Let $P(n)$ denote the sum $\sum_{i=1}^{n} 2(n-i)$, for $n\geq 2$.




Consider $P(2)$:
$\sum_{i=1}^{2} 2(2-i) = 2(2-1) + 2(2-2) = 2$
∴ it is true when $n=2$.



We will now assume it is true for P(k)



$\sum_{i=1}^{k} 2(k-i)$ for some integer $k \geq 2$



Consider P(k+1)




$\sum_{i=1}^{k+1} 2(k+1-i)$ for some integer $k \geq 2$



$P(k+1)=2(k-1)+2(k-2)+2(k-3)+\cdots+2(k-i)+2(k+1-1)$



Since we have assumed that P(k) is true.



So we know: P(k+1)=P(k)+(k+1)



ANSWER (I can't answer my own question for 8 hours, so here it is):

$P(n)=n^2-n$

$P(k+1)=P(k)+2((k+1)-1)$

$P(k+1)=(k^2-k)+2(k+1)-2$

$P(k+1)=(k^2-k)+2k+2-2$

$P(k+1)=(k^2-k)+2k$

$P(k+1)=k^2+k$

$P(k+1)=(k+1)^2-(k+1)$

Therefore, by induction, the formula has been proven.



Thanks to @P..
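The closed form $P(n)=n^2-n$ (each of the $n$ teams plays the other $n-1$ teams twice) is easy to confirm by direct summation (an editorial check, not part of the original post):

```python
# Check that 2(n-1) + 2(n-2) + ... + 2 equals n^2 - n (total games, each pair twice).
for n in range(2, 200):
    total = sum(2 * (n - i) for i in range(1, n))
    assert total == n * n - n
```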

linear algebra - Find an eigenvector and eigenvalue of a 2 by 2 symmetric matrix, given another eigenvector and eigenvalue



A symmetric 2 by 2 matrix has eigenvalue $-4$ and eigenvector $(-2, 3)$. Find another eigenvector and eigenvalue, and construct an orthonormal basis (of eigenvectors of matrix A).



Is it possible to find an eigenvector without knowing the matrix (only the size and symmetry are given)?




If it is possible, what should I do?
Many thanks


Answer



Hint: The spectral theorem tells us that every symmetric matrix has an orthonormal basis of eigenvectors. This fact allows us to use one eigenvector to find another.



From this question, however, we don't have enough information to figure out what the other eigenvalue is.



In fact, we can deduce that every matrix satisfying the conditions for this question will have the form
$$
-\frac{4}{13}\pmatrix{4&-6\\-6&9}

+\frac{\lambda}{13} \pmatrix{9&6\\6&4}
$$
Where $\lambda$ is the second eigenvalue.


Sunday 16 December 2012

algebra precalculus - Euclidean Algorithm Proof



My lecturer gave us the following side note when explaining the euclidean algorithm in class.





Euclidean Algorithm: Let $a$ and $b$ be natural numbers, then there are integers $m$ and $n$ such that $\gcd(a, b) = am + bn$



The implementation of the Euclidean Algorithm consists of a sequence of repetitions of the Division Algorithm, with the integers $m$ and $n$ being obtained by unravelling this sequence of steps. To appreciate why such an operation identifies $\gcd(a, b)$, we may start with the assumption that $a > b$; otherwise $\gcd(a, b) = a$ (the case $a = b$) and there is nothing to be done. When $a > b$ the Division Algorithm asserts that $a = bq + r$ with $0 \le r < b$. There are two points to note.

a) Since $\gcd(a, b)$ divides $a$ and $b$, $\gcd(a, b)$ must also divide the remainder $r$. Therefore the greatest common divisor of $a$ and $b$ is also the greatest common divisor of $b$ and $r$, that is, $\gcd(a, b) = \gcd(b, r)$.

b) Since $b < a$ and $r < b$, the sequence of applications of the Division Algorithm generates a strictly decreasing sequence of remainders terminating with $0$, but such that each remainder is divisible by $\gcd(b, r)$. By definition, the penultimate remainder is therefore $\gcd(a, b)$.





I understand and can apply the algorithm, i.e. I can do the repeated division and find the gcd. But I still don't understand the part in bold above. Why can we just assume that?



When trying to prove the same for polynomials in a tutorial homework, I simply used a modified version of the part in bold as a lemma, specifically:




Any polynomial divisor of both $f(x)$ and $g(x)$ must also divide the remainder polynomial when $f(x)$ is divided by $g(x)$




and the tutor made no remark, so I'm assuming this is something which can be clearly seen but that I'm missing. Could anyone please break it down for me?



Answer



Let $d=\gcd(a,b)$. You know $d$ divides both $a$ and $b$, and that means there are integers $k,l\in\mathbb{Z}$ such that $a=dk,b=dl$. And hence:



$r=a-bq=dk-dlq=d(k-lq)$



where $k-lq\in\mathbb{Z}$. So we get that $d$ divides $r$.


Saturday 15 December 2012

modular arithmetic - Extended Euclidean Algorithm: a remainder becomes zero



When working on the Chinese Remainder Theorem, I have stumbled upon this system of linear congruences.
$$
x\equiv2 \mbox{ mod 3}

$$
$$
x\equiv3 \mbox{ mod 5}
$$
$$
x\equiv4 \mbox{ mod 11}
$$
$$
x\equiv5 \mbox{ mod 16}
$$




Problem I am having is, when I apply the extended Euclidean Algorithm to find $M_2$ such that $N_2 M_2\equiv1\mbox{ mod }n_2$ (where $n_2=5$, $N_2=3\times11\times16=528$, and $M_2$ is the modular inverse of $528$ under $\mbox{ mod }5$), I reach the following.



$$
528=105\times5+3\\
105=35\times3+0
$$
What I don't understand is how to proceed from this point forth. This question may have been asked somewhere before, but I am unable to find any such post; that is why I have chosen to post this. Thanks in advance.


Answer



So the problem here is that I was choosing the wrong input to the second iteration. I chose 105, which is the quotient of $528\div5$, as the dividend, whereas it should actually be 5, which is the divisor of the previous iteration. So it actually should be




$$
528=105\times5+3\\
5=1\times3+2\\
3=1\times2+1
$$
and that's it. Thanks to @N.F.Taussig and @LordSharktheUnknown.
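The full system can then be solved with the standard CRT construction; the sketch below (my own addition, not part of the original answer) uses Python's built-in three-argument pow for the modular inverses:

```python
# Solve x = 2 (mod 3), x = 3 (mod 5), x = 4 (mod 11), x = 5 (mod 16)
# with the usual CRT construction; pow(Ni, -1, n) gives the modular inverse.
residues = [2, 3, 4, 5]
moduli = [3, 5, 11, 16]

N = 1
for n in moduli:
    N *= n                        # N = 3*5*11*16 = 2640

x = 0
for r, n in zip(residues, moduli):
    Ni = N // n                   # e.g. N2 = 528 for n = 5
    Mi = pow(Ni, -1, n)           # modular inverse of Ni mod n
    x += r * Ni * Mi
x %= N                            # x == 1973

for r, n in zip(residues, moduli):
    assert x % n == r
```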


real analysis - How can I prove this sequence of functions converges pointwise but not uniformly to a function?



Let $f_k(x)= \tan^{-1}(kx)$ and $f(x)=\left\{\begin{array}{rcl} \pi/2 & \mbox{if} & 0< x \leq 1, \\ 0 & \mbox{if} & x = 0. \end{array} \right.$



How can I prove that $\{f_k\}$ converges pointwise but not uniformly to $f$, without using the fact that the uniform limit of a sequence of continuous functions is continuous?




I could use the theorem that says that the convergence is uniform if and only if $\lim_{k \to \infty} \sup_{x \in I}|f_k(x)-f(x)|=0$, but I don't know how to apply it.



Any hint or ideas will be very appreciated. Thank you very much.


Answer



Hint: Fix $k$. What is



$$\lim_{x \to 0^+} |f_k(x)-f(x)| \,?$$



Can you deduce from here that for all $k$ we have




$$\sup_{x \in I}|f_k(x)-f(x)| \geq \frac{\pi}{4} ?$$



Can you see how this shows that that limit cannot be 0?


limits - Proving $\lim_{x\to\infty} (x^2 +1)(\frac{\pi}{2} - \arctan{x})$ doesn't exist.

How can I show that $$\lim_{x\to\infty} (x^2 +1)(\frac{\pi}{2} - \arctan{x}) $$ doesn't exist? I used the fact that $$\arctan{x}\ge x-\frac{x^3} {3}, $$ so the initial limit is less than $$\lim_{x\to\infty} \frac{x^5}{3} +O(x^4),$$ therefore the limit tends to infinity.



Is this enough? If not, then how can I show this rigorously?

proof writing - Prove inequality using induction with a sequence



The sequence is defined as $d_n = 1$ if $n = 0$, $\frac{n}{d_{n-1}}$ otherwise. The goal is to prove $$\forall n \in \mathbb{Z^+}, d_{2n-1} \leq \sqrt{2n-1}$$ using induction. I successfully proved the base case, and laid out the induction step: assume $$d_{2k-1} \leq \sqrt{2k-1}$$ prove $$d_{2k+1} \leq \sqrt{2k+1}$$ But I am now struggling to algebraically prove the later inequality. I expressed $d_{2k+1}$ in terms of $d_{2k-1}$:
$$d_{2k+1} = \frac{(2k+1)d_{2k-1}}{2k}$$ But I do not see any steps I can take from here. Any help is appreciated! I am new to this resource, please let me know if I formatted anything incorrectly!


Answer



Starting from
$$d_{2k-1} \leq \sqrt{2k-1}$$
multiply both sides with $2k+1$ and you will get
$$2k\cdot d_{2k+1} = (2k+1)d_{2k-1} \leq (2k+1)\sqrt{2k-1}$$

Divide both sides by $2k$:



$$d_{2k+1} \le \sqrt{2k+1}\, \frac{\sqrt{(2k-1)(2k+1)}}{2k} = \sqrt{2k+1} \sqrt{\frac{4k^2-1}{4k^2}} \le \sqrt{2k+1}$$
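The inequality is easy to confirm numerically for the first few hundred terms (an illustrative check, not part of the answer, using exact rational arithmetic to avoid rounding artifacts):

```python
# Check d_{2n-1} <= sqrt(2n-1) for the sequence d_0 = 1, d_n = n / d_{n-1}.
import math
from fractions import Fraction

d = Fraction(1)
seq = [d]                        # seq[m] holds d_m as an exact rational
for n in range(1, 200):
    d = n / d
    seq.append(d)

for n in range(1, 100):          # n ranges over positive integers
    m = 2 * n - 1
    assert float(seq[m]) <= math.sqrt(m)
```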


Friday 14 December 2012

algebra precalculus - If a certain number is divided by the sum of its two digits, how to find it from quotient and remainder?



If a certain number is divided by the sum of its two digits, the quotient is $7$ and the remainder is $3$. If the digits are reversed and the resulting number is divided by the sum of the digits, the quotient is $3$ and remainder is $7$. Find the number.




My Attempt:



Let the number be $10x+y$.



According to question:
$$\dfrac {10x+y}{x+y}=??$$.



I could not get how to make the equation using quotient and remainder. Please help.


Answer



HINT:$$10x+y=7(x+y)+3=7x+7y+3$$

and $$10y+x=3(x+y)+7=3x+3y+7$$
Can you proceed now?
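Since there are only 90 two-digit candidates, the system can also be brute-forced (an editorial aside confirming that the solution is unique):

```python
# Brute-force search for the two-digit number satisfying both conditions.
solutions = []
for x in range(1, 10):           # tens digit
    for y in range(0, 10):       # units digit
        n, rev, s = 10 * x + y, 10 * y + x, x + y
        if n == 7 * s + 3 and rev == 3 * s + 7:
            solutions.append(n)

assert solutions == [73]         # 73 = 7*10 + 3, and 37 = 3*10 + 7
```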


Set Theory: Symmetric Relations



I was just trying to figure out this problem I came across. For a set
$X = \{1, 2, 3, 4, 5\}$ is it possible to come up with a relation on $X$ that is symmetric, but neither reflexive nor transitive?



Also, it is possible to find a relation that is transitive, but neither reflexive nor symmetric?




Thank you!


Answer



In what follows I use $aR\,b$ as an abbreviation for $(a,b)\in R$.



To attack problems like these, start with a small relation that satisfies the positive condition. For your second problem, for instance, you could begin by letting $1 R\,2$, $2 R\,3$, and $1R\,3$; if you stop there, you have a transitive relation. Is it reflexive? Is it symmetric? Can you stop at this point?



Of course, you could have started simply with $1R\,1$, which is also transitive, but it’s clearly symmetric, so that’s a bad idea. The same objection rules out starting with $\varnothing$, the empty relation, in which nothing is related to anything: it’s also symmetric. The next simplest way to get transitivity is the one that I actually used.



The same approach works for your first problem and yields the answer suggested by JDH: starting with $1R\,1$ isn’t very helpful, but starting with $1R\,2$ and $2R\,1$ works without further modification.


polynomials - A contest math problem


Let $P(x)$ be a polynomial with integer coefficients of degree $d>0$.




  1. If $\alpha $ and $\beta $ are two integers such that $P(\alpha)=1$ and $P(\beta)=-1$, then prove that $|\beta -\alpha | $ divides $2$.


  2. Prove that the number of distinct integer roots of $(P(x))^2-1$ is at most $d+2$.




First one is very easy. But I cannot understand how to prove the second one. I would appreciate any help.

Thursday 13 December 2012

Find the limit as n approaches infinity



We have the following function:



$$U_n = \sin \dfrac{1}{3} n \pi$$



What is the limit of this function as n approaches infinity?




I first tried to use my calculator as help; for n I chose some arbitrary large numbers, such as 100 and 1000. Then I just took $n = 10^{50}$ and it gave me an error.



So the correct answer is that it doesn't have one, but why? And why does my calculator give a value for $n = 10^2, 10^3$ and not for bigger numbers such as $n=10^{50}$?


Answer



As David hinted in his comment, try using the fact that
$$
\sin(x + 2\pi k) = \sin(x)
$$
whenever $k$ is an integer. Or, if you just write out the first few values of $U_n$ (compute these using the unit circle), you should notice a pattern.
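The pattern alluded to can be made explicit in a few lines (my own addition, not part of the answer): the values repeat with period $6$ and include distinct numbers, so no limit exists:

```python
# The values sin(n*pi/3) cycle with period 6, so the sequence cannot converge.
import math

values = [math.sin(n * math.pi / 3) for n in range(12)]
half = math.sqrt(3) / 2

for n in range(6):
    assert abs(values[n] - values[n + 6]) < 1e-9   # period 6
# The cycle contains distinct values, e.g. 0 and +/- sqrt(3)/2:
assert abs(values[0] - 0.0) < 1e-9
assert abs(values[1] - half) < 1e-9
assert abs(values[4] + half) < 1e-9
```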



recurrence relations - closed form for $d(4)=2$, $d(n+1)=d(n)+n-1$?



I am helping a friend in his last year of high school with his math class. They are studying recurrences and proof by induction. One of the exercises was simply "How many diagonals does a regular $n$-gon have?".



We quickly found a direct answer to that question: $d(n) = \frac{(n-3)n}2$, but since they are studying recurrences, we didn't stop, and found the recurrence $d(n+1)=d(n)+n-1$. We proceeded to prove by induction that the closed form we initially found is indeed a solution for the recurrence.



But then he asked me a question I couldn't answer: we found the closed form because the question gives us an obvious mapping to an easy to solve geometry problem, but is there a way to find a closed form using the recurrence only?


Answer



Yes it is possible. This is an example of a telescoping sequence.




Notice that $d_{n+1} - d_n = n-1$



Now consider



$(d_{n+1} - d_n) + (d_n - d_{n-1}) + (d_{n-1} - d_{n-2}) + \cdots + (d_5 - d_4)$



I will leave the rest to you.
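The telescoping can also be checked by iterating the recurrence from $d(4)=2$ and comparing with the closed form (an illustrative aside, not part of the answer):

```python
# Telescoping the recurrence d(n+1) = d(n) + n - 1 from d(4) = 2 recovers
# the closed form d(n) = n(n-3)/2.
def d_recursive(n):
    d = 2                        # d(4) = 2 diagonals in a square
    for m in range(4, n):
        d += m - 1               # d(m+1) = d(m) + m - 1
    return d

for n in range(4, 100):
    assert d_recursive(n) == n * (n - 3) // 2
```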


Wednesday 12 December 2012

integration - Evaluating $int_{-infty}^infty frac{sin x}{x(x^2 +1)}dx$



When constructing appropriate contours, we would like it so that the singularities are not on the contour but rather inside or outside the contour.



I see that the integrand has a removable singularity at $z=0$. Does this matter? I feel that if we define $f(0) = 1$, then $f$ is analytic at $z=0$, so it is fine to construct a contour passing through $z=0$ (and it will never be fine to construct one passing through $\pm i$).


Answer



Yes, you are right. If we define$$f(z)=\begin{cases}\dfrac{\sin z}z&\text{ if }z\neq0\\1&\text{ otherwise,}\end{cases}$$then your integral is$$\int_{-\infty}^{+\infty}\frac{f(x)}{x^2+1}\,\mathrm dx.$$Since $f$ is an analytic function, you can use the standard methods of Complex Analysis.



linear algebra - Determine whether the set $H$ of all matrices form a subspace of $M_{2 times 2}$

Determine if the set $H$ of all matrices of the form
$ \left[ \begin{array}{cc}
a & b \\
0 & d
\end{array} \right] $
is a subspace of $M_{2 \times 2}$ (the set of all $2 \times 2$ matrices).



This is something I came up with. Can someone look at it and let me know any useful corrections/suggestions to the question, please?



Answer:



Without specification as to the nature of $a,b$ and $d$, it is assumed that $a,b,d \in \mathbb{R}$



Hence, $H$ is determined to be a subspace of $M_{2 \times 2}$ because it is closed under vector addition and scalar multiplication and contains the zero vector (when $a=b=d=0$).

Tuesday 11 December 2012

algebra precalculus - Given $n$, find smallest number $m$ such that $m!$ ends with $n$ zeros

I got this question as a programming exercise. I first thought it was rather trivial, and that $m = 5n$, because the number of trailing zeroes is given by the number of factors of 5 in $m!$ (and factors of 2, but there are always a lot more of those).




But it seems like this is not true, because I'm not getting the correct answers for certain $n$. Any hints?
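One standard approach (a sketch; the function names are ours): count the factors of $5$ in $m!$ with Legendre's formula, then binary-search for the smallest $m$ whose count reaches $n$. The catch is that some values of $n$ (e.g. $n = 5$) are skipped entirely, which is exactly why $m = 5n$ fails.

```python
def trailing_zeros(m):
    # number of factors of 5 in m! (Legendre's formula);
    # factors of 2 are always more plentiful, so 5s decide the zeros
    count, p = 0, 5
    while p <= m:
        count += m // p
        p *= 5
    return count

def smallest_m(n):
    # binary search: trailing_zeros is nondecreasing, and
    # trailing_zeros(5*(n+1)) >= n+1 gives a safe upper bound
    lo, hi = 0, 5 * (n + 1)
    while lo < hi:
        mid = (lo + hi) // 2
        if trailing_zeros(mid) < n:
            lo = mid + 1
        else:
            hi = mid
    return lo if trailing_zeros(lo) == n else None  # n may be skipped

print(smallest_m(6), smallest_m(5))   # → 25 None
```

Note that $24!$ ends in $4$ zeros while $25!$ ends in $6$, so no factorial ends in exactly $5$ zeros; any correct program has to report such $n$ as unattainable.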

modular arithmetic - $a \equiv b \pmod n$ and $c \equiv d \pmod n$ implies $ac \equiv bd \pmod n$



Given that $a \equiv b \pmod n$ and $c\equiv d \pmod n$, I need to prove that $ac \equiv bd \pmod n$



So far, I've only managed to deduce that $a+b \equiv c+d \pmod n$. I don't know if this is usable, but it's there, at least.



Any hint would be greatly appreciated!


Answer



$\begin{align}
{\bf Hint}\ \ \ a\, &=\ b\,+\,n\,j\\
c\, &=\ d\,+\,n\,k\\
\Rightarrow\,\ ac &= bd\,+\,n\,(\_\_)\ \ \text{for some integer}\ (\_\_)
\end{align}$



Remark $\, $ If $\,n = 10\,$ then this generalizes a units digit rule well-known from decimal arithmetic, namely mod $10,\,$ the units digit of a product is congruent to the product of the unit digits, e.g. $\,1\color{#c00}3\cdot 1\color{#0a0}6 = 208\,$ has units digit $\,\color{#c00}3\cdot\color{#0a0}6\equiv 8.\,$ Said in the language of the Congruence Product Rule



$$\begin{eqnarray}{\rm mod}\ 10\!:\ &&1\color{#c00}{3}\equiv \color{#c00}3,\ \ 1\color{#0a0}{6}\equiv \color{#0a0}6\\ \Rightarrow\ &&1\color{#c00}{3}\cdot 1\color{#0a0}{6}\equiv \color{#c00}3\cdot \color{#0a0}6\equiv 8\end{eqnarray}\qquad\qquad\qquad $$



The Congruence Product Rule may be viewed as a radix $\,n\,$ generalization of the units digit product rule. However, it is more general, since the "units digits" $\,b,d\,$ need not lie in the interval $\,[0,n\!-\!1].$




See here for proofs of all of the common congruence arithmetic rules.
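Purely as an illustration of the Congruence Product Rule, here is a quick randomized check (the expansion $(b+nj)(d+nk) = bd + n(bk + dj + njk)$ is exactly what fills the blank in the hint):

```python
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(2, 50)
    b, d = random.randint(-100, 100), random.randint(-100, 100)
    j, k = random.randint(-10, 10), random.randint(-10, 10)
    a, c = b + n * j, d + n * k          # a ≡ b and c ≡ d (mod n)
    assert (a * c - b * d) % n == 0      # hence ac ≡ bd (mod n)
print("product rule verified on 1000 random cases")
```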


integration - Showing the limit of $\int_x^\infty (y \log y)^{-1}\,dy$ is zero



I'm trying to show



$$\lim_{x \to \infty} \int_x^\infty (y \log y)^{-1} dy = 0$$



in order to finish a proof. The problem I'm having is that, without the limit, I know the integral diverges, and hence when I use substitution I end up with an indeterminate form.



I think rather than using a substitution like $v = \log x$, I need to re-write the integral in the form of $e^{-t}$ so that the integral can be expressed in a proper form.




Any hints/advice is appreciated thank you!



Edit:



Using the substitution $t=y \log y$, I end up with an integral of the form $\int \frac{1}{1+e^t}dt$, which does not seem to help, as I again end up with an indeterminate form.


Answer



For every real $x > 1$,
$$\int \limits_{x}^{\infty} \frac{dy}{y \ln y} > \int \limits_{x}^{x^2} \frac{dy}{y \ln y} = \ln \ln x^2 -\ln \ln x = \ln 2 \approx 0.693 > 0.$$
So if $ \lim \limits_{x \to \infty} \int \limits_{x}^{\infty } \frac{dy}{y \ln y} = 0$, then (taking, say, $\epsilon = 0.1$) there would be some $x_0$ such that $ \int \limits_{ x}^{x^2} \frac{dy}{y \ln y} < \epsilon$ for all $ x > x_0$, contradicting the fact that this integral always equals $\ln 2 \approx 0.693$. (Indeed, $\int_x^\infty \frac{dy}{y \ln y}$ diverges for every $x > 1$, so the limit in question does not exist at all.)
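The key identity used here, $\int_x^{x^2} \frac{dy}{y\ln y} = \ln 2$, is easy to confirm numerically; a stdlib-only sketch (the midpoint-rule helper is ours):

```python
from math import log

def midpoint_quad(f, a, b, n=100_000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

x = 10.0
val = midpoint_quad(lambda y: 1.0 / (y * log(y)), x, x * x)
print(val)   # close to ln 2 ≈ 0.6931, independent of the choice of x > 1
```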


number theory - Find all integer solutions to $a+b+c \mid ab+bc+ca \mid abc$

As you can see from the title, I am trying to find all integer solutions $(a,b,c)$ to $$(a+b+c) \ \lvert\ (ab+bc+ca) \ \lvert\ abc$$ (that is, $a+b+c$ divides $ab+bc+ca$, and $ab+bc+ca$ divides $abc$). Unfortunately, I could not find anything on this problem (although I find it hard to believe that nobody thought of this before).



What I've found so far

I have looked at the simpler case: $(a+b) \ \lvert\ ab$. I was able to solve this, and all solutions are $$(a,b)=(\alpha(\alpha+\beta)\gamma,\beta(\alpha+\beta)\gamma)$$ with $\alpha,\beta,\gamma\in\mathbb{Z}$.



I was also able to reduce the given problem to only one division. If we are able to solve $$(a_0b_0+b_0c_0+c_0a_0) \ \lvert\ (a_0+b_0+c_0)a_0b_0c_0 $$ with $\gcd(a_0,b_0,c_0)=1$, then we know that
\begin{align}
a&=a_0(a_0+b_0+c_0)\cdot k\\
b&=b_0(a_0+b_0+c_0)\cdot k\\

c&=c_0(a_0+b_0+c_0)\cdot k\\
\end{align}
with $k\in\mathbb{Z}$, are all solutions to the original problem. However, I was not able to solve this reduced problem either. I have computed a few solutions to it (and the corresponding solutions of the original problem) but was not able to find a pattern. Any progress on the problem is welcome!
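Not an answer, but for gathering data on the divisibility chain, a small brute-force search over positive triples may help; a sketch (all names are ours, and the zero-divisor cases are simply skipped):

```python
def chain_holds(a, b, c):
    # does (a+b+c) | (ab+bc+ca) and (ab+bc+ca) | abc hold?
    s, e, p = a + b + c, a*b + b*c + c*a, a*b*c
    return s != 0 and e != 0 and e % s == 0 and p % e == 0

# collect all nondecreasing positive triples below a small bound
sols = [(a, b, c)
        for a in range(1, 20)
        for b in range(a, 20)
        for c in range(b, 20)
        if chain_holds(a, b, c)]
print(sols[:5])
```

For instance $(3,3,3)$ qualifies, since $9 \mid 27$ and $27 \mid 27$; scanning the output for patterns is one way to generate conjectures.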

Sunday 9 December 2012

trigonometry - Converting a sum of trig functions into a product



Given,
$$\cos{\frac{x}{2}} +\sin{(3x)} + \sqrt{3}\left(\sin\frac{x}{2} + \cos{(3x)}\right)$$
How can we write this as a product?



Some things I have tried:




  • Grouping like arguments with each other. Wolfram Alpha gives $$\cos{\frac{x}{2}} + \sqrt{3}\sin{\frac{x}{2}} = 2\sin{\left(\frac{x}{2} + \frac{\pi}{6}\right)}$$but I don't know how to derive that myself or do a similar thing with the $3x$.

  • Write $3x$ as $6\frac{x}{2}$ and then using the triple and double angle formulas, but that is much too tedious and there has to be a more efficient way.

  • Rewriting $\sqrt{3}$ as $2\sin{\frac{\pi}{3}}$ and then expanding and trying to use the product-to-sum formulas, and then finally grouping like terms and then using the sum-to-product formulas, but that didn't work either.




I feel like I'm overthinking this, so any help or insights would be useful.


Answer



$$\cos{\frac{x}{2}} +\sin(3x) + \sqrt{3}\left(\sin\frac{x}{2} + \cos(3x)\right)$$



$$=\cos{\frac{x}{2}} + \sqrt{3}\sin\frac{x}{2} +\sin(3x) + \sqrt{3}\cos(3x)$$



$$=2\left(\frac{1}{2}\cos\frac{x}{2} + \frac{\sqrt{3}}{2}\sin\frac{x}{2} +\frac{1}{2}\sin(3x) + \frac{\sqrt{3}}{2}\cos(3x)\right)$$




Note that $\frac{1}{2}=\sin\frac{\pi}{6}$ and $\frac{\sqrt{3}}{2}=\cos\frac{\pi}{6}$ so:



$$=2\left(\sin\frac{\pi}{6}\cos\frac{x}{2} + \cos\frac{\pi}{6}\sin\frac{x}{2} +\sin\frac{\pi}{6}\sin(3x) + \cos\frac{\pi}{6}\cos(3x)\right)$$



Then, using the angle addition formulas:



$$=2\left(\sin\left(\frac{x}{2}+\frac{\pi}{6}\right)+\cos\left(3x-\frac{\pi}{6}\right)\right)$$



$$=2\left(\sin\left(\frac{x}{2}+\frac{\pi}{6}\right)+\sin\left(3x+\frac{\pi}{3}\right)\right)$$




Then, using the sum-to-product formula:



$$=4\left(\sin\left(\frac{\frac{x}{2}+\frac{\pi}{6}+3x+\frac{\pi}{3}}{2}\right)\cos\left(\frac{\frac{x}{2}+\frac{\pi}{6}-3x-\frac{\pi}{3}}{2}\right)\right)$$



$$=4\sin\left(\frac{7x+\pi}{4}\right)\cos\left(\frac{-15x-\pi}{12}\right)$$
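A numerical spot-check of the final factorization (a sketch; the sample points are arbitrary):

```python
from math import sin, cos, pi, sqrt, isclose

for x in [0.1, 0.7, 2.3, -1.9]:
    lhs = cos(x/2) + sin(3*x) + sqrt(3) * (sin(x/2) + cos(3*x))
    rhs = 4 * sin((7*x + pi) / 4) * cos((-15*x - pi) / 12)
    assert isclose(lhs, rhs, abs_tol=1e-12)
print("factorization checked")
```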


statistics - Probability with loaded and fair dice




I own five different six-sided dice. Four of the dice are fair dice, meaning they have values 1, 2, 3, 4, 5, 6. However, one of the dice is loaded; thus, it never shows 1, 2 or 3, but is equally likely to show the values 4, 5, or 6. For my experiment, I will pick one die at random and roll it twice.



The first thing I would like to calculate is the probability of getting two sixes. To calculate this, I first calculated the probability of getting one six and multiplied it by two. Let $S$ be the event that two sixes are rolled.
$$P(S) = 2(\frac45(\frac16) + \frac15(\frac13)) = .4 $$
However, I am not sure if this is correct. I need to calculate this because I would also like to calculate $P(L|S)$ where L = event that a loaded die was picked. Additionally, I feel this is incorrect, because if I change the '2' to a '10' to calculate it for 10 rolls instead of 2, I get a value over 1 which makes no sense. To summarize, how can I calculate $P(S)$ properly so I can calculate $P(L|S)$?


Answer



The $\frac16$ and $\frac13$ should be squared at first, not doubled at the end. The correct calculation for $P(S)$ is
$$P(S)=\frac45\cdot\frac1{6^2}+\frac15\cdot\frac1{3^2}=\frac1{45}+\frac1{45}=\frac2{45}$$
The second term above is $P(L\cap S)$, so

$$P(L\mid S)=\frac{P(L\cap S)}{P(S)}=\frac{1/45}{2/45}=\frac12$$
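A Monte Carlo sanity check of both numbers (a sketch; the seed and trial count are arbitrary choices of ours):

```python
import random

random.seed(1)
trials, s_count, l_and_s = 200_000, 0, 0
for _ in range(trials):
    loaded = random.random() < 1 / 5           # one die in five is loaded
    faces = [4, 5, 6] if loaded else [1, 2, 3, 4, 5, 6]
    if random.choice(faces) == 6 and random.choice(faces) == 6:
        s_count += 1
        l_and_s += loaded
print(s_count / trials, l_and_s / s_count)     # near 2/45 ≈ 0.044 and 1/2
```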


calculus - Evaluate $\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}$



Evaluate:

$$\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}$$


Answer



Related technique. Here is a closed form solution of the integral



$$\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x} = -\frac{\ln(2)}{2}. $$



Here is the technique, consider the integral



$$ F(s) = \int_{0}^{+\infty }{e^{-sx}\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}, $$




which implies



$$ F''(s) = \int_{0}^{+\infty }{e^{-sx}\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\text{d}x}. $$



The last integral is the Laplace transform of the function



$$ \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} $$



and equals




$$ F''(s) = \frac{1}{4}\,\psi' \left( \frac{1}{2}+\frac{1}{2}\,s \right) -\frac{1}{2s}. $$



Now, you need to integrate the last equation twice and determine the two constants of integrations, then take the limit as $s\to 0$ to get the result.
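A crude numerical check of the stated closed form (stdlib-only sketch; the cutoff $R$ and the $-1/(2x^2)$ tail estimate are ours):

```python
from math import exp, log

def g(x):
    # the integrand; near 0 it tends to -1/12, so no singularity
    return (x / (exp(x) - exp(-x)) - 0.5) / (x * x)

h, R = 1e-3, 300.0
val = h * sum(g((i + 0.5) * h) for i in range(int(R / h)))   # midpoint rule
val += -1.0 / (2.0 * R)   # tail: the integrand behaves like -1/(2x^2)
print(val)   # close to -ln(2)/2 ≈ -0.3466
```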


Saturday 8 December 2012

real analysis - Function whose image of every open interval is $(-\infty,\infty)$




How to find a function from reals to reals such that the image of every open interval is the whole of R?



Is there one which maps rationals to rationals?


Answer



Though this can be done explicitly with enough cleverness (for example with the Conway base 13 function), I rather like the following choice-based argument, which requires almost no thought once you're familiar with transfinite recursion.



Consider the set consisting of all ordered triples of reals $(a,b,c)$ with $a<b$, and fix a well-ordering of this set in which every element has fewer than continuum-many predecessors (possible since the set has cardinality continuum).

Now we build our function $f$ recursively along this well-order, fixing the value of $f$ at one new point at each step. At the step corresponding to $(a,b,c)$, we want to ensure that there exists a point $x$ in the open interval $(a,b)$ such that $f(x)=c$. Since we've only fixed $f$ at less-than-continuum-many points, and $(a,b)$ has cardinality continuum, we can choose an $x$ in $(a,b)$ such that $f(x)$ is not yet fixed, and fix $f(x)$ to be $c$.




This recursion gives us a partial function from $\mathbb{R}$ to $\mathbb{R}$ that already satisfies our requirements. We can make it total by just setting $f(x)$ to be $0$ (say) wherever $f(x)$ is not yet defined.



If we additionally want $f$ to map rationals to rationals, we can simply set $f(x)=0$ for every rational $x$ before commencing the recursion.


Friday 7 December 2012

real analysis - Use $\epsilon$-$\delta$ definition to prove $\lim\limits_{x \rightarrow x_0}\sqrt[3]{f(x)} = \sqrt[3]{A}$

It's known that $\lim\limits_{x \rightarrow x_0}f(x) = A$, how to prove $\lim\limits_{x \rightarrow x_0}\sqrt[3]{f(x)} = \sqrt[3]{A}$?



Here's what I've got now:



When $A = 0$, to prove $\lim\limits_{x \rightarrow x_0}\sqrt[3]{f(x)} = 0$: since $\lim\limits_{x \rightarrow x_0}f(x) = A = 0$, for any $\epsilon > 0$ we can make $|f(x)| < \epsilon^3$, which gives $|\sqrt[3]{f(x)}| < \epsilon$.



When $A \ne 0$, $|\sqrt[3]{f(x)} - \sqrt[3]{A}| = \frac{|f(x) - A|}{|f(x)^{\frac{2}{3}}+(f(x)A)^{\frac{1}{3}} + A^{\frac{2}{3}}|}$...




How can I deal with $(f(x)A)^{\frac{1}{3}}$? Thanks.
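One way to handle the mixed term (a sketch we are adding; it was not part of the original post): with $u = f(x)^{1/3}$ and $v = A^{1/3}$, completing the square gives
$$u^{2} + uv + v^{2} = \left(u + \tfrac{v}{2}\right)^{2} + \tfrac{3}{4}v^{2} \;\ge\; \tfrac{3}{4}|A|^{2/3},$$
so
$$\left|\sqrt[3]{f(x)} - \sqrt[3]{A}\right| \le \frac{4\,|f(x)-A|}{3\,|A|^{2/3}},$$
and choosing $\delta$ so that $|f(x)-A| < \tfrac{3}{4}|A|^{2/3}\,\epsilon$ finishes the proof; no lower bound on $f(x)$ itself is needed.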

Sum of product of squared binomial coefficients




Similar to the sum of products of binomial coefficients: what happens when I square the binomial coefficients? Is there a nice closed formula for that?



More precisely, I'm interested in the special case where the $k_i$ are all equal to the sum of all $x_i$ (to use the notation of the question linked above), which I'll call $m$ (not necessarily equal to $n$ here).



So what I'm looking for is making the following sum tractable:



$$S_{n,m}=\sum_{x_1+\ldots+x_n=m}\prod_{i=1}^{n}{\binom{m}{x_i}^2}$$



My goal is to show that $S_{n,m}\cdot n^{-2m}$ decreases as the number of summands, $n$, grows. I've tried expanding $n^{2m}$:




$$n^{2m} = \sum_{y_1+\ldots+y_n=2m}{\binom{2m}{y_1,\ldots,y_n}} = \sum_{y_1+\ldots+y_n=2m} {\prod_{i=1}^{n} {\binom{\sum_{j=1}^{i}{y_j}}{y_i}}}$$



but I didn't get anywhere with that. I'd be grateful for any pointers on how to approach this.


Answer




We consider the generating function
\begin{align*}
S_m(x)=\sum_{j=0}^m\binom{m}{j}^2x^j\qquad\qquad m\geq 0
\end{align*}
Note $S_{n,m}$ is the coefficient of $x^m$ of $\left(S_m(x)\right)^n$.

\begin{align*}
S_{n,m}=\sum_{{x_1+\ldots+x_n=m}\atop{x_j\geq 0}}\prod_{i=1}^{n}{\binom{m}{x_i}^2}
=[x^m]\left(S_m(x)\right)^n
\end{align*}




We can represent $S_m(x)$ in terms of Legendre polynomials $P_m(x)$. From the representation
\begin{align*}
P_m(x)=\frac{1}{2^m}\sum_{j=0}^m\binom{m}{j}^2(x-1)^{m-j}(x+1)^j
\end{align*}

we obtain



\begin{align*}
P_m\left(\frac{x+1}{x-1}\right)
&=\frac{1}{2^m}\sum_{j=0}^m\binom{m}{j}^2\left(\frac{x+1}{x-1}-1\right)^{m-j}\left(\frac{x+1}{x-1}+1\right)^j\\
&=\frac{1}{2^m}\sum_{j=0}^m\binom{m}{j}^2\frac{2^{m-j}}{(x-1)^{m-j}}\cdot
\frac{(2x)^j}{(x-1)^j}\\
&=\frac{1}{(x-1)^m}\sum_{j=0}^m\binom{m}{j}^2x^j\tag{1}
\end{align*}





Multiplying (1) by $(x-1)^m$ gives
\begin{align*}
S_m(x)=(x-1)^mP_m\left(\frac{x+1}{x-1}\right)\qquad\qquad m\geq 0
\end{align*}
and we finally conclude
\begin{align*}
S_{n,m}&=[x^m]\left(S_m(x)\right)^n\\
&=[x^m](x-1)^{mn}P_m\left(\frac{x+1}{x-1}\right)^n\qquad\qquad n\geq 1,m\geq 0
\end{align*}





Example: $S_{3,1}$ to $S_{3,3}$



First we calculate $S_{3,1}$ to $S_{3,3}$ according to the OP's definition of $S_{n,m}$. Then we look at the coefficients of the corresponding transformed Legendre polynomials.



\begin{align*}
S_{3,1}&=\sum_{{x_1+x_2+x_3=1}\atop{x_j\geq 0}}\prod_{i=1}^3\binom{1}{x_i}^2
=\frac{3!}{2!1!}\binom{1}{0}^2\binom{1}{0}^2\binom{1}{1}^2\\
&=\color{blue}{3}\\

S_{3,2}&=\sum_{{x_1+x_2+x_3=2}\atop{x_j\geq 0}}\prod_{i=1}^3\binom{2}{x_i}^2\\
&=\frac{3!}{2!1!}\binom{2}{0}^2\binom{2}{0}^2\binom{2}{2}^2
+\frac{3!}{1!2!}\binom{2}{0}^2\binom{2}{1}^2\binom{2}{1}^2\\
&=3+48\\
&=\color{blue}{51}\\
S_{3,3}&=\sum_{{x_1+x_2+x_3=3}\atop{x_j\geq 0}}\prod_{i=1}^3\binom{3}{x_i}^2\\
&=\frac{3!}{2!1!}\binom{3}{0}^2\binom{3}{0}^2\binom{3}{3}^2
+\frac{3!}{1!1!1!}\binom{3}{0}^2\binom{3}{1}^2\binom{3}{2}^2\\
&\qquad+\frac{3!}{3!}\binom{3}{1}^2\binom{3}{1}^2\binom{3}{1}^2\\
&=3+486+729\\

&=\color{blue}{1218}
\end{align*}



On the other hand we obtain with some help of Wolfram Alpha
\begin{align*}
S_1(x)&=(x-1)P_1\left(\frac{x+1}{x-1}\right)\\
&=1+x\\
S_1^2(x)&=1+2x+x^2\\
S_1^3(x)&=1+\color{blue}{3}x+3x^2+x^3\\
\\

S_2(x)&=(x-1)^2P_2\left(\frac{x+1}{x-1}\right)\\
&=1+4x+x^2\\
S_2^2(x)&=1+8x+18x^2+8x^3+x^4\\
S_2^3(x)&=1+12x+\color{blue}{51}x^2+88x^3+51x^4+12x^5+x^6\\
\\
S_3(x)&=(x-1)^3P_3\left(\frac{x+1}{x-1}\right)\\
&=1+9x+9x^2+x^3\\
S_3^2(x)&=1+18x+99x^2+164x^3+99x^4+18x^5+x^6\\
S_3^3(x)&=1+27x+270x^2+\color{blue}{1218}x^3+2484x^4+2484x^5+1218x^6+270x^7+27x^8+x^9
\end{align*}

with corresponding coefficients marked in $\color{blue}{\text{blue}}$.
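The three blue values are easy to confirm by brute force; a stdlib-only sketch (the function name is ours):

```python
from itertools import product
from math import comb

def S(n, m):
    # sum over all weak compositions x_1 + ... + x_n = m
    total = 0
    for xs in product(range(m + 1), repeat=n):
        if sum(xs) == m:
            p = 1
            for x in xs:
                p *= comb(m, x) ** 2
            total += p
    return total

print(S(3, 1), S(3, 2), S(3, 3))   # → 3 51 1218
```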


linear algebra - Containment of one convex hull in another



This question is related my previous question (Comparing two probability distributions) which are both related to my current research.




Suppose we have two bounded convex hulls in $\mathbb{R}^n$ defined by the two sets of linear inequalities $A_1x \geq 0$ and $A_2x \geq 0$ and the common equality $\sum x=a$ where $A_1$, $A_2$ are real valued matrices, $a$ is a real positive constant and $x \in \mathbb{R}^n$. What conditions must be satisfied by $A_1$, $A_2$ for the first convex hull to be contained in the second convex hull?



Thank you.


Answer



Let's assume $\{x: A_1 x \ge 0, \sum x = a\}$ is nonempty.
For each row $R$ of $A_2$, you want every solution of $A_1 x \ge 0$, $\sum x = a$ to satisfy $R \cdot x \ge 0$, i.e. the optimal value of the linear programming
problem P:



minimize $R \cdot x$ subject to $A_1 x \ge 0$, $\sum x = a$




is at least $0$. The dual of this linear programming problem is D:



maximize $a z$
subject to $ A_1^T y + z {\bf 1} = R$, $y \ge 0$ (where $\bf 1$ is the vector of all $1$'s)



By duality, P has a solution with objective value $ < 0$ if and only if D
has no solution with objective value $\ge 0$. Your condition is equivalent
to: there exist $y \ge 0$ and $z\ge 0$ such that $R = A_1^T y + z \bf 1$,
i.e. each row of $A_2$ is a linear combination with nonnegative coefficients

of $\bf 1$ and the rows of $A_1$.
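The primal criterion can be illustrated in two dimensions without an LP solver (all matrices and names here are ours, and the code is restricted to $n = 2$ for brevity): since a linear function attains its minimum over a bounded polytope at a vertex, containment holds iff every row of $A_2$ is nonnegative at every vertex of the first polytope.

```python
def vertices(A1, a, eps=1e-9):
    # vertices of {x in R^2 : A1 x >= 0, x1 + x2 = a}
    vs = []
    for (r1, r2) in A1:
        if abs(r1 - r2) < eps:          # row parallel to the sum constraint
            continue
        x1 = -r2 * a / (r1 - r2)        # solve r.x = 0 together with x1 + x2 = a
        x2 = a - x1
        if all(s * x1 + t * x2 >= -eps for (s, t) in A1):
            vs.append((x1, x2))
    return vs

def contained(A1, A2, a):
    # first polytope inside second <=> every row of A2 is nonnegative
    # at every vertex of the first
    return all(s * x1 + t * x2 >= -1e-9
               for (x1, x2) in vertices(A1, a)
               for (s, t) in A2)

A1 = [(1, 0), (0, 1)]        # with a = 1: the segment from (1,0) to (0,1)
print(contained(A1, [(1, 2), (2, 1)], 1.0))   # → True
print(contained(A1, [(1, -1)], 1.0))          # → False
```

In the second case the row $(1,-1)$ fails at the vertex $(0,1)$; consistently with the dual criterion above, $(1,-1)$ cannot be written as a nonnegative combination of $(1,0)$, $(0,1)$, and $\bf 1$.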


Extended Euclidean algorithm with negative numbers



Does the Extended Euclidean Algorithm work for negative numbers?
If I strip out the signs, perhaps it will return the correct GCD, but what exactly should I do if I also want $ax + by = GCD(|a|,|b|)$ to be correct? (I guess it won't hold anymore as soon as I've stripped out the signs of $a$ and $b$.)



== UPDATE ==




It couldn't be that simple.



If $a$ was negative, then after stripping out signs EEA returns such $x$ and $y$ that $(-a)*x + b*y = GCD(|a|,|b|)$. In this case $a*(-x) + b*y = GCD(|a|,|b|)$ also holds.



The same for $b$.



If both $a$ and $b$ were negative, then $(-a)*x + (-b)*y = GCD(|a|,|b|)$ holds and, well $a*(-x) + b*(-y) = GCD(|a|,|b|)$ ought to hold.



Am I right?
Should I just negate $x$ if I have negated $a$ and negate $y$ if I have negated $b$?



Answer



Well, if you strip the signs of $a$ and $b$, and instead run the extended Euclidean algorithm on $|a|$ and $|b|$, then if your result is $|a|x+|b|y=\gcd(|a|,|b|)$, you can still get the solution you want, because
$$a(\text{sign}(a)\cdot x)+b(\text{sign}(b)\cdot y)=\gcd(|a|,|b|).$$
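A sketch of the sign-fixing described above: run the usual extended Euclid on $|a|$, $|b|$, then flip the sign of $x$ (resp. $y$) whenever $a$ (resp. $b$) was negated.

```python
def egcd(a, b):
    # standard iterative extended Euclid for a, b >= 0:
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

def egcd_signed(a, b):
    g, x, y = egcd(abs(a), abs(b))
    if a < 0:
        x = -x
    if b < 0:
        y = -y
    return g, x, y

print(egcd_signed(-12, 18))   # → (6, 1, 1), since (-12)*1 + 18*1 = 6
```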


complex analysis - How to evaluate this integral by considering $\oint_{C_{R}} \frac{1}{z^{2}+1}\,dz$

Consider the integral $I=\int_{-\infty}^{\infty} \frac{1}{x^{2}+1}\, dx$. Show how to evaluate this integral by considering $\oint_{C_{R}} \frac{1}{z^{2}+1}\, dz$, where $C_{R}$ is the closed contour consisting of the semicircle in the upper half plane with endpoints at $(-R, 0)$ and $(R, 0)$ plus the segment of the $x$ axis between them.




I use $\frac{1}{z^{2}+1}=-\frac{1}{2i}\left[\frac{1}{z+i}-\frac{1}{z-i}\right]$, and I must prove, without using the residue theorem, that the integral along the open semicircle in the upper half plane vanishes as $R\rightarrow \infty$.



Could someone help me through this problem?

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...