Monday 29 February 2016

statistics - What's the probability of a number getting picked from a pool fewer than 3 times?

Picking from a pool of 20000 different numbers, in each experiment we pick 1000 different numbers (a number can be picked again in later experiments, but not twice within the same experiment), and we run n experiments. What's the probability that at least one number is picked fewer than 3 times across the entire n experiments?




From what I got, the probability of a number in this pool getting picked at least once is (1 - the probability of this number never getting picked).



So in Experiment 1, the probability of this number not getting picked is 19/20.



After Experiment 2, the probability of this number never having been picked is (19/20)^2.



So the probability of this number not getting picked in any of the first n experiments is (19/20)^n.



And that's what I got for "at least once". What if we change that to 2, or 3, or d times?
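
A quick way to get a feel for the numbers (a sketch I am adding, not part of the original question): for one fixed number, the count of experiments in which it appears is Binomial(n, 1/20), since each experiment picks 1000 of the 20000 numbers. The probability that at least one of the 20000 numbers is picked fewer than 3 times can then be bounded from above by the union bound.

from math import comb

def p_single_fewer_than_d(n, d=3, q=1/20):
    # P(a fixed number is picked fewer than d times in n experiments);
    # its count of appearances is Binomial(n, q) with q = 1000/20000.
    return sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(d))

for n in (200, 400, 600):
    p = p_single_fewer_than_d(n)
    # Union bound: the events for different numbers are dependent,
    # so this is only an upper bound on "at least one number < 3 times".
    print(n, p, min(1.0, 20000 * p))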

multivariable calculus - Can I convert to polar coordinates when calculating multivariate limits with three variables

When working on limits of functions with two variables, $f(x,y)$, I like to convert the problem to polar coordinates a lot of the time, by changing the question from $$\lim_{(x,y)\to (0,0)}f(x,y)$$ to $$\displaystyle\lim_{r\to 0}f(r\cos\theta,r\sin\theta).$$ I was just doing some problems in my book when I encountered a limit of a function with three variables, $f(x,y,z)$. I was just wondering if there was a way to calculate such a limit with polar coordinates.



An example being: $$\lim_{(x,y,z)\to(0,0,0)}\frac{xy+yz^2+xz^2}{x^2+y^2+z^4}$$



Converting it into polar coordinates gives me:




$\displaystyle\lim_{r\to 0}\dfrac{r^2\sin\theta\cos\theta+r\sin\theta \cdot z^2+r\cos\theta\cdot z^2}{r^2(\sin^2\theta+\cos^2\theta)+z^4}=\displaystyle\lim_{r\to 0}\dfrac{r(r\sin\theta\cos\theta+\sin\theta\cdot z^2+\cos\theta\cdot z^2)}{r^2+z^4}$



Can I proceed, or are polar coordinates strictly for use with two variables only?
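
A quick numeric check (my addition, not part of the original question) suggests that for the example above the limit does not exist at all, whatever coordinate system is used: the function approaches different values along different paths.

def f(x, y, z):
    return (x*y + y*z**2 + x*z**2) / (x**2 + y**2 + z**4)

# Along y = x, z = 0 the value is 1/2; along y = z = 0 it is 0.
for t in (1e-1, 1e-3, 1e-6):
    print(f(t, t, 0.0), f(t, 0.0, 0.0))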

geometry - One of the diagonals in a hexagon cuts off a triangle of area $\leq 1/6$ of the hexagon



Problem: Show that, in a convex hexagon, there exists a diagonal which cuts off a triangle of area not more than one-sixth of the hexagon.



My attempt: Suppose we have a hexagon $ABCDEF$. There are two possible cases: either the main diagonals are concurrent, or they are not.




If the main diagonals $AD, BE, CF$ concur at a point $G$, then the main diagonals cut the hexagon into $6$ triangles, at least one of which has area $\leq \frac 16 [ABCDEF]$. Suppose one such triangle is $DEG$. Then one of the triangles $DEF$ or $DEC$ has area $\leq[DEG]$, and we are done.



But suppose the main diagonals are not concurrent, i.e., they form a triangle $PQR$. How can I prove the statement in this case?


Answer



Consider the six triangles ABQ, BCQ, CDR, DER, EFP, FAP. They are disjoint and together cover an area smaller than that of the entire hexagon, so at least one of them has area at most one-sixth of the hexagon. And each of them reaches up to a diagonal that doesn't touch its base.



[Figure: hexagon diagram with the six triangles colored]


definite integrals - Finding the value of a sum using Riemann sum theorem




Question: Find the value of $\sum_{i=1}^{n}(\frac{1}{n-i})^{c}$ for large $n$.





\begin{align}
\sum_{i=1}^{n}(\frac{1}{n-i})^{c}
& = \sum_{i=1}^{n}(\frac{1}{n})^{c}(\frac{1}{1-\frac{i}{n}})^{c}
\\ & = \frac{n}{n} \times \sum_{i=1}^{n}(\frac{1}{n})^{c}(\frac{1}{1-\frac{i}{n}})^{c}
\\ & = n(\frac{1}{n})^{c} \sum_{i=1}^{n}\frac{1}{n}(\frac{1}{1-\frac{i}{n}})^{c} \qquad(1)
\end{align}



Let $f(x) = (\frac{1}{1-x})^{c}$, by using Riemann-sum theorem, we have

\begin{align}
\lim_{n\rightarrow \infty}\sum_{i=1}^{n}\frac{1}{n}(\frac{1}{1-\frac{i}{n}})^{c}
& = \int_{0}^{1} \left(\frac{1}{1-x}\right)^{c} dx = A \qquad(2)
\end{align}
By using $(1)$ and $(2)$, for sufficiently large $n$, we have
$$\bbox[5px,border:2px solid #C0A000]{\sum_{i=1}^{n}(\frac{1}{n-i})^{c} = A\times n(\frac{1}{n})^{c}}$$




The presented proof has a problem, $f(x)$ is not defined in the closed interval $[0,1]$. How can I solve this problem?








Definition (Riemann-sum theorem) Let $f(x)$ be a function defined on a closed interval $[a, b]$. Then, we have
$$\lim_{n\rightarrow \infty}\sum_{i=1}^{n}f\Big(a +\Big(\frac{b - a}{n}\Big)i\Big)\frac{b-a}{n}=\int_{a}^{b}f(x)dx$$


Answer



For example, for $c=\tfrac12$ the sum can be bounded directly by telescoping:
\begin{align}
& \frac{2}{\sqrt{n-i} + \sqrt{n-i+1}} \leq \frac{1}{\sqrt{n-i}} \leq \frac{2}{\sqrt{n-i} + \sqrt{n-i-1}} \nonumber\\
& \qquad \Rightarrow 2(\sqrt{n-i+1} - \sqrt{n-i}) \leq \frac{1}{\sqrt{n-i}} \leq 2(\sqrt{n-i} - \sqrt{n-i-1}) \nonumber\\
& \qquad \Rightarrow 2 \sum_{i=1}^{n-1}(\sqrt{n-i+1} - \sqrt{n-i}) \leq \sum_{i=1}^{n-1} \frac{1}{\sqrt{n-i}} \leq 2 \sum_{i=1}^{n-1}(\sqrt{n-i} - \sqrt{n-i-1}) \nonumber\\
& \qquad \Rightarrow 2 (\sqrt{n}-1) \leq \sum_{i=1}^{n-1} \frac{1}{\sqrt{n-i}} \leq 2 \sqrt{n-1}
\end{align}
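
A quick numerical check of the resulting asymptotics (my addition): for $0 < c < 1$ the integral is $A = \int_0^1 (1-x)^{-c}\,dx = \frac{1}{1-c}$, so the sum over $i = 1, \ldots, n-1$ (the $i=n$ term is excluded, as above) should behave like $A\, n^{1-c}$.

def partial_sum(n, c):
    # sum_{i=1}^{n-1} (n - i)^(-c), i.e. sum_{k=1}^{n-1} k^(-c)
    return sum((n - i) ** (-c) for i in range(1, n))

c = 0.5
for n in (10**3, 10**5, 10**6):
    print(n, partial_sum(n, c) / (n ** (1 - c) / (1 - c)))  # ratio -> 1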


Sunday 28 February 2016

linear algebra - Can two matrices with the same characteristic polynomial have different eigenvalues?



The polynomial is $-\lambda^3+3\lambda -2$




which factorizes into $(\lambda-1)(\lambda +1)(\lambda -2)$



A matrix A has the above characteristic polynomial, and so its eigenvalues are 1, -1, and 2.



However, another matrix B, already in diagonal form, has the same characteristic polynomial, but with eigenvalues 1,1,-2, i.e., diagonal entries 1,1,-2.



Is this possible? Or have I gone wrong in my computations?



The problem statement does ask to show that the characteristic polynomials are the same but that the matrices A and B are not similar. So, perhaps I have found exactly what I needed, but it just seems weird...




Thanks,


Answer



$-\lambda^3+3\lambda - 2 = -(\lambda-1)^2(\lambda+2) \neq -(\lambda-1)(\lambda+1)(\lambda-2)$.
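
If you want to double-check such a factorization mechanically, a one-liner with SymPy does it (my addition, assuming SymPy is available):

from sympy import symbols, factor

lam = symbols('lambda')
print(factor(-lam**3 + 3*lam - 2))  # prints -(lambda - 1)**2*(lambda + 2)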


probability - A multinomial problem (balls, bins, etc.)

Consider the well-known multinomial setting: there are $L$ balls, thrown at random into $n$ bins so that the probability that a ball falls in bin $i$ is $p_i$, independently of the other balls (the $p_i$'s are all positive and sum to 1).



Let $X_i$ be the number of balls in the ith bin. It is straightforward to show that for any k distinct indices $i_1, i_2,\ldots, i_k$, the events $\lbrace X_{i_1} > 0\rbrace, \lbrace X_{i_2} > 0\rbrace,\ldots, \lbrace X_{i_k} > 0\rbrace$ are negatively dependent in the sense that



$P(X_{i_1} > 0, X_{i_2} > 0,\ldots, X_{i_k} > 0) < P(X_{i_1} > 0)P(X_{i_2} > 0)\ldots P(X_{i_k} > 0)$




The proof is by induction over $k$: for $k = 2$, the probability on the LHS is $1 - [(1 - p_{i_1})^L + (1 - p_{i_2})^L - (1 - p_{i_1} - p_{i_2})^L]$, and on the RHS it's $[1 - (1 - p_{i_1})^L] [1 - (1 - p_{i_2})^L]$. Elementary manipulations show that the inequality holds, and the induction step is easily proven using the multiplication rule $P(A \cap B) = P(A)P(B | A)$.



This negative dependence is also intuitively clear: if we know that there is at least one ball in bin i, there is at least one ball that will surely not fall into bin j, so the probability of bin j being non-empty decreases.



So that was easy. However, for proving a certain bound in my research, I need to do something similar, but for random k indices: suppose that after the balls were thrown into the bins, somebody comes and chooses randomly k distinct bins (so that each set of k bins has the same probability to be chosen, namely, 1/(n choose k)). Let $i_1, i_2, \ldots, i_k$ be the chosen indices, and again, I need to show that the inequality (below the second paragraph above) holds.



I thought to repeat the outline of the previous proof, but had a problem with proving the k = 2 case (the induction step is no problem). To compute each of the sides of the inequality I conditioned on the choice of the two indices (law of total probability), but even though many terms got cancelled, I couldn’t prove the inequality.



I am quite confident that the inequality holds, both by intuition (basically the same argument as in the non-random indices case) and because of numerical calculations.
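
For concreteness, here is a sketch of the kind of numerical check described (my addition), for $k=2$: since each of the two randomly chosen indices is marginally uniform on $\{1,\ldots,n\}$, the inequality compares the average of the pairwise joint probabilities against the square of the average marginal.

from itertools import combinations
import random

def check(p, L):
    # Exact probabilities: P(X_i > 0) = 1 - (1 - p_i)^L, and
    # P(X_i > 0, X_j > 0) = 1 - (1-p_i)^L - (1-p_j)^L + (1-p_i-p_j)^L.
    n = len(p)
    marg = [1 - (1 - pi)**L for pi in p]
    joint = [1 - (1 - p[i])**L - (1 - p[j])**L + (1 - p[i] - p[j])**L
             for i, j in combinations(range(n), 2)]
    lhs = sum(joint) / len(joint)      # LHS with a uniformly random pair of bins
    rhs = (sum(marg) / n) ** 2         # product of the (equal) marginals
    return lhs, rhs

random.seed(0)
w = [random.random() for _ in range(8)]
p = [wi / sum(w) for wi in w]
print(check(p, L=20))  # in experiments like this one, lhs < rhs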




Any ideas what to do here? I’d be grateful for any help.

Saturday 27 February 2016

Bijection between finite and infinite sequences over Reals.

So define the set of finite sequences to be the set $S$ of sequences $(a_1,a_2,\cdots)$ where the $a_k$ are real numbers and only finitely many of them are non-zero. The set of infinite sequences is defined similarly, except that we can have infinitely many non-zero terms. How do I prove that there does not exist a bijection between these two sets?

integration - Evaluate the double sum $\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m n\left(m^2-n^2\right)^2}$




As a follow-up to this nice question, I am interested in



$$
S_1=\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m n\left(m^2-n^2\right)^2}
$$



Furthermore, I would be also very grateful for a solution to



$$
S_2=\sum_{m=1}^{\infty}\sum_{n=m+1}^{\infty}\frac{ 1}{m n\left(m^2-n^2\right)^2}
$$



Following my answer in the question mentioned above and the numerical experiments of @Vladimir Reshetnikov it's very likely that at least



$$
S_1+S_2 = \frac{a}{b}\pi^6
$$



I think both sums may be evaluated by using partial fraction decomposition and the integral representation of the Polygamma function but I don't know how exactly and I guess there could be a much more efficient route.



Answer



Clearly, $S_1=S_2$ (this can be shown by reversing the order of summation, as was noted above).
Using
$$
\frac{ 1}{m n\left(m^2-n^2\right)^2}=\frac{ (m+n)^2-(m-n)^2}{4 m^2 n^2\left(m^2-n^2\right)^2}
$$
we get
$$
S_1=\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m n\left(m^2-n^2\right)^2}=\frac{1}{4}\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m^2 n^2\left(m-n\right)^2}-\frac{1}{4}\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m^2 n^2\left(m+n\right)^2},
$$

and after reversing the order of summation in the first sum
$$
S_1=\frac{1}{4}\sum_{n=1}^{\infty}\sum_{m=n+1}^{\infty}\frac{ 1}{m^2 n^2\left(m-n\right)^2}-\frac{1}{4}\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m^2 n^2\left(m+n\right)^2}=\\
\frac{1}{4}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{ 1}{m^2 n^2\left(m+n\right)^2}-\frac{1}{4}\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m^2 n^2\left(m+n\right)^2}. \qquad\qquad (1)
$$



Let's introduce a third sum
$$
S_3=\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m^2 n^2\left(m+n\right)^2}
=\sum_{n=1}^{\infty}\sum_{m=n+1}^{\infty}\frac{ 1}{m^2 n^2\left(m+n\right)^2}=\\
\frac{1}{2}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{ 1}{m^2 n^2\left(m+n\right)^2}-\frac{1}{8}\zeta(6).
$$
Using the evaluation of An Infinite Double Summation $\sum_{k=1}^{\infty} \sum_{n=1}^{\infty} \frac{1}{n^2k^2(n+k)^2}$, we get
$$
S_3=\frac{1}{2}\cdot\frac{1}{3}\zeta(6)-\frac{1}{8}\zeta(6)=\frac{1}{24}\zeta(6).\qquad\qquad\qquad (2)
$$
From (1) and (2) we get



$$
S_1=\frac{1}{4}\cdot\frac{1}{3}\zeta(6)-\frac{1}{4}\cdot\frac{1}{24}\zeta(6)=\frac{7}{96}\zeta(6)=\frac{7}{96}\cdot\frac{\pi^6}{945}=\frac{\pi^6}{12960}
$$
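
A numerical sanity check of the closed form (my addition): truncating the double sum already matches $\pi^6/12960 \approx 0.0741813$ to several digits.

from math import pi

M = 2000
S1 = sum(1.0 / (m * n * (m*m - n*n)**2)
         for m in range(2, M) for n in range(1, m))
print(S1, pi**6 / 12960)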


calculus - General Cesaro summation with weight




Assume that $a_n\to \ell $ is a convergent sequence of complex numbers and $\{\lambda_n\}$ is a sequence of positive real numbers such that $\sum\limits_{k=0}^{\infty}\lambda_k = \infty$




Then, show that,
$$\lim_{n\to\infty} \frac{1}{\sum\limits_{k=0}^{n}\lambda_k} \sum\limits_{k=0}^{n}\lambda_k a_k=\ell =\lim_{n\to\infty} a_n$$



(Note that : This is more general than the special case where, $\lambda_n= 1$)




Answer



Let $\varepsilon >0$ and choose $N$ such that $|a_k-\ell|\le \varepsilon$ for all $k>N$.
Then, for $n>N$ we have,
\begin{split}\left| \frac{\sum\limits_{k=0}^{n}\lambda_k a_k}{\sum\limits_{k=0}^{n}\lambda_k} -\ell\right|
&= \left| \frac{\sum\limits_{k=0}^{n}\lambda_k (a_k - \ell)}{\sum\limits_{k=0}^{n}\lambda_k} \right|\\
&= \left| \frac{\sum\limits_{k=0}^{N}\lambda_k (a_k - \ell)+\sum\limits_{k=N+1}^{n}\lambda_k (a_k - \ell)}{\sum\limits_{k=0}^{n}\lambda_k} \right|\\
&\le \frac{M}{\sum\limits_{k=0}^{n}\lambda_k} + \frac{\sum\limits_{k=N+1}^{n}\lambda_k \underbrace{\left| a_k - \ell\right|}_{\le\varepsilon}}{\sum\limits_{k=0}^{n}\lambda_k} \\
&\le \frac{M}{\sum\limits_{k=0}^{n}\lambda_k} + \varepsilon,
\end{split}
where $M= \left|\sum\limits_{k=0}^{N}\lambda_k( a_k-\ell)\right|$ is a fixed constant. Since $\sum\limits_{k=0}^{n}\lambda_k\to \infty$ as $n\to\infty$, the first term tends to $0$, so the left-hand side is eventually at most $2\varepsilon$; as $\varepsilon$ was arbitrary, the limit is $\ell$.
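
A quick numerical illustration of the statement (my addition), with weights $\lambda_k = 1/(k+1)$, which sum to infinity, applied to a sequence converging to $1$:

a = lambda k: 1 + (-1)**k / (k + 1)   # a_k -> 1
lam = lambda k: 1 / (k + 1)           # weights with divergent sum

for n in (10**2, 10**4, 10**6):
    num = sum(lam(k) * a(k) for k in range(n + 1))
    den = sum(lam(k) for k in range(n + 1))
    print(n, num / den)  # weighted means -> 1 (slowly, at rate ~ 1/log n)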


real analysis - Bijection from $[0,1]$ to $(1, infty)$

I've come across many different versions of this question on here, but not any that map $[0,1]$ to $(1, \infty)$.



I was thinking that it must be piece-wise defined, since the endpoints 0 and 1 will be the trickiest part of defining the bijection... The only method of doing this that I could come up with would be to possibly show a bijection from $[0,1]$ to $(1,2)$, then construct another bijection from $(1,2)$ to $(1, \infty)$, and then the composition will be from $[0,1]$ to $(1, \infty)$, but I haven't been able to come up with any function that can do this... Any help is much appreciated.
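
One possible construction along the lines described (a sketch I am adding, using the standard trick of absorbing the endpoints into a countable set): let
$$g(x)=\begin{cases}\frac12 & x=0\\ \frac{1}{n+2} & x=\frac1n \text{ for some } n\in\mathbb{N}\\ x & \text{otherwise}\end{cases}\qquad\text{and}\qquad h(x)=\frac{1}{1-x}.$$
Then $g$ is a bijection from $[0,1]$ to $(0,1)$ (it shifts the sequence $0, 1, \frac12, \frac13, \ldots$ onto $\frac12, \frac13, \frac14, \ldots$ and fixes everything else), $h$ is an increasing bijection from $(0,1)$ to $(1,\infty)$, so $h \circ g$ is a bijection from $[0,1]$ to $(1,\infty)$.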

terminology - What is the $x$ in $log_b x$ called?



In $b^a = x$, $b$ is the base, a is the exponent and $x$ is the result of the operation. But in its logarithm counterpart, $\log_{b}(x) = a$, $b$ is still the base, and $a$ is now the result. What is $x$ called here? The exponent?


Answer



Argument (the $x$ in any $f(x)$ is called the argument of the function).


terminology - What is the difference between a function and a formula?

I think that the difference is that the domain and codomain are part of a function definition, whereas a formula is just a relationship between variables, with no particular input set specified.



Hence, for two functions $f$ and $g$, $f(x)$ can be equal to $g(x)$ for all integers, say, but if the domain of $f$ is {2, 3, 4} and the domain of $g$ is {6, 7, 8, 9}, the two functions will be different.



And conversely, if the functions 'do different things' - i.e. $f(x) = x$ and $g(x) = x^3$ - but the domains of $f$ and $g$ (these are the same) are set up such that the values of the functions are the same over the domain (this would work in this case for $\{-1, 0, 1\}$), then the functions are the same, even though the formulas are different.




Is this correct?



Thank you.

algebra precalculus - Representing rounding algebraically





Is there a standard way to deal with rounding in algebra? For example:



y = x + round(x/2)


Would give 2 when x = [1, 3), 3 when x = [3, 5), etc. This of course ends up creating a step function. EDIT: the function is injective, see the comments.



Is there a standard math-y way to represent the above equation, and is it possible to solve for x in terms of y?



For context, I need to know the value of y above, as well as round(x/2), both as integers. I'm currently using algebra.js to simplify my expressions, so extra bonus points for showing me how I might implement rounding using that library.



Answer



Edit because the OP keeps changing the question.



The answer below applies to the function
$$
y = 1 + \text{round}(x/2)
$$

which the OP says in a comment is what he meant to ask.



The original question asked about

$$
y = x + \text{round}(x/2)
$$

which is indeed injective and can be inverted, as @EeveeTrainer has commented.





You can't solve for integer $x$ in terms of integer $y$ since two values of $x$ give the same value of $y$.

If $x$ must be an integer then in a computer program you can write the function that computes $y$ from $x$ by looking at whether $x$ is even or odd and acting accordingly. You don't need rounding. For arbitrary $x$ you do need the round function.
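
As a small sketch of that case analysis (my addition, with round-half-up chosen as the convention for odd $x$; note that Python's built-in round() rounds halves to even, so the convention has to be made explicit):

def y_of(x: int) -> int:
    # round(x/2): for even x it is exactly x // 2; for odd x, rounding
    # half up gives (x + 1) // 2.
    half = x // 2 if x % 2 == 0 else (x + 1) // 2
    return 1 + half

print([y_of(x) for x in range(8)])  # [1, 2, 2, 3, 3, 4, 4, 5]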


calculus - Proof involving the continuous function $f:[0,1] to [0,1]$



Theorem: Let $f:[0,1] \to [0,1]$. Assume $f$ is continuous. Then there exists
$c \in [0, 1]$ such that $f(c) = c$.



1) For any function $f$, let $g(x) = f(x) - x$. If the theorem is true, what does that say about $g(c)$?



2) Write a proof of the theorem. Do it by applying the Intermediate Value Theorem in the appropriate way to the function $g(x) = f(x) - x$ with $a = 0$ and $b = 1$. Clearly state why the hypotheses of the IVT are satisfied here.



1) Well if the theorem is true, then there exists $c$ such that $f(c) = c$. Then $f(c) - c = 0$. Then $g(c) = 0$.



2)Intermediate Value Theorem: Let $I$ be an interval and $f$ a function whose domain contains $I$. If $f$ is continuous, then for all $a, b \in I$
with $a < b$ and all real numbers $k$, if $k$ is strictly between $f (a)$ and $f (b)$, then there exists $c$ such
that $a < c < b$ and $f (c) = k$.



Proof Attempt: Take $a = 0$ and $b = 1$. Then $a,b \in[0,1]$ and $a < b$. Since $f:[0,1] \to [0,1]$, it follows that for all real numbers $k$ such that $f(a) < k < f(b)$, then there exists $c$ such that $a < c < b$ so $0 < c < 1$ and $f(c) = k$. Then $f(c) - k = 0$. Thus, since $g(x)=f(x)-x$, it follows that $g(c) = 0$.




Looking to see if I did this correctly and/or if there is a more elegant way to prove problem 2. I am a little confused why the first theorem is true, but I went with it.


Answer



Here's a proof as, I suppose, would be expected:



Let's apply the IVT to the function $g$, on the interval $[a, b] = [0, 1]$.



We are especially interested in showing that the function $g$ has a zero in its domain. Therefore we apply the IVT with the intention of using a value of $k$ equal to $0$ (as per your statement of the IVT).



But $g(0) = f(0)-0=f(0)$, with the first equality by the definition of $g$. Also $g(1) = f(1)-1.$




But since $f$ has codomain $[0, 1]$, we must have $f(1)\leq1$. If equality holds, then the proof is over: the point $x=1$ is such that $f(x)=x$.



If the inequality is strict, that is, $f(1)<1$, then $g(1)<0$.



Similarly, $g(0)=f(0)\geq0$, because $f$ has codomain $[0, 1]$. If equality holds, we are done, just as before: $f(0)=0$.



Otherwise, $g(0)$ is positive, and finally we can apply the IVT:



The function $g$ is such that $g(0)>0$ and $g(1)<0$, and is continuous because it is a sum of continuous functions; therefore a number $c$ must exist such that $0 < c < 1$ and $g(c) = 0$.


But if $g(c)=0$, then by the definition of $g$, $f(c)-c=0$, and rearranging, $f(c)=c$, Q.E.D.






I know this proof is extremely verbose, but since you appear to be at your first steps in proof-writing, I firmly believe that writing every single logical step in your proofs is a good habit to pick up, very much like being generous in indentation and verbose in comments is a good habit for beginner programmers.


Friday 26 February 2016

calculus - Find the mistake in $\lim_{x\rightarrow 1^-} \frac{\sum_{n=0}^\infty x^n}{\sum_{n=0}^\infty x^n}=1 \Rightarrow 1=\frac{1}{2}$



It is obvious that we have:
$$\lim_{x\rightarrow 1^-} \frac{\sum_{n=0}^\infty x^n}{\sum_{n=0}^\infty x^n}=\lim_{x\rightarrow 1^-}1=1.$$
But let us now write this sum in two ways, let $a_n=x^n$ and $b_n=x^{2n}+x^{2n+1}$ we thus have $\sum_{n=0}^\infty x^n = \sum_{n=0}^\infty a_n = \sum_{n=0}^\infty b_n$. We can write the above limit as:
$$
\lim_{x \rightarrow 1^-}\lim_{N\rightarrow \infty} \frac{\sum_{n=0}^N a_n}{\sum_{n=0}^{N} b_n} = \lim_{N\rightarrow \infty} \lim_{x \rightarrow 1^-} \frac{\sum_{n=0}^N a_n}{\sum_{n=0}^{N} b_n},
$$

where we can swap limits because of the Moore-Osgood Theorem. We now find for the right hand side:

$$
\lim_{N\rightarrow \infty} \lim_{x \rightarrow 1^-} \frac{\sum_{n=0}^N a_n}{\sum_{n=0}^{N} b_n}=\lim_{N\rightarrow \infty} \frac{N+1}{2(N+1)}=\frac{1}{2}.
$$

This shows that $1=\frac{1}{2}$ which is clearly incorrect, but I do not see where the error occurs, I guess it is in the step where the Moore-Osgood Theorem is applied where we define $f_N(x)=\frac{\sum_{n=0}^N a_n}{\sum_{n=0}^{N} b_n}$.



EDIT: I believe I have found the error: in order to apply the Moore-Osgood Theorem we need uniform convergence of $f_N(x)$ to $f(x)=\frac{\sum_{n=0}^\infty a_n}{\sum_{n=0}^{\infty} b_n}$, but this $f$ is not continuous, therefore we cannot apply Dini's theorem to show that pointwise convergence implies uniform convergence.


Answer



In order to use the Moore-Osgood Theorem, you must make sure that $(f_n)_{n \geq 0}$ converges uniformly toward $f$.



i.e. $\sup\limits_{[0,1]}|f_n - f| \rightarrow 0$.




This is not the case here.


real analysis - Examples of bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$




Could any one give an example of a bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$?



Thank you.


Answer



First, note that it is enough to find a bijection $f:\Bbb R^2\to \Bbb R$, since then $g(x,y,z) = f(f(x,y),z)$ is automatically a bijection from $\Bbb R^3$ to $\Bbb R$.



Next, note that since there is a bijection from $[0,1]\to\Bbb R$ (see appendix), it is enough to find a bijection from the unit square $[0,1]^2$ to the unit interval $[0,1]$. By constructions in the appendix, it does not really matter whether we consider $[0,1]$, $(0,1]$, or $(0,1)$, since there are easy bijections between all of these.




Mapping the unit square to the unit interval



There are a number of ways to proceed in finding a bijection from the unit square to the unit interval. One approach is to fix up the "interleaving" technique I mentioned in the comments, mapping $\langle 0.a_1a_2a_3\ldots, 0.b_1b_2b_3\ldots\rangle$ to $0.a_1b_1a_2b_2a_3b_3\ldots$. This doesn't quite work, as I noted in the comments, because there is a question of whether to represent $\frac12$ as $0.5000\ldots$ or as $0.4999\ldots$. We can't use both, since then $\left\langle\frac12,0\right\rangle$ goes to both $\frac12 = 0.5000\ldots$ and to $\frac9{22} = 0.40909\ldots$ and we don't even have a function, much less a bijection. But if we arbitrarily choose the second representation, then there is no element of $[0,1]^2$ that is mapped to $\frac12$, and if we choose the first there is no element that is mapped to $\frac9{22}$, so either way we fail to have a bijection.



This problem can be fixed.



(In answering this question, I tried many web searches to try to remember the fix, and I was amazed at how many sources I found that ignored the problem, either entirely, or by handwaving. I never did find it; I had to remember it. Sadly, I cannot remember where I saw it first.)



First, we will deal with $(0,1]$ rather than with $[0,1]$; bijections between these two sets are well-known, or see the appendix. For real numbers with two decimal expansions, such as $\frac12$, we will agree to choose the one that ends with nines rather than with zeroes. So for example we represent $\frac12$ as $0.4999\ldots$.




Now instead of interleaving single digits, we will break each input number into chunks, where each chunk consists of some number of zeroes (possibly none) followed by a single non-zero digit. For example, $\frac1{200} = 0.00499\ldots$ is broken up as $004\ 9\ 9\ 9\ldots$, and $0.01003430901111\ldots$ is broken up as $01\ 003\ 4\ 3\ 09\ 01\ 1\ 1\ldots$.



This is well-defined since we are ignoring representations that contain infinite sequences of zeroes.



Now instead of interleaving digits, we interleave chunks. To interleave $0.004999\ldots$ and $0.01003430901111\ldots$, we get $0.004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$. This is obviously reversible. It can never produce a result that ends with an infinite sequence of zeroes, and similarly the reverse mapping can never produce a number with an infinite sequence of trailing zeroes, so we win. A problem example similar to the one from a few paragraphs ago is resolved as follows: $\frac12 = 0.4999\ldots$ is the unique image of $\langle 0.4999\ldots, 0.999\ldots\rangle$ and $\frac9{22} = 0.40909\ldots$ is the unique image of $\langle 0.40909\ldots, 0.0909\ldots\rangle$.



This is enough to answer the question posted, but I will give some alternative approaches.



Continued fractions




According to the paper "Was Cantor Surprised?" by Fernando Q. Gouvêa, Cantor originally tried interleaving the digits himself, but Dedekind pointed out the problem of nonunique decimal representations. Cantor then switched to an argument like the one Robert Israel gave in his answer, based on continued fraction representations of irrational numbers. He first constructed a bijection from $(0,1)$ to its irrational subset (see this question for the mapping Cantor used and other mappings that work), and then from pairs of irrational numbers to a single irrational number by interleaving the terms of the infinite continued fractions. Since Cantor dealt with numbers in $(0,1)$, he could guarantee that every irrational number had an infinite continued fraction representation of the form $$x = x_0 + \dfrac{1}{x_1 + \dfrac{1}{x_2 + \ldots}}$$



where $x_0$ was zero, avoiding the special-case handling for $x_0$ in Robert Israel's solution.



Cantor-Schröder-Bernstein mappings



The Cantor-Schröder-Bernstein theorem takes an injection $f:A\to B$ and an injection $g:B\to A$, and constructs a bijection between $A$ and $B$.



So if we can find an injection $f:[0,1)^2\to[0,1)$ and an injection $g:[0,1)\to[0,1)^2$, we can invoke the CSB theorem and we will be done.




$g$ is quite trivial; $x\mapsto \langle x, 0\rangle$ is one of many obvious injections.



For $f$ we can use the interleaving-digits trick again, and we don't have to be so careful because we need only an injection, not a bijection. We can choose the representation of the input numbers arbitrarily; say we will take the $0.5000\ldots$ representation rather than the $0.4999\ldots$ representation. Then we interleave the digits of the two input numbers. There is no way for the result to end with an infinite sequence of nines, so we are guaranteed an injection.



Then we apply CSB to $f$ and $g$ and we are done.



Appendix




  1. There is a bijection from $(-\infty, \infty)$ to $(0, \infty)$. The map $x\mapsto e^x$ is an example.



  2. There is a bijection from $(0, \infty)$ to $(0, 1)$. The map $x\mapsto \frac2\pi\tan^{-1} x$ is an example, as is $x\mapsto{x\over x+1}$.


  3. There is a bijection from $[0,1]$ to $(0,1]$. Have $0\mapsto \frac12, \frac12\mapsto\frac23,\frac23\mapsto\frac34,$ and so on. That takes care of $\left\{0, \frac12, \frac23, \frac34,\ldots\right\}$. For any other $x$, just map $x\mapsto x$.


  4. Similarly, there is a bijection from $(0,1]$ to $(0,1)$.



combinatorics - Prove or name this identity: the number of factors of $p$ in $\binom{n}{k}$ is $(s_p(k)+s_p(n-k)-s_p(n))/(p-1)$



You can count the number of factors of $p$ that are in $\binom{n}{k}$ for prime $p$. Let $s_p(n)$ be the sum of the digits of $n$ in base $p$. Then, the number of factors of $p$ in $\binom{n}{k}$ is $(s_p(k)+s_p(n-k)-s_p(n))/(p-1)$.


Answer




It's a corollary of Legendre's 1808 theorem that the power of the prime $p$ that divides $n!$ is



$$ [n/p] + [n/p^2] + [n/p^3] +\: \cdots\ =\ \frac{n-s_p(n)}{p-1}$$



For this and much more see Andrew Granville's very interesting survey The Arithmetic Properties of Binomial Coefficients.



A full answer is here.
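
A quick mechanical check of the identity for a few cases (my addition):

from math import comb

def s_p(n, p):
    # digit sum of n in base p
    s = 0
    while n:
        s += n % p
        n //= p
    return s

def vp(n, p):
    # exponent of the prime p in n
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

for (n, k, p) in [(10, 3, 2), (100, 37, 5), (81, 40, 3)]:
    lhs = vp(comb(n, k), p)
    rhs = (s_p(k, p) + s_p(n - k, p) - s_p(n, p)) // (p - 1)
    print(n, k, p, lhs == rhs)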


number theory - Project Euler problem #3 (how to do it by hand?)

Problem #3 in Project Euler:




What is the largest prime factor of the number $600851475143$?




I want to solve this by hand. (I am doing this with all problems.) What techniques would allow me to figure this out? (Yes, I checked the prime factors up to $53$, but I got quite tired.)

Thursday 25 February 2016

How do I determine if the set of triples of all real numbers on $\mathbb{R}$ is a vector space.

I need to confirm whether the set of all triples of real numbers of the form $(0, y, z)$ where $y=z$, with the standard operations of addition and scalar multiplication on $\mathbb{R}^3$, is a vector space. Any clues will be greatly appreciated.
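
A sketch of the standard check (my addition): the set in question is $W=\{(0,t,t) : t\in\mathbb{R}\}$. It contains the zero vector, and it is closed under the standard operations, since $(0,t,t)+(0,s,s)=(0,t+s,t+s)$ and $c\,(0,t,t)=(0,ct,ct)$ again lie in $W$. Hence $W$ is a subspace of $\mathbb{R}^3$, and in particular a vector space.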

radicals - Prove that the square root of 3 is irrational

I'm trying to do this proof by contradiction. I know I have to use a lemma to establish that if $x$ is divisible by $3$, then $x^2$ is divisible by $3$. The lemma is the easy part. Any thoughts? How should I extend the proof for this to the square root of $6$?
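
A sketch of the standard argument (my addition): note that the direction actually needed is the converse of the stated lemma, namely that if $3 \mid x^2$ then $3 \mid x$. Suppose $\sqrt3 = p/q$ with $p,q$ coprime integers. Then $p^2 = 3q^2$, so $3 \mid p^2$, hence $3 \mid p$; writing $p = 3k$ gives $3k^2 = q^2$, so $3 \mid q$ as well, contradicting coprimality. For $\sqrt6$, the same argument runs with the prime $2$ (or $3$): $p^2 = 6q^2$ forces $2 \mid p$, and then $2k^2 = 3q^2$ forces $2 \mid q$.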

sequences and series - Showing $\sum_{n=1}^\infty\frac{1-(-1)^n}{n^2} = \frac{\pi^2}{4}$ from Fourier expansion



Find the Fourier series for the function:




$$f(x) =
\begin{cases}
0 & -\pi < x \leq 0 \\
\pi - x & 0 < x < \pi
\end{cases}$$



on $- \pi < x< \pi$. Hence show that:



$$\sum_{n=1}^\infty\frac{1-(-1)^n}{n^2} = \frac{\pi^2}{4}$$




My attempt:



Fourier series general form:



$$FS\left[f(t)\right] = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx) $$



Calculating $a_0$:



$$a_0 = \frac{1}{\pi} \int_0^\pi (\pi -x) dx = \frac{\pi}{2}$$




Calculating $a_n$:



$$a_n = \frac{1}{\pi} \int_0^\pi (\pi -x) \cos(nx) dx$$



$$=\frac{-1}{\pi} \left( \left[ \frac{x\sin(nx)}{n} \right]_0^\pi - \int_0^\pi \frac{\sin(nx)}{n}dx \right) $$



$$=\frac{1-(-1)^n}{\pi n^2}$$



Calculating $b_n$:




$$b_n = \frac{1}{\pi} \int_0^\pi (\pi -x) \sin(nx) dx$$



$$=\frac{-1}{\pi} \left( \left[ \frac{-(\pi-x)\cos(nx)}{n} \right]_0^\pi - \int_0^\pi \frac{\cos(nx)}{n}dx \right) $$



$$=\frac{1}{n}$$



giving the $FS\left[f(t)\right]$:



$$FS{[f(t)]} = \frac{\pi}{4} + \sum_{n=1}^\infty \frac{1-(-1)^n}{n^2} \cos(nx) + \sum_{n=1}^\infty \frac{\sin(nx)}{n} $$




However I am now stuck on showing:



$$\sum_{n=1}^\infty\frac{1-(-1)^n}{n^2} = \frac{\pi^2}{4}$$



Anyone have any idea? Or see where I might have gone wrong?
Kind Regards,


Answer



When you compute $a_n$ (which you typo-ed $a_0$), you left off the $1/\pi$. After sticking that back into the FS, and plugging in $0$ you have




$$f(0) =\frac{\pi}{4} +\frac{1}{\pi}\sum \frac{1-(-1)^n}{n^2}.$$



Well, this is a little wrong. The FS doesn't exactly equal the original function. The FS convergence theorem says that the series converges to the average of the left and right hand limits of the function at its discontinuities. So instead of $f(0)$, we should have $(0+\pi)/2$. It's easy to get your result from there.


real analysis - bijective measurable map existence



Does there exist bijective measurable maps between $\mathbb{R}$ and $\mathbb{R}^n$?



If so, could you give me an example of that?



Thank you.


Answer



A Polish space is a topological space that is homeomorphic to a complete separable metric space, for example $\Bbb R^n$ for any $n\in \Bbb N$. For the proof of the following fact, see e.g. here.





Any uncountable Polish space is Borel isomorphic (there exists a bimeasurable bijection) to the space of real numbers $\Bbb R$ with standard topology.



calculus - Trouble with trigonometric limits without derivatives



I am trying to find the following 2 limits as part of a series of 4 exercises following out lectures. We haven't really learned derivatives yet, so I can't just slap L'Hospital and be done with it.



The first two I could solve with Taylor Expansions, but I still need to solve:



$$\lim_{x\to 0} \frac{e^x -\sin x -1}{x^2}$$
Taylor Expansion of sin didn't work here, and I don't know how to deal with the $e^x$.



$$\lim_{x\to 0} \frac{x\sin x}{\cos x -1}$$

Given $-\sin^2 x = \cos^2 x -1$ I tried multiplying by $\frac{\cos x +1}{\cos x +1}$, which lead me to $\frac{-x\cos x \ -x}{\sin x}$ and then I don't know what else to do. (SOLVED) I suppose I could simplify this to $-x\cot x - \frac{x}{\sin x}$, which leads to -2.


Answer



You have



$$e^x=1+x+\frac{x^2}{2}(1+\epsilon_1(x))$$



and



$$\sin(x)=x+x^2\epsilon_2(x)$$




cause $x\mapsto \sin(x)$ is odd.



then the limit is $$\frac{1}{2}$$


Find the summation of a series that is both AP and GP

Is there any formula for the summation of a series which is both AP and GP? For example this series:

$$2^{n-1}\cdot 1 + 2^{n-2}\cdot 2 + 2^{n-3}\cdot 3 + \cdots$$

How do I compute the above series?
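
One standard way to handle such arithmetico-geometric sums (a sketch I am adding): multiply by the common ratio and subtract. With $S_n=\sum_{k=1}^{n} k\,2^{n-k}$, shifting the index in $2S_n$ and cancelling term by term gives
$$2S_n-S_n = 2^n+\sum_{k=1}^{n-1}2^{n-k}-n = 2^n + (2^n - 2) - n,$$
so $S_n = 2^{n+1}-n-2$ (check: $n=3$ gives $4+4+3=11=2^4-3-2$).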

calculus - Let $a_n = \int\limits_{0}^{n} e^{-x^4} dx$. Does $\{ a_n \}_{n \rightarrow \infty}$ converge?



Let $a_n = \int\limits_{0}^{n} e^{-x^4} dx$. Does $\{ a_n \}_{n \rightarrow \infty}$ converge?



$\{ a_n \} =\{ \int\limits_{0}^{1} e^{-x^4} dx, \int\limits_{0}^{2} e^{-x^4} dx, ..., \int\limits_{0}^{\infty} e^{-x^4} dx \}$



So we need only check that the improper integral




$\int\limits_{0}^{\infty} e^{-x^4} dx$ converges



By using Wolfram Alpha,



$\int\limits_{0}^{\infty} e^{-x^4} dx = \Gamma \left( \frac54 \right) \approx 0.906402$



where $\Gamma$ is the Gamma function



Therefore $\{ a_n \}$ converges, to $\Gamma \left( \frac54 \right)$.




But what if I can't use Wolfram Alpha? How can I solve for this integration by hand? Sorry if this makes me look stupid, I don't recall learning any techniques from Calculus that can help me solve this integration.



Thanks in advance!


Answer



We do not need an explicit expression to show that an improper integral converges.



The sequence $(a_n)$ is obviously increasing. It is bounded above by $\int_0^1 e^{-x^4}\,dx+\int_1^\infty e^{-x}\,dx$. To be explicit, the first integral is less than $1$, and the second is $e^{-1}$, so the sequence $(a_n)$ is bounded above by $1+e^{-1}$.



Any increasing sequence which is bounded above converges.
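
If one does want a numerical value rather than just convergence, quadrature confirms the picture (my addition, assuming SciPy is available):

from math import exp, gamma, e
from scipy.integrate import quad

val, err = quad(lambda x: exp(-x**4), 0, float('inf'))
print(val, gamma(1.25), 1 + 1/e)  # ~0.9064, Gamma(5/4) ~ 0.9064, bound ~1.3679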



elementary number theory - Prove by induction that $2^{2n} – 1$ is divisible by $3$ whenever n is a positive integer.

I am confused as to how to solve this question.



For the base case $n=1$: $(2^{2(1)} - 1)/3 = 3/3 = 1$, so the base case holds.




My induction hypothesis is:
Assume $2^{2k} -1$ is divisible by $3$ when $k$ is a positive integer



So, $2^{2k} -1 = 3m$



$2^{2k} = 3m+1$



after this, I'm not quite sure where to go. Can anyone provide any hints?
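
A hint continuing exactly where you stopped (my addition): from $2^{2k} = 3m+1$,
$$2^{2(k+1)} - 1 = 4\cdot 2^{2k} - 1 = 4(3m+1) - 1 = 12m + 3 = 3(4m+1),$$
which is divisible by $3$, completing the induction step.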

sequences and series - Show that $\left(1+\dfrac{1}{n}\right)^n$ is monotonically increasing




Show that $U_n:=\left(1+\dfrac{1}{n}\right)^n$, $n\in\Bbb N$, defines a monotonically increasing sequence.




I must show that $U_{n+1}-U_n\geq0$, i.e. $$\left(1+\dfrac{1}{n+1}\right)^{n+1}-\left(1+\dfrac{1}{n}\right)^n\geq0.$$



I am trying to go ahead of this step.



Answer



$$x_n=\bigg(1+\frac{1}{n}\bigg)^n\longrightarrow x_{n+1}=\bigg(1+\frac{1}{n+1}\bigg)^{n+1}$$
$$\frac{x_{n+1}}{x_{n}}=\frac{(1+\frac{1}{n+1})^{n+1}}{(1+\frac{1}{n})^{n}}=\bigg(\frac{1+\frac{1}{n+1}}{1+\frac{1}{n}}\bigg)^n\bigg(1+\frac{1}{n+1}\bigg)=\bigg(\frac{n(n+2)}{(n+1)^2}\bigg)^n\bigg(1+\frac{1}{n+1}\bigg)$$
$$=\bigg(1-\frac{1}{(n+1)^2}\bigg)^n\bigg(1+\frac{1}{n+1}\bigg)≥\bigg(1-\frac{n}{(n+1)^2}\bigg)\bigg(1+\frac{1}{n+1}\bigg)$$
$$≥^*\frac{1}{1+\frac{1}{n+1}}\bigg(1+\frac{1}{n+1}\bigg)≥1$$
It means that your sequence is increasing.



≥*: $$(n+2)(n^2+n+1)=(n+2)\bigg((n+1)^2-n\bigg)≥(n+1)^3$$


Please explain what's wrong with the proof that every group element is its own inverse.

What is wrong with my proof here?



Proof:



Let $a, b$ be elements of a group and let $aa = b$.



Through manipulation, we see that
$$a = ba^{-1}$$
$$b^{-1}a = a^{-1}$$

$$b^{-1}aa^{-1} = e$$
$$b^{-1} = e$$



We of course know that
$bb^{-1} = e$
And from above, we see that
$be = e$
or
$b = e$.
And since $aa = b$, we can see that $aa = e$ and that $a=a^{-1}$.

sequences and series - Explicit bijection between $\mathbb{N}$ and a dense subset of some interval in $\mathbb{R}$




I am working on a way of expressing some kinds of countably infinite sums in terms of easier to evaluate integrals. Many kinds of double summations are handled quite easily, however I am finding it more difficult to express using the same techniques any "single summation," such as the standard sum $\sum_{n=0}^\infty$.



It would suffice to find any kind of "nice" bijection between the naturals and some dense subset of any interval in $\mathbb{R}$. As this number will be a portion of a term in an infinite series, the "nicer" the bijection the better.



For example, Thomas Andrews (Produce an explicit bijection between rationals and naturals?) illustrated a relatively simple-to-write bijection, but it is not simplifiable in any significant way; any term in my series based on that function would of near necessity be described in terms of the prime factorization of the input, which relates to so few infinite sums as to be virtually worthless in this case. Similarly, there are many recursive results scattered throughout math.stackexchange which do not suffice.



Without the constraint that the dense subset be $\mathbb{Q}$, my hope is that this problem is tractable, and any insights against or in favor of its tractability would be appreciated.


Answer



There are other posts showing that $\{\sin (n)\}$ is dense in $[-1,1]$.

Though I don't know if it is bijective.


Wednesday 24 February 2016

proof for a continuous distribution function where a single point has 0 probability of occurring



[Image: the proof in question, using the continuity property of probabilities]



Here is a proof, using the continuity property of probabilities (a property which I understand), that for a continuous distribution function a single point has 0 probability of occurring.
I understand this proof fully until the last line, where I'm not completely sure why it follows. Is it because $F_X(a-1/n)$ converges to $F_X(a)$ as $n$ tends to infinity, and hence the expression is equal to $0$? (How is $F_X$ being continuous connected to this deduction?)


Answer




Your guess about the reason is correct. Remember that a function $f : \mathbb{R} \to \mathbb{R}$ is continuous at a point $a$ if and only if, for every sequence $(x_n)_{n=1}^{\infty}$ such that $\lim_{n\to\infty}x_n = a$, we have $\lim_{n\to\infty}f(x_n)=f(a)$. In particular, if $F_X$ is continuous, then since $\lim_{n\to\infty}(a- 1/n) = a$, we get $\lim_{n\to\infty}F_X(a-1/n) = F_X(a)$.


probability - Expectation of a continuous random variable explained in terms of the CDF



Problem:



Let $F_X(x)$ be the CDF of a continuous random variable $X$. Show that:




$$E[X]= \int_0^\infty(1-F_X(x)) \, dx -\int_{-\infty}^0F_X(x) \, dx.$$



Attempt:



A comprehensible explanation of the intuition regarding the expectation $E[X]$ and the CDF for a non-negative random variable is found here: Intuition behind using complementary CDF to compute expectation for nonnegative random variables.



However, I am still at a loss about how to show the general case when $-\infty < x < \infty$.



I am again solving this as an exercise in my probability course and any help is greatly appreciated!



Answer



You can also see it by interchanging the order of integrals. We have



\begin{eqnarray*}
\int_{0}^{\infty}(1-F_{X}(x))dx=\int_{0}^{\infty}P(X>x)dx & = & \int_{0}^{\infty}\int_{x}^{\infty}dF_{X}(t)\,dx\\
& = & \int_{0}^{\infty}\int_{0}^{t}dx\,dF_{X}(t)\\
& = & \int_{0}^{\infty}t\;dF_{X}(t)
\end{eqnarray*}
and,
\begin{eqnarray*}
\int_{-\infty}^{0}F_{X}(x)dx=\int_{-\infty}^{0}P(X\leq x)dx & = & \int_{-\infty}^{0}\int_{-\infty}^{x}dF_{X}(t)\,dx\\
& = & \int_{-\infty}^{0}\int_{t}^{0}dx\,dF_{X}(t)\\
& = & \int_{-\infty}^{0}-t\;dF_{X}(t)
\end{eqnarray*}
Since,
$$
E[X]=\int_{-\infty}^{0}t\;dF_{X}(t)+\int_{0}^{\infty}t\;dF_{X}(t)
$$
the result follows.


For writing up the $\epsilon$-$\delta$ proof, what is the order of $|f(x)-L|$ and $|x-a|$ at the end?

To elaborate, do you...




  1. $0<|x-a|<\delta$ and then add to that so that you build up to $|f(x)-L|$ and then show that this is less than $\epsilon$ by using the $\epsilon-\delta$ relationship you found earlier.

  2. $|f(x)-L|$ and then break it down so you can substitute $|x-a|<\delta$ and then show that the former is less than $\epsilon$ by using the $\epsilon-\delta$ relationship you found earlier.




In the way the final part of the definition is written ("if $0<|x-a|<\delta$, then $|f(x)-L|<\epsilon$"), does it matter which choice you use, or it just preference? I've seen books do both ways, but from what I've seen, the second option is more common.



In terms of "If P, then Q" which would you want to work with first - P, or Q?

Find the real part of a given complex number

Let $n \in \mathbb{N}^*$ and $k \in \{0, 1, 2, ..., n - 1\}$.
$$z = \left( \cot \frac{(2k + 1)\pi}{2n} + i \right)^n$$



Find the real part of $z$.



I think we need to transform $z$ into something like $\cos \phi + i \sin \phi$, so we can compute its n-th power using De Moivre's rule. However, I haven't figured out any way to do this yet.




Thank you in advance!
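
One way to carry out the transformation described (a sketch I am adding): with $\theta = \frac{(2k+1)\pi}{2n}$, note $\sin\theta > 0$ and
$$\cot\theta + i = \frac{\cos\theta + i\sin\theta}{\sin\theta},$$
so by De Moivre $z = \frac{\cos n\theta + i\sin n\theta}{\sin^n\theta}$. Since $n\theta = \frac{(2k+1)\pi}{2}$ is an odd multiple of $\frac{\pi}{2}$, $\cos n\theta = 0$, so the real part of $z$ is $0$.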

Tuesday 23 February 2016

elementary number theory - The modular n-th root (mod p*q)



I am interested in the solution of the following modular equation. Is the solution unique? If not, how difficult do find more than one solutions?




$$x^n \equiv a \; \bmod (p\cdot q)$$
where $p$ and $q$ are large primes.



Actually, I am investigating the question that is it possible (how difficult) to make more than one valid RSA signatures for a given message $a$ and a public key $n$ (with the private key $d$ known). Apparently signing $a$ with $d$ gives one valid signature. My question is how difficult to find another one. Thanks.


Answer



The solution is unique in the RSA setting where $d \equiv n^{-1} \pmod {\varphi(pq)}\;$ is assumed to exist, if $\gcd(a,pq)=1.\;$ But the case $\gcd(a,pq)\ne 1$ must be avoided anyway, because otherwise you can factor $pq$ and your specific RSA system is broken (in practice, with large primes, this is very unlikely, and I do not know whether actual implementations check for this situation).



By Euler's theorem the unique solution is
$$x \equiv a^{d} \equiv a^{\,n^{-1} \bmod \varphi(pq)} \pmod {pq}$$


real analysis - Classification of all functions $f$ satisfying $f(x)=f(x^2)$.



I know that for the functional identity $f(a+b)=f(a)+f(b)$, $y=kx$ is definitely a solution. Other solutions may be constructed by treating the real numbers as a vector space over the rational numbers; these are pathological.



How can one construct all pathological functions satisfying the identity $f(x)=f(x^2)$? I know that $f$ must be constant if it's continuous, but what are the other cases?


Answer



Edited 4 Jan to incorporate details I skipped, pointed out by user517969.



Let $\phi(x)$ and $\psi(x)$ be periodic with period $\log 2$, and let $a,b\in\mathbb R$. Then $$f(x)=\begin{cases}a & \text{if }x=0\\\phi(\log(-\log|x|))&0<|x|<1\\b&|x|=1\\\psi(\log(\log|x|))&|x|>1\end{cases}$$
satisfies your functional equation, and can be as pathological (or as smooth) as $\phi$ and $\psi$ are. Because $f(-x)=f(x)$ there is no loss of generality in using the absolute value signs in the innermost logarithm. I think this is the most general form of solution.



analysis - Where does Jacobi's accessory equation come from?

I'm reading Charles Fox's An Introduction to the Calculus of Variations and in section 2.4 he just suddenly introduces Jacobi's accessory equation and I don't understand where it's coming from.



Jacobi's accessory equation (which oddly doesn't have a Wikipedia page) is $$\left[\frac{\partial^2 F(x,s,s')}{\partial s^2}-\frac d{dx}\frac{\partial^2 F(x,s,s')}{\partial s\partial s'}\right]u-\frac d{dx}\left(\frac{\partial^2 F(x,s,s')}{\partial s'^2}\frac {du}{dx}\right)=0$$ and has something to do with the second variation of $\int_a^b F(x,y,y')dx$ evaluated near a stationary path $y=s(x)$.



Could someone either derive (or give the main idea of the derivation) the equation or suggest a good source that does?



P.S. I checked in Gelfand and Fomin's book which I'm not reading but have laying around. The problem with their derivation is that it's in the middle of their book (as opposed to near the beginning of Fox's) and thus seems to make use of stuff I haven't gotten to yet in Fox's book.







Here's the context in Fox's book:



Note that for brevity the notation $F_{00} := \frac{\partial^2 F}{\partial s^2}$, $F_{01} := \frac{\partial^2 F}{\partial s\partial s'}$, $F_{11} := \frac{\partial^2 F}{\partial s'^2}$ is used in the following.



We just started looking at the second variation and trying to derive the conditions for extremizing the functional subject to weak variation. In the previous section we derived that if $t(a)=t(b)=0$, then $$\int_a^b \left(t^2F_{00} +2tt'F_{01}+t'^2F_{11}\right)dx = \int_a^b\left[t^2F_{00}-t^2\frac d{dx}(F_{01})-t\frac d{dx}(t'F_{11})\right]dx$$



Then this section starts off with:





On solving the [Euler-Lagrange equation], the equation of the extremal $y=s(x)$ which passes through the given points $A$ and $B$ can be determined.



Thus the quantities $F_{00}$, $F_{01}$, $F_{11}$, and $\frac d{dx} F_{01}$ can all be expressed in terms of $x$ and the differential equation $$\left[F_{00}-\frac d{dx}F_{01}\right]u-\frac d{dx}\left(F_{11}\frac {du}{dx}\right)=0 \tag{1}$$



can then be solved for $u$ as a function of $x$. This is an ordinary linear differential equation of the second order and is known as the subsidiary or Jacobi's equation or, more frequently, as the accessory equation.



On taking $x$ to be independent and $t(=t(x))$ the dependent variable in the integral $I_2 [= \int_a^b \left(t^2F_{00} +2tt'F_{01}+t'^2F_{11}\right)dx]$, it is easily seen that $(1)$ is the [Euler-Lagrange equation] for minimizing $I_2$ with $t$ replaced by $u$.




Note that other than that last line he doesn't explain what the function $u$ is.

probability - Expected number of rolls until all die faces have appeared?

Suppose you have a die. On average, how many rolls does it take for all the faces (1 through 6) to have appeared?




Also, what kind of distribution is this?



(I thought of this question when watching rain fall on my driveway; I wondered how long it takes for rain to completely cover the whole driveway.)
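
This is the classic coupon-collector setting: after $i$ distinct faces have appeared, each roll shows a new face with probability $\frac{6-i}{6}$, so the waits are independent geometric random variables and the expected total is $6\left(\frac16+\frac15+\frac14+\frac13+\frac12+1\right) = 14.7$. A quick simulation sketch (my addition):

import random

def rolls_until_all_faces(faces=6):
    seen, count = set(), 0
    while len(seen) < faces:
        seen.add(random.randint(1, faces))
        count += 1
    return count

trials = 100_000
print(sum(rolls_until_all_faces() for _ in range(trials)) / trials)  # ~14.7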

calculus - Methods for Finding Limits as x Approaches Infinity








I have a question about the best method to find the limit of a function as it approaches infinity.



The limit as $x$ approaches infinity of $\frac{x^3+5x}{2x^3-x^2+4}$ is $\frac12$.



I found this just by taking the terms with the largest powers of $x$ ($x^3/2x^3$) and simplifying. I proceeded under the assumption that if $x$ gets large it will subsume the lesser powers of $x$ and the constants, so in a way these lesser terms are not necessary.



The book I am using (and other posts on this site), however, provide a different method (i.e., divide the numerator and the denominator by the largest value of x, and then use the limit laws to find the result). This method is obviously much more cumbersome, but I am not sure if any rigor is gained by it.



My question is: Is there a case where the first method will produce a wrong answer? Put another way, will the limit as $x$ approaches infinity of $\frac{ax^2+bx+c}{dx^2+ex+f}$ always be equal to the limit as $x$ approaches infinity of $\frac{ax^2}{dx^2}$?

Monday 22 February 2016

number theory - Why aren't logarithms defined for negative $x$?



Given that $y = \log_b{x}$ holds if and only if $b^y = x$ (and $x$ and $b$ are positive, and $b$ is not equal to $1$)[1], why aren't logarithms defined for negative numbers?



Why can't $b$ be negative? Take $(-2)^3 = -8$ for example. Turning that into a logarithm, we get $3 = \log_{(-2)}{(-8)}$ which is an invalid equation as $b$ and $x$ are not positive! Why is this?


Answer



In complex analysis, $x$ can be negative. For example $e^{i\pi} = -1$, so $\ln{(-1)} = i\pi$.




I hadn't seen a log with a negative base, but I thought one could define it with the normal change of base formula: $\log_{b}{x} = \frac{\ln{x}}{\ln{b}}$. However, this might be inconsistent; at the very least, it doesn't give 3:



$$\log_{(-2)}{(-8)} = \frac{\ln{(-8)}}{\ln{(-2)}} = \frac{\ln{8} + i\pi}{\ln{2} + i\pi} = \frac{3 \ln{2} + 3i\pi - 2i\pi}{\ln{2} + i\pi} = 3 - \frac{2i\pi}{\ln{2} + i\pi} \ne 3$$



This is because the complex log has a branch cut in it. For example: $e^{3i\pi} = -1$, but $\ln{(-1)} = i\pi$, not $3i\pi$. The cut is made so that $\mathrm{Im}(\ln{z}) \in \left(-\pi,\pi\right]$.


limits - Prove convergence of the sequence given by $a_{1}=1$ and $a_{n+1}= \frac{1}{a_1+a_2+\ldots+a_n}$



For sequence given by $a_{1}=1$ and $a_{n+1}= \frac{1}{a_1+a_2+\ldots+a_n}$ I have to prove that it converges to some number and find this number.



I tried to show that it's monotonic by calculating
$$
\frac{a_{n+1}}{a_{n}} = \frac{1}{a_{n}(a_1+a_2+\ldots+a_n)}
$$

but I cannot say anything about the denominator. How can I try to find its limit?


Answer



Let $s_n = \sum\limits_{k=1}^n a_k$. We can rewrite the recurrence relation as



$$s_{n+1} - s_n = a_{n+1} = \frac{1}{s_n} \quad\implies s_{n+1} = s_n + \frac{1}{s_n}$$



This implies
$$s_{n+1}^2 = s_n^2 + 2 + \frac{1}{s_n^2} \ge s_n^2 + 2$$




So for all $n > 1$, we have



$$s_n^2 = s_1^2 + \sum_{k=1}^{n-1} (s_{k+1}^2 - s_k^2) \ge 1 + \sum_{k=1}^{n-1} 2 = 2n - 1$$



Since all $a_n$ is clearly positive, we have $\displaystyle\;0 < a_n = \frac{1}{s_{n-1}} \le \frac{1}{\sqrt{2n-3}}$.



By squeezing, $a_n$ converges to $0$ as $n\to\infty$.
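
A quick numerical illustration (my addition): iterating $s_{n+1} = s_n + 1/s_n$ shows $s_n \sim \sqrt{2n}$, so $a_n = 1/s_{n-1} \to 0$ as derived above.

s, n = 1.0, 1
while n < 10**6:
    s += 1.0 / s
    n += 1
print(s / (2 * n) ** 0.5)  # ~1.000002, consistent with s_n^2 ~ 2n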


real analysis - Proof of this inequality



I have a finite sequence of positive numbers $(a_i)_1^n$ for which:




  1. $a_1>a_n$,

  2. $a_j\geq a_{j+1}\geq\cdots\geq a_n$ for some $j\in\{2,\ldots,n-1\}$,


  3. $a_1>a_2>\cdots>a_{j-1}$,

  4. $a_j\geq a_i$ for all $1\leq i\leq n$.



I conjecture that:



$$(a_n+a_2+\cdots+a_j)\left(\sum_{i=j+1}^{n-1}{\frac{a_i^2}{(a_1+\cdots+a_i)(a_n+a_2+\cdots + a_i)}}+\frac{a_1+a_n}{a_1+\cdots+a_n}\right) \geq (a_n+a_1+\cdots+a_j)\left(\sum_{i=j+1}^{n-1}{\frac{a_i^2}{(a_1+\cdots+a_i)(a_n+a_1+\cdots + a_i)}}+\frac{a_n}{a_1+\cdots+a_n}\right).$$



I have an unappealing brute-force proof when $n\in\{3,4,5\}$ but I can't prove it in general. I have tried calculus to no avail, and it doesn't seem like a good fit for any of the standard inequalities. Does this look even remotely similar to anything already done? I appreciate that it's a rather ugly inequality, but some suggestions would be greatly appreciated!




The proof when $n=3$ is outlined below. By condition 2. we know that $j=2$ so that



\begin{align*}
(a_3+a_2)\left(\frac{a_1+a_3}{a_1+a_2+a_3}\right)-(a_3+a_1+a_2)\left(\frac{a_3}{a_1+a_2+a_3}\right)=\frac{a_2(a_1-a_3)}{a_1+a_2+a_3}>0
\end{align*}
where we have used condition 1, which states that $a_1>a_3$. The proofs when $n=4$ and $n=5$ are likewise, only uglier. When $n=5$ the trick is to find common denominators, then simply pair each negative term with some larger positive term. It's horrible, but it works. Perhaps a general proof would involve a similar argument but more formalised?



If a full proof can't be found then I'd be happy for a proof in the special case where $j=2$ and $j=3$.


Answer



Your conjecture fails for $n \ge 7$ (and maybe for some smaller $n$). The following results are from programming your inequality in Sage and testing a variety of patterned sequences.




1. First family of counterexamples



Let $S(a,b) = [a, a-1, a-2, ..., 1, b, b-1, b-2,..., 1]$, of length $a+b$, for positive integers $a,b$.



For any $a \ge 2$, there is some $b^* \gt a$ such that for any $b \ge b^*$, $S(a, b)$ is a counterexample. (The quantity $\ \text{LHS}-\text{RHS}\ $ is a decreasing function of $b$, and falls below $0$ at $b=b^*$.)



Such counterexamples include $S(2, b\ge 11), S(3, b\ge 14), S(4, b\ge 16), S(5,b\ge 19)$, and so on. (I haven't determined a formula for $b^*$.)



The smallest counterexample of this form appears to be

$$S(2,11) = [2, 1, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]$$ for which
$$\text{LHS} = \frac{79582008868974649241}{15735265132809166560} = 5.0575... < 5.0608... = \frac{1185342437701}{234217526928} = \text{RHS} $$



2. Second (shorter) family of counterexamples



Another family of counterexamples is $[2, b, b, b, b, b, 1]$ with $b \ge 5$. The smallest counterexample of this form appears to be
$[2, 5, 5, 5, 5, 5, 1]$, for which
$$\text{LHS} = \frac{3515219}{1225224} = 2.869... < 2.881... = \frac{30446902}{10567557} = \text{RHS} $$






Here's the core of the program (note the indexing adjustments due to Python lists being $0$-based):




# Sage code: LHS and RHS of the conjectured inequality, for a sequence L
# (0-based) with split index j.
def lhs(L,j):
    n = len(L)
    tot = (L[0] + L[n-1])/sum(L)
    for i in [j+1..n-2]: tot += L[i]^2 / ( sum(L[0:i+1])*( L[n-1] + sum(L[1:i+1]) ) )
    return (L[n-1] + sum(L[1:j+1]))*tot

def rhs(L,j):
    n = len(L)
    tot = L[n-1]/sum(L)
    for i in [j+1..n-2]: tot += L[i]^2 / ( sum(L[0:i+1])*( L[n-1] + sum(L[0:i+1]) ) )
    return (L[n-1] + sum(L[0:j+1]))*tot

for b in [3..8]:
    L = [2,b,b,b,b,b,1]; left = lhs(L,1); right = rhs(L,1)
    print b, left.n(), right.n(), (left-right).n()

> 3 1.96695167577521 1.92603768780239 0.0409139879728115
> 4 2.41522132314971 2.40469223123685 0.0105290919128582
> 5 2.86904190580661 2.88116752055371 -0.0121256147470981
> 6 3.32586148147269 3.35536963036963 -0.0295081488969435
> 7 3.78448484495198 3.82769776396051 -0.0432129190085339
> 8 4.24427752155089 4.29855086580553 -0.0542733442546420

calculus - Integration by substitution notation question




Often with integration by substitution I see (and use) the notation $ x \to \frac{\pi}{2} - x $, for the simple reason that I don't have to rename the variable that I am integrating with respect to, but recently this notation has gotten me confused, in the sense that I'm not sure I properly understand it.



For example, with the integral $$ \int^{2\pi}_{0} \frac{x\sin x}{3+\sin^2 x} \ dx $$



I proceeded as follows.



First I will note, that previously I have shown that $\int^{\pi}_{0} \frac{x \sin x}{3+\sin^2 x} \ dx =\frac{\pi}{2}\int^{\pi}_{0} \frac{\sin x}{3+\sin^2 x} \ dx = \frac{\pi}{4}\ln 3 $



Returning to the integral

$$ \int^{2\pi}_{0} \frac{x\sin x}{3+\sin^2 x} \ dx = \int^{\pi}_{0} \frac{x\sin x}{3+\sin^2 x} \ dx + \int^{2\pi}_{\pi} \frac{x\sin x}{3+\sin^2 x} \ dx $$



Now in the second integral substitute $ x \to x - \pi $ then we have



$$\int^{2\pi}_{\pi} \frac{x\sin x}{3+\sin^2 x} \ dx = \int^{\pi}_{0} \frac{(x - \pi)(-\sin x)}{3+\sin^2 x} \ dx= -\int^{\pi}_{0} \frac{x\sin x}{3+\sin^2 x}\ dx + \int^{\pi}_{0} \frac{\pi \sin x}{3+\sin^2 x}\ dx $$



This means that



$$ \int^{2\pi}_{0} \frac{x\sin x}{3+\sin^2 x} \ dx = \int^{\pi}_{0} \frac{\pi \sin x}{3+\sin^2 x}\ dx = \frac{\pi}{2} \ln 3 $$ by the initial result.




This is incorrect; the answer should be $ - \frac{\pi}{2} \ln 3 $, but I don't see my error. I think an error may have arisen with my substitution, so I'd appreciate it if someone could point out where the error is and why it is wrong.



Also, I wasn't sure how to entitle this question, but I hope that the title I've chosen is appropriate.


Answer



When you make the substitution $ x \to x - \pi $ you should rather have



$$\int^{2\pi}_{\pi} \frac{x\sin x}{3+\sin^2 x} \ dx = \int^{\pi}_{0} \frac{\color{red}{(x+ \pi)}(-\sin x)}{3+\sin^2 x} \ dx= -\int^{\pi}_{0} \frac{x\sin x}{3+\sin^2 x}\ dx \color{red}{- }\int^{\pi}_{0} \frac{\pi \sin x}{3+\sin^2 x}\ dx $$
which gives the correct answer.
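
A numerical check of the corrected computation (my addition, assuming SciPy is available):

from math import sin, log, pi
from scipy.integrate import quad

val, err = quad(lambda x: x * sin(x) / (3 + sin(x)**2), 0, 2*pi)
print(val, -pi/2 * log(3))  # both ~ -1.7256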


Sunday 21 February 2016

real analysis - Create disjoint Union from sequence on Algebra




I am trying to solve the following,



Problem



Let $\mathcal{A}$ be an algebra,



Given any sequence $\{A_n\}_{n \in \mathbb{N}} \subseteq \mathcal{A}$ with $\cup_{n = 1}^{\infty} A_n = A \in \mathcal{A}$, it is possible to construct a pairwise disjoint sequence $\{B_n\}_{n \in \mathbb{N}} \subseteq \mathcal{A}$ with $B_n \subseteq A_n$ and $A = \sqcup_{n = 1}^{\infty} B_n$, where $\sqcup$ denotes a disjoint union.



My Solution




I use the following construction,



$B_1 = A_1$, $B_n = A_n - (\cup_{j=1}^{n-1}A_j) \ \forall n \geq 2$.



Since $\mathcal{A}$ is an algebra we have that $B_n \in \mathcal{A} \ \forall n$.



By construction $B_n \subseteq A_n$ for each $n$.



If we let $i > j$, $$\begin{align*} B_i \cap B_j & = (A_i - (\cup_{n = 1}^{i - 1} A_n)) \cap (A_j - \cup_{n = 1}^{j-1} A_n) \\
& = (A_i \cap (\cap_{n = 1}^{i - 1} A_n^c)) \cap (A_j \cap (\cap_{n = 1}^{j-1} A_n^c)) \\
& \subseteq (A_i \cap (\cap_{n = 1}^{i - 1} A_n^c)) \cap (A_j) \\
& = A_i \cap A_j \cap A_j^c \cap (\cap_{n = 1}^{j - 1} A_n^c) \cap (\cap_{n=j+1}^{i-1} A_n^c) = \emptyset \end{align*}$$



Now I want to show that $\sqcup_{n = 1}^{\infty} B_n = \cup_{n = 1}^{ \infty} A_n$.



I can show $$\begin{align*}B_1 \sqcup B_2 & = A_1 \sqcup (A_2 - A_1) \\
& = A_1 \sqcup (A_2 \cap A_1^c) \\ & = (A_1 \sqcup A_2) \cap (A_1 \sqcup A_1^c) \\ &= A_1 \cup A_2 \end{align*}$$



From here I could go on by induction to show that it holds for all $n \in \mathbb{N}$.




However, if I understand induction correctly, this argument is not valid when I want to prove something for a countable union.



Question



How can I show directly (without induction) that $\sqcup_{n = 1}^{\infty} B_n = \cup_{n = 1}^{ \infty} A_n$ in this case ?


Answer



Based on your construction, we get
$B_1\subset A_1$ and for $n\geq 2$, $B_n\subset A_n$. Hence,




$$\bigcup_{n=1}^{+\infty}B_n\subset \bigcup_{n=1}^{+\infty}A_n.$$



Let $$x\in\bigcup_{n=1}^{+\infty}A_n.$$ Then there exists $p\in\Bbb N$ such that $x\in A_p$. Define
$$S=\{m\in\Bbb N:x\in A_m\}.$$



Then $p\in S$. Thus, $\emptyset\neq S\subset \Bbb N$. By the Well Ordering Principle, $S$ has a least element, say $k$. Then $k\geq 1$. If $k=1$, then $x\in A_1$, and so $x\in B_1\subset \bigcup_{n=1}^{+\infty}B_n.$ So, suppose that $k\geq 2$. We have
$$x\notin A_1,\quad x\notin A_2,\quad \dots\quad,x\notin A_{k-1},\quad \text{and}\quad x\in A_k.$$
Thus,
$$x\notin \bigcup_{i=1}^{k-1}A_i.$$
Hence,

$$x\in A_k\smallsetminus\bigcup_{i=1}^{k-1}A_i=B_k\subset\bigcup_{n=1}^{+\infty}B_n.$$
Thus,
$$x\in\bigcup_{n=1}^{+\infty}B_n.$$ Hence,
$$\bigcup_{n=1}^{+\infty}A_n\subset\bigcup_{n=1}^{+\infty}B_n.$$
Equality follows.
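To make the construction concrete, here is a small finite illustration (a sketch of mine with made-up sets, not part of the original answer):

    # B_1 = A_1, B_n = A_n minus (A_1 u ... u A_{n-1}), on hypothetical finite sets
    A = [{1, 2, 3}, {2, 3, 4}, {4, 5}, {1, 5, 6}]

    B = []
    seen = set()
    for An in A:
        B.append(An - seen)   # remove everything already covered
        seen |= An

    # pairwise disjoint, with the same union
    assert all(B[i].isdisjoint(B[j]) for i in range(len(B)) for j in range(i))
    assert set().union(*B) == set().union(*A)
    print(B)   # [{1, 2, 3}, {4}, {5}, {6}]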


algebra precalculus - Absolute value fraction problem



Is there an easier way to solve the following? (I am aware of my poor translation, as I am not familiar with English terminology; however, I think you will understand.) "Reduce the following fraction:"



$\dfrac{x | x-1 |-x+1}{x^2-2|x|+1}$



besides working through numerous cases for the absolute values (in this case, 4 cases);
in this case I'd set:
i.) $x - 1 \ge 0$, $\space x \le 0$




ii.) $x - 1 \le 0$, $\space x \ge 0$



iii.) $x - 1 \ge 0$, $\space x \ge 0$



iv.) $x - 1 \le 0$, $\space x \le 0$



Then, for each pair, take the intersection of the two solution sets...



The second thing I'm not certain about is the "$\ge$" and "$\le$" signs. Do I have to include the equality ("$=$") in both inequalities of the pair, or use strict "$>$" / "$<$"? In other words, should each case look like:




i.) $x - 1 \ge 0$, $\space x \le 0$



ii.) $x - 1 \le 0$, $\space x \ge 0$



or



i.) $x - 1 \ge 0$, $\space x < 0$



ii.) $x - 1 \le 0$, $\space x > 0$




Please explain when to put the "$\le$" and "$\ge$".



EDIT:
Basically, what I wanted to know is whether there is any faster, more automated way of solving the question. The answers you provided matched my opinion and practice. The thing is, I do know a faster way of solving these: it uses a so-called "table" (with 'null points', something like RecklessReckoner advised); if someone is interested, I'll post a photo of it. It makes solving these so automated that you don't have to think about the actual problem or understand it, which is why I tried to solve this one traditionally, to understand what happens behind the scenes. Once again, thanks for your answers.


Answer



It takes thinking. When you split into cases, generally you have something like $|x-a|$, which will be $x-a$ if $x \ge a$ and $a-x$ if $a \ge x$. If $x=a$ it doesn't matter which case you use, so generally you include the equality in whichever case is convenient, because you don't want to exclude a potential solution. The rule of thumb is: unless you know the values aren't equal, include the equals sign. Then you need to check the solutions you find against the original equation, because you may have introduced extraneous solutions.
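For what it's worth, here is a quick numeric check (my own sketch, not part of the original answer) of the case-by-case simplification one obtains by hand:

    def original(x):
        return (x * abs(x - 1) - x + 1) / (x**2 - 2 * abs(x) + 1)

    def simplified(x):
        if x >= 1:             # |x-1| = x-1, |x| = x: (x-1)^2 / (x-1)^2
            return 1.0
        if 0 <= x < 1:         # |x-1| = 1-x, |x| = x: (1-x^2) / (x-1)^2
            return (1 + x) / (1 - x)
        return (1 - x) / (1 + x)   # x < 0: denominator is (|x|-1)^2 = (x+1)^2

    for x in [-3.0, -0.5, 0.25, 0.75, 2.0, 5.0]:   # avoid x = 1 and x = -1
        assert abs(original(x) - simplified(x)) < 1e-12
    print("all sample points agree")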


Convergence and sum of an infinite series: $\sum_{i=1}^{\infty}\frac{6}{24 i-4 i^2-35}$

Determine whether the following series is convergent or divergent. If convergent, find the sum.
$$\sum_{i=1}^{\infty}\frac{6}{24 i-4 i^2-35}$$



Since the limit of the terms is zero, the divergence test is inconclusive here (it cannot show divergence).




How do i prove that the series is convergent, and futhermore, find the sum?



I rewrote the series (using partial fraction decomposition) as:



$$\sum_{i=1}^{\infty}\frac{6}{24 i-4 i^2-35}=\sum_{i=1}^{\infty}\left(\frac{3}{2i-5}-\frac{3}{2i-7}\right)$$
(using $24 i-4 i^2-35 = -(2i-5)(2i-7)$)



But I don't know what to do from here.
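The decomposition telescopes; as a quick numeric sketch (my own addition, assuming the partial fractions above), the partial sum up to $N$ collapses to $\frac{3}{2N-5}+\frac{3}{5}$, so the sum is $3/5$:

    from fractions import Fraction

    total = Fraction(0)
    N = 2000
    for i in range(1, N + 1):
        total += Fraction(6, 24*i - 4*i**2 - 35)

    print(float(total))                                  # ~0.60075
    print(float(Fraction(3, 2*N - 5) + Fraction(3, 5)))  # same value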

algebra precalculus - The number of solutions of the equation $|\cot x|=\cot x +\frac{1}{\sin x}$ where $0

$$|\tan x| = \frac{\sin x}{1+\cos x}$$
$$|\tan x|=\tan \left(\frac x2\right)$$ then $x$ can assume only two values, $0$ and $2\pi$ (at least that’s what I think). But since these don’t fit in the given interval, the number of solutions should be zero, yet the answer is 2. How should I correct it?

real analysis - $(y_n)$ bounded sequence and $\lim\frac{x_n}{y_n}=0$ show $\lim x_n = 0$



If $(x_n)$ and $(y_n)$ are positive real sequences such that $(y_n)$ is bounded and $\lim\frac{(x_n)}{(y_n)} = 0$, prove that $\lim(x_n) = 0$



So since $(y_n)$ is bounded, there exists a real number $M > 0$ such that for all natural numbers $n$, $y_n < M$.
So by the limit definition, for any $\epsilon > 0$ there is an $N$ such that for all $n > N$:
$\left|\frac{x_n}{y_n}-0\right| = \frac{x_n}{y_n} < \epsilon$
So $x_n < \epsilon\, y_n < \epsilon M$




Now I need to find a natural number $N$ such that for all $n > N$, $x_n < \epsilon$, which would prove the statement. Is this correct? However, I'm having trouble picking something that will work here. Am I on the right track? I also have to be very careful about which theorems I quote, since this is an introductory real analysis class.



Thanks!


Answer



I would approach this by contradiction:



If the sequence does not converge to 0, then there is a subsequence that is bounded away from 0. So we may fix $a>0$ such that $|x_n|>a$ for infinitely many values of $n$.



Since the sequence of $y_n$ is bounded, say by $M > 0$, we have $|y_n| \le M$ for all $n$; hence whenever $|x_n| > a$ we get $\left|\frac{x_n}{y_n}\right| > a/M$.




This inequality holds for infinitely many $n$, and we have found a subsequence of $(x_n/y_n)$ that does not converge to 0.



Does this make sense?


Saturday 20 February 2016

abstract algebra - Show that the set $\mathbb{Q}[\sqrt{2}] = \{a + b \sqrt{2} \mid a, b \in \mathbb{Q}\}$ is a field with the usual multiplication and addition.




Show that the set $\mathbb{Q}[\sqrt{2}] = \{a + b \sqrt{2} \mid a, b \in \mathbb{Q}\}$ is a field with the usual multiplication and addition.



It is easy enough to show that it is closed under addition and multiplication: for $a+b\sqrt{2},\ c+d\sqrt{2} \in \mathbb{Q}[\sqrt{2}]$ we have $(a+b\sqrt{2})+(c+d\sqrt{2}) = (a+c)+(b+d)\sqrt{2} \in \mathbb{Q}[\sqrt{2}]$ and $(a+b\sqrt{2})(c+d\sqrt{2}) = (ac+2bd)+(ad+bc)\sqrt{2} \in \mathbb{Q}[\sqrt{2}]$.



However, I had trouble proving the other axioms (associativity, commutativity, unique neutral element, unique inverse, and distributivity of multiplication over addition). I would really appreciate it if someone could help me with those.


Answer



Associativity, commutativity and distributivity of multiplication over addition come from the fact that $\Bbb{Q}[\sqrt{2}]\subset\Bbb{R}$, which is a field. The neutral element for addition is $0$ and for multiplication is $1$. We just need to prove that the inverse in $\Bbb{R}$ of $a+b\sqrt{2}\neq 0$ belongs in fact to $\Bbb{Q}[\sqrt{2}]$. One has



$$\begin{align}\left(a+b\sqrt{2}\right)^{-1}=&{1\over a+b\sqrt{2}}={a-b\sqrt{2}\over \left(a+b\sqrt{2}\right)\left(a-b\sqrt{2}\right)}\\=&{a\over a^2-2b^2}-{b\over a^2-2b^2}\sqrt{2}\in \Bbb{Q}[\sqrt{2}]\end{align}$$




where the denominator $a^2-2b^2\neq 0$ because $\sqrt{2}$ is irrational (if $a^2 = 2b^2$ with $b \neq 0$, then $\sqrt{2} = |a/b|$ would be rational).
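As a small computational sketch (my own, with hypothetical sample values), one can represent $a+b\sqrt{2}$ as a pair of rationals and check the inverse formula:

    from fractions import Fraction

    def mul(x, y):                      # (a + b sqrt2)(c + d sqrt2)
        a, b = x
        c, d = y
        return (a*c + 2*b*d, a*d + b*c)

    def inv(x):                         # the formula from the answer above
        a, b = x
        d = a*a - 2*b*b                 # nonzero since sqrt(2) is irrational
        return (a/d, -b/d)

    x = (Fraction(3), Fraction(-5))     # 3 - 5*sqrt(2)
    assert mul(x, inv(x)) == (Fraction(1), Fraction(0))
    print("inverse verified")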


calculus - Why is Euler's formula true?











I need to know why Euler's formula is true. I mean, why is the following true:
$$
e^{ix} = \cos(x) + i\sin(x)
$$


Answer



Hint: Notice $$\cos (x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots $$ and $$i\sin (x) = ix - \frac{ix^3}{3!} + \frac{ix^5}{5!} - \frac{ix^7}{7!} + \cdots $$ Now add them and use the fact that $i^2 = -1$, $i^3 = -i$, $i^4 = 1$. You should obtain the series for $e^{ix}$. Also notice: $$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots $$
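As a quick numeric illustration (my own sketch, not part of the original answer), truncating the series for $e^{ix}$ indeed matches $\cos x + i\sin x$:

    import math

    def exp_ix(x, terms=30):
        # partial sum of the Maclaurin series for e^{ix}
        return sum((1j * x)**k / math.factorial(k) for k in range(terms))

    x = 1.7
    assert abs(exp_ix(x) - complex(math.cos(x), math.sin(x))) < 1e-12
    print(exp_ix(x))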


paradoxes - Balls and Urn Paradox

So, I came across the following paradox:




At $1$ minute before noon, put in balls $1 \sim 10$ and take out ball number $1$. At $1/2$ minute before noon, put in balls $11\sim20$ and take out ball number $2$ and so on. How many balls are there at noon?



None.



At $1$ minute before noon, put in balls $1 \sim 10$ and randomly take out a ball. At $1/2$ minute before noon, put in balls $11\sim20$ and randomly take out another ball and so on. How many balls are there at noon?



None.



Okay, so I understand the first paradox because one can describe the exact moment each ball was taken out. But, you can't apply a similar argument to the second paradox because we randomly take out a ball.
I feel as if it's like infinitely summing $\frac{1}{n}$ and eventually there would be too many balls.




Can someone explain to me mathematically why this is the case?
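One way to see it (a sketch of my own, following the standard argument rather than anything stated above): just before the $n$-th removal there are $10n-(n-1)=9n+1$ balls present, so a fixed ball survives that removal with probability $1-\frac{1}{9n+1}$. Since $\sum_n \frac{1}{9n+1}$ diverges, the infinite product of survival probabilities is $0$, and every ball is eventually removed with probability $1$. Numerically:

    # partial products of the survival probability of a fixed ball
    p = 1.0
    for n in range(1, 10**6 + 1):
        p *= 1 - 1 / (9 * n + 1)
    print(p)   # ~0.2 after 10^6 steps; decays like N**(-1/9), so -> 0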

discrete mathematics - Prove that $|\mathbb{Q}| = |\mathbb{Q}\times \mathbb{Q}|$



I have this problem:




Prove that $|\mathbb{Q}| = |\mathbb{Q}\times \mathbb{Q}|$




I know that $\mathbb Q$ is countably infinite.




But then how can I prove that $|\mathbb{Q}\times \mathbb{Q}|$ is countably infinite?



Thank you!


Answer



Whatever proof you have that $\mathbb Q$ is countably infinite probably relies on a mapping between elements of $\mathbb Q$ and elements of $\mathbb {Z \times Z}$. But to say that $\mathbb Q$ is countably infinite is to put it in correspondence with $\mathbb Z$. Use this fact, then repeat the original proof.
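To make the recipe concrete, here is a sketch of mine (not from the original answer) that composes an encoding $\mathbb{Q}\to\mathbb{N}$ with the Cantor pairing function to inject $\mathbb{Q}\times\mathbb{Q}$ into $\mathbb{N}$ (the injection the other way is trivial, so Cantor-Bernstein finishes the argument):

    from fractions import Fraction

    def cantor(n, m):              # Cantor pairing, a bijection N x N -> N
        return (n + m) * (n + m + 1) // 2 + n

    def n_of_int(z):               # Z -> N: 0,-1,1,-2,2,... -> 0,1,2,3,4,...
        return 2*z if z >= 0 else -2*z - 1

    def n_of_rat(q):               # Q -> N, injective via lowest terms
        q = Fraction(q)
        return cantor(n_of_int(q.numerator), q.denominator)

    def n_of_pair(q, r):           # Q x Q -> N, injective
        return cantor(n_of_rat(q), n_of_rat(r))

    # spot-check injectivity on a small grid of rationals
    vals = {Fraction(a, b) for a in range(-4, 5) for b in range(1, 5)}
    codes = {n_of_pair(q, r) for q in vals for r in vals}
    assert len(codes) == len(vals)**2
    print("injective on the sample grid")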


Friday 19 February 2016

probability - Coupon Collecting Problem using Inclusion-Exclusion




I am very new to probability and combinatorics.
I am trying to solve the Coupon Collector problem using the inclusion-exclusion formula.



I found a lot of references; here are some:



http://en.wikipedia.org/wiki/Coupon_collector%27s_problem



Using Recursion to Solve Coupon Collector




but they all seem concerned with the expected time to collect all $N$ coupons, whereas my problem asks the following:



Suppose there are $N$ coupon types; after buying $n$ boxes, what is the probability that at least one coupon of each type has been collected?



Thanks


Answer



Hint: Consider the inclusion-exclusion principle as stated in Wikipedia: Given events $A_1,\ldots,A_N$, we are interested in the probability that at least one of them occurs. In your case, $A_i$ is the event that coupon $i$ never appears. The probability that you're after is $1-\Pr[A_1 \lor \cdots \lor A_N]$. It remains to find a formula for the probability that a set $S$ of coupons never appears, and to plug these probabilities into the inclusion-exclusion formula.
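Following the hint, $\Pr[\text{a fixed set } S \text{ never appears}] = \left(\frac{N-|S|}{N}\right)^n$, which gives the closed form below. A short sketch of mine (not part of the original answer, assuming each box contains one coupon uniform over the $N$ types):

    from math import comb

    def all_collected(N, n):
        # inclusion-exclusion: Pr[every one of the N coupon types
        # appears at least once among n uniform draws]
        return sum((-1)**k * comb(N, k) * ((N - k) / N)**n
                   for k in range(N + 1))

    print(all_collected(6, 13))   # e.g. all 6 die faces in 13 rolls, ~0.514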


integration - Integral for the New Year $2019$!





Does the following transition between $2018$ and $2019$ hold true?$$\large\bbox[10pt,#000,border:5px solid green]{\color{#58A}{\color{#A0A}\int_{\color{#0F5}{-\infty}}^{\color{#0F5}{+\infty}} \frac{\color{yellow}\sin\left(\color{#0AF}x\color{violet}-\frac{\color{tomato}{2018}}{\color{#0AF}x}\right)}{\color{#0AF}x\color{violet}+\frac{\color{aqua}1}{\color{#0AF}x}} \color{#A0A}{\mathrm d}\color{#0AF}x\color{aqua}=\frac{\color{magenta}\pi}{\color{magenta}e^{\color{red}{2019}}}}}$$
$$\large\color{red}{\text{Happy new year!}}$$







I must say that I got lucky arriving at this integral.



Earlier this year I encountered the following integral:$$\int_0^\infty \frac{\sqrt{x^4+3x^2+1}\cos\left[x-\frac{1}{x} +\arctan\left(x+\frac{1}{x}\right)\right]}{x(x^2+1)^2}dx=\frac34\cdot \frac{\pi}{e^2}$$

This at first sight looks quite scary, but after some manipulations it breaks up into two integrals, one of which is:$$\int_{-\infty}^\infty \frac{\sin\left(x-\frac{1}{x}\right)}{x+\frac{1}{x}}dx$$
And while trying to solve it I also noticed a pattern in integrals of this type.



Also, today when I saw this combinatorics problem I tried to make something similar and remembered the older integral. $\ddot \smile$






If you have other integrals of the same type, feel free to add them!


Answer



I will show that $$\int_{-\infty}^{\infty} \frac{\sin(x-nx^{-1})}{x+x^{-1}}\,dx=\frac{\pi}{e^{n+1}}.$$ I will do this using residue theory. We consider the function $$F(z)=\frac{z\exp(i(z-nz^{-1}))}{z^2+1}.$$ On the real axis, this has imaginary part equal to our integrand. We integrate around a contour that goes from $-R$ to $R$, with a short half-circle detour around the pole at $0$. Then we enclose it by a circular arc through the upper half plane, $C_R$. The integral around this contour is $2\pi i$ times the residue of the pole at $z=+i$.

Using the formula (see Wikipedia, the formula under "simple poles") for the residue of the quotient of two functions which are holomorphic near a pole, we see that the residue is $$\operatorname{Res}(F,i)=\frac{i\exp(i(i-i^{-1}n))}{2i}=\frac{1}{2}e^{-(n+1)}.$$ Thus the value of the integral is $2\pi i\operatorname{Res}(F,i)=i\frac{\pi}{e^{n+1}}$. This is the answer we want up to a constant of $i$, which comes from the fact that our original integrand is the imaginary part of the function $F(z)$.

We are therefore done if we can show that the integral around $C_R$ approaches $0$ as $R\to \infty$, as well as the integral around the little arc detour at the origin going to $0$ as its radius gets smaller. The fact that the $C_R$ integral approaches $0$ follows from Theorem 9.2(a) in these notes. This is because we can take $f(z)=\frac{z e^{-inz^{-1}}}{z^2+1}$ in that theorem to get $F(z)=f(z)e^{iz}$. The modulus $$|e^{-inz^{-1}}|=|e^{-inR^{-1}(\cos\theta-i\sin\theta)}|=e^{-\frac{n}{R}\sin\theta}.$$ Note that $\sin\theta \geq 0$ in the upper half plane, so we can bound this modulus by $1$.

So we get that $|f(z)|\leq |z|/|z^2+1|$ and moreover $z/(z^2+1)$ behaves like $1/z$ as $R$ increases, so the hypotheses of Theorem 9.2a are satisfied.



The integral around the arc near the origin limits to zero by elementary estimates, concluding the proof.
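As a small symbolic check (my own sketch, not part of the original answer), one can confirm the residue computation at $z = i$ with sympy:

    import sympy as sp

    z = sp.symbols('z')
    n = sp.symbols('n', positive=True)

    # F(z) = z exp(i(z - n/z)) / (z^2 + 1) has a simple pole at z = i, so
    # Res(F, i) = [z exp(i(z - n/z)) / (z + i)] evaluated at z = i
    res = (z * sp.exp(sp.I * (z - n / z)) / (z + sp.I)).subs(z, sp.I)
    print(sp.simplify(res))   # exp(-n - 1)/2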


elementary set theory - The Cartesian product $\mathbb{N} \times \mathbb{N}$ is countable



I'm examining a proof I have read that claims to show that the Cartesian product $\mathbb{N} \times \mathbb{N}$ is countable, and as part of this proof, I am looking to show that the given map is surjective (indeed bijective), but I'm afraid that I can't see why this is the case. I wonder whether you might be able to point me in the right direction?



Indeed, the proof begins like this:




"For each $n \in \mathbb{N}$, let $k_n, l_n$ be such that $n = 2^{k_n - 1} \left(2l_n - 1 \right)$; that is, $k_n - 1$ is the power of $2$ in the prime factorisation of $n$, and $2 l_n - 1$ is the (necessarily odd) number $\frac{n}{2^{k_n - 1}}$."



It then states that $n \mapsto \left(k_n , l_n \right)$ is a surjection from $\mathbb{N}$ to $\mathbb{N} \times \mathbb{N}$, and so ends the proof.



I can intuitively see why this should be a bijection, I think, but I'm not sure how to make these feelings rigorous?



I suppose I'd say that the map is surjective since given any $\left(k_n , l_n \right) \in \mathbb{N} \times \mathbb{N}$ we can simply take $n$ to be equal to $2^{k_n - 1} \left(2l_n - 1 \right)$ and note that $k_n - 1 \geq 0$, and thus $2^{k_n - 1}$ is greater than or equal to one and so is a natural number (making the obvious inductive argument, noting that multiplication on $\mathbb{N}$ is closed), and similarly that $2 l_n - 1 \geq 2\cdot 1 - 1 = 1$ is also a natural number; thus the product of these two, $n$, must also be a natural number. Is it just as simple as this?



I suppose my gut feeling in the proving that the map is injective would be to assume that $2^{k_n - 1} \left(2 l_n - 1 \right) = 2^{k_m - 1} \left(2 l_m - 1 \right)$ and then use the Fundamental Theorem of Arithmetic to conclude that $n = m$. Is this going along the right lines? The 'implicit' definition of the mapping has me a little confused about the approach.







On a related but separate note, I am aware that if $K$ and $L$ are any countable sets, then so is $K \times L$; taking the identity mapping we see trivially that $\mathbb{N}$ is certainly countable (!), and thus so is $\mathbb{N} \times \mathbb{N}$. Hence, it's not really the statement that I'm interested in, but rather the exciting excursion into number theory that the above alternative proof provides.


Answer



Your intuition is correct. We use the fundamental theorem of arithmetic, namely the prime factorization is unique (up to order, of course).



First we prove injectivity:



Suppose $(k_n,l_n),(k_m,l_m)\in\mathbb N\times\mathbb N$ and $2^{k_n - 1} (2 l_n - 1 ) = 2^{k_m - 1} (2 l_m - 1)$.




$2$ is a prime number and $2t-1$ is odd for all $t$, and so we have that the power of $2$ is the same on both sides of the equation, that is $k_n - 1 = k_m - 1$, hence $k_n=k_m$.



Divide by $2^{k_n - 1}$, and therefore $2l_n-1 = 2l_m-1$; add $1$ and divide by $2$, so $(k_n,l_n)=(k_m,l_m)$, and therefore this mapping is injective.



Surjectivity is even simpler: take $(k,l)\in\mathbb N\times\mathbb N$ and let $n=2^{k-1}(2l-1)$. Now $n\mapsto(k,l)$, because $2l-1$ is odd, so the power of $2$ in the prime decomposition of $n$ is exactly $k-1$, and from there $l$ is determined to be our $l$. (If you look closely, this is exactly the same argument as for injectivity, only applied "backwards", which is a trait many proofs of this kind have.)






As for simpler proofs, there are infinitely many... from Cantor's pairing function ($(n,m)\mapsto\frac{(n+m)(n+m+1)}{2}+n$), to Cantor-Bernstein arguments by $(n,m)\mapsto 2^n3^m$ and $k\mapsto (k,k)$ for the injective functions. I like this function, though. I will try to remember it and use it next time I teach someone such a proof.
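A tiny sketch of mine (not part of the original answer) exercising this bijection in both directions:

    def decode(n):
        # n = 2^(k-1) * (2l - 1): strip factors of 2, then read off l
        k = 1
        while n % 2 == 0:
            n //= 2
            k += 1
        return k, (n + 1) // 2

    def encode(k, l):
        return 2**(k - 1) * (2*l - 1)

    assert all(encode(*decode(n)) == n for n in range(1, 10000))
    assert len({decode(n) for n in range(1, 10000)}) == 9999   # injective
    print(decode(12))   # (3, 2), since 12 = 2^2 * 3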


elementary number theory - Solve $x^2+2=y^3$ using infinite descent?



just so this doesn't get deleted, I want to make it clear that I already know how to solve this using the UFD $\mathbb{Z}[\sqrt{-2}]$, and am in search for the infinite descent proof that Fermat claimed to have found.



I've always been fascinated by this Diophantine equation $x^2+2=y^3$ in particular ever since I saw it, and I still have no clue how to attack it without $\mathbb{Z}[\sqrt{-2}]$. What's disappointing is that no one else seems interested in the hunt (an elementary proof using infinite descent). I know it's been studied extensively, and there have even been generalizations, such as Mordell's equation. However, I've never seen Fermat's original proof that $(x,y)=(\pm 5, 3)$ is the only integer solution. Obviously, Fermat probably knew nothing of UFD's, which is why I believe there has to be an infinite descent proof like he claimed. Has anyone apart from him actually seen this proof? People mention it all the time, yet I can't find anything about it. As I said, I know that it involved infinite descent, but I've never seen it anywhere and no one seems to have any idea about it.




Does anyone have ideas for this approach? I mean, infinite descent seems more effective for showing a contradiction, e.g. showing there are no solutions. But how could it work here? Also, why isn't it published anywhere in all this time? Could it really be that only Fermat knew his method of descent well-enough to make this problem submit to it?



Thanks!


Answer



To answer your last question: Yes, it could really be that only Fermat knew his method of descent [using contemporary techniques] well enough to make this problem submit to it.



The companion problem regarding $x^2+4=y^3$ also has no known descent proof, though he claimed to have one. There is no known descent proof of the fact that Pell's equation has infinitely many solutions, but Fermat claimed to have proven that by descent as well. In fact, of the ten problems mentioned in his letter to Carcavi, which Fermat claimed to prove by infinite descent, as far as I know only one (FLT for $n=3$) has had a published descent proof.



To summarize: If Fermat had only claimed to have proven one of his theorems (e.g. FLT) by descent, and no such proof was ever found, I would have no problem convincing myself that he was mistaken. But he claimed descent proofs of dozens of theorems, all of which were later proven true using other methods — at some point, we have to ask ourselves what he knew that we don't.


sequences and series - What is $\sum_{n=1}^{\infty} \frac{1}{\sqrt{n^{3} + 1}}$?

I am interested in the symbolic evaluation of infinite series over algebraic functions in terms of well-known special functions such as the
Hurwitz zeta function. Often, infinite sums over algebraic expressions can be evaluated in a simple way in terms of well-known special functions. For example, a direct application of Euler's identity may be used to show that $$ \sum_{n \in \mathbb{N}} \frac{1}{\sqrt{(n^2+1)(\sqrt{n^2+1}+n)}} = - \frac{i \left( \zeta\left( \frac{1}{2}, 1 - i \right) - \zeta\left(\frac{1}{2}, 1 + i \right) \right)}{\sqrt{2}}. $$
The above formula seems to suggest that there may be many classes of infinite series over algebraic expressions that could be easily evaluated in terms of well-established special functions.



Inspired in part by this question, together with
this question
and
this question, as well as
this question, I'm interested in the problem of evaluating
$$\sum_{n=1}^{\infty} \frac{1}{\sqrt{n^{3} + 1}} = 2.29412\ldots$$
in terms of "known" special functions, e.g., special functions implemented within Mathematica. This problem is slightly different from the related problems given in the above links, since I'm interested specifically in the use of special functions to compute the above sum symbolically.



More generally, what kinds of algorithms could be used to evaluate infinite sums over algebraic expressions symbolically in terms of special functions?
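For reference, the quoted numeric value is easy to reproduce (a sketch of mine, using mpmath's built-in convergence acceleration):

    from mpmath import mp, nsum, sqrt, inf

    mp.dps = 25
    s = nsum(lambda n: 1 / sqrt(n**3 + 1), [1, inf])
    print(s)   # 2.29412...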

real analysis - Show $a_{n+1}=\sqrt{2+a_n},\ a_1=\sqrt2$ is monotone increasing and bounded by $2$; what is the limit?


Let the sequence $\{a_n\}$ be defined as $a_1 = \sqrt 2$ and $a_{n+1} = \sqrt {2+a_n}$.



Show that $a_n \le$ 2 for all $n$, $a_n$ is monotone increasing, and find the limit of $a_n$.





I've been working on this problem all night and every approach I've tried has ended in failure. I keep trying to start by saying that $a_n>2$ and showing a contradiction but have yet to conclude my proof.
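As a numeric hint (my own sketch, not part of the original post), iterating the recursion suggests monotone convergence to $2$, consistent with solving $L=\sqrt{2+L}$ for the limit:

    from math import sqrt

    a = sqrt(2)
    for _ in range(20):
        a = sqrt(2 + a)   # each iterate increases and stays below 2
    print(a)   # 1.999999...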

Thursday 18 February 2016

elementary number theory - Find the remainder when $10^{400}$ is divided by 199?



I am trying to solve a problem



Find the remainder when $10^{400}$ is divided by $199$.




I tried it by breaking $10^{400}$ into $1000^{133}\cdot 10$.



And when $1000$ is divided by $199$, the remainder is $5$.



So finally we have to find the remainder of:



$5^{133}*10$



But from here I could not find anything that reduces it to smaller numbers.




How can I achieve this?



Is there any standard way to solve this type of problem where the modulus is a big prime number?



Thanks in advance.


Answer



You can use Fermat's little theorem. It states that if $n$ is prime then $a^n$ has the same remainder as $a$ when divided by $n$.



So, $10^{400} = 10^2 (10^{199})^2$. Since $10^{199}$ has remainder $10$ when divided by $199$, the remainder is therefore the same as the remainder of $10^4$ when divided by $199$. $10^4 = 10000 = 50*199 + 50$, so the remainder is $50$.
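As a one-line check (my own addition), Python's built-in fast modular exponentiation agrees:

    print(pow(10, 400, 199))   # 50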


number theory - How to go from Fermat’s little theorem to Euler’s theorem through Ivory’s demonstration?

Ivory’s demonstration of Fermat’s theorem exploits the fact that given a prime $p$, all the numbers from $1$ to $p-1$ are relatively prime to $p$ (obvious since $p$ is prime). Ivory multiplies them by $x$ and gets:



$(x)(2x)\cdots((p-1)x)\equiv(1)(2)\cdots(p-1)\pmod{p}$




which gives the theorem since I can cancel all the integers and leave:



$x^{p-1}\equiv1\pmod{p}$.



To derive Euler’s theorem I should switch to a non-prime modulus $m$, take the positive integers relatively prime to $m$, and repeat the process. At this point I have $\phi(m)$ such numbers. How do I prove that they are pairwise incongruent modulo $m$ (so that they form a reduced residue system modulo $m$), so that I can multiply them by an $x$ relatively prime to $m$ and repeat the same steps to prove



$x^{\phi(m)}\equiv1\pmod{m}$ ?
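A quick computational illustration (my own sketch, with hypothetical values $m=20$, $x=7$) of the key fact, that multiplication by a unit permutes the residues coprime to $m$:

    from math import gcd

    m, x = 20, 7
    units = [a for a in range(1, m) if gcd(a, m) == 1]   # phi(20) = 8 residues
    assert gcd(x, m) == 1
    assert sorted(a * x % m for a in units) == units     # x permutes the units
    print(pow(x, len(units), m))   # 1, i.e. x^phi(m) = 1 (mod m)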

sequences and series - Proof that $\lim_{n\to\infty} n\left(\frac{1}{2}\right)^n = 0$




Please show how to prove that $$\lim_{n\to\infty} n\left(\frac{1}{2}\right)^n = 0$$



Answer



Consider extending the sequence $\{n/2^n\}$ to the function $f(x)=x/2^x$.



Then use L'Hôpital's rule: $\lim_{x\to\infty} x/2^x$ has indeterminate form $\infty/\infty$. Taking the limit of the quotient of derivatives we get $\lim_{x\to\infty} 1/(\ln 2\cdot 2^x)=0$. Thus $\lim_{x\to\infty} x/2^x=0$ and so $n/2^n\to 0$ as $n\to\infty$.


Derivative of Square Root Polynomial?



How do you find the derivative of





$\sqrt{x^2 - 4x + 4}$




I applied Chain rule and got this




$\frac{x-2}{\sqrt{(x-2)^2}}$




However, the fill-in box requires two distinct functions (piecewise) where x > ______ and x < _____.




How would I get two equations from the derivative?



[Problem image]


Answer



Hint: $\sqrt{x^2-4x+4}=\sqrt{(x-2)^2}=|x-2|$
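A small sympy sketch (my own addition) showing the two piecewise values of the derivative:

    import sympy as sp

    x = sp.symbols('x', real=True)
    f = sp.sqrt(x**2 - 4*x + 4)
    fp = sp.diff(f, x)                    # (x - 2)/sqrt(x**2 - 4*x + 4)
    print(fp.subs(x, 5), fp.subs(x, 0))   # 1 -1: the values for x > 2 and x < 2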


Analogues of Galois Theory for Complex Numbers



Are there any analogues of Galois theory for complex numbers (with non-zero imaginary part)? This is motivated by the Schwarz reflection principle. It says that if $f$ is analytic and defined in the upper-half disk, and $f$ extends to a continuous function on the real axis taking real values there, then $f$ can be extended to an analytic function on the whole disk by the formula $f(\bar{z}) = \overline{f(z)}$.



This reminds me of the Isomorphism Extension Theorem.


Answer



Galois theory is useful when you have some algebraic object, and a list of tools you are allowed to use within that object. The purpose of Galois theory is to explain how far one can go only using those tools.




For example, it is impossible to create, using only the tools of +, -, *, / and nth roots, a formula for the zeroes of a general 5th degree polynomial. This is the most basic application of Galois theory. This is the Abel-Ruffini theorem:



http://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem



Another application is Differential Galois theory, which tells you which integrals can be expressed in terms of elementary functions. But once again, it comes down to having only a few basic tools with which to construct an anti-derivative for a function using only elementary functions, and that is why Galois theory is useful in this application.



So, it's hard to understand what your question is asking, because you haven't really phrased it as a Galois theory question.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...