Saturday 31 March 2018

calculus - How many n-th Order Partial Derivatives Exist for a Function of k Variables?



Example:
Let's say, for example, I have a function $f$ of $2$ variables:




$f(x,y)$



For this function there exist $2$ first-order partial derivatives, namely:



$f_x = \frac{\partial f}{\partial x}$



$f_y = \frac{\partial f}{\partial y}$



Then if we are to differentiate further we will find that there are $4$ computable second-order partial derivatives.




$f_{xx} = \frac{\partial^2 f}{\partial x^2}$



$f_{xy} = \frac{\partial^2 f}{{\partial x} {\partial y}}$



$f_{yy} = \frac{\partial^2 f}{\partial y^2}$



$f_{yx} = \frac{\partial^2 f}{{\partial y} {\partial x}}$



However due to the Equality of Mixed Partials (https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives), two of those second-order partial derivatives are equivalent, $f_{xy} = f_{yx}$, and thus we are left with $3$ second-order partial derivatives for a function of 2 variables.




$f_{xx} = \frac{\partial^2 f}{\partial x^2}$



$f_{xy} = \frac{\partial^2 f}{{\partial x} {\partial y}} \Leftrightarrow f_{yx} = \frac{\partial^2 f}{{\partial y} {\partial x}}$



$f_{yy} = \frac{\partial^2 f}{\partial y^2}$



Question: Given a function of k variables:



$f(x_1 , x_2,x_3,\dots,x_{k-1},x_k)$




Is there a formula to find the number of $n^{th}$-order partial derivatives, (where $n$ is the order of the partial derivative), for a function of $k$ variables?



For example where $n=1$ (i.e. the first-order derivatives), there would be $k$ partial derivatives, just as in the example above, for a function of $2$ variables there exists $2$ first-order derivatives.


Answer



This is the problem of distributing $n$ balls over $k$ bins, which can be solved using the stars and bars approach; the result is



$$\binom{n+k-1}{k-1}\;.$$
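As a quick cross-check (a minimal Python sketch; the helper name is mine), one can enumerate the distinct partial derivatives directly: assuming all mixed partials commute, an $n$-th order partial derivative corresponds to a multiset of $n$ variables chosen from the $k$ available.

```python
# Count n-th order partials of a k-variable function by listing multisets
# of n variable indices, and compare with the stars-and-bars formula.
from itertools import combinations_with_replacement
from math import comb

def count_partials(n, k):
    return sum(1 for _ in combinations_with_replacement(range(k), n))

for n in range(1, 5):
    for k in range(1, 5):
        assert count_partials(n, k) == comb(n + k - 1, k - 1)

print(count_partials(2, 2))  # 3, matching the f(x, y) example above
```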


summation - My visual interpretation of $1+2+3+\dots+n$

To be frank, I didn't learn any sort of proof for this (visual or non-visual), so I came up with this proof through trial and error.
Moreover, I haven't checked my proof online yet, so I am not sure if I am the first one to come up with it. Nonetheless, it is still quite a remarkable proof, at least for me :D.



Hope you will appreciate my visual proof from below!



[The visual proof was attached as two images in the original post.]

elementary number theory - $\gcd(b^x - 1, b^y - 1, b^z - 1, \dots) = b^{\gcd(x, y, z, \dots)} - 1$







Dear friends,




Since $b$, $x$, $y$, $z$, $\ldots$ are integers greater than 1, how can we prove that
$$
\gcd (b ^ x - 1, b ^ y - 1, b ^ z - 1 ,\ldots)= b ^ {\gcd (x, y, z, \ldots)} - 1
$$
?



Thank you!



Paulo Argolo
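The identity is easy to sanity-check numerically before attempting a proof. A minimal Python sketch (not a proof; the helper name is mine):

```python
# Check gcd(b^x - 1, b^y - 1, ...) == b^gcd(x, y, ...) - 1 on small cases.
from math import gcd
from functools import reduce

def check(b, exponents):
    lhs = reduce(gcd, (b**e - 1 for e in exponents))
    rhs = b**reduce(gcd, exponents) - 1
    return lhs == rhs

print(check(2, [4, 6]))       # gcd(15, 63) = 3 = 2^gcd(4,6) - 1 -> True
print(check(3, [6, 10, 15]))  # True
```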

calculus - What about $\lim\limits_{x\to0}-\frac{\sin x}{x}$?



We all know that:



$\lim\limits_{x\to0}\frac{\sin x}x=1$



So what is the negative:




$\lim\limits_{x\to0}-\frac{\sin x}x=$?



I am trying to evaluate $\lim\limits_{x\to0}\frac{x^2\sin \frac 1x}{\sin^2 x}$.



I got to $\frac{-x^2}{\sin^2 x} \le \frac{x^2\sin \frac 1x}{\sin^2 x}\le\frac{x^2}{\sin^2 x}$



and $\lim\limits_{x\to0}\frac{x^2}{\sin^2 x}= \lim\limits_{x\to0}\frac{\sin x}x\lim\limits_{x\to0}\frac{\sin x}x= 1\times1=1 $



and $\lim\limits_{x\to0}-\frac{x^2}{\sin^2 x}= \lim\limits_{x\to0}\frac{\sin x}x\lim\limits_{x\to0}-\frac{\sin x}x=?$



Answer



Simply put,




$$\lim_{x\to a} (Af(x))=A\lim_{x\to a} f(x),\text{ where $A$ is a constant.}$$
So let $A=-1$ in your expression.




Edit: Let me generalize:




If $\lim_{x\to a}f(x),\lim_{x\to a}g(x)$ exist, then $\lim_{x\to a}(f(x)g(x))$ exists. So, let $g(x)=\dfrac{\sin x}{x},f(x)=-1$.


complex analysis - Is there an elementary method for evaluating $\displaystyle \int_0^\infty \frac{dx}{x^s (x+1)}$?

I found a way to evaluate $\displaystyle \int_0^\infty \frac{dx}{x^s (x+1)}$ using the assumption that $s\in\mathbb{R}$ and $0<s<1$.

Apparently it should be easily extended to all $s\in\mathbb{C}$ with $0<\Re(s)<1$.

I posted my solution here: http://thetactician.net/Math/Analysis/Integral1.pdf



I'm pretty sure there's a more concise method for evaluating it...and I'd also like to make the extension to $\mathbb{C}$ more rigorous.



Any ideas?

sequences and series - How can I find $sumlimits_{n=i+1}^infty binom{n-1}{i}left (frac{1}{3}right)^{n}$?



In a probability proof I've arrived at the sum $\sum\limits_{n=i+1}^\infty \binom{n-1}{i} \left(\frac{1}{3}\right)^{n}$ where $i$ is constant. WolframAlpha gives a simple expression for this sum, but I've been unable to find it myself: I've tried rewriting it as $\frac{1}{i!}\sum\limits_{n=i+1}^\infty \frac{(n-1)!}{(n-1-i)!} \left(\frac{1}{3}\right)^{n}$ but this hasn't advanced me much.



How can I find this sum?


Answer



Using Grigory's Wiki link, we wish to adjust our sum to start at zero.



Find $m$ such that $n = i+1$ implies $m=0$: $m = n-(i+1) = n-i-1.$ Now adjust the term inside the binomial coefficient. Since $n = m+i+1$, then $n-1 = m+i$.




Therefore,



$$\sum_{n=i+1}^\infty \begin{pmatrix} n-1 \\ i\end{pmatrix} \left(\frac13\right)^n = \sum_{m=0}^\infty \begin{pmatrix} m+i \\ i \end{pmatrix}\left(\frac13\right)^{m+i+1}.$$



Now, $i$ is fixed inside the sum. So let's re-write:



$$\sum_{m=0}^\infty \begin{pmatrix} m+i \\ i \end{pmatrix}\left(\frac13\right)^{m+i+1} = \left(\frac13\right)^{i+1}\sum_{m=0}^\infty \begin{pmatrix} m+i \\ i \end{pmatrix}\left(\frac13\right)^m.$$



The binomial coefficient is symmetric, in other words




$$\begin{pmatrix} p \\ q \end{pmatrix} = \frac{p!}{q!(p-q)!} = \frac{p!}{(p-q)!(p-(p-q))!} = \begin{pmatrix} p \\ p-q\end{pmatrix}.$$



Therefore, $$\begin{pmatrix} m+i \\ i\end{pmatrix} = \begin{pmatrix} m+i \\ m+i-i \end{pmatrix} = \begin{pmatrix} m+i \\ m \end{pmatrix}.$$



Now our series becomes



$$\begin{align*}
\left(\frac13\right)^{i+1}\sum_{m=0}^\infty \begin{pmatrix} m+i \\ i \end{pmatrix}\left(\frac13\right)^m &= \left(\frac13\right)^{i+1}\sum_{m=0}^\infty \begin{pmatrix} m+i \\ m \end{pmatrix}\left(\frac13\right)^{m} \\
&= \frac{\left(\frac13\right)^{i+1}}{\left(1-\frac13\right)^{i+1}}\\
&= \frac{\left(\frac13\right)^{i+1}}{\left(\frac23\right)^{i+1}} \\

&= 2^{-i-1}.
\end{align*}$$



Note that the second step uses the special case linked in the Wiki article.
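As a numerical sanity check of the final answer, a small Python sketch (truncating the series at a large index):

```python
# Verify sum_{n=i+1}^inf C(n-1, i) (1/3)^n = 2^(-(i+1)) for small i.
from math import comb

for i in range(6):
    s = sum(comb(n - 1, i) * (1/3)**n for n in range(i + 1, 400))
    assert abs(s - 2**(-(i + 1))) < 1e-12
print("partial sums match 2^(-i-1) for i = 0..5")
```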


Friday 30 March 2018

Which step is wrong in this proof




Proof: Consider the quadratic equation $x^2+x+1=0$. Then, we can see that $x^2=−x−1$. Assuming that $x$ is not zero (which it clearly isn't, from the equation) we can divide by $x$ to give
$$x=−1−\frac{1}{x}$$
Substitute this back into the $x$ term in the middle of the original equation, so
$$x^2+(−1−\frac{1}{x})+1=0$$
This reduces to
$$x^2=\frac{1}{x}$$

So, $x^3=1$, so $x=1$ is the solution. Substituting back into the equation for $x$ gives
$1^2+1+1=0$

Therefore, $3=0$.




What happened?


Answer



Up to $x^3=1$ everything is fine. This allows you to conclude that $x\in \{z\in \Bbb C\colon z^3=1\}$. Since $\{z\in \Bbb C\colon z^3=1\}=\left\{\dfrac{-1 + i\sqrt 3}{2}, \dfrac{-1 - i\sqrt 3}{2},1\right\}$, then $x$ is one of the elements of this set.



You made a reasoning consisting of logical consequences, not one of logical equivalences. That's why you can tell that $x\in \{z\in \Bbb C\colon z^3=1\}$, but you can't say which one it is.




See this for a similar issue.



An even simpler version of your mistake is this: suppose $x^2=1$, then $x=1$.
You can convince yourself that this is wrong and that you did the same thing in your question.
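To see this concretely, here is a small Python check of the three cube roots of unity; only the two complex ones actually satisfy the original equation:

```python
# The cube roots of unity; x = 1 satisfies x^3 = 1 but not x^2 + x + 1 = 0.
import cmath

for k in range(3):
    x = cmath.exp(2j * cmath.pi * k / 3)
    print(round(abs(x**3 - 1), 12), round(abs(x**2 + x + 1), 12))
# k = 0 (x = 1) prints 0.0 and 3.0 -- the "3 = 0" comes from picking this root.
```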


Thursday 29 March 2018

real analysis - Limit of $\|x\|_p$ as $p\rightarrow\infty$




I am not sure how to start this hw problem. Here it is:



Let $x$ be a given vector in $\ell^p$ with $1\le p\lt\infty$. Show that $x \in \ell^\infty$ and
$$\lim_{p \to \infty} \|x\|_p = \|x\|_\infty.$$ This seems like it would be obvious, but trying to prove it is different. I am not sure how to even start this one. Thanks.



EDIT: Would the triangle inequality be useful here?


Answer



Let $x\in\ell^{p_0}$ for some $1\le p_0<\infty$. In order for that limit to even have a chance to converge to a finite limit, we first need to prove $x\in \ell^{p}$ for all $p\ge p_0$.





Claim: $\|x\|_p\le \|x\|_{p_0}$ for all $p_0\le p\le\infty$.




Proof: First consider the case $p<\infty$. Let us assume $\|x\|_{p_0}=1$. It follows that $|x_n|\le 1$ for all $n$. Now



$$\|x\|_p^p =\sum_n |x_n|^p \le \sum_n |x_n|^{p_0}=1$$



so $\|x\|_p\le 1=\|x\|_{p_0}$. The claim for all $x\not=0$ follows by exploiting homogeneity of the inequality. Set $\lambda=\|x\|_{p_0}$. Then $\|x/\lambda\|_{p_0}=1$. Therefore




$$\|x\|_p=\lambda \left\|\frac{x}{\lambda}\right\|_p\le \lambda=\|x\|_{p_0}$$



Now we do the case $p=\infty$. By assumption, the sum $\sum_n |x_n|^{p_0}$ converges. Therefore $|x_n|^{p_0}\rightarrow 0$ as $n\rightarrow\infty$ and $|x_n|$ must be bounded. $\square$



This shows in particular $x\in \ell^{p}$ for all $p\ge p_0$ (including $p=\infty$). Now we show



$$\lim_{p\rightarrow\infty}\|x\|_p = \|x\|_\infty$$



Again, first assume $\|x\|_\infty=1$. Then $|x_n|\le 1$ for all $n$. Therefore




$$\limsup_{p\rightarrow\infty} \|x\|_p \le 1\tag{1}$$



On the other hand $1=\|x\|_\infty=\sup_n |x_n|$ implies that for every $\epsilon>0$ there is an $n_0$ such that $|x_{n_0}|>1-\epsilon$. Thus



$$\|x\|_p = \left(\sum_{n} |x_n|^p\right)^{1/p} \ge |x_{n_0}|>1-\epsilon$$



Letting $\epsilon\rightarrow 0$ gives



$$\liminf_{p\rightarrow\infty} \|x\|_p \ge 1\tag{2}$$




Now $(1)$ and $(2)$ imply together that $\lim_{p\rightarrow\infty} \|x\|_p$ exists and equals $1$.



In general we conclude again by homogeneity as follows: write $\lambda=\|x\|_\infty\not=0$. Then $\|x\|_p=\lambda \|x/\lambda\|_p$ and since $\|x/\lambda\|_\infty=1$ the above gives



$$\lim_{p\rightarrow\infty} \|x\|_p = \lambda \lim_{p\rightarrow\infty} \left\|\frac{x}{\lambda}\right\|_p = \lambda = \|x\|_\infty$$
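A quick numerical illustration of the two facts proved above (plain Python; the sample sequence is mine):

```python
# For x = (1/2, 1/4, 1/8, ...) in l^1: ||x||_p decreases in p and -> ||x||_inf.
x = [1 / 2**n for n in range(1, 20)]

def lp_norm(x, p):
    return sum(abs(t)**p for t in x) ** (1 / p)

for p in [1, 2, 4, 8, 16, 64]:
    print(p, lp_norm(x, p))
print("sup norm:", max(abs(t) for t in x))  # 0.5, the limit of the p-norms
```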


modular arithmetic - Construction of multiplication and addition tables for GF(4) with Modulo ($x^2 + x + 1$)

I am still trying to understand polynomial arithmetic with Galois Field.
Can someone explain to me what this question is asking to be constructed? I have searched all over Google for GF(4) with a modulus and I am not getting anywhere.




I have read over a few of the other posts here on the forums but none seem to apply to my specific situation. Can anyone help me understand GF with modulo of a polynomial?



Thanks in advance.
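For concreteness, here is a minimal Python sketch of what the question asks for: GF(4) realized as polynomials over GF(2) reduced modulo $x^2+x+1$. The bitmask encoding and helper names are mine:

```python
# Elements encoded as 2-bit masks: 0 -> 0, 1 -> 1, 2 -> x, 3 -> x+1.
# Addition is coefficient-wise mod 2 (XOR); multiplication is a carry-less
# polynomial product reduced via x^2 = x + 1 (from x^2 + x + 1 = 0).

def gf4_add(a, b):
    return a ^ b

def gf4_mul(a, b):
    p = 0
    for i in range(2):            # carry-less multiply, degree <= 2
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:                 # reduce the x^2 term: x^2 -> x + 1
        p ^= 0b111
    return p

names = ["0", "1", "x", "x+1"]
for op, label in [(gf4_add, "add"), (gf4_mul, "mul")]:
    print(label)
    for a in range(4):
        print([names[op(a, b)] for b in range(4)])
```

The resulting multiplication table shows, for example, $x\cdot x = x+1$ and $x\cdot(x+1)=1$, which is exactly what reduction modulo $x^2+x+1$ produces.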

sequences and series - Find value of infinite sum











How would I go about deriving the value of the following infinite sum: $\sum\limits_{k=1}^\infty kx^k$ ?



I thought about expanding first: $\sum\limits_{k=1}^\infty kx^k= x + 2x^2 + 3x^3 + \cdots$



Then a bit of algebra: $\sum\limits_{k=1}^\infty kx^k - \sum\limits_{k=1}^\infty (k-1)x^k = x + x^2 + x^3 + \cdots + 1 -1 $



And now I'm stuck with this: $\sum\limits_{k=1}^\infty x^k = \frac{x}{1-x}$




How can I introduce the $k$ into $\sum\limits_{k=1}^\infty x^k$ ? Or is there a different approach that I don't know of?



Any help is much appreciated.


Answer



As you know $\sum_{k=0}^\infty x^k=\frac{1}{1-x}$ for $|x|<1$, you can differentiate the result. Justify that you can differentiate the series term-by-term.
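A symbolic check of that hint (assuming sympy is available): differentiate the geometric series' closed form and multiply by $x$:

```python
# x * d/dx [1/(1-x)] = x/(1-x)^2, which is sum_{k>=1} k x^k for |x| < 1.
import sympy as sp

x = sp.symbols('x')
geom = 1 / (1 - x)                 # closed form of sum_{k>=0} x^k
print(sp.simplify(x * sp.diff(geom, x)))
```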


limits - Find $\lim_{n\rightarrow\infty}\frac{n}{(n!)^{1/n}}$





I am having trouble showing $$\lim_{n\rightarrow\infty}\frac{n}{(n!)^{1/n}}=e.$$


Answer



Let $a_{n}=n^{n}/n!$ and then $$a_{n+1}/a_{n}=\{(n +1)^{n+1}/(n+1)!\}\{n!/n^{n}\}=\left(1+\frac{1}{n}\right)^{n}$$ which tends to $e$; therefore $a_{n}^{1/n}=n/(n!)^{1/n}$ also tends to $e$, by the standard lemma that $a_{n+1}/a_n\to L$ implies $a_n^{1/n}\to L$ for positive sequences.
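A quick numerical check of the claimed limit (plain Python, using `lgamma` to avoid huge factorials):

```python
# n / (n!)^(1/n) approaches e = 2.71828... (slowly, by Stirling's formula).
from math import lgamma, exp

for n in [10, 100, 1000, 10000]:
    print(n, n / exp(lgamma(n + 1) / n))
```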


modular arithmetic - Find the inverse modulo of a number - got a negative result



I'm trying to find the inverse of $17 \pmod{3120}$.



I've tried:



$$
\begin{eqnarray}
3120 =& 17\cdot 183 &+ 9\\
17 =& 9\cdot 1 &+ 8\\

9 =& 8\cdot 1 &+ 1\\
8 =& 8\cdot1
\end{eqnarray}
$$



and then work from the bottom:



$$
\begin{align}
1 = & 9 - 8\cdot1 \\

= & 9 - (17 - 9\cdot1)\\
= & 9\cdot2 - 17\cdot1\\
= & 2 (3120 - 17\cdot183) -17\cdot1\\
= & 3120\cdot2 -17\cdot367
\end{align}
$$
This means that $-367$ is the inverse. Is it the correct result? Can a modular inverse be negative? Using this website gives the inverse as $2753$ instead; how can I get that?


Answer



You have obtained the correct answer. Note that
$$-367 \equiv (3120-367) \pmod{3120} \equiv 2753 \pmod{3120}$$

In general, if you do not like negative modulo, you can always make it into a positive one. If you have $-a \pmod{n}$, where $a \in \{1,2,\ldots,n\}$, then you can make it into a positive one as follows.
$$-a \pmod{n} \equiv (n-a) \pmod{n}$$
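The whole computation, including the normalization step, can be reproduced in a few lines of Python (the `egcd` helper is mine; `pow(a, -1, n)` needs Python 3.8+):

```python
# Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = egcd(17, 3120)
print(g, x, x % 3120)     # 1 -367 2753: the negative inverse, normalized
print(pow(17, -1, 3120))  # 2753, Python's built-in modular inverse
```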


Tuesday 27 March 2018

calculus - Limit Proof of $e^x/x^n$




I am wondering how to prove $$\lim_{x\to \infty} \frac{e^x}{x^n}=\infty$$



I was thinking of using L'Hospital's rule? But then not sure how to do the summation for doing L'Hospital's rule n times on the denominator? Or whether it would be easier using logs like $\lim_{x\to \infty} \ln(e^x)-\ln(x^n)$?




Thank you!


Answer



You can certainly use L'Hopital's $n$ times. That is, for each $n\geq 0$ we have $$\lim_{x\to\infty}\frac{e^x}{x^n}=\lim_{x\to\infty}\frac{e^x}{nx^{n-1}}=\cdots=\lim_{x\to\infty}\frac{e^x}{n!}=\infty$$ since at each stage we are in $\frac{\infty}{\infty}$ indeterminate form.


Monday 26 March 2018

linear algebra - find eigenvalues and eigenvectors



Question: A). Consider a matrix $A= \begin{pmatrix}0&1&1\\ -2&3&1\\-3&1&4 \end{pmatrix}$. Find all eigenvalues and the corresponding eigenvectors (and generalized eigenvectors) of A.



B). Find a vector $v$ such that $v$,$Av$, and $A^2v$ are linearly independent.



Proof: A) Ok so for this one I got my eigenvalues to be $\lambda= 2,3$ and my vectors to be $v_1= \begin{pmatrix} 1\\1\\2 \end{pmatrix}$ such that $\lambda=3$, $v_2= \begin{pmatrix} 1\\1\\1 \end{pmatrix}$ such that $\lambda=2$, and because we have a generalized eigenvector we get $v_3= \begin{pmatrix} 2\\2\\1 \end{pmatrix}$. Are these results right?



and for part B) I'm not quite sure how to show that. Where can I go to see how to show linear independence?



Answer



Your ${\bf v_3}$ is wrong. An easy way to see this is that the generalised eigenvectors should form a basis for $\Bbb R^3$, but for your vectors
$${\bf v}_1+{\bf v}_3=3{\bf v}_2\ ,$$
so they are not independent.



You can find ${\bf v}_3$ in various ways, for example by solving
$$(A-2I){\bf v}_3={\bf v}_2\ ,$$
which gives the useful relations
$$A{\bf v}_1=3{\bf v}_1\ ,\quad A{\bf v}_2=2{\bf v}_2\ ,\quad
A{\bf v}_3={\bf v}_2+2{\bf v}_3\ .\tag{$*$}$$




These also give a (relatively) simple way to do the second question. Since $\{{\bf v}_1,{\bf v}_2,{\bf v}_3\}$ is a basis for $\Bbb R^3$, we can take
$${\bf v}=\alpha{\bf v}_1+\beta{\bf v}_2+\gamma{\bf v}_3\ .\tag{${*}{*}$}$$
Using $(*)$, we have
$$\eqalign{
A{\bf v}&=3\alpha{\bf v}_1+2\beta{\bf v}_2+\gamma({\bf v}_2+2{\bf v}_3)\cr
A^2{\bf v}&=9\alpha{\bf v}_1+4\beta{\bf v}_2+\gamma(4{\bf v}_2+4{\bf v}_3)\ .
\cr}$$
For these vectors to be independent we need the determinant
$$D=\det\pmatrix{\alpha&3\alpha&9\alpha\cr \beta&2\beta+\gamma&4\beta+4\gamma\cr

\gamma&2\gamma&4\gamma\cr}$$
to be non-zero. But
$$\eqalign{D
&=\alpha\gamma
\det\pmatrix{1&3&9\cr \beta&2\beta+\gamma&4\beta+4\gamma\cr 1&2&4\cr}\cr
&=\alpha\gamma\det\pmatrix{1&3&9\cr 0&2\gamma&4\gamma\cr 1&2&4\cr}\cr
&=\alpha\gamma^2\det\pmatrix{1&3&9\cr 0&2&4\cr 1&2&4\cr}\cr
&=-\alpha\gamma^2\ ;\cr}$$
so you can take any ${\bf v}$ from $({*}{*})$ as long as neither $\alpha$ nor $\gamma$ is zero.
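All of these relations are easy to confirm numerically (assuming numpy is available; since $A-2I$ is singular, the generalized eigenvector is obtained here with a least-squares solve):

```python
import numpy as np

A = np.array([[0, 1, 1], [-2, 3, 1], [-3, 1, 4]])
v1 = np.array([1, 1, 2])   # eigenvector for eigenvalue 3
v2 = np.array([1, 1, 1])   # eigenvector for eigenvalue 2
print(A @ v1 - 3 * v1)     # [0 0 0]
print(A @ v2 - 2 * v2)     # [0 0 0]

v3 = np.linalg.lstsq(A - 2 * np.eye(3), v2, rcond=None)[0]
print(A @ v3 - (v2 + 2 * v3))   # ~[0 0 0]: (A - 2I) v3 = v2 holds
```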


elementary set theory - How to prove that a set A is finite iff it is equipotent to $J_n=\{1,\dots,n\}$ for some $n\in\mathbb{N}$?




Considering the (not most common) definition:




A set is infinite if it is equipotent to a proper subset of itself. A set is finite if it is not infinite.




How can I prove that a set $A$ is finite iff it is equipotent to $J_n=\{1,\dots,n\}$ for some $n\in\mathbb{N}$ (assuming that I already proved that $J_n$ is a finite set for every $n\in\mathbb{N}$)?


Answer



Suppose $A$ is not equipotent with any of the $J_n$. Then we see that we can produce a surjection from $A$ to $\mathbb{N}$. If not, then there is some maximum $n\in \mathbb{N}$ for $f(A)$. From there you can whittle it down to a bijection to a $J_n$, and you can conclude by contradiction that $A$ is equipotent to some $J_n$.




For the reverse direction, if $A$ is equipotent to $J_n$ for some $n$ then there is a bijection $f:A\to J_n$. Pick any $k\in J_n$ then there is no bijection from $J_n$ to $J_n\setminus \{k\}$. Since $f$ is a bijection, argue that there is no bijection from $A$ to $A\setminus\{f^{-1}(k)\}.$


Sunday 25 March 2018

probability - Finding PDF involving absolute value



I'm trying to solve the following question:
Given an exponential R.V. X with rate parameter $\lambda > 0$, find the PDF of $V=|X-\lambda|$.




In order to find the PDF, I would like to use the CDF method (i.e. finding the CDF and then taking the derivative to obtain the PDF). I realize this function is not one to one on the range between 0 and $2\lambda$, so the CDF should be broken into three parts: $w<0$, $0<w<2\lambda$, and $w>2\lambda$. For the middle part, I think I want to do the integral $\int_{-w+\lambda}^{w+\lambda}\lambda e^{-\lambda x}\,dx$, with the function being integrated there being the exponential distribution pdf. After getting this, I should be able to take the derivative and get the PDF for $0<w<2\lambda$.

For the last part, I believe the bounds are $2\lambda<w<\infty$.

I understand there are probably more efficient ways of solving this, but I'm specifically trying to do it using the CDF method!


Answer



You have done most of the analysis, so I will be to a great extent repeating what you know. We want to find an expression for $\Pr(V\le w)$.



In general, $V\le w$ iff $|X-\lambda| \le w$ iff $X-\lambda\le w$ and $X-\lambda\ge -w$, that is, iff

$$\lambda-w \le X\le \lambda+w.$$



There are three cases to consider, (i) $w\le 0$; (ii) $0\lt w\le \lambda$; and (ii) $w \gt \lambda$.



Case (i): This is trivial: if $w\le 0$ then $\Pr(V\le w)=0$.



Case (ii): We want $\Pr(X\le \lambda+w)-\Pr(X\lt \lambda -w)$. This is
$$(1-e^{-\lambda(\lambda+w)})-(1-e^{-\lambda(\lambda-w)}).$$
There is some immediate simplification, to $e^{-\lambda(\lambda-w)}-e^{-\lambda(\lambda+w)}$, and there are various alternate ways to rewrite things, by introducing the hyperbolic sine.




Case (iii): This one is easier. We simply want $\Pr(X\le \lambda+w)$. For $w\ge -\lambda$, this is
$$1-e^{-\lambda(\lambda+w)}.$$



We could have set up the calculations using integrals, but since we already know that $F_X(x)=1-e^{-\lambda x}$ (when $x\gt 0$) there is no need to do that.



Now that we have the cdf of $V$, it is straightforward to find the density. For $w\le 0$, we have $f_V(w)=0$. For $0\lt w\lt \lambda$, we have $f_V(w)=\lambda e^{-\lambda(\lambda-w)}+\lambda e^{-\lambda(\lambda+w)}$. Finally, for $w\gt \lambda$ we have $f_V(w)=\lambda e^{-\lambda(\lambda+w)}$.



Remark: Suppose that we did not have a nice expression for the cdf of $X$. That happens, for example, with the normal, and a number of other distributions. We could still find the density function by setting up our probabilities as integrals, and differentiating under the integral sign.
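A Monte Carlo cross-check of the derived density (a Python sketch; the sample size and test points are arbitrary choices of mine):

```python
# Compare an empirical histogram of V = |X - lambda| against f_V above.
import random, math

lam, N, h = 1.5, 200_000, 0.05
samples = [abs(random.expovariate(lam) - lam) for _ in range(N)]

def f_V(w):
    if w <= 0:
        return 0.0
    if w < lam:
        return lam * math.exp(-lam * (lam - w)) + lam * math.exp(-lam * (lam + w))
    return lam * math.exp(-lam * (lam + w))

for w in [0.3, 1.0, 2.0]:
    emp = sum(w - h < s <= w + h for s in samples) / (N * 2 * h)
    print(w, round(emp, 3), round(f_V(w), 3))   # empirical vs exact
```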


calculus - How to solve this limit question without using L'Hopital's rule?


Find the limit $$\lim_{x\rightarrow 0} \frac{\sin x-x}{\tan^3 x}$$




I found the limit which is $-\frac{1}{6}$ by using L'Hopital Rule. Is there another way to solve it without using the rule? Thanks in advance.

Saturday 24 March 2018

calculus - Find $\int_{-\infty}^{\infty}\frac{1}{x^{12} + 1}\,dx$ elementarily





How to find the following integral $$\int_{-\infty}^{\infty}\frac{1}{x^{12} + 1} dx $$ using substitution, integration by parts, partial fractions, etc., but not Cauchy's residue theorem?


Answer



Following Lucian's comment, by partial fractions we have
$$\frac{1}{x^{12}+1}=\frac{1}{\left(x^4+1\right) \left(x^8-x^4+1\right)}=\frac{1}{3}\left[\frac{1}{x^4+1}+\frac{2}{x^8-x^4+1}-\frac{x^4}{x^8-x^4+1}\right]$$
where




$$\begin{align}\frac{1}{x^4+1}=&\,\frac{x-\sqrt{2}}{2 \sqrt{2} \left(-x^2+\sqrt{2} x-1\right)}+\frac{x+\sqrt{2}}{2 \sqrt{2} \left(x^2+\sqrt{2} x+1\right)}\\[15pt]\frac{1}{x^8-x^4+1}=&\,\frac{2 x-\sqrt{6}}{2 \sqrt{2} \left(2+\sqrt{3}\right) \left(2 \sqrt{3}-3\right) \left(-2 x^2+\sqrt{2} x+\sqrt{6} x-2\right)}\\&\,+\frac{2 x-\sqrt{6}}{2 \sqrt{2} \left(2+\sqrt{3}\right) \left(2 \sqrt{3}-3\right) \left(-2 x^2-\sqrt{2} x+\sqrt{6} x-2\right)}\\&\,+\frac{2 x+\sqrt{6}}{2 \sqrt{2} \left(2+\sqrt{3}\right) \left(2 \sqrt{3}-3\right) \left(2 x^2+\sqrt{2} x+\sqrt{6} x+2\right)}\\&\,+\frac{2 x+\sqrt{6}}{2 \sqrt{2} \left(2+\sqrt{3}\right) \left(2 \sqrt{3}-3\right) \left(2 x^2-\sqrt{2} x+\sqrt{6} x+2\right)}\\[15pt]\frac{x^4}{x^8-x^4+1}=&\,\frac{\left(1+\sqrt{3}\right) x}{2 \sqrt{2} \left(2+\sqrt{3}\right) \left(2 \sqrt{3}-3\right) \left(-2 x^2-\sqrt{2} x+\sqrt{6} x-2\right)}\\&\,+\frac{\left(1+\sqrt{3}\right) x}{2 \sqrt{2} \left(2+\sqrt{3}\right) \left(2 \sqrt{3}-3\right) \left(2 x^2-\sqrt{2} x+\sqrt{6} x+2\right)}\\&\,+\frac{\left(\sqrt{3}-3\right) x}{2 \sqrt{6} \left(2+\sqrt{3}\right) \left(2 \sqrt{3}-3\right) \left(-2 x^2+\sqrt{2} x+\sqrt{6} x-2\right)}\\&\,-\frac{\left(\sqrt{3}-1\right) x}{2 \sqrt{2} \left(2+\sqrt{3}\right) \left(2 \sqrt{3}-3\right) \left(2 x^2+\sqrt{2} x+\sqrt{6} x+2\right)}\end{align}$$
The remaining evaluations are elementary but tedious.


probability - Rolling a die until obtaining the face 6. Whats the expected amout of the sum?

A game where you roll a fair die, repeatedly, adding up the faces that show up, until the face 6 appears. What is the expected sum (including the 6)?


All I can think of is that the expected value of each roll is $7/2$.

I'd appreciate your help!
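By Wald's identity the expected sum is $E[N]\cdot E[\text{face}] = 6\cdot 3.5 = 21$, since the number of rolls $N$ is a stopping time with mean $6$; a short simulation (a Python sketch) agrees:

```python
# Roll until a 6 appears, summing all faces including the final 6.
import random

def play():
    total = 0
    while True:
        roll = random.randint(1, 6)
        total += roll
        if roll == 6:
            return total

trials = 200_000
print(sum(play() for _ in range(trials)) / trials)   # ~21
```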

calculus - Limit of some specific "almost Riemann" sums



I'm just beginning my journey with calculus, and this problem is giving me a hard time.




$$\lim_{n \to \infty} \sum_{k=1}^n \frac{k}{{n^2+k}}$$



I calculated the first three sums: $$\frac{1}{2}, \frac{16}{30}, \frac{117}{220}$$
From that I guessed that the limit is $\frac{1}{2}$. The hard part, though, is proving it.



I reckon that the squeeze theorem should be used here, but I can't seem to find anything useful. (The only sequences I was able to figure out give me limits of 1 and 0, so it's not something I can use)



Maybe you guys can give me some ideas or theorems that can be of use here? I know that this problem could be probably solved using some more advanced calculus techniques, but I'm not familiar with derivatives and integrals yet, and this problem is in the limits section of my problem book, so I'm not supposed to use anything more advanced.


Answer



For every $1\leqslant k\leqslant n$, $n^2\leqslant n^2+k\leqslant n^2+n$ hence the $n$th sum $S_n$ is such that

$$
\sum_{k=1}^n\frac{k}{n^2+n}\leqslant S_n\leqslant\sum_{k=1}^n\frac{k}{n^2}.
$$
Now, if it happens that the lower bound and the upper bound both converge to the same limit $\ell$, then $\lim\limits_{n\to\infty}S_n=\ell$.



Hence, what is left to do is to show that $\ell$ exists and to compute its value (or rather, to check that your guess that $\ell=\frac12$ is indeed correct--which it is).


optimization - Proving the shortest function connecting two points is a straight line WITHOUT assuming the Euler-Lagrange equation



For learning purposes, I'm trying to prove that the shortest function passing through the two points $(x_1, y_1)$, $(x_2, y_2)$ is a straight line, without using the Euler-Lagrange equation.



My attempt at a proof is below. I think I have most of it, but I get stuck at the end.
How do I finish the proof?







My attempt:



Assume $y$ is the function of $x$ to be determined, which satisfies $y(x_1) = y_1$ and $y(x_2) = y_2$.
I define the length to be minimized as follows:



$$L(y) = \int_{x_1}^{x_2} \sqrt{1 + \left(\frac{dy}{dx}\right)^2} \,dx$$



Consider perturbing $y$ by some multiple $\delta$ of an arbitrary deviation $w$ that vanishes at $x_1$ and $x_2$:



$$L(y, \delta) = \int_{x_1}^{x_2} \sqrt{1 + \left(\frac{d(y + \delta w)}{dx}\right)^2} \,dx$$




$$L(y, \delta) = \int_{x_1}^{x_2} \sqrt{1 + \left(\frac{dy}{dx} + \delta\frac{dw}{dx}\right)^2} \,dx$$



If $y$ minimizes $L$, then for any fixed $w$ the rate of change (i.e., derivative) of $L$ with respect to $\delta$ must approach zero as $\delta \to 0$:



$$
\frac{d}{d\delta}L(y, \delta)
= \frac{d}{d\delta}\int_{x_1}^{x_2} \sqrt{1 + \left(\frac{dy}{dx} + \delta\frac{dw}{dx}\right)^2} \,dx \\
= \int_{x_1}^{x_2} \frac{d}{d\delta} \sqrt{1 + \left(\frac{dy}{dx} + \delta\frac{dw}{dx}\right)^2} \,dx \\
= \int_{x_1}^{x_2} \frac{\left(\frac{dy}{dx} + \delta\frac{dw}{dx}\right) \frac{dw}{dx}}{\sqrt{1 + \left(\frac{dy}{dx} + \delta\frac{dw}{dx}\right)^2}} \,dx \to 0
$$




Specifically, since this holds true when $\delta = 0$, we have:



$$
\int_{x_1}^{x_2} \frac{\frac{dy}{dx} \frac{dw}{dx}}{\sqrt{1 + \left(\frac{dy}{dx}\right)^2}} \,dx = 0
$$



Now here's where I get stuck:



I need to get rid of $w$ somehow.




Because the equality above needs to hold for all $w$, I think I would prefer to pick $w$ to be something convenient that makes my life easier; say, $\frac{dw}{dx} = \sqrt{1 + \left(\frac{dy}{dx}\right)^2}$, to cancel the denominator.
But I cannot assume this is possible, as $w$ must satisfy two boundary conditions:
it must vanish at both $x_1$ and at $x_2$.



How am I supposed to proceed?


Answer



Well, the two final steps might be the following.



First, you integrate last expression by parts:



$$\int_{x_1}^{x_2} \frac{y'w'}{\sqrt{1+ (y')^2}} dx

= \left \lbrack \frac{y'w}{\sqrt{1+ (y')^2}} \right \rbrack \Bigg \vert_{x_1}^{x_2} - \int_{x_1}^{x_2} w \frac{y''}{{(1+ (y')^2)}^{\frac{3}{2}}} dx $$



Since $w(x)$ vanishes at the endpoints, then we have



$$ \int_{x_1}^{x_2} w \frac{y''}{{(1+ (y')^2)}^{\frac{3}{2}}} dx \equiv 0 $$
for any choice of $w$.



The second step is the following. Now consider a family of functions $w_{x_0, n}(x)$
($x_0$ is a point from $(x_1, x_2)$ and $n$ belongs to a subset of natural numbers s.t. $(x_0 - \frac{1}{n}, x_0 + \frac{1}{n}) \subset (x_1, x_2)$ ):





  • $w_{x_0, n}(x_1) = w_{x_0, n}(x_2) = 0 $

  • $w_{x_0, n}(x)$ is zero outside $(x_0 - \frac{1}{n}, x_0 + \frac{1}{n})$

  • $w_{x_0, n}(x)$ is positive on $( x_0 - \frac{1}{n}, x_0 + \frac{1}{n} )$



It's easy to show that, if each $w_{x_0, n}$ is additionally normalized so that $\int_{x_1}^{x_2} w_{x_0, n}\,dx = 1$, then



$$ \lim\limits_{n \rightarrow + \infty} \int_{x_1}^{x_2} w_{x_0, n} \frac{y''}{{(1+ (y')^2)}^{\frac{3}{2}}} dx = \frac{ y''(x_0)}{{(1+ (y'(x_0))^2)}^{\frac{3}{2}}}. $$




But we know that each of these integrals is equal to 0, so $y''(x_0) \equiv 0$ for $x_0 \in (x_1, x_2) $. By continuity you just obtain that $y'' \equiv 0$ at $\lbrack x_1, x_2 \rbrack$ and thus it's a linear function of $x$.


Friday 23 March 2018

trigonometry - Solving limits of a sine wave $\lim_{x\to+\infty}\frac{\sin(x)}{x}$




So I got this assignment, and I was wondering how it is possible to get a limit from a constantly changing formula.
$$\lim_{x\to+\infty}\frac{\sin(x)}{x}$$
Can I only look in the domain $]0,2\pi[$?


Answer



A possible approach is to use the following facts:




  1. $$\lim_{x\to+\infty}\frac{1}{x}=0$$


  2. $$-1\le\sin x\le1$$




discrete mathematics - Prove that a number with 30 digits cannot have more than 100 prime factors.

I know that a number with more than 100 prime factors must be larger than $2^{100}$, so it must have more than 30 digits, but I am having trouble with the proof.



I was given the hint: every prime number is $\ge 2$.

Can someone help me connect the two ideas?
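The two ideas connect as follows: a product of at least $100$ primes, each $\ge 2$, is at least $2^{100}$, and $2^{100}$ already has $31$ decimal digits, so it cannot be written with $30$ digits. A one-line Python check of the digit count:

```python
print(2**100, len(str(2**100)))  # 1267650600228229401496703205376, 31 digits
```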

geometry - Find possible areas of triangle given radius of circumscribed circle

Question: In triangle ABC, AB=4, AC=5, and the radius R of the circumscribed circle is equal to √7. Find all possible values of the area of triangle ABC.



Using the sine rule, I found angle $B$ to be approximately $70.89339°$, angle $C$ to be approximately $49.10661°$, and angle $A$ to be $60°$. Using the cosine law, I found the missing side length $BC$ to be $\sqrt{21}$. Using Heron's formula, I then found the area to be $5\sqrt{3}$.




However, this question requires more than one area. I am confused as to how to obtain the second area?

Wednesday 21 March 2018

integration - Evaluating the contour integral $\int_{0}^{\infty}\frac{\sin^{3}(x)}{x^{3}}\,\mathrm dx$





I am trying to show



$$\int_{0}^{\infty}\frac{\sin^{3}(x)}{x^{3}}\mathrm dx = \frac{3\pi}{8}.$$



I believe the contour I should use is a semicircle in the upper half plane with a slight bump at the origin, so I miss the singularity.



Lastly I have the hint to consider
$$\int_{0}^{\infty}\frac{e^{3iz}-3e^{iz}+2}{z^{3}}\mathrm dz$$



around the contour I mentioned. Thanks for any help or hints!



Answer



Your contour will work perfectly, so I wonder why you're hesitating to proceed with your calculation. Anyway, here is a solution:



Since $\sin^3 z = (3\sin z - \sin 3z)/4$, we have



$$\int_{0}^{\infty} \frac{\sin^3 x}{x^3} \, dx = \frac{1}{8} \Im \lim_{\epsilon \to 0} \int_{\mathbb{R} \backslash (-\epsilon, \epsilon)} \frac{3 e^{iz} - e^{3iz} -2}{z^3} \, dz,$$



where the term $-2$ is introduced in order to cancel out the pole of order 3, without affecting the value of the integral. Consider the counterclockwise-oriented upper semicircle $C$ of radius $R$, centered at the origin, with semicircular indent of radius $\epsilon$. Let $\Gamma_{R}^{+}$ and $\gamma_{\epsilon}^{-}$ denote semicircular arcs of $C$ of radius $R$ and $\epsilon$, respectively. If we put



$$f(z) = \frac{3 e^{iz} - e^{3iz} - 2}{z^3},$$




then we find that




  • On $\Gamma_R^+$, we have $|f(z)| \leq 6R^{-3}$ and thus
    $$\int_{\Gamma_{R}^{+}} f(z) \, dz \to 0 \quad \text{as } R \to \infty.$$


  • Notice that
    $$ f(z) = \frac{3}{z} + O(1) \quad \text{near } z = 0. $$
    So by the direct computation,
    $$\int_{\gamma_{\epsilon}^{-}} f(z) \, dz

    = -\int_{0}^{\pi} f(\epsilon e^{i\theta}) i\epsilon e^{i\theta} \, d\theta
    = -\int_{0}^{\pi} (3i + O(\epsilon)) \, d\theta
    \to -3\pi i \quad \text{as } \epsilon \to 0.$$
    (This is exactly the same as $-\pi i$ times the residue of $f$ at $z = 0$. The emergence of residue can be attributed to the fact that $f$ has only simple pole at $z = 0$.)




Since $f(z)$ has no pole on the region enclosed by $C$, we have
$$\lim_{\epsilon \to 0} \int_{\mathbb{R} \backslash (-\epsilon, \epsilon)} \frac{3 e^{iz} - e^{3iz} - 2}{z^3} \, dz = 3\pi i.$$
This proves the desired identity.
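A numerical confirmation of the result (assuming scipy is available):

```python
# Integral of (sin x / x)^3 over (0, inf) vs 3*pi/8.
from scipy.integrate import quad
import numpy as np

def f(x):
    return 1.0 if x == 0 else (np.sin(x) / x)**3

val, err = quad(f, 0, np.inf, limit=500)
print(val, 3 * np.pi / 8)   # both ~1.1780972
```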


Tuesday 20 March 2018

limits - 1 to the power of infinity, why is it indeterminate?





I've been taught that $1^\infty$ is an indeterminate case. Why is it so? Isn't $1\cdot1\cdot1\cdots=1$ no matter how many times you multiply it? So if you take a limit, say $\lim_{n\to\infty} 1^n$, doesn't it converge to 1? So why would the limit not exist?


Answer



It isn’t: $\lim_{n\to\infty}1^n=1$, exactly as you suggest. However, if $f$ and $g$ are functions such that $\lim_{n\to\infty}f(n)=1$ and $\lim_{n\to\infty}g(n)=\infty$, it is not necessarily true that



$$\lim_{n\to\infty}f(n)^{g(n)}=1\;.\tag{1}$$



For example, $$\lim_{n\to\infty}\left(1+\frac1n\right)^n=e\approx2.718281828459045\;.$$



More generally,




$$\lim_{n\to\infty}\left(1+\frac1n\right)^{an}=e^a\;,$$



and as $a$ ranges over all real numbers, $e^a$ ranges over all positive real numbers. Finally,



$$\lim_{n\to\infty}\left(1+\frac1n\right)^{n^2}=\infty\;,$$



and



$$\lim_{n\to\infty}\left(1+\frac1n\right)^{-n^2}=0\;,$$




so a limit of the form $(1)$ always has to be evaluated on its own merits; the limits of $f$ and $g$ don’t by themselves determine its value.


real analysis - Dense subset of continuous functions



Let $C([0,1], \mathbb R)$ denote the space of real continuous functions at $[0,1]$, with the uniform norm. Is the set $H=\{ h:[0,1] \rightarrow \mathbb R : h(x)= \sum_{j=1}^n a_j e^{b_jx} , a_j,b_j \in \mathbb R, n \in \mathbb N \}$ dense in $C([0,1], \mathbb R)$?


Answer



Yes, by the general Stone-Weierstrass Approximation Theorem. Indeed, $H$ is an algebra of continuous real-valued functions on the compact Hausdorff space $[0,1]$, and $H$ separates points and contains a non-zero constant function. That is all we need to conclude that $H$ is dense in $C([0,1],\mathbb{R})$.


inequality - Prove $\min\left(a+b+\frac1a+\frac1b\right) = 3\sqrt{2}$ given $a^2+b^2=1$



Prove that
$$ \min\left(a+b+\frac1a+\frac1b\right) = 3\sqrt{2}$$
Given $$a^2+b^2=1 \quad(a,b \in \mathbb R^+)$$
Without using calculus.
$\mathbf{My\ Attempt}$: I tried AM-GM, but this only gives a lower bound of $4$.



I used Cauchy-Schwarz to get $\quad (a+b)^2 \le 2(a^2+b^2) = 2\quad \Rightarrow\quad a+b\le \sqrt{2}$
But using Titu's Lemma I get $\quad \frac1a+\frac1b \ge \frac{4}{a+b}\quad \Rightarrow\quad \frac1a+\frac1b \ge 2\sqrt{2}$
I'm stuck here, any hint?


Answer




Notice
$$\begin{align}
a + b + \frac1a + \frac1b
= &\; \left(a + \frac{1}{2a} + \frac{1}{2a}\right)
+ \left(b + \frac{1}{2b} + \frac{1}{2b}\right)\\
\\
\color{blue}{\rm AM \ge \rm GM \rightarrow\quad}
\ge &\; 3\left[\left(\frac{1}{4a}\right)^{1/3} + \left(\frac{1}{4b}\right)^{1/3}\right]\\
\color{blue}{\rm AM \ge \rm GM \rightarrow\quad}
\ge &\; 6 \left(\frac{1}{16ab}\right)^{1/6}\\

\color{blue}{a^2 + b^2 \ge 2 ab \rightarrow\quad}
\ge &\; 6 \left(\frac{1}{8(a^2+b^2)}\right)^{1/6}\\
= &\; 6 \left(\frac18\right)^{1/6} = \frac{6}{\sqrt{2}} = 3\sqrt{2}
\end{align}
$$
Since the value $3\sqrt{2}$ is achieved at $a = b = \frac{1}{\sqrt{2}}$, we have



$$\min \left\{ a + b + \frac1a + \frac1b : a, b > 0, a^2+b^2 = 1 \right\} = 3\sqrt{2}$$



Notes




About the question of how I came up with this: I basically know the minimum is
achieved at $a = b = \frac{1}{\sqrt{2}}$. Since the bound $3\sqrt{2}$ on RHS of is optimal, if we ever want to prove the inequality, we need to use something that is tight when $a = b$. If we want to use AM $\ge$ GM, we need to arrange the pieces so that all terms are equal. That's why I split $\frac1a$ into $\frac{1}{2a} + \frac{1}{2a}$ and $\frac1b$ into $\frac{1}{2b} + \frac{1}{2b}$ and see what happens. It just turns out that works.
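A brute-force check of the minimum over the constraint curve, parametrized as $a=\cos t$, $b=\sin t$ (a Python sketch; the grid size is arbitrary):

```python
# Scan t in (0, pi/2); the minimum lands at t = pi/4, i.e. a = b = 1/sqrt(2).
import math

def objective(t):
    a, b = math.cos(t), math.sin(t)
    return a + b + 1/a + 1/b

best = min(objective(k * math.pi / 2 / 10000) for k in range(1, 10000))
print(best, 3 * math.sqrt(2))   # both ~4.2426407
```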


sequences and series - arithmetic progression, division and remainder. finding pattern length



When all the elements in an arithmetic progression are divided by a constant number $k$ and the remainders are written down, you'll quickly notice that a repeating series of numbers appears. A simple example:



7   42  77  112 147 182 217 252 287 322 357 392 (progression, 35 added each time)

7 42 17 52 27 2 37 12 47 22 57 32 (remainder after division by 60)



When 35 is added again, that gives 392+35=427. The remainder after division by 60 is again 7. The pattern



7   42  17  52  27  2   37  12  47  22  57  32


repeats itself. In this example, this pattern consists of 12 numbers.



Can you calculate the length of that pattern with a formula when you know the constant difference in the progression and the constant number? I tried to figure this out, but failed.



Answer



The period in the example is the smallest number $k$ with the property $$35k\equiv 0 \mod 60$$



We get $$k=\frac{60}{\gcd(35,60)}=\frac{60}{5}=12$$



This way you can find the period in general.
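Checking the formula against the example in the question (plain Python; the helper name is mine):

```python
# Smallest k > 0 with k*d divisible by m, vs m // gcd(d, m).
from math import gcd

def period(d, m):
    k = 1
    while (k * d) % m != 0:
        k += 1
    return k

print(period(35, 60), 60 // gcd(35, 60))   # 12 12, the pattern length above
```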


Monday 19 March 2018

combinatorics - Combinatorial interpretation for the identity $\sum\limits_i\binom{m}{i}\binom{n}{j-i}=\binom{m+n}{j}$?

A known identity of binomial coefficients is that
$$
\sum_i\binom{m}{i}\binom{n}{j-i}=\binom{m+n}{j}.
$$
Is there a combinatorial proof/explanation of why it holds? Thanks.

differential geometry - Does this proof work to prove that the greatest area of a triangle inside a circle is when the triangle is equilateral?



Does this proof work to prove that the greatest area of a triangle inside a circle is when the triangle is equilateral? I gather it doesn't because most of the proofs I've seen use derivatives etc. If so why doesn't it work?




Consider a triangle $ABC$ inscribed inside a circle $\Gamma$. Assume WLOG one of the sides is fixed; then one can easily see that the other two sides being equal maximizes the area of the triangle (it has the greatest height). Now consider one of the two equal sides: using that side as a base, we can see from the same argument as before that the triangle must in fact be equilateral to maximize the area (this triangle has the greatest height as before).



Thanks in advance.


Answer



Let us do some analytic geometry using your idea: suppose our circle is $\,x^2+y^2=R^2\;$ , and we're going to fix, as you did, one of our sides as being parallel to the $\,x-$axis for simplicity, so that its end points are $\,A=(a,b)\;,\;\;B=(-a,b)\;,\;\;a>0\;,\;a^2+b^2=R^2\;,\;\;b\le 0\;$ .



Let the other vertex be $\,C=(0,R)\;$ since, as you remarked, for the above two points of the triangle, the maximal height is obtained when the other two sides are equal, which means the third vertex is on the side's perpendicular bisector, which is easy to see is the $\,y-$axis.



Thus, the area of the triangle depends on the distance $\,R-b=R-\sqrt{R^2-a^2}\,$ and on the horizontal side length $\,2a\;$ :




$$f(a):=a\left(R-\sqrt{R^2-a^2}\right)\implies f'(a)=R-\sqrt{R^2-a^2}+\frac{a^2}{\sqrt{R^2-a^2}}\stackrel ?=0\iff$$



$$R\sqrt{R^2-a^2}-R^2+a^2+a^2=0\implies(2a^2-R^2)^2=R^2(R^2-a^2)\implies$$



$$R^4-4R^2a^2+4a^4=R^4-R^2a^2\implies a^2\left(4a^2-3R^2\right)=0\implies$$



$$\implies a=\frac{\sqrt3}2R$$



so that the slope of the line




$$CA\,\;,\;\;A=\left(\frac{\sqrt3}2R\,,\,-\frac12R\right)\;,\;C=(0,R)\;\,\;\;\text{is}$$



$$-\frac{\frac32R}{\frac{\sqrt3}2R}=-\sqrt3=\tan\frac{-\pi}3$$



and we're done...


sequences and series - Convergence of $1+\frac{1^2\cdot2^2}{1\cdot3\cdot5}+ \frac{1^2\cdot2^2\cdot3^2}{1\cdot3\cdot5\cdot7\cdot9}+\dots$



I am trying to use the ratio test; for that, I need a general formula for the terms of the series.



The general formula for the numerator is $(n!)^2$



The denominator is a sequence of odd numbers that grows by two terms every time but how do I represent it?




Also, any tips for how I can guess the series from a sequence would be greatly appreciated.


Answer



Lets try writing the general term
$$a_n=\frac{(n!)^2}{1\cdot 3\cdot 5\cdots (4n-5)(4n-3)}\\a_{n+1}=\frac{((n+1)!)^2}{1\cdot 3\cdot 5\cdots (4n-1)(4n+1)}\\\frac{a_{n+1}}{a_n}=\frac{((n+1)!)^2}{1\cdot 3\cdot 5\cdots(4n-1)(4n+1)}\cdot\frac{1\cdot 3\cdot 5\cdots(4n-5)(4n-3)}{(n!)^2}\\\frac{a_{n+1}}{a_n}=\frac{(n+1)^2}{(4n-1)(4n+1)}$$
also you can notice that
$$1\cdot 3\cdot 5\cdots (2k-1)(2k+1)=\frac{(2k+1)!}{2^kk!}$$
With $2k+1=4n-3$, i.e. $k=2n-2$, this gives $$a_n=\frac{(n!)^2\,2^{2n-2}\,(2n-2)!}{(4n-3)!}$$
Doing the ratio test should give the same result as above.
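Doing exactly that in Python: the ratio tends to $\frac{1}{16}<1$, so the series converges by the ratio test (the helper name is mine):

```python
# a(n) = (n!)^2 / (1*3*5*...*(4n-3)); print consecutive ratios.
from math import factorial

def a(n):
    denom = 1
    for odd in range(1, 4 * n - 2, 2):   # 1, 3, ..., 4n-3
        denom *= odd
    return factorial(n)**2 / denom

for n in [1, 2, 5, 20, 100]:
    print(n, a(n + 1) / a(n))            # approaches 1/16 = 0.0625
```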


probability - How many random samples needed to pick all elements of set?





If repeatedly picking a random element from a set, what is the expected number of times I'd have to pick before seeing all the elements of the set?



Edit: when picking an element, it is simply counted and not removed from the set, so it can be picked again.


Answer



This is the coupon collector's problem. The expected number of picks required to choose all the elements of the set is $$nH_n = n\sum_{i=1}^n\frac1i.$$
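This classic formula is easy to verify by simulation (a Python sketch; $n=20$ is an arbitrary choice):

```python
# Average picks to see all n elements vs n * H_n.
import random

def collect(n):
    seen, picks = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        picks += 1
    return picks

n, trials = 20, 20_000
expected = n * sum(1 / i for i in range(1, n + 1))
print(sum(collect(n) for _ in range(trials)) / trials, expected)  # both ~72.0
```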



Sunday 18 March 2018

discrete mathematics - Euclid's lemma, mod, gcd proof


(a) Let $p$ be an odd prime and $a$ be an integer with $\gcd(a, p) = 1$. Show that $x^2 - a \equiv 0 \mod p$ has either $0$ or $2$ solutions modulo $p$



b) Generalize Part a) in the following way: Let $m = p_{1}\cdots p_{r}$ with distinct odd primes $p_{1},\dots,p_{r}$ and let $a$ be an integer with $\gcd(a, m) = 1$. Show that $x^{2} \equiv a \bmod m$ has either $0$ or $2^r$ solutions modulo $m$.




Answer:
(a) Since $p$ is an odd prime, $p>2$. With $\gcd(a,p)=1$, if a solution exists, so does a second solution, because $(-x)^2 \equiv x^2\bmod p$ and $-x\not\equiv x \bmod p$.



If $y$ is another solution, then $x^2\equiv y^2\bmod p$, so $x^2-y^2=(x-y)(x+y)\equiv0\bmod p$. By Euclid's lemma, this implies either $p|x-y$ or $p|x+y$. Therefore, either $y \equiv x\bmod p$ or $y \equiv -x\bmod p$ and there is no possibility of a third solution.




This shows that $x^2\equiv a\bmod p$ either has no solutions or exactly two solutions.



I'm not sure how to do part b though.

real analysis - Can the function defined in this way be everywhere discontinuous?

Suppose that we have some real function of a real variable $f$ defined on the set $[a,b]$ which has the properties that:



1) $f$ takes values in the set on which it is defined



2) for every $y \in [a,b]$ there exists one and only one $x \in [a,b]$ such that $f(x)=y$.



The question is:





Can such function be everywhere discontinuous?




The question can be asked also in this form:




Does there exist everywhere discontinuous bijection defined on the set $[a,b]$ which also takes values in the set $[a,b]$.





I guess that the answer is yes but at the moment I am not smart enough to prove the existence or to construct such an example.



Thank you for your response and co-operation.

algebra precalculus - Why k should be odd?




My teacher once said that for any positive number $n$, $n^k - 1$ would always have $n-1$ as a factor for all positive odd values of $k$. Could anyone tell me the proof? I have written my approach below.



Assuming $n-1$ to be a factor.



$n^k - 1 = (n-1)x $



$n^k = (n-1)x + 1 $




$k = \lg( (n-1)x + 1 ) / \lg (n) $



If my approach is right could you tell me where should I go from here?



Edit : This clears my doubt.



Edit 2: My question is "why for any positive number $n$, $n^k + 1$ would always have $n+1$ as a factor for all positive odd values of $k$".


Answer



More generally, if $a,b\in\mathbb Z$, $n\in\mathbb Z^+$, then $a-b\mid a^n-b^n$.




It follows from $$a^n-b^n=(a-b)\left(a^{n-1}+a^{n-2}b+\cdots+b^{n-1}\right)$$



Edit: Since it seems you wanted to ask about the fact that for $n\in\mathbb Z$ with odd $k\in\mathbb Z^+$ we have $n+1\mid n^k+1$:



More generally, for $a,b\in\mathbb Z$ and odd $n\in\mathbb Z^+$ we have $$a^n+b^n=(a+b)\times$$



$$\times \left(a^{n-1}-a^{n-2}b+a^{n-3}b^2-\cdots-ab^{n-2}+b^{n-1}\right)$$


Saturday 17 March 2018

real analysis - Open maps which are not continuous

What is an example of an open map $(0,1) \to \mathbb{R}$ which is not continuous? Is it even possible for one to exist? What about in higher dimensions? The simplest example I've been able to think of is the map $e^{1/z}$ from $\mathbb{C}$ to $\mathbb{C}$ (filled in to be $0$ at $0$). There must be a simpler example, using the usual Euclidean topology, right?

elementary number theory - Does this follow from congruence?



Let $a$ and $b$ be two distinct prime numbers, and let $x$ and $y$ be integers. Is the following true?



($x \equiv y \mod a$) and ($x \equiv y \mod b$). So, $a|(x-y)$ and $b|(x-y)$. This means $x-y=ab\phi$ with $\phi \in \mathbb{Z}$.



Can I state the last part or do I need to prove that somehow? It seems logical enough to me but it might be wrong. Your help is appreciated.



Answer



Let $x-y = am = bn$, $m, n\in \mathbb Z$.



$$am = bn$$



Since $a$ and $b$ are coprime and $a \mid bn$, $a\mid n$. Let $n = a\phi$; then $x - y = bn = ab\phi$.


probability - Determine the expected time until the first arrival.



Two people agree to meet at a specified place between 3:00

P.M. and 4:00 P.M. Suppose that you measure time to the nearest
minute relative to 3:00 P.M. so that, for instance, time 40
represents 3:40 P.M. Further suppose that each person arrives
according to the discrete uniform distribution on {0, 1, ..., 60}
and that the two arrival times are independent. Determine the
expected time until the first arrival.






Here I've tried setting :

$X_i$=person i's arrival time with i=1,2



$$P_X(x_i)= \begin{cases}
1/61, & \text{if $x_i$ $\in[0,1,2,3...60]$} \\
0, & \text{otherwise}
\end{cases} $$



I define $Z=\min\{X_1,X_2\}$.



$$E(z)=\sum_{z=0}^{60}zP_Z(z)=\sum_{z=0}^{60}z[P(x_1=z,x_2>z)+P(x_1>z,x_2=z)+P(x_1=z,x_2=z)]$$

And that is where I'm kind of stuck because
I've tried rewriting $P(x_1=z,x_2>z)$ as $P(x_2>z)P(x_1=z|x_2>z)$ and drawing a little pmf table, but still no luck.



If anybody by any chance knows a much easier way to solve this problem I would be happy to hear from them.


Answer



A reusable trick . . .


Since the values of $z$ are nonnegative integers, it follows that
$$E(z) = p_1 + p_2 + p_3 + \cdots$$
where $p_k = P(z \ge k)$.


Hence we get
$$E(z) = \sum_{k=1}^{60} \left(\frac{61-k}{61}\right)^2 = \frac{1210}{61}\approx 19.84$$


To explain the trick . . .


For each positive integer $k$, let $q_k=P(z=k)$.


Then we have
\begin{align*}

p_1 &= q_1 + q_2 + q_3 +\cdots\\[4pt]
p_2 &= \phantom{q_1 +\; }q_2 + q_3 +\cdots\\[4pt]
p_3 &= \phantom{q_1 + q_2 + \;}q_3 + \cdots\\[4pt]
\vdots\\[4pt]
\end{align*}
hence, summing the columns, we get
$$
p_1 + p_2 + p_3 + \cdots = 1q_1 + 2q_2 + 3q_3 + \cdots
\qquad\qquad\;\;\;\;\;
$$
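Both the tail-sum trick and a direct computation over all $61^2$ pairs give the same value (a quick Python check):

```python
# E[min(X1, X2)] for independent uniforms on {0, ..., 60}.
tail = sum(((61 - k) / 61)**2 for k in range(1, 61))
direct = sum(min(x, y) for x in range(61) for y in range(61)) / 61**2
print(tail, direct, 1210 / 61)   # all ~19.8361
```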



real analysis - By choosing one word within each parentheses,four statements can be made.

The book I am using for my Advanced Calculus course is Introduction to Analysis by Arthur Mattuck.



By choosing one word within each parentheses, four statements can be made from the following. Label each true, or false (with counterexample).




If lim $a_n$ is (positive, non-negative), then for large n the individual terms $a_n$ are (positive, non-negative).



This is my rough proof for this question. I was wondering if anybody can look over it and see if I made a mistake or if there is a simpler way of doing this problem. I want to thank you ahead of time; it is greatly appreciated. So let's begin:



Proof:



[The proof was attached as an image in the original post.]

Friday 16 March 2018

integration - Error in $\int\limits_0^{\infty}dx\,\frac{\log^2 x}{a^2+x^2}$



This is my work for solving the improper integral$$I=\int\limits_0^{\infty}dx\,\frac {\log^2x}{x^2+a^2}$$I feel like I did everything right, but when I substitute values into $a$, it doesn't match up with Wolfram Alpha.






First substitute $x=\frac {a^2}u$ so$$\begin{align*}I & =-\int\limits_{\infty}^0du\,\frac {2\log^2a-\log^2 u}{x^2+a^2}\\ & =\int\limits_0^{\infty}du\,\frac {2\log^2a}{a^2+x^2}-\int\limits_0^{\infty}du\,\frac {\log^2 u}{a^2+x^2}\end{align*}$$Hence$$\begin{align*}I & =\int\limits_0^{\infty}du\,\frac {\log^2a}{a^2+x^2}\\ & =\frac {\log^2a}{a^2}\int\limits_0^{\pi/2}dt\,\frac {a\sec^2t}{1+\tan^2t}\\ & =\frac {\pi\log^2a}{2a}\end{align*}$$However, when $a=e$ Wolfram Alpha evaluates the integral numerically as$$I\approx 2.00369$$however the input that I arrived at evaluates numerically$$\frac {\pi}{2e}\approx0.5778$$Where did I go wrong? And how would you go about solving this integral?


Answer



Note
$$\log^2\left(\frac{a^2}{u}\right)\neq2\log^2a-\log^2u.$$
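For reference, the integral does have a standard closed form, $I(a)=\frac{\pi}{2a}\left(\log^2 a+\frac{\pi^2}{4}\right)$, which at $a=e$ gives $\approx 2.00369$, the Wolfram Alpha value quoted in the question. A numerical check (assuming scipy is available):

```python
# Compare quad's value of the integral with (pi/(2a)) * (log(a)^2 + pi^2/4).
from scipy.integrate import quad
import numpy as np

for a in [1.0, np.e, 4.0]:
    val, _ = quad(lambda x: np.log(x)**2 / (x**2 + a**2), 0, np.inf, limit=200)
    closed = (np.pi / (2 * a)) * (np.log(a)**2 + np.pi**2 / 4)
    print(a, val, closed)
```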



What is the closed form sum of this series?



What is the closed form sum of this series?



$(1 - \frac12)+(\frac13 - \frac14)(1 - \frac12 + \frac13)+(\frac15 - \frac16)(1 - \frac12 + \frac13 - \frac14 + \frac15)+(\frac17 - \frac18)(1 - \frac12 + \frac13 - \frac14 + \frac15 - \frac16 + \frac17)+...$




I have been working on this infinite series (and other similar series). Wolfram doesn't know and I don't have any other mathematical software so I worked out an answer but I'm not sure if it is correct. Therefore I would like to see what answer anyone else can get to compare.



Also I would like to see some alternative proofs even if my answer is right because my proof was geometric and involved a lot of drawing!


Answer



It looks like it's given by
$$
\frac{1}{2}\left[\left(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\ldots\right)^2 - \left(1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\ldots\right)\right]+\left(1+\frac{1}{9}+\frac{1}{25}+\ldots\right).
$$
That is, it contains just the off-diagonal terms from the square of the series for $\ln(1+x)$ (evaluated at $x=1$), plus the odd diagonal terms. This evaluates to
$$

\frac{1}{2}\left[\left(\ln 2\right)^2-\frac{\pi^2}{6}\right]+\frac{\pi^2}{8}=\frac{1}{2}\left(\ln 2\right)^2+\frac{\pi^2}{24},
$$
in agreement with your result.
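The closed form is straightforward to confirm numerically (a Python sketch, truncating the series):

```python
# Partial sums vs (log 2)^2 / 2 + pi^2 / 24 ~ 0.651460.
import math

target = math.log(2)**2 / 2 + math.pi**2 / 24
total, A = 0.0, 0.0
for m in range(1, 200001):
    A += 1 / (2*m - 1)          # A = 1 - 1/2 + ... + 1/(2m-1)
    total += (1/(2*m - 1) - 1/(2*m)) * A
    A -= 1 / (2*m)              # extend A through -1/(2m)
print(total, target)
```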


trigonometry - Use of de Moivre's Theorem and Euler's formula to solve an expression




Let $n$ be a natural number. Show that $$(\cos(2)+i\sin(2)+1)^n=2^{n}\cos^n(1)(\cos(n)+i\sin(n))$$



The use of Euler's formula and de Moivre's Theorem isn't succeeding. Does anyone have a hint for how I should proceed?



EDIT: Thank you all. There was a typo in the textbook, which is why I was struggling.


Answer



$$\begin{align*}
2^n\cos^n(1)(\cos(n)+i\sin(n)) &= 2^n\cos^n(1)(\cos(1)+i\sin(1))^n \\
&= (2\cos(1)\cos(1)+i\,2\cos(1)\sin(1))^n \\
&= (2\cos^2(1)+i\sin(2))^n \\
&= (\cos(2)+i\sin(2)+1)^n
\end{align*}$$



summation - Prove by induction that $\sum_{r=1}^n \cos((2r-1)\theta) = \frac{\sin(2n\theta)}{2\sin\theta}$ is true $\forall n \in \mathbb{Z^+}$




Prove by induction that $$\sum _{r=1}^n \cos((2r-1)\theta) = \frac{\sin(2n\theta)}{2\sin\theta}$$ is true $\forall \ n \in \mathbb{Z^+}$





So my attempt as this is as follows, I've started the inductive step, but I don't know where to continue from now, any help would be great.



If $n=1$



LHS = $\cos\theta$, RHS = $\frac{\sin(2\theta)}{2\sin\theta} = \cos\theta$ so $\therefore$ true when $n=1$.



Assume true for $n=k$,
$$\sum _{r=1}^k \cos((2r-1)\theta) = \frac{\sin(2k\theta)}{2\sin\theta}$$
If $n=k+1$
$$\sum _{r=1}^{k+1} \cos((2r-1)\theta) = \sum _{r=1}^k \cos((2r-1)\theta) + \cos((2k+1)\theta)$$

$$ = \frac{\sin(2k\theta)}{2\sin\theta} + \cos((2k+1)\theta)$$
$$= \frac{\sin(k\theta)\cos(k\theta)}{\sin\theta} + \cos(2k\theta)\cos\theta - \sin(2k\theta)\sin\theta$$
I'm not really sure my last step has led me anywhere, but I wasn't sure what else to do other than apply the compound angle formulae.


Answer



Starting from where you left off:



$$\frac{\sin (2k\theta)}{2\sin \theta} + \cos ((2k+1)\theta)$$



Let $x=2k\theta$. Then the expression is




$$\frac{\sin x}{2\sin\theta} + \cos (x+\theta)$$



Angle sum identity gives



$$\frac{\sin x}{2\sin\theta} + \cos x\cos\theta - \sin x\sin\theta$$



$$=\frac{\sin x + 2\cos x\cos\theta\sin\theta - 2\sin x\sin^2 \theta}{2\sin\theta}.$$



Factoring, we get
$$=\frac{(1-2\sin^2\theta)\sin x + (2\cos\theta\sin\theta)\cos x}{2\sin\theta}.$$




We use the double angle formula and angle sum formula in reverse:
$$=\frac{\cos 2\theta \sin x + \sin 2\theta \cos x}{2\sin \theta}$$



$$=\frac{\sin(x+2\theta)}{2\sin\theta}$$



$$=\frac{\sin(2k\theta + 2\theta)}{2\sin \theta}$$



$$=\frac{\sin(2(k+1)\theta)}{2\sin \theta}.$$




This completes the inductive step.


elementary number theory - Finding integer multipliers for linear combination's value $= 0$, using Extended Euclidean Algorithm



I have so far used the extended Euclidean algorithm (EEA) to find the integer multipliers in the linear combination that yields the $\gcd$, as follows:



Given two integers, not both zero, first find the $\gcd$, and then take the remainder at each step, and express the others in terms of that (by substituting), starting from the penultimate step in reverse. The last step has $r_{n+1}=0$.



$$
\begin{align}
a = bq_1 + r_1 &\iff r_1 = a - bq_1 \\
b = r_1q_2 + r_2 &\iff r_2 = b - r_1q_2 \\
r_1 = r_2q_3 + r_3 &\iff r_3 = r_1-r_2q_3 \\
&\;\;\vdots \\
r_{n-2} = r_{n-1}q_{n} + r_{n} &\iff r_n = r_{n-2}-r_{n-1}q_{n} \\
r_{n-1} = r_{n}q_{n+1} + r_{n+1} &
\end{align}
$$



It is easy to calculate, using the EEA, the integer multipliers for the dividend ($a$) and the divisor ($b$) for a given value of $\gcd = r_n$. But it is not occurring to me how to get integer multipliers when the value of the linear combination is $0$, as shown here in Problem $2$.



Also, I want to know the logic behind finding integer multipliers for such a specific value of the linear combination, when that value is a multiple of the $\gcd$, including $0$, as any integer divides $0$.



Answer



The way you pose the question is slightly unclear to me, but if I am reading it correctly, I think you want to continue the Euclidean Algorithm one step further (instead of stopping at the $\gcd$, stop at $0$) in order to arrive at a last line that looks like $$r_{n-1} = r_{n}q_{n+1} + 0 \iff 0 = r_{n-1}-r_{n}q_{n+1}$$


poisson distribution find exact sum



Using only properties of the Poisson distribution, find the exact value of this sum:




$\sum_{x=0}^{\infty}\frac{x^22^x}{x!}$



I believe $\lambda = 2$, so $(E(X))^2 = 4$, $V(X) = 2$, and $E(X^2) = 6$.



I don't know how to find the exact value of the sum.


Answer



Let $X$ have Poisson distribution with parameter $\lambda=2$. You showed that $E(X^2)=6$. Note that
$$6=E(X^2)=\sum_0^\infty x^2e^{-2}\frac{2^x}{x!}.$$
Multiply both sides by $e^2$. We conclude that our sum is $6e^2$.
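Checking the answer directly (plain Python; the series is truncated where the terms are negligible):

```python
# sum x^2 * 2^x / x! over x >= 0, vs 6 * e^2.
from math import factorial, exp

s = sum(x**2 * 2**x / factorial(x) for x in range(60))
print(s, 6 * exp(2))   # both ~44.3343
```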



combinatorics - Fermat's Combinatorial Identity: How to prove combinatorially?

$$\binom{r}{r} + \binom{r+1}{r} + \binom{r+2}{r} + \dotsb + \binom{n}{r} = \binom{n+1}{r+1}$$




I don't have much experience with combinatorial proofs, so I'm grateful for all the hints.



(Presumptive) Source: Theoretical Exercise 1.11, P18, A First Course in Pr, 8th Ed, by S Ross

Thursday 15 March 2018

sequences and series - Value of Riemann zeta function at $-1$

This claim is false: $\sum_{n=1}^{\infty}n=\sum_{n=1}^{\infty}n^{-(-1)}= \zeta(-1)=-1/12$.



The error is that we should instead write



$\sum_{n=1}^{\infty}n=\sum_{n=1}^{\infty}(1/n ^1)^{-1}=(0)^{-1}$.



Am I correct? It's difficult to accept that an infinite sum like that doesn't diverge, and that a sum of positive numbers can give a negative number.

complex numbers - Find smallest positive integer k such that $z* z^2*z^3*...*z^k $ is real

The question I have been given is
Given that $z=2e^{i\frac{\pi}{7}}$, find the smallest positive integer of $k$ such that $z\times{z^2}\times{z^3}\times{...}\times{z^k}$ is real, and state the value of $|z\times{z^2}\times{z^3}\times{...}\times{z^k}|$ in this case.



A previous part of the question had me show that for any complex number $z=re^{i\theta}$, $z\times{z^2}\times{z^3}\times{...}\times{z^k}=(re^{i\theta})^{\frac{k(k+1)}{2}}$, which I achieved using de Moivre's Theorem




Here's what I've tried doing;
Using Euler's Identity, I expanded the product to



$\cos{\left(\frac{k(k+1)\frac{\pi}{7}}{2}\right)}+i\sin{\left(\frac{k(k+1)\frac{\pi}{7}}{2}\right)}$



For a number to be real, I know that it's imaginary component must equal zero, so I figured that I would have to find the smallest positive integer of $k$ that satisfies the equation



$$\sin{\left(\frac{k(k+1)\frac{\pi}{7}}{2}\right)}=0$$



From here, I tried the following;
$$\therefore \frac{k(k+1)\frac{\pi}{7}}{2}=\arcsin{0}=0+a\pi\qquad a\in\mathbb{Z} $$

$$\therefore k(k+1)=\frac{2a\pi}{\frac{\pi}{7}}$$
$$\therefore k^2+k=14a$$
$$\therefore k^2+k-14a=0$$
$$\therefore k=\frac{-1\pm\sqrt{1+56a}}{2}$$
I'm not sure where to go from here. If $a$ were a defined constant, I could easily find $k$; however, it isn't, and I'm trying to find the smallest positive integer. According to the textbook, the solution is $k=6$, giving a modulus of $2^{21}$, but I do not understand how they came to this answer. I think I've gone about this the wrong way and there is probably a simpler method.



Edit: Fixed my dumb $\arcsin{0}$ mistake..
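A simpler route avoids the quadratic formula entirely: the product is real iff $\frac{k(k+1)}{2}\cdot\frac{\pi}{7}$ is a multiple of $\pi$, i.e. iff $7 \mid \frac{k(k+1)}{2}$, and one can just scan for the smallest such $k$ (a Python sketch):

```python
# Smallest k with 7 | k(k+1)/2, then the modulus r^(k(k+1)/2) with r = 2.
k = 1
while (k * (k + 1) // 2) % 7 != 0:
    k += 1
print(k, 2 ** (k * (k + 1) // 2))   # 6 2097152, i.e. k = 6 and 2^21
```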

Wednesday 14 March 2018

summation - Differences In Explanations of Quadratic Functions/Forms



My textbook explains quadratic functions as follows:




Quadratic functions are functions $g: \mathbb{R^n} \to \mathbb{R}$ that have the form



$$g(h_1, \dots h_n) = \sum_{i, j = 1}^n a_{ij} h_i h_j$$




for an $n \times n$ matrix $[a_{ij}]$.




Wikipedia defines quadratic functions as follows:




$$q(x_1, \dots, x_n) = \sum_{i = 1}^n \sum_{j = 1}^n a_{ij} x_i x_j$$





I have some questions regarding these definitions:




  1. There are two forms of summation notation used: $\sum_{i, j = 1}^n$ and $\sum_{i = 1}^n \sum_{j = 1}^n$. Now, as I understand it, these are not necessarily equivalent notations, since, with $\sum_{i, j = 1}^n$, we always have $i= j$, whereas with $\sum_{i = 1}^n \sum_{j = 1}^n$, we can have $i \not= j$, since one summation, say, $\sum_{j = 1}^n$, is summed over while the other, $\sum_{i = 1}^n$, is held constant at $i = c \in [1, n]$, and only incremented by one once the summation over all values of $j \in [1, n]$ is complete. It seems to me like both of these can't be correct, so what's going on here? Am I misunderstanding something? Since $[a_{ij}]$ is an $n \times n$ matrix, it seems to me like the double-summation $\sum_{i = 1}^n \sum_{j = 1}^n$ is the one we would be looking for?


  2. How do these above definitions of quadratic functions/forms relate to the commonly-known quadratic function/equation $f(x) = ax^2 + bx + c$?




I would appreciate it if people could please take the time to clarify this.


Answer



A quadratic form on a vector space should be thought of as a homogeneous quadratic function. Here homogeneous means that every monomial must have the same degree, and quadratic means that it must be degree 2. For example $f(x, y, z) = xy + 3yz - x^2$ is a homogeneous quadratic in three variables, but $g(x, y, z) = x^2 + y + y^2 - 7$ is not, because of the degree-1 term $y$, and the degree-0 term $-7$. Hopefully you can see how the definition given on the wikipedia page allows all functions like $f$, while not including ones like $g$.




As for your second question, the only quadratic forms in one variable are $f(x) = ax^2$ for some $a \in \mathbb{R}$. So there is perhaps not so much relation to quadratic polynomials in a single variable.



If you want a classic example of a quadratic form, the norm squared on a vector space is a quadratic form:



$$ f(x_1, \ldots, x_n) = \lvert (x_1, \ldots, x_n) \rvert^2 = x_1^2 + \cdots + x_n^2$$


real analysis - Modifications of Weierstrass's continuous, nowhere differentiable functions



Recalling how nowhere continuous functions such as the Dirichlet function can sometimes be modified on a $\lambda$-null set of points (in this instance, a countable set) to become everywhere continuous, I was wondering whether a Weierstrass nowhere differentiable function can be modified on a $\lambda$-null set of points to become piecewise (or even everywhere) differentiable. If not, is there an example of such a nowhere differentiable function that remains nowhere differentiable irrespective of modifications on $\lambda$-null sets? And such nowhere continuous functions?



($\lambda$ denotes Lebesgue measure)


Answer



You can't fix a continuous nowhere differentiable function by redefining it on a set of measure zero.




Claim. Suppose $f$ is continuous and $f'(a)$ does not exist. If $f=g$ almost everywhere, then $g'(a)$ does not exist.



Proof. Suppose $g'(a)$ exists. Then $g(a)=f(a)$; otherwise $g$ would not even be continuous at $a$. Subtracting a linear function from both $f,g$ we can make $g'(a)=0$. Since it's not true that $f'(a)=0$, there is $c>0$ such that the set $$U = \{x: |f(x)-f(a)|> c|x-a|\}$$
has $a$ as a limit point. Since $f$ is continuous, $U$ is open. Therefore the intersection of $U$ with $(a-r,a+r)$ is a nonempty open set, and so has positive measure, for every $r>0$. Let
$$ V=\{x : |g(x)-g(a)|> c|x-a|\}$$
Since
$$\lambda(V\cap (a-r,a+r)) = \lambda(U\cap (a-r,a+r)) >0$$
the intersection $V\cap (a-r,a+r)$ is nonempty. It follows that $a$ is a limit point of $V$, which contradicts $g'(a)=0$. $\Box$








And such nowhere continuous functions?




Yes, there are nowhere continuous functions that remain nowhere continuous, no matter how they are redefined on a null set. The characteristic function of a set whose intersection with every interval has positive measure, but never full measure, is an example.


elementary set theory - Explicit bijection between $\Bbb R$ and permutations of $\Bbb N$



The set of permutations of the natural numbers has the cardinality of the continuum. I've got injections in both directions, no problem. The Schröder–Bernstein theorem tells us that this implies the existence of a bijection. I'm wondering if it's possible to construct one explicitly.



(In what follows, I'm using the convention that $0\not\in\Bbb N$. This obviously doesn't change anything, cardinality-wise.)







For $\Bbb R\to S_\Bbb N$, we note that every real number is the limit of some rearrangement of the alternating harmonic series. If $\alpha\in\Bbb R$, we start with positive terms: $1+\frac13+\frac15+\cdots$, until we obtain a partial sum greater than $\alpha$. We then add negative terms until our partial sum is less than $\alpha$, then we switch back to the positive terms, starting with the first unused one, etc.



In this way, we construct a series, and if we take the absolute values of the reciprocals of the terms of it, we have a permutation of $\Bbb N$. This mapping is not onto, because many permutations of the series converge to $\alpha$ - all we have to do is "overshoot" at some point, and then continue converging to $\alpha$ as usual. (More trivially, we only get permutations with $\sigma(1)=1$ using this particular construction.)






In the other direction, if we have a permutation $\sigma\in S_\Bbb N$, we can write the continued fraction $[\sigma(1);\sigma(2),\sigma(3),\ldots]$. This actually injects $S_\Bbb N$ into $\Bbb R\setminus\Bbb Q$, because non-terminating continued fractions represent irrational numbers. Thus, this mapping is not onto; it is not even onto the irrationals. For example, no permutation of $\Bbb N$ maps in this way to any quadratic irrational, or to $e$, or to any other irrational whose c.f. expansion has repeated terms.







So, those injections are fun and all, but finding an explicit bijection seems hard. Clearly, a bijection between $S_\Bbb N$ and any interval would suffice, because there are standard, elementary ways to construct bijections between $\Bbb R$ and any interval.



I have Googled in vain for a solution, and I don't believe this question is a duplicate. I will be happy for a hint or a full solution, or an explanation of why such an explicit construction is impossible.



Full disclosure: someone online claimed that this is "not hard", but refused to explain how, other than mentioning the Cauchy sequence construction of the reals. I don't see how that's useful, and I think he's mistaken or bluffing. I'm not too proud to admit I'm stumped. :/


Answer



It's not exactly pretty, but we can do something like:





  1. Get rid of the rationals by mapping $q+n\sqrt2$ to $q+(n+1)\sqrt2$ for every $q\in\mathbb Q$ and $n\in\mathbb N_0$

  2. Map the irrationals to the irrationals in $(0,1)$ by standard techniques, e.g. by $$ x\mapsto\begin{cases} 1/(2+x) & \text{for }x>0 \\ 1-1/(2-x) & \text{for }x<0 \end{cases} $$

  3. Write each irrational in $(0,1)$ as a continued fraction. Ignoring the leading $0$ in each continued fraction, they become all the infinite sequences of positive integers.

  4. Now convert each infinite sequence of positive integers to a bijection between the odd naturals and the even naturals by the "back-and-forth" technique (a code sketch of this step follows the list). This construction proceeds in steps, by alternating between


    • Take a number $n$ from the sequence and map the first odd natural that has not yet been paired, with the $n$th even natural that has not yet been paired.

    • Take a number $m$ from the sequence and map the $m$th odd natural that has not yet been paired, with the first even natural that has not yet been paired.


  5. Finally convert each bijection $f$ from odds and evens (which was only about odds and evens in order to make the previous step easier to describe) to a permutation of $\mathbb N$, namely $n\mapsto f(2n+1)/2$.
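
Here is the sketch of step 4 promised above (my own finite illustration: the helper name and the truncation to a finite prefix of the sequence are mine, since the real construction consumes an infinite sequence):

    def pair_prefix(seq):
        # Back-and-forth pairing driven by a finite prefix of the sequence.
        odds = list(range(1, 100, 2))    # odd naturals not yet paired
        evens = list(range(2, 100, 2))   # even naturals not yet paired
        pairs = {}
        for step, n in enumerate(seq):
            if step % 2 == 0:
                # first unpaired odd -> n-th unpaired even
                pairs[odds.pop(0)] = evens.pop(n - 1)
            else:
                # n-th unpaired odd -> first unpaired even
                pairs[odds.pop(n - 1)] = evens.pop(0)
        return pairs

    print(pair_prefix([2, 1, 3, 1]))   # {1: 4, 3: 2, 5: 10, 7: 6}

In the limit every odd and every even number gets paired (each pool is drained from the front on alternate steps), so the result really is a bijection, and the sequence can be read back off the bijection, which is what makes this step reversible.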



real analysis - Proving uniform continuity of function on a half-open interval whose derivative has a limit at the boundary




I'm given a continuous function $f : (a,b] \rightarrow \mathbb{R}$ for which




  • $f'(x)$ exists on $(a,b)$ and

  • $\lim_{x \to a^+} f'(x)$ exists



and asked to prove that $f$ is uniformly continuous.




I am having a bit of trouble with how to show this formally, though I understand the essence of the answer:




  • First off, the proof would be immediate if $f$ was defined over the closed interval $[a,b]$ since continuous functions are uniformly continuous on closed domains.


  • Second, because the limit of $f'(x)$ exists as $x \to a^+$, $f$ must be Lipschitz on its domain ($f'$ must be bounded since it is bounded near $a$ and bounded everywhere to the right of $a$ because the domain is closed in that direction).




This is all very well and good for an intuitive answer, but it seems a bit hand-wavy. How do I give a real $\epsilon, \delta$ style argument here? Or is that overkill for this problem?



Thanks.



Answer



Since $f'$ has a one-sided limit at $a$, it is bounded on $(a,c)$ for some $c>a$. So $f$ is Lipschitz (and hence uniformly continuous) on $(a,c]$. On the other hand, $f$ is also uniformly continuous on $[c,b]$ by compactness.



Now you can show the following general fact: if a function is uniformly continuous on $(a,c]$ and also on $[c,b]$, then it is uniformly continuous on $(a,b]$. (Given $\epsilon$, apply both hypotheses with $\epsilon/2$ and take the smaller $\delta$; for points $x,y$ on opposite sides of $c$ with $|x-y|<\delta$, compare each to $c$ and use the triangle inequality.) Using this fact and the above observations the proof is finished.






PS: This is assuming the one-sided limit of $f'$ exists and is finite; otherwise there are counter-examples.



PPS: $f'$ may still be unbounded on $(a,b]$, you can make counter-examples by adding countably-many disjoint tiny blobs with higher and higher derivatives.



Tuesday 13 March 2018

elementary number theory - Chinese Remainder Theorem/Simultaneous congruences



Find an integer N, 0 ≤ N < 105, such that




N ≡ 2 mod 3,



N ≡ 1 mod 5, and



N ≡ 4 mod 7.



What I have done:



So I have been able to start by splitting up the summation of $x$ into 3 sections:




$ x = 5*7 + 3*7 + 3*5 $



with the first multiplication corresponding with the mod 3 equation, the second multiplication corresponding with the mod 5 equation and the third multiplication corresponding with the mod 7 equation.



Therefore:



$x = 35 + 21 +15$



However, I know that this isn't complete. But I am not exactly sure on how to proceed.




Any help?


Answer



$N \equiv 2 \mod 3$ so $N = 2 + 3a$.



$N \equiv 1 \mod 5$ so $N =1 + 5b$



So $2+3a = 1 + 5b$ so $5b - 3a = 1$. One solution is $b = 2$ and $a = 3$



So $N \equiv 2 + 3*3 = 1 + 2*5 = 11 \mod 3*5$.




So $N = 11 + 15c$.



$N \equiv 4 \mod 7$ so $N = 4 + 7d$.



So $11 + 15c = 4 + 7d$ so $15c - 7d = -7$. $c =0$ and $d = 1$ is a solution.



So $N = 4 + 7 = 11$, i.e. $N \equiv 11 \mod 3*5*7 = 105$.



And, indeed, $11 \equiv 2 \mod 3$ and $11 \equiv 1 \mod 5$ and $11\equiv 4 \mod 7$.
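
This final check is also easy to mechanize; a brute-force sweep (a sketch of my own, not part of the original answer) confirms that $11$ is the only solution in range:

    solutions = [N for N in range(105)
                 if N % 3 == 2 and N % 5 == 1 and N % 7 == 4]
    print(solutions)   # [11], unique as the Chinese Remainder Theorem promises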







Trying to follow your partition reasoning.



$5*7 = 35 \equiv 2 \mod 3$ so that satisfies.



$3*7 = 21 \equiv 1 \mod 5$ so that satisfies.



$3*5 = 15 \equiv 1 \not \equiv 4 \mod 7$ so that does not satisfy.




But $4*3*5 \equiv 4 \mod 7$ so that does.



So $5*7 + 3*7 + 60 = 116$ solves all three. But $116 > 105$. Any $k \equiv 116 \mod 105$ also works, so $116 - 105 = 11$ will do.



.... or ... when we have $3*5 \equiv 1 \mod 7$, we could have figured $4 \equiv -3 \mod 7$, so $-3*3*5 \equiv 4 \mod 7$.



So $N = 5*7 + 3*7 - 3*3*5 = 11$. (by taking a negative we know we won't get a number too large).



Actually, I had never done the "partitioning" before.




It works well. I like it.


combinatorics - Prove $\sum\limits_{i=0}^n\binom{i+k-1}{k-1}=\binom{n+k}{k}$ (a.k.a. Hockey-Stick Identity)




Let $n$ be a nonnegative integer, and $k$ a positive integer. Could someone explain to me why the identity
$$
\sum_{i=0}^n\binom{i+k-1}{k-1}=\binom{n+k}{k}
$$
holds?


Answer



One way to interpret this identity is to consider the number of ways to choose $k$ integers from the set $\{1,2,3,\cdots,n+k\}$.



There are $\binom{n+k}{k}$ ways to do this, and we can also count the number of possibilities by considering the largest integer chosen. This can vary from $k$ up to $n+k$, and if the largest integer chosen is $l$, then there are $\binom{l-1}{k-1}$ ways to choose the remaining $k-1$ integers.



Therefore $\displaystyle\sum_{l=k}^{n+k}\binom{l-1}{k-1}=\binom{n+k}{k}$, and letting $i=l-k$ gives $\displaystyle\sum_{i=0}^{n}\binom{i+k-1}{k-1}=\binom{n+k}{k}$.
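
As a quick concrete check of the identity (a sketch of mine, not part of the original answer), for one choice of $n$ and $k$:

    from math import comb

    n, k = 7, 4
    lhs = sum(comb(i + k - 1, k - 1) for i in range(n + 1))
    print(lhs, comb(n + k, k))   # 330 330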


Monday 12 March 2018

fractions - Is the product of two numbers both less than one less than one



I'm bad at mathematics, and I wanted to know something.



Say there are two numbers $a$ and $b$ where $a, b \in \Bbb R$, $-1 < a < 1$, and $-1 < b < 1$.
Is it necessarily true that $a \times b < 1$?




Edit: I was in a hurry and didn't notice the big mistake I made


Answer



Yes, because $|ab|=|a||b|<1\cdot 1 = 1$, hence $-1 < ab < 1$; in particular, $ab < 1$.

Prove that a polynomial of odd degree in $\Bbb R[x]$ with no multiple roots must have an odd number of real roots.

The coefficients of the polynomial are in the ring of real numbers.

Prove that a polynomial of odd degree in $\Bbb R[x]$ with no multiple roots must have an odd number of real roots. I hate to ask people to solve things, so I was wondering if you could glance at my attempt and give a suggestion. There is a theorem in my textbook that states that every polynomial of odd degree has a real root.



Let $f(x)$ be a polynomial of odd degree. Since $f(x)$ has odd degree, it has a real root, and thus can be factored as follows:
Let $a$ be an arbitrary root of $f(x)$, with $a \in \Bbb R$. Then $f(x) = (x-a)h(x)$ where $h(x) \in \Bbb R[x]$. This implies that $h(x)$ must have even degree, because $\deg(f(x))$ is odd and $\deg(x-a)$ is odd. At this point can I assume that, since $h(x)$ must have even degree, it must have an even number of real roots and therefore factors? Because, to the contrary, if it had an odd number of factors or roots, our initial assumption that $f(x)$ has odd degree would be violated?

Sunday 11 March 2018

probability - Why does $\sum_{n=2}^\infty \frac{1}{n(\log(n))^{1+2\epsilon}}$ converge?



I am looking through examples on the convergence of random series, and in one of the proofs the following result is used: if $\epsilon > 0$ then



$$\sum_{n=2}^\infty \frac{1}{n(\log(n))^{1+2\epsilon}}<\infty$$



No explanation is given, hence my confusion. I know that if $p>1$ then $\sum_{n} \frac{1}{n^p} < \infty$, but as this involves $\log(n)$, I do not understand how they reach this conclusion. Especially since $\sum_n \frac{1}{n\log(n)}=\infty$.



Could someone explain how I would show this converges?


Answer




You may use the Cauchy condensation test:



$$\sum_{n=2}^\infty \frac{1}{n(\log(n))^{1+2\epsilon}} \sim \sum_{n=1}^\infty \frac{2^n}{2^n(\log(2^n))^{1+2\epsilon}} = \sum_{n=1}^\infty \frac{1}{(n\log(2))^{1+2\epsilon}}< \infty$$
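
Alternatively, the integral test gives the same conclusion (a standard alternative route, not the one used above): the integrand is positive and decreasing for large $x$, and

$$\int_2^\infty \frac{dx}{x(\log x)^{1+2\epsilon}} = \left[-\frac{1}{2\epsilon(\log x)^{2\epsilon}}\right]_2^\infty = \frac{1}{2\epsilon(\log 2)^{2\epsilon}} < \infty.$$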


algebra precalculus - Power summation of $n^3$ or higher











If I want to find the formula for $$\sum_{k=1}^n k^2$$
I would do the following: $$(n+1)^2 = a(n+1)^3 + b(n+1)^2 + c(n+1) - an^3 - bn^2 - cn$$
After calculating, I would match the coefficients.
What equation should I use if I want to calculate the coefficients of the formula that gives the sum $$\sum_{k=1}^n k^3$$ or higher?




EDIT: Please do not refer me to other places. I'm not trying to blame anyone or to be rude, but I find your sources too complicated for my level, and I'm unable to learn from them.



If you can, please explain how I am supposed to find an equation that matches my needs: not the equation itself, only the technique I should be working with.



Regards, Guy


Answer



I think I know what strategy was used for the proof you are referring to for sum of squares. The same idea works for sum of cubes, but is more painful. We try to find numbers $a$, $b$, $c$, and $d$ such that
$$(n+1)^3=[a(n+1)^4 +b(n+1)^3+c(n+1)^2+d(n+1)]-[an^4+bn^3+cn^2+dn].$$
To find these numbers, there are some shortcuts. But ultimately you will probably need to compute $(n+1)^2$ (familiar), $(n+1)^3=n^3+3n^2+3n+1$, and $(n+1)^4=n^4+4n^3+6n^2+4n+1$.




We get a system of $4$ equations in $4$ unknowns, but the solution turns out to be surprisingly uncomplicated. To give a start, on the left the coefficient of $n^3$ is $1$. On the right it is $4a$, so $a=\frac{1}{4}$. As a further check when you do the work, it should turn out that $d=0$.



After you have found the remaining constants $b$ and $c$, do the "collapsing" or "telescoping" argument that you seem to have seen for $\sum_{k=1}^n k^2$.
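
For reference, carrying the computation through gives $a=\frac{1}{4}$, $b=\frac{1}{2}$, $c=\frac{1}{4}$, $d=0$ (confirming that $d=0$), and the telescoping then yields the well-known closed form
$$\sum_{k=1}^n k^3 = \frac{n^2(n+1)^2}{4} = \left(\frac{n(n+1)}{2}\right)^2,$$
which you can use to check your work.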


elementary number theory - the exponent of the highest power of p dividing n!

The formula for the exponent of the highest power of a prime $p$ dividing $n!$ is $\sum_{k \ge 1} \left\lfloor \frac{n}{p^k} \right\rfloor$, but the question is $n=1000!$ (really, it has the factorial) and $p=5$.



When I used Wolfram Alpha, I panicked because the number has $2,567$ decimal digits.



I think if I write this number I'd need paper all the way to the Amazon.



Perhaps I misunderstand the formula?
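
The formula is right (with floor brackets), and the number really is that big: the sum is roughly $n/(p-1) = 1000!/4$, which has about as many digits as $1000!$ itself. A minimal sketch of the formula in code (my own illustration), run on the ordinary-sized input $n=1000$ rather than $n=1000!$:

    def legendre(n, p):
        """Exponent of the highest power of the prime p dividing n!."""
        e = 0
        while n:
            n //= p        # successive values floor(n/p), floor(n/p^2), ...
            e += n
        return e

    print(legendre(1000, 5))   # 249, i.e. 5**249 divides 1000! but 5**250 does not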

Saturday 10 March 2018

number theory - Repeating patterns in sequence of binary bit counts

I was solving a problem on leetcode where you produce a list of bit counts for every number $i$ in the range $1 \le i \le n$. The bit count is simply the number of 1's in a number's binary representation (e.g. 5 is 101 in binary, meaning that its bit count is 2). (Link to the problem since I'm not sure I'm explaining it clearly)



Now that that's out of the way, I believe I found something quite interesting. I'm sure it's been discovered before, but my google-fu when it comes to mathematics is extremely lacklustre, so I haven't been able to find anything.




Let's start off with an example to explain this problem more simply. Say we are given $ n = 32 $; we'd find the bit count (the number of 1s in a number's binary representation) for each number from 1 up to 32. The bit count for the number 1 is 1 (0001 has one "1"). The bit count for the number 2 is also 1 (0010 has one "1"). The bit count for the number 3 is 2 (0011 has two "1"s). We do this for all the numbers up until 32 and we are left with
this.



Within this sequence of numbers there is an interesting pattern. We can see that between the 4th and 8th indices (3rd and 4th red "1"'s respectively) that there is a sequence which is repeated in subsequent numbers; [2, 2, 3]. Since this is the first sequence we've found, we can call it "1". To make this clearer I have highlighted them here



As we can see, this sequence is shown after the introduction of a new bit (as shown by the red "1"s). Also, if we look between the 8th and 16th indices, we can see that another repeating sequence has emerged; [2, 3, 3, 4]. Lets call this sequence "2" since its the second repeating sequence we've come across. To make this pattern and future patterns a bit clearer, I will be making $ n = 64 $. If we circle the repeating sequences we are left with this.



Let's continue the same naming convention for these sequences based off the order of their discovery: [3, 4, 4, 5] = "3" and [4, 5, 5, 6] = "4". There are two observations that can be made at this point:





  1. The amount of sequences double between subsequent red "1"s. E.g. between the 4th and 8th indices there is one sequence. Between the 8th and 16th indices there are two sequences. Between the 16th and 32nd there are four sequences and between the 32nd and 64th there are eight sequences. However, I'm sure this is easily explained and isn't all that interesting.

  2. Each sequence of length 4 follows the same pattern as the previous sequences; [n, n+1, n+1, n+2], where n is one greater than the first digit of the previous sequence. However, this is also probably easily explained and isn't the interesting thing.



To make things clearer, every digit that isn't in a sequence will be removed, which leaves us with this. As stated earlier, each sequence was given a name signifying the order of its discovery, these names and their corresponding sequences are as follows:




  • [2, 2, 3] = "1"

  • [2, 3, 3, 4] = "2"


  • [3, 4, 4, 5] = "3"

  • [4, 5, 5, 6] = "4"



If we substitute each sequence with its given name, we are left with
this colour coded sequence. If we compare this new sequence with the sequence from right at the start, we can see that it repeats the exact same pattern (starting from index 2, represented by the second red "1").



My question is, why is this happening and what is its significance?
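
For anyone who wants to regenerate the sequence, here is a minimal sketch (my own, not from the linked problem) using the standard recurrence $\text{count}(i) = \text{count}(\lfloor i/2 \rfloor) + (i \bmod 2)$. The recurrence is also a good lens on the self-similarity observed above: it implies the counts on $[2^j, 2^{j+1})$ are exactly the counts on $[0, 2^j)$ shifted up by one.

    n = 64
    counts = [0] * (n + 1)
    for i in range(1, n + 1):
        counts[i] = counts[i >> 1] + (i & 1)   # drop the low bit, then add it back
    print(counts[1:])   # 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, ...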

elementary number theory - Euclid proof explanation



Can anyone help me understand the following proof that if $p|ab$ then $p|a$ or $p|b$? This proof is on a separate question.




Suppose there were a counterexample, with $pa=bc$, $p$ a prime, but
neither $b$ nor $c$ divisible by $p$. Then there would be a counterexample
with $p$ as small as possible and, for that $p$, $b$ as small as possible.
Note that $b>1$, since otherwise we would have $pa=c$, which means $p$
divides $c$.




We first note that $b<p$, since otherwise $pa′=p(a−c)=(b−p)c=b′c$ would be
a smaller counterexample. But now $b>1$ implies $b$ is divisible by some
prime $q$, which means we have $q$ dividing $pa$ with $q≤b<p$. By the
minimality of $p$ as a counterexample, we conclude that $q$ divides $a$
(since it can't divide $p$). If we now write $a=a′q$ and $b=b′q$ and note
that $b′<b$ implies $p$ doesn't divide $b′$ either, we find that $pa′=b′c$
is a smaller counterexample, which is a contradiction. Thus there can
be no counterexample.





I am having trouble understanding how this proves anything. Especially this part:




$pa′=p(a−c)=(b−p)c=b′c$




What is the reasoning behind subtracting $c$ and $p$ from the factors? Would someone be willing to go through this proof step by step and explain why it works?



Question: Proof of Euclid's Lemma



Answer



These "direct" proofs of Euclid's Lemma all achieve descent via the division algorithm. The first step is to reduce to a smaller problem when $\,b> p\,$ by replacing it by a smaller $\,b'\equiv b\pmod{p},\,$ which doesn't alter the truth of the statement since $\,p\mid bc\iff p\mid b'c,\,$ and $\,(b',p) = (b,p) = 1$.



OP chooses $\,b' = b-p,\,$ but we could also choose $\,b' = b\bmod p < p\,$ as in the equivalent proof you posted a few days ago. Further when $1 < b < p$ we don't need prime factorizations for descent. More constructive is to replace $\,b\,$ by $\,p\bmod b = p - qb.\,$ Then the two descent steps amount to the following variant of the Euclidean algorithm when one argument is a prime $p$ and $\,p\nmid b$



$$\begin{align} &(b,p) = (b\bmod p,\,p)\ \ {\rm if}\ \ b > p\\[.3em]
&(b,p) = (p\bmod b,\,p)\ \ {\rm if}\ \ b < p\end{align}$$



This form of the proof essentially uses $\,p\mid bc\,\Rightarrow\, p\mid(b,p)c = c\,$ by $\,(b,p) = 1,\,$ while using the above two descent steps to calculate the gcd $(b,p) = 1.\,$ Here is a simple concrete example.




$$\begin{align}
&31\mid 38c\\
\iff\ &31\mid 7c\ \ \ {\rm by}\ \ \ 7 \,=\, 38\bmod 31\\
\iff\ &31\mid 3c\ \ \ {\rm by}\ \ \ 3 \,=\, 31\bmod 7\\
\iff\ &31\mid 1c\ \ \ {\rm by}\ \ \ 1 \,=\, 31\bmod 3
\end{align}\qquad$$
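
A minimal sketch of this descent in code (my own illustration, assuming as in the proof that $p$ is prime and $p\nmid b$):

    def descend(b, p):
        # Repeatedly apply the two reduction steps; since p is prime and
        # p does not divide b, each step keeps b nonzero and strictly
        # shrinks it, so the loop terminates with gcd(b, p) = 1.
        while b != 1:
            b = b % p if b > p else p % b
            print(b)
        return b

    descend(38, 31)   # prints 7, then 3, then 1: the worked example above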



Eliminating the (unneeded) contradictive form and viewing it positively leads to Gauss's algorithm for computing inverses and fractions $\!\bmod p$



See also this closely related proof.



elementary number theory - Why is it true that if $ax+by=d$ then $\gcd(a,b)$ divides $d$?

Can someone help me understand this statement:




If $ax+by=d$ then $\gcd(a,b)$ divides $d$.




Bezout's identity states that:





the greatest common divisor $d$ is the smallest positive integer that can be written as $ax + by$




However the definition of $\gcd(a, b)$ is the largest positive integer which divides both $a$ and $b$.



I am completely lost.
If anyone could provide some sort of layout to help me sort this out, I would be really happy.
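
For what it's worth, the quoted statement needs only the definition, not Bezout. A short worked chain: write $g=\gcd(a,b)$, so $g\mid a$ and $g\mid b$; then $g$ divides every integer combination of $a$ and $b$, and in particular
$$g \mid ax+by = d.$$
Bezout's identity is the complementary fact that the particular value $d=g$ is actually attained, and is the smallest positive value attainable.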

Friday 9 March 2018

integration - How to evaluate $\int \sin(x)\arcsin(x)\,dx$

I need to evaluate the following integral



$$\int \sin(x)\arcsin(x) \ dx$$



Can anyone please help me? Thanks.
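
A natural first step (my suggestion; it is not from the original post) is integration by parts with $u=\arcsin(x)$ and $dv=\sin(x)\,dx$:
$$\int \sin(x)\arcsin(x)\,dx = -\cos(x)\arcsin(x) + \int \frac{\cos(x)}{\sqrt{1-x^2}}\,dx,$$
though be aware that the remaining integral has no elementary closed form, so the final answer involves a series or special functions.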

functions - Proving that $C$ is a subset of $f^{-1}[f(C)]$



More homework help. Given a function $f:A \to B$, let $C$ be a subset of $A$ and let $D$ be a subset of $B$.




Prove that:



$C$ is a subset of $f^{-1}[f(C)]$



So I have to show that every element of $C$ is in the set $f^{-1}[f(C)]$



I know that $f(C)$ is the image of $C$ in $B$ and that $f^{-1}[f(C)]$ is the pre-image of $f(C)$ into $A$. Where I'm stuck is how to use all of this information to show/prove that $C$ is indeed a subset.



Do I start with an arbitrary element (hey, let's call it $x$) of $C$, and then show that $f^{-1}[f(x)]$ is $x$? I could use a little direction here... Thanks.


Answer




Since you want to show that $C\subseteq f^{-1}\big[f[C]\big]$, yes, you should start with an arbitrary $x\in C$ and try to show that $x\in f^{-1}\big[f[C]\big]$. You cannot reasonably hope to show that $f^{-1}\big[f[\{x\}]\big]=\{x\}$, however: there's no reason to think that $f$ is $1$-$1$, so there may be many points in $A$ that $f$ sends to the place it sends $x$.



Let $x\in C$ be arbitrary. For convenience let $E=f[C]\subseteq B$. Now what elements of $A$ belong to the set $f^{-1}\big[f[C]\big]=f^{-1}[E]$? By definition $f^{-1}[E]=\{a\in A:f(a)\in E\}$. Is it true that $f(x)\in E$? If so, $x\in f^{-1}[E]=f^{-1}\big[f[C]\big]$, and you’ll have shown that $C\subseteq f^{-1}\big[f[C]\big]$.
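
A concrete example showing the inclusion can be strict (my own illustration): take $f:\mathbb R\to\mathbb R$ with $f(x)=x^2$ and $C=\{1\}$. Then $f[C]=\{1\}$ while $f^{-1}\big[f[C]\big]=\{-1,1\}\supsetneq C$; the preimage picks up the extra point $-1$ precisely because $f$ is not $1$-$1$.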


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...