Monday 31 July 2017

real analysis - Showing that a sequence is monotone and thus convergent



Here's the original question:




Let $(a_n)$ be bounded. Assume that $a_{n+1} \ge a_{n} - 2^{-n}$. Show
that $(a_n)$ is convergent.





Okay, I know that if I can show that the sequence is monotone, then I can conclude that it is convergent. But I am not sure how to show that it is monotone.



I know that
$$a_n \le a_{n+1} + \frac{1}{2^n} < a_{n+1} + \frac{1}{n}$$



It looks to me as if it is monotonically increasing, but I'm not quite sure how to prove my claim. Any hints would be appreciated.


Answer



For all $n$, let
$$b_n = a_n - 2^{1-n}.$$

Note that
$$b_{n+1} \ge b_n \iff a_{n+1} - 2^{-n} \ge a_n - 2^{1-n} \iff a_{n+1} \ge a_n - 2^{-n},$$
which is true. Note also that $b_n$ is the sum of the bounded sequence $a_n$ and the convergent sequence $-2^{1 - n}$, and hence is bounded as well. Thus, $b_n$ converges by the monotone convergence theorem, and hence, by the algebra of limits, so does $a_n$.
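
As a quick numerical sanity check (a sketch in Python; the sequence $a_n$ below is a made-up example satisfying the hypothesis, not part of the problem), one can verify that $b_n = a_n - 2^{1-n}$ is nondecreasing:

import random

a = [0.0]
for n in range(50):
    # enforce a_{n+1} >= a_n - 2^{-n}; the bounded increments keep (a_n) bounded
    a.append(a[-1] - 2.0**(-n) + random.uniform(0, 2) * 2.0**(-n))

b = [a[n] - 2.0**(1 - n) for n in range(len(a))]
# True (up to floating-point rounding): b_n is monotone nondecreasing
print(all(b[n + 1] + 1e-12 >= b[n] for n in range(len(b) - 1)))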


How do I find triangular numbers $a$ and $b$ such that $a+b$ and $a-b$ are also triangular?



As a challenge problem in my number theory course I was asked to find the 2 pairs of triangular numbers $a$ and $b$ smaller than 1000 for which $a+b$ and $a-b$ are also triangular.



For example 15 and 21. I found the pairs, but I was wondering if there is a formula/algorithm for finding them. I started using algebra on the formulas for triangular numbers, but got stuck without a real result. Is there a formula, or how can I derive one?


Answer




Triangle numbers are of the form $(n^2+n)/2$. Look for solutions to $m^2+n^2+m+n=x^2+x$ and $m^2-n^2+m-n=y^2+y$. This is the python code I used:



l = []
for n in range(1, 1000):
    l.append(n + n**2)          # store 2*T_n, where T_n = n(n+1)/2 is triangular

for a in l:
    for b in l:
        if a != b:
            if l.count(a + b) != 0:
                if l.count(a - b) != 0:
                    print((a + b) / 2, "hey", (a - b) / 2, "dee", a / 2, "taa", b / 2)



Solutions <500000
36.0 hey 6.0 dee 21.0 taa 15.0



276.0 hey 66.0 dee 171.0 taa 105.0



1081.0 hey 325.0 dee 703.0 taa 378.0



1770.0 hey 210.0 dee 990.0 taa 780.0




5886.0 hey 1596.0 dee 3741.0 taa 2145.0



5671.0 hey 2701.0 dee 4186.0 taa 1485.0



12246.0 hey 1326.0 dee 6786.0 taa 5460.0



16653.0 hey 903.0 dee 8778.0 taa 7875.0



60031.0 hey 1225.0 dee 30628.0 taa 29403.0




60726.0 hey 16836.0 dee 38781.0 taa 21945.0



147153.0 hey 6903.0 dee 77028.0 taa 70125.0



293761.0 hey 82621.0 dee 188191.0 taa 105570.0



264628.0 hey 141778.0 dee 203203.0 taa 61425.0



257403.0 hey 181503.0 dee 219453.0 taa 37950.0




477753.0 hey 354903.0 dee 416328.0 taa 61425.0
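
If the search is pushed to larger bounds, the quadratic-cost `l.count(...)` lookups above dominate; a set gives constant-time membership tests. A minimal variant of the same search (a sketch; the bound 1000 is kept from the original):

l = [n + n**2 for n in range(1, 1000)]   # 2*T_n for each n
s = set(l)
for a in l:
    for b in l:
        if a != b and (a + b) in s and (a - b) in s:
            print((a + b) / 2, (a - b) / 2, a / 2, b / 2)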


calculus - How does $\frac{1}{\cos y}$ = $\frac{1}{\sqrt{1-x^2}}$

I was going through a textbook of mine and I noticed that in the proof of the derivative of

$y = \sin^{-1}(x)$,



i.e. the proof that $$\frac{d}{dx} \sin^{-1}(x) = \frac{1}{\sqrt{1-x^2}},$$



there's a point where they say $$\frac{1}{\cos y} = \frac{1}{\sqrt{1-x^2}}$$



and I'm not sure how this makes sense or works out. I've looked for proofs, tried implicit differentiation, and tried graphing it, but couldn't find anything. Can someone please explain?
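
For reference, the step follows from implicit differentiation together with the range of $\arcsin$; a sketch of the standard argument:

$$y=\sin^{-1}(x) \implies \sin y = x \implies \cos y \,\frac{dy}{dx} = 1 \implies \frac{dy}{dx} = \frac{1}{\cos y}.$$

Since $y\in[-\pi/2,\pi/2]$, we have $\cos y\ge 0$, so $\cos y = +\sqrt{1-\sin^2 y} = \sqrt{1-x^2}$, which gives $\frac{1}{\cos y} = \frac{1}{\sqrt{1-x^2}}$.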

dice - A six-sided die is rolled five times. What is the probability that only the final roll will be a deuce?



A six-sided die is rolled five times. What is the probability that only the final roll will be a deuce?




I've tried to reason this out myself but I can only think that there's a 1/6 chance that the roll will be 2 and another 1/5 chance for it to be the last one. What am I missing here?



Thanks!


Answer



Allow me to quote your reasoning:




1/5 chance for it to be the last one





Here is what is wrong with your reasoning: it is not guaranteed that only one roll will be a deuce. It is also possible that there are two deuces rolled.






We need the first four rolls to be not deuce and the last one to be a deuce. Each roll is independent, so we can multiply the probabilities for each roll together.



The probability for a roll to be a deuce is $\dfrac16$, and similarly the probability for a roll to be not a deuce is $\dfrac56$. Therefore, the required probability is $\left(\dfrac56\right)^4\left(\dfrac16\right) = \dfrac{625}{7776}$.
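
A quick Monte Carlo check of this value (a sketch in Python; the trial count is chosen arbitrarily):

import random

trials = 10**6
hits = 0
for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(5)]
    # success: no deuce among the first four rolls, and a deuce on the fifth
    if all(r != 2 for r in rolls[:4]) and rolls[4] == 2:
        hits += 1
print(hits / trials, 625 / 7776)   # both should be about 0.0804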


complex analysis - Theorems on the Cauchy integral operator

While reading on Cauchy's integral formula, I found the following two theorems on the Cauchy integral operator:



The first theorem states: Let $f$ be a complex function that is holomorphic in a region $U$. Let $\Delta_r(c)$ be a disk, which along with its boundary $\delta \Delta_r(c)$ is contained in $U$. For all $z \in \Delta_r(c)$ the following is true:
\begin{equation*}
f(z) = \frac{1}{2 \pi i} \int_{\delta \Delta_r(c)} \frac{f(\zeta)}{\zeta - z} d\zeta
\end{equation*}




The second theorem states: Let $f: \overline{\Delta_1(0)} \rightarrow \mathbb{C}$ be a continuous function which is also holomorphic in $\Delta_1(0)$. It holds for every $z \in \Delta_1(0)$ that
\begin{equation*}
f(z) = \frac{1}{2 \pi i} \int_{\delta \Delta_1(0)} \frac{f(\zeta)}{\zeta - z} d\zeta
\end{equation*}



The second theorem is given without proof. My question is whether the second statement holds for general disks and whether it can be proven using the first one?

elementary set theory - Inverse images and containment

I have a question about inverse images and containment.



Let $\;f: A \rightarrow B, D \subseteq A$, and $E \subseteq B$. Show that $\; f^{-1}(B-E) \subseteq A-f^{-1}(E)$.



I have started by letting $x \in f^{-1}(B-E)$ so $f(x) \in (B-E)$, but I am unsure where to go from here.

calculus - Evaluate $lim_{nto infty} frac{2^{ln(ln(n))}}{nln(n)}$



I'm having trouble evaluating the limit: $$\lim_{n\to \infty} \frac{2^{\ln(\ln(n))}}{n\ln(n)}$$ (as it looks like, the limit tends to $0$)



This is what I got until now:
$$0\le \lim_{n\to \infty} \frac{2^{\ln(\ln(n))}}{n\ln(n)}\le \lim_{n\to \infty} \frac{2^{\ln(n)}}{n\ln(n)}=\lim_{n\to \infty} \frac{e^{\ln{2^{\ln(n)}}}}{n\ln(n)} = \lim_{n\to \infty} \frac{e^{\ln(n)\ln{2}}}{n\ln(n)}$$



Tried L'Hôpital from here, but it seems useless.



Would appreciate your advice.



Answer



You're almost there. $$\frac{e^{\ln(n)\ln(2)}}{n\ln(n)}= \frac{(e^{\ln(n)})^{\ln(2)}}{n\ln(n)}=\frac{n^{\ln2}}{n\ln(n)} = \frac{n^{\ln2 -1}}{\ln(n)}$$ which should tend to zero as $\ln2<1$.


Sunday 30 July 2017

elementary number theory - Canonical Prime Factorisation




The Fundamental Theorem of Arithmetic states that every integer $n > 1$ can be uniquely expressed as a product of prime powers, that is:



$$n = p_1^{n_1}p_2^{n_2} \cdots p_k^{n_k}= \prod_{i=1}^{k}p_i^{n_i}$$
Here, $p_1 < p_2 < \cdots < p_k$ are primes, and the $n_i$ are positive integers.



Now, when listing the prime factorisations of $2$ arbitrary integers $a$ and $b$ (in order to compute their gcd, lcm, etc.), many resources online, including some textbooks, simply say: Let $a=p_1^{m_1} \cdots p_k^{m_k}$ and $b=p_1^{n_1} \cdots p_k^{n_k}$ $(1)$. My concern is why this can be simply stated without the need to prove it... Or is this actually a pretty trivial observation? Granted, one could first provide the caveat that $\{p_i\}$ is the set of all primes dividing either $a$ or $b$. Still, I am not convinced that the factorisation in $(1)$ should be so intuitive, because, from the FTA, we know that the canonical factorisation of any integer must be unique. In addition, some sort of permutation of the prime factors between those in $a$ and those in $b$ may be involved. I think the following claim has to be proven first, before this statement can be validated:



Let $a=q_1^{e_1} \cdots q_r^{e_r}$ and $b=s_1^{f_1} \cdots s_t^{f_t}$ be the canonical prime factorisations of $a$ and $b$ respectively, where all the exponents are positive integers. Then, there exists a strictly increasing sequence of primes $(p_i)_{i=1}^{k}$, and 2 non-negative sequences of integers, $(m_i)_{i=1}^{k}$ and $(n_i)_{i=1}^{k}$, such that:



$a=p_1^{m_1} \cdots p_k^{m_k}$ and $b=p_1^{n_1} \cdots p_k^{n_k} $




Below's the approach that I have taken:



Clearly, $a= q_1^{e_1}\cdots q_r^{e_r}s_1^0s_2^0\cdots s_t^0$, and $b=s_1^{f_1}s_2^{f_2} \cdots s_t^{f_t}q_1^0 \cdots q_r^0$. Thus, there exists $ k \in \mathbb{Z^+}$, $\max(r,t) \leq k \leq r+t$, such that $a=\prod_{i=1}^{k}p_i^{m_i}$. Also, $p_i \in \{q_1, \cdots q_r, s_1, \cdots s_t\}$ is a strictly increasing sequence of primes, and we have that $m_i \in (e_i)_{i=1}^{r}$ or $m_i =0 \ \forall \ i $.



Similarly, for $b$, we have that $b=\prod_{i=1}^{k}p_i^{n_i}$. Also, $p_i \in \{q_1, \cdots q_r, s_1, \cdots s_t\}$ is a strictly increasing sequence of primes, and $n_i \in (f_i)_{i=1}^{t}$ or $n_i =0 \ \forall \ i $.



Of course, this proof may not be complete. In particular, I would appreciate it if anyone could suggest on how to improve it, or give a more concise/elegant argument.


Answer



You're overcomplicating things. Let $P_1,P_2$ be the sets of primes dividing two positive integers $m,n$ respectively; then we write $m=\prod p^\alpha,n=\prod p^\beta$ with the understanding that $p\in P_1\cup P_2$ and $\alpha,\beta$ are nonnegative (not necessarily nonzero).




There is absolutely no need to prove this in a more "mathematical" way by using more abstract symbols or logical notation. If the logic is clear (as it should be---since raising anything to the $0$th power gives you the multiplicative identity and does not change the integer in question), then your proof is complete, and you don't need to "complete" it even more with more symbols in order for it to seem more "mathematical". Beware of confusing yourself when you do so!






By the way, the fundamental theorem of arithmetic doesn't exactly say that the canonical factorisation is unique. It says that it is unique up to permutation of the factors. Since multiplication of integers is commutative, this means we can always write the canonical factorisation without loss of generality in increasing order of the factors, so the issue of permutation of factors you raised is really a non-issue.
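
To illustrate the zero-exponent convention concretely, here is a small sketch in Python (assuming sympy is available for factorint; the values of $m,n$ are arbitrary). Padding both factorisations over the union of primes makes the gcd a one-liner:

from math import gcd, prod
from sympy import factorint

m, n = 360, 84
fm, fn = factorint(m), factorint(n)      # {2: 3, 3: 2, 5: 1} and {2: 2, 3: 1, 7: 1}
primes = sorted(set(fm) | set(fn))       # all primes dividing m or n
alpha = [fm.get(p, 0) for p in primes]   # missing primes get exponent 0
beta = [fn.get(p, 0) for p in primes]
g = prod(p**min(a, b) for p, a, b in zip(primes, alpha, beta))
print(g == gcd(m, n))                    # True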


discrete mathematics - Binomial Theorem identities, evaluate the sum



This is a homework problem, please don't blurt out the answer! :)




I've been given the following, and asked to evaluate the sum:



$$\sum_{k = 0}^{n}(-1)^k\binom{n}{k}10^k$$



So, I started out trying to look at this as equivalent to the binomial theorem, in which case, I could attempt something like this: $10^k = y^{n-k}$ but I didn't feel that got me anywhere.



So I started actually evaluating it...



$$(-1)^0\binom{n}{0}10^0 + (-1)^1\binom{n}{1}10^1 + \ldots + (-1)^n\binom{n}{n}10^n$$




So, if I'm thinking correctly, all the other terms cancel out and you are left with:



$$(-1)^n\binom{n}{n}10^n = (-1)^n10^n$$



But, obviously this cannot be correct (or can it?). The book gives a slightly different answer, so I'm wondering where I'm going wrong. Some direction would be greatly appreciated!



Books answer: $\displaystyle (-1)^n9^n$


Answer



Try to fit your sum into one of the following:

$$
\sum_{k=0}^n\binom{n}ka^kb^{n-k}=(a+b)^n,\quad\sum_{k=0}^n\binom{n}kb^{n-k}=(1+b)^n,\quad\sum_{k=0}^n\binom{n}ka^k=(a+1)^n.
$$
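
A quick numerical check of the target identity $\sum_k(-1)^k\binom nk 10^k=(-9)^n$ (a sketch in Python):

from math import comb

for n in range(8):
    s = sum((-1)**k * comb(n, k) * 10**k for k in range(n + 1))
    assert s == (-9)**n   # i.e. (a+b)^n with a = -10, b = 1
print("identity verified for n = 0..7")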


polynomials - Find all roots of $x^{6} + 1$




I'm studying for my linear algebra exam and I came across this exercise that I can't solve.



Find all roots of polynomial $x^{6} + 1$. Hint: use De Moivre's formula.



I guessed that two roots are $i$ and $-i$, since:



$i^{6} = (i^{2})^{3} = (-1)^{3} = -1 $



therefore, $i$ is root and his complex conjugate $-i$ has to be root too. However that was just guessing. I have no idea how can I use De Moivre's formula here.




Can you help me solve this?


Answer



Hint: if $x^6=-1$, then $|x|^6=1$ and you can write $x=\cos\theta + i\sin\theta$.



details:



Then, thanks to De Moivre's theorem and $\cos^2\theta + \sin^2\theta =1$, the equation is equivalent to
$$
\cos 6\theta =-1,\qquad
6\theta \equiv \pi \pmod{2\pi},\qquad
\theta\in \frac \pi 6+\left\{0, \frac\pi 3, \frac{2\pi}3,\pi,\frac{4\pi} 3,\frac{5\pi}3\right\}.
$$
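
A numerical check of the six roots (a sketch in Python; the indices $k=0,\dots,5$ correspond to the six angles above):

import cmath

roots = [cmath.exp(1j * (cmath.pi + 2 * cmath.pi * k) / 6) for k in range(6)]
for x in roots:
    print(x, abs(x**6 + 1) < 1e-12)   # each root satisfies x^6 + 1 = 0
# k = 1 and k = 4 give i and -i, the two roots guessed in the question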


Saturday 29 July 2017

calculus - Showing a limit does not exist using Cauchy's $epsilon, delta $ limit definition





Show $\displaystyle\lim_{x\to\infty}x\sin x$ does not exist using Cauchy definition of limit.




Am I supposed to negate the definition: $\exists \epsilon >0 :\forall \delta>0 :(\forall x: 0<|x-a|<\delta \Rightarrow |f(x)-L| > \epsilon) $ ?



But here $a$ is infinity and there is no $L$ so I don't know how to write the inequality.


Answer



To show that the limit doesn't exist using the formal definition of a limit, I would do something like this:



Suppose $\lim\limits_{x\to\infty}x\sin x=L$. Then we should have:

$$\forall\epsilon>0\, \exists N\in\mathbb{N}:x>N\Rightarrow|x\sin x-L|<\epsilon$$
However \begin{equation}|x\sin x-L|\geq|x\sin x|-|L|=|x||\sin x|-|L|\end{equation}
Now by choosing $x=\frac\pi2+2Nk\pi$ ($k\in\mathbb{Z}_{\geq1}$) we can make this quantity arbitrarily large, while still having $x>N$. This is a contradiction. Hence the limit doesn't exist.


real analysis - Continuous, bijective function from $f:[0,1)to mathbb{R}.$



Prove that there does not exist a continuous, bijective function $f:[0,1)\to \mathbb{R}.$



By contradiction I can assume such a function exists; that function is then injective, surjective, and continuous. I know I need to use the intermediate value theorem, but I can't derive a contradiction.



Answer



From the interaction in the comments I think the OP needs a bit more elaboration on the hint by Asaf. I will however refrain from providing a complete solution.



Assume that there is a function $f$ defined on $[0, 1)$ which is continuous and a bijection from $[0, 1)$ to $\mathbb{R}$. It means in particular that $f$ is a one-one function, i.e. $f(a) = f(b)$ implies $a = b$.



Next let's consider $f(0)$ and $f(1/2)$. Because $f$ is one-one we must have $f(0) \neq f(1/2)$.



Let's assume that $f(0) < f(1/2)$ (the case $f(0) > f(1/2)$ can be handled similarly). Now we need to prove that if $x \in (0, 1)$ then $f(0) < f(x)$. This is where you need to use IVT for continuous functions. You should be able to do this by following Asaf's comments. And then we know that $f(0)$ is the minimum value of $f(x)$ and hence the part $(-\infty, f(0))$ of $\mathbb{R}$ is not mapped by this function $f$ and thus $f$ is not onto $\mathbb{R}$.



The proof of $f(0) < f(x)$ for all $x \in (0, 1)$ proceeds as follows. Clearly $f$ is one-one so $f(0) \neq f(x)$. If $x = 1/2$ then we already know that $f(0) < f(1/2) = f(x)$. So let $x \neq 1/2$. If $f(0) > f(x)$ then $f(x) < f(0) < f(1/2)$ so that $f(0)$ lies between $f(x)$ and $f(1/2)$ and hence by IVT we have ..... (I hope OP will be able to complete the dots)




If $f(0) > f(1/2)$ then we can show that $f(x) < f(0)$ for all $x \in (0, 1)$ so that $f(0)$ is the maximum value of $f$ and again $f$ is not onto $\mathbb{R}$.



Another thing to note. There is nothing special in $1/2$ we have chosen above. It can be replaced by any number lying in $(0, 1)$.


$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$ (prove by induction)

I'm having some difficulty proving by induction the following statement.



$$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$$




I have shown that $\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$ holds for $n=1$ (both sides equal $\frac{1}{20}$), but I am getting stuck on the induction step.



As far as I know I have to show $$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$$
implies
$$\sum_{i=0}^{n+1} \frac{1}{(i+3)(i+4)} = \frac{n+1}{4(n+5)}$$



To do this I think I should add the number $\frac{1}{(n+4)(n+5)}$ to $\frac{n}{4(n+4)}$ and see if it gives $\frac{n+1}{4(n+5)}$ , if I am not mistaken.



When trying to do that however I get stuck. I have:




$$\frac{n}{4(n+4)} +\frac{1}{(n+4)(n+5)} = \frac{n(n+4)(n+5)}{4(n+4)^2(n+5)} + \frac{4(n+4)}{4(n+4)^2(n+5)} = \frac{n(n+4)(n+5)+4(n+4)}{4(n+4)^2(n+5)} = \frac{n(n+5)+4}{4(n+4)(n+5)}$$



However beyond this point I don't know how to reach $\frac{n+1}{4(n+5)}$ I always just end up at the starting point of that calculation.



So I think that either my approach must be wrong or I am missing some trick to simplify $$\frac{n(n+5)+4}{4(n+4)(n+5)}$$



I would be very grateful for any help, as this is a task on a preparation sheet for the next exam and I don't know anyone who has a correct solution.
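
For reference, the simplification being missed is just a factoring of the numerator:

$$n(n+5)+4 = n^2+5n+4 = (n+1)(n+4),\qquad\text{so}\qquad \frac{n(n+5)+4}{4(n+4)(n+5)} = \frac{(n+1)(n+4)}{4(n+4)(n+5)} = \frac{n+1}{4(n+5)}.$$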

Friday 28 July 2017

real analysis - How is this not a proof of the Jacobian conjecture in the complex case?

I've just been reading the Wikipedia entry regarding the Jacobian conjecture, and it said that either the conjecture is true for all fields of characteristic zero, or it is false for all such fields.



Hence, I wonder, shouldn't this be an easy problem that yields to methods from real or complex analysis? After all, it involves only simple terms like determinant, inverse, constant, polynomial etc.



Specifically, the determinant condition gives a relation between the derivatives, which one may then be able to integrate in order to possibly obtain polynomials.



To make this more specific, say that we have a polynomial function $f: \mathbb K^n \to \mathbb K^n$, where $\mathbb K = \mathbb R$ or $\mathbb C$. Then $\det J_f$ is a polynomial in the derivatives of the components and hence itself a polynomial. By the inverse rule and Cramer's rule, the derivative of the (local) inverse has the form
$$
\frac{1}{\det(J_f)} \operatorname{Cof}(J_f),

$$

where by assumption $\det(J_f)$ is constant. Also, the cofactor matrix is a polynomial matrix. Thus, we integrate any of its entries for each component to obtain a local polynomial inverse, which is also global due to the identity theorem (at least in the complex case).




What makes this approach fail?




(This main part of my question makes it unique among other questions regarding the Jacobian conjecture, which have been completely falsely suggested to be a duplicate of this one.)

limits - Stirling's formula: proof?

Suppose we want to show that $$ n! \sim \sqrt{2 \pi} n^{n+(1/2)}e^{-n}$$



Instead we could show that $$\lim_{n \to \infty} \frac{n!}{n^{n+(1/2)}e^{-n}} = C$$ where $C$ is a constant. Maybe $C = \sqrt{2 \pi}$.



What is a good way of doing this? Could we use L'Hopital's Rule? Or maybe take the log of both sides (e.g., compute the limit of the log of the quantity)? So for example do the following $$\lim_{n \to \infty} \log \left[\frac{n!}{n^{n+(1/2)}e^{-n}} \right] = \log C$$
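
One can at least watch the constant emerge numerically before proving anything (a sketch in Python; lgamma is used to avoid overflowing $n!$ for large $n$):

from math import lgamma, log, exp, sqrt, pi

for n in (10, 100, 1000, 10**6):
    # log of n! / (n^{n+1/2} e^{-n}), computed via lgamma(n+1) = log(n!)
    log_ratio = lgamma(n + 1) - ((n + 0.5) * log(n) - n)
    print(n, exp(log_ratio))
print(sqrt(2 * pi))   # the ratios approach sqrt(2*pi) ~ 2.5066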

integration - for each $\epsilon >0$ there is a $\delta >0$ such that whenever $m(A)<\delta$, $\int_A f\,dx<\epsilon$



This is an old preliminary exam problem:



Show that, for every nonnegative Lebesgue integrable function $f:[0,1]\rightarrow \mathbb{R}$ and every $\epsilon>0$ there exists a $\delta>0$ such that for each measurable set $A\subset [0,1]$ with $m(A)<\delta$ it follows that $\int_A f(x)dx<\epsilon$.




Here's my attempt at a proof: Since $[0,1]$ is compact, and $f$ is real-valued, there exists an $M>0$ such that $f(x)\le M$ for all $x\in [0,1]$. Therefore, for $\epsilon>0$, let $\delta=\epsilon/M$. Then for all $A\subset [0,1]$ such that $m(A)<\delta$, we have that $\int_A f(x)dx\le Mm(A)<\epsilon$. Where here $m$ denotes Lebesgue measure.



The part I'm unsure about is the existence of $M$. If the function is continuous, then there is no problem, but $f$ does not have to be continuous to be Lebesgue measurable. On the other hand, the problem says that $f$ is real-valued, not extended real-valued, so this means that $f(x)$ is defined and finite for each $x$, right?


Answer



Any integrable $f:[0,1] \to R$ can be approximated by a continuous function $f_\epsilon$ on $[0,1]$ up to $\epsilon$ in $L^1$. With this, the fix to your argument is to use the triangle inequality:



$$\left| \int_A f dx \right| \leq \left| \int_A (f-f_\epsilon) dx \right| + \left| \int_A f_\epsilon dx \right| \leq \epsilon + \delta M.$$



Epsilon is picked first; from this we get an $M$ dependent on $f_\epsilon$ (hence on $\epsilon$), and from this we can pick $\delta = \epsilon/M$ to get the upper bound $2 \epsilon$.




If you don't feel comfortable with a continuous approximation (ala Lusin's theorem) you can use simple functions instead.


number theory - How to prove equality of sum of Legendre symbols

I have to prove that the next equality holds:
$$\sum_{k=0}^{p-1} \left( \frac{k(k+a)}{p} \right)=\sum_{k=0}^{p-1} \left( \frac{k(k+1)}{p} \right)$$
with $a \in \mathbb{Z}$ and $a$ not divisible by $p$ and p prime. I am supposed to use a substitution for this, but I have no idea which one.




Afterwards I have to use this equality, together with this one (which I already proved)
$$\sum_{k,l=1}^{p-1} \left( \frac{kl}{p} \right)=0$$



to prove that
$$\sum_{k=1}^{p-2} \left( \frac{k(k+1)}{p} \right)=-1.$$



Any help would be appreciated!
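
A numerical spot-check of the first equality (a sketch in Python, assuming sympy; the prime $p=11$ is an arbitrary choice):

from sympy.ntheory import legendre_symbol

p = 11
for a in range(1, p):   # a runs over values not divisible by p
    # legendre_symbol returns 0 when p divides its first argument
    total = sum(legendre_symbol(k * (k + a), p) for k in range(p))
    print(a, total)     # the sum is the same for every a, namely -1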

Thursday 27 July 2017

algebra precalculus - real solutions of $\sin x+2\sin 2x-\sin 3x = 3$, where $x\in (0,\pi)$



Find the number of real solutions of the equation $\sin x+2\sin 2x-\sin 3x = 3,$ where $x\in (0,\pi)$.




$\bf{My\; Try::}$ Given $\left(\sin x-\sin 3x\right)+2\sin 2x = 3$



$\Rightarrow -2\cos 2x\cdot \sin x+2\sin 2x = 3\Rightarrow -2\cos 2x\cdot \sin x+4\sin x\cdot \cos x = 3$



$\Rightarrow 2\sin x\cdot \left(-\cos 2x+2\cos x\right)=3$



Now I do not understand how I can solve it from here.



Help me




Thanks


Answer



$$\begin{cases}\sin 2x=2\sin x\cos x\\{}\\\sin 3x=\sin2x\cos x+\sin x\cos2x=2\sin x\cos^2x+\sin x(1-2\sin^2x)\end{cases}$$



Thus we get



$$0=\sin x+4\sin x\cos x-2\sin x\cos^2x-\sin x+2\sin^3x$$



Divide all through by $\;\sin x\;$ (why can we?):




$$4\cos x-2\cos^2x+2\sin^2x=0\iff2\cos x-\cos^2x+1-\cos^2x=0\iff$$



$$2\cos^2x-2\cos x-1=0\iff \ldots$$


calculus - Indeterminate form $1^infty$ vs. $0^infty$




Why is $1^\infty$ an indeterminate form while $0^\infty = 0$? If $0\cdot0\cdot0\cdots = 0$ shouldn't $1\cdot1\cdot1\cdots = 1$?


Answer



To say that $1^\infty$ is an indeterminate form means that there is more than one object that can be $\lim\limits_{x\,\to\,\text{something}} f(x)^{g(x)}$ where $f(x)\to1$ and $g(x)\to\infty,$ so that the limit depends on which functions $f$ and $g$ are.



Thus
$$
\left.
\begin{align}
& \lim_{x\to\infty} \left(1+\frac 1 x\right) = 1 \quad\text{and} \quad \lim_{x\to\infty} \left( 1 + \frac 1 x \right)^x = e \\[10pt]
& \qquad \text{and} \\[10pt]
& \lim_{x\to\infty} \left( 1 - \frac 1 x\right) = 1 \quad \text{and} \quad \lim_{x\to\infty} \left( 1 - \frac 1 x\right)^x = \frac 1 e.
\end{align} \right\} \longleftarrow \text{two different numbers}
$$
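
The two limits above are easy to watch numerically (a quick Python sketch):

for x in (10.0, 100.0, 10000.0):
    # the columns approach e ~ 2.71828 and 1/e ~ 0.36788 respectively
    print((1 + 1/x)**x, (1 - 1/x)**x)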


Wednesday 26 July 2017

calculus - Is there an everywhere discontinuous increasing function?



Does there exist a function $f : \mathbb{R} \rightarrow \mathbb{R}$ that is strictly increasing and discontinuous everywhere?




My line of thought (possibly incorrect): I know there are increasing functions such as $f(x) = x$, and there are everywhere-discontinuous functions such as the Dirichlet function. I also know that when there is a discontinuity at a point $c$, there is a finite gap $\epsilon$ such that there are points $d$ arbitrarily close to $c$ such that $|f(d) - f(c)| > \epsilon$. This is where my thinking gets unclear - does it make sense to have a "gap" at every real number?


Answer



There is no such function. Suppose that $f:\mathbb{R}\to\mathbb{R}$ is strictly increasing. For each $a\in\mathbb{R}$ let $f^-(a) =$ $\lim\limits_{x\to a^-}f(x)$ and $f^+(a) = \lim\limits_{x\to a^+}f(x)$. Then $f$ is discontinuous at $a$ if and only if $f^-(a) < f^+(a)$. Let $D = \{a\in\mathbb{R}:f\text{ is not continuous at }a\}$, and for each $a\in D$ let $q_a$ be a rational number in the non-empty open interval $I_a = (f^-(a),f^+(a))$.



It’s not hard to check that if $a,b \in D$ with $a<b$, then $f^+(a)\le f^-(b)$, so the intervals $I_a$ and $I_b$ are disjoint, and hence $q_a \ne q_b$. The map $a\mapsto q_a$ is therefore an injection from $D$ into $\mathbb{Q}$, so $D$ is at most countable, and in particular $f$ cannot be discontinuous everywhere.

algebra precalculus - Solving Radical Equations $x-7= sqrt{x-5}$

This the Pre-Calculus Problem:



$x-7= \sqrt{x-5}$



So far I did it like this and I'm not understanding If I did it wrong.



$(x-7)^2=(\sqrt{x-5})^2$, so the square root cancels, leaving:



$(x-7)^2=x-5$ Then I F.O.I.L'ed the problem.




$(x-7)(x-7)=x-5$



$x^2-7x-7x+14=x-5$



$x^2-14x+14=x-5$



$x^2-14x-x+14=x-x-5$



$x^2-15x+14=-5$




$x^2-15x+14+5=-5+5$



$x^2-15x+19=0$



$(x-1)(x-19)=0$



Now this is where I'm stuck because when I tried to see if I got the right numbers in the parentheses I got this....



$x^2-19x-1x+19=0$




$x^2-20x+19=0$



As you may see I'm doing something bad because I don't get $x^2-15x+19$



Could anyone please help me and tell me what I'm doing wrong?
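
For what it's worth, a machine check of the original equation (a sketch, assuming sympy) shows what the algebra should produce; note that $(x-7)^2 = x^2-14x+49$, and that squaring can also introduce an extraneous root:

from sympy import symbols, sqrt, solve, Eq

x = symbols('x', real=True)
print(solve(Eq(x - 7, sqrt(x - 5)), x))   # [9]; sympy drops the extraneous root
# squaring gives x**2 - 14*x + 49 = x - 5, i.e. x**2 - 15*x + 54 = 0,
# with roots 6 and 9; x = 6 fails the original equation since 6 - 7 < 0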

How to show that the given sequence of functions converges to $f$ almost everywhere, almost uniformly and in measure?

Consider a sequence $(f_n)$ defined on $\mathbb{R}$ by $f_n =\chi_{[n,n+1]},$ $n\in \mathbb{N}$ and the function $f\equiv 0.$ Does $f_n$ converge to $f$ almost everywhere, almost uniformly or in measure?



$f_n\to f$ almost everywhere is the same as saying that $f_n\to f$ pointwise almost everywhere, i.e. on a subset whose complement has measure zero. Given $\epsilon>0$ and $x\in \mathbb{R}$ we observe that if $x\geq 0$ then for some $n_0\in \mathbb{N}$ we have that $n_0\leq x<n_0+1$, and so $|f_n(x)-f(x)|=0<\epsilon$ for all $n>n_0$. On the other hand if $x<0$ then $f_{n}(x)=0$ for all $n$, and so we see that $$\lim_{n\to \infty}f_n(x) = f(x)$$ for all $x\in \mathbb{R}.$ Since the complement of $\mathbb{R}$ is $\emptyset$ which has measure $0$ we conclude that $f_n$ converges to $f$ almost everywhere.




Now $f_n$ does not converge uniformly to $f$ on the set $[0,\infty),$ a set of infinite measure. And so we conclude that $f_n$ does not converge almost uniformly to $f.$



If we choose $\epsilon = 0$ and $\eta =5$ then for all $N(\epsilon,\eta)\in \mathbb{N}$ we have that for $n\geq N.$
$$\mu(x\in D:|f_n|\geq \epsilon)=\mu(\mathbb{R})=+\infty\geq 5.$$



So we see that $f_n$ does not converge to $f$ in measure.



Is this solution correct?




This is problem 39 page 48 on the following pdf https://huynhcam.files.wordpress.com/2013/07/anhquangle-measure-and-integration-full-www-mathvn-com.pdf



The definition for convergence in measure used there is given as an image in the original post.

Limit of $lim_{xto0^+}frac{sin x}{sin sqrt{x}}$



How do I calculate this? $$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}$$
If I tried using l'Hopital's rule, it would become

$$\lim_{x\to0^+}\frac{\cos x}{\frac{1}{2\sqrt{x}}\cos \sqrt{x}}$$
which looks the same. I can't seem to find a way to proceed from here. Maybe it has something to do with $$\frac{\sin x}{x} \to 1$$
but I'm not sure what to do with it. Any advice?



Oh and I don't understand series expansions like Taylor's series.


Answer



By the equivalence $\sin x\approx x$ near zero we have
$$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}=\lim_{x\to0^+}\frac{x}{\sqrt{x}}=0$$
or
$$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}=\lim_{x\to0^+}\frac{\sin x}{x}\cdot\frac{\sqrt{x}}{\sin \sqrt{x}}\cdot\sqrt{x}=1\times1\times0=0$$



Tuesday 25 July 2017

calculus - Prove that $cos(x)$ doesn't have a limit as $x$ approaches infinity.



I've been working on this one for quite a long time now.



I have to prove that $\cos(x)$ has no limit as $x$ approaches infinity.
Let $\epsilon>0$ and let $M$ be any number greater than $0$, so that for any $x>M$: $$|\cos(x)-L| < \epsilon$$
I'm not sure how I am to show that, for a specific $L$ (anywhere between $-1$ and $1$) I would choose, I could find an $\epsilon$ that would contradict $$|\cos(x)-L| < \epsilon$$
For example, if I choose $\epsilon =1/2$ and $L=1/4$, then for $\cos(x)=1$ the expression above is wrong, thus $1/4$ isn't the limit, but this happens with any $L$ I choose.

Is giving one example of an arbitrarily chosen $L$ enough?
I cannot give 100 examples, now can I?



(tia)


Answer



To show that there is no such $L$, you can show that $\cos x$ will always assume points more than $\epsilon$ apart for some $\epsilon > 0$. Take $\epsilon <1$. $\cos 2n\pi = 1$ for all $n\in\Bbb{Z}$, and $\cos\left((2n + 1)\pi\right) = -1$ for all $n\in\Bbb{Z}$, so that no matter what $N$ you choose, you can always find $x = 2n\pi,x' = (2n + 1)\pi > N$ such that $\left|\cos x - \cos x'\right| = 2 > \epsilon$.



Edit (for completeness): You can use the triangle inequality to show that the above implies so such $L$ can work. Do you see it?





This implies that no $L$ satisfies $\left|\cos x - L\,\right| < \epsilon$ for all $\epsilon > 0$ whenever $x > M_{\epsilon}$ (some $M_{\epsilon}\in\Bbb{R}$) because if it did, we would have
$$
\left|\cos 2n\pi - L\,\right| = \left| 1 - L\,\right| < 1/4
$$
for $2n\pi > M_{1/4}$,
$$
\left|\cos (2n + 1)\pi - L\,\right| = \left|-1 - L\,\right| = \left| 1 + L\,\right| < 1/4
$$
for $(2n+1)\pi > M_{1/4}$, so
$$

2 = \left|1 - L + 1 + L\,\right| \leq \left| 1 - L\,\right| + \left| 1 + L\,\right|< 2\cdot \left(1/4\right) < 1/2,
$$
which is absurd.



Monday 24 July 2017

calculus - Why is the intermediate value theorem so important?

I would like to know why the intermediate value theorem is so important. So my questions are:





  1. Which important theorems do we prove using the intermediate value theorem?

  2. Are there direct applications of the intermediate value theorem outside mathematics?

  3. Does the intermediate value theorem have a historically importance?



Sunday 23 July 2017

combinatorics - Prove that $P(X)$ has exactly $binom nk$ subsets of $X$ of $k$ elements each.

Let set $X$ consist of $n$ members. $P(X)$ is power set of $X$.



Prove that set $P(X)$ has exactly $$\binom nk = \frac{n!}{k!(n-k)!}$$ subsets of $X$ of $k$ elements each. Hence, show that $P(X)$ contains $2^n$ members.
Hint: use the binomial expansion of $(1 + 1)^n$.




I'm trying to get more into math (proof writing first) so I got this from a book about Real Analysis. It's a short first sub-chapter which was an intro review of set theory and I have absolutely no idea how to do this particular problem given what has been said in 8 pages.



I 'know' that it's true, since I've tried it. I can also somewhat prove that $P(X)$ has $2^n$ elements by thinking of binary numbers. I can also see that the binomial expansion of $(1 + 1)^n$ does tell the number of subsets of $X$ that have $k$ elements via the coefficients. I just don't know how they all fit together in the way the author thinks they should.



I've also seen some questions on here proving the summation $\sum_k \binom nk = 2^n$, which was supposed to help, but I couldn't understand them at all given my limited math/combinatorics knowledge.

real analysis - Prove that $limsup_{xtoinfty}left(cos x + sinleft(sqrt2 xright)right) = 2$





Prove that
$$
\limsup_{x\to\infty}\left(\cos x + \sin\left(\sqrt2 x\right)\right) = 2
$$




Pretty much always when I ask a question here I do provide some trials of mine to give some background. Unfortunately, this one is such a tough one for me that I don't even see a starting point.



Here are some observations though. Let's denote the function under the limsup as $f(x)$:
$$

f(x) = \cos x + \sin\left(\sqrt2 x\right)
$$



Since sin's argument contains an irrational multiplier the function itself is not periodic, perhaps this may be used somehow. I've tried assuming that there exists $x$ such that the equality holds, namely:
$$
\cos x + \sin\left(\sqrt2 x\right) =2
$$



Unfortunately, I was not able to solve it for $x$. I've then tried to use Mathematica for a numeric solution, but NSolve didn't output anything in Reals.




The problem becomes even harder since there are some constraints on the tools to be used. It is given at the end of the chapter on "Limit of a function", before the definition of derivatives, so the author assumes the statement might be proven using more or less elementary methods.



Also, I was thinking that it could be possible to consider $f(n),\ n\in\Bbb N$ rather than $x\in\Bbb R$ and use the fact that $\sin(n)$ and $\cos(n)$ are dense in $[-1, 1]$. But not sure how that may help.



What would the argument be to prove the statement in the problem section?


Answer



In $1901$, Minkowski used his geometry of numbers to prove the following theorem${}^{\color{blue}{[1],[2]}}$.




Given any irrational $\theta$ and non-integer real number $\alpha$ such that $x - \theta y - \alpha = 0$ has no solutions in integers. Then for any $\epsilon > 0$, there are infinitely many pairs of integers $p,q$ such that

$$|q(p - \theta q - \alpha)| < \frac14 \quad\text{ and }\quad |p - \theta q - \alpha| < \epsilon$$




Take $(\theta,\alpha) = (\sqrt{2},-\frac14)$, this choice satisfies the condition in above theorem. This means for any $\epsilon > 0$, there are infinitely many pairs of integers $p,q$ such that



$$\left|p - \sqrt{2} q + \frac14\right| < \frac{\epsilon}{6\pi}$$



For such a pair $p,q$ with $q \ne 0$, define



$$(P,Q) = \begin{cases}(p,q), & q > 0\\(-3p-1,-3q) & q < 0\end{cases}$$




If we set $x$ to $2\pi Q$, we will have



$$\sqrt{2}x = 2\pi P + \frac{\pi}{2} + \eta\quad\text{ for some } |\eta| < \epsilon$$



This leads to



$$\cos x = 1 \land \sin(\sqrt{2}x) \ge 1 - |\eta| > 1 - \epsilon
\quad\implies\quad \cos x + \sin(\sqrt{2}x) > 2 - \epsilon$$




Since there are infinitely many such $x$ and they can be as large as one wish, we obtain



$$\limsup_{x\to \infty}\, ( \cos x + \sin(\sqrt{2}x) ) \ge 2 - \epsilon$$



Since $\epsilon$ is arbitrary and $\cos x + \sin(\sqrt{2}x)$ is bounded from above by $2$, we can conclude
$$\limsup_{x\to \infty}\, ( \cos x + \sin(\sqrt{2}x) ) = 2$$



Update



Thinking more about this, since we only use the part $| p - \theta q - \alpha| < \epsilon$ in Minkowski's theorem. This theorem is an overkill. The fact "the fractional part of $\sqrt{2} q$ is dense in $[0,1]$" is enough to derive above limit. Since Minkowski's theorem is a useful theorem to know for such problems, I will leave the answer as is.




References




probability - Concentration inequality to bound expectation

Let $X$ be a non-negative r.v. so that
$$ P(X \geq t) \leq C \exp\bigg\{\frac{-t^2/2}{\sigma^2 + bt}\bigg\}$$
for positive $\sigma, b$ and $C\geq 1$. Show that



$$ E[X] \leq 2\sigma (\sqrt{\pi} + \sqrt{\log C}) + 4b(1 + \log C) $$



I usually know how to get a concentration bound or bound the deviation from the mean, but I have no idea how to bound this expectation. Any help would be really appreciated.

number theory - Prove that $gcd(m+m', n+n') = 1$



I'm stuck trying to solve this problem:



"Given positive integers $m, n, m', n'$ such as $m/n < m'/n'$ and $m'n - mn' = 1$, we define $$a/b = (m+m')/(n+n').$$ Check that $m/n < a/b < m'/n'$ and prove that $$gcd(a, b) = 1."$$




The way I see it, it must be that $a = m+m'$ and $b = n+n'$. But I'm not sure how to prove the $\gcd(a,b) = 1$ part. Any help would be appreciated.


Answer



Hint: $\ (m+m')\,n - m\,(n'+n) = m'n-mn'= 1$.
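
The hint exhibits $1$ as an integer linear combination of $a=m+m'$ and $b=n+n'$, which forces $\gcd(a,b)=1$. A quick spot-check on a few neighbour pairs (a Python sketch; the sample pairs are arbitrary):

from math import gcd

pairs = [((1, 2), (2, 3)), ((3, 7), (4, 9)), ((5, 8), (7, 11))]   # each has m'n - mn' = 1
for (m, n), (mp, nq) in pairs:
    assert mp * n - m * nq == 1
    a, b = m + mp, n + nq
    print(m / n < a / b < mp / nq, gcd(a, b))   # True 1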


inequality - Prove $sumlimits_{cyc} frac{sqrt{xy}}{sqrt{xy+z}}lefrac{3}{2}$ if $x+y+z=1$





if $x,y,z$ are positive real numbers and $x+y+z=1$ Prove:$$\sum_{cyc} \frac{\sqrt{xy}}{\sqrt{xy+z}}\le\frac{3}{2}$$
where $\sum_{cyc}$ denotes sums over cyclic permutations of the symbols $x,y,z$.



Additional info:I'm looking for solutions and hint that using Cauchy-Schwarz and AM-GM because I have background in them.




Things I have done so far: Using AM-GM $$xy+z \ge 2$$ $$\sqrt{xy+z} \ge \sqrt2$$
So manipulating this leads to $$\sum_{cyc}\frac{\sqrt{xy}}{\sqrt{xy+z}} \le \sum_{cyc}\frac{\sqrt{xy}}{\sqrt2}$$




I am stuck here. I'm thinking about applying Cauchy-Schwarz. Also I have not used the assumption $x+y+z=1$. Any hint is appreciated.


Answer



$$\sum_{cyc} \frac{\sqrt{xy}}{\sqrt{xy+z}}\le\frac{3}{2} $$
$$ \sum_{cyc} \frac{\sqrt{xy}}{\sqrt{xy+z}} = \sum_{cyc} \frac{\sqrt{xy}}{\sqrt{xy+1-x-y}} = \sum_{cyc} \frac{\sqrt{xy}}{\sqrt{(1-x)(1-y)}} = \sum_{cyc} \frac{\sqrt{xy}}{\sqrt{(y+z)(x+z)}} $$
And now, by AM-GM
$$ \sum_{cyc} \frac{\sqrt{xy}}{\sqrt{(y+z)(x+z)}} \le \sum_{cyc} \frac{x}{2(x+z)}+\frac{y}{2(y+z)} = \frac{x}{2(x+z)}+\frac{y}{2(y+z)} + \frac{y}{2(y+x)}+\frac{z}{2(z+x)}+ \frac{z}{2(z+x)}+\frac{x}{2(x+y)} = \frac{3}{2} \Box $$


real analysis - Find a differentiable $f$ such that $f'$ is not continuous.




I'm trying to solve this problem:





Find a differentiable function $f:\mathbb{R} \longrightarrow \mathbb{R}$ such that $f':\mathbb{R} \longrightarrow \mathbb{R}$ is not continuous at any point of $\mathbb{R}$.




Any hints would be appreciated.


Answer



You are looking for a derivative that is discontinuous everywhere on $\Bbb R$. Such a function doesn't exist. Since $f'$ is the pointwise limit of continuous functions, it is a Baire class $1$ function. A theorem of Baire says that the set of discontinuities of $f'$ is a meager subset of $\Bbb R$.


Where $a$ is introduced in $a + bi$ when solving polynomials using complex numbers

Trying to wrap my head around complex numbers and almost there. I am looking for problems that show me how to introduce $i$ into an equation. What I'm finding a lot of is "Simplify 2i + 3i = (2 + 3)i = 5i", where the $i$ has already been introduced somehow magically. The only primitive examples I've tried so far is "Simplify $\sqrt{-9}$" and by the definition of $i = \sqrt{-1}$ we get $3i$. That part makes sense for now.



But it's just $3i$, or $bi$ from the equation, there is no $a$. I don't see in what situations you get the $a$ and how you know how/where to add it. For example, on Wikipedia they show:





In this case the solutions are $-1 + 3i$ and $-1 - 3i$, as can be verified using the fact that $i^2 = -1$:



${\displaystyle ((-1+3i)+1)^{2}=(3i)^{2}=\left(3^{2}\right)\left(i^{2}\right)=9(-1)=-9,}$
${\displaystyle ((-1-3i)+1)^{2}=(-3i)^{2}=(-3)^{2}\left(i^{2}\right)=9(-1)=-9.}$




I am not skilled enough yet to know how they solved this, but I am wondering if they are saying $−1 + 3i$ is the form $a + bi$, or that $-1$ is separate.




Wondering if one could start off with a simple polynomial equation without any presence of $i$, and then show how you introduce $i$ in two different cases/examples:




  1. Where it's just $bi$, not $a + bi$

  2. Where it's $a + bi$



That way it should help explain how to introduce $i$ into a polynomial equation.



I'm imagining something like, or something more complicated if this doesn't have the $a$:




$(x + 3)^2 = -10$



I've started by doing:



$x + 3 = \sqrt{-10} = \sqrt{10}i$



$x = -3 + \sqrt{10}i$



Not sure if this means that $-3 + \sqrt{10}i$ is the complex number, or just $\sqrt{10}i$. Not sure if you need to be adding complex numbers to both sides, etc.
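
For reference, here is how a nonzero real part $a$ arises in the Wikipedia example, following exactly the steps used above for $(x+3)^2=-10$:

$$(x+1)^2 = -9 \implies x+1 = \pm\sqrt{-9} = \pm 3i \implies x = -1 \pm 3i.$$

So $-1\pm 3i$ is the whole complex number, with $a=-1$ and $b=\pm 3$; the real part appears whenever a real shift (here $+1$, in the example above $+3$) is moved to the other side. Likewise $x=-3+\sqrt{10}i$ is a single complex number with $a=-3$ and $b=\sqrt{10}$.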

Euclide arithmetic, divisibility



If $a_1,\dots,a_n$ are non zero integers, prove that if $d$ divides $a_i$, for $i=1,\dots,n$, then $d$ divides $a_1+\dots+a_n$.



I can see that if it divides each term individually it will divide the sum, but I don't know how to prove it.


Answer



Here is a fundamental and generalization rule of your thoughts:





If $c$ divides $a$ and $b$, then it divides any linear combination
$ax+by$ for any integers $x,y$.




The proof of this statement goes as follows:




from the definition of divisibility, $a = ck$ for some integer $k$, implying that $ax = cK$ with $K = xk$.

Similarly, $by = cR$ for some integer $R$. Adding these two equations,
$ax+by = c(R+K)$. We know that $R+K$ is an integer; call it
$P$. Thus $ax+by = cP$, implying that $c$ divides $ax+by$.




Now you can use this fact to prove your statement with $x,y = 1$, can't you?


real analysis - About the existence of a certain type of linear map



I have a conjecture inspired by the following observation. If $f:\mathbb{R}\to \mathbb{R}$ is a continuous bijective function that satisfies $f(x)+f^{-1}(x)=2x$ and has a fixed point, then $f(x)=x$. This can be proved quite easily, though not trivial.



I conjecture that if we omit the assumption that $f$ has a fixed point, then $f(x)=x+d$ for some $d\in \mathbb{R}$. More generally, I think that if $f:\mathbb{R}^n \to \mathbb{R}^n$ is continuous bijection which satisfies $f(x)+f^{-1}(x) = \Lambda x$ for some bijective linear map $\Lambda:\mathbb{R}^n \to \mathbb{R}^n$, then $f$ must be linear.



I first want to tackle a seemingly easier problem: does there exist a bijective linear map $S$ with $S+S^{-1}=\Lambda$? Unfortunately I could not handle this problem, so I would really appreciate it if you could give me some suggestions or a counterexample.


Answer



Okay, here's a proof sketch for your first conjecture.




Let $f:\mathbb R \to \mathbb R$ be a continuous bijection, therefore monotonic. Suppose $f(x) + f^{-1}(x) = 2x$. Then for all $a\in \mathbb R$ and $n\in\mathbb Z$, $f$ also satisfies $f^n(a) = a+nd_a$ where $d_a = f(a) - a$, by induction on $n$. You've already handled the case where some $d_a=0$, so assume that all $d_a\neq0$. Then we've described the behavior of $f$ on a bunch of lattices in $\mathbb R$.



Now, if for some $a_1, a_2 \in \mathbb R$ we have $d_{a_1}\neq d_{a_2}$, then the two lattices don't interleave properly at large values of $n$ (starting somewhere around $(a_2-a_1)/(d_{a_1}-d_{a_2})$, I think). So $f$ is not monotonic, a contradiction. Therefore $d_a$ doesn't depend on $a$, and $f(x)=x+d_0$ for all $x$.


Saturday 22 July 2017

sequences and series - Is there any summation method that assigns $ sum_{n=1}^infty frac{1}{n} =-frac{pi}{2}$

I don't know too much about alternate summation methods, but am interesting to know if any give the sum of the harmonic series to be



$$-\frac{\pi}{2}$$

integration - Fourier transform in three dimensions getting out of hand



I have the following integral I wish to compute, it transforms a quantum position wave function into momentum space:



$$\phi(\mathbf p)=\int\frac{\mathrm d^3r}{(2\pi\hbar)^{3/2}}e^{-i\mathbf{p\cdot r}}\psi(\mathbf r)$$
Where $\psi(\mathbf r)=e^{-|\mathbf r|/a}/\sqrt{\pi a^3}$, the wave equation for the ground state of hydrogen.



So I thought about converting it to polar coordinates first then evaluate the integrals, but it turned out to be unexpectedly complicated.




Below is my attempt (note, since this is physics related, $\theta$ is swapped with $\varphi$ in polar coordinates):






We first rewrite the integral in polar coordinates:
$$\phi(r_p, \varphi_p, \theta_p)=\frac{1}{(2a\hbar)^{3/2}\pi^2}\int_0^\pi\int_0^{2\pi}\int_0^\infty e^{-i\mathbf{p\cdot x}/h-r_x/a}\mathrm d r_x\mathrm d\varphi_x \mathrm d\theta_x$$



Where the dot product in spherical coordinates is:
$$\mathbf{p\cdot x}=r_xr_p[\sin\theta_p\sin\theta_x\cos(\varphi_p-\varphi_x)+\cos\theta_p\cos\theta_x]$$




Substitute and integrate, we get:
\begin{align*}\phi(r_p, \varphi_p, \theta_p)&=\frac{1}{(2a\hbar)^{3/2}\pi^2ir_p}\int_0^{2\pi}\int_0^\pi \frac{\mathrm d\theta_x \mathrm d\varphi_x}{\sin\theta_p\sin\theta_x\cos(\varphi_p-\varphi_x)+\cos\theta_p\cos\theta_x}\\
\textrm{Now let: } a &=\sin\theta_p\cos(\varphi_p-\varphi_x)\\
b &=\cos\theta_p \\
\phi&=\frac{1}{(2a\hbar)^{3/2}\pi^2ir_p}\int_0^\pi \frac{\ln(\sqrt{a^2+b^2}-a)-\ln(\sqrt{a^2+b^2}+a)}{\sqrt{a^2+b^2}}\mathrm d\varphi_x\\
\end{align*}







The second integral is evaluated from wolfram alpha. From here I'm quite stuck. The final integral looks rather hopeless. Is there any tricks to Fourier transforms in space or did I make a mistake somewhere?



Edit: I think I accidentally substituted $2\pi$ instead of $\pi$, but the correct version is even more complicated.


Answer



Symmetry is your friend. Since $\psi$ is radially symmetric, so is $\phi$.
So wlog we can assume ${\bf p} = p \bf k$, and ${\bf p} . {\bf r} = p r \cos(\theta)$. Then



$$\eqalign{\phi(p {\bf k}) &= \dfrac{1}{(2 a \hbar)^{3/2} \pi^2} \int_0^{2\pi} d\varphi \int_{0}^\infty r^2 \; dr \int_{0}^{\pi} \sin(\theta)\; d\theta\; e^{ip r \cos(\theta)} e^{-r/a}\cr
&= \dfrac{1}{\sqrt{2} (a \hbar)^{3/2} \pi} \int_0^\infty r^2\; dr \, \dfrac{2 \sin(pr)}{pr} e^{-r/a}\cr
&= \dfrac{4 a^{3/2}}{\sqrt{2} \hbar^{3/2} \pi (a^2 p^2 + 1)^2}}$$


Computing the limit of $n cdot arccos left( left(frac{n^2-1}{n^2+1}right)^{cos (1/n)} right)$

I need to solve this earth's wonder:




$$\lim_{n \rightarrow \infty} \left[n \; \arccos
\left( \left(\frac{n^2-1}{n^2+1}\right)^{\cos \frac{1}{n}} \right)\right]$$





I have tried to write it down using $e^{v \ln u}$ and then used L'Hôpital's rule, but with no luck; I'm constantly getting indeterminate forms like $\infty-\infty$ inside $\ln$, which makes another application of L'Hôpital impossible.



My professor told me (with a great smile on his face) that if I use a Taylor expansion, it will lead me into the abyss...



any hints about possible rewriting this limit and what i should use would be VERY helpful. Thanks in advance.

real analysis - Convergence of $sumlimits_{n=1}^{infty}frac{1+(-1)^n}{n}$



I want to check, whether $\sum\limits_{n=1}^{\infty}\frac{1+(-1)^n}{n}$ converges or diverges.



Leibniz's test failed, and the ratio test just made it even more complicated, so I tried to use the comparison test, but I can't find a suitable series such that $\lim\limits_{n \rightarrow \infty} \frac{a_n}{b_n}$ exists.


Answer



To prove: $\sum_{n\geq 1} \frac{1+(-1)^n}{n}$ diverges.




Proof: \begin{align*} \sum _{n\geq 1} \frac{1+(-1)^n}{n} &= \sum _{k\geq 1} \frac{1+(-1)^{2k}}{2k} + \sum _{k\geq 1} \frac{1+(-1)^{2k-1}}{2k-1} \\
&= \sum _{k\geq 1} \frac{2}{2k} + \sum _{k\geq 1} \frac{0}{2k-1} \\\
&= \sum _{k\geq 1} \frac{1}{k} \end{align*}
Because $\sum _{k\geq 1} \frac{1}{k}$ diverges, $\sum_{n\geq 1} \frac{1+(-1)^n}{n}$ diverges as well. $\blacksquare$


conjectures - Does any sum of twin primes, where the sum is greater than 12, also represents the sum of 2 other distinct primes?

I was in the midst of proving a conjecture when I came across an observation that led me to forming a potentially new conjecture. The conjecture goes as follows:




Any given sum of twin primes (specifically the two primes in a twin prime pair, ie. $11$ and $13$) where the sum is greater than $12$, also represents the sum of 2 other distinct primes.




I've verified this for the first $1000$ twin primes and my computer is checking beyond that set. Anyway, does anybody have any ideas about how I could go about proving this conjecture? I apologize if this conjecture has already been posed.
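
Here is a brute-force verification sketch in Python (assuming sympy; the bound 500 is arbitrary) for small twin pairs:

from sympy import isprime, primerange

for p in primerange(3, 500):
    if not isprime(p + 2):
        continue                # skip unless (p, p+2) is a twin prime pair
    s = 2 * p + 2               # sum of the pair
    if s <= 12:
        continue
    # look for distinct primes q < s - q with q + (s - q) = s,
    # excluding the twin pair itself (q == p)
    found = [(q, s - q) for q in primerange(2, s // 2)
             if isprime(s - q) and q != p]
    print(p, p + 2, s, found[0] if found else "counterexample!")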

Friday 21 July 2017

proof writing - Prove gcd and common divisor



I have this math problem.





Let $a, b, m$ be any positive integers with $\gcd(a,m)=d$ and
$\gcd(b,m)=1$.



i) Show that if $k$ is a common divisor to $ab$ and $m$, then $k$
divides $d$.



ii) Use the result in part i) to conclude that $\gcd(ab, m)=d$.





I'm not 100% sure how to start this. Can I conclude that if $k\mid ab$, then $k\mid a$? If I can do that, then I can say that since $k\mid a$ and $k\mid m$, $k\mid d$.



Thanks


Answer



Yes, you can make that conclusion. The best way to check the validity of a claim like that is writing down a more detailed proof like below:



Let $k$ be a common divisor to $ab$ and $m$. Since $k|m$ and $(b,m) = 1$, $(b,k) = 1$. Thus $k|ab$ and $(b,k) = 1$ implies $k |a$, and thus also $k|d$.



For part (ii), $k\mid d$ implies $k \leq d$. Applying this to $k=\gcd(ab,m)$, which divides both $ab$ and $m$, gives $\gcd(ab,m) \leq d$. But since $d \mid ab$ and $d\mid m$, we know $\gcd(ab,m) \geq d$. Thus $\gcd(ab, m) = d$.



integration - Integral $int_{sqrt{33}}^inftyfrac{dx}{sqrt{x^3-11x^2+11x+121}}$


How can we prove $$I:=\int_{\sqrt{33}}^\infty\frac{dx}{\sqrt{x^3-11x^2+11x+121}}\\=\frac1{6\sqrt2\pi^2}\Gamma(1/11)\Gamma(3/11)\Gamma(4/11)\Gamma(5/11)\Gamma(9/11)?$$





Thoughts of this integral
This integral is in the form $$\int\frac{1}{\sqrt{P(x)}}dx,$$where $\deg P=3$. Therefore, this integral is an elliptic integral.
Also, I believe this integral is strongly related to Weierstrass elliptic function $\wp(u)$. In order to find $g_2$ and $g_3$, substitute $x=t+11/3$ to get $$I=2\int_{\sqrt{33}-11/3}^\infty\frac{dt}{\sqrt{4t^3-352/3t+6776/27}}$$
The question boils down to finding $\wp(I;352/3,-6776/27)$ but I seem to be on the wrong track.

elementary number theory - Modular multiplicative inverse proof

Does the concept of modular multiplicative inverse require a proof or is it taken as a definition?



Suppose $5/4 \equiv 3$ (mod $7$).




Can that even be written in the standard $a = bq + r$ notation and proven from there?
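
For what it's worth, the notation "$5/4$" is shorthand for $5\cdot 4^{-1}$, where $4^{-1}$ is the solution of $4x \equiv 1 \pmod 7$; existence of such an inverse whenever $\gcd(b,m)=1$ is what requires proof (via Bézout's identity). A quick check in Python (pow with exponent $-1$ needs Python 3.8+):

inv4 = pow(4, -1, 7)   # modular inverse: 4 * 2 = 8 = 1 (mod 7), so inv4 == 2
print(5 * inv4 % 7)    # 3, matching 5/4 = 3 (mod 7)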

number theory - Reed Solomon Polynomial Generator




I am developing a sample program to generate a 2D barcode, using a Reed-Solomon error correction code. I am working from this article, but I couldn't understand how the generator polynomial is constructed. Can anybody explain how they generated the generator polynomial? Please guide me through this correction step.



Help Appreciated,



Thanks



Sunny


Answer



Let me show a toy example of a case, where we can use $\alpha=2$. In the more general finite field we cannot think of $\alpha$ as having a numerical 'value'. I use the finite field $GF(11)$ that is isomorphic to the ring of residue classes of integers modulo 11. I assume that you are at least somewhat familiar with modular arithmetic. This way we can take a look at the use of the generator polynomial in encoding the message without having to worry about the construction of the finite field as well. Also the answer will be of a `reasonable' size :-)

$$
GF(11)=\{0,1,2,3,4,5,6,7,8,9,A=-1\},
$$
where $A$ stands for the residue class of $10\equiv-1\pmod{11}$. The Reed-Solomon codes use
the fact that the multiplicative group of non-zero elements of this (and any other) finite field
is cyclic, i.e. consists of powers of a carefully selected element $\alpha$. Trial and error shows that we can select $\alpha=2$, because $\alpha^0=1$, $\alpha^1=2$, $\alpha^2=4$,
$\alpha^3=8$, $\alpha^4=16= 5$, $\alpha^5=\alpha\cdot \alpha^4=2\cdot 5=10=-1$,
$\alpha^6=2\cdot A=20= 9$, $\alpha^7=2\cdot9=18= 7$, $\alpha^8=2\cdot7=14=3$, and
$\alpha^9=2\cdot3=6$. Note that i) I simply equate any two numbers that are congruent with each other modulo 11, because then the two numbers represent the same element of the field, ii) I won't get any new elements by continuing, because $\alpha^{10}=2\cdot6=12=1=\alpha^0$, so the powers of $\alpha$ repeat starting from the tenth power. A similar thing happens with all the finite fields.




In this toy example I describe the encoding procedure using an RS-code with alphabet $GF(11)$ that has $r=4$ check symbols (IOW the code will have minimum distance $r+1=5$ and
thus be able to correct up to $t=2$ errors, because $2t+1=5$). This type of RS-code can
carry a message consisting of up to six ($6=11-1-r$) symbols $m_0,m_1,m_2,m_3,m_4,m_5$
that are all elements of the field $GF(11)$. We could agree to use shorter messages, but
this time I go with the maximum. The encoding process expands this message into a longer sequence $c_0,c_1,c_2,\ldots,c_9$ of ten symbols from the field $GF(11)$. In order to make the algebra easier to describe we view such sequences as polynomials. So let $x$ be an unknown, and write
$$m(x)= m_0+m_1x+m_2x^2+m_3x^3+m_4x^4+m_5x^5$$
and
$$c(x)= c_0+c_1x+c_2x^2+c_3x^3+\cdots+c_9x^9.$$
For the error-correction to work as described, we must make sure that the polynomial $c(x)$ represents a valid codeword, so it has to be a multiple of the generator polynomial
of degree $r=4$

$$
g(x)=(x-\alpha)(x-\alpha^2)(x-\alpha^3)(x-\alpha^4)=(x-2)(x-4)(x-8)(x-5).
$$
As an exercise you are invited to verify that after expanding this product and reducing all the coefficients modulo 11 you get
$$
g(x)=x^4+3x^3+5x^2+8x+1.
$$
There are two common ways of turning the message polynomial $m(x)$ to a codeword $c(x)$
that is always divisible by $g(x)$. The simplest way (algebraically) is to declare
$$

c(x)=g(x)m(x).
$$
This is what is known (see e.g. the Wikipedia page) as non-systematic encoding, so e.g. the said Wikipedia page and
Dilip's answer denote this polynomial
$c_{nonsys}(x)$. For example, if the message sequence that you want to encode is
$(m_0,m_1,m_2,m_3,m_4,m_5)=(3,0,0,0,0,1)$, the message polynomial is $m(x)=3+x^5$, and
$$
c_{nonsys}(x)=g(x)m(x)=g(x)(x^5+3)=3 + 2x + 4x^2 + 9x^3 + 3x^4 + x^5 + 8x^6 + 5x^7 +
3x^8 + x^9,
$$

so the encoded message is the sequence $(3,2,4,9,3,1,8,5,3,1)$.



For practical reasons engineers often prefer to use so called systematic encoding. Dilip's answer (linked to above) gives you the following recipe: Compute the polynomial
$x^rm(x)=x^4(x^5+3)=x^9+3x^4$, and then compute the remainder, when you divide this
polynomial with the generator polynomial $g(x)$. The answer is $r(x)=4x+4x^2+x^3$.
Thus the polynomial
$$
c_{sys}(x)=x^4 m(x)-r(x)=x^9+3x^4-x^3-4x^2-4x=7x+7x^2+Ax^3+3x^4+x^9
$$
is also divisible by $g(x)$. This time the encoded sequence is thus $(0,7,7,A,3,0,0,0,0,1)$. The reason why this is called systematic is that you see the

payload message sequence $(3,0,0,0,0,1)$ at the end.



=======================



Added: Constructing finite fields of characteristic two.



Here we need more algebra. The field $GF(256)$ is of characteristic two. In other words, every element $\beta \in GF(256)$ satisfies the relation $\beta+\beta=0$. To give you the idea I first describe, how you get a smaller field $GF(8)$. This is just to save space.
The idea is that we want to list the elements of this field as powers of a special element
$\alpha$ the same way we used powers of two in the earlier example. A field will always have special elements $0,1$, so to get to eight elements we want the field to look like
$$

GF(8)=\{0,1,\alpha,\alpha^2,\alpha^3,\alpha^4,\alpha^5,\alpha^6\}.
$$
In the above example of $GF(11)$ the powers of $\alpha$ started repeating after the tenth power ($2^{10}\equiv 1\pmod{11}$). Here we want the powers to start repeating starting from the seventh ($7=8-1$, $10=11-1$), so we want $\alpha^7=1$. Furthermore, we want to be able to add elements of the field together, like $\alpha^3+\alpha^5$ or $1+\alpha^4$ should be one of the elements. The way to achieve this is to declare that $\alpha$ is a root of certain carefully chosen polynomial equation. This time we choose the equation
$$\alpha^3+\alpha+1=0.$$
IOW, $\alpha$ is a root of the polynomial $p(x)=x^3+x+1$.



How does that help? The idea is that then we can calculate with powers of $\alpha$, and always use that equation $p(\alpha)=0$ to reduce to lower powers of $\alpha$. This is much the same way as when you multiply complex numbers together, you use the equation $i^2=-1$, e.g. $(2+i)(3+i)=6+2i+3i+i^2=6+5i+i^2=6+5i-1=5+5i$. Only this time the equation only begins to help, when we reach the third power. Here
$$
\alpha^3=\alpha^3+0=\alpha^3+p(\alpha)=\alpha^3+\alpha^3+\alpha+1=1+\alpha,
$$

because $\alpha^3+\alpha^3=0$. Let's see what happens, when we reduce the powers of $\alpha$ using this relation. In the last column of the following table I always
list the fully reduced version of that power of $\alpha$.
$$
\eqalign{
\alpha^0&=&&=1,\\
\alpha^1&=&&=\alpha,\\
\alpha^2&=&&=\alpha^2,\\
\alpha^3&=&&=1+\alpha,\\
\alpha^4&=&\alpha\cdot\alpha^3=\alpha(1+\alpha)&=\alpha+\alpha^2,\\
\alpha^5&=&\alpha\cdot\alpha^4=\alpha(\alpha+\alpha^2)=\alpha^2+\alpha^3=\alpha^2+(1+\alpha)&=1+\alpha+\alpha^2,\\

\alpha^6&=&\alpha\cdot\alpha^5=\alpha(1+\alpha+\alpha^2)=\alpha+\alpha^2+\alpha^3=
\alpha+\alpha^2+(1+\alpha)&=1+\alpha^2,\\
\alpha^7&=&\alpha\cdot\alpha^6=\alpha(1+\alpha^2)=\alpha+\alpha^3=\alpha+(1+\alpha)&=1.
}$$
Here the last row was in a way superfluous, but I wanted to show you that this choice of the polynomial $p(x)$ leads to the desired conclusion that the powers of $\alpha$ start repeating after the seventh. Notice how a new line always depends on the preceding one, and how
the relation $\alpha^3=\alpha+1$ is applied as many times as necessary to get rid of all the cubic terms and higher.



Now you should notice that the end results in the last column contain all the quadratic polynomials of the form $b_0+b_1\alpha+b_2\alpha^2$, where all the coefficients $b_i,i=0,1,2$ are bits, i.e. elements of the set $\{0,1\}$. That this worked out in this way is kind of a miracle, but it happened, because we were smart in choosing the polynomial $p(x)$. Notice that $p(x)$ is of degree three, and $8=2^3$. Further notice that we can choose to represent
the elements of $GF(8)$ in two ways: either as a power of $\alpha$ or as a quadratic polynomial of $\alpha$. Which do we use? Depends on what we want to do! If we want to add
two elements of the field, we first switch to the quadratic polynomial form, so for example

$$
\alpha^3+\alpha^5=(1+\alpha)+(1+\alpha+\alpha^2)=(1+1)+(\alpha+\alpha)+\alpha^2=\alpha^2.
$$
On the other hand, if we want to multiply two elements of the field, we simply use the
fact that the powers start repeating after the seventh, so $\alpha^6\cdot\alpha^4=\alpha^{10}=
\alpha^{7}\cdot\alpha^3=1\cdot\alpha^3=\alpha^3$. Or, if the elements are given in the
quadratic polynomial form, then we read the table from right to left e.g.
$$
(1+\alpha)(1+\alpha+\alpha^2)=\alpha^3\cdot\alpha^5=\alpha^8=\alpha\cdot\alpha^7=\alpha.
$$

Observe that addition of two quadratic polynomials
$$
(a_0+a_1\alpha+a_2\alpha^2)+(b_0+b_1\alpha+b_2\alpha^2)=(c_0+c_1\alpha+c_2\alpha^2)
$$
amounts to (because of the $\beta+\beta=0$ rule) bitwise XORing of the bit strings, or
$c_0c_1c_2=(a_0a_1a_2)\oplus(b_0b_1b_2)$, which is the ^ (XOR) operator in C.



For these reasons I keep two look-up tables at hand when I implement a finite field of characteristic two in a program: one to convert the powers $\alpha^i$ to low degree polynomials, and another to go in the reverse direction.
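Here is a minimal sketch of those two tables for the $GF(8)$ example above (Python; the bit-pattern representation and the names are my own choices, and the rule $\alpha^3=\alpha+1$ appears as the XOR mask 0b1011):

# exp/log tables for GF(8) with p(x) = x^3 + x + 1,
# i.e. the reduction rule alpha^3 = alpha + 1.
# A 3-bit integer b0 + 2*b1 + 4*b2 stands for b0 + b1*alpha + b2*alpha^2.

exp_table = [0] * 7   # exp_table[i] = alpha^i as a bit pattern
log_table = {}        # reverse direction: bit pattern -> exponent

x = 1                 # alpha^0 = 1
for i in range(7):
    exp_table[i] = x
    log_table[x] = i
    x <<= 1           # multiply by alpha
    if x & 0b1000:    # an alpha^3 term appeared:
        x ^= 0b1011   # replace alpha^3 by alpha + 1 (XOR with the mask of p)

def gf8_add(a, b):
    return a ^ b      # addition is bitwise XOR

def gf8_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 7]

# The examples from the text: alpha^3 + alpha^5 = alpha^2,
# and (1 + alpha)(1 + alpha + alpha^2) = alpha.
assert gf8_add(exp_table[3], exp_table[5]) == exp_table[2]
assert gf8_mul(0b011, 0b111) == exp_table[1]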



Ok, so that was $GF(8)$, but you want $GF(256)$, where $256=2^8$. This time the field should look like

$$
GF(256)=\{0,1,\alpha,\alpha^2,\alpha^3,\ldots,\alpha^{254}\}.
$$
Now the powers of $\alpha$ start repeating starting from the $255^{th}$, so $\alpha^{255}=1$.
Instead of quadratic polynomials we now end up using the representation in the form
$$
b_0+b_1\alpha+b_2\alpha^2+\cdots+b_7\alpha^7
$$
in terms of $8$ bits $b_0,b_1,\ldots,b_7$. In other words, a single byte will represent an element of this field in the 'additive' form. How do we build the conversion table? To do that we need a suitable polynomial $p(x)$. This time $p(x)$ must be of degree 8. The most common choice is
$$

p(x)=x^8+x^4+x^3+x^2+1.
$$
The page that you linked to seems to be using this. The replacement rule that we get out of this is $\alpha^8=\alpha^4+\alpha^3+\alpha^2+1$. You can use this relation to get rid of
all the eighth powers (and higher!) of $\alpha$. This time the table would have 255 rows, so I hope you understand why I won't reproduce it here. Your link seems to have that table.
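The same table-building loop scales to $GF(256)$ unchanged; a sketch with $p(x)=x^8+x^4+x^3+x^2+1$ encoded as the bitmask 0x11D (again my own variable names):

# exp/log tables for GF(256). A byte b0 + 2*b1 + ... + 128*b7 stands for
# b0 + b1*alpha + ... + b7*alpha^7.

exp_table = [0] * 255
log_table = {}

x = 1
for i in range(255):
    exp_table[i] = x
    log_table[x] = i
    x <<= 1             # multiply by alpha
    if x & 0x100:       # an alpha^8 term appeared:
        x ^= 0x11D      # alpha^8 = alpha^4 + alpha^3 + alpha^2 + 1

def gf256_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 255]

# Because this p(x) is primitive, the loop really visits every one of the
# 255 nonzero bytes exactly once:
assert len(log_table) == 255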



For a list of other possible polynomials see this link. They give some further links there, too. The links from your other questions lead to some hopefully useful C-code.


Improper integral: $\frac{1}{\pi}\int^\infty_0 \frac{\sqrt{x}}{1+x}e^{-xt}\,dx$



Good evening! How could one evaluate the following integral $$\frac{1}{\pi}\int^\infty_0 \frac{\sqrt{x}}{1+x}e^{-xt}\,dx$$
I have tried the substitution $x \to x^2$ but still could not manage to get a final result. Any ideas would be really appreciated!

Also $t>0$.


Answer



Yes, sub $x=u^2$ to get



$$\frac{2}{\pi} \int_0^{\infty} du \frac{u^2}{1+u^2} e^{-t u^2} = \frac1{\pi} \int_{-\infty}^{\infty} du \left (1-\frac1{1+u^2} \right )e^{-t u^2}= \frac1{\sqrt{\pi t}} - \frac1{\pi} \int_{-\infty}^{\infty} du \frac{e^{-t u^2}}{1+u^2} $$



The latter integral may be evaluated a few different ways. One way is to multiply and divide by $e^{-t}$, and differentiate:



$$\frac1{\pi} \int_{-\infty}^{\infty} du \frac{e^{-t u^2}}{1+u^2} = \frac{e^t}{\pi} \int_{-\infty}^{\infty} du \frac{e^{-t (1+u^2)}}{1+u^2}$$




and



$$\frac{d}{dt} \int_{-\infty}^{\infty} du \frac{e^{-t (1+u^2)}}{1+u^2} = -e^{-t} \int_{-\infty}^{\infty} du \, e^{-t u^2} = - \frac{\sqrt{\pi}e^{-t}}{\sqrt{ t}} $$



Integrate back with respect to $t$ and get an error function:



$$ \int_{-\infty}^{\infty} du \frac{e^{-t (1+u^2)}}{1+u^2}= -\sqrt{\pi}\int^t dt' \frac{e^{-t'}}{\sqrt{ t'}} = C-\pi \operatorname{erf}{\sqrt{t}} $$



Noting that




$$\int_{-\infty}^{\infty} du \frac{1}{1+u^2} = \pi$$



which fixes $C=\pi$ (set $t=0$: the left side is then the integral just noted, and $\operatorname{erf}0=0$), so the inner integral equals $\pi\operatorname{erfc}\sqrt{t}$. We finally have the integral taking the value



$$\frac1{\sqrt{\pi t}} - e^t \operatorname{erfc}{\sqrt{t}} $$
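For what it's worth, a quick numerical check of this closed form (a sketch using SciPy; the choice $t=1$ is arbitrary):

import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

t = 1.0  # arbitrary test value, t > 0
numeric, _ = quad(lambda x: np.sqrt(x) / (1 + x) * np.exp(-t * x) / np.pi,
                  0, np.inf)
closed = 1 / np.sqrt(np.pi * t) - np.exp(t) * erfc(np.sqrt(t))
print(numeric, closed)  # the two values agree to quad's tolerance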


inequality - Why is Mathematical Induction used to prove solvable inequalities?



As a first year undergrad student I've seen problems where solvable inequalities need to be proven to hold in a specific domain using Mathematical Induction.




My question is: if the inequalities are solvable, i.e. if the domain on which the inequality holds can be solved for directly, why is there any need for induction to prove that the inequality holds?



For example this was one of the problems where I had to prove the inequality held true throughout the domain.



Prove using Mathematical Induction: $$n^2 \geq 2n+3, \quad \forall n \in \mathbb{Z},\ n \geq 3$$



But I could far easier solve the inequality for the domain where the inequality is true. As shown below.
$$ Let, P(n) = n^2 \geq 2n+3$$
$$ Let, D = \{n \in \mathbb{Z} | n \geq 3 \}$$

Required to prove:
$$P(n) \forall n \in D$$
Proof :
$$ n^2 -2n -3 \geq 0 $$
$$ (n-3)(n+1) \geq 0 $$
$$ (n \geq 3\wedge n \geq -1) \vee (n \leq 3 \wedge n \leq -1)$$
$$ (n \geq 3) \vee (n \leq -1)$$
$$ n \in (-\infty, -1] \cup [3, \infty)$$



Since $$\mathbb{Z} \subset \mathbb{R}$$

we have proven P(n) for all elements in D. $$Q.E.D$$



First off, is what I've done above mathematically correct, and furthermore is it formally correct? I've written what I set out to prove in symbolic form, which is why some of the proof may seem a bit redundant.



Secondly why would I need to use Mathematical Induction for cases like these where the domains in which the inequality holds true is easily solvable, what is the upside of using Mathematical Induction in this case?



Thirdly, I was told that Mathematical Induction works well for natural numbers and integers, but that it fails for real numbers. If that is correct, wouldn't that favor directly solving the inequality as I did above, since solving it gives the whole real domain on which it is true, if I'm not mistaken?



Finally if you have spotted any errors in my reasoning or proof writing techniques/skills, please feel free to criticize and inform me of it, I'm majoring in Math, so don't hold back on me.


Answer




As many users commented above, these sorts of fairly trivial questions are mainly given to students to increase their familiarity with Mathematical Induction.
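For the record, the induction for the quoted example is only a few lines (a sketch):

$$
\begin{aligned}
&\text{Base case } (n=3):\quad 3^2 = 9 \ge 9 = 2\cdot 3+3.\\
&\text{Step: if } n^2 \ge 2n+3 \text{ for some } n\ge 3, \text{ then}\\
&\qquad (n+1)^2 = n^2+2n+1 \ge (2n+3)+2n+1 = \bigl(2(n+1)+3\bigr)+(2n-1) \ge 2(n+1)+3,
\end{aligned}
$$

since $2n-1>0$ for every $n\ge 3$.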


Thursday 20 July 2017

sequences and series - Why does $\sum_{n = 0}^\infty \frac{n}{2^n}$ converge to 2?

Apparently,



$$
\sum_{n = 0}^\infty \frac{n}{2^n}
$$




converges to 2. I'm trying to figure out why. I've tried viewing it as a geometric series, but it's not quite a geometric series since the numerator increases by 1 every term.
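A few partial sums make the claimed value very plausible (a quick Python check; evidence, not a proof):

# Partial sums of sum_{n >= 0} n / 2^n; they should creep up to 2.
s = 0.0
for n in range(60):
    s += n / 2 ** n
    if n in (5, 10, 20, 59):
        print(n, s)   # 1.78125, 1.98828125, 1.9999790..., ~2.0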

real analysis - Proving that the Cesaro means of Fourier sums for continuous $f$ converge uniformly to $f$

Let $f$ be a $1$-periodic and continuous function on $\mathbb{R}$. I would like someone to verify that the following proof is correct. I prove that $\|\sigma_n(f)-f\|_\infty\to0$, where $\sigma_n(f)$ denotes the $n$-th Cesaro mean of the partial sums of the Fourier series of $f$. That being said, $\sigma_n(f)=f*K_n$, where $K_n$ is Fejer's kernel: $K_n(x)=\displaystyle\frac{1}{n+1}\biggl(\frac{\sin((n+1)\pi x)}{\sin(\pi x)}\biggr)^2$ (for $x\not\in\mathbb{Z}$; for $x\in\mathbb{Z}$ it is $K_n(x)=n+1$).



Now a popular and sharp estimate of Fejer's kernel is the following: for all $x\in[-1/2, 1/2]$ it is $K_n(x)\leq\Phi_{\frac{1}{n+1}}(x)$, where $\Phi_{\frac{1}{n+1}}(x)$ is a piecewise function: For $|x|<\frac{1}{2(n+1)}$ it is $\Phi_{\frac{1}{n+1}}(x)=n+1$ and for $\frac{1}{2(n+1)}\leq|x|<\frac{1}{2}$ it is $\Phi_\frac{1}{n+1}(x)=\frac{1}{4(n+1)x^2}$.



I am not going to prove the estimate; as I said, it's popular.



Now for our goal: It is $\|\sigma_n(f)-f\|_\infty\leq\displaystyle\sup_{x\in[-\frac{1}{2},\frac{1}{2})}|\sigma_n(f)(x)-f(x)|$. As proved here, since $f$ is $1$-periodic and continuous it is uniformly continuous; Let $\varepsilon>0$ and $\delta>0$ be the corresponding quantity. Let $x\in[-\frac{1}{2},\frac{1}{2})$. We have $|\sigma_n(f)(x)-f(x)|=|(f*K_n)(x)-f(x)|=|\int_{(-1/2,1/2)}f(x-y)K_n(y)dy-\int_{(-1/2,1/2)}f(x)K_n(y)dy|\leq\displaystyle{\int_{(-1/2,1/2)}|f(x-y)-f(x)|K_n(y)dy}\;\;(1)$




Now we can break the integral on RHS in two parts: The first, $I_1$, being the integral over $(-\delta,\delta)$ and the second, $I_2$, the integral over the rest of $(-1/2,1/2)$.



But $I_1=\displaystyle{\int_{(-\delta,\delta)}|f(x-y)-f(x)|K_n(y)dy\leq\varepsilon\int_{(-\delta,\delta)}K_n(y)dy\leq\varepsilon}$ (we used the fact that $\int_0^1K_n(y)dy=1$)



As for $I_2$, for $n$ large enough that $\frac{1}{2(n+1)}<\delta$, we have that $I_2=2\displaystyle{\int_{(\delta,\frac{1}{2})}|f(x-y)-f(x)|K_n(y)dy\leq4M\int_{(\delta,\frac{1}{2})}K_n(y)dy\leq 4M\int_\delta^\frac{1}{2}\frac{1}{4(n+1)y^2}dy=\frac{C_\varepsilon}{(n+1)}},$ where $M=\|f\|_\infty$; this quantity also gets arbitrarily small as we increase $n$.



Everything is independent of $x$; therefore the $\infty$-norm gets small as $n$ gets large.



Is there something wrong with my proof?
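As a numerical sanity check of the statement itself (a sketch with NumPy; the triangle-wave test function, the grid size and the FFT shortcut are my own choices, and the DFT only approximates the true Fourier coefficients):

import numpy as np

# f(x) = |x - 1/2| on [0,1) is continuous and 1-periodic.
N = 4096
x = np.arange(N) / N
f = np.abs(x - 0.5)

k = np.fft.fftfreq(N, d=1.0 / N)                    # integer frequencies
F = np.fft.fft(f)
for n in (4, 16, 64, 256):
    w = np.clip(1 - np.abs(k) / (n + 1), 0, None)   # Fejer weights
    sigma = np.fft.ifft(F * w).real                 # ~ sigma_n(f) on the grid
    print(n, np.max(np.abs(sigma - f)))             # sup-norm error shrinks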

trigonometry - General Sine and Cosine formula for sum of a finite number of angles

I was wondering whether there is a general formula for $\sin(x_1+x_2+x_3+...+x_n)$, as well as for the cosine function. I know that $\sin(x_1+x_2)=\sin(x_1)\cos(x_2)+\cos(x_1)\sin(x_2)$ and $\cos(x_1+x_2)=\cos(x_1)\cos(x_2)-\sin(x_1)\sin(x_2)$, but I want to find a general formula for the sum of a finite number of angles for the sine and the cosine, and I haven't noticed any pattern. I suspect that it may have a recursive pattern. Any suggestions and hints (not answers) will be appreciated.
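One way to generate expansions and hunt for the pattern yourself (a sketch using SymPy):

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
print(sp.expand_trig(sp.sin(x1 + x2 + x3)))
print(sp.expand_trig(sp.cos(x1 + x2 + x3)))
# In the sin expansion every term has an odd number of sine factors,
# in the cos expansion an even number, with alternating signs.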

calculus - Why does substitution work in antiderivatives?

I'm not entirely sure what I want to ask here, so please bear with me!



I think the explanation they give us in school for how finding the antiderivative by substitution works is: $$\int f(g(t))g'(t)dt=(Fg)(t)=F(g(t))=F(u)=\int f(u)du$$




But I never really understood the equality $F(u)=\int f(u)du$. Why can we behave as if the identity $u=g(t)$ doesn't exist, compute the integral $\int f(u)du$, and then 'substitute' $u$ in the result yielding the correct answer? In a sense, it feels like the variable $u$ switches from being 'meaningful' in $F(u)$ (where it stands for $g(t)$) and being insignificant in $\int f(u)du$, since as I perceive it the "variable name" here is meaningless and we could say $ \int f(u)du = \int f(k)dk = \int f(x)dx = ... $ all the same. Another way to ask the same thing: why $(Fg)(t)=\int f(u)du$ and not, say, $(Fg)(t)=\int f(x)dx$? Where am I getting confused?



Maybe someone can explain this better than my high school teacher? Is there a 'formal' explanation for this? Thank you a lot!
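A concrete instance (my own choice of example) may help separate the two roles of $u$. Take $f(u)=\cos u$ and $g(t)=t^2$:

$$
\int \cos(t^2)\, 2t\, dt = \Bigl(\int \cos(u)\, du\Bigr)\Big|_{u=t^2} = \sin(u)\Big|_{u=t^2} = \sin(t^2)+C.
$$

The middle antiderivative is computed with $u$ as a free variable (any letter would give the same function), and only afterwards is $u$ specialized to $g(t)$; differentiating $\sin(t^2)$ with the chain rule recovers the integrand, which is exactly why the bookkeeping is legitimate.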

algebra precalculus - Transforming this exponential function?



Here's the problem:





A biologist places agar, a gel made from
seaweed, in a Petri dish and infects it with
bacteria. She uses the measurement of
the growth ring to estimate the number of
bacteria present. The biologist finds that
the bacteria increase in population at an
exponential rate of $20\%$ every $2$ days.




a) If the culture starts with a population
of $5000$ bacteria, what is the
transformed exponential function in
the form $P = a(c)^{bx}$ that represents the
population, $P$, of the bacteria over time,
$x$, in days?




My problem is creating the exponential function.




What I did so far is $P = 5000(0.20)^{???x}$



Where $5000$ is the starting population, $0.20$ is the exponential rate, and I'm not sure how to transform the $x$ so that I could get a rate of "every $2$ days."



The textbook had an answer of:




$P = 5000(1.2)^{\frac{1}{2}x}$





I wanna understand how I could get this answer.



Why is the constant $1.2$? I'm assuming they added a $1$ for the first day of growth; but why would they do that?



Also, how is "every $2$ days" modeled by multiplying $x$ by $\frac{1}{2}$?


Answer



We know that $P(0) = 5000$, so if the form is $P = a(c)^{bx}$ then $5000 = a(c)^{b\cdot 0} = a$. Then we know that $\frac{P(x+2)}{P(x)} = 1.2$, so $\frac{5000c^{b(x+2)}}{5000c^{bx}} = 1.2$, which yields $c^{2b} = 1.2$. At this point, let $c > 1$ be whatever you want and calculate $b$. In this case, we can let $c=1.2$ since that's the rate that the problem provided. Then $b$ must be $\frac{1}{2}$.



The intuition is as you said: since the rate of growth is $0.2$ for every two days, if that's the rate we choose for $c$, then $b$ must be halved to account for it, as $x$ would otherwise be growing "twice as fast". Also, $c$ must be greater than $1$, as the growth rate adds on top of what was there before.
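A quick check of the textbook's function (Python; the sample days are arbitrary):

# Check of P(x) = 5000 * 1.2**(x/2), the textbook's answer.
P = lambda x: 5000 * 1.2 ** (0.5 * x)
print(P(0))          # 5000.0 -- the starting population
print(P(2) / P(0))   # 1.2    -- 20% growth over the first 2 days
print(P(9) / P(7))   # 1.2    -- and over any 2-day window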


Wednesday 19 July 2017

Functional equation: $f(f(x))=k$


If $k\in\Bbb R$ is fixed, find all $f:\Bbb R\to\Bbb R$ that satisfy $f(f(x))=k$ for all real $x$.





If $k\ge 0$, $f(x)=|k+g(x)-g(|x|)|$ is a solution for any $g:\Bbb R\to\Bbb R$.
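A spot check of that family (Python; the particular $k$ and $g$ are arbitrary choices of mine):

import math, random

k = 2.5                                  # any fixed k >= 0
g = lambda x: math.sin(3 * x) + x ** 3   # an arbitrary g
f = lambda x: abs(k + g(x) - g(abs(x)))

for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    # f(x) >= 0 always, and f is constantly k on [0, oo), so f(f(x)) = k.
    assert f(f(x)) == k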

combinatorics - Prove that $\sum_{k=0}^{n-r} \binom{r+k}{r} = \binom{n+1}{r+1}$



In my combinatorics book I have the following identity:




$$\binom{r}{r} + \binom{r+1}{r} + \binom{r+2}{r} + ... + \binom{n}{r} = \binom{n+1}{r+1}$$



More succinctly:



$$\sum_{k=0}^{n-r} \binom{r+k}{r} = \binom{n+1}{r+1}$$



I'm trying to reason about why this is the case. My textbook actually "proves" this identity (although I'm not convinced).




We break the ways to pick $r+1$ members of a committee from $n+1$ people into cases depending on who is the last person chosen: the $(r+1)$st, the $(r+2)$nd, ..., the $(n+1)$st. If the $(r+k+1)$st person is the last chosen, then there are $C(r+k, r)$ ways to pick the first $r$ members of the committee. [The identity] now follows. Q.E.D.




So... what are they trying to tell me? I'm guessing we need to partition the committee selection into multiple parts. But what do I partition the committee selection into, and why?



Of course, an algebraic proof is trivial for small examples, but is not practicable for the general case. Can anybody reword my book's committee-selection argument or offer a clearer proof?



Answer



Actually, an algebraic proof is possible; here's an example of one. Fix an integer $r$. We'll use induction on $n$. Of course, we need $n \geq r$, so our base case will be $n=r$. In that case, we have to prove



$$\binom{r}{r}=\binom{n+1}{r+1}$$



And it's obvious, because $n=r$ implies $\binom{n+1}{r+1}=\binom{r+1}{r+1}=1$. For the induction step, let's assume



$$\binom{r}{r}+\binom{r+1}{r}+\dots+\binom{n}{r}=\binom{n+1}{r+1} $$



Adding $\binom{n+1}{r}$ on both sides, we have




$$\binom{r}{r}+\binom{r+1}{r}+\dots+\binom{n}{r}+\binom{n+1}{r}=\binom{n+1}{r+1}+\binom{n+1}{r} $$



But $\binom{n+1}{r}+\binom{n+1}{r+1}=\binom{n+2}{r+1}$, which gives us



$$ \binom{r}{r}+\binom{r+1}{r}+\dots+\binom{n}{r}+\binom{n+1}{r} = \binom{n+2}{r+1}=\binom{(n+1)+1}{r+1} $$



So the identity is valid for $n+1$ too. By the Principle of Induction, we have



$$ \binom{r}{r}+\binom{r+1}{r}+\dots+\binom{n}{r}=\binom{n+1}{r+1} $$




for each $r$ integer and $n\geq r$
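The identity is also easy to spot-check by machine (Python; the ranges are arbitrary):

from math import comb   # Python 3.8+

for r in range(8):
    for n in range(r, 20):
        assert sum(comb(r + k, r) for k in range(n - r + 1)) == comb(n + 1, r + 1)
print("identity holds for all r < 8, r <= n < 20")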


calculus - Limits with trigonometric functions without using L'Hospital Rule.

I want to find the limits $$\lim_{x\to \pi/2} \frac{\cos x}{x-\pi/2} $$
and
$$\lim_{x\to\pi/4} \frac{\cot x - 1}{x-\pi/4} $$



and

$$\lim_{h\to0} \frac{\sin^2(\pi/4+h)-\frac{1}{2}}{h}$$
without L'Hospital's Rule.



I know the fundamental limits $$\lim_{x\to 0} \frac{\sin x}{x} = 1,\quad \lim_{x\to 0} \frac{\cos x - 1}{x} = 0 $$



Progress



Using $\cos x=\sin\bigg(\dfrac\pi2-x\bigg)$ I got $-1$ for the first limit.
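For what it's worth, evaluating the difference quotients numerically suggests the remaining targets (Python; suggestive only, not a proof):

import math

quotients = [
    (lambda x: math.cos(x) / (x - math.pi / 2),            math.pi / 2),
    (lambda x: (1 / math.tan(x) - 1) / (x - math.pi / 4),  math.pi / 4),
    (lambda h: (math.sin(math.pi / 4 + h) ** 2 - 0.5) / h, 0.0),
]
for f, a in quotients:
    print([f(a + eps) for eps in (1e-3, 1e-6)])
# approaches -1, -2 and 1 respectively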

complex analysis - Evaluate $\sum_{n=1}^\infty \frac{(-1)^{n+1}n^2}{n^4+1}$




Evaluate
$$\sum_{n=1}^\infty \frac{(-1)^{n+1}n^2}{n^4+1}$$





Does anyone have any smart ideas for how to evaluate such a sum? I know one solution with complex numbers and complex analysis, but I'm looking for smarter or more sophisticated methods.


Answer



I would not say that it is elegant, but:



The form $n^4+1$ in the denominator suggests that one should be able to get this series by expanding a combination of a hyperbolic and trigonometric function in a Fourier series.



Indeed, after some trial and error, the following function seems to work:



$$

\begin{gathered}
\left(\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)-\sin
\left(\frac{\pi }{\sqrt{2}}\right) \cosh \left(\frac{\pi }{\sqrt{2}}\right)\right)\cos \left(\frac{x}{\sqrt{2}}\right) \cosh \left(\frac{x}{\sqrt{2}}\right) \\
+ \left(\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)+\sin \left(\frac{\pi }{\sqrt{2}}\right) \cosh
\left(\frac{\pi }{\sqrt{2}}\right)\right)\sin \left(\frac{x}{\sqrt{2}}\right) \sinh
\left(\frac{x}{\sqrt{2}}\right)
\end{gathered}
$$



It is even, and its cosine coefficients are

$$
\frac{\sqrt{2}\bigl(\cos(\sqrt{2}\pi)-\cosh(\sqrt{2}\pi)\bigr)(-1)^{n+1} n^2}{\pi(1+n^4)},\quad n\geq 1.
$$
(The zeroth coefficient is also zero.) Evaluating at $x=0$ (the series converges pointwise there) gives
$$
\sum_{n=1}^{+\infty}\frac{(-1)^{n+1}n^2}{1+n^4}=
\frac{\pi\left(\sin
\left(\frac{\pi }{\sqrt{2}}\right) \cosh \left(\frac{\pi }{\sqrt{2}}\right)-\cos \left(\frac{\pi }{\sqrt{2}}\right) \sinh \left(\frac{\pi }{\sqrt{2}}\right)\right)}{\sqrt{2}\bigl(\cosh(\sqrt{2}\pi)-\cos(\sqrt{2}\pi)\bigr)}\approx 0.336.
$$
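A direct partial sum agrees with this closed form (a quick Python check; the cutoff is arbitrary):

import math

s = sum((-1) ** (n + 1) * n ** 2 / (n ** 4 + 1) for n in range(1, 200000))
a = math.pi / math.sqrt(2)
closed = (math.pi * (math.sin(a) * math.cosh(a) - math.cos(a) * math.sinh(a))
          / (math.sqrt(2) * (math.cosh(2 * a) - math.cos(2 * a))))
print(s, closed)   # both ~0.336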


Tuesday 18 July 2017

probability - How do I show that $E(g(X))= \int_{-\infty}^\infty g(x)f(x)\,dx$?

Given that $X$ is a continuous random variable with pdf $f$, and $g(x)$ is a nonnegative function.



How do I show that
$$
E(g(X))= \int_{-\infty}^\infty g(x)f(x)\,dx
$$
using the fact that $E(X) = \int_0^\infty P(X>x)\, dx$.




I attempted to prove this by plugging $g(X)$ into the second equation instead of just $X$. Then I took the inverse of $g$ to come up with just a cdf of $X$, and rewrote the cdf in its equivalent integral form, giving me an expression with a double integral. I have no idea how to move on from here.



Can anyone help me?
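A numerical illustration of the identity, not a proof (the choices $X\sim\mathrm{Exp}(1)$ and $g(x)=x^2$ are mine), with the right-hand side computed via the hinted fact applied to $Y=g(X)$:

import numpy as np
from scipy.integrate import quad

# LHS: E[g(X)] = int g(x) f(x) dx with f(x) = e^{-x} on (0, oo), g(x) = x^2
lhs, _ = quad(lambda x: x ** 2 * np.exp(-x), 0, np.inf)
# RHS: E[Y] = int_0^oo P(Y > t) dt, and P(X^2 > t) = P(X > sqrt(t)) = e^{-sqrt(t)}
rhs, _ = quad(lambda t: np.exp(-np.sqrt(t)), 0, np.inf)
print(lhs, rhs)   # both 2.0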

probability - Finding expectation and variance of a selection of three balls out of six?



I just asked this question, but worded it wrong, so while the given answers are useful, they still leave me confused given where I am in the progression through my stats book.



My problem is I've got 3 balls chosen (with replacement) out of 6, of which 2 of the 6 are blue. What is the expectation and variance of the number of blue balls picked?



In my original question, I included the idea of "chance" (see here: Probability of particular subset of balls occurring in a larger set chosen from a total?), but that led to a discussion of the binomial theorem, which I have not yet come to in my book, so it can't be expected in the answer (and is a bit ahead of me).




So far, I've covered expectation and variance of random variables, and probability theory. I have a very hard time with probability theory, so that's what is killing me here.



My approach here is that this uses linear combinations of element expectations and variances, but I'm at an almost total loss as to how to find the probabilities of these blue balls in the selection of 3, to establish a probability mass function that can be used for expectation and variance calculations.



In organizing the problem, I've got four conditions, where the number of blue balls is 0, 1, 2, and 3, but how to use combinations for finding the relative probabilities of each leaves me at a blank.



I know I'm choosing 3 out of the 6, so the possible combinations are: 6!/(3!(6-3)!) = 20. However, this being "with replacement" means that we'd be using permutations for establishing the number of possibilities of choosing 3, and not combinations, so this would be 6!/3! = 120, right?



I'm guessing this means that we'd need to find the possible permutations of 3 that include 0 balls, and divide this by 120 to get the probability, and then repeat for 1, 2, and 3? I'm not seeing how to do this part...


Answer





Thanks! That seems to explain it a bit, but I'm confused, as this is a question in my book where I have not even touched the binomial distribution. I've just covered linear combinations and expectation of random variables. I did word it slightly differently, where they specifically ask for the expectation and variance of the number of blue balls picked. (In slightly rewording it (my mistake), using "chance" might have led to the binomial distribution discussion.) I think it's very valuable, but I'm still confused.




Okay.   In this case let $X_i$ be the Boolean indicator that a favoured item is drawn on the $i^{th}$ draw.   Then since there are two favoured items in the population of six, then: $\mathsf E(X_i)= 1/3$ for all draws.   (Interestingly this bit is true whether we are drawing with or without replacement.)



The count of favoured items drawn in a run is the sum of the indicators. $$X=\sum_{i=1}^3 X_i$$



Since expectation is linear the expected count of favoured items in the run is the sum of the expected value of these indicators.
$$\begin{align}\mathsf E(X) & = \mathsf E(\sum_{i=1}^3 X_i) \\ & = \sum_{i=1}^3 \mathsf E(X_i) \\ & = 1\end{align}$$




The expected square of the count is slightly more involved.



$$\begin{align}
\mathsf E(X^2) & = \mathsf E(\sum_{i=1}^3 X_i \sum_{j=1}^3 X_j)
\\[1ex] & = \mathsf E\left(\sum_{i\in\{1,2,3\}} X_i^2 + \mathop{\sum\sum}_{
\substack{
i\in\{1,2,3\}\\[0ex]
j\in \{1,2,3\}\setminus\{i\}}} X_iX_j\right)
\\[1ex] & = 3\mathsf E(X_i^2) + 6\mathsf E(X_iX_j\mid i\neq j)

\\[1ex] & = 3(\tfrac 1 3) +6(\tfrac 1 9)
\\[2ex] & = \tfrac 5 3
\end{align}$$



The second-to-last step trips people up when they first encounter it.
$$\begin{align}
\mathsf E(X_i^2) & = 1^2\;\mathsf P(X_i=1) + 0^2\;\mathsf P(X_i=0)
\\[1ex] & = \tfrac 1 3
\\[2ex]
\mathsf E(X_iX_j\mid i\neq j) & = 1\cdot\mathsf P(X_i=1\cap X_j=1)+ 0\cdot\mathsf P(X_i=0 \cup X_j=0)

\\[1ex] & = \tfrac 1 9
\end{align}$$



You now have the mean and enough to find the variance, since $\mathsf{Var}(X)=\mathsf E(X^2)-\mathsf E(X)^2$.






NB: If we are drawing without replacement then $$
\begin{align}
\mathsf E(X_iX_j\mid i\neq j) & = \dfrac {\quad\binom{2}{2}\quad}{\binom{6}{2}}

\\[1ex] & = \tfrac 1 {15}
\\[2ex]
\therefore \mathsf E(X^2) & = \tfrac 7 5
\end{align}$$
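A Monte Carlo check of these numbers, for both sampling schemes (Python):

import random

def blues_drawn(replace):
    balls = [1, 1, 0, 0, 0, 0]   # two favoured (blue) balls out of six
    picks = (random.choices(balls, k=3) if replace
             else random.sample(balls, 3))
    return sum(picks)

for replace, ex2 in ((True, 5 / 3), (False, 7 / 5)):
    xs = [blues_drawn(replace) for _ in range(200000)]
    mean = sum(xs) / len(xs)
    mean_sq = sum(v * v for v in xs) / len(xs)
    print("replace =", replace, "E(X) ~", round(mean, 3),
          "E(X^2) ~", round(mean_sq, 3), "(exact: 1 and", ex2, ")")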


How can I show vectors are parallel and perpendicular using complex variables?




I have a question which asks:



If vectors $v_1$ and $v_2$ have associated complex numbers $z_1$ and $z_2$ respectively then express, in terms of $z_1$ and $z_2$, the fact that the two vectors are a) parallel and b) perpendicular. Then, using this information, find the conditions necessary for 4 points, $z_1,z_2,z_3,z_4$ to constitute a parallelogram.





This should be super easy but I'm getting hung up.



My attempt at a solution:



If the vectors are perpendicular, ${z_1}\cdot{z_2} = 0$ so $x_1x_2+y_1y_2=0$ or $\frac{\operatorname*{Re}(z_1)}{\operatorname*{Im}(z_1)}=-\frac{\operatorname*{Im}(z_2)}{\operatorname*{Re}(z_2)}$. I think this is fine, but I could be wrong.



If two vectors are parallel, their slopes should be equal, namely $\frac{\operatorname*{Im}(z_2)}{\operatorname*{Im}(z_1)}=\frac{\operatorname*{Re}(z_2)}{\operatorname*{Re}(z_1)}$. I also believe this is correct, but I could still be mistaken.



The parallelogram part is where I'm getting confused. Suppose for convention that the line joining $z_1$ and $z_3$ is parallel to the line joining $z_2$ and $z_4$. Similarly for the line joining $z_1$ and $z_2$, and $z_3$ and $z_4$. For there to be a parallelogram, I know that the lengths of the sides must be equal, so $|z_4-z_2| = |z_3-z_1|$ and $|z_4-z_3| = |z_2-z_1|$. This is fine.
However, how do I make sure the vectors constituting the parallel sides are, indeed, parallel?




Is it okay to use the parallel condition I used above if the vectors don't start at the origin? So, should I say that $\frac{\operatorname*{Im}(z_4-z_2)}{\operatorname*{Im}(z_3-z_1)}=\frac{\operatorname*{Re}(z_4-z_2)}{\operatorname*{Re}(z_3-z_1)}$ for one pair of sides, and a similar expression for the other pair?



If I can make anything clearer please let me know.


Answer




If the vectors are perpendicular, ${z_1}\cdot{z_2} = 0$ so $x_1x_2+y_1y_2=0$ or $\frac{\operatorname*{Re}(z_1)}{\operatorname*{Im}(z_1)}=-\frac{\operatorname*{Im}(z_2)}{\operatorname*{Re}(z_2)}$.




This can also be written as $\operatorname{Re}(z_1 \bar z_2) = 0\,$, which is the same as $\arg(z_1)-\arg(z_2) = \pm \pi /2\,$.





If two vectors are parallel, their slopes should be equal, namely $\frac{\operatorname*{Im}(z_2)}{\operatorname*{Im}(z_1)}=\frac{\operatorname*{Re}(z_2)}{\operatorname*{Re}(z_1)}$.




This can also be written as $\operatorname{Im}(z_1 \bar z_2) = 0\,$, same as $\arg(z_1)-\arg(z_2) = 0$ or $\pi$.




The parallelogram part is where I'm getting confused.





A quadrilateral is a parallelogram iff two opposite sides are parallel and equal. For $z_1\,z_2\,z_3\,z_4$ to be the vertices of a parallelogram (in this order) the necessary and sufficient condition is $z_2-z_1=z_3-z_4\,$. Another way to write it is $z_1+z_3=z_2+z_4\,$, which corresponds to the condition that the diagonals intersect in their respective midpoints.
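These conditions translate directly into Python's complex type (a sketch; the tolerance and the sample points are my own choices):

def perpendicular(z1, z2, tol=1e-12):
    return abs((z1 * z2.conjugate()).real) < tol   # Re(z1 * conj(z2)) = 0

def parallel(z1, z2, tol=1e-12):
    return abs((z1 * z2.conjugate()).imag) < tol   # Im(z1 * conj(z2)) = 0

def is_parallelogram(z1, z2, z3, z4, tol=1e-12):
    return abs((z1 + z3) - (z2 + z4)) < tol        # diagonals share a midpoint

assert perpendicular(1 + 2j, -2 + 1j)
assert parallel(1 + 2j, 2 + 4j)
assert is_parallelogram(0, 1, 1 + 1j, 1j)          # the unit square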


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...