Wednesday 31 January 2018

real analysis - Determine the sum $\sum_{n=1}^\infty \left( \frac{1}{(4n)^2-1}-\frac{1}{(4n+2)^2+1}\right)$ using the Fourier series of $|\cos x|$

I need to determine the sum $$\sum_{n=1}^\infty \left( \frac{1}{(4n)^2-1}-\frac{1}{(4n+2)^2+1}\right)$$ using the Fourier series of $\lvert \cos x\rvert$ on the interval $ [-\pi,\pi]$.



I have already calculated the Fourier series and I get this:
$$\lvert \cos x\rvert= {\frac2\pi}+\sum_{n=2}^\infty{\frac4\pi}\frac{\cos(n\frac\pi2)}{1-n^2}\cos(nx)$$




I do not know how to manipulate the Fourier series to get that specific sum.
I tried the following, but it did not get me anywhere:
$$\sum_{n=2}^\infty{\frac4\pi}\frac{{\frac12}\cos(n\frac\pi2-nx)-{\frac12}\cos(n\frac\pi2+nx)}{1-n^2}$$
$$-\sum_{n=2}^\infty{\frac4\pi}\left(\frac{\cos(n\frac\pi2-nx)}{2n^2-2}-\frac{\cos(n\frac\pi2+nx)}{2n^2-2}\right)$$

Tuesday 30 January 2018

abstract algebra - Simple proof for independence of square roots of two primes



Consider the following problem:





Let $p$ and $q$ be two distinct prime numbers. Show that $\sqrt{p}$
and $\sqrt{q}$ are independent over $\mathbb{Q}$, which means that:



$\forall a,b \in \mathbb{Q}: a\sqrt{p} + b\sqrt{q} = 0 \Rightarrow a = b = 0$




I'm well aware of how to prove this for a sequence $p_i$ of primes and thus a sequence $\sqrt{p_i}$ of prime roots using Galois theory, but I want to show some students a very elementary and short proof for just two prime roots. Those students are only at the beginning of an elementary algebra course and have not learned about anything like field traces. Is this possible?


Answer



I wanted to construct a proof of this using means as elementary as possible, avoiding if at all feasible "big gun" results such as the fundamental theorem of arithmetic, which in the following has been supplanted by repeated application of Bezout's identity:




If $\sqrt p$ and $\sqrt q$ are dependent over $\Bbb Q$, they satisfy a relation of the form



$r\sqrt p + s\sqrt q = 0, \; 0 \ne r, s \in \Bbb Q; \tag 0$



by clearing the denominators of $r$ and $s$ we find there exist $0 \ne a, b \in \Bbb Z$ with



$a\sqrt p + b\sqrt q = 0, \tag 1$



and we may clearly further assume




$\gcd(a, b) = 1; \tag 2$



from (1) we have, upon multiplication by $\sqrt p$,



$ap + b\sqrt{pq} = 0, \tag 3$



whence



$ap = -b\sqrt{pq}; \tag 4$




we square:



$a^2 p^2 = b^2 pq, \tag 5$



and divide through by $p$:



$a^2 p = b^2 q \Longrightarrow p \mid b^2 q; \tag 6$



now since $p, q \in \Bbb P$ are distinct, $p \ne q$, we have




$\gcd(p, q) = 1, \tag 7$



and thus



$\exists x, y \in \Bbb Z, \; xp + yq = 1, \tag 8$



which further implies



$xpb^2 + yqb^2 = b^2 \Longrightarrow p \mid b^2, \tag 9$




since



$p \mid pb^2, \; p \mid qb^2; \tag{10}$



now with $p \in \Bbb P$,



$p \not \mid b \Longrightarrow \gcd(p, b) = 1, \tag{11}$



whence




$\exists z, w \in \Bbb Z, \; zp + wb = 1, \tag{12}$



and so



$zpb + wb^2 = b \Longrightarrow
p \mid b \Rightarrow \Leftarrow p \not \mid b, \tag{13}$



as assumed in (11); thus in fact




$p \mid b \Longrightarrow \exists c \in \Bbb Z, \; b = pc \Longrightarrow b^2 = p^2c^2, \tag{14}$



and thus (6) becomes



$a^2 p = c^2p^2 q \Longrightarrow a^2 = c^2pq \Longrightarrow p \mid a^2; \tag{15}$



now repeating in essence the argument of (11)-(13) proves that $p \mid a$, which is of course precluded by (2), lest $p \mid \gcd(a, b) = 1$.



We thus see that there can be no relation of the form (0) for $p, q \in \Bbb P$ distinct; $p$ and $q$ are independent over $\Bbb Q$.




The informed reader, upon careful scrutiny, will note that this demonstration also has much in common with the classic proof that $\sqrt 2 \notin \Bbb Q$, which truth be told inspired my conception of this answer.


elementary number theory - Find $6^{1000} \bmod 23$





Find $6^{1000} \mod 23 $





Having just studied Fermat's little theorem, I applied $6^{22}\equiv 1 \mod 23 $, but now I am quite clueless about the best way to proceed.



This is what I've tried:



Raising everything to the $4$th power I have $$6^{88} \equiv 1 \mod 23 $$ $$6^{100} \equiv 6^{12} \mod 23 $$ $$6^{1000}\equiv 6^{120} \mod 23 $$ $$6^{1000} \equiv 6^{10} \mod 23$$ How do I now simplify the right-hand side of the congruence?


Answer



We may exploit the fact that $1000\equiv 10\pmod{22}$ to get:
$$ 6^{1000}\equiv 6^{10} \equiv (6^5)^2 \equiv 2^2 \equiv\color{red}{4}\pmod{23}.$$
As an alternative, we may notice that $a^{11}\pmod{23}$ is the Legendre symbol $\left(\frac{a}{23}\right)$, so:




$$ 6^{11} \equiv \left(\frac{2}{23}\right)\cdot\left(\frac{3}{23}\right) \equiv 1\cdot 1\equiv 1\pmod{23} $$
gives $6^{1001}\equiv 1\pmod{23}$ and by multiplying both sides by the inverse of $6$ we get $4$ as above.
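
For a quick numeric sanity check of all three routes, here is a short sketch using Python's built-in three-argument pow (which performs modular exponentiation):

    print(pow(6, 1000, 23))   # 4
    print(pow(6, 10, 23))     # 4, since 1000 ≡ 10 (mod 22)
    print(pow(6, 1001, 23))   # 1, confirming the Legendre-symbol argument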


radicals - Direct Irrationality Proof for $\sqrt{3}$ and $\sqrt{6}$

I am having trouble with proving this directly. I am currently learning about greatest common divisors and know that this has a role in the proof. However, I can only prove the two through contradiction and not directly.

abstract algebra - Uniqueness of greatest common divisor



Suppose $R$ is an integral domain and $a,b\in R$ have no common factors other than units. Then $1$ is a greatest common divisor of $a$ and $b$.



Is $1$ the only greatest common divisor, or can there be others?


Answer



The GCD is very rarely unique. If $x$ is a GCD of $a$ and $b$, then so is any associate of $x$, that is, any number of the form $ux$ with $u$ a unit. This is easily verifiable from the definition





$\gcd(a,b)=x$ if $x$ is a common divisor of $a$ and $b$ and is divisible by all other common divisors of $a$ and $b$




Thus the GCD is only unique when you have a ring whose only unit is $1$. In your particular case, every unit is a GCD.


calculus - Simplest way to integrate this expression: $\int_{-\infty}^{+\infty} e^{-x^2/2} dx$




I'm toying around with statistics and calculus for a project of mine and I'm trying to find the simplest/fastest way to evaluate this integral:



$$\int_{-\infty}^{+\infty} e^{-x^2/2} dx$$





  • I do not want to use a table.

  • I'm taking this opportunity to get more practice with my new calculus skills

  • It seems that a Taylor series approx is the only way to go



Best Regards


Answer



If we set $$I := \int_{\mathbb{R}} \exp \left(- \frac{x^2}{2} \right) \, dx,$$

then



$$I^2 = \int_{\mathbb{R}} \int_{\mathbb{R}} \exp \left( - \frac{x^2+y^2}{2} \right) \, dx \, dy.$$



Introducing polar coordinates, i.e.



$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} r \cos \varphi \\ r \sin \varphi \end{pmatrix},$$



yields




$$I^2 = \int_{r=0}^{\infty} \int_{\varphi=0}^{2\pi} e^{-r^2/2} r \, dr \, d\varphi = \left( \int_{0}^{\infty} r e^{-r^2/2} \, dr \right) \left( \int_{\varphi=0}^{2\pi} d \varphi \right).$$



This expression can be easily calculated.
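
For completeness, both factors are elementary: $$\int_0^\infty r e^{-r^2/2}\, dr = \left[-e^{-r^2/2}\right]_0^\infty = 1, \qquad \int_0^{2\pi} d\varphi = 2\pi,$$ so $I^2 = 2\pi$ and hence $I = \sqrt{2\pi}$.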


Monday 29 January 2018

special polynomials

I'm searching for a polynomial $f$ of degree 4 with the following property: $f$ and all its derivatives have the maximum number of integer roots.



Concretely formulated:



$$\begin{eqnarray*}
f(x) & = & (x-a)(x-b)(x-c)(x-d) \\
f'(x) & = & 4(x-e)(x-f)(x-g) \\
f''(x) & = & 12(x-h)(x-i) \\
f'''(x) & = & 24(x-j)
\end{eqnarray*}
$$



should be satisfied simultaneously with distinct integers $a,b,c,d$, distinct integers $e,f,g$, distinct integers $h,i$ and an integer $j$.



My conjecture is that there is no such polynomial. For degree 3, there are solutions. Can anyone either prove this or find a counterexample?
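
A brute-force way to hunt for counterexamples (a sketch assuming sympy is available; the search window and the helper all_integer_roots are my own, hypothetical choices):

    from itertools import combinations
    from sympy import symbols, Mul, diff, roots, degree

    x = symbols('x')

    def all_integer_roots(poly):
        # roots() returns {root: multiplicity}; require all roots found,
        # each simple and integer-valued
        r = roots(poly, x)
        found_all = sum(r.values()) == degree(poly, x)
        return found_all and all(m == 1 and v.is_integer for v, m in r.items())

    for a, b, c, d in combinations(range(-8, 9), 4):
        f = Mul(*[(x - t) for t in (a, b, c, d)])
        if all(all_integer_roots(diff(f, x, k)) for k in (1, 2, 3)):
            print(a, b, c, d)

Since the property is invariant under integer translation of the roots, a bounded window around $0$ already covers many essentially distinct cases.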

Finding modular of a fraction

I'm really into cryptography, and to find the private key of a message I need to use modular arithmetic. I understand modular arithmetic with whole numbers using a clock, but I get really stuck when I get to fractions, for example:





1/3 mod 8




How do I reduce a fraction modulo an integer? Is there a method for finding this?



Thanks in advance!
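
The standard trick is to read $1/3$ as the multiplicative inverse of $3$ modulo $8$ (it exists because $\gcd(3,8)=1$). A minimal sketch in Python (the built-in pow supports modular inverses since Python 3.8):

    inv = pow(3, -1, 8)          # inverse of 3 mod 8
    print(inv)                   # 3, since 3 * 3 = 9 ≡ 1 (mod 8)

    # more generally, a/b mod n is a * b^{-1} mod n (when gcd(b, n) = 1)
    a, b, n = 5, 3, 8
    print((a * pow(b, -1, n)) % n)   # 7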

algebra precalculus - What are the solutions for the following equation?




I have the following equation:



$\log_{2x}4x+\log_{4x}16x=4$



What are the solutions of this equation?



This is what I did:



Firstly, I applied the following conditions:




$2x>0 \Rightarrow x>0$



$2x \ne 1 \Rightarrow x \ne \dfrac{1}{2}$



$4x>0 \Rightarrow x>0$



$4x \ne 1 \Rightarrow x \ne \dfrac{1}{4}$



$16x > 0 \Rightarrow x > 0$




Considering all of these at once, I have: $x \in (0, + \infty) \setminus \bigg\{\dfrac{1}{2}, \dfrac{1}{4} \bigg \}$



Next, I wrote the equation like so:



$\log_{2x}(2x\cdot 2) + \log_{4x}(4x\cdot 4)=4$



$\log_{2x}2x + \log_{2x}2 + \log_{4x}4x+\log_{4x}4 = 4$



$1 + \log_{2x}2 + 1 + \log_{4x}4 = 4$




$\log_{2x}2 + \log_{4x}4 =2$



$\log_{2x}2 + 2\log_{4x}2=2$



$\log_{2x}2 + \dfrac{2}{\log_{2}4x}=2$



$\log_{2x}2 + \dfrac{2}{\log_2 2 + \log_2 2x}=2$



$\log_{2x}2+\dfrac{2}{1+\dfrac{1}{\log_{2x}2}}=2$




Then I used the substitution $\log_{2x}2=t$



$t+ \dfrac2{1+ \dfrac{1}{t}}=2$



$...$



$t^2+t-2=0$



$(t+2)(t-1)=0$




In the case that $t=-2$,



$\log_{2x}2=-2$



$(2x)^{-2}=2$



$\dfrac{1}{4x^2}=2$



$x^2= \dfrac{1}{8}$




$x = \pm \dfrac{1}{2 \sqrt{2}}$, but because of our conditions: $x = \dfrac{1}{2 \sqrt{2}}$



And in the case that $t=1$,



$\log_{2x}2=1$



$2x=2$



$x=1$




So, in the end I got: $x \in \bigg \{ \dfrac{1}{2 \sqrt{2}}, 1 \bigg \}$. The only problem with this is that it is completely wrong...



My textbook says that the correct answer should be: $\bigg [ \dfrac{1}{2 \sqrt{2}}, 2 \bigg ]$. So you see I'm not even close, I need an interval of values, not just $2$ solutions like I got. So, where did I make mistakes and what should I do to get the right answer?


Answer



As mentioned in the comments, your answer is perfectly correct and the textbook is the one which is wrong. You can check this by substituting the values in. Another way to see that it certainly cannot be an interval of solutions is by noticing that the equation $$\log_{2x}2+\log_{4x}4=2,\tag{1}$$
which is equivalent to the given one, has a LHS which changes with $x$, and on no interval $[a,b]$ is it constant. So it is absurd that there can be an interval on which the LHS is constant at $2$.



I will say though that in questions involving the logarithm like this one, a smart general approach is to convert everything to the same base. So starting from $(1)$, write
$$\frac{1}{1+\log_2(x)}+\frac{2}{2+\log_2(x)}=2,$$
and then substitute $y=\log_2(x)$ to solve for $y$ and hence for $x$. This is slightly cleaner than your solution, but of course yours is correct too.



Sunday 28 January 2018

matrices - Estimate the diagonal elements of a real positive definite matrix from its eigenvalues



Usually one has the matrix and wishes to estimate the eigenvalues, but here it's the other way around: I have the positive eigenvalues of an unknown real positive definite matrix and I would like to say something about its diagonal elements.



The only result I was able to find is that the sum of the eigenvalues coincides with the trace of the matrix, does anyone know of anything more specific? Or perhaps can point me to any literature that discusses this problem?


Answer




There's probably not a lot more to say in terms of bounds. Look at
$$\begin{bmatrix}
p & q \\ r & 3-p
\end{bmatrix}.$$
Its characteristic polynomial is
$$
x^2 - 3x + p(3-p) - rq = 0
$$
whose values at $x = 1$ and $x = 2$ are identical. By making $rq = p(3-p) - 2$, for instance by choosing $r = p(3-p) - 2$ and $q = 1$, we get the value 0 at $x = 1, 2$. That means that $1$ and $2$ are the eigenvalues of the matrix, but as $p$ varies over $\mathbb R$, you get every possible diagonal pair that sums up to $1+2$. In other words: the trace constraint, and no other, holds. Since you can take a whole bunch of $2 \times 2$ blocks like this down the diagonal of a $2n \times 2n$ matrix, there are similar freedoms in larger matrices, although this doesn't prove that the diagonal entries are completely free in the sense above -- that might take another page of work; it merely suggests that this more general statement might be true.


summation - Induction: $\sum_{k=1}^{2n} (-1)^k k = n$





Use proof by induction to show: $\sum_{k=1}^{2n} (-1)^k k = n$




I know how to show the base step of this problem, but in showing the inductive step I am having trouble determining how to show they are equal.


Answer



Suppose it is true for $n-1$.
Then $$\sum_{k=1}^{2n} (-1)^k k =$$ $$=\left( \sum_{k=1}^{2(n-1)} (-1)^k k\right)+(-1)^{2n-1}\cdot(2n-1)+(-1)^{2n}\cdot(2n)=$$
$$=(n-1)-(2n-1)+2n=n,$$
so it is also true for $n$.



Saturday 27 January 2018

trigonometry - Finding a closed form for $\cos{x}+\cos{3x}+\cos{5x}+\cdots+\cos{(2n-1)x}$




We have to find




$$g(x)=\cos{x}+\cos{3x}+\cos{5x}+\cdots+\cos{(2n-1)x}$$




I could not come up with a good idea.




Initially I thought of using



$$\cos a+\cos b=2\cos\frac{a+b}{2}\cos\frac{a-b}{2}$$


Answer



Let $z=\cos\theta+i\sin\theta$ i.e. $z=e^{i\theta}$



Your sum is $$e^{i\theta}+e^{3i\theta}+e^{5i\theta}+\cdots+e^{(2n-1)i\theta}$$



This is a GP with common ratio $e^{2i\theta}$




Therefore the sum is $$\frac{a(r^n-1)}{r-1}$$
$$\frac{e^{i\theta}(e^{2ni\theta}-1)}{e^{2i\theta}-1}$$
$$\frac{(\cos \theta+i\sin\theta)(\cos(2n\theta)+i\sin(2n\theta)-1)}{\cos(2\theta)+i\sin(2\theta)-1}$$



Computing its real part should give you the answer.
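
For reference, the real-part computation can be finished by writing $e^{2i\theta}-1 = 2i\sin\theta\, e^{i\theta}$, which gives $$\frac{e^{i\theta}(e^{2ni\theta}-1)}{e^{2i\theta}-1} = \frac{e^{2ni\theta}-1}{2i\sin\theta} = \frac{\sin(2n\theta) - i(\cos(2n\theta)-1)}{2\sin\theta},$$ so the sum equals $\dfrac{\sin(2n\theta)}{2\sin\theta}$.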



Acknowledgement: due credit to @LordShark's idea.


calculus - Limit as n goes to infinity of a limit involving factorials and exponents



$$\lim_{n \to \infty} \dfrac{3((n+1)!)(n-1)}{3^n + (n!)n^2} $$



I'm stuck with this limit. I managed to rewrite it to the following form: $$\dfrac{3n-3}{\dfrac{3^n}{(n+1)!}+\dfrac{n^2}{(n+1)}}$$ but I don't know how to further simplify it. To me it seems like this goes to 0 because $3^n$ grows faster than any other term in the function, but wolfram alpha tells me it's $3$.


Answer




Divide the top and bottom by $n!$:
$$\lim_{n\to\infty} \dfrac{3((n+1)!)(n-1)}{3^n+(n!)n^2}=\lim_{n\to\infty} \dfrac{3(n+1)(n-1)}{\dfrac{3^n}{n!}+n^2}=\lim_{n\to\infty}\dfrac{3n^2-3}{n^2}=\lim_{n\to\infty}3-\dfrac{3}{n^2}=3$$
The thing is that $n!$ grows way faster than $3^n$ once $n>3$.
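
A numeric sanity check (plain Python; the sample values of $n$ are arbitrary):

    from math import factorial

    for n in [5, 10, 50, 200]:
        print(n, 3 * factorial(n + 1) * (n - 1) / (3**n + factorial(n) * n**2))

The printed ratios approach $3$, in line with the limit.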


Friday 26 January 2018

Possible proof that $1 = 2$? No, it can't be! So why do I keep concluding that?



So my teacher gives me a problem to work with:






Let $n = \sqrt{1 + \sqrt{1 + \sqrt{1 + \sqrt{\cdots}}}} \ $ then how must you solve for $n$?



Solution:



$$n = \sqrt{1 + \sqrt{1 + \sqrt{1 + \sqrt{\cdots}}}} \Rightarrow n = \sqrt{1 + n} \Rightarrow n^2 - 1 = n \Rightarrow n^2 - n - 1 = 0$$ $$\therefore n = \frac{1\pm \sqrt{5}}{2}$$ However, $n > 0$ $$\therefore n = \frac{1 + \sqrt{5}}{2} = 1.6180339887\ldots = φ \neq \frac{1 - \sqrt{5}}{2}$$ Now we can substitute $n = φ$ into the following equation to double-check: $$\begin{align} n^2 - 1 &= n \Rightarrow φ^2 - 1 = φ \Rightarrow \bigg(\frac{1 + \sqrt{5}}{2}\bigg)^2 - 1 = \frac{(1 + \sqrt{5})^2}{2^2} - 1 = \frac{6 + 2\sqrt{5}}{4} - 1 \\ &= \frac{6 + 2\sqrt{5}}{4} - \frac{4}{4} = \frac{2 + 2\sqrt{5}}{4} = \frac{2}{2}\times \frac{1 + \sqrt{5}}{2} = 1\times \frac{1 + \sqrt{5}}{2} = \boxed{\frac{1 + \sqrt{5}}{2}} \quad \color{green}{\checkmark} \end{align}$$






My teacher said, "Very good... Now try this problem."





Let $n = \frac{2}{3 - \frac{2}{3 - \frac{2}{3 - \frac{2}{\cdots}}}} \ $ then how do you solve for $n$?





No matter what I do, I always get that $n = 1 = 2 \Rightarrow \boxed{1 = 2}$ !!! I used the same method as I did for the previous question, so why isn't it working? And of course my teacher just said in response, "work on it as part of your homework". Could somebody please help me?




Thank you in advance.


Answer



If you were to use the same method as in your first example (substituting $n$ back into your infinite continued fraction), you would get



$$ 3-n=\frac{2}{n}.$$
You can simplify to obtain $3n-n^2=2$, which is valid as long as $n \neq 0$.
By factorising, you get the solutions $n = 1$ or $n = 2$, not $1=2$.




Hope this helps!


real analysis - I want to show that $f(x)=x\cdot f(1)$ where $f:\mathbb{R}\to\mathbb{R}$ is additive.










I know that if $f$ is continuous at one point then it is continuous at every point.
From this I want to show that $f(x)=xf(1).$

Can anybody help me prove this?


Answer



HINTS:




  1. Look at $0$ first: $f(0)=f(0+0)=f(0)+f(0)$, so $f(0)=0=0\cdot f(1)$.


  2. Use induction to prove that $f(n)=nf(1)$ for every positive integer $n$, and use $f(0)=0$ to show that $f(n)=nf(1)$ for every negative integer as well.


  3. $f(1)=f\left(\frac12+\frac12\right)=f\left(\frac13+\frac13+\frac13\right)=\dots\;$.


  4. Once you’ve got it for $f\left(\frac1n\right)$, use the idea of (2) to get it for all rationals.


  5. Then use continuity at a point.




modular arithmetic - computing modulo with large numbers

I want to compute $$36^{293}\equiv \alpha \quad \text{mod}\, 1225284684$$ with a pocket calculator, but I'm not sure how to do this, because the modulus is so large. Is there a way to do this?
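
For what it's worth, here is a sketch of the square-and-multiply bookkeeping you would replicate on the calculator, reducing modulo $n$ after every step so nothing overflows the display (plain Python; the built-in pow(36, 293, 1225284684) gives the same answer directly):

    n = 1225284684
    e = 293                  # binary: 100100101
    base, result = 36, 1
    while e:
        if e & 1:
            result = (result * base) % n
        base = (base * base) % n   # repeated squaring
        e >>= 1
    print(result, pow(36, 293, n))

On a pocket calculator you keep only the nine successive squares of $36$ modulo $n$ and multiply together the ones corresponding to the $1$-bits of $293$.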

real analysis - Where does this sequence $\sqrt{7}$, $\sqrt{7+\sqrt{7}}$, $\sqrt{7+\sqrt{7+\sqrt{7}}}$, ... converge?





The given sequence is $\sqrt{7}$,$\sqrt{7+ \sqrt{7}}$,$\sqrt{7+\sqrt{7+\sqrt{7}}}$,.....and so on.



The sequence is increasing, so to converge it must be bounded above. It looks like the terms will not exceed $7$. The given options are




  1. ${1+\sqrt{33}}\over{2}$


  2. ${1+\sqrt{32}}\over{2}$


  3. ${1+\sqrt{30}}\over{2}$


  4. ${1+\sqrt{29}}\over{2}$





How do I proceed from here?
Thanks for any help.


Answer



Trick: Let $X = \sqrt{ 7 + \sqrt{ 7 + ... } } $. We have $X = \sqrt{ 7 + X } $ and so $X^2 = 7 + X $. Now you solve the quadratic equation.
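
Carrying the hint one step further: $X^2 - X - 7 = 0$ gives $X = \frac{1 \pm \sqrt{29}}{2}$, and since the sequence is positive, $X = \frac{1+\sqrt{29}}{2}$, i.e. option 4.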


error function - The integral $\int_0^\infty e^{-f(x^2)}\, dx$



We know that:

$$\int_0^\infty e^{-x^2} dx = \frac {\sqrt{\pi}}{2}$$
$$\int_0^\infty e^{-x^2-\frac {a^2}{x^2}} dx = \frac {\sqrt{\pi}}{2}e^{-2a}$$
Both of the above results can be easily proved by differentiation under the integral sign. I was wondering if this can be extended to the general integral $\int_0^\infty e^{-f(x^2)} dx$. The result is surely $\frac {\sqrt{\pi}}{2}F$ where $F$ is some constant depending on $f$. But I am unable to relate $F$ with $f$.
Any suggestions are welcome.


Answer



Here's a little list with examples of the integrals in the form:



$$\int_0^\infty e^{-f(x^2)} dx$$







Gamma function related integrals:



$$\int_0^\infty e^{-x^2} dx =\Gamma \left( \frac{3}{2}\right)=\frac{1}{2} \Gamma \left( \frac{1}{2}\right)=\frac{\sqrt{\pi}}{2}$$



$$\int_0^\infty e^{-x^4} dx =\Gamma \left( \frac{5}{4}\right)$$



$$\int_0^\infty e^{-x^{2n}} dx =\Gamma \left( \frac{2n+1}{2n}\right)$$







Bessel function related integrals:



$$\int_0^\infty e^{-a \cosh x} dx=K_0 (a)$$



$$\int_0^\infty e^{-a (1+\frac{4}{3} x^2) \sqrt{1+\frac{1}{3} x^2}} dx=\frac{1}{\sqrt{3}}K_{1/3} (a)$$



Many more examples are possible.







Trivial cases:



$$\int_0^\infty e^{-\ln (1+x^2)} dx=\int_0^\infty \frac{1}{1+x^2} dx=\frac{\pi}{2}$$



$$\int_0^\infty e^{-\frac{3}{2}\ln (1+x^2)} dx=\int_0^\infty \frac{1}{(1+x^2)^{3/2}} dx=1$$



$$\int_0^\infty e^{-\ln (1+x^4)} dx=\int_0^\infty \frac{1}{1+x^4} dx=\frac{\pi}{2\sqrt{2}}$$



$$\int_0^\infty e^{-\ln (\cosh x)} dx=\int_0^\infty \frac{1}{\cosh x} dx=\frac{\pi}{2}$$




And so on.






What this list is intended to show is that there's no general method for finding closed forms for such integrals. They need to be dealt with on a case-by-case basis. Sometimes generalization is possible, sometimes not.






An example of such a general theorem can be seen in this answer, which allows us to make an infinite number of integrals giving the same value.



sequences and series - How do I evaluate this: $\sum_{k=1}^{+\infty}\frac{(-1)^k}{\sqrt{k+1}+\sqrt{k}}$?

I have tried to evaluate the below sum using some standard alternating-series results, but I did not succeed, so my question is:






Question:
How do I evaluate this sum





$\sum_{k=1}^{+\infty}\frac{{(-1)^k}}{\sqrt{k+1}+\sqrt{k}}$ ?



Note: Wolfram Alpha shows its value here.

Thursday 25 January 2018

integration - Finding $\int\limits_1^\infty \frac{\sin^4(\log x)}{x^2 \log x}\, \mathrm{d}x$




How do we prove that $$\int_{1}^{\infty} \frac{\sin^4(\log x)}{x^2 \log x} \mathrm{d}x=\dfrac{\log\left(\dfrac{625}{17}\right)}{16}$$



I tried substitutions like $\log x=\arcsin t$, but it doesn't seem to work out. Please help me out. Thank you.


Answer



The integral screams for a sub $x=e^u$; the result is



$$\int_0^{\infty} du \, e^{-u} \frac{\sin^4{u}}{u} $$



This is very computable by introducing a parameter and differentiating under the integral. In this case, consider




$$F(k) = \int_0^{\infty} du \, e^{-k u} \frac{\sin^4{u}}{u} $$



$$F'(k) = -\int_0^{\infty} du \, e^{-k u} \sin^4{u} $$



$F'(k)$ is relatively easy to compute using the fact that $\sin^4{u} = \frac{3}{8}-\frac12 \cos{2 u} + \frac18 \cos{4 u}$, and that



$$\int_0^{\infty} du \, e^{-k u} \cos{m u} = \frac{k}{k^2+m^2}$$



Thus




$$F'(k) = -\frac{3}{8 k} + \frac12 \frac{k}{k^2+4} - \frac18 \frac{k}{k^2+16} $$



and



$$F(k) = -\frac1{16} \log{\left [ \frac{k^6 (k^2+16)}{(k^2+4)^4} \right ]} + C$$



To evaluate $C$, we must consider $\lim_{k \to \infty} F(k)$ because $F(0)$ represents a non-convergent integral. Because the limit is zero, we must have $C=0$. The integral we seek is then



$$F(1) = \frac1{16} \log{\frac{625}{17}}$$


probability - What is the intuition behind the Poisson distribution's function?



I'm trying to intuitively understand the Poisson distribution's probability mass function. When $X \sim \mathrm{Pois}(\lambda)$, then $P(X=k)=\frac{\lambda^k e^{-\lambda}}{k!}$, but I don't see the reasoning behind this formula. In other discrete distributions, namely the binomial, geometric, negative binomial, and hypergeometric distributions, I have an intuitive, combinatorics-based understanding of why each distribution's pmf is defined the way it is.




That is, if $Y \sim\mathrm{Bin}(n,p)$ then $P(Y=k)=\binom{n}{k}p^k(1-p)^{n-k}$, and this equation is clear - there are $\binom{n}{k}$ ways to choose the $k$ successful trials, and we need the trials to succeed $k$ times and fail $n-k$ times.



What is the corresponding intuition for the Poisson distribution?


Answer



Explanation based on DeGroot, second edition, page 256. Consider the binomial distribution with fixed $p$
$$
P(X = k) = {n \choose k}p^k(1-p)^{n-k}
$$



Now define $\lambda = np$ and thus $p = \frac{\lambda}{n}$.




$$
\begin{align}
P(X = k) &= {n \choose k}p^k(1-p)^{n-k}\\
&=\frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}\frac{\lambda^k}{n^k}\left(1-\frac{\lambda}{n}\right)^{n-k}\\
&=\frac{\lambda^k}{k!}\frac{n}{n}\cdot\frac{n-1}{n}\cdots\frac{n-k+1}{n}\left(1-\frac{\lambda}{n}\right)^n\left(1-\frac{\lambda}{n}\right)^{-k}
\end{align}
$$
Let $n \to \infty$ and $p \to 0$ so $np$ remains constant and equal to $\lambda$.




Now
$$
\lim_{n \to \infty}\frac{n}{n}\cdot\frac{n-1}{n}\cdots\frac{n-k+1}{n}\left(1-\frac{\lambda}{n}\right)^{-k} = 1
$$
since in all the fractions, $n$ climbs at the same rate in the numerator and the denominator, and the last factor tends to $1$ because $\lambda/n \to 0$. Furthermore
$$
\lim_{n \to \infty}\left(1-\frac{\lambda}{n}\right)^n = e^{-\lambda}
$$
so under our definitions
$$
\lim_{n \to \infty} {n \choose k}p^k(1-p)^{n-k} = \frac{\lambda^k}{k!}e^{-\lambda}
$$
In other words, as the probability of success becomes a rate applied to a continuum, as opposed to discrete selections, the binomial becomes the Poisson.



Update with key point from comments



Think about a Poisson process. It really is, in a sense, looking at very, very small intervals of time and seeing if something happened. The "very, very, small" comes from the need that we really only see at most one instance per interval. So what we have is pretty much an infinite sum of infinitesimal Bernoullis. When we have a finite sum of ordinary Bernoullis, that is binomial. When the number of trials is infinite, but with finite mean $np=\lambda$, it is Poisson.
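
A numeric illustration of this convergence (plain Python; $\lambda$ and $k$ are arbitrary sample values):

    from math import comb, exp, factorial

    lam, k = 4.0, 2
    for n in [10, 100, 10000]:
        p = lam / n
        print(n, comb(n, k) * p**k * (1 - p)**(n - k))
    print('poisson', lam**k * exp(-lam) / factorial(k))

The binomial probabilities approach the Poisson value as $n$ grows with $np=\lambda$ fixed.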


sequences and series - Infinite trigonometry Summation: $\sum_{k=1}^\infty \left( \cos \frac{\pi}{2k}-\cos \frac{\pi}{2(k+2)} \right)$



I would like to evaluate
$$

\sum_{k=1}^\infty \left( \cos \frac{\pi}{2k}-\cos \frac{\pi}{2(k+2)} \right)
$$



I saw a pattern and realized the sum converges; I computed the value $1-\frac{1}{\sqrt{2}}$, but I am going wrong somewhere. The answer given is $2-\frac{1}{\sqrt{2}}$.
Please help.


Answer



This may be seen as a telescoping series, one may write for $N\ge1$,
$$
\small{\sum_{k=1}^N\! \left( \cos \frac{\pi}{2k}-\cos \frac{\pi}{2(k+2)} \right)\!=\sum_{k=1}^N \!\left(\! \cos \frac{\pi}{2k}-\cos \frac{\pi}{2(k+1)}\! \right)\!+\!\sum_{k=1}^N\! \left(\! \cos \frac{\pi}{2(k+1)}-\cos \frac{\pi}{2(k+2)} \!\right)}

$$ giving
$$
\small{\sum_{k=1}^N \! \left( \cos \frac{\pi}{2k}-\cos \frac{\pi}{2(k+2)} \right)=\!\!\left(\cos \frac{\pi}{2}-\cos \frac{\pi}{2(N+1)}\right)\!+\!\left(\cos \frac{\pi}{4}-\cos \frac{\pi}{2(N+2)}\right)}
$$ then by letting $N \to \infty$, one gets
$$
\sum_{k=1}^\infty \left( \cos \frac{\pi}{2k}-\cos \frac{\pi}{2(k+2)} \right)=(0-1)+\left(\cos \frac{\pi}{4}-1\right)
$$I think you can take it from here.
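
Explicitly, the limit is $(0-1)+\left(\tfrac{\sqrt2}{2}-1\right)=\tfrac{1}{\sqrt2}-2$, i.e. $-\left(2-\tfrac{1}{\sqrt2}\right)$, which matches the magnitude quoted in the question up to sign.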


discrete mathematics - Big-Oh Notation Proofs

I'm studying Big-Oh Notation in my Discrete Structures class and I've proven most of the two questions that I've been assigned, although I'm stuck at one part on each.



So far in the class, we have not learned the induction method and are not allowed to use graphs, limits or slopes of tangent lines to prove things. We can only use summations and algebraic calculations. Here is what I'm stuck on:



Note: The logarithms can be expressed with any base.





  1. Proving $n! \leq n^n$ for all $n \geq 2$



(Original question: Prove $\log(n!) \in \Omega(n\log n)$ - I chose $c=1/2, n_0=2$)




  2. Proving $3n^3-n^2+43 \geq \sqrt{n}^7$ for all $n \geq 9$




(Original question: Prove $2\log(3n^3-n^2+43)\in O(\log n)$ - I chose $c=7, n_0=9$)



I just can't figure out how to go about proving these things without using any of the methods listed above. If anyone has any idea where I could start, I would really appreciate some guidance!

Why can we convert a base $9$ number to a base $3$ number by simply converting each base $9$ digit into two base $3$ digits?




Why can we convert a base $9$ number to a base $3$ number by simply
converting each base $9$ digit into two base $3$ digits ?



For example $813_9$ can be converted directly to base $3$ by noting




\begin{array}{l} 8_9 = 22_3 \\ 1_9 = 01_3 \\ 3_9 = 10_3 \end{array}



Putting the base-$3$ digits together, we get $$813_9=220110_3$$




I know it has to do with the fact that $9=3^2$ but I am not able to understand this all by this simple fact...


Answer



Consider $N$ in base 3. For simplicity, we can assume that $N_3$ has an even number of digits: if it doesn't, just tack on a leftmost $0$. So let:

$$N_3 = t_{2n+1} t_{2n}\dotsc t_{2k+1} t_{2k} \dotsc t_1 t_0.
$$
What this positional notation really means is that:
$$
N = \sum_{i = 0}^{2n+1} t_i 3^i,
$$
which we can rewrite as:
$$\begin{align}
N &= \sum_{k = 0}^{n} (t_{2k+1} 3^{2k+1} + t_{2k} 3^{2k}) \\
&= \sum_{k = 0}^{n} (3 t_{2k+1} + t_{2k}) 3^{2k} \\

&= \sum_{k = 0}^{n} (3 t_{2k+1} + t_{2k}) 9^{k}. \\
\end{align}$$



But now, note that for each $k$, $3 t_{2k+1} + t_{2k}$ is precisely the base-9 digit corresponding to the consecutive pair of base-3 digits $t_{2k+1} t_{2k}$.
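
This digit-by-digit recipe is easy to mechanize; a small sketch in Python (the helper name is my own):

    def base9_to_base3(s):
        # each base-9 digit d becomes the two base-3 digits (d // 3, d % 3)
        out = []
        for ch in s:
            d = int(ch)
            out.append(f"{d // 3}{d % 3}")
        return "".join(out).lstrip("0") or "0"

    print(base9_to_base3("813"))                 # 220110
    print(int("813", 9) == int("220110", 3))     # True, cross-check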


Wednesday 24 January 2018

Constructing sub-fields of a finite field



Given a finite field $\mathbb{F}_{p^m}$, that is, given $p$ and an irreducible polynomial of degree $m$, I want to construct irreducible polynomials of degree $d \mid m$.



Let $\mathbb{F}_{p^m}= \mathbb{F}_{p}(\alpha)$; then we can take the relative trace or norm, so that $ \beta :=Tr(\alpha) \in \mathbb{F}_{p^d} $. Note that we can find the trace/norm using the Frobenius.



Now if we can construct the minimal polynomial of $\beta$, then we get the required irreducible polynomial.




Does this idea work? I am unable to prove it. Is there a simple solution for this?


Answer



You need to exercise a bit of care, but the basic idea is fine.



What can go wrong is that $tr^m_d(\alpha)$ may land in too small a field. In other words, it may happen that the extension degree $[\Bbb{F}_p(\beta):\Bbb{F}_p]$ is a proper factor of $d$. The same thing may happen, if you use the relative norm instead of the relative trace.



As examples of both these phenomena let me proffer the following. Consider the field $\Bbb{F}_{16}=\Bbb{F}_2(\alpha)$ where $\alpha$ is a zero of the irreducible polynomial $x^4+x+1$. Assume that we want to produce an element that generates the intermediate field $\Bbb{F}_4$. If we use the relative trace
$$
tr^4_2:\Bbb{F}_{16}\to\Bbb{F}_4, x\mapsto x^4+x,

$$
and the above element $\alpha$, then we see that
$$
\beta=tr^4_2(\alpha)=\alpha^4+\alpha=1.
$$
Thus $\Bbb{F}_2(\beta)=\Bbb{F}_2$, and $\beta$ failed to generate the intermediate field $\Bbb{F}_4$.



If we, instead, use the relative norm
$$
N^4_2:\Bbb{F}_{16}\to\Bbb{F}_4, x\mapsto x\cdot x^4=x^5,

$$
then the above choice of $\alpha$ actually works (this is because $\alpha$ is of order fifteen, so $N^4_2(\alpha)$ is of order three, and hence generates $\Bbb{F}_4$). But if we carelessly use, in the role of $\alpha$, a fifth root of unity $\gamma\in\Bbb{F}_{16}$, then obviously $N^4_2(\gamma)=1$, and we get that
$\Bbb{F}_2(\gamma)=\Bbb{F}_{16}$ but $\Bbb{F}_2(N^4_2(\gamma))=\Bbb{F}_2$.



What works in general is the following. Let $\alpha$ be a primitive element of
$\Bbb{F}_{p^m}$, in other words, $\alpha$ is a generator of the multiplicative group $\Bbb{F}_{p^m}^*$ (warning: there are two notions of primitive in field theory, this is prevalent in the theory of finite fields, but there is another definition of primitivity elsewhere in field-theory that is at odds with this).
Then the relative norm
$$
\beta=N^m_d(\alpha)=\prod_{i=0}^{m/d-1}\alpha^{p^{id}}
$$

is always a generator of the multiplicative group of the intermediate field $\Bbb{F}_{p^d}$. Consequently, the minimal polynomial of $\beta$ over $\Bbb{F}_p$ is of degree $d$.



Also, both the relative norm and the relative trace are surjective mappings from the bigger field to the intermediate field. Therefore the process you described always works for some elements $\alpha$. Actually it works for most elements of $\Bbb{F}_{p^m}$ for a suitable definition of most. The above intermediate field $\Bbb{F}_4$ probably gives you the lowest success rate, when using a random $\alpha$.






In a comment the OP asked for examples with $\gcd(m,p)=1$. Let us denote
$F=\Bbb{F}_{p^d}$, $E=\Bbb{F}_{p^m}$. The relative trace
$$
tr^E_F:E\to F, x\mapsto x+x^{p^d}+x^{p^{2d}}+\cdots+x^{p^{m-d}}

$$
is a surjective $F$-linear mapping. Therefore the kernel of $tr^E_F$ is
an $F$-subspace of $E$ of dimension $(m/d)-1$. In other words, there exist
$p^{m-d}$ elements $x\in E$ with the property $tr^E_F(x)=0$. So when $d$ is relatively small, the number of elements with trace zero exceeds the number of elements in the union of proper subfields of $E$, because the latter number is strictly less than $\sum_{\ell\mid m}p^{m/\ell}$, where $\ell$ ranges over the prime factors of $m$.



Similarly, the kernel of the relative norm map has $(p^m-1)/(p^d-1)>p^{m-d}$
elements.



Given these observations there will be elements $\alpha$ such that $\Bbb{F}_p(\alpha)=E$ but $tr^E_F(\alpha)=0$. And also elements $\alpha$ such that $\Bbb{F}_p(\alpha)=E$ and $N^E_F(\alpha)=1$.


Tuesday 23 January 2018

polynomials - Partial fraction decomposition over arbitrary field.




Let $K$ be a field and $f,g$ be (one-variable) polynomials over $K$, and suppose that $g=p_{1}^{e_1} p_{2}^{e_2} \cdots p_{k}^{e_k}$ where each $p_i$ is irreducible over $K$ and $e_i \geq 1$. Do there exist polynomials $b$ and $a_{ij}$ with the following properties?




  • $\deg{a_{ij}}<\deg{p_i}$ for all $i=1,\ldots,k$

  • $\displaystyle \frac{f}{g}=b+\sum_{i=1}^{k}\sum_{j=1}^{e_i}\frac{a_{ij}}{p_{i}^{j}}$



Moreover, are such polynomials unique?



What I have tried: Since $\{p_{i}^{e_i}\}$ are pairwise relatively prime, there are polynomials $A_1,\ldots,A_k$ (Bezout) such that $$A_1 p_1^{e_1}+\cdots+ A_k p_k^{e_k}=1$$ and thus we may write $\displaystyle \frac{f}{g}=\frac{fA_1 p_1^{e_1}+\cdots+ fA_k p_k^{e_k}}{g}=\frac{fA_1}{p_2^{e_2}\cdots p_k^{e_k}}+\cdots+\frac{fA_k}{p_1^{e_1}\cdots p_{k-1}^{e_{k-1}}}$. Repeating this on every summand $k$ times, we get polynomials $B_i$ such that $\displaystyle \frac{f}{g}=\sum_{i=1}^{k}\frac{B_i}{p_{i}^{e_i}}$, and after long division (if necessary) there exist polynomials $b,\tilde{B}_i$ such that $$\frac{f}{g}=b+\sum_{i=1}^{k}\frac{\tilde{B}_i}{p_{i}^{e_i}}$$ with $\deg{\tilde{B}_i}<\deg{p_i^{e_i}}$.




Where I'm stuck: I don't see how I should proceed with the summands of the form $\frac{\tilde{B}_i}{p_i^{e_i}}$. Since $\{p_i,p_i^2,\ldots,p_i^{e_i}\}$ are not relatively prime, Bezout does not work and I don't see how to choose $a_{ij}$ from $\tilde{B}_i$ (unless $p_i^{e_i}$ is linear...). I'm also having difficulties with the uniqueness part of the assertion.



Can someone give me an advice for this problem? Please enlighten me!


Answer



(Note: After the OP comment below, the answer has been edited a lot from original, which was incorrect.)



You have reduced the problem to the case where there is only one irreducible factor. Let's write it as $\frac{f}{g^n}$ where $g$ is irreducible in $K$, and $\deg f < \deg g^n$. We seek polynomials $a_1, a_2, \ldots, a_n$, with $\deg a_i < \deg g$, such that $$\frac{f}{g^n} = \sum_{i=1}^{n} \frac{a_i}{g^i}.$$



If we clear denominators, we get this (where I have purposely split out the last term because it has no $g$'s): $$ f = \left(\sum_{i=1}^{n-1} a_i g^{n-i}\right) + a_n.$$




Now the $a_i$'s can be computed by successive division. The polynomial $a_n$ is the remainder when $f$ is divided by $g$, then $a_{n-1}$ is the remainder of $(f-a_n)/g$, etc.
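
The successive-division step is easy to carry out by machine; a sketch assuming sympy (the example $f$ and $g$ are my own):

    from sympy import Poly, symbols

    x = symbols('x')
    g = Poly(x**2 + 1, x)           # irreducible over Q
    f = Poly(x**3 + 2*x + 5, x)     # deg f < deg g**2, so b = 0 here
    n = 2

    parts = []                      # remainders: a_n first, then a_{n-1}, ...
    h = f
    for _ in range(n):
        h, a = h.div(g)             # h = q*g + a
        parts.append(a)

    for j, a in enumerate(parts):
        print(f"({a.as_expr()}) / (x**2 + 1)**{n - j}")

For this example the output corresponds to $\frac{x^3+2x+5}{(x^2+1)^2} = \frac{x}{x^2+1} + \frac{x+5}{(x^2+1)^2}$.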


Monday 22 January 2018

sequences and series - The sum of $1+\frac 25 + \frac {3}{5^2} + \frac {4}{5^3}+\cdots$

The $n$th term would be
$$T_n = \frac{n}{5^{n-1}}$$



So the sum will be $$\sum T_n = 5\sum \frac{n}{5^n}$$



Can’t solve further. What’s next?
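
One standard route from here (not spelled out in the post): differentiate the geometric series to get $\sum_{n\ge1} n t^{n} = \frac{t}{(1-t)^2}$ for $|t|<1$, so $$\sum_{n\ge1} \frac{n}{5^n} = \frac{1/5}{(1-1/5)^2} = \frac{5}{16}, \qquad \sum T_n = 5\cdot\frac{5}{16} = \frac{25}{16}.$$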

gcd and lcm - The maximum valued integer X such that X divides A i.e. A % X = O and X and B are co-prime

I came across this question while practicing coding questions -



You are given two positive numbers A and B. You need to find the maximum valued integer X such that:




  • X divides A i.e. A % X = 0



  • X and B are co-prime i.e. gcd(X, B) = 1




I could only think of a brute force approach where I check for all the factors of A and test if that is coprime with B. Is there any faster way to do this?
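
One faster approach: you never need the factors of A individually; just keep stripping from X = A the part it shares with B. A sketch in Python (the function name is my own):

    from math import gcd

    def max_coprime_divisor(A, B):
        # largest X with X | A and gcd(X, B) == 1: repeatedly remove
        # every prime factor X still shares with B
        X = A
        g = gcd(X, B)
        while g > 1:
            X //= g
            g = gcd(X, B)
        return X

    print(max_coprime_divisor(60, 18))   # 5, since 60 = 2^2 * 3 * 5

Each iteration divides X by at least 2, so the loop runs only O(log A) times.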

Integer and Complex Values for the Gamma Function:

While reading the Wikipedia page on Particular values of the Gamma Function, it listed a formula:$$\Gamma\left(\dfrac z2\right)=\sqrt{\pi}\dfrac {(z-2)!!}{2^{(z-1)/2}}\tag{1}$$
where $z$ is a positive odd integer (so that $z/2$ is a positive half-integer). $(1)$ can be used to compute $\Gamma\left(\frac 12\right)$ by setting $z=1$ to get$$\Gamma\left(\dfrac 12\right)=\sqrt{\pi}\dfrac {(1-2)!!}{2^{(1-1)/2}}=\sqrt\pi\tag{2}$$
Extending this, I'm wondering




Questions:





  1. Can forumula $(1)$ be generalized to include complex numbers?$$\Gamma\left(a+bi\right)=\text{something}\tag{3}$$

  2. If so, how would you prove such formula?




Running it through Wolfram Alpha, it says that the Gamma function of a complex number is defined and can be computed. But I'm just not sure how to derive a formula for $(3)$.

Sunday 21 January 2018

algebra precalculus - Is there a solution to $y=\ln(x)+x$ which yields an answer in the form $x^2=...$




  1. How could I write $y=\ln(x)+x$ as $x^2=...$



Since there might be another solution to this problem, I'll give some background. I had a math test yesterday where they wanted you to calculate the volume $V$ of the solid obtained by rotating a region around the $y$-axis (the original post included a figure):







The formula for this is pretty easy: $\pi r^2h-\pi\int_a^b x^2\, dy$



The notation might be different (Dutch): $h$ is the height of a cylinder, and $\int_a^b x^2\, dy$ means integrating $x^2$, expressed in terms of $y$, from $a$ to $b$.



Since the formula needs $x^2$ it has to be written in that way.




  2. The math test was a pretty big bummer, even more so since I studied hard for it and understood all of the higher concepts, but I (and many others) stranded on basic things like this.




Normally they make the concepts more complicated so you have to combine several of them; however, this time there were just a lot of hard things like the above: writing $y=\ln(x)+x$ as $x^2=...$. They don't seem to require a lot of insight so much as knowing the rules. Particularly if the book and your teacher don't even explain what things like $\ln$ and $e$ actually mean.



Is there a way to learn to solve these rule-based problems with understanding when your high school teacher/book doesn't cover that? I really like trying to understand math, but this seems more like just learning the rules. Is there a way to mix the two together? For instance, a book mixing high school calculus with deeper insight.



This is my first stackexchange post so I hope it's fine, I couldn't find anything about asking multiple questions at the same time so I hope it's allowed.



I also couldn't find anything about the required level of math, so I hope high school math is allowed; if not, I'd still like to know the answer to my second question, which I think is more important and would allow me to enjoy math more anyway. That's why I like to look at Stack Exchange: things don't just get answered; insight is provided.


Answer



$$y=\ln(x)+x \implies e^y=xe^x \implies x=W(e^y) \implies x^2=(W(e^y))^2 $$
where $W$ is the Lambert W function. As you see, this involves a special function, so it's probably not the intended way to go.
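
A quick numeric check of the Lambert-W route (a sketch assuming scipy is available):

    import numpy as np
    from scipy.special import lambertw

    y = 2.5
    x = lambertw(np.exp(y)).real    # solves ln(x) + x = y
    print(x, np.log(x) + x)         # second value should be ~2.5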



Saturday 20 January 2018

Given a tower of field extensions, does this equality involving Galois group orders hold in general?

Suppose we have a tower of field extensions:



$F \subset E \subset K \subset \overline{F}$



Is it true in general that $|G(K/F)| = |G(K/E)| \cdot |G(E/F)|$?



I was able to verify some specific examples, like $\mathbb{Q}(\sqrt[3]{2}, \omega)$ for $x^3-2$ and another extension, but how could I show that this holds in general for all such towers of extensions?

calculus - Computing diagonal Length of a Square

While studying rectification of curves, I considered a curve, tried to measure its length in a different fashion, and arrived at a problem. I would like to clarify the confusion in my understanding.



Consider a unit square in the plane with vertices $(0,0), (0,1), (1,0), (1,1)$. The diagonal joining $(1,0)$ and $(0,1)$ has length $\sqrt{2}$, as is well known. Suppose I approach this diagonal in the following way: first by the path $P_1: (1,0)-(1/2,0)-(1/2,1/2)-(0,1/2)-(0,1)$. This is like two "L"s, with the top end of one joined to the bottom end of the other, forming stairs. The length of this path is $2$. Next, we form a path $P_2$ with four "L"s, again forming stairs. Again the length of this path is $2$.



We see that the sequence $\{P_n\}$ of paths, which are piecewise differentiable curves, converges to the diagonal from $(1,0)$ to $(0,1)$. But the length of each path is $2$, yet we cannot conclude that the diagonal has length $2$. Why does such an apparent contradiction arise?

elementary number theory - Prove that $\sqrt 5$ is irrational



I have to prove that $\sqrt 5$ is irrational.



Proceeding as in the proof of $\sqrt 2$, let us assume that $\sqrt 5$ is rational. This means for some distinct integers $p$ and $q$ having no common factor other than 1,



$$\frac{p}{q} = \sqrt5$$



$$\Rightarrow \frac{p^2}{q^2} = 5$$




$$\Rightarrow p^2 = 5 q^2$$



This means that 5 divides $p^2$. This means that 5 divides $p$ (because every factor must appear twice for the square to exist). So we have, $p = 5 r$ for some integer $r$. Extending the argument to $q$, we discover that they have a common factor of 5, which is a contradiction.



Is this proof correct?


Answer



It is, but I think you need to be a little bit more careful when explaining why $5$ divides $p^2$ implies $5$ divides $p$. If $4$ divides $p^2$ does $4$ necessarily divide $p$?


Friday 19 January 2018

limits - How to evaluate $\lim\limits_{n\to+\infty} \prod\limits_{k=1}^n (1+k/n^2)$?



I've got a limit which puzzle me several days. The question is



$$ \lim_{n\to+\infty} \prod_{k=1}^n\left(1+\frac{k}{n^2}\right).$$




Can you help me? Thank you in advance


Answer



Intuitively, we have



$$\log\left( 1 + \frac{k}{n^2} \right) = \frac{k}{n^2} + O\left(\frac{1}{n^2}\right) \quad \Longrightarrow \quad \log \prod_{k=1}^{n} \left( 1 + \frac{k}{n^2} \right) = \frac{1}{2} + O\left(\frac{1}{n}\right)$$



and therefore the log-limit is $\frac{1}{2}$.



Here is a more elementary approach: Let $P_n$ denote the sequence inside the limit. Then just note that




$$ P_n^2 = \left[ \prod_{k=1}^{n} \left( 1 + \frac{k}{n^2} \right) \right]^2 = \prod_{k=1}^{n} \left( 1 + \frac{k}{n^2} \right)\left( 1 + \frac{n-k}{n^2} \right) = \prod_{k=1}^{n} \left( 1 + \frac{1}{n}+\frac{k(n-k)}{n^4} \right). $$



Now fix $m$ and let $n \geq m$. Since $k (n-k) \leq \frac{1}{4}n^2$, we have



$$ \frac{k(n-k)}{n^4} \leq \frac{1}{4n^2} \leq \frac{1}{4mn}.$$



Thus we have



$$ \left( 1 + \frac{1}{n} \right)^n \leq P_n^2 \leq \left( 1 + \frac{1+(1/4m)}{n} \right)^n. $$




Thus taking $n \to \infty$,



$$e \leq \liminf_{n\to\infty} P_n^2 \leq \limsup_{n\to\infty} P_n^2 \leq e^{1+1/(4m)}.$$



Since $m$ is now arbitrary, we have $P_n^2 \to e$, or equivalently, $P_n \to \sqrt{e}$.


arithmetic - Multiplication of repeating decimal $0.3333overline{3}$ by $3$




Let's start considering a simple fractions like $\dfrac {1}{2}$ and $\dfrac {1}{3}$.



If I choose to represent those fraction using decimal representation, I get, respectively, $0.5$ and $0.3333\overline{3}$ (a repeating decimal).




That is where my question begins.



If I multiply either $\dfrac {1}{2}$ or $0.5$ by $2$, I end up with $1$, as well as, if I multiply $\dfrac {1}{3}$ by $3$.



Nonetheless, if I decide to multiply $0.3333\overline{3}$ by $3$, I will not get $1$, but instead, $0.9999\overline{9}$



What am I missing here?



*Note that my question is different from the question Adding repeating decimals


Answer




Hint: compute the difference between $1$ and $0.9\bar9$. How much is that? What do you conclude?


calculus - How can we prove that any integral in the set of non-elementary integrals cannot be expressed in the form of elementary functions?



We know that the derivative of some non-elementary functions can be expressed in elementary functions. For example $ \frac{d}{dx} Si(x)= \frac{\sin(x)}{x} $



So similarly are there any non-elementary functions whose integrals can be expressed in elementary functions?



If not then how can we prove that any integral in the set of non-elementary integrals cannot be expressed in the form of elementary functions?


Answer



The derivative of an elementary function is an elementary function: the standard Calculus 1 differentiation methods can be used to find this derivative. So an antiderivative of a non-elementary function can't be elementary.




EDIT: More formally, by definition an elementary function is obtained from
complex constants and the variable $x$ by a finite number of steps of the following forms:




  1. If $f_1$ and $f_2$ are elementary functions, then $f_1 + f_2$, $f_1 f_2$ and (if $f_2 \ne 0$) $f_1/f_2$ are elementary.

  2. If $P$ is a non-constant polynomial whose coefficients are elementary functions, then a function $f$ such that $P(f) = 0$ is an elementary function.

  3. If $g$ is an elementary function, then a function $f$ such that $f' = g' f$ or $f' = g'/g$ is elementary (this is how $e^g$ and $\log g$ are elementary).



To prove that the derivative of an elementary function is elementary, you can use induction on the number of these steps. In the induction step, suppose the result is true for elementary functions obtained in at most $n$ steps.
If $f$ can be obtained in $n+1$ steps, the last being $f = f_1 + f_2$ where $f_1$ and $f_2$ each require at most $n$ steps, then $f' = f_1' + f_2'$ where $f_1'$ and $f_2'$ are elementary, and therefore $f'$ is elementary. Similarly for the other possibilities for the last step.


Can someone explain the following modular arithmetic?

$7^{-1} \bmod 120 = 103$



I would like to know how $7^{-1} \bmod 120$ results in $103$.
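
For the record, $103$ comes from the extended Euclidean algorithm; Python can reproduce it directly (the built-in pow supports modular inverses since Python 3.8):

    print(pow(7, -1, 120))        # 103
    print((7 * 103) % 120)        # 1, confirming 7 * 103 = 721 = 6 * 120 + 1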

Parameterize a polynomial with no real roots



An even polynomial with a constant term of 1 will have no real roots if the coefficients of the powers (the c's below) are non-negative. So



$$1 + c_2x^2 + c_4x^4 + c_6x^6$$




has no real roots. Is there a general way to parameterize an nth order polynomial with a constant term of 1 so that it has no real roots? I know that the above conditions (even powers, with non-negative coefficients) are more restrictive than necessary. The application is fitting (x,y) data where y is always positive with a polynomial in x.


Answer



Let $p(x)=1+c_1x+\dots+c_nx^n$. Since $p(0)=1>0$, if $p$ does not have real roots, it must be positive. This implies $c_n>0$ (otherwise there would be a positive root) and $n$ even (otherwise there would be a negative root.) Applying Descartes' rule of signs to $p(x)$ and $p(-x)$ we get the following necessary condition: the sequences of coefficients
$$
1,\,c_1,\,c_2,\,c_3,\dots,c_n,\quad\text{and}\quad
1,\,-c_1,\,c_2,-\,c_3,\dots,c_n
$$
must have an even number of sign changes.


integration - Frullani's theorem in complex context, other examples



One has as application of Frullani's theorem in complex context that
$$\int_0^\infty \frac{e^{-x\log 2}-e^{-xb}}{x}dx=\mathcal{Log} \left( \frac{1}{2\log 2}+i\frac{B}{\log 2} \right) $$
where I taken $a=\log 2+i\cdot 0$ and $b=\frac{1}{2}+iB$, that are complex numbers whose real parts are positive, here also I take $B>0$ as a fixed real number.




Question 1. Can you define and compute RHS of previous identity?





I would like to refresh the notion of complex logarith. My attempt for Question 1, is that I believe that I need write RHS should be $$\frac{1}{2}\log \left( \frac{1}{4} +B^2\right) -\log\log 2+i\arctan 2B,$$
and saying that is required $-\pi\leq \arctan 2B<\pi$. Also I've tried write the previous imaginary part as $$\arcsin \left( \frac{2B}{\sqrt{4B^2+1}} \right).$$ Notice that it is possible take advantage from the computations of Dr.MV answer. Then since $0=\text{arg}(\log 2)\neq \text{arg}(\pm b)$ I did the following computations $$\log(|b/a|)=-\log\log 2+\frac{1}{2}(1/4+B^2),$$ $$(\Re(a\bar{b})-|a|^2)/\Im(a\bar{b})=(\log2-1/2)/B$$ and $$(|b|^2-\Re(a\bar{b}))/\Im(a\bar{b})=(1/4+B^2)(1/2-1/\log 2)/B.$$




Question 2. Denoting by $\Phi(t)$ previous deduction from Frullani's theorem with same (negative exponential function) $a=\log 2$ but now taking $b=\frac{1}{2}+it$, where $t>0$ is a real variable, can you provide us the statement that it is possible to write combining previous deduction and Lebesgue's Dominated Convergence theorem to compute the derivative $\Phi'(t)$? If you consider only hints to this second question, I believe that I can get the final statement with a clarification of the notation, as you see.




I am almost sure that I can justify the differentiation under the integral sign, but in my book the justification is explained in terms of $I(\cdot,t)$ and $I(x,\cdot)$, when $I(x,t)$ is denoting the function in the integrand, and now I am a few stuck with this notation. Also the computations of the derivative of the RHS could be tedious, without a good simplification in previous question. Thanks in advance all users.



Answer



With a change of variable, it is enough to study
$$ J(z)=\int_{0}^{+\infty}\frac{e^{-x}-e^{-zx}}{x}\,dx \tag{1}$$
with $z=\sigma+it$ being a complex number in the right half-plane. The integral is convergent by Dirichlet's test, since $\frac{1}{x}$ is decreasing towards zero and $\int_{0}^{M}\left(e^{-x}-e^{-zx}\right)\,dx $ is bounded. We may study the real part and the imaginary part of $(1)$ as two separate integrals, computing them through the Laplace transform. Since $\mathcal{L}^{-1}\left(\frac{1}{x}\right)=1$,
$$ \int_{0}^{+\infty}\frac{e^{-x}-e^{-\sigma x}\cos(tx)}{x}\,dx = \int_{0}^{+\infty}\left(\frac{1}{1+s}-\frac{s+\sigma}{t^2+(s+\sigma)^2}\right)\,ds\tag{2}$$
$$ \int_{0}^{+\infty}\frac{-e^{-\sigma x}\sin(tx)}{x}\,dx = \int_{0}^{+\infty}\left(-\frac{t}{t^2+(s+\sigma)^2}\right)\,ds\tag{3}$$
and the integrand functions in the RHSs of $(2)$ and $(3)$ have elementary primitives, leading to:
$$ \text{Re}\left(J(\sigma+it)\right) = \frac{1}{2}\log(t^2+\sigma^2),\tag{4}$$
$$ \text{Im}\left(J(\sigma+it)\right) = \arctan\left(\frac{t}{\sigma}\right),\tag{5}$$
hence:





For any $z\in\mathbb{C}$ such that $\text{Re}(z)>0$,
$$ J(z)=\int_{0}^{+\infty}\frac{e^{-x}-e^{-zx}}{x}\,dx = \color{red}{\text{Log}(z)} = \log(\|z\|)+ i\,\arctan\left(\frac{\text{Im}(z)}{\text{Re}(z)}\right). \tag{6}$$




If we assume now that $\sigma>0$ is fixed, we may study:
$$ \frac{J(\sigma+it_1)-J(\sigma+it_2)}{t_1-t_2} = \int_{0}^{+\infty}\color{blue}{\left(\frac{e^{-it_1 x}-e^{-it_2 x}}{x(t_1-t_2)}\right)} e^{-\sigma x}\,dx $$
where the blue term is an entire function, bounded over the positive real axis. Since $\int_{0}^{+\infty}e^{-\sigma x}\,dx$ is finite, by the dominated convergence theorem ($t_2\to t_1$) we are allowed to state:





$$ \frac{d}{dt}\,J(\sigma+it) = \int_{0}^{+\infty}i e^{-itx}e^{-\sigma x}\,dx =\frac{i}{\sigma+it}=\color{red}{\frac{i}{z}}.\tag{7}$$



abstract algebra - Proof of Artin's Theorem (linearly independent functions)



Recently I have come across one of Artin's theorems and I have not been able to crack it quite yet. The theorem is stated as follows:





Let $G$ be a group. and let $f_1,\dots, f_n\colon G\to K^*$ be distinct homomorphisms of $G$ into the multiplicative group of a field. Prove that these functions are linearly independent over $K$.




Would anyone know a (if possible quite simple) proof of this theorem? This proof came up in a chapter regarding eigenvectors and eigenvalues, so I presume it has something to do with them?


Answer



Suppose there are nontrivial linear relations between the maps $f_1,\dots,f_n$ seen as elements of the vector space $K^G$; among them choose one with the minimum number of nonzero coefficients. Upon a reordering, we can assume it is
$$
\alpha_1f_1+\dots+\alpha_kf_k=0

$$
with all $\alpha_i\ne0$. This means that, for every $x\in G$,
$$
\alpha_1f_1(x)+\dots+\alpha_kf_k(x)=0
$$
Note that $k>1$ or we have a contradiction.



Fix $y\in G$; then also
$$
\alpha_1f(yx)+\dots+\alpha_kf_k(yx)=0

$$
and, since the maps are homomorphisms,
$$
\alpha_1f_1(y)f_1(x)+\dots+\alpha_kf_k(y)f_k(x)=0\tag{1}
$$
for every $x\in G$ and
$$
\alpha_1f_1(y)f_1(x)+\dots+\alpha_kf_1(y)f_k(x)=0\tag{2}
$$
By subtracting $(2)$ from $(1)$ we get

$$
\alpha_2(f_2(y)-f_1(y))f_2(x)+\dots+\alpha_k(f_k(y)-f_1(y))f_k(x)=0
$$
for all $x$, hence
$$
\alpha_2(f_2(y)-f_1(y))f_2+\dots+\alpha_k(f_k(y)-f_1(y))f_k=0
$$
which would be a shorter linear relation, so we conclude that
$$
f_2(y)=f_1(y),\quad

\dots,\quad
f_k(y)=f_1(y)
$$
Now, choose $y$ such that $f_1(y)\ne f_2(y)$ and you have your contradiction.


Thursday 18 January 2018

real analysis - Approximation of $\int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-nt}\, \mathrm dt,\ n\rightarrow\infty$

How can I find the first term of the series expansion of



$$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-nt} \mathrm dt,n\rightarrow\infty ?$$




Or:



As $$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-nt} \mathrm dt = \frac{1}{\sqrt{n}}\int_{0}^{\infty} \frac{\ln(\frac{t}{n})}{\sqrt{t}}e^{-t} \mathrm dt $$



What is $$ \lim_{n\rightarrow\infty} \int_{0}^{\infty} \frac{\ln(\frac{t}{n})}{\sqrt{t}}e^{-t} \mathrm dt?$$
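
For what it's worth, one route (relying on the standard values $\Gamma(1/2)=\sqrt\pi$ and $\Gamma'(1/2)=\sqrt\pi\,(-\gamma-2\ln 2)$): since $\int_0^\infty \frac{\ln t}{\sqrt t}e^{-t}\,dt = \Gamma'(1/2)$, the rescaled form gives $$\int_0^\infty \frac{\ln t}{\sqrt t}\,e^{-nt}\,dt = \frac{1}{\sqrt n}\left(\Gamma'(1/2) - \sqrt{\pi}\,\ln n\right) = -\sqrt{\frac{\pi}{n}}\left(\ln n + \gamma + 2\ln 2\right),$$ so the first term of the expansion is $-\sqrt{\pi/n}\,\ln n$, and the second displayed integral diverges to $-\infty$.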

negative binomial - Probability that 4 boxes are purchased?



The probability that a randomly selected box of a certain type of cereal has a particular prize is 0.2. Suppose you purchase box after box until you obtained two of these prizes. What is the probability that four boxes are purchased?




My approach was to use negative binomial distribution:



Let $X$ be the number of boxes purchased that don't have a prize until you find two prizes.



P(4 boxes purchased) = P(2 boxes without a prize before the 2nd prize)
=$$\binom{x+r-1}{r-1}(p)^r(1-p)^x$$ = $$\binom{2+2-1}{2-1}(0.2)^2(1-0.2)^2$$ = $$\binom 3 1(0.2)^2(0.8)^2$$ = $$0.0768$$




I wanted to know if this is correct approach to solving this kind of probability question?
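
For a numeric check of this negative binomial computation (plain Python):

    from math import comb

    p, r, k = 0.2, 2, 2     # success prob, prizes needed, prize-less boxes
    print(comb(k + r - 1, r - 1) * p**r * (1 - p)**k)   # 0.0768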

Wednesday 17 January 2018

elementary number theory - Fast modular exponentiation



Suppose that $p$ and $q$ are distinct primes; then for every integer $a$ and exponent $e$ with $e \not\equiv 0 \,(\bmod \,(p - 1)(q - 1))$, show that
${a^e} \equiv {a^{e\, \bmod \,(p - 1)(q - 1)}}\,(\bmod \,p \cdot q)$.



I tried to prove it this way:



Let $n = p \cdot q$, $\gcd (p,q) = 1$ and $e' = e\,\bmod \,\varphi (n),$ then $e = m \cdot \varphi (n) + e',\,\,\,m \in {\Bbb Z}$. We have ${a^e} = {a^{m \cdot \varphi (n) + e'}} = {({a^{\varphi (n)}})^m} \cdot {a^{e'}} \equiv {a^{e'}}\,(\bmod \,n)$. The last step is true based on the Euler's totient theorem.




Unfortunately, this proof is correct if and only if $a$ is relatively prime to $n$. However, the task asks to prove it for every integer $a$ and exponent $e$. How to prove it in that case?


Answer



Case$\#1:$ If $p|a$, clearly ${a^e} \equiv {a^{e\, \bmod \,(p - 1)(q - 1)}}\,(\bmod \,p )$ holds, as both exponents are positive,



and similarly for $q$



Case$\#2:$ If $p\nmid a\iff (a,p)=1, a^{p-1}\equiv1\pmod p$



$\implies a^{\text{lcm}(p-1,q-1)}\equiv1\pmod p$

and similarly, $a^{\text{lcm}(p-1,q-1)}\equiv1\pmod q$ for $q\nmid a\iff(a,q)=1$



$\implies a^{\text{lcm}(p-1,q-1)}\equiv1\pmod{pq}$



Let $e=r+k\cdot\text{lcm}(p-1,q-1)$ with $r$ a positive integer; in particular $r = e \bmod (p-1)(q-1)$ works, since $\text{lcm}(p-1,q-1)$ divides $(p-1)(q-1)$.



$a^e=a^r\cdot (a^{\text{lcm}(p-1,q-1)})^k\equiv a^r\cdot1^k\pmod{pq}$


elementary set theory - Subbasis for Cartesian Product Topology




Let $\aleph(\mathcal{A})$ be arbitrary (i.e., let $\mathcal{A}$ be an indexing set of arbitrary cardinality). Then, in the space $\prod \{Y_{\alpha}|\alpha \in \mathcal{A}\}$, for each $\alpha \in \mathcal{A}$, let $\displaystyle \Sigma_{\alpha}$ be a subbasis for the topology $\mathcal{T}_{\alpha}$ of $Y_{\alpha}$.



I need to prove that the family $\{ \langle V_{\beta}\rangle \,|\,\text{all}\,v_{\beta}\in \Sigma_{\beta}; \, \text{all}\, \beta \in \mathcal{A} \}$ is also a subbasis for the cartesian product topology in $\displaystyle \prod_{\alpha}Y_{\alpha}$



Please note that this is NOT homework! I have my final exam on Wednesday and I am trying to read through Chapter IV of Dugundji's Topology to help me prepare, but I still am having a lot of trouble even understanding how this cartesian product topology works.



This problem appears in the text as part of a given theorem, and then for the proof, it says "This is immediate from I, 9.5, and is left for the reader".



I. 9.5 states that





"Let $\{Y_{\alpha}|\alpha \in \mathcal{A}\}$ be a family of nonempty sets; for each $\alpha \in \mathcal{A}$, let $A_{\alpha}, B_{\alpha}$ be subsets of $Y_{\alpha}$. Then, $$(1) \, \displaystyle \prod_{\alpha}A_{\alpha} \cap \prod_{\alpha}B_{\alpha} $$
$$(2) \displaystyle \prod_{\alpha}A_{\alpha} \cup \prod_{\alpha}B_{\alpha} \subset \prod_{\alpha}\left(A_{\alpha} \cup B_{\alpha} \right)" $$




I don't really see how this result is "immediate" from this - and even if it were, why would it be left to the reader? If something is left to the reader that implies that it requires a little bit of work and mechanics and manipulation, right?



On that front, I believe the way to show that this family $\{\langle V_{\beta} \rangle\}$ is a subbasis for the product topology, call it $\mathcal{T}_{p}$ is to show that the topology generated by $\{\langle V_{\beta} \rangle \}$, call it just $\mathcal{T}$ is the same topology as $\mathcal{T}_{p}$. However, I am not sure how to go about doing this. Especially since we need to be a little bit careful, because isn't it true that a set of the form $\displaystyle \prod U_{\alpha}$ where each $U_{\alpha}$ is a proper open subset of $Y_{\alpha}$ is never an open set in the cartesian product $\displaystyle \prod Y_{\alpha}$? (This is what I meant when I said I was having trouble understanding how the cartesian topology works.)



Anyway, to start, I'd say, suppose $\langle V_{\beta} \rangle = p_{\beta}^{-1}(V_{\beta})$ is a subbasis for $\mathcal{T}$. Then, taking finite intersections of these $p_{\beta}^{-1}(V_{\beta})$, we obtain basic elements for the topology $\mathcal{T}$. Now, from a different result related to I. 9.5, we have that $\displaystyle \cap_{\beta} \langle V_{\beta} \rangle = \prod_{\beta} V_{\beta}$, and so our basic elements look like $\displaystyle \cup_{\beta}\prod_{\beta}V_{\beta}$.




Then, by I. 9.5, we have that $\displaystyle \cup_{\beta}\prod_{\beta}V_{\beta} \subset \prod_{\beta}\left( \cup_{\beta}V_{\beta} \right)$. But then, how do I get that this gives me the open sets in $\mathcal{T}_{p}$?



Then, going in the other direction is also a problem for me. How do I show that $\mathcal{T}_{p} \subset \mathcal{T}$ here?



There is a question already posted on here where the OP asked for a proof of this result using the approach of proving that the product topology is the smallest topology containing $\{ V_{\beta} \}$, but I'm not sure I really understand the language the answer uses regarding maximal elements, or even why proving that it is the smallest such topology is helpful (for reference, the question/answer is here).



What I would like is for someone to provide me with a worked out detailed proof building on what I've already said here (supposing it's right, of course) explaining all the why's. Like I said, this is not homework - I was just trying to work through it on my own to prepare for my exam, and I got stuck. I have a tendency of doing that - getting to a point and then getting stuck for hours in a quagmire of details and losing loads of valuable study time because of it. Therefore, any help you could give me would be greatly appreciated!



Thanks ahead of time.



Answer



So the product topology on $\Pi_{\alpha \in \mathcal A}Y_{\alpha}$ is defined as the topology generated by the subbase $\cup_{\alpha \in \mathcal A}\mathcal C_{\alpha}$ (i.e. the coarsest topology for which all the projections $\{\pi_{\alpha}\}$ are continuous), where
$$\mathcal C_{\alpha} = \{\pi_{\alpha}^{-1}(A): \, A \in \mathcal T_{\alpha}\}$$



Now consider
$$\mathcal D_{\alpha} = \{\pi_{\alpha}^{-1}(A): \, A \in \Sigma_{\alpha}\}$$



Since $\Sigma_{\alpha}$ is a subbase of $\mathcal T_{\alpha}$, every $O \in \mathcal T_{\alpha}$ can be expressed as an arbitrary union of sets, each of which is a finite intersection of elements of $\Sigma_{\alpha}$.



Also we know that $f^{-1}(\cap_i A_i)=\cap_i f^{-1}(A_i)$ and $f^{-1}(\cup_i A_i)=\cup_i f^{-1}(A_i)$




Thus any element of $\mathcal C_{\alpha}$ can be expressed as an arbitrary union of sets, each of which is a finite intersection of elements of $\mathcal D_{\alpha}$.



Since $\cup_{\alpha \in \mathcal A}\mathcal C_{\alpha}$ is a subbase of the product topology, so is $\cup_{\alpha \in \mathcal A}\mathcal D_{\alpha}$. So we are done.
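
For readers who like to experiment, here is a minimal Python sketch on finite toy spaces (all the spaces and subbases below are hypothetical choices; checking one example proves nothing, but it illustrates the mechanism): it generates the topology from a subbase by closing under finite intersections and then unions, and confirms that preimages of subbasic sets generate the same topology as preimages of open sets.

from itertools import combinations, product

def generate(space, subbase):
    # base: all finite intersections of subbasic sets (empty intersection = whole space)
    space = frozenset(space)
    subbase = [frozenset(s) for s in subbase]
    base = {space}
    for k in range(1, len(subbase) + 1):
        for combo in combinations(subbase, k):
            inter = space
            for c in combo:
                inter &= c
            base.add(inter)
    # topology: all unions of base sets, plus the empty set
    base = list(base)
    topo = {frozenset()}
    for k in range(1, len(base) + 1):
        for combo in combinations(base, k):
            u = frozenset()
            for c in combo:
                u |= c
            topo.add(u)
    return topo

Y1, S1 = {0, 1, 2}, [{0, 1}, {1, 2}]          # toy space with a subbase
Y2, S2 = {'a', 'b'}, [{'a'}]
T1, T2 = generate(Y1, S1), generate(Y2, S2)   # the topologies they generate

pts = set(product(Y1, Y2))
pre1 = lambda A: frozenset(p for p in pts if p[0] in A)   # pi_1^{-1}(A)
pre2 = lambda A: frozenset(p for p in pts if p[1] in A)   # pi_2^{-1}(A)

C = [pre1(A) for A in T1] + [pre2(A) for A in T2]   # preimages of open sets
D = [pre1(A) for A in S1] + [pre2(A) for A in S2]   # preimages of subbasic sets
assert generate(pts, C) == generate(pts, D)
print("same topology")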


integration - Integrating w.r.t. two different variables



I can't really follow what's going on here to get from line 2 to line 3:
\begin{align}
& \int^\infty_0 \frac{d^2 p}{(2\pi)^2} \frac{\Lambda^2}{(p^2+\Lambda^2)(p^2+m^2)} \\[10pt]
= {} & \int^1_0 dz \, \frac{d^2 p}{(2\pi)^2}\frac{\Lambda^2}{(p^2+z\Lambda^2+(1-z) m^2)^2} \\[10pt]
= {} & \int^1_0dz \, \frac{1}{4\pi}\frac{\Lambda^2}{z\Lambda^2+(1-z)m^2} \\[10pt]
= {} & \frac{1}{4\pi}\frac{\Lambda^2}{\Lambda^2-m^2}\ln\frac{\Lambda^2}{m^2}
\end{align}




There are two different integration variables and then one of them vanishes. I've tried evaluating the integral w.r.t $p$ and then again w.r.t. $p$ because it's $d^2 p$, but that didn't work (just using wolfram alpha). From line one to line two I can follow, it's a Feynman parametrization, but what do you do to get from line 2 to line 3?


Answer



The integral over momentum space is missing in Line 2 of the OP. That is to say that the expression $\displaystyle d^2 p\,\frac{\Lambda^2}{(p^2+z\Lambda^2+(1-z) m^2)^2}$ should read



$$\int_{-\infty}^\infty\int_{-\infty}^\infty \frac{\Lambda^2}{(p_1^2+p_2^2+z\Lambda^2+(1-z) m^2)^2}\,dp_1\,dp_2$$



which after a transformation to polar coordinates $(p_1,p_2)\mapsto(\rho,\phi)$ becomes



$$\begin{align}
\int_{-\infty}^\infty\int_{-\infty}^\infty \frac{\Lambda^2}{(p_1^2+p_2^2+z\Lambda^2+(1-z) m^2)^2}\,dp_1\,dp_2&= \int_0^{2\pi}\int_0^\infty \frac{\Lambda^2}{(\rho^2+z\Lambda^2+(1-z) m^2)^2}\,\rho\,d\rho\,d\phi\\\\
&=2\pi \int_0^\infty \frac{\Lambda^2\,\rho}{(\rho^2+z\Lambda^2+(1-z) m^2)^2}\,d\rho\\\\
&=2\pi\Lambda^2 \left.\left(-\frac12 \frac{1}{\rho^2+z\Lambda^2+(1-z)m^2}\right)\right|_{\rho=0}^{\rho\to \infty}\\\\
&=\frac{\pi\Lambda^2}{z\Lambda^2+(1-z)m^2}
\end{align}$$



Finally, dividing by $4\pi^2$ and integrating over $z$ from $0$ to $1$ yields the coveted result, since $$\int_0^1 \frac{dz}{z\Lambda^2+(1-z)m^2} = \frac{\ln(\Lambda^2/m^2)}{\Lambda^2-m^2}.$$
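
As an optional numerical cross-check of the whole chain (a small Python sketch using scipy; $\Lambda^2=4$ and $m^2=1$ are arbitrary illustrative values):

import numpy as np
from scipy.integrate import quad

lam2, m2 = 4.0, 1.0   # hypothetical values of Lambda^2 and m^2

# radial form of line 1: (1/(2 pi)^2) * 2 pi * int_0^inf rho drho ...
integrand = lambda rho: rho * lam2 / ((rho**2 + lam2) * (rho**2 + m2))
numeric, _ = quad(integrand, 0, np.inf)
numeric /= 2 * np.pi

closed = lam2 / (lam2 - m2) * np.log(lam2 / m2) / (4 * np.pi)
print(numeric, closed)   # should agree to quadrature precision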


Tuesday 16 January 2018

logarithms - Are there functions except $log$ where $f(ab)$ = $f(a)$+$f(b)$?

Find all functions $f:\mathbb{R}\to\mathbb{R}$ such that



$f(ab)=f(a)+f(b)$. Assume that $f$ is continuous.



The other answers unfortunately haven't provided any proofs.

modular arithmetic - Quadratic Residues, $y^2 equiv q bmod p,$

Given $p$, $q$ and
$$
y^2 \equiv q \bmod p,
$$
how to solve for the values of $y$?



I can do it manually with small numbers, but that approach does not scale to large numbers. Since there must be a systematic mathematical method, could you please help me with this?
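
The question is left open above, but for reference, the standard computational tool here is the Tonelli-Shanks algorithm. Below is a minimal Python sketch (it assumes $p$ is an odd prime; the demo values $p=10009$ and base $1234$ are arbitrary illustrative choices):

def legendre(a, p):
    return pow(a, (p - 1) // 2, p)   # 1 for residues, p-1 for non-residues, 0 if p | a

def sqrt_mod(q, p):
    # Tonelli-Shanks: return y with y*y = q (mod p), or None if no root exists
    q %= p
    if q == 0:
        return 0
    if legendre(q, p) != 1:
        return None                      # q is a non-residue: no solution
    if p % 4 == 3:
        return pow(q, (p + 1) // 4, p)   # easy case
    s, e = p - 1, 0                      # write p - 1 = s * 2^e with s odd
    while s % 2 == 0:
        s, e = s // 2, e + 1
    n = 2                                # find some quadratic non-residue n
    while legendre(n, p) != p - 1:
        n += 1
    x, b, g, r = pow(q, (s + 1) // 2, p), pow(q, s, p), pow(n, s, p), e
    while True:
        t, m = b, 0
        while t != 1:                    # least m with b^(2^m) = 1
            t, m = t * t % p, m + 1
        if m == 0:
            return x
        gs = pow(g, 1 << (r - m - 1), p)
        x, b, g, r = x * gs % p, b * gs * gs % p, gs * gs % p, m

p = 10009                                # an odd prime with p = 1 (mod 4)
q = pow(1234, 2, p)                      # a guaranteed residue for the demo
y = sqrt_mod(q, p)
print(y, p - y, (y * y - q) % p == 0)    # the two roots are y and p - y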

calculus - Proof using the mean value theorem



I need to prove (using the mean value theorem) that for all $x\in(0,\infty)$, the following inequality holds:



$\exp(-x)\geq 1 - x$






I don't know how the mean value theorem is applicable here, since we don't have a closed interval.




How do I prove the statement?


Answer



Hint: let $f(t)=e^{-t}$, and (for a fixed $x$) try using the mean value theorem on the interval $[0,x]$.
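
For the reader checking their work afterwards, here is one way the hint plays out (a sketch, not necessarily the intended write-up): for fixed $x>0$, the mean value theorem applied to $f(t)=e^{-t}$ on $[0,x]$ gives some $c\in(0,x)$ with
$$\frac{e^{-x}-e^{0}}{x-0}=f'(c)=-e^{-c},$$
so $e^{-x}=1-xe^{-c}\geq 1-x$, because $0<e^{-c}<1$ and $x>0$.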


decimal expansion - Is $0.9999...$ an integer?




Just out of curiosity: since
$$\sum_{i>0}\frac{9}{10^{i}}=0.999\ldots=1,$$
does that mean that $0.999\ldots$ is an integer, by applying the transitive property?




Ty.


Answer



$0.999999\ldots$ is indeed $1$, which indeed is a natural number and therefore an integer.
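
For completeness, the premise itself is just the geometric series (a one-line check):
$$\sum_{i>0}\frac{9}{10^{i}}=9\cdot\frac{1/10}{1-1/10}=1.$$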


Sunday 14 January 2018

number theory - Show that $(2^p-1,2^{q-1}-1)=2^{(p,q-1)}-1$




Let $p$ and $q$ are two prime numbers. Also, let us assume $q|(2^p-1)$. Then show that $(2^p-1,2^{q-1}-1)=2^{(p,q-1)}-1$. Note- $(p,q)$ denotes HCF of $p$ and $q$.


Answer



It's true in general that $$\left (2^a-1,2^b-1 \right )=2^{(a,b)}-1$$ for positive integers $a$ and $b$.



My favorite proof goes as follows:



Perform an induction on $a+b$:





  • The base case is $a+b=2$, so $a=b=1$, where it's obvious.


  • Now assume that it's true for all $a$ and $b$ with $a+b \leq k$.




Take some $a$ and $b$ with $a+b=k+1$. Let also $d=\left (2^a-1,2^b-1 \right )$.



Clearly $d$ is odd, and also:



$$d \mid (2^a-1)-(2^b-1)=2^b(2^{a-b}-1)$$ (assuming $a \geq b$), so:




$$d \mid 2^{a-b}-1$$



This means that :



$$d \mid (2^b-1,2^{a-b}-1)$$



But because $b+(a-b)=a \leq k$, it follows that the RHS is $2^{(b,a-b)}-1$.



Also it's clear that $(b,a-b)=(a,b)$ so :




$$d \mid 2^{(a,b)}-1$$



But also $$2^{(a,b)}-1 \mid d $$ because :
$$2^{(a,b)}-1 \mid 2^a-1$$ and also :



$$2^{(a,b)}-1 \mid 2^b-1$$



This means that $d=2^{(a,b)}-1$ and the induction is finished.




NOTE: This is essentially equivalent to a proof using the Euclidean algorithm.
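
A quick brute-force confirmation of the identity for small exponents (an illustrative Python one-off, not part of the proof; the range bound is arbitrary):

from math import gcd

for a in range(1, 40):
    for b in range(1, 40):
        assert gcd(2**a - 1, 2**b - 1) == 2**gcd(a, b) - 1
print("identity verified for 1 <= a, b < 40")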


calculus - Limits: How to evaluate $limlimits_{xrightarrow infty}sqrt[n]{x^{n}+a_{n-1}x^{n-1}+cdots+a_{0}}-x$




This is being asked in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions, and here: List of abstract duplicates.






What methods can be used to evaluate the limit $$\lim_{x\rightarrow\infty} \sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x.$$



In other words, if I am given a polynomial $P(x)=x^n + a_{n-1}x^{n-1} +\cdots +a_1 x+ a_0$, how would I find $$\lim_{x\rightarrow\infty} P(x)^{1/n}-x.$$



For example, how would I evaluate limits such as $$\lim_{x\rightarrow\infty} \sqrt{x^2 +x+1}-x$$ or $$\lim_{x\rightarrow\infty} \sqrt[5]{x^5 +x^3 +99x+101}-x.$$


Answer




Your limit can be rewritten as
$$\lim_{x\rightarrow\infty}\frac{\sqrt[n]{1+\frac{a_{n-1}}{x}+\cdots+\frac{a_{0}}{x^{n}}}-1}{1/x}$$
Or equivalently,
$$\lim_{y\rightarrow 0}\left(\frac{\sqrt[n]{1+{a_{n-1}}{y}+\cdots+{a_{0}}{y^{n}}}-1}{y}\right)$$
This, by the definition of derivative, is the derivative of the function $f(y) = {\sqrt[n]{1+{a_{n-1}}{y}+\cdots+{a_{0}}{y^{n}}}}$ at $y = 0$, which evaluates via the chain rule to ${a_{n-1} \over n}$.
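
To double-check the $\frac{a_{n-1}}{n}$ answer on the two sample limits, here is a small sympy sketch (note that the second polynomial has $a_4=0$, so its limit is $0$):

import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.limit(sp.sqrt(x**2 + x + 1) - x, x, sp.oo))                          # 1/2
print(sp.limit((x**5 + x**3 + 99*x + 101)**sp.Rational(1, 5) - x, x, sp.oo))  # 0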


Saturday 13 January 2018

sequences and series - What about the asymptotic behaviour of $sum_{n=1}^Nfrac{logoperatorname{rad}(n)}{n}$ as $Ntoinfty$?




Motivation. Let $\mu(n)$ the Möbius function and we denote with $$\operatorname{rad}(n)=\prod_{p\mid n}p$$ the radical of an integer $n\geq 1$, see this Wikipedia.



Fact. Using that $\mu(n)=0$ iff $\operatorname{rad}(n)<n$ (see MathWorld), and splitting into the cases 1) $\mu(n)=0$, 2) $\operatorname{rad}(n)=n$ with $\mu(n)=1$, and 3) $\operatorname{rad}(n)=n$ with $\mu(n)=-1$, one can prove $$\sum_{n=1}^\infty\frac{\mu(n)}{n}\log\operatorname{rad}(n)=-1.$$



Since series involving $\operatorname{rad}(n)$ are interesting, I've devised this exercise:




Question. We know Chebyshev's result about the asymptotic behaviour of $\sum_{p\leq n}\frac{\log p}{p}$, and since the series $$\sum_{n=1}^\infty\frac{\log\operatorname{rad}(n)}{n}$$ has nonnegative terms, I know that it diverges. But is it possible to deduce something more precise about the asymptotic behaviour of the partial sums $$\sum_{n=1}^N\frac{\log\operatorname{rad}(n)}{n}$$ as $N\to\infty$? Provide hints or a detailed answer, as you prefer, or references if it is well known. Many thanks.



Answer




In the following, $p$ always runs over primes, and $n,k$ over positive integers. By changing the order of summation, we obtain



\begin{align}
\sum_{n \leqslant x} \frac{\log (\operatorname{rad} n)}{n}
&= \sum_{n \leqslant x} \sum_{p\mid n} \frac{\log p}{n} \\
&= \sum_{p \leqslant x} \sum_{\substack{n \leqslant x \\ p \mid n}} \frac{\log p}{n} \\
&= \sum_{p \leqslant x} \frac{\log p}{p}\sum_{k \leqslant x/p} \frac{1}{k} \\
&= \sum_{p\leqslant x} \frac{\log p}{p}\biggl(\log \frac{x}{p} + \gamma + O\biggl(\frac{p}{x}\biggr)\biggr) \\
&= \log x\sum_{p\leqslant x} \frac{\log p}{p} - \sum_{p\leqslant x} \frac{(\log p)^2}{p} + \gamma\sum_{p\leqslant x} \frac{\log p}{p} + O\Biggl(\frac{1}{x}\sum_{p\leqslant x}\log p\Biggr).
\end{align}




From the prime number theorem, we have the strengthening of Mertens' first theorem that there is a constant $C$ such that



$$\sum_{p\leqslant x} \frac{\log p}{p} = \log x + C + o(1).\tag{1}$$



Also, $\vartheta(x) = \sum_{p \leqslant x} \log p \sim x$, so it remains to find the behaviour of



\begin{align}
\sum_{p \leqslant x} \frac{(\log p)^2}{p}
&= \bigl(\log x + C + o(1)\bigr)\log x - \int_2^x \frac{\log t + C + o(1)}{t}\,dt \\
&= (\log x)^2 + C\log x + o(\log x) - \frac{1}{2}\bigl((\log x)^2 - (\log 2)^2\bigr) - C\log x + C\log 2 + o(\log x) \\
&= \frac{1}{2}(\log x)^2 + o(\log x)
\end{align}



to overall get



$$\sum_{n \leqslant x} \frac{\log (\operatorname{rad} n)}{n} = \frac{1}{2}(\log x)^2 + (C + \gamma)\log x + o(\log x).$$
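
A numerical illustration of the main term (a self-contained Python sieve; the cutoff $N$ is an arbitrary choice, and the printed difference should be of order $\log N$, matching the $(C+\gamma)\log N$ secondary term):

from math import log

N = 200_000
lograd = [0.0] * (N + 1)
prime = [True] * (N + 1)
for p in range(2, N + 1):
    if prime[p]:
        for m in range(2 * p, N + 1, p):
            prime[m] = False
        for m in range(p, N + 1, p):
            lograd[m] += log(p)          # log rad(n) = sum of log p over p | n

s = sum(lograd[n] / n for n in range(1, N + 1))
print(s, 0.5 * log(N) ** 2)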


calculus - Stuck on basic limit problem: $lim_{x to 0} frac{sin(tan x)}{sin x}$



Consider $\lim_{x \to 0} \frac{\sin(\tan x)}{\sin x}$. The answer is $1$. This is clear intuitively since $\tan x ≈ x$ for small $x$. How do you show this rigorously? In general, it does not hold that $\lim_{x \to p} \frac{f(g(x))}{f(x)} = 1$ if $g(x) - x \to 0$ as $x \to p$.



No advanced techniques like series or L'Hôpital. This is an exercise from a section of a textbook which only presumes basic limit laws and continuity of composite continuous functions.




This should be a simple problem but I seem to be stuck. I've tried various methods, including $\epsilon-\delta$, but I'm not getting anywhere. The composition, it seems to me, precludes algebraic simplification.


Answer



$$\lim_{x \to 0}\dfrac{\sin(\tan(x))}{\sin(x)}=\lim_{x \to 0}\dfrac{\sin(\tan(x))}{\sin(x)/x} \cdot \dfrac{1}{x} = \lim_{x \to 0}\dfrac{\sin(\tan(x))/\tan(x)}{\sin(x)/x} \cdot \dfrac{\tan(x)}{x}\text{.}$$
Now
$$\lim_{x \to 0}\sin(\tan(x))/\tan(x) = \lim_{\tan(x) \to 0}\sin(\tan(x))/\tan(x) = 1\text{,}$$
$$\lim_{x \to 0}\sin(x)/x = 1$$
and you can use this (or any of the other answers if you haven't covered derivatives) to show
$$\lim_{x \to 0}\tan(x)/x=\lim_{x \to 0}\frac{\sin(x)}{x}\cdot\frac{1}{\cos(x)}=1\cdot\sec(0) = 1\text{.}$$


Friday 12 January 2018

calculus - show that $int_{0}^{infty} frac {sin^3(x)}{x^3}dx=frac{3pi}{8}$



Show that



$$\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}dx=\frac{3\pi}{8}$$



using different methods.




Thanks to all.


Answer



Let $$f(y) = \int_{0}^{\infty} \frac{\sin^3{yx}}{x^3} \mathrm{d}x$$
Then,
$$f'(y) = 3\int_{0}^{\infty} \frac{\sin^2{yx}\cos{yx}}{x^2} \mathrm{d}x = \frac{3}{4}\int_{0}^{\infty} \frac{\cos{yx} - \cos{3yx}}{x^2} \mathrm{d}x$$
$$f''(y) = \frac{3}{4}\int_{0}^{\infty} \frac{-\sin{yx} + 3\sin{3yx}}{x} \mathrm{d}x$$
Therefore,
$$f''(y) = \frac{9}{4} \int_{0}^{\infty} \frac{\sin{3yx}}{x} \mathrm{d}x - \frac{3}{4} \int_{0}^{\infty} \frac{\sin{yx}}{x} \mathrm{d}x$$



Now, it is quite easy to prove that $$\int_{0}^{\infty} \frac{\sin{ax}}{x} \mathrm{d}x = \frac{\pi}{2}\mathop{\mathrm{signum}}{a}$$




Therefore,
$$f''(y) = \frac{9\pi}{8} \mathop{\mathrm{signum}}{y} - \frac{3\pi}{8} \mathop{\mathrm{signum}}{y} = \frac{3\pi}{4}\mathop{\mathrm{signum}}{y}$$
Then,
$$f'(y) = \frac{3\pi}{4} |y| + C$$
Note that, $f'(0) = 0$, therefore, $C = 0$.
$$f(y) = \frac{3\pi}{8} y^2 \mathop{\mathrm{signum}}{y} + D$$
Again, $f(0) = 0$, therefore, $D = 0$.



Hence, $$f(1) = \int_{0}^{\infty} \frac{\sin^3{x}}{x^3}\,\mathrm{d}x = \frac{3\pi}{8}$$
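
A quick numerical confirmation (an optional Python/scipy sketch; the cutoff $200$ is arbitrary, and the neglected tail is below $\int_{200}^\infty x^{-3}\,dx \approx 1.3\cdot 10^{-5}$):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.sinc(x / np.pi) ** 3   # np.sinc(t) = sin(pi t)/(pi t), so this is (sin x / x)^3
val, _ = quad(f, 0, 200, limit=500)
print(val, 3 * np.pi / 8)               # both ~ 1.17810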



inequality - Lower and upper bound on $sum_i logfrac{N}{x_i}$ with $sum_i x_i = N$



Some days ago I came across this question. You may notice that I tried to answer it being however really not sure about what I was saying. (Feel free to "-1" my post !). Not being satisfied by my answer, I've been playing around with this problem and came to an odd conclusion. I'm therefore asking you to check what's wrong in my proof. I repeat the setting here :



Let $N \in \mathbb{N},\ N \geq 1$; let also $x_1,\dots,x_n$ with $x_i \geq 1$ and $\sum_i x_i = N$. Note also that $n$ may vary, $1 \leq n \leq N$.



(In the original post the OP possibly specified $x_i \in \mathbb{N}$ but I relaxed this constraint here)




We are interested in the following quantity, for which I managed to extract a lower and an upper bound: $$\sum_{i=1}^n \log \frac{N}{x_i}$$




  • Lowerbound :
    $$
    \sum_{i=1}^n \log \frac{N}{x_i} = n\log N - \sum_{i=1}^n \log x_i \overset{(*)}{\geq} n\log N -n\log\left(\frac1n \sum_{i=1}^n x_i \right) = \\
    n \log N - n\log\frac{N}n = n\log n
    $$


  • Upperbound :

    $$
    \sum_{i=1}^n \log \frac{N}{x_i} \overset{(*)}{\leq} n \log\left(\frac1n \sum_{i=1}^n \frac{N}{x_i} \right) = n\log N -n\log n + n\log\left(\sum_{i=1}^n \frac{1}{x_i} \right) \overset{(**)}{\leq} \\
    n\log N -n\log n + n\log\left(\sum_{i=1}^n \frac{n}{N} \right) = n\log N -n\log n + n\log\left(\frac{n^2}{N}\right) = \\
    n\log N -n\log n + 2n\log n - n\log N = n \log n
    $$




$(*)$ Jensen's inequality, precisely $n \log\left(\frac1n \sum_i x_i \right) \geq \sum \log x_i$



$(**)$ Setting $x_i = N/n$, which maximizes the sum.




This therefore would prove that $\sum_{i=1}^n \log \frac{N}{x_i} = n \log n$, which sounds totally wrong to me; however, I'm not sure where the proof fails. Could you help me? :)






EDIT : After seeing the comments, it is now clear that $(**)$ is wrong. The setting that maximizes the sum, I think, is $x_1 = N-n+1,\ x_2 = ... = x_n = 1$. Remember that $x_i$ must be $\geq 1$ in the statement.


Answer



Let's first examine this problem: given $A$ with $0 < A \le 1/n$, find the extrema of $$L = -\sum_{i=1}^n \log x_i$$ when $A\le x_i$ for all $i$ and $\sum_i x_i = 1$.



Since $L$ and the constraints are symmetric in their treatment of the $x_i$, any permutation of their values has no effect on $L$. In particular, we can restrict our attention to the region where $x_n \ge x_i$ for $i < n$. Any extremum that occurs elsewhere must also occur in this region.




In the special case where $A = 1/n$, there is only one point within the region of interest: $(A, A, ..., A)$, which is therefore both maximum and minimum. So assume that $A < 1/n$. Therefore the largest coordinate $x_n > A$.



Now $x_n = 1 - \sum_{i<n} x_i$, so treating $x_n$ as dependent and the others as independent, we get for all $i < n$, $$\frac{\partial L}{\partial x_i} = \frac{1}{1 - \sum_{j<n} x_j} - \frac{1}{x_i} = \frac{1}{x_n} - \frac{1}{x_i}$$

If $x_i < x_n$, then $x_i$ can be increased; but because the partial derivative is negative in that case, doing so causes $L$ to decrease. Conversely, if $x_n > x_i > A$, then decreasing $x_i$ causes $L$ to increase (by the second derivative test, this is also true when $x_n = x_i$).



Hence, the maximum of $L$ occurs when all of the $x_i$ with $i<n$ are pushed down to $A$, and the minimum when all coordinates are equal to $1/n$:
$$\max L = -\log(1-(n-1)A) - (n-1)\log A$$
$$\min L = n\log n$$




Now your problem is to find the extrema of $$K = -\sum_i \log\left(\frac{x_i}N\right)$$
given
$$1\le x_i, \quad \sum_i x_i = N$$
Make the substitution $y_i = \frac{x_i}N$, and we get
$$K = -\sum_i \log y_i$$
given
$$1/N \le y_i,\quad \sum_i y_i = 1$$ (so here $A = 1/N$)
By the above work, the extrema are
$$\max K = n\log N - \log (N+1-n)$$

$$\min K = n\log n$$
If we allow $n$ to vary from $1$ to $N$, then $$\max K = N\log N, \qquad \min K = 0.$$
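
A randomized sanity check of these extrema (a Python sketch; $N=20$, $n=5$ and the sample count are arbitrary illustrative choices):

import random
from math import log

N, n = 20, 5
min_K = n * log(n)
max_K = n * log(N) - log(N + 1 - n)

for _ in range(10_000):
    w = [random.random() for _ in range(n)]
    t = sum(w)
    x = [1 + (N - n) * wi / t for wi in w]   # feasible point: each x_i >= 1, sum = N
    K = sum(log(N / xi) for xi in x)
    assert min_K - 1e-9 <= K <= max_K + 1e-9
print("all sampled values lie in [n log n, n log N - log(N+1-n)]")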


measure theory - Prove that $limsup_{ntoinfty}int f_nleq intlimsup_{ntoinfty }f_n$




Let $(f_n)$ be a sequence of measurable functions such that $|f_n|\leq 1$ for all $n$.



1) Prove that $$\limsup_{n\to\infty}\int f_n\leq \int\limsup_{n\to\infty }f_n$$



2) Does this inequality hold if we only suppose $f_n\geq 0$, not necessarily bounded?



My attempt



I write $f_n=f_n^+-f_n^-$ where $f_n^+(x)=\max\{0,f_n(x)\}$ and $f_n^-(x)=\max\{0, -f_n(x)\}$. Then
\begin{align*}

\limsup_{n\to\infty }\int f_n&=\limsup_{n\to\infty }\int f_n^+-f_n^-\\
&\leq\limsup_{n\to\infty }\int f_n^++\limsup_{n\to\infty }\int(-f_n^-)\\
&=\limsup_{n\to\infty }\int f_n^+-\liminf_{n\to\infty }\int f_n^-\\
&\underset{Fatou}{\leq} \limsup_{n\to\infty }\int f_n^+-\int\liminf_{n\to\infty }f_n^-\\
&= \limsup_{n\to\infty }\int f_n^++\int\limsup_{n\to\infty }(-f_n^-)
\end{align*}
but I can't conclude.



For 2) I'm sure it's wrong, but I can't find a counterexample.


Answer




Question 1: First of all: If you consider the Lebesgue measure on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$, then your claim is not correct. Just consider



$$f_n(x) := 1_{[n,n+1]}(x),$$



then



$$\limsup_{n \to \infty} \int f_n(x) \, dx= 1 > 0 = \int \limsup_{n \to \infty} f_n(x) \, dx.$$



Now if you restrict the Lebesgue measure to $[0,1]$, i.e.




$$\limsup_{n \to \infty} \int_{[0,1]} f_n(x) \, dx \leq \int_{[0,1]} \limsup_{n \to \infty} f_n(x) \, dx,$$



then this inequality follows from Fatou's lemma applied to the non-negative sequence $g_n(x) := 1-f_n(x)$.



Question 2: No, this inequality is, in general, not satisfied if $f_n$ is not bounded. Consider



$$f_n(x) := n 1_{[0,1/n]}(x).$$ Indeed, $\int f_n \, dx = 1$ for every $n$, while $\limsup_{n \to \infty} f_n(x) = 0$ for every $x > 0$ (and $\{0\}$ is a null set), so the right-hand side of the inequality is $0 < 1$.



Remark: More generally, one can show the following statement





Let $(X,\mathcal{A},\mu)$ be a measure space. If $(f_n)_{n \in \mathbb{N}}$ is a sequence of measurable functions such that $f_n \leq g$ for some $g \in L^1(\mu)$, then $$\limsup_{n \to \infty} \int f_n \, d\mu \leq \int \limsup_{n \to \infty} f_n \, d \mu.$$



elementary number theory - Does $3$ divide $2^{2n}-1$?

Prove or find a counterexample: $3$ divides $2^{2n}-1$ for all $n\in\mathbb N$.



I computed $2^{2n}-1$ up to $n=5$ and it appears to hold, but how can I prove it? I tried contradiction, but could not conclude. Any ideas?
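
Extending the questioner's experiment a little further (numerical evidence only, of course not a proof — a Python one-liner with an arbitrary bound):

assert all((2**(2*n) - 1) % 3 == 0 for n in range(1, 1000))
print("3 divides 2^(2n) - 1 for all 1 <= n < 1000")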

Thursday 11 January 2018

integration - Integral of product of CDF and PDF of a random variable



For a continuous real random variable $X$ with CDF $F_X(x)$ and PDF $f_X(x)$ I want to prove the following
$$\int\limits_{-\infty}^{\infty}x(2F_X(x)-1)f_X(x)\,dx\geq 0$$
I was thinking of integration by parts but $x$ complicates it. Any hints?


Answer




Let $X$ and $Y$ denote i.i.d. random variables whose cdf are given by $F$.



Then $P(\max{\{X,Y\}}\leq x) = P(X\leq x, Y \leq x) = P(X \leq x)P(Y \leq x) = F(x)^2$.



Therefore the density (pdf) of $\max\{X,Y\}$ is given by $\frac{d}{dx}F(x)^2 = 2F(x)f(x)$.



We clearly have that $X \leq \max\{X,Y\}$, so that $E[X] \leq E[\max\{X,Y\}]$ which means that $$\int xf(x)dx \leq \int 2xF(x)f(x)dx$$
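
A quick Monte Carlo illustration of the final inequality (a Python sketch; the standard normal law for $X$, $Y$ is an arbitrary choice of continuous distribution):

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 1_000_000))
# E[max(X,Y)] - E[X] equals the integral in the question, hence should be >= 0
print(np.maximum(x, y).mean() - x.mean())   # ~ 1/sqrt(pi) ~ 0.5642 for normals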


summation - Finding sum of a finite series.

Consider the series




$$\frac {q_1} {p_1} + \frac {q_1 q_2} {p_1 p_2} + \cdots + \frac {q_1q_2 \cdots q_n} {p_1 p_2 \cdots p_n}$$ where $p_i + q_i = 1$ and $0 < p_i < 1$ and $0 < q_i < 1$ for all $i=1,2, \cdots , n$.



How can I find the sum of this series? Please help me in this regard.



Thank you very much.

Wednesday 10 January 2018

geometry - Calculate radius of variable circles surrounding big circle.



I have a circle and I know all its details: radius ($100$), diameter ($200$), circumference ($628.32$), area ($31415.93$), ...




I would like to surround this circle with smaller circles. I know the number of smaller circles (in this case $14$, but it can be fewer, with a minimum of $4$).



I need a formula to calculate the Radius of the smaller circles.



I have the following formula, but it is not right:
R = radius of big circle
n = number of smaller circles
P = tan(360/2n)
r = radius of smaller circles



r = (-(P*R))/(P-1)




Here's an example of how it should look (this is not exact, because I didn't know the radius of the smaller circles, I just guessed):



[image: example of smaller circles surrounding the big circle]



Thank you very much!


Answer



If you connect the centers of two adjacent little circles and the center of the big one, you'll get a triangle. The sides of this triangle have lengths $r+R, r+R$ and $2r$. A little trigonometry will get you that the top angle is



$$\theta=2\arcsin\left(\frac{r}{r+R}\right) \; .$$




Since you want the small circles to form a closed ring around the big circle, this angle should enter an integer amount of times in $360°$ (or $2\pi$ if you work in radians). Thus,



$$\theta=360°/n \; .$$



From this, you can compute that



$$r=\frac{R \sin(180°/n)}{1-\sin(180°/n)} \; .$$



Here's a plot of the result for $n=14$:




[plot: circle ring for $n=14$]



Here's the code in Scilab:




n=14;
R=1;
r=R*sin(%pi/n)/(1-sin(%pi/n));
theta=2*%pi*(0:999)/1000;
plot(R*cos(theta),R*sin(theta));
hold on;
for k=0:(n-1),
plot((r+R)*cos(2*k*%pi/n)+r*cos(theta),(r+R)*sin(2*k*%pi/n)+r*sin(theta));
end;
hold off;



real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...