Sunday 31 March 2019

Basic Algebra + basic calculus/ find the equation of a line

Find the equation of the line tangent to the graph of $f$ at $(1, -3)$, where $f$ is given by $f(x) = 6x^3 − 13x^2 + 4$. Use $y$ as the dependent variable when you write your equation.



My answer, which was marked wrong:




The derivative is $f'(x)= 18x^2-26x$
$f'(1) = 18-26 = -8$



the equation $(y+3)=-8(x-1)$
$y= -8x+5 $



but it was marked wrong, so could anyone help please :)

calculus - Integrate $\int_0^\pi\frac{3\cos x+\sqrt{8+\cos^2 x}}{\sin x}x\ \mathrm dx$



Please help me to solve this integral:
$$\int_0^\pi\frac{3\cos x+\sqrt{8+\cos^2 x}}{\sin x}x\ \mathrm dx.$$



I managed to calculate an indefinite integral of the left part:
$$\int\frac{\cos x}{\sin x}x\ \mathrm dx=\ x\log(2\sin x)+\frac{1}{2} \Im\ \text{Li}_2(e^{2\ x\ i}),$$
where $\Im\ \text{Li}_2(z)$ denotes the imaginary part of the dilogarithm. The corresponding definite integral $$\int_0^\pi\frac{\cos x}{\sin x}x\ \mathrm dx$$ diverges. So, it looks like in the original integral summands compensate each other's singularities to avoid divergence.




I tried a numerical integration and it looks plausible that
$$\int_0^\pi\frac{3\cos x+\sqrt{8+\cos^2 x}}{\sin x}x\ \mathrm dx\stackrel{?}{=}\pi \log 54,$$
but I have no idea how to prove it.
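The numerical experiment mentioned above is easy to reproduce; a minimal sketch, assuming SciPy is available (the integrand is finite at both endpoints, so a plain quadrature works):

```python
# Numerical sanity check of the conjectured value pi * log(54).
import math
from scipy.integrate import quad

def integrand(x):
    # (3 cos x + sqrt(8 + cos^2 x)) / sin x * x; finite limits at 0 and pi
    return (3*math.cos(x) + math.sqrt(8 + math.cos(x)**2)) / math.sin(x) * x

value, err = quad(integrand, 0, math.pi)
print(value, math.pi * math.log(54))  # both ~ 12.5318
```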


Answer



Let
$$y=\frac{3\cos x+\sqrt{8+\cos^2 x}}{\sin x},$$
then, solving this with respect to $x$, we get
$$x=\frac{\pi}{2}+\text{arccot}\frac{6y}{8-y^2}.$$
So,
$$\int_0^\pi\frac{3\cos x+\sqrt{8+\cos^2 x}}{\sin x}x\ \mathrm dx=\int_0^\infty\frac{6y(8+y^2)}{(4+y^2)(16+y^2)}\left(\frac{\pi}{2}+\text{arccot}\frac{6y}{8-y^2}\right)\mathrm dy.$$

The latter integral can be solved by Mathematica and yields $$\pi\log54.$$



Of course, we want to prove that the result returned by Mathematica is correct.

The following statement is provably true, as can be checked directly by taking derivatives of both sides:
$$\int\frac{6y(8+y^2)}{(4+y^2)(16+y^2)}\left(\frac{\pi}{2}+\text{arccot}\frac{6y}{8-y^2}\right)\mathrm dy =\\ \frac{1}{2} i \left(2 \text{Li}_2\left(\frac{iy}{8}+\frac{1}{2}\right)+\text{Li}_2\left(\frac{iy}{6}+\frac{1}{3}\right)+2\text{Li}_2\left(\frac{iy}{6}+\frac{2}{3}\right)+\text{Li}_2\left(\frac{iy}{4}+\frac{1}{2}\right)+\text{Li}_2\left(\frac{2i}{y-2 i}\right)-\text{Li}_2\left(-\frac{2 i}{y+2i}\right)-\text{Li}_2\left(-\frac{1}{6} i (y+2i)\right)-\text{Li}_2\left(-\frac{1}{4} i (y+2i)\right)-2 \left(-\text{Li}_2\left(-\frac{2i}{y-4 i}\right)+\text{Li}_2\left(\frac{2 i}{y+4i}\right)+\text{Li}_2\left(-\frac{1}{8} i (y+4i)\right)+\text{Li}_2\left(-\frac{1}{6} i (y+4i)\right)\right)\right)+\pi \left(\frac{1}{2}\log \left(3 \left(y^2+4\right)\right)+\log\left(\frac{3}{64}\left(y^2+16\right)\right)\right)+\log \left(4\left(y^2+4\right)\right) \arctan\left(\frac{y}{4}\right)-\left(\log576-2\log \left(y^2+16\right)\right) \arctan\left(\frac{4}{y}\right)+\log\left(y^2+4\right) \text{arccot}\left(\frac{6y}{8-y^2}\right)-\arctan\left(\frac{2}{y}\right)\log12 +\arctan\left(\frac{y}{2}\right)\log2$$



The remaining part is to calculate $\lim\limits_{y\to0}$ and $\lim\limits_{y\to\infty}$ of this expression, which I haven't done manually yet, but it looks like a doable task.


Saturday 30 March 2019

number theory - Show that it is impossible that $P(a)=b$, $P(b)=c$, and $P(c)=a$ at the same time.

Let $a, b, c$ be distinct integers, and let $P$ be a polynomial with integer coefficients. Show that it is impossible that $P(a)=b$, $P(b)=c$, and $P(c)=a$ at the same time.

natural numbers - Why is the sum over all positive integers equal to -1/12?





Recently, sources for mathematical infotainment, for example Numberphile, have given some information on the interpretation of divergent series as real numbers, for example



$\sum_{i=0}^\infty i = -{1 \over 12}$



This equation in particular is said to have some importance in modern physics, but, being infotainment, there is not much detail beyond that.




As an IT Major, I am intrigued by the implications of this, and also the mathematical backgrounds, especially since this equality is also used in the expansion of the domain of the Riemann-Zeta function.



But how does this work? Where does this equation come from, and how can we think of it in more intuitive terms?


Answer



Basically, the video is very disingenuous as they never define what they mean by "=."



This series does not converge to -1/12, period. Now, the result does have meaning, but it is not literally that the sum of all naturals is -1/12. The methods they use to show the equality are invalid under the normal meanings of series convergence.



What makes me dislike this video is when the people explaining it essentially say it is wrong to say that this sum tends to infinity. This is not true. It tends to infinity under normal definitions. They are the ones using the new rules which they did not explain to the viewer.




This is how you get lots of shares and likes on YouTube.
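To make one of those unstated "new rules" concrete: a standard assignment uses the alternating series $\eta(s)=\sum(-1)^{n+1}n^{-s}$, with $\zeta(s)=\eta(s)/(1-2^{1-s})$. At $s=-1$ the alternating series $1-2+3-4+\dots$ can be Abel-summed to $1/4$, giving $\zeta(-1)=(1/4)/(1-4)=-1/12$. A rough sketch in plain Python (no libraries assumed):

```python
# Abel summation of 1 - 2 + 3 - 4 + ...: evaluate
# f(x) = sum (-1)^(n+1) * n * x^(n-1) for x slightly below 1.
# The closed form is 1/(1+x)^2, which tends to 1/4 as x -> 1-.
def abel_sum(x, terms=200000):
    total, sign, power = 0.0, 1.0, 1.0
    for n in range(1, terms + 1):
        total += sign * n * power
        sign, power = -sign, power * x
    return total

eta_minus1 = abel_sum(0.999)           # ~ 1/(1.999)^2 ~ 0.25025
zeta_minus1 = eta_minus1 / (1 - 2**2)  # eta(s) = (1 - 2^(1-s)) * zeta(s) at s = -1
print(zeta_minus1)                     # ~ -0.0834, close to -1/12
```

Under the ordinary definition of convergence the partial sums still tend to infinity; the $-1/12$ only appears once "=" is redefined via a summation method like this one.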


abstract algebra - Structure of the field $\mathbb F_{p^n}(y)$




As the question says, I am confused about the notation above. At first glance it looks like I can compare it with something like $\mathbb R(i)$, but I am still confused about what the field $\mathbb F_{p^n}(y)$ looks like.
Consider the polynomial $f=x^2-y$; it clearly belongs to $\mathbb F_{p^n}(y)[x]$. Is $f$ irreducible in $\mathbb F_{p^n}(y)$? If I take $p=2, n=1$, I know that it is irreducible, but I don't understand exactly why.



What would be the field extension with respect to $f$ ?



I am quite comfortable with finite fields .


Answer



$\mathbb{F}_q(y)$ (with $q = p^n$) is presumably the field of rational functions over $\mathbb{F}_q$. It consists of fractions whose numerator and denominator are polynomials over $\mathbb{F}_q$.




$f$ is irreducible over $\mathbb{F}_q(y)$, no matter what $q$ is. We can see this, because the roots are clearly $\pm \sqrt{y}$, but $y$ doesn't have any square roots in $\mathbb{F}_q(y)$. (Note that if $p=2$, this is a double root! But it doesn't behave like double roots "usually" behave, because $f$ is inseparable.)



The field extension that $f$ defines is $\mathbb{F}_q(x)$; the embedding $\mathbb{F}_q(y) \to \mathbb{F}_q(x)$ sends $y \mapsto x^2$.


Friday 29 March 2019

Determine the divisibility of a given number without performing full division



My question is slightly more complicated than what's implied on the title, so I will start with an example. Given any number $N$ on base $10$, we can easily determine whether or not $N$ is divisible by any $d\in[2,9]$, except for $d=7$:





  • $N$ is divisible by $2$ if $N\bmod10$ is divisible by $2$

  • $N$ is divisible by $3$ if $N$'s digit-sum is divisible by $3$

  • $N$ is divisible by $4$ if $N\bmod100$ is divisible by $4$

  • $N$ is divisible by $5$ if $N\bmod10$ is divisible by $5$

  • $N$ is divisible by $6$ if $N$ is divisible by $2$ and $3$

  • $N$ is divisible by $8$ if $N\bmod1000$ is divisible by $8$

  • $N$ is divisible by $9$ if $N$'s digit-sum is divisible by $9$




In the case of $d=7$, we pretty much have to perform a full division (at least for some values of $N$).
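The rules in the list can be checked mechanically; a small Python sketch (the point being that each test only divides a tiny number, namely the last few digits or the digit sum, rather than $N$ itself):

```python
# "Easy" base-10 divisibility tests, as listed above.
def digit_sum(n):
    return sum(int(d) for d in str(n))

def easy_divisible(n, d):
    if d in (2, 5):
        return (n % 10) % d == 0       # look at the last digit only
    if d in (3, 9):
        return digit_sum(n) % d == 0   # digit-sum test
    if d == 4:
        return (n % 100) % 4 == 0      # last two digits
    if d == 8:
        return (n % 1000) % 8 == 0     # last three digits
    if d == 6:
        return easy_divisible(n, 2) and easy_divisible(n, 3)
    raise ValueError("no easy test implemented for d = %d" % d)

print(easy_divisible(123456, 8), 123456 % 8 == 0)  # True True
```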



I believe that the general rule for base $B$ and $d\in[2,B-1]$ is the following:



$[\text{We can easily determine } d\mid N_B]\iff[\gcd(B,d)>1]\vee[\gcd(B-1,d)>1]$



Is there an easy way to prove or refute this?







Partial answers will also be appreciated:




  • $[\text{We can easily determine } d\mid N_B]\impliedby[\gcd(B,d)>1]$

  • $[\text{We can easily determine } d\mid N_B]\impliedby[\gcd(B-1,d)>1]$

  • $[\text{We can easily determine } d\mid N_B]\implies[\gcd(B,d)>1]\vee[\gcd(B-1,d)>1]$



The $1^{st}$ one is pretty trivial; I believe that the $2^{nd}$ one also holds, but I'm not sure about the $3^{rd}$ one.


Answer




Just to summarise some of the material in the comments and expand it slightly.



There is a well-known test for 7: $a_k\dots a_1a_0$ is divisible by 7 iff $(a_2a_1a_0)-(a_5a_4a_3)+(a_8a_7a_6)-\dots$ is divisible by 7. For example, take 12383. We split the digits into groups, 12 and 383, and subtract to get $383-12=371$. That is divisible by 7, so 12383 is divisible by 7.



This approach can be used for divisors outside the range 2 to 9. The idea is to look for divisors of $B^k+1$. It also works for $B^k-1$ (but you add the groups instead of alternately adding and subtracting them). Since $1001=7\times11\times13$, the test for 7 can be used for 11 and 13 at the same time. For example, 371 is not divisible by 11 or 13, so 12383 is also not divisible by 11 or 13.
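The three-digit grouping trick can be sketched in a few lines of Python. It works because $1000\equiv-1\pmod{1001}$, so the alternating group sum is congruent to $N$ modulo every divisor of $1001=7\times11\times13$:

```python
# Alternating sum of three-digit groups; congruent to n mod 7, 11 and 13.
def alt_group_sum(n):
    total, sign = 0, 1
    while n:
        total += sign * (n % 1000)  # take the lowest three-digit group
        n //= 1000
        sign = -sign                # alternate +, -, +, ...
    return total

print(alt_group_sum(12383))  # 383 - 12 = 371
for d in (7, 11, 13):
    print(d, 12383 % d == 0, alt_group_sum(12383) % d == 0)
```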


calculus - Are some indefinite integrals impossible to compute or just don't exist?





I've just started working with integrals relatively recently and I am so surprised how much harder they are to compute than derivatives. For example, something as seemingly simple as $\int e^{ \cos x} dx $ is impossible, right? I can't use $u$-substitution since there is no $-\sin(x)$ multiplying the function, and integration by parts also seems like it wouldn't work, correct? So does this mean this integral is impossible to compute?


Answer



The indefinite integral of a continuous function always exists. It might not exist in "closed form", i.e. it might not be possible to write it as a finite expression using "well-known" functions. The concept of "closed form" is
somewhat vague, since there's no definite list of which functions are "well-known". A more precise statement is that there are elementary functions whose indefinite integrals are not elementary. For example, the indefinite integral $\int e^{x^2}\; dx$ is not an elementary function, although it can be expressed in terms of a non-elementary special function as $\frac{\sqrt{\pi}}{2} \text{erfi}(x)$.




Your example $\int e^{\cos(x)}\; dx$ is also non-elementary. This can be proven using the Risch algorithm. Unlike the previous example, this one does not seem to have a closed form even in terms of standard non-elementary special functions.
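The stated antiderivative for $e^{x^2}$ can at least be verified numerically; a sketch assuming SciPy is available for the `erfi` function:

```python
# Check d/dx [ sqrt(pi)/2 * erfi(x) ] = exp(x^2) by a central difference.
import math
from scipy.special import erfi

def F(x):
    return math.sqrt(math.pi) / 2 * erfi(x)

x, h = 0.7, 1e-5
derivative = (F(x + h) - F(x - h)) / (2 * h)
print(derivative, math.exp(x**2))  # both ~ 1.6323
```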


calculus - $\lim_{n\to \infty} n^3\left(\frac{3}{4}\right)^n$





I have the sequence $a_n=(n^3)\big(\frac{3}{4}\big)^n$
and I need to find whether it converges or not and the limit.



I took the common ratio which is $\frac{3 (n+1)^3}{4 n^3}$ and since $\big|\frac{3}{4}\big|<1$ it converges.
I don't know how to find the limit from here.


Answer



If $\frac {a_{n+1}} {a_n} \to l$ with $|l|<1$ then $a_n \to 0$. (In fact the series $\sum a_n$ converges).



In this case $\frac {a_{n+1}} {a_n} =(1+\frac 1 n)^{3}(\frac 3 4) \to \frac 3 4$.
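A quick numerical illustration of the criterion in plain Python: once the ratio settles near $3/4$, the terms decay geometrically.

```python
# a_n = n^3 * (3/4)^n: the ratio a_{n+1}/a_n tends to 3/4 < 1, so a_n -> 0.
def a(n):
    return n**3 * (3/4)**n

print(a(10), a(100), a(200))  # rapidly shrinking
print(a(1001) / a(1000))      # ~ 0.7523, already close to 3/4
```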


sequences and series - Find $A=+\frac{3}{4×8}-\frac{3×5}{4×8×12}+\frac{3×5×7}{4×8×12×16}-\cdots$

Find $A$:



$$A=+\frac{3}{4×8}-\frac{3×5}{4×8×12}+\frac{3×5×7}{4×8×12×16}-···$$



My Try :




$$a_1=\frac{3}{4×8}-\frac{3×5}{4×8×12}=\frac{3×12-3×5}{4×8×12}=\frac{3(12-5)}{4×8×12}=\frac{3(7)}{4×8×12}$$



$$a_2=\frac{3(7)}{4×8×12}-\frac{3×5×7}{4×8×12×16}=\frac{3×7×16-3×5×7}{4×8×12×16}=\frac{3×7(16-7)}{4×8×12×16}\\=\frac{3×7(8)}{4×8×12×16}$$



now?
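Numerically, the partial sums appear to converge to $\sqrt{2/3}-\frac34\approx0.0665$, which is what the binomial series $(1+x)^{-1/2}=\sum_j(-1)^j\frac{(2j-1)!!}{2^j j!}x^j$ suggests at $x=\frac12$ (a numerical sketch, not a derivation):

```python
import math

# Partial sums of A = 3/(4*8) - 3*5/(4*8*12) + ...
# n-th term: (3*5*...*(2n+1)) / (4*8*...*(4n+4)), with alternating signs.
total, num, den, sign = 0.0, 1.0, 4.0, 1.0
for n in range(1, 60):
    num *= 2*n + 1      # 3, 3*5, 3*5*7, ...
    den *= 4*(n + 1)    # 4*8, 4*8*12, ...
    total += sign * num / den
    sign = -sign

print(total, math.sqrt(2/3) - 3/4)  # both ~ 0.066497
```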

Thursday 28 March 2019

geometry - Formula to calculate a square root

I already know about the relation between a circle and the square root of a certain number. It looks like this
(figure omitted), where $b=\sqrt{x}$, and is actually very interesting. I wanted to check if I could take the image and basically "write it down" in math form to come up with a, maybe useless, formula for the square root of $x$. The one I came up with is$$
\sqrt{x}=\frac{(x+1)\sin(\arccos\big(\frac{x-1}{x+1}\big))}{2}
$$ Now this formula is kinda long, and I want to know if, first of all, it is correct, and also if there is any possible way to simplify it without using a square root to achieve this, because that would be contradictory.
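The formula does check out numerically: with $\cos\theta=\frac{x-1}{x+1}$ one gets $\sin\theta=\frac{2\sqrt{x}}{x+1}$, so the expression collapses to $\sqrt{x}$, which is also why any simplification reintroduces a square root. A quick sketch:

```python
import math

# The proposed identity: sqrt(x) = (x+1) * sin(arccos((x-1)/(x+1))) / 2.
def root(x):
    return (x + 1) * math.sin(math.acos((x - 1) / (x + 1))) / 2

for x in (0.25, 2, 9, 10000):
    print(x, root(x), math.sqrt(x))  # the last two columns agree
```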

sequences and series - Does the limit of $f(x) = \sum\limits_{n=1}^{+\infty} \frac{(-1)^n \sin \frac{x}{n}}{n}$ exist?

Consider the function $f: [0, +\infty) \rightarrow \mathbb{R}$ given by the formula $$f(x) = \sum\limits_{n=1}^{+\infty} \frac{(-1)^n \sin \frac{x}{n}}{n}$$
Does $\lim\limits_{x\rightarrow 0} f(x)$ exist? If yes, find its value.
Is $f$ differentiable? If yes, check if $f'(0) > 0$.



Any ideas how to do that? I assume this is about function series; it's pointwise convergent, I think. Does that mean that this limit exists?



Thanks a lot for your help!

calculus - The integral $\int\limits_0^\infty\frac{x^4e^x}{(e^x-1)^2}\ \mathrm{d}x$

How to calculate the following integral $$\int_0^\infty\frac{x^4e^x}{(e^x-1)^2}\mathrm{d}x$$
I would like to solve this integral by means of two different ways: for example, integration by parts and using Residue Theorem.
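One way to guess the target value: since $\frac{e^x}{(e^x-1)^2}=-\frac{d}{dx}\frac{1}{e^x-1}$, integration by parts reduces the integral to $4\int_0^\infty\frac{x^3}{e^x-1}dx=4\,\Gamma(4)\zeta(4)=\frac{4\pi^4}{15}$, and a numerical check agrees (a sketch, assuming SciPy is available):

```python
import math
from scipy.integrate import quad

# Integrand rewritten as x^4 * e^(-x) / (1 - e^(-x))^2 for numerical stability
# at large x (avoids exp overflow); near 0 it behaves like x^2.
def f(x):
    return x**4 * math.exp(-x) / (1 - math.exp(-x))**2

value, err = quad(f, 0, math.inf)
print(value, 4 * math.pi**4 / 15)  # both ~ 25.9758
```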

Wednesday 27 March 2019

calculus - Evaluating $\lim_{b\to\infty} \int_0^b \frac{\sin x}{x}\, dx= \frac{\pi}{2}$











Using the identity $$\lim_{a\to\infty} \int_0^a e^{-xt}\, dt = \frac{1}{x}, x\gt 0,$$ can I get a hint to show that $$\lim_{b\to\infty} \int_0^b \frac{\sin x}{x} \,dx= \frac{\pi}{2}.$$


Answer



Hint:
$$\begin{align} \lim_{b\to \infty}\int_{0}^{b}\frac{\sin x}{x}dx &= \lim_{a,b\to \infty}\int_{0}^{b}\int_{0}^{a}e^{-xt}dt\sin x dx\\& = \lim_{a,b\to \infty}\int_{0}^{b}dt\int_{0}^{a}e^{-xt}\frac{e^{ix}-e^{-ix}}{2i} dx \\&=\lim_{a,b\to \infty}\int_{0}^{b}dt\int_{0}^{a}\frac{e^{-(t-i)x}-e^{-(i+t)x}}{2i} dx\end{align}$$.
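As a sanity check of the target value (assuming SymPy is available), the improper integral can also be evaluated symbolically:

```python
import sympy as sp

x = sp.symbols('x')
# The Dirichlet integral: integral of sin(x)/x over [0, oo).
result = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))
print(result)  # pi/2
```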



calculus - Prove that $\sum \frac{1}{n^2} = \frac{\pi^2}{6}$

In this answer two sequences are mentioned.
In particular, I would like to prove that



$$\sum_{n = 1}^{+ \infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$



If I knew that the sequence converges to $\frac{\pi^2}{6}$, I could use the $\epsilon$-$M$ criterion to prove the convergence to that value.



But how to prove that the above sequence converges to that value if I don't know the value itself? Is there a general way to proceed in such cases?
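One can at least discover the value numerically before proving it. The tail of $\sum 1/n^2$ behaves like $1/N$, so adding $1/N$ to the $N$-th partial sum accelerates convergence dramatically (plain Python sketch):

```python
import math

N = 10000
partial = sum(1 / n**2 for n in range(1, N + 1))
# partial ~ 1.64483; tail-corrected value ~ 1.6449341; pi^2/6 = 1.6449340...
print(partial, partial + 1/N, math.pi**2 / 6)
```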

measure theory - Show that $f_n1_{A_n}$ converges in mean




Consider the measurable space $(\Omega,\mathcal{A},\mu)$. Let $f,f_1,f_2,\ldots$ be measurable functions on that measurable space and $A,A_1,A_2,\ldots\in\mathcal{A}$. Let $(f_n)$ converge in mean to $f$ and $1_{A_n}$ to $1_A$ in measure. Show that then $f_n1_{A_n}$ converges in mean to $f1_A$.





Hello and good evening!



My idea is to show instead that $(f_n1_{A_n})_{n\in\mathbb{N}}$ converges to $f1_A$ in measure and that $(f_n1_{A_n})_{n\in\mathbb{N}}$ is uniformly integrable (because this together is equivalent to the convergence in mean of $(f_n1_{A_n})_{n\in\mathbb{N}}$ to $f1_A$).



I. Uniform integrability




Consider any $\varepsilon>0$. $(f_n)_{n\in\mathbb{N}}$ is uniformly integrable (and converges to $f$ in measure what is needed below). So there exists an integrable function $h\geq 0$, so that
$$
\sup_{f_n\in (f_n)_{n\in\mathbb{N}}}\int 1_{\lvert f_n\rvert\geq h}\lvert f_n\rvert\, d\mu<\varepsilon.
$$
For this function $h$, it is
$$
(\lvert 1_{A_n}f_n\rvert-h)^+\leq 1_{\lvert 1_{A_n}f_n\rvert\geq h}\lvert 1_{A_n}f_n\rvert\leq 1_{\lvert f_n\rvert\geq h}\lvert f_n\rvert.
$$
So it follows that

$$
\int (\lvert 1_{A_n}f_n\rvert-h)^+\, d\mu\leq\int 1_{\lvert f_n\rvert\geq h}\lvert f_n\rvert\, d\mu\leq\sup_{f_n\in (f_n)_{n\in\mathbb{N}}}\int 1_{\lvert f_n\rvert\geq h}\lvert f_n\rvert\, d\mu<\varepsilon
$$
which means that
$$
\sup_{1_{A_n}f_n\in (1_{A_n}f_n)_{n\in\mathbb{N}}}\int (\lvert 1_{A_n}f_n\rvert-h)^+\, d\mu<\varepsilon.
$$
So $(1_{A_n}f_n)_{n\in\mathbb{N}}$ is uniformly integrable.



II. Convergence in measure




Consider any $\varepsilon>0$. It is
$$
\left\{\omega\in\Omega:\lvert 1_{A_n}f_n-1_Af\rvert>\varepsilon\right\}\\=\left\{\omega\in\Omega:\lvert 1_{A_n}f_n-1_{A_n}f+1_{A_n}f-1_Af\rvert>\varepsilon\right\}\\\subset \left\{\omega\in\Omega:\lvert 1_{A_n}f_n-1_{A_n}f\rvert+\lvert 1_{A_n}f-1_Af\rvert>\varepsilon\right\}\\\subset\left\{\omega\in\Omega:\lvert 1_{A_n}f_n-1_{A_n}f\rvert>\frac{\varepsilon}{2}\right\}\cup\left\{\omega\in\Omega:\lvert 1_{A_n}f-1_Af\rvert>\frac{\varepsilon}{2}\right\}\\=\left\{\omega\in\Omega:\lvert 1_{A_n}(f_n-f)\rvert>\frac{\varepsilon}{2}\right\}\cup\left\{\omega\in\Omega:\lvert f(1_{A_n}-1_A)\rvert>\frac{\varepsilon}{2}\right\}\\\subset\left\{\omega\in\Omega:\lvert f_n-f\rvert>\frac{\varepsilon}{2}\right\}\cup\underbrace{\left\{\omega\in\Omega:\lvert 1_{A_n}-1_A\rvert=1\right\}}_{=\left\{\omega\in\Omega:\lvert 1_{A_n}-1_A\rvert>\lambda\right\}\text{ for a }0<\lambda<1}.
$$
So it is
$$
\mu(\left\{\omega\in\Omega:\lvert 1_{A_n}f_n-1_Af\rvert>\varepsilon\right\})\leq\mu(\left\{\omega\in\Omega:\lvert f_n-f\rvert>\frac{\varepsilon}{2}\right\}\cup\left\{\omega\in\Omega:\lvert 1_{A_n}-1_A\rvert>\lambda\right\})\\\leq\underbrace{\mu(\left\{\omega\in\Omega:\lvert f_n-f\rvert>\frac{\varepsilon}{2}\right\})}_{\to 0}+\underbrace{\mu(\left\{\omega\in\Omega:\lvert 1_{A_n}-1_A\rvert>\lambda\right\})}_{\to 0}\to 0
$$
$$

\Longrightarrow \mu(\left\{\omega\in\Omega:\lvert 1_{A_n}f_n-1_Af\rvert>\varepsilon\right\})\to 0
$$



From I. and II. it follows that $(1_{A_n}f_n)_{n\in\mathbb{N}}$ converges to $1_Af$ in mean.



Could you please tell me if my proof is all right?



With best wishes for X-mas,



math12

discrete mathematics - Functions images and inverse images

The objective of this question is to determine whether the function is bijective or not and, if it is a bijection, to find the images and inverse images below.



$$ f:\mathbb{Z^2} \to \mathbb{Z}$$
$$ f(n,k) = n^2k $$



We have to find $ f^{-1}(\{0\}) $, $ f^{-1}(\mathbb{N}) $, and the image $ f(\mathbb{Z} \times \{1\}) $.



But I fail to understand the approach to this problem. I do understand that a bijection needs unique mappings and the codomain must be matched, but could anyone help me apply that to this situation?




Functions such as $$y = x^2 $$ are not bijective, since some values have multiple preimages. Their inverse would be a square root with $+$ and $-$, and hence it's an invalid case. Could someone please correct my approach?

induction - Prove that every natural number >1 has a unique way of prime factorization

I am trying to prove that every natural number >1 has exactly one factorization into prime numbers.
I read the Wikipedia page but it is not so clear to me (they use strong induction).
Is there any other way to prove it? Maybe not by induction?




thanks

Tuesday 26 March 2019

algebra precalculus - The drying water melon puzzle



I couldn't find an explanation to this problem that I could understand.



A watermelon consists of 99% water, and that water measures 2 litres. After a day in the sun the watermelon dries up and now consists of 98% water. How much water is left in the watermelon?



I know the answer is ~1 litre, but why is that? I've read a couple of answers but I guess I'm a bit slow because I don't understand why.




EDIT
I'd like you to assume that I know no maths. Explain it like you would explain it to a 10 year old.


Answer



At the beginning the solid material is $1\%$ of the total, which is a trifle (to be neglected) more than $1\%$ of $99\%$ of the total, i.e. $1\%$ of $2000\ {\rm cm}^3$. Therefore the solid material has volume $\sim20\ {\rm cm}^3$.



After one day in the sun these $20\ {\rm cm}^3$ solid material are still the same, but now they make up $2\%$ of the total. Therefore the total now will be $1000\ {\rm cm}^3$ or $1$ litre. $98\%$ of this volume, or almost all of it, will be water.
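The same reasoning in exact numbers, without the small approximation above, can be written out directly (a plain-Python sketch):

```python
# Before: the water is 2 L, and water is 99% of the melon.
water_before = 2.0
total_before = water_before / 0.99   # ~ 2.0202 L
solid = total_before - water_before  # ~ 0.0202 L; unchanged by drying

# After: the same solid now makes up 2% of the total.
total_after = solid / 0.02           # ~ 1.0101 L
water_after = total_after - solid    # ~ 0.9899 L, i.e. about 1 litre
print(water_after)
```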


Monday 25 March 2019

Find polar function, knowing the 'Cartesian' derivative




I know the Cartesian derivative and would like to find the formula for the polar function. I am not sure whether my thinking is correct, however.



$$\frac{dy}{dx}=\tan\left(\theta+\frac{5}{4}\pi\right)$$ ...this much I know.



Now my thinking would be that since:
$$\frac{dy}{dx}=\frac{\frac{dy}{d\theta}}{\frac{dx}{d\theta}}$$



I'd have:




$$\frac{\frac{dy}{d\theta}}{\frac{dx}{d\theta}}=\tan\left(\theta+\frac{5}{4}\pi\right)=\frac{c\sin\left(\theta+\frac{5}{4}\pi\right)}{c\cos\left(\theta+\frac{5}{4}\pi\right)}$$



so:



$$\frac{dy}{d\theta}=c\sin\left(\theta+\frac{5}{4}\pi\right), \space \frac{dx}{d\theta}=c\cos\left(\theta+\frac{5}{4}\pi\right)$$



Now I doubt this is correct. I mean, there is plenty of other possible pairs of functions. But then, if this isn't correct, is there any other way to find the polar function formula by just having the tangent in Cartesian coordinates?



I kind of feel like I'm approaching this problem from the wrong side, but also it feels like it should be possible to find the formula for the polar function, knowing the tangent, up to maybe some multiplication/addition (the same way as in Cartesian). Obviously, I have close to zero experience with polar functions, so any kind of advice will be greatly appreciated.


Answer




Well, $c$ itself can be a function of $\theta$, so this approach is incorrect. But the following way leads to something better, I think:$$\dfrac{dy}{dx}=\dfrac{1+\tan\theta}{1-\tan\theta}=\dfrac{1+\dfrac{y}{x}}{1-\dfrac{y}{x}}$$Let $y=xu$, therefore $$\dfrac{dy}{dx}=u+xu'=\dfrac{1+u}{1-u}\to\\xu'=\dfrac{1+u^2}{1-u}\to\dfrac{dx}{x}=\dfrac{1-u}{1+u^2}du\to\\\ln x+C_1=\tan^{-1}u-\dfrac{1}{2}\ln({1+u^2})\to\\C_2x=\dfrac{e^{\tan^{-1}\frac{y}{x}}}{\sqrt{1+\dfrac{y^2}{x^2}}}$$or$$C_2r\cos\theta=\dfrac{e^{\theta}}{\sqrt{1+\tan^2\theta}}$$which yields $$r=Ce^\theta$$
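One can confirm numerically that $r=Ce^\theta$ reproduces the prescribed slope (taking $C=1$; note $\tan(\theta+\frac{5\pi}{4})=\tan(\theta+\frac{\pi}{4})$ since $\tan$ has period $\pi$):

```python
import math

# For r = e^theta: x = e^t cos t, y = e^t sin t, so
# dy/dx = (dy/dt)/(dx/dt) = (sin t + cos t) / (cos t - sin t),
# which should equal tan(t + pi/4) = tan(t + 5*pi/4).
for t in (0.1, 0.5, 2.0, 3.0):  # values away from the singularity t = pi/4
    slope = (math.sin(t) + math.cos(t)) / (math.cos(t) - math.sin(t))
    print(slope, math.tan(t + math.pi/4), math.tan(t + 5*math.pi/4))
```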


Find the remainder of a number with a large exponent

I have to find the remainder of $10^{115}$ divided by 7.



I was following the way my book did it in an example but then I got confused. So far I have,
$\overline{10}^{115}$=$\overline{10}^{7*73+4}$=($\overline{10}^{7})^{73}$*$\overline{10}^4$
and that's where I'm stuck.



Also, I don't fully understand what it means to have a bar over a number.
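For what it's worth, the bar typically denotes the residue class of a number modulo 7, and the computation can be checked directly: by Fermat's little theorem $10^6\equiv1\pmod 7$, and $115=6\cdot19+1$, so $10^{115}\equiv10\equiv3\pmod 7$. A one-line check in Python:

```python
# Python's built-in three-argument pow does modular exponentiation directly.
print(pow(10, 115, 7))  # 3

# Fermat: 10^6 = 1 (mod 7) and 115 = 6*19 + 1, so 10^115 = 10^1 = 3 (mod 7).
print(115 % 6, 10 % 7)  # 1 3
```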

calculus - $\int\text{e}^{-ax^2 } \text{erf}\left(bx + c\right) dx$



I'm hoping to find a closed expression for the following integral.
$$
\int\text{e}^{-ax^2 } \text{erf}\left(bx + c\right) dx

$$
One can find a solution for a family of products between exponentials and error functions, none of which apparently has the offset term in the error function.



I have tried tackling the problem with two approaches.



Approach #1: Expanding the error function, hoping to find nice cancellations leading to the Maclaurin series of some known elementary function. Following a similar approach by Alex:



$$
\begin{aligned}
\int\text{e}^{-ax^2 } \text{erf}\left(bx + c\right) dx &= \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n }{n!(2n+1)} \int(bx+c)^{2n +1} \text{e}^{-ax^2} dx \\

& = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n }{n!(2n+1)} \int
\sum_{k=0}^{2n+1} {{2n+1}\choose{k}} (bx)^{k} c^{2n+1-k} \text{e}^{-ax^2} dx \\
& = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n }{n!(2n+1)}
\sum_{k=0}^{2n+1} {{2n+1}\choose{k}} c^{2n+1-k} b^k\int x^{k} \text{e}^{-ax^2} dx
= \\
&\frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n }{n!(2n+1)}
\sum_{k=0}^{2n+1} {{2n+1}\choose{k}} c^{2n+1-k} b^k \left(-\frac{1}{2}a^{-\frac{k+1}{2}} \Gamma\left(\frac{k+1}{2},ax^2\right)\right)
\\
&= -\frac{1}{\sqrt{a}\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n }{n!(2n+1)}
\sum_{k=0}^{2n+1} {{2n+1}\choose{k}} c^{2n+1-k} \left(\frac{b}{\sqrt{a}}\right)^k \Gamma\left(\frac{k+1}{2},ax^2\right)

\end{aligned}
$$



I have used the binomial expansion for $(bx + c)^{2n+1}$ and that $\int x^k \text{e}^{-ax^2}dx = -\frac{1}{2}a^{-\frac{k+1}{2}} \Gamma\left(\frac{k+1}{2} ,ax^2\right)$ where $\Gamma(,)$ is the incomplete gamma function. Too bad the last term cannot be recombined in the form of a binomial expansion.



Approach #2: Instead of expanding the error function, I tried writing it in terms of the standard normal CDF ($\Phi$-function) as $\text{erf}(x) = 2\Phi(\sqrt{2} x) - 1$. However, the following can be shown to be true using integration under the integral sign with respect to $\mu$. [Section 2.4, and ref]

$$
\frac{1}{\sqrt{2 \pi} \sigma}\int_{-\infty}^{\infty}\Phi(\lambda x) \text{e}^{-\frac{(x - \mu)^2}{2 \sigma^2}}dx =\Phi\left(\frac{\lambda \mu}{\sqrt{1+\lambda^2\sigma^2}}\right)
$$




Now with some change of variables and rescaling we are instead interested in the following integral:
$$
\int\text{e}^{a_1 x^2 + a_2 x} \text{erf}\left(x\right) dx = \underbrace{2\int \text{e}^{a_1 x^2 + a_2 x} \Phi(\sqrt{2} x) dx}_{I} - \underbrace{\int \text{e}^{a_1 x^2 + a_2 x} dx}_{easy}
$$



However, what I'm not certain of is whether I can use the trick of integration under the integral sign for the indefinite integral labeled I. Can I, with some change of variables, use the result deduced for the definite integral case as $\Phi\left(\frac{\lambda \mu}{\sqrt{1+\lambda^2\sigma^2}}\right) + C(x)$?








EDIT: It seems that the problem at hand has no closed-form solution, as pointed out by user90369. Also, user90369 has pointed out that the following more general case has no closed-form solution.
$$
\int x^{2n} \text{e}^{-ax^2} \text{erf}(bx+c) dx
$$
I was wondering if there are any good approximations that I can use here. By good, I mean an error satisfying $|e(x)| \leq 10^{-5}\ \forall x$.
For starters, I was looking at the high-accuracy approximations here for the erf function. Unfortunately, none of these approximations results in an integral that admits a closed-form solution. I do, however, have a suggested approach using the following identity.
$$
\text{erf}(bx+c) = 2 \Phi\left(\sqrt{2} (bx+c)\right) - 1
$$

This results in the following:
$$
\begin{aligned}
\int\text{e}^{-ax^2 } \text{erf}\left(bx + c\right) dx = 2\int \text{e}^{-ax^2 } \Phi\left(\sqrt{2} (bx+c)\right) dx - \int \text{e}^{-ax^2 } dx
\end{aligned}
$$
Now, one can use the approximation of the $\Phi$-function that results from applying Chernoff's bound. Link
$$
\Phi(x) \approx \frac{1}{12} \text{e}^{-\frac{x^2}{2}} + \frac{1}{4} \text{e}^{-\frac{2}{3} x^2}
$$

I'd welcome a suggestion of how good this approximation is after computing the integral, or perhaps other, better approximations/recommendations that result in a manageable integral afterwards.



Answer



I don't know if this helps toward a useful approximation, but maybe it's better than nothing. :-)



For $\,v\in\mathbb{N}_0\,$ we get



$$ \int x^{2v+1}e^{-ax^2}dx= -\frac{v!e^{-ax^2}}{2a^{v+1}}\sum\limits_{j=0}^v\frac{(ax^2)^j}{j!} + C_{2v+1} $$



and




$$ \int x^{2v}e^{-ax^2}dx= \frac{(2v)!\sqrt{a\pi}\text{erf}(\sqrt{a}x)}{2^v v!(2a)^{v+1}}-e^{-ax^2}\sum\limits_{j=0}^{v-1}\frac{(v-j)!(2v)!x^{2v-2j-1}}{2^j v!(2v-2j)!(2a)^{j+1}} + C_{2v} $$



and it follows:



\begin{align}
& \hphantom{ {}={}} \int e^{-ax^2} \text{erf}(bx+c)dx \\
&= \sum\limits_{k=0}^\infty\frac{(-1)^k}{k!(2k+1)}\int (bx+c)^{2k+1} e^{-ax^2} dx \\
&= \sum\limits_{k=0}^\infty \frac{(-1)^k}{k!(2k+1)}\sum_{v=0}^{2k+1}\binom {2k+1} v b^v c^{2k+1-v}\int x^v e^{-ax^2} dx \\
&= \sum\limits_{k=0}^\infty \frac{(-1)^k}{k!(2k+1)}\sum_{v=0}^k\binom {2k+1} {2v} b^{2v} c^{2k+1-2v}\int x^{2v} e^{-ax^2} dx \\

&\hspace{5mm} +\sum\limits_{k=0}^\infty \frac{(-1)^k}{k!(2k+1)}\sum_{v=0}^k\binom {2k+1} {2v+1} b^{2v+1} c^{2k-2v}\int x^{2v+1} e^{-ax^2} dx \\
&= \sum\limits_{k=0}^\infty\frac{(-1)^k}{k!(2k+1)}
\sum\limits_{v=0}^k \binom {2k+1} {2v} b^{2v}c^{2k-2v+1} \\
&\hspace{3cm} \cdot \left( \frac{(2v)!\sqrt{a\pi}\text{erf}(\sqrt{a}x)}{2^v v!(2a)^{v+1}}-e^{-ax^2}\sum\limits_{j=0}^{v-1}\frac{(v-j)!(2v)!x^{2v-2j-1}}{2^j v!(2v-2j)!(2a)^{j+1}} \right) \\
&\hspace{5mm} - \sum\limits_{k=0}^\infty\frac{(-1)^k}{k!(2k+1)}
\sum\limits_{v=0}^k \binom {2k+1} {2v+1} b^{2v+1}c^{2k-2v} \frac{v!e^{-ax^2}}{2a^{v+1}}\sum\limits_{j=0}^v\frac{(ax^2)^j}{j!} + C \\
&= \sqrt{a\pi}\text{erf}(\sqrt{a}x)\sum\limits_{k=0}^\infty\frac{(-1)^k}{k!(2k+1)}\sum\limits_{v=0}^k \binom {2k+1} {2v} \frac{(2v)!b^{2v}c^{2k-2v+1}}{2^v v!(2a)^{v+1}} \\
&\hspace{5mm} -e^{-ax^2}\sum\limits_{k=0}^\infty\frac{(-1)^k}{k!(2k+1)}\sum\limits_{v=0}^k \binom {2k+1} {2v} b^{2v}c^{2k-2v+1}\sum\limits_{j=0}^{v-1}\frac{(v-j)!(2v)!x^{2v-2j-1}}{2^j v!(2v-2j)!(2a)^{j+1}} \\
&\hspace{5mm} -e^{-ax^2}\sum\limits_{k=0}^\infty\frac{(-1)^k}{k!(2k+1)}\sum\limits_{v=0}^k \binom {2k+1} {2v+1} b^{2v+1}c^{2k-2v} \frac{v!}{2a^{v+1}}\sum\limits_{j=0}^v\frac{(ax^2)^j}{j!} + C
\end{align}



(Proof-verification) Proof that multiplication of natural numbers is commutative

This isn't that rigorous, in that it assumes that axioms about the addition of natural numbers have already been shown and proven, and I think it also assumes distributivity of multiplication. I don't know if the distributive property is something that has to be shown or assumed.




Anyway, for any natural number $n$, we have that $$n*1 = 1*n $$ because the left hand side is $$ \underbrace{1+ 1 + \dots + 1}_{n\ \text{times}} $$ which is the definition of $n$. Likewise, the right hand side is defined to be $n$. So they are equal.




Now we can use this to show that multiplication with $2$ is commutative, i.e. for any $n$ $$n * 2 = 2 *n$$ The left hand side equals $$ n * (1+1) = n*1 + n*1 = n + n = 1*n + 1*n = (1+1)*n = 2*n$$ Then we can show that multiplication with $3$ is commutative knowing that multiplication with both $1$ and $2$ is commutative, and that multiplication with arbitrary $m$ is commutative knowing that multiplication with $1$ and $m-1$ is commutative.




Is this correct? I see at least two potential problems:




  1. It assumes the distributive property of multiplication for natural numbers. While I find this axiom to be more "common sense" than commutativity, perhaps it should also be proved somehow. And maybe the proof relies essentially on commutativity of multiplication, leading to circular reasoning.


  2. It seems to use not only regular induction, but strong induction. In other words, I am not sure if multiplication being commutative for $m-1$ can be used as the induction hypothesis in regular induction or not, since it seems like it uses implicitly that multiplication is commutative for all lower natural numbers. I think ultimately this comes down to my not understanding the difference between induction and strong induction -- they are supposed to be equivalent, but I imagine that at such a foundational level the distinction between them might be important. What's the difference between simple induction and strong induction?





Context:
This might be related to this question: Why Does Induction Prove Multiplication is Commutative? To be honest I am not sure though since I don't understand that question.



I remember reading somewhere, after doing a Google search, that one can prove that multiplication of natural numbers is commutative by induction. I didn't understand the argument at the time, probably because I didn't put in the effort to understand it properly, but I think I have the idea now, and want to confirm or verify that this is correct. It was stated similar to this: proof of commutativity of multiplication for natural numbers using Peano's axiom

Saturday 23 March 2019

calculus - How to evaluate the infinite series: $ \frac 1 {3\cdot6} + \frac 1 {3\cdot6\cdot9} +\frac 1 {3\cdot6\cdot9\cdot12}+\ldots$



The infinite series is given by:



$$ \dfrac 1 {3\cdot6} + \dfrac 1 {3\cdot6\cdot9} +\dfrac 1 {3\cdot6\cdot9\cdot12}+\ldots$$



What I thought of doing was to split the general term as:




$$\begin{align}
t_r &= \dfrac 1 {3^{r+1}(r+1)!}\\\\
&= \dfrac {r+1 - r} {3^{r+1}(r+1)!}\\\\
&= \dfrac {1} {3^{r+1}\cdot r!} - \dfrac{r}{3^{r+1}(r+1)!}
\end{align}$$



But this doesn't seem to help.



HINTS?


Answer




HINT:



$$e^x=\sum_{0\le r<\infty}\frac{x^r}{r!}$$



Can you take it from here?



A strongly resembling series: $$-\ln(1-x)=\sum_{1\le r<\infty}\frac{x^r}{r}$$ for $-1\le x<1$
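Following the hint: the general term is $\frac{1}{3^m\,m!}$ with $m=r+1\ge2$, so the sum should be $e^{1/3}-1-\frac13$, which a quick numerical check supports (plain Python):

```python
import math

# Sum of 1/(3^m * m!) for m >= 2, which the hint's e^x series at x = 1/3
# suggests equals e^(1/3) - 1 - 1/3.
total, term = 0.0, 1.0           # term tracks (1/3)^m / m!
for m in range(1, 20):
    term *= 1 / (3 * m)          # now term = (1/3)^m / m!
    if m >= 2:
        total += term            # 1/18 + 1/162 + 1/1944 + ...

print(total, math.exp(1/3) - 1 - 1/3)  # both ~ 0.0622791
```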


Friday 22 March 2019

elementary number theory - Bezout identity of 720 and 231 by hand impossible?

Is it possible to solve this by hand, without using the extended Euclidean algorithm?




We do the Euclidean algorithm and we get:



720 = 231 * 3 + 27



231 = 27 * 8 + 15



27 = 15 * 1 + 12



15 = 12 * 1 + 3




12 = 3 * 4 + 0



The GCD is 3.



We now isolate the remainders:



27 = 720 - 231 * 3



15 = 231 - 27 * 8




12 = 27 - 15 * 1



3 = 15 - 12 * 1



We proceed by substitution:



3 = 15 - 12 * 1



What now? How can we proceed when we have a factor of 1? Is there no substitution possible?




Help! Thanks!
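The back-substitution being attempted above (replace each remainder by the line that produced it, e.g. substitute $12 = 27 - 15\cdot1$ into $3 = 15 - 12\cdot1$) is exactly what the extended Euclidean algorithm automates; a short Python sketch:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # back-substitution step: g = b*x + (a - (a//b)*b)*y
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(720, 231)
print(g, x, y)        # 3 -17 53
print(720*x + 231*y)  # 3
```

So one Bezout identity here is $3 = 720\cdot(-17) + 231\cdot 53$.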

number theory - Modular equations and congruences



I have the following question, actually from the book Rational Points on Elliptic Curves by Silverman and Tate;




Prove that for every exponent $e \geq 1$, the congruence
$$x^2 +1 \equiv 0 (\text{mod } 5^e)$$



has a solution $x_e \in \mathbb{Z} / 5^e \mathbb{Z}$. Prove further that these solutions can be chosen to satisfy



$$x_1 \equiv 2 (\text{mod } 5),\qquad
\text{and}\quad
x_{e+1} \equiv x_e (\text{mod } 5^e) \quad
\text{for all } e \geq 1.$$




The best way to go about it is supposed to be induction on $e$. So I have gone through the initial step, i.e. for $e=1$ and shown that there is indeed a solution to the congruence in $\mathbb{Z} / 5^e \mathbb{Z}$, namely $x=2$ or $x=3$. However my problems begin from here on out. I just don't know how to proceed with this problem. This question is actually already (Prove that for every exponent $e\ge 1$, the congruence $x^2+1\equiv 0$ (mod $5^e$) has a solution $x_e\in \mathbb{Z}/5^e\mathbb{Z}$). I just can't follow the answer at all and was hoping someone could provide a more basic answer or methodology I could follow. Thank you for your help!


Answer



Starting with $x_1\equiv 2 \mod 5$ and $x_2\equiv x_1 \mod 5$, we have $x_2=5k+2$ for some integer k. Then



$x_2^2+1 = 25k^2 + 20k + 5 \equiv 20k+5 \mod 25$



So $k \equiv 1 \mod 5$ and $x_2 \equiv 7 \mod 25$



Then wash, rinse and repeat ...




$x_3\equiv x_2 \mod 25$



$\Rightarrow x_3 = 25k + 7$



$\Rightarrow x_3^2 + 1 = 625k^2 + 350k + 50 \equiv 100k + 50 \mod 125$



$\Rightarrow k \equiv 2 \mod 5$



$\Rightarrow x_3 \equiv 57 \mod 125$




etc.



The general case is known as Hensel's lemma.
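The lifting step in this answer can be turned into a short loop (a sketch of mine; it solves the same linear congruence for $k$ at each stage):

```python
# Lift a solution of x^2 + 1 ≡ 0 (mod 5) to mod 5^e, exactly as in the answer:
# write x_{e+1} = x_e + k * 5^e and solve the resulting linear congruence for k.
x = 2            # x_1 = 2 satisfies 2^2 + 1 = 5 ≡ 0 (mod 5)
mod = 5
solutions = [x]
for e in range(1, 10):
    # (x + k*5^e)^2 + 1 ≡ (x^2 + 1) + 2*x*k*5^e (mod 5^(e+1));
    # dividing through by 5^e: k must satisfy 2*x*k ≡ -(x^2+1)/5^e (mod 5).
    c = (x * x + 1) // mod
    k = (-c * pow(2 * x, -1, 5)) % 5   # modular inverse needs Python >= 3.8
    x = x + k * mod
    mod *= 5
    solutions.append(x)
    assert (x * x + 1) % mod == 0

# Matches the worked values: x_2 ≡ 7 (mod 25) and x_3 ≡ 57 (mod 125).
assert solutions[1] % 25 == 7 and solutions[2] % 125 == 57
```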


number theory - Explanation of Zeta function and why 1+2+3+4+... = -1/12











I found this article on Wikipedia which claims that $\sum\limits_{n=0}^\infty n=-1/12$. Can anyone give a simple and short summary on the Zeta function (never heard of it before) and why this odd result is true?


Answer



The answer is much more complicated than $\lim_{x \to 0} \frac{\sin(x)}{x}$.



The idea is that the series $\sum_{n=1}^\infty \frac{1}{n^z}$ converges when $\operatorname{Re}(z) > 1$, and this also works for complex numbers.



The limit is a nice (analytic) function and can be extended in a unique way to a nice function $\zeta$. This means that




$$\zeta(z)=\sum_{n=1}^\infty \frac{1}{n^z} \,;\, Re(z) >1 \,.$$



Now, when $z=-1$, the right side is NOT convergent, still $\zeta(-1)=\frac{-1}{12}$. Since $\zeta$ is the ONLY way to extend $\sum_{n=1}^\infty \frac{1}{n^z}$ to $z=-1$, it means that in some sense



$$\sum_{n=1}^\infty \frac{1}{n^{-1}} =-\frac{1}{12}$$



and this is exactly what that means. Note that, in order for this to make sense, on the LHS we don't have convergence of the series; we have a much more subtle type of convergence: we actually ask that the function $\sum_{n=1}^\infty \frac{1}{n^z}$ is differentiable as a function of $z$ and let $z \to -1$...



In some sense, the phenomenon is close to the following:




$$\sum_{n=0}^\infty x^n =\frac{1}{1-x} \,;\, |x| <1 .$$



Now, the LHS is not convergent for $x=2$, but the RHS function makes sense at $x=2$. One could say that this means that in some sense $\sum_{n=0}^\infty 2^n =-1$.



Anyhow, because of the analyticity of the Riemann zeta function, the statement about $\zeta(-1)$ is actually much more subtle, and true on a more formal level than this geometric statement...
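As an illustration of that "more subtle type of convergence", here is a small numerical demonstration using Abel summation of the related alternating series (this is my supplement, not the analytic continuation itself): the alternating sum $1-2+3-4+\cdots$ Abel-sums to $\frac14$, and the standard relation $\eta(s)=(1-2^{1-s})\zeta(s)$ then forces $\zeta(-1)=-\frac{1}{12}$.

```python
# Abel summation: evaluate sum_{n>=1} (-1)^(n-1) * n * x^n for x close to 1.
# The closed form is x / (1+x)^2, which tends to 1/4 as x -> 1-.
def abel_sum(x, terms=200_000):
    s, sign, p = 0.0, 1.0, x
    for n in range(1, terms + 1):
        s += sign * n * p
        sign, p = -sign, p * x
    return s

s = abel_sum(0.999)
assert abs(s - 0.25) < 1e-3

# Combined with eta(s) = (1 - 2^(1-s)) * zeta(s) at s = -1:
# zeta(-1) = (1/4) / (1 - 2^2) = -1/12.
zeta_minus_1 = s / (1 - 2 ** (1 - (-1)))
assert abs(zeta_minus_1 - (-1.0 / 12.0)) < 1e-3
```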


integration - Prove $\int_{0}^{\infty}\frac{|\sin x|\sin x}{x}dx=1$



Prove
$$\int_{0}^{\infty}\frac{|\sin x|\sin x}{x}dx=1.$$
I know how to calculate $\int_{0}^{\infty}\frac{\sin x}{x}dx=\frac{\pi}{2}$, but the method cannot be applied here. So I am thinking
$$\sum_{k=0}^n(-1)^k\int_{k\pi}^{(k+1)\pi}\frac{\sin^2 x}{x}dx$$
but I don't know how to proceed.


Answer



By Lobachevsky integral formula: https://en.wikipedia.org/wiki/Lobachevsky_integral_formula

$$\int_{0}^{\infty}\frac{\sin x}{x}|\sin x|\,\mathrm{d}x=\int_0^{\pi/2}|\sin x|\,\mathrm{d}x=1.$$
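A numerical sanity check of the Lobachevsky evaluation (my own supplement; the cutoff $400\pi$ and Simpson's rule are arbitrary choices I made):

```python
import math

# Integrate |sin x| * sin(x) / x over (0, 400*pi] with composite Simpson's rule.
# Stopping at a multiple of 2*pi matters: the partial integrals oscillate around 1,
# and the truncation error at 400*pi is roughly 1/800.
def f(x):
    return abs(math.sin(x)) * math.sin(x) / x if x > 0 else 0.0

n = 400_000                      # even number of subintervals
b = 400 * math.pi
h = b / n
total = f(0.0) + f(b)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(i * h)
integral = total * h / 3

assert abs(integral - 1.0) < 0.01
```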


Thursday 21 March 2019

calculus - does the integral $\int _{1}^{\infty }\!{\frac {\sin \left( \cos \left( x \right) +\sin \left( x\sqrt {3} \right) \right) }{x}}\,{dx}$ converge?

I want to know if this integral converges or not : $\int _{1}^{\infty }\!{\frac {\sin \left( \cos \left( x \right) +\sin\left( x\sqrt {3} \right) \right) }{x}}{dx}.$ I tried to integrate by parts or to use dirichlet's test, but it seems impossible to prove that $\int _{1}^{x}\!\sin \left( \cos \left( t \right) +\sin \left( t\sqrt {3} \right) \right) {dt}$ is bounded. Do you have any idea how to solve this problem?

real analysis - Let $f:(0,\infty)\to\mathbb{R}$ be defined by $f(x)=\frac{\sin(x^{3})}{x}$. Then f is not bounded and not uniformly continuous.



Let $f:(0,\infty)\to\mathbb{R}$ be defined by $ f(x)=\frac{\sin(x^{3})}{x}$. Then which of the following is correct:



a)f is not bounded and not uniformly continuous



b)f is bounded and not uniformly continuous



c)f is not bounded and uniformly continuous




d)f is bounded and uniformly continuous



I think option a is correct $\because \sin{x}$ is bounded between $-1$ and $1$ and $\frac{1}{x}$ approaches $\infty$ in a neighborhood of zero.



This question was asked in TIFR 2019.


Answer



The function is bounded and uniformly continuous on $(0,\infty)$.



Clearly, $f$ is continuous and, hence, uniformly continuous on any compact interval $[a,b]$ with $a > 0$.




On the interval $(0,a]$ we have $\displaystyle f(x) = \frac{\sin x^3}{x} = x^2\frac{\sin x^3}{x^3} \to 0\cdot 1 = 0 $ as $x \to 0$ and
$f$ is extendible as a continuous function to the compact interval $[0,a]$, and, hence, uniformly continuous there.



On $[b, \infty)$, $f$ is uniformly continuous as well since $\displaystyle |f(x)| = \frac{|\sin x^3|}{x} \leqslant \frac{1}{x} \to 0 $ as $x \to \infty$.



A continuous function that approaches a finite limit as $x \to \infty$ must be uniformly continuous -- proved many times on this site -- for example here. This is also an interesting example of a function with an unbounded derivative that is uniformly continuous.


Wednesday 20 March 2019

combinatorics - Prove that $\sum\limits_{i=1}^n 2i\binom{2n}{n-i}= n\binom{2n}{n}$



Let n be a positive integer.





Prove that $$\sum_{i=1}^n 2i\binom{2n}{n-i}= n\binom{2n}{n}$$



Answer



Here’s an alternative that requires a little less of a leap to get started, but a little more algebra later. Instead of pulling $2i=(n+i)-(n-i)$ out of thin air, substitute $k=n-i$:



$$\begin{align*}
\sum_{i=0}^n2i\binom{2n}{n-i}&=\sum_{k=0}^{n-1}2(n-k)\binom{2n}k\\
&=2n\sum_{k=0}^{n-1}\binom{2n}k-2\sum_{k=0}^{n-1}k\binom{2n}k\\
&=2n\sum_{k=0}^{n-1}\binom{2n}k-4n\sum_{k=1}^{n-1}\binom{2n-1}{k-1}\\

&=2n\left(\sum_{k=0}^{n-1}\binom{2n}k-2\sum_{k=0}^{n-2}\binom{2n-1}k\right)\\
&=2n\left(\sum_{k=0}^{n-1}\binom{2n}k-\sum_{k=0}^{n-2}\left(\binom{2n-1}k+\binom{2n-1}{2n-1-k}\right)\right)\\
&=2n\left(\sum_{k=0}^{n-1}\binom{2n}k-\sum_{k=0}^{n-2}\binom{2n-1}k-\sum_{k=n+1}^{2n-1}\binom{2n-1}k\right)\\
&=2n\left(\sum_{k=0}^{n-1}\binom{2n}k-\left(2^{2n-1}-\binom{2n-1}{n-1}-\binom{2n-1}n\right)\right)\\
&=2n\left(\sum_{k=0}^{n-1}\binom{2n}k-2^{2n-1}+\binom{2n}n\right)\\
&=n\left(2\sum_{k=0}^{n-1}\binom{2n}k-2^{2n}+2\binom{2n}n\right)\\
&=n\left(2^{2n}-\binom{2n}n-2^{2n}+2\binom{2n}n\right)\\
&=n\binom{2n}n\;.
\end{align*}$$




I still haven’t found a combinatorial argument, though.
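Lacking a combinatorial argument, the identity is at least easy to verify mechanically (my own supplement):

```python
from math import comb

# Check sum_{i=1}^{n} 2*i*C(2n, n-i) == n*C(2n, n) exactly, for small n.
for n in range(1, 60):
    lhs = sum(2 * i * comb(2 * n, n - i) for i in range(1, n + 1))
    assert lhs == n * comb(2 * n, n)
```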


Prime number and square problem

How many pairs of natural numbers, not bigger than 100, are such that the difference between the pair is a prime number and their product is the square of a natural number?
My attempt: I tried writing relationships such as $x-y=p$ and $xy=n^2$, but I can't seem to find any pattern to enumerate them.
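A brute-force enumeration is small enough to run directly (my own supplement, assuming "natural numbers" here means 1 through 100); the count and the consecutive-squares pattern below are what the search finds, not something asserted in the question:

```python
from math import isqrt

def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, isqrt(m) + 1))

# All pairs y < x <= 100 with x - y prime and x*y a perfect square.
pairs = [(y, x)
         for y in range(1, 101)
         for x in range(y + 1, 101)
         if is_prime(x - y) and isqrt(x * y) ** 2 == x * y]

# Every solution found is a pair of consecutive squares n^2, (n+1)^2,
# whose difference 2n+1 happens to be prime.
assert all(isqrt(y) ** 2 == y and x == (isqrt(y) + 1) ** 2 for y, x in pairs)
assert len(pairs) == 7
```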

calculus - Evaluating $\int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2}dx$




How can we prove $$\int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2}\mathrm{d} x=\frac{2\pi}{3\sqrt 3}?$$





Thought 1
It cannot be solved by using contour integration directly. If we replace $-1/3$ with $-2/3$ or $1/3$ or something else, we can use contour integration directly to solve it.
Thought 2
I have tried substitution $x=t^3$ and $x=1-t$. None of them worked. But I noticed that the form of $1-x+x^2$ does not change while applying $x=1-t$.
Thought 3
Recall the integral representation of the $_2F_1$ function: I was able to convert it into a formula with $_2F_1\left(2/3,1;4/3; e^{\pi i/3}\right)$ involved. But I think it will only make the integral more "complex". Moreover, I prefer an elementary approach. (But I also appreciate a hypergeometric approach.)


Answer



The solution heavily exploits symmetry of the integrand.



Let $$I = \int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2} dx $$
Replace $x$ by $1-x$ and sum up gives
$$\tag{1} 2I = \int_0^1 \frac{x^{2/3}(1-x)^{-1/3} + (1-x)^{2/3}x^{-1/3}}{1-x+x^2} dx = \int_0^1 \frac{x^{-1/3}(1-x)^{-1/3}}{1-x+x^2} dx$$







Let $\ln_1$ be the complex logarithm with branch cut along the positive real axis, and $\ln_2$ the one whose cut is along the negative real axis. Denote
$$f(z) = \frac{2}{3}\ln_1(z) - \frac{1}{3}\ln_2 (1-z)$$
Then $f(z)$ is discontinuous along the positive real axis, but has different jumps in $\arg$ across the intervals $[0,1]$ and $[1,\infty)$.



Now integrate $g(z) = e^{f(z)}/(1-z+z^2)$ using keyhole contour. Let $\gamma_1$ be path slightly above $[0,1]$, $\gamma_4$ below. $\gamma_2$ be path slightly above $[1,\infty)$, $\gamma_3$ below. It is easily checked that
$$\int_{\gamma_1} g(z) dz = I \qquad \qquad \int_{\gamma_4} g(z) dz = I e^{4\pi i/3}$$
$$\int_{\gamma_2} g(z) dz = e^{\pi i/3} \underbrace{\int_1^\infty \frac{x^{2/3}(x-1)^{-1/3}}{1-x+x^2} dx}_J\qquad \int_{\gamma_3} g(z) dz = e^{\pi i} J$$



If we perform $x\mapsto 1/x$ on $J$, we get $\int_0^1 x^{-1/3}(1-x)^{-1/3}/(1-x+x^2)dx$, thus $J = 2I$ by $(1)$.




Therefore $$I(1-e^{4\pi i/3}) + 2I(e^{\pi i / 3} - e^{\pi i}) = 2\pi i\times \text{Sum of residues of } g(z) \text{ at } e^{\pm 2\pi i /3}$$
From which I believe you can work out the value of $I$.
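For reassurance, the claimed value $\frac{2\pi}{3\sqrt 3}$ agrees with a direct numerical evaluation (my supplement; the substitution $1-x=t^3$ tames the endpoint singularity):

```python
import math

# After substituting 1 - x = t^3 (dx = -3 t^2 dt), the integrand becomes
# I = ∫_0^1 3 t (1 - t^3)^(2/3) / (1 - (1-t^3) + (1-t^3)^2) dt, smooth on [0, 1].
def g(t):
    x = 1.0 - t ** 3
    return 3.0 * t * x ** (2.0 / 3.0) / (1.0 - x + x * x)

n = 20_000                       # even, for composite Simpson's rule
h = 1.0 / n
total = g(0.0) + g(1.0)
for i in range(1, n):
    total += (4 if i % 2 else 2) * g(i * h)
integral = total * h / 3

assert abs(integral - 2 * math.pi / (3 * math.sqrt(3))) < 1e-5
```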


real analysis - Show that a singleton $\{x\}$ is negligible




With the following definition of negligible set :



$S \subseteq \mathbb{R}$ is negligible if $$\forall \varepsilon > 0 \hspace{0.2cm} \exists I_{k} : S \subseteq \bigcup\limits_{k \in \mathbb{N}}I_{k} \hspace{0.2cm}, \sum\limits_{k \in \mathbb{N}} |I_{k}| < \varepsilon$$



With $I_{k}$ closed or open intervals of $\mathbb{R}$.



I'd like to prove that a singleton $\{x\}$ is negligible, to be able to say for example that $\mathbb{Q}$ is negligible.



This was my effort: trying with the definition, noticing that $\{x\} \subseteq \left(x-\frac{1}{k},x+\frac{1}{k}\right) = I_{k},$




I thought that those could be my intervals because $|I_{k}| = \frac{2}{k} \underset{k \to \infty}{\longrightarrow} 0$, so they eventually satisfy $|I_{k}| < \varepsilon$,



But then I realized I was wrong, because I had to sum all the lengths of the intervals, but $\sum\limits_{k \in \mathbb{N}} \frac{1}{k} = +\infty$. Is that right?



If so, any solution or tip to solve the problem would be appreciated.


Answer



Why summing them? For each $\varepsilon>0$, take $k\in\mathbb N$ such that $\frac2k<\varepsilon$ and take only the interval $\left(x-\frac1k,x+\frac1k\right)$. That's all.


Tuesday 19 March 2019

matrices - Eigenvalues and Eigenvectors of a Normal Matrix

W is a normal stochastic matrix which has non-negative elements and each row sums to 1.



W can be represented by the factorization (a constraint that can be imposed on the particular system):



W = ED



Where E is a symmetric matrix and D is a diagonal matrix.



How can I calculate the eigenvalues and eigenvectors of W?




W will be large and sparse, any advice with regards algorithms would be greatly appreciated.

real analysis - Extension of Fundamental Theorem of Algebra

The problem states:



Let $p(x)$ be a polynomial in $x$ of degree $n$ with $n\ge2$. Recall that, according to the Fundamental Theorem of Algebra, $p(x)$ has $n$ number of roots in the complex number set. Suppose all roots of $p(x)$ are real and distinct. Prove that the roots of $p'(x)$ are all real.



I know and kind of understand the proof of the Fundamental Theorem of Algebra, but I do not know how to extend it to $p'(x)$. Any thoughts?



Thanks!

real analysis - Sum the series: $1+\frac{1+3}{2!}+\frac{1+3+3^2}{3!}+\cdots$



We have the series$$1+\frac{1+3}{2!}+\frac{1+3+3^2}{3!}+\cdots$$How can we find the sum$?$



MY TRY: the $n$th term of the series is $T_n=\frac{3^0+3^1+3^2+\cdots+3^n}{(n+1)!}$. I don't know how to proceed further. Thank you.


Answer



The numerator is a geometric sum that evaluates to,



$$\frac{3^{n+1}-1}{3-1}$$




Hence what we have is,



$$\frac{1}{2} \sum_{n=0}^{\infty} \frac{(3^{n+1}-1)}{(n+1)!}$$



$$=\frac{1}{2} \sum_{n=1}^{\infty} \frac{3^n-1}{n!}$$



$$=\frac{1}{2} \left( \sum_{n=1}^{\infty} \frac{3^n}{n!}- \sum_{n=1}^{\infty} \frac{1^n}{n!} \right)$$



Recognizing the Taylor series of $e^x$ we have




$$=\frac{1}{2}((e^3-1)-(e-1))$$


linear algebra - Matrix invertibility and its inverse



I'd like to prove that the matrix $L:={ M }^{ T }M$ is invertible and determine its inverse (in terms of $A$ and $B$).



$M:=\begin{pmatrix}A & B \\ 0_{q\times p}& I_q\end{pmatrix}$ and $A\in K^{p\times p},B\in K^{p\times q}$.
Further is given that the matrix $A$ has rank $p$.



I tried to turn $L=\begin{pmatrix}A^T A& A^T B\\ B^T A & B^T B+I_{q}\end{pmatrix}$ into the identity block matrix using elementary row operations.
Since $A$ has full rank it must be invertible, and because $A$ is a $p\times p$ matrix its transpose must be invertible too. With this it's fairly easy to get ${ a }_{ 21 }=0$, but I'm stuck getting ${ a }_{ 12 }=0$, since I know nothing about the invertibility of $B$.

Are there any other ways to solve this problem?


Answer



We can compute the inverse of $M$ with "row-operations" as follows:
$$
\left[\begin{array}{cc|cc}
A&B&I&0\\0&I&0&I
\end{array}\right] \to \\
\left[\begin{array}{cc|cc}
I&A^{-1}B&A^{-1}&0\\0&I&0&I
\end{array}\right] \to\\

\left[\begin{array}{cc|cc}
I&0&A^{-1}&-A^{-1}B\\0&I&0&I
\end{array}\right]
$$
So that
$$M^{-1} = \pmatrix{A^{-1} & -A^{-1}B\\0&I}$$
From there, we can compute
$$
(M^TM)^{-1} = M^{-1}(M^T)^{-1} = M^{-1}(M^{-1})^T =\\
\pmatrix{A^{-1} & -A^{-1}B\\0&I} \pmatrix{A^{-1} & -A^{-1}B\\0&I}^T =\\

\pmatrix{A^{-1} & -A^{-1}B\\0&I} \pmatrix{(A^{-1})^T & 0\\-B^T(A^{-1})^T&I} = \\
\pmatrix{A^{-1}(A^{-1})^T + A^{-1}BB^T(A^{-1})^T & -A^{-1}B\\
-B^T(A^{-1})^T & I} = \\
\pmatrix{(A^TA)^{-1} + (A^{-1}B)(A^{-1}B)^T & -A^{-1}B\\
-(A^{-1}B)^T & I}
$$
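The final formula can be spot-checked numerically in the scalar case $p=q=1$ (my supplement; the values $a=2$, $b=3$ are arbitrary):

```python
# Scalar (p = q = 1) check: M = [[a, b], [0, 1]], L = M^T M, and the claimed
# inverse is [[1/a^2 + (b/a)^2, -b/a], [-b/a, 1]].
a, b = 2.0, 3.0

L = [[a * a, a * b],
     [a * b, b * b + 1.0]]       # M^T M written out for 1x1 blocks

claimed = [[1.0 / a ** 2 + (b / a) ** 2, -b / a],
           [-b / a, 1.0]]

# Multiply L by the claimed inverse and compare with the identity matrix.
prod = [[sum(L[i][k] * claimed[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```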


contour integration - Complex Analysis - $\int_0^\infty\frac{\cos(5x)}{(1+x^2)^2}\mathrm{d}x$




How to calculate the following integral using complex analysis?
$\int_0^\infty\frac{\cos(5x)}{(1+x^2)^2}\mathrm{d}x$.



So far I have $$\int_0^\infty\frac{\cos(5x)}{(1+x^2)^2}\mathrm{d}x = \int_{-\infty}^\infty\frac{1}{(1+x^2)^2}e^{5ix}\mathrm{d}x$$ Then, $$Res(f(x),i)=\frac{d}{dx}[e^{5ix}]|_i=5ie^{5ix}|_i=2\pi i5ie^{5i(i)}=\frac{-10\pi}{e^5}$$
Then I might have to multiply by 1/2 to get from 0 to infinity only but that gives $\frac{-5\pi}{e^5}$ and the answer should be $\frac{3\pi}{2e^5}$ and I am not sure what I am doing wrong...


Answer



$$\int_{0}^{+\infty}\frac{\cos(5x)}{(1+x^2)^2}\,dx = \frac{1}{2}\text{Re}\int_{-\infty}^{+\infty}\frac{e^{5ix}}{(1+x^2)^2}\,dx \tag{1}$$
and $x=i$ is a double pole for $\frac{e^{5ix}}{(1+x^2)^2}$, in particular



$$ \text{Res}\left(\frac{e^{5ix}}{(1+x^2)^2},x=i\right) = \lim_{x\to i}\frac{d}{dx}\left(\frac{e^{5ix}}{(x+i)^2}\right)=-\frac{3i}{2e^5}\tag{2}$$

and
$$ \int_{0}^{+\infty}\frac{\cos(5x)}{(1+x^2)^2}\,dx = \text{Re}\left(\frac{(-3i)\cdot(\pi i)}{2e^5}\right)=\color{red}{\frac{3\pi}{2e^5}}.\tag{3}$$
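The value in $(3)$ matches a direct numerical integration (my supplement; the cutoff at $x=60$ loses well under the tolerance since the integrand decays like $x^{-4}$):

```python
import math

# Numerical check of ∫_0^∞ cos(5x)/(1+x^2)^2 dx = 3π/(2 e^5) ≈ 0.031754.
def f(x):
    return math.cos(5 * x) / (1 + x * x) ** 2

n = 60_000                       # even; step 1e-3 resolves the oscillation period 2π/5
h = 60.0 / n
total = f(0.0) + f(60.0)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(i * h)
integral = total * h / 3

assert abs(integral - 3 * math.pi / (2 * math.exp(5))) < 1e-5
```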


proof explanation - If $p_1,...,p_n$ are positive prime numbers then $\sqrt{p_1\cdots p_n} \notin \Bbb{Q}$




We want to prove that:




If $p_1,...,p_n$ are positive, distinct, prime numbers then
$\sqrt{p_1\cdots p_n} \notin \Bbb{Q}$.




Let's assume that $\sqrt{p_1\cdots p_n} \in \Bbb{Q}$. Then, $\exists (a,b)\in \mathbb{Z^*\times Z^*}:\sqrt{p_1\cdots p_n}=\frac{a}{b}$ with $\gcd (a,b)=1.$ So, $a^2=p_1\cdots p_n \cdot b^2. $ But how do we continue? Is this technique right or should we follow something different?




PS: This is a part this proof, and I would like to discuss it.



Thank you.


Answer



The classic proof that $\sqrt 2 \notin \Bbb Q$ readily extends to this case, to wit:



if



$\sqrt{p_1p_2 . . . p_n} = \dfrac{a}{b} \tag 1$




with $a, b \in \Bbb N$, $\gcd(a, b) = 1$, then



$p_1p_2 \ldots p_n b^2 = a^2, \tag 2$



whence $p_1 \mid a^2$; thus $p_1 \mid a$ and so $a = p_1c$; thus



$a^2 = p_1^2c^2 = p_1p_2 \ldots p_n b^2, \tag 3$



whence




$p_1c^2 = p_2p_3 \ldots p_n b^2; \tag 4$



since the $p_i$ are distinct, we must have $p_1 \mid b^2$, whence $p_1 \mid b$, contradicting our assumption that $\gcd(a, b) = 1$.



I think the ancient roots of this proof are worthy of respect, and I also note the method readily extends to show that many similar propositions hold.



I presented this answer because it seems to me that the linked proof is pretty complex for this particular problem, although it is certainly engaging in and of itself, and leads in interesting directions.
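The underlying fact, that a product of distinct primes is never a perfect square, is also easy to confirm by brute force (my supplement):

```python
from math import isqrt
from itertools import combinations

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

# A product of distinct primes has every exponent equal to 1 in its
# factorization, so it can never be a perfect square; check all subsets.
for r in range(1, len(primes) + 1):
    for subset in combinations(primes, r):
        p = 1
        for q in subset:
            p *= q
        assert isqrt(p) ** 2 != p
```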


linear algebra - Proof of this result related to Fibonacci numbers: $\begin{pmatrix}1&1\\1&0\end{pmatrix}^n=\begin{pmatrix}F_{n+1}&F_n\\F_n&F_{n-1}\end{pmatrix}$?



$$\begin{pmatrix}1&1\\1&0\end{pmatrix}^n=\begin{pmatrix}F_{n+1}&F_n\\F_n&F_{n-1}\end{pmatrix}$$



Somebody has any idea how to go about proving this result? I know a proof by mathematical induction, but that, in my opinion, will be a verification of this result only. I tried to find through the net, but in vain, if someone has some link or pointer to its proof, please provide, I'll appreciate that.



Thanks a lot!


Answer



(I'm assuming here that what the OP really wants is to know how one would ever get the idea to try to prove this. He already has a proof by induction, which is a perfectly valid and respectable proof method; you won't get anything proofier than that, except perhaps methods that hide the induction within a general theorem).




One way to invent this relation is to start from the following fairly simple algorithm for computing Fibonacci numbers:




  • Start by setting $a=0$, $b=1$

  • Repeat the following until you reach the $F_n$ you want:

    • (Invariant: $a=F_{n-1}$ and $b=F_n$).

    • Set $c=a+b$

    • (Now $c=F_{n+1}$)


    • $a\leftarrow b$ and $b \leftarrow c$.




Now observe that the loop body computes the new $a$ and $b$ as linear combinations of the old $a$ and $b$. Therefore there's a matrix that represents each turn through the loop. Many turns through the loop become multiplication with a power of the matrix.



This reasoning gives you the matrix $M=\pmatrix{1&1\\1&0}$ and an informal argument that $M^n \pmatrix{1\\0} = \pmatrix{F_n\\F_{n-1}}$. This gives us one column of $M^n$, and it is reasonable to hope that the other one will also be something about Fibonacci numbers. One can either repeat the previous argument with different starting $a$ and $b$, or simply compute the first few powers of $M$ by hand and then recognize the pattern to be proved formally later.


Monday 18 March 2019

trigonometry - The lengths of the sides of a triangle are $\sin\alpha$, $\cos\alpha$ and $\sqrt{(1+\sin\alpha\cos\alpha)}$...




The lengths of the sides of a triangle are $\sin\alpha$, $\cos\alpha$ and $\sqrt{(1+\sin\alpha\cos\alpha)}$, where $0^\circ < \alpha < 90^\circ$. The measure of its greatest angle is.......





What I have tried:
By using the Cosine Rule, letting $x$ be the angle opposite to $\sqrt{(1+\sin\alpha\cos\alpha)}$,
$$1+\sin\alpha\cos\alpha = \sin^2 \alpha + \cos^2\alpha - 2(\sin\alpha)(\cos\alpha)(\cos x)$$



But my confusion here is: how would I know that $x$ is the greatest angle? Do I have to do this step for all the other sides? Is there a shortcut, or am I doing it correctly?



The answer is $120^\circ$.


Answer



Clearly the greatest angle is opposite to the greatest side. Use Cosine Rule to get $$\begin{aligned}(1+\sin \alpha\cos\alpha)&=\sin^2\alpha+\cos^2\alpha-2\sin\alpha\cos\alpha\cos x\\ \dfrac{\sin\alpha\cos\alpha+1-1}{-2\sin\alpha\cos\alpha}&=\cos x\\ \cos x&=\dfrac{-1}{2}\implies x=\dfrac{2\pi}{3}=120^{\circ}\end{aligned}$$
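A quick numerical confirmation (my own supplement): for any $\alpha$ in $(0^\circ, 90^\circ)$ the side $\sqrt{1+\sin\alpha\cos\alpha}$ is the longest, and the angle opposite it, computed from the Cosine Rule, is always $120^\circ$.

```python
import math

# For several values of alpha, the angle opposite sqrt(1 + sin a * cos a) is 120 degrees.
for alpha_deg in (10, 30, 45, 60, 80):
    alpha = math.radians(alpha_deg)
    a, b = math.sin(alpha), math.cos(alpha)
    c = math.sqrt(1 + a * b)
    assert c >= max(a, b)          # c is the greatest side, so x is the greatest angle
    x = math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
    assert abs(x - 120.0) < 1e-9
```

This works because $(a^2+b^2-c^2)/(2ab) = (1 - (1+\sin\alpha\cos\alpha))/(2\sin\alpha\cos\alpha) = -\frac12$ identically.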



integration - An almost Fresnel integral



$\def\d{\mathrm{d}}$So I tried doing the following integral:



$$I=\int_0^{+\infty}\sin(2^x)\,\d x,$$




which is quite similar to the famous Fresnel integral. First, I rewrote $\sin$ using its complex exponential definition, then I let $u=2^x$:



$$I=\int_0^{+\infty}\frac{e^{i2^x}-e^{-i2^x}}{2i}\,\d x = \frac1{2i\ln(2)} \int_1^{+\infty} \frac{e^{iu}-e^{-iu}}u \,\d u.$$



(so close to letting me use Frullani's integral $\ddot\frown$)



But where do I go from here? It looks very close to a place where I could use the exponential integral or something like that, but not quite...


Answer



Hint. One may perform the change of variable

$$
u=2^x, \quad x= \frac{\ln u}{\ln 2}, \quad dx=\frac1{\ln 2}\cdot \frac{du}u,
$$ giving
$$
I=\int_0^{+\infty}\sin(2^x)\ dx=\frac1{\ln 2}\cdot\int_1^{+\infty}\frac{\sin(u)}{u}\ du=\frac1{\ln 2}\cdot\left(\frac{\pi }{2}-\text{Si}(1)\right)
$$ where we have made use of the sine integral function $\text{Si}(\cdot)$.
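As a numerical cross-check of this answer (my own supplement; the cutoff $L=1000$ and the asymptotic tail correction $\cos(L)/L$ are choices I made):

```python
import math

# Si(1) from the power series of sin(u)/u integrated term by term:
# Si(1) = sum_{k>=0} (-1)^k / ((2k+1) * (2k+1)!).
si1 = sum((-1) ** k / ((2 * k + 1) * math.factorial(2 * k + 1)) for k in range(12))

# Compare pi/2 - Si(1) against Simpson quadrature of ∫_1^L sin(u)/u du,
# plus the leading asymptotic correction cos(L)/L for the tail beyond L.
L, n = 1000.0, 200_000
h = (L - 1.0) / n
total = math.sin(1.0) + math.sin(L) / L
for i in range(1, n):
    u = 1.0 + i * h
    total += (4 if i % 2 else 2) * math.sin(u) / u
sine_tail = total * h / 3 + math.cos(L) / L

assert abs(sine_tail - (math.pi / 2 - si1)) < 1e-4

I = sine_tail / math.log(2)      # the claimed value of ∫_0^∞ sin(2^x) dx
assert abs(I - 0.90127) < 1e-3
```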


sequences and series - Compute $1 \cdot \frac {1}{2} + 2 \cdot \frac {1}{4} + 3 \cdot \frac {1}{8} + \cdots + n \cdot \frac {1}{2^n} + \cdots$




I have tried to compute the first few terms to try to find a pattern but I got



$$\frac{1}{2}+\frac{1}{2}+\frac{3}{8}+\frac{4}{16}+\frac{5}{32}+\frac{6}{64}$$



but I still don't see any obvious pattern(s). I also tried to look for a pattern in the question, but I cannot see any pattern (possibly because I'm overthinking it?) Please help me with this problem.


Answer



$$I=\frac{1}{2}+\frac{2}{4}+\frac{3}{8}+\frac{4}{16}+\frac{5}{32}+\frac{6}{64}+\cdots$$
$$2I=1+1+\frac{3}{4}+\frac{4}{8}+\frac{5}{16}+\frac{6}{32}+\cdots$$
$$2I-I=1+\left(1-\frac 12 \right)+\left(\frac 34 -\frac 24 \right)+\left(\frac 48 -\frac 38 \right)+\left(\frac {5}{16} -\frac {4}{16} \right)+\cdots$$

$$I=1+\frac 12+\frac 14+\frac 18+\cdots=2$$


number theory - What is the remainder when $3^{57} + 27$ is divided by $28$?



Problem: What is the remainder when$$3^{57}+27$$ is divided by $28$ ?




Source: I'm pretty much interested in calculus (you can refer to my previous posts) but I have to prepare for a test where they even put up problems on elementary number theory. I got this problem from a practice set and it stumped me. I looked for similar questions on the website and most of them involve the use of $\mathrm{mod}$. I don't know what it is, and I haven't got time to understand it as I also have to deal with physics and chemistry at the same time. I have solved very few problems of this kind (mainly divisibility) using mathematical induction and the binomial theorem last year.



My try: When you got integral calculus embedded into your mind, how do you approach without using it? I have tried to develop a function:
$$f(x) = \int(a^x+b)\mathrm{d}x$$
$$= \int{a^x}\mathrm{d}x + b\int \mathrm{d}x$$
$$= \frac{ a^{x+1}}{x+1} + bx + C$$
put limits $l_l = 0$ and $l_u = 57$ where $l_l$ and $l_u$ are lower and upper limits respectively.



But I have tried to solve it to no avail. I can't think of a possible way, and my professor is unwilling to help me with it (duh!). I'm stuck. I have to perform better. So can you please give me an approach without using the $\mathrm{mod}$ function? All help appreciated!



Answer



You want the remainder when $3^{57}+27$ is divided by $28$. Note that $3^{57}=(3^3)^{19}$.



$$3^{57}+27=(3^3)^{19}+27=(28-1)^{19}+27={19\choose0}28^{19}-{19\choose 1}28^{18}+\cdots+{19\choose18}28-{19\choose19}+27=28k-1+27=28k+26$$



When divided by $28$, $28k+26$ gives $26$ as remainder.
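Python's arbitrary-precision integers make the claimed remainder easy to confirm (a supplementary check of mine, not part of the binomial-theorem proof above):

```python
# Direct check: 3^57 + 27 leaves remainder 26 when divided by 28.
assert (3 ** 57 + 27) % 28 == 26

# The binomial argument amounts to 3^3 = 27 ≡ -1 (mod 28),
# so 3^57 = (3^3)^19 ≡ (-1)^19 = -1 ≡ 27 (mod 28), and 27 + 27 = 54 ≡ 26.
assert pow(3, 57, 28) == 27
```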


Principal (and secondary) square roots of a complex number



This is a follow-up of the post here:



using phasors to handle complex numbers




I have decided to create a new post as now I am considering a deeper issue.



Say if we want to compute $\sqrt{-5}$. If I want to find its principal square root then I can use phasor arithmetic as follows:
$\sqrt{-5}=\sqrt{5 \angle 180}=\sqrt{5}\angle 90 = \sqrt{5} \; \mathrm{i}$.
This agrees with the definition namely (see http://en.wikipedia.org/wiki/Square_root#Principal_square_root_of_a_complex_number ):



If $z=r\: \mathrm{e}^{\psi \mathrm{i}}$ with $-\pi < \psi \leq \pi$, then the principal
square root of $z$ is defined as $\sqrt{z}=\sqrt{r}\: \mathrm{e}^{\frac{\psi \mathrm{i}}{2}}$.



And we also have the definition that the other square root is simply $-1$ times the principal square root. So for $\sqrt{-5}$, we have the principal square root as

$\sqrt{5} \; \mathrm{i}$ and the other root as $-\sqrt{5} \; \mathrm{i}$. This is OK as $-\sqrt{5} \; \mathrm{i} \times -\sqrt{5} \; \mathrm{i} = \sqrt{5}\angle 270 \times \sqrt{5}\angle 270 = 5\angle 540 = 5\angle 180 = -5$.



Now let's consider two intricate cases




  1. $\sqrt{{-1}\times{-1}}=\sqrt{1 \angle 180 \times 1 \angle 180}= \sqrt{1 \angle 360}=\sqrt{1 \angle (360-360)}= 1\angle 0 = 1$ (as the principal square root).


  2. $\sqrt{\frac{1}{-1}}=\sqrt{\frac{1 \angle 0}{1 \angle 180}}=\sqrt{1 \angle -180} =
    \sqrt{1 \angle (-180+360)}= \sqrt{1 \angle 180}= 1 \angle 90 = \mathrm{i}$ (as the principal square root).





In regard with the above two cases, can we say that another root of $\sqrt{{-1}\times{-1}}$ is $-1$ ( i.e. -ve of the principal square root) and that another root of $\sqrt{\frac{1}{-1}}$ is $-\mathrm{i}$ (i.e -ve of the principal square root)?



If we check:



For, $\sqrt{{-1}\times{-1}}$ having a second root as $-1$, we have
on squaring, $1\angle 180 \times 1\angle 180 = 1\angle 360 = 1\angle 0 = 1$. This is equivalent to squaring the principal square root i.e. $1$ to give also $1$.



For $\sqrt{\frac{1}{-1}}$ having a second root as $-\mathrm{i}$ we have
on squaring, $-\mathrm{i} \times -\mathrm{i} = 1\angle 270 \times 1\angle 270 = 1\angle 540 = 1\angle 180 = -1$. This is equivalent to squaring the principal square root i.e. $\mathrm{i}$ to give also $-1$.




So $\sqrt{{-1}\times{-1}} = 1 \; \text{(principal root)} \; \mathrm{or} \; -1$ and $\sqrt{\frac{1}{-1}} = \mathrm{i} \; \text{(principal root)} \; \mathrm{or} \; -\mathrm{i}$. Is this correct?



Thanks a lot...


Answer



It's mathematical semantics. The square root function, which is what $\sqrt{z}$ denotes by convention, only takes on a single value - in which case equations like $\sqrt{-1\times-1}=-1$ are false. However one can refer to "square roots" as solutions to equations of the form $x^2=a$, in which case statements like $x=1\text{ or }-1$ are meaningful, but the general practice is simply to write $\pm\sqrt{a}$ to refer to either value within a single equation or statement.



Bottom line: there are two "square roots," but symbolically $\sqrt{z}$ only refers to the principal root.
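In Python's `cmath`, for instance, `sqrt` is exactly this single-valued principal root, and the sign discrepancy discussed above is easy to reproduce (my supplement):

```python
import cmath

# cmath.sqrt returns the principal square root (branch cut along the negative real axis).
assert abs(cmath.sqrt(-1) - 1j) < 1e-12

# The principal root of a product need not equal the product of principal roots:
# both results below square to 1, but they differ by a sign.
lhs = cmath.sqrt((-1) * (-1))            # sqrt(1) = 1
rhs = cmath.sqrt(-1) * cmath.sqrt(-1)    # i * i = -1
assert abs(lhs - 1) < 1e-12 and abs(rhs + 1) < 1e-12
assert abs(lhs ** 2 - rhs ** 2) < 1e-12
```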


Sunday 17 March 2019

A tough series related with a hypergeometric function with quarter integer parameters




Is it possible to express
$$ \sum_{n\geq 0}\frac{\binom{4n}{2n}\binom{2n}{n}}{64^n(4n+1)} = \phantom{}_3 F_2\left(\frac{1}{4},\frac{1}{4},\frac{3}{4}; 1,\frac{5}{4}; 1\right) $$
in terms of standard mathematical constants given by Euler sums and values of the $\Gamma$ function?




This problem arise from studying the interplay between elliptic integrals, hypergeometric functions and Fourier-Legendre expansions. According to Mathematica's notation we have
$$ \sum_{n\geq 0}\frac{\binom{4n}{2n}\binom{2n}{n}}{64^n}y^{2n}=\frac{2}{\pi\sqrt{1+y}}\,K\left(\frac{2y}{1+y}\right) $$
for any $y\in[0,1)$, where the complete elliptic integral of the first kind fulfills the functional identity
$$\forall x\in[0,1),\qquad K(x) = \frac{\pi}{2\cdot\text{AGM}\left(1,\sqrt{1-x}\right)} $$
hence the computation of the above series boils down to the computation of

$$ \int_{0}^{1}K\left(\frac{2y^2}{1+y^2}\right)\frac{2\,dy}{\pi\sqrt{1+y^2}}\stackrel{y\mapsto\sqrt{\frac{x}{2-x}}}{=}\frac{\sqrt{2}}{\pi}\int_{0}^{1}\frac{K(x)}{\sqrt{x}(2-x)}\,dx\\=\frac{1}{\pi}\int_{-1/2}^{+\infty}\frac{\arctan\sqrt{u}}{\sqrt{u(1+u)(1+2u)}}\,du$$
where $K(x),\sqrt{2-x},\frac{1}{\sqrt{x}},\frac{1}{\sqrt{2-x}}$ all have a pretty simple FL expansion, allowing an easy explicit evaluation of similar integrals. This one, however, is a tougher nut to crack, since $\frac{1}{2-x}$ does not have a nice FL expansion. There are good reasons for believing $\Gamma\left(\frac{1}{4}\right)$ is involved, since a related series fulfills the following identity:
$$ \sum_{n\geq 0}\frac{\binom{2n}{n}^2}{16^n(4n+1)}=\frac{1}{2\pi}\int_{0}^{1}K(x)\,x^{-3/4}\,dx = \frac{1}{16\pi^2}\,\Gamma\left(\frac{1}{4}\right)^4 $$
which ultimately is a consequence of Clausen's formula, stating that in some particular circumstances the square of a $\phantom{}_2 F_1$ function is a $\phantom{}_3 F_2$ function.
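The quoted closed form can be checked numerically with only the standard library, since `math.gamma` is available (my own check, not part of the original post):

```python
import math

# Check  sum_{n>=0} C(2n,n)^2 / (16^n (4n+1)) = Γ(1/4)^4 / (16 π^2).
# The summand decays only like 1/(4 π n^2), so many terms are needed;
# the central factor is updated by the exact ratio ((2n+1)/(2n+2))^2.
total = 0.0
central = 1.0                    # C(2n,n)^2 / 16^n, starting at n = 0
for n in range(0, 200_000):
    total += central / (4 * n + 1)
    ratio = (2 * n + 1) / (2 * n + 2)
    central *= ratio * ratio

closed_form = math.gamma(0.25) ** 4 / (16 * math.pi ** 2)
assert abs(total - closed_form) < 1e-5
```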




March 2019 Update: after some manipulations, it turns out that the computation of the original $\phantom{}_3 F_2$ is equivalent to the computation of the integral
$$ \int_{0}^{1}\frac{-\log x}{\sqrt{x(1+6x+x^2)}}\,dx = \frac{1}{\sqrt{2}}\int_{0}^{+\infty}\frac{z\,dz}{\sqrt{3+\cosh z}}$$
which is way less scary. Additionally, nospoon has already tackled similar integrals, so I guess he might have something interesting to share.




Answer



This is not going to be a full answer, but I think I am able to contribute a little more compared to what has been presented above; therefore I present my approach.
Denote :
\begin{equation}
S(y):= \sum\limits_{n=0}^\infty \binom{4 n}{2 n} \binom{2 n}{n} \frac{y^{2 n}}{64^n}
\end{equation}
We use the good old identity:
\begin{equation}
\binom{2 n}{n}=(-4)^n \cdot \binom{-\frac{1}{2}}{n}
\end{equation}

and we get
\begin{eqnarray}
S(y)&=& \sum\limits_{n=0}^\infty \binom{-\frac{1}{2}}{n} \cdot \underbrace{\binom{-\frac{1}{2}}{2n}}_{\frac{1}{\pi} \int\limits_0^1 t^{-1/2} (1-t)^{2 n-1/2} dt} \cdot (-y^2)^n\\
&=& \frac{1}{\pi} \int\limits_0^1 \frac{1}{\sqrt{t(1-t)(1-y^2 t^2)}} dt
\end{eqnarray}
Now in the last equation above we replace $y$ by $y^2$ and integrate over $y$ from zero to unity. This gives:
\begin{eqnarray}
\int\limits_0^1 S(y^2) dy&=& \frac{1}{\pi} \int\limits_0^1 \frac{F_{2,1}[\frac{1}{4},\frac{1}{2},\frac{5}{4},t^2]}{\sqrt{t(1-t)}}dt\\
&=&\frac{1}{\pi} \int\limits_0^1 \frac{F[\arcsin(\sqrt{t}),-1]}{t \sqrt{1-t}} dt\\

&=&-\frac{1}{\pi}\int\limits_0^1 \frac{2\log(1-\sqrt{1-t})-\log(t)}{2\sqrt{t(1-t^2)}}dt\\
&=&-\frac{1}{\pi} \left(2\int\limits_0^1 \frac{\log(t)}{\sqrt{t(t-2)(t-1+\sqrt{2})(t-1-\sqrt{2})}}dt+\pi^{3/2} \frac{\Gamma(5/4)}{\Gamma(3/4)}\right)\\
&=&-\frac{1}{\pi} \left(
2 \sqrt{2} \int\limits_0^{\pi} \frac{\log[\sin(u/4)]}{\sqrt{3-\cos(u)}}\,du-
\frac{2 \sqrt{\pi} \Gamma(5/4) (\pi+4 \log(2))}{\Gamma(-1/4)}
\right)\\
&=&-\frac{1}{\pi} \left(
\sqrt{2} \int\limits_0^\pi \frac{\log[1-\cos(u/2)]}{\sqrt{3-\cos(u)}}du-
\frac{2 \pi^{3/2} \Gamma(5/4)}{\Gamma(-1/4)}
\right)\\

&=&
-\frac{1}{\pi} \left(
\sqrt{2} \int\limits_0^{\pi/2} \frac{\log[1-\cos(u)]}{\sqrt{1-1/2\cos(u)^2}}du-
\frac{2 \pi^{3/2} \Gamma(5/4)}{\Gamma(-1/4)}
\right)
\end{eqnarray}
In the first line above I expanded the square root in a series and integrated over $y$ term by term. In the second line I went to Wolfram's site http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/03/09/19/02/ and looked up the closed form for the particular hypergeometric function. Here $F[\phi,m]$ is the elliptic integral of the first kind. In the third line I integrated by parts once, and in the fourth line I evaluated the second term using appropriate Euler sums and substituted for $1-\sqrt{1-t}$ in the first term. In the fifth line I substituted $t:=2 \sin(u)^2$, and in the sixth line I simplified the result using trigonometric half-angle identities.



The remaining integral can be actually simplified by expanding the integrand in a double series and then doing one of the sums. Here we just state the result:
\begin{eqnarray}

&&\int\limits_0^{\pi/2} \frac{\log[1-\cos(u)]}{\sqrt{1-1/2\cos(u)^2}}du =
-\frac{\pi}{2}\sum\limits_{\lambda=0}^\infty
\binom{-1/2}{\lambda} (-\frac{1}{2})^\lambda \cdot \\
&&
\left(\frac{1+\lambda}{1+2\lambda} \binom{1/2+\lambda}{-1/2}(2 \log(2)+H_\lambda) + \binom{\lambda}{-1/2} F_{3,2}[\begin{array}{rrr} \frac{1}{2}& 1&1+\lambda\\ \frac{3}{2}& \frac{3}{2}+\lambda & \end{array};1] \right)
\end{eqnarray}
Now, the remaining hypergeometric functions $F_{3,2}$ can also be simplified and expressed through Catalan's constant. We lack the time to complete the whole calculation, so we set this aside for the time being and will finish it later.



Update: By using the following identity:
\begin{equation}

\frac{(1+\lambda)^{(l)}}{(3/2+\lambda)^{(l)}} = \frac{(3/2)^{(\lambda)}}{1^{(\lambda)}} \cdot \frac{1^{(l)}}{(3/2)^{(l)}} \cdot \frac{(l+1)^{(\lambda)}}{(l+3/2)^{(\lambda)}}
\end{equation}
and then by decomposing the last term on the right-hand side into partial fractions in $l$, we easily get the following identity:
\begin{eqnarray}
&&F_{3,2}[\begin{array}{rrr} \frac{1}{2}& 1&1+\lambda\\ \frac{3}{2}& \frac{3}{2}+\lambda & \end{array};1]=
\frac{(3/2)^{(\lambda)}}{1^{(\lambda)}} \cdot \left(\right.\\
&&\left.
2 C- \sum\limits_{j=1}^\lambda j \binom{-1/2+j}{\lambda} \binom{\lambda}{j} (-1)^{\lambda-j} \cdot \int\limits_0^1 \theta^{j-1/2} \cdot F_{3,2}[\begin{array}{rrr} \frac{1}{2}& 1&1\\ \frac{3}{2}& \frac{3}{2} & \end{array};\theta]d\theta\right.\\
&&\left.
\right)=\\

&&\frac{(3/2)^{(\lambda)}}{1^{(\lambda)}} \cdot \left(\right.\\
&&\left.
2 C-(1-\frac{(-1)^\lambda \sqrt{\pi}}{\Gamma[1/2-\lambda]\lambda!}) \frac{\imath \pi^2}{4}-\right.\\
&&\left.
(-1)^\lambda\sum\limits_{j=1}^\lambda j \binom{-1/2}{j} \binom{-1/2}{\lambda-j} \cdot \right.\\
&&\left.\int\limits_0^{\pi/2} [\sin(u)]^{2j-1} 2 \cos(u)\left( i \text{Li}_2\left(-e^{i u}\right)-i \text{Li}_2\left(e^{i u}\right)+u \log \left(\frac{1-e^{i u}}{1+e^{i u}}\right)\right)du\right.\\
&&\left.
\right)=\\
&&\frac{(3/2)^{(\lambda)}}{1^{(\lambda)}} \cdot \left(\right.\\
&&\left.
2 C-(1-\frac{(-1)^\lambda \sqrt{\pi}}{\Gamma[1/2-\lambda]\lambda!}) \frac{\imath \pi^2}{4}-\right.\\
&&\left.
(-1)^\lambda\sum\limits_{j=1}^\lambda\sum\limits_{p=-j-1}^{j-1} j \binom{-1/2}{j} \binom{-1/2}{\lambda-j} \cdot \frac{(-1)^{p+\lambda}}{2^{2j-1}}[\binom{2j-1}{p+1} - \binom{2j-1}{p+j+1}]\cdot\right.\\
&&\left.\int\limits_1^{\imath} z^{p+1} \left(\log(z) \log(\frac{1-z}{1+z})+\text{Li}_2(z)-\text{Li}_2(-z)\right)dz\right.\\
&&\left.
\right)=\\
&&\frac{(3/2)^{(\lambda)}}{1^{(\lambda)}} \cdot \left(\right.\\
&&\left.
2 C+\right.\\
&&\left.
\frac{1}{2}
\sum\limits_{j=0}^{\lambda-1}\sum\limits_{p=-\lambda-1,p\neq -1}^{\lambda-1}
\binom{-3/2}{j} \binom{-1/2}{\lambda-1-j} \frac{1}{2^{2j+1}}
[\binom{2j+1}{p+j+1}-\binom{2j+1}{p+j+2}]\cdot\right.\\
&&\left.
\frac{(-1)^{p+\lambda}}{p+1}
\left((1+(-1)^p) \cdot C - \sum\limits_{k=0}^{p \cdot 1_{p\ge 0}-(p+2) 1_{p<0}} \frac{(-1)^k}{(2k+1)^2}\right)\right.\\
&&\left.
\right)
\end{eqnarray}

where in the second line we used the closed form expression from http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric3F2/03/08/05/02/01/07/0001/ and we changed the variables appropriately. In the third line we substituted for $z:=\exp(\imath u)$. In the fourth line we calculated the integrals using integration by parts and then simplified the result.
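As a sanity check, the Pochhammer identity used in the Update can be verified numerically. Below is a minimal sketch in Python (the helper name `poch` is mine, implemented via `math.gamma`), testing the identity for small integer $\lambda$ and $l$:

```python
from math import gamma, isclose

def poch(a, n):
    """Rising factorial (Pochhammer symbol): a^{(n)} = Gamma(a+n)/Gamma(a)."""
    return gamma(a + n) / gamma(a)

# Check (1+lam)^{(l)} / (3/2+lam)^{(l)}
#   = (3/2)^{(lam)}/1^{(lam)} * 1^{(l)}/(3/2)^{(l)} * (l+1)^{(lam)}/(l+3/2)^{(lam)}
for lam in range(6):
    for l in range(6):
        lhs = poch(1 + lam, l) / poch(1.5 + lam, l)
        rhs = (poch(1.5, lam) / poch(1, lam)) \
            * (poch(1, l) / poch(1.5, l)) \
            * (poch(l + 1, lam) / poch(l + 1.5, lam))
        assert isclose(lhs, rhs)
```

The identity is a rearrangement of the four Gamma factors in $\frac{\Gamma(1+\lambda+l)\,\Gamma(3/2+\lambda)}{\Gamma(1+\lambda)\,\Gamma(3/2+\lambda+l)}$, which is why the check passes exactly up to floating-point error.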



In summary, we have shown that:
\begin{eqnarray}
&&F_{3,2}[\begin{array}{rrr} \frac{1}{2}& 1&1+\lambda\\ \frac{3}{2}& \frac{3}{2}+\lambda & \end{array};1]=
2 \frac{(1/2)^{(\lambda)} (3/2)^{(\lambda)}}{\lambda! \cdot \lambda!} \cdot C + \frac{(3/2)^{(\lambda)}}{\lambda!}\cdot {\mathfrak A}_\lambda\\
&&\int\limits_0^{\pi/2} \frac{\log(1-\cos(u))}{\sqrt{1-1/2 \cos(u)^2}}du=\frac{\sqrt{\frac{2}{\pi }} \Gamma \left(\frac{1}{4}\right) (4 C+\pi \log (2))}{\Gamma \left(-\frac{1}{4}\right)}-\\
&& \sum\limits_{\lambda=0}^\infty \binom{-1/2}{\lambda} (\frac{1}{2})^\lambda \left( \frac{\pi}{4} \binom{-1/2}{\lambda} H_\lambda+(-1)^\lambda {\mathfrak A}_\lambda\right)\\
&&\int\limits_0^1 S(y^2) dy=-\frac{1}{\pi}\left(\right.\\
&&\left.

-\frac{2 \Gamma \left(\frac{5}{4}\right) (\pi (\pi -2 \log (4))-16 C)}{\sqrt{\pi } \Gamma \left(-\frac{1}{4}\right)}\right.\\
&&\left.
-\sqrt{2}\cdot\sum\limits_{\lambda=0}^\infty \binom{-1/2}{\lambda} (\frac{1}{2})^\lambda \left( \frac{\pi}{4} \binom{-1/2}{\lambda} H_\lambda+(-1)^\lambda {\mathfrak A}_\lambda\right)\right.\\
&&\left.
\right)
\end{eqnarray}
where
\begin{eqnarray}
&&{\mathfrak A}_\lambda := \\
&&\left(

\sum\limits_{j=0}^{\lambda-1} \sum\limits_{\begin{array}{r} p=-\lambda-1\\p\neq-1\end{array}}^{\lambda-1}
\binom{-3/2}{j} \binom{-1/2}{\lambda-1-j} \frac{[\binom{2j+1}{p+j+1} - \binom{2j+1}{p+j+2}]}{2^{2 j+2}}
\frac{(-1)^{p+\lambda+1}}{p+1}\cdot
\sum\limits_{k=0}^{p \cdot 1_{p\ge 0}-(p+2) 1_{p<0}} \frac{(-1)^k}{(2k+1)^2}
\right)
\end{eqnarray}
Note: The remaining infinite sum converges quite quickly; truncating it at the first one hundred terms already gives more than thirty digits of accuracy. On the other hand, the series in question converges very slowly: one needs to take five thousand terms in order to get the first four decimal digits right.


how to do integration $\int_{-\infty}^{+\infty}\exp(-x^n)\,\mathrm{d}x$?




how to do integration $\int_{-\infty}^{+\infty}\exp(-x^n)\,\mathrm{d}x$, assuming $n>1$ ?



From wiki page Gaussian Integral: $\int_{-\infty}^{+\infty}\exp(-x^2)\,\mathrm{d}x = \sqrt{\pi}$



So one can define a random variable $X$ with
$\text{pdf}(x) = \frac{1}{\sqrt{\pi}} \exp(-x^2)$,
since $\int_{-\infty}^{+\infty}\text{pdf}(x)\,\mathrm{d}x = 1$. Actually, this is a normal distribution (with mean $0$ and variance $\frac12$).



Now, I'd like to define

$\text{pdf}(x) = \frac{1}{c} \text{exp}(-x^n)$, but how much is $c$?



Or, $\int_{-\infty}^{\infty}\exp(-x^n)\,\mathrm{d}x = ?$



There is a hint on wiki page Error Function:



Error function is $\text{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x}\exp(-t^2)\,\mathrm{d}t$, with $\text{erf}(0)=0$ and $\text{erf}(+\infty) = 1$.



So this means the pdf above can be recovered from the error function, since $\text{pdf}(x) = \frac{1}{2}\,\frac{\mathrm d}{\mathrm dx}\text{erf}(x)$.




Then, Generalized Error Function is defined as
$$E_n(x) = \frac{n!}{\sqrt{\pi}} \int_{0}^{x} \text{exp}(-t^n)dt$$
does this mean



$\int_{-\infty}^{+\infty}\exp(-x^n)\,\mathrm{d}x = \frac{2\sqrt{\pi}}{n!} $
?



Even so, there's still a problem: if $n$ is not an integer, how do we calculate $n!$? Does it become the Gamma function $\Gamma(n)$? Like this:



$\int_{-\infty}^{+\infty}\exp(-x^n)\,\mathrm{d}x = \frac{2\sqrt{\pi}}{\Gamma(n)} $



Answer



$$n>1,\; t=x^n:$$



$$\int_{0}^{\infty} e^{-x^n}\,dx=\frac{1}{n}\int_0^{\infty} t^{\frac{1}{n}-1}e^{-t}dt=\frac{1}{n}\Gamma \left(\frac{1}{n}\right)=\Gamma \left(\frac{n+1}{n}\right)$$



As for the integral over $\mathbb{R}:$ when $n$ is even, double this, when $n$ is odd, the integral does not converge.
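The closed form $\int_0^\infty e^{-x^n}\,dx=\Gamma\!\left(\frac{n+1}{n}\right)$ is easy to sanity-check numerically. A minimal sketch using only the standard library (composite Simpson quadrature; the truncation point `upper` is my assumption, and it is safe because the tail decays super-exponentially for $n>1$):

```python
import math

def integral_exp_xn(n, upper=10.0, steps=200_000):
    """Simpson estimate of the integral of exp(-x**n) over [0, upper];
    the tail beyond `upper` is negligible for n > 1."""
    h = upper / steps
    total = math.exp(0.0) + math.exp(-upper**n)   # endpoint terms
    for i in range(1, steps):
        x = i * h
        total += (4 if i % 2 else 2) * math.exp(-x**n)
    return total * h / 3

for n in (2, 3, 4):
    print(n, integral_exp_xn(n), math.gamma(1 + 1/n))
```

For each $n$ the two printed values agree to many decimal places, matching $\frac{1}{n}\Gamma\!\left(\frac{1}{n}\right)=\Gamma\!\left(1+\frac{1}{n}\right)$.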


real analysis - finding $\lim_{n \rightarrow +\infty}\frac{n}{2^n}= ?$




Finding the limit below:




$$\lim_{n \rightarrow +\infty}\frac{n}{2^n}= ?$$



I really think it's $0$. But intuitively it is infinity over infinity; how can that be? An indeterminate form? Thanks


Answer



Intuitively, $2^n$ grows much faster than $n$.



Note that by the Binomial Theorem, $2^n=(1+1)^n=1+n+\frac{n(n-1)}{2}+\cdots$.



In particular, if $n\gt 1$, we have $2^n\ge \dfrac{n(n-1)}{2}$.




Thus $0\lt \dfrac{n}{2^n}\le \dfrac{2}{n-1}$. But $\frac{2}{n-1}$ approaches $0$ as $n\to\infty$, so by Squeezing, so does $\dfrac{n}{2^n}$.
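The squeeze bound $0<\frac{n}{2^n}\le\frac{2}{n-1}$ is easy to illustrate numerically; a minimal sketch:

```python
for n in (2, 5, 10, 20, 50):
    ratio = n / 2**n
    bound = 2 / (n - 1)
    assert 0 < ratio <= bound   # the squeeze inequality from the answer
    print(n, ratio, bound)
```

Both columns shrink toward $0$, the ratio much faster than the bound.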



Another way: You can use L'Hospital's Rule to show $\lim_{x\to\infty}\frac{x}{2^x}=0$.


Saturday 16 March 2019

elementary set theory - The cartesian product $\mathbb{N} \times \mathbb{N}$ is countable



I'm examining a proof I have read that claims to show that the Cartesian product $\mathbb{N} \times \mathbb{N}$ is countable, and as part of this proof, I am looking to show that the given map is surjective (indeed bijective), but I'm afraid that I can't see why this is the case. I wonder whether you might be able to point me in the right direction?



Indeed, the proof begins like this:



"For each $n \in \mathbb{N}$, let $k_n, l_n$ be such that $n = 2^{k_n - 1} \left(2l_n - 1 \right)$; that is, $k_n - 1$ is the power of $2$ in the prime factorisation of $n$, and $2 l_n - 1$ is the (necessarily odd) number $\frac{n}{2^{k_n - 1}}$."




It then states that $n \mapsto \left(k_n , l_n \right)$ is a surjection from $\mathbb{N}$ to $\mathbb{N} \times \mathbb{N}$, and so ends the proof.



I can intuitively see why this should be a bijection, I think, but I'm not sure how to make these feelings rigorous?



I suppose I'd say that the map is surjective since given any $\left(k_n , l_n \right) \in \mathbb{N} \times \mathbb{N}$ we can simply take $n$ indeed to be equal to $2^{k_n - 1} \left(2l_n - 1 \right)$ and note that $k_n - 1 \geq 0$ and thus $2^{k_n - 1}$ is both greater or equal to one so is a natural number (making the obvious inductive argument, noting that multiplication on $\mathbb{N}$ is closed), and similarly that $2 l_n - 1 \geq 2\cdot 1 - 1 = 1$ and is also a natural number, and thus the product of these two, $n$ must also be a natural number. Is it just as simple as this?



I suppose my gut feeling in the proving that the map is injective would be to assume that $2^{k_n - 1} \left(2 l_n - 1 \right) = 2^{k_m - 1} \left(2 l_m - 1 \right)$ and then use the Fundamental Theorem of Arithmetic to conclude that $n = m$. Is this going along the right lines? The 'implicit' definition of the mapping has me a little confused about the approach.







On a related, but separate note, I am indeed aware that if $K$ and $L$ are any countable sets, then so is $K \times L$; taking the identity mapping we see trivially that $\mathbb{N}$ is certainly countable (!), and thus so is $\mathbb{N} \times \mathbb{N}$. Hence, it's not really the statement that I'm interested in, but rather the exciting excursion into number theory that the above alternative proof provides.


Answer



Your intuition is correct. We use the fundamental theorem of arithmetic, namely the prime factorization is unique (up to order, of course).



First we prove injectivity:



Suppose $(k_n,l_n),(k_m,l_m)\in\mathbb N\times\mathbb N$ and $2^{k_n - 1} (2 l_n - 1 ) = 2^{k_m - 1} (2 l_m - 1)$.



$2$ is a prime number and $2t-1$ is odd for all $t$, so the power of $2$ is the same on both sides of the equation; that is, $k_n=k_m$.




Divide by $2^{k_n-1}$, and therefore $2l_n-1 = 2l_m-1$; add $1$ and divide by $2$, so $(k_n,l_n)=(k_m,l_m)$, and therefore this mapping is injective.



Surjectivity is even simpler: take $(k,l)\in\mathbb N\times\mathbb N$ and let $n=2^{k-1}(2l-1)$. Now $n\mapsto(k,l)$, because $2l-1$ is odd, so the power of $2$ in the prime decomposition of $n$ is exactly $k-1$, and from there $l$ is determined to be our $l$. (If you look closely, this is exactly the same argument as for injectivity, only applied "backwards", which is a trait many proofs of this kind have.)
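Both directions of the argument can be checked computationally. A sketch (the helper names `decode` and `encode` are mine) that round-trips the map $n \leftrightarrow (k,l)$ on an initial segment of $\mathbb N$:

```python
def decode(n):
    """n -> (k, l), where n = 2**(k-1) * (2*l - 1):
    strip factors of 2 to find k, the odd part gives l."""
    k = 1
    while n % 2 == 0:
        n //= 2
        k += 1
    return k, (n + 1) // 2

def encode(k, l):
    """(k, l) -> n, the inverse map."""
    return 2**(k - 1) * (2*l - 1)

# bijectivity on an initial segment
assert all(encode(*decode(n)) == n for n in range(1, 10_001))
assert all(decode(encode(k, l)) == (k, l)
           for k in range(1, 30) for l in range(1, 30))
```

The first assertion mirrors injectivity (each $n$ has exactly one preimage pair), the second mirrors surjectivity (every pair is hit).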






As for simpler proofs, there are infinitely many... from Cantor's pairing function ($(n,m)\mapsto\frac{(n+m)(n+m+1)}{2}+n$), to Cantor-Bernstein arguments by $(n,m)\mapsto 2^n3^m$ and $k\mapsto (k,k)$ for the injective functions. I like this function, though. I will try to remember it and use it next time I teach someone such a proof.


Friday 15 March 2019

elementary set theory - example of non order preserving bijection




I read that "the set of integers and the set of rational numbers (with the standard ordering) do not have the same order type, because even though the sets are of the same size (they are both countably infinite), there is no order-preserving bijective mapping between them."



I am still new to set theory. I see that the integers can be put in bijection with the set of rational numbers, but why is there no order-preserving bijective mapping?


Answer



Assume, for contradiction, that $f:\Bbb Z\to \Bbb Q$ is an order-preserving bijection. Now consider $f(1)$ and $f(2)$. Since $f$ is order-preserving, we must have $f(1)<f(2)$. But more than that, we have
$$
f(1)<\frac{f(1) + f(2)}2<f(2).
$$

That number in the middle is a rational number, and $f$ is a bijection, so there must be an integer $n$ such that $f(n) = \frac{f(1) + f(2)}{2}$. Which is to say that
$$
f(1)<f(n)<f(2).
$$

But $f$ is order-preserving, meaning we have
$$
1<n<2,
$$
which is impossible. Thus, by contradiction, there cannot be any order-preserving bijection between the integers and the rationals.


combinatorics - Can this binomial polynomial sum be simplified?



$$\sum_{k=0}^{n} \binom{n}{k} k^d$$



where $d$ is some fixed positive integer. Is this a well known sum that has a faster-than-$O(n)$ evaluation? It looks similar to Faulhaber's formula, except with binomial coefficients.


Answer



Suppose we seek to evaluate
$$S_d(n) = \sum_{k=0}^n {n\choose k} k^d.$$
using

$$k^d =
\frac{d!}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{d+1}} \exp(kz) \; dz.$$



This yields for the sum
$$\frac{d!}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{d+1}}
\sum_{k=0}^n {n\choose k} \exp(kz)

\; dz
\\ = \frac{d!}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{d+1}}
(1+\exp(z))^n
\; dz
\\ = \frac{d!}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{d+1}}
(2+\exp(z)-1)^n

\; dz
\\ = \frac{d!}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{d+1}}
\sum_{q=0}^n {n\choose q} 2^{n-q} (\exp(z)-1)^q
\; dz
\\ = \sum_{q=0}^n {n\choose q} 2^{n-q}
\times q! \times {d\brace q}$$



for a final answer of

$$\sum_{q=0}^d {n\choose q} 2^{n-q}
\times q! \times {d\brace q}.$$



This is a polynomial of degree $d$ in $n.$



The change of the upper limit is justified as follows: if $n\gt d$
then the Stirling number is zero for $n\ge q \gt d$ and we may replace
$n$ by $d$. On the other hand if $n\lt d$ the binomial coefficient is
zero for $n\lt q\le d$ and we may again replace $n$ by $d.$
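The closed form can be checked against the defining sum. A quick sketch (the helper `stirling2` computes Stirling numbers of the second kind by the standard inclusion-exclusion formula; all names are mine):

```python
from math import comb, factorial

def stirling2(d, q):
    """Stirling number of the second kind {d brace q}, via inclusion-exclusion."""
    return sum((-1)**(q - j) * comb(q, j) * j**d for j in range(q + 1)) // factorial(q)

def S_closed(n, d):
    """sum_{q=0}^{d} C(n,q) * 2^(n-q) * q! * {d brace q}  (the derived closed form)"""
    return sum(comb(n, q) * 2**(n - q) * factorial(q) * stirling2(d, q)
               for q in range(d + 1))

def S_brute(n, d):
    """sum_{k=0}^{n} C(n,k) * k^d  (the original sum)"""
    return sum(comb(n, k) * k**d for k in range(n + 1))

assert all(S_closed(n, d) == S_brute(n, d)
           for n in range(1, 10) for d in range(1, 6))
```

Since `math.comb` returns $0$ when $q>n$, the same code silently handles both cases ($n>d$ and $n<d$) discussed above.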




Here we have used the species equation for labeled set partitions
$$\mathfrak{P}(\mathcal{U}\mathfrak{P}_{\ge 1}(\mathcal{Z}))$$



which gives the bivariate generating function of the Stirling numbers
of the second kind
$$\exp(u(\exp(z)-1)).$$


calculus - To prove a sequence is Cauchy




I have a sequence:
$ a_{n}=\sqrt{3+ \sqrt{3 + \cdots + \sqrt{3}}} $, where the $3$ repeats $n$ times,



and I have to prove that it is a Cauchy sequence.
So I did this: a theorem says that every convergent sequence is also Cauchy, so I proved that the sequence is bounded between $\sqrt{3}$ and $3$ (I am not sure about this bound, please check whether I am right). I also proved by induction that the sequence is monotonic: $a_{n} \leq a_{n+1}$.
Since it is bounded and monotonic, it is convergent and therefore Cauchy.
I am just wondering whether this already proves it, and whether the upper bound (the supremum, if you wish) is chosen correctly.

I appreciate all the help I get.


Answer



${a}_{n+1}=\sqrt{a_{n}+3}\ \Rightarrow\ a_{n+1}^{2}=a_{n}+3$. Letting $n\rightarrow\infty$ and writing $x$ for the limit, $x^{2}=x+3$, that is, $x^{2}-x-3=0$, and the sequence converges to $x=\frac{1+\sqrt{13}}{2}$.
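A quick numerical check of the limit (a minimal sketch; fifty iterations are far more than needed, since the map contracts by roughly a factor of $\frac{1}{2x}\approx 0.22$ per step):

```python
import math

a = math.sqrt(3)             # a_1
for _ in range(50):
    a = math.sqrt(3 + a)     # the recurrence a_{n+1} = sqrt(3 + a_n)

limit = (1 + math.sqrt(13)) / 2
print(a, limit)
```

The two printed values agree to machine precision, consistent with the fixed-point computation above.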


limits - What is wrong with the following "proof" that $e=1$?




Let's analyze this expression



$\lim_\limits{n\rightarrow\infty} (1+\frac{1}{n})^n$



It's the definition of $e$ which, as we know is not equal to $1$. So what is wrong with the following "logic":



As $\lim_\limits{n\rightarrow\infty}(a_{n}b_{n}) = \lim_\limits{n\rightarrow\infty}(a_{n})\times\lim_\limits{n\rightarrow\infty}(b_{n}) $ and $\lim_\limits{n\rightarrow\infty}(1+\frac{1}{n}) = 1$, we can say that $\lim_\limits{n\rightarrow\infty} (1+\frac{1}{n})^n=1^n$, which is equal to $1$.



I know something's wrong there, but the question is - what?



Answer



What you actually proved is that
$$\lim_{k\to\infty} \left(\lim_{n\to\infty} \left(1+ \frac1n\right)\right)^k = 1$$
which is correct, but the LHS is not equal to $e$.
The problem is most apparent when you end up with $1^n$ (which is really supposed to be $\lim_{k\to\infty} 1^k$) while having already disposed of the limit expression $\lim_{n\to\infty}$.
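Numerically the distinction is clear: the inner expression $1+\frac1n$ tends to $1$, yet $(1+\frac1n)^n$ tends to $e$, because the exponent grows at the same rate that the base approaches $1$. A minimal sketch:

```python
import math

for n in (10, 1_000, 100_000):
    print(n, (1 + 1/n)**n)   # approaches e, not 1
print("e =", math.e)
```

Splitting the limit as in the "proof" would freeze the base at $1$ before the exponent acts, which is exactly the illegal step.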


Thursday 14 March 2019

abstract algebra - Linearly disjoint fields



We say that two field $E,F$, extending the same base field $K$, are linearly disjoint if every finite subset of $E$ that is $K$-linearly independent is also $F$-linearly independent.



Suppose $K = \mathbb{Q}$. Is this definition equivalent to say that $E \cap F = \mathbb{Q}$? And if so, why?



My attempt: Assuming that the extensions $E/\mathbb{Q}$, $F/\mathbb{Q}$ are finite, I tried using the primitive element theorem, so that $E=\mathbb{Q}(\alpha)$ and $F=\mathbb{Q}(\beta)$, for some $\alpha,\beta$ algebraic. Then the elements of these fields are just polynomials in these numbers, but from here i was not able to conclude.




Is is even true if the extensions are not finite?



Thanks in advance!


Answer



$\newcommand{\Q}{\mathbb{Q}}$No, it is not equivalent.



As a possibly typical example, take $K = \Q$, $E = \Q(\omega \alpha)$, $F = \Q(\alpha)$, where $\alpha = \sqrt[3]{2}$ and $\omega$ is a primitive third root of unity.



We have $E \cap F = K$, but while $1, \omega \alpha, \omega^{2} \alpha^{2} \in E$ are independent over $K$, you have
$$

1 + \frac{\alpha^{2}}{2}( \omega \alpha) + \frac{\alpha}{2} (\omega^{2} \alpha^{2}) = 1 + \omega + \omega^{2} = 0,
$$
so they are not independent over $F$.
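The vanishing linear combination in this example can be verified with floating-point complex arithmetic; a minimal sketch:

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity
alpha = 2 ** (1 / 3)                   # real cube root of 2

# 1 + (alpha^2/2)*(omega*alpha) + (alpha/2)*(omega^2*alpha^2)
#   = 1 + omega + omega^2 = 0, since alpha^3 = 2
value = 1 + (alpha**2 / 2) * (omega * alpha) + (alpha / 2) * (omega**2 * alpha**2)
print(abs(value))
```

The printed magnitude is zero up to rounding error, confirming the $F$-linear dependence.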


combinatorics - How many ways can the digits $2,3,4,5,6$ be arranged to get a number divisible by 11



How many ways can the digits $2,3,4,5,6$ be arranged to get a number divisible by $11$



I know that the sum of the permutations of the digits should be divisible by 11. Also, the total number of ways the digits can be arranged is $5! = 120$.


Answer



Hint. By the divisibility rule for $11$, we have to count the arrangements $d_1,d_2,d_3,d_4,d_5$ of the digits $2,3,4,5,6$ such that $d_1+d_3+d_5-(d_2+d_4)$ is divisible by $11$. Notice that
$$-2=2+3+4-(5+6)\leq d_1+d_3+d_5-(d_2+d_4)\leq 4+5+6-(2+3)=10$$

therefore we should have $d_1+d_3+d_5=d_2+d_4=\frac{2+3+4+5+6}{2}=10$.



In how many ways can we do that?
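If you want to check the hint by brute force (this also reveals the answer to the closing question), a short enumeration can be sketched as:

```python
from itertools import permutations

# count five-digit numbers formed from 2,3,4,5,6 that are divisible by 11
count = sum(1 for p in permutations((2, 3, 4, 5, 6))
            if int(''.join(map(str, p))) % 11 == 0)
print(count)   # → 12
```

This matches the hint: $d_2,d_4$ must be $\{4,6\}$ in one of $2$ orders, and the remaining digits $\{2,3,5\}$ fill the odd positions in $3!=6$ ways, giving $2\cdot 6=12$.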


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...