Monday 30 September 2019

modular arithmetic - Finding the inverse of a number under a certain modulus

How does one get the inverse of 7 modulo 11?



I know the answer is supposed to be 8, but have no idea how to reach or calculate that figure.



Likewise, I have the same problem finding the inverse of 3 modulo 13, which is 9.
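
One way to compute such inverses, by hand or by machine, is the extended Euclidean algorithm: find integers $x,y$ with $7x + 11y = 1$; then $x \bmod 11$ is the inverse. A minimal Python sketch of this idea (the function names are mine, purely illustrative):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Return the inverse of a modulo m; requires gcd(a, m) = 1."""
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return x % m

print(mod_inverse(7, 11))  # 8, since 7 * 8 = 56 = 5 * 11 + 1
print(mod_inverse(3, 13))  # 9, since 3 * 9 = 27 = 2 * 13 + 1
```

(In Python 3.8+ the built-in `pow(7, -1, 11)` gives the same result.)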

Sunday 29 September 2019

algebra precalculus - 8^n - 3^n is divisible by 5

My computations:
Base Case= $8-3 = 5$ is divisible by $5$
Inductive Hypothesis$=$
$8^k - 3^k = 5M$
$8^k = 5M + 3^k$
$3^k = 8^k - 5M$



My final answer is $5(11M + 11^k)$
Please tell me if I have any mistake regarding my computations.
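
For reference, the inductive step is usually finished from the hypothesis $8^k - 3^k = 5M$ like this:

$$\begin{align}
8^{k+1} - 3^{k+1} &= 8\cdot 8^k - 3\cdot 3^k \\
&= 8\,(5M + 3^k) - 3\cdot 3^k \\
&= 40M + 5\cdot 3^k \\
&= 5\,(8M + 3^k),
\end{align}$$

which is divisible by $5$; comparing against this may show where the computation above went off track.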

integration - Evaluating $\int^{\frac\pi2}_{\frac\pi4}(2\csc(x))^{17}dx$



I saw this question in JEE Advanced, but there we only had to simplify it to $$\int^{\log(1+\sqrt2)}_{0}2(e^u+e^{-u})^{16}du.$$
Here I pose the question of evaluating this integral in closed form.
My Attempt:
$$\int^{\frac\pi2}_{\frac\pi4}(2\csc(x))^{17}dx$$
$$=\int^{\frac\pi2}_{\frac\pi4}\frac{(2.2i)^{17}}{(e^{ix}-e^{-ix})^{17}}dx$$
(Bit of a cheeky attempt! I know. :D)
$$4^{17}i\int^{\frac\pi2}_{\frac\pi4}\frac{e^{17ix}}{e^{2ix}-1}dx$$

If we put $u=e^{ix};du=i.e^{ix}dx$, then $$-4^{17}\int^{i}_{\frac{1+i}{\sqrt2}}\frac{u^{16}}{u^2-1}du$$
(Huh??!!)
$$4^{17}\int_i^{\frac{1+i}{\sqrt2}}\frac{u^{16}}{u^2-1}du$$
And $$\int\frac{u^{16}}{u^2-1}=\frac{u^{15}}{15}+\frac{u^{13}}{13}+\frac{u^{11}}{11}+\frac{u^{9}}{9}+\frac{u^{7}}{7}+\frac{u^{5}}{5}+\frac{u^{3}}{3}+u+\frac12\log(1-u)-\frac12\log(u+1)+C$$
But at this point I am stuck. Please can someone help?


Answer



By applying the Weierstrass substitution $x=2\arctan t$ we have:
$$\color{red}{I}=\int_{\pi/4}^{\pi/2}\left(\frac{2}{\sin x}\right)^{17}\,dx = \int_{\pi/4}^{\pi/2}\left(\frac{1}{\sin\frac{x}{2}\,\cos\frac{x}{2}}\right)^{17}=2\int_{\sqrt{2}-1}^{1}\left(\frac{1+t^2}{t}\right)^{17}\frac{dt}{1+t^2}$$
then by setting $t=e^{-u}$ and applying the binomial theorem it follows that:
$$ \color{red}{I} = 2\int_{0}^{\log(1+\sqrt{2})}(e^u+e^{-u})^{16}\,du=\color{red}{\frac{16037316\,\sqrt{2}}{7}+25740\log\left(1+\sqrt{2}\right)}.$$



Saturday 28 September 2019

elementary number theory - Proving that $30 \mid ab(a^2+b^2)(a^2-b^2)$




How can I prove that $30 \mid ab(a^2+b^2)(a^2-b^2)$ without using $a,b$ congruent modulo $5$ and then
$a,b$ congruent modulo $6$ (for example) to show respectively that $5 \mid ab(a^2+b^2)(a^2-b^2)$ and
$6 \mid ab(a^2+b^2)(a^2-b^2)$?



Indeed this method involves studying numerous congruences and is quite long.


Answer



You need to show $ab(a^2 - b^2)(a^2 + b^2)$ is a multiple of $2$, $3$, and $5$ for all $a$ and $b$.



For 2: If neither $a$ nor $b$ is even, they are both odd and $a^2 \equiv b^2 \equiv 1 \pmod 2$, so that 2 divides $a^2 - b^2$.




For 3: If neither $a$ nor $b$ is a multiple of 3, then $a^2 \equiv b^2 \equiv 1 \pmod 3$, so 3 divides $a^2 - b^2$, just as above.



For 5: If neither $a$ nor $b$ is a multiple of 5, then either $a^2 \equiv 1 \pmod 5$ or $a^2 \equiv -1 \pmod 5$. The same holds for $b$. If $a^2 \equiv b^2 \pmod 5$ then 5 divides $a^2 - b^2$, while if $a^2 \equiv -b^2 \pmod 5$ then 5 divides $a^2 + b^2$.



This does break into cases, but as you can see it's not too bad to do it systematically like this.
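
Since divisibility by $30$ of that product only depends on $a$ and $b$ modulo $30$, the whole claim can also be confirmed by an exhaustive machine check; a throwaway Python sketch:

```python
# Check 30 | a*b*(a^2 + b^2)*(a^2 - b^2) for every residue pair (a, b) mod 30.
# This suffices because the product mod 30 depends only on a mod 30 and b mod 30.
print(all(
    (a * b * (a**2 + b**2) * (a**2 - b**2)) % 30 == 0
    for a in range(30)
    for b in range(30)
))  # True
```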


elementary number theory - How can you prove that the square root of two is irrational?




I have read a few proofs that $\sqrt{2}$ is irrational.



I have never, however, been able to really grasp what they were talking about.



Is there a simplified proof that $\sqrt{2}$ is irrational?


Answer



You use a proof by contradiction. Basically, you suppose that $\sqrt{2}$ can be written as $p/q$. Then you know that $2q^2 = p^2$. However, both $q^2$ and $p^2$ have an even number of factors of two, so $2q^2$ has an odd number of factors of 2, which means it can't be equal to $p^2$.


Fundamental assumption about number theory: divisibility



Some interesting questions arose in my mind. The Mathematics we have built to date depends on the fundamental assumptions we make at the very beginning. For example, if we consider Number Theory - more precisely divisibility (see [1], page 24) - we come upon the following statement:





An integer $d$ is said to divide $n$ or be a divisor of $n$ if there exists an integer $e$ such that $n=de$.




I am wondering:




  1. Is this an axiom?

  2. Is this a fact of life arising from nature, or was this invented?


  3. Can a different Mathematics be built if we change this statement?



Bonus points for anyone who can point me to literature that covers topics such as:




  1. Fundamental nature of Number Theory (or in general, Mathematics)

  2. Possibility of other variants of Number Theory (or in general, Mathematics)







[1] Kenneth S. Williams, 2010. Number Theory in the Spirit of Liouville. Cambridge University Press.


Answer



It is just a definition of a concept, a notion that mathematicians stumbled across. So, they just gave it a name (divisor). They could also have called the concept "Peter" or "Flabber" or whatever. The concept was not invented but found/discovered. This is also true for many other mathematical concepts such as groups, for example. Even though groups do not appear in their pure form in nature, their concept can be transferred to many instances in reality (for example, in quantum mechanics).
By simply assigning a different name to the word divisor we do not get some "different Mathematics", since we are only dealing with a definition and not with an axiom. If your statement were indeed an axiom, we could very well end up with some different mathematics by changing it. A good example of this is the parallel postulate in geometry. If you replace the parallel postulate of Euclidean geometry with




For any given line $R$ and point $P$ not on $R$, in the plane containing both line $R$ and point $P$ there are at least two distinct lines through $P$ that do not intersect $R$. (taken from: Wikipedia),





then you arrive at hyperbolic geometry, which is indeed very different from Euclidean geometry.



Concerning your last question about possible variants of number theory, you might find analytic number theory interesting, which brings together methods from number theory and mathematical analysis such as complex analysis.


real analysis - When $\lim_{x\rightarrow +\infty}\frac{f(x)}{x}=0$ implies $\lim_{x\rightarrow +\infty}f'(x)=0$?

I have just solved a problem:




Let $f:[0,+\infty)\rightarrow \mathbb{R}$ be continuous on $[0,+\infty)$ and differentiable on $(0,+\infty)$. If $\displaystyle \lim_{x\rightarrow +\infty}f'(x)=0$, prove that $\displaystyle \lim_{x\rightarrow +\infty}\frac{f(x)}{x}=0$





My question is about the converse: if $\displaystyle \lim_{x\rightarrow +\infty}\frac{f(x)}{x}=0$, which hypotheses can be added (if needed) so that we can conclude $\displaystyle \lim_{x\rightarrow +\infty}f'(x)=0$?



Actually, I tried to solve the following one, but I still cannot solve it:




If $f:[0,+\infty)\rightarrow [0,+\infty)$ such that its second derivative is continuous, $f'\le 0$ and $|f''|\le M$ for some $M$ for all $x\ge 0$, then $\displaystyle \lim_{x\rightarrow +\infty}f'(x)=0$.




From the hypothesis, $f$ is decreasing and bounded below, so $f$ has a limit as $x$ tends to infinity. Thus $\displaystyle \lim_{x\rightarrow +\infty}\frac{f(x)}{x}=0$.




My question is broader than the second problem above. Any help would be appreciated.

Friday 27 September 2019

Find limit without using L'hopital or Taylor's series



I'm trying to solve this limit $without$ using L'hopital's Rule or Taylor Series. Any help is appreciated!



$$\lim\limits_{x\rightarrow 0^+}{\dfrac{e^x-\sin x-1}{x^2}}$$


Answer



One possible way is to shoot linear functions at the limit - not very elegant, but it works. Let:



$$f(x)=x^3-\frac{x^2}{2}+e^x-\sin x-1,\;\;x\geq 0$$




Computing the first few derivatives of $f:$



$$f'(x)=3x^2-x+e^x-\cos x$$
$$f''(x)=6x-1+e^x+\sin x$$



$f''$ is clearly increasing and since $f''(0)=0$ we have $f''(x)>0$ for $x\in (0,a)$ for some $a$.
This in turn implies that $f'$ is strictly increasing and since $f'(0)=0$ we again have $f'(x)>0$ for $x\in (0,a)$. Finally, this means $f$ is also increasing on this interval, and since $f(0)=0$ we have:



$$0\leq x\leq a:\quad f(x)\geq 0$$




$$\Rightarrow \;\;\frac{e^x-\sin x-1}{x^2}\geq \frac{1}{2}-x$$



Similarly by considering $h(x)=-x^3-\dfrac{x^2}{2}+e^x-\sin x-1$ it is very easy to show that:



$$0\leq x\leq b: \quad h(x)\leq 0$$



$$\Rightarrow \;\;\frac{1}{2}+x\geq\frac{e^x-\sin x-1}{x^2}$$



Hence for small positive $x$ we have:




$$\frac{1}{2}-x\leq\frac{e^x-\sin x-1}{x^2}\leq \frac{1}{2}+x$$



$$\lim_{x\to 0^+}\frac{e^x-\sin x-1}{x^2}=\frac{1}{2}$$
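
As a quick numerical sanity check of that value (not part of the proof), one can evaluate the quotient for small positive $x$; a throwaway Python sketch:

```python
import math

# (e^x - sin x - 1) / x^2 should approach 1/2 as x -> 0+.
for x in (0.1, 0.01, 0.001):
    print(x, (math.exp(x) - math.sin(x) - 1) / x**2)
```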


sequences and series - If sides $a$, $b$, $c$ of $\triangle ABC$ are in arithmetic progression, then $3\tan\frac{A}{2}\tan\frac{C}{2}=1$




If sides $a$, $b$, $c$ of $\triangle ABC$ (with $a$ opposite $A$, etc) are in arithmetic progression, then prove that
$$3\tan\frac{A}{2}\tan\frac{C}{2}=1$$




My attempt:




$a$, $b$, $c$ are in arithmetic progression, so
$$\begin{align}
2b&=a+c \\[4pt]
2\sin B &= \sin A+ \sin C \\[4pt]
2\sin(A+C) &=2\sin\frac {A+C}{2}\;\cos\frac{A-C}{2} \\[4pt]
2\sin\frac{A+C}{2}\;\cos\frac{A+C}{2}&=\sin\frac{A+C}{2}\;\cos\frac{A-C}{2} \\[4pt]
2\cos\frac{A+C}{2}&=\cos\frac{A-C}{2}
\end{align}$$


Answer




Expand your last line: $$2\left(\cos\frac A2\cos\frac C2 - \sin\frac A2\sin\frac C2\right)=\left(\cos\frac A2\cos\frac C2 +\sin\frac A2\sin\frac C2\right)$$
and your result is immediate after a cancellation.
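
Spelling that out: collecting terms gives

$$\cos\frac A2\cos\frac C2 = 3\sin\frac A2\sin\frac C2,$$

and dividing both sides by $\cos\frac A2\cos\frac C2$ yields $1 = 3\tan\frac A2\tan\frac C2$, as required.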


Thursday 26 September 2019

elementary number theory - if $p\mid a$ and $p\mid b$ then $p\mid \gcd(a,b)$


I would like to prove the following property :




$$\forall (p,a,b)\in\mathbb{Z}^{3} \quad p\mid a \mbox{ and } p\mid b \implies p\mid \gcd(a,b)$$




Knowing that :



Definition




Given two natural numbers $a$ and $b$, not both zero, their greatest common divisor is the largest divisor of $a$ and $b$.






  • If $\operatorname{Div}(a)$ denotes the set of divisors of $a$, the greatest common divisor of $a$ and $b$ is $\gcd(a,b)=\max(\operatorname{Div}(a)\cap\operatorname{Div}(b))$

  • $$d=\operatorname{gcd}(a,b)\iff \begin{cases}d\in \operatorname{Div}(a)\cap\operatorname{Div}(b) & \\ & \\ \forall x \in \operatorname{Div}(a)\cap\operatorname{Div}(b): x\leq d \end{cases}$$

  • $$\forall (a,b) \in \mathbb{N}^{2}\quad a\mid b \iff Div(a) \subset Div(b)$$

  • $$\forall x\in \mathbb{Z}\quad \operatorname{Div}(x)=\operatorname{Div}(-x) $$

  • If $a,b\in\mathbb{Z}$, then $\gcd(a,b)=\gcd(|a|,|b|)$, adding $\gcd(0,0)=0$



Indeed,




Let $(p,a,b)\in\mathbb{Z}^{3} $ such that $p\mid a$ and $p\mid b$ then :



$p\mid a \iff \operatorname{Div}(p)\subset \operatorname{Div}(a)$ and $p\mid b \iff \operatorname{Div}(p)\subset \operatorname{Div}(b)$ then



$\operatorname{Div}(p)\subset \left( \operatorname{Div}(a)\cap \operatorname{Div}(b)\right) \iff p\mid \gcd(a,b)$



Am I right?

Wednesday 25 September 2019

sequences and series - Find the value of: $\lim\limits_{n\rightarrow\infty}\left({2\sqrt n}-\sum\limits_{k=1}^n\frac{1}{\sqrt k}\right)$



How to find $\displaystyle\lim_{n\rightarrow\infty}\left({2\sqrt n}-\sum_{k=1}^n\frac{1}{\sqrt k}\right)$ ?



And in general, when does the limit of the integral of $f(x)$ minus the sum of $f(x)$ exist?
How does one prove that and find the limit?


Answer



Use $\sqrt{n} = \sum_{k=1}^n \left( \sqrt{k} - \sqrt{k-1} \right)$, then
$$
\begin{eqnarray}
2 \sqrt{n} - \sum_{k=1}^n \frac{1}{\sqrt{k}} &=& \sum_{k=1}^n \left( 2 \sqrt{k} - 2 \sqrt{k-1} - \frac{1}{\sqrt{k}} \right) = \sum_{k=1}^n \frac{1}{\sqrt{k}} \left( \sqrt{k}-\sqrt{k-1} \right)^2\\
&=& \sum_{k=1}^n \frac{1}{\sqrt{k}} \left( \frac{(\sqrt{k}-\sqrt{k-1})(\sqrt{k}+\sqrt{k-1})}{(\sqrt{k}+\sqrt{k-1})} \right)^2 \\
&=& \sum_{k=1}^n \frac{1}{\sqrt{k} \left(\sqrt{k}+\sqrt{k-1}\right)^2}
\end{eqnarray}
$$



This shows the limit does exist and $\lim_{n \to \infty} \left( 2 \sqrt{n} - \sum_{k=1}^n \frac{1}{\sqrt{k}} \right) = \sum_{k=1}^\infty \frac{1}{\sqrt{k} \left(\sqrt{k}+\sqrt{k-1}\right)^2}$.



The value of this sum equals $-\zeta\left(\frac{1}{2} \right) \approx 1.4603545$. This value is found by other means, though:

$$
2 \sqrt{n} - \sum_{k=1}^n \frac{1}{\sqrt{k}} = 2 \sqrt{n} - \left( \zeta\left(\frac{1}{2}\right) - \zeta\left(\frac{1}{2}, n+1\right)\right) \sim -\zeta\left(\frac{1}{2}\right) - \frac{1}{2\sqrt{n}} + o\left( \frac{1}{n} \right)
$$
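
As a numerical sanity check (a throwaway Python sketch using only the statements above), the partial expressions do creep up towards $-\zeta\left(\frac{1}{2}\right)\approx 1.4603545$:

```python
import math

# 2*sqrt(N) - sum_{k=1}^{N} 1/sqrt(k) should approach -zeta(1/2) ~ 1.4603545.
for N in (10, 1_000, 100_000):
    s = sum(1.0 / math.sqrt(k) for k in range(1, N + 1))
    print(N, 2.0 * math.sqrt(N) - s)
```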


calculus - How to determine with certainty that a function has no elementary antiderivative?




Given an expression such as $f(x) = x^x$, is it possible to provide a thorough and rigorous proof that there is no function $F(x)$ (expressible in terms of known algebraic and transcendental functions) such that $ \frac{d}{dx}F(x) = f(x)$? In other words, how can you rigorously prove that $f(x)$ does not have an elementary antiderivative?


Answer



To get some background and references you may start with this SE thread.



Concerning your specific question about $z^z$, here is an extract from a sci.math answer by Matthew P Wiener:



"Finally, we consider the case $I(z^z)$.



So this time, let $F=C(z,l)(t)$, the field of rational functions in $z,l,t$, where $l=\log z\,$ and $t=\exp(z\,l)=z^z$. Note that $z,l,t$ are algebraically independent. (Choose some appropriate domain of definition.) Then $t'=(1+l)t$, so for $a=t$ in the above situation, the partial fraction
analysis (of the sort done in the previous posts) shows that the only possibility is for $v=wt+\cdots$ to be the source of the $t$ term on the left,
with $w$ in $C(z,l)$.



So this means, equating $t$ coefficients, $1=w'+(l+1)w$. This is a first order ODE, whose solution is $w=\frac{I(z^z)}{z^z}$. So we must prove that no
such $w$ exists in $C(z,l)$.
So suppose (as in one of Ray Steiner's posts) $w=\frac PQ$, with $P,Q$ in $C[z,l]$ and having no common factors. Then $z^z=
\left(z^z\cdot \frac PQ\right)'=z^z\cdot\frac{[(1+l)PQ+P'Q-PQ']}{Q^2}$, or $Q^2=(1+l)PQ+P'Q-PQ'$.



So $Q|Q'$, meaning $Q$ is a constant, which we may assume to be one. So we have it down to $P'+P+lP=1$.




Let $P=\sum_{i=0}^n [P_i l^i]$, with $P_i, i=0\cdots n \in C[z]$. But then in our equation, there's a dangling $P_n l^{n+1}$ term, a contradiction."



$$-$$



For future references here is a complete re-transcript of Matthew P Wiener's $1997$ sci.math article (converted in $\LaTeX$ by myself : feel free to fix it!).
A neat translation in french by Denis Feldmann is available at his homepage.






What's the antiderivative of $\ e^{-x^2}\ $? of $\ \frac{\sin(x)}x\ $? of $\ x^x\ $?







These, and some similar problems, can't be done.



More precisely, consider the notion of "elementary function". These are the functions that can be expressed in terms of exponentals and logarithms, via the usual algebraic processes, including the solving (with or without radicals) of polynomials. Since the trigonometric functions and their inverses can be expressed in terms of exponentials and logarithms using the complex numbers $\mathbb{C}$, these too are elementary.



The elementary functions are, so to speak, the "precalculus functions".



Then there is a theorem that says certain elementary functions do not have an elementary antiderivative. They still have antiderivatives, but "they can't be done". The more common ones get their own names.
Up to some scaling factors, "$\mathrm{erf}$" is the antiderivative of $e^{-x^2}$ and "$\mathrm{Si}$" is the antiderivative of $\frac{\sin(x)}x$, and so on.
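
As an aside, computer algebra systems reflect this: asked for these antiderivatives, they return the named special functions rather than elementary formulas. A small sketch with SymPy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(-x**2), x))  # sqrt(pi)*erf(x)/2
print(sp.integrate(sp.sin(x) / x, x))  # Si(x)
print(sp.integrate(x**x, x))           # left unevaluated: Integral(x**x, x)
```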






For those with a little bit of undergraduate algebra, we sketch a proof
of these, and a few others, using the notion of a differential field.
These are fields $(F,+,\cdot,1,0)$ equipped with a derivation, that is, a
unary operator $'$ satisfying $(a+b)'=a'+b'$ and $(a.b)'=a.b'+a'.b$. Given
a differential field $F$, there is a subfield $\mathrm{Con}(F)=\{a:a'=0\}$, called the constants of $F$. We let $I(f)$ denote an antiderivative. We ignore $+C$s.




Most examples in practice are subfields of $M$, the meromorphic functions on $\mathbb{C}$ (or some domain). Because of uniqueness of analytic extensions, one rarely has to specify the precise domain.



Given differential fields $F$ and $G$, with $F$ a subfield of $G$, one calls $G$ an algebraic extension of $F$ if $G$ is a finite field extension of $F$.



One calls $G$ a logarithmic extension of $F$ if $G=F(t)$ for some transcendental
$t$ that satisfies $t'=\dfrac{s'}s$, some $s$ in $F$. We may think of $t$ as $\;\log s$, but note that we are not actually talking about a logarithm function on $F$.
We simply have a new element with the right derivative. Other "logarithms" would have to be adjoined as needed.



Similarly, one calls $G$ an exponential extension of $F$ if $G=F(t)$ for some transcendental $t$ that satisfies $t'=t.s'$, some $s$ in $F$. Again, we may think of $t$ as $\;\exp s$, but there is no actual exponential function on $F$.




Finally, we call $G$ an elementary differential extension of $F$ if there is a finite chain of subfields from $F$ to $G$, each an algebraic, logarithmic, or exponential extension of the next smaller field.



The following theorem, in the special case of $M$, is due to Liouville.
The algebraic generality is due to Rosenlicht. More powerful theorems have been proven by Risch, Davenport, and others, and are at the heart of symbolic integration packages.



A short proof, accessible to those with a solid background in undergraduate algebra, can be found in Rosenlicht's AMM paper (see references). It is probably easier to master its applications first, which often use similar techniques, and then learn the proof.







MAIN THEOREM. Let $F,G$ be differential fields, let $a$ be in $F$, let $y$ be in $G$, and suppose $y'=a$ and $G$ is an elementary differential extension field of $F$, and $\mathrm{Con}(F)=\mathrm{Con}(G)$. Then there exist $c_1,...,c_n \in \mathrm{Con}(F), u_1,\cdots,u_n, v\in F$ such that



$$a = c_1\frac{u_1'}{u_1}+ ... + c_n\frac{u_n'}{u_n}+ v'$$



That is, the only functions that have elementary antiderivatives are the ones that have this very specific form. In words, elementary integrals always consist of a function at the same algebraic "complexity" level as the starting function (the $v$), along with the logarithms of functions at the same algebraic "complexity" level (the $u_i$'s).






This is a very useful theorem for proving non-integrability. Because this topic is of interest, but it is only written up in bits and pieces, I give numerous examples. (Since the original version of this FAQ from way back when, two how-to-work-it write-ups have appeared. See Fitt & Hoare and Marchisotto & Zakeri in the references.)




In the usual case, $F,G$ are subfields of $M$, so $\mathrm{Con}(F)=\mathrm{Con}(G)$ always holds, both being $\mathbb{C}$. As a side comment, we remark that this equality is necessary.
Over $\mathbb{R}(x)$, $\frac 1{1+x^2}$ has an elementary antiderivative, but none of the above form.



We first apply this theorem to the case of integrating $f\cdot\exp(g)$, with $f$ and $g$ rational functions. If $g=0$, this is just $f$, which can be integrated
via partial fractions. So assume $g$ is nonzero. Let $t=\exp(g)$, so $t'=g't$.
Since $g$ is not zero, it has a pole somewhere (perhaps out at infinity), so $\exp(g)$ has an essential singularity, and thus $t$ is transcendental over $C(z)$. Let $F=C(z)(t)$, and let $G$ be an elementary differential extension containing an antiderivative for $f\cdot t$.



Then Liouville's theorem applies, so we can write



$$f\cdot t = c_1\frac{u_1'}{u_1} + \cdots + c_n \frac{u_n'}{u_n} + v'$$




with the $c_i$ constants and the $u_i$ and $v$ in $F$. Each $u_i$ is a ratio of two $C(z)[t]$ polynomials, $\dfrac UV$ say. But $\dfrac {(U/V)'}{U/V}=\dfrac {U'}U-\dfrac{V'}V$ (quotient rule), so we may rewrite the above and assume each $u_i$ is in $C(z)[t]$.
And if any $u_i=U\cdot V$ factors, then $\dfrac{(U\cdot V)'}{(U\cdot V)}=\dfrac {U'}U+\dfrac {V'}V$ and so we can further assume each $u_i$ is irreducible over $C(z)$.



What does a typical $\frac {u'}u$ look like? For example, consider the case of $u$ quadratic in $t$. If $A,B,C$ are rational functions in $C(z)$, then $A',B',C'$ are also rational functions in $C(z)$ and



\begin{align}
\frac {(A.t^2+B.t+C)'}{A.t^2+B.t+C} &= \frac{A'.t^2 + 2At(gt) + B'.t + B.(gt) + C'}{A.t^2 + B.t + C}\\
&= \frac{(A'+2Ag).t^2 + (B'+Bg).t + C'}{A.t^2 + B.t + C}\\
\end{align}




(Note that contrary to the usual situation, the degree of a polynomial in $t$ stays the same after differentiation. That is because we are taking derivatives with respect to $z$, not $t$. If we write this out explicitly, we get $(t^n)' = \exp(ng)' = ng'\cdot \exp(ng) = ng'\cdot t^n$.)



In general, each $\frac {u'}u$ is a ratio of polynomials of the same degree. We can, by doing one step of a long division, also write it as $D+\frac Ru$, for some $D \in C(z)$ and $R \in C(z)[t]$, with $\deg(R)<\deg(u)$.



By taking partial fractions, we can write $v$ as a sum of a $C(z)[t]$ polynomial
and some fractions $\frac P{Q^n}$ with $\deg(P)<\deg(Q)$, $Q$ irreducible, with each $P,Q \in C(z)[t]$. $v'$ will thus be a polynomial plus partial fraction like terms.



Somehow, this is supposed to come out to just $f\cdot t$. By the uniqueness of partial fraction decompositions, all terms other than multiples of $t$ add up to $0$. Only the polynomial part of $v$ can contribute to $f\cdot t$, and this must be a monomial over $C(z)$. So $f\cdot t=(h\cdot t)'$, for some rational $h$. (The temptation to assert $v=h\cdot t$ here is incorrect, as there could be some $C(z)$ term, cancelled by $\frac {u'}u$ terms. We only need to identify the terms in $v$ that contribute to $f\cdot t$, so this does not matter.)




Summarizing, if $f\cdot \exp(g)$ has an elementary antiderivative, with $f$ and $g$ rational functions, $g$ nonzero, then it is of the form $h\cdot \exp(g)$, with $h$ rational.



We work out particular examples, of this and related applications. A bracketed function can be reduced to the specified example by a change of variables.



$\quad\boxed{\displaystyle\exp\bigl(z^2\bigr)}$ $\quad\left[\sqrt{z}\cdot\exp(z),\frac{\exp(z)}{\sqrt{z}}\right]$



Let $h\cdot \exp\bigl(z^2\bigr)$ be its antiderivative. Then $h'+2zh=1$.
Solving this ODE gives $h=\exp(-z^2)\cdot I\left(\exp\bigl(z^2\bigr)\right)$, which has no pole (except perhaps at infinity), so $h$, if rational, must be a polynomial. But the derivative of $h$ cannot cancel the leading power of $2zh$, contradiction.



$\quad\boxed{\displaystyle\frac{\exp(z)}z}$ $\quad\left[\exp(\exp(z)),\frac 1{\log(z)}\right]$




Let $h\cdot \exp(z)$ be an antiderivative. Then $h'+h=\frac 1z$. I know of two quick ways to prove that $h$ is not rational.



One can explicitly solve the first order ODE (getting
$\exp(-z)\cdot I\left(\frac{\exp(z)}z\right))$, and then notice that the solution has a logarithmic singularity at zero.
For example, $h(z)\to\infty$ but $\sqrt{z}\cdot h(z)\to 0$ as $z\to 0$. No rational function does this.



Or one can assume $h$ has a partial fraction decomposition. Obviously no
$h'$ term will give $\frac 1z$, so $\frac 1z$ must be present in $h$ already. But $\left(\frac 1z\right)'=-\frac 1{z^2}$,
and this is part of $h'$. So there is a $\frac 1{z^2}$ in $h$ to cancel this. But $\left(\frac 1{z^2}\right)'$ is $-\frac 2{z^3}$, and this is again part of $h'$. And again, something in $h$ cancels this, etc etc etc. This infinite regression is impossible.




$\quad\boxed{\displaystyle\frac {\sin(z)}z}$ $\quad[\sin(\exp(z))]$



$\quad\boxed{\displaystyle\sin\bigl(z^2\bigr)}$ $\quad\left[\sqrt{z}\sin(z),\frac{\sin(z)}{\sqrt{z}}\right]$



Since $\sin(z)=\frac 1{2i}[\exp(iz)-\exp(-iz)]$, we merely rework the above $f\cdot \exp(g)$ result. Let $f$ be rational, let $t=\exp(iz)$ (so $\frac {t'}t=i$) and let $T=\exp(iz^2)$ (so $\frac{T'}T=2iz$) and we want an antiderivative of either $\frac 1{2i}f\cdot\left(t-\frac 1t\right)$ or $\frac 1{2i}f\cdot(T-\frac 1T)$. For the former, the same partial fraction results still apply in identifying $\frac 1{2i}f\cdot t=(h\cdot t)'=(h'+ih)\cdot t$, which can't happen for $f=\frac 1z$, as above. In the case of $f\cdot\sin\bigl(z^2\bigr)$, we want $\frac 1{2i}f\cdot T=(h\cdot T)'=(h'+2izh)\cdot T$, and again, this can't happen for $f=1$, as above.



Although done, we push this analysis further in the $f\cdot \sin(z)$ case, as there are extra terms hanging around. This time around, the conclusion gives an additional $\frac kt$ term inside $v$, so we have $-\frac 1{2i}\frac ft=\left(\frac kt\right)'=\frac{k'-ik}t$.
So the antiderivative of $\frac 1{2i}f\cdot\left(t-\frac 1t\right)$ is $h\cdot t+\frac kt$.



If $f$ is even and real, then $h$ and $k$ (like $t=\exp(iz)$ and $\frac 1t=\exp(-iz)$) are parity flips of each other, so (as expected) the antiderivative is even.
Letting $C=\cos(z), S=\sin(z), h=H+iF$ and $k=K+iG$, the real (and only) part of the antiderivative of $f$ is $(HC-FS)+(KC+GS)=(H+K)C+(G-F)S$.
So over the reals, we find that the antiderivative of (rational even).$\sin(x)$ is of the form (rational even).$\cos(x)+$ (rational odd).$\sin(x)$.




A similar result holds for (odd)$\cdot\sin(x)$, (even)$\cdot\cos(x)$, (odd)$\cdot\cos(x)$.
And since a rational function is the sum of its (rational) even and odd parts, (rational)$\cdot\sin$ integrates to (rational)$\cdot\sin$ + (rational)$\cdot\cos$, or not at all.



Let's backtrack, and apply this to $\dfrac {\sin(x)}x$ directly, using reals only.
If it has an elementary antiderivative, it must be of the form $E\cdot S+O\cdot C$.
Taking derivatives gives $(E'-O)\cdot S+(E+O')\cdot C$. As with partial fractions, we have a unique $R(x)[S,C]$ representation here (this is a bit tricky, as $S^2=1-C^2$: this step can be proven directly or via solving for $t, \frac 1t$ coefficients over $C$). So $E'-O=\frac 1x$ and $E+O'=0$, or $O''+O=-\frac 1x$.
Expressing $O$ in partial fraction form, it is clear only $(-\frac 1x)$ in $O$ can contribute a $-\frac 1x$. So there is a $-\frac 2{x^3}$ term in $O''$, so there is a $\frac 2{x^3}$ term in $O$ to cancel it, and so on, an infinite regress. Hence, there is no such rational $O$.



$\quad\boxed{\displaystyle\frac{\arcsin(z)}z}$ $\quad[z.\tan(z)]$




We consider the case where $F=C(z,Z)(t)$ as a subfield of the meromorphic functions on some domain, where $z$ is the identity function, $Z=\sqrt{1-z^2}$, and $t=\arcsin z$. Then $Z'=-\frac zZ$, and $t'=\frac 1Z$. We ask in the main theorem result if this can happen with $a=\frac tz$ and some field $G$. $t$ is transcendental over $C(z,Z)$, since it has infinitely many branch points.



So we consider the more general situation of $f(z)\cdot \arcsin(z)$ where $f(z)$ is rational in $z$ and $\sqrt{1-z^2}$. By letting $z=\frac {2w}{1+w^2}$, note that members of $C(z,Z)$ are always elementarily integrable.



Because $x^2+y^2-1$ is irreducible, $\frac{C[x,y]}{x^2+y^2-1}$ is an integral domain, $C(z,Z)$ is isomorphic to its field of quotients in the obvious manner, and $C(z,Z)[t]$ is a UFD whose field of quotients is amenable to partial fraction analysis in the variable $t$. What follows takes place at times in various $z$-algebraic extensions of $C(z,Z)$ (which may not have unique factorization), but the terms must combine to give something in $C(z,Z)(t)$, where partial fraction decompositions are unique, and hence the $t$ term will be as claimed.



Thus, if we can integrate $f(z)\cdot\arcsin(z)$, we have $f\cdot t$ = $\sum$ of $\frac {u'}u s$ and $v'$, by the main theorem.



The $u$ terms can, by logarithmic differentiation in the appropriate algebraic extension field (recall that roots are analytic functions of the coefficients, and $t$ is transcendental over $C(z,Z)$), be assumed to all be linear $t+r$, with $r$ algebraic over $z$. Then $\frac {u'}u=\frac {1/Z+r'}{t+r}$.
When we combine such terms back in $C(z,Z)$, they don't form a $t$ term (nor any higher power of $t$, nor a constant).



Partial fraction decomposition of $v$ gives us a polynomial in $t$, with coefficients in $C(z,Z)$, plus multiples of powers of linear $t$ terms.
The latter don't contribute to a $t$ term, as above.



If the polynomial is linear or quadratic, say $v=g\cdot t^2 + h\cdot t + k$, then $v'=g'\cdot t^2 + \left(\frac{2g}Z+h'\right)\cdot t + \left(\frac hZ+k'\right)$. Nothing can cancel the $g'$, so $g$ is just a constant $c$. Then $\frac {2c}Z+h'=f$ or $I(f\cdot t)=2c\cdot t+I(h'\cdot t)$. The $I(h'.t)$ can be integrated by parts. So the antiderivative works out to $c\cdot(\arcsin(z))^2 + h(z)\cdot \arcsin(z) - I\left(\frac{h(z)}{\sqrt{1-z^2}}\right)$, and as observed above, the latter is elementary.



If the polynomial is cubic or higher, let $v=A.t^n+B.t^{n-1}+\cdots$, then $v'=A'.t^n + \left(n\cdot\frac AZ+B'\right).t^{n-1} +\cdots$ $A$ must be a constant $c$. But then $\frac{nc}Z+B'=0$, so $B=-nct$, contradicting $B$ being in $C(z,Z)$.




In particular, since $\frac 1z + \frac c{\sqrt{1-z^2}}$ does not have a rational in "$z$ and/or $\sqrt{1-z^2}$" antiderivative, $\frac {\arcsin(z)}z$ does not have an elementary integral.



$\quad\boxed{\displaystyle z^z}$



In this case, let $F=C(z,l)(t)$, the field of rational functions in $z,l,t$, where $l=\log z$ and $t=\exp(z\,l)=z^z$. Note that $z,l,t$ are algebraically independent. (Choose some appropriate domain of definition.) Then $t'=(1+l)t$, so for $a=t$ in the above situation, the partial fraction analysis (of the sort done in the previous posts) shows that the only possibility is for $v=wt+\cdots$ to be the source of the $t$ term on the left, with $w$ in $C(z,l)$.



So this means, equating $t$ coefficients, $1=w'+(l+1)w$. This is a first order ODE, whose solution is $w=\frac{I(z^z)}{z^z}$. So we must prove that no such $w$ exists in $C(z,l)$. So suppose (as in one of Ray Steiner's posts) $w=P/Q$, with $P,Q$ in $C[z,l]$ and having no common factors. Then $z^z= \left(z^z\cdot \frac PQ\right)'=z^z\cdot\frac{[(1+l)PQ+P'Q-PQ']}{Q^2}$, or $Q^2=(1+l)PQ+P'Q-PQ'$.



So $Q|Q'$, meaning $Q$ is a constant, which we may assume to be one. So we have it down to $P'+P+lP=1$.




Let $P=\sum_{i=0}^n [P_i l^i]$, with $P_i, i=0\cdots n \in C[z]$. But then in our equation, there's a dangling $P_n l^{n+1}$ term, a contradiction.






On a slight tangent, this theorem of Liouville will not tell you that Bessel functions are not elementary, since they are defined by second order ODEs. This can be proven using differential Galois theory. A variant of the above theorem of Liouville, with a different normal form, does show however that $J_0$ cannot be integrated in terms of elementary methods augmented with Bessel functions.






What follows is a fairly complete sketch of the proof of the Main Theorem.
First, I just state some easy (if you've had Galois Theory 101) lemmas.




Throughout the lemmas $F$ is a differential field, and $t$ is transcendental over $F$.




  • Lemma $1$: If $K$ is an algebraic extension field of $F$, then there exists a unique way to extend the derivation map from $F$ to $K$ so as to make $K$ into
    a differential field.

  • Lemma $2$: If $K=F(t)$ is a differential field with derivation extending $F$'s, and $t'$ is in $F$, then for any polynomial $f(t)$ in $F[t]$, $f(t)'$ is a
    polynomial in $F[t]$ of the same degree (if the leading coefficient is not in $\mathrm{Con}(F)$) or of degree one less (if the leading coefficient is in $\mathrm{Con}(F)$).

  • Lemma $3$: If $K=F(t)$ is a differential field with derivation extending $F$'s, and $\frac{t'}t$ is in $F$, then for any $a$ in $F$, $n$ a positive integer, there exists $h$ in $F$ such that $(a\cdot t^n)'=h\cdot t^n$. More generally, if $f(t)$ is any polynomial in $F[t]$, then $f(t)'$ is of the same degree as $f(t)$, and is a multiple of $f(t)$ iff $f(t)$ is a monomial.




These are all fairly elementary. For example, $(a\cdot t^n)'=\bigl(a'+a\frac {t'}t\bigr)\cdot t^n$ in lemma $3$. The final 'iff' in lemma $3$ is where transcendence of $t$ comes in. Lemma $1$ in the usual case of subfields of $M$ is an easy consequence of the implicit function theorem.






MAIN THEOREM. Let $F,G$ be differential fields, let $a$ be in $F$, let $y$ be in $G$, and suppose $y'=a$ and $G$ is an elementary differential extension field of $F$, and $\mathrm{Con}(F)=\mathrm{Con}(G)$. Then there exist $c_1,...,c_n \in \mathrm{Con}(F), u_1,\cdots,u_n, v\in F$ such that



$$(*)\quad a = c_1\frac{u_1'}{u_1}+ ... + c_n\frac{u_n'}{u_n}+ v'$$



In other words, the only functions that have elementary antiderivatives are the ones that have this very specific form.







Proof:



By assumption there exists a finite chain of fields connecting $F$ to $G$ such that the extension from one field to the next is given by performing an algebraic, logarithmic, or exponential extension. We show that if the form $(*)$ can be satisfied with values in $F2$, and $F2$ is one of the three kinds of allowable extensions of $F1$, then the form $(*)$ can be satisfied in $F1$. The form $(*)$ is obviously satisfied in $G$: let all the $c$'s be $0$, the $u$'s be $1$, and let $v$ be the original $y$ for which $y'=a$. Thus, if the form $(*)$ can be pulled down one field, we will be able to pull it down to $F$, and the theorem holds.



So we may assume without loss of generality that $G=F(t)$.





  • Case $1$ : $t$ is algebraic over $F$. Say $t$ is of degree $k$. Then there are polynomials $U_i$ and $V$ such that $U_i(t)=u_i$ and $V(t)=v$. So we have $$a = c_1 \frac{U_1(t)'}{U_1(t)} +\cdots + c_n \frac{ U_n(t)'}{U_n(t)} + V(t)'$$ Now, by the uniqueness of extensions of derivatives in the algebraic case, we may replace $t$ by any of its conjugates $t_1,\cdots, t_k,$ and the same equation holds. In other words, because $a$ is in $F$, it is fixed under the Galois automorphisms. Summing up over the conjugates, and converting the $\frac {U'}U$ terms into products using logarithmic differentiation, we have $$k a = c_1 \frac{[U_1(t_1)\times\cdots\times U_1(t_k)]'}{U_1(t_1)\times \cdots \times U_n(t_k)}+ \cdots + [V(t_1)+\cdots +V(t_k)]'$$ But the expressions in $[\cdots]$ are symmetric polynomials in $t_i$, and as they are polynomials with coefficients in $F$, the resulting expressions are in $F$. So dividing by $k$ gives us $(*)$ holding in $F$.


  • Case $2$ : $t$ is logarithmic over $F$. Because of logarithmic differentiation we may assume that the $u$'s are monic and irreducible in $t$ and distinct.
    Furthermore, we may assume $v$ has been decomposed into partial fractions.
    The fractions can only be of the form $\dfrac f{g^j}$, where $\deg(f)<\deg(g)$ and $g$ is monic irreducible. The fact that no terms outside of $F$ appear on the left hand side of $(*)$, namely just $a$ appears, means a lot of cancellation must be occurring.



    Let $t'=\dfrac{s'}s$, for some $s$ in $F$. If $f(t)$ is monic in $F[t]$, then $f(t)'$ is also in $F[t]$, of one less degree. Thus $f(t)$ does not divide $f(t)'$. In particular, all the $\dfrac{u'}u$ terms are in lowest terms already. In the $\dfrac f{g^j}$ terms in $v$, we have $a g^{j+1}$ denominator contribution in $v'$ of the form $-jf\dfrac{g'}{g^{j+1}}$.
    But $g$ doesn't divide $fg'$, so no cancellation occurs. But no $\dfrac{u'}u$ term can cancel, as the $u$'s are irreducible, and no $\dfrac{(**)}{g^{j+1}}$ term appears in $a$, because $a$ is a member of $F$. Thus no $\dfrac f{g^j}$ term occurs at all in $v$.
    But then none of the $u$'s can be outside of $F$, since nothing can cancel them. (Remember the $u$'s are distinct, monic, and irreducible.) Thus each of the $u$'s is in $F$ already, and $v$ is a polynomial. But $v' = a -$ expression in $u$'s, so $v'$ is in $F$ also. Thus $v = b t + c$ for some $b$ in $\mathrm{con}(F)$, $c$ in $F$, by lemma 2. Then $$a= c_1 \frac{u_1'}{u_1} +\cdots + c_n\frac{u_n'}{u_n} + b \frac{s'}s + c'$$ is the desired form. So case 2 holds.


  • Case $3$ : $t$ is exponential over $F$. So let $\dfrac {t'}t=s'$ for some $s$ in $F$. As in case 2 above, we may assume all the $u$'s are monic, irreducible, and distinct and put $v$ in partial fraction decomposition form. Indeed the argument is identical as in case 2 until we try to conclude what form $v$ is. Here lemma 3 tells us that $v$ is a finite sum of terms $b\cdot t^j$ where each coefficient is in $F$. Each of the $u$'s is also in $F$, with the possible exception that one of them may be $t$. Thus every $\dfrac {u'}u$ term is in $F$, so again we conclude $v'$ is in $F$. By lemma 3, $v$ is in $F$. So if every $u$ is in $F$, $a$ is in the desired form. Otherwise, one of the $u$'s, say $u_n$, is actually $t$, then $$a = c_1\frac{u_1'}{u_1} + \cdots + (c_n s + v)'$$ is the desired form. So case 3 holds.








References:



A D Fitt & G T Q Hoare "The closed-form integration of arbitrary functions", Mathematical Gazette (1993), pp 227-236.
I Kaplansky An introduction to differential algebra (Hermann, 1957)
E R Kolchin Differential algebra and algebraic groups (Academic Press, 1973)
A R Magid "Lectures on differential Galois theory" (AMS, 1994)
E Marchisotto & G Zakeri "An invitation to integration in finite terms", COLLEGE MATHEMATICS JOURNAL (1994), pp 295-308.
J F Ritt Integration in finite terms (Columbia, 1948).
J F Ritt Differential algebra (AMS, 1950).
M Rosenlicht "Liouville's theorem on functions with elementary integrals", PACIFIC JOURNAL OF MATHEMATICS (1968), pp 153-161.
M Rosenlicht "Integration in finite terms", AMERICAN MATHEMATICS MONTHLY, (1972), pp 963-972.
G N Watson A treatise on the theory of Bessel functions (Cambridge, 1962).



-Matthew P Wiener


Tuesday 24 September 2019

Prime number equation

The number of solutions of the equation $xy(x+y)=2010$ where $x$ and $y$ denote positive prime numbers, is ____




I tried various things but nothing seems to work out. $2010$ can be factored as $67\times30$, but $30$ is not a prime number, and these factors do not fit the form $xy(x+y)$. Please help me.
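
Since $xy(x+y)=2010$ forces both primes to be well below $2010$, the equation can at least be checked by brute force; a throwaway Python sketch (the counting argument the problem wants is a separate matter):

```python
def is_prime(n):
    """Trial-division primality test, fine for n < 2010."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# x*y*(x+y) = 2010 forces x, y < 2010, so a direct search over prime pairs works.
primes = [p for p in range(2, 2010) if is_prime(p)]
solutions = [(x, y) for x in primes for y in primes if x * y * (x + y) == 2010]
print(len(solutions), solutions)
```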

complex analysis - Evaluating $\zeta\left(\frac{1}{2}\right)$ as an integral $\zeta\left(\frac{1}{2}\right) = \frac{1}{2} \int_0^\infty \frac{[x]-x}{x^{3/2}} \, dx$




I am reading the second chapter of Titchmarsh's book on the Riemann Zeta Function. I would have written:



$$ \zeta\left(\frac{1}{2}\right) = 1 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + \dots = \infty $$



If you think about it for a moment, the terms don't decay nearly fast enough, and so the series diverges. Then I had to look up the actual definition of $\zeta(s)$ in the region $s = \sigma + it$ with $0 < \sigma < 1$. We have:



$$ \zeta(s) = \left\{ \begin{array}{cl}
\sum \frac{1}{n^s} & \mathrm{Re}(s) > 1 \\ \\
s \int_0^\infty \frac{[x]-x}{x^{s+1}} dx & 1 > \text{Re}(s) > 0
\end{array} \right. $$



Then if I evaluate at $s = \frac{1}{2} = \frac{1}{2} + 0i$ we use the second formula:



$$ \zeta\left(\frac{1}{2}\right) = \frac{1}{2} \int_0^\infty \frac{[x]-x}{x^{3/2}} \, dx = \sum_{n=0}^\infty \left[ 2 \sqrt{n} - 2 \sqrt{n+1} - \frac{1}{\sqrt{n+1}} \right] \stackrel{?}{<} 0$$



Is this thing negative? Is $\zeta(\frac{1}{2}) < 0$? The book offers several "analytic continuations" and I'm only looking at this first one, to make sure I understand.



Could someone help me evaluate the integral? I didn't use any fancy changes of variables. The main step is:




$$ \int_0^\infty f(x) \,dx = \sum \int_{n}^{n+1} f(x) \,dx = \int_0^1 f(x) \,dx + \int_1^2 f(x) \,dx + \dots $$


Answer



$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$




You can take advantage of the
following identity:





\begin{align}
\zeta\pars{1 \over 2} & =
\sum_{k = 1}^{N}{1 \over \root{k}} - 2\root{N} - {1 \over 2}\int_{N}^{\infty}{x - \left\lfloor x\right\rfloor \over x^{3/2}}\,\dd x\,,\qquad N = 1,2,3,\ldots
\end{align}




Note that
$\ds{0 < \verts{{1 \over 2}\int_{N}^{\infty}{x - \left\lfloor x\right\rfloor \over x^{3/2}}\,\dd x} < {1 \over \root{N}}}$ such that you don't need to evaluate the integral. Namely,





$$
\bbx{\zeta\pars{1 \over 2} =
\lim_{N \to \infty}\pars{\sum_{k = 1}^{N}{1 \over \root{k}} - 2\root{N}}}
$$


Computing $gcd$ of very large numbers with powers

How to calculate $\gcd$ of $5^{2^{303} - 1} - 1$ and $5^{2^{309} - 1} - 1$?




I stumbled upon this interesting problem and tried elementary algebraic simplification and manipulation. But found no success.
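
Not a full answer, but the standard handle on such problems is the identity $\gcd(a^m - 1,\, a^n - 1) = a^{\gcd(m,n)} - 1$ (for an integer $a \ge 2$). Applying it with $a = 5$ reduces the problem to $\gcd(2^{303}-1,\, 2^{309}-1)$, and applying it again with $a = 2$ reduces that to $2^{\gcd(303,\,309)}-1$. A small Python sketch of this reduction, with a brute-force sanity check of the identity on small inputs:

```python
from math import gcd

# Sanity check: gcd(a^m - 1, a^n - 1) == a^gcd(m, n) - 1 on small inputs.
assert all(
    gcd(a**m - 1, a**n - 1) == a**gcd(m, n) - 1
    for a in range(2, 6)
    for m in range(1, 11)
    for n in range(1, 11)
)

# Apply the identity twice to the original problem.
e = 2**gcd(303, 309) - 1   # equals gcd(2^303 - 1, 2^309 - 1)
print(e, 5**e - 1)         # the requested gcd is 5^e - 1
```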

calculus - A limit without invoking L'Hopital: $\lim_{x \to 0} \frac{x \cos x - \sin x}{x^2}$




The following limit



$$\ell=\lim_{x \rightarrow 0} \frac{x \cos x - \sin x}{x^2}$$



is a nice candidate for L'Hopital's Rule. This was given at a school before L'Hopital's Rule was covered. I wonder how we can skip the rule and use basic limits such as:



$$\lim_{x \rightarrow 0} \frac{\sin x}{x} \quad , \quad \lim_{x \rightarrow 0} \frac{\cos x -1}{x^2}$$


Answer



We have,




$$\lim_{x \to 0} \dfrac{x\cos x - \sin x}{x^2} = \lim_{x \to 0} \dfrac{\cos x -1}{x} + \lim_{x \to 0}\dfrac{x - \sin x}{x^2} $$



$$ = -2\lim_{x \to 0} \dfrac{\sin^2 \left(\frac{x}{2}\right)}{x} + \lim_{x \to 0}\dfrac{x - \sin x}{x^2} $$



The first limit is zero since $\displaystyle \lim_{x \to 0} \dfrac{\sin x}{x} = 1$, and,



$$ 0 \leq \lim_{x \to 0}\dfrac{x - \sin x}{x^2} \leq \lim_{x \to 0}\dfrac{\tan x - \sin x}{x^2}$$



But,




$$\lim_{x \to 0}\dfrac{\tan x - \sin x}{x^2} = \lim_{x \to 0} \ \left( \sin x \times \dfrac{1-\cos x}{x^2 \cos x} \right) = \lim_{x \to 0} \dfrac{1 - \cos x}{x} = 0$$



Thus, by the Squeeze Theorem,



$$\lim_{x \to 0} \dfrac{x\cos x - \sin x}{x^2} =0$$


Monday 23 September 2019

Finding the root of a complex number

I have a complex number $8-6i$. I have to find its square root. I did all the steps and I got $\pm(-3+i)$. I also got $\pm(3-i)$. On squaring I get the same $8-6i$ in both cases, but which is the right square root? Please, someone point out my mistake.
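
For what it's worth, $\pm(-3+i)$ and $\pm(3-i)$ describe the same pair of numbers, $3-i$ and $-3+i$, so there is no conflict: a nonzero complex number has exactly two square roots, negatives of each other. A one-line check in Python (`cmath.sqrt` returns the principal root, the one with non-negative real part):

```python
import cmath

print(cmath.sqrt(8 - 6j))           # ~ (3-1j), the principal square root
print((3 - 1j)**2, (-3 + 1j)**2)    # both equal (8-6j)
```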

Finding a limit using High School methods



One of my students approached me with a question about a limit. (She doesn't know the idea of a limit.) I was able to find the limit quite easily by using L'Hopital's Rule. However, she doesn't know L'Hopital's Rule either, and so a proof cannot use it. The limit is as follows:



$$\lim_{x \to \infty} \frac{\ln^2x + 2\ln x}{x}=0$$



I must be missing a simple trick. The proof cannot use L'Hopital's Rule, nor can it be a formal $\varepsilon\delta$-proof. Neither of these ideas have been introduced in the course. We have done Taylor Series.



Can anyone see a simple way of finding this limit?


Answer




Let $x=e^u$. We are looking at $\frac{u^2+2u}{e^u}$.



For the $\frac{u^2}{e^u}$ part, use the fact that for positive $u$, we have $e^u\gt \frac{u^3}{3!}$. (This comes from the Taylor series.)



The $\frac{2u}{e^u}$ part is done in the same way.


trigonometry - Finding square roots of $\sqrt 3 +3i$



I was reading an example in which the square roots of $\sqrt 3 +3i$ are calculated.




$w=\sqrt 3 +3i=2\sqrt 3\left(\frac{1}{2}+\frac{1}{2}\sqrt3i\right)\\=2\sqrt 3(\cos\frac{\pi}{3}+i\sin\frac{\pi}{3})$



Let $z^2=w \Rightarrow r^2(\cos(2\theta)+i\sin(2\theta))=2\sqrt 3(\cos\frac{\pi}{3}+i\sin\frac{\pi}{3})$.



But how did they get from $\sqrt 3 +3i=2\sqrt 3\left(\frac{1}{2}+\frac{1}{2}\sqrt3i\right)=2\sqrt 3(\cos\frac{\pi}{3}+i\sin\frac{\pi}{3})$?



And can one just 'let $z^2=w$' as above?



Edit:

$w=2\sqrt 3(\cos\frac{\pi}{3}+i\sin\frac{\pi}{3})=z^2\\ \Rightarrow z=\sqrt{2\sqrt 3}(\cos\frac{\pi}{6}+i\sin\frac{\pi}{6})\\ \Rightarrow \sqrt{2\sqrt 3}\frac{\sqrt 3}{2} +i \sqrt{2\sqrt 3} \frac{1}{2}$


Answer



Wikipedia page on polar form of complex numbers is quite good.



Given a complex number $z = a + i b$, its absolute value is $|z| = \sqrt{a^2+b^2}$; naturally the quotient $\frac{z}{|z|}$ has unit absolute value, hence $\frac{z}{|z|} = \mathrm{e}^{i \theta} = \cos(\theta) + i \sin(\theta)$ for some angle $\theta$.



In the case at hand, $a=\sqrt{3}$ and $b=3$, thus $\sqrt{a^2+b^2} = \sqrt{3+3^2} = 2 \sqrt{3}$. Therefore $\frac{z}{|z|} = \frac{\sqrt{3}}{2 \sqrt{3}} + i \frac{3}{2 \sqrt{3}} = \frac{1}{2} + i \frac{\sqrt{3}}{2}$. Solving $\cos(\theta) = \frac{1}{2}$ and $\sin(\theta) = \frac{\sqrt{3}}{2}$ for $0 \leqslant \theta < 2\pi$ gives $\theta = \frac{\pi}{3}$.



Finding the square root proceeds as follows. Let $w = |w| \mathrm{e}^{i \phi}$, then
$$
|w|^2 \mathrm{e}^{2 i \phi} = 2 \sqrt{3} \mathrm{e}^{i \pi/3}
$$
Taking the absolute value we must have $|w|^2 = 2 \sqrt{3}$, hence $|w| = \sqrt{2} 3^{1/4}$. When solving for the angle $\phi$, remember that there are two roots for $0 \leqslant \phi <2 \pi$.


real analysis - Why is $e^{x}$ not uniformly continuous on $\mathbb{R}$?




It seems intuitively very clear that $e^{x}$ is not uniformly continuous on $\mathbb{R}$. I'm looking to 'prove' it using $\epsilon$-$\delta$ analysis though. I reason as follows:



Suppose $\epsilon > 0$; in fact, fix it to be $\epsilon=1$.



For contradiction, suppose that $\exists \delta >0$ s.t. $$ (\star) \ |x-y|<\delta \Rightarrow |e^{x}-e^{y}|<\epsilon=1 \text{ for } x,y \in \mathbb{R}.$$
Note that $e^{x+\delta}-e^{x}=e^{x}(e^{\delta}-1)$. So, for $x$ large enough (so that RHS $>1$), the relation $(\star)$ does not hold.



This is our contradiction, and so the exponential function is not uniformly continuous on $\mathbb{R}$.



Is this reasoning correct and sufficient?




Thanks.


Answer



If you are truly looking for a rigorous answer then you need to justify the "So, for $x$ large enough...".



For instance, here is a very rigorous solution along the lines you suggest:



Assume that $e^x$ is uniformly continuous on $\mathbb {R}$. Let $\epsilon = 1$. Thus there is $\delta >0$ such that for all $x,y\in \mathbb R$ if $|x-y|<\delta $ then $|e^x-e^y| < 1$. Let $a=\delta/2$. Since $\lim_{x\to\infty }e^x=\infty$ and since $e^a-1>0$ it follows that $\lim_{x\to \infty }e^x(e^a-1)=\infty$. Consequently, there is some $x\in \mathbb {R}$ such that $e^x(e^a-1)>1$. However, taking $y=x+a$ we have $|x-y|<\delta$ while $|e^x-e^y|=e^x(e^a-1)>1$, a contradiction.


Sunday 22 September 2019

sequences and series - Infinite sum of positive integers and why an integral test doesn't disprove it

Problem



I first came across this statement

$\sum_{n=1}^\infty n = -\frac{1}{12}$ a couple of years ago.



Why does an integral test for convergence not disprove this?
That is, with an integral of
$$\int_0^\infty x \, dx$$



I see the integral test requires a monotonically decreasing function, and this will be why you can't use it for the infinite sum of positive integers.
I fail to understand why the test requires a decreasing function. Perhaps this is beyond the scope of what I'm trying to understand.



Context




I saw a video challenging the viewer to solve the balancing bricks problem, and I worked on it up to the point where I needed the sum of the harmonic series, which I then looked up.



I saw the integral test as a way to show this series also had an infinite sum, as demonstrated in the Harmonic Series Wikipedia article. This immediately reminded me of that ol' friend $\sum_{n=1}^\infty n = -\frac{1}{12}$ and I was led to wonder why that integral test couldn't be applied. Cue much internet searching and now this question.

Elementary field theory, field extensions of the rationals of degree 2



I've just started some reading and doing exercises on field theory with Galois theory in scope, and have had some trouble with this exercise. I think I have simply misunderstood some of the definitions, and would like someone to set this straight for me.




If $K$ is an extension field of $\mathbb{Q}$ such that $[K:\mathbb{Q}] = 2$, prove that $K = \mathbb{Q}(\sqrt{d})$ for some square-free integer $d$.





Now, I understand that since the extension is finite-dimensional, it has to be algebraic. So in particular if I take any element $u \in K$ not in $\mathbb{Q}$ then it must be algebraic. Since the basis of $K$ over $\mathbb{Q}$ has size 2, the set $\{1, u, u^2\}$ must be linearly dependent, and from it I can construct a polynomial of degree two with $u$ as a root.



If the polynomial is $f(x) = x^2 + ax + b$, then I know $u = -a/2 + \sqrt{a^2/4 -b}$, where $t = \sqrt{a^2/4 -b}$ cannot be a square or else $u \in \mathbb{Q}$. I can see why $\mathbb{Q}(u) = \mathbb{Q}(t)$.



In this way I get the chain of fields $\mathbb{Q} \subset \mathbb{Q}(t) \subset K$, but because $[K:\mathbb{Q}]= 2$ and certainly $\mathbb{Q} \neq \mathbb{Q}(t)$ then $\mathbb{Q}(t) = K$. Now, my problem lies in proving that the field $\mathbb{Q}(t)$ actually can be represented by $\mathbb{Q}(\sqrt d)$ where $d$ is square-free.



What bothers me is the following. The polynomial $f(x) = x^2 - 2/3$ has a root in $\sqrt{2/3}$ and is certainly irreducible in $\mathbb{Q}$. But then the field $\mathbb{Q}(\sqrt{2/3})$ has dimension 2. How is this field equal to some field $\mathbb{Q}(\sqrt d)$ where $d$ is square-free?




EDIT: Thanks for the help in the comments. Obviously if $n/m$ is a rational number in reduced form then $nm$ is square-free and $\mathbb{Q}(\sqrt{n/m}) = \mathbb{Q}(\sqrt{nm})$. Feel free to close the question.


Answer



From the discussion in the original post it follows that if $[K:\mathbb{Q}] = 2$ then $K = \mathbb{Q}(t)$ where $t$ is the square root of some rational number in reduced form, say $t = \sqrt{n/m}$. Then $\gcd(n,m) = 1$. Then the integer $nm$ is clearly square-free and what is left to show is that $\mathbb{Q}(\sqrt{n/m}) = \mathbb{Q}(\sqrt{nm})$.



Since $\mathbb{Q}(\sqrt{n/m}) = \{a+b\sqrt{n/m} \ | \ a,b \in \mathbb{Q}\}$, letting $a = 0, b = m$ we get that $\sqrt{mn} \in \mathbb{Q}(\sqrt{n/m})$ and since $\mathbb{Q}(\sqrt{nm})$ is the smallest field containing this element, we get the inclusion $\mathbb{Q}(\sqrt{nm}) \subset \mathbb{Q}(\sqrt{n/m})$.



By a similar argument, $\mathbb{Q}(\sqrt{n/m}) \subset \mathbb{Q}(\sqrt{nm})$ since $\mathbb{Q}(\sqrt{nm}) = \{a+b\sqrt{nm} \ | \ a,b \in \mathbb{Q}\}$ and letting $a=0, b = 1/m$ we get that $\mathbb{Q}(\sqrt{n/m}) \subset \mathbb{Q}(\sqrt{nm})$.



So in conclusion $\mathbb{Q}(\sqrt{n/m}) = \mathbb{Q}(\sqrt{nm})$ and every field extension of the rationals of degree 2 is of the form $\mathbb{Q}(\sqrt{d})$ where $d$ is square-free.


Saturday 21 September 2019

proof writing - Proving sequence statement using mathematical induction, $d_n = \frac{2}{n!}$



I'm stuck on this homework problem. I must prove the statement using mathematical induction



Given: A sequence $d_1, d_2, d_3, \ldots$ is defined by letting $d_1 = 2$ and, for all integers $k \ge 2$,
$$
d_k = \frac{d_{k-1}}{k}
$$




Show that for all integers $n \ge 1$ , $$d_n = \frac{2}{n!}$$






Here's my work:



Proof (by mathematical induction). For the given statement, let the property $p(n)$ be the equation:



$$
d_n = \frac{2}{n!}
$$



Show that $P(1)$ is true:
The left hand side of $P(1)$ is $d_1$, which equals $2$ by definition of the sequence.
The right hand side is:



$$ \frac{2}{(1)!} =2 $$



Show that for all integers $k \geq 1$, if $P(k)$ is true, then $P(k+1)$ is true.
Let $k$ be any integer with $k \geq 1$, and suppose $P(k)$ is true. That is, suppose (this is the inductive hypothesis):




$$ d_{k} = \frac{2}{k!} $$



We must show that $P(k+1)$ is true. That is, we must show that:



$$ d_{k+1} = \frac{2}{(k+1)!} $$



(I thought I was good until here.)



But the left hand side of $P(k+1)$ is:




$$ d_{k+1} = \frac{d_k}{k+1} $$



By inductive hypothesis:



$$ d_{k+1} = \frac{(\frac{2}{2!})}{k+1} $$



$$ d_{k+1} = \frac{2}{2!}\frac{1}{k+1} $$



but that doesn't seem to equal what I needed to prove: $ d_n = \frac{2}{n!}$



Answer



The following is not true $$d_{k+1} = \frac{(\frac{2}{2!})}{k+1}$$ since $d_k=\frac{2}{k!}$ not $\frac{2}{2!}$, you actually have $$d_{k+1} = \frac{(\frac{2}{k!})}{k+1}=\frac{(\frac{2}{k!(k+1)})}{1}=\frac{2}{(k+1)!}$$


probability theory - Monotone Convergence Theorem - Lebesgue measure



One of the conditions for the monotone convergence theorem is that $f_n \uparrow f$ pointwise. Is there a version of this theorem for which $f_n \downarrow f$ pointwise? If there is, what are the other conditions?




Any help would be appreciated.


Answer



Yes. Look up dominated convergence. Basically, when approaching from above, you need the sequence of functions to eventually have finite integral; then you can subtract to reduce it to monotone convergence. If the sequence always has infinite integral, it could converge to anything; imagine $f_n=1_{[n,\infty)}$, for example.


Friday 20 September 2019

real analysis - How discontinuous can a derivative be?



There is a well-known result in elementary analysis due to Darboux which says if $f$ is a differentiable function then $f'$ satisfies the intermediate value property. To my knowledge, not many "highly" discontinuous Darboux functions are known--the only one I am aware of being the Conway base 13 function--and few (none?) of these are derivatives of differentiable functions. In fact they generally cannot be since an application of Baire's theorem gives that the set of continuity points of the derivative is dense $G_\delta$.



Is it known how sharp that last result is? Are there known Darboux functions which are derivatives and are discontinuous on "large" sets in some appropriate sense?


Answer




What follows is taken (mostly) from more extensive discussions in the following sci.math posts:



http://groups.google.com/group/sci.math/msg/814be41b1ea8c024 [23 January 2000]



http://groups.google.com/group/sci.math/msg/3ea26975d010711f [6 November 2006]



http://groups.google.com/group/sci.math/msg/05dbc0ee4c69898e [20 December 2006]



Note: The term interval is restricted to nondegenerate intervals (i.e. intervals containing more than one point).




The continuity set of a derivative on an open interval $J$ is dense in $J.$ In fact, the continuity set has cardinality $c$ in every subinterval of $J.$ On the other hand, the discontinuity set $D$ of a derivative can have the following properties:




  1. $D$ can be dense in $\mathbb R$.


  2. $D$ can have cardinality $c$ in every interval.


  3. $D$ can have positive measure. (Hence, the function can fail to be Riemann integrable.)


  4. $D$ can have positive measure in every interval.


  5. $D$ can have full measure in every interval (i.e. measure zero complement).


  6. $D$ can have a Hausdorff dimension zero complement.


  7. $D$ can have an $h$-Hausdorff measure zero complement for any specified Hausdorff measure function $h.$





More precisely, a subset $D$ of $\mathbb R$ can be the discontinuity set for some derivative if and only if $D$ is an $F_{\sigma}$ first category (i.e. an $F_{\sigma}$ meager) subset of $\mathbb R.$



This characterization of the discontinuity set of a derivative can be found in the following references: Benedetto [1] (Chapter 1.3.2, Proposition, 1.10, p. 30); Bruckner [2] (Chapter 3, Section 2, Theorem 2.1, p. 34); Bruckner/Leonard [3] (Theorem at bottom of p. 27); Goffman [5] (Chapter 9, Exercise 2.3, p. 120 states the result); Klippert/Williams [7].



Regarding this characterization of the discontinuity set of a derivative, Bruckner and Leonard [3] (bottom of p. 27) wrote the following in 1966: Although we imagine that this theorem is known, we have been unable to find a reference. I have found the result stated in Goffman's 1953 text [5], but nowhere else prior to 1966 (including Goffman's Ph.D. Dissertation).



Interestingly, in a certain sense most derivatives have the property that $D$ is large in all of the ways listed above (#1 through #7).




In 1977 Cliff Weil [8] published a proof that, in the space of derivatives with the sup norm, all but a first category set of such functions are discontinuous almost everywhere (in the sense of Lebesgue measure). When Weil's result is paired with the fact that derivatives (being Baire $1$ functions) are continuous almost everywhere in the sense of Baire category, we get the following:



(A) Every derivative is continuous at the Baire-typical point.



(B) The Baire-typical derivative is not continuous at the Lebesgue-typical point.



Note that Weil's result is stronger than simply saying that the Baire-typical derivative fails to be Riemann integrable (i.e. $D$ has positive Lebesgue measure), or even stronger than saying that the Baire-typical derivative fails to be Riemann integrable on every interval. Note also that, for each of these Baire-typical derivatives, $\{D, \; {\mathbb R} - D\}$ gives a partition of $\mathbb R$ into a first category set and a Lebesgue measure zero set.



In 1984 Bruckner/Petruska [4] (Theorem 2.4) strengthened Weil's result by proving the following: Given any finite Borel measure $\mu,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has $\mu$-measure zero.




In 1993 Kirchheim [6] strengthened Weil's result by proving the following: Given any Hausdorff measure function $h,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has Hausdorff $h$-measure zero.



[1] John J. Benedetto, Real Variable and Integration With Historical Notes, Mathematische Leitfäden. Stuttgart: B. G. Teubner, 1976, 278 pages. [MR 58 #28328; Zbl 336.26001]



[2] Andrew M. Bruckner, Differentiation of Real Functions, 2nd edition, CRM Monograph Series #5, American Mathematical Society, 1994, xii + 195 pages. [The first edition was published in 1978 as Springer-Verlag's Lecture Notes in Mathematics #659. The second edition is essentially unchanged from the first edition with the exception of a new chapter on recent developments (23 pages) and 94 additional bibliographic items.] [MR 94m:26001; Zbl 796.26001]



[3] Andrew M. Bruckner and John L. Leonard, Derivatives, American Mathematical Monthly 73 #4 (April 1966) [Part II: Papers in Analysis, Herbert Ellsworth Slaught Memorial Papers #11], 24-56. [MR 33 #5797; Zbl 138.27805]



[4] Andrew M. Bruckner and György Petruska, Some typical results on bounded Baire $1$ functions, Acta Mathematica Hungarica 43 (1984), 325-333. [MR 85h:26004; Zbl 542.26004]




[5] Casper Goffman, Real Functions, Prindle, Weber & Schmidt, 1953/1967, x + 261 pages. [MR 14,855e; Zbl 53.22502]



[6] Bernd Kirchheim, Some further typical results on bounded Baire one functions, Acta Mathematica Hungarica 62 (1993), 119-129. [MR 94k:26008; Zbl 786.26002]



[7] John Clayton Klippert and Geoffrey Williams, On the existence of a derivative continuous on a $G_{\delta}$, International Journal of Mathematical Education in Science and Technology 35 (2004), 91-99.



[8] Clifford Weil, The space of bounded derivatives, Real Analysis Exchange 3 (1977-78), 38-41. [Zbl 377.26005]


summation - Prove by Induction: $\sum n^3=(\sum n)^2$

I am trying to prove that for any integer $n \ge 1$, this is true:



$$ (1 + 2 + 3 + \cdots + (n-1) + n)^2 = 1^3 + 2^3 + 3^3 + \cdots + (n-1)^3 + n^3$$



I've done the base case and I am having problems in the step where I assume that the above is true and try to prove for $k = n + 1$.



I managed to get,




$$(1 + 2 + 3 + \cdots + (k-1) + k + (k+1))^2 = (1 + 2 + 3 + \cdots + (k-1) + k)^2 + (k + 1)^3$$



but I'm not quite sure what to do next as I haven't dealt with cases where both sides could sum up to an unknown integer.

induction - Proving that two summations are equivalent: $\sum_{i=1}^n i^3 = (\sum_{i=1}^n i)^2$





Give a constructive proof to show that for all $n \geq 1$ ,



$\sum\limits_{i=1}^n i^3 = (\sum\limits_{i=1}^n i)^2$



Observe that $(n+1)^4 - n^4 = 4n^3 + 6n^2 + 4n + 1$ .







Now, the two following equalities are obvious:



$\sum\limits_{i=1}^n i^3 = 1^3 + 2^3 + 3^3 + ... + n^3$



$(\sum\limits_{i=1}^n i)^2 = (1 + 2 + 3 + ... + n)^2$



And they appear to agree on the first few test cases:




$\sum\limits_{i=1}^n i^3 = A(n)$




  • $A(1) = 1^3 = 1$

  • $A(2) = 1^3 + 2^3 = 1 + 8 = 9$

  • $A(3) = 1^3 + 2^3 + 3^3 = 9 + 27 = 36$



$(\sum\limits_{i=1}^n i)^2 = B(n)$





  • $B(1) = (1)^2 = 1$

  • $B(2) = (1 + 2)^2 =9 $

  • $B(3) = (1 + 2 + 3)^2 = 36$



Now, I am thinking of finding the closed-forms for both functions in the hopes that they are indeed the same. Then I would prove those closed forms to work by induction. But:




  1. I don't know if that would be a sound way to do it.


  2. I don't know if this would even qualify as constructive, as the question requests.



As you may tell, I am no math major. I am a Computer Science major, though. This is a computing fundamentals class. I took discrete 1.5 years ago, so my knowledge is about as fresh as a litter box. I've been in quite a rut for a few hours over this.


Answer



Your goal is to prove the statement $S(n)$ for all $n\geq 1$ where
$$
S(n) : 1^3 + 2^3 +3^3 +\cdots + n^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Using $\Sigma$-notation, we may rewrite $S(n)$ as follows:

$$
S(n) : \sum_{r=1}^n r^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Base step: The statement $S(1)$ says that $(1)^3 = (1)^2$ which is true because $1=1$.



Inductive step [$S(k)\to S(k+1)$]: Fix some $k\geq 1$, where $k\in\mathbb{N}$. Assume that
$$
S(k) : \sum_{r=1}^k r^3 = \left[\frac{k(k+1)}{2}\right]^2
$$
holds. To be proved is that

$$
S(k+1) : \sum_{r=1}^{k+1} r^3 = \left[\frac{(k+1)((k+1)+1)}{2}\right]^2
$$
follows. Beginning with the left side of $S(k+1)$,
\begin{align}
\sum_{r=1}^{k+1}r^3 &= \sum_{r=1}^k r^3 + (k+1)^3\tag{evaluate sum for $r=k+1$}\\[1em]
&= \left[\frac{k(k+1)}{2}\right]^2+(k+1)^3\tag{by $S(k)$}\\[1em]
&= \frac{(k+1)^2}{4}[k^2+4(k+1)]\tag{factor out $\frac{(k+1)^2}{4}$}\\[1em]
&= \frac{(k+1)^2}{4}[(k+2)(k+2)]\tag{factor quadratic}\\[1em]
&= \frac{(k+1)^2(k+2)^2}{4}\tag{multiply and rearrange}\\[1em]

&= \left[\frac{(k+1)(k+2)}{2}\right]^2\tag{rearrange}\\[1em]
&= \left[\frac{(k+1)((k+1)+1)}{2}\right]^2,\tag{rearrange}
\end{align}
one arrives at the right side of $S(k+1)$, thereby showing that $S(k+1)$ is also true, completing the inductive step.



By mathematical induction, it is proved that for all $n\geq 1$, where $n\in\mathbb{N}$, that the statement $S(n)$ is true.



Note: The step where $\dfrac{(k+1)^2}{4}$ is factored out is an important one. If we do not factor this out and, instead, choose to expand $(k+1)^3$, the problem becomes much more messy than it needs to be.
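As a quick sanity check (not part of the proof), here is a short Python sketch comparing the sum of cubes against the closed form $\left[\frac{n(n+1)}{2}\right]^2$ for the first few $n$; the function names are illustrative only.

def sum_of_cubes(n):
    return sum(r**3 for r in range(1, n + 1))

def closed_form(n):
    return (n * (n + 1) // 2) ** 2

for n in range(1, 11):
    assert sum_of_cubes(n) == closed_form(n)
print("verified for n = 1..10")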


Thursday 19 September 2019

sequences and series - Arithmetic progression, find $n$



Given the arithmetic progression $20,17,14,\ldots$



Find the smallest value of $n$ such that $y_n<0$.



I can find the value of $n$ in my head, but I do not know how to write down the solution.


Answer



Assuming $y_0=20$, your arithmetic progression has closed form




$$y_n=20-3n$$



Now let us solve the following inequality.



$$0>20-3n$$
$$3n>20$$
$$n>\frac{20}{3}=7-\frac{1}{3}$$



The minimum integer value of $n$ that satisfies the condition is

$n=7$.
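For completeness, a tiny Python sketch (assuming, as above, that the progression is indexed from $y_0=20$) that finds the same $n$ by brute force:

# smallest n with y_n = 20 - 3n < 0
n = 0
while 20 - 3 * n >= 0:
    n += 1
print(n)  # prints 7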


number theory - If $x$ is a positive rational but not an integer, is $x^x$ irrational?


Let $x$ be positive, rational, but not an integer. That means $x$ can be written as $\frac{p}{q}$ with $p,q$ coprime, $p,q \neq 0$ and $q \neq 1$. Is $x^x$ always irrational?





I think that this has to be the case, but I can't prove it.



$x^x = (\frac{p}{q})^\frac{p}{q} = \frac{p^\frac{p}{q}}{q^\frac{p}{q}}$



I know that $p^\frac{p}{q}$ and $q^\frac{p}{q}$ can't both be rational, but there are cases where the division of two irrational numbers gives us a rational number. For example $\frac{\sqrt{2}}{\sqrt{2}} = 1$.

Wednesday 18 September 2019

Proof of the power series $1 + x + x^2 + \ldots + x^n = \frac{1}{1-x}$

Can anyone show me the proof of why if $|x|<1$ then:



$$

\lim_{n \to \infty} \left(1 + x + x^2 + \ldots + x^n\right) = \frac{1}{1-x}
$$
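This is not a proof, but a quick numerical illustration of the claim (with the series written out as $1 + x + x^2 + \cdots + x^n$): for $|x|<1$ the partial sums approach $\frac{1}{1-x}$. The value $x=0.3$ below is an arbitrary choice.

x = 0.3
target = 1 / (1 - x)
partial = 0.0
for n in range(50):
    partial += x**n
print(partial, target)  # both are approximately 1.428571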

Monday 16 September 2019

proof writing - Prove the inequality for all natural numbers n using induction



$\log_2 n < n$

I know how to prove the base case: $\log_2 1 < 1$.

Likewise, assuming the inequality for $n=k$: $\log_2 k < k$.

Then to prove by induction I show $\log_2 (k+1) < (k+1)$?



I know it's true since the domain is all real numbers i just cant figure out the next step to prove it.


Answer



$\log_2 (n) = \frac{\ln (n)}{\ln(2)}$, where $ln$ is the natural log i.e. with base $e$



You need to show $\log_2(n) < n\implies \frac{\ln (n)}{\ln(2)} < n \implies \ln(n) < n\ln(2)$.


Since $\log$ is an increasing function you can rephrase your question to $n < 2^n$ as suggested by @jwsiegel



That is easy to show by induction



n=1: $1<2^1$. True.



Induction Hypothesis: $n-1< 2^{n-1}$



for $n \geq 2$, $n \leq 2(n-1) < 2\cdot 2^{n-1} = 2^n$


Probability Puzzle: Mutating Loaded Die



Take an (initially) fair six-sided die (i.e. $P(x)=\frac{1}{6}$ for $x=1,…,6$) and roll it repeatedly.




After each roll, the die becomes loaded for the next roll depending on the number $y$ that was just rolled according to the following system:



$$P(y)=\frac{1}{y}$$
$$P(x)=\frac{1 - P(y)}{5} \text{, for } x \ne y$$



i.e. the probability that you roll that number again in the next roll is $\frac{1}{y}$ and the remaining numbers are of equal probability.



What is the probability that you roll a $6$ on your $n$th roll?






NB: This is not a homework or contest question, just an idea I had on a boring bus ride. Bonus points for calculating the probability of rolling the number $x$ on the $n$th roll.


Answer



The transition matrix is given by $$\mathcal P = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ \tfrac{1}{10} & \tfrac{1}{2} & \tfrac{1}{10} & \tfrac{1}{10} & \tfrac{1}{10} & \tfrac{1}{10} \\ \tfrac{2}{15} & \tfrac{2}{15} & \tfrac{1}{3} & \tfrac{2}{15} & \tfrac{2}{15} & \tfrac{2}{15} \\ \tfrac{3}{20} & \tfrac{3}{20} & \tfrac{3}{20} & \tfrac{1}{4} & \tfrac{3}{20} & \tfrac{3}{20} \\ \tfrac{4}{25} & \tfrac{4}{25} & \tfrac{4}{25} & \tfrac{4}{25} & \tfrac{1}{5} & \tfrac{4}{25} \\ \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} \end{bmatrix}.$$ It is fairly easy to get numerical values for the probability distribution of being in state $6$ after $n$ steps, but a closed form solution appears difficult.
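Following up on the remark that numerical values are easy to obtain, here is a short NumPy sketch (variable names are illustrative) that builds the transition matrix above and prints the probability of rolling a $6$ on the $n$th roll, starting from the fair distribution for the first roll.

import numpy as np

# Row y: P(y -> y) = 1/y and P(y -> x) = (1 - 1/y)/5 for x != y.
P = np.array([[1/y if x == y else (1 - 1/y) / 5 for x in range(1, 7)]
              for y in range(1, 7)])

start = np.full(6, 1/6)  # the first roll is fair
for n in [1, 2, 3, 10, 50]:
    dist = start @ np.linalg.matrix_power(P, n - 1)
    print(n, dist[5])  # probability of a 6 on roll n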


integration - show that $\int_{0}^{\infty } \frac{\sin (ax)}{x(x^2+b^2)^2}dx=\frac{\pi}{2b^4}(1-\frac{e^{-ab}(ab+2)}{2})$



show that




$$\int_{0}^{\infty } \frac{\sin (ax)}{x(x^2+b^2)^2}dx=\frac{\pi}{2b^4}\left(1-\frac{e^{-ab}(ab+2)}{2}\right)$$



for $a,b> 0$



I would like to see it solved using contour integration, but I would also like to see different solutions using other methods.



Any help would be appreciated.
Thanks to all.


Answer



$$\int_0^{\infty} \frac{\sin ax\,dx}{x(x^2+b^2)^2}=\frac{1}{b^4}\left(\int_0^{\infty}\frac{\sin ax}{x}\,dx-\int_0^{\infty}\frac{x\sin ax\,dx}{x^2+b^2}\right)-\frac{1}{b^2}\int_0^{\infty}\frac{x\sin ax\,dx}{(x^2+b^2)^2}$$




The first integral is well known. For any $a>0:$



$$\int_0^{\infty} \frac{\sin ax}{x}\,dx=\frac{\pi}{2}$$



The second, consider:



$$\begin{aligned}f(t)=\int_0^{\infty} \frac{x\sin axt}{x^2+b^2}\,dx \Rightarrow \mathcal{L} \{ f(t)\} &=\int_0^{\infty}e^{-st}\int_0^{\infty}\frac{x\sin axt}{x^2+ b^2}\,dx\,dt\\&=\int_0^{\infty}\frac{x}{x^2+ b^2}\int_0^{\infty}e^{-st}\sin axt\,dt\,dx\\&=\int_0^{\infty} \frac{ax^2}{(x^2+ b^2)(a^2x^2+s^2)} \,dx\\&= \frac{\pi}{2(s+ab)}\end{aligned}$$



$$\frac{\pi}{2}\cdot\mathcal{L}^{-1}\left\{ \frac{1}{s+ab}\right\}\Bigg|_{t=1}= \frac{\pi}{2e^{ab}}$$




The third, using the same parameter (call the function $g(t)$ now) one obtains:



$$\begin{aligned}\mathcal{L} \{ g(t)\} &=\int_0^{ \infty}\frac{x}{(x^2+ b^2)^2}\int_0^{\infty}e^{-st}\sin axt\,dt\,dx\\&=\int_0^{\infty} \frac{ax^2}{(x^2+ b^2)^2(a^2x^2+s^2)} \,dx\\&= \frac{a\pi}{4b(s+ab)^2}\end{aligned}$$



$$\frac{\pi a}{4b}\cdot\mathcal{L}^{-1}\left\{ \frac{1}{(s+ab)^2}\right\}\Bigg|_{t=1}= \frac{a\pi}{4be^{ab}}$$



Therefore:



$$\int_0^{\infty} \frac{\sin ax\,dx}{x(x^2+b^2)^2}=\frac{\pi}{2b^4}\left(1-\frac{2+ab}{2e^{ab}}\right)$$
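A quick numerical cross-check of the final identity (a sketch using SciPy's quad; the values $a=2$, $b=3$ are arbitrary):

import numpy as np
from scipy.integrate import quad

a, b = 2.0, 3.0
lhs, _ = quad(lambda x: np.sin(a*x) / (x * (x**2 + b**2)**2), 0, np.inf, limit=200)
rhs = np.pi / (2 * b**4) * (1 - (2 + a*b) / (2 * np.exp(a*b)))
print(lhs, rhs)  # both are approximately 0.0192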



elementary number theory - Find the remainder when $ 528528528...$up to $528$ digits is divided by $27$?



Find the remainder when $528528528...$up to $528$ digits is divided by $27$?
Here's what I have done: the number can be written as $528\cdot 10^{525}+528\cdot 10^{522}+\cdots+528$, which has $176$ terms, and each term is $\equiv 15 \pmod{27}$, so the number should be $176\cdot 15 \bmod 27$, hence $21$ should be the remainder. But the book says it is $6$. I don't understand the flaw in my logic. Please correct me.


Answer



Here is a python3 session




>>> s = '528' * 176
>>> len(s)
528
>>> int(s) % 27
21

So your reasoning is correct: the remainder is $21$, and the book's stated answer of $6$ appears to be wrong.

Sunday 15 September 2019

calculus - Closed form for $\int_0^1 e^{\frac{1}{\ln(x)}}dx$?



I want to evaluate and find a closed form for this definite integral:$$\int_0^1 e^{\frac{1}{\ln(x)}}dx.$$



I don't know where to start. I've tried taking the natural logarithm of the integral, substitution and expressing the integrand in another way but they haven't led anywhere. The approximation should be between $.2$ and $.3,$ and is probably a transcendental number.



Thanks.


Answer




Let's verify Robert Israel's find. Observe that



$$I:=\int_0^1\exp\frac{1}{\ln x}dx=\int_0^\infty\frac{1}{y^2}\exp -(y+y^{-1})dy=\frac12\int_0^\infty(1+1/y^2)\exp -(y+y^{-1})dy,$$where we have substituted $y=-1/\ln x$ and then averaged with $y\mapsto 1/y$.



In view of the integral $K_\alpha(x)=\int_0^\infty\exp[-x\cosh t]\cosh(\alpha t)dt$ and the substitution $y=\exp -t$, $$2K_1(2)=\int_{-\infty}^\infty\exp[-2\cosh t]\cosh tdt=\frac12\int_0^\infty\exp [-y-1/y](1+1/y^2)dy.$$
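For a numerical confirmation of $I=2K_1(2)$, here is a small SciPy sketch (kv is the modified Bessel function of the second kind):

import numpy as np
from scipy.integrate import quad
from scipy.special import kv

numeric, _ = quad(lambda x: np.exp(1 / np.log(x)), 0, 1)
print(numeric, 2 * kv(1, 2))  # both are approximately 0.27973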


Saturday 14 September 2019

calculus - limit $\lim_{x\to 0}\frac{\tan x-x}{x^2\tan x}$ without l'Hospital



Is it possible to find $$\lim_{x\to 0}\frac{\tan x-x}{x^2\tan x}$$ without l'Hospital's rule?



I have $\lim_{x\to 0}\frac{\tan x}{x}=1$ proved without H. but it doesn't help me with this complicated limit (however I'm sure I have to use it somehow).




I know the answer is $\frac{1}{3}$, so I tried to estimate: $0<\frac{\tan x-x}{x^2\tan x}\le\frac{1}{3}\cdot\frac{\tan x}{x}+g(x)$ with various $g$ and prove that $g(x)\to 0$, but with no result.


Answer



$$L=\lim_{x\to 0}\frac{x-\tan x}{x^2 \tan x}=\lim_{x\to 0}\frac{\cos x-\frac{\sin x}{x}}{x^2}\cdot\frac{x}{\sin x}=\lim_{x\to 0}\frac{1-2\sin^2\frac{x}{2}-\cos\frac{x}{2}\frac{\sin(x/2)}{(x/2)}}{x^2} $$
gives $L=A+B$ where:
$$ A = -\frac{1}{2}+\lim_{x\to 0}\frac{1-\cos\frac{x}{2}}{x^2} = -\frac{1}{2}+\lim_{x\to 0}\frac{2\sin^2\frac{x}{4}}{x^2} = -\frac{1}{2}+\frac{1}{8}=-\frac{3}{8}$$
and
$$ B = \lim_{x\to 0}\frac{1}{x^2}\left(1-\frac{\sin(x/2)}{x/2}\right)=\frac{1}{4}\lim_{x\to 0}\frac{1}{x^2}\left(1-\frac{\sin x}{x}\right)$$
(assuming such a limit exists) fulfills:
$$ 3B = 4B-B = \lim_{x\to 0}\left(\frac{\sin(x/2)}{x/2}-\frac{\sin x}{x}\right)=\lim_{x\to 0}\frac{1}{x^2}\left(\frac{2\sin(x/2)}{\sin (x)}-1\right)$$

so that:
$$ B = \frac{1}{3}\lim_{x\to 0}\frac{1}{x^2}\left(\frac{1}{\cos\frac{x}{2}}-1\right)=\frac{1}{3}\lim_{x\to 0}\frac{1-\cos\frac{x}{2}}{x^2}=\frac{1}{3}\lim_{x\to 0}\frac{2\sin^2\frac{x}{4}}{x^2}=\frac{1}{24}$$
and $L=A+B=-\frac{3}{8}+\frac{1}{24}=\color{red}{\large-\frac{1}{3}}$. Since $L$ was set up here with numerator $x-\tan x$, the limit as originally posed, $\lim_{x\to 0}\frac{\tan x-x}{x^2\tan x}$, equals $\frac{1}{3}$.
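A quick numerical sanity check of the value $\frac{1}{3}$ for the limit as originally posed (an illustration only, not a proof):

import math

# (tan x - x)/(x^2 tan x) should approach 1/3 as x -> 0
for x in [0.1, 0.01, 0.001]:
    print(x, (math.tan(x) - x) / (x**2 * math.tan(x)))
# the printed ratios approach 1/3 = 0.3333...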


elementary number theory - Prove $\gcd(ka,kb) = k\cdot\gcd(a,b)$

For all $k > 0,\ k\in \Bbb Z$. Prove
$$\gcd(ka,\ kb) = k\cdot\gcd(a,\ b)$$




I think I understand what this wants but I can't figure out how to set up a formal proof. These are the guidelines we have to follow



[image of the assignment's proof guidelines not included]
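This is only a numerical spot check, not the requested formal proof, but it may help build confidence in the identity before proving it; it uses Python's built-in math.gcd.

import math
import random

# spot-check gcd(k*a, k*b) == k * gcd(a, b) on random positive integers
for _ in range(1000):
    a, b, k = (random.randint(1, 10**6) for _ in range(3))
    assert math.gcd(k * a, k * b) == k * math.gcd(a, b)
print("no counterexample found")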

Friday 13 September 2019

integration - Closed formula for integral



Mathematica can evaluate



$$\int\limits_0^\infty \frac{ \ln^{p} {x} \sin^{q} {x}}{x^r}dx $$




for all $p,q,r \in \mathbb{N}$ when $q \geq r>c$, $c \equiv q-r \ (\textrm{mod} \ 2)$, but can't give a general formula. Any references would be appreciated. Also, any hints on how to do the $p=q=r=1$ case? All results are some combination of a fraction, the Euler–Mascheroni constant, and logarithms.



So, is there a formula in the spirit of:



[image of the referenced formula not included]



UPDATE



I changed the notation a bit here to match Raymond Manzoni's first reference: the integral converges for $p\geq q$:





  1. for $p$ odd (even) and $q$ even(odd)



$$\int\limits_0^\infty \frac{\ln^{r}\! x \sin^{p} \! x }{x^q} \mathrm{d}x = \frac{(-1)^{r} (-1)^{\lfloor \frac{p-q}{2} \rfloor} }{ \Gamma(q) 2^p} \sum\limits_{i=0}^{r} \binom{r}{i} \frac{(-1)^{r-i} (r-i)!}{2^{r-i}} \sum\limits_{k=0}^{\lfloor \frac{p}{2} \rfloor - \text{mod}(2p-q,2)} \! \! \! \! \! \! \! \! \! \! 2(-1)^k \binom{p}{k}(p-2k)^{q-1} \sum\limits_{t=0}^{\lfloor \frac{r-i+1}{2} \rfloor} \frac{\text{Li}_{2t}(-1)\ln^{r-i+1-2t}\left(\frac{1}{(p-2k)^2}\right) }{(r-i+1-2t)!} \sum\limits_{\lambda \vdash i} \psi_0^{m_1}(q) \psi_1^{m_2}(q)\cdot \ldots\cdot \psi_{i-1}^{m_{l(\lambda)} }(q) \frac{\binom{i}{m_1,m_2,\ldots,m_{l(\lambda)}}}{m_1! m_2! \cdot \ldots\cdot ,m_{l(\lambda)}!} (-1)^{m_1+m_2+\ldots +m_{l(\lambda)}}$$




  2. for $p$ odd (even) and $q$ odd (even)




$$\int\limits_0^\infty \frac{\ln^{r}\! x \sin^{p} \! x }{x^q} \mathrm{d}x = \frac{(-1)^{r} (-1)^{ \frac{2p-q + \text{mod}(p,2)}{2}} }{ \Gamma(q) 2^{p-1}} \sum\limits_{i=0}^{r} \binom{r}{i} (-1)^{r-i} (r-i)! \sum\limits_{k=1}^{ \frac{p +\text{mod}(p,2)}{2}} \! \! \! \! \! \! 2(-1)^k \binom{p}{\frac{p +\text{mod}(p,2)}{2}- k}(2k-\text{mod}(p,2))^{q-1} \sum\limits_{t=0}^{\lfloor \frac{r-i+1}{2} \rfloor} \frac{\text{Li}_{2t}(-1) \sum\limits_{m=0}^{\lfloor \frac{r-i-2t}{2} \rfloor} \binom{r-i-2t+1}{2m+1} \left( \frac{\pi}{2} \right)^{2m+1} \ln^{r-i-2t-2m} \left( \frac{1}{2k -\text{mod}(p,2) } \right) }{(r-i-2t+1)!} \sum\limits_{\lambda \vdash i} \psi_0^{m_1 }(q) \psi_1^{m_2 }(q)\cdot \ldots\cdot \psi_{i-1}^{m_{l(\lambda)} }(q) \frac{\binom{i}{m_1,m_2,\ldots,m_{l(\lambda)}}}{m_1! m_2! \cdot \ldots\cdot ,m_{l(\lambda)}!} (-1)^{m_1+m_2+\ldots +m_{l(\lambda)}}$$



where $\text{Li}$ is polylogarithm, $\psi$ is polygamma, and the last sum is over all partitions $\lambda = \left(1^{m_1} 2^{m_2} \ldots \right) $ and $l(\lambda)$ is partition length.



Feel free to tidy up and improve if you want; I don't have the will right now.


Answer



Here is the proof and Mathematica code to play with. Took me some time to write all this. Thank you again Raymond Manzoni for that nice Edwards book(s) reference; I couldn't have done it without it. It would be nice to see a proof for identities (37) and (38) in the paper; one proof is in Edwards' book but it's not too elegant. One way is to reduce it to the hypergeometric identity http://functions.wolfram.com/07.31.03.0032.01 but I don't know how to prove that identity. Also, the final expressions are equivalent to the ones I wrote above without proof, but more elegant.



You can download it here: download link




and the .tex file here: download link



This code is also much nicer than the last one; you can download the Mathematica notebook here: download link


linear algebra - Prove that a square matrix commutes with its inverse





The Question:



This is a very fundamental and commonly used result in linear algebra, but I haven't been able to find a proof or prove it myself. The statement is as follows:




let $A$ be an $n\times n$ square matrix, and suppose that $B=\operatorname{LeftInv}(A)$ is a matrix such that $BA=I$. Prove that $AB=I$. That is, prove that a matrix commutes with its inverse, that the left-inverse is also the right-inverse





My thoughts so far:



This is particularly annoying to me because it seems like it should be easy.



We have a similar statement for group multiplication, but the commutativity of inverses is often presented as part of the definition. Does this property necessarily follow from the associativity of multiplication? I've noticed that from associativity, we have
$$
\left(A\operatorname{LeftInv}(A)\right)A=A\left(\operatorname{LeftInv}(A)A\right)
$$

But is that enough?



It might help to talk about generalized inverses.


Answer



Your notation $A^{-1}$ is confusing because it makes you think of it as a two-sided inverse, but we only know it's a left-inverse.



Let's call $B$ the matrix so that $BA=I$. You want to prove $AB=I$.



First, you need to prove that there is a $C$ so that $AC=I$. To do that, you can use the determinant, but there must be another way. [EDIT] There are several methods here. The simplest (imo) is the one using the fact that the matrix has full rank.[/EDIT]




Then you have that $B=BI=B(AC)=(BA)C=IC=C$ so you get $B=C$ and therefore $AB=I$.


polynomials - Commas under the Sigma notation



This might be a trivial notation question but I am having trouble understanding the following equation, describing a polynomial as sum of monomials:



$P(x_1, x_2, \dots, x_n) = \sum\limits_{i_1, i_2, \dots, i_l} a_{i_1, i_2, \dots, i_l} x_{i_1} x_{i_2} \dots x_{i_l}$.



I am not sure what this notation implies as it's the first time I have come across it. Is it saying:



$P(x_1, x_2, \dots, x_n) = a_{i_1}x_{i_1} + a_{i_2}x_{i_2}+ \dots + a_{i_l}x_{i_l}$ ??




But later the text says that the monomials are of the form: $x_{i_1} x_{i_2} \dots x_{i_l}$.



Thanks!


Answer



The notation $$\sum_{i, j}$$ means "the sum over the variables $i$ and $j$ independently". For example, $$\sum_{i, j} x_{i} x_{j} = (x_1 x_1 + x_1 x_2 + \dots + x_1 x_n) + (x_2 x_1 + x_2 x_2 + \dots + x_2 x_n) + \dots$$
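If it helps to see the notation operationally: summing over $i,j$ independently is just a pair of nested loops, and for this particular summand it equals $(\sum_i x_i)^2$. A small Python sketch (the list x is an arbitrary example):

x = [2, 3, 5, 7]

double_sum = sum(x[i] * x[j] for i in range(len(x)) for j in range(len(x)))
print(double_sum, sum(x) ** 2)  # both print 289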


calculus - Value of $\int_{-\infty}^0 \frac{\mathrm dx}{(49+x) \sqrt{x}}$?



I've been trying to find this integral:$$\int_{- \infty}^0 \frac{\mathrm dx}{(49+x) \sqrt{x}}$$



I used a trigonometric substitution, and after some work arrived at this:



$$\arctan\left(\frac{\sqrt{x}}{7}\right)$$




So if written properly, it would look like:



$$\lim_{k \to -\infty} \arctan\left.\left(\frac{\sqrt{x}}{7}\right) \right|^{\,0}_{\,k}$$



Evaluating zero wasn't an issue, however taking the limit was. How can k approach negative infinity if it'll appear in a square root?



Wolfram Alpha told me this integral did not converge.
Another online calculator told me it was zero.
The person who gave this problem said it converged.




Any ideas? Thank you for your input!


Answer



This integral does not converge as either a Riemann or Lebesgue integral.



However, the integral does exist in the Cauchy Principal Value sense:
$$\newcommand{\PV}{\mathrm{PV}}
\begin{align}
\PV\int_{-\infty}^0\frac{\mathrm{d}x}{(49+x)\sqrt{x}}
&=-2i\,\PV\int_0^\infty\frac{\mathrm{d}x}{49-x^2}\tag{1}\\
&=\frac{i}7\,\PV\int_0^\infty\left(\frac1{x-7}-\frac1{x+7}\right)\,\mathrm{d}x\tag{2}\\

&=\lim_{L\to\infty}\frac{i}7\,\PV\int_0^L\left(\frac1{x-7}-\frac1{x+7}\right)\,\mathrm{d}x\tag{3}\\
&=\lim_{L\to\infty}\left(\frac{i}7\,\PV\int_{-7}^{L-7}\frac1x\,\mathrm{d}x\right)-\lim_{L\to\infty}\left(\frac{i}7\int_7^{L+7}\frac1x\,\mathrm{d}x\right)\tag{4}\\
&=\lim_{L\to\infty}\left(\frac{i}7\,\PV\int_{-7}^7\frac1x\,\mathrm{d}x\right)-\lim_{L\to\infty}\left(\frac{i}7\int_{L-7}^{L+7}\frac1x\,\mathrm{d}x\right)\tag{5}\\[9pt]
&=0\tag{6}
\end{align}
$$
Explanation:
$(1)$: substitute $x\mapsto-x^2$
$(2)$: partial fractions
$(3)$: write improper integral as a limit
$(4)$: substitute $x\mapsto x+7$ on the left and $x\mapsto x-7$ on the right
$(5)$: subtract the integral on $[7,L-7]$ from both limits
$(6)$: evaluate integrals
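As a numerical illustration (not part of the argument): SciPy's quad with weight='cauchy' computes the principal value of $\int f(x)/(x-c)\,\mathrm{d}x$, so one can check that $\mathrm{PV}\int_0^\infty\frac{\mathrm{d}x}{49-x^2}=0$ by splitting the range at $x=14$.

import numpy as np
from scipy.integrate import quad

# PV of the integral of 1/(49 - x^2) over [0, 14]: write it as (-1/(x+7))/(x-7)
# and let the 'cauchy' weight handle the singularity at x = 7.
pv_part, _ = quad(lambda x: -1.0 / (x + 7), 0, 14, weight='cauchy', wvar=7)

# The remaining piece over [14, infinity) is an ordinary convergent integral.
tail, _ = quad(lambda x: 1.0 / (49 - x**2), 14, np.inf)

print(pv_part + tail)  # approximately 0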


Thursday 12 September 2019

algebra precalculus - Exponential Form of Complex Numbers - Why e?





Please delete this question; it is a duplicate. Thank you! I cannot delete the question myself.



Thanks!


Answer



If you remember series, notice that



$$ e^{i x } = \sum_{n \geq 0} \frac{ i^n x^n }{n!} $$



Now, notice that $i^2 = -1$ but $i^{3} = -i$, and $i^4 = 1$ and $i^5 = i$, and so on, and since




$$ \sin x = \sum_{n \geq 0} \frac{ (-1)^n x^{2n+1 } }{(2n+1)!} \; \; \text{and} \; \;\cos x = \sum_{n \geq 0} \frac{ (-1)^n x^{2n } }{(2n)!} $$



after breaking the first summation into even and odd $n$ and seeing, as noted above, how the powers of $i$ alternate, one obtains the result.
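A small numerical sanity check of the resulting identity $e^{ix}=\cos x + i\sin x$ (illustration only):

import cmath, math

for x in [0.5, 1.0, 2.0]:
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    print(x, abs(lhs - rhs))  # differences are at the level of rounding error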


limits - Function sequence $\left( \cos\left( \frac{x}{\sqrt{n}} \right) \right)^n$




I'm studying uniform convergence of function sequences. I haven't been able to prove that



$$\lim_{n \to \infty} \left( \cos\left( \frac{x}{\sqrt{n}} \right) \right)^n=e^{-\frac{x^2}{2}}.$$




Can you help me, please?


Answer



Alternatively $$\lim _{ n\to \infty } \left( \cos \left( \frac { x }{ \sqrt { n } } \right) \right) ^{ n }=exp\left( \lim _{ n\to \infty } n\ln { \left( \cos \left( \frac { x }{ \sqrt { n } } \right) \right) } \right) =\\ =exp\left( \lim _{ n\to \infty } \frac { \ln { \left( \cos \left( \frac { x }{ \sqrt { n } } \right) \right) } }{ \frac { 1 }{ n } } \right) \overset { L'Hopital }{ = } exp\left( \lim _{ n\to \infty } \frac { -\frac { \sin { \left( \frac { x }{ \sqrt { n } } \right) } }{ \cos \left( \frac { x }{ \sqrt { n } } \right) } }{ -\frac { 1 }{ { n }^{ 2 } } } \left( -\frac { x }{ 2n\sqrt { n } } \right) \right) =\\ =exp\left( -\lim _{ n\to \infty } \tan { \left( \frac { x }{ \sqrt { n } } \right) } \frac { \sqrt { n } }{ 2x } { x }^{ 2 } \right) ={ e }^{ -\frac{x^2}{2} }$$
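Numerically the convergence is easy to see; a quick Python sketch (the choice $x=2$ is arbitrary):

import math

x = 2.0
for n in [10, 100, 10000, 1000000]:
    print(n, math.cos(x / math.sqrt(n)) ** n)
print("limit:", math.exp(-x**2 / 2))  # e^{-2} is approximately 0.1353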


logic - Proof by deduction - implications



Currently trying to explain some maths to a friend.



He has taken a statement $x^2 + 4 > 2x$ and tried to prove this is true for all $x$.




His proof is $x^2+4>2x \Rightarrow x^2-2x + 4 > 0 \Rightarrow (x-1)^2 + 3 > 0$ which is true so the original statement is true.



However this starts at the wrong place and the implication goes in the wrong direction. So I think it’s wrong and I can’t seem to convince him of this or find some basic examples to illustrate the point that statement X $\rightarrow$ true statement doesn’t mean that X is true....



So can anyone explain to me why it’s wrong using some basic counterexamples perhaps so I can have the knowledge to explain why it is wrong...



Thanks


Answer



Your friend is correct; the subtlety is that all his steps are reversible, so a clear way to put it is:

$$
x^2+4>2x \iff x^2-2x+4>0 \iff (x-1)^2+3>0
$$

This way the truth of the last statement implies the truth of the first.
But you are correct to be cautious, a case where things would go wrong is with squares. For example:
$$
x=1 \Rightarrow x^2 = 1 \Rightarrow x=1~\text{or}~x =-1
$$

The last sentence is true if $x=-1$, but the first would be false.


complex numbers - Find square roots of $8 - 15i$

Find the square roots of:
$8-15i.$



Could I get some working out to solve it?



Also what are different methods of doing it?

Wednesday 11 September 2019

abstract algebra - How can we show that all the roots of some irreducible polynomial are not of algebraically equal status?



In studying Galois theory, I found that the roots of an irreducible polynomial are not necessarily all of equal algebraic status, because the Galois group of an irreducible polynomial may not be the full symmetric group $S_n$.




But I am searching for a concrete example elucidating that the roots are not all algebraically equal and can be distinguished in an algebraic way.




One of my attempts is this.




Let $p(t)$ be an irreducible separable polynomial over $F$ and $E$ its splitting field over $F$. Let $\alpha_1,\alpha_2,\cdots,\alpha_n$ be all the roots of $p(t)$. Then $E=F(\alpha_1,\alpha_2,\cdots,\alpha_n)$ and we consider the tower of fields $F\le F(\alpha_1) \le F(\alpha_1,\alpha_2)\le \cdots \le F(\alpha_1,\alpha_2,\cdots,\alpha_n)$.



I guess that the $\deg (irr(\alpha_i,F(\alpha_j)))$ may differ depending on the choice of $\alpha_i$ and $\alpha_j$. If this is true, I can suggest this to support my claim that all roots are not algebraically equal.



But I am not able to find an apt example supporting my guess.



Do you know an example verifying my guess? Or, if you have any idea that helps convince me that the roots are not all algebraically equal, please share it with me.



Thanks for reading my question and any comment will be appreciated!


Answer




Sure, what you're asking for can happen. For a simple example, take $F=\mathbb{Q}$ and $p(t)=t^4-2$. The roots are $\alpha_1=\sqrt[4]{2}$, $\alpha_2=-\sqrt[4]{2}$, $\alpha_3=i\sqrt[4]{2}$, and $\alpha_4=-i\sqrt[4]{2}$. Note then that $\alpha_2\in F(\alpha_1)$ (so its minimal polynomial over $F(\alpha_1)$ has degree $1$), while $\alpha_3\not\in F(\alpha_1)$ (and its minimal polynomial over $F(\alpha_1)$ has degree $2$).



However, I would object somewhat to your phrasing that this means the roots are "algebraically distinct". It is always true that the Galois group acts transitively on the roots (as long as $p$ is irreducible): that is, for any $i$ and $j$, there is an automorphism of $E$ over $F$ that maps $\alpha_i$ to $\alpha_j$. So you can't really distinguish $\alpha_i$ from $\alpha_j$ (at least, from the perspective of $F$). All you can really say is that you can distinguish certain subsets of the roots from other subsets of the roots of the same size. For instance, in the example above, the set $\{\alpha_1,\alpha_2\}$ is in a strong sense distinguishable from $\{\alpha_1,\alpha_3\}$ over $F$.
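If you want to see this computationally, SymPy can factor $t^4-2$ over $\mathbb{Q}(\sqrt[4]{2})$. This is a sketch: I believe factor accepts an extension argument for algebraic number fields, but treat the exact call (and the displayed output) as an assumption to verify.

from sympy import symbols, factor, root

t = symbols('t')
# Over Q(2^(1/4)) two of the roots split off as linear factors,
# while the two non-real roots remain bundled in a quadratic factor.
print(factor(t**4 - 2, extension=root(2, 4)))
# expected, up to ordering: (t - 2**(1/4))*(t + 2**(1/4))*(t**2 + sqrt(2))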


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...