Saturday 30 November 2019

arithmetic - Why does $987,654,321$ divided by $123,456,789 = 8$?

Why does $987,654,321$ divided by $123,456,789 = 8$?



Is it a coincidence or is there a special reason?



Note: The numbers are a mirror of each other
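Worth checking the arithmetic before looking for a reason: the division is not exact. Since $123456789 \times 8 = 987654312$, we have $987654321 = 8\times 123456789 + 9$, so the quotient is $8.0000000729\ldots$, only very close to $8$.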

real analysis - Continuous functional on the linear operator




Let $\Pi, \hat \Pi$ be two linear operators from $U$ to $V$. The norm-distance is defined as $$||\hat \Pi- \Pi||=\sup_{x\in U}\frac{||(\hat \Pi- \Pi)x||}{||x||}$$



Let us define a continuous bounded function $g:V\rightarrow \mathbb{R}$,



Claim: For every $\epsilon>0$, $\exists \delta>0$ such that
$$\sup_{x\in U}\frac{||(\hat \Pi- \Pi)x||}{||x||}<\delta\Rightarrow \sup_{x\in U}\frac{||g(\hat \Pi x)- g(\Pi x)||}{||x||}<\epsilon.$$



Is the claim right based only on the continuity and boundedness assumptions on $g$? Or do we need uniform continuity as well?



Thanks.


Answer




The claim is not true for merely continuous and bounded $g$. For instance, take
$$
g(v):=\min( \sqrt{\|v\|},1).
$$
Fix $x$ such that $\Pi x \ne \hat\Pi x$.
Then for $s>0$ such that $\|s\hat\Pi x\|\le1$ and $\|s\Pi x\|\le1$
$$
\|g(s\hat\Pi x)-g(s\Pi x) \|= \sqrt s\left|\sqrt{\|\hat\Pi x\|} - \sqrt{\|\Pi x\|}\right|,
$$
hence for $s\searrow 0$ the quantity

$$
\frac{\|g(s\hat\Pi x)-g(s\Pi x) \|}{\|sx\|}
$$
blows up (it is of order $s^{-1/2}$).






If, in addition, $g$ is globally Lipschitz continuous, i.e., $|g(v_1)-g(v_2)|\le L \|v_1-v_2\|$ for all $v_1,v_2\in V$, the claim can be proven easily.
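For completeness, a one-line sketch of the Lipschitz case: taking $\delta=\epsilon/L$,
$$\sup_{x\in U}\frac{|g(\hat\Pi x)-g(\Pi x)|}{\|x\|}\le L\,\sup_{x\in U}\frac{\|(\hat\Pi-\Pi)x\|}{\|x\|}=L\,\|\hat\Pi-\Pi\|<\epsilon.$$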


complex analysis - Refining my knowledge of the imaginary number

So I am about halfway through complex analysis (using Churchill and Brown's book) right now. I began thinking some more about the nature and behavior of $i$ and ran into some confusion. I have seen the definition of $i$ in two different forms: $i = \sqrt{-1}$ and $i^2 = -1$. Now I know that these two statements are not equivalent, so I am confused as to which is the 'correct' definition. I see quite frequently that the first form is a common mistake, but then again Wolfram MathWorld says otherwise. So my questions are:




  1. What is the 'correct' definition of $i$ and why? Or are both definitions correct and you can view the first one as a principal branch?


  2. It seems that if we are treating $i$ as the number with the property $i^2 = -1$, it is implied that we are treating $i$ as a concept and not necessarily as a "quantity"?


  3. If we are indeed treating $i$ as a concept rather than a "quantity", how would things such as $i^i$ and other equations/expressions involving $i$ be viewed? How would such an equation have value if we treat $i$ like a concept?





I've checked around on the various imaginary number posts on this site, so please don't mark this as a duplicate. My questions are different than those that have already been asked.

Friday 29 November 2019

reference request - Is this divisibility test for 4 well-known?



It has just occurred to me that there is a very simple test to check if an integer is divisible by 4: take twice its tens place and add it to its ones place. If that number is divisible by 4, so is the original number.



This result seems like something that anybody with an elementary knowledge of modular arithmetic could realize, but I have noticed that it is conspicuously missing on many lists of divisibility tests (for example, see here, here, here, or here). Is this divisibility test well-known?


Answer



Yes, it is well known; do you know modular arithmetic? Assuming you do, we have a three-digit number $\overline{abc}=a\cdot 10^2+b\cdot 10^1+c\cdot 10^0$. Now $$a\cdot 10^2+b\cdot 10^1+c\cdot 10^0\equiv 2\cdot b+c\pmod{4}.$$ Many people know the multiples of $4$ for numbers less than $100$, so it is commonly just said that if the last two digits (read as a two-digit number) are divisible by $4$, then the number is divisible by $4$.
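For what it's worth, a small numerical sketch of the test (the helper name is mine):

def divisible_by_4(n):
    # Test from the question: twice the tens digit plus the ones digit.
    n = abs(n)
    tens, ones = (n // 10) % 10, n % 10
    return (2 * tens + ones) % 4 == 0

# spot-check against the direct test
assert all(divisible_by_4(n) == (n % 4 == 0) for n in range(100000))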



reference request - Book on Advanced Calculus

I'm an undergraduate student of physics. I have an upcoming course on Advanced Calculus but I do not know which book to follow, so please recommend an undergraduate-level book on Advanced Calculus.



Edit: The syllabus has 1. Multiple integration, 2. Partial Derivatives and 3. Applications of Partial Derivatives. (It has one more block, it's something basic though)



Edit 2: I just looked it up on Quora to find out what Advanced Calculus actually is. They say it includes things like Green's theorem, Stokes' theorem, line integrals, linear algebra (I have a separate course on this), Fourier analysis, Taylor series, $\mathbb{R}^n$ and $\mathbb{R}$ (infinity), etc.
My course includes all of these and even more (I listed them just to give an idea).

real analysis - Properties of a continuous map $f : \mathbb{R}^2\rightarrow \mathbb{R}$ with only finitely many zeroes



Let $f : \mathbb{R}^2\rightarrow \mathbb{R}$ be a continuous map such that $f(x)=0$ only for finitely many values of $x$. Which of the following is true?





  • Either $f(x)\leq 0$ for all $x$ or $f(x)\geq 0$ for all $x$.

  • the map $f$ is onto.

  • the map $f$ is one one.

  • None of the above.



What I have done so far is :




  • I would take a polynomial in two variables. Such a polynomial need not satisfy $f(x)\leq 0$ for all $x$ or $f(x)\geq 0$ for all $x$. So, the first option is eliminated.


  • The map is not one one assuming that $f$ has more than one zero. So, third option is wrong.



I could not think of an example in which it is not onto.



The only examples coming to my mind are polynomials, and they are onto.



So, I am having trouble with the surjectivity of the function.



Please help me clear this up.




Thank you.


Answer




  • Either $f(x)\leq 0$ for all $x$ or $f(x)\geq 0$ for all $x$.



This one is the most interesting.
It can happen that $f$ does not change sign under these conditions: take $u^2+v^2$, for example (writing $x=(u,v)$). It is nonnegative and has finitely many zeros (a single one). You can get any given number of zeros with $\prod_{i=1}^n \left((u-u_i)^2+(v-v_i)^2\right)$. Notice that none of these polynomials ever changes sign.







Now, interestingly, if a continuous function $f$ from $\Bbb R^2$ to $\Bbb R$ has finitely many zeros, then it can never change sign.



Assume the contrary: then there are two points $x$ and $y$ such that $f(x)>0$ and $f(y)<0$. Since $f$ has only finitely many zeros, it must be positive for infinitely many points, or negative for infinitely many points, or both. Assume the former, without loss of generality, because otherwise you could take $-f$. So, $f$ is positive for infinitely many points (and maybe also negative for infinitely many others, but we don't care).



Draw a ray from $y$, in any direction $d$. If it "hits" a positive value of $f$, that is $f(y+\lambda_d d)>0$ for some $\lambda_d>0$, then there is a zero of $f$ lying between $y_d=y+\lambda_d d$, and $y$, by continuity of $f$, and especially by continuity of the restriction, $\lambda \to f(y+\lambda d)$.



Now, two cases may happen:





  • for infinitely many directions, you can find such a $y_d$, thus there are infinitely many zeros of $f$ (they are all different, since they are on different rays), contradiction.


  • you can find only finitely many such directions. Thus, for infinitely many directions, you find only negative or null values along the ray. Now, draw a circle centered at $y$, with radius larger than $|x-y|$, so that your point $x$ is inside the circle. For infinitely many points $t_i$ on this circle, $f(t_i) \leq 0$. But $f$ has finitely many zeros, thus indeed, for infinitely many points $x_i$ on this circle, $f(x_i) < 0$ (with strict inequality). But then, trace rays between $x_i$ and $x$, and you will find again infinitely many zeros of $f$, hence again a contradiction.




And you are done.



The proof looks rather convoluted to me, maybe there is some obvious argument I didn't see.



By the way, it's trivial to generalize to $\Bbb R^n$ for $n>1$: just pick the restriction of $f$ to a plane where it would take on both positive and negative values.







I'll try to add a bit of intuition. Sadly, I don't have a scanner to show pictures...



It is easy to find a polynomial that has any given number $n$ of zeros, as shown above. But these examples are either always nonnegative or always nonpositive, so you can't draw any conclusion from them. Remember, you are in $\Bbb R^2 \to \Bbb R$, thus polynomials don't necessarily have finitely many zeros! And actually, you often get a curve of zeros; it's a usual way to define curves, for example the unit circle, defined as the zeros of $(u,v) \to u^2+v^2-1$.



Now, assume $f$ takes on both positive and negative values. Then $f^{-1}(]-\infty,0[)$ and $f^{-1}(]0, +\infty[)$ are both open sets. My idea was that if both are nonempty, the boundary can't be finite.



Now, the easiest way to find a zero of a continuous function is if it is a function of one variable, and you know it has a positive and a negative value: you just have to use the good old bisection algorithm.




Thus, drawing rays (or half-lines) enables you to look only at functions of one variable. The only thing that remains to do is finding enough zeros, that is enough directions, to lead to a contradiction.






Even if the two other points in the question have easy counterexamples, it's interesting to see if they can happen at all.




  • the map $f$ is onto.




That is, surjective.



Not necessarily. Take $(u,v) \to \frac{1}{1+u^2+v^2}$, which has finitely many (none) zeros, and is bounded.



But, as seen in the answer to the first case, $f$ can't change sign, thus it is always $\leq 0$ or always $\geq 0$, thus certainly never onto $\Bbb R$.




  • the map $f$ is one one.




Not if there is more than one zero, of course, and we saw examples above.



But, could it happen at all? No, there is a proof here: Is there a continuous bijection from $\mathbb{R}$ to $\mathbb{R}^2$? You can also conclude using the fact that if $f$ is continuous and bijective, it has exactly one zero, hence it does not change sign, a contradiction because then it can't be bijective.


analysis - Show convergence of recursive sequence and find limit value



Let $(a_n)_{n \in \mathbb N}$ be a recursive sequence. It is defined as $a_1=1, \quad a_{n + 1} = \frac{4a_n}{3a_n+3}$.



I have to show that the sequence converges and find its limit.



To show convergence I was about to use the Cauchy criterion. Unfortunately, I am quite confused here because of the recursive definition.




Question: How can I show that the sequence converges, and how can I find its limit?


Answer



We already proved (in Show limit for recursive sequence by induction) that $a_n\geq {1\over 3}$, so
$$a_{n+1}-a_n=a_n{1-3a_n\over 3a_n+3}\leq 0,$$
so the sequence is decreasing and bounded, hence convergent. Say $a$ is its limit; then
\begin{eqnarray*}
a&=&\lim _{n\to \infty} a_{n+1} \\
&=& \lim _{n\to \infty} {4a_n\over 3a_n+3}\\
&=& {\lim _{n\to \infty}4a_n\over \lim _{n\to \infty}(3a_n+3)}\\
&=& {4a\over 3a+3}
\end{eqnarray*}
So we have to solve the equation $$a={4a\over 3a+3}\Longrightarrow 3a^2-a=0 \Longrightarrow a=0\;\; {\rm or}\;\; a=1/3.$$
Since all members of the sequence are $\geq {1\over 3}$, we have $$a ={1\over 3}.$$
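A quick numerical sanity check of this limit (a sketch):

a = 1.0  # a_1
for _ in range(60):
    a = 4*a / (3*a + 3)   # a_{n+1} = 4 a_n / (3 a_n + 3)
print(a)  # ~0.33333333..., i.e. 1/3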


elementary number theory - Geometrical intuition for sum of first n cubes

The relation



$$
\sum_{k=1}^n k^3 = \left(\sum_{k=1}^n k\right)^2
$$



baffled me when I first found out (i.e. yesterday on a train trip). Writing an inductive proof is easy and I know that there is a recursive way to obtain a general formula for




$$
\sum_{k=1}^n k^j
$$



for any $j \in \mathbb{N}$, but I feel like this relationship between the sum of the first n cubes and the sum of the first n integers should have some nice geometrical proof. The closest thing I found was the first answer to this question, but I still don't find it intuitively clear. Maybe I am asking for too much here.

Thursday 28 November 2019

multivariable calculus - Changing order of integration: $\int_0^\infty\int_{-\infty}^{-y}f(x)\,\mathrm dx\,\mathrm dy\Rightarrow\int_{-\infty}^0\int_0^{-x}f(x)\,\mathrm dy\,\mathrm dx$




Why does $$\int_{0}^{\infty} \int_{-\infty}^{-y} f(x)\mathrm dx \mathrm dy \Rightarrow \int_{-\infty}^{0} \int_{0}^{-x} f(x) \mathrm dy \mathrm dx$$




The title is pretty self-explanatory. I couldn't see how to properly change the order of the left integral to the right one.



I'd love to hear your thoughts, thanks.


Answer




The easiest way to perform a change of the order of integration in the multivariable setting is via Iverson's bracket. This is the indicator function such that $$[P] =\begin{cases} 1 & P \text{ is true}\\
0 & \text{else}.\end{cases}$$
The change of variable arises from reinterpreting the system of inequalities (see below).



With the Iverson notation, one can remove the boundaries from the integral and implement it in the integrand, i.e.,
$$\int_{0}^{\infty} \int_{-\infty}^{-y} f(x,y)dx \,dy =\iint_{\mathbb{R}^2} f(x,y)\Bigl[(y\geq 0) \text{ and } (x\leq-y) \Bigr]dx\, dy \,.$$



Now in order to perform the change of the order of integration, you have to reinterpret Iverson's bracket. You have to figure out what condition $$P=(y\geq0) \text{ and } (x\leq-y)$$ poses on $x$ first.



The maximal value that $x$ can achieve is $0$ (when $y=0$). The second condition in $P$ is equivalent to
$$ x \leq -y \Leftrightarrow y \leq -x \,.$$

The first condition demands that $y\geq0$. Together (up to boundary sets of measure zero, which do not affect the integral), we find that $P$ is equivalent to
$$ P \Leftrightarrow (x<0) \text{ and } (0 < y < -x )\,.$$



So we find
$$\int_{0}^{\infty} \int_{-\infty}^{-y} f(x,y)dx\, dy =\iint_{\mathbb{R}^2} f(x,y)\Bigl[(x<0) \text{ and } (0 < y < -x )\Bigr]dx \,dy = \int_{-\infty}^0 \int_{0}^{-x} f(x,y) \,dy\,dx \,.$$


probability - Poisson Distribution when only given using mean

I'm doing the following homework problem and am unsure of whether or not my answers are correct. This is my first time working with Poisson distribution and I want to make sure I am doing it correctly.





Suppose that the number of drivers who travel between a particular origin and destination during a designated time period has a Poisson distribution with mean $u = 20$. What is the probability that the number of drivers will



a. Be at most 10?



b. Exceed 20?



c. Be between 10 and 20, inclusive? Be strictly between 10 and 20?



d. Be within 2 standard deviations of the mean value?





I'm pretty much just trying to follow the formula that I was given for Poisson distribution and have the following answers:



a. $P(x\le 10) = \sum_{x=0}^{10}\frac {e^{20} \times 20^x}{x!} $



b. $P(x>20) = \frac {e^{20} \times 20^{20}}{20!} $



c. $P(10 \le x \le 20) = \sum_{x=10}^{20}\frac {e^{20} \times 20^x}{x!} $



$P(10 < x <20) = \sum_{x=10}^{20}\frac {e^{20} \times 20^x}{x!} $




d. not sure still
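For what it's worth, a quick numerical check, assuming SciPy is available (a sketch, not a graded answer). Note that the Poisson pmf is $e^{-\mu}\mu^k/k!$, so the factors above should be $e^{-20}$ rather than $e^{20}$, and part (b) needs the whole upper tail, not just the single term $k=20$. Reading part (d) as $P(12\le X\le 28)$, using $\sigma=\sqrt{20}\approx 4.47$, is my assumption:

from scipy.stats import poisson
from math import ceil, floor, sqrt

mu = 20
p_a = poisson.cdf(10, mu)                               # a. P(X <= 10)
p_b = 1 - poisson.cdf(20, mu)                           # b. P(X > 20)
p_c_incl = poisson.cdf(20, mu) - poisson.cdf(9, mu)     # c. P(10 <= X <= 20)
p_c_strict = poisson.cdf(19, mu) - poisson.cdf(10, mu)  # c. P(10 < X < 20)
sd = sqrt(mu)
lo, hi = ceil(mu - 2*sd), floor(mu + 2*sd)              # integers within 2 sd of the mean
p_d = poisson.cdf(hi, mu) - poisson.cdf(lo - 1, mu)     # d. P(12 <= X <= 28)
print(p_a, p_b, p_c_incl, p_c_strict, p_d)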

Wednesday 27 November 2019

multivariable calculus - How to find the boundaries of integration for e.g. triple integrations?



I'm having a lot of trouble finding from where to where I have to integrate when splitting up a triple integral into 3 integrals.



I've already posted a question regarding this but while that helps for that specific problem I'd like to know what technique I generally need to apply to solve it.



Here's an example of the type of problem I'm talking about:





I need to calculate the following in cylindrical coordinates:
$$\iiint_K \sqrt{x^2+y^2+z^2}\,dx\,dy\,dz$$
$K$ is bounded by the plane $z=3$ and by the cone $x^2+y^2=z^2$.




That question in particular can be found here Calculating $\iiint_K \sqrt{x^2+y^2+z^2}\,dx\,dy\,dz$., but again, I'm not looking for an answer to that particular integral in this question, I'm merely asking for a good way to solve most of these types of problems.


Answer



The key is to somehow translate the equations defining the bounding surfaces of the region of integration into a system of inequalities that explicitly defines the region through bounds on the coordinates.



Consider the example problem you provided. The region $K$ is defined implicitly to be the region between two bounding surfaces, the plane $z=3$ and the (double-)cone $x^2+y^2=z^2$. Let's convert to cylindrical coordinates as the problem suggests. Substituting $x=\rho\cos{\phi}$ and $y=\rho\sin{\phi}$, the equation for the plane is unchanged, but the equation for the cone becomes:




$$z^2=x^2+y^2=\rho^2\cos^2{\phi}+\rho^2\sin^2{\phi}=\rho^2\\
\implies z=\pm\rho.$$



We're only interested in the cone corresponding to the solution $z=+\rho$ since this is the cone intersected by the plane $z=3$. Now, any point in this region $K$ between the cone $z=\rho$ and plane $z=3$ will have $z$ coordinates satisfying the inequality,



$$\rho\le z\le 3.$$



The lower bound on $\rho=\sqrt{x^2+y^2}$ is of course $\rho=0$, so we actually have:



$$0\le\rho\le z\le 3.$$




This last inequality above allows you to immediately read off the limits of integration. If you want to integrate with respect to $\rho$ first, then you would integrate $\rho$ from $0$ to $z$, and next you would integrate $z$ from $0$ to $3$. Similarly, if you wanted to integrate with respect to $z$ first, then you would integrate $z$ from $\rho$ to $3$, and next you would integrate $\rho$ from $0$ to $3$.
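To illustrate on the example above (a sketch of the remaining computation): in cylindrical coordinates the integrand is $\sqrt{\rho^2+z^2}$ and the volume element is $\rho\,d\rho\,d\phi\,dz$, so
$$\iiint_K \sqrt{x^2+y^2+z^2}\,dV=\int_0^{2\pi}\!\int_0^3\!\int_0^z \rho\sqrt{\rho^2+z^2}\,d\rho\,dz\,d\phi
=2\pi\int_0^3 \frac{(2\sqrt2-1)z^3}{3}\,dz=\frac{27\pi(2\sqrt2-1)}{2},$$
using $\int_0^z \rho\sqrt{\rho^2+z^2}\,d\rho=\frac13\left[(\rho^2+z^2)^{3/2}\right]_0^z=\frac{(2\sqrt2-1)z^3}{3}$.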


Definition of ordinary differential equation

Is a differential equation ordinary if it only contains derivatives with respect to one variable, even if the function has multiple variables?



For example the function y=f(x,t) and the differential equation
$\frac{d^2y}{dx^2}+2\frac{dy}{dx}=4$; would that be ordinary or partial?




Does the question not make sense because if you already know how many (independent) variables the function (in this case $y$) depends on, then it wouldn't be a differential equation because the function isn't unknown?

elementary set theory - Specify a bijection from [0,1] to (0,1].

A) Specify a bijection from [0,1] to (0,1]. This shows that |[0,1]| = |(0,1]|




B) The Cantor-Bernstein-Schroeder (CBS) theorem says that if there's an injection from A to B and an injection from B to A, then there's a bijection from A to B (i.e., |A| = |B|). Use this to again show that |[0,1]| = |(0,1]|.
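A sketch of one standard construction for part A: define $f:[0,1]\to(0,1]$ by $f(0)=1$, $f(1/n)=1/(n+1)$ for integers $n\ge1$, and $f(x)=x$ for all other $x$. This shifts the sequence $0,1,\frac12,\frac13,\ldots$ along itself and fixes everything else; checking injectivity and surjectivity onto $(0,1]$ is routine. For part B, the inclusion map $(0,1]\to[0,1]$ and, say, $x\mapsto\frac{x+1}{2}$ from $[0,1]$ into $(0,1]$ are both injections, so CBS gives the bijection.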

Tuesday 26 November 2019

real analysis - Showing convergence of the sequence of functions defined by $f_n = \frac{1}{nx +1}$





Let $f_n: (0,1) \to \mathbb{R}: x \mapsto \frac{1}{nx+1}$



Does $(f_n)_{n\geq0}$ converge pointwise? Uniformly?




My attempt:



Let $x \in (0,1)$. Then $\lim_{n \to \infty} f_n(x) = 0$. Hence, $\forall x \in (0,1): f_n(x) \to 0$ and we deduce that $(f_n)$ converges pointwise to $0: (0,1) \to \mathbb{R}: x \mapsto 0$




Now, since $$\sup_{x \in (0,1)}\left \vert \frac{1}{nx+1} - 0\right \vert = 1 \quad\text{for every } n,$$ which does not tend to $0$,



it follows that $(f_n)$ does not converge uniformly.




Questions:



(1) Is this correct?




(2) Are there alternatives to show that it is not uniform convergent?



Answer



Yes, it is correct. I think that this way of proving that the convergence is not uniform is the simplest one. However, I would have added an explanation for the equality$$\sup_{x\in(0,1)}\left|\frac1{nx+1}\right|=1.$$
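For completeness, that explanation can run as follows: for every $x\in(0,1)$ we have $0<\frac{1}{nx+1}<1$, while $\frac{1}{nx+1}\to1$ as $x\to0^+$; hence the supremum equals $1$ for every $n$ (although it is not attained).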


calculus - $\lim_{x\to 0} \frac{\sin x - x}{x^2}$ without L'Hospital or Taylor




It is easy to see that $$\lim_{x\to 0} \frac{\sin x - x}{x^2} =0, $$but I can't figure out for the life of me how to argue without using L'Hospital or Taylor. Any ideas?


Answer



In THIS ANSWER, I used the integral definition of the arcsine function to show that for $0 \le x\le \pi/2$, we have the inequalities



$$x\cos(x)\le \sin(x)\le x \tag 1$$



Using the trigonometric identity $1-\cos(x)=2\sin^2(x/2)$, we see from $(1)$ that



$$-2x\,\,\underbrace{\left(\frac{\sin^2(x/2)}{x^2}\right)}_{\to \frac14}\le \frac{\sin(x)-x}{x^2}\le 0 \tag2$$




Applying the squeeze theorem to $(2)$ yields the coveted limit




$$\bbox[5px,border:2px solid #C0A000]{\lim_{x\to 0}\frac{\sin(x)-x}{x^2}=0}$$



integration - Double polar integral $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{x^2+y^2}2}\,dx\,dy$



Evaluate $$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{x^2+y^2}2}\,dx\,dy$$ using polar coordinates, where the upper limits of the both integrals are infinity and their lower limits are -infinity.




The only reason I am confused is the limits of the integrals. I can't visualise the region of integration and I do not know how to convert the limits into polar form. I know that the integrand will become $re^{-\frac{r^2}2}$.


Answer



By polar coordinates, since we are integrating over the whole $x$-$y$ plane, we have that



$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{x^2+y^2}2}\,dx\,dy=\int_0^{2\pi}\int_0^{\infty}re^{-\frac{r^2}2}\,dr\, d\theta$$



then note that $\frac{d}{dr}\left(e^{-\frac{r^2}2}\right)=-re^{-\frac{r^2}2}$ and use that



$$\int_0^{2\pi}\int_0^{\infty}re^{-\frac{r^2}2}\,dr\, d\theta =\lim_{R\to \infty} \int_0^{2\pi}\int_0^{R}re^{-\frac{r^2}2}\,dr\, d\theta$$
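To finish the computation (the answer leaves it at the limit): by the antiderivative noted above, $\int_0^R re^{-\frac{r^2}2}\,dr=1-e^{-\frac{R^2}2}\to1$ as $R\to\infty$, so the double integral equals $\int_0^{2\pi}1\,d\theta=2\pi$.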



Monday 25 November 2019

sequences and series - How can I show that $\sqrt{1+\sqrt{2+\sqrt{3+\sqrt\ldots}}}$ exists?



I would like to investigate the convergence of




$$\sqrt{1+\sqrt{2+\sqrt{3+\sqrt{4+\sqrt\ldots}}}}$$



Or more precisely, let $$\begin{align}
a_1 & = \sqrt 1\\
a_2 & = \sqrt{1+\sqrt2}\\
a_3 & = \sqrt{1+\sqrt{2+\sqrt 3}}\\
a_4 & = \sqrt{1+\sqrt{2+\sqrt{3+\sqrt 4}}}\\
&\vdots
\end{align}$$




Easy computer calculations suggest that this sequence converges rapidly to the value 1.75793275661800453265, so I handed this number to the all-seeing Google, which produced:





Henceforth let us write $\sqrt{r_1 + \sqrt{r_2 + \sqrt{\cdots + \sqrt{r_n}}}}$ as $[r_1, r_2, \ldots r_n]$ for short, in the manner of continued fractions.



Obviously we have $$a_n= [1,2,\ldots n] \le \underbrace{[n, n,\ldots, n]}_n$$



but as the right-hand side grows without bound (It's $O(\sqrt n)$) this is unhelpful. I thought maybe to do something like:




$$a_{n^2}\le [1, \underbrace{4, 4, 4}_3, \underbrace{9, 9, 9, 9, 9}_5, \ldots,
\underbrace{n^2,n^2,\ldots,n^2}_{2n-1}] $$



but I haven't been able to make it work.




I would like a proof that the limit $$\lim_{n\to\infty} a_n$$
exists. The methods I know are not getting me anywhere.





I originally planned to ask "and what the limit is", but OEIS says "No closed-form expression is known for this constant".



The references it cites are unavailable to me at present.


Answer



For any $n\ge4$, we have $\sqrt{2n} \le n-1$. Therefore
\begin{align*}
a_n
&\le \sqrt{1+\sqrt{2+\sqrt{\ldots+\sqrt{(n-2)+\sqrt{(n-1) + \sqrt{2n}}}}}}\\
&\le \sqrt{1+\sqrt{2+\sqrt{\ldots+\sqrt{(n-2)+\sqrt{2(n-1)}}}}}\\
&\le\ldots\\
&\le \sqrt{1+\sqrt{2+\sqrt{3+\sqrt{2(4)}}}}.
\end{align*}
Hence $\{a_n\}$ is bounded above. It is also monotonically increasing, since passing from $a_n$ to $a_{n+1}$ replaces the innermost $\sqrt n$ by $\sqrt{n+\sqrt{n+1}}>\sqrt n$; a bounded, monotonically increasing sequence converges.
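A quick numerical check (a sketch), building $a_n$ from the inside out:

import math

def a(n):
    s = 0.0
    for k in range(n, 0, -1):  # innermost radical first
        s = math.sqrt(k + s)
    return s

print(a(50))  # ~1.757932756618..., matching the value quoted in the question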


elementary number theory - Why $9$ (and $11)$ are special in testing divisibility by digit sums? (casting out nines & elevens)



I don't know if this is a well-known fact, but I have observed that for every number, no matter how large, that is evenly divisible by $9$, adding its digits repeatedly until a single digit remains always yields $9$.



A quick example of what I mean:




$9*99 = 891$




$8+9+1 = 18$



$1+8 = 9$




This works even with really long numbers like $4376331$



Why is that? This doesn't work with any other number. Similarly for $11$ and alternating digit sums.


Answer



Not quite right, since $9\times 0 = 0$ and the digits don't add up to $9$; but otherwise correct.




The reason it works is that we write numbers in base $10$, and when you divide $10$ by $9$, the remainder is $1$. Take a number, say, $184631$ (I just made it up). Remember what that really means:
$$184631 = 1 + 3\times 10 + 6\times 10^2 + 4\times 10^3 + 8\times 10^4 + 1\times 10^5.$$
The remainder when you divide any power of $10$ by $9$ is again just $1$, so adding the digits gives you a number that has the same remainder when dividing by $9$ as the original number does. Keep doing it until you get down to a single digit and you get the remainder of the original number when you divide by $9$, except that you get $9$ instead of $0$ if the number is a nonzero multiple of $9$.
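For instance, finishing the example: the digits of $184631$ sum to $1+8+4+6+3+1=23$, then $2+3=5$, and indeed $184631=9\cdot20514+5$, so $5$ is the remainder of $184631$ upon division by $9$.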



And since the numbers you are considering are nonzero multiples of $9$, you will always get $9$.



Note that you have a similar phenomenon with $3$ (a divisor of $9$), since adding the digits of a multiple of $3$ will always result in one of the one-digit multiples of $3$: $3$, $6$, or $9$.



If we wrote in base $8$, instead of base $10$, then $7$ would have the property: if you write a number in base $8$ and add the digits (in base 8) until you get down to a single digit between $1$ and $7$, then the multiples of $7$ will always end yielding $7$, for precisely the same reason. And if we wrote in base $16$, then $15$ (or rather, F) would have the property. In general, if you write in base $b$, then $b-1$ has the property.




This is a special case of casting out nines, which in turn is a special case of modular arithmetic. It is what is behind many divisibility tests (e.g., for $2$, $3$, $5$, $9$, and $11$).



Coda. This reminds me of an anecdote a professor of mine used to relate: a student once came to him telling him he had discovered a very easy way to test divisibility of any number $N$ by any number $b$: write $N$ in base $b$, and see if the last digit is $0$. I guess, equivalently, you could write $N$ in base $b+1$, and add the digits to see if you get $b$ at the end.


Sunday 24 November 2019

fake proofs - Why is $i^3$ (the complex number "$i$") equal to $-i$ instead of $i$?




$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i
$$



Please take a look at the equation above. What am I doing wrong, given that $i^3$ should be $-i$, not $i$?


Answer



We cannot say that $\sqrt{a}\sqrt{b}=\sqrt{ab}$ for negative $a$ and $b$. If this were true, then $1=\sqrt{1}=\sqrt{\left(-1\right)\cdot\left(-1\right)} = \sqrt{-1}\sqrt{-1}=i\cdot i=-1$. Since this is false, we have to say that $\sqrt{a}\sqrt{b}\neq\sqrt{ab}$ in general when we extend it to accept negative numbers.
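For completeness, the correct computation avoids the radical rule entirely: $i^3=i^2\cdot i=(-1)\cdot i=-i$.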


algebra precalculus - How to solve $\sqrt{9-4\sqrt{5}}=$?




Need some hints how to solve this: $\sqrt{9-4\sqrt{5}}=$ ?



Thanks.


Answer



$\sqrt{9-4\sqrt{5}}=\sqrt{5+4-2\cdot 2\cdot \sqrt{5}}=\sqrt{(\sqrt{5}-2)^{2}}=|\sqrt{5}-2|=\sqrt{5}-2$ (since $\sqrt{5}>2$)


Saturday 23 November 2019

summation - Sum of an infinite series $(1 - \frac 12) + (\frac 12 - \frac 13) + \cdots$ - not geometric series?



I'm a bit confused as to this problem:




Consider the infinite series:



$$\left(1 - \frac 12\right) + \left(\frac 12 - \frac 13\right) + \left(\frac 13 - \frac 14\right) \cdots$$



a) Find the sum $S_n$ of the first $n$ terms.



b) Find the sum of this infinite series.



I can't get past part a) - or rather I should say I'm not sure how to do it anymore.




Because the problem asks for the sum of the infinite series, I'm assuming the series must be geometric, so I tried to find the common ratio of consecutive terms:



$$r = a_2 / a_1 = \frac{\frac 12 - \frac 13}{1 - \frac 12} = \frac 13$$



Which is fine, but when I check the ratio of the 3rd and the 2nd terms:



$$r = \frac{\frac 13 - \frac 14}{\frac 12 - \frac 13} =\frac 12$$



So the ratio isn't constant... I tried finding a common difference instead, but the difference between 2 consecutive terms wasn't constant either.




I feel like I must be doing something wrong or otherwise missing something, because looking at the problem, I notice that the terms given have a pattern:



$$\left(1 - \frac 12\right) + \left(\frac 12 - \frac 13\right) + \left(\frac 13 - \frac 14\right) + \cdots + \left(\frac 1n - \frac{1}{n+1}\right)$$



Which is reminiscent of the textbook's proof of the equation that yields the sum of the first $n$ terms of a geometric sequence, where every term besides $a_1$ and $a_n$ cancel out and yield



$$a_1 \times \frac{1 - r^n}{1 - r}.$$



But I don't know really know how to proceed at this point, since I can't find a common ratio or difference.



Answer



$$\left(1-\frac12\right)+\left(\frac12-\frac13\right)+\left(\frac13-\frac14\right) +\cdots +\left(\frac{1}{n} - \frac{1}{n+1}\right) $$
$$ = 1-\frac12+\frac12-\frac13+\frac13-\frac14+\frac14 -\cdots +\frac{1}{n} - \frac{1}{n+1}$$
$$ = 1-\left(\frac12-\frac12\right)-\left(\frac13-\frac13\right)-\left(\frac14-\frac14\right)-\space \cdots \space - \left(\frac{1}{n}-\frac{1}{n}\right)-\frac{1}{n+1}$$



Notice how each of the terms in parentheses is zero, so we are left with: $$\boxed{\text{Sum of first n terms: }1-\frac{1}{n+1}}$$



If we want the infinite sum we must take the limit as $n \to \infty$ because $n$ is the number of terms in the sequence. So as $n$ becomes arbitrarily large, $\dfrac{1}{n}$ tends towards $0$ so we get that the sequence of finite sums approaches: $$1-0 = \boxed{1}$$







Not all infinite series need to be arithmetic or geometric! This special one is called a telescoping series.


binomial coefficients - Prove that $\sum\limits_{k=0}^r\binom{n+k}k=\binom{n+r+1}r$ using combinatoric arguments.





Prove that $\binom{n+0}0 + \binom{n+1}1 +\binom{n+2}2
+\ldots+\binom{n+r}r = \binom{n+r+1}r$ using combinatoric arguments.





(EDITED)



I want to see if I understood Brian M. Scott's approach so I will try again using an analogical approach.



$\binom{n+0}0 + \binom{n+1}1 +\binom{n+2}2
+\ldots+\binom{n+r}r = \binom{n+r+1}r$ can be rewritten as $\binom{n+0}n + \binom{n+1}n +\binom{n+2}n
+\ldots+\binom{n+r}n = \binom{n+r+1}{n+1}$




We can use the analogy of people lining up to buy tickets to see a concert. Let's say there are only $n+1$ tickets available for sale. "Choosing" who gets to attend the concert can be done in two ways.



The first way (the RHS): we have $n+1$ tickets for sale but $n+r+1$ people who want to buy them. Thus, there are $\binom{n+r+1}{n+1}$ ways to "choose" who gets to attend the concert.



The second way (the LHS) is to select the last person in line to buy the first ticket (I think this was the step I missed in my first attempt). Then, we choose $n$ from the remaining $n+r$ people to buy tickets. Or we can ban the last person in line from buying a ticket and choose the second-to-last person in line to buy the first ticket. Then, we have $\binom{n+r-1}n$ ways. This continues until we reach the case where we choose the $n+1$ person in line to buy the first ticket (banning everyone behind him/her from buying a ticket). This can be done in $\binom{n+0}n$ ways.



Therefore, adding up each case on the LHS is equal to the RHS.


Answer



You’re on the right track, but you have a discrepancy between choosing $r$ from $n+r+1$ on the right, and choosing $r$ from $n+r$ on the left, so what you have doesn’t quite work. Here’s an approach that does work and is quite close in spirit to what you’ve tried.




Let $A=\{0,1,\ldots,n+r\}$; clearly $|A|=n+r+1$, so $\binom{n+r+1}r$ is the number of $r$-sized subsets of $A$. Now let’s look at a typical term on the left-hand side. The term $\binom{n+k}k$ is the number of ways to choose a $k$-sized subset of $\{0,1,\ldots,n+k-1\}$; how does that fit in with choosing an $r$-sized subset of $A$?



Let $n+k$ be the largest member of $A$ that we do not pick for our $r$-sized subset; then we’ve chosen all of the $(n+r)-(n+k)=r-k$ members of $A$ that are bigger than $n+k$, so we must fill out our set by choosing $k$ members of $A$ that are smaller than $n+k$, i.e., $k$ members of the set $\{0,1,\ldots,n+k-1\}$. In other words, there are $\binom{n+k}k$ ways to choose our $r$-sized subset of $A$ so that $n+k$ is the largest member of $A$ that is not in our set. And that largest number not in our set cannot be any smaller than $n$, so the choices for it are $n+0,\ldots,n+r$. Thus, $\sum_{k=0}^r\binom{n+k}k$ counts the $r$-sized subsets of $A$ by classifying them according to the largest member of $A$ that they do not contain.






It may be a little easier to see what’s going on if you make use of symmetry to rewrite the identity as



$$\sum_{k=0}^r\binom{n+k}n=\binom{n+r+1}{n+1}\;.\tag{1}$$




Let $A$ be as above; the right-hand side of $(1)$ is clearly the number of $(n+1)$-sized subsets of $A$. Now let $S$ be an arbitrary $(n+1)$-sized subset of $A$. The largest element of $S$ must be one of the numbers $n,n+1,\ldots,n+r$, i.e., one of the numbers $n+k$ for $k=0,\ldots,r$. And if $n+k$ is the largest element of $S$, there are $\binom{n+k}n$ ways to choose the $n$ smaller members of $S$. Thus, the left-hand side of $(1)$ also counts the $(n+1)$-sized subsets of $A$, classifying them according to their largest elements.



The relationship between the two arguments is straightforward: the sets that I counted in the first argument are the complements in $A$ of the sets that I counted in the second argument. There’s a bijection between the $r$-subsets of $A$ and their complementary $(n+1)$-subsets of $A$, so your identity and $(1)$ are essentially saying the same thing.


radicals - Prove that if $n$ is a positive integer then $\sqrt{n}+ \sqrt{2}$ is irrational




Prove that if $n$ is a positive integer then $\sqrt{n}+ \sqrt{2}$ is irrational.




The sum of a rational and irrational number is always irrational, that much I know - thus, if $n$ is a perfect square, we are finished.
However, is it not possible that the sum of two irrational numbers be rational? If not, how would I prove this?



This is a homework question in my proofs course.



Answer



Suppose $\sqrt n + \sqrt 2 = \frac pq$ were rational. Multiply both sides by $\sqrt n - \sqrt 2$. Then $n - 2 = \frac{p}{q} ( \sqrt n - \sqrt 2 )$, so $\sqrt n - \sqrt 2$ is also rational. So we have two rational numbers whose difference (which must be rational) is $2 \sqrt 2$, meaning that $\sqrt 2$ is rational, a contradiction.


calculus - How does $a^2 + b^2 = c^2$ work with ‘steps’?





We all know that $a^2+b^2=c^2$ in a right-angled triangle, and therefore that $c < a + b$, so that walking along the red line would be shorter than using the two black lines to get from top left to bottom right in the following graphic:





Now, let's assume that the direct way using the red line is blocked, but instead, we can use the green way in the following picture:






Obviously, the green way isn't any shorter than the black one, it's just $a/2+b/2+a/2+b/2 = a+b$. Now, we can divide the green path again, just like the black path, and get to the purple path. Dividing this one in two halfs again, we get the yellow path:





Now obviously, the yellow path is still as long as the black path from the beginning, it's just $8\cdot\frac a8+8\cdot\frac b8=a+b$. But if we do this segmentation again and again, we approximate the red line - without making the way any shorter. Why is this so?


Answer



Essentially, it is because the distance of the stepped curve from the line does not get small compared to the length of the steps.

An example where the limit is properly found is dividing a circle into $n$ equal parts and computing the sum of the line segments connecting the endpoints of the arcs. This does converge to the length of the circle, because the height of each arc gets arbitrarily small compared to the length of each arc as $n$ gets large.


discrete mathematics - Proof by contradiction and mathematical induction

$\sum_{i=1}^n {2\over3^i}={2\over3}+{2\over9}+\dots+{2\over3^n}=1-{({1\over3})^n}$



I had this problem in class and we proved it using two different methods: contradiction and mathematical induction. I thought I had understood it, but I got stuck at a certain point.



Please point out which step of my thinking is wrong.



For the contradiction,

We assume that there is some positive integer $n$ for which the formula is false, and we take the smallest such $n$.



For that smallest $n$, the formula holds for the shorter sum ${2\over3}+{2\over9}+\dots+{2\over3^{n-1}}$, and comparing it with the sum for $n$ shows that our assumption is false.



(I don't remember how the calculation was made for this proof by contradiction.)



Therefore, our assumption was true.



For induction,




Try out the base case by applying $n=1$: the formula gives ${2\over3}=1-{1\over3}$, which holds.



What would be the next step?
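For what it's worth, a sketch of that next step: assume the formula holds for some $n=k$, i.e. $\sum_{i=1}^k{2\over3^i}=1-\left({1\over3}\right)^k$. Then
$$\sum_{i=1}^{k+1}{2\over3^i}=1-{1\over3^k}+{2\over3^{k+1}}=1-{3\over3^{k+1}}+{2\over3^{k+1}}=1-{1\over3^{k+1}},$$
which is the formula for $n=k+1$, completing the induction.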

Friday 22 November 2019

calculus - Not able to solve $\int\limits_1^n \frac{g(x)}{x^{p+1}} \mathrm dx$



If $p=\frac{7}{8}$ then what should be the value of $\displaystyle\int\limits_1^n \frac{g(x)}{x^{p+1}} \mathrm dx $
when $$g(x) = x \log x \quad \text{or} \quad g(x) = \frac{x}{\log x}? $$



Wondering which way to proceed?




  1. an algebraic substitution,

  2. partial fractions,


  3. integration by parts, or

  4. reduction formulae.



Please don't suggest something like "learn basic calculus first", etc. Kindly help by solving it if possible, because I have been out of touch with calculus for nearly 15 years.


Answer



$$\frac{g(x)}{x^{p+1}}=\frac{x\log x}{x^{p+1}}=\frac{\log x}{x^p}$$



By parts:




$$u=\frac1{x^p}\;,\;\;u'=-\frac p{x^{p+1}}\\v'=\log x\;,\;\;v=x\log x-x$$



Thus:



$$\int\limits_1^n\frac{\log x}{x^p}dx=\left.\left(\frac{\log x}{x^{p-1}}-\frac1{x^{p-1}}\right)\right|_1^n+p\int\limits_1^n\frac{\log x}{x^p}dx-p\int\limits_1^n\frac1{x^p}dx\ldots\ldots$$
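Carrying the dots to the end (a sketch): collecting the two $\int_1^n\frac{\log x}{x^p}\,dx$ terms and simplifying gives
$$\int_1^n\frac{\log x}{x^p}\,dx=\frac{n^{1-p}\big((1-p)\log n-1\big)+1}{(1-p)^2},$$
which for $p=\frac78$ equals $8\,n^{1/8}\log n-64\,n^{1/8}+64$. For the second choice $g(x)=\frac{x}{\log x}$, the integrand is $\frac{1}{x^p\log x}$, which has no elementary antiderivative: substituting $u=\log x$ turns it into $\int e^{(1-p)u}\,\frac{du}{u}$, an exponential-integral.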


real analysis - Integrate $\int \frac{\sin^2(x)}{1+\sin^2(x)}dx$



Got any ideas (what substitution should I use) to evaluate $$\int \frac{\sin^2(x)}{1+\sin^2(x)}dx~?$$



Answer



Write:
$$\frac {\sin^2 x} {1 + \sin^2 x} = \frac {1 +\sin^2 x - 1} {1 + \sin^2 x} = 1 - \frac 1 {1 + \sin^2 x} = 1 - \frac {\sec^2 x} {\sec^2 x + \tan^2 x} = 1 - \frac {\sec^2 x} {1 + 2\tan^2 x} $$



Now substitute $u = \tan x$ and the rest is easy.
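Carrying the substitution through (a sketch): with $u=\tan x$, $du=\sec^2x\,dx$,
$$\int\frac{\sec^2x}{1+2\tan^2x}\,dx=\int\frac{du}{1+2u^2}=\frac1{\sqrt2}\arctan\left(\sqrt2\,u\right)+C,$$
so the original integral is $x-\frac1{\sqrt2}\arctan\left(\sqrt2\tan x\right)+C$.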


real analysis - Prove that $\limsup_{x\to\infty}\left(\cos x + \sin\left(\sqrt2 x\right)\right) = 2$





Prove that
$$
\limsup_{x\to\infty}\left(\cos x + \sin\left(\sqrt2 x\right)\right) = 2
$$




Pretty much always when I ask a question here I do provide some trials of mine to give some background. Unfortunately, this one is such a tough one for me that I don't even see a starting point.



Here are some observations though. Let's denote the function under the limsup as $f(x)$:

$$
f(x) = \cos x + \sin\left(\sqrt2 x\right)
$$



Since sin's argument contains an irrational multiplier the function itself is not periodic, perhaps this may be used somehow. I've tried assuming that there exists $x$ such that the equality holds, namely:
$$
\cos x + \sin\left(\sqrt2 x\right) =2
$$



Unfortunately, I was not able to solve it for $x$. I've then tried to use Mathematica for a numeric solution, but NSolve didn't output anything in Reals.




The problem becomes even harder since there are some constraints on the tools to be used. It is given at the end of the chapter on "Limit of a function". Before the definition of derivatives, so the author assumes the statement might be proven using more or less elementary methods.



Also, I was thinking that it could be possible to consider $f(n),\ n\in\Bbb N$ rather than $x\in\Bbb R$ and use the fact that $\sin(n)$ and $\cos(n)$ are dense in $[-1, 1]$. But not sure how that may help.



What would the argument be to prove the statement in the problem section?


Answer



In $1901$, Minkowski used his geometry of numbers to prove the following theorem${}^{\color{blue}{[1],[2]}}$.





Given any irrational $\theta$ and non-integer real number $\alpha$ such that $x - \theta y - \alpha = 0$ has no solutions in integers. Then for any $\epsilon > 0$, there are infinitely many pairs of integers $p,q$ such that
$$|q(p - \theta q - \alpha)| < \frac14 \quad\text{ and }\quad |p - \theta q - \alpha| < \epsilon$$




Take $(\theta,\alpha) = (\sqrt{2},-\frac14)$, this choice satisfies the condition in above theorem. This means for any $\epsilon > 0$, there are infinitely many pairs of integers $p,q$ such that



$$\left|p - \sqrt{2} q + \frac14\right| < \frac{\epsilon}{6\pi}$$



For such a pair $p,q$ with $q \ne 0$, define




$$(P,Q) = \begin{cases}(p,q), & q > 0\\(-3p-1,-3q) & q < 0\end{cases}$$



If we set $x$ to $2\pi Q$, we will have



$$\sqrt{2}x = 2\pi P + \frac{\pi}{2} + \eta\quad\text{ for some } |\eta| < \epsilon$$



This leads to



$$\cos x = 1 \land \sin(\sqrt{2}x) \ge 1 - |\eta| > 1 - \epsilon
\quad\implies\quad \cos x + \sin(\sqrt{2}x) > 2 - \epsilon$$




Since there are infinitely many such $x$ and they can be as large as one wish, we obtain



$$\limsup_{x\to \infty}\, ( \cos x + \sin(\sqrt{2}x) ) \ge 2 - \epsilon$$



Since $\epsilon$ is arbitrary and $\cos x + \sin(\sqrt{2}x)$ is bounded from above by $2$, we can conclude
$$\limsup_{x\to \infty}\, ( \cos x + \sin(\sqrt{2}x) ) = 2$$



Update




Thinking more about this, since we only use the part $| p - \theta q - \alpha| < \epsilon$ in Minkowski's theorem. This theorem is an overkill. The fact "the fractional part of $\sqrt{2} q$ is dense in $[0,1]$" is enough to derive above limit. Since Minkowski's theorem is a useful theorem to know for such problems, I will leave the answer as is.



References




summation - Why is sum of a sequence $\displaystyle s_n = \frac{n}{2}(a_1+a_n)$?

Is there a way to prove that the sum of the arithmetic progression $a_1, a_2, \dots, a_n$ can be calculated by $\displaystyle s_n = \frac{n}{2}(a_1+a_n)$?
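A sketch of the classical pairing argument: since the common difference cancels, $a_k+a_{n+1-k}=a_1+a_n$ for every $k$, so adding the sum to itself written in reverse order gives
$$2s_n=\sum_{k=1}^n\left(a_k+a_{n+1-k}\right)=n(a_1+a_n),$$
hence $s_n=\frac n2(a_1+a_n)$.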

calculus - Evaluating the integral $\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$?



A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that the given integral:
$$\displaystyle\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$$



Well, can anyone prove this without using Residue theory. I actually thought of doing this:
$$\int_0^\infty \frac{\sin x} x \, dx = \lim_{t \to \infty} \int_0^t \frac{1}{x} \left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots \right) \,\mathrm dx$$
but I don't see how $\pi$ comes here, since we need the answer to be equal to $\dfrac{\pi}{2}$.


Answer




Here's another way of finishing off Derek's argument. He proves
$$\int_0^{\pi/2}\frac{\sin(2n+1)x}{\sin x}dx=\frac\pi2.$$
Let
$$I_n=\int_0^{\pi/2}\frac{\sin(2n+1)x}{x}dx=
\int_0^{(2n+1)\pi/2}\frac{\sin x}{x}dx.$$
Let
$$D_n=\frac\pi2-I_n=\int_0^{\pi/2}f(x)\sin(2n+1)x\ dx$$
where
$$f(x)=\frac1{\sin x}-\frac1x.$$
We need the fact that if we define $f(0)=0$ then $f$ has a continuous

derivative on the interval $[0,\pi/2]$. Integration by parts yields
$$D_n=\frac1{2n+1}\int_0^{\pi/2}f'(x)\cos(2n+1)x\ dx=O(1/n).$$
Hence $I_n\to\pi/2$ and we conclude that
$$\int_0^\infty\frac{\sin x}{x}dx=\lim_{n\to\infty}I_n=\frac\pi2.$$


Tuesday 19 November 2019

linear algebra - Can we prove $BA=E$ from $AB=E$?




I was wondering if $AB=E$ ($E$ is identity) is enough to claim $A^{-1} = B$ or if we also need $BA=E$. All my textbooks define the inverse $B$ of $A$ such that $AB=BA=E$. But I can't see why $AB=E$ isn't enough. I can't come up with an example for which $AB = E$ holds but $BA\ne E$.

I tried some stuff but I can only prove that $BA = (BA)^2$.



Edit: For $A,B \in \mathbb{R}^{n \times n}$ and $n \in \mathbb{N}$.


Answer



If $AB = E$, then (the linear application associated to) $A$ has a right inverse, so it's surjective, and as the dimension is finite, surjectivity and injectivity are equivalent, so $A$ is bijective, and has an inverse. And the inverse is also a right inverse, so it's $B$
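It is worth noting that finite dimension is essential here. A standard infinite-dimensional counterexample: on the space of sequences, the right shift $B(x_1,x_2,\ldots)=(0,x_1,x_2,\ldots)$ and the left shift $A(x_1,x_2,\ldots)=(x_2,x_3,\ldots)$ satisfy $AB=E$ but $BA\neq E$.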


sequences and series - find the sum to n term of $frac{1}{1cdot2cdot3} + frac{3}{2cdot3cdot4} + frac{5}{3cdot4cdot5} + frac{7}{4cdot5cdot6 } + ... $



$$\frac{1}{1\cdot2\cdot3} + \frac{3}{2\cdot3\cdot4} + \frac{5}{3\cdot4\cdot5} + \frac{7}{4\cdot5\cdot6 } + ... $$



$$=\sum \limits_{k=1}^{n} \frac{2k-1}{k\cdot(k+1)\cdot(k+2)}$$ $$= -\sum \limits_{k=1}^{n} \frac{1}{2k} + \sum \limits_{k=1}^{n} \frac{3}{k+1} - \sum \limits_{k=1}^{n}\frac{5}{2(k+2)} $$



I do not know how to get a telescoping series from here to cancel terms.


Answer



HINT:




Note that we have



$$\begin{align}
\frac{2k-1}{k(k+1)(k+2)}&=\color{blue}{\frac{3}{k+1}}-\frac{5/2}{k+2}-\frac{1/2}{k}\\\\
&=\color{blue}{\frac12}\left(\color{blue}{\frac{1}{k+1}}-\frac1k\right)+\color{blue}{\frac52}\left(\color{blue}{\frac{1}{k+1}}-\frac{1}{k+2}\right)
\end{align}$$
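Carrying the hint through (a sketch): both groups telescope, leaving
$$\sum_{k=1}^n\frac{2k-1}{k(k+1)(k+2)}=\frac12\left(\frac1{n+1}-1\right)+\frac52\left(\frac12-\frac1{n+2}\right)=\frac34+\frac1{2(n+1)}-\frac5{2(n+2)},$$
which tends to $\frac34$ as $n\to\infty$. (Check: $n=1$ gives $\frac34+\frac14-\frac56=\frac16$, matching the first term.)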


Linear Algebra - Determining if two matrices are similar



Are the matrices $A = \begin{bmatrix}1&2&-3\\1&-1&2\\0&3&5\end{bmatrix}$ and $B = \begin{bmatrix}1&-1&4\\1&1&5\\1&1&6\end{bmatrix}$ similar?



I know that similar matrices have the same determinant, the same rank, and the same characteristic polynomial (and therefore, the same eigenvalues).



What I have tried:



$det(A) = -30$




$det(B) = 2$



$rank(A) = 3$



$rank(B) = 3$



How do I show the characteristic polynomial for $A$ and $B$?



Also since $det(A) \ne det(B)$, we already know that matrices $A$ and $B$ are not similar so we don't need to prove that using $B = P^{-1}AP$, right?


Answer




Similar matrices have the same eigenvalues, determinants are product of eigenvalues so similar matrices have same determinants. Thus, different determinants $\implies$ not similar, so you don't need to do any more work to prove they aren't similar.



You can find the characteristic polynomial from the definition -- the characteristic polynomial of $A$ is $det(A - \lambda I)$ [ or depending on who you ask, sometimes $det(\lambda I - A)$; this is negative of the other definition if $A$ has an odd number of rows] where $I$ is the identity matrix of the same size as $A$, and the variable is $\lambda$.
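For instance, for $A$ (a sketch of the computation being asked about):
$$\det(A-\lambda I)=\begin{vmatrix}1-\lambda&2&-3\\1&-1-\lambda&2\\0&3&5-\lambda\end{vmatrix}=-\lambda^3+5\lambda^2+9\lambda-30,$$
where the constant term equals $\det(A)=-30$ and the coefficient of $\lambda^2$ equals the trace $5$, giving a quick sanity check.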


Monday 18 November 2019

calculus - Prove that $E(X) = \int_{0}^{\infty} P(X>x)\,dx = \int_{0}^{\infty} (1-F_X(x))\,dx$.




Let $X$ be a continuous non-negative random variable (i.e. $R_x$ has only non-negative values). Prove that $$E(X) = \int_{0}^{\infty} P(X>x)\,dx = \int_{0}^{\infty} (1-F_X(x))\,dx$$ where $F_X(x)$ is the CDF for $X$. Using this result, find $E(X)$ for an exponential ($\lambda$) random variable.





I know that by definition, $F_X(x) = P(X \leq x)$ and so $1 - F_X(x) = P(X>x)$



The solution is:
$$\int_{0}^{\infty} \int_{x}^{\infty} f(y)\,dy\, dx
= \int_{0}^{\infty} \int_{0}^{y} f(y)\,dx\, dy
= \int_{0}^{\infty} yf(y)\, dy.$$



I'm really confused as to where the double integral came from. I'm also rusty on multivariate calc, so I'm confused about the swapping of $x$ and $\infty$ to $0$ and $y$.




Any help would be greatly appreciated!


Answer



Observe that for a continuous random variable, (well absolutely continuous to be rigorous):



$$\mathsf P(X> x) = \int_x^\infty f_X(y)\operatorname d y$$



Then taking the definite integral (if we can):



$$\int_0^\infty \mathsf P(X> x)\operatorname d x = \int_0^\infty \int_x^\infty f_X(y)\operatorname d y\operatorname d x$$




To swap the order of integration we use Tonelli's Theorem, since a probability density is strictly non-negative.



Observe that we are integrating over the domain where $0< x< \infty$ and $x< y< \infty$, which is to say $0 < x < y < \infty$:

$$\begin{align}\int_0^\infty \mathsf P(X> x)\operatorname d x = & ~ \iint_{0< x< y< \infty} f_X(y)\operatorname d (x,y)
\\[1ex] = & ~ \int_0^\infty \int_0^y f_X(y)\operatorname d x\operatorname d y\end{align}$$



Then since $\int_0^y f_X(y)\operatorname d x = f_X(y) \int_0^y 1\operatorname d x = y~f_X(y)$ we have:



$$\begin{align}\int_0^\infty \mathsf P(X> x)\operatorname d x = & ~ \int_0^\infty y ~ f_X(y)\operatorname d y \\[1ex] = & ~ \mathsf E(X \mid X\geq 0)~\mathsf P(X\geq 0) \\[1ex] = & ~ \mathsf E(X) & \textsf{when $X$ is non-negative} \end{align}$$




$\mathcal {QED}$


algebra precalculus - Polynomial with odd number of real roots



I have been trying to characterize the number of roots on $\theta$ for the following polynomial




$$ \sum_{i=1}^n \frac{\theta- x_i}{1+ \left(x_i - \theta \right)^2} = 0$$




If we were to put everything under a common denominator then we would see that the polynomial is of degree $2n -1 $ with coefficients depending on the $x_i$ for $i=1,2,...,n$ . Taking limits as $\theta \to \infty$ and $\theta \to -\infty$ we find that the polynomial has at least one real root as it tends to zero through positive and negative values respectively.




What I do not understand is how one reaches the conclusion that the number of real roots is odd in this case. I know that there are $2n-1$ complex roots (counted with multiplicity), but I'm talking about the real roots. Is there a theorem that guarantees this, or is it common sense hiding in plain sight, at least for me?



Thank you.


Answer



If you write your equation in the form $p(\theta)=\theta^m+a_{m-1}\theta^{m-1}+\ldots+a_1\theta+a_0=0$, you get that $m=2n-1$ is an odd number and $a_i\in \mathbb R$. So $p(\theta)$ is a polynomial of odd degree with real coefficients; thus, $p(\theta)$ has at least one real root. But if $z\in \mathbb C$ is a root of a polynomial with real coefficients, then $\overline z$ is too. So the number of non-real roots of any polynomial with real coefficients is even. Hence, the number of real roots of a real polynomial of odd degree is odd.


Sunday 17 November 2019

summation - Sum of binomial coefficients when the lower index is the same throughout the series: ${m \choose m}+{m+1 \choose m}+{m+2 \choose m}+\ldots+{n \choose m}$




I want to find out sum of the following series:

$${m \choose m}+{m+1 \choose m}+{m+2 \choose m}+...+{n \choose m}$$
My try:
${m \choose m}+{m+1 \choose m}+{m+2 \choose m}+...+{n \choose m}$ = Coefficient of $x^m$ in the expansion of $(1+x)^m + (1+x)^{m+1} + ... + (1+x)^n$

Or, the coefficient of $x^m$ in
$$\frac{(1+x)^{m}\left((1+x)^{n-m+1}-1\right)}{(1+x)-1}$$
$$=\frac{(1+x)^{n+1}-(1+x)^{m}}{x}.$$
But how to proceed further?



Note: $m≤n$


Answer



Other way:

$$\binom{k}{m}=\binom{k+1}{m+1}-\binom{k}{m+1},$$
so we have
$$\sum\limits_{k=m}^{n}\binom{k}{m}=\sum\limits_{k=m}^{n}\left[\binom{k+1}{m+1}-\binom{k}{m+1}\right]=\binom{n+1}{m+1}.$$

Also:

Let $x_i\in \mathbb{N}$ and
$$x_1+x_2+x_3+\cdots+x_{k+2}=n+2$$


analysis - Convergence of series involving iterated $ sin $









Hi all,




I've been trying to show the convergence or divergence of



$$ \sum_{n=1}^\infty \frac{\sin^n 1}{n} = \frac{\sin 1}{1} + \frac{\sin \sin 1}{2} + \frac{\sin \sin \sin 1}{3} + \ \cdots $$



where the superscript means iteration (not multiplication, so it's not simply less than a geometric series -- I couldn't find the standard notation for this).



Problem is,




  • $ \sin^n 1 \to 0 $ as $ n \to \infty $ (which I eventually proved by assuming a positive limit and having $ \sin^n 1 $ fall below it, after getting its existence) helps the series to converge,




    but at the same time


  • $ \sin^{n+1} 1 = \sin \sin^n 1 \approx \sin^n 1 $ for large $ n $ makes it resemble the divergent harmonic series.




I would appreciate it if someone knows a helpful convergence test or a proof (or any kind of advice, for that matter).



In case it's useful, here are some things I've tried:





  • Show $ \sin^n 1 = O(n^{-\epsilon}) $ and use the p-series. I'm not sure that's even true.

  • Computer tests and looking at partial sums. Unfortunately, $ \sum 1/n $ diverges very slowly, which is hard to distinguish from convergence.

  • Somehow work in the related series
    $$ \sum_{n=1}^\infty \frac{\cos^n 1}{n} = \frac{\cos 1}{1} + \frac{\cos \cos 1}{2} + \frac{\cos \cos \cos 1}{3} + \ \cdots $$
    which I know diverges since the numerators approach a fixed point.


Answer



A Google search has turned up an analysis of the asymptotic behavior of the iterates of $\sin$ on page 157 of de Bruijn's Asymptotic methods in analysis. Namely,



$$\sin^n(1)=\frac{\sqrt{3}}{\sqrt{n}}\left(1+O\left(\frac{\log(n)}{n}\right)\right),$$




which implies that your series converges, since its terms are then comparable to $\sqrt3/n^{3/2}$, a convergent $p$-series.



Edit: Aryabhata has pointed out in a comment that the problem of showing that $\sqrt{n}\sin^n(1)$ converges to $\sqrt{3}$ already appeared in the question Convergence of $\sqrt{n}x_{n}$ where $x_{n+1} = \sin(x_{n})$ (asked by Aryabhata in August). I had missed or forgot about it. David Speyer gave a great self contained answer, and he also referenced de Bruijn's book. De Bruijn gives a reference to a 1945 work of Pólya and Szegő for this result.


gcd and lcm - Why does the Euclidean algorithm for finding GCD work?




I am having trouble understanding why the Euclidean algorithm for finding the GCD of two numbers always works?



I found some resources here (http://www.cut-the-knot.org/blue/Euclid.shtml), and here(http://sites.math.rutgers.edu/~greenfie/gs2004/euclid.html).



But I am a little confused about how they approach it here. I understand that if we have two numbers, $a$ and $b$, then the greatest common divisor of $a$ and $b$ has to be less than or equal to $a$, and if $a$ divides $b$, then $a$ will have to be the GCD.



But I am confused about what happens when:



$b = aq + r$.

So now, we are saying that we divide $a$ by $r$, correct? Why should we do this at all?


Answer



I will try to explain




why the Euclidean algorithm for finding the GCD of two numbers always
works




by using a standard argument in number theory: showing that a problem is equivalent to the same problem for smaller numbers.




Start with two numbers $a > b \ge 0$. You want to know two things:




  1. their greatest common divisor $g$,


  2. and how to represent $g$ as a combination of $a$ and $b$




It's clear that you know both of these things in the easy special case when $b = 0$.




Suppose $b > 0$. Then divide $a$ by $b$ to get a quotient $q$ and a remainder $r$ strictly smaller than $b$:
$$
a = bq + r. \quad \text{(*)}
$$



Now any number that divides both $a$ and $b$ also divides $r$, so divides both $b$ and $r$. Also any number that divides both $b$ and $r$ also divides $a$, so divides both $a$ and $b$. That means that the greatest common divisor of $a$ and $b$ is the same as the greatest common divisor of $b$ and $r$, so (1) has the same answer $g$ for both those pairs.



Moreover, if you can write $g$ as a combination of $b$ and $r$ then you can write it as a combination of $a$ and $b$ (substitute in (*)). That means if you can solve (2) for the pair $(b,r)$ then you can solve it for the pair $(a,b)$.



Taken together, this argument shows that you can replace your problem for $(a,b)$ by the same problem for the smaller pair $(b,r)$. Since the problem can't keep getting smaller forever, eventually you will reach $(z, 0)$ and you're done.
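A minimal sketch of this procedure in code, using the convention $a = bq + r$ from the answer:

def gcd(a, b):
    # Invariant: gcd(a, b) stays the same when (a, b) is replaced by (b, r),
    # since any common divisor of a and b divides r = a - b*q, and conversely.
    while b != 0:
        a, b = b, a % b   # replace the pair (a, b) by (b, r)
    return a

print(gcd(252, 105))  # 21: 252 = 105*2 + 42, 105 = 42*2 + 21, 42 = 21*2 + 0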



Saturday 16 November 2019

find the limit: $\lim_{x \to 0} \frac{(\sin 2x-2x\cos x)(\tan 6x+\tan(\frac{\pi}{3}-2x)-\tan(\frac{\pi}{3}+4x))}{x\sin x \tan x\sin 2x}=?$



Find the limit, without L'Hôpital's rule or Taylor series:



$$\lim_{x \to 0} \frac{(\sin 2x-2x\cos x)(\tan 6x+\tan(\frac{\pi}{3}-2x)-\tan(\frac{\pi}{3}+4x))}{x\sin x \tan x\sin 2x}=?$$




I know that



$$\lim_{x \to 0} \frac{\sin x}{x}=1=\lim_{x \to 0}\frac{\tan x}{x}$$



But I cannot work out the answer; please help.


Answer



If you know, that $\enspace\displaystyle \lim\limits_{x\to 0}\frac{1}{x^2}(1-\frac{\sin x}{x})=\frac{1}{3!}\enspace$ then you can answer your question easily:



$\displaystyle \frac{(\sin(2x)-2x\cos x)(\tan(6x)+\tan(\frac{\pi}{3}-2x)-\tan(\frac{\pi}{3}+4x))}{x\sin x\tan x\sin(2x)}=$




$\displaystyle =\frac{(\sin(2x)-2x\cos x)(\frac{\sin(6x)}{\cos(6x)}-\frac{\sin(6x)}{\cos(\frac{\pi}{3}-2x)\cos(\frac{\pi}{3}+4x)})}{x\sin x\tan x\sin(2x)}$



$\displaystyle =\frac{2\sin x\cos x -2x\cos x}{\sin x\tan x\sin(2x)}6\frac{\sin(6x)}{6x}(\frac{1}{\cos(6x)}-\frac{1}{\cos(\frac{\pi}{3}-2x)\cos(\frac{\pi}{3}+4x)})$



$\displaystyle =-\frac{1}{x^2}(1-\frac{\sin x}{x}) (\frac{x}{\sin x}\cos x)^2 \frac{2x}{\sin(2x)} 6\frac{\sin(6x)}{6x}(\frac{1}{\cos(6x)}-\frac{1}{\cos(\frac{\pi}{3}-2x)\cos(\frac{\pi}{3}+4x)})$



$\displaystyle \to -\frac{1}{3!}6(1-4)=3\enspace$ for $\enspace x\to 0$



A note about what I have used:




$\displaystyle \tan x=\frac{\sin x}{\cos x}$



$\sin(2x)=2\sin x\cos x$



$\displaystyle \tan x-\tan y=\frac{\sin(x-y)}{\cos x\cos y}$


elementary set theory - Does $\mathbb R^2$ contain more numbers than $\mathbb R^1$?











Does $\mathbb R^2$ contain more numbers than $\mathbb R^1$? I know that there are the same number of even integers as integers, but those are both countable sets. Does the same type of argument apply to uncountable sets? If there exists a 1-1 mapping from $\mathbb R^2$ to $\mathbb R^1$, would that mean that 2 real-valued parameters could be encoded as a single real-valued parameter?



Answer



Indeed $\mathbb R^2$ has the same cardinality as $\mathbb R$, as the answers in this thread show.



And indeed it means that functions of two variables can be encoded as functions of one variable. However do note that such encoding cannot be continuous, but can be measurable.



Lastly, to extend this result to all infinite sets one needs the axiom of choice. In fact the assertion "For every infinite $A$ there is a bijection between $A$ and $A^2$" is equivalent to the axiom of choice. If one requires that $A$ is well-ordered then this is true without the axiom of choice, but for many "sets of interest" (e.g. the real numbers) one cannot prove the existence of a well-ordering without some form of choice.



Despite the last sentence, the existence of a bijection between $\mathbb R$ and $\mathbb R^n$ does not require the axiom of choice (for $n>0$, of course).


calculus - How do I derive $1 + 4 + 9 + \cdots + n^2 = \frac{n (n + 1) (2n + 1)} 6$












I am introducing my daughter to calculus/integration by approximating the area under $y = x^2$ by calculating small rectangles below the curve.



This is very intuitive and I think she understands the concept however what I need now is an intuitive way to arrive at $\frac{n (n + 1) (2n + 1)} 6$ when I start from $1 + 4 + 9 + \cdots + n^2$.



In other words, how did the first ancient mathematician come up with this formula? What were the first steps leading to this equation? That is what I am interested in, not the actual proof (that would be the second step).


Answer



Just as you can prove $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$ with the picture



*oooo
**ooo
***oo
****o


you can prove $\frac{n (n + 1) (2n + 1)} 6$ by building a box out of 6 pyramids:



[Image: six staircase pyramids assembled into a box]




Sorry the diagram is not great (someone can edit if they know how to make a nicer one). If you just build 6 pyramids, you can easily make the $n \times (n+1) \times (2n+1)$ box out of them.




  • make 6 pyramids (1 pyramid = $1 + 2^2 + 3^2 + 4^2 + ...$ blocks)

  • try to build a box out of them

  • measure the lengths and count how many you used... that gives you the formula



Using these (glued): [photo of the glued pyramid pieces]
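For completeness, and not part of the picture proof above, there is also a short algebraic route one could plausibly discover first, telescoping cubes:
$$\sum_{k=1}^n\left((k+1)^3-k^3\right)=(n+1)^3-1=3\sum_{k=1}^n k^2+3\sum_{k=1}^n k+n,$$
and since $(n+1)^3-1-n=n(n+1)(n+2)$ and $\sum_{k=1}^n k=\frac{n(n+1)}{2}$, solving for the sum of squares gives
$$3\sum_{k=1}^n k^2=n(n+1)(n+2)-\frac{3n(n+1)}{2}=\frac{n(n+1)(2n+1)}{2},$$
that is, $\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}{6}$.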


calculus - Generalizing the trick for integrating $\int_{-\infty}^\infty e^{-x^2}\mathrm dx$?




There is a well-known trick for integrating $\int_{-\infty}^\infty e^{-x^2}\mathrm dx$, which is to write it as $\sqrt{\int_{-\infty}^\infty e^{-x^2}\mathrm dx\int_{-\infty}^\infty e^{-y^2}\mathrm dy}$, which can then be reexpressed in polar coordinates as an easy integral. Is this trick a one-hit wonder, or are there other cases where this trick works and is also necessary? It seems to depend on the defining property of the exponential function that $f(a+b)=f(a)f(b)$, which would make me think that it would only allow fairly trivial generalizations, e.g., to $\int_{-\infty}^\infty 7^{-x^2}\mathrm dx$ or $\int_{-\infty}^\infty a^{bx^2+cx+d}\mathrm dx$.



Can it be adapted through rotation in the complex plane to do integrals like $\int_{-\infty}^\infty \sin(x^2)\mathrm dx$? Here I find myself confused by trying to simultaneously visualize both the complex plane and the $(x,y)$ plane.



WP http://en.wikipedia.org/wiki/Gaussian_integral discusses integrals that have a similar form and seem to require different methods, but I'd be more interested in integrals that have different forms but can be conquered by the same trick.



The trick involves expanding from 1 dimension to 2. Is there a useful generalization where you expand from $m$ dimensions to $n$?



This is not homework.


Answer




I would like to answer your question about the Fresnel integrals, as I did this in my undergraduate studies. So, let us consider the integrals



$$S_1=\int_{-\infty}^\infty dx\sin(x^2) \qquad C_1=\int_{-\infty}^\infty dx\cos(x^2).$$



We want to apply the same technique used for Gauss integral in this case and consider the two-dimensional integrals



$$S_2=\int_{-\infty}^\infty\int_{-\infty}^\infty dxdy\sin(x^2+y^2) \qquad C_2=\int_{-\infty}^\infty\int_{-\infty}^\infty dxdy\cos(x^2+y^2).$$



If you go to polar coordinates, these integrals do not converge. So, we introduce a convergence factor in the following way:




$$S_2(\epsilon)=\int_{-\infty}^\infty\int_{-\infty}^\infty dxdy e^{-\epsilon(x^2+y^2)}\sin(x^2+y^2)$$
$$C_2(\epsilon)=\int_{-\infty}^\infty\int_{-\infty}^\infty dxdy e^{-\epsilon(x^2+y^2)}\cos(x^2+y^2).$$



Then one has, moving to polar coordinates,



$$S_2(\epsilon)=2\pi\int_0^\infty \rho d\rho e^{-\epsilon\rho^2}\sin(\rho^2) \qquad C_2(\epsilon)=2\pi\int_0^\infty \rho d\rho e^{-\epsilon\rho^2}\cos(\rho^2)$$



that is



$$S_2(\epsilon)=\pi\int_0^\infty dx e^{-\epsilon x}\sin(x) \qquad C_2(\epsilon)=\pi\int_0^\infty dx e^{-\epsilon x}\cos(x).$$




These integrals are well known and give



$$S_2(\epsilon)=\frac{\pi}{1+\epsilon^2} \qquad C_2(\epsilon)=\frac{\epsilon\pi}{1+\epsilon^2}$$



noting that integration variables are dummy. Now, in this case one can take the limit for $\epsilon\rightarrow 0$ producing



$$S_2=\int_{-\infty}^\infty\int_{-\infty}^\infty dxdy\sin(x^2+y^2)=\pi \qquad C_2=\int_{-\infty}^\infty\int_{-\infty}^\infty dxdy\cos(x^2+y^2)=0$$



By applying simple trigonometric formulas you will get back the value of the Fresnel integrals. But now, as a bonus, you have got the value of these integrals in two dimensions.
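Spelling out that last step: the addition formulas give $\sin(x^2+y^2)=\sin x^2\cos y^2+\cos x^2\sin y^2$ and $\cos(x^2+y^2)=\cos x^2\cos y^2-\sin x^2\sin y^2$, so
$$S_2=2S_1C_1=\pi,\qquad C_2=C_1^2-S_1^2=0.$$
Since $S_1$ and $C_1$ are positive (this needs a small separate check), $C_1=S_1$ and $2S_1^2=\pi$, i.e. $S_1=C_1=\sqrt{\pi/2}$; halving, because the integrands are even, recovers $\int_0^\infty\sin(x^2)\,dx=\int_0^\infty\cos(x^2)\,dx=\sqrt{\pi/8}$.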



Friday 15 November 2019

calculus - Evaluating the integral $\int_0^\infty \frac{\sin x}{x} \,\mathrm dx = \frac{\pi}{2}$?



A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that the given integral:

$$\displaystyle\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$$



Well, can anyone prove this without using residue theory? I actually thought of doing this:
$$\int_0^\infty \frac{\sin x} x \, dx = \lim_{T \to \infty} \int_0^T \frac{1}{t} \left( t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots \right) \,\mathrm dt$$
but I don't see how $\pi$ comes here, since we need the answer to be equal to $\dfrac{\pi}{2}$.


Answer



Here's another way of finishing off Derek's argument. He proves
$$\int_0^{\pi/2}\frac{\sin(2n+1)x}{\sin x}dx=\frac\pi2.$$
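(For reference, this identity is the Dirichlet-kernel computation: from $2\sin x\cos 2kx=\sin(2k+1)x-\sin(2k-1)x$, telescoping gives
$$\frac{\sin(2n+1)x}{\sin x}=1+2\sum_{k=1}^n\cos 2kx,$$
and $\int_0^{\pi/2}\cos 2kx\,dx=\frac{\sin k\pi}{2k}=0$ for every $k\ge1$, so only the constant term contributes.)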
Let
$$I_n=\int_0^{\pi/2}\frac{\sin(2n+1)x}{x}dx=

\int_0^{(2n+1)\pi/2}\frac{\sin x}{x}dx.$$
Let
$$D_n=\frac\pi2-I_n=\int_0^{\pi/2}f(x)\sin(2n+1)x\ dx$$
where
$$f(x)=\frac1{\sin x}-\frac1x.$$
We need the fact that if we define $f(0)=0$ then $f$ has a continuous
derivative on the interval $[0,\pi/2]$. Integration by parts (the boundary terms vanish since $f(0)=0$ and $\cos\frac{(2n+1)\pi}{2}=0$) yields
$$D_n=\frac1{2n+1}\int_0^{\pi/2}f'(x)\cos(2n+1)x\ dx=O(1/n).$$
Hence $I_n\to\pi/2$ and we conclude that
$$\int_0^\infty\frac{\sin x}{x}dx=\lim_{n\to\infty}I_n=\frac\pi2.$$



Thursday 14 November 2019

real analysis - Proof that $\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$



Thomson et al. provide a proof that $\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$ in this book. It has to do with using an inequality that relies on the binomial theorem. I tried to do an alternate proof now that I know (from elsewhere) the following:




\begin{align}
\lim_{n\rightarrow \infty} \frac{ \log n}{n} = 0
\end{align}



Then using this, I can instead prove:
\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} &= \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \newline
& = \exp{0} \newline
& = 1
\end{align}




On the one hand, it seems like a valid proof to me. On the other hand, I know I should be careful with infinite sequences. The step I'm most unsure of is:
\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} = \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}}
\end{align}



I know such an identity would hold for bounded $n$ but I'm not sure I can use this identity when $n\rightarrow \infty$.



If I am correct, then would there be any cases where I would be wrong? Specifically, given any sequence $x_n$, can I always assume:
\begin{align}

\lim_{n\rightarrow \infty} x_n = \lim_{n\rightarrow \infty} \exp(\log x_n)
\end{align}
Or are there sequences that invalidate that identity?



(Edited to expand the last question)
given any sequence $x_n$, can I always assume:
\begin{align}
\lim_{n\rightarrow \infty} x_n &= \exp(\log \lim_{n\rightarrow \infty} x_n) \newline
&= \exp(\lim_{n\rightarrow \infty} \log x_n) \newline
&= \lim_{n\rightarrow \infty} \exp( \log x_n)

\end{align}
Or are there sequences that invalidate any of the above identities?



(Edited to repurpose this question).
Please also feel free to add different proofs of $\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$.


Answer



Since $x \mapsto \log x$ is a continuous function, and since continuous functions respect limits:
$$
\lim_{n \to \infty} f(g(n)) = f\left( \lim_{n \to \infty} g(n) \right),
$$

for continuous functions $f$ (given that $\displaystyle\lim_{n \to \infty} g(n)$ exists), your proof is entirely correct. Specifically,
$$
\log \left( \lim_{n \to \infty} \sqrt[n]{n} \right) = \lim_{n \to \infty} \frac{\log n}{n},
$$



and hence



$$
\lim_{n \to \infty} \sqrt[n]{n} = \exp \left[\log \left( \lim_{n \to \infty} \sqrt[n]{n} \right) \right] = \exp\left(\lim_{n \to \infty} \frac{\log n}{n} \right) = \exp(0) = 1.
$$
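Since the question invites other proofs, here is a sketch of the classical binomial-theorem argument. Write $\sqrt[n]{n}=1+a_n$ with $a_n\ge0$. For $n\ge2$,
$$n=(1+a_n)^n\ge\binom{n}{2}a_n^2=\frac{n(n-1)}{2}\,a_n^2,$$
so $0\le a_n\le\sqrt{\frac{2}{n-1}}\to0$, and hence $\sqrt[n]{n}\to1$.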



Wednesday 13 November 2019

logarithms - Prove that $\log X < X$ for all $X > 0$

I'm working through Data Structures and Algorithm Analysis in C++, 2nd Ed, and problem 1.7 asks us to prove that $\log X < X$ for all $X > 0$.




However, unless I'm missing something, this can't actually be proven. The spirit of the problem only holds true if you define several extra qualifiers, because it's relatively easy to provide counter examples.



First, it says that $\log_{a} X < X$ for all $X > 0$, in essence.



But if $a = -1$, then $(-1)^{2} = 1$. Therefore $\log_{-1} 1 = 2$. Thus, we must assume
$a$ is positive.



If $0 < a < 1$, then $a^2 < 1$ while $\log_a(a^2) = 2$, another counterexample. Therefore we must assume that $a > 1$.




Now, the book says that unless stated otherwise, it's generally speaking about base 2 for logarithms, which are vital in computer science.



However, even then - if $a$ is two and $X$ is $\frac{1}{16}$, then $\log_{a} X$ is $-4$. (Similarly for base 10, try taking the log of $\frac{1}{10}$ on your calculator: It's $-1$.) Thus we must assume that $X \geq 1$.



...Unless I'm horribly missing something here. The problem seems quite different if we have to prove it for $X \geq 1$.



But even then, I need some help solving the problem. I've tried manipulating the equation as many ways as I could think of but I'm not cracking it.
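A sketch of one standard argument for base $2$ (note, by the way, that $X=\frac{1}{16}$ is not actually a counterexample: $\log_2\frac{1}{16}=-4<\frac{1}{16}$, so the inequality also holds for $0<X<1$, where $\log_2 X<0<X$): for $X\ge1$, Bernoulli's inequality gives
$$2^X=(1+1)^X\ge 1+X>X,$$
and applying the increasing function $\log_2$ yields $X>\log_2 X$.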

Tuesday 12 November 2019

elementary number theory - Prove that a perfect square (also a perfect square backwards) is divisible by 121



Suppose that $n=x^2$ is a perfect square with an even number of base-10 digits. Assume that when n is written backwards, you get another perfect square $y^2$. Prove that 121|n. (Use the mod 11 divisibility test, and the fact that -1 is a quadratic non-residue mod 11.)



I know by using the mod 11 divisibility test (where you add and subtract alternating digits) that $x^2 \equiv -y^2$ (mod 11). I believe, also, that since $x^2$ is a quadratic residue mod 11 and $-y^2$ is a quadratic non-residue mod 11, by the properties of Legendre symbols it can't be true that $gcd(x^2,11)=gcd(y^2,11)=1$, so 11 divides $x^2$ and $y^2$. But after that, I'm stuck, and I don't know how to get to the conclusion that 121|n. Help, please?


Answer



As you said, it is easy to see that if $n$ has an even number of digits then the number obtained when reversing the digits is congruent to $-n\bmod 11$.




Therefore $n$ and $-n$ are quadratic residues. Suppose $n$ is not $0\bmod 11$. Now use the fact that the quadratic residues form a subgroup of the multiplicative group $\mathbb Z_{11}^*$. So $n^{-1}$ is a quadratic residue, and subsequently $n^{-1}(-n)\equiv -1$ is a quadratic residue $\bmod 11$. This is a contradiction: the quadratic residues $\bmod 11$ are $0,1,3,4,5,9$, and $-1\equiv 10$ is not among them.



We conclude $n\equiv 0 \bmod 11$. So $11|n$, and since $n$ is a square $11^2|n$ (since $11$ is a prime).
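A concrete instance, for illustration: $n=698896=836^2$ has six digits and happens to be a palindrome, so its reversal is again a perfect square, and indeed
$$698896=11^2\cdot 76^2=121\cdot 5776.$$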


galois theory - Is there a quick way to compute the degree of the splitting field of $x^3+x+1$ over $\mathbb{Q}$?



Is there a way to find the degree of the splitting field of $x^3+x+1$ over $\mathbb{Q}$?



Just analyzing the roots shows that the polynomial is separable, so I suppose the splitting field would be a Galois extension. However, the roots are not easy to get a handle on, so it's not obvious to me what roots would need to be adjoined to $\mathbb{Q}$ to get the degree.



What is the right way to do this? Thanks.



Answer



The degree is at least 3 and at most $6=3!$. So you only have to decide whether it's 3 or 6. Since $x^3+x+1$ has only one real root, the other two roots are complex conjugates and so conjugation is an automorphism of the splitting field. Since conjugation has order 2, the degree is 6.



You can avoid Galois theory. The complex roots are roots of a quadratic. Since they cannot be in the real field generated by the real root, the splitting field must be a quadratic extension of the real field and so has degree $2\cdot3=6$.
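(To see that $x^3+x+1$ has exactly one real root: its derivative $3x^2+1$ is strictly positive, so the cubic is strictly increasing; equivalently, its discriminant $-4\cdot1^3-27\cdot1^2=-31$ is negative.)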


Monday 11 November 2019

calculus - Proof only by transformation that $\int_0^\infty \cos(x^2)\, dx = \int_0^\infty \sin(x^2)\, dx$




This was a question in our exam, and I did not know which change of variables or trick to apply.



How to show by inspection (change of variables or whatever trick) that



$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx \tag{I} $$



Computing the values of these integrals is routine. Further, from their values the equality holds. But can we show the equality beforehand?





Note: I am not asking for the computation, since it can be found here,
and we have as well that
$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx =\sqrt{\frac{\pi}{8}}$$
and the result can be recovered here: Evaluating $\int_0^\infty \sin x^2\, dx$ with real methods?.




Is there any trick to prove the equality in (I) without computing the exact values of these integrals beforehand?


Answer






Employing the change of variables $2u =x^2$, we get $$I=\int_0^\infty \cos(x^2) dx =\frac{1}{\sqrt{2}}\int^\infty_0\frac{\cos(2x)}{\sqrt{x}}\,dx$$ $$ J=\int_0^\infty \sin(x^2) dx=\frac{1}{\sqrt{2}}\int^\infty_0\frac{\sin(2x)}{\sqrt{x}}\,dx $$




Summary: We will prove that $J\ge 0$ and $I\ge 0$, so that proving $I=J$ is equivalent to $$ \color{blue}{0= (I+J)(I-J)=I^2 -J^2 =\lim_{t \to 0}I_t^2-J^2_t}$$
where $$I_t = \int_0^\infty e^{-tx^2}\cos(x^2) dx~~~~\text{and}~~~ J_t = \int_0^\infty e^{-tx^2}\sin(x^2) dx$$
$t\mapsto I_t$ and $t\mapsto J_t$ are clearly continuous due to the presence of the integrand factor $e^{-tx^2}$.




However, by Fubini's theorem we have




\begin{align}
I_t^2-J^2_t&= \left(\int_0^\infty e^{-tx^2}\cos(x^2) dx\right) \left(\int_0^\infty e^{-ty^2}\cos(y^2) dy\right) \\&\quad- \left(\int_0^\infty e^{-tx^2}\sin(x^2) dx\right) \left(\int_0^\infty e^{-ty^2}\sin(y^2) dy\right) \\
&= \int_0^\infty \int_0^\infty e^{-t(x^2+y^2)}\cos(x^2+y^2)\,dx\,dy\\
&=\int_0^{\frac\pi2}\int_0^\infty re^{-tr^2}\cos (r^2)\, dr\,d\theta\\
&=\frac\pi4\,\mathrm{Re}\left( \int_0^\infty \left[\frac{1}{i-t}e^{(i-t)r^2}\right]' dr\right)\\
&=\color{blue}{\frac\pi4\frac{t}{1+t^2}\to 0\ \text{ as }\ t\to 0}
\end{align}



To end the proof: Let us show that $I> 0$ and $J> 0$. Performing an integration by part we obtain
$$J = \frac{1}{\sqrt{2}} \int^\infty_0\frac{\sin(2x)}{x^{1/2}}\,dx=\frac{1}{\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{2\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{3/2}}\,dx\color{red}{>0}$$
Given that $\color{red}{\sin 2x= 2\sin x\cos x =(\sin^2x)'}$. Similarly we have,

$$I = \frac{1}{\sqrt{2}}\int^\infty_0\frac{\cos(2x)}{\sqrt{x}}\,dx=\frac{1}{2\sqrt{2}}\underbrace{\left[\frac{\sin 2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{4\sqrt{2}} \int^\infty_0\frac{\sin 2 x}{x^{3/2}}\,dx\\=
\frac{1}{4\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{3/2}}\right]_0^\infty}_{=0} +\frac{3}{8\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{5/2}}\,dx\color{red}{>0}$$




Conclusion: $~~~I^2-J^2 =0$, $I>0$ and $J>0$ imply $I=J$. Note that we did not attempt to compute the value of either $I$ or $J$.




Extra to the answer: using a similar technique to the proof above, one easily arrives at the following $$\color{blue}{I_tJ_t = \frac\pi8\frac{1}{t^2+1}}$$ from which one gets the explicit value $$\color{red}{I^2=J^2= IJ = \lim_{t\to 0}I_tJ_t =\frac{\pi}{8}}$$
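Checking that bonus identity by the same computation: since $\sin(x^2+y^2)=\sin x^2\cos y^2+\cos x^2\sin y^2$, passing to polar coordinates on the first quadrant gives
$$2I_tJ_t=\int_0^\infty\!\!\int_0^\infty e^{-t(x^2+y^2)}\sin(x^2+y^2)\,dx\,dy=\frac\pi2\,\mathrm{Im}\int_0^\infty re^{(i-t)r^2}\,dr=\frac\pi2\cdot\frac{1}{2(1+t^2)}.$$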


paradoxes - Changing odds paradox



A paradox of changing odds I read about - it's doing my head in, but it must be easy to explain why.



Three sets of two playing cards: AA KK AK




With the cards turned face down, your task is to pick the AK pair. Odds are 3 to 1 you pick the correct pair. That is not the problem.



You pick a pair, and one card is turned over - it's a K. That means that now the odds of having the AK pair are 2 to 1.



How? Nothing changed, no magic, yet just by seeing one card of the pair you chose the odds change from 3 to 1 -> 2 to 1.



I have read the solution, but still don't understand this simple logic.



Nick


Answer




Now you know that the pair is not AA. That new information could change the odds. Usually the original odds would be quoted as 2 to 1 against. In fact if you turn just one card at random and find a K, the odds you have AK are still 2 to 1 against. You now have $2/3$ chance of having the KK and $1/3$ chance of having AK because it is twice as likely you found a K from KK. To see this in detail, list the six possibilities of which pair you pick and which card you pick from the pair. Initially two of the six have you picking AK. When you find a K, three of the possibilities are ruled out, but only one of the remaining three has you choosing AK.
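Writing the six equally likely (pair picked, card revealed) outcomes out makes this concrete:
$$(AA,A),\ (AA,A),\ (KK,K),\ (KK,K),\ (AK,A),\ (AK,K).$$
Seeing a K rules out the three outcomes that reveal an A, and only one of the three survivors belongs to the AK pair, so the probability of holding AK is still $\frac13$, i.e. odds of 2 to 1 against.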


calculus - Differentiate $\ln\sqrt{\ln x}$

I am confused about the solution and method of differentiating this function:



$$\frac{d}{dx}\,\ln\sqrt{\ln x}$$



Why is $\ln$ not considered a constant and then multiplied by the derivative of $\sqrt{\ln x}$?



The solution is given as:



$$\frac{1}{2x\ln x}$$
How exactly is the chain rule applied to the entire function at once?
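For what it's worth, here is one way the chain rule applies, sketched ($\ln$ is a function being composed, not a constant factor):
$$\frac{d}{dx}\,\ln\sqrt{\ln x}=\frac{1}{\sqrt{\ln x}}\cdot\frac{d}{dx}\sqrt{\ln x}=\frac{1}{\sqrt{\ln x}}\cdot\frac{1}{2\sqrt{\ln x}}\cdot\frac{1}{x}=\frac{1}{2x\ln x}.$$
Alternatively, simplify first: $\ln\sqrt{\ln x}=\frac12\ln(\ln x)$, whose derivative is $\frac12\cdot\frac{1}{\ln x}\cdot\frac{1}{x}$.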

Show that for every integer $n$, $n^3 - n$ is divisible by 3 using modular arithmetic



Problem:




Show that for every integer $n$, $n^3 - n$ is divisible by 3 using modular arithmetic



I was also given a hint:



$$n \equiv 0 \pmod3\\n \equiv 1 \pmod3\\n \equiv 2 \pmod3$$



But I'm still not sure how that relates to the question.


Answer



Using the hint is to try the three cases:




Case 1: $n \equiv 0 \mod 3$



Remember if $a \equiv b \mod n$ then $a^m \equiv b^m \mod n$ [$*$]



So $n^3 \equiv 0^3 \equiv 0 \mod 3$



Remember if $a \equiv c \mod n$ and $b \equiv d \mod n$ then $a+b \equiv c + d \mod n$ [$**$]



So $n^3 - n\equiv 0 - 0 \equiv 0 \mod 3$.




Case 2: $n \equiv 1 \mod 3$



Then $n^3 \equiv 1^3 \mod 3$ and $n^3 - n \equiv 1^3 - 1 \equiv 0 \mod 3$.



Case 3: $n \equiv 2 \mod 3$



Then $n^3 \equiv 2^3 \equiv 8 \equiv 2 \mod 3$.



So $n^3 - n \equiv 2- 2 \equiv 0 \mod 3$.




So in each of these three cases (and there are no other possible cases [$***$]) we have $n^3 - n \equiv 0 \mod 3$.



That means $3\mid n^3 - n$ (because $n^3 - n \equiv 0 \mod 3$ means there is an integer $k$ so that $n^3 - n = 3k + 0 = 3k$).






I find that if I am new to modulo notation and haven't yet developed the "faith", I like to write it out in terms I do have "faith" in:



Let $n = 3k + r$ where $r = 0, 1$ or $2$




Then $n^3 - n = (3k+r)^3 -(3k+r) = r^3 - r + 3M$



where $M = [27k^3 + 27k^2r + 9kr^2 - 3k]/3$ (I don't actually have to figure out what $M$ is... I just have to know that $M$ is some combination of powers of $3k$ and those must be some multiple of $3$. In other words, the $r$s are the only things that aren't a multiple of three, so they are the only terms that matter. )



and $r^3 -r$ is either $0 - 0$ or $1 - 1 = 0$ or $8 - 2 = 6$. So in every event $n^3 - n$ is divisible by $3$.



That really is the exact same thing that the $n^3 - n \equiv 0 \mod 3$ notation means.







[$*$] $a\equiv b \mod n$ means there is an integer $k$ so that $a = kn + b$ so $a^m = (kn + b)^m = b^m + \sum c_ik^in^ib^{m-i} = b^m + n\sum c_ik^{i}n^{i-1} b^{m-i}$. So $a^m \equiv b^m \mod n$.



[$**$] $a\equiv c \mod n$ means $a= kn + c$ and $b\equiv d \mod n$ means $b = jn + d$ for some integers $j,k$. So $a + b = c+ d + n(j+k)$. So $a+b \equiv c + d \mod n$.



[$***$]. For any integer $n$ there are unique integers $q, r$ such that $n = 3q + r ; 0 \le r < 3$. Basically this means "If you divide $n$ by $3$ you will get a quotient $q$ and a remainder $r$; $r$ is either $0,1$ or $2$".



In other words for any integer $n$ then $n \equiv r \mod 3$ where $r$ is either $0,1,$ or $2$. These are the only three cases.







P.S. That is how I interpreted the hint. As others have pointed out, an (arguably) more elegant proof is to note $n^3 - n = (n-1)n(n+1)$. For any value of $n$, one of those three, $n$, $n-1$, or $n+1$, must be divisible by $3$.



This actually proves $n^3 - n$ is divisible by $6$ as one of $n$, $n-1$ or $n+1$ must be divisible by $2$.






There is also induction. As $(n+ 1)^3 - (n+1) = n^3 - n + 3n^2 + 3n$, this is true for $n+1$ if it is true for $n$. As it is true for $0^3 - 0 = 0$, it is true for all $n$.
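A quick numerical sanity check, purely illustrative: for $n=0,1,2,3,4$ the values of $n^3-n$ are $0,0,6,24,60$, each a multiple of $6$ and hence of $3$.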


algebra precalculus - How to prove by induction that $\frac{a^n+b^n}{2}\geq\left(\frac{a+b}{2}\right)^n$?



I'm about to prove that for any $a,b>0$ and $n\in\mathbb{N},$ the inequality: $\frac{a^n+b^n}{2}\geq\left(\frac{a+b}{2}\right)^n$ holds.



By induction I get: $$\left(\frac{a+b}{2}\right)\cdot\frac{a^n+b^n}{2}$$ $$=\frac{1}{2}\cdot\frac{a^{n+1}+b^{n+1}+ab^n+ba^n}{2}$$ $$=\frac{1}{2}\left(\frac{a^{n+1}+b^{n+1}}{2}+\frac{ab^n+ba^n}{2}\right).$$ Now I have to prove that $$\frac{1}{2}\left(\frac{a^{n+1}+b^{n+1}}{2}+\frac{ab^n+ba^n}{2}\right)$$ $$\leq\frac{a^{n+1}+b^{n+1}}{2}$$ $$\Leftrightarrow a^{n+1}+b^{n+1}\geq ab^n +ba^n.$$ But I don't know how to prove that one, is it possible to be proved by induction ? Thank you.


Answer



Your desired inequality can be written as $0\leq (a-b)(a^n-b^n)$, which is true regardless of which of $a,b$ is bigger.
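Spelled out: $a^{n+1}+b^{n+1}-ab^n-ba^n=a^n(a-b)-b^n(a-b)=(a-b)(a^n-b^n)$, and the two factors always share the same sign because $t\mapsto t^n$ is increasing on $(0,\infty)$.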



Sunday 10 November 2019

linear algebra - What does it imply if all the eigenvalues of a matrix are the same?



What properties does a matrix have if all its eigenvalues are the same? In particular, what happens if all eigenvalues are all equal to 1?


Answer



Generally, it means not much in particular, just that the matrix is composed of Jordan blocks corresponding to the same eigenvalue. From the Jordan decomposition theorem, we see that $A = V^{-1} J V$, with $J$ having constant diagonal entries, which are the eigenvalues of $A$.



However, if the matrix has all its eigenvalues the same and is in addition normal, you know that it is a constant multiple of the identity matrix.
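A standard illustration: the shear matrix
$$\begin{pmatrix}1&1\\0&1\end{pmatrix}$$
has both eigenvalues equal to $1$ but is not the identity (it is a single $2\times2$ Jordan block); normality is exactly what rules such examples out.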



integration - Gradient of a function involving integrals



Statement of the problem




Let $F(x,y)=\iint_{R_{xy}} e^{(u-1)}e^{(v^2-v)}dudv$ where $R_{xy}$ is the region $[0,x]\times[0,y]$. Calculate $\nabla F(1,1)$



The attempt at a solution



I know that $\nabla F(1,1)=(\dfrac{\partial F(1,1)}{\partial x},\dfrac{\partial F(1,1)}{\partial y})$.



I am going to calculate the first partial derivative since the other one can be calculated in a similar way.



$\dfrac{\partial F (x,y)}{\partial x}=\dfrac{\partial}{\partial x} \int_0^y\int_0^x e^{(u-1)}e^{(v^2-v)}dudv$. Now, I don't know if the following step is legitimate:




$\dfrac{\partial}{\partial x} \int_0^y\int_0^x e^{(u-1)}e^{(v^2-v)}dudv=\int_0^y [\dfrac{\partial}{\partial x} \int_0^x e^{(u-1)}e^{(v^2-v)}du]dv$



If that last step was correct, I would like to know how to justify it.



Now, by the fundamental theorem of calculus I know that



$\dfrac{\partial}{\partial x} \int_0^x e^{(u-1)}e^{(v^2-v)}du=e^{(x-1)}e^{(v^2-v)}$



Using this I get




$\dfrac{\partial F (x,y)}{\partial x}=\int_0^y e^{(x-1)}e^{(v^2-v)}dv=e^{(x-1)}\int_0^y e^{(v^2-v)}dv$



Here I got stuck; I've tried to calculate this integral but I couldn't.



Have I done something wrong up to now and maybe that's why I am having trouble with this integral? I would appreciate some help.


Answer



The step you want to justify is actually a pretty subtle thing, known as differentiation under the integral sign. If you're new to calculations like this, then you probably aren't expected to justify it, but this answer of Qiaochu Yuan tells you some conditions under which it's valid.



You applied the Fundamental Theorem of Calculus correctly. The integral you got stuck on has no elementary antiderivative, and doesn't really simplify to something in terms of simple constants when you plug in $(1,1)$ as you have to do to get $\frac{\partial F(1,1)}{\partial x}$. So after you plug in $(1,1)$ and simplify the $e^{x-1}$ part, there's nothing more you can really do with that piece symbolically.




The $y$ partial should work out a bit more nicely.
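Carrying that out, as a sketch: the same reasoning with the roles of $x$ and $y$ swapped gives
$$\frac{\partial F}{\partial y}(x,y)=e^{(y^2-y)}\int_0^x e^{(u-1)}\,du=e^{(y^2-y)}\left(e^{(x-1)}-e^{-1}\right),$$
so $\frac{\partial F}{\partial y}(1,1)=1-e^{-1}$, and altogether $\nabla F(1,1)=\left(\int_0^1 e^{(v^2-v)}\,dv,\ 1-\frac{1}{e}\right)$, the first coordinate being a perfectly respectable non-elementary integral.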


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...