Friday 31 August 2018

Evaluating the complex limit with indeterminate form



I'm trying to evaluate the following complex limit:



$$\lim_{z \rightarrow -1} \left|\frac{\sqrt{z^2-1}}{z+1}\right|.$$



I notice that the limit is of the indeterminate form $0/0$, which prompts me to use l'Hôpital's rule to get infinity. However, my instincts tell me that this is not quite a valid use of l'Hôpital's rule, although I'm not clear on why I feel this way. As such, I have tried another method of evaluating this limit by rearranging it into the following expression:




$$\lim_{z \rightarrow -1} \left|\sqrt{1 - \frac{2}{z + 1}}\right|.$$



Again this evaluates to infinity, and it seems a little more acceptable than the previous method. As such, I would like to verify whether either of these methods is correct, and if not, how one would evaluate this limit without resorting to, say, Wolfram Alpha.



Thanks for the help!


Answer



Both methods are correct. Using L'Hôpital's rule and continuity of the absolute value function



$$\lim_{z \rightarrow -1^+} \left|\frac{\sqrt{z^2-1}}{z+1}\right|=\left|\lim_{z \rightarrow -1^+}\frac{z}{\sqrt{z^2-1}}\right|=\left|\lim_{z \rightarrow -1^+}\frac{1/z}{\sqrt{1-\frac{1}{z^2}}}\right|=\lim_{z \rightarrow -1^+}\frac{1}{\sqrt{\left|1-\frac{1}{z^2}\right|}}=\infty$$

$$\lim_{z \rightarrow -1^-} \left|\frac{\sqrt{z^2-1}}{z+1}\right|=\left|\lim_{z \rightarrow -1^-}\frac{z}{\sqrt{z^2-1}}\right|=\left|\lim_{z \rightarrow -1^-}\frac{1/z}{\sqrt{1-\frac{1}{z^2}}}\right|=\lim_{z \rightarrow -1^-}\frac{1}{\sqrt{\left|1-\frac{1}{z^2}\right|}}=\infty$$



and



$$\left|\lim_{z \rightarrow -1^+}\sqrt{1 - \frac{2}{z + 1}}\right|=\lim_{z \rightarrow -1^+}\sqrt{\left|1 - \frac{2}{z + 1}\right|}=\infty$$
$$\left|\lim_{z \rightarrow -1^-}\sqrt{1 - \frac{2}{z + 1}}\right|=\lim_{z \rightarrow -1^-}\sqrt{\left|1 - \frac{2}{z + 1}\right|}=\infty$$


Thursday 30 August 2018

Number of zeros at the end of $k!$





For how many positive integers $k$ does the ordinary decimal representation of the integer $k\text{ !}$ end in exactly $99$ zeros?




By inspection I found that $400\text{ !}$ ends in exactly $99$ zeros, but $399\text{ !}$ does NOT end in $99$ zeros; I used the formula $$\text{ number of zeros at the end }=\sum_{n=1}^{\infty}\left[\frac{k}{5^n}\right]$$ where $[x]$ denotes the greatest integer not exceeding $x$.



I also found that for $k=401,402,403,404$ the number of zeros is the same, but for $k=405$ the number of zeros increases, since $405$ is the next multiple of $5$ after $400$.



Thus I got that there are only $5$ integers satisfying the condition, namely $400,401,402,403,404$.




The question is possibly a duplicate of this or this, but my question is different from those two questions.




Is there any other rule or easy formula from which I can get how many such integers there are?



Answer



Because the number of zeroes steps up at each multiple of $5$, the only possible answers are five or zero. So the question might be: how to determine whether there exists $k$ such that $k!$ ends in a given number of zeroes?



Say we want $m$ zeroes. So we look for $k$ with
$$ m=f(k):=\sum_{i=1}^\infty\left\lfloor \frac k{5^i}\right \rfloor$$

First note that
$$ \tag1f(k)<\sum_{i=1}^\infty\frac k{5^i}=\frac k4$$
and on the other side
$$ \tag2f(k)\ge \sum_{i=1}^{\lfloor \log_5k\rfloor}\left( \frac k{5^i}-1\right )> \frac k4-\lfloor \log_5k\rfloor-\frac 54>\frac k4-\log_5k-\frac 94$$
For $k\ge 3$, the right-hand side of $(2)$ is strictly increasing. Therefore we start our search at $k=4m+1$ and end it no later than $k=4m+4\log_5(4m+1)+9$. Unless $m$ is awfully large, this requires us to try just a few values of $k$ (recall that we only need to try multiples of $5$).
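The search described above is easy to automate; here is a quick Python sketch (function names are mine) that computes $f(k)$ via the formula from the question and scans the window just derived:

```python
def trailing_zeros(k):
    """f(k): number of trailing zeros of k!, counting factors of 5."""
    count, p = 0, 5
    while p <= k:
        count += k // p
        p *= 5
    return count

def ks_with_zeros(m):
    """All k such that k! ends in exactly m zeros, over a generous window."""
    start = 4 * m + 1          # since f(k) < k/4, no smaller k can work
    end = 4 * m + 60           # comfortably past the 4m + 4*log_5(4m+1) + 9 bound
    return [k for k in range(start, end) if trailing_zeros(k) == m]

print(ks_with_zeros(99))   # [400, 401, 402, 403, 404]
print(ks_with_zeros(98))   # [] -- no k! ends in exactly 98 zeros
```

Note that $m=98$ is one of the "zero" cases: the count jumps from $97$ at $k=399$ straight to $99$ at $k=400$, because $400$ is divisible by $25$.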


complex analysis - Derive Poisson's integral formula from Laplace's equation inside a circular disk




For Laplace's equation inside a circular disk of radius $a$:
\begin{align*}
u(r, \theta) &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(\overline{\theta}) \left[ -\frac{1}{2} + \sum\limits_{n=0}^\infty \left( \frac{r}{a} \right)^n \cos n (\theta - \overline{\theta}) \right] \, d\overline{\theta} \\

\end{align*}

Using $\cos z = \mathop{Re} [e^{iz}]$, sum the geometric series to obtain Poisson's integral formula:
\begin{align*}
u(r,\theta) &= \frac{a^2 - r^2}{2\pi} \int_{-\pi}^{\pi} \frac{f(\overline{\theta}) \, d\overline{\theta}}{a^2 + r^2 - 2ar \cos (\theta - \overline{\theta})} \\
\end{align*}




My work:



\begin{align*}

\sum\limits_{n=0}^\infty \left( \frac{r}{a} \right)^n \cos n (\theta - \overline{\theta}) &= \sum\limits_{n=0}^\infty \left( \frac{r}{a} \right)^n \mathop{Re} \left[ e^{i n (\theta - \overline{\theta})} \right] \\
\end{align*}



Now for real $k \in \mathbb{R}$ and complex $z \in \mathbb{C}$ we have $k \mathop{Re} [z] = \mathop{Re} [kz]$ and $k + \mathop{Re} [z] = \mathop{Re} [k+z]$, so that:



\begin{align*}
&= \mathop{Re} \left[ \sum\limits_{n=0}^\infty \left( \frac{r}{a} \right)^n e^{i n (\theta - \overline{\theta})} \right] \\
&= \mathop{Re} \left[ \sum\limits_{n=0}^\infty \left( \frac{r}{a} e^{i (\theta - \overline{\theta})} \right)^n \right] \\
\end{align*}




Converging geometric series:



\begin{align*}
&= \mathop{Re} \left[ \frac{1}{1 - \frac{r}{a} e^{i (\theta - \overline{\theta})}} \right] \\
\end{align*}



At this point, I believe I need to convert back from $\mathop{Re} [e^{iz}] = \cos z$, but I'm not so sure how to do that.



Plugging that back without converting:




\begin{align*}
u(r, \theta) &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(\overline{\theta}) \left[ -\frac{1}{2} + \mathop{Re} \left[ \frac{1}{1 - \frac{r}{a} e^{i (\theta - \overline{\theta})}} \right] \right] \, d\overline{\theta} \\
u(r, \theta) &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(\overline{\theta}) \mathop{Re} \left[ -\frac{1}{2} + \frac{1}{1 - \frac{r}{a} e^{i (\theta - \overline{\theta})}} \right] \, d\overline{\theta} \\
\end{align*}



... and I'm stuck.


Answer



Hint: $\mathop{Re} \frac 1 z=\mathop{Re} \frac {\overline {z}} {|z|^{2}}=\frac {\mathop{Re} \overline {z}} {|z|^{2}}$. When $z=1-\frac r a e^{i(\theta- \overline {\theta})}$ we have $|z|^{2}=1+\frac {r^{2}} {a^{2}}-2\frac r a \cos(\theta- \overline {\theta})$. Can you finish the computation?
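To see that the hint really produces the Poisson kernel, note that with $\rho=r/a$ and $\varphi=\theta-\overline\theta$ one gets $-\frac12+\mathop{Re}\frac1{1-\rho e^{i\varphi}}=\frac{1-\rho^2}{2(1+\rho^2-2\rho\cos\varphi)}$. A quick numeric check of that algebra in Python:

```python
import cmath, math, random

random.seed(1)
for _ in range(1000):
    rho = random.uniform(0.0, 0.99)             # rho = r/a < 1 inside the disk
    phi = random.uniform(-math.pi, math.pi)     # phi = theta - theta_bar
    z = 1 - rho * cmath.exp(1j * phi)
    # Re(1/z) = Re(conj(z)) / |z|^2
    assert abs((1 / z).real - z.conjugate().real / abs(z) ** 2) < 1e-12
    # -1/2 + Re(1/z) = (1 - rho^2) / (2 |z|^2), the Poisson kernel
    kernel = -0.5 + (1 / z).real
    assert abs(kernel - (1 - rho ** 2) / (2 * abs(z) ** 2)) < 1e-12
print("kernel identity checked")
```

Multiplying by $f(\overline\theta)/\pi$ and integrating then gives exactly the stated Poisson integral formula after substituting $\rho=r/a$.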


Wednesday 29 August 2018

Matrices: left inverse is also right inverse?

If $A$ and $B$ are square matrices, and $AB=I$, then I think it is also true that $BA=I$. In fact, this Wikipedia page says that this "follows from the theory of matrices". I assume there's a nice simple one-line proof, but can't seem to find it.



Nothing exotic, here -- assume that the matrices have finite size and their elements are real numbers.



This isn't homework (if that matters to you). My last homework assignment was about 45 years ago.
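One standard short argument: $AB=I$ gives $\det A\,\det B=1$, so $\det A\neq0$ and $A^{-1}$ exists; then $B=A^{-1}(AB)=A^{-1}$, hence $BA=I$. Here is a small self-contained illustration in exact arithmetic (helper names are mine), solving $AB=I$ for $B$ and checking the other product:

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def right_inverse_2x2(A):
    # Solve A @ B = I directly: B = adj(A) / det(A); AB = I forces det(A) != 0
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

I2 = [[F(1), F(0)], [F(0), F(1)]]
A = [[F(2), F(1)], [F(5), F(3)]]
B = right_inverse_2x2(A)   # the unique B with A @ B = I
assert matmul(A, B) == I2
assert matmul(B, A) == I2  # ...and it is automatically a left inverse too
```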

algebra precalculus - Distributive Property Theory question



I've been wondering about the theory behind the distributive property lately.
For example, $2(\pi r^2)$ is just $2\pi r^2$.
However, when you add a number like $3$, you get $2(\pi r^2 + 3)$. But that isn't just $2\pi r^2+3$;
it's $2\pi r^2 + 2\cdot 3$.
So I was wondering why that is. Why do you only have to multiply once over the whole product $\pi r^2$, instead of having to multiply $2$ by both $\pi$ and $r^2$?
So then I thought: isn't it just $(\pi r^2 + 3) + (\pi r^2 + 3)$? Then I tried to simplify that, thinking it would help me understand why... but all that did was make me more confused than when I started. Could someone help me understand, please?



Answer



The distributive property,
and most basic properties of real numbers,
comes from geometry.



A non-negative value
corresponds to the length of a line segment.



Adding values corresponds to placing
two segments together

and measuring their length.
Since the order that the segments are placed
does not change the total length,
addition is commutative.
Looking at three segments,
the length is the same if
the first two are placed and then the third,
or the last two are placed and then the first.
Therefore
addition is associative.




Multiplying two segments
corresponds to getting the area
of a rectangle with sides the lengths
of the two segments.
Since swapping the two segments
just rotates the rectangle by 90 degrees,
which does not change the area,
multiplication is commutative.




Consider two rectangles
with a common height $h$
and bases $a$ and $b$.
Their areas are
$ah$ and $bh$,
and the sum of the areas is
$ah+bh$.
Place these two rectangles together
so their common height lines up.
They now form a single rectangle

with base $a+b$ and
height $h$,
and the area of this rectangle
is $(a+b)h$.
This means that
$ah+bh = (a+b)h$,
which is the distributive law.



These laws were, of course,
extended to other types of numbers

(real, complex, fields, ...),
but they all started with geometry.



Off topic but interesting:
Try to prove that
$\sqrt{2}$ is irrational
using only geometric concepts
and proofs.
No algebra is allowed.
(I think I'll even propose this as a question.)



Tuesday 28 August 2018

algebra precalculus - Pre-calculus method for finding the 'neat' value $\sin(\frac{\pi}{3})=\frac{\sqrt3}{2}$?

If you have a unit circle and the Pythagorean theorem, how do you discover that $\sin(\frac{\pi}{3})=\frac{\sqrt3}{2}$? Finding the $1, 1, \sqrt2$ triangle seems more obvious. Do you consult a chart of previously-found Pythagorean triples and scale them to a unit hypotenuse? Do you have some reason (and, if so, what?) for wanting to know the sine whose cosine $=\frac{1}{2}$ and get lucky with a neat (as long as you don't mind surds) value? Do you use lengthy trial and error (historically, over centuries)? Or is there some other pre-calculus method than Pythagoras?

Monday 27 August 2018

calculus - Convergence of the integral $\int_0^\infty \frac{\sin^2x}{x^2}~\mathrm dx$.





Determine whether the integral $$\int_0^\infty \frac{\sin^2x}{x^2}~\mathrm dx$$ converges.




I know it converges, since in general we can use complex analysis, but I'd like to know if there is a simpler method that doesn't involve complex numbers. But I cannot come up with a function that I could compare the integral with.


Answer



Hint: $$x>1\implies0\le\frac{\sin^2(x)}{x^2}\le\frac1{x^2}\,,\qquad 0<x\le1\implies0\le\frac{\sin^2(x)}{x^2}\le1\,.$$ The integrand is bounded near $0$ and dominated by $1/x^2$ on $[1,\infty)$, so the integral converges by comparison.
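For reference, the known value of this integral is $\pi/2$. A numeric illustration of the comparison argument, using composite Simpson's rule over $[0,200]$ (the tail beyond $200$ is at most $\int_{200}^\infty x^{-2}\,dx=1/200$, so the truncated integral already pins down the improper one):

```python
from math import sin, pi

def f(x):
    # the integrand, extended continuously by f(0) = 1
    return (sin(x) / x) ** 2 if x else 1.0

# composite Simpson's rule on [0, 200]
N, a, b = 40_000, 0.0, 200.0
h = (b - a) / N
s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, N))
val = s * h / 3
print(val)   # close to pi/2 = 1.5707...
assert abs(val - pi / 2) < 0.01
```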

limits - Summation of infinite exponential series



How is the following identity, containing an exponential function, obtained?

$\sum_{a=0}^{\infty} \frac{a+2} {2(a+1)} X \frac{(a+1){(\lambda X)}^a e^{-\lambda X}}{a! (1+\lambda X)}=\frac{X}{2} (1+ \frac{1}{1+\lambda X})$



I tried the following.



$=\sum_{a=0}^{\infty} \frac{a+2} {2(a+1)} X \frac{(a+1){(\lambda X)}^a e^{-\lambda X}}{a! (1+\lambda X)}$



$=\sum_{a=0}^{\infty} \frac{a+2} {2} X \frac{{(\lambda X)}^a e^{-\lambda X}}{a! (1+\lambda X)}$



$=\sum_{a=0}^{\infty} \frac{X(a+2)} {2(1+\lambda X)} \frac{{(\lambda X)}^a e^{-\lambda X}}{a! }$




$=\sum_{a=0}^{\infty} \frac{X(a+2)} {2(1+\lambda X)} \sum_{a=0}^{\infty} \frac{{(\lambda X)}^a e^{-\lambda X}}{a! }$



As $\sum_{a=0}^{\infty} \frac{{(\lambda X)}^a e^{-\lambda X}}{a! }=1$ (the Poisson pmf sums to $1$),



$=\frac{X} {2(1+\lambda X)} \sum_{a=0}^{\infty}(a+2) $



I don't know how to proceed further. Can someone explain the remaining steps to get to the answer? Thanks in advance.


Answer



After removal of clutter, the summation is essentially




$$\sum_{k=0}^\infty\frac{k+2}{k!}t^k=t\sum_{k=0}^\infty\frac{1}{(k-1)!}t^{k-1}+2\sum_{k=0}^\infty\frac{1}{k!}t^k=(t+2)e^t$$ where $t=\lambda X$. The remaining factors are $\dfrac{Xe^{-t}}{2(t+1)}$.
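As a numeric sanity check of the stated identity (with $t=\lambda X$, summing the left side term by term against the closed form $\frac X2\left(1+\frac1{1+\lambda X}\right)$; helper names are mine):

```python
from math import exp, factorial

def lhs(lam, X, terms=150):
    t = lam * X
    return sum((a + 2) / (2 * (a + 1)) * X * (a + 1) * t ** a * exp(-t)
               / (factorial(a) * (1 + t))
               for a in range(terms))

def rhs(lam, X):
    return X / 2 * (1 + 1 / (1 + lam * X))

for lam, X in [(0.5, 1.0), (2.0, 0.3), (1.0, 4.0)]:
    assert abs(lhs(lam, X) - rhs(lam, X)) < 1e-12
print("identity verified")
```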


elementary set theory - How do we draw the number hierarchy from natural to complex in a Venn diagram?

I want to make a Venn diagram that shows the complete number hierarchy from the smallest (natural number) to the largest (complex number). It must include natural, integer, rational, irrational, real and complex numbers.



How do we draw the number hierarchy from natural to complex in a Venn diagram?




Edit 1:



I found a diagram as follows, but it does not include the complex number.



[image: diagram of nested number sets from natural to real]



My doubt is: should I add one more, slightly larger rectangle to enclose the real rectangle? But then the gap between the two rectangles seems too large if it is to hold only $i$, right?



Edit 2:




Is it correct if I draw as follows?



[image: proposed diagram including the complex numbers]

polynomials - A question on greatest common divisor



I had this question in the Maths Olympiad today. I couldn't solve the part about the greatest common divisor. Please help me understand how to solve it. The question was this:



Let $P(x)=x^3+ax^2+b$ and $Q(x)=x^3+bx+a$, where $a$ and $b$ are non-zero real numbers. If the roots of $P(x)=0$ are the reciprocals of the roots of $Q(x)=0$, then prove that $a$ and $b$ are integers. Also find the greatest common divisor of $P(2013!+1)$ and $Q(2013!+1)$.



Let the roots of $P(x)=0$ be $\alpha,\; \beta, \;and\;\gamma$. Then we have the following four relations. $$\alpha+\beta+\gamma=-a$$ $$\alpha\beta\gamma=-b$$$$\frac{1}{\alpha\beta}+\frac{1}{\beta\gamma}+\frac{1}{\alpha\gamma}=b$$$$\frac{1}{\alpha\beta\gamma}=-a$$




From these, we get $a=b=1$ So, $P(x)=x^3+x^2+1$ and $Q(x)=x^3+x+1$. Now, how to proceed further?


Answer



If $d$ divides $P(x),Q(x)$



$d$ will divide $P(x)-Q(x)=x^2-x=x(x-1)$



But $d$ can not divide $x$ as $(P(x),x)=1\implies d$ will divide $x-1$



Again, $d$ will divide $Q(x)-x(x^2-x)=x^2+x+1$, and hence also $x^2+x+1-(x^2-x)=2x+1$




Again, $d$ will divide $2x+1-2(x-1)=3$



Observe that $3$ divides $P(2013!+1),Q(2013!+1)$



as $2013!+1\equiv1\pmod3$, $(2013!+1)^n\equiv1\pmod 3$ for any integer $n\ge0$, so $P(2013!+1)\equiv Q(2013!+1)\equiv 1+1+1\equiv0\pmod 3$. Hence the greatest common divisor is exactly $3$.
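The same chain of divisibilities works for any integer $x\equiv1\pmod3$ in place of $2013!+1$, which makes the result easy to spot-check (a quick sketch):

```python
from math import gcd

P = lambda x: x ** 3 + x ** 2 + 1
Q = lambda x: x ** 3 + x + 1

# The reductions above show any common divisor d divides 3, and for
# x = 1 (mod 3) both P(x) and Q(x) are divisible by 3, so the gcd is 3.
for x in [4, 7, 10, 1003, 10 ** 6 + 3]:
    assert x % 3 == 1
    assert gcd(P(x), Q(x)) == 3
print("gcd is 3 for every tested x = 1 (mod 3)")
```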


probability - Proving that this $(X,Y)$ is not uniformly distributed over the unit disk



I'm told that $X$ has uniform distribution on $(-1,1)$ and, given $X=x$, $Y$ is uniformly distributed on $(-\sqrt{1-x^2},\sqrt{1-x^2})$. I know it's not uniformly distributed over the unit disk $\{(x,y):x^2+y^2\lt 1\}$, because my professor did this in class, but I didn't quite understand what he was talking about; I was just a few minutes late for class, and he doesn't like to explain things twice. So could anyone show how he did this?


Answer




Over the unit disc, a uniform distribution should assign the same probability to any two portions of the disc with the same area, hence the name uniform. That's not the case for the example your professor gave you: here $X$ falls in a thin vertical strip near $x=\pm1$ just as often as in an equally wide strip near the center, even though the strip near the edge covers far less of the disc's area.
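A quick Monte Carlo makes the non-uniformity concrete: under this scheme $P(|X|>0.9)=0.1$ exactly, while the two strips $|x|>0.9$ carry only about $3.7\%$ of the disk's area, so a genuinely uniform distribution would put only about $3.7\%$ of the points there. A Python sketch:

```python
import math, random

random.seed(42)
n = 200_000
edge = 0
for _ in range(n):
    x = random.uniform(-1, 1)
    h = math.sqrt(1 - x * x)
    y = random.uniform(-h, h)        # Y | X=x uniform on (-sqrt(1-x^2), sqrt(1-x^2))
    if abs(x) > 0.9:
        edge += 1

# area fraction of the disk with |x| > 0.9 (two circular segments)
area_frac = 1 - (0.9 * math.sqrt(1 - 0.81) + math.asin(0.9)) / (math.pi / 2)
print(edge / n, area_frac)           # about 0.10 versus about 0.037
assert edge / n > 2 * area_frac      # far more mass near the edge than uniform allows
```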


Sunday 26 August 2018

intuition - Numbers to the Power of Zero



I have been a witness to many a discussion about numbers to the power of zero, but I have never really been sold on any claims or explanations. This is a three part question, the parts are as follows...






1) Why does $n^{0}=1$ when $n\neq 0$? How does that get defined?



2) What is $0^{0}$? Is it undefined? If so, why does it not equal 1?




3) What is the equation that defines exponents? I can easily write a small program to do it (see below), but what about in equation format?






I just want a little discussion about numbers to the power of zero, for some clarification.






Code for exponents (Ruby):


def find_exp(x, n)
  total = 1
  n.times { total *= x }
  total
end

Answer



It's basically just a matter of what you define the notation to mean. You can define things to mean whatever you want -- except that if you choose a definition that leads to different results than everyone else's definitions give, then you're responsible for any confusion brought about by your using a familiar notation to mean something nonstandard.



Most commonly we define $x^0$ to mean $1$ for any $x$. What you find in discussions elsewhere are arguments that this is a useful definition, not arguments that it is correct. (Definitions are correct because we choose them, not for any other reason. That's why they are definitions.)




Some people choose (for certain purposes) to explicitly refrain from defining $0^0$ to mean anything. That choice is (supposedly) useful because then the map $x,y\mapsto x^y$ is continuous in the entire subset of $\mathbb R\times\mathbb R$ it is defined on. But it's an equally valid choice to define $0^0$ to mean $1$ and then just remember that $x,y\mapsto x^y$ is not continuous at $(0,0)$.


sequences and series - How to derive the formula for the sum of the first $n$ perfect squares?

How do you derive the formula for the sum of the series when $S_n = \sum_{j=1}^n j^2$?



The relationship $(n, S_n)$ can be determined by a polynomial in $n$. You are supposed to use finite differences to determine the degree and then use a calculator to derive the function. But how?
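The finite-difference recipe mentioned above can be carried out directly: tabulate $S_n$, difference repeatedly until the differences are constant (here at the third difference, so $S_n$ is a cubic in $n$), and then check the resulting closed form $S_n=\frac{n(n+1)(2n+1)}6$. A short sketch:

```python
# tabulate S_n = 1^2 + 2^2 + ... + n^2 for n = 1..7
S = [sum(j * j for j in range(1, n + 1)) for n in range(1, 8)]

def diffs(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

d1 = diffs(S)    # 4, 9, 16, 25, ...   (the added squares)
d2 = diffs(d1)   # 5, 7, 9, 11, ...    (linear)
d3 = diffs(d2)   # constant => S_n is a cubic polynomial in n
print(d3)        # [2, 2, 2, 2]

# the standard closed form agrees with the table
for n in range(1, 8):
    assert S[n - 1] == n * (n + 1) * (2 * n + 1) // 6
```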

exponential function - What is the most intuitive explanation for Euler's identity?




Is there any intuitive explanation for:
$$e^{i\pi} + 1 = 0$$



About whether this question is a duplicate, what is asked for is not a proof but an explanation that helps with the not-so-intuitive aspects of the identity.


Answer



Unit vectors in opposite directions along the real axis of the complex plane add to zero: $e^{i\pi}$ is the unit vector at angle $\pi$, namely $-1$, so $e^{i\pi}+1=0$.




[image: $1$ and $e^{i\pi}$ as opposite unit vectors on the real axis]


probability - Making 400k random choices from 400k samples seems to always end up with 63% distinct choices, why?



I have a very simple simulation program, the sequence is:




  • Create an array of 400k elements


  • Use a PRNG to pick an index, and mark the element (repeat 400k times)

  • Count number of marked elements.



An element may be picked more than once, but counted as only one "marked element".



The PRNG is properly seeded. No matter how many times I run the simulation, I always end up getting around 63% (252k) marked elements.



What is the math behind this? Or was there a fault in my PRNG?


Answer




No, your program is correct. The probability that a particular element is not marked
at all is exactly $\left(1-\frac1n\right)^n$ with $n=400\,000$, which is about $\frac{1}{e}$; equivalently, the number of times a given element is picked is approximately Poisson distributed with mean $1$, and that approximation is very good for large samples (400k is very large). So $1-\frac{1}{e}\approx0.632$
is the expected fraction of marked elements.
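The simulation itself is only a few lines of Python, and the observed fraction lands right on $1-e^{-1}$:

```python
import math, random

random.seed(0)
n = 400_000
marked = set()
for _ in range(n):
    marked.add(random.randrange(n))    # pick a random index, mark it

frac = len(marked) / n
print(frac, 1 - 1 / math.e)            # both about 0.632
assert abs(frac - (1 - 1 / math.e)) < 0.01
```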


Saturday 25 August 2018

analysis - Monotone Function on n-tuples




I have a function $f: M_k \to \mathbb{R}$, where $M_k \subset \{0,1\}^n$ is the set of all $n$-tuples with exactly $k$ entries that are $1$ while the rest are $0$.



I wish to show that $f$ is strongly monotonically increasing on $M_k$, that is:



$\forall (x_1, \dots, x_n),(y_1, \dots, y_n) \in M_k:\\ (x_1, \dots, x_n) <(y_1, \dots, y_n) \Rightarrow f((x_1, \dots, x_n)) < f((y_1, \dots, y_n))$



Here, $(x_1, \dots, x_n) < (y_1, \dots, y_n)$ holds iff $x_i \leq y_i \; \forall 1 \leq i \leq n$ and $\exists j: x_j < y_j$.



My problem is that this seems to be undefined since no two distinct tuples can fulfill this.




Any tuple with $x_i < y_i$ for some $i$ must have another index $j$ with $x_j > y_j$, because each tuple has exactly $k$ entries that are $1$, right?



Is strong monotonicity even defined in this case, or is each function on $M_k$ strongly monotone by default?



Thanks in advance!


Answer



Claim: Given two distinct $n$-tuples, $\mathbf{x}=(x_1,\dots,x_n)$ and $\mathbf{y}=(y_1,\dots,y_n),$ with exactly $k$ entries equal to $1$ and the rest $0,$ where $1\le k< n,$ and given the definition
$$(u_1,\dots,u_n)<(v_1,\dots,v_n) \iff [(\forall\,i)(1\le i\le n)(u_i\le v_i)\;\land\;(\exists\,j)(u_j<v_j)]\,,$$
it follows that $(x_1,\dots,x_n)\not<(y_1,\dots,y_n).$




Proof: If there does not exist a $j$ such that $x_j<y_j,$ then $\mathbf{x}\not<\mathbf{y}$ by definition. If such a $j$ does exist, then since both tuples have exactly $k$ entries equal to $1,$ there must be some index $\ell$ with $x_{\ell}>y_{\ell},$ making the condition $(\forall\,i)(1\le i\le n)(x_i\le y_i)$ false. Hence, $\mathbf{x}\not<\mathbf{y}.$


limits - Summation of the series

Please help me find the sum of the following series under the limit.



$$
\lim_{n \to \infty}

\sum_{x = 1}^{n}{1 \over x}\,\cos\left(\left[x - 1\right]{\pi \over 3}\right)
$$



Thank you.
:)

integration - Help me Evaluate this Integral




I have this problem $\int \frac {-5x + 11}{{(x+1)(x^2+1)}} \text{d}x$




And I'm not sure how to deal with it. I've tried substitution and got nowhere. I've peeked at the answer and there's a trigonometric part with $\arctan$. Do I use partial fraction expansion here?



Thanks


Answer



Use partial fractions to get
$$\frac{-5x+11}{(x+1)(x^2+1)}=\frac{3-8x}{x^2+1}+\frac{8}{x+1}=\frac{3}{x^2+1}-\frac{8x}{x^2+1}+\frac{8}{x+1}$$



details:




$$\frac{-5x+11}{(x+1)(x^2+1)}=\frac{Ax+B}{x^2+1}+\frac{C}{1+x}=\frac{(Ax+B)(1+x)+C(x^2+1)}{(x^2+1)(1+x)}$$
$$\frac{-5x+11}{(x+1)(x^2+1)}=\frac{Ax+Ax^2+B+Bx+Cx^2+C}{(x^2+1)(1+x)}$$
$$A+B=-5$$



$$A+C=0$$
$$B+C=11$$



Solve these simultaneous equations to get $A=-8$, $B=3$, $C=8$, so the integral will be
$$\int \left(\frac{3}{x^2+1}-\frac{8x}{x^2+1}+\frac{8}{x+1}\right)dx=3\tan^{-1}x-4\log(x^2+1)+8\log(x+1)+C $$
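A quick numeric check that the antiderivative is right, comparing a central-difference derivative of the result against the original integrand:

```python
from math import atan, log

F = lambda x: 3 * atan(x) - 4 * log(x * x + 1) + 8 * log(x + 1)
integrand = lambda x: (-5 * x + 11) / ((x + 1) * (x * x + 1))

h = 1e-6
for x in [0.0, 0.5, 1.0, 2.5]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference F'(x)
    assert abs(numeric - integrand(x)) < 1e-6
print("antiderivative checked")
```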


abstract algebra - How to prove that $zgcd(a,b)=gcd(za,zb)$





I need to prove that $z\gcd(a,b)=\gcd(za,zb)$.




I tried a lot, for example looking at the set of common divisors of the two sides, but I can't conclude anything from that. Can you please give me some advice on how to handle this problem? Here $a,b,z \in \mathbb{Z}$.


Answer



Below are a few proofs of the gcd distributive law $\rm\:(ax,bx) = (a,b)x\:$ using Bezout's identity, universal gcd laws, and unique factorization. In each proof the first line serves as a hint.







First we show that the gcd distributive law follows immediately from the fact that, by Bezout, the gcd may be specified by linear equations. Distributivity follows because such linear equations are preserved by scalings. Namely, for naturals $\rm\:a,b,c,x \ne 0$



$\rm\qquad\qquad \phantom{ \iff }\ \ \ \:\! c = (a,b) $



$\rm\qquad\qquad \iff\ \: c\:\ |\ \:a,\:b\ \ \ \ \ \ \&\ \ \ \ c\ =\ na\: +\: kb,\ \ \ $ some $\rm\:n,k\in \mathbb Z$



$\rm\qquad\qquad \iff\ cx\ |\ ax,bx\ \ \ \&\ \ \ cx = nax + kbx,\ \,$ some $\rm\:n,k\in \mathbb Z$



$\rm\qquad\qquad { \iff }\ \ cx = (ax,bx) $




The reader familiar with ideals will note that these equivalences are captured more concisely in the distributive law for ideal multiplication $\rm\:(a,b)(x) = (ax,bx),\:$ when interpreted in a PID or Bezout domain, where the ideal $\rm\:(a,b) = (c)\iff c = gcd(a,b)$






Alternatively, more generally, in any integral domain $\rm\:D\:$ we may employ the universal definitions of GCD, LCM to generalize the above proof.



Theorem $\rm\ \ (a,b)\ =\ (ax,bx)/x\ \ $ if $\rm\ (ax,bx)\ $ exists in $\rm\:D.$



Proof $\rm\quad\: c\ |\ a,b \iff cx\ |\ ax,bx \iff cx\ |\ (ax,bx) \iff c\ |\ (ax,bx)/x\ \ \ $ QED




Such universal definitions often serve to simplify proofs, e.g. see this proof of the GCD * LCM law.






Alternatively, comparing powers of primes in unique factorizations, it reduces to the following
$$ \min(a+c,\,b+c)\ =\ \min(a,b) + c$$



The proof is precisely the same as the prior proof, replacing gcd by min, and divides by $\le$, and



$$\begin{eqnarray} {\rm employing}\quad\ c\le a,b&\iff& c\le \min(a,b)\quad&&\rm[universal\ definition\ of\ \ min]\\

\rm the\ analog\ of\quad\ c\ \, |\, \ a,b&\iff&\rm c\ \ |\ \ gcd(a,b)\quad&&\rm[universal\ definition\ of\ \ gcd] \end{eqnarray}$$
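For integers the distributive law is also easy to spot-check numerically:

```python
from math import gcd
from random import randint, seed

seed(1)
for _ in range(1000):
    a, b, x = randint(1, 10 ** 6), randint(1, 10 ** 6), randint(1, 10 ** 3)
    assert gcd(a * x, b * x) == x * gcd(a, b)
print("gcd(ax, bx) = x gcd(a, b) holds on 1000 random triples")
```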


calculus - Integration of $\int^{1}_{-1} \frac {1}{3} \sinh^{-1} \left( \frac {3\sqrt 3}{2} (1-t^2) \right) dt$



Recently, in this post of mine, I came across the hyperbolic solution to the cubic equation for one real root, given by
$$
t=-2\sqrt \frac {p}{3} \sinh \left( \frac {1}{3} \sinh^{-1} \left( \frac {3q}{2p} \sqrt \frac {3}{p} \right) \right)
$$

Intuitively, I sought to find the related definite integral,
$$
I=\int^{1}_{-1} \frac {1}{3} \sinh^{-1} \left( \frac {3\sqrt 3}{2} (1-t^2) \right) dt
$$


Unfortunately, there was no closed-form solution. However, $\frac{I}{2}+1$ is amazingly near $\sqrt 2$.
$$
I=0.8285267994716327, \frac {I}{2} +1=1.4142633998
$$

To investigate more, I tried a heuristic expansion of the integral into Egyptian fractions. Although it gets problematic after the fourth term,
the first four terms are
$$
\frac {I}{2} +1 = 1+ \frac {1}{2} - \frac {1}{12}-\frac {1}{416}
$$

Here the denominators can be given by,

$$
a_n = \sum_{k=0}^{n} { }^nC_k (2^n - 2^kq)^{n-k}q^k , q=\sqrt 2
$$

(Likewise, the denominators in the expansion for $\sqrt 2$ are related to Pell numbers, which makes me believe that my integral too is somewhat related to the numbers $a_n$.) Therefore, I am looking for either a closed form or a fast-converging infinite series for the integral; either would do. Thanks for any help.





For $t=\sin z$ and applying integration by parts, I get another, somewhat simpler, indefinite integral,
$$
\frac {\sin z}{3} \sinh^{-1} \left( \frac {3\sqrt 3}{2} \cos^2 z \right) + 2\sqrt 3 \int \frac {\sin^2 z \cos z dz}{\sqrt {27\cos^4 z + 4}}

$$

Then again I am stuck. Moreover, this expression ensures that my definite integral is an improper one.





A closed solution in terms of incomplete elliptic integrals with complex arguments is, as given by a user in the comments section,
$$
\frac {4}{9} (9+2\sqrt 3 i) \left[ F \left( \sin^{-1} \sqrt {\frac {3}{31}(9+2\sqrt 3 i)} ; \frac {1}{31} (23-12\sqrt 3 i) \right)-
E \left( \sin^{-1} \sqrt {\frac {3}{31}(9+2\sqrt 3 i)} ; \frac {1}{31} (23-12\sqrt 3 i) \right) \right]
$$


However, I am still wondering how to transform this into a manifestly real number; in particular, the $a_n$ connection of the integral fascinates me.


Answer



We have from symmetry that $$I=\frac23\int_0^1\sinh^{-1}\left[\frac{3\sqrt3}2(1-x^2)\right]dx$$
So we define
$$f(a)=\int_0^1\sinh^{-1}[a(1-x^2)]dx$$
Then we recall that
$$\sinh^{-1}(x)=x\,_2F_1\left(\frac12,\frac12;\frac32;-x^2\right)=\sum_{n\geq0}(-1)^n\frac{(1/2)_n^2}{(3/2)_n}\frac{x^{2n+1}}{n!}$$
so
$$\sinh^{-1}[a(1-x^2)]=a(1-x^2)\,_2F_1\left(\frac12,\frac12;\frac32;-a^2(1-x^2)^2\right)\\
=\sum_{n\geq0}(-1)^n\frac{a^{2n+1}}{n!}\frac{(1/2)_n^2}{(3/2)_n}(1-x^2)^{2n+1}$$


so
$$f(a)=\sum_{n\geq0}(-1)^n\frac{a^{2n+1}}{n!}\frac{(1/2)_n^2}{(3/2)_n}\int_0^1(1-x^2)^{2n+1}dx$$
For this integral, we use $x=\sin(t)$:
$$j_n=\int_0^1(1-x^2)^{2n+1}dx=\int_0^{\pi/2}\cos(t)^{4n+3}dt$$
I leave it as a challenge to you to show that $$\int_0^{\pi/2}\sin(t)^a\cos(t)^bdt=\frac{\Gamma(\frac{a+1}2)\Gamma(\frac{b+1}2)}{2\Gamma(\frac{a+b}2+1)}$$
choosing $b=4n+3$, $a=0$ we have
$$j_n=\frac{\Gamma(1/2)\Gamma(2n+2)}{2\Gamma(2n+5/2)}$$
Then defining
$$t_n=\frac{(1/2)_n^2}{(3/2)_n}j_n$$
we have $$\frac{t_{n+1}}{t_n}=\frac{(n+\frac12)^2(n+1)}{(n+\frac74)(n+\frac54)}$$

Which gives (note that the $n=0$ term contributes $t_0=j_0=\frac23$) $$f(a)=\frac{2a}{3}\,_3F_2\left(\frac12,\frac12,1;\frac74,\frac54;-a^2\right)$$
And since $I=\frac23f(3\sqrt3/2)$ we have (assuming I've made no mistakes),
$$I=\frac{2\sqrt3}{3}\,_3F_2\left(\frac12,\frac12,1;\frac74,\frac54;-\frac{27}4\right)$$
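As a numeric cross-check of the derivation (pure Python, helper names mine): the term ratio above is summed directly as a power series, compared against a quadrature of $f(a)$ inside the series' radius of convergence, and the full integral is compared against the decimal quoted in the question. Note the $n=0$ term contributes $t_0=j_0=\frac23$, which is where the prefactor $\frac{2\sqrt3}{3}$ (rather than $\sqrt3$) comes from.

```python
from math import asinh, sqrt

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def hyp3f2_series(z, terms=300):
    # 3F2(1/2,1/2,1; 7/4,5/4; z) by its power series, valid for |z| < 1;
    # consecutive-term ratio (n+1/2)^2 z / ((n+7/4)(n+5/4)) as derived above
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (n + 0.5) ** 2 * z / ((n + 1.75) * (n + 1.25))
    return total

# check f(a) = (2a/3) * 3F2(...; -a^2) inside the radius of convergence
for a in [0.2, 0.5, 0.9]:
    f_a = simpson(lambda x: asinh(a * (1 - x * x)), 0.0, 1.0)
    assert abs(f_a - 2 * a / 3 * hyp3f2_series(-a * a)) < 1e-9

# the integral in the post: I = (2/3) * f(3*sqrt(3)/2)
I = 2 / 3 * simpson(lambda x: asinh(1.5 * sqrt(3) * (1 - x * x)), 0.0, 1.0)
print(I)   # about 0.828527, matching the decimal quoted in the question
assert abs(I - 0.8285267994716327) < 1e-7
```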


Friday 24 August 2018

logarithms - Convert a sum containing a log function to a geometric series



I'm trying to calculate the complexity of an algorithm. I've managed to boil down the cost of the algorithm to the following sum.
$$\sum_{k = 1}^{\log_{2}(n)} \frac{n\log_{2}(\frac{k^{2}}{2})}{k^{2}}$$
where $n$ is the size of the data set.



I'm struggling to turn the sum into a function that I can use to figure out the big-$O$ of my algorithm. I was initially thinking that if I could convert the sum into a geometric series, then I might have something that I could work with.


Answer



The series $\displaystyle\sum_{k\ge1}\frac{\log_{2}(\frac{k^{2}}{2})}{k^{2}}$ converges. Let $S$ denote its sum. Then
$$
\sum_{k = 1}^{\log_{2}(n)} \frac{n\log_{2}(\frac{k^{2}}{2})}{k^{2}}=nS-\left(n\frac{\log\log n}{\log n}\right)x_n,

$$
where $x_n\to+1$.



Edit In fact (see comment below), the OP is interested in $T_n=nS_i(x)$ for $i=\lfloor \log_2(n)\rfloor$ and $x=\frac12$, where, for every $i$ and $x$,
$$
S_i(x)=\sum_{k=1}^i(k-1)x^k=\sum_{k=0}^{i-1}kx^{k+1}=x^2U'_i(x),\quad U_i(x)=\sum_{k=0}^{i-1}x^{k}.
$$
The function $x\mapsto U_i(x)$ is the sum of a geometric series, hence, for every $x\ne1$,
$$
U_i(x)=\frac{1-x^i}{1-x},\qquad U'_i(x)=\frac1{(1-x)^2}(1+ix^i-x^i-ix^{i-1}).

$$
Using $x=\frac12$, this yields
$$
T_n=n(1-(i+1)2^{-i}).
$$
Since $i=\lfloor \log_2(n)\rfloor$, one gets
$$
$$n-2(\log_2(n)+1)<T_n<n-\log_2(n)\,,$$
hence $T_n=n-\Theta(\log_2(n))$. The sequence of general term $(T_n-n)/\log_2(n)$ has the interval $[-2,-1]$ as limit set.
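A quick check of the closed form $T_n=n\left(1-(i+1)2^{-i}\right)$ against the raw sum $nS_i(\tfrac12)$ (helper name mine):

```python
import math

def T(n):
    i = int(math.log2(n))                         # i = floor(log2 n)
    # T_n = n * S_i(1/2),  where  S_i(x) = sum_{k=1}^{i} (k-1) x^k
    return n * sum((k - 1) * 0.5 ** k for k in range(1, i + 1))

for n in [16, 100, 1024, 10 ** 6]:
    i = int(math.log2(n))
    assert abs(T(n) - n * (1 - (i + 1) * 2.0 ** -i)) < 1e-6 * n
print("closed form verified")
```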



analysis - Inverse of the sum $\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j} j^{\,k} a_j$



$k\in\mathbb{N}$



The inverse of the sum $$b_k:=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j} j^{\,k} a_j$$ is obviously

$$a_k=\sum\limits_{j=1}^k \binom{k-1}{j-1}\frac{b_j}{k^j}$$ .



How can one prove it (in a clear manner)?



Thanks in advance.






Background of the question:




It’s $$\sum\limits_{k=1}^\infty \frac{b_k}{k!}\int\limits_0^\infty \left(\frac{t}{e^t-1}\right)^k dt =\sum\limits_{k=1}^\infty \frac{a_k}{k}$$ with $\,\displaystyle b_k:=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j}j^{\,k}a_j $.



Note:



A special case is $\displaystyle a_k:=\frac{1}{k^n}$ with $n\in\mathbb{N}$ and therefore $\,\displaystyle b_k=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j}j^{\,k-n}$ (see Stirling numbers of the second kind)
$$\sum\limits_{k=1}^n \frac{b_k}{k!}\int\limits_0^\infty \left(\frac{t}{e^t-1}\right)^k dt =\zeta(n+1)$$ and the inverse equation can be found in A formula for $\int\limits_0^\infty (\frac{x}{e^x-1})^n dx$ .


Answer



In this proof, the binomial identity
$$\binom{m}{n}\,\binom{n}{s}=\binom{m}{s}\,\binom{m-s}{n-s}$$
for all integers $m,n,s$ with $0\leq s\leq n\leq m$ is used frequently, without being specifically mentioned. A particular case of importance is when $s=1$, where it is given by

$$n\,\binom{m}{n}=m\,\binom{m-1}{n-1}\,.$$



First, rewrite
$$b_k=k\,\sum_{j=1}^{k}\,(-1)^{k-j}\,\binom{k-1}{j-1}\,j^{k-1}\,a_j\,.$$
Then,
$$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{k}{l^k}\,\sum_{j=1}^k\,(-1)^{k-j}\,\binom{k-1}{j-1}\,j^{k-1}\,a_j\,.$$
Thus,
$$\begin{align}
\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}&=\sum_{j=1}^l\,\frac{a_j}{j}\,\sum_{k=j}^l\,(-1)^{k-j}\,\binom{l-1}{k-1}\,\binom{k-1}{j-1}\,k\left(\frac{j}{l}\right)^k
\\

&=\sum_{j=1}^l\,\frac{a_j}{j}\,\binom{l-1}{j-1}\,\sum_{k=j}^l\,(-1)^{k-j}\,\binom{l-j}{k-j}\,k\left(\frac{j}{l}\right)^k\,.
\end{align}$$
Let $r:=k-j$. We have
$$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\sum_{j=1}^l\,\frac{a_j}{j}\,\binom{l-1}{j-1}\,\left(\frac{j}{l}\right)^j\,\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}\,.\tag{*}$$



Now, if $j=l$, then
$$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}=l\,.$$
If $j<l,$ then
$$\begin{align}
\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,r\,\left(\frac{j}{l}\right)^{r}&=-(l-j)\left(\frac{j}{l}\right)\,\sum_{r=1}^{l-j}\,(-1)^{r-1}\,\binom{l-j-1}{r-1}\,\left(\frac{j}{l}\right)^{r-1}

\\&=-j\left(1-\frac{j}{l}\right)\,\left(1-\frac{j}{l}\right)^{l-j-1}=-j\left(1-\frac{j}{l}\right)^{l-j}
\end{align}$$
and
$$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,j\left(\frac{j}{l}\right)^r=j\left(1-\frac{j}{l}\right)^{l-j}\,.$$
Consequently,
$$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}=\begin{cases}
0\,,&\text{if }j<l\,,\\
l\,,&\text{if }j=l\,.
\end{cases}$$
From (*),

$$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\frac{a_l}{l}\,\binom{l-1}{l-1}\,\left(\frac{l}{l}\right)^l\,l=a_l\,.$$
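The pair of formulas is also easy to verify computationally in exact rational arithmetic; a round-trip sketch (helper names mine):

```python
from fractions import Fraction
from math import comb

def b_from_a(a):
    # b_k = sum_{j=1}^k (-1)^(k-j) C(k,j) j^k a_j   (lists hold a_1..a_n)
    n = len(a)
    return [sum((-1) ** (k - j) * comb(k, j) * j ** k * a[j - 1] for j in range(1, k + 1))
            for k in range(1, n + 1)]

def a_from_b(b):
    # a_k = sum_{j=1}^k C(k-1, j-1) b_j / k^j
    n = len(b)
    return [sum(comb(k - 1, j - 1) * b[j - 1] / Fraction(k ** j) for j in range(1, k + 1))
            for k in range(1, n + 1)]

a = [Fraction(1, k) for k in range(1, 8)]   # e.g. a_k = 1/k
assert a_from_b(b_from_a(a)) == a
print("inversion verified for a_k = 1/k, k = 1..7")
```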


real analysis - Why is multivariable continuous differentiability defined in terms of partial derivatives?



Both in my textbook and on Wikipedia, continuous differentiability of a function $f:\Bbb R^m \to \Bbb R^n$ is defined by the existence and continuity of all of the partial derivatives. Since there is a notion of a (total) derivative (AKA differential) for multivariable functions, I'm wondering why continuous differentiability is not defined as existence and continuity of the derivative map $Df(a)$? Is there some reason why having existence and continuity of partials is more convenient or maybe continuity of the total derivative is too strict of a condition?


Answer




Continuous differentiability of the function $f: \mathbb{R}^m \to \mathbb{R}^n$ (in terms of partial derivatives) is equivalent to existence and continuity of the map
$$Df: \mathbb{R}^m \to L(\mathbb{R}^m, \mathbb{R}^n)$$
$$ x \to Df_x$$
which takes a point to the derivative at the point. Any book on analysis on $\mathbb{R}^n$ will have a proof of this fact.


continuity - Modified dirichlet function





  1. Can Dirichlet's function be modified in such a way that it is continuous at some real number? For instance, as $xD(x)$ is continuous at $x=0$, is it possible that $(x-1)D(x)$ is continuous at $x=1$? Here $D(x)$ is the Dirichlet function.


  2. Can it be modified to be continuous at finitely many points in $\mathbb{Q}$?


  3. I am aware that there can't be a function continuous only on rational numbers, but with the same approach as $2$, if possible, why isn't it possible to construct such a function?




Thanks in advance!


Answer



Given numbers $x_1,\ldots,x_n$, you can define an "accordion function" $g(x)$ by piecing together absolute values so that $g(x_j)=0$ and $g((x_j+x_{j+1})/2)=1$. Then $g(x)D(x)$ is continuous at $x_1,\ldots,x_n$ and discontinuous everywhere else.



The same approach wouldn't work for all the rationals because they are dense, and so there is no room for the function to "raise up" and "come down".



Thursday 23 August 2018

real analysis - Proving a limit using precise definition

Make a conjecture about $\lim_{k \to \infty} s_k$, and prove your conjecture.



$s_k=

\begin{cases}
k, & \text{if $k$ is even} \\[2ex]
\frac{1}{k}, & \text{if $k$ is odd}
\end{cases}$



I have come to the conclusion that the limit does not exist, but I am unsure how to prove this using the precise definition of the limit. Do I choose a concrete bound and show that the terms will eventually exceed it?

Wednesday 22 August 2018

linear algebra - A real function which is additive but not homogeneous



From the theory of linear mappings, we know linear maps over a vector space satisfy two properties:



Additivity: $$f(v+w)=f(v)+f(w)$$




Homogeneity: $$f(\alpha v)=\alpha f(v)$$



where $\alpha\in \mathbb{F}$ is a scalar in the field over which the vector space is defined, and neither of these conditions implies the other one. If $f$ is defined over the complex numbers, $f:\mathbb{C}\longrightarrow \mathbb{C}$, then finding a mapping which is additive but not homogeneous is simple; for example, $f(c)=c^*$. But can anyone present an example on the reals, $f:\mathbb{R}\longrightarrow \mathbb{R}$, which is additive but not homogeneous?


Answer



If $f : \Bbb{R} \to \Bbb{R}$ is additive, then you can show that $f(\alpha v) = \alpha f(v)$ for any $\alpha \in \Bbb{Q}$ (so $f$ is a linear transformation when $\Bbb{R}$ is viewed as a vector space over $\Bbb{Q}$). As $\Bbb{Q}$ is dense in $\Bbb{R}$, it follows that an additive function that is not homogeneous must be discontinuous. To construct non-trivial discontinuous functions on $\Bbb{R}$ with nice algebraic properties, you usually need to resort to the existence of a basis for $\Bbb{R}$ viewed as a vector space over $\Bbb{Q}$. Such a basis is called a Hamel basis. Given a Hamel basis $B = \{x_i \mid i \in I\}$ for $\Bbb{R}$ (where $I$ is some necessarily uncountable index set), you can easily define a function that is additive but not homogeneous, e.g., pick a basis element $x_i$ and define $f$ such that $f(x_i) = 1$ and $f(x_j) = 0$ for $j \neq i$.


abstract algebra - Existence of solution for the diophantine equation $100x - 23y = -19$



In order to solve the diophantine equation:



$$100x + (-23)y = -19$$



(from here)



we could use the theorem that :





The diophantine equation $ax+by=c$ has solutions if and only if
$gcd(a,b)|c$. If so, it has infinitely many solutions, and any one
solution can be used to generate all the other ones.




Well, $gcd(100,-23)=1$, therefore this equation couldn't have integer solutions. However, I used the euclidean algorithm and arrived at:



$$100 = 23*4 + 8 \\
23 = 8*2 + 7\\

8 = 7*1 + 1\\
7 = 7*1 + 0$$
From this, we have that:



$$1 = 8 - 7\\
7 = 23 - 8*2\\
8 = 100 - 23*4$$



Therefore, substituting back in the GCD process we get:




$$1 = 8 - (23 - 8*2)\\
1 = 100 - 23*4 - (23 - (100 - 23*4)*2)\\
1 = 100 - 23*4 - 23 + 2*(100 - 23*4)\\
1 = 100 - 23*4 - 23 + 2*100 - 2*23*4\\
1 = 100*(1 + 2) + 23*(-4 - 1 - 8)\\
1 = 100*3 + 23 (-13)\\
1 = 100*3 - 23*13$$



Multiplying both sides by $-19$ get us to:




$$-19 = 100*(-3*19) - 23 (-13*19)\\
-19 = 100*(-57) - 23*(-247)$$



So (-57,-247) are a solution to the equation. However, $gcd(100,-23)$ does not divide $19$. Also, this made me think that in general:



if $gcd(a,b)=1$, then there exists integers $m,n$ such that:



$$am+bn=1\implies a(cm)+b(cn)=c$$



So, always in the equation:




$ax+by=c$



when $gcd(a,b)=1$ I can get the form:



$ax+by=1$



and then multiply by $c$ to find the integer solutions.



I think i'm doing something terribly wrong.




OH NO, WAIT! $GCD(100,-23)$ DIVIDES $19$, RIGHT??? :O



(i'm gonna leave this here to help someone since I typed all this)


Answer



As you seem to have noticed, $1$ does divide $-19$.
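Indeed. For completeness, the back-substitution done by hand above can be automated; here is a sketch of the extended Euclidean algorithm (recursive form, names mine) applied to this equation:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b), for a, b >= 0."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(100, 23)
assert g == 1 and 100 * x + 23 * y == 1    # Bezout: 100*3 + 23*(-13) = 1
# scale to hit -19; the question's equation is 100x + (-23)y = -19
x0, y0 = -19 * x, 19 * y
print((x0, y0))   # (-57, -247), matching the solution found by hand
assert 100 * x0 + (-23) * y0 == -19
```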


sequences and series - Sum of $\frac{1}{n^{2}}$ for $n = 1, 2, 3, \ldots$?







I just got the "New and Revised" edition of "Mathematics: The New Golden Age", by Keith Devlin. On p. 64 it says the sum is $\pi^2/6$, but that's way off. $\pi^2/6 \approx 1.64493406685$ whereas the sum in question is $\approx 1.29128599706$. I'm expecting the sum to be something interesting, but I've forgotten how to do these things.

elementary number theory - Decimal Representation vs Decimal Expansion

What does it mean when they ask you to write the decimal representation of a number.




Looking it up Google for a while,



On some pages, it means to write the value of the number. While on others it means to write it in a form of series.
What does it mean when I say, find the decimal representation of $\dfrac{123456}{100}$
Does it mean $1234.56$ or $1 \cdot 10^3 + 2 \cdot 10^2 + 3 \cdot 10^1 + 4 \cdot 10^0 + 5 \cdot 10^{-1} + 6 \cdot 10^{-2}$?



Also what is the difference between Decimal Expansion and Decimal Representation?

Tuesday 21 August 2018

integration - Any solution for $\int\int\int\frac{x^2+2y^2}{x^2+4y^2+z^2}\,dv$



I tried to solve this triple integral but couldn't integrate the result.
$$\int\int\int\frac{x^2+2y^2}{x^2+4y^2+z^2}\,dv$$ and the region of integration is $$x^2+y^2+z^2\le1.$$
Is there any way to transform the integral into polar coordinates?


Answer



Notice that:
$$\iiint \frac{x^2+2y^2}{x^2+4y^2+z^2} \mbox{d}v =

\iiint \frac{x^2+4y^2+z^2-2y^2-z^2}{x^2+4y^2+z^2} \mbox{d}v
= \iiint 1-\frac{z^2+2y^2}{x^2+4y^2+z^2} \mbox{d}v $$
But by symmetry $x \leftrightarrow z$, we have:
$$\iiint \frac{x^2+2y^2}{x^2+4y^2+z^2} \mbox{d}v
= \iiint \frac{z^2+2y^2}{x^2+4y^2+z^2} \mbox{d}v $$
So:
$$\iiint \frac{x^2+2y^2}{x^2+4y^2+z^2} \mbox{d}v = \frac{1}{2}
\iiint 1 \mbox{d}v = \frac{1}{2}\frac{4}{3}\pi = \frac{2}{3}\pi$$
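A quick Monte Carlo sanity check of the value $\frac{2}{3}\pi \approx 2.094$ (a sketch; the sample count and seed are arbitrary choices):

```python
import random, math

random.seed(0)
N = 200_000
acc = 0.0
for _ in range(N):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    if x * x + y * y + z * z <= 1.0:           # keep only points in the unit ball
        acc += (x * x + 2 * y * y) / (x * x + 4 * y * y + z * z)
# the bounding cube has volume 8, so the integral is 8 * (mean of f * indicator)
estimate = 8.0 * acc / N
print(estimate, 2 * math.pi / 3)
```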


elementary number theory - The term “remainder” and IMO 2005, problem 2



I was looking at IMO 2005, problem 2:




Let $a_1, a_2, \ldots$ be a sequence of integers with infinitely many positive terms and infinitely many negative terms. Suppose that for each positive integer $n$, the numbers $a_1, a_2, \ldots, a_n$ leave $n$ different remainders on division by $n$. Prove that each integer occurs exactly once in the sequence.



I thought I was more fluent in math English, but I was uncertain of the meaning of the term “remainder”. I take it that it means the least absolute remainder, e.g. that the remainder of $9$ divided by $6$ is probably not both $-3$ and $3$(?). For $l$ and $m < n$, $a_l \not\equiv a_m \pmod{n}$ implies that $a_l \neq a_m$. If there is also a one-to-one correspondence between the “remainder” of $a_k$ and $a_k \bmod n$, and there are $n$ different terms in the “remainder” series $a_1$ to $a_n$, the problem seems easy, or is it too easy because I got it wrong?



Edit: Slade pointed out that the proof needs more. Unfortunately my first attempt to prove the missing second part was wrong and had to be deleted - I am still working on this.



Edit2: To avoid the term “remainder” I intended to treat the positive and negative part of the series separately and prove that all integers, positive and negative respectively, must occur once for the two parts and then adding the two series (worrying later about zero possibly occurring twice). However it seems that for e.g. a positive series $a_n=A+n$ the first requirement holds without the numbers $0$ to A being in the series. It seems therefore that there is no way out except to understand the term “remainder”.


Answer



You've proved that no integer can occur twice. You also need to prove that every integer occurs somewhere in the sequence, which is much harder.



Monday 20 August 2018

sequences and series - Compute: $\lim\limits_{n\to\infty} \frac{x_n}{\ln {n}}$



Let $(x_n)_{n\geq0}$ be the sequence, with $x_0$ a real number, defined as follows: $$ x_{n+1} = x_n + e^{-x_n} $$
Compute the limit:



$$\lim_{n\to\infty} \frac{x_n}{\ln {n}}$$




Luckily, I've found another post here with a very similar question. If you know other ways to solve it and want to share them then I'd be very grateful for that.


Answer



Heuristically, this difference equation resembles the differential equation $x'(t) = e^{-x}$, which has solutions $x(t) = \ln(t + C)$, so I would expect the answer to be $1$.



In fact, let $x_n = \ln(n + s_n)$, and $y(t) = \ln(t+s_n)$. Then for $n \le t < n+1$ we have
$y(t) > y(n) = x_n$, so
$$\eqalign{
\ln(n+1+s_n) - \ln(n+s_n) &= \int_n^{n+1} \dfrac{dt}{t+s_n} = \int_n^{n+1} e^{-y(t)}\ dt\cr
&< \int_n^{n+1} e^{-x_n}\ dt = e^{-x_n} = x_{n+1} - x_n\cr}$$

i.e. $x_{n+1} - \ln(n+1+s_n) > x_n - \ln(n+s_n) = 0$, or $s_{n+1} > s_n$.
Thus $x_n > \ln(n+s_0)$. But then $x_{n+1} - x_n = e^{-x_n} < \dfrac{1}{n + s_0}$
so $\displaystyle x_n < x_0 + \sum_{j=0}^{n-1} \frac{1}{j+s_0}$. Since both upper and lower bounds are $\ln(n) + O(1)$, we conclude that $x_n = \ln(n) + O(1)$.
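Numerically the iteration matches this $\ln n$ growth very closely; a quick sketch (the starting value $x_0 = 0.5$ is an arbitrary choice):

```python
import math

x = 0.5            # x_0; any real starting value works
N = 1_000_000
for _ in range(N):
    x = x + math.exp(-x)
print(x / math.log(N))   # close to 1
print(x - math.log(N))   # stays bounded, consistent with x_n = ln(n) + O(1)
```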


Sunday 19 August 2018

abstract algebra - $p^{th}$ roots of a field with characteristic $p$



This is problem 10.9 from the book "Error-Correcting Codes and Finite Fields by Oliver Pretzel".



The Question:




Show that in a field of characteristic $p$, any element $\alpha$ has at most one $p$-th root $\beta$ (i.e., an element $\beta\in F$ with $\beta^p = \alpha$). Show further that if $F$ is finite, then every element has exactly one $p$-th root.





This my attempt at the second part of the question.
From Fermat's little theorem $\beta^{{p^n}-1} = 1$, where $p^n$ is the size of the field. Now multiplying both sides by $\beta$ we get $\beta^{p^n} = \beta$.
If there is $p$ elements then $n=1$ and we can see this is true for any non-zero element.
For the general case, take the $p$-th root of both sides $\beta^{p^{n-1}} = \beta^{1/p}$ and we know from the multiplicative properties of a field if $\beta$ is a non zero element of the field then any multiple will be.



The first part of the question I'm not sure where to begin.
Any help would be appreciated.


Answer




We are looking for a root of $x^p-\alpha$; the formal derivative of this polynomial is zero, which means that $x^p-\alpha$ has repeated roots.



Indeed, if $K$ is an extension of $F$ where the polynomial has a root $\beta$, we have
$$
(x-\beta)^p=x^p-\beta^p=x^p-\alpha
$$
which shows the root is unique.



For a finite field $F$, the map
$$

\alpha\mapsto\alpha^p
$$
is a field homomorphism, so it is injective. Finiteness yields surjectivity.
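In the simplest case, the prime field $GF(p)$, the map $\alpha\mapsto\alpha^p$ is the identity by Fermat's little theorem, so the bijectivity is easy to see concretely. A small sketch (the prime $p=7$ is an arbitrary choice; showing the non-trivial Frobenius on $GF(p^k)$ would require implementing field arithmetic):

```python
# In GF(p) the Frobenius map a -> a^p is the identity (Fermat's little theorem),
# so every element is its own unique p-th root.
p = 7
image = [pow(a, p, p) for a in range(p)]
assert image == list(range(p))   # the map is a bijection (here, the identity)
assert len(set(image)) == p      # no two elements share a p-th power
print(f"a -> a^{p} is a bijection on GF({p}); each element has exactly one {p}-th root")
```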


trigonometry - A complex number in the power of another complex number




I saw this question, and found a formula:
$$=\cos \left( d\log |a+bi|+c\arctan \frac{d}{c}\right)+i\sin \left( d\log |a+bi|+c\arctan \frac{d}{c}\right).$$
Which I later translated to Microsoft Math format:
cos(dlog(Abs(a+bi))+carctan(d/c))+sin(dlog(Abs(a+bi))+carctan(d/c))*i
And - that formula gives wrong results.
While the result is -0.507 - 0.861i (for that formula, setting a=1,b=2,c=3,d=4)
Doing (a + bi)^(c + di) gives me 0.129+0.033i
Can anyone explain what I am doing wrong ? (I am trying to write a program which does this.)


Answer



I corrected the answer you cite. Instead of the $\arctan$ function it should be the $\arg$ function. These two functions are not always equal. The $\arg$ function is used in the principal logarithm of $z=x+iy$, which is the complex number



$$w=\text{Log }z=\log |z|+i\arg z$$



so that $e^w=z$, where $\arg z$ (the principal argument of $z$) is the real number in $-\pi\lt \arg z\le \pi$, with $x=|z|\cos (\arg z)$ and $y=|z|\sin (\arg z)$.




The formula now reads as follows:



$$\begin{eqnarray*}
\left( a+bi\right) ^{c+di} &=&e^{(c+di)\text{ Log }(a+bi)} \\
&=&e^{(c+di)\left( \ln |a+bi|+i\arg (a+bi)\right) } \\
&=&e^{c\ln \left\vert a+ib\right\vert -d\arg \left( a+ib\right) +i\left(
c\arg \left( a+ib\right) +d\ln \left\vert a+ib\right\vert \right) } \\
&=&e^{c\ln \left\vert a+ib\right\vert -d\arg(a+bi)}\times \\
&&\times \left( \cos \left( c\arg \left( a+ib\right) +d\ln \left\vert

a+ib\right\vert \right) +i\sin \left( c\arg \left( a+ib\right) +d\ln
\left\vert a+ib\right\vert \right) \right).
\end{eqnarray*}$$



For $a=1,b=2,c=3,d=4$, we have (numeric computations in SWP)



$$\begin{eqnarray*}
\left( 1+2i\right) ^{3+4i} &=&e^{(3+4i)\text{ Log }(1+2i)} \\
&=&e^{(3+4i)\left( \log |1+2i|+i\arg (1+2i)\right) } \\
&=&e^{3\ln \left\vert 1+2i\right\vert -4\arg \left( 1+2i\right) +i\left(

3\arg \left( 1+2i\right) +4\ln \left\vert 1+2i\right\vert \right) } \\
&=&e^{3\ln \left\vert 1+2i\right\vert -4\arg \left( 1+2i\right) }\times \\
&&\times \left( \cos \left( 3\arg \left( 1+2i\right) +4\ln \left\vert
1+2i\right\vert \right) +i\sin \left( 3\arg \left( 1+2i\right) +4\ln
\left\vert 1+2i\right\vert \right) \right) \\
&\approx &0.13340\left( \cos \left( 6.5403\right) +i\sin \left(
6.5403\right) \right) \\
&\approx &0.12901+3.3924\times 10^{-2}i,
\end{eqnarray*}$$




which agrees with the computation in Wolfram Alpha for $(1+2i)^{3+4i}.$



And for instance, if $a=-1,b=2,c=3,d=4,$ then



$$\begin{eqnarray*}
\left( -1+2i\right) ^{3+4i} &=&e^{(3+4i)\text{ Log }(-1+2i)} \\
&=&e^{(3+4i)\left( \log |-1+2i|+i\arg (-1+2i)\right) } \\
&=&e^{3\ln \left\vert -1+2i\right\vert -4\arg \left( -1+2i\right) +i\left(
3\arg \left( -1+2i\right) +4\ln \left\vert -1+2i\right\vert \right) } \\
&=&e^{3\ln \left\vert -1+2i\right\vert -4\arg \left( -1+2i\right) }\times \\

&&\times \left( \cos \left( 3\arg \left( -1+2i\right) +4\ln \left\vert
-1+2i\right\vert \right) +i\sin \left( 3\arg \left( -1+2i\right) +4\ln
\left\vert -1+2i\right\vert \right) \right) \\
&\approx &3.267\,9\times 10^{-3}\left( \cos \left( 9.3222\right) +i\sin
\left( 9.3222\right) \right) \\
&\approx &3.\,250\,7\times 10^{-3}+3.346\times 10^{-4}i.
\end{eqnarray*}$$



In Wolfram Alpha we get $(-1+2i)^{3+4i}\approx -0.003250688+0.000334598i$
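The corrected formula is easy to check against Python's built-in complex power, which also uses the principal branch. A sketch (the function name `cpow` is mine; note `atan2`, which computes $\arg$, not $\arctan(b/a)$):

```python
import cmath, math

def cpow(a, b, c, d):
    """(a+bi)^(c+di) via the principal logarithm, as in the formulas above."""
    r = abs(complex(a, b))
    theta = math.atan2(b, a)      # arg(a+bi) in (-pi, pi] -- not arctan(b/a)!
    mod = math.exp(c * math.log(r) - d * theta)
    ang = c * theta + d * math.log(r)
    return complex(mod * math.cos(ang), mod * math.sin(ang))

for (a, b, c, d) in [(1, 2, 3, 4), (-1, 2, 3, 4)]:
    assert abs(cpow(a, b, c, d) - complex(a, b) ** complex(c, d)) < 1e-12
print(cpow(1, 2, 3, 4))   # approximately 0.129 + 0.0339i
```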


elementary set theory - Let $X$ and $Y$ be countable sets. Then $X\cup Y$ is countable




Since $X$ and $Y$ are countable, we have two bijections:



(1) $f: \mathbb{N} \rightarrow X$ ;



(2) $g: \mathbb{N} \rightarrow Y$.



So to prove that $X\cup Y$ is countable, I figure I need to define some function,



h: $\mathbb{N} \rightarrow X\cup Y$




Thus, I was wondering if I could claim something similar to the following:



since we have (1) & (2) it follows that we also have the bijections



$\alpha : \{n\in \mathbb{N} : n = 2k + 1, k\in \mathbb{N}\} \rightarrow X$;



$\beta : \{n\in \mathbb{N} : n = 2k, k\in \mathbb{N}\} \rightarrow Y$;



because we have bijections from $\mathbb{N}$ to the evens and odds respectively.




Then define $h := \alpha$ for odd $n$, $h := \beta$ for even $n$.



Thus, since $\forall n\in \mathbb{N}$(either $n$ is even or $n$ is odd but not both) $h$ is a bijection from $\mathbb{N}$ to $X\cup Y$.



Thanks for reading, and for answering if you answer. I'm just really unsure if my logic holds here, or if I'm even approaching this right because I've been stuck on this problem for a little while today.



p.s sorry for asking so many questions recently, I'm trying to study on my own and apparently I get confused and stuck more easily than I thought I would without a teacher.



EDIT: (1) fixed the even and odd.

(2) I do mean countably infinite. My bad, going through some notes I found online they were using the notation of "countable" for what you call "countably infinite", and "at most countable" for what you called "countable"
(3) Thanks for the good answers, I do see now that this makes breaks down when they are not disjoint, but at least I'm not too far off.


Answer



By countable you seem to mean countably infinite. That may not be the formal definition in your book. The formal definition of countable that I am more accustomed to is that a set $S$ is countable if there is a bijection between $S$ and a subset of $\mathbb{N}$.



If the more general definition is the formal definition of countable used in your book, you will need to either break up the proof into a number of cases, or write an argument that simultaneously covers all cases. That can be done, but there will be greater clarity if you take the number of cases approach.



For countably infinite sets $X$ and $Y$, the proof technique that you used is very good if $X$ and $Y$ are disjoint. The notation that you used is not familiar to me. It is clear what you intend the functions $\alpha$ and $\beta$ to do. It would have been relatively easy to be totally explicit, as follows.



If $n$ is odd, then $h(n)=f((n+1)/2)$; if $n$ is even then $h(n)=g(n/2)$.




You will need to modify your method to take care of the cases where $X$ and $Y$ are not disjoint. The intuition is clear, and even, informally, a procedure for finding a bijection. But it is a very good idea to write out all of the details.



I hope this gives a good way to begin. I can append more details if you like.
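For the disjoint case, the explicit interleaving above is short enough to sketch in code (the enumerations `f`, `g` below are a toy example of mine, with indices starting at $1$):

```python
def h(n, f, g):
    """Bijection N -> X ∪ Y by interleaving, assuming X and Y are disjoint
    and f, g enumerate X and Y respectively."""
    return f((n + 1) // 2) if n % 2 == 1 else g(n // 2)

# toy example: X = negative odd integers via f, Y = positive even integers via g
f = lambda k: -(2 * k - 1)
g = lambda k: 2 * k
first = [h(n, f, g) for n in range(1, 11)]
print(first)  # alternates between X and Y: [-1, 2, -3, 4, -5, 6, -7, 8, -9, 10]
assert len(set(first)) == len(first)   # injective on this initial segment
```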


Saturday 18 August 2018

combinatorics - Combinatorial interpretation of sum of squares, cubes



Consider the sum of the first $n$ integers:
$$\sum_{i=1}^n\,i=\frac{n(n+1)}{2}=\binom{n+1}{2}$$
This has always made the following bit of combinatorial sense to me. Imagine the set $\{*,1,2,\ldots,n\}$. We can choose two from this set, order them in decreasing order and thereby obtain a point in $\mathbb{N}^2$. We interpret $(i,*)$ as $(i,i)$. These points give a clear graphical representation of $1+2+\cdots+n$:



$$
\begin{matrix}

&&&\circ\\
&&\circ&\circ\\
&\circ&\circ&\circ\\
\circ&\circ&\circ&\circ\\
\end{matrix}
$$



Similar identities are:
$$\sum_{i=1}^n\,i^2=\frac{n(n+1)(2n+1)}{6}=\frac{2n(2n+2)(2n+1)}{24}=\frac{1}{4}\binom{2n+2}{3}$$
$$\sum_{i=1}^n\,i^3=\frac{n^2(n+1)^2}{4}=\binom{n+1}{2}^2$$

I am aware of geometric explanations of these identities, but not combinatorial ones similar to the above explanation for summing first powers that make direct use of the "choosing" interpretation of the binomial coefficient. Can anyone offer combinatorial proofs of these?


Answer



Here's a combinatorial proof for $$\sum_{k=1}^n k^2 = \binom{n+1}{2} + 2 \binom{n+1}{3},$$ which is just another way of expressing the sum. Both sides count the number of ordered triples $(i,j,k)$ with $0 \leq i,j < k \leq n$.



For the left side, condition on the value of $k$. For each $k$, there are $k^2$ ways to choose $i$ and $j$ from the the set $\{0, 1, \ldots, k-1\}$.



For the right side, consider the cases $i=j$ and $i \neq j$ separately. If $i = j$, then there are $\binom{n+1}{2}$ such triples. This is because we just choose two numbers from $\{0, \ldots, n\}$; the smaller must be the value of $i$ and $j$ and the larger must be the value of $k$. If $i \neq j$, then there are $2\binom{n+1}{3}$ such triples, as we could have $i < j$ or $j < i$ for the smaller two numbers.







For $$\sum_{k=1}^n k^3 = \binom{n+1}{2}^2,$$
both sides count the number of ordered 4-tuples $(h,i,j,k)$ with $0 \leq h,i,j < k \leq n$.



For the left side, once again if we condition on the value of $k$ we see that there are $\sum_{k=1}^n k^3$ such 4-tuples.



For the right side, there is a bijection from these 4-tuples to ordered pairs of two-tuples $(x_1,x_2), (x_3,x_4)$ with $0 \leq x_1 < x_2 \leq n$ and $0 \leq x_3 < x_4 \leq n$. There are $\binom{n+1}{2}^2$ such pairs, so let's look at the bijection.



The bijection: If $h < i$, then map $(h,i,j,k)$ to $(h,i),(j,k)$. If $h > i$, then map $(h,i,j,k)$ to $(j,k), (i,h)$. If $h = i$, then map $(h,i,j,k)$ to $(i,k), (j,k)$. This mapping is reversible, as these three cases correspond to the cases where $x_2 < x_4$, $x_2 > x_4$, and $x_2 = x_4$.






(Both of these proofs are in Chapter 8 of Proofs that Really Count, by Benjamin and Quinn. They give at least one other combinatorial proof for each of these identities as well.)
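Both identities are also easy to confirm numerically; a quick sketch:

```python
from math import comb

for n in range(1, 50):
    assert sum(k**2 for k in range(1, n + 1)) == comb(n + 1, 2) + 2 * comb(n + 1, 3)
    assert sum(k**3 for k in range(1, n + 1)) == comb(n + 1, 2) ** 2
print("both identities hold for n = 1..49")
```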

Friday 17 August 2018

Zeta regularization vs Dirichlet series




Suppose you have a sequence of real numbers, denoted $a_n$. Then the sum of the sequence is



$\sum_n a_n$



If this is divergent, we can use zeta regularization to get a sum. We can do this by defining the function



$\zeta_A(s) = \sum_n a_n^{-s}$



and then analytically continue to the case where $s=-1$.




A different approach is to define the Dirichlet series



$A(s) = \sum_n \frac{a_n}{n^s}$



and then analytically continue to the case where $s=0$.



$Questions:$




  1. When these two approaches are both defined, are they guaranteed to agree on the result? If not, for which sequences do they agree?



  2. If they are compatible, is the second summation method strictly stronger than the first?




For instance, it is clear that the first method can't do anything for the series $1+1+1+1+1+...$, whereas the second method yields -1/2, so it is at least as strong as the first.


Answer



For $n \geqslant 1$, let $a_n = n + (-1)^{n-1}$. Then $a_{2n} = 2n-1$ and $a_{2n-1} = 2n$, so



$$\sum_{n = 1}^{\infty} \frac{1}{a_n^s} = \zeta(s)$$



for $\operatorname{Re} s > 1$. Thus $\zeta$-regularisation leads to $\zeta(-1)$. And for $\operatorname{Re} s > 2$ we have




$$\sum_{n = 1}^{\infty} \frac{a_n}{n^s} = \sum_{n = 1}^{\infty} \frac{1}{n^{s-1}} + \sum_{n = 1}^{\infty} \frac{(-1)^{n-1}}{n^s} = \zeta(s-1) + \eta(s)\,$$



so the analytic continuation of Dirichlet series leads to $\zeta(-1) + \eta(0) = \zeta(-1) + \frac{1}{2}$.



These methods are hence not compatible.


calculus - Is the limit finite? (corrected)

I need to find $r>0$ for which the following limit is finite



$$\lim_{n \rightarrow \infty} \sum_{k=1}^{n^2} \frac{n^{r-1}}{n^r+k^r}$$




I get an inconclusive result using the ratio test. The root test does not seem to help me. Does it converge to zero for $r \in \mathbb Z^+$?



Any ideas?

Thursday 16 August 2018

elementary number theory - Linear congruence proof, show congruence has exactly two incongruent solutions



Let $p$ be an odd prime and $k$ a positive integer. Show that the congruence $x^{2} \equiv 1 \pmod{p^{k}}$ has exactly two incongruent solutions, namely, $x \equiv \pm 1 \pmod{p^{k}}$.



I'm not sure what to do after this:
$x^{2} \equiv 1 \pmod{p^{k}} \implies p^{k} \mid x^{2}-1=(x-1)(x+1)$



Answer



As you have already observed, $p^k\mid (x-1)(x+1)$. In particular $p\mid (x-1)(x+1)$, so either $p\mid x-1$ or $p\mid x+1$. In any case, we cannot have both since $p\not\mid 2$. This implies that $p^k\mid x-1$ or $p^k \mid x+1$. Why?



This works in general: $x^2=n\mod p^k$ has at most two incongruent solutions.
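Small cases can be checked by brute force; a sketch (the function name is mine), including the classic contrast with powers of $2$, where the oddness of $p$ fails and four solutions appear:

```python
def roots_of_unity_squared(m):
    """All x in [0, m) with x^2 ≡ 1 (mod m), by brute force."""
    return [x for x in range(m) if (x * x - 1) % m == 0]

# odd prime powers: exactly the two solutions ±1
for p, k in [(3, 2), (5, 3), (7, 2), (11, 1)]:
    m = p ** k
    assert roots_of_unity_squared(m) == [1, m - 1]

# contrast: modulo 2^k with k >= 3 there are four solutions
print(roots_of_unity_squared(8))   # [1, 3, 5, 7]
```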


trigonometry - Calculating value of $\text{sinc}(x)$, WolframAlpha and MATLAB give two different answers.




I need to evaluate the following:
$$\frac{2}{3}\text{sinc}\bigg(\frac{2\pi}{3}(n-4)\bigg)-\frac{1}{3}\text{sinc}\bigg(\frac{\pi}{3}(n-4)\bigg)$$



for $n=[0,...,8]$



I don't have the sinc function in my casio fx so I wanted to use the fact that $\text{sinc}(x)=\frac{\text{sin}(x)}{x}$ and that $\text{sinc}(0)=1$
Hence, for $n=0$ I got
$$\frac{2}{3}\text{sinc}\bigg(\frac{2\pi}{3}(-4)\bigg)-\frac{1}{3}\text{sinc}\bigg(\frac{\pi}{3}(-4)\bigg)=0.1378....$$



This seems to agree with wolfram alpha:







[screenshot of the Wolfram Alpha computation]






But then I checked the mark scheme on the past paper that the question is taken from, and there it says that I should've got $0.0093$, so I put it in MATLAB:







[screenshot of the MATLAB computation]






...and it also says $0.0093$.



So... which one of the two is correct? What's going on?


Answer



As it happens, there are apparently two different conventions for what the $\text{sinc}(x)$ function actually denotes in terms of the $\sin(x)$ function. (I ran into this same confusion on my class on Fourier analysis.) The conventions you might see are




$$\text{sinc}(x) = \frac{\sin(x)}{x} \;\;\; \text{or} \;\;\; \text{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$$



The latter is known as the "normalized sinc function," per Wikipedia. I don't know much about which is used more when, so I'll leave you with the Wikipedia article in that respect.



Checking your functions if interpreted in the latter way, i.e. for $n=0$



$$\frac{2}{3} \left( \frac{-3}{8\pi^2} \right) \sin \bigg(\frac{-8\pi^2}{3}\bigg)-\frac{1}{3} \left( \frac{-3}{4\pi^2} \right) \sin \bigg(\frac{-4\pi^2}{3}\bigg)$$



Wolfram Alpha gives a value of $0.0093...$, in agreement with your MATLAB answer. Indeed, as noted by Josh B. in the comments, MATLAB uses the latter convention.




I would assume, then, this is the source of the discrepancy.
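The two conventions are easy to compare side by side; a sketch (function names are mine):

```python
import math

def sinc_un(x):    # unnormalized convention: sin(x)/x
    return 1.0 if x == 0 else math.sin(x) / x

def sinc_norm(x):  # normalized convention: sin(pi x)/(pi x), as in MATLAB
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def expr(n, sinc):
    return (2/3) * sinc(2 * math.pi / 3 * (n - 4)) - (1/3) * sinc(math.pi / 3 * (n - 4))

print(round(expr(0, sinc_un), 4))    # 0.1378  (the Wolfram Alpha reading)
print(round(expr(0, sinc_norm), 4))  # 0.0093  (the MATLAB / mark-scheme reading)
```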


Wednesday 15 August 2018

calculus - Conceptual question on substitution in integration




In calculus we learn about the substitution method for integrals, but I haven't been able to prove that it works. I mainly don't see how the manipulation of differentials is justified, i.e. how $dy/dx = f(x)$ means that $dy = f(x)\,dx$, so that $f(x)\,dx$ can be substituted for $dy$, since I thought that $dy/dx$ is merely notation and that $dy$ and $dx$ don't actually exist.


Answer



Essentially we want to show:
$$\int_a^bf(y)dy=\int_{g^{-1}(a)}^{g^{-1}(b)}f(g(x))g'(x)dx$$

for a strictly increasing, continuously differentiable function $g$ (if $g$ is decreasing, we take $-g$ and absorb the minus sign into swapping the limits). If $\{y_0,\ldots,y_n\}$ is a partition of $[a,b]$, $y_{j-1}\leq y_j^*\leq y_j$, then a Riemann sum for the left integral is
$$\sum_{j=1}^nf(y^*_j)(y_j-y_{j-1}).$$
Put $y_j:=g(x_j)$. So $\{x_0,\ldots,x_n\}$ is a partition of $[g^{-1}(a),g^{-1}(b)]$. By the mean value theorem, there exists $x_j^*\in[x_{j-1},x_j]$ such that
$$g'(x_j^*)=\frac{g(x_j)-g(x_{j-1})}{x_j-x_{j-1}}\iff y_j-y_{j-1}=g'(x_j^*)(x_j-x_{j-1}).$$
Now remember that all Riemann sums converge to the integral (by definition of a function being Riemann integrable), so we may choose $y_j^*$ so that $x_j^*=g^{-1}(y_j^*)$. Hence we have
$$\sum_{j=1}^nf(y^*_j)(y_j-y_{j-1})=\sum_{j=1}^nf(g(x_j^*))g'(x_j^*)(x_j-x_{j-1}).$$
Now we simply recognise that the right hand side is a Riemann sum for the right integral and that $\max|y_j-y_{j-1}|\to0$ as $\max|x_j-x_{j-1}|\to0$.



Sorry for such a lengthy answer - the basic takeaway is that you can prove it, and the notation we use is just a shorthand. While $dy$ and $dx$ are not actual objects as you rightly point out, they act similarly enough in a lot of ways that we often treat them as such.
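The change-of-variables formula itself is easy to test numerically with Riemann sums; a sketch with an arbitrary choice of $f(y)=\sin y$ and $g(x)=x^2$ on $[1,2]$:

```python
import math

def riemann(f, a, b, n=200_000):
    """Midpoint Riemann sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Check: ∫_a^b f(y) dy = ∫_{g^{-1}(a)}^{g^{-1}(b)} f(g(x)) g'(x) dx
# with f(y) = sin(y), g(x) = x^2 (increasing on [1, 2]), a = 1, b = 4.
f = math.sin
lhs = riemann(f, 1.0, 4.0)
rhs = riemann(lambda x: f(x * x) * 2 * x, 1.0, 2.0)
print(lhs, rhs)   # both ≈ cos(1) - cos(4)
assert abs(lhs - rhs) < 1e-6
```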


binomial distribution - If you throw a fair die $10$ times, what is the probability to throw number $6$ at most once?





If you throw a fair die $10$ times, what is the probability to throw number $6$ at most once?




I thought the answer was the sum of probability to throw $6$ once in $10$ throws plus probability to throw $6$ zero times in $10$ throws:
$$\frac{1}{6}\left(\frac{5}{6}\right)^9+\left(\frac{5}{6}\right)^{10}$$
Why is this not correct?


Answer



You forgot the binomial coefficient. It should be

$$\binom{10}{0}\left(\frac{1}{6}\right)^0\left(\frac{5}{6}\right)^{10}+\binom{10}{1}\left(\frac{1}{6}\right)^1\left(\frac{5}{6}\right)^9 = 0.4845167$$



In other words, you need to count which spots get a six. In the first case, zero spots get a six and there are $\binom{10}0$ ways to do that. In the second case, you need to get one six and there are $\binom{10}{1}$ ways to choose the spot where six lands.
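A quick numeric sketch of the corrected computation, using `math.comb`:

```python
from math import comb

p = comb(10, 0) * (5/6)**10 + comb(10, 1) * (1/6) * (5/6)**9
print(round(p, 4))   # 0.4845

# the version without binomial coefficients undercounts the one-six case:
wrong = (1/6) * (5/6)**9 + (5/6)**10
assert wrong < p
```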


Limit with criteria $\lim\limits_{n \to \infty}n \cdot \left[ \frac1e \left(1+\frac{1}{n+1} \right)^{n+1}-1 \right]$

$$\lim_{n \to \infty}n \cdot \left [ \frac{\left (1+\frac{1}{n+1} \right )^{n+1}}{e}-1 \right ]$$

I was trying to calculate a limit that led me to this case of Raabe-Duhamel's test, but I don't know how to finish it. Please give me a hint or a piece of advice.



I cannot use any of the solution below, but they are clear and good. I'm trying to prove it using squeeze theorem like this:
$$\lim_{n \to \infty}n \cdot \left [ \frac{\left (1+\frac{1}{n+1} \right )^{n+1}}{e}-1 \right ]=\frac{-1}{e} \cdot\lim_{n \to \infty}n \cdot \left [e- \left (1+\frac{1}{n+1} \right )^{n+1} \right ]$$
I found this:
$$\frac{e}{2n+2}>e-\left(1+\frac{1}{n+1}\right)^{n+1}>\frac{e}{2n+4}$$
Is this true? How can I prove this? Thanks for the answers.
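Whichever bounds are used, the value the squeeze should produce can be checked numerically; a sketch:

```python
import math

def term(n):
    return n * ((1 + 1 / (n + 1)) ** (n + 1) / math.e - 1)

for n in (10, 1000, 100_000):
    print(n, term(n))
# the values drift toward -1/2
assert abs(term(100_000) - (-0.5)) < 1e-2
```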

Tuesday 14 August 2018

Summation of series with factorial



[image: the series to be summed]




I tried breaking the terms into differences or finding a generalised term but did not get it right. Can someone please help me to proceed with this?


Answer



Assuming the first term is $1$ (and not $2$ as written), the general term of the series is, for $n\geq 1$,
$$
a_n \stackrel{\rm def}{=} \frac{\prod_{k=2}^{n}(2k+1)}{n!3^{n-1}}
= \frac{\prod_{k=1}^{n}(2k+1)}{n!3^{n}}
= \frac{(2n+1)!}{n!3^{n}\prod_{k=1}^n(2k)}
= \frac{(2n+1)!}{n!3^{n}2^nn!}
= \frac{(2n+1)!}{(n!)^26^{n}}
$$

or, equivalently, $a_n= (2n+1)\binom{2n}{n}\left(\frac{1}{6}\right)^n$.



Now, either you work towards finding the general form for $$f(x) = \sum_{n=1}^\infty (2n+1) \binom{2n}{n}x^n$$
(a power series with radius of convergence $1/4$), which you can find by relating it to both
$
g(x) = \sum_{n=1}^\infty n\binom{2n}{n}x^{n-1}
$
(recognize a derivative) and $
h(x) = \sum_{n=1}^\infty \binom{2n}{n}x^{n}
$, since $$f(x) = 2xg(x)+h(x)\,;$$

or, by other means (there may be?) you establish that
$f(1/6) = 3\sqrt{3}-1$ (the sum taken from $n=0$ equals $3\sqrt{3}$, and the $n=0$ term is $1$), leading to
$$
\sum_{n=1}^\infty a_n = 3\sqrt{3}-1.
$$
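A numeric check of the closed form for $a_n$ and the value of the sum (a sketch; note the $(2n+1)$ factor):

```python
from math import comb, sqrt

def a(n):
    # a_n = (2n+1)! / ((n!)^2 6^n) = (2n+1) C(2n,n) / 6^n
    return (2 * n + 1) * comb(2 * n, n) / 6 ** n

partial = sum(a(n) for n in range(1, 200))
print(partial)   # ≈ 4.19615, i.e. 3*sqrt(3) - 1
assert abs(partial - (3 * sqrt(3) - 1)) < 1e-9
```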


algebra precalculus - Evaluate this continued trigonometric sum


$$\sin^2(4) + \sin^2(8) + \sin^2(12) + ... + \sin^2(176)$$





Where the number is in degrees not radians.



$$\cos(x) = \sin(90 - x) \implies \cos(x) = \sin(90 + x)$$



$$\implies \sin(x) = \cos(x - 90)$$



$$S = \sum_{n=1}^{44} \sin^2(4n)$$

calculus - prove factored polynomial has no real roots

The polynomial $6x^3-18x^2-6x-6$ can be factored as $6(x-r)(x^2+ax+b)$ for some $a,b \in \Bbb{R}$, where $r$ is a real root of the polynomial. How would you prove that the polynomial $x^2+ax+b$ has no real roots? I know that you can do polynomial long division to get concrete values for $a$ and $b$, but $r$ is a non-terminating decimal, so the long division would be messy and inaccurate; other than that method I have no idea how I would prove that it has no real roots.
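One concrete route, sketched numerically (this suggests what to verify rather than proving it): locate $r$ by bisection, read off $a$ and $b$ by synthetic division, and check that the discriminant $a^2-4b$ of the quadratic factor is negative.

```python
# Work with the monic version: x^3 - 3x^2 - x - 1 = (x - r)(x^2 + a x + b).
f = lambda x: x**3 - 3 * x**2 - x - 1

# bisection for the real root; f(3) < 0 < f(4)
lo, hi = 3.0, 4.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
r = (lo + hi) / 2

# synthetic division by (x - r) gives the quadratic's coefficients
a = r - 3          # coefficient of x
b = r * a - 1      # constant term, i.e. r^2 - 3r - 1
disc = a * a - 4 * b
print(r, a, b, disc)   # disc < 0, so x^2 + ax + b has no real roots
assert disc < 0
```

(A pencil-and-paper version of the same idea: show the cubic's local maximum value is negative, so there is only one real root, forcing the quadratic factor to have a negative discriminant.)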

Monday 13 August 2018

sequences and series - Convergence using Root Test

Problem: test if the series converges$$\sum_{n=1}^ \infty \frac {(-2)^{n+1}} {n^{n+1}} $$



My approach:



I see it is equal to $$\sum_{n=1}^ \infty \frac {(-2)^n} {n^n} \cdot \frac {-2} n$$, and $\sum_{n=1}^ \infty \frac {(-2)^n} {n^n}$ converges absolutely using root test, and $\sum_{n=1}^ \infty \frac {-2} n $ diverges by using p-series test.



So is the original series divergent because convergent * divergent = divergent?




Is convergent * convergent = convergent??

Saturday 11 August 2018

Friday 10 August 2018

Proving with induction $(1-x)^n<\frac{1}{1+nx}$




Prove using induction that $\forall n\in\mathbb N, \forall x\in \mathbb R: 0<x<1 \implies (1-x)^n<\frac 1 {1+nx}$.



My attempt:




Base: for $n=1: 1-x<\frac 1 {1+x}\iff 1-x^2<1$, true since $0<x<1$.

Suppose the statement is true for $n$, prove for $n+1$:



$(1-x)^{n+1}=(1-x)(1-x)^{n}\overset{i.h}<\frac{(1-x)}{1+nx}$



Now I got stuck, maybe another induction to show that $(1-x)(1+nx+x)<1+nx$? Is there another way?



Moreover, I was told it's wrong to begin with $(1-x)^{n+1}$ and reach to $\frac 1 {1+(n+1)x}$ but why? Is it assuming what I need to prove?



Answer



Apply again the base case: $1-x<\displaystyle\frac1{1+x}$ and that $x^2>0$ to get



$$\frac{1-x}{1+nx} < \frac1{(1+nx)(1+x)}=\frac1{1+(n+1)x+nx^2}<\frac1{1+(n+1)x}\,.$$


sequences and series - How to calculate: $\sum_{n=1}^{\infty} n a^n$

I've tried to calculate this sum:



$$\sum_{n=1}^{\infty} n a^n$$



The point of this is to try to work out the "mean" term in an exponentially decaying average.




I've done the following:



$$\text{let }x = \sum_{n=1}^{\infty} n a^n$$
$$x = a + a \sum_{n=1}^{\infty} (n+1) a^n$$
$$x = a + a (\sum_{n=1}^{\infty} n a^n + \sum_{n=1}^{\infty} a^n)$$
$$x = a + a (x + \sum_{n=1}^{\infty} a^n)$$
$$x = a + ax + a\sum_{n=1}^{\infty} a^n$$
$$(1-a)x = a + a\sum_{n=1}^{\infty} a^n$$



Lets try to work out the $\sum_{n=1}^{\infty} a^n$ part:




$$\text{let } y = \sum_{n=1}^{\infty} a^n$$
$$y = a + a \sum_{n=1}^{\infty} a^n$$
$$y = a + ay$$
$$y - ay = a$$
$$y(1-a) = a$$
$$y = a/(1-a)$$



Substitute y back in:




$$(1-a)x = a + a\cdot\frac{a}{1-a}$$
$$(1-a)^2 x = a(1-a) + a^2$$
$$(1-a)^2 x = a - a^2 + a^2$$
$$(1-a)^2 x = a$$
$$x = a/(1-a)^2$$



Is this right, and if so is there a shorter way?
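The closed form can be spot-checked numerically (a sketch, assuming $|a|<1$ so everything converges; a shorter derivation differentiates the geometric series, since $\sum_{n\ge 1} n a^{n-1} = \frac{1}{(1-a)^2}$):

```python
# Compare a long partial sum of sum n*a^n with the closed form a/(1-a)^2.
a = 0.5
partial = sum(n * a ** n for n in range(1, 200))
closed = a / (1 - a) ** 2
print(partial, closed)  # both essentially 2.0
```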



Edit:




To actually calculate the "mean" term of an exponential moving average we need to keep in mind that terms are weighted at the level of $(1-a)$, i.e. for $a=1$ there is no decay, while for $a=0$ only the most recent term counts.



So the above result we need to multiply by $(1-a)$ to get the result:



Exponential moving average "mean term" = $a/(1-a)$



This gives the expected results: for $a=0$, the mean term is the "0th term" (no others are used), whereas for $a=0.5$ the mean term is the "1st term" (i.e. the one after the current term).

Thursday 9 August 2018

limits - Evaluating $\lim_{x \to 0} \arctan\left( 2 \left[ \frac{\cos(x) - 1}{\sin^2x} \right] \right)$



Having
$$ \lim_{x \to 0}{ \arctan\left( 2 \left[ \frac{\cos(x) - 1}{\sin^2x}\right] \right)} = L $$




The interesting part however is $$ \frac{\cos(x) - 1}{\sin^2x} $$ and $ \lim_{x \to 0} \frac{\cos(x) - 1}{\sin^2x} = \left[\frac{0}{0}\right] = -\frac{1}{2}$. With l'Hôpital's rule it's easy to solve; I'm interested in other methods though. Hints are welcome too.






Attempt 1



Considering $$ 1 = \sin^2x + \cos^2x $$ Plugging in:
$$ \frac{\cos(x) - \sin^2x - \cos^2x}{\sin^2x}
\longrightarrow \lim_{x \to 0} {\cot(x)\csc(x) - 1 - \cot^2x}$$
which doesn't go very far.




Attempt 2



Considering:



$$ -2 \leq \cos(x) - 1 \leq 0 \implies \frac{-2}{\sin^2x} \leq \frac{\cos(x) - 1}{\sin^2x} \leq \frac{0}{\sin^2x}$$



But $$ \lim_{x\to0}{ \frac{-2}{\sin^2x} } \neq \lim_{x\to0}{ \frac{0}{\sin^2x}} $$



so the squeeze theorem cannot be applied. (bonus question: can it?)




Please avoid Taylor expansion.


Answer



$${\cos x-1\over\sin^2x}={\cos x-1\over\sin^2x}\cdot{\cos x+1\over\cos x+1}={\cos^2x-1\over\sin^2x(\cos x+1)}={-\sin^2x\over\sin^2x(\cos x+1)}={-1\over\cos x+1}$$
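For completeness, continuity of $\arctan$ finishes the evaluation (a short step not spelled out in the original answer):

```latex
L=\lim_{x \to 0}\arctan\!\left(2\cdot\frac{\cos x-1}{\sin^2 x}\right)
 =\arctan\!\left(2\cdot\left(-\frac{1}{2}\right)\right)
 =\arctan(-1)
 =-\frac{\pi}{4}.
```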


Wednesday 8 August 2018

algebra precalculus - Find the sum to n terms of the series $\frac{1}{1\cdot2\cdot3}+\frac{3}{2\cdot3\cdot4}+\frac{5}{3\cdot4\cdot5}+\frac{7}{4\cdot5\cdot6}+\cdots$

Question :




Find the sum to n terms of the series $\frac{1}{1\cdot2\cdot3}+\frac{3}{2\cdot3\cdot4}+\frac{5}{3\cdot4\cdot5}+\frac{7}{4\cdot5\cdot6}+\cdots$



What I have done :



The $r$th terms of the numerator and denominator are $2r-1$ and $r(r+1)(r+2)$ respectively.



Therefore the nth term of given series is :



$\frac{2r-1}{r(r+1)(r+2)} =\frac{A}{r}+\frac{B}{r+1}+\frac{C}{r+2}$ .....(1)




By using partial fraction :



and solving for A,B and C we get A = 1/2, B = -1, C =1/2



Putting the values of A,B and C in (1) we get :



$\frac{1}{2r}-\frac{1}{r+1}+\frac{1}{2(r+2)}$



But by putting $r =1,2,3, \cdots$ I am not getting the answer. Please guide me on how to solve this problem. Thanks.
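An exact arithmetic check (a Python sketch using `fractions`) is a good way to catch slips here; the cover-up method gives $A=-\frac12$, $B=3$, $C=-\frac52$, which differ from the values quoted above:

```python
from fractions import Fraction

# Exact check of the partial-fraction decomposition of
# (2r - 1) / (r (r+1) (r+2)).  Cover-up: A = -1/2, B = 3, C = -5/2.
A, B, C = Fraction(-1, 2), Fraction(3), Fraction(-5, 2)
for r in map(Fraction, range(1, 8)):
    lhs = (2 * r - 1) / (r * (r + 1) * (r + 2))
    rhs = A / r + B / (r + 1) + C / (r + 2)
    assert lhs == rhs
print("decomposition verified")
```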

Tuesday 7 August 2018

trigonometry - Application Of De Moivre & Euler's Formulae

Can anyone show me how I can prove that $\sin x \cos^2 (3x) = \frac{1}{4} \sin (7x) - \frac{1}{4}\sin (5x) + \frac{1}{2}\sin x$?
I tried using Euler's formulae
$$\sin x= \frac{e^{ix}-e^{-ix}}{2i}$$
and
$$\cos x = \frac{e^{ix}+e^{-ix}}{2}$$
but the simplification didn't help at all.
PS: Simplify starting from the left.
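For reference, the stated right-hand side matches $\sin x\,\cos^2(3x)$ (note the squared cosine); a sketch via power-reduction and product-to-sum, which avoids expanding the exponentials:

```latex
\sin x\,\cos^2(3x)
  = \sin x\cdot\frac{1+\cos 6x}{2}
  = \frac{1}{2}\sin x + \frac{1}{2}\sin x\cos 6x
  = \frac{1}{2}\sin x + \frac{1}{4}\bigl(\sin 7x - \sin 5x\bigr)
  = \frac{1}{4}\sin 7x - \frac{1}{4}\sin 5x + \frac{1}{2}\sin x.
```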

Monday 6 August 2018

Elementary proof that the derivative of a real function is continuous somewhere



One can use the Baire category theorem to show that if $f:\mathbb{R} \to \mathbb{R}$ is differentiable, then $f'$ is continuous at some $c \in \mathbb{R}$. Is there an elementary proof of this fact? By "elementary" I mean at the level of intro real analysis.



Edit: In spite of the decent response this question has gotten, after more than two and a half months there are still no answers. It's perhaps possible that there's some "deep" reason we should not expect an elementary proof of this. I will therefore also accept a well reasoned discussion as to why such a proof is unlikely.


Answer



One can actually prove with elementary tools something stronger, namely the following



Theorem: If $f:\mathbb{R}\rightarrow\mathbb{R}$ is differentiable in a (non-degenerate) interval $[a,b]$ then $f'$ is continuous at some point $c\in(a,b)$ (a corollary being that if $f$ is differentiable throughout $\mathbb{R}$, then $f'$ is continuous on a dense subset of $\mathbb{R}$).




Proof: For any $[u,t]\subseteq[a,b]$, define $osc(u,t)=\sup_{[w,z]\subseteq[u,t]}\left|\frac{f(t)-f(u)}{t-u} - \frac{f(z)-f(w)}{z-w}\right|$ (with $osc(u,t)=+\infty$ if the right-hand side is unbounded; this can easily be made more formal with some more verbiage). Informally, $osc(u,t)$ tells us how much the slope of $f$ can "oscillate" in the interval $[u,t]$; the essence of the proof is showing that $osc(u,t)$ must converge to $0$ as $u$ and $t$ converge to some point $c$, and that $c$ is then a point at which $f'$ is continuous.



Consider a generic sequence of "concentric and convergent" intervals $[u_i,t_i]$, i.e. one that satisfies $u_i\leq u_{i+1}<t_{i+1}\leq t_i$ for all $i$ and $t_i-u_i\rightarrow 0$, so that $u_i$ and $t_i$ converge to a common limit $c$ contained in every interval.

Let us now show that there is a sequence of concentric and convergent intervals $[u_i,t_i]\subseteq (a,b)$ for which $osc(u_i,t_i)\rightarrow 0$. Suppose it were not the case. Then, starting with an arbitrary $[w_0, z_0]\subseteq(a,b)$ there would be an $\epsilon>0$ such that given $[w_i,z_i]$ we could always find a non-degenerate $[w_{i+1},z_{i+1}]\subseteq [w_i,z_i]$ such that $\left|\frac{f(z_i)-f(w_i)}{z_i-w_i} - \frac{f(z_{i+1})-f(w_{i+1})}{z_{i+1}-w_{i+1}}\right|>\epsilon$ (note that $\epsilon$ is independent of $i$). Furthermore, such $[w_i,z_i]$ could always be chosen arbitrarily small, since if $g:\mathbb{R}\rightarrow\mathbb{R}$ is continuous in a non-degenerate interval $[\alpha,\beta]$, then for any $\delta>0$ there exists a non-degenerate interval $[\alpha',\beta']\subseteq(\alpha,\beta)$ with $|\beta'-\alpha'|<\delta$ and $\frac{g(\beta)-g(\alpha)}{\beta-\alpha}=\frac{g(\beta')-g(\alpha')}{\beta'-\alpha'}$ (the proof is essentially identical to that of Rolle's theorem, but stopping before taking the differentiation limit). Thus $\frac{f(z_i)-f(w_i)}{z_i-w_i}$ would not converge to a (finite) limit, contradicting the differentiability of $f$ at $\lim w_i = \lim z_i$.



Then, consider a sequence of concentric and convergent intervals $[u_i,t_i]\subseteq(a,b)$, with $u_i,t_i\rightarrow c$, for which $osc(u_i,t_i)\rightarrow 0$. It is immediate to see that $osc(u_i,t_i)=\max\left(\left(\sup_{[w,z]\subseteq[u_i,t_i]} \frac{f(z)-f(w)}{z-w}\right) - \frac{f(t_i)-f(u_i)}{t_i-u_i},\ \frac{f(t_i)-f(u_i)}{t_i-u_i} - \left(\inf_{[w,z]\subseteq[u_i,t_i]} \frac{f(z)-f(w)}{z-w}\right)\right)$, so if $osc(u_i,t_i)\rightarrow 0$ then $\sup_{[w,z]\subseteq[u_i,t_i]}\frac{f(z)-f(w)}{z-w},\ \inf_{[w,z]\subseteq[u_i,t_i]}\frac{f(z)-f(w)}{z-w}\rightarrow \lim \frac{f(t_i)-f(u_i)}{t_i-u_i} = f'(c)$. And since in any interval $[u_i,t_i]$ we have that $\sup_{[w,z]\subseteq[u_i,t_i]}\frac{f(z)-f(w)}{z-w} \geq f' \geq \inf_{[w,z]\subseteq[u_i,t_i]}\frac{f(z)-f(w)}{z-w}$, then $f'(x)\rightarrow f'(c)$ as $x\rightarrow c$, i.e. $f'$ is continuous at $c$.


Sunday 5 August 2018

real analysis - $0^0$ -- indeterminate, or $1$?




One of my teachers argued today that $0^0 = 1$. However, WolframAlpha, intuition(?), and various other sources say otherwise... $0^0$ doesn't really "mean" anything.



Can anyone clear this up with some rigorous explanation?


Answer



Short answer: It depends on your convention and how you define exponents.



Long answer: There are a number of ways of defining exponents. Usually these definitions coincide, but this is not so for $0^0$: some definitions yield $0^0=1$ and some don't apply when both numbers are zero (leaving $0^0$ undefined).




For example, given nonnegative whole numbers $m$ and $n$, we can define $m^n$ to be the number of functions $A \to B$, where $A$ is a set of size $n$ and $B$ is a set of size $m$. This definition gives $0^0=1$ because the only set of size $0$ is the empty set $\varnothing$, and the only function $\varnothing \to \varnothing$ is the empty function.



However, an analyst might not want $0^0$ to be defined. Why? Because look at the limits of the following functions:
$$\lim_{x \to 0^+} 0^x = 0, \qquad \lim_{x \to 0} x^0 = 1, \qquad \lim_{t \to 0^+} (e^{-1/t^2})^{-t} = \infty$$
All three limits look like $0^0$. So when this kind of flexibility is desired, you might want to leave $0^0$ undefined, so that it's a lack of definition rather than a discontinuity.
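These three limits are easy to probe numerically (a sketch; the sample points are arbitrary and only suggestive of the limiting behaviour):

```python
import math

# Probe the three 0^0-shaped limits at small parameter values.
x = 1e-3
print(0 ** x)   # 0.0 : lim 0^x as x -> 0+ is 0
print(x ** 0)   # 1.0 : lim x^0 as x -> 0  is 1
t = 1e-1
# (e^{-1/t^2})^{-t} = e^{1/t}, which blows up as t -> 0+
print(math.exp(-1 / t**2) ** (-t))
```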



Typically this is resolved by:




  • If you're in a discrete setting, e.g. considering sets, graphs, integers, and so on, then you should take $0^0=1$.


  • If you're in a continuous setting, e.g. considering functions on the real line or complex plane, then you should take $0^0$ to be undefined.



Sometimes these situations overlap. For example, usually when you define functions by infinite series
$$f(x) = \sum_{n=0}^{\infty} a_nx^n$$
problems occur when you want to know the value of $f(0)$. It is normal in these cases to take $0^0=1$, so that $f(0)=a_0$; the reason being that we're considering what happens as $x \to 0$, and this corresponds with $\lim_{x \to 0} x^0 = 1$.


trigonometry - Generalization of Euler's Formula



Euler's formula states that, for any real number $x$:



$$\cos x=\frac{e^{ix}+e^{-ix}}{2}$$



Can it be generalized in that way?



$$ae^{ix}+be^{-ix}=c\cos(x+d)$$




where $a,b\in \mathbb{C}$ and $c,d\in \mathbb{R}$.
Of course if $a=b=1$ and $c=2$, $d=0$ this is the common Euler's formula, but is it true that for every $a,b$ I can rewrite a sum of complex exponentials as a single cosine? If it is, what is the relationship between these constants?


Answer



Suppose your formula is true for any $x \in \mathbb R$. You can write it as



$$ ae^{ix}+be^{-ix}- \frac{c}{2}(e^{id} e^{ix}+ e^{-id}e^{-ix}) = 0,$$



that is




$$ \left(a - \frac{c}{2}e^{id}\right) e^{ix}+ \left( b - \frac{c}{2}e^{-id}\right) e^{-ix} =0.$$



Now we can use the fact that $e^{ix}$ and $e^{-ix}$ are linearly independent to get



$$a = \frac{c}{2}e^{id}, \quad b = \frac{c}{2}e^{-id}.$$



This implies



$$ c = 2 (ab)^{1/2}, \quad \cos d = \frac{a+b}{2 (ab)^{1/2}}.$$




Since we require $c$ and $d$ to be real, then $ab$ has to be a positive real number and $a+b$ has to be real. This is possible only if $a$ and $b$ are complex conjugates: $b = \bar a$.
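The conclusion $b=\bar a$ can be spot-checked numerically (a sketch; the particular value of $a$ is arbitrary, and $c=2|a|$, $d=\arg a$ restate the relations above for a conjugate pair):

```python
import cmath
import math

# For b = conj(a): a e^{ix} + conj(a) e^{-ix} = 2 Re(a e^{ix}) = c cos(x + d)
# with c = 2|a| and d = arg(a).
a = 1.5 - 0.5j
b = a.conjugate()
c, d = 2 * abs(a), cmath.phase(a)

for x in (0.0, 0.7, 2.0, -3.1):
    lhs = a * cmath.exp(1j * x) + b * cmath.exp(-1j * x)
    rhs = c * math.cos(x + d)
    assert abs(lhs - rhs) < 1e-12
print("identity verified")
```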


Saturday 4 August 2018

divisibility - Gcd number theory proof: $(a^n-1,a^m-1)= a^{(m,n)}-1$

Prove that if $a>1$ then $(a^n-1,a^m-1)= a^{(m,n)}-1$



where $(a,b) = \gcd(a,b)$




I've seen one proof using the Euclidean algorithm, but I didn't fully understand it because it wasn't very well written.
I was thinking something along the lines of have $d= a^{(m,n)} - 1$ and then showing
$d|a^m-1$ and $d|a^n-1$ and then if $c|a^m-1$ and $c|a^n-1$, then $c\le d$.



I don't really know how to show this though...



I can't seem to be able to get $d\cdot k = a^m-1$ for some integer $k$.



Any help would be beautiful!
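Before diving into a proof, the identity is easy to sanity-check numerically (a sketch over small values; not a proof, of course):

```python
from math import gcd

# Check gcd(a^n - 1, a^m - 1) == a^gcd(n, m) - 1 over a small grid.
ok = all(
    gcd(a**n - 1, a**m - 1) == a**gcd(n, m) - 1
    for a in (2, 3, 10)
    for n in range(1, 12)
    for m in range(1, 12)
)
print(ok)  # True
```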

Friday 3 August 2018

real analysis - What is $\lim_{n\to\infty} \sqrt[2n+1]{-1}$?



First of all, I'm sorry if this question has been already asked and answered, as far as I searched, I couldn't find such a question on this site.
So, I've been thinking about the limit of the sequence $\left(\root{2n+1}\of{-1}\right)_{n\geq 0}$. Since the order of the root is odd for every $n$, this sequence, is obviously a constant sequence with the general term $a_n = -1$. So, from this follows that $$ \lim_{n\to\infty} \root{2n+1}\of{-1} = -1 $$.
We can even do an $\epsilon-N$ proof to show this (and it's realy easy actually): $$ \forall \epsilon > 0 \hspace{0.5cm} \exists N \geq 0 \hspace{0.3cm} \text{s.t.} \hspace{0.3cm} \left|\root{2n+1}\of{-1}+1\right|<\epsilon \hspace{0.5cm} \forall n \geq N \\ \left|\root{2n+1}\of{-1} + 1\right| = \left|-1 + 1\right| = 0 < \epsilon \hspace{0.5cm} \forall n \geq 0 \\ N = 0 \ _\blacksquare $$.




However, if we use techniques usually used for solving limits, we end up with a different result:
$$ \begin{align*}
\lim_{n\to\infty} \root{2n+1}\of{-1} &= \lim_{n\to\infty} (-1)^{1\over 2n+1} \\
&= \left(\lim_{n\to\infty}-1\right)^{\lim_{n\to\infty}{1\over 2n+1}} \\ &= (-1)^0 \\ &= 1 \end{align*} $$
.



What is wrong here? Why do the two methods give different results?



Edit: To make everything clear, I'm assuming the real root as defined by $\root n \of {} : \mathbb{R} \to \mathbb{R}$ for odd $n$ and treating this as a real-analysis problem. Also, it's pretty explicit from my question that I'm working with a sequence and not with a function. The limit only goes through natural values of $n$




Edit 2: I've figured it out. Thank you all for your answers, especially @Jack, who pointed out that the theorem I've been using, $\lim_{n\to\infty}(a_n^{b_n}) = (\lim_{n\to\infty} a_n)^{(\lim_{n\to\infty} b_n)}$, is not true in general. I've consulted my textbook again and saw that I'd missed the part where they said $a_n > 0, \forall n \in \mathbb{N}$. Of course, we can think of this problem also from the viewpoint of functions, and the fact that the function $(-1)^x$ is not continuous is another gap in using something like the above theorem. Thank you all again for being so kind and giving me so many answers.


Answer



Your expression $\sqrt[2n+1]{-1}$ (for any nonnegative integer $n$) is defined to be, as you stated in the post, the unique real number $y$ such that $y^{2n+1}=-1$. Since by your definition, $\sqrt[2n+1]{-1}=-1$, there is no doubt that
$$
\lim_{n\to\infty}\sqrt[2n+1]{-1}=\lim_{n\to\infty}(-1)=-1.
$$



There is no problem for the limit itself.



What goes wrong here is in your second "method":





if we use techniques usually used for solving limits, we end up with a different result:
$$ \begin{align*}
\lim_{n\to\infty} \root{2n+1}\of{-1}
&= \lim_{n\to\infty} (-1)^{1\over 2n+1} \\
&= \left(\lim_{n\to\infty}-1\right)^{\lim_{n\to\infty}{1\over 2n+1}} \\
&= (-1)^0 \\ &= 1 \end{align*} $$
.





The following step is problematic:
$$
\lim_{n\to\infty} (-1)^{1\over 2n+1}
= \left(\lim_{n\to\infty}-1\right)^{\lim_{n\to\infty}{1\over 2n+1}}
$$



What you use here is
$$
\lim_{n\to\infty}{a_n}^{b_n}=(\lim_{n\to\infty}a_n)^{(\lim_{n\to\infty}b_n)} \tag{1}
$$


where $a_n=-1$ is the constant sequence and $b_n=\frac{1}{2n+1}$. But (1) is NOT true in general.






[Added]
In real analysis, one rarely writes expression like $a^b$ for $a\leq 0$ and arbitrary real number $b$, unless one specifically defines such expression for some particular $a$ and $b$. For instance, you define $(-1)^{1/n}$ for only $n$ being an odd positive integer and let $(-1)^{1/n}$ be the unique number $y$ such that $y^{n}=-1$. In such situation, $(-1)^{1/n}$ is nothing but the real number $-1$.



One definition for the expression $a^b$ with $a>0$ and $b\in\mathbb{R}$ is $e^{b\ln a}$. And one has the following statement





Suppose $\{a_n\}$ is a positive sequence of real numbers such that $\lim_{n\to \infty}a_n=a$. Assume in addition that $\{b_n\}$ is a real sequence with $\lim_{n\to\infty}b_n=b$. Then
$$
\lim_{n\to \infty}a_n^{b_n}=\lim_{n\to \infty} e^{b_n\ln a_n}=e^{b\ln a}=a^b.
$$




If one does want to consider the expression $a^b$ for negative real number $a$, then one would




  • either stick to the definition for the some specific $a$ one has,



  • or unavoidably talk about the complex logarithm. See also this Wikipedia article.
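The gap can also be seen numerically (a sketch): the real odd root is the constant $-1$, while the principal complex power $(-1)^{1/(2n+1)}=e^{i\pi/(2n+1)}$ tends to $1$; the two meanings of the symbol simply diverge.

```python
import cmath
import math

# Real odd root of -1 vs. the principal complex power e^{i*pi/(2n+1)}.
for n in (1, 10, 1000):
    real_root = -1.0  # the unique real y with y^(2n+1) = -1
    principal = cmath.exp(1j * math.pi / (2 * n + 1))
    print(n, real_root, principal)  # principal drifts toward 1
```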



real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...