Sunday 30 December 2018

Complex integration: choosing a contour without divergences

How should I pick the contour to compute the integral
$$\int_{-a}^a\frac{1-z}{\sqrt{(z-a)(z+a)}}\mathrm d z\,, $$
where $a$ is a real number?




My problem is that when I choose a keyhole contour around the cut $(-a,a)$, the contribution of the big circle of radius $R$ diverges as $R$ goes to infinity. The integrand goes as
$$\frac{1-z}{\sqrt{(z-a)(z+a)}} \sim i-\frac{i}{z}+i\frac{ a^2}{2 z^2}+O\left(\frac{1}{z^3}\right)\,,$$
for $|z|\rightarrow \infty$ and thus
$$ \lim_{R\rightarrow \infty}\left|\int_{|z|=R}\frac{1-z}{\sqrt{(z-a)(z+a)}}\,\mathrm d z\right| \sim \lim_{R\rightarrow\infty} 2\pi R=\infty\,.$$



But I know that the answer is finite as



$$\int_{-a}^a\frac{1-z}{\sqrt{(z-a)(z+a)}}\mathrm d z =\pi\,. $$



Where am I doing something wrong?
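As a numerical sanity check of the claimed value (reading the square root on the cut as $\sqrt{a^2-x^2}$, which is one natural branch choice, and not part of the original question), one can integrate directly:

    import numpy as np
    from scipy.integrate import quad

    a = 2.0  # any positive real; the value does not depend on a
    val, err = quad(lambda x: (1 - x) / np.sqrt(a**2 - x**2), -a, a)
    print(val, np.pi)  # the x/sqrt(a^2 - x^2) part integrates to 0 by oddness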

algebra precalculus - Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$?




We know $$i^2=-1\,,$$ then why does this happen?
$$
i^2 = \sqrt{-1}\times\sqrt{-1}
=\sqrt{-1\times-1}
=\sqrt{1}
= 1\,?
$$



EDIT: I see this has been dealt with before but at least with this answer I'm not making the fundamental mistake of assuming an incorrect definition of $i^2$.


Answer



From $i^2=-1$ you cannot conclude that $i=\sqrt{-1}$, just like from $(-2)^2 = 4$ you cannot conclude that $-2=\sqrt 4$. The symbol $\sqrt a$ is by definition the positive square root of $a$ and is only defined for $a\ge0$.



It is said that even Euler got confused with $\sqrt{ab} = \sqrt{a}\,\sqrt{b}$. Or did he? See "Euler's 'mistake'? The radical product rule in historical perspective" (Amer. Math. Monthly 114 (2007), no. 4, 273–285).
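The same branch subtlety is visible in any system with a principal square root; for instance, a quick illustration with Python's cmath module:

    import cmath

    print(cmath.sqrt(-1) * cmath.sqrt(-1))  # (-1+0j): principal sqrt(-1) is i, and i*i = -1
    print(cmath.sqrt(-1 * -1))              # (1+0j): sqrt(ab) = sqrt(a)sqrt(b) fails for negatives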


sequences and series - Calculating $\frac{1}{1\cdot 2\cdot 3}+\frac{1}{5\cdot 6\cdot 7}+\frac{1}{9\cdot 10\cdot 11}+\cdots$



I found this question on an old exam paper - UK GCE A-Level (1972) - equivalent to university entrance level in most countries I believe. The method may be "standard" but has left me stumped. Maybe I am missing something obvious. Can someone give me a hint rather than a full worked solution?




Question




Calculate: $$\dfrac{1}{1\cdot 2\cdot 3}+\dfrac{1}{5\cdot 6\cdot 7}+\dfrac{1}{9\cdot 10\cdot 11}+\cdots$$




What do I notice?



It is an infinite series, so one of the geometric, Maclaurin, or Taylor series might be useful. The sum converges because each term is less than the corresponding term of a geometric series with ratio $\tfrac12$.



The terms are formed from "truncated" factorials (my expression)



So the series can be rewritten




$$\frac{0!}{3!}+\frac{4!}{7!}+\frac{8!}{11!}+\cdots$$



There are three successive positive integers in the denominators of each term in the original series and the multiples of 4 are missing from the denominators.



The integers "within" the factorials in the numerator and denominator are (arithmetically) increasing by 4.



Because it is an infinite series I can't hope to "group" the terms by finding common multiples.



So I get stuck.




Then I cheat and put: $\displaystyle\sum \frac{(4k-4)!}{(4k-1)!}$ into Wolfram Alpha.



The answer, $\frac{\ln(2)}{4}$, pops out. So I feel an approach to the solution might have something to do with the Maclaurin expansion of $\ln(1+x)$, but I can't get anywhere with this.



Any hints would be gratefully received.



Thanks,



Clive



Answer



I have a suspicion that the following method would be more like the one expected of the candidates for this exam.



First we decompose into partial fractions, so, as given already, $$S=\frac 12\sum_{k=0}^{\infty}\left(\frac{1}{4k+1}-\frac{2}{4k+2}+\frac{1}{4k+3}\right)$$



Then we start by writing this out explicitly, so that $$2S=\left(\frac 11-\frac 22+\frac 13\right)+\left(\frac 15-\frac 26+\frac 17\right)+\left(\frac 19-\frac{2}{10}+\frac{1}{11}\right)+...$$



Then we systematically add in and subtract terms, so $$2S=\left(\frac 11-\frac 12+\frac 13-\frac 14\right)+\color{red}{\left(-\frac 12+\frac 14\right)}+\left(\frac 15-\frac 16+\frac 17-\frac 18\right)+\color{red}{\left(-\frac 16+\frac 18\right)}+\left(\frac 19-\frac{1}{10}+\frac{1}{11}-\frac{1}{12}\right)+\color{red}{\left(-\frac{1}{10}+\frac{1}{12}\right)}+...$$



So $$2S=\ln 2-\color {red}{\frac 12\ln 2}$$




Then $$S=\frac 14\ln 2$$



I don't think the integration method as shown by @Dr. MV was expected to be known by those students...
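For anyone wanting a quick numerical confirmation of the value $\frac14\ln 2$ (not part of the original answer), the $k$-th term is $\frac{1}{(4k-3)(4k-2)(4k-1)}$, so:

    from math import log

    s = sum(1 / ((4*k - 3) * (4*k - 2) * (4*k - 1)) for k in range(1, 100000))
    print(s, log(2) / 4)  # both ≈ 0.1732867951...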


elementary number theory - Arithmetic of integers, divisibility

Show that, for integers $a$ and $b$, if $3\mid a^{2}+b^{2}$ then $3\mid a$ and $3\mid b$.



I tried it straight away, assuming that $3$ divides the sum and then trying to show that it has to divide each term separately.

Friday 28 December 2018

abstract algebra - When does a polynomial divide $x^k - 1$ for some $k$?



Given a monic polynomial $f\in\mathbb{Z}[x]$, how can I determine whether there is a $k\in\mathbb{Z}^+$ such that $f|(x^k-1)$?



For example, $x^2-x+1$ divides $x^6-1$, but $x^2-x-1$ does not divide any such $x^k-1$ (unless I miss my mark!).



I would also be interested in finding how to answer this for other parameterized families of polynomials, or working over $\mathbb{Q}[x]$—I expect the former to be hard and the latter easy.
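Both examples are easy to check with a computer algebra system; a small sketch using SymPy's polynomial division (not part of the original question):

    from sympy import symbols, div

    x = symbols('x')
    q, r = div(x**6 - 1, x**2 - x + 1, x)
    print(r)  # 0: x^2 - x + 1 divides x^6 - 1
    q, r = div(x**24 - 1, x**2 - x - 1, x)
    print(r)  # nonzero: x^2 - x - 1 has a root of modulus > 1, so it divides no x^k - 1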


Answer



There are various algorithms known for this, based on properties of roots of unity. See for example the following papers.




F. Beukers, C. J. Smyth. Cyclotomic Points on Curves, Proc. Millennial Conference on Number Theory, May 21–26, 2000, Urbana-Champaign, AK Peters (2001). (Section 2)



Iskander Aliev, Chris Smyth. Solving algebraic equations in roots of unity. (Section 2.1)



R. J. Bradford and J. H. Davenport, Effective tests for cyclotomic polynomials.
Symbolic and Algebraic Computation, Lecture Notes in Computer Science, 1989,
Volume 358/1989, 244-251, DOI: 10.1007/3-540-51084-2_22



Abstract (from Bradford and Davenport)



We present two efficient tests that determine if a given polynomial is cyclotomic, or is a product of cyclotomics. The first method uses the fact that all the roots of a cyclotomic polynomial are roots of unity, and the second the fact that the degree of a cyclotomic polynomial is a value of $\:\phi(n),$ for some $n$. We can also find the cyclotomic factors of any polynomial.




The first method was shown as images in the original answer; per the abstract above, it tests whether all roots of the polynomial are roots of unity.


Thursday 27 December 2018

Proof of determinants for matrices of any order



I was told that the determinant of a square matrix can be expanded along any row or column, and was given a proof by expanding in all possible ways, but only for square matrices of order $2$ and $3$.





  • Is a general proof for any order even possible?

  • If so, how is this done?

  • On a similar note, how can we prove the various properties of determinants of square matrices of any order, like the following:




    • Swap two rows/columns and all we get is a minus sign as a result.

    • $R_1 \to R_1+ aR_2$ does not change the determinant.

    • Determinant of the transpose is the same as the determinant of the original matrix.




Answer



Here is one possible path. We define the determinant recursively:




  • if $A$ is $1\times 1$, let $\det A=A$;


  • If $A$ is $(n+1)\times (n+1)$, let
    $$
    \det A=\sum_{k=1}^{n+1} (-1)^{k+1}A_{k1}\,M_{k1}^A,

    $$
    where $M_{k1}^A$ is the determinant of the $n\times n$ matrix obtained by removing the $k^{\rm th}$ row and the first column of $A$.




Now,




  1. Show that if $B$ is obtained from $A$ by multiplying a row by $\alpha$, then $$\det B=\alpha\,\det A.$$ This is done by induction very easily.


  2. Show that if we have $A,B,C$ with $A_{rj}=B_{rj}+C_{rj}$ for all $j$, and $A_{kj}=B_{kj}=C_{kj}$ when $k\ne r$ and for all $j$, then
    $$\det A=\det B+\det C.$$ Again this is done by induction. When $r=1$ the equality follows trivially from the definition of determinant (as the minors of $A,B,C$ will be all equal) and when $r\ne 1$ we use induction.



  3. Show that if $B$ is obtained from $A$ by swapping two rows, then $$\det B=-\det A.$$ Here one first swaps rows $1$ and $r$, and then any other swapping of two rows $r$ and $s$ can be achieved by three swaps ($r$ to $1$, $s$ to $1$, $r$ to $1$). This can be used to show that one can calculate the determinant along any row (swap it with row 1).


  4. It now follows that if $A$ has two equal rows, then $\det A=0$ (because $\det A=-\det A$).


  5. If $B_{rj}=A_{rj}+\alpha A_{sj}$, and $B_{kj}=A_{kj}$ when $k\ne r$, then by 1. and 2.,
    $$\det B=\det A+\alpha\det C,$$ where $C$ is the matrix equal to $A$ but with the $s$ row in place of the $r$ row; by 4., $\det C=0$, so $$\det B=\det A.$$


  6. Now one considers the elementary matrices, and checks directly (using the above properties) that for any elementary matrix $E$, $$\det EA=\det E\,\det A.$$


  7. If $B$ is invertible, then $B$ can be written as a product of elementary matrices, $B=E_1E_2\cdots E_m$, and so
    \begin{align}
    \det BA&=\det E_1E_2\cdots E_m A=\det E_1\det E_2\cdots\det E_m\det A\\ \ \\
    &=\det (E_1\cdots E_m)\det A=\det B\det A.
    \end{align}

    Similarly, $\det AB=\det A\det B$.


  8. If neither $A$ nor $B$ are invertible: then $AB$ is not invertible either. For a non-invertible matrix, its Reduced Row Echelon form has a row of zeroes, and so its determinant is zero; as we can move to $A$ by row operations, it follows that $\det A=0$; similarly, $\det AB=0$. So $$\det AB=\det A\det B$$ also when one of them is not invertible.


  9. Knowing that det is multiplicative, we immediately get that, when $A$ is invertible, $$\det A^{-1}=\frac1{\det A}.$$


  10. For an arbitrary matrix $A$, it is similar to its Jordan form: $A=PJP^{-1}$. Then
    $$
    \det A=\det (PJP^{-1})=\det P\,\det J\,\frac1{\det P}=\det J.
    $$
    As $J$ is triangular with the eigenvalues of $A$ (counting multiplicities) in its diagonal, we get that
    $$
    \det A=\lambda_1\cdots\lambda_n,

    $$
    where $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $A$, counting multiplicities.


  11. Since the eigenvalues of $A^T$ are the same as those from $A$, we get
    $$
    \det A^T=\det A.
    $$


  12. Now, everything we did for rows, we can do for columns by working on the transpose. In particular, we can calculate the determinant along any column.
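For concreteness, here is a minimal sketch of the recursive first-column expansion from the definition above, checked against numpy's determinant (exponential-time, for illustration only):

    import numpy as np

    def det(A):
        # Laplace expansion along the first column, as in the recursive definition
        n = len(A)
        if n == 1:
            return A[0][0]
        # (-1)**k below matches (-1)**(k+1) in the 1-indexed formula
        return sum((-1)**k * A[k][0] * det([row[1:] for i, row in enumerate(A) if i != k])
                   for k in range(n))

    A = [[2.0, 1.0, 0.0], [1.0, 3.0, 4.0], [0.0, 5.0, 6.0]]
    print(det(A), np.linalg.det(A))  # both -10.0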



algebra precalculus - How to know that $a^3+b^3 = (a+b)(a^2-ab+b^2)$



Is there a way to go from $a^3+b^3$ to $(a+b)(a^2-ab+b^2)$ other than knowing the property by heart?


Answer



If you want to know if $a^n + b^n$ is divisible by $a+b$ (or by $a-b$, perhaps), you can always try long division, whether explicitly or in your head. I can't figure out a way to typeset the long division explicitly here, so I'll show you how one would do it "in one's head".



For example, for $a^3+b^3$, to divide $a^3+b^3$ by $a+b$, start by writing $a^3+b^3= (a+b)(\cdots)$. Then: we need to multiply $a$ by $a^2$ to get the $a^3$, so we will get $a^3+b^3=(a+b)(a^2+\cdots)$. The $a^2$ makes an unwanted $a^2b$ when multiplied by $b$, so we need to get rid of it; how? We multiply $a$ by $-ab$. So now we have
$$a^3+b^3 = (a+b)(a^2-ab+\cdots).$$
But now you have an unwanted $-ab^2$ you get when you multiply $b$ by $-ab$; to get rid of that $-ab^2$, you have to "create" an $ab^2$. How? We multiply $a$ by $b^2$. So now we have:
$$a^3 + b^3 = (a+b)(a^2-ab+b^2+\cdots)$$

and then we notice that it comes out exactly, since we do want the $b^3$ that we get when we multiply $b^2$ by $b$; so
$$a^3 + b^3 = (a+b)(a^2-ab+b^2).$$



If the expression you want is not divisible by what you are trying, you'll run into problems which require a "remainder". For instance, if you tried to do the same thing with $a^4+b^4$, you would start with $a^4+b^4 = (a+b)(a^3+\cdots)$; then to get rid of the extra $a^3b$, we subtract $a^2b$: $a^4+b^4 = (a+b)(a^3 - a^2b+\cdots)$. Now, to get rid of the unwanted $-a^2b^2$, we add $ab^2$, to get $a^4+b^4 = (a+b)(a^3-a^2b+ab^2+\cdots)$. Now, to get rid of the unwanted $ab^3$, we subtract $b^3$, to get
$$a^4+b^4 = (a+b)(a^3-a^2b+ab^2 - b^3+\cdots).$$
At this point we notice that we get an unwanted $-b^4$ (unwanted because we want $+b^4$, not $-b^4$). There is no way to get rid of it with the $a$, so this will be a "remainder". We need to cancel it out (by adding $b^4$) and then add what is still missing (another $b^4$), so
$$a^4 + b^4 = (a+b)(a^3-a^2b+ab^2 -b^3) + 2b^4.$$
(Which, by the way, tells you that $a^4-b^4 = (a+b)(a^3-a^2b+ab^2-b^3)$, by moving the $2b^4$ to the left hand side).
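Both computations can be reproduced with SymPy's polynomial division, treating the expressions as polynomials in $a$ (a quick check, not part of the original answer):

    from sympy import symbols, div, factor

    a, b = symbols('a b')
    print(factor(a**3 + b**3))         # (a + b)*(a**2 - a*b + b**2)
    q, r = div(a**4 + b**4, a + b, a)  # long division in the variable a
    print(q, r)                        # a**3 - a**2*b + a*b**2 - b**3, remainder 2*b**4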


Wednesday 26 December 2018

real analysis - Is there a way to show the sum of any different square root of prime numbers is irrational?

Is there a way to show that the sum of any distinct square roots of prime numbers is irrational? For example, $$\sqrt2+\sqrt3+\sqrt5 +\sqrt7+\sqrt{11}+\sqrt{13}+\sqrt{17}+\sqrt{19}$$ should be an irrational number.



One approach I used is to let the sum be a root of an even polynomial $f(x)$ with integer coefficients, and to proceed by induction on adding another $\sqrt{p_{k+1}}$: the new polynomial can be written as $$f(x+\sqrt{p_{k+1}})f(x-\sqrt{p_{k+1}})\,,$$



where $$f(x\pm\sqrt{p_{k+1}})=P(x)\pm Q(x)\sqrt{p_{k+1}},$$




where $P(x)$ is an even polynomial and $Q(x)$ is an odd polynomial.



The new polynomial can be written as $$P^{2}(x)- Q^{2}(x)p_{k+1}.$$



Assume it has a rational root $a$; then we must have $$P(a)=Q(a)=0.$$



My calculation stopped here since I can't derive any contradiction from this. Can anyone continue this proof, or suggest a better way to solve it? Thanks!

calculus - Function as a "constant of integration"



I'm reading the book Differential Equations with Applications and Historical Notes, 3rd edition, specifically section 8 about exact equations. The author is trying to prove that $\partial M/\partial y = \partial N/\partial x$ holds iff the equation
\begin{equation}
M(x,y)dx + N(x,y)dy = 0
\end{equation}


is an exact differential equation.
At some point we integrate the equation
\begin{equation}
\frac{\partial f(x,y)}{\partial x} = M(x,y)
\end{equation}

to get
\begin{equation}
f(x, y) = \int M(x,y)dx + g(y)
\end{equation}

The author states that the function $g(y)$ appears as a constant of integration because if we take the derivative of both sides with respect to $x$, then $g(y)$ disappears, since it doesn't depend on $x$.
That's the part I have trouble with: $y$ is a dependent variable and $x$ is the independent variable, so wouldn't the derivative of $g(y)$ with respect to $x$ be
\begin{equation}

\frac{d\,g(y)}{dy} \frac{dy}{dx}
\end{equation}

and not $0$ ?


Answer



This is a common poorly written part in differential equations textbooks, because they don't want to spend time discussing differential forms.



At this point we forget that $y$ depends on $x$. Of course then the equation $M(x,y)dx+N(x,y)dy=0$ looks weird, and indeed it's wrong. What is meant there is that if we have a dependence of $x$ and $y$, a curve on $x$-$y$ plane, denoted $\gamma$, then the pullback of $M(x,y)dx+N(x,y)dy$ on $\gamma$ is $0$. For example, if we can parametrize $\gamma$ by $x$ (i.e. we can write $y$ as a function of $x$), then this condition says $\frac{dy}{dx} = -\frac{M(x,y)}{N(x,y)}$. That's why we want to find such $\gamma$.



The exactness condition means that $df=M(x,y)dx+N(x,y)dy$. Then the level sets of $f$, $\{(x,y)|f(x,y)=c\}$, give us such $\gamma$'s. Note that exactness follows from closedness on simply connected domains.




So, one can separate this problem into two stages: one where $x$ and $y$ are independent, and then one where we look for the required dependence.



Alternatively, instead of using differential forms, one can think of $(M,N)$ as a vector field on the $x$-$y$ plane; being the gradient of $f$, it is perpendicular to the $\gamma$'s, the level sets of $f$.


probability - Expected value of non-negative bounded random variable

I have a random variable $Y$ with $0\leq Y\leq 1$, and I want to bound its expected value, given a high-probability bound
$P(Y\geq x) \leq g(x)$. Can I say that:

$$ E[Y] = \int_0^1P(Y\geq x) dx \leq \int_0^1 g(x) dx\, ?$$
I tried showing it but I am not really strong in measure theory and Fubini theorem:
\begin{align}\int_0^1P(Y\geq x) dx &= \int_0^1 \left(\int_y^\infty f_Y(z)dz\right)dy \\ &= \int_0^1 \left(\int_0^z f_Y(z)dy\right)dz \\ &= \int_0^1 z\, f_Y(z)dz = E[Y]
\end{align}

Is it correct?

Tuesday 25 December 2018

sequences and series - How to prove this identity $\pi=\sum\limits_{k=-\infty}^{\infty}\left(\frac{\sin(k)}{k}\right)^{2}\;$?



How to prove this identity? $$\pi=\sum_{k=-\infty}^{\infty}\left(\dfrac{\sin(k)}{k}\right)^{2}\;$$

I found the above interesting identity in the book $\bf \pi$ Unleashed.



Does anyone knows how to prove it?



Thanks.


Answer



Find a function whose Fourier coefficients are $\sin{k}/k$. Then evaluate the integral of the square of that function.



To wit, let




$$f(x) = \begin{cases} \pi & |x|<1\\0&|x|>1 \end{cases}$$



Then, if



$$f(x) = \sum_{k=-\infty}^{\infty} c_k e^{i k x}$$



then



$$c_k = \frac{1}{2 \pi} \int_{-\pi}^{\pi} dx \: f(x) e^{-i k x} = \frac{\sin{k}}{k}$$




By Parseval's Theorem:



$$\sum_{k=-\infty}^{\infty} \frac{\sin^2{k}}{k^2} = \frac{1}{2 \pi} \int_{-\pi}^{\pi} dx \: |f(x)|^2 = \frac{1}{2 \pi} \int_{-1}^{1} dx \: \pi^2 = \pi $$



ADDENDUM



This result is easily generalizable to



$$\sum_{k=-\infty}^{\infty} \frac{\sin^2{a k}}{k^2} = \pi\, a$$




where $a \in[0,\pi)$, using the function



$$f(x) = \begin{cases} \pi & |x|<a\\0&|x|>a \end{cases}$$
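Both sums are easy to test numerically; a quick check of the main identity (not part of the original answer; the $k=0$ term contributes $1$ in the limit):

    import numpy as np

    k = np.arange(1, 10**6)
    s = 1.0 + 2.0 * np.sum((np.sin(k) / k)**2)  # symmetric in ±k; k = 0 term is 1
    print(s, np.pi)  # agree closely (convergence is slow: error ~ 1/N)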


Monday 24 December 2018

algebra precalculus - Why doesn't simplifying $i$ using division seem to work?

I was wondering why simplifying $i$ doesn't seem to work with division the same way it does multiplication.



For example, the following works:



$i^4 = 1$



$i^{31} = (i^4)^7 \cdot i^3 = 1 \cdot (-1) \cdot i = -i$




But not when applying the rules of exponents to this concept with division:



$i^{31} = \cfrac{(i^4)^8}{i^1} = \cfrac{1}{i} = \cfrac{1}{\sqrt{-1}} \ne -i$



Have I messed things up with the rules?

How to solve this limit: $\lim_{n \to \infty} \frac{(2n+2)(2n+1)}{(n+1)^2}$




$$
\lim_{n\to\infty}\frac{(2n+2)(2n+1)}{(n+1)^{2}}
$$



When I expand it gives:
$$
\lim_{n\to\infty} \dfrac{4n^{2} + 6n + 2}{n^{2} + 2n + 1}
$$
How can this equal $4$? Because if I replace $n$ with infinity it goes $\dfrac{\infty}{\infty}$ only.



Answer



To handle limits involving fractions you factor out the dominant term from top and bottom so that they cancel; by dominant term I mean the term of highest degree, in this case $n^2$: $$\lim_{n\to\infty} \frac{4n^2 + 6n + 2}{n^2 + 2n + 1}$$
$$=\require{cancel}\lim_{n\to\infty} \frac{2\cancel{n^2}\left(2 + \frac{3}{n} + \frac{1}{n^2}\right)}{\cancel{n^2}\left(1 + \frac{2}{n} + \frac{1}{n^2}\right)}$$
$$=\lim_{n\to\infty} \frac{2\left(2 + \frac{3}{n} + \frac{1}{n^2}\right)}{1 + \frac{2}{n} + \frac{1}{n^2}}\tag{1}$$
$$=\frac{2\left(2 + 0 + 0\right)}{1 + 0 + 0}\tag{2}$$
$$=\color{blue}{4}$$



You get from $(1)$ to $(2)$ by making the observation that each of the fractions with $n$ or $n^2$ in the denominator will equal zero in the limit as $n$ tends to infinity.
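The limit can also be confirmed symbolically, e.g. with SymPy:

    from sympy import symbols, limit, oo

    n = symbols('n')
    print(limit((2*n + 2)*(2*n + 1) / (n + 1)**2, n, oo))  # 4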


geometry - Have "algebraic angles" been studied before?

I'm writing a geometric software library and I came up with a useful concept.



Let's call a real number $\alpha$ an algebraic angle if $\alpha\in[0,2\pi)$ and $\cos \alpha$ is an algebraic number. The set of algebraic angles has some pretty neat properties:




  • The sine, cosine, and tangent of an algebraic angle are algebraic.

  • Define negation and addition of angles the usual way, wrapping around at $2\pi$. Then algebraic angles are closed under negation and addition (since $\cos(\alpha+\beta)=\cos\alpha\cos\beta-\sin\alpha\sin\beta$).

  • Define multiplication of an algebraic angle by a rational number to also wrap around at $2\pi$. Multiplying an algebraic angle by a rational yields another algebraic angle! This can be proved from the identity $\cos(n\alpha)=T_n(\cos\alpha)$, which I learned about here. $T_n$ is a Chebyshev polynomial of the first kind. In particular, it is a polynomial with integer coefficients.

  • Since $\pi$ is an algebraic angle, so is any rational multiple of $\pi$ in $[0,2\pi)$.


  • Algebraic angles are almost a vector space over the rationals (I haven't proved anything interesting using this fact though). They're not quite a vector space because $x(y\alpha)$ is not necessarily equal to $(xy)\alpha$. For example, $(1/2)(2\cdot\pi)$ is $0$ but $((1/2)\cdot 2)\pi$ is $\pi$.



Has the set of algebraic angles appeared in academic literature? Individual algebraic angles come up everywhere in geometry: for example, the interior angle between two faces of a regular dodecahedron is $\arccos(-\frac{1}{5}\sqrt{5})$.






Let me elaborate on why an algebraic angle divided by an integer yields an algebraic angle: Suppose that $\cos\alpha$ is algebraic, so that there is some polynomial $P(x)$ with integer coefficients such that $P(\cos\alpha)=0$. Let $n$ be a positive integer. By the Chebyshev identity above, $P(T_n(\cos(\alpha/n)))=0$, so $\cos(\alpha/n)$ is algebraic.
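A quick numerical check of the Chebyshev identity $\cos(n\alpha)=T_n(\cos\alpha)$ that this argument relies on (my addition, using numpy's Chebyshev polynomials):

    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    alpha, n = 0.7, 5
    Tn = Chebyshev.basis(n)  # Chebyshev polynomial of the first kind, T_5
    print(Tn(np.cos(alpha)), np.cos(n * alpha))  # equal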







Edited 2017-01-10:



Ok, after reading some of the answers and comments I realized that the interesting properties of algebraic angles become obvious if you consider how they transform under $\alpha\mapsto e^{i\alpha}$. The algebraic angles become algebraic points on the unit circle, angle negation becomes complex conjugation, angle addition becomes complex multiplication, and multiplying by a rational essentially becomes raising to a rational power. It's clear that algebraicity is preserved by each of these operations.

linear algebra - Largest eigenvalue of a symmetric "generalized doubly stochastic" matrix

Let $A \in \mathbb{R}^{n \times n}$ be a symmetric matrix whose rows and columns sum to one. $A$ is not necessarily a doubly stochastic matrix, because negative entries are possible.



What can be said about the largest eigenvalue $\lambda$ of $A$? Is there a "good" upper bound for $\lambda$?




Additional constraint: Suppose that $|a_{ij}| \leq 1$. Does $\lambda \leq 1$ hold?

Saturday 22 December 2018

complex analysis - Transfer function graph versus polynomial function graph



I completely understand the basics of polynomial functions, including how to solve one, draw it on a graph, find zeros and poles, asymptotes, etc.




A rational polynomial function would look like this:
$$f(x) = \frac{10x}{(x+10)^2}$$
A plot of this function accompanied the original post.



The double pole at $-10$ makes the function approach that point from both sides while never reaching it. The simple zero at $0$ means that the function crosses the x-axis at $x=0$.



But I don't quite understand transfer functions, which are plotted on a Bode graph.



If I take same function (but here $x$ is replaced with $s$, which is $j\omega$ or "corner frequency" representing x-axis, while magnitude represents y-axis of same graph) it (transfer function) would look like this:

$$H(s) = \frac{10s}{(s+10)^2}$$
A Bode (magnitude) plot of this transfer function accompanied the original post.



Here, the double pole at $-10$ doesn't "divide" the function into two separate parts but affects the curvature of the curve. The same holds true for the zero at $0$.



Due to the $\log$ scale of the x-axis, we are not able to plot the graph below zero, only to approach it.



The problem that confuses me here is that the pole in the first example doesn't play the same role as in the second example, and the same goes for the zeros in both examples. Why is that so? Why do the graphs in the two examples differ so much, when the functions are so similar to each other?


Answer




In the second graph you are plotting this function:
$$\vert H(i\omega)\vert=\left\vert\frac{10j\omega}{(j\omega+10)^2}\right\vert$$
that is
$$\vert H(i\omega)\vert=\frac{10\vert\omega\vert}{\omega^2+100}$$
Using a log scale the expression becomes (assuming $\omega\in]0,\infty[$):
$$y=20\log_{10}\frac{10\times10^x}{10^{2x}+100}$$



Now, are you still sure that this function is the same as the one that you plotted in the first diagram, that is:
$$y=\frac{10x}{(x+10)^2}$$




EDIT:



There is not so much math in the answer I gave you. You need just to be able to compute the magnitude of a complex number as $\vert z\vert=\sqrt{zz^*}$ and to understand that the substitution $\omega=10^x$ reparametrizes the function with the log parameter $x$. Then you need to take the log of the value of the function if you want to have also the $y$-axis in log scale (and then, optionally, multiplying by $10$ to pass from bel to decibel and again by $2$, as is the engineering practice, to pass from the signal power level to the signal rms value).



That said, please think a little bit about whether it makes sense to compare what you are comparing. Even though $f$ and $H$ have the same analytical expression, they are two completely different functions. $f$ is real-valued and its domain is in the real line, while $H$ is complex-valued and its domain is in the complex plane. So the first inconsistency is in trying to compare the value of a real quantity with a complex one. In doing that, you chose to take the modulus of the complex quantity in order to compare it with the real quantity. But why? You could have chosen, for instance, the real part, or why not, the phase or the imaginary part. The second inconsistency is due to the fact that you are comparing the functions on different sets: $f$ on the real line, $H$ on the imaginary line.



All that said, let's see what's happening to address your concerns in your comment. $f$ is the restriction of $H$ to the real line (precisely, to the intersection of the real line with the domain of $H$). That is, you have the same plot of $f$ if you plot the modulus (or the real part) of $H( \sigma)$, where $\sigma\in\mathbb{R}$, that is, on $s=\sigma+j\omega$, putting $\omega=0$. Please note that the imaginary part of $H(\sigma)$ is identically $0$, that's why it does not matter what you choose to plot between the modulus of $H(\sigma)$ and its real part.



Moreover $H$ is not the only complex function that gives $f$ as a restriction on the real line, but it is the only one that is also analytic with the exception of its singular point. That means among other things that the modulus on the real line ($\vert H(\sigma)\vert=f(x)$) influences the modulus on the entire complex plane ($\vert H(s)\vert$) in a sort of propagation effect. It follows that also the modulus on the imaginary line ($\vert H(j\omega)\vert$) is affected by that on the real line ($\vert H(\sigma)\vert$). Now the modulus of $H$ is
$$\vert H(s)\vert=\frac{10\vert s\vert}{\vert s+10\vert^2}$$ that is, the product of





  • $10$

  • $\vert s\vert=$ the magnitude of the vector $s$, that you can think as an arrow pointing to $s$ from the origin of the complex plane



divided by the product of




  • $\vert s+10\vert=$ the magnitude of the vector $s+10$, that you can think as an arrow pointing to $s$ from $-10$ on the real line.


  • $\vert s+10\vert=$ again the magnitude of the same vector $s+10$



Now you can "see" where your second plot comes from and how its curvatures and slopes are affected by the pole that breaks the first plot and by the zero that makes it cross the $x$-axis.
Inspecting the magnitude of $H$ you have that:




  • when $s=0$, that is, in the origin of the complex plane, the vector $s+10$ is not zero therefore $\vert H(0)\vert=0$. That means that its log goes to $-\infty$.

  • when the vector $s$ starts moving on the imaginary axis, taking magnitude small w.r.t. $10$ (let's say $\vert s\vert\leq 1$), the vector $s+10$ can be considered constant and so its magnitude is $\vert s+10\vert\approx 10$, and then $\vert H(s)\vert\approx \frac{10\times \vert s\vert}{10^2}=\frac{\omega}{10}$, that is, it increases as $\omega$ increases with a slope of $0.1$, and in log scale it increases with a slope of $20$ db/dec. At the limit of the region where our approximations stop to be valid, in $\omega=1$, that is $s=j$, $\vert H(j)\vert\approx 0.1$, whose rms in decibel is $-20$.

  • when the vector $s$ on the imaginary axis starts to have magnitudes too big w.r.t. $10$ (let's say $\vert s \vert\geq 100$) then we can say it is not so much different than the vector $\vert s+10\vert$ and so $H(s)\approx\frac{10s}{s^2}=\frac{10}{s}$ and its magnitude $\vert H(s)\vert\approx\frac{10}{\vert s\vert}$ that, on the imaginary axis, is $\vert H(j\omega)\vert\approx\frac{10}{\omega}$, that is, decreases and its rms value decreases with the constant slope of $-20$ db/dec. In $\omega=100$, at the limit of the region where our approximations are valid, $\vert H(j100)\vert=0.1$ that in rms is $-20$ db.


  • In the region $1<\omega<100$ the two parts of the diagram examined till now smoothly merge, and in particular when $s=j10$ the vector $s$ has magnitude $10$ and the vector $s+10$ has magnitude $10\sqrt{2}$. So $\vert H(j10)\vert=\frac{10\times 10}{2\times 100}=\frac{1}{2}$ which, in decibels, is $-6$ db.



(With this approximating strategy you can draw the phase diagram for $H$)



So in particular you can see why at $\omega=10$, that is, at $s=j10$, the modulus of $H$ does not blow up the way $f$ did: the pole is at $s=-10$, not at $s=j10$; at $s=j10$ only a perturbation of the pole is felt, as the (logarithmic) middle point of the two regions of approximation. To have $\vert H(j\omega)\vert$ diverge at $\omega=10$, $H$ would need a pole at $s=j10$, that is, an expression like
$$H(s)=\frac{1}{s^2+10^2}$$
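The magnitudes worked out above are easy to verify numerically; a short check of the three reference frequencies (my addition):

    import numpy as np

    H = lambda s: 10 * s / (s + 10)**2
    for w in (1.0, 10.0, 100.0):
        print(w, 20 * np.log10(abs(H(1j * w))))  # about -20, -6, -20 db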


sequences and series - proof of $\sum_{n=1}^\infty n \cdot x^n= \frac{x}{(x-1)^2}$




I know that the series $\sum_{n=1}^\infty n \cdot x^n $ converges to $\frac{x}{(x-1)^2}$, but I'm not sure how to show it. I'm pretty sure this has been asked before, but I wasn't able to find anything...


Answer



Let $$S_n=\sum_{k=1}^n kx^k$$



Assuming $|x|<1$,
$$S_n-xS_n=\sum_{k=1}^n x^k-nx^{n+1}=\frac{x(1-x^n)}{1-x}-nx^{n+1}\Rightarrow S_n=\frac{x(1-x^n)}{(1-x)^2}-\frac{nx^{n+1}}{1-x}$$



Now, if $|x|<1$, then $$\limsup_{n\rightarrow \infty}\left|\frac{(n+1)x^{n+1}}{nx^n}\right|=|x|\limsup_{n\rightarrow \infty}\left(1+\frac{1}{n}\right)=|x|<1$$So, by ratio test, the series $\displaystyle \sum_{k=1}^n kx^k$ converges and hence $$\lim_{n\rightarrow \infty}nx^{n+1}=x\lim_{n\rightarrow \infty}nx^{n}=0$$




So, $$S=\sum_{n=1}^{\infty}nx^n=\lim_{n\rightarrow \infty}S_n=\frac{x}{(1-x)^2}$$
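A numerical spot check of the closed form, say at $x=0.3$:

    x = 0.3
    print(sum(n * x**n for n in range(1, 200)), x / (1 - x)**2)  # both ≈ 0.612245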


Friday 21 December 2018

Am I allowed to change induction hypothesis?

I am doing an exercise from my textbook's chapter on induction, and I have been stuck for a long time on the induction-hypothesis part.



The question is the following:




Let $n \in \mathbb N \backslash \{0\}$, use some form of induction to prove that for all such n there exists an odd natural $m$ and a natural $k$ such that $n = 2^{k}m$.



In lecture my prof told us that we have to use $P(k)$ to prove $P(k+1)$ and can't use $P(k+1)$ as a premise. I understand this part, and the following is my approach.




  1. Define the predicate $P(n)$: there exist an odd natural $m$ and a natural $k$ such that $n = 2^{k}m$.


  2. Check $P(1)$, if I choose $k = 0$ and $m = 1$ this holds.


  3. Assume $P(h)$ is True, namely, I can find such $m$ and $k$ so that $h = 2^{k}m$.


  4. Prove $P(h+1)$. I know I have to show two cases for $P(h+1)$, so I assume $h+1$ is odd first; then I can write $h+1 = 2^{k}m + 1$, and get $h = 2^{k}m$, which is true by the I.H. (although I don't think I get this correctly, it just looks weird). Then I assume $h+1$ is even, which implies that $h$ must be odd, so the induction hypothesis must be changed to $h = 2^{k}m + 1$ in this case, so $h+1 = 2^{k}m + 2$; by the I.H. this is true.





I don't know what goes wrong in my proof, but when I stare at it I just feel it is not correct. I need help understanding this question, thanks.

elementary set theory - prove that set of reals numbers and complex numbers are equipotent.

I have to prove that the set of real numbers $\mathbb R$ and the set of complex numbers $\mathbb C$ are equipotent.
I know that sets $A$ and $B$ are equipotent iff there is a one-to-one mapping of $A$ onto $B$.
Could anyone please give me an answer to this?
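One standard route (a sketch, not from the original post): since $\mathbb C=\mathbb R^2$ as a set, it suffices to biject $\mathbb R$ with $\mathbb R^2$. Using binary expansions carefully one gets
$$\mathbb R \sim 2^{\mathbb N},\qquad 2^{\mathbb N}\times 2^{\mathbb N}\sim 2^{\mathbb N}\ \text{(interleave the two sequences)},$$
so $\mathbb C=\mathbb R\times\mathbb R\sim 2^{\mathbb N}\times 2^{\mathbb N}\sim 2^{\mathbb N}\sim\mathbb R$, and composing these equipotences gives the required one-to-one map.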

complex numbers - How to compute $\sqrt{i + 1}$










I'm currently playing with complex numbers and I realized that I don't understand how to compute $\sqrt{i + 1}$. My math program, sage, returns



$$1.09868411346781 + 0.455089860562227*i$$




How does sage compute that number? I don't even see how one could rewrite $\sqrt{i + 1}$ as a number of the form $a+bi$.


Answer



$$i + 1 = \sqrt2 \left ( \frac{1}{\sqrt 2} + \frac{1}{\sqrt 2}\, i\right )
= \sqrt 2 \left( \cos \left( \frac{\pi}{4} \right) + i \sin \left( \frac{\pi}{4} \right) \right )
= \sqrt 2\, e^{i \pi / 4}$$



$\sqrt{i +1} =\left( 2\right)^{1 \over 4}e^{i \pi \over 8} = \left( 2\right)^{1 \over 4}\left( \cos \left( \pi \over 8 \right) + i \sin \left( \pi \over 8 \right)\right)$



Well, this is how Wolfram Alpha calculates it.

The other root would be $\left( 2\right)^{1 \over 4}\left( \cos \left( 9\pi \over 8 \right) + i \sin \left(9 \pi \over 8 \right)\right)$
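Python's cmath reproduces both sage's decimal value and the polar form above:

    import cmath

    print(cmath.sqrt(1 + 1j))                      # (1.0986841...+0.4550898...j)
    print(2**0.25 * cmath.exp(1j * cmath.pi / 8))  # the same number, via the polar form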


functions - On the bijection between the naturals and a countable number of countable sets.



I am assuming we are utilizing the axiom of choice, because I read in the suggested questions that might already answer mine that it's needed for the following bijection to work.



Suppose I am given a countable number of sets $A_1, A_2 \dots$ each with a countable number of elements, I will use the notation that given a set $A$ the first element will be represented as $A(1)$ and so forth.



I was shown a bijection $f$ that puts these in correspondence with the natural numbers as follows:



$$f(1) = A_1(1), \quad f(2) = A_1(2), \quad f(3) = A_2(1), \quad f(4) = A_1(3), \quad f(5) = A_2(2), \quad f(6) = A_3(1) \quad \dots $$




And I was told that the explicit bijection is $$f(\frac{n^2+2nx + x^2 -3x - n +2}{2}) = A_n(x)$$



Where $x$ takes values in the naturals.



How is the formula for this bijection derived?


Answer



The pattern being followed is shown here. As Gae. S. indicates, this works off of a particular map from $\Bbb N \to \Bbb N \times \Bbb N$ that follows this path:



countable times countable is countable




(taken from http://www.cut-the-knot.org/do_you_know/countRatsX.shtml)



Someone apparently has taken the time to calculate that this map is the inverse of the function $$\rho\ :\ \Bbb N \times \Bbb N \to \Bbb N\ :\ (n, x) \mapsto \frac{n^2+2nx + x^2 -3x - n +2}{2}$$
(I've never seen anyone bother to figure that out, before. Usually, they just take it as obvious from the path that such a map exists, which is all that you need to prove the theorem.)



So far, the axiom of choice is not involved. The only place it comes up is this: All you know about the $A_i$ is that they are countable. But in proclaiming that there is a "least element" of $A_1$ to be called $A_1(1)$, etc. you are actually choosing for each $i$, a particular mapping from some initial segment of $\Bbb N$ to $A_i$. This requires using the Axiom of Choice (or at least, of Countable Choice, which is weaker). Once you have made these choices, then you can define a map from $$g\ :\ \Bbb N \times \Bbb N \to \bigcup_{i=1}^\infty A_i\ :\ (n, x) \mapsto A_n(x)$$
Then your $f = g\circ \rho^{-1}$.



But there are two problems with calling this a bijection, or even fully defined. "Countable" does not imply infinite. Finite sets are also countable. ("Denumerable" is the condition of being both countable and infinite.) If the number of sets $A_i$ is finite, or if any of the $A_i$ are themselves finite, then $f$ is not a bijection between $\Bbb N$ and $\bigcup_{i=1}^\infty A_i$. In fact, there are some $n$ for which $f(n)$ is not even defined. $f$ will only be a bijection when the collection $\{A_i\}$ is infinite, and each $A_i$ is also infinite. Or at least, if it doesn't run afoul of the second problem.




The second problem is that we don't know if $A_i$ are disjoint. It could even be that for all $i > 1, A_i \subseteq A_1$. Even if everything is denumerable, if there is even one element common to two of the sets, say $A_i(x) = A_j(y)$, then $f$ will not be an injection, as $$f(\frac{i^2+2ix + x^2 -3x - i + 2}{2}) = f(\frac{j^2+2jy + y^2 -3y - j + 2}{2})$$



Finally, to answer the exact question you asked: if you look at the figure above, the indexes for the first column are $1,3,6,10,15,...$. These are the triangular numbers. The $n$th such number is the sum from $1$ to $n$, and is well-known to be $\frac{n(n+1)}2$. To get the rest of the indices along a diagonal, subtract the column number less $1$ from the triangular number in the first column. For row $n$, column $x$, the relevant triangular number is the one for $n + x - 1$, so the index is $\frac{(n+x)(n+x-1)}2 - (x - 1)$, which simplifies to your formula.
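As a quick check that $\rho$ really enumerates the diagonals in the order pictured:

    def rho(n, x):
        return (n*n + 2*n*x + x*x - 3*x - n + 2) // 2

    pairs = sorted((rho(n, x), (n, x)) for n in range(1, 5) for x in range(1, 5))
    print(pairs[:6])  # [(1, (1, 1)), (2, (1, 2)), (3, (2, 1)), (4, (1, 3)), (5, (2, 2)), (6, (3, 1))]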


Thursday 20 December 2018

elementary number theory - Proof that a Combination is an integer



By its definition, a combination $\binom{n}{k}$ is the number of distinct subsets of size $k$ of a set of $n$ elements.



This is clearly an integer, however I was curious as to why the expression




$$\frac{n!}{k!(n-k)!}$$ always evaluates to an integer.



So far I figured:



$n!$ is clearly divisible by $k!$ and by $(n-k)!$ individually, but I could not seem to make the jump to proving that $n!$ is divisible by their product.


Answer



See my post here for a simple purely arithmetical proof that every binomial coefficient is an integer. The proof shows how to rewrite any binomial coefficient fraction as a product of fractions whose denominators are all coprime to any given prime $\rm\:p.\,$ This implies that no primes divide the denominator (when written in lowest terms), therefore the fraction is an integer.



The key property that lies at the heart of this proof is that, among all products of $\rm\, n\,$ consecutive integers, $\rm\ n!\ $ has the least possible power of $\rm\,p\,$ dividing it - for every prime $\rm\,p.\,$ Thus $\rm\ n!\ $ divides every product of $\rm\:n\:$ consecutive integers, since it has a smaller power of every prime divisor. Therefore
$$\rm\displaystyle\quad\quad {m \choose n}\ =\ \frac{m!/(m-n)!}{n!}\ =\ \frac{m\:(m-1)\:\cdots\:(m-n+1)}{\!\!n\:(n-1)\ \cdots\:\phantom{m-n}1\phantom{+1}}\ \in\ \mathbb Z$$
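The key property is easy to test directly: the product of any $n$ consecutive integers, divided by $n!$, is an integer, namely a binomial coefficient. For example:

    from math import comb, factorial

    m, n = 12, 5
    prod = 1
    for j in range(m - n + 1, m + 1):  # the n consecutive integers ending at m
        prod *= j
    print(prod % factorial(n) == 0, prod // factorial(n) == comb(m, n))  # True True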



calculus - Computing the integral of $e^{-x^2}$ over the entire line







At lunch with a math friend years ago, he showed me an integral whose solution was, he said, so beautiful that it compelled him to become a professional mathematician. I scribbled the integral down on a napkin, but for the life of me I cannot remember the trick he found so compelling.




And these things are hard to Google...



$$ \int_{-\infty}^{\infty} e^{-x^2} dx $$
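For the record, the value is $\sqrt{\pi}$, and the celebrated trick is to square the integral and pass to polar coordinates. A quick numerical check:

    import numpy as np
    from scipy.integrate import quad

    val, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
    print(val, np.sqrt(np.pi))  # both ≈ 1.7724538509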

elementary set theory - Are the following sets isomorphic or have the same order type?





Are the following sets isomorphic or have the same order type ?




  1. $(\mathbb R, \le) ,\ (\mathbb R,\ge)$

  2. $(\mathbb Q, \le), \ (\mathbb R, \le) $

  3. $(\mathbb N,\ge ), \ (\mathbb N,\le)$

  4. $\omega+\omega, \ \omega\cdot\omega$





So in order to show they're isomorphic I need to find an order-preserving bijection.




  1. They are, there is no maximal/minimal element, both have the same cardinality. the function is $f(x)=x $.


  2. No, they have different cardinality (is that enough?).


  3. I think they aren't since one have a minimal element while the other does not.


  4. Both have the same cardinality but the elements are different: one is {1,2,3...1',2'...} the other is {(0,1),(0,2)...} but I think there could be an injection.



Answer





  1. You need $f(x)=-x$ instead.


  2. Yes. So there cannot be any bijection, much less isomorphism.


  3. You're correct.


  4. A simpler argument appears to be $\omega+\omega=\omega\cdot2<\omega\cdot\omega$.



Wednesday 19 December 2018

measure theory - Showing that $\nu \ll \mu$ implies $\forall \epsilon > 0$, $\exists \delta > 0$ s.t. $\mu(A) < \delta \implies \nu(A) < \epsilon$



I am stuck on what I think may be the very last line of the proof I am seeking.



Let $(X, \mathcal{B})$ be a measurable space which has associated with it the finite measures $\mu$ and $\nu$ s.t. $\nu \ll \mu$. I aim to show that $\forall \epsilon > 0$, $\exists \delta > 0$ s.t. $\forall A \in \mathcal{B}$,



$$\mu(A) < \delta \implies \nu(A) < \epsilon$$





  1. Fix $\epsilon > 0$.


  2. For all $n \in \mathbb{N}$, let $\delta_n = \frac{1}{n^2}$.


  3. For all $n \in \mathbb{N}$, let $A_n \in \mathcal{B}$ s.t. if $\exists E \in \mathcal{B}$ s.t. $\mu(E) < \delta_n$ and $\nu(E) > \epsilon$, then set $A_n = E$. Otherwise, set $A_n = \emptyset \in \mathcal{B}$.


  4. Suppose for sake of contradiction that $|\{A_n\}| = \infty$, so that no matter the $\delta > 0$, we could find a $\delta_n = \frac{1}{n^2} < \delta$ which has associated with it a measurable $A_n \ne \emptyset$ with $\mu(A_n) < \delta_n < \delta$ and $\nu(A_n) > \epsilon$.


  5. Now if we let $\underset{n \rightarrow \infty}{\text{limsup}}$ $A_n = S$, we have (from a prior problem) that $\mu(S) = 0$ since $\mu$ is a finite measure and $\sum_{n=1}^\infty \mu(A_n) \le \sum_{n=1}^\infty \delta_n = \sum_{n=1}^\infty \frac{1}{n^2} < \infty$. Since $\nu \ll \mu$ we therefore have $\nu(S) = 0$ as well.


  6. Yet $\nu(S) = \nu(\bigcap_{n=1}^\infty \bigcup_{m=n}^\infty A_m) \ge \epsilon > 0$ since...





and it's here where I'm stuck in the proof.


Answer



You are almost there. Notice that since $\nu(S) = 0$,
$$
0 = \nu(S) = \nu \left( \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m \right) = \lim_{n \to \infty} \nu \left( \bigcup_{m=n}^{\infty} A_m \right).
$$
Therefore, there exists $n$ such that
$$
\nu(A_n) \le \nu \left( \bigcup_{m=n}^{\infty} A_m \right) < \varepsilon,
$$

a contradiction of the choice of $A_n$.



I must say though that the right way to formulate this proof would be this : Suppose by sake of contradiction that the result is false, i.e. that there exists an $\varepsilon > 0$ such that for all $\delta > 0$, there is $C_{\delta} \in \mathcal B$ with
$$
\mu(C_{\delta}) < \delta, \quad \nu(C_{\delta}) \ge \varepsilon.
$$
Then you can choose $\delta_n = \frac 1{n^2}$ and $A_n = C_{\delta_n}$ and then continue by following our steps.



Hope that helps,


Modular Arithmetic and Congruences



I understand that $a \equiv b \pmod n$ means when you divide each $a$ and $b$ by $n$ you get the same remainder. But why do people say: "$a$ divided by $n$ gives you remainder $b$"?




They say that in the first 30 seconds of this video lecture http://www.youtube.com/watch?v=QXIlkq06Ct0&feature=youtube_gdata_player



Example



$12 \equiv 17 \pmod 5$



$12$ divided by $5$ has a remainder of $2$.



$17$ divided by $5$ has a remainder of $2$.




Neither of these matches the statement "$a$ divided by $n$ gives you remainder $b$," so why do people sometimes say this?


Answer



$12$ is the same as $2 \bmod{5}$, and
$17$ is the same as $2 \bmod{5}$.



Let's say you have $a\equiv b\bmod{n}$. Then the numbers you are working with are basically from the set $\{0,1,2,3,\ldots,n-1\}$: the number $n=0 \pmod n$, $n+1=1 \pmod n$, $n+2=2 \pmod n$, etc.



If two numbers $a$ and $b$ are related by $a\equiv b\bmod{n}$, then $(a-b)=nc$ for some $c\in \mathbb{Z}$; that is, $(a-b)$ is a multiple of $n$. So in your case above, $2\equiv2+5\equiv2+10\equiv2+15\equiv \cdots\bmod{5}$. So $2$ is the same as $12$, which is the same as $17 \bmod{5}$.



When you have $a\equiv b \bmod{n}$, with $a>b$ and $b\in\{0,1,\cdots ,n-1 \}$, then in fact the remainder of $a$ divided by $n$ is $b$; this is the case in the video. When this is not the case, it causes confusion, as in your example. It would make sense to write $17\equiv 2 \bmod{5}$, however.
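In code, the congruence is simply equality of remainders:

    a, b, n = 17, 12, 5
    print(a % n == b % n)    # True: both leave remainder 2 on division by 5
    print((a - b) % n == 0)  # True: equivalently, n divides a - b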



Tuesday 18 December 2018

calculus - When and why is $x^x$ undefined?




I was learning, in logarithmic differentiation, how to differentiate a function like $f(x)=x^x$, so naturally I was curious about how the graph of the actual function looks. I graphed it on this app and to my surprise the entire section on $(-\infty,0)$ is undefined. I do understand why we could have problems and abnormalities at $x=0$, but what about $x=-10$? Shouldn't that just be $(-10)^{-10}$, which from my understanding is a perfectly valid real number? So the question really begs itself: why is the function undefined for $x<0$?


Answer



Indeed, as you mentioned, something like $(-10)^{-10}$ is just $1/(-10)^{10}$.



The problem is that anywhere we have rationals we will be very close to a value of $x$ which is of the form $-\frac{1}{2p}$.



In other words, note that $(-1/2)^{-1/2}$ is equal to $\frac{1}{(-1/2)^{1/2}}$ and we cannot take the square root of a negative number.



Likewise $$(-1/4)^{-1/4}=\frac{1}{(-1/4)^{1/4}}=\frac{1}{((-1/4)^{1/2})^{1/2}}$$ and again we have the square root of a negative number.




It turns out we have infinitely many of these cases along the negative real number line. Even though we technically can let the domain have some points, such as $-10$ or $-1$, it is easier if we just say the domain is $(0,\infty)$.
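Python's exponentiation mirrors this behavior: negative bases are fine at integer exponents, while fractional exponents force a complex result.

    print((-10.0) ** (-10))  # 1e-10, a perfectly good real number
    print((-0.5) ** (-0.5))  # a complex number: there is no real value of x^x at x = -1/2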


Monday 17 December 2018

An interesting geometry problem about incenter and ellipses.

Let $I$ be the incenter of a triangle $ABC$. A point $X$ satisfies the conditions $XA+XB=IA+IB$, $XA+XC=IA+IC$. The points $Y,Z$ are defined similarly. Show that the lines $AX,BY,CZ$ are concurrent or parallel to each other.



My friend discovered this problem when he was drawing random ellipses for fun. But we have no idea how to solve such a problem because we literally know nothing about ellipses (except the definition). So I can't post where I'm stuck. We're just curious to see the solution, whether or not it's elementary.




We do not know what kind of tags we should add because we do not know what methods are to be used. Please edit the tagging.



Figure

number theory - Estimate how large $\lfloor \log{n!}\rfloor$ is






Is there a simple method to find or estimate how large $$\lfloor \lg{n!}\rfloor$$ is? I'd like to find (or estimate) how many digits $2017!^{2017}$ has, or how big that number is.




I tried some small numbers and the general form $$n!^{\,n}=\overline{a_k...a_4a_3a_2a_1}\,, \qquad k=?$$ so
I took the logarithm of $n!^{\,n} \mapsto n\log(n!)$.



It is easy when $n$ is a small number, but what can we do when we go to a large number?
My question can be translated as $$n\sum_{i=1}^{n}\log(i)= ?$$


Answer




Stirling's approximation says that
$$
n!\approx \sqrt{2\pi n}\left(\frac ne\right)^n
$$
Taking logarithms on both sides should be easy enough.
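Following that suggestion, the digit count of $2017!^{2017}$ is a one-liner via $\ln\Gamma(n+1)=\ln n!$ (math.lgamma), since a positive integer $N$ has $\lfloor\log_{10}N\rfloor+1$ digits:

    from math import lgamma, log

    log10_fact = lgamma(2018) / log(10)  # log10(2017!) via ln Gamma(2018) = ln(2017!)
    digits = int(2017 * log10_fact) + 1
    print(digits)                        # about 1.17e7 digits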


elementary number theory - calculating $a^b \bmod c$




What is the fastest way (general method) to calculate the quantity $a^b \!\mod c$? For example $a=2205$, $b=23$, $c=4891$.



Answer



Let's assume that a,b,c referred to here are positive integers, as in your example.



For a specific exponent b, there may be a faster (shorter) method of computing a^b than binary exponentiation. Knuth has a discussion of the phenomenon in Art of Computer Programming Vol. 2 (Semi-numerical Algorithms), Sec. 4.6.3 and the index term "addition chains". He cites b=15 as the smallest case where binary exponentiation is not optimal, in that it requires six multiplications, but a^3 can be computed in two multiplications, and then (a^3)^5 in three more for a total of five multiplications.



For the specific exponent b=23 the parsimonious addition chain involves the exponents (above 1) 2,3,5,10,13, at which point a^23 = (a^10)*(a^13), for a total of six multiplications. Binary exponentiation for b=23 requires seven multiplications.



Another approach that can produce faster results when b is large (not in your example) depends on knowing something about the base a and modulus c. Recall from Euler's generalization of Fermat's Little Thm. that if a,c are coprime, then a^d = 1 mod c for d the Euler phi function of c (the number of positive integers less than c and coprime to it). In particular if c is a prime, then by Fermat's Little Thm. either c divides a and a^b = 0 mod c or else a^b = a^e mod c where e = b mod (c-1) since phi(c) = c-1 for a prime c.



If the base a and modulus c are not coprime, then it might be advantageous to factor a into its greatest common factor with c and its largest factor that is coprime to c.




Also it might be advantageous if c is not prime to factor it into prime powers and do separate exponentiations for each such factor, piecing them back together via the Chinese Remainder Thm. In your example c = 4891 = 67*73, so you might compute a^b mod 67 and a^b mod 73 and combine those results to get a^b mod c. This is especially helpful if you are limited in the precision of integer arithmetic you can do.
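In practice, binary (square-and-multiply) exponentiation with reduction at every step is the workhorse; Python exposes it as the three-argument pow. A minimal sketch:

    a, b, c = 2205, 23, 4891
    print(pow(a, b, c))  # built-in fast modular exponentiation

    result, base, e = 1, a % c, b
    while e:             # square-and-multiply, reducing mod c at each step
        if e & 1:
            result = result * base % c
        base = base * base % c
        e >>= 1
    print(result)        # same value as above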


probability - Let $F_X(x):=P(Xleq x)$ a distribution function of a random variable $X$. Prove that $F_X$ is right-continuous.

Let $F_X(x):=P(X\leq x)$ a distribution function of a random variable $X$.



Prove that $F_X$ is right-continuous.



I need to show that for every non-increasing sequence $(x_n)$ with $\lim x_n=x$ I will get:




$$\lim_{n\to\infty}F_X(x_n)=F_X(x)$$



How do I show this? Any ideas?
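One standard route (a hint, not from the original post): for $x_n \downarrow x$ the events $\{X\le x_n\}$ decrease to $\{X\le x\}$, so continuity from above of probability measures gives
$$\{X\le x\}=\bigcap_{n=1}^{\infty}\{X\le x_n\}\quad\Longrightarrow\quad F_X(x)=P\left(\bigcap_{n=1}^{\infty}\{X\le x_n\}\right)=\lim_{n\to\infty}F_X(x_n)\,.$$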

Sunday 16 December 2018

abstract algebra - Is there an algebraic-geometric solution to the problem of the Leibnizian formalism?

The precise question appears at the end of this entry.



With all the recent advances in understanding infinitesimals, we still don't fully understand why Leibniz's definition of $\frac{dy}{dx}$ as literally a ratio works the way it does and seems to explain numerous facts including chain rule.




Note that Robinson modified Leibniz's approach as follows. Suppose we have a function $y=f(x)$. Let $\Delta x$ be an infinitesimal hyperreal $x$-increment. Consider the corresponding $y$-increment $\Delta y=f(x+\Delta x)-f(x)$. The ratio of hyperreals $\frac{\Delta y}{\Delta x}$ is not quite the derivative. Rather, we must round off the ratio to the nearest real number (its standard part) and so we set $f'(x)=\text{st}\big(\frac{\Delta y}{\Delta x}\big)$. To be consistent with the traditional Leibnizian notation one then defines new variables $dx=\Delta x$ and $dy=f'(x)dx$ so as to get $f'(x)=\frac{dy}{dx}$ but of course here $dy$ is not the $y$-increment corresponding to the $x$-increment. Thus the Leibnizian notation is not made fully operational.



Leibniz himself handled the problem (of which he was certainly aware, contrary to Bishop Berkeley's allegations) by explaining that he was working with a more general relation of equality "up to" negligible terms, in a suitable sense to be determined. Thus if $y=x^2$ then the equality sign in $\frac{dy}{dx}=2x$ does not mean, to Leibniz, exactly what we think it means.



Another approach to $dy=f'(x)dx$ is smooth infinitesimal analysis where infinitesimals are nilsquare so you get equality on the nose though you can't form the ratio. On the other hand, Leibniz worked with arbitrary nonzero orders of infinitesimals $dx^n$ so this doesn't fully capture the Leibnizian framework either.




Question: In an algebraic-geometric or other algebraic or analytic context (with suitable limitations on $f$), is there a way of assigning a precise sense to the Leibnizian generalized equality using global considerations?





Note. Related material can be found at this MO post.

calculus - How can I deduce that $\lim\limits_{x\to0} \frac{\ln(1+x)}x=1$ without Taylor series or L'Hospital's rule?

How can I calculate this limit without Taylor series and L'Hospital's rule?
$$\lim _{x\to0} \frac{\ln(1+x)}x=1$$

Taylor series expansion of base 2 logarithms

Sorry for the noob question, but I've been hitting my head against the wall on this for a while.



I am looking for a Taylor series expansion of a logarithm other than the natural logarithm $\ln(x)$. It seems that every piece of literature I've been going through treats solely the natural logarithm and not logarithms in other bases.



In particular, I would like to know the Taylor series corresponding to the binary logarithm $\log_2(x)$. For instance, how would I go about calculating $\log_2(3)$ using the Taylor series?



Thanks!
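One workable route (a sketch, not a definitive answer): write $\log_2 x=\ln x/\ln 2$ and use the fast-converging series $\ln x = 2\operatorname{artanh} z = 2\sum_{k\text{ odd}} z^k/k$ with $z=\frac{x-1}{x+1}$:

    from math import log2

    def ln_series(x, terms=30):
        # ln(x) = 2 * sum over odd k of z^k / k, with z = (x-1)/(x+1), valid for x > 0
        z = (x - 1) / (x + 1)
        return 2 * sum(z**k / k for k in range(1, 2 * terms, 2))

    print(ln_series(3) / ln_series(2), log2(3))  # both ≈ 1.5849625007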

Saturday 15 December 2018

sequences and series - Why does $1+2+\dots+2003=\dfrac{2004\cdot2003}2$?





Why does $1+2+\dots+2003=\dfrac{2004\cdot2003}2$?




Sorry if this is missing context; not really much to add...


Answer



$$\begin{array}{ccc}

S&=&1&+&2&+&3&+&\ldots&+&2001&+&2002&+&2003\\
S&=&2003&+&2002&+&2001&+&\ldots&+&3&+&2&+&1\\ \hline
2S&=&2004&+&2004&+&2004&+&\ldots&+&2004&+&2004&+&2004
\end{array}$$



There are $2003$ columns, so $2S=2003\cdot2004$, and therefore $S=\dfrac{2003\cdot2004}2$.
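A one-line numerical check:

    print(sum(range(1, 2004)), 2003 * 2004 // 2)  # both 2007006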


sequences and series - Show that $\sum_{n=0}^{\infty} \dfrac{n}{2^{n+1}} = 1$



My Work



I felt the best way to go about this problem was to compare it to a well known MacLaurin series. I noticed it resembled the reciprocal of the absolute value of the MacLaurin series of $\ln(1+x)$ where $x = \dfrac12$ but I had the problem of the $n$ in the numerator still. None of the well known series have an $n$ in the numerator.



The Well Known MacLaurin Series







My Question



Can someone give me a hint as to which well known MacLaurin series this resembles? Once I have that I know the solution!


Answer



If you want to directly proceed via MacLaurin series, try the MacLaurin series of $\dfrac{x^2}{(1-x)^2}$, which unfortunately is not there in your list. Else, proceed as follows:
Let $S_m = \displaystyle \sum_{n=1}^m \dfrac{n}{2^{n+1}}$.
\begin{align}

S_m & = \dfrac14 + \dfrac28 + \dfrac3{16} + \dfrac4{32} + \cdots + \dfrac{m-1}{2^m} + \dfrac{m}{2^{m+1}} &\spadesuit\\
\dfrac{S_m}2 & = \,\,\,\,\,\,\,\,\,\,\,\dfrac18 + \dfrac2{16} + \dfrac3{32} + \cdots + \dfrac{m-2}{2^m} + \dfrac{m-1}{2^{m+1}} + \dfrac{m}{2^{m+2}} & \diamondsuit
\end{align}
$\spadesuit-\diamondsuit$ now gives us
$$\dfrac{S_m}2 = \dfrac14 + \dfrac18 + \cdots + \dfrac1{2^{m+1}} - \dfrac{m}{2^{m+2}}$$
I trust you can conclude from this making use of the first MacLaurin series.
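A quick numerical confirmation:

    print(sum(n / 2**(n + 1) for n in range(60)))  # ≈ 1.0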


trigonometry - Proof of an equality involving cosine $\sqrt{2 + \sqrt{2 + \cdots + \sqrt{2 + \sqrt{2}}}} = 2\cos(\pi/2^{n+1})$



So I stumbled upon this equation/formula, and I have no idea how to prove it. I don't know how I should approach it:
$$
\sqrt{2 + \sqrt{2 + \cdots + \sqrt{2 + \sqrt{2}}}}
=
2\cos\left(\frac{\pi}{2^{n + 1}}\right)
$$



where $n\in\mathbb N$ and the square root sign appears $n$-times.



I thought about using sequences and limits, to express the LHS as a recurrence relation but I didn't get anywhere.



edit: Solved, thanks for your answers and comments.


Answer




Hint:



Use induction and the half-angle formula for cosine.



Solution:



For $n=1$, the claim is true, since $\cos(\pi/4)=\sqrt{2}/2$. By the half-angle formula $$2\cos(x/2)=\sqrt{2+2\cos(x)}$$
Therefore
$$\sqrt{2+\sqrt{2+\cdots+\sqrt{2+\sqrt{2}}}}=\sqrt{2+2\cos\left(\frac{\pi}{2^n}\right)}=2\cos\left(\frac{\pi}{2^{n+1}}\right)$$
where in the left square root expressions there are $n$ square roots and in the first equality we have used the induction hypothesis that the claim holds for $n-1$.
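A quick numerical check of the identity:

    from math import cos, pi, sqrt

    n = 6
    x = 0.0
    for _ in range(n):  # build the n-fold nested radical from the inside out
        x = sqrt(2 + x)
    print(x, 2 * cos(pi / 2**(n + 1)))  # equal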



Friday 14 December 2018

galois theory - Find the elements of the extension field using primitive polynomial over $GF(4)$



Let $p(z) = z^2 + z + 2$ be a primitive polynomial. I want to construct the elements of the extension field $GF(4^2)= GF(16)$.



Since $p(z)$ is a primitive polynomial, it should generate the elements of the extension field.




Let $\alpha$ be a zero of $p$; then:




  • $\alpha^2 + \alpha + 2 \rightarrow \alpha^2 = -\alpha-2 \rightarrow \alpha^2 = 3\alpha + 2$.



Using $\alpha$ :





  • $\alpha^1 = \alpha$

  • $\alpha^2 = 3\alpha+2$ and according to the book, $\alpha^2 = \alpha+2$ but $-1 \mod 4 = 3$.



Since the base is $4$, how can it be that the correct value is $\alpha^2 = \alpha + 2$?



I have mostly solved problems where the base is a prime number, and I have used the same procedure here.


Answer



You seem confused about what $GF(4)$ is: it's not $\mathbb{Z}/(4)$ (which is not even a field). You can represent $GF(4)$ as an extension of $GF(2)=\mathbb{Z}/(2)$ by an element $\beta$ such that $\beta^2=\beta+1$. You can identify $GF(4)$ with the set $\{0,1,2,3\}$ by mapping $\beta$ to $2$ and $\beta+1$ to $3$, which appears to be what the book you refer to has in mind, but this does not mean you are working mod $4$ (the addition and multiplication operations are not ordinary mod $4$ addition and multiplication on $\{0,1,2,3\}$).




In particular, $1+1=0$ in $GF(4)$ and so $\alpha+2$ and $-\alpha-2$ are the same thing, and so you have $\alpha^2=\alpha+2$.


sequences and series - Sum $\sum_{x=0}^{\infty} \frac{x}{2^x}$




Calculate $\sum\limits_{x=0}^{\infty} \dfrac{x}{2^x}$



So, this series converges by ratio test. How do I find the sum? Any hints?



Answer



As a first step, let us prove that



$$f(r) := \sum_{n=0}^\infty r^n = \frac{1}{1-r}$$



if $r \in (-1,1)$. This is the geometric series. If you haven't seen this proven before, here's a proof. Define



$$S_N = \sum_{n=0}^N r^n.$$



Then




$$r S_N = \sum_{n=0}^N r^{n+1} = \sum_{n=1}^{N+1} r^n = S_N - 1 + r^{N+1}.$$



Solve this equation for $S_N$, obtaining



$$S_N = \frac{1-r^{N+1}}{1-r}$$



and send $N \to \infty$ to conclude.



The sum above converges absolutely, so we can differentiate term by term. Doing so we get




$$f'(r) = \sum_{n=0}^\infty n r^{n-1} = \frac{1}{(1-r)^2}.$$



(Precisely speaking, the sum in the middle is ill-defined at $r=0$, in that it has the form $0/0$. However, $f'(0)=1$ still holds. This doesn't matter for this problem, but it should be noted regardless.) Now multiply by $r$ to change it into your form:



$$\sum_{n=0}^\infty n r^n = \frac{r}{(1-r)^2}.$$



Now substitute $r=1/2$.
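If you want to see the algebra confirmed numerically (my addition, not part of the original answer), a couple of lines of Python compare the partial sums against $\frac{r}{(1-r)^2}$ at $r=1/2$:

```python
# Partial sums of sum n*r^n vs. the closed form r/(1-r)^2 at r = 1/2
r = 0.5
print(sum(n * r ** n for n in range(60)), r / (1 - r) ** 2)  # both ~2.0
```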


Thursday 13 December 2018

real analysis - Is there a bijection from [0,1] to R?



I'm looking for a bijection from the closed interval [0,1] to the real line. I have already thought of $\tan(x-\frac{\pi}{2})$ and $-\cot\pi x$, but these functions aren't defined on 0 and 1.




Does anyone know how to find such a function and/or if it even exists?



Thanks in advance!


Answer



Use Did's method here to construct a bijection $[0,1] \to (0,1)$. Play around with $\tan$ for a bijection $(0,1) \to \mathbb{R}$.



Note that no such bijection can be continuous, because $[0,1]$ is compact and $\mathbb{R}$ is not.
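For concreteness, here is a sketch (my addition, using one standard Hilbert-hotel style shift, not necessarily the construction behind the link above), restricted to rational inputs so the case analysis is exact:

```python
from fractions import Fraction
import math

def to_open_interval(x: Fraction) -> Fraction:
    """Bijection [0,1] -> (0,1): shift the sequence 0, 1, 1/2, 1/3, ...
    one step along 1/2, 1/3, 1/4, ...; every other point stays fixed."""
    if x == 0:
        return Fraction(1, 2)
    if x == 1:
        return Fraction(1, 3)
    if x.numerator == 1 and x.denominator >= 2:  # x = 1/n with n >= 2
        return Fraction(1, x.denominator + 2)
    return x

def bijection(x: Fraction) -> float:
    # Compose with the homeomorphism (0,1) -> R, t -> tan(pi*(t - 1/2))
    return math.tan(math.pi * (float(to_open_interval(x)) - 0.5))

print(bijection(Fraction(0)), bijection(Fraction(1)), bijection(Fraction(3, 4)))
```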


Simplifying the infinite series $\sum_{n = 1}^{\infty} \left(\frac{1}{2}\right)^{3n}$

Does anyone know of a way to simplify



$$

\sum_{n = 1}^{\infty} \left(\frac{1}{2}\right)^{3n}
$$



to a number?

combinatorics - Another Hockey Stick Identity



I know this question has been asked before and has been answered here and here.




I have a slightly different formulation of the Hockey Stick Identity and would like some help with a combinatorial argument to prove it. First I have this statement to prove:
$$
\sum_{i=0}^r\binom{n+i-1}{i}=\binom{n+r}{r}.
$$
I already have an algebraic solution here using the Pascal Identity:
$$
\begin{align*}
\binom{n+r}{r}&=\binom{n+r-1}{r}+\binom{n+r-1}{r-1}\\
&=\binom{n+r-1}{r}+\left[\binom{n+(r-1)-1}{(r-1)}+\binom{n+(r-1)-1}{r-2}\right]\\

&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{(r-1)}+\left[\binom{n+(r-2)-1}{r-2}+\binom{n+(r-2)-1}{(r-2)-1}\right]\\
&\,\,\,\vdots\\
&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{(r-1)}+\binom{n+(r-2)-1}{r-2}+\binom{n+(r-3)-1}{r-3}+\cdots+\left[\binom{n+1-1}{1}+\binom{n+1-1}{0}\right]\\
&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{(r-1)}+\binom{n+(r-2)-1}{r-2}+\binom{n+(r-3)-1}{r-3}+\cdots+\binom{n+1-1}{1}+\binom{n-1}{0}\\
&=\sum_{i=0}^r\binom{n+i-1}{i}.
\end{align*}
$$



I have read both combinatorial proofs in the referenced answers above, but I cannot figure out how to alter the combinatorial arguments to suit my formulation of the Hockey Stick Identity. Basically, this formulation gives the "other" hockey stick. Any ideas out there?


Answer




Note that $\binom{n+r}{r}=\binom{n+r}{n}$ is the number of subsets of $\{1,2,\ldots,n+r\}$ of size $n$. On the other hand, for $i=0,1,2,\ldots,r$, $\binom{n+i-1}{i}=\binom{n+i-1}{n-1}$ is the number of subsets of $\{1,2,\ldots,n+r\}$ of size $n$ whose largest element is $n+i$.
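A quick computational spot-check of the identity (my addition), using Python's `math.comb`:

```python
from math import comb

# Check sum_{i=0}^{r} C(n+i-1, i) == C(n+r, r) for a grid of small n, r
for n in range(1, 8):
    for r in range(8):
        assert sum(comb(n + i - 1, i) for i in range(r + 1)) == comb(n + r, r)
print("identity verified for all tested n, r")
```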


Wednesday 12 December 2018

combinatorics - Notation for writing multinomial coefficient as sum of smaller multinomial coefficients



This question is an attempt to extend the Pascal triangle's hockey stick identity to multinomial coefficients as asked in question Hockey-Stick Theorem for Multinomial Coefficients.



Consider the following recursive relation:



$$\binom{n_1+n_2+\cdots+n_t}{n_1,n_2,\cdots,n_t}=\sum_{\text{all nonzero $n_j$ except the last}}\binom{n_1+n_2+\cdots+n_t-1}{n_1,\cdots,n_j-1,\cdots,n_t}+\underbrace{\binom{n_1+n_2+\cdots+n_t-1}{n_1,n_2,\cdots,n_t-1}}_{\text{$n_t$ being the last nonzero entry}}$$



where
$$\binom{n_1+n_2+\cdots+n_t}{n_1,n_2,\cdots,n_t}=\frac{(n_1+n_2+\cdots+n_t)!}{n_1! n_2! \cdots n_t!} $$
Example:
\begin{eqnarray}
\binom{6}{3,1,2}&=&\binom{5}{2,1,2}+\binom{5}{3,0,2}+\binom{5}{3,1,1}\\
&=&\binom{5}{2,1,2}+\binom{5}{3,0,2}+\left\{ \binom{4}{2,1,1}+\binom{4}{3,0,1}+\binom{4}{3,1,0} \right\}\\
&=&\binom{5}{2,1,2}+\binom{5}{3,0,2}+\binom{4}{2,1,1}+\binom{4}{3,0,1}+
\left\{\binom{3}{2,1,0}+\binom{3}{3,0,0} \right\}\\

&=&\binom{5}{2,1,2}+\binom{5}{3,0,2}+\binom{4}{2,1,1}+\binom{4}{3,0,1}+
\binom{3}{2,1,0}+\left\{\binom{2}{2,0,0} \right\}\\
\end{eqnarray}
How may I write the following line in compact sigma notation?



$$\binom{6}{3,1,2}=\binom{5}{2,1,2}+\binom{5}{3,0,2}+\binom{4}{2,1,1}+\binom{4}{3,0,1}+
\binom{3}{2,1,0}+\binom{2}{2,0,0}$$



How to write it for general form?


Answer




$$\binom{n_1+n_2+\cdots+n_t}{n_1,n_2,\cdots,n_t}=1+\sum_{i=2}^t \sum_{j=1}^{i-1} \sum_{k=1}^{n_i} \binom{ n_1+n_2+\cdots+n_{i-1}+k-1 }{n_1,n_2,\cdots,n_j-1,\cdots,n_{i-1},k }$$
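A short Python check (my addition) that the triple sum reproduces the worked example $\binom{6}{3,1,2}=60$:

```python
from math import factorial

def multinomial(*ks):
    # (k1 + ... + kt)! / (k1! * ... * kt!)
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

def rhs(ns):
    # 1 + sum_{i=2}^{t} sum_{j<i} sum_{k=1}^{n_i} of the smaller coefficients
    total = 1
    for i in range(1, len(ns)):   # 0-based i here corresponds to i = 2..t above
        for j in range(i):
            for k in range(1, ns[i] + 1):
                bottom = list(ns[:i]) + [k]
                bottom[j] -= 1
                total += multinomial(*bottom)
    return total

print(multinomial(3, 1, 2), rhs([3, 1, 2]))  # 60 60
```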


How do I prove that this limit does not exist over the complex field? $\lim_{z\rightarrow 0} z^2 \sin\left(\frac{1}{z}\right)$



So, here's the problem. I'm trying to figure out the type of singularity at $z=0$ in order to find the residue at that point.
One of the possible approaches is to find the limit of the function as $z$ tends to the singular point; based on the outcome, you can decide whether it's a pole, an essential singularity or a removable singularity.



Based on the Laurent series, I expect this to be an essential singularity:
$f\left(z\right)=z-\dfrac{1}{6 z}+\dfrac{1}{120 z^3}-\dfrac{1}{5040 z^5}+\dfrac{1}{362880 z^7}+\cdots+\dfrac{\left(-1\right)^{n}z^{-(2n-1)}}{\left(2n+1\right)!}+\cdots\,,$
since the series has infinitely many negative-degree terms (i.e., the principal part of the Laurent series is an infinite sum). If that's so, the aforementioned limit has to be non-existent at $z=0$.




I tried to get an approach to the limit like this:
since $\left|\sin{\left(\frac{1}{z}\right)}\right|\le 1$, we have $-1\le\sin{\left(\frac{1}{z}\right)}\le1$. Multiplying by $z^2$ we get the following: $-z^2\le z^2\sin{\left(\frac{1}{z}\right)}\le z^2$. Since both $z^2$ and $-z^2$ are equal to zero as z tends to $0$, so is the $z^2 \sin{\left(\frac{1}{z}\right)}$ according to the squeeze theorem.
So, actually the limit exists.



However, shortly after this I realised that this approach is of no use over the complex field, since there is no order on the complex numbers; my squeeze argument has nothing to do with the actual problem, and the whole thing was solved over the reals.



So, I'm puzzled. How do I prove that this limit does not exist at all?


Answer



Let $z=x\in \mathbb{R}$. Then, clearly




$$\lim_{z\to 0}z^2\sin(1/z)=\lim_{x\to 0}x^2\sin(1/x)=0$$



Now, let $z=iy$, $y\in \mathbb{R}$, $y>0$. Then, clearly



$$\lim_{z\to 0}z^2\sin(1/z)=\lim_{y\to 0^+}y^2\sin(i/y)=i\lim_{y\to 0^+}y^2\sinh(1/y)=i\infty$$



Inasmuch as the two limits are unequal, the limit $\lim_{z\to 0}z^2\sin(1/z)$ fails to exist.







The Laurent series for $z^2\sin(1/z)$ is given by



$$z^2\sin(1/z)=\sum_{n=0}^\infty \frac{(-1)^n z^{-(2n-1)}}{(2n+1)!}\tag1$$



The residue is equal to the coefficient on the term $z^{-1}$, which occurs in $(1)$ when $n=1$. Hence, we see that



$$\text{Res}\left(z^2\sin\left(\frac1z\right),z=0\right)=-\frac16$$
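To see the directional behaviour numerically (my addition, not part of the original answer), compare the two approaches from the answer in Python:

```python
import cmath

# z^2*sin(1/z) along the real axis vs. along the imaginary axis
for n in range(1, 7):
    t = 2.0 ** (-n)   # keep 1/t moderate so sinh(1/t) doesn't overflow
    real_axis = t ** 2 * cmath.sin(1 / t)
    imag_axis = (1j * t) ** 2 * cmath.sin(1 / (1j * t))
    print(f"t = {t:.6f}  |real axis| = {abs(real_axis):.3e}  |imag axis| = {abs(imag_axis):.3e}")
# The first column tends to 0 while the second grows like t^2*sinh(1/t).
```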


sequences and series - Sum of Harmonic numbers $\sum\limits_{n=1}^{\infty} \frac{H_n^{(2)}}{2^nn^2}$



Finding the closed form of:



$$\sum\limits_{n=1}^{\infty} \frac{H_n^{(2)}}{2^nn^2}$$



where, $\displaystyle H_n^{(2)} = \sum\limits_{k=1}^{n}\frac{1}{k^2}$



It appears when we try to determine the summation $\displaystyle \sum_{n=1}^\infty\frac{H_n}{n^3\,2^n}$, using the generating function:




\begin{align}
\sum_{n=1}^{\infty} \frac{H_n}{n^3} \, x^{n} &= - \frac{1}{2} \, \sum_{n=1}^{\infty}\frac{1}{n^2} \, \sum_{k=1}^{n} \frac{(1-x)^k}{k^2} - \frac{\zeta(2)}{2} \, \operatorname{Li}_2(x) + \frac{7 \, \zeta(4)}{8} - \frac{1}{4} \, \operatorname{Li}_2^2(1-x) + \frac{\zeta^2(2)}{4} + \operatorname{Li}_4(x) \\
& \hspace{5mm} + \frac{1}{4} \, \log^2 x \, \log^2(1-x) + \frac{1}{2}\log x \, \log (1-x) \, \operatorname{Li}_2(1-x) + \zeta(3) \, \log x - \log x \,
\operatorname{Li}_2(1-x)
\end{align}
when we write, $\displaystyle \sum\limits_{n=1}^{\infty}\frac{1}{n^2}\sum\limits_{k=1}^{n}\frac{(1-x)^k}{k^2} = \zeta(2)\operatorname{Li}_2(1-x) + \operatorname{Li}_4(1-x) - \sum\limits_{n=1}^{\infty} \frac{H_n^{(2)}}{n^2}(1-x)^n$



Combined with Cleo's closed form here, I know what the closed form should be, but how do I derive the result?


Answer




HINT: Consider $\displaystyle \sum\limits_{n=1}^{\infty} \frac{H_n^{(2)}}{2^nn^2}=\sum\limits_{n=1}^{\infty} \frac{H_{n-1}^{(2)}}{2^nn^2}+\operatorname{Li}_4\left(\frac{1}{2}\right)$ and then express the remaining sum as a double integral. After some work, you get



$$\int_0^1 \frac{\displaystyle\log(x)\operatorname{Li}_2\left(\frac{x}{2}\right)}{x-2} \ dx+\operatorname{Li}_4\left(\frac{1}{2}\right)$$



and after letting $x\mapsto 2x$, combined with integration by parts, you arrive at some integrals that are pretty easy to finish. I'm confident you can do the rest of the job:



$$\frac{\pi^4}{1440}-\frac{\pi^2}{3}\log^2(2)+\frac{1}{24}\log^4(2)+\frac{7}{24}\pi^2\log^2(2)+\frac{1}{4}\log(2)\zeta(3)+\operatorname{Li}_4\left(\frac{1}{2}\right)$$
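The closed form can be checked to high precision (my addition) with mpmath, summing the series directly and comparing:

```python
from mpmath import mp, mpf, polylog, zeta, log, pi

mp.dps = 30

# Left side: sum_{n>=1} H_n^(2) / (2^n * n^2), summed term by term
total, H2 = mpf(0), mpf(0)
for n in range(1, 200):
    H2 += mpf(1) / n ** 2
    total += H2 / (mpf(2) ** n * n ** 2)

# Right side: the claimed closed form
closed = (pi ** 4 / 1440 - pi ** 2 / 3 * log(2) ** 2 + log(2) ** 4 / 24
          + 7 * pi ** 2 * log(2) ** 2 / 24 + log(2) * zeta(3) / 4
          + polylog(4, mpf(1) / 2))

print(total, closed)  # both 0.6054...; they agree to working precision
```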


Tuesday 11 December 2018

field theory - Composite of Galois extensions is Galois, a particular proof.

This has been asked over and over again on math.stackexchange and I will ask it again.




Let $L_1/K$ and $L_2/K$ be finite Galois extensions of $K$ inside a common field, then $L_1L_2/K$ is a finite Galois extension.




I'm interested in one common proof of that fact. It goes like this:





$L_1L_2/K$ is finite so it suffices to prove that $L_1L_2$ is the splitting field of a separable polynomial over $K$. $L_i$ is the splitting field of a separable polynomial $f_i$ over $K$. Then $L_1L_2$ is the splitting field for the product of $f_1$ and $f_2$ with common factors only used once.




However, to me it seems that this only works when (the product of) the common factors belongs to $K[X]$, and I cannot think of a reason why this would be guaranteed (except e.g. when $L_1\cap L_2 = K$). What am I missing, or is this a bogus proof?

integration - $\int^\infty_0 \frac{dx}{x^n + 1}$, $n$ odd

Well, I saw this integral :



$$\int^\infty_0 \frac{dx}{x^n + 1}$$



on some questions (like this one : Calculating a real integral using complex integration )



But it has always been for $n$ even.
Is there something wrong with $n$ odd? If not, how do you compute this? By the same argument?




Thank you, and sorry if this is easy. I can't figure out how to do it, since the integrand is neither odd nor even.

Monday 10 December 2018

How do I prove that the limit as x approaches 0 of (sin(x)/x) = 1 using the definition of a limit?

If I let $r$ be a positive number, I have to show that there exists a positive number $u$ such that $|x| < u$ implies $\left|\frac{\sin x}{x} - 1\right| < r$. Now I've gone about this two different ways, but hit dead ends both times.



My first approach was to use the triangle inequality and the fact that $-1 \le \sin x \le 1$:
$$\left|\frac{\sin x}{x} - 1\right| \le \left|\frac{\sin x}{x}\right| + 1 = \frac{|\sin x|}{|x|} + 1 \le \frac{1}{|x|} + 1.$$
I knew that if I could show that $\frac{1}{|x|} + 1 < r$, that would mean $\left|\frac{\sin x}{x} - 1\right| < r$. However, the issue is that if $\frac{1}{|x|} + 1 < r$, then $1 < r$. But I need to prove it with $r$ being any positive number.




My second approach was to split it into left-hand and right-hand limits and use the squeeze theorem. I knew that if I showed each one-sided limit was $1$, then the entire limit was $1$. I decided to start with the left-hand limit. For $x<0$, $\frac{1}{x} \le \frac{\sin x}{x} \le -\frac{1}{x}$. However, since $\lim_{x\to 0^-}\frac{1}{x} = -\infty$ and $\lim_{x\to 0^-}\left(-\frac{1}{x}\right) = \infty$, the squeeze theorem really can't be applied.



What is it that I'm doing wrong?

abstract algebra - Extension of residue fields and algebraic independence




Let $A$ be a Noetherian integral domain, $B$ a ring extension of $A$ that is an integral domain, $P \in \operatorname{Spec} B, \, p = P \cap A$. Denote by $\kappa(p),\ \kappa(P)$ the residue fields of the two prime ideals. Then we see that there is a field extension $\kappa(p) \hookrightarrow \kappa(P)$. Let $t$ be a non-negative integer such that $t \le {\rm tr}.\deg_{\kappa(p)} \kappa(P)$.



Matsumura in his Commutative Ring Theory, proof of Theorem 15.5, says "let $c_1,\dots,c_t \in B$ such that their images modulo $P$ are algebraically independent over $A/p$."



Question: Why would such elements exist?



(Edited)



Remark: Matsumura wants to prove that $\operatorname{ht}(P)+\operatorname{tr.deg}_{\kappa(p)} \kappa(P) \le \operatorname{ht}(p)+ \operatorname{tr.deg}_A B$ and he starts by proving that we can assume that $B$ is a finitely generated $A$-algebra. Specifically, he is showing that we can construct a subring $C$ of $B$ that is a finitely-generated $A$-algebra, and that if the theorem is true for $C$, then it is true of $B$. My question relates to his argument of "why we can assume that". Consequently, in answering my question, we can not make the assumption that $B$ is a finitely generated $A$-algebra.



Answer



Take $x_1,\dots,x_t$ to be algebraically independent over $\kappa(p)$ in $\kappa(P)$. Then write each $x_i$ as a fraction; you get $2t$ elements of $B/P$. I claim that there are $t$ algebraically independent elements among these $2t$. Why? What happens if there are only $m < t$ algebraically independent elements among them?

Modular Arithmetic and Congruences



I understand that $a \equiv b \pmod n$ means that when you divide each of $a$ and $b$ by $n$ you get the same remainder. But why do people say: "$a$ divided by $n$ gives you remainder $b$"?



They say that in the first 30 seconds of this video lecture http://www.youtube.com/watch?v=QXIlkq06Ct0&feature=youtube_gdata_player



Example



$12 \equiv 17 \pmod 5$




$12$ divided by $5$ has remainder $2$



$17$ divided by $5$ has remainder $2$



Neither fits the latter description, so why do people sometimes say this?


Answer



12 is the same as 2 $\bmod{5}$ and
17 is the same as 2 $\bmod{5}$




Let's say you have $a\equiv b\bmod{n}$. Then the numbers you are working with are basically from the set $\{0,1,2,3,\dots,n-1\}$: the number $n\equiv 0 \pmod n$, $n+1\equiv 1 \pmod n$, $n+2\equiv 2 \pmod n$, etc.



If two numbers $a$ and $b$ are related by $a\equiv b\bmod{n}$, then $a-b=nc$ for some $c\in \mathbb{Z}$, that is, $a-b$ is a multiple of $n$. So in your case above, $2\equiv2+5\equiv2+10\equiv2+15\equiv \cdots \bmod{5}$. So $2$ is the same as $12$, which is the same as $17 \bmod{5}$.



When you have $a\equiv b \bmod{n}$ with $a>b$ and $b\in\{0,1,\cdots ,n-1 \}$, then in fact $a$ divided by $n$ has remainder $b$; this is the case in the video. When this is not the case, it causes confusion, as in your example. It would make sense to write $17\equiv 2 \bmod{5}$, however.
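In code form (my addition), the point is simply that congruence compares remainders rather than naming one of the numbers as the remainder:

```python
# 12 and 17 leave the same remainder mod 5; that is what 12 = 17 (mod 5) means
print(12 % 5, 17 % 5)       # 2 2
print((17 - 12) % 5 == 0)   # True: the difference is a multiple of 5
```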


arithmetic - Calculate fractional part of square root without taking square root



Let's say I have a number $x>0$ and I need to calculate the fractional part of its square root:



$$f(x) = \sqrt x-\lfloor\sqrt x\rfloor$$



If I have $\lfloor\sqrt x\rfloor$ available, is there a way I can achieve (or approximate) this without having to calculate any other square roots? I would like to avoid square roots for performance reasons as they are usually a rather slow operation to calculate.



As an example of what I am thinking about, here is one option:




$$f(x)\approx\frac{x-\lfloor\sqrt x\rfloor^2}{(\lfloor\sqrt x\rfloor+1)^2-\lfloor\sqrt x\rfloor^2}$$



As $x$ gets large, this approximation becomes better and better (less than 1% error if $\sqrt x\ge40$), but for small $x$, it's quite terrible (unbounded error as $x$ goes to $0$).



Can I do better? Preferably an error of less than 1% for any $\sqrt x<10000$.



The holy grail would be to find a formula that doesn't need any division either.



Aside:
In case anyone is interested why I need this: I'm trying to see if I can speed up Xiaolin Wu's anti-aliased circle drawing functions by calculating the needed variables incrementally (Bresenham-style)



Answer



If you are using IEEE 754 compliant floating point numbers, it may be that square root operations are faster than you might suppose, as the square root operation is directly supported in the standard and any compliant hardware is guaranteed to round correctly (unlike for sine and cosine) and is often using a hardware implementation of the Newton-Raphson method Alijah described.



That said, the algorithm you linked to only uses the fractional part of the square root to calculate pixel opacity, and consequently the final opacity value ranges only from 0 to 255. Because of the small range, floating point numbers may be overkill and a fixed-point integer representation might work better. If the range is truly only a byte and the maximum size of your radii aren't too large, you can use a look-up table with fixed-point input to skip the expensive square root calculation. A 16-bit fixed-point number would give you a 64KB look-up table, which isn't too bad.



You might also be able to avoid a division and square root operation for calculating the $45^\circ$ value (ffd in the algorithm) by using the Fast Inverse Square Root hack.



Now for the question of whether there is a method to calculate the fractional part of a square root knowing only the integer part, there is one approach that iteratively calculates a square root and minimizes divisions: Continued Fractions. The simplest approach I know of for what you are wanting to do (fast convergence) is detailed here and works as follows:



$a_0 = x - \lfloor\sqrt x\rfloor^2$




$b_0 = 2\lfloor\sqrt x\rfloor$



$a_n = a_0b_{n-1}$



$b_n = b_0b_{n-1} + a_{n-1}$



which gives you successively better approximations of the fractional part of $\sqrt x$, and you divide $\frac{a_n}{b_n}$ when you've done enough iterations to ensure accuracy. If you ultimately need only byte-sized opacity values, it should take only 3 or 4 iterations or so, and we save the division until the end, which is the only significant difference between this and the Newton-Raphson method, other than the fact that it gives you the fractional part directly.
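A minimal sketch of that recurrence in Python (my addition; `math.isqrt` stands in for the integer part you are assumed to already have):

```python
import math

def frac_sqrt(x, iterations=4):
    # Approximate sqrt(x) - floor(sqrt(x)) using the recurrence above:
    # a_0 = x - floor^2, b_0 = 2*floor, a_n = a_0*b_{n-1}, b_n = b_0*b_{n-1} + a_{n-1}.
    # No square roots; a single division at the end.
    s = math.isqrt(x)
    a0, b0 = x - s * s, 2 * s
    if a0 == 0:
        return 0.0          # x is a perfect square
    a, b = a0, b0
    for _ in range(iterations - 1):
        a, b = a0 * b, b0 * b + a
    return a / b

for x in [2, 5, 17, 99999]:
    print(x, frac_sqrt(x), math.sqrt(x) % 1)  # approximation vs. true value
```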



If you really want to pursue incrementally calculating variables as far as it can go, you can use Gosper's continued fraction algorithms (see especially the section on square roots) and calculate all the variables involved as continued fractions one term at a time. This allows you to avoid square root calculations and divisions other than bit shifts, as well as abort as soon as you know the correct pixel opacity (or whatever else you're calculating) without wasting time calculating digits you don't need, but it involves a serious overhaul to the algorithm you linked, and if I went into detail this answer would turn into a book.




So essentially, if you have memory to spare and the max length of your radii isn't huge and your output size is small, go with a look-up table with fixed-point numbers. If you want simplicity of implementation, go with floating point or fixed-point numbers. If you want to absolutely avoid square root calculation without a look-up table go with Newton-Raphson or the continued fraction variant on it. If you want to absolutely minimize wasted computation at the expense of some up-front overhead, go with Gosper's continued fraction algorithms.


Sunday 9 December 2018

sequences and series - How to evaluate a zero of the Riemann zeta function?

Here is a super naive question from a physicist:




Given the zeros of the Riemann zeta function,

$\zeta(s)
= \sum_{n=1}^\infty n^{-s}$,
how do I actually evaluate them?




On this web page I found a list of zeros. Well, I guess if the values, call one ${a_\text{zero}}$, are just given in decimal expansion, then I'd have to run a program and see how it approaches zero:
$$\zeta({a_\text{zero}}) =
\sum_{n=1}^\infty n^{-{a_\text{zero}}} =
\frac{1}{1^{a_\text{zero}}} + \frac{1}{2^{a_\text{zero}}} + \frac{1}{3^{a_\text{zero}}} + \cdots \approx 0 \qquad (\Re(a)>1)$$




But are there also analytical solutions, let's call one ${b_\text{zero}}$, which I can plug in and see $\zeta({b_\text{zero}}) = 0$ exactly? Specifically, the non-trivial ones with nonzero imaginary part are of interest. What I find curious is that these series, with each term being a real number multiplied by some $n^{-i\cdot \text{Im}(b_\text{zero})}$, also kind of resemble Fourier series.
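(My addition, not part of the original question.) One caveat: the series $\sum n^{-s}$ only converges for $\Re(s)>1$, so at a zero on the critical line one must evaluate the analytic continuation instead; mpmath does this for you:

```python
from mpmath import mp, zeta, zetazero

mp.dps = 25
rho = zetazero(1)   # first nontrivial zero, 0.5 + 14.1347...j (known numerically)
print(rho)
print(zeta(rho))    # ~ 0; computed via analytic continuation, not the raw series
```

No closed-form expression for the non-trivial zeros is known; they are computed numerically.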

elementary number theory - Prove that if $x^9 equiv 3 mod{61}$, then $x^{12}equiv 34 mod{61}$



Prove that if $x^9 \equiv 3 \mod{61}$, then $x^{12}\equiv 34 \mod{61}$.




The first thing I tried was the obvious:



$$(x^9)^{4/3} \equiv 3^{4/3} \mod{61}$$
$$x^{12} \equiv 3^{4/3} \mod{61}$$



But this doesn't get me anywhere.



I've also tried breaking it down by saying that there exists a $k_1 \in \mathbb{Z}$ s.t. $x^9 - 3 = 61k_1$ and trying to work it towards $x^{12} - 34 = 61k_2$, for some $k_2 \in \mathbb{Z}$, but again, I didn't get too far.


Answer



Recall that $x^{60} \equiv 1$ modulo $61$. This is Fermat's Little Theorem (of course $x$ is not divisible by $61$, so it is applicable).




Thus, $3^7 = (x^9)^7 = x^{63}=x^3$ modulo $61$.
So you know $x^3$ and the rest is direct.



The general strategy would be to solve $9k \equiv 12 \pmod{60}$ and then compute $x^{12}= (x^{9})^k = 3^k$, but I took a slight shortcut.
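A brute-force confirmation in Python (my addition):

```python
# Every x with x^9 = 3 (mod 61) should satisfy x^12 = 34 (mod 61)
solutions = [x for x in range(1, 61) if pow(x, 9, 61) == 3]
print(solutions)                            # nonempty, so the hypothesis is satisfiable
print({pow(x, 12, 61) for x in solutions})  # {34}
print(pow(3, 7, 61), pow(3, 8, 61))         # x^3 = 52 and x^12 = 3^8 = 34, as in the argument
```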


word problem - Extended Euclidean Algorithm to find 2 POSITIVE Coefficients?

There's a problem I ran into that said:



"At a certain casino, blue poker chips are worth 9 dollars and white poker chips are worth 14 dollars. How many chips can Hank buy to spend exactly 206 dollars?"



The answer is 19 chips (12 blue and 7 white). The solution posed was trial and error, which was frustrating. I was able to use an augmented matrix to solve it, but that took longer than I would've liked.



I noticed $9$ and $14$ are coprime, though, so by Bézout's identity (computed via the Extended Euclidean Algorithm) there exist integers $X$ and $Y$ such that $9X+14Y=1$, since $\gcd(9,14)=1$. That said, there should be integers $X'$ and $Y'$ such that $9X'+14Y'=206$ (linear independence; spanning set; etc.). The issue here is that $9X+14Y=1$ results in $X=-3$, $Y=2$. Negative numbers aren't allowed in the question posed above, technically.



Is there any way to use the Extended Euclidean Algorithm to solve this problem such that we have X' and Y' BOTH positive? Or is it not possible using the Algorithm this way?
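(A sketch of my own, not from the post.) The algorithm itself only hands you one solution; the standard fix is to slide along the family $(X+14t,\ Y-9t)$ until both entries are nonnegative, if that range is nonempty:

```python
def ext_gcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(9, 14)       # 9*(-3) + 14*2 = 1
X, Y = 206 * x, 206 * y        # one (sign-unrestricted) solution of 9X + 14Y = 206
t = -(X // 14)                 # smallest t making X + 14t >= 0
X, Y = X + 14 * t, Y - 9 * t
if Y >= 0:
    print(X, "blue,", Y, "white ->", 9 * X + 14 * Y)  # 12 blue, 7 white -> 206
else:
    print("no solution in nonnegative integers")
```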

Saturday 8 December 2018

Is square root of an integer, either an integer or an irrational number, but never (non-integer) rational?








$\sqrt{2}$ is an irrational number, but $\sqrt{9} = 3$ is an integer.
Are there integers whose square root is a (non-integer) rational number?

calculus - Limitations of approximating $sin(x) = x$



$$\lim _{x\rightarrow 0}\frac {\cos(x)\sin(x) -x}{\sin^{3}(x)}$$






I know that the actual limit is $-2/3$.

However, I've noticed that by approximating $\sin(x)$ as $x$ and $\cos(x)$ as $1-(x^2/2)$ I get the following:



$$\frac{(1-(x^2/2))\,x - x}{x^3} = \left(1- \frac{x^2}{2} -1\right)\frac{1}{x^2} = \frac{-x^2/2}{x^2} = -\frac12$$



Also, if I only partially approximate, like $\frac{x\cos(x) - x}{x^3} = \frac{\cos(x)-1}{x^2}$, and then use L'Hospital's rule to chisel this down, I get $\frac{-\sin x}{2x}$, and applying L'Hospital again, $\frac{-\cos x}{2} \to -\frac12$.



Why does this conflict with doing L'Hospital the whole way through without approximating $\sin(x)$ as $x$? Why is approximating $\sin^3(x)$ as $x^3$ wrong? Isn't it always approaching zero?


Answer



As you know, $\sin x$ is only approximately equal to $x$, and $\cos x$ is only approximately $1 - \frac12 x^2$. How do you know when the approximation is good enough? One way is to keep track of how big the terms we've thrown away are. If you look at the Taylor series of $\sin$ and $\cos$, you find that $\sin x = x + O(x^3)$, and $\cos x = 1 - \frac12 x^2 + O(x^4)$, where $O(x^n)$ means something on the order of $x^n$ whose exact value we don't care about. So your limit is

$$\begin{align}
\lim_{x\to 0} \frac {\cos x \sin x - x}{\sin^3 x} &= \lim_{x\to 0} \frac{\big(1 - \frac12 x^2 + O(x^4)\big)\big(x + O(x^3)\big) - x}{\big(x + O(x^3)\big)^3} \\
&= \lim_{x\to 0} \frac{\big(x - \frac12 x^3 + O(x^3)\big) - x}{x^3 + O(x^5)} \\
&= \lim_{x\to 0} \frac{-\frac12 x^3 + O(x^3)}{x^3 + O(x^5)} \\
&= \lim_{x\to 0} -\frac12 + O(1)
\end{align}$$
As you can see, one of the terms we ignored produces an error that doesn't go away as we approach $0$, so our solution is no good. If you trace backward to where the error came from, you'll find that it's the $O(x^3)$ term in the approximation of $\sin$. Then you would be wise to replace it with its true value, giving $\sin x = x - \frac16 x^3 + O(x^5)$, and evaluate the limit again. This time you should get the right answer, with an extra term that goes to zero as $x \to 0$.
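A quick numerical illustration (my addition) that the true limit is $-2/3$, not the $-1/2$ produced by the too-coarse approximation:

```python
import math

# Evaluate (cos(x)*sin(x) - x) / sin(x)^3 at shrinking x
for x in [0.5, 0.1, 0.01, 0.001]:
    print(x, (math.cos(x) * math.sin(x) - x) / math.sin(x) ** 3)
# The values tend to -0.666..., i.e. -2/3.
```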


Friday 7 December 2018

complex numbers - Logical explanation of Euler's formula



This question is a about (if not proving) at least guessing the Euler's formula.
I don't want the proof using the infinite sums.




We can guess by logic, for example, that the equation $x^2+1=\sqrt{x}$ has no real solutions: $x^2=\sqrt{x}$ has the two solutions $x=0$ and $x=1$, and adding $1$ on the left side eliminates both of them, so there are no solutions.



I want to know if there is a way to guess by logic that $e^{ix}=\cos(x)+i\sin(x)$. I guess that the most important fact here will be $\frac{d}{dx}e^x=e^x$. Any suggestions?


Answer



The power series argument, while simple, is indeed unenlightening. You can easily show that $f(\theta)=\cos\left(\theta\right)+i\sin\left(\theta\right)$ satisfies both $f'\left(\theta\right)=i\cdot f\left(\theta\right)$ and $f\left(0\right)=1$, and it looks remarkably similar to one definition of $\gamma(t)=e^{\alpha t}$: the function $\gamma\colon\mathbb{C}\rightarrow\mathbb{C}$ such that both $\gamma'(t)=\alpha \gamma(t)$ and $\gamma(0)=1$.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hospital's rule? I know when I use L'Hospital I easily get $$ \lim_{h\rightarrow 0}...