Friday 31 July 2015

real analysis - How to test this improper integral for convergence?



I'm supposed to test for convergence the following integral $$\int_1^{\infty}\frac{\ln x}{x\sqrt{x^2-1}}dx$$ I have tried using the comparison test with two different integrals but I've failed. I also tried using the Dirichlet test, however it doesn't work for this integral. I have thought about using the limit comparison test however I don't have any idea with what would I compare the expression I have.



Any hints?


Answer




Testing for convergence isn't so bad: simply note that for $x>\sqrt2$:



$$0<\frac1{x\sqrt{x^2-1}}<\frac1{x\sqrt{x^2-\frac12x^2}}=\frac{\sqrt2}{x^2}$$



Thus,



$$0<\int_{\sqrt2}^\infty\frac{\ln(x)}{x\sqrt{x^2-1}}~\mathrm dx<\sqrt 2\int_{\sqrt2}^\infty\frac{\ln(x)}{x^2}~\mathrm dx$$



Integration by parts gives




$$\int_{\sqrt2}^\infty\frac{\ln(x)}{x^2}~\mathrm dx=\frac{\ln(2)}{2\sqrt2}+\int_{\sqrt2}^\infty\frac1{x^2}~\mathrm dx=\frac{\ln(2)}{2\sqrt2}+\frac1{\sqrt2}$$



For $1\le x\le\sqrt2$:



$$0\le\frac{\ln(x)}{x\sqrt{x^2-1}}\le1$$



$$0<\int_1^{\sqrt2}\frac{\ln(x)}{x\sqrt{x^2-1}}~\mathrm dx<\sqrt2-1$$



Thus, the integral converges and is bounded by $\displaystyle0<\int_1^{\infty}\frac{\ln(x)}{x\sqrt{x^2-1}}~\mathrm dx<\sqrt2+\frac{\ln(2)}{2}$.

abstract algebra - If $F(\alpha)=F(\beta)$, must $\alpha$ and $\beta$ have the same minimal polynomial?





Let's consider a field $F$ and $\alpha,\beta\in\overline{F}-F$ (where $\overline{F}$ is an algebraic closure). If $F(\alpha)=F(\beta)$, is it true that $\alpha$, $\beta$ have the same minimal polynomial over $F$?




For reference, here is my approach. (From here forward, all field isomorphisms are the identity on $F$.)



There exist polynomials $f,g\in F[x]$ such that $f(\beta)=\alpha$, $g(\alpha)=\beta$. Let $p$ and $q$ be the minimal polynomials of $\alpha$ and $\beta$ over $F$. There are field isomorphisms
$$\sigma:F[x]/(p)\to F(\alpha),\qquad \tau:F[x]/(q)\to F(\beta)$$
(these two isomorphisms are constructed in the proof of the fundamental theorem of field theory), and the trivial isomorphism $i:F(\alpha)\to F(\beta)$. (I hope that there is an isomorphism $j$ with $j(\alpha)=\beta$, but I can't prove it. What do you think about this?) Then
$$\rho:=(\tau^{-1}\circ i\circ\sigma)$$
is also an isomorphism with

$$\rho(x+(p))=f(x)+(q),\qquad \rho^{-1}(x+(q))=g(x)+(p).$$
From this, I got $q\mid f\circ p$ but not $q\mid p$. Is this approach right? If it is, please show how to proceed.


Answer



Let $a$ and $b$ be any two distinct elements of $F$. Then we have $F(a)=F(b)=F$ but the minimal polynomial of $a$ over $F$ is $x-a$, whereas the minimal polynomial for $b$ over $F$ is $x-b$, which are different.



This is just a trivial example of the general behavior. Let $F$ be any infinite field, and let $a$ be anything that's algebraic over $F$ (regardless of whether it's an element of $F$ or not). Then there are infinitely many $b$'s that are algebraic over $F$ such that $F(a)=F(b)$ — consider $b=a+c$ as $c$ ranges over all elements of $F$ — but only finitely many roots of the minimal polynomial of $a$ (since a polynomial of degree $n$ over a field has at most $n$ distinct roots). Thus, all but finitely many of the $b$'s such that $F(a)=F(b)$ do not share a minimal polynomial with $a$.



If you really want a concrete example, observe that $\mathbb{Q}(\sqrt{2})=\mathbb{Q}(1+\sqrt{2})$ but
$$\text{min poly of }\sqrt{2} = x^2-2,\qquad \text{min poly of }1+\sqrt{2}=x^2-2x-1$$


calculus - Polygamma function series: $\sum_{k=1}^{\infty }\left(\Psi^{(1)}(k)\right)^2$



Applying Copson's inequality, I found:
$$S=\displaystyle\sum_{k=1}^{\infty }\left(\Psi^{(1)}(k)\right)^2\lt\dfrac{2}{3}\pi^2$$ where
$\Psi^{(1)}(k)$ is the polygamma function.
Is any sharper bound known for the sum $S$?
Thanks.


Answer



The upper bound can be improved using an asymptotic series:




[The original answer was posted as an image and is not reproduced here.]


calculus - Find limit of $\lim_{x\to 0}{\left(\sqrt{x^6+5x^4+7x^2}\cos(1+x^{-1000})\right)}$, if it exists



Continuing my practice at solving limits, I'm currently trying to solve the following limit:



$$\lim_{x\to 0}{\left(\sqrt{x^6+5x^4+7x^2}\cos(1+x^{-1000})\right)}$$



What I've done:




I know that since the cosine is an oscillating function, $\cos(\infty)$ doesn't converge to a single value, but I thought that, perhaps, by using trigonometric formulas and various tricks, I could bring the limit to a form that can be solved, but in vain.



Here is an attempt using the trigonometric formula $\cos(x+y)=\cos(x)\cos(y)-\sin(x)\sin(y)$:



$$
\begin{align}
l
& = \lim_{x\to 0}{\left(\sqrt{x^6+5x^4+7x^2}\cdot \cos(1+x^{-1000})\right)}\\
& = \lim_{x\to 0}{\left(\sqrt{x^6+5x^4+7x^2}\right)}\cdot

\lim_{x\to 0}{\left(\cos(1)\cos(x^{-1000})-\sin(1)\sin(x^{-1000})\right)}\\
& = 0\ \cdot \lim_{x\to 0}{\left({{\cos(1)}\over{1/\cos(x^{-1000})}}-{{\sin(1)}\over{1/\sin(x^{-1000})}}\right)}\\
\end{align}
$$



Question:



Can the above limit be evaluated, or is it simply undefined since $\cos(\infty)$ diverges?


Answer



You are correct, the limit is $0$. Another elegant way to prove it is using the squeeze theorem.




Using $$-1 \le \cos (x) \le 1$$



We have



$$ -\sqrt{x^6+5x^4+7x^2} \le {\left(\sqrt{x^6+5x^4+7x^2}\cos(1+x^{-1000})\right)} \le \sqrt{x^6+5x^4+7x^2}$$



Thus,



$$\lim_{x\to 0} -\sqrt{x^6+5x^4+7x^2} \le \lim_{x\to 0}{\left(\sqrt{x^6+5x^4+7x^2}\cos(1+x^{-1000})\right)} \le \lim_{x\to 0}\sqrt{x^6+5x^4+7x^2}$$




$$\implies 0\le \lim_{x\to 0}{\left(\sqrt{x^6+5x^4+7x^2}\cos(1+x^{-1000})\right)} \le 0$$



$$\implies \color{blue}{\lim_{x\to 0}{\left(\sqrt{x^6+5x^4+7x^2}\cos(1+x^{-1000})\right)}=0}$$


complex analysis - Integration using residues

For the following problem from Brown and Churchill's Complex Variables, 8ed., section 84



Show that



$$ \int_0^\infty\frac{\cos(ax) - \cos(bx)}{x^2} \mathrm{d}x= \frac{\pi}{2}(b-a)$$



where $a$ and $b$ are positive, non-zero constants, by integrating about a suitable indented contour. The contour in question is the upper half of an annulus bisected by the $x$-axis with an outer radius of $R$ and an inner radius of $\delta$, such that the singularity at $x = 0$ is completely avoided by the integration.



I would expect the evaluation to involve the Cauchy-Goursat theorem to show that the integral about the entire region $\mathscr{D}$ which is enclosed by the half-annulus is $0$.




However I'm having difficulty constructing the appropriate complex analogue.



For example I know that the function $f(x) = \frac{1}{x^2} $ has the complex analogue $f(z) = \frac{1}{z^2}$ and that $\cos(x)$ can be obtained by extracting the real portion of the exponential function i.e. $\cos(x) = \mathrm{Re}(e^{ix})$.



What is the appropriate analogue for this function? Can you show that it would work?

Thursday 30 July 2015

recurrence relations - Closed form for the sequence defined by $a_0=1$ and $a_{n+1} = a_n + a_n^{-1}$



Today, we had a math class where we had to show that $a_{100} > 14$ for



$$a_0 = 1;\qquad a_{n+1} = a_n + a_n^{-1}$$



Apart from this task, I asked myself: is there a closed form for this sequence? Since I didn't find an answer by myself, can somebody tell me whether such a closed form exists, and if yes, what it is?


Answer



I agree, a closed form is very unlikely.

As for more precise asymptotics, I think
$$a_n = \sqrt{2n} + \frac{\sqrt{2}\,\ln n}{8\sqrt{n}} - \frac{\sqrt{2}\left(\ln n - 2\right)^2 + o(1)}{128\, n^{3/2}}.$$


real analysis - Proof of Uniform Convergence of continuous functions




Suppose $K$ is compact and $\{f_n\}$ is a sequence of continuous functions on $K$ which converges pointwise to a continuous function $f(x)$, with $f_n(x)\geq f_{n+1}(x)$ for all $x \in K$ and $n \in \mathbb{N}$. Show $f_n \rightarrow f$ uniformly.





My thoughts:



To prove uniform convergence I think we should use the (epsilon-delta) definition, but I'm not sure how to use the other conditions. I was trying to combine uniform continuity and pointwise convergence, but it didn't work.


Answer



Actually, this is Dini's Theorem. You can see this lecture note: http://www.math.ubc.ca/~feldman/m321/dini.pdf


calculus - Evaluating the definite integral $\int_{-\infty}^{+\infty} \mathrm{e}^{-x^2}x^n\,\mathrm{d}x$



I recognize that $\int_0^\infty \mathrm{e}^{-x}x^n\,\mathrm{d}x = \Gamma(n+1)$ and $\int_{-\infty}^{+\infty} \mathrm{e}^{-x^2}\,\mathrm{d}x = \sqrt{\pi}$. I am having difficulty, however, with $\int_{-\infty}^{+\infty} \mathrm{e}^{-x^2}x^n\,\mathrm{d}x$. By the substitution $u=x^2$, this can be equivalently expressed as $\frac{1}{2} \int_{-\infty}^{+\infty} \mathrm{e}^{-u}u^{\frac{n-1}{2}}\,\mathrm{d}u$. This integral is similar to the first one listed (which equates to the $\Gamma$ function), except that its domain spans $\mathbb{R}$ like the second integral (which equates to $\sqrt{\pi}$). Any pointers on how to evaluate this integral would be helpful.


Answer



Let $I_n:=\int_{-\infty}^{+\infty}e^{-x^2}x^ndx$. If $n$ is odd then $I_n=0$ and for $p\geq 1$:
\begin{align}

I_{2p}&=\int_0^{+\infty}e^{-x^2}x^{2p}dx+\int_{-\infty}^0e^{-x^2}x^{2p}dx\\
&=\int_0^{+\infty}e^{-t^2}t^{2p}dt+\int_0^{+\infty}e^{-t^2}(-t)^{2p}dt\quad (\mbox{left: } t=x,\mbox{right: } t=-x)\\
&=2\int_0^{+\infty}e^{-t^2}t^{2p}dt\\
&=2\int_0^{+\infty}e^{-s}s^p\frac 1{2\sqrt s}ds \quad (s=t^2)\\
&=\int_0^{+\infty}e^{-s}s^{p-1/2}ds\\
&=\left[-e^{-s}s^{p-1/2}\right]_0^{+\infty}+\left(p-\frac 12\right)\int_0^{+\infty}e^{-s}s^{p-1-1/2}\,ds\\
&=\left(p-\frac 12\right)I_{2(p-1)}.
\end{align}
Finally we get $I_{2p+1}=0$ and $I_{2p}=\sqrt \pi\prod_{j=1}^p\left(j-\frac 12\right)$ for all $p\geq 0$.


elementary number theory - Flirtatious Primes




Here's a possibly interesting prime puzzle. Call a prime $p$ flirtatious if the sum of its digits is also prime. Are there finitely many flirtatious primes, or infinitely many?


Answer



These are tabulated at the Online Encyclopedia of Integer Sequences. It appears to be known that there are infinitely many, and a link is given to a recent paper of Harman. Some high-powered math is involved.


calculus - Prove that $\lim\limits_{x\to0^+}\frac{f(x)}{f'(x)}=0$.


Let $f:(0,\infty)\to\mathbb{R}$ be a twice differentiable function with $f''$ continuous and let $\lim\limits_{x\to0^+}f'(x)=-\infty$ and $\lim\limits_{x\to0^+}f''(x)=+\infty$. Prove that:
$$\lim_{x\to0^+}\frac{f(x)}{f'(x)}=0.$$




My problem is not the proof of this itself (e.g. using the $\epsilon$-$\delta$ definition). I recently found this in an old high-school textbook where no mention of the "traditional" $\epsilon$-$\delta$ definition is made, so is it possible to find a solution without it?



What we can do is find some $a>0$ such that $f$ is strictly decreasing and $f'$ strictly increasing in $(0,a)$ which proves that
$$\lim_{x\to0^+}f(x)=\ell$$
exists (either a number or $+\infty$), and we can easily prove what we want in the case $\ell\in\mathbb{R}$. But the case $\ell=+\infty$ is one I cannot solve without proving some inequality of the form:

$f(x)+\epsilon f'(x)<0,$
for $x\in(0,\delta)$ for some $\delta>0$. But this is not supposed to be the solution in a high school textbook.



So, does anyone have a more "elementary" solution or an appropriate rephrasing of a current one?

real analysis - If $f$ is nowhere differentiable does it follow that $f$ is monotonic at no point?



Let $f \colon \mathbb{R} \to \mathbb{R}$ be a continuous function that is nowhere differentiable. From this question (Does there exist a nowhere differentiable, everywhere continous, monotone somewhere function?) , I know that it follows
that $f$ is monotone on no interval.



Let $x$ be a real number. We say that $f$ is non-decreasing at $x$ if there is a neighborhood of $x$, $N_x$, such that $\frac{f(y)-f(x)}{y-x} \ge 0$ if $y \in N_x-\{x\}$.





If a function is continuous everywhere and differentiable nowhere,
does it follow that it is monotonic at no point? If this is not the
case can you please give a counterexample?



Answer



This does not follow: here's a cheap way to modify a given $f$ so that it becomes monotone at a point. Consider a local minimum $x_0$ of $f$. Let's assume that $x_0=f(x_0)=0$. Then $f(x)\ge 0$ in a neighborhood of $0$, so
$$
g(x) = \begin{cases} f(x) & x\ge 0\\ -f(x) & x<0 \end{cases}
$$
is monotone at $0$. If I'm spectacularly unlucky here, then $g$ is now differentiable at $0$, but then I can simply redefine it as $2f(x)$ for $x\ge 0$.



Notation for summation



I have a function $f(x)$ that I want to sum in two separate ways:





  • across integer values of $x\geqslant0$

  • across all real values of $x\geqslant0$



I am interested in the notation for both situations. Is it legitimate to say something like



$$\sum_{x \in \mathbb{Z}\geqslant0} f(x)$$



and

$$\sum_{x \in \mathbb{R}\geqslant0} f(x)$$



I realise that this second example is also equivalent to a partial integral, but since the expression isn't algebraically integrable, I want to explore alternative notations.


Answer



The first sum doesn't really make sense in the way you phrased it (but can be rephrased to make sense). The second sum doesn't even make sense, unless you redefine what it means to take the summation.



The way summation is defined on a finite set $S$, as in $$\sum\limits_{x\in S} f(x),$$ is to first order the set with some bijective map from the set of numbers from $1$ to $n$, then evaluate the summation as $$f(x_1)+f(x_2)+\cdots+f(x_n).$$ Now, for countably infinite sets, we can generalize this procedure. We begin by creating a bijective map from the natural numbers to the set $S$, and then we compute the sum $$f(x_1)+f(x_2)+\cdots+f(x_n)$$ for all $n$. After doing so, we can take the limit as $n\to\infty$.



As such, your first summation doesn't make much sense, unless we create a mapping between your set $\mathbb{N}\cup\{0\}$ and $\mathbb{N}$, which is relatively trivial to do. Your second summation is not well defined, as there exists no bijective mapping between the set and the natural numbers.




If you seek to redefine what the summation means, be my guest, but make sure to be rigorous and precise when dealing with concepts as such.


Wednesday 29 July 2015

probability - Expected value of a continuous random variable (understanding of the proof)



I need a little help understanding the steps of this proof (I'm currently studying double integrals), but I also don't understand the first equalities. Thank you in advance.



Let $X$ be an absolutely continuous variable with density function $f_X$. Suppose that $g$ is a nonnegative function. Then
$$E(g(X))=\int_{0}^{\infty}P(g(X)>y)dy-\int_{0}^{\infty}P(g(X)\leqslant -y)dy$$
$$=\int_{0}^{\infty}P(g(X)>y)dy=\int_{0}^{\infty} \left(\int_{B} f_X(x) dx\right)dy$$

where $B:=\{x:g(x)>y\}$. Therefore
$$E(g(X))=\int_0^\infty \int_0^{g(x)} f_X(x)\, dy\, dx= \int_0^\infty g(x)f_X(x)dx.$$


Answer



The first equality can be skipped if you just use the tail sum formula for nonnegative random variables: $E[Y] = \int_0^\infty P(Y > y) \mathop{dy}$ if $Y$ is continuous and nonnegative. Applying this to $Y=g(X)$ immediately yields $E[g(X)] = \int_0^\infty P(g(X) > y) \mathop{dy}$.



[The first equality is a generalization, and can be proven by writing $Y=Y \cdot 1_{Y \ge 0} - (-Y) \cdot 1_{Y < 0}$ and applying the tail sum probability to each term, both of which are nonnegative.]






The next part is simply the definition of $B$. $$P(g(X)>y) = \int_B f_X(x) \mathop{dx}$$







Finally, switch the order of the two integrals.
I think you forgot to mention that $X$ is also nonnegative?
The region you are integrating over is $\{y \ge 0\} \cap \{x \ge 0 :g(x) > y\}$, which can be rewritten as $\{x \ge 0\} \cap \{y : 0 \le y < g(x)\}$.



$$\int_0^\infty \int_B f_X(x) \mathop{dx} \mathop{dy} = \int_0^\infty \int_0^{g(x)} f_X(x) \mathop{dy} \mathop{dx}.$$


calculus - Strange behavior of $\lim_{x\to0}\frac{\sin\left(x\sin\left(\frac1x\right)\right)}{x\sin\left(\frac1x\right)}$



Alright, scratch everything below the line. Let me present one cohesive question not marred by repeated edits.



The limit $\lim_{x\to a}f(x)=L$ exists iff for every $\epsilon>0$ there is a $\delta>0$ such that $|f(x)-L|<\epsilon$ when $0<|x-a|<\delta$.



Thus, $\lim_{x\to0}\sin\left(\frac1x\right)$ does not exist because, since it oscillates infinitely often near $0$, no value of $L$ admits suitable $\epsilon,\delta$.




On the other hand, with the limit$$\lim_{x\to0}\frac{\sin\left(x\sin\left(\frac1x\right)\right)}{x\sin\left(\frac1x\right)}\\\lim_{x\to0}x\sin\left(\frac1x\right)=0\\y=x\sin\left(\frac1x\right)\\\lim_{y\to0}\frac{\sin y}{y}=1\\\lim_{x\to0}\frac{\sin\left(x\sin\left(\frac1x\right)\right)}{x\sin\left(\frac1x\right)}=1$$



this argument can be given. However, since $\sin\left(\frac1x\right)$ oscillates infinitely often, by the same definition of limit we used above, the limit should not exist. How do I resolve this discrepancy?


Answer



A useful thing to know about limits, is that if you are evaluating limit of the form $\lim_{x \to a} f(g(x))$, and you know that $\lim_{x \to a} g(x) = b$ then $\lim_{x \to a} f(g(x)) = \lim_{y \to b} f(y)$ (assuming everything is nice and continuous). So, because you have already worked out the limit $\lim_{x\to 0} x \sin\frac{1}{x} = 0$, you get:
$$ \lim_{x \to 0} \frac{\sin(x \sin\frac{1}{x})}{x \sin\frac{1}{x}} = \lim_{y\to 0}\frac{\sin y}{y} = 1$$



Edit




Let me be a little more precise. Suppose you know that $\lim_{y \to b} f(y) = L$ (so for $\varepsilon$ you have $\delta_f(\varepsilon)$ such that if $|y -b| < \delta_f(\varepsilon)$ you have $|f(y) - L| < \varepsilon$) and that $\lim_{x \to a} g(x) = b$ (so for $\varepsilon$ you have $\delta_g(\varepsilon)$ such that if $|x -a| < \delta_g(\varepsilon)$ you have $|g(x) - b| < \varepsilon$). I claim that then, with no extra assumption $\lim_{x \to a} f(g(x)) = L$. With $g(x) = x \sin\frac{1}{x}$ and $f(y) = \frac{1}{y}\sin y$, this solves your problem.



The reasoning is as follows: Take $\varepsilon > 0$, and define $\delta := \delta_g(\delta_f(\varepsilon))$. I claim that if $|x-a| < \delta$ then $|f(g(x))-L| < \varepsilon$. First, because $|x-a| < \delta_g(\delta_f(\varepsilon))$, you have $|g(x)-b| < \delta_f(\varepsilon)$. Next, because $|g(x)-b| < \delta_f(\varepsilon)$, you have $|f(g(x)) - L| < \varepsilon$, as promised.


number theory - Fermat's Little Theorem definition clarification.



Fermat's Little Theorem states that (acc to Gallian book)




$a^p \mod p= a \mod p$.



Does it mean that we get the same remainder when both $a^p$ and $a$ are divided by some prime $p$? I am quite confused about this statement. On Wikipedia,
I read $a^p \equiv a \pmod p$. Kindly help; I am new to this topic in number theory.


Answer



$a^p\equiv a \pmod p\implies a^p-a\equiv 0 \pmod p \implies p\mid (a^p-a)\implies a^p-a=kp\implies a^p=kp+a$



So what is the remainder when $a^p$ is divided by $p$?


summation - Evaluate the sum $1+2+3+...+n$

How do we evaluate the sum:



\begin{equation*}
1+2+...+n
\end{equation*}



I don't need the proof by mathematical induction, but rather the technique to evaluate this series.

calculus - How to show that $\int_0^{\pi/2}\int_0^{\pi/2}\left(\frac{\sin\phi}{\sin\theta}\right)^{1/2}\,d\theta\,d\phi=\pi$?

$$\int_0^{\pi/2}\int_0^{\pi/2}\left(\frac{\sin\phi}{\sin\theta}\right)^{1/2}\,d\theta\,d\phi=\pi$$
Indeed, I tried to solve this integral by complexifying $\sin\theta$ and $\sin\phi$ (using Euler's formula), but it didn't work because the fractional exponent makes the integral difficult to tackle.



I would appreciate any suggestions for solving this integral.

trigonometry - The ratio of the sides of a triangle gives us the angle of a line?

I'm really trying to understand trigonometry and I am having problem understanding some basic things.




We have
$$\sin(1) = 0.8414709848078965$$



This is basically telling me a ratio: the ratio of the side opposite an angle of 57 degrees (1 radian) to the longest side, the hypotenuse (opp/hyp). Converting this into a fraction, I rounded to $0.838$ and got $\frac{419}{500}$. So the opposite side's length is $419$ and the hypotenuse is $500$. $419$ is a prime number, so the fraction can't be reduced (I think). I'm having problems figuring out how, given those lengths, we can make lines of any size we want in canvas.



Maybe I'm not sure I know how ratios work. Back to the triangle: I know that the adjacent side is $652$. Does that come into play?



Is the way that we get the proper sized lines at a specific angle done through the unit circle? Is that the only way to do it?




I actually just set out to make a line with an angle of about $57$ degrees and a length of $200$ for an example. While doing it I remembered from trig tutorials to use the cosine for the $x$ value of the unit circle and the sine for the $y$ value. I just accepted that.



The part in the code



context.lineTo(100 + length * fiftySevenX, 100 + length * fiftySevenY);



How do we know that $\frac{419}{500} \cdot (\text{the length that we want})$ will give us the line that we want? This has to do with the fact that the radius is $1$ in the unit circle and we're scaling it up by $200$, but how are we scaling $\frac{419}{500}$ down?



window.onload = function() {
    var canvas = document.getElementById("canvas");
    var context = canvas.getContext("2d");

    var length = 200;

    var fiftySevenY = Math.sin(1); // 0.8414709848078965
    var fiftySevenX = Math.cos(1); // 0.5403023058681398

    context.beginPath();
    context.arc(100, 100, 4, 0, 2 * Math.PI, false);
    context.fill();

    context.moveTo(100, 100); // start canvas point
    context.lineTo(100 + length * fiftySevenX, 100 + length * fiftySevenY);
    context.stroke();
    context.closePath();
}




Tuesday 28 July 2015

lie algebras - Automorphism group of $\mathfrak{sl}_2$ over a finite field

Let $K$ be a finite field of characteristic $\geq 5$ and $\mathfrak{L} = \mathfrak{sl}_2(K)$ be the set of $2 \times 2$ trace zero matrices over $K$. Let $H_0 = \Bigg\langle \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \Bigg\rangle_K$. Of course, $H_0$ is an abelian Cartan subalgebra of $\mathfrak{L}$. Let $G'$ be the Chevalley group, that is, the group of automorphisms of $\mathfrak{L}$ generated by all $\exp(\operatorname{ad} x_\alpha)$, where $\alpha \neq 0$ is a root.



According to Seligman (Theorem III.4.1, Modular Lie Algebras, 1967), for any abelian Cartan subalgebra $H$ of $\mathfrak{L}$, there exists $\sigma \in G'$ such that $\sigma(H_0) = H$. Concretely, let $E_{ij}$ be the matrix whose position $(i, j)$ is $1$ and zero elsewhere. Then $G'$ is the image of the group generated by the $I + \lambda E_{ij}$, $\lambda \in K$, under the mapping $U \mapsto \sigma_U$, where $\sigma_U: X \mapsto U^{-1} X U$, and we also have $G' \cong \mathrm{PSL}_2(K)$.



Now, let $K = \mathbb{Z}_7$ and $H = \Bigg\langle \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \Bigg\rangle_K$. This is an abelian Cartan subalgebra of $\mathfrak{L}$. However, I cannot find an invertible matrix $U$ over $K$ such that $U^{-1}H_0U = H$. I think it is because $-1$ is a non-square in $K$.



Where is my mistake?

summation - Induction Proof: $\sum_{k=1}^n k^2$



Prove by induction, the following: $$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}6$$ So this is what I have so far:



We will prove the base case for $n=1$: $$\sum_{k=1}^1 1^2 = \frac{1(1+1)(2(1)+1)}6$$ We can see this is true because $1=1$.



Using induction we can assume the statement is true for $n$, we want to prove the statement holds for the case $n+1$:
\begin{align*}
\sum_{k=1}^{n+1} k^2 :& =\sum_{k=1}^n k^2 + (n+1)^2\\
& = \frac{n(n+1)(2n+1)}6 + (n+1)^2 && \text{by I.H}\\

& = \frac{(n+6)(2n+1)(n+1)^3}6 && \text{algebra}
\end{align*}



This is as far as I have gotten in the proof I know that I am almost there, but I am wondering if I missed a step or did the algebra wrong. I know that what I'm supposed to get is $\frac{(n+1)(n+2)(2n+3)}6$ or is that incorrect too? This should be fairly simple, but for some reason it just isn't working for me.


Answer



Without seeing details of your "algebra" step, it is hard to be more specific. Start with: $$\begin{align}
\frac{n(n+1)(2n+1)}{6}+(n+1)^2 &= \frac{n(n+1)(2n+1)}{6} + \frac{6(n+1)^2}{6} \\
&= \frac{n+1}{6}(n(2n+1) +6(n+1)) \\
&= \frac{n+1}{6}\left(2n^2+7n+6\right) = \frac{(n+1)(n+2)(2n+3)}{6},
\end{align}$$ which is exactly the target you stated.


sequences and series - Generating function of fourth powers of harmonic numbers.

Let $x\in (-1,1)$ and let $n\ge 1$ be an integer. Now, let us define a following family of harmonic sums:
\begin{eqnarray}
S^{(n)}(x):= \sum\limits_{m=1}^\infty [H_m]^n \cdot x^m
\end{eqnarray}



It is not hard to see that the following recursion relation holds true:

\begin{eqnarray}
S^{(n+1)}(x)=\int\limits_0^1 Li_1(t) \cdot \frac{d}{d t} \left( S^{(n)}(x t)\right) dt
=-\int\limits_0^1 \frac{S^{(n)}(x t)-S^{(n)}(x)}{1-t} dt
\end{eqnarray}
Now by building up on the results in Generating function for cubes of Harmonic numbers where $S^{(3)}(x)$ was being derived in closed form we obtained the following:
\begin{eqnarray}
&&4(1-x) \cdot S^{(4)}(x)=\\
&&4 \text{Li}_2(x){}^2-12 \text{Li}_4(1-x)-28 \text{Li}_4(x)-32 \text{Li}_4\left(\frac{x}{x-1}\right)+16 \text{Li}_3(x) \log (1-x)+\\
&&16 \zeta (3) \log (1-x)+\frac{5}{2} \log ^4(1-x)-\frac{8}{3} \log (x) \log ^3(1-x)+\pi ^2 \log^2(1-x)+\frac{2 \pi ^4}{15}+\\
&&

4\left\{
\begin{array}{rr}
-\text{Li}_4(1-x)+\frac{1}{24} \log ^4(1-x)+\frac{1}{12} \pi ^2 \log ^2(1-x)+\frac{\pi ^4}{90} & \mbox{if $x\ge 0$}\\
\text{Li}_4\left(\frac{1}{1-x}\right)+\frac{1}{12} \log ^4(1-x)+\frac{1}{6} i \pi \log ^3(1-x)-\frac{1}{12} \pi ^2 \log ^2(1-x)-\frac{\pi ^4}{90} & \mbox{if $x<0$}
\end{array}
\right.
\end{eqnarray}
Running the code below:



x =.; {Normal[

Series[1/(
4 (1 - x)) ((2 \[Pi]^4)/15 + \[Pi]^2 Log[1 - x]^2 +
5/2 Log[1 - x]^4 - 8/3 Log[1 - x]^3 Log[x] +
4 PolyLog[2, x]^2 + 16 Log[1 - x] PolyLog[3, x] -
12 PolyLog[4, 1 - x] - 28 PolyLog[4, x] -
32 PolyLog[4, x/(-1 + x)] + 16 Log[1 - x] Zeta[3] +
4 (\[Pi]^4/ 90 - PolyLog[4, 1 - x] +
1/12 \[Pi]^2 Log[1 - x]^2 + 1/24 Log[1 - x]^4 )), {x, 0, 5},
Assumptions -> 0 < x < 1]],
Normal[Series[

1/(4 (1 - x)) ((2 \[Pi]^4)/15 + \[Pi]^2 Log[1 - x]^2 +
5/2 Log[1 - x]^4 - 8/3 Log[1 - x]^3 Log[x] +
4 PolyLog[2, x]^2 + 16 Log[1 - x] PolyLog[3, x] -
12 PolyLog[4, 1 - x] - 28 PolyLog[4, x] -
32 PolyLog[4, x/(-1 + x)] + 16 Log[1 - x] Zeta[3] +
4 (-(\[Pi]^4/90) + PolyLog[4, 1/(1 - x)] -
1/12 \[Pi]^2 Log[1 - x]^2 + 1/12 Log[1 - x]^4 +
1/6 I Pi Log[1 - x]^3)), {x, 0, 5},
Assumptions -> -1 < x < 0]]}



produces the following result:



{x + (81 x^2)/16 + (14641 x^3)/1296 + (390625 x^4)/20736 + (
352275361 x^5)/12960000,
x + (81 x^2)/16 + (14641 x^3)/1296 + (390625 x^4)/20736 + (
352275361 x^5)/12960000}


as it should be.




Now, apart from the obvious question of what the result looks like for generic values of $n$, I would like to learn more about the motivation for deriving those Euler sums. Clearly, I can see for myself that it is very addictive to delve into those calculations, and in most cases closed-form results can really be obtained. However, what is the use of all that besides pure fun and exercise in integral calculus?

Galois Field GF(4)

Question:
Why does the table of GF(4) look like the one below? I know it has to do with the fact that 4 is composite.
Let GF(4) = $\{0,1,B,D\}$



Addition:
$$
\begin{array}{c|cccc}
+ & 0 & 1 & B & D \\
\hline
0 & 0 & 1 & B & D \\
1 & 1 & 0 & D & B \\
B & B & D & 0 & 1 \\
D & D & B & 1 & 0
\end{array}
$$



Multiplication:
$$\begin{array}{c|cccc}
\cdot & 0 & 1 & B & D \\
\hline
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & B & D \\
B & 0 & B & D & 1 \\
D & 0 & D & 1 & B
\end{array}$$

algorithms - Why does the method to find out log and cube roots work?



To find the cube root of any number with a simple calculator, the following method was given to us by our teacher; it is accurate to at least one tenth.



1) Take the number $X$, whose cube root needs to be found, and take its square root 13 times (or 10 times), i.e. $\sqrt{\sqrt{\sqrt{\sqrt{....X}}}}$



2) Next, subtract $1$, divide by $3$ (for a cube root; in general divide by $n$ for an $n$th root), and add $1$.




3) Then square the resulting number (say $c$) 13 times (or 10 times if you had taken the root 10 times), i.e. $c^{2^{2^{....2}}}=c^{2^{13}}$. This yields the answer.



I am not sure whether taking the square roots and the squares is limited to 10/13 times, but what I know is that this method does yield answers accurate to at least one tenth.



For finding the log, the method is similar:-



1) Take the square root of the number 13 times, subtract $1$, and multiply by $3558$. This yields the answer.





Why do these methods work? What is the underlying principle behind
this?



Answer



Let's use these classical formulae :



$$e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n$$
$$\ln\,x=\lim_{n\to\infty}n\left(x^{1/n}-1\right)$$



to get (replacing the limit by a large enough value of $n$: $N=2^{13}$):

\begin{align}
\sqrt[3]{x}=e^{\left(\ln x/3\right)}&\approx \left(1+\frac {\ln x/3}N\right)^N\\
&\approx \left(1+\frac {N\left(x^{1/N}-1\right)}{3\,N}\right)^N\\
&\approx \left(1+\frac {\left(x^{1/N}-1\right)}3\right)^N\\
\end{align}



Concerning the decimal logarithm we have :
$$\log_{10}\,x=\frac{\ln\,x}{\ln\,10}\approx \frac N{\ln\,10}\left(x^{1/N}-1\right)$$



For $N=2^{13}$ we may (as indicated by peterwhy) approximate the fraction with $$ \frac N{\ln\,10}=\frac {2^{13}}{\ln\,10}\approx 0.4343\times 8192\approx 3558$$




Hoping this clarified things,


real analysis - T/F: a smooth function that grows faster than any linear function grows faster than $x^{1+\epsilon}$



Prove or find a counterexample to the claim that a smooth function that grows faster than any linear function grows faster than $x^{1+\epsilon}$ for some $\epsilon>0$.



My attempt: I understand that the first part of the problem claims $\lim_{x\rightarrow \infty}\frac{g(x)}{kx} = \infty, \forall k>0$. We want to show, then, that $\exists \epsilon >0$ and constant $l>0$ such that $\lim_{x\rightarrow \infty}\frac{g(x)}{lx^{1+\epsilon}} = \infty$.



I've tried using the definition of limits, but I get stuck trying to bound the function $\frac{1}{x^\epsilon}$. Also, I've tried using L'Hopital's rule to no avail. Any ideas?



Any help is appreciated!



Answer



Hint: It is false. Find a counterexample.



Followup hint:




The function $f\colon(0,\infty)\to\mathbb{R}$ defined by $f(x) = x\ln x$ is such a counterexample.




Followup followup hint (pretty much the solution, with some details to fill in):





For any $a>0$, $\frac{x\ln x}{a x} = \frac{1}{a}\ln x \xrightarrow[x\to\infty]{} \infty$. However, for any fixed $\epsilon > 0$, $$\frac{x\ln x}{x^{1+\epsilon}} = \frac{\ln x}{x^\epsilon}=\frac{1}{\epsilon}\frac{\ln(x^\epsilon)}{x^\epsilon} = \frac{1}{\epsilon}\frac{\ln t}{t}$$ for $t=x^\epsilon \xrightarrow[x\to\infty]{}\infty$.



sequences and series - Bernoulli's representation of Euler's number, i.e. $e=\lim\limits_{x\to \infty} \left(1+\frac{1}{x}\right)^x$





Possible Duplicates:
Finding the limit of $n/\sqrt[n]{n!}$
How come such different methods result in the same number, $e$?







I've seen this formula several thousand times: $$e=\lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x $$



I know that it was discovered by Bernoulli when he was working with compound interest problems, but I haven't seen the proof anywhere. Does anyone know how to rigorously demonstrate this relationship?



EDIT:
Sorry for my lack of knowledge in this, I'll try to state the question more clearly. How do we prove the following?



$$ \lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x = \sum_{k=0}^{\infty}\frac{1}{k!}$$


Answer




From the binomial theorem



$$\left(1+\frac{1}{n}\right)^n = \sum_{k=0}^n {n \choose k} \frac{1}{n^k} = \sum_{k=0}^n \frac{n}{n}\frac{n-1}{n}\frac{n-2}{n}\cdots\frac{n-k+1}{n}\frac{1}{k!}$$



but as $n \to \infty$, each term in the sum increases towards a limit of $\frac{1}{k!}$, and the number of terms to be summed increases so



$$\left(1+\frac{1}{n}\right)^n \to \sum_{k=0}^\infty \frac{1}{k!}.$$


calculus - Bijection between an open and a closed interval



Recently, I answered this problem:





Given $a<b$, construct a bijection from $]a,b[$ to $[a,b]$.




using an "iterative construction" (see below the rule).



My question is: is it possible to solve the problem finding a less exotic function?



I mean: I know such a bijection cannot be monotone, nor globally continuous; but my $f(x)$ has a lot of jumps... Hence, can one do without so many discontinuities?







W.l.o.g. assume $a=-1$ and $b=1$ (the general case can be handled by translation and rescaling).
Let:



(1) $X_0:=]-1,-\frac{1}{2}] \cup [\frac{1}{2} ,1[$, and



(2) $f_0(x):=\begin{cases}
-x-\frac{3}{2} &\text{, if } -1<x\leq-\frac{1}{2}\\
-x+\frac{3}{2} &\text{, if } \frac{1}{2}\leq x<1\\
0 &\text{, otherwise} \end{cases}$,



so that the graph of $f_0(x)$ is made of two segments (parallel to the line $y=-x$) and one segment lying on the $x$ axis; then define by induction:



(3) $X_{n+1}:=\frac{1}{2} X_n$, and



(4) $f_{n+1}(x):= \frac{1}{2} f_n(2 x)$



for $n\in \mathbb{N}$ (hence $X_n=\frac{1}{2^n} X_0$ and $f_n=\frac{1}{2^n} f_0(2^n x)$).




Then the function $f:]-1,1[\to \mathbb{R}$:



(5) $f(x):=\sum_{n=0}^{+\infty} f_n(x)$



is a bijection from $]-1,1[$ to $[-1,1]$.



Proof: i. First of all, note that $\{ X_n\}_{n\in \mathbb{N}}$ is a pairwise disjoint covering of $]-1,1[\setminus \{ 0\}$. Moreover the range of each $f_n(x)$ is $f_n(]-1,1[)=[-\frac{1}{2^n}, -\frac{1}{2^{n+1}}[\cup \{ 0\} \cup ]\frac{1}{2^{n+1}}, \frac{1}{2^n}]$.



ii. Let $x\in ]-1,1[$. If $x=0$, then $f(x)=0$ by (5). If $x\neq 0$, then there exists only one $\nu\in \mathbb{N}$ s.t. $x\in X_\nu$, hence $f(x)=f_\nu (x)$. Therefore $f(x)$ is well defined.




iii. By i and ii, $f(x)\lesseqgtr 0$ for $x\lesseqgtr 0$ and the range of $f(x)$ is:



$f(]-1,1[)=\bigcup_{n\in \mathbb{N}} f_n(]-1,1[) =[-1,1]$,



therefore $f(x)$ is surjective.



iv. On the other hand, if $x\neq y \in ]-1,1[$, then: if there exists $\nu \in \mathbb{N}$ s.t. $x,y\in X_\nu$, then $f(x)=f_\nu (x)\neq f_\nu (y)=f(y)$ (for $f_\nu (x)$ restricted to $X_\nu$ is injective); if $x\in X_\nu$ and $y\in X_\mu$, then $f(x)=f_\nu (x)\neq f_\mu(y)=f(y)$ (for the restrictions of $f_\nu (x)$ to $X_\nu$ and of $f_\mu(x)$ to $X_\mu$ have disjoint ranges); finally if $x=0\neq y$, then $f(x)=0\neq f(y)$ (because of ii).
Therefore $f(x)$ is injective, hence a bijection between $]-1,1[$ and $[-1,1]$. $\square$


Answer




It seems that your construction is fine, however coarse and crude. We usually give this question in an introductory course on set theory; the solution is quite elegant too.



Firstly, it is very clear that this function cannot be continuous. Consider a sequence approaching the ends of the interval, the function cannot be continuous there.



Secondly, without the loss of generality assume the interval is $[0,1]$. Define $f(x)$ as following:
$$f(x) = \begin{cases}
\frac{1}{2} & \mbox{if } x = 0\\
\frac{1}{2^{n+2}} & \mbox{if } x = \frac{1}{2^n}\\
x & \mbox{otherwise}
\end{cases}$$



It is relatively simple to show that this function is as needed.


Monday 27 July 2015

calculus - Why do the infinitely many infinitesimal errors from each term of an infinite Riemann sum still add up to only an infinitesimal error?



Ok, so after extensive research on the topic of how we deal with the idea of an infinitesimal amount of error, I learned about the standard part function as a way to deal with discarding this infinitesimal difference $\Delta x$ by rounding off to the nearest real number, which is zero. I've never taken nonstandard analysis before, but here's my question.



When you take a Riemann sum, you are approximating an area by rectangles, and each of those rectangles has an error in approximating the actual area under the curve for the corresponding part of the graph. As $\Delta x$ becomes infinitesimal, the width of these rectangles becomes infinitesimal, so each error becomes infinitesimal. But since there are infinitely many rectangles in that case, why is it that the total error from all of them still infinitesimal? In other words, shouldn't an infinite amount of infinitesimals add up to a significant amount?


Answer




If I've understood the question correctly, here is a heuristic explanation. Note that this is not rigorous, since to make your question rigorous you have to give some precise definition of what you mean by "the exact area", which is not at all easy to define in general.



Let us assume we are integrating a continuous function $f(x)$ from $0$ to $1$ by using a Riemann sum with infinitesimal increment $\Delta x$. Let us also assume for simplicity that $f$ is increasing (the general case works out essentially the same way but is a little more complicated to talk about). So we are approximating "the area under $f$" by replacing the region under the graph of $f$ from $x=c$ to $x=c+\Delta x$ by a rectangle of height $f(c)$, for $1/\Delta x$ different values of $c$. Now since $f$ is increasing, the difference between our rectangle of height $f(c)$ and the actual area under the graph of $f$ from $c$ to $c+\Delta x$ is at most $\Delta x(f(c+\Delta x)-f(c))$. But since $f$ is (uniformly) continuous, $f(c+\Delta x)-f(c)$ is infinitesimal. So our error is an infinitesimal quantity times $\Delta x$.



So although we are adding up $1/\Delta x$ (an infinite number) different errors to get the total error, each individual error is not just infinitesimal but infinitesimally smaller than $\Delta x$. So it is reasonable to expect that the sum of all of the errors is still infinitesimal.


calculus - Evaluating $\lim_{x \to 0} \frac{\sqrt{1- \cos x^2}}{1 - \cos x}$




I'm trying to evaluate the following limit:
$$\lim_{x \to 0} \frac{\sqrt{1- \cos x^2}}{1 - \cos x}$$
I've tried multiplying by the conjugate and variable substitution. I had a look at Wolfram Alpha and it said that $\lim_{x \to 0} \frac{\sqrt{1- \cos x^2}}{1 - \cos x}=\sqrt{2}$, though I'm interested in the process used to achieve that.



Any help with actually finding the limit would be much appreciated.



Thanks


Answer



Note: I am using the limit $\lim_{\theta \to 0}\frac{\sin \theta}{\theta}=1$ and the identity $1-\cos 2A=2\sin^2 A$.




\begin{align*}
\lim_{x \to 0} \frac{\sqrt{1- \cos x^2}}{1 - \cos x} & = \lim_{x \to 0} \frac{\sqrt{2 \sin^2 \left(x^2/2\right)}}{2 \sin^2 \left(x/2\right)}\\
& = \lim_{x \to 0} \frac{\sin \left(x^2/2\right)}{\sqrt{2}\sin^2 \left(x/2\right)}\\
& = \frac{1}{\sqrt{2}}\lim_{x \to 0} \frac{\sin \left(x^2/2\right)}{x^2/2}\cdot\frac{(x/2)^2}{\sin^2 \left(x/2\right)}\cdot 2\\
& =\sqrt{2}.
\end{align*}


calculus - $\sum_{n=2}^{\infty} \frac{1}{n \log n}$: prove the series diverges using the comparison test

Prove that $$\sum_{n=2}^{\infty} \frac{1}{n \log n}$$ diverges. I have done this problem using the Cauchy integral test and the condensation test, but I want to do it by the comparison test or the limit comparison test. Any hint about that?



Thanks in advance.

Sunday 26 July 2015

calculus - Find $\sum_{k=1}^{\infty}a^k \left(\frac{1}{k} - \frac{1}{k+1}\right)$



I want to find $$\sum_{k=1}^{\infty}a^k \left(\frac{1}{k} - \frac{1}{k+1}\right)$$



I'm not sure what to do here. Without the $a^k$ term, it's a simple telescoping series, but that term changes everything. I tried writing out the first few terms but could not come up with anything that doesn't turn right back into the original series.


Answer




Hint:



Using What is the correct radius of convergence for $\ln(1+x)$?,



for $-1\le x<1,$



$$\ln(1-x)=-\sum_{k=1}^\infty\dfrac{x^k}k$$



Now $$a^k\left(\dfrac1k-\dfrac1{k+1}\right)=\dfrac{a^k}k-\dfrac1a\cdot\dfrac{a^{k+1}}{k+1}$$


combinatorics - Prove the identity $\sum^{n}_{k=0}\binom{m+k}{k} = \binom{n+m+1}{n}$




Let $n,m \in \mathbb{N}$. Prove the identity $$\sum^{n}_{k=0}\binom{m+k}{k} = \binom{n+m+1}{n}$$





This seems very similar to Vandermonde identity, which states that for nonnegative integers we have $\sum^{m}_{k=0}\binom{m}{k}\binom{n}{r-k} = \binom{m+n}{r}$. But, clearly this identity is somehow different from it. Any ideas?


Answer



We can write $\displaystyle \sum^{n}_{k=0}\binom{m+k}{k} = \sum^{n}_{k=0}\binom{m+k}{m} = \binom{m+0}{m}+\binom{m+1}{m}+........+\binom{m+n}{m}.$



Now use that the coefficient of $x^r$ in $(1+x)^{t}$ is $\displaystyle \binom{t}{r}.$



So we can write the above series as the



Coefficient of $x^m$ in $$\displaystyle \left[(1+x)^m+(1+x)^{m+1}+..........+(1+x)^{m+n}\right] = \frac{(1+x)^{m+n+1}-(1+x)^{m}}{(1+x)-1} = \frac{(1+x)^{m+n+1}-(1+x)^{m}}{x}$$




Above we have used the sum of a geometric progression.



So we get the coefficient of $x^{m+1}$ in $\displaystyle \left[(1+x)^{m+n+1}-(1+x)^{m}\right] = \binom{m+n+1}{m+1} = \binom{m+n+1}{n}.$


abstract algebra - Is the group isomorphism $\exp(\alpha x)$ from the group $(\mathbb{R},+)$ to $(\mathbb{R}_{>0},\times)$ unique?





I'm having a problem trying to find the simplest way of proving this, which has most probably been solved a hundred of times but I am unable to find a good reference.



I have two groups, $(\mathbb{R},+)$ and $(\mathbb{R}_{>0},\times)$. I am trying to prove that the only isomorphisms between them are those in the class $F = \{f: f(x) = \exp(\alpha x) \text{ for some } \alpha \in \mathbb{R}_{>0}\}$. Existence is easy to prove; what I'm having trouble with is a clean algebraic uniqueness proof.



Does anyone know the proof or a reference containing this proof?




Thanks in advance!


Answer



Wait, it may not be true.



Consider $(\Bbb R,+)$ as an infinite (continuum) dimension vector space over $\Bbb Q$, and fix a basis (Hamel basis).



Then, any automorphism of this vector space (for example permuting the basis) will be an automorphism of $(\Bbb R,+)$, and you can compose this with any exponential.


linear algebra - Prove that every $A \in M_n\left(\mathbb{C}\right)$ is similar to a matrix with at most one non-zero element in the first column



I need that prove that every $ A \in M_n\left ( \mathbb{C} \right )$ is similar to a matrix $B$ where $B$'s first column is of the form $\begin{pmatrix}\lambda\\0\\\vdots\\ 0\\ \end{pmatrix}$




where $M_n\left ( \mathbb{C} \right )$ is the set of all square matrices over $\mathbb{C}$.



I haven't been able to make much progress with this question - any help would be appreciated.


Answer



Every complex matrix is triangularizable, because its characteristic polynomial factorises completely into linear factors. Hence $A$ is similar to an upper-triangular matrix, and the first column of such a matrix has the desired form.


algebra precalculus - $16$ men can finish $80\%$ of a work in $24$ days



$16$ men can finish $80\%$ of the work in $24$ days. When should $8$ men leave the work so that the whole work is completed in $40$ days? (Answer: $20$ days.)




My attempt:



In $24$ days, $16$ men can do $\dfrac {4}{5}$ of the work.
In $1$ day, $16$ men can do $\dfrac {1}{30}$ of the work.
In $1$ day, $1$ man can do $\dfrac {1}{480}$ of the work.



Now, what is the simplest method to finish from here?


Answer



Let $t$ be the number of days when the 8 men leave. Then, the amount of work $w(u)$ finished after working for $u$ days is given by




$$w(u)=\frac{(16-8)\cdot u}{480}+\frac{8t}{480}$$



(assuming that $u>t$). The first term corresponds to the $16-8$ men who work all $u$ days. The second term is the additional $8$ men who work only the first $t$ days. Now, solve for $t$ in $w(40)=1$.



Edit: How to get $w(u)$: As another example, assume $p$ men work for $u$ days. As you have already pointed out, one man does $\frac{1}{480}$ of the entire work. So, if $p$ men work for $u$ days, you get done $\frac{pu}{480}$ of the work. You can write $w(u)=\frac{pu}{480}$ to make the number of days worked $u$ a variable.



Now for my expression of $w(u)$: I've split the amount of work done into two parts, depending on the number of men working. You can assume that 16-8 men work for the entire time, i.e. $u$ days. This is the first term. In addition, in the first $t$ days 8 extra men work (which will be removed from the team after $t$ days). This corresponds to the second term.


functional equations - If $f(xy)=f(x)f(y)$ then show that $f(x) = x^t$ for some t




Let $f(xy) =f(x)f(y)$ for all $x,y\geq 0$. Show that $f(x) = x^p$ for some $p$.





I am not very experienced with proofs. If we let $g(x)=\log (f(x))$ then this is the same as $g(xy) = g(x) + g(y)$.



I looked up the hint and it says let $g(x) = \log f(a^x) $



The wikipedia page for functional equations only states the form of the solutions without proof.



Attempt
Using the hint (which was like pulling a rabbit out of the hat)



Restricting the codomain $f:(0,+\infty)\rightarrow (0,+\infty)$

so that we can define the real function $g(x) = \log f(a^x)$ and we
have $$g(x+y) = g(x)+ g(y)$$



i.e. $g(x) = xg(1)$, as $g(x)$ is continuous (assuming $f$ is).



Letting $\log_a f(a) = p$ we get $f(a^x) =a^{px}$. I do not have a rigorous argument, but I think I can conclude that $f(x) = x^p$ (please fill any holes or unspecified assumptions). Different solutions are invited.


Answer



So, we assume $f$ is continuous. Letting $g(x) = \ln(f(a^x))$, we get
$$
\begin{align*}

g(x+y) &= \ln(f(a^{x+y})) = \ln(f(a^xa^y)) = \ln(f(a^x)f(a^y))\\
&= \ln(f(a^x)) + \ln(f(a^y))\\
&= g(x)+g(y).
\end{align*}$$
So $g$ satisfies the Cauchy functional equation; if you assume $f$ is continuous, then so is $g$, hence $g(x) = xg(1)$ for all $x\gt 0$.



Since $g(1) = \ln(f(a))$, we have
$$f(a^x) = e^{g(x)} = e^{g(1)x} = (e^{x})^{g(1)}.$$
Given $r\in \mathbb{R}$, $r\gt 0$, we have $r = a^{\log_a(r)}$, hence
$$\begin{align*}

f(r) &= f\left(a^{\log_a(r)}\right)\\
&= \left(e^{\log_a(r)}\right)^{g(1)}\\
&= \left(e^{\ln(r)/\ln(a)}\right)^{g(1)}\\
&= \left(e^{\ln(r)}\right)^{g(1)/\ln(a)}\\
&= r^{g(1)/\ln(a)},
\end{align*}$$
where we have used the change-of-base formula for the logarithm,
$$\log_a(r) = \frac{\ln r}{\ln a}.$$
Finally, since $g(1) = \ln(f(a))$, we have
$$f(r) = r^{\ln(f(a))/\ln(a)}.$$

As this works for any positive $a$, $a\neq 1$, taking $a=e$ we get
$$f(r) = r^{\ln(f(e))}.$$


analysis - Isn't that proof going the wrong way?




I'm currently working on the very well written book Understanding Analysis, by Stephen Abbott.



But I found a proof that looks wrong, I think that it going the wrong way (showing that A $\implies$ B while it should demonstrate that $B \implies A$).



Here is the theorem :



A function f : A $\to$ R, fails to be uniformly continuous if there exists a particular $\epsilon_0$ and two sequences $(x_n)$ and $(y_n)$ such that :
$\lim(|x_n - y_n|) = 0$ and $|f(x_n) - f(y_n)| \geq \epsilon_0 $.




Here is the proof :



Negating the definition of uniform continuity gives the following:
A function $f : A → R$ fails to be uniformly continuous on A if there exists $ε_0 > 0$ such that for all $δ > 0$ we can find two points $x$ and $y$ satisfying $|x − y| < δ$ but with $|f(x) − f(y)| ≥ \epsilon_0$.



The fact that no $δ$ “works” means that if we were to try $δ = 1$, we would be able to find points $x_1$ and $y_1$ where $|x_1 − y_1| < 1$ but $|f(x_1) − f(y_1)| ≥ ε_0.$
In a similar way, if we try $δ = 1/n$ where $n ∈ N$, it follows that there exist points $x_n$ and $y_n$ with $|x_n − y_n| < 1/n$ but where $|f(x_n) − f(y_n)| ≥ \epsilon_0$. The sequences $(x_n)$ and $(y_n)$ are precisely the ones described in theorem.



Comment :




I think that the proof demonstrated that:
$f$ not uniformly continuous $\implies$ $(x_n)$ and $(y_n)$ exist,
while it should have been the other way around.



Do you agree that the proof is wrong ?


Answer



Note that "$f\colon A\to \mathbb R$ is uniformly continuous" is in fact by definition equivalent to
$$\forall \epsilon>0\colon \exists \delta>0\colon\forall x,y\colon (|x-y|<\delta\to |f(x)-f(y)|<\epsilon )$$
Hence the negation
$$\exists \epsilon>0\colon \forall \delta>0\colon\exists x,y\colon (|x-y|<\delta\land|f(x)-f(y)|\ge\epsilon )$$

is also equivalent to the negation "$f\colon A\to \mathbb R$ fails to be uniformly continuous".
So, on the one hand, you may have tripped over the custom that "if" in a definition is conceptually an "iff"; on the other hand, the argument does indeed show something stronger than "if", namely "iff".


sequences and series - How to solve this multiple summation?



How to solve this summation ?



$$\sum_{0\le x_1\le x_2\le\cdots\le x_n \le n}\binom{k+x_1-1}{x_1}\binom{k+x_2-1}{x_2}\cdots\binom{k+x_n-1}{x_n}$$
where $k$, $n$ are known.



Due to hockey-stick identity ,

$$\sum_{i=0}^n\binom{i+k-1}{i}=\binom{n+k}{k}$$


Answer



Suppose we seek to evaluate
$$\sum_{0\le x_1\le x_2\le\cdots \le x_n \le n}
{k+x_1-1\choose x_1}
{k+x_2-1\choose x_2}
\cdots
{k+x_n-1\choose x_n}.$$



Using the Polya Enumeration Theorem and the cycle index of the symmetric group, this becomes
$$Z(S_n)
\left(Q_0+Q_1+Q_2+\cdots +Q_n\right)$$



evaluated at
$$Q_m = {k-1+m\choose m}.$$



Now the OGF of the cycle index $Z(S_n)$ of the symmetric group is
$$G(z) = \exp
\left(a_1 \frac{z}{1}
+ a_2 \frac{z^2}{2}
+ a_3 \frac{z^3}{3}
+ \cdots \right).$$



The substituted generating function becomes
$$H(z) =
\exp
\left(\sum_{p\ge 1} \frac{z^p}{p}
\sum_{m=0}^n {k-1+m\choose m}^p\right)
= \exp
\left(\sum_{m=0}^n
\sum_{p\ge 1} \frac{z^p}{p}
{k-1+m\choose m}^p\right)
\\ = \exp
\left(\sum_{m=0}^n
\log\frac{1}{1-{k-1+m\choose m} z}\right)
= \prod_{m=0}^n
\frac{1}{1-{k-1+m\choose m} z}.$$



Some thought shows that this could have been obtained by inspection.


We use partial fractions by residues on this function which we
re-write as follows:
$$(-1)^{n+1} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\prod_{m=0}^n
\frac{1}{z-1/{k-1+m\choose m}}.$$



Switching to residues we obtain
$$(-1)^{n+1} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\sum_{m=0}^n
\frac{1}{z-1/{k-1+m\choose m}}
\\ \times \prod_{p=0, \; p\ne m}^n
\frac{1}{1/{k-1+m\choose m}-1/{k-1+p\choose p}}.$$



Preparing to extract coefficients we get
$$(-1)^{n} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\sum_{m=0}^n
\frac{{k-1+m\choose m}}{1-z{k-1+m\choose m}}
\\ \times \prod_{p=0, \; p\ne m}^n
\frac{1}{1/{k-1+m\choose m}-1/{k-1+p\choose p}}.$$




Doing the coefficient extraction we obtain
$$(-1)^{n} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\sum_{m=0}^n
{k-1+m\choose m}^{n+1}
\\ \times \prod_{p=0, \; p\ne m}^n
\frac{1}{1/{k-1+m\choose m}-1/{k-1+p\choose p}}
\\ = (-1)^{n} \prod_{m=0}^n {k-1+m\choose m}^{-1}
\sum_{m=0}^n
{k-1+m\choose m}^{2n+1}
\\ \times \prod_{p=0, \; p\ne m}^n
\frac{1}{1-{k-1+m\choose m}/{k-1+p\choose p}}
\\ = (-1)^{n}
\sum_{m=0}^n
{k-1+m\choose m}^{2n}
\prod_{p=0, \; p\ne m}^n
\frac{1}{{k-1+p\choose p}-{k-1+m\choose m}}.$$



The complexity here is good since the formula has a quadratic number of terms in $n.$ The number of partitions that a total enumeration would have to consider is given by



$$Z(S_n)
\left(Q_0+Q_1+Q_2+\cdots +Q_n\right)$$



evaluated at $Q_0 = Q_1 = Q_2 = \cdots = Q_n = 1$ which gives the
substituted generating function



$$A(z) = \exp\left((n+1)\log\frac{1}{1-z}\right)
= \frac{1}{(1-z)^{n+1}}.$$




This yields for the total number of partitions
$${n+n\choose n} = {2n\choose n}$$
which by Stirling has asymptotic
(consult OEIS A000984)
$$\frac{4^n}{\sqrt{\pi n}} \quad\text{and}\quad
n^2\in o\left(\frac{4^n}{\sqrt{\pi n}}\right).$$



For example when $n=24$ and $k=5$ we would have to consider
${48\choose 24} = 32{,}247{,}603{,}683{,}100$ partitions but the formula readily yields
$$424283851839410438109261697709077430045882514844\\
665327684062172306602549601581316037895634544256\\
47212676100.$$



Additional exploration of these formulae may be undertaken using the
following Maple code which contrasts total enumeration and the closed
formula.





A :=
proc(n, k)
option remember;
local iter;

    iter :=
    proc(l)
        if nops(l) = 0 then
            add(iter([q]), q=0..n)
        elif nops(l) < n then
            add(iter([op(l), q]), q=op(-1, l)..n)
        else
            mul(binomial(k-1+l[q], l[q]), q=1..n);
        fi;
    end;

    iter([]);
end;

EX :=
proc(n, k)
option remember;

    (-1)^n*add(binomial(k-1+m,m)^(2*n)*
        mul(1/(binomial(k-1+p,p)-binomial(k-1+m,m)), p=0..m-1)*
        mul(1/(binomial(k-1+p,p)-binomial(k-1+m,m)), p=m+1..n),
        m=0..n);
end;

Saturday 25 July 2015

elementary set theory - Bijection between SxS and S where S is an infinite string of 1's and 0's

Denote $S = \{(a_1, a_2, a_3, \dots) \mid a_i \text{ is } 0 \text{ or } 1\}$.



So I know if I think of one S as $\{(a_1,a_2,a_3,\dots)\}$ and another S as $\{(b_1,b_2,b_3,\dots)\}$, I can create a function that spits out something like $\{(a_1,b_1, a_2,b_2, a_3, b_3, \dots)\}$. I've seen this sort of thing before when showing that (0,1)x(0,1) bijects to (0,1), but I'm having trouble proving that such a function is injective and surjective. Thanks.

trigonometry - Is π unusually close to 7920/2521?

EDIT: One can look at a particular type of approximation to $\pi$ based on comparing radians to degrees. If you try to approximate $\pi$ by fractions of the form $180n/(360k+1)$, you can find that $\pi \approx \frac{7920}{2521}$. And this is pretty close: the continued fraction expansion of $\pi$ is $(3; 7,15,1,292,\ldots)$, and the continued fraction for the approximation is $(3;7,16,4,2,2)$.



However, if you switch out 360 and instead divide the circle into $r$ pieces, you are looking for approximations of $\pi$ of the form $rn/(2(rk+1))$ as $n$ and $k$ vary.




  • Is this particular approximation for $\pi$ you get, based on 360 degrees, unusually close for the size ($n=44$, $k=7$) present?




I feel like I have to tell a story to justify this question, because otherwise it's a little weird -- please bear with me.



Some time (a long time) ago, I was in a programming class for grade-school kids and we were learning to do some basic graphics. We were drawing fireworks, and encountered the problem that the "bursts" seemed to have their points going off in random directions. You can guess why: the standard library functions for sin and cos were expecting radians instead of degrees.



Our teacher told us that there was some conversion factor, but didn't remember it. So I set the computer on a loop to draw lines out from the center of a circle, to see if I could figure out what the conversion factor was. It animated drawing lines at angles of 1 radian, 2 radians, 3 radians, ... 360 radians, and then cleared the screen and started over drawing lines at 2 radians, 4 radians, 6 radians ... and then again drawing at 3 radians, 6 radians, 9 radians, ...



Of course, this was kind of misguided, but it was interesting to watch for a few minutes. (If I was more motivated, I would try and animate it again, but those skills are very rusty.) And then, at the multiples of 44 radians, it suddenly worked: it visibly drew a nice, sweeping path all the way around the circle in one pass.



I tweaked around with it a little and found that while (cos(360*44),sin(360*44)) were pretty close to (1,0), a better approximation was had if you used about 43.99975 instead. I still remember this number all this time later.




Now I know radians a little better, and abstractly know that this is because $2\pi(7 + \tfrac{1}{360})$ is very close to the integer 44. Another way to say this is by trying to find approximations as I stated above. As I learned more math I understood this less and less, because the answer seems much closer than it deserves to be based on how small the numbers in question are.

probability - Transformation of Random Variable $Y = X^2$



I'm learning probability, specifically transformations of random variables, and need help to understand the solution to the following exercise:




Consider the continuous random variable $X$ with probability density function $$f(x) = \begin{cases} \frac{1}{3}x^2 \quad -1 \leq x \leq 2, \\ 0 \quad \quad \text{elsewhere}. \end{cases}$$ Find the cumulative distribution function of the random variable $Y = X^2$.




The author gives the following solution:




For $0 \leq y \leq 1: F_Y(y) = P(Y \leq y) = P(X^2 \leq y) \stackrel{?}{=} P(-\sqrt y \leq X \leq \sqrt y) = \int_{-\sqrt y}^{\sqrt y}\frac{1}{3}x^2\, dx = \frac{2}{9}y\sqrt y.$



For $1 \leq y \leq 4: F_Y(y) = P(Y \leq y) = P(X^2 \leq y) \stackrel{?}{=} P(-1 \leq X \leq \sqrt y) = \int_{-1}^{\sqrt y}\frac{1}{3}x^2\, dx = \frac{1}{9} + \frac{1}{9}y\sqrt y.$



For $y > 4: F_{Y}(y) = 1.$






Previous to this exercise, I've managed to follow the solutions of two similar (obviously simpler) problems for a strictly increasing and strictly decreasing function of $X$, respectively. However in this problem, I don't understand the computations being done, specifically:





  • How does the three intervals $0 \leq y \leq 1$, $1 \leq y \leq 4$ and $y > 4$ are determined? In the two previous problems I've encountered, we only considered one interval which was identical to the interval where $f(x)$ was non-zero.

  • In the case where $0 \leq y \leq 1$, why does $P(X^2 \leq y) = P(-\sqrt y \leq X \leq \sqrt y)$ and not $P(X \leq \sqrt y)$? I have put question marks above the equalities that I don't understand.



I think I have not understood the theory well enough. I'm looking for an answer that will make me understand the solution to this problem and possibly make the theory clearer.


Answer



Let's start by seeing what the density function $f_X$ of $X$ tells us about the cumulative distribution function $F_X$ of $X$. Since $f_X(x) = 0$ for $-\infty < x < -1$, we see that
$$F_X(x) = \int_{-\infty}^x f_X(t) \, dt \equiv 0 $$
in this range. Similarly, since $f_X(x) = 0$ in the range $2 < x < \infty$, we see that

$$F_X(x) = \int_{-\infty}^x f_X(t) \, dt = \int_{-\infty}^{\infty} f_X(t) \, dt \equiv 1$$
in this range. In other words, the random variable is "supported on the interval $[-1,2]$" in the sense that $P(X \notin [-1,2]) = 0$.



Now let us consider $Y = X^2$. This variable is clearly non-negative and since $X$ is supported on $[-1,2]$, we must have that $Y$ is supported on $[0, \max((-1)^2,2^2)] = [0,4]$. This is intuitively clear because the variable $X$ (with probability $1$) takes values in $[-1,2]$ and so $X^2$ takes values in $[0,\max((-1)^2,(2)^2)]$. So we only need to understand $F_Y(y)$ in the range $y \in [0,4]$. Now, we always have



$$ F_Y(y) = P(Y \leq y) = P(X^2 \leq y) = P(-\sqrt{y} \leq X \leq \sqrt{y}) = \int_{-\sqrt{y}}^{\sqrt{y}} f_X(t) \, dt $$



but since $f_X$ is defined piecewise, to proceed at this point we need to analyze several cases. We already know that $F_Y(y) = 0$ if $y \leq 0$ and $F_Y(y) = 1$ if $y \geq 4$.



If $0 \leq y \leq 1$ then $[-\sqrt{y},\sqrt{y}]$ is contained in $[-1,1]$ and on $[-1,1]$ the density function is $f_X(x) = \frac{1}{3}x^2$ so we can write




$$ F_Y(y) = \int_{-\sqrt{y}}^{\sqrt{y}} \frac{1}{3} t^2 \, dt. $$



However, if $1 < y \leq 4$ then $-\sqrt{y} < -1$ and so the interval of integration splits as $[-\sqrt{y}, -1] \cup [-1,\sqrt{y}]$. Over the left part $[-\sqrt{y},-1]$, the density function is zero so the integral will be zero, and we are left only with calculating the integral over the right part:



$$ F_Y(y) = \int_{-\sqrt{y}}^{-1} f_X(t) \, dt + \int_{-1}^{\sqrt{y}} f_X(t) \, dt = \int_{-1}^{\sqrt{y}} \frac{1}{3}t^2 \, dt. $$
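
As a sanity check on this derivation (my own addition, not part of the original answer), one can sample $X$ by inverting $F_X(x)=(x^3+1)/9$ and compare the empirical CDF of $Y=X^2$ with the piecewise formula; all names here are my own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse-CDF sampling: F_X(x) = (x^3 + 1)/9 on [-1, 2], so X = cbrt(9U - 1).
u = rng.random(500_000)
x = np.cbrt(9 * u - 1)
y = x ** 2

def F_Y(t):
    """Piecewise CDF of Y = X^2 derived above."""
    if t <= 0:
        return 0.0
    if t <= 1:
        return (2 / 9) * t * np.sqrt(t)
    if t <= 4:
        return 1 / 9 + (1 / 9) * t * np.sqrt(t)
    return 1.0

for t in (0.5, 2.0, 3.5):
    print(t, np.mean(y <= t), F_Y(t))   # empirical vs exact, should agree to ~3 decimals
```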


Friday 24 July 2015

fractals - What is this pattern found in the first occurrence of each $k \in \{0,1,2,3,4,5,6,7,8,9\}$ in the values of $f(n)=\sqrt{n}-\lfloor \sqrt{n} \rfloor$?




Learning how to generate the Mandelbrot set, I came across the definition of the "escape condition", which decides the color applied to each point of the plane where the Mandelbrot set is being calculated.



I tried to use that "escape condition" concept in a test regarding the fractional part of $\sqrt{n}$, $f(n)=\sqrt{n}-\lfloor \sqrt{n} \rfloor$ truncated to 7 decimals. For each $f(n)$, the algorithm finds the first occurrence of a specific digit $k \in \{0,1,2,3,4,5,6,7,8,9\}$ among the decimals of $f(n)$. For instance, $0.0123456$ has its first digit $k=0$ in position $1$; on the other hand, $0.1230320$ has its first digit $k=0$ in position $4$. If $k$ is not found, the escape value will be the maximum possible value, $8$. As in the Mandelbrot set, a different color is used depending on the escape value.



A) First I have organized $n \in \Bbb N$ in the positions $a_{xy}$ of a plane as follows:




(1) $a_{n0}=n^2$



(2) $a_{n1}=(n^2)+1$




(3) $a_{n2}=(n^2)+2$



...



(i) $a_{ni}=(n^2)+i$



(This is done while $n^2+i\lt (n+1)^2$)





This is how it looks: the first row contains the squares, the second row the squares $+1$, the third the squares $+2$, etc., while the values are less than the next square:




$0,1,4,9,16,25,36...$



$0,2,5,10,17,26,37...$



$0,3,6,11,18,27,38...$



$0,0,7,12,19,28,39...$




$0,0,8,13,20,29,40...$



$0,0,0,14,21,30,41...$



etc.




B) Then each element $a_{ni}$ is replaced by $f(a_{ni})=\sqrt{a_{ni}}-\lfloor \sqrt{a_{ni}} \rfloor$ truncating up to 7 decimals.




C) Finally, for each $f(a_{ni})$ it is now possible to find the first occurrence of, for instance, $k=0$ in $f(a_{ni})$, searching from the most significant decimal. The position where it is found is the "escape position", and each position will have its own color.
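
A minimal Python sketch of this escape-position computation (my own reconstruction of the described algorithm; double precision is accurate enough here for $7$ truncated decimals):

```python
import math

def escape_position(n, k, digits=7):
    """1-based position of digit k among the first `digits` truncated
    decimals of frac(sqrt(n)); returns digits + 1 when k does not appear."""
    frac = math.sqrt(n) - math.floor(math.sqrt(n))
    s = str(int(frac * 10 ** digits)).zfill(digits)  # truncate, keep leading zeros
    pos = s.find(str(k))
    return pos + 1 if pos >= 0 else digits + 1

# the values a_{n,i} = n^2 + i for one n, while n^2 + i < (n+1)^2
n = 500
positions = [escape_position(n * n + i, k=0) for i in range(2 * n + 1)]
```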



This is the plotting of colors of each escape position for the algorithm when looking for the first occurrences of $k=0$ in $f(a_{ni})$:



enter image description here



Same exercise for the first occurrences of $k=1$:



enter image description here




They are very similar, but they are not the same. It seems that there are star-like patterns acting like local "attractors" (please excuse me if I abuse the terminology).



This is an animated image of the same graph when looking for the first $k=0$, then $k=1$, ... up to the first $k=9$. The same color in the animation means that the first occurrence of $k$ is at the same specific position. It is reduced to fit the screen; clicking on the image shows it at full size:



enter image description here



Looking at the animation, it seems that the first occurrences of each type of $k$ are "rotating" around the "local attractors"; e.g. at column $n=500$ the rotation is easy to observe.



There seems to be a background symmetry between the upper and lower sides of the bisector of the triangle whose vertex is at $(0,0)$. The direction of the "rotation" explained above is inverted between the two sides of the bisector (the upper part "rotates" to the right, and the lower part to the left), but the mapping of colors on both sides seems initially to be the same.




I have plotted the images with a gray scale gradient, so now it is possible to see more details of the above rotating picture:



enter image description here



The following animation shows the search for the position of the first digit $k=0$ in the fractional digits obtained from the functions $f(n)=n^{t}-\lfloor n^{t} \rfloor$ with $t \in [0.49995, 0.50006]$; each frame increases $t$ by $0.00001$. The pattern "bends" in $XY$ and the background star-like patterns relocate with the slight changes of $t$.



enter image description here



Finally, as $+XY$ is symmetric with $-XY$, this is how the complete picture looks without the restriction that the values be less than the next square, so it fills the $XY$ plane completely:




enter image description here



I would like to ask the following:





  1. Does a pattern like this make sense for the first occurrence of each $k \in \{0,1,2,3,4,5,6,7,8,9\}$ in each $f(a_{ni})=\sqrt{a_{ni}}-\lfloor \sqrt{a_{ni}} \rfloor$?


  2. Is the pattern a result of the way of representing the values in the plane, for instance a Moiré pattern like in the modular arithmetic cases?






Thank you!



** UPDATE 2015/08/25 **



I have been able to isolate the same star-like pattern in polar coordinates $(r,\theta)$ using the same algorithm for a different function. In this case:



$r=\lfloor \frac{n}{2\pi} \rfloor$, $n \in \Bbb N$



The value of the angle, as a fraction of a full turn, will be the fractional part:




$\theta_\%(n) = \frac{n}{2\pi}-\lfloor \frac{n}{2\pi} \rfloor$



thus:



$\theta= \theta_\%(n) \cdot 2\pi = (\frac{n}{2\pi}-\lfloor \frac{n}{2\pi} \rfloor) \cdot 2\pi = n - (\lfloor \frac{n}{2\pi} \rfloor \cdot 2\pi)$



Each frame of the looping animation shows, for all $n \in [0,10^7]$, the search for $k=0$, $k=1$, ..., $k=9$ in $\theta_\%(n)$ (the fractional digits of the angle, truncated to $7$ decimals):



enter image description here




** UPDATE 2015/08/26 **



As requested, color codes and explanation about the plotting (initial multi-color graphs above):







  1. Color code:




Associated to positions of $k \in \{0,1,2,3,4,5,6,7,8,9\}$ when found at the decimal position $p$ (starting from the decimal point).



$p=0=$"green (big frame)", $p=1=$"green" (small frames), $p=2=$"cyan", $p=3=$"blue", $p=4=$"magenta", $p=5=$"red", $p=6=$"yellow", $p=7=$"white".



e.g. 0.354697812



$k=1$ is at position $p=7$, white



$k=3$ is at position $p=0$, green (big frame)




$k=4$ is at position $p=2$, cyan



$k=5$ is at position $p=1$, green (small frames)



$k=6$ is at position $p=3$, blue



$k=7$ is at position $p=5$, red



$k=8$ is at position $p=6$, yellow




$k=9$ is at position $p=4$, magenta



if $k$ is not found in the first seven positions, the associated position is the deepest one, $p=7$.



$k=2$ is at position $p=8$, $8 \gt 7$, so it is white too.








  2. Plotting of the initial examples above (multi-color graphs)



The tick marks (units) of the $x$ axis are $n^2, n \in \Bbb N$, so $x=0$ represents $0^2$, $x=1$ represents $1^2$, $x=2$ represents $2^2$ ... $x=i$ represents $i^2$.



The tick marks of the $y$ axis are $m \in \Bbb N$: 0,1,2,3...



So each plotted point $(x,y)$ represents the color code of the pair $(n^2,m)$ converted into $\sqrt{(n^2+m)}-\lfloor \sqrt{(n^2+m)} \rfloor$, in other words, the fractional part of $\sqrt{(n^2+m)}$, truncated to 7 digits.



Summarizing: each graph is the plotting of the search of an unique $k$ in all the pairs $(x,y)=(n^2,m)$ of the XY plane (the initial graphs above were restricted to $n^2+m \lt (n+1)^2$, that is the reason why they are triangular graphs and do not fill the whole $XY$ plane). For instance, the first graph added in the question is the search $\forall (x,y)=(n^2,m)$ of the first occurrence of the digit $k=0$ in the first 7 truncated digits of the fractional part of $\sqrt{(n^2+m)}$.



Answer



This is not a direct answer to your main question, but it does answer something very important. What is the nature of the relationship between taking the fractional part of a number and the mod function?



Just for fun, let's use the notation you've built above. Take $n=1$ and arbitrary $m$. The function is $\sqrt{1+m}$. Let's look at the fractional part $F_p=\sqrt{1+m}-\lfloor \sqrt{1+m} \rfloor$ and compare it to $\operatorname{mod}(m,1)$. The mod function takes a number and returns the remainder after it is divided by another number. However, since the divisor here is $1$, it just returns the fractional part. In fact, $\operatorname{mod}(\sqrt{1+m},1)$ is the same thing as $F_p$!



The case $p=0$:



Ok, let's apply what we know to your problem. Take a digit $k$ and $p=0$. If we want the fractional part of $\sqrt{n^2+m}$ to start with the digit $k$, we must have



$${k \over {10}} \le \sqrt{n^2+m}-I \lt {{k+1} \over {10}}$$




The trick used here is that taking the fractional part is nothing more than subtracting an integer. That means we can replace the floor operation with a constant $I$. Now we solve the inequality for $m$, since our goal is to predict $m$ given $n$.



$$\left( {k \over 10}+I \right)^2 -n^2 \le m \lt \left({{k+1} \over {10}}+I \right)^2-n^2$$



Now we make an ansatz. We'll assume that $I=n+c$. We can justify this choice as follows. Consider,



$$\sqrt{n^2+m}$$



Since $n^2$ dominates $m$ in the limit, we can neglect the contribution from $m$ for large values of $n$. However, we add a constant $c$ to reflect the fact that $m$'s influence might not be diminished for a while. Substituting this in, we get,




$$\sqrt{n^2+m} \sim \sqrt{n^2} =n \sim n+c$$



This happens to be an integer, so we'll use it as an approximation for $I$. Think of $c$ as the arbitrary constant from integration. The only difference is that $c$ is an integer.



$$\left( {k \over 10}+n+c \right)^2 -n^2 \le m \lt \left({{k+1} \over {10}}+n+c \right)^2-n^2$$



Expanding gives,



$${{k^2} \over {100}}+{{k \cdot n} \over 5}+{{c\cdot (k+10n)} \over 5}+c^2 \le m \lt {{k^2} \over {100}}+{{k \cdot (10n+1)} \over {50}}+{n \over 5}+{{c \cdot (k+10n+1)} \over 5}+{1 \over {100}}+c^2$$




Note that these bounds are linear functions of $n$. This inequality explains everything for the case $p=0$, where we look for the first occurrence of the digit $k$. For instance, the slope of the lines increases whenever we increase $k$, which explains the behavior you observed in your simulations.



Proof that solutions for integer $m$ exist:



To prove that this inequality always contains an integer $m$, find the width of the interval given by the inequality. The width $w$ is,



$$w={c \over 5} + {k \over {50}}+{{20n+1} \over 100}$$



Since every interval of width at least $1$ contains an integer, if $w \ge 1$ then the values that $m$ can take on must include an integer. If we set $w=1$ and solve for $n$ we get,




$$n \gt {{99-20 \cdot c -2 \cdot k} \over 20}$$



So any $n$ greater than the value on the right will create an interval of solutions $m$ that contains an integer.
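
The inequality is easy to spot-check numerically; here is a snippet of my own (with $c=0$, which is valid for $0 \le m \le 2n$ since then $\lfloor\sqrt{n^2+m}\rfloor = n$):

```python
import math

# Spot-check the p = 0 inequality: frac(sqrt(n^2 + m)) starts with digit k
# exactly when m falls in the predicted interval (I = n + c with c = 0 here).
n, k = 50, 3
lo = (k / 10 + n) ** 2 - n ** 2
hi = ((k + 1) / 10 + n) ** 2 - n ** 2
predicted = {m for m in range(0, 2 * n + 1) if lo <= m < hi}
actual = {m for m in range(0, 2 * n + 1)
          if int(10 * (math.sqrt(n * n + m) % 1)) == k}
print(predicted == actual, sorted(actual))
```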


calculus - Convergence of series of the form $\sum 1/n^x$



Why does the solution say $\sum_{n=1}^\infty \dfrac{1}{n^{1.5}}$ converges? Does every series $\sum_{n=1}^\infty \dfrac{1}{n^{x}}$ converge to $0$, except $\sum 1/n$ (the harmonic series)?



enter image description here




I have found that when applying a convergence test, especially the comparison test and the limit comparison test, I am not clear on how to verify whether the series I am comparing against converges or diverges. Do I need to use partial sums to test the convergence of the comparison series every time?


Answer



When discussing series, avoid saying "series converges to ..."; this kind of statement is almost always misguided.



When discussing sequences, we talk about what their terms converge to. For example, $1/\sqrt{n}$ converges to $0$ as $n\to\infty$.



When discussing series, we could still think about what happens to individual terms, but this is not the main thing: the convergence of a series is a matter of their partial sums. The series $\sum_{n=1}^\infty 1/2^n$ converges, but it would be wrong to say that it "converges to $0$". Rather, the sequence of its terms $1/2^n$ converges to zero. The series itself converges and has sum equal to $1$.



In your examples: the sequence of terms $1/n^x$ converges to $0$ for any $x>0$.




But the series $\sum_{n=1}^\infty 1/n^x$ converges only when $x>1$. And its sum is never $0$; it's some positive number which we don't necessarily know or perhaps even care about.
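
A quick numerical illustration of the distinction (my own addition): the terms of both series go to $0$, but only the partial sums of $\sum 1/n^{1.5}$ settle down.

```python
# Partial sums: sum 1/n^1.5 approaches a finite limit (about 2.612),
# while the harmonic sum 1/n keeps growing like log(N).
for N in (10**2, 10**4, 10**6):
    print(N,
          sum(1 / n**1.5 for n in range(1, N + 1)),
          sum(1 / n for n in range(1, N + 1)))
```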


convergence divergence - Establishing an inequality between the first term of an infinite geometric series and the infinte sum?

An infinite geometric series has first term $a$ and sum to infinity $b$, where $b \neq 0$. Prove that $a$ lies between $0$ and $2b$.



$\rightarrow$ Since the series converges, $r$ has to satisfy $-1 < r < 1$ (using the geometric partial-sum formula, i.e. $\frac{a(1-r^n)}{1-r}$).

The sum is $b=\frac{a}{1-r}$, where $r$ is the common ratio.

$\rightarrow b - br = a$



Ok. Now what? I'm stuck.

Thursday 23 July 2015

trigonometry - complex analysis trigonometric inequalities 2


Using the definitions, prove that for $z=x+iy$ $$|\sinh y| \le |\cos z|\le \cosh y \ , \qquad |\sinh y|\le|\sin z|\le \cosh y.$$

Conclude that the complex cosine and sine are not bounded in the whole complex plane.




So I used the identity that $|\cos z|^2 = \cos^2(x) + \sinh^2(y)\ge |\sinh y|^2=\sinh^2(y)$. However, I'm not sure how to show that $|\cos z|^2 \le \cosh^2 y$.



Similarly for $|\sin z|$ I used the identity $|\sin z|^2 = \sin^2(x) + \sinh^2(y)\ge |\sinh y|^2$ and because we have the absolute value this implies $|\sinh y| \le |\sin z|$. However, again I am unsure how to show that $|\sin z| \le \cosh y$.



Also how do these inequalities allow us to conclude that the complex cosine and sine are not bounded in the whole complex plane?
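
Not a proof, but the inequalities are easy to sanity-check numerically with Python's `cmath` (a check of my own; the small tolerance absorbs floating-point error):

```python
import cmath, math, random

random.seed(1)
for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    z = complex(x, y)
    for f in (cmath.cos, cmath.sin):
        assert abs(math.sinh(y)) <= abs(f(z)) + 1e-12
        assert abs(f(z)) <= math.cosh(y) + 1e-12
print("inequalities hold at 10000 random points")
```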

Wednesday 22 July 2015

geometry - Length of hypotenuse of a right triangle when dimensions are not scaled equally



What I am asking is: if $1$ meter in the $x$ direction is $2$ times bigger than $1$ meter in the $y$ direction, what is the length of the hypotenuse when, for example, the legs are $3$ in the $x$ direction and $4$ in the $y$ direction?



I thought of this while studying weighted least squares, which uses the Mahalanobis distance. It is a very similar idea, but there the variance-covariance matrix is used to compare the scales of the dimensions. I couldn't directly link variance to an exact scale factor like $2$ in this example. I did something, but I am not sure if it is right.




++ After some thought, I can rephrase this better. Think of an object that moves with speed $V$ in the $y$ direction and speed $2V$ in the $x$ direction. If it travels along the perpendicular axes, it would take $5.5$ units of time to move from one corner to the other. How much time is required if the object moves from one corner to the other diagonally?



Thanks in advance


Answer



One way to interpret what you are asking is to think of a change of coordinates. We have my "normal" coordinate system $(x,y)$ on $\Bbb R^2$ and you have another coordinate system $(z,y)$, with the transformation between your coordinates and mine given by $x=2z$ (one of your units in $z$ spans two of my meters in $x$). The distance between two points $(z_1,y_1)$ and $(z_2,y_2)$ is then $s=\sqrt{4(z_1-z_2)^2+(y_1-y_2)^2}$. It sounds like you are being perverse, but this may be useful. An example would be a crystal where the spacing in one axis is twice the spacing in the other; the coordinates then nicely count lattice positions. You have a space where the metric tensor is $\begin {bmatrix} 4&0\\0&1 \end {bmatrix}$.
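
A tiny Python illustration of that metric tensor (my own example, not from the answer):

```python
import numpy as np

M = np.diag([4.0, 1.0])   # metric tensor from the answer, in (z, y) coordinates

def length(p, q):
    d = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    return float(np.sqrt(d @ M @ d))

# 3 meters in x is z = 1.5 stretched units; 4 meters in y is y = 4.
print(length((0.0, 0.0), (1.5, 4.0)))   # 5.0 -- the ordinary 3-4-5 answer in meters
```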


calculus - Integrate $\int \frac{e^{\arctan(x)}}{(1+x^2)^{\frac{3}{2}}}\, dx$



$y=\arctan x$,
$\tan y=x$



\begin{align}

\int \frac { e^{\Large\arctan(x)}}{{(1+x^2)}^{\Large\frac{3}{2}}} \ dx&=\int \frac {e^{\Large\arctan(\tan y)}}{{(1+\tan^2y)}^{\Large\frac{3}{2}}}dy\\
&=\int \frac {e^{y}}{\sec^3 y} dy\\
&= e^y \cos^3 y+ \int 3e^{y}\sin y\ \cos^2 y\ dy\\
\end{align}



Is this right so far, or am I doing something wrong?
It's been quite a while since I've done integration with trig substitutions. Last time I did this integral I did not use a trig substitution and still got the correct answer; I can't find my solutions from then (over 2 years ago).


Answer



Your substitution gives
$$ I = \int \frac{e^y}{(1+\tan^2{y})^{3/2}} \sec^2{y} \, dy = \int e^y \cos{y} \, dy, $$

which we work out by integrating by parts a couple of times to be
$$ \frac{1}{2}e^y(\cos{y}+\sin{y})=\frac{1}{2}e^y \cos{y}(1+\tan{y}) = \frac{e^{\arctan{x}}}{2\sqrt{1+x^2}}(1+x) $$






You can also do this by parts without substitution: the derivative of $e^{\arctan{x}}$ is $e^{\arctan{x}}/(1+x^2)$, so you have
$$ I = \int \frac{1}{\sqrt{1+x^2}}\frac{e^{\arctan{x}}}{1+x^2} \, dx \\
= \frac{e^{\arctan{x}}}{\sqrt{1+x^2}} + \int \frac{x e^{\arctan{x}}}{(1+x^2)^{3/2}} \, dx $$
If you do this again, you get
$$ \int \frac{x}{\sqrt{1+x^2}} \frac{e^{\arctan{x}}}{1+x^2} \, dx = \frac{xe^{\arctan{x}}}{\sqrt{1+x^2}} - \int \frac{e^{\arctan{x}}}{(1+x^2)^{3/2}} \left( 1+x^2-x^2 \right) \, dx, $$

and $I$ has reappeared on the right, so solving for $I$ gives
$$ I = \frac{1+x}{2\sqrt{1+x^2}}e^{\arctan{x}} $$
as before.
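
One can confirm the antiderivative symbolically, e.g. with SymPy (a check I added, not part of the answer):

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.exp(sp.atan(x)) / (1 + x**2)**sp.Rational(3, 2)
I = (1 + x) * sp.exp(sp.atan(x)) / (2 * sp.sqrt(1 + x**2))
print(sp.simplify(sp.diff(I, x) - integrand))   # should print 0
```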


calculus - Maclaurin polynomial of tan(x)




The method used to find the Maclaurin polynomial of sin(x), cos(x), and $e^x$ requires finding several derivatives of the function. However, you can only take a couple derivatives of tan(x) before it becomes unbearable to calculate.



Is there a relatively easy way to find the Maclaurin polynomial of tan(x)?



I considered using tan(x)=sin(x)/cos(x) somehow, but I couldn't figure out how.


Answer



Long division of series.



$$ \matrix{ & x + \frac{x^3}{3} + \frac{2 x^5}{15} + \dots
\cr 1 - \frac{x^2}{2} + \frac{x^4}{24} + \ldots & ) \overline{x - \frac{x^3}{6} + \frac{x^5}{120} + \dots}\cr

& x - \frac{x^3}{2} + \frac{x^5}{24} + \dots\cr & --------\cr
&
\frac{x^3}{3} - \frac{x^5}{30} + \dots\cr
& \frac{x^3}{3} - \frac{x^5}{6} + \dots\cr
& ------\cr
&\frac{2 x^5}{15} + \dots \cr
&\frac{2 x^5}{15} + \dots \cr
& ----}$$
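
If you want to check the division, or get more terms, a computer algebra system does it directly; for instance with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.tan(x), x, 0, 8))
# x + x**3/3 + 2*x**5/15 + 17*x**7/315 + O(x**8)
```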


divisibility - $6$ digit numbers formed from the first six positive integers such that they are divisible by both $4$ and $3$.



The digits $1$, $2$, $3$, $4$, $5$ and $6$ are written down in some order to form a six digit number. Then (a) how many such six digits number are even? and (b) how many such six digits number are divisible by $12$?



My attempt: "In some order" means that the six-digit numbers we get have no repeated digits. Then there are $6!$ arrangements, that is, we can make $720$ six-digit numbers. An even number must end with $2$, $4$ or $6$, so the number of even numbers that can be formed from the given digits is $3\times 5!=360$, and (a) is done. Now for (b) we have to find how many six-digit numbers are divisible by $12$, that is, divisible by both $3$ and $4$. I know that a number is divisible by $3$ if the sum of its digits is divisible by $3$, and divisible by $4$ if its last two digits form a number divisible by $4$. But how can I count how many such numbers are divisible by both three and four? Please help me solve this.


Answer



Since digits are not repeated, all $6$ digits must be used.

The sum of all the digits is $21$ $(1+2+3+4+5+6)$, so every such number is divisible by $3$.



Now we only have to check its divisibility by $4$.



The number should end with one of these $8$ two-digit combinations: $(12,16,24,32,36,52,56,64)$.



The first $4$ digits can be in any order.



So the total count is $8 \times 4! = 192$.
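
A brute-force check of both counts (my own verification):

```python
from itertools import permutations

nums = [int(''.join(p)) for p in permutations('123456')]
print(len(nums))                          # 720 numbers in total
print(sum(n % 2 == 0 for n in nums))      # 360 even
print(sum(n % 12 == 0 for n in nums))     # 192 divisible by 12
```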


sequences and series - limit $\lim\limits_{n\to\infty}\left(\sum\limits_{i=1}^{n}\frac{1}{\sqrt{i}} - 2\sqrt{n}\right)$



Calculate below limit
$$\lim_{n\to\infty}\left(\sum_{i=1}^{n}\frac{1}{\sqrt{i}} - 2\sqrt{n}\right)$$


Answer



As a consequence of Euler's Summation Formula, for $s > 0$, $s \neq 1$ we have
$$

\sum_{j =1}^n \frac{1}{j^s} = \frac{n^{1-s}}{1-s} + \zeta(s) + O(|n^{-s}|),
$$
where $\zeta$ is the Riemann zeta function.
In your situation, $s=1/2$, so
$$
\sum_{j =1}^n \frac{1}{\sqrt{j}} = 2\sqrt{n} + \zeta(1/2) + O(n^{-1/2}) ,
$$
and we have the limit
$$
\lim_{n\to \infty} \left( \sum_{j =1}^n \frac{1}{\sqrt{j}} - 2\sqrt{n} \right) = \lim_{n\to \infty} \big( \zeta(1/2) + O(n^{-1/2}) \big) = \zeta(1/2).

$$
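
Numerically (a check I added): $\zeta(1/2) \approx -1.4603545$, and the partial sums approach it at the advertised $O(n^{-1/2})$ rate.

```python
import math
from mpmath import zeta   # mpmath ships as a SymPy dependency

n = 10**6
s = sum(1 / math.sqrt(k) for k in range(1, n + 1)) - 2 * math.sqrt(n)
print(s, float(zeta(0.5)))   # agree to a few decimal places
```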


Tuesday 21 July 2015

Simple Modular Arithmetic How to

Most efficient way to do this question?



$25^{37} \equiv \,? \pmod{55}$




Attempt:



$25^2 \equiv 20 \pmod{55}$



Then I don't know the best way to continue... I know how to brute force it but I'm just curious what the best way is?
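
One standard efficient method is square-and-multiply (binary exponentiation), reducing mod $55$ at every step; here is a sketch of my own, with Python's built-in three-argument `pow` as a cross-check:

```python
# 37 = 100101 in binary, so 25^37 = 25^32 * 25^4 * 25^1 (mod 55).
base, exp, mod = 25, 37, 55
result, b = 1, base % mod
while exp:
    if exp & 1:
        result = result * b % mod
    b = b * b % mod
    exp >>= 1
print(result, pow(25, 37, 55))   # both print the same residue
```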

real analysis - Given $a_n > 0$ for all $n$ and $\sum a_n$ converges. Show that if $b > \frac{1}{2}$, then $\sum_{n=1}^\infty n^{-b} \sqrt{a_n}$ converges.



I attempted the integral test, limit comparison test, ratio test, and root test.




Limit comparison test: $\lim \sup \frac{n^{-b} \sqrt{a_n}}{a_n} < \infty$?



I get $\frac{0}{0}$ and apply l'Hôpital's rule. (Note: I believe that when applying l'Hôpital's rule, I take the derivative with respect to $n$; in that case, I suppose I can think of $a_n$ as $f(n)$, and $\frac{\partial}{\partial n} f(n) = f'(n) \to 0$ since $a_n \to 0$.)



In most cases, I'm left with a perpetual loop of $\frac{0}{0}$.



I'm wondering if I should instead approach this problem via a comparison test and find some $\sum b_n$ that converges such that $0 \leq \sum_{n=1}^\infty n^{-b} \sqrt{a_n} \leq \sum b_n$.



Can this be proven using one of the aforementioned tests?


Answer




Using the AM-GM Inequality we have



$$ n^{-b}\sqrt{a_n} \leq \frac12(n^{-2b}+a_n)$$



Apply the hypotheses to the two series on the right-hand side: $\sum n^{-2b}$ converges because $2b>1$ (a $p$-series), and $\sum a_n$ converges by assumption. The series on the left then converges by the comparison test.


set theory - Fractional cardinalities of sets



Is there any extension of the usual notion of cardinality under which some sets have fractional cardinalities such as $5/2$, i.e. a set with $2.5$ elements? What would be an example of such a set?



Basically, is there any consistent set theory in which there is a set whose cardinality is less than that of {cat, dog, fish} but greater than that of {47, 83}?



Answer



One can extend the notion of cardinality to include negative and non-integer values by using the Euler characteristic and homotopy cardinality. For example, the groupoid of finite sets has homotopy cardinality $e=\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\dotsi$. The idea is to sum over the isomorphism classes of finite sets, each weighted inversely by the size of its symmetry group. John Baez discusses this in detail on his blog. He has plenty of references, as well as lecture notes, course notes, and blog posts about the topic here. The first sentence on the linked page:



"We all know what it means for a set to have 6 elements, but what sort of thing has -1 elements, or 5/2? Believe it or not, these questions have nice answers." -Baez


Monday 20 July 2015

convergence divergence - Riemann Zeta Function Analytic Continuation

I am struggling to understand how the analytic continuation of the Riemann Zeta function is derived to extend it to all complex values $z$ not equal to $1$, starting with the series which converges only for $Re(z)>1$. Can someone provide a relatively simple/intuitive explanation of how this is achieved? Also, I understand that analytic continuation is unique, so there is only one analytic continuation of the Riemann Zeta Function, right?

divisibility - How many 4-digit numbers with $3$, $4$, $6$ and $7$ are divisible by $44$?





Consider all four-digit numbers where each of the digits $3$, $4$, $6$ and $7$ occurs exactly once. How many of these numbers are divisible by $44$?







My attack:



There are $24$ possible four digit numbers where $3$, $4$, $6$ and $7$ occur exactly once. I thought of writing them all down and checking divisibility, but isn't there a better way to do this?




Also, how do I check divisibility by $44$ easily? I read on the internet there was a trick* to determine if a number is divisible by $11$, but a number which is divisible by $11$ doesn't have to be divisible by $44$, does it?



*For example $3729$, you write down $(7+9)-(3+2)=11$, which is divisible by $11$, so $3729$ is divisible by $11$.



I'm only looking for $\large{\textbf{a hint}}$.


Answer



For a number to be divisible by $4$, the last 2 digits have to form a 2-digit number that is divisible by $4$. This should simplify things a lot.



The trick for $11$: you already know.




And if $ABCD$ is divisible by both $4$ and $11$, it is divisible by $44$.
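
And if you want to verify your hint-based count afterwards, brute force is a few lines of Python (spoiler: it prints the full answer):

```python
from itertools import permutations

hits = [int(''.join(p)) for p in permutations('3467')
        if int(''.join(p)) % 44 == 0]
print(hits)
```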


real analysis - Find $\lim\limits_{n\to\infty}\frac{a_1+a_2+...+a_n}{1+\frac{1}{\sqrt2}+...+\frac{1}{\sqrt{n}}}$ with $a_1=1$ and $a_{n+1}=\frac{1+a_n}{\sqrt{n+1}}$




Let $(a_n)_{n\ge1}, a_1=1, a_{n+1}=\frac{1+a_n}{\sqrt{n+1}}$.
Find $$\lim_{n\to\infty} \frac{a_1+a_2+\cdots+a_n}{1+\frac{1}{\sqrt2}+\cdots+\frac{1}{\sqrt{n}}}$$





This is my try:



I split the limit like this:
$$L=\lim_{n\to\infty} \frac{a_1+a_2+\cdots+a_n}{\sqrt{n+1}}\cdot\frac{\sqrt{n+1}}{1+\frac{1}{\sqrt2}+\cdots+\frac{1}{\sqrt{n}}}.$$
The second factor tends to $\frac{1}{2}$, since the denominator behaves like $2\sqrt{n}$.
The first one, after Stolz–Cesàro, becomes:
$$\lim_{n\to\infty}a_{n+1}(\sqrt{n+1}+\sqrt{n+2})$$
I tried to squeeze the term $a_n$ between two expressions in $n$, something like $a_n<\frac{1}{\sqrt{n}}$, so as to use the sandwich theorem. Any ideas for this kind of bound? Or other ideas for the problem?



Answer



Stolz–Cesàro is a way to go, but applied to
$S_n=\sum\limits_{k=1}^n a_k$ and $T_n=\sum\limits_{k=1}^n \frac{1}{\sqrt{k}}$, where $T_n$ is a strictly monotone and divergent sequence ($T_n >\sqrt{n}$). Then
$$\lim\limits_{n\rightarrow\infty}\frac{S_{n+1}-S_n}{T_{n+1}-T_n}=
\lim\limits_{n\rightarrow\infty}\frac{a_{n+1}}{\frac{1}{\sqrt{n+1}}}=
\lim\limits_{n\rightarrow\infty} \left(1+a_n\right)=\\
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1+a_{n-1}}{\sqrt{n}}\right)=
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{n(n-1)}}+\frac{a_{n-2}}{\sqrt{n(n-1)}}\right)=\\
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{n(n-1)}}+\frac{1}{\sqrt{n(n-1)(n-2)}}+...+\frac{a_1}{\sqrt{n!}}\right)=\\
1+\lim\limits_{n\rightarrow\infty} \left(\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)\right)$$







Now, for
$$\lim\limits_{n\rightarrow\infty} \left(\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)\right) \tag{1}$$
we have
$$0<\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+\frac{1}{\sqrt{(n-1)(n-2)(n-3)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)<
\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)(n-2)}}\right)
=\\\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{n-3}{\sqrt{(n-1)(n-2)}}\right)\rightarrow 0$$







Finally, $(1)$ has $0$ as the limit, $\frac{S_{n+1}-S_n}{T_{n+1}-T_n}$ has $1$ as the limit. The original sequence has $1$ as the limit as well.
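
A quick numerical check of the limit (my own addition; convergence is slow, roughly like $1/\sqrt{n}$):

```python
import math

a, S, T = 1.0, 0.0, 0.0   # a holds a_n, starting from a_1 = 1
for n in range(1, 10**6 + 1):
    S += a                          # S_n = a_1 + ... + a_n
    T += 1 / math.sqrt(n)           # T_n = 1 + 1/sqrt(2) + ... + 1/sqrt(n)
    a = (1 + a) / math.sqrt(n + 1)  # recurrence for a_{n+1}
print(S / T)   # creeps toward 1
```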


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...