Wednesday 31 May 2017

linear algebra - Is there a slick proof for the identity that expresses the inner product of imaginary octonions in terms of the cross product?




Consider the octonions $\mathbb O$ and in particular their imaginary part $\operatorname{Im}\mathbb O$. Let $(-,-)$ be the scalar product induced by the identification of the imaginary octonions with $\Bbb R^7$. Furthermore, define the cross product of (any) octonions by $x\times y:= \frac{1}{2}(xy-yx)$. For imaginary octonions $x,y$, there is the nice identity $x\cdot y=-(x,y)+x\times y$, where the dot denotes octonion multiplication.



Define the linear map $L_{x,y}:\operatorname{Im}\mathbb O\to \operatorname{Im}\mathbb O$ by $L_{x,y}(v)=x\times (y\times v)$, where $x,y$ are also imaginary octonions. It is easy to prove by explicit calculation (plugging in a basis) that there is the following identity:



$$(x,y)=-\frac{1}{6}\operatorname{tr}L_{x,y} \qquad \qquad x,y\in \operatorname{Im}\mathbb O$$



This identity is useful because it shows that octonion multiplication of imaginary octonions only depends on the cross product, and thereby yields the equivalence of two common definitions of the exceptional Lie group $G_2$. I don't like the brute force proof, so I am left wondering whether there is a better way to see that this identity holds. Is anyone aware of a slick(er) proof?



EDIT: It was just pointed out to me by Ted Shifrin that $\operatorname{tr}L$ is the would-be Killing form on $\Bbb R^7$ seen as an (almost-but-not-quite) Lie algebra (equipped with the cross product). This might point the way towards a nice proof (?)



Answer



Since $L_{x,y}$ is linear in $x$ and $y$, it suffices to compute the trace in two cases: when $x$ and $y$ are parallel and when they are perpendicular (assuming $x$ and $y$ have norm $1$ as well).



In the first case, it's just the square of $L_x=x\times-$, which annihilates $x$ and equals multiplication by $x$ on its orthogonal complement, so $L_{x,x}$ will be multiplication by negative one on $x$'s orthogonal complement; the trace is obviously $-6$.



In the second case, $L_{x,y}$ annihilates $y$ and $xy$, sends $x$ to $y$, and $z\perp xy \Rightarrow x\perp yz$, implying



$$ \langle L_{x,y}z,z\rangle=\langle x(yz),z\rangle=\langle yz,\bar{x}z\rangle=\langle y,\bar{x}\rangle|z|^2=0$$



when $z\perp \{x,y,xy\}$; thus $L_{x,y}$ is skew-adjoint on $\{x,y,xy\}^\perp$ and hence has trace zero.
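For readers who want to sanity-check the trace identity numerically, here is a minimal sketch (my own addition, not part of the original thread). It builds the octonions by Cayley-Dickson doubling of the quaternions, using one standard sign convention among several, and compares $\operatorname{tr}L_{x,y}$ with $-6(x,y)$ for random imaginary $x,y$:

    import numpy as np

    def qmult(a, b):
        # Hamilton product of quaternions represented as [w, x, y, z]
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def qconj(a):
        return np.array([a[0], -a[1], -a[2], -a[3]])

    def omult(a, b):
        # Cayley-Dickson doubling: an octonion is a pair of quaternions,
        # (p, q)(r, s) = (p r - conj(s) q, s p + q conj(r))
        p, q = a[:4], a[4:]
        r, s = b[:4], b[4:]
        return np.concatenate([
            qmult(p, r) - qmult(qconj(s), q),
            qmult(s, p) + qmult(q, qconj(r)),
        ])

    def cross(x, y):
        return 0.5 * (omult(x, y) - omult(y, x))

    rng = np.random.default_rng(0)
    x, y = rng.normal(size=8), rng.normal(size=8)
    x[0] = y[0] = 0.0  # make x, y imaginary (zero real part)

    # trace of v -> x cross (y cross v) over the 7 imaginary basis vectors
    trace = 0.0
    for i in range(1, 8):
        e = np.zeros(8)
        e[i] = 1.0
        trace += cross(x, cross(y, e))[i]

    print(trace, -6 * np.dot(x, y))  # the two numbers should agree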



summation - Evaluating sum of binomial coefficients



This sum popped out of one of my calculations, I know what it should evaluate to, but I have no idea how to prove it.
$$\sum_{i=0}^{r}{n \choose 2i } - {n\choose 2i -1}$$
I know that $2i-1$ is negative for $i=0$, but for the purpose of this sum, we will say that ${n \choose x}=0$ if $n<0$ or $x<0$. So this sum is basically summing the difference of consecutive even/odd binomial coefficient pairs. We can rewrite this sum as



$$\sum_{i=0}^{2r}(-1)^i {n \choose i}.$$



I don't really know how to proceed from here, I couldn't find any information on evaluating sums of alternating series involving binomial coefficients.


Answer




This is a special case of the result
$$\sum_{k=0}^m(-1)^k{n\choose k}=(-1)^m{n-1\choose m}$$
($0\le m\le n-1$) which can be proved by induction on $m$.
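A quick brute-force check of the identity over a small range (my own addition, using Python's math.comb):

    from math import comb

    def lhs(n, m):
        return sum((-1)**k * comb(n, k) for k in range(m + 1))

    def rhs(n, m):
        return (-1)**m * comb(n - 1, m)

    assert all(lhs(n, m) == rhs(n, m)
               for n in range(1, 12) for m in range(n))  # 0 <= m <= n-1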


Time complexity of a variation on the coupon collector's problem



I need to know the complexity of the following algorithm:



Draw a set of n numbers from a larger set of m numbers, one by one, randomly, with replacement. The result may be any set of numbers as long as the size is n and the elements are different.




This is a variation of the Coupon collector's problem: we do not have to draw all m elements, any set of different numbers of size n will do.



Example:



n = 3, m = 5, result = {1,5,2} produced by successive draws 1,1,5,1,5,2 from pool {1,2,3,4,5}



What is the average time complexity of this problem?


Answer



The expected
number of trials until your collection has $n$ distinct elements is:

$${m\over m}+{m\over m-1}+{m\over m-2}+\cdots +{m\over m-n+1}. $$
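As a sanity check, a short simulation of the process against the formula (a sketch I added; the variable names are mine):

    import random

    def draws_until_n_distinct(n, m):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(m))
            draws += 1
        return draws

    n, m, trials = 3, 5, 100_000
    empirical = sum(draws_until_n_distinct(n, m) for _ in range(trials)) / trials
    exact = sum(m / (m - k) for k in range(n))  # m/m + m/(m-1) + ... + m/(m-n+1)
    print(empirical, exact)  # both close to 3.9166...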


Tuesday 30 May 2017

complex analysis - Definite integral calculation with poles at $0$ and $\pm i\sqrt{3}$

$$\int_0^\infty \frac{\sin(2\pi x)}{x(x^{2}+3)}\,dx$$



I looked at $\frac{e^{2\pi i z}}{z^{3}+3z}$, also calculated the residues, but they don't get me the right answer. I used that $\int_{-\infty}^\infty f(z)\,dz = 2\pi i \left(\sum_r \operatorname{Res}_{z_r} f\right) + \pi i \operatorname{Res}_{0} f$, but my answer turns out wrong when I check with WolframAlpha.



Residue for $0$ is $1$, for $z=\sqrt{3}i$ it's $-\frac{e^{-2\pi}}{2}$ . . .



In a worse attempt I forgot $2\pi$ and used $z$ only (i.e. $\frac{e^{iz}}{z^{3}+3z}$) and the result was a little closer, but missing a factor of $2$ and an $i$.



Can anyone see the right way? Please do tell.

probability - Find the Mean for Non-Negative Integer-Valued Random Variable



Let $X$ be a non-negative integer-valued random variable with finite mean.
Show that
$$E(X)=\sum^\infty_{n=0}P(X>n)$$



This is the hint from my lecturer.



"Start with the definition $E(X)=\sum^\infty_{x=1}xP(X=x)$. Rewrite the series as double sum."




For my opinion. I think the double sum have the form of $\sum\sum f(x)$, but how to get this form? And how to continue?


Answer



\begin{array}{ccccccccccc}
& & 0P(X=0) & + & 1P(X=1) & + & 2 P(X=2) & + & 3P(X=3) & + & \cdots \\[18pt]
= & & & P(X=1) & + & P(X=2) & + & P(X=3) & + & \cdots \\
& & & & + & P(X=2) & + & P(X=3) & + & \cdots \\
& & & & & & + & P(X=3) & + & \cdots\\
& & & & & & & & + & \cdots
\end{array}




The sum in the first row is $P(X>0)$; that in the second row is $P(X>1)$; that in the third row is $P(X>2)$, and so on.
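A numerical illustration of the tail-sum identity for a concrete distribution (my own addition; any non-negative integer-valued law with finite support works):

    from math import comb

    n_, p = 10, 0.3  # a Binomial(10, 0.3) random variable
    pmf = [comb(n_, k) * p**k * (1 - p)**(n_ - k) for k in range(n_ + 1)]

    mean_direct = sum(k * pmf[k] for k in range(n_ + 1))
    mean_tails = sum(sum(pmf[k] for k in range(m + 1, n_ + 1))
                     for m in range(n_ + 1))  # sum of P(X > m)

    print(mean_direct, mean_tails)  # both equal n_ * p = 3.0 (up to rounding)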


Fresnel Integrals via Differentiation under the Integral Sign




I've been trying to compute $\int_{-\infty}^{\infty}\sin(x^2)\,dx$ via the Feynman method (differentiation under the integral sign) with no luck. I was able to compute the Gaussian integral, but the trick failed for the Fresnel integrals. Any suggestions?
Here is what I did for the Gaussian:



$$I(t)=\left(\int_{0}^{t}e^{-x^2/2}\,dx\right)^2, \qquad \frac{dI(t)}{dt} = 2\, e^{-t^2/2}\int_0^{t}e^{-x^2/2}\,dx$$
Substituting $x=tb$,
$$\frac{dI(t)}{dt} = 2\, e^{-t^2/2}\int_0^{1}t\, e^{-t^2b^2/2}\,db = \int_0^1 2t\, e^{-(1+b^2)t^2/2}\,db = -2\,\frac{d}{dt}\int_0^1 \frac{e^{-(1+b^2)t^2/2}}{1+b^2}\,db$$
Setting $B(t) = \int_0^1 \frac{e^{-(1+b^2)t^2/2}}{1+b^2}\,db$, this says $\frac{dI}{dt} = -2\,\frac{dB}{dt}$, so $I(t) = -2B(t) + C$. Letting $t \to 0$ gives $C = 2B(0) = \frac{\pi}{2}$; letting $t \to \infty$ gives $B(\infty)=0$, so $I(\infty)=\frac{\pi}{2}$ and therefore $\int_{0}^\infty e^{-t^2/2}\,dt = \sqrt{\frac{\pi}{2}}$.


Answer



Let
$$
S(t)=\int_{0}^{t}\sin(x^{2})\,dx\quad\text{and}\quad C(t)=\int_{0}^{t}\cos(x^{2})\,dx\,.
$$
Using the identities

$$
\sin(x^{2})=-\frac{1}{2x}\frac{d}{dx}\cos(x^{2})\quad\text{and}\quad\cos(x^{2})=\frac{1}{2x}\frac{d}{dx}\sin(x^{2})\,,
$$
integrating by parts over $[1,t]$, and passing to the limit as $t\to\infty$, one shows that there exist
$$
\lim_{t\to\infty}S(t)=S_{\infty}\in\mathbb{R}\quad\text{and}\quad\lim_{t\to\infty}C(t)=C_{\infty}\in\mathbb{R}\,.
$$
Moreover, since
$$
S_{\infty}=\int_{0}^{\infty}\frac{\sin y}{2\sqrt y}\,dy=\sum_{k=0}^{\infty}(-1)^{k}a_{k}\quad\text{where}\quad a_{k}=\int_{k\pi}^{(k+1)\pi}\frac{|\sin y|}{2\sqrt y}\,dy

$$
and $0<a_{k+1}<a_{k}\to 0$, the alternating series test gives $S_{\infty}>0$. To evaluate $S_{\infty}$ with the Feynman method, let us introduce the mappings
$$
f(t)=\left(\int_{0}^{t}\sin(x^{2})\,dx\right)^{2}+\left(\int_{0}^{t}\cos(x^{2})\,dx\right)^{2},\quad
g(t)=\int_{0}^{1}\frac{\sin(t^{2}(1-x^{2}))}{1-x^{2}}\,dx\,.
$$
We can check that $f'(t)=g'(t)$ and $f(0)=g(0)$, hence $f(t)=g(t)$ for every $t\ge 0$. Since
$$
g(t)=\int_{0}^{t^{2}}\frac{\sin(2x+x^{2}/t^{2})}{2x+x^{2}/t^{2}}\,dx\to\int_{0}^{\infty}\frac{\sin(2x)}{2x}\,dx=\frac{\pi}{4}\quad\text{as }t\to\infty\,,
$$

we obtain that
$$
S_{\infty}^{2}+C_{\infty}^{2}=\frac{\pi}{4}\,.
$$
Proving that
$$
(*)\qquad S_{\infty}^{2}=C_{\infty}^{2}\,,
$$
we conclude that $S_{\infty}=\sqrt{\pi/8}$. To show $(*)$ we introduce the mappings
$$

F(t)=\left(\int_{0}^{t}\cos(x^{2})\,dx\right)^{2}-\left(\int_{0}^{t}\sin(x^{2})\,dx\right)^{2},\quad
G(t)=\int_{0}^{1}\frac{\sin(t^{2}(1+x^{2}))}{1+x^{2}}\,dx\,.
$$
One can check that $F'(t)=G'(t)$ and $F(0)=G(0)$, hence $F(t)=G(t)$ for every $t\ge 0$ and in particular
$$
C_{\infty}^{2}-S_{\infty}^{2}=\lim_{t\to\infty}G(t)=\lim_{t\to\infty}\int_{0}^{t}\frac{t}{y^{2}+t^{2}}\,\sin(y^{2}+t^{2})\,dy\,.
$$
We split
$$
\int_{0}^{t}\frac{t}{y^{2}+t^{2}}\,\sin(y^{2}+t^{2})\,dy=\int_{0}^{1}\frac{t}{y^{2}+t^{2}}\,\sin(y^{2}+t^{2})\,dy+\int_{1}^{t}\frac{t}{y^{2}+t^{2}}\,\sin(y^{2}+t^{2})\,dy

$$
and we observe that
$$
\left|\int_{0}^{1}\frac{t}{y^{2}+t^{2}}\,\sin(y^{2}+t^{2})\,dy\right|\le\frac{1}{t}
$$
whereas
\begin{equation*}
\begin{split}
\int_{1}^{t}\frac{t}{y^{2}+t^{2}}\,\sin(y^{2}+t^{2})\,dy&=\left[-\frac{t\cos(y^{2}+t^{2})}{2y(y^{2}+t^{2})}\right]_{y=1}^{y=t}\\
&\qquad-\frac{1}{2t}\int_{1}^{t}\frac{3t^{2}y^{2}+t^{4}}{y^{2}(y^{4}+2t^{2}y^{2}+t^{4})}\,\cos(y^{2}+t^{2})\,dy

\end{split}
\end{equation*}
and one can easily see that each term tends to zero as $t\to\infty$. Hence $(*)$ is proved.
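To corroborate the value $S_\infty=\sqrt{\pi/8}$ numerically, one can integrate between consecutive zeros of the oscillating integrand (a sketch I added, assuming mpmath's quadosc):

    from mpmath import mp, quadosc, sqrt, pi, sin, inf

    mp.dps = 25
    S = quadosc(lambda x: sin(x**2), [0, inf], zeros=lambda n: sqrt(pi * n))
    print(S, sqrt(pi / 8))  # both ~0.626657...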


Monday 29 May 2017

set theory - A question concerning the axiom of choice and the Cauchy functional equation



The Cauchy functional equation:
$$f(x+y)=f(x)+f(y)$$
has solutions called 'additive functions'. If no conditions are imposed on $f$, there are infinitely many functions that satisfy the equation, called 'Hamel' functions. This is considered valid if and only if Zermelo's axiom of choice is accepted as valid.



My question is: suppose we don't consider the axiom of choice valid; does this mean that we have only a finite number of solutions? Or are the 'Hamel' functions still valid?



Thanks for any hints or answers.



Answer



What you wrote is not true at all. The argument is not valid "if and only if the axiom of choice holds".




  1. Note that there are always continuous functions of this form; they all look like $f(x)=ax$ for some real number $a$. There are infinitely many of those.


  2. The axiom of choice implies that there are discontinuous functions like this, furthermore a very very weak form of the axiom of choice implies this. In fact there is very little "choice" which can be inferred from the existence of discontinuous functions like this, namely the existence of non-measurable sets.


  3. Even if the axiom of choice is false, it can still hold for the real numbers (i.e. the real numbers can be well-ordered even if the axiom of choice fails badly in the general universe). However even if the axiom of choice fails at the real numbers it need not imply that there are no such functions in the universe.


  4. We know that there are models in which all functions which have this property must be continuous, for example models in which all sets of real numbers have the Baire property. There are models of ZF in which all sets of reals have the Baire property, but there are non-measurable sets. So we cannot even infer the existence of discontinuous solutions from the existence of non-measurable sets.


  5. Observe that if there is one discontinuous solution then there are many different ones, since if $f,g$ are two additive functions then $f\circ g$ and $g\circ f$ are also additive functions. The correct question to ask is whether or not the algebra of additive functions is finitely generated over $\mathbb R$, but to this I do not know the answer (and I'm not sure if it is known at all).









linear algebra - Primitive elements of GF(8)



I'm trying to find the primitive elements of $GF(8),$ the minimal polynomials of all elements of $GF(8)$ and their roots, and to express the powers $\alpha^i$ as polynomials, where $\alpha$ is a root of $x^3 + x + 1.$



If I did my math correct, I found the minimal polynomials to be $x, x + 1, x^3 + x + 1,$ and $x^3 + x^2 + 1,$ and the primitive elements to be $\alpha, \dots, \alpha^6 $



Would the powers of $\alpha^i$ as a polynomial (of degree at most two) be: $\alpha, \alpha^2, \alpha+ 1, \alpha^2 + \alpha, \alpha^2 + \alpha + 1,$ and $\alpha^2 + 1$?




Am I on the right track?


Answer



Those are all correct. Here's everything presented in a table:



$$\begin{array}{lll}
\textbf{element} & \textbf{reduced} & \textbf{min poly} \\
0 & 0 & x \\
\alpha^0 & 1 & x+1 \\
\alpha^1 & \alpha & x^3+x+1 \\
\alpha^2 & \alpha^2 & x^3+x+1 \\

\alpha^3 & \alpha+1 & x^3+x^2+1 \\
\alpha^4 & \alpha^2+\alpha & x^3+x+1 \\
\alpha^5 & \alpha^2+\alpha+1 & x^3 + x^2 + 1 \\
\alpha^6 & \alpha^2+1 & x^3 + x^2 + 1 \\
\end{array}$$
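A short script to regenerate the table (my own addition; it represents GF(8) elements as 3-bit integers whose bits stand for $1,\alpha,\alpha^2$):

    def mul(a, b, mod=0b1011):  # 0b1011 encodes x^3 + x + 1
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & 0b1000:  # reduce as soon as degree 3 is reached
                a ^= mod
        return r

    alpha = 0b010
    powers = [1]
    for _ in range(6):
        powers.append(mul(powers[-1], alpha))
    print([bin(p) for p in powers])
    # ['0b1', '0b10', '0b100', '0b11', '0b110', '0b111', '0b101']
    # i.e. 1, a, a^2, a+1, a^2+a, a^2+a+1, a^2+1, matching the table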


integration - Integrating the gamma function



I assumed that

$$\Gamma\left(k+\frac{1}{2}\right)=2\int^\infty_0 e^{-x^2}x^{2k}\,dx=\frac{\sqrt{\pi}(2k)!}{4^k k!} \,,\space k>-\frac{1}{2}$$
and that
$$\Gamma\left(k+\frac{3}{2}\right)=2\int^\infty_0 e^{-x^2}x^{2(k+1)}\,dx$$



and my goal is to solve the integral and get a function in terms of $k$ for $\Gamma\left(k+\frac{3}{2}\right)$



I use partial integration and differentiate $x^2$ and integrate the rest:
$$=\left[x^2\cdot 2\int^\infty_0 e^{-x^2}x^{2k}\,dx \right]^\infty_0 - \int^\infty_02x\left(2\int^\infty_0 e^{-x^2}x^{2k}\,dx\right)\,dx$$
and then I substitute the above function in terms of k and get:
$$=\left[x^2\frac{\sqrt{\pi}(2k)!}{4^k k!}\right]^\infty_0 - \int^\infty_02x\frac{\sqrt{\pi}(2k)!}{4^k k!}\,dx$$

$$=\left[x^2\frac{\sqrt{\pi}(2k)!}{4^k k!}\right]^\infty_0 - \left[x^2\frac{\sqrt{\pi}(2k)!}{4^k k!}\right]^\infty_0 =0$$



I know for sure that the final answer is wrong. I think my problem has to do with the substitution of the definite integral in the penultimate step. How can I make the math work out?



EDIT: Sorry for not mentioning previously but this is part of a proof by induction. The first statement is only assumed to be true.


Answer



Let us assume that



$$\Gamma\left(k+\frac{1}{2}\right)=2\int^\infty_0 e^{-x^2}x^{2k}\,dx=\frac{\sqrt{\pi}(2k)!}{4^k k!}$$




1- for $k=0$ we have



$$\Gamma\left(\frac{1}{2}\right)=2\int^\infty_0 e^{-x^2}\,dx=\sqrt{\pi}$$



which holds true since



$$\int^\infty_{-\infty} e^{-x^2}\,dx=\sqrt{\pi}$$



2- We need to prove the case $P(k)\to P(k+1)$




$$\Gamma\left(k+1+\frac{1}{2}\right) = \left( k+\frac{1}{2}\right)\Gamma\left( k+\frac{1}{2}\right)$$



From the inductive step we have



$$\left( k+\frac{1}{2}\right)\Gamma\left( k+\frac{1}{2}\right) =\left( k+\frac{1}{2}\right) \frac{\sqrt{\pi}(2k)!}{4^k k!} = \sqrt{\pi}\frac{(2k+1)(2k)!}{2\times4^kk!} $$



Multiply and divide by $(2k+2)$:



$$ \frac{\sqrt{\pi}}{4}\frac{(2k+2)(2k+1)(2k)!}{ 4^k (k+1)k!} =\frac{\sqrt{\pi}(2(k+1))!}{4^{(k+1)}(k+1)!}\blacksquare$$
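A quick numeric cross-check of the closed form against the gamma function itself (my own addition):

    from math import gamma, sqrt, pi, factorial

    for k in range(6):
        closed = sqrt(pi) * factorial(2 * k) / (4**k * factorial(k))
        print(gamma(k + 0.5), closed)  # the two columns agree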


algebra precalculus - An identity relating the roots of $F(x)=x^3+x^2+4x+4$

Demonstrate how
$$\frac1{x_1} + \frac1{x_2} + \frac1{x_3} + \frac1{x_1x_2} + \frac1{x_2x_3} + \frac1{x_3x_1} = -\frac34$$
where $x_1, x_2, x_3$ are roots of the polynomial $F(x) = x^3 + x^2 + 4x + 4$.




Can someone help me please, thank you!
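No answer is recorded for this question here, but the claimed value can at least be verified symbolically from the roots (a sketch I added, assuming sympy):

    import sympy as sp

    x = sp.symbols('x')
    x1, x2, x3 = sp.solve(x**3 + x**2 + 4*x + 4, x)  # -1, -2*I, 2*I
    expr = 1/x1 + 1/x2 + 1/x3 + 1/(x1*x2) + 1/(x2*x3) + 1/(x3*x1)
    print(sp.simplify(expr))  # -3/4

(By Vieta's formulas, with $e_1=-1$, $e_2=4$, $e_3=-4$, the sum equals $e_2/e_3 + e_1/e_3 = -1 + \frac14 = -\frac34$; this observation is mine, not from the original post.)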

geometry - Quadrilateral in a Parallelogram - Interesting Proofs!



Here's an interesting problem, and result, that I wish to share with the math community here at Math SE. I think I've found a proof without words...



[figure: the problem statement]



I came across this problem sometime back, and here's a possible proof without words that I'll post as a picture -



[figure: proof without words, with the auxiliary parallelograms constructed]




I claim that one can, for any inscribed quadrilateral EFGH, make the necessary constructions (several parallelograms) as in the figure, and hence show that at least one of the diagonals is parallel to a side of the parallelogram ABCD.



Is there anything I'm missing, or is the proof complete? Let me know!



Also, please post other proofs in the answers section! (So we can all solve the problem together and discuss several methods for the same - it'll be of use to everyone to know all possible methods of approaching this problem)



I was thinking about a possible solution using complex numbers, assuming one of the side pair to be parallel to the real axis in Argand's plane, for simplicity. A similar solution, not involving too much calculation can be done using vectors!
I'm not sure about coordinate geometry, as it tends to get quite cumbersome in such situations, however if anyone does manage to prove it neatly, please post the solution.



Share your ideas!



Answer



I think the figure proof is applying the method of contradiction.



Suppose that 2[quad EFGH] = [//gm ABCD] and no one of the diagonals is parallel to the sides of //gm ABCD. Then, we can draw parallel lines EPS, PQH … etc to form several parallelograms as shown.



In that case, $[\triangle AFE] = [\triangle SFE] = \dfrac 12$ [//gm AFSE] and ….



Clearly, there is no region to pair up with parallelogram PQRS to make the supposition valid. If that given condition must be maintained, PQRS must be deformed to a region with zero area. Then, we have



Case-1 PQRS is made up of EP(Q)S(R)G, which is in fact EG and is parallel to the side AB.




Case-2 PQRS is made up of FS(P)Q(R)H, …..



Case-3 PQRS is just the point T with ETG // AB and FTH // BC.


Sunday 28 May 2017

soft question - What are the mathematical problems in non-standard analysis? (If any)



I would like to learn non-standard analysis, at least the basics of it. I will make use of this book: Elementary Calculus: An Infinitesimal Approach (Dover Books on Mathematics), by H. Jerome Keisler. Before anything else, please let me take up some links that DO NOT have a fully fleshed out answer to my question, at least according to me.



Here are the links:



The content on this link, 'Is non-standard analysis worth learning?', more or less discusses the use of studying non-standard analysis, and the content on this link mostly refers back to that discussion. This and this do not answer my question.



The problem with the answers to the question above is that, while they may scratch the surface and from time to time take up the disadvantages of non-standard analysis, they DO NOT purely discuss its disadvantages, mathematical or otherwise.




'What are the disadvantages of non-standard analysis?' does not rigorously answer that which I want an answer to. For example, let me cite this answer:




I think there are a number of reasons:

  • Early reviews of Robinson's papers and Keisler's textbook were done by a prejudiced individual, so most mature mathematicians had a poor first impression of it.

  • It appears to have a lot of nasty set theory and model theory in it. Start talking about nonprincipal ultrafilters and see the analysts' eyes glaze over. (This of course is silly: the construction of the hyperreals and the transfer principle is as important to NSA as construction of the reals is for real analysis, and we know how much people love that part of their first analysis course.)

  • There is a substantial set of opinion that because NSA and standard analysis are equivalent, there's no point in learning the former.

  • Often, the bounds created with NSA arguments are a lot weaker than standard analysis bounds. See Terry Tao's discussion here.

  • Lots of mathematicians are still prejudiced by history and culture to instinctively think that anything infinitesimal is somewhere between false and actually sinful, and best left to engineers and physicists.

  • As Stefan Perko mentions in the comments, there are a number of other infinitesimal approaches: smooth infinitesimals, nilpotents, synthetic differential geometry, . . . none of these is a standout candidate for replacement.

  • It's not a widely-studied subject, so using it in papers limits the audience of your work.

Most of these reasons are the usual ones about inertia: unless a radical approach to a subject is shown to have distinct advantages over the prevalent one, switching over is seen as more trouble than it's worth. And at the end of the day, mathematics has to be taught by more senior mathematicians, so they are the ones who tend to determine the curriculum.





This is a good start of an answer to the question that I am asking. What is missing from the answer quoted above is whether there exist any mathematical problems in non-standard analysis. Are there any, and if so, which?



I once read - on this forum, at a place that I really can't remember - that there exist some mathematical problems in non-standard analysis. At least some ideas or concepts that weren't, if I remember correctly, likable. The word likable points towards at least one bias. But is it a bias?



Please help me to understand whether there are mathematical problems, or problems with certain concepts, in non-standard analysis.


Answer



I'm not entirely sure what you're asking, but let me take a stab at it:




First of all, there's nothing standard analysis can do that nonstandard analysis can't. A nonstandard analyst could always decide to just study the standard hyperreals, and this would correspond to standard analysis. (The converse is also true, but nontrivially so.) So you won't find a mathematical problem in a deep sense; anytime the nonstandard approach is less useful than the standard one, a nonstandard analyst could always just use the standard approach inside nonstandard analysis.



That said, there are mathematical features of the hyperreals which are (in my opinion) less than ideal. Topologically, they are ugly: there are multiple natural topologies to put on them, and they all have odd features (see here). And I would consider the presence of lots of automorphisms to be a negative feature as well - it means that if someone asks for an example of an infinitesimal, we can't really give a satisfying answer; however, this arguably reflects my own standard bias.



I suspect there are also algebraic properties the reals have which the hyperreals lack, although at the moment I can't think of any (my previous example was incorrect and silly).



Basically, I think the bottom line is this:




  • Anything a standard analyst can do, a nonstandard analyst can also do - sometimes more easily. (Although my understanding is that that gain in ease rapidly drops off once one is comfortable with standard analysis. Some exceptions exist, however: the invariant subspaces problem was originally solved via nonstandard analysis, and I think there are some nonstandard proofs of esoteric results for which no proof via standard analysis is currently known, although we know that such a proof must exist.)



  • That said, the hyperreal field is a much less nice object than $\mathbb{R}$: the price of having a nice infinitesimal structure is that we lose good properties elsewhere. And it lacks - in my opinion - the compellingness of the structure $\mathbb{R}$. Note that this objection is completely unrelated to the question of whether nonstandard analysis and the hyperreals are useful: something need not be philosophically compelling to be a good tool. I actually think there is a really interesting philosophical phenomenon here: I find the hyperreals completely uncompelling, but the language and techniques of nonstandard analysis to be very compelling! The subject is somehow more compelling to me than its subject matter. No idea what that says about me.


  • I think there are extremely good reasons to learn standard analysis, but no good ones besides personal preference and limitations of time to not learn nonstandard analysis. I would argue that for most mathematicians, learning nonstandard analysis would not necessarily be a good use of time (a combination of unpopularity and - I suspect - a low benefit to their already-existing research interests), but the reasons for this are at least largely sociological, and not inherent to the subject.



integration - show that $\int_{0}^{\pi/2}\tan^a x \, dx=\frac{\pi}{2\cos(\frac{\pi a}{2})}$




show that $$\int_{0}^{\pi/2}\tan^ax \, dx=\frac {\pi}{2\cos(\frac{\pi a}{2})}$$



I think we can solve it by contour integration but I don't know how.



If someone can solve it both ways, using complex and real analysis, that would be better for me.



thanks for all.


Answer



Let $u=\tan{x}$, $dx=du/(1+u^2)$. Then the integral is




$$\int_0^{\infty} du \frac{u^a}{1+u^2}$$



This integral may be performed for $a \in (-1,1)$ by residue theory. By considering a contour integral about a keyhole contour about the positive real axis



[figure: keyhole contour around the positive real axis]



we find that



$$\left ( 1-e^{i 2 \pi a} \right) \int_0^{\infty} du \frac{u^a}{1+u^2} = i 2 \pi \frac{e^{i \pi a/2}-e^{i 3 a\pi/2}}{2 i}$$




Or



$$\int_0^{\infty} du \frac{u^a}{1+u^2} = \pi \frac{\sin{\pi a/2}}{\sin{\pi a}} $$



From which the sought after result may be found.



ADDENDUM



A little further explanation. Consider the contour integral




$$\oint_C dz \frac{z^a}{1+z^2}$$



where $C$ is the above keyhole contour. This means that the integral may be written as



$$\int_{\epsilon}^R dx \frac{x^a}{1+x^2} + i R \int_0^{2 \pi} d\theta \,e^{i \theta} \frac{R^a e^{i a \theta}}{1+R^2 e^{i 2 \theta}} + \\ e^{i 2 \pi a} \int_R^{\epsilon}dx \frac{x^a}{1+x^2} + i \epsilon \int_0^{2 \pi} d\phi\,e^{i \phi} \frac{\epsilon ^a e^{i a \phi}}{1+\epsilon ^2 e^{i 2 \phi}} $$



We take the limit as $R \to \infty$ and $\epsilon \to 0$ and we recover the expression for the contour integral above.
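To make the result concrete, here is a numeric spot-check of the derived formula at one value of $a$ (my own addition, assuming mpmath):

    from mpmath import mp, quad, mpf, pi, sin, inf

    mp.dps = 20
    a = mpf('0.3')
    val = quad(lambda u: u**a / (1 + u**2), [0, inf])
    print(val, pi * sin(pi * a / 2) / sin(pi * a))  # agree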


calculus - The substitution rule and differentials



The following is from Stewart's 'single variable calculus, 6E' (the bold is mine)



$$\int f(g(x))g'(x)dx = \int f(u)du$$



"Notice that the Substitution Rule for integration was proved using the chain rule for differentiation. Notice also that if $u=g(x)$, then $du = g'(x)dx$, so a way to remember the Substitution Rule is to think of $dx$ and $du$ in (4) [the above equaion] as differentials."




I understand the proof that the textbook provides for the substitution rule, but it doesn't say anything about differentials. I also see that if you make the substitutions mentioned, then you have the exact same notation on both sides of the equation, but saying that $du$ can be treated as a differential seems unjustified. Therefore, it's not clear to me how we can say that $du = g'(x)dx$.


Answer



This is kind of a short answer, but if $u=g(x)$, then $\frac{du}{dx}=g'(x)$ and so $du = g'(x)\ dx$. Remember that this is just convenient notation to help us remember the substitution rule; in this context, the differentials don't actually have any meaning.


calculus - Prove that the Taylor series converges to $\ln(1+x)$.

Prove the following statement.




For $0 \leq x \leq 1$, the Taylor Series, $\displaystyle x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots$ converges to $\ln(1+x)$




Any help will be greatly appreciated!

Thank you!

Saturday 27 May 2017

set theory - About a paper of Zermelo




This about the famous article



Zermelo, E., Beweis, daß jede Menge wohlgeordnet werden kann, Math. Ann. 59 (4), 514–516 (1904),



available here. Edit: Springer link to the original (OCR'ed, may be behind a paywall)



An English translation can be found in the book Collected Works/Gesammelte Werke by Ernst Zermelo. Alternative source: the book From Frege to Gödel: a source book in mathematical logic, 1879-1931, by Jean Van Heijenoort.



[See also this interesting text by Dan Grayson.]




I don't understand the paragraph on the last page whose English translation is




Accordingly, to every covering $\gamma$ there corresponds a definite well-ordering of the set $M$, even if the well-orderings that correspond to two distinct coverings are not always themselves distinct. There must at any rate exist at least one such well-ordering, and every set for which the totality of subsets, and so on, is meaningful may be regarded as well-ordered and its cardinality as an "aleph". It therefore follows that, for every transfinite cardinality,
$$\mathfrak m=2\mathfrak m=\aleph_0\,\mathfrak m=\mathfrak m^2,\mbox{and so forth;}$$
and any two sets are "comparable"; that is, one of them can always be mapped one-to-one onto the other or one of its parts.




It seems to me Zermelo says that the fact that any set can be well-ordered immediately implies that any infinite cardinal equals its square. Is this interpretation correct? If it is, what is the argument?




Side question: What is, in a nutshell, the history of the statement that any infinite cardinal equals its square? Where was it stated for the first time? Where was it proved for the first time? (In a similar vein: where was the comparability of any two cardinal numbers proved for the first time?)






Edit: German original of the passage in question:




Somit entspricht jeder Belegung $\gamma$ eine ganz bestimmte Wohlordnung der Menge $M$, wenn auch nicht zwei verschiedenen Belegungen immer verschiedene. Jedenfalls muß es mindestens eine solche Wohlordnung geben, und jede Menge, für welche die Gesamtheit der Teilmengen usw. einen Sinn hat, darf als eine wohlgeordnete, ihre Mächtigkeit als ein „Alef“ betrachtet werden. So folgt also für jede transfinite Mächtigkeit
$$\mathfrak m=2\mathfrak m=\aleph_0\,\mathfrak m=\mathfrak m^2\text{ usw.,}$$

und je zwei Mengen sind miteinander „vergleichbar“, d. h. es ist immer die eine ein-eindeutig abbildbar auf die andere oder einen ihrer Teile.



Answer



The following is a theorem of $ZF$: 



The axiom of choice holds if and only if for every infinite set $A$, there exists a bijection of $A$ with $A\times A$. (i.e. $|A|=|A|^2$) 






Let us overview the theorem of Zermelo, namely if the axiom of choice holds then $\kappa=\kappa^2$ for every infinite $\kappa$.




This is fairly simple, by the canonical well ordering of pairs.



Consider $\alpha\times\beta$, this can be well ordered as ordinal multiplication (that is $\beta$ copies of $\alpha$, i.e. lexicographical ordering), or it can be ordered as following:



$$(x,y)<(w,z)\iff\begin{cases} \max\{x,y\}<\max\{w,z\}\\ \max\{x,y\}=\max\{w,z\}\land x<w\\ \max\{x,y\}=\max\{w,z\}\land x=w\land y<z\end{cases}$$

This is a well-ordering (can you see why?). Now we will prove that $\kappa\times\kappa$ has the same order type as $\kappa$, this is a proof that the two sets have the same cardinality, since similar order types induce a bijection.



Firstly, it is obvious that $\kappa$ is at most of the order type of $\kappa\times\kappa$, since $\kappa$ embeds into it via $\alpha\mapsto (\alpha,\alpha)$. For the other direction, we prove by induction on $\alpha$ that for the initial ordinal $\omega_\alpha$ it is true that $\omega_\alpha$ and $\omega_\alpha\times\omega_\alpha$ have the same order type.




Fact: If $\delta<\omega_\alpha$ (where $\omega_\alpha$ is the $\alpha$-th initial ordinal) then $|\delta|<\aleph_\alpha$.



The claim is true for $\omega_0=\omega$ since for any $k$ the set $\{(n,m)\mid (n,m)<(k,k)\}$ is finite. Therefore the order type of $\omega\times\omega$ is the supremum of $\{k_n\mid n\in\omega\}$ and $k_n$ are finite. Simply put, the order type is $\omega$.



Now assume (by contradiction) $\alpha$ was the least ordinal such that $\omega_\alpha$ was a counterexample to this claim, i.e. $\omega_\alpha$ is strictly less than the order type of $\omega_\alpha\times\omega_\alpha$.



Let $(\gamma,\beta)<\omega_\alpha\times\omega_\alpha$ be the pair of ordinals such that the order type of $\{(\xi,\zeta)\mid (\xi,\zeta)<(\gamma,\beta)\}$ is $\omega_\alpha$.



Take $\delta$ such that $\omega_\alpha>\delta>\max\{\gamma,\beta\}$ then $(\gamma,\beta)<(\delta,\delta)$ and in particular $\{(\xi,\zeta)\mid (\xi,\zeta)<(\delta,\delta)\}$ has cardinality of at least $\omega_\alpha$, as it extends a well order of the type $\omega_\alpha$.




However, $\delta<\omega_\alpha$, so by the fact above it is of smaller cardinality, and thus that set has the cardinality $|\delta|\times |\delta|=|\delta|<\omega_\alpha$ by our induction assumption. Hence, a contradiction.






The other direction, also known as Tarski's theorem (I managed to find that it was published around 1923, but I could not find a proper reference.) is as follows:



Suppose that for all infinite $A$, there exists a bijection of $A$ with $A\times A$ then the axiom of choice holds.



The proof (which I will not bring here, as it would require a few more notations and definitions - I did give it here) uses the concept of Hartogs number (the least ordinal which cannot be injected into $A$). The proof in its essence is:




If $\aleph(A)$ is the Hartog of $A$,
$$A+\aleph(A)=(A+\aleph(A))^2=A^2+2\cdot A\cdot\aleph(A)+\aleph(A)^2\ge A\cdot\aleph(A)\ge A+\aleph(A)$$



We then use (or prove) a theorem that if $A+\aleph(A)=A\cdot\aleph(A)$ then $A$ can be well ordered.



Historically, Tarski came to publish this theorem. It was rejected at first. The Polish-American mathematician Jan Mycielski relates in his article A System of Axioms of Set Theory for the Rationalists, Notices AMS, February 2006, p. 209:




Tarski told me the following story. He tried to publish his theorem (stated above) in the Comptes Rendus Acad. Sci. Paris but Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest. And Tarski said that after this misadventure he never tried to publish in the Comptes Rendus.





Found via Wikipedia article on the axiom of choice.


sequences and series - Can one show that $\sum_{n=1}^N\frac{1}{n} -\log N - \gamma \leqslant \frac{1}{2N}$ without using the Euler-Maclaurin formula?



I would like to prove that

$$
\sum_{n=1}^N\frac{1}{n} -\log N - \gamma \leqslant \frac{1}{2N}
$$
without using the Euler-Maclaurin summation formula. The motivation for this is that I have come very close to doing so (see the answer provided below) but annoyingly have not actually proved the above.



Some may ask why I don't just use the formula. I'm writing a set of analytic number theory notes for my own use and it seems an unwieldy result to introduce and prove, given that the above inequality is all I need, and given that I have gotten so close without using Euler-Maclaurin!


Answer



Let
$$\gamma_n = \sum_{k=1}^n \frac{1}{k} - \log n.$$
Our goal is to show that

$$\gamma_n - \lim_{m \to \infty} \gamma_m \leq \frac{1}{2n}.$$
It is enough to show that, for $n<m$,
$$\gamma_n - \gamma_m \leq \frac{1}{2n}.$$
This has the advantage of dealing solely with finite quantities.



Now,
$$\gamma_n - \gamma_m = \int_{n}^m \frac{dt}{t} - \sum_{k=n+1}^m \frac{1}{k} =\sum_{j=n}^{m-1} \int_{j}^{j+1} \left( \frac{1}{t} - \frac{1}{j+1} \right) \cdot dt .$$



At this point, if I were at a chalkboard rather than a keyboard, I would draw a picture. Draw the hyperbola $y=1/x$ and mark off the interval between $x=n$ and $x=m$. Divide this into $m-n$ vertical bars of width $1$. Each bar stretches up to touch the hyperbola at its right corner. There is a little wedge, bounded by $x=j$, $y=1/(j+1)$ and $y=1/x$. We are adding up the area of each of these wedges.[1]




Because $y=1/x$ is convex, the area of this wedge is less than that of the right triangle with vertices at $(j,1/(j+1))$, $(j+1, 1/(j+1))$ and $(j,1/j)$. This triangle has base $1$ and height $1/j - 1/(j+1)$, so its area is $(1/2) (1/j - 1/(j+1))$. So the quantity of interest is
$$\leq \sum_{j=n}^{m-1} \frac{1}{2} \left( \frac{1}{j} - \frac{1}{j+1} \right) = \frac{1}{2} \left( \frac{1}{n} - \frac{1}{m} \right) \leq \frac{1}{2n}.$$



Of course, this is just a standard proof of Euler-Maclaurin summation, but it is a lot more geometric and easy to follow in this special case.



[1] By the way, since this area is positive, we also get the corollary that $\gamma_n - \gamma_m > 0$, so $\gamma_n - \gamma >0$, another useful bound.
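Numerically, the bound is comfortably satisfied (a check I added; the value of $\gamma$ is hard-coded):

    from math import log

    gamma = 0.5772156649015329  # Euler-Mascheroni constant

    for n in (10, 100, 1000):
        H = sum(1.0 / k for k in range(1, n + 1))
        print(n, H - log(n) - gamma, 1 / (2 * n))  # first column < second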


proof verification - induction and proper use of the inductive step



Suppose I have a conjecture of the form $\forall x \in \mathbb{N}^{+}$ $f(x) > g(x)$.




Without explicitly giving the functions $f$ and $g$ I would like to prove this conjecture using induction.



Specifically I have shown $f(1) > g(1)$. My inductive assumption is now that $f(x - 1) > g(x - 1)$ and I'm trying to show $f(x) > g(x)$.



I haven't been able to do this, but I've noticed that if I instead assume $f(y) > g(y)$ for all $y \in \mathbb{N}^{+}$ with $y < x$, then I can prove $f(x) > g(x)$.



My question is, is this ok or am I using circular logic? Typically when we use induction we use $x - 1$ to prove the $x$ case after an initial base case. For my problem it doesn't appear possible to prove $f(x) > g(x)$ without assuming that for all values less than $x$ the conjecture holds ($f$ and $g$ are recursive). I suspect that it is ok, but it sounds extremely circular and I haven't been able to convince myself that it isn't.



Note also that I've included proof verification as a tag because although I have not included the explicit proof, I have included a proof approach that I'm interested in validating.



Answer



So you just want to make the stronger assumption that some statement $P(y)$ holds for all $y < x$ instead of only for $y = x-1$?



This is totally fine. It is called strong induction and is equivalent to simple induction.


calculus - Substitution Rule for Definite Integrals

I'm working on an integration by parts problem, and I'm trying to substitute to simplify the equation:



$$\int_{\sqrt{\frac{\pi}{2}}}^{\sqrt{\pi}} \theta^3 \cos(\theta^2)\, d\theta$$



Using the substitution rule for definite integrals, I substitute $\theta^2 = t$ and apply the same to the limits of integration:



$$\int_{\frac{\pi}{2}}^{\pi} t^{\frac{3}{2}} \cos(t)\, dt$$



However, Wolfram|Alpha tells me that I have done something wrong, as these two integrals are not equivalent. Where did I screw up?

Friday 26 May 2017

calculus - Proving using mean value theorem




Let f be a function continuous on [0, 1] and twice differentiable on (0, 1).



a) Suppose that f(0) = f(1) = 0 and f(c) > 0 for some c ∈ (0,1).
Prove that there exists $x_0$ ∈ (0,1) such that f′′($x_0$) < 0.



b) Suppose that $$\int_{0}^{1}f(x)\,\mathrm dx=f(0) = f(1) = 0.$$



Prove that there exists a number $x_0$ ∈ (0,1) such that f′′($x_0$) = 0.




How do I solve the above two questions? I have tried using Mean Value Theorem but it gives me zero when I differentiate for the first time. Not sure how I can get the second derivative to be less than zero.



Any help is much appreciated! Thanks!


Answer



For the second problem, we note that if $f$ is constant, then any point satisfies the requirement, so we can suppose $f$ is not constant.



If we can show that we can split $[0,1]$ into two intervals of non-zero length
$[0, t^*], [t^* ,1]$ such that $f(t^*) = 0$, and there exists $t_1 \in (0, t^*)$ and $t_2 \in (t^*,1)$ such that $f'(t_1)=0, f'(t_2) = 0$, we can apply
the mean value theorem to find some $t_3 \in (t_1,t_2)$ such that $f''(t_3 ) = 0$.




Hint:




Let $\phi(t) = \int_0^t f(x) dx$ and note that $\phi(0) = \phi(1) = 0$. Since $f$ is not constant, and $\phi'(t) = f(t)$, we see that $\phi$ is not constant. Then $\phi$ must have a maximum or minimum at $t^* \in (0,1)$, and we see that $\phi'(t^*) = f(t^*) = 0$.



real analysis - Showing $\sin(x) < x$ for $x > 0$ using the mean value theorem

I want to show that $\sin(x) < x$ for all $x>0$, using the mean value theorem.



Since the sine is bounded above by $1$, it's obviously true for $x > 1$. Consider $x \in ]0,1]$. Let $f(x)=\sin(x)$. Choose $a=0$ and $x>0$, then there is, according to the mean value theorem, an $x_0$ between $a$ and $x$ with



$$f'(x_0)=\frac{f(x)-f(a)}{x-a} \Leftrightarrow (\sin(x))'(x_0)= \frac{\sin(x)-\sin(a)}{x} \Leftrightarrow \cos(x_0)=\frac{\sin(x)}{x}$$




Since $1\geq x_0>0 \Rightarrow \cos(x_0) < 1$,



$$\Rightarrow 1 > \cos(x_0)=\frac{\sin(x)}{x} \Rightarrow x > \sin(x)$$



Is my proof correct?

trigonometry - How to simplify the ratio ${1 - \cos2x + i\sin2x \over 1 + \cos2x - i\sin 2x}$



The ratio is as follows:



$$1 - \cos2x + i\sin2x \over 1 + \cos2x - i\sin 2x$$




I am unsure how to simplify this, as the numerator poses a problem as I try to multiply this equation by $\operatorname{cis}(2x)$ to get a real denominator.


Answer



HINT



Recall that




  • $\cos t = \frac{e^{it}+e^{-it}}{2}$


  • $\sin t = \frac{e^{it}-e^{-it}}{2i}$
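For completeness, here is one way the hint can be finished; this computation is my own addition, using the double-angle forms rather than the exponentials directly:

$$\frac{1 - \cos 2x + i\sin 2x}{1 + \cos 2x - i\sin 2x} = \frac{2\sin^2 x + 2i\sin x\cos x}{2\cos^2 x - 2i\sin x\cos x} = \frac{2i\sin x\,(\cos x - i\sin x)}{2\cos x\,(\cos x - i\sin x)} = i\tan x$$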




linear algebra - Can anyone please explain the difference between a vector and a matrix?



I just took Calculus 3 last semester at my University and got comfortable with the idea of vectors, vector-valued functions, and basic vector operations like the dot and cross products.



This semester, I'm taking Differential Equations and we seem to be throwing around the terms "vector" and "matrix" as if they're interchangeable, especially now that we're studying systems of first-order differential equations. Additionally, there's mention of "vector spaces" which haven't been clearly explained to me.




The last time I dealt with matrices was in Algebra II in the 9th grade about 4 years ago, so there's quite a disconnect here. I feel like if I had taken Linear Algebra, this course would have been easier since my professor keeps saying "if you've taken Linear Algebra, then this should be familiar to you" which isn't exactly helpful.



Can anyone help me bridge these gaps in my understanding?


Answer



Very roughly speaking ...



A matrix is a 2-dimensional array of numbers. If the array has $m$ rows and $n$ columns, we say that we have a matrix of size $m \times n$.



A vector can be regarded as a special type of matrix. A row vector is a matrix of size $1 \times n$, and a column vector is a matrix of size $m \times 1$.




You probably know how to multiply matrices. Since vectors are just special types of matrices, you know how to multiply a matrix times a vector. Multiplying by a matrix is often used as a way to somehow "transform" a vector (to rotate it or mirror it or scale it, for example).
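For instance, here is a tiny numpy sketch of a matrix "transforming" a column vector (my own example):

    import numpy as np

    # rotate a 2-D column vector by 90 degrees via a matrix product
    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    v = np.array([[1.0],
                  [0.0]])  # a 2x1 matrix, i.e. a column vector
    print(R @ v)  # [[0.], [1.]]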


Strategy for the Limit: $\lim_{n\to\infty} \frac{2^{n+1}+3^{n+1}}{2^n+3^n}$



I do not understand how to properly solve this limit:
$$
\lim_{n\to\infty} \frac{2^{n+1}+3^{n+1}}{2^n+3^n}

$$



I thought of breaking it up:
$$
\lim_{n\to\infty} \frac{2^{n+1}}{2^n+3^n} +\lim_{n\to\infty} \frac{3^{n+1}}{2^n+3^n}
$$

But I do not see how this will allow me to use any of the limit rules to reduce. I know that the sequence converges, though.



Thanks.


Answer




The rule of the dominant term: always divide by the most dominant term on top and bottom, and see where things go. (The dominant term is the algebraic expression growing the fastest, usually detected by observation.)



For example, here the most dominant term is $3^{n+1}$. So divide top and bottom by $3^{n+1}$ :
$$
\frac{2^{n+1} + 3^{n+1}}{2^n + 3^n} = \frac{\left(\frac{2}{3}\right)^{n+1} + 1}{\frac 13\left(\frac 23\right)^n + \frac 13}
$$



The limit of the numerator is $1$ and the denominator is $\frac 13$ as $n \to \infty$, and thus the desired limit is their quotient i.e. $3$. The limits of top and bottom are easy to calculate since we have the power of $\frac 23 < 1$ in both expressions, which goes to $0$ as $n \to \infty$.



Dividing by the most dominant term allows you to create a numerator and denominator whose limits are easy to calculate.
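A one-line numerical confirmation (my own addition):

    print([(2**(n+1) + 3**(n+1)) / (2**n + 3**n) for n in (1, 5, 10, 50)])
    # [2.6, 2.8836..., 2.9829..., ~3.0] -- approaching 3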



Thursday 25 May 2017

arithmetic - Does the conjunction of linear inequalities imply their summation

Let A and B represent two linear inequalities:



$A : a_1 x_1 + ... + a_n x_n \geq k_1$



$B : b_1 x_1 + ... + b_n x_n \geq k_2$




If A and B are unsatisfiable (do not have a common solution), does the following hold in general (does the conjunction of the two inequalities imply their sum)? If so, I am looking for a formal proof.



$A \land B \implies A + B$



$$a_1x_1+\cdots+a_nx_n \geq k_1 \;\; \land \;\; b_1x_1+\cdots+b_nx_n\geq k_2 \implies a_1x_1+\cdots+a_nx_n + b_1x_1+\cdots+b_nx_n \geq k_1+k_2$$



and then I would like to generalize the above theorem to summation of several inequalities.



My attempt:
My intuition is that if A and B be unsatisfiable, there is a matrix of Farkas coefficient C such that the weighted sum of A + B would be zero, and leads to -1 > 0 contradiction. Since A and B are unsatisfiable, the conjunction would be false. Therefore $\bot \implies \bot$

which is a correct statement.



My question is how to generalise this proof for a system of linear inequalities
$A : \bigwedge \Sigma_{i=1}^{n} a_i x_i\leq k_i \;\; \wedge \;\; \bigwedge \Sigma_{i=1}^{n} b_i y_i\leq l_i $



and



$B: \bigwedge \Sigma_{j=1}^{n} a_j x_j\leq w_j \;\; \wedge \;\; \bigwedge \Sigma_{j=1}^{n} b_j y_j\leq z_j $

abstract algebra - Visualising finite fields

I'm interested in finding visual and/or physical approaches to understanding finite fields. I know of a few: V. I. Arnold has a few pictures of 'finite circles' and 'finite tori' in his book Dynamics, Statistics and Projective Geometry of Galois Fields. Also, N. Carter displays what you might call 'double Cayley diagrams' of the fields of order $4=2^2$ and $8=2^3$ in his book Visual Group Theory, which I reproduce here:



[figure: Cayley diagrams of the fields of order 4 and 8]



The solid lines are the graph for addition and the dotted lines are the graph for multiplication. I like how you can see the structure of the additive group as a product of cyclic groups with the order of the characteristic, and if you look closer you can also see how the multiplicative group is cyclic.



Are there any other interesting visual/physical ways of understanding finite fields?

calculus - Find the limit of $\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$ without L'Hospital's rule




I have to find: $$\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$$
and I want to calculate it without using L'Hospital's rule. With L'Hospital's I know that it gives $1/2$.
Any ideas?


Answer



Simply differentiate $f(x)=\ln(e^x +1)$ at the point of abscissa $x=0$ and you’ll get the answer. In fact, the given limit is precisely the definition of the derivative of $f$ at $0$: here $f'(x)=\frac{e^x}{e^x+1}$, so $f'(0)=\frac{1}{2}$.


Wednesday 24 May 2017

sequences and series - convergence criteria from real to complex domains?



It is well known the following function




$$
f(x) = \frac{1}{1^x} + \frac{1}{2^x} + \frac{1}{3^x} + \frac{1}{4^x} + \cdots
$$



only converges if $x > 1$.



If we now consider $f(z)$ where z is complex, why can we say that the function converges for $Re(z) > 1$?



What's logic that allows us to simply apply convergence rules to the real part of a complex function's domain?




(I am not a trained mathematician so I'd appreciate answers which minimise assumptions about terminology.)


Answer



As indicated by Martin R in the comment, the reason is that absolute convergence of complex series implies convergence and in this case we have that for $z=x+iy$



$$\left|\frac1{n^z}\right|=\frac1{|n^z| }=\frac1{|n^{x}|}$$



indeed



$$|n^z|= |n^x||n^{iy}|=|n^x||e^{iy\log n}|=|n^x|$$




thus the series converges for $Re(z)=x>1$.
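Concretely (my own illustration), the modulus of $n^{-z}$ really only sees the real part of $z$:

    n, z = 7, 1.3 + 7.2j
    print(abs(n**-z), n**-1.3)  # identical up to float rounding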


sequences and series - Result of the product $0.9 \times 0.99 \times 0.999 \times \cdots$



My question has two parts:




  1. How can I nicely define the infinite sequence $0.9,\ 0.99,\ 0.999,\ \dots$? One option would be the recursive definition below; is there a nicer way to do this? Maybe put it in a form that makes the second question easier to answer.

    $$s_{i+1} = s_i + 9\cdot10^{-i-2},\ s_0 = 0.9$$
    Edit: Suggested by Kirthi Raman:
    $$(s_i)_{i\ge1} = 1 - 10^{-i}$$


  2. Once I have the sequence, what would be the limit of the infinite product below? I find the question interesting since $0.999... = 1$, so the product should converge (I think), but to what? What is the "last number" before $1$ (I know there is no such thing) that would contribute to the product?
    $$\prod_{i=1}^{\infty} s_i$$



Answer



To elaborate, and extend on GEdgar's answer: there is what is called the $q$-Pochhammer symbol



$$(a;q)_n=\prod_{k=0}^{n-1} (1-aq^k)$$




and $(a;q)_\infty$ is interpreted straightforwardly. The product you are interested in is equivalent to $\left(\frac1{10};\frac1{10}\right)_\infty\approx0.8900100999989990000001$.



One can also express the $q$-Pochhammer symbol $(q;q)_\infty$ in terms of the Dedekind $\eta$ function $\eta(\tau)$ or the Jacobi $\vartheta$ function $\vartheta_2(z,q)$; in particular we have



$$\left(\frac1{10};\frac1{10}\right)_\infty=\sqrt[24]{10}\eta\left(\frac{i\log\,10}{2\pi}\right)=\frac{\sqrt[24]{10}}{\sqrt 3}\vartheta_2\left(\frac{\pi}{6},\frac1{\sqrt[6]{10}}\right)$$






I might as well... there is the following identity, due to Euler (the pentagonal number theorem):




$$(q;q)_\infty=\prod_{j=1}^\infty(1-q^j)=\sum_{k=-\infty}^\infty (-1)^k q^\frac{k(3k-1)}{2}$$



which, among other things, gives you a series you can use for quickly estimating your fine product:



$$\left(\frac1{10};\frac1{10}\right)_\infty=1+\sum_{k=1}^\infty (-1)^k\left(10^{-\frac{k}{2}(3k+1)}+10^{-\frac{k}{2}(3k-1)}\right)$$



Three terms of this series gives an approximation good to twenty digits; five terms of this series yields a fifty-digit approximation.
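For the curious, both the partial products and the pentagonal-number series are easy to evaluate to float precision (a sketch I added):

    prod = 1.0
    for k in range(1, 30):
        prod *= 1 - 10.0**-k  # partial products of (1/10; 1/10)_infinity

    series = 1.0
    for k in range(1, 6):  # accuracy here is limited by double precision
        series += (-1)**k * (10.0**(-k*(3*k + 1)//2) + 10.0**(-k*(3*k - 1)//2))

    print(prod, series)  # both ~0.89001009999899...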


physics - How to get to this result? $\sqrt{\frac{2\times6.73\times10^{-19}}{9.109\times10^{-31}}}=\sqrt{1.50\times 10^{12}}$




I'm sorry but I can't understand what's happening between $$\sqrt{\frac{2\times6.73\times10^{-19}}{9.109\times10^{-31}}}=\sqrt{1.50\times 10^{12}}$$ This is what the solutions from my manual have written on them, but I don't get how that operation was done. Thanks in advance


Answer



We have $6.73 = 673\times 10^{-2}$ and $9.109=9109\times 10^{-3}$, and recall that $10^a\times 10^b=10^{a+b}$ for $a,b\in\mathbb{R}$.



So, we get the following



$$\sqrt{\frac{2\times 673\times 10^{-2}\times 10^{-19}}{9109\times 10^{-3}\times 10^{-31}}}= \sqrt{\frac{2\times 673\times 10^{-21}}{9109\times 10^{-34}}}= \sqrt{\frac{2\times 673\times 10^{34} \times 10^{-21}}{9109}}=\sqrt{0.1477\times 10^{13}}$$



which leads to approximately $\sqrt{1.5\times 10^{12}}$ since $\frac{2\times 673}{9109}=0.1477\approx 0.15$.
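Or simply let the machine confirm it (my own addition):

    from math import sqrt
    print(sqrt(2 * 6.73e-19 / 9.109e-31))  # ~1.2156e6, i.e. sqrt(~1.4777e12)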


Tuesday 23 May 2017

polynomials - When is $\sqrt[3]{a+\sqrt b}+\sqrt[3]{a-\sqrt b}$ an integer?

I saw a Youtube video in which it was shown that
$$(7+50^{1/2})^{1/3}+(7-50^{1/2})^{1/3}=2$$
Since there are multiple values we can choose for the $3$rd root of a number, it would also make more sense to declare the value of this expression to be one of $2, -1 + \sqrt{-6},$ or $-1 - \sqrt{-6}$



We may examine this more generally. If we declare $x$ such that
$$x=(a+b^{1/2})^{1/3}+(a-b^{1/2})^{1/3}$$
$$\text{(supposing } a \text{ and } b \text{ to be integers here)}$$
one can show that

$$x^3+3(b-a^2)^{1/3}x-2a=0$$
Which indeed has $3$ roots.



We now ask




For what integer values of $a$ and $b$ is this polynomial solved by an integer?




I attempted this by assuming that $n$ is a root of the polynomial. We then have

$$x^3+3(b-a^2)^{1/3}x-2a$$
$$||$$
$$(x-n)(x^2+cx+d)$$
$$||$$
$$x^3+(c-n)x^2+(d-nc)x-nd$$
Since $(c-n)x^2=0$ we conclude that $c=n$ and we have
$$x^3+3(b-a^2)^{1/3}x-2a=x^3+(d-c^2)x-cd$$
And - to continue our chain of conclusions - we conclude that
$$3(b-a^2)^{1/3}=d-c^2 \quad\text{and}\quad 2a=cd$$
At this point I tried creating a single equation and got

$$108b=4d^3+15c^2d^2+12c^4d-4c^6$$
This is as far as I went.

matrices - Linear Algebra - Prove $AB=BA$



Let $A$ and $B$ be any $n \times n$ matrices defined over the real numbers.



Assume that $A^2+AB+2I=0$.





  • Prove $AB=BA$



My solution (Not full)



I didn't manage to get very far.



$A(A+B)=-2I$




$-\frac{1}{2}A(A+B)=I$



Therefore $A$ is invertible and $A+B$ is invertible.



I don't know how to get on from this point, What could I conclude about $A^2+AB+2I=0?$



Any ideas? Thanks.


Answer



From

$$
A(A+B)=A^2+AB=-2I
$$
we have that
$$
A^{-1}=-\frac12(A+B)
$$
then multiplying by $-2A$ on the right and adding $2I$ gives
$$
A^2+BA+2I=0=A^2+AB+2I

$$
Cancelling common terms yields
$$
BA=AB
$$






Another Approach




Using this answer (involving more work than the previous answer), which says that
$$
AB=I\implies BA=I
$$
we get
$$
-\frac12A(A+B)=I\implies-\frac12(A+B)A=I
$$
Cancelling common terms gives $AB=BA$.
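A random numerical instance of the setup (my own addition): pick an invertible $A$, solve the constraint for $B$, and check both the premise and the conclusion:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 4))
    # A^2 + AB + 2I = 0 forces B = -A - 2 A^{-1}
    B = -A - 2 * np.linalg.inv(A)

    print(np.allclose(A @ A + A @ B + 2 * np.eye(4), 0))  # premise: True
    print(np.allclose(A @ B, B @ A))                      # conclusion: True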


elementary set theory - Evaluating correctness of various definitions of countable sets



I was trying to understand the definition of countable set (again!!!). Wikipedia has a very great explanation:






  1. A set $S$ is countable if there exists an $\color{red}{\text{injective}}$ function $f$ from $S$ to the natural numbers $\mathbb N$.

  2. If such an $f$ can be found that is also $\color{red}{\text{surjective}}$ (and therefore bijective), then $S$ is called countably infinite.

  3. In other words, a set is countably infinite if it has $\color{red}{\text{bijection}}$ with the $\mathbb N$.




So I summarize:




  1. $S$ is countable iff $S\xrightarrow{injection}\mathbb N$


  2. $S$ is countably infinite iff $S\xrightarrow{bijection}\mathbb N$



But then Wikipedia confuses me by stating the following points:




Theorem: Let $S$ be a set. The following statements are equivalent:




  1. $S$ is countable, i.e. there exists an injective function $f : S → \mathbb N$.


  2. Either $S$ is empty or there exists a surjective function $g : \mathbb N → S$.

  3. Either $S$ is finite or there exists a bijection $h : \mathbb N → S$.




Q1. I feel the 2nd statement is wrong, as it allows some element in $S$ not to map to any element in $\mathbb N$. That is, $\mathbb N \xrightarrow{surjection} S$ does not imply $S\xrightarrow{injection}\mathbb N$. Hence $S$ is not countable. Right?
Q2. The 3rd statement defines a countably infinite set, so it's countable also. Right?
Q3. Also, I don't get whether the extra restrictions of emptiness and finiteness in statements 2 and 3 are required.



Wikipedia further says:





Corollary: Let $S$ and $T$ be sets.




  1. If the function $f : S → T$ is injective and $T$ is countable then $S$ is countable.

  2. If the function $g : S → T$ is surjective and $S$ is countable then $T$ is countable.




Q4. Here, too, I feel 2nd statement is incorrect for the same reason as 2nd statement in the theorem. Right?




Edit



I don't know if it's correct to add this edit, but it's the source of my confusion, so I am adding it anyway. All answers on this post go on explaining how surjectivity and injectivity imply each other and hence bijectivity. But does that mean that whenever an injective $f:X\rightarrow Y$ exists, there also exists a surjective $g:Y\rightarrow X$ (and also a bijective one)? I don't feel so, as Wikipedia gives examples of an injective $f:X\rightarrow Y$ for which $g:Y\rightarrow X$ is not surjective:



[figure: an injective function that is not surjective]



On the same page, it gives example of surjective $g:Y\rightarrow X$, for which $f:X\rightarrow Y$ is not injective:



[figure: a surjective function that is not injective]




How can I reconcile these facts with given answers? I must be missing something very basic!!!


Answer




  1. In fact, $f:\mathbb N \xrightarrow{surjection} S$ does imply $g:S\xrightarrow{injection} \mathbb N$. Since $f$ is a function, for each $n\in\mathbb N$ there exists a unique $s\in S$ so that $f(n)=s$. Since $f$ is surjective, each $s\in S$ can be found in such a way. To define $g$, for each $s\in S$ choose some $n\in\mathbb N$ so that $f(n)=s$. Define $g(s)=n$. This function must be injective, since if it weren't then two distinct $s,t$ would map to the same natural $n$. By construction, $f(n)=s$ and $f(n)=t$, contradicting $s\neq t$.


  2. The third statement doesn't quite define a countable set. In the one case, if there is a bijection, yes that agrees with your definition of a countable set. In the other, a very small amount of work needs to be done to show that there is an injection from any finite set into the naturals. This isn't quite included in your definition.


  3. $S$ being empty needs to be treated as a special case because functions need outputs. If $S$ is empty, no such function $g$ can be defined. In your other point, finiteness is also required since all finite sets are countable by your definition (Order them as $x_1,\dots,x_n$ and define $f(x_i)=i$. The details can be handled with induction.), but no finite set has a bijection with $\mathbb N$, so they have to be handled as a special case.


  4. Once this is dealt with in the theorem, it is dealt with here as well. Injectivity and surjectivity can be flipped when the order of the sets is flipped as well. Another way of thinking about this is that once the theorem is proven true, as hard as it might be to accept the corollary it must be true as well since it is such a small leap from the theorem itself.



complex analysis - Finding a Laurent Series involving two poles



Find the Laurent Series on the annulus $1 < |z| < 4$ for




$$R(z) = \frac{z+2}{(z^2-5z+4)}$$



So I am having a few issues with this. I know there are two poles in this problem, namely $z = 1$ and $z = 4$, so if I factor, I get it into the form:



$$ \frac{z+2}{(z-1)(z-4)} $$



and here is where it gets a little hazy. I know there is a relationship in which I would have to split this expression into partial fractions:



$$ \frac{z+2}{(z^2-5z+4)} = \frac{2}{z-4} + \frac{-1}{z-1} $$




Now the textbook goes on about using geometric series, which I somewhat see, but I cannot understand how to get the coefficients. I tried treating each partial fraction as a power series and solving for the coefficients, but to no avail. Then I tried skipping the partial-fraction expansion entirely and solving for the coefficients by accounting for each singularity and treating the remaining part as a power series, i.e.



$ \frac{z+2}{(z-4)} $ as one power series and then $ \frac{z+2}{(z-1)} $ as the other.



Still not working out. Perhaps my ideas are scattered.


Answer



You need to put each partial fraction into the form $\frac{1}{1-w}$ where $|w| \lt 1$ in order to use the geometric series expansion.



$\frac{2}{z-4}$ is analytic in $|z| \lt 4$ and $\Big|\frac{z}{4}\Big| \lt 1$ so we have:

$$
\frac{2}{z-4} = -\frac{1}{2}\cdot\frac{1}{1-\frac{z}{4}} = -\frac{1}{2}\sum_{n=0}^{\infty}\frac{z^n}{4^n}=\sum_{n=0}^{\infty}-\frac{z^n}{2^{2n+1}}
$$



$-\frac{1}{z-1}$ is analytic in $|z| \gt 1$ and $\Big|\frac{1}{z}\Big| \lt 1$ so we have:
$$
-\frac{1}{z-1} = -\frac{1}{z}\cdot\frac{1}{1-\frac{1}{z}} = -\frac{1}{z}\sum_{n=0}^{\infty}\frac{1}{z^n} = \sum_{n=0}^{\infty}-\frac{1}{z^{n+1}}
$$



So in total we have:

$$
R(z) = \dots -\frac{1}{z^3}-\frac{1}{z^2}-\frac{1}{z} -\frac{1}{2}-\frac{z}{8}-\frac{z^2}{32}-\dots
$$
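As a numerical sanity check of this expansion (my own sketch; the test point $z=2$ and the truncation length are arbitrary choices inside the annulus):

```python
# Compare R(z) against the truncated Laurent series at a point in 1 < |z| < 4.
z = 2.0
exact = (z + 2) / (z**2 - 5*z + 4)

N = 60  # number of terms kept from each part
principal = sum(-1 / z**(n + 1) for n in range(N))      # terms -1/z^(n+1)
analytic = sum(-z**n / 2**(2*n + 1) for n in range(N))  # terms -z^n/2^(2n+1)

print(exact, principal + analytic)  # both equal -2.0 at z = 2
```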


elementary number theory - Let $p=q+4a$. Prove that $\left(\frac{a}{p}\right) = \left(\frac{a}{q}\right)$.



Here's a little number theory problem I'm wrestling with.



Let $p$ and $q$ be odd prime numbers with $p=q+4a$ for some $a \in \mathbb{Z}$. Prove that $$\left( \frac{a}{p} \right) = \left( \frac{a}{q} \right),$$




where $\left( \frac{a}{p} \right)$ is the Legendre symbol. I have been trying to use the law of quadratic reciprocity but to no avail. Can you help?


Answer



Note that $p \equiv q \pmod{4}$, so $\frac{p-1}{2}\frac{q+1}{2} \equiv \frac{p-1}{2}\frac{p+1}{2} \equiv 0 \pmod{2}$.



\begin{align}
\left(\frac{a}{p}\right)=\left(\frac{4a}{p}\right)=\left(\frac{p-q}{p}\right)& =\left(\frac{-q}{p}\right) \\
& =\left(\frac{-1}{p}\right)\left(\frac{q}{p}\right) \\
&=(-1)^{\frac{p-1}{2}}\left(\frac{p}{q}\right)(-1)^{\frac{p-1}{2}\frac{q-1}{2}} \\
&=(-1)^{\frac{p-1}{2}\frac{q+1}{2}}\left(\frac{p-q}{q}\right) \\
&=\left(\frac{4a}{q}\right) \\

&=\left(\frac{a}{q}\right)
\end{align}
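As a quick empirical check of the identity (a sketch I added, assuming sympy's `isprime` and `legendre_symbol` helpers):

```python
# Test (a/p) = (a/q) for p = q + 4a over small odd primes p, q.
from sympy import isprime, legendre_symbol

for q in range(3, 60, 2):
    if not isprime(q):
        continue
    for a in range(1, 20):
        p = q + 4 * a
        if isprime(p) and a % q != 0:  # skip cases where (a/q) = 0
            assert legendre_symbol(a, p) == legendre_symbol(a, q)
print("identity holds on all tested cases")
```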


random variables - Probability of having at least one coupon out of N types

I'm facing a question regarding random variables:




A coupon website has $N$ distinct kinds of coupons. Each selection of a coupon is equally likely and selections are independent. Let $T$ be a random variable representing the number of coupons, $n$, that one should collect until one has at least one coupon of every kind.





I need to determine the probability function $P(T=n)$. How should I approach this question?
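Not a derivation, but a Monte Carlo sketch (my own; $N=5$ and the trial count are arbitrary choices) can make the distribution of $T$ concrete before attacking $P(T=n)$ analytically:

```python
# Empirical estimate of P(T = n) for the coupon-collector time T.
import random
from collections import Counter

N, trials = 5, 100_000
counts = Counter()
for _ in range(trials):
    seen, draws = set(), 0
    while len(seen) < N:      # draw until every kind has appeared
        seen.add(random.randrange(N))
        draws += 1
    counts[draws] += 1

for n in sorted(counts)[:8]:
    print(n, counts[n] / trials)  # estimates of P(T = n); smallest possible n is N
```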

integration - How to solve the given integral?



I have the following integral, which is part of a larger function, but this is the only part I'm not sure about how to solve:



$$\int x\,\frac{da(x)}{dx}\,dx$$




The variable $a$ (a function of $x$) is differentiated with respect to $x$, and this derivative is multiplied by $x$. I want to integrate the entire thing over $x$. An online integral calculator suggested the result was zero, but I am uncertain whether I entered the expression properly on that page.



Could you help me solve this integral?


Answer



We have that $\frac {d}{dx}a (x) = a'(x) $. Thus, we get, $$I = \int x a'(x) \mathrm {d}x$$ $$ = x \int a'(x) \mathrm {d}x - \int (\int a'(x) \mathrm {d}x) \frac {d}{dx}(x) \mathrm {d}x$$ $$ = xa (x) - \int a (x) \mathrm {d}x \neq 0$$



We have calculated the integral using integration by parts where $u=x $ and $\mathrm {d}v = a'(x) \mathrm {d}x $. Hope it helps.
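To double-check the formula, here is a small symbolic sketch on the concrete (hypothetical) choice $a(x)=\sin x$:

```python
# Verify  integral of x*a'(x) dx  =  x*a(x) - integral of a(x) dx  for a = sin.
import sympy as sp

x = sp.symbols('x')
a = sp.sin(x)

lhs = sp.integrate(x * sp.diff(a, x), x)  # integral of x*cos(x)
rhs = x * a - sp.integrate(a, x)          # x*sin(x) - (-cos(x))
print(sp.simplify(lhs - rhs))             # 0, so both sides agree
```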


summation - How to find this sum $f(x)=\sum_{n=1}^{\infty}\frac{x^{n^2}}{n}$

First, Merry Christmas everyone!



Find this sum
$$f(x)=\sum_{n=1}^{\infty}\dfrac{x^{n^2}}{n},1>x\ge 0 \tag{1}$$




This problem was created by Laurentiu Modan, and I can't see its solution.



I know this sum
$$\sum_{n=1}^{\infty}\dfrac{x^n}{n}=-\ln{(1-x)},-1\le x<1$$



and I know this
$$\sum_{n=1}^{\infty}x^{n^2}\approx \dfrac{\sqrt{\pi}}{2\sqrt{1-x}},x\to 1^{-}$$



But for $(1)$, I can't find it. Thank you.




This problem is from this source: [image omitted]

What is the probability that X takes even values?

I am trying to show that the probability that a counting random variable $X$ takes even values is given by



$\frac 1 2( 1 + G(-1)),$ where $G(t)$ is its probability generating function.



I know that $P(\text{odd}) + P(\text{even}) = 1.$




Kindly help with the initial stages.
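One way to build confidence in the target identity before proving it is to test it on a concrete distribution. A minimal sketch, assuming $X \sim \mathrm{Poisson}(\lambda)$ so that $G(t)=e^{\lambda(t-1)}$ (my own choice of example):

```python
# Compare (1 + G(-1))/2 against a direct sum over even outcomes for a Poisson X.
import math

lam = 2.3                                       # arbitrary rate parameter
formula = (1 + math.exp(-2 * lam)) / 2          # (1 + G(-1))/2, G(-1) = e^{-2*lam}
direct = sum(math.exp(-lam) * lam**k / math.factorial(k)
             for k in range(0, 200, 2))         # P(X even), summed directly
print(formula, direct)                          # agree to machine precision
```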

Prove $x_n$ converges if $x_n$ is a real sequence and $s_n=\frac{x_0+x_1+\cdots+x_n}{n+1}$ converges

Given that $x_n$ is a real sequence, $s_n = \frac{x_0+x_1+\cdots+x_n}{n+1}$ and $s_n$ converges, $a_n = x_n-x_{n-1}$, $na_n$ converges to 0, and $x_n-s_n=\frac{1}{n+1}\sum_{i=1}^n ia_i$, prove $x_n$ converges, and that $x_n$ and $s_n$ converge to the same limit.



Since I don't know anything about the sequences increasing, decreasing, or being positive, I was thinking that I can show that $\frac{1}{n+1}\sum_{i=1}^n ia_i$ converges, so then $x_n$ will be the sum of two convergent sequences and so it must converge, but I don't know how to show $\frac{1}{n+1}\sum_{i=1}^n ia_i$ converges, or if this is even the right way to go about this problem.



Also since $s_n$ converges, it is bounded so maybe I can use that fact, but I'm not sure how that could help.



I also showed that $a_n$ must converge to 0 as well using the squeeze theorem, but I don't know if that helps either.



Any hints would be great. I have been stuck on this for days.
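As a quick numerical check that the given identity $x_n-s_n=\frac{1}{n+1}\sum_{i=1}^n ia_i$ is stated correctly (my own sketch on a random sequence):

```python
# Verify x_n - s_n = (1/(n+1)) * sum_{i=1}^{n} i*a_i numerically.
import random

n = 20
x = [random.uniform(-1, 1) for _ in range(n + 1)]  # x_0, ..., x_n
s_n = sum(x) / (n + 1)
a = [x[i] - x[i - 1] for i in range(1, n + 1)]     # a_1, ..., a_n (a[j] is a_{j+1})
rhs = sum((j + 1) * a[j] for j in range(n)) / (n + 1)
print(x[n] - s_n, rhs)                              # the two values coincide
```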

Monday 22 May 2017

real analysis - $\sum\limits_{n=1}^\infty a_n$ converges $\iff \sum\limits_{k=1}^\infty a_{n_k}$ converges.




Let $(a_n)_{n\in\mathbb{N}}$ be a sequence, and let $(a_{n_k})_{k\in\mathbb{N}}$ be the subsequence of all terms of $(a_n)$ different from zero. Then $$\sum\limits_{n=1}^\infty a_n\text{ converges} \iff \sum\limits_{k=1}^\infty a_{n_k} \text{ converges}$$



My approach to the proof:



$\Rightarrow$ Suppose $\sum\limits_{n=1}^\infty a_n$ converges, so for every $\epsilon>0$ there exists $N\in\mathbb{N}$ such that for all $m\ge n\ge N$ the partial sums satisfy $|S_m-S_n|<\epsilon$.
Let $B=\{j\in\mathbb{N} \mid a_j=0\}$. Can I express $\sum\limits_{n=1}^\infty a_n$ as $\sum\limits_{n\notin B} a_n +\sum\limits_{n\in B} a_n$?



I need some help proving this, it might be trivial but I'm having problems with notation. Any help will be appreciated.


Answer



Define the partial sums




$$
S_n = \sum_{i=1}^n a_i
$$



For any given $\varepsilon$, let $N$ be that integer such that for all $p, q \geq N, |S_p-S_q| < \varepsilon$. Either there exists a minimum $K$ such that $n_K \geq N$ (and then for all $r, s \geq K, |S_{n_r}-S_{n_s}| < \varepsilon$ and the second series is convergent), or else there does not exist such a minimum $K$, in which case the second series has a finite number of terms and is convergent.



ETA: Oh yes, the inverse. For any given $\varepsilon$, let $K$ be that integer such that for all $r, s \geq K, |S_{n_r}-S_{n_s}| < \varepsilon$. We observe that for any $n$, $S_n = S_{n_r}$ where $r = \max_{n_s \leq n} s$ (since we are only adding a finite number of trailing zeros). Let $N = n_K$. Then for any $p, q \geq N, |S_p-S_q| = |S_{n_r}-S_{n_s}| < \varepsilon$ for some $r, s \geq K$, and the first series is convergent.


Determining a limit of parametrized, recursively defined sequence $a_{n+1}=1+\frac{(a_n-1)^2}{17}$




For every $c\in [0;2]$ determine whether the sequence $\{a_n\}_{n\geq 1}$ which is defined as follows:



$a_1=c$, $a_{n+1}=1+\frac{(a_n-1)^2}{17}$ for $n\geq 1$



is monotonic for sufficiently large $n$, and determine whether its limit exists; if it exists, give its value.



I have no idea what to do with this problem. I was able to see that $a_{n+1}-a_n$ is a quadratic function of $a_n$, and I also found out that the limit, if it exists, is equal to either $1$ or $18$ (that's because if $a_n$ converges to $g$ then every subsequence also converges to $g$). So how do I determine the limit for every $c\in [0;2]$?


Answer



$$a_{n+1}-a_n=\frac{17+(a_n-1)^2-17a_n}{17}=\frac{(a_n-1)(a_n-18)}{17}$$




This number is $\leq 0$ for $a_n\in[1,2]$ and $>0$ for $a_n\in[0,1)$. But observe that even if $a_1<1$, then $a_2>1$. So the sequence eventually decreases, and it is bounded below, hence convergent.



You already got the possible limits and you can discard $18$ because the sequence is always smaller than say $2$.
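A quick numerical illustration (my own sketch; starting values and iteration count are arbitrary) of the convergence to $1$ for every $c\in[0,2]$:

```python
# Iterate a_{n+1} = 1 + (a_n - 1)^2 / 17 from several starting points c.
for c in (0.0, 0.5, 1.0, 1.5, 2.0):
    a = c
    for _ in range(50):
        a = 1 + (a - 1)**2 / 17
    print(c, a)  # every run ends very close to the limit 1
```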


algebra precalculus - Show (via Complex Numbers): $\frac{\cos\alpha\cos\beta}{\cos^2\theta}+\frac{\sin\alpha\sin\beta}{\sin^2\theta}+1=0$ under given conditions




$\alpha$ and $\beta$ do not differ by an even multiple of $\pi$. If $\theta$ satisfies $$\frac{\cos\alpha}{\cos\theta}+ \frac{\sin\alpha}{\sin\theta}=\frac{\cos\beta}{\cos\theta}+\frac{\sin\beta}{\sin\theta}=1$$ then show that $$\frac{\cos\alpha\cos\beta}{\cos^2\theta}+\frac{\sin\alpha\sin\beta}{\sin^2\theta}+1=0$$
I wish to solve this problem using some elegant method, preferably complex numbers.





I've tried using the fact that $\alpha$ and $\beta$ satisfy an equation of the form $\cos x/\cos\theta + \sin x/\sin\theta = 1$, and got the required result. See my solution here: https://www.pdf-archive.com/2017/07/01/solution



I'm guessing there's an easier way to go about it. Thanks in advance!


Answer



Well, I'm not sure I can do that in a very elegant way, but it might be shorter. I'm using addition theorems, and the identity $$\sin x -\sin y=2\sin\frac{x-y}{2}\,\cos\frac{x+y}{2}$$ following immediately from them.
Multiplying the given equations by $\sin\theta\,\cos\theta,$ we get
$$\sin(\alpha+\theta)=\sin\theta\,\cos\theta=\sin(\beta+\theta),$$
but $$0=\sin(\alpha+\theta)-\sin(\beta+\theta)=2\sin\frac{\alpha-\beta}{2}\,\cos\left(\frac{\alpha+\beta}{2}+\theta\right).$$ The first factor is $\neq0$ by assumption, so $$\cos\left(\frac{\alpha+\beta}{2}+\theta\right)=0.$$ Multiplying by $2\sin\left(\frac{\alpha+\beta}{2}-\theta\right)$ and using the above identity, you get $\sin(\alpha+\beta)-\sin2\theta=0$, and this means (using $\sin2\theta=2\sin\theta\,\cos\theta$ and dividing by $\sin\theta\,\cos\theta$)
$$\frac{\sin\alpha\,\cos\beta}{\sin\theta\,\cos\theta}+\frac{\sin\beta\,\cos\alpha}{\sin\theta\,\cos\theta}=2.$$ Now you have

$$\left(\frac{\cos\alpha}{\cos\theta}+ \frac{\sin\alpha}{\sin\theta}\right)\,\left(\frac{\cos\beta}{\cos\theta}+\frac{\sin\beta}{\sin\theta}\right)-\left(\frac{\sin\alpha\,\cos\beta}{\sin\theta\,\cos\theta}+\frac{\sin\beta\,\cos\alpha}{\sin\theta\,\cos\theta}\right)=1\cdot1-2=-1,$$ and that gives your required result after simplifying.


philosophy - Why is time important in the Ross-Littlewood paradox?

I have read many different versions of the Ross-Littlewood paradox.



This post: Fun quiz: where did the infinitely many candies come from?



This post: Paradox: increasing sequence that goes to $0$?



This post: A strange puzzle having two possible solutions




And many others.



In all of them, a great effort is made to note that actions are performed in decreasing time intervals ($1/2$ second, $1/4$ second, $1/8$ second, ...). I am wondering why this specification is so important to the paradox. I understand that it stops the infinitely many steps from taking infinite time. The thing is, the steps must then be performed infinitely fast.



Why is it that performing actions infinitely fast is so much more believable than performing actions for an infinite amount of time? In my opinion the latter is more plausible. Also why is believability so important for a paradox which is clearly impossible to execute?



Edit: I also wanted to note that there are similar things where we don't seem to need this kind of action. For example the Infinite Monkey Theorem. Why is it important in one and not the other?

combinatorics - An application of this binomial identity $\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}$

I need a clarification on this matter, in terms of "when do I apply this form of the identity". I managed to prove it algebraically and combinatorially, using Pascal's triangle, so a proof is not needed. However, I am not comprehending its application in terms of combinatorics (or any other) exercises.



$$ \binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}$$
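One standard computational application of the identity, shown in the sketch below: it lets you build binomial coefficients by additions alone, with no factorials, exactly as in Pascal's triangle.

```python
# Binomial coefficients from Pascal's rule C(n,k) = C(n-1,k) + C(n-1,k-1).
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n: int, k: int) -> int:
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k) + binom(n - 1, k - 1)

print(binom(10, 4))  # 210
```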

Sunday 21 May 2017

sequences and series - Intuition behind $\zeta(-1)=\frac{-1}{12}$

When I first watched Numberphile's video on $1+2+3+\cdots = \frac{-1}{12}$, I thought the sum actually equalled $\frac{-1}{12}$ without really understanding it.




Recently I read some Wolfram Alpha pages and watched some videos, and now I understand (I think) that $\frac{-1}{12}$ is just a value associated to the sum of all natural numbers when you analytically continue the Riemann zeta function. 3Blue1Brown's video really helped. What I don't really understand is why it gives the value $\frac{-1}{12}$ specifically. The value $\frac{-1}{12}$ seems arbitrary to me and I don't see any connection to the sum of all natural numbers. Is there any intuition behind why you get $\frac{-1}{12}$ when you analytically continue the zeta function at $\zeta(-1)$?



EDIT (just to make my question a little clearer):
I'll use an example here. Suppose you somehow didn't know about radians and never associated trig functions like sine with $\pi$, but you knew about Maclaurin expansions. By plugging $x=\pi$ into the series expansion of sine, you would get $\sin(\pi) = 0$. You might have understood the process by which you get the value $0$, the Maclaurin expansion, but you wouldn't really know the intuition behind this connection between $\pi$ and trig functions, namely the unit circle, which is essential in almost every branch of number theory.



Back to this question: I understand the analytic continuation of the zeta function and its continued form for $s < 0$, $$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s),$$ and how, when you plug in $s = -1$, things simplify down to $\frac{-1}{12}$, but I don't see any connection between the fraction and the infinite sum. I'm sure there is a beautiful connection between them, like the one between trig functions and $\pi$, but I couldn't find any useful resources on the internet. Hope this clarified things.
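For what it's worth, the arithmetic at $s=-1$ is short: the functional equation gives $2^{-1}\pi^{-2}\sin(-\frac{\pi}{2})\Gamma(2)\zeta(2)=\frac{1}{2}\cdot\frac{1}{\pi^2}\cdot(-1)\cdot 1\cdot\frac{\pi^2}{6}=-\frac{1}{12}$, and it can be checked numerically (a sketch assuming mpmath, whose `zeta` is the analytically continued function):

```python
# Evaluate the right-hand side of the functional equation at s = -1.
import mpmath as mp

s = -1
rhs = (2**s * mp.pi**(s - 1) * mp.sin(mp.pi * s / 2)
       * mp.gamma(1 - s) * mp.zeta(1 - s))
print(rhs, mp.zeta(-1))  # both print -0.0833333... = -1/12
```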

modular arithmetic - Find the last two digits of $2019^{2019}$



Find the last two digits of $2019^{2019}$



I know that you can typically find the last two digits of a number to any power by reducing the number to end with a one and so on (I will show an example of what I am talking about below).



However, $2019$ cannot be reduced in this way, since doing so would require an even exponent, which this strategy of solving needs.




So, how do I figure out the last two digits of this power?



* Example of the method I referred to *



Find the last two digits of $41^{2789}$




  • Multiply the tens digit of the number ($4$ here) by the last digit of the exponent ($9$ here) to get the tens digit of the answer. The units digit will always equal one.

  • $4 \times 9 = 36$, so the tens digit is $6$ and the last two digits are $61$.





Keep in mind, I am an algebra 2 student, but my teacher, also a calc teacher, thought I might be able to figure this one out :)


Answer



Calculation shows that the last two digits of $19^5$ are $99$.



Therefore, the last two digits of $19^{10}$ are $01$.



Therefore, the last two digits of $19^{10n}$ are $01$ for $n \in \mathbb N$.




Therefore, the last two digits of $19^{10n+9}$ are the same as those of $19^9$, which calculation shows to be $79$ (the same as those of $19^4 \cdot 19^5$, i.e. of $21 \cdot 99$).



Therefore, the last two digits of $2019^{2019}$ are $79$, the same as the answer submitted earlier.
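The whole computation can be confirmed with modular exponentiation (a sketch added for verification, not part of the original answer):

```python
# Last two digits = value mod 100, via Python's three-argument pow.
print(pow(2019, 2019, 100))  # 79
print(pow(19, 9, 100))       # 79, matching the reduction used above
```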


notation - Is there a standard name or shorthand for "plustorial"?








We're all familiar with factorial:
$$n>0,\quad n! = n \times (n-1) \times \cdots \times (n-(n-1))$$



I've occasionally seen "plustorial":
$$n>0,\quad n(\mathrm{plustorial}) = n + (n-1) + \ldots + (n-(n-1))$$



Some quick web searching indicates that there is some non-standard but somewhat common usage of the term "plustorial" to describe this, with the shorthand being a double dagger, or an exclamation point having a "+" rather than a dot beneath the vertical mark.




My question is: Is there a "real" standard name for this process, and is there a standard corresponding shorthand? I understand that it could be written in sigma notation; I was curious about something more terse.

Necessity of Axiom of Choice in Functional Analysis given ZF + Dependent Choice

What do we get with the Axiom of Choice (AC) in Functional Analysis that cannot be accomplished with Zermelo-Fraenkel (ZF) plus the Axiom of Dependent Choice (DC)?



So, for instance, I just dusted off and opened my old class notes and saw





  1. Hahn-Banach (see note below)

  2. Baire Category (see note below)


  3. Open Mapping / Closed Graph thrms


  4. Uniform Boundedness Principle

  5. Projection Lemma (Hilbert Space)

  6. Unit Ball is weak-* compact (Banach-Alaoglu)

  7. Riesz Representation Theorem (see note below)

  8. Spectral Theorem


  9. Separation of Convex sets by Hyperplanes

  10. Basic theorems about distribution functions/L^P spaces



... and any other theorems you can think of that fit in this theme of being well known to anyone who took a basic course in functional analysis, and useful to people who use analysis in their work. Please add to the list, because I am sure I forgot some theorems.



(note on Hahn-Banach: In the spirit of the question, the most concrete form that is still abstract enough to use in the proofs of the other theorems is fine here. For instance, in my notes I have $p$ sublinear on a N.V.S. $X$ and $V$ a subspace of $X$, $f\in V^*$ and $|f|\le p$ on $V$ as the assumptions.



note on Baire Category: The version that every complete metric space (or Banach space) is a Baire space would likely be sufficient here since I believe that the second one that locally compact Hausdorff spaces are Baire might not be standard material for a basic course in Functional Analysis. Please correct me if I'm wrong!)




So, which require AC if one already accepts ZF+DC?



Edit This is intended to be more a question about the internal logical dependencies of Functional Analysis than about logic and set theory. A good answer does not need to prove each of the theorems separately. In particular, One might show theorems $n_1$ and $n_2$ can be proven with DC by citing good links. Then say "a standard proof for $n_3$ uses $n_1$ and some epsilon delta stuff. A standard proof $n_4$ uses $n_2$ and $n_3$ plus image of compact sets is compact, so also doesn't need full AC." For the ones that do need AC, maybe a good link for one and then a link showing that others are equivalent under ZF.



In other words, what I am looking for is for someone to take the few theorems about DC vs. AC that have already been proven, and flesh this out to the rest of (basic) functional analysis by discussing logical dependencies within the field of functional analysis.



Please only assume a background in functional analysis, not in Foundations (sets/logic/etc. beyond everyday use). References to other questions where the details have been worked out in more rigour are quite sufficient.

real analysis - Problems proving that if $f_n\rightarrow f$ pointwise and $\int_{\mathbb{R}} f=\lim_{n}\int_{\mathbb{R}} f_n$ then $\int_E f=\lim_{n}\int_E f_n$ for measurable $E \subseteq \mathbb{R}$.



This is a problem from Royden 4th edition (updated printing). Problem 4.22





Let $\{f_n\}$ be a sequence of nonnegative measurable functions on $\mathbb{R}$ that converges pointwise on $\mathbb{R}$ to $f$ and $f$ be integrable over $\mathbb{R}$. Show that
\begin{equation}
\text{if}~\int_\mathbb{R}f=\lim_{n\rightarrow \infty} \int_\mathbb{R} f_n,~\text{then}~\int_E{f}=\lim_{n\rightarrow \infty} \int_E f_n,~\text{for any measurable set $E$}.
\end{equation}




Solution so far...



I've been able to deduce that the problem statement follows if $\lim_{n\rightarrow \infty} \int_E f_n$ exists for any measurable set $E$.




Suppose that under the assumptions of the problem, $\lim_{n\rightarrow \infty} \int_E f_n$ exists for any measurable set $E$.



Proof that if $\int_\mathbb{R} f = \lim_{n\rightarrow\infty} \int_\mathbb{R} f_n$ then $\int_E f = \lim_{n\rightarrow\infty} \int_E f_n$. We prove by contradiction that equality holds.



Suppose equality does not hold. By
Fatou's Lemma we know that $\int_E f \leq \lim_{n\rightarrow\infty} \int_E f_n$, so since equality doesn't hold then $\int_E f < \lim_{n\rightarrow\infty} \int_E f_n$. Then it follows that
\begin{align*}
\int_\mathbb{R} f &= \int_E f + \int_{\mathbb{R} \sim E} f &(\text{additivity over domains})\\
&\leq \int_{E} f + \lim_{n\rightarrow \infty} \int_{\mathbb{R} \sim E} f_n & \text{(Fatou's)}\\

&< \lim_{n\rightarrow \infty} \int_E f_n + \lim_{n\rightarrow \infty} \int_{\mathbb{R}\sim E} f_n\\
&= \lim_{n\rightarrow \infty} \int_\mathbb{R} f_n
\end{align*}

which contradicts our assumption. Therefore $\int_E f = \lim_{n\rightarrow\infty} \int_E f_n$.



The part I'm having trouble with is the initial claim that under the problems assumptions $\lim_{n\rightarrow \infty} \int_E f_n$ exists for any measurable set $E$. Am I approaching this in a reasonable way? Any hints on how to prove this last bit?


Answer



$\int (f-f_n)^{+} \to 0$ by DCT because $(f-f_n)^{+} \leq f$ and $(f-f_n)^{+} \to 0$. Also $\int (f-f_n) \to 0$ by hypothesis. Subtract the first from the second to get $\int (f-f_n)^{-} \to 0$. Add this to $\int (f-f_n)^{+} \to 0$ to get $\int |f-f_n| \to 0$. For any measurable set $E$ we have $\int_E |f-f_n|\leq \int_{\mathbb R} |f-f_n| \to 0$ which implies $\int_E f_n \to \int_E f$.


Saturday 20 May 2017

calculus - Calculate: $\lim\limits_{x \to \infty}\left(\frac{x^2+2x+3}{x^2+x+1} \right)^x$



How do I calculate the following limit without using l'Hôpital's rule?



$$\lim_{x \to \infty}\left(\frac{x^2+2x+3}{x^2+x+1} \right)^x$$


Answer




$$\lim_{x \rightarrow \infty}\left(\frac{x^2+2x+3}{x^2+x+1} \right)^x$$



$$=\lim_{x \rightarrow \infty}\left(1+\frac{x+2}{x^2+x+1} \right)^x$$



$$=\lim_{x \rightarrow \infty}\left(\left(1+\frac{x+2}{x^2+x+1} \right)^\frac{x^2+x+1}{x+2}\right)^{\frac{x(x+2)}{x^2+x+1}}$$



$$=e$$ as $\lim_{x\to\infty}\frac{x(x+2)}{x^2+x+1}=\lim_{x\to\infty}\frac{(1+2/x)}{1+1/x+1/{x^2}}=1$



and $\lim_{x\to\infty}\left(1+\frac{x+2}{x^2+x+1} \right)^\frac{x^2+x+1}{x+2}=\lim_{y\to\infty}\left(1+\frac1y\right)^y=e$
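A numerical sketch (my own check) showing the expression approaching $e$:

```python
# Evaluate ((x^2+2x+3)/(x^2+x+1))^x for growing x.
import math

for x in (10.0, 100.0, 10_000.0, 1_000_000.0):
    print(x, ((x*x + 2*x + 3) / (x*x + x + 1)) ** x)
print("e  =", math.e)  # the values above converge to e
```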


real analysis - Is this an equivalent definition of uniform continuity?

The motivation for this definition is that a main characteristic of functions defined on an interval which are continuous but not uniformly continuous is that the largest possible $\delta$ required to keep $\,f(x)\,$ within $\varepsilon$ of $\,f(x_0)\,$ can be made arbitrarily small by choosing the appropriate $x_0$. Consider, for example, $\,f(x) = \tan x\,$ in $\,\left(\dfrac{\pi}{2} - \varepsilon_0, \dfrac{\pi}{2}\right)$ for $\varepsilon_0$ small.
It is continuous everywhere in that interval but we are forced to choose smaller and smaller $\delta$ as $\,x_0 \to \dfrac{\pi}{2}$.




Let $\,f:I \to \mathbb{R}$ be a continuous real function. Consider some $\,x_0 \in I\,$ and fix $\,\varepsilon \in \mathbb{R_+}$. Define

$P = \left\lbrace\delta \mid \forall \,x \in I \ , \left\lvert x-x_0\right\rvert < \delta \implies \left\lvert \,f(x) - f(x_0)\right\rvert < \varepsilon\right\rbrace$.
Let $$\delta_{x_0}^* =
\begin{cases}
\sup P, & P \ \text{ bounded above} \\
1, & \text{otherwise}
\end{cases}
$$
We say $\,f$ is uniformly continuous if for all $\,\varepsilon \in \mathbb{R_+}, \:\inf\left\lbrace\delta_{x_0}^* \mid x_0 \in I \right\rbrace \in \mathbb{R_+}$.





It should be noted that the choice of $1$ is arbitrary and that $P$ is nonempty by hypothesis. I'm not sure if this is equivalent, and if so, how one can prove it is equivalent to the standard definition.

measure theory exercise: null integral implies null function

Let $\Omega \subset \mathbb{R}^n$ be a nonempty open set and $f: \Omega \rightarrow \mathbb{R}$ a nonnegative measurable function with $\int_{\Omega} f = 0$. Then $f=0$ almost everywhere in $\Omega$.



I have no idea how to start this problem; could someone help me?



Thanks in advance!




My try (I am not sure):



Let $E_n:= \{ x \in \Omega : f(x) > 1/n\}$, $n \in \mathbb{N}$, and define $E:= \{ x \in \Omega : f(x) > 0\} = \bigcup_{n \geq 1} E_n.$



Note that



$$ 0 = \int_{\Omega} f \geq \int_{E} f \geq \int_{E_n} f \geq \frac{|E_n|}{n} \geq 0.$$



Then $|E_n| = 0$ for all $n$, which implies $|E| = 0$. Hence $f=0$ in $\Omega$ a.e.




I am not sure, because it seems that we could replace the set $\Omega$ by a measurable set of measure zero, and for a set like that the claim would not hold.

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...