Saturday 30 September 2017

matrices - Determinant of Matrix is different than product of diagonal

(sorry in advance, but I can't find a page on how to format math equations/structures)
I'm having a bit of an issue with this matrix and finding its determinant. I know what the correct determinant is (thanks to Wolfram), but I need to be able to work it out on paper for my math class, especially for larger matrices. I'm dealing with eigenvalues and eigenvectors in class, but right now the issue is getting $\det \left( A-\lambda I\right)$ for a $3 \times 3$ matrix:



$\left( \begin{matrix} 2 & -2 & 5 \\ 0 & 3 & -2 \\ 0 & -1 & 2 \end{matrix}\right)$



For $\det\left(A-\lambda I\right)$, I get the matrix:



$\left( \begin{matrix} 2- \lambda & -2 & 5 \\ 0 & 3 - \lambda & -2 \\ 0 & -1 & 2 - \lambda \end{matrix}\right)$




The determinant/characteristic equation being $-\lambda^3 + 7 \lambda^2 - 14\lambda + 8$. I can get this by using expansion of cofactors. However, if I attempt to use elementary row operations and find the product of the main diagonal, I get a polynomial with a $\lambda^4$ in it.



I even tried finding the determinant of $A$ by itself with both methods, and I still get two different answers. Can someone explain what I'm doing wrong here? I tried asking the instructor, but he just tells me to look over the book (which I have), and I still have this problem.



EDIT: In response to mookid's answer. This is what I did:



$\left( \begin{matrix} 2- \lambda & -2 & 5 \\ 0 & 3 - \lambda & -2 \\ 0 & -1 & 2 - \lambda \end{matrix}\right) \to \left( \begin{matrix} 2- \lambda & -2 & 5 \\ 0 & 3 - \lambda & -2 \\ 0 & -1(3 - \lambda) + 3 - \lambda & (3 - \lambda)(2 - \lambda) - 2 \end{matrix}\right)$
=



$\left( \begin{matrix} 2- \lambda & -2 & 5 \\ 0 & 3 - \lambda & -2 \\ 0 & 0 & \lambda^2 - 5\lambda + 4\end{matrix}\right)$




I feel like I'm doing something wrong here because based on what my teacher told me, I should be getting what you got for the determinant.
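(A note for readers following along: the operation above replaces $R_3$ by $(3-\lambda)R_3 + R_2$, and multiplying a row by a factor multiplies the determinant by that factor, so the resulting triangular matrix satisfies

$$\det \left( \begin{matrix} 2- \lambda & -2 & 5 \\ 0 & 3 - \lambda & -2 \\ 0 & 0 & \lambda^2 - 5\lambda + 4 \end{matrix}\right) = (3-\lambda)\det\left(A-\lambda I\right).$$

That extra factor of $3-\lambda$ is exactly where the spurious $\lambda^4$ comes from; dividing the diagonal product $(2-\lambda)(3-\lambda)(\lambda^2-5\lambda+4)$ by $3-\lambda$ recovers $-\lambda^3 + 7\lambda^2 - 14\lambda + 8$.)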

Friday 29 September 2017

limits - How to find $\lim_{n \to \infty} \frac {2^n n!}{n^n}$



$$\lim_{n \to \infty} \frac {2^n n!}{n^n}$$




How can I find this limit? I have tried to use the ratio test, and then I got the following limit: $$\lim_{n \to \infty} 2 \frac{ n^{n+1}}{(n+1)^{n+1}}$$ I have tried for a while, but I can't figure out this limit either. How can I find it?


Answer



You should clarify: do you want to



1) find $\lim \limits_{n\to\infty} \frac{2^nn!}{n^n}$ or



2) test the convergence of the series $\sum_{n=1}^{\infty} \frac{2^nn!}{n^n}$ (for which the ratio test is used)?



I assume the question stated is the first, which can be computed using Stirling's formula:




$\lim \limits_{n\to\infty} \frac{2^nn!}{n^n} = \lim \limits_{n\to\infty} \frac{2^n\cdot\sqrt{2\pi n}\cdot\left(\frac{n}{e}\right)^n}{n^n}=\lim \limits_{n\to\infty} \frac{\sqrt{2\pi}\cdot \sqrt{n}}{\left(\frac{e}{2}\right)^n} \stackrel{l'H}=\lim \limits_{n\to\infty} \frac{\sqrt{2\pi}}{2\left(\frac{e}{2}\right)^n\cdot \ln{\left(\frac{e}{2}\right)}\cdot\sqrt{n}}=0.$



Note: Another direction: $\frac{2^nn!}{n^n}=\frac{2\cdot1}{n}\cdot\frac{2\cdot2}{n}\cdots\frac{2\cdot n}{n}$
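As a quick numerical sanity check (an editorial sketch, working in log-space to avoid overflow):

import math

def term(n):
    # 2^n * n! / n^n via logarithms; lgamma(n+1) = log(n!)
    return math.exp(n * math.log(2) + math.lgamma(n + 1) - n * math.log(n))

for n in (10, 100, 1000):
    print(n, term(n))
# The values decay like sqrt(2*pi*n) * (2/e)^n, consistent with the limit 0.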


calculus - Limit of a function without using L'Hôpital Rule

Prove that



$$\lim_{x\to 0} \frac{\ln(x+1)}{x} = 1$$



without using L'Hôpital Rule.

Thursday 28 September 2017

abstract algebra - Trick for reducing a specific polynomial.



Let $f(x) = x^4 − x^3 + 2x^2 − 3x − 3$



Show that f(x) is reducible over $\Bbb{Q}$



The rational zeros test shows there are no rational roots. I tried writing $(x^2+bx+c)(x^2+dx+e)$ and then solving the resulting equations for each of the letters, but it got crazy ugly, which led me to the conclusion that there must be a better way...


Answer




I think that is a good idea; indeed $$(x^2+3)(x^2-x-1)=f(x).$$
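Expanding confirms the factorization:
$$(x^2+3)(x^2-x-1) = x^4 - x^3 - x^2 + 3x^2 - 3x - 3 = x^4 - x^3 + 2x^2 - 3x - 3 = f(x).$$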


real analysis - How to prove every Cauchy Sequence in $mathbb{R}^n$ converges

This is an analysis exercise that I have been struggling with for some time now. I am not familiar with metric spaces.



In $\mathbb{R}$, the book that I am using proves this fact by showing that every Cauchy sequence in $\mathbb{R}$ is bounded. Next, they use Bolzano-Weierstrass to choose a convergent subsequence of that Cauchy sequence.



However, the book does not specify an analogue of boundedness in $\mathbb{R}^{n}$. Also, the book proved Bolzano-Weierstrass for $\mathbb{R}$, not $\mathbb{R}^{n}$. I was originally planning to mirror the $\mathbb{R}$ approach by proving boundedness and choosing a convergent subsequence, but this is not currently possible because of what I said.



I was wondering if there is a good way to do this problem.




Thanks
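(One standard route, sketched here as a hint since the question went unanswered: reduce to coordinates. If $x_k = (x_k^{(1)},\dots,x_k^{(n)})$ is Cauchy in $\mathbb{R}^{n}$, then for each fixed coordinate $i$,
$$|x_k^{(i)} - x_m^{(i)}| \le \|x_k - x_m\|,$$
so each coordinate sequence is Cauchy in $\mathbb{R}$ and hence converges by the one-dimensional result; convergence in every coordinate then gives convergence in $\mathbb{R}^{n}$.)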

Is the square root of a negative number defined?



I have been in a debate over 9gag with this new comic: "The Origins"





And I thought, "haha, that's funny, because I know $i = \sqrt{-1}$".



And then, this comment cast a doubt:





There is no such thing as sqrt(-1). The square root function is only
defined for positive numbers. Sorry...




This wasn't by ignorance of complex numbers. In the short round of arguing that happened there, the guys insisted that negative numbers don't have square roots, and that $i$ is defined such that $-1 = i^2$, which is not the same as $i = \sqrt{-1}$. In their opinion, too, no respectable math textbook should ever say that $i = \sqrt{-1}$; which is precisely how Wolfram MathWorld defines $i$.



Is there a consensus over whether negative numbers have a square root? Are there some gotchas or disputes with the definitions I should be aware of in the future?


Answer



I think Asaf's answer, while correct, misses some of the point. It's fairly clear from context that the OP wants to know whether writing $i=\sqrt{-1}$ makes sense in the complex numbers. This is essentially a matter of convention. You can define it that way, but any way you define the function $\sqrt{z}$ it won't have all the properties that it does for real numbers.




A square root of a number $a$ is any number $x$ with $x^2=a$, or equivalently a root of the polynomial $x^2-a$. The fundamental theorem of algebra implies that every complex number $a$ has a square root. In fact, for $a \ne 0$, $a$ has precisely two square roots, which are additive inverses.



You can already see that this is a bit of a problem in the non-negative real numbers. For this case, we choose $\sqrt{a}$ to be the unique non-negative square root, which has a lot of nice properties. Viewed as a function of $a$, $\sqrt{a}$ is continuous, and is a multiplicative homomorphism (i.e. $\sqrt{ab}=\sqrt{a}\sqrt{b}$). These properties are nice enough that it makes sense to call this choice of $\sqrt{a}$ the square root of $a$, instead of a square root of $a$. Of course, this is already abusing terminology a bit, but don't worry too much about that.



It is perfectly reasonable to try to extend the square root function $\sqrt{a}$ for $a$ any complex number, since we know that complex numbers have square roots. But unlike in the positive reals, there's no really nice way to choose what the square root of $a$ should be. In particular, for $\sqrt{-1}$, we can choose either $-i$ or $i$, and since complex conjugation preserves all the algebraic properties of $\mathbb{C}$ we shouldn't expect a nice way to do so purely based on algebraic considerations (like we had for the nonnegative reals).



Let's ignore this for a minute. Any complex number $z$ can be written in the form $z=r e^{i \theta}$ where $0 \le r < \infty$ and $0 \le \theta < 2 \pi$ , the polar representation for complex numbers. For $z \ne 0$ the choice of $r$ and $\theta$ is unique. We can define $\sqrt{z}= \sqrt{r} e^{i \theta /2}$, which is a square root of $z$. Indeed, if we do this, then $\sqrt{-1} = i$. So what's the issue with this?



For one thing, you lose the homomorphism property: there are cases where $\sqrt{ab} \ne \sqrt{a}\sqrt{b}$. This is generic for all extensions of the square root function, too; there is no way to avoid it by choosing a different definition. You can look at this question to see why this must fail generically.




Furthermore, our choice is not continuous, since $\lim\limits_{y \to 0^+} \sqrt{x-iy} = -\sqrt{x}$ for $x,y$ real. We could make it continuous, but at the cost of not defining $\sqrt{z}$ for $\theta =0$. This is what's called a branch cut. But this is precisely the case when $z$ is a positive real number! Some people do this, but it's bad practice and probably confusing. We could define a branch cut elsewhere, for instance, only allow $-\pi < \theta < \pi$ and use the same formula. Now the branch cut is on the negative real axis, which is better since our notation doesn't conflict with the notation for real numbers, but now $\sqrt{-1}$ is undefined, and $\lim\limits_{y \to 0^+} \sqrt{-1+iy} = i$ while $\lim\limits_{y \to 0^-} \sqrt{-1+iy} = -i$. If you don't care about continuity at the negative real axis, you can extend the definition to $\theta = \pi$, which again gives you back $\sqrt{-1}=i$. We could also put the branch cut in other places, for instance on the positive imaginary axis, to avoid both the problem of disagreeing with the real square root and of being undefined or discontinuous on some part of the real axis, but then your branch cut has to change under complex conjugation, which can also be a problem.
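This discontinuity is easy to observe numerically. Python's cmath.sqrt, for instance, uses the branch cut on the negative real axis described above (a quick illustration):

import cmath

# Approach -1 from the upper and lower half-planes:
print(cmath.sqrt(complex(-1, 1e-12)))   # ~ 1j  (the limit from above is  i)
print(cmath.sqrt(complex(-1, -1e-12)))  # ~ -1j (the limit from below is -i)
print(cmath.sqrt(-1))                   # 1j: the cut itself is assigned theta = pi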



There are other ways to address the issue. The nicest is the theory of Riemann surfaces. The idea here is that you think of $\sqrt{z}$ as a function on a set that is larger than just the complex plane. The Riemann surface for the square root is essentially 2 copies of the complex plane, split at the branch cuts and glued to each other. Here is an image, taken from Wikipedia, of the Riemann surface for the square root function: [image: Riemann surface for the square root]



This approach is not without fault either, since you're now no longer talking strictly about the square root of a complex number. The points on the Riemann surface tell you which square root to pick, and there is one point for each square root. Since $-1$ is a complex number, not a point on the surface, $\sqrt{-1}$ doesn't make sense. However, there are 2 points corresponding to $-1$, one of which has square root $i$ and the other one $-i$.



You can also consider $\sqrt{z}$ to be a multivalued function, returning a pair of numbers which are both of the square roots of $z$. This works fine, except for when you want to do any sort of actual calculations. In particular, in this approach, $\sqrt{-1} = \{i, -i\}$. This approach does have the homomorphism property that $\sqrt{ab} = \sqrt{a}\cdot\sqrt{b}$, once you define what it means to multiply sets (namely, $\{a,-a\} \cdot \{b,-b\} = \{ab,-ab\}$). This definition does not agree with our definition for positive reals, but it's not as bad as before, since the square root of a real number is at least an element of the set of its square roots as a complex number.







EDIT (to clear up an issue in the comments)



I will say, though, that saying that $i = \sqrt{-1}$ is the definition for $i$ is unacceptable in my view. You can't define $i$ this way because it makes no sense. Unfortunately, this is exactly how MathWorld "defines" it, but I think it's circular and doesn't make sense.



$\sqrt{\phantom{-1}}$ is a function which is defined on the non-negative real numbers. $\sqrt{-1}$ does not exist in this context. The whole reason to construct the complex numbers is to fix this, so that you can solve equations like $x^2 = -1$. Before you can say "$i$ is a square root of $-1$" you need to make sure that there is some context in which -1 has a square root. Of course, we all know that the complex numbers are a consistent system with this property, but for hundreds of years people were not sure about this.



But even once you have constructed $\mathbb{C}$, I would argue that defining $i = \sqrt{-1}$ is bad practice. Sure, from an algebraic standpoint, the two square roots of $-1$ are indistinguishable. So there's nothing wrong with defining $i$ to be a square root of $-1$.



The issue here is that $\sqrt{\phantom{-1}}$ is not some universal operator which takes square roots in any context. It's a function on $\mathbb{R}_{\ge 0}$, which we may or may not want to extend to $\mathbb{C}$. Writing $\sqrt{-1}$, without defining $\sqrt{\phantom{-1}}$ on $\mathbb{C}$, is sure to lead to confusion. In fact, whoever wrote the MathWorld article doesn't want to define $\sqrt{z}$ to be just any square root of $z$, for each complex $z$. They at least want continuity on some relatively large subset of $\mathbb{C}$, and some other relatively nice properties. The fact that there are so many ways to define $\sqrt{z}$ for complex $z$ is reason enough to be explicit about what is meant when one writes things like $\sqrt{-1}$.




It's just not clear what $\sqrt{z}$ means until you have defined it. The proper order to do it is this: First, you construct the complex numbers. Second, you choose a square root of -1 and call it $i$ (the other root is obviously $-i$). Finally, after you know what $\mathbb{C}$ and $i$ mean, now you can define what $\sqrt{z}$ means for complex $z$ (if you even want to). At this point, it may or may not be true that $\sqrt{-1}=i$, but if it is this is the definition of $\sqrt{-1}$, not of $i$. I'll also note that it isn't immediately clear that you can even take square roots of arbitrary complex numbers; this is a fact that has to be proven.



I realize this seems pedantic, and once you're familiar with field extensions and know the existence of algebraic closures for arbitrary fields, you can abuse language and define things like $i=\sqrt{-1}$, since you know there is a consistent way to do it (especially if you're not worried about continuity and such). But at the level of someone seeing complex numbers for the first time, it's just too likely to cause some confusion. Of course, people did this sort of thing without explicitly constructing the complex numbers for quite a long time, but no one was really sure that the complex numbers were mathematically consistent. Now that we know they are, they should be presented as such.



END EDIT






tl;dr: There are many ways to generalize the square root function to the complex numbers, some of which have $\sqrt{-1} = i$. However, all of them lack some properties of the real square root function. Different people have different preferences. Some people prefer to reserve the symbol $\sqrt{\phantom{-1}}$ for positive reals, and just talk about a square root of a complex number. For them, the properties of the real square root other than just being a solution to $z^2=a$ are too important to use the same symbol. Some people care less about continuity, being defined everywhere, etc., and they will freely write $\sqrt{-1} = i$. It is a matter of personal preference. You can use whatever convention you like, but you need to make sure to be consistent: don't write things like $\sqrt{ab} = \sqrt{a}\sqrt{b}$ when they don't hold, and don't write $\sqrt{z}$ where it is undefined. What definitely always holds, regardless of convention, is that $i^2 = -1$.



Geometric Summations that reference two variables



Hello math stackexchange! I'm new here so please correct any formatting mistakes I make / I'm happy to provide more info if needed.



I have a summation of the form $\sum_{i=1}^{n} \sum_{j=1}^{i} (i+j)$, and I'm not too sure how to go about solving this (and by solve, I mean rewriting it as a closed formula). So far I've been able to refactor the inner set of brackets into $\sum_{i=1}^{n} \big( \sum_{j=1}^{i} i + \sum_{j=1}^{i} j \big) $. From here though I'm not too sure what to do, as my inner summations depend on two variables (the $i$ and $j$ cannot be factored out).



From here I've tried the following:





  • Rewrite the inner summations to iterate up to n instead of i. However I'm not too sure if this is a valid way of continuing the problem:



$\sum_{i=1}^{n} \sum_{j=1}^{i} (i+j) = \sum_{i=1}^{n} \sum_{j=i}^{n} (i+j)$




  • Given that the variable $i$ is constant within the scope of the inner summations, $i$ can be factored out, and then the inner summation can be solved for $j$.




$\sum_{i=1}^{n} \sum_{j=1}^{i} (i+j) = \sum_{i=1}^{n} \big( \sum_{j=1}^{i} i + \sum_{j=1}^{i} j \big)$



$= \sum_{i=1}^{n} i \big(\sum_{j=1}^{i} 1 + \sum_{j=1}^{i} j \big) $



This only gets me up to this point, because I don't know a formula to rewrite a nested summation that iterates up to i.




  • That lead me to combining both those ideas and ending up with this:




$\sum_{i=1}^{n} \sum_{j=1}^{i} (i+j) = \sum_{i=1}^{n} \sum_{j=i}^{n} (i+j)$ [1]



$= \sum_{i=1}^{n} i \big( \sum_{j=i}^{n} (1) + \sum_{j=i}^{n} (j) \big)$ [2]



$= \sum_{i=1}^{n} i \big(n + \frac{n(n+1)}{2} \big)$ [3]



$=\sum_{i=1}^{n} i n + \sum_{i=1}^{n} i \frac{n(n+1)}{2} $ [4]



$=\frac{1}{2} n^2 (n+1) + \frac{1}{2} \sum_{i=1}^{n} i n(n+1)$ [5]




$=\frac{1}{2} n^2 (n+1) + \frac{1}{2} n^2 (n+1)^2$ [6]



Which simplifies to $\frac{1}{2} n^2 (n+2)(n+1)$. Wolfram's solution was $\frac{1}{2} n(n+1)^2$. I'm wondering which of my assumptions are invalid, or is it an issue with my process? Maybe I'm overcomplicating the fact that there are two variables being referenced inside the summations, and this problem could be simplified in a different way?



Cheers


Answer



Correction:



$$\sum_{i=1}^{n} \sum_{j=1}^{i} (i+j) = \sum_{i=1}^{n} \left[i\left( \sum_{j=1}^{i} 1\right) + \sum_{j=1}^{i} j \right]$$




To obtain the answer:
\begin{align}
\sum_{i=1}^{n} \sum_{j=1}^{i} (i+j) &= \sum_{i=1}^{n} \left[i^2 + \frac{i(i+1)}{2} \right] \\
&=\frac32 \sum_{i=1}^n i^2+\frac12 \sum_{i=1}^n i\\
&=\frac32 \frac{n(n+1)(2n+1)}{6} + \frac12\frac{n(n+1)}{2} \\
&= \frac{n(n+1)(2n+2)}4 \\
&= \frac{n(n+1)^2}{2}
\end{align}
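A quick brute-force check of the closed form (an illustrative snippet):

def double_sum(n):
    # Evaluate the double sum directly.
    return sum(i + j for i in range(1, n + 1) for j in range(1, i + 1))

for n in (1, 2, 5, 10, 50):
    assert double_sum(n) == n * (n + 1) ** 2 // 2
print("closed form verified")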


The Formula to this Sequence Series

What is the formula for the following sequence?



$$\frac12 + \frac12 \cdot \frac34 + \frac12 \cdot \frac34 \cdot \frac56 + \cdots + \frac12 \cdot \frac34 \cdot \frac56 \cdots \frac{2n - 1}{2n}$$



This is a question from my Calculus class about sequences.

calculus - Simplify to terms of generalized power mean



I have the following expression${}^1$ I'd like to simplify
$$r \equiv \left[ \frac1{x\sqrt{x}} \left( 1 - \frac{k}{x} \right) - \frac1{y\sqrt{y}} \left( 1 - \frac{k}{y} \right) \right]\left[ \frac1{\sqrt{x}} \left( 1 - \frac{k}{x} \right) - \frac1{\sqrt{y}} \left( 1 - \frac{k}{y} \right)\right]^{-1}$$

where $x > y > k\geq 1$.



Some attempts led me to think that $r$ can be expressed nicely in terms of generalized means of $x,y$ like the harmonic ($p=-1$), geometric ($p \to 0$), and other powers:${}^2$
$$\begin{align}
H &\equiv \left( \frac1x + \frac1y\right)^{-1} & G &\equiv \sqrt{xy} & M \equiv \left( x^p + y^p \right)^{1/p} \quad \text{with perhaps} \quad p=\frac{-1}2
\end{align}$$



To be specific, my question is this:





Is there a way to write it in this form $$r \overset{?}{=} \frac1x + \frac1y + \frac1{\sqrt{xy}} + {}\color{red}{??} =\frac1H + \frac1G + {}\color{red}{??} $$ such that the $\color{red}{??}$ part is "nice" in terms of $k$ and maybe $H,G$ or $M$ defined above?




I also wonder if there's some standard techniques like examining the asymptotic form to guess the coefficients of the proposed terms. I tried a bit but didn't get very far.



Directly pulling out $\frac1H$ and $\frac1G$ just changes $r$ into something equally inviting but not more compact.



Any suggestions will be appreciated. Honestly, one main reason for me to think that "oh there must be some nice and more compact forms" is the glaring symmetry. I will be okay if there turn out to be none. Thanks.







Footnote 1:
$\quad$For the record, $r$ is part of the expression of the ratio between $\frac{\partial^2 f(u)}{ \partial u^2}$ and $\frac{\partial f(u) }{ \partial u}$, evaluated at $u=0$, where $$f(u) = \left( \frac1{\sqrt{u+y}} - \frac1{\sqrt{u+x}} \right)\sqrt{u+k}$$



Footnote 2:
$\quad$This is a slight abuse of notation, as the generalized means should carry the averaging factors $\frac12$ or $\frac1{\sqrt{2}}$. My shorthands can also match the p-norms, which have the benefit that the norms, ordered by power, have correspondingly decreasing magnitudes. However, p-norms don't cover $\sqrt{x y}$ the way the generalized mean does.






Update (Nov.28th, 2017)




With all the inverses lying around, it turns out that the best way is to express $r$ in terms of the ... you guessed it ... inverses: $v \equiv 1/x$ and $w \equiv 1/y$. I'll spare the readers the actual algebra. Suffice it to say that this is also the natural course to take to adopt the notions of p-norms; one just has to give up on unifying $1 / \sqrt{xy} = 1/G = \sqrt{vw}$ formally.


Answer



We can write
$$r=\frac 1H+\frac 1G+\color{red}{\frac{1}{M-G+\frac{GM}{k}}}$$
where
$$M=(x^{-1/2}+y^{-1/2})^{-2}$$







If we write
$$r=\frac{\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}-k\left(\frac{1}{x^2\sqrt x}-\frac{1}{y^2\sqrt y}\right)}{\frac{1}{\sqrt x}-\frac{1}{\sqrt y}-k\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)}=\frac 1x+\frac 1y+\frac{1}{\sqrt{xy}}+A$$ then we have
$$A=\frac{k\left(\frac 1x-\frac 1y\right)\left(\frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}\right)}{\frac{1}{\sqrt x}-\frac{1}{\sqrt y}-k\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)}$$



Using
$$x+y=\frac{G^2}{H},\quad xy=G^2,\quad \frac{1}{M}=\frac 1H+\frac 2G$$
we have
$$\left(\frac 1x-\frac 1y\right)^2=\left(\frac 1H+\frac 2G\right)\left(\frac 1H-\frac 2G\right)\implies \frac 1x-\frac 1y=-\sqrt{\frac 1M\left(\frac 1H-\frac 2G\right)}$$



$$\left(\frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}\right)^2=\frac{1}{G^2}\left(\frac 1H+\frac 2G\right)\implies \frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}=\frac 1G\sqrt{\frac 1M}$$




$$\left(\frac{1}{\sqrt x}-\frac{1}{\sqrt y}\right)^2=\frac 1H-\frac{2}{G}\implies \frac{1}{\sqrt x}-\frac{1}{\sqrt y}=-\sqrt{\frac 1H-\frac{2}{G}}$$



$$\small\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)^2=\left(\frac 1H+\frac 1G\right)^2\left(\frac 1H-\frac 2G\right)\implies \frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}=-\left(\frac 1M-\frac 1G\right)\sqrt{\frac 1H-\frac 2G}$$



So, we get



$$A=\frac{k\left(-\sqrt{\frac 1M\left(\frac 1H-\frac 2G\right)}\right)\frac 1G\sqrt{\frac 1M}}{-\sqrt{\frac 1H-\frac{2}{G}}-k\left(-\left(\frac 1M-\frac 1G\right)\sqrt{\frac 1H-\frac 2G}\right)}=\frac{1}{M-G+\frac{GM}{k}}$$
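A numerical spot check of this identity (an editorial sketch; the values of $x,y,k$ are arbitrary subject to $x > y > k \geq 1$):

from math import sqrt

x, y, k = 7.0, 3.0, 1.5

# r from its definition
num = (1 - k / x) / (x * sqrt(x)) - (1 - k / y) / (y * sqrt(y))
den = (1 - k / x) / sqrt(x) - (1 - k / y) / sqrt(y)
r_direct = num / den

# r from the closed form 1/H + 1/G + 1/(M - G + G*M/k)
H = 1 / (1 / x + 1 / y)
G = sqrt(x * y)
M = (x ** -0.5 + y ** -0.5) ** -2
r_closed = 1 / H + 1 / G + 1 / (M - G + G * M / k)

print(r_direct, r_closed)  # the two agree to floating-point accuracy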


real analysis - Lebesgue integrable function?

If $\displaystyle f(x)=\frac{1}{x^p}$ $(0 < x \leq 1)$, then $f \in L[0,1]$ if $p<1$, and

$$\int_{0}^1 f= \frac{1}{1-p} $$



I know that a non-negative measurable function $f$ is Lebesgue integrable on $[a,b]$ if

$$\int_{a}^b f=\lim_{n \to \infty} \int_{a}^b f^n$$
and this limit is finite.

But how can I find $f^n$ for this function? Please help me. Thanks in advance.

calculus - Evaluating a limit using only rationalization and algebraic methods



Evaluate (without using de L'Hôpital Rule)



$$\lim_{x \to -3}\frac{\sqrt{3 x^2+10 x+4}-\sqrt{x^2+3 x+1}}{\sqrt{5 x^2+11 x+5}-\sqrt{2 x^2+x+2}} = \frac 58 \sqrt{17}$$




I have to evaulate the limit of this function as it approaches $-3$. I have tried plugging it in but I get $0/0$. Then I tried LCD, but the $\sqrt x-\sqrt y$ does not equal $\sqrt{x-y}$. Then I tried multiplying the conjugate of the denominator but after all of that I still get $0/0$. I cannot use the calculator except to check my answers after completion. However when I checked there is a limit at $-3$ so I resorted to looking online and it gave me the answer $\frac58\sqrt{17}$, but I cannot seem to get this answer without using L'Hôpital's rule.


Answer



Hint 1) $\frac{\sqrt{a}- \sqrt{b}}{\sqrt{c}-\sqrt{d}} = \frac{(a-b)(\sqrt{c} + \sqrt{d})}{(c-d)(\sqrt{a}+\sqrt{b})}$



Hint 2) $2x^2 + 7x +3 = (2x+1)(x+3)$, this is $a-b$



Hint 3) Try hint 2) with $c-d$.



What do you get?



Wednesday 27 September 2017

number theory - Prime dividing binomial coefficient involving prime power

I was wondering if there was a straightforward proof of the following fact (which I can show is true for specific cases, but not generally):



Let $n$ be composite, and let $p$ be a prime factor of $n$. Suppose $p^k$ is the highest power of $p$ that divides $n$. Then the following is true:
$\binom{n}{p^k} \not\equiv 0 \pmod p$.



My approach was to see that $\binom{n}{p^k} = \frac{n(n-1)\ldots (n - p^k + 1)}{(p^k)!}$. However, how do I show that $p$ does not divide this quantity? It seems not obvious how to argue that $p$ occurs the same number of times in the numerator as in the denominator.




Update: I am looking for an elementary proof "from scratch" that does not rely on Lucas' Lemma or anything too heavily reliant on field theory. Preferably an argument that pops out from analyzing the binomial coefficient. Thanks!

convergence divergence - Determining if a series is absolutely convergent or conditionally convergent without the usage of the limit comparison test

Would the following series be conditionally convergent, absolutely convergent or divergent?



$$\sum^\infty_{k=1}\frac{k\sin{(1+k^3)}}{k+\ln{k}}$$




For sine factors in series like this, one can sometimes say that the sign just alternates between $1$ and $-1$, but would this one do the same for $k\geq1$? I don't think it would, so that would mean I can't use the alternating series test. Perhaps, then, the ratio test or some comparison test?

gcd and lcm - GCD of two big numbers

How to find $\gcd(5^{100}-1, 5^{120}-1)$?
The problem is that the numbers are really big ($5^{100}$ has 70 digits).
Neither of the numbers is prime.
I ran Euclid's algorithm in Python and found the answer in less than a second; however, it seems that there is no on-paper approach to this problem, is there?
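(For reference, there is a standard on-paper identity: for integers $a\ge 2$ and $m,n\ge 1$,
$$\gcd(a^m-1,\ a^n-1)=a^{\gcd(m,n)}-1,$$
which follows by running Euclid's algorithm in the exponents, since $a^m-1 \equiv a^{m \bmod n}-1 \pmod{a^n-1}$. Here it gives $\gcd(5^{100}-1,\ 5^{120}-1)=5^{20}-1$.)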

real analysis - Show that $\lim\limits_{n\to\infty}\frac{1}{n}\sum\limits_{k=0}^{n-1}e^{ik^2}=0$



TL;DR : The question is how do I show that $\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}e^{ik^2}=0$ ?



More generally, the question would be: given an increasing sequence of integers $(u_k)$ and an irrational number $\alpha$, how do I tell if $\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi \alpha u_k}=0$ ? I'm not asking for a criterion for completely general sequences; an answer for sequences like $u_k=k^2$, $v_k=k!$ or $w_k=p(k)$ with $p\in \mathbf Z [X]$ would already be awesome.



A little explanation about this question :




In Real and Complex Analysis by Rudin there is the folowing exercise :



Let $f$ be a continuous, complex valued, $1$-periodic function and $\alpha$ an irrational number. Show that
$\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(\alpha k)=\int_0^1f(x)\mathrm d x$. (We say that $(\alpha k)_k$ is uniformly distributed in $\mathbf R / \mathbf Z$)



With the hint given by Rudin the proof is pretty straightforward: first one shows that this is true for every $f_j=\exp(2i\pi j\cdot)$ with $j\in \mathbf{Z} $. Then using density of trigonometric polynomials in $(C^0_1(\mathbf{R}),\|\cdot\|_\infty)$ and the fact that the $0$-th Fourier coefficient of $f$ is its integral over a period, one can conclude using a $3\varepsilon$ argument. This proof is possible because one can compute explicitly the sums $$\displaystyle \frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi j \alpha k}=\frac{1}{n}\cdot\frac{1-e^{2i\pi j\alpha n}}{1-e^{2i\pi j\alpha}}\longrightarrow 0 \text{ when }n\to\infty \text{ and }j\in \mathbf{Z}^*.$$



Now using a different approach (with dynamical systems and ergodic theorems) Tao shows on his blog that $(\alpha k^2)_k $ is uniformly distributed in $\mathbf R / \mathbf Z$ (corollary 2 in that blog post). I'd like to prove this result using the methods of the exercise in Rudin, but this reduces to showing that $$\displaystyle \frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi j \alpha k^2}\longrightarrow 0 \text{ when }n\to\infty \text{ and }j\in \mathbf{Z}^*.$$ Hence my question.



P.S. When I ask Wolfram Alpha to compute $\sum_{k\geq0}e^{ik^2}$, it answers with some particular value of the Jacobi theta function. Of course the series is not convergent, but maybe it's some kind of resummation technique or analytic continuation. I'm not familiar with these things, but it might be interesting to look in that direction.



Answer



Gauss sums



Your sum is strongly related to the Gauss sum. The usual trick is to compute the modulus. This works particularly smoothly over $\mathbf{Z}/p\mathbf{Z}$ as with usual Gauss sums, but essentially it works here too: If $S = \sum_{k=0}^{n-1} e^{ik^2},$ then
\begin{align}
|S|^2 &= \sum_{k=0}^{n-1} \sum_{k'=0}^{n-1} e^{i(k'^2 - k^2)}\\
&= \sum_{h=-n+1}^{n-1} \sum_{\substack{0\leq k<n\\ 0\leq k+h<n}} e^{i((k+h)^2-k^2)}
\end{align}
where we have written $h=k'-k$. Now the inner sum is a geometric series with at most $n$ terms and with common ratio $e^{i2h}$, so we have
\begin{equation}
\left|\sum_{\substack{0\leq k<n\\ 0\leq k+h<n}} e^{i((k+h)^2-k^2)}\right| \leq \min\left(\frac2{|1-e^{i2h}|},\,n\right).
\end{equation}
Thus
\begin{equation}
|S|^2 \leq 2\sum_{h=0}^{n-1} \min\left(\frac2{|1-e^{i2h}|},n\right).
\end{equation}
Now fix $\epsilon>0$. Since $(h/\pi)_{h=1}^\infty$ is equidistributed mod $1$ the number of $h=0,\dots,n-1$ for which $|1-e^{i2h}| \leq \epsilon$ is $O(\epsilon n)$, so
\begin{equation}
|S|^2 \leq 2\sum_{\substack{0\leq h < n\\ |1-e^{i2h}| \leq \epsilon}}n + 2\sum_{\substack{0\leq h < n\\ |1-e^{i2h}| > \epsilon}} \frac2{|1-e^{i2h}|} \leq O(\epsilon n^2) + O(\epsilon^{-1} n).
\end{equation}

Since $\epsilon$ was arbitrary this implies $|S|^2=o(n^2)$, and hence $|S|=o(n)$.



The van der Corput trick



The only thing we really used about $k^2$ here is that for fixed $h$ we understand the behaviour of the sequence $(k+h)^2 - k^2$, and indeed if you repeat the above calculation but with a judicious application of the Cauchy--Schwarz inequality then you prove a general fact called van der Corput's difference theorem (aka Weyl's differencing trick): if $(u_k)$ is a sequence such that for each $h\geq 1$ the sequence $(u_{k+h}-u_k)$ is equidistributed modulo $1$, then $(u_k)$ is equidistributed modulo $1$. See for example Corollary 2 on Tao's blog here. This implies for example that $\sum_{k=0}^{n-1} e^{i2\pi p(k)} = o(n)$ for every nonconstant polynomial $p$ with irrational leading coefficient.



Other sequences



In general there is no hard and fast rule about $\lim \frac1n \sum_{k=0}^{n-1} e^{i2\pi \alpha u_k}$, i.e., about equidistribution of $(u_k)$, and in fact the other sequence you mention, $k!$, is indeed very different. To take a slightly simpler example which is qualitatively similar, consider $u_k = 2^k$. Let $f_n(\alpha)$ be the exponential sum $\frac1n \sum_{k=1}^n e^{i2\pi \alpha 2^k}$. Then it is a well known consequence of the ergodic theorem that $f_n(\alpha)$ converges to $0$ for almost every $\alpha$. On the other hand clearly $f_n(\alpha)\to 1$ for every dyadic rational $\alpha$, as $\alpha 2^k$ is eventually constantly $0$ mod $1$. But then by Baire category theorem we must have for a comeagre set of $\alpha$ that $f_n(\alpha)$ does not converge to $0$. Thus it's difficult to say anything too general about $f_n(\alpha)$, especially for particular $\alpha$. For instance, proving $\lim_{n\to\infty} f_n(\sqrt{2})=0$ is a famous open problem.




Test your understanding



Here are some related problems to think about, not all of which I know off-hand how to answer!




  1. Is $(\sqrt{n})$ equidistributed mod $1$?

  2. What about $(\log n)$?

  3. Show that there are some $\alpha$ for which $f_n(\alpha)$ does not converge.

  4. Determine $\{z: f_n(\alpha)\to z~\text{for some}~\alpha\}$.

  5. Let $g_n(\alpha) = \frac1n \sum_{k=1}^n e^{i2\pi \alpha k!}$. Prove statements for $g_n$ analogous to those we proved for $f_n$.


  6. Is there a power of $2$ with at least $7$ decimal digits equal to $7$?

  7. Think of other silly (but not open) problems like these ones.


Tuesday 26 September 2017

sequences and series - Show that $\lim_{n\to\infty}b_n=\frac{\sqrt{b^2-a^2}}{\arccos\frac{a}{b}}$



If $a$ and $b$ are positive real numbers such that $a<b$ and $$a_1=\frac{a+b}{2},\quad b_1=\sqrt{a_1b},\quad \dots,\quad a_n=\frac{a_{n-1}+b_{n-1}}{2},\quad b_n=\sqrt{a_nb_{n-1}},$$ then show that



$$\lim_{n\to\infty}b_n=\frac{\sqrt{b^2-a^2}}{\arccos\frac{a}{b}}.$$
I tried to calculate explicitly the first few terms $a_2,b_2$ etc but the terms got too complicated really quickly and I couldn't spot any pattern.


Answer



If we set $a=b\cos\theta$, we can show by induction $$b_n=b\prod^n_{k=1}\cos\frac{\theta}{2^k},\quad a_n=b_n\cos\frac{\theta}{2^n},$$ using $$\frac{1+\cos\frac{\theta}{2^k}}{2}=\cos^2\frac{\theta}{2^{k+1}}.$$
But $$\prod^n_{k=1}\cos\frac{\theta}{2^k}=\frac{\sin\theta}{\theta}\,\frac{\frac{\theta}{2^n}}{\sin\frac{\theta}{2^n}},$$ as we can show by repeated application of the identity $\sin2\alpha=2\sin\alpha\cos\alpha$, so the limit of the product as $n\rightarrow\infty$ is $\sin\theta/\theta$. Since $$b\sin\theta=b\sqrt{1-\cos^2\theta}=\sqrt{b^2-a^2},$$ the result follows.


integration - Examples of use of integrals to show convergence of a limit

Recently I got the problem where I had to show the existence and finiteness of the limit of $a_n$ where $a_n$ is bounded and such that both limits

\begin{align}
\lim_{n\to\infty} \cos(a_nt) \ \ \ \text{ and } \ \ \ \lim_{n\to\infty} \sin(a_nt)
\end{align}

exist for all $t\in\mathbb R$. A reasonable proof would be to argue that there can only be one accumulation point, because more would lead to a contradiction. It is not so hard, but one needs to think a little bit.



What did I do? Well, MSE, as you guys know, is full of integrals. They are everywhere! Some see them as recreational math, but some see them as ways to prove things in an elegant manner. It has already been done here by Terry Tao, also in this post proving $e<\pi$, and finally in this post, where there was integral milking which actually led to a proof of a serious result.



So to tackle my problem with sines and cosines, I have used the integrals
\begin{align}
\int_\mathbb R \frac{\cos(a_nx)}{x^2+1}\,dx \ \ \ \text{ and } \ \ \ \int_\mathbb R\frac{\sin(a_nx)}{x(x^2+1)}\,dx

\end{align}

together with Dominated Convergence Theorem to prove the convergence of $a_n$. The original problem and the detailed solution can be found here.



How did I come up with these integrals? I wanted to use the fact that it would be pretty absurd if $a_n$ did not converge. Indeed, it would lead to ridiculous results. Moreover, since the cos and the sin are bounded, I thought there must be a nice integral... before I finished the sentence in my head, those integrals popped up with a clean button which said "DCT"!



The first one is something everyone has seen in Complex Analysis in the chapter on contour integration, and so is the second. However, the first one actually comes up naturally in probability (the Cauchy distribution), while the second one exists to exercise (contour) integration, say. I believe they are constantly posted on MSE, so I couldn't miss them.




Question. What are examples where the use of integrals (or series) provide a proof of existence/non-existence of a limit?





This question makes sense, because it shows that the fancy recreational integrals can actually be beneficial for proofs. Integrals connect functions with each other. In this specific case they connected the periodic cosine with the exponential function which is invertible. I think there might be a lot more of such examples.



Of course series are also allowed, since one can write a series as an integral anyway. By the way, I do not mean to prove really, really trivial results, but I cannot draw a sharp line there...

Monday 25 September 2017

fake proofs - Simple equation manipulation gives the wrong solution.



I was stumped by a false proof I came across, in which I cannot wrap my head around the exact reason why it does not work. We start from the following equation:
$$x^2+x+1=0.$$
On one hand we get that $x = -1 -x^2.$
On the other we can divide the equation by $x$ to get $x+1+\frac{1}{x}=0,$
which gives us that $x = -1 - \frac{1}{x}.$
By combining the 2 equations we get that $x^2 = \frac{1}{x}$, so $x^3 = 1$ and finally $x=1$.




However this obviously is not the correct solution, and I really can't put my finger on what went wrong here. I suspect the division did something questionable there, but since the solution is not $x=0$, it does not seem all that wrong to me. Secondly, the part where the equations are combined together seem like a one-way implication to me, but I can't find a reason why it would give a wrong solution to the initial equation.



I would really appreciate a thorough explanation as to why this "proof" is false and what are the exact reasons it is so.


Answer



As I don't find the existing answers particularly illuminating, here's my own take on this.



There's no issue with your reasoning, except that you've proved something vacuous. To really drill this home, let's start by writing the argument in a simpler way:




Assume $x^2 + x + 1 = 0$.




It follows that $(x-1)(x^2 + x + 1) = 0$.



Hence $x^3 - 1 = 0$.



Thus $x^3 = 1$.



Therefore $x = 1$.





Now observe the following:




  1. The above argument is directional. In particular, what we've proved is that if our initial assumption of $x^2 + x + 1 = 0$ is true, then our final conclusion of $x = 1$ is true. We have not proved the converse.


  2. The step from $x^3=1$ to $x=1$ forces us to work in the real line (notice that from $z^3 = 1$ we can't deduce $z = 1$, assuming that $z$ is a complex number).


  3. Since we're working over the real line, the initial assumption of $x^2 + x + 1 = 0$ is false. To see this, compute a discriminant, or draw a graph.


  4. By the principle of explosion, everything follows from a false assumption, including $x = 1$, $x=2$, and $x=-1/1893248129823489245894589$. So the above argument is logically correct, but tells us nothing about the relationship between the condition $x^2 + x + 1 = 0$ and the condition $x = 1$. There's no relationship between these equations that has somehow been established by our proof, despite that our proof is 100% correct.


  5. I just want to emphasize that we haven't proved the backward direction. That is, we haven't proved that if $x = 1$, then we can deduce that $x^2 + x + 1 = 0$. That's because of the step where we multiplied both sides by $(x-1)$. In general, it's a law of math that $x = y \rightarrow f(x) = f(y)$ for an arbitrary function $f$. So we can do the same thing to both sides of an equation, and the new equation will be a logical consequence of the old equation. In this case, the thing we're doing to both sides is multiplication by $(x-1)$. But there's no law of math that allows you to go backward, unless $f$ is an injective function. Exercise. Prove that there exists $x \in \mathbb{R}$ such that the function $y \mapsto (x-1)y$ fails to be injective.





There's also the question of what happens over the complex numbers. There's some suggestion here that we have to work over $\mathbb{C}$ to understand the false proof. I disagree with this. Because firstly, it's not a false proof, as long as you understand what's been proved. And secondly, the argument fares worse, not better, over the complex plane! As J. W. Tanner explains, it's not true that $x^3 = 1 \rightarrow x = 1$ over the complex plane. So we don't even get a chain of implications over $\mathbb{C}$, because the step from $x^3 = 1$ to $x = 1$ simply fails in that context. You might say hang on, we get some implications like this: $$\alpha \rightarrow \beta \leftarrow \gamma.$$ Where the motivation is that we're thinking of $\alpha$ as the statement $x^2 + x + 1 = 0$, and $\beta$ as the statement $x^3 = 1$, and $\gamma$ as the statement $x = 1$. So, maybe that tells us something about the relationship between $\alpha$ and $\gamma$? But in fact, logically speaking, having one implication go one way and the other go the other way tells us nothing about the relationship between $\alpha$ and $\gamma$.


sequences and series - How to calculate $sum_{k=1}^{infty}x^{k^2}$ for different $x$



This is a quick question, hopefully. How can I evaluate $f(x) = \sum_{k=1}^{\infty}x^{k^2}$? Some power series can easily be reduced to a geometric series, Taylor series, etc. via term-wise integration/differentiation. I want to find an expression for $f(x)$ not involving a series, to be able to calculate the exact value of the sum for different $x\in (-1,1)$. I've already shown that the radius of convergence is 1, and the series looks kind of like the regular geometric series. I've tried to do some term-wise integration/differentiation, which however turned out to not work very well. Perhaps this is easy, but it has been a while since I did these kinds of problems.




Cheers!


Answer



$\displaystyle \sum_{k = 1}^{\infty}x^{k^{2}} =
-\,{1 \over 2} + {1 \over 2}\sum_{k = -\infty}^{\infty}x^{k^{2}} =
\bbox[8px,border:1px groove navy]{{\vartheta_{3}\left(0,x\right) - 1 \over 2}}$
where $\displaystyle\vartheta_{\nu}$ is a Jacobi Theta Function.


elementary set theory - Relation between successor cardinals and power sets



What are the known relation between successor cardinals $\kappa^+$ and power sets $2^\kappa$ (when GCH is not assumed)?



For example, is it true that $\kappa^+ \le 2^\kappa \le \kappa^{++}$?



In general, I am interested in rules and tricks for transforming an inequality involving one into an inequality involving the other. Is there a good (preferably online) reference that covers how the successors and power sets relate to one another?


Answer




Since $\kappa<2^\kappa$ it is always true that $\kappa^+\leq2^\kappa$.



Anything beyond that is completely unprovable in $\sf ZFC$. To see why, recall Solovay's theorem that given any $\lambda$ it is consistent that $\lambda\leq 2^{\aleph_0}$. Now simply take $\lambda\geq\kappa^{+++}$ and then we have that $\kappa^{++}<2^{\aleph_0}\leq2^\kappa$.



If you want to omit the axiom of choice, then we can prove there is always a surjection from $2^\kappa$ onto $\kappa^+$, but that's about it. We cannot prove there is an injection in the other direction.






In general, we know nowadays that the power set operation is a wild card. It cannot be controlled without additional axioms such as $\sf GCH$.


Sunday 24 September 2017

calculus - Area of Surface Revolution of $y = sin(pi x)$ From 0 to 1



I have been trying to solve this problem and I am making a mistake somewhere that I can't discern.



To begin with, $y = \sin(\pi x)$ and thus $\frac{dy}{dx} = \pi\cos(\pi x)$, and $(\frac{dy}{dx})^2 = \pi^2\cos^2(\pi x)$




So, the formula for the surface of revolution should be $2\pi\int_0^1 \sin(\pi x)\sqrt{1+\pi^2\cos^2(\pi x)} dx$



A substitution of $u=\pi \cos(\pi x)$ and $du=-\pi^2 \sin(\pi x)dx$ yields:



$-\frac{2 \pi}{\pi^2} \int_1^{-1} \sqrt{1+u^2}du$ which is also $\frac{2}{\pi} \int_{-1}^{1} \sqrt{1+u^2}du$



From here I set $u = \tan(\theta)$ and $du = \sec^2(\theta)d \theta$.
Along with the trig substitution, I changed the limits of integration to $-\frac\pi 4$ (lower) and $\frac\pi 4$ (upper).



Following that trig substitution eventually took me to $\frac2\pi\int_{-\frac\pi 4}^{\frac\pi 4} \sec^3(\theta) d \theta$.




The above integral resolves to: $$\frac2\pi \left[{\frac{ \sec(\theta) \tan(\theta) + \ln{|\sec(\theta) + \tan(\theta)|}}{2}}\right] _{-\frac\pi 4}^{\frac\pi 4}$$ ...which I don't believe correctly evaluates to the answer to the original question. I believe my error was made earlier in the problem.



I've done something wrong, I'm interested in solving it with my method, but I need to know what my mistake(s) is/are. Can anyone help me figure out what I've done wrong?


Answer



When you change variables the first time putting $u$ in terms of $x,$ when $x=0$ you should have $u=\pi \cos 0=\pi,$ and for $x=1$ it is $u=\pi \cos (\pi)=-\pi.$ With this change it comes out $7.77939,$ which agrees with what maple found for the original integral without any initial substitution.
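A numerical check of the corrected value (illustrative; uses scipy's adaptive quadrature):

import numpy as np
from scipy.integrate import quad

def integrand(x):
    # Surface-of-revolution integrand for y = sin(pi x) about the x-axis
    return 2 * np.pi * np.sin(np.pi * x) * np.sqrt(1 + np.pi**2 * np.cos(np.pi * x)**2)

area, err = quad(integrand, 0, 1)
print(area)  # ~7.77939, matching the value quoted above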


calculus - Not using Taylor series or Maclaurin series, prove $frac{1}{sqrt{1-x^2}}=1+frac{1cdot3}{2cdot4}x^2+frac{1cdot3cdot5}{2cdot4cdot6}x^4+cdots$

I’m trying to find a proof of
$$\frac{1}{\sqrt{1-x^2}} = 1+\frac{1\cdot3}{2\cdot4}x^2+\frac{1\cdot3\cdot5}{2\cdot4\cdot6}x^4+\cdots,$$
which doesn’t need Taylor or Maclaurin series, like the proof of Mercator series and Leibniz series.
I tried to prove it by using calculus, but I couldn’t hit upon a good proof.

Saturday 23 September 2017

elementary number theory - Simple modulus algebra - rabin karp weird implementation



I'm studying the Rabin Karp algorithm and something isn't clear about the modulus algebra:




Let's suppose I have all base-10 numbers for simplicity's sake



$14159 = (31415 - 3 \cdot 10^4) \cdot 10 + 9$



now if I apply the modulus operation ($\bmod 997$) to each term I get:



$201 = [ (508 - 3 \cdot 30) \cdot 10 +9 ] \bmod 997$



but in the algorithm I'm studying there is this line:




$201 = [ (508 + 3 \cdot (997-30)) \cdot 10 +9] \bmod 997$



interestingly to me, the result is the same: $201$.



Why did they use the second version? Is there something I'm not considering, and is the $3 \cdot 997 \cdot 10$ term useful for something?



Edit: I was wondering... does adding a large prime number (like 997) have some algorithmic implications?


Answer



Hint $\rm\ mod\ 997\!:\ -30\, \equiv\, 997-30.\:$ This is probably done to keep the remainders positive.




Note that $\rm\: 1000\equiv 3\:\Rightarrow\: 31415\, =\ 31\cdot 1000 + 415\,\equiv\, 31\cdot 3 + 415\,\equiv\, 508,$ etc.
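A small sketch of the rolling-hash update in question (the names are hypothetical; the point is that adding $3\cdot(997-30)$ instead of subtracting $3\cdot 30$ keeps every intermediate value non-negative):

Q = 997                  # modulus
R = 10                   # base
RM = pow(R, 4, Q)        # weight of the leading digit: 10^4 mod 997 = 30

def roll(h, lead, new):
    # Drop the leading digit, shift left, append the new digit.
    h = (h + Q - lead * RM % Q) % Q   # h - lead*RM, kept non-negative
    return (h * R + new) % Q

h = 31415 % Q            # 508
print(roll(h, 3, 9))     # 201 == 14159 % 997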


radicals - Are the square roots of all non-perfect squares irrational?




I was asked to define a non-perfect square. Now obviously, the first definition that comes to mind is a square that has a root that is not an integer. However, in the examples, 0.25 was considered a perfect square, even though both the square itself and its root are not integers.



Is it that all non-perfect squares have irrational roots, e.g. $\sqrt{2}$?



Answer



In the integers, a perfect square is one that has an integral square root, like $0,1,4,9,16,\dots$ The square root of all other positive integers is irrational. In the rational numbers, a perfect square is one of the form $\frac ab$ in lowest terms where $a$ and $b$ are both perfect squares in the integers. So $0.25=\frac 14$ is a perfect square in the rationals because both $1$ and $4$ are perfect squares in the integers. Any rational that has a reduced form where one of the numerator and denominator is not a perfect square in the integers is not a perfect square. For example, $\frac 12$ is not a perfect square in the rationals. $1$ is a perfect square in the integers, but $2$ is not, and there is no rational that can be squared to give $\frac 12$


Friday 22 September 2017

number theory - Prove that for any $m > 0$, $gcd(mb,mc) = mgcd(b,c)$.

Property of GCD





For any $m > 0$ , $\gcd(mb,mc) = m\gcd(b,c)$.




Please prove this. I am learning the theory of numbers in detail, but I am not able to find the proof for this, and it does not seem to be available on the internet either. So please help me with this problem. Please prove this without using the Euclidean algorithm, as it is derived from this.

Thursday 21 September 2017

abstract algebra - Addition in finite fields



For a question, I must write an explicit multiplication and addition chart for a finite field of order 8. I understand that I construct the field by taking an irreducible polynomial in $F_2[x]$ of degree 3, and creating the splitting field for that polynomial.



The polynomial I chose for the field of order 8 is $x^3 + x + 1$. Since every finite field's multiplicative group is cyclic, it's my understanding that I can write the 6 elements of this field that are not 0 and 1 as $a, a^2, \ldots, a^6$, where $a^7 = 1$. And if my thinking is correct about that, I know how multiplication works. But I'm lost on how to develop addition in this field. I know in this case that since 1 + 1 = 0, every element should be its own additive inverse, but beyond that I'm lost as to how, for example, I should come up with a proper answer for what $1 + a$ is.



That is, assuming I'm right about the first parts.




An answer that could help me understand how to do this in general would be very helpful, as I need to do this for a field of another order as well.


Answer



Hint: $a$ being a root of your polynomial gives you the relation $a^3+a+1=0$
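A short script (an illustrative sketch; the representation is my choice) that builds both tables from this relation, encoding $c_2a^2+c_1a+c_0$ as the bit pattern $c_2c_1c_0$:

MOD = 0b1011  # x^3 + x + 1

def add(u, v):
    return u ^ v  # characteristic 2: adding coefficients is XOR

def mul(u, v):
    r = 0
    while v:
        if v & 1:
            r ^= u
        v >>= 1
        u <<= 1
        if u & 0b1000:   # degree reached 3: reduce using a^3 = a + 1
            u ^= MOD
    return r

for name, op in (("addition", add), ("multiplication", mul)):
    print(name)
    for u in range(8):
        print(" ".join(format(op(u, v), "03b") for v in range(8)))

For instance, add(0b001, 0b010) gives 0b011, i.e. $1 + a = a + 1$, which by the relation equals $a^3$.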


sequences and series - How to find the area. Linked with another question.








In this question we discussed why the fake proof is wrong.



But, what about the area?



Does the process converge to the same area as the circle ($\frac{\pi}{4}$)?



What's the area when the process repeats infinitely?




How can we prove the result using calculus and limits?



Pi is equal to 4

Wednesday 20 September 2017

sequences and series - Evaluating $\lim\limits_{ n \to +\infty} \left(\sqrt{n}\left( e^{1/\sqrt{n}} -2^{1/\sqrt{n}}\right)\right)^3$



I've got problems with calculating the limits in these two examples:



$$\begin{align*}
&\lim_{ n \to +\infty}\left( \sqrt{n}\cdot \left( e^{\frac{1}{\sqrt{n}}} -2^{\frac{1}{\sqrt{n}}} \right) \right)^3\\

&\lim_{n \to +\infty} n\cdot \sqrt{e^{\frac1n}-e^{\frac{1}{n+1}}}
\end{align*}$$



Can anybody help?


Answer



For the first one, let $u=\frac1{\sqrt n}$, and note that $u\to 0^+$ as $n\to\infty$. Then $$\sqrt n\left(e^{\frac1{\sqrt n}}-2^{\frac1{\sqrt n}}\right)=\frac1u\left(e^u-2^u\right)=\frac{e^u-2^u}u\;,$$ so you’re interested in $$\lim_{u\to 0^+}\frac{(e^u-2^u)^3}{u^3}\;,$$ and I expect that you know a way to deal with that kind of limit.



I can make a similar trick work for the second one, but it gets a bit messier. First, I’m actually going to look at $$\lim_{n\to\infty}n^2\left(e^{\frac1n}-e^{\frac1{n+1}}\right)$$ and then take its square root to get the desired limit.



Let $u=\frac1n$, so that $n=\frac1u$, $n+1=\frac1u+1=\frac{u+1}u$, and $\frac1{n+1}=\frac{u}{u+1}=1-\frac1{u+1}$. Again $u\to 0^+$ as $n\to\infty$, so I look at $$\lim_{u\to 0^+}\frac{e^u-e^{1-\frac1{u+1}}}{u^2}\;;$$ applying l’Hospital’s rule takes a little more work this time, but it’s still eminently feasible.



logarithms - Log or Antilog tables, which ones are more useful?

I'm trying to make a log or antilog table small enough to fit on the back of a wallet calendar (or a business card). My intent is to build a mathematically useful gift that can be used by anybody eager to spend 5 minutes learning how it works.



To simplify the question let's assume that I'm able to fit just 50 numbers in the table (with some spare space to add some useful logarithmic formulas and some additional constants like $\log_{10}(2)$, $\log_{10}(e)$ or $\log_{10}(\pi)$).



My question is: What 50 numbers should I select to maximize the usefulness of the table?







My first idea is to select the decimal logarithms of the first 50 even numbers:



  x  | Log(x)
--------------
  2  | log(2)
  4  | log(4)
 ... | ...
 100 | log(100)



But it turns out that the first 5 logarithms are not necessary at all since: log(x) = log(10·x)-1.



I can put them in the table anyway or just use 45 numbers (which are both sub-optimal options from my point of view) or simply re-scale the whole table:



   x   |  Log(x)
------------------
11.8 | log(11.8)
13.6 | log(13.6)
... | ...
100.0 | log(100.0)



But if I'm not going to be able to obtain the logarithm of most natural numbers maybe it is a better idea to just make an Antilog table:



   x  |   AntiLog(x)
---------------------
0.00 | antilog(0.00)
0.02 | antilog(0.02)
... | ...
0.98 | antilog(0.98)



I know that both kinds of tables (logarithmic or antilogarithmic) can be used to compute logarithms and antilogarithms approximately (using a direct or a reverse look-up depending on the case), but I'm not sure if there is a good reason to prefer the former instead of the latter.



One possible argument in favor of logarithmic tables is that they can be used to compute the logarithm of a very big number as long as this number has small factors, since log(a·b) = log(a)+log(b). With an antilogarithmic table you can only approximate the logarithm of a prime number and, therefore, you will only obtain an approximate answer (since the logarithms are irrational in general, here approximate should be read as with less precision than the numbers in the table offer).
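For what it's worth, a few lines of Python (a sketch; the layout choices are mine) can generate any of the candidate tables, which makes it easy to compare them before committing to print:

import math

def log_table(values):
    for x in values:
        print(f"{x:>6} | {math.log10(x):.4f}")

def antilog_table(n=50):
    for k in range(n):
        print(f"{k / n:.2f} | {10 ** (k / n):.4f}")

log_table(range(2, 102, 2))    # first 50 even numbers
# log_table(range(3, 102, 2))  # odd variant: covers every prime below 100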




  • Are there more arguments in favor of logarithmic tables (versus antilogarithmic tables)?


  • Are they strong enough to justify wasting 10% of the table?


  • Are there some applications where each type of table is preferred over the other?



  • Is one operation used more often than the other in practice?


  • Assuming that both operations are equally likely to be used, and that a direct look-up provide better precision than a reverse look-up, which operation should I privilege? Are the interpolation errors equally severe in both cases?




Related Questions:




  • Since the precision on the left column of the table is just 1/50 (in the sense that we are going to divide either [1,100] or [0, 1) into 50 equidistant points), I'm not sure about how much precision should I use in the right column. Does it make sense to fit as many decimal places as possible or is it just a waste of ink and readability?


  • Can you suggest better content for a table like this? For example, following a previous argument, it may be better to make a table of the logarithms of the first 50 prime numbers (finding a logarithm will be slightly more difficult, and that's OK, but I'm not sure if that kind of table will allow me to find antilogarithms of arbitrary numbers with a reasonable precision). A good trade-off may be to list the logarithms of the first 50 odd numbers (instead of the first 50 even numbers) to include all prime numbers smaller than 100.


  • Which formulas and constants would you add to increase the usefulness of the gift? (remember that the space is really limited)





Example:



This is an example of what I mean by Decimal Logarithms Table in a Business Card:



Tuesday 19 September 2017

Sum of the first n Prime numbers



Let $P_i$ denote the $i$-th prime number. Is there any formula for expressing



$$S= \sum_{i=1}^m P_i.$$




We know that there are around $\frac{P_m}{\ln(P_m)}$ prime numbers less than or equal to $P_m$. So, we have:



$$S\le m\times P_m\le \frac{P_m^2}{\ln(P_m)}.$$



I want to know if there is a better bound for $S$ in the literature.


Answer



Summation by parts gives
$$
\begin{align}
\sum_{p\le n}p
&=\sum_{k=1}^n(\pi(k)-\pi(k-1))\,k\\
&=n\,\pi(n)+\sum_{k=1}^{n-1}\pi(k)(k-(k+1))\\
&=n\,\pi(n)-\sum_{k=1}^{n-1}\pi(k)\tag{1}
\end{align}
$$
We have that $\pi(k)=\dfrac{k}{\log(k)}\left(1+O\left(\frac1{\log(k)}\right)\right)$ and so using the Euler-Maclaurin Sum Formula, we get that
$$
\sum_{k=1}^{n-1}\pi(k)=\frac12\frac{n^2}{\log(n)}+O\left(\frac{n^2}{\log(n)^2}\right)\tag{2}
$$

Therefore, we get
$$
\sum_{p\le n}p=\frac12\frac{n^2}{\log(n)}+O\left(\frac{n^2}{\log(n)^2}\right)\tag{3}
$$
Setting $n=P_m$ should give you a closer estimate.
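A quick empirical look at $(3)$ (a sketch assuming sympy is available):

from math import log
from sympy import primerange

for n in (10**3, 10**4, 10**5):
    actual = sum(primerange(2, n + 1))
    estimate = n**2 / (2 * log(n))
    print(n, actual, round(estimate), round(actual / estimate, 3))
# The ratio drifts toward 1 slowly, consistent with the O(n^2/log^2 n) error term.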


real analysis - Convergence of $\sum_{n=3}^{\infty} \frac{(\log(\log(n)))^2}{n(\log(n))^2}$.



I need to show that the series $$\sum_{n=3}^{\infty} \frac{(\log(\log(n)))^2}{n(\log(n))^2}$$
converges.



I'm trying to find some upper bound for it, since the tests I've used so far did not lead to any useful conclusion. I think that a good upper bound might be the series $\sum_{n=3}^{\infty} \frac{1}{n(\log(n))^{1+\epsilon}}$, for $\epsilon >0$, because then I could use the fact that the series $\sum_{n=3}^{\infty} \frac{1}{n^{1+\epsilon}}$ converges. Therefore I tried to approximate the expression $(\log \log n)^2$ by something like




$$(\log \log n)^2 \le (\log n)^{1+\delta}$$



for some $\delta > 0$ but without success. Could you please help me?


Answer



Hint: for $n$ big enough,
$$
(\log \log n)^2 < \sqrt{\log n}.
$$
Now use the fact that
$$
\sum \frac 1{n(\log n)^{3/2}}<\infty.
$$


Evaluate the limit $\lim\limits_{n \to \infty } \frac{(n + 1)\log^2(n + 1) - n\log^2 n}{\log^2 n}$



Evaluate:
$$\mathop {\lim }\limits_{n \to \infty } \frac{{(n + 1){{\log }^2}(n + 1) - n{{\log }^2}n}}{{{{\log }^2}n}}$$




Intuitively, I feel that for large $n$, $\log(n+1) \approx \log(n)$. So, the above limit should reduce to:



$$=\mathop {\lim }\limits_{n \to \infty } \frac{{\{ (n + 1) - n\} {{\log }^2}n}}{{{{\log }^2}n}} \ = 1$$



However, can someone please suggest how can one mathematically show this.



Thanks!


Answer




If you want to be strict, write $$\log(1+n)=\log(n)+\log(1+\frac 1n)$$ So, the numerator is $$A=(n + 1) \log^2(n + 1) - n\log^2(n)=(n+1)\left(\log(n)+\log(1+\frac 1n)\right)^2-n \log^2(n)$$ Expanding the square and grouping $$A=a^2 n+a^2+2 a n \log (n)+2 a \log (n)+\log ^2(n)$$ where $a=\log(1+\frac 1n)\approx \frac 1n $ is small when $n$ is large.



Is this what you are looking for ?



Edit



Alternatively, you could set $n=\frac 1x$ and use Taylor series around $x=0$. Doing so, the entire expression will be $$\left(1-\frac{2}{\log (x)}\right)+x \left(\frac{1}{\log ^2(x)}-\frac{1}{\log(x)}\right)+\frac{x^2}{3 \log (x)}+O\left(x^3\right)$$
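A quick numeric check of the claimed limit (a sketch; by the expansion above the leading correction is $2/\log n$, so the convergence is slow):

    from math import log

    def term(n):
        # ((n+1) log^2(n+1) - n log^2 n) / log^2 n = 1 + 2/log n + o(1/log n)
        return ((n + 1) * log(n + 1) ** 2 - n * log(n) ** 2) / log(n) ** 2

    for n in (10 ** 3, 10 ** 6, 10 ** 9, 10 ** 12):
        print(n, round(term(n), 4))  # 1.29..., 1.14..., 1.09..., 1.07..., creeping toward 1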


Monday 18 September 2017

exponential function - The integral $\int_0^\infty e^{-t^2}\,dt$




My high-school teacher and I have argued about this limit for quite a long time.



We easily reached the conclusion that the integral from $0$ to $x$ of $e^{-t^2}dt$ has a limit somewhere between $0$ and $\pi/2$, using a little trick: the inequality $e^t>t+1$ for every real $t$. Replacing $t$ with $t^2$, taking reciprocals, and integrating from $0$ to $x$ gives a beautiful $\tan^{-1}$, and $\pi/2$ comes naturally.




Next, the limit seemed impossible to find. One week later, after some Google searches, I found what the limit is. This usually spoils the thrill of a problem, but in this case it only added to the curiosity. My teacher then explained that modern approaches, like a computerised approximation, might have been applied to find the limit, since the erf is not elementary. I argued that the result was too beautiful to be only the product of computer brute force.



After a really vague introduction to Fourier series that he provided, I understood that Fourier, in a sense, generalised the first inequality, the one I used to get the bounds for the integral, with more terms of higher powers.



To be on point: I wish to find a simple proof that the limit is indeed $\sqrt\pi/2$, using the same concepts I am familiar with. I do not know what Fourier series really do, but I am open to any new information.



Thank you for your time, I appreciate it a lot. I am also sorry for not using proper mathematical symbols, since I am using the app.


Answer



It's useless outside of this one specific integral (and its obvious variants), but here's a trick due to Poisson:

\begin{align*}
\left(\int_{-\infty}^\infty dx\; e^{-x^2}\right)^2
&= \int_{-\infty}^\infty \int_{-\infty}^\infty \;dx\;dy\; e^{-x^2}e^{-y^2} \\
&= \int_{-\infty}^\infty \int_{-\infty}^\infty \;dx\;dy\; e^{-(x^2 + y^2)} \\
&= \int_0^{2\pi} \!\!\int_0^\infty \;r\,dr\;d\theta\; e^{-r^2} \\
&= -\pi e^{-r^2}\Big\vert_{r=0}^\infty \\
&= \pi,
\end{align*}
switching to polar coordinates halfway through. Thus the given integral is $\frac{1}{2}\sqrt{\pi}$.
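A crude numerical confirmation, as a sketch using only the standard library (the truncation point and step count are arbitrary choices):

    from math import exp, pi, sqrt

    # Midpoint rule for the integral of e^{-t^2} over [0, 10]; the tail
    # beyond t = 10 is smaller than e^{-100} and thus negligible.
    N, T = 100_000, 10.0
    h = T / N
    total = h * sum(exp(-((i + 0.5) * h) ** 2) for i in range(N))
    print(total, sqrt(pi) / 2)  # both print as ~0.886226...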


number theory - Multiplicative Inverse Question




What is the multiplicative inverse of $9\pmod{37}$?
I've done the Euclidean algorithm and found the gcd is $1$. I'm stuck on using the extended Euclidean algorithm. I'm confused because I'm left with $$37=(9\times 4)+1$$ and can't substitute it anywhere.


Answer



$37=9\cdot 4 + 1$ therefore $9\cdot 4 + 1\equiv 0\pmod{37}$



$9\cdot 4 \equiv -1\pmod{37}$



$9\cdot (-4)\equiv 1\pmod{37}$



$9\cdot (37-4)\equiv 1\pmod{37}$
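For reference, the back-substitution can be mechanized by the extended Euclidean algorithm; here is a minimal Python sketch (the function name ext_gcd is mine):

    def ext_gcd(a, b):
        # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    g, x, _ = ext_gcd(9, 37)
    print(g, x % 37)  # 1 33: indeed 9 * 33 = 297 = 8 * 37 + 1

In Python 3.8 and later, the built-in pow(9, -1, 37) returns the same inverse directly.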



elementary number theory - What will be the remainder when $2^{31}$ is divided by $5$?




The question is given in the title:




Find the remainder when $2^{31}$ is divided by $5$.




My friend explained me this way:





$2^2$ gives $-1$ remainder.



So, any power of $2^2$ will give $-1$ remainder.



So, $2^{30}$ gives $-1$ remainder.



So, $2^{30}\times 2$ or $2^{31}$ gives $3$ remainder.




Now, I cannot understand how he said the last line. So, please explain this line.




Also, how can I do this using modular congruency?


Answer



Your friend is wrong in one statement. Indeed you have $2^2 \equiv -1 \pmod 5$. But this implies that $(2^2)^n \equiv (-1)^n \pmod 5$, not that any power of $2^2$ will give $-1$ as a remainder.



However you get $$(2^2)^{15} = 2^{30} \equiv (-1)^{15} = -1 \pmod 5.$$ And therefore $2^{31} = 2 \cdot 2^{30} \equiv -2 \equiv 3 \pmod 5$.
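This is easy to confirm with Python's three-argument pow:

    # Powers of 2 mod 5 cycle with period 4: 2, 4, 3, 1, ...
    print([pow(2, k, 5) for k in range(1, 9)])  # [2, 4, 3, 1, 2, 4, 3, 1]
    print(pow(2, 31, 5))                        # 3, since 31 = 4*7 + 3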


Saturday 16 September 2017

limits - $\lim_{x\to0} \frac{x-\sin x}{x-\tan x}$ without using L'Hôpital



$$\lim_{x\to0} \frac{x-\sin x}{x-\tan x}=?$$



I tried using $\lim_{x\to0} \frac{\sin x}{x}=1$.



But it doesn't work :/


Answer



$$\frac{x - \sin(x)}{x - \tan(x)} = \frac{x - \sin(x)}{x^3} \cdot \frac{x^3}{x - \tan(x)}$$




Let $x = 3y$ and $x\to 0 \implies y\to 0$
$$\lim_{x\to0} \frac{x - \sin(x)}{x^3} = L $$
$$L = \lim_{y\to0}\frac{3y - \sin(3y)}{(3y)^3} = \lim_{y\to0} \frac 3 {27} \frac{y - \sin(y)}{y^3} + \lim_{y\to0} \frac{4}{27} \frac{\sin^3(y)}{y^3} = \frac{1}{9} L + \frac 4{27} $$



This gives
$$\lim_{x\to0}\frac{x - \sin(x)}{x^3} = \frac 1 6 $$



Similarly, now let $L$ denote $\lim\limits_{x\to0}\frac{x - \tan(x)}{x^3}$ and substitute $x = 3y$ again, using $\tan(3y)=\frac{3\tan y-\tan^3 y}{1-3\tan^2 y}$:
\begin{align*}
L &= \lim_{y\to0}\frac{ 3y - \tan(3y)}{27 y^3} \\
&= \lim_{y\to0} \frac{1}{(1 - 3\tan^2(y ))} \cdot \frac{3y(1 - 3\tan^2(y )) - 3 \tan(y) + \tan^3(y)}{27y^3}\\

&= \lim_{y\to0} \frac{1}{(1 - 3\tan^2(y ))} \cdot \left(
\frac 3 {27} \frac{y - \tan(y)}{y^3} + \frac 1 {27} \frac{\tan^3(y)}{y^3} - \frac 9 {27} \frac{y \tan^2(y)}{y^3 }
\right )\\
&= \frac {3L}{27} + \frac 1 {27} - \frac 1 3 \\
\end{align*}



This gives the other limit, $\lim\limits_{x\to0}\frac{x - \tan(x)}{x^3} = -\frac13$. Combining the two factors, $$\lim_{x\to0} \frac{x - \sin(x)}{x - \tan(x)} = \frac{1/6}{-1/3} = -\frac12.$$
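Sampling the ratio near $0$ agrees (a sketch; pushing $x$ much smaller eventually runs into floating-point cancellation):

    from math import sin, tan

    for x in (0.1, 0.01, 0.001, 0.0001):
        print(x, (x - sin(x)) / (x - tan(x)))  # approaches -0.5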


probability - Expected Value of Die until I roll an even number




I roll a standard die repeatedly until I roll an even number. What is the probability that the last number rolled is $2$?



The probability of rolling an even number is $\frac12$ but how would you work out what the probability that number is? Would it be the intersection of the two events?


Answer



By the principle of indifference, every even number is equally likely to be the last number rolled. Since a standard die has $6$ sides, there are $3$ even numbers, and their equal probabilities must add up to $1$, so each of them must be $\frac13$.
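A quick Monte Carlo check (a sketch; the seed and trial count are arbitrary):

    import random

    random.seed(1)
    trials, hits = 200_000, 0
    for _ in range(trials):
        roll = random.randint(1, 6)
        while roll % 2 != 0:          # keep rolling until the first even number
            roll = random.randint(1, 6)
        hits += (roll == 2)
    print(hits / trials)              # close to 1/3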


Friday 15 September 2017

proof writing - Proving inequality using induction

Question:



Prove for all natural $n$: $\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{3n+1}>1$.



I know that the base case holds. I.H: Assume it is true for $n = k$. Now I am not sure how to prove it for $n = k+1$.
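Before the induction step, a quick numeric sanity check (a sketch; this is evidence, not a proof): the sums stay above $1$ and tend to $\log 3 \approx 1.0986$.

    from math import log

    def s(n):
        # 1/(n+1) + 1/(n+2) + ... + 1/(3n+1)
        return sum(1 / k for k in range(n + 1, 3 * n + 2))

    for n in (1, 2, 5, 50, 500):
        print(n, s(n))   # every value exceeds 1
    print(log(3))        # ~1.0986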

real analysis - Is this limit proof correct?



I am currently studying the formal definition of the limit. One of the examples given by my book is the following: Prove that:

$$
\lim_{x \to 3} x^2 = 9
$$
So, using only the definition of the limit, I have to prove that for every $\epsilon > 0$ there is a $\delta > 0$ for which the following is true:
$$
0 < |x - 3| < \delta \to |x^2 - 9| < \epsilon
$$
So after some puzzling I came up with:
$\delta = \frac{\epsilon}{|x + 3|}$. And I thought this was correct: under the assumption that the antecedent is true, we can write the following chain:
$$

0 < |x - 3| < \delta \Rightarrow 0 < |x - 3| < \frac{\epsilon}{|x + 3|} \Rightarrow 0 < |x + 3||x - 3| < \epsilon \Rightarrow |x^2 - 9| < \epsilon
$$



Q.E.D, I thought. But the book came up with the following solution:
$$
\delta = \min{(1,\frac{\epsilon}{7})}
$$
Which is a correct solution. So, is mine wrong? Or did the book just provide a different proof?


Answer



Your $\delta$ can't depend on $x$, because $x$ is varying, and so is your $\delta$, but $\delta$ was supposed to be constant. That's the mistake. Let's do a reverse engineering, supposing that $\delta < 1$ (this usually helps).




If $|x-3|<\delta$, then $|x-3|<1 \implies |x| < 3+1 = 4$. And: $$|x^2-9|= |x+3||x-3|\leq (|x|+3)\delta < 7\delta,$$so that $\delta = \min (1, \epsilon/7)$ will work. So far, just ideas.



Now we check: let $\epsilon > 0$, choose $\delta = \min (1,\epsilon/7)>0$ and take $x \in \Bbb R$ such that $|x-3|<\delta$. Since $\delta \le 1$, we also have $|x|<4$. And in these conditions: $$|x^2-9| = |x+3||x-3| \leq (|x|+3)\delta < (4+3)\delta = 7\delta \leq 7 \cdot \frac{\epsilon}{7} = \epsilon.$$



(You can read more about the idea of supposing that $\delta < 1$ in my answer here.)
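A brute-force spot check of the book's $\delta$ (a sketch; a finite grid is evidence, not a proof):

    # For each eps, test |x^2 - 9| < eps on a fine grid of x with |x - 3| < delta.
    for eps in (1.0, 0.1, 0.001):
        delta = min(1.0, eps / 7)
        ok = all(
            abs(x * x - 9) < eps
            for i in range(1, 10_000)
            for x in (3 - delta * i / 10_000, 3 + delta * i / 10_000)
        )
        print(eps, delta, ok)  # ok is True every time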


inequality - How to prove $(1-\frac{1}{n})^n \leq \frac{1}{e} \leq (1-\frac{1}{n})^{n-1}$?



How to show $(1-\frac{1}{n})^n \leq \frac{1}{e} \leq(1-\frac{1}{n})^{n-1}$?



I can prove the first inequality: take the logarithm of both sides and then use the fact that $\log(1+x) \leq x.$



But how to prove the second inequality? The same method does not work here because we would need a lower bound for $\log(1+x)$.


Answer



Take logarithms in the second inequality to get $$-1 \le (n-1) \log(1-1/n)$$ which rearranges to $$- \log(1-1/n) \le \frac{1}{n-1}.$$ You can write this as $$\int_{1-1/n}^1 \frac 1t \, dt \le \frac{1}{n-1}.$$




Since $f(t) = \frac 1t$ is decreasing on $[1-1/n,1]$ its maximum value there is $1/(1-1/n) = n/(n-1)$. Consequently $$\int_{1-1/n}^1 \frac 1t \, dt \le \frac{n}{n-1} \cdot \frac 1n = \frac 1{n-1}.$$
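Both bounds are easy to eyeball numerically (a sketch):

    from math import e

    for n in (2, 5, 10, 100, 1000):
        lower = (1 - 1 / n) ** n         # increases toward 1/e
        upper = (1 - 1 / n) ** (n - 1)   # decreases toward 1/e
        print(n, lower <= 1 / e <= upper, lower, upper)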


Thursday 14 September 2017

calculus - Showing that $\lim\limits_{x\to 0}\frac{x\sin(1/x)-\cos(1/x)}{x}$ does not exist without sequences

Show that $$\lim_{x\to 0}\frac{x\sin(1/x)-\cos(1/x)}{x}$$ does not exist.




I understand that at $0$, the $\dfrac{\cos(1/x)}x$ term oscillates between $-\infty$ and $+\infty$. But I want a complete formal proof that uses the definition of limit ($\varepsilon$, $\delta$), not sequences like $(2k\pi n)$ or other techniques like L'Hôpital, etc. How do I write a formal proof for that?



Also I want to show that $\lim\limits_{x\to0} (x\sin(1/x) - \cos(1/x))$ does not exist without using any sequences and only using the definition of limit.

sequences and series - How to find the sum of a geometric progression involving cos using complex numbers?

Use $ 2\cos{n\theta} = z^n + z^{-n} $ to express $\cos\theta + \cos3\theta + \cos5\theta + ... + \cos(2n-1)\theta $ as a geometric progression in terms of $z$. Hence find the sum of this progression in terms of $\theta$. Any tips/help would be appreciated. I have the common ratio as $z^2$ and the first term as $z^{1-2n},$ and I can put these into the original formula, but I can't seem to get the answer I'm looking for.



$z = \cos\theta + i\sin\theta$, where $i$ is the imaginary unit.
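For reference, the telescoping identity $2\sin\theta\cos((2k-1)\theta)=\sin(2k\theta)-\sin((2k-2)\theta)$ gives the standard closed form $\sum_{k=1}^{n}\cos((2k-1)\theta)=\frac{\sin 2n\theta}{2\sin\theta}$ (for $\sin\theta\neq 0$), which is what the geometric series in $z$ should reproduce; a short numeric sketch confirms it:

    from math import cos, sin

    def direct(n, t):
        return sum(cos((2 * k - 1) * t) for k in range(1, n + 1))

    def closed(n, t):
        return sin(2 * n * t) / (2 * sin(t))  # valid whenever sin(t) != 0

    for n, t in ((5, 0.3), (12, 1.1), (40, 2.7)):
        print(direct(n, t), closed(n, t))     # each pair agrees to float precision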

Wednesday 13 September 2017

number theory - Prove that there are no integers $x,y$ such that $x^2+y^2=7,000,000$.



Prove there are no integers $x,\,y$ such that $x^2+y^2=7,000,000$. (Hint: $\pmod7$)



I'm a little stuck on this problem. I'm assuming the hint is to arrange the problem as $x^2+y^2\equiv0\pmod7$, but I can't find anything that will let me proceed from there. I'm considering a proof by contradiction where I would assume there are such integers, come to a contradiction after a little hand waving and conclude that there are no such integers. I've been searching online and reading through my textbook but I can't seem to find anything useful.
If you know of any modulo theorems that would get me rolling here, that would be fantastic.


Answer




You can list the squares mod 7 -- you just need to check what $0^2$, $1^2$, $2^2$, and $3^2$ are (since $4^2 \equiv (-3)^2 = 3^2$, etc.). Then you can check what combinations of them add up to $0$. Proceed from there.
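Carrying the hint to the end in a few lines of Python (a sketch; the comments spell out the conclusion):

    squares = sorted({x * x % 7 for x in range(7)})
    print(squares)  # [0, 1, 2, 4]

    # The only pair of squares summing to 0 mod 7 is (0, 0):
    print([(a, b) for a in squares for b in squares if (a + b) % 7 == 0])

    # So 7 | x and 7 | y, hence 49 | x^2 + y^2; but 49 does not divide 7,000,000:
    print(7_000_000 % 49)  # 7, nonzero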


limits - Evaluating $\lim_{n\to \infty}\frac{\log^k n}{n^{\epsilon}}$




Evaluate, where $\epsilon>0$ and $k\geqslant 1$ are constants:



$$\lim_{n\to \infty}\frac{\log^k n}{n^{\epsilon}}$$





L'Hôpital can't help here. I also tried to use $\log$ rules but it didn't help. I know that $\log$ grows more slowly than any polynomial, but $n^\epsilon$ is not a polynomial, so how can I evaluate this limit? Thank you.


Answer



Write
$$
\frac{(\log n)^k}{n^\varepsilon}=
\left(\frac{\log n}{n^{\varepsilon/k}}\right)^{\!k}
$$
For $r>0$, we have

$$
\lim_{x\to\infty}\frac{\log x}{x^r}=
\lim_{x\to\infty}\frac{1/x}{rx^{r-1}}=
\lim_{x\to\infty}\frac{1}{rx^r}=0
$$
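A direct numeric check overflows floating point for interesting $n$, so here is a sketch that tracks the logarithm of the ratio instead (with illustrative constants $k=10$, $\epsilon=0.1$; the ratio peaks near $\log n = k/\epsilon$ and then collapses to $0$):

    from math import log

    k, eps = 10, 0.1
    # log(ratio) = k*log(log n) - eps*log n, evaluated for n = 10**e.
    for e in (6, 12, 48, 96, 192, 384):
        L = e * log(10)
        print(f"n = 1e{e}: log(ratio) = {k * log(L) - eps * L:.1f}")  # rises, then falls below 0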


real analysis - Evaluate $\lim\limits_{n\to\infty}n\int_0^1 f(x)e^{-nx}\,\mathrm dx$ where $f$ is bounded on $\mathbb{R}^+\cup\{0\}$



Evaluate $\lim\limits_{n\to\infty}n\int\limits_0^1 f(x)e^{-nx}\mathrm dx$ where $\,f$ is bounded in $\mathbb{R}^+\cup\{0\}$.




My problem is that I think there's missing information about $f$, e.g. some kind of continuity at $0$, because if we make the change of variable $x \mapsto \frac xn$, the integral equals
$$\lim\limits_{n\to\infty}\int\limits_0^n f\left(\frac xn\right)e^{-x}\mathrm dx$$
and that can be dominated by $Me^{-x}$, where $|f|\leq M$. But the pointwise limit need not be a continuous function.



Am I wrong?

calculus - How does the existence of a limit imply that a function is uniformly continuous



I am working on a homework problem from Avner Friedman's Advanced Calculus (#1 page 68) which asks




Suppose that $f(x)$ is a continuous function on the interval $[0,\infty)$. Prove that if $\lim_{x\to\infty} f(x)$ exists (as a real number), then $f(x)$ is uniformly continuous on this interval.





Intuitively, this argument makes sense to me. Since the limit of $f(x)$ exists on $[0,\infty)$, we will be able to find a $\delta > |x_0 - x_1|$ and this implies that, for any $\epsilon>0$, we have $\epsilon > |f(x_0) - f(x_1)|$ (independent of the points chosen). I am aware that the condition of uniform continuity requires that $\delta$ can only be a function of $\epsilon$, not $x$.



What information does the existence of a real-valued limit provide that implies $f(x)$ is uniformly continuous on this interval?


Answer



Remember the definition of "uniformly continuous":




$f(x)$ is uniformly continuous on $[0,\infty)$ if and only if for every $\epsilon\gt 0$ there exists $\delta\gt 0$ such that for all $x,y\in [0,\infty)$, if $|x-y|\lt \delta$, then $|f(x)-f(y)|\lt \epsilon$.





We also know that the limit exists. Call
$$\lim_{x\to\infty}f(x) = L.$$
That means that:




For every $\varepsilon\gt 0$ there exists $N\gt 0$ (which depends on $\varepsilon$) such that if $x\gt N$, then $|f(x)-L|\lt \varepsilon$.





Finally, you probably know that if $f(x)$ is continuous on a finite closed interval, then it is uniformly continuous on that interval.



So: let $\epsilon\gt 0$. We need to show that there exists $\delta\gt0$ such that for all $x,y\in [0,\infty)$, if $|x-y|\lt \delta$, then $|f(x)-f(y)|\lt\epsilon$.



We first use a common trick: if you know that any value of $f(x)$ in some interval is within $k$ of $L$, then you know that any two values of $f(x)$ in that interval are within $2k$ of each other: because if $|f(x)-L|\lt k$ and $|f(y)-L|\lt k$, then
$$|f(x)-f(y)| = |f(x)-L + L-f(y)| \leq |f(x)-L| + |L-f(y)| \lt k+k = 2k.$$



So: pick $N\gt 0$ such that for all $x\gt N$, $|f(x)-L|\lt \epsilon/2$. That means that if $x,y\gt N$, then $|f(x)-f(y)|\lt \epsilon$, by the argument above. So we are "fine" if both $x$ and $y$ are greater than $N$.



Now, we just need to worry about what happens if both $x$ and $y$ are in $[0,N]$, or if one of $x$ and $y$ is in $[0,N]$ and the other one is in $(N,\infty)$.




For both in $[0,N]$, we are in luck: since $f$ is continuous on $[0,\infty)$, then it is continuous on the finite closed interval $[0,N]$, hence is uniformly continuous there. So we know there exists $\delta_1\gt 0$ such that for all $x,y\in [0,N]$, if $|x-y|\lt\delta_1$, then we have $|f(x)-f(y)|\lt \epsilon$. So we just need to ensure that $x$ and $y$ are within $\delta_1$ of each other; that will ensure the inequality we want if $x$ and $y$ are both in $[0,N]$, or if they are both in $(N,\infty)$.



Now we run into a slight problem: what if, say, $x\in [0,N]$ and $y\in (N,\infty)$? Well, since $f$ is continuous at $N$, we know that we can ensure that $f(x)$ and $f(y)$ are both as close as we want to $f(N)$ provided that $x$ and $y$ are both very close to $N$. But if $x$ and $y$ are within some $\ell$ of $N$, then they are within $2\ell$ of each other (same argument as before); and if $f(x)$ and $f(y)$ are both within some $k$ of $f(N)$, then they will be within $2k$ of each other.



So: let $\delta_2$ be such that if $|a-N|\lt\delta_2$, then $|f(a)-f(N)|\lt \epsilon/2$. Then, if $x$ and $y$ are both within $\delta_2$ of $N$, then $|f(x)-f(y)|\lt \epsilon$, and we'll be fine.



In summary: we want to select a $\delta\gt 0$ that will ensure that if $|x-y|\lt\delta$, then:





  • If $x$ and $y$ are both less than $N$, then $|x-y|\lt \delta_1$;

  • If $x$ and $y$ are both greater than $N$, then it doesn't matter how close to one another they are; and

  • If one of $x$ and $y$ is less than $N$ and the other is larger than $N$, then they are each within $\delta_2$ of $N$.



To make sure the first condition happens, we just need to make sure that $\delta\leq\delta_1$. The second condition is easy. What should we require of $\delta$ in order for the third condition to hold? If we can find a $\delta$ that makes all three things happen simultaneously, we'll be done.


discrete mathematics - Troubles finding inverse modulus







Hello all



Me and some friends are studying for a discrete exam and we are having some trouble finding modular inverses. The question we are stuck on is "what is the inverse of 8 mod 29".




Thanks a lot!

Tuesday 12 September 2017

real analysis - For sufficiently large $n$, which number is bigger, $2^n$ or $n^{1000}$?




How do I determine which number is bigger as $n$ gets sufficiently large, $2^n$ or $n^{1000}$?



It seems to me it is a limit problem so I tried to tackle it that way.



$$\lim_{n\to \infty} \frac{2^n}{n^{1000}}$$



My thoughts are that, after some $n$, the numerator terms will be more than the terms in the denominator so we'll have something like $$\frac{\overbrace{2\times 2\times\cdots \times 2}^{1000\text{ factors}}}{n\times n \times \cdots \times n} \times 2^{n-1000}$$

At this point, I was thinking of using the fact that $2^n$ grows slower than $n!$ as $n$ gets larger, so the limit, in this case, will be greater than $1$, meaning $2^n$ is bigger than $n^{1000}$ for sufficiently large $n$. This conclusion is really just a surmise based on a non-concrete formulation. Therefore, I'd appreciate any input on how to tackle this problem.


Answer



Note that $$\frac{2^{n+1}}{2^n}=2\text{ whereas }\frac{(n+1)^{1\,000}}{n^{1\,000}}=\left(1+\frac1n\right)^{1\,000}.$$ Since $$\lim_{n\to\infty}\left(1+\frac1n\right)^{1\,000}=1<2,$$ you have that $$\left(1+\frac1n\right)^{1\,000}<\frac32$$ if $n$ is large enough. It's not hard to deduce from this that $2^n>n^{1\,000}$ if $n$ is large enough.
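To see where the crossover actually happens, compare logarithms (a quick sketch; the scan is naive and the printed value approximate):

    from math import log2

    # 2^n > n^1000  iff  n > 1000 * log2(n); scan upward for the crossover.
    n = 2
    while n <= 1000 * log2(n):
        n += 1
    print(n)  # roughly 13747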


abstract algebra - Surjective exponentials for algebraically closed fields



The existence of the exponential on $\mathbb{C}$ has a very basic, yet very strong consequence : $(\mathbb{C}^*,\cdot)$ is a quotient of $(\mathbb{C},+)$. This question is concerned with fields $K$ such that $K^*$ is a quotient of $K$ ; that is, with the existence of a surjective group morphism $K\to K^*$. I will refer to such morphisms as "exponentials" on $K$. (I know that the notion of exponential fields exists, but I only consider the surjective case.)




The existence of such a map is not benign : since the additive group $K$ is $q$-divisible for all $q\neq char(K)$, it implies that $K^*$ is also $q$-divisible. In characteristic $0$, this actually implies that $K$ is algebraically closed (I suspect that in characteristic $p$ this implies that $K$ is separably closed, but I'll focus on zero characteristic for now). EDIT : That was false, but it doesn't really matter since Eric's answer gives a construction when $K^*$ is divisible.



Question 1: It's a basic fact that algebraically closed fields of characteristic zero and with the same transcendence degree over $\mathbb{Q}$ are isomorphic. So $\mathbb{C}_p \simeq \mathbb{C}$ for all $p$, and thus $\mathbb{C}_p$ admits (at least one) exponential map. On the other hand, isomorphisms $\mathbb{C}\to \mathbb{C}_p$ are highly non-constructible objects, so this does not give us any clue about what an exponential on $\mathbb{C}_p$ may look like.




Can such an exponential map $\mathbb{C}_p\to \mathbb{C}_p^*$ be explicitly defined ? In particular, does it exist without the axiom of choice (or with only a weak version) ?




Question 2: If we put the axiom of choice back on :





For any countable cardinal $\kappa$, do algebraically closed fields of transcendence degree $\kappa$ over $\mathbb{Q}$ admit a surjective exponential? (In particular, what about $\overline{\mathbb{Q}}$?)




We already know from $\mathbb{C}$ that this is true for $\kappa = 2^{\aleph_0}$. Since "algebraically closed fields of characteristic zero with a surjective exponential" may be countably axiomatized in first-order logic, Löwenheim-Skolem implies that there are models of every infinite cardinality; in particular, question 2 has a positive answer for all uncountable $\kappa$, and for some countable $\kappa$. But since all countable $\kappa$ give models of the same cardinality $\aleph_0$, we cannot use that remark to answer the question.


Answer



I have no idea how to answer question 1, but in the presence of the axiom of choice, everything is very simple. If $K$ is an algebraically closed field of characteristic $0$, then the additive group of $K$ is isomorphic to a direct sum of $|K|$ copies of $\mathbb{Q}$ (this is trivial when $K$ is uncountable; when $K$ is countable note that it must have infinite degree over $\mathbb{Q}$ to be algebraically closed). The torsion subgroup of the multiplicative group (i.e., the roots of unity) is $\mathbb{Q}/\mathbb{Z}$, and since $\mathbb{Q}/\mathbb{Z}$ is injective, the multiplicative group splits as a direct sum $\mathbb{Q}/\mathbb{Z}\oplus A$ for some group $A$. The group $A$ is torsion-free and divisible and hence a $\mathbb{Q}$-vector space, and its dimension is easily seen to be $|K|$ (this is trivial when $K$ is uncountable; when $K$ is countable note that the prime integers are an infinite linearly independent set).



Thus $K^*$ is a direct sum of $\mathbb{Q}/\mathbb{Z}$ and $|K|$ copies of $\mathbb{Q}$. There is clearly a surjective homomorphism from $K$ to this direct sum, since there is a surjective homomorphism $\mathbb{Q}\to\mathbb{Q}/\mathbb{Z}$ and $|K|$ is infinite. In fact, there is a surjective homomorphism whose kernel is cyclic, just like in the case of $\mathbb{C}$.




Note that all that this argument needs from the axiom of choice is for $K$ to be well-orderable. So if you know $K$ is well-orderable without the axiom of choice, you get a surjective exponential without choice. In particular, for instance, $\overline{\mathbb{Q}}$ has a surjective exponential which you can actually write down (in terms of explicit bases for $\overline{\mathbb{Q}}$ and $\overline{\mathbb{Q}}^*$), though this will of course still not be very "natural".



Note also that it is not actually true that $K^*$ being divisible implies $K$ is algebraically closed. Indeed, this just says $K$ is closed under taking radicals, but since not all polynomials can be solved by radicals, this will not generate all roots of polynomials. The above discussion, however, applies perfectly well to any field of characteristic $0$ such that $K^*$ is divisible (with the modification that the group of roots of unity might be a subgroup of $\mathbb{Q}/\mathbb{Z}$ rather than all of $\mathbb{Q}/\mathbb{Z}$; however, as a divisible subgroup, it will be a direct summand of $\mathbb{Q}/\mathbb{Z}$, and hence still a quotient of $\mathbb{Q}$).



Finally, I would note that almost no field $K$ of characteristic $p>0$ has a surjective exponential. Indeed, every element of $K$ is annihilated by $p$, so this would mean $x^p=1$ for all $x\in K^*$. This implies $x=1$ for all $x\in K^*$, so the only such field is $K=\mathbb{F}_2$.


Monday 11 September 2017

real analysis - Examples of differentiable functions with bounded but discontinuous derivatives

I am looking for examples of functions $f:\mathbb{R} \rightarrow \mathbb{R}$ that:





  • are differentiable, and

  • have discontinuous but bounded derivatives.



I believe that a function satisfying these conditions must also be (globally) Lipschitz continuous.



The only example I know of is $f(x) = \left\{ \begin{array}{c l} x^2\cdot \sin\left(\frac{1}{x}\right) & ,\quad x\neq0\\ 0 & ,\quad x=0 \end{array} \right.$, so examples besides this one would be most helpful.



My question is inspired by having recently proven that if a differentiable function $f$ has a bounded derivative, then $f$ is Lipschitz. A differentiable function with a continuous derivative is also Lipschitz, but requiring $f'$ to be continuous is not necessary for Lipschitz continuity. This exercise made me curious about examples of differentiable functions that meet the bounded derivative requirement but not the (stricter) continuous derivative requirement.




Thanks in advance for any suggestions!

math history - Why two symbols for the Golden Ratio?



Why is it that both
$\phi$
and
$\tau$
are used to designate the Golden Ratio
$\frac{1+\sqrt5}2?$


Answer



The Golden Ratio or Golden Cut is the number

$$\frac{1+\sqrt{5}}{2}$$
which is usually denoted by phi ($\phi$ or $\varphi$), but also sometimes by tau ($\tau$).



Why $\phi$: Phidias (Greek: Φειδίας) was a Greek sculptor, painter, and architect. So $\phi$ is the first letter of his name.




The symbol $\phi$ ("phi") was apparently first used by Mark Barr at the beginning of the 20th century in commemoration of the Greek sculptor Phidias (ca. 490-430 BC), who a number of art historians claim made extensive use of the golden ratio in his works (Livio 2002, pp. 5-6).




Why $\tau$: The golden ratio or golden cut is sometimes named after the Greek word τομή, meaning "cut" or "section", so again the first letter is taken: $\tau$.




Source: The Golden Ratio: The Story of Phi, the World's Most Astonishing Number by Mario Livio; MathWorld


elementary number theory - Prove if $n\mid ab$, then $n\mid [\gcd(a,n)\times \gcd(b,n)]$

Prove if $n\mid ab$, then $n\mid [\gcd(a,n)\times \gcd(b,n)]$



So I started by letting $d=\gcd(a,n)$ and $e=\gcd(b,n)$.
Then we have $x,y,w,z$ so that $dx=a$, $ey=b$, $dw=ez=n$
and we also have $s$ so that $ns=ab$



or $ns=dexy$.




What I want is $n\mid de$, but I'm only getting to $n\mid de(xy)$, since I cannot prove that $s/(xy)$ is an integer.
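A brute-force check of the statement itself over a small range (a sketch; evidence, not a proof):

    from math import gcd

    ok = all(
        gcd(a, n) * gcd(b, n) % n == 0
        for n in range(1, 40)
        for a in range(1, 40)
        for b in range(1, 40)
        if a * b % n == 0
    )
    print(ok)  # True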

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...