Sunday 31 March 2013

elementary number theory - Name of 'mod' or 'remainder' operation in mathematics




I am trying to write the Euclidean algorithm in the following way:



$A = \lfloor A \div B \rfloor \times B + (\text{remainder of}) \: A \div B $



Now is there any symbol I can use to say "remainder of A $\div$ B"? I know that in the C programming language there is the operator % for modulus; is that a valid symbol in maths? Can I write A % B? Or is there some other way?


Answer



Per request, I post my comment(s) as an answer, and add some further remarks.



The notation $\, a\bmod b\, $ denotes the remainder when dividing $\,a\,$ by $\,b\,$ using the division algorithm. The same notation is used in other rings that have an analogous (Euclidean) division algorithm, e.g. polynomials over a field, where we can write the Polynomial Remainder Theorem as $\,f(a) = f(x)\bmod x\!-\!a,\,$ or higher-degree forms like $\,f(i) = (f(x)\bmod x^2\!+\!1)\bmod x\!-\!i$.




Also $\!\bmod\!$ is used as a ternary relation (vs. above binary operation) when dealing with congruences, i.e. $\ a\equiv b\pmod{\! n}\iff n\mid a-b\,$ (an equivalence relation for a fixed modulus $\,n).$



These two denotations of $\!\bmod\!$ are related as follows



$$ a\equiv b\!\!\!\pmod{\!n}\iff a\bmod n\, =\, b\bmod n$$



so $\,a\bmod n\,$ serves as a normal form or canonical representative for the entire equivalence class $\,[a]_n = a + n\Bbb Z\,$ of all integers $\,\equiv a\pmod{\!n}$. So the above displayed equivalence means that testing congruence of integers reduces to testing equality of their normal forms (= remainders $\!\bmod n),\,$ just as we can test equivalence of fractions by testing equality of their least-terms normal forms. Similarly we can view the remainder as a "least terms" rep: it is the least nonnegative integer in the class $[a]_n.\,$
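
As a quick illustration (not part of the original answer), the displayed equivalence is easy to check mechanically; Python's `%` happens to return exactly the least nonnegative representative described above when the modulus is positive:

```python
n = 12
for a in range(-50, 50):
    for b in range(-50, 50):
        # n | (a - b)  if and only if  a mod n == b mod n
        assert ((a - b) % n == 0) == (a % n == b % n)
```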



The operational use of mod is often more convenient in computational contexts, whereas the relational use often yields more flexibility in theoretical contexts. The difference amounts to whether it is more convenient to work with general equivalence classes vs. canonical / normal representatives ("reps") thereof. For example, it would be quite cumbersome to state the laws of fraction arithmetic if we required all fractions to be in normal (reduced) form, i.e. in lowest terms. Instead, it proves more convenient to have the flexibility to work with arbitrary equivalent fractions. For example, this allows us to state the fraction addition rule in very simple form by first choosing convenient reps having a common denominator.




Analogously, in modular arithmetic the canonical remainder $\,a\bmod n\,$ may not be the most convenient choice of representative of the equivalence class $\,[a]_n =\, a + n\,\Bbb Z.\,$ For example, casting out elevens exploits that $\ {\rm mod}\ 11\!:\ 10\equiv -1\,\Rightarrow\,10^{\large k}\equiv (-1)^{\large k}\equiv \pm1,\,$ which involves choosing a rep of least magnitude $\,\color{#c00}{\bf -1}\,$ vs. $\,\color{#0a0}{10}\in [10]_{11}\! = \{\ldots,\, -23,-12,\color{#c00}{\bf -1},\color{#0a0}{10},21,\,\ldots\}.\,$ Or, as here, we can choose reps that conveniently make a quotient exact when computing modular fractions, e.g. $\!\bmod 11\!:\,\ 9/13\equiv -2/2\equiv -1.\,$
Hence, analogous to fraction addition, we choose reps that simplify the arithmetic. Using least-magnitude reps often simplifies other computations too, e.g. it can halve the number of steps in the Euclidean algorithm. Thus the use of congruence classes (vs. canonical reps) provides much greater flexibility, which may yield great simplifications - not only computationally, but also theoretically, which becomes clearer when one studies quotient rings, which reify (as algebraic structures) the congruence rules, i.e. the compatibility of congruences with addition and multiplication.
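
A small illustrative sketch of the two choices of representative discussed above (my addition; it assumes Python 3.8+ for the modular inverse via `pow`, and the helper name `balanced` is mine, not standard notation):

```python
def balanced(a, n):
    """Least-magnitude representative of a mod n, lying in (-n/2, n/2]."""
    r = a % n
    return r - n if r > n // 2 else r

print(balanced(10, 11))               # -1, the rep used when casting out elevens
print(pow(10, 5, 11), (-1)**5 % 11)   # 10 and 10: 10^k = (-1)^k (mod 11)
print((9 * pow(13, -1, 11)) % 11)     # 10, i.e. -1 (mod 11), matching 9/13 = -2/2 = -1
```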



Beware that some authors omit the parentheses in $\, a\equiv b\pmod{\!n}$, instead writing $\,a\equiv b\mod n\ $ or $\ a = b\mod n\ $ (using \mod vs. \pmod in $\TeX$). These might easily be confused with $\,a = b\bmod n,\,$ i.e. $\,a = (b\bmod n),\,$ so one should keep in mind such possible ambiguities in contexts where both forms of $\!\bmod\!$ are in use.



The name '%' for the normal form $\!\bmod\!$ operation (as in the C programming language) has not percolated to the mathematical community as far as I can tell. I recall many questions on sci.math regarding the meaning of $\rm\, a\,\%\, b.\, $ As such, if you use this notation in a mathematical forum then I recommend that you specify its meaning. This would not be necessary for $\!\bmod\!$ since that notation is ubiquitous in mathematics (currently more so for congruence than operator form). Be aware, however, that some mathematicians look down on the operational use of mod in the case when it would be more natural to use the congruence form. Apparently the mathematical Gods do too, since doing so can make some proofs considerably more difficult (much more so than the above simple case of fraction addition).
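
A small caveat worth adding here (my aside, not the original author's): even where '%' is understood, languages disagree about negative operands, so the symbol alone does not pin down the operation:

```python
# Python's % returns the least nonnegative remainder for a positive modulus,
# matching the mathematical convention used above:
print(-7 % 5)   # 3
# C (since C99) truncates division toward zero instead, so (-7) % 5 is -2 in C --
# one more reason to spell out what '%' means in mathematical writing.
```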


integration - Solving defined integral



Is there an analytical solution to the following integral:



$$ I = \iint\limits_{\mathcal{D}} \exp\left(-kx\right) \mathrm{d}x \mathrm{d}y $$



Where:



$$ \mathcal{D}(x,y) \equiv x^2 + y^2 \leq R^2 $$




And:



$$ R, k \in \mathbb{R}^+_0$$



This integral arose in a simple problem (detector response to a cylindrical cell using Beer-Lambert law) but I am struggling to solve it. I did a polar coordinate change, but then I got:



$$ I = \iint\limits_{\mathcal{D}} \rho \exp \left( -k\rho \cos(\theta) \right) \mathrm{d}\rho \mathrm{d}\theta $$



Which seems not easier to solve.




I looked for this integral form in the Handbook of Integrals but could not find it. I tried to solve it with a symbolic solver, without success. I tried to apply Green's Theorem, but the mixed exponential/trigonometric term reappeared.



Edit:



As pointed out by Santosh Linkha and JJacquelin (featuring Mathematica), this integral can be solved using the modified Bessel function of the first kind. Readers who are not familiar with Bessel functions, as I was not, might be interested to know that evaluating this integral requires the following identities:



$$ I_n(x) = \frac{1}{\pi}\int\limits_0^\pi \cos(n\theta)\exp(x\cos(\theta))\mathrm{d}\theta $$



And:




$$ \frac{\mathrm{d}}{\mathrm{d}x} \left( x^\nu I_\nu(x) \right) = x^\nu I_{\nu-1}(x) $$
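
Using these two identities, the integral appears to reduce to the closed form $I = \frac{2\pi R}{k}\, I_1(kR)$ (my reading of the hint, not stated explicitly above). A quick numerical sanity check, assuming SciPy is available:

```python
import numpy as np
from scipy import integrate, special

R, k = 1.5, 0.7

# direct double integral of exp(-k x) over the disk x^2 + y^2 <= R^2
direct, _ = integrate.dblquad(
    lambda y, x: np.exp(-k * x),
    -R, R,
    lambda x: -np.sqrt(R**2 - x**2),
    lambda x: np.sqrt(R**2 - x**2),
)

closed_form = 2 * np.pi * R * special.i1(k * R) / k
print(direct, closed_form)   # the two values agree to quadrature accuracy
```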


Answer



I cannot do better than Mathematica!



[Image: Mathematica output expressing the result in terms of the modified Bessel function of the first kind]


Saturday 30 March 2013

linear algebra - Finitely generated complement of vector subspace



This is problem 181 in Golan's linear algebra book. I have posted a proposed solution in the comments.




Problem: Let $V$, $W$, and $Y$ be vector spaces over a field $F$ and let $a\in Hom(V,W)$ and $b\in Hom(W,Y)$ satisfy the condition that $im(a)$ has a finitely generated complement in $W$ and $im(b)$ has a finitely generated complement in $Y$. Show that $im(b\circ a)$ has a finitely-generated complement in $Y$.


Answer



Let $\{w_i\}$ be a (finite) basis of the complement of $im(a)$ in $W$ and $\{y_i\}$ be a (finite) basis of the complement of $im(b)$ in $Y$. We want to show that the composition $b\circ a$ "misses" only a finite-dimensional subspace of $Y$. We already know it "misses" everything in $\{y_i\}$. But the span of $\{b(w_i)\}$ is finite-dimensional, and the composition misses at most these vectors in addition, so the total complement of the image of $b\circ a$ is finitely generated.


algebra precalculus - Why is $\frac{\sqrt{32}}{\sqrt{14D}}=\frac{4\sqrt{7D}}{7D}$ and not $2\sqrt{8}$?



I am asked to simplify $\frac{\sqrt{32}}{\sqrt{14D}}$ and am provided with the solution $\frac{4\sqrt{7D}}{7D}$.




(Side question: in the solution, why can't $\sqrt{7D}$ and $7D$ cancel out, since one is in the denominator? If multiplying out, would it not be $\sqrt{7D} * 7D = 0$?)



I gave this question a try but arrived at $2\sqrt{8}$. Here's my working:



(multiply out the radicals in the denominator)



$\frac{\sqrt{32}}{\sqrt{14D}}$ =



$\frac{\sqrt{32}}{\sqrt{14D}} * \frac{\sqrt{14D}}{\sqrt{14D}}$ =




$\frac{\sqrt{32}\sqrt{14D}}{14D}$ =



(Use product rule to split out 32 in numerator)



$\frac{\sqrt{4}\sqrt{8}\sqrt{14D}}{14D}$ =



$\frac{2\sqrt{8}\sqrt{14D}}{14D}$ =



Then, using my presumably flawed logic in my side question above I cancelled out $14D$ to arrive at $2\sqrt{8}$




Where did I go wrong and how can I arrive at $\frac{4\sqrt{7D}}{7D}$?


Answer



Hint: Multiplying numerator and denominator by $$\sqrt{14D}$$ we get $$\frac{\sqrt{32}\sqrt{14D}}{14D}$$ and this is equal to $$\frac{4\sqrt{2}\sqrt{2}\sqrt{7D}}{14D}$$ and this is equal to $$\frac{4\sqrt{7D}}{7D}$$ for $$D\neq 0$$
$$\sqrt{32}=\sqrt{2\cdot 16}=4\sqrt{2}$$ and $$\sqrt{14}=\sqrt{2}\sqrt{7}$$
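
A quick numerical spot check of the hint (just a sanity check, not part of the original answer):

```python
import math

D = 3.0
lhs = math.sqrt(32) / math.sqrt(14 * D)
rhs = 4 * math.sqrt(7 * D) / (7 * D)
print(lhs, rhs)                 # both ≈ 0.8729
assert math.isclose(lhs, rhs)
```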


functions - Prove the existence of a bijection




Let $A$ be an infinite set, and let $B$ be a countably infinite subset of $A$.



Show that if $f: \mathbb{N} \to B$ and $g: B \to \mathbb{N}$ are bijections, then
$$
h: A \to A- \{ f(1) \}, \, a \mapsto
\begin{cases}
a, & \text{if $a \notin B$} \\
f(1+g(a)), & \text{if $a \in B$ }

\end{cases},
$$
is also a bijection.



I am not really sure how to do this one


Answer



To show injectivity, assume that $h(x) = h(y)$ for some $x,y \in A$. We want to show that $x = y$. There are a few cases:



Case 1 ($x,y \notin B$): Then by definition $h(x) = x$ and $h(y) = y$. So we have $x = y$.




Case 2 ($x,y \in B$): Then we have that $f(1+g(x)) = f(1+g(y))$. Since both $f$ and $g$ are invertible, we have that $x = y$.



Case 3 ($x \in B$, $y \notin B$): Then $h(x) \in B$ since $f(1+g(x)) \in B$, and $h(y) = y \notin B$. Thus $h(x) \neq h(y)$, so this case is impossible.



Case 4 ($x \notin B$, $y \in B$): As in case 3, this case cannot happen.



So we have that $h(x) = h(y) \Rightarrow x = y$. Thus $h$ is injective.



Now we want to show surjectivity. Let $b \in A - \{f(1)\}$. We want to show that there exists an $a \in A$ such that $h(a) = b$. There are a couple of cases:




Case 1 ($b \notin B$): Set $a = b$. Then $h(a) = h(b) = b$.



Case 2 ($b \in B$): Then $b = f(n)$ for some $n \in \mathbb{N}$, since $f$ is bijective. Further, $n > 1$ since $b \in A - \{f(1)\}$. We have $g(a) = n-1$ for some $a \in B$, since $g$ is bijective. Then $h(a) = f(1+g(a)) = f(n) = b$.



Thus $h$ is surjective. Since $h$ is both injective and surjective, it is bijective.


Friday 29 March 2013

Why did Euler use e to represent complex numbers?



From Euler we've learned that $z=re^{i\theta}$.




And it's easy to see that $|z|^2=r^2$, since $re^{i\theta}\times re^{-i\theta}=r^2$.



Why must we use e to represent these numbers correctly? It seems that I could arbitrarily choose a different base $z=r\pi^{i\theta}$ and get the same size for $z$ as I did before: $|z|^2=r\pi^{i\theta}\times r\pi^{-i\theta}=r^2$



What did I miss?


Answer



If we wish to express $\pi^{i\theta}$ as a series then we have:



$$\pi^{i\theta} = e^{i\ln(\pi)\theta} = \sum_{n=0}^\infty i^n \frac{(\ln(\pi)\theta)^n}{n!} = \cos(\ln(\pi)\theta)+i\sin(\ln(\pi)\theta).$$




Calculating precisely $\ln(N)$ for $N \in \mathbb{N}$ can be difficult, not to mention $\ln(\pi)$. This would add more complications than it would be worth. Moreover, $\pi^{i\theta}$ has period $2\pi/\ln(\pi)$, which is not compatible with polar coordinates.



On the other hand, since we can write $$e^{i\theta} = \cos(\theta) + i \sin(\theta),$$ we can express $e^{i\theta}$ by calculating the already well known trigonometric functions.






I would like to add that the use of $e^{i\theta}$ is because of the nice representation found by Euler. If you were to approach the polar representation for the first time, you would approach it more like this:



Let $z=x+iy$ be a complex number, which we can visualize as a vector in $\mathbb{R}^2$, $z=(x,y)$. The magnitude of $z$ is $\|z\|= \sqrt{x^2+y^2}$. We can write the real part as $x=\|z\| \cos(\theta)$ where $\theta$ is the angle formed between the real axis and the vector at the origin. Similarly $y=\|z\| \sin(\theta)$. Thus $$z= \|z\|\cos(\theta)+i \|z\|\sin(\theta) = \|z\|(\cos(\theta)+i\sin(\theta)).$$




Until now, our reasoning was completely geometric. Independently we can work out the expression, due to Euler, $e^{i\theta} = \cos(\theta)+i\sin(\theta)$. This now naturally leads to $$z=\|z\|e^{i\theta}.$$ If it turned out that $\pi^{i\theta} = \cos(\theta)+i\sin(\theta)$ then we would use that instead. However, we know that this is not the case.






I would also like to point out that there is an intuitive reason to think that $e^{i\theta}$ should be of the form $\cos(\theta)+i\sin(\theta)$.



Notice that if we write $f(\theta) = e^{i\theta} = u(\theta)+iv(\theta)$, then $$f''(\theta) = i^2 f(\theta) = - f(\theta).$$



Hence $$u''(\theta) = -u(\theta) \text{ and } v''(\theta) = - v(\theta).$$




Thus from differential equations, we can express $u$ and $v$ as a linear combination of $\sin(\theta)$ and $\cos(\theta)$.



This motivates the investigation into the series of the exponential function. From this perspective, it is not surprising to discover $\cos(\theta)$ and $\sin(\theta)$ inside the series for $e^{i\theta}$.






One final edit: If we let $A$ and $B$ be complex numbers, then my previous statement can be expressed as: $$e^{i\theta} = A\cos(\theta)+B\sin(\theta)$$



Setting $\theta=0$ we see that $e^{0}=1=A\cdot 1 = A$. And $\theta = \pi/2$ yields $e^{i\pi/2} = B$.




Therefore, $$e^{i\theta} = \cos(\theta) + e^{i\pi/2} \sin(\theta).$$ What is left is to determine $e^{i\pi/2}$. Since $e^{i\theta}$ is $2\pi$ periodic, $e^{0}=e^{i2\pi}$. Thus we can see that $(e^{i\pi/2})^4 -1 = 0$, which means $e^{i\pi/2}$ satisfies the polynomial $x^4-1=0$. Thus $e^{i\pi/2} = \pm 1 \text{ or } \pm i$.



Taking the derivative of both sides of $e^{i\theta} = \cos(\theta) + e^{i\pi/2} \sin(\theta)$ we find: $$ie^{i\theta} = -\sin(\theta) + e^{i\pi/2} \cos(\theta)$$ and therefore by setting $\theta = 0$ we have: $$i = e^{i\pi/2}.$$ Thus we conclude $$e^{i\theta} = \cos(\theta)+i\sin(\theta).$$ All without Taylor series.


functions - Union of preimages and preimage of union



Is it possible to have a map $f:X\to Y$ from a topological space $X$ to a set $Y$ and some subsets of $Y$ namely $U_i,i\in I$ such that $\bigcup_{i\in I} f^{-1}(U_i)$ is not equal to $f^{-1}\left(\bigcup_{i\in I}U_i\right)$ ?



I can't think of a case that this is true, but of course this doesn't mean anything! Thanks.


Answer



It is a general fact that for any mapping of sets $f: X \rightarrow Y$,

$\bigcup f^{-1}(U_i) = f^{-1}\left(\bigcup U_i\right)$ and $\bigcap f^{-1}(U_i)=f^{-1}\left( \bigcap U_i\right)$. Try proving this by elementary set
theory, i.e. take an element of $\bigcup f^{-1}(U_i)$ and show that it is an element of $f^{-1}\left(\bigcup U_i\right)$ and conversely.


calculus - Definite integration evaluation of $\int_0^{\pi/2} \frac{\sin^2(x)}{(b^2\cos^2(x)+a^2 \sin^2(x))^2}~dx$.

$$\int_0^{\pi/2} \frac{\sin^2(x)}{(b^2\cos^2(x)+a^2 \sin^2(x))^2}~dx$$



How do I proceed? Please help.



The answer given is $\dfrac{\pi}{4a^3b}$.

sequences and series - A Question On Euler's Proof Of the Basel Problem



I've studied the proof that Euler gave for the famous Basel Problem, and it would seem that while it is technically correct, he does not justify all of his steps properly. Namely, he assumes that



$$\frac{\sin(x)}{x}=\left(1-\frac{x}{\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1+\frac{x}{2\pi}\right)\dots$$



simply because they have the same roots, which is really not a strong enough condition. How do you really show that the equality holds?



He then notices that if you use the above equality, and consider it against the Taylor expansion for $\frac{\sin(x)}{x}$, then you can equate the coefficients of the two infinite expansions at each order, and the result of the Basel problem follows. But how do you know that if you have two different expansions for a function, then their coefficients at each order must be equal?




I would really appreciate if someone could show me how to make these two intuitive, yet informal, steps rigorous.


Answer



The following theorem can be used: http://en.wikipedia.org/wiki/Weierstrass_factorization_theorem


Limit of the sequence $\frac{(n!)^{1/n}}{n}$

What is the limit of the following sequence? $$\frac{(n!)^{1/n}}{n}$$

calculus - Limit $\mathop {\lim }\limits_{n \to \infty } \frac{n\left( a_1\cdots a_n \right)^{\frac{1}{n}}}{a_1 + \cdots + a_n}$



Evaluate the following limit.
$a_i > 0$ for all $i\in \mathbb{N}$.




$$\mathop {\lim }\limits_{n \to \infty } \frac{{n{{\left( {{a_1}...{a_n}} \right)}^{\frac{1}{n}}}}}{{{a_1} + ... + {a_n}}}$$



I tried to use the Stolz-Cesaro Theorem, which I thought might fit here. I got this expression:



$$\mathop {\lim }\limits_{n \to \infty } \frac{{(n + 1){{\left( {{a_1}...{a_{n + 1}}} \right)}^{\frac{1}{{n + 1}}}} - n{{\left( {{a_1}...{a_n}} \right)}^{\frac{1}{n}}}}}{{{a_{n + 1}}}}$$



Which looked a little promising, but I don't know how to take it from here.


Answer



I believe the limit can be anything between $0$ and $1$.




Taking $a_n=1$ for all $n$ clearly yields the limit of $1$.



On the other hand, if you take $$a_n=\begin{cases}1 & \text{if } n\neq 2^k\\ n^{-n} & \text{if } n = 2^k\end{cases},$$
you can prove that the limit of $$\frac{a_1+\dots+a_n}{n}$$ is still $1$ (it would be $1$ even if you replaced every $2^k$-th value with $0$), while the limit of $$\sqrt[n]{a_1a_2\cdots a_n} = 0,$$
meaning the total limit is $0$.


Thursday 28 March 2013

Radius of convergence of the series $\sum\limits_{n=1}^\infty (-1)^n\frac{1\cdot3\cdot5\cdots(2n-1)}{3\cdot6\cdot9\cdots(3n)}x^n$



How do I find the radius of convergence for this series:



$$ \sum_{n=1}^\infty (-1)^n\dfrac{1\cdot3\cdot5\cdots(2n-1)}{3\cdot6\cdot9\cdots(3n)}x^n $$




Treating it as an alternating series, I got
$$x< \dfrac{n+1}{2n+1}$$



And absolute convergence tests yield
$$-\dfrac{1}{2}

I feel like it's simpler than I expect but I just can't get it. How do I do this?



Answer in book: $\dfrac{3}{2}$


Answer




The ratio test allows us to determine the radius of convergence.



For $n \in \mathbb{N}^{\ast}$, let :



$$ a_{n} = (-1)^{n}\frac{1 \times 3 \times \ldots \times (2n-1)}{3 \times 6 \times \ldots \times (3n)}. $$



Then,



$$ \begin{align*}
\frac{\vert a_{n+1} \vert}{\vert a_{n} \vert} &= {} \frac{1 \times 3 \times \ldots \times (2n-1) \times (2n+1)}{3 \times 6 \times \ldots \times (3n) \times (3n+3)} \times \frac{3 \times 6 \times \ldots \times (3n)}{1 \times 3 \times \ldots \times (2n-1)} \\[2mm]
&= \frac{2n+1}{3n+3} \; \mathop{\longrightarrow} \limits_{n \to +\infty} \; \frac{2}{3}.
\end{align*}
$$



Since the ratio $\displaystyle \frac{\vert a_{n+1} \vert}{\vert a_{n} \vert}$ converges to $\displaystyle \frac{2}{3}$, we can conclude that $R = \displaystyle \frac{3}{2}$.
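
For what it's worth, the ratio computed above is easy to tabulate numerically (a small sketch, not part of the original answer):

```python
from fractions import Fraction

def ratio(n):
    """|a_{n+1}| / |a_n| = (2n+1) / (3n+3)."""
    return Fraction(2 * n + 1, 3 * n + 3)

for n in (1, 10, 100, 10000):
    print(n, float(ratio(n)))   # tends to 2/3, so the radius of convergence is 3/2
```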


elementary number theory - How many zeroes are there at the end of the sum $1^1 + 2^2 + 3^3 + \cdots + 100^{100}$?


Find the number of zeroes at the end of the sum
$$1^1 + 2^2 + 3^3 + \cdots+ 100^{100}$$





I tried a lot and my answer came out to 4700, but it was not correct.

algebra precalculus - Why does $(-n)^{4.4}$ yield a non-real number in WolframAlpha? $n$ stands for any number

One would think that, since $4.4$ written as a fraction is $22/5$, raising a number to this power, regardless of whether the number is negative or positive, would yield a real number. However, when plugging this expression into WolframAlpha, I keep getting an imaginary number.



[Image: WolframAlpha output for $(-n)^{4.4}$ showing a complex result]



Can someone help me understand? I plug the equation in like so: (-4)^4.4 and still get a nonreal answer. What is going on here?

Proof verification induction on sequence




I am doing problems that involve inequalities. My understanding is that, through a string of inequalities, you show that one side is less than the other, rather like the transitive property. For example:



$2n+1 \lt 2^n$ for $n=3,4,...$



Assume this is true for P(k):



Base case $k = 3$



$LHS: 7 \space \space \space RHS: 8$




$LHS \space \space \lt \space \space RHS$



This holds true for $(k)$



Prove for P(k+1):



$2(n+1)+1 \lt 2^{n+1}$



my understanding is that if I can find something that is greater than $2(n+1)+1$ and obviously below $2^{n+1}$ then the inequality holds true




$2k+3 \lt 2^{k+1}$



$(2k + 1) + 2 \lt 2^k \cdot 2$



$(2k+1) + 2 \lt 2^k + 2$ by our hypothesis on $k$



It is very obvious that $2^k + 2 \lt 2^k \cdot 2$ and doesn't need explanation, so it's safe to assume



$2k+3 = (2k+1)+2 \lt 2^k + 2 \lt 2^k \cdot 2$




Thus $(2k+3) \lt 2^k \cdot 2 = 2^{k+1}$



This seems perfectly logical to me since we are dealing with integers. These inequalities would not hold for $\mathbb{R}$.



I am having a hard time finding a string of inequalities that proves $n^2 \leq 2^n+1$



Assume this holds true for $k$: $P(k)$ is the statement $k^2 \leq 2^k+1$, for $k = 1,2,\ldots$



Base case $P(1):$ $LHS: 1 \space \space \space RHS: 3$




$LHS \space \space \leq \space \space RHS$



holds true for $P(k)$



$P(k+1)$:



$(k+1)^2 \leq 2^{k+1} + 1$



$k^2 + 2k + 1 \leq 2^k \cdot 2+1$




$k^2 + 2k + 1 \leq 2^k +1 + 2k +1 \leq 2^k \cdot 2+1$



$k^2 + 2k + 1 \leq 2^k + 2k + 2 \leq 2 \cdot 2^k+1$ by our hypothesis on $k$



I do not know where to go from here


Answer



You assume that $k^2\leq 2^k+1$ for some $k\geq 1$, and you want to prove that $(k+1)^2\leq 2^{k+1}+1$. Starting from the left-hand-side, you can proceed as follows:



\begin{align*}

(k+1)^2 &= k^2+2k+1\\
&\leq (2^k+1) + 2k + 1,
\end{align*}

where the inequality follows from the induction hypothesis. If we can now show that



$$(2^k+1) + 2k + 1 \leq 2^{k+1},$$
then we are done. By rearranging, this amounts to showing (for $k\geq 1$) that
$$2k+2\leq 2^{k+1} - 2^k,$$
or




$$2(k+1)\leq 2^k(2-1),$$



or



$$2(k+1)\leq 2^k,$$



or



$$k+1\leq 2^{k-1}.$$




Oops! This last inequality is not true for all $k\geq 1$ like I hoped it would be. Don't worry about this, it happens. It doesn't mean that the original statement is wrong.



The last inequality fails for $k=1$ and $k=2$. But it does hold for all $k\geq 3$. I can remedy the proof by checking the base cases $k=1$, $k=2$, and $k=3$ in $k^2\leq 2^k+1$ by hand. Then I assume the induction hypothesis $k^2\leq 2^k+1$ for some $k\geq 3$, and I can proceed as above.
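
A tiny check of the base cases and of the auxiliary inequality used above (illustration only):

```python
# base cases of k^2 <= 2^k + 1, verified by hand in the answer
for k in (1, 2, 3):
    print(k, k**2 <= 2**k + 1)        # True, True, True

# the auxiliary inequality k + 1 <= 2^(k-1): fails for k = 1, 2, holds from k = 3 on
for k in (1, 2, 3, 4, 5):
    print(k, k + 1 <= 2**(k - 1))     # False, False, True, True, True
```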


convergence divergence - Express the Dirichlet series for the sequence $d(n)^2$ in terms of the Riemann zeta function




Prove that
$$\sum_{n=1}^\infty d(n)^2n^{-s}=\zeta(s)^4/\zeta(2s)$$
for $\sigma>1$



what i did:



I already proved this formally, that is, without considering convergence. I use Euler products, that is, Theorem 1.9 in Montgomery's Multiplicative Number Theory:



If $f$ is multiplicative and

$$\sum \vert f(n)\vert n^{-\sigma}<\infty$$
Then
$$\sum_{n=1}^\infty f(n)n^{-s}=\prod_{p\in\mathbf{P}}(\sum_{n=0}^{\infty}f(p^n)p^{-ns}).$$
I first prove that $d$ is a multiplicative function. Then I apply Euler products and, after some technicalities, the result pops up.



However, my problem is the assumption for the Euler product. My naive bound for the divisor function is $d(n)<2\sqrt{n}$, with the rough argument that $d>\sqrt{n}$ is a divisor if and only if $n/d<\sqrt{n}$ is a divisor.
But this is not good enough, since this bound only lets me apply the Euler product form for $\sigma>2$.



I found some rather complicated bounds for the divisor function on the internet, but since this is an early exercise in Montgomery's Multiplicative Number Theory book (Exercise 5 of Section 1.3.1), I doubt that's what I should use.




The post below considers the same problem, but it solves what I already solved and ignores the convergence part:



Dirichlet series generating function


Answer



I found a solution myself.



We use Montgomery's Theorem 1.8:



If $$\alpha(s)=\sum_{n\in\mathbb{N}}f(n) n^{-s}\quad \beta(s)=\sum_{n\in\mathbb{N}}g(n) n^{-s}$$ converge absolutely, then their product converges to
$$\gamma(s)=\sum_{n\in\mathbb{N}}h(n) n^{-s}$$

where $h(n)$ is the Dirichlet product $(f*g)(n)$.



Indeed,
$$\alpha(s)=\sum_{n\in\mathbb{N}}d(n)n^{-s}$$



converges absolutely for $\sigma>1$, so
$$\alpha(s)^2=\sum_{n\in\mathbb{N}}d*d(n)n^{-s}$$
converges as well, due to the theorem above.



But since $d(D)d(n/D)\geq d(n)$ we get

$$d*d(n)=\sum_{D\vert n}d(D)d(n/ D)\geq \sum_{D\vert n}d(n)=d(n)^2$$
and hence the result follows by direct comparison.


elementary set theory - Does there exist a bijection from $[0,1]$ to $\mathbb{R}$?



We can find a bijection from $(0,1)$ to $\mathbb R$. For example, we can use $f(x)=\frac{2x-1}{1+|2x-1|}$ composed of parts of two hyperbolas, see the graph here. Or we could appropriately scale the tangent function to get $g(x)=\tan\pi\left(x-\frac12\right)$, see the graph here. Several such bijections are suggested in the answers to this post: Is there a bijective map from $(0,1)$ to $\mathbb{R}$?



But does there exist a bijection from $[0,1]$ to $\mathbb R$?

If yes, then what is it?


Answer



Let’s fix $f:(0,1)\to\mathbb{R}$.



Define $g:[0,1]\to\mathbb{R}$ as follows:




  • $g(0) = -1$

  • $g(1) = 1$




and for $0 < x < 1$:


  • if $f(x)\in\mathbb{N}^*$, then $g(x) = f(x)+1$

  • if $-f(x)\in\mathbb{N}^*$, then $g(x) = f(x)-1$

  • otherwise, $g(x) = f(x)$



Then, if $f$ is a bijection, so is $g$.
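
A literal transcription of this construction as code (an illustration only; it uses the scaled-tangent bijection from the question for $f$, and exact integer tests on floats are of course numerically fragile):

```python
import math

def f(x):
    """A bijection (0,1) -> R, e.g. the scaled tangent mentioned in the question."""
    return math.tan(math.pi * (x - 0.5))

def g(x):
    """The bijection [0,1] -> R built from f, following the cases above."""
    if x == 0:
        return -1.0
    if x == 1:
        return 1.0
    v = f(x)
    if v >= 1 and v.is_integer():    # f(x) is a positive integer: shift it up
        return v + 1
    if v <= -1 and v.is_integer():   # -f(x) is a positive integer: shift it down
        return v - 1
    return v                         # otherwise leave f(x) alone
```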



exponentiation and modular arithmetic



How would I be able to simplify



$$2^x\mod 10^9$$




Since there are only $10^9$ possible values mod $10^9$, somewhere the pattern must repeat. I could have a computer program trudge through it, but I'm dealing with storing potentially 10 billion values and I'm guessing there's an easier way. I need to be able to calculate this for values of $x$ as low as $3$ and values too high to be effectively stored. I can't use Euler's Theorem since $\gcd(2,10)\ne1$.


Answer



The largest power of $2$ that divides $10^9$ is $2^9=512$. From there on we have
$$ 2^{9+n} \bmod 10^9 = 2^9\left(2^n \bmod \frac{10^9}{2^9}\right) $$
The sequence $2^n \bmod 5^9$ does satisfy the conditions for Euler's Theorem to apply; we find that it has period $\varphi(5^9)=4\cdot 5^8=1562500$. (Though actually it is not trivial that the period is not some divisor of this -- see Carmichael's theorem).



So we get
$$ 2^n \bmod 10^9 = \begin{cases} 2^n \bmod 10^9 & n < 1562509 \\
2^{((n-9)\bmod 1562500)+9} \bmod 10^9 & n \ge 1562509 \end{cases}$$
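
A small sketch of this reduction in code (assuming the period claimed above; the function name is mine):

```python
def pow2_mod_1e9(x):
    """2**x mod 10**9 for arbitrarily large x, using the period of 2 modulo 5**9."""
    M = 10**9
    period = 4 * 5**8                         # phi(5**9) = 1_562_500
    if x < period + 9:
        return pow(2, x, M)
    return pow(2, (x - 9) % period + 9, M)

# sanity check against Python's built-in modular exponentiation
for x in (3, 9, 100, 1_562_509, 1_562_510, 3_125_018):
    assert pow2_mod_1e9(x) == pow(2, x, 10**9)
```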


Wednesday 27 March 2013

limits - Summation of an infinite Exponential series

Q. Find the value of - $$ \lim_{n\to\infty} \sum_{r=0}^{n} \frac{2^r}{5^{2^r} +1} $$



My attempt: I seem to be clueless about this problem. Though I think that later terms would be very small and negligible (imagine how large $5^{2^r}$ would be after 3-4 terms), so I calculated the sum of the first 3-4 terms and my answer was just around the actual answer (just a difference of $0.002$). But I wonder if there is an actual method to solve this problem? If there is, would you please share it with me?



Any help would be appreciated
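
For reference (my observation, not part of the original post): the partial sums described above can be computed with exact rational arithmetic, and they appear to settle on $\frac14$, which is consistent with the telescoping identity $\frac{2^r}{y+1}=\frac{2^r}{y-1}-\frac{2^{r+1}}{y^2-1}$ applied with $y=5^{2^r}$.

```python
from fractions import Fraction

total = Fraction(0)
for r in range(6):
    total += Fraction(2**r, 5**(2**r) + 1)
    print(r, float(total))   # 0.1666..., 0.2435..., 0.2499..., then 0.25 to machine precision
```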

analysis - Riemann zeta function, representation as a limit

Is it true that $$\displaystyle \zeta(s) = \ \lim_{\scriptstyle a \to 0^+}\ 1 + \sum_{m=1}^\infty e^{\textstyle -s m a } \left\lfloor e^{\textstyle(m+1)a} - e^{\textstyle m a} \right\rfloor$$ My proof:



\begin{equation}F(z) = \zeta(-\ln z) = \sum_{n=1}^\infty z^{\ln n}\end{equation}



which is convergent for $|z| < \frac{1}{e}$. Now I consider the functions:



$$\tilde{F}_a(z) = \sum_{n=1}^\infty z^{a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor } = 1 + \sum_{m=0}^\infty z^{a m} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $$



because $\displaystyle\lim_{ a \to 0^+} a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor = \ln n$, we get that :




$$\lim_{\scriptstyle a \to 0^+} \ \tilde{F}_a(z) = \sum_{n=1}^\infty z^{\ln n} = \zeta(-\ln z)$$






(details)



$\displaystyle\sum_{m=0}^\infty z^{a m} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $
is also convergent for $z < \frac{1}{e}$ because $\displaystyle\sum_{m=0}^\infty (z^a e^a)^{m}$ is convergent for $z < \frac{1}{e}$ and $\displaystyle\sum_{m=0}^\infty z^{am} \left\{e^{a(m+1)} - e^{a m} \right\} $ is convergent for $z < 1$.




to justify $\displaystyle\sum_{n=1}^\infty z^{a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor } = 1 + \sum_{m=1}^\infty z^{a m} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $ : if $\left\lfloor \frac{\ln n}{a} \right\rfloor = m \ne 0$ then
$\displaystyle\frac{\ln n}{a} \in [m, m+1[ \implies n \in [ e^{am}, e^{a(m+1)}[ $ . how many different $n$'s is that ? $\left\lfloor e^{a(m+1)} - e^{am} \right\rfloor $.

linear algebra - Find the eigenvalues and their multiplicities of a special matrix




Find the eigenvalues (with multiplicities) of the matrix $M=M_{a,b}\in Mat_n(\mathbb R)$ that has $a$'s on the main diagonal and $b$'s elsewhere.




I tried to adapt the great method suggested by @Lord Shark the Unknown in this answer.




For simplicity first assume $a < b$. Then $M=B-(b-a)I$, where $B$ is the matrix with $b$'s everywhere. We have $$\det(tI-M)=\det(tI-B+(b-a)I)=\det([t+b-a]I-B).$$ Thus it suffices to find the eigenvalues with multiplicities of $B$. The product of eigenvalues is $0$, the sum is $nb$. But the only thing I can conclude from this is that there is the eigenvalue $0$ of unknown multiplicity. How to find the other eigenvalues and their multiplicities?


Answer



Think about the possible eigenvectors.



You can have an eigenvector with all entries equal to $1$, giving an eigenvalue of $a+(n-1)b$ (with multiplicity $1$).



You can have eigenvectors with one entry $1$, one entry $-1$, and all other entries zero; these give an eigenvalue of $a-b$, and this eigenspace has dimension $n-1$.
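
A quick numerical confirmation with NumPy (illustration only):

```python
import numpy as np

n, a, b = 5, 2.0, 3.0
M = b * np.ones((n, n)) + (a - b) * np.eye(n)   # a on the diagonal, b elsewhere
print(np.sort(np.linalg.eigvalsh(M)))
# [-1. -1. -1. -1. 14.], i.e. a-b with multiplicity n-1 and a+(n-1)b once
```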


limits - $\lim _{ n\rightarrow \infty }{ \left( -1+\frac { 1 }{ n } \right) }^{ n } = ?$

We know that $$\lim _{ n\rightarrow \infty }{ { \left( 1-\frac { 1 }{ n } \right) }^{ n } } =\frac { 1 }{ e } .$$ However, the result of $$\lim _{ n\rightarrow \infty }{ { \left( -1+\frac { 1 }{ n } \right) }^{ n } } $$ is shown in complex form by WolframAlpha. Why complex numbers?




Yes, $-1+\frac { 1 }{ n } < 0 $, but if we write out the values for $n=1,2,\ldots,10$, all values are real. Any opinion? How do you calculate this limit?
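
As a quick illustration of the asker's observation (my addition): for integer $n$ the powers are real and simply alternate in sign, with magnitude tending to $1/e$, so the limit over the integers does not exist; the complex values presumably come from treating $n$ as a continuous variable, where a negative base requires the complex logarithm.

```python
for n in (2, 3, 10, 11, 100, 101):
    print(n, (-1 + 1/n)**n)   # real values, alternating in sign; |value| -> 1/e ≈ 0.3679
```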

elementary set theory - Let $A,B$ be infinite sets such that $A\cap B=\emptyset$ and $|A|=|B|$. Then $|A\cup B|=|A|$




Let $A,B$ be infinite sets such that $A\cap B=\emptyset$ and $|A|=|B|$. Then $|A\cup B|=|A|$.








My Attempt:



Lemma: Any infinite set can be partitioned into a family of countably infinite sets. (I presented a proof here)



We denote $X$ and $Y$ are equinumerous by $X\sim Y$.



By the Lemma, $A$ can be partitioned into a family $(A_i\mid i\in I)$ where $A_i \sim \Bbb N$.




Thus there exists a bijection $f_i:\Bbb N\to A_i$ for each $i\in I$. Hence every $a\in A$ equals $f_i(k)$ for a unique pair $(i,k)\in I\times\Bbb N$. It follows that $A\sim I\times\Bbb N$.



We have $|A|=|I\times\Bbb N|=|(I\times\Bbb N_1)\cup(I\times\Bbb N_2)|$ where $\Bbb N_1=\{n\in \Bbb N\mid n\text{ is even}\}$ and $\Bbb N_2=\{n\in \Bbb N\mid n\text{ is odd}\}$. It's clear that $B\sim A\sim I\times\Bbb N\sim I\times\Bbb N_1\sim I\times\Bbb N_2$.



We have:




  1. $I\times\Bbb N_1\sim A$ and $I\times\Bbb N_2\sim B$


  2. $(I\times\Bbb N_1) \cap (I\times\Bbb N_2)=\emptyset$


  3. $A\cap B=\emptyset$





Thus there is a bijection from $A\cup B$ to $(I\times\Bbb N_1)\cup(I\times\Bbb N_2)$ and hence $A\cup B\sim (I\times\Bbb N_1)\cup(I\times\Bbb N_2)$



As a result, $|A|=|(I\times\Bbb N_1)\cup(I\times\Bbb N_2)|=|A\cup B|$.







Does this proof look fine or contain gaps? Do you have suggestions? Many thanks for your dedicated help!




Answer



(If $A,B$ are disjoint we have $|A|+|B|=|A\cup B|$; if not, $|A|+|B|=|(A\times\{0\})\cup (B\times\{1\})|$. We have $|A|=|A\times\{a\}|$, so we can always assume the two are disjoint.)



($|A|\cdot|B|=|A\times B|$)



Assuming $A$ and $B$ are not the empty set.



$$\max\{|A|,|B|\}\le|A|+|B|\le|A|\cdot|B|\le\max\{|A|,|B|\}\cdot\max\{|A|,|B|\}=\max\{|A|,|B|\}$$




The only inequality you need to be concerned about is




$|A|+|B|\le|A|\cdot|B|$




Try to prove it yourself.



This shows that for $A,B$ such that $|A|\ne0\ne|B|$ and $|A|$ or $|B|$ are infinite then $|A|+|B|=|A|\cdot|B|=\max\{|A|,|B|\}$.


limits - Derivatives of function composition: $\lim_{x\rightarrow 8}\frac{\sqrt[3]{x} - 2}{\sqrt[3]{3x+3}-3}$



I have to calculate the following:



$$\lim_{x\rightarrow 8}\frac{\root{3}\of{x} - 2}{\root{3}\of{3x+3}-3}$$




I am not allowed to use anything other than the definition of the derivative of a function $f$ at $a$ with respect to $x$, that is



$$f'(a) = \lim_{x\rightarrow a} \frac{f(x) - f(a)}{x - a}$$



After some thinking I found out we could define two functions $g(x) = \frac{x-3}{3}$ and $f(x) = \root{3}\of{3x+3}$ (edit: this should actually be $h(x)$).



Then $h(g(x)) = \root{3}\of{x}$, $h(8) = 3$, and finally $h(g(8)) = 2$.



If $f(x)$ is the first function (at the beginning), I could now rewrite its limit as:




$$
\lim_{x\rightarrow 8} \frac{h(g(x)) - h(g(8))}{h(x) - h(8)}
\tag{1}
$$






My question is:



If we write $c(x) = h(g(x))$, does it make sense to write $h(x) \rightarrow h(8)$ under the $\lim$ symbol instead of $x\rightarrow 8$? Is the above limit equal to the derivative of $c$ with respect to $h$? Does this even make sense, since $h$ is applied "after" $g$?







If I just apply the chain rule:



$$\frac{dc}{dx} = \frac{dc}{dh}\frac{dh}{dx} \Leftrightarrow \frac{dc}{dh} = \frac{dc}{dx}\frac{dx}{dh}$$



then I find a wrong limit:



We are looking for $\frac{dc}{dh}$. So we can calculate




$$
\frac{dc}{dx} = \frac{1}{3\root{3}\of{(3x+3)^2}} \\
\frac{dh}{dx} = \frac{3}{3\root{3}\of{(3x+3)^2}} \\
\Rightarrow \frac{dc}{dh} = \frac{1}{3}
$$



So the limit should be one third, but it actually turns out to be $\frac{3}{4}$.



Why can't we write the limit $(1)$ as $\frac{dc}{dh}$, and more importantly, how can I get the correct limit using the definition of the derivative?







PS: I'm not sure if it's even possible, because the exercise doesn't say which method to use (I know it's possible using the factorization of $a^3 - b^3$). I just think it would be really cool to solve the exercise this way.






Edit



I have made a mistake in the derivative of $h(g(x)) = \root{3}\of{x}$. I used the chain rule $c'(x) = h'(g(x))\cdot g'(x)$ and that led to a mistake. I should simply have differentiated $c(x) = \root{3}\of{x}$. So the actual derivative is $\frac{1}{3\root{3}\of{x^2}}$. This leads to the correct limit, as shown in farruhota's answer.


Answer




You want to calculate the limit with the help of a composite function:
$$\lim_{x\rightarrow 8}\frac{\root{3}\of{x} - 2}{\root{3}\of{3x+3}-3}.$$



There are two problems in your solution:



1) You actually implied $f(x)\equiv h(x)$. So, $h(x) = \root{3}\of{3x+3}, g(x) = \frac{x-3}{3}$, then $h(g(x))=\sqrt[3]{x}$ and $h(g(8)) = 2$.



2) The formal definition of the derivative of a composite function is: $\lim\limits_{x\rightarrow a} \frac{h(g(x)) - h(g(a))}{x - a}$. So:
$$\lim_{x\rightarrow 8} \frac{h(g(x)) - h(g(8))}{h(x) - h(8)}=\\
\lim_{x\rightarrow 8} \frac{h(g(x)) - h(g(8))}{x - 8}\cdot \lim_{x\rightarrow 8} \frac{x - 8}{h(x) - h(8)}=\\

(\sqrt[3]{x})'_{x=8}\cdot \left((\sqrt[3]{3x+3})'_{x=8}\right)^{-1} =\\
\frac{1}{3\sqrt[3]{8^2}}\cdot \left(\frac{3}{3\sqrt[3]{(3\cdot 8+3)^2}}\right)^{-1}=\\
\frac1{12}\cdot 9=\frac34.$$
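
A quick numerical check of the value $3/4$ (illustration only):

```python
def f(x):
    return (x**(1/3) - 2) / ((3*x + 3)**(1/3) - 3)

for h in (1e-1, 1e-3, 1e-5):
    print(f(8 + h), f(8 - h))   # both approach 0.75
```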


Tuesday 26 March 2013

calculus - Finding the surface area of a solid of revolution



I'm given the function



$x=\frac{1}{15}(y^2+10)^{3/2}$



and I need to find the area of the solid of revolution obtained by rotating the function from $y=2$ to $y=4$ about the $x$-axis.




I've tried applying the formula:



$2\pi\int_{a}^{b}f(x)\sqrt{1+(f'(x))^2}dx$



where $f(x)=\sqrt{(15x)^{2/3}-10}$ and



$f'(x)=\frac{15^{2/3}}{3x^{1/3}\sqrt{{(15x)}^{2/3}-10}}$



but it still isn't the correct answer. Is there any way to do the problem without having to find $f(x)$ but by just working with $f(y)$?




Thanks for the help


Answer



You calculated wrongly $f(x)$: the exact value is
$$f(x)=\sqrt{(15x)^{3/2}-10}$$
But why not use the formula
$$A=2\pi\int_2^4y\sqrt{g'^{\,2}(y)+1}\,\mathrm dy$$
since $x$ is given as a function $g(y)$ ?
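
For reference (my own check, not part of the answer): with $g(y)=\frac1{15}(y^2+10)^{3/2}$ one gets $g'(y)=\frac{y}{5}\sqrt{y^2+10}$ and $\sqrt{g'^{\,2}(y)+1}=\frac{y^2+5}{5}$, so the integral appears to evaluate to $36\pi$. Numerically, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

g_prime = lambda y: (y / 5) * np.sqrt(y**2 + 10)
area, _ = quad(lambda y: 2 * np.pi * y * np.sqrt(g_prime(y)**2 + 1), 2, 4)
print(area, 36 * np.pi)   # both ≈ 113.097
```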


abstract algebra - Prove linear independence using mathematical induction

Given the following set of functions:
$1,\sin(x),\sin(2x),\sin(3x), \cdots, \sin(nx)$ ($n$ an integer, $n>0$).
The task is to prove that this set of functions is linearly independent, using mathematical induction.
I googled for similar problems, and it is now clear that mathematical induction will work fine, but I have no idea how to do it.
How do I apply the method here?

modular arithmetic - RSA encryption theory - modulo theory

I'm a bit mathematically challenged and have been working on the RSA cipher (good start). I can find the public and private keys and know how to do modulo operations on a calculator. The problem is that I can't do them when the numbers get too high. For example, say I have:



$$10^{541} \bmod{2923} = C$$
The numbers involved here become very large and don't display fully on a calculator, if it can even handle them (mine is crap). What I am wondering is whether there is a better method to work out the ciphertext or plaintext that will work for largish numbers.




N.B. I'm not a mathematician, I'm in computing but was told my question would be better posed here.
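
Since this question has no posted answer here, a minimal sketch of the standard square-and-multiply approach may help (the function name is mine; Python's built-in three-argument `pow` does the same thing):

```python
def power_mod(base, exp, mod):
    """Right-to-left square-and-multiply; intermediate values never exceed mod**2."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                  # current bit of the exponent is set
            result = (result * base) % mod
        base = (base * base) % mod   # square for the next bit
        exp >>= 1
    return result

assert power_mod(10, 541, 2923) == pow(10, 541, 2923)
print(power_mod(10, 541, 2923))
```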

Monday 25 March 2013

linear algebra - Significance of row reduction

I hadn't realized it until I read the most shocking line while studying linear algebra:



"Row operations CAN change the column space of a Matrix."



Basically, I have accustomed myself to viewing the solution of matrix equations as a series of transformations. Say, for example, I come across an equation $Ax=b$; I would interpret this as: the vector $x$ was transformed by the matrix $A$, so what would that vector $x$ be?



Normally, in linear algebra we'd do such solutions by reducing the matrix into echelon form or reduced echelon form. Note that the column space is the span of the columns of the matrix $A$.




Then, this is where the confusion lies. Say $A$ spanned a space where $b$ was in the span of $A$, so $Ax=b$ has a solution. Then we row reduce $A$ to echelon form; possibly the span of $A$ has changed and it may no longer include $b$ in its span, on top of which the vector $b$ itself has been row reduced along with it. What guarantee is there that $b$ is in the span of $A \sim B$?



Now there is a second confusion: if row reductions do change the span of a matrix, then do the pivot columns of the reduced row echelon form of $A$ not span the same space as $A$? Then how are these a basis for the space spanned by $A$?



EDIT: more confusion here:
when we're finding the null space of $A$, we row reduce it and find the linear combinations of the free variables on the pivot columns, and this is the null space of $A$. But how does this all make sense, or how is this even equivalent, when the span of $A$ has changed?
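
A concrete instance of the quoted warning (my illustration, using SymPy): row reducing can change the column space, even though the pivot columns of the reduced form still tell you which columns of the original matrix form a basis.

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [1, 1]])
R, pivots = A.rref()
print(R)                  # Matrix([[1, 1], [0, 0]]): its column space is spanned by (1, 0)
print(A.columnspace())    # [Matrix([[1], [1]])]: the column space of A itself
print(pivots)             # (0,): column 0 of the ORIGINAL A is a basis of Col(A)
```

Row operations do preserve the row space and the null space, which is why the null-space computation in the edit still works; it is only the column space that can change.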

real analysis - Minimal dense subset of $\mathbb{Q} \cap [0,1]$



The following question was a problem in an Analysis exam:




Let $n \in \mathbb{N}$. Define $A_{n} := \displaystyle \left\{\frac{k}{2^n} \bigg| k \in \mathbb{Z}, 0 \leq k \leq 2^n \right\}$. Let $A_{\infty} = \cup_{n \in \mathbb{N}} A_n$ .



Compute $\overline{A_{\infty}}$ (closure of $A_{\infty}$) and $A_{\infty}^{\circ}$ (the interior of $A_{\infty}$).





I have solved the problem. I got the answer as $\overline{A_{\infty}} = [0,1]$ and $A_{\infty}^{\circ} = \phi$.



But as I wondered about the problem, I observed that $A_{\infty}$ was a strict subset of $\mathbb{Q} \cap [0,1]$ and was still dense in $[0,1]$. This set me thinking; I asked myself whether I can get a minimally dense (in $[0,1]$) subset of $A_{\infty}$.



So I defined $P_{\infty} = \cup_{n \in \mathbb{N}} P_n$ where $P_n := \displaystyle \left\{\frac{k}{2^n} \bigg| k \in \mathbb{Z}, k \text{ is prime } , 0 \leq k \leq 2^n \right\}$. And then I defined $D_n := A_n \setminus P_n$ and $D_{\infty}$ appropriately. I could show that $D_{\infty}$ is dense in $[0,1]$ and that it is a strict subset of $A_{\infty}$.



So I have up to two questions: (dense always means dense in $[0,1]$ in the following questions)





0)Does there exist a minimally dense subset of $\mathbb{Q} \cap [0,1]$? If it does, how do we find it?



1) Does there exist a minimally dense subset of $A_{\infty}$? If it does, how do we find it?



2) Is $P_{\infty}$ dense?




Thanks,
Isomorphism



Answer



We show that the set $P_\infty$ defined by the OP is dense in $[0,1]$.



Let $x\in (0,1)$. We will show that $x$ can be well-approximated by numbers of the form $\frac{p}{2^n}$, where $p$ is prime.
Take $n$ very large, and let $\frac{m}{2^n} \in (0,1)$ be a very good approximation to $x$ (absolute value of the error $\lt \epsilon/2$), with $m$ a very large integer. So if $x$ is very close to $0$, we still make sure that the numerator $m$ is large, by taking $2^n$ huge.



A not too difficult result (given the Prime Number Theorem!) about prime gaps is that for any given $\epsilon$, and large enough $k$, there is always a prime between $p_k$ and $p_k(1+\epsilon)$.



Let $q$ be the largest prime which is $\le m$. By the result quoted above, if $m$ is large enough there is a prime $p$ in the interval $q < p \le q(1+\epsilon/2)$.


We show that $\frac{p}{2^n}$ is within $\epsilon$ of $x$. It is enough to show that $\frac{p}{2^n}$ is within $\epsilon/2$ of $\frac{m}{2^n}$.



Note that $p>m$ and $p\le m(1+\epsilon/2)$, so $$\frac{p}{2^n}-\frac{m}{2^n}\le\frac{m(1+\epsilon/2)}{2^n}-\frac{m}{2^n}=\frac{m\epsilon/2}{2^n}<\epsilon/2.$$



(There is a small deliberate gap in the argument, since for $x$ extremely close to $1$, we could have $p>2^n$. Then we use $q/2^n$.)



Comment: There is nothing special about $2^n$ here. The sequence $(2^n)$ can be replaced by any integer sequence $(a_n)$ which is not bounded above. An interesting choice is to let $(a_n)$ be the sequence of primes. The proof is the same.


integration - Prove that $\int\limits_{-\infty}^{\infty} \frac{e^{-x}}{1+e^{-2\pi x}}\,dx=\frac1{2\sin\left(\frac{1}{2}\right)}$



I need to prove that $\displaystyle\int_{-\infty}^{\infty} \dfrac{e^{-x}}{1+e^{-2\pi x}}\,dx=\dfrac{1}{2\sin\left(\frac{1}{2}\right)}$. This is an exercise from Basic Complex Analysis by Marsden and Hoffman. My attempt:



First, Marsden says that we need to consider the complex function $f(z)=\dfrac{e^{-z}}{1+e^{-2\pi z}}$ and the following curve



[Image: rectangular contour $\gamma=\gamma_1\cup\gamma_2\cup\gamma_3\cup\gamma_4$ in the upper half-plane]




$\gamma_2=r+t\pi i$ with $t\in [0,1]$



$\gamma_3=(\pi i+r)-2tr$ with $t\in [0,1]$



$\gamma_4=(1-t)(\pi i-r)-tr$ with $t\in [0,1]$



Note that the only poles of the function $f(z)$ in the rectangle are when $z=\dfrac{i}{2}$, $z=\dfrac{3i}{2}$ and $z=\dfrac{5i}{2}$. From a direct calculation we obtain that $\text{Res}\left(f(z),\dfrac{i}{2}\right)=\dfrac{e^{-i/2}}{2\pi}$, $\text{Res}\left(f(z),\dfrac{3i}{2}\right)=\dfrac{e^{-3i/2}}{2\pi}$ and $\text{Res}\left(f(z),\dfrac{5i}{2}\right)=\dfrac{e^{-5i/2}}{2\pi}$.



After a lot of calculations we obtain a bound for the integral over $\gamma_2$: $$\dfrac{e^{-r}}{\sqrt{(1-e^{-2\pi r})^2}}\geq \dfrac{|e^{-r-t\pi i}|}{|1+e^{-2\pi(r+t\pi i)}|}\geq 0$$Thus $$\displaystyle\int_{\gamma_2}^{}f(\gamma_2)\cdot d\gamma\leq \dfrac{e^{-r}}{\sqrt{(1-e^{-2\pi r})^2}}$$When $r\to\infty$ then $\displaystyle\int_{\gamma_2}^{}f(\gamma_2)\cdot d\gamma\to 0$




I think it is the same for $\gamma_4$, but I have trouble with $\gamma_3$. After a lot of calculations we obtain the next bound: $$\dfrac{e^{-r+2rt}}{\sqrt{1+2e^{-2\pi}\cos(-2\pi^2)+e^{-4\pi r}}}\geq \dfrac{|e^{-r+2rt-\pi i}|}{|1+e^{-2\pi(\pi i+r-2rt)}|}$$ but when $r\to\infty$ we obtain that $\dfrac{e^{-r+2rt}}{\sqrt{1+2e^{-2\pi}\cos(-2\pi^2)+e^{-4\pi r}}}\to\infty$ and I need, maybe, that this limit exists (maybe, zero).



Clearly I want the integrals over the curves because if $\gamma=\gamma_1\cup\gamma_2\cup\gamma_3\cup\gamma_4$ then $\displaystyle\int_{\gamma} f(\gamma)\cdot d\gamma=\displaystyle\int_{\gamma_1} f(\gamma_1) \cdot d\gamma_1+\displaystyle\int_{\gamma_2} f(\gamma_2) \cdot d\gamma_2+\displaystyle\int_{\gamma_3} f(\gamma_3) \cdot d\gamma_3+\displaystyle\int_{\gamma_4} f(\gamma_4) \cdot d\gamma_4$ and I want to use the Residue Theorem. But is the rectangle the correct curve? Here is a screenshot of the exercise:



[Image: screenshot of the exercise from Marsden and Hoffman]


Answer



For your question, you will have to consider a rectangular contour with vertices $\pm R$ and $\pm R+i$. I'm guessing that by "same technique" the authors meant a rectangular contour situated in the upper half of the complex plane. Calling our integrand $f(z)$



$$f(z)\equiv\frac {e^{-z}}{1+e^{-2\pi z}}$$




and parametrizing about all four sides of the contour gives us



$$\begin{multline}\oint\limits_{\mathrm C}dz\, f(z)=\int\limits_{-R}^{R}dz\, f(x)+i\int\limits_0^1dy\, f(R+yi)-\int\limits_{-R}^{R}dz\, f(z+i)-i\int\limits_0^1dy\, f(-R+yi)\end{multline}$$



When we take the limit as $R\to\infty$, it can be shown that the integrals over the vertical sides (i.e. the second and fourth integrals) vanish. This can be justified using the estimation lemma. Both integrals have a length of $L=1$, while their upper bound $M$ can be computed by using the fact that $|e^{-yi}|\leq 1$ and $|e^{\pm R}|=e^{\pm R}$.



$$\begin{align*}M_1 & =\left|\,\frac {e^{-R}e^{-yi}}{1+e^{-2\pi R}e^{-2\pi yi}}\,\right|=\frac {e^{-R}}{1+e^{-2\pi R}}\xrightarrow{R\,\to\,\infty}0\\M_2 & =\left|\,\frac {e^{R}e^{-yi}}{1+e^{2\pi R}e^{-2\pi yi}}\,\right|=\frac {e^{R}}{1+e^{2\pi R}}\xrightarrow{R\,\to\,\infty}0\end{align*}$$



Taking the product $ML$ and calling the two vertical sides $\Gamma_{1}$ and $\Gamma_{2}$ respectively,




$$\begin{align*} & \left|\,\int\limits_{\Gamma_{1}}dz\,\frac {e^{-z}}{1+e^{-2\pi z}}\,\right|\leq M_1L=0\\ & \left|\,\int\limits_{\Gamma_{2}}dz\,\frac {e^{-z}}{1+e^{-2\pi z}}\,\right|\leq M_2L=0\end{align*}$$



That is, the integrals over the vertical sides vanish as $R\to\infty$. Now, what's left is



$$\oint\limits_{\mathrm C}dz\, f(z)=(1-e^{-i})\int\limits_{-\infty}^{\infty}dx\, f(x)$$



The contour integral is also equal to the sum of the residues at the poles inside the contour, multiplied by $2\pi i$. Fortunately, there is only one pole inside, at $z=i/2$. Therefore



$$\operatorname*{Res}_{z\,=\, i/2}\frac {e^{-z}}{1+e^{-2\pi z}}=\lim\limits_{z\,\to\, i/2}\frac {(z-i/2)e^{-z}}{1+e^{-2\pi z}}=\frac {e^{-i/2}}{2\pi}$$




Hence the contour integral is



$$\oint\limits_{\mathrm C}dz\, f(z)=ie^{-i/2}$$



Putting everything together and isolating our integral $I$, we get



$$\int\limits_{-\infty}^{\infty}dx\,\frac {e^{-x}}{1+e^{-2\pi x}}=\frac {ie^{-i/2}}{1-e^{-i}}\color{blue}{=\frac 12\csc\left(\frac 12\right)}$$
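
A quick numerical check of the final value (illustration only, using mpmath for the improper integral):

```python
from mpmath import mp, mpf, quad, exp, pi, sin, inf

mp.dps = 25
lhs = quad(lambda x: exp(-x) / (1 + exp(-2 * pi * x)), [-inf, inf])
rhs = 1 / (2 * sin(mpf(1) / 2))
print(lhs, rhs)   # both ≈ 1.042915
```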


deriving an identity for complex numbers



For future reference the following question is from Complex Variables and Applications by Brown and Churchill, 8th Edition.



Question #6 on Page 8 concerning the derivation of an identity.



Let $z = (x, y)$ be the ordered pair in which $x$ represents the real part and $y$ the imaginary part. From the relations:



$(1) \frac{z_1}{z_2} = z_1 \frac{1}{z_2}$




$(2) \frac{1}{z_1}\frac{1}{z_2} = \frac{1}{z_1z_2}$



show that $\big(\frac{z_1}{z_3}\big)\big(\frac{z_2}{z_4}\big) = \frac{z_1z_2}{z_3z_4}$



I'm a little bit stuck, or maybe I'm missing something. But this is how I'm approaching it.



Expanding the r.h.s.:



we have:




$z_1z_2(z_3)^{-1}(z_4)^{-1} $



$z_1z_2 (z_3z_4)^{-1}$



$z_1z_2 \frac{1}{z_3z_4}$



$ \frac{z_1z_2}{z_3z_4}$, by relation 1.



What do you think? Did I get it right?



Answer



Assuming that it has been proved that multiplication of complex numbers is commutative (i.e. $u_1u_2=u_2u_1$ for any two complex numbers $u_1,u_2$), your proof looks good.


Sunday 24 March 2013

calculus - Find the limit of a series involving "e"

Hey, I've been having trouble finding the limit of the sequence below:



$\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}$



Thanks!
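
A numerical illustration (my addition): by Stirling's approximation the limit is $1/e$, and the convergence is easy to see numerically.

```python
import math

for n in (10, 100, 1000, 100000):
    # log(n!) = lgamma(n + 1), which avoids overflow for large n
    print(n, math.exp(math.lgamma(n + 1) / n) / n)   # tends to 1/e ≈ 0.3679
```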

algebra precalculus - My dilemma about $0^0$





We know that $0^0$ is indeterminate.



But if do this:



$$(1+x)^n=(0+(1+x))^n=C(n,0)\cdot ((0)^0)((1+x)^n) + \cdots$$



we get
$$(1+x)^n=(0^0)\cdot(1+x)^n$$



So, $0^0$ must be equal to $1$.




What is wrong in this?
Or am I mistaking $0^0$ as indeterminate?



Other threads are somewhat similar but not exactly the one I am asking.


Answer



There's a common yet subtle misconception among mathematics students that algebraic laws are prior to the numerical values that they describe. For example, "why" is it that:



$$5(3+2)=5\cdot 3 + 5\cdot 2\tag1$$




The tendency is to say, oh, this is true because of the distributive law: $a(b+c)=ab+ac$. But actually, equation (1) is true because it's true, not because of an abstract law. That is, you can actually calculate the left hand side and the right hand side and check that they're equal. And it's the fact that that always works that justifies the abstract law.



In your case, the binomial theorem is considered true because it always works when you plug in actual numerical values. You know how to calculate $(2+3)^8$ without the binomial theorem, but the result you get is consistent with the binomial theorem. To compute an expression using a general theorem, and then conclude that a specific numerical expression has a specific value is backwards reasoning.



If we're starting from the point of view that $0^0$ is undefined, then when we apply the binomial theorem to $(0+a)^n$ and realize that it's asking us to compute $0^0$, the conclusion is not that the binomial theorem is true all of the time, and that therefore $0^0$ must have a certain value, the conclusion is that therefore the binomial theorem does not apply to $(a+b)^n$ when $a$ or $b$ is zero. If we want it to, we have to come up with a definition for $0^0$, and then re-prove the binomial theorem, making sure that our proof is consistent with our new definition.



And, in fact, if we define $0^0=1$, then it's possible to do that. Your reasoning almost amounts to a proof that given that definition, the binomial theorem works, but it's important to recognize that you're verifying that a given definition leads to a given theorem, you are not concluding that a definition is "true" as a consequence of a theorem.


elementary number theory - Showing $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$



Given that n is a positive integer show that $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$.




I'm thinking that I should be using the property of gcd that says if a and b are integers then gcd(a,b) = gcd(a+cb,b). So I can do things like decide that $\gcd(n^3 + 1, n^2 + 2) = \gcd((n^3+1) - n(n^2+2),n^2+2) = \gcd(1-2n,n^2+2)$ and then using Bezout's theorem I can get $\gcd(1-2n,n^2+2)= r(1-2n) + s(n^2 +2)$ and I can expand this to $r(1-2n) + s(n^2 +2) = r - 2rn + sn^2 + 2s$ However after some time of chasing this path using various substitutions and factorings I've gotten nowhere.



Can anybody provide a hint as to how I should be looking at this problem?


Answer



As you note, $\gcd(n^3+1,n^2+2) = \gcd(1-2n,n^2+2)$.



Now, continuing in that manner,
$$\begin{align*}
\gcd(1-2n, n^2+2) &= \gcd(2n-1,n^2+2)\\
&= \gcd(2n-1, n^2+2+2n-1)\\

&= \gcd(2n-1,n^2+2n+1)\\
&= \gcd(2n-1,(n+1)^2).
\end{align*}$$



Consider now $\gcd(2n-1,n+1)$. We have:
$$\begin{align*}
\gcd(2n-1,n+1) &= \gcd(n-2,n+1) \\
&= \gcd(n-2,n+1-(n-2))\\
&=\gcd(n-2,3)\\
&= 1\text{ or }3.

\end{align*}$$
Therefore, the gcd of $2n-1$ and $(n+1)^2$ is either $1$, $3$, or $9$. Hence the same is true of the original gcd.
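
A brute-force check that all three values really occur (illustration only):

```python
from math import gcd

values = {gcd(n**3 + 1, n**2 + 2) for n in range(1, 10000)}
print(sorted(values))   # [1, 3, 9]  (e.g. n = 1, n = 2 and n = 5 respectively)
```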


Proof for complex numbers and square root



Use the polar form of complex numbers to show that every complex number $z\neq0$ has two square roots.



I know the polar form is $z=r(\cos(\alpha)+i \sin(\alpha))$. I'm just not sure how to do this one.


Answer




De Moivre's formula is $(\cos\theta + i\sin\theta)^n = \cos(n\theta) + i\sin(n\theta)$ for any integer $n$. Therefore, if $w = s(\cos\theta + i\sin\theta)$ is a non-zero complex number such that $w^2 = z$, then



$$s^2(\cos(2\theta) + i\sin(2\theta)) = r(\cos\alpha + i\sin\alpha).$$



Therefore $s^2 = r$, so $s = \pm\sqrt{r}$, but $s > 0$, so $s = \sqrt{r}$.



Furthermore, $\cos(2\theta) = \cos\alpha$, so $2\theta = \alpha + 2k\pi$ where $k \in \mathbb{Z}$, or $2\theta = -\alpha + 2l\pi$ where $l \in \mathbb{Z}$.



Likewise, $\sin(2\theta) = \sin\alpha$, so $2\theta = \alpha + 2m\pi$ where $m \in \mathbb{Z}$, or $2\theta = \pi - \alpha + 2n\pi$ where $n \in \mathbb{Z}$.




The only possibility for $2\theta$ that will give both $\cos(2\theta) = \cos\alpha$ and $\sin(2\theta) = \sin\alpha$ is $2\theta = \alpha + 2k\pi$ for some $k \in \mathbb{Z}$. Therefore $\theta = \frac{\alpha}{2} + k\pi$ where there are no restrictions on $k$, other than being an integer. So the complete set of solutions to $w^2 = z$ is



$$\left\{\sqrt{r}\left(\cos\left(\frac{\alpha}{2}+k\pi\right) + i\sin\left(\frac{\alpha}{2} + k\pi\right)\right) \mid k \in \mathbb{Z}\right\}.$$



However, as $\cos$ and $\sin$ are $2\pi$-periodic, there are many different values of $k$ which gives the same complex number. If $k$ is even, then $k = 2t$ for some $t \in \mathbb{Z}$, so



\begin{align*}
\sqrt{r}\left(\cos\left(\frac{\alpha}{2}+k\pi\right) + i\sin\left(\frac{\alpha}{2} + k\pi\right)\right) &= \sqrt{r}\left(\cos\left(\frac{\alpha}{2}+2t\pi\right) + i\sin\left(\frac{\alpha}{2} + 2t\pi\right)\right)\\
&= \sqrt{r}\left(\cos\left(\frac{\alpha}{2}\right) + i\sin\left(\frac{\alpha}{2}\right)\right).
\end{align*}




If $k$ is odd, then $k = 2t+1$ for some $t \in \mathbb{Z}$, so



\begin{align*}
\sqrt{r}\left(\cos\left(\frac{\alpha}{2}+k\pi\right) + i\sin\left(\frac{\alpha}{2} + k\pi\right)\right) &= \sqrt{r}\left(\cos\left(\frac{\alpha}{2}+(2t+1)\pi\right) + i\sin\left(\frac{\alpha}{2} + (2t+1)\pi\right)\right)\\
&= \sqrt{r}\left(\cos\left(\frac{\alpha}{2}+\pi+2t\pi\right) + i\sin\left(\frac{\alpha}{2} + \pi + 2t\pi\right)\right)\\
&= \sqrt{r}\left(\cos\left(\frac{\alpha}{2} + \pi\right) + i\sin\left(\frac{\alpha}{2}+\pi\right)\right).
\end{align*}



Using the fact that $\cos(\beta + \pi) = -\cos\beta$ and $\sin(\beta + \pi) = -\sin\beta$ we can simplify this further:




$$\sqrt{r}\left(\cos\left(\frac{\alpha}{2} + \pi\right) + i\sin\left(\frac{\alpha}{2}+\pi\right)\right) = -\sqrt{r}\left(\cos\left(\frac{\alpha}{2}\right) + i\sin\left(\frac{\alpha}{2}\right)\right).$$



Therefore, the two solutions to $w^2 = z$ are $\pm\sqrt{r}\left(\cos\left(\frac{\alpha}{2}\right) + i\sin\left(\frac{\alpha}{2}\right)\right)$; note that these are distinct, so there are actually two different solutions.






In general, if you were trying to solve $w^n = z$ with $z \neq 0$, you could use the same method as above. When it came to determine which $k$ gave different complex numbers, you'd consider $k$ modulo $n$, i.e. write $k = nt + j$ where $j = 0, 1, \dots, n - 1$. You'd then find $n$ distinct solutions corresponding to the different values of $j$. Usually, you would have to stop here, as there won't be a corresponding trigonometric formula to simplify the complex numbers (as there was above for the phase shift $\pi$).


calculus - How to prove that a function is integrable?



How can I prove that the function $f(x) = \cos (\pi/x)$ when $0 < x \leq 1$, and $f(x)= 0$ when $x=0$ is integrable on $[0,1]$?



I know I can use the fact that the function is continuous on $(0,1]$ and apply Riemann's criterion for integrability, but I don't know exactly how to start, or whether this makes sense.


Answer



Here's one thing you could do: fix $\varepsilon > 0$, and partition the domain into $[0, \varepsilon / 4]$ and $[\varepsilon / 4, 1]$. On the latter interval, the function is continuous, and hence integrable.



Therefore, we may find a partition $\mathcal{P}$ of $[\varepsilon/4, 1]$ such that both the upper and lower sums, $U(f, \mathcal{P})$ and $L(f, \mathcal{P})$ respectively, on this partition are within $\varepsilon / 2$ of each other.




Extend $\mathcal{P}$ to a new partition $\mathcal{P}'$ on $[0, 1]$, by taking $\mathcal{P}$ and including $[0, \varepsilon / 4]$. Note that the function is bounded by $1$ and $-1$, and in fact, attains these extreme values infinitely often on $[0, \varepsilon / 4]$. Therefore, the upper sum $U(f, \mathcal{P}')$ will just be $U(f, \mathcal{P}) + \varepsilon/4$. Similarly, $L(f, \mathcal{P}') = L(f, \mathcal{P}) - \varepsilon/4$.



Hence, for any $\varepsilon > 0$, we have constructed a partition $\mathcal{P}'$ on $[0, 1]$ such that
$$U(f, \mathcal{P}') - L(f, \mathcal{P}') = U(f, \mathcal{P}) - L(f, \mathcal{P}) + \varepsilon / 2 < \varepsilon.$$
Thus, the function must be integrable.


real analysis - prove that a function $f$ is continuous at $x_0$ iff $f(x_{0}^{-}) = f(x_{0}^{+}) = f(x_0)$


prove that a function $f$ is continuous at $x_0$ iff $$\lim\limits_{x \rightarrow x_{0}^{-}} f(x) = \lim\limits_{x \rightarrow x_{0}^{+}} f(x) = f(x_0)$$





By definition, a function $f$ is said to be continuous at $x=x_0$ if $\lim\limits_{x \rightarrow x_{0}} f(x) = f(x_0)$



It follows that for each $\epsilon>0$ there is a $\delta>0$ s.t. $$| f(x) - f(x_0)|< \epsilon$$ whenever $$|x-x_0|<\delta$$



Assuming $$|x-x_0|<\delta$$ holds, we have
$$-\delta < x -x_0 < \delta$$
$$ x_0 - \delta < x < x_0 + \delta $$




Therefore, to the right of $x_0$ we have $x_0 \leq x < x_0 + \delta$, and for such $x$, $| f(x) - f(x_0)|< \epsilon$.
This shows that the function $f$ is continuous from the right at $x_0$.
Therefore:
$$\lim\limits_{x \rightarrow x_{0}^{+}} f(x) = \lim\limits_{x \rightarrow x_0}f(x)$$



Similarly, to the left of $x_0$ we have $x_0 - \delta < x \leq x_0$, and for such $x$, $| f(x) - f(x_0)|< \epsilon$.
This shows that the function $f$ is continuous from the left at $x_0$.
Therefore:
$$\lim\limits_{x \rightarrow x_{0}^{-}} f(x) = \lim\limits_{x \rightarrow x_0}f(x)$$




We can then conclude that if $f$ is continuous at $x_0$, then $f$ has to be continuous from both the left and the right at $x_0$. It follows that
$$\lim\limits_{x \rightarrow x_{0}^{-}} f(x) = \lim\limits_{x \rightarrow x_{0}^{+}} f(x) = \lim\limits_{x \rightarrow x_0}f(x)$$



Is this correct? Could anything have been done differently? Any input is much appreciated.

elementary set theory - What is the Cardinality of the Nameable Numbers?



Having just finished "Meta Math!" (Chaitin), I came across an interesting observation on infinite sets that I hadn't seen before. If I'm correct (and please let me know if I'm not):



1] There are infinite sets that we are used to, like the whole numbers, integers, and rational numbers that, since they can be put in one-to-one correspondence with each other, have the same 'size' or cardinality, $\aleph_0$



2] The real numbers cannot be put in one-to-one correspondence with these sets; they have the cardinality of the power set of the naturals, $| \mathbb{R} | = 2^{\aleph_0}$.




3] If the continuum hypothesis holds we get that $| \mathbb{R} |=\aleph_1$, i.e. there is no infinity of intermediate cardinality between that of the whole numbers and that of the reals.



Now comes the part where I get a bit fuzzy. There are 'computable reals', numbers like $\pi$ that can be encoded in a finite-length program. These computable reals must be of size $\aleph_0$ as well, since you can encode each of them with a Turing machine and match each code to an integer.



There are uncomputable reals, such as Chaitin's $\Omega$ which, although we can't write down all the digits, we can at least specify what the number means. Chaitin calls these numbers 'nameable'; let's call them $\mathbb{A}$. He asserts that the probability that you'll pick one at random is zero. This implies (I think) that $|\mathbb{A}|=\aleph_0$, but it also implies that there is a mapping of the integers to the uncomputable, yet nameable, numbers!



What is the cardinality of numbers like $\Omega$?


Answer



I forgot what this term was called, but I remember now: the numbers you're looking for are called definable numbers. Instead of Turing machines, we use first-order formulas, of which there are countably many since the collection of all finite strings of symbols from a finite alphabet is countable (exercise).



Saturday 23 March 2013

linear algebra - G graph of diameter d implies an adjacency matrix with at least d+1 distinct eigenvalues!




In reading the well-known book on algebraic graph theory I came across a theorem that can be stated in the following way:



If $G$ is a graph of diameter $d$ then the adjacency matrix of $G$ has at least $d+1$ distinct eigenvalues.



The proof shows that if $A$ is the adjacency matrix of a graph $G$ that has diameter $d$, then $I,A,A^2, \ldots, A^d$ are linearly independent. And then the main theorem somehow follows from here.



I assume what is used is the fact that if $A$ is a symmetric matrix and $I,A,\ldots,A^d$ are linearly independent, then $A$ has at least $d+1$ distinct eigenvalues?



Am I right? If yes, how can one quickly prove this fact?



Answer



Yes, you are: If $A$ is a symmetric matrix then $A$ is diagonalizable (semi-simple). Hence, the minimal polynomial of $A$, $m_A(x)$ consists of linear factors of multiplicity $1$ each, i.e. $m_A(x)=(x-\lambda_1)\cdot...\cdot(x-\lambda_k)$, where $\lambda_1,...,\lambda_k$ are all the distinct eigenvalues of $A$.
Now, you know that $I,A,\ldots,A^d$ are linearly independent, which implies that there is no nonzero polynomial of degree $\leq d$ annihilating $A$ (applying any such polynomial to $A$ gives you a linear combination of $I,A,\ldots,A^d$). Since $m_A(x)$ annihilates $A$, it follows that $d<\deg(m_A(x))=k$. Hence $k\geq d+1$.
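
A quick numerical illustration of the theorem itself (my own sketch, using the path graph on $5$ vertices, which has diameter $4$):

```python
import numpy as np

# Adjacency matrix of the path graph on 5 vertices, whose diameter is 4.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1

eigs = np.linalg.eigvalsh(A)                 # A is symmetric, so eigvalsh is appropriate
distinct = len(set(np.round(eigs, 8)))       # count numerically distinct eigenvalues
print(eigs)
print("distinct eigenvalues:", distinct)     # expect at least 4 + 1 = 5
```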


trigonometry - Sine of the sum of angles

I need to prove the formula for the sine of the sum $$\sin(\alpha+\beta) = \sin(\alpha)\cos(\beta) + \sin(\beta)\cos(\alpha)$$
I already know how to prove it when $\alpha,\beta\geq 0$ and $\alpha+\beta <\pi/2$. How can I extend it to any pair of angles? The definition of sine and cosine that I am using is: the $y$-coordinate and the $x$-coordinate, respectively, of the point where the angle's ray meets the circle of radius 1; but I can't use analytic geometry. I also can't use complex number multiplication. Only relations like $\sin(\alpha +\pi/2) = \cos(\alpha)$.

Friday 22 March 2013

calculus - show that $\lim _{x\rightarrow 0}\sin\frac{1}{x}$ doesn't exist by using the Squeeze Theorem



Exercise 4 on page 92 of Guidorizzi's book Cálculo (in Portuguese) asks us to show that $\lim _{x\rightarrow 0}\sin\frac{1}{x}$ doesn't exist. I know how to do that by using two different sequences, but that exercise is in the Squeeze Theorem section. So, is it possible to show that $\lim _{x\rightarrow 0}\sin\frac{1}{x}$ doesn't exist by using the Squeeze Theorem?


Answer



The usual squeeze theorem only says that a limit does exist, and it is not phrased as an "if and only if", so it can't literally be used to test for nonconvergence.




However, we can do something "squeeze theorem like". Given a limit $\lim_{x \to c} f(x)$, we can come up with "universal" functions $g(x)$ and $h(x)$, which take values in $[-\infty, \infty]$, such that for all $x$, $g(x) \leq f(x) \leq h(x)$, and such that the limit exists if and only if $\lim_{x \to c} g(x)$ and $\lim_{x \to c}h(x)$ both exist and are equal.



To make the notation easier, suppose $c = 0$. Both $g$ and $h$ will be even functions, so we only need to define them for non-negative $x$. We let $g(0) = -\infty$ and for $x > 0$ we let $$g(x) = \inf \{ f(y) : y \in [-x,0)\cup(0,x]\}$$



Similarly, we let $h(0) = \infty$ and for $x > 0$ we let
$$h(x) = \sup \{ f(y) : y \in [-x,0)\cup(0,x]\}$$



Now, as we move towards $0$, $h(x)$ will decrease and $g(x)$ will increase, and so they will both have limits at $0$. If these limits are equal, then $g$ and $h$ can be used in the squeeze theorem to show that the limit of $f(x)$ exists. On the other hand, if the limits are not equal, then an argument directly using the definition of limits can be used to show the limit of $f(x)$ does not exist.




For $f(x) = \sin(1/x)$, we have that $g(x) = -1$ for all $x > 0$ and $h(x) = 1$ for all $x > 0$, so this method can be used to show that the $\lim_{x \to 0} \sin(1/x)$ does not exist.
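
Here is a small numerical sketch of these envelope functions for $f(x)=\sin(1/x)$ (my own illustration; the suprema and infima are only approximated by sampling):

```python
import math

def f(y):
    return math.sin(1.0 / y)

def envelopes(x, samples=20000):
    """Approximate g(x) = inf f and h(x) = sup f over [-x, 0) U (0, x] by sampling."""
    pts = [x * (k + 1) / samples for k in range(samples)]   # points in (0, x], never 0 itself
    vals = [f(y) for y in pts] + [f(-y) for y in pts]
    return min(vals), max(vals)

for x in (1.0, 0.1, 0.01, 0.001):
    g, h = envelopes(x)
    print(x, g, h)   # g stays near -1 and h near +1, so their limits at 0 differ
```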


Negative Algebra Formula



This is probably quite simple for you math geniuses, but I'm struggling to understand the following expression. (I've only just started an introductory course in mathematics and I'm keen to learn.) :)




I'm getting 2 answers



On my scientific Calculator $= 5.4$



On this website $= -5985$



My formula is below. First I would like to know which answer is correct, or an explanation as to why they can both be correct. If someone could then break it down for me so I can understand where I have gone wrong.



$$10+6\cdot-\frac{8}{2(-25)}\cdot(-10)+5=$$



Answer



To be honest, the mistake seems to lie in the interpretation of your parentheses, which is not clear at all. The website you are referring to interprets your input as



$$10+6\cdot-\frac{8}{2}(-25)\cdot(-10)+5=-5985$$



where on the other hand you are asking for the evaluation of



$$10+6\cdot-\frac{8}{2(-25)}\cdot(-10)+5=5.4$$



The simple mistake is that the online calculator you used did not know that the $(-25)$ belongs in the denominator. Therefore, use more parentheses to make clear what you want to be computed.
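
A minimal sketch of the two readings in code (my own illustration):

```python
# Reading 1: the (-25) is part of the denominator, as intended by the asker.
intended = 10 + 6 * -(8 / (2 * (-25))) * (-10) + 5
print(intended)      # 5.4, up to floating-point rounding

# Reading 2: the website ends the fraction at 8/2 and multiplies by (-25) afterwards.
website = 10 + 6 * -(8 / 2) * (-25) * (-10) + 5
print(website)       # -5985.0
```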



Thursday 21 March 2013

calculus - A problem about functional equations



We want to find all continuous functions $f:\mathbb{R}\to\mathbb{R}$ that satisfy the equation $f(x^2+1/4)=f(x)$ for all real $x$.
Of course (if I am right) constant functions satisfy the equation, while many others, such as non-constant polynomials, rational functions, etc., do not; but how to find all of them?
I need your hints or solutions. With my regards.


Answer



From $f(-x)=f((-x)^2+\frac14)=f(x)$ we see that $f$ is even and it suffices to consider $x\ge 0$.
For $x_1\in[0,\frac12)$, the sequence defined by $x_{n+1}=x_n^2+\frac14$ remains within that interval and is strictly increasing, hence converges to the fixed point $\frac12$. By the functional equation and continuity we conclude $f(x_1)=f(x_2)=f(x_3)=\ldots=f(\frac12)$.
For $x_1>\frac12$, the sequence defined by $x_{n+1}=\sqrt{x_n-\frac14}$ remains in $(\frac12,\infty)$, is strictly decreasing, hence converges to the fixed point $\frac12$. Again we conclude $f(x_1)=f(x_2)=f(x_3)=\ldots=f(\frac12)$. Hence $f(x)=f(\frac12)$ for every real $x$, i.e. $f$ is constant.
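
A short numerical sketch of the two iterations used above (my own illustration; the starting points $0.1$ and $3$ are arbitrary):

```python
import math

def iterate(x, step, n=500):
    for _ in range(n):
        x = step(x)
    return x

up = iterate(0.1, lambda x: x * x + 0.25)            # start in [0, 1/2): increases towards 1/2
down = iterate(3.0, lambda x: math.sqrt(x - 0.25))   # start above 1/2: decreases towards 1/2

# Both approach 0.5 (slowly, since the fixed point is parabolic);
# continuity of f then forces f(0.1) = f(3.0) = f(1/2).
print(up, down)
```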


linear algebra - Generalizing the norm and trace of finite extensions over finite fields.




I'm currently reading through Ireland and Rosen's A Classical Introduction to Modern Number Theory, and I'm working on proving that a later definition of trace and norm of arbitrary finite algebraic extensions generalizes an earlier definition for the case of finite fields.



The early definition from Chapter 11.2 goes as follows. Let $F$ have $q$ elements and suppose $E \supset F$ has $q^n$ elements (that is, $[E: F] = n$). Let $a \in E$. The trace of $a$ is $\text{tr}(a) = a + a^q + a^{q^2} + \dots + a^{q^{n-1}}$. The norm of $a$ is $N(a) = a \cdot a^q \cdot a^{q^2} \cdots a^{q^{n-1}}$.



The later definition in Chapter 12.1 goes as follows. Now just suppose $E \supseteq F$ is just an algebraic extension with degree $n$ with no further assumption on the base field $F$. Let $\{\beta_1, \beta_2, \dots, \beta_n \}$ be a basis for $E$ over $F$. Then $a \beta_i = \sum_j a_{ij} \beta_j$ for some constants $a_{ij} \in F$ determined by just $i$ and $j$. Thus, we can put the $a_{ij}'s$ into a matrix, $[a_{ij}]$. The norm of $a$ is defined as determinant of $[a_{ij}]$ and the trace of $a$ is just the trace of $[a_{ij}]$, which is $a_{11} + a_{22} + \dots + a_{nn}$.



How can I show the second definition generalizes the first? Going back to the assumption that $F$ and $E$ are finite fields, I can write a basis for $E$ over $F$ as $\{1, \beta, \beta^2, \dots, \beta^{n-1} \}$ where $\beta$ is a root of some monic irreducible polynomial of degree $n$ with coefficients in $F$. Write $a = \sum\limits_{j = 0}^{n-1} a_j \beta^j$ with $a_j \in F$. To determine the trace, we would have to find the coefficient of $\beta^i$ in $a \beta^i = \sum\limits_{j = 0}^{n-1} a_j\beta^{i+j}$, for all $i$. We know one of the terms in the coefficient is $a_0$ since when $j = 0$, the term of the sum is $a_0 \beta^i$. However, when $i + j > n-1$, we would have to rewrite $\beta^{i+j}$ in terms of lower powers via the minimal polynomial of $\beta$, which is where things can get really messy. This is where I'm stuck. I haven't begun working on the determinant.


Answer



If $F\subseteq E$, then we can define the map $E\to End_F(E)\cong M_n(F)$ by sending $\alpha\mapsto T_\alpha$, where $T_\alpha(\beta)=\alpha \beta$.




This is a homomorphism, and since $E$ is a field, it is also injective. Consider the characteristic polynomial $p_\alpha$ of the matrix $T_\alpha$ - then the trace and determinant (up to a sign) are the coefficients of $x^{n-1}$ and $1$.



You now have that $p_\alpha \in F[x]$ has $T_\alpha$ as a root, and therefore has $\alpha$ as a root (using the injective homomorphism $\alpha \mapsto T_\alpha$), namely $p_\alpha (\alpha)=0$. If $\sigma$ is any Galois automorphism of $E$ over $F$ (or, for a general base field, any embedding $\sigma :E\to F^{alg}$ which fixes $F$), then
$$0=\sigma (0)=\sigma (p_\alpha(\alpha))=\sigma(p_\alpha)(\sigma(\alpha))=p_\alpha(\sigma(\alpha))$$ so you get that $\sigma(\alpha)$ is also a root of $p_\alpha$ (this argument is just the usual argument that if a complex number is a root of a real polynomial, then its complex conjugate is also a root). If you assume that all the $\sigma(\alpha)$ are distinct, then you get all the $n$ roots so that $p_\alpha(x)=\prod_\sigma(x-\sigma(\alpha))$, so that the trace and determinant are $\sum \sigma(\alpha)$ and $ \prod\sigma(\alpha)$ (EDIT: removed the $\pm 1$). For finite fields, the Galois group is generated by the Frobenius automorphism $\alpha\mapsto \alpha^p$ which proves the equality you wanted.



If the $\sigma(\alpha)$ are not distinct, you first find the trace and determinant for $F[\alpha]:F$, and then the trace and determinant for $E:F[\alpha]$; composing the two gives you what you need. In this second case, multiplying by $\alpha$ is just multiplication by a scalar from the base field, so the trace is just $[E:F[\alpha]]\cdot \alpha$ and the determinant is $\alpha^{[E:F[\alpha]]}$.
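
To make the comparison concrete, here is a tiny hand-rolled example (my own sketch, not from Ireland and Rosen): take $E=\mathbb{F}_4=\mathbb{F}_2[\alpha]$ with $\alpha^2=\alpha+1$, and compare the matrix trace and determinant of multiplication by $\alpha$ with $\alpha+\alpha^2$ and $\alpha\cdot\alpha^2$.

```python
# Elements of F_4 are pairs (c0, c1) meaning c0 + c1*alpha, with alpha^2 = alpha + 1, coefficients mod 2.

def mul(a, b):
    (a0, a1), (b0, b1) = a, b
    # (a0 + a1*x)(b0 + b1*x) = a0*b0 + (a0*b1 + a1*b0)*x + a1*b1*x^2, then reduce x^2 -> x + 1
    return ((a0 * b0 + a1 * b1) % 2, (a0 * b1 + a1 * b0 + a1 * b1) % 2)

alpha = (0, 1)

# Matrix of "multiply by alpha" in the basis {1, alpha}: its columns are alpha*1 and alpha*alpha.
col1, col2 = mul(alpha, (1, 0)), mul(alpha, (0, 1))
matrix_trace = (col1[0] + col2[1]) % 2
matrix_det = (col1[0] * col2[1] - col1[1] * col2[0]) % 2

# Chapter 11.2 definitions with q = 2: tr(a) = a + a^q, N(a) = a * a^q.
alpha_sq = mul(alpha, alpha)
field_trace = ((alpha[0] + alpha_sq[0]) % 2, (alpha[1] + alpha_sq[1]) % 2)
field_norm = mul(alpha, alpha_sq)

print(matrix_trace, matrix_det)    # 1 1
print(field_trace, field_norm)     # (1, 0) (1, 0), i.e. both are the field element 1, as expected
```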


How to prove if a function is bijective?



I am having problems being able to formally demonstrate when a function is bijective (and therefore, surjective and injective). Here's an example:



How do I prove that $g(x)$ is bijective?




\begin{align}
f &: \mathbb R \to\mathbb R \\
g &: \mathbb R \to\mathbb R \\
g(x) &= 2f(x) + 3
\end{align}



However, I fear I don't really know how to do such a proof. I realize that the above example involves a composition (which makes things slightly harder?). In any case, I don't understand how to prove such a statement (be it a composition or not).



For injective, I believe I need to prove that different elements of the codomain have different preimages in the domain. Alright, but, well, how?




As for surjective, I think I have to prove that all the elements of the codomain have one, and only one preimage in the domain, right? I don't know how to prove that either!



EDIT



f is a bijection. Sorry I forgot to say that.


Answer



The way to verify something like that is to check the definitions one by one and see if $g(x)$ satisfies the needed properties.



Recall that $F\colon A\to B$ is a bijection if and only if $F$ is:





  1. injective: $F(x)=F(y)\implies x=y$, and

  2. surjective: for all $b\in B$ there is some $a\in A$ such that $F(a)=b$.



Assuming that $R$ stands for the real numbers, we check.



Is $g$ injective?




Take $x,y\in R$ and assume that $g(x)=g(y)$. Therefore $2f(x)+3=2f(y)+3$. We can cancel out the $3$ and divide by $2$, then we get $f(x)=f(y)$. Since $f$ is a bijection, then it is injective, and we have that $x=y$.



Is $g$ surjective?



Take some $y\in R$; we want to find $x$ such that $y=g(x)$, that is, $y=2f(x)+3$. Subtract $3$ and divide by $2$: we get $\frac{y-3}2=f(x)$. As before, if $f$ is surjective then we are about done: simply denote $w=\frac{y-3}2$; since $f$ is surjective there is some $x$ such that $f(x)=w$. Show now that $g(x)=y$, as wanted.






Alternatively, you can use theorems. What sort of theorems? The composition of bijections is a bijection. If $f$ is a bijection, show that $h_1(x)=2x$ is a bijection, and show that $h_2(x)=x+3$ is also a bijection. Now we have that $g=h_2\circ h_1\circ f$ and is therefore a bijection.




Of course this is again under the assumption that $f$ is a bijection.
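
If a concrete instance helps, here is a small sketch with a specific choice of bijection (my own; the question's $f$ is arbitrary): take $f(x)=x^3$, so $g(x)=2x^3+3$, and check bijectivity by exhibiting an inverse.

```python
def f(x):            # a concrete bijection from R to R, chosen only for illustration
    return x ** 3

def g(x):
    return 2 * f(x) + 3

def g_inverse(y):    # undo the +3, then the *2, then f: x = cuberoot((y - 3) / 2)
    t = (y - 3) / 2
    return abs(t) ** (1 / 3) * (1 if t >= 0 else -1)

for y in (-10.0, 0.0, 3.0, 42.0):
    x = g_inverse(y)
    print(y, x, g(x))   # g(g_inverse(y)) recovers y, exhibiting a preimage for every y
```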


calculus - Compute $\lim\limits_{n\to\infty}\sqrt[n]{\log\left|1+\left(\frac{1}{n\cdot\log n}\right)^k\right|}$.



Compute $$ \lim\limits_{n\to\infty}\left(\sqrt[n]{\log\left|1+\left(\dfrac{1}{n\cdot\log\left(n\right)}\right)^k\right|}\right). $$
What I have: $$ \forall\ x\geq 0\ :\ x- \frac{x^2}{2}\leq \log(1+x)\leq x. $$
Apply to get that the limit equals $1$ for any real number $k$.



Is this correct? Are there any other proofs?


Answer




Yes, it works; here's another proof using a slightly more sophisticated tool (unnecessary in this case, but sometimes more useful).



By Stolz–Cesàro, if $ (x_n) $ is a positive sequence and
$$ \lim_n \dfrac{x_{n+1}}{x_n} = l $$
then
$$ \lim_n \sqrt[n]{x_n} = l.$$



Taking as $(x_n)$ the sequence you defined, an easy calculation shows that $$ \dfrac{x_{n+1}}{x_n} \rightarrow 1,$$
and the claim follows.
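
A quick numerical check of both criteria on this particular sequence (my own sketch; the exponent $k=3$ is chosen arbitrarily):

```python
import math

def x(n, k=3):
    return math.log(1 + (1 / (n * math.log(n))) ** k)

for n in (10, 100, 1000, 10000):
    print(n, x(n + 1) / x(n), x(n) ** (1 / n))   # the ratio and the n-th root both drift towards 1
```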


Wednesday 20 March 2013

functions - Is 'every exponential grows faster than every polynomial?' always true?

My algorithm textbook has a theorem that says



'For every $r > 1$ and every $d > 0$, we have $n^d = O(r^n)$.'



However, it does not provide proof.



Of course I know that an exponential grows faster than a polynomial in most cases, but is it true in all cases?




What if the polynomial function is something like $n^{100^{100}}$ and exponential is $2^n$? Will the latter outgrow the former at some point?
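
Not a proof, but one way to see it numerically without overflow is to compare logarithms: $n^d \le r^n$ exactly when $d\ln n \le n\ln r$, and the right-hand side eventually wins because $n/\ln n \to \infty$. A small sketch (my own; it uses modest values of $d$, since $d = 100^{100}$ is far beyond floating point, but the same inequality argument applies to any $d$):

```python
import math

def overtake_point(d, r):
    """First n past d/ln(r) with d*ln(n) <= n*ln(r); beyond it, r**n stays ahead of n**d."""
    n = max(2, int(d / math.log(r)) + 1)   # start past the worst case of n*ln(r) - d*ln(n)
    while d * math.log(n) > n * math.log(r):
        n += 1
    return n

for d in (2, 10, 100, 1000):
    print(d, overtake_point(d, 2.0))       # where 2**n overtakes n**d for good
```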

calculus - How can I solve this crazy limit? $\lim _{x\rightarrow0}\frac{1-\cos\left(\frac{1-\cos x \cos 2x}{x^2}-\frac {5}{2}\right)\cos2x}{x^2} $

Firstly, I think this can be done with equivalent infinitesimals, but it seems very complicated. I'm not brave enough to apply L'Hôpital's rule to this question, and I don't think trig formulas can simplify it.



$$\lim _{x\rightarrow0}\frac{1-\cos\left(\frac{1-\cos x \cos 2x}{x^2}-\frac {5}{2}\right)\cos2x}{x^2} $$
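
Not an answer, but a high-precision numerical evaluation may help you guess the target before choosing a method (my own sketch; extra precision is used because $1-\cos x\cos 2x$ suffers catastrophic cancellation for small $x$). The values come out close to $2$:

```python
from mpmath import mp, mpf, cos

mp.dps = 60   # lots of digits, since 1 - cos(x)*cos(2x) loses precision for tiny x

def expr(x):
    x = mpf(x)
    inner = (1 - cos(x) * cos(2 * x)) / x**2 - mpf(5) / 2
    return (1 - cos(inner) * cos(2 * x)) / x**2

for x in ("1e-1", "1e-2", "1e-3", "1e-4"):
    print(x, expr(x))   # the values approach 2 as x shrinks
```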

sequences and series - Finding $\lim_{n\to\infty}(\frac{1}{\sqrt{n^2+1}} + \frac{1}{\sqrt{n^2+2}} + ... + \frac{1}{\sqrt{n^2+n}})$



I'm trying to find $\lim_{n\to\infty}(\frac{1}{\sqrt{n^2+1}} + \frac{1}{\sqrt{n^2+2}} + ... + \frac{1}{\sqrt{n^2+n}})$.




  • I tried to use the squeeze theorem, but failed.

  • I tried to use a sequence defined recursively: $a_{n+1} = {a_n} + \frac{1}{\sqrt{(n+1)^2 +n+1}}$. It is a monotonically increasing sequence, and for every $n$, $a_n > 0$. I also defined $f(x) = \frac{1}{\sqrt{(x+1)^2 +x+1}}$, so $a_{n+1} = a_n + f(n)$. But I'm stuck.




How can I calculate it?


Answer



It looks squeezable.



\begin{align}
\frac{n}{\sqrt{n^2+n}} \le \sum_{k=1}^n\frac{1}{\sqrt{n^2+k}} \le \frac{n}{\sqrt{n^2+1}}
\\
\\
\frac{1}{\sqrt{1+\frac{1}{n}}} \le \sum_{k=1}^n\frac{1}{\sqrt{n^2+k}} \le \frac{1}{\sqrt{1+\frac{1}{n^2}}}
\end{align}

Both bounds tend to $1$ as $n \to \infty$, so by the squeeze theorem the limit is $1$.
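
A quick numerical check of the squeeze (my own sketch):

```python
import math

def s(n):
    return sum(1 / math.sqrt(n * n + k) for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    lower = n / math.sqrt(n * n + n)
    upper = n / math.sqrt(n * n + 1)
    print(n, lower, s(n), upper)   # the two bounds, and the sum between them, all approach 1
```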



trigonometry - Trigonometric Arithmetic Progression



If $a$, $b$, $c$ are in arithmetic progression, prove that

$$\cos A \cot\frac{A}{2} \qquad \cos B \cot \frac{B}{2} \qquad \cos C \cot\frac{C}{2}$$
are in arithmetic progression, too.



Here, $a$, $b$, $c$ represent the sides of a triangle and $A$, $B$, $C$ are the opposite angles of the triangle.


Answer



For better clarity, I'm adding another proof that $\displaystyle\cot\frac A2,\cot\frac B2,\cot\frac C2$ are also in AP if $a,b,c$ are so.



We have $\displaystyle 0<\frac C2<\frac\pi2\implies\tan\frac C2>0$.



So, $\displaystyle\cot\frac C2=\frac1{\tan\frac C2}=+\sqrt{\frac{1+\cos C}{1-\cos C}}$




Using Law of Cosines and on simplification, $\displaystyle\cot\frac C2=+\sqrt{\frac{s(s-c)}{(s-b)(s-a)}}$ where $2s=a+b+c$



$\displaystyle\cot\frac A2,\cot\frac B2,\cot\frac C2$ will be in AP



$\displaystyle\iff\sqrt{\frac{s(s-c)}{(s-b)(s-a)}}+\sqrt{\frac{s(s-a)}{(s-b)(s-c)}}=2\sqrt{\frac{s(s-b)}{(s-c)(s-a)}}$



$\displaystyle\iff s-a+s-c=2(s-b)\iff a+c=2b$
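
A numerical spot check of the claim (my own sketch; the sides $a,b,c=3,4,5$ form an arithmetic progression and a valid triangle):

```python
import math

a, b, c = 3.0, 4.0, 5.0                        # side lengths in arithmetic progression
A = math.acos((b*b + c*c - a*a) / (2*b*c))     # law of cosines: angle opposite side a
B = math.acos((a*a + c*c - b*b) / (2*a*c))
C = math.acos((a*a + b*b - c*c) / (2*a*b))

cot = lambda t: 1 / math.tan(t)
terms = [cot(A / 2), cot(B / 2), cot(C / 2)]
print(terms, terms[0] + terms[2] - 2 * terms[1])   # the last number is ~0, confirming the AP
```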


Tuesday 19 March 2013

calculus - Is $\sum \sin^2(k)/k$ Convergent?




A student recently used the series $\displaystyle\sum_{k=1}^\infty\frac{\sin^2k}{k}$ as an example of a divergent series whose terms tend to $0$. However, I'm having trouble convincing myself that this series does in fact diverge. Anyone have any ideas?


Answer



The series diverges. Notice



$$\begin{align}
\sin^2(k) + \sin^2(k+1) &= \frac12(1-\cos(2k)) + \frac12(1-\cos(2k+2))\\
&= 1 - \cos(1)\cos(2k+1)\\
&\ge 1 - \cos(1)
\end{align}$$



We have $$
\sum_{k=1}^{2N} \frac{\sin^2(k)}{k} = \sum_{k=1}^N\left(\frac{\sin^2(2k-1)}{2k-1} + \frac{\sin^2(2k)}{2k}\right) \ge \frac{1-\cos(1)}{2}\sum_{k=1}^N\frac{1}{k}
$$
which diverges to $\infty$ as $N \to \infty$.
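
A numerical illustration of this lower bound (my own sketch):

```python
import math

def partial(N):
    """Partial sum of sin(k)^2 / k for k = 1, ..., 2N."""
    return sum(math.sin(k) ** 2 / k for k in range(1, 2 * N + 1))

for N in (10, 100, 1000, 10000):
    bound = (1 - math.cos(1)) / 2 * sum(1 / k for k in range(1, N + 1))
    print(N, partial(N), bound)   # the partial sums stay above the harmonic-type bound and keep growing
```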


real analysis - Applying root test to sequence $\frac{1}{2} + \frac{1}{3} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots$

The following is an example from Principles of Mathematical Analysis, by Rudin. I've been trying to understand the example but haven't quite grasped it, because it seems I can solve it differently.



Given the following series: $\displaystyle \frac{1}{2} + \frac{1}{3} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{2^3} + \frac{1}{3^3} + \cdots$



Using the Ratio Test:




$$\lim \inf_{n \to \infty} \frac{a_{n+1}}{a_{n}} = \lim_{n \to \infty}
(\frac{2}{3})^n = 0$$



$$\lim \sup_{n \to \infty} \frac{a_{n+1}}{a_n} = \lim_{n \to \infty}
\frac{1}{2}(\frac{3}{2})^n = +\infty$$



Using the Root Test:



$$\lim \inf_{n \to \infty} (a_n)^{\frac{1}{n}} = \lim_{n \to \infty} (\frac{1}{3^n})^{\frac{1}{2n}} = \frac{1}{\sqrt{3}}$$




$$\lim \sup_{n \to \infty} (a_{n})^{\frac{1}{n}} = \lim_{n \to \infty} (\frac{1}{2^n})^{\frac{1}{2n}} = \frac{1}{\sqrt{2}}$$



What I don't understand is how to find the $\lim \sup$ and $\lim \inf$ for the ratio test. I also don't understand why for the root test, we are looking at the $2n^\text{th}$ root. Where does this 2 come from? Furthermore, are we looking at $a_n$ as alternating between $\frac{1}{2^m}$ and $\frac{1}{3^m}$ or is $a_{n}$ actually $\frac{1}{2^m} + \frac{1}{3^m}$?



As a side note, I do know how to solve this question if asked whether or not the series converges. I simply don't understand how the book went about solving it.
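
Not an answer, but seeing the numbers may help: here is a small sketch of the ratios and $n$-th roots for this particular series (my own illustration; the terms alternate $a_{2m-1}=1/2^m$ and $a_{2m}=1/3^m$):

```python
terms = []
for m in range(1, 16):
    terms += [1 / 2 ** m, 1 / 3 ** m]      # a_1, a_2, a_3, a_4, ... = 1/2, 1/3, 1/4, 1/9, ...

ratios = [terms[n + 1] / terms[n] for n in range(len(terms) - 1)]
roots = [terms[n] ** (1 / (n + 1)) for n in range(len(terms))]   # n + 1: the series index is 1-based

print(min(ratios), max(ratios))      # the ratios spread out towards 0 and towards infinity
print(min(roots[1:]), max(roots))    # past the first term, the n-th roots stay between 1/sqrt(3) and 1/sqrt(2)
```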

real analysis - Application of Fundamental Theorem of Calculus, splitting integrals

Looking for some help and explanation on some questions that require FTC.
I will be using the following versions of FTC. I have made an attempt but they do not correspond to the solutions I was given out so am looking for feedback on why my method is incorrect.



FTC 1



Let $f$ be regulated on the interval $[a,b]$ and continuous at $c \in (a,b)$. Then $F(x) = \int^x_a f$ is differentiable at $c$, with $F'(c) = f(c)$.



FTC 2




Let $f: [a,b] \rightarrow \mathbb{R}$ be continuous and let $g: [a,b] \rightarrow \mathbb{R}$ be differentiable with $g'(x) = f(x) \forall x \in [a,b]$. Then $\int^b_a f = g(b) - g(a)$



Questions



1) Let $I = \int^b_a \frac{1}{\log x}dx$
Find: $\frac{dI}{da}, \frac{dI}{db}$



So my method is to let $F$ be such that $F' = \frac{1}{\log x}$, and then, using FTC 2, I get $I = F(b) - F(a)$. After differentiation I get $-\frac{1}{\log a}$ and $\frac{1}{\log b}$ respectively.



Now the solutions suggest that I split the integral up as $\int^b_a = \int^b_2 + \int^2_a = \int^b_2 - \int^a_2$, and then differentiate from there. Am I missing something about the conditions? Do I need to rearrange each expression into the indefinite-integral form?




2) Same as above, except now $I = \int^{b(x)}_{a(x)} \frac{1}{\log x} dx$ and I need to find $I'(x)$; here $a(x)$ and $b(x)$ are functions with values in $[2, \infty)$.



With $b(x)$ and $a(x)$ as the limits, I can't seem to rearrange it into the indefinite-integral form. From the solutions, they still seem to recommend that I write
$I(x) = J(b(x)) - J(a(x))$ where $J(u) = \int^u_2 \frac{1}{\log t} dt$, and then differentiate with the chain rule. I don't understand why this is necessary, as to me an unrigorous solution that uses the same idea, $\frac{d}{dx}\left(F(b(x)) - F(a(x))\right)$, does the same thing; so why do we need to split the integral like this?



3) Can someone give me an example of how they would split $\int^{1+x^2}_{x^2}f(t) dt, x \in \mathbb{R}$



4) Where do I need to comment on differentiability / continuity? Do I need to prove that the functions I use here are continuous before I use the theorem?

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital's rule I easily get $$ \lim_{h\rightarrow 0}...