Sunday 31 August 2014

linear algebra - Free Variables in a Reduced Matrix

What is the relation between Column space, Null space, and Span to free variables? In other words if I'm given a matrix and I get it into echelon form or reduced echelon form, and there are free variables left over, what does that tell me about Column space, Null space, and Span?

calculus - How to integrate $\cos^2 x$?





It seems like I am stuck on such a simple problem:



How do I find the antiderivative of $\cos^2 x$?
I have tried partial integration, it doesn't seem to work (for me). Some help on how to integrate it would be nice.


Answer




You use the identity (e.g. solving from $\cos(2x) = 2\cos^2x-1$):
$$\cos^2 x = \frac{1+ \cos(2x)}{2}$$






Addendum: the previous hint will give you the easiest solution, but you mentioned an attempt with integration by parts - that would work too:
$$\begin{array}{rl}
\displaystyle \color{blue}{\int \cos^2x \, \mbox{d}x} & \displaystyle
= \int \cos x \, \mbox{d} \sin x \\[6pt]
& \displaystyle

= \cos x \sin x - \int \sin x \, \mbox{d} \cos x \\[6pt]
& \displaystyle
= \cos x \sin x + \int \sin^2 x \, \mbox{d}x \\[6pt]
& \displaystyle
= \cos x \sin x + \int 1-\cos^2 x \, \mbox{d}x \\[6pt]
& \displaystyle
= \cos x \sin x + x - \color{blue}{\int \cos^2x \, \mbox{d}x} \\[6pt]
\Rightarrow \displaystyle 2 \int \cos^2x \, \mbox{d}x & \displaystyle
= \cos x \sin x + x + C
\end{array}$$

Then you divide by 2.
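For reference, either route leads to the same antiderivative (up to the constant of integration):
$$\int \cos^2 x \, \mbox{d}x = \frac{x}{2} + \frac{\sin x \cos x}{2} + C = \frac{x}{2} + \frac{\sin(2x)}{4} + C.$$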


Limits problem: Factoring a cube root of x?



Disclaimer: I am an adult learning Calculus. This is not a student posting his homework assignment. I think this is a great forum!



$$\lim_{x\to8}{\frac{\sqrt[3] x-2}{x-8}}$$



How do I factor the top to cancel the $x-8$ denominator?


Answer




Actually you need to factor the denominator: $x-8 = (\sqrt[3]{x})^3-2^3 = (\sqrt[3]{x}-2)\left((\sqrt[3]{x})^2+2\sqrt[3]{x}+2^2\right)$.
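Canceling the common factor then gives
$$\lim_{x\to8}\frac{\sqrt[3]{x}-2}{x-8}=\lim_{x\to8}\frac{1}{(\sqrt[3]{x})^2+2\sqrt[3]{x}+4}=\frac{1}{4+4+4}=\frac{1}{12}.$$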


elementary number theory - Remainder in Division using divisibility tests

I'm a bit lost / behind in my number theory class, but hey what can I do but try to catch up.



I'm asked to find the remainder in division by $m= 3,7,9,11,13$ using divisibility tests for $1234567 \times 10^{89}$



For division by 3 (and 9), I know that the sum of the digits must be divisible by 3 (or 9).



Immediately it follows that both 3 and 9 do not divide our monster number. To find the remainder, this is what I did though I'm not sure if I am correct.



$28 \equiv 1 \mod 3$

and so I say that the remainder in division by 3 of $1234567 \times 10^{89}$ is 1.



For 7,11,13 I see that
$123 - 456 + 700 = 367$ and that none of 7,11,13 divide 367.



This is where I'm stuck -- the test is developed using the fact that $7*11*13 = 1001$ and $1000 \equiv -1 \mod 1001$.



Do I set $367 \equiv x \mod 143$? Or is the remainder in division by $7$, $11$ and $13$ all $367 \mod 1001$?

linear algebra - Does bra-ket notation work for all inner product spaces?



My quantum computation instructor keeps referring to the vector space in which he is using Dirac's bra-ket notation as an "inner product space", but doesn't it need additional properties to use that notation? In particular don't we need to





  1. specify an implementation of the inner product in terms of some other vector space for the first argument; and


  2. require that the inner product be linear in that argument.




The first requirement seems to be necessary to get bras in the first place (I gather there are some theorems that guarantee we can use the inner product to do this) and the second seems necessary to allow us to identify the notation with something like "multiplication" of a bra and a ket.



Does bra-ket notation work for all inner product spaces, or are additional properties required? If so, do these properties have names; does the space that has them?







Forgive the naive formulation. I may not have the language quite right.


Answer



Given an inner product space $V$, you can imagine that there are two different copies of $V$, say $V_1$ and $V_2$, in which each vector $v\in V$ corresponds to a bra $\langle v|\in V_1$ and a ket $|v\rangle\in V_2$. To multiply a bra and a ket together, $\langle v|$ times $|w\rangle$ will by definition be $\langle v,w\rangle$ via the inner product.



Another way to think about this is as $V$ and its Hilbert space dual $V^*$ being identified together; each vector $v\in V$ is afforded the covector $v^*$ which is the linear mapping $v^*(w):=\langle w,v\rangle$ afforded by the given inner product. In this setting the covectors / dual vectors / linear functionals $v^*$ are denoted as bras $\langle v|$ and the usual vectors as kets $|v\rangle$, & multiplication is evaluation $\langle v||w\rangle=v^*(w)=\langle w,v\rangle$.



The reason for $v^*(w):=\langle w,v\rangle$ having $v$ in the second argument is so that each bra is a complex-linear functional of the argument $w$. This is related to a Hilbert space $V$ and its dual $V^*$ being anti-isomorphic; see Riesz representation theorem.
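As a concrete illustration (using the standard inner product on $\Bbb C^2$, linear in its first argument as above): if $|w\rangle$ is the column vector $(w_1,w_2)^T$ and $\langle v|$ is the row vector $(\overline{v_1},\overline{v_2})$, then $\langle v||w\rangle=\overline{v_1}w_1+\overline{v_2}w_2=\langle w,v\rangle$, which is exactly $v^*(w)$.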


Saturday 30 August 2014

group theory - Embedding $\operatorname{Aut}(G/Z(G))$ in $\operatorname{Aut}(G)$



The question, which is somewhat open-ended, is this: under which conditions can we guarantee that for a finite group $G$, $\operatorname{Aut}(G/Z(G))$ is isomorphic to a subgroup of $\operatorname{Aut}(G)$?



This is sometimes possible, but not always. It is clearly possible if $G$ has trivial centre or if $G$ is abelian. But it is also possible for $G \cong Q_8$, since $Q_8/Z(Q_8) \cong C_2 \times C_2$ so $\operatorname{Aut}(Q_8/Z(Q_8)) \cong S_3$ whereas $\operatorname{Aut}(Q_8) \cong S_4$. On the other hand, $D_8/Z(D_8) \cong C_2 \times C_2$ again, but $\operatorname{Aut}(D_8) \cong D_8$.



Another case where the embedding I am asking about is possible is when $G/Z(G)$ is a complete group, i.e. it has trivial centre and no outer automorphisms. In that case, $$\operatorname{Aut}(G/Z(G)) \cong G/Z(G) \cong \operatorname{Inn}(G)$$ is a normal subgroup of $\operatorname{Aut}(G)$, but that's not very interesting.



I should, perhaps, clarify that I am mainly interested in what (if anything) we can say from knowledge of $G$ and its subgroup structure alone, without assuming knowledge of $\operatorname{Aut}(G)$. If nothing interesting can be said, however, ignore this restriction.


Answer




If $H$ is a perfect group with trivial centre, and $G$ is the Schur covering group of $H$, then $G/Z(G) \cong H$ and ${\rm Aut}(G) \cong {\rm Aut}(H)$.



For example, if $H$ is the simple group ${\rm PSL}(n,q)$ with $n>1$, then with a small number of exceptions (such as ${\rm PSL}(2,9)$ and ${\rm PSL}(3,4)$), we have $G = {\rm SL}(n,q)$.



To explain the above, let $G$ be any group and write $G = F/R$ with $F$ free. Then an automorphism $\tau$ of $G$ lifts to a homomorphism (not necessarily an automorphism) $\rho:F \to F$ with $\rho(R) \le R$, and so $\rho$ induces a homomorphism $\bar{\rho}:F/[F,R] \to F/[F,R]$.



Since $R/[F,R] \le Z(F/[F,R])$, the restriction, $\sigma$ say, of $\bar{\rho}$ to $[F,F]/[F,R]$ is uniquely determined by $\tau$ - i.e. it does not depend on the chosen lift $\rho$.



Also, it is easy to see that applying the same process to $\tau^{-1}$ results in the inverse of $\sigma$ on $[F,F]/[F,R]$, so $\sigma$ is an automorphism. Note that $\sigma$ induces an automorphism of the Schur multiplier $M(G) = ([F,F] \cap R)/[F,R]$ of $G$, and in fact we get an induced homomorphism ${\rm Aut}(G) \to {\rm Aut}(M(G))$.




The above is true for any group $G$. But if $G$ is perfect, then $[F,F]/([F,F] \cap R) \cong G$, and $[F,F]/[F,R]$ is the unique Schur cover of $G$.


trigonometry - Can we always use Euler's formula for complex numbers?




I was thinking of this today as I was looking over my complex analysis notes.



If you have some complex number $z$, then we can define it using Euler's formula as $z=a+ib=\cos\theta+i \sin\theta$. Say we have the case that $z=3+4i=25(\cos\theta+i\sin\theta)$. Then $25\cos \theta=3$, and $25\sin\theta=4$. But this would mean that



$$\theta=\cos^{-1}\left(\frac{3}{25}\right) =\sin^{-1}\left(\frac{4}{25}\right).$$



How can this be true if $\cos^{-1}\left(\frac{3}{25}\right)=83.107 \text{ degrees}$ and $\sin^{-1}\left(\frac{4}{25}\right)=9.206 \text{ degrees}$? Does this mean that we can only have certain values of $z$ in order to use Euler's formula ?


Answer



For $z=3+4i$;
$$r=\sqrt{3^2+4^2}=5$$

$$\theta=\tan^{-1}(\frac{4}{3})\approx0.9273^c$$



Hence $z=5(\cos[0.9273]+i\sin[0.9273])$



Note via use of the $3,4,5$ triangle, we can tell that $\cos\theta=\frac{3}{5}$ and $\sin\theta=\frac{4}{5}$.



You simply missed the fact that you need to take the square root when computing $r$.
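Indeed, with $r=5$ the apparent contradiction disappears: $\cos^{-1}\left(\frac{3}{5}\right)=\sin^{-1}\left(\frac{4}{5}\right)\approx 0.9273$ radians (about $53.13$ degrees), so both equations determine the same $\theta$.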


elementary set theory - Show that $f^{-1}(A \cup B) = f^{-1}(A) \cup f^{-1}(B)$



Show that $f^{-1}(A\cup B) = f^{-1}(A)\cup f^{-1}(B)$ but not necessarily



$f^{-1}(A\cap B)=f^{-1}(A)\cap f^{-1}(B)$.




Let $S=A\cup B$



I know that $f^{-1}(S)=\{x:f(x)\in S\}$, assuming that $f$ is one to one.
Is this true $\{x:f(x)\in S\}=\{x:f(x) \in A\}\cup\{x:f(x)\in B\}$?



Why doesn't the intersection work?



Sources : ♦ 2nd Ed, $\;$ P219 9.60(d), $\;$ Mathematical Proofs by Gary Chartrand,
♦ P214, $\;$ Theorem 12.4.#4, $\;$ Book of Proof by Richard Hammack,
♦ P257-258, $\;$ Theorem 5.4.2.#2(b), $\;$ How to Prove It by D Velleman.


Answer



Your exercise is incorrect.




$$\begin{align}f^{-1}[A\cap B] &:= \{x\in\text{dom}(f):f(x)\in A\cap B\}\\ &= \{x\in\text{dom}(f):f(x)\in A\text{ and }f(x)\in B\}\\ &= \{x\in\text{dom}(f):f(x)\in A\}\cap\{x\in\text{dom}(f):f(x)\in B\}\\ &=: f^{-1}[A]\cap f^{-1}[B].\end{align}$$



You'll proceed similarly to show that $f^{-1}[A\cup B] = f^{-1}[A]\cup f^{-1}[B],$ trading "and" for "or".






On the other hand, while we have $f[A\cup B]=f[A]\cup f[B]$ and $f[A\cap B]\subseteq f[A]\cap f[B],$ we don't generally have equality in the last case, unless $f$ is one-to-one. Pick any constant function on your personal favorite set of two or more elements, then choose two disjoint subsets $A$ and $B$ for an example where the inclusion is strict.
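For instance (one concrete choice): let $f:\{1,2\}\to\{0\}$ be the constant map, and take $A=\{1\}$, $B=\{2\}$. Then $f[A\cap B]=f[\varnothing]=\varnothing$, while $f[A]\cap f[B]=\{0\}\cap\{0\}=\{0\}$, so the inclusion is strict.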


linear algebra - Find the eigenvalues and eigenvectors of the matrix with all diagonal elements as $d$ and rest $1$



A matrix has all elements 1 except the diagonal elements. It is an $n\times n$ matrix. What are the eigenvectors and eigenvalues ?




I am solving problems from Strang's book and I am stuck on this one; I have no idea where to begin.


Answer



Denote the matrix as $M$.
Let $J$ be the matrix with all entries $1$.
The matrix $J$ has rank $1$, and therefore only one nonzero eigenvalue, which is $n$, with corresponding eigenvector $(1,1,\ldots,1)^T$ (what are the eigenvectors corresponding to the eigenvalue(s) $0$?).



Now note that $M=J+(d-1)I_n$ and that for any matrix $A\in\mathbb R^{n\times n}, \ x\in\mathbb R^n$ and $r\in\mathbb R$ if $Ax=\lambda x $ then $(A+rI_n)x=(\lambda+r)x$.
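Putting the two observations together, the eigenvalues of $M$ are $d+n-1$ (with eigenvector $(1,1,\ldots,1)^T$) and $d-1$ with multiplicity $n-1$ (with eigenvectors spanning the hyperplane orthogonal to $(1,\ldots,1)^T$). A quick numerical sanity check, sketched with NumPy and example values $n=5$, $d=7$ (not values from the question):

    import numpy as np
    n, d = 5, 7.0                                 # example values only
    M = np.ones((n, n)) + (d - 1) * np.eye(n)     # M = J + (d-1)I: d on the diagonal, 1 elsewhere
    print(np.linalg.eigvalsh(M))                  # expect [6. 6. 6. 6. 11.]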


abstract algebra - Shortest irreducible polynomials over $\Bbb F_p$ of degree $n$

For any prime $p$, one can realize any finite field $\Bbb F_{p^n}$ as the quotient of the ring $\Bbb F_p[X]$ by the maximal ideal generated by an irreducible polynomial $f$ of degree $n$. By dividing by the leading coefficient, we may as well assume $f$ is monic, in which case we can write it as
$$f(X) = X^n + a_{n - 1} X^{n - 1} + \cdots + a_1 X + a_0.

\def\co{\color{#00bf00}{1}}
\def\ct{\color{#0000ff}{2}}
\def\ch{\color{#bf00bf}{3}}
\def\cf{\color{#ff0000}{4}}
\def\ci{\color{#ff7f00}{5}}
$$



If we let $\zeta$ denote a root of $f$, then $\Bbb F_{p^n} \cong \Bbb F_p[X] / \langle f \rangle \cong \Bbb F_p[\zeta]$, and so when computing multiplication in this field, writing elements as polynomials in $\zeta$ of degree $< n$, one way or another we iteratively use the identity
$$\zeta^n = -a_{n - 1} \zeta^{n - 1} - \cdots - a_1 \zeta - a_0.$$




Manually multiplying elements in this field is naively more efficient, then, when one chooses a polynomial $f$ with fewer nonzero coefficients.



So, naturally, we can ask just how efficient we can be:




For any prime $p$ and any positive integer $n$, what is the least number $\lambda(p, n)$ of nonzero coefficients an irreducible polynomial of degree $n$ over $\Bbb F_p$ can have?




Some trivial to easy observations:





  • The only polynomial of degree $n$ with exactly one nonzero coefficient is $X^n$, and this is reducible iff $n > 1$, so $\lambda(p, 1) = 1$ and $\lambda(p, n) > 1$ for $n > 1$.

  • For $p \neq 2$, the squaring map on $\Bbb F_p^{\times}$ is $2$-to-$1$, so there are always nonsquares $a$ modulo $p$, and for such $a$ the polynomial $x^2 - a$ is irreducible, and so $\lambda(p, 2) = 2$. The only irreducible polynomial of degree $2$ over $\Bbb F_2$ is $X^2 + X + 1$, so $\lambda(2, 2) = 3$.

  • Any irreducible polynomial of degree $> 1$ has nonzero constant term. In particular, the only polynomial over $\Bbb F_2$ with exactly two nonzero coefficients and nonzero constant term is $X^n + 1$, but this always has root $1$ and so has $X + 1$ as a factor; thus, $\lambda(2, n) \geq 3$ for $n > 1$.

  • The only (monic) polynomials over $\Bbb F_3$ with exactly two nonzero coefficients, the constant term among them, are $X^n \pm 1$. Now, $X^n - 1$ again always has root $1$. If $n$ is not a power of $2$, say, $n = 2^m q$ for $q > 1$, one can factor $X^n + 1 = (X^{2^m} + 1)(X^{2^m (q - 1)} - X^{2^m (q - 2)} + \cdots + X^{2^m})$, and if $n$ is a power of $2$ larger than $2$ (indeed, any multiple of $4$), we can factor $X^n + 1 = (X^{n / 2} - X^{n / 4} - 1)(X^{n / 2} + X^{n / 4} - 1)$. In summary, if $n > 2$ then $X^n \pm 1$ is reducible and so $\lambda(3, n) \geq 3$. On the other hand $X^2 + 1$ is irreducible, so $\lambda(3, 2) = 2$.

  • For $n = p$, this question shows that $X^p - X + a$ is irreducible for all $a \neq 0$, so $\lambda(p, p) \leq 3$. Then, using that the Frobenius map $\phi: X \mapsto X^p$ is an automorphism we see that every polynomial $X^p + a$ is reducible---indeed, it factors as $(X - b)^p$, where $b := \phi^{-1}(-a)$---and so (again since the constant term of any irreducible polynomial of degree $n > 1$ is nonzero) $\lambda(p, p) = 3$.



Some more observations from answers:





  • Harry Altman gives a proof (generalizing an observation) below that for $p > 3$ we have $\lambda(p, 3) = 2$ for $p \equiv 1 \bmod 3$ and $\lambda(p, 3) = 3$ for $p \equiv 2 \bmod 3$. With this in hand, all $\lambda(p, n)$, $n \leq 3$, are resolved.


  • Jim Belk's answer shows more generally that we can quickly characterize the $(p, n)$ for which there is an irreducible polynomial of the form $X^n + a$, that is, for which $\lambda(p, n) = 2$.




Yet more results:




  • Swan has given several sufficient conditions for the reducibility of a trinomial $x^n + x^k + 1$ in $\Bbb F_2[x]$ (see citation below). One of these conditions in particular implies that all such trinomials are reducible when $n \equiv 0 \bmod 8$, and hence $\lambda(2, 8m) > 3$. More details can be found in $\S$40.9 of Jörg Arndt's Matters Computational (pdf warning, $>5$ MB).


  • Ciet, Quisquater, and Sica showed similarly that $\lambda(2, n) > 3$ if $n \equiv 13 \bmod 24$ or $n \equiv 19 \bmod 24$.





Together with the above, a few naive Maple scripts yield the following table that includes all $(p, n)$ such that $p < 2^5, n \leq 2^4$. (The table includes a column for $n = 49$, because this is the smallest value for which $\lambda(p, n) = 4$ for some small $p$, and possibly any $p$.)
\begin{array}{c|cccccccc}
\lambda & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & \cdots & 49\\
\hline
2 & \co & \ch & \ch & \ch & \ch & \ch & \ch & \ci & \ch & \ch & \ch & \ch & \ci & \ch & \ch & \ci & \cdots & \ch\\
3 & \co & \ct & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \cdots & \cf\\
5 & \co & \ct & \ch & \ct & \ch & \ch & \ch & \ct & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ct & \cdots & \ch\\
7 & \co & \ct & \ct & \ch & \ch & \ch & \ch & \ch & \ct & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \cdots & \ch\\

11 & \co & \ct & \ch & \ch & \ct & \ch & \ch & \ch & \ch & \ct & \ch & \ch & \ch & \ch & \ch & \ch & \cdots & \ch\\
13 & \co & \ct & \ct & \ct & \ch & \ct & \ch & \ct & \ct & \ch & \ch & \ct & \ch & \ch & \ch & \ct & \cdots & \ch\\
17 & \co & \ct & \ch & \ct & \ch & \ch & \ch & \ct & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ct & \cdots & \ch\\
19 & \co & \ct & \ct & \ch & \ch & \ct & \ch & \ch & \ct & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \cdots & \ch\\
23 & \co & \ct & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ch & \ct & \ch & \ch & \ch & \ch & \ch & \cdots & \ch\\
29 & \co & \ct & \ch & \ct & \ch & \ch & \ct & \ct & \ch & \ch & \ch & \ch & \ch & \ct & \ch & \ct & \cdots & \ct\\
31 & \co & \ct & \ct & \ch & \ct & \ct & \ch & \ch & \ct & \ct & \ch & \ch & \ch & \ch & \ct & \ch & \cdots & \ch\\
\end{array}
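(The table above comes from brute-force search; the following is only a sketch of such a search, written in Python/SymPy rather than the Maple scripts actually used, and assuming SymPy's irreducibility test over $\Bbb F_p$. It is far too slow for the larger entries.)

    from itertools import combinations, product
    from sympy import GF, Poly, symbols

    x = symbols('x')

    def lam(p, n, max_terms=6):
        # least number of nonzero coefficients of a monic irreducible
        # polynomial of degree n over F_p (naive brute force)
        for t in range(1, max_terms + 1):
            for pos in combinations(range(n), t - 1):          # exponents of the lower-degree terms
                for coeffs in product(range(1, p), repeat=t - 1):
                    f = x**n + sum(c * x**k for c, k in zip(coeffs, pos))
                    if Poly(f, x, domain=GF(p)).is_irreducible:
                        return t
        return None

    # e.g. lam(2, 2) == 3, lam(3, 2) == 2, lam(7, 3) == 2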



Some data:





  • The only values of $\lambda(p, n)$ that occur for $p \leq 11, n \leq 2^8$ or $p \leq 97, n \leq 20$ are $1, 2, 3, 4, 5$. Apparently minimal examples are: $\lambda(2, 1) = 1$, $X$; $\lambda(3, 2) = 2$, $X^2 + 1$; $\lambda(2, 2) = 3$, $X^2 + X + 1$; $\lambda(3, 49) = 4$, $X^{49} + X^{14} + X + 2$; $\lambda(2, 8) = 5$, $X^8 + X^7 + X^2 + X + 1$. For all $106$ values of $n \leq 2^8$ for which $\lambda (2, n) > 3$ we have $\lambda(2, n) = 5$. For all $p = 3, 5, 7, 11$ and $n \leq 2^8$ for which $\lambda(p, n) > 3$ (there are $23$, $14$, $2$, and $1$ of these, resp.) we have $\lambda(p, n) = 4$.

  • OEIS A057486 is the sequence of "Degrees of absolutely reducible trinomials, i.e. numbers $n$ such that $x^n + x^m + 1$ is factorable [modulo $2$] for all $m$ between $1$ and $n$," i.e., a list of $n > 1$ such that $\lambda(2, n) > 3$.


  • Remarkably, it was recently shown that $\lambda(2, 57885161) > 3$. (Not unrelated is the fact that $2^{57885161}-1$ is a Mersenne prime, though I don't know the extent of this relationship.)




One can also determine values for small enough $p, n$ by consulting these tables.





Jörg Arndt, Matters Computational (pdf warning, $>5$ MB)



Mathieu Ciet, Jean-Jacques Quisquater, Francesco Sica, "A Short Note on Irreducible Trinomials in Binary Fields", (2002).



Richard G. Swan, "Factorization of polynomials over finite fields", Pacific Journal of Mathematics, (12) 3, pp. 1099-1106, (1962).


Friday 29 August 2014

calculus - Evaluating Limit - $(1-(\cos x)^{\sin x})/(x^3)$




I need to evaluate the following limit using l'Hospital's rule:
$$\lim_{x\to 0}\dfrac{1-(\cos x)^{\sin x}}{x^3}$$



By doing one step, i get
$$\lim_{x\to 0}\dfrac{-(\cos x)^{\sin x}[(\cos x) \ln(\cos x)-\frac{(\sin^2 x)}{\cos x}]}{3x^2}$$



If I did this correctly, I still need to use l'Hospital's rule again, but this seems too complicated for an exam question. Is there another, simpler way of doing this, while still using L'Hospital's rule?


Answer



There is a way of solving this using l'Hospital's rule.




$$\lim_{x\to 0}\dfrac{1-(\cos x)^{\sin x}}{x^3}=\lim_{x\to 0}\dfrac{1-(\cos x)^{\sin x}}{\sin^{3}{x}}$$



Apply the rule, we get:



$$\lim_{x\to 0}\dfrac{-(\cos x)^{\sin {x}}[(\cos x) \ln(\cos x)-\frac{(\sin^2 x)}{\cos x}]}{3\sin^2{x}\cos{x}}$$
$$\lim_{x\to 0}\dfrac{-(\cos x)^{(\sin {x}-1)}[(\cos x) \ln(\cos x)-\frac{(\sin^2 x)}{\cos x}]}{3\sin^2{x}}$$
$$=\lim_{x\to 0}\dfrac{[(\cos x) \ln(\cos x)-\frac{(\sin^2 x)}{\cos x}]}{-3\sin^2{x}}$$
$$=\lim_{x\to 0}\dfrac{[(\cos^2 {x}) \ln(\cos x)-{(\sin^2 x)}]}{-3\sin^2{x}\cos{x}}$$
$$=\lim_{x\to 0}\dfrac{[(\cos^2 {x}) \ln(\cos x)-{(\sin^2 x)}]}{-3\sin^2{x}}$$
$$=\lim_{x\to 0}\dfrac{(\cos^2 {x}) \ln(\cos x)}{-3\sin^2{x}}+\lim_{x\to 0}\dfrac{{(\sin^2 x)}}{3\sin^2{x}}$$

$$=\lim_{x\to 0}\dfrac{\ln(\cos x)}{-3x^2}+\frac{1}{3}$$
Apply the rule again:
$$=\lim_{x\to 0}\dfrac{\sin x}{6x\cos x}+\frac{1}{3}$$
$$=\frac{1}{2}$$


Elementary Number Theory: Proving $\log_2(3)$ is irrational using the Fundamental Theorem of Arithmetic.

Problem: Prove $\log_2(3)$ is irrational using the Fundamental Theorem of Arithmetic.



What I have so far:



Proof: Suppose $\log_2(3) \in \mathbb Q$.



Then there are $p,q \in \mathbb Z$, $q \neq 0$, such that $\log_2(3)=\frac pq$.




Then $2^p = 3^q$.



This is where I get stuck because I'm not sure how to incorporate the Fundamental Theorem of Arithmetic.

calculus - How to compute the formula $\sum\limits_{r=1}^d r \cdot 2^r$?



Given $$1\cdot 2^1 + 2\cdot 2^2 + 3\cdot 2^3 + 4\cdot 2^4 + \cdots + d \cdot 2^d = \sum_{r=1}^d r \cdot 2^r,$$
how can we arrive at the following closed form? $$2 (d-1) \cdot 2^d + 2. $$




Thank you


Answer



As noted above, observe that
\begin{eqnarray}
\sum_{r = 1}^{d} x^{r} = \frac{x(x^{d} - 1)}{x - 1}.
\end{eqnarray}
Differentiating both sides and multiplying by $x$, we find
\begin{eqnarray}
\sum_{r = 1}^{d} r x^{r} = \frac{dx^{d + 2} - x^{d+1}(d+1) + x}{(x - 1)^{2}}.

\end{eqnarray}
Substituting $x = 2$,
\begin{eqnarray}
\sum_{r = 1}^{d} r 2^{r} = d2^{d + 2} - (d+1) 2^{d+1} + 2 = (d - 1) 2^{d+1} + 2.
\end{eqnarray}
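As a quick check with $d = 3$: the left-hand side is $1\cdot 2 + 2\cdot 4 + 3\cdot 8 = 34$, and $(d-1)2^{d+1} + 2 = 2\cdot 16 + 2 = 34$ as well.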


calculus - Limit of $\frac{n^2}{4^n}$

How can I prove that the limit of $n^2/4^n$ as $n$ approaches infinity is 0?

I want to solve it without using de l'Hospital's rule and I tried some inequalities, but I don't find a nice solution.

Thursday 28 August 2014

functions - Examples of bijections from $\mathbb Z\to\mathbb Z$ such that their sum is a bijection?

What are some simple examples of functions $f,g\colon \mathbb Z\to\mathbb Z$ which are bijective and their sum is also bijective?



Unless I am missing something, it is not difficult to show that such functions exist by induction. (We simply order $\mathbb Z=\{z_n; n=0,1,2,\dots\}$ in some way. And we can define $f$, $g$, $h$ by induction in such a way that we make sure that after the $n$-th step: a) $f$ and $g$ are defined for the integers from some set $A_n$; b) $f$, $g$ and $h=f+g$ are injective on $A_n$; c) $z_n\in A_n$; d) $z_n$ belongs to $f[A_n]$, $g[A_n]$, $h[A_n]$. If needed, I can try to make this more precise and post the inductive construction in an answer; of course, if somebody wants to post such an answer, you're more than welcome to do so. Especially if you can suggest some more elegant way than what I sketched here.)



But I have doubts that there is a bijection given by a simple formula. (Although I admit that the words "simple formula" are rather vague.)




So, as an additional question, is there example of bijections $f$, $g$ such that $f+g$ is also bijection, and these functions are "nice"? (For some reasonable meaning of the word "nice".) Or can we show that such example cannot be found if we restrict $f$, $g$ to some reasonably behaved class of functions?






Basically the motivation for this question came from a course that I am TA-ing. As an exercise, to help them acquainted with the notion of bijection, the students were asked to find an example of two bijections from $\mathbb Z$ to $\mathbb Z$ such that their sum is not a bijection. A colleague, who is also TA at the same course, asked me whether I can think of example where the sum is bijection, since for the most natural examples that one typically thinks for (such as $x\mapsto x+d$ or $x\mapsto -x+d'$) you never get a bijection.

functional equations - How to find such a function $f:3\mathbb{N}+1\to 4\mathbb{N}+1$

How to find a bijective function $f: 3\mathbb{N}+1\to 4\mathbb{N}+1$ such that $$f(xy)=f(x)f(y),\forall x,y\in 3\mathbb{N}+1$$




If i let $x,y\in 3\mathbb{N}+1$ then there exists $n,m\in \mathbb{N}$ such that $x=3n+1,y=3m+1$



but I have no idea how I can find such an $f$. Is there a method, please?

integration - Evaluating $\int_{0}^{\infty} \frac{\cos {ax}}{\cosh{x}}dx$



$\int_{0}^{\infty} \frac{\cos {ax}}{\cosh{x}}dx$



$a$ is a real number.
Hint: Consider a suitable rectangular contour




I know that $\frac{1}{\cosh z}$ has simple poles when $z=\frac{(2n-1)\pi}{2}i$.



What should I do next?
I would appreciate any help. Thank you.


Answer



First off, let us notice that the function $\cos (ax) / \cosh (x)$ does not change under $x \mapsto -x$. So,



$$
\int\limits_0^\infty \frac{\cos ax}{\cosh x} \, dx = \frac{1}{2} \int\limits_{-\infty}^\infty \frac{\cos ax} {\cosh x} \, dx.

$$



Next, we take the contour $\gamma_M$ to be a rectangle: line (1) is the segment $[-M, \, M]$ of the real line, the upper line (3) is the same segment translated by $\pi i$, and (2) and (4) are the two short vertical sides joining them.



The integral over line (3) could be rewritten as



$$
\int\limits_{(3)} \frac{\cos ax}{ \cosh x} \, dx = \int\limits_{(1)} \frac{\cos a(x + \pi i)}{\cosh (x + \pi i)} \, dx = (*).
$$




Notice, that $\cosh (x + \pi i) = - \cosh x$ and



$$
\cos a (x + \pi i) = \cos (ax) \cos ( a \pi i) - \sin (ax) \sin ( a \pi i).
$$



Therefore,



$$
(*) = -\cos (a \pi i )\int\limits_{(1)} \frac{\cos a x}{\cosh x} \, dx + \sin (a \pi i) \int\limits_{(1)} \frac{\sin a x}{\cosh x} \, dx.

$$



The last integral clearly equals zero, as the integrand $\frac{\sin ax}{\cosh x}$ is an odd (antisymmetric) function integrated over a symmetric interval.



Integrals over lines (2) and (4) go to zero as $M \to \infty$. So, we now have, via the residue theorem,



$$
2 \pi i \, \mathrm{res} \; \frac{\cos a x}{\cosh x} = \lim\limits_{M \to \infty} \int\limits_{\gamma_M} \frac{\cos a x}{\cosh x} \, dx = (1 + \cos (a \pi i)) \int\limits_{-\infty}^\infty \frac{\cos a x}{\cosh x} \, dx
$$
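For completeness (carrying the computation one step further): the only pole of $\dfrac{\cos az}{\cosh z}$ inside the strip $0<\operatorname{Im}z<\pi$ is $z=\dfrac{\pi i}{2}$, where
$$
\mathrm{res}_{\,z=\pi i/2}\,\frac{\cos a z}{\cosh z}=\frac{\cos\left(\frac{a\pi i}{2}\right)}{\sinh\left(\frac{\pi i}{2}\right)}=\frac{\cosh\left(\frac{a\pi}{2}\right)}{i}.
$$
Since $\cos(a\pi i)=\cosh(a\pi)=2\cosh^2\left(\frac{a\pi}{2}\right)-1$, this gives
$$
\int\limits_{-\infty}^{\infty}\frac{\cos ax}{\cosh x}\,dx=\frac{2\pi\cosh\left(\frac{a\pi}{2}\right)}{1+\cosh(a\pi)}=\frac{\pi}{\cosh\left(\frac{a\pi}{2}\right)},
\qquad
\int\limits_{0}^{\infty}\frac{\cos ax}{\cosh x}\,dx=\frac{\pi}{2\cosh\left(\frac{a\pi}{2}\right)}.
$$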


calculus - $\alpha:(0, +\infty) \to (0, +\infty)$ continuous and increasing, $\alpha(1) = 1$. Show $\int_1^x d\alpha/\alpha = \log \alpha(x)$





Let $\alpha:(0, +\infty) \to (0, +\infty)$ continuous and increasing,
with $\alpha(1) = 1$. Show that $\int_1^x d\alpha/\alpha = \log
\alpha(x)$ (don't suppose $\alpha$ differentiable)




Well, the change of variable theorem for Stieltjes integration says:




Let $f:[c,d]\to\mathbb{R}$ continuous, $\alpha:[c,d]\to\mathbb{R}$ of
bounded variation (rectifiable path) and $\phi:[a,b]\to[c,d]$ an

homeomorphism. Then:



$$\int_c^d f(t) d\alpha = \int_a^b f\circ\phi(s)\,d(\alpha\circ\phi),
\mbox{ if $\phi$ is increasing } $$



$$\int_c^d f(t) d\alpha = -\int_a^b f\circ\phi(s)\,d(\alpha\circ\phi),
\mbox{ if $\phi$ is decreasing } $$




I think I'm supposed to find the inverse homeomorphism from $\alpha$ (since its continuous and injective), in a way that $\alpha\circ\phi = 1$, so the integral becomes a Riemann integral. I should then evaluate $\int_{\phi(c)}^{\phi(d)} f\circ \phi(s) ds$ where $f: t\to \frac{1}{t}$. The integral now depends on $\phi(s)$, but how to Riemann integrate it?



Answer



Here is a simple change of variables result:




Claim. Let $f : [c, d] \to \mathbb{R}$ be Riemann integrable and $\alpha : [a, b] \to [c, d]$ be continuous and monotone increasing. Then



$$ \int_{a}^{b} f(\alpha(x)) \, d\alpha(x) = \int_{\alpha(a)}^{\alpha(b)} f(y) \, dy. $$




Proof. For each $\epsilon > 0$, pick $\delta > 0$ such that $|\alpha(x) - \alpha(y)| < \epsilon$ whenever $|x-y| < \delta$. Then for any partition $\Pi = \{a = x_0 < \cdots < x_n = b\}$ with $\|\Pi\| < \delta$ and for any $i = 1, \cdots, n$, we have




$$ m_i [\alpha(x_i) - \alpha(x_{i-1})] \leq \int_{x_{i-1}}^{x_i} f(\alpha(x)) \, d\alpha(x) \leq M_i [\alpha(x_i) - \alpha(x_{i-1})] $$



where



\begin{align*}
m_i &= \inf \{ f(y) : y \in [\alpha(x_{i-1}),\alpha(x_i)] \} \\
M_i &= \sup \{ f(y) : y \in [\alpha(x_{i-1}),\alpha(x_i)] \}.
\end{align*}




This tells that, with $\alpha(\Pi) = \{ \alpha(x_0), \cdots, \alpha(x_n)\}$ we have $\|\alpha(\Pi)\| < \epsilon$ and



$$ L(f, \alpha(\Pi)) \leq \int_{a}^{b} f(\alpha(x)) \, d\alpha(x) \leq U(f, \alpha(\Pi)), $$



where $L(f, \cdot)$ and $U(f, \cdot)$ are the lower Riemann sum and the upper Riemann sum, respectively. Therefore the desired identity follows by taking $\epsilon \to 0$.
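Applying the claim to the original problem with $f(y)=\frac1y$ (which is continuous on $[\alpha(1),\alpha(x)]\subset(0,+\infty)$, since $\alpha$ is positive and increasing), and assuming $x\ge 1$ (the case $x<1$ is handled the same way with the orientation reversed), we get
$$\int_1^x\frac{d\alpha(t)}{\alpha(t)}=\int_{\alpha(1)}^{\alpha(x)}\frac{dy}{y}=\log\alpha(x)-\log\alpha(1)=\log\alpha(x),$$
using $\alpha(1)=1$.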


linear algebra - Given $A \in \mathbb{R}^{n \times n}$ real, symmetric, positive semidefinite matrix, find all eigenvalue and eigenvector pairs




Let $A \in \mathbb{R}^{n \times n}$ be a real, symmetric, positive semidefinite matrix with $n$ eigenvalues between $[0,1]$



Suppose we have the following information:




  • $\lambda_{min}(A) = 0, \lambda_{max}(A) = 1$

  • $\det(A) = 0$

  • $A\mathbf{1} = 0$ (the one vector of $\mathbb{R}^n$ is an eigenvector
    associated with the eigenvalue 0)




Since $A$ is symmetric, we know that all eigenvalues are going to be real.



Is it possible to find all the other eigenvalues and eigenvectors?



This sounds like an impossible question because the information presented is minimal, but if there is some result that the distribution of the eigenvalues and eigenvectors are..."uniformly spaced" for this kind of matrix, then perhaps we can find all the other eigenvalue-eigenvector pairs.



Anyone has any ideas how to approach this question?


Answer




There is no way to know what the other eigenvectors or eigenvalues are. Consider a matrix defined as $$A=\lambda_1v_1v_1^T + ... + \lambda_nv_nv_n^T$$
Where $\{v_1,...,v_n\}$ is an orthonormal basis for $\mathbb{R}^n$. Then clearly every $\lambda_i$ is an eigenvalue with eigenvector $v_i$. You can fix $\lambda_1=0$, $v_1=(1,...,1)$ and $\lambda_2=1$, but any choices over the other eigenvalues and eigenvectors on the definition of $A$ will give you a matrix that will hold all the constraints.
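As a concrete illustration (one possible choice of the free data): take $n=3$, $v_1=\frac{1}{\sqrt3}(1,1,1)^T$, $v_2=\frac{1}{\sqrt2}(1,-1,0)^T$, $v_3=\frac{1}{\sqrt6}(1,1,-2)^T$, and set $A_t=0\cdot v_1v_1^T+1\cdot v_2v_2^T+t\cdot v_3v_3^T$. For every $t\in[0,1]$ the matrix $A_t$ satisfies all of the listed conditions, yet its spectrum $\{0,\,t,\,1\}$ varies with $t$ (and a different choice of $v_2,v_3$ changes the eigenvectors as well).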


functional equations - If $f(xy)=f(x)f(y)$ then show that $f(x) = x^t$ for some t




Let $f(xy) =f(x)f(y)$ for all $x,y\geq 0$. Show that $f(x) = x^p$ for some $p$.




I am not very experienced with proof. If we let $g(x)=\log (f(x))$ then this is the same as $g(xy) = g(x) + g(y)$




I looked up the hint and it says let $g(x) = \log f(a^x) $



The wikipedia page for functional equations only states the form of the solutions without proof.



Attempt
Using the hint (which was like pulling a rabbit out of the hat)



Restricting the codomain $f:(0,+\infty)\rightarrow (0,+\infty)$
so that we can define the real function $g(x) = \log f(a^x)$ and we

have $$g(x+y) = g(x)+ g(y)$$



i.e $g(x) = xg(1)$ as $g(x)$ is continuous (assuming $f$ is).



Letting $\log_a f(a) = p$ we get $f(a^x) =a^{px} $. I do not have a rigorous argument but I think I can conclude that $f(x) = x^p$ (please fill any holes or unspecified assumptions). Different solutions are invited


Answer



So, we assume $f$ is continuous. Letting $g(x) = \ln(f(a^x))$, we get
$$
\begin{align*}
g(x+y) &= \ln(f(a^{x+y})) = \ln(f(a^xa^y)) = \ln(f(a^x)f(a^y))\\

&= \ln(f(a^x)) + \ln(f(a^y))\\
&= g(x)+g(y).
\end{align*}$$
So $g$ satisfies the Cauchy functional equation; if you assume $f$ is continuous, then so is $g$, hence $g(x) = xg(1)$ for all $x\gt 0$.



Since $g(1) = \ln(f(a))$, we have
$$f(a^x) = e^{g(x)} = e^{g(1)x} = (e^{x})^{g(1)}.$$
Given $r\in \mathbb{R}$, $r\gt 0$, we have $r = a^{\log_a(r)}$, hence
$$\begin{align*}
f(r) &= f\left(a^{\log_a(r)}\right)\\

&= \left(e^{\log_a(r)}\right)^{g(1)}\\
&= \left(e^{\ln(r)/\ln(a)}\right)^{g(1)}\\
&= \left(e^{\ln(r)}\right)^{g(1)/\ln(a)}\\
&= r^{g(1)/\ln(a)},
\end{align*}$$
where we have used the change-of-base formula for the logarithm,
$$\log_a(r) = \frac{\ln r}{\ln a}.$$
Finally, since $g(1) = \ln(f(a))$, we have
$$f(r) = r^{\ln(f(a))/\ln(a)}.$$
As this works for any positive $a$, $a\neq 1$, taking $a=e$ we get

$$f(r) = r^{\ln(f(e))}.$$


real analysis - How to prove that the ball is convex.

I want to conclude that $B_r(x_0)$ is convex from the fact that $B_1(0)$ is convex. So I was trying to use the answer provided here Proving that closed (and open) balls are convex



But the thing is that I don't know how to conclude; I think I may need the fact that $B_1(0)$ is convex to get the result. Can someone help me to get this result please?



Thanks a lot in advance.

linear algebra - Prove that if $T: V \to W$ is one to one and $\{Tv_1, \dots, Tv_n\}$ is a basis for $W$, then $\{v_1,\dots, v_n\}$ is also a basis for $V$.



Prove that if $T: V \to W$ is one to one and $\{Tv_1, \dots, Tv_n\}$ is a basis for $W$, then $\{v_1,\dots, v_n\}$ is also a basis for $V$.



My idea is to introduce a $T^{-1}$ and then do a proof that is similar to the converse which is proven in many textbook. However, for the converse, there is also a requirement for T to be onto. I am wondering, why for this case, onto is not needed.



Answer



The proof of Nameless, as it stands now, only proves that the $v_{i}$ are linearly independent. You must also prove that they span $V$.



Let $v \in V$, and consider $w = T(v)$, an element of the image of $T$. By assumption, $w$ can be written uniquely as
$$
T(v) = w = c_{1} T(v_{1}) + \dots + c_{n} T(v_{n})
=
T(c_{1} v_{1} + \dots + c_{n} v_{n}).
$$
Since $T$ is one-to-one, one obtains

$$
v = c_{1} v_{1} + \dots + c_{n} v_{n}.
$$






Also, the linear independence bit does not require a proof by contradiction. If
$$
0 = c_{1} v_{1} + \dots + c_{n} v_{n},
$$

apply $T$. By a standard property of linear maps, we have $T(0) = 0$, thus
$$
0 = c_{1} T(v_{1}) + \dots + c_{n} T(v_{n}),
$$
and since the $T(v_{i})$ are linearly independent by assumption, we have that all $c_{i}$ are $0$, showing that the $v_{i}$ are also linearly independent.


elementary number theory - Prime $p$ with $p^2+8$ prime



I need to prove that there is only one $p$ prime number such that $p^2+8$ is prime and find that prime.



Anyway, I just guessed and the answer is 3 but how do I prove that?


Answer




Any number can be written as $6c,6c\pm1,6c\pm2=2(3c\pm1),6c+3=3(2c+1)$



Clearly, $6c,6c\pm2,6c+3$ can not be prime for $c\ge 1$



Any prime $>3$ can be written as $6a\pm 1$ where $a\ge 1$



So, $p^2+8=(6a\pm 1)^2+8=3(12a^2\pm4a+3)$.



Then $p^2+8>3$ is divisible by $3$, hence it is not prime.




So, the only prime is $3$.






Any number$(p)$ not divisible by $3,$ can be written as $3b\pm1$



Now, $(3b\pm1)^2+(3c-1)=3(3b^2\pm2b+c)$.



Then , $p^2+(3c-1)$ is divisible by 3




and $p^2+(3c-1)>3$ if $p>3$ and $c\ge1$,hence not prime.



The necessary condition for $p^2+(3c-1)$ to be prime is $3\mid p$



$\implies$ if $p^2+(3c-1)$ is prime, $3\mid p$.



If $p$ needs to be prime, $p=3$, here $c=3$
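A quick numerical check agrees with this: $3^2+8=17$ is prime, while $5^2+8=33=3\cdot 11$, $7^2+8=57=3\cdot 19$ and $11^2+8=129=3\cdot 43$ are not.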


Wednesday 27 August 2014

algebra precalculus - How do I show $\sin^2(x+y)-\sin^2(x-y)\equiv\sin(2x)\sin(2y)$?



I really don't know where I'm going wrong, I use the sum to product formula but always end up far from $\sin(2x)\sin(2y)$. Any help is appreciated, thanks.


Answer



\begin{align}

\Big(\sin(x+y)\Big)^2 - \Big(\sin(x-y)\Big)^2 & = \Big( \sin x\cos y+\cos x\sin y \Big)^2 - \Big( \sin x\cos y-\cos x\sin y \Big)^2 \\[6pt]
& = \Big( \sin^2x\cos^2y + 2\sin x\cos y\cos x\sin y + \cos^2x\sin^2y \Big) \\[6pt]
&\phantom{{}=} {}- \Big( \sin^2x\cos^2y - 2\sin x\cos y\cos x\sin y + \cos^2x\sin^2y \Big) \\[6pt]
& = 4\sin x\cos y\cos x\sin y \\[6pt]
& = (2\sin x\cos x)(2\sin y\cos y) \\[6pt]
& = \sin(2x)\sin(2y).
\end{align}


inequality - How to show without calculator that $\left\lfloor\, \log_{10}{999^{999}}\right\rfloor =\left\lfloor\, \log_{10}{999^{999}}+\log_{10}2\right\rfloor$

By wolfram alpha, I get



$\left\lfloor\, \log_{10}{999^{999}}\right\rfloor =\left\lfloor\, \log_{10}{999^{999}}+\log_{10}2\right\rfloor=2996$.



How to prove that $\left\lfloor\, \log_{10}{999^{999}}\right\rfloor =\left\lfloor\, \log_{10}{999^{999}}+\log_{10}2\right\rfloor$ without calculator or wolfram alpha?




Thank in advances.

abstract algebra - The definition of "algebraically independent"



In Lang's Algebra, he gives a definition that




Elements $x_1, \cdots, x_n\in B$ are called algebraically independent over $A$[a subring of $B$] if the evaluation map $$f\mapsto f(x)$$is injective. Equivalently, we could say that if $f\in A[X]$ is a polynomial and $f(x)=0$, then $f=0$.





I am confused about the "injective" here. Two possible interpretations in my mind:




  1. Fix an $f$, for $x\neq y,f(x)\neq f(y).$

  2. Fix an $x$, for $f_1\neq f_2,f_1(x)\neq f_2(x).$



I was wondering which one is correct and why. Could you give me some helpful examples?




Besides, why are these two definitions equivalent?



Thanks in advance.


Answer



The evaluation map is from $A[X_1,\dots,X_n]$ to $B$. I'll denote it $\text{ev}_x : A[X] \rightarrow B$. Writing it out very explicitly, $\text{ev}_x(f(X_1,\dots,X_n)) = f(x_1,\dots,x_n)$. To say that it is injective is to say $ev_x(f) = ev_x(g) \Rightarrow f = g$. This is your second interpretation. To see that the two definitions are equivalent, recall that for any ring homomorphism $\phi : R \rightarrow S$, $\phi(r) = \phi(r')$ $\Longleftrightarrow$ $r - r' \in \text{ker}(\phi)$. Then it is a little exercise to see that $\phi$ is injective if and only if $\text{ker}(\phi) = \{0\}$. Apply this result to $ev_x$ and it says that $ev_x$ is injective if and only if $ev_x(f) = 0 \Rightarrow f = 0$. But $ev_x(f) = f(x_1,\dots,x_n)$, so the condition becomes $f(x_1,\dots,x_n) = 0 \Rightarrow f(X_1,\dots,X_n) = 0$. It is important to understand throughout that $f(x_1,\dots,x_n)$ is always an element of $B$, while $f(X_1,\dots,X_n)$ is always a polynomial in several variables with coefficients in $A$ (that is, an element of $A[X]$).
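For a concrete example, take $A=\Bbb Q\subset B=\Bbb R$ and $n=1$: the single element $x_1=\sqrt2$ is not algebraically independent over $\Bbb Q$, since the nonzero polynomial $f(X)=X^2-2$ satisfies $f(\sqrt2)=0$, whereas $x_1=\pi$ is algebraically independent over $\Bbb Q$, because $\pi$ is transcendental.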


Tuesday 26 August 2014

combinatorics - Prove the following equality: $\sum_{k=0}^n\binom{n-k}{k} = F_n$




I need to prove that there is the following equality:



$$

\sum\limits_{k=0}^n {n-k \choose k} = F_{n}
$$
where $F_{n}$ is the $n$-th Fibonacci number.



The problem seems easy but I can't find the way to prove it. Could you give me any hints? I'm looking for a combinatorial proof.


Answer



Arthur Benjamin actually wrote a book that seemed to me to be nothing other than combinatorial properties of Fibonacci and other similar numbers, called "Proofs that Really Count". I found it rather interesting. $F_n$ has one popular representation as a combinatorial object, and that is the number of sequences of 1s and 2s that sum to form $n-1$. This is $F_n$ because $F_1 = F_2 = 1$, and the number of ways to make $n-1$ is the sum of the number of sequences that sum to $n-3$ (with a two appended at the end) and the number that sum to $n-2$ (with a 1 appended at the end). This can be recast with a number of other situations, like coloring sequences of tiles or placing dominoes, but it's the same idea.



Now this identity comes from counting the number of ways to form this sum that contain exactly $k$ $2s$ in the sequence. Summing this from $0$ to $n$ is the same as summing from $0$ to $[\frac{n}{2}]$ and is finding how many sequences with 1 two, 2 twos, and so on. This ends up being the total number of ways to make the sequence, although it comes out to be $F_{n+1}$ and not $F_n$, as the Fibonacci sequence is usually indexed at $1$ and not $0$ (hence why we are summing to $n-1$).
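A small case makes the indexing point concrete: for $n=4$ we get $\binom40+\binom31+\binom22+\binom13+\binom04=1+3+1+0+0=5$, which counts the five $1$-$2$ sequences summing to $4$ (namely $1111, 112, 121, 211, 22$); with the usual indexing $F_1=F_2=1$ this is $F_5$, not $F_4=3$.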


number theory - Primes and arithmetic progressions

In a book on complex analysis, the authors prove:




Given finitely many (non-trivial) arithmetic progressions of natural numbers $$a_1, a_1+d_1, a_1+2d_1, \cdots $$
$$a_2, a_2+d_2, a_2+2d_2, \cdots, $$
$$a_k, a_k+d_k, a_k+2d_k, \cdots, $$
their totality (union) is never $\mathbb{N}$.





Given: any $k$ (non-trivial) arithmetic progressions.



Let $S$ denote the set of those numbers which are not in any of above $k$ arithmetic progressions.



If $S$ would have been finite, then we can form new finitely many arithmetic progressions which covers $S$, and combining them with given progressions, we will cover $\mathbb{N}$ by finitely many arithmetic progressions, a contradiction.



Thus $S$, the complement of all the given $k$ progressions, must be infinite.




Question Does $S$ contain infinitely many primes?






A non-trivial arithmetic progression means, the common difference is not $1$, i.e. it is not of the type
$$a, a+1, a+2, \cdots $$

real analysis - Jump Continuity



I am reading Amann and Escher's trilogy on Analysis and at the very beginning of volume II, the authors begin with the definition of a step functions and what it means for a step function to be "jump continuous". Intuitively, I know what this means but the authors use very unfamiliar notation at this point. Specifically, they use $f(a + 0)$ to denote the limit of $f$ as $x$ approaches $a$ from the right (at least, this is what I think they mean by it) I have rewritten the definition as I understand it using notation that is more familiar and it reads:



A function $f:I\rightarrow E$ from the perfect interval $I$ to the Banach space $(E, ||\cdot||)$ is called a step function if $I$ has a partition $\mathcal{P} = (\alpha_0, \dots, \alpha_n)$ such that $f$ is constant on every open interval $(\alpha_{j-1}, \alpha_j)$

Moreover, if the limits $\lim \limits_{x \to \alpha^+}{f(x)}$ and $\lim \limits_{x \to \beta^-}{f(x)}$ exist and the limits
$\lim \limits_{x \to a^-}{f(x)}$ and $\lim \limits_{x \to a^+}{f(x)}$ exist for each $a$ in the interior of $I$ then $f$ is said to be
jump continuous



Can anyone comment on the correctness of the above definition and verify that I have interpreted the authors' meaning correctly?


Answer



There isn't much more to say to answer this question, except to confirm that you understood the definition correctly. I also find the notation $f(a+0)$ for the limit from one side of $a$ of the function $f$ quite strange.



Step functions are fundamental for various reasons; among the most elementary is that they are typically used in the development of the Lebesgue integral, although they (well, their `derivatives') are also typically used to motivate the theory of distributions and generalised functions.




Perhaps one can also store a link to the Wikipedia page here.


integration - Evaluate $\int _0^{\infty }\frac{x^6}{\left(x^4+a^4\right)^2}dx$

The function




$$f\left(z\right)=\frac{z^6}{\left(z^4+a^4\right)^2}$$



Has the following poles of order 2:



$$ z(k)=a \exp\left( \frac{\left(2k+1\right)}4 i\pi \right)$$



$f$ is even, therefore: $$\int _0^{+\infty }\frac{x^6}{\left(x^4+a^4\right)^2}dx =\frac{1}{2}\int _{-\infty }^{+\infty \:}\frac{x^6}{\left(x^4+a^4\right)^2}dx$$



$$\int _0^{+\infty }\frac{x^6}{\left(x^4+a^4\right)^2}dx=i\pi \sum _k\:Res\left(f,\:z\left(k\right)\right)$$




$$Res\left(f,\:z\left(k\right)\right)=\lim _{z\to z\left(k\right)}\left(\frac{1}{\left(2-1\right)!}\left(\frac{d}{dz}\right)^{2-1}\frac{z^6\left(z-z\left(k\right)\right)^2}{\left(z^4+a^4\right)^2}\right)$$



$$z^4+a^4=z^4-z_k^4\implies\dfrac{z^6(z-z_k)^2}{(z^4+a^4)^2}=\dfrac{z^6}{(z^3+z_k z^2+z_k^2 z+z_k^3)^2}$$



$$Res\left(f,\:z_k\right)=\lim _{z\to \:z_k}\left(\frac{d}{dz}\left(\frac{z^6}{\left(z^3+z_kz^2+z_k^2z+z_k^3\right)^2}\right)\right)$$



$$Res\left(f,\:z_k\right)=\frac{2z_kz^5\left(z^2+2z_kz+3z_k^2\right)}{\left(z^3+z_kz^2+z_k^2z+z_k^3\right)^3}=\frac{2z_k^6\cdot 6z_k^2}{\left(4z_k^3\right)^3}$$



$$Res\left(f,\:z_k\right)=\frac{12z_k^8}{64z_k^9}=\frac{3}{16z_k}$$




$$\int _0^{+\infty }\frac{x^6}{\left(x^4+a^4\right)^2}dx=\frac{3i\pi }{16a}\sum _{k=0}^n\:e^{-\frac{\left(2k+1\right)}{4}i\pi }$$



We consider only the residues within the upper half plane, that is to say those corresponding to $k=0$ and $k=1$.



$$\int _0^{+\infty \:}\frac{x^6}{\left(x^4+a^4\right)^2}dx=\frac{3i\pi \:}{16a}\left(e^{-\frac{i\pi }{4}\:\:}+e^{-\frac{3i\pi \:}{4}\:\:}\right)$$



$$\int _0^{+\infty \:}\frac{x^6}{\left(x^4+a^4\right)^2}dx=\frac{3i\pi \:}{16a}\left(\frac{\sqrt{2}}{2}\:-i\frac{\sqrt{2}}{2}-\frac{\sqrt{2}}{2}-i\frac{\sqrt{2}}{2}\right)$$



$$\int _0^{+\infty \:}\frac{x^6}{\left(x^4+a^4\right)^2}dx=\frac{3\pi \sqrt{2}\:}{16a}$$
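As an independent check: the substitution $x=ay$ reduces the integral to $\frac1a\int_0^{\infty}\frac{y^6}{(y^4+1)^2}dy$, and the standard Beta-function evaluation $\int_0^{\infty}\frac{y^{s-1}}{(1+y^4)^2}dy=\frac14 B\!\left(\frac s4,\,2-\frac s4\right)$ with $s=7$ gives $\frac14\Gamma\!\left(\frac74\right)\Gamma\!\left(\frac14\right)=\frac3{16}\,\Gamma\!\left(\frac34\right)\Gamma\!\left(\frac14\right)=\frac{3\pi}{16\sin(\pi/4)}=\frac{3\pi\sqrt2}{16}$, in agreement with the residue computation.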

Monday 25 August 2014

real analysis - Alternative proofs that if $a_n \leqslant b_n$ then $\lim a_n \leqslant \lim b_n$



A well-known limit property asserts that if $a_n$ and $b_n$ are convergent sequences and $a_n \leqslant b_n$ for all $n$, then $\lim a_n \leqslant \lim b_n$.The most common means of proof is by contrapositive, but are there any other nice ways of proving this without using contrapositive?


Answer



Assume $a_n \to A$ and $b_n \to B$. We use the fact that $A \le B$ if and only if $A < B + \epsilon$ for every $\epsilon > 0$.



Let $\epsilon > 0$ be given.




There exists an index $n$ with the property that $|a_n - A| < \epsilon/2$ and $|b_n - B| < \epsilon/2$. Thus $$A < a_n + \epsilon/2 \le b_n + \epsilon/2 < B + \epsilon.$$



Thus $A \le B$.


real analysis - Differentiability of $f(x+y) = f(x)f(y)$

Let $f$: $\mathbb R$ $\to$ $\mathbb R$ be a function such that $f(x+y)$ = $f(x)f(y)$ for all $x,y$ $\in$ $\mathbb R$. Suppose that $f'(0)$ exists. Prove that $f$ is a differentiable function.




This is what I've tried:
Using the definition of differentiability and taking arbitrary $x_0$ $\in$ $\mathbb R$.



$$\lim_{h\to 0} \frac{f(x_0 + h)-f(x_0)}{h} = \cdots = f(x_0)\lim_{h\to 0} \frac{f(h) - 1}{h}.$$



Then since $x_0$ arbitrary, using $f(x_0+0) = f(x_0) = f(x_0)f(0)$ for $y = 0$, can I finish the proof?

summation - Sum of Geometric Series Formula




I just need the formula for the sum of geometric series when each element in the series has the value $1/2^{j+1}$, where $j = 0, 1, 2, \ldots, n$. Please help.



Someone told me it is:



$$S = 2 - \frac{1}{2^n}$$



I am not sure if its right because he has given me no proof and I couldn't prove it when I calculate it manually. Say for example:




$$S = 1/2 + 1/4 + 1/8 = .875$$



But when using the formula given above, with $n=3$ (since there are $3$ elements):



$$S = 2 - 1/8 = 1.875$$



The answers are not the same. Please enlighten me with this issue.


Answer



Consider the $n$th partial sum

$$S_n = \sum_{j = 0}^{n} r^j = 1 + r + r^2 + \cdots + r^n$$
of the geometric series
$$\sum_{j = 0}^{\infty} r^j$$
with common ratio $r$.



If we multiply $S_n$ by $1 - r$, we obtain
\begin{align*}
(1 - r)S_n & = (1 - r)(1 + r + r^2 + \cdots + r^n)\\
& = 1 + r + r^2 + \cdots + r^n - (r + r^2 + r^3 + \cdots + r^{n + 1})\\
& = 1 - r^{n + 1}

\end{align*}
If $r \neq 1$, we may divide by $1 - r$ to obtain
$$S_n = \frac{1 - r^{n + 1}}{1 - r}$$
In particular, if $r = 1/2$, we obtain
\begin{align*}
S_n & = \sum_{j = 0}^{n} \left(\frac{1}{2}\right)^j\\
& = \frac{1 - \left(\frac{1}{2}\right)^{n + 1}}{1 - \frac{1}{2}}\\
& = \frac{1 - \left(\frac{1}{2}\right)^{n + 1}}{\frac{1}{2}}\\
& = 2\left[1 - \left(\frac{1}{2}\right)^{n + 1}\right]\\
& = 2\left(1 - \frac{1}{2^{n + 1}}\right)\\

& = 2 - \frac{1}{2^n}
\end{align*}
which is the formula you were given.



However, you want
\begin{align*}
\sum_{j = 0}^{n} \frac{1}{2^{j + 1}} & = \frac{1}{2} \sum_{j = 0}^{n} \frac{1}{2^j}\\
& = \frac{1}{2} \sum_{j = 0}^{n} \left(\frac{1}{2}\right)^j\\
& = \frac{1}{2}\left[2 - \frac{1}{2^n}\right]\\
& = 1 - \frac{1}{2^{n + 1}}

\end{align*}
As a check, observe that when $n = 2$
$$\sum_{j = 0}^{2} \frac{1}{2^{j + 1}} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} = \frac{7}{8} = 0.875$$
and
$$1 - \frac{1}{2^{2 + 1}} = 1 - \frac{1}{2^3} = 1 - \frac{1}{8} = \frac{7}{8} = 0.875$$
In your calculation, you used $n = 3$ because you did not take into account the fact that if the index starts with $0$, the third term is $n = 2$.


linear algebra - Is left inverse implying right inverse in matrix a property of structure?

If $A$ is a square matrix and there exists a square matrix $B$ such that $AB =1$, then it is known that $BA=1$. This property is usually proved with facts from linear algebra. I've never seen it proved purely from the structure of matrix multiplication, yet I couldn't find a counterexample: a set with the structure of matrix multiplication in which a left inverse does not imply a right inverse.



To be more specific, let $X$ be a set and let a binary operation $\cdot$ be defined on $X$. If $\cdot$ is associative and $X$ has a left and right identity (which will be the same), then does $A \cdot B = 1$ for some $A, B\in X$ imply $B \cdot A = 1$?



If not, what other properties of matrix multiplication should we add to this structure of $(X,\cdot)$ in order to get the property?

calculus - When are differentials actually useful?



I think of differentials as a way to approximate $\Delta y$ in a function $y = f(x)$ for a certain $\Delta x$.



The way I understood it, the derivative itself is not a ratio because you can't get $\frac{dy}{dx}$ by taking the ratio of the limits of the numerator and denominator separately.



However, once you do have $\frac{dy}{dx}$, you can then think of $dx$ as $\Delta x$ and of $dy$ as the change in $y$ for a slope $\frac{dy}{dx}$ over a certain $\Delta x$.




My problem is that I don't know why differentials are useful. The examples I saw are along the lines of approximating the max error in a sphere's volume if we know that the radius we're given (let's say 20) has a possible error of let's say 0.01.



In this kind of example, it seems to me we're better off computing $V(20.01) - V(20)$, instead of $\Delta V \approx dV = V' \cdot \Delta x$.



In the first case at least we get an exact maximum error instead of an approximation.



So my question is: When are differentials actually useful? Are they ever anything more than at best a time saver over hand computations? Are they useful at all in today's world with Wolfram Alpha and other strong computational engines around?


Answer



The list is endless, as the comments have started to indicate.




An example of an extremely important real-world application, on which literally trillions of dollars a day depends, comes from the risk management (hedging) of derivatives contracts from the point of view of a large bank or dealer. Say you call up Goldman Stanley's equity desk to purchase an equity call option, where the underlying equity has a continuously quoted market price of $S_{t}$ and the pricing function for the derivative contract is $f(S_{t},t)$ (at least according to the pricing models the dealer depends on and uses), then the "delta" of the derivative is
$$\frac{\partial f}{\partial S}(S(t),t)$$
and represents a first order approximation of the amount of risk a dealer has from selling the option to their client (i.e., as a first approximation, it quantifies how much money the dealer will gain or lose when $S$ changes by an amount $\Delta S$). In particular,
$$\Delta f_{t}\approx\frac{\partial f}{\partial S}\Delta S_{t}.$$
The market-maker is thus constantly buying and selling $|\partial f/\partial S|$ of the underlying asset in order to hedge (eliminate $S$-risk of) their exposure. They make money from the initial transaction costs/spreads, provided they effectively execute the hedging strategy, and avoid taking bets that their short position will be in the money and their clients' long position out of the money.



Obviously this differential is only a first-order approximation (and we're ignoring dependence on $t$ as well). To neutralize risk even more effectively, traders also try to be "gamma-neutral" in addition to "delta-neutral" by adjusting their hedging strategy according to the second-order approximation
$$\Delta f_{t}\approx\frac{\partial f}{\partial S}\Delta S_{t}+\frac{1}{2}\frac{\partial^{2}f}{\partial S^{2}}(\Delta S)^{2}.$$




Other differentials that traders commonly use to manage their risk involve the quantities "vega", "charm", etc., these nicknames just being cute trader-speak for the partial derivatives of the pricing function with respect to other market variables for which it depends, including volatility, time, etc., respectively.


abstract algebra - Degree of Field Extension $\mathbb{Q}(\sqrt[4]{2}):\mathbb{Q}(\sqrt{2})$



How do I find the degree of this field extension?




$$ \mathbb{Q}(\sqrt[4]{2}):\mathbb{Q}(\sqrt{2}) $$



I've tried thinking of the larger field as a vector space over the smaller one to find a basis, but I haven't had any luck. There's no tower law to use here, and I don't think I can use a minimal polynomial argument here, because both fields are different (the larger one isn't the smaller one plus some algebraic element).


Answer



Hint: $\left[\mathbb{Q}(\sqrt[4]{2}):\mathbb{Q}\right] =\left[\mathbb{Q}(\sqrt[4]{2}):\mathbb{Q}(\sqrt{2})\right]\cdot \left[\mathbb{Q}(\sqrt{2}):\mathbb{Q}\right]$
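For completeness: $X^4-2$ is irreducible over $\mathbb{Q}$ (Eisenstein at $2$), so $\left[\mathbb{Q}(\sqrt[4]{2}):\mathbb{Q}\right]=4$, while $\left[\mathbb{Q}(\sqrt{2}):\mathbb{Q}\right]=2$; the hint then gives $\left[\mathbb{Q}(\sqrt[4]{2}):\mathbb{Q}(\sqrt{2})\right]=4/2=2$, with $\{1,\sqrt[4]{2}\}$ as a basis over $\mathbb{Q}(\sqrt{2})$.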


calculus - Prove: Monotonic And Bounded Sequence- Converges

Let $a_n$ be a monotonic and bounded sequence; WLOG let us assume it is monotonically increasing.
Since $a_n$ is bounded, there is a supremum, $\sup(a_n)=a$, and therefore $a_n\le a$ for all $n$. On the other hand, since $\sup(a_n)=a$, for every $\epsilon>0$ there is an $N$ such that $a-\epsilon<a_N$; because the sequence is increasing, it follows that $a-\epsilon< a_n\le a<a+\epsilon$ for all $n\ge N$, so $a_n\to a$.


Is the proof valid? does it apply to strictly monotonic sequence too?

sequences and series - How to compute $\sum_{x=0}^{\infty}xa^x$?





Need a hint to compute $\displaystyle \sum_{x=0}^\infty xa^x$ and $\displaystyle \sum_{x=0}^\infty x^2a^x$, where $a \in (0,1)$.


Answer



Hint: Differentiate $\sum_{x=0}^{\infty}a^x$ with respect to $a$.
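Carrying the hint out: since $\sum_{x=0}^{\infty}a^x=\frac{1}{1-a}$ for $a\in(0,1)$, differentiating with respect to $a$ and multiplying by $a$ gives
$$\sum_{x=0}^{\infty}xa^x=\frac{a}{(1-a)^2},$$
and repeating the same operation once more yields $\sum_{x=0}^{\infty}x^2a^x=\frac{a(1+a)}{(1-a)^3}$.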


Sunday 24 August 2014

real analysis - Proving a function is continuous on all irrational numbers





Let $\langle r_n\rangle$ be an enumeration of the set $\mathbb Q$ of rational numbers such that $r_n \neq r_m\,$ if $\,n\neq m.$ $$\text{Define}\; f: \mathbb R \to \mathbb R\;\text{by}\;\displaystyle f(x) = \sum_{r_n \leq x} 1/2^n,\;x\in \mathbb R.$$
Prove that $f$ is continuous at each point of $\mathbb Q^c$ and discontinuous at each point of $\mathbb Q$.




I find this question very challenging and have no idea even how to start off with the proof. Please suggest a proof or any hint.


Answer



This is a part of my answer here, but it should completely answer your questions too.



I use the notation $$\lim_{y\to x^{+}}f(y)=f(x^+)$$ $$\lim_{y\to x^{-}}f(y)=f(x^-)$$







There is a very nice way of constructing, given a sequence $\{x_n\}$ of real numbers, a function which is continuous everywhere except the elements of $\{x_n\}$ [that is, discontinuous on a countable set $A\subset\Bbb R$]. Let $\{c_n\}$ be any nonnegative summable sequence [that is, $\sum\limits_{n\geq 0} c_n$ exists finitely], and let $$s(x)=\sum_{x_n<x}c_n.$$

What we do is sum through the indices that satisfy the said inequality. Because of absolute convergence, order is irrelevant. The function is monotone increasing because the terms are nonnegative, and $s$ is discontinuous at each $x_n$ because $$s(x_n^+)-s(x_n^-)=c_n$$



However, it is continuous at any other $x$: see xzyzyz's proof with the particular case $c_n=n^{-2}$. In fact, this function is lower continous, in the sense $\lim\limits_{y\to x^{-}}f(y)=f(x^-)=f(x)$ for any value of $x$. If we had used $x_n\leq x$, it would be upper continuous, but still discontinuous at the $x_n$.



To see that the function has the said jumps, note that for $h>0$ we have $$s(x_k+h)-s(x_k-h)=\sum_{x_k-h\le x_n<x_k+h}c_n,$$ and by summability we can take $h$ so small that the terms with $n\neq k$ in this sum contribute less than any prescribed $\epsilon$; hence the difference tends to $c_k$ as $h\to 0^+$.

exponential function - Using L'Hopital's rule to evaluate the limit $\lim_{x \to 0} \left(e^x+x\right)^{\frac{1}{x}}$



I am trying to use L'Hopital's rule to evaluate the following limit but I am not sure I am doing it correctly.




$$\lim_{x \to 0} \left(e^x+x\right)^{\frac{1}{x}}.$$



To use L'Hopital's rule I have been told that the limit has to evaluate to certain indeterminate forms before it can be applied; in the case above we would get $1^{\infty}$, so it can be used.



Here are my steps to evaluate the limit:



$$\lim_{x \to 0} \left(e^x+x\right)^{\frac{1}{x}}.$$



$$\lim_{x \to 0} \ln\left(\left(e^x+x\right)^{\frac{1}{x}}\right) = \lim_{x \to 0} \frac{1}{x} \cdot \ln\left(e^x+x\right)$$
$$\lim_{x \to 0} \frac{\ln\left(e^x+x\right)}{x} = \lim_{x \to 0} \frac{1}{x^2+xe^x}$$

$$\lim_{x \to 0} \frac{0}{2x+e^x+xe^x} = \frac{0}{1}=0$$



As I took the natural log at the start the evaluated limit would be $e^0=1$.



So I have two questions. Firstly, is my method correct? I am particularly unsure about what to do when (for example in the 4th line) I get a $0$ as my numerator: is $\frac{0}{1}$ considered to be an indeterminate form? Secondly, if the method is correct, is my answer correct? Is there a way to verify that what I have done really evaluates the limit?



I am not sure of the definition of an indeterminate form, as in my lecture notes I have that I can apply L'Hopital's rule on problems with $1^{\infty}$, but I am not sure if this is an indeterminate form in the same way that $\frac{0}{0}$ is considered to be undefined. I am not sure my last point is entirely clear, so if it is not please let me know and I will try to clarify further.


Answer



L’Hospital’s rule applies only to limits of the form $0/0$ and $\infty/\infty$. However, there is a standard trick for converting the indeterminate form $1^\infty$ to one of these forms so that l’Hospital’s rule can be applied, and that’s essentially what you’re doing here. Let me write it up in a way that makes a little clearer exactly what is going on.




Let $L=\displaystyle\lim_{x\to 0}\left(e^x+x\right)^{1/x}$. Then $$\ln L=\ln\lim_{x\to 0}\left(e^x+x\right)^{1/x}=\lim_{x\to 0}\ln\left(e^x+x\right)^{1/x}\;,$$ since $\ln$ is a continuous function. Thus,



$$\ln L=\lim_{x\to 0}\ln\left(e^x+x\right)^{1/x}=\lim_{x\to 0}\frac1x\ln(e^x+x)=\lim_{x\to 0}\frac{\ln(e^x+x)}x\;.$$



This last limit is a $\frac00$ indeterminate form, so l’Hospital’s rule applies:



$$\ln L=\lim_{x\to 0}\frac{\ln(e^x+x)}x=\lim_{x\to 0}\frac{\frac{e^x+1}{e^x+x}}1=\lim_{x\to 0}\frac{e^x+1}{e^x+x}=\frac21=2\;.$$



Finally, then $L=e^{\ln L}=e^2$.




You went astray when you wrote $$\lim_{x \to 0} \frac{\ln\left(e^x+x\right)}{x} = \lim_{x \to 0} \frac{1}{x^2+xe^x}\;:$$ you did not differentiate the numerator correctly, and you did not differentiate the denominator at all.



If $\lim_{x\to a}f(x)=0$ and $\lim_{x\to a}g(x)=1$, then $\lim_{x\to a}\frac{f(x)}{g(x)}=\frac01=0$; there is nothing indeterminate here.
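
As a quick numerical sanity check (an editorial sketch in Python, not part of the original answer), one can evaluate $(e^x+x)^{1/x}$ for small $x$ and compare with $e^2\approx 7.389056$:

# Numerically probe (e^x + x)^(1/x) for small x; illustrative only.
import math

for x in (1e-2, 1e-4, 1e-6):
    print(x, (math.exp(x) + x) ** (1 / x))
print("e^2 =", math.exp(2))   # the sampled values approach this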


algebra precalculus - How do I do this mathematical induction question?



My question:$5+10+20+...+5(2)^{n-1} = 5(2^n -1)$




  1. So the first step is to prove LHS = RHS when $n=1$, which is true.

  2. Then I assume the statement is true for $n=k$


  3. Since the statement is true for $n=k$ then for $n=k+1$



My workings:



$5+10+20+...+5(2)^{k-1} +5(2)^{(k+1)-1}= 5(2^{k+1} -1)$



LHS: $5(2^{k-1}) + 5(2)^k$



Then I do not know how to proceed with the simplification. In general, can someone show some steps and show me how to tackle simplifying this kind of question?



Answer



$5(2^k - 1) + 5(2^k) = 5(2^{k+1} -1)$
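
Spelling out the algebra in that single line (an editorial expansion, not part of the original answer): by the induction hypothesis the first $k$ terms sum to $5(2^k-1)$, so adding the $(k+1)$-st term $5\cdot 2^k$ gives
$$5(2^k-1)+5\cdot 2^k = 5\left(2^k-1+2^k\right)=5\left(2\cdot 2^k-1\right)=5\left(2^{k+1}-1\right),$$
which is the right-hand side for $n=k+1$.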


calculus - How do I solve $\lim_{x \to \infty} x(e^{1/x}-1)$ without L'Hopital?



I don't have experience with L'Hopital's rule or series, and that's what most solutions use. Is there another method that can be used to solve this limit? I thought about trying to use the first-principles definition of the derivative, but I don't know where to begin. I need some help to point me in the right direction.


Answer



This is precisely $$\lim_{x \to +\infty} \frac{\exp\left(\frac 1x\right) - 1}{\frac 1x} = \lim_{u \to 0^{+}} \frac{\exp\left(u \right) - 1}{u}$$



Does that remind you of some derivative?
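
A quick numerical illustration (an editorial sketch, not part of the original hint) suggests what that derivative evaluates to:

# Probe x*(e^(1/x) - 1) for increasingly large x; illustrative only.
import math

for x in (10.0, 1e3, 1e6, 1e9):
    print(x, x * (math.exp(1 / x) - 1))
# The printed values approach 1, consistent with reading the limit as the
# difference quotient of exp at 0, i.e. exp'(0).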


Saturday 23 August 2014

number theory - $7^n$ contains a block of consecutive zeroes


Prove that there is a positive integer $n$ such that the decimal representation of $7^n$ contains a block of at least $m$ consecutive zeros, where $m$ is any given positive integer.




I will prove it more generally for any prime $p$. It is sufficient to find an $n$ such that $p^n$ begins with the number $100\ldots 0$ which has exactly $m$ zeroes. Thus, we are looking for $n$ and $k$ with $k < m$ such that $10^m 10^k \leq p^n < 10^k(10^m+1)$. This is equivalent to $m \leq n\log_{10}{p}-k < \log_{10}(10^m+1)$.




Where do I go from here?
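
Although it does not answer the question, the statement is easy to probe experimentally; here is a minimal brute-force Python sketch (the function name and search bound are my own, purely illustrative):

# Find the smallest n (up to a search bound) such that 7^n contains a block
# of m consecutive zeros in its decimal expansion; returns None if not found.
def first_power_with_zero_block(m, limit=4000):
    target = "0" * m
    power = 1
    for n in range(1, limit + 1):
        power *= 7
        if target in str(power):
            return n
    return None

for m in range(1, 4):
    print(m, first_power_with_zero_block(m))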

Proof by induction in trigonometry.





Prove that $\cos x +\cos 2x + \cos 3x + ...+ \cos nx =\cos \left(\dfrac{n+1}{2}x\right) \sin \left(\dfrac{nx}{2}\right)\csc \dfrac{x}{2}$





Attempt:



Clearly, $P(1)$ is true.



Assume $P(m)$ is true.



Thus, $P(m+1) = (\cos x +\cos 2x + \cos 3x + ...+ \cos mx)+ \cos((m+1)x)$




$= \cos \left(\dfrac{m+1}{2}x\right) \sin \left(\dfrac{mx}{2}\right)\csc \dfrac{x}{2} + \cos((m+1)x)
\\= \csc (\dfrac x 2)\left(\cos \left(\dfrac{m+1}{2}x\right) \sin \left(\dfrac{mx}{2}\right)+ (\cos(m+1)x)\sin (\dfrac x 2)\right)$



What do I do next?


Answer



Formula to be used:



$\sin A- \sin B = 2\cos\left(\dfrac{A+B}{2}\right)\sin \left(\dfrac{A-B}{2}\right)$, or equivalently $\cos P\sin Q = \dfrac 1 2\bigl(\sin(P+Q)-\sin(P-Q)\bigr)$



Thus,

$\csc \left(\dfrac x 2 \right)\left(\cos \left(\dfrac{m+1}{2}x\right) \sin \left(\dfrac{mx}{2}\right)+ (\cos(m+1)x)\sin (\dfrac x 2)\right)$



$= \csc \left(\dfrac x 2 \right)\left(\dfrac 1 2 \left(\sin \dfrac{2mx+x}{2} - \sin \dfrac x 2 \right)+ \dfrac 1 2 \left(\sin \dfrac{2mx+3x}{2} - \sin \dfrac{2mx + x}{2 } \right) \right)$



Now, again use the formula on the left out terms.



$= \csc \left(\dfrac x 2 \right)\left(\dfrac 1 2 \left(\sin \dfrac{2mx+3x}{2} - \sin \dfrac x 2 \right) \right)$



$= \cos \left(\dfrac{m+2}{2}x\right) \sin \left(\dfrac{(m+1)x}{2}\right)\csc \dfrac{x}{2} $




Thus, $P(m+1)$ is also true.



Q.E.D.
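
A quick numerical spot-check of the identity (an editorial sketch, not part of the proof):

# Compare sum_{k=1}^n cos(kx) with the closed form at a few sample points.
import math

def lhs(n, x):
    return sum(math.cos(k * x) for k in range(1, n + 1))

def rhs(n, x):
    return math.cos((n + 1) * x / 2) * math.sin(n * x / 2) / math.sin(x / 2)

for n in (1, 5, 10):
    for x in (0.3, 1.1, 2.5):
        assert abs(lhs(n, x) - rhs(n, x)) < 1e-9
print("identity verified at the sampled points")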


complex numbers - How can I compute the limit of this sequence: $\sqrt[n]{\sin n}$?



I need to calculate the limit of the following sequence:




$$\lim _ {n \to \infty} \sqrt[n]{\sin(n)}$$



where the $n$-th root of a negative number is defined as the principal complex root.



I suspect the answer to be $1$, but I do not know how to prove it.


Answer



The problem boils down to proving that $\sin(n)$ cannot be too close to zero for small values of $n$.



We know that $\pi$ is a transcendental number with a finite irrationality measure. In particular, the inequality

$$ \left| \pi-\frac{p}{q}\right| \leq \frac{1}{q^{10}} $$
may hold only for a finite number of rational numbers $\frac{p}{q}$, hence (since $\left|\sin x\right|\geq K\left|x-k\pi\right|$ when $x$ is close to $k\pi$, thanks to Adayah) in the general case $\left|\sin(n)\right|$ is greater than $\frac{C}{n^9}$ for some constant $C$. That is enough to ensure that the wanted limit is $1$ by squeezing.
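
A numerical illustration of the squeeze (an editorial sketch, not part of the original answer); the quantity sampled is the modulus $|\sin n|^{1/n}$ of the principal root:

# Sample |sin n|^(1/n) for large n; the values stay very close to 1.
import math

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, abs(math.sin(n)) ** (1.0 / n))

# Worst case over a tail of the sequence -- still close to 1.
print(min(abs(math.sin(n)) ** (1.0 / n) for n in range(10**5, 2 * 10**5)))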


algebra precalculus - Find closed formula by changing order of summation: $\sum_{i=1}^n i3^i$



Working on homework for a probability and computing class, but my ability to work with summations is rusty to say the least, so I suspect this is going to turn out pretty straightforward.



Problem asks to find a closed formula for $$\sum_{i=1}^ni3^i$$ by representing it as a double sum and changing the order of summation. I did that by following a hint from the instructor and came up with $$\sum_{k=1}^n\sum_{i=k}^n3^i,$$ but I'm not really sure what that accomplished. What's the next step? What am I looking for here?


Answer



Here is a rather detailed elaboration which might be useful.





We obtain
\begin{align*}
\color{blue}{\sum_{i=1}^ni3^i}&=\sum_{i=1}^n\left(\sum_{k=1}^i 1\right)3^i\tag{1}\\
&=\sum_{i=1}^n\sum_{k=1}^i 3^i
=\sum_{1\leq k\leq i\leq n}3^i
=\sum_{k=1}^n\sum_{i=k}^n3^i\tag{2}\\
&=\sum_{k=1}^n\sum_{i=0}^{n-k}3^{i+k}\tag{3}\\
&=\sum_{k=1}^n3^k\cdot\frac{3^{n-k+1}-1}{3-1}\tag{4}\\
&=\frac{1}{2}\sum_{k=1}^n\left(3^{n+1}-3^k\right)\tag{5}\\

&=\frac{n}{2}3^{n+1}-\frac{1}{2}\sum_{k=1}^n3^k\tag{6}\\
&=\frac{n}{2}3^{n+1}-\frac{1}{2}\cdot\left(\frac{3^{n+1}-1}{3-1}-1\right)\tag{7}\\
&=\frac{n}{2}3^{n+1}-\frac{1}{4}3^{n+1}+\frac{3}{4}\tag{8}\\
&\color{blue}{=\frac{2n-1}{4}\,3^{n+1}+\frac{3}{4}}\tag{9}
\end{align*}




Comment:





  • In (1) we represent the factor $i$ as sum.


  • In (2) we multiply out in the left-hand sum and write the index range somewhat more conveniently in the middle sum. We exchange the sums in the right-hand double-sum.


  • In (3) we shift the index of the inner sum to start from $i=0$.


  • In (4) we apply the finite geometric summation formula.


  • In (5) we do some simplifications.


  • In (6) we multiply out and do some simplifications.


  • In (7) we apply the finite geometric summation formula again.


  • In (8) and (9) we do some more simplifications.
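
As a brute-force check of the closed form $\frac{2n-1}{4}3^{n+1}+\frac{3}{4}$ (an editorial addition, not part of the original derivation):

# Compare the closed form against the raw sum for small n, using exact arithmetic.
from fractions import Fraction

def closed_form(n):
    return Fraction(2 * n - 1, 4) * 3 ** (n + 1) + Fraction(3, 4)

for n in range(1, 11):
    assert closed_form(n) == sum(i * 3 ** i for i in range(1, n + 1))
print("closed form matches the direct sum for n = 1..10")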



Friday 22 August 2014

probability - Finding the PDF of the sum of the squares of two independent standard normal variables

The question asks for the PDF of $$Y=(X_1)^2+(X_2)^2$$
Given that $X_1$ and $X_2$ are independent standard normal variables.
I found that the pdf for $(X_i)^2$ is
$$f_{X^2}(x) = \frac{1}{\sqrt{2\pi}}\,x^{-1/2}e^{-x/2}$$ for $x > 0$.
But I'm stuck on what to do next.

Proof that there are infinitely many prime numbers $p$ such that $p-2$ is not prime.



Just need verification of this proof by contradiction:




Let $p_k$ be the largest prime number such that $p_k-2$ is not a prime number. Let $p_l$ be some prime number such that $p_l > 3p_k$. We know that $p_l$ exists because there are infinitely many prime numbers as proven elsewhere.



$p_l-2$ is a prime number. $p_l-4$ is a prime number as well. By induction, all numbers $x$ with $p_k < x < p_l$ and $x \bmod 2 = 1$ are prime numbers. But $p_k<3p_k<p_l$, and $3p_k$ is an odd composite number, which is a contradiction.

Also would love to see more concise proofs if at all possible.



Edit:



Great that I got so many people trying to provide their own proof but the question was primarily asked so that I could have my proof verified. Only skyking addressed my question, though I do not yet understand his criticism.


Answer




First of all some criticism: the proof seems to be somewhat overly complicated. But I see no fault in it.



It's not that clear that $p_l-4$ is a prime number, at least not without using the fact that $p_k$ is the largest prime such that $p_k-2$ is not a prime.



The statement however follows quite easily from two observations:




  1. There are infinitely many primes

  2. There are infinitely many odd composite numbers (non-primes)




For each odd composite number $k$ there exists a smallest prime $p(k)>k$. Now since it's the smallest such prime you have that $p(k)-2$ is not a prime ($p(k)-2\ge k$, either it's $k$ and composite or between $k$ and $p(k)$ and therefore composite).



The second observation is a direct consequence of the fact that $3(2n+1)$ is composite. The first holds because otherwise (if there were only finitely many primes) the product of all primes plus 1 would have a prime factor not on the list.
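
The construction above is easy to test numerically (an editorial sketch with a naive primality test, not part of the original answer): for every odd composite $k$, the next prime $p$ after $k$ should have $p-2$ composite.

# For each odd composite k below 200, take the next prime p > k and check
# that p - 2 is not prime, as predicted above. Trial division; illustrative only.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

odd_composites = [k for k in range(9, 200, 2) if not is_prime(k)]
for k in odd_composites:
    p = k + 1
    while not is_prime(p):
        p += 1
    assert not is_prime(p - 2), (k, p)
print("for every odd composite k below 200, the next prime p has p - 2 composite")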


real analysis - 1-periodic and continuous is uniformly continuous

Let $f$ be a $1$-periodic function that is continuous on $\mathbb{R}$. Then $f$ is uniformly continuous on $\mathbb{R}$.



Indeed, since $f$ is continuous on $\mathbb{R}$, it is continuous in $[0,1]$ which is compact; therefore $f$ is uniformly continuous there, that is, for $\varepsilon>0$ there exists $\delta(\varepsilon)>0$ such that for all $x,y \in [0,1]$ $|x-y|<\delta\implies|f(x)-f(y)|<\varepsilon$. Now, let $x,y$ be two points of $\mathbb{R}$ such that $|x-y|<\delta(\varepsilon/2)$. We distinguish two cases:



Case 1: there exists an integer $n$ between $x$ and $y$. Then without loss of generality (periodicity), we may assume that $x<1\leq y$ with $x\in[0,1]$. Since $|x-1|<\delta(\varepsilon/2)$ we get $|f(x)-f(1)|<\varepsilon/2$, and since $|(y-1)-0|<\delta(\varepsilon/2)$ we get, by periodicity, $|f(y)-f(1)|=|f(y-1)-f(0)|<\varepsilon/2$; hence $|f(x)-f(y)|<\varepsilon$.

Case 2: there is no integer between $x,y$. Then without loss of generality (periodicity) we may assume that $x,y\in (0,1)$ and we have nothing to prove.



Any objections?

calculus - Where to find interesting integrals for a Calc III student?

I apologize in advance if this is a very soft question. I won't be surprised or offended if I can't get a good answer.



One of my favorite things to do in my spare time, when I'm feeling analytical of course, is to evaluate integrals, both definite and indefinite. However, I've had little success here on Math.SE trying to find integrals that meet my criteria.



Either the integral in question is way beyond the methods that I understand to evaluate it (typically requiring contour integration), or it is so mind-numbingly trivial that I can't be bothered writing it down. I've scoured the internet for some interesting integrals, and I found the MIT Integration Bee, but those aren't really that hard either. There are some decent ones in my multivariable calculus textbook, but I'm starting to run out of those too.



Is there any specific place I should be looking for interesting integrals that are tough but doable without complex analysis? Specifically, ones we can evaluate through tricks like clever substitutions, exploitation of symmetry, changes of coordinates, etc.

continuity - Prove discontinuity of piecewise linear function using epsilon-delta



Let $f(x)=\begin{cases}
2x + 3&\text{ for }x\geq 1,\\
-x+5 &\text{ for }x<1.
\end{cases}$



$f$ is continuous from the right at $x\geq1$.
The proof would be:




Let $\epsilon>0$ be arbitrary.



Let $x_0\geq1$.



Let $\delta=\epsilon/2$.



Let $x\in \mathbb{R}$ and $x_0\leq x<x_0+\delta$.

Thus $|f(x)-f(x_0)|=|2x+3-2x_0-3|=|2x-2x_0|=2|x-x_0|<2\delta=2\epsilon/2=\epsilon$.




This comes from the definition of continuity from the right: $\forall\epsilon>0\; \exists\delta>0$ such that if $x\in I$ and $x_0\leq x<x_0+\delta$, then $|f(x)-f(x_0)|<\epsilon$.

Prove $f$ is discontinuous from the left at $x=1$ using the definition: $\exists\epsilon>0\; \forall\delta>0$ such that there is an $x\in I$ with $x_0-\delta<x<x_0$ and $|f(x)-f(x_0)|\geq\epsilon$.

I can't seem to find $\epsilon$. But I think the proof would go like:



Let $\epsilon$ = ?



Let $\delta >0$ be arbitrary.




Let $x_0<1$.



Let $x\in I$, that is to say $x<1$ and $x_0-\delta<x<x_0$.

Then from there is figuring out $|f(x)−f(x_0)|\geq\epsilon$ which I don't get because $|f(x)−f(x_0)|=|-x+5 +x_0-5|=|-x+x_0|=|x-x_0|<\delta$.



So would you set $\epsilon\leq\delta$?



I know my definitions are correct. My teacher has drilled them into our brains. $I$ stands for the domain of $f$, which is the reals $\mathbb{R}$, except the domain is split in two. And by "from the right" and "from the left" I mean that the gap between $x$ and $x_0$, denoted $\delta$, or $|x-x_0|<\delta$, is only taken on one side, either adding or subtracting $\delta$, not both, which would be $x_0-\delta<x<x_0+\delta$.

Answer



Hint 1: Show that $\lim_{x\rightarrow 1^{-}}f(x)=4$. However, $f(1)=5$, which is not equal to the left limit. Showing this is sufficient to say that $f$ is not left continuous at $1$. Hope this helps.



Hint 2: Take $\epsilon=1/2$. Let $\delta>0$. What I want you to do is to prove the negation of the definition of left continuity. Suppose $x<1$. Observe that $|f(x)-f(1)|=|-x+5-2(1)-3|=|-x|=|x|$. Clearly, whichever interval $(1-\delta,1)$ is formed depending on $\delta$, there always exists an $x$ in $(1-\delta,1)$ such that $|x|>1/2$ (so $|f(x)-f(1)|>1/2$). This is not written in the most formal way; I hope you can figure out the whole answer.
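
A tiny numerical illustration of Hint 1 (an editorial sketch, not part of the answer): sampling $f$ on both sides of $1$ shows the one-sided values approaching $4$ and $5$.

# Evaluate the piecewise function just left and right of x = 1.
def f(x):
    return 2 * x + 3 if x >= 1 else -x + 5

for h in (1e-1, 1e-3, 1e-6):
    print(h, f(1 - h), f(1 + h))   # left values -> 4, right values -> 5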


algebraic geometry - If $p$ is a positive multivariate polynomial, does $1/p$ have polynomial growth?



I wanted to ask a separate question to focus on an elementary issue from my question Does the inverse of a polynomial matrix have polynomial growth?.




Let $p : \mathbb{R}^n \to \mathbb{R}$ be a polynomial in $n$ real variables, with real coefficients, and suppose that $p > 0$ on all of $\mathbb{R}^n$.



Does $1/p$ have (at most) polynomial growth? That is, are there constants $C,N$ such that $1/p(\mathbf{x}) \le C (1+|\mathbf{x}|)^N$ for all $\mathbf{x} \in \mathbb{R}^n$?





Of course this is true when $n=1$ because in that case we have $\lim_{x \to \pm \infty} p(x) = +\infty$, and so $1/p$ is actually bounded. In higher dimensions this is more subtle: consider for example $p(x,y) = x^2 + (1-xy)^2$ which is strictly positive for all $x,y$, yet $1/p$ is unbounded: consider $p(1/y,y)$ as $y \to \infty$. For this example, I can use some calculus to show that $1/p(\mathbf{x}) \le 1+\|\mathbf{x}\|^2 \le (1+\|\mathbf{x}\|)^2$, so $1/p$ does have polynomial growth. But I don't know what to do in general.


Answer



Yes. By Stengle's Positivstellensatz, we can find polynomials $f_1$ and $f_2$, such that each of $f_1$ and $f_2$ can be written as a sum of squares of polynomials, and $p f_1 = 1+f_2$. (In Wikipedia's language, take $F = \emptyset$ and $W = \mathbb{R}^n$.) Then $1/p = f_1/(1+f_2) \leq f_1$, which is a polynomial.


Thursday 21 August 2014

real analysis - Part 1: Does the arithmetic mean of sides right triangles to the mean of their hypotenuse converge?




Let $a_k<b_k<c_k$ be the $k$-th primitive Pythagorean triplet in ascending order of the hypotenuse $c_k$. Define



$$
l = \frac{b_1 + b_2 + b_3 + \cdots + b_k}{c_1 + c_2 + c_3 + \cdots + c_k}, \text{ } s = \frac{a_1 + a_2 + a_3 + \cdots + a_k}{c_1 + c_2 + c_3 + \cdots + c_k}
$$



Question: What are the limiting values of $l$ and $s$ as $k\to\infty$?



The difference between this question and the related question (Part 2: Does the arithmetic mean of sides right triangles to the mean of their hypotenuse converge?) is that here the triangles are sequenced in ascending order of the hypotenuse $c_k$, whereas in the related question they are sequenced in ascending order of $r$ and $s$, and depending on the choice of sequencing the limiting value differs.




SageMath Code



# Indentation reconstructed (it was lost in formatting): scan hypotenuses c,
# find primitive triples with legs b <= a, and print the running ratios.
c  = 1
sa = 1
sb = 1
sc = 1
f = 0
sx = 0
while(c <= 10^20):
    a = c - 1
    b = 3
    while(a > b):
        b = (c^2 - a^2)^0.5
        if(b%1 == 0):
            if(b <= a):
                if(gcd(a,b) == 1):      # primitive triple found
                    f = f + 1
                    sa = sa + a
                    sb = sb + b
                    sc = sc + c
                    sx = sx + 1/c.n()
                    print(f,c, sa/sc.n(),sb/sc.n(),sx)
            else:
                break
        a = a - 1
    c = c + 1

Answer



Quantities $x=a/c$ and $y=b/c$ are the legs of a Pythagorean triangle having unit hypotenuse, hence they are the coordinates of points lying on the unit circle centred at the origin, and $x=\cos\theta$, $y=\sin\theta$ with $\pi/4<\theta<\pi/2$.




It is reasonable to think that, at least in the case of primitive triples, those points are spread evenly on that arc. In that case their average values are:
$$
\langle x\rangle={\int_{\pi/4}^{\pi/2}\cos\theta\,d\theta\over\int_{\pi/4}^{\pi/2}d\theta}=
{4-2\sqrt2\over\pi}\approx 0.372923,
$$

$$
\langle y\rangle={\int_{\pi/4}^{\pi/2}\sin\theta\,d\theta\over\int_{\pi/4}^{\pi/2}d\theta}={2\sqrt2\over\pi}\approx 0.900316.
$$

One should then justify that $\langle a\rangle/\langle c\rangle$ and $\langle a/c\rangle$ have the same limiting value, but that also seems very reasonable. I ran a simulation up to $k\approx800000$ and found the encouraging results:

$$
s_k\approx0.373,\quad l_k\approx0.900.
$$
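
For readers who want a faster cross-check than the brute-force Sage loop above, here is an editorial Python sketch using Euclid's parametrization of primitive triples (the bound $N$ and variable names are my own):

# Generate primitive triples with hypotenuse <= N, order by hypotenuse, and
# compute the ratios s and l from the question; they land near 0.373 and 0.900.
from math import gcd

N = 10**6
triples = []
m = 2
while m * m + 1 <= N:
    for n in range(1, m):
        if (m - n) % 2 == 1 and gcd(m, n) == 1:
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            if c <= N:
                triples.append((c, min(a, b), max(a, b)))
    m += 1
triples.sort()
sc = sum(c for c, _, _ in triples)
print(sum(s for _, s, _ in triples) / sc)   # s: shorter legs over hypotenuses, ~0.373
print(sum(l for _, _, l in triples) / sc)   # l: longer legs over hypotenuses, ~0.900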


combinatorics - Algebraic proof of combinatorial identity



I would like to obtain the algebraic proof for the following identity. I already know the combinatorial proof but the algebraic proof is evading me.



$$\sum_{r=0}^n\binom{n}{r}\binom{2n}{n-r}=\binom{3n}{n}$$



Thanks.


Answer




We make use of the Binomial Theorem. Observe that:
\begin{align*}
\sum_{k=0}^{3n} \binom{3n}{k}x^k &= (1+x)^{3n} \\
&= (1+x)^n(1+x)^{2n} \\
&= \left[ \sum_{i=0}^{n} \binom{n}{i}x^i \right] \left[ \sum_{j=0}^{2n} \binom{2n}{j}x^j \right] \\
&= \sum_{k=0}^{3n} \left[\sum_{r=0}^n\binom{n}{r}\binom{2n}{k-r}\right]x^k \\
\end{align*}



Hence, by setting $k=n$, we compare the coefficients of $x^n$ of both sides to obtain:
$$

\binom{3n}{n} = \sum_{r=0}^n\binom{n}{r}\binom{2n}{n-r}
$$
as desired.
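
A small exact check of the identity for the first few $n$ (an editorial sketch, not part of the proof; uses math.comb from Python 3.8+):

# Verify sum_{r=0}^n C(n,r)*C(2n,n-r) == C(3n,n) for n = 0..14.
from math import comb

for n in range(15):
    lhs = sum(comb(n, r) * comb(2 * n, n - r) for r in range(n + 1))
    assert lhs == comb(3 * n, n)
print("identity holds for n = 0..14")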


real analysis - Lebesgue measure paradox



See Lebesgue outer measure of $[0,1]\cap\mathbb{Q}$



Lebesgue measure:
$$ m(A) = \inf \left\{ \sum |I_n| : A \subset \bigcup I_n \right\} $$




We know that $m(\mathbb{Q} \cap [0,1]) = 0$.



Proof:



Enumerate the rationals $\mathbb{Q} \cap [0,1] = \{ q_n \}_{n \in \mathbb{N}}$.



Let $A_n$ be the covering of rationals $\{ q_n \}_{n \in \mathbb{N}}$ in $[0,1]$ by intervals $A_n = (q_n - \varepsilon/2^n , q_n + \varepsilon/2^n)$. Then,
$$ m(\mathbb{Q} \cap [0,1]) \le m\Big( \bigcup A_n \Big) \le \sum_n |A_n| = \sum_n \frac{2\varepsilon}{2^n} = 2\varepsilon $$
and since this holds for every $\varepsilon > 0$ as small as we want, we have $m(\mathbb{Q} \cap [0,1]) = 0$. $\square$




Paradox?



Every number in $[0,1]$ is contained in an open neighborhood of a rational number. We reach a paradox if $$\bigcup_{n \in \mathbb{N}} A_n = [0,1] $$
since then
$$m([0,1]) \le m\Big( \bigcup A_n \Big) \le \sum_n m(A_n) = 2\varepsilon $$
and then, letting $\varepsilon\to 0$, $m([0,1])=0$ and not $1$ as it should be.



To contradict this we need to show
$$ \bigcup A_n \ne [0,1] \ . $$

We need to build an irrational number $r$ such that $\forall n : |r - q_n| > \varepsilon/2^n$. Can you construct such a number (or prove it exists without using Lebesgue measure argument)?



Moreover, I think we need to show the set $[0,1]-\bigcup A_n$ has $\aleph = 2^{\aleph_0}$ elements. Otherwise, $m([0,1]-\bigcup A_n) = 0$ and we reach a contradiction.


Answer



If I may choose the enumeration of the rationals:



Enumerate the rationals in increasing order of the denominator in the reduced form, rationals with the same denominator in increasing order, so



$$q_1 = \frac01,\, q_2 = \frac11,\, q_3 = \frac12,\, q_4 = \frac13,\, q_5 = \frac23,\, q_6 = \frac14,\,q_7 = \frac34,\,\ldots$$




Then for $q_n = \frac{r_n}{s_n}$, we have $s_n \leqslant n$.



For $x = \frac{1}{\sqrt{2}}$, and a rational $0 \leqslant \frac{r}{s} \leqslant 1$, we have



$$\left\lvert\frac{r}{s} -\frac{1}{\sqrt{2}}\right\rvert = \frac{\left\lvert \frac{r^2}{s^2} - \frac12\right\rvert}{\left\lvert\frac{r}{s} +\frac{1}{\sqrt{2}}\right\rvert} \geqslant \frac{\lvert 2r^2-s^2\rvert}{4s^2} \geqslant \frac{1}{4s^2}.$$



For $0 < \varepsilon < \frac29$, we have $\frac{1}{4n^2} > \frac{\varepsilon}{2^n}$ for all $n \geqslant 1$, hence $1/\sqrt{2}$ is not contained in any $(q_n - \varepsilon 2^{-n},\,q_n + \varepsilon 2^{-n})$.
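
The claim can also be checked directly for the enumeration described above (an editorial sketch; the generator below reproduces that ordering, and $\varepsilon=0.2<\frac29$):

# Enumerate rationals in [0,1] by increasing reduced denominator and check that
# 1/sqrt(2) lies outside every interval (q_n - eps/2^n, q_n + eps/2^n) tested.
from math import gcd, sqrt

def rationals():
    yield 0.0        # 0/1
    yield 1.0        # 1/1
    s = 2
    while True:
        for r in range(1, s):
            if gcd(r, s) == 1:
                yield r / s
        s += 1

eps = 0.2
x = 1 / sqrt(2)
gen = rationals()
for n in range(1, 60):
    q = next(gen)
    assert abs(x - q) > eps / 2 ** n
print("1/sqrt(2) avoids the first 59 intervals")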



By a similar reasoning, we find $2^{\aleph_0}$ irrationals that aren't covered:




For an irrational $x \in (0,\,1)$ with continued fraction expansion



$$x = [a_0,\, a_1,\, a_2,\, \dotsc],$$



convergents $\frac{r_n}{s_n}$, and the complete quotients $\alpha_n$ - that means



$$x = \frac{\alpha_{n+1}r_n + r_{n-1}}{\alpha_{n+1}s_n + s_{n-1}},$$



and thus the continued fraction expansion $\alpha_n = [a_n,\, a_{n+1},\, \dotsc]$, whence $a_n < \alpha_n < a_n + 1$ - we have




$$\begin{align}
\left\lvert x - \frac{r_n}{s_n}\right\rvert &= \left\lvert \frac{\alpha_{n+1}r_n + r_{n-1}}{\alpha_{n+1}s_n + s_{n-1}} - \frac{r_n}{s_n}\right\rvert\\
&= \left\lvert \frac{\alpha_{n+1}(r_n s_n - s_nr_n) + (r_{n-1}s_n - r_ns_{n-1})}{(\alpha_{n+1}s_n+s_{n-1})s_n}\right\rvert\\
&= \left\lvert\frac{(-1)^{n}}{(\alpha_{n+1}s_n+s_{n-1})s_n}\right\rvert\\
&> \frac{1}{\bigl((a_{n+1}+1)s_n + s_{n-1}\bigr)s_n}\\
&\geqslant \frac{1}{(a_{n+1}+2)s_n^2}.\tag{1}
\end{align}$$



For a rational number $\frac{r}{s}$ with $s \leqslant s_n$, $n \geqslant 1$, we have




$$\lvert sx - r\rvert \geqslant \lvert s_nx - r_n\rvert$$



with equality if and only if $s = s_n$ and $r = r_n$.



This immediately leads to



$$\left\lvert x - \frac{r}{s}\right\rvert > \frac{1}{(a_{n+1}+2)s\cdot s_n}.\tag{2}$$



Thus, for any fixed $M \in \mathbb{Z}^+$, let $A_M$ be the set of all irrational numbers in $(0,\,1)$ whose continued fraction expansion has all partial quotients $\leqslant M$.




For $M \geqslant 2$, the set $A_M$ has cardinality $2^{\aleph_0}$.



For $x \in A_M$, the denominators of the convergents satisfy



$$s_{n+1} = a_{n+1}s_n + s_{n-1} \leqslant M\cdot s_n + s_{n-1} \leqslant (M+1)\cdot s_n.\tag{3}$$



Choosing the minimal $n$ with $s_n \geqslant s$ in $(2)$, we deduce



$$\left\lvert x - \frac{r}{s}\right\rvert > \frac{1}{(M+2)(M+1)s^2}$$




from $(3)$ for $x \in A_M$.



If $\varepsilon$ is small enough, so that



$$\frac{1}{(M+2)^2n^2} > \frac{\varepsilon}{2^n}$$



for all $n$, the set



$$U_\varepsilon = \bigcup_{n=1}^\infty \left(q_n - \frac{\varepsilon}{2^n},\,q_n + \frac{\varepsilon}{2^n}\right)$$




doesn't intersect $A_M$.


calculus - When can one apply limit properties?

I'm starting to think you can always apply them, but you might get indeterminate form 0/0. However, I haven't seen this stated in relation to limits, only in some complex derivatives. It's not stated when limit properties are applied.



I had expected that you could apply properties and rules blindly (with any preconditions included as part of the property/rule), but maybe you must also look at the outcome you get in order to know what rules you're allowed to apply? Perhaps it's like division in algebra, where a manipulation might be valid (at "compile time"), but if specific values of variables make a divisor equal zero, then it's undefined (at "runtime")?



I'll give a warmup example, then the one I'm actually concerned about.



1) First example: applying the quotient limit property to $x/x$ as $x\to0$, which should be $1$:




$$
\lim_{x \to 0}\frac{x}{x} = \frac{\lim_{x \to 0}x}{\lim_{x \to 0}x} = \frac{0}{0}
$$



2) Second example: in part of a proof of the product rule (used by Khan at 7:30, and Spivak in Ch.10, Theorem 4, p.45):



$$
\lim_{h\to0}
\frac

{ f(x+h) [ g(x+h) - g(x) ] }
{h}
=
\lim_{h\to0} f(x+h)
\lim_{h\to0}
\frac
{ g(x+h) - g(x) }
{h}
$$




Then using $\lim_{h\to0} f(x+h) = f(x)$:



$$
=
f(x)
\lim_{h\to0}
\frac
{ g(x+h) - g(x) }
{h}
$$




My difficulty is you could apply the same approach to the other factor: $\lim_{h\to0} g(x+h) = g(x)$.



$$
\lim_{h\to0}
\frac
{f(x+h)}
{h}
\lim_{h\to0}
[ g(x+h) - g(x) ]

= \\
\lim_{h\to0}
\frac
{f(x+h)}
{h}
[
\lim_{h\to0}
g(x+h)
-
\lim_{h\to0}

g(x)
]
= \\
\lim_{h\to0}
\frac
{f(x+h)}
{h}
[
g(x)
-

g(x)
]
= \\
\lim_{h\to0}
\frac
{f(x+h)}
{h}
0
=
0

$$






Am I wrong to think you should be allowed to apply limit properties whenever you like, without condition? (And if there are pre-conditions, they should be part of the property?) Or is there some aspect relating to $0/0$ (indeterminate form) whose connection to the limit properties I've somehow missed or not appreciated?



It seems derivatives are about finding a ratio $a/b$, even as $a,b\to0$. But I think limits are meant to be standalone, independent of derivatives.



PS. if my confusion is too fundamental to address in an answer, could you suggest a reference that definitely does address it, please? (It's a lot of work to go through a reference, only to find it doesn't have the answer I need.) Many thanks!

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hopital's rule? I know when I use L'Hopital I easily get $$ \lim_{h\rightarrow 0}...