Monday 30 September 2013

residue calculus - Improper integral computation using complex analysis



$$pv\int_{-\infty}^{\infty} \frac{x}{(x^2+4)(x^2+2x+2)}dx$$



I get an answer of $-\frac{\pi}{5}$ but wolframalpha disagrees by a factor of $2$ ($-\frac{\pi}{10}$):




http://www.wolframalpha.com/input/?i=integration+%28z%2F%28%28z^2%2B4%29%28z^2%2B2z%2B2%29%29%29++from+-+infinity+to+infinity



I cannot spot the error.



I get these residues:



$$Res(-1+i)=\frac{1+3i}{20}$$



$$Res(2i)=\frac{-2-i}{40}$$




which I add up and multiply by $2\pi i$.



What am I doing wrong?


Answer



$$\text{Res}(2i)={x\over (x+2i)(x^2+2x+2)}\Big{|}_{x=2i}={2i\over4i(4i-2)}={1\over4(-1+2i)}={-1-2i\over20}$$
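As a sanity check (my sketch, not part of the answer), one can confirm both residues and the value $-\pi/10$ with sympy:

```python
from sympy import symbols, I, pi, residue, simplify

x = symbols('x')
f = x / ((x**2 + 4) * (x**2 + 2*x + 2))

# poles of f in the upper half-plane
r1 = residue(f, x, 2*I)       # pole of x^2 + 4
r2 = residue(f, x, -1 + I)    # pole of x^2 + 2x + 2

# closing the contour in the upper half-plane:
# pv-integral = 2*pi*i * (sum of upper-half-plane residues)
integral = simplify(2 * pi * I * (r1 + r2))
print(simplify(r1), simplify(r2), integral)  # residues sum to i/20; integral -> -pi/10
```

The residues sum to $i/20$, and $2\pi i \cdot \frac{i}{20} = -\frac{\pi}{10}$, matching WolframAlpha.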


ordinary differential equations - Power series of the solution of $2t^2x'' + tx' -(t+1)x=0$




Use the method of Frobenius, with the larger root of the indicial
equation, to find the first three terms of the power series of the solution to $$2t^2x'' +
tx' -(t+1)x=0.$$




My work:




Note that $t=0$ is a regular singular point. On solving the indicial equation, we get $r=1$ or $r=-\frac12$. We proceed by letting $$x(t) = t^r \sum_{k=0}^{\infty} a_k t^k$$ which, after plugging into the equation, yields $$(ra_1 - a_0)t^{r+1} - a_0 t^r + \sum_{n=r+2}^{\infty} [(2n^2-n-1)a_{n-r} - a_{n-r-1}]t^n=0.$$ However, I cannot proceed. The above with $r=1$ would imply that all coefficients are zero. The calculation should be fine as several of my friends also get the same answer. Should we conclude that the first three terms are all zero?


Answer



you have $$2t^2x''+tx' - (t+1)x= 0 $$ now sub $$x = t^k + a_1t^{k+1} + a_2 t^{k+2} + \cdots\\
x' = kt^{k-1} + (k+1)a_1t^k + (k+2)a_2t^{k+1} + \cdots\\
x'' = (k-1)kt^{k-2} + k(k+1)a_1t^{k-1} + (k+1)(k+2)a_2t^{k} + \cdots$$ we get
$$2t^2\left( (k-1)kt^{k-2} + k(k+1)a_1t^{k-1} + (k+1)(k+2)a_2t^{k} + \cdots\right) + t\left( kt^{k-1} + (k+1)a_1t^k + (k+2)a_2t^{k+1} + \cdots\right)- (t+1)\left( t^k + a_1t^{k+1} + a_2 t^{k+2} + \cdots\right) =0$$



equating the coefficient of $t^{k}$ we have



$$f(k)=2(k-1)k +k-1=(k-1)(2k+1)=0 \to k = 1, k = -\frac 12.$$




equating the coefficient of $t^{k+1}$ we have



$$2k(k+1)a_1+(k+1)a_1 -a_1-1=0\to a_1 = \frac 1{f(k+1)} $$
in the same way you find $$a_2 = \frac {a_1}{f(k+2)}, \quad \cdots, \quad a_{n+1}=\frac {a_n}{f(k+n+1)}, \qquad n = 1, 2, \cdots $$



let us look at the series for the case $k = 1$ (the larger indicial root)



$$a_1 = \frac 1{f(2)} = \frac 1{1 \cdot 5}\\
a_2 = \frac {a_1}{f(3)}=\frac 1{1\cdot 2 \cdot 5 \cdot 7}\\

a_3 = \frac{a_2} {f(4)}=\frac 1{1\cdot 2 \cdot 3 \cdot 5 \cdot 7 \cdot 9} \\
\vdots$$
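As a sanity check (my sketch, not part of the answer), substituting the truncated $k=1$ series into the ODE with sympy leaves only the residual of the first dropped term:

```python
from sympy import symbols, Rational, diff, expand

t = symbols('t')
# first terms from the recurrence with k = 1: a1 = 1/5, a2 = 1/70, a3 = 1/1890
x = t * (1 + t/5 + t**2 * Rational(1, 70) + t**3 * Rational(1, 1890))

# residual of 2 t^2 x'' + t x' - (t+1) x; all coefficients up to t^4 cancel
residual = expand(2*t**2*diff(x, t, 2) + t*diff(x, t) - (t + 1)*x)
print(residual)  # -> -t**5/1890, the term the dropped a_4 would have cancelled
```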


Sunday 29 September 2013

real analysis - $\lim\limits_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!} = \frac{1}{2}$ - basic methods



Prove that $$\lim\limits_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!} = \frac{1}{2}$$




This problem appeared on MSE many times, but each time it was solved using Poisson distribution or lots of integrals. I am wondering, is there any way to prove it using some basic properties of limits (their arithmetics, squeeze theorem etc.), definition of $e^x$ as $\lim\limits_{n\to\infty}(1+\frac{x}{n})^n$, basic limits with $e$, binomial expansion and logarithms, but without using integrals, series, Stirling formula, asymptotics, Taylor series?



This problem was given to me by my mathematical analysis teacher, but it's not homework, just an additional problem to think about. My teacher claims it can be solved with the knowledge introduced in the lectures so far, which is not much, mainly the things mentioned above. Of course, I can use theorems not mentioned in the lectures, but then I have to prove them, again with the basic knowledge. I've been thinking about it for a few days and couldn't make any major progress in my attempts.


Answer



Table of Contents.




  1. Heuristic argument

  2. Elementary proof, version 1.

  3. Elementary proof, version 2. (NEW!)









Although it is far from rigorous, one can give a heuristic computation which explains why we expect the answer to be $\frac{1}{2}$. Notice that



$$ \frac{n^{n+j}/(n+j)!}{n^n / n!} = \begin{cases}
\prod_{k=1}^{j} \frac{n}{n+k}, & j \geq 1 \\

1, & j = 0, -1 \\
\prod_{k=1}^{-j-1} \frac{n-k}{n}, & j \leq -2
\end{cases}
$$



In each of these cases, taking the logarithm and applying the approximation $\log(1+x) \approx x$ shows that the above quantity is approximately $e^{-j^2/2n}$. So



$$
e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}
= \frac{\sum_{j=-n}^{0} \frac{n!}{n^n \sqrt{n}} \frac{n^{n+j}}{(n+j)!} }{\sum_{j=-n}^{\infty} \frac{n!}{n^n \sqrt{n}}\frac{n^{n+j}}{(n+j)!} }

\approx \frac{\sum_{j=-n}^{0} e^{-(j/\sqrt{n})^2/2} \frac{1}{\sqrt{n}} }{\sum_{j=-n}^{\infty} e^{-(j/\sqrt{n})^2/2} \frac{1}{\sqrt{n}} }
\approx \frac{\int_{-\infty}^{0} e^{-x^2/2} \, dx}{\int_{-\infty}^{\infty} e^{-x^2/2} \, dx}
= \frac{1}{2}.
$$



Most of the solutions that I know are more or less rigorous realizations of this kind of observation, and so they are either involved or require extra knowledge.









Define $A_n$, $B_n$ and $C_n$ by



$$ A_n := e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}, \qquad B_n := e^{-n} \sum_{k=n+1}^{\infty} \frac{n^k}{k!}, \qquad C_n = \frac{n^{n} e^{-n}}{n!}. $$



From the Taylor expansion of the exponential function, we know that $A_n + B_n = 1$. In order to show the desired convergence, it suffices to show that the following claim holds.




Claim. $A_n/B_n \to 1$ as $n\to\infty$.





Indeed, once this is proved, then both $A_n$ and $B_n$ converge to $\frac{1}{2}$ as $n\to\infty$.



Proof of Claim. Using the substitution $k = n-j$ followed by $p = j-1$, one may write



\begin{align*}
\frac{A_n}{C_n}
&= \sum_{j=0}^{n} \frac{n!}{(n-j)!n^j}
= 2 + \sum_{p=1}^{n-1} \prod_{l=1}^{p} \left( 1 - \frac{l}{n} \right).
\end{align*}




Similarly, applying the substitution $k = n+p$, one may write



\begin{align*}
\frac{B_n}{C_n}
&= \sum_{p=1}^{\infty} \frac{n!n^p}{(n+p)!}
= \sum_{p=1}^{\infty} \prod_{l=1}^{p} \left( \frac{1}{1 + \frac{l}{n}} \right).
\end{align*}



1. Estimation of leading sums. Fix $\alpha \in \left( 0, \frac{1}{3} \right)$ and set $N = N(\alpha) = \left\lceil n^{(1+\alpha)/2} \right\rceil$. Then using the asymptotic formula $1+x = e^{x+\mathcal{O}(x^2)}$ as $x \to 0$, for $1 \leq p \leq N$ we have




$$ \prod_{l=1}^{p} \left( 1 - \frac{l}{n} \right)
= \exp\left\{ -\frac{p^2}{2n} + \mathcal{O}\left( n^{-(1-3\alpha)/2} \right) \right\}
= \prod_{l=1}^{p} \left( \frac{1}{1 + \frac{l}{n}} \right). $$



In particular, there exists a constant $C > 0$, depending only on $\alpha$, such that



$$ \max\Bigg\{ \prod_{l=1}^{N} \left( 1 - \frac{l}{n} \right), \prod_{l=1}^{N} \left( \frac{1}{1 + \frac{l}{n}} \right) \Bigg\} \leq C e^{-\frac{1}{2}n^{\alpha}}. $$



2. Estimation of tail sums. This time, consider $p > N$. Then we have




$$ \prod_{l=1}^{p} \left( 1 - \frac{l}{n} \right)
\leq C e^{-\frac{1}{2}n^{\alpha}} \left( 1 - \frac{N}{n} \right)^{p-N}
\quad \text{and} \quad
\prod_{l=1}^{p} \left( \frac{1}{1 + \frac{l}{n}} \right)
\leq C e^{-\frac{1}{2}n^{\alpha}} \left( \frac{1}{1 + \frac{N}{n}} \right)^{p-N}. $$



So the tail sum for $A_n/C_n$ can be bounded from above by



$$ \sum_{p=N+1}^{n-1} \prod_{l=1}^{p} \left( 1 - \frac{l}{n} \right)

\leq Ce^{-\frac{1}{2}n^{\alpha}} \sum_{k = 0}^{\infty} \left( 1 - \frac{N}{n} \right)^k
\leq \frac{Cn}{N} e^{-\frac{1}{2}n^{\alpha}}
\leq Cn^{(1-\alpha)/2} e^{-\frac{1}{2}n^{\alpha}}, $$



and likewise,



$$ \sum_{p=N+1}^{\infty} \prod_{l=1}^{p} \left( \frac{1}{1 + \frac{l}{n}} \right)
\leq Ce^{-\frac{1}{2}n^{\alpha}} \sum_{k = 0}^{\infty} \left( 1 - \frac{N}{N+n} \right)^k
\leq \frac{2Cn}{N} e^{-\frac{1}{2}n^{\alpha}}
\leq 2Cn^{(1-\alpha)/2} e^{-\frac{1}{2}n^{\alpha}}. $$




3. Conclusion. Combining everything,



$$ \frac{A_n}{B_n} = \frac{\left( 1 + o(1) \right) \sum_{p=1}^{N} e^{-\frac{p^2}{2n}} + \mathcal{O}(1)}{\left( 1 + o(1) \right) \sum_{p=1}^{N} e^{-\frac{p^2}{2n}} + o(1)}, $$



which can be easily shown to converge to $1$ as $n\to\infty$, since $\sum_{p=1}^{N} e^{-\frac{p^2}{2n}} \geq \sqrt{n} \, e^{-1/2} \to \infty$ as $n\to\infty$. (In fact, this sum is $(1+o(1))\sqrt{\pi n/2}$ as $n\to\infty$.)
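A quick numerical illustration of the claim (my sketch; the terms are computed through `exp`/`lgamma` for stability):

```python
import math

def A(n):
    # A_n = e^{-n} * sum_{k=0}^{n} n^k / k!, each term via exp-log to avoid overflow
    return sum(math.exp(k * math.log(n) - math.lgamma(k + 1) - n)
               for k in range(n + 1))

for n in (10, 100, 1000):
    print(n, A(n))  # decreases toward 1/2, roughly like 1/2 + O(1/sqrt(n))
```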









In this answer, we do not appeal to the concentration behavior of the sum, but rather utilize a mysterious identity from combinatorics.



Write $A_n = e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$ for the sequence of our interest. We also introduce the following auxiliary sequences:



$$
a_n = \frac{n^n}{n!e^n}, \qquad
b_n = (-1)^n \binom{-1/2}{n} = \frac{1}{4^n} \binom{2n}{n}.
$$




Before proceeding, we make some observations. The key ingredients are the following identities



$$ A_n = \sum_{k=0}^{n} a_k a_{n-k}, \qquad 1 = \sum_{k=0}^{n} b_k b_{n-k}. $$



The former one is quite non-trivial, and a proof can be found here. On the other hand, the latter one is easily proved by comparing both sides of $
\sum_{n=0}^{\infty} x^n = \frac{1}{1-x} = \left( \frac{1}{\sqrt{1-x}} \right)^2 = \left( \sum_{n=0}^{\infty} b_n x^n \right)^2$
. Next, we have the following observation.




Lemma. $ \frac{a_n}{b_n} \to \frac{1}{\sqrt{2}} $ as $n\to\infty$.





This lemma tells us that, roughly, $a_{k}a_{n-k} \approx \frac{1}{2} b_k b_{n-k}$ and hence $ A_n \approx \frac{1}{2} \sum_{k=0}^{n} b_k b_{n-k} = \frac{1}{2}$. Indeed, this is an instance of the philosophy that 'limits should be preserved under averaging', and so it can be proved by standard machinery. We separate the rigorous claim into a standalone result:




Proposition. Let $(a_n), (b_n)$ be sequences in $(0, \infty)$ such that




  1. $a_n/b_n \to \ell \in (0, \infty)$;

  2. $b_n \to 0$ as $n\to\infty$;

  3. $\sum_{k=0}^{n} b_k b_{n-k} = 1$ for all $n$.




Then $\sum_{k=0}^{n} a_k a_{n-k} \to \ell^2$ as $n\to\infty$.




Therefore, $A_n \to \frac{1}{2}$ is a direct consequence of this proposition together with the well-known fact that $b_n \to 0$. Indeed, this can be proved as follows:



$$ b_n^2
= \left( \frac{1 \cdot 3 \cdots (2n-1)}{2 \cdot 4 \cdots (2n)} \right)^2
= \left( \frac{1 \cdot 3}{2 \cdot 2} \right) \left( \frac{3 \cdot 5}{4 \cdot 4} \right) \cdots \left( \frac{(2n-3)(2n-1)}{(2n-2)(2n-2)} \right) \frac{2n-1}{4n^2}

\leq \frac{1}{2n}. $$



Finally, we prove the claims above.




  • Proof of Lemma. Using the identity $-\int_{0}^{1} \frac{u}{a+u} \, du = a \log (a+1) - a \log a - 1$ for $a > 0$, we notice that



    \begin{align*}
    &- \sum_{k=1}^{n} \int_{0}^{1} \frac{u}{n+k-1+u} \, du \\
    &= \sum_{k=1}^{n} \left[ (n+k-1)\log (n+k) - (n+k-1)\log(n+k-1) - 1 \right] \\

    &= (2n)\log(2n) - n \log n - n - \sum_{k=1}^{n} \log(n+k) \\
    &= \log \left[ \left( \frac{4n}{e} \right)^n \frac{n!}{(2n)!} \right]
    = \log \left( \frac{a_n}{b_n} \right).
    \end{align*}



    Then, using $ \frac{1}{2(n+k)}
    \leq \int_{0}^{1} \frac{u}{n+k-1+u} \, du
    \leq \frac{1}{2(n+k-1)} $
    , we obtain



    $$ -\frac{1}{2}\sum_{k=1}^{n} \frac{1}{n+k-1}

    \leq \log \left( \frac{a_n}{b_n} \right)
    \leq -\frac{1}{2}\sum_{k=1}^{n} \frac{1}{n+k}. $$



    Therefore the conclusion follows from the well-known limit $ \sum_{k=1}^{n} \frac{1}{1+\frac{k}{n}} \frac{1}{n} \to \int_{0}^{1} \frac{dx}{1+x} = \log 2$.


  • Proof of Proposition. Let $\alpha, \beta $ satisfy $0 < \alpha < \ell < \beta$. Then there exists $N$ such that $\alpha \leq \frac{a_n}{b_n} \leq \beta$ for all $n \geq N$. So, if $n \geq 2N$, then



    $$ \alpha^2 \sum_{k=N}^{n-N} b_k b_{n-k}
    \leq \sum_{k=N}^{n-N} a_k a_{n-k}
    \leq \beta^2 \sum_{k=N}^{n-N} b_k b_{n-k}. $$




    Now let $M > 0$ be a bound of $a_n/b_n$. Since $b_n \to 0$ as $n\to\infty$, we have



    $$ \sum_{k=0}^{N-1} a_k a_{n-k} + \sum_{k=n-N+1}^{n} a_k a_{n-k}
    = 2\sum_{k=0}^{N-1} a_k a_{n-k}
    \leq 2M^2 \sum_{k=0}^{N-1} b_k b_{n-k}
    \xrightarrow[n\to\infty]{} 0 $$



    Combining everything and using $1 = \sum_{k=0}^{n} b_k b_{n-k}$,



    $$ \alpha^2 \leq \liminf_{n\to\infty} \sum_{k=0}^{n} a_k a_{n-k} \leq \limsup_{n\to\infty} \sum_{k=0}^{n} a_k a_{n-k} \leq \beta^2. $$




    Letting $\alpha \uparrow \ell$ and $\beta \downarrow \ell$ proves the desired limit.
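Both the Lemma and the two key identities are easy to check numerically (my sketch, not part of the answer):

```python
import math

def a(n):  # n^n / (n! e^n), with a_0 = 1
    return 1.0 if n == 0 else math.exp(n * math.log(n) - math.lgamma(n + 1) - n)

def b(n):  # (-1)^n C(-1/2, n) = C(2n, n) / 4^n
    return math.comb(2 * n, n) / 4**n

# Lemma: a_n / b_n -> 1/sqrt(2) ~ 0.70710...
print(a(10**4) / b(10**4))

# key identities: A_n = sum_k a_k a_{n-k} and 1 = sum_k b_k b_{n-k}
n = 30
A_n = sum(math.exp(k * math.log(n) - math.lgamma(k + 1) - n) for k in range(n + 1))
print(abs(A_n - sum(a(k) * a(n - k) for k in range(n + 1))))  # ~ 0
print(abs(1.0 - sum(b(k) * b(n - k) for k in range(n + 1))))  # ~ 0
```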




We conclude with some remarks.




Remark. If we do not care about an elementary solution, this approach can be simplified by showing that





  1. $A_n$ is bounded and decreasing, hence converges.

  2. By the identity $A_n = \sum_{k=0}^{n} a_k a_{n-k}$, we have



    $$ (1-x) \sum_{n=0}^{\infty} A_n x^n = \left( \frac{\sum_{n=0}^{\infty} a_n x^n}{\sum_{n=0}^{\infty} b_n x^n} \right)^2, $$



    hence by a version of Abel's theorem, we obtain



    $$ \lim_{n\to\infty} A_n = \lim_{n\to\infty} \left( \frac{a_n}{b_n} \right)^2 = \frac{1}{2}. $$





real analysis - Prove that $f$ is Borel measurable.

Let $\mu$ be a finite Borel measure on $\mathbb R$, i.e. a finite measure on the Borel $\sigma$-algebra $S (\mathbb R)$,
and let $B$ be a Borel subset of $\mathbb R$. Define the function $f$ on $\mathbb R$ by $f (x) =\mu (B + x)$.



(a) Show that $f$ is Borel measurable.
(b) Show that $\int f (x)\, d\lambda (x) =\int f (x)\, dx =\mu(\mathbb R)\lambda(B)$, where $\lambda$ denotes the Lebesgue measure.




I am not able to prove that $f$ is Borel measurable. I tried to prove that $f^{-1}(a,\infty)$ is a Borel set but couldn't prove it.

Proof convergence of series



I'm trying to prove the convergence of the series



$\sum\limits_{n=1}^{\infty} \exp\left(- \dfrac{n^k}{\log(n)} \right)$




where $0 < k < \frac{1}{2}$ is a fixed constant.



I tried to use the ratio test, but it was inconclusive. I think I can show the convergence using the comparison test, but I can't seem to find a convergent majorant.



Does anyone know a convergent majorant or another method to show the convergence? ;) Thanks in advance!


Answer



As a first step, note that $n^k/\log n = \Omega(n^{k/2})$, and so the summand is $\exp(-\Omega(n^{k/2}))$ (here $k/2$ is just a convenient choice of exponent in $(0,k)$). Second, $e^{-x} = O(1/x^t)$ for any $t>0$. In particular, choosing $t = 4/k$ (or any other $t > 2/k$), we get that the summand is $O(1/n^2)$. Since $\sum_{n=1}^\infty 1/n^2$ converges, so does your series.
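Numerically one can watch the comparison kick in (my sketch; with $k=0.4$ the summand falls below $1/n^2$ only for fairly large $n$, but eventual domination is all the comparison test needs):

```python
import math

k = 0.4  # any fixed k in (0, 1/2)

def log_term(n):      # log of the summand exp(-n^k / log n)
    return -n**k / math.log(n)

def log_majorant(n):  # log of the majorant 1/n^2
    return -2 * math.log(n)

for n in (10**4, 10**6, 10**8, 10**10):
    # False for moderate n, True once n^k / log n outgrows 2 log n
    print(n, log_term(n) < log_majorant(n))
```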


geometry - The Wobbling Staircase Function



Define the wobbling staircase function as follows:




Informally: The wobbling staircase function consists of $3$ connected line segments with alternating 'size' of gradients. That is, the gradient of the second line segment is smaller than the first and that of the third is greater than the second.




Note: if you sketch this then it does look like a wobbly staircase!







Formally: Assume that the first line segment begins at the origin. If the endpoint of the first line segment is $(a,d)$, that of the second is $(b,e)$ and that of the third is $(c,f)$, then the wobbling staircase function is $$y=\begin{cases}\frac dax,\quad &0\le x\le a\\\frac{e-d}{b-a}x+\frac{d(b-a)-a(e-d)}{b-a},\quad & a\le x\le b\\\frac{f-e}{c-b}x+\frac{e(c-b)-b(f-e)}{c-b},\quad & b\le x\le c\end{cases}$$ where the alternating-gradient condition for the first two segments reads $$\frac da>\frac{e-d}{b-a}\implies \frac de>\frac ab$$ together with $\frac{f-e}{c-b}>\frac{e-d}{b-a}$ for the third segment.




Define also the line joining the origin and $(c,f)$: $$y_{\text{tot}}=\frac fcx.$$








Question: Under what condition(s) is $y_{\text{tot}}$ always above each of the line segments, except at the points $(0,0)$ and $(c,f)$?




Note that this is equivalent to asking when $y_{\text{tot}}$ does not cross the second line segment.




Bonus Question: What if I extend this to $2k+1$ connected line segments? What would the constraints then be?




Answer



Informally, what you want is for the vector $\left(\matrix{c\\f}\right)$ to lie "on the left" of the vector $\left(\matrix{a\\d}\right)$.



Formally, this is saying that the basis $\left(\left(\matrix{c\\f}\right),\left(\matrix{a\\d}\right)\right)$ is negatively oriented, which you check by computing the sign of its determinant. Hence $y_\text{tot}$ lies above all line segments if and only if $$cd-af<0.$$



For more line segments, you have more determinants to compute.
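Concretely (my sketch; the function name and sample numbers are mine), for the three-segment staircase there are two interior vertices, hence two determinant signs to check:

```python
def above_all_segments(a, d, b, e, c, f):
    # y_tot is above a piecewise-linear graph iff it is above every interior
    # vertex; each vertex check is the sign of one 2x2 determinant
    return c*d - a*f < 0 and c*e - b*f < 0

# staircase with slopes 1.5, 0.5, 3: the chord y = (5/3)x clears both vertices
print(above_all_segments(1, 1.5, 2, 2, 3, 5))   # True

# steeper first step (slope 2): the chord dips below the vertex (1, 2)
print(above_all_segments(1, 2, 2, 2.5, 3, 5))   # False
```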


inequality - Inequalities from Taylor expansions of $\log$ functions



Consider the following Taylor expansion of the natural logarithm (denoted by $\log$ here):



$$ \log(1+x) = x - x^2/2 + x^3/3 - x^4/4 + x^5/5 - \cdots $$



It appears that from this expansion, inequalities can be generated.
$ \log(1+x) \leq x $ is well known for all $x > -1$. The Taylor expansion however
motivates further inequalities which, on numerical inspection, appear valid for all $x > -1$:




$$ \log(1+x) \leq x - x^2/2 + x^3/3 \\
\log(1+x) \leq x - x^2/2 + x^3/3 - x^4/4 + x^5/5 \\
\cdots $$



Further, for even powers also inequalities appear to hold. For $-1 < x \leq 0$:
$$ \log(1+x) \leq x - x^2/2 \\
\log(1+x) \leq x - x^2/2 + x^3/3 - x^4/4 \\
\cdots $$




and for $x \geq 0$ the opposite:
$$ \log(1+x) \geq x - x^2/2 \\
\log(1+x) \geq x - x^2/2 + x^3/3 - x^4/4 \\
\cdots $$



The very same procedure also works with the Taylor expansion of $ (1+x) \log(1+x)$. Possibly other examples can be found.



Questions:





  • does it hold indeed for expansions up to all powers of $x$?

  • is this a special feature of the $\log$ function?

  • is there a general rule when this procedure of "generating inequalities from Taylor expansions with alternating signs" will work?



Thanks for your help!


Answer



The derivatives of $f(x) = \log (1+x)$ are
$$
f^{(n)}(x) = \frac{(-1)^{n-1}(n-1)!}{(1+x)^n} \quad (n \ge 1)

$$
If we denote the $n$th Taylor polynomial with $T_n$
$$
T_n(x) = \sum_{k=1}^n \frac{(-1)^{k-1}}{k} x^k
$$
and the remainder with $R_n$
$$
\log(1+x) = T_n(x) + R_n(x)
$$
then Taylor's theorem with the mean-value form of the remainder gives

$$
R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} x^{n+1} = \frac{(-1)^nx^{n+1}}{(n+1)(1+\xi)^{n+1}}
$$
for some $\xi$ between $0$ and $x$.



So in this case, $f^{(n)}(x)$ has alternating signs independent of $x$,
and it follows that
$$
\log(1+x) \begin{cases}
< T_n(x) & \text{ for } -1 < x < 0 \\

< T_n(x) & \text{ for } 0 < x \le 1, n \text{ odd} \\
> T_n(x) & \text{ for } 0 < x \le 1, n \text{ even}
\end{cases}
$$



The case $-1 < x < 0$ is also obvious because all terms in the Taylor expansion are negative.



For $0 < x \le 1 $ it would also follow because the Taylor series
is alternating with decreasing absolute values.
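A quick numerical spot-check of this sign pattern (my sketch, not part of the answer):

```python
import math

def T(n, x):  # n-th Taylor polynomial of log(1+x) about 0
    return sum((-1)**(k - 1) * x**k / k for k in range(1, n + 1))

# expected: log(1+x) < T_n(x) for x < 0 or n odd; log(1+x) > T_n(x) for x > 0, n even
for n in (2, 3, 4, 5):
    for x in (-0.5, 0.5):
        d = math.log1p(x) - T(n, x)
        expected_negative = (x < 0) or (n % 2 == 1)
        print(n, x, (d < 0) == expected_negative)  # True in every case
```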







The same reasoning can be applied to $g(x) = (1+x)\log(1+x)$
because $g'(x) = 1 + \log(1+x)$, so that $g^{(n)}(x)$ has alternating
signs for $n \ge 2$.






Alternating terms in the Taylor series alone are not sufficient
to draw any conclusion about the relationship between $f(x)$ and $T_n(x)$; a simple counterexample is

$$
f(x) = x-\frac{x^2}2 + \frac{x^3}3 - x^4 \, , \quad T_2(x) = x-\frac{x^2}2
$$
with
$$
f(x) \begin{cases}
> T_2(x) & \text{ for } 0 < x < \frac 13 \\
< T_2(x) & \text{ for } x > \frac 13
\end{cases}
$$
since $f(x) - T_2(x) = x^3\left(\frac 13 - x\right)$.



Saturday 28 September 2013

Functional equation extended solution

The question is: Find all functions $f:\mathbb R \to \mathbb R$ such that $$f(x+y)f(x-y)=(f(x)+f(y))^2-4x^2f(y).$$ Taking $x=y=0$, we get $f(0)^2=4f(0)^2 \implies f(0)=0$. Now take $x=y$, which immediately gives $$4f(x)^2=4x^2f(x)\implies f(x)(f(x)-x^2)=0\implies f(x)=0 \text{ or } f(x)=x^2 \quad \forall x \in \mathbb R.$$ This was my solution. But I was stunned when I looked at the official solution (posted as an image).



Why do we need to continue to do anything after getting what I got, isn't it sufficient?

set theory - "Proof" that $\text{cof}(\omega_\lambda) < \omega_\lambda$



The following lemma is from this introduction to cardinals.




Lemma 2.7. Let $\omega_\alpha$ be a limit cardinal. Then $\alpha$ is a limit ordinal and $\text{cof}(\omega_\alpha)=\text{cof}(\alpha)$.





I understand the proof of this lemma, including the proof of relevant lemma 2.4 (beware, there is a typo in that proof: $g$ instead of $h$).



Now I want to use it to make the following conclusion:



$$\text{cof}(\omega_\lambda)\stackrel{2.7}=\text{cof}(\lambda)\leq \lambda < \omega_\lambda$$



However, in the same paper I'm being told that the existence of regular limit cardinals is independent of ZFC! So there must be a flaw in my conclusion.



I can think of two things that would undermine my argument:





  1. $\lambda < \omega_\lambda$ is not always true.

  2. For a nonzero limit ordinal $\lambda$, $\omega_\lambda$ need not be a limit cardinal.



But I don't see how either 1. or 2. is possible.


Answer



It’s the first alternative: it’s not always true that $\lambda<\omega_\lambda$. Let $\lambda_0=\omega$, and for $n\in\omega$ let $\lambda_{n+1}=\omega_{\lambda_n}$. Let $\lambda=\sup_n\lambda_n$; then



$$\lambda=\sup_n\lambda_n=\sup_n\omega_{\lambda_n}=\omega_{\sup_n\lambda_n}=\omega_\lambda\;.$$



probability - Hypotheses on $\{X_n\}_{n=1}^{\infty}$ and $X$ so that $\lim_n f_{X_n}(x)=f_X(x)$ for a.e. $x\in\mathbb{R}$.

Let $\{X_n\}_{n=1}^\infty$ and $X$ be absolutely continuous random variables on a probability space $(\Omega,\mathcal{F},P)$, with density functions $f_{X_n}(x)$ and $f_X(x)$. Are there any hypotheses on $\{X_n\}_{n=1}^\infty$ and/or $X$ that guarantee the convergence $\lim_n f_{X_n}(x)=f_X(x)$ for a.e. $x\in\mathbb{R}$?




The most closely related result that I know is the following: if $\{X_n\}_{n=1}^\infty$ does not converge in law to $X$, then it is not possible to have $\lim_n f_{X_n}(x)=f_X(x)$ for a.e. $x\in\mathbb{R}$, by Scheffé's Lemma. But this result does not give a sufficient condition for $\lim_n f_{X_n}(x)=f_X(x)$.

probability - (Combinatorial?) Proof of the identity $\sum_{k=1}^n \frac {(-1)^{k+1}}{k\,(k+1)}\binom nk = \frac 12 + \frac 13 + \dots + \frac 1{n+1}$?



Recently I've come across an interesting identity:
$$

\frac 1{1\cdot 2}\binom n1 - \frac 1{2\cdot 3}\binom n2 + \frac 1{3\cdot 4}\binom n3 - \dots + \frac {(-1)^{n+1}}{n\cdot (n+1)}\binom nn =
\frac 12 + \frac 13 + \dots + \frac 1{n+1}.
$$



Does anyone have an idea how to prove this?



PS. It'd be of special interest if someone could provide a combinatorial approach to this problem, although I'm not sure if that'd make sense since the identity involves non-integers.



Edit: Perhaps my question was worded in a way that seems trivial since there's a vote to close this question. I should have mentioned that I can prove it by expanding $\frac 1{k(k+1)}$ and doing some further calculation. What I want is a clever way to interpret it, e.g. using combinatorics or some kind of generating function or integration.


Answer




For the solution see original post below



EDIT



Here are direct proofs of (1), (2), and (3), each in one sequence



ad (1)



$$H_{n} = \sum_{k=1}^n \frac{1}{k} = \sum_{k=1}^n \int_0^1 y^{k-1}\,dy= \int_0^1\,dy \sum_{k=1}^n y^{k-1}= \int_0^1\,dy \frac{1-y^n}{1-y} \\({y\to 1-x})=\int_0^1\,dx \frac{1}{x} \left(1-(1-x)^n\right)= \int_0^1\,dx \sum_{k=1}^n (-1)^{k+1}\binom{n}{k} x^{k-1} \\= \sum_{k=1}^n (-1)^{k+1}\binom{n}{k}\int_0^1\,dx\; x^{k-1} = \sum_{k=1}^n (-1)^{k+1}\binom{n}{k}\frac{1}{k}$$




QED.



ad (2)



$$\sum _{k=1}^n \frac{(-1)^{k+1} }{k+1}\binom{n}{k}=\sum _{k=1}^n (-1)^{k+1}\binom{n}{k} \int_0^1 x^k \,dx =\int_0^1 \,dx \sum _{k=1}^n (-1)^{k+1}\binom{n}{k} x^k \\= \int_0^1 \, dx \left(1-(1-x)^n\right)= 1-\frac{1}{n+1} = \frac{n}{n+1} $$



QED.



ad (3)




$$\sum _{k=1}^n (-1)^{k+1} \binom{n}{k} \frac{1}{a+k}=\sum_{k=1}^n
(-1)^{k+1} \binom{n}{k} \int_0^1 \,dx x^{a+k-1}\\=\int_0^1\,dx x^{a-1} \sum _{k=1}^n (-1)^{k+1} \binom{n}{k} x^{k}=\int_0^1\,dx x^{a-1} \left(1-(1-x)^n\right)\\= \frac{1}{a} - \int_0^1\,dx x^{a-1} (1-x)^n= \frac{1}{a} - \text{Beta}(a,n+1)= \frac{1}{a} - \frac{\Gamma(a) \Gamma(n+1)}{\Gamma(a+n+1)}$$



QED.



Remark



Combining (3) and (1) we see that



$$H_{n} = \lim_{a\to0}\left( \frac{1}{a} - \frac{\Gamma(a) \Gamma(n+1)}{\Gamma(a+n+1)}\right)\tag{4a}$$




To make this explicit we write the r.h.s. as



$$\frac{1}{a} \left( 1- \frac{\Gamma(a+1) \Gamma(n+1)}{\Gamma(a+n+1)}\right)=\frac{\Gamma(a+n+1)-\Gamma(a+1) \Gamma(n+1)}{a\Gamma(a+n+1)} $$



and then apply l'Hospital to arrive at



$$H_{n} =\frac{ \frac{\partial (\Gamma (a+n+1)-n! \Gamma (a+1))}{\partial a}\text{/.}\, a\to 0}
{\frac{\partial (a \Gamma (a+n+1))}{\partial a}\text{/.}\, a\to 0}
= \frac{ \Gamma' (n+1)-n! \Gamma' (1)}

{\Gamma(n+1)}\\= \frac{ \Gamma' (n+1)}{\Gamma(n+1)}+ \gamma
= \frac{\partial \log (\Gamma (n+1))}{\partial n}+\gamma= \psi ^{(0)}(n+1)+\gamma\tag{4b}$$



Here $\gamma= -\Gamma'(1)$ is the Euler-Mascheroni constant and $\psi^{(0)}$ is the digamma function.



Original post



This is not the requested combinatorial interpretation, but there are some interesting identities popping up.



Subtracting these two interesting relations




$$\sum _{k=1}^n \frac{(-1)^{k+1}}{k} \binom{n}{k} = H_{n}\tag{1}$$



and



$$\sum _{k=1}^n \frac{(-1)^{k+1}}{1+k} \binom{n}{k} = \frac{n}{1+n}\tag{2}$$



and observing



$$\frac{1}{k}-\frac{1}{1+k}= \frac{1}{k(k+1)}$$




we find that the left hand side is that of the formula we have to prove while the right hand side gives



$$H_{n}-\frac{n}{1+n} = H_{n+1}-1$$



which proves the OP.



Here we have used the basic relation



$$H_{n}+\frac{1}{1+n} =H_{n+1} $$




Remarks



1) I found it surprising that the small difference in the l.h.s. of (1) and (2) leads to appreciably different results.



2) Formulas (1) and (2) can be generalized to



$$\sum _{k=1}^n \frac{(-1)^{k+1}}{a+k} \binom{n}{k} = \frac{1}{a}-\frac{\Gamma (a) \Gamma (n+1)}{\Gamma (a+n+1)}\tag{3}$$
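All three identities, and the OP's identity itself, can be checked exactly in rational arithmetic for small $n$ (my sketch, not part of the answer):

```python
from fractions import Fraction
from math import comb

def H(n):  # harmonic number H_n
    return sum(Fraction(1, j) for j in range(1, n + 1))

for n in range(1, 12):
    # (1): alternating sum with 1/k gives H_n
    assert sum(Fraction((-1)**(k + 1) * comb(n, k), k)
               for k in range(1, n + 1)) == H(n)
    # (2): alternating sum with 1/(k+1) gives n/(n+1)
    assert sum(Fraction((-1)**(k + 1) * comb(n, k), k + 1)
               for k in range(1, n + 1)) == Fraction(n, n + 1)
    # the OP's identity: 1/(k(k+1)) gives H_{n+1} - 1
    assert sum(Fraction((-1)**(k + 1) * comb(n, k), k * (k + 1))
               for k in range(1, n + 1)) == H(n + 1) - 1
print("all checks passed")
```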


Friday 27 September 2013

real analysis - Can we simplify this sum?



Let $r>4$ and $n>1$ be positive integers. Can we simplify this sum:



$$S=\sum_{m=1}^{n}\frac{2m}{r^{m^2}}$$



I have no idea how to start.


Answer




Such a sum is related to a Jacobi theta function, hence it cannot be "simplified" very much, but it can be approximated quite effectively.



Let $K=\log r$. Then:
$$ S = \frac{1}{K}\sum_{m=1}^{n} 2mK\, e^{-Km^2} $$
can be seen as a Riemann sum, hence:
$$ S \approx \frac{n^2}{K}\int_{0}^{1}2K x\, e^{-K n^2 x^2}\,dx =\frac{1-e^{-K n^2}}{K}.$$


linear algebra - Are there matrices such that $AB=I$ and $BA neq I$





Are there matrices such that $AB=I$ and $BA \neq I$ ?



$A$ and $B$ are square matrices


Answer



With square matrices, no. With non-square matrices, it's perfectly possible. For example,
\begin{align*}
A &= \left(\begin{array}{ccc}1 & 0 & 0 \\ 0 & 1 & 0\end{array}\right) \\
B &= \left(\begin{array}{cc}1 & 0 \\ 0 & 1 \\ 0 & 0\end{array}\right)
\end{align*}
Note that, indeed, we cannot have non-square matrices where $AB$ and $BA$ are identities of appropriate dimensions, because that would imply the existence of an isomorphism between spaces of different dimensions!
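The example can be verified directly (my sketch; `matmul` is a naive list-based helper):

```python
def matmul(X, Y):
    # naive matrix product of nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 0, 0],
     [0, 1, 0]]          # 2 x 3
B = [[1, 0],
     [0, 1],
     [0, 0]]             # 3 x 2

print(matmul(A, B))  # [[1, 0], [0, 1]]                       -> the 2x2 identity
print(matmul(B, A))  # [[1, 0, 0], [0, 1, 0], [0, 0, 0]]      -> not the 3x3 identity
```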



proof verification - Mathematical Induction for sum of squares



I don't understand what I am doing wrong. I worked through this problem before and got it right, but then I misplaced a portion of the answer...I am having difficulties proving the second step in the induction process, substituting n for $k+1$.



[image: the complete worked solution]




The image above shows the complete answer, and I can get as far as the following:



$$
\frac{(k + 1)(2k^2 + 7k + 6)}{6} $$



However when I do the quadratic formula, I get
$$\frac{(k+1)(k-2)(k-1.5)}{6} \ne \frac{(k+1)(k+2)(2k+3)}{6}$$



What am I doing wrong with my factoring?


Answer



The solutions of the equation $2k^2+7k+6=0$ are $k_1=-2$ and $k_2=-\frac32$. Now $$2k^2+7k+6 = 2(k-k_1)(k-k_2)$$ so




$$2k^2+7k+6 = 2(k+2)(k+{3\over 2})=(k+2)(2k+3)$$
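A quick symbolic check of the factorization, and of the induction identity it completes (my sketch using sympy):

```python
from sympy import symbols, factor, simplify, summation

k, i = symbols('k i', integer=True, positive=True)

# factoring via the (negative) roots -2 and -3/2
print(factor(2*k**2 + 7*k + 6))  # factors as (k + 2)*(2*k + 3)

# the factored form completes the induction step for the sum of squares
closed_form = (k + 1) * (k + 2) * (2*k + 3) / 6
assert simplify(summation(i**2, (i, 1, k + 1)) - closed_form) == 0
```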


Thursday 26 September 2013

Can we determine if a complex number is greater than another?




Is it possible to determine if one complex number is greater than another? Or as the question implies is there an "order" to complex numbers (like 1 is before 2 in the real numbers)?



I thought that we could simply use the modulus to determine if one complex number is greater than another, though I believe this can't be the only way used (what if two complex numbers have the same modulus and are quite different). So I thought, if the point is in the uppermost right quadrant of the complex plane, then both real and imaginary parts are positive, so it would be greater than any other complex number in a different quadrant of the complex plane (you might say what about the modulus, but in the reals $1>-2$ even though $|-2|>|1|$). But what if one complex number has a positive real and negative imaginary part, and another one has a negative real and positive imaginary part? (And for argument's sake they both have the same modulus.)



If we can't, why not? In the real numbers it seems (to me) quite trivial at a basic level to determine if one real is greater than another, e.g. $2>1$. What is this property of numbers called? Why don't the complex numbers exhibit this property (if indeed they don't)?


Answer



It is possible to order the complex numbers. For instance, one could define $x_1+iy_1 < x_2+iy_2$ whenever $x_1 < x_2$, or $x_1 = x_2$ and $y_1 < y_2$ (the lexicographic order).


However, it's impossible to define a total order on the complex numbers in such a way that it becomes an ordered field. This is because in an ordered field the square of any non-zero number is $>0$. Hence we would have $-1=i^2>0$, and adding $1$ to both sides would imply $0>1=1^2$, which is a contradiction.


calculus - Radius of Convergence of $\sum_{n = 0}^{\infty} \frac{(-1)^n n!\,x^n}{n^n}$



I'm taking the AP Calculus BC Exam next week and ran into this problem with no idea how to solve it. Unfortunately, the answer key didn't provide explanations, and I'd really, really appreciate it if someone could explain why the answer is $e$.




Which of the following is the radius of convergence of the series $\large{\sum\limits_{n=0}^\infty\frac{(-1)^n n!x^n}{n^n}}$?




$(A) \quad 0,\qquad (B) \quad\frac1e,\qquad (C)\quad1,\qquad (D) \quad e,\qquad (E)\quad \infty$





Thank you all so much.


Answer



Denote $\sum a_n x^n$ the given series then



$$\left\vert\frac{a_{n+1}}{a_n}\right\vert=\left(1+\frac1n\right)^{-n}\xrightarrow{n\to\infty}\frac1e$$
so by the ratio test, the radius of convergence is $e$.
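Numerically (my sketch), the ratio indeed approaches $1/e$:

```python
import math

def ratio(n):  # |a_{n+1} / a_n| = (1 + 1/n)^(-n)
    return (1 + 1/n) ** (-n)

for n in (10, 1000, 10**6):
    print(n, ratio(n))  # tends to 1/e = 0.367879..., so the radius is e
```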


elementary number theory - Proof: Given that $x$ is a positive integer, prove...

Given that x is a positive integer, prove that $f(x) = x^2 + x + 1$ will never be divisible by $5$.



I've tried a proof by contradiction so far:
Assume $f(x)$ is divisible by $5$. Then, $x^2 + x + 1 = 5p$ for some integer $p$.



$x(x+1) + 1 = 5p$



Since $x(x+1)$ has to be even (it is the product of two consecutive integers, one of which is even), $x(x+1)+1$ is odd, so $5p$ is odd and hence $p$ is odd. Thus:




$x(x+1) + 1 = 5(2n+1)$ for some integer $n$.



$x^2 + x - (10n+4) = 0$.



I need to show that x is NOT an integer, but I'm not sure how to proceed from here. Help?
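Independently of the algebraic route above, the claim itself reduces to a finite check, since $x^2 + x + 1 \bmod 5$ depends only on $x \bmod 5$ (my sketch, not the requested proof style):

```python
# x^2 + x + 1 mod 5 depends only on x mod 5, so five cases suffice
values = {(x*x + x + 1) % 5 for x in range(5)}
print(sorted(values))  # [1, 2, 3]: the residue 0 never occurs
```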

proof verification - Are we properly using mathematical induction?




I recently had a lecture in a Computer Science course involving the use of induction. The main disagreement is that a few students and I think our proof should suffice, but our instructor argues that there are gaps in our logic. We really just want to get a full understanding of why there may be gaps in our logic.



Problem: Prove that for $K\geq8$, the value $K$ can be made up of the sum of two different types of coins with value $3$ and $5$.



Our Proof: We show that this is true for $K=8,9,10$.



Let $m$ denote a $3$ coin and $n$ denote a $5$ coin.



$K=8=3+5=m+n$




$K=9=3+3+3=3m$



$K=10=5+5=2n$



So we have established that this holds for $K=8,9,10$. Let $N\geq10$. Assume that our hypothesis holds for $8\leq K\leq N$. We wish to show this implies it holds for $K=N+1$.



So, $K=N+1=(N-2)+3$, and by our inductive hypothesis, $N-2$ can be made up of 3-coins and 5-coins (since $N \ge 10$, so $N-2 \ge 8$), and clearly $3$ is simply one 3-coin; thus $K=N+1$ can be made up of 3-coins and 5-coins.



We have established our base cases ($K=8,9,10$) and shown that if the claim holds for all $8\le K\le N$ (with $N\geq10$), then it holds for $K=N+1$; thus by the principle of mathematical induction, any value $K\geq8$ can be made up of 3-coins and 5-coins.




Is our argument unsound? Our professor's disagreement was that in order to prove it in this way, we actually must prove for 3 different sequences, that is, 3 separate sequences in which our $K$ value is $8,11,14,...$, and $9,12,15,...$, and $10,13,16,...$ respectively. He states this is because induction must rely solely on the immediately prior term, i.e. $A_N$ implies $A_{N+1}$ implies $A_{N+2}$ ... but that our proof only relates $A_{N+1}$ to $A_{N-2}$. Is this really how induction works?



We think this is maybe because of how induction and sequences are defined in our course, but based on our prior math courses, we think this should suffice.


Answer



First we have this principle: take any (mathematically well-formed) property $P(n)$ depending on $n$. If you can prove that



$$P(0)$$



$$\forall n \in \mathbb{N}\, P(n) \implies P(n+1)$$




from this you may conclude $$\forall n \in \mathbb{N}\, P(n)$$



From this induction principle you can derive many other induction principles (finite induction, backwards induction, etc.), but we only need the following two:



You may start from any natural number other than $0$:



$$P(k)$$



$$\forall n \in \mathbb{N_{\ge{k}}}\, P(n) \implies P(n+1)$$




$$\forall n \in \mathbb{N_{\ge{k}}}\, P(n)$$



and a principle of strong induction:



Suppose you prove



$$\forall n \in \mathbb{N_{\ge{k}}}\,\Bigl(\forall m \in \mathbb{N_{\ge{k}}}\, [m < n \implies P(m)]\Bigr) \implies P(n)$$



then you may conclude




$$\forall n \in \mathbb{N_{\ge{k}}} P(n)$$



In words, the first principle of mathematical induction works like this:
you prove the base case (it is absolutely necessary to do this!), and after that you prove that the property holds for a number $n$ knowing only that it holds for the previous number $n-1$.
When you work with the principle of strong induction, not only is there no separate base case to prove; you may also assume that the property holds for all of the previous numbers! How cool is that?
And this is exactly what you did in your proof by strong induction:



Let's suppose that $P(n)$ means that $n$ can be rewritten as a sum of $3$'s and $5$'s.



So you take any number $n \ge 8$. You assume that for all $8 \le k < n$ the property holds, and then you prove that it also holds for $n$. You do this by cases:




if $n = 8$ then $P(n)$



if $n = 9$ then $P(n)$



if $n = 10$ then $P(n)$



if $n > 10$ then $8 \le n - 3 < n$ and thus by assumption $P(n-3)$ from this follows $P(n)$ since it only has one more $3$.



So you see that $P(n)$ holds in all cases thus by principle of strong induction you may conclude $\forall n \in \mathbb{N_{\ge{8}}}\, P(n)$




Now let's see how you could proceed by using first (ordinary) induction:



The idea is to use ordinary induction 3 times to prove that the property holds for 3 different sequences of natural numbers, that is for
$8 + 3n$, $9+3n$, $10+3n$ for $n \ge 0$; from this you will be able to conclude that it in fact holds for any natural number greater than or equal to $8$, since it'll definitely be in one of those sequences.



I'll show how to do that for the first one; the other two are proved the same way.



We prove $$\forall n \in \mathbb{N} \mbox{ we have } P(8+3n)$$




$$n = 0, \mbox{ we certainly have } P(8)$$



Now assume we have $P(8+3k)$ for some $k \in \mathbb{N}$ then we need to prove that $P[8 + 3(k+1)]$.



But $8+3(k+1) = (8 + 3k) + 3$, and since $8 + 3k$ can be rewritten as a sum of $3$'s and $5$'s, $(8+3k)+3$ can also be rewritten this way, since it is the same sum with one more 3.
Thus, we have $P[8 + 3(k+1)]$ and thus by the ordinary induction you may conclude $$\forall n \in \mathbb{N}\, P(8+3n)$$
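The claim itself is easy to confirm by brute force; a small sketch (the helper name `as_3s_and_5s` is just illustrative) that searches for a representation of each $K$:

```python
def as_3s_and_5s(k):
    """Return a pair (number of 3-coins, number of 5-coins) summing to k, or None."""
    for fives in range(k // 5 + 1):
        rest = k - 5 * fives
        if rest % 3 == 0:
            return (rest // 3, fives)
    return None

# every value from 8 upward is representable; 7 is not
print(as_3s_and_5s(8), as_3s_and_5s(7))  # (1, 1) None
```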


abstract algebra - Let $K$ and $L$ be extensions of $F$. Show that $KL$ is Galois over $F$ if both $K$ and $L$ are Galois over $F$. Is the converse true?



Let $K$ and $L$ be extensions of $F$. Show that $KL$ is Galois over $F$ if
both $K$ and $L$ are Galois over $F$. Is the converse true?




I should show $|Gal(KL/F)|=[KL:F]$, and for this I am using $|Gal(K/F)|=[K:F]$, $|Gal(L/F)|=[L:F]$ and $[KL:F]\leq[K:F][L:F]$; I get that far but I do not know what else to do. Could someone help me please? Is there a counterexample to the converse? Thank you in advance.


Answer



To prove that the converse is not true, consider the extension $\mathbb{Q}(\sqrt[3]{2},\zeta_{3})$, which is a Galois extension of $\mathbb{Q}$, but $\mathbb{Q}(\sqrt[3]{2})$ is not! (Edit: take $K=\mathbb{Q}(\sqrt[3]{2})$, $L=\mathbb{Q}(\zeta_{3})$ and $F=\mathbb{Q}$.) As for the direct statement, we might use splitting fields.
Now, if we want to use splitting fields, if $K$ is a splitting field over $F$ for $f(x)$ and $L$ is a splitting field of $g(x)$ over $F$, then $KL$ is a splitting field of...


linear algebra - A problem regarding geometric progressions




Hello, my homework included this problem and I really need a hint on how to solve it.



It says that the numbers $a_1,a_2 \ldots a_n$ form a geometric progression. Knowing $S=a_1+a_2+\ldots+a_n$ and
$P=a_1 \cdot a_2 \cdot a_3\ldots \cdot a_n$, find $S_1= \frac{1}{a_1}+\frac{1}{a_2}+\ldots+\frac{1}{a_n}$.



I somehow need to find a combination of $S$ and $P$ to form $S_1$ I guess.


Answer



Since $a_1,a_2,a_3,\cdots,a_n$ form a geometric progression then
$$
a_k=a_1 r^{k-1}\quad;\;\text{for}\;k=1,2,\cdots,n.

$$
Therefore
$$
\begin{align}
S&=a_1+a_2+a_3+\cdots+a_n\\
&=a_1+a_1r+a_1r^2+\cdots+a_1r^{n-1}\\
&=a_1\left(1+r+r^2+\cdots+r^{n-1}\right)\\
&=a_1\left(\frac{1-r^n}{1-r}\right),
\end{align}
$$




$$
\begin{align}
P&=a_1\cdot a_2\cdot a_3\cdots a_n\\
&=a_1 \cdot a_1r \cdot a_1r^2 \cdots a_1r^{n-1}\\
&=a_1^n r^{0+1+2+\cdots+(n-1)} \\
&=a_1^n r^{\frac n2(n-1)}, \\
\end{align}
$$
and

$$
\begin{align}
S_1&=\frac{1}{a_1}+\frac{1}{a_2}+\frac{1}{a_3}+\cdots+\frac{1}{a_n}\\
&=\frac{1}{a_1}+\frac{1}{a_1r}+\frac{1}{a_1r^2}+\cdots+\frac{1}{a_1r^{n-1}}\\
&=\frac{1}{a_1}\left(1+\frac1r+\frac1{r^2}+\cdots+\frac1{r^{n-1}}\right)\\
&=\frac{1}{a_1}\left(\frac{1-\frac1{r^n}}{1-\frac1r}\right)\\
&=\frac{1}{a_1r^{n-1}}\left(\frac{r^n-1}{r-1}\right).\\
\end{align}
$$
Now, it should be easy to obtain $S_1$ in term of $S$ and $P$. The rest, I leave to you to handle it. Good luck! :)
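As a sanity check, the three formulas above give $S_1 = S/(a_1^2 r^{n-1})$ together with $(a_1^2 r^{n-1})^n = P^2$, and this can be verified exactly on a concrete progression (the values $a_1=3$, $r=2$, $n=5$ are arbitrary choices):

```python
from fractions import Fraction

# a concrete geometric progression: 3, 6, 12, 24, 48
a1, r, n = Fraction(3), Fraction(2), 5
terms = [a1 * r ** k for k in range(n)]

S = sum(terms)
P = 1
for t in terms:
    P *= t
S1 = sum(1 / t for t in terms)

# S1 = S / (a1^2 r^{n-1}), and (a1^2 r^{n-1})^n = P^2, i.e. S1 = S / P^(2/n)
assert S1 == S / (a1 ** 2 * r ** (n - 1))
assert (a1 ** 2 * r ** (n - 1)) ** n == P ** 2
print(S, P, S1)  # 93 248832 31/48
```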



Wednesday 25 September 2013

Prove sequence $a_n=n^{1/n}$ is convergent





How to prove that the sequence $a_n=n^{1/n}$ is convergent using definition of convergence?


Answer



Noticing that $n^\frac{1}{n} > 1$ for all $n$, it all comes down to showing that for any $\epsilon > 0$, there is a $n$ such that $(1+\epsilon) \geq n^\frac{1}{n}$, or by rearranging, that



$$

(1+\epsilon)^n \geq n
$$



Now, choose $m$ such that $(1+\epsilon)^{m}$ is some number bigger than $3$. From here, replace $m$ with $2m$: this squares the left side, making it a little over $3$ times larger, while the right side only doubles. The next doubling still doubles the right side, but increases the left side roughly $9$-fold. Repeating, we can easily see that the left side will at some point overtake the right side, and we have our $n$.
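A short numerical illustration of the limit (not part of the proof): $n^{1/n}=e^{\ln n/n}$ decreases toward $1$ as $n$ grows.

```python
# n^(1/n) = exp(ln n / n) creeps down toward 1 as n grows
for n in (10, 100, 10_000, 1_000_000):
    print(n, n ** (1 / n))
```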


combinatorics - Combinatorial proof of $binom{nk}{2}=kbinom{n}{2}+n^2binom{k}{2}$



This identity was posted a while back but the question had been closed; the question wasn't asked elaborately, though the proof of the identity is a nice application of combinatorics and a good example for future reference.



Also, there was just a flat-out wrong answer which had a couple of upvotes, so I thought to give my own possible proof and sort of 'reopen' the question that was left. Hopefully this time the question will have enough context to stay alive.



As a side note: the given right side does not appear symmetric between $k$ and $n$. However, when $\binom{m}{2}=m(m-1)/2$ is inserted and the products expanded, only symmetric terms remain uncancelled. Rendering the right side as $k^2\binom{n}{2}+n\binom{k}{2}$ would give the same cancellations and the same net value.




Proof



Suppose you have a grid of $n\times k$ dots. Firstly, the number of ways to connect any two dots is $\binom{nk}{2}$. Now consider the right-hand side. We can split into the cases where the two connected dots are in the same column, in the same row, or in both a different column and a different row.



If the two connected dots are in the same column, we can choose two points from any two different rows in $\binom{n}{2}$ ways, and we have $k$ columns for which the two points can be on the same column, which gives a total of $k\binom{n}{2}$ options. The same argument holds for constant rows: $n\binom{k}{2}$.



Now if neither the row nor the column can stay constant, we can pick any point in $nk$ ways, and choose the second point from the remaining $(n-1)(k-1)$ points; one column and one row will be unavailable. This gives us $\frac{nk(n-1)(k-1)}{2}$ options, as we have to rule out the double counting.



We will now show (algebraically) that $n\binom{k}{2}+\frac{nk(n-1)(k-1)}{2}=n^2\binom{k}{2}$.

We have $\frac{nk(n-1)(k-1)}{2}=n(n-1)\cdot\frac{k(k-1)}{2}=n(n-1)\binom{k}{2}=n^2\binom{k}{2}-n\binom{k}{2}$, which leads to the equation above.



Combining these cases gives $\binom{nk}{2}=k\binom{n}{2}+n\binom{k}{2}+\frac{nk(n-1)(k-1)}{2} = k\binom{n}{2}+n^2\binom{k}{2}$



If there are any mistakes or improvements on the arguments, please feel free to point them out.


Answer



I don't think you need as many cases, which saves a little algebra. We have $k$ groups of $n$ dots each, so choosing two of them can be done in $\binom{nk}{2}$ ways.



Alternatively, both are in the same group of $n$ which has $\binom{k}{1} \cdot \binom{n}{2}$ possibilities, or they are in different groups of $n$, which has $\binom{k}{2} \binom{n}{1}^2$ possibilities.




Therefore, $\binom{nk}{2} = k \binom{n}{2} + n^2 \binom{k}{2}$
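Both countings can be cross-checked by brute force over small grids; a minimal sketch:

```python
from math import comb

# brute-force check of C(nk,2) = k*C(n,2) + n^2*C(k,2) on a range of grid sizes
assert all(comb(n * k, 2) == k * comb(n, 2) + n * n * comb(k, 2)
           for n in range(1, 15) for k in range(1, 15))
print("identity holds for all 1 <= n, k < 15")
```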


proof verification - How to prove that $n sqrt{17}$ is irrational?

Prove that $ \sqrt{17}$ is irrational. Subsequently, prove that $n \sqrt{17}$ is irrational too, for any natural number $n \neq 0$. Use the following lemma: let $p$ be a prime number; if $p \mid a^2$ then $p \mid a$ as well.

I proved by contradiction that $\sqrt{17}$ is irrational, but I'm not sure how to prove that $n \sqrt{17}$ is irrational. I tried to prove it by contradiction as well, but I'm not sure if that's what I'm supposed to do. Is it easier to prove with induction?

algebra precalculus - What is going wrong in this "proof" of $0=1$?




\begin{align}
-20 &= -20\\
16-36 &= 25-45\\
4^2-4\times 9&=5^2-5\times 9\\
4^2-4\times 9+81/4&=5^2-5\times 9+81/4\\
4^2-4\times 9+(9/2)^2&=5^2-5\times 9+(9/2)^2\\
\end{align}



Considering the formula $a^2-2ab+b^2=(a-b)^2$, one has

\begin{align}
(4-9/2)^2&=(5-9/2)^2\\
\sqrt{(4-9/2)^2}&=\sqrt{(5-9/2)^2}\\
4-9/2&=5-9/2\\
4&=5\\
4-4&=5-4\\
0&=1
\end{align}


Answer



Here let me simplify your process:




$$
\begin{align}
(-2)^2 = 4 &\implies \sqrt{(-2)^2} = \sqrt{2^2} \\
&\implies -2 = 2 \\
&\implies -2 + 2 = 2 +2 \\
&\implies 0 = 4
\end{align}
$$




QED. Do you see the mistake?


reference request - L'Hôpital's full name

I think we've all heard of L'Hôpital's rule when solving limits of the form $\frac{0}{0}$ or $\frac{\infty}{\infty}$ (for those doing calculus or real analysis at a higher level) but the other day my lecturer went through some interesting facts and background about l'Hôpital (other common names: Guillaume de l'Hôpital or Guillaume François Antoine, Marquis de l'Hôpital).




I went to Google on his full name (since everyone said that that was the shortened version) and the closest that I got to was "Guillaume-François-Antoine Marquis de l'Hôpital, Marquis de Sainte-Mesme, Comte d'Entremont and Seigneur d'Ouques-la-Chaise", even still they mentioned that his name was longer.



So, what was his full name? (I couldn't find any from my searches on Google)



(I was thinking about this while taking a break from my real analysis tutorials :) )

calculus - How do I show that $lim_{x rightarrow infty}(1+frac{a}{x}+frac{b}{x^{3/2}})^x =e^{a}$?




How do I show that $\lim_{x \rightarrow \infty}(1+\frac{a}{x}+\frac{b}{x^{3/2}})^x = e^a$? Actually, I had to deal with something similar yesterday and after thinking about it for quite a while I did it with L'Hospital's rule, but this was very unsatisfactory for me. I am rather interested in a more algebraic proof that this last term does not contribute to the limit, but I found it quite hard to do something more elementary.


Answer



I am expanding the hint in comments. In dealing with limits of expression of type $\{f(x)\}^{g(x)}$ it is much better to take logs rather than write complicated exponents. Let the limit be $L$. Then we have $$\begin{aligned}\log L &= \log\left\{\lim_{x \to \infty}\left(1 + \frac{a}{x} + \frac{b}{x^{3/2}}\right)^{x}\right\}\\
&=\lim_{x \to \infty}\log\left(1 + \frac{a}{x} + \frac{b}{x^{3/2}}\right)^{x}\text{ because log is continuous}\\
&= \lim_{x \to \infty}x\log\left(1 + \frac{a}{x} + \frac{b}{x^{3/2}}\right)\\
&= \lim_{x \to \infty}x\cdot\left(\dfrac{a}{x} + \dfrac{b}{x^{3/2}}\right)\dfrac{\log\left(1 + \dfrac{a}{x} + \dfrac{b}{x^{3/2}}\right)}{\dfrac{a}{x} + \dfrac{b}{x^{3/2}}}\\
&= \lim_{x \to \infty}\left(a + \frac{b}{\sqrt{x}}\right)\cdot \lim_{y \to 0}\frac{\log(1 + y)}{y}\text{ where }y = \frac{a}{x} + \frac{b}{x^{3/2}}\\
&= a\cdot 1 = a\end{aligned}$$ Hence $L = e^{a}$.
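A numerical sanity check of this limit, using `log1p` to keep $x\log\left(1+\frac{a}{x}+\frac{b}{x^{3/2}}\right)$ accurate for large $x$ (the values $a=2$, $b=3$ are arbitrary choices):

```python
import math

a, b = 2.0, 3.0

def f(x):
    # log1p(t) computes log(1+t) accurately when t is tiny
    return math.exp(x * math.log1p(a / x + b / x ** 1.5))

for x in (1e4, 1e6, 1e8, 1e10):
    print(x, f(x), math.exp(a))  # f(x) approaches e^a
```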


Tuesday 24 September 2013

trigonometry - Calculate $frac{1}{sin(x)} +frac{1}{cos(x)}$ if $sin(x)+cos(x)=frac{7}{5}$

If



\begin{equation}
\sin(x) + \cos(x) = \frac{7}{5},
\end{equation}



then what's the value of



\begin{equation}

\frac{1}{\sin(x)} + \frac{1}{\cos(x)}\text{?}
\end{equation}



That is, find the values of $\sin(x)$ and $\cos(x)$ (the denominators) without using trigonometric identities.



The function $\sin x+\cos x$ could be transformed using some trigonometric identities to a single function. In fact, WolframAlpha says it is equal to $\sqrt2\sin\left(x+\frac\pi4\right)$ and there also are some posts on this site about this equality. So probably in this way we could calculate $x$ from the first equation - and once we know $\sin x$ and $\cos x$, we can calculate $\frac1{\sin x}+\frac1{\cos x}$. Is there a simpler solution (perhaps avoiding explicitly finding $x$)?
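For what it's worth, the standard algebraic route (squaring the given equation, which does use the Pythagorean identity) can be carried out exactly and cross-checked numerically against the $\sqrt2\sin(x+\pi/4)$ form; a sketch:

```python
from fractions import Fraction
import math

# squaring sin x + cos x = 7/5 gives 1 + 2 sin x cos x = 49/25
s_plus_c = Fraction(7, 5)
sc = (s_plus_c ** 2 - 1) / 2   # sin x * cos x = 12/25
answer = s_plus_c / sc          # 1/sin + 1/cos = (sin+cos)/(sin*cos)
print(answer)                   # 35/12

# numeric cross-check: solve sqrt(2) sin(x + pi/4) = 7/5 for x
x = math.asin(1.4 / math.sqrt(2)) - math.pi / 4
print(1 / math.sin(x) + 1 / math.cos(x))
```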

algebra precalculus - Proving $1^3+ 2^3 + cdots + n^3 = left(frac{n(n+1)}{2}right)^2$ using induction



How can I prove that




$$1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$$




for all $n \in \mathbb{N}$? I am looking for a proof using mathematical induction.




Thanks


Answer



You are trying to prove something of the form, $$A=B.$$ Well, both $A$ and $B$ depend on $n$, so I should write, $$A(n)=B(n).$$ First step is to verify that $$A(1)=B(1).$$ Can you do that? OK, then you want to deduce $$A(n+1)=B(n+1)$$ from $A(n)=B(n)$, so write out $A(n+1)=B(n+1)$. Now you're trying to get there from $A(n)=B(n)$, so what do you have to do to $A(n)$ to turn it into $A(n+1)$, that is (in this case) what do you have to add to $A(n)$ to get $A(n+1)$? OK, well, you can add anything you like to one side of an equation, so long as you add the same thing to the other side of the equation. So now on the right side of the equation, you have $B(n)+{\rm something}$, and what you want to have on the right side is $B(n+1)$. Can you show that $B(n)+{\rm something}$ is $B(n+1)$?


real analysis - Generalized Fresnel integral $int_0^infty sin x^p , {rm d}x$




I am stuck at this question. Find a closed form (that may actually contain the Gamma function) of the integral



$$\int_0^\infty \sin (x^p)\, {\rm d}x$$



I am interested in a Laplace approach, double integral etc. For some weird reason I cannot get it to work.



I am confident that a closed form may actually exist since for the integral:



$$\int_0^\infty \cos x^a \, {\rm d}x = \frac{\pi \csc \frac{\pi}{2a}}{2a \Gamma\left(1-\frac{1}{a}\right)}$$




there exists a closed form with $\Gamma$, and it can actually be reduced further until it contains no $\Gamma$. But trying to apply the method of Laplace transforms that I have seen for that one, I cannot get it to work for the above integral that I am interested in.



May I have a help?


Answer



If $p>1$,
$$ I(p)=\int_{0}^{+\infty}\sin(x^p)\,dx = \frac{1}{p}\int_{0}^{+\infty}x^{\frac{1}{p}-1}\sin(x)\,dx \tag{1}$$
but since $\mathcal{L}(\sin(x))=\frac{1}{s^2+1}$ and $\mathcal{L}^{-1}\left(x^{1/p-1}\right)=\frac{s^{-1/p}}{\Gamma\left(1-\frac{1}{p}\right)}$ we have:
$$ I(p)=\frac{1}{p\,\Gamma\left(1-\frac{1}{p}\right)}\int_{0}^{+\infty}\frac{s^{-1/p}}{1+s^2}\,ds = \color{red}{\frac{\pi}{2p\,\Gamma\left(1-\frac{1}{p}\right)}\,\sec\left(\frac{\pi}{2p}\right)}.\tag{2}$$
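Formula $(2)$ can be spot-checked numerically. For $p=2$, the substitutions $s=u^2$ on $[0,1]$ and $s=1/t$, $t=u^2$ on $[1,\infty)$ turn the $s$-integral into a single smooth integral over $[0,1]$; a sketch using a hand-rolled Simpson rule:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# int_0^oo s^{-1/2}/(1+s^2) ds  ==  int_0^1 2(1+u^2)/(1+u^4) du  (after substitution)
J = simpson(lambda u: 2.0 * (1.0 + u * u) / (1.0 + u ** 4), 0.0, 1.0)
closed = (math.pi / 2) / math.cos(math.pi / 4)   # (pi/2) sec(pi/4)
I2 = J / (2 * math.gamma(1 - 0.5))               # formula (2) with p = 2
print(J, closed, I2, math.sqrt(math.pi / 8))     # I(2) = sqrt(pi/8)
```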


real analysis - Derivation of an integral result




In this thread it is mentioned that:



$$\int_0^\infty \left ( \mathrm{arccot} x \right )^3\; \mathrm{d}x = \frac{3 \pi^2 \ln 2}{4} - \frac{21\zeta(3)}{8}$$



where $\zeta$ is the Riemann zeta function.



The steps to the solution I took are:



\begin{align*}
\int_{0}^{\infty} \left ( \mathrm{arccot}x \right )^3 \, \mathrm{d}x &= \int_{0}^{\infty} \left ( x \right ) ' \left ( \mathrm{arccot}x \right )^3 \, \mathrm{d}x \\

&=\left [ x \left ( \mathrm{arccot}x \right )^3 \right ]_0^{\infty} +3 \int_{0}^{\infty} \frac{x\left( \mathrm{arccot }x \right )^2}{ x^2+1} \, \mathrm{d}x \\
&=3 \left [ \frac{\ln \left ( 1+x^2 \right ) \, \left (\mathrm{arccot}(x) \right )^2}{2} \right ]_0^{\infty} + 3 \int_{0}^{\infty} \frac{\ln \left ( 1+x^2 \right ) \mathrm{arccot} x}{x^2+1} \, \mathrm{d}x \\
&=3 \left ( \int_{0}^{1} \frac{\ln \left ( 1+x^2 \right ) \mathrm{arccot} x}{x^2+1} \, \mathrm{d}x + \int_1^{\infty} \frac{\ln \left ( 1+x^2 \right ) \mathrm{arccot} x}{x^2+1} \, \mathrm{d}x \right ) \\
&=3 \left ( \int_{0}^{1} \frac{\ln \left ( 1+x^2 \right ) \mathrm{arccot} x}{x^2+1} \, \mathrm{d}x + \int_{0}^{1} \frac{\ln \left ( 1+\frac{1}{x^2} \right ) \arctan x}{1+\frac{1}{x^2}} \cdot \frac{1}{x^2} \, \mathrm{d}x \right ) \\
&=3 \left ( \int_{0}^{1} \frac{\ln \left ( 1+x^2 \right ) \mathrm{arccot} x}{x^2+1} \, \mathrm{d}x + \int_{0}^{1} \frac{\left (\ln \left ( 1+x^2 \right ) - 2 \ln x \right ) \arctan x}{1+x^2} \, \mathrm{d}x \right )\\
&=3\left ( \int_{0}^{1} \frac{\ln \left ( 1+x^2 \right )\left ( \mathrm{arccot} x + \arctan x \right )}{x^2+1} \, \mathrm{d}x - 2 \int_{0}^{1} \frac{\ln x \arctan x}{1+x^2} \, \mathrm{d}x \right ) \\
&=3\left ( \frac{\pi}{2} \int_{0}^{1} \frac{\ln \left ( 1+x^2 \right )}{1+x^2} \, \mathrm{d}x - 2 \int_{0}^{1} \frac{\ln x \arctan x}{1+x^2} \, \mathrm{d}x \right )
\end{align*}



Let us now calculate the the first integral:




\begin{align*}
\int_{0}^{1} \frac{\ln \left ( 1+x^2 \right )}{1+x^2} \, \mathrm{d}x &\overset{x=\tan \theta}{=\! =\! =\! =\! =\!} \int_{0}^{\pi/4} \frac{\ln \left ( 1+\tan^2 \theta \right )}{1+\tan^2 \theta} \cdot \sec^2 \theta \, \mathrm{d}\theta\\
&=\int_{0}^{\pi/4} \ln \left ( 1+ \tan^2 \theta \right ) \, \mathrm{d}\theta \\
&= \int_{0}^{\pi/4} \ln \sec^2 \theta \, \mathrm{d} \theta \\
&=2 \int_{0}^{\pi/4} \ln \sec \theta \, \mathrm{d} \theta \\
&=-2 \int_{0}^{\pi/4} \ln \cos \theta \, \mathrm{d} \theta \\
&=-2 \left ( -\int_{0}^{\pi/4} \left (\sum_{n=1}^{\infty} (-1)^n \frac{\cos 2n \theta}{n} - \ln 2 \right ) \, \mathrm{d}\theta \right )\\
&=2 \sum_{n=1}^{\infty} \frac{(-1)^n}{n} \int_{0}^{\pi/4} \cos 2n\theta \, \mathrm{d} \theta +2 \int_{0}^{\pi/4} \ln 2 \, \mathrm{d}\theta \\
&= 2 \sum_{n=1}^{\infty} \frac{(-1)^n \sin \frac{n \pi}{2}}{2n^2} +\frac{\pi \ln 2}{2} \\

&= \sum_{n=1}^{\infty} \frac{(-1)^n \sin \frac{n \pi}{2}}{n^2} + \frac{\pi \ln 2}{2} \\
&=\frac{\pi \ln 2}{2} - \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^2}\\
&= \frac{\pi \ln 2}{2} - \mathcal{G}
\end{align*}



where $\mathcal{G}$ is the Catalan's constant.



Let's move on to the second integral:



\begin{align*}

\int_{0}^{1} \frac{\ln x \arctan x}{1+x^2} \, \mathrm{d}x &=\int_{0}^{1} \ln x \sum_{n=1}^{\infty} (-1)^{n-1} \left ( \mathcal{H}_{2n} - \frac{\mathcal{H}_n}{2} \right ) x^{2n-1} \, \mathrm{d}x \\
&=\sum_{n=1}^{\infty} (-1)^{n-1} \left ( \mathcal{H}_{2n} - \frac{\mathcal{H}_n}{2} \right ) \int_{0}^{1}x^{2n-1} \ln x \, \mathrm{d}x \\
&=-\frac{1}{4}\sum_{n=1}^{\infty} (-1)^{n-1} \frac{\mathcal{H}_{2n}- \frac{\mathcal{H}_n}{2}}{n^2} \\
&= \frac{1}{4} \sum_{n=1}^{\infty} (-1)^n \frac{\mathcal{H}_{2n} - \frac{\mathcal{H}_n}{2}}{n^2} \\
&=\frac{1}{4} \sum_{n=1}^{\infty} (-1)^n \frac{\mathcal{H}_{2n}}{n^2} -\frac{1}{8} \sum_{n=1}^{\infty} (-1)^{n} \frac{\mathcal{H}_n}{n^2}
\end{align*}



The second sum is an old chestnut evaluating to



$$\sum_{n=1}^{\infty} (-1)^n \frac{\mathcal{H}_n}{n^2} = -\frac{5 \zeta(3)}{8}$$




see for example this link.The first sum should bring the $\mathcal{G}$ in . The question is how?


Answer



\begin{align}J&=\int_0^\infty \frac{\arctan x\ln x}{1+x^2}\,dx\\
K&=\int_0^1 \frac{\arctan x\ln x}{1+x^2}\,dx
\end{align}



Define on $[0;+\infty[$ the function,
\begin{align}R(x)&=\int_0^x \frac{\ln t}{1+t^2}\,dt\\
&=\int_0^1 \frac{x\ln(tx)}{1+t^2x^2}\,dt\\

\end{align}

Observe that,
\begin{align}\displaystyle \lim_{x\rightarrow +\infty }R(x)=\int_0^\infty \frac{\ln x}{1+x^2}\,dx=0\end{align}



Perform integration by parts,
\begin{align}J&=\Big[R(x)\arctan x\Big]_0^\infty-\int_0^\infty \left(\int_0^1 \frac{x\ln(tx)}{(1+t^2x^2)(1+x^2)}\,dt\right)\,dx\\
&=-\int_0^\infty\left(\int_0^1 \frac{x\ln x}{(1+t^2x^2)(1+x^2)}\,dt\right)\,dx-\int_0^1 \left(\int_0^\infty \frac{x\ln t}{(1+t^2x^2)(1+x^2)}\,dx\right)\,dt\\
&=-\int_0^\infty \ln x\left[\frac{\arctan(tx)}{1+x^2}\right]_{t=0}^{t=1} \,dx-\int_0^1 \ln t\left[\frac{\ln\left(\frac{1+x^2}{1+t^2x^2}\right)}{2(1-t^2)}\right]_{x=0}^{x=\infty}\,dt\\
&=-J+\int_0^1 \frac{\ln^2 t}{1-t^2}\,dt\\
&=-J+\int_0^1 \frac{\ln^2 t}{1-t}\,dt-\int_0^1 \frac{t\ln^2 t}{1-t^2}\,dt\\

\end{align}

In the latter integral perform the change of variable $\displaystyle y=t^2$,
\begin{align}J&=-J+\left(1-\frac{1}{8}\right)\int_0^1 \frac{\ln^2 t}{1-t}\,dt\\
&=-J+\left(1-\frac{1}{8}\right)\times 2\zeta(3)\\
\end{align}

Therefore,
\begin{align}\boxed{J=\dfrac{7}{8}\zeta(3)}\end{align}
But,
\begin{align}J&=\int_0^1 \frac{\arctan x\ln x}{1+x^2}\,dx+\int_1^\infty \frac{\arctan x\ln x}{1+x^2}\,dx\\
&=K+\int_1^\infty \frac{\arctan x\ln x}{1+x^2}\,dx\end{align}


Perform the change of variable $y=\dfrac{1}{x}$,
\begin{align}J&=K-\int_0^1 \frac{\arctan \left(\frac{1}{x}\right)\ln x}{1+x^2}\,dx\\
&=2K+\frac{1}{2}\pi\text{G}
\end{align}

therefore,
\begin{align}\boxed{K=\dfrac{7}{16}\zeta(3)-\frac{1}{4}\pi\text{G}}\end{align}



NB:



For $x>0,\arctan x+\arctan\left(\dfrac{1}{x}\right)=\dfrac{\pi}{2}$




$\displaystyle \int_0^1 \dfrac{\ln x}{1+x^2}\,dx=-\text{G}$



$\text{G}$ the Catalan's constant.



Addendum:
\begin{align}L&=\int_0^1 \frac{\ln(1+x^2)}{1+x^2}\,dx\end{align}
Perform the change of variable $x=\tan \theta$,
\begin{align}L&=-2\int_0^{\frac{\pi}{4}}\ln(\cos \theta)\,d\theta\end{align}




\begin{align}A&=\int_0^{\frac{\pi}{4}}\ln(\cos x)\,dx\\
B&=\int_0^{\frac{\pi}{4}}\ln(\sin x)\,dx\\
A+B&=\int_0^{\frac{\pi}{4}}\ln(\sin x\cos x)\,dx\\
&=\int_0^{\frac{\pi}{4}}\ln\left(\frac{\sin(2x)}{2}\right)\,dx\\
&=\int_0^{\frac{\pi}{4}}\ln\left(\sin(2x)\right)\,dx-\frac{\pi}{4}\ln 2\\
\end{align}

Perform the change of variable $\displaystyle y=2x$,
\begin{align}A+B&=\frac{1}{2}\int_0^{\frac{\pi}{2}}\ln\left(\sin x\right)\,dx-\frac{\pi}{4}\ln 2\\
&=\frac{1}{2}\times -\frac{\pi}{2}\ln 2-\frac{\pi}{4}\ln 2\\
&=-\frac{\pi}{2}\ln 2\\

B-A&=\int_0^{\frac{\pi}{4}}\ln\left(\tan x\right)\,dx\\
&=-\text{G}\\\end{align}

Therefore,
\begin{align}A&=\frac{1}{2}\text{G}-\frac{1}{4}\pi\ln 2\\
B&=-\frac{1}{2}\text{G}-\frac{1}{4}\pi\ln 2
\end{align}

Therefore,
\begin{align}\boxed{L=\frac{1}{2}\pi\ln 2-\text{G}}\end{align}


sequences and series - If $a_{n+1}=lfloor 1.05times a_nrfloor$, does there exist $N$ such that $a_Nequiv0 $(mod$ $$10$)?

I've known the following theorem.




Theorem: Supposing that $a_{n+1}=\lfloor 1.05\times a_n\rfloor$ for any natural number $n$, there exists $N$ such that $a_N\equiv0 \ $(mod$\ $$10$) for any integer $20\le a_1\le100$.



Proof: $\{a_n\}$ is a monotonically increasing sequence, so for each $a_1$ consider the first term $a_n$ with $a_n\ge100$. Checking every case shows that this term is always one of $100, 101, 102, 103$. Then, you get $$101\to106\to111\to116\to121\to127\to133\to139\to145\to152\to159\to166\to174\to182\to191\to200$$



$$102\to107\to112\to117\to122\to128\to134\to140$$



$$103\to108\to113\to118\to123\to129\to135\to141\to148\to155\to162\to170,$$ so the proof is completed.
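The case check in the proof (and the theorem itself) is easy to replay in exact integer arithmetic, since $\lfloor 1.05a\rfloor = \lfloor 21a/20\rfloor$; a sketch (the name `reaches_multiple_of_10` is illustrative):

```python
def reaches_multiple_of_10(a, max_steps=10_000):
    # a_{n+1} = floor(1.05 * a_n) = 21*a_n // 20, exact in integers (no float error)
    for _ in range(max_steps):
        if a % 10 == 0:
            return True
        a = 21 * a // 20
    return False

print(all(reaches_multiple_of_10(a1) for a1 in range(20, 101)))  # True
```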



Then, here are my questions.




As far as I know, the next question still remains unsolved.



Question1: Supposing that $a_{n+1}=\lfloor 1.05\times a_n\rfloor$ for any natural number $n$, does there exist $N$ such that $a_N\equiv0 \ $(mod$\ $$10$) for any integer $a_1\ge20\ $?



It is likely that such $N$ would exist, but I'm facing difficulty.
I'm also interested in the following generalization.



Question2: Supposing that $\alpha\gt1$ is a real number and that $a_{n+1}=\lfloor \alpha\times a_n\rfloor$ for any natural number $n$, does there exist $N$ such that $a_N\equiv0 \ $(mod$\ $$10$) for any integer $a_1\ge \frac{1}{\alpha-1} \ $?




Any help would be appreciated.

combinatorics - Inductive Proof for Vandermonde's Identity?



I am reading up on Vandermonde's Identity, and so far I have found proofs for the identity using combinatorics, sets, and other methods. However, I am trying to find a proof that utilizes mathematical induction. Does anyone know of such a proof?



For those who don't know Vandermonde's Identity, here it is:



For every $m \ge 0$, and every $0 \le r \le m$, if $r \le n$, then



$$ \binom{m+n}r = \sum_{k=0}^r \binom mk \binom n{r-k} $$



Answer



We have using the recursion formula for binomial coefficients the following for the induction step
\begin{align*}
\binom{m + (n+1)}r &= \binom{m+n}r + \binom{m+n}{r-1}\\
&= \sum_{k=0}^r \binom mk\binom n{r-k} + \sum_{k=0}^{r-1} \binom mk\binom{n}{r-1-k}\\
&= \binom mr + \sum_{k=0}^{r-1} \binom mk\biggl(\binom n{r-k} + \binom n{r-1-k}\biggr)\\
&= \binom mr\binom{n+1}0 + \sum_{k=0}^{r-1} \binom mk\binom{n+1}{r-k}\\
&= \sum_{k=0}^r \binom mk \binom{n+1}{r-k}
\end{align*}
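The identity itself (induction aside) can be brute-force checked for small parameters; a sketch relying on `math.comb` returning $0$ when the lower index exceeds the upper:

```python
from math import comb

def vandermonde_rhs(m, n, r):
    # math.comb(n, j) is 0 for j > n, so out-of-range terms vanish automatically
    return sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))

assert all(comb(m + n, r) == vandermonde_rhs(m, n, r)
           for m in range(10) for n in range(10) for r in range(m + n + 1))
print("Vandermonde's identity verified for m, n < 10")
```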


calculus - Convergence or Divergence of $sum_{n=1}^{infty}(3^{1/n}-1)sin(pi/n)$



How to determine convergence of this series. $$\sum_{n=1}^{\infty}(3^{1/n}-1)\sin(\pi/n)$$
I've tried using comparison test:




$\sin(\pi/n) \leq \pi/n $, so:





$$(3^{1/n}-1)\sin(\pi/n)<(3^{1/n}-1)\pi/n < (3^{\frac{1}{n}})\frac{\pi}{n}$$
By the comparison test, if $\sum(3^{\frac{1}{n}})\frac{\pi}{n}$ were convergent, then so would be the initial series.




But $\sum(3^{\frac{1}{n}})\frac{\pi}{n}$ is divergent.




I also know that $\sum\sin(\pi/n)$ is divergent. How would that help? Can you give a hint?


Answer



HINT




We have that




  • $3^x = e^{x\log 3}=1+x\log 3 +O(x^2)$

  • $\sin x = x +O(x^2)$



therefore




$$(3^{1/n}-1)\sin(\pi/n)\sim \frac {\pi\log 3} {n^2}$$



then refer to limit comparison test with $\sum \frac 1 {n^2}$.
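The equivalence $(3^{1/n}-1)\sin(\pi/n)\sim \frac{\pi\log 3}{n^2}$ can also be seen numerically; a sketch printing the ratio, which should tend to $1$:

```python
import math

def term(n):
    return (3 ** (1 / n) - 1) * math.sin(math.pi / n)

# ratio of the term to (pi * log 3) / n^2 should approach 1
for n in (10, 100, 1000, 10000):
    print(n, term(n) * n * n / (math.pi * math.log(3)))
```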


elementary number theory - simple congruence with large power and large moduli

I am trying to compute $2^{111455} \pmod{2012}$, but since the numbers are too large, I don't know how to compute it efficiently. I've got: $2012=2^2 \times 503$, $503$ is a prime. And that $111455=2012 \times 55 + 795$ but I don't know if it is useful.
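Both observations are useful: by Fermat's little theorem $2^{502}\equiv 1 \pmod{503}$, and $111455 = 502\cdot 222 + 11$, so $2^{111455}\equiv 2^{11}=2048\equiv 36 \pmod{503}$; combined with $2^{111455}\equiv 0 \pmod 4$, the Chinese remainder theorem gives $36 \pmod{2012}$. The square-and-multiply algorithm that makes such powers cheap by hand or by machine can be sketched as:

```python
def mod_pow(base, exp, mod):
    # square-and-multiply: process the exponent bit by bit, reducing mod `mod`
    result, base = 1, base % mod
    while exp:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

print(mod_pow(2, 111455, 2012))  # 36 (agrees with Python's built-in pow(2, 111455, 2012))
```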

Monday 23 September 2013

abstract algebra - Explicit construction of a finite field with $8$ elements


Give an explicit contruction of the finite field $K$ containing $8$ elements, as a quotient of an appropriate polynomial ring. Include the multiplication table of the group $K^{*}=K\setminus \{0\},$ and write $K^{*}=\langle \alpha \rangle$ for some $\alpha \in K.$




I have no idea how to approach this problem. Can anyone guide me in the right direction? Thanks.
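Not a full solution, but the standard construction $K=\mathbb{F}_2[x]/(x^3+x+1)$ can be experimented with in a few lines: elements are encoded as 3-bit integers (bit $i$ is the coefficient of $x^i$), $x^3+x+1$ is irreducible over $\mathbb{F}_2$, and the powers of $\alpha = x$ sweep out all of $K^{*}$:

```python
# GF(8) = GF(2)[x]/(x^3 + x + 1); elements are 3-bit ints, bit i = coeff of x^i
MOD = 0b1011  # x^3 + x + 1

def gf8_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:   # degree reached 3: reduce using x^3 = x + 1
            a ^= MOD
    return r

alpha = 0b010            # the class of x
powers = [1]
for _ in range(6):
    powers.append(gf8_mul(powers[-1], alpha))
print(powers)            # [1, 2, 4, 3, 6, 7, 5] : all 7 nonzero elements
```

The multiplication table of $K^{*}$ then falls out of `gf8_mul` applied to all pairs.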

summation - A formula for the power sums: $1^n+2^n+dotsc +k^n=,$?

Is there an explicit formula for the expression $1^n + 2^n + \dotsc + k^n\,$?



I know that for $n=1$ the explicit formula becomes $S=k(k+1)/2$ and for $n=3$ the formula becomes $S^2$. But what about general $n$?




I know there is a way using the Taylor expansion of $f(x)=1/(1-x)=1+x+x^2+\dotsc\;$, by differentiating it and then multiplying by $x$ and then differentiating again. Repeating this $n$ times, we get



$$\left(x\frac{d}{dx}\right)^n f(x)=x+2^nx^2+3^nx^3+\dotsb.$$



Now do the same process but with the function $g(x)=x^{k+1}f(x)$. Then subtract them and we get $x+2^nx^2+\dots +k^nx^k$. Because we have explicit formulas for $f(x)$ and $g(x)$, we can find an explicit formula by this process for arbitrary $n$. A big problem is that as $n$ grows, it takes a lot of time to find the explicit formula. My question is therefore: are there other ways?
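For comparison, the general answer is Faulhaber's formula in terms of Bernoulli numbers; a minimal exact-arithmetic sketch (using the $B_1=+\tfrac12$ convention, which is the convenient one for sums starting at $1$):

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(n):
    # Bernoulli numbers B_0..B_n, with B_1 flipped to +1/2 at the end
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    if n >= 1:
        B[1] = Fraction(1, 2)
    return B

def power_sum(n, k):
    # Faulhaber: 1^n + 2^n + ... + k^n as a polynomial in k
    B = bernoulli_plus(n)
    return sum(comb(n + 1, j) * B[j] * k ** (n + 1 - j) for j in range(n + 1)) / (n + 1)

print(power_sum(1, 10), power_sum(3, 10))  # 55 and 3025 = 55^2
```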

combinatorics - Asymptotic Gilbert-Varshamov Bound Using Hilbert's Entropy Formula

I am reading Walker's book Codes and Curves and am having trouble proving this Lemma regarding the Asymptotic Gilbert-Varshamov bound.



Suppose that $q$ is a prime power and we define \begin{align*}
V_q(n,r) &:= \sum\limits_{i=0}^r {n\choose i}(q-1)^i
\end{align*}



We define the Hilbert entropy function as \begin{align*}
H_q(x) &:= \cases{0, & $x = 0$\\
x\log_q(q-1)-x\log_q x - (1-x)\log_q(1-x), & $0 < x \leq 1-\frac{1}{q}$}

\end{align*}



There is a lemma that states if $0\leq\lambda\leq 1-\frac{1}{q}$ then \begin{align*}
\lim\limits_{n\to\infty}\frac{1}{n} \log_q V_q(n,\lfloor \lambda n\rfloor) &= H_q(\lambda)
\end{align*}



Walker suggests using Stirling's approximation to get this limit. Here is what I have so far: First, I find that if $0<\lambda \leq 1-\frac{1}{q}$ then \begin{align*}
H_q(\lambda) &= \lambda\log_q(q-1)-\lambda\log_q \lambda - (1-\lambda)log_q(1-\lambda)\\
&= \log_q\left(\frac{(q-1)^\lambda}{\lambda^\lambda(1-\lambda)^{1-\lambda}}\right)
\end{align*}




Then, try to calculate $\lim\limits_{n\to\infty} \frac{1}{n}\log_q V_q(n,\lfloor \lambda n\rfloor)$.
\begin{align*}
\lim\limits_{n\to\infty} \frac{1}{n}\log_q V_q(n,\lfloor \lambda n\rfloor) &= \lim\limits_{n\to\infty} \log_q\left(\left(\sum\limits_{i=0}^{\lfloor \lambda n\rfloor} {n\choose i}(q-1)^i\right)^\frac{1}{n}\right)\\
&= \log_q\left(\lim\limits_{n\to\infty} \left(\sum\limits_{i=0}^{\lfloor \lambda n\rfloor} {n\choose i}(q-1)^i\right)^\frac{1}{n} \right)
\end{align*}



Looking only at the terms inside the logarithm, I would like to show that \begin{align*}
\lim\limits_{n\to\infty} \left(\sum\limits_{i=0}^{\lfloor \lambda n\rfloor} {n\choose i}(q-1)^i\right)^\frac{1}{n} &= \frac{(q-1)^\lambda}{\lambda^\lambda(1-\lambda)^{1-\lambda}}
\end{align*}




Unfortunately, I get stuck here. This stackexchange post pointed me to this resource which essentially shows the case for $q=2$ in exercise 9.42. It looks easy to generalize to this problem using the provided method. However, I do not quite understand this crucial step:



If we let $m = \lfloor\lambda n\rfloor$ and $\alpha = m/n < \tfrac12$, then bounding the partial geometric sum by the full geometric series gives \begin{align*}
{n\choose m}\sum\limits_{i=0}^m \left(\frac{\alpha}{1-\alpha}\right)^i \leq {n\choose m}\frac{1-\alpha}{1-2\alpha}
\end{align*}

This step seems so simple based off of geometric series, but I cannot get my calculations into the provided form.

calculus - No complex roots of a polynomial implies its derivative has no complex roots



It's a well known result that if a degree $n \geq 2$ polynomial $P(x)$ with real coefficients has $n$ distinct real roots, then its derivative $P'(x)$ will have $n-1$ distinct real roots. This is a consequence of Rolle's Theorem.



I would like to know if the following statement true:



If a degree $n \geq 2$ polynomial $P(x)$ with real coefficients has $n$ real roots (not necessarily distinct), may consist of multiplicities $>1$), then $P'(x)$ will have $n-1$ real roots (not necessarily distinct).




I've spent a while looking for a proof or a counterexample to this online but had no luck in doing so. I've also tried working out a proof for myself but ran into trouble when considering multiplicities of roots greater than one.


Answer



This follows by Rolle's theorem by essentially the same argument. Let $a_1

integration - How to show the following inequality in Measure Theory

How to show the following inequality in Measure Theory:





If $f$ is a non-negative measurable function defined on a measurable set $E$ then for any $\lambda >0$ we have $m\{x\in E:f(x)>\lambda \}\le\frac{1}{\lambda}\int _E f$ where $m$ denotes measure of a set




I can't figure out how to approach the above problem. Please give some hints.
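As a hint-adjacent illustration: the inequality (Markov's/Chebyshev's inequality) compares $\lambda$ times the measure of the super-level set with the integral of $f$. A numerical sketch on an assumed example ($E=[0,1]$, $f(x)=4x$, $\lambda=2$, discretized by a midpoint sum; all choices are made here for concreteness):

```python
# Assumed example: E = [0,1], f(x) = 4x, lambda = 2.  Then
# m{x : f(x) > 2} = m{x : x > 1/2} = 1/2, while (1/2) * integral of f = 1,
# so the inequality m{f > lambda} <= (1/lambda) * integral holds.
n = 100000
dx = 1.0 / n
lam = 2.0

measure_above = 0.0  # approximates m{x in E : f(x) > lam}
integral = 0.0       # approximates the integral of f over E
for i in range(n):
    x = (i + 0.5) * dx
    fx = 4 * x
    if fx > lam:
        measure_above += dx
    integral += fx * dx

print(measure_above, integral / lam)
assert measure_above <= integral / lam + 1e-9
```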

calculus - Convergence of $\frac{1}{4} + \frac{1\cdot 9}{4 \cdot 16} + \frac{1\cdot9\cdot25}{4\cdot16\cdot36} + \dotsb$



I am investigating the convergence of the following series: $$\frac{1}{4} + \frac{1\cdot 9}{4 \cdot 16} + \frac{1\cdot9\cdot25}{4\cdot16\cdot36} + \frac{1\cdot9\cdot25\cdot49}{4\cdot16\cdot36\cdot64} + \dotsb$$



This is similar to the series $$\frac{1}{4} + \frac{1\cdot 3}{4 \cdot 6} + \frac{1\cdot3\cdot5}{4\cdot6\cdot8} + \dotsb,$$which one can show is convergent by Raabe's test, as follows: $$\frac{a_{n+1}}{a_n}= \frac{2n-1}{2n+2}= \frac{2n + 2 - 3}{2n+2}=1 - \frac{3}{2(n+1)}.$$



However, in the series I am looking at, $\frac{a_{n+1}}{a_n}=\frac{(2n-1)^2}{(2n)^2}= 1 - \frac{4n-1}{4n^2}$, so Raabe's test doesn't work (this calculation also happens to show that the ratio and root tests will not work).




Any ideas for this one? It seems the only thing left is comparison...


Answer



I just read that there is a partial converse to Raabe's test: if $\frac{a_{n+1}}{a_n}\geq 1-\frac{p}{n}$ with $p\leq 1$ for all large $n$, then the series diverges. Since here $\frac{a_{n+1}}{a_n} = 1 - \frac{4n-1}{4n^2} \geq 1 - \frac{1}{n}$, that settles that the series diverges.
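A numerical sketch supports this: by Wallis's product the terms satisfy $a_n = \left(\frac{(2n-1)!!}{(2n)!!}\right)^2 \sim \frac{1}{\pi n}$, so the partial sums grow like a multiple of the harmonic series (the cutoff $N$ below is an assumption):

```python
import math

# a_n = (1*9*25*...*(2n-1)^2) / (4*16*36*...*(2n)^2) = ((2n-1)!!/(2n)!!)^2.
# By Wallis's product, a_n ~ 1/(pi*n), so the series diverges like the
# harmonic series, consistent with the partial converse of Raabe's test.
N = 20000
a = 0.25        # a_1 = 1/4
partial = 0.25
for n in range(2, N + 1):
    a *= ((2 * n - 1) / (2 * n)) ** 2
    partial += a

print(a * math.pi * N)  # tends to 1, confirming a_n ~ 1/(pi*n)
print(partial)          # partial sums grow like (1/pi) * log(N)
assert abs(a * math.pi * N - 1) < 1e-3
```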


calculus - Evaluating the sum $\sum_{n=2}^{\infty}\ln[1-1/n^2]$



For convenience the sum is again
$$
\sum_{n=2}^{\infty}\ln[1-1/n^2]=\sum_{n=2}^{\infty}\ln\frac{(n^2-1)}{(n^2)}
$$
I first tried solving using a definite integral, since this seems to make telescoping easier to see,
$$
\sum_{n=2}^{\infty}\ln\frac{(n^2-1)}{(n^2)}=\sum_{n=2}^{\infty}\left(\ln(n^2-1)-\ln(n^2)\right)=\sum_{n=2}^{\infty}\int_{n^2}^{n^2-1}\frac{dx}{x}
$$
But writing out the first few terms doesn't make any cancellation obvious because of the $n^2$ terms.



I also tried futzing around with log rules and got things down to
$$
\sum_{n=2}^{\infty}\ln\frac{(n^2-1)}{(n^2)}=
\sum_{n=2}^{\infty}\ln((n-1)(n+1))-\ln(n^2)=
\sum_{n=2}^{\infty}\ln(n-1)+\ln(n+1)-\ln(n^2)
$$

The first few terms of which are
$$
\sum_{n=2}^{4}\ln\frac{(n^2-1)}{(n^2)}=[\ln1+\ln3-2\ln 2]+[\ln2+\ln4-2\ln 3]+[\ln3+\ln5-2\ln 4]\\
=\ln 2+\ln 5-2\ln 4
$$
Which leads to the guess that
$$
\sum_{n=2}^{N}\ln\frac{(n^2-1)}{(n^2)}=\ln (N-2)+\ln(N+1)-2\ln(N)\\
=\ln\left(\frac{(N-2)(N+1)}{N^2}\right)\rightarrow 0
$$

Which means I'm wrong. Should I soldier on looking for a pattern through more computation, or is there a more expedient/elegant way to evaluate the sum?


Answer



You were on the right track in writing $$\log\left(1-\frac{1}{n^2}\right)=\log(n+1)-\log(n)+\log(n-1)-\log(n)$$



Proceeding, we can write



$$\begin{align}
\sum_{n=2}^N \log\left(1-\frac{1}{n^2}\right)&=\sum_{n=2}^N \left(\log(n+1)-\log(n)\right)+\sum_{n=2}^N \left(\log(n-1)-\log(n)\right)\\\\
&=\log(N+1)-\log(2)-\log(N)
\end{align}$$




Therefore, we find that



$$\sum_{n=2}^\infty \log\left(1-\frac{1}{n^2}\right)=-\log(2)$$
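A quick numerical check of the telescoped partial sum and its limit (the cutoff $N$ is an assumption):

```python
import math

# The telescoped partial sum is log(N+1) - log(N) - log(2), which tends
# to -log(2); compare it with the directly computed sum.
N = 100000
s = sum(math.log(1 - 1 / n**2) for n in range(2, N + 1))

closed = math.log(N + 1) - math.log(N) - math.log(2)
print(s, closed, -math.log(2))
assert abs(s - closed) < 1e-9
assert abs(s + math.log(2)) < 1e-4
```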


Sunday 22 September 2013

calculus - Connection between u-substitution change of limit and Riemann sum



I worked with integration by $u$ substitution for a bit now, but I still have a hard time rationalizing why we change the limits of the integral when doing $u$ substitution (I know we don't have to change the limits, there is another way, but I'm not interested in that in this question).



I always think of the integral as the sum of infinite many rectangles under a curve (as an intuition.) Let's say we have the following more formal definition of a definite integral:



$$\int_{{\,a}}^{{\,b}}{{f\left( x \right)\,dx}} = \mathop {\lim }\limits_{n \to \infty } \sum\limits_{i = 1}^n {f\left( {x_i^*} \right)\Delta x}$$



I tried to bring this together with $u$ substitution, because I want to actually understand why we need to change the limits of the integral. Say we have:



$$\int_{{\,a}}^{{\,b}}{{f(g(x))g'(x)\,dx}}$$



Let $u = g(x)$, then I think we have:



$$\int_{{\,g(a)}}^{{\,g(b)}}{{f(u)\,du}} = \mathop {\lim }\limits_{n \to \infty } \sum\limits_{i = 1}^n {f\left( {u_i^*} \right)\Delta u}$$




But what I don't understand is: what is the proper justification for changing the limits of integration from $\int_{a}^{b}$ to $\int_{g(a)}^{g(b)}$, and what effect does this have on the sum, the actual definition of the integral? Is there any connection to the Riemann sum?


Answer



The change-of-variables theorem can be proved under very weak assumptions about $f$ and $g$. However, a standard proof assumes that $g$ is continuously differentiable and uses the fundamental theorem of calculus. The theorem holds even if $g$ is not monotone (increasing or decreasing). We can even have $g(a) = g(b)$.



A proof along the lines of what you are suggesting requires an assumption that $g$ is monotone. We begin with a partition $a = x_0 < x_1 < \ldots < x_n = b$ and form the sum



$$\tag{*}S(P,fg')= \sum_{j=1}^n f(g(\xi_j))g'(\xi_j)(x_j - x_{j-1})$$



where we use intermediate points $\xi_j \in [x_{j-1},x_j]$ and which will converge with refinement to $$\int_a^bf(g(x)) g'(x) \, dx$$




If $g$ is increasing then a partition $P'$ of $[g(a),g(b)]$ is naturally induced by



$$g(a) = g(x_0) < g(x_1) < \ldots < g(x_n) = g(b),$$



and using the intermediate points $g(\xi_j)$, we have a Riemann sum for the integral of $f$ over $[g(a),g(b)]$ taking the form



$$S(P',f) = \sum_{j=1}^n f(g(\xi_j))(\,g(x_j) - g(x_{j-1})\,)$$



We need the monotonicity of $g$ to ensure that $g(\xi_j) \in [g(x_{j-1}), g(x_j)]$.




Applying the mean value theorem, there exist points $\eta_j \in (x_{j-1},x_j)$ such that



$$\tag{**}S(P',f) = \sum_{j=1}^n f(g(\xi_j))g'(\eta_j)(x_j - x_{j-1})$$



Notice the similarity between the sums in (*) and (**). Aside from the distinction between $\eta_j$ and $\xi_j$, they are identical. Using the continuity (and uniform continuity) of $g'$ we can show that as the partition is refined and both $\|P\|, \|P'\| \to 0$ we have



$$\lim_{\|P\| \to 0}|S(P,fg') - S(P',f)| = 0$$



Therefore, $S(P',f)$ converges to both integrals and we have




$$\lim_{\|P'\| \to 0}S(P',f) = \int_{g(a)}^{g(b)} f(u) \, du = \int_a^b f(g(x)) g'(x) \, dx$$



Again, there are a number of ways to prove the change-of-variables theorem -- without the assumption that $g$ is monotone -- that avoid this association with Riemann sums. In the most general form only integrability and not continuity of $f$ and $g'$ is assumed.
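For intuition, the closeness of the sums in (*) and (**) can be observed numerically on an assumed example with $f(u)=\cos u$ and the increasing map $g(x)=x^3$ on $[0,1]$ (partition size and tolerances are choices made here):

```python
import math

# Compare the two Riemann sums for f(u) = cos(u), g(x) = x^3 on [0, 1].
# Both should approach the common value of the integrals,
# namely sin(1) = integral of cos(u) over [g(0), g(1)] = [0, 1].
f = math.cos

def g(x):
    return x ** 3

def dg(x):
    return 3 * x ** 2

n = 200000
a, b = 0.0, 1.0
dx = (b - a) / n

S1 = 0.0  # sum (*):  f(g(xi)) * g'(xi) * (x_j - x_{j-1})
S2 = 0.0  # sum like (**): f(g(xi)) * (g(x_j) - g(x_{j-1}))
for j in range(n):
    x0, x1 = a + j * dx, a + (j + 1) * dx
    xi = 0.5 * (x0 + x1)  # intermediate point
    S1 += f(g(xi)) * dg(xi) * dx
    S2 += f(g(xi)) * (g(x1) - g(x0))

print(S1, S2, math.sin(1))
assert abs(S1 - S2) < 1e-9
assert abs(S1 - math.sin(1)) < 1e-9
```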


linear algebra - Find eigenvalues and eigenvectors of this matrix



Problem: Let \begin{align*} A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}. \end{align*} Compute all the eigenvalues and eigenvectors of $A$.



Attempt at solution: I found the eigenvalues by computing the characteristic polynomial. This gives me \begin{align*} \det(A - x \mathbb{I}_4) = \det \begin{pmatrix} 1-x & 1 & 1 & 1 \\ 1 & 1-x & 1 & 1 \\ 1 & 1 & 1-x & 1 \\ 1 & 1 & 1 & 1-x \end{pmatrix} = x^3 (x-4) = 0 \end{align*} after many steps. So the eigenvalues are $\lambda_1 = 0$ with multiplicity $3$ and $ \lambda_2 = 4$ with multiplicity $1$.



Now I was trying to figure out what the eigenvectors are corresponding to these eigenvalues. By definition we have $Av = \lambda v$, where $v$ is an eigenvector with the corresponding eigenvalue. So I did for $\lambda_2 = 4$: \begin{align*} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = 4 \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} \end{align*} I think this is only possible when $x_1 = x_2 = x_3 = x_4$. So am I right in stating that all the eigenvectors corresponding to $\lambda_2$ are of the form $t \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}$ with $t$ some number $\neq 0$? For $\lambda_1 = 0$, I'm not sure how to find the eigenvectors. The zero vector is never an eigenvector. Does this mean $x_1, x_2, x_3$ and $x_4$ can be anything as long as they add up to zero?



Answer



You are right, and to show how this applies more generally, look at the eigenspace corresponding to the other eigenvalue, $\lambda = 0$. This has multiplicity 3, so we expect exactly 3 linearly independent vectors in this space. The main equation looks like $A \vec{x} = 0 \vec{x} = \vec{0}$, in other words,
$$
\begin{pmatrix}
1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
$$
It's not hard to see that we have 4 identical equations, so the only constraint is $x + y + z + w = 0$, so we define $w = -x-y-z$ and any vector in the eigenspace now looks like

$$
\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}
= \begin{pmatrix} x \\ y \\ z \\ -x-y-z \end{pmatrix}
= x \begin{pmatrix} 1 \\ 0 \\ 0 \\ -1 \end{pmatrix}
+ y \begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}
+ z \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix}
$$
and the three vectors which form the basis of the eigenspace are now specified.
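A quick plain-Python check of both eigenspaces described above:

```python
# A is the 4x4 all-ones matrix; verify A v = 4 v for (1,1,1,1) and
# A v = 0 for the three kernel basis vectors found above.
A = [[1] * 4 for _ in range(4)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# Eigenvalue 4 with eigenvector (1, 1, 1, 1):
assert matvec(A, [1, 1, 1, 1]) == [4, 4, 4, 4]

# Eigenvalue 0: the three basis vectors of the eigenspace.
for v in ([1, 0, 0, -1], [0, 1, 0, -1], [0, 0, 1, -1]):
    assert matvec(A, v) == [0, 0, 0, 0]
```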


calculus - How does $k$ in $\int_{-\infty}^\infty ke^{\frac{-x^2}{2}} \,dx =1$ solve to $k = \frac{1}{\sqrt{2\pi}}$

I'm working on stats problems and am a bit rusty with calculus, as I haven't worked with it in 3 years. My prof gave us the equation
$\int_{-\infty}^\infty ke^{\frac{-x^2}{2}} ~dx =1$, where $x \in \mathbb{R}$ and $f(x) = ke^{\frac{-x^2}{2}}$ is a PDF, and asked us to solve for $k$.



Could someone please explain to me how he arrived at $k = \frac{1}{\sqrt{2\pi}}$?
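While the exact evaluation uses the classical polar-coordinates trick on the square of the Gaussian integral, the value itself can be sanity-checked numerically (the truncation range and step count below are assumptions):

```python
import math

# Check that the integral of e^{-x^2/2} over the real line is sqrt(2*pi),
# hence k = 1/sqrt(2*pi).  Tails beyond |x| = 10 are below 1e-22, so a
# midpoint sum on [-10, 10] suffices.
n = 200000
a, b = -10.0, 10.0
dx = (b - a) / n
total = sum(math.exp(-0.5 * (a + (i + 0.5) * dx) ** 2) * dx for i in range(n))

print(total, math.sqrt(2 * math.pi))
assert abs(total - math.sqrt(2 * math.pi)) < 1e-8
```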

Saturday 21 September 2013

integration - Computing the integral $\int_{-\infty}^{\infty}\frac{z^4}{1+z^8}dz$

I need help to compute the following integral:



$$\int_{-\infty}^{\infty}\frac{z^4}{1+z^8}dz$$



I need to use Cauchy's residue theorem.



I can write that $z^8+1=z^8-i^2=(z^4-i)(z^4+i)$. How do I proceed?



Please give a methodological answer so that I can solve other questions too.




Thanks
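Not a residue computation, but a useful cross-check: the standard formula $\int_0^\infty \frac{x^{s-1}}{1+x^n}\,dx = \frac{\pi}{n\sin(\pi s/n)}$ with $s=5$, $n=8$ predicts the value $\frac{\pi}{4\sin(5\pi/8)}$, which direct numerical integration reproduces (the cutoff $R$ and step count are assumptions made here):

```python
import math

def integrand(x):
    return x**4 / (1 + x**8)

# Standard formula: integral over (0, inf) of x^{s-1}/(1+x^n) dx
# equals pi / (n * sin(pi*s/n)); with s = 5, n = 8 the full-line
# integral is 2 * pi/(8*sin(5*pi/8)) = pi/(4*sin(5*pi/8)).
expected = math.pi / (4 * math.sin(5 * math.pi / 8))

R, steps = 200.0, 400000
dx = R / steps
half = sum(integrand((i + 0.5) * dx) * dx for i in range(steps))
numeric = 2 * half  # integrand is even; tail beyond R is about R^-3 / 3

print(numeric, expected)
assert abs(numeric - expected) < 1e-5
```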

Improper integration involving complex analytic arguments



I am trying to evaluate the following:



$\displaystyle \int_{0}^{\infty} \frac{1}{1+x^a}dx$, where $a>1$ and $a \in \mathbb{R}$




Any help will be much appreciated.


Answer



Use the change of variables $1+x^\alpha=\frac{1}{t}$ to cast the integral in terms of the beta function



$$ \frac{1}{\alpha}\int_{0}^{1}t^{-1/\alpha}(1-t)^{1/\alpha-1}\,dt= \frac{1}{\alpha}\Gamma\left(\frac{1}{\alpha}\right)\Gamma\left(1-\frac{1}{\alpha}\right) $$
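By Euler's reflection formula $\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin(\pi z)}$, the answer simplifies to $\frac{\pi}{\alpha\sin(\pi/\alpha)}$. A numerical sketch confirming both forms (the sample $\alpha$, cutoff, and step count are assumptions):

```python
import math

# Reflection formula: Gamma(1/a) * Gamma(1 - 1/a) = pi / sin(pi/a),
# so the integral equals (pi/a) / sin(pi/a).
alpha = 3.0
closed = (math.pi / alpha) / math.sin(math.pi / alpha)
via_gamma = (1 / alpha) * math.gamma(1 / alpha) * math.gamma(1 - 1 / alpha)
assert abs(closed - via_gamma) < 1e-12

# Midpoint-rule approximation of the integral on [0, R]; the tail
# beyond R is of order R^(1-alpha).
R, steps = 2000.0, 400000
dx = R / steps
numeric = sum(dx / (1 + ((i + 0.5) * dx) ** alpha) for i in range(steps))

print(numeric, closed)
assert abs(numeric - closed) < 1e-3
```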


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...