Monday 30 April 2018

calculus - Showing Dirichlet Integral exists



The problem I'm trying to do is the following: Spivak, Chapter 19, Problem 43.



Problem:



a) Use integration by parts to show that
$$\int_a^b \frac{\sin x}{x}d{x}=\frac{\cos a}{a}-\frac{\cos b}{b} -\int_a^b\frac{\cos x}{x^2}d{x}$$

and conclude that $\int_0^\infty \frac{\sin x}{x}\,dx$ exists. (Use the left side to investigate the limit as $a\to 0$ and the right side for the limit as $b \to \infty$.)



c) Prove that
$$\lim_{\lambda \to \infty}\int_0^\pi \sin{\left(\lambda+\frac{1}{2}\right)t}\,\bigg[\frac{2}{t}-\frac{1}{\sin \frac{t}{2}}\bigg]\,dt=0$$



What I tried:



For item a, the integration by parts is easy. For the rest, I think that just pointing out that the integrand behaves nicely at $x=0$ is sufficient as elsewhere it's a continuous function (right?). But for the second thing it asks regarding the limit at infinity, I wasn't able to come up with a good explanation.



For item c, I know that the limit of the bracketed expression as $t$ goes to zero is zero, and I know that this is somehow to be used, but I don't see how. The book gives a hint: use the "Riemann-Lebesgue Lemma".




Please keep the answers at an appropriate level and work from fundamentals, so that there is no need for fancy theorems that solve the problem in a few lines but just move the real work elsewhere.



*I found many solutions to the Dirichlet integral, but I believe none of them actually took/explained these steps.


Answer



For (a), notice that $\lim_{x\to 0}\frac{\sin(x)}{x}=1$ implies that $\frac{\sin(x)}{x}$ is continuous and bounded in $(0,a]$ and therefore it is integrable in $[0,a]$.



In $[a,+\infty)$, with $a>0$,
$$\int_a^{+\infty} \frac{\sin x}{x}d{x}=\frac{\cos a}{a}-\lim_{b\to +\infty}\frac{\cos b}{b} -\int_a^{+\infty}\frac{\cos x}{x^2}d{x},$$
which is finite because $\lim_{b\to +\infty}\frac{\cos b}{b}=0$ and

$$\frac{|\cos x|}{x^2}\leq \frac{1}{x^2}\implies \int_a^{+\infty}\frac{|\cos x|}{x^2}d{x}\leq \int_a^{+\infty}\frac{1}{x^2}d{x}= \frac{1}{a}<+\infty$$
(recall that absolutely integrable functions are also integrable).



Hence we may conclude that $\frac{\sin(x)}{x}$ is integrable in $[0,a]\cup [a,+\infty)=[0,+\infty)$.



As regards (c), in order to apply Riemann-Lebesgue Lemma, we have to verify that
$$f(t):=\frac{2}{t}-\frac{1}{\sin \frac{t}{2}}$$
is integrable in $[0,\pi]$ which is true because
$$\lim_{t\to 0^+}f(t)=\lim_{t\to 0^+}\left( \frac{2}{t}-\frac{1}{ \frac{t}{2}+o(t^2)}\right)=\lim_{t\to 0^+}\frac{2}{t}\left( 1-\frac{1}{ 1+o(t)}\right)=\lim_{t\to 0^+}\frac{2o(t)}{t(1+o(t))}=0$$
and therefore $f$ is continuous and bounded in $(0,\pi]$.
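As a quick numeric sanity check of part (a) (a sketch, not part of the proof), mpmath's oscillatory quadrature agrees with the known value $\frac{\pi}{2}$ of the full integral:

# Numeric sanity check (a sketch): the improper integral of sin(x)/x
# over [0, oo) exists and evaluates to pi/2.
from mpmath import mp, quadosc, sin, pi, inf

mp.dps = 25  # working precision in digits
val = quadosc(lambda x: sin(x)/x, [0, inf], period=2*pi)
print(val)    # 1.5707963267948966...
print(pi/2)   # same value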



summation - Equality of the sums $\sum\limits_{v=0}^k \frac{k^v}{v!}$ and $\sum\limits_{v=0}^k \frac{v^v (k-v)^{k-v}}{v!(k-v)!}$



How can one prove the equality
$$\sum\limits_{v=0}^k \frac{k^v}{v!}=\sum\limits_{v=0}^k \frac{v^v (k-v)^{k-v}}{v!(k-v)!}$$
for $k\in\mathbb{N}_0$?




Induction and generating functions don't seem to be useful.



The generating function of the right sum is simply $f^2(x)$ with $\displaystyle f(x):=\sum\limits_{k=0}^\infty \frac{(xk)^k}{k!}$



but for the left sum I still don't know.



It is $\displaystyle f(x)=\frac{1}{1-\ln g(x)}$ with $\ln g(x)=xg(x)$ for $\displaystyle |x|<\frac{1}{e}$.


Answer



Recall the combinatorial class of labeled trees which is




$$\def\textsc#1{\dosc#1\csod}
\def\dosc#1#2\csod{{\rm #1{\small #2}}}\mathcal{T} = \mathcal{Z}\times \textsc{SET}(\mathcal{T})$$



which immediately produces the functional equation



$$T(z) = z \exp T(z)
\quad\text{or}\quad
z = T(z) \exp(-T(z)).$$



By Cayley's theorem we have




$$T(z) = \sum_{q\ge 1} q^{q-1} \frac{z^q}{q!}.$$



This yields



$$T'(z) = \sum_{q\ge 1} q^{q-1} \frac{z^{q-1}}{(q-1)!}
= \frac{1}{z} \sum_{q\ge 1} q^{q-1} \frac{z^{q}}{(q-1)!}
= \frac{1}{z} \sum_{q\ge 1} q^{q} \frac{z^{q}}{q!}.$$



The functional equation yields




$$T'(z) = \exp T(z) + z \exp T(z) T'(z)
= \frac{1}{z} T(z) + T(z) T'(z)$$



which in turn yields



$$T'(z) = \frac{1}{z} \frac{T(z)}{1-T(z)}$$



so that




$$\sum_{q\ge 1} q^{q} \frac{z^{q}}{q!}
= \frac{T(z)}{1-T(z)}.$$



Now we are trying to show that



$$\sum_{v=0}^k \frac{v^v (k-v)^{k-v}}{v! (k-v)!}
= \sum_{v=0}^k \frac{k^v}{v!}.$$



Multiply by $k!$ to get




$$\sum_{v=0}^k {k\choose v} v^v (k-v)^{k-v}
= k! \sum_{v=0}^k \frac{k^v}{v!}.$$



Start by evaluating the LHS.

Observe that when we multiply two
exponential generating functions of the sequences $\{a_n\}$ and
$\{b_n\}$ we get that



$$ A(z) B(z) = \sum_{n\ge 0} a_n \frac{z^n}{n!}
\sum_{n\ge 0} b_n \frac{z^n}{n!}
= \sum_{n\ge 0}

\sum_{k=0}^n \frac{1}{k!}\frac{1}{(n-k)!} a_k b_{n-k} z^n\\
= \sum_{n\ge 0}
\sum_{k=0}^n \frac{n!}{k!(n-k)!} a_k b_{n-k} \frac{z^n}{n!}
= \sum_{n\ge 0}
\left(\sum_{k=0}^n {n\choose k} a_k b_{n-k}\right)\frac{z^n}{n!}$$



i.e. the product of the two generating functions is the generating
function of $$\sum_{k=0}^n {n\choose k} a_k b_{n-k}.$$



In the present case we have

$$A(z) = B(z) = 1 + \frac{T(z)}{1-T(z)}
= \frac{1}{1-T(z)} $$
by inspection.




We added the constant term to account for the fact that $v^v=1$ when
$v=0$ in the convolution. We thus have



$$\sum_{v=0}^k {k\choose v} v^v (k-v)^{k-v}
= k! [z^k] \frac{1}{(1-T(z))^2}.$$




To compute this introduce



$$\frac{k!}{2\pi i}
\int_{|z|=\epsilon}
\frac{1}{z^{k+1}} \frac{1}{(1-T(z))^2} \; dz$$



Using the functional equation we put $z=w\exp(-w)$ so that $dz =
(\exp(-w)-w\exp(-w)) \; dw$
and obtain



$$\frac{k!}{2\pi i}

\int_{|w|=\gamma}
\frac{\exp((k+1)w)}{w^{k+1}} \frac{1}{(1-w)^2}
(\exp(-w)-w\exp(-w)) \; dw
\\ = \frac{k!}{2\pi i}
\int_{|w|=\gamma}
\frac{\exp(kw)}{w^{k+1}} \frac{1}{1-w} \; dw$$



Extracting the coefficient we get



$$k! \sum_{v=0}^k [w^v] \exp(kw) [w^{k-v}] \frac{1}{1-w}

= k! \sum_{v=0}^k \frac{k^v}{v!}$$



as claimed.
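As a sanity check on this chain of manipulations, here is a minimal Python sketch (my own, not part of the original answer) verifying the identity with exact rational arithmetic for small $k$; note that Python's 0**0 evaluates to 1, matching the convention used in the convolution.

from fractions import Fraction
from math import factorial

def lhs(k):
    # sum_{v=0}^k v^v (k-v)^(k-v) / (v! (k-v)!), with the convention 0^0 = 1
    return sum(Fraction(v**v * (k - v)**(k - v),
                        factorial(v) * factorial(k - v)) for v in range(k + 1))

def rhs(k):
    # sum_{v=0}^k k^v / v!
    return sum(Fraction(k**v, factorial(v)) for v in range(k + 1))

for k in range(8):
    assert lhs(k) == rhs(k)
print("identity verified for k = 0..7")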


Remark. This all looks very familiar but I am unable to locate the
duplicate among my papers at this time.


real analysis - Why can a closed, bounded interval be uncountable?



From what I have read, all finite sets are countable but not all countable sets are finite. As I understand it,




  • Countably Finite --- a one to one map onto $\Bbb{N}$ with a limited number of members

  • Countably Infinite --- a one to one map onto $\Bbb{N}$ with an unlimited number of members but that you can count in principle if given an infinite amount of time

  • Uncountably Infinite --- there is no one to one mapping onto $\Bbb{N}$. Even if you count, you will miss some of the members. And it is infinite.




From this I gather that countable is not the same as finite. Countable is the one to one property with $\Bbb{N}$. Finite just means a limited number of elements.



Now consider $[0,1]$ which is closed and bounded.




  • Bounded --- $\forall k \in [0,1]$ we have $k \leq 1$. Similarly all $k\geq0$.

  • Closed --- it contains the endpoints $0$ and $1$




Yet I read $[0,1]$ is uncountably infinite. So clearly, neither closure nor boundedness implies finiteness or countability.



Question:



Why can a closed, bounded interval be uncountable?



It just seems like something that is bounded would be "more finite" than something that isn't.


Answer



I suspect you're conflating two meanings of "finite". Some sets are finite, meaning they have only finitely many elements. An interval like $[0,1]$ is not such a set. On the other hand, $[0,1]$ has finite length, which is a quite different matter. As the other answers have explained, finite length does not imply finiteness (or even countability) in terms of the number of elements.



Saturday 28 April 2018

trigonometry - How is $\frac{\sin(x)}{x} = 1$ at $x = 0$




I have a function:
$$\text{sinc}(x) = \frac{\sin(x)}{x}$$

and the example says that $\text{sinc}(0) = 1$. How is this true?



I know that $\lim\limits_{x \to 0} \frac{\sin(x)}{x} = 1$, but the graph of the function $\text{sinc}(x)$ shows that it's continuous at $x = 0$, and that doesn't make sense to me.


Answer



In an elementary book, they should define $\mathrm{sinc}$ like this
$$
\mathrm{sinc}\; x = \begin{cases}
\frac{\sin x}{x}\qquad x \ne 0
\\
1\qquad x=0

\end{cases}
$$
and then immediately prove that it is continuous at $0$.



In a slightly more advanced book, they will just say
$$
\mathrm{sinc}\;x = \frac{\sin x}{x}
$$
and the reader will understand that removable singularities should be removed.


calculus - How to Evaluate $\int_0^{\infty} \lfloor x \rfloor e^{-x}\, dx$?

How to evaluate $\int _0^{\infty} \lfloor x \rfloor e^{-x}\, dx$, where $\lfloor x \rfloor$ is the largest integer not exceeding $x$?




I am thinking of using something like $ \int_0^{\infty} (e^{-2}+2e^{-3}+\cdots+ne^{-n-1}+\cdots)\,dx$ and then using $\int ne^{-n-1}= -e^{-n}$. But the integral, AFAIK, is only finitely additive, so now what?
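One standard route (a sketch, not from the original post): split the integral at the integers, so that
$$\int_0^\infty \lfloor x\rfloor e^{-x}\,dx=\sum_{n= 0}^\infty n\int_n^{n+1}e^{-x}\,dx=\sum_{n= 0}^\infty n\left(e^{-n}-e^{-n-1}\right)=\frac{1}{e-1}.$$
A quick numeric check in Python:

from math import exp, e

# partial sums of sum_n n*(exp(-n) - exp(-n-1))
partial = sum(n * (exp(-n) - exp(-n - 1)) for n in range(200))
print(partial)        # 0.58197670686...
print(1 / (e - 1))    # same value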

integration - Compute $\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^7}$ and $\sum_{n=1}^\infty\frac{H_n^2}{n^7}$

How to prove that




$$S_1=\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^7}=7\zeta(2)\zeta(7)+2\zeta(3)\zeta(6)+4\zeta(4)\zeta(5)-\frac{35}{2}\zeta(9)\ ?$$
$$S_2=\sum_{n=1}^\infty\frac{H_n^2}{n^7}=-\zeta(2)\zeta(7)-\frac72\zeta(3)\zeta(6)+\frac13\zeta^3(3)-\frac{5}{2}\zeta(4)\zeta(5)+\frac{55}{6}\zeta(9)\ ?$$
where $H_n^{(p)}=1+\frac1{2^p}+\cdots+\frac1{n^p}$ is the $n$th generalized harmonic number of order $p$.





I came across these two sums while working on a tough one of weight 9, and I managed to find these two results, but I don't think my solution is a good approach as it's pretty long and involves tedious calculations, so I am seeking different methods if possible. I am very open to new ideas. All approaches are appreciated though.



By the way, do we have a generalization for $\displaystyle\sum_{n=1}^\infty \frac{H_n^{(2)}}{n^a}$, for odd $a$? Note that there is no closed form for even $a>4$.



Thanks in advance.



Note: You can find these two results on Wolfram Alpha here and here respectively but I modified it a little bit as I like it expressed in terms of $\zeta(a)$ instead of $\pi^a$.

Friday 27 April 2018

trigonometry - Finding a closed form for $\cos{x}+\cos{3x}+\cos{5x}+\cdots+\cos{(2n-1)x}$




We have to find




$$g(x)=\cos{x}+\cos{3x}+\cos{5x}+\cdots+\cos{(2n-1)x}$$




I could not come up with any good idea.



Initially I thought of using




$$\cos a+\cos b=2\cos\frac{a+b}{2}\cos\frac{a-b}{2}$$


Answer



Let $z=\cos\theta+i\sin\theta$ i.e. $z=e^{i\theta}$



Your sum: $$e^{i\theta}+e^{3i\theta}+e^{5i\theta}+\cdots+e^{(2n-1)i\theta}$$



This is a GP with common ratio $e^{2i\theta}$



Therefore the sum is $$\frac{a(r^n-1)}{r-1}$$

$$\frac{e^{i\theta}(e^{2ni\theta}-1)}{e^{2i\theta}-1}$$
$$\frac{(\cos \theta+i\sin\theta)(\cos(2n\theta)+i\sin(2n\theta)-1)}{\cos(2\theta)+i\sin(2\theta)-1}$$



Computing its real part should give you the answer.
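For completeness, one way to finish extracting the real part (a sketch filling in the step left to the reader): factor the half-angle exponentials out of the numerator and denominator,

$$\frac{e^{i\theta}(e^{2ni\theta}-1)}{e^{2i\theta}-1}
=\frac{e^{i\theta}\,e^{in\theta}\left(e^{in\theta}-e^{-in\theta}\right)}{e^{i\theta}\left(e^{i\theta}-e^{-i\theta}\right)}
=e^{in\theta}\,\frac{\sin n\theta}{\sin\theta},$$

whose real part is

$$g(\theta)=\cos n\theta\,\frac{\sin n\theta}{\sin\theta}=\frac{\sin 2n\theta}{2\sin\theta}.$$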



Acknowledgement: due credit to @LordShark for the idea.


elementary number theory - Avoiding negative integers in proof of Fundamental Theorem of Arithmetic



There is an exercise on page 44 of Amann's book Analysis, Vol. I which struck me. I quote it here:



Ex7: Let $p\in\mathbb{N}$ with $p>1.$ Prove that $p$ is a prime number if and only if, for all $m,n\in\mathbb{N},$ $$p\mid mn\implies p\mid m \quad \text{or} \quad p\mid n.$$



Since this exercise occurs only after Euclid's Division Algorithm (page 34 of Amann, Analysis Vol. 1) and the Fundamental Theorem of Arithmetic (page 36), I cannot use results about the greatest common divisor of $m$ and $n,$ because those involve the negative numbers, whose definition has not yet appeared at that point in the book. Therefore, I ask the following question:




Can Euclid's Division Algorithm and/or the Fundamental Theorem of Arithmetic imply Ex7? In other words, can we prove Ex7 by only using Euclid's Division Algorithm and/or the Fundamental Theorem of Arithmetic?


Answer



It seems you seek a proof avoiding negative integers. One direction is easy. If $\,p = ab,\, a,b>1\,$ is composite then $\,p\mid ab\,$ but $\,p\nmid a,b.\,$ Below are a handful ways to prove the less trivial converse.



$(1)\ \, $ By FTA, if $\, pk =ab,\,$ then comparing the unique prime factorizations arising from both sides we deduce that $\,p\,$ occurs among the prime factors of $\,a\,$ or $\,b,\,$ so $\,p\mid a\,$ or $\,p\mid b.$



Alternatively we can directly employ the Division Algorithm as below.



$(2)\ \ $The set $S$ of naturals $\,n\,$ such that $\,\color{#c00}{p\mid nb}\,$ is closed under subtraction $> 0$ and contains $\, a,p\,$ therefore its least positive element $\,\color{#0a0}{d\mid a,p}.\,$ Since $\,\color{#a0f}{d\mid p\ \ \rm prime},\,$ either $\,\color{#a0f}{d=p}\,$ so $\ \color{#0a0}{p=d\mid a},\,$ or $\,\color{#a0f}{d=1}\in S\ $ thus $\ \color{#c00}{p\mid d b = b},\ $ i.e. $\,\ p\mid \color{}a\,$ or $\ p\mid \color{}b.$




$(3)\ \, $ If gcds are known, then we know the gcd $\, (p,a)\,$ exists, so we can rewrite the proof as



$$ p\mid pb,ab\,\Rightarrow\, p\mid(pb,ab)\overset{\color{brown}{\rm(D\,L)}}= (p,a)b = b\ \ {\rm if}\ \ \,(p,a)= 1,\ \ {\rm i.e.}\ \ p\nmid a$$



where we used $\,\color{brown}{\rm(DL)}$ = gcd Distributive Law. Compare this to the analogous proof using the gcd Bezout Identity i.e. $\, (p,a)=1\,$ so $\,jp\!+\!ka = 1\,$ for $\,j,k\in\Bbb Z,\,$ hence



$\qquad\qquad\qquad\ \ p\mid pb, ab\,\Rightarrow\, p\mid jpb,kab\,\Rightarrow\, p\mid (jp\!+\!ka)b = b$



This answer precisely highlights the analogy between the gcd, Bezout and ideal form of this version of Euclid's Lemma. Since the 2nd and 3rd proofs of the Distributive Law in the linked post use only positive integers, this provides another approach.




$(4)\ \ $ One can present $(3)$ more constructively as Gauss's Algorithm, which is a special case of the Euclidean algorithm when one of the arguments is prime. It iterates $\,p\mid ab\,\Rightarrow\,p\mid (p\ {\rm mod}\ a)b,\,$ producing smaller and smaller multiples of $\,b\,$ divisible by $\,p\,$ till, by induction, we reach $\,b.\,$ This is more intuitive in fractional form.



$(5)\ \, $ Finally another more brute-force approach is to simply eliminate all use of negative numbers in any proof, e.g. by transforming equations to use only positive numbers. But this introduces greater complexity, and tends to obfuscate the arithmetical essence of the matter.


probability - What are the odds of any role of a 24 sided die occurring 4 or more times in 10 rolls?



Note that I am not asking about the odds of a chosen roll happening 4 times in 10 rolls, (this has a probability of 0.000517 according to a binomial calculator), rather, the odds of ANY roll happening 4 or more times in 10 rolls.



Another way to ask it is: what are the odds that the most frequent value among the 10 rolls appears 4 or more times?
In general, how is this calculated?
Thank you.


Answer



Suppose $i$ is an integer between $1$ and $24$, and let $A_i$ be the event that you roll an $i$ at least $4$ times in ten rolls. Notice that two of the $A_i$ could occur, but not three, since that would be twelve different rolls.




By the inclusion/exclusion principle, you get that $$\mathbb{P}(\cup A_i) = \sum \mathbb{P}(A_i) - \sum_{i\ \neq j}\mathbb{P}(A_i \cap A_j).$$ You have already calculated $\mathbb{P}(A_i)$ for each $i$. Now you have to calculate $\mathbb{P}(A_i \cap A_j)$ for $i \neq j$, which is a little trickier (unless I'm missing an easier way).



To do this, you should add up




  • the number of ways of rolling (exactly) four $i$s and exactly four $j$s,


  • ...of rolling four $i$s and five $j$s,


  • ...of rolling five $i$s and four $j$s (same as
    above),


  • ...of rolling five $i$s and five $j$s,

  • ...of rolling six $i$s and four $j$s, and

  • ...of rolling six $j$s and four $i$s (same as
    above).



This is $$22^2 \binom{10}{8}\binom{8}{4} + 2 \cdot 22 \cdot \binom{10}{9}\binom{9}{5} + \binom{10}{5} + 2 \cdot\binom{10}{6}.$$



Call that number $n$ for brevity. Dividing $n$ by $24^{10}$ will give you $\mathbb{P}(A_i \cap A_j)$, given $i$ and $j$. Now add up all the different choices of $i$ and $j$ to give you (assuming you did your previous calculation right) $$\mathbb{P}(\cup A_i)= 24 \cdot (.000517) - \binom{24}{2}\frac{n}{24^{10}}$$
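For readers who want to double-check this, a quick Monte Carlo sketch (my own illustration, not from the original answer); by the computation above the answer should come out near $24 \cdot 0.000517 \approx 0.0124$, the overlap correction being tiny:

import random
from collections import Counter

random.seed(0)
trials = 200_000
# count trials where some face shows up 4 or more times in 10 rolls
hits = sum(
    1 for _ in range(trials)
    if max(Counter(random.randrange(24) for _ in range(10)).values()) >= 4
)
print(hits / trials)  # close to 0.0124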


Thursday 26 April 2018

sequences and series - Proving that $\frac{\pi^{3}}{32}=1-\sum_{k=1}^{\infty}\frac{2k(2k+1)\zeta(2k+2)}{4^{2k+2}}$




Numerical experiments suggest that



$$
\frac{\pi^{3}}{32}=1-\sum_{k=1}^{\infty}\frac{2k(2k+1)\zeta(2k+2)}{4^{2k+2}}
$$



Could someone prove the validity of such identity?


Answer



Yes, we can prove it. We can change the order of summation in




$$\begin{align}
\sum_{k=1}^\infty \frac{2k(2k+1)\zeta(2k+2)}{4^{2k+2}}
&= \sum_{k=1}^\infty \frac{2k(2k+1)}{4^{2k+2}}\sum_{n=1}^\infty \frac{1}{n^{2k+2}}\\
&= \sum_{n=1}^\infty \sum_{k=1}^\infty \frac{2k(2k+1)}{(4n)^{2k+2}}\\
&= \sum_{n=1}^\infty r''(4n),
\end{align}$$



where, for $\lvert z\rvert > 1$, we define




$$r(z) = \sum_{k=1}^\infty \frac{1}{z^{2k}} = \frac{1}{z^2-1} = \frac12\left(\frac{1}{z-1} - \frac{1}{z+1}\right).$$



Differentiating twice yields $r''(z) = \frac{1}{(z-1)^3} - \frac{1}{(z+1)^3}$, so $\sum_{n\ge 1} r''(4n)=\sum_{n\ge 1}\left[\frac{1}{(4n-1)^3}-\frac{1}{(4n+1)^3}\right]$, and hence



$$1 - \sum_{k=1}^\infty \frac{2k(2k+1)\zeta(2k+2)}{4^{2k+2}} = \sum_{\nu = 0}^\infty \frac{(-1)^\nu}{(2\nu+1)^3},$$



and the latter sum was evaluated in an earlier answer using the partial fraction decomposition of $\dfrac{1}{\cos z}$:



$$\sum_{\nu=0}^\infty \frac{(-1)^\nu}{(2\nu+1)^3} = - \frac{\pi^3}{32} E_2 = \frac{\pi^3}{32}.$$


roots - How to locate zeros of the Riemann Zeta function?

I'm trying to locate the first 1000 zeros of $\zeta(s)$ and am not sure about the best way to go about it. I was considering the Newton-Raphson method but can't find a good way to code $\zeta'(s)$ in python as I can't find a functional equation for it anywhere.



I'm using a simple 'while' loop to locate sign changes of $Z(t)$ to count zeros (would also appreciate if anyone knows a quicker way to count the zeros of $\zeta(s)$ btw) and have found 649 zeros below 1000 and 10,142 zeros below 10,000.



I now need to locate these zeros so I can sum the arguments of each complex zero. I'm led to believe this will give me $$\int_{\gamma} \zeta(s) ~ ds$$ where $\gamma$ is the contour of length 1000 or 10,000 respectively. ie: $$\int_{\gamma} \zeta(s) ~ ds = 2\pi \cdot \sum_{s' \in S} \arg{(s')}$$ where $S$ is the set of all zeros below 1000 or 10,000 around the critical strip. I will first calculate this then back it up rigorously.



Thus if I am able to locate all zeros and sum their arguments, then divide this number by $2 \pi$ I will be able to directly compare this result with the number of zeros found earlier, the goal being to verify the Riemann Hypothesis up to a certain height.




Any help very much appreciated!
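On the Python side, mpmath may save some work here; it can evaluate the Riemann-Siegel $Z(t)$, compute $\zeta'(s)$ directly (no functional equation needed), and locate the $n$th zero for you. A minimal sketch (my own, not the OP's code):

from mpmath import mp, zetazero, siegelz, zeta

mp.dps = 20

# nth nontrivial zero, located on the critical line
for n in range(1, 4):
    print(n, zetazero(n))    # zetazero(1) = 0.5 + 14.1347251417...j

# Z(t) changes sign across a zero -- usable in the while-loop counter
print(siegelz(14.0), siegelz(15.0))

# zeta'(s), e.g. for Newton-Raphson refinement
print(zeta(mp.mpc(0.5, 14.0), derivative=1))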

Wednesday 25 April 2018

calculus - Strange Limit in Proof of the Fresnel Integral

I was playing around with the Fresnel integrals and I've come up with a proof for the fact $$\int_0^\infty \cos x^2 dx= \int_0^\infty \sin x^2 dx= \sqrt{\frac{\pi}{8}}$$



The proof goes as follows



Use the fact that $$\int_{-\infty}^{\infty} e^{-u^2} du = \sqrt{\pi} $$
Letting $ u = \sqrt{-i} \cdot x,\; du = \sqrt{-i}\, dx $

$$\sqrt{-i} \cdot \int_{-\infty}^{\infty} e^{ix^2} dx = \sqrt{\pi}$$
$$\int_{-\infty}^{\infty} e^{ix^2} dx = \int_{-\infty}^{\infty} \cos{x^2}\,dx + i \cdot \int_{-\infty}^{\infty} \sin{x^2}\,dx = \sqrt{\frac{-\pi}{i}}=(1+i)\sqrt{\frac{\pi}{2}}$$



Equating real and imaginary parts (we can do this because we can assume that the two integrals have real values), we get



$$\int_{-\infty}^\infty \cos x^2 dx= \int_{-\infty}^\infty \sin x^2 dx= \sqrt{\frac{\pi}{2}}$$



Then, using the fact that $\cos{x^2}$ and $\sin{x^2}$ are even functions, we get the fact above.



My question stems from the substitution made. After the substitution is made, the bounds of the integral change from $\pm\infty$ to $\pm(1+i)\infty$




Is this integral valid? Is it possible to have a complex number in the bounds of the integral?

Compilation of proofs for the summation of natural squares and cubes

I want to know different proofs for the following formulas,



$$
\sum_{i=1}^n{i^2} = \frac{n(n+1)(2n+1)}{6}
$$



$$

\sum_{i=1}^n{i^3} = \frac{n^2(n+1)^2}{2^2}
$$



Please do not mark this as duplicate, since what I specifically want is to be exposed to a variety of proofs using different techniques (I did not find such a compilation anywhere on the net)



I am only familiar with two proofs, one which uses expansion of $(x+1)^2 - x^2$ and $(x+1)^3 - x^3$ and the other which uses induction. I have provided link for the induction proof in a self-answer.



I am particularly interested in proofs without words and proofs which use an unrelated mathematical concept (higher level math up to class 12 level is acceptable).



Also,




(1) Don't think I am being rude or anything, it is out of genuine interest that I am asking this question.



(2) Someone marked this as a duplicate of Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula



My question is different in three ways:



(i) I want to focus only on these two summation and not the general case,



(ii) Hence, it follows that the proofs which I am looking for are simpler than the ones provided in that link (using images, pictures or high school algebra). What I want is to study new proofs. I believe it is a good practice when learning math to do this.




(iii) Since the proofs in the link are given for the general case, they are complicated and I am finding it hard to understand them. If someone is able to use the same method to the two cases in my question, then it would probably become much simpler and easier to digest.



Appendix



Feel free to make use of these topics in your answers,



Calculus



Basic Binomial Expansion




Coordinate Geometry



Algebra (upto what 18 year olds learn)



Taylor Series Expansions



Geometry (18 year old level)



basically, math which 18-year-olds learn on Earth.




If you want to err, then err on the higher math side:)




Answers Compilation List




  1. By Newton series

  2. By Stirling Numbers

  3. By Induction


  4. From the book Generatingfunctionology by Herbert Wilf

  5. By generalizing the following pattern
    $$\begin{align}
    &\ \,4\cdot5\cdot6\cdot7\\=&(1\cdot2\cdot3\cdot4+2\cdot3\cdot4\cdot5+3\cdot4\cdot5\cdot6+4\cdot5\cdot6\cdot7)\\-&(0\cdot1\cdot2\cdot3+1\cdot2\cdot3\cdot4+2\cdot3\cdot4\cdot5+3\cdot4\cdot5\cdot6)\\
    =&(1\cdot2\cdot3\cdot4+2\cdot3\cdot4\cdot4+3\cdot4\cdot5\cdot4+4\cdot5\cdot6\cdot4)\\
    \end{align}$$


  6. By Lagrangian Interpolation


  7. By Formal Differentiation

  8. By the Euler-Maclaurin Summation Formula

  9. By Assuming that the expression is a polynomial of degree $2$.


  10. A Proof Without Words for the cube case

  11. By integrating and assuming a error term.

  12. Simply Beautiful Art's Personal Approach


integration - How to prove: $\lim_{n\to\infty}\left(\int_{0}^{\frac{\pi}{2}}\left\vert\frac{\sin{(2n+1)x}}{\sin{x}}\right\vert dx-\frac{2\ln{n}}{\pi}\right)$



show that




$$\mathop {\lim }\limits_{n \to \infty } \left( {\int\limits_0^{\frac{\pi }{2}} {\left\vert\frac{{\sin \left( {2n + 1} \right)x}}{{\sin x}}\right\vert\,dx - \frac{{2\ln n}}{\pi }} } \right) = \frac{{6\ln 2}}{\pi } + \frac{{2\gamma }}{\pi } + \frac{2}{\pi }\sum\limits_{k = 1}^\infty {\frac{1}{{2k + 1}}\ln \left( {1 + \frac{1}{k}} \right)}\cdots (1) $$




I can prove that the limit in $(1)$ exists, and it is also well known that
$$I_{n}=\int_{0}^{\frac{\pi}{2}}\dfrac{\sin{(2n+1)x}}{\sin{x}}dx=\dfrac{\pi}{2}$$





Proof: $$I_{n}-I_{n-1}=\int_{0}^{\frac{\pi}{2}}\dfrac{\sin{(2n+1)x}-\sin{(2n-1)x}}{\sin{x}}dx=2\int_{0}^{\frac{\pi}{2}}\cos{(2nx)}dx=0,$$
so
$$I_{n}=I_{n-1}=\cdots=I_{0}=\dfrac{\pi}{2}.$$
But I can't prove $(1)$. Thank you.



Answer



Notice that for any continuous function $f(x)$ on $[0,\frac{\pi}{2}]$, we have:




$$\lim_{n\to\infty} \int_0^{\frac{\pi}{2}} \Big|\sin((2n+1)x)\Big| f(x) dx = \frac{2}{\pi}\int_0^{\frac{\pi}{2}} f(x) dx$$



Applying this to $\frac{1}{\sin x} - \frac{1}{x}$, we get



$$\lim_{n\to\infty} \int_0^{\frac{\pi}{2}} \Big|\sin((2n+1)x)\Big| \Big(\frac{1}{\sin x} - \frac{1}{x} \Big) dx
= \frac{2}{\pi} \int_0^{\frac{\pi}{2}} \Big(\frac{1}{\sin x} - \frac{1}{x} \Big) dx\\
= \frac{2}{\pi} \left[\log\left(\frac{\tan(\frac{x}{2})}{x}\right)\right]_0^{\frac{\pi}{2}}
= \frac{2}{\pi} \left[\log\frac{2}{\pi} - \log{\frac12}\right] = \frac{2}{\pi} \log\frac{4}{\pi}
\tag{*1}$$
So it suffices to figure out the asymptotic behavior of the following integral:




$$\int_0^{\frac{\pi}{2}} \frac{|\sin((2n+1)x)|}{x} dx
= \int_0^{\pi(n+\frac12)} \frac{|\sin x|}{x} dx = \int_0^{\pi n} \frac{|\sin x|}{x} dx + O(\frac{1}{n})
$$
We can rewrite the rightmost integral as



$$\int_0^{\pi} \sin x \Big( \sum_{k=0}^{n-1} \frac{1}{x+k\pi} \Big) dx
= \int_0^1 \sin(\pi x) \Big( \sum_{k=0}^{n-1} \frac{1}{x+k} \Big) dx\\
= \int_0^1 \sin(\pi x) \Big( \psi(x+n) - \psi(x) \Big) dx
\tag{*2}

$$
where $\displaystyle \psi(x) = \frac{\Gamma'(x)}{\Gamma(x)}$ is the
digamma function.



Using following asymptotic expansion of $\psi(x)$ for large $x$:



$$\psi(x) = \log x - \frac{1}{2x} + \sum_{k=1}^{\infty}\frac{\zeta(1-2k)}{x^{2k}}$$
It is easy to verify that
$$\int_0^1 \sin(\pi x)\psi(x+n) dx = \frac{2}{\pi} \log n + O(\frac{1}{n}).\tag{*3}$$




Substituting $(*3)$ into $(*2)$ and combining it with $(*1)$, we get



$$\lim_{n\to\infty} \left(\int_0^{\frac{\pi}{2}} \left|\frac{\sin((2n+1)x)}{\sin x}\right| dx - \frac{2}{\pi} \log n\right) = \frac{2}{\pi} \log\frac{4}{\pi} - \int_0^1 \sin(\pi x)\psi(x) dx \tag{*4}$$
To compute the rightmost integral of $(*4)$, we first integrate it by part:



$$\int_0^1 \sin(\pi x)\psi(x) dx = \int_0^1 \sin(\pi x)\,d\log\Gamma(x) =
-\pi\int_0^1 \cos(\pi x)\log\Gamma(x) dx
$$
We then apply the following result$\color{blue}{^{[1]}}$





Kummer (1847) Fourier series for $\log\Gamma(x)$ for $x \in (0,1)$
$$\log\Gamma(x) = \frac12\log\frac{\pi}{\sin(\pi x)} + (\gamma + \log(2\pi))(\frac12 - x) + \frac{1}{\pi}\sum_{k=2}^{\infty}\frac{\log k}{k}\sin(2\pi k x)$$




Notice




  1. $\displaystyle \int_0^1 \cos(\pi x)\log \frac{\pi}{\sin(\pi x)} dx = 0\quad$ because of symmetry.


  2. $\displaystyle \int_0^1 \cos(\pi x)\Big(\frac12 - x\Big) dx = \frac{2}{\pi^2}$



  3. $\displaystyle \int_0^1 \cos(\pi x)\sin(2\pi k x) dx = \frac{4k}{(4k^2-1)\pi} $




We can evaluate RHS of $(*4)$ as
$$\begin{align}
\text{RHS}_{(*4)} = & \frac{2}{\pi}\log\frac{4}{\pi} + \pi \left[
\Big(\gamma + \log(2\pi)\Big)\frac{2}{\pi^2}
+ \frac{4}{\pi^2}\sum_{k=2}^{\infty}\frac{\log k}{4k^2-1}
\right]\\
= & \frac{2}{\pi}\left[\log 8 + \gamma + \sum_{k=2}^{\infty}\log k \left(\frac{1}{2k-1}-\frac{1}{2k+1}\right) \right]\\

= & \frac{6\log 2}{\pi} + \frac{2\gamma}{\pi} + \frac{2}{\pi}\sum_{k=1}^{\infty}\frac{\log(1+\frac{1}{k})}{2k+1}
\end{align}$$



Notes



$\color{blue}{[1]}$ For more information about Kummer's Fourier series, please see the paper by Donal F. Connon.


real analysis - Does $\sum_{n=1}^{\infty}\frac{\left|\sin\left(n\right)\right|}{n}$ converge?



I'm not sure whether the following series converges or diverges:
$$\sum _{n=1}^{\infty }\frac{\left|\sin (n)\right|}{n}$$




I proved that the series $\sum _{n=1}^{\infty }\frac{\sin^2 (n )}{n}$ diverges. Is there a way I can use that? I've tried using Dirichlet's test with the latter but didn't get anywhere, since $\frac{1}{\left|\sin x\right|}$ is not monotone decreasing.


Answer



$$ \frac{\sin^2(n)}{n} = \frac{1}{2n} - \frac{\cos(2n)}{2n} $$
By Dirichlet's test, $\sum \frac{\cos(2n)}{2n} $ converges, hence $\sum \frac{\sin^2(n)}{n} $ diverges (since $\sum \frac{1}{n}$ diverges to $\infty$).
$$ \frac{\sin^2(n)}{n} \leq \frac{|\sin(n)|}{n}$$
So by the comparison test, $\sum \frac{|\sin(n)|}{n} $ diverges


algebra precalculus - Percent decrease in log space

I calculated the percent decrease of a value measured at two timepoints:



$\frac{t_1-t_2}{t_1} = d$



but I've now realized that the numbers I was given were log transformed, so it's not really correct to say, for example, that if $d = 0.6$ then the value decreased by 60%, since it's a 60% decrease in log space. I'm trying to figure out if there's a way to transform the $d$ I've already calculated into the correct value without recalculating using the untransformed $t_1$ and $t_2$. It seems like it should be fairly straightforward, but for some reason I'm having a hard time thinking about it. Maybe it's not as simple as I think. Is this an easy thing to do?
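A short derivation (a sketch; it assumes the two measurements were log-transformed with a common base $b$ and that $\log_b t_1>0$):

$$d=\frac{\log_b t_1-\log_b t_2}{\log_b t_1}\ \Longrightarrow\ \log_b t_2=(1-d)\log_b t_1\ \Longrightarrow\ t_2=t_1^{\,1-d},$$

so the true fractional decrease is

$$1-\frac{t_2}{t_1}=1-t_1^{-d},$$

which depends on $t_1$ as well as $d$. So the conversion cannot be done from $d$ alone; you need at least one of the untransformed values (or its log) as well.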

Tuesday 24 April 2018

calculus - Solutions of differential equations in terms of definite integrals

In my textbook, the authors solve the differential equation $dy/dt+ay=g(t)$, where $a$ is a constant and $g(t)$ is a function. Why do the authors tend to leave the answer in terms of a definite integral rather than an indefinite one when the integral cannot be evaluated in terms of elementary functions? That is, they write $y=e^{-at}\int_{t_0}^t e^{as} g(s)\, ds + ce^{-at}$ (using $s$ as a dummy variable of integration, $c$ as an arbitrary constant, and $t_0$ as a convenient lower limit of integration) instead of $y=e^{-at}\int e^{at} g(t)\, dt + ce^{-at}$ whenever $\int e^{at} g(t)\, dt$ cannot be evaluated in terms of elementary functions. Please refer to the text:



http://issuu.com/wiley_publishing/docs/boyce_elementary_differential_equations_10e_sample?e=1085234/2816160




On pages: 5-6 (34-35) (the numbers in the parentheses are the actual page numbers), the authors wrote "... For many simple functions g(t), we can evaluate the integral in Eq. (23) and express the solution y in terms of elementary functions, as in Example 2. However, for more complicated functions g(t), it is necessary to leave the solution in integral form. In this case $y=e^{-at}\int_{t_0}^t e^{as} g(s) ds +ce^{-at}$..."

Monday 23 April 2018

elementary set theory - Equinumerosity of infinite sets

Key issue: For infinite sets, does the existence of a bijection mean they have the same number of elements?



For example, does the set of natural numbers N = {1,2,3,4...} have the same number of elements as the set of positive even integers E = {2,4,6,8...}? There is a one-to-one correspondence between these sets:




1 -> 2
2 -> 4
3 -> 6
...


However, depending on how you look at it, there is also an injective function (one-to-one but not onto) between E and N:



? -> 1
2 -> 2
? -> 3
4 -> 4
? -> 5
6 -> 6
...


In this way, all of the elements in E are in N, but not all of the elements in N are in E. Does this "prove", in some way, that N and E are not equinumerous? If not, does this mean that while E is a proper subset of N, it still has the same number of members as N - something that is true for infinite sets but not true of finite sets?



A second example:




What about the following sets:



A: {a1,a2,a3,…}
B: {b1,b2,b3,…}


A has one-to-one correspondence (bijection) with B:



a1 -> b1
a2 -> b2
a3 -> b3



If this, bijection from A to B, shows that A and B have the same number of elements, what happens if we shift A up so that there is an injective function (one-to-one but not onto) between A and B, e.g.



?  -> b1
a1 -> b2
a2 -> b3
a3 -> b4



Does this show that A and B do not necessarily have the same number of elements, after all - despite having bijection between them in one arrangement - since we can arrange them in such a way that there is a one-to-one function, but not onto?

Sunday 22 April 2018

functional equations - Solutions of $f(x+y^{n})=f(x)+[f(y)]^{n}$.



Consider the functional equation $f(x+y^{n})=f(x)+[f(y)]^{n}$ where $f:\mathbb R \to \mathbb R$ and $n$ is a given integer $>1$. This equation was discussed yesterday and it was shown that $f$ is necessarily additive. Assuming continuity, it was concluded that $f(x)\equiv cx$ for some $c$. [Necessarily $c$ satisfies $c^{n}=c$.] If $n$ is even then the given functional equation gives $f(x+y^{n}) \geq f(x)$, which easily leads to the conclusion that $f$ is an increasing function. It follows that $f$ is Borel measurable; since any Borel measurable additive function is of the type $f(x)\equiv cx$, the assumption that $f$ is continuous is not necessary.

My question is: what can be said for $n$ odd? Can one use some trick to prove that $f$ is necessarily Borel measurable? Or is there a counter-example? Discontinuous additive functions are constructed using a Hamel basis, but I am unable to use this method to construct a counter-example. I would appreciate receiving any ideas about this question.


Answer



Here's a generalization of i707107's argument that is actually a bit simpler, as long as I didn't make any mistakes:



You have




$$f(x+y)=f(x)+f(y)$$



and



\begin{align}
\sum_{i=0}^n \binom{n}{i}f(x^iy^{n-i})
&=f((x+y)^n)\\
&=f(x+y)^n\\
&=(f(x)+f(y))^n\\
&=\sum_{i=0}^n \binom{n}{i}f(x)^if(y)^{n-i}.

\end{align}



Taking $y$ rational, we have $f(x^iy^{n-i})=y^{n-i}f(x^i)$ and $f(y)=yf(1)$, so



$$\sum_{i=0}^n\binom{n}{i}y^{n-i}\left[f(x^i)-f(1)^{n-i}f(x)^i\right]=0$$



As this is a polynomial of degree $n$ that is $0$ for all rationals, it is identically $0$, so



$$f(x^i)=f(1)^{n-i}f(x)^i$$




for all $0\leq i\leq n$. Originally, we had $f(1)=f(1)^n$, so $f(1)\in\{-1,0,1\}$. If $f(1)=0$, we have $f(x^i)=0$, so $\boxed{f(x)\equiv 0}$. Otherwise, we have



$$f(x^2)=f(1)^{n-2}f(x)^2=f(1)f(x)^2$$



$$f(x+y^2)=f(x)+f(y^2)=f(x)+f(1)f(y)^2.$$



If $f(1)=1$, this means $f$ is increasing, and if $f(1)=-1$ this means $f$ is decreasing. Either way, the graph of $f$ is not dense in the plane, so $f(x)=cx$ for some $c$ and all $x$. The observation that $f(1)=\pm 1$ means $\boxed{f(x)=x}$ and $\boxed{f(x)=-x}$ are our only other solutions.


complex analysis - Finding the Fourier Series of $sin(x)^2cos(x)^3$



I'm currently struggling to calculate the Fourier series of the given function



$$\sin(x)^2 \cos(x)^3$$




Given Euler's identity, I thought that using the exponential approach would be the easiest way to do it.



What I found was: $$\frac{-1}{32}((\exp(2ix)-2\exp(2ix)+\exp(-2ix))(\exp(3ix)+3\exp(ix)+3\exp(-ix)+\exp(-3ix)))$$



Transforming it back, the result is:



$$ -\frac{1}{18}(\cos(5x)+\cos(3x)+2\cos(x))$$



(I've checked my calculations multiple times, I'm pretty sure it's correct.)




Considering the point $x = 0$ however, one can see that the series I found doesn't match the original function.



Could someone help me find my mistake?


Answer



1) Trigonometric identities:
$$
\sin^2 x\cos^3x=(\sin x\cos x)^2\cos x=\left(\frac{\sin 2x}{2}\right)^2\cos x=\frac{1}{4}\sin^22x\cos x
$$
$$
=\frac{1}{4}\left(\frac{1-\cos 4x}{2}\right)\cos x=\frac{\cos x}{8}-\frac{\cos 4x\cos x}{8}

$$
$$
=\frac{\cos x}{8}-\frac{\cos 5x+\cos 3x}{16}
$$
$$
=\frac{\cos x}{8}-\frac{\cos 3x}{16}-\frac{\cos 5x}{16}
$$
2) Complex exponential:
$$
\sin^2x\cos^3x=\left(\frac{e^{ix}-e^{-ix}}{2i}\right)^2\left(\frac{e^{ix}+e^{-ix}}{2}\right)^3

$$
$$
=-\frac{1}{32}(e^{2ix}-2+e^{-2ix})(e^{3ix}+3e^{ix}+3e^{-ix}+e^{-3ix})
$$
$$
=-\frac{1}{32}(e^{5ix}+e^{3ix}-2e^{ix}-2e^{-ix}+e^{-3ix}+e^{-5ix})
$$
$$
=-\frac{1}{32}(2\cos 5x+2\cos 3x-4\cos x)
$$

$$
=\frac{1}{16}(2\cos x-\cos 3x-\cos 5x)
$$



Note: you made a mistake when you expanded $(e^{ix}-e^{-ix})^2$. I have no idea how you ended up with this $18$. You probably meant $16$.
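For anyone who wants to confirm the corrected expansion mechanically, a small sympy sketch (my own, not part of the original answer):

import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x)**2 * sp.cos(x)**3
target = (2*sp.cos(x) - sp.cos(3*x) - sp.cos(5*x)) / 16

# expand the multiple angles into powers of sin/cos, then simplify
print(sp.simplify(sp.expand_trig(expr - target)))  # prints 0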


Saturday 21 April 2018

abstract algebra - Write $x^3 + 2x+1$ as a product of linear polynomials over some extension field of $\mathbb{Z}_3$



Write $x^3 + 2x+1$ as a product of linear polynomials over some extension field of $\mathbb{Z}_3$



Long division seems to be taking me nowhere. If $\beta$ is a root in some extension then using long division one can write




$$x^3 + 2x+1 = (x-\beta) (x^2+ \beta x+ \beta^2 + 2)$$



Here is a similar question



Suppose that $\beta$ is a zero of $f(x)=x^4+x+1$ in some field extension $E$ of $\mathbb{Z}_2$. Write $f(x)$ as a product of linear factors in $E[x]$.



Is there a general method to approach such problems or are they done through trial and error?



I haven't covered Galois theory and the problem is from field extensions chapter of gallian, so please avoid Galois theory if possible.


Answer




If you want to continue the way you started, i.e. with
$$x^3 + 2x+1 = (x-\beta) (x^2+ \beta x+ \beta^2 + 2)$$
you can try to find the roots of the second factor by using the usual method for quadratics, adjusted for characteristic 3. I'll start it so you know what I mean:



To solve $x^2+ \beta x+ \beta^2 + 2=0$, we can complete the square, noting that $2\beta + 2\beta = \beta$ and $4\beta^2=\beta^2$ in any extension of $\mathbb{Z}_3$ (since $4\equiv 1$).
$$x^2+ \beta x+ \beta^2 + 2=(x+2\beta)^2 + 2=0$$
But this is easy now since this is the same as
$$(x+2\beta)^2 = 1$$
which should allow you to get the remaining roots.
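Carrying this through (a sketch of the remaining step): since $-2\beta=\beta$ and $-1=2$ in characteristic $3$,
$$(x+2\beta)^2 = 1 \implies x+2\beta=\pm 1 \implies x=1+\beta \ \text{ or } \ x=2+\beta,$$
so over $\mathbb{Z}_3(\beta)$ we get
$$x^3 + 2x+1 = (x-\beta)\bigl(x-(\beta+1)\bigr)\bigl(x-(\beta+2)\bigr).$$
(Quick check: the three roots sum to $3\beta+3=0$, matching the absent $x^2$ term.)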


functions - Which one grows asymptotically faster $g(n) = 10^{79} n \log n$ or $f(n) = 3^{\log n}$?




I'm trying to understand which of the following functions



$$g(n) = 10^{79} n \log n$$



and



$$f(n) = 3^{\log n}$$



grows asymptotically faster.




I know $f(n) = 3^{\log n} = n^{\log 3}$. I tried to compare the log of both functions above, that is



$$\log g(n) = \log {10^{79} n \log n} = \log {10^{79}} + \log n + \log (\log n)$$



and



$$\log f(n) = \log {3^{\log n}} = \log 3 * \log n$$



Asymptotically, $\log n$ dominates $\log (\log n)$ and $\log {10^{79}}$ becomes irrelevant. $\log 3$ is also a constant and asymptotically does not matter, but it's a multiplicative constant, so I'm tempted to say that $f$ actually grows faster asymptotically. Am I correct? If I am correct, does this hold for every base of logarithms?




Any other cleverer way to show which function grows asymptotically (i.e. as $n \to \infty$) faster?


Answer



Using base $10$ logs as specified in a comment



$3^{\log_{10} n}=3^{\log_{10}3 \cdot \log_3 n}=n^{\log_{10}3}\lt n$



Note how this would change if the base for the log were $e$ or $2$
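A quick numeric illustration of this (my own sketch; base-$10$ logs throughout, as in the answer). Comparing exponents avoids overflow: with $n=10^p$, $\log_{10} f(n) = p\log_{10}3 \approx 0.477\,p$ while $\log_{10} g(n) = 79 + p + \log_{10} p$, so $g$ dominates for large $p$:

from math import log10

for p in [10, 100, 1000]:           # n = 10**p
    log10_f = p * log10(3)           # f(n) = 3**log10(n) = n**log10(3)
    log10_g = 79 + p + log10(p)      # g(n) = 1e79 * n * log10(n)
    print(p, round(log10_f, 1), round(log10_g, 1))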


Friday 20 April 2018

linear algebra - A tricky problem about matrices with no ${-1,0,1}$ vector in their kernel




A Hankel matrix is a square matrix in which each ascending skew-diagonal from left to right is constant. Let us call a matrix partial Hankel if it consists of the first $m < n$ rows of an $n$ by $n$ Hankel matrix.


Does there exist an $m$ by $n$ partial Hankel matrix $M$ such that it has $m < n/2$ rows and there is no non-zero vector $v\in\{-1,0,1\}^n$ in its kernel?




Such matrices certainly do exist for non-Hankel $m$ by $n$ $\{-1,1\}$-matrices.



Other that writing computer code to exhaustively explore small matrices I am not sure how to approach this question.




I would be very grateful for any ideas at all.


Answer



It appears completely equivalent to ask this question for Toeplitz matrices instead of Hankel matrices. The reason is that arranging the $m$ rows of any partial Hankel matrix simply in reverse order turns it into a rectangular Toeplitz matrix but doesn't affect its kernel. I will give an answer for Toeplitz matrices below.



In another answer at MathOverflow I have argued that for "typical" $m$ by $n$ $\{0,1\}$-matrices $M$, regardless of whether they have Toeplitz (or circulant) form or not (in other words, they are chosen at random either among all $m$ by $n$ $\{0,1\}$-matrices, with uniform distribution, or within the subsets of Toeplitz or circulant matrices), the following holds:




If $w \in \{-1,0,1\}^n$ is a random vector with i.i.d. components, distributed with probabilities $P(w_i = -1) = P(w_i = 1) = \frac14$ and $P(w_i=0) = \frac12$, then the probability of $w$ lying in the kernel of $M$ is asymptotically (for large $n$) given by
$$
P(Mw=0) \sim

\begin{cases}
\quad 2^{-n} \ , \quad m > \frac{2n}{\log_2 ( \pi n/6)} \\
m^{-\frac12} \left( \frac{\pi n}6 \right)^{-\frac m2} \ , \quad m \leq \frac{2n}{\log_2 ( \pi n/6)}
\end{cases} \ .
\hspace{2cm} (1)
$$




An intermediate step of this argument effectively supports an even stronger claim, namely that for all $m$
$$

P(Mw=0) - 2^{-n} \sim m^{-\frac12} \left( \frac{\pi n}6 \right)^{-\frac m2} \ .
\hspace{2cm} (2)
$$
Since $w=0$ is always a solution of $Mw=0$ and has probability $P(w=0) = 2^{-n}$, the right hand side is essentially the asymptotic probability that $Mw=0$ with some $w \neq 0$. Consequently, since by definition every vector $w \in \{-1,0,1\}^n$ has at least probability $4^{-n}$, the probability that there exists at least one $w \neq 0$ with $Mw=0$ must then, with some suitable constant $\alpha>1$ and for large enough $n$, obey the following inequality:
$$
P(\exists w \neq 0 : Mw=0) \leq \alpha \, 4^n m^{-\frac12} \left( \frac{\pi n}6 \right)^{-\frac m2} \ .
\hspace{2cm} (3)
$$
Again, this supposes that $M$ is a random matrix, with uniform distribution over the set of all $m$ by $n$ $\{0,1\}$-matrices, or over the subset of Toeplitz matrices.




An adaptation of the above argument to $\{-1,1\}$-matrices $M$ is completely straightforward; the eigenvalues of $MM^{\rm T}$ are just scaled by a factor of $4$ (i.e. they asymptotically tend now to $n$ instead of $\frac n4$) and the single large eigenvalue $\sim \frac{mn}4$ disappears. The above inequality $(3)$ then becomes instead
$$
P(\exists w \neq 0 : Mw=0) \leq \alpha \, 4^n \left( \frac{2\pi n}3 \right)^{-\frac m2} \ .
\hspace{2cm} (4)
$$
The base 2 logarithm of the expression on the right hand side is $\log_2\alpha + 2n - \frac m2 \log_2 \left( \frac{2\pi n}3 \right)$, and so for $m > \beta \frac{4n}{\log_2 ( 2\pi n/3)}$ (with some constant $\beta>1$ of our choice) it will be bounded from above by $\alpha \, 4^{(1-\beta)n}$ and thus become arbitrarily small for large enough $n$. Recall that this is an upper bound on the probability that any randomly chosen $m$ by $n$ matrix $M$ (of the required form) will admit a nontrivial solution $w \in \{-1,0,1\}^n$ of the equation $Mw=0$. For given $n$ and $m$ there is a number $2^{n+m-1} \gg 1$ of such matrices of Toeplitz type (or $2^{nm} \gg 1$ of general form), so the probability that every single one of these will admit such a nontrivial solution must become overwhelmingly small for large $n$.




In conclusion, what the above argument tells us is that if we choose some $\beta>1$ and consider the $m$ by $n$ $\{-1,1\}$-matrices $M$, with sufficiently large $n$ and any $m$ obeying
$$

m > \beta \frac{4n}{\log_2 \left( \frac{2\pi n}3 \right)} \ ,
\hspace{2cm} (5)
$$
then with overwhelming probability there will be at least one among these matrices which has the wanted property (i.e. at least one $M$ which does not admit any nontrivial solution of $Mw=0$). This holds independently of whether we require in addition that $M$ has a Toeplitz, Hankel or circulant form. Moreover, not just one but probably the majority of all matrices of the specified form will have the wanted property.




Note that the condition $(5)$ is asymptotically weaker than $m>n/c$ for any real constant $c>1$, which implies that for any such $c$ and sufficiently large $n$ there should exist $m$ by $n$ Toeplitz (Hankel, ...) matrices with $m < n/c$ that have the wanted property.

A final note: Of course, in the present form the entire argument is not mathematically rigorous, since it relies on several uncontrolled approximations. I believe, however, that with a little effort these holes can be filled.


sequences and series - Convergence of $\sum_{n=1}^\infty \frac{\sin^2(n)}{n}$




Does the series $$ \sum_{n=1}^\infty \frac{\sin^2(n)}{n} $$
converge?





I've tried to apply some tests, and I don't know how to bound the general term, so I must have missed something. Thanks in advance.


Answer



Hint:



Each interval of the form $\bigl[k\pi+{\pi\over6}, (k+1)\pi-{\pi\over6}\bigr)$ contains an integer $n_k$. We then have, for each $k$, that ${\sin^2(n_k)\over n_k}\ge {(1/2)^2\over (k+1)\pi}$. Now use a comparison test to show your series diverges.


Thursday 19 April 2018

complex numbers - What is $\sqrt{i}$?

If $i=\sqrt{-1}$, is $\large\sqrt{i}$ imaginary?



Is it used or considered often in mathematics? How is it notated?
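For orientation, a short computation (a sketch, not an authoritative treatment): writing $i=e^{i\pi/2}$, the principal square root is
$$\sqrt{i}=e^{i\pi/4}=\cos\frac{\pi}{4}+i\sin\frac{\pi}{4}=\frac{1+i}{\sqrt{2}},$$
the other square root being its negative; indeed $\left(\frac{1+i}{\sqrt 2}\right)^2=\frac{1+2i+i^2}{2}=i$. So $\sqrt{i}$ is a complex number that is neither purely real nor purely imaginary.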

limits - Find $\lim_{n \to \infty} \sum_{k=0}^{n} \frac{e^{-n}n^k}{k!}$




We need to find out the limit of,




$\lim\limits_{n \to \infty} \sum\limits_{k=0}^{n} \frac{e^{-n}n^k}{k!}$



One can see that $\frac{e^{-n}n^k}{k!}$ is the pmf of the Poisson distribution with parameter $n$, so the sum is its cdf evaluated at $n$.



Please give some hints on how to find out the limit.


Answer



It's a good start to try to solve it in a probabilistic way: notice that the Poisson random variable has the reproducibility property, that is, if $X_{k} \sim \text{Poisson}(1)$, $k = 1, 2, \ldots, n$ independently, then
$$S_n = \sum_{k = 1}^n X_{k} \sim \text{Poisson}(n),$$
whose distribution function $F_{S_n}$ satisfies:
$$F_{S_n}(n) = P[S_n \leq n] = \sum_{k = 0}^n e^{-n} \frac{n^k}{k!},$$

which is exactly the expression of interest. Hence this suggests linking this problem to central limit theorem.



By the classic CLT, we have
$$\frac{S_n - n}{\sqrt{n}} \Rightarrow \mathcal{N}(0, 1).$$
Hence
$$P[S_n \leq n] = P\left[\frac{S_n - n}{\sqrt{n}} \leq 0\right] \to P[Z \leq 0] = \frac{1}{2}$$
as $n \to \infty$.


logarithms - Understanding calculations of log/antilog tables of polynomials over finite field



I'm learning about finite fields and I came across this example online: http://www.csee.umbc.edu/~lomonaco/s12/443/handouts/Log-Antilog-Calculation.pdf



I'm having trouble understanding what exactly log/antilog tables are used for, and how the calculations are being done in this example. I'm guessing that since it's GF(2^4) = GF(16) there are 16 entries, and the highest exponent is 4, which means I have to use at most E^4 in the calculations (I can't get that symbol correctly displayed here, so I'm using E for now).



So what exactly are the uses for these tables and how do the calculations work?



Edit: link didn't work, example pic below.




(image: the GF(16) log/antilog tables from the handout)


Answer



How to use a log table? Oldsters like me know that if you want to multiply two quantities, like $\xi^3+\xi^2$ and $\xi^4+\xi^3+\xi^2+\xi$ in $\Bbb F_{16}$, you look up their logarithms, to find $6$ and $13$. Then add the logarithms modulo $15$ to get $4$. The quantity whose log is $4$ is $\xi+1$, and that’s your product.



To divide, subtract the logarithms, modulo $15$.
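To make the mechanics concrete, here is a minimal Python sketch that builds and uses the tables (the function names are my own). It assumes the primitive polynomial $x^4+x+1$, which is consistent with the logs quoted above: with it, $\xi^6=\xi^3+\xi^2$ and $\xi^{13}=\xi^4+\xi^3+\xi^2+\xi$.

POLY = 0b10011  # x^4 + x + 1; elements are 4-bit masks, bit i = coeff of xi^i

def xtime(a):
    # multiply a field element by xi, reducing modulo POLY
    a <<= 1
    return a ^ POLY if a & 0b10000 else a

antilog = [1]                     # antilog[k] = xi^k
for _ in range(14):
    antilog.append(xtime(antilog[-1]))
log = {v: k for k, v in enumerate(antilog)}   # log[xi^k] = k

def gf_mul(a, b):
    # multiply via the tables: add logs mod 15, then look up the antilog
    if a == 0 or b == 0:
        return 0
    return antilog[(log[a] + log[b]) % 15]

# The worked example: logs 6 and 13 add to 19 = 4 (mod 15); antilog 4 = xi + 1
print(bin(gf_mul(0b1100, 0b1101)))  # 0b11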


calculus - Derivative defined at some point but not continuous there?





Suppose $f$ is a continuous function, and $f'$ is its derivative-function. Is it possible that $f'(c)$ exists for some point $c$, but $f'$ is not continuous at $c$?


Answer



Yes. The standard example is $f: \mathbb R\to \mathbb R$ with
$$f(x) = \begin{cases}0 & x=0 \\x^2\sin(\frac{1}{x})&x\neq0\end{cases}$$ Check (using the definition) that the derivative exists at the origin and is equal to $0$. But the derivative is not continuous at $0$. We would need $\lim_{x \rightarrow 0} 2x\sin({\frac{1}{x}}) - \cos(\frac{1}{x}) = 0$, which it is not, because of the oscillation.



In fact, there are examples which are even worse. See for instance Volterra's Function http://en.wikipedia.org/wiki/Volterra%27s_function



Wednesday 18 April 2018

sequences and series - Show $\lim_{n\to\infty} (3\sqrt{n})^{\frac{1}{2n}}=1$ (no L'Hôpital).

I need to show $\lim_{n\to\infty} (3\sqrt{n})^{\frac{1}{2n}}=1$ in a course of Real Analysis, but I can't use derivatives. Can you give me a hint?



I can use:





  • Squeeze theorem

  • Convergence with ε-δ

  • Sequential convergence

  • The notion of a monotone increasing/decreasing function



Limit laws are not allowed. Thanks in advance.

sequences and series - Strictly speaking, is it true that $\zeta(-1)\ne1+2+3+\cdots$?



There are various notorious proofs that $1+2+3+\cdots=\frac{-1}{12}.$



Some of the more accessible proofs basically seem to involve labelling this series as $S=\sum\limits_{i=1}^\infty i$ and playing around with it until you can say $12S=-1$.



Even at High School, I could have looked at that and thought "well since you're dealing with infinities and divergent series, those manipulations of $S$ are not valid in the first place so you're really just rearranging falsehoods." It's a bit like the error in this proof that $1=0$, or $\forall x.(\text{false}\implies x)$, it's a collapse of logic.




Greater minds than mine have shown that $\zeta(-1)=\frac{-1}{12}$ and I have no argument with that, but I do dispute the claim that $\zeta(-1)=S$.



My thinking here is that, although the analytic continuation of $\zeta$ is well-defined, that analytic continuation is not the same thing as $\sum\limits_{i=1}^\infty i$.



Once you have




  1. defined $\zeta(s) =\sum\limits_{n=1}^\infty\frac{1}{n^s}$ where $\Re(s)>1$

  2. defined $\zeta^\prime(s)=...$ by analytic continuation for all $s$




then you can only claim




  1. $\zeta(s)=\zeta^\prime(s)$ where $\Re(s)>1$.



Basically, your nice, differentiable-everywhere definition of the zeta function is not substitutable for the original series $S$ on the unrestricted domain.



Hence, $\zeta(-1)=\frac{-1}{12}\nRightarrow S=\frac{-1}{12}$.




Right? Convince me otherwise.


Answer



You only have



$$\zeta(s)=\sum_{n=1}^\infty n^{-s}$$



for $\mathfrak R(s)>1$. The right-hand side of the equation is not defined otherwise.



Like you said, $\zeta(s)$ is defined by analytic continuation on the rest of the complex numbers, so the formula $\zeta(s)=\sum_{n=1}^\infty n^{-s}$ is not valid on $\mathbb C \setminus \{z\in \mathbb C, \mathfrak R(z)>1\}$.




Therefore,



$$\frac{-1}{12}=\zeta(-1)\ne \sum_{n=1}^\infty n \quad\text{(which $=+\infty$ in the best case scenario)}.$$



So what you say is correct.


Tuesday 17 April 2018

number theory - Finding X and Y for given equation

Given two numbers $A$ and $B$, let $G$ be their GCD. I need to find values $X$ and $Y$ such that



$$ G = X A + Y B $$



How should one approach this problem? For example, if we have $A=25$ and $B=45$ then the GCD is $G=5$.




So $5 = 2 \times 25 - 1 \times 45$. Hence here $X=2$ and $Y=-1$.



So how to tackle this problem for given $A$ and $B$?



My try :



int a=25;
int b=45;
int s=0;

int old_s=1;
int t=1;
int old_t=0;
int r=b;
int old_r=a;
while(r!=0){
int quotient = old_r / r;
old_r = r;
r = old_r-quotient * r;
old_s = s;

s = old_s - quotient * s;
old_t = t;
t = old_t - quotient * t;
}
cout << old_s << " " << old_t << endl;
cout << old_r << endl;
cout << t << " " << s << endl;


What's wrong with this code?
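The main bug is that each pair of variables must be updated simultaneously: the loop overwrites old_r before computing the new r (so it actually computes r - quotient*r), and makes the same mistake with s and t; in C++ the fix needs a temporary variable for each pair. For comparison, a minimal working sketch in Python (my own, using the same variable names):

def extended_gcd(a, b):
    # returns (g, x, y) with g = gcd(a, b) and g == x*a + y*b
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r   # simultaneous update
        old_s, s = s, old_s - quotient * s
        old_t, t = t, old_t - quotient * t
    return old_r, old_s, old_t

print(extended_gcd(25, 45))  # (5, 2, -1): 5 = 2*25 - 1*45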

Monday 16 April 2018

algorithms - Question on Asymptotic Function.



Question





Let $n = m!$. Which of the following is TRUE?




a) $m = \Theta (\frac{\log n}{\log \log n})$

b) $m = \Omega (\frac{\log n}{\log \log n})$ but not $m = O(\frac{\log n}{\log \log n})$

c) $m = \Theta (\log^2 n)$

d) $m = \Omega (\log^2 n)$ but not $m = O(\log^2 n)$

e) $m = \Theta (\log^{1.5} n)$



Let me make clear that this is not a homework question; I have learnt asymptotic growth and am trying to solve some good questions. While practicing, I found this beautiful question, but I am stuck at one point and unable to move on to conclude the answer.



My approach





Given $n = m!$




$\Rightarrow \log n=(\log m!)=m \log m$



$\Rightarrow m=\frac{\log n }{\log m}$



I am stuck on how to move forward. What value should I give to $\log m$?




Please help


Answer



The first issue with what you wrote is that you write an equality instead of an asymptotic equivalence: you cannot write $\log m! = m\log m$. You can write $$\log m! \operatorname*{\sim}_{m\to\infty} m\log m,$$ which is equivalent to writing $\log m! = m\log m + o(m\log m)$; or, more precisely,
$$
\ln m! = m\ln m - m + O(\log m)
$$
(this is Stirling's approximation).







Now, let us start from there. Since $n=m!$, we have
$$
\log n = m\log m +o(m\log m) \tag{1}
$$
by the above. Let us write $m = f(n)\log n$ for some $f>0$ we are trying to figure out (asymptotically). (1) becomes
$$
\log n = f(n)\log n\log f(n) + f(n)\log n\log \log n +o(f(n)\log n\log (f(n)\log n))
$$
or equivalently
$$

1 = f(n)\log f(n) + f(n)\log \log n +o(f(n)\log (f(n)\log n)) \tag{2}
$$
From this, it is clear that we must have $f(n)=o(1)$ (as otherwise $f(n)\log \log n$ is unbounded). That allows us to simplify (2) as
$$
1 = o(1) + f(n)\log \log n +o(f(n)\log\log n) \tag{3}
$$
which implies (for the equality to hold) that we must have $f(n)\log \log n = 1+o(1)$, and thus $$f(n) \operatorname*{\sim}_{n\to\infty} \frac{1}{\log\log n}\,.$$



Putting it all together,
$$

\boxed{m \operatorname*{\sim}_{n\to\infty} \frac{\log n}{\log\log n}\,.}
$$
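A quick numeric illustration of the boxed asymptotics (my own sketch; lgamma computes $\ln m!$ without ever forming $m!$). The ratio drifts toward $1$, though very slowly, as expected of logarithmic corrections:

from math import lgamma, log

for m in [10, 100, 1000, 10**5]:
    log_n = lgamma(m + 1)              # log n = log(m!)
    print(m, m * log(log_n) / log_n)   # m / (log n / log log n) -> 1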


Sunday 15 April 2018

algorithms - polynomial representation of binary in computer memory and its efficiency

I have noticed that in some cases 'polynomial representation' of a number is the preferred method to represent numbers in computer programs, especially those involving modular arithmetic and cryptography.



I have read that it allows one to simplify the code required to implement the different arithmetic operations and thus is more comfortable to work with. It is also claimed that polynomial form is the natural way to represent integers modulo something.




Here are my questions:




  1. Would you please elaborate on the available methods of converting between the traditional little-endian/big-endian representation of integers and their polynomial counter-parts?


  2. Why is the polynomial form a natural way to represent number modulo something ?


  3. Why does it lead to code simplification ?


Quadratic equations using modular arithmetic

I have a problem I want to solve, but I could not figure out how to approach it. And since I do not have formal maths training, I am not very familiar with the terminology so it is possible I may not know the correct keywords for search.



Below are a couple sample equations, which I want to find a way to solve them efficiently, when the numbers are big. So, no trial and error:




$$ k^2 + k + 5 \equiv 0 \pmod{13-k} $$





Solution for this is k=2




$$ k^2 + 3028 \equiv 0 \pmod{9011-k} $$




And solution for this is k=864




So the question is, how can I solve similar equations with a lot bigger coefficients without trial-error.



Thank you



EDIT: These equations also have negative solutions (k=-4 for the first one and k=-956 for the second one)



UPDATE:



As far as I see from the answers, the solution comes to factoring, which does not have an efficient solution when you have big numbers (more than hundreds of digits) as coefficients.




Thank you for your time and answers. Now I at least know this is really a difficult problem.

Saturday 14 April 2018

elementary set theory - Cardinality of the set of at most countable subsets of the real line?

I'm exploring an unrelated question about power series with complex coefficients. While exploring this question, I wondered: What is the cardinality of the set of all such power series? Or with different language: What is the cardinality of at most countable subsets of $\mathbb{C}$ (or $\mathbb{R}$, if you prefer)?



I asked my advisor and he surprisingly wasn't sure, though he suspects that the set of subsets in question has a larger cardinality than $\mathbb{R}$.



Thanks a lot!




Edit: Certainly if we only consider finite subsets, then this set of subsets has cardinality equal to $\mathbb{R}$.



Edit2: Realized my wording was wrong. I'm actually looking for the cardinality of the set of sequences with entries in $\mathbb{C}$, not the cardinality of the set of at most countable subsets of $\mathbb{C}$. However, both questions are answered below, and both turn out to be $|\mathbb{R}|$.

Find the limits without L'Hôpital: $\lim_{x \to 0}\frac{x-\sin x}{x-\tan x}=?$

Find the limits without L'Hôpital's rule
$$\lim_{ x \to 0 }\frac{x-\sin x}{x-\tan x}=? $$
My Try:
$$\lim_{ x \to 0 }\frac{\sin(\pi-x)-\sin x}{\tan(\pi+x)-\tan x}=?\\\lim_{ x \to 0 }\frac{2\sin(\frac{\pi}{2}-x)\cos(\frac{\pi}{2})}{\frac{\sin(\frac{\pi}{2}-x)}{\cos(\pi+x)\cos(x)}}=\lim_{ x \to 0 }\frac{(2\cos x)(-\cos x)(\cos(\frac{\pi}{2}))}{\cos x}=0$$

but:
$$\lim_{ x \to 0 }\frac{x-\sin x}{x-\tan x}=-1/2$$
Where is my mistake?

Friday 13 April 2018

real analysis - Finding $\lim\limits_{n \rightarrow \infty}\left(\int_0^1(f(x))^n\,\mathrm dx\right)^\frac{1}{n}$ for continuous $f:[0,1]\to[0,\infty)$






Find $$\lim_{n \rightarrow \infty}\left(\int_0^1(f(x))^n\,\mathrm dx\right)^\frac{1}{n}$$if $f:[0,1]\rightarrow(0,\infty)$ is a continuous function.





My attempt:



Say $f(x)$ has a max. value $M$. Then $$\left(\int_0^1(f(x))^ndx\right)^\frac{1}{n}\leq\left(\int_0^1M^ndx\right)^\frac{1}{n} =M$$



I cannot figure out what to do next.


Answer



Your guess that it should be the maximum is a good guess. You have shown that the limit must be $\leq M$. We will now show that the limit must be greater than or equal to $M-\epsilon$ for any $\epsilon$, from which you can conclude that the limit is indeed $M$.



Since $f(x)$ is continuous on $[0,1]$, it attains its maximum $M$ at some point $x_0$; given $\epsilon > 0$, there exists a $\delta$ such that
$$f(x) > M - \epsilon$$ for all $x \in (x_0 -\delta, x_0 + \delta)$. Hence, we have

$$\int_0^1 f(x)^n dx > \int_{x_0 - \delta}^{x_0 + \delta} f(x)^n dx > \int_{x_0 - \delta}^{x_0 + \delta} (M - \epsilon)^n dx = (M-\epsilon)^n 2\delta$$
Hence for any $\epsilon > 0$,
$$\left(\int_0^1 f(x)^n dx\right)^{1/n} > (M-\epsilon)(2 \delta)^{1/n}$$
Letting $n \to \infty$ and using $(2\delta)^{1/n} \to 1$, we get what we want, i.e.,
$$\lim_{n \to \infty}\left(\int_0^1 f(x)^n dx\right)^{1/n} \geq (M-\epsilon)$$
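A quick numerical illustration of the result (my own sketch, with an arbitrary choice of $f$; the quadrature step count is likewise arbitrary):

```python
import math

f = lambda x: 1.0 + math.sin(math.pi * x)   # continuous on [0,1], max M = 2

def nth_root_of_integral(n, steps=20000):
    # midpoint-rule approximation of (integral of f^n over [0,1])^(1/n)
    h = 1.0 / steps
    s = sum(f((i + 0.5) * h) ** n for i in range(steps)) * h
    return s ** (1.0 / n)

for n in (1, 10, 100, 1000):
    print(n, nth_root_of_integral(n))   # creeps up toward M = 2
```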


calculus - Bernoulli and Euler numbers in some known series.



The series expansions of some everyday functions such as $\tan z$ and $\cot z$ involve the Bernoulli and Euler numbers. So does the series for $\dfrac{z}{e^z-1}$ and the Euler-Maclaurin summation formula. How can it be shown analytically that their generating functions are:



$$\frac{z}{e^z-1}=\sum\limits_{n=0}^\infty B_n \frac{z^n}{n!}$$



$$\operatorname{sech} z =\sum\limits_{n=0}^\infty E_n \frac{z^n}{n!}$$




as an introductory example?



I've read a little about $A_n$ numbers and their relation to the tangent and secant, which might lead to a new question, but I guess I'm OK with a proof of the first two series.


Answer



For the Bernoulli numbers, take a look at Graham, Knuth, & Patashnik, Concrete Mathematics, Sections 6.5 and 7.6. In 6.5 they define the Bernoulli numbers $B_k$ by the implicit recurrence $$\sum_{j=0}^m\binom{m+1}jB_j=[m=0]\tag{1}$$ for all $m\ge 0$. (The righthand side is an Iverson bracket.) They then prove the identity $$\sum_{k=0}^{n-1}k^m=\frac1{m+1}\sum_{k=0}^m\binom{m+1}kB_kn^{m+1-k}\;.$$



In 7.6, on exponential generating functions, they rewrite $(1)$ by substituting $n$ for $m+1$ and adding $B_n$ to both sides to get $$\sum_k\binom{n}kB_k=B_n+[n=1]\tag{2}$$ for all $n\ge 0$. The lefthand side of $(2)$ is the binomial convolution of $\langle B_n:n\in\omega\rangle$ and the constant $1$ sequence. The egf of the constant $1$ sequence is just $e^z$; let $$\widehat B(z)=\sum_{n\ge 0}B_n\frac{z^n}{n!}\;,$$ the egf of $\langle B_n:n\in\omega\rangle$. The egf of the lefthand side of $(2)$ is then $$\widehat B(z)e^z\;.$$ On the other hand, the egf of the righthand side of $(2)$ is $$\sum_{n\ge 0}\Big(B_n+[n=1]\Big)\frac{z^n}{n!}=\widehat B(z)+z\;,$$ so $$\widehat B(z)e^z=\widehat B(z)+z\;,$$ and $$\widehat B(z)=\frac{z}{e^z-1}\;.$$ The relationship with the Bernoulli polynomials is explored further in the next few pages.



I can’t help with the Euler (secant) numbers: I’ve only ever seen them defined as the coefficients in the Maclaurin expansion of $\operatorname{sech} x$. You might look at Exercise 7.41 and its solution, however, since it shows the connection between the up-down numbers and the tangent and secant functions.




Added: Suppose that $\widehat A(x)$ and $\widehat B(x)$ are exponential generating functions for $\langle a_n:n\in\omega\rangle$ and $\langle b_n:n\in\omega\rangle$, respectively, so that $$\widehat A(x)=\sum_{n\ge 0}\frac{a_n}{n!}x^n\quad\text{ and }\quad\widehat B(x)=\sum_{n\ge 0}\frac{b_n}{n!}x^n\;.$$



Now let $$c_n=\sum_k\binom{n}ka_kb_{n-k}\;;$$ the sequence $\langle c_n:n\in\omega\rangle$ is the binomial convolution of $\langle a_n:n\in\omega\rangle$ and $\langle b_n:n\in\omega\rangle$. Let $\widehat C(x)$ be the egf of this binomial convolution. Then



$$\begin{align*}
\widehat C(x)&=\sum_{n\ge 0}\frac{c_n}{n!}x^n\\
&=\sum_{n\ge 0}\sum_k\left(\frac{a_k}{k!}\cdot\frac{b_{n-k}}{(n-k)!}\right)x^n\\
&=\left(\sum_{n\ge 0}\frac{a_n}{n!}x^n\right)\left(\sum_{n\ge 0}\frac{b_n}{n!}x^n\right)\\
&=\widehat A(x)\widehat B(x)\;.
\end{align*}$$




Just as ordinary convolution of sequences is reflected in the product of their ordinary generating functions, binomial convolution of sequences is reflected in the product of their exponential generating functions.
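As a cross-check of the recurrence $(1)$ against the egf (a small sketch of mine, using exact rational arithmetic; the coefficients of $z/(e^z-1)$ come out the same):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0..B_n from the recurrence sum_{j<=m} C(m+1, j) B_j = [m == 0]."""
    B = []
    for m in range(n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        # isolate B_m: C(m+1, m) * B_m = [m == 0] - acc, and C(m+1, m) = m+1
        B.append(Fraction((1 if m == 0 else 0) - acc, m + 1))
    return B

print(bernoulli(8))
# 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```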


sequences and series - Convergence of $a_{n+1}=\sqrt{2-a_n}$




I'm attempting to prove the convergence (or divergence, though I strongly suspect it converges) of a sequence defined as $a_{n+1}=\sqrt{2-a_n}$ with $a_1=\sqrt{2}$.



I cannot use the monotonic sequence theorem as the sequence is not monotonically increasing. In fact, the first few values of the sequence are:



$a_1 =\sqrt{2}\approx 1.4142$



$a_2 =\sqrt{2-\sqrt{2}}\approx 0.7653$



$a_3 =\sqrt{2-\sqrt{2-\sqrt{2}}}\approx 1.1111$




Thus, it seems that $a_n \to 1$ as $n \to \infty$.



It seems that the sequence is behaving similarly to $\frac{\sin x}{x}$, leading me to think that the squeeze theorem may be useful. Still, I cannot seem to make any progress besides numerical computation of successive terms.


Answer



Hint: write $a_n=1+b_n$ or $a_n=1-b_n$, whichever makes $b_n$ positive. How does $b_n$ behave?



Elaboration: we have $a_n=1+b_n$ for odd $n$ and $a_n=1-b_n$ for even $n$ (why so?). So, for example, for even $n$ we can write $a_{n+1}=\sqrt{2-a_n}$ as $1+b_{n+1}=\sqrt{2-(1-b_n)}=\sqrt{1+b_n}$. Now you can compare $b_{n+1}$ and $b_n$. Proceed similarly for odd $n$.
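Numerically the hint is easy to watch in action (a throwaway snippet of mine): since $\sqrt{1+b}-1\approx b/2$ for small $b$, the deviation $|a_n-1|$ roughly halves at each step, so the even and odd subsequences are squeezed onto $1$:

```python
import math

a = math.sqrt(2)
for n in range(1, 11):
    print(n, round(a, 6), round(abs(a - 1), 6))  # deviation ~halves each step
    a = math.sqrt(2 - a)
```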


Thursday 12 April 2018

lebesgue integral - Condition for integrability on finite measure space

Let $(X,\mathcal{F},\mu)$ be a finite measure space. If $f:X\rightarrow \mathbb{R}$ is a measurable real function, show that $f\in L^1(\mu)$ iff $\sum\limits_{n=0}^{\infty}\mu(\{f\geq n\})<\infty$. I am a bit stuck on the $(\Leftarrow)$ direction, so any help is appreciated. Thanks.
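In case it helps with the $(\Leftarrow)$ direction, here is the standard layering bound (a sketch, applying the hypothesis to $|f|$ as usual): pointwise, $|f| \le \sum_{n=0}^{\infty} \mathbf{1}_{\{|f| \ge n\}}$, since the right-hand side counts $\lfloor |f(x)| \rfloor + 1$ ones at each $x$. Integrating term by term (monotone convergence) gives
$$\int_X |f|\,d\mu \le \sum_{n=0}^{\infty}\mu(\{|f|\ge n\}) < \infty.$$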

calculus - A sine integral $\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$



The following question comes from the Some integral with sine post,

$$\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$$
but now I'd be curious to know how to deal with it by methods of complex analysis.
Some suggestions, hints? Thanks!!!



Sis.


Answer



Here's another approach.



We have
$$\begin{eqnarray*}
\int_0^\infty dx\, \left(\frac{\sin x}{x}\right)^n

&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \int_{-\infty}^\infty dx\,
\left(\frac{\sin x}{x-i\epsilon}\right)^n \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \int_{-\infty}^\infty dx\,
\frac{1}{(x-i\epsilon)^n}
\left(\frac{e^{i x}-e^{-i x}}{2i}\right)^n \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \frac{1}{(2i)^n} \int_{-\infty}^\infty dx\,
\frac{1}{(x-i\epsilon)^n}

\sum_{k=0}^n (-1)^k {n \choose k} e^{i x(n-2k)} \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^n (-1)^k {n \choose k}
\int_{-\infty}^\infty dx\, \frac{e^{i x(n-2k)}}{(x-i\epsilon)^n}.
\end{eqnarray*}$$
If $n-2k \ge 0$ we close the contour in the upper half-plane and pick up the residue at $x=i\epsilon$.
Otherwise we close the contour in the lower half-plane and pick up no residues.
The upper limit of the sum is thus $\lfloor n/2\rfloor$.
Therefore, using the Cauchy differentiation formula, we find

$$\begin{eqnarray*}
\int_0^\infty dx\, \left(\frac{\sin x}{x}\right)^n
&=& \frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k}
\frac{2\pi i}{(n-1)!}
\left.\frac{d^{n-1}}{d x^{n-1}} e^{i x(n-2k)}\right|_{x=0} \\
&=& \frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^{\lfloor n/2\rfloor}
(-1)^k {n \choose k}
\frac{2\pi i}{(n-1)!} (i(n-2k))^{n-1} \\

&=& \frac{\pi}{2^n (n-1)!}
\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k} (n-2k)^{n-1}.
\end{eqnarray*}$$
The sum can be written in terms of the hypergeometric function but the result is not particularly enlightening.
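As a sanity check of the final formula (my own snippet; `mpmath.quadosc` copes with the oscillatory tail), the closed form matches numerical integration for small $n$:

```python
import mpmath as mp

def closed_form(n):
    s = sum((-1) ** k * mp.binomial(n, k) * mp.mpf(n - 2 * k) ** (n - 1)
            for k in range(n // 2 + 1))
    return mp.pi / (mp.mpf(2) ** n * mp.factorial(n - 1)) * s

for n in range(1, 7):
    numeric = mp.quadosc(lambda x, n=n: (mp.sin(x) / x) ** n,
                         [0, mp.inf], period=2 * mp.pi)
    print(n, closed_form(n), numeric)   # pi/2, pi/2, 3*pi/8, pi/3, ...
```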


combinatorics - How to determine the size of the complete game tree for basic [M]?

You can read the rules of the game here, or actually play it for free on the mobile mbrane app, but neither is required to address the question.



Essentially: players take turns placing integers onto an empty Sudoku until no more integers may be legally placed.



Part of the complete game tree involves roughly $6.67\times 10^{21}$ complete Sudoku grids, reduced for rotation but not for substitution, because the integers have magnitude.



(Full disclosure: this part of the tree is almost entirely meaningless, as strategic placement of the integers, influenced by the topology, seems always to result in incompletable Sudoku, which leads to the real problem.)




Here is an image to illustrate how the dead sectors occur, for those interested. (x's mark the dead sectors):



Typical basic [M] game displaying dead sectors



At some point I plan to figure out how to derive the total number of broken Sudoku (dead sectors can be created with as few as 9 placements), but for now I just want to make sure I understand how the exponential growth of placement sequences interacts mathematically with the factorial structure of Sudoku, and what the proper notation is.




  • What is the size of the basic [M] game tree, assuming only completable Sudoku?




Alternately:




  • How to derive the complete game tree size of basic [M] on a 2x2(2x2) gameboard?



The second example on 2x2(2x2) can actually be checked!



Sorry if this is a really basic question, but I only have basic maths, so be kind! (I'm working up to a question on how to determine the complexity class of the basic game:)

Wednesday 11 April 2018

integration - Triple Euler sum result $\sum_{k\geq 1}\frac{H_k^{(2)}H_k }{k^2}=\zeta(2)\zeta(3)+\zeta(5)$



In the following thread



I arrived at the following result



$$\sum_{k\geq 1}\frac{H_k^{(2)}H_k }{k^2}=\zeta(2)\zeta(3)+\zeta(5)$$



Defining




$$H_k^{(p)}=\sum_{n=1}^k \frac{1}{n^p},\,\,\, H_k^{(1)}\equiv H_k $$



But that came only after lengthy evaluations and many variations on products of polylogarithm integrals.



I think there is an easier approach to the solution. Any ideas?


Answer



Here's a derivation that, while fairly long, is self-contained and uses only basic series manipulation techniques, like partial fractions decomposition, telescoping, swapping the order of summation, etc. It leans heavily on ideas from Borwein and Girgensohn's paper "Evaluation of Triple Euler Sums" (Electronic Journal of Combinatorics 3(1) 1996).



First, some notation. Define the multiple zeta functions by

\begin{align}
\zeta_N(a) &= \sum_{x=1}^N \frac{1}{x^a}, \:\:\: \zeta_N(a,b) = \sum_{x=1}^N \sum_{y=1}^{x-1} \frac{1}{x^a y^b}, \:\:\: \zeta_N(a,b,c) = \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^a y^b z^c}, \\
\zeta(a,b) &= \sum_{x=1}^{\infty} \sum_{y=1}^{x-1} \frac{1}{x^a y^b}, \:\:\: \zeta(a,b,c) = \sum_{x=1}^{\infty} \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^a y^b z^c}.
\end{align}



We will need the following symmetry relation, as well as expressions for $\zeta(4,1)$ and $\zeta(2,2,1) + \zeta(2,1,2)$. Proofs for all of these are given at the end of the post.
\begin{align}
\zeta_N(a,b) + \zeta_N(b,a) &= \zeta_N(a) \zeta_N(b) - \zeta_N(a+b) \tag{1}\\
\zeta(4,1) &= \zeta(5) - \zeta(3,2) - \zeta(2,3) \tag{2}\\
\zeta(2,2,1) + \zeta(2,1,2) &= \zeta(2,3) + \zeta(3,2) \tag{3}

\end{align}



Given these, we have



The Main Proof:
\begin{align}
\sum_{k=1}^{\infty} \frac{H^{(2)}_k H_k}{k^2} &= \sum_{k=1}^{\infty} \frac{H^{(2)}_{k-1} H_{k-1}}{k^2} + \sum_{k=1}^{\infty} \frac{H^{(2)}_{k-1}}{k^3} + \sum_{k=1}^{\infty} \frac{H_{k-1}}{k^4} + \sum_{k=1}^{\infty} \frac{1}{k^5} \\
&= \sum_{k=1}^{\infty} \frac{H^{(2)}_{k-1} H_{k-1}}{k^2} + \zeta(3,2) + \zeta(4,1) + \zeta(5).
\end{align}
The most complicated sum is the first, so let's look at that more closely.

\begin{align}
\sum_{k=1}^{\infty} \frac{H^{(2)}_{k-1} H_{k-1}}{k^2} &= \sum_{k=1}^{\infty} \frac{1}{k^2} \zeta_{k-1}(2) \zeta_{k-1}(1) \\
&= \sum_{k=1}^{\infty} \frac{1}{k^2} (\zeta_{k-1}(2,1) + \zeta_{k-1}(1,2) + \zeta_{k-1}(3)), \text{ by (1)} \\
&= \zeta(2,2,1) + \zeta(2,1,2) + \zeta(2,3), \text{ by definition of the multiple zeta functions} \\
&= 2\zeta(2,3) + \zeta(3,2), \text{ by (3)}.
\end{align}
Thus
\begin{align}
\sum_{k=1}^{\infty} \frac{H^{(2)}_k H_k}{k^2} &= 2 \zeta(2,3) + \zeta(3,2) + \zeta(3,2) + \zeta(5) - \zeta(3,2) - \zeta(2,3) + \zeta(5), \text{ by (2)} \\
&= \zeta(2,3) + \zeta(3,2) + 2 \zeta(5) \\

&= \zeta(2) \zeta(3) - \zeta(5) + 2 \zeta(5), \text{ by (1)} \\
&= \zeta(2) \zeta(3) + \zeta(5).
\end{align}




Proof of (1):
\begin{align}
\zeta_N(a,b) + \zeta_N(b,a) &= \sum_{x=1}^N \sum_{y=1}^{x-1} \frac{1}{x^a y^b} + \sum_{x=1}^N \sum_{y=1}^{x-1} \frac{1}{x^b y^a} \\
&= \sum_{y=1}^N \sum_{x=y+1}^N \frac{1}{x^a y^b} + \sum_{x=1}^N \sum_{y=1}^{x-1} \frac{1}{x^b y^a}, \\
& \:\:\:\:\:\: \text{ swapping the order of summation on the first sum} \\
&= \sum_{x=1}^N \sum_{y=x+1}^N \frac{1}{y^a x^b} + \sum_{x=1}^N \sum_{y=1}^{x-1} \frac{1}{x^b y^a}, \text{ relabeling variables on the first sum} \\

&= \sum_{x=1}^N \sum_{y=1}^N \frac{1}{y^a x^b} - \sum_{x=1}^N \frac{1}{x^{a+b}}, \text{ combining sums} \\
&= \zeta_N(a) \zeta_N(b) - \zeta_N(a+b). \square
\end{align}

Proof of (2):
\begin{align}
\zeta(4,1) &= \sum_{x=1}^{\infty} \sum_{y=1}^{x-1} \frac{1}{x^4 y} \\
&= \sum_{x=1}^{\infty} \sum_{y=1}^{x-1} \frac{1}{x^4 (x-y)}, \text{ reindexing the second sum} \\
&= \sum_{x=1}^{\infty} \sum_{y=1}^{x-1} \left(-\frac{1}{x^4 y} - \frac{1}{x^3 y^2} - \frac{1}{x^2y^3} - \frac{1}{x y^4} + \frac{1}{(x-y)y^4}\right), \\
&\:\:\:\:\: \text{ by partial fractions decomposition}\\

&= - \zeta(4,1) - \zeta(3,2) - \zeta(2,3) + \sum_{x=1}^{\infty} \sum_{y=1}^{x-1} \left(\frac{1}{(x-y)y^4} - \frac{1}{x y^4} \right) \\
&= - \zeta(4,1) - \zeta(3,2) - \zeta(2,3) + \sum_{x=1}^{\infty} \sum_{y=1}^{x-1} \frac{1}{y^4} \left(\frac{1}{x-y} - \frac{1}{x} \right) \\
&= - \zeta(4,1) - \zeta(3,2) - \zeta(2,3) + \sum_{y=1}^{\infty} \frac{1}{y^4} \sum_{x=y+1}^{\infty} \left(\frac{1}{x-y} - \frac{1}{x} \right), \\
& \:\:\:\:\: \text{ swapping the order of summation} \\
&= - \zeta(4,1) - \zeta(3,2) - \zeta(2,3) + \sum_{y=1}^{\infty} \frac{1}{y^4} \sum_{x=1}^y \frac{1}{x}, \text{ as the sum telescopes} \\
&= - \zeta(4,1) - \zeta(3,2) - \zeta(2,3) + \zeta(4,1) + \zeta(5) \\
&= \zeta(5) - \zeta(3,2) - \zeta(2,3). \square
\end{align}



For the proof of (3), we need the following additional symmetry result:

\begin{equation}
\zeta_N(a,b,c) + \zeta_N(a,c,b) + \zeta_N(c,a,b) = \zeta_N(c) \zeta_N(a,b) - \zeta_N(a,b+c) - \zeta_N(a+c,b) \tag{4}
\end{equation}



Proof of (4):
\begin{align}
&\zeta_N(a,b,c) + \zeta_N(a,c,b) + \zeta_N(c,a,b) \\
&=\sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^a y^b z^c} + \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^a y^c z^b} + \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^c y^a z^b} \\
&= \sum_{x=1}^N \sum_{z=1}^{x-1} \sum_{y=z+1}^{x-1} \frac{1}{x^a y^b z^c} + \sum_{y=1}^N \sum_{x=y+1}^N \sum_{z=1}^{y-1}\frac{1}{x^a y^c z^b} + \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^c y^a z^b}, \\
&\:\:\:\:\:\text{ swapping order of summation on the first two sums} \\

&= \sum_{z=1}^N \sum_{x=z+1}^N \sum_{y=z+1}^{x-1} \frac{1}{x^a y^b z^c} + \sum_{y=1}^N \sum_{x=y+1}^N \sum_{z=1}^{y-1}\frac{1}{x^a y^c z^b} + \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^c y^a z^b}, \\
&\:\:\:\:\:\text{ swapping order of summation on the first sum} \\
&= \sum_{x=1}^N \sum_{y=x+1}^N \sum_{z=x+1}^{y-1} \frac{1}{x^c y^a z^b} + \sum_{x=1}^N \sum_{y=x+1}^N \sum_{z=1}^{x-1}\frac{1}{x^c y^a z^b} + \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^c y^a z^b}, \\
&\:\:\:\:\: \text{ relabeling variables on the first two sums} \\
&= \sum_{x=1}^N \sum_{y=x+1}^N \sum_{z=1}^{y-1} \frac{1}{x^c y^a z^b} - \sum_{x=1}^N \sum_{y=x+1}^N \frac{1}{x^{b+c} y^a} + \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1}\frac{1}{x^c y^a z^b}, \\
&\:\:\:\:\: \text{ combining the first two sums} \\
&= \sum_{x=1}^N \sum_{y=1}^N \sum_{z=1}^{y-1} \frac{1}{x^c y^a z^b} - \sum_{x=1}^N \sum_{z=1}^{x-1} \frac{1}{x^{a+c} z^b} - \sum_{y=1}^N \sum_{x=1}^{y-1} \frac{1}{x^{b+c} y^a}, \\
&\:\:\:\:\:\text{ combining the first and third sums and swapping the order of summation on the second} \\
&= \zeta_N(c) \zeta_N(a,b) - \zeta_N(a+c,b) - \zeta_N(a,b+c). \square
\end{align}




Proof of (3):
\begin{align}
\zeta_N(2,2,1) &= \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1} \frac{1}{x^2 y^2 z} \\
&= \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1} \frac{1}{x^2 y^2 (y-z)}, \text{ reindexing on the third sum} \\
&= \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1} \left( -\frac{1}{x^2 y z^2} - \frac{1}{x^2 y^2 z} + \frac{1}{x^2(y-z)z^2} \right), \\
&\:\:\:\:\: \text{ by partial fractions decomposition} \\
&= - \zeta_N(2,1,2) - \zeta_N(2,2,1) + \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1} \frac{1}{x^2(y-z)z^2} \tag{5}. \\
\end{align}
Now, let's look at the third expression in (5).

\begin{align}
&\sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1} \frac{1}{x^2(y-z)z^2} \\
&= \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{x-y-1} \frac{1}{x^2(x-y-z)z^2}, \text{ reindexing the second sum} \\
&= \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=y+1}^{x-1} \frac{1}{x^2(x-z)(z-y)^2}, \text{ reindexing the third sum} \\
&= \sum_{x=1}^N \sum_{z=1}^{x-1} \sum_{y=1}^{z-1} \frac{1}{x^2(x-z)(z-y)^2}, \text{ swapping the order of summation} \\
&= \sum_{x=1}^N \sum_{z=1}^{x-1} \sum_{y=1}^{z-1} \frac{1}{x^2(x-z)y^2}, \text{ reindexing the third sum} \\
&= \sum_{x=1}^N \sum_{z=1}^{x-1} \sum_{y=1}^{z-1} \left(-\frac{1}{x y^2 z^2} - \frac{1}{x^2 y^2 z} + \frac{1}{(x-z)y^2 z^2} \right), \text{ by partial fractions decomposition} \\
&= - \zeta_N(1,2,2) - \zeta_N(2,1,2) + \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1} \frac{1}{(x-y)y^2 z^2} \tag{6}, \text{ relabeling variables}.
\end{align}
Let's look at the third expression in (6).

\begin{align}
&\sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1} \frac{1}{(x-y)y^2 z^2} \\
&= \sum_{x=1}^N \sum_{y=1}^{x-1} \sum_{z=1}^{y-1} \frac{1}{(x-y)y^2 z^2} + \sum_{x=1}^N \sum_{y=N+1-x}^N \sum_{z=1}^{y-1} \frac{1}{x y^2 z^2} - \sum_{x=1}^N \sum_{y=N+1-x}^N \sum_{z=1}^{y-1} \frac{1}{x y^2 z^2} \\
&= \left(\sum_{x=1}^N \frac{1}{x}\right) \left(\sum_{y=1}^N \sum_{z=1}^{y-1} \frac{1}{y^2 z^2} \right) - \sum_{x=1}^N \sum_{y=N+1-x}^N \sum_{z=1}^{y-1} \frac{1}{x y^2 z^2}, \\
&\:\:\:\:\: \text{ via the finite sum version of the Cauchy product} \\
&= \zeta_N(1) \zeta_N(2,2) - e_N(1,2,2), \tag{7} \\
\end{align}
where
$$e_N(1,2,2) = \sum_{x=1}^N \sum_{y=N+1-x}^N \sum_{z=1}^{y-1} \frac{1}{x y^2 z^2}.$$
Putting (5), (6), and (7) together, we have

\begin{align}
\zeta_N(2,2,1) &= - \zeta_N(2,1,2) - \zeta_N(2,2,1) - \zeta_N(1,2,2) - \zeta_N(2,1,2) + \zeta_N(1) \zeta_N(2,2) - e_N(1,2,2), \\
\zeta_N(2,2,1) + \zeta_N(2,1,2) &= - \zeta_N(1) \zeta_N(2,2) + \zeta_N(2,3) + \zeta_N(3,2) + \zeta_N(1) \zeta_N(2,2) - e_N(1,2,2), \text{ by (4)} \\
&= \zeta_N(2,3) + \zeta_N(3,2) - e_N(1,2,2).
\end{align}
All that remains to complete the proof of (3) is to show that $e_N(1,2,2) \to 0$ as $N \to \infty$. We have
\begin{align}
e_N(1,2,2) &= \sum_{x=1}^N \sum_{y=N+1-x}^N \sum_{z=1}^{y-1} \frac{1}{x y^2 z^2} \\

&\leq \sum_{x=1}^N \sum_{y=N+1-x}^N \sum_{z=1}^N \frac{1}{x y^2 z^2} \\
&= \zeta_N(2) \sum_{x=1}^N \sum_{y=N+1-x}^N \frac{1}{x y^2} \\
&= \zeta_N(2) \sum_{y=1}^N \sum_{x=N+1-y}^N \frac{1}{x y^2}, \text{ swapping the order of summation} \\
&\leq \zeta_N(2) \sum_{y=1}^N \frac{1}{y^2} \sum_{x=N+1-y}^N \frac{1}{N+1-y} \\
&= \zeta_N(2) \sum_{y=1}^N \frac{1}{y^2} \frac{y}{N+1-y} \\
&= \zeta_N(2) \sum_{y=1}^N \frac{1}{y (N+1-y)}\\
&= \zeta_N(2) \frac{1}{N+1}\sum_{y=1}^N \left(\frac{1}{y} + \frac{1}{N+1-y} \right), \text{ by partial fractions decomposition} \\
&= \zeta_N(2) \frac{2}{N+1} \zeta_N(1),
\end{align}
which goes to $0$ as $N \to \infty$, since $\zeta_N(1) = O(\log N)$ and $\zeta_N(2) = O(1)$. $\square$
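For what it's worth, a brute-force numerical check agrees (my own snippet; the tail of the partial sum decays like $(\log N)/N$, so a million terms give about four digits):

```python
from mpmath import zeta

N = 10**6
H1 = H2 = total = 0.0
for k in range(1, N + 1):
    H1 += 1.0 / k
    H2 += 1.0 / k**2
    total += H2 * H1 / k**2

print(total)                        # ~3.0142 (partial sum)
print(zeta(2) * zeta(3) + zeta(5))  # 3.01423...
```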



How to simplify this fraction?




Can anyone show me how to simplify this fraction:




$$
\frac{(k + 1)((k + 1)+1)(2(k + 1)+1)}{6}\;\;.
$$




What can be factored out and so forth?




Thanks.


Answer



$$
\frac{(k+1)((k+1)+1)(2(k+1)+1)}{6}=\frac{(k+1)(k+2)(2k+3)}{6}
$$



Then if $k=2n$ you have



\begin{align*}
\frac{(k+1)(k+2)(2k+3)}{6}=&

\frac{(2n+1)(2n+2)(4n+3)}{6}\\
=&\frac{(2n+1)2(n+1)(4n+3)}{6}\\
=&\frac{(2n+1)(n+1)(4n+3)}{3}
\end{align*}



it's not so different if $k=2n+1$.
Then if you want you can consider in more detail: what if $n=2m$? And $n=2m+1$?
Enjoy!
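If you want to double-check the case analysis numerically (an illustrative snippet): the product $(k+1)(k+2)(2k+3)$ is divisible by $6$ for every $k\ge 0$, as it must be, since the fraction equals the integer $1^2+2^2+\cdots+(k+1)^2$:

```python
for k in range(12):
    val = (k + 1) * (k + 2) * (2 * k + 3)
    assert val % 6 == 0
    assert val // 6 == sum(i * i for i in range(1, k + 2))
    print(k, val // 6)
```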


Tuesday 10 April 2018

modular arithmetic - Prove by induction that if $a\equiv b \pmod m$ then $a^n \equiv b^n \pmod m$

The base case is pretty straightforward. But I'm stuck on the inductive step.



As the base case holds, assume the statement is true when $n=k$, and show that the $n=k+1$ case holds.




Inductive Hypothesis: $a^k \equiv b^k \pmod m$, then



$$a^{k+1} \equiv b^{k+1} \pmod m \iff a^{k+1} - b^{k+1} = mq \text{ for some } q \in \mathbb{Z}. $$



I think I'm missing some steps; I'm not sure how to manipulate what I have to show that it holds.
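In case it helps, the step usually missing here is the following factoring trick (a sketch, not a full write-up): from $m \mid (a-b)$ and the inductive hypothesis $m \mid (a^k - b^k)$, write

$$a^{k+1} - b^{k+1} = a\,a^{k} - b\,b^{k} = a\,(a^{k} - b^{k}) + (a - b)\,b^{k},$$

and $m$ divides both terms on the right, hence their sum $a^{k+1}-b^{k+1}$.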

sequences and series - Prove for a triangle whose sides are in AP

If the sides of a triangle are in AP and the greatest angle exceeds the smallest angle by $a$, show that the sides are in the ratio $1-x : 1 : 1+x$, where $x = \sqrt{\dfrac{1-\cos a}{7-\cos a}}$.

Sunday 8 April 2018

linear algebra - If $AB = I$ then $BA = I$





If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$.




I do not understand anything more than the following.




  1. Elementary row operations.

  2. Linear dependence.


  3. Row reduced forms and their relations with the original matrix.



If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem?



P.S.: Please avoid using the transpose and/or inverse of a matrix.


Answer



Dilawar says in 2. that he knows linear dependence! So I will give a proof, similar to that of TheMachineCharmer, which uses linear independence.



Suppose each matrix is $n$ by $n$. We consider our matrices to all be acting on some $n$-dimensional vector space with a chosen basis (hence isomorphism between linear transformations and $n$ by $n$ matrices).




Then $AB$ has range equal to the full space, since $AB=I$. Thus the range of $B$ must also have dimension $n$. For if it did not, then a set of $n-1$ vectors would span the range of $B$, so the range of $AB$, which is the image under $A$ of the range of $B$, would also be spanned by a set of $n-1$ vectors, hence would have dimension less than $n$.



Now note that $B=BI=B(AB)=(BA)B$. By the distributive law, $(I-BA)B=0$. Thus, since $B$ has full range, the matrix $I-BA$ gives $0$ on all vectors. But this means that it must be the $0$ matrix, so $I=BA$.


complex variable integral using residue theorem

I am asked to calculate a complex integral. How can I compute $\displaystyle \int \limits_{-\infty}^{\infty}\frac{x^4}{1+x^8}\,dx$ with the residue theorem?
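Not a residue computation, but a numeric target to aim for (my own check, using the standard evaluation $\int_{-\infty}^{\infty}\frac{x^{2m}}{1+x^{2n}}\,dx=\frac{\pi}{n\sin\left(\frac{(2m+1)\pi}{2n}\right)}$, here with $m=2$, $n=4$):

```python
from mpmath import mp, quad, pi, sin, inf

mp.dps = 20
print(quad(lambda x: x**4 / (1 + x**8), [-inf, inf]))  # ~0.85011
print(pi / (4 * sin(5 * pi / 8)))                      # same value
```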

cardinals - Why are the cardinalities of $\mathbb{R}^n$ and $\mathbb{R}$ the same?

As I study the first part of abstract algebra, I have a question: why is $\left\vert{\mathbb R}\right\vert = \left\vert{\mathbb R^2}\right\vert$?



Moreover, I know the fact that $\left\vert{\mathbb R}\right\vert = \left\vert{\mathbb R^n}\right\vert$.



Does the equality apply to any infinite set?

How can I prove these statements?

real analysis - A discontinuous function such that $f(x + y) = f(x) + f(y)$




Is it possible to construct a function $f \colon \mathbb{R} \to \mathbb{R}$ such that $$f(x + y) = f(x) + f(y)$$ and $f$ is not continuous?


Answer



"Fix a basis for $\mathbb R$ as a $\mathbb Q$-vector space" (which exists under the axiom of choice, but under weaker axioms I have no idea what happens). The condition $f(x+y) = f(x) + f(y)$ is equivalent to $f$ being $\mathbb Q$-linear, so you're asking if there exists a non-trivial discontinuous map of $\mathbb Q$-vector spaces between $\mathbb R$ and itself. If you map the basis elements to other basis elements in a discontinuous way, you will obtain such a map.



Added: A quick way to see that "a discontinuous way of doing it" exists is that the set of $\mathbb Q$-linear maps that are also $\mathbb R$-linear has the cardinality of the reals, whereas the set of all $\mathbb Q$-linear maps (or in other words, the number of maps between the basis elements) has cardinality $|\mathbb R|^{|\mathbb R|}$. To understand why, note that of course the basis for $\mathbb R$ as a $\mathbb Q$-vector space has cardinality $\le |\mathbb R|$, but if it were countable, then $\mathbb R$ would be countable because all of its elements would be a linear combination of elements in a countable set with countably many possible coefficients. Since the basis for $\mathbb R$ as a $\mathbb Q$-vector space is uncountable, the set of all maps from one basis to another also is. Therefore there must be at least one map between the two bases which generates a discontinuous linear map.



Hope that helps,


sequences and series - Is $\sum_{n=3}^\infty\frac{1}{n\log n}$ absolutely convergent, conditionally convergent or divergent?



Classify
$$\sum_{n=3}^\infty \frac{1}{n\log(n)}$$
as absolutely convergent, conditionally convergent or divergent.



Is it that
$$\sum_{n=3}^\infty \frac{1}n$$ is a divergent $p$-series as $p=1$, and
$$\lim_{n\to\infty} \frac{1/(n\log(n))}{1/n} = \lim_{n\to\infty}\frac{1}{\log(n)} = 0$$ by the limit comparison test?



So,
$$\sum_{n=3}^\infty \frac{1}{n\log (n)}$$
is conditionally convergent?



I'm not sure if I'm doing this right or not. Could you guide me?



Thanks in advance! :)


Answer



Note that if your given series is convergent then it's also absolutely convergent since $\frac{1}{n\log n}\geq 0\quad \forall n\geq 3$.




Now, since the sequence $(\frac{1}{n\log n})$ is decreasing to $0$, by the integral test your series has the same nature as the improper integral
$$\int_3^\infty\frac{dx}{x\log x}=\left[\log(\log(x))\right]_3^{\to\infty}=\infty$$
hence the series is divergent.
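To see just how slow the divergence is (an illustrative snippet, not part of the proof): the partial sums track $\log\log N$ up to an additive constant, so they grow without bound, but glacially:

```python
from math import log

s, checkpoint = 0.0, 10
for n in range(3, 10**7 + 1):
    s += 1.0 / (n * log(n))
    if n == checkpoint:
        print(n, round(s, 4), round(log(log(n)), 4))
        checkpoint *= 10
```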


algebra precalculus - Prove that $(1+2+3+\cdots+n)^2=1^3+2^3+3^3+\cdots+n^3$ $\forall n \in \mathbb{N}$.

Prove that $(1+2+3+\cdots+n)^2=1^3+2^3+3^3+\cdots+n^3$ for every $n \in \mathbb{N}$.



I'm trying to use induction on this one, but I'm not sure how to. The base case is clearly true. But when I add $n+1$ to the right and left hand sides, I don't know what I'm supposed to get. For example, when I extend the right hand side's sequence by $n+1$, I get $n^3+(n+1)^3$ at the end of the sequence, which is $2n^3 +3n^2+3n+1$.




This doesn't seem that meaningful, so I don't know how to proceed.
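For what it's worth, the inductive step works out cleanly once you use the closed form $1+2+\cdots+n=\frac{n(n+1)}{2}$ for the cross term (a sketch of the missing step): writing $S_n = 1+2+\cdots+n$,

$$S_{n+1}^2 = \big(S_n + (n+1)\big)^2 = S_n^2 + 2(n+1)S_n + (n+1)^2 = S_n^2 + n(n+1)^2 + (n+1)^2 = S_n^2 + (n+1)^3,$$

so the left-hand side grows by exactly $(n+1)^3$, which is precisely the new term on the right.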

Saturday 7 April 2018

polynomials - How do I come up with a function to count a pyramid of apples?

My algebra book has a quick practical example at the beginning of the chapter on polynomials and their functions. Unfortunately it just says "this is why polynomial functions are important" and moves on. I'd like to know how to come up with the function (the teacher considers this out of scope). Even suggesting Google search terms to find more information on this sort of thing would be helpful.



The example




Consider a stack of apples, in a pyramid shape. It starts with one apple at the top. The next layer is 2x2, then 3x3, and so on. How many apples are there, given $x$ layers?



The polynomial function



$$f(x) = \frac{2x^3 + 3x^2 + x}{6}$$



What I do and don't understand



Thanks to @DJC, I now know this is the standard function generating the Square Pyramidal Numbers, a special case of Faulhaber's formula. Faulhaber's formula appears to be about quickly summing consecutive integers all raised to the same power. Very cool. But how does one get from:




$$\sum_{k=1}^{n} k^p$$



to the spiffy function above? If I'm sounding stupid, how do I make the question better?



Fwiw, I'm in intermediate algebra in the USA. The next course would be Trigonometry or Calculus. (to help people place my current knowledge level)
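Since the book skips the derivation, here is one low-tech route (a sketch of mine using `sympy`; the fitting idea is the point, not the library): a sum of squares grows like a cubic, so fit $f(x)=ax^3+bx^2+cx+d$ to the first few pyramid counts and then verify the result symbolically:

```python
import sympy as sp

x, k, a, b, c, d = sp.symbols('x k a b c d')

# A sum of squares grows like a cubic, so fit f(x) = ax^3 + bx^2 + cx + d
f = a * x**3 + b * x**2 + c * x + d
data = [(n, sum(i**2 for i in range(1, n + 1))) for n in range(1, 5)]
sol = sp.solve([f.subs(x, n) - total for n, total in data], [a, b, c, d])
print(sp.factor(f.subs(sol)))                 # x*(x + 1)*(2*x + 1)/6

# sympy can also produce the closed form directly:
print(sp.factor(sp.summation(k**2, (k, 1, x))))
```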

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...