Tuesday 30 April 2019

calculus - True/False: $\mathop {\lim }\limits_{n \to \infty } {{{a_n}} \over {{b_n}}} = 1$ implies $\sum a_n, \sum b_n$ converge or diverge together.



$$\mathop {\lim }\limits_{n \to \infty } {{{a_n}} \over {{b_n}}} = 1$$
Prove the statement implies that $\sum a_n$, $\sum b_n$ converge or diverge together.
My guess is that the statement is true.



if $\sum{{a_n}}$ diverges, then $\mathop {\lim }\limits_{n \to \infty } {a_n} \ne 0$



So,
$$\eqalign{
& \mathop {\lim }\limits_{n \to \infty } {a_n} = L \ne 0 \cr

& {{\mathop {\lim }\limits_{n \to \infty } {a_n}} \over {\mathop {\lim }\limits_{n \to \infty } {b_n}}} = 1 \Rightarrow {L \over {\mathop {\lim }\limits_{n \to \infty } {b_n}}} = 1 \Rightarrow L = \mathop {\lim }\limits_{n \to \infty } {b_n} \ne 0 \cr} $$



therefore,
$\sum {b_n}$ also diverges.



What I did not manage to do is prove that the two series converge together.
Or maybe the statement is not always true?


Answer



Surprisingly, this statement is false. For a simple counter-example, consider
$$
a_n = \frac{(-1)^n}{\sqrt{n}},\quad\text{and}\quad b_n = \frac{(-1)^n}{\sqrt{n}} + \frac{1}{n}

$$
The condition $a_n \sim b_n$ holds but $\sum a_n$ is convergent whereas $\sum b_n$ is divergent.


calculus - How to prove $f(x)$ is an integrable function on $[0,2]$



Let



$$ f(x) = \begin{cases}
1, & 0 \le x < 1 \\
0, & x=1 \\
1, & 1 < x \le 2
\end{cases}$$



How can one prove that $f(x)$ is integrable on $[0,2]$?



I know that if $f(x)$ is defined and bounded on $[a,b]$, then $f(x)$ is integrable



$iff$



$\forall ϵ>0$ there is a partition $P$ such that




$$U(f;P)-L(f;P)<ϵ$$



How can I prove that?



thanks


Answer



Notice that on any interval $(a,b)\subset [0,2]$ we have that $\max_{x\in (a,b)}{f(x)}=1$ and $$\min_{x\in (a,b)}f(x)=\begin{cases}
0 & \mbox{ if } 1\in (a,b),\\
1 & \mbox{ otherwise.}
\end{cases}$$




Let $n\in \mathbb{N}$ and let $P=(P_i)_{i\in I}$ be any partition such that $P_j=(1-\frac{1}{n},1+\frac{1}{n})$ for some $j$. Clearly $U(f;P)=2$. Notice that $L(f;P)=2-\frac{2}{n}$. Since $n$ was chosen arbitrarily, we can get $U(f;P)-L(f;P)$ as small as we want.
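The answer's estimate is easy to check mechanically. A short sketch (assuming the three-cell partition described above, with no extra points) computing $U(f;P)-L(f;P)$:

```python
def upper_lower_gap(n):
    # Partition of [0,2] whose only cell containing the bad point x = 1
    # is (1 - 1/n, 1 + 1/n); there inf f = 0, everywhere else f is 1.
    pts = [0.0, 1.0 - 1.0 / n, 1.0 + 1.0 / n, 2.0]
    gap = 0.0
    for a, b in zip(pts, pts[1:]):
        sup_f = 1.0                          # sup of f on any interval is 1
        inf_f = 0.0 if a < 1.0 < b else 1.0  # inf is 0 only around x = 1
        gap += (sup_f - inf_f) * (b - a)
    return gap

for n in (2, 10, 100):
    print(n, upper_lower_gap(n))   # gap = 2/n -> 0, so f is integrable
```

The gap is exactly the length $2/n$ of the cell straddling $x=1$, matching $U(f;P)-L(f;P)=2-(2-\frac{2}{n})$ in the answer.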


calculus - $dy\over dx$ is one thing, but why in integration can we treat it as 2 different terms?

When I was learning differentiation, my lecturer told us that the derivative $dy\over dx$ is one thing; it is not the ratio between $dy$ and $dx$. However, when I learn
about integration, sometimes we need to do a substitution, like integrating $\int_{0}^{1}2x\,dx$: when substituting $y=2x$, we can substitute $dy=2\,dx$. But why in this case can it be treated as 2 different terms instead of 1 term?

limits - Why is $\sqrt x \in O(\log(x))$?



Since $\lim_{x \rightarrow \infty}(\frac {\log(x)}{\sqrt x} ) = 0$, we can conclude that $\log(x) \in o(\sqrt x)$.




This implies that $\sqrt x \in O(\log (x)) $. Yet, looking at the graphs, there doesn't seem to be a constant multiple of $\log (x)$ that $\sqrt x$ is always less than, because $\sqrt x$ just keeps getting bigger.



Not sure what I'm missing here, any help is appreciated.


Answer



$f(x)\in o(g(x))$ does not imply that $g(x)\in O(f(x)).$ For example, $x\in o(x^2)$, but surely $x^2 \notin O(x).$ What you might write is that $x^2 \in \omega (x).$ (Some people say that $g(x)\in\omega (f(x))$ if $f(x)/g(x)$ converges to $0$, as $x\to \infty.$)



Loosely speaking, $f\in O(g)$ means that $f/g$ is bounded in the long run, and $f\in o(g)$ means that $f(x)/g(x)$ diminishes to zero as $x\to\infty.$ So you see, $f\in o(g)$ merely implies $f\in O(g)$; $g/f$ can very well be unbounded!
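A quick numerical illustration of both facts ($\log x/\sqrt x$ tends to $0$, while $\sqrt x/\log x$ is unbounded):

```python
import math

xs = [10.0 ** k for k in range(1, 8)]
ratio_small = [math.log(x) / math.sqrt(x) for x in xs]  # -> 0: log is o(sqrt)
ratio_big = [math.sqrt(x) / math.log(x) for x in xs]    # unbounded: sqrt not O(log)

print(ratio_small[-1], ratio_big[-1])
```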


Monday 29 April 2019

calculus - How to solve this problem on the differentiation under integral sign

I was solving problems on differentiation under integral sign :



I encountered this problem :
Evaluate this integral $$I=\int_0^\infty\frac{(e^{-ax}-e^{-4bx})\sin(x)}{x}dx$$
and then use it to solve the integral
$$I_1=\int_0^\infty\frac{(x^3e^{-2x}-x^2e^{-3x}+1)\sin(x)}{x}dx$$

I evaluated the integral $I$, then I split the integral $I_1$ into 3 integrals. I can solve the first 2 integrals (by differentiating under the integral sign, as I clarify below). My problem is in the integration of $\sin(x)$ over $x$: how can I deduce it from the integral $I$?
$$I_1=\int_0^\infty x^2e^{-2x}\sin(x)dx+\int_0^\infty-x e^{-3x}\sin(x)dx+\int_0^\infty \frac{\sin(x)}{x}dx$$



Here is my solution to evaluate $I$, and then the first 2 parts of the integral $I_1$. (I know the solution below is correct, but how do I complete it to evaluate the last part of $I_1$?)
$$\frac{\partial I}{\partial a}=\int_0^\infty - e^{-ax} \sin(x)\,dx=\frac{-1}{1+a^2}$$
$$I=\int\frac{-1}{1+a^2}da+f(b)$$
$$I=-\tan^{-1}(a)+f(b)$$
$$f(b)=\tan^{-1}(4b)$$
$$I=-\tan^{-1}(a)+\tan^{-1}(4b)$$
$$\frac{\partial^2 I}{\partial a^2}=\int_0^\infty x e^{-ax} \sin(x)\,dx=\frac{d}{da}\left(\frac{-1}{1+a^2}\right)=\frac{2a}{(1+a^2)^2}$$

$$\frac{\partial^3 I}{\partial a^3}=\int_0^\infty -x^2 e^{-ax} \sin(x)\,dx=\frac{d}{da}\left(\frac{2a}{(1+a^2)^2}\right)=\frac{2-6a^2}{(a^2+1)^3}$$
$$\int_0^\infty x^2e^{-2x}\sin(x)\,dx=-\frac{\partial^3 I}{\partial a^3}\Big|_{a=2}$$
$$\int_0^\infty-x e^{-3x}\sin(x)\,dx=-\frac{\partial^2 I}{\partial a^2}\Big|_{a=3}$$
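As a numerical sanity check on the building block $\int_0^\infty e^{-ax}\sin(x)\,dx=\frac{1}{1+a^2}$ used above (a sketch with an assumed truncation point, not part of the derivation):

```python
import math

def integral_exp_sin(a, T=40.0, n=200_000):
    # Composite trapezoidal rule for the integral of e^(-a x) sin x over
    # [0, T]; for a >= 1 the tail beyond T = 40 is negligible.
    h = T / n
    s = 0.5 * (0.0 + math.exp(-a * T) * math.sin(T))   # endpoints (f(0) = 0)
    for k in range(1, n):
        x = k * h
        s += math.exp(-a * x) * math.sin(x)
    return s * h

a = 2.0
approx = integral_exp_sin(a)
exact = 1.0 / (1.0 + a * a)   # the closed form used in the derivation
print(approx, exact)          # agree to roughly 6 decimal places
```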

field theory - $\sqrt{p_1}$ is not in $\mathbb Q[\sqrt{p_2},\ldots,\sqrt{p_n}]$




How to show $\sqrt{p_1}$ is not in $Q[\sqrt{p_2},...,\sqrt{p_n}]$ if $p_1,...,p_n$ are distinct primes? Intuitively, this is pretty clear, but it makes me very uncomfortable to just believe. Any idea to prove this rigorously? I want this result because I am trying to compute the Galois group of $(X^2-p_1)...(X^2-p_n)$. If I know the statement is true, then the Galois group of this polynomial will be direct product of separate Galois group.


Answer



You may also go through the following lines: by quadratic reciprocity and Dirichlet's theorem, there is some uber-huge prime $P$ for which $p_2,p_3,\ldots,p_n$ are quadratic residues, while $p_1$ is not. It follows that the algebraic numbers $\sqrt{p_1}$ and $\sqrt{p_2}+\ldots+\sqrt{p_n}$ have different degrees over $\mathbb{F}_P$ ($2$ and $1$, respectively), so they cannot be linearly dependent over $\mathbb{Q}$.
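To see the mechanism on a toy instance (assumed here: $p_1=2$, $p_2=3$, $p_3=5$; the prime $P$ need not be "uber-huge" for small cases), one can search for such a prime $P$ with Euler's criterion:

```python
# Find a prime P modulo which 3 and 5 are quadratic residues but 2 is not
# (the residue pattern the argument above relies on).

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_qr(a, p):
    # Euler's criterion: a is a QR mod an odd prime p iff a^((p-1)/2) = 1 mod p.
    return pow(a, (p - 1) // 2, p) == 1

P = next(p for p in range(7, 1000, 2)
         if is_prime(p) and is_qr(3, p) and is_qr(5, p) and not is_qr(2, p))
print(P)   # 11: squares mod 11 are {1, 3, 4, 5, 9}
```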


calculus - Integration of $x\sin^6x \cos^4x$

It has been given as, $\int_0^\pi x\sin^6x\cos^4x dx$




I am able to solve $\int\sin^6x\cos^4x\,dx$, but there is an $x$ in the integrand as well. So could you please help me out?

A functional power series equation

I am interested in solving the following functional equation:




$$(1-w-zw)F(z,w)+zw^nF(z,w^2)=z-zw\,.$$



Here $n\geq2$ is a fixed integer and $F(z,w)$ is a power series in two variables with complex coefficients, which you can assume that converges in a neighborhood of the origin in $\mathbb C^2$.



My thoughts: Defining $G(j)=F(z,w^j)$ for $j\geq1$ and using the functional equation we obtain



$$G(j)-\underbrace{\,\frac{-zw^{jn}}{1-w^j-zw^j}\,}_{a_j}\,G(2j)=\underbrace{\,\frac{z-zw^j}{1-w^j-zw^j}\,}_{b_j}\,.$$



Dividing both sides by $\prod_{k=0}^\infty a_{2^kj}=c_j$ and defining $H(j)=G(j)/c_j$ we get




$$H(j)-H(2j)=\frac{b_j}{c_j}\,,$$



which implies



$$\begin{align*}
F(z,w)=&\,G(1)=c_1H(1)\\
=&\,c_1\sum_{r=0}^\infty\bigl[H(2^r)-H(2^{r+1})\bigr]+c_1\lim_{r\to\infty}H(2^r)\\
=&\,\sum_{r=0}^\infty b_{2^r}\frac{c_1}{c_{2^r}}+\lim_{r\to\infty}\frac{c_1}{c_{2^r}}G(2^r)\\
%=&\,\sum_{r=0}^\infty \frac{z-zw^{2^r}}{1-w^{2^r}-zw^{2^r}}\,\prod_{k=0}^{r-1}\frac{zw^{2^kn}}{1-w^{2^k}-zw^{2^k}}+\lim_{r\to\infty}\frac{c_1}{c_{2^r}}G(2^r)\\
\end{align*}$$




Since $F$ is holomorphic, for $|w|<1$ we have $\lim\limits_{r\to\infty}G(2^r)=F(z,0)=z$ by the functional equation. Defining



$$d_r=\frac{c_1}{c_{2^r}}=\prod_{k=0}^{r-1}\frac{-zw^{2^kn}}{1-w^{2^k}-zw^{2^k}}$$



we arrive at the following "explicit" formula, provided that the sequence $\boldsymbol{(d_r)_{r\geq1}}$ converges for $\boldsymbol{|z|,|w|}$ small:



$$F(z,w)=\sum_{r=0}^\infty b_{2^r}d_r+z\lim_{r\to\infty}d_r\,.$$



I personally believe that in fact $(d_r)_{r\geq1}$ converges. I wasn't too lazy to work out the partial solution above, but I don't feel like solving the infinite system of linear equations satisfied by the coefficients of $F$. Who knows? Perhaps some seasoned complex analyst can help me and obtain an explicit formula for both the series and the infinite product above.

Sunday 28 April 2019

Calculating Modular Arithmetic in the below way

How do I calculate:





$a \pmod c + b \pmod c$




using the modular arithmetic:




$(a+b) \pmod c = (a \pmod c + b \pmod c)\pmod c$





For example, assuming that $a=14, b=17$ and $c=5$;



$(a+b) \pmod c = (a \pmod c + b \pmod c) \pmod c$



$31 \pmod 5 = ( 4 + 2 ) \pmod 5$



I just want $6$ (that is, $4+2$) to be the output.



One way to calculate it is to perform $(a \pmod c + b \pmod c).$




But how do I calculate that value without performing in the above mentioned way but by using $(a+b) \pmod c$ ?
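A short illustration of why the question as posed cannot be answered: $(a+b) \bmod c$ discards exactly the information needed to recover $6$, since $6$ and $1$ differ by a multiple of $c$. You need the individual residues:

```python
a, b, c = 14, 17, 5

ra, rb = a % c, b % c     # the reduced residues 4 and 2
print(ra + rb)            # 6, the value asked for

# (a + b) % c only determines ra + rb up to a multiple of c, so 6 cannot
# be recovered from (a + b) % c = 1 alone:
assert (a + b) % c == (ra + rb) % c == 1
```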

Prove proposition on real numbers and uniqueness.

How would I go about proving the following proposition? Do I have to prove uniqueness, or that if $x^2 = r$, then $x = \sqrt r$?




Prove that given any $r \in \mathbb R$ with $r > 0$, the number $\sqrt r$ is unique in the sense that, if $x$ is a positive real number such that $x^2 = r$, then $x = \sqrt r$.


complex analysis - Contour integration for the improper integral



How to integrate
$\Large \int_{-\infty}^{\infty}\frac{x \sin (7x)}{(x^2-6x+18)} dx$ with contours?




So far I have
$\Large \int_{C}^{}\frac{ze^{7iz}}{(z-(3+3i)) (z-(3-3i))} dz$



With the $$ \large {Res}[f(z),3+3i] = \lim_{z\to 3+3i} \ (z-(3+3i))\frac{ze^{7iz}}{(z-(3+3i)) (z-(3-3i))} = \frac{(3+3i)e^{7i(3+3i)}}{(3+3i)-(3-3i)} = \frac{(3+3i)e^{(21i-21)}}{6i}$$



and $$ \large {Res}[f(z),3-3i] = \lim_{z\to 3-3i} \ (z-(3-3i))\frac{ze^{7iz}}{(z-(3+3i)) (z-(3-3i))} = \frac{(3-3i)e^{7i(3-3i)}}{(3-3i)-(3+3i)} = \frac{(3-3i)e^{(21i+21)}}{-6i}$$






For the upper half plane $$\large 2\pi i{Res}[f(z),3+3i] = \pi \frac{(3+3i)e^{(21i-21)}}{3} = \pi(1+i)e^{(21i-21)}$$




and for the lower half plane $$\large -2\pi i{Res}[f(z),3-3i] = \pi \frac{(3-3i)e^{(21i+21)}}{3} = \pi(1-i)e^{(21i+21)}$$



so
$$ \int_{-\infty}^{\infty}\frac{x \cos (7x)}{(x^2-6x+18)} dx + i\int_{-\infty}^{\infty}\frac{x \sin (7x)}{(x^2-6x+18)} dx + \int_{C}^{}\frac{ze^{7iz}}{(z-(3+3i)) (z-(3-3i))} dz = \pi(1+i)e^{(21i-21)} + \pi(1-i)e^{(21i+21)}$$



and equating the imaginary parts



$$\int_{-\infty}^{\infty}\frac{x \sin (7x)}{(x^2-6x+18)} dx = i\pi(e^{21i-21} - e^{21i+21})$$


Answer




When transforming to the $z$ plane we have



$$\sin 7x \to \sin 7z \ne e^{i7z}$$



However,



$$\sin 7x =\text{Im}\{e^{i7x}\}\quad\text{for real } x,$$



so that $\int_{-\infty}^{\infty}\frac{x \sin (7x)}{(x^2-6x+18)} dx$ becomes




$$\text{Im}\left(\oint_C \frac{z e^{i7z}}{z^2-6z+18}dz\right)$$



where the contour $C$ is composed of the real axis plus the "infinite semi-circle" that encloses the upper half plane. You use Jordan's lemma to show that the integration over the semi-circle does not contribute to the closed-contour integration. Then, the integral of interest is equal to $2\pi i$ times the imaginary part of the residue of



$$\frac{z e^{i7z}}{z^2-6z+18}$$



in the upper-half plane. Thus,



$$\int_{-\infty}^{\infty}\frac{x \sin (7x)}{(x^2-6x+18)} dx=\text{Im}\left(2\pi i \text{Res}\left( \frac{z e^{i7z}}{z^2-6z+18}\right) \right)$$




Working out the details, the integral is



$$\sqrt{2}\pi e^{-21}\sin(21+\pi/4)$$
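As a cross-check of this closed form (a numerical sketch, not part of the answer), one can evaluate $\text{Im}\left(2\pi i\,\text{Res}\right)$ at the pole $3+3i$ directly with complex arithmetic:

```python
import cmath, math

# Pole of z e^(i7z)/(z^2 - 6z + 18) in the upper half plane: z0 = 3 + 3i.
z0, z1 = 3 + 3j, 3 - 3j
res = z0 * cmath.exp(7j * z0) / (z0 - z1)      # residue at the simple pole z0

integral = (2j * math.pi * res).imag           # Im(2*pi*i * Res)
closed_form = math.sqrt(2) * math.pi * math.exp(-21) * math.sin(21 + math.pi / 4)
print(integral, closed_form)                   # agree to machine precision
```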


linear algebra - Finding a matrix given its characteristic polynomial



I am asked to find a $2 \times 2$ matrix with real, integer entries given its characteristic polynomial:




$$p^2 -5p +1.$$



This is what I have done thus far:



I equated the polynomial to zero, and the roots (eigenvalues) were found to be $\frac{5}{2} \pm \frac{\sqrt{21}}{2}$.



I named the matrix to be solved $C$,



so $\det(C) =$ product of eigenvalues $= 1$




$trace(C) =$ sum of eigenvalues $=5$



I then tried to find C by solving $T^{-1} \times D \times T$, where $D$ is a matrix whose diagonal entries are the eigenvalues solved above, and $T$ is any matrix who's determinant is non zero.



I used $T$ as a $2 \times 2$ being



$$\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.$$



I solved $T^{-1} \times D \times T$, and threw it in a calculator to make sure I made no algebraic mistakes, but the answer I received is wrong.




I appreciate any help, thank you.


Answer



Let's start off by looking at the characteristic polynomial of the $2 \times 2$ matrix



$A = \begin{bmatrix} 0 & 1 \\ -a & -b \end{bmatrix}: \tag 1$



$\det (A - \lambda I) = \det \left (\begin{bmatrix} -\lambda & 1 \\ -a & -b - \lambda \end{bmatrix} \right ) = \lambda^2 + b\lambda + a; \tag 2$



we see from (2) that we may always present a $2 \times 2$ matrix with given characteristic polynomial $\lambda^2 + b\lambda + a$ in the form $A$; for example, since the quadratic in the present problem is $\lambda^2 - 5\lambda + 1$ (tho' I have replaced $p$ with $\lambda$), we may take




$P = \begin{bmatrix} 0 & 1 \\ -1 & 5 \end{bmatrix}, \tag 3$



which may be easily checked:



$\det(P - \lambda I) = -\lambda(5 - \lambda) - 1 ( -1) = \lambda^2 - 5\lambda + 1. \tag 4$



The matrices $A$ and $P$ above are part of a general paradigm for constructing matrices with a given characteristic polynomial, and it extends to higher dimensions. Now, there are a very many matrices possessed of a given characteristic polynomial, since it is a similarity invariant; that is, the characteristic polynomials of $X$ and $S^{-1}XS$ are always the same; thus it behooves us to find a matrix of particularly simple, general form for a given polynomial. If



$q(\lambda) = \displaystyle \sum_{i=0}^n q_i \lambda^i, \; q_n = 1, \tag5$




we define $C(q(\lambda))$ to be the $n \times n$ matrix



$C(q(\lambda)) = \begin{bmatrix} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\
\vdots & \vdots & \ldots & \vdots & \vdots \\ -q_0 & -q_1 & -q_2 & \ldots & -q_{n - 1} \end{bmatrix}; \tag 6$



that is, $C(q(\lambda))$ has all $1$s on the superdiagonal, the negatives of the coefficients of $q(\lambda)$ on the $n$-th row, and $0$s everywhere else. We have



$C(q(\lambda)) - \lambda I = \begin{bmatrix} -\lambda & 1 & 0 & \ldots & 0 \\ 0 & -\lambda & 1 & \ldots & 0 \\
\vdots & \vdots & \ldots & \vdots & \vdots \\ -q_0 & -q_1 & -q_2 & \ldots & -q_{n - 1} - \lambda \end{bmatrix}; \tag 7$




it is easy to see, by expanding in minors along the $n$-th row, that



$\det(C(q(\lambda)) - \lambda I) = q(\lambda); \tag 8$



also, since a matrix and its transpose have equal determinants, the transpose of $C(q(\lambda))$, written $C^T(q(\lambda))$, also gives rise to the same characteristic polynomial. The matrices $C(q(\lambda))$ and $C^T(q(\lambda))$ are known as companion matrices for the polynomial $q(\lambda)$; the linked article has more of the story.
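The $2\times2$ construction above is easy to verify mechanically; here is a sketch in exact rational arithmetic, checking $\det(C-\lambda I)$ against the question's polynomial at sample points rather than symbolically:

```python
from fractions import Fraction

def companion(q):
    # q = [q_0, q_1, ..., q_{n-1}]: the non-leading coefficients of the
    # monic polynomial x^n + q_{n-1} x^(n-1) + ... + q_1 x + q_0.
    n = len(q)
    C = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n - 1):
        C[i][i + 1] = Fraction(1)       # 1s on the superdiagonal
    for j in range(n):
        C[n - 1][j] = -Fraction(q[j])   # negated coefficients on the last row
    return C

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def charpoly_at(C, lam):
    n = len(C)
    A = [[C[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]
    return det(A)

# The question's polynomial p^2 - 5p + 1, so q = [q_0, q_1] = [1, -5];
# for the 2x2 case det(C - lambda I) equals q(lambda) with no sign flip.
C = companion([1, -5])
for lam in map(Fraction, range(-3, 4)):
    assert charpoly_at(C, lam) == lam**2 - 5*lam + 1
print("det(C - lambda I) matches at 7 sample points")
```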


calculus - How to evaluate $\int_{0}^{\pi}\theta \ln\tan\frac{\theta}{2}\,\mathrm{d}\theta$



I have some trouble in how to evaluate this integral:
$$
\int_{0}^{\pi}\theta\ln\left(\tan\left(\theta \over 2\right)\right)

\,\mathrm{d}\theta
$$
I think it maybe has another form
$$
\int_{0}^{\pi}\theta\ln\left(\tan\left(\theta \over 2\right)\right)
\,\mathrm{d}\theta
=
\sum_{n=1}^{\infty}{1 \over n^{2}}
\left[\psi\left(n + {1 \over 2}\right) - \psi\left(1 \over 2\right)\right]
$$



Answer



Obviously we have
$$\int_{0}^{\pi }\theta \ln\tan\frac{\theta }{2}\mathrm{d}\theta =4\int_{0}^{\pi /2}x\ln \tan x\mathrm{d}x$$
then use the definition of the Lobachevsky function (see Table of Integrals, Series, and Products, 8th edition, by Gradshteyn and Ryzhik, page 900):
$$\mathrm{L}\left ( x \right )=-\int_{0}^{x}\ln\cos t\,\mathrm{d}t,~ ~ ~ ~ ~ -\frac{\pi }{2}\leq x\leq \frac{\pi }{2}$$
Hence we have
\begin{align*}
\int_{0}^{\pi /2}x\ln\tan x\mathrm{d}x &= x\left [ \mathrm{L}\left ( x \right )+\mathrm{L}\left ( \frac{\pi }{2}-x \right ) \right ]_{0}^{\pi /2}-\int_{0}^{\pi /2}\left [ \mathrm{L}\left ( x \right )+\mathrm{L}\left ( \frac{\pi }{2}-x \right ) \right ]\mathrm{d}x\\
&= \left ( \frac{\pi }{2} \right )^{2}\ln 2-2\int_{0}^{\pi /2}\mathrm{L}\left ( x \right )\mathrm{d}x
\end{align*}

use
$$\mathrm{L}\left ( x \right )=x\ln 2-\frac{1}{2}\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k-1}}{k^{2}}\sin 2kx$$
(Integrate the Fourier series of $\ln\cos x$ from $0$ to $x$.)



we can calculate
\begin{align*}
\int_{0}^{\pi /2}\mathrm{L}\left ( x \right )\mathrm{d}x&=\frac{1}{2}\left ( \frac{\pi }{2} \right )^{2}\ln 2-\frac{1}{2}\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k-1}}{k^{2}}\int_{0}^{\pi /2}\sin 2kx\mathrm{d}x \\
&= \frac{\pi ^{2}}{8}\ln 2-\frac{1}{2}\sum_{k=1}^{\infty }\frac{1}{\left ( 2k-1 \right )^{3}}
\end{align*}
So

\begin{align*}
\int_{0}^{\pi/2}x\ln\tan x\mathrm{d}x &=\frac{\pi ^{2}}{4}\ln 2-2\left [ \frac{\pi ^{2}}{8}\ln 2-\frac{1}{2}\sum_{k=1}^{\infty }\frac{1}{\left ( 2k-1 \right )^{3}} \right ] \\
&=\sum_{k=1}^{\infty }\frac{1}{\left ( 2k-1 \right )^{3}}\\
&=\sum_{k=1}^{\infty } \frac{1}{k^{3}}-\sum_{k=1}^{\infty }\frac{1}{\left ( 2k \right )^{3}}=\frac{7}{8}\zeta \left ( 3 \right )
\end{align*}
Hence the initial integral is
$$\boxed{\Large\color{blue}{\int_{0}^{\pi }\theta \ln\tan\frac{\theta }{2}\mathrm{d}\theta=\frac{7}{2}\zeta \left ( 3 \right )}}$$
in addition,as you mentioned
$$\int_{0}^{\pi }\theta \ln\tan\frac{\theta }{2}\mathrm{d}\theta=\color{red}{\sum_{n=1}^{\infty }\frac{1}{n^{2}}\left [ \psi \left ( n+\frac{1}{2} \right )-\psi \left ( \frac{1}{2} \right ) \right ]=\frac{7}{2}\zeta \left ( 3 \right )}$$
or

$$\sum_{n=1}^{\infty }\frac{1}{n^{2}}\psi \left ( n+\frac{1}{2} \right )=\frac{7}{2}\zeta \left ( 3 \right )-\left ( \gamma +2\ln 2 \right )\frac{\pi ^{2}}{6}$$
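The closed form $\frac{7}{2}\zeta(3)\approx 4.2072$ can be checked numerically; here is a midpoint-rule sketch (midpoints avoid the endpoint singularities of the integrand):

```python
import math

def lobachevsky_integral(n=400_000):
    # Midpoint rule for the integral of theta * ln(tan(theta/2)) over (0, pi).
    h = math.pi / n
    return sum((k + 0.5) * h * math.log(math.tan((k + 0.5) * h / 2))
               for k in range(n)) * h

zeta3 = sum(1.0 / k**3 for k in range(1, 200_000))
print(lobachevsky_integral(), 3.5 * zeta3)   # both approximately 4.2072
```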


Telescoping trigonometric series

How would you evaluate the sum of this series



$$\sin x\,\sec 3x + \sin 3x\,\sec 3^2x + \sin 3^2x\,\sec 3^3x + \cdots \text{ up to } n \text{ terms.}$$



I can tell it is a telescoping series, but I am not able to get the one step that will cancel all the terms.
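The standard key step for such sums (an assumption here, since the post does not state it) is the identity $\sin t\,\sec 3t=\frac{1}{2}(\tan 3t-\tan t)$, which makes the sum telescope to $\frac{1}{2}(\tan 3^n x-\tan x)$. A quick numeric check:

```python
import math

# Candidate identity: sin(3^(k-1) x) * sec(3^k x) = (tan(3^k x) - tan(3^(k-1) x)) / 2,
# so the n-term sum collapses to (tan(3^n x) - tan(x)) / 2.

def lhs_sum(x, n):
    return sum(math.sin(3 ** (k - 1) * x) / math.cos(3 ** k * x)
               for k in range(1, n + 1))

def telescoped(x, n):
    return (math.tan(3 ** n * x) - math.tan(x)) / 2

for x in (0.1, 0.2, 0.3):
    for n in (1, 2, 3, 4):
        assert abs(lhs_sum(x, n) - telescoped(x, n)) < 1e-8
print("telescoped form matches the direct sum")
```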

Saturday 27 April 2019

real analysis - Compact $K\subset A$ such that $\lambda(K) = \lambda(A)/2$



Let $A\subset \mathbb{R}$ be a (Lebesgue) measurable set of finite measure. Using the fact that the function $f:\mathbb{R}\rightarrow \mathbb{R}$, $$f(x)=\lambda(A\cap [-x,x]) $$
is continuous, we can find a bounded subset $K\subset A$ such that $\lambda(K) = \lambda(A) / 2$.




Is it possible to choose $K$ to be compact as well?



($\lambda$ denotes the Lebesgue measure)


Answer



Assume $\lambda(A)>0$ (otherwise just take $K=\emptyset$). By the inner regularity of Lebesgue measure, there is a compact set $K_0 \subseteq A$ with $\lambda(K_0)>\lambda(A)/2$. Define
$$g(x)=\lambda(K_0 \cap [-x,x])$$
By the Intermediate Value Theorem, there exists an $x$ such that $g(x)=\lambda(A)/2$, and then $K:=K_0 \cap [-x,x]$ will be a compact set satisfying $\lambda(K)=\lambda(A)/2$, as desired.


calculus - Identification of a curious function

During computation of some Shapley values (details below), I encountered the following function:
$$
f\left(\sum_{k \geq 0} 2^{-p_k}\right) = \sum_{k \geq 0} \frac{1}{(p_k+1)\binom{p_k}{k}},
$$
where $p_0 > 0$ and $p_{k+1} > p_k$ for all $k$. In other words, the input to $f$ is the binary expansion of a real number in the range $[0,1]$, and the $p_k$ correspond to the positions of $1$s in the binary expansion.




For example, $f(2^{-t}) = 1/(t+1)$, so $f(1/2) = 1/2$, $f(1/4) = 1/3$ and so on. More complicated examples are $f(5/8) = f(2^{-1} + 2^{-3}) = 1/2 + 1/(4\cdot 3) = 7/12$ and
$$ f(2/3) = f\left(\sum_{k \geq 0}2^{-(2k+1)}\right) = \sum_{k \geq 0} \frac{1}{(2k+2)\binom{2k+1}{k}} = \frac{\pi}{\sqrt{27}}.$$
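These dyadic examples are easy to check mechanically; a short sketch in exact rational arithmetic, using the defining formula above:

```python
from fractions import Fraction
from math import comb

def f_binary(ps):
    # ps lists the positions p_0 < p_1 < ... of the 1-bits of t = sum 2^(-p_k).
    return sum(Fraction(1, (p + 1) * comb(p, k)) for k, p in enumerate(ps))

assert f_binary([1]) == Fraction(1, 2)      # f(1/2) = 1/2
assert f_binary([2]) == Fraction(1, 3)      # f(1/4) = 1/3
assert f_binary([1, 3]) == Fraction(7, 12)  # f(5/8) = 7/12
print("matches the worked examples")
```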



The function $f$ is a continuous increasing function satisfying $f(0) = 0$, $f(1) = 1$, and $f(1-t) = 1-f(t)$ for $t \in [0,1]$. It has vertical asymptotes at dyadic points.



Here is a plot of $f$ (plot omitted).




Is the function $f$ known?








Here is where $f$ came from. Let $n \geq 1$ be an integer and let $t \in [0,1]$. For a permutation $\pi$ of the numbers $\{ 2^{-m} : 0 \leq m \leq n-1 \}$ satisfying $\pi^{-1}(1) = i$, we say that $\pi$ is pivotal if $\sum_{j

For example, take $n = 4$. The permutation $1/8,1/2,1,1/4$ is pivotal for $t \in (5/8,1]$. For all $n \geq 2$ we have $f_n(1/2) = 1/2$, since $\pi$ is pivotal iff $1$ appears before $1/2$ in $\pi$. The general formula for $f$ is derived in a similar way.



We leave it to the reader to figure out how $f_n$ measures some Shapley value. The functions $f_n$ are step functions with steps of length $1/2^{n-1}$. They are left-continuous, and are equal to $f$ at the breakpoints.

Friday 26 April 2019

Series with $n$th term having integer raised to the power of $n$ in the denominator




$$

1+\frac{4}{6}+\frac{4\cdot5}{6\cdot9}+\frac{4\cdot5\cdot6}{6\cdot9\cdot12}+\cdots
$$




I could reduce it to the $n$th term being $\dfrac{(n+1)\cdot(n+2)}{n!\cdot3^n}$.
It took me an hour just to get to this, but I am now stuck. Please help.


Answer



$$1+\frac{4}{6}+\frac{4\cdot5}{6\cdot9}+\frac{4\cdot5\cdot6}{6\cdot9\cdot12}+\cdots$$
$$=1+\sum\limits_{n=1}^\infty\frac{(n+3)!}{2\times3\times3^n\times(n+1)!}$$

$$=1+\frac{1}{2}\sum\limits_{n=1}^\infty\frac{(n+2)\times(n+3)}{3^{n+1}}$$



By induction, we can show that for $n\ge7$, $0<\frac{(n+2)\times(n+3)}{3^{n+1}}<\frac{1}{n^2}$, and hence the series is convergent.



In fact, the sum is $1+\frac{1}{2}\cdot\frac{11}{4}=\frac{19}{8}$
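The value $\frac{19}{8}$ is easy to confirm by summing the answer's general term in exact arithmetic:

```python
from fractions import Fraction

total = Fraction(1)                      # the leading term 1
for n in range(1, 60):
    # general term from the answer: (n+2)(n+3) / (2 * 3^(n+1))
    total += Fraction((n + 2) * (n + 3), 2 * 3 ** (n + 1))

print(float(total))                      # approximately 2.375 = 19/8
```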


discrete mathematics - Proving the sum of the first $n$ natural numbers by induction




I am currently studying proving by induction but I am faced with a problem.



I need to solve by induction the following question.



$$1+2+3+\ldots+n=\frac{1}{2}n(n+1)$$



for all $n \geq 1$.



Any help on how to solve this would be appreciated.







This is what I have done so far.



Show truth for $N = 1$



Left Hand Side = 1



Right Hand Side = $\frac{1}{2} (1) (1+1) = 1$




Suppose truth for $N = k$



$$1 + 2 + 3 + ... + k = \frac{1}{2} k(k+1)$$



Proof that the equation is true for $N = k + 1$



$$1 + 2 + 3 + ... + k + (k + 1)$$



Which is Equal To




$$\frac{1}{2} k (k + 1) + (k + 1)$$



This is where I'm stuck, I don't know what else to do. The answer should be:



$$\frac{1}{2} (k+1) (k+1+1)$$



Which is equal to:



$$\frac{1}{2} (k+1) (k+2)$$




Right?



By the way sorry about the formatting, I'm still new.


Answer



Basic algebra is what's causing the problems: you reached the point



$$\frac{1}{2}K\color{red}{(K+1)}+\color{red}{(K+1)}\;\;\;\:(**)$$



Now just factor out the red terms:




$$(**)\;\;\;=\color{red}{(K+1)}\left(\frac{1}{2}K+1\right)=\color{red}{(K+1)}\left(\frac{K+2}{2}\right)=\frac{1}{2}(K+1)(K+2)$$
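As a sanity check alongside the induction (an empirical check, not a proof), the closed form can be verified for many $n$:

```python
# Empirical check of 1 + 2 + ... + n = n(n+1)/2:
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula holds for n = 1 .. 199")
```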


abstract algebra - Example check: two algebraically closed fields with one a subset of the other




The problem asks to find two different algebraically closed fields $\mathcal{E}$ and $\mathcal{F}$ with $\mathcal{E} \subseteq \mathcal{F}$.



We have not done a whole lot of stuff with algebraically closed and in fact the only one that came to mind was $\mathbb{C}$.



Does $\mathbb{C}(x)$, the field of rational functions in $x$ with coefficients from $\mathbb{C}$ work? Our definition of an algebraically closed field (which I'm not sure if there are other standard definitions or not) is that every polynomial with coefficients from the field must have a root in the field. Since the set of polynomials in $\mathbb{C}(x)$ is just $\mathbb{C}[x]$ it seems like my example is algebraically closed as well. Does this in fact work or am I missing something?



Thanks




Edit: I did notice a flaw in my thinking. I need to think of polynomials with coefficients in $\mathbb{C}(x)$ not polynomials contained in $\mathbb{C}(x)$.




Answer



The algebraic closure of $\Bbb Q$, namely $\overline{\Bbb Q} \subset \Bbb C$, gives an example answering your main question.



More generally, for every uncountable cardinality there is a unique algebraically closed field of a given characteristic (up to isomorphism). So while $\Bbb C \subset \overline{\Bbb C(X)}$ (the algebraic closure of $\Bbb C(X)$) is another pair of examples, the two fields are isomorphic although they are not equal.



(The uniqueness requires the axiom of choice.)


Thursday 25 April 2019

real analysis - Let $f(x)$ be Riemann-integrable, $F(x)=\int_a^x f(t)dt.$ Then $F(x)$ is differentiable, and $F'(x)=f(x)$ almost everywhere.


What is wrong with the following statement:



Let $f(x)$ be Riemann-integrable, $$F(x)=\int_a^x f(t)dt.$$ Then $F(x)$ is differentiable, and $F'(x)=f(x)$ almost everywhere.




I think the statement is true. Because $f(x)$ is Riemann-integrable, it is continuous almost everywhere. Then by the Second Fundamental Theorem of Calculus, $F(x)$ is differentiable almost everywhere, and $F'(x)=f(x)$ almost everywhere.

calculus - What is the practical difference between a differential and a derivative?

I ask because, as a first-year calculus student, I am running into the fact that I didn't quite get this down when understanding the derivative:



So, a derivative is the rate of change of a function with respect to changes in its variable, this much I get.



Thing is, definitions of 'differential' tend to be in the form of defining the derivative and calling the differential 'an infinitesimally small change in $x$', which is fine as far as it goes, but then why bother even defining it formally outside of needing it for derivatives?



And THEN, the bloody differential starts showing up as a function in integrals, where it appears to be ignored part of the time, then functioning as a variable the rest.



Why do I say 'practical'? Because when I asked for an explanation from other mathematician parties, I got one involving the graph of the function and how, given a right-angle triangle, a derivative is one of the other angles, where the differential is the line opposite the angle.




I'm sure that explanation is correct as far as it goes, but it doesn't tell me what the differential DOES, or why it's useful, which are the two facts I need in order to really understand it.



Any assistance?

geometry - Can squares be replaced by cycloids in the Pythagorean Theorem?

As everyone knows, for a right triangle, the square on the hypotenuse is equal to the sum of the squares on the legs.



What happens if we replace the squares with some other geometrical figure, such as one arch of a cycloid? Does the Pythagorean relation still hold? That is, is the area under one arch of a cycloid that exactly fits the hypotenuse equal to the sum of the areas under the arches of the cycloids that exactly fit the legs?



It relies on the fact that area varies as square of scaling ratio.

Wednesday 24 April 2019

real analysis - If $f:[0,1]\to\Bbb R$ is continuous and $f(0)=f(1)$ show that $\forall n\in\Bbb N,\exists x_n,y_n\in[0,1]:|x_n-y_n|=1/n$ and $f(x_n)=f(y_n)$






If $f:[0,1]\to\Bbb R$ is continuous and $f(0)=f(1)$ show that $\forall n\in\Bbb N,\exists x_n,y_n\in[0,1]:|x_n-y_n|=1/n$ and $f(x_n)=f(y_n)$




I am supposed to use the intermediate value theorem, not something more advanced (like any kind of derivative or so).




The case $n=2$ is the only one I can prove. It is easy because if we define the function $h(x)=f(x+1/2)-f(x)$ (which is continuous, being a difference of two continuous functions), then we can see that $h(0)\,h(1/2)\le0$, and by the intermediate value theorem there exists a point $c$ such that $h(c)=0$ and $f(c+1/2)=f(c)$.



But for the general case $1/n$ I'm lost... I was trying to see something with graphs, but it seems too complicated. I feel that one possible solution could come from some recursive mechanism involving $f(x)$ and $h_n(x)$, but that seems too complicated and I don't think it is the real solution.



Can you help me please, leaving some hint or so? Thank you very much.


Answer



The idea is still to use the intermediate value theorem. For fixed $n$, setting $g(x):=f(x+\frac1n)-f(x)$, we need to show that $g$ has a zero in the interval $[0,\frac{n-1}{n}]$. To do this, we consider the value of $g$ at $0,\frac{1}{n},\dots\frac{n-1}{n}$. Note that
$$\sum_{k=0}^{n-1}g(\frac{k}{n})=\sum_{k=0}^{n-1}f(\frac{k+1}{n})-f(\frac{k}{n})=f(1)-f(0)=0.$$
Now if there exists $0\leq k\leq n-1$ such that $g(\frac{k}{n})=0$, then we are done. Otherwise, $g(\frac{k}{n})$ is nonzero for all $0\leq k\leq n-1$; then there must exist $0\leq k_1\leq n-1$ and $0\leq k_2\leq n-1$ such that $g(\frac{k_1}{n})$ and $g(\frac{k_2}{n})$ have opposite signs, and thus we see from the intermediate value theorem that $g$ must have a zero between $\frac{k_1}{n}$ and $\frac{k_2}{n}$.
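The proof strategy translates directly into a search procedure. Here is a numerical illustration with a hypothetical sample $f$ (any continuous $f$ with $f(0)=f(1)$ works):

```python
import math

def find_pair(f, n, tol=1e-12):
    # Following the answer: scan g(k/n) for a zero or a sign change,
    # then bisect (the intermediate value theorem step).
    g = lambda x: f(x + 1.0 / n) - f(x)
    ks = [k / n for k in range(n)]
    for a, b in zip(ks, ks[1:] + [None]):
        if abs(g(a)) < tol:
            return a, a + 1.0 / n
        if b is not None and g(a) * g(b) < 0:
            lo, hi = a, b
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return lo, lo + 1.0 / n
    raise AssertionError("no pair found")

# A sample continuous f with f(0) = f(1) (an arbitrary choice):
f = lambda x: math.sin(2 * math.pi * x) + x * (1 - x)
x, y = find_pair(f, 5)
print(x, y, f(x), f(y))   # |x - y| = 1/5 and f(x), f(y) agree
```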



sequences and series - why does math "break" when it comes to certain topics?

For example,



$1 + 2 + 3 + ... = -1/12$



or $2 + 4 + 8 + ... = -1$




What caused it to break? Are there some properties of real numbers that do this? Sorry if it's too vague because I was looking at group theory and there were things that didn't hold.



Another thing is why do some problems become ill-posed? For example, Wilkinson Polynomial, why do roots break by having too many multiplication of binomials?

real analysis - Does there exist a function $f:[0,1] to[0,1]$ such its graph is dense in $[0,1]times[0,1]$?

Does there exist a function $f:[0,1]\to [0,1]$ such that the graph of $f$ is dense in $[0,1]\times[0,1]$?



Not necessarily continuous.

matrices - Determinant of $N \times N$ matrix



I have the following matrix which I need to find the determinant of. I am not too sure of how to proceed. Here is my working so far.




\begin{equation}
\det(\boldsymbol J (E_i) - \lambda \mathbb I ) =
\begin{pmatrix}
A_1 -\lambda & \dots & \phi_{1i} & \dots & 0 \\
\vdots & \ddots & \vdots & & \vdots \\
-c_{i1} & \dots & \color{red}{A_i -\lambda} & \dots & -c_{iN}\\
\vdots & & \vdots & \ddots & \vdots \\
0 & \dots & \phi_{Ni} & \dots & A_N -\lambda
\end{pmatrix}

\end{equation}
The matrix has a diagonal given by $A_j -\lambda$. From the central red element there are vertically and horizontally non-zero elements. All other elements are zero exactly.



I am really not sure how to find the determinant from here and any help or pointers would be greatly appreciated!



Edit



For instance, if $N$ were equal to $4$, we might have the following case if $i=3$,
\begin{equation}
\det(\boldsymbol J (E_i) - \lambda \mathbb I ) =

\begin{pmatrix}
D_1 & 0 & V_1 & 0 \\
0 & D_2 & V_2&0 \\
H_1 & H_2 & D_3 & H_4\\
0 & 0 & V_4 & D_4 \\
\end{pmatrix}
\end{equation}


Answer



Edit. Let $B$ be the submatrix obtained by deleting the $i$-th row and $i$-th column of the given matrix. Then the required determinant is the product of the (scalar) Schur complement of $B$ and $\det B$, i.e.
$$
\left(A_i-\lambda + \sum_{k\ne i}\frac{c_{ik}\phi_{ki}}{A_k-\lambda}\right) \prod_{k\ne i}(A_k-\lambda).
$$
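To sanity-check the Schur-complement formula on the $N=4$, $i=3$ pattern from the question, here is a sketch with hypothetical numeric entries, writing $H_k$ for the row entries (the $-c_{ik}$) and $V_k=\phi_{ki}$ for the column entries, so the scalar factor appears as $D_3-\sum_k H_kV_k/D_k$:

```python
from fractions import Fraction as F

# Hypothetical numeric instance of the N = 4, i = 3 example above.
D1, D2, D3, D4 = F(2), F(3), F(5), F(7)   # diagonal entries A_k - lambda
H1, H2, H4 = F(1), F(-2), F(4)            # row-i entries
V1, V2, V4 = F(-3), F(1), F(2)            # column-i entries

M = [[D1, 0, V1, 0],
     [0, D2, V2, 0],
     [H1, H2, D3, H4],
     [0, 0, V4, D4]]

def det(A):
    # Laplace expansion along the first row.
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

schur = (D3 - (H1 * V1 / D1 + H2 * V2 / D2 + H4 * V4 / D4)) * (D1 * D2 * D4)
assert det(M) == schur
print(schur)
```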


Monday 22 April 2019

cardinals - Trouble understanding cardinality




Hi guys, I am having trouble understanding cardinality. I am given this practice question.



1) Use the Cantor–Schröder–Bernstein Theorem to prove that the intervals $(0,1)$ and $[0,1]$ have the same cardinality.



My attempt :



I know the CSB theorem is



Let $A, B$ be sets; if $f: A \rightarrow B$ and $g: B \rightarrow A$ are both injections, then there exists a bijection from $A$ to $B$




I have to show that $|(0,1)| = |[0,1]| $ so we need $f: (0,1) \rightarrow [0,1]$ and we can set $f(x) = x$ and also we need $g: [0,1] \rightarrow (0,1)$ and we can let $g(x) = \frac{1}{2}x + \frac{1}{4}$



but after that I get stuck; I know I have to set these equations up but get confused afterwards.



Please help out any help or hints will be greatly appreciated



Thank you


Answer



You’re done as soon as you verify that the maps $f$ and $g$ are injections, that the range of $f$ is a subset of $[0,1]$, and that the range of $g$ is a subset of $(0,1)$: at that point you have an injection $f:(0,1)\to[0,1]$ and an injection $g:[0,1]\to(0,1)$, and the CSB theorem tells you outright that there exists a bijection $h:[0,1]\to(0,1)$ and hence that $\left|[0,1]\right|=\left|(0,1)\right|$.



Sunday 21 April 2019

calculus - Finding $\lim_{x\to0} \frac{\sin(x^2)}{\sin^2(x)}$ with Taylor series




Evaluate $$\lim_{x\to0} \frac{\sin(x^2)}{\sin^2(x)}.$$



Using L'Hospital twice, I found this limit to be $1$. However, since the Taylor series expansions of $\sin(x^2)$ and $\sin^2(x)$ tell us that both of these approach $0$ like $x^2$, I'm wondering if we can argue that the limit must be 1 via the Taylor series formal way?


Answer



$$\lim_{x\to0} \frac{\sin(x^2)}{\sin^2(x)}=\lim_{x\to0}\left(\frac{x}{\sin x}\right)^2 \frac{\sin(x^2)}{x^2}=1$$
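As a quick numerical sanity check (not a substitute for the Taylor or L'Hospital argument), the ratio is already close to $1$ for small $x$:

```python
import math

def ratio(x):
    # sin(x^2) / sin^2(x); both numerator and denominator behave like x^2 near 0.
    return math.sin(x * x) / math.sin(x) ** 2

# The values approach 1 as x -> 0.
samples = [ratio(10.0 ** -k) for k in range(1, 6)]
```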


sequences and series - Compute $\lim_{n \to +\infty} n^{-\frac12 \left(1+\frac{1}{n}\right)} \left(1^1 \cdot 2^2 \cdot 3^3 \cdots n^n \right)^{\frac{1}{n^2}}$


How to compute
$$\displaystyle \lim_{n \to +\infty} n^{-\dfrac12 \left(1+\dfrac{1}{n}\right)} \left(1^1\cdot 2^2 \cdot 3^3 \cdots n^n \right)^{\dfrac{1}{n^2}}$$
I'm interested in more ways of computing limit for this expression





My proof:



Let $u_n$ be that sequence. We have:



\begin{eqnarray*}
\ln u_n &=& -\frac{n+1}{2n}\ln n + \frac{1}{n^2}\sum_{k=1}^n k\ln k\\
&=& -\frac{n+1}{2n}\ln n + \frac{1}{n^2}\sum_{k=1}^n k\ln \frac{k}{n}+\frac{1}{n^2}\sum_{k=1}^n k\ln n\\
&=& \frac{1}{n^2}\sum_{k=1}^n k\ln \frac{k}{n}\\

&=& \frac{1}{n}\sum_{k=1}^n \frac{k}{n}\ln \frac{k}{n}\\
&\to&\int_0^1 x\ln x\,dx = -1/4
\end{eqnarray*}



Therefore the limit is $e^{-\frac{1}{4}}$
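A numerical check of the Riemann-sum computation (assuming nothing beyond the formula for $\ln u_n$ derived above): for moderately large $n$, $\ln u_n$ is already very close to $-\frac14$.

```python
import math

def log_u(n):
    # ln u_n = -((n+1)/(2n)) ln n + (1/n^2) * sum_{k=1}^n k ln k
    s = sum(k * math.log(k) for k in range(1, n + 1))
    return -(n + 1) / (2 * n) * math.log(n) + s / n ** 2

# log_u(2000) is very close to -0.25, so u_n -> e^{-1/4} ≈ 0.7788.
approx = log_u(2000)
```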

real analysis - Show that the sequence $left(frac{2^n}{n!}right)$ has a limit.





Show that the sequence $\left(\frac{2^n}{n!}\right)$ has a limit.



I initially inferred that the question required me to use the definition of the limit of a sequence, because a sequence is convergent if it has a limit: $$\left|\frac{2^n}{n!} - L \right| < \epsilon$$



I've come across approaches that use the squeeze theorem but I'm not sure whether its applicable to my question. While I have found answers on this site to similar questions containing the sequence, they all assume the limit is $0$.




I think I need to show $a_n \geq a_{n+1},\forall n \geq 1$, so



$$a_{n+1} = \frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\cdot\frac{2^{n}}{n!} \le \frac{2^{n}}{n!} = a_n \quad \text{for } n \ge 1.$$

A monotonic decreasing sequence that is bounded below is convergent, and this particular sequence is bounded below by zero since its terms are positive. I'm not sure whether or not I need to do more to answer the question.


Answer



It is easy to prove that
$$\sum_{n=1}^{\infty}\frac{2^n}{n!} \lt \infty$$
e.g. with the ratio test you have
$$\frac{a_{n+1}}{a_n}=\frac{n!}{2^n}\cdot\frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\longrightarrow 0$$

Then $\lim\limits_{n\rightarrow\infty}\frac{2^n}{n!}$ has to be $0$






If you do not know anything about series you might assert, that $n!>3^n$ if $n\gt N_0$ for some fixed $N_0\in\mathbb{N}$. Therefore you have for those $n$
$$0<\frac{2^n}{n!} \lt\frac{2^n}{3^n}=\left(\frac{2}{3}\right)^n\longrightarrow 0$$
Thus $$\lim\limits_{n\longrightarrow\infty} \frac{2^n}{n!} =0 $$
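A quick numerical illustration of both arguments above (the vanishing ratio and the vanishing terms):

```python
import math

# a_n = 2^n / n! for n = 1..40
terms = [2.0 ** n / math.factorial(n) for n in range(1, 41)]
# ratios[i] = a_{i+2} / a_{i+1} = 2 / (i + 2), which tends to 0
ratios = [terms[i + 1] / terms[i] for i in range(len(terms) - 1)]
```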


algebra precalculus - Prove that $(n-m) mid (n^r - m^r)$



In respect to a larger proof I need to prove that $(n-m) \mid (n^r - m^r)$ (where $\mid$ means divides, i.e., $a \mid b$ means that $b \bmod a = 0$). I have played around with this for a while now but to no avail, and would be grateful if you could help me.


Answer




There are several ways of proving this. For example, one can use the binomial theorem $(a+b)^r=\sum_{k=0}^r\binom{r}{k}a^kb^{r-k}$ with $a=n-m$ and $b=m$. (The change of variables from $n,m$ to $a,b$ is in itself a good idea in problems like this, since it tends to reduce the size of some of the expressions.)



Another way is to use the identity $$n^r-m^r=(n-m)\sum_{k=0}^{r-1}n^{r-1-k}m^k=(n-m)(n^{r-1}+n^{r-2}m+\dots+nm^{r-2}+m^{r-1}).$$



Both of these identities are proved by induction on $r$.



Or, one can do directly an inductive argument: The result is clear if $r=0$, since $n^r-m^r=0$. Given the result for $r$, prove it for $r+1$ using, for example, that $n^{r+1}-m^{r+1}=n(n^r-m^r)+m^r(n-m)$.
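A brute-force check of the statement over a small range (purely empirical; the proofs are the identities above):

```python
# Verify (n - m) | (n^r - m^r) for many small triples with n > m.
for n in range(2, 15):
    for m in range(1, n):
        for r in range(0, 8):
            assert (n ** r - m ** r) % (n - m) == 0
checked = True
```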


Friday 19 April 2019

number theory - How to calculate mod inverse



Given a number set of integers $\mathbb{Z}$, how do I find the inverse of a given number?



I am trying to test an algorithm to extract the $k$ and $x$ values from the Elgamal Signature algorithm given that $k$ is repeated.



What I have is
$k \equiv (m_1 - m_2)\times(s_1 - s_2)^{-1} \pmod{p - 1}$
given $k$ is used twice.




I am not sure how to calculate the mod inverse though?
Is the above formula the same thing as $\big((m_1 - m_2) \bmod (p-1)\big) \times \big((s_1 - s_2)^{-1} \bmod (p-1)\big) \bmod (p-1)$?



I am not sure if it is any different since I am doing a mod inverse.



PS. I am a programmer, not a mathematician so please elaborate.


Answer



Yes, the two formulas you wrote in the question give the same output.




More generally, as Bill Dubuque points out in the comments, you can usually just take mods at each step, instead of doing the whole computation and then modding at the end. However, exponentiation is a notable exception; you can reduce the base but generally not the exponent
$$ a^k \bmod n \quad=\quad (a\bmod n)^k \bmod n \qquad\neq\qquad (a\bmod n)^{(k \bmod n)}.$$
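Since the asker is a programmer: here is a sketch of computing a modular inverse in Python via the extended Euclidean algorithm (Python 3.8+ can also do `pow(a, -1, n)` directly). The numbers below are made up for illustration, not real ElGamal parameters.

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    # Inverse of a modulo n; it exists only when gcd(a, n) == 1.
    g, x, _ = ext_gcd(a % n, n)
    if g != 1:
        raise ValueError("no inverse: gcd(a, n) != 1")
    return x % n

# k = (m1 - m2) * (s1 - s2)^{-1} mod (p - 1), with toy numbers:
p = 23
m1, m2, s1, s2 = 15, 7, 19, 4
k = ((m1 - m2) * mod_inverse(s1 - s2, p - 1)) % (p - 1)
# Reducing (m1 - m2) mod (p - 1) first gives the same answer.
```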


Thursday 18 April 2019

real analysis - finding $\lim_{n \rightarrow +\infty}\frac{n}{2^n} = ?$





Finding the limit below..:



$$\lim_{n \rightarrow +\infty}\frac{n}{2^n}= ?$$



I really think it's $0$. But intuitively it looks like infinity over infinity; how can that be? An indeterminate form? Thanks


Answer



Intuitively, $2^n$ grows much faster than $n$.



Note that by the Binomial Theorem, $2^n=(1+1)^n=1+n+\frac{n(n-1)}{2}+\cdots$.




In particular, if $n\gt 1$, we have $2^n\ge \dfrac{n(n-1)}{2}$.



Thus $0\lt \dfrac{n}{2^n}\le \dfrac{2}{n-1}$. But $\frac{2}{n-1}$ approaches $0$ as $n\to\infty$, so by Squeezing, so does $\dfrac{n}{2^n}$.



Another way: You can use L'Hospital's Rule to show $\lim_{x\to\infty}\frac{x}{2^x}=0$.


divisibility - Gcd number theory proof: $(a^n-1,a^m-1)= a^{(m,n)}-1$

Prove that if $a>1$ then $(a^n-1,a^m-1)= a^{(m,n)}-1$




where $(a,b) = \gcd(a,b)$



I've seen one proof using the Euclidean algorithm, but I didn't fully understand it because it wasn't very well written.
I was thinking something along the lines of letting $d= a^{(m,n)} - 1$ and then showing
$d\mid a^m-1$ and $d\mid a^n-1$, and then that if $c\mid a^m-1$ and $c\mid a^n-1$, then $c\le d$.



I don't really know how to show this though...



I can't seem to be able to get $d \cdot k = a^m-1$ for some integer $k$.




Any help would be beautiful!

Wednesday 17 April 2019

real analysis - Improper integral $\int_0^\infty \frac{\sin(x)}{x}dx$ - Showing convergence.





1)Show that for all $n\in\mathbb{N}$ the following is true:



$$\int_{\pi}^{n\pi}|\frac{\sin(x)}{x}|dx\geq C\cdot \sum_{k=1}^{n-1}\frac{1}{k+1}$$



for a constant $C>0$ and conclude that the improper integral $\int_0^\infty \frac{\sin(x)}{x}dx$ isn't absolutely convergent.



2) Show that the improper integral $\int_0^\infty \frac{1-\cos(x)}{x^2}dx$ is absolutely convergent. (The integrand is to be extended continuously at $x=0$.)



3)Using 2), show that the improper integral $\int_0^\infty \frac{\sin(x)}{x}dx$ is convergent.





We started discussing improper integrals in class and our prof showed us how some can be solved and some can't.



Anyways,



Here were my ideas so far:



1) I thought about computing the integral and seeing whether the result gives me any idea for showing the inequality. But I couldn't solve the integral (neither by hand nor with the help of an integral calculator), so I don't know what to do next.




2) To be honest, I'm totally lost here. No idea how to approach it.



3)Well, since I didn't solve 2).



Sorry for my lack of work here, but this topic just doesn't want to stick with me.


Answer



1). Since
$$
\int_{n\pi}^{(n+1)\pi}\left|\sin{x}\right|dx=2
$$

there is
\begin{align}
\int_{\pi}^{(n+1)\pi}\left|\dfrac{\sin{x}}{x}\right|dx&=\sum_{k=1}^n\int_{k\pi}^{(k+1)\pi}\left|\dfrac{\sin{x}}{x}\right|dx
\\
&\geqslant\sum_{k=1}^n\dfrac1{(k+1)\pi}\int_{k\pi}^{(k+1)\pi}\left|\sin{x}\right|dx
\\
&=\dfrac{2}{\pi}\sum_{k=1}^n\dfrac1{k+1}
\end{align}
So $\int_{0}^{\infty}\dfrac{\sin{x}}{x}dx$ is not absolutely convergent.




2). Since
\begin{align}
\int_{\pi}^{(n+1)\pi}\dfrac{1-\cos{x}}{x^2}dx&=\sum_{k=1}^n\int_{k\pi}^{(k+1)\pi}\dfrac{1-\cos{x}}{x^2}dx
\\
&\leqslant\sum_{k=1}^n\dfrac1{k^2\pi^2}\int_{k\pi}^{(k+1)\pi}(1-\cos{x})dx
\\
&=\dfrac1{\pi}\sum_{k=1}^n\dfrac1{k^2}
\end{align}
So $\int_{0}^{\infty}\dfrac{1-\cos{x}}{x^2}dx$ is absolutely convergent.




3).By partial integration, there is
\begin{align}
\int_{0}^{\infty}\dfrac{1-\cos{x}}{x^2}dx&=-\dfrac{1-\cos{x}}{x}\Bigg|_0^{\infty}+\int_{0}^{\infty}\dfrac{\sin{x}}{x}dx
\\
&=-\dfrac{2\sin^2{\dfrac{x}{2}}}{x}\Bigg|_0^{\infty}+\int_{0}^{\infty}\dfrac{\sin{x}}{x}dx
\\
&=\int_{0}^{\infty}\dfrac{\sin{x}}{x}dx
\end{align}
So $\int_{0}^{\infty}\dfrac{\sin{x}}{x}dx$ is convergent.


exponentiation - Exponents with variables inside exponents

I am confused about how to reduce this, is there any way?



$\sqrt x \ln(\sqrt x) = \ln\left(\sqrt x^{\sqrt x}\right) = $ $?$



This can be written like this too:



$\ln\left(x^{\frac{x^{1/2}}{2}}\right)$



Or:




$\ln\left(x^{\frac{\sqrt x}{2}}\right)$

Finding remaining polynomial after finding complex factors



I want to express this polynomial as a product of linear factors:




$x^5 + x^3 + 8x^2 + 8$



I noticed that $\pm i$ were roots just looking at it, so two factors must be $(x- i)$ and $(x + i)$, but I'm not sure how I would know what the remaining polynomial would be. For real roots, I would usually just use long division, but it turns out a little messy in this instance (for me at least), and I was wondering if there was a simpler method of finding the remaining polynomial.



Apologies for the basic question!


Answer



If you divide $$ x^5 + x^3 + 8x^2 + 8$$ by $$(x-i)(x+i) = x^2+1$$ you will get $$x^3+8$$ which factors as $$(x^3+8) = (x+2)(x^2-2x+4)$$ which has a solution of $x=-2$



Now use quadratic formula to solve $x^2-2x+4=0$ to find other roots and factor if you wish.
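The long division can also be mechanized; a small sketch using plain coefficient lists (highest degree first), with helper names of my own choosing:

```python
def poly_divmod(num, den):
    # Polynomial long division on coefficient lists, highest degree first.
    num = list(num)
    q = []
    while len(num) >= len(den):
        coeff = num[0] / den[0]
        q.append(coeff)
        for i, d in enumerate(den):
            num[i] -= coeff * d
        num.pop(0)
    return q, num  # quotient, remainder

# x^5 + x^3 + 8x^2 + 8 divided by x^2 + 1:
quot, rem = poly_divmod([1, 0, 1, 8, 0, 8], [1, 0, 1])
# quot is [1.0, 0.0, 0.0, 8.0], i.e. x^3 + 8, and rem is all zeros.
```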



Tuesday 16 April 2019

real analysis - Second Baire class and Borel measurable

I am new to Baire class theory, but need it for one part of a project I am working on. I have seen it referenced that functions of second Baire class are Borel measurable. For example here in this book, but I cannot find a proof of this (or have come across one and not realized I am looking at it).



My other question, regarding this same result, is what what restrictions are there on the domain? For example, if we have a function $f:B \rightarrow \mathbb{R}$, where $B$ is a Borel measurable set in $\mathbb{R}^n$ or any metric space and $f$ is of second Baire class on $B$. Does it follow that $f$ is Borel measurable?

sequences and series - Find the converging limit?




I'm trying to solve this problem from the text but I can't seem to figure out how to exactly approach it. It says,




Find the limit of the convergent sequence
$(\sqrt{2}-1)^n$ as $n \to \infty$.




I'm assuming I have to find the value of the sequence as $n$ goes to infinity by applying the limit, but since it's raised to the power $n$, I'm having a bit of trouble simplifying it. Please correct me if I'm wrong.


Answer




Note $0<\sqrt 2-1<1$. What do you know about $r^n$ if $|r|<1$?


Monday 15 April 2019

probability - A number of dice rolls to see every number at least once



We have a fair die that can produce $n$ different numbers. How many times should we roll the die to see every number at least once with probability $p$?



Not a homework, just interesting. Tried to solve myself but with no luck.



I think it could be sort of coupon collector problem, but I can't get exact formula.


Answer




Let $S_k$ be all the outcomes in which a $k$ is not rolled. For each $k$, $|S_k|=(n-1)^r$ and there are $\binom{n}{1}$ choices for $k$. For each $j\ne k$, $|S_j\cap S_k|=(n-2)^r$ and there are $\binom{n}{2}$ choices for $j$ and $k$. Continuing in this manner and using Inclusion-Exclusion to count the number of outcomes missing at least $1$ number, we get
$$
\begin{align}
\left|\bigcup_{k=1}^nS_k\right|
&=\sum_{k=1}^n|S_k|-\sum_{j\lt k}|S_j\cap S_k|+\sum_{i\lt j\lt k}|S_i\cap S_j\cap S_k|-\dots\\
&=\binom{n}{1}(n-1)^r-\binom{n}{2}(n-2)^r+\binom{n}{3}(n-3)^r-\dots
\end{align}
$$
Since there are a total of $n^r$ total outcomes, we get the number of outcomes in which all possible numbers are rolled is
$$

n^r-\binom{n}{1}(n-1)^r+\binom{n}{2}(n-2)^r-\binom{n}{3}(n-3)^r+\dots
$$
Thus, the probability of this occurring is
$$
1-\binom{n}{1}\left(1-\frac1n\right)^r+\binom{n}{2}\left(1-\frac2n\right)^r-\binom{n}{3}\left(1-\frac3n\right)^r+\dots
$$
So we need to find the smallest $r$ so that
$$
p\le\sum_{k=0}^n(-1)^k\binom{n}{k}\left(1-\frac kn\right)^r
$$
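The smallest such $r$ is easy to find by evaluating the inclusion-exclusion sum directly; a sketch (the function names are mine):

```python
from math import comb

def prob_all_seen(n, r):
    # P(every one of n equally likely faces appears in r rolls),
    # via the inclusion-exclusion sum above.
    return sum((-1) ** k * comb(n, k) * (1 - k / n) ** r for k in range(n + 1))

def rolls_needed(n, p):
    # Smallest r with prob_all_seen(n, r) >= p.
    r = n  # fewer than n rolls can never show all n faces
    while prob_all_seen(n, r) < p:
        r += 1
    return r
```

For example, `prob_all_seen(2, 2)` is $\frac12$, matching the direct count over the four equally likely outcomes of two coin flips.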

Expected Duration



The expected duration until all numbers are rolled can be computed using the formulas above; i.e.
$$
\begin{align}
\mathrm{E}[r]
&=\color{#C00000}{\sum_{r=1}^\infty}\sum_{k=0}^n(-1)^k\binom{n}{k}\color{#C00000}{r\left[\left(1-\frac kn\right)^r-\left(1-\frac kn\right)^{r-1}\right]}\\
&=\sum_{k=1}^n(-1)^k\binom{n}{k}\color{#C00000}{\left(-\frac nk\right)}\\
&=n\sum_{k=1}^n(-1)^{k-1}\color{#00A000}{\binom{n}{k}}\frac1k\\
&=n\sum_{k=1}^n(-1)^{k-1}\color{#00A000}{\sum_{j=k}^n\binom{j-1}{k-1}}\frac1k\\

&=n\sum_{j=1}^n\sum_{k=1}^j(-1)^{k-1}\binom{j-1}{k-1}\frac1k\\
&=n\sum_{j=1}^n\color{#0000FF}{\sum_{k=1}^j(-1)^{k-1}\binom{j}{k}}\frac1j\\
&=n\sum_{j=1}^n\frac1j
\end{align}
$$
However, there is a simpler way. Since the expected number of Bernoulli trials with success probability $p$ until the first success is $\frac1p$, we get that the expected duration for the $k^{\mathrm{th}}$ new number is $\frac{n}{n-k+1}$. Therefore, the expected duration to get all numbers is
$$
\sum_{k=1}^n\frac{n}{n{-}k{+}1}=n\sum_{k=1}^n\frac1{k}
$$


Sunday 14 April 2019

calculus - Proof that $Γ'(1) = -γ$?



I know that $Γ'(1) = -γ$, but how does one prove this?
Starting from the basics, we have that:




$$Γ(x) = \int_0^\infty e^{-t} t^{x-1} dt$$



How do we differentiate this? How do we then find that



$$Γ'(1) = \int_0^\infty e^{-t} \log(t) dt$$



and how would one solve this integral?


Answer



Taken from this answer:




By the recursive relation $\Gamma(x+1)=x\Gamma(x)$, we get
$$
\small{\log(\Gamma(x))=\log(\Gamma(n+x))-\log(x)-\log(x+1)-\log(x+2)-\dots-\log(x+n-1)}\tag{1}
$$
Differentiating $(1)$ with respect to $x$, evaluating at $x=1$, and letting $n\to\infty$ yields
$$
\begin{align}
\frac{\Gamma'(1)}{\Gamma(1)}&=\log(n)+O\left(\frac1n\right)-\frac11-\frac12-\frac13-\dots-\frac1n\\
&\to-\gamma\tag{2}

\end{align}
$$
A discussion of why $\frac{\mathrm{d}}{\mathrm{d}x}\log(\Gamma(x))=\log(x)+O\left(\frac1x\right)$, and a proof of the log-convexity of $\Gamma(x)$, is given in the answer cited above. It is easy to accept since $\log(\Gamma(n+1))=\log(\Gamma(n))+\log(n)$.






Another Approach
$$
\begin{align}
\int_0^\infty\log(t)\,e^{-t}\,\mathrm{d}t

&=\lim_{n\to\infty}\int_0^n\log(t)\,\left(1-\frac{t}{n}\right)^n\,\mathrm{d}t\tag{3a}\\
&=\lim_{n\to\infty}n\int_0^1(\log(t)+\log(n))\,(1-t)^n\,\mathrm{d}t\tag{3b}\\
&=\lim_{n\to\infty}\left(\frac{n}{n+1}\log(n)+n\int_0^1\log(1-t)\,t^n\,\mathrm{d}t\right)\tag{3c}\\
&=\lim_{n\to\infty}\left(\frac{n}{n+1}\log(n)-\frac{n}{n+1}H_{n+1}\right)\tag{3d}\\[6pt]
&=-\gamma\tag{3e}
\end{align}
$$
Explanation:
$\text{(3a)}$: Monotone Convergence and $e^{-t}=\lim\limits_{n\to\infty}\left(1-\frac tn\right)^n$
$\text{(3b)}$: substitute $t\mapsto nt$
$\text{(3c)}$: substitute $t\mapsto1-t$
$\text{(3d)}$: integrate by parts $\int_0^1\log(1-t)\,t^n\,\mathrm{d}t=-\frac1{n+1}\int_0^1\frac{1-t^{n+1}}{1-t}\,\mathrm{d}t$
$\text{(3e)}$: $\gamma=\lim\limits_{n\to\infty}H_n-\log(n)$


discrete mathematics - How to get $k^{k + 1} + k^k$ to equate $(k+1)^{k+1}$?



This is a problem from Discrete Mathematics and its Applications





  1. Let $P(n)$ be the statement that $n! \lt n^n$, where $n$ is an integer greater than $1$.



$\quad(a)$ What is the statement $P(2)$?
$\quad(b)$ Show that $P(2)$ is true, completing the basis step of the proof.
$\quad(c)$ What is the inductive hypothesis?
$\quad(d)$ What do you need to prove in the inductive step?
$\quad(e)$ Complete the inductive step.
$\quad(f)$ Explain why these steps show that this inequality is true whenever $n$ is an integer greater than $1$.




I am currently on part e, completing the inductive step.
Here is my work so far,



I was able to show that the basis step, $P(2)$, is true because $2! < 2^2$, or $2 < 4$.
Now I am trying to show the inductive step, or $P(k)\to P(k+1)$
Assuming $P(k)$, we have $k! \lt k^k$. To get $(k+1)!$ on the left side, I multiplied both sides by $k+1$ to get
$$(k+1)! < k^k(k+1)$$ or $$(k+1)! < k^{k + 1} + k^k$$



How can I get this expression, $k^{k + 1} + k^k$ to equate $(k+1)^{k+1}$?



Answer



$$(n+1)!=(n+1) n! < (n+1) n^n<(n+1) (n+1)^n=(n+1)^{n+1} $$


linear algebra - Eigenvalues of an $n\times n$ matrix without calculations





I have a question about the following matrix:



$$
\begin{bmatrix}
1 & 2 & 3 \\

1 & 2 & 3 \\
1 & 2 & 3 \\
\end{bmatrix}
$$



Find the eigenvalues without calculations and justify your answer. Now, I was thinking about this problem. And I thought: OK, if you try the vector $(1,1,1)$, you can find $6$ as one eigenvalue (and I know you have the eigenvalue $0$ with multiplicity two as well). But then you are doing a sort of guessing/calculation work.



I see that the columns are linearly dependent. So I know the dimension of the column space and of the null space.



Thank you in advance.




EDIT: follow up question:



Ok, so you find that the dimension of the null space is 2, so there are 2 eigenvectors when the eigenvalue is 0. Now my question is: can the dimension of the eigenspace be bigger than the number of eigenvalues? I guess not. I know it can be smaller.


Answer



Notice that the rank is $1$ and hence $0$ is an eigenvalue of multiplicity $2$.
Then the trace equals the sum of the eigenvalues, and hence the last eigenvalue is $6$.



It is also rather easy to find all eigenvectors without a lot of work. For $6$ the vector is $(1,1,1)$. For $0$ you can take basis $(2,-1,0),(3,0,-1)$.
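These claims are easy to confirm by a direct matrix-vector product (plain Python, no libraries):

```python
def matvec(A, v):
    # Standard matrix-vector product.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2, 3],
     [1, 2, 3],
     [1, 2, 3]]

# A(1,1,1) = 6*(1,1,1): eigenvalue 6.
assert matvec(A, [1, 1, 1]) == [6, 6, 6]
# The two kernel vectors give eigenvalue 0:
assert matvec(A, [2, -1, 0]) == [0, 0, 0]
assert matvec(A, [3, 0, -1]) == [0, 0, 0]
```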


Saturday 13 April 2019

Evaluation of a series (possibly related to Binomial Theorem)

I have the following series:




$$1 + \frac{2}{3}\cdot\frac{1}{2} + \frac{2\cdot5}{3\cdot6}\cdot\frac{1}{2^2} + \frac{2\cdot5\cdot8}{3\cdot6\cdot9}\cdot\frac{1}{2^3} + \ldots$$




I have to find the value of this series, and I have four options:
(A) $2^{1/3}$ (B) $2^{2/3}$ (C) $3^{1/2}$ (D) $3^{3/2}$




I can't seem to find a general term for this. I tried:



$$S = 1 + \frac{(1 - \frac{1}{3})}{1!}(\frac{1}{2}) + \frac{(1 - \frac{1}{3})(2 - \frac{1}{3})}{2!}(\frac{1}{2})^2 + \frac{(1 - \frac{1}{3})(2 - \frac{1}{3})(3 - \frac{1}{3})}{3!}(\frac{1}{2})^3 + \ldots$$



But this doesn't seem to get me anywhere.



Any help?







This maybe a telescopic series, because there was a similar question we solved in class which ended up being telescopic:




$$ \frac{3}{2^3} + \frac{4}{2^4\cdot3} + \frac{5}{2^6\cdot3} + \frac{6}{2^7\cdot5} + \ldots$$



$=\displaystyle\sum\limits_{r=1}^\infty\frac{r+2}{2^{r+1}r(r+1)}$



$=\displaystyle\sum \bigg(\frac{1}{2^r r} - \frac{1}{2^{r+1}(r+1)}\bigg) = \frac{1}{2}$





$P.S:$ This problem was included in my set of questions for Binomial Theorem, which is why I thought it might be related to it.

How to find the sum of a geometric sequence with an upper bound of n

Let's say I have an equation that includes the Sum, $\sum_{i=0}^n \frac12 (-5)^i$ where $n$ is the last term in the sequence.



We know that this sequence is geometric because the common ratio is $-5$, meaning that every term is multiplied by $-5$.



The sequence goes like:
$$\frac12, \frac{-5}{2}, \frac{25}2, \frac{-125}2, \ldots, \frac{(-5)^n}{2}$$




My question is, how do we find the sum of this geometric sequence when the upper bound of the sigma notation is $n$? Is there some sort of formula that we can use in order to find the sum?



Thanks in advance!

summation - Finite Sum of Power?



Can someone tell me how to get a closed form for



$$\sum_{k=1}^n k^p$$



For $p = 1$, it's just the classic $\frac{n(n+1)}2$.




What is it for $p > 1$?


Answer



The general formula is fairly complicated:



$$\sum_{k=1}^n{k^p} = {1\over p+1}\sum_{k=0}^p{{p+1\choose k}B_k n^{p+1-k}}$$



Where $B_k$ are the Bernoulli numbers, discovered around the same time, but independently, by Jacob Bernoulli and Seki Takakazu.



This is commonly called "Faulhaber's formula", but that is a misattribution. The Donald Knuth paper "Johann Faulhaber and sums of powers" explains the history in some detail, including what Faulhaber did and did not know. In particular, he did not know the general formula above, although he did work on many special cases.




The first few special cases are:



$$\begin{array}{rl}
\sum_{k=1}^n 1 & = n \\
\sum_{k=1}^n k & = \frac12{(n^2+n)} \\
\sum_{k=1}^n k^2 & = \frac16{(2n^3+3n^2+n)} \\
\sum_{k=1}^n k^3 & = \frac14{(n^4+2n^3+n^2)} \\
\sum_{k=1}^n k^4 & = \frac1{30}{(6n^5+15n^4+10n^3-n)} \\
\sum_{k=1}^n k^5 & = \frac1{12}{(2n^6+6n^5+5n^4-n^2)} \\
\end{array}$$
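The listed special cases are easy to verify against brute-force sums:

```python
def power_sum(n, p):
    # Direct evaluation of 1^p + 2^p + ... + n^p.
    return sum(k ** p for k in range(1, n + 1))

# Clearing denominators, each closed form matches the direct sum:
for n in range(1, 30):
    assert 2 * power_sum(n, 1) == n ** 2 + n
    assert 6 * power_sum(n, 2) == 2 * n ** 3 + 3 * n ** 2 + n
    assert 4 * power_sum(n, 3) == n ** 4 + 2 * n ** 3 + n ** 2
    assert 30 * power_sum(n, 4) == 6 * n ** 5 + 15 * n ** 4 + 10 * n ** 3 - n
    assert 12 * power_sum(n, 5) == 2 * n ** 6 + 6 * n ** 5 + 5 * n ** 4 - n ** 2
ok = True
```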



integration - Improper integral of $\exp[-(a+bi)x]\log x$




Prove:$$\begin{align*}\mathcal{I} & =\int\limits_0^{\infty}dx\, e^{-ax}\log x\cos bx\\ & =-\frac a{a^2+b^2}\left[\gamma+\log\sqrt{a^2+b^2}+\frac ba\arctan\frac ba\right]\end{align*}$$





This is what I've done so far, but I'm not getting the exact answer the problem has. I rewrote $\mathcal{I}$ as$$\mathcal{I}=\Re\left[\mathcal{J}\right]=\Re\left[\lim\limits_{\mu\to0}\frac {\partial}{\partial\mu}\int\limits_0^{\infty}dx\, e^{-(a+bi)x}x^{\mu}\right]$$Substitute $t=(a+bi)x$ so$$\begin{align*}\mathcal{J} & =\lim\limits_{\mu\to0}\frac {\partial}{\partial\mu}(a+bi)^{-\mu}\int\limits_0^{\infty}dt\, e^{-t}t^{\mu}\\ & =\lim\limits_{\mu\to0}\frac {\partial}{\partial\mu}\left[\left(\frac 1{a+bi}\right)^{\mu}\Gamma(\mu+1)\right]\end{align*}$$However, if I differentiate and take the limit, I'm just left with$$\mathcal{J}=-\gamma-\log(a+bi)=-\gamma-\log\sqrt{a^2+b^2}-i\arctan\frac ba$$Which is completely different than the answer given above. Where did I go wrong? I'm not entirely sure. The fact that we need to recover the real part of $\mathcal{J}$ doesn't help either.


Answer



Rotating the contour back to the real axis after substituting works out without difficulty, but your substitution isn't quite right: $dx = dt/(a+bi)$, so actually
$$ \int_0^{\infty} e^{-(a+bi)x} x^{\mu} \, dx = (a+bi)^{-\mu-1} \int_0^{\infty} e^{-t} t^{\mu} \, dt. $$
(we can check this by comparison with the elementary
$$\int_0^{\infty} e^{-(a+bi)x} \, dx = \int_0^{\infty} e^{-ax}(\cos{bx}+i\sin{bx}) \, dx = \frac{a}{a^2+b^2}-i\frac{b}{a^2+b^2} = \frac{1}{a+bi}) $$
Hence the actual answer is yours divided by an extra $a+bi$, so the real part is
$$ \frac{1}{a^2+b^2} (a(-\gamma-\log{\sqrt{a^2+b^2}}) - b\arctan{\frac{b}{a}}), $$
which rearranges into the given expression.



Friday 12 April 2019

probability - Expectation of geometric random variable

Let $X$ be geometric random variable with parameter $p$.



How to prove that:





(1) $E[X-1|X>1] = E[X]$



(2) $E[X^2|X>1] = E[(X+1)^2]$




Author explains the fact below and states it is used to prove (1)
$$P(X-1=k\mid X>1) = P(X=k).$$




I understood how the fact is true but could not understand how it is used to derive (1)



(2) was stated without explanation. Could someone help in deriving (1) and (2)?

sequences and series - How can I show this in a summation?



I am trying to work out a summation for packet delays which is very similar to the summation for estimating RTT, which is an exponentially weighted moving average. I have modified the estimating-RTT summation below (I think I did it correctly), but it does not seem to fit correctly with the part I know is right (where $0.1$ has been substituted for $u$). If I have written this summation out correctly, can someone explain why the summation raises $(1-u)$ to the $i$th power, why the summation replaces $u$ with $(1-u)^{n-1}$, and why the summation is multiplied by $u/(1-u)$? [image of the modified summation]



EDIT:
If I had not looked at the estimated RTT summation this is what I would have guessed...



[image of the guessed summation]



Answer



HINT



Since there is uncertainty with 1-u, I suggest backtracking, and writing the first few terms completely out:



$d_1 = u(r_1 - t_1)$



$d_2 = (1-u)^1 u(r_1 - t_1) + u(r_2 - t_2)$



$d_3 = (1-u)^2 u(r_1 - t_1) + (1-u)^1 u(r_2 - t_2) + u(r_3-t_3)$




Now, adding in a $(1-u)^0$ which equals one, we can get:



$d_1 = (1-u)^0 u(r_1 - t_1)$



$d_2 = (1-u)^1 u(r_1 - t_1) + (1-u)^0 u(r_2 - t_2)$



$d_3 = (1-u)^2 u(r_1 - t_1) + (1-u)^1 u(r_2 - t_2) + (1-u)^0 u(r_3-t_3)$



By looking at the equations that are written out, the powers of (1-u) are of reverse magnitude to the indexes of r and t. So I think that you're looking for that in your sum. Both summations you have don't do that.




Consider this: it's possible to consolidate everything into one sum, once you match up the powers and indexes correctly.



So here's something that you can try. Take $d_2$, the equation with two terms. How do you write that as a sum, for example $d_2 = \sum_{i=0}^{2-1}{\mbox{equation}}$



Try $d_2 = u\sum_{i=0}^{2-1}{(1-u)^i(r_{2-i}-t_{2-i})}$



This should check out as being $d_2$.



Once this is confirmed, it shouldn't be too much trouble to extend this so that 2 can be replaced with n.
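To see the pattern concretely, the recursion and the candidate closed form can be compared numerically. This assumes the recursion $d_n = (1-u)\,d_{n-1} + u\,(r_n - t_n)$ with $d_0 = 0$, matching the expansions in the hint; the sample delays are made up.

```python
u = 0.1
r = [None, 5.0, 7.0, 6.0, 9.0]   # receive times r_1..r_4 (index 0 unused)
t = [None, 1.0, 2.0, 3.0, 4.0]   # send times    t_1..t_4

def d_recursive(n):
    # d_n = (1 - u) * d_{n-1} + u * (r_n - t_n), starting from d_0 = 0.
    d = 0.0
    for i in range(1, n + 1):
        d = (1 - u) * d + u * (r[i] - t[i])
    return d

def d_closed(n):
    # d_n = u * sum_{i=0}^{n-1} (1 - u)^i * (r_{n-i} - t_{n-i})
    return u * sum((1 - u) ** i * (r[n - i] - t[n - i]) for i in range(n))

# The two agree for every n, confirming that the powers of (1 - u)
# pair with the indexes of r and t in reverse order.
```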



Thursday 11 April 2019

Asymptotic approximation to incomplete elliptic integral of third kind at a pole - determine constant



while studying a physics problem I found that asymptotically the incomplete elliptic integral of the third kind, (using the Mathematica conventions, where it is called EllipticPi),



$\Pi(n;\phi|m)=\int_0^{\sin\phi} \frac{dt}{(1-nt^2)\sqrt{(1-mt^2)(1-t^2)}},$




which seems to logarithmically diverge when $\sin^2 \phi$ approaches $1/n$, approaches the logarithm indeed in the following way:



$\lim_{\epsilon \rightarrow 0} \left[ 2(n-1)\Pi\left(n;\arcsin \sqrt{\frac{1-\epsilon}{n}}|(2-n)n\right) +\log\epsilon\right]=C(n)$



where $C(n)$ is a constant. This identity (could it be a new discovery?) I checked numerically and it might be independent of the value of $m$.



I need to know the constant $C(n)$ in terms of some known functions (or constants from integrations independent of $n$) if possible, for the value of $m$ given.
Mathematica will not help. Is there any chance to do it? I have no training in this apart from undergraduate analysis and use elliptic integrals for the first time here.




(By the way, for large $n$ I can see from the graph that $C(n)$ is of the form $a+\log(n+b)$, but for $n$ around unity this is no exact fit while being similar, divergence to minus infinity for small $n$ included.)



Recent Edit:



putting the elliptic integral and the logarithm in the same integral and doing what was suggested by user8268, splitting them into two nondiverging integrals, I have now problems with the more complicated, see my comment to the answer. Can anybody tell me if this is an elliptic integral of the third kind or if Mathematica is right when it gives me a mixture of them (including ugly roots, making evaluation of the limit impossible) and if so must there be a calculation mistake or might the advice given even not be correct?



notsoimportantdetails(
Background:
This comes from wanting to evaluate an improper integral (§) that I physically know and numerically see to converge. In turn this integral comes from the difference of two diverging integrals. Mathematica cannot do the improper integral of the combination but it can do the indefinite integral. But then it splits up the integrand again and the result is the elliptic integral and the logarithm, each diverging when doing the limit. Mathematica cannot do the limit and as I said astonishingly does not even know that there is a divergence for the elliptic integral. When the elliptic integral and the logarithm are put together into one integral (use $-\log\epsilon=\int_0^{\sqrt{\frac{1-\epsilon}{n}}}\frac{2tn}{1-t^2n}dt$ or transform the elliptic integral to obtain (o) below) we are more or less back where we started, I suppose - well, my version of Mathematica even hangs up for the indefinite evaluation now.




Edit: Since maybe all I have done so far is making it more complicated, the original problem is to evaluate the integral (§) of
$\frac{1-\sqrt{1-\frac{\text{r0} (-2 m+\text{r0})}{x (-2 m+x)}}}{\left(1-\frac{2 m}{x}\right) \sqrt{1-\frac{\text{r0} (-2 m+\text{r0})}{x (-2 m+x)}}}$
from $r0$ to infinity, where $r0>2m$. This has the obvious split into the two divergent integrals.



For your reference, the combined integral (o) I talked about in the end above reads
$C(n)=\lim_{\epsilon\rightarrow 0}\int_1^\epsilon\frac{1 - n + \sqrt{(1 + n (-1 + t) - 2 t) (-1 + t) (-1 + n + t)}}{t \sqrt{(1 + n (-1 + t) - 2 t) (-1 + t) (-1 + n + t)}}\,dt$ where the integration will solve the problem by providing $C(n)$ ; $\epsilon$ can actually be replaced by $0$ right away since the limit exists.



notsoimportantdetails)


Answer




I finally after all this time found the answer by simply using the second identity on http://functions.wolfram.com/EllipticIntegrals/EllipticPi3/17/01/ which generally expresses the incomplete elliptic integral of the 3rd kind with one term of an incomplete integral of the 3rd kind, one term of an incomplete integral of the 1st kind and one logarithm, which cancels the logarithm in my limit.
The result has whence indeed the form suggested by user8268.


Prove 7 divides $15^n+6$ with mathematical induction



Prove that for all natural numbers $n$, the following is divisible by $7$:



$$15^n+6$$



Base. We prove the statement for $n = 1$



$15 + 6 = 21$, which is divisible by $7$, so it is true.




Inductive step.



Induction Hypothesis. We assume the result holds for $k$. That is, we assume that



$15^k+6$



is divisible by 7



To prove: We need to show that the result holds for $k+1$, that is, that




$15^{k+1}+6=15^k\cdot 15+6$



and I don't know what to do


Answer



Observe that $14$ is divisible by 7. Then let $15^k\cdot 15+6=15^k\cdot 14+ 15^k+6$.
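The claim and the key identity behind the inductive step are easy to check numerically:

```python
# 15^n + 6 is divisible by 7 for every n >= 1, and the inductive step
# rests on the rewriting 15^(k+1) + 6 = 14*15^k + (15^k + 6).
for n in range(1, 50):
    assert (15 ** n + 6) % 7 == 0
for k in range(1, 20):
    assert 15 ** (k + 1) + 6 == 14 * 15 ** k + (15 ** k + 6)
verified = True
```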


linear algebra - Finding eigenvalues of a matrix given its characteristic polynomial and the trace and determinant




I am told a matrix A has characteristic polynomial: $(\lambda−1)^3(a\lambda+\lambda^2+b),$ and that $\text {tr}(A)=12,$ and $\det(A) =14.$



I am asked to find the eigenvalues.



Is the only to do this by simply using the fact that the sum of the eigenvalues is the trace, and the product is the determinant?



I had done so, and had gotten that the eigenvalues are 1, 2 and 7, (1 with multiplicity 3), which was correct.



I am asking because won't the number of eigenvalues have to be less than the number of rows and columns of the matrix? What if, if given different trace and determinant values, I somehow found eigenvalues which satisfied the trace and determinant given, but had a number of eigenvalues that exceeded the number of columns and rows of the matrix?




Also, I was not given the dimensions of the matrix A.


Answer



I hope I understand everything you said.




  1. The degree of your characteristic polynomial is 5. Therefore, the matrix it is coming from is $5 \times 5$.

  2. An $n \times n$ matrix has $n$ (complex) eigenvalues (counting multiplicities). That's called the fundamental theorem of algebra.

  3. Should the matrix be real and should you be only interested in real eigenvalues, you would see that some of the complex eigenvalues are conjugates of one another.

  4. Given that the eigenvalues are the zeros of the characteristic polynomial, and given that it is factored, you directly notice that $\lambda = 1$ is an eigenvalue with algebraic multiplicity $3$. Therefore, you only have $5-3 = 2$ remaining eigenvalues to find.

  5. You have two conditions (trace and determinant), i.e. two equations, and solve for two eigenvalues. This shouldn't be an issue and solvable for (almost) every pair (trace,det).



Find all three-digit numbers $\overline{abc}$ such that the 6003 digit number $\overline{abcabcabc.....abc}$ is divisible by 91?




Find all three-digit numbers $\overline{abc}$ such that the $6003$-digit number $\overline{abcabc\ldots abc}$ is divisible by $91$. Here $\overline{abc}$ occurs $2001$ times. I know the divisibility rule for $91$, which states to subtract $9$ times the last digit from the rest and, for large numbers, to form the alternating sum of blocks of three digits from right to left. However, I am not able to see how I could apply this rule to determine the numbers $\overline{abc}$. How can I solve this?


Answer



The given number can be written as follows,



$\overline{abc}\cdot(1+10^3+10^6+\cdots+10^{6000})$



Now, $91\mid 1001=1+10^3$. The sum $S=1+10^3+10^6+\cdots+10^{6000}$ has $2001$ terms; grouping them in pairs gives $S=(1+10^3)+10^6(1+10^3)+\cdots+10^{5994}(1+10^3)+10^{6000}$. Every bracketed group is divisible by $91$, so $S\equiv 10^{6000}\pmod{91}$. Since $10^3\equiv -1\pmod{91}$, we have $10^{6000}=(10^3)^{2000}\equiv 1\pmod{91}$, so $S$ and $91$ are relatively prime $\implies$ $\overline{abc}$ is a multiple of $91$.



Therefore, the required numbers are $91\times n$, where $n\in\{2,3,\dots,10\}$, i.e., the required numbers are:




$182,273,364,455,546,637,728,819$ and $910$
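As a quick sanity check (not part of the original answer), exact integer arithmetic confirms that exactly the multiples of $91$ work; the name `S` mirrors the sum used above.

```python
# Brute-force check: for which three-digit blocks abc is the 6003-digit
# number abcabc...abc (2001 copies of abc) divisible by 91?
S = sum(10 ** (3 * j) for j in range(2001))  # 1 + 10^3 + ... + 10^6000
valid = [abc for abc in range(100, 1000) if (abc * S) % 91 == 0]
print(valid)  # → [182, 273, 364, 455, 546, 637, 728, 819, 910]
```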


calculus - Test for divergence of $\int_{0}^{\infty} \frac{\sin^2(x)}{x}\,dx$ without evaluating the integral



I would like to prove that $$\int_{0}^{\infty} \frac{\sin^2(x)}{x}dx$$ diverges without actually evaluating the integral. Is there a convergence test from calculus or real analysis that can show that this integral diverges?



Thanks.



Edit: Someone pointed out that this is a possible duplicate. However, the question put forth as a possible duplicate asks about $\sin(x^2)$, not about $\sin^2(x)$.


Answer



It is a divergent integral by Kronecker's lemma, since $\sin^2(x)$ is a non-negative function with mean value $\frac{1}{2}$. In more explicit terms, by integration by parts we have




$$ \int_{\pi}^{N\pi}\frac{\sin^2(x)}{x}\,dx =\color{blue}{\left[\frac{1}{2}-\frac{\sin(2x)}{4x}\right]_{\pi}^{N\pi}}+\color{red}{\frac{1}{2}\int_{\pi}^{N\pi}\frac{dx}{x}}+\color{blue}{O(1)} $$
where the blue terms are bounded, but the red term equals $\frac{1}{2}\log N$.
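A numeric illustration (a sketch added here, not part of the original answer; the function name is made up): the integral over $[\pi, N\pi]$ indeed grows like $\frac{1}{2}\log N$.

```python
import math

def sin2_over_x_integral(a, b, steps):
    # midpoint-rule approximation of the integral of sin(x)**2 / x over [a, b]
    h = (b - a) / steps
    return h * sum(
        math.sin(a + (i + 0.5) * h) ** 2 / (a + (i + 0.5) * h)
        for i in range(steps)
    )

I10 = sin2_over_x_integral(math.pi, 10 * math.pi, 100_000)
I100 = sin2_over_x_integral(math.pi, 100 * math.pi, 500_000)
# the increment from N = 10 to N = 100 is close to (1/2) * log(10),
# exactly as the red term predicts
print(I100 - I10, 0.5 * math.log(10))
```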


Wednesday 10 April 2019

linear algebra - Finding eigenvalues of a matrix given its characteristic polynomial and the trace and determinant



I am told a matrix A has characteristic polynomial: $(\lambda−1)^3(a\lambda+\lambda^2+b),$ and that $\text {tr}(A)=12,$ and $\det(A) =14.$



I am asked to find the eigenvalues.




Is the only way to do this simply to use the fact that the sum of the eigenvalues is the trace and their product is the determinant?



I had done so, and had gotten that the eigenvalues are 1, 2 and 7, (1 with multiplicity 3), which was correct.



I am asking because won't the number of eigenvalues have to be at most the number of rows and columns of the matrix? What if, given different trace and determinant values, I somehow found eigenvalues that satisfied the given trace and determinant but whose count exceeded the number of rows and columns of the matrix?



Also, I was not given the dimensions of the matrix A.


Answer



I hope I understand everything you said.





  1. The degree of your characteristic polynomial is 5. Therefore, the matrix it is coming from is $5 \times 5$.

  2. An $n \times n$ matrix has $n$ (complex) eigenvalues (counting multiplicities); this follows from the fundamental theorem of algebra.

  3. Should the matrix be real and should you only be interested in real eigenvalues, note that the non-real eigenvalues come in conjugate pairs.

  4. Given that the eigenvalues are the zeros of the characteristic polynomial, and given that it is factored, you directly notice that $\lambda = 1$ is an eigenvalue with algebraic multiplicity $3$. Therefore, you only have $5-3 = 2$ remaining eigenvalues to find.

  5. You have two conditions (trace and determinant), i.e. two equations, to solve for the two remaining eigenvalues. This shouldn't be an issue, and it is solvable for (almost) every pair (trace, det).
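To make point 5 concrete (a small illustrative computation, not from the original answer): with $\lambda=1$ a triple eigenvalue, the remaining pair $r,s$ satisfies $r+s=\operatorname{tr}(A)-3=9$ and $rs=\det(A)/1^3=14$, so $r$ and $s$ are the roots of $t^2-9t+14$.

```python
import math

tr_A, det_A = 12, 14
s, p = tr_A - 3, det_A          # sum and product of the two unknown eigenvalues
disc = math.sqrt(s * s - 4 * p) # discriminant of t^2 - s*t + p
roots = sorted([(s - disc) / 2, (s + disc) / 2])
print(roots)  # → [2.0, 7.0]
```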


Tuesday 9 April 2019

logic - Is it possible to avoid the axiom of choice?




For example, when I want to prove a proposition like "Let $V$ be a vector space..." and I need a basis of $V$ to prove it, instead of using the theorem "Every vector space has a basis" which needs the axiom of choice, I can replace the assumption with "Let $V$ be a vector space with a basis".



If the axiom of choice is true, this replacement does not lose any generality of the theorem. In addition, we can individually prove the existence of bases of concrete vector spaces such as $\mathbb{R}^n, \mathbb{C}^n, ...$



Is it possible to avoid the axiom of choice?


Answer



Yes, you can always require that your objects have the properties you'd need choice for.



Just like you can always require that your sets are well ordered, and simply limit the generality.




The problem is that a lot of times naturally occurring objects will not be well behaved without choice. For example, $\ell_2$ might not have a (Hamel) basis without choice.


calculus - Integral of trig fraction using substitution?

I'm chewing on an integral problem and don't have a clue where to begin. If someone could assist by suggesting a good starting point, I'd really appreciate it! Not asking for anyone to solve the integral, just looking for a hint or two:




$$\int{\frac{1}{1+\sin{x}+\cos{x}}\,\,dx}$$



I'm completely stuck - tried substituting for $\tan^2{x}=\sec^2{x}-1$ and so forth, but haven't found it to be terribly useful yet. Maybe I'm missing something. I also suspect this is a prime candidate for u- or t-substitution.



Thanks!
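One standard starting point (offered as a hint, since the post has no answer; the helper names below are made up) is the Weierstrass substitution $t=\tan(x/2)$, under which $1+\sin x+\cos x = \frac{2(1+t)}{1+t^2}$ and $dx=\frac{2\,dt}{1+t^2}$, so the integral reduces to $\int\frac{dt}{1+t}$ and suggests the antiderivative $\ln\lvert 1+\tan(x/2)\rvert$. A quick numerical check that its derivative matches the integrand:

```python
import math

def candidate(x):
    # antiderivative suggested by the t = tan(x/2) substitution
    return math.log(abs(1 + math.tan(x / 2)))

def integrand(x):
    return 1 / (1 + math.sin(x) + math.cos(x))

h = 1e-6
for x in (0.3, 0.9, 1.4):
    numeric_derivative = (candidate(x + h) - candidate(x - h)) / (2 * h)
    assert abs(numeric_derivative - integrand(x)) < 1e-6
print("derivative matches the integrand on sample points")
```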

how to find the remainder when a polynomial $p(x)$ is divided my another polynomial $q(x)$



i was solving the question from the book IIT FOUNDATION AND OLYMPIAD - X and i was solving the problems of polynomials-III. so on the page number 136, there is a question (question 17) given below:




The remainder when $x^{100}$ is divided by $x^2-3x+2$ is:


a) $(2^{100}-1)x + (-2^{100}+2)$


b) $(2^{100}+1)x + (-2^{100}-2)$


c) $(2^{100}-1)x + (-2^{100}-2)$


d) none




As far as I tried to find the remainder: I tried the long division method, but it was getting more and more complicated; then I used the systematic method of division, but I still can't get the correct option.
What is the correct option? Please explain how you found the remainder. Thanks.




And yes, its answer is option (a).


Answer



Hint Write $x^{100}= (x^2-3x+2)q(x) + ax+b$. Now plug $x=1$ and $x=2$ to find $a$ and $b$.
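Following the hint (a small verification sketch added here): plugging $x=1$ gives $a+b=1$ and $x=2$ gives $2a+b=2^{100}$, so $a=2^{100}-1$ and $b=2-2^{100}$, which is option (a). Python's exact integers confirm it:

```python
a = 2 ** 100 - 1
b = 2 - 2 ** 100
# the remainder ax + b must agree with x^100 at the roots x = 1, 2 of x^2 - 3x + 2
assert 1 ** 100 == a * 1 + b
assert 2 ** 100 == a * 2 + b
print("option (a) checks out")
```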


Monday 8 April 2019

combinatorics - Combinatorial reasoning for the identity $\left(\sum_{i=1}^n i\right)^2 = \sum_{i=1}^n i^3$








There is the interesting identity:



$$\left ( \sum_{i=1}^n i \right )^2 = \left ( \sum_{i=1}^n i^3 \right ) $$
which holds for any positive integer $n$.



I know several ways of proving this (finite differences, induction, algebraic tricks, etc.), but even so I still find it "weird" that it is even true.



Is there a very nice intuitive way to prove this using some kind of combinatorial argument? (Like why the sum of the volumes of the first $n$ cubes should be the area of ... not sure here?)




If you have any pretty different proof that could be enlightening I would love to see it.



Thanks a lot!
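While the question asks for a combinatorial proof, a quick exhaustive check of the identity for small $n$ (a sanity check only, not a proof) is easy:

```python
# verify (1 + 2 + ... + n)^2 == 1^3 + 2^3 + ... + n^3 for small n
for n in range(1, 50):
    square_of_sum = sum(range(1, n + 1)) ** 2
    sum_of_cubes = sum(i ** 3 for i in range(1, n + 1))
    assert square_of_sum == sum_of_cubes
print("identity holds for n = 1 .. 49")
```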

Sunday 7 April 2019

calculus - Exponential of complex square root




Is there any way to simplify any further the exponential of a complex square root, as in the following expression:



$$ e^{ a + \sqrt{x + i\cdot y}}, $$



where $a>0, x >0$ and $y<0$. If I were to select the principal square root, I could define $ r = \sqrt{x^2 + y^2}$ and $\theta = \arctan(y/x)$. Then,



$$ e^{ a + \sqrt{r}\cdot(\cos(\theta/2) + i\cdot \sin(\theta/2) )}. $$



Is there a way to get a friendlier form or to simplify it? I have to integrate this expression with respect to $y$ later on, and it doesn't seem easy to integrate.


Answer




The integration does not seem to be very difficult.



Let
$$\sqrt{x+i y}=t \implies y=i \left(x-t^2\right)\implies dy=-2it\,dt$$
$$\int e^{a+\sqrt{x+i y}}\,dy=-2i e^a\int t\, e^{t}\,dt$$ Simplify and use one integration by parts.
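The integration by parts referred to gives $\int t\,e^{t}\,dt=(t-1)e^{t}+C$. A quick real-variable check of that antiderivative (an illustrative sketch only, since the actual path in $t$ is complex):

```python
import math

F = lambda t: (t - 1) * math.exp(t)  # candidate antiderivative of t * e^t
f = lambda t: t * math.exp(t)

h = 1e-6
for t in (0.5, 1.0, 2.3):
    # central difference of F should reproduce f
    assert abs((F(t + h) - F(t - h)) / (2 * h) - f(t)) < 1e-5
print("(t - 1) * exp(t) differentiates back to t * exp(t)")
```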


number theory - Arithmetic progression divisibility




Suppose $b|a$ and $\frac{a}{b} \neq \frac{v}{y}$, $a, b, v, y \in \mathbb{N}$ arbitrary. Is there a nice clean intuitive proof to show that it is never true that $b+yk|a+vk$ for all $k \in \mathbb{N}$? Or is it true sometimes after all (I strongly feel like not)?


Answer



Assume for contradiction that $a+vk \mid b+yk$ for all $k\in \Bbb N$ (the two progressions play symmetric roles), and consider the sequence $u_k\in \Bbb Z$ such that $u_k(a+vk) = b+yk$. Then since $u_k = {b+yk\over a+vk} \to {y\over v}$ as $k$ goes to $+\infty$, the integers $u_k$ approach ${y\over v}$ arbitrarily closely, so ${y\over v}$ must itself be an integer, and thus $v\mid y$. Let $u \in \mathbb{Z}$ be such that $y = vu$. Then for all $k\in \Bbb N$ we have $a+vk \mid b+vuk$ and therefore also $a+vk \mid b-au$. However, since $a+vk$ is unbounded and $b-au$ does not depend on $k$, it follows that $b-au = 0$, which rewrites as ${a\over b} = {v \over y}$.



The contrapositive of this is your statement.
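A brute-force illustration of the contrapositive (an exploratory sketch over a small range, not part of the original answer; helper names are made up): whenever $\frac{a}{b}\neq\frac{v}{y}$, the divisibility fails for some small $k$.

```python
from fractions import Fraction

def divisibility_fails(a, b, v, y, K=100):
    # does a + vk | b + yk fail for some k in 1..K-1?
    # (divisibility forces a + vk to divide the constant v*b - a*y,
    #  so failure must occur once a + vk exceeds |v*b - a*y|)
    return any((b + y * k) % (a + v * k) != 0 for k in range(1, K))

R = range(1, 9)
for a in R:
    for b in R:
        for v in R:
            for y in R:
                if Fraction(a, b) != Fraction(v, y):
                    assert divisibility_fails(a, b, v, y)
print("no counterexample in the sampled range")
```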


elementary number theory - Prove that if $gcd( a, b ) = 1$ then $gcd( ac, b ) = gcd( c, b ) $



I know it might be too easy for you guys here. I'm practicing some problems in the textbook, but this one really drove me crazy.
From $\gcd( a, b ) = 1$, I have $ax + by = 1$, where should I go from here? The extra $ac$ is so annoying. Any hint?




Thanks,
Chan


Answer



Setup: Let $d_1 = \gcd(c,b)$ and $d_2 = \gcd(ac,b)$.



So we have $cx_1 + by_1 = d_1$, $acx_2 + by_2 = d_2$, and $ax + by = 1$.



Step 1: Multiply $ax+by = 1$ by $d_1$ (using $d_1 := cx_1 + by_1$) and rearrange to show that $d_2|d_1$.



$$\begin{aligned}
d_1 (ax+by &= 1)\\

\implies ax(cx_1 + by_1) + bd_1y &= d_1 \\
\implies a c (x x_1) + b(a x y_1 + d_1 y) &= d_1
\end{aligned}$$



Since we know that $d_2 = \gcd(ac,b)$ divides any integer linear combination of $ac$ and $b$, we have $d_2 | d_1$.



Step 2: By a similar argument, multiply $ax+by = 1$ by $d_2$ (using $d_2 := acx_2 + by_2$) and rearrange to show that $d_1|d_2$.



$$\begin{aligned}
d_2 (ax+by &= 1) \\

\implies ax(acx_2 + by_2) + bd_2y &= d_2 \\
\implies c (a^2 x x_2) + b(a x y_2 + d_2 y) &= d_2
\end{aligned}$$



Since we know that $d_1 = \gcd(c,b)$ divides any integer linear combination of $c$ and $b$, we have $d_1 | d_2$.



Conclusion: Finally, because we have $d_1 | d_2$, $d_2 | d_1$, and $d_1$ and $d_2$ are non-negative (since they are gcds of integers), we conclude that $d_1 = d_2$. Thus, $\gcd(ac,b)=\gcd(c,b)$.
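An exhaustive check of the identity on a small range (a sanity check added here, not part of the original answer):

```python
from math import gcd

# verify gcd(ac, b) == gcd(c, b) whenever gcd(a, b) == 1
for a in range(1, 40):
    for b in range(1, 40):
        if gcd(a, b) == 1:
            for c in range(1, 40):
                assert gcd(a * c, b) == gcd(c, b)
print("identity verified on the sampled range")
```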


sequences and series - Trigonometric proof stuck with induction step



I am trying to prove:
$$\sum_{s=0}^{\infty}\frac{1}{(sn)!}=\frac{1}{n}\sum_{r=0}^{n-1}\exp\left(\cos\left(\frac{2r\pi}{n}\right)\right)\cos\left(\sin\left(\frac{2r\pi}{n}\right)\right)$$




We know that $$\begin{align}\exp\left(\cos\theta+i\sin\theta\right)&=e^{\cos\theta}\times e^{i\sin\theta}\\& =e^{\cos\theta}\left(\cos(\sin\theta)+i\sin(\sin\theta)\right)\end{align}$$



We therefore have the equivalent:
$$\begin{align}\sum_{s=0}^{\infty}\frac{1}{(sn)!}&=\frac{1}{n}\sum_{r=0}^{n-1}\Re\left\{\exp\left(e^{\frac{2r\pi}{n}i}\right)\right\}\\n\sum_{s=0}^{\infty}\frac{1}{(sn)!}& = \sum_{r=0}^{n-1}\Re\left\{\exp\left(e^{\frac{2r\pi}{n}i}\right)\right\}\end{align}$$



Where $\Re\{\}$ denotes the real part.



Attempt at induction:
Assume the statement is true for $n=k$; try to prove that, as a consequence, it is true for $n=k+1$.




$$\begin{align}\frac{1}{k+1}\sum_{r=0}^{(k+1)-1}\Re\left\{\exp\left(e^{\frac{2r\pi}{k+1}i}\right)\right\}&=\frac{1}{k+1}\sum_{r=0}^{k-1}\Re\left\{\exp\left(e^{\frac{2r\pi}{k+1}i}\right)\right\}+\frac{1}{k+1}\Re\left\{\exp e^{\left(\frac{2(k+1)\pi}{k+1}i\right)}\right\}\\& =\frac{1}{k+1}\sum_{r=0}^{k-1}\Re\left\{\exp\left(e^{\frac{2r\pi}{\color{red}{k+1}}i}\right)\right\}+\frac{1}{k+1}\exp(1)\end{align}$$



Am I along the right lines here?
I'm having concerns given the highlighted term in red: specifically, if this were $\color{red}{k}$ I might have a chance at the induction step.



I have taken this from a question marked as difficult with a $\dagger$ and so wanted to complete it properly (I'm preparing to teach harder material). Unfortunately there are no answers, so I can't even see whether I'm on the right lines.



Any help would be gratefully appreciated.


Answer




An idea: the RHS is (the real part of) the average of the exponentials of $n$th-roots of unity. The LHS is a kind of lacunary series related to the exponential series. Maybe, by expanding each exponential in the RHS, many terms vanish and only one every $n$ remains.



This is credible, since every $n$th term has an $n$th power of an $n$th root of unity, that is, $1$. The other corresponding terms of each series may give a permutation of the $n$th roots of unity, whose sum is zero.






To develop this idea, rewrite the RHS using $\omega_n=\exp(2i\pi/n)$, a primitive $n$th root of unity, for integer $n>0$.



First, your RHS is $\mathrm{Re}(S_n)$, with




$$S_n=\frac1n\sum_{r=0}^{n-1}\exp(\omega_n^r)=\frac1n\sum_{r=0}^{n-1}\left(\sum_{k=0}^\infty \frac{\omega_n^{rk}}{k!}\right)=\frac1n\sum_{k=0}^\infty\left(\frac{1}{k!} \sum_{r=0}^{n-1}\omega_n^{rk}\right)$$



(since the double sum above is absolutely convergent, you can exchange the summations)



The inner sum is, with $e_{n,k}=\omega_n^k$,



$$\sum_{r=0}^{n-1}\omega_n^{rk}=\sum_{r=0}^{n-1}e_{n,k}^r$$



If $e_{n,k}\neq1$, then the sum is




$$\frac{e_{n,k}^n-1}{e_{n,k}-1}=\frac{(\omega_n^n)^k-1}{e_{n,k}-1}=0$$



And if $e_{n,k}=1$, the sum is simply $n$.



Now, $e_{n,k}=1$ iff $\omega_n^k=1$, iff $k=pn$ for some integer $p$.



Thus, the entire sum is



$$S_n=\frac{1}{n}\sum_{k=0}^\infty \frac{u_{n,k}}{k!}$$




Where $u_{n,k}=n$ if $k=pn$ and $0$ otherwise, that is



$$S_n=\sum_{p=0}^\infty \frac{1}{(pn)!}$$



And finally,



$$\sum_{s=0}^\infty \frac{1}{(sn)!}=\frac1n\sum_{r=0}^{n-1}\exp(\omega_n^r)$$



By the way, taking the real part is not necessary, as we have also proved that $S_n$ is a real number.
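The identity just proved is easy to test numerically (a sanity check added here, not part of the proof; truncating at 20 terms makes the tail negligibly small):

```python
import math

def lhs(n, terms=20):
    # truncated sum of 1 / (sn)!
    return sum(1 / math.factorial(s * n) for s in range(terms))

def rhs(n):
    # (1/n) * sum over r of exp(cos(2*pi*r/n)) * cos(sin(2*pi*r/n))
    return sum(
        math.exp(math.cos(2 * math.pi * r / n)) * math.cos(math.sin(2 * math.pi * r / n))
        for r in range(n)
    ) / n

for n in (1, 2, 3, 5, 8):
    assert abs(lhs(n) - rhs(n)) < 1e-12
print("identity verified for n = 1, 2, 3, 5, 8")
```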







In a comment above, I mention an interesting integral. Actually, the RHS is trivially a Riemann sum, and by letting $n\to\infty$ and estimating the rest of the LHS after the first term (which is always $1$), you get



$$\int_0^{2\pi} \exp(\cos x) \cos(\sin x) \mathrm{d}x=2\pi$$



Since we have also proved the imaginary part of $S_n$ is zero, you also get



$$\int_0^{2\pi} \exp(\cos x) \sin(\sin x) \mathrm{d}x=0$$




But this one was actually trivial: the integrand is both odd and periodic, so its integral on a period must be $0$.



Now, thinking again about this integral, putting together the real and imaginary parts, it can be rewritten $\displaystyle{\int_\Gamma \frac{e^z}{iz}\mathrm{d}z}$ ($\Gamma$ being the unit circle), and this integral is immediately $2\pi$ by the residue theorem, so it's not that interesting.


real analysis - Find all continuous functions in $0$ that $2f(2x) = f(x) + x $



I need to find all functions that are continuous at zero and satisfy
$$ 2f(2x) = f(x) + x $$



About




I know that there are many examples on that forum, but I don't understand one thing in them and I need additional explanation. (Nowhere do I see a similar problem. :( )



My try



I take $ y= 2x$; then
$$f(y) = \frac{1}{2}f\left(\frac{1}{2}y\right) + \frac{1}{4}y$$
after induction I get:
$$f(y) = \frac{1}{2^n}f\left(\frac{1}{2^n}y\right) + y\left(\frac{1}{2^2} + \frac{1}{2^4} + ... + \frac{1}{2^{2n}} \right)$$
I take $\lim_{n\rightarrow \infty} $
$$ \lim_{n\rightarrow \infty}f(y) = f(y) = \lim_{n\rightarrow \infty} \frac{1}{2^n}f\left(\frac{1}{2^n}y\right) + y\cdot \lim_{n\rightarrow \infty} \left(\frac{1}{2^2} + \frac{1}{2^4} + ... + \frac{1}{2^{2n}} \right)$$

$$f(y) = \lim_{n\rightarrow \infty} \frac{1}{2^n} \cdot f\left( \lim_{n\rightarrow \infty} \frac{1}{2^n}y \right) + \frac{1}{3}y$$



OK, here I have a question: what should I do next? How do I know that $$f(0) = 0 $$?
I think it is related to the "continuity at $0$" assumption, but a
function is continuous at $0$ when
$$ \lim_{y\rightarrow 0^+}f(y)=f(0)=\lim_{y\rightarrow 0^-}f(y)$$
and I don't see a reason why $f(0)=0$.



edit





  • OK, I know why $f(0) =0$, but why do I need the information about "continuity at the point $0$"? Is it because it justifies
    $$\lim_{n\rightarrow \infty}f\left(\frac{1}{2^n}y\right) = f\left( \lim_{n\rightarrow \infty} \frac{1}{2^n}y \right)$$ ?


Answer



A powerful method to solve these kinds of problems is to reduce to a simpler equation. In this case we want to eliminate the $x$ in the right hand side. Set $g(x)=f(x)+ax$, with $a$ to be found later. Note that $f$ is continuous if and only if $g$ is. Then the equality becomes
$$2(g(2x)-a(2x))=g(x)-ax+x$$
$$2g(2x)=g(x)+x(1+3a)$$
Therefore setting $a=-\frac13$ the equality simplifies to
$$g(2x)=\frac12g(x).$$
Now plugging zero gives $g(0)=0$. You can now prove by induction that for every $x$

$$
g\left(\frac{x}{2^n}\right)=2^ng(x).\tag{1}
$$

If $g$ is not identically zero, say $g(x_0)\neq 0$, then we find a contradiction. Indeed, by continuity at zero (which still holds for $g$), $g(\frac{x_0}{2^n})$ should converge to $g(0)=0$, while by $(1)$ it equals $2^n g(x_0)$, which is unbounded.



Therefore we conclude that $g$ must be identically zero, or equivalently $f(x)=\frac13 x$.
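A quick check of the solution just found (a verification sketch added here): $f(x)=\frac{x}{3}$ does satisfy the functional equation.

```python
f = lambda x: x / 3  # the unique continuous-at-zero solution found above

# 2 f(2x) = 4x/3 and f(x) + x = 4x/3, so both sides agree
for x in (-2.5, 0.0, 1.0, 7.25):
    assert abs(2 * f(2 * x) - (f(x) + x)) < 1e-12
print("f(x) = x/3 satisfies 2 f(2x) = f(x) + x")
```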


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...