Friday, 31 May 2019

fake proofs - Why is this wrong (complex numbers and proving 1=-1)?




$$(e^{2\pi i})^{1/2}=1^{1/2}$$ $$e^{\pi i}=1$$ $$-1=1$$ I think it is due to not taking the principal value, but please can someone explain in detail why this is wrong? Thanks.


Answer



You used two different branches of the function $x^{\frac{1}{2}}$.



Note that even in exponential form, $(e^{x})^\frac{1}{2}$ has two different branches: $e^{\frac{x}{2}}$ and $e^{\frac{x}{2}+\pi i}$.
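
A quick numerical sanity check (a minimal sketch using Python's built-in `cmath`; the `**` operator takes the principal branch) makes the branch switch visible:

```python
import cmath

# e^{2*pi*i} is exactly 1 as a complex number (up to rounding),
# so its principal square root is 1 ...
lhs = cmath.exp(2j * cmath.pi) ** 0.5
print(lhs)            # ~ (1+0j)

# ... while e^{pi*i} = -1 is the *other* branch of the square root of 1.
other_branch = cmath.exp(1j * cmath.pi)
print(other_branch)   # ~ (-1+0j)

# Halving the exponent silently switches branches: the principal
# value of 1^{1/2} is 1, not -1.
print((1 + 0j) ** 0.5)  # (1+0j)
```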


real analysis - Is there a general formula for $\int_0^l x^n \sin(m\pi x/l)\, dx$?




The integral
$$\int_0^l x^n \sin\left(\frac{m\pi x}l\right) dx$$
frequently arises for computing Fourier coefficients, for $m,n$ integers. Is there any general formula for that? What about $\cos$ instead of $\sin$?


Answer



$$J_{\sin}=\int_0^lx^n\sin\left(\dfrac{m\pi x}{l}\right)dx$$
$$J_{\sin}=\dfrac{\pi m\, l^{n+1}}{n+2}\,{}_1F_2\left(\dfrac{n}{2}+1;\dfrac{3}{2},\dfrac{n}{2}+2;-\dfrac{1}{4}m^2\pi^2\right)$$
with $$\Re(n)\gt -2,$$
where $${}_pF_q(a_1,\ldots,a_p;b_1,\ldots,b_q;z)$$
is the generalized hypergeometric function.




$$J_{\cos}=\int_0^lx^n\cos\left(\dfrac{m\pi x}{l}\right)dx$$
$$J_{\cos}=\dfrac{l^{n+1}}{n+1}\,{}_1F_2\left(\dfrac{n}{2}+\dfrac{1}{2};\dfrac{1}{2},\dfrac{n}{2}+\dfrac{3}{2};-\dfrac{1}{4}m^2\pi^2\right)$$
with $$\Re(n)\gt -1.$$
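
As a sanity check on both closed forms, including the $n+1$ denominator in the cosine prefactor, here is a sketch assuming the `mpmath` library (whose `hyp1f2` evaluates ${}_1F_2$), comparing each formula against direct numerical quadrature:

```python
from mpmath import mp, mpf, pi, sin, cos, quad, hyp1f2

mp.dps = 30
n, m, l = 3, 2, mpf(5)/2   # arbitrary test values

# J_sin: direct quadrature vs. the 1F2 closed form
j_sin_quad = quad(lambda x: x**n * sin(m*pi*x/l), [0, l])
j_sin_hyp  = pi*m*l**(n+1)/(n+2) * hyp1f2(n/2 + 1, mpf(3)/2, n/2 + 2, -(m*pi)**2/4)
print(j_sin_quad, j_sin_hyp)   # should agree to ~30 digits

# J_cos: direct quadrature vs. the 1F2 closed form
j_cos_quad = quad(lambda x: x**n * cos(m*pi*x/l), [0, l])
j_cos_hyp  = l**(n+1)/(n+1) * hyp1f2(n/2 + mpf(1)/2, mpf(1)/2, n/2 + mpf(3)/2, -(m*pi)**2/4)
print(j_cos_quad, j_cos_hyp)
```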


real analysis - A continuously differentiable function is weakly differentiable

Define a continuously differentiable function to be a function $f: \mathbb{R}^n \to \mathbb{R}$ which has a continuous derivative. Define a weakly differentiable function to be a function $f: \mathbb{R}^n \to \mathbb{R}$ which is locally integrable and for which there exist $n$ locally integrable functions $g_1, \dots, g_n$ satisfying the integration by parts formula, $$\int_{\mathbb{R}^n} f(x) \frac{\partial \varphi}{\partial x_j} (x) \, \mathrm{d}x = - \int_{\mathbb{R}^n} g_j(x) \varphi(x) \, \mathrm{d}x,$$ for all $j \in \{1, \dots, n\}$ and for every infinitely differentiable function $\varphi$ with compact support.



I've come across the claim that "if a function $f$ is continuously differentiable, then it is weakly differentiable." How can that be true? Continuously differentiable functions needn't be locally integrable it seems.

linear algebra - Function that satisfies $f(x+ y) = f(x) + f(y)$ but not $f(cx)=cf(x)$




Is there a function from $ \Bbb R^3 \to \Bbb R^3$ such that $$f(x + y) = f(x) + f(y)$$ but not $$f(cx) = cf(x)$$ for some scalar $c$?



Is there one such function even in one dimension? If so, what is it? If not, why?




I came across a function from $\Bbb R^3$ to $\Bbb R^3$ such that $$f(cx) = cf(x)$$ but not $$f(x + y) = f(x) + f(y)$$, and I was wondering whether there is one for the converse.



Although there is another post titled Overview of the Basic Facts of Cauchy valued functions, I do not understand it. If someone could explain in the simplest terms a function that satisfies my question and why, that would be great.


Answer



Take a $\mathbb Q$-linear function $f:\mathbb R\rightarrow \mathbb R$ that is not $\mathbb R$-linear and consider the function $g(x,y,z)=(f(x),f(y),f(z))$.



To see that such a function $f$ exists, notice that $\{1,\sqrt{2}\}$ is linearly independent over $\mathbb Q$, so there is a $\mathbb Q$-linear function $f$ that sends $1$ to $1$ and $\sqrt{2}$ to $1$. So clearly $f$ is not $\mathbb R$-linear. (Zorn's lemma is used for this.)


Thursday, 30 May 2019

ordinary differential equations - Calculating roller coaster loops - how to get x and y in terms of s?

In this video, the author presents a method to calculate shapes of roller coaster loops. At 13:20, three differential equations are presented to plot the shape of a loop providing a constant force $G$ for an initial velocity $v_0$:



$\begin{align}
\frac{d\theta}{ds} &= \frac{G-g\cos\left(\theta\right)}{v_0^2-2gy} \\
\frac{dx}{ds} &= \cos\theta \\
\frac{dy}{ds} &= \sin\theta
\end{align}$




where $g$ is acceleration due to gravity, $9.80665~\text{m}/\text{s}^2$.



I would like to eliminate the need for the $\frac{d\theta}{ds}$ term. Do I integrate both sides of all these equations with respect to $s$ and then substitute $\theta$ into the $x$ and $y$ equations, or am I stuck with three equations?



Another problem is that the first equation has $\theta$ and $y$ on the right side, so how would I proceed?
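
For what it's worth, the three equations integrate straightforwardly as a coupled system, so eliminating $\frac{d\theta}{ds}$ is not necessary in practice. A minimal sketch assuming `scipy`, with made-up values for $G$ and $v_0$:

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.80665          # m/s^2
G = 5 * g            # constant force target (assumed value)
v0 = 40.0            # initial speed in m/s (assumed value)

def rhs(s, state):
    theta, x, y = state
    dtheta = (G - g*np.cos(theta)) / (v0**2 - 2*g*y)
    return [dtheta, np.cos(theta), np.sin(theta)]

# stop if the coaster runs out of speed (v^2 = v0^2 - 2*g*y -> 0)
def too_slow(s, state):
    return v0**2 - 2*g*state[2] - 1.0
too_slow.terminal = True

# integrate along arc length s, starting at the bottom of the loop
sol = solve_ivp(rhs, [0, 120], [0.0, 0.0, 0.0], max_step=0.1, events=too_slow)
theta, x, y = sol.y
# (x, y) now traces the loop shape; plot x against y to see it
```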

sequences and series - Limits Problem: $\lim_{n \to \infty}[(1+\frac{1}{n})(1+\frac{2}{n})\cdots(1+\frac{n}{n})]^{\frac{1}{n}}$ is equal to..





Problem:



How to find the following limit :



$$\lim_{n \to \infty}[(1+\frac{1}{n})(1+\frac{2}{n})\cdots(1+\frac{n}{n})]^{\frac{1}{n}}$$ is equal to



(a) $\frac{4}{e}$



(b) $\frac{3}{e}$




(c) $\frac{1}{e}$



(d) $e$



Please suggest how to proceed with this problem. Thanks.


Answer



$$\log\left(\lim_{n \to \infty}[(1+\frac{1}{n})(1+\frac{2}{n})\cdots(1+\frac{n}{n})]^{\frac{1}{n}}\right) =\lim_{n \to \infty}\frac{\log(1+\frac{1}{n})+\log(1+\frac{2}{n})+\cdots+\log(1+\frac{n}{n})}{n} =\int_{1}^2 \log(x)\,dx= [x\log(x)-x]_{x=1}^{x=2}=2\log(2)-1$$ (The middle expression is a Riemann sum for $\int_0^1 \log(1+x)\,dx$, which equals $\int_1^2 \log(x)\,dx$ after substituting.)



This yields the solution $e^{2\log(2)-1}=4/e$.
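
A quick numerical check of the answer (plain Python, summing logarithms to avoid overflow):

```python
import math

def a(n):
    # [(1+1/n)(1+2/n)...(1+n/n)]^(1/n), computed via logs
    return math.exp(sum(math.log(1 + k/n) for k in range(1, n+1)) / n)

print(a(10**6))      # ~ 1.47151...
print(4 / math.e)    # 1.47151...
```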


trigonometry - What is the exact value of sin 1 + sin 3 + sin 5 ... sin 177 + sin 179 (in degrees)?

My attempt:



First I converted this expression to sum notation:




$\sin(1^\circ) + \sin(3^\circ) + \sin(5^\circ) + ... + \sin(175^\circ) + \sin(177^\circ) + \sin(179^\circ)$ = $\sum_{n=1}^{90}\sin(2n-1)^\circ$



Next, I attempted to use Euler's formula for the sum, since I needed this huge expression to be simplified in exponential form:



$\sum_{n=1}^{90}\sin(2n-1)^\circ$ = $\operatorname{Im}(\sum_{n=1}^{90}\operatorname{cis}(2n-1)^\circ)$



$\operatorname{Im}(\sum_{n=1}^{90}\operatorname{cis}(2n-1)^\circ)$ = $\operatorname{Im}(\sum_{n=1}^{90}e^{i(2n-1)^\circ})$



$\operatorname{Im}(\sum_{n=1}^{90}e^{i(2n-1)^\circ})$ = $\operatorname{Im}(e^{i\cdot 1^\circ} + e^{i\cdot 3^\circ} + e^{i\cdot 5^\circ} + \dots + e^{i\cdot 175^\circ} + e^{i\cdot 177^\circ} + e^{i\cdot 179^\circ})$




Next, I used the sum of the finite geometric series formula on this expression:



$\operatorname{Im}(e^{i\cdot 1^\circ} + e^{i\cdot 3^\circ} + \dots + e^{i\cdot 179^\circ})$ = $\operatorname{Im}\left(\dfrac{e^{i\cdot 1^\circ}(1-e^{i\cdot 180^\circ})}{1-e^{i\cdot 2^\circ}}\right)$



$\operatorname{Im}\left(\dfrac{e^{i\cdot 1^\circ}(1-e^{i\cdot 180^\circ})}{1-e^{i\cdot 2^\circ}}\right)$ = $\operatorname{Im}\left(\dfrac{2e^{i\cdot 1^\circ}}{1-e^{i\cdot 2^\circ}}\right)$, since $e^{i\cdot 180^\circ}=-1$.



Now I'm stuck here.
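
The reduction so far can be checked numerically (a small Python sketch; `cmath` works in radians, so degrees are converted):

```python
import cmath, math

deg = math.pi / 180
direct = sum(math.sin((2*n - 1)*deg) for n in range(1, 91))
reduced = (2*cmath.exp(1j*deg) / (1 - cmath.exp(2j*deg))).imag
print(direct, reduced)   # both ~ 57.2987, which is 1/sin(1 degree)
```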

summation - What is $\sum\limits_{i=1}^n \sqrt i$?



What is $\sum\limits_{i=1}^n\sqrt i\ $?




Also I noticed that $\sum\limits_{i=1}^ni^k=P(n)$
where $k$ is a natural number and $P$ is a polynomial of degree $k+1$. Does that also hold for any real positive number? How could one prove it?


Answer



$$
\sum_{i=1}^n\sqrt{i}=f(n)
$$



$$
\sum_{i=1}^{n-1}\sqrt{i}=f(n-1)

$$



$$
f(n)-f(n-1)=\sqrt{n}
\tag 1$$



We know Taylor expansion



$$
f(x+h)=f(x)+hf'(x)+\frac{h^2 f''(x)}{2!}+\frac{h^3f'''(x)}{3!}+....

$$



Thus



$$
f(n-1)=f(n)-f'(n)+\frac{f''(n)}{2!}-\frac{f'''(n)}{3!}+....
$$



$$
f(n)-f(n)+f'(n)-\frac{f''(n)}{2!}+\frac{f'''(n)}{3!}-....=\sqrt{n}

$$



$$
f'(n)-\frac{f''(n)}{2!}+\frac{f'''(n)}{3!}-...=\sqrt{n}
$$



$$
f(n)-\frac{f'(n)}{2!}+\frac{f''(n)}{3!}-...=\int \sqrt{n} dn
$$




$$
f(n)-\frac{f'(n)}{2!}+\frac{f''(n)}{3!}-\frac{f'''(n)}{4!}...=\frac{2}{3}n^\frac{3}{2} +c
$$



$$
\frac{1}{2} ( f'(n)-\frac{f''(n)}{2!}+\frac{f'''(n)}{3!}-...)=\frac{1}{2}\sqrt{n}
$$



$$
f(n)+ \left(-\frac{1}{2\cdot 2} +\frac{1}{3!}\right)f''(n)+\left(\frac{1}{2\cdot 3!} -\frac{1}{4!}\right)f'''(n)+\cdots=\frac{2}{3}n^\frac{3}{2}+\frac{1}{2}\sqrt{n}+c

$$



$$
f''(n)-\frac{f'''(n)}{2!}+\frac{f^{(4)}(n)}{3!}-\cdots=\frac{d(\sqrt{n})}{dn}=\frac{1}{2\sqrt{n}}
$$



If you continue in that way to cancel the $f^{(r)}(n)$ terms step by step, you will get



$$
f(n)=c+\frac{2}{3}n^\frac{3}{2}+\frac{1}{2}\sqrt{n}+a_2\frac{1}{\sqrt{n}}+a_3\frac{1}{n\sqrt{n}}+a_4\frac{1}{n^2\sqrt{n}}+....

$$



You can find the constants $a_n$ via Bernoulli numbers; please see the Euler-Maclaurin formula. I just wanted to show the method. http://planetmath.org/eulermaclaurinsummationformula



You can also apply the same method to $\sum_{i=1}^n i^k=P(n)$, where $k$ is any real number.



You get



$$
\sum_{i=1}^n(i^k)=P(n)=c+\frac{1}{k+1}n^{k+1}+\frac{1}{2}n^{k}+b_2kn^{k-1}+....

$$



$$
P(1)=1=c+\frac{1}{k+1}+\frac{1}{2}+b_2k+....
$$



$$
c=1-\frac{1}{k+1}-\frac{1}{2}-b_2k-\cdots
$$
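
A numerical check of the resulting expansion for $\sum\sqrt i$ (plain Python; the value $a_2=\frac{1}{24}$ used below is taken from the Euler-Maclaurin formula as an assumption, and the constant $c$ is estimated empirically):

```python
import math

def exact(n):
    return math.fsum(math.sqrt(i) for i in range(1, n + 1))

def tail(n):
    # leading terms of the expansion above, with a_2 = 1/24 assumed
    return (2/3)*n**1.5 + 0.5*math.sqrt(n) + 1/(24*math.sqrt(n))

c = exact(10**6) - tail(10**6)        # estimate the constant empirically
print(c)                              # ~ -0.20789, consistent with zeta(-1/2)
print(exact(1000), c + tail(1000))    # agree to high precision
```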


Wednesday, 29 May 2019

abstract algebra - Write $x^3 + 2x+1$ as a product of linear polynomials over some extension field of $\mathbb{Z}_3$



Write $x^3 + 2x+1$ as a product of linear polynomials over some extension field of $\mathbb{Z}_3$




Long division seems to be taking me nowhere. If $\beta$ is a root in some extension, then using long division one can write



$$x^3 + 2x+1 = (x-\beta) (x^2+ \beta x+ \beta^2 + 2)$$



Here is a similar question



Suppose that $\beta$ is a zero of $f(x)=x^4+x+1$ in some field extension $E$ of $\mathbb{Z}_2$. Write $f(x)$ as a product of linear factors in $E[x]$.



Is there a general method to approach such problems, or are they done through trial and error?




I haven't covered Galois theory, and the problem is from the field extensions chapter of Gallian, so please avoid Galois theory if possible.


Answer



If you want to continue the way you started, i.e. with
$$x^3 + 2x+1 = (x-\beta) (x^2+ \beta x+ \beta^2 + 2)$$
you can try to find the roots of the second factor by using the usual method for quadratics, adjusted for characteristic 3. I'll start it so you know what I mean:



To solve $x^2+ \beta x+ \beta^2 + 2=0$, we can complete the square, noting that $2\beta + 2\beta = \beta$ and $4\beta^2=\beta^2$ in any extension of $\mathbb{Z}_3$ (since $4\equiv 1$).
$$x^2+ \beta x+ \beta^2 + 2=(x+2\beta)^2 + 2=0$$
But this is easy now since this is the same as
$$(x+2\beta)^2 = 1$$

which should allow you to get the remaining roots.
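
One can confirm the factorization with a brute-force search in $\mathrm{GF}(27)$ (a self-contained sketch; the encoding of elements as coefficient triples $(a_0,a_1,a_2)$ for $a_0+a_1\beta+a_2\beta^2$, with reduction rule $\beta^3=\beta+2$ read off from $\beta^3+2\beta+1=0$ over $\mathbb{Z}_3$, is my own choice of representation):

```python
from itertools import product

def mul(u, v):
    # multiply (a0 + a1*b + a2*b^2)(c0 + c1*b + c2*b^2) mod 3,
    # reducing with b^3 = b + 2 (from b^3 + 2b + 1 = 0 over Z_3)
    raw = [0]*5
    for i in range(3):
        for j in range(3):
            raw[i+j] += u[i]*v[j]
    for k in (4, 3):                  # b^4 = b^2 + 2b, then b^3 = b + 2
        raw[k-3+1] += raw[k]
        raw[k-3]   += 2*raw[k]
        raw[k] = 0
    return tuple(c % 3 for c in raw[:3])

def f(x):                             # evaluate x^3 + 2x + 1 in GF(27)
    x3 = mul(mul(x, x), x)
    lin = tuple((x3[i] + 2*x[i]) % 3 for i in range(3))
    return ((lin[0] + 1) % 3, lin[1], lin[2])

roots = [x for x in product(range(3), repeat=3) if f(x) == (0, 0, 0)]
print(roots)   # [(0,1,0), (1,1,0), (2,1,0)], i.e. b, b+1, b+2
```

The three roots $\beta$, $\beta+1$, $\beta+2$ match what completing the square gives: $x=1-2\beta=1+\beta$ and $x=-1-2\beta=2+\beta$.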


linear algebra - cross product and RHR

Edit: I really like the approach taken by $\mathbb{R}^n$ in the comments below. I posted this late last night and didn't get to ask him why $\bf R(a \times b) = Ra \times Rb$ for any rotation matrix $\bf R$. I'm not sure how to prove that: along with trying to evaluate the determinant directly, I looked in a linear algebra book and tried using the "norm-preserving" property of rotation matrices to evaluate $|\mathbf R(\mathbf a \times \mathbf b) - \mathbf R\mathbf a \times \mathbf R\mathbf b|^2$, but didn't succeed. How do you do this?







In one of my courses, a professor briefly summarized properties of the cross product at the start of last class. I realized that the assertion



$$ \bf a \times (b + c) = a\times b + a\times c$$



was actually surprising to me. Clearly this is fundamental (if you don't accept this, you can't derive a way of computing the cross product), but the proof is a bit slippery.



If you start from $\bf{ a \times b}$ $:= \bf \hat n |a||b|$ $ \sin\theta$, where $ \bf \hat n$ comes from the right hand rule, then if you can prove as a lemma $\bf a \cdot (b \times c) = (a \times b) \cdot c$, then there's a neat little proof which I found here. But the only argument I've seen for the needed lemma is about the volume of a parallelepiped, which only convinces me that $\bf | a \cdot (b \times c)| = |(a \times b) \cdot c|$.



I think I prefer the approach in one of my textbooks, which starts by defining the cross product by determinants - so, distributivity holds - and proves most of the needed properties. But it wusses out at a crucial point: "it can be proven that the orthogonal vector obtained from this matrix obeys the Right Hand Rule".




Could somebody either prove that lemma, or the textbook claim? (Preferably the latter.)
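
Regarding the edit at the top: the identity $\bf R(a \times b) = Ra \times Rb$ is at least easy to check numerically before trying to prove it (a sketch assuming `numpy`; a random rotation is built by orthogonalizing a random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# random rotation: orthogonalize a random matrix, then force det = +1
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

a, b = rng.standard_normal(3), rng.standard_normal(3)
lhs = Q @ np.cross(a, b)
rhs = np.cross(Q @ a, Q @ b)
print(np.allclose(lhs, rhs))   # True: rotations preserve the cross product

# a reflection (det = -1) flips the sign instead, which is exactly
# the handedness/RHR subtlety discussed above
R = Q.copy(); R[:, 0] = -R[:, 0]
print(np.allclose(R @ np.cross(a, b), -np.cross(R @ a, R @ b)))  # True
```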

Tuesday, 28 May 2019

reference request - Why can't we define more elementary functions?



$\newcommand{\lax}{\operatorname{lax}}$

Liouville's theorem is well known and it asserts that:




The antiderivatives of certain elementary functions cannot themselves be expressed as elementary functions.




The problem I have with this is: what is an elementary function? Who defines them? How do we define them?



Someone can, for example, say that there is a function called $\lax(\cdot)$ defined as:




$$
\lax\left(x\right)=\int_{0}^{x}\exp(-t^2)\mathrm{d}t.
$$



Then, we can say that $\lax(\cdot)$ is a new elementary function much like $\exp(\cdot)$ and $\log(\cdot)$, $\cdots$.



I just do not understand elementary functions, or the reasons for defining certain functions as elementary.



Maybe I should read some papers or books before posting this question. Should I? I just would like to get some help from you.


Answer




Elementary functions are finite sums, differences, products, quotients, compositions, and $n$th roots of constants, polynomials, exponentials, logarithms, trig functions, and all of their inverse functions.



The reason they are defined this way is because someone, somewhere thought they were useful. And other people believed him. Why, for example, don't we redefine the integers to include $1/2$? Is this any different than your question about $\mathrm{lax}$ (or rather $\operatorname{erf}(x)$)?



Convention is just that, and nothing more.


real analysis - Prove that $1 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + \cdots + \frac{1}{\sqrt{n}}\geq \sqrt{n}$

Can anyone solve it, or give me an idea of how to try to do it myself?




$$1 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + ... + \frac{1}{\sqrt{n}}\geq \sqrt{n}, \;\;\;n \in \mathbb{N^*}$$

complex numbers - $1/i=i$. I must be wrong but why?




$$\frac{1}{i} = \frac{1}{\sqrt{-1}} = \frac{\sqrt{1}}{\sqrt{-1}} = \sqrt{\frac{1}{-1}} = \sqrt{-1} = i$$




I know this is wrong, but why? I often see people making simplifications such as $\frac{\sqrt{2}}{2} = \frac{1}{\sqrt{2}}$, and I would calculate such a simplification in the manner shown above, namely



$$\frac{\sqrt{2}}{2} = \frac{\sqrt{2}}{\sqrt{4}} = \sqrt{\frac{2}{4}} = \frac{1}{\sqrt{2}}$$


Answer



What you are doing is a version of
$$
-1=i^2=\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)}=\sqrt1=1.
$$
It simply shows that for non-positive numbers, it is not always true that $\sqrt{ab}=\sqrt{a}\sqrt{b}$.
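
Python's `cmath`, which always takes principal square roots, makes the failure concrete (a small illustrative check):

```python
import cmath

print(1 / 1j)                                  # -1j  (the true value)
print(cmath.sqrt(1) / cmath.sqrt(-1))          # -1j  (still fine)
print(cmath.sqrt(1 / -1))                      # 1j   (branch switched!)
print(cmath.sqrt(-1 * -1), cmath.sqrt(-1)**2)  # (1+0j) vs (-1+0j)
```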


trigonometry - Generalization of the sum of angles formula for any number of angles



In another question one user helped me prove that the sum of three angles was a multiple of 360 degrees with formulas for sine and cosine sums of three angles. The sine formula was:
$\sin⁡(α+β+γ)=\sin ⁡α\cos⁡\beta\cos ⁡γ+\cos ⁡α\sin ⁡β\cos ⁡γ+\cos ⁡α\cos ⁡β\sin ⁡γ-\sin ⁡α\sin ⁡β\sin ⁡γ$



I infer that the pattern for five angles is as shown below. For brevity, I'm using a shorthand, e.g. $\sin⁡(α+β+γ):s(a_1+a_2+a_3 )$ and $\sin ⁡α\cos ⁡\beta\cos ⁡γ∶s_1 c_2 c_3$. So, is the following the proper pattern for summing five angles?
$$s(a_1+a_2+a_3+a_4+a_5)=s_1 c_2 c_3 c_4 c_5+c_1 s_2 c_3 c_4 c_5+c_1 c_2 s_3 c_4 c_5+c_1 c_2 c_3 s_4 c_5+c_1 c_2 c_3 c_4 s_5-s_1 s_2 s_3 s_4 s_5$$



If so, I can also infer the pattern for cosine and use the patterns for any number of angles.



Answer



$s(a_1+a_2+a_3) = s_1c_2c_3+c_1s_2c_3+c_1c_2s_3-s_1s_2s_3 $



$c(a_1+a_2+a_3) =$ $$s(\frac \pi 2 -(a_1+a_2+a_3))
\\ =s((\frac \pi 2 -a_1)+(-a_2)+(-a_3))
\\= c_1c_2c_3 -s_1s_2c_3- s_1c_2s_3-c_1s_2s_3$$
And we also know ...
$s(a_4+a_5) = s_4c_5+ c_4s_5$



$c(a_4+a_5) = c_4c_5- s_4s_5$




So



$$s(a_1+a_2+a_3+a_4+a_5) \\
=( s_1c_2c_3+c_1s_2c_3+c_1c_2s_3-s_1s_2s_3) ( c_4c_5- s_4s_5)
\\+ ( c_1c_2c_3 -s_1s_2c_3- s_1c_2s_3-c_1s_2s_3)(s_4c_5+ c_4s_5 )
$$
There will be no cancellations, so you will get 16 terms (they are half of the $2^5=32$ possible arrangements of sines and cosines: those having an odd number of sines).



The coefficient in front of each term having $(2k+1)$ sines will be $(-1)^k$




So for 5 angles you would expect...



$\binom 51=5$ terms having 1 sine , coefficient = +1



$\binom 53=10$ terms having 3 sines , coefficient = -1



$\binom 55=1$ term having 5 sines , coefficient = +1
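
That sign pattern, odd-size subsets of sines with coefficient $(-1)^k$ for $2k+1$ sines, can be verified numerically for arbitrary angles (plain-Python sketch):

```python
import math
from itertools import combinations

def sin_of_sum(angles):
    # sum over odd-size subsets S: sign * prod(sin over S) * prod(cos over rest)
    n = len(angles)
    total = 0.0
    for r in range(1, n + 1, 2):
        sign = (-1) ** ((r - 1) // 2)
        for S in combinations(range(n), r):
            term = sign
            for i in range(n):
                term *= math.sin(angles[i]) if i in S else math.cos(angles[i])
            total += term
    return total

angles = [0.3, 1.1, -0.7, 2.4, 0.9]
print(sin_of_sum(angles), math.sin(sum(angles)))   # agree
```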


Monday, 27 May 2019

calculus - Limit of $\sqrt{4x^2 + 3x} - 2x$ as $x \to \infty$



$$\lim_{x\to\infty} \sqrt{4x^2 + 3x} - 2x$$



I thought I could multiply both numerator and denominator by $\frac{1}{x}$, giving



$$\lim_{x\to\infty}\frac{\sqrt{4 + \frac{3}{x}} -2}{\frac{1}{x}}$$




then as x approaches infinity, $\frac{3}{x}$ essentially becomes zero, so we're left with 2-2 in the numerator and $\frac{1}{x}$ in the denominator, which I thought would mean that the limit is zero.



That's apparently wrong and I understand (algebraically) how to solve the problem using the conjugate, but I don't understand what's wrong about the method I tried to use.


Answer



Hint: $\sqrt{4x^{2}+3x}-2x=\frac{3x}{\sqrt{4x^{2}+3x}+2x}=\frac{3}{\sqrt{4+\frac{3}{x}}+2}$, which tends to $\frac{3}{4}$ as $x\to\infty$.
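
A quick numerical check (plain Python) confirms the value $\frac34$:

```python
import math

f = lambda x: math.sqrt(4*x*x + 3*x) - 2*x
for x in (10.0**3, 10.0**6, 10.0**9):
    print(x, f(x))   # -> 0.75 as x grows
```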


probability theory - Poisson point process construction



Let $(\mathcal{X},\mathcal{H},\mu)$ be a $\sigma$-finite measure space.



Suppose that $0<\mu(\mathcal{X})<\infty$. Let $(N;X_j:j=1,2,\ldots)$ be independent on a probability space $(\Omega,\mathcal{F},P)$ with each $X_j$ having distribution $\mu/\mu(\mathcal{X})$ on $(\mathcal{X},\mathcal{H})$ and $N$ having a Poisson distribution with mean $\mu(\mathcal{X})$.



Set

$$V(\omega)= (X_j(\omega):j\leq N(\omega))$$
a multi set of members of $\mathcal{X}$.



How can I show that this construction gives a Poisson point process $V$ on $(\mathcal{X},\mathcal{H})$ with intensity $\mu$?



Here I note that $V(\omega)=\emptyset$ if $N(\omega)=0$ and I understand that in general, a point process is a random variable $N$ from some probability space $(\Omega,\mathcal{F},P)$ to a space of counting measures on ${\bf R}$, say $(M,\mathcal{M})$. So each $N(\omega)$ is a measure which gives mass to points
$$
\ldots < X_{-2}(\omega) < X_{-1}(\omega) < X_0(\omega) < X_1(\omega) < X_2(\omega) < \ldots
$$
of ${\bf R}$ (here the convention is that $X_0 \leq 0$). The $X_i$ are random variables themselves, called the points of $N$.




The intensity of a point process is defined to be
$$
\lambda_N
= {\bf E}[N(0,1]].
$$



This problem is really hard for me; could someone help me, please?



Thanks for your time and help.



Answer



Technically, you should put $M(\omega):=\sum_{v\in V(\omega)}\delta_v=\sum_{j=1}^{N(\omega)}\delta_{X_j(\omega)}$, and this random variable will be a Poisson point process of intensity $\mu$. To prove this, we need to show that if $A_1,\ldots,A_n\in\mathcal H$ are disjoint, then
$$\mathbf P(M(A_i)=k_i,1\le i\le n)=\prod_{i=1}^ne^{-\mu(A_i)}\frac{\mu(A_i)^{k_i}}{k_i!}.$$
Break up the problem into steps. First, set $k:=\sum_{i=1}^nk_i$ and $A:=\bigcup_{i=1}^nA_i$. For $m\ge k$, conditioned on $N=m$, the vector
$$\Big(M(A_1),\ldots,M(A_n),M(\mathcal X\setminus A)\Big)$$
is multinomial with $m$ trials and respective event probabilities $\frac{\mu(A_1)}{\mu(\mathcal X)},\ldots,\frac{\mu(A_n)}{\mu(\mathcal X)},\frac{\mu(\mathcal X\setminus A)}{\mu(\mathcal X)}$. Thus,
$$\mathbf P(M(A_i)=k_i,1\le i\le n|N=m)=\frac{m!}{k_1!k_2!\ldots k_n!(m-k)!}\left(\frac{\mu(A_1)}{\mu(\mathcal X)}\right)^{k_1}\ldots\left(\frac{\mu(A_n)}{\mu(\mathcal X)}\right)^{k_n}\left(\frac{\mu(\mathcal X\setminus A)}{\mu(\mathcal X)}\right)^{m-k}\\
=m!(\mu(\mathcal X))^{-m}\prod_{i=1}^n\frac{\mu(A_i)^{k_i}}{k_i!}\cdot\frac{\mu(\mathcal X\setminus A)^{m-k}}{(m-k)!}.$$
Since $\mathbf P(N=m)=e^{-\mu(\mathcal X)}\frac{\mu(\mathcal X)^m}{m!}=\frac{\mu(\mathcal X)^m}{m!}\prod_{i=1}^ne^{-\mu(A_i)}\cdot e^{-\mu(\mathcal X\setminus A)}$
\begin{align*}

\mathbf P(M(A_i)=k_i,1\le i\le n, N=m) &=\mathbf P(M(A_i)=k_i,1\le i\le n|N=m)\mathbf P(N=m)\\
&=\prod_{i=1}^ne^{-\mu(A_i)}\frac{\mu(A_i)^{k_i}}{k_i!}\cdot e^{-\mu(\mathcal X\setminus A)}\frac{\mu(\mathcal X\setminus A)^{m-k}}{(m-k)!}.
\end{align*}
Now simply sum over all $m\ge k$ and use the fact that $\sum_{m=k}^\infty \frac{x^{m-k}}{(m-k)!}=e^x$.
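
A simulation of the construction (a sketch assuming `numpy`, taking $\mathcal X=[0,1]^2$ with $\mu$ a multiple of Lebesgue measure; all numerical values are arbitrary choices) shows the counts in disjoint sets behaving like independent Poisson variables:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_total = 50.0                      # mu(X); here mu/mu(X) = Uniform([0,1]^2)
trials = 20000

# disjoint sets A1 = [0,1/4) x [0,1],  A2 = [1/4,1/2) x [0,1]
counts = np.zeros((trials, 2), dtype=int)
for t in range(trials):
    N = rng.poisson(mu_total)                 # number of points
    pts = rng.random((N, 2))                  # iid with law mu/mu(X)
    counts[t, 0] = np.sum(pts[:, 0] < 0.25)
    counts[t, 1] = np.sum((pts[:, 0] >= 0.25) & (pts[:, 0] < 0.5))

# each M(A_i) should be Poisson(mu(A_i)) = Poisson(12.5), independent
print(counts.mean(axis=0), counts.var(axis=0))   # both ~ [12.5, 12.5]
print(np.corrcoef(counts.T)[0, 1])               # ~ 0 (uncorrelated)
```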


real analysis - Construct a merely finitely additive measure on a $\sigma$-algebra

Is it possible to give an explicit construction of a set function, defined on a $\sigma$-algebra, with all the properties of a measure except that it is merely finitely additive and not countably additive?



Let me elaborate. By "explicit" I mean that the example should not appeal to non-constructive methods like the Hahn-Banach theorem or the existence of free ultrafilters. I'm aware that such examples exist, but I'm looking for something more concrete. If such constructions are not possible, I'm especially interested in understanding why that is so.



This question is similar, but, so far as I can tell, not identical to several other questions asked on this site and MO. For example, I've learned that proving the existence of the "integer lottery" on $P(\mathbb{N})$ requires the Axiom of Choice (https://mathoverflow.net/questions/95954/how-to-construct-a-continuous-finite-additive-measure-on-the-natural-numbers).




That's the sort of result I'm interested in, but it doesn't fully answer my question. My question doesn't require that the $\sigma$-algebra in question be $P(\Omega)$, and I'm interested in general $\Omega$, not just $\Omega = \mathbb{N}$.

Sunday, 26 May 2019

elementary set theory - Let $A,B,C$ be sets, and $B \cap C=\emptyset$. Show $|A^{B \cup C}|=|A^B \times A^C|$




Let $A,B,C$ be sets, and $B \cap C=\emptyset$. Show $|A^{B \cup C}|=|A^B \times A^C|$ by defining a bijection $f:A^{B \cup C} \rightarrow A^B \times A^C$.



Any hints on this one?



Thank you!


Answer



Hint: If $f$ is a function from $B\cup C$ to $A$, let $f_B$ ($f$ restricted to $B$) be the function from $B$ to $A$ defined by $f_B(b)=f(b)$. Define $f_C$ analogously.



Now show that the mapping $\varphi$ which takes any $f$ in $A^{B\cup C}$ to the ordered pair $(f_B,f_C)$ is a bijection from $A^{B\cup C}$ to $A^B\times A^C$.
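
For small finite sets the bijection can be checked exhaustively (plain-Python sketch; encoding functions as dicts is an arbitrary choice):

```python
from itertools import product

A, B, C = ['a', 'b'], [0, 1], [2, 3, 4]        # B and C disjoint
BC = B + C

def functions(dom, cod):
    # all functions dom -> cod, each encoded as a dict
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

# the map phi: f |-> (f restricted to B, f restricted to C)
phi = {}
for f in functions(BC, A):
    f_B = tuple((b, f[b]) for b in B)
    f_C = tuple((c, f[c]) for c in C)
    phi[tuple(f.items())] = (f_B, f_C)

print(len(phi), len(set(phi.values())))              # 32 32 -> injective
print(len(functions(B, A)) * len(functions(C, A)))   # 32    -> surjective by counting
```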



Saturday, 25 May 2019

Limit of a sequence including an infinite product: $\lim\limits_{n \to\infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)$




I need to find the limit of the following sequence:
$$\lim\limits_{n \to\infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)$$


Answer




PRIMER:



In THIS ANSWER, I showed using only the limit definition of the exponential function and Bernoulli's Inequality that the logarithm function satisfies the inequalities



$$\bbox[5px,border:2px solid #C0A000]{\frac{x-1}{x}\le \log(x)\le x-1} \tag 1$$




for $x>0$.







Note that we have



$$\begin{align}
\log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)&=\sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)\tag 2

\end{align}$$



Applying the right-hand side inequality in $(1)$ to $(2)$ reveals



$$\begin{align}
\sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)&\le \sum_{k=1}^n \frac{k}{n^2}\\\\
&=\frac{n(n+1)}{2n^2} \\\\
&=\frac12 +\frac{1}{2n}\tag 3
\end{align}$$




Applying the left-hand side inequality in $(1)$ to $(2)$ reveals



$$\begin{align}
\sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)&\ge \sum_{k=1}^n \frac{k}{k+n^2}\\\\
&\ge \sum_{k=1}^n \frac{k}{n+n^2}\\\\
&=\frac{n(n+1)}{2(n^2+n)} \\\\
&=\frac12 \tag 4
\end{align}$$



Putting $(2)-(4)$ together yields




$$\frac12 \le \log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)\le \frac12+\frac{1}{2n} \tag 5$$



whereby application of the squeeze theorem to $(5)$ gives



$$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty} \log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)=\frac12}$$



Hence, we find that



$$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)=\sqrt e}$$




And we are done!
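
A numerical check (plain Python):

```python
import math

def prod(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= 1 + k / n**2
    return p

print(prod(10**6))        # ~ 1.64872...
print(math.sqrt(math.e))  # 1.64872...
```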


real analysis - Find $\lim_{n \to \infty} \frac{(\sqrt[n]{7^n+n}-\frac{1}{7})^n}{7^n-n^7}$





Find
$$\lim_{n \to \infty} \frac{(\sqrt[n]{(7^n+n)}-\frac{1}{7})^n}{7^n-n^7}$$




I know that it can be done using the squeeze theorem, but I cannot find a proper upper bound.


Answer



You may write
$$
\begin{align}
\frac{(\sqrt[n]{(7^n+n)}-\frac{1}{7})^n}{7^n-n^7}&=\frac{\left(7\left(1+\frac{n}{7^n}\right)^{1/n}-\frac{1}{7}\right)^n}{7^n\left(1-\frac{n^7}{7^n}\right)}\\\\

&=\frac{7^n\left(1+\mathcal{O}\left(\frac{1}{7^n}\right)-\frac{1}{49}\right)^n}{7^n\left(1-\frac{n^7}{7^n}\right)}\\\\
&=\frac{\left(1+\mathcal{O}\left(\frac{1}{7^n}\right)-\frac{1}{49}\right)^n}{\left(1-\frac{n^7}{7^n}\right)}\\\\
&=\frac{\left(\frac{48}{49}\right)^n\left(1+\mathcal{O}\left(\frac{1}{7^n}\right)\right)^n}{1-\frac{n^7}{7^n}}\\\\
&=\frac{\left(\frac{48}{49}\right)^n\left(1+\mathcal{O}\left(\frac{n}{7^n}\right)\right)}{1-\frac{n^7}{7^n}}\\\\
& \sim \left(\frac{48}{49}\right)^n
\end{align}
$$ and the desired limit is equal to $0$.
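
Numerically (plain Python; floats suffice for the $n$ used here), the ratio indeed tracks $\left(\frac{48}{49}\right)^n$ and tends to $0$:

```python
def ratio(n):
    num = ((7**n + n) ** (1/n) - 1/7) ** n
    return num / (7**n - n**7)

for n in (50, 100, 200):
    print(n, ratio(n), (48/49) ** n)   # the two columns track each other
```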


calculus - $\frac{dy}{dx}$ is one thing, but why in integration can we treat it as two different terms?

When I was learning differentiation, my lecturer told us that the derivative $\frac{dy}{dx}$ is one thing; it is not the ratio between $dy$ and $dx$. However, when I learned about integration, sometimes we need to do a substitution. For example, when integrating $\int_{0}^{1}2x\,dx$ and substituting $y=2x$, we can substitute $dy=2\,dx$. But why in this case can it be treated as two different terms instead of one?

Friday, 24 May 2019

calculus - A limit of a sequence

I'm trying to prove the following limit



$$(\frac{2^n}{n!}) \to 0$$



But it seems difficult to me. How can I prove it?




Thanks.

Wednesday, 22 May 2019

Reducing double summation to geometric series

How can I reduce this summation to a geometric series?



$\displaystyle\sum\limits_{i=0}^n x^{25i}\cdot\displaystyle\sum\limits_{j=i}^n x^{5j}$



I'm a little confused since the second summation begins at $i$.

calculus - Evaluating this integral $\int \frac{x^2\, dx}{(x\sin x+\cos x)^2}$




The question:




Compute$$
\int \frac {x^2 \, \operatorname{d}\!x} {(x\sin x+\cos x)^2}
$$




Tried integration by parts. That didn't work.




How do I proceed?


Answer



$$\text{Observe that, }\frac{d(x\sin x+\cos x)}{dx}=x\cos x$$



$$ \int \frac {x^2 \, \operatorname{d}\!x} {(x\sin x+\cos x)^2} =\int \frac x{\cos x}\cdot \frac{x\cos x}{(x\sin x+\cos x)^2}dx$$



So, if $z=x\sin x+\cos x, dz=x\cos xdx$



So, $\int \frac{x\cos x}{(x\sin x+\cos x)^2}dx=\int \frac{dz}{z^2}=-\frac1z=-\frac1{x\sin x+\cos x}$




So, $$I=\frac x{\cos x}\int \frac{x\cos x}{(x\sin x+\cos x)^2}dx-\int \left(\frac{d(\frac x{\cos x})}{dx}\int \frac{x\cos x}{(x\sin x+\cos x)^2}dx\right)dx$$



$$=-\frac x{\cos x(x\sin x+\cos x)}+\int \left(\frac{x\sin x+\cos x}{\cos^2x}\right)\left(\frac1{x\sin x+\cos x} \right)dx$$



$$=-\frac x{\cos x(x\sin x+\cos x)}+\int\sec^2xdx$$



$$=-\frac x{\cos x(x\sin x+\cos x)}+\tan x+C$$ where $C$ is an arbitrary constant of integration



$$\text{Another form will be } \frac{\sin x-x\cos x}{x\sin x+\cos x}+C$$
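
The antiderivative can be verified symbolically (a sketch assuming `sympy`):

```python
import sympy as sp

x = sp.symbols('x')
integrand = x**2 / (x*sp.sin(x) + sp.cos(x))**2
antideriv = -x / (sp.cos(x)*(x*sp.sin(x) + sp.cos(x))) + sp.tan(x)

# derivative of the result minus the integrand: expect 0
print(sp.simplify(sp.diff(antideriv, x) - integrand))

# the alternative form given above: expect 0 as well
alt = (sp.sin(x) - x*sp.cos(x)) / (x*sp.sin(x) + sp.cos(x))
print(sp.simplify(antideriv - alt))
```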



Tuesday, 21 May 2019

Circular logic in evaluation of elementary limits



Our calculus book states elementary limits like $\lim_{x\to0} \frac{\sin x}{x}=1$, or $\lim_{x\to0} \frac{\ln (1+x)}{x}=1$, $\lim_{x\to0} \frac{e^x-1}{x}=1$ without proof.




At the end of the chapter of limits, it shows that these limits can be evaluated by using series expansion (which is not in our high school calculus course).



However, series expansion of a function can only be evaluated by repeatedly differentiating it.



And, to calculate derivative of $\sin x$, one must use the $\lim_{x\to0} \frac{\sin x}{x}=1$.



So this seems to end up in circular logic. The same is true for the other such limits.



I found that $\lim_{x\to0} \frac{e^x-1}{x}=1$ can be proved using binomial theorem.




How to evaluate other elementary limits without series expansion or L'Hôpital Rule?



This answer does not explain how that limit can be evaluated.


Answer



Hint to prove it yourself:



[Figure: diagram of triangle $ABC$, the circular sector $ABC$, and triangle $ABD$]



Let $A_1$ be the area of triangle $ABC$, $A_2$ be the area of the circular sector $ABC$, and $A_3$ be the area of triangle $ABD$. Then we have:




$$A_1 < A_2 < A_3$$

Try to find expressions for $A_1,A_2,$ and $A_3$, fill them into the inequality, and finally use the squeeze theorem.


abstract algebra - Showing that $\sqrt{5} \in \mathbb{Q}(\sqrt[p]{2} + \sqrt{5})$




I am attempting to show that $\sqrt{5} \in \mathbb{Q}(\sqrt[p]{2} + \sqrt{5})$, where $p > 2$ is prime. I have already shown that $[\mathbb{Q}(\sqrt[p]{2}, \sqrt{5}) : \mathbb{Q}] = 2p$.



If needs be, I can understand that this might constitute proving that $\mathbb{Q}(\sqrt[p]{2} + \sqrt{5}) = \mathbb{Q}(\sqrt[p]{2}, \sqrt{5})$, which I know intuitively but am unsure how to prove. In that regard, I am aware of questions such as this and this, but all answers provided either




  • do not seem to generalize easily to cases where not both of the roots are square.

  • are beyond the scope of my current course.




Any help towards a (preferably low-level) proof of either the inclusion of $\sqrt{5}$ or the equality of $\mathbb{Q}(\sqrt[p]{2} + \sqrt{5})$ and $\mathbb{Q}(\sqrt[p]{2}, \sqrt{5})$ is much appreciated.


Answer



So let $\alpha = \sqrt [p]2+\sqrt 5$ then $$\left(\alpha -\sqrt 5\right)^p=2$$



Expand the left-hand side using the binomial theorem to obtain $$p(\alpha)-q(\alpha)\sqrt 5=2$$ where $q(\alpha)\gt 0$ since all the terms involving $\sqrt 5$ have the same sign. Also $p(\alpha), q(\alpha)$ are polynomials in $\alpha$ and belong to $\mathbb Q(\alpha)$
Finally $$\sqrt 5=\frac {p(\alpha)-2}{q(\alpha)}$$


calculus - Find the sum of an infinite series whenever the series converges.



Hi, I am trying to find the sum of this infinite series whenever it converges. I have tried the common-ratio technique, but my work doesn't match the answers. I would appreciate any help with an explanation.



$$\sum_{k=1}^\infty (2\cos^2 \theta)^k $$


Answer




This is a geometric series, so the answer is
$$
\frac{2\cos^2 \theta}{1-2\cos^2 \theta}
$$
whenever
$$
\cos^2 \theta <\frac12
$$
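
A quick numerical confirmation (plain Python) for a $\theta$ inside the convergence range:

```python
import math

theta = 2.0                          # cos^2(theta) ~ 0.173 < 1/2, so it converges
r = 2 * math.cos(theta)**2
partial = sum(r**k for k in range(1, 60))
print(partial, r / (1 - r))          # agree
```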


elementary number theory - Is it possible to do modulo of a fraction

I am trying to figure out how to take the modulo of a fraction.



For example: 1/2 mod 3.




When I type it into the Google calculator I get 1/2. Can anyone explain to me how to do the calculation?
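
In modular arithmetic, $1/2 \bmod 3$ means the multiplicative inverse of $2$ modulo $3$ (Google's calculator treats the input as the rational number $\tfrac12$ instead). A minimal check using Python's built-in `pow`, which computes modular inverses since Python 3.8:

```python
print(pow(2, -1, 3))   # 2, because 2*2 = 4 = 1 (mod 3)

# more generally a/b mod m is a * pow(b, -1, m) % m,
# defined whenever gcd(b, m) = 1
print(5 * pow(7, -1, 26) % 26)   # 5/7 mod 26
```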

Monday, 20 May 2019

elementary number theory - 'Gauss's Algorithm' for computing modular fractions and inverses




There is an answer on the site for solving simple linear congruences via so called 'Gauss's Algorithm' presented in a fractional form. Answer was given by Bill Dubuque and it was said that the fractional form is essentially Gauss, Disquisitiones Arithmeticae, Art. 13, 1801.



Now I have studied the article from the book, but I am not seeing the connection to the fractional form. What Gauss does is reducing $b$ via $p\bmod b= p - qb$ and I do not see that happening in the fractional form nor do I see how it computes an inverse. I have already talked with Bill about this via comments, but decided to open a new question so he or anyone else can help me more intuitively understand what is going on here. This article is supposed to give an algorithm to compute inverses in a prime modulus, yet I have no idea how.



Edit:



Actual question for Bill:



I may have been asking some stupid questions up till now so I will give something concrete and hopefully you can provide an answer to that.




Let's take your sci.math example for this:



So we are looking for a multiplicative inverse $x$ of $60$ in modulo $103$



$$60x \equiv 1 \pmod{103}$$



The tool we can use for this is, as Bill has said, a special case of the Euclidean algorithm which iterates $(p\bmod b,\, p)$ instead of the usual Euclidean algorithm that iterates $(p \bmod b,\, b)$.



This is the result of that algorithm:




$$103=60 \cdot \color{#c00} 1 + 43 = 43 \cdot \color{#c00}2 + 17 = 17 \cdot \color{#c00} 6 + 1$$



And then this translates into the following in mod $103$:
$$60 \cdot \color{#c00}{(-1)} \equiv 43 \rightarrow 43 \cdot \color{#c00}{(-2)} \equiv 17 \rightarrow 17 \cdot \color{#c00}{(-6)} \equiv 1$$



Producing the numbers in red which when multiplied give an inverse:



$$60 \cdot \color{#c00}{(-1)(-2)(-6)} \equiv 1 \pmod{103}$$
$$x \equiv-12 \pmod{103}$$




And this is fine and I see it works, of course only when the number and modulo are coprime.



Now my question is why this works. I am not interested in optimisations and different ways of reaching the inverse, but specifically why do the same values of the numbers in red(the coefficients of the algorithm descent) produce an inverse? This method of reusing the coefficients does not work via the normal Euclidean algorithm, but only with this special case. What is special about this? I would like to see a generalized proof or reason as to why the generated numbers produced via this special algorithm have this property.


Answer



Below we compare the related forms. First is the iterated descent $\,a\to 103\bmod a\,$ used by Gauss. Second is that rearranged into the form of descending multiples of $60.\,$ Third is the fractional view, and fourth is the graph of the descending multiples of $60$ (denominator descent graph).



$$\begin{align}
103\bmod{60} &= 103 - 1(60) = 43\\
103\bmod 43 &= 103\color{#0a0}{-2(43)=17}\\

103\bmod 17 &= 103-6(17) = 1
\end{align}\qquad\qquad\quad$$



$$\begin{array}{rl}
\bmod{103}\!:\qquad\ (-1)60\!\!\!\! &\equiv\, 43 &\Rightarrow\ 1/60\equiv -1/43\\[.3em]
\smash[t]{\overset{\large\color{#0a0}{*(-2)}}\Longrightarrow}\ \ \ \ \ \ \ \ \ \ (-2)(-1)60\!\!\!\! &\equiv \color{#0a0}{(-2)43\equiv 17}\!\! &\Rightarrow\ 1/60\equiv\ \ \ 2/17\\[.3em]
\smash[t]{\overset{\large *(-6)}\Longrightarrow}\ \ \color{#c00}{(-6)(-2)(-1)}60\!\!\!\! &\equiv (-6)17\equiv 1 &\Rightarrow\ 1/60 \equiv {\color{#c00}{-12}}/1\\
\end{array}$$



$$ \begin{align}

&\dfrac{1}{60}\ \,\equiv\ \ \dfrac{-1}{43}\, \ \equiv\, \ \dfrac{2}{17}\, \equiv\, \dfrac{\color{#c00}{-12}}1\ \ \ \rm[Gauss's\ algorithm]\\[.3em]
&\, 60\overset{\large *(-1)}\longrightarrow\color{#0a0}{43}\overset{\large\color{#0a0}{*(-2)}}\longrightarrow\,\color{#0a0}{17}\overset{\large *(-6)}\longrightarrow 1\\[.4em]
\Rightarrow\ \ &\,60*(-1)\color{#0a0}{*(-2)}*(-6)\equiv 1\ \Rightarrow\ 60^{-1}\rlap{\equiv (-1)(-2)(-6)\equiv \color{#c00}{-12}}
\end{align}$$



The translation from the first form (iterated mods) to the second (iterated smaller multiples) is realized by viewing the modular reductions as modular multiplications, e.g.



$$\ 103\color{#0a0}{-2(43) = 17}\,\Rightarrow\, \color{#0a0}{-2(43) \equiv 17}\!\!\pmod{\!103} $$



This leads to the following simple recursive algorithm for computing inverses $\!\bmod p\,$ prime.




$\begin{align}\rm I(a,p)\ :=\ &\rm if\ \ a = 1\ \ then\ \ 1\qquad\qquad\ \ \ ; \ \ a^{-1}\bmod p,\,\ {\rm for}\ \ a,p\in\Bbb N\,\ \ \&\,\ \ 0 < a < p\ prime \\[.5em]
&\rm else\ let\ [\,q,\,r\,]\, =\, p \div a\qquad ;\, \ \ p = q a + r\ \Rightarrow \color{#0a0}{-qa\,\equiv\, r}\!\!\pmod{\!p},\ \ 0 < r < a\,\\[.2em]
&\rm\ \ \ \ \ \ \ \ \ ({-}q*I(r,p))\bmod p\ \ \ ;\ \ because\ \ \ \dfrac{1}a \equiv \dfrac{-q}{\color{#0a0}{-qa}}\equiv \dfrac{-q}{\color{#0a0}r}\equiv -q * I(r,p)\ \ \ \ \ \color{#90f}{[\![1]\!]} \end{align}
$



Theorem $\ \ I(a,p) = a^{-1}\bmod p$



Proof $\ $ Clear if $\,a = 1.\,$ Let $\,a > 1\,$ and suppose for induction the theorem holds true for all $\,n < a$. Since $\,p = qa+r\,$ we must have $\,r > 0\,$ (else $\,r = 0\,\Rightarrow\,a\mid p\,$ and $\,1< a < p,\,$ contra $\,p\,$ prime). Thus $\,0 < r < a\,$ so induction $\,\Rightarrow\,I(r,p)\equiv \color{#0a0}{r^{-1}}$ so reducing equation $\color{#90f}{[\![1]\!]}\bmod p\,$ yields the claim.
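
The recursive procedure $I(a,p)$ translates directly into code (a short Python sketch of the same descent):

```python
def inv_mod_p(a, p):
    # Gauss-style descent: 1/a = -q / (-q*a) = -q / r  (mod p), r = p mod a
    if a == 1:
        return 1
    q, r = divmod(p, a)
    return (-q * inv_mod_p(r, p)) % p

print(inv_mod_p(60, 103))             # 91, i.e. -12 mod 103, as derived above
print(60 * inv_mod_p(60, 103) % 103)  # 1
```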


real analysis - The Cantor staircase function and related things



The Cantor staircase function
https://en.wikipedia.org/wiki/Cantor_function
has an interesting property: $\{x\colon f'(x)\neq 0\}$ is a nowhere dense null set.

But it is differentiable only almost everywhere, not everywhere.
Hence my question is whether a nondecreasing, nonconstant, everywhere differentiable function can have the above property.


Answer



Theorem (Goldowski-Tonelli): If $f$ is a real function on an interval that is differentiable everywhere (where "differentiable" means the derivative exists and is finite) and such that $f'\geq 0$ almost everywhere then, in fact, $f$ is nondecreasing (and by applying the same to $-f$ we get the immediate corollary: if $f$ is differentiable everywhere and $f'=0$ almost everywhere then, in fact, $f$ is constant).



This result is not trivial. It was proven in 1928 and 1930 by Goldowski and Tonelli independently. The hypotheses can be weakened slightly: it is enough for $f$ to be (a) continuous everywhere, (b) differentiable on the complement of a denumerable set, and (c) have nonnegative (resp. zero for the corollary) derivative almost everywhere. Cantor's devil's staircase shows that (b) cannot be weakened to "differentiable almost everywhere".



A (modern) proof of the Goldowski-Tonelli theorem can be found in the paper "On Positive Derivatives and Monotonicity" by Stephen Casey and Richard Holzsager (Missouri J. Math 17 (2005), 161–173), available here. Many related theorems are stated (and sometimes proved) in Andrew Bruckner's classic book Differentiation of Real Functions (AMS 1994).



The following examples is also worth thinking about: let $r_n$ be an enumeration of the rationals in $[0,1]$. Let $a_n$ be sequence of positive reals with finite sum. Then $f(x) := \sum_{n=0}^{+\infty} a_n (x-r_n)^{1/3}$ is a continuous increasing function which is differentiable everywhere except on the rationals where it has $+\infty$ derivative (in the obvious sense). So the inverse bijection of $f$ is continuous, increasing, differentiable everywhere, and its derivative vanishes on a dense set. (The derivatives of such functions are called "Pompeiu derivatives".)



real analysis - What is so special about the Lebesgue-Stieltjes measure




A measure $\lambda: B(\mathbb{R}^n) \rightarrow \overline{{\mathbb{R_{\ge 0}}}}$ that is associated with a monotone increasing and right-continuous function $F$ is called a Lebesgue-Stieltjes measure. But I am wondering why it is not true that every measure $\lambda: B(\mathbb{R}^n) \rightarrow \overline{{\mathbb{R_{\ge 0}}}}$ is a Lebesgue-Stieltjes measure.


Answer



$\mu$ being a Lebesgue-Stieltjes measure with corresponding function $F$ implies that $$
\mu\left((a,b]\right) = F(b) - F(a) \text{.}
$$



Now take the (rather silly) measure $$
\mu(X) = \begin{cases}
\infty &\text{if $0 \in X$} \\
1 &\text{if $0 \notin X$, $1 \in X$} \\

0 &\text{otherwise.}
\end{cases}
$$



We'd need to have $F(x) = \infty$ for $x \geq 0$ and $F(x) = 0$ for $x < 0$ to have $\mu\left((a,b]\right) = F(b) - F(a)$ for $a < 0$, $b \geq 0$. But then $$
\mu\left((0,2]\right) = F(2) - F(0) = \infty - \infty
$$
which is





  1. meaningless, and

  2. surely not the same as $1$, which is the actual measure of $(0,2]$.



Note that a measure doesn't necessarily need to have infinite point weights (i.e., points $x$ for which $\mu(\{x\}) = \infty$) to cause trouble. Here's another measure on $B(\mathbb{R})$ which isn't a Lebesgue-Stieltjes measure: $$
\mu(X) = \sum_{n \in \mathbb{N}, \frac{1}{n} \in X} \frac{1}{n} \text{.}
$$
For every $\epsilon > 0$, $\mu\left((0,\epsilon]\right) = \infty$, which again would require $F(x) = \infty$ for $x > 0$, and again that conflicts the requirement that $\mu((a,b]) = F(b) - F(a) < \infty$ for $0 < a \leq b$. Note that this measure $\mu$ is even $\sigma$-finite! You can write $\mathbb{R}$ as the countable union $$
\mathbb{R} = \underbrace{(-\infty,0]}_{=A} \cup \underbrace{(1,\infty)}_{=B} \cup \bigcup_{n \in \mathbb{N}} \underbrace{(\tfrac{1}{n+1},\tfrac{1}{n}]}_{=C_n}
$$

and all the sets have finite measure ($\mu(A)=\mu(B) = 0$, $\mu(C_n) = \frac{1}{n}$).



You do have that all finite (i.e., not just $\sigma$-finite, but fully finite) measures on $B(\mathbb{R})$ are Lebesgue-Stieltjes measures, however. This is important, for example, for probability theory, because it allows you to assume that every random variable on $\mathbb{R}$ has a cumulative distribution function (CDF), which is simply the function $F$.


real analysis - Convergent sequence and $\lim\limits_{n \to \infty}n(a_{n+1}-a_n)=0$




Let $(a_n)_n$ be a real, convergent, monotonic sequence. Prove that if the limit $$\lim_{n \to \infty}n(a_{n+1}-a_n)$$ exists, then it equals $0$.





I tried to apply the Stolz-Cesaro theorem reciprocal:
$$\lim_{n\to \infty}n(a_{n+1}-a_n)=\lim_{n \to \infty} \frac{a_n}{1+\frac{1}{2}+\dots+\frac{1}{n-1}}=0$$ but I can't apply it since for $b_n=1+\frac{1}{2}+\dots+\frac{1}{n-1}$ we have $\lim_\limits{n \to \infty} \frac{b_{n+1}}{b_n}=1$. I also attempted the $\epsilon$ proof but my calculations didn't lead to anything useful.


Answer



Hint. You are on the right track. Note that $a_n$ is convergent to a finite limit and the harmonic series at the denominator is divergent. Therefore
$$0=\lim_{n \to \infty} \frac{a_n}{1+\frac{1}{2}+\dots+\frac{1}{n-1}}\stackrel{\text{SC}}{=}\lim_{n\to \infty}\frac{a_{n+1}-a_n}{\frac{1}{n}}=\lim_{n\to \infty}n(a_{n+1}-a_n).$$


lp spaces - convergence of $L^p $ norm










If I define $|f|_{L^\infty}= \lim_{n\to \infty} |f|_{L^n}$, how can I prove that this limit is $\operatorname{ess\,sup} |f|$?


Answer



The main reason to choose $\text{ess}\sup\vert f\vert$ over $\sup \vert f\vert$ is that "functions" in $L^p$ are in fact equivalence classes of functions: $f\sim g$ if $\{x:f(x)\neq g(x)\}$ has measure zero. By construction of the Lebesgue integral, for all $1\leq p<\infty$ we have $\|f\|_p=\|g\|_p$ if $f\sim g$; we would like $\|f\|_\infty$ to have the same property. $\sup\vert f\vert$ won't work because we can have $f\sim g$ but $\sup\vert f\vert \neq \sup\vert g\vert$, i.e. two functions in the same equivalence class will have different norm. Since $\text{ess}\sup\vert f\vert$ "ignores" sets of measure zero, we will have $\text{ess}\sup\vert f\vert=\text{ess}\sup\vert g\vert $ if $f\sim g$ and hence the norm $\|\cdot\|_\infty$ will be well defined on our equivalence classes.




Edit: I guess the question changed as I was writing this. This is more the reason for $\text{ess}\sup$ rather than the proof requested.


exponential function - Efficient method to evaluate the following series: $\sum_{n=1}^\infty \frac{n^2\cdot (n+1)^2}{n!}$



How do I calculate the infinite series:




$$\frac{1^2\cdot 2^2}{1!}+\frac{2^2\cdot 3^2}{2!}+\dots \quad?$$
I tried to find the nth term $t_n$. $$t_n=\frac{n^2\cdot (n+1)^2}{n!}.$$
So, $$\sum_{n=1}^{\infty}t_n=\sum_{n=1}^{\infty}\frac{n^4}{n!}+2\sum_{n=1}^{\infty}\frac{n^3}{n!}+\sum_{n=1}^{\infty}\frac{n^2}{n!}$$
after expanding. But I do not know what to do next.



Thanks.


Answer



You are right, now you need to expand them separately and express each of them in form of $e$:




$$ \sum \limits_{n=1}^{\infty}\frac{n^2}{n!}= \sum \limits_{n=1}^{\infty} \frac{n+(n-1)n}{n!} = \sum \limits_{n=1}^{\infty} \frac 1{(n-1)!} +\frac 1{(n-2)!} = 2e $$



Similarly, we can show that,



$$\sum \limits_{n=1}^{\infty}\frac{n^3}{n!}= \sum \limits_{n=1}^{\infty} \frac{n+(n^2-1)n}{n!} = 5e$$



and,



$$\sum \limits_{n=1}^{\infty}\frac{n^4}{n!}= 15e$$
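Putting the three pieces together gives $15e+2\cdot 5e+2e=27e$ for the original series, which matches a direct numerical sum (plain Python):

```python
import math

s = sum((n**2 * (n + 1)**2) / math.factorial(n) for n in range(1, 60))
print(s, 27 * math.e)   # both ~ 73.3933...
```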


Saturday, 18 May 2019

real analysis - The Integral that Stumped Feynman?

In "Surely You're Joking, Mr. Feynman!," Nobel-prize winning Physicist Richard Feynman said that he challenged his colleagues to give him an integral that they could evaluate with only complex methods that he could not do with real methods:




One time I boasted, "I can do by other methods any integral anybody else needs contour integration to do."




So Paul [Olum] puts up this tremendous damn integral he had obtained by starting out with a complex function that he knew the answer to, taking out the real part of it and leaving only the complex part. He had unwrapped it so it was only possible by contour integration! He was always deflating me like that. He was a very smart fellow.




Does anyone happen to know what this integral was?

intuition - Dominoes and induction, or how does induction work?



I've never really understood why math induction is supposed to work.



You have these 3 steps:





  1. Prove true for base case (n=0 or 1 or whatever)


  2. Assume true for n=k. Call this the induction hypothesis.


  3. Prove true for n=k+1, somewhere using the induction hypothesis in your proof.




In my experience the proof is usually algebraic, and you just manipulate the problem until you get the induction hypothesis to appear. If you can do that and it works out, then you say the proof holds.



Here's one I just worked out,




Show $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^n}{x} = 0$



So you go:




  1. Use L'Hospital's rule.
    $\displaystyle\lim_{x\to\infty} \frac{\ln x}{x} = 0$.
    Since that's
    $\displaystyle\lim_{x\to\infty} \frac{1}{x} = 0$.



  2. Assume true for $n=k$.
    $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^k}{x} = 0$.


  3. Prove true for $n=k+1$. You get
    $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^{k+1}}{x} = 0.$




Use L'Hospital again:
$\displaystyle\lim_{x\to\infty} \frac{(k+1)(\ln x)^{k}}{x} = 0$.



Then you see the induction hypothesis appear, and you can say this is equal to $0$.




What I'm not comfortable with is this idea that you can just assume something to be true ($n=k$), then based on that assumption, form a proof for $n=k+1$ case.



I don't see how you can use something you've assumed to be true to prove something else to be true.


Answer



The inductive step is a proof of an implication: you are proving that if the property you want holds for $k$, then it holds for $k+1$.



It is a result of formal logic that if you can prove $P\rightarrow Q$ (that $P$ implies $Q$), then from $P$ you can prove $Q$; and conversely, that if from assuming that $P$ is true you can prove $Q$, then you can in fact prove $P\rightarrow Q$.



We do this pretty much every time we prove something. For example, suppose you want to prove that if $n$ is a natural number, then $n^2$ is a natural number. How do we start? "Let $n$ be a natural number." Wait! Why are you allowed to just assume that you already have a natural number? Shouldn't you have to start by proving it's a natural number? The answer is no, we don't have to, because we are not trying to prove an absolute, we are trying to prove a conditional statement: that if $n$ is a natural number, then something happens. So we may begin by assuming we are already in the case where the antecedent is true. (Intuitively, this is because if the antecedent is false, then the implication is necessarily true and there is nothing to be done; formally, it is because the Deduction Theorem, which is what I described above, tells you that if you manage to find a formal proof that ends with "$n^2$ is a natural number" by assuming that "$n$ is a natural number" is true, then you can use that proof to produce a formal proof that establishes the implication "if $n$ is a natural number then $n^2$ is a natural number"; we don't have to go through the exercise of actually producing the latter proof, we know it's "out there").




We do that in Calculus: "if $\lim\limits_{x\to x_0}f(x) = a$ and $\lim\limits_{x\to x_0}g(x) = b$, then $\lim\limits_{x\to x_0}(f(x)+g(x)) = a+b$." How do we prove this? We begin by assuming that the limit of $f(x)$ as $x\to x_0$ is $a$, and that the limit of $g(x)$ as $x\to x_0$ is $b$. We assume the premise/antecedent, and procede to try to prove the consequent.



What this means in the case of induction is that, since the "Inductive Step" is actually a statement that says that an implication holds:
$$\mbox{"It" holds for $k$}\rightarrow \mbox{"it" holds for $k+1$},$$
then in order to prove this implication we can begin by assuming that the antecedent is already true, and then procede to prove the consequent. Assuming that the antecedent is true is precisely the "Induction Hypothesis".



When you are done with the inductive step, you have in fact not proven that it holds for any particular number, you have only shown that if it holds for a particular number $k$, then it must hold for the next number $k+1$. It is a conditional statement, not an absolute one.



It is only when you combine that conditional statement with the base, which is an absolute statement that says "it" holds for a specific number, that you can conclude that the original statement holds for all natural numbers (greater than or equal to the base).




Since you mention dominoes in your title, I assume you are familiar with the standard metaphor of induction like dominoes that are standing all in a row falling. The inductive step is like arguing that all the dominos will fall if you topple the first one (without actually toppling it): first, you argue that each domino is sufficiently close to the next domino so that if one falls, then the next one falls. You are not tumbling every domino. And when you argue this, you argue along the lines of "suppose this one falls; since it's length is ...", that is, you assume it falls in order to argue the next one will then fall. This is the same with the inductive step.



In a sense you are right that if feels like "cheating" to assume what you want; but the point is that you aren't really assuming what you want. Again, the inductive step does not in fact establish that the result holds for any number, it only establishes a conditional statement. If the result happens to hold for some $k$, then it would necessarily have to also hold for $k+1$. But we are completely silent on whether it actually holds for $k$ or not. We are not saying anything about that at the inductive-step stage.





Added: Here's an example to emphasize that the "inductive step" does not make any absolute statement, but only a conditional statement: Suppose you want to prove that for all natural numbers $n$, $n+1 = n$.

Inductive step. Induction Hypothesis: The statement holds for $k$; that is, I'm assuming that $k+1 = k$.




To be proven: The statement holds for $k+1$. Indeed: notice that since $k+1= k$, then adding one to both sides of the equation we have $(k+1)+1 = k+1$; this proves the statement holds for $k+1$. QED



This is a perfectly valid proof! It says that if $k+1=k$, then $(k+1)+1=k+1$. This is true! Of course, the antecedent is never true, but the implication is. The reason this is not a full proof by induction of a false statement is that there is no "base"; the inductive step only proves the conditional, nothing more.






By the way: Yes, most proofs by induction that one encounters early on involve algebraic manipulations, but not all proofs by induction are of that kind. Consider the following simplified game of Nim: there are a certain number of matchsticks, and players alternate taking $1$, $2$, or $3$ matchsticks every turn. The person who takes the last matchstick wins.



Proposition. In the simplified game above, the first player has a winning strategy if the number of matchsticks is not divisible by $4$, and the second player has a winning strategy if the number of matchsticks is divisible by 4.




The proof is by (strong) induction, and it involves no algebraic manipulations whatsoever.
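
The pattern behind the induction can be double-checked with a small dynamic program (plain-Python sketch): a position is a win for the player to move iff some legal move leads to a losing position.

```python
def winning_positions(n_max):
    win = [False] * (n_max + 1)          # win[0] = False: no move, you lost
    for n in range(1, n_max + 1):
        win[n] = any(not win[n - t] for t in (1, 2, 3) if t <= n)
    return win

win = winning_positions(40)
print(all(win[n] == (n % 4 != 0) for n in range(41)))   # True
```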


number theory - Let $a$ and $b$ be integers $\ge 1$. Prove that $(2^a-1) \mid (2^{ab}-1)$.



Let $a$ and $b$ be integers $\ge 1$. Prove the following:



$(2^a-1) | (2^{ab}-1)$




My attempt:



$2^{ab}-1=(2^a)^b-1$



$= (2^a-1)((2^a)^{b-1}+(2^a)^{b-2}+...+2^a+1)$




Since $(2^a)^{b-1}+(2^a)^{b-2}+...+2^a+1\in \Bbb{Z}$,
then



$(2^{ab}-1)\equiv 0 \mod (2^a-1)$.



Is that correct?
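
Yes, the factorization argument is correct. A brute-force check for small $a,b$ (plain Python):

```python
print(all((2**(a*b) - 1) % (2**a - 1) == 0
          for a in range(1, 20) for b in range(1, 20)))   # True
```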

Friday, 17 May 2019

combinatorics - Combinatorial identity's algebraic proof without induction.




How would you prove this combinatorial identity algebraically, without induction?




$$\sum_{k=0}^n { x+k \choose k} = { x+n+1\choose n }$$



Thanks.


Answer



Here is an algebraic approach. In order to do so it's convenient to use the coefficient of operator $[z^k]$ to denote the coefficient of $z^k$ in a series. This way we can write e.g.
\begin{align*}
\binom{n}{k}=[z^k](1+z)^n
\end{align*}





We obtain



\begin{align*}
\sum_{k=0}^{n}\binom{x+k}{k}&=\sum_{k=0}^{n}\binom{-x-1}{k}(-1)^k\tag{1}\\
&=\sum_{k=0}^n[z^k](1+z)^{-x-1}(-1)^k \tag{2}\\
&=[z^0]\frac{1}{(1+z)^{x+1}}\sum_{k=0}^n\left(-\frac{1}{ z }\right)^k\tag{3}\\
&=[z^0]\frac{1}{(1+z)^{x+1}}\cdot \frac{1-\left(-\frac{1}{z}\right)^{n+1}}{1-\left(-\frac{1}{z}\right)}\tag{4}\\
&=[z^n]\frac{z^{n+1}+(-1)^n}{(1+z)^{x+2}}\tag{5}\\
&=(-1)^n[z^n]\sum_{k=0}^\infty\binom{-x-2}{k}z^k\tag{6}\\

&=(-1)^n\binom{-x-2}{n}\tag{7}\\
&=\binom{x+n+1}{n}\tag{8}
\end{align*}
and the claim follows.




Comment:




  • In (1) we use the binomial identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$.



  • In (2) we apply the coefficient of operator.


  • In (3) we do some rearrangements by using the linearity of the coefficient of operator and we also use the rule
    \begin{align*}
    [z^{p-q}]A(z)=[z^p]z^{q}A(z)
    \end{align*}


  • In (4) apply the formula for the finite geometric series.


  • In (5) we do some simplifications and use again the rule stated in comment (3).


  • In (6) we use the geometric series expansion of $\frac{1}{(1+z)^{x+2}}$. Note that we can ignore the summand $z^{n+1}$ in the numerator since it has no contribution to the coefficient of $z^n$.


  • In (7) we select the coefficient of $z^n$.


  • In (8) we use the rule stated in comment (1) again.
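
The identity is easy to spot-check for nonnegative integer $x$ (plain Python, using `math.comb`):

```python
from math import comb

def lhs(x, n):
    return sum(comb(x + k, k) for k in range(n + 1))

print(all(lhs(x, n) == comb(x + n + 1, n)
          for x in range(10) for n in range(10)))   # True
```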




An additive map that is not a linear transformation over $\mathbb{R}$, when $\mathbb{R}$ is considered as a $\mathbb{Q}$-vector space












I am looking for an example of an additive map that is not a linear transformation over $\mathbb{R}$, when $\mathbb{R}$ is considered as a $\mathbb{Q}$-vector space. I mean, I want to find an example of a map $T:\mathbb{R}\rightarrow\mathbb{R}$ such that $T(u+v)=T(u)+T(v)$ for all $u,v\in \mathbb{R}$, but $T(\alpha v)=\alpha T(u)$ is not true for all $\alpha \in\mathbb{R}$.



Thanks for your kindly help.


Answer



Let $\{r_\alpha\}$ be a Hamel basis of $\Bbb R$ over $\Bbb Q$. Let $\phi$ map $x$ to $c_{\alpha_1}+\cdots+c_{\alpha_k}$, where the (unique) basis representation of $x$ is $c_{\alpha_1}r_{\alpha_1}+\cdots+c_{\alpha_k}r_{\alpha_k}$. Then $\phi(x+y)=\phi(x)+\phi(y)$, but takes on only rational values.



If $\phi(\alpha v)=\alpha\phi(v)$ for all $\alpha$, $v$ in $\Bbb R$, then $\phi$ would be onto.
As this isn't the case, $\phi$ is not $\Bbb R$-linear.



It is $\Bbb Q$-linear, though. In fact, any additive map would automatically be $\Bbb Q$-linear.




As far as I know, you need the axiom of choice to construct a function of this type (?).


calculus - How to find $\arctan(2\sqrt{3})$ by hand?

I'm trying to find the polar form of the complex number $zw$, where $z = 1 + i$ and $w = \sqrt{3} + i$.



I FOILed the complex numbers and grouped the real and imaginary terms together to get a modulus of $\sqrt{8}$ and an angle of $\theta = \arctan(2\sqrt{3})$. I don't know how to find this; I do know that $\arctan(\sqrt{3})$ is $\pi/3$, but I don't know how to incorporate the multiplied 2. The answer is given as $5\pi/12$.
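
A numeric check of the stated answer with Python's `cmath`: the argument of $zw$ is indeed $5\pi/12$, and the ratio of imaginary to real part works out to $2+\sqrt 3$ (which is $\tan(5\pi/12)$), worth comparing against the $2\sqrt 3$ above:

```python
import cmath, math

z, w = 1 + 1j, math.sqrt(3) + 1j
zw = z * w
print(abs(zw), math.sqrt(8))                 # 2.828... = sqrt(8)
print(cmath.phase(zw), 5*math.pi/12)         # both ~ 1.3090
print(zw.imag / zw.real, 2 + math.sqrt(3))   # both ~ 3.732 = tan(5*pi/12)
```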

Thursday, 16 May 2019

sequences and series - Sum of all odd numbers to infinity




Starting from the idea that $$\sum_{n=1}^\infty n = -\frac{1}{12}$$
It's fairly natural to ask about the series of odd numbers $$\sum_{n=1}^{\infty} (2n - 1)$$

I worked this out in two different ways, and get two different answers. By my first method
$$\sum_{n=1}^{\infty} (2n - 1) + 2\bigg( \sum_{n=1}^\infty n \bigg) = \sum_{n=1}^\infty n$$
$$\therefore ~\sum_{n=1}^{\infty} (2n - 1) = - \sum_{n=1}^\infty n = \frac{1}{12}$$
But then by the second
$$\sum_{n=1}^{\infty} (2n - 1) - \sum_{n=1}^\infty n = \sum_{n=1}^\infty n$$
$$\therefore ~\sum_{n=1}^{\infty} (2n - 1) = 2 \sum_{n=1}^\infty n = -
\frac{1}{6}$$
Is there any reason to prefer one of these answers over the other? Or is the sum over all odd numbers simply undefined? In which case, was there a way to tell that in advance?



I'm also curious if this extends to other series of a similar form

$$\sum_{n=1}^{\infty} (an + b)$$
Are such series undefined whenever $b \neq 0$?


Answer



With the usual caveat that $$ \sum_{n=1}^\infty n \ne -\frac{1}{12}$$
we can do a similar zeta function regularization for the sum of odd integers. We start with the fact that $$ \sum_{n = 1}^\infty \frac{1}{(2n-1)^s} =(1-2^{-s})\zeta(s)$$ for $\Re(s) > 1$ and then analytically continue to $s=-1$ to get $$ \sum_{n=1}^\infty(2n-1) "=" (1-2)\zeta(-1) = \frac{1}{12}$$



Edit



Zeta function regularization and Ramanujan get the same answer here. As for why your first method gets the "right answer" and the second doesn't, note that the first is argued by the exact same formal steps used to derive $$ \sum_{n=1}^\infty\frac{1}{(2n-1)^s} = (1-2^{-s})\zeta(s)$$ while the second uses both linearity and index shifting which are generally not preserved by the regularization methods.
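
The regularized value can be reproduced with `mpmath`'s analytic continuation of $\zeta$ (a small check):

```python
from mpmath import zeta, mpf

s = mpf(-1)
print((1 - 2**(-s)) * zeta(s))   # 0.0833... = 1/12
```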


Wednesday, 15 May 2019

algebra precalculus - Simplifying a logarithm



[Image: a two-step simplification of a logarithmic expression, ending with $S$ in the denominator]



Can someone explain the logic behind the two steps? I can't figure out how to get the S in the denominator. Thanks!


Answer



(1) Combine the inside expression with a common denominator.


(2) The minus sign outside becomes a $(-1)$ power inside, so you can flip the fraction.




Done.
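Since the image itself is lost, a hypothetical expression of the same shape (not the one from the picture) illustrates both steps:

$$-\ln\left(1-\frac{a}{S}\right)=-\ln\left(\frac{S-a}{S}\right)=\ln\left(\frac{S}{S-a}\right).$$

Step (1) produces the common denominator $S$ inside the logarithm; step (2) absorbs the leading minus sign by flipping the fraction.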


Limit of $\lim_{x\to0} \left(\frac{1}{x}-\cot x\right)$ without L'Hopital's rule

Is it possible to evaluate $$\lim_{x\to 0} \left(\frac{1}{x}-\cot x\right) $$ without L'Hopital's rule? The most straightforward way is to write everything in terms of $\sin{x}$ and $\cos{x}$ and apply the rule, but I got stuck when I arrived at this part (I don't want to use the rule, since it feels like cheating):



$$\lim_{x\to0} \left(\frac{\sin{x}-x\cos{x}}{x\sin{x}}\right) $$



It seems like there's something to do with the identity
$$\lim_{x\to0} \frac{\sin{x}}{x}=1$$
but I can't seem to get the result of $0.$
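For what it's worth, one way to finish without L'Hopital (a sketch using only standard limits and a squeeze): split the fraction as

$$\frac{\sin{x}-x\cos{x}}{x\sin{x}}=\frac{\sin{x}-x}{x\sin{x}}+\frac{1-\cos{x}}{\sin{x}}.$$

The second term equals $\tan\frac{x}{2}\to 0$. For the first term, when $0<x<\frac{\pi}{2}$ the inequality $x\le\tan{x}$ gives $x\cos{x}\le\sin{x}\le x$, so

$$0\le\frac{x-\sin{x}}{x\sin{x}}\le\frac{x-x\cos{x}}{x\sin{x}}=\frac{1-\cos{x}}{\sin{x}}=\tan\frac{x}{2}\to 0,$$

and the case $x<0$ follows because the whole expression is odd. Hence the limit is $0$.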

Tuesday, 14 May 2019

factorial - Prove the following result




Prove that if $p$ is a prime number, then $p$ divides $\binom{n}{p} - \lfloor\frac{n}{p}\rfloor$, for all $n > p$.



(where $\lfloor\frac{n}{p}\rfloor$ denotes the greatest integer less than or equal to $\frac np$.)



Does this result generalise to a result about $p^{r}$ instead of $p$, where $r = \frac np$?



My only thought on approaching this is to assume it is false and argue for a contradiction, but I am unsure how to use the floor function, since it gives the same result for many inputs; for example, $4.1, 4.2, 4.3, 4.4$ all output $4$.



Any help would be appreciated.


Answer




Lucas' theorem does the job. Write $n=\sum\limits_0^q p^kn_k$ where $n_k$'s are the digits in the base $p$-representation of $n$. Since $n\gt p$, we must have $q\geq 1$. Then, by Lucas' theorem,



$$\binom np\equiv\binom{n_1}1\binom{n_0}0\equiv n_1\pmod p$$



Also, $$\lfloor n/p\rfloor=\lfloor(\sum_0^q p^kn_k)/p\rfloor=\lfloor n_0/p\rfloor+n_1+p(\cdots)=0+n_1+p(\cdots)\equiv n_1\pmod p$$



Thus, $$\binom np\equiv\left\lfloor\frac np\right\rfloor\pmod p$$







I'm not sure if an analogous argument would work for modulo prime powers of $p$ because Lucas' theorem modulo prime powers is a bit different from the usual statement, but here's a (sort of) generalization:



$$\binom n{p^k}\equiv\left\lfloor\frac n{p^k}\right\rfloor\equiv n_k\pmod p$$



where $k$ goes from $1$ to $q$ (also the trivial case of $k=0$ holds)
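As a quick sanity check with small numbers: for $n=10$, $p=3$ we get $\binom{10}{3}=120$ and $\lfloor 10/3\rfloor=3$, and indeed $120-3=117=3\cdot 39$; for $n=7$, $p=5$ we get $\binom{7}{5}=21$, $\lfloor 7/5\rfloor=1$, and $21-1=20=5\cdot 4$.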


Are all prime numbers finite?




If we answer false, then there must be an infinite prime number. But infinity is not a number and we have a contradiction. If we answer true, then there must be a greatest prime number. But Euclid proved otherwise and again we have a contradiction. So does the set of all prime numbers contain all finite elements with no greatest element? How is that possible?


Answer



Every natural number is a finite number. Every prime number (in the usual definition) is a natural number. Thus, every prime number is finite. This does not contradict the fact that there are infinitely many primes, just like the fact that every natural number is finite does not contradict the fact that there are infinitely many natural numbers. You can have infinitely many finite things, and there won't ever be a biggest exemplar.



To make things a bit more complicated (and a lot more interesting), there are extensions of the set of natural numbers that do contain infinite numbers, and even infinite prime numbers. For instance, in any hyperreal extension of the reals, there is a system of hypernatural numbers. Some of these hypernatural numbers are finite and some are infinite. The finite ones are just a copy of the usual set of natural numbers and the primes in it are the usual primes. For the infinite hypernatural numbers, there are also prime numbers. For instance, the hypernatural represented by the sequence $(2,3,5,7,11,13,17,19,\cdots )$ is an infinite prime number.


linear algebra - $AU = U\Lambda$ - single solution (by eigenvalue decomposition)?



Let $A$ be a symmetric, real matrix.



We want to find the matrices $U$ and $\Lambda$ such that $AU = U\Lambda$.




Obviously, a solution is given by the eigenvalue decomposition, where $\Lambda$ is diagonal. But is there any other solution?



In other words: if $AU = U\Lambda$, where $A$ is a known real symmetric matrix, then must $\Lambda$ be diagonal?



Observation: In my context, I also know that $\Lambda$ is symmetric. Maybe it helps.


Answer



Definitely no. In fact, if $A=U\Lambda U^\dagger$ is an answer, then so is
$$A=V\Lambda^\prime V^\dagger$$
for any unitary $V$, with $\Lambda^\prime=V^\dagger U\Lambda U^\dagger V = V^\dagger A V$.

An extreme example is
$$A=I\,\Lambda^\prime\,I,$$
i.e. $V=I$ and $\Lambda^\prime=A$ itself, which is symmetric but in general not diagonal.
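For a concrete instance (numbers chosen here purely for illustration):

$$A=\begin{pmatrix}1&0\\0&2\end{pmatrix},\qquad V=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix},\qquad \Lambda^\prime=V^\dagger AV=\frac{1}{2}\begin{pmatrix}3&-1\\-1&3\end{pmatrix}.$$

Then $AV=V\Lambda^\prime$ holds by construction, and $\Lambda^\prime$ is symmetric but not diagonal.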


real analysis - proving that if $\lim_{n\to\infty}a_n=\infty$ then $\lim_{n\to\infty}b_n=\infty$, where $b_n=\frac{1}{n}\sum_{i=1}^n{a_i}$.



I have to prove that if $\lim_{n\to\infty}a_n=\infty$ then $\lim_{n\to\infty}b_n=\infty$, where $b_n=\frac{1}{n}\sum_{i=1}^n{a_i}$.



What I've got:




Let $\epsilon > 0$. We know that $a_n\to \infty$, so we fix $n_0$ s.t. when $n > n_0$ we have $a_n > \frac{\epsilon}{2}$. We fix $k \in \mathbf N, k > n_0$. Let $n > k$, so we have
$$b_n > \frac{1}{n}\sum_{i=1}^{k-1}a_i + \frac{n-k+1}{2n}\epsilon$$



That's where I got stuck. Basically I want to express $\frac{1}{n}\sum_{i=1}^{k-1}a_i$ somehow in terms of $\frac{\epsilon}{2}$.



Thanks in advance!


Answer



$a_n \to \infty$ means: for all $M > 0$ there is $N > 0$ such that $a_n > M$ whenever $n > N$. Moreover, we can choose $N$ large enough that $\sum_{i=1}^N a_i > 0$, because $a_i \to \infty$ forces the partial sums to be eventually positive.



Then, if $n > 2N$,

$$b_{n} = \frac{1}{n} \sum_{i=1}^{n} a_i = \frac{1}{n} \sum_{i=1}^N a_i + \frac{1}{n} \sum_{i=N+1}^{n} a_i.$$
Because $a_i > M$ for $i > N$ and $N/n < 1/2$, the second sum on the RHS is at least
$$\frac{1}{n} \cdot (n-N) \cdot M = M - M \cdot (N/n) > M/2.$$
And we chose $N$ large enough that the first sum on the RHS is positive. So the total sum is at least $M/2$.



That is, if $n > 2N$, then $b_n > M/2$. But $M$ was arbitrary, so this means $b_n \to \infty$.


Monday, 13 May 2019

limits - Proof that $\lim_{x\to0}\frac{\sin x}x=1$

Is there any way to prove that
$$\lim_{x\to0}\frac{\sin x}x=1$$
only by multiplying both numerator and denominator by some expression?

I know how to find this limit using derivatives, L'Hopital's rule, Taylor series and inequalities. The reason I tried to find it only by multiplying numerator and denominator and then canceling out indeterminate terms is that most other limits can be solved using this method.

This is an example:

$$\begin{align}\lim_{x\to1}\frac{\sqrt{x+3}-2}{x^2-1}=&\lim_{x\to1}\frac{x+3-4}{\left(x^2-1\right)\left(\sqrt{x+3}+2\right)}\\=&\lim_{x\to1}\frac{x-1}{(x+1)(x-1)\left(\sqrt{x+3}+2\right)}\\=&\lim_{x\to1}\frac{1}{(x+1)\left(\sqrt{x+3}+2\right)}\\=&\frac{1}{(1+1)\left(\sqrt{1+3}+2\right)}\\=&\frac18\end{align}$$
It is obvious that we first multiplied numerator and denominator by $\sqrt{x+3}+2$ and then canceled out $x-1$. So, in this example, we can avoid the indeterminate form by multiplying numerator and denominator by
$$\frac{\sqrt{x+3}+2}{x-1}$$
My question is: can we do the same thing with $\frac{\sin x}x$ as $x\to0$? I tried many times, but I failed every time. I searched on the internet for something like this, but the only thing I found is the geometrical approach and proofs using inequalities and derivatives.

Edit
I have read this question before asking my own. The reason is that, in contrast to that question, I do not want to prove the limit in a geometrical way or with inequalities.

integer linear combination of irrational numbers is an irrational number?



How can I prove that



a nonzero integer linear combination of two rationally independent irrational numbers is still an irrational number? That is to say: given two irrational numbers $a$ and $b$ such that $a/b$ is also irrational, why is $ma+nb$ irrational for all nonzero integers $m,n$?


Answer



That's not true: take $a=\sqrt{2} -1$, $b=\sqrt{2}$. Then $\frac{a}{b} = 1 - \frac{1}{\sqrt{2}}$ isn't rational, but $a-b=-1$ is an integer.


Sunday, 12 May 2019

calculus - Is line element mathematically rigorous?

I know differentials (in the setting of standard analysis) are not very rigorous in mathematics; there are a lot of amazing answers here on the topic.




But what about the line element?



$$ds^2 = dx^2 + dy^2 +dz^2 $$



The way I think about this line element is that it is geometrically constructed from the Pythagorean theorem as:
$$\Delta s^2 =\Delta x^2 + \Delta y^2 +\Delta z^2$$
and then we assume that we can 'get' these quantities ($\Delta x$) to be infinitesimally small (as small as we like) and write them as $dx$ instead, right?



Now let's take the line element on a sphere:

$$ds_2^2=r^2\sin^2(\theta)\,d\phi^2 + r^2\,d\theta^2$$



It is geometrically constructed again using the Pythagorean theorem, assuming the sides of a 'triangle' are small:



$$\Delta s_2^2 \approx (r\sin(\theta)\Delta \phi)^2 + (r\Delta\theta)^2$$



But this approximation never really becomes an equality: the smaller the angles, the better it works, but it is still never an equality! People just replace $\Delta \to d$, write $ds$, and call it a differential.



I guess my question is this:




when we write something like
$$ds_2^2=r^2\sin^2(\theta)\,d\phi^2 + r^2\,d\theta^2$$
we actually have in mind that this quantity contains higher order terms, but they will vanish after we parametrise?
I think about parametrisation in this way:
$$\frac{ds_2^2}{dt^2}=r^2\sin^2(\theta)\frac{d\phi^2}{dt^2} + r^2\frac{d\theta^2}{dt^2}$$
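One standard way to read such formulas rigorously (a sketch of the usual differential-geometric viewpoint): $ds^2$ is shorthand for a quadratic form (the metric) evaluated on velocity vectors, so for a curve $(\theta(t),\phi(t))$ the arc length is

$$s=\int \sqrt{r^2\sin^2(\theta(t))\,\dot\phi(t)^2+r^2\,\dot\theta(t)^2}\;dt,$$

which is exactly the parametrised formula above. The higher-order corrections in the finite-difference approximation are $o(\Delta t)$, so they contribute nothing to the integral in the limit, and that is what licenses writing $d$ in place of $\Delta$.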

calculus - Show that the sequence $\sqrt{2}, \sqrt{2+\sqrt{2}}, \sqrt{2+\sqrt{2+\sqrt{2}}},\ldots$ converges and find its limit.

Setting $a_n=\sqrt{2+\sqrt{2+\cdots+\sqrt{2}}}$ (with $n$ nested radicals), I get that $$a_{n+1}=\sqrt{2+a_n} \quad \quad a_1=\sqrt{2}.$$ Clearly all numbers in the sequence are positive, and we see that $a_n<2$ for all $n$, so the sequence is bounded.


We can use the helper function $f(x)=\sqrt{2+x}$, so that $a_{n+1}=f(a_n)$. But since $$f'(x)=\frac{1}{2\sqrt{2+x}}=0\quad\text{has no real solutions,}$$



$f(x)$ never flattens out or decreases, so it can't be convergent?
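A sketch of the standard resolution, using monotone convergence rather than the derivative of the helper function: by induction $a_n<2$ for every $n$, since $a_1=\sqrt{2}<2$ and $a_n<2$ implies $a_{n+1}=\sqrt{2+a_n}<\sqrt{4}=2$. The sequence is also increasing, because

$$a_{n+1}^2-a_n^2=2+a_n-a_n^2=(2-a_n)(1+a_n)>0.$$

A bounded increasing sequence converges, say to $L$; passing to the limit in $a_{n+1}=\sqrt{2+a_n}$ gives $L=\sqrt{2+L}$, i.e. $L^2-L-2=0$, whose roots are $2$ and $-1$. Since $a_n>0$, the limit is $L=2$. (The derivative of $f$ having no zeros says nothing here: it is the fixed point of $f$, not a flat spot, that matters.)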

calculus - Why do the infinitely many infinitesimal errors from each term of an infinite Riemann sum still add up to only an infinitesimal error?



Ok, so after extensive research on the topic of how we deal with the idea of an infinitesimal amount of error, I learned about the standard part function as a way to deal with discarding this infinitesimal difference $\Delta x$ by rounding off to the nearest real number, which is zero. I've never taken nonstandard analysis before, but here's my question.



When you take a Riemann sum, you are approximating an area by rectangles, and each of those rectangles has an error in approximating the actual area under the curve for the corresponding part of the graph. As $\Delta x$ becomes infinitesimal, the width of these rectangles becomes infinitesimal, so each error becomes infinitesimal. But since there are infinitely many rectangles in that case, why is the total error from all of them still infinitesimal? In other words, shouldn't an infinite number of infinitesimals add up to a significant amount?


Answer



If I've understood the question correctly, here is a heuristic explanation. Note that this is not rigorous, since to make your question rigorous you have to give some precise definition of what you mean by "the exact area", which is not at all easy to define in general.




Let us assume we are integrating a continuous function $f(x)$ from $0$ to $1$ by using a Riemann sum with infinitesimal increment $\Delta x$. Let us also assume for simplicity that $f$ is increasing (the general case works out essentially the same way but is a little more complicated to talk about). So we are approximating "the area under $f$" by replacing the region under the graph of $f$ from $x=c$ to $x=c+\Delta x$ by a rectangle of height $f(c)$, for $1/\Delta x$ different values of $c$. Now since $f$ is increasing, the difference between our rectangle of height $f(c)$ and the actual area under the graph of $f$ from $c$ to $c+\Delta x$ is at most $\Delta x(f(c+\Delta x)-f(c))$. But since $f$ is (uniformly) continuous, $f(c+\Delta x)-f(c)$ is infinitesimal. So our error is an infinitesimal quantity times $\Delta x$.



So although we are adding up $1/\Delta x$ (an infinite number) different errors to get the total error, each individual error is not just infinitesimal but infinitesimally smaller than $\Delta x$. So it is reasonable to expect that the sum of all of the errors is still infinitesimal.


Saturday, 11 May 2019

calculus - Does $\sum\limits_{n=1}^\infty\frac{1}{\sqrt{n}+\sqrt{n+1}}$ converge?



Does the following series converge or diverge?
$$
\sum\limits_{n=1}^\infty\frac{1}{\sqrt{n}+\sqrt{n+1}}
$$
The methods I have at my disposal are geometric and harmonic series, comparison test, limit comparison test, and the ratio test.


Answer



It is not hard to see that
$$\sum_{n=1}^\infty\frac{1}{\sqrt{n+1}+\sqrt{n}}=\sum_{n=1}^\infty(\sqrt{n+1}-\sqrt{n})$$



As you know, this series is divergent: its partial sums telescope to $\sum_{n=1}^N(\sqrt{n+1}-\sqrt{n})=\sqrt{N+1}-1\to\infty$.


AP Calculus BC - Area integration question




I'm taking the AP Calculus BC Exam next week and ran into this problem with no idea how to solve it. Unfortunately, the answer key didn't provide explanations, and I'd really, really appreciate it if someone could explain how to solve this problem, and why the answer is $\frac{4\ln 2}{5}$. It's a non-calculator problem.




The region is bounded by the curve $$f(x) = \frac{1}{(x+1)(4-x)}$$ and the $x$-axis from $x = 0$ to $x = 3$. What is the area of this region?




I can't seem to get the correct antiderivative of the curve.


Answer



The idea here is to use partial fraction decomposition. We set

$$\frac{1}{(x + 1)(4 - x)} = \frac{A}{x + 1} + \frac{B}{4 - x}$$
Since
\begin{align*}
\frac{A}{x + 1} + \frac{B}{4 - x} & = \frac{A(4 - x)}{(x + 1)(4 - x)} + \frac{B(x + 1)}{(x + 1)(4 - x)}\\
& = \frac{4A - Ax + Bx + B}{(x + 1)(4 - x)}\\
& = \frac{(B - A)x + 4A + B}{(x + 1)(4 - x)}
\end{align*}
we obtain
$$\frac{1}{(x + 1)(4 - x)} = \frac{(B - A)x + 4A + B}{(x + 1)(4 - x)}$$
which is an algebraic identity that holds for every real number except $x = -1$ and $x = 4$. Since the denominators are the same, we may equate the numerators, which yields

$$(B - A)x + 4A + B = 1$$
which is an algebraic identity that holds for every real number except $x = -1$ and $x = 4$. In particular, it holds when $x = 0$ and $x = 1$. Substituting $0$ for $x$ yields
$$4A + B = 1$$
Substituting $1$ for $x$ yields
$$3A + 2B = 1$$
This gives us the system of equations
\begin{alignat*}{3}
4A & + & B & = 1\\
3A & + & 2B & = 1
\end{alignat*}

Solving the system of equations yields $A = 1/5$ and $B = 1/5$. Hence,
$$\int_{0}^{3} \frac{dx}{(x + 1)(4 - x)} = \frac{1}{5}\int_{0}^{3} \left(\frac{1}{x + 1} + \frac{1}{4 - x}\right) dx$$
To make the integration easier, factor out $-1$ from the numerator and denominator of $\frac{1}{4 - x}$ in order to obtain
$$\int_{0}^{3} \frac{dx}{(x + 1)(4 - x)} = \frac{1}{5}\int_{0}^{3} \left(\frac{1}{x + 1} - \frac{1}{x - 4}\right) dx$$
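From here the evaluation finishes in one step, and it matches the stated answer:

$$\frac{1}{5}\int_{0}^{3}\left(\frac{1}{x+1}-\frac{1}{x-4}\right)dx=\frac{1}{5}\Big[\ln|x+1|-\ln|x-4|\Big]_{0}^{3}=\frac{1}{5}\big[\ln 4-(-\ln 4)\big]=\frac{2\ln 4}{5}=\frac{4\ln 2}{5}.$$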


probability - Normalized vector of i.i.d. copies of $X$ uniformly distributed on the sphere means $X$ is normally distributed

Let $X$ be a random variable with $\mathbb{E} X^2 = 1$. Let $X_i$ be i.i.d. copies of $X$ such that
$$
\frac{1}{\sqrt{\sum X_i^2}} \left(X_1, ..., X_N\right)
$$
is uniformly distributed on $\mathbb{S}^{N-1}$. Prove that $X \sim \mathcal{N}(0,1)$ in distribution.

calculus - Calculate a limit of exponential function



Calculate this limit:



$$
\lim_{x \to \infty } \left(\frac{1}{5} + \frac{1}{5x}\right)^{\frac{x}{5}}
$$



I did this:




$$
\left(\frac{1}{5}\right)^{\frac{x}{5}}\left[\left(1+\frac{1}{x}\right)^{x}\right]^\frac{1}{5}
$$



$$
\left(\frac{1}{5}\right)^{\frac{x}{5}}\left(\frac{5}{5}\right)^\frac{1}{5}
$$



$$
\left(\frac{1}{5}\right)^{\frac{x}{5}}\left(\frac{1}{5}\right)^\frac{5}{5}
$$



$$
\lim_{x \to \infty } \left(\frac{1}{5}\right)^\frac{x+5}{5}
$$



$$
\lim_{x \to \infty } \left(\frac{1}{5}\right)^\infty = 0
$$




Now I checked on Wolfram Alpha and it says the limit is $1$.
What did I do wrong? Is this the right approach? Is there an easier way? :)



Edit:
Can someone please show me the correct way for solving this? thanks.



Thanks


Answer



The limit is indeed $0$, but your solution is wrong.

$$\lim_{x\to\infty}\left(\frac15 + \frac1{5x}\right)^{\!x/5}=\sqrt[5\,]{\lim_{x\to\infty}\left(\frac15\right)^{\!x}\lim_{x\to\infty}\left(1 + \frac1x\right)^{\!x}}=\sqrt[5\,]{0\cdot e}=0$$



And WolframAlpha confirms it: https://www.wolframalpha.com/input/?i=%281%2F5%2B1%2F%285x%29%29%5E%28x%2F5%29+as+x-%3Einfty


Friday, 10 May 2019

limits - $\lim\limits_{n \rightarrow +\infty} \frac{\sum\limits_{k=1}^{n} \sqrt[k]{k}}{n}= 1$



I would like to prove that:




$$\lim\limits_{n \rightarrow +\infty} \frac{\sum\limits_{k=1}^{n} \sqrt[k] {k} }{n}= 1$$



I thought to write $\sqrt[k] {k} = e^{\frac{\ln({k})}{k}}$ but I don't know how to continue.


Answer



Use Stolz–Cesàro theorem or a version of it here.



$$\lim\limits_{n \rightarrow +\infty} \frac{\sum\limits_{k=1}^{n+1} \sqrt[k] {k} -\sum\limits_{k=1}^{n} \sqrt[k] {k} }{n+1 - n}=
\lim\limits_{n \rightarrow +\infty}\sqrt[n+1] {n+1} =1$$
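For reference, the version of Stolz–Cesàro being used: if $(y_n)$ is strictly increasing with $y_n\to\infty$ and $\frac{x_{n+1}-x_n}{y_{n+1}-y_n}\to L$, then $\frac{x_n}{y_n}\to L$. Here $x_n=\sum_{k=1}^{n}\sqrt[k]{k}$ and $y_n=n$.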


complex analysis - Show that $\int\nolimits^{\infty}_{0} x^{-1} \sin x \, dx = \frac\pi2$

Show that $\int^{\infty}_{0} x^{-1} \sin x \, dx = \frac\pi2$ by integrating $z^{-1}e^{iz}$ around a closed contour $\Gamma$ consisting of two portions of the real axis, from $-R$ to $-\epsilon$ and from $\epsilon$ to $R$ (with $R > \epsilon > 0$) and two connecting semi-circular arcs in the upper half-plane, of respective radii $\epsilon$ and $R$. Then let $\epsilon \rightarrow 0$ and $R \rightarrow \infty$.



[Ref: R. Penrose, The Road to Reality: a complete guide to the laws of the universe (Vintage, 2005): Chap. 7, Prob. [7.5] (p. 129)]




Note: Marked as "Not to be taken lightly" (i.e. very hard!).



Update: correction: $z^{-1}e^{iz}$ (Ref: http://www.roadsolutions.ox.ac.uk/corrections.html)
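A sketch of the intended computation (the details are the exercise): the integrand $z^{-1}e^{iz}$ is holomorphic inside $\Gamma$, so $\oint_\Gamma z^{-1}e^{iz}\,dz=0$. The large arc contributes nothing as $R\to\infty$ by Jordan's lemma, while on the small arc $e^{iz}\approx 1$, and the clockwise semicircle of $z^{-1}$ contributes $-i\pi$. Hence

$$\mathrm{P.V.}\int_{-\infty}^{\infty}\frac{e^{ix}}{x}\,dx=i\pi,$$

and taking imaginary parts gives $\int_{-\infty}^{\infty}\frac{\sin x}{x}\,dx=\pi$; by evenness, $\int_{0}^{\infty}\frac{\sin x}{x}\,dx=\frac{\pi}{2}$.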

abstract algebra - Find the $[\Bbb{Q}(\sqrt2+\sqrt3)(\sqrt5):\Bbb{Q}(\sqrt2+\sqrt3)]$

We want to find the $[\Bbb{Q}(\sqrt2+\sqrt3)(\sqrt5):\Bbb{Q}(\sqrt2+\sqrt3)]$.



My first thought is to find the minimal polynomial of $\sqrt5$ over $\Bbb{Q}(\sqrt2+\sqrt3)$. And from this, to say that $[\Bbb{Q}(\sqrt2+\sqrt3)(\sqrt5):\Bbb{Q}(\sqrt2+\sqrt3)]=\deg m_{\sqrt5,\Bbb{Q}(\sqrt2+\sqrt3)}(x).$



We take the polynomial $m(x)=x^2-5\in\Bbb{Q}(\sqrt2+\sqrt3)[x].$ This is a monic polynomial which has $\sqrt5\in \Bbb{R}$ as a root. We have to show that this is irreducible over $\Bbb{Q}(\sqrt2+\sqrt3)$, in order to say that $m(x)=m_{\sqrt5,\Bbb{Q}(\sqrt2+\sqrt3)}(x)$. The roots of $m(x)$ are $\pm \sqrt5\in \Bbb{R}.$ So,



$$m(x) \text{ is irreducible over } \Bbb{Q}(\sqrt2+\sqrt3) \iff \pm \sqrt5 \notin \Bbb{Q}(\sqrt2+\sqrt3)=\Bbb{Q}(\sqrt2,\sqrt3)$$




because $\deg m(x) =2.$



And this is the point where I get stuck. I tried using the basis of the $\Bbb{Q}$-vector space $\Bbb{Q}(\sqrt2+\sqrt3)$:
$$A:= \{1,\sqrt2,\sqrt3,\sqrt6 \}$$



in order to claim that $\nexists a,b,c,d\in \Bbb{Q}:\sqrt5=a+b\sqrt2+c\sqrt3+d\sqrt6$ but this doesn't help.



Any ideas please?



Thank you in advance.
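One way to finish (a sketch, working in two quadratic steps instead of with the four-element basis): since $\{1,\sqrt3\}$ is a basis of $\Bbb{Q}(\sqrt2,\sqrt3)$ over $\Bbb{Q}(\sqrt2)$, suppose $\sqrt5=\alpha+\beta\sqrt3$ with $\alpha,\beta\in\Bbb{Q}(\sqrt2)$. Squaring gives

$$5=\alpha^2+3\beta^2+2\alpha\beta\sqrt3,$$

and since $\sqrt3\notin\Bbb{Q}(\sqrt2)$ (which follows by the same squaring trick), we must have $\alpha\beta=0$. If $\beta=0$ then $\sqrt5\in\Bbb{Q}(\sqrt2)$: writing $\sqrt5=a+b\sqrt2$ with $a,b\in\Bbb{Q}$ and squaring forces $ab=0$, and neither $5=a^2$ nor $5=2b^2$ has a rational solution. If $\alpha=0$ then $\sqrt{15}=3\beta\in\Bbb{Q}(\sqrt2)$, which is ruled out the same way. Hence $\pm\sqrt5\notin\Bbb{Q}(\sqrt2,\sqrt3)$, so $m(x)=x^2-5$ is irreducible and the degree is $2$.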

rationality testing - Is there a way to prove numbers irrational in general?

I'm familiar with the typical proof that $\sqrt2\not\in\mathbb{Q}$, where we assume it is equivalent to $\frac ab$ for some integers $a,b$, then prove that both $a$ and $b$ are divisible by $2$, repeat infinitely, proof by contradiction, QED. I'm also familiar with the fact that if you repeat this procedure for any radical, you can similarly prove that each $a^n,b^n$ are divisible by the radicand, where $n$ is the root of the radical (though things get tricky if you don't reduce first). In other words, the proof that $\sqrt2\not\in\mathbb{Q}$ generalizes to prove that any number of the form $\sqrt[m]{n}\not\in\mathbb{Q}$.



Further, since adding or multiplying a rational and an irrational yields an irrational, this is a proof for all algebraic irrationals. (Ex. I can prove $\phi=\frac{1+\sqrt5}{2}$ irrational just by demonstrating that the $\sqrt5$ term is irrational.)



Because this relies on the algebra of radicals, this won't help for transcendental numbers. Is there a proof that generalizes to all irrational numbers, or must each transcendental number be proven irrational independently?

symbolic computation - Need help understanding finite fields / modulo for polynomials

I'm taking a class in finite fields and have not been able to conceptualize how modular arithmetic and finite fields work in polynomial space. I understand the basic premises of modular arithmetic, but can't work out how to actually generate a finite field of polynomials.



For example:





Find all $f(x)$ and $g(x)$ in $\mathbb Z_3[x]$:
$$(x^3 + x +1) f(x) + (x^2 + x +1)g(x) = 1$$




I know conceptually how to solve this sort of equation when the coefficients are integers and $f(x), g(x)$ are simple variables, but I don't know how to generate fields in $\mathbb Z_3[x]$, nor exactly how to use them to solve this sort of equation for polynomials once I have their $\gcd$ in $\mathbb Z_3[x]$.
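To illustrate what the Euclidean algorithm looks like in $\mathbb Z_3[x]$, here is a worked pass for exactly these two polynomials (a computation worth double-checking against your course's conventions). Polynomial long division gives

$$x^3+x+1=(x+2)(x^2+x+1)+(x+2),\qquad x^2+x+1=(x+2)(x+2)+0,$$

so $\gcd(x^3+x+1,\;x^2+x+1)=x+2$ in $\mathbb Z_3[x]$; indeed both polynomials vanish at $x=1$, since $1+1+1\equiv 0 \pmod 3$. Because the gcd is not a unit, it does not divide $1$, so the equation as stated has no solutions $f,g$. Back-substitution instead yields a Bezout identity for the gcd itself:

$$(x^3+x+1)\cdot 1+(x^2+x+1)\cdot(2x+1)=x+2.$$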

Thursday, 9 May 2019

calculus - Finding $\lim_{n \to \infty} \frac{1}{2^{n-1}}\cot\left(\frac{x}{2^n}\right)$





Find: $$\lim_{n \to \infty} \frac{1}{2^{n-1}}\cot\left(\frac{x}{2^n}\right)$$




Can L'Hopital's rule be used to solve this? And should I differentiate with respect to $x$ or $n$?



What I've found is that



\begin{equation}
\lim_{n \to \infty} \frac{1}{2^{n-1}}\cot\left(\frac{x}{2^n}\right) = \lim_{n \to \infty} \frac{\frac{1}{2^{n-1}}\cos\left(\frac{x}{2^n}\right)}{\sin\left(\frac{x}{2^n}\right)}
\end{equation}




which is of the form $\frac{0}{0}$, but I don't know how to go further from here. Any help is appreciated.


Answer



Hint:
$$
\lim_{x\to 0}\frac{\sin x}{x}=1
$$
and
$$
\lim_{x\to 0}\cos x =1.
$$
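Putting the hints together (a sketch): for fixed $x\neq 0$, substitute $u=\frac{x}{2^n}$, so that $u\to 0$ as $n\to\infty$ and $\frac{1}{2^{n-1}}=\frac{2u}{x}$. Then

$$\frac{1}{2^{n-1}}\cot\left(\frac{x}{2^n}\right)=\frac{2}{x}\cdot\cos u\cdot\frac{u}{\sin u}\;\longrightarrow\;\frac{2}{x}\cdot 1\cdot 1=\frac{2}{x}.$$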


Wednesday, 8 May 2019

integration - Solving the Definite Integral $\int_0^{\infty} \frac{1}{t^{\frac{3}{2}}} e^{-\frac{a}{t}} \, \mathrm{erf}(\sqrt{t})\, \mathrm{d}t$



I would like to solve the following integral




$$\int_0^{\infty} \frac{1}{t^{\frac{3}{2}}} e^{-\frac{a}{t}} \, \mathrm{erf}(\sqrt{t})\, \mathrm{d}t$$



with $\operatorname{Re}(a)>0$ and $\operatorname{erf}$ the error function.
Is it possible to give a closed-form solution for this integral? Thank you.



Edit: Maybe this helps
$$\mathrm{L}(\mathrm{erf}(\sqrt{t}),s)=\frac{1}{s \, \sqrt{1+s}}$$
$$\mathrm{L}^{-1}(t^{-\frac{3}{2}} e^{-\frac{a}{t}})=\frac{1}{\sqrt{\pi \, a}}\mathrm{sin}(2 \sqrt{a \, s})$$




with L the Laplace transform.



Therefore it should be
$$\int_0^{\infty} \frac{1}{t^{\frac{3}{2}}} e^{-\frac{a}{t}} \, \mathrm{erf}(\sqrt{t})\, \mathrm{d}t = \int_0^{\infty} \frac{1}{s \, \sqrt{1+s} \, \sqrt{\pi \, a}} \, \mathrm{sin}(2 \sqrt{a \, s}) \mathrm{d}s$$


Answer



Represent the erf as an integral and work a substitution. To wit, the integral is



$$\frac{2}{\sqrt{\pi}} \int_0^1 dv \, \int_0^{\infty} \frac{dt}{t} e^{-(a/t+v^2 t)} $$



To evaluate the inner integral, we sub $y=a/t+v^2 t$. Then the reader can show that




$$\int_0^{\infty} \frac{dt}{t} e^{-(a/t+v^2 t)} = 2 \int_{2 v \sqrt{a}}^{\infty} \frac{dy}{\sqrt{y^2-4 a v^2}} e^{-y}$$



The latter integral is easily evaluated using the sub $y=2 v \sqrt{a} \cosh{w} $ and is equal to



$$2 \int_{2 v \sqrt{a}}^{\infty} \frac{dy}{\sqrt{y^2-4 a v^2}} e^{-y} = 2 \int_0^{\infty} dw \, e^{-2 v \sqrt{a} \cosh{w}} = 2 K_0 \left ( 2 v \sqrt{a} \right )$$



where $K_0$ is the modified Bessel function of the second kind of zeroth order. Now we integrate this expression with respect to $v$ and multiply by the factors outside the integral to get the final result:





$$\begin{align} \int_0^{\infty} dt \, t^{-3/2} e^{-a/t} \operatorname{erf}{\left ( \sqrt{t} \right )} &= \frac{4}{\sqrt{\pi}} \int_0^1 dv \, K_0 \left ( 2 v \sqrt{a} \right ) \\ &= 2 \sqrt{\pi} \left [K_0 \left ( 2 \sqrt{a} \right ) \mathbf{L}_{-1}\left ( 2 \sqrt{a} \right ) + K_1 \left ( 2 \sqrt{a} \right ) \mathbf{L}_{0}\left ( 2 \sqrt{a} \right ) \right ] \end{align}$$




where $\mathbf{L}$ is a Struve function.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hopital's rule? I know that when I use L'Hopital I easily get $$ \lim_{h\rightarrow 0}...