Tuesday 31 May 2016

indeterminate forms - Can we say that $\frac{0}{0}$ is every number?



Suppose we have an equation $ab=0$. This equation holds exactly when $a=0$ or $b=0$.



If $a=0$, then $b=\frac{0}{0}$. That means $b$ could be any number and $ab=0$ would still be true. If the set which groups all the numbers is the complex set, then $b$ would be every number within $\mathbb{C}$, so $\forall z\in\mathbb{C}:b=\frac{0}{0}=z$. Therefore, $\frac{0}{0}$ is every number.



I know it really is not defined as a number, but conceptually it is every number, right?




Is this right, or am I missing something?


Answer



When we write $ab=0$ we treat $a$ and $b$ as specific, unique numbers. So saying that $b$ is all the numbers at once is saying that $b$ is not unique. That leads to a contradiction, so $\frac{0}{0}$ is left undefined.


summation - Prove that for $n \in \mathbb{N}$, $\sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n$



I'm learning the basics of proof by induction and wanted to see if I took the right steps with the following proof:



Theorem: for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $



Base Case:




Let $$ n = 1. $$ Then $$2\cdot 1+1 = 3 = 1^{2}+2\cdot 1, $$ which proves the base case is true.



Inductive Hypothesis:



Assume $$\sum_{k=1}^{n} (2k+1) = n^{2} + 2n $$



Then $$\sum_{k=1}^{n+1} (2k+1) = (n+1)^{2} + 2(n+1) $$
$$\iff (2(n+1) +1)+ \sum_{k=1}^{n} (2k+1) = (n+1)^{2} + 2(n+1) $$
Using the inductive hypothesis on the summation term:

$$\iff(2(n+1) +1)+ n^{2} + 2n = (n+1)^{2} + 2(n+1) $$
$$\iff 2(n+1) = 2(n+1) $$



Hence for $n \in \mathbb{N}, \sum\limits_{k=1}^{n} (2k+1) = n^{2} + 2n $ Q.E.D.



Does this prove the theorem? Or was my use of the inductive hypothesis circular logic?


Answer



Your proof looks fine, but if you know that
$$1+2+...+n=\frac{n(n+1)}{2}$$
then you can evaluate

$$\sum_{k=1}^n(2k+1)=2\sum_{k=1}^n k+\sum_{k=1}^n1=\rlap{/}2\frac{n(n+1)}{\rlap{/}2}+n=n^2+2n$$
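As a quick numerical sanity check of both the inductive claim and the closed-form evaluation above (a sketch in Python; the range tested is arbitrary):

# check sum_{k=1}^{n} (2k+1) == n^2 + 2n for many n
for n in range(1, 200):
    assert sum(2*k + 1 for k in range(1, n + 1)) == n**2 + 2*n
print("verified for n = 1..199")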


complex numbers - How can I compute the limit of this sequence: $\sqrt[n]{\sin n}$?



I need to calculate the limit of the following sequence:



$$\lim _ {n \to \infty} \sqrt[n]{\sin(n)}$$



where the $n$-th root of a negative number is defined as the principal complex root.




I suspect the answer to be $1$, but I do not know how to prove it.


Answer



The problem boils down to proving that $\sin(n)$ cannot be too close to zero, relative to the size of $n$.



We know that $\pi$ is a transcendental number with a finite irrationality measure. In particular, the inequality
$$ \left| \pi-\frac{p}{q}\right| \leq \frac{1}{q^{10}} $$
may hold only for a finite number of rational numbers $\frac{p}{q}$, hence (since $\left|\sin x\right|\geq K\left|x-k\pi\right|$ when $x$ is close to $k\pi$, thanks to Adayah) $\left|\sin(n)\right|$ is greater than $\frac{C}{n^9}$ for some constant $C$ and every sufficiently large $n$. That is enough to ensure that the wanted limit is $1$ by squeezing.
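A numerical illustration of the squeeze (a sketch; since the principal root of a negative number has modulus $|\sin n|^{1/n}$, the modulus is what matters):

import math

# |sin n|^(1/n) should creep up to 1 as n grows
for n in [10, 100, 1000, 10**4, 10**5]:
    print(n, abs(math.sin(n)) ** (1.0 / n))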


calculus - Finding $\lim_{x\to\infty} \frac {(x!)^{\frac 1 x}}{x}$






Find $\displaystyle \lim_{x\to\infty} \frac {(x!)^{\frac 1 x}}{x}$




I have no idea how to solve it. I can bound it inside $(0,1)$ by squeezing, but getting to the solution $\left(\frac 1 e\right)$ seems like it would require a lot more. Is this a known identity?




Note: no integrals nor gamma function.


Answer



Note this
$$ \left( \frac{x!}{x^x} \right)^{1/x} = (a_x)^{1/x} $$



where $a_x = \frac{x!}{x^x}$ and then use the fact that




$$ \lim_{x\to \infty} (a_x)^{1/x} = \lim_{x\to \infty} \frac{a_{x+1}}{a_x} $$





which holds whenever the limit on the right-hand side exists; the evaluation of the limit then becomes easy:



$$ \lim_{x\to \infty} \frac{a_{x+1}}{a_x} = \lim_{x\to \infty} \frac{1}{(1+1/x)^x} = \frac{1}{e}. $$
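A numerical check of the claimed value (a sketch; lgamma is used so that $\ln x!$ does not overflow):

import math

# (x!)^(1/x) / x should approach 1/e ~ 0.367879
for x in [10, 100, 1000, 10000]:
    print(x, math.exp(math.lgamma(x + 1) / x) / x)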


Proving the Cauchy-Schwarz inequality, Spivak's Calculus

I'm working on a problem concerning the Cauchy-Schwarz inequality from Spivak's Calculus, 3rd edition. The problem consists of completing three equivalent proofs of the inequality. In one of the proofs the inequality can be deduced from the fact $(x_1^2+x_2^2)(y_1^2+y_2^2)=(x_1y_1+x_2y_2)^2+(x_1y_2-x_2y_1)^2$ for real numbers $x_1,x_2,y_1$ and $y_2$. Manipulation of this equation together with facts about inequalities leads one to $|x_1y_1+x_2y_2|\leq \sqrt{x_1^2+x_2^2}\sqrt{y_1^2+y_2^2}$. The other proof uses $2xy\leq x^2+y^2$ and the substitutions $x:=\frac{x_A}{\sqrt{x_1^2+x_2^2}}$ and $y:=\frac{y_A}{\sqrt{y_1^2+y_2^2}}$, first for $A=1$ and then for $A=2$.

Here's where my difficulties arise: when doing the second proof, I can arrive at the inequality, but not for the absolute value of $x_1y_1+x_2y_2$. Can anyone work the second proof and see how the absolute value comes into play?




Thanks

analysis - Finding the greatest $m$ such that $m$ divides $n^3 - n$ for all $n$, through induction

I have the following statement.





Find the largest natural number $m$ such that $n^3 - n$ is divisible by $m$ for all $n$ in $\mathbb{N}$. Prove your result.




I have deduced that the greatest natural number $m$ would be $6$, and I am pretty sure that I have to prove this statement through induction. The base case is pretty obvious, but it is the inductive step that is giving me trouble.



Please help!

analysis - Proving an inequality without an integral: $\frac {1}{x+1}\leq \ln (1+x)- \ln (x) \leq \frac {1}{x}$

I would like to prove the following inequality without integration; could you help?



$$\frac {1}{x+1}\leq \ln (1+x)- \ln (x) \leq \frac {1}{x}, \quad x > 0. $$



I can, however, differentiate it.



Thanks in advance.

real analysis - Calculating $\lim\limits_{n \rightarrow \infty} \int_{0}^{\infty}f_n(t)\,dt$




Suppose you have $$f_n: \mathbb{R} \rightarrow \mathbb{R} $$ $$x \mapsto \frac{x^n e^{-x}}{n!} $$



I am asked to find the limit of this function and prove its uniform convergence.



For finding the limit (basically showing that it converges pointwise) I did this:




Using Stirling's formula, I found that $$f_n(x)\sim \frac{x^n e^{-x}}{n^n e^{-n} \sqrt{2 \pi n}} = \left(\frac{x}{n}\right)^n \frac{e^{-x}}{e^{-n}} \frac{1}{\sqrt{2 \pi n}}=\left (\frac{x}{n}\right)^n e^{n-x} \frac{1}{\sqrt{2 \pi n}} = \left(\frac{xe^{1- \frac{x}{n}}}{n}\right)^n \frac{1}{\sqrt{2 \pi n}}$$
Thus it converges pointwise to the null function.





For the uniform convergence, I differentiated the function and looked for where it reaches its maximum.




The derivative of $f_n(x)$ is $$f'_n(x)=\frac{1}{n!}\left(\frac{nx^{n-1}-x^n}{e^x}\right)$$
Thus it equals zero when $x=n$. This allows me to state that the function $f_n$ reaches its maximum at $x=n$, and $f_n(n) \sim \frac{1}{\sqrt{2 \pi n}}$, which tends to $0$.




Now I am asked to find $$ \underset{n \rightarrow \infty}{\lim} \int_{0}^{\infty}f_n(t)\,dt, $$ and thinking logically, I am most likely expected to integrate the limit function, because I proved the uniform convergence. Yet I don't know exactly to what it converges. Can someone help me out?



Answer



I think the goal of the exercise was to give an example of a sequence of functions which converges pointwise and uniformly to zero, yet whose integrals are all equal to $1$.



You have $$\int_0^\infty x^{s-1}e^{-x}\,dx = \Gamma (s),$$ where $\Gamma$ is the Gamma function. For natural $n$ you can find - by integration by parts - that $\Gamma(n+1)=n!$, hence for your problem
$$\forall n\in \Bbb N \quad\int_0^{\infty}\frac{x^ne^{-x}}{n!}dx = 1.$$



Digression. We know that if $f_n\to f$ uniformly on a set $\Omega$, with $f_n$ and $f$ being in $L^1(\Omega)$ and $|\Omega| <\infty$, then $f_n\to f$ in $L^1(\Omega)$. In your exercise, however, we do not have $|\Omega|<\infty$, hence the convergence in $L^1$ is not guaranteed and we build an explicit counterexample.
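To see both phenomena at once numerically, here is a sketch (using mpmath; the split point $x=n$ in the quadrature is simply the location of the peak):

from mpmath import mp, quad, factorial, exp, inf

mp.dps = 15
for n in [1, 5, 20, 50]:
    f = lambda x, n=n: x**n * exp(-x) / factorial(n)
    integral = quad(f, [0, n, inf])  # always 1
    peak = f(n)                      # sup norm ~ 1/sqrt(2*pi*n) -> 0
    print(n, integral, peak)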


Monday 30 May 2016

trigonometry - Verify identity: $\sin(x+1)\sin(x+1) - \sin(x+2)\sin x = \sin^2(1)$

I have the following identity to verify:
$$\sin(x+1)\sin(x+1) - \sin(x+2)\sin x = \sin^2(1).$$



I'm becoming more familiar with sum and difference formulas to some degree, but this one has stumped me.




I don't know if I'm doing it right, even, but I have this so far:
$$(\sin x \cos(1) + \cos x \sin(1))^2 - (\sin x \cos(2) + \cos x \sin(2))(\sin x) = \sin^2(1). $$



I don't want to just ask "how i do dis" and expect an answer. I am trying, but my brain doesn't quite understand all this yet.



Please help! I may be late to reply, I have work to get to here.



Thanks a million,



-Jon
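Before hunting for the right sum formulas, it is reassuring to confirm numerically that the identity really holds for all $x$ (a sketch):

import math, random

# spot-check sin^2(x+1) - sin(x+2)*sin(x) == sin^2(1)
for _ in range(5):
    x = random.uniform(-10.0, 10.0)
    lhs = math.sin(x + 1)**2 - math.sin(x + 2) * math.sin(x)
    print(x, lhs - math.sin(1)**2)   # differences should be ~1e-16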

Sunday 29 May 2016

Why does the arithmetic progression formula $S_n = (a_1 + a_n)\cdot n/2$ work with an odd number of integer members?



Let's consider arithmetic progression with integer numbers.



The arithmetic progression sum is $S_n = (a_1 + a_n)\cdot n/2$, where $a_n=a_1+d(n-1)$.



So $ S_n = (2a_1 + d(n-1))\cdot n/2 = a_1 n + d(n-1)\cdot n/2$.




I cannot understand why it always happens that $d(n-1)\cdot n/2$ is an integer, so that $S_n$ is also always an integer.



Besides, everything seems clear with an even number of progression members: $1 + 2 + 3 + 4 + 5 + 6 = (1+6) + (2+5) + (3+4) = (1+6)\cdot(6 \text{ members}/2)$.



But if the number of progression members is odd ($1 + 2 + 3$), it is unclear why the formula $S_n = (a_1 + a_n)\cdot n/2$ still works perfectly, because ($3$ members$/2$) is not an integer!


Answer



$\frac{d(n-1)n}{2}$ is always an integer because $\frac{(n-1)n}{2}$ is always an integer. This is simply because $n$ and $n-1$ differ by $1$, and so at least one of them must be even.


exponential function - Why is $0^0$ undefined?











I'm wondering why $0^0$ is considered undefined. Why isn't 1 considered a valid solution?



Considering $0^0 = 1$ seems reasonable to me for two reasons:





  1. $\lim_{x \rightarrow 0} x^x = 1$


  2. $a^x$ would be a continuous function




Could you please explain why 1 can't be a solution and maybe provide some examples that show why having $0^0$ undefined is useful?


Answer



Because, as a function $f(x,y)\colon \mathbb{R}^2 \rightarrow \mathbb{R}$, $f(x,y) = x^y$, we have two different values moving toward $f(0,0) = 0^0$. In other words, $f(0^+,0) = 1$ and $f(0,0^+) = 0$.



But beware that there are some places in mathematics which by convention accept one of these values. For example in some parts of combinatorics we have $0^0 = 1$ to ease the definition of some functions.
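The two one-sided paths in the answer are easy to see numerically (a sketch):

# x^y approaching (0,0) along the two axes
for t in [0.1, 0.01, 0.001]:
    print(t, t ** 0.0, 0.0 ** t)   # f(0+, 0) = 1 vs f(0, 0+) = 0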



calculus - Show that $(a_n)_{n\in\mathbb{N}}$ with $a_n := |\sqrt{n+1}| - |\sqrt{n}|$, $n\in\mathbb{N}$, is a Cauchy sequence.

It is sufficient to show that $|\sqrt{n+1}|-|\sqrt{n}|$ converges, since all convergent sequences are Cauchy sequences. What I've done so far:
$$||\sqrt{n+1}|-|\sqrt{n}||= |\sqrt{n+1}|-|\sqrt{n}|=\sqrt{n+1}-\sqrt{n}$$
since $n\in\mathbb{N}$, so $\sqrt{n+1}$ and $\sqrt{n}$ are positive.
After that, I can't find an upper estimate with which I eventually arrive at $<\epsilon$.

abstract algebra - 1-1 Correspondence $(S times T) times U$ and $S times (T times U)$




(Herstein section 1.2 problem 3)



If $S, T, U$ are nonempty sets, prove that there exists a one-to-one correspondence between $(S \times T) \times U$ and $S \times (T \times U)$.



An element of $(S \times T) \times U$ is of the form $((s,t),u)$ and for $S \times (T \times U)$ an element is of the form $(s,(t,u))$.



I am unsure of such a function. The only thing that comes to mind is: given an element of the form $((s,t),u)$, take the $T$-value from $S \times T$, then take the $U$-value from $(S \times T) \times U$ to obtain an element of $T \times U$, and then pair that with the value from $S$ to get an element of $S \times (T \times U)$.



But I am highly unsure of this function as it is literally looking at the form of the elements and essentially "swapping the parentheses".



Answer



The function $f:(S\times T)\times U\to S\times(T\times U)$ prescribed by $$\langle\langle s,t\rangle,u\rangle\mapsto\langle s,\langle t,u\rangle\rangle$$ is evidently surjective and can also be proved to be injective (can you do that?).



So $f$ is a bijection and hence represents a one-to-one correspondence between the domain and codomain of $f$.


Saturday 28 May 2016

real analysis - A necessary condition to $F'(x)=f(x)$ for a continuous function $f$


Theorem: Consider ,



$$F(x)=\int_a^xf(t)\,dt$$



If the function $f:[a,b]\to \mathbb R$ is continuous then , $F(x)$ is differentiable and $F'(x)=f(x).$




I know that the continuity condition on $f$ is a sufficient condition.




That is, there can exist a discontinuous function $f$ for which $F'(x)=f(x)$ still holds.



My Question:



Does there exist a necessary condition for this?



$$OR$$



Which extra condition must be imposed on $f$ so that $F'(x)=f(x)$ necessarily holds?

Using induction to prove an inequality for a sequence of numbers



We have the sequence $d_n = \begin{cases} 1 &\text{ if } n=0 \\
\frac{n}{d_{n-1}} &\text{ if } n>0 \end{cases}$




for all natural numbers $n$.
($d_{n-1}$ is the previous number of the sequence.)



Examples: $d_0 = 1$, $d_1 = 1$, $d_2 = 2$, $d_3 = \frac{3}{2}$, $d_4 = \frac{8}{3} \dots$



I have to prove using induction that $\forall n \in \mathbb N \setminus \{0\}$, $d_{2n-1}$ $\leq$ $\sqrt{2n-1}$.



So far, I've figured out the pattern that for every $n \geq 2$, $d_{2n-1} = d_{2n-3} \, \frac{2n-1}{2n-2}$,
i.e. $d_5 = d_3 \, \frac{5}{4}$.



In the hints section, they told me to write $d_{2k+1}$ in terms of $d_{2k-1}$ and to use the difference of squares: $(2k-1)(2k+1) = 4k^2 - 1$ for the induction step.




Any hints/tips/advice on how to solve this problem is much appreciated!
Thank you!


Answer



Your observation $$d_{2n-1} = d_{2n-3} \, \frac{2n-1}{2n-2}$$ becomes, after the index shift $n = k+1$, $$d_{2k+1} = d_{2k-1} \, \frac{2k+1}{2k}.$$



\begin{align}
d_{2k-1} \, \frac{2k+1}{2k} &\le \sqrt{2k-1} \, \frac{\bbox[yellow, 2px]{2k+1}}{2k} \tag{induction hypothesis}\\
&\le \color{blue}{\sqrt{2k-1}} \, \frac{\bbox[yellow, 2px]{\color{blue}{\sqrt{2k+1}}}}{2k} \, \bbox[yellow, 2px]{\sqrt{2k+1}} \\
&\le \frac{\color{blue}{\sqrt{4k^2-1}}}{2k} \sqrt{2k+1} \tag{hint} \\

&\le 1 \cdot \sqrt{2k+1} = \sqrt{2k+1} \tag*{$\square$}
\end{align}
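A quick numerical look at the sequence confirms the inequality being proved (a sketch):

import math

# d_0 = 1, d_n = n / d_{n-1}; check d_{2n-1} <= sqrt(2n-1)
d = [1.0]
for n in range(1, 40):
    d.append(n / d[-1])
for n in range(1, 15):
    print(2*n - 1, d[2*n - 1], math.sqrt(2*n - 1))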


sequences and series - How to show the existence of the limit $\lim_{n\to \infty}\frac{x_n}{n}$ if $x_n$ satisfies $x^{-n}=\sum_{k=1}^\infty (x+k)^{-n}$?



Suppose $x_n$ is the only positive solution to the equation $x^{-n}=\sum\limits_{k=1}^\infty (x+k)^{-n}$,how to show the existence of the limit $\lim_{n\to \infty}\frac{x_n}{n}$?




It is easy to see that $\{x_n\}$ is increasing. In fact, the given equation is equivalent to
$$1=\sum_{k=1}^\infty(1+\frac{k}{x})^{-n} \tag{*}$$
If $x_n\ge x_{n+1}$, then notice that for any fixed $k$, $(1+\frac{k}{x})^{-n}$ is increasing in $x$; thus we get
$$\frac{1}{(1+\frac{k}{x_n})^n}\ge \frac{1}{(1+\frac{k}{x_{n+1}})^n}>\frac{1}{(1+\frac{k}{x_{n+1}})^{n+1}}$$
Summing over all $k$ from $1$ to $\infty$, we see
$$\sum_{k=1}^\infty\frac{1}{(1+\frac{k}{x_n})^n}>\sum_{k=1}^\infty\frac{1}{(1+\frac{k}{x_{n+1}})^{n+1}}$$
but from $(*)$ both series above equal $1$, which is a contradiction!



But it seems hard to show the existence of $\lim_{n\to \infty}\frac{x_n}{n}$. What I can see by the area principle is




$$\Big|\sum_{k=1}^\infty\frac{1}{(1+\frac{k}{x_n})^n}-\int_1^\infty \frac{dx}{(1+\frac{x}{x_n})^{n}}\Big|<\frac{1}{(1+\frac1{x_n})^n}$$
or
$$\Big|1-\frac{x_n}{n-1}(1+\frac{1}{x_n})^{1-n}\Big|<\frac{1}{(1+\frac1{x_n})^n}$$


Answer



For any $n \ge 2$, consider the function $\displaystyle\;\Phi_n(x) = \sum_{k=1}^\infty \left(\frac{x}{x+k}\right)^n$.



It is easy to see $\Phi_n(x)$ is an increasing function over $(0,\infty)$.
For small $x$, it is bounded from above by $x^n \zeta(n)$ and hence decreases to $0$ as $x \to 0$.
For large $x$, we can approximate the sum by an integral, and $\Phi_n(x)$ diverges like $\displaystyle\;\frac{x}{n-1}$ as $x \to \infty$. By definition, $x_n$ is the unique root of $\Phi_n(x_n) = 1$. Let $\displaystyle\;y_n = \frac{x_n}{n}$.




For any $\alpha > 0$, applying AM $\ge$ GM to $n$ copies of $1 + \frac{\alpha}{n}$ and one copy of $1$, we obtain



$$\left(1 + \frac{\alpha}{n}\right)^{n/(n+1)} < \frac1{n+1} \left[n\left(1 + \frac{\alpha}{n}\right) + 1 \right] = 1 + \frac{\alpha}{n+1}$$
The inequality is strict because the $n+1$ numbers are not identical. Raising both sides to the $(n+1)$-st power and taking reciprocals, we get
$$\left( \frac{n}{n + \alpha} \right)^n > \left(\frac{n+1}{n+1 + \alpha}\right)^{n+1}
$$



Replacing $\alpha$ by $\displaystyle\;\frac{k}{y_n}$ for a generic positive integer $k$, we obtain



$$\left( \frac{x_n}{x_n + k} \right)^n = \left( \frac{n y_n}{n y_n + k} \right)^n > \left(\frac{(n+1)y_n}{(n+1)y_n + k}\right)^{n+1}$$

Summing over $k$ and using the definition of $x_n$, we find



$$\Phi_{n+1}(x_{n+1}) = 1 = \Phi_n(x_n) > \Phi_{n+1}((n+1)y_n)$$



Since $\Phi_{n+1}$ is increasing, we obtain $x_{n+1} > (n+1)y_n \iff y_{n+1} > y_n$.
This means $y_n$ is an increasing sequence.



We are going to show $y_n$ is bounded from above by $\frac32$
(see update below for a more elementary and better upper bound).
For simplicity, let us abbreviate $x_n$ and $y_n$ as $x$ and $y$. By their definition, we have




$$\frac{2}{x^n} = \sum_{k=0}^\infty \frac{1}{(x+k)^n}$$



By the Abel-Plana formula, we can transform the sum on the RHS into integrals. The end result is



$$\begin{align}\frac{3}{2x^n} &= \int_0^\infty \frac{dk}{(x+k)^n} +
i \int_0^\infty \frac{(x+it)^{-n} - (x-it)^{-n}}{e^{2\pi t} - 1} dt\\
&=\frac{1}{(n-1)x^{n-1}}
+ \frac{1}{x^{n-1}}\int_0^\infty \frac{(1+is)^{-n} - (1-is)^{-n}}{e^{2\pi x s}-1} ds
\end{align}$$

Multiplying both sides by $nx^{n-1}$ and replacing $s$ by $s/n$, we obtain



$$\begin{align}\frac{3}{2y} - \frac{n}{n-1} &=
i \int_0^\infty \frac{(1 + i\frac{s}{n})^{-n} - (1-i\frac{s}{n})^{-n}}{e^{2\pi ys} - 1} ds\\
&= 2\int_0^\infty \frac{\sin\left(n\tan^{-1}\left(\frac{s}{n}\right)\right)}{\left(1 + \frac{s^2}{n^2}\right)^{n/2}} \frac{ds}{e^{2\pi ys}-1}\tag{*1}
\end{align}
$$

For the integral on the RHS, if we want its integrand to be negative, we need




$$n\tan^{-1}\left(\frac{s}{n}\right) > \pi
\implies \frac{s}{n} > \tan\left(\frac{\pi}{n}\right) \implies s > \pi$$



By the time $s$ reaches $\pi$, the factor $\frac{1}{e^{2\pi ys} - 1}$ has already dropped to a very small value. Numerically, we know $y_4 > 1$, so for $n \ge 4$ and $s \ge \pi$, we have



$$\frac{1}{e^{2\pi ys} - 1} \le \frac{1}{e^{2\pi^2} - 1} \approx 2.675 \times 10^{-9}$$



This implies the integral is positive. For $n \ge 4$, we can deduce



$$\frac{3}{2y} \ge \frac{n}{n-1} \implies y_n \le \frac32\left(1 - \frac1n\right) < \frac32$$




Since $y_n$ is increasing and bounded from above by $\frac32$, the limit
$y_\infty \stackrel{def}{=} \lim_{n\to\infty} y_n$ exists and is $\le \frac32$.



For fixed $y > 0$, with the help of DCT, one can show the last integral of $(*1)$
converges.
This suggests $y_\infty$ is a root of the following equation near $\frac32$:



$$\frac{3}{2y} = 1 + 2\int_0^\infty \frac{\sin(s)}{e^{2\pi ys} - 1} ds$$



According to DLMF,

$$\int_0^\infty e^{-x} \frac{\sin(ax)}{\sinh x} dx = \frac{\pi}{2}\coth\left(\frac{\pi a}{2}\right) - \frac1a\quad\text{ for }\quad a \ne 0$$



We can transform our equation to



$$\frac{3}{2y} = 1 + 2\left[\frac{1}{4y}\coth\left(\frac{1}{2y}\right) - \frac12\right]
\iff \coth\left(\frac{1}{2y}\right) = 3$$



This leads to $\displaystyle\;y_\infty = \frac{1}{\log 2}$.



This is consistent with the finding of another answer (currently deleted):





If $L_\infty = \lim_{n\to\infty}\frac{n}{x_n}$ exists, then $L_\infty = \log 2$.




To summarize, the limit $\displaystyle\;\frac{x_n}{n}$ exists and should equal $\displaystyle\;\frac{1}{\log 2}$.






Update




It turns out there is a more elementary proof that $y_n$ is bounded from above by the optimal bound $\displaystyle\;\frac{1}{\log 2}$.



Recall that for any $\alpha > 0$ we have $1 + \alpha < e^\alpha$. Substituting
$\alpha = \frac{k}{n}\log 2$ for $n \ge 2$ and $k \ge 1$, we get



$$\frac{n}{n + k\log 2} = \frac{1}{1 + \frac{k}{n}\log 2} > e^{-\frac{k}{n}\log 2} = 2^{-\frac{k}{n}}$$



This leads to




$$\Phi_n\left(\frac{n}{\log 2}\right)
= \sum_{k=1}^\infty \left(\frac{n}{n + k\log 2}\right)^n
> \sum_{k=1}^\infty 2^{-k}
= 1 = \Phi_n(x_n)
$$

Since $\Phi_n(x)$ is increasing, this means
$\displaystyle\;\frac{n}{\log 2} > x_n$ and $y_n$ is bounded from above by $\displaystyle\;\frac{1}{\log 2}$.
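Since $\Phi_n$ is increasing in $x$, the root $x_n$ is easy to bracket numerically, and one can watch $y_n = x_n/n$ climb toward $1/\log 2 \approx 1.4427$ (a sketch; the truncation at $5000$ terms is ad hoc but far beyond the point where the summands are negligible):

import math

def phi(x, n, terms=5000):
    return sum((x / (x + k))**n for k in range(1, terms))

def x_n(n):
    lo, hi = 1e-9, 10.0 * n   # phi < 1 at lo, phi > 1 at hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if phi(mid, n) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in [5, 10, 20, 40]:
    print(n, x_n(n) / n)
print("1/log(2) =", 1.0 / math.log(2.0))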


Intuition behind logarithm inequality: $1 - \frac1x \leq \log x \leq x-1$



One of the fundamental inequalities for the logarithm is:
$$ 1 - \frac1x \leq \log x \leq x-1 \quad\text{for all $x > 0$},$$
which you may prefer to write in the form
$$ \frac{x}{1+x} \leq \log{(1+x)} \leq x \quad\text{for all $x > -1$}.$$



The upper bound is very intuitive -- it's easy to derive from Taylor series as follows:

$$ \log(1+x) = \sum_{i=1}^\infty (-1)^{n+1}\frac{x^n}{n} \leq (-1)^{1+1}\frac{x^1}{1} = x.$$



My question is: "what is the intuition behind the lower bound?" I know how to prove the lower bound of $\log (1+x)$ (maybe by checking the derivative of the function $f(x) = \frac{x}{1+x}-\log(1+x)$ and showing it's decreasing) but I'm curious how one can obtain this kind of lower bound. My ultimate goal is to come up with a new lower bound on some logarithm-related function, and I'd like to apply the intuition behind the standard logarithm lower-bound to my setting.


Answer



Take the upper bound:
$$
\ln {x} \leq x-1
$$
Apply it to $1/x$:
$$

\ln \frac{1}{x} \leq \frac{1}{x} - 1
$$
This is the same as
$$
\ln x \geq 1 - \frac{1}{x}.
$$


reference request - How to calculate the sum of a general series

In class we learned how to test the convergence of series and how to calculate the sums of arithmetic and geometric series (if they exist), but are there methods to actually calculate the values of non-arithmetic, non-geometric series? Can the value of any series be calculated, or only those of a certain type? And can it still be done if they're infinite? So my question is: are there any general methods for calculating the values of certain non-arithmetic/geometric series (references would be appreciated as well)? [It seemed like a lot of effort for Euler just to calculate the sum from $n=1$ to infinity of $\frac1{n^2}$.] Thanks in advance!

elementary set theory - Union of countable sets is countable.

Is my proof that the union of countable sets is countable correct?



If $A_1, A_2, A_3,\dots, A_n$ is a collection of countable sets, then the union
$$A_1\cup A_2\cup A_3 \cup \dots A_n$$
is countable as well.



Proof. Base case: Consider the set
$$B=A_2\setminus A_1$$
Clearly, $B\subseteq A_2$ (so $B$ is countable) and $A_1\cup B = A_1\cup A_2$. Write $A_1 = \{a_1, a_2, a_3, \dots\}$ (taking $A_1$ to be countably infinite; the finite case is similar).




If $B$ is finite, then
$$B= \{b_1, b_2, b_3, b_4, \dots, b_j \}\quad j\in\mathbb{N}_0$$
and so we can construct a bijection
$$f(n)=\begin{cases}
b_n\quad n\leq j\\
a_{n-j}\quad n> j
\end{cases}$$
If $B$ is infinite, then we can construct a bijection
$$f(n)=\begin{cases}
b_{\frac n2}\quad n\text{ even}\\

a_{\frac{n+1}{2}}\quad n\text{ odd}
\end{cases}$$
Now, suppose the statement holds for $n= k\geq 2$, that is,
$$A_1\cup A_2\cup A_3 \cup \dots A_k$$ is a countable set. Observe that
$$(A_1\cup A_2\cup A_3 \cup \dots A_k)\cup A_{k+1}$$
is a union of two countable sets which, by the base case, is also countable. Thus, by induction, the statement holds for all $n\in\mathbb{N}.\qquad\square$

multivariable calculus - What are the real and imaginary parts of the complex function?




I am asked to find the real and imaginary parts of this specific complex function:



$f(z)=\sin(z)+i(3z+2)$.
So I write $z$ as $z=x+iy$.



Everything seemed clear till I met Mr. Sinus:



$u+iv= \sin(x+iy)+i(3(x+iy)+2)$




and I don't really know how to separate the real and imaginary parts of the $\sin(x+iy)$ term.



Need hints...


Answer



Hint:



use the addition formula $\sin(x+iy)=\sin x \cos(iy)+\cos x \sin(iy)$ and remember that $ \cos(iy)=\cosh(y)\;$ and $ \sin(iy)=i\sinh y$.
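Carrying the hint one step further (a worked step the asker can verify):

$$\sin(x+iy)=\sin x\cosh y+i\cos x\sinh y,$$

so with $z=x+iy$,

$$u=\sin x\cosh y-3y,\qquad v=\cos x\sinh y+3x+2.$$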


abstract algebra - Multiplicative group modulo polynomials



When working over $\mathbb{Z}$, it is well known what the structure of the multiplicative group $(\mathbb{Z}/n\mathbb{Z})^{\times}$ exactly is:
If $n=p$ for a prime $p$, then $(\mathbb{Z}/p\mathbb{Z})^{\times}$ is the cyclic group $C_{p-1}$ of order $p-1$.
If $n=p^e$ for an odd prime $p$, then $(\mathbb{Z}/p^e\mathbb{Z})^{\times}$ is again a cyclic group $C_{\phi(p^e)}=C_{p^e-p^{e-1}}$.
If $n=2^e$ for $e \geq 2$, then $(\mathbb{Z}/2^e\mathbb{Z})^{\times}$ is no longer a cyclic group, but a product of two cyclic groups. It is of the form $C_{2} \times C_{2^{e-2}}$.
For a more general integer $n$, we can simply use the Chinese remainder theorem to obtain the precise structure of $(\mathbb{Z}/n\mathbb{Z})^{\times}$ using the above results.




My question is:




Is there a similar result for the multiplicative group of a quotient ring of $\mathbb{F}_{p}[X]$ where $\mathbb{F}_{p}$ is the finite field of $p$ elements (where $p$ is a prime power)? That is, what is the structure of
$$(\mathbb{F}_{p}[X]/\langle f(X)\rangle)^{\times}$$
for a polynomial $f(X) \in \mathbb{F}_{p}[X]$?




If $f(X)$ is irreducible, then this group is again cyclic (since we know that the multiplicative group of a finite field is cyclic). But what about the more general case? Again, we can use Chinese remainder theorem to reduce the general question to the case of $f(X)$ being a power of an irreducible. So what do we know about the structure of
$$(\mathbb{F}_{p}[X]/\langle f(X)^{e}\rangle)^{\times}$$

for an irreducible $f(X) \in \mathbb{F}_{p}[X]$ and $e \geq 2$?


Answer



what do we know about the structure of
$({\mathbb F}_p[X]/\langle f(X)^e\rangle)^{\times}$
for an irreducible $f(X)\in \mathbb F_p[X]$ and $e\geq 2$?



Assuming that $\deg(f) = n$,
I claim that
$({\mathbb F}_p[X]/\langle f(X)^e\rangle)^{\times}$
is isomorphic to




$$
\mathbb Z_{p^n-1}\times \left(
\mathbb Z_{p}^{n(e-2\lceil\frac{e}{p}\rceil+
\lceil\frac{e}{p^2}\rceil)}\times
\mathbb Z_{p^2}^{n(\lceil\frac{e}{p}\rceil
-2\lceil\frac{e}{p^2}\rceil+
\lceil\frac{e}{p^3}\rceil)}\times
\mathbb Z_{p^3}^{n(\lceil\frac{e}{p^2}\rceil
-2\lceil\frac{e}{p^3}\rceil+

\lceil\frac{e}{p^4}\rceil)}\times \cdots\right).
$$



The product looks infinite, but the exponents on the
factors eventually become zero.
For example, the group
$\left(\mathbb F_3[x]/\langle(x^2+1)^{13}\rangle\right)^{\times}$ is
isomorphic to $\mathbb Z_8\times \left(\mathbb Z_3^{10}\times
\mathbb Z_9^4\times \mathbb Z_{27}^2\right)$.







Let $R = ({\mathbb F}_p[X]/\langle f(X)^e\rangle)$ and
$J = \langle f(X)\rangle$.
Reduction modulo $J$ yields a homomorphism of $R^{\times}$
onto $(R/J)^{\times}$ with kernel $1+J$.
The image $(R/J)^{\times}$ is isomorphic to
$\mathbb F_{p^n}^{\times}\cong \mathbb Z_{p^n-1}$,
which has order that is relatively prime to the order
of the kernel $|1+J| = |J| = p^{n(e-1)}$.

Thus
$$R^{\times} \cong (R/J)^{\times}\times (1+J)\cong
\mathbb Z_{p^n-1}\times (1+J).$$
It remains to determine the structure of the $p$-group
$(1+J)$.



The structure of a finite abelian $p$-group $A$ is determined
by the orders of the annihilators



$$

A[p^k]:= \{a\in A\;|\;p^ka = 0\}.
$$



These orders are not so hard to determine
when $A = 1+J$.
That is, for $\alpha\in J$, the element
$1+\alpha\in 1+J$ is annihilated by $p^k$ exactly
when $1=(1+\alpha)^{p^k} = 1+\alpha^{p^k}$, which happens
exactly when $f^e$ divides $\alpha^{p^k}$, which
happens exactly when $\alpha\in \langle f^{\lceil\frac{e}{p^k}\rceil}\rangle$.

Thus



$$|A[p^k]| = |\langle f^{\lceil\frac{e}{p^k}\rceil}\rangle|
= p^{n\left(e-\lceil\frac{e}{p^k}\rceil\right)}.$$



What remains is to determine the $m_i$'s in the expression



$$
B = \mathbb Z_p^{m_1}\times
\mathbb Z_{p^2}^{m_2}\times
\mathbb Z_{p^3}^{m_3}\times \cdots
$$



if this general abelian $p$-group has the same size annihilators as $A=1+J$;
i.e., if $|B[p^k]| = |A[p^k]|$ for all $k$.



One computes that



$$
\begin{array}{rl}

|B[p]| &= p^{m_1+m_2+m_3+m_4+\cdots}\\
|B[p^2]| &= p^{m_1+2m_2+2m_3+2m_4+\cdots}\\
|B[p^3]| &= p^{m_1+2m_2+3m_3+3m_4+\cdots}\\
&\textrm{ETC.}
\end{array}$$
Thus we need to solve
the equations



$$
\begin{array}{rl}

m_1+m_2+m_3+m_4+\cdots&=n\left(e-\lceil\frac{e}{p}\rceil\right)\\
m_1+2m_2+2m_3+2m_4+\cdots&=n\left(e-\lceil\frac{e}{p^2}\rceil\right)\\
m_1+2m_2+3m_3+3m_4+\cdots&=n\left(e-\lceil\frac{e}{p^3}\rceil\right)\\
\textrm{ETC.}&
\end{array}
$$



The solution is $m_i =
n\left(
\lceil\frac{e}{p^{i-1}}\rceil-2\lceil\frac{e}{p^i}\rceil

+\lceil\frac{e}{p^{i+1}}\rceil
\right)$. Therefore



$$
\begin{array}{rl}
({\mathbb F}_p[X]/\langle f(X)^e\rangle)^{\times}&=R^{\times}\\
&\cong (R/J)^{\times}\times (1+J)\\
&\cong \mathbb Z_{p^n-1}\times \left(
\mathbb Z_{p}^{n(e-2\lceil\frac{e}{p}\rceil+
\lceil\frac{e}{p^2}\rceil)}\times

\mathbb Z_{p^2}^{n(\lceil\frac{e}{p}\rceil
-2\lceil\frac{e}{p^2}\rceil+
\lceil\frac{e}{p^3}\rceil)}\times
\mathbb Z_{p^3}^{n(\lceil\frac{e}{p^2}\rceil
-2\lceil\frac{e}{p^3}\rceil+
\lceil\frac{e}{p^4}\rceil)}\times \cdots\right).
\end{array}
$$


algebra precalculus - The polynomial concept needs to include both variables and constants?




As in Wikipedia:




In mathematics, a polynomial is an expression of finite length constructed from variables (also known as indeterminates) and constants.




So, it's only considered a polynomial if it has both variables and constants?



$x^2 + x$ or $x + x$ are not polynomials (as they only have variables)?



Answer



As Qiaochu pointed out, the coefficients of the polynomials are the constants. Hence, in a polynomial like
$x^2 + x + 5$, the constants are:
$1$ for $x^2$,
$1$ for $x^1 = x$
and $5$ for $x^0 = 1$. $3$ constants for the $3$ terms.


calculus - Polygamma function series: $\sum_{k=1}^{\infty }\left(\Psi^{(1)}(k)\right)^2$



Applying Copson's inequality, I found:

$$S=\displaystyle\sum_{k=1}^{\infty }\left(\Psi^{(1)}(k)\right)^2\lt\dfrac{2}{3}\pi^2$$ where
$\Psi^{(1)}(k)$ is the polygamma function.
Is any sharper bound known for the sum $S$?
Thanks.


Answer



The upper bound can be improved using an asymptotic series.



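The slack in the bound can also be seen numerically (a sketch; mpmath's polygamma and nsum are assumed available):

from mpmath import mp, polygamma, nsum, inf, pi

mp.dps = 20
S = nsum(lambda k: polygamma(1, k)**2, [1, inf])
print(S)               # the series itself
print(2 * pi**2 / 3)   # the Copson bound from the question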


calculus - Asymptotic behavior of $\cos(\sqrt{4n+1}\,x)-\cos(\sqrt{4n+\alpha}\,x)$




While reading a physics paper I came across the asymptotic behavior of $\cos(\sqrt{4n+1}\,x)-\cos(\sqrt{4n+\alpha}\,x)$, and it was written that this is equal to $O(n^{-1/4})$ for any real $\alpha$. I tried to prove this by considering the Taylor expansion, but I couldn't get any result.



Any help is appreciated



Clip from paper



paper:http://www.sciencedirect.com/science/article/pii/S0375960111002970


Answer



$$\begin{align} \cos{(\sqrt{4 n+1}\, x)} - \cos{(\sqrt{4 n+\alpha}\, x)} &= 2 \sin{\left ( \frac{\sqrt{4 n+\alpha}-\sqrt{4 n+1}}{2} x \right )} \sin{\left ( \frac{\sqrt{4 n+\alpha}+\sqrt{4 n+1}}{2} x \right )}\\ &\sim \frac{\alpha - 1}{2}\, x\, \frac{\sin{(2 \sqrt{n} x)}}{2 \sqrt{n}} \qquad (n \rightarrow \infty) \end{align}$$




No idea where you got the $O(n^{-1/4})$ thing.


Friday 27 May 2016

calculus - given $S_n$, find $a_n$ and the sum of $a_n$

I am given $$s_n=\frac{n-1}{n+1};$$ find $a_n$ and $\sum_{n=1}^\infty a_n$.




I use $a_n = s_n-s_{n-1}$ and got $$a_n=\frac{2}{n(n+1)}$$



Then the theorem says



$$\sum_{n=1}^\infty a_n= \lim_{n\to{\infty}}s_n$$



and the answer is $1$ as $n$ approaches infinity.



I also know that I can change $a_n$ into a telescoping sum and solve it.

$$\lim_{n\to\infty} \frac{2}{n}-\frac{2}{n+1}$$



it becomes $$\lim_{n\to\infty}2-\frac{2}{n+1}$$



the answer is $2$ as $n$ approaches infinity.



Now, I don't understand why the two are not identical. Did I make a theoretical error in my reasoning?

real analysis - Integral $ \int_{0}^{\infty} \ln x\left[\ln \left( \frac{x+1}{2} \right) - \frac{1}{x+1} - \psi \left( \frac{x+1}{2} \right) \right] \mathrm{d}x $





Prove That : $$ \int_{0}^{\infty} \ln x\left[\ln \left( \dfrac{x+1}{2} \right) - \dfrac{1}{x+1} - \psi \left( \dfrac{x+1}{2} \right) \right] \mathrm{d}x = \dfrac{\ln^2 2}{2}+\ln2\cdot\ln\pi-1 $$



where $\psi(z)$ denotes the Digamma Function.




This integral arose from my attempt to find an alternate solution to Problem 5, i.e,



$$ {\large\int}_0^\infty\frac{\ln\left(x^2+1\right)\,\arctan x}{e^{\pi x}-1}dx=\frac{\ln^22}2+\ln2\cdot\ln\pi-1 $$








Here's my try : We have the identity,



$$ \displaystyle \int_0^\infty \frac{\ln y}{(y+a)^2 + b^2}\,\mathrm{d}y \; = \; \tfrac{1}{2b}\,\tan^{-1}\tfrac{b}{a}\,\ln(a^2+b^2) $$



Since substituting $y \mapsto \dfrac{a^2+b^2}{y}$ gives,



$$\displaystyle \int_0^\infty \frac{\ln y}{(y+a)^2+b^2} \, \mathrm{d}y =\ln(a^2+b^2)\int_0^\infty \frac{dy}{y^2+2ay+a^2+b^2} - \int_0^\infty \frac{\ln y}{(y+a)^2+b^2} \, \mathrm{d}y $$




$$ \implies \displaystyle \int_0^\infty \frac{\ln y}{(y+a)^2 + b^2}\,\mathrm{d}y \; = \; \tfrac{1}{2b}\,\tan^{-1}\tfrac{b}{a}\,\ln(a^2+b^2) $$



Putting $a=1$ and $b=x$, we have,



$ \displaystyle \int_0^\infty \frac{2x \ln y}{(y+1)^2 + x^2}\,\mathrm{d}y \; = \; \,\,\ln(1+x^2) \ \tan^{-1}x \tag{1} $



Now, we have to prove,



$$\displaystyle {\large\int}_0^\infty\frac{\ln\left(x^2+1\right)\,\tan^{-1}x}{e^{\pi x}-1} \mathrm{d}x=\frac{\ln^22}2+\ln2\cdot\ln\pi-1$$




Let,



$$\displaystyle \text{I} = {\large\int}_0^\infty\frac{\ln\left(x^2+1\right)\,\tan^{-1}x}{e^{\pi x}-1} \mathrm{d}x $$



$$\displaystyle = \int_{0}^{\infty} \int_{0}^{\infty} \frac{2x \ln y}{[(y+1)^2 + x^2][e^{\pi x} - 1]} \mathrm{d}x \ \mathrm{d}y \quad (\text{From 1}) \tag{2}$$



The inner integral is of the form,



$$ \displaystyle \text{J} = \int_{0}^{\infty} \dfrac{x}{(x^2+a^2)(e^{\pi x} - 1)} \ \mathrm{d}x \ ; \ a = (y+1)$$




I have proved here that,



$\displaystyle \int_{0}^{\infty} \dfrac{\log(1-e^{-2a\pi x})}{1+x^2} \mathrm{d}x = \pi \left[\dfrac{1}{2} \log (2a\pi ) + a(\log a - 1) - \log(\Gamma(a+1)) \right] \tag{3}$



Differentiating both sides w.r.t. $a$, substituting $ a \mapsto \frac{a}{2} $ and $ x \mapsto \frac{x}{a} $, we get,



$\displaystyle \int_{0}^{\infty} \dfrac{x}{(x^2+a^2)(e^{\pi x} - 1)} \ \mathrm{d}x = \dfrac{1}{2} \left[ \dfrac{1}{a} + \ln \left( \dfrac{a}{2} \right) - \psi \left( \dfrac{a}{2} + 1 \right) \right] \tag{4}$



Putting $(4)$ in $(2)$, we have,




$$ \displaystyle \text{I} = \int_{0}^{\infty} \left[ \dfrac{\ln y}{y+1} + \ln y \ln \left( \dfrac{y+1}{2} \right) - \ln y \ \psi \left( \dfrac{y+1}{2} + 1 \right) \right] \mathrm{d}y $$



$ = \displaystyle \int_{0}^{\infty} \ln x\left[\ln \left( \dfrac{x+1}{2} \right) - \dfrac{1}{x+1} - \psi \left( \dfrac{x+1}{2} \right) \right] \mathrm{d}x \tag{*}$







Since the original question has already been proved in the link, $(*)$ must be equal to the stated closed form. It also matches numerically.




I'm looking for some method to evaluate $(*)$ independent of Problem 5.



Any help will be greatly appreciated.


Answer



Two Auxiliary Identities



First, we shall establish two simple identities.




$\textbf{Identity }(*)$

$$\int^1_0\left[\frac{1}{\log{x}}+\frac{1}{1-x}\right]x^{s-1}\ dx=\log{s}-\psi_0(s)\tag{*}$$




$\text{Proof Outline: }$ Differentiate with respect to $s$, recognise the integral representation of the trigamma function, then integrate back.




$\textbf{Identity }(**)$ $$\int^\infty_0e^{-ax}\log{x}\
dx=-\frac{\gamma+\log{a}}{a}\tag{**}$$





$\text{Proof Outline: }$ Differentiate the result $\int^\infty_0x^{s-1}e^{-ax}\ dx=a^{-s}\Gamma(s)$ with respect to $s$ and set $s=1$.






The Integral In Question



Applying these two identities and switching the order of integration gives us
\begin{align}
I

&:=\int^\infty_0\log{x}\left[\log\left(\frac{x+1}{2}\right)-\psi_0\left(\frac{x+1}{2}\right)-\frac{1}{x+1}\right]\ dx\\
&=\int^\infty_0\log{x}\int^1_0\left[\frac{1}{\log{t}}+\frac{1}{1-t}-\frac{1}{2}\right]t^{\frac{x-1}{2}}\ dt\ dx\\
&=\int^1_0\frac{1}{\sqrt{t}}\left[\frac{1}{\log{t}}+\frac{1}{1-t}-\frac{1}{2}\right]\int^\infty_0 \exp\left[-\left(-\frac{\log{t}}{2}\right)x\right]\log{x}\ dx\ dt\\
&=2\int^1_0\frac{\gamma+\log\left(-\tfrac{\log{t}}{2}\right)}{\sqrt{t}\log{t}}\left[\frac{1}{\log{t}}+\frac{1}{1-t}-\frac{1}{2}\right]\ dt\\
&=2\int^\infty_0\frac{e^{-u}(\gamma+\log{u})}{u}\left[\frac{1}{2u}-\frac{1}{1-e^{-2u}}+\frac{1}{2}\right]\ du
\end{align}
Let us define
$$I(s)=2\int^\infty_0u^{s-1}e^{-u}(\gamma+\log{u})\left[\frac{1}{2}+\frac{1}{2u}-\frac{1}{1-e^{-2u}}\right]\ du$$
so that $I=I(0)$. We have
\begin{align}

I(s)
&=\color{red}{\gamma\Gamma(s)+\gamma\Gamma(s-1)}+\color{blue}{\Gamma'(s)+\Gamma'(s-1)}-2\sum^\infty_{n=0}\int^\infty_0 u^{s-1}e^{-(2n+1)u}(\gamma+\log{u})\ du\\
&=\color{red}{\frac{\gamma\Gamma(1+s)}{s}\frac{s}{s-1}}\color{blue}{-\frac{s\Gamma(s)\psi_0(s)}{1-s}-\frac{\Gamma(s)}{(1-s)^2}}-2\sum^\infty_{n=0}\frac{\Gamma(s)\left(\gamma+\psi_0(s)-\log(2n+1)\right)}{(2n+1)^s}\\
&=\color{red}{\frac{\gamma\Gamma(1+s)}{s-1}}\color{blue}{-\frac{s\Gamma(s)\psi_0(s)}{1-s}-\frac{\Gamma(s)}{(1-s)^2}}\color{green}{-2(1-2^{-s})\Gamma(s)\zeta(s)(\gamma+\psi_0(s))}\\
&\color{purple}{\ \ \ \ -2\Gamma(s)\frac{d}{ds}(1-2^{-s})\zeta(s)}\\
\end{align}
With the expansions
\begin{align}
\Gamma(s)&\sim_0\frac{1}{s}-\gamma+\mathcal{O}(s)\\
\gamma+\psi_0(s)&\sim_0 -\frac{1}{s}+\mathcal{O}(s)\\

(1-2^{-s})\zeta(s)&\sim_0\left(s\log{2}-\frac{s^2\log^2{2}}{2}+\mathcal{O}(s^3)\right)\left(-\frac{1}{2}-\frac{s\log(2\pi)}{2}+\mathcal{O}(s^2)\right)\\
&\sim_0 -\frac{s\log{2}}{2}-\left(\frac{\log^2{2}}{4}+\frac{\log 2\log{\pi}}{2}\right)s^2+\mathcal{O}(s^3)
\end{align}
we obtain
\begin{align}
I(s)
&\sim_0\color{red}{-\gamma}\color{blue}{-1+\gamma}\color{green}{-\frac{\log{2}}{s}+\gamma\log{2}-\frac{\log^2{2}}{2}-\log{2}\log\pi}+\color{purple}{\frac{\log{2}}{s}-\gamma\log{2}}\\
&\color{purple}{\ \ \ \ \ \ \ +\log^2{2}+2\log{2}\log\pi}+\mathcal{O}(s)\\
&\sim_0 -1+\log{2}\log{\pi}+\frac{\log^2{2}}{2}+\mathcal{O}(s)
\end{align}

as was to be shown.






Note:



One can obtain the values of the derivatives of the zeta function at $0$ by differentiating Riemann's functional equation and using the Laurent series of the zeta function at $1$.
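The closed form can also be confirmed numerically, independently of both derivations (a sketch; mpmath's psi(0, ·) is the digamma function):

from mpmath import mp, quad, log, psi, inf, pi

mp.dps = 15
f = lambda x: log(x) * (log((x + 1) / 2) - 1 / (x + 1) - psi(0, (x + 1) / 2))
print(quad(f, [0, 1, inf]))
print(log(2)**2 / 2 + log(2) * log(pi) - 1)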


real analysis - Function $f$ from $[0,1]$ to $[0,1]$, bounded, such that the graph of $f$ is not Jordan measurable.

I am currently doing a problem in my real analysis textbook and I am having trouble with one in particular. It asks whether or not the graph of a bounded function is Jordan measurable. The function isn't necessarily continuous, or even continuous almost everywhere; it is just bounded. I personally think there does exist a bounded function from $[0,1]$ to $[0,1]$ whose graph is not Jordan measurable, but I cannot find such a function. Any help will be greatly appreciated.

elementary number theory - Proof using exhaustion $n^4 - 1$ is divisible by $5$ where $n$ is not divisible by $5$.

The title pretty much states it: prove by exhaustion that $n^4 - 1$ is divisible by $5$ when $n$ is not divisible by $5$.



Can anyone give me a hint how to approach this?

Do I need to consider separate cases where $n$ is odd and where $n$ is even?



Solution: use division algorithm
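Since only the residue of $n$ modulo $5$ matters, the exhaustion itself is four one-line checks (a sketch):

# n^4 - 1 mod 5 for each nonzero residue class of n
for r in [1, 2, 3, 4]:
    print(r, (r**4 - 1) % 5)   # every line prints 0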

arithmetic - Prove that $\gcd(n^a - 1,\,n^b -1)= n^{\gcd(a,b)} -1$

I used Euclid's division theorem, $a=bq +r$:
$n^a -1 = ((n^b)^q )n^r -1.$
I don't know how to move forward.

roots - Find all complex numbers $z$ that satisfy equation $z^3=-8$



Problem




Find all complex numbers $z$ that satisfy equation $z^3=-8$



Attempt to solve



The real solution is quite easily computable, or more specifically the complex solution whose imaginary part is zero.



$$ z^3=-8 \iff z_1 = \sqrt[3]{-8}=-2 $$



Now WolframAlpha suggests that the other complex solutions would be:




$$ z_2 = 1 - i\sqrt{3} $$
$$ z_3 = 1 + i\sqrt{3} $$



The only problem is I don't have a clue how to derive these. I heard something about using the polar form of a complex number and then increasing the argument by $2\pi$ so that we get all the roots. I lack intuition on how this would work.



I could try to represent our complex number $-2$ in polar form:



$$ re^{i \theta} $$



Computing the radius via the Pythagorean theorem:




$$ r=\sqrt{(-2)^2+(0)^2} = \sqrt{4}=2 $$



which is quite intuitive even without the Pythagorean theorem, since our imaginary part is $0$, so the "radius" has to be the same as the real part, just without the $-$ sign.



Our angle would be $\pi$ radians, since our complex number is $-2+i \cdot 0$.



We get:



$$ 2e^{i\pi} $$




Now, increasing the argument by $2\pi$ each time:



$$ 2e^{i3\pi},2e^{i5\pi},2e^{i7\pi},\dots $$



which doesn't make any sense, since we end up in the same spot over and over again: $2\pi$ radians is a full circle by definition.


Answer



$z^3+8=0;$



$(z+2)(z^2-2z +2^2)=0;$




$z_1=-2;$



Solving the quadratic equation:



$z_{2,3} = \dfrac{2\pm \sqrt{4-4\cdot 2^2}}{2}$;



$z_{2,3}= \dfrac{2\pm 2i\sqrt{3}}{2} = 1 \pm i\sqrt{3}.$


Thursday 26 May 2016

elementary number theory - Linear Diophantine Equations in Three Variables

$$

3x+6y+5z=7
$$
The general solution to this linear Diophantine equation is as described
here (Page 7-8) is:



$$
x = 5k+2l+14
$$
$$
y = -l

$$
$$
z = -7-3k
$$
$$
k,l \in \mathbb{Z}
$$



If I plug the original equation into Wolframalpha the solution is:
$$

y = 5n+2x+2
$$
$$
z =-6n-3x-1
$$
$$
n \in \mathbb{Z}
$$



I can rewrite this as:




$$
x = l
$$
$$
y = 5k+2l+2
$$
$$
z = -6k-3l-1
$$

$$
k,l \in \mathbb{Z}
$$



However, now two equations depend on two variables ($k,l$) and one depends on a single variable $l$.
In the first solution, one equation depends on two variables and two depend on one variable each.



Questions:



How can I get from a representation like the one from WolframAlpha to one where, except for a single equation, every equation depends on one distinct variable?




Is there always such a representation?
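For puzzles like this, it is worth machine-checking that a candidate family really satisfies the equation (a sketch; both parametrizations above are plugged back in over a small grid):

# verify 3x + 6y + 5z == 7 for both parametrizations
for k in range(-10, 11):
    for l in range(-10, 11):
        assert 3*(5*k + 2*l + 14) + 6*(-l) + 5*(-7 - 3*k) == 7
        assert 3*l + 6*(5*k + 2*l + 2) + 5*(-6*k - 3*l - 1) == 7
print("both families check out")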

integration - The sum of an infinite series with integral

$1+\dfrac{1}{9}+\dfrac{1}{45}+\dfrac{1}{189}+\dfrac{1}{729}+\dots=\sum\limits_{n=1}^\infty \dfrac{1}{(2n-1)\cdot 3^{n-1}}$




I got:
$\sum\limits_{n=1}^\infty \dfrac{1}{(2n-1)\cdot 3^{n-1}}=\sum\limits_{n=1}^\infty \dfrac{\int\limits_0^1 x^{2n-2}\,dx}{3^{n-1}}=\dots$



And I have no idea how to proceed from here.



Thx!

calculus - Evaluate $\int\limits_0^{1}\frac{\sqrt{1+x^2}}{1+x}dx$

Evaluate:



$I=\int\limits_0^1 \frac{\sqrt{1+x^2}}{1+x}dx$



My try:



Let $x=\tan y$ then $dx=(1+\tan^{2} y)dy$
As for the integration limits: if $x=0$ then $y=0$, and if $x=1$ then $y=\frac{\pi}{4}$.




So:



$I=\int\limits_0^{\frac{\pi}{4}}\frac{1+\tan^{2} y}{(1+\tan y)\cos y}\,dy$



$I=\int\limits_0^{\frac{\pi}{4}}\frac{1}{\cos^{3} y+\cos^{2} y\sin y}\,dy$



But I don't know how to continue.

Wednesday 25 May 2016

algebra precalculus - Finding a coefficient of $x^6$ in the expansion $(x-1)^5 (x+1)^5$

Find the coefficient of $x^6$ in the expansion $(x-1)^5 (x+1)^5$.



Is this a binomial question? I understand the basics of the question but am unsure how to complete the second part.




Any assistance is greatly appreciated.



Thank you.

Tuesday 24 May 2016

elementary number theory - Is there a better way to factor $375007$ without testing the first $612$ primes? No calculators please

Is there a better way to factor $375007$ without testing the first $612$ primes?






I know this factors as $31\times 12097$ by testing the primes $2,3,5,\ldots,31$.
Is there any other clever way to work this? I have tried Fermat's factorization, writing the number as $x^2-y^2$, but it also takes too many iterations because the factors differ greatly in magnitude.



Also, I have been trying to factor it by changing the base to $10^2$, writing $375007$ as $37x^2 + 50x+7 = (ax+b)(cx+d)$ with $x=100$, and in other bases, but with no success yet.

dice - probability of false alarm for a loaded die

A loaded 6-sided die has probability $1/4$ for 3 and 4, and $1/8$ for 1, 2, 5, 6. If I decide whether a die is loaded or not based on one roll, what is the probability of falsely classifying a fair die as loaded? What is the probability of classifying a loaded die as fair?




Not sure how to solve this. The probability of getting a 3 or 4 on a loaded die is $1/2$ and on a fair die it is $1/3$. The likelihood ratio is $3/2$. Where do I go from here?

calculus - The integral problem: $int_{0}^{16}frac{dx}{sqrt{x^2+9}-sqrt{x}}$

I met this kind of problem today. In fact, I spent hours trying to solve this problem on its own. My method was just to calculate the indefinite integral. After I failed, I looked at Wolfram Alpha, which couldn't evaluate this integral either; I don't know the specific reason.





The integral:



$$\int_{0}^{16}\frac{dx}{\sqrt{x^2+9}-\sqrt{x}}$$





Cauchy type equation in three variables



Let $P(x)$ be a polynomial in $x$ with real coefficients such that for all real numbers $x, y, z$ satisfying $xy + yz + zx = 1$, $P(x) + P(y) + P(z) = P(x + y + z)$. Furthermore, $P(0) = 1$ and $P(1) = 4$. Find $P(2017)$.



This looks to me like the Cauchy functional equation, hence the title. I understand that a Cauchy equation has solutions of the type $f(x)=cx$, but I can't figure out the case here with that constraint. My wild guess, however, is that $P(x)$ should be a perfect square. Please help.



Answer



Note that $P(x) = ax^2+bx+a$ solves this problem, as \begin{align*} P(x+y+z)-P(x)-P(y)-P(z) &= a(x+y+z)^2-a(x^2+y^2+z^2)-2a \\ &= 2a(xy+yz+zx-1) \end{align*} Then, we can impose $P(0) = a = 1$ and $P(1) = 2a+b = 4$. The latter implies $b = 2$, so we have $P(x) = x^2+2x+1 = (x+1)^2$. Then we find $P(2017) = 2018^2$.






To prove that this is the only solution, note that for $$P^*(x, y, z) := P(x+y+z)-P(x)-P(y)-P(z)$$ $P^*(x, y, z)$ must be in the ideal generated by $xy+yz+zx-1$ in $\mathbb{C}[x, y, z]$, as this polynomial is irreducible over $\mathbb{C}$. Therefore, $$P^*(x, y, z) = Q(x, y, z)(xy+yz+zx-1)$$ for some polynomial $Q$.

Note that $P^*(x, y, z)$ and $xy+yz+zx-1$ are both symmetric polynomials. In particular, $xy+yz+zx-1 = \frac{1}{2}p_1^2-\frac{1}{2}p_2-1$, where $p_k := x^k+y^k+z^k$. Now, note that for $P(x) = \sum_{k=0}^n a_kx^k$, $$P^*(x, y, z) = \sum_{k=0}^n a_k(p_1^k-p_k)$$

As $Q(x, y, z)$ is the quotient of symmetric polynomials, it must also be a symmetric polynomial. Thus, we write $R^*(p_1, p_2, \ldots, p_n)$ and $R(p_1, p_2, \ldots, p_n)$ for the polynomial representations of $P^*$ and $Q$ respectively in terms of power sums (which are algebraically independent). After switching to $R$ and $R^*$, our equation in $P^*$ and $Q$ becomes $$R^*(p_1, \ldots, p_n) = \left(\frac{1}{2}p_1^2-\frac{1}{2}p_2-1\right)R(p_1, \ldots, p_n)$$

If $a_n\neq 0$ for some $n\geq 3$, then $R^*(p_1, \ldots, p_n)$ will have a nonzero term $-a_np_n$. However, this term cannot appear on the right-hand side, as if $R$ has a term $a_np_n$, then it would imply that $R^*$ also has terms $-\frac{a_n}{2}p_1^2p_n$ and $-\frac{a_n}{2}p_2p_n$ (which do not appear in the above sum). Thus, $R^*$ must equal $-2a_0+a_2(p_1^2-p_2)$, which implies that $\deg(R) = \deg(R^*)-2 = 0$. Then, $R\equiv c$ for some constant $c$, and $a_0 = a_2 = \frac{c}{2}$. Letting $a = \frac{c}{2}$ and $b = a_1$, we have $$P(x) = ax^2+bx+a$$ as above.


notation - Why do we use "congruent to" instead of equal to?

I'm more familiar with the notation $a \equiv b \pmod c$, but I think this is equivalent to $a \bmod c = b \bmod c $, which makes it clear that we should put a $=$ instead of $\equiv$.



What's the reason for the change of sign? If it's to emphasize that modular equivalence is a congruence relation, why don't we use the $\equiv$ sign in both notations?

summation - What is the formula for $\frac{1}{1\cdot 2}+\frac{1}{2\cdot 3}+\frac{1}{3\cdot 4}+\cdots +\frac{1}{n(n+1)}$



How can I find the formula for the following equation?



$$\frac{1}{1\cdot 2}+\frac{1}{2\cdot 3}+\frac{1}{3\cdot 4}+\cdots +\frac{1}{n(n+1)}$$



More importantly, how would you approach finding the formula? I have found that every time, the denominator number seems to go up by $n+2$, but that's about as far as I have been able to get:



$\frac12 + \frac16 + \frac1{12} + \frac1{20} + \frac1{30}...$ the denominator increases by $4,6,8,10,12,\ldots$ etc.




So how should I approach finding the formula? Thanks!


Answer



If you simplify your partial sums, you get $\frac12,\frac23,\frac34,\frac45,....$ Does this give you any ideas?
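Spelling the pattern out, the partial fractions behind it are:

$$\frac{1}{k(k+1)}=\frac1k-\frac1{k+1},\qquad \sum_{k=1}^n\left(\frac1k-\frac1{k+1}\right)=1-\frac1{n+1}=\frac{n}{n+1}.$$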


calculus - Limit Proof of $e^x/x^n$




I am wondering how to prove $$\lim_{x\to \infty} \frac{e^x}{x^n}=\infty$$



I was thinking of using L'Hospital's rule? But then I am not sure how to handle applying L'Hospital's rule $n$ times to the denominator. Or would it be easier using logs, like $\lim_{x\to \infty} \ln(e^x)-\ln(x^n)$?



Thank you!


Answer



You can certainly use L'Hopital's $n$ times. That is, for each $n\geq 0$ we have $$\lim_{x\to\infty}\frac{e^x}{x^n}=\lim_{x\to\infty}\frac{e^x}{nx^{n-1}}=\cdots=\lim_{x\to\infty}\frac{e^x}{n!}=\infty$$ since at each stage we are in $\frac{\infty}{\infty}$ indeterminate form.


Monday 23 May 2016

Finite Field Homomorphism

Let's say that I have a finite field $K$ with characteristic $2$. I define @ as the map
@ : $K \to K$,
$x \mapsto x^2$.



First of all, what are some examples of fields like $K$? I initially thought it could only be $\{0, 1\}$ with addition and multiplication mod $2$, but apparently more elements can exist so long as $1+1=0$.



Secondly, how would one begin to prove that @ is an automorphism?

calculus - What's so different about limits compared to infinitesimals?



If you find the limit is 2 for a given function, wouldn't this be the same as $2 + \epsilon$ with $\epsilon$ being a negligible value? This different way of defining limit-like behavior seems rigorous enough, but it took until Abraham Robinson in the 60s to really define the foundation for nonstandard analysis. My question is: what really is the difference?


Answer



There were various theories extending the real numbers to include infinitesimals throughout the period from 1870 until 1960, but Robinson was the first to introduce a system that can be used in analysis. Earlier systems were criticized by Klein, Fraenkel, and others on the grounds that they were not proven to satisfy the mean value theorem for, e.g., infinitesimal intervals. Robinson's framework satisfies this and more.


real analysis - Example for unbounded Lipschitz function on a bounded domain



Let $D\subseteq \mathbb{R}^2$ be a bounded open rectangle in the plane.

(You can assume $D=(0,1) \times (0,1)$). Let $f:D \to \mathbb{R}$ be a continuous function which is uniformly-Lipschitz in the second variable $y$, i.e there exists $K>0$ such that



$$|f(x,y_2)-f(x,y_1)|\le K|y_1-y_2| \, \, \forall x,y_1,y_2 \in (0,1)$$



Is it true that $f$ is bounded on $D$?


Answer



No, simply choose an unbounded function that is continuous in the first coordinate and constant in the second, e.g. $\displaystyle f(x, y) = \frac{1}{x}$.


calculus - Find formula of sum $\sin (nx)$




I wonder if there is a way to calculate the



$$S_n=\sin x + \sin 2x + … + \sin nx$$




but using only derivatives?


Answer



Using telescopic sums:



$$ \sin(mx)\sin(x/2) = \frac{1}{2}\left(\cos\left((m-1/2)x\right)-\cos\left((m+1/2)x\right)\right)$$
Hence:
$$ S_n \sin\frac{x}{2} = \frac{1}{2}\left(\cos\frac{x}{2}-\cos\left(\left(n+\frac{1}{2}\right)x\right)\right)=\sin\frac{nx}{2}\cdot\sin\frac{(n+1)x}{2}.$$


Show that every number $a$ is congruent modulo 8 to its units digit in base 1000



So as the title says, I want to show that every number $a$ is congruent modulo $8$ to its units digit in base $1000$. I believe I'm more confused as to what 'its units digit in base $1000$' means than anything else, so clarification would be helpful.



A hint on the problem would be welcomed as well, though I don't think that should be too difficult as I have a proposition that says something similar to what I'm trying to prove, I just don't quite understand the wording in that case either.



For reference, the proposition I believe I should use is as follows: "$7$ (respectively $11, 13$) divides $a$ iff $7$ (respectively $11, 13$) divides the alternating sum of the "digits" of $a$ in base 1000."


Answer




" I believeI'm more confused as to what 'its units digit in base 1000' mean more than anything, so clarification would be helpful. "



A non-negative integer (actually any real, but let's assume a non-negative integer) can be written in base $1000$ (or any base $b$) by expressing it as:



$a = \sum_{i=0}^n a_i * 1000^i; 0 \le a_i < 1000; a_i \in \mathbb Z$



for appropriate values of the $a_i$, and appropriately many of them.



The units digit is $a_0$.




So the question is asking you to prove:



$a \equiv a_0 \mod 8$.



Once the question is clarified, I hope and trust the actual reason is fairly straightforward.
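As a quick empirical nudge in that direction (a sketch; the key fact it illustrates is that $8 \mid 1000$):

import random

# a == a_0 (mod 8), where a_0 = a mod 1000 is the units digit in base 1000
for _ in range(5):
    a = random.randrange(10**12)
    print(a, a % 8 == (a % 1000) % 8)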



==== old response (when I misunderstood the exact question but had the general idea-- but still technically an incorrect answer) =======



The question is asking: Let $a = \sum_{i=0}^n a_i *1000^i$. Let $b=\sum_{i=0}^m b_i*1000^i $ with $a_0 = b_0$.




Prove that $a \equiv b \mod 8$.



That's the question. Can you prove it?


Sunday 22 May 2016

roots - Proof that $sqrt[m]{a} + sqrt[n]{b}$ is irrational

Is there a way to prove that $\sqrt[m]{a} + \sqrt[n]{b}$ is irrational (where $\sqrt[m]{a}$ and $\sqrt[n]{b}$ are irrational, $a, b, m, n \in \mathbb{N}$, and $m, n \neq 2$) without using the theorem mentioned in Sum of irrational numbers, a basic algebra problem?



If one of $m$ or $n$ is $2$, then a polynomial with integer coefficients can be easily constructed, and rational root theorem (http://en.wikipedia.org/wiki/Rational_root_theorem) can be used to prove that it's irrational. For example, if $x = \sqrt{2} + \sqrt[3]{3}$:



$$
\begin{align}
(x - \sqrt{2})^3 = x^3 - 3x^2\sqrt{2} + 6x - 2\sqrt{2} & = 3 \\

\implies x^3 + 6x - 3 &= \sqrt{2}(3x^2 + 2) \\
\implies x^6 + 12x^4 - 6x^3 + 36x^2 - 36x + 9 & = 2(9x^4 + 12x^2 + 4) \\
\implies x^6 - 6x^4 - 6x^3 + 12x^2 - 36x + 1 & = 0
\end{align}
$$



By evaluating the polynomial at $\pm1$ (the only candidate rational roots), it can be verified that $x$ is irrational. However, if neither $m$ nor $n$ is $2$, then constructing a polynomial with integer coefficients seems impossible (or at least very tedious). Let's say $x = \sqrt[3]{2} + \sqrt[4]{3}$. Is there any way to prove that this is irrational without using the above-mentioned theorem?

linear algebra - If $AB = I$ then $BA = I$: is my proof right?





I want to prove that for matrices $A,B \in M_n (\mathbb K)$ where $\mathbb K \in \{\mathbb R, \mathbb C, \mathbb H\}$ if $AB = I$ then $BA = I$.



My proof is really short so I'm not sure it's right:



If $AB = I$ then $(BA)B = B$ and therefore $BA=I$?


Answer




The implication $(BA)B=B \Rightarrow BA=I$ is a little quick and not always true...



But observe that
$$1=\det(BA)= \det(B)\det(A)$$
thus $B$ is invertible and it follows that
$$BA= BA(BB^{-1}) = B(AB)B^{-1}=BB^{-1}=I.$$


sequences and series - Why does $\sum_{n = 0}^\infty \frac{n}{2^n}$ converge to 2?

Apparently,



$$
\sum_{n = 0}^\infty \frac{n}{2^n}
$$



converges to 2. I'm trying to figure out why. I've tried viewing it as a geometric series, but it's not quite a geometric series since the numerator increases by 1 every term.
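One standard way to repair the "almost geometric" idea is to differentiate the geometric series (a worked sketch, valid for $|x|<1$):

$$\sum_{n=0}^\infty x^n=\frac1{1-x}\quad\Longrightarrow\quad\sum_{n=1}^\infty nx^{n-1}=\frac1{(1-x)^2}\quad\Longrightarrow\quad\sum_{n=0}^\infty \frac{n}{2^n}=\frac{1/2}{(1-1/2)^2}=2.$$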

Saturday 21 May 2016

calculus - Integrating a 2D Gaussian over a linear strip



How do I show that



$$\int_{-\infty}^{\infty} \frac{e^{-\frac{x^2}{2 \sigma ^2}} \left(\text{erf}\left(\frac{2 d-\sqrt{2} x}{2 \sigma
}\right)+\text{erf}\left(\frac{2 d+\sqrt{2} x}{2 \sigma }\right)\right)}{2 \sqrt{2
\pi } \sigma }dx=\text{erf}\left( \frac{d}{\sqrt{2}\sigma}\right)?$$

Mathematica couldn't do it, though I've verified this numerically. The following argument shows that these are equal, but I want to show this analytically, i.e. using direct integration methods.





(Let



$$P(x,y) = \frac{1}{2\pi \sigma^2} \exp\left(- \frac{x^2+y^2}{2\sigma^2}\right),$$



which is clearly spherically symmetric. I want to integrate this function over the region



$$R = \{ (x,y) \in \mathbb{R}^2 : |x-y| \leq \sqrt{2}\,d \}$$
which is just a diagonal linear strip (of perpendicular half-width $d$, matching the limits below).




(a) I could evaluate



$$ \int_{-\infty}^{\infty}\int_{x-\sqrt{2}d}^{x+\sqrt{2}d} P(x,y)\,dy\,dx = \int_{-\infty}^{\infty} \frac{e^{-\frac{x^2}{2 \sigma ^2}} \left(\text{erf}\left(\frac{2 d-\sqrt{2} x}{2 \sigma
}\right)+\text{erf}\left(\frac{2 d+\sqrt{2} x}{2 \sigma }\right)\right)}{2 \sqrt{2
\pi } \sigma }dx \tag{1}$$



(b) or I could note the spherical symmetry of $P(x,y)$, rotate my region $R$ to the $y$-axis, and get



$$\int_{-\infty}^{\infty} \int_{-d}^d P(x,y)\,dx\,dy = \text{erf}\left( \frac{d}{\sqrt{2}\sigma}\right)\tag{2}.)$$




Answer



Direct $x$-integration is not possible. However, expressing the erf functions by (direct) indefinite integrals does the job, after exchanging integration orders:



Notice$$
\text{erf}\left(\frac{2 d\pm\sqrt{2} x}{2 \sigma }\right)= \int
\frac{2 e^{-\frac{( \sqrt{2}d \pm x)^2}{2 \sigma^2} }}{\sqrt{\pi} \sigma} \text{d} d
$$

Hence we can write
$$
\int_{-\infty}^{\infty} \frac{e^{-\frac{x^2}{2 \sigma ^2}} \left(\text{erf}\left(\frac{2 d-\sqrt{2} x}{2 \sigma }\right)+\text{erf}\left(\frac{2 d+\sqrt{2} x}{2 \sigma }\right)\right)}{2 \sqrt{2\pi } \sigma }dx = \\
\int \text{d} d \int_{-\infty}^{\infty} \frac{e^{-\frac{x^2}{2 \sigma ^2}} \left( \frac{2 e^{-\frac{( \sqrt{2}d + x)^2}{2 \sigma^2} }}{\sqrt{\pi} \sigma} + \frac{2 e^{-\frac{( \sqrt{2}d - x)^2}{2 \sigma^2} }}{\sqrt{\pi} \sigma} \right)}{2 \sqrt{2\pi } \sigma }dx = \\
2 \int \frac{e^{-d^2/(2 \sigma^2)}}{\sqrt{2 \pi}\, \sigma}\, \text{d} d =\text{erf}\left( \frac{d}{\sqrt{2}\sigma}\right)
$$
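The inner $x$-integral above is a standard Gaussian one: completing the square via $x^2+(\sqrt2\,d\pm x)^2=2\left(x\pm\frac{d}{\sqrt2}\right)^2+d^2$ gives
$$\int_{-\infty}^{\infty} e^{-\frac{x^2}{2\sigma^2}}\,e^{-\frac{(\sqrt2 d\pm x)^2}{2\sigma^2}}\,dx = e^{-\frac{d^2}{2\sigma^2}}\int_{-\infty}^{\infty} e^{-\frac{(x\pm d/\sqrt2)^2}{\sigma^2}}\,dx = \sigma\sqrt{\pi}\,e^{-\frac{d^2}{2\sigma^2}},$$
which, after collecting constants, produces the last line; the constant of integration is fixed by noting that both sides vanish at $d=0$.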



Done. $\qquad \square$


number theory - How to prove that any (integer)$^{1/n}$ that isn't an integer is irrational?





Is my proof beneath perfect and complete?



I wanted to prove that for any $n$th root of an integer, if it's not an integer, then it's irrational:
$$\begin{cases}
m,n\in \mathbb{N}\\\sqrt[n]{m}\notin \mathbb{N}
\end{cases}\implies \sqrt[n]{m}\notin \mathbb{Q}.$$




I start by assuming that $m^{\frac 1n}$ is rational and non-integer. Then there exist coprime integers $a,b$ with $b>1$ such that $$\sqrt[n]{m}=\frac{a}{b}\implies
m=\frac{a^n}{b^n}\in\mathbb{N}.$$
But since $a$ and $b$ have no common factor, $a^n$ and $b^n$ also have no common factor, and $b^n>1$. So
$$\frac{a^n}{b^n}\notin\mathbb{N},$$
a contradiction.


Answer



Your proof is fine. You can use essentially the same idea to prove the following more general statement:



Theorem. If $ P(X) \in \mathbf Z[X] $ is a monic polynomial, then any rational roots of $ P $ are integers. In other words, $ \mathbf Z $ is integrally closed.




Proof. Assume that $ q = a/b $ is a rational root with $ a, b $ coprime, and let $ P(X) = X^n + c_{n-1} X^{n-1} + \ldots + c_0 $. We have $ P(q) = 0 $, which gives



$$ a^n + c_{n-1} a^{n-1} b + \ldots + c_0 b^n = 0 $$



In other words, $ a^n $ is divisible by $ b $. This forces $ b = \pm 1 $: otherwise any prime dividing $ b $ would divide $ a^n $, and hence $ a $, contradicting coprimality. Hence $ b = \pm 1 $ and $ q \in \mathbf Z $. $\square$
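For example, taking $P(X)=X^n-m$ recovers the statement above: any rational $n$-th root of an integer $m$ must itself be an integer, so $\sqrt[n]{m}\notin\mathbb N$ forces $\sqrt[n]{m}\notin\mathbb Q$.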


real analysis - Limit of $\lim\limits_{x \rightarrow \infty}(\sqrt{x^8+4}-x^4)$

I have to determine the following:



$\lim\limits_{x \rightarrow \infty}(\sqrt{x^8+4}-x^4)$



$\lim\limits_{x \rightarrow \infty}(\sqrt{x^8+4}-x^4)=\lim\limits_{x \rightarrow \infty}\left(\sqrt{x^8\left(1+\frac{4}{x^8}\right)}-x^4\right) = \lim\limits_{x \rightarrow \infty}\left(x^4\sqrt{1+\frac{4}{x^8}}-x^4\right) = \lim\limits_{x \rightarrow \infty}x^4\left(\sqrt{1+\frac{4}{x^8}}-1\right)$, which is an indeterminate $\infty\cdot 0$ form. Rationalizing instead, $\sqrt{x^8+4}-x^4=\dfrac{4}{\sqrt{x^8+4}+x^4}\rightarrow 0$.




Could somebody please check whether my solution is correct?

calculus - Evaluate $\int_{\Gamma}xy^2\,dx+xy\,dy$ on $\Gamma=\{y=x^2\}$




Evaluate $\int_{\Gamma}xy^2\,dx+xy\,dy$ on $\Gamma=\{(x,y)\in\mathbb{R}^2:y=x^2,x\in[-1,1]\}$ with clockwise orientation, using Green's theorem.



So $\Gamma$ is a parabolic arc; to use Green's theorem we have to close the curve, and to do so we will add the line segment from $(1,1)$ to $(-1,1)$.



Then



$\gamma_1(t)=(-t,1),t\in[-1,1]$



$\gamma_2(t)=(t,t^2),t\in[-1,1]$




$\int_{wanted}=\int_{\gamma_1(t)\cup \gamma_2(t)}-\int_{\gamma_1(t)}$



But don't we need a single closed parameterization to apply Green's theorem?



Maybe $\phi(t)=(\sin t\cos t,\sin ^2t-\sin t ),\ t\in [-2\pi,-\pi]$ is the closed curve?


Answer



By Green's theorem, with $M = xy^2$ and $N = xy$, a counterclockwise circuit of the closed curve satisfies
$$\oint M\,dx+N\,dy = \iint_D (N_x-M_y)\,dA,$$
where $D$ is the region between the parabola and the segment $y=1$. Hence
$$\iint_D (y-2xy)\,dA = \int_{x=-1}^{1}\int_{y=x^2}^{1} (y-2xy)\,dy\,dx = \int_{-1}^{1}\left(\frac12 - x - \frac{x^4}{2} + x^5\right)dx = 1-\frac15 = \frac45.$$
On the closing segment ($y=1$, $dy=0$) the line integral is $\int_{1}^{-1} x\,dx = 0$, so the parabola traversed counterclockwise (from $(-1,1)$ to $(1,1)$) contributes the full $\frac45$. With the required clockwise orientation,
$$\int_{\Gamma} xy^2\,dx + xy\,dy = -\frac45.$$
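As a sanity check, parameterize the parabola directly by $\gamma(t)=(t,t^2)$ with $t$ running from $1$ to $-1$ (the clockwise direction):
$$\int_1^{-1}\left(t\cdot t^4+t\cdot t^2\cdot 2t\right)dt=-\int_{-1}^{1}\left(t^5+2t^4\right)dt=-\frac45,$$
in agreement with Green's theorem.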



calculus - What is the conceptual meaning of a definite integral?



I'm having trouble articulating my difficulty, but here's my best shot. Thanks in advance for any light you can shed.



I understand that a definite integral is the limit of a Riemann sum, and can represent an area between a curve and the x-axis.



I have found a couple of problems in my first-year calculus textbook that have led me to believe that I do not have a full understanding of definite integrals. These are applications problems, and although I have full solutions to them, I don't understand why the solutions produce answers that make sense, particularly with regard to the units of the answers.



These are from the Larson Calculus text, eighth edition.




This first one, I understand, but I need to quote it for context to frame what it is I don't understand. (I am not giving the actual functions because they aren't relevant to my misunderstanding.)




Water Supply: A model for the flow rate of water at a pumping station
on a given day is: R(t). R is the flow rate in thousands of gallons
per hour, and t is the time in hours. Approximate the total volume of
water pumped in 1 day.




This is my understanding: R(t) is a rate in thousands of gallons per hour, and t is in hours. If I integrate R(t) over the interval [0, 24], then the answer I get from the integral, in addition to being the area under R(t) from 0 to 24, will represent a volume of water. This makes sense to me because if I were to graph R(t) and look at it as an area, the Riemann rectangles would have a width in hours and a height in thousands of gallons per hour, and multiplying them together means the area will have units of (thousands of) gallons, because (gal/hr) times hours leaves gallons. So the integral will evaluate to a value representing (thousands of) gallons.




In general, integrating a rate of change should result in the net change over the interval, yes? This makes sense to me.



Now, the second problem is this:




Temperature: The temperature in degrees Fahrenheit in a house is $T(t)$,
where $t$ is the time in hours, with $t=0$ representing midnight. The
hourly cost of cooling a house is \$0.10 per degree. Find the cost of
cooling the house if its thermostat is set at $72^\circ$F, by evaluating the
integral



$$C=0.1\displaystyle \int_{8}^{20}\left(T(t)-72\right) \ {\rm d}t$$




Now here, the function T is not a rate of change, it gives the actual temperature of the house at time t, and not the rate at which the temperature changes.



So I guess I have some conceptual misunderstanding of the definite integral because I can't reconcile the units or the meaning of what's going on here.



I can see that the $0.1$ in front of the integral is there because, in order to get the cost in dollars, we need to multiply \$0.10 by a number of degrees; so it appears from the question that I am to interpret the integral itself as calculating a number of degrees. But I don't understand how it could.




That is, if the units of T(t) are degrees, and the time is hours, then my area understanding of the integral should give the area calculated by this integral in units of degree times hours, or degree-hours? Since the units of the 0.1 are dollars/degree, then multiplying dollars/degree by degree times hours results in dollars times hours, but it should come out in dollars alone, and I don't see how.



The integral would have to have units of degrees in order for the cost to come out in dollars. How can it? If the function T is not a rate, but an actual amount of something, like degrees, then what meaning do I give to integrating it over an interval? I will get an area, yes, but would that mean anything necessarily?



If I integrate velocity, which is a rate of change of position, then I get the total change of position, or distance. What if I have a position function, which is not a rate of change, and integrate that? Does the result have any meaning? That's what this temperature function seems to be doing. T(t) is not a rate of change function, it's like a position function. If I integrate it, why do I end up with total degrees?


Answer



As a(n aspiring) geometer, the conceptual meaning of the integral for me is that integration is the process of adding little flat volumes (or areas, or ...) to get big, generally curved volumes (...).







In $1$-d calculus the regions you integrate over may not be as interesting but the idea is the same.






That's really all there is to integration -- add up a lot of little things to get a (potentially) big thing.



The fact that integrating a single-variable rate-of-change function over an interval equals the total change over that interval is actually somewhat of a minor miracle. That inverse relationship between the derivative and the integral doesn't work nearly as nicely in higher dimensions. Even so, it really is useful ... in fact I'd say it's almost fundamental to calculus. :)







Mathematicians don't normally worry about units, but physicists do and they do tend to always work out. So it is a good idea to use dimensional analysis to figure out if the quantities you're dealing with actually make sense.



Speaking of physics, here's one more image just for fun.






The thing you seem to have misunderstood in your problem is that the quantity \$0.10 actually has units of dollars per degree per hour (equivalently dollars per degree-hour). You can see this from the beginning of that line which reads "The hourly cost ... is \$0.10 per degree".



The integral $\int_8^{20} (T(t) -72)dt$ itself doesn't give the total temperature change over the interval $8\le t\le 20$ -- it gives the value of the average temperature (measured from $72^\circ$) over that time interval times the length of the time interval. For instance if the temperature were $85^\circ$ for the first 6 hours and then $83^\circ$ for the last 6 hours then the value of the integral would be $(84-72)\times 12 = 144$. The units of this are then degree-hours.




The above comes essentially from the definition of an average. To calculate the average value of a function $f$ over the interval $a\le t\le b$ we find that $f_{av} = \frac{1}{b-a}\int_a^b f(t)dt$. Multiplying by $b-a$ on both sides we see that the integral $\int_a^b f(t)dt$ is equal to the average value of the function $f$ over the interval times the length of the interval.






Then we can see that dollars per degree-hour $\times$ degree-hour $=$ dollars, as expected.


limits - Find $\lim\limits_{n\to+\infty}\frac{\sqrt[n]{n!}}{n}$





I tried using Stirling's approximation and d'Alembert's ratio test but can't get the limit. Could someone show how to evaluate this limit?


Answer



Use equivalents: by Stirling's formula, $n!\sim_{\infty}\sqrt{2\pi n}\left(\frac{n}{\mathrm e}\right)^n$, hence
$$\frac{\sqrt[n]{n!}}n\sim_{\infty}\frac{\bigl(\sqrt{2\pi n}\bigr)^{\tfrac 1n}}{n}\cdot\frac n{\mathrm{e}}=\frac 1{\mathrm{e}}\bigl({2\pi n}\bigr)^{\tfrac 1{2n}}$$
Now $\;\ln\bigl({2\pi n}\bigr)^{\tfrac 1{2n}}=\dfrac{\ln 2\pi+\ln n}{2n}\xrightarrow[n\to\infty]{}0$, hence
$$\frac{\sqrt[n]{n!}}n\sim_{\infty}\frac 1{\mathrm{e}},\qquad\text{i.e.}\qquad \lim_{n\to+\infty}\frac{\sqrt[n]{n!}}{n}=\frac 1{\mathrm e}. $$



probability - Expectation of a continuous random variable explained in terms of the CDF



Problem:



Let $F_X(x)$ be the CDF of a continuous random variable $X$. Show that:



$$E[X]= \int_0^\infty(1-F_X(x)) \, dx -\int_{-\infty}^0F_X(x) \, dx.$$



Attempt:




A comprehensible explanation of the intuition regarding the expectation $E[X]$ and CDF for a non-negative random is found here: Intuition behind using complementary CDF to compute expectation for nonnegative random variables.



However, I am still at a loss about how to show the general case when $-\infty < x < \infty$.



I am again solving this as an exercise in my probability course and any help is greatly appreciated!


Answer



You can also see it by interchanging the order of integration (justified by Tonelli's theorem, since the integrands are non-negative). We have



\begin{eqnarray*}
\int_{0}^{\infty}(1-F_{X}(x))dx=\int_{0}^{\infty}P(X>x)dx & = & \int_{0}^{\infty}\int_{x}^{\infty}dF_{X}(t)\,dx\\
 & = & \int_{0}^{\infty}\int_{0}^{t}dx\,dF_{X}(t)\\
 & = & \int_{0}^{\infty}t\;dF_{X}(t)
\end{eqnarray*}
and,
\begin{eqnarray*}
\int_{-\infty}^{0}F_{X}(x)dx=\int_{-\infty}^{0}P(X\leq x)dx & = & \int_{-\infty}^{0}\int_{-\infty}^{x}dF_{X}(t)\,dx\\
 & = & \int_{-\infty}^{0}\int_{t}^{0}dx\,dF_{X}(t)\\
 & = & \int_{-\infty}^{0}-t\;dF_{X}(t)
\end{eqnarray*}
Since,

$$
E[X]=\int_{-\infty}^{0}t\;dF_{X}(t)+\int_{0}^{\infty}t\;dF_{X}(t)
$$
the result follows.
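As a quick illustration, take $X\sim\operatorname{Exp}(\lambda)$, so $F_X(x)=1-e^{-\lambda x}$ for $x\ge0$ and $F_X(x)=0$ for $x<0$. The second integral vanishes, and
$$E[X]=\int_0^\infty e^{-\lambda x}\,dx=\frac1\lambda,$$
as expected.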


Friday 20 May 2016

paradoxes - Zeno's Place Paradox

Zeno's Paradoxes are a series of problems intended to challenge our view of reality. Some of these paradoxes (e.g. Achilles and the Tortoise) have been resolved by a better understanding of physics and of the concept of infinity. Here is his "Paradox of Place":



"If everything that exists has a place, place too will have a place, and so on ad infinitum."




So my question is, is this argument rigorous, and if so, what are the implications of the fact that (as a direct consequence) every object in the universe has an infinite number of places? (e.g. I am at my "place", my place's "place", my place's place's "place", etc. as given by Zeno's argument)

Prove that none of the numbers $1_{(9)},11_{(9)},111_{(9)},1111_{(9)},\ldots$ are prime



I want to prove that none of the numbers $1_{(9)},11_{(9)},111_{(9)},1111_{(9)},\ldots$ are prime, where $x_{(9)}$ means the number $x$ is written in base $9$. My first attempt was to try mathematical induction. $V(0)$ works because $1$ is not a prime. But then I couldn't prove the induction step:
$$1+9+81+...+9^n \text{ is not prime}\Rightarrow 1+9+81+...+9^{n+1} \text{ is not prime}$$

My next try was to prove it with a geometric series.
Let $$s_n=\sum_{k=0}^{n-1}9^k.$$
We are proving $\forall n\in\Bbb{N}:s_n \text{ is not prime}$.
Using the sum of a geometric series, $$s_n=\frac{9^n-1}{8}.$$ But here I am stuck again and have no idea how to show that this can't be prime for any $n\in\Bbb{N}$.


Answer



Hint
$$9^n-1=(3^n)^2-1=(3^n-1)(3^n+1)$$



Now, $3^n-1$ and $3^n+1$ are two consecutive even numbers, thus one is divisible by $4$ and the other by $2$. Consider the two cases (when $4\mid 3^n-1$ and when $4\mid 3^n+1$) and write $\frac{9^n-1}{8}$ as a product of two integers; then explain why neither factor can be $1$.
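Spelling out the two cases: if $4\mid 3^n-1$ then
$$\frac{9^n-1}{8}=\frac{3^n-1}{4}\cdot\frac{3^n+1}{2},$$
while if $4\mid 3^n+1$ then
$$\frac{9^n-1}{8}=\frac{3^n-1}{2}\cdot\frac{3^n+1}{4}.$$
For $n\ge2$ both factors exceed $1$, and for $n=1$ the number is $s_1=1$, which is not prime.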


discrete mathematics - Prove that the set $\mathbb{N} \times \mathbb{N} \times \mathbb{N}$ is countable.




I have seen proofs that $\mathbb{N}\times\mathbb{N}$ is countable. The proof by listing, where you arrange the pairs in a grid and then count them diagonally, is the most convincing. I have two thoughts in mind:




  1. Write it as a composition of functions. Show that both functions are bijections, so the composition is also a bijection.

  2. Show that $\mathbb{N} \times \mathbb{N} \times \mathbb{N}$ has the same cardinality as $\mathbb{N}\times\mathbb{N}$, and $\mathbb{N}\times\mathbb{N}$ has the same cardinality as $\mathbb{N}$.



Generally, I'm having difficulty going from $\mathbb{N}\times\mathbb{N}\times\mathbb{N}$ to $\mathbb{N}\times\mathbb{N}$, since it is easy to prove the bijection from $\mathbb{N}\times\mathbb{N}$ to $\mathbb{N}$.


Answer



If you already know that $\Bbb{N\times N}$ is countable, fix a bijection $f\colon\Bbb{N\times N\to N}$. Now consider $g\colon\Bbb{N\times N\times N\to N\times N}$ defined as: $$g(n,m,k)=(n,f(m,k)),$$ or better yet $h\colon\Bbb{N\times N\times N\to N}$ defined as $$h(n,m,k)=f(n,f(m,k))$$ and cut out the middle-man.




As my freshman discrete mathematics professor used to tell us, go home and convince yourself this is correct.
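For concreteness, one valid choice is the Cantor pairing function
$$f(m,n)=\frac{(m+n)(m+n+1)}{2}+n,$$
a bijection $\Bbb{N\times N\to N}$ (with $0\in\Bbb N$); then $h(n,m,k)=f(n,f(m,k))$ is an explicit bijection $\Bbb{N\times N\times N\to N}$.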


linear algebra - matrix elementary column operations

Until now I was using the elementary row operations to do Gaussian elimination or to calculate the inverse of a matrix.
As I started learning the Laplace expansion to calculate the determinant of an $n \times n$ matrix, I noticed that the book uses elementary column operations. I tried to use column operations to do Gaussian elimination or to solve a system $Ax = b$, but it didn't work (it comes out as a wrong answer).
I'm getting confused!

Example:
Consider the system $Ax=b$ with
$$A=\left[\begin{array}{rrr}
2 & 1 & 3 \\
4 & 4 & 2 \\
1 & 1 & 4 \\
\end{array}\right],\qquad b= \left[\begin{array}{r}10\\8\\16\end{array}\right],$$
where the columns of $A$ correspond to the unknowns $x_1, x_2, x_3$.

In this case, if I interchange two rows, add a row to another, or multiply a row by a nonzero element, the answer is always $$x = (-2,\,2,\,4),$$
but if I interchange, for example, the columns of $x_1$ and $x_2$, or add a column to another, a different answer comes out.



So why do column operations work in some settings and not in others? How do you know when column operations can be used?
It would be great if anybody could help!




Reminder:



Elementary Row / Column Operations:
1. Interchanging two rows (or columns),
2. Adding a multiple of one row (or column) to another,
3. Multiplying any row (or column) by a nonzero element.
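One way to see the difference, sketched briefly: a column operation on $A$ is right-multiplication by an elementary matrix $E$, so $Ax=b$ turns into $(AE)(E^{-1}x)=b$, and the new system $(AE)y=b$ is solved by $y=E^{-1}x$ rather than by $x$; a column swap, for instance, permutes the entries of the solution. Row operations are left-multiplications, $(EA)x=Eb$, which leave the solution $x$ unchanged. Determinant computations tolerate both kinds, since $\det(EA)$ and $\det(AE)$ pick up the same factor $\det E$.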

Thursday 19 May 2016

measure theory - If $f\left(x,\cdot\right)$ is measurable for every $x$ and $f\left(\cdot,y\right)$ is measurable for every $y$, is $f$ necessarily measurable?



If $f\left(x,\cdot\right)$ is measurable for every $x$ and $f\left(\cdot,y\right)$ is measurable for every $y$, is $f$ necessarily measurable?



More precisely, let $\left(\Omega_i,\mathcal{A}_i\right)$ be measurable spaces, $i\in\left\{1,2,3\right\}$. Let $f:\Omega_1\times\Omega_2\rightarrow\Omega_3$. For every $\omega_1\in\Omega_1$, $\omega_2\in\Omega_2$ define




$$
f_1^{\left(\omega_1\right)}:\Omega_2\rightarrow\Omega_3,\space\space f_1^{\left(\omega_1\right)}\left(\omega_2\right):=f\left(\omega_1,\omega_2\right)
$$



$$
f_2^{\left(\omega_2\right)}:\Omega_1\rightarrow\Omega_3,\space\space f_2^{\left(\omega_2\right)}\left(\omega_1\right):=f\left(\omega_1,\omega_2\right)
$$



It is known (e.g. Schilling, Theorem 13.10 iii) that if $f$ is $\mathcal{A}_1\otimes\mathcal{A}_2/\mathcal{A}_3$-measurable, then $f_1^{\left(\omega_1\right)}$ is $\mathcal{A}_2/\mathcal{A}_3$-measurable for every $\omega_1\in\Omega_1$ and $f_2^{\left(\omega_2\right)}$ is $\mathcal{A}_1/\mathcal{A}_3$-measurable for every $\omega_2\in\Omega_2$.




But does the converse hold as well?



In comparison, both directions hold in the following, related result (Schilling, Theorem 13.10 ii): $f:\Omega_3\rightarrow\Omega_1\times\Omega_2$ is $\mathcal{A}_3/\mathcal{A}_1\otimes\mathcal{A}_2$-measurable iff $\pi_i\circ f$ is $\mathcal{A}_3/\mathcal{A}_i$-measurable ($i\in\left\{1,2\right\}$), with
$$\pi_i:\Omega_1\times\Omega_2\rightarrow\Omega_i,\space\space \pi_i\left(\left(\omega_1,\omega_2\right)\right):=\omega_i$$



References



Schilling, René L. Measures, Integrals and Martingales. 2005


Answer




No, the converse doesn't hold. The following counter-example is based on clark's comment to my original post.



Set $\Omega_i:=\mathbb{R}$, $\mathcal{A}_i:=\mathfrak{B}$ ($\mathfrak{B}$ being the standard Borel field on the real line). Let $C\subseteq\mathbb{R}$ be any non-Borel set, e.g. the Vitali set. Let $T:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\times\mathbb{R}$ be a rotation of the plane by an angle that is not a multiple of $\frac{\pi}{2}$. Define $C'$ to be the set $\left\{\left(x,0\right) : x\in C\right\}$, i.e. the natural embedding of $C$ in the $2$-dimensional "$x$-axis".



$C'\notin\mathfrak{B}\otimes\mathfrak{B}$, since otherwise $C$ would be $\in\mathfrak{B}$, as a section of $C'$ (cf. Halmos, p. 141), contrary to assumption. Define $D:=T\left(C'\right)$. Since $T$ is a measurable bijection with measurable inverse (both are measurable since they are linear), $D\notin\mathfrak{B}\otimes\mathfrak{B}$. Define $f:=\mathbb{1}_D$. Thus $f$ is not $\mathfrak{B}\otimes\mathfrak{B}/\mathfrak{B}$-measurable. Since $T$ is linear and $C'$ lies on a line through the origin, so does $D$; but since, by definition of $T$, this line is perpendicular to neither "axis", the sections of $D$ consist of at most a single point. So for all $x\in\mathbb{R}$, $f_1^{(x)}$ is $\mathfrak{B}/\mathfrak{B}$-measurable, and likewise $f_2^{(y)}$ for all $y\in\mathbb{R}$.



References



Halmos, Paul Richard. Measure Theory. 1974


calculus - How to solve this limit without L'Hospital?



How can we solve this limit without using L'Hospital's rule? I tried using some other methods but can't get the answer. $$\lim_{x\to 1}\frac{\sqrt[3]{x}-1}{\sqrt[]{x}-1}$$


Answer



Let $x=t^6$, so $t\to 1$ as $x\to 1$. Note that $$\lim_{x \to 1 }\frac{\sqrt[3]{x}-1}{\sqrt[]{x}-1}=\lim_{t \to 1} \frac{t^2-1}{t^3-1}=\lim_{t \to 1} \frac{(t-1)(t+1)}{(t-1)(t^2+t+1)}=\lim_{t \to 1}\frac{t+1}{t^2+t+1}$$
So the limit is $\frac{2}{3}$.



complex analysis - Find the range of convergence of the series $\,\sum_{n=0}^\infty \frac{z^n}{1+z^{2n}}$



The series I have is
$$\displaystyle\sum_{n=0}^\infty {\dfrac{z^n}{1+z^{2n}}}$$



The same series with absolute values is:
$$\displaystyle\sum_{n=0}^\infty {\dfrac{|z|^n}{1+|z|^{2n}}}$$




Using d'Alembert's ratio test,
$$\displaystyle\lim {\dfrac{a_{n+1}}{a_n}} = \lim {\dfrac{|z|^{n+1}}{1+|z|^{2n+2}}} \cdot {\dfrac{1+|z|^{2n}}{|z|^n}} = |z| \quad (\text{for } |z|<1)$$



The convergence range is when $|z| < 1$. But the book answer is $|z| \ne 1$.


Answer



If $|z|=r<1$, and $n\ge 1$, then
$$
\left|\frac{z^n}{1+z^{2n}}\right|\le \frac{|z|^n}{1-|z|^{2n}}=\frac{r^n}{1-r^{2n}}
<\frac{r^n}{1-r}
$$


and hence the series
$$
\sum_{n=0}^\infty\frac{z^n}{1+z^{2n}}
$$

converges, due to Comparison Test.



If $|z|=1$, and in particular $z=i$, then the series is not even definable.



Note. This is not a power series, and hence finding a radius of convergence is out of the question. In fact, for every $z$ with $|z|>1$ the series converges absolutely as well, since $|1+z^{2n}|\ge|z|^{2n}-1>0$. Meanwhile, the unit circle is a natural boundary of the series, since the points $z=\exp(i\pi k/2^{\ell})$ with $k$ odd are (non-isolated) singularities, for all $k,\ell\in\mathbb N$.
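Alternatively, observe the symmetry
$$\frac{(1/z)^n}{1+(1/z)^{2n}}=\frac{z^n}{1+z^{2n}},$$
so the terms are unchanged under $z\mapsto 1/z$: convergence for $|z|<1$ immediately yields convergence for $|z|>1$, matching the book's answer $|z|\ne1$.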


discrete mathematics - How do I properly do back substitution and put equations into the form of Bezout's theorem after using the Euclidean Algorithm?



For some problems, even longer ones, I've been able to see the pattern and properly do back substitution to bring a series of equations I've derived using the Euclidean algorithm to the form of Bezout's theorem:




$\gcd(a,m) = sa+tm,$

where $s$ and $t$ are integers.



But, on some problems I get stuck and have no idea how to move forward.



For example, starting from finding $\gcd(3454,4666)$:



Using the Euclidean Algorithm I find:




$4666 = 3454 \cdot 1 + 1212$, so $1212 = 4666 - 3454 \cdot 1$

$3454 = 1212 \cdot 2 + 1030$, so $1030 = 3454 - 1212 \cdot 2$

$1212 = 1030 \cdot 1 + 182$, so $182 = 1212 - 1030 \cdot 1$

$1030 = 182 \cdot 5 + 120$, so $120 = 1030 - 182 \cdot 5$

$182 = 120 \cdot 1 + 62$, so $62 = 182 - 120 \cdot 1$

$120 = 62 \cdot 1 + 58$, so $58 = 120 - 62 \cdot 1$

$62 = 58 \cdot 1 + 4$, so $4 = 62 - 58 \cdot 1$

$58 = 4 \cdot 14 + 2$, so $2 = 58 - 4 \cdot 14$



For my first step I substitute for the $4$:

$2 = 58 - (62 - 58) \cdot 14$




Where do I go from here? What are some general strategies to solve problems of this form? I'm having an inordinately hard time with some of these problems, but find others trivial; what is going on? What should I look out for when approaching problems of this type?



If you would like me to clarify content, please ask me such and I will edit accordingly. Thank you for taking the time to read this!


Answer



It is usually simpler and far less error prone to compute the Bezout identity in the forward direction by using this version of the Extended Euclidean algorithm, which keeps track of each remainder's expression as a linear combination of the gcd arguments. Below is the computation in your example - so simple that it can be done purely mentally in a few minutes. Here we use least magnitude remainders to speed it up, e.g. $\bmod 1212\!:\,\ 3454\equiv 1030\equiv -182$.



$$\rm\begin{eqnarray}
[\![0]\!]\quad 4666\ &=&\,\ \ \ 1&\cdot& 4666\, +\ 0&\cdot& 3454 \\
[\![1]\!]\quad 3454\ &=&\,\ \ \ 0&\cdot& 4666\, +\ 1&\cdot& 3454 \\
[\![0]\!]\ -\,\ [\![1]\!]\, \rightarrow\, [\![2]\!]\quad 1212\ &=&\,\ \ \ 1&\cdot& 4666\, -\ 1&\cdot& 3454 \\
[\![1]\!]-3\,[\![2]\!]\,\rightarrow\,[\![3]\!]\ \ \ {-}182\ &=&\, {-}3&\cdot& 4666\, +\, 4&\cdot& 3454 \\
[\![2]\!]+7\,[\![3]\!]\,\rightarrow\,[\![4]\!]\ \ \ \ \ {-}62\ &=& {-}20&\cdot& 4666\, +\,27&\cdot& 3454\\
[\![3]\!]-3\,[\![4]\!]\,\rightarrow\,[\![5]\!]\qquad\ \ 4\ &=&\,\ \ 57&\cdot& 4666\, -77&\cdot& 3454 \\
[\![4]\!]\!+\!15\,[\![5]\!]\,\rightarrow\,[\![6]\!]\quad\ \ \, {-}2\ &=&\ \ 835&\cdot& 4666\, {-}1128&\cdot& 3454 \\
\end{eqnarray}\qquad$$



Negating the final equation yields the Bezout equation for the gcd $= 2$.



As an optimization we can omit one of the RHS columns, it being computable from the other, e.g. $1128 = ((835\cdot 4666)+2)/3454$. Then the equations may be viewed in fractional form. But it is best to master the above explicit equational form before proceeding to the optimizations.
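For readers who prefer to automate the bookkeeping, here is a minimal Python sketch of the same forward method (an illustration only, using ordinary floor-division quotients rather than the least-magnitude remainders chosen above; the function name is ours):

    def extended_gcd(a, b):
        # Invariant: old_r == old_s*a + old_t*b and r == s*a + t*b,
        # mirroring the row-by-row bookkeeping in the table above.
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
            old_t, t = t, old_t - q * t
        return old_r, old_s, old_t  # gcd(a, b) and Bezout coefficients

    g, s, t = extended_gcd(4666, 3454)
    assert s * 4666 + t * 3454 == g == 2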


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...