Saturday 30 June 2018

modular arithmetic - How to calculate $5^{2003}$ mod $13$


How to calculate $5^{2003}$ mod $13$





using Fermat's little theorem:

$5^{13-1} \equiv 1 \pmod{13}$

$(5^{12})^{166} + 11 \pmod{13}$

$a + b \bmod n = (a \bmod n + b \bmod n) \bmod n$

$(1 + 11 \bmod 13) \bmod 13$

$12 \bmod 13 = 12$

Why is the answer $8$?

How do we calculate this?

Thanks.
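As a sanity check (a minimal Python sketch, not part of the original post): writing $2003 = 12\cdot 166 + 11$, Fermat's little theorem gives $5^{2003} = (5^{12})^{166}\cdot 5^{11} \equiv 5^{11} \pmod{13}$, the factors combining multiplicatively rather than additively, and $5^{11}\equiv 8\pmod{13}$:

    # Check 5^2003 mod 13 via Fermat's little theorem: 2003 = 12*166 + 11.
    q, r = divmod(2003, 12)
    assert (q, r) == (166, 11)
    assert pow(5, 12, 13) == 1        # Fermat: 5^(13-1) = 1 (mod 13)
    print(pow(5, r, 13))              # prints 8, the reduced power 5^11 mod 13
    print(pow(5, 2003, 13))           # prints 8, direct modular exponentiation agrees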

elementary number theory - Discuss the solution to 5x ≡ 8(mod 10)



Discuss the solution to 5x ≡ 8 in arithmetic modulo 10.



The methods I know for solving modular equations are by using the multiplicative inverse and by using the array method (using a table to get a linear combination such that ax + by = 1).



5 in the case of mod 10 is not a unit/ does not have a multiplicative inverse because it is not relatively prime to 10. So this led me to try the array method and I ended up getting the linear combination of x = 2 and y = -3.
15= 8(2) + 5(-3)
So x = -3. Does this answer make sense in the context of the problem and can solutions to modular arithmetic of this kind be negative?




Thank you for any help, I'm still new at this.


Answer



The easiest way is to notice that $5x \equiv 0$ or $5 \pmod{10}$, so it can never be $8$. This can be shown formally.



If $x$ is even then let $x=2k$, then $5x = 10k \equiv 0 \neq 8 \mod 10$.



If $x$ is odd then let $x=2k+1$, then $5x = 10k+5 \equiv 5 \neq 8 \mod 10$.



Therefore the equation $5x \equiv 8 \mod 10$ has no solutions.
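As a small illustration of the answer, a brute-force Python sketch (not part of the original post) shows that $5x \bmod 10$ only ever takes the values $0$ and $5$:

    # 5x mod 10 depends only on x mod 10, so ten cases cover everything.
    residues = {(5 * x) % 10 for x in range(10)}
    print(residues)          # {0, 5}
    print(8 in residues)     # False: 5x = 8 (mod 10) has no solution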



Complex sum of sine and cosine function

I have worked something out like the following



https://math.stackexchange.com/a/1172454/626039
- reproduced here:
$$\begin{align}
\sum_{k=1}^{n} \sin (k\theta)&=\Im \sum_{k=1}^{n} e^{ik\theta}\\\\
&=\Im\left( e^{i\theta}\frac{e^{in\theta}-1}{e^{i\theta}-1}\right)\\\\
&=\Im\left( e^{i\theta}\frac{e^{in\theta/2}\left(e^{in\theta/2}-e^{-in\theta/2}\right)}{e^{i\theta/2}\left(e^{i\theta/2}-e^{-i\theta/2}\right)}\right)\\\\
&=\Im\left( e^{i\theta}\frac{e^{in\theta/2}\left(2i\sin(n\theta/2)\right)}{e^{i\theta/2}\left(2i\sin(\theta/2)\right)}\right)\\\\
&=\Im\left( e^{i(n+1)\theta/2}\frac{\sin(n\theta/2)}{\sin(\theta/2)}\right)\\\\
&=\Im\left( \left(\cos ((n+1)\theta/2)+i\sin ((n+1)\theta/2)\right)\frac{\sin(n\theta/2)}{\sin(\theta/2)}\right)\\\\
&=\frac{\sin ((n+1)\theta/2)}{\sin(\theta/2)}\sin(n\theta/2).
\end{align}$$



The only problem is that I want to do this for the case when the sum starts at $k = 0$, not $k = 1$. I can't seem to achieve the same result, though it should follow according to my question.
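A short numerical check (a Python sketch, not part of the original post; the values $\theta=0.7$ and $n=25$ are arbitrary) confirms the closed form and also shows that starting the sum at $k=0$ changes nothing, since the $k=0$ term is $\sin 0 = 0$:

    import math

    theta, n = 0.7, 25
    lhs0 = sum(math.sin(k * theta) for k in range(0, n + 1))   # sum starting at k = 0
    lhs1 = sum(math.sin(k * theta) for k in range(1, n + 1))   # sum starting at k = 1
    rhs = math.sin((n + 1) * theta / 2) * math.sin(n * theta / 2) / math.sin(theta / 2)
    print(lhs0, lhs1, rhs)   # all three agree: the k = 0 term contributes nothing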

Friday 29 June 2018

calculus - Example of a continuous and Gâteaux differentiable function that is not Fréchet differentiable.



Is there an example of a function $f:\mathbb{R}^2 \to \mathbb{R}$, with $f(0,0) = 0$, that is Gâteaux differentiable (all directional derivatives exist) and continuous at $(0,0)$, but is not Fréchet differentiable at $(0,0)$?



Edit: By Gâteaux differentiable, I use the definition that the Gâteaux derivative is not required to be a linear map, but just all the directional derivatives to exist in all directions.


Answer



Let $f \colon \mathbb R^2 \to \mathbb R$ be given by

$$f(x,y) = \begin{cases} x & x \ne 0, y = x^2, \\ 0 & \text{else}.\end{cases}$$
This function is directionally differentiable (with a linear derivative) and continuous at $(0,0)$, but it is not Fréchet differentiable there.
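To see concretely why Fréchet differentiability fails, one can look at the difference quotient along the parabola $y=x^2$, where $f$ does not vanish. A small Python sketch (not part of the original answer; since all directional derivatives are $0$, a Fréchet derivative would have to be the zero map):

    import math

    def f(x, y):
        # the counterexample: equals x on the parabola y = x^2 (x != 0), zero elsewhere
        return x if (x != 0 and y == x * x) else 0.0

    for x in [1e-1, 1e-2, 1e-3, 1e-4]:
        # tends to 1, not 0, so there is no Fréchet derivative at (0,0)
        print(f(x, x * x) / math.hypot(x, x * x))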



With your definition of Gâteaux differentiability, you can even use any norm on $\mathbb R^2$, e.g.,
$$f(x,y) = \sqrt{x^2 + y^2}$$
or
$$f(x,y) = |x| + |y|.$$


integration - Calculate $\int_0^{2\pi} \frac{\sin(t) + 4}{\cos(t) + \frac{5}{3}}\,dt$




I have to calculate $\int_0^{2\pi} \frac{\sin t + 4}{\cos t + \frac{5}{3}}dt$ using complex analysis.



I was thinking of setting $z(t) = re^{it}$, but I'm not sure what $r$ to pick, or whether I can just pick any, and whether this is even useful. Do I have to worry about the numerator of the integrand? Before this I only had to calculate an integral around some curve and then look at the singular values and use the residue theorem. Now it seems I have to do it the other way around?


Answer



HINT: split the integral into two summands:
$$\int_0^{2\pi} \frac{\sin t + 4}{\cos t + \frac{5}{3}} dt = \int_0^{2\pi} \frac{\sin t}{\cos t + \frac{5}{3}} dt + 4\int_0^{2\pi} \frac{dt}{\cos t + \frac{5}{3}} =$$
$$=\left. -\log \left( \cos t + \frac{5}{3} \right) \right|_0^{2\pi} + 4\int_{|z|=1} \frac{1}{\frac{z+z^{-1}}{2} + \frac{5}{3}} \frac{dz}{iz}$$



Where you substitute $z=e^{it}$, so that $dz=ie^{it} dt= iz dt$ and $\cos t = \frac{e^{it}+e^{-it}}{2}=\frac{z+z^{-1}}{2}$.




Continuing, you get
$$0+ \frac{24}{i} \int_{|z|=1} \frac{dz}{(z+3)(3z+1)} = \frac{24}{i} \left(2\pi i \operatorname{Res} \left( \frac{1}{(z+3)(3z+1)} , -\frac{1}{3} \right)\right) = 48 \pi \frac{1}{8} = 6 \pi$$
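As a numerical sanity check of the value $6\pi$, here is a plain Riemann-sum sketch (not part of the original answer):

    import math

    N = 200_000
    dt = 2 * math.pi / N
    total = sum((math.sin(i * dt) + 4) / (math.cos(i * dt) + 5 / 3) for i in range(N)) * dt
    print(total, 6 * math.pi)   # both approximately 18.8496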


Thursday 28 June 2018

calculus - How to know whether an integral can be solved?

I just wonder: how do you know whether an integral can be solved? For example, the exponential integral cannot be reduced to a closed-form result.

Wednesday 27 June 2018

sequences and series - Proving an identity involving the alternating sum of products of binomial coefficients



Prove the following identity:
$$
\sum_{k=0}^{l}(-1)^k
\binom{j-k}{l-1}\binom{l}{k}=0$$

for integers $l\geq1$ and $j\geq l$.




Using wolfram alpha I have confirmed that this identity is true. But I am not sure how I can prove it myself. I have tried to split it into even and odd values of $k$, but that did not work. I have tried a proof by induction in combination with the identity $\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}$, but that also did not work. I think the proof might require a more sophisticated method.


Answer



Your induction idea should work. Using the identity you suggested, rewrite your expression as
$$
\begin{align}
\sum_{k=0}^l(-1)^k\binom{j-k}{l-1}\binom{l}{k}&=\sum_{k=0}^{l-1}(-1)^k\binom{j-k}{l-1}\binom{l-1}{k}+\sum_{k=1}^l(-1)^k\binom{j-k}{l-1}\binom{l-1}{k-1}\\
&=\sum_{k=0}^{l-1}(-1)^k\binom{j-k}{l-1}\binom{l-1}{k}-\sum_{k=0}^{l-1}(-1)^k\binom{j-1-k}{l-1}\binom{l-1}{k}.
\end{align}
$$

Now use your identity a second time, applying it to the binomial coefficient $\binom{j-k}{l-1}$ in the first sum. After cancelling some terms, you will be able to apply induction on $l$.
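A brute-force check of the identity for small parameters (a Python sketch, not part of the original answer, using exact integer arithmetic):

    from math import comb

    # verify sum_{k=0}^{l} (-1)^k * C(j-k, l-1) * C(l, k) == 0 for small j >= l >= 1
    for l in range(1, 8):
        for j in range(l, 15):
            s = sum((-1) ** k * comb(j - k, l - 1) * comb(l, k) for k in range(l + 1))
            assert s == 0, (j, l, s)
    print("identity holds for all tested (j, l)")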



calculus - Prove $\sin a=\int_{-\infty}^{\infty}\cos(ax^2)\frac{\sinh(2ax)}{\sinh(\pi x)}\, dx$

Derive the integral representation




$$\sin a=\int_{-\infty}^{\infty}\cos(ax^2)\frac{\sinh(2ax)}{\sinh(\pi x)}dx$$
for $|a|\le \pi/2$.

Tuesday 26 June 2018

trigonometry - Prove this trigonometric identity in quadrilateral



If $\alpha,\beta,\gamma,\delta$ are angles in quadrilateral different from $90^\circ$, prove the following:



$$ \frac{\tan\alpha+\tan\beta+\tan\gamma+\tan\delta}{\tan\alpha\tan\beta\tan\gamma\tan\delta}=\cot\alpha+\cot\beta+\cot\gamma+\cot\delta $$



I tried different transformations with using $\alpha+\beta+\gamma+\delta=2\pi$ in equation above, but no success. Am I missing some not-so-well-known formula?


Answer



It follows directly from $\tan(\alpha + \beta + \gamma + \delta) = 0$ and the sum angle formula for $\tan$ (see here: Tangent sum using symmetric polynomials)




Using that formula we get (from numerator = 0) that



$$ \tan \alpha + \tan \beta + \tan \gamma + \tan \delta = $$



$$\tan \alpha\tan \beta\tan \gamma+ \tan \alpha\tan \beta\tan \delta + \tan \alpha\tan \gamma\tan \delta + \tan \beta\tan \gamma\tan \delta$$



Dividing by $ \tan \alpha\tan \beta\tan \gamma\tan \delta$ gives the result.
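A quick numerical check of the identity (a Python sketch, not part of the original answer; the first three angles are arbitrary and the fourth is chosen so that the four sum to $2\pi$):

    import math

    a, b, c = 1.0, 1.3, 0.7
    d = 2 * math.pi - (a + b + c)        # angles of a quadrilateral sum to 2*pi
    angles = [a, b, c, d]
    lhs = sum(map(math.tan, angles)) / math.prod(math.tan(x) for x in angles)
    rhs = sum(1 / math.tan(x) for x in angles)
    print(lhs, rhs)                      # both sides agree numerically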


calculus - Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$



I'm supposed to calculate:



$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$



By using W|A, I may guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.


Answer



Edited. I justified the application of the dominated convergence theorem.




By a simple calculation,



$$ \begin{align*}
e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!}
&= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\
(1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\
&= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\
(2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\
&= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\
(3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du.
\end{align*}$$



We remark that




  1. In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.

  2. In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.

  3. In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.




Then in view of Stirling's formula, it suffices to show that



$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$



The idea is to introduce the function



$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$



and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then




$$ \log g_n (u)
= n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u
= -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$



From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,



$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
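The slow drift towards $\frac12$ can also be seen numerically (a short Python sketch, not part of the original answer; the terms are built recursively to avoid overflow):

    import math

    def s(n):
        # e^{-n} * sum_{k=0}^{n} n^k / k!
        term, total = math.exp(-n), 0.0
        for k in range(n + 1):
            total += term
            term *= n / (k + 1)
        return total

    for n in (10, 100, 400):
        print(n, s(n))    # roughly 0.583, 0.527, 0.513, slowly approaching 1/2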


logic - Classifying Types of Paradoxes: Liar's Paradox, Et Alia

The well-known Liar's Paradox "This statement is false" leads to a recursive contradiction: If the statement is interpreted to be true then it is actually false, and if it is interpreted to be false then it is actually true. The statement is a paradox where neither truth value can be assigned to it.



However, "This statement is true" also leads to a paradox where either truth value can be assigned to it with equal validity. If the statement is perceived to be true then it is actually true, and if the statement is perceived to be false then it is actually false.



These two statements demonstrate two different classes of paradox.



The same paradoxical situations arise in set theory: "the set of all sets that do not contain themselves" leads to the former paradox (neither solution is valid), and "the set of all sets that do contain themselves" leads to the latter paradox (either solution is valid).



My question is: How many classifications of paradox exist? Is there any development in classifying types of paradoxes and applying them to mathematical logic, computer science, and set theory? What implications would classes of paradoxes have on Gödel's incompleteness theorems--could a system that allows and classifies paradoxes be demonstrably consistent?

Monday 25 June 2018

calculus - Probability of intersection of line segments




A pair of points is selected at random inside a unit circle and a line segment is drawn joining the two. Another pair is selected and a second line segment is drawn. Find probability that the two segments don't intersect.





I don't have a clue where to start with this one. I assumed the first pair but I'm not able to come up with the condition for the two segments to not intersect. Please help me out. Thank you.



Here is an example of a possible position of the segments -



[figure: an example of a possible position of the two segments]


Answer



The following is an equivalent way of seeing your problem:



Pick any 4 points at random inside the circle; these four points are the vertices of a convex quadrilateral. After that, pick two of them at random and label them $P,P'$. Label the other two vertices $Q, Q'$. Then the segments $PP'$ and $QQ'$ intersect if and only if they are the diagonals of the quadrilateral.




The process I described is equivalent to picking randomly a segment among the six segments whose vertices are the four points, and the probability of picking a diagonal among them is $\frac{2}{6}$.



So the probability that the two do not intersect is $\frac{4}{6}= \frac{2}{3}$.


Sunday 24 June 2018

definition - What is the square root of complex number i?



The square root of the number $-1$ is defined as $i$; what, then, is the square root of the complex number $i$? I would say it should be $j$, as logic suggests, but it is not defined that way in quaternion theory. Am I wrong?



EDIT: my question is rather about the nomenclature of the definition: while the square root of $-1$ is defined as $i$, why is $j$ not defined as the square root of $i$, and $k$ as the square root of $j$, and do those numbers have deeper meaning and usage, as in quaternion theory?



Answer



Unfortunately, this cannot be answered definitively. In fact, every non-zero complex number has two distinct square roots, because $-1\ne1,$ but $(-1)^2=1^2.$ When we are discussing real numbers with real square roots, we tend to choose the nonnegative value as "the" default square root, but there is no natural and convenient way to do this when we get outside the real numbers.



In particular, if $j^2=i,$ then putting $j=a+bi$ where $a,b\in\Bbb R,$ we have $$i=j^2=(a+bi)^2=a^2-b^2+2abi,$$ so we need $0=a^2-b^2=(a+b)(a-b)$ and $2ab=1.$ Since $0=(a+b)(a-b),$ then $a=\pm b.$ If we had $a=-b,$ then we would have $1=2ab=-2b^2,$ but this is impossible, since $b$ is real. Hence, we have $a=b,$ so $1=2ab=2b^2,$ whence we have $b=\pm\frac1{\sqrt{2}},$ and so the square roots of $i$ are $\pm\left(\frac1{\sqrt{2}}+\frac1{\sqrt{2}}i\right).$
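A quick numerical confirmation of the two square roots (a Python sketch, not part of the original answer, using the built-in complex type):

    import cmath

    r = cmath.sqrt(1j)      # principal square root of i
    print(r, -r)            # +/-(0.7071... + 0.7071...j), i.e. the two roots derived above
    print(r * r)            # approximately 1j, confirming the computation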



I discuss in my answer here that $i$ is defined as one of two possible numbers in the complex plane whose square is $-1$ (it doesn't actually matter which, as far as the overall structure of the complex numbers is concerned). Once we've chosen our $i,$ though, we have fixed which "version" of the complex numbers we're talking about. We could then pick a canonical square root of $i$ (and call it $j$), but there's really no point. Once we've picked our number $i,$ we have an algebraically closed field, meaning (incredibly loosely) that we have all the numbers we could want already there, so we can't (or at least don't need to) add more, and there's no particular need to give any others of them special names.


derivatives - Real Analysis - Uniform continuity



Prove that if $f: (a,b) \rightarrow \Re$ is differentiable on the open interval $(a,b)$, and $f'(x)$ is bounded on the interval $(a,b)$, then $f$ is uniformly continuous on $(a,b)$. Also, prove the converse is false, that is, find a function that is uniformly continuous on $(-1,1)$ whose derivative is unbounded on $(-1,1)$.



So I'm a little confused with how to approach this question but this is what I'm thinking so far.



So for every $\epsilon \gt 0$ there is a $\delta \gt 0$ such that for all $x,y \in (a,b)$ with $|x-y| \lt \delta$... and this is where I get stuck. I know it's only the beginning, but I don't know how to incorporate the differentiability of $f$, and I have only worked with uniform continuity for functions such as $1/x$ and the like. Can someone guide me in the right direction?



Answer



In general, if $f(x)$ is continuous on [a,b] and $f'(x)$ exists and is bounded by say $M$ on $(a,b)$, then $f(x)$ is Lipschitz continuous(https://en.wikipedia.org/wiki/Lipschitz_continuity), which is a stronger form of uniform continuity.



Indeed, by MVT for any $x,y \in [a,b]$, there exists a $\xi \in (a,b)$ such that (assume $x>y$)



$|f(x)-f(y)| = |(x-y)||f'(\xi)| \leq M|(x-y)|$. Now we let $\epsilon > 0$, and choose $\delta < \frac{\epsilon}{M}$, and we have uniform continuity. In general, MVT is a very powerful tool to bound functions.



For the second part, $f(x) = \frac{1}{x}$ is not uniformly continuous on $(0,1)$, because one can prove that on an open interval $(a,b)$, $f$ is uniformly continuous over $(a,b)$ IF AND ONLY IF it can be extended to a function $g$ that is continuous on $[a,b]$ ($g$ will then be uniformly continuous over $[a,b]$, as $[a,b]$ is compact).



For an answer to the second part, see: (A uniformly continuous function such that the derivative is not bounded and is not defined on a compact?).
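One concrete example fitting the second part (an illustration added here, not necessarily the one in the linked post): $f(x) = \sqrt{1-x^2}$ extends continuously to $[-1,1]$, hence is uniformly continuous on $(-1,1)$, while its derivative $-x/\sqrt{1-x^2}$ is unbounded near the endpoints. A small Python sketch:

    import math

    f = lambda x: math.sqrt(1 - x * x)
    df = lambda x: -x / math.sqrt(1 - x * x)
    for x in (0.9, 0.99, 0.999, 0.9999):
        print(x, f(x), df(x))   # f stays bounded while |f'| blows up near x = 1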



Odd number $N$ which does not divide $2^k-1$


Prove that there does not exist a composite odd number $N > 1$ which does not divide $2^k-1$ for $k = 1,2,\ldots,N-2$.




I conjectured this result, but wasn't sure how to prove it. I tried it for many cases and it seems to be true.
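A brute-force check of the statement for small $N$ (a Python sketch, not part of the original post): for composite odd $N$ the multiplicative order of $2$ modulo $N$ divides $\varphi(N)\le N-2$, so $N$ divides some $2^k-1$ with $k\le N-2$.

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    for N in range(9, 200, 2):
        if is_prime(N):
            continue
        k = next(k for k in range(1, N - 1) if pow(2, k, N) == 1)   # smallest k with N | 2^k - 1
        assert k <= N - 2
    print("verified for all composite odd N < 200")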

Friday 22 June 2018

complex numbers - How do I prove the following statement about a summation of a series?



I have not been able to completely solve this problem and it's driving me crazy. Could you please help.
The question is to show that,

$$\sum_{n=1}^N \frac{\sin n\theta}{2^n} =\frac{2^{N+1}\sin\theta+\sin N\theta-2\sin(N+1)\theta }{2^N(5-4\cos\theta)}$$
Where do I start? I tried solving this using de Moivre's Theorem but I don't know where I am going wrong. Could you please help me or if possible show other ways to tackle this particular problem.



Any Help is much appreciated!




Thanks in Advance!


Answer



If you follow one of the suggestions the summation is the imaginary part of



$$

\begin{align*}
\sum_{n = 1}^{N}\frac{e^{in\theta}}{2^{n}} &= \sum_{n = 1}^{N}(e^{i\theta}/2)^{n}\\
&= \frac{e^{i\theta}}{2} \frac{\left(1-\frac{e^{Ni\theta}}{2^N}\right)}{\left(1-\frac{e^{i\theta}}{2}\right)}\\
&= \frac{e^{i\theta}(2^N-e^{Ni\theta})}{2^N(2-e^{i\theta})}\\
&= \frac{e^{i\theta}(2^N-e^{Ni\theta})(2-e^{-i\theta})}{2^N(2-e^{i\theta})(2-e^{-i\theta})}\\
&= \frac{(2^Ne^{i\theta}-e^{(N+1)i\theta})(2-e^{-i\theta})}{2^N(4-2(e^{i\theta}+e^{-i\theta})+1)}\\
&= \frac{2^{(N+1)}e^{i\theta}-2e^{(N+1)i\theta}-2^N+e^{Ni\theta}}{2^N(4-2(e^{i\theta}+e^{-i\theta})+1)}
\end{align*}
$$




The imaginary part of this is



$$ \frac{2^{(N+1)}\sin \theta - 2\sin (N+1)\theta + \sin N\theta}{2^N (5-4\cos \theta)}$$
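A quick numerical check of the stated formula (a Python sketch, not part of the original answer; $\theta = 0.9$ and $N = 12$ are arbitrary):

    import math

    theta, N = 0.9, 12
    lhs = sum(math.sin(n * theta) / 2 ** n for n in range(1, N + 1))
    rhs = (2 ** (N + 1) * math.sin(theta) + math.sin(N * theta)
           - 2 * math.sin((N + 1) * theta)) / (2 ** N * (5 - 4 * math.cos(theta)))
    print(lhs, rhs)   # the two values agree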


Thursday 21 June 2018

probability theory - Finite expectation of a random variable

$X \geq 0$ be a random variable defined on $(\Omega,\mathcal{F},P)$. Show that $\mathbb{E}[X]<\infty \iff \Sigma_{n=1}^\infty P(X>n) < \infty $.



I got the reverse direction but I am struggling with the $"\implies"$ direction. So far, I have the following worked out:



$\mathbb{E}[X]<\infty$




$\implies \int_0^\infty (1-F(x)) dx < \infty$ (where $F$ is the distribution function of the random variable X)



$\implies \int_0^\infty (1-P(X\leq x)) dx < \infty$



$\implies \int_0^\infty P(X>x) dx < \infty$



Consider $\int_0^\infty P(X>x) dx$



$= \Sigma_{n=1}^\infty \int_{n-1}^n P(X>x) dx$




This is the point I am stuck at. Any help will be deeply appreciated!
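One possible way to finish from exactly this point (a sketch added here, not part of the original post): since $P(X>x)$ is non-increasing in $x$, on each interval $[n-1,n]$ it is at least $P(X>n)$, so
$$\sum_{n=1}^{\infty} P(X>n)\;\le\;\sum_{n=1}^{\infty}\int_{n-1}^{n} P(X>x)\,dx\;=\;\int_{0}^{\infty} P(X>x)\,dx\;=\;\mathbb{E}[X]\;<\;\infty.$$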

sequences and series - How to find the limit $\lim\limits_{n\to\infty}\prod\limits_{j=1}^n \left( 1+\frac{1}{j} \left(\cos \left(\frac{tj}{n} \right)-1 \right) \right)$?





$$ \lim \limits_{n\to\infty}\prod \limits_{j=1}^n \left( 1+\frac{1}{j} \left(\cos \left(\frac{tj}{n} \right)-1 \right) \right)$$




I am unsure how to find this limit. Are there some general techniques that I should be using when looking for the limit? I know that I should get :
$$ \exp\left({\int_0^1 \frac{\cos(tx)-1}{x} \, dx}\right) .
$$


Answer



Put $a_n:=\prod \limits_{j=1}^n \left( 1+\frac{1}{j} \left(\cos \left(\frac{tj}{n} \right)-1 \right) \right)$. Then
$$b_n:=\ln a_n=\sum_{j=1}^n\ln\left(1+\frac{1}{j} \left(\cos \left(\frac{tj}{n} \right)-1 \right)\right).$$
Using the inequality $x-\frac{x^2}2\leq \ln(1+x)\leq x$ for $x>-1$, we get

$$\sum_{j=1}^n\frac 1j\left(\cos\left(\frac{tj}n\right)-1\right)-\frac 12\sum_{j=1}^n\frac 1{j^2}\left(\cos\left(\frac{tj}n\right)-1\right)^2\leq b_n\leq \sum_{j=1}^n\frac 1j\left(\cos\left(\frac{tj}n\right)-1\right).$$
Put $c_n:=\sum_{j=1}^n\frac 1j\left(\cos\left(\frac{tj}n\right)-1\right)$. Then
$$
c_n=\frac1n\sum_{j=1}^n\frac nj\left(\cos\left(\frac{tj}n\right)-1\right)=\frac 1n\sum_{k=1}^nf\left(\frac kn\right),$$
with $f(x)=\frac{\cos (tx)-1}x$, which can be extended by continuity to $\left[0,1\right]$. Hence we get $\lim_{n\to\infty}c_n=\int_0^1f(x)dx=\int_0^1\frac{\cos (tx)-1}xdx$.
Now, since
$$\sum_{j=1}^n\frac 1{j^2}\left(\cos\left(\frac{tj}n\right)-1\right)^2=\frac 1{n^2}\sum_{j=1}^n\frac {n^2}{j^2}\left(\cos\left(\frac{tj}n\right)-1\right)^2$$
is $\frac 1n$ times a Riemann sum of a continuous function (after extension), its limit as $n\to\infty$ is $0$, and we conclude by the squeeze theorem.
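A numerical illustration of the limit (a Python sketch, not part of the original answer; $t=1.3$ is arbitrary): the finite products stabilise near $\exp\left(\int_0^1 \frac{\cos(tx)-1}{x}\,dx\right)$.

    import math

    def finite_product(t, n):
        return math.prod(1 + (math.cos(t * j / n) - 1) / j for j in range(1, n + 1))

    t = 1.3
    for n in (100, 1_000, 10_000):
        print(n, finite_product(t, n))

    # midpoint Riemann sum for the limiting exponent: integral of (cos(tx)-1)/x over [0,1]
    m = 100_000
    integral = sum((math.cos(t * (k + 0.5) / m) - 1) / (k + 0.5) for k in range(m))
    print("exp(integral):", math.exp(integral))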


calculus - What forms can a function take such its derivative is greater than or equal to the function?



Following this question and discussion recently
Is the derivative of a function bigger or equal to $e^x$ will always be bigger or equal to the function ?!



I decided to look at the different forms that a function which is always less than its derivative takes; so, instead of looking at when $\frac{f'(x)}{f(x)} \ge 1$, I restated the question as follows.



Let $g(x)$ be a function such that $g(x) \ge 1\space \forall x$; then solving the following equation will give us the functions we need:

$$\frac{f'(x)}{f(x)} = g(x)$$



Solving that differential equation you get that $f(x)$ takes on the form (where $k$ is a constant and $g(x) \ge 1 \space \forall x$)
$$f(x) = ke^{\int g(x)dx}$$



My question is: do all functions that have derivatives greater than the function itself have this form, or am I missing something?


Answer



We are looking for a simple criterion for a positive function $f$, which is equivalent to the differential inequality $f'(x)\geq f(x)$ for all $x$. One can say the following:



A differentiable function $f:{\mathbb R}\to{\mathbb R}_{>0}$ satisfies $f'(x)\geq f(x)$ for all $x$ iff the function $$g(x):={f(x)\over e^x}$$

is increasing.



Proof. One has $g'(x)=\bigl(f'(x)-f(x)\bigr)e^{-x}$, and this is $\geq0$ for all $x$ iff $f'(x)\geq f(x)$ for all $x$.


real analysis - Find $\lim_{n \to \infty } a_n$




\begin{cases} a_1=\sqrt 3 \\ a_2 = \sqrt {3\sqrt 3}\\ a_n = \sqrt {3a_{n-1}} \quad \text{for } n\ge 2\end{cases}



This sequence is bounded above by $3$ and is monotone increasing, so by monotone bounded sequence theorem, the sequence converges.



But, the question asks to find $\lim_\limits{n \to \infty } a_n$. I guess the limit is $3$, but don't know how to prove it.



Could you give some hint? Thank you in advance.


Answer



Hint




If $\ell$ is the limit, then $$\ell=\sqrt{3\ell}.$$
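Iterating the recursion numerically (a tiny Python sketch, not part of the original answer) shows the convergence to $3$, consistent with solving $\ell=\sqrt{3\ell}$:

    import math

    a = math.sqrt(3)
    for _ in range(30):
        a = math.sqrt(3 * a)
    print(a)    # approximately 3.0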


Wednesday 20 June 2018

linear algebra - find two different generalized inverse of the given matrix



Definition:



For a given matrix $A_{m\times n}$, a matrix $G_{n\times m}$ is said to be a generalized inverse of $A$, if it satisfies
$$AGA=A.$$







Question:



Find two different generalized inverse of the given matrix



$$\begin{pmatrix}
1 & 0 &-1 & 2\\2 & 0 &-2 & 4 \\-1 & 1 & 1 & 3\\
-2 & 2 & 2 & 6

\end{pmatrix}$$



Work done:



Since the echelon form of the matrix is,
$$
\left(\begin{array}{rrrr}
1 & 0 & -1 & 2 \\
0 & 1 & 0 & 5 \\
0 & 0 & 0 & 0 \\

0 & 0 & 0 & 0
\end{array}\right)$$ rank is 2.



Since there are two distinct $2\times 2$ minors,



one of the generalized inverses is,
$$\left(\begin{array}{rrrr}
0 & 0 & 0 & 0 \\
\frac 1 2 &0 & 0 & 0 \\
\frac 1 2 & 1 & 0 & 0 \\

0 & 0 & 0 & 0
\end{array}\right)$$
and the other one is,



$$\left(\begin{array}{rrrr}
0 & 0 & 0 & 0 \\
0 &0 & \frac 3 {10} & -\frac 4{10} \\
0& 0 & \frac 1 {10} & \frac 2 {10} \\
0 & 0 & 0 & 0
\end{array}\right)$$




Luckily we get two different solutions,




But if the question is to find 5 different generalized inverses, how would one do that?




As we know, there are plenty of generalized inverses for this given matrix; different possible approaches are welcome.



Thanks in advance.



Answer



If $AGA=A$, then $A(G+uv^T)A=A$ if $u\in\ker A$ or $v\in\ker A^T$. Note that when $A$ is not a nonsingular matrix (this includes the case where $A$ is not square), at least one of $A$ or $A^T$ has a nonzero nullspace. Therefore, if you can find one generalised inverse of $A$, you can find infinitely many others if the field is infinite.



By the way, the two matrices that you claim to be generalised inverses of your example $A$ do not seem to be correct.
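One way to produce and test generalized inverses numerically (a NumPy sketch, not part of the original answer): the Moore-Penrose pseudoinverse always satisfies $AGA=A$, and, following the answer above, perturbing it by $uv^T$ with $u\in\ker A$ gives further generalized inverses. The particular vectors $u$ and $v$ below are arbitrary choices made for illustration.

    import numpy as np

    A = np.array([[ 1, 0, -1, 2],
                  [ 2, 0, -2, 4],
                  [-1, 1,  1, 3],
                  [-2, 2,  2, 6]], dtype=float)

    G = np.linalg.pinv(A)                 # the Moore-Penrose inverse is one generalized inverse
    print(np.allclose(A @ G @ A, A))      # True

    u = np.array([1.0, 0.0, 1.0, 0.0])    # a vector in ker(A)
    print(np.allclose(A @ u, 0))          # True
    v = np.array([1.0, 2.0, 3.0, 4.0])    # arbitrary
    G2 = G + np.outer(u, v)
    print(np.allclose(A @ G2 @ A, A))     # True: another generalized inverse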


calculus - How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?



How can one prove the statement
$$\lim_{x\to 0}\frac{\sin x}x=1$$

without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution.



This is homework. In my math class, we are about to prove that $\sin$ is continuous. We found out, that proving the above statement is enough for proving the continuity of $\sin$, but I can't find out how. Any help is appreciated.


Answer



[figure: the unit circle with triangle $ABC$, the circular wedge, and triangle $ABD$]



The area of $\triangle ABC$ is $\frac{1}{2}\sin(x)$. The area of the colored wedge is $\frac{1}{2}x$, and the area of $\triangle ABD$ is $\frac{1}{2}\tan(x)$. By inclusion, we get
$$
\frac{1}{2}\tan(x)\ge\frac{1}{2}x\ge\frac{1}{2}\sin(x)\tag{1}
$$

Dividing $(1)$ by $\frac{1}{2}\sin(x)$ and taking reciprocals, we get
$$
\cos(x)\le\frac{\sin(x)}{x}\le1\tag{2}
$$
Since $\frac{\sin(x)}{x}$ and $\cos(x)$ are even functions, $(2)$ is valid for any non-zero $x$ between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. Furthermore, since $\cos(x)$ is continuous near $0$ and $\cos(0) = 1$, we get that
$$
\lim_{x\to0}\frac{\sin(x)}{x}=1\tag{3}
$$
Also, dividing $(2)$ by $\cos(x)$, we get that
$$

1\le\frac{\tan(x)}{x}\le\sec(x)\tag{4}
$$
Since $\sec(x)$ is continuous near $0$ and $\sec(0) = 1$, we get that
$$
\lim_{x\to0}\frac{\tan(x)}{x}=1\tag{5}
$$


complex analysis - Why is $\sin x$ the imaginary part of $e^{ix}$?

Most of us who are studying mathematics are familiar with the famous $e^{ix}=\cos(x)+i\sin(x)$. Why is it that we have $e^{ix}=\cos(x)+i\sin(x)$ and not $e^{ix}=\sin(x)+i\cos(x)$? I haven't studied Complex Analysis to know the answer to this question. It pops up in Linear Algebra, Differential Equations, Multivariable Equations and many other fields. But I feel like textbooks and teachers just expect us students to take it as given without explaining it to a certain extent. I also couldn't find any good article that explains this.

Tuesday 19 June 2018

limits - How to compute $\lim_{x\to 0}\frac{\sin(x^2+x)-x}{x^2}$ without L'Hospital's Rule or series?

I came across this problem in an examination for beginners in Calculus:




Compute $\displaystyle \lim_{x\to 0}\frac{\sin(x^2+x)-x}{x^2}$.





It is easy to show that the limit is $1$ by using series or L'Hospital's Rule. But the examination is aimed to students that know only the basics of Calculus (derivatives of power functions and trigonometric functions, Chain Rule, Mean Value Theorem basically).



How to compute this limit by elementary means?
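Not a proof, but a quick numerical check (a Python sketch, not part of the original post) supporting that the limit is $1$:

    import math

    for x in (1e-2, 1e-3, 1e-4, 1e-5):
        print(x, (math.sin(x * x + x) - x) / (x * x))   # values approach 1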

functional analysis - Operator norm and tensor norms



I have a linear operator $A\in\mathcal{L}(X,Y)$ where $X$ and $Y$ are some Banach spaces (or Hilbert spaces would also do, if that simplifies the answer.). The operator norm of $A$ is given by

$$
\|A\|=\sup_{x\in X} \|Ax\|_Y/\|x\|_X.
$$
Now, if the operator is of finite rank (makes things a little easier), I can view it as an element of the tensor product space $X^*\otimes Y$, where $X^*$ is the continuous dual of $X$, such that
$$
A=\sum_{i=1}^n u_i^*\otimes v_i
$$
with $u_i^*\in X^*$ and $v_i\in Y$. Now, there are quite a few tensor norms that I can define on $X^*\otimes Y$ in order to complete the space, e.g. the projective norm
$$
\epsilon(A)=\inf\left(\sum_{i=1}^n \|u_i^*\|_{X^*} \|v_i\|_Y\right)

$$
where the infimum goes over all such decompositions, and (as I take from R. Ryan, Introduction to Tensor Products of Banach Spaces) 13 more norms you can sensibly define on a tensor product space. Now my question is: is there any of these tensor norms that coincides with the operator norm? The only thing I could show is that the projective norm is always larger than or equal to the operator norm. However, that doesn't help me much; I would need equality.


Answer



Your question is a very natural one (if I understand it correctly) and at the same time it raises a rather difficult question.



As you say, it makes sense to identify the operators $A : X \to Y$ of finite rank with elements of $X^{\ast} \otimes Y$. Now you want the operator norm of $A$ to coincide with its norm in $X^{\ast} \otimes Y$ with respect to some tensor norm. I leave it to you to check that the injective tensor norm defined for $\omega = \sum x_{i}^{\ast} \otimes y_{i}$ by
$$\Vert \omega \Vert_{\varepsilon} = \sup_{\substack{\Vert \phi \Vert_{X^{\ast\ast}} \leq 1 \\\ \Vert \psi \Vert_{Y^{\ast}} \leq 1}}{\left\vert \sum \phi(x_{i}^{\ast}) \, \psi(y_{i})\right\vert}$$
does what you want (it is independent of the representation of $\omega$ as a finite sum of elementary tensors and $\|A\| = \|A\|_{\varepsilon}$ for operators of finite rank). Edit: To see the second claim, use Goldstine's theorem that allows you to replace the supremum over $\phi \in X^{\ast\ast}$ with $\Vert\phi\Vert_{X^{\ast\ast}} \leq 1$ by the supremum over $\operatorname{ev}_{x}$ with $x \in X$ and $\Vert x \Vert_{X} \leq 1$.



The projective tensor norm of a finite rank operator is usually much larger than its operator norm (see the discussion on pp.41ff in Ryan, for example).




Given this, we can identify the completion $X^{\ast} \otimes_{\varepsilon} Y$ of $X^{\ast} \otimes Y$ with respect to the injective tensor norm with a space $K_{0}(X,Y) \subset L(X,Y)$ of operators $X \to Y$ and we will freely do so from now on. Note that $K_{0}(X,Y)$ is nothing but the closure of the operators of finite rank in $L(X,Y)$.



Now the question is: What are the operators lying in $K_{0}(X,Y)$ ?



This is really difficult and I'll outline the closest I know to an answer to that.



As a first observation note that the compact operators $K(X,Y) \subset L(X,Y)$ are a closed subspace of $L(X,Y)$ containing $X^{\ast} \otimes_{\varepsilon} Y = K_{0}(X,Y)$. Looking at the examples of Hilbert spaces or the classical Banach spaces one finds out that quite often $K(X,Y) = K_{0}(X,Y)$ holds. However, it may fail in general, and that's where the famous Approximation Property comes in. I'll refrain from delving into the numerous equivalent formulations and use it as a black box. We have the following theorem due to Grothendieck:





Theorem. The Banach space $X^{\ast}$ has the approximation property if and only if $K_{0}(X,Y) = K(X,Y)$ holds for all Banach spaces $Y$.




Edit 2: (in response to a comment of the OP) It follows that for a reflexive Banach space $X$ with the approximation property we have $K(X,Y) = X^{\ast} \otimes_{\varepsilon} Y$ for all Banach spaces $Y$.



Now most of the Banach spaces you'll run into have the approximation property, e.g. $L^{p}$, $C(X)$ and so on. However, P. Enflo (in a veritable tour de force) has shown that there exist Banach spaces failing the approximation property. An explicit example (identified by Szankowski) is the space $L(H,H)$ of bounded operators on a separable Hilbert space $H$. Note that this space is the dual space of the trace class operators. A famously open question is whether the space $H^{\infty}(D)$ of bounded holomorphic functions on the open unit disk has the approximation property.



I hope this answers your question. The approximation property is discussed in detail in any book that treats the tensor products of Banach spaces. In particular, this is well treated in Ryan's book.


algebra precalculus - Is $pi = 4$ really?

Can anyone explain what's wrong with this?



[figure: the well-known picture in which the corners of a square circumscribing a circle are repeatedly folded inward, suggesting that the perimeter stays $4$ while the shape approaches the circle]

Monday 18 June 2018

calculus - On the equivalence of two definitions for summations over the integers



If $a_k\in\mathbb{C}$, then define
$$

\sum_{k=-\infty}^{\infty}a_k=L\\
\Updownarrow\\\forall\epsilon>0,\exists N,m,n> N\implies\left|\sum_{k=-m}^na_k-L\right|<\epsilon
$$



Question: Is it true that




$$
\sum_{k=-\infty}^{\infty}a_k=L\\
\Updownarrow\\

\sum_{k=0}^{\infty}a_k\text{ and }\sum_{k=1}^{\infty}a_{-k}\text{ both exist and }\sum_{k=0}^{\infty}a_k+\sum_{k=1}^{\infty}a_{-k}=L
$$




Thoughts:$[\Downarrow]$ Couldn't come up with something useful. Tried to show that both series are Cauchy. Failed.



$[\Uparrow]$ Let
$$
\alpha:=\sum_{k=0}^{\infty}a_k\\
\beta:=\sum_{k=1}^{\infty}a_{-k}

$$
Given $\epsilon>0$, there exist $N_1,N_2$ such that
$$
\left|\sum_{k=0}^na_k-\alpha\right|<\frac{\epsilon}{2}\\
\left|\sum_{k=1}^ma_{-k}-\beta\right|<\frac{\epsilon}{2}
$$
whenever $n>N_1$ and $m>N_2$. Hence if $N:=\max\{N_1,N_2\}$ then
\begin{align}
\left|\sum_{k=-m}^na_k-L\right|&=\left|\sum_{k=0}^na_k-\alpha+\sum_{k=1}^ma_{-k}-\beta\right|\\
&\leq\left|\sum_{k=0}^na_k-\alpha\right|+\left|\sum_{k=1}^ma_{-k}-\beta\right|\\
&<\frac{\epsilon}{2}+\frac{\epsilon}{2}\\
&=\epsilon
\end{align}
whenever $m,n>N$.


Answer



I assume you meant (adding quantors for $m,n$):



$$\forall\epsilon>0,\exists N,\forall m > N, \forall n> N\implies\left|\sum_{k=-m}^na_k-L\right|<\epsilon.$$



The [$\Uparrow$] case is OK. For [$\Downarrow$], let $N(\epsilon)$ denote a suitable $N$ for $\epsilon$ in the above condition. If $n_2 > n_1 > N(\frac{\epsilon}{2})$, then $$\left|\sum_{n=n_1+1}^{n_2} a_n\right| = \left|\sum_{n=-n_1}^{n_2} a_n - \sum_{n=-n_1}^{n_1} a_n\right| = \left|\left(\sum_{n=-n_1}^{n_2} a_n -L \right) - \left(\sum_{n=-n_1}^{n_1} a_n - L\right)\right| \le \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$$




This means $\sum_{n=0}^{\infty} a_n$ is a Cauchy series. The proof for the negative part being Cauchy is done just the same.



Proving that $L$ is now the sum of the limits of the positive and negative parts works just as in the [$\Uparrow$] part.



Adding remarks due to comment from OP below:
That argument seems to work as well, but I was literally looking at your prove for $[\Uparrow]$. You start there by defining $\alpha$ and $\beta$. In that $[\Uparrow]$ proof that they exist is the assumption, in the $[\Downarrow]$ proof this was proven by them being Cauchy series'. Then you choose $\epsilon$ and based on that $N_1,N_2$. Then you use $L$ for the one and only time, but it is simply used as $L= \alpha + \beta$. So you proved that the $L$ defined that way fits your definition for "sum of series from $-\infty$ to $\infty$". Since that value is 'obviously' unique if it exists at all, you have now proven (in the $[\Downarrow]$ proof) that the value $L$ (which we assumed existed) must be equal to $\alpha + \beta$, which what was left to show.


integration - How to solve $\int_{-\infty}^{\infty} \frac{\sin(t)}{t^2+1}\, dt$?



I'm considering here the fact that
$$\lim\limits_{R\to\infty} \int_{\Gamma_R} \frac {e^{iz}}{z^2+1} dz=0$$
, where $\Gamma$ is a contour defined as a semicircle centred about the origin, of radius $R>1$, in the upper half-plane of $\mathbb{C}$, positively oriented.




Now, I think, one could express $\int\limits_{-\infty}^{\infty} \frac {\sin(t)}{t^2+1} dt$ as $$\lim\limits_{R\to\infty} \int_{\Gamma_R} \frac {\sin(t)}{t^2+1} dt$$ But here's where I don't know what to do next. Would appreciate a hint.



On a side note,
$$\int\limits_{-\infty}^\infty \frac {e^{iz}}{z^2+1} dz\ne 0$$
, so I'm not sure what I'm not seeing here because $$\lim\limits_{R\to\infty} \int_{\Gamma_R} \frac {e^{iz}}{z^2+1} dz=0$$, so where's the analogy?


Answer



The integrand is an odd function of $t$ and the integral is absolutely convergent, so
$$\int_{-\infty}^{\infty}\frac{\sin t}{t^2+1}dt=0$$
EDIT: Writing out the theorem longhand,

$$\begin{align}\int_{-\infty}^{\infty}\frac{\sin t}{t^2+1}dt&=\lim_{M\rightarrow-\infty}\int_M^0\frac{\sin t}{t^2+1}dt+\lim_{N\rightarrow\infty}\int_0^N\frac{\sin t}{t^2+1}dt\\
&=\lim_{M\rightarrow-\infty}\int_{-M}^0\frac{\sin(-u)}{u^2+1}(-du)+\lim_{N\rightarrow\infty}\int_0^N\frac{\sin u}{u^2+1}du\\
&=\lim_{M\rightarrow-\infty}-\int_0^{-M}\frac{\sin u}{u^2+1}du+\lim_{N\rightarrow\infty}\int_0^N\frac{\sin u}{u^2+1}du\\
&=-\int_0^{\infty}\frac{\sin u}{u^2+1}du+\int_0^{\infty}\frac{\sin u}{u^2+1}du=0\end{align}$$
Where we have used the substitution $u=-t$ in the left integral and $u=t$ in the right integral. Notice that all that was required for this proof to work was that the integrand was an odd function, the interval of integration was $(-\infty,\infty)$, although an interval of $(-a,a)$ would have worked and that the function was Riemann integrable over the interval. This theorem can reduce some perfectly horrible-looking integrals to zero with relatively little effort.


calculus - Evaluating $\dfrac{1}{2 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty e^{tuv} e^{-u^2/2} e^{-v^2/2}\, du\, dv$



Solving a probability problem I came across this integral:




$$ \dfrac{1}{2 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty e^{tuv} e^{-u^2/2} e^{-v^2/2}\ du\ dv $$





Can you explain how to integrate this?


Answer



Hint. Assume $-1<t<1$. Integrate first with respect to $u$, using the standard Gaussian result,
$$
\int_{-\infty}^\infty e^{tuv} e^{-u^2/2} \ du=\sqrt{2\pi} \:e^{t^2v^2/2}
$$ then with respect to $v$,
$$
\int_{-\infty}^\infty e^{t^2v^2/2} e^{-v^2/2} \ dv=\int_{-\infty}^\infty e^{-(1-t^2)v^2/2} \ dv=\frac{\sqrt{2\pi}}{\sqrt{1-t^2}}

$$ obtaining




$$
\dfrac{1}{2 \pi} \int_{-\infty}^\infty \int_{-\infty}^\infty e^{tuv} e^{-u^2/2} e^{-v^2/2}\ du\ dv=\frac1{\sqrt{1-t^2}}.
$$
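A crude numerical confirmation (a Python sketch, not part of the original answer, with $t=0.5$ and a simple grid sum over a truncated square):

    import math

    t, h = 0.5, 0.02
    grid = [k * h for k in range(-400, 401)]        # covers [-8, 8] in steps of 0.02
    total = h * h * sum(math.exp(t * u * v - 0.5 * u * u - 0.5 * v * v)
                        for u in grid for v in grid)
    print(total / (2 * math.pi), 1 / math.sqrt(1 - t * t))   # both approximately 1.1547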



sequences and series - Is $\sum_{n=1}^{+\infty}\frac{(-1)^n \log n}{n!}$ a positive sum?




The series below is convergent by the ratio test, but I am not able to tell whether it has a positive sum, and I have not succeeded in finding a closed form. So my question is:




Question:
Is this : $\displaystyle\sum_{n=1}^{+\infty}\frac{(-1)^n \log n}{n!}$ a positive sum ?



Answer



For $n \geq 2$, the expression $\frac{\log n}{n!}$ is strictly decreasing, as $\frac{\log (n+1)}{(n+1)!} < \frac{\log n}{n!} \iff \log (n+1) < (n+1) \log n \iff \log_n (n+1) < n+1$, which is true, because $\log_n (n+1) < \log_n (n^2) = 2$. Therefore, $$\sum_{n=1}^\infty \frac{(-1)^n \log n}{n!}$$ is an alternating series with terms that strictly decrease in magnitude after the first nonzero term, so the series has the same sign as the first nonzero term, which is positive.
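A direct numerical evaluation of the partial sums (a Python sketch, not part of the original answer) agrees with the sign argument:

    import math

    total, fact = 0.0, 1
    for n in range(1, 30):
        fact *= n
        total += (-1) ** n * math.log(n) / fact
    print(total)   # approximately 0.210, which is positive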


Limit of $\frac{n^{1/2}}{5^{\log n}}$



I have to show that the limit of $\dfrac{n^{1/2}}{5^{\log n}}$ is equal to $0$ as $n$ goes to infinity.



I have seen in some cases people using logarithms on the numerator and the denominator and saying that the original fraction goes to infinity (or zero) if the logarithms go to infinity (or zero). Is this valid? And when does it not work? For example, in this case if I work with the logarithms it comes out to be a nonzero constant, so this approach is wrong.
L'Hospital, on the other hand, doesn't seem practical. Any ideas?



Answer



$$\log \dfrac{n^{1/2}}{5^{\log n}} = \frac{1}{2}\log n -\log 5 \log n \to -\infty$$



since $\frac{1}{2}< \log5$, then use the continuity of $\log(x)$ to conclude that $$\dfrac{n^{1/2}}{5^{\log n}} \to 0$$



Using logarithms respectively on the numerator and the denominator leads to possible mistakes, such as to say:



$\dfrac{\log \frac{1}{n}}{ \log \frac{1}{n^2}} \to \frac{1}{2}$, so $\dfrac{\frac{1}{n}}{\frac{1}{n^2}} \to e^{\frac{1}{2}}$, which is obviously wrong.
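A quick numerical look (a Python sketch, not part of the original answer, taking $\log$ to be the natural logarithm):

    import math

    for n in (10, 10**3, 10**6, 10**9):
        print(n, math.sqrt(n) / 5 ** math.log(n))   # tends to 0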


Sunday 17 June 2018

reference request - Is there a way to prove that a theorem has no elementary proof? Or to prove that something may have no proof?




Recently I was trying to prove something, more or less elementarily, but eventually started going in circles. My prof said that the proof involves mathematical tools that I've not seen yet, and that it's unlikely there is an elementary proof.



Which got me wondering: would there be a way to prove that there is no elementary proof to a particular theorem, or even a particular kind of theorem? (other than trivial cases such as theorems already involving non-elementary maths)



So, I guess, to rephrase: a way to prove that theorem $X$ can be proved using mathematical tools that are not higher than those needed to state $X$ in the first place?



Or how about theorems that are true but maybe have no proof; can this be known? Can we prove that a theorem may or may not be true, but would have no proof if it were?







I guess this is more metamathematics than mathematics, as it would involve theorems about the properties of theorems.... is there a branch of math that studies this sort of thing? Is this what "proof theory" is?



If there is such a subject: any book recommendations that would be accessible to someone in his 2nd undergrad year? This subject is interesting to me.


Answer



I don't know of any way to prove that a theorem can or cannot be proven by "elementary" methods, and without giving a rigorous definition to what "elementary" means I doubt one would be able to proceed in this direction.



What I do know, is that you should look up Gödel's incompleteness theorems. The branch of math that these sorts of things are studied under is logic. An excellent book on this topic is Gödel, Escher, Bach. It is appropriate for an undergrad, indeed it was written as a popular book and assumes very little mathematical knowledge, although the more math you understand the deeper some parts of the book will seem.



I caution you, the book will seem fairly abstruse at times, and indeed it is a tome (upwards of 700 pages). The part you are interested in is about half way through. I would recommend taking your time. Once you realize how interwoven the content really is with the presentation, you will realize (perhaps) how dense the book is and how much you may have missed. I Think Surely Most Everything Truly Abstracts.*




As a gentler introduction, I would recommend listening to this podcast of Radiolab on loops. It's a great show in general, but if you want to just hit the part about Gödel it is 38 minutes in.



Unprovability, in mathematics, is called independence. One method commonly used to prove independence is called forcing. From the wikipedia article, I see that it is covered in Set Theory: An Introduction to Independence Proofs, by Kenneth Kunen. I have not read this book, but from the online preview and the comments on the wikipedia page I would have to say that it requires a strong mathematical background.


How Many Natural Numbers Can Get Expressed as the Sum of Consecutive Natural Numbers in only One Way?








Some natural numbers can be expressed as a sum of consecutive natural numbers in more than one way. For example, $7$ can get expressed both as $7$, and $(3+4).$ In terms of a sum of consecutive numbers, $4$ and $8$ can only get expressed as $4$, and $8$ respectively. Call such numbers consecutive-primes. How many consecutive-primes exist? Given all previous consecutive-primes, is there a way to compute the next consecutive-prime?
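A brute-force enumeration of the small cases (a Python sketch, not part of the original post; it counts representations as sums of one or more consecutive positive integers) suggests that the numbers with a unique representation are exactly the powers of $2$:

    def representations(n):
        count = 0
        for start in range(1, n + 1):
            total, k = 0, start
            while total < n:
                total += k
                k += 1
            count += (total == n)
        return count

    print([n for n in range(1, 100) if representations(n) == 1])   # [1, 2, 4, 8, 16, 32, 64]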

Saturday 16 June 2018

limits - Polylogarithm grows slower than polynomial proof

In the CLRS book, there's a part where it's shown that $$\lim_{n\to\infty}\frac{n^b}{a^n} = 0.$$ In the same chapter, it uses the aforementioned equation to prove that any polylogarithmic function grows slower than any polynomial one, that is, $$\lim_{n\to\infty}\frac{\lg^b n}{n^a} = 0.$$ It does that by substituting $\lg n$ for $n$ and $2^a$ for $a$ in the first equation. How is it allowed to substitute the terms and prove the latter equation?
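A worked version of the substitution being asked about (a sketch added here, not part of the original post): with $m = \lg n$, so that $m\to\infty$ as $n\to\infty$, and $n^{a} = 2^{a\lg n} = (2^{a})^{m}$,
$$\frac{\lg^{b} n}{n^{a}} \;=\; \frac{(\lg n)^{b}}{2^{a \lg n}} \;=\; \frac{m^{b}}{(2^{a})^{m}}\;\longrightarrow\;0 \quad (m\to\infty),$$
which is exactly the first limit with $n$ replaced by $m=\lg n$ and $a$ replaced by $2^{a}$.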

Friday 15 June 2018

Limit of a sequence involving root of a factorial: $\lim_{n \to \infty} \frac{n}{\sqrt[n]{n!}}$




I need to check if
$$\lim_{n \to \infty} \frac{n}{ \sqrt [n]{n!}}$$ converges or not. Additionally, I wanted to show that the sequence is monotonically increasing in n and so limit exists. Any help is appreciated. I had tried taking log and manipulating the sequence but I could not prove monotonicity this way.


Answer



Use Stirling's approximation:
$ n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n $
and you'll get
$$
\lim_{n \rightarrow \infty} \frac{n}{(n!)^{1/n}}
=\lim_{n \rightarrow \infty} \frac{n}{(\sqrt{2 \pi n} \left(\frac{n}{e}\right)^n)^{1/n}}

=\lim_{n \rightarrow \infty} \frac{n}{({2 \pi n})^{1/2n} \left(\frac{n}{e}\right)}
=\lim_{n \rightarrow \infty} \frac{e}{({2 \pi n})^{1/2n} }=e,
$$
because $\lim_{n\to \infty} ({2 \pi n})^{1/2n}= \lim_{n\to \infty} n^{1/n}=1$.
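A numerical check (a Python sketch, not part of the original answer; $(n!)^{1/n}$ is computed through the log-gamma function to avoid overflow):

    import math

    for n in (10, 100, 1_000, 10_000, 100_000):
        root = math.exp(math.lgamma(n + 1) / n)   # (n!)^(1/n)
        print(n, n / root)                        # approaches e, about 2.71828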


Thursday 14 June 2018

real analysis - Show $p< -\log(1-p)$ for $0<p<1$

Show $$p< -\log(1-p)$$ for $0<p<1$. I thought about using a Taylor expansion; is that the way to go?
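One possible route, as asked (a sketch added here, not part of the original post, using the logarithm's series rather than a full Taylor-remainder argument): for $0<p<1$,
$$-\log(1-p) \;=\; p + \frac{p^{2}}{2} + \frac{p^{3}}{3} + \cdots \;>\; p,$$
since every term after the first is strictly positive.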

Wednesday 13 June 2018

reference request - how do we know that integral is non-elementary?











Is there a condition that tells us whether an indefinite integral is non-elementary?


Answer



There is a decision procedure called the Risch algorithm that will either tell you that the integral is non-elementary, or produce an elementary anti-derivative. It is not an easy algorithm to execute, or even to implement in a computer algebra system (although the latter has been done), so there is no hope of finding an easy condition for the existence of an anti-derivative.


linear algebra - Characteristic polynomial of a matrix 7x7?




Avoiding too many steps, what is the characteristic polynomial of this $7\times 7$ matrix? And why?



\begin{pmatrix}
5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\end{pmatrix}


Answer



As it was stated in the commentaries, the rank of this matrix is $1$; so it will have $6$ null eigenvalues, which means the characteristic polynomial will be in the form:



$p(\lambda)=\alpha\,\lambda^6(\lambda-\beta) = \gamma_6\,\lambda^6 +\gamma_7\,\lambda^7$




Using Cayley-Hamilton:



$p(A)=\gamma_6\,A^6+\gamma_7\,A^7 =0$



Any power of this matrix will have the same format, a positive value for all elements.



$B=\begin{bmatrix}1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\end{bmatrix}$



$A = 5\,B$




$A^2 = 5^2\,7\,B$



$...$



$A^6 = 5^6\,7^5\,B$



$A^7=5^7\,7^6\,B$



$p(A) = (\gamma_6+35\,\gamma_7)\,B=0\Rightarrow\gamma_6=-35\gamma_7$




So we have: $\alpha=\gamma_7$ and $\beta = 35$



$p(\lambda)=\alpha\,\lambda^6(\lambda-35)$
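A numerical confirmation (a NumPy sketch, not part of the original answer):

    import numpy as np

    A = 5 * np.ones((7, 7))
    print(np.round(np.linalg.eigvals(A), 8))   # one eigenvalue 35, six numerically-zero eigenvalues
    print(np.poly(A))                          # coefficients of lambda^7 - 35 lambda^6, up to rounding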


probability theory - Weak Law of Large Number and Truncated Mean




I am working on this question:




Let $X_{1},\cdots, X_{n}$ be i.i.d with $P(X_{i}=(-1)^{k}k)=1/(c_{0}k^{2}\log k)$ for $k\geq 2$, where $c_{0}$ is a normalizer. Prove that $E|X_{1}|=\infty$ and yet there exists a constant $\mu<\infty$ such that $$\dfrac{1}{n}\sum_{i=1}^{n}X_{i}\longrightarrow\mu,\ \text{in distribution}.$$




I got a solution but I cannot understand it. The solution argued as follows:





Firstly we have $$P(|X_{i}|>n)<\sum_{k=n+1}^{\infty}\dfrac{1}{c_{0}k^{2}\log k}\leq \dfrac{1}{c_{0}n\log n},$$ so we have $nP(|X_{i}|>n)\longrightarrow 0$, and thus the weak law can be applied.



$$E|X_{i}|=\sum_{k=2}^{\infty}\dfrac{1}{c_{0}k\log k}=\infty,$$ but the truncated mean $\mu_{n}=EX_{i}\mathbb{1}_{(|X_{i}|\leq n)}=\sum_{k=2}^{n}(-1)^{k}\dfrac{1}{c_{0}k\log k}\longrightarrow \sum_{k=2}^{\infty}(-1)^{k}\dfrac{1}{c_{0}k\log k},$ since the latter is an alternating series with decreasing terms for $k\geq 3.$




I have following confusions:



(1) I only have $P(X_{i}=(-1)^{k}k)$, how does it calculate the $P(|X_{i}|)>n$ and $E|X_{i}|$? and why $$\sum_{k=n+1}^{\infty}\dfrac{1}{c_{0}k^{2}\log k}\leq \dfrac{1}{c_{0}n\log n}?$$



(2) Does the weak law have anything to do $E|X_{i}|=\infty$? or $E|X_{i}|=\infty$ can be concluded directly from that infinity sum? if so, why?




(3) I understand that the weak law implies that $$\dfrac{1}{n}\sum_{i=1}^{n}X_{i}-\mu_{n}\longrightarrow 0\ \text{in distribution,}$$ and I understand that the solution implies the desired $\mu$ is the infinite sum that $\mu_{n}$ converges to, but how could I connect $\mu$ with $\mu_{n}$ in the weak law? Can I do the following: $$\dfrac{1}{n}\sum_{i=1}^{n}X_{i}-\mu_{n}\longrightarrow_{p} 0,$$ but $\mu_{n}\longrightarrow \mu<\infty$, so $$\dfrac{1}{n}\sum_{i=1}^{n}X_{i}\longrightarrow_{p} \mu?$$



By the way, here is the weak law related to the truncated mean:




Show that if $X_{1},\cdots, X_{n}$ are i.i.d and $\lim_{k\rightarrow\infty}kP(|X|>k)=0$, then if we let $S_{n}=X_{1}+\cdots+X_{n}$ and let $\mu_{n}=E[X_{1}\mathbb{1}_{(|X_{1}|\leq n)}]$, then we have $$S_{n}/n-\mu_{n}\longrightarrow_{p} 0.$$




Thank you!




Edit 1:



Okay I think I figure it all out, and the proof I am going to present basically follows the answer of kasa, I just add more technical but non-trivial details:



Proof:



For brevity, set $S_{n}:=X_{1}+\cdots+X_{n}$. For me to type Latex easier, let us set $C:=\dfrac{1}{c_{0}}.$



Let us firstly deal with $E|X_{1}|$. We have the following calculation: $$E|X_{1}|=\sum_{k=2}^{\infty}kP(|X_{1}|=k)=\sum_{k=2}^{\infty}\dfrac{Ck}{k^{2}\log k}=C\sum_{k=2}^{\infty}\dfrac{1}{k\log k},$$ and I claim that $\sum_{k=2}^{\infty}\dfrac{1}{k\log k}=\infty$.




Indeed, if we do a change of variable $u=\log (x)$ to the following integral, we have $$\int_{2}^{\infty}\dfrac{1}{x\log x}dx=[\log(\log x)]_{2}^{\infty}=\lim_{b\rightarrow\infty}\log(\log(b))-\log (2)=\infty.$$



Therefore, it follows from the integral test that $\sum_{k=2}^{\infty}\dfrac{1}{k\log k}=\infty$.



Thus, $E|X_{1}|=\infty$.



Now, for the second part of this question, denote the truncated mean by $$\mu_{n}:=E(X_{i}\mathbb{1}_{(|X_{i}\leq n)})=\sum_{k=2}^{n}(-1)^{k}\dfrac{C}{k\log k}.$$



Then, notice that $$\mu_{n}\longrightarrow \sum_{k=2}^{\infty}(-1)^{k}\dfrac{C}{k\log k}<\infty,$$ since the latter one is actually an alternating sum with decreasing terms for $k\geq 3$.




Denote the limit of $\mu_{n}$ to be $\mu$, and we will show that $\mu$ is the desired constant we need. Firstly, we know that $\mu<\infty$ by above argument.



Then, since $\mu_{n}\longrightarrow\mu$, I claim that $\mu_{n}\longrightarrow_{p}\mu$. Indeed, since $\mu_{n}\longrightarrow\mu$, the set $$\mathcal{O}:=\{\omega:\lim_{n\rightarrow\infty}\mu_{n}\neq \mu\},$$ has $P(\mathcal{O})=0$.



Now, fix $\epsilon>0$, then define the sets $$A_{n}:=\bigcup_{m\geq n}\{|\mu_{m}-\mu|>\epsilon\},\ \text{and}\ A:=\bigcap_{n\geq 1}A_{n},$$ which is clearly to see that $A_{n}\searrow A$, and thus $P(A)=\lim_{n\rightarrow\infty}P(A_{n})$ by continuity.



On the other hand, for $\omega_{0}\in\mathcal{O}^{c}$, by definition it means $|\mu_{n}(\omega_{0})-\mu(\omega_{0})|<\epsilon$ for all $n>N$ for some $N$. Therefore, for all $n\geq N$, $\omega_{0}\notin A_{n}$ and thus $\omega_{0}\notin A$. Therefore, $A\cap \mathcal{O}^{c}=\varnothing$, which implies $A\subset\mathcal{O}$. Thus, by monotonicity, we have $P(A)=0$, and thus $\lim_{n\rightarrow\infty}P(A_{n})=0$.



Finally, we can see that $P(|\mu_{n}-\mu|>\epsilon)\leq P(A_{n})\longrightarrow 0$, and thus $\mu_{n}\longrightarrow_{p}\mu$.




The point here is that pointwise convergence can certainly imply almost sure convergence. Then the proof is exactly the same as almost sure convergence implying convergence in probability.



On the other hand, we have the following computation:
\begin{align*}
P(|X_{i}|>n)&=\sum_{k=n+1}^{\infty}\dfrac{C}{k^{2}\log k}\leq C\sum_{k=n+1}^{\infty}\dfrac{1}{k^{2}\log n}\\
&=\dfrac{C}{\log n}\sum_{k=n+1}^{\infty}\dfrac{1}{k^{2}}\leq\dfrac{C}{\log n}\sum_{k=n+1}^{\infty}\dfrac{1}{k(k-1)}\\
&\leq\dfrac{C}{n\log n},
\end{align*}

which implies that $$nP(|X_{i}|>n)\longrightarrow 0,\ \text{as}\ n\longrightarrow\infty.$$




Therefore, by the weak law of large number, we have $$S_{n}/n-\mu_{n}\longrightarrow_{p} 0.$$



To conclude the proof, we need the following lemma. I will put the proof of lemma in the end.




Lemma: If we have two random variable $X_{n}$ and $Y_{n}$ such that $X_{n}\longrightarrow_{p} a$ and $Y_{n}\longrightarrow_{p} b$ for some constant $a$ and $b$, then $$X_{n}+Y_{n}\longrightarrow_{p} a+b.$$




Now, by this lemma, we can set $X_{n}:=S_{n}/n-\mu_{n}$ and $Y_{n}:=\mu_{n}$, then we have $$S_{n}/n=X_{n}+Y_{n}\longrightarrow_{p}0+\mu=\mu,\ \text{as desired.}$$




Proof of lemma:



Let $\epsilon>0$, then consider $P(|X_{n}+Y_{n}-a-b|\geq\epsilon)$. Then, using the hypothesis that $X_{n}\longrightarrow_{p}a$ and $Y_{n}\longrightarrow_{p}b$, we have the following computation:
\begin{align*}
P(|X_{n}+Y_{n}-a-b|\geq\epsilon)&=P(|(X_{n}-a)+(Y_{n}-b)|\geq\epsilon)\\
&\leq P(|X_{n}-a|\geq\epsilon/2\ \text{or}\ |Y_{n}-b|\geq\epsilon/2)\\
&\leq P(|X_{n}-a|\geq\epsilon/2)+P(|Y_{n}-b|\geq \epsilon/2)\longrightarrow 0+0=0\ \text{as}\ n\rightarrow\infty.
\end{align*}




Therefore, $X_{n}+Y_{n}\longrightarrow_{p} a+b.$



The only part I am still doubting is that $\mu_{n}$ is not a random variable, it is just a sequence of number, so I don't know if the convergence in probability even makes sense to $\mu_{n}$. It will be really appreciated if anyone can clarify my confusion here :)



Let me express my appreciation to kasa for his/her patience and brilliant answer. Please let me know if you think anything can be improved or reined in my proof :)


Answer



HINT to part 1:



$P(|X_i|>n)$ is straight-forward - sum of all the probabilities that $|X_i|$ is greater than $n$, which in this case, is equal to $\sum_{k=n+1}^{\infty}\dfrac{2}{c_{0}k^{2}\log k}$.




Now, we can use the following bound
\begin{align}
\sum_{k=n+1}^{\infty}\dfrac{1}{k^{2}} & \le \sum_{k=n+1}^{\infty}\dfrac{1}{k(k-1)} = \left(\dfrac{1}{n} - \dfrac{1}{n+1}\right) + \left(\dfrac{1}{n+1} - \dfrac{1}{n+2}\right) + \dots \\
& = \dfrac{1}{n}
\end{align}



This implies $\sum_{k=n+1}^{\infty}\dfrac{1}{k^{2}\log k} \le \dfrac{1}{\log (n+1)}\sum_{k=n+1}^{\infty}\dfrac{1}{k^{2}} < \dfrac{1}{n \log n}$



HINT to part 3:
Use the property:





If $X_n \longrightarrow_{p} x$ and $Y_n \longrightarrow_{p} y$, then
$X_n + Y_n \longrightarrow_{p} x+y$




Edit: The expectation $\sum \dfrac{1}{k\log k}$ goes to infinity. Please check this proof.


Tuesday 12 June 2018

real analysis - divergence of $\sum_{n=3}^\infty \frac{\sqrt{n}+2}{n-2}$: verification / alternative method



I wish to prove divergence of
$$\sum_{n=3}^\infty \frac{\sqrt{n}+2}{n-2}$$




I wish to do so by comparison, since $n\geq 3$:
$$\sum_{n=3}^\infty \frac{\sqrt{n}+2}{n-2} > \sum_{n=3}^\infty \frac{1+2}{n-2}>\sum_{n=3}^\infty \frac{3}{n}>\sum_{n=3}^\infty \frac{1}{n} \rightarrow \infty$$
And the harmonic series is divergent, so if we just remove finitely many terms, we still have that it is divergent, because divergence is determined "in the tail". We have a divergent minorant series and hence the original series diverges to $\infty$.



Is this approach fine, or is there some more elegant method, this was about the simplest thing I could think of.






Alternatively we have:
$$\sum_{n=3}^\infty \frac{\sqrt{n}+2}{n-2} > \sum_{n=3}^\infty \frac{\sqrt{n}+2}{n}=\sum_{n=3}^\infty \frac{1}{\sqrt{n}}+ \frac{2}{n}\rightarrow \infty$$



Answer



Not clear what your question is, but your answer is correct.



Your approach is fine. Comparison test would be the proper test to use.



One can also show that $$\sum _{n=3}^{\infty \:}\frac{\sqrt{n}+2}{n-2}\ge \sum _{n=3}^{\infty \:}\frac{\sqrt{n}+2}{n}$$



and show that the rightmost sum is diverging via the integral test.


contest math - Help with inequality problem





Given $a$ , $b$ , $c \ge 0$ show that
$$\frac{a^2}{(a+b)(a+c)} + \frac{b^2}{(b+a)(b+c)}+ \frac{c^2}{(c+a)(c+b)} \ge \frac{3}{4}.$$




I tried using Titu's lemma on it, resulting in



$$\frac{a^2}{(a+b)(a+c)}+\frac{b^2}{(b+a)(b+c)}+ \frac{c^2}{(c+a)(c+b)}\ge \frac{(a+b+c)^2}{a^2+b^2+c^2 + 3(ab + bc + ca)} $$




And I am stuck here.


Answer



By C-S $$\sum_{cyc}\frac{a^2}{(a+b)(a+c)}\geq\frac{(a+b+c)^2}{\sum\limits_{cyc}(a+b)(a+c)}\geq\frac{3}{4},$$ where the last inequality is
$$4\sum_{cyc}(a^2+2ab)\geq3\sum_{cyc}\left(a^2+3ab\right)$$ or
$$\sum_{cyc}(a^2-ab)\geq0$$ or
$$\sum_{cyc}(2a^2-2ab)\geq0$$ or
$$\sum_{cyc}(a^2+b^2-2ab)\geq0$$ or $$\sum_{cyc}(a-b)^2\geq0.$$


proof writing - Prove that there is no rational number whose square is 12.



Let's assume that $12 = (\frac{p}{q})$, where p,q $\in$ $\mathbb{R}$ and $p$ and $q$ are coprime.




Then we have,



$(\frac{p^2}{q^2})= 12^2 = 144.$



So,



$p^2 = 144*q^2$



and $p^2 = 2*(72)*q^2.$




This implies that p is even.



Then,



$p=(2k)$



So,



$(2k)^2=2*(72)*q^2$




$4k^2 = 2*(72)*q^2$



$2k^2 = 72q^2$



Thus,



$k^2 = 36q^2$



so,




$k=6q$



then $(\frac{p}{q}) = (\frac{12q}{q})$,



which contradicts p and q being coprime. Therefore 12 is irrational. QED



I was wondering if the proof I've provided is sound.


Answer



Assume otherwise. Then there exist $p,q\in \Bbb N$ such that $\gcd\{p,q\}=1$ and $\frac{p}{q}=\sqrt{12}.$ From here $\frac{p^2}{q^2}=12$, which is equivalent to $p^2=12q^2=3(2q)^2.$ From here we deduce that $3|p,$ so $p=3k$ for some natural number $k.$ The equation is now $9k^2=3(2q)^2,$ or after simplifying, $3k^2=(2q)^2.$ Now we deduce that $3|2q,$ and hence $3|q.$ But this is a contradiction, because $3|p,$ so that $\gcd\{p,q\}\geq 3>1.$ Hence $\sqrt{12}$ is irrational.


algebra precalculus - Solve a simple equation: $x+xsqrt{(2x+2)}=3$




$x+x\sqrt{(2x+2)}=3$





I must solve this, but I always get to a point where I don't know what to do. The answer is 1.



Here is what I did:



$$\begin{align}
3&=x(1+\sqrt{2(x+1)}) \\
\frac{3}{x}&=1+\sqrt{2(x+1)} \\
\frac{3}{x}-1&=\sqrt{2(x+1)} \\
\frac{(3-x)^{2}}{x^{2}}&=2(x+1) \\

\frac{9-6x+x^{2}}{2x^{2}}&=x+1 \\
\frac{9-6x+x^{2}-2x^{2}}{2x^{2}}&=x \\
\frac{9-6x+x^{2}-2x^{2}-2x^{3}}{2x^{2}}&=0
\end{align}$$



Then I got:
$-2x^{3}-x^{2}-6x+9=0$


Answer



As in the comment, $x-1$ is a factor of $f(x) = -2x^3 -x^2-6x+9$. After an easy long division we get $f(x) = (x-1)(-2x^2-3x-9)$. From this we can use the quadratic formula (the discriminant is $9 - 72 = -63 < 0$) to see that the other two roots are not real.




We can do the long division in the following way:



We need to divide $-2x^3-x^2-6x+9$ by $x-1$ so we can ask what times $x-1$ gives the first term $-2x^3$? This is clearly $\boxed{-2x^2}$. But this also gives $+2x^2$ that we didn't want. So we need to take this away from what is left: we now have $-x^2-6x+9 - (+2x^2) = -3x^2-6x +9$. Again we look for a term that when multiplied with $x-1$ gives the highest order term ($-3x^2$). This is $\boxed{-3x}$ but this gives an extra $+3x$. Now we are left with $-6x+9 - (+3x) = -9x+9$. Here we see that this is $\boxed{-9}$ times $(x-1)$. Adding together the terms that we used along the way, we have $-2x^2-3x-9$.
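To double-check the factorisation, and that squaring introduced no extraneous root, a small sympy sketch does the job:

```python
# Verify the cubic's roots and check x = 1 in the original (unsquared) equation.
from sympy import symbols, solve, sqrt, simplify

x = symbols('x')
print(solve(-2*x**3 - x**2 - 6*x + 9, x))   # x = 1 together with two non-real roots
print(simplify(1 + 1*sqrt(2*1 + 2)))        # 1 + sqrt(4) = 3, so x = 1 satisfies the original equation
```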


elementary number theory - Extended Euclidean Algorithm As Row Operations

I am trying to find $\gcd (211,88)$ and $\gcd (-26400,63300)$ and the smallest linear combination that gives the Greatest Common Divisor (a.k.a. $\gcd$) .



I have been using the following algorithm



For $gcd(211,88)$ I got:




$$211=1(211)+0(88)\\88=0(211)+1(88)\\35=1(211)-2(88)\\18=-2(211)+5(88)\\17=3(211)-7(88)\\1=-5(211)+12(88)\\ 0=88(211)-211(88)$$



And the operations were:
$R_1-2R_2 , R_2-2R_3,R_3-R_4,R_4-R_5,R_5-17R_6$



What is the $\gcd$? The one before the $0$ row?



In the case of $\gcd(-26400,63300)$ or in the case that we have one or two negative numbers, how do we use the algorithm?
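Regarding the first question: the gcd is the last nonzero remainder, i.e. the row just before the $0$ row, so $\gcd(211,88)=1=-5(211)+12(88)$. The same row-operation bookkeeping can be coded in a few lines (a minimal sketch); negative inputs are handled by simply carrying the signs along, and if the resulting gcd comes out negative one just negates the whole row:

```python
# Extended Euclidean algorithm written as the row operations above:
# each row stores (remainder, coefficient of a, coefficient of b).
def extended_gcd(a, b):
    r0, s0, t0 = a, 1, 0
    r1, s1, t1 = b, 0, 1
    while r1 != 0:
        q = r0 // r1
        r0, s0, t0, r1, s1, t1 = r1, s1, t1, r0 - q*r1, s0 - q*s1, t0 - q*t1
    return r0, s0, t0   # r0 = s0*a + t0*b; the gcd is the last nonzero remainder

print(extended_gcd(211, 88))        # (1, -5, 12): 1 = -5*211 + 12*88
print(extended_gcd(-26400, 63300))  # here the last nonzero remainder is 300
```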

Sunday 10 June 2018

soft question - What does closed form solution usually mean?




This is motivated by this question and the fact that I have no access to Timothy Chow's paper What Is a Closed-Form Number? indicated there by
Qiaochu Yuan.



If an equation $f(x)=0$ has no closed form solution, what does it normally
mean? Added: $f$ may depend (and normally does) on parameters.



To me this is equivalent to saying that one cannot solve it for $x$ in
the sense that there is no elementary expression $g(c_{1},c_{2},\ldots
,c_{p})$ consisting only of a finite number of polynomials, rational

functions, roots, exponentials, logarithmic and trigonometric functions,
absolute values, integer and fractional parts, such that



$f(g(c_{1},c_{2},\ldots ,c_{p}))=0$.


Answer



I would say it very much depends on the context, and what tools are at your disposal. For instance, telling a student who's just mastered the usual tricks of integrating elementary functions that



$$\int\frac{\exp{u}-1}{u}\mathrm{d}u$$



and




$$\int\sqrt{(u+1)(u^2+1)}\mathrm{d}u$$



have no closed form solutions is just the fancy way of saying "no, you can't do these integrals yet; you don't have the tools". To a working scientist who uses exponential and elliptic integrals, however, they do have closed forms.



In a similar vein, when we say that nonlinear equations, whether algebraic ones like $x^5-x+1=0$ or transcendental ones like $\frac{\pi}{4}=v-\frac{\sin\;v}{2}$ have no closed form solutions, what we're really saying is that we can't represent solutions to these in terms of functions that we know (and love?). (For the first one, though, if you know hypergeometric or theta functions, then yes, it has a closed form.)



I believe it is fair to say that for as long as we haven't seen the solution to an integral, sum, product, continued fraction, differential equation, or nonlinear equation frequently enough in applications to give it a standard name and notation, we just cop out and say "nope, it doesn't have a closed form".


abstract algebra - Proving that either $x$ or $2x$ is a generator of the cyclic group $(\mathbb Z_3[x]/\langle f(x) \rangle)^*$




if $f(x)$ is a cubic irreducible polynomial over $\mathbb Z_3$, prove that either $x $ or $2x$ is a generator of the cyclic group $(\mathbb Z_3[x]/\langle f(x) \rangle)^*$



Attempt: $f(x) = \alpha x^3+ \beta x^2 + \gamma x + \rho~~|~~ \alpha ,\beta,\gamma,\rho \in \mathbb Z_3 $



$\implies \mathbb Z_3[x]/\langle f(x) \rangle = \{ax^2+bx+c~~|~~a,b,c \in \mathbb Z_3\}$



$\left|\left(\mathbb Z_3[x]/\langle f(x) \rangle\right)^*\right| = 3^3 - 1 = 26$



Hence, elements of $(\mathbb Z_3[x]/\langle f(x) \rangle)^*$ can have orders $1,2,13,26$.




If $x$ is a generator of the above group, then $x^{26}=1$, and then $(2x)^{26} = (-x)^{26}=1$. Hence, isn't $2x$ also a generator, contrary to what the question is asking?



How do I proceed ahead?



Thank you for your help.


Answer



I'll denote the image of $x$ in the quotient as $\bar{x}$.



It's clear that $\bar{x}$ doesn't have order 1, and similarly it doesn't have order 2 (because then $\bar x$ would be a root of $x^2 - 1 \in \Bbb{Z}_3[x]$, contradicting that the cubic $f(x)$ (which has $\bar x$ as a root) was irreducible).




Thus $\bar x$ has order 13 or 26.



Likewise, $2\bar x$ doesn't have order 2 as this would imply $\bar x$ had order 2. So $2\bar x$ also has order 13 or 26.



If $\bar x$ has order 26 we're done, so suppose $\bar x$ has order 13, so $\bar x^{13} = 1$. Then $(2\bar x)^{13} = 2\bar x^{13} = 2$, so $2\bar x$ must have order 26, and generates the group.
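To make the dichotomy concrete (a sketch, not needed for the proof), take a specific irreducible cubic, say $f(x)=x^3+2x+1$ over $\mathbb Z_3$ (it has no roots in $\mathbb Z_3$), and compute the orders of $\bar x$ and $2\bar x$ directly; for this $f$ one of the two has order $26$ and the other $13$:

```python
# Orders of x and 2x in Z_3[x]/<f(x)> for f(x) = x^3 + 2x + 1 (assumed irreducible: no roots in Z_3).
P = 3
F = [1, 2, 0, 1]                 # coefficients of 1 + 2x + 0x^2 + x^3

def mul(a, b):
    """Multiply two residues (coefficient lists [c0, c1, c2]) modulo f and modulo 3."""
    prod = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % P
    while len(prod) > 3:         # eliminate the x^4 and x^3 terms using f
        d = len(prod) - 1
        c = prod[d]
        for k, fk in enumerate(F):
            prod[d - 3 + k] = (prod[d - 3 + k] - c * fk) % P
        prod.pop()
    return prod

def order(g):
    acc, n = g[:], 1
    while acc != [1, 0, 0]:
        acc = mul(acc, g)
        n += 1
    return n

print(order([0, 1, 0]), order([0, 2, 0]))   # orders of x-bar and 2x-bar; exactly one of them is 26
```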


Why are the elements of a galois/finite field represented as polynomials?



I'm new to finite fields - I have been watching various lectures and reading about them, but I'm missing a step. I can understand what a group, ring field and prime field is, no problem.




But when we have a prime extension field, suddenly the elements are no longer numbers, they are polynomials. I'm sure there are some great mathematical tricks which show that we can (or must?) use polynomials to be able to satisfy the rules of a field within a prime extension field, but I haven't been able to find a coherent explanation of this step. People I have asked in person don't seem to know either; it's just assumed that that is the way it is.



So I have two questions:



What is a clear explanation of "why polynomials?".



Has anyone tried using constructs other than polynomials to satisfy the same field rules?



Thanks in advance.


Answer




In any ring, finite or infinite, we have two operations: $+$ and $\cdot$. The idea of a ring extension is this: let $R$ be a ring and $x$ be some element that we want to add to $R$ (maybe $R\subset S$ and $x\in S$, or $x$ is some formal element).




We need $R[x]$ to be closed under addition and multiplication.




This means that any element that can be formed by manipulating elements of the set $R\cup\{x\}$ by $+$ and $\cdot$ must be an element of $R[x]$.




Polynomials are a general way of expressing such manipulations.





An arbitrary polynomial in $x$ is a completely general manipulation of $x$ using only the operations valid in a ring. Moreover, any element of a ring can be expressed as a polynomial in terms of the generators of the ring.



Let's see how this works: start with some element $a\in R$. We can add or multiply by any other element of $R$, but this just gives us some $a'\in R$. Or we can multiply by $x$ any number of times to get $a'x^n$ for some $n$. And given different elements of this form, we can add them together to get a polynomial.



In many rings, because the elements $x$ satisfy non-trivial relations (e.g. in $\mathbb C=\mathbb R[i]$, $i^2+1=0$), there are neater ways to express elements, and we can avoid polynomials, even though they lurk in the background. In finite fields, polynomials happen to be the easiest and most intuitive way to express what's going on.
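A tiny concrete illustration (just one possible example): in $\mathbb F_4=\mathbb F_2[x]/(x^2+x+1)$ the elements are exactly the polynomials of degree less than $2$, namely $0,\ 1,\ x,\ x+1$, and multiplication is ordinary polynomial multiplication followed by the reduction $x^2=x+1$. For instance $x\cdot x=x^2=x+1$ and $x\cdot(x+1)=x^2+x=1$, so every nonzero element has an inverse, which is exactly what arithmetic modulo $4$ fails to give (there $2\cdot 2\equiv 0$).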


algebra precalculus - Simplify the expression $\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k}{k}$




Simplify the expression $\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k}{k}$



My attempt: Using the formula $\binom{n+1}{k}=\binom{n}{k}+\binom{n}{k-1}$



$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k-1}{k-1}+\binom{n+k}{k}$



=$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k-1}{k-1}+(\binom{n+k-1}{k}+\binom{n+k-1}{k-1})$




=$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +2\binom{n+k-1}{k-1}+\binom{n+k-1}{k}$



I can again use the same formula for the term $2\binom{n+k-1}{k-1}$, and in the next step for the term $3\binom{n+k-2}{k-2}$.



But I don't think this way the expression will get simplified.



Any help is appreciated.


Answer



First show that $\large{n\choose r}={{n-1}\choose {r-1}}+{{n-1}\choose r}$,




from which we get $\large{{n+r+1}\choose r}={{n+r}\choose r}+{{n+r}\choose {r-1}}$



By the same rule, $\large{{n+r}\choose {r-1}}={{n+r-1}\choose {r-1}}+{{n+r-1}\choose {r-2}}$



$\large{{n+r-1}\choose {r-2}}={{n+r-2}\choose {r-2}}+{{n+r-2}\choose {r-3}}$



$\vdots$



$\large{{n+3}\choose {2}}={{n+2}\choose {2}}+{{n+2}\choose {1}}$



$\large{{n+2}\choose {1}}={{n+1}\choose {1}}+{{n+1}\choose {0}}$



Adding, we get $\large{n\choose 0}+{{n+1}\choose 1}+...+{{n+r-1}\choose {r-1}}+{{n+r}\choose r}={{n+r+1}\choose r}$







Alternatively, we can fix any $r$ of the $(n+r+1)$ objects given. Label them $A_1,A_2,...A_r$. Now our choices of $r$ objects from the $(n+r+1)$ objects may or may not contain any or all of the set $\{A_1,A_2,...,A_r\}$.



Case I: It does not contain $A_1$



This will happen in $\large{{n+r}\choose r}$ ways as the $r$ things have to be chosen from the remaining $(n+r)$ things.



Case II: It contains $A_1$ but does not contain $A_2$.



This will happen in $\large{{n+r-1}\choose {r-1}}$ ways, because having chosen $A_1$ and rejected $A_2$, we have only $(n+r-1)$ things to choose from and we need only $(r-1)$.




$...$



Case r: It contains $A_1,A_2,...,A_{r-1}$ but does not contain $A_r$.



This will happen in $\large {{n+1}\choose 1}$ ways.



Case $(r+1)$: It contains $A_1,A_2,...,A_r$.



This will happen in $\large{n\choose 0}=1$ way.




Hence, $\displaystyle\sum_{k=0}^{r}{{n+k}\choose k}={{n+r+1}\choose r}$
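The identity is also easy to confirm numerically for small parameters, e.g. with Python's math.comb (a throwaway check, nothing more):

```python
# Check sum_{k=0}^{r} C(n+k, k) = C(n+r+1, r) for small n and r.
from math import comb

for n in range(8):
    for r in range(8):
        assert sum(comb(n + k, k) for k in range(r + 1)) == comb(n + r + 1, r)
print("identity verified for 0 <= n, r <= 7")
```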


linear algebra - If a $3times 3$ matrix is not invertible, how do you prove the rest of the invertible matrix theorem?



This is the exact question/instructions I am to follow. I answered part (a) and found the matrix I used is not invertible. Now I am stuck.




propose a specific $n \times n$ matrix $A$, where $n \ge 3$. Select four statements from the invertible matrix theorem and show that all four statements are true or false, depending on the matrix you selected. Make sure to clearly explain and justify your work. Also, make sure to label your statements using the notation used in the notes (e.g., part (a), part (f), etc.). In responding to your classmates’ posts, verify that their work is correct and select three additional statements they did not originally address and show that these statements are true (or false). Again, make sure to show your work.




My matrix:

$\pmatrix{1& 1& 1\\
1& 1& 0\\
0& 0& 1}$



I'm trying to answer parts a,b,c, and k of:



Let A be a square n x n matrix. Then, the following statements are equivalent. That is, for a given A, the statements are either all true or all false.




  • a. A is an invertible matrix.


  • b. A is row equivalent to the n x n identity matrix.

  • c. A has n pivot positions

  • d. The equation Ax = 0 has only the trivial solution.

  • e. The columns of A form a linearly independent set.

  • f. The linear transformation x ↦ Ax is one-to-one.

  • g. The equation Ax = b has at least one solution for each b in ℝ^n.

  • h. The columns of A span ℝ^n.

  • i. The linear transformation x ↦ Ax maps ℝ^n onto ℝ^n.

  • j. There is an n x n matrix C such that CA = I.

  • k. There is an n x n matrix D such that AD = I.


  • l. A^T is an invertible matrix.


Answer




$A$ is an invertible matrix.




Find $B$ such that $AB = BA = I$ or show that it is impossible.





$A$ is row equivalent to the n x n identity matrix.




Reduce $A$ to reduced row echelon form. See if is identity.




$A$ has n pivot positions




Pretty much the same as above.





The equation $Ax = 0$ has only the trivial solution.




Solve the system.




The columns of $A$ form a linearly independent set.





Write down your columns as vectors in $\Bbb R^n$. See if $\{x_1,x_2,\ldots,x_n\}$ is linearly dependent or independent by your usual means.




The linear transformation $x ↦ Ax$ is one-to-one.




Solve $Av = 0$ as above. Pick one solution $v_0$ and define $y = x + v_0$ for any vector $x$. Then $Ay = Ax$, while $x = y$ if and only if $v_0 = 0$, so a nontrivial $v_0$ shows the map is not one-to-one.





The equation $Ax = b$ has at least one solution for each $b$ in $ℝ^n$.




If $A$ is invertible, $A^{-1}b$ is the solution. If not, find $b$ such that $Ax = b$ has no solutions (hint: reduce the augmented matrix $(A|b)$ to row echelon form. If there are zero rows in the transformed $A$, choose $b$ such that its transformed component in that row is non-zero).




The columns of $A$ span $ℝ^n$.




Write down columns of $A$ as vectors in $\Bbb R^n$. See if they generate $\Bbb R^n$ or not. Since there are $n$ columns, it is equivalent to check if columns are linearly dependent or not.





The linear transformation $x ↦ Ax$ maps $ℝ^n$ onto $ℝ^n$.




You must check surjectivity of $A$, i.e. for all $b\in ℝ^n$, you must find solution to $Ax = b$ or find $b$ such that there is no solution.




There is an n x n matrix $C$ such that $CA = I$.





Write general matrix $C$, multiply by $A$ and show that coefficients of $C$ can be chosen to get identity or disprove it.




There is an n x n matrix $D$ such that $AD = I$.




As above.





$A^T$ is an invertible matrix.




As the first one.
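For a concrete matrix such as the one proposed in the question, most of these checks can be carried out mechanically; a minimal sympy sketch:

```python
# Check several of the equivalent statements at once for the matrix from the question.
from sympy import Matrix

A = Matrix([[1, 1, 1],
            [1, 1, 0],
            [0, 0, 1]])

print(A.det())        # 0  -> (a) fails: A is not invertible
print(A.rref())       # RREF is not the identity -> (b) fails; only 2 pivot columns -> (c) fails
print(A.nullspace())  # a nonzero vector -> Ax = 0 has nontrivial solutions, so (d) fails
```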


Saturday 9 June 2018

number theory - Fermat's Last Theorem near misses?

I've recently seen a video from the Numberphile channel on YouTube about Fermat's Last Theorem. It talks about how a purported "solution" to Fermat's Last Theorem for $n>2$ appears in the animated series The Simpsons.



Thanks to Andrew Wiles, we all know that's impossible. The host explains that the solution that appears in the episode is actually a near miss.



There are two near-miss solutions and they are:




$$3987^{12} + 4365^{12} = 4472^{12}$$



$$1782^{12} + 1841^{12} = 1922^{12}$$



Anyone who knows modular arithmetic can check that those solutions are wrong, even without a calculator. You can check that $87^{12} \equiv 81 \pmod{100}$ and $65^{12} \equiv 25 \pmod {100}$, while $72^{12} \equiv 16 \pmod {100}$. So we have:



$$81 + 25 \equiv 16 \pmod {100}$$
$$106 \equiv 16 \pmod {100} \text{, which is obviously wrong}$$




For the second example it's even easier. The LHS is the sum of an even and an odd number, while the RHS is an even number, which is impossible, because the sum of an even and an odd number is odd.



But what made me most interested in this is the following. Using a calculator I expanded the equations and got:



$$3987^{12} + 4365^{12} = 4472^{12}$$



$$63976656349698612616236230953154487896987106 = 63976656348486725806862358322168575784124416$$



$$\text{LHS} - \text{RHS} = 1211886809373872630985912112862690 \neq 0$$




And you'll immediately conclude that those numbers aren't even close: their difference is a $34$-digit number. But bearing in mind that we are working with really, really big numbers, it's probably better to take the relative difference. So we really want to find the ratio of LHS and RHS:



$$\frac{63976656349698612616236230953154487896987106}{63976656348486725806862358322168575784124416} \approx 1.00000000002$$



Somehow that's impressive, but if we take a look at the second example, things start to get more interesting:



$$1782^{12} + 1841^{12} = 1922^{12}$$
$$2541210258614589176288669958142428526657 = 2541210259314801410819278649643651567616$$



As we can see, the first 9 digits are exactly the same and the absolute difference is $700212234530608691501223040959$. But if we take the relative difference, or ratio, we'll get:




$$\frac{2541210258614589176288669958142428526657}{2541210259314801410819278649643651567616} \approx 0.9999999997244572$$



And this is pretty amazing, because if we make the comparison using smaller numbers, this is the same as comparing $10 000 000 000$ and $10 000 000 001$



Probably there are many more, perhaps infinitely many, such "close" examples, but are there any known examples? Is there any list of them?



And as user17762 commented, it would be nice to find a bound $\phi(n) = \inf\{\left| x^n + y^n - z^n\right| : x,y,z \in \mathbb{Z}\}$, although I would be more interested in finding the ratio bound, such that the ratio is closer to $1$.



Also, as user17762 pointed out, Taxicab numbers can be used to provide really close examples for $n=3$, but what about other values of $n$?
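Since Python integers have arbitrary precision, the two near misses and the quoted ratios can be checked exactly in a couple of lines:

```python
# Exact integer check of the two "near misses".
from fractions import Fraction

for a, b, c in [(3987, 4365, 4472), (1782, 1841, 1922)]:
    lhs, rhs = a**12 + b**12, c**12
    print(lhs == rhs, lhs - rhs, float(Fraction(lhs, rhs)))
# Both equalities print False; the printed ratios match the ones worked out above.
```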

Friday 8 June 2018

calculus - Closed-form of $\sum_{k=1}^{\infty }\left(\psi_1(k)\right)^n$




Inspired by answers to this question, for which $n$ values could we specify a closed-form of



$$S(n)=\sum_{k=1}^{\infty }\left(\psi_1(k)\right)^n\,?$$



Here $\psi_1$ is the trigamma function, and $n\geq2$ is an integer.



I think for small $n$ values there is a closed-form. From the answers above we know that $S(2)=3\zeta(3)$. I think there is a generalization of Olivier Oloa's approach using techniques from Pedro Freitas' paper. Note that there is a known error in the paper. Furthermore I think robjohn's answer has a generalization using results from the paper by Philippe Flajolet and Bruno Salvy.



Question. Is there a closed-form of $S(n)$ for $2\leq n \leq 7\,?$



Answer



Using
$$ \psi_1(n) = \sum_{k\geq n}\frac1{k^2}, $$
write
$$ S(3) = \sum_{k\geq1}\sum_{i,j,l\geq k}\frac{1}{i^2j^2l^2}. $$
The sum over $i,j,l\geq k$ decomposes into the sums
$$ \{i=j=l\geq k\} + 3\{i=j, l\geq k ; i\neq l\} + \{i,j,l\geq k; i\neq j, i\neq l, j\neq l\}. $$
Hence
$$ S(3) = \sum_{i\geq k\geq 1}\frac{1}{i^6} + 3\sum_{i\geq k, j\geq k, i\neq j}\frac{1}{i^4j^2} + \sum_{i,j,l\geq k, i\neq j, j\neq l, i\neq l} \frac{1}{i^2j^2l^2}. $$
Doing the sum over $k$ first, and reordering the summation appropriately gives

$$ \begin{aligned}
S(3) &= \sum_{i\geq 1}\frac1{i^5} + 3\sum_{i>l\geq1}\frac{1}{i^4l} + 3\sum_{i>l\geq1}\frac{1}{i^2l^3} +6\sum_{i>j>l\geq1}\frac{1}{i^2j^2l}
\\&= \zeta(5) + 3\zeta(4,1) + 3\zeta(2,3) + 6\zeta(2,2,1)
\end{aligned} $$
in terms of the multivariate zeta function.



Simplifying, this analysis gives:
$$ S(2) = 3\zeta(3). $$
$$ S(3) = 9 \zeta (3) \zeta (2)-\tfrac{25}{2} \zeta (5). $$
$$ S(4) = 10 \zeta (5) \zeta (2)+51 \zeta (3) \zeta (4)- \tfrac{301}{4}\zeta (7) $$

$$ S(5) = -\tfrac{1505}{4} \zeta (7) \zeta (2)+\tfrac{125}{2} \zeta (5) \zeta (4)+\tfrac{835}{4} \zeta (3) \zeta (6)+\tfrac{11791}{36} \zeta (9)-10 \zeta (3)^3. $$
Here I found $S(4)$ and $S(5)$ numerically without proof. For $S(m)$ for $m\geq 6$ there is probably no closed form except in terms of irreducible multivariate zeta values.



For example, if you allow closed forms in terms of double Euler sums, then
$$ S(6) = \tfrac{820230}{901} \zeta (3) s_h(2,6)+\tfrac{192}{901} s_h(8,3)+\tfrac{10}{901} s_h(9,2)-\tfrac{1953509}{2703} \zeta (9) \zeta (2)-\tfrac{762298}{901} \zeta (3)^3 \zeta (2)-\tfrac{19487373}{7208} \zeta (7) \zeta (4)+\tfrac{48455}{34} \zeta (5) \zeta (6)-\tfrac{8977165}{1272} \zeta (3) \zeta (8)+\tfrac{1790142}{901} \zeta (3)^2 \zeta (5)+\tfrac{12188899}{3604} \zeta (11) $$
$$ S(7) = \tfrac{6143922}{1007} \zeta (3) \zeta (2) s_h(2,6)+\tfrac{29361277}{2014} \zeta (5) s_h(2,6)-\tfrac{2696}{1007} \zeta (3) s_h(7,3)+\tfrac{3138}{1007} \zeta (3) s_h(8,2)+\tfrac{273}{1007} \zeta (2) s_h(8,3)-\tfrac{84}{1007} \zeta (2) s_h(9,2)+\tfrac{1046978}{1007} s_h(5,8)-\tfrac{805894}{1007} s_h(6,7)-\tfrac{1796398}{1007} s_h(7,6)-\tfrac{1530997}{1007} s_h(8,5)-\tfrac{720964}{1007} s_h(9,4)-\tfrac{144069}{1007} s_h(10,3)+\tfrac{12}{1007} s_h(11,2)-\tfrac{4525536}{1007} \zeta (11) \zeta (2)-\tfrac{11096255}{2014} \zeta (3)^2 \zeta (5) \zeta (2)-\tfrac{14173495}{2014} \zeta (9) \zeta (4)-\tfrac{19952365}{1007} \zeta (3)^3 \zeta (4)+\tfrac{8234835}{152} \zeta (7) \zeta (6)+\tfrac{339826375}{2544} \zeta (5) \zeta (8)-\tfrac{9006237625}{32224} \zeta (3) \zeta (10)+\tfrac{15302200}{1007} \zeta (3) \zeta (5)^2+\tfrac{42533615}{2014} \zeta (13)-\tfrac{39044583}{1007} \zeta (3)^2 \zeta (7) $$


limits - Find $\lim\limits_{n\to+\infty}\frac{\sqrt[n]{n!}}{n}$




I tried using Stirling's approximation and d'Alembert's ratio test but can't get the limit. Could someone show how to evaluate this limit?


Answer



Use equivalents:
$$\frac{\sqrt[n]{n!}}n\sim_{\infty}\frac{\bigl(\sqrt{2\pi n}\bigr)^{\tfrac 1n}}{n}\cdot\frac n{\mathrm{e}}=\frac 1{\mathrm{e}}\bigl({2\pi n}\bigr)^{\tfrac 1{2n}}$$
Now $\;\ln\bigl({2\pi n}\bigr)^{\tfrac 1{2n}}=\dfrac{\ln\pi+\ln 2n}{2n}\xrightarrow[n\to\infty]{}0$, hence
$$\frac{\sqrt[n]{n!}}n\sim_{\infty}\frac 1{\mathrm{e}}. $$
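The value $1/\mathrm e$ is also easy to confirm numerically by working with logarithms, since $\operatorname{lgamma}(n+1)=\ln n!$ avoids overflowing the factorial; a quick sketch:

```python
# Numerical check: (n!)^(1/n) / n tends to 1/e.
from math import lgamma, exp, e

for n in (10, 1_000, 100_000):
    print(n, exp(lgamma(n + 1) / n) / n)   # approaches 1/e as n grows
print(1 / e)                               # 0.36787944...
```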


How many numbers less than 100 have the sum of factors as odd?





How many numbers less than 100 have the sum of factors as odd?



Answer is 16




This question and explanation are taken from careerbless.com. The linked page derives the answer using some properties of numbers. But I am trying to solve it in a different way, using a general formula for the sum of the factors of a number, which may possibly lead to a simpler solution.



I have gone through the post on this website where a formula is explained for finding the sum of the factors of a number. But it cannot be applied individually for this question (because we would need to do it for 1 to 99, and that takes a lot of time).



Is there any shortcut method to derive the answer for this question, possibly using a formula to tell whether the sum of the factors is even or odd? Please comment with different approaches for this problem.



Answer



The sum-of-divisors function $\sigma$ is multiplicative, with $\sigma(p^m) = 1 + p + \ldots + p^m$ for a prime $p$. In particular, $\sigma(2^m)$ is always odd, while for an odd prime $p$, $\sigma(p^m)$ is odd if and only if $m$ is even. So $\sigma(x)$ is odd if and only if
$x$ is of the form $2^k y^2$ where $y$ is odd.



EDIT: The numbers such that $\sigma(n)$ is odd form OEIS sequence A028982. The number of $x \le n$ such that
$\sigma(x)$ is odd is OEIS sequence A071860.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...