Sunday 31 July 2016

number theory - Is there no proof of Dirichlet's results on quadratic residues without analysis?

Wikipedia states that all known proofs of Dirichlet's results



$$
L(1) = -\frac{\pi}{\sqrt q}\sum_{n=1}^{q-1} \frac{n}{q} \left(\frac{n}{q}\right) \gt 0
$$




and



$$
L(1) = \frac{\pi}{\left(2-\left(\frac{2}{q}\right)\right)\!\sqrt q}\sum_{n=1}^\frac{q-1}{2}\left(\frac{n}{q}\right) \gt 0\;,
$$



both valid for primes $q\equiv3\bmod4$, rely on analysis and that no "simple or direct" proof has ever been published. It cites the third edition of Davenport's Multiplicative Number Theory from $2000$, but the second edition from $1980$ that's available online (at archive.org and at Google Books) only contains the "simple and direct" claim and doesn't mention analysis (though it gives Dirichlet's proof, which uses a moderate amount of analysis).



Since these inequalities correspond to the rather basic number-theoretic facts that for primes $q\equiv3\bmod4$ the sum of the quadratic residues in $[1,q-1]$ is less than the sum of the non-residues and there are more quadratic residues than non-residues in $[1,(q-1)/2]$, respectively, I found this rather surprising and am wondering whether this claim is correct and up to date.
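As a numerical aside (not part of the original question), both facts are easy to check for small primes with a short Python script; the helper legendre is ad hoc, computing the Legendre symbol via Euler's criterion:

def legendre(n, q):
    # Legendre symbol (n/q) for an odd prime q, via Euler's criterion
    s = pow(n, (q - 1) // 2, q)
    return -1 if s == q - 1 else s

for q in range(3, 200, 4):  # candidates q = 3 (mod 4)
    if any(q % d == 0 for d in range(2, q)):  # crude primality test, fine at this size
        continue
    res = [n for n in range(1, q) if legendre(n, q) == 1]
    non = [n for n in range(1, q) if legendre(n, q) == -1]
    assert sum(res) < sum(non)
    half = (q - 1) // 2
    assert len([n for n in res if n <= half]) > len([n for n in non if n <= half])
print("both facts verified for all primes q = 3 (mod 4) below 200")

Of course this is only numerical evidence, not a proof.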




At Evaluate a character sum $\sum\limits_{r = 1}^{(p - 1)/2}r \left( \frac{r}{p} \right) = 0$ for a prime $p \equiv 7 \pmod 8$, David Speyer gives an elementary proof of the equation in the title, i.e. of the fact that the sums of the quadratic residues and non-residues in $[1,(q-1)/2]$ are equal for $q\equiv7\bmod8$. His proof also yields an elementary proof of the equality of Dirichlet's two expressions for that case (which means an elementary proof of either inequality would also yield a proof of the other for that case), but I don't see how to turn it into a proof of their positivity, even for that case. But the elementary character of his proof of a directly related result adds to my feeling that one shouldn't need analysis to prove the other results.



I also found this comment by Terry Tao on MO from 2011 saying that "a proof that $L(1,\chi)\ne0$ [...] seems to require some nontrivial machinery at some point" (where $\chi$ is a quadratic character), linking to On elementary proofs of the Prime Number Theorem for arithmetic progressions, without characters by Andrew Granville and apparently referring to the remark at the bottom of the second page.



So is it true that there is no proof of these inequalities that doesn't use analysis? Are there new developments in this regard? Or could you provide some insight to someone with a limited number-theoretical background why one might expect these proofs to require analysis?

calculus - Is it possible to find the limit without l'Hopital's Rule or series

Is it possible to find $$\lim_{x\to0}\frac{\sin(1-\cos(x))}{x^2e^x}$$
without using L'Hôpital's Rule or series or anything complex, just basic knowledge (definition of a limit, limit laws, and algebraic expansion/cancelling)?
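One possible route using only standard limits (a sketch; it relies on $\lim_{u\to0}\frac{\sin u}{u}=1$, which some courses count as basic knowledge):
$$\frac{\sin(1-\cos x)}{x^2e^x} = \frac{\sin(1-\cos x)}{1-\cos x}\cdot\frac{1-\cos x}{x^2}\cdot\frac{1}{e^x} \longrightarrow 1\cdot\frac{1}{2}\cdot1 = \frac{1}{2},$$
where $\frac{1-\cos x}{x^2} = \frac{\sin^2 x}{x^2(1+\cos x)} \to \frac{1}{2}$.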

algebra precalculus - Help with showing how $\sin\alpha\cos\beta = \frac{1}{2}(\sin(\alpha + \beta) + \sin(\alpha-\beta))$ using Euler's formula.



I need help with understanding how one can rewrite:



$\sin\alpha\cos\beta$



to be equal to: $\frac{1}{2}(\sin (\alpha + \beta) + \sin(\alpha-\beta))$ using Euler's formula.




I know that it probably is quite simple but I cannot get my head around this... Frustrating!



Euler's formulas:



$\cos \theta = \frac{1}{2}(e^{i\theta} + e^{-i\theta})$



$\sin \theta = \frac{1}{2i}(e^{i\theta} - e^{-i\theta})$



Thank you kindly for your help!


Answer




$$\begin{align*}
\sin\alpha\cos\beta&=\frac1{4i}(e^{i\alpha}-e^{-i\alpha})(e^{i\beta}+e^{-i\beta})\\
&=\frac1{4i}\Big(e^{i\alpha}e^{i\beta}+e^{i\alpha}e^{-i\beta}-e^{-i\alpha}e^{i\beta}-e^{-i\alpha}e^{-i\beta}\Big)\\
&=\frac1{4i}\Big(e^{i\alpha}e^{i\beta}-e^{-i\alpha}e^{-i\beta}+e^{i\alpha}e^{-i\beta}-e^{-i\alpha}e^{i\beta}\Big)\\
&=\frac1{4i}\Big(e^{i(\alpha+\beta)}-e^{-i(\alpha+\beta)}+e^{i(\alpha-\beta)}-e^{-i(\alpha-\beta)}\Big)\\
&=\frac12\Big(\sin(\alpha+\beta)+\sin(\alpha-\beta)\Big)
\end{align*}$$



I actually got this by working from each end towards the middle, however.


elementary number theory - Prove that $4^n+ 1$ is not divisible by $3$

For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.



I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3.



I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.
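For the inductive step, one algebraic identity does the job (a sketch):
$$4^{n+1} + 1 = 4\cdot 4^n + 1 = 4(4^n + 1) - 3,$$
so if $3$ divided $4^{n+1}+1$, it would also divide $4(4^n+1) = (4^{n+1}+1) + 3$, and hence (since $\gcd(3,4)=1$) divide $4^n+1$, contradicting the induction hypothesis.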

Saturday 30 July 2016

combinatorics - Combinatorial proof of $\sum\limits_{i = 0}^{m} 2^{n-i} {n \choose i}{m \choose i} = \sum\limits_{i=0}^n {n + m - i \choose m} {n \choose i}$

I've been wondering for a while how to solve (prove) a combinatorial identity, using just combinatorial interpretation:



$$ \sum \limits_{i = 0} ^{m} 2^{n-i} {n \choose i}{m \choose i} = \sum\limits_{i=0}^n {n + m - i \choose m} {n \choose i} $$



($m \leq n$)



The left hand side is pretty much about choosing any number of elements from the set $ M = \{a_1, \dots, a_m\} $ and then choosing at least the same amount from $ N = \{b_1, \dots, b_n\} $, but I can't see how the right hand side satisfies that.
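A quick numeric check of the identity as stated in the body (illustrative only, not a combinatorial proof; the function names lhs/rhs are ad hoc):

from math import comb

def lhs(m, n):
    return sum(2 ** (n - i) * comb(n, i) * comb(m, i) for i in range(m + 1))

def rhs(m, n):
    # note the upper limit n on the right-hand side
    return sum(comb(n + m - i, m) * comb(n, i) for i in range(n + 1))

# holds for all 0 <= m <= n <= 9
assert all(lhs(m, n) == rhs(m, n) for n in range(10) for m in range(n + 1))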

real analysis - Proving that the function is bounded by the function of its derivative on the given interval

Suppose $h:\mathbb{R} \longrightarrow \mathbb{R}$ is differentiable everywhere and $h'$ is continuous on $[0,1]$, $h(0) = -2$ and $h(1) = 1$. Show that:

$|h(x)|\leq \max\limits_{t\in[0,1]}|h'(t)|$ for all $x\in[0,1]$



I attempted the problem the following way:

Since $h(x)$ is differentiable everywhere, it is also continuous everywhere. $h(0) = -2$ and $h(1) = 1$ imply that $h(x)$ crosses the $x$-axis at some point (at least once). Denote that point by $c$, so that $h(c) = 0$ for some $c\in[0,1]$.

$h'(x)$ being continuous means that $\lim_{x\to a} h'(x) = h'(a)$, but then I am stuck and I don't see how what I have done so far can help me to obtain the desired inequality.

Thank you in advance!
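One way to finish from here (a sketch using the Fundamental Theorem of Calculus, which applies because $h'$ is continuous on $[0,1]$): since $h(c)=0$,
$$|h(x)| = \left|\int_c^x h'(t)\,dt\right| \le |x-c|\max_{t\in[0,1]}|h'(t)| \le \max_{t\in[0,1]}|h'(t)|,$$
where the last step uses $|x-c|\le 1$ for $x,c\in[0,1]$.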

Friday 29 July 2016

What is the limit of this sequence as it approaches infinity

We've got a question that asks us to find the limit of this sequence as the $n$th term approaches infinity. I'm unsure whether we should use L'Hôpital's rule or, if not, what we should use instead. I can see we may be able to use L'Hôpital, since the expression has the form infinity/infinity.



We have that



$$a_n=\frac{n\cos(n\pi+\pi/3)+n(-1)^n}{n^2+1}$$



Then, how do we evaluate $\lim_{n\to\infty}a_n$?
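A squeeze-theorem sketch: since $|\cos(n\pi + \pi/3)| \le 1$ and $|(-1)^n| = 1$,
$$|a_n| \le \frac{n + n}{n^2+1} = \frac{2n}{n^2+1} \longrightarrow 0,$$
so $\lim_{n\to\infty} a_n = 0$ without any use of L'Hôpital's rule.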

real analysis - Look for a one-to-one function that maps a square to R

I am looking for a one-to-one function which maps $(0,1)^2$ to $\mathbb{R}$. It is preferable that the function doesn't involve trig functions.
I have tried several mappings like $\ln(\frac{x_2}{1-x_1}),$ but they are not one-to-one. The challenge for me is the one-to-one requirement.



I have read Examples of bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$. I like the idea there, but I need to use this function to do further calculation, so it has to be in explicit form. Is it possible to find such a function?



I appreciate any ideas and comments.

probability - Probability of an M-faced die rolling a greater result than an N-faced die



Hello I have a game-mechanic which goes like this:




Two sides roll numbers: the defender rolls between [1, Armor_Value], the attacker rolls between [1, Damage_Value].



If attacker_roll > defender_roll, the damage is equal to attacker_roll - defender_roll. If not, no damage is dealt. Given




  1. Armour Value of defender

  2. Damage Value of attacker




What is the average damage that the attacker deals?


Answer



[It is unfortunate that armor and attack both begin with "A" while damage and defender both begin with "D". I will use different letters to avoid this confusion.]



Let $N_1$ and $N_2$ be the damage and armor values respectively, and let $X_1$ and $X_2$ be the random variable representing the roll of the attacker and the defender respectively. You are interested in the random variable
$$Y:=\max(X_1-X_2,0).$$



The expected value is
\begin{align*}
\mathbb{E}[Y]
&= \sum_{x_2=1}^{N_2} \sum_{x_1=1}^{N_1} \max(x_1-x_2,0) \cdot \mathbb{P}(X_1=x_1, X_2=x_2)\\
&= \sum_{x_2=1}^{\min(N_1,N_2)} \sum_{x_1=x_2}^{N_1} (x_1-x_2) \frac{1}{N_1N_2} &\text{only consider $x_1$ when it is $\ge x_2$}\\
&= \frac{1}{N_1 N_2} \sum_{x_2=1}^{\min(N_1,N_2)}\sum_{z=0}^{N_1-x_2} z\\
&= \frac{1}{N_1 N_2} \sum_{x_2=1}^{\min(N_1,N_2)}\frac{(N_1-x_2)(N_1-x_2+1)}{2}\\
&= \frac{1}{N_1 N_2} \sum_{k=N_1-\min(N_1,N_2)}^{N_1-1} \frac{k(k+1)}{2}
\end{align*}
The last sum is the sum of triangular numbers. You can use the formula for the sum of the first $n$ triangular numbers $n(n+1)(n+2)/6$ to help compute. For example, if $N_1 \le N_2$, then $N_1-\min(N_1,N_2)=0$, so the sum is the sum of the first $N_1-1$ triangular numbers, so the expected value is $\frac{1}{N_1N_2} \frac{(N_1-1)N_1 (N_1+1)}{6}$. If $N_1 > N_2$ you will need to do an extra step to subtract out the first few triangular numbers.



It will probably be easier to write out a table of all the possible outcomes and notice the pattern. For instance, when $N_1=4$ and $N_2=5$,
$$\begin{array}{c|cccc}
& 1 & 2 & 3 & 4\\ \hline
1 & 0 & 1 & 2 & 3\\
2 & 0 & 0 & 1 & 2\\
3 & 0 & 0 & 0 & 1\\
4 & 0 & 0 & 0 & 0\\
5 & 0 & 0 & 0 & 0\\
\end{array}$$
so the expected value is
$$\frac{1}{4 \cdot 5} ((1+2+3)+(1+2)+1) = \frac{1}{20} \left(6+3+1\right)=\frac{1}{2}$$
This is consistent with our above formula in the case $N_1 \le N_2$: $$\frac{1}{N_1N_2} \frac{(N_1-1)N_1 (N_1+1)}{6}= \frac{1}{20} \cdot \frac{3\cdot 4 \cdot 5}{6}=\frac{1}{2}.$$
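A brute-force enumeration of all $N_1 N_2$ equally likely outcomes confirms this (an illustrative Python sketch; expected_damage is an ad hoc helper):

from fractions import Fraction

def expected_damage(n1, n2):
    # n1 = damage value (attacker), n2 = armor value (defender)
    total = sum(max(a - d, 0) for a in range(1, n1 + 1) for d in range(1, n2 + 1))
    return Fraction(total, n1 * n2)

print(expected_damage(4, 5))  # 1/2, matching the table above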



Roots of polynomials over finite fields

I've been trying to find the decomposition of $x^2-2$ to irreducible polynomials over $\mathbf{F}_5$ and $\mathbf{F}_7$.
I know that for some $a$ in $\mathbf{F}_5$ (for example), $x-a$ divides $x^2-2$ iff $f(a) = 0$, i.e. $a$ is a root of $f(x)=x^2-2$.
Over the field $\mathbf{F}_7$, I've found (by trial and error) that one irreducible factor is $x-3$.
I've now got two questions -





  1. How can I find the other irreducible polynomial?

  2. Is there any more efficient method to find roots than trial and error?



Thanks in advance
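Since these fields are tiny, an illustrative brute-force root search in Python takes only a few lines (not from the original question):

for p in (5, 7):
    roots = [a for a in range(p) if (a * a - 2) % p == 0]
    print(p, roots)  # prints: 5 []   and   7 [3, 4]

So $x^2-2$ has no roots over $\mathbf{F}_5$ (hence is irreducible there), while over $\mathbf{F}_7$ it factors as $(x-3)(x-4)$; note $4 = -3$ in $\mathbf{F}_7$, reflecting that roots of $x^2-2$ come in pairs $\pm a$.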

multivariable calculus - Prove $\left(\sum\limits_{k=1}^n (2k-1)\frac{k+1}{k}\right)\left(\sum\limits_{k=1}^n (2k-1)\frac{k}{k+1}\right) \le \frac{9}{8}n^4$




Prove that for all $n \in \mathbb{N}$ the inequality $$ \left(\sum \limits_{k=1}^n (2k-1)\frac{k+1}{k}\right) \left( \sum \limits_{k=1}^n (2k-1)\frac{k}{k+1}\right) \le \frac{9}{8}n^4$$ holds.





My work. I proved this inequality, but my proof is ugly (it is necessary to check by brute force whether the inequality holds for $n=1,2,3,...,15$). I hope that there is a nicer proof of this inequality. Michael Rozenberg wrote a very nice solution to a similar problem ( Prove the inequality $\sum \limits_{k=1}^n \frac{k+1}{k} \cdot \sum \limits_{k=1}^n \frac{k}{k+1} \le \frac{9}{8}n^2$ ). I think this inequality has a similar proof, but I can’t prove it in a similar way. I will describe how I proved the inequality. Let $S_n= \sum \limits_{k=1}^n \frac{1}{k} $. Then $$ \sum \limits_{k=1}^n (2k-1)\frac{k+1}{k}=n^2+2n-S_n $$ and $$\sum \limits_{k=1}^n (2k-1)\frac{k}{k+1}=n^2-2n-3+\frac{3}{n+1}+3S_n$$ We need to prove that $$3S_n^2-S_n \left( 2n^2+8n+3-\frac{3}{n+1}\right)+\frac{n^4}{8}+7n^2+3n-3+\frac{3}{n+1} \ge 0$$ To prove this inequality, I found the discriminant of the quadratic polynomial and used the fact that $S_n \le n$. It was possible to prove that the inequality holds for all $n \ge 16$.


Answer



We can also use Cassel's inequality:




Let $a$, $b$ and $w$ be sequences of $n$ positive numbers such that $m\leq\frac{a_k}{b_k}\leq M$ for any $k.$ Prove that:
$$\sum_{k=1}^nw_ka_k^2\sum_{k=1}^nw_kb_k^2\leq\frac{(M+m)^2}{4Mm}\left(\sum_{k=1}^nw_ka_kb_k\right)^2.$$




This inequality was here:




G.S. WATSON, Serial Correlation in Regression Analysis, Ph.D. Thesis, Dept. of Experimental
Statistics, North Carolina State College, Raleigh; Univ. of North Carolina, Mimeograph Ser., No.
49, 1951, appendix 1.



In our case $w_k=2k-1$, $a_k=\sqrt{\frac{k+1}{k}}$, $b_k=\sqrt{\frac{k}{k+1}},$ $M=2$ and $m=1$, which gives:
$$\sum_{k=1}^n(2k-1)\frac{k+1}{k}\sum_{k=1}^n(2k-1)\frac{k}{k+1}\leq\frac{(2+1)^2}{4\cdot2\cdot1}\left(\sum_{k=1}^n(2k-1)\right)^2=\frac{9n^4}{8}.$$


Thursday 28 July 2016

trigonometry - Complex argument and Nyquist plot

I'm trying to sketch the Nyquist plot of
$$\frac{j\omega-1}{j\omega+1}$$
but can't seem to calculate the argument correctly. I think it should be $$\arctan(-\omega) - \arctan(\omega) = -2\arctan(\omega)$$ but this doesn't give the correct Nyquist plot behaviour for $\omega \to 0$ and $\omega \to \infty$ - surely $-2\arctan(\omega)$ implies that the argument tends to $0^\circ$ as $\omega\to0$ and to $-180^\circ$ as $\omega\to\infty$?



Wolfram Alpha disagrees but I can't see where I'm going wrong. Am I making a glaring error somewhere? Any help would be greatly appreciated.



Thanks very much
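A side note (not part of the original thread) on where the computation likely goes astray: $j\omega-1$ has negative real part, so it lies in the second quadrant and $\arg(j\omega-1)=\pi-\arctan(\omega)$ rather than $\arctan(-\omega)$. That gives
$$\arg\frac{j\omega-1}{j\omega+1} = \pi - 2\arctan(\omega),$$
which runs from $180^\circ$ at $\omega=0$ down to $0^\circ$ as $\omega\to\infty$, consistent with the unit magnitude (all-pass character) of this transfer function.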

Function notation terminology



Given the function $f:X\longrightarrow Y$, $X$ is called the domain while $Y$ is called the codomain. But what do you call $f(x)=x^2$ in this context, where $x\in X$? That is to say - what is the name for the $f(x)$ notation?



And while I'm here, what is the proper way to write a function like this? Would it be $f:\mathbb{R}\to\mathbb{R},\;f(x)=x^2$?







Edit:



I figured I'd add this to add a bit of context into why I'm asking. I'm writing a set of notes in LaTeX, and I'd like to use the correct terminology for the definition of a function.




A function from set $A$ to set $B$, denoted by
$$f:A\to B;x\mapsto f(x)$$
is a mapping of elements from set $A$, (the $\textit{domain}$) to elements in set $B$ (the $\textit{codomain}$) using the $\color{blue}{\sf function}$ $f(x)$. The domain of a function is the set of all valid elements for a function to map from. The codomain of a function is the set of all possible values that an element from the domain can be mapped to. The $\textit{range}$ (sometimes called the $\textit{image}$) of a function is a subset of the codomain, and is the set of all elements that actually get mapped to by the function $f$.





Here I'm pretty sure the highlighted word "function" is not right.


Answer



I remember reading this text and being puzzled by the exact same question. From what I've learned from my teacher, you're right: writing down something like "the function $f(x)$..." is sloppy notation. However, many books/people will use it this way.



If you are very precise, $f(x)$ is not a function or a map. I don't know of a standard way to refer to $f(x)$, but here is some usage I found on the internet:




  • The output of a function $f$ corresponding to an input $x$ is denoted by $f(x)$.

  • Some would call "$f(x)=x^2$" the rule of the function $f$.


  • For each argument $x$, the corresponding unique $y$ in the codomain is called the function value at $x$ or the image of $x$ under $f$. It is written as $f(x)$.

  • If there is some relation specifying $f(x)$ in terms of $x$, then $f(x)$ is known as a dependent variable (and $x$ is an independent variable).



A correct way to notate your function $f$ is:
$$f:\Bbb{R}\to\Bbb{R}:x\mapsto f(x)=x^2$$



Note that $f(x)\in\Bbb{R}$ and $f\not\in\Bbb{R}$. But the function $f$ is an element of the set of continuous functions, and $f(x)$ isn't.



In some areas of math it is very important to notate a function/map by specifying its domain, codomain and function rule. However, in for example calculus/physics, you'll see that many times only the function rule $f(x)$ is specified, as the reader is supposed to understand the domain/codomain from the context.








calculus - Evaluation of the limit $\lim\limits_{x\to \infty}\left(\frac{2\arctan(x)}{\pi}\right)^x$



I'm trying to evaluate this limit and I need some advice on how to do that.
$$\lim\limits_{x\to \infty}\left(\frac{2\arctan(x)}{\pi}\right)^x $$
I have a feeling it has to do with an indeterminate form of type $1^\infty$ but do not know how to proceed. Any hints/solutions/links will be appreciated.


Answer



It could be suitably modified into something involving the limit $(1+\frac1x)^x\rightarrow e$ for $x\to\infty$.
$$
\left(\frac{2\arctan x}{\pi}\right)^x
~=~
\left[1 + \left(\frac{2\arctan x}{\pi}-1\right)\right]^x
$$
Let $f(x)=\left(\frac{2\arctan x}{\pi}-1\right)$; clearly $f(x)\to 0$ for $x\to+\infty$, therefore
$$
\left[1+f(x)\right]^{\frac{1}{f(x)}}\longrightarrow e
$$
Let us focus on the limit of $xf(x)$: using l'Hospital's rule we get
$$
\lim_{x\to+\infty}x\,f(x)
~=~
\lim_{x\to+\infty}
\frac{\frac{2\arctan x-\pi}{\pi}}{\frac1x}
~\stackrel H=~
\lim_{x\to+\infty}
\frac{\frac{2}{\pi(1+x^2)}}{-\frac1{x^2}}
~=~
-\frac2\pi
$$

Now, putting it all together:
$$
\lim_{x\to+\infty}
\left(\frac{2\arctan x}{\pi}\right)^x
~=~
\lim_{x\to+\infty}
\big(1+f(x)\big)^x
~=~
\lim_{x\to+\infty}
\left[\big(1+f(x)\big)^{\frac{1}{f(x)}}\right]^{xf(x)}
~=~
e^{-2/\pi}
$$
Generally, when you run into $1^\infty$ you can work it out in this way.


probability theory - Implications of Poisson distribution



Consider the random variable $N_n$ drawn from a Poisson distribution with intensity parameter $n$ so that $E(N_n)=n$.



Could you help me to show that $\frac{N_n}{n}\rightarrow_p 1$ as $n\rightarrow \infty$?



My doubts: I do not understand what happens to $N_n$ as $n\rightarrow \infty$. If I consider the probability distribution of the Poisson, when $n\rightarrow \infty$ the probability of observing $N_n=x$ goes to zero. Is this somehow related to the statement above?


Answer




It's true that the probability of $N_n$ taking any fixed value goes to zero as $n\to\infty$, but that has little to do with what its value will tend to be. It is simply a reflection of the fact that there is a wider range of likely values, so the probability of any one of them must be very small.



The slickest way to prove your statement is probably to show that a Poisson with mean $n$ is equal in distribution to the sum of $n$ independent Poissons with mean $1$ and then use the law of large numbers.
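An alternative sketch via Chebyshev's inequality (not from the original answer): $\mathbb{E}[N_n/n] = 1$ and $\operatorname{Var}(N_n/n) = n/n^2 = 1/n$, so for any $\varepsilon > 0$,
$$\mathbb{P}\left(\left|\frac{N_n}{n} - 1\right| \ge \varepsilon\right) \le \frac{1}{n\varepsilon^2} \longrightarrow 0,$$
which is exactly $\frac{N_n}{n}\rightarrow_p 1$.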


real analysis - How to solve this limit $\lim\limits_{n\rightarrow\infty}\left(\frac{u_n}{v_n}\right)^{2040}$?

Let $(u_n)_n$ and $(v_n)_n$ be positive numerical sequences such that $(1+\sqrt{2})^n=u_n+v_n\sqrt{2}$ with $n\in\mathbb{N}$

How to determine the limit of
$$\lim_{n\to \infty}\left(\frac{u_n}{v_n}\right)^{2040}$$



I started by trying to find a recurrence for the terms $u_n$ in order to separate them using induction, but this led nowhere. So, as a result: is there any shortcut to attack this problem?
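A possible shortcut (a sketch, not from the original question): taking conjugates gives $(1-\sqrt{2})^n = u_n - v_n\sqrt{2}$, and multiplying the two expressions yields
$$u_n^2 - 2v_n^2 = \left(1+\sqrt2\right)^n\left(1-\sqrt2\right)^n = (-1)^n,$$
so $\left(\frac{u_n}{v_n}\right)^2 = 2 + \frac{(-1)^n}{v_n^2} \to 2$ (since $v_n\to\infty$), and the requested limit is $2^{1020}$.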

calculus - How to solve limits with Taylor expansion?

I'm having trouble with Taylor series... how can I solve limits without the Bernoulli-L'Hôpital method? For example, $$\lim_{x \to +\infty} \frac{x-\sin{x}}{2x+\sin{x}}.$$

The answer, if I'm not wrong, is $\frac{1}{2}$... How can I show that?
Thanks.
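For this particular limit, a sketch that needs neither Taylor expansion nor L'Hôpital: divide numerator and denominator by $x$ and use $|\sin x| \le 1$, so $\frac{\sin x}{x} \to 0$ as $x\to+\infty$:
$$\lim_{x \to +\infty} \frac{x-\sin{x}}{2x+\sin{x}} = \lim_{x \to +\infty} \frac{1-\frac{\sin x}{x}}{2+\frac{\sin x}{x}} = \frac{1-0}{2+0} = \frac{1}{2}.$$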

calculus - Does this series converge or diverge? $\sum_{n=1}^\infty\ln\left(1+\frac1{n^2}\right)$



I have a series here, and I'm supposed to determine whether it converges or diverges. I've tried the different tests, but I can't quite get the answer.




$$\sum_{n=1}^\infty\ln\left(1+\frac1{n^2}\right)$$




Answer



Hint: Recall that $\ln(1+x)\sim x$ for $x\to 0$, and use the fact that $\sum_{n=1}^\infty\frac1{n^2}$ is convergent.


alternative proof - Concise induction step for proving $\sum_{i=1}^n i^3 = \left(\sum_{i=1}^n i\right)^2$



I recently got a book on number theory and am working through some of the basic proofs. I was able to prove that $$\sum_{i=1}^n i^3 = \left( \sum_{i=1}^ni\right)^2$$ with the help of the identity $$\sum_{i=1}^ni = \frac{n(n+1)}{2}$$ My proof of the induction step goes as follows (supposing equality holds for all $k \in \{1,2,\dots n \})$: $$\sum_{i=1}^{n+1} i^3 = \sum_{i=1}^{n} i^3+(n+1)^3 \\ = \left( \sum_{i=1}^ni\right)^2+(n+1)^3 \\ = \left( \frac{n(n+1)}{2}\right)^2 + (n+1)^3 \\ = \frac{n^2(n+1)^2}{4}+\frac{4(n+1)^3}{4} \\ = \frac{n^4+2n^3+n^2}{4}+\frac{4n^3+12n^2+12n+4}{4} \\ = \frac{n^4+6n^3+13n^2+12n+4}{4} \\ = \frac{(n^2+3n+2)^2}{4} \\ = \frac{[(n+1)(n+2)]^2}{4} \\ = \left(\frac{(n+1)(n+2)}{2}\right)^2 \\ = \left(\sum_{i=1}^{n+1}i\right)^2$$ I was a little disappointed in my proof because the algebra got really hairy. It took me a long while to see that I could twice unfoil the polynomial $n^4+6n^3+13n^2+12n+4$ and all in all the solution seems pretty inelegant. Does anyone have a smoother way to prove the induction step or bypass the algebra? I feel like there must be some concise way to get the same result.


Answer



An idea for your fourth line:



$$\frac{n^2(n+1)^2}{4}+\frac{4(n+1)^3}{4}=\frac{(n+1)^2}4\left(n^2+4n+4\right)=\frac{(n+1)^2}4(n+2)^2$$


linear algebra - Determining a matrix given the characteristic and minimal polynomial



Let $p_A=(x-2)^2(x-7)^4x$ be the characteristic polynomial of the matrix $A$ and $(x-2)^2(x-7)x$ the minimal polynomial. Determine the matrix $A$.



My work: I know the matrix has to be $7\times7$ and on its diagonal it must have two $2$s, four $7$s and one $0$, so:



$$\begin{bmatrix}
2& & & & & & \\
& 2& & & & &\\
& & 7 & & & &\\
& & & 7 & & &\\
& & & & 7& & \\
& & & & & 7 &\\
& & & & & & 0\\ \end{bmatrix}$$



I don't know how to continue; what information does the minimal polynomial give me?


Answer




The minimal polynomial in this case gives you the information about the relevant Jordan blocks. Since it has $(x-2)^2$ as a factor, you must have one $2 \times 2$ Jordan block associated to the eigenvalue $2$ (and not two $1 \times 1$ Jordan blocks). To see why, note that the minimal polynomial of



$$ \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} $$



is $(x - 2)$ while the minimal polynomial of



$$ \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} $$



is $(x - 2)^2$.




Similarly, since the minimal polynomial contains $(x-7)$ only to the first power, all the Jordan blocks associated to the eigenvalue $7$ must be $1 \times 1$. Hence, $A$ is similar to the matrix



$$ \begin{pmatrix} 2 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 7 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 7 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 7 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 7 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{pmatrix}. $$



calculus - Find the limit of $\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$ without L'Hospital's rule



I have to find: $$\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$$
and I want to calculate it without using L'Hospital's rule. With L'Hospital's I know that it gives $1/2$.
Any ideas?


Answer



Simply differentiate $f(x)=\ln(e^x +1)$ at the point of abscissa $x=0$ and you’ll get the answer. In fact this is the definition of the derivative of $f$!
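Spelled out (a short sketch of the same idea): with $f(x)=\ln(1+e^x)$ we have $f(0)=\ln 2$, so
$$\lim_{x\to0}\frac{\ln(1+e^x)-\ln2}{x} = \lim_{x\to0}\frac{f(x)-f(0)}{x-0} = f'(0) = \left.\frac{e^x}{1+e^x}\right|_{x=0} = \frac{1}{2}.$$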


Wednesday 27 July 2016

logic - How can know if a proof technique can actually prove something? Specifically, induction



Induction is an incredible tool for proving some propositions, although it seems that these propositions require some level of simplicity for us to be able to prove them using only induction. If we wanted, for example, to prove Fermat's last theorem using induction on the exponent, I think we would have a hard time.



Regardless of that, let $\phi(n)$ be the statement: $n > 2 \implies x^n + y^n = z^n$ has no natural solutions. Since Fermat's last theorem is now proved, the statement $\phi(0) \land \forall x(\phi(x) \implies \phi(x+1))$ is trivially true (trivially in the sense that it follows directly from Wiles's proof of Fermat's last theorem). Then, using the Induction Scheme, we can conclude that $\forall x\, \phi(x)$.



So, the induction scheme over our statement is provable. But I doubt that anyone would ever be capable of proving Fermat's last theorem using only induction.




Is there a formalization of the concept of "proof technique"? Are there current theories which could, theoretically, prove that if a computer is programmed to only search for proofs using induction, it would never be capable of proving Fermat's last theorem?



I apologize if this question is too vague. If a moderator believes this question is not meant for this website, please delete this.



Edit: In my research I found a paper called "Natural Proofs", which shows that any proof using a certain technique, directly related to a combinatorial property, would not be enough to prove that P != NP. But it seems to me, at least in my ignorance, that the proof is too tied to Complexity Theory, and I don't know whether we can generalize this to other methods of mathematics. Has this been done before?


Answer



Your question can be made precise in a variety of different ways, but you first need to know logic and Peano Arithmetic (PA). (You can skip these links if you understand the rest of this post.)
$\def\nn{\mathbb{N}}$
$\def\t#1{\text{#1}}$
$\def\place#1{\,\boxed{#1}\,}$

$\def\code#1{\underline{#1}}$



Meta-Reasoning



We must always work in some meta-system when talking about proofs. We always choose our meta-system to have the collection of natural numbers, which we call $\nn$, and arithmetic constants $0,1 \in \nn$, and arithmetic operations $+,\times$, such that $(\nn,0,1,+,\times)$ is a model of (first-order) PA (a world satisfying the axioms of PA) and every member of $\nn$ is of the form "$1+1+\cdots+1$". All the mathematics in the rest of this post will be done within the meta-system, where we will talk about other formal systems and what they can or cannot prove.



Induction in general



Induction in the most general sense is the following inference rule:





Given any $1$-parameter sentence $P$ with one natural number parameter:



  If you can deduce both of the following:



    $P(0)$.  



    $\forall n \in \nn\ ( P(n) \to P(n+1) )$.



  Then you can deduce:




    $\forall n \in \nn\ ( P(n) )$.




Here, a $1$-parameter sentence is an expression with blanks, such that filling in the blanks with the same parameter (of the correct type) results in a sentence. Here $P$ requires a parameter that is a natural number, namely a member of $\nn$. We write "$P(t)$" to denote the sentence formed by filling in the blanks in $P$ with the term "$t$". Note that the induction rule is about deduction, not about truth (in the mathematical sense).



For example "$\exists m \in \nn\ \left( \place{1} = 2m \lor \place{1} = 2m+1 \right)$" is a $1$-parameter sentence where all the occurrences of "$\place{1}$" denote the blanks to be filled in, and the result will be a sentence if they are filled in with a natural number. Induction applied to this would say that:




If you can deduce both of the following:




$\exists m \in \nn\ ( 0 = 2m \lor 0 = 2m+1 )$.



$\forall n \in \nn\ ( \exists m \in \nn\ ( n = 2m \lor n = 2m+1 ) \to \exists m \in \nn\ ( n+1 = 2m \lor n+1 = 2m+1 ) )$.



Then you can deduce:



$\forall n \in \nn\ ( \exists m \in \nn\ ( n = 2m \lor n = 2m+1 ) )$.





Indeed we can deduce the two sentences that induction requires in this case (try it!), and hence we can deduce the sentence given by induction, which is in fact a special case of the division-remainder theorem.



Now Fermat's last theorem corresponds to the statement:




$\forall n \in \nn\ ( \t{FLT}(n) )$



  where "$\t{FLT}(n)$" denotes "$n > 2 \to \neg \exists x,y,z \in \nn\ ( x > 0 \land y > 0 \land x^n+y^n=z^n )$".





Indeed since it has been proven (in some powerful formal system), the following can be deduced (in that system) trivially (as you noted):




$\t{FLT}(0)$.



$\forall n \in \nn\ ( \t{FLT}(n) \to \t{FLT}(n+1) )$.




So we could use the induction rule to deduce Fermat's last theorem. But for what? We already have deduced it, so it is pointless to append a few lines consisting of the above deduction just to deduce it again! Instead, what you want to ask is not about induction alone but about the formal system as a whole, specifically what formal systems can deduce the theorem.




One natural candidate that comes to mind is first-order PA, but there are some caveats that I will talk about later. First let us see what induction means for PA.



Induction in first-order PA



Since the language of PA only has $0,1,+,\times$, we can apply induction to only sentences involving these and nothing else. This also means that all the quantifiers are unrestricted, since PA is unable to state anything about $\nn$ or other collections. Specifically, induction in PA is a schema, namely an infinite collection of axioms that form a part of the axioms of PA. This schema includes for every $1$-parameter sentence over PA the following axiom:




$P(0) \land \forall n\ ( P(n) \to P(n+1) ) \to \forall n\ ( P(n) )$.





Note that general induction as described earlier uses restricted quantifiers, and hence is only valid in a system that is able to express statements about the collection $\nn$.



However, you can check that the axioms of PA$^-$ (PA without the induction schema) are sufficient to prove:




$\exists m\ ( 0 = 2m \lor 0 = 2m+1 )$.



$\forall n\ ( \exists m\ ( n = 2m \lor n = 2m+1 ) \to \exists m\ ( n+1 = 2m \lor n+1 = 2m+1 ) )$.





Therefore PA can, by using one of the axioms in the induction schema, deduce:




$\forall n\ ( \exists m\ ( n = 2m \lor n = 2m+1 ) )$.




It turns out that this theorem of PA cannot be proven without using induction, because there is a model of PA$^-$ that does not satisfy this sentence, and so PA$^-$ cannot prove the sentence.



This technique of constructing a model of a formal system $S$ (in this case PA$^-$) that does not satisfy some sentence $A$ (in this case "$\neg \forall n\ ( \exists m\ ( n = 2m \lor n = 2m+1 ) )$") to show that $S$ does not prove $A$ is a very useful technique, though there is no systematic way to construct the model needed.




Fermat's last theorem in weak arithmetic theories



The model-construction technique was used to prove that Fermat's last theorem in various forms cannot be proven in various formal systems. This recent paper states (roughly) the following (page 2):




PA$^-$ plus quantifier-free induction cannot prove "$\t{FLT}(3)$".



  where "$t^3$" is read as "$t \times t \times t$" [the language of PA cannot express exponentiation!].



$\t{Th}(\nn) + \t{Exp}$ cannot prove "$\forall n\ ( \t{FLT}_e(n) )$"




  where $\t{Th}(\nn)$ (the theory of $\nn$) is the collection of all sentences satisfied by $\nn$



  and $e$ is a binary function symbol and $\t{Exp}$ consists of the familiar laws of exponentiation



  and $\t{FLT}_e(n)$ is the same as $\t{FLT}(n)$ except that "$t^n$" is read as "$e(t,n)$".




The problem here that the language of PA cannot talk about exponentiation is an important one. Godel showed as part of the proof of his incompleteness theorems that there is a way to encode finite sequences of natural numbers as single natural numbers such that the encodings can be manipulated by first-order formulae over PA. It implies that there is a $3$-parameter sentence $\t{exp}$ over PA such that for every $a,b,c \in \nn$ we have that $\nn$ satisfies "$\t{exp}(a,b,c)$" iff $a^b = c$. So PA can sort of talk about exponentiation using $\t{exp}$.




The paper asserts (page 1) that it is widely believed that PA can prove Fermat's last theorem. This claim is that:




? PA proves "$\forall n\ ( n > 2 \to \neg \exists x,y,z,p,q,r \in \nn\ ( x > 0 \land y > 0$



$\ \land \t{exp}(x,n,p) \land \t{exp}(y,n,q) \land \t{exp}(z,n,r) \land p+q=r ) )$".




The catch is that there is no way to see that the formulae used in manipulating sequence encodings faithfully represent the manipulation of the sequences themselves, without already having an understanding of sequences! So if (hypothetically) one rejects the meta-system's notions of $\nn$ and sequences from $\nn$, then one might consider $\t{exp}$ to be a meaningless sentence. Similarly, unless one already accepts the existence of the exponential function on $\nn$, the above claim loses its meaning.




This is in my opinion why it is interesting to consider those extensions of PA that explicitly have the exponential function. Exponential function arithmetic (EFA) is one. And if Friedman's conjecture turns out to be true, EFA would prove Fermat's last theorem in its original form (i.e. no encoding needed).


probability - Show $\mathbb{E}(X) = \int_0^{\infty} (1-F_X(x)) \, dx$ for a continuous random variable $X \geq 0$

If $X$ is a continuous random variable with density $f_X$ and taking non-negative values only, how do I show that $$\mathbb{E}(X)=\int_0^{\infty}[1-F_X(x)]dx$$ whenever this integral exists?
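A sketch via swapping the order of integration (Tonelli's theorem, which applies because everything is non-negative):
$$\int_0^\infty \big(1-F_X(x)\big)\,dx = \int_0^\infty \int_x^\infty f_X(t)\,dt\,dx = \int_0^\infty \int_0^t dx\, f_X(t)\,dt = \int_0^\infty t\,f_X(t)\,dt = \mathbb{E}(X).$$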

calculus - Integral of $\int \frac{x^2 - 5x + 16}{(2x+1)(x-2)^2}dx$



I am trying to find the integral of this by using integration of rational functions by partial fractions.




$$\int \frac{x^2 - 5x + 16}{(2x+1)(x-2)^2}dx$$



I am not really sure how to start this, but the book gives some weird formula to memorize with no explanation of why: $\frac {A}{(ax+b)^i}$ and $ \frac {Ax + B}{(ax^2 + bx +c)^j}$



I am not sure at all what this means and there is really no explanation of any of it. I am guessing $i$ is for imaginary number, and $j$ is just a representation of another imaginary number that is not the same as $i$. $A$, $B$ and $C$, I have no idea what that means and I am not familiar with capital letters outside of triangle notation so I am guessing that they are angles of lines for something.


Answer



See first Arturo's excellent answer to Integration by partial fractions; how and why does it work?








I am guessing i is for imaginary number, and j is just a representation
of another imaginary number that is not the same as i.



I don't know what an indice or natural number is and it is not
mentioned naywhere in the text. (in a comment)




The numbers $i$ and $j$ are natural numbers, i.e. they are positive integers $1,2,3,\dots,n,\dots .$ Their set is denoted by $\mathbb{N}$.





$A$, $B$ and $C$, I have no idea what that means and I am not familiar
with capital letters outside of triangle notation so I am guessing
that they are angles of lines for something.




In this context the letters $A$, $B$ and $C$ are constants, i.e. independent of the variable $x$.





  • Let
    $$\begin{equation*}
    \frac{P(x)}{Q(x)}:=\frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right)
    ^{2}}\tag{1}.
    \end{equation*}$$ The denominator $Q(x):=\left( 2x+1\right) \left( x-2\right) ^{2}$ has factors of the form $(ax+b)^{i}$ only. Each one gives rise to $i\in\mathbb{N}$ partial fractions, whose integrals can be computed recursively and/or found in tables of integrals (see $(6),(7),(8)$ below for the present case):
    $$\begin{equation*}
    \frac{A_{i}}{(ax+b)^{i}}+\frac{A_{i-1}}{(ax+b)^{i-1}}+\ldots +\frac{A_{1}}{ax+b}.
    \end{equation*}\tag{2}$$ The exponent of the factor $\left( x-2\right) ^{2}$ is $i=2$ and of the factor $2x+1$ is $i=1$. Therefore we should find the constants $A_{1}$, $A_{2}$, $B$ such that
    $$\begin{equation*}
    \frac{P(x)}{Q(x)}=\frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right)
    ^{2}}=\frac{B}{2x+1}+\frac{A_{2}}{\left( x-2\right) ^{2}}+\frac{A_{1}}{x-2}\end{equation*}.\tag{3}$$


  • One method is to reduce the RHS to a common denominator
    $$\begin{equation*}
    \frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right) ^{2}}=\frac{B\left(x-2\right) ^{2}+A_{2}\left( 2x+1\right) +A_{1}\left( x-2\right) \left(2x+1\right) }{\left( 2x+1\right) \left( x-2\right) ^{2}}.
    \end{equation*}$$ $$\tag{3a}$$
    [See remark below.] This means that the polynomials of the numerators must be equal on both sides of this last equation. Expanding the RHS, grouping the terms of the same degree
    $$\begin{eqnarray*}
    P(x) &:=&x^{2}-5x+16=B\left( x-2\right) ^{2}+A_{2}\left( 2x+1\right)
    +A_{1}\left( x-2\right) \left( 2x+1\right) \\
    &=&\left( Bx^{2}-4Bx+4B\right) +\left( 2A_{2}x+A_{2}\right) +\left(
    2A_{1}x^{2}-3A_{1}x-2A_{1}\right) \\
    &=&\left( B+2A_{1}\right) x^{2}+\left( -4B+2A_{2}-3A_{1}\right) x+\left(
    4B+A_{2}-2A_{1}\right)
    \end{eqnarray*}$$ $$\tag{3b}$$ and equating the coefficients of $x^{2}$, $x^{1}$ and $x^{0}$, we conclude that they must satisfy‡ the following system of 3 linear equations [See (*) for a detailed solution of the system]
    $$\begin{equation*}
    \left\{
    \begin{array}{c}
    B+2A_{1}=1 \\
    -4B+2A_{2}-3A_{1}=-5 \\
    4B+A_{2}-2A_{1}=16
    \end{array}
    \right. \Leftrightarrow \left\{
    \begin{array}{c}
    A_{1}=-1 \\
    A_{2}=2 \\
    B=3.
    \end{array}
    \right.\tag{3c}
    \end{equation*}$$ In short, this method reduces to solving a linear system. So, we have
    $$\begin{equation*}
    \frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right) ^{2}}=\frac{3}{2x+1}+
    \frac{2}{\left( x-2\right) ^{2}}-\frac{1}{x-2}.
    \end{equation*}\tag{4}$$


  • We are now left with the integration of each partial fraction
    $$\begin{equation*}
    \int \frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right) ^{2}}dx=3\int
    \frac{1}{2x+1}dx+2\int \frac{1}{\left( x-2\right) ^{2}}dx\\-\int \frac{1}{x-2}
    dx.\tag{5}
    \end{equation*}$$





Can you proceed from here? Remember these basic indefinite integral formulas:



$$\int \frac{1}{ax+b}dx=\frac{1}{a}\ln \left\vert ax+b\right\vert +C, \tag{6}$$



$$\int \frac{1}{\left( x-r\right) ^{2}}dx=-\frac{1}{x-r}+C,\tag{7}$$



$$\int \frac{1}{x-r}dx=\ln \left\vert x-r\right\vert +C.\tag{8}$$



--




† Another method is to evaluate both sides of $(3)$ at 3 different values, e.g. $x=-1,0,1$ and obtain a system of 3 equations. Another one is to compute $P(x)$



$$\begin{equation*}
P(x)=x^{2}-5x+16=B\left( x-2\right) ^{2}+A_{2}\left( 2x+1\right)
+A_{1}\left( x-2\right) \left( 2x+1\right)
\end{equation*}$$



first at the zeros of each term, i.e. $x=2$ and $x=-1/2$
$$\begin{eqnarray*}
P(2) &=&10=5A_{2}\Rightarrow A_{2}=2 \\
P\left( -1/2\right) &=&\frac{75}{4}=\frac{25}{4}B\Rightarrow B=3;
\end{eqnarray*}$$



and then at e.g. $x=0$
$$\begin{equation*}
P(0)=16=4B+A_{2}-2A_{1}=12+2-2A_{1}\Rightarrow A_{1}=-1.
\end{equation*}$$



For additional methods see this Wikipedia entry




‡ If $B+2A_{1}=1,-4B+2A_{2}-3A_{1}=-5,4B+A_{2}-2A_{1}=16$, then $x^{2}-5x+16=\left( B+2A_{1}\right) x^{2}+\left( -4B+2A_{2}-3A_{1}\right) x+\left(4B+A_{2}-2A_{1}\right)$ for all $x$ and $(3a)$ is an identity.






REMARK in response to a comment below by OP. For $x=2$ the RHS of $(3a)$ is not defined. But we can compute as per $(3b,c)$ or as per †, because we are not plugging $x=2$ into the fraction $(3a)$. In $(3c)$ we ensure that the numerators of $(3a)$ $$x^{2}-5x+16$$ and $$B\left( x-2\right) ^{2}+A_{2}\left( 2x+1\right)
+A_{1}\left( x-2\right) \left( 2x+1\right) $$ are identically equal, i.e. they must have equal coefficients of $x^2,x,x^0$.






(*) Detailed solution of $(3c)$. Please note that we cannot find $A,B$ and $C$ with one equation only, as you tried below in a comment ("$16=2b+A_1−A_2$ I have no idea how to solve this.")

$$\begin{eqnarray*}
&&\left\{
\begin{array}{c}
B+2A_{1}=1 \\
-4B+2A_{2}-3A_{1}=-5 \\
4B+A_{2}-2A_{1}=16
\end{array}
\right. \\
&\Leftrightarrow &\left\{
\begin{array}{c}
B=1-2A_{1} \\
-4\left( 1-2A_{1}\right) +2A_{2}-3A_{1}=-5 \\
4\left( 1-2A_{1}\right) +A_{2}-2A_{1}=16
\end{array}
\right. \Leftrightarrow \left\{
\begin{array}{c}
B=1-2A_{1} \\
-4+5A_{1}+2A_{2}=-5 \\
4-10A_{1}+A_{2}=16
\end{array}
\right. \\
&\Leftrightarrow &\left\{
\begin{array}{c}
B=1-2A_{1} \\
A_{2}=-\frac{1+5A_{1}}{2} \\
4-10A_{1}-\frac{1+5A_{1}}{2}=16
\end{array}
\right. \Leftrightarrow \left\{
\begin{array}{c}
B=1-2A_{1} \\
A_{2}=-\frac{1+5A_{1}}{2} \\
A_{1}=-1
\end{array}
\right. \\
&\Leftrightarrow &\left\{
\begin{array}{c}
B=1-2\left( -1\right) \\
A_{2}=-\frac{1+5\left( -1\right) }{2} \\
A_{1}=-1
\end{array}
\right. \Leftrightarrow \left\{
\begin{array}{c}
A_{1}=-1 \\
A_{2}=2 \\
B=3
\end{array}
\right.
\end{eqnarray*}$$







Comment below by OP




I watched the MIT lecture on this and they use the "cover up" method to solve systems like this and I am attempting to use that here. I have $$\frac{A}{2x+1} + \frac{B}{x-2} + \frac{C}{(x-2)^2}$$ Is there anything wrong so far? It appears to me to be correct. Now I try to find B by making x = 2 and multiplying by x-2 which gets rid of C and A since it makes them zero and then the RHS which cancels out and leaves me with B = 2 but that also works for C I think so I am confused, and for A I get 55/6 which I know is wrong but the method works and I am doing the math right so what is wrong?




Starting with $$\frac{x^{2}-5x+16}{(2x+1)(x-2)^{2}}=\frac{A}{2x+1}+\frac{B}{x-2}+\frac{C}{(x-2)^{2}}\tag{3'}$$



we can multiply it by $(x-2)^{2}$




$$\frac{x^{2}-5x+16}{2x+1}=\frac{A(x-2)^{2}}{2x+1}+B(x-2)+C.$$



To get rid of $A$ and $B$ we make $x=2$ and obtain $C$



$$\frac{2^{2}-5\cdot 2+16}{2\cdot 2+1}=\frac{A(2-2)^{2}}{2x+1}+B(2-2)+C$$



$$\Rightarrow 2=0+0+C\Rightarrow C=2$$



We proceed by multiplying $(3')$ by $2x+1$




$$\frac{x^{2}-5x+16}{(x-2)^{2}}=A+\frac{B(2x+1)}{x-2}+\frac{C(2x+1)}{(x-2)^{2}}$$



and making $x=-1/2$ to get rid of $B$ and $C$



$$\frac{\left( -1/2\right) ^{2}-5\left( -1/2\right) +16}{(-1/2-2)^{2}}=A+
\frac{B(2\left( -1/2\right) +1)}{-1/2-2}+\frac{C(2\left( -1/2\right) +1)}{
(-1/2-2)^{2}}$$



$$\Rightarrow 3=A+0+0\Rightarrow A=3$$




Substituting $A=3,C=2$ in $(3')$, we have



$$\frac{x^{2}-5x+16}{(2x+1)(x-2)^{2}}=\frac{3}{2x+1}+\frac{B}{x-2}+\frac{2}{
(x-2)^{2}}$$



Making e.g. $x=1$ (it could be e.g. $x=0$)



$$\frac{1^{2}-5+16}{(2+1)(1-2)^{2}}=\frac{3}{2+1}+\frac{B}{1-2}+\frac{2}{
(1-2)^{2}},$$




$$\Rightarrow 4=1-B+2\Rightarrow B=-1.$$



Thus



$$\frac{x^{2}-5x+16}{(2x+1)(x-2)^{2}}=\frac{3}{2x+1}-\frac{1}{x-2}+\frac{2}{(x-2)^{2}},\tag{3''}$$



which is the same decomposition as $(4)$.
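For readers who want to check such a decomposition mechanically, sympy can do both the partial-fraction split and the integration (an illustrative snippet, not part of the original answer):

from sympy import symbols, apart, integrate

x = symbols('x')
expr = (x**2 - 5*x + 16) / ((2*x + 1) * (x - 2)**2)
print(apart(expr))        # the three partial fractions of (4)
print(integrate(expr, x)) # an antiderivative, matching (5)-(8) up to a constant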


elementary set theory - (Verification) Zorn's Lemma is Equivalent to Hausdorff Maximal Principle




Let $(X, \le)$ be a partially ordered set.



Claim



Zorn's Lemma and Hausdorff Maximal Principle are equivalent.



Zorn's Lemma



Suppose $X$ has the property that every chain has an upper bound in $X$. Then the set $X$ contains at least one maximal element.




Hausdorff Maximal Principle



$X$ has a maximal chain.



$1.\;$Zorn's Lemma $\rightarrow$ Hausdorff Maximal Princple



Let $\Bbb C(X)$ be the family of all chains of $X$, let $\Bbb C$ be a chain in $\Bbb C(X)$ (with respect to inclusion), and let $C= \cup\Bbb C$.



Then for $a,b \in C$ there exist $C_1, C_2$ such that $a \in C_1 \in \Bbb C$ and $b \in C_2 \in \Bbb C$.




But $C_1 \subset C_2 \;\text{or}\;C_2\subset C_1$ since $\Bbb C$ is a chain.



If $C_1 \subset C_2$, then $a,b \in C_2$, so $a \le b$ or $b \le a$ since $C_2$ is a chain of $X$; and vice versa.



Thus $C$ is a chain of $X$



Now Hausdorff Maximal Principle holds since $\Bbb C \subset \Bbb C(X)$ has maximal chain $C$



$2.\;$ Hausdorff Maximal Princple $\rightarrow$ Zorn's Lemma




Suppose every chain of $X$ has an upper bound. Then for the maximal chain of $X$, $C$, let $m\in X$ be an upper bound of $C$.



Now suppose $x \in X$ and $x>m$. Then



$C \cup \{x\}$ is also a chain, since $x$ is comparable with every element of $C$.



But this contradicts the fact that $C$ is a maximal chain, since $C \cup \{x\} \supsetneq C$.



Thus $m$ is a maximal element of $X$.


Answer




For 1. I would write: Let $C(X)$ be the set of chains of $X,$ partially ordered by $\subset$. Then show that $\subset$ is a transitive relation on $C(X).$ (Which is fairly obvious)...Then show, as you did that if $S$ is a $\subset$-chain of $C(X),$ then $\cup S\in C(X)$ and $\cup S$ is a $\subset$-upper bound for $S$. Zorn's Lemma then implies that $C(X)$ has a $\subset$-maximal member....Part 2 is OK.



As I said in a comment, it's my opinion that your presentation could be a bit better.


What is the proof for $\sqrt{-a}\times\sqrt{-b}\neq\sqrt{ab},\text{ where }a,b\in \mathbb{R}$



Having just learned about $i$ in my 10$^{th}$ grade classroom, I'm interested in the proofs underlying the rules for algebraic manipulations with imaginary numbers; such an understanding will create a greater appreciation for these foundations of mathematics.



Without given a reason, we were told that $\sqrt{-a}*\sqrt{-b}\neq\sqrt{ab},\text{ where }a,b\in \mathbb{R}$.



I've proved on my own (I don't know if my attempt is correct and recognize how relatively rudimentary it is, though I'm just starting out) that

[image: the poster's attempted proof]



I need to prove $\sqrt{-a}*\sqrt{-b}\neq\sqrt{ab},\text{ where }a,b\in \mathbb{R}$. So I begin by assuming the proof holds true for all $b_1,b_2\in \mathbb{R}$ and not just $\mathbb{R}^{-}$ and try to prove by way of contradiction that this won't work. But from what I see, it does work. So where am I going wrong?



Maybe it's that once imaginary numbers exist this system falls apart because $\sqrt{-1}\times\sqrt{-1}=-1$ so you perform some sort of pattern matching?



Obviously $-1$ is some special case.



I'm just not clear on how to resolve this. Some clarity would be much appreciated.


Answer




In line three you use the fact that for positive reals $a,b$, from $a^n=b^n$ it follows that $a=b$. This is no longer true over the complex numbers (it's not even true over the real numbers). For example $i^4=1^4$, but certainly we don't have $i=1$.



Also, showing that the above proof doesn't work for complex numbers does not prove that the theorem is wrong. To show that the theorem is wrong you just have to give a counterexample. As you already noted, $a=b=-1$ would do the job.


discrete mathematics - Show that $f(S)\cup f(T)\subset f(S\cup T)$



$y\in f(S)\cup f(T)\Longrightarrow \exists\,x\in (S\cup T) \,\,s.t.\,\,f(x)=y$



$x\in (S \cup T)\Longrightarrow\,y=f(x)\in f(S) \wedge y=f(x)\in f(T)$




$y=f(x)\in f(S\cup T)\,\Longrightarrow \,f(S)\cup f(T)\subset f(S\cup T)$



Ok, I don't have the intuition to prove things like this, so what can I do to develop that intuition?



I don't even really understand what I wrote.


Answer



The very first thing that I recommend is that you use more words and fewer symbols: it’s much easier to be clear on the logic if you explain it verbally as you go.



You have a function $f:X\to Y$, say, and you want to show that if $S,T\subseteq X$, then $$f[S]\cup f[T]\subseteq f[S\cup T]\;.$$




The most straightforward approach will work nicely: start with an arbitrary element of $f[S]\cup f[T]$, and show that it necessarily belongs to $f[S\cup T]$. Here’s how that might look in practice:




Let $y\in f[S]\cup f[T]$; then by the definition of union we know that $y\in f[S]$ or $y\in f[T]$. Suppose first that $y\in f[S]$; then there is some $x\in S$ such that $f(x)=y$. Of course $S\subseteq S\cup T$, so $x\in S\cup T$, and therefore $y=f(x)\in f[S\cup T]$.



Now suppose instead that $y\in f[T]$; then there is some $x\in T$ such that $f(x)=y$. Of course $T\subseteq S\cup T$, so $x\in S\cup T$, and therefore $y=f(x)\in f[S\cup T]$. In all cases, therefore, $y\in f[S\cup T]$, and since $y$ was an arbitrary member of $f[S]\cup f[T]$, it follows that $$f[S]\cup f[T]\subseteq f[S\cup T]\;.$$




Once you really understand what you’re doing, you can shorten this quite a bit, but at this point, when you’re still feeling your way, it’s better to include too much detail than to include too little.



calculus - How does one interpret $dfrac{dx}{dy}$ for a function which isn't invertible?

I was just going through the proof of derivative of inverse functions.



The statement reads:




If $y= f(x)$ is a differentiable function of $x$ such that its inverse $x=f^{-1}(y)$ exists, then $x$ is a differentiable function of $y$ and its derivative is $\dfrac{dx}{dy} = \dfrac{1}{\frac{dy}{dx}}, \dfrac{dy}{dx}\neq 0$





This naturally raises a few questions: what if the inverse of $y=f(x)$ doesn't exist?



My Questions :









  1. Does $\dfrac{dx}{dy}$ still have a meaning?

  2. If so then what would it mean geometrically?

  3. Would $\dfrac{dx}{dy} = \dfrac{1}{\frac{dy}{dx}}$ still hold? And if it does, then why do we even need the invertible condition in the statement?

calculus - Need help solving this indefinite integral (not homework!)



Source: A question bank of "tough" problems on integrals (maybe tough for a noob like me). Started by learning integration for use in physics only, but now it's got me hooked :p



Problem: Evaluate the indefinite integral $$\int {x\,\mathrm{d}x\over(7x-10-x^2)^{3/2}}.$$



I have used all the tools in my arsenal. Substitution: no viable substitution comes to mind here. I have tried factoring the quadratic, but that doesn't help.
I have tried to multiply and divide the denominator by $x^2$ and then substitute $x={1\over t}$, but no help. I'm actually stuck right now. Please give me a hint to solve this one. All help appreciated!



@Frank gave it a shot as well...




$$\int {x\,\mathrm{d}x\over{(-1)^{3/2}(x^2-7x+10)^{3/2}}}.$$



$$\int {x\,\mathrm{d}x\over{(i)^3(x^2-7x+10)^{3/2}}}.$$ ($i$ is the imaginary unit)



Clearly we shouldn't get any imaginary term in the answer, and there is probably no chance that the imaginary number will cancel. That's why I did not pursue this method. Will go ahead and try the Euler substitution...



Edit: This question is solved but I'm still looking for a better, faster alternative, as Euler's substitution can sometimes invite a bunch of calculations.


Answer



Thanks @DrSonnhardGraubner for giving me the right article for the problem. I didn't know about this one.




We are going to use the third substitution of Euler here, wherein we assume that



$$\sqrt{7x-10-x^2} = (5-x)t$$ (using the factorization $7x-10-x^2=(x-2)(5-x)$)



$$t = \sqrt{(x-2)\over(5-x)}$$



Differentiate to get an expression for $\mathrm{d}x$ in terms of $\mathrm{d}t$:



$$\mathrm{d}x = \frac{6t \mathrm{d}t}{(t^2+1)^2}$$




Now substitute $x$ with a function of $t$ according to the above equation (solving for $x$ gives $x=\frac{5t^2+2}{t^2+1}$) and get the answer as given above in the comment by @John Chessant
$$-\frac{2}{9}\cdot\frac{20-7x}{\sqrt{(x-2)(5-x)}}+C$$
(note the radicand is $(x-2)(5-x)=7x-10-x^2$, which is positive on the domain $2<x<5$).



Any alternates are welcome!
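As a mechanical double-check of the final antiderivative, one can ask sympy to differentiate it (an illustrative snippet, not from the original thread):

from sympy import symbols, sqrt, diff, simplify, Rational

x = symbols('x')
F = -Rational(2, 9) * (20 - 7*x) / sqrt(7*x - 10 - x**2)
integrand = x / (7*x - 10 - x**2)**Rational(3, 2)
print(simplify(diff(F, x) - integrand))  # 0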


Monday 25 July 2016

Performing Induction on the process of Induction (Function Spaces?)

I think it was Gauss who taught us that $1+2+\cdots+n = n(n+1)/2$ for natural numbers $n$.
We can easily verify this with a proof by induction.



However, what if I would like to find the sum of all the natural numbers between any two given natural numbers (not just between 1 and a given natural number)? And from there, what if I wanted to generalize to all integers?




Well, using Gauss' reasoning it's not hard to develop an equation that would satisfy this.



For example, I could say that



For all x and all y,



The sum between x and y (inclusive: implying that if x and y are equal it will return that value) is:



(x+y)(abs(abs(x)-abs(y))+1)/2 where x,y are elements of Z.




I may have typed that in wrong; I'm not sure. But it's not really relevant. What I'm really wondering is how I would go about proving something like this.



My first instinct is that I could just do induction on the process of induction. That is, say we were just sticking with a proof within the natural numbers, if I prove with induction that it is equal to the sum of the values from 1 to y, that is a proof of the base case where x = 1. Then I could just do the inductive step on that...proving 2 to y, 3 to y, and so on. And I could do this in such a way that it wouldn't matter whether or not x>y (ie: doing the induction in both directions: doing the induction on the induction with the base case of 1 to x, and doing the induction on the induction with the base case of 1 to y). But there are other ways (more simple ways I think even) that I could go about making sure that it wouldn't matter whether x>y.



Nonetheless, my professor says that this method of applying induction on induction will not work. She also says I would have to use function spaces for this proof (which I haven't learned yet). Is my professor correct? Why or why not? Also, are there any other methods for the proof that anyone knows?

soft question - Are infinitesimals equal to zero?



I am trying to understand the difference between a sizeless point and an infinitely short line segment. When I arrive to the notion coming from different perspectives I find in the mathematical community, I arrive to conflicting conclusions, meaning that either the mathematical community is providing conflicting information (not very likely) or that I don't understand the information provided (very likely).



If I think of a sizeless point, there are no preferential directions in it because it is sizeless in all directions. So when I try to think of a line tangent to it, I get an infinite number of them because any orientation seems acceptable. In other words, while it makes sense to talk about the line tangent to a curve at a point, I don't think it makes sense to talk about the line tangent to an isolated sizeless point.




However, if I think of an infinitely short line segment, I think of one in which both ends are separated by an infinitely short but greater than zero distance, and in that case I don't have any trouble visualising the line tangent to it because I already have a tiny line with one specific direction. I can extend infinitely both ends of the segment, keeping the same direction the line already has, and I've got myself a line tangent to the first one at any point within its length.



What this suggests to me is that sizeless points are not the same notions as infinitely short line segments. And if I can remember correctly from my school years, when I was taking limits I could assume that, as a variable approached some value, it never quite got to the value so I could simplify expressions that would otherwise result in 0/0 indeterminations by assuming that they represented tiny non-zero values divided by themselves and thus yielding 1. That, again, suggests to me that infinitesimals are not equal to zero.



But then I got to the topic about 0.999... being equal to 1, which suggests just the opposite. In order to try to understand the claim, I decided to subtract one from itself and subtract 0.999... from one, hoping to arrive to the same result. Now, subtracting an ellipsis from a number seems difficult, so I started by performing simpler subtractions, to see if I could learn anything from them.



1.0 - 0.9 = 0.1



That was quite easy. Let's now add a decimal 9:




1.00 - 0.99 = 0.01



That was almost as easy, and a pattern seems to be emerging. Let's try with another one:



1.000 - 0.999 = 0.001



See the pattern?



1.0000 - 0.9999 = 0.0001




I always get a number that starts with '0.' and ends with '1', with a variable number of zeros in between, as many as the number of decimal 9s being subtracted, minus one. With that in mind, and thinking of the ellipsis as adding decimal 9s forever, the number I would expect to get if I performed the subtraction would look something like this:



1.000... - 0.999... = 0.000...1



So if I never stop adding decimal 9s to the number being subtracted, I never get to place that decimal 1 at the end, because the ellipsis means that I never get to the end. So in that sense, I might understand how 0.999... = 1.



However, using the same logic:



1.000... - 1.000... = 0.000...0




Note how there is no decimal 1 after the ellipsis in the result. Even though both numbers might be considered equal because there cannot be anything after an ellipsis representing an infinite number of decimal digits, the thing is both numbers cannot be expressed in exactly the same way. It seems to me that 0.000...1 describes the length of an infinitely short line segment while 0.000...0 describes the length of a sizeless point. And indeed, if I consider the values in the subtraction as lengths along an axis, then 1 - x, as x approaches 1, yields an infinitely short line segment, not a sizeless point.



So what is it? Is the distance between the points (0.999..., 0) and (1.000..., 0) equal to zero, or is it only slightly greater than zero?



Thanks!



EDIT:



I would like to conclude by "summarising" in my own non-mathematical terms what I think I may have learned from reading the answers to my question. Thanks to everyone who has participated!




Regarding infinitely short line segments and sizeless points, it seems that they are indeed different notions; one appears to reflect an entity with the same dimension as the interval it makes up (1) while the other reflects an entity with a lower dimension (0). In more geometrical terms (which I find easier to visualise) I interpret that as meaning that an infinitely short line segment represents a distance along one axis, while a sizeless point represents no distance at all.



Regarding infinitesimals, it turns out most of them are not real, that is, most of them are not part of the set of real numbers; they are numbers whose absolute value is smaller than any positive real number. But there is an integer (a number without any fractional part), and therefore also a real number, whose absolute value is smaller than any positive real number, and that is zero, of course; zero does not have a fractional part and it is smaller than any positive real number. Hence, zero is also an infinitesimal. But it is not necessarily exactly like other infinitesimals, because it seems you cannot add zero to itself any number of times and arrive at anything other than zero, while you can add other infinitesimals to themselves and arrive at real values.



And regarding 0.999... being exactly 1, I think I now understand what is going on. First, I apologise for my use of a rather unconventional notation, so unconventional that even I didn't know exactly what it meant. The expressions '0.999...', '1.000...' and '0.000...' do not represent numerical values but procedures that can be followed in order to construct a numerical value. For example, in a different context, '0.9...' might be read as:



1) Start with '0.'
2) Add decimal '9'
3) Goto #2.



And the key thing is the endless loop.



The problem lay in the geometrical interpretation in my mind of the series of subtractions I presented; I started with a one-unit-long segment with a notch at 90% of the distance between both ends, representing a left sub-segment of 0.9 units of length and a right sub-segment of 0.1 units. I then moved the notch 90% closer to the right end, making the left sub-segment 0.99 units long and the right 0.01. I then zoomed my mind into the right sub-segment and again moved the notch to cover 90% of the remaining distance, getting 0.999 units of length on one side and 0.001 on the other. A few more iterations led me to erroneously conclude that the remaining distance is always greater than zero, regardless of the number of times I zoomed in and moved the notch.



What I had not realised is that every time I stopped to examine in my mind the remaining distance to the right of the notch, I was examining the effect of a finite number of iterations. First it was one, then two, then three and so on, but on none of those occasions had I performed an infinite number of iterations before examining the result. Every time I stopped to think, I was breaking instruction #3. So what I got was not a geometrical interpretation of '0.999...' but a geometrical interpretation of '0.' followed by an undetermined but finite number of decimal 9s. Not the same thing.



I now see how it does not matter what you write after the ellipsis, because you never get there. It is just like writing a fourth and a fifth instruction in the little program I am comparing the notation to:




1) Start with '0.'
2) Add decimal '9'
3) Goto #2.
4) Do something super awesome
5) Do something amazing


It doesn't matter what those amazing and super awesome things may be, because they will never be performed; they are information with no relevance whatsoever. Thus,



0.000...1 = 0.000...0 = 0.000...helloiambob




And therefore,



(1.000... - 0.999...) = 0.000...1 = 0.000...0 = (1.000... - 1.000...)



Probably not very conventional, but at least I understand it. Thanks again for all your help!


Answer




"I think of one in which both ends are separated by an infinitely short but greater than zero distance"





That does not exist within the real numbers. So what you think of "infinitely short line segment" does not exist within the context of real numbers.




"And if I can remember correctly from my school years, when I was taking limits I could assume that, as a variable approached some value, it never quite got to the value so I could simplify expressions that would otherwise result in 0/0 indeterminations by assuming that they represented tiny non-zero values divided by themselves and thus yielding 1. That, again, suggests to me that infinitesimals are not equal to zero."




When taking a limit $\lim_{x\to 0} f(x)$, $x$ is not some "infinitesimal." You're most likely stuck because you only have an intuitive notion of the limit; I suggest you look up a rigorous definition of the limit. Furthermore, regarding the simplification of indeterminate expressions, there are related questions on this site that might help.





Regarding $0.9$, $0.99$, etc.




It is true that for any finite number of nines you end up with a one at the end, i.e.
$$1 - 0.\underbrace{99...99}_{n} = 0.\underbrace{00...00}_{n-1}1.$$



However, we are not talking about some finite number of nines; we're talking about the limit, which can be rigorously proven to be $1$, i.e.



$$\lim\limits_{n\to\infty} 0.\underbrace{99...99}_n = 1.$$





"So what is it? Is the distance between the points $(0.999..., 0)$ and $(1.000..., 0)$ equal to zero, or is it only slightly greater than zero?"




It is (exactly!) zero because we define $0.999...$ as the limit of the sequence $(0.9, 0.99, 0.999,...)$, which happens to be $1$.
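
To make that limit statement concrete, here is a minimal numerical sketch (an editorial addition, not part of the original answer) using exact rational arithmetic: the difference $1 - 0.\underbrace{9\ldots9}_n$ is exactly $10^{-n}$, which drops below any positive tolerance once $n$ is large enough.

from fractions import Fraction

# 0.99...9 with n nines is exactly (10^n - 1)/10^n, so the difference
# from 1 is exactly 10^(-n).
for n in (1, 5, 10, 50):
    partial = Fraction(10**n - 1, 10**n)
    print(n, float(1 - partial))

# "The limit equals 1" means precisely: for any eps > 0 there is an n
# beyond which 1 - partial < eps. No infinitesimal remainder survives.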


calculus - Compute the following sum for any x?

Compute the following sum for any x?



$\sum_{n=0}^\infty {(x-1)^n\over (n+2)!}$



I am having trouble computing that sum. It looks like a geometric series, but I don't know where to start.
Can anyone help me with some hints?

Thank you.
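
No answer was recorded for this question, but as a hint in the spirit asked for: shifting the index so the factorials line up with the exponential series suggests, for $t = x-1 \ne 0$, the closed form $\sum_{n\ge 0} t^n/(n+2)! = (e^t-1-t)/t^2$. The sketch below (an editorial addition) checks that numerically.

import math

def series(x, terms=60):
    # direct partial sum of sum_{n>=0} (x-1)^n / (n+2)!
    t = x - 1.0
    return sum(t**n / math.factorial(n + 2) for n in range(terms))

def closed_form(x):
    # (e^t - 1 - t)/t^2 with t = x - 1; the value at t = 0 is 1/2! = 0.5
    t = x - 1.0
    return 0.5 if t == 0 else (math.expm1(t) - t) / t**2

for x in (0.0, 0.5, 1.0, 2.0, 3.5):
    print(x, series(x), closed_form(x))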

calculus - Value of $\int_0^\infty \frac{b\sin{ax} - a\sin{bx}}{x}\,dx$?



$$\int ^\infty_0 \frac{b\sin{ax} - a\sin{bx}}{x}dx$$




Hello guys! I'm having trouble solving this integral... it looks an awful lot like a Frullani integral, and I've tried to get it into an appropriate form to apply that method, but I've failed so far. Also, Wolfram gives a result lacking any logarithms, which may hint that it cannot be solved in that way. Can anyone help me here? (I have to solve this without using any complex math.)


Answer



Hint



Split your integral into two pieces and make the appropriate changes of variable so that each integrand looks like $$\frac{\sin(y)}{y}~dy$$ You will then arrive at $$\int \frac{b\sin{ax} - a\sin{bx}}{x}dx=b \text{Si}(a x)-a \text{Si}(b x)$$ from which $$\int ^\infty_0 \frac{b\sin{ax} - a\sin{bx}}{x}dx=\frac{1}{2} \pi \Big(b~~ \text{sgn}(a)-a~~ \text{sgn}(b)\Big)$$
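
As a numerical sanity check (an editorial addition, using mpmath's sine integral), the truncated integral $b\,\mathrm{Si}(aR)-a\,\mathrm{Si}(bR)$ indeed approaches the closed form:

import math
from mpmath import si  # sine integral Si(x); an odd function of x

def closed_form(a, b):
    sgn = lambda v: (v > 0) - (v < 0)
    return 0.5 * math.pi * (b * sgn(a) - a * sgn(b))

# Splitting as in the hint gives \int_0^R = b*Si(aR) - a*Si(bR).
R = 1e7
for a, b in [(1.0, 2.0), (3.0, -5.0), (-2.0, 7.0)]:
    print((a, b), float(b * si(a * R) - a * si(b * R)), closed_form(a, b))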


Limit of this recursive sequence and convergence



$$a_{n+1}=\sqrt{4a_n+3}, \qquad a_1=5$$
I can solve simpler ones, but I get stuck here because I can't find an upper bound or the roots of the quadratic in $a_{n+1} -a_n= \frac{4a_n+3 - a_n^2}{\sqrt{4a_n +3}+a_n}$ needed to establish monotonicity.
I tried the generic approach from Convergence and limit of a recursive sequence but ran into difficulties.


Answer



Prove it using induction. Whether the sequence is increasing or decreasing depends on the value of $a_1$. Observe that $$a_n \le a_{n+1} \implies 4a_n + 3 \le 4a_{n+1} + 3 \implies a_{n+1} \le a_{n+2}$$ and similarly $$a_n \ge a_{n+1} \implies 4a_n + 3 \ge 4a_{n+1} + 3 \implies a_{n+1} \ge a_{n+2}.$$ Thus if $a_1 \le a_2$ the full sequence is nondecreasing and if $a_1 \ge a_2$ the full sequence is nonincreasing. In either case the sequence is monotonic and bounded between $a_1$ and the positive fixed point, hence convergent, and the limit $L$ must satisfy $L=\sqrt{4L+3}$, i.e. $L^2-4L-3=0$, so $L=2+\sqrt7$. With $a_1=5>2+\sqrt7$ the sequence is nonincreasing and converges to $2+\sqrt7\approx 4.646$.
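
A few iterations (an editorial addition) make the behaviour visible: starting from $a_1 = 5$ the sequence decreases monotonically toward $2+\sqrt 7 \approx 4.6458$.

import math

a = 5.0
for n in range(1, 8):
    print(n, a)
    a = math.sqrt(4 * a + 3)  # a_{n+1} = sqrt(4 a_n + 3)

print("fixed point 2 + sqrt(7) =", 2 + math.sqrt(7))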


Sunday 24 July 2016

integration - Find the value of $\lim\limits_{n \to \infty}n\left(\left(\int_0^1\frac{1}{1+x^n}\,\mathrm{d}x\right)^n-\frac{1}{2}\right)$




Find the value of the limit $$\lim_{n \to \infty}n\left(\left(\int_0^1 \frac{1}{1+x^n}\,\mathrm{d}x\right)^n-\frac{1}{2}\right)$$




I can't evaluate the integral $\int_0^1 \frac{1}{1+x^n}\,\mathrm{d}x$ in closed form, but maybe the question doesn't require evaluating it.
Apparently $\lim_{n \to \infty}\left(\int_0^1 \frac{1}{1+x^n}\,\mathrm{d}x\right)^n$ should be $\frac{1}{2}$ for the question to make sense. That's all I know.


Answer




Let $I(n)$ be given by the integral



$$\begin{align}
I(n)&=\int_0^1 \frac{1}{1+x^n}\,dx \tag 1\\\\
\end{align}$$



Then, expanding the integrand of the integral on the right-hand side of $(1)$ in the Taylor series $ \frac{1}{1+x^n}=\sum_{k=0}^\infty (-1)^kx^{nk}$ reveals



$$\begin{align}
I(n)&=\sum_{k=0}^\infty \frac{(-1)^k}{nk+1}\\\\

&=1+\frac1n \sum_{k=1}^\infty\frac{(-1)^k}{k+1/n} \tag 2
\end{align}$$






Next, using the fact that $\log(2)= \sum_{k=1}^\infty\frac{(-1)^{k-1}}{k}$ and that $\frac{\pi^2}{12}=-\sum_{k=1}^\infty \frac{(-1)^k}{k^2}$, we can write the series in $(2)$ as



$$\begin{align}
\sum_{k=1}^\infty\frac{(-1)^k}{k+1/n} &=-\log(2)+\sum_{k=1}^\infty (-1)^k\left(\frac{1}{k+1/n}-\frac1k\right)\\\\
&=-\log(2)-\frac1n \sum_{k=1}^\infty \frac{(-1)^k}{k(k+1/n)}\\\\

&=-\log(2)-\frac1n \sum_{k=1}^\infty\frac{(-1)^k}{k^2}-\frac1n\sum_{k=1}^\infty (-1)^k\left(\frac{1}{k(k+1/n)}-\frac{1}{k^2}\right)\\\\
&=-\log(2)+\frac{\pi^2}{12n}+O\left(\frac1{n^2}\right) \tag 3
\end{align}$$






Using $(1)-(3)$ yields



$$I(n)=1-\frac{\log(2)}{n}+\frac{\pi^2}{12n^2}+O\left(\frac1{n^3}\right) \tag 4$$







Next, using $(4)$, we can write



$$\begin{align}
\left(I(n)\right)^n&=e^{n\log\left(1-\frac{\log(2)}{n}+\frac{\pi^2}{12n^2}+O\left(\frac1{n^3}\right)\right)}\\\\
&=e^{-\log(2)+\frac{\pi^2}{12n}-\frac{\log^2(2)}{2n}+O\left(\frac{1}{n^2}\right)}\\\\
&=\frac12 \left(1+\frac{\pi^2}{12n}-\frac{\log^2(2)}{2n}+O\left(\frac{1}{n^2}\right)\right)
\end{align}$$




Finally, we have




$$\begin{align}
\lim_{n\to \infty}\left(n\left(\left(I(n)\right)^n-\frac12\right)\right)&=\lim_{n\to \infty}\left(\frac{\pi^2}{24}-\frac{\log^2(2)}{4}+O\left(\frac1n\right)\right)\\\\
&=\bbox[5px,border:2px solid #C0A000]{\frac{\pi^2}{24}-\frac{\log^2(2)}{4}}
\end{align}$$




And we are done!
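
As a numerical check of the boxed constant (an editorial addition), $I(n)$ can be evaluated through the alternating series for $I(n)$ derived above, which mpmath's nsum accelerates reliably:

import mpmath as mp

mp.mp.dps = 30
target = mp.pi**2 / 24 - mp.log(2)**2 / 4
print("limit:", mp.nstr(target, 10))

# I(n) = sum_{k>=0} (-1)^k/(n k + 1), the series from the answer
for n in (10, 100, 1000):
    I = mp.nsum(lambda k: (-1)**k / (n * k + 1), [0, mp.inf])
    print(n, mp.nstr(n * (I**n - mp.mpf(1) / 2), 10))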



real analysis - Limit: $\lim\limits_{n\rightarrow\infty}\left( n\bigl(1-\sqrt[n]{\ln(n)}\bigr) \right)$




I find it difficult to evaluate $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )$$ I tried to use the fact that $$\frac{1}{1-n} \geqslant \ln(n)\geqslant 1+n$$
which gives $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ) \geqslant \lim_{n\rightarrow\infty} n(1-\sqrt[n]{1+n}) =\lim_{n\rightarrow\infty}n \cdot\lim_{n\rightarrow\infty}(1-\sqrt[n]{1+n})$$ $$(1-\sqrt[n]{1+n})\rightarrow -1\Rightarrow\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )\rightarrow-\infty$$ Is it correct? If not, what am I doing wrong?


Answer



Since it's so hard let's solve it in one line



$$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )=-\lim_{n\rightarrow\infty}\left(\frac{\sqrt[n]{\ln(n)}-1}{\displaystyle\frac{1}{n}\ln(\ln (n)) }\cdot \ln(\ln (n))\right)=-(1\cdot \infty)=-\infty.$$



Chris.
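
Numerics agree (an editorial addition): since $\sqrt[n]{\ln n} = e^{\ln\ln n/n}$, the expression equals $-n\,(e^{\ln\ln n/n}-1)\approx -\ln\ln n$, which creeps toward $-\infty$ extremely slowly.

import math

# ln(n)^(1/n) = exp(ln(ln n)/n); using expm1 avoids the catastrophic
# cancellation that computing 1 - ln(n)^(1/n) directly would suffer.
for n in (1e2, 1e4, 1e8, 1e16, 1e32):
    value = -n * math.expm1(math.log(math.log(n)) / n)
    print(f"n = {n:.0e}: {value:10.6f}   (-ln ln n = {-math.log(math.log(n)):10.6f})")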


probability - Expected value with nine-sided die

You have a fair nine-sided die, where the faces are numbered from $1$ to $9.$ You roll the die repeatedly, and write the number consisting of all your rolls so far, until you get a multiple of $3.$ For example, you could roll an $8,$ then a $2,$ then a $5.$ You would stop at this point, because $825$ is divisible by $3$, but $8$ and $82$ are not.



Find the expected number of times that you roll the die.




I am fairly new to the concept of expected value, and I don't really know how to go about solving this. It would be great if someone could help.
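
No answer was recorded here, but a simulation (an editorial addition) points at the value. A number is divisible by $3$ exactly when its digit sum is, and each roll adds $0$, $1$ or $2$ modulo $3$ with probability $1/3$ each, so the roll count should be geometric with success probability $1/3$ and mean $3$:

import random

def rolls_until_multiple_of_3(rng):
    # track the running number mod 3, i.e. the digit sum mod 3
    state, count = 0, 0
    while True:
        state = (state + rng.randint(1, 9)) % 3
        count += 1
        if state == 0:
            return count

rng = random.Random(0)
trials = 200_000
mean = sum(rolls_until_multiple_of_3(rng) for _ in range(trials)) / trials
print(mean)  # empirically close to 3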

derivatives - Continuous but nowhere differentiable function on domain

I am trying to come up with an example of a function which is continuous on $[0,1]$ but nowhere differentiable, yet which is in some way simpler than the Weierstrass function, or "more intuitive" so to speak.



Do you know any such example?

Nesbitt's Inequality for 4 Variables




I'm reading Pham Kim Hung's 'Secrets in Inequalities - Volume 1', and I have to say, based on the first few examples, that it is not a very good book. It is definitely not beginner friendly.



Anyway, it is proven by the author, that for four variables $a, b, c$, and $d$, each being a non-negative real number, the following inequality holds:
$$\frac{a}{b+c} + \frac{b}{c+d} + \frac{c}{d+a} + \frac{d}{a+b}\ge 2$$



I have no idea how the author proves this. It comes under the very first section, AM-GM. I get the original Nesbitt's inequality in 3 variables that the author proves (which is also cryptic, but I was able to decipher it).



My effort: I understood how the author defines the variables $M, N$ and $S$.
$$S = \frac{a}{b+c} + \frac{b}{c+d} + \frac{c}{d+a} + \frac{d}{a+b}$$
$$M = \frac{b}{b+c} + \frac{c}{c+d} + \frac{d}{d+a} + \frac{a}{a+b}$$

$$N = \frac{c}{b+c} + \frac{d}{c+d} + \frac{a}{d+a} + \frac{b}{a+b}$$



$M + N = 4$, pretty straightforward. The fractions with common denominators pair up to give four 1s.



Then the author, without any expansion/explanation, says



$$M + S = \frac{a+b}{b+c} + \frac{b+c}{c+d} + \frac{c+d}{d+a} + \frac{d+a}{a+b}\ge 4$$



Which is also true, since the AM-GM inequality says




$$\frac{M+S}{4}\ge \left(\frac{a+b}{b+c}\cdot\frac{b+c}{c+d}\cdot\frac{c+d}{d+a}\cdot\frac{d+a}{a+b}\right)^{1/4}$$



The RHS above evaluates to $1^{1/4}$ since all the numerators and denominators cancel out.



The next part is the crux of my question.



The author claims,



$$N + S =\frac{a+c}{b+c}+\frac{a+c}{a+d}+\frac{b+d}{c+d} + \frac{b+d}{a+b}\ge\frac{4(a+c)}{a+b+c+d}+\frac{4(b+d)}{a+b+c+d}$$




This is completely bizarre to me! Where did the author manage to get a sum of $(a+b+c+d)$ in the denominator?



As a side note, I'd definitely not recommend this book for any beginner in basic algebraic inequalities (even though the title of the book promotes that it's a treatment of basic inequalities). The author takes certain 'leaps of faith', just assuming that the student reading the book would be able to follow.


Answer



Since we have $(x-y)^2\ge 0$, we have, for $x\gt 0,y\gt 0$,
$$\begin{align}(x-y)^2\ge 0&\Rightarrow x^2+y^2+2xy\ge 4xy\\&\Rightarrow y(x+y)+x(x+y)\ge 4xy\\&\Rightarrow \frac{1}{x}+\frac 1y\ge\frac{4}{x+y}\end{align}$$
Now set $x=b+c,y=a+d$ and $x=c+d,y=a+b$ to get
$$\frac{1}{b+c}+\frac{1}{a+d}\ge\frac{4}{b+c+a+d}$$and$$\frac{1}{c+d}+\frac{1}{a+b}\ge\frac{4}{c+d+a+b}.$$ Multiplying the first inequality by $(a+c)$ and the second by $(b+d)$ and adding them gives exactly the author's claimed bound, so $N+S\ge\frac{4(a+c)+4(b+d)}{a+b+c+d}=4$. Together with $M+S\ge 4$ and $M+N=4$, adding the first two and subtracting the third yields $2S\ge 4$, i.e. $S\ge 2$.
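
A quick random stress test (an editorial addition) of both the four-variable inequality and the lemma $\frac1x+\frac1y\ge\frac{4}{x+y}$ used above:

import random

rng = random.Random(1)
worst = float("inf")
for _ in range(100_000):
    a, b, c, d = (rng.uniform(0.01, 10) for _ in range(4))
    S = a/(b+c) + b/(c+d) + c/(d+a) + d/(a+b)
    worst = min(worst, S)
    # the lemma, for x = b + c and y = a + d
    x, y = b + c, a + d
    assert 1/x + 1/y >= 4/(x + y) - 1e-12

print("smallest S found:", worst)  # stays above 2; equality needs a = c, b = d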


probability - Fundamental theorem of calculus - range instead of point by point



I read the following in a math article about continuous sample spaces:





We need to have P(Ω) = 1, i.e., P([0, T]) = 1. On the other hand, in the first experiment, all points in the interval [0, T] seem to be equiprobable. And, since the sum of the probabilities P(t) must be 1, it looks like we have arrived at an impossible situation. If P(t) is non-zero, the sum of all probabilities will be infinite; if P(t) is 0, the sum will vanish as well. The apparent paradox is resolved by pointing out that the notion of the sum of a continuum of values is commonly replaced by an integral - the concept taught at the beginning Calculus courses. The probabilities on a continuous sample space should be defined somehow differently and not point-by-point.




I am interested in the last few lines of the paragraph. It states that instead of calculating point by point, we shift to adding by small ranges.



Although I'm quite familiar with calculus, I never thought of this. Why can't we add point by point?



Instead of $\int f(x) \, dx$, where we add over small ranges, why can't we have $\sum f(x)$, where we add point by point?



I am missing something fundamental in my concepts. I am not able to understand this intuitively.



Answer



This is similar to what I answered in another question.



The sum and the integral are closely related; closer than the definition of an integral might make you think.
Let me start a little further back by discussing ways to measure size.



There are two very natural ways to measure size of set on the real line: "length" and "number of points".
For a single point the first measure gives zero and the second one gives one.
For an open interval the first measure gives the distance between the endpoints and the second one gives infinity.
A single point is negligible in the sense of length but not in the sense of amount.




These different ways of associating sets with sizes are called measures.
The measure corresponding to length is called the Lebesgue measure, and the one corresponding to number of points is called the counting measure.
These may or may not mean anything to you at this point, but you will encounter them later on if you continue working with real analysis.



The natural way to measure the size of an event in the theory of probability is the probability of that event.
The measure of a set corresponds to a probability, and probabilities are in fact measures if you dig a little deeper.



Once you have a measure, you can integrate with respect to that measure.
The integral with respect to the Lebesgue measure is the usual integral you know (or rather an extension of it).

The integral with respect to the counting measure is the sum.
That is, for $A\subset\mathbb R$ and $f\colon A\to\mathbb R$ both
$$
\int_Af(x)dx
$$
and
$$
\sum_{x\in A}f(x)
$$
are integrals of the function $f$ over the set $A$.

But these integrals are taken with respect to different measures and they are very different things.



The interval $[0,1]$ consists of points, so its total length must be the sum of the lengths of its points.
(We can make sense of uncountable sums. Especially if all numbers are zero, this is easy: then the sum is indeed zero.)
But all points have length zero, so the length of $[0,1]$ is zero, too!
This is not a bad argument, but it turns out that lengths — or measures in general — do not have such an additivity property.
However, if you only take a finite or countable union of disjoint sets, the length of the union is the sum of the lengths.
(I'm ignoring technicalities related to measurability as they are beside the point.)
The set $[0,1]$ is uncountable, so this "naive geometric reasoning" fails, and it can be enlightening to figure out why.




The remaining question is then why the Lebesgue measure was chosen and not the counting measure for probability on $[0,1]$.
Counting measures (with suitable normalization) can be used for probability, and you can in fact see all introductory probability with finite sets as an example of this.
There are too many points on $[0,1]$ to count: the counting measure is infinite.
The total measure should be one, and there is no way to normalize it.
On the other hand, the Lebesgue measure of $[0,1]$ is one, and it matches the geometric intuition of length.



Summing over the index set $[0,1]$ (as opposed to the usual $\mathbb N$ or $\mathbb Z$) can be defined, and it corresponds to integrating a function on $[0,1]$ with respect to the counting measure.
However, uncountable sums are somewhat ill-behaved, and things usually go awry when you end up using one.
The sum $\sum_{x\in[0,1]}f(x)$ of a function $f\colon[0,1]\to[0,\infty)$ can only be finite if only countably many values of $f(x)$ are non-zero.
That is, any sum of nonzero elements over an uncountable set is necessarily infinite.

If you want to use a counting-type measure on $[0,1]$ and have total measure (probability) one, you need to choose a countable set of points of $[0,1]$ with non-zero measure and let all other points (most of them!) have zero measure.
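
To see the normalization problem concretely, here is a small sketch (an editorial addition): a uniform counting-type measure on $n$ grid points assigns each point probability $1/n$, which vanishes as $n$ grows, while the probability it assigns to a fixed subinterval agrees with length all along.

# Discrete uniform measure on n grid points of [0,1] vs. length.
for n in (10, 1000, 100_000):
    per_point = 1 / n  # -> 0: no usable per-point probability survives
    grid = [k / n for k in range(n)]
    p_subinterval = sum(1 / n for x in grid if 0.25 <= x < 0.75)
    print(n, per_point, p_subinterval)  # stays ~ 0.5, the length of [0.25, 0.75)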


calculus - Evaluate $\int_0^{\pi/2} \log(1+\cos x)\, dx$



Find the value of $\displaystyle \int_0^{{\pi}/{2}} \log(1+\cos x)\ dx$



I tried to put $1+ \cos x = 2 \cos^2 \frac{x}{2} $, but I am unable to proceed further.
I think the following integral can be helpful:
$\displaystyle \int_0^{{\pi}/{2}} \log(\cos x)\ dx =-\frac{\pi}{2} \log2 $.


Answer




Using Weierstrass substitution
$$
t=\tan\frac x2\qquad;\qquad\cos x=\frac{1-t^2}{1+t^2}\qquad;\qquad dx=\frac{2}{1+t^2}\ dt
$$
we obtain
\begin{align}
\int_0^{\Large\frac\pi2}\ln(1+\cos x)\ dx&=2\underbrace{\int_0^1\frac{\ln2}{1+t^2}\ dt}_{\color{blue}{\text{set}\ t=\tan\theta}}-2\color{red}{\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}\\
&=\frac{\pi}{2}\ln2-2\color{red}{\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}.\tag1
\end{align}
Consider

\begin{align}
\int_0^\infty\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt&=\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt+\underbrace{\int_1^\infty\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}_{\large\color{blue}{t\ \mapsto\ \frac1t}}\\
&=2\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt-2\int_0^1\frac{\ln t}{1+t^2}\ dt\\
\color{red}{\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}&=\frac12\underbrace{\int_0^\infty\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}_{\color{blue}{\text{set}\ t=\tan\theta}}+\int_0^1\frac{\ln t}{1+t^2}\ dt\\
&=-\underbrace{\int_0^{\Large\frac\pi2}\ln\cos\theta\ d\theta}_{\color{blue}{\Large\text{*}}}+\sum_{k=0}^\infty(-1)^k\underbrace{\int_0^1 t^{2k}\ln t\ dt}_{\color{blue}{\Large\text{**}}}\\
&=\frac\pi2\ln2-\sum_{k=0}^{\infty} \frac{(-1)^k}{(2 k+1)^2}\\
&=\frac\pi2\ln2-\text{G},\tag2
\end{align}
where $\text{G}$ is Catalan's constant.




$(*)$ can be proven by using the symmetry of $\ln\cos\theta$ and $\ln\sin\theta$ in the interval $\left[0,\frac\pi2\right]$ and $(**)$ can be proven by using formula
$$
\int_0^1 x^\alpha \ln^n x\ dx=\frac{(-1)^n n!}{(\alpha+1)^{n+1}}, \qquad\text{for }\ n=0,1,2,\ldots
$$

Thus, plugging in $(2)$ to $(1)$ yields
\begin{align}
\int_0^{\Large\frac\pi2}\ln(1+\cos x)\ dx
=\large\color{blue}{2\text{G}-\frac{\pi}{2}\ln2}.
\end{align}
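
A numerical confirmation (an editorial addition); mpmath provides Catalan's constant directly:

import mpmath as mp

mp.mp.dps = 25
integral = mp.quad(lambda x: mp.log(1 + mp.cos(x)), [0, mp.pi / 2])
closed = 2 * mp.catalan - mp.pi / 2 * mp.log(2)
print(mp.nstr(integral, 20))  # the two agree to working precision
print(mp.nstr(closed, 20))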


abstract algebra - The square roots of different primes are linearly independent over the field of rationals



I need to find a way of proving that the square roots of a finite set
of different primes are linearly independent over the field of
rationals.




I've tried to solve the problem using elementary algebra and also using the theory of field extensions, without success. Proving the linear independence of the square roots of two primes is easy, but beyond that my problems arise. I would be very thankful for an answer to this question.

Saturday 23 July 2016

integration - Real integral using residue theorem - why doesn't this work?




Consider the following definite real integral:
$$I = \int_{0}^\infty dx \frac{e^{-ix} - e^{ix}}{x}$$



Using the $\text{Si}(x)$ function, I can solve it easily,
$$I = -2i \int_{0}^\infty dx \frac{e^{-ix} - e^{ix}}{-2ix} = -2i \int_{0}^\infty dx \frac{\sin{x}}{x} = -2i \lim_{x \to \infty} \text{Si}(x) = -2i \left(\frac{\pi}{2}\right) = - i \pi,$$
simply because I happen to know that $\mathrm{Si}(x)$ asymptotically approaches $\pi/2$.



However, if I try to calculate it using the residue theorem, I get the wrong answer, off by a factor of $2$ and I'm not sure if I understand why. Here's the procedure:
$$I= \int_{0}^\infty dx \frac{e^{-ix}}{x} - \int_{0}^\infty dx \frac{ e^{ix}}{x} = \color{red}{-\int_{-\infty}^0 dx \frac{e^{ix}}{x}} - \int_{0}^\infty dx \frac{ e^{ix}}{x}

= -\int_{-\infty}^\infty dx \frac{e^{ix}}{x} $$

Then I define $$I_\epsilon := -\int_{-\infty}^\infty dx \frac{e^{ix}}{x-i\varepsilon}$$ for $\varepsilon > 0$ so that$$I=\lim_{\varepsilon \to 0^+} I_\varepsilon.$$
Then I complexify the integration variable and integrate over a D-shaped contour over the upper half of the complex plane. I choose that contour because
$$\lim_{x \to +i\infty} \frac{e^{ix}}{x-i\varepsilon} = 0$$ and it contains the simple pole at $x_0 = i \varepsilon$. Using the residue theorem with the contour enclosing $x_0$ $$I_\varepsilon = -2 \pi i \, \text{Res}_{x_0} \left( \frac{e^{ix}}{x-i\varepsilon}\right) = -2 \pi i \left( \frac{e^{ix}}{1} \right)\Bigg\rvert_{x=x_0=i\varepsilon}=-2 \pi i \, e^{-\varepsilon}.$$
Therefore,
$$I=\lim_{\varepsilon \to 0^+} \left( -2 \pi i \, e^{-\varepsilon} \right) = -2\pi i.$$



However, that is obviously wrong. Where exactly is the mistake?


Answer



You've replaced the converging integral $\int_0^\infty \frac{\mathrm{e}^{-\mathrm{i} x} - \mathrm{e}^{\mathrm{i} x}}{x} \,\mathrm{d}x$ with two divergent integrals, $\int_0^\infty \frac{\mathrm{e}^{-\mathrm{i} x}}{x} \,\mathrm{d}x$ and $\int_0^\infty \frac{\mathrm{e}^{\mathrm{i} x}}{x} \,\mathrm{d}x$. (That something divergent has been introduced is evident in your need to sneak up on a singularity at $0$ that was not in the original integral.)




Also, notice that your D-shaped contour does not go around your freshly minted singularity at $x = 0$. The singularity lands on your contour. See the Sokhotski–Plemelj theorem to find that the multiplier for the residue of the pole is $\pm \pi \mathrm{i}$, not $\pm 2 \pi \mathrm{i}$.
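
The factor-of-two point can be seen numerically (an editorial addition). Over the symmetric domain $[-R,-\varepsilon]\cup[\varepsilon,R]$ the $\cos x/x$ part is odd and cancels, leaving $2i\int_\varepsilon^R \frac{\sin x}{x}\,dx = 2i\,(\mathrm{Si}(R)-\mathrm{Si}(\varepsilon)) \to \pi i$, i.e. half a residue, so $I = -\mathrm{PV}\!\int_{-\infty}^{\infty}\frac{e^{ix}}{x}\,dx = -\pi i$, matching the $\mathrm{Si}$ computation.

import mpmath as mp

mp.mp.dps = 20
# principal value over [-R, -eps] U [eps, R]: the even part sin(x)/x survives
for R in (1e2, 1e4, 1e8):
    print(R, 2j * mp.si(R))  # tends to pi*i, not 2*pi*i

print("I = -PV =", -2j * mp.si(1e8))  # approximately -pi*i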


integration - Check convergence of $\int_{0}^{1}\frac{dx}{|\sin{x}|^{1/2}}$




Check convergence of integral: $$ \int_{0}^{1}
\frac{dx}{|\sin{x}|^{1/2}} $$





My attempt



Dirichlet's test wouldn't help here, so I am going to use the comparison test:



$$ x \ge \sin{x} \\
\frac{1}{x} \le \frac{1}{\sin{x}} \\
\frac{1}{|x|} \le \frac{1}{|\sin{x}|} \\
\frac{1}{|x|^{1/2}} \le \frac{1}{|\sin{x}|^{1/2}}$$

So




$$ \int_{0}^{1} \frac{dx}{|\sin{x}|^{1/2}} \ge \int_{0}^{1} \frac{dx}{|x|^{1/2}} $$
But $\int_{0}^{1} \frac{dx}{|x|^{1/2}}$ converges, so this bound doesn't help me.


Answer



Instead of using the upper bound $\sin x\le x$, use the lower bound $\sin x\ge\frac{2x}{\pi}$, which follows from $\sin x$ being concave on $[0,\,\pi/2]$. Then $\frac{1}{|\sin x|^{1/2}}\le\sqrt{\frac{\pi}{2}}\,\frac{1}{x^{1/2}}$ on $(0,1]$, and since $\int_0^1 x^{-1/2}\,dx = 2$ is finite, the original integral converges by comparison.
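
A numerical check (an editorial addition) that the comparison is sound: the integral is finite and sits below the bound $\int_0^1 \sqrt{\pi/2}\,x^{-1/2}\,dx = 2\sqrt{\pi/2}$.

import mpmath as mp

mp.mp.dps = 20
integral = mp.quad(lambda x: 1 / mp.sqrt(mp.sin(x)), [0, 1])
bound = 2 * mp.sqrt(mp.pi / 2)  # integral of sqrt(pi/2) * x^(-1/2) over [0,1]
print(mp.nstr(integral, 12), "<=", mp.nstr(bound, 12))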


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...