Sunday 31 December 2017

complex analysis - Where does this bound on polynomial come from?



Suppose that $p(z)$ is a polynomial of degree $k$ with leading coefficient equal to $1$. Moreover, assume that all the zeros of $p$ lie in the unit disc. Then for $z$ sufficiently large,




$$|p(z)| \ge \frac{9|z|^{k}}{10}$$



The proof that was given was: the result follows immediately from



$$|1/z| \lt \frac{1}{(10k)(\max_j|a_{j}|+1)}$$



But I don't understand where they came up with this or how it implies the result. Can anyone help me to understand? I am really not sure; I tried to think about it in terms of a radius of convergence.



say we write




$$p(z)=a_{0}+a_{1}z+...+a_{k-1}z^{k-1}+z^{k}$$



$$p(z)=z^{k}(1+a_{k-1}/z+...+a_{0}/z^{k})$$



I thought maybe, since we are told that the zeros are only in the unit disc, then for $z$ large (i.e., $z$ away from the unit disc and from $z=0$), $p(z)$ is maybe holomorphic and so has a convergent power series?



But I am really not sure and very confused.


Answer



Can you show that for $z$ sufficiently large the inequality below holds?




$$\left|1+\frac{a_{k-1}}{z}+...+\frac{a_{0}}{z^{k}}\right|\geq \frac{9}{10}$$



What happens as $z \to \infty$?
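One way to see it (a sketch added here for illustration, not part of the original answer): if $|z| > 10k\,(\max_j|a_j|+1)$, then in particular $|z|\ge 1$, so the triangle inequality gives
$$\left|\frac{a_{k-1}}{z}+\cdots+\frac{a_{0}}{z^{k}}\right| \le \sum_{j=0}^{k-1}\frac{|a_j|}{|z|^{k-j}} \le \frac{k\max_j|a_j|}{|z|} < \frac{k(\max_j|a_j|+1)}{10k(\max_j|a_j|+1)} = \frac{1}{10},$$
hence $\left|1+\frac{a_{k-1}}{z}+\cdots+\frac{a_{0}}{z^{k}}\right| \ge 1-\frac{1}{10} = \frac{9}{10}$, and therefore $|p(z)| \ge \frac{9|z|^{k}}{10}$.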


summation - Prove by Induction : $\sum n^3=(\sum n)^2$

I am trying to prove that for any integer $n \ge 1$, this is true:



$$ (1 + 2 + 3 + \cdots + (n-1) + n)^2 = 1^3 + 2^3 + 3^3 + \cdots + (n-1)^3 + n^3$$



I've done the base case and I am having problems in the step where I assume that the above is true and try to prove for $k = n + 1$.



I managed to get,



$$(1 + 2 + 3 + \cdots + (k-1) + k + (k+1))^2 = (1 + 2 + 3 + \cdots + (k-1) + k)^2 + (k + 1)^3$$




but I'm not quite sure what to do next as I haven't dealt with cases where both sides could sum up to an unknown integer.
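One way to finish the step (a sketch added here for illustration): write $S_k = 1+2+\cdots+k = \frac{k(k+1)}{2}$. Then
$$\big(S_k + (k+1)\big)^2 = S_k^2 + 2(k+1)S_k + (k+1)^2 = S_k^2 + k(k+1)^2 + (k+1)^2 = S_k^2 + (k+1)^3,$$
which, combined with the induction hypothesis $S_k^2 = 1^3 + 2^3 + \cdots + k^3$, is exactly the identity displayed above.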

functional equations - Proof that solutions to Cauchy F.E. over $\mathbb{Q}$ are linear



I would like to prove that the solutions to Cauchy's functional equation over $\mathbb{Q}$ are linear, that is, all solutions $f:\mathbb{Q}\rightarrow\mathbb{Q}$ have the property $f(x)=cx$ for some $c\in\mathbb{Q}$.



The proof is to be done in a very specific order: First, we must prove that it works for all $x\in\mathbb{N}$, then for all $x\in\mathbb{Z}$. Third we must show it works for all $\frac{1}{x}$ where $x\in\mathbb{Z}$. Finally, we must show that it works for all $\frac{p}{q}$ where $p,q\in\mathbb{Z}$, $q\neq 0$.



I am having trouble with the third part, where I have to show it works for $\frac{1}{x}$.
Here is what I have done up to this point:




Let $f$ be a solution to the Cauchy functional equation.
We know that $f(0)=c\cdot 0 = 0$, since $f(x)=f(0+x)=f(0)+f(x)$ for any number $x$.
Suppose $x\in\mathbb{N}$ and $x\neq 0$.
Then,
\begin{align*}
f(x) & =\underbrace{f(1)+f(1)+\dots+f(1)}_{\text{$x$ times}}
\\ & = x\cdot f(1)
\\ & =x\cdot c
\end{align*}

If $x\in\mathbb{Z}$ and $x\neq 0$, then we have two cases: $x>0$ or $x<0$.
For $x>0$, the proof with $x\in\mathbb{N}$ suffices.

If $x<0$, then, using $f(-1)=-f(1)=-c$ (which follows from $f(1)+f(-1)=f(0)=0$), we have
\begin{align*}
f(x) & =\underbrace{f(-1)+f(-1)+\dots+f(-1)}_{\text{$|x|$ times}}
\\ & = |x|\cdot f(-1)
\\ & = (-x)\cdot(-c)
\\ & =x\cdot c
\end{align*}



I don't know how to proceed with this method to show that $f(\frac{1}{x})=\frac{c}{x}$.


Answer



The right way to proceed lies in the realization that your argument for $x \in \mathbb{N}$ can be extended slightly to say that if $x \in \mathbb{N}$, then $f(x k) = x f(k)$. Why? For the same reason you pointed out. If $x \in \mathbb{N}$, then

$$f(xk) = f(k + k + \dots +k) = f(k) + \dots + f(k)=x f(k)$$
This result is also what provides the last part of the question.



With this in hand, you're in fine shape! This is because you know $f\left(x\cdot \frac{1}{x}\right) = f(1) = c = x\, f\left(\frac{1}{x}\right)$.
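Spelled out (a short sketch added for illustration, using the same idea for positive integers $x$, $p$, $q$; negative numerators follow from the integer case):
$$c = f(1) = f\!\left(x\cdot\tfrac{1}{x}\right) = x\,f\!\left(\tfrac{1}{x}\right) \;\Longrightarrow\; f\!\left(\tfrac{1}{x}\right) = \frac{c}{x}, \qquad f\!\left(\tfrac{p}{q}\right) = p\,f\!\left(\tfrac{1}{q}\right) = \frac{cp}{q}.$$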


Saturday 30 December 2017

integration - Converting this summation into an integral

This summation includes a sum of $n$ derivatives of the function $f(x)$ at the point $(c+d)/2$.




I'm trying to convert a Taylor polynomial into an integral. Since I don't understand the MathExchange syntax, I have attached a couple pictures.



The first is the general problem:



[image from the original post]



The second is my conversion to a Taylor series summation:



(Sorry I left out that n goes from 0 to infinity)




[image from the original post]



Now, how do I convert it into an integral?

elementary number theory - Find all $x$ such that $11\mid 3x+7$



I found this question in Beachy and Blair's Abstract Algebra book; they even have a solution to it, but it's not satisfactory to me. They only say "$x\equiv 5 \pmod{11}$", which one can "feel" simply by trial and error. I would like to know what the proper approach is. Thank you in advance!


Answer



We need $3x+7\equiv 0\pmod{11}$



Add 4 to both sides:




$$3x+11\equiv 4\pmod{11}$$



reduce:



$$3x \equiv 4\pmod{11}$$



multiply both sides by a number to make the coefficient on the left equivalent to $1$. In this case, $4$ works:



$$12x\equiv 16\pmod{11}$$




reduce:



$$x\equiv 5\pmod{11}$$



Does that work for you?
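For completeness, here is a tiny script (an illustrative sketch added here, not part of the original answer) that finds the same solution via a modular inverse:

```python
# Solve 3x + 7 ≡ 0 (mod 11) using the modular inverse of 3.
# Note: pow(a, -1, m) requires Python 3.8+.
m = 11
inv3 = pow(3, -1, m)      # 4, since 3 * 4 = 12 ≡ 1 (mod 11)
x = (-7 * inv3) % m       # multiply both sides of 3x ≡ -7 by the inverse
print(inv3, x)            # prints: 4 5, i.e. x ≡ 5 (mod 11), matching the answer
```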


Friday 29 December 2017

Proving by induction that $\frac{n(n + 1)(2n + 1)}{6} = 0^2 + 1^2 + 2^2 + 3^2 + ... + n^2$

Note: I am asking this question as a simple introductory question to proofs by induction, to which I will also give my formal answer (which should be correct; if not, please comment) for future visitors of this website.



I know that to prove something by induction, we formally need 3 steps (2, if we consider the inductive hypothesis and inductive step together):





  1. Base case or basis:



    We determine what is the base case for our statement $P(n)$, and we immediately prove it.


  2. Inductive hypothesis:



    We assume our statement $P(n)$ is true for $n$.


  3. Inductive step:



    We try to prove that our statement is also true for $P(n + 1)$. If we can prove that, we can also prove that our statement is true for $n + 2$ (we don't need to do it), and so on.




    This step is usually the most difficult part, because it involves some algebraic manipulation or some imagination. For this last reason, it might be helpful to look at other, similar proofs by induction to solve the current one.




For example, suppose our $P(n)$ is the following:



$$\frac{n(n + 1)(2n + 1)}{6} = 0^2 + 1^2 + 2^2 + 3^2 + ... + n^2$$



which is basically saying that the sum of all squares of the natural numbers from $0$ to $n$ can be determined by the formula on the left side of the equation.



How would we prove $P(n)$ is true for all $n \in \mathbb{N}$?




For more information about the proof by induction, see the Wikipedia page about induction called Mathematical induction.
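For reference, the inductive step comes down to the following identity (a sketch added here for illustration):
$$\frac{n(n + 1)(2n + 1)}{6} + (n+1)^2 = \frac{(n+1)\big[n(2n+1) + 6(n+1)\big]}{6} = \frac{(n+1)(2n^2+7n+6)}{6} = \frac{(n+1)(n+2)(2n+3)}{6},$$
which is the claimed formula with $n$ replaced by $n+1$.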

linear algebra - Can we prove that matrix multiplication by its inverse is commutative?

We know that $AA^{-1} = I$ and $A^{-1}A = I$, but is there a proof for the commutative property here? Or is this just the definition of invertibility?

real analysis - The n-th root of a prime number is irrational

If $p$ is a prime number, how can I prove by contradiction that the equation $x^{n}=p$ doesn't admit solutions in $\mathbb {Q}$, where $n\ge2$?

divisibility - Which sets of base 10 digits have the property that, for every $n$, there is an $n$-digit number made up of these digits that is divisible by $5^n$?



Which sets of non-zero base 10 digits have the property that, for every $n$, there is an $n$-digit number made up of these digits that is divisible by $5^n$?



This is an extension of
Prove that for any integer $n>0$, there exists a number consisting of 1's and 2's only, which is divisible by $2^n$.



Here is my answer:




Any set of digits which form a complete residue set modulo 5. In particular, any 5 consecutive digits.



A proof by induction is not too difficult.


Answer




Partial result . . .


Claim:


If $S \subseteq \{1,2,3,4,5,6,7,8,9\}$ contains a complete residue system, mod $5$, then for all positive integers $n$, there is an $n$-digit number $x$ such that




  • All digits of $x$ are elements of $S$.$\\[4pt]$
  • $5^n{\mid}x$.



Proof:


Assume $S \subseteq \{1,2,3,4,5,6,7,8,9\}$ contains a complete residue system, mod $5$.


Necessarily $5 \in S$, hence the claim holds for $n=1$.


Proceed by induction on $n$.


Fix $n \ge 1$, and let $x$ be an $n$-digit number such that all digits of $x$ are elements of $S$, and $5^n{\mid}x$.


Let $y ={\large{\frac{x}{5^n}}}$.


Choose $d \in S$ such that $d(2^n) + y \equiv 0\;(\text{mod}\;5)$.


Let $x'=d(10^n)+x$.


Then $x'$ is an $(n+1)$-digit number, all of whose digits are elements of $S$.
\begin{align*}
\text{Also,}\;\;x'&=d(10^n)+x\\[4pt]
&=d(2^n)(5^n)+(5^n)y\\[4pt]
&=(5^n)(d(2^n)+y)\\[4pt]
\end{align*}
which is a multiple of $5^{n+1}$.


Thus, the induction is complete.


Update:


For $S \subseteq \{1,2,3,4,5,6,7,8,9\}$, call $S$ qualifying if for every positive integer $n$, there is an $n$-digit number $x$ such that





  • All digits of $x$ are in $S$.$\\[4pt]$
  • $5^n{\mid}x$.


As was shown, if $S \subseteq \{1,2,3,4,5,6,7,8,9\}$, and $S$ contains a complete residue system, mod $5$, then $S$ is qualifying.


For $S \subseteq \{1,2,3,4,5,6,7,8,9\}$, call $S$ a minimal exception if





  • $S$ is a qualifying set.$\\[4pt]$
  • $S$ does not contain a complete residue system, mod $5$.$\\[4pt]$
  • No proper subset of $S$ is qualifying.


Conjecture:


There are exactly $11$ minimal exceptions, as listed below:
\begin{align*}
&\{1, 2, 3, 5, 6, 7\}\\[4pt]
&\{1, 2, 3, 5, 6, 8\}\\[4pt]
&\{1, 2, 3, 5, 7, 8\}\\[4pt]
&\{1, 2, 5, 6, 7, 8\}\\[4pt]
&\{1, 2, 5, 6, 7, 9\}\\[4pt]
&\{1, 3, 5, 6, 7, 8\}\\[4pt]
&\{2, 3, 4, 5, 7, 9\}\\[4pt]
&\{2, 3, 5, 6, 7, 8\}\\[4pt]
&\{2, 3, 5, 7, 8, 9\}\\[4pt]
&\{2, 4, 5, 6, 7, 9\}\\[4pt]
&\{3, 4, 5, 7, 8, 9\}\\[4pt]
\end{align*}
Remarks: All of the above sets survived testing for $1 \le n \le 10000$.
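For anyone who wants to experiment, here is a small script (an illustrative sketch added here, not part of the original answer; the function name is just for illustration) that tests whether a given digit set works up to a chosen $n$, by extending known solutions one leading digit at a time:

```python
def qualifying_up_to(S, max_n):
    """Return True if, for every n <= max_n, some n-digit number with all
    digits in S is divisible by 5**n (digits assumed non-zero)."""
    solutions = [0]                      # n = 0: the empty number
    for n in range(1, max_n + 1):
        mod = 5 ** n
        new_solutions = [d * 10 ** (n - 1) + x
                         for x in solutions
                         for d in S
                         if (d * 10 ** (n - 1) + x) % mod == 0]
        if not new_solutions:
            return False
        solutions = new_solutions
    return True

print(qualifying_up_to({1, 2, 3, 5, 6, 7}, 12))  # a conjectured minimal exception: True
print(qualifying_up_to({1, 3, 7, 9}, 12))        # no multiple of 5 available: False
```

Keeping every solution at each stage is wasteful but keeps the sketch short; as in the proof above, only the residue of $x/5^{n-1}$ modulo $5$ actually matters for the next step.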


Thursday 28 December 2017

Probability of an event happening N or more times



I need to determine the probability of an event happening N or more times over M iterations. I know the probability of the event in question happening and its likelihood does not change over time.



For example, say I am rolling a six-sided die and I want to know what is the probability that over the course of 100 rolls I'll roll a 1 or a 2 at least 75 times?



Is there a nice and tidy formula for computing such probabilities?




Thanks


Answer



Yes - it is called a binomial distribution. The probability that a trial with success probability $p$ succeeds exactly $k$ times out of $n$ independent trials is



$$P(k) = \binom{n}{k} p^k (1-p)^{n-k}$$



For your numbers, the probability is



$$P(K \ge N) = \sum_{k=N}^M \binom{M}{k} p^k (1-p)^{M-k}$$




For your die example:



$$P(K \ge 75) = \sum_{k=75}^{100} \binom{100}{k} \left (\frac13\right)^k \left ( \frac{2}{3}\right )^{100-k} \approx 1.89 \cdot 10^{-17}$$



Here is the full distribution over all rolls, which explains why this probability is so small:



[plot from the original post: the full binomial distribution over all rolls]



ADDENDUM




You may also have noticed that the distribution looks quite normal. In fact, for a large number of trials $M$, it is well known that binomial distributions (among others) approximate a normal distribution very well. In this case, $\mu = M p = 100/3$ and $\sigma^2 = M p (1-p) = 200/9$.
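If you want to check the number above numerically, here is a quick sketch (my own addition, not part of the original answer; the function name is just for illustration) using only the standard library:

```python
from math import comb

def tail_probability(M, N, p):
    """P(K >= N) for K ~ Binomial(M, p)."""
    return sum(comb(M, k) * p ** k * (1 - p) ** (M - k) for k in range(N, M + 1))

print(tail_probability(100, 75, 1/3))   # roughly 1.9e-17, matching the value quoted above
```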


trigonometry - Confirmation of Proof: $\cos^2(x) = \frac 12\big(1+\cos(2x)\big)$ using Euler's identity?



How can I prove the following equation: $$\cos^2(x) = \frac 12\big(1+\cos(2x)\big),\tag1$$ using Euler's identity $$e^{i\pi} + 1 = 0, \qquad \text{where } e^{i\theta} = \cos\theta + i\sin\theta = \operatorname{cis}\theta\,?$$






I have tried solving Euler's formula for $\cos$ on one side and squaring that, but haven't had luck reducing it to the desired form as outlined in $(1)$.
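For what it's worth, one possible route (a sketch added here for illustration, not part of the original post) is to write $\cos x$ in terms of exponentials and square:
$$\cos^2 x = \left(\frac{e^{ix}+e^{-ix}}{2}\right)^2 = \frac{e^{2ix} + 2 + e^{-2ix}}{4} = \frac{1}{2}\left(1 + \frac{e^{2ix}+e^{-2ix}}{2}\right) = \frac{1}{2}\big(1+\cos(2x)\big).$$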

Wednesday 27 December 2017

metric spaces - Correctness of Analysis argument with Cauchy sequences



Let $(x_n)$ and $(y_n)$ be Cauchy sequences in $(X, d)$. Show that $(x_n)$ and $(y_n)$ converge to the same limit iff $d(x_n, y_n) \rightarrow 0$



Proof $\rightarrow$



Suppose $(x_n) \to a$ and $(y_n) \to a$, then

\begin{equation}
\forall_{\epsilon > 0} \exists_{N_1} s.t. \forall_{n>N_1} \implies d(x_n, a) < \dfrac{\epsilon}{2} \\
\forall_{\epsilon > 0} \exists_{N_2} s.t. \forall_{n>N_2} \implies d(y_n, a) < \dfrac{\epsilon}{2}
\end{equation}



Given $\epsilon>0$ take $N=\max(N_1, N_2)$ then $\forall_{n>N}$ we have:
\begin{align}
d(x_n, y_n) &\leq d(x_n, a) + d(a, y_n) \\
& < \dfrac{\epsilon}{2} + \dfrac{\epsilon}{2} = \epsilon
\end{align}




What is the cleanest way to finish this piece of the proof? Technically I feel like I could do the following:



Consider $d(d(x_n, y_n), 0)$:
\begin{align}
d(d(x_n, y_n), 0) &< d(\epsilon, 0) < \epsilon \implies d(x_n, y_n) \to 0
\end{align}



What can I do to make this direction of the proof cleaner? Note that I only used the fact that the sequences converged and nothing special about Cauchy sequences. Additionally, what is the formal definition of $d(x_n, y_n) \to 0$?




ADDED



$\leftarrow$ Suppose $d(x_n, y_n) \to 0$ that is $\forall_{\epsilon > 0} \exists_{N_1} s.t \forall_{n > N_1} \implies |d(x_n, y_n) - 0| < \epsilon$. Given $\epsilon > 0$ take $N > N_1$ then by the triangle inequality we have:



\begin{align}
0 < d(x_n, y_n) \leq d(x_n, a) + d(a, y_n) < \epsilon
\end{align}
We know this $a$ exists as $(X, d)$ is a complete metric space and $(x_n)$ and $(y_n)$ are Cauchy. Thus we have that $d(x_n, a) < \epsilon \land d(y_n, a) < \epsilon$.
Therefore, $(x_n)\to a \land (y_n)\to a$




How does the other direction look?


Answer



Remember that your metric d is real valued, and the real numbers are a metric space. Thus, the formal definition of $d(x_n, y_n) \rightarrow 0$ is the same as for convergence of a sequence in a metric space, i.e., for every $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that for all $n \geq N$ we have $d_{\mathbb{R}}(d(x_n,y_n), 0) < \epsilon$.



Note the $d_{\mathbb{R}}$! This is the metric on the real numbers (where $d(x_n,y_n)$ and $0$ live). It is defined by $d_{\mathbb{R}}(x,y) = |x -y|$. So a much less obfuscated way to write the definition above is the following:



$d(x_n, y_n) \rightarrow 0$ if for every $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that for all $n \geq N$ we have $|d(x_n,y_n) - 0| < \epsilon$. But this is exactly what you have shown in the part before your proposed finish! You can actually stop right there with the $\epsilon$. You could also add a note about how you know $d(x_n,y_n) \geq 0$.



The error in what you go on to write is that $d(d(x_n,y_n), 0)$ may not be defined since $d$ is a metric on $X$ while $d(x_n, y_n)$ and $0$ are in $\mathbb{R}$.




As for the Cauchy assumption, you need that to do the converse (along with completeness of $X$).


limits - Does $\lim\limits_{x\to0}\operatorname{sgn} (x)$ exist?



I have a problem with this exercise



Does this limit exist?




$$\lim\limits_{x\to0} \operatorname{sgn} (x)$$



This limit should exist and its value is $0$ according to our textbook. It is also written that we can prove it by using one-sided limits.
And there is a problem, because as I see it



$$\lim_{x\to0^-} \operatorname{sgn} (x) = -1$$



$$\lim_{x\to0^+} \operatorname{sgn} (x) = 1$$



(Because the limit goes very close to $0$, but it never reaches it. I also think it is very similar to the proof of the non-existence of $\displaystyle \lim_{x\to0} \sin\frac 1 x$.)




I also tried online limit calculators and they said that the one-sided limits equal $0$.



Could you help me find a problem in my approach?



Thanks for your time!


Answer



If the book says the limit is $0$, then it is wrong.



If $\lim\limits_{x\to0+}$ and $\lim\limits_{x\to0-}$ both exist (as finite numbers) and are not equal to each other, then $\lim\limits_{x\to0}$ does not exist.




In some contexts, it might make sense to say it exists as a "principal value", taking an average: $\displaystyle \frac 1 2 \left( \lim_{x\to0+} + \lim_{x\to0-} \right),$ but that is not what is conventionally done when the concept of limit is first introduced, and I would allow it only when the context for it has been explicitly set.


Tuesday 26 December 2017

discrete mathematics - Sum of cubes proof

Prove that for any natural number n the following equality holds:



$$ (1+2+ \ldots + n)^2 = 1^3 + 2^3 + \ldots + n^3 $$



I think it has something to do with induction?

calculus - Unsolved Elementary Integrals

I am currently in Integral Calculus, and I wondered if I could get a little creative with my practice. I was curious if there were any unsolved but rather simple integrals (solvable using methods taught in Calc I/II), so that I could be practicing the integration techniques I have learned while also solving a problem that hasn't been solved yet.



I have consulted lists of integrals, but those are in a more general form (constants are represented by letters, etc.), and I was wondering if there was a place I could find lists of indefinite integrals that had never been calculated but weren't that difficult.



Also, on a side note, if you happen to know any good resources for reading about more techniques of integration, I am currently looking to expand my "toolbox" of integration methods.



Thank you all, and I hope each and every one of you is having a nice day.

functional analysis - Show that the supremum over a bounded set is continuous.

Let $A \subseteq \mathbb{R}^n$. $S_A(x)$ is defined as the following



$$
S_A(x)=\sup_{y \in A} x^Ty
$$

where $x \in \mathbb{R}^n$.




Show that if $A$ is a bounded set, then $S_A(x)$ is a continuous function.



I tried the following:



Let $A$ be a bounded set in $\mathbb{R}^n$ and $y \in A$. So $\|y\| \leq M$ where $M>0$. (If $M=0 \rightarrow \|y\|=0 \rightarrow y=0 \rightarrow S_A(x)=0\rightarrow S_A(x)$ is continuous $\forall x$).



Let $\|x-x_c\|<\delta=\frac{\epsilon}{M}, \,\,\,\forall \epsilon>0$ be a neighborhood of $x_c$ where $x_c \in \mathbb{R}^n$.



I need to show that

$$
|S_A(x)-S_A(x_c)|=|\sup_{y \in A} x^Ty-\sup_{y \in A} x_c^Ty|<\epsilon
$$



How can I proceed?
Please follow my proof and complete it.

real analysis - Does $\sum \frac{\sin^k j}{j}$ converge or diverge?



Does $\sum \frac{\sin^k j}{j}$ converge or diverge where $k$ is a positive integer? I know that for $k=1$ the series converges and for $k=2$ the series diverges. What can we say about its convergence when $k>2$? The usual proof for $k=2$ is to reduce the summand to $\frac{1-\cos\left( 2j \right)}{2j}$ using a trig identity, but it's not clear to me how this can be generalized to $k>2$.


Answer



It converges for odd values of $k$ and it diverges for even values of $k$.




Indeed, if $k$ is even then $\sin^k(n)$ has a positive mean value and the divergence is a consequence of Kronecker's lemma.



If $k$ is odd then $\sin^k(n)$ can be written as a linear combination of $\sin(n),\sin(3n),\sin(5n)$ etcetera, and
$$ \sum_{n\geq 1}\frac{\sin(nx)}{n} $$
equals the $2\pi$-periodic extension of the function which equals $\frac{\pi-x}{2}$ on $(0,2\pi)$.



Examples:



$$\sum_{n\geq 1}\frac{\sin^3(n)}{n}=\frac{3}{4}\sum_{n\geq 1}\frac{\sin(n)}{n}-\frac{1}{4}\sum_{n\geq 1}\frac{\sin(3n)}{n}=\frac{3}{4}\cdot\frac{\pi-1}{2}-\frac{1}{4}\cdot\frac{\pi-3}{2}=\color{red}{\frac{\pi}{4}}.$$
Since $\sin^4(x)=\frac{3}{8}-\frac{1}{2}\cos(2x)+\frac{1}{8}\cos(4x)$,

$$ \sum_{n=1}^{N}\frac{\sin^4(n)}{n} \geq \frac{3}{8}H_N - C.$$


Monday 25 December 2017

algebra precalculus - How can $\frac{4}{3} \times 3=4$ if $\frac{4}{3}$ is $1.3$?

Ok, use your closest calculator and type $\frac{4}{3}$, which is $1.3333333333$, and then multiply it by $3$, which gives $3.9999999999$; but then type $\frac{4}{3} \times 3$ and you get $4$. How? How can it be $4$ if $\frac{4}{3}$ is $1.3333333333$ and multiplying that by $3$ gives $3.9999999999$?

Is there a name for infinite series of this type?



I asked a question about this series;



$(1 - \frac12)+(\frac13 - \frac14)(1 - \frac12 + \frac13)+(\frac15 - \frac16)(1 - \frac12 + \frac13 - \frac14 + \frac15)+(\frac17 - \frac18)(1 - \frac12 + \frac13 - \frac14 + \frac15 - \frac16 + \frac17)+...$



in a previous thread, and something else I'd like to know about it is whether there is a name for series where the coefficient of each term is a partial sum. Furthermore, is there a general method for finding the closed-form sums of such series?



Answer



There is no special name since what you have is just a double summation instead of a single summation. Your series is nothing but
$$\sum_{n=0}^{\infty} \sum_{k=1}^{2n+1} \left(\dfrac1{2n+1}-\dfrac1{2n+2}\right) \left(\dfrac{(-1)^{k-1}}{k} \right)$$
which can also be written as a single summation
$$\sum_{n=0}^{\infty} \left(\dfrac1{2n+1}-\dfrac1{2n+2}\right) \left(H_{2n+1} - H_n\right)$$


Continuing "Pascal's triangle" for negative binomial exponents




Does there exist a pattern for the coefficients in a negative binomial expansion? This question is not about the explicit formula, but rather if there exist a continuation for Pascal's triangle.



$$\begin{array}l
(a+b)^{-2} &=&&&& \color{red}?\\
(a+b)^{-1} &=&&&& \color{red}?\\
(a+b)^{0} &=&&&& 1\\
(a+b)^{1} &=&&& 1a &+& 1b\\
(a+b)^{2} &=&& 1a^2 &+& 2ab &+&1b^2\\
(a+b)^{3} &=& 1a^3 &+& 3a^2b &+& 3ab^2 &+& 1b^3 &
\end{array}$$




It would obviously not be a triangle given that it's an infinite sum, but it seems reasonable that there should be a similar interpretation.


Answer



There is a continuation respecting the addition law
\begin{align*}
\binom{p+1}{q}=\binom{p}{q}+\binom{p}{q-1}
\end{align*}




This way we can write

\begin{array}{l|rrrrrrrrrr}
(1+x)^{-3}&\color{grey}{0}&\color{grey}{0}&1&-3x&6x^2&-10x^3&15x^4&-21x^5&28x^6&\ldots\\
(1+x)^{-2}&\color{grey}{0}&\color{grey}{0}&1&-2x&3x^2&-4x^3&5x^4&-6x^5&7x^6&\ldots\\
(1+x)^{-1}&\color{grey}{0}&\color{grey}{0}&1&-x&x^2&-x^3&x^4&-x^5&x^6&\ldots\\
(1+x)^{0}&\color{grey}{0}&\color{grey}{0}&1&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}\\
(1+x)^{1}&\color{grey}{0}&\color{grey}{0}&1&x&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}\\
(1+x)^{2}&\color{grey}{0}&\color{grey}{0}&1&2x&x^2&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}\\
(1+x)^{3}&\color{grey}{0}&\color{grey}{0}&1&3x&3x^2&x^3&\color{grey}{0}&\color{grey}{0}&\color{grey}{0}\\
\end{array}





For negative exponents $-n$ with $n>0$ we have
\begin{align*}
(1+x)^{-n}&=\sum_{j=0}^\infty\binom{-n}{j}x^j
=\sum_{j=0}^\infty\binom{n+j-1}{j}(-1)^jx^j\\
\end{align*}




Hint: See table 164 in Concrete Mathematics by R.L. Graham, D.E. Knuth and O. Patashnik.




Sunday 24 December 2017

algebra precalculus - Integration by partial fractions; how and why does it work?



Could someone take me through the steps of decomposing
$$\frac{2x^2+11x}{x^2+11x+30}$$
into partial fractions?



More generally, how does one use partial fractions to compute integrals

$$\int\frac{P(x)}{Q(x)}\,dx$$
of rational functions ($P(x)$ and $Q(x)$ are polynomials) ?






This question is asked in an effort to cut down on duplicates. See Coping with *abstract* duplicate questions.



Also see List of Generalizations of Common Questions.


Answer



What you are trying to do is a partial fraction decomposition.




Idea. Imagine your calculator is broken and a bunch of hoodlums stop you on the street, and demand at knifepoint that you compute $\frac{191}{105}$ as a decimal (part of their initiation into the Mathies Gang, you see), or else they will slit your throat. Unfortunately, since your calculator is broken, you can really only do divisions if the divisor is a single digit number (so you can use your fingers to do the operations; you're very good with those, because you know the multiplication tables of single digit numbers...). Luckily, you do notice that $105 = 3\times 5 \times 7$. Is there some way you can save your neck using this observation?



Well, you do know that $191 = 105 + 86$, so at least you know that $\frac{191}{105} = 1 + \frac{86}{105}$, so that takes care of the integer part of the fraction. What about $\frac{86}{105}$? Aha! Here's a clever idea: maybe $\frac{86}{105}$ is really the result of a sum of fractions! If you had a sum of fractions of the form
$$\frac{A}{3} + \frac{B}{5} + \frac{C}{7}$$
then to write it as a single fraction you would find the common denominator, $105$, and then do a bunch of operations, and end up with a fraction $\frac{\mathrm{something}}{105}$. If you can find an $A$, $B$, and $C$ so that the something is $86$, then instead of computing $\frac{86}{105}$ you can do $\frac{A}{3}$, $\frac{B}{5}$, and $\frac{C}{7}$ (which you can do, since the denominators are single digit numbers), and then add those decimals to get the answer. Can we? We do a bit of algebra:
$$\frac{A}{3} + \frac{B}{5} + \frac{C}{7} = \frac{35A + 21B + 15C}{105}$$
so you want $35A + 21B + 15C = 86$. As luck would have it, $A=B=1$ and $C=2$ works, so
$$\frac{86}{105} = \frac{1}{3} + \frac{1}{5} + \frac{2}{7}.$$
And now, all is well:

\begin{align*}
\frac{191}{105} &= 1 + \frac{86}{105}\\\
&= 1 + \frac{1}{3} + \frac{1}{5} + \frac{2}{7}\\\
&= 1 + (0.3333\ldots) + (0.2) + (0.285714285714\overline{285714}\ldots)
\end{align*}
and you can give those dangerous hoodlums their answer, and live to derive another day.





Your problem. You want to do something similar with the polynomial quotient, with denominators that are "easy"; in this case, degree $1$. The first task is to make the fraction "less than $1$", by making sure the numerator has degree less than the denominator. You do this with polynomial long division (see also this recent answer). Doing the long division mentally, we have: to get $2x^2$ from $x^2+11x+30$ we multiply by $2$:
$$2x^2 + 11x = (x^2+11x+30)(2+\cdots)$$

that produces unwanted $11x + 60$, (well, you get $22x + 60$, but you do want $11x$ of those, so you only have a leftover of $11x+60$); nothing to do about them, except cancel them out after the product is done. So you have
$$2x^2 + 11x = (x^2+11x+30)(2) - (11x+60).$$
So you can write
$$\frac{2x^2+11x}{x^2+11x+30} = 2 + \frac{-(11x+60)}{x^2+11x+30}.$$
Now we got the "integer part", and we work on the "fraction part". The denominator factors as $(x+5)(x+6)$, so we want to think of that fraction as the end result of doing a sum of the form
$$\frac{A}{x+5} + \frac{B}{x+6}.$$
Because the sum is "smaller than $1$" (numerator of degree smaller than the denominator) each of these fractions should also be "smaller than one". So both the $A$ and $B$ will be constants.

So we have:
\begin{align*}
\frac{-11x - 60}{x^2+11x+30} &= \frac{A}{x+5} + \frac{B}{x+6} \\\
&= \frac{A(x+6)}{(x+5)(x+6)} + \frac{B(x+5)}{(x+6)(x+5)}\\\
&= \frac{A(x+6) + B(x+5)}{(x+5)(x+6)}\\\
&= \frac{Ax + 6A + Bx + 5B}{x^2+11x+30} = \frac{(A+B)x + (6A+5B)}{x^2+11x+30}.
\end{align*}
For this to work out, you need $(A+B)x + (6A+5B) = -11x-60$. That means we need $A+B=-11$ (so the $x$s agree) and $6A+5B = -60$ (so the constant terms agree).



That means $A=-11-B$ (from the first equation). Plugging into the second equation, we get
$$-60 = 6A+5B = 6(-11-B)+5B = -66 -6B + 5B = -66-B.$$
So that means that $B=60-66 = -6$. And since $A+B=-11$, then $A=-5$.




(An alternative method for finding the values of $A$ and $B$ is the Heaviside cover-up method; from the fact that
$$-11x - 60 = A(x+6)+B(x+5)$$
we know that the two sides must take the same value for every value of $x$; if we plug in $x=-6$, this will "cover up" the $A$ on the right and we simply get $B(-6+5) = -B$; this must equal $-11(-6)-60 = 6$; so $6 = -B$, hence $B=-6$. Then plugging in $x=-5$ "covers up" the $B$ to give us $-11(-5)-60 = A(-5+6)$, or $-5=A$. So we obtain $A=-5$, $B=-6$, same as before.)



That is,
$$\frac{-11x-60}{x^2+11x+30} = \frac{-5}{x+5} + \frac{-6}{x+6}.$$



Putting it all together, we have:
$$\frac{2x^2+11x}{x^2+11x+30} = 2 + \frac{-11x-60}{x^2+11x+30} = 2 - \frac{5}{x+5} - \frac{6}{x+6}.$$

And ta da! Your task is done. You've decomposed the quotient into partial fractions.
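As an aside (not part of the original answer), computer algebra systems automate this; for instance, SymPy's `apart` reproduces the decomposition just obtained:

```python
from sympy import symbols, apart

x = symbols('x')
print(apart((2*x**2 + 11*x) / (x**2 + 11*x + 30)))
# prints the decomposition 2 - 5/(x + 5) - 6/(x + 6) (term order may differ)
```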



Caveat. You need to be careful if the denominator has repeated factors, or has factors that are of degree $2$ and cannot be further factored (e.g., $x^2+1$); I talk about that below in the general discussion.



Now, let's hope the Mathies Gang doesn't ask for a proof of Goldbach's Conjecture from its next batch of pledges...






General Idea.




So let's discuss this idea for the general problem of integrating a rational function; that is, a function of the form
$$\frac{P(x)}{Q(x)}$$
where $P(x)=a_nx^n+\cdots +a_1x+a_0$ and $Q(x)=b_mx^m+\cdots + b_1x+b_0$ are polynomials.



Integrating polynomials is easy, so the first task is to do long division in order to write the fraction as a polynomial plus a rational function in which the numerator has degree smaller than the denominator. So we will consider only the case where $n\lt m$ going forward.



First, let us make sure we can actually do those fractions with "easy denominators." To me, "easy denominator" means (i) linear, $ax+b$; (ii) power of a linear polynomial, $(ax+b)^n$; (iii) irreducible quadratic; (iv) power of an irreducible quadratic. So let's talk about how to integrate those:




  1. When $Q(x)$ has degree $1$ and $P(x)$ is constant. For example, something like

    $$\int \frac{3}{2x-5}\,dx.$$
    These integrals are very easy to do: we do a change of variable $u=2x-5$, so $du=2dx$. We simply get
    $$\int\frac{3}{2x-5}\,dx = 3\int\frac{dx}{2x-5} = 3\int\frac{\frac{1}{2}du}{u} = \frac{3}{2}\ln|u|+C = \frac{3}{2}\ln|2x-5|+C.$$


  2. In fact, the idea above works whenever the denominator is a power of a degree $1$ polynomial and the numerator is a constant. If we had something like
    $$\int\frac{3}{(2x-5)^6}\,dx$$
    then we can let $u=2x-5$, $du=2dx$ and we get
    $$\begin{align*}
    \int\frac{3}{(2x-5)^6}\,dx &= 3\int\frac{\frac{1}{2}du}{u^6} = \frac{3}{2}\int u^{-6}\,du \\&= \frac{3}{2}\left(\frac{1}{-5}u^{-5}\right)+C\\ &= -\frac{3}{10}(2x-5)^{-5} + C.\end{align*}$$


  3. What if the denominator is an irreducible quadratic? Things get a little more complicated. The simplest example of an irreducible quadratic is $x^2+1$, and the easiest numerator is $1$. That integral can be done directly:
    $$\int\frac{1}{x^2+1}\,dx = \arctan(x)+C.$$

    If we have an irreducible quadratic of the form $x^2+a$, with $a\gt 0$, then we can always write it as $x^2+b^2$ with $b=\sqrt{a}$; then we can do the following: factor out $b^2$ from the denominator,
    $$\int\frac{dx}{x^2+b^2} = \int\frac{dx}{b^2((\frac{x}{b})^2 + 1)};$$
    and now setting $u=\frac{x}{b}$, so $du = \frac{1}{b}\,dx$, we get:
    $$\int\frac{dx}{x^2+b^2} = \frac{1}{b}\int\frac{1}{(\frac{x}{b})^2+1}\left(\frac{1}{b}\right)\,dx = \frac{1}{b}\int\frac{1}{u^2+1}\,du,$$
    and now we can do the integral easily as before.
    What if we have a more general irreducible quadratic denominator? Something like $x^2+x+1$, or something else with an $x$ term?



    The magic phrase here is "completing the square". We can write $x^2+x+1$ as
    $$x^2 + x + 1 = \left(x^2 + x + \frac{1}{4}\right) + \frac{3}{4} = \left(x+\frac{1}{2}\right)^2 + \frac{3}{4}.$$
    Then setting $w=x+\frac{1}{2}$, we end up with an integral that looks just like the previous case! For instance,

    $$\int\frac{dx}{x^2+x+1} = \int\frac{dx}{(x+\frac{1}{2})^2+\frac{3}{4}} = \int\frac{dw}{w^2+\frac{3}{4}},$$
    and we know how to deal with these.



    So: if the denominator is an irreducible quadratic, and the numerator is a constant, we can do the integral.


  4. What if the denominator is an irreducible quadratic, but the numerator is not constant? Since we can always do the long division, then we can take the numerator to be of degree $1$. If we are lucky, it's possible we can do it with a simple substitution; for example, to do
    $$\int\frac{2x+3}{x^2+3x+4}\,dx$$
    (note the denominator is irreducible quadratic), we can just let $u=x^2+3x+4$, since $du = (2x+3)\,dx$, exactly what we have in the numerator, so
    $$\int\frac{2x+3}{x^2+3x+4}\,dx = \int\frac{du}{u} = \ln|u|+C = \ln|x^2+3x+4|+C = \ln(x^2+3x+4) + C.$$
    If we are not lucky? Well, we can always make our own luck. For instance, if we had
    $$\int\frac{3x}{x^2+3x+4}\,dx$$

    then we can't just make the substitution $u=x^2+3x+4$; but if we wanted to do that anyway, we would need $2x+3$ in the numerator; so we play a little algebra game:
    $$\begin{align*}
    \frac{3x}{x^2+3x+4} &= 3\left(\frac{x}{x^2+3x+4}\right)\\
    &= 3\left(\frac{\frac{1}{2}(2x)}{x^2+3x+4}\right)\\
    &=\frac{3}{2}\left(\frac{2x}{x^2+3x+4}\right) &\text{(still not quite right)}\\
    &= \frac{3}{2}\left(\frac{2x+3-3}{x^2+3x+4}\right)\\
    &= \frac{3}{2}\left(\frac{2x+3}{x^2+3x+4} - \frac{3}{x^2+3x+4}\right).
    \end{align*}$$
    What have we accomplished? The first summand, $\frac{2x+3}{x^2+3x+4}$, is an integral we can do with a simple substitution; and the second summand, $\frac{3}{x^2+3x+4}$, is an integral that we just saw how to do! So in this way, we can solve this kind of integral. We can always rewrite the integral as a sum of an integral that we can do with a substitution, and an integral that is as in case 3 above.


  5. What if the denominator is a power of an irreducible quadratic? Something like $(x^2+3x+5)^4$? If the numerator is of degree at most $1$, then we can play the same game as we just did to end up with a sum of two fractions; the first one will have numerator which is exactly the derivative of $x^2+3x+5$, and the second will have a numerator that is constant. So we just need to figure out how to do an integral like

    $$\int\frac{2dx}{(x^2+3x+5)^5}$$
    with constant numerator and a power of an irreducible quadratic in the denominator.



    By completing the square as we did in 3, we can rewrite it so that it looks like
    $$\int\frac{dw}{(w^2+b^2)^5}.$$
    Turns out that if you do integration by parts, then you get a Reduction formula that says:
    $$\int\frac{dw}{(w^2+b^2)^n} = \frac{1}{2b^2(n-1)}\left(\frac{w}{(w^2+b^2)^{n-1}} + (2n-3)\int\frac{dw}{(w^2+b^2)^{n-1}}\right)$$
    By using this formula repeatedly, we will eventually end up in an integral where the denominator is just $w^2+b^2$... and we already know how to do those. So all is well with the world (with a lot of work, at least).





Okay. Do we now need to go and discuss what to do when the denominator is a cubic polynomial, then a fourth degree polynomial, then a fifth degree polynomial, etc.?



No! We can play the same game we did with fractions above, and take an arbitrary rational function and rewrite it as a sum of fractions, with each fraction a power of a degree 1 or an irreducible quadratic polynomial. The key to this is the Fundamental Theorem of Algebra, which says that every polynomial with real coefficients can be written as a product of linear and irreducible quadratic polynomials. In fact, one of the major reasons why people wanted to prove the Fundamental Theorem of Algebra was to make sure that we could do integrals of rational functions in the manner we are discussing.



So here is a method for integrating a rational function:




To compute
$$\int\frac{P(x)}{Q(x)}\,dx$$
where $P(x)$ and $Q(x)$ are polynomials:





  1. If $\deg(P)$ is equal to or greater than $\deg(Q)$, perform long division and rewrite the fraction as a polynomial plus a proper fraction (with numerator of degree strictly smaller than the denominator). Integrate the polynomial (easy).


  2. Completely factor the denominator $Q(x)$ into linear terms and irreducible quadratics. This can be very hard to do in practice. In fact, this step is the only obstacle to really being able to do these integrals always, easily. Factoring a polynomial completely and exactly can be very hard. The Fundamental Theorem of Algebra says that there is a way of writing $Q(x)$ that way, but it doesn't tell us how to find it.


  3. Rewrite the fraction as a sum of fractions, each of which has a denominator which is either a power of linear polynomial or of an irreducible quadratic. (More about this below.)


  4. Do the integral of each of the fractions as discussed above.





How do we rewrite as a sum?




Write $Q(x)$ as a product of powers of distinct polynomials, each linear or irreducible quadratic; e.g., $Q(x) = x(2x-1)(x+2)^3(x^2+1)(x^2+2)^2$.



For each power $(ax+b)^n$, use $n$ fractions of the form:
$$\frac{A_1}{ax+b} + \frac{A_2}{(ax+b)^2} + \cdots+\frac{A_n}{(ax+b)^n},$$
where $A_1,\ldots,A_n$ are constants-to-be-determined-later.



For each power of an irreducible quadratic, $(ax^2+bx+c)^m$, use $m$ fractions of the form:
$$\frac{C_1x+D_1}{ax^2+bx+c} + \frac{C_2x+D_2}{(ax^2+bx+c)^2} + \cdots + \frac{C_mx+D_m}{(ax^2+bx+c)^m},$$
where $C_1,D_1,\ldots,C_m,D_m$ are constants-to-be-determined-later.




So in the example above, with $Q(x) = x(2x-1)(x+2)^3(x^2+1)(x^2+2)^2$, we would get:
$$\begin{align*}
&\quad+\frac{A}{x} &\text{(corresponding to the factor }x\text{)}\\
&\quad+\frac{B}{2x-1} &\text{(corresponding to the factor }2x-1\text{)}\\
&\quad+\frac{C}{x+2} + \frac{D}{(x+2)^2}+\frac{E}{(x+2)^3} &\text{(corresponding to the factor }(x+2)^3\text{)}\\
&\quad+\frac{Gx+H}{x^2+1} &\text{(corresponding to the factor }x^2+1\text{)}\\
&\quad+\frac{Jx+K}{x^2+2} + \frac{Lx+M}{(x^2+2)^2}&\text{(corresponding to the factor }(x^2+2)^2\text{)}
\end{align*}$$




And now, the final step: how do we figure out what all those constants-to-be-determined-later are? We do the operation and compare it to the original! Let's say we are trying to calculate
$$\int\frac{3x-2}{x(x+2)^2(x^2+1)}\,dx$$
(not the same as above, but I want to do something small enough that we can do it).



We set it up as above; then we do the algebra. We have:
$$\begin{align*}
\frac{3x-2}{x(x+2)^2(x^2+1)} &= \frac{A}{x} + \frac{B}{x+2} + \frac{C}{(x+2)^2} + \frac{Dx+E}{x^2+1}\\
&= \frac{\small A(x+2)^2(x^2+1) + Bx(x+2)(x^2+1) + Cx(x^2+1) + (Dx+E)x(x+2)^2}{x(x+2)^2(x^2+1)}.
\end{align*}$$
Now we have two options: we can do the algebra in the numerator and write it as a polynomial. Then it has to be identical to $3x-2$. For example, the coefficient of $x^4$ in the numerator would be $A+B+D$, so we would need $A+B+D=0$; the constant term would be $4A$, so $4A=-2$; and so on.




The other method is the Heaviside cover-up method. Since the two expressions have to be the same, the numerator has to be the same as $3x-2$ when evaluated at every value of $x$. If we pick $x=0$ and plug it into the left numerator, we get $-2$; if we plug it into the right hand side, we get $A(2)^2(1) = 4A$, so $4A=-2$, which tells us what $A$ is. If we plug in $x=-2$, the left hand side is $3(-2)-2 = -8$, the right hand side is $C(-2)((-2)^2+1) = -10C$, so $-10C = -8$, which tells us what $C$ is; we can then simplify and continue doing this (you'll note I selected points where a lot of the summands on the right simply evaluate to $0$) until we get the value of all the coefficients.



And once we've found all the coefficients, we just break up the integral into a sum of integrals, each of which we already know how to do. And so, after a fair amount of work (but mainly just busy-work), we are done.


calculus - Evaluating the integral $int_0^infty frac{x sin rx }{a^2+x^2} dx$ using only real analysis



Calculate the integral$$ \int_0^\infty \frac{x \sin rx }{a^2+x^2} dx=\frac{1}{2}\int_{-\infty}^\infty \frac{x \sin rx }{a^2+x^2} dx,\quad a,r \in \mathbb{R}. $$
Edit: I was able to solve the integral using complex analysis, and now I want to try and solve it using only real analysis techniques.


Answer



It looks like I'm too late but still I wanna join the party. :D




Consider
$$
\int_0^\infty \frac{\cos rx}{x^2+a^2}\ dx=\frac{\pi e^{-ar}}{2a}, \qquad a, r > 0.
$$

Differentiating the both sides of equation above with respect to $r$ yields
$$
\begin{align}
\int_0^\infty \frac{d}{dr}\left(\frac{\cos rx}{x^2+a^2}\right)\ dx&=\frac{d}{dr}\left(\frac{\pi e^{-ar}}{2a}\right)\\
-\int_0^\infty \frac{x\sin rx}{x^2+a^2}\ dx&=(-a)\frac{\pi e^{-ar}}{2a}\\
\Large\int_0^\infty \frac{x\sin rx}{x^2+a^2}\ dx&=\Large\frac{\pi e^{-ar}}{2}.
\end{align}
$$
Done! :)


integration - The closed form representations of Integrals of logarithm functions

I wish to find a closed-form representation of the following integral:
$$\int\limits_{0}^1\frac{\log^p(x)\log^r\left(\frac{1-x}{1+x}\right)}{x}dx=?$$
Here $p\ge 1$ and $r\ge 0$ are nonnegative integers. Can it be expressed in terms of a linear combination of well-known constants (such as Riemann zeta values, $\pi$, et al.)?

Saturday 23 December 2017

calculus - Confusion around $\lim_{x \to 0}\frac{\sin x}{x},\ \sin x/x,\ \sin 0/0$

Recently in my calculus class I was taught that $\lim_{x\to 0}\frac{\sin x}{x} = 1$. Now, in trigonometry I have studied that $\sin 0=0$. Also, I don't see that $\sin x/x=1$. Now my confusion is: if I have to use these identities in some practical application, which of them should I use?

real analysis - How to prove that $\exp(x_1+x_2)=\exp(x_1)\exp(x_2)$?



In my analysis class, we covered the properties of the exponential function (as of now we use $\exp$ instead of $e$). One of the properties of $e$ is that $\exp(x_1+x_2)=\exp(x_1)\exp(x_2)$. In high school this was just assumed knowledge but now we have to prove that this statement is indeed true, which is proving to be quite difficult.



I assume that one would need to work out the derivatives of both sides and see if they are equal. Then, if $f(0)=1$, we can conclude that both sides of the equation are equal.


Answer



From your comments it appears that you are using the definition of $\exp(x) $ as the unique solution to $f'(x) =f(x), f(0)=1$. It can be proved that such a solution must be non-zero for all $x$ (see second part of this answer). Thus $\exp(x) \neq 0$ for all $x$. Let $a$ be any arbitrary real number and consider the function $g$ defined by $g(x) =\exp(x+a) /\exp(x) $. We have $$g'(x) =\frac{\exp (x) \exp(x+a) - \exp(x+a) \exp(x) } {(\exp(x)) ^2}=0$$ and thus $g$ is a constant. It follows that $g(x) =g(0)=\exp(a)$ and thus $$\exp(x+a) =\exp(x) \exp(a) $$ Now replace $x$ by $x_1$ and $a$ by $x_2$.







The above technique does not work if one chooses $$g(x) =\exp(x+a) - \exp(x) \exp(a) $$ because one can't see that $g'(x) =0$ in very obvious manner. But this can be done with some effort. Using above definition of $g$ one gets $g'(x) =g(x),g(0)=0$. Ideally when one studies the definition of exponential function as a solution of differential equation $f'(x) =f(x), f(0)=1$ then the first step is to show that if the solution exists then it must be unique. And that uniqueness is shown by the following:




Theorem: If function $g:\mathbb{R} \to\mathbb{R} $ satisfies $g'(x) =g(x), g(0)=0$ then $g(x) =0$ for all values of $x$.




On the contrary assume that there is some $a$ such that $g(a) \neq 0$ and consider $$h(x) =g(a+x) g(a-x) $$ then $$h'(x) =g(a+x) g(a-x) - g(a+x) g(a-x) =0$$ so that $h$ is constant and $h(x) =h(0)=g(a)g(a)>0$ or in other words $g(a+x) g(a-x) >0$. Putting $x=a$ and noting that $g(0)=0$ we see that $0>0$ and this contradiction shows that $g(x) =0$ for all $x$ and we are done. The same proof can be used to show that $\exp(x) \neq 0$. We just need to consider $h(x) =\exp(x) \exp(-x) $.


Friday 22 December 2017

calculus - How to prove that $\int_0^{\pi} \log(|\sin t|)\,\textrm{dt}\;\;\textrm{is integrable}$




How to prove that $$\int_0^{\pi} \log(|\sin t|)\,\textrm{dt} \;\;\textrm{is integrable }$$



Any hints would be appreciated.


Answer



Hint: It is true that $$\sin{t} \ge \frac{2}{\pi} t$$ on the interval $[0, \pi/2]$; try drawing the graph, or noting that $\sin{t}$ is concave down and we have equality at the endpoints. Now show that



$$\int_0^{\pi/2} |\ln{\frac{2}{\pi} t}| dt < \infty$$



Do something similar for the right-half of the interval.
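To make the hint explicit (a short added computation, under the bound stated above): substituting $u = \frac{2t}{\pi}$,
$$\int_0^{\pi/2} \Big|\ln\frac{2t}{\pi}\Big|\,dt = \frac{\pi}{2}\int_0^1 (-\ln u)\,du = \frac{\pi}{2}\Big[u - u\ln u\Big]_0^1 = \frac{\pi}{2} < \infty,$$
and since $0 \le -\ln\sin t \le -\ln\frac{2t}{\pi}$ on $(0,\pi/2]$, comparison gives integrability on the left half of the interval.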



functional analysis - Let $(x_n)$ be a bounded sequence such that $(T^* Tx_n)$ is Cauchy. Then $T^*x_n$ is also Cauchy.



Let $T$ be a compact operator in Hilbert space. Let $(x_n)$ be a bounded sequence such that $(T T^*x_n)$ is Cauchy. Then $T^*x_n$ is also Cauchy.




I'm quite lost here. I know that $TT^*$ is also compact, so we can find some convergent subsequence $TT^*x_{n_k}$, but this does not help in showing that $T^*x_n$ is also Cauchy. I tried using the fact that $x_n$ is bounded so $x_n$ has a weakly convergent subsequence, however, I couldn't make much progress from this. How may I go about to show this? I would greatly appreciate any help.


Answer



Note that:
$$
||T(x_n-x_m)||^2 = \langle T(x_n-x_m),T(x_n-x_m)\rangle \\= \langle (x_n-x_m),T^*T(x_n-x_m)\rangle \leq ||x_n-x_m|| \cdot ||T^*T(x_n-x_m)||
$$
Since $x_n$ are bounded, say $\|x_n\| \le M$ for all $n$, the right-hand side is at most $2M\,\|T^*T(x_n-x_m)\|$, which tends to $0$ as $n,m \to \infty$; hence $(Tx_n)$ is Cauchy. The same computation with $T$ and $T^*$ interchanged gives the statement in the question.

number theory - Bezout's Identity proof and the Extended Euclidean Algorithm

I am trying to learn the logic behind the Extended Euclidean Algorithm and I am having a really difficult time understanding all the online tutorials and videos out there. To make it clear, though, I understand the regular Euclidean Algorithm just fine. This is my reasoning for why it works:



If $g = \gcd(a,b)$, then $gi = a$ and $gj = b$ with $\gcd(i,j)=1$. If you subtract the equations, then $g(i-j) = a-b$, which means $g$ divides $a-b$ too.



This implies that $\gcd(a,b) = \gcd(a-kb,b)$ and so the maximum $k$ is $a - kb = 0$ or $a = kb$ or $\lfloor a/b \rfloor = k$, and the value of $a-\lfloor a/b \rfloor b$ is the same as $a$ mod $b$.




But I do not understand the Extended algorithm or the Identity.




  1. Why is there always an $x,y$ such that $ax + by = \gcd(a,b)$ for nonzero (positive?) $a,b$? I don't understand the intuition behind this claim.


  2. I also don't understand how the Extended Euclidean algorithm works at all. Is it correct to say that the Extended Euclidean algorithm is what solves Bezout's Identity? It returns nonzero (positive?) $x,y$ such that $ax + by = \gcd(a,b)$?


  3. Is the idea behind modular inverses included here too? If I want to solve for the inverse of $a$ modulo $m$ then this is the same as solving for $x$ in $ax \equiv 1 \pmod m$ for known integers $a,m$, the same as $ax = 1 - my$, the same as $ax + my = 1$, so this is like using the Extended algorithm on $a,m$ and checking if the gcd is equal to $1$. If so, then the answer is $x$. Is my understanding correct?
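As a concrete companion to points 2 and 3 above (an illustrative sketch added here, not part of the original question; the function names are just for illustration), the extended algorithm can be written as a short recursion returning $(g, x, y)$ with $ax + by = g = \gcd(a,b)$, from which a modular inverse follows:

```python
def egcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    # We have b*x + (a % b)*y == g and a % b == a - (a // b)*b; rearranging:
    return (g, y, x - (a // b) * y)

def modinv(a, m):
    """Inverse of a modulo m, provided gcd(a, m) == 1."""
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("a is not invertible modulo m")
    return x % m

print(egcd(240, 46))   # (2, -9, 47): 240*(-9) + 46*47 == 2
print(modinv(3, 11))   # 4, since 3*4 == 12 ≡ 1 (mod 11)
```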


elementary set theory - Is there a one-to-one correspondence between the set of real numbers and the set of complex numbers?





It is well known that there is a one-to-one mapping between the integers and the rationals.



It is also well known that there no such mapping between the rationals and the real numbers.



Is there a one-to-one correspondence between the set of real numbers and the set of complex numbers?




As part of the answer, could you either provide a high-level sketch or a reference (either the name of a famous theorem or a url) that would help me to understand the argument.


Answer



Yes, there is. Actually, there is a bijection between $\Bbb R$ and $\Bbb R^n$ for any $n\in \Bbb N$. You can think of $\Bbb C$ as $\Bbb R^2$ in this case. The relevant notion is the cardinality of the continuum, and the result was proven by Cantor.



The proof dates back to Cantor's correspondence with Dedekind in the 1870s. The result was really shocking at the time, since it was tacitly assumed that there can be no bijection between manifolds of different dimensions. Dedekind resolved the problem by conjecturing that no such continuous bijection exists. The fact was later proved by Brouwer (if I recall correctly) many decades after that.


elementary number theory - $\gcd(n!+1,(n+1)!)$



The recent post didn't really provide sufficient help. It was too vague, most of it went over my head.




Anyway, I'm trying to find the $\gcd(n!+1,(n+1)!)$.



First I let $d=ab\mid(n!+1)$ and $d=ab\mid(n+1)n!$ where $d=ab$ is the GCD.



From $ab\mid(n+1)n!$ I get $a\mid(n+1)$ and $b|n!$.



Because $b\mid n!$ and $ab\mid(n!+1)$, $b$ must be 1.



Consequently, $a\mid(n!+1)$ and $a\mid(n+1)$.




So narrowing down options for $a$ should get me an answer. At this point I've tried to somehow bring it around and relate it to Wilson's theorem as this problem is from that section of my textbook, but I seem to be missing something. This is part of independent study, though help of any kind is appreciated.


Answer



The previous posts have I think carefully explained why the gcd is $1$ if $n+1$ is composite. It comes down to this: if $q$ is a prime that divides $(n+1)!$, and $n+1$ is composite, then $q \lt n+1$, and therefore $q \le n$. But then $q$ divides $n!$, and therefore $q$ cannot divide $n!+1$.



You have shown that any common divisor of $n!+1$ and $(n+1)!$ must divide $n+1$.



Suppose now that $n+1$ is prime, say $n+1=p$. Then by Wilson's Theorem, $(p-1)!\equiv -1 \pmod p$. This says that $p$ divides $(p-1)!+1$, meaning that $n+1$ divides $n!+1$.



It follows that if $n+1$ is prime, then $n+1$ is a common divisor of $n!+1$ and $(n+1)!$. It is the greatest common divisor, since all common divisors must divide $n+1$, and nothing bigger than $n+1$ can divide $n+1$.




We conclude that $\gcd(n!+1,(n+1)!)=1$ if $n+1$ is composite, and $\gcd(n!+1,(n+1)!)=n+1$ if $n+1$ is prime.
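A quick numerical sanity check of this conclusion (my own sketch, not part of the answer; the helper name is just for illustration):

```python
from math import factorial, gcd

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

for n in range(1, 16):
    g = gcd(factorial(n) + 1, factorial(n + 1))
    expected = n + 1 if is_prime(n + 1) else 1
    assert g == expected, (n, g, expected)
print("matches for n = 1..15")
```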


algebra precalculus - Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$?




We know $$i^2=-1 $$then why does this happen?
$$
i^2 = \sqrt{-1}\times\sqrt{-1}
$$
$$
=\sqrt{-1\times-1}
$$

$$
=\sqrt{1}
$$
$$
= 1
$$



EDIT: I see this has been dealt with before but at least with this answer I'm not making the fundamental mistake of assuming an incorrect definition of $i^2$.


Answer



From $i^2=-1$ you cannot conclude that $i=\sqrt{-1}$, just like from $(-2)^2 = 4$ you cannot conclude that $-2=\sqrt 4$. The symbol $\sqrt a$ is by definition the positive square root of $a$ and is only defined for $a\ge0$.




It is said that even Euler got confused with $\sqrt{ab} = \sqrt{a}\,\sqrt{b}$. Or did he? See Euler's "mistake''? The radical product rule in historical perspective (Amer. Math. Monthly 114 (2007), no. 4, 273–285).


Thursday 21 December 2017

integration - Integrating tensors on manifolds

When/how can you integrate tensors on manifolds and what does it mean?



I imagine that line integrals of tensors make sense when you have a connection,
since you can uniquely parallel transport all the tensors along the path to a common point on the paths and "sum" them there, but what about area, volume and higher dimensional integrals? What is required for that to make sense? Parallel transport does not seem sufficient anymore because there are no longer unique path to a common point and parallel transport is path dependent.

abstract algebra - Show that $4x^2+6x+3$ is a unit in $mathbb{Z}_8[x]$.



Show that $4x^2+6x+3$ is a unit in $\mathbb{Z}_8[x]$.



Once you have found the inverse like here, the verification is trivial. But how do you come up with such an inverse? Do I just try general polynomials of all degrees and see what restrictions RHS $= 1$ imposes on the coefficients until I get lucky? Also, is there a general method to show an element in a ring is a unit?


Answer



If $R$ is a commutative ring: the units in $R[x]$ are the polynomials whose constant term is a unit, and whose higher order coefficients are nilpotent. You can apply this directly to your example.
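For example (a sketch of the computation added here; it follows the unit-plus-nilpotent idea in the answer rather than anything stated in the original post): write $4x^2+6x+3 = u + N$ with $u = 3$ and $N = 6x+4x^2$. One checks $N^3 \equiv 0 \pmod 8$, so
$$(u+N)^{-1} \equiv u^{-1}\big(1 - u^{-1}N + u^{-2}N^2\big) \equiv 3\big(1 - 3N + N^2\big) \equiv 2x+3 \pmod 8,$$
and indeed $(4x^2+6x+3)(2x+3) = 8x^3 + 24x^2 + 24x + 9 \equiv 1 \pmod 8$.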


Wednesday 20 December 2017

Function notation terminology




Given the function $f:X\longrightarrow Y$, $X$ is called the domain while $Y$ is called the codomain. But what do you call $f(x)=x^2$ in this context, where $x\in X$? That is to say - what is the name for the $f(x)$ notation?



And while I'm here, what is the proper way to write a function like this? Would it be $f:\mathbb{R}\to\mathbb{R},\;f(x)=x^2$?






Edit:



I figured I'd add this to add a bit of context into why I'm asking. I'm writing a set of notes in LaTeX, and I'd like to use the correct terminology for the definition of a function.





A function from set $A$ to set $B$, denoted by
$$f:A\to B;x\mapsto f(x)$$
is a mapping of elements from set $A$, (the $\textit{domain}$) to elements in set $B$ (the $\textit{codomain}$) using the $\color{blue}{\sf function}$ $f(x)$. The domain of a function is the set of all valid elements for a function to map from. The codomain of a function is the set of all possible values that an element from the domain can be mapped to. The $\textit{range}$ (sometimes called the $\textit{image}$) of a function is a subset of the codomain, and is the set of all elements that actually get mapped to by the function $f$.




Here I'm pretty sure the highlighted word "function" is not right.


Answer



I remember reading this text and being puzzled by the exact same question. From what I've learned from my teacher, you're right: writing something like "the function $f(x)$..." is sloppy notation. However, many books/people will use it this way.




If you are very precise, $f(x)$ is not a function or a map. I don't know of a standard way to refer to $f(x)$, but here is some usage I found on the internet:




  • The output of a function $f$ corresponding to an input $x$ is denoted by $f(x)$.

  • Some would call "$f(x)=x^2$" the rule of the function $f$.

  • For each argument $x$, the corresponding unique $y$ in the codomain is called the function value at $x$ or the image of $x$ under $f$. It is written as $f(x)$.

  • If there is some relation specifying $f(x)$ in terms of $x$, then $f(x)$ is known as a dependent variable (and $x$ is an independent variable).




A correct way to notate your function $f$ is:
$$f:\Bbb{R}\to\Bbb{R}:x\mapsto f(x)=x^2$$



Note that $f(x)\in\Bbb{R}$ and $f\not\in\Bbb{R}$. But the function $f$ is an element of the set of continuous functions, and $f(x)$ isn't.



In some areas of math it is very important to notate a function/map by specifying its domain, codomain and function rule. However, in for example calculus/physics, you'll see that many times only the function rule $f(x)$ is specified, as the reader is supposed to understand the domain/codomain from the context.



You can also check those questions:





algebra precalculus - Writing the Piecewise Function?

I have to solve each equation and verify my solution graphically... but first, I would like to create the piecewise for $|x^2 - 2x + 2| = 3x - 4$ to have a reference to know if my solution is extraneous.



I know that I would need to find the solutions for $x^2 - 2x + 2$ (Case 1) and $-x^2 + 2x - 2$ (Case 2), and since it's not factorable, I used the quadratic formula:



[Case 1]
\begin{align*}
x & = \frac{2 \pm \sqrt{(-2)^2 - 4(1)(2)}}{2}\\
x & = \frac{2 \pm \sqrt{4 - 8}}{2}
\end{align*}
This is where I got stuck... I'd get $-12$ in the square root which isn't correct, but my math and variables were right. I don't understand what I did wrong?



[edit] Same thing happened when solving Case 2, where I got $-4$ in the square root.

terminology - What is this called?







I am looking for the name of an operation similar to factorial.




Factorial would be like:
6! = 6*5*4*3*2*1



but if the operator is "+" and not "*", like:
6+5+4+3+2+1



What would I call this?



Edit:

Sorry, it seems this was a duplicate question. The answer appears to be "nth triangular number". Although that's really not as satisfying and concise as the word "factorial".

A polynomial with integer coefficients




Is it true that if a polynomial $f(x)$ takes integer values for every integer $x$ then its coefficients are integers?




I believe that it is. I just need a hint as to how I can prove or disprove this and just in case you were wondering this is not a homework problem.


Answer



It's false. Take for example $x(x-1)/2$


Tuesday 19 December 2017

calculus - Universal Chord Theorem




Let $f \in C[0,1]$ and $f(0)=f(1)$.



How do we prove $\exists a \in [0,1/2]$ such that $f(a)=f(a+1/2)$?



In fact, for every positive integer $n$, there is some $a$, such that $f(a) = f(a+\frac{1}{n})$.



For any other non-zero real $r$ (i.e not of the form $\frac{1}{n}$), there is a continuous function $f \in C[0,1]$, such that $f(0) = f(1)$ and $f(a) \neq f(a+r)$ for any $a$.



This is called the Universal Chord Theorem and is due to Paul Levy.




Note: the accepted answer answers only the first question, so please read the other answers too, and also this answer by Arturo to a different question: https://math.stackexchange.com/a/113471/1102






This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.



and here: List of abstract duplicates.


Answer



You want to use the intermediate value theorem, but not applied to $f$ directly. Rather, let $g(x)=f(x)-f(x+1/2)$ for $x\in[0,1/2]$. You want to show that $g(a)=0$ for some $a$. But $g(0)=f(0)-f(1/2)=f(1)-f(1/2)=-(f(1/2)-f(1))=-g(1/2)$. This gives us the result: $g$ is continuous and changes sign, so it must have a zero.


real analysis - What is the measure space a right continuous, non-decreasing function defines?



(Dealing with the real line $\mathbb{R}$) A right continuous, non-decreasing function $g$ defines a pre-measure on the algebra of half-open intervals:
$$ \mu _g (a,b] = g(b)-g(a)$$



This pre-measure can be extended to a measure using the Caratheodory process - first defining an outer measure (which turns out to be metric outer measure, which guarantees it is defined on the borel sets).
This process provides us with a complete measure space.




How can we calculate the $\sigma$-algebra corresponding to this measure space?



Examples:




  1. The function $x\mapsto x$ - Noting the borel measure it defines is also invariant under translations - we deduce it is "the" Borel measure on the real line and so (with a little bit more arguing) we understand the complete $\sigma$-algebra the Caratheodory process produces is the Lebesgue $\sigma$-algebra.

  2. The function $x\mapsto 0$ - This produces the $0$-measure, and so every subset of $\mathbb{R}$ is measurable.


Answer




Basic analysis of tractable cases:



Language: We denote by $\mathfrak{B}(\cdot)$, $\mathfrak{L}(\cdot)$, $\mathbb{P}(\cdot)$, $(\cdot)^*$ the Borel sets, the collection of Lebesgue measurable sets, the power set, and the cylindrical $\sigma$-algebra of $(\cdot)$, respectively.



Also note that every Borel measure $\lambda$ has a decomposition $\lambda_{\text{ac}} + \lambda_{\text{d}} + \lambda_{\text{sc}} = \lambda$, where the LHS is the decomposition into the absolutely continuous, discrete and singular continuous parts of the measure $\lambda$ (with respect to the Lebesgue measure).



$g$ is absolutely continuous ($\lambda_{\text{ac}}$ case):
Note that the derivative is guaranteed to exist almost everywhere, so the sets below cover $\mathbb{R}$ up to a Lebesgue null set. Because this case is not yet tractable, we further assume $g$ is $C^1$, and in that case we divide the real line into the two sets $\{g'>0 \}\cup \{g'\leq0\}$.
On the first set, $g$ has an absolutely continuous inverse. Since both $g$ and its inverse have Luzin's property N, we see that $N$ is a null set iff $g(N)$ is, and so the completion of the Borel $\sigma$-algebra $\mathfrak{B}\left(\{g'>0 \} \right)$ is simply $\mathfrak{L}(\{g'>0 \})$.
On the other set, since $g$ is non-decreasing we have $\{g'\leq0 \}=\{g'=0\}$, so $g$ is constant there; every subset has measure $0$ and so it is measurable.
Conclusion: the $\sigma$-algebra is
$$\big[\mathfrak{L}(\{g'>0 \}) \cup \mathbb{P}(\{g'=0 \}) \big]^*$$



$g$ is discrete ($\lambda_{\text{d}}$ case):
This is also easy and corresponds to the case where $g$ is a non-decreasing step function. Every subset of $\mathbb{R}$ is measurable.




$g$ is continuous but not absolutely continuous ($\lambda_{\text{sc}}$ case):
This happens when $g$ is of the form of the Cantor-Lebesgue function. Since it is not absolutely continuous, it is singular with respect to the Lebesgue measure, and we have already dealt with the case of discrete measures, so we are left to understand the continuous case.
This is the least tractable case, and an example shows that we cannot analyse it much further. Denote by $c(x)$ the Cantor-Lebesgue function and notice it is continuous (but not AC) and non-decreasing. The image of the (null) Cantor set under $c$ has positive Lebesgue measure, and so the Cantor set has positive measure under the Lebesgue-Stieltjes measure generated by $c$. Also note that $c$ is constant on some parts, and so we conclude there is no obvious inclusion between this $\sigma$-algebra and $\mathfrak{L}$.



Back to my question. We investigate $g=1_{[0,\infty)} \sqrt x$ and are trying to realize the $\sigma$-algebra, $\mathfrak{A}$, the construction generates. By the first case we have:
$$\mathfrak{A} = \big[\mathfrak{L}\big([0,\infty)\big) \ \cup \ \mathbb{P}\big((-\infty,0]\big) \big]^*$$



That is, the cylindrical $\sigma$-algebra of the lebesgue measurable sets in $[0,\infty)$ and the power set of $(-\infty,0]$.


Number of zeros at the end of $k!$




For how many positive integers $k$ does the ordinary decimal representation of the integer $k!$ end in exactly $99$ zeros?





By inspection I found that $400!$ ends in exactly $99$ zeros, but $399!$ does NOT end in $99$ zeros; using the formula $$\text{ number of zeros at the end }=\sum_{n=1}^{\infty}\left[\frac{k}{5^n}\right]$$ where $[x]$ denotes the greatest integer not exceeding $x$.



I also found that for $k=401,402,403,404$ the number of zeros is the same, but for $k=405$ the number of zeros increases by $1$, since $405$ is again divisible by $5$, the first time after $400$.



Thus I got that there are only $5$ integers satisfying the condition which are $400,401,402,403,404$.



The question is a possible duplicate of this or this, but my question is different from those two questions.




Does there exist any other rule or easy formula from which I can find how many such integers there are?




Answer



Because the number of zeroes steps up at each multiple of $5$, the only possible answers are five or zero. So the question might be: how to determine whether there exists $k$ such that $k!$ ends in a given number of zeroes?



Say we want $m$ zeroes. So we look for $k$ with
$$ m=f(k):=\sum_{i=1}^\infty\left\lfloor \frac k{5^i}\right \rfloor$$
First note that
$$ \tag1f(k)<\sum_{i=1}^\infty\frac k{5^i}=\frac k4$$
and on the other side
$$ \tag2f(k)\ge \sum_{i=1}^{\lfloor \log_5k\rfloor}\left( \frac k{5^i}-1\right )> \frac k4-\lfloor \log_5k\rfloor-\frac 54>\frac k4-\log_5k-\frac 94$$

For $k\ge 3$, the right hand side of $(2)$ is strictly increasing. Therefore we start our search at $k=4m+1$ and end it no later than $k=4m+4\log_5(4m+1)+9$. Unless $m$ is awfully large, this requires us to try just a few values of $k$ (recall that we only need to try multiples of $5$).
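A small computational sketch of this search (my own addition; the cut-off below is the bound derived above, and the trailing zeros are counted with the sum $\sum_i\lfloor k/5^i\rfloor$ from the question):

    # Illustrative sketch: find all k such that k! ends in exactly m zeros.
    import math

    def trailing_zeros(k):
        count, power = 0, 5
        while power <= k:
            count += k // power   # Legendre-style count of factors of 5 in k!
            power *= 5
        return count

    def ks_with_m_zeros(m):
        start = 4 * m + 1
        stop = int(4 * m + 4 * math.log(4 * m + 1, 5) + 9) + 1
        return [k for k in range(start, stop) if trailing_zeros(k) == m]

    print(ks_with_m_zeros(99))   # [400, 401, 402, 403, 404]
    print(ks_with_m_zeros(5))    # []  -- no factorial ends in exactly 5 zeros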


soft question - What is the point of making dx an infinitesimal hyperreal?



It seems fairly common to describe $\mathrm{d}x$ in nonstandard analysis as an infinitesimal. But after thinking it through (and then skimming Keisler's text), I can't see the point and actually think it's misleading!




First, let me clearly point out that $\mathrm{d}y$ is not being used here to express a "difference in $y$"; this post is following the convention that $\Delta y$ is used for such things, and $\mathrm{d}y$ is reserved for the differential.



That is, suppose $y = x^2$. A "change in $y$" is the quantity $\Delta y$ given by, after fixing some change $\Delta x$ in $x$:
$$ \Delta y = (x + \Delta x)^2 - x^2 = 2 x (\Delta x) + (\Delta x)^2 $$



This is not what $\mathrm{d}y$ is. We simply have $\mathrm{d}y = 2x \,\mathrm{d} x$. In general, if $y = f(x)$, then $\mathrm{d} y $ is simply defined to be $f'(x) \,\mathrm{d} x$. No differences, infinitesimal approximations, or anything of that flavor is going on here; $\mathrm{d} y$ is nothing more than a vessel for carrying around a copy of $f'(x)$.



(and $\mathrm{d} x$ was simply defined to be an independent, infinitesimal variable)




A typical application of a differential is that in a definite integral $\int_a^b f(x) \,\mathrm{d}x$, we might decide to write down a Riemann sum with $H$ evenly spaced partitions for some infinite $H$, and substitute in the notation $\mathrm{d}x$ with the width of an interval $\frac{b-a}{H}$ to get
$$ \int_a^b f(x) \,\mathrm{d}x \approx \sum_{i=1}^H f\left(a + \frac{b-a}{H}i \right) \frac{b-a}{H} $$
However, if I encode $\mathrm{d}x$ as an infinitesimal, then write $\int_a^b f(x) \epsilon$, there's no way to figure out what that means. You might write $H = (b-a)/\epsilon$ and write down the Riemann sum above, but that gives the wrong answer if I encoded $\mathrm{d}x$ as $2 \epsilon$. The best you can do is to undo the encoding; e.g.
$$ \int_a^b f(x) \epsilon \approx \sum_{i=1}^H f\left(a + \frac{b-a}{H}i \right) \frac{\epsilon}{\mathrm{d}x} \frac{b-a}{H} $$



Thus, the encoding of the differential form as an infinitesimal does not seem to do anything useful for this application of differentials. But maybe we can do other interesting arithmetic with them. However, I don't think there's any application of quantities like $(\mathrm{d}y)^2$ or $1 + \mathrm{d}y$ or $\sin(\mathrm{d}y)$ — it's the quantities like $(\Delta y)^2$ or $1 + \Delta y$ or $\sin(\Delta y)$ that we play with.



Instead, the only useful operations seem to be the ordinary differential form operations — things like adding two differential forms or multiplying a differential form by a function.



In sum, the only application of this definition seems to be to allow one to say that $\frac{\mathrm{d}y}{\mathrm{d}x}$ is the ratio of two hyperreal-valued variables — but even in ordinary analysis we can understand that as a ratio of differential forms!




Furthermore, insistence that $\mathrm{d}x$ be infinitesimal appears to be completely irrelevant; you could do the same thing in standard analysis simply by removing the constraint that $\mathrm{d}x$ be infinitesimal. In fact, to some extent, people do do the same thing; e.g. defining the differential of a function $f$ to be the function $\mathrm{d}f(x,e) = f'(x) e$.



So, I pose my question — what is the point of making $\mathrm{d}x$ an infinitesimal hyperreal?


Answer



I assert that there is no intrinsic reason to make $\mathrm{d}x$ an infinitesimal. However, conventions can force us to do so; for example, it is a consequence of:




  • The functional form of the differential: $\mathrm{d}f(x,y) = f'(x) y $

  • The habit of designating a particular variable (e.g. $x$) as special


  • The habit of implicitly partially evaluating differentials at the special difference $\Delta x$



And by making the second argument implicit and fixed, if we use $\mathrm{d}f(x)$ to implicitly mean $\mathrm{d}f(x, \Delta x)$, the notation lends itself to the variable form $\mathrm{d}y$ to be used in place of $\mathrm{d}f(x)$ whenever $y = f(x)$.



Under these conventions, if $i$ is the identity function $i(x) = x$, then
$$\mathrm{d}x = \mathrm{d}i(x) = \mathrm{d}i(x, \Delta x) = \Delta x $$



thus identifying the differential $\mathrm{d}x$ with the variable $\Delta x$ which, conventionally, is infinitesimal-valued.


Sunday 17 December 2017

real analysis - Sub-gradient and super-gradient are bounded implies globally Lipschitz.



Let $u$ be a real valued function defined on the open set $\Omega \subset \mathbb{R}^n$, and assume that $u$ is continuous. For any $x\in \Omega$, the sets
\begin{align*}
D^-u(x) &= \left\lbrace p\in \mathbb{R}^n: \liminf_{y\longrightarrow x} \frac{u(y) - u(x) - \langle p,y-x\rangle}{\Vert y-x\Vert} \geq 0 \right\rbrace \\
D^+u(x) &= \left\lbrace p\in \mathbb{R}^n: \limsup_{y\longrightarrow x} \frac{u(y) - u(x) - \langle p,y-x\rangle}{\Vert y-x\Vert} \leq 0 \right\rbrace
\end{align*}
are called, respectively the subdifferential and superdifferential of $u$ at $x$.




One can prove that for a $C^1$ function $\phi$:



\begin{align*}
u-\phi \;\text{has a strict max at}\;x_0 &\Longleftrightarrow u\;\text{is touched from above by}\;\phi\;\text{at}\;x_0\\
&\Longleftrightarrow D\phi(x_0) \in D^+u(x_0),\\
u-\phi \;\text{has a strict min at}\;x_0 &\Longleftrightarrow u\;\text{is touched from below by}\;\phi\;\text{at}\;x_0\\
&\Longleftrightarrow D\phi(x_0) \in D^-u(x_0).
\end{align*}
And $u$ is differentiable at $x$ if and only if $D^+u(x) = D^-u(x) = \{\nabla u(x)\}$.




MY QUESTION: If I have $u$ is continuous on the whole space $\mathbb{R}^n$ and there exists a constant $C>0$ such that



\begin{align*}
\Vert p\Vert \leq C \qquad \text{for all}\; p\in D^+u(x)\;\text{or}\; p\in D^-u(x)\;\text{for all}\;x.
\end{align*}
Can I conclude that $u$ is globally Lipschitz? This question came up while I was studying the notion of viscosity solutions for Hamilton-Jacobi equations. We know that if $u$ is differentiable and $\Vert Du\Vert$ is bounded then $u$ is Lipschitz; I suspect my statement is true, but I have failed to prove it, though I can prove that $u$ is locally Lipschitz.


Answer



Yes. The proof is similar to how one proves that viscosity solutions are Lipschitz when the Hamiltonian is coercive. You will have to assume some growth conditions on $u$ at $\infty$ (though this could be relaxed with some effort). I'll give the proof for $u$ bounded.



Assume $u$ is bounded and let $y \in \mathbb{R}^n$. Define $\phi(x) = (C+\epsilon)|x-y|$. Since $u$ is bounded, $u-\phi$ attains its maximum at some point $x_0 \in \mathbb{R}^n$. If $x_0\neq y$ then $\phi$ is smooth in a neighborhood of $x_0$ and so $p := D\phi(x_0) \in D^+u(x_0)$. Computing $|p| = C+\epsilon>C$, we get a contradiction.




Therefore $x_0=y$ and we get



$$u(x) - (C+\epsilon)|x-y| \leq u(y)$$



for all $\epsilon>0$ and $x \in \mathbb{R}^n$. It follows that



$$u(x) - u(y) \leq C|x-y|$$



for all $x,y \in \mathbb{R}^n$. The basic idea of the proof is that we showed we can touch the graph of $u$ with a cone $\phi(x)$.



calculus - calculate the the limit of the sequence $a_n = lim_{n to infty} n^frac{2}{3}cdot ( sqrt{n-1} + sqrt{n+1} -2sqrt{n} )$



I've been struggling with this one for a bit too long:



$$
a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1} + \sqrt{n+1} -2\sqrt{n} )$$




What I've tried so far was using the fact that the inner expression is equivalent to this:



$$ a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1}-\sqrt{n} + \sqrt{n+1} -\sqrt{n} ) $$



Then I tried multiplying each of the expression by their conjugate and got:



$$
a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \frac{1}{\sqrt{n+1} +\sqrt{n}} - \frac{1}{\sqrt{n-1} +\sqrt{n}} )
$$




But now I'm at a dead end.
Since I have this annoying $n^\frac{2}{3}$ outside of the brackets, each of my attempts to finish this ends up with the indeterminate expression $(\infty\cdot0)$.



I've thought about using the squeeze theorem somehow, but didn't manage to connect the dots.



Thanks.


Answer



Keep on going... the difference between the fractions is




$$\frac{\sqrt{n-1}-\sqrt{n+1}}{(\sqrt{n+1}+\sqrt{n})(\sqrt{n-1}+\sqrt{n})}$$



which, by similar reasoning as before (diff between two squares...), produces



$$\frac{-2}{(\sqrt{n-1}+\sqrt{n+1})(\sqrt{n+1}+\sqrt{n})(\sqrt{n-1}+\sqrt{n})}$$



Now, as $n \to \infty$, the denominator behaves as $(2 \sqrt{n})^3 = 8 n^{3/2}$. Thus, $\lim_{n \to \infty} (-1/4) n^{-3/2} n^{2/3} = \cdots$? (Is the OP sure (s)he didn't mean $n^{3/2}$ in the numerator?)
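(Added remark: with the $n^{2/3}$ of the original problem the product therefore behaves like $-\tfrac14 n^{2/3-3/2}=-\tfrac14 n^{-5/6}\to 0$, while with $n^{3/2}$ in front the limit would be $-\tfrac14$.)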


analysis - Prove that region under graph of function is measurable



In the measure theory book that I am studying, we consider the 'area' under (i.e. the product measure of) the graph of a function as an example of an application of Fubini's Theorem for integrals (with respect to measures).




The setting: $(X,\mathcal{A}, \mu)$ is a $\sigma$-finite measure space, $\lambda$ is Lebesgue measure on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ (Borel $\sigma$-algebra), $f:X \to [0,+\infty]$ is $\mathcal{A}$-measurable, and we are considering the region under the graph of $f$,



$E=\{(x,y)\in X \times \mathbb{R}|0\leq y < f(x)\}$.



I need to prove $E \in \mathcal{A} \times \mathcal{B}(\mathbb{R})$. I thought to write $E=g^{-1}((0,+\infty])\cap(X \times [0,+\infty])$ where $g(x,y)=f(x)-y$ but I can't see why $g$ must be $\mathcal{A} \times \mathcal{B}(\mathbb{R})$-measurable. Any help would be appreciated.


Answer



$g=k\circ h$ where $h(x,y)=(f(x),y)$ and $k(a,b)=a-b$. [ Here $h:X\times \mathbb R \to \mathbb R^{2}$ and $k:\mathbb R^{2} \to \mathbb R$]. $k:\mathbb R^{2} \to \mathbb R$ is Borel measurable because it is continuous. To show that $h$ is measurable it is enough to show that $h^{-1} (A \times B) \in \mathcal A \times B(\mathbb R)$ for $A,B \in \mathcal B(\mathbb R)$. This is clear because $h^{-1} (A \times B)=f^{-1}(A) \times B$.



I have assumed that $f$ takes only finite values. To handle the general case let $g(x)=f(x)$ if $f(x) <\infty$ and $0$ if $f(x)=\infty$. Let $F=\{(x,y):0\leq y < g(x)\}$. Then $E=(f^{-1}\{\infty\}\times [0,\infty)) \cup [(f^{-1}(\mathbb R)\times \mathbb R) \cap F]$.


elementary number theory - calculating $a^b !mod c$





What is the fastest way (general method) to calculate the quantity $a^b \!\mod c$? For example $a=2205$, $b=23$, $c=4891$.


Answer



Let's assume that a,b,c referred to here are positive integers, as in your example.



For a specific exponent b, there may be a faster (shorter) method of computing a^b than binary exponentiation. Knuth has a discussion of the phenomenon in Art of Computer Programming Vol. 2 (Semi-numerical Algorithms), Sec. 4.6.3 and the index term "addition chains". He cites b=15 as the smallest case where binary exponentiation is not optimal, in that it requires six multiplications, but a^3 can be computed in two multiplications and then (a^3)^5 in three more, for a total of five multiplications.



For the specific exponent b=23 the parsimonious addition chain involves the exponents (above 1) 2,3,5,10,13, at which point a^23 = (a^10)*(a^13), for a total of six multiplications. Binary exponentiation for b=23 requires seven multiplications.
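(Added remark, spelling the chain out for concreteness: the six multiplications are
$$a^2=a\cdot a,\quad a^3=a^2\cdot a,\quad a^5=a^3\cdot a^2,\quad a^{10}=a^5\cdot a^5,\quad a^{13}=a^{10}\cdot a^3,\quad a^{23}=a^{13}\cdot a^{10}.$$)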



Another approach that can produce faster results when b is large (not in your example) depends on knowing something about the base a and modulus c. Recall from Euler's generalization of Fermat's Little Thm. that if a,c are coprime, then a^d = 1 mod c for d the Euler phi function of c (the number of positive integers less than c and coprime to it). In particular if c is a prime, then by Fermat's Little Thm. either c divides a and a^b = 0 mod c or else a^b = a^e mod c where e = b mod (c-1) since phi(c) = c-1 for a prime c.




If the base a and modulus c are not coprime, then it might be advantageous to factor a into its greatest common factor with c and its largest factor that is coprime to c.



Also it might be advantageous if c is not prime to factor it into prime powers and do separate exponentiations for each such factor, piecing them back together via the Chinese Remainder Thm. In your example c = 4891 = 67*73, so you might compute a^b mod 67 and a^b mod 73 and combine those results to get a^b mod c. This is especially helpful if you are limited in the precision of integer arithmetic you can do.
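A small illustrative sketch of both ideas for the numbers in the question (my own addition; in practice Python's built-in three-argument pow does the modular exponentiation directly):

    # Square-and-multiply modular exponentiation, then a CRT recombination
    # using c = 4891 = 67 * 73.  Illustrative sketch only.

    def power_mod(a, b, c):
        result, base = 1, a % c
        while b > 0:
            if b & 1:                  # current binary digit of b is 1
                result = (result * base) % c
            base = (base * base) % c   # square for the next binary digit
            b >>= 1
        return result

    a, b, c = 2205, 23, 4891
    direct = power_mod(a, b, c)

    # Chinese Remainder Theorem: combine a^b mod 67 and a^b mod 73.
    r67, r73 = power_mod(a, b, 67), power_mod(a, b, 73)
    inv67 = pow(67, -1, 73)                       # inverse of 67 modulo 73
    combined = (r67 + 67 * ((r73 - r67) * inv67 % 73)) % c

    print(direct, combined, direct == combined)   # the two results agree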


Saturday 16 December 2017

calculus - Solving $limlimits_{xto0} frac{x - sin(x)}{x^2}$ without L'Hospital's Rule.



How can one solve $\lim\limits_{x\to 0} \frac{x - \sin(x)}{x^2}$ without L'Hospital's Rule?
You can use trigonometric identities and inequalities, but not series or more advanced tools.


Answer



The given expression is odd; therefore it is enough to consider $x>0$. We then have
$$0<{x-\sin x\over x^2}<{\tan x -\sin x\over x^2}=\tan x\ {1-\cos x\over x^2}={\tan x\over2}\ \Bigl({\sin(x/2)\over x/2}\Bigr)^2\ ,$$
and the right side obviously converges to $0$ as $x\to0^+$.


Friday 15 December 2017

sequences and series - Prove the inequality for all $N$



Show that the following inequality holds for all integers $N\geq 1$
$$\left|\sum_{n=1}^N\frac{1}{\sqrt{n}}-2\sqrt{N}-c_1\right|\leq\frac{c_2}{\sqrt{N}}$$
where $c_1,c_2$ are some constants.



I have tried induction but it doesn't seem promising.
Any ideas please?



Answer



Since $1/\sqrt{n}$ is decreasing, we can bound the sum above by $$1+\int_1^N \frac{1}{\sqrt{x}}\,dx = 2\sqrt{N}-1$$



Thus we have $$\sum_{n=1}^N \frac{1}{\sqrt{n}} - 2\sqrt{N} = \bigl(2\sqrt{N}-1-E(N)\bigr) - 2\sqrt{N} = -1 - E(N)$$



Where $$E(N) = 1+\int_1^N \frac{1}{\sqrt{x}}\,dx - \sum_{n=1}^N \frac{1}{\sqrt{n}} \geq 0$$



is the error of our estimation at the point $N$. Now can you estimate $E(N)$?







Using the formula from Corollary 2.4 in the pdf linked in the comments, we find that $$\sum_{n=1}^N \frac{1}{\sqrt{n}} = \int_{1}^N \frac{1}{\sqrt{x}}\, dx + \int_{1}^N \{x\}\, \frac{-1/2}{x^{3/2}}\,dx + 1$$



Here $\{x\}$ is the fractional part of $x$, which is always less than 1. The middle integral can be estimated by $$\left|\int_{1}^N \{x\} \frac{-1/2}{(x)^{3/2}}dx\right| < \int_1^N 1 \cdot \frac{1/2}{(x)^{3/2}}dx = 1-\frac{1}{\sqrt{N}}$$
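(Added completion of the sketch, under the sign convention written above: the formula gives
$$\sum_{n=1}^N \frac{1}{\sqrt{n}} = 2\sqrt{N} - 1 - \frac12\int_1^N \frac{\{x\}}{x^{3/2}}\,dx ,$$
and since $0\le\{x\}<1$ the integral converges as $N\to\infty$. Setting
$$c_1 := -1 - \frac12\int_1^{\infty} \frac{\{x\}}{x^{3/2}}\,dx$$
yields
$$\left|\sum_{n=1}^N \frac{1}{\sqrt{n}} - 2\sqrt{N} - c_1\right| = \frac12\int_N^{\infty} \frac{\{x\}}{x^{3/2}}\,dx \le \frac12\int_N^{\infty} \frac{dx}{x^{3/2}} = \frac{1}{\sqrt{N}},$$
so the statement holds with $c_2 = 1$.)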


sequences and series - If $sum_{k=0}^{r-1} c_k =0 $, and $a_n to 0$, does $sum_{n=0}^{infty} sum_{k=0}^{r-1} c_ka_{nr+k} $ converge?



This is a generalization of
the alternating series convergence result
and this:
Is this:$\sum_{n=1}^{\infty}{(-1)}^{\frac{n(n-1)}{2}}\frac{1}{n}$ a convergent series?



Here is my question:



If $\sum_{k=0}^{r-1} c_k = 0$, and $(a_n)$ is a monotonically decreasing sequence such that $a_n \to 0$, does
$$\sum_{n=0}^{\infty}c_{n \bmod r}\,a_n =\sum_{n=0}^{\infty} \sum_{k=0}^{r-1} c_k\,a_{nr+k}$$
converge?



If $a_n = \frac1{n+1}$, then the sum does converge, as shown by my question here:
Show that if $\sum_{k=1}^m c_k =0 $, $\sum_{n=0}^{\infty} \sum_{k=1}^m \frac{c_k}{nm+k} $ converges.



I can show that the sum does converge as long as $a_n$ satisfies some smoothness properties:



Assume that $a_n = f(n)$, where $f(x)$ is differentiable, $f(x) \to 0$, $f'(x) \to 0$, and $|f''(x)| < C/x^2$ for some $C > 0$.




Let $s(n) = \sum_{k=0}^{r-1} c_k a_{nr+k}$ and $S(n) = \sum_{j=0}^n s(j) = \sum_{m=0}^{nr+r-1} c_{m \bmod r}\, a_m$.




Then



$\begin{array}\\
s(n)-\sum_{k=0}^{r-1} c_ka_{nr}
&=\sum_{k=0}^{r-1} c_ka_{nr+k}-\sum_{k=0}^{r-1} c_ka_{nr}\\
&=\sum_{k=0}^{r-1} c_k(a_{nr+k}-a_{nr})\\
&=\sum_{k=0}^{r-1} c_k(f(nr+k)-f(nr))\\
&=\sum_{k=0}^{r-1} c_k(kf'(nr)+(k^2/2)f''(nr+tk))
\quad\text{where }0 \le t \le 1\\

&=\sum_{k=0}^{r-1} c_kkf'(nr)+\sum_{k=0}^{r-1}c_k(k^2/2)f''(nr+tk)\\
&=f'(nr)\sum_{k=0}^{r-1} c_kk+\sum_{k=0}^{r-1}c_k(k^2/2)f''(nr+tk)\\
&=f'(nr)S+s_1(n)\\
\end{array}
$



where $S = \sum_{k=0}^{r-1} c_k k$ and $s_1(n) = \sum_{k=0}^{r-1}c_k(k^2/2)f''(nr+tk)$.



Since $\sum_{k=0}^{r-1} c_k a_{nr} = a_{nr}\sum_{k=0}^{r-1} c_k = 0$, it follows that $s(n) = f'(nr)S + s_1(n)$.



Also,



$\begin{array}\\
|s_1(n)|
&=|\sum_{k=0}^{r-1}c_k(k^2/2)f''(nr+tk)|\\
&\le\sum_{k=0}^{r-1}|c_k(k^2/2)f''(nr+tk)|\\
&\le\sum_{k=0}^{r-1}|c_k(k^2/2)\frac{C}{(nr+tk)^2}|\\

&\le\sum_{k=0}^{r-1}|c_k(k^2/2)\frac{C}{(nr)^2}|\\
&=\frac{C}{2(nr)^2}\sum_{k=0}^{r-1}|c_kk^2|\\
&=\frac{CC_1}{2(nr)^2}
\quad\text{ where } C_1 =\sum_{k=0}^{r-1}|c_kk^2|\\
\end{array}
$



Therefore, $\sum |s_1(n)|$ converges.



If $m < n$,



$\begin{array}\\
S(n)-S(m)
&=\sum_{j=m+1}^n s(j)\\
&=\sum_{j=m+1}^n (f'(jr)S+s_1(j))\\
&=S\sum_{j=m+1}^n f'(jr)+\sum_{j=m+1}^ns_1(j)\\
\end{array}

$



By the smoothness assumption on $f(x)$, $f'(jr) \approx \frac1{r}\int_{jr}^{jr+r} f'(t)\,dt$, so that



$\begin{array}\\

\sum_{j=m+1}^n f'(jr)
&\approx\sum_{j=m+1}^n \frac1{r}\int_{jr}^{jr+r} f'(t)dt\\
&= \frac1{r}\int_{(m+1)r}^{nr+r} f'(t)dt\\
&= \frac1{r}(-f((m+1)r)+f(nr+r))\\
\end{array}
$



Since $f(x) \to 0$ as $x \to \infty$, $f(nr+r)-f((m+1)r) \to 0$ as $m$ and $n \to \infty$.



Since $\sum |s_1(n)|$ converges, $\sum_{j=m+1}^n s_1(j) \to 0$ as $m$ and $n \to \infty$.



Therefore, $S(n)-S(m) \to 0$ as $m$ and $n \to \infty$, so $\lim_{n \to \infty} S(n)$ exists.



Answer



The partial sums $C_N$ of the sequence $c_{n\text{ mod } r}$ satisfy $$|C_N| = \left|\sum_{n=0}^Nc_{n\text{ mod }{r}}\right| \leq \sum_{n=0}^{r-1}|c_{n}|$$ and are therefore bounded. Since $a_{n}$ is monotonically decreasing with $\lim\limits_{n\to\infty} a_n = 0$, Dirichlet's test applies, and it follows that the series $\sum c_{n\text{ mod }r}a_n$ converges.
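A quick numerical illustration of the conclusion (my own addition), with the arbitrary choices $r=3$, $c=(1,1,-2)$ and $a_n=1/\sqrt{n+1}$:

    # Illustrative check: partial sums of  sum_n c_{n mod r} * a_n  settle down
    # when c_0 + ... + c_{r-1} = 0 and a_n decreases monotonically to 0.
    import math

    c = [1, 1, -2]                        # coefficients summing to zero

    def a(n):
        return 1 / math.sqrt(n + 1)       # monotonically decreasing to 0

    partial = 0.0
    for n in range(300000):
        partial += c[n % len(c)] * a(n)
        if (n + 1) % 100000 == 0:
            print(n + 1, partial)         # successive values change only slightly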


combinatorics - Generalized Vandermonde's identity

Can you please provide a reference to the following generalization of Vandermonde's identity?




Given a positive integer $k$ and nonnegative integers $n_1, n_2, \ldots, n_k$ and $m$, it holds that $$\sum_{i_1+i_2+\cdots+i_k=m} \binom{n_1}{i_1} \binom{n_2}{i_2} \cdots \binom{n_k}{i_k} = \binom{n_1+n_2+\cdots+n_k}{m}.$$ The proof is well-known and based on the idea of counting in two different ways the coefficient of $x^m$ in the polynomial $(1+x)^{n_1+n_2+\cdots+n_k}$.
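(Added for completeness: concretely, both sides count the coefficient of $x^m$ in
$$(1+x)^{n_1}(1+x)^{n_2}\cdots(1+x)^{n_k}=(1+x)^{n_1+n_2+\cdots+n_k};$$
expanding each factor by the binomial theorem and multiplying out produces the left-hand sum, while the right-hand side contributes $\binom{n_1+\cdots+n_k}{m}$ directly.)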

Thursday 14 December 2017

two real analysis problems on continuous functions




1. Let $I = [a, b]$ and let $f : I \to \mathbb{R}$ be a continuous function on $I$ such that for each $x$ in $I$ there exists $y$ in $I$ such that $|f(y)| \le \frac{1}{2}|f(x)|$. Prove there exists a point $c$ in $I$ such that $f(c) = 0$.

2. Let $f$ be continuous on the interval $[0, 1]$ to $\mathbb{R}$ and such that $f(0) = f(1)$. Prove that there exists a point $c$ in $[0, 1/2]$ such that $f(c) = f(c + 1/2)$.






Here are two problems on which I am completely stuck. Can anyone please guide me on how to solve them?


Answer



1) If $|f(x)| > 0$ for all $x \in [a,b]$ let $x_0$ be the point where it attains its minimum. Can the stated condition hold at $x_0$?



2) If $f(0) = f(1/2)$ you are done. Otherwise consider $g(x) = f(x) - f(x + 1/2)$. What can you say about $g(0)$ and $g(1/2)$? Does $g$ vanish somewhere in between?



real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...