Thursday 30 November 2017

sequences and series - Relation between Fibonacci number and the golden section




We denote the $n$th Fibonacci number by $F_n$, and let
$\alpha=\frac{1+\sqrt{5}}{2}$. Through simulation, I found the following
relation between the Fibonacci numbers and the golden section:
$$
\left| \frac{F_{n+1}}{F_{n}}-\alpha\right| \, \approx \, \frac{1}{(F_n)^2}~.
$$
Is there an analytical method by which we can prove this formula? I would greatly appreciate any suggestions.



Maple simulation
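(The Maple output itself is not reproduced here. The following short script is my own illustrative re-creation of that numerical check in Python; it is not from the original post.)

    # Illustrative check of |F_{n+1}/F_n - alpha| against 1/F_n^2.
    alpha = (1 + 5 ** 0.5) / 2
    a, b = 1, 1                      # F_1, F_2
    for n in range(2, 20):
        a, b = b, a + b              # now a = F_n, b = F_{n+1}
        lhs = abs(b / a - alpha)
        rhs = 1 / a ** 2
        print(n, lhs, rhs)           # both columns decay at the same rate;
                                     # the answer below pins down the constant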




Edit: First, I want to thank Milo Brandt for the nice answer. Next, I want to generalize my question.



One of the most important generalizations of the classical Fibonacci numbers is the sequence of Fibonacci $p$-step numbers, defined as follows:
$$
F_n^{(p)}=F_{n-1}^{(p)}+F_{n-2}^{(p)}+\cdots+F_{n-p}^{(p)}\, ,
$$
with boundary conditions
$$
F_{0}^{(p)}=0\quad , \quad F_{1}^{(p)}=0\quad ,\, \cdots\, ,\quad F_{p-2}^{(p)}=0\quad , \quad F_{p-1}^{(p)}=1\, .
$$



The limiting ratio of the Fibonacci $p$-step numbers is the reciprocal of the root of the equation $x^{p+1}-2\, x+1=0$ lying in the interval $(0,1)$. We denote this limit by $\alpha_p$; in fact, $\alpha_p$ is defined in the following form:
$$
\alpha_p=\displaystyle{\lim_{n\rightarrow\infty}}\quad \frac{F^{(p)}_{n+1}}{F^{(p)}_{n}}~.
$$
The generalization of the above formula is
$$
\left| \frac{F^{(p)}_{n+1}}{F^{(p)}_{n}}-\alpha_p\right| \, \approx \, \left(F^{(p)}_{n}\right)^{-\frac{p}{p-1}}~.
$$



For example, for the case $p=4$ we have



Fibonacci $4$-step numbers
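(As in the $p=2$ case, the plot/table is not reproduced here. Below is a hypothetical Python version of the check, my own addition rather than the original Maple computation: it generates the $p$-step numbers and prints both sides of the conjectured relation for comparison.)

    # Compare |F_{n+1}/F_n - alpha_p| with (F_n)^(-p/(p-1)) for the 4-step case.
    def pstep(p, N):
        """First N Fibonacci p-step numbers F_0, ..., F_{N-1}."""
        F = [0] * (p - 1) + [1]
        while len(F) < N:
            F.append(sum(F[-p:]))
        return F

    p = 4
    F = pstep(p, 40)
    alpha_p = F[-1] / F[-2]            # numerical approximation of alpha_p
    for n in (15, 20, 25, 30):
        lhs = abs(F[n + 1] / F[n] - alpha_p)
        rhs = F[n] ** (-p / (p - 1))
        print(n, lhs, rhs)             # print both sides of the conjectured relation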


Answer



The most direct way to deal with the Fibonacci numbers is to use Binet's formula:
$$F_n=\frac{\varphi^n - (-\varphi)^{-n}}{\sqrt{5}}$$
where $\varphi=\frac{1+\sqrt{5}}2$. So, the ratio of $\frac{F_{n+1}}{F_n}$ can be written in closed form as:
$$\frac{F_{n+1}}{F_n}=\frac{\varphi^{n+1} - (-\varphi)^{-n-1}}{\varphi^n - (-\varphi)^{-n}}$$

Note that this obviously tends towards $\varphi$ as $n$ goes to $\infty$, since the $(-\varphi)^{-n}$ terms quickly go to zero. Now, you additionally want to show that the difference between this ratio and $\varphi$ shrinks with $\frac{1}{F_n^2}$. To do this, let us take that difference symbolically:
$$\frac{F_{n+1}}{F_n}-\varphi = \frac{\varphi^{n+1}-(-\varphi)^{-n-1}}{\varphi^n-(-\varphi)^{-n}}-\frac{\varphi^{n+1}+(-\varphi)^{-n+1}}{\varphi^n-(-\varphi)^{-n}}=\frac{-(-\varphi)^{-n+1}-(-\varphi)^{-n-1}}{\varphi^n-(-\varphi)^{-n}}$$
Then, we use that $(\varphi)^{-1}+\varphi=\sqrt{5}$ to simplify to
$$\frac{F_{n+1}}{F_n}-\varphi = \frac{\sqrt{5}(-\varphi)^{-n}}{\varphi^n - (-\varphi)^{-n}}\approx \frac{\sqrt{5}(-1)^n}{\varphi^{2n}}\approx \frac{(-1)^n}{\sqrt{5}F_n^2}$$
where the $\approx$ signs indicate that the ratio of the two sides of the "equation" tends to $1$ as $n$ goes to $\infty$. Here we use that $F_n\approx \frac{\varphi^n}{\sqrt{5}}$, which implies $F_n^2\approx \frac{\varphi^{2n}}{5}$.


general topology - Discontinuous maps taking compacts to compacts

It's commonly known that in general topology, a continuous map $f$ from a topological space $(X, \tau)$ to another topological space $(Y, \tau')$ sends every compact set to a compact set. Moreover, I'm aware that the converse does not hold: as long as $\# Y \geq 2$, we can pick some set $S \subseteq X$ and define
$$f(x) = \begin{cases}
y_{1} & x \in S \\
y_{2} & x \in S^{\complement} .
\end{cases}$$
This would send not only compact sets to compact sets, but all sets to compact sets because $f(X)$ is finite, and thus compact (as are all its subsets). But this map could easily not be continuous.



NOTE: Understand for the rest of this post that $f: X \to Y$ is surjective.



My question is if there are more interesting counter-examples. Say $X = \mathbb{R}$ with the Euclidean metric; then what would be an example of such a discontinuous map $f: X \to Y$ where $Y = \mathbb{N}$ (with the usual metric), or $Y = \mathbb{Q}$ (again with the usual topology). What about $\mathbb{R}$, or $\mathbb{R} \setminus \mathbb{Q}$? Is there perhaps a limit on how large in cardinality $Y$ can be (with respect to $X$)? I'm curious to see less trivial counterexamples to the converse. Thanks!



EDIT: Many of the examples I'm getting still rely on compact sets having finite image. So a problem I might find more interesting: Can I construct a discontinuous counterexample where $f$ is also injective?

calculus - Multiple answers to $\int \sqrt{4t - t^2} \, \textrm{d} t$



I'm trying to understand why I'm getting different answers when taking different approaches to integrating




$$
\int \sqrt{4t - t^2} \, \textrm{d} t
$$



First, I tried substituting $\sqrt t = 2 \sin \theta$:



$$
\begin{eqnarray}
\int \sqrt{t}\sqrt{4 - t} \, \textrm{d} t
&=& \int 2 \sin \theta \cdot \sqrt{4 - 4 \sin^2 \theta} \cdot 8 \sin \theta \cos \theta \, \textrm{d} \theta \\
&=& 32 \int \sin^2 \theta \cos^2 \theta \, \textrm{d} \theta \\
&=& 4 \int 1 - \cos 4 \theta \, \textrm{d} \theta \\
&=& 4 \theta - \sin 4 \theta + C \\
&=& 4 \theta - 4 \sin \theta \cos^3 \theta + 4 \sin^3 \theta \cos \theta + C \\
&=& 4 \arcsin \frac{\sqrt{t}}{2} + \frac{1}{2} (t - 2)\sqrt{4t - t^2} + C \\
\end{eqnarray}
$$



Second, I tried completing the square and substituting $t - 2 = 2 \sin \theta$:




$$
\begin{eqnarray}
\int \sqrt{4 - (t^2 - 4t + 4)} \, \textrm{d} t
&=& \int \sqrt{4 - (t - 2)^2} \, \textrm{d} t \\
&=& \int \sqrt{4 - 4 \sin^2 \theta} \cdot 2 \cos \theta \, \textrm{d} \theta \\
&=& 4 \int \cos^2 \theta \, \textrm{d} \theta \\
&=& 2 \int 1 + \cos 2 \theta \, \textrm{d} \theta \\
&=& 2 \theta + \sin 2 \theta + C \\
&=& 2 \theta + 2 \sin \theta \cos \theta + C \\
&=& 2 \arcsin \left(\frac{t - 2}{2}\right) + \frac{1}{2}(t - 2)\sqrt{4t - t^2} + C \\
\end{eqnarray}
$$



The second answer is the same as in the book but I don't understand why the first approach gives the wrong answer.


Answer



The answers differ by a constant. In fact,



$$\arcsin\left(\frac t2-1\right) = 2\left(\arcsin\frac{\sqrt t}2 -\frac\pi4\right).\tag{*}$$




Proof: Write $\theta:=\arcsin\frac{\sqrt t}2$. Then $\sin^2\theta=\frac t4$ and $\cos^2\theta=1-\frac t4$, and using the angle-difference identities,
$$
\sin\left(\theta-\frac\pi4\right)=\frac1{\sqrt 2}(\sin\theta-\cos\theta)\tag1
$$
while
$$\cos\left(\theta-\frac\pi4\right)=\frac1{\sqrt 2}(\sin\theta+\cos\theta).\tag2$$
Therefore
$$
\sin2\left(\theta-\frac\pi4\right)=2\sin\left(\theta-\frac\pi4\right)\cos\left(\theta-\frac\pi4\right)
\stackrel{(1),(2)}=\sin^2\theta-\cos^2\theta =\frac t4-\left(1-\frac t4\right)=\frac t2-1.$$

Therefore both sides of (*) have the same sine. Similarly you can show that both sides have the same cosine.
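(A quick numerical spot-check, my own addition rather than part of the accepted answer: the two antiderivatives found above differ by the constant $\pi$, consistent with identity (*).)

    # Evaluate the difference of the two antiderivatives at several points in (0, 4).
    from math import asin, sqrt

    def F1(t):
        return 4 * asin(sqrt(t) / 2) + 0.5 * (t - 2) * sqrt(4 * t - t * t)

    def F2(t):
        return 2 * asin((t - 2) / 2) + 0.5 * (t - 2) * sqrt(4 * t - t * t)

    for t in (0.5, 1.0, 2.0, 3.0, 3.5):
        print(t, F1(t) - F2(t))        # always pi (about 3.14159...)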


sequences and series - Summation of $1\cdot 3\cdot 5\cdot 7 + 3\cdot 5\cdot 7\cdot 9 + \cdots$


Find the sum of:




$1 \cdot 3\cdot 5\cdot 7 + 3\cdot 5\cdot 7\cdot 9+...$ till $n$ terms.




My attempt:



I got the $i^{th}$ term to be $(2i-1)(2i+1)(2i+3)(2i+5)$



Expansion gives: $16i^4 +64i^3+56i^2-16i-15$



Required: $$\sum\limits_{i=1}^n (16i^4 +64i^3+56i^2-16i-15) $$




Using summation identities, I got:



$\dfrac{16n(n+1)(2n+1)(3n^2+3n-1)}{30}+\dfrac{64n^2(n+1)^2}{4}+\dfrac{56(n)(n+1)(2n+1)}{6}- \dfrac{16n(n+1)}{2}- 15n$



However, answer given is simply $$\frac{1}{10}\{(2n-1)(2n+1)(2n+3)(2n+5)(2n+7)+1\cdot 3\cdot 5\cdot 7\}$$
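(A small brute-force check of the book's closed form, added by me for illustration; not part of the original question.)

    # Compare direct partial sums of (2i-1)(2i+1)(2i+3)(2i+5) with the closed form.
    def term(i):
        return (2*i - 1) * (2*i + 1) * (2*i + 3) * (2*i + 5)

    def closed_form(n):
        return ((2*n - 1) * (2*n + 1) * (2*n + 3) * (2*n + 5) * (2*n + 7) + 1*3*5*7) // 10

    for n in range(1, 8):
        assert sum(term(i) for i in range(1, n + 1)) == closed_form(n)
    print("closed form matches the direct sum for n = 1..7")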

Wednesday 29 November 2017

calculus - Prove that $\lim \limits_{x\to \infty}e^{\frac{\ln(x)}{x}}=1$



How to prove that $\lim \limits_{x\to \infty}e^{\frac{\ln(x)}{x}}=1$?



I know that $x$ grows to infinity much faster than $\ln(x)$, therefore the limit is equivalent to $e^0 = 1$,



but that's not a rigorous proof.



Answer



$$\lim_{x \to \infty}\frac{\ln{x}}{x}=\lim_{x \to \infty}\frac{1}{x}=0$$



by L'Hôpital's rule. Since $e^x$ is continuous, it follows that $\lim\limits_{x\to \infty}e^{\frac{\ln(x)}{x}}=e^0=1$.


combinatorics - Prove $\sum\limits_{i=0}^{n}\binom{n+i}{i}=\binom{2n+1}{n+1}$




I'm trying to prove this algebraically: $$\sum\limits_{i=0}^{n}\dbinom{n+i}{i}=\dbinom{2n+1}{n+1}$$



Unfortunately I've been stuck for quite a while. Here's what I've tried so far:





  1. Turning $\dbinom{n+i}{i}$ to $\dbinom{n+i}{n}$

  2. Turning $\dbinom{2n+1}{n+1}$ to $\dbinom{2n+1}{n}$

  3. Converting binomial coefficients to factorial form and seeing if anything can be cancelled.

  4. Writing the sum out by hand to see if there's anything that could be cancelled.



I end up being stuck in each of these ways, though. Any ideas? Is there an identity that can help me?


Answer




Oh, hey! I just figured it out. Funny how simply posting the question allows you re-evaluate the problem differently...



So on Wikipedia apparently this is a thing (the recursive formula for computing the value of binomial coefficients): $$\dbinom{n}{k} = \dbinom{n-1}{k-1} + \dbinom{n-1}{k}$$



On the RHS of the equation we have (let's call this equation 1):



$$\dbinom{2n+1}{n+1} = \dbinom{2n}{n} + \dbinom{2n-1}{n} + \dbinom{2n-2}{n} + \cdots + \dbinom{n+1}{n} + \dbinom{2n-(n-1)}{n+1} $$
The last term $\dbinom{2n-(n-1)}{n+1}$ simplifies to $\dbinom{n+1}{n+1}$ or just 1.



Meanwhile on the LHS we have $\sum\limits_{i=0}^{n}\dbinom{n+i}{i}$ which is also $\sum\limits_{i=0}^{n}\dbinom{n+i}{n}$ because $\dbinom{n}{k} = \dbinom{n}{n-k}$.




Written out that is (let's call this equation 2):
$$\sum\limits_{i=0}^{n}\dbinom{n+i}{n} = \dbinom{n}{n} + \dbinom{n+1}{n} + \dbinom{n+2}{n} + \cdots + \dbinom{2n}{n} $$



The first term $\dbinom{n}{n}$ simplifies to 1.



Hey, look at that.



In each written-out sum there's a term in equation 1 and an equivalent term in equation 2. And each sum has $n+1$ terms, so equation 1 definitely equals equation 2. I mean, the order of terms is flipped, but whatever. Yay!
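(For what it's worth, here is a tiny numerical confirmation of the identity, my own addition to the self-answer above.)

    # Check sum_{i=0}^{n} C(n+i, i) == C(2n+1, n+1) for small n.
    from math import comb

    for n in range(8):
        assert sum(comb(n + i, i) for i in range(n + 1)) == comb(2*n + 1, n + 1)
    print("identity verified for n = 0..7")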


find sum of n terms of series $\sum \cos(n\theta)$




Use the result $1 + z + z^2...+z^n=\frac{z^{n+1}-1}{z-1}$ to sum the series to n terms



$1+\cos\theta+\cos2\theta+...$



also show that the partial sums of the series $\sum \cos (n\theta)$ are bounded when $0<\theta<\pi/2$



My attempt



so z can be written as $e^{i\theta}$ which means:




$1+ \cos \theta + \cos 2\theta ....+\cos n\theta + i(\sin \theta+\sin 2\theta+....+\sin n\theta)=\frac{z^{n+1}-1}{z-1}$



After this... I don't know how to continue.


Answer



Remember that
$$
e^{it}=\cos t+i\sin t\;\;\;\;\forall t\in\Bbb C
$$
and that

$$
\sum_{j=0}^{n}z^j=\frac{1-z^{n+1}}{1-z}\;\;\forall z\in\Bbb C,\; z\neq 1
$$
(the finite geometric sum is valid for every $z\neq 1$, in particular for $z=e^{i\theta}$).
Thus
$$
\sum_{j=0}^{n}\cos(j\theta)=
\sum_{j=0}^{n}\Re{(e^{ij\theta})}=
\Re\left(\sum_{j=0}^{n}(e^{ij\theta})\right)=
\Re\left(\frac{1-e^{i\theta(n+1)}}{1-e^{i\theta}}\right)
$$

The last term I wrote can be handled easily in order to be written explicitly and get the results you wanted.
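(To illustrate the last step numerically, here is a short check I added; it is not part of the original answer. The direct cosine sum agrees with the real part of the closed-form geometric sum.)

    # Compare the direct sum of cos(j*theta) with Re[(1 - z^(n+1)) / (1 - z)], z = e^{i theta}.
    import cmath, math

    theta, n = 0.7, 25
    z = cmath.exp(1j * theta)
    direct = sum(math.cos(j * theta) for j in range(n + 1))
    closed = ((1 - z ** (n + 1)) / (1 - z)).real
    print(direct, closed)              # the two values agree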


Tuesday 28 November 2017

combinatorics - How to prove that $n$ divides $\binom{n}{k}$ if $n$ is prime?

I want to prove that $n$ divides $\binom{n}{k}$, and so I expanded the term to
$\frac{n(n-1)\cdots(n-k+1)}{k!}$. Clearly $n$ divides the numerator, and $n$ is relatively prime to all of the factors in the denominator, so $k!$ does not divide $n$. I'm struggling with how to show that $\frac{(n-1)\cdots(n-k+1)}{k!}$ is an integer.



This problem comes as an example in the book I'm reading and supposedly it's obvious but I don't see it.



Edit: Sorry adding that we must have $1 \leq k \leq n-1$.
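(There is no posted answer in this excerpt, but a quick numerical experiment, which I added here for illustration, supports the statement and shows why primality matters.)

    # For prime n, C(n, k) is divisible by n for 1 <= k <= n-1; for composite n it can fail.
    from math import comb

    for n in (5, 7, 11, 13):
        assert all(comb(n, k) % n == 0 for k in range(1, n))
    print(comb(4, 2) % 4)              # prints 2: fails for the composite n = 4, k = 2
    print("divisibility checked for the primes 5, 7, 11, 13")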

soft question - Mathematical ideas that took long to define rigorously

It often happens in mathematics that the answer to a problem is "known" long before anybody knows how to prove it. (Some examples of contemporary interest are among the Millennium Prize problems: E.g. Yang-Mills existence is widely believed to be true based on ideas from physics, and the Riemann hypothesis is widely believed to be true because it would be an awful shame if it wasn't. Another good example is Schramm–Loewner evolution, where again the answer was anticipated by ideas from physics.)



More rare are the instances where an abstract mathematical "idea" floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea. An example of this is umbral calculus, where a mysterious technique for proving properties of certain sequences existed for over a century before anybody understood why the technique worked, in a rigorous way.



I find these instances of mathematical ideas without rigorous interpretation fascinating, because they seem to often lead to the development of radically new branches of mathematics$^1$. What are further examples of this type?



I am mainly interested in historical examples, but contemporary ones (i.e. ideas which have yet to be rigorously formulated) are also welcome.








  1. Footnote: I have some specific examples in mind that I will share as an answer, if nobody else does.

Monday 27 November 2017

real analysis - Functions such that $f(ax)=f(x)+f(a)$

I thought of this question somewhat randomly on a walk, and have discussed it with another friend of mine (we both have pure mathematics degrees). We have made some headway, and we think we have generated a proof, but we would appreciate any additional insight and proof verification.




Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous function such that $f(ax)=f(x)+f(a)$. How much can we say about the behavior of $f(x)$? What additional restrictions, if any, allow us to show $f(x)=\log_b(x)$?




We conjecture that, with the restriction that $f$ is not $0$ everywhere, $f$ must be the logarithm. We have, by the log rules,




$$\log_b(|ax|)=\log_b(|x|)+\log_b(|a|)$$



So the logarithm is indeed one such $f$. I will give what we have been able to show about $f$ below, followed by our general proof. If you can offer any insight into this problem, we would greatly appreciate it.



Edit: Turns out this can be shown fairly easily by considering the Cauchy Functional Equation and showing $g(x)=f(e^x)=cx$ for some $c$. The proof given below does not use this fact.



From here on, we assume $f$ is not trivial.







Result 1: $f(1)=0$



We note that



$$f(a)=f(a\cdot1)=f(1)+f(a)$$



which shows $f(1)=0$.







Result 2: $f(0)$ is not defined



We see that



$$f(0)=f(a\cdot0)=f(0)+f(a)$$



which implies $f(a)=0$ for all $a$. However, we have assumed $f(x)\neq0$ for some $x$, giving a contradiction. Thus $f(0)$ must not be defined. So, we redefine $f$ as $f:\mathbb{R}\setminus\{0\}\to\mathbb{R}$.







Result 3: $f(-x)=f(x)$



If we allow $x<0$, we then have



$$0=f(1)=f(-1\cdot-1)=2f(-1)$$



Showing that $f(-1)=0$ as well. Using this result, we have



$$f(-a)=f(-1)+f(a)=f(a)$$







Our Proof:



From here on, we use results from elementary abstract algebra and analysis. We realized that the condition on $f$ is that of a group homomorphism $\varphi:(\mathbb{R}\setminus\{0\},*)\to(\mathbb{R},+)$ where



$$\varphi(xy)=\varphi(x)+\varphi(y)$$



Similarly, you can show the above results for $\varphi$. We noted that if we also define the continuous group homomorphism $\psi:(\mathbb{R},+)\to(\mathbb{R}\setminus\{0\},*)$ where




$$\psi(x+y)=\psi(x)\psi(y)$$



$\psi$ and $\varphi$ seem like they could possibly be inverses with some additional restrictions. We see that



$$\psi(x)=\psi(0+x)=\psi(0)\psi(x)$$



So that $\psi(0)=1$. We can then say for any integer $n\geq0$,



$$\psi(n)=\psi(1+1+1+...+1)=\psi(1)\cdot\psi(1)\cdot\cdot\cdot\psi(1)=\psi(1)^n$$




Note, $\psi(1)<0$ may give us complex results, so we restrict $\psi(1)>0$. For any integer $n<0$,



$$\psi(n)=\psi(-1-1+...-1)=\psi(-1)^{|n|}=(\psi(1)^{-1})^{|n|}=\psi(1)^{-|n|}=\psi(1)^n$$



Then, for any rational number $\frac{p}{q}$,



$$\psi(\frac{p}{q})=\psi(\frac{1}{q})^p=\psi(q^{-1})^p=(\psi(q)^{-1})^p=\psi(1)^{\frac{p}{q}}$$



Let $x\in\mathbb{R}\setminus\mathbb{Q}$. Since the rationals are dense in the reals, we can find rationals $\frac{p}{q}$ and $\frac{r}{s}$, arbitrarily close to $x$, with $\frac{p}{q}<x<\frac{r}{s}$, so that


$$\psi(1)^{\frac{p}{q}}<\psi(x)<\psi(1)^{\frac{r}{s}}$$



Since $\psi$ is continuous, this implies $\psi(x)=\psi(1)^x$ for all $x\in\mathbb{R}$. Therefore,



$$\psi(x)=\psi(1)^x$$



This shows that $\psi$ must be the exponential function. Note that it is completely characterized by its value at $1$. Noting that $\varphi$ seems like it should be related to the inverse of $\psi$, we would expect $\varphi(x)=\log_b(x)$. We note that for integer $n\geq0$ and real $a>0$,



$$\varphi(a^n)=\varphi(a)+\varphi(a)+...+\varphi(a)=n\varphi(a)$$




For $n<0$, we have



$$\varphi(a^n)=\varphi(\frac{1}{a})+...+\varphi(\frac{1}{a})=|n|\varphi(a^{-1})=-|n|\varphi(a)=n\varphi(a)$$



The rational case is slightly trickier. We note that $a^{\frac{n}{n}}=a$, and thus



$$\varphi(a)=\varphi(a^{\frac{n}{n}})=n\varphi(a^{\frac{1}{n}})$$



And thus $\varphi(a^{\frac{1}{n}})=\frac{1}{n}\varphi(a)$. Therefore, for any rational $\frac{p}{q}$




$$\varphi(a^{\frac{p}{q}})=p\varphi(a^{\frac{1}{q}})=\frac{p}{q}\varphi(a)$$



Using the same argument as before, we can extend this to all $x\in\mathbb{R}$. Therefore,



$$\varphi(a^x)=x\varphi(a)$$



Combining these, we have



$$\varphi(\psi(x))=\varphi(\psi(1)^x)=x\varphi(\psi(1))$$




Thus, $\varphi$ and $\psi$ are inverses up to multiplication by a constant, which confirms that $\varphi$ must be the logarithm. If we restrict $\varphi(\psi(1))=1$, these are exactly inverses. If $\psi(1)>0$, we must have $\varphi_{\psi(1)}(x)=\log_{\psi(1)}(x)$. This proof requires that $x>0$. However, we showed earlier that $\varphi(-x)=\varphi(x)$, and for positive $x$, $\varphi_b(x)=\log_b(x)$. To make this function even, we modify it as



$$\varphi_b(x)=\log_b|x|$$



and this is the only possible continuous solution for $\varphi$ defined on all of $\mathbb{R}\setminus\{0\}$. In other words, $f$ must be the logarithm. We have shown the logarithm rules are unique to logarithms!






One thing that concerns us is the restriction that $f$ is continuous. Are there discontinuous $f$ that satisfy this property? Please let us know if we made any invalid assumptions somewhere, or if our statements about the uniqueness of these functions is incorrect. We would also appreciate any alternate proofs you may have.

calculus - Problem solving a series with convergence test: $\sum_{k=1}^{\infty}\frac{k!}{(2k)!}$



Good morning, I have a big problem solving this:
$$\sum_{k=1}^{\infty}\frac{k!}{(2k)!}$$



I'm trying to solve this with d'Alembert's ratio test, but I have a problem evaluating the limit.



$\lim_{k\rightarrow\infty}\frac{(k+1)!\,(2k)!}{(2k+2)!\,k!}=(?)$




please, help me.


Answer



We have that
$$
\lim_{k\to\infty}\frac{(k+1)!(2k)!}{k!(2(k+1))!}=\lim_{k\to\infty}\frac{(k+1)(2k)!}{(2k+2)!}=\lim_{k\to\infty}\frac{k+1}{(2k+2)(2k+1)}=\lim_{k\to\infty}\frac1{2(2k+1)}=0.
$$
Hence, the series converges by the ratio test.


trigonometry - Polar coordinates and extremes of integration

I have trouble understanding how to find the limits of integration for $\theta$ when I pass to polar coordinates.



1° example - Let $(X,Y)$ be a random vector with density $f(x,y)=\frac{1}{2\pi}e^{-\frac{(x^2+y^2)}{2}}$.



Using the transformation $g=\left\{\begin{matrix}
x=r\cos\theta\\
y=r\sin\theta
\end{matrix}\right.$
and after calculating the determinant of the Jacobian matrix, I have $dx\,dy=r\,dr\,d\theta$, from which




$\mathbb{E}[g(X^2+Y^2)]=\int_{\mathbb{R}^2}g(x^2+y^2)f(x,y)\,dx\,dy=\frac{1}{2\pi}\int_{0}^{+\infty}g(r^2)e^{-\frac{r^2}{2}}\,r\,dr\int_{0}^{2\pi}d\theta$
$\Rightarrow X^2+Y^2\sim Exp(\frac{1}{2})$



2° example - Why for $\int\int_{B}\frac{y}{x^2+y^2}dxdy$ with $B$ annulus of centre $(0,0)$ and radius $1$ and $2$ the extremes of integration of $\theta$ are $(0,\pi)$?



3° example - Why for $\int\int_{B}\sqrt{{x^2+y^2}}dxdy$ with $B$ segment of circle $(0,0)$ and radius $1$ and $2$ the extremes of integration of $\theta$ are $(0,\frac{\pi}{2})$?



4° example - Why for $\int\int_{S}(x-y)dxdy$ with $S={((x,y)\in \mathbb{R}:x^2+y^2=r^2; y\geq 0)}$ the extremes of integration of $\theta$ are $(0,\pi)$?




I hope I have made clear my difficulties.
Thanks in advance for any answer!

elementary set theory - infinite countably cartesian product



Let $A^{\mathbb N}=\prod_{m\in\mathbb N} A_m$ be the infinite countably cartesian product of the sets $A_m$. Let $A_i'$ be a subset of $A_i$ for $i=1,...,n$. Is it true that $A_1'\times A_2'\times...\times A_n'\times A^{\mathbb N\setminus\{1,...,n\}}$ is equal to $A_1'\times A_2'\times...\times A_n'\times A^{\mathbb N}$ ?
For me the answer is yes, because a countably infinite set minus a finite set is still countably infinite, but I don't know how to use this formally to solve the problem. Thank you very much.


Answer



Hint : consider $A_m = \{0,1,…,m\}$, $A_m' = \{0\}$.





Then, you can find $(\underbrace{0,0,…,0}_{n \text{ times}},n+1,n+1,…)$ in the first product, but not in the second one (because the $n+1$-th component in the second product is $A_0 = \{0\}$).




However, if all the $A_m$ are equal, then it is true : it is sufficient to prove that $$A^{\mathbb N\setminus\{1,...,n\}} = A^{\mathbb N}$$



To do this, pick $X=(x_{n+1},x_{n+2},…) \in A^{\mathbb N\setminus\{1,...,n\}}$, that means $x_j \in A_j := A_0$ for all $j ≥ n+1$.



Then, you want to prove that $X \in A^{\mathbb N}$. This means : does the $k$-th component of $X$ ($k≥1$) belong to $A_k=A_0$ ? Does every component of $X$ belong to $A_0$ ?




Then, $X$ belongs to $A^{\mathbb N}$ simply because all its components belong to $A_0$... Even if I denoted the first component of $X$ by $x_{n+1}$, this doesn't really matter because what is important is that $x_{n+1} \in A_{n+1} = A_0$.


Saturday 25 November 2017

calculus - Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$




I'm supposed to calculate:



$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$



By using W|A, I may guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.
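(A direct numerical experiment, added by me for illustration and independent of the answer below, supports the guess of $\frac{1}{2}$; the terms are accumulated incrementally so nothing overflows or underflows.)

    # Compute e^{-n} * sum_{k=0}^{n} n^k / k! term by term.
    import math

    def partial(n):
        term = math.exp(-n)            # the k = 0 term, e^{-n} * n^0 / 0!
        total = term
        for k in range(1, n + 1):
            term *= n / k              # now term = e^{-n} * n^k / k!
            total += term
        return total

    for n in (10, 100, 500):
        print(n, partial(n))           # values drift slowly towards 0.5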


Answer



Edited. I justified the application of the dominated convergence theorem.



By a simple calculation,




$$ \begin{align*}
e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!}
&= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\
(1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\
&= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\
(2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\
&= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\
(3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du.
\end{align*}$$




We remark that




  1. In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.

  2. In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.

  3. In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.



Then in view of the Stirling's formula, it suffices to show that




$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$



The idea is to introduce the function



$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$



and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then



$$ \log g_n (u)
= n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u

= -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$



From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,



$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$


real analysis - Why can't we determine the limit of $\cos x$ and $\sin x$ at $x=\infty$ or $x=-\infty$?

I'm confused about why we can't determine the limit of $\cos x$ and $\sin x$ as $x \to \infty$, even though they are defined over $\mathbb{R}.$



When I use Wolfram Alpha, I get the following result (link to page):








which shows only that there are two limit values: $-1$ and $1$.




Can someone show me why we can't determine $\lim \sin x$ and $\lim \cos x$ at $x=\infty $ or $x=-\infty$ ?



Thank you for your help.

Friday 24 November 2017

real analysis - Compute $\lim\limits_{x\to 0_+}e^{\frac{1}{\sqrt{x}}}\cdot(1-\sqrt{x})^\frac{1}{x}$

Evaluate $$\lim_{x\to 0_+}e^{\frac{1}{\sqrt{x}}}\cdot(1-\sqrt{x})^\frac{1}{x}$$
I tried using $$\lim_{x\to 0}(1+x)^\frac{1}{x} = e$$ like so:
$$l = \lim_{x\to 0_+}e^\frac{1}{\sqrt{x}}\cdot\bigg[\big(1+(-\sqrt{x})\big)^{-\frac{1}{\sqrt{x}}}\bigg]^{\frac{-1}{\sqrt{x}}} = \lim_{x\to 0_+}e^{\frac{1}{\sqrt{x}}-\frac{1}{\sqrt{x}}} = e^0 = 1$$
However, the right answer is $\frac{1}{\sqrt e}$. Why is it that the whole expression in square brackets can't be taken as $e$ in this case?

Thursday 23 November 2017

discrete mathematics - Finding a formula to sum natural numbers up to $n$










I got this question in homework:




Find an expression for the sum




$\sum k = 1 +\cdots + n$.




and prove it using an induction.



I'm not even near finding the expression. What I did notice is that
if $n$ is (for example) 5 then the sum would be




$5^2 - 4^2 + 3^2 - 2^2 + 1^2$



So the first number is always positive and from there on the sign changes.



Any tips on how I continue from this point on, assuming I'm in the right direction?



Thanks!


Answer



Hint.

$$\begin{array}{cccccccccccc}
& 1 & + & 2 & + & 3 & + & 4 & + & \cdots & + & n\\
+& n & + &n-1 & + & n-2 & + & n-3 & + & \cdots & + & 1\\
\hline
& n+1 & + &n+1 & + &n+1 &+& n+1 & + & \cdots & + & n+1
\end{array}$$
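(For completeness, here is my own spelling-out of where the hint leads; it is not part of the original answer. Adding the two rows column by column gives $n$ copies of $n+1$, so
$$2(1+2+\cdots+n)=n(n+1)\qquad\Longrightarrow\qquad 1+2+\cdots+n=\frac{n(n+1)}{2},$$
and the induction step is then
$$1+2+\cdots+k+(k+1)=\frac{k(k+1)}{2}+(k+1)=\frac{(k+1)(k+2)}{2},$$
which is the same formula with $n$ replaced by $k+1$.)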


Isomorphism with Lie algebra $\mathfrak{sl}(2)$

Let $L$ be a Lie algebra over $\mathbb{R}$. We consider $L_{\mathbb{C}}:= L \otimes_{\mathbb{R}} \mathbb{C}$ with the bracket operation
$$ [x \otimes z, y \otimes w] = [x,y] \otimes zw $$
for all $x,y \in L$ and $z,w \in \mathbb{C}$. We have that $L_{\mathbb{C}}$ is a Lie algebra.
If $L= \mathbb{R}^{3}$ and for $x,y \in L$ we define $[x,y]:= x \wedge y$ (where $\wedge$ denotes the usual vector product), we have that $(L, \wedge)$ is a Lie algebra. I have to prove that $L \simeq \mathfrak{sl}(2)$. In order to do this I'd like to prove that $L \simeq \mathfrak{so}(3,\mathbb{R})$. Then, because $\mathfrak{so}(3,\mathbb{R}) \otimes \mathbb{C} \simeq \mathfrak{sl}(2)$ and $\mathfrak{sl}(2)$, up to isomorphism, is the unique $3$-dimensional semisimple algebra, I complete my proof. So my questions are:
1) How can I prove that $(\mathbb{R}^{3}, \wedge) \simeq \mathfrak{so}(3, \mathbb{R})$?
2) Why is $\mathfrak{so}(3,\mathbb{R}) \otimes \mathbb{C} \simeq \mathfrak{sl}(2)$?

elementary number theory - Prove $\gcd(k, l) = d \Rightarrow \gcd(2^k - 1, 2^l - 1) = 2^d - 1$

This is a problem for a graduate level discrete math class that I'm hoping to take next year (as a senior undergrad). The problem is as stated in the title:




Given that $\gcd(k, l) = d$, prove that $\gcd(2^k - 1, 2^l - 1) = 2^d
- 1$.




The problem also says "hint: use Euclid's lemma," although I'm not sure which part of the problem it should be applied to. I've been thinking about it for a few days and I'm completely stumped.




I'm not really even sure how to show that it divides either $2^k - 1$ or $2^l - 1$. From the given, we know that $\exists c: dc = k$. We need to show that $\exists c': (2^d - 1)c' = 2^k - 1$. Obviously $c' = 2^c$ gives you $(2^d - 1)c' = 2^k - 2^c$, but I can't figure out how to get rid of the extra terms that the $- 1$ brings in in various places.



From Euclid's lemma on the left side, you know $\exists i: di = k - l$, and applying it on the right side, you know it suffices to show that $\gcd(2^k - 2^l, 2^l - 1) = 2^d - 1$. And by Bezout's identity, it's enough to show that $2^d - 1$ can be written as a linear combination of either of those things.



Can anyone give me a hint?
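(Not a proof, but an easy way to convince yourself the statement is true; this sanity check is my own addition.)

    # Spot-check gcd(2^k - 1, 2^l - 1) = 2^gcd(k, l) - 1 on random exponents.
    import random
    from math import gcd

    for _ in range(1000):
        k, l = random.randint(1, 60), random.randint(1, 60)
        assert gcd(2**k - 1, 2**l - 1) == 2**gcd(k, l) - 1
    print("identity verified on 1000 random (k, l) pairs")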

Wednesday 22 November 2017

algebra precalculus - A Question About The First Step In Induction

I am currently learning mathematical induction from this site (https://www.mathsisfun.com/algebra/mathematical-induction.html). It has broken induction into 3 steps:




  1. Show that it is true for n=1

  2. Assume it is true for n=k

  3. Show that it is true for n=k+1




I have 4 questions:




  1. Why, of all numbers do we pick n=1? Can't we pick something like n=1, n=2, or the like?


  2. Why do we need the 3rd step? I get a feeling it is to prove that it is true for all n=k, but if that is so, how does it do it? It does prove that it is true for all n=k+1, but that is based on the assumption that n=k; and therefore doesn't prove it. Because if a proof is based on an assumption, how does that prove anything?


  3. Why do we need the first step when we show that it is true for all n=k+1?


  4. In n=k+1, why do we add 1? Why can't we subtract 1, or add 2, etc? Why must it be n=k+1?





Is it possible to answer the question at the level of a Pre-Calc student, who hasn't learnt Calculus (obviously), set theory, and all those complicated stuff?



This question is different from "Dominoes and induction, or how does induction work?" because I have learnt neither limit notation nor L'Hopital's rule, and the other question contains them. This is important for a Precalc student who understands neither of them.

linear algebra - Same Eigenvalues = Similar Matrices?

While watching Strang's lecture on similar matrices he stated that if any two matrices, $A$ and $B$, have the same eigenvalues then they can be put in the form $A=P B P^{-1}$. This is very easy to see when $B$ is the diagonal matrix housing $A$'s eigenvalues. In this example $A$ and $B$ are not diagonal matrices.




But suppose this is not the case: say we are in $\mathbb{R}^2$, $A$ has eigenvectors $v_1$ and $v_2$ with eigenvalues $\lambda_1$ and $\lambda_2$ at an angle of $\theta$ (say 30 degrees) from each other, and $B$ has the same eigenvalues but its eigenvectors make some angle $\theta_2$ (say 120 degrees). How can I intuitively see that they are similar?



The intuition for similar matrices is often stated as "they are the same transformation, just in a different basis". This is easy to see if the $\theta$'s are the same, as then $P$ is just a rotation matrix, but I want to understand why it's true for all matrices with equal eigenvalues. And how can this $P$ then be found?



Edit: Please don't state the example of a matrix having all equal eigenvalues and not being similar to the identity. This has been stated already, and we are assuming distinct eigenvalues for this question.
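(Regarding "how can this $P$ then be found": when $A$ and $B$ are diagonalizable with the same distinct eigenvalues, one concrete choice is $P=V_A V_B^{-1}$, where the columns of $V_A$ and $V_B$ are eigenvectors listed in matching eigenvalue order. The NumPy sketch below is my own illustration, not from the lecture or the post.)

    # Build P = V_A @ inv(V_B) and verify A = P B P^{-1} numerically.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])         # eigenvalues 2 and 3
    B = np.array([[3.0, 0.0],
                  [1.0, 2.0]])         # same eigenvalues, different eigenvectors

    wa, Va = np.linalg.eig(A)
    wb, Vb = np.linalg.eig(B)
    order = [int(np.argmin(abs(wb - w))) for w in wa]   # match eigenvalue order
    Vb = Vb[:, order]

    P = Va @ np.linalg.inv(Vb)
    print(np.allclose(A, P @ B @ np.linalg.inv(P)))     # True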

Tuesday 21 November 2017

linear algebra - Elementary row operations on $A$ don't change the row-space of $A$



Assume that $F$ is a field. Consider a matrix like $A \in M_{m,n}(F)$.



If we look at each row of $A$, as a member of $F^n$, the subspace produced by rows of $A$ is called the row-space of $A$.



a) Prove that elementary row operations on $A$ don't change the row-space of $A$. Then conclude that for each matrix $B$, if $A$ and $B$ are row equivalent, then they have the same row-space.



b) Prove that $A \in M_{n}(F)$ is invertible iff the row-space of it is $F^n.$




Note : I'm confused with the definitions ... I have no idea how to solve this problem.


Answer



Let $v_1,\ldots,v_n$ denote the rows of $A$, and let $V$ be the row-space of $A$. In other words,
$$
V=\text{span}\,\{v_1,\ldots,v_n\}.
$$
Elementary operations could consist of a permutation of rows, which amount to permuting some $v_j$ and $v_k$ above; such operation will not change the span of $v_1,\ldots,v_n$. The same happens with multiplying by a nonzero number: $v_1,\ldots,v_n$, and $v_1,\ldots,\lambda v_k,\ldots, v_n$ span the same subspace.



The last kind of elementary operation consists of replacing $v_k$ with $v_k+\lambda v_j$. In this case, we can write

$$
\alpha_1v_1+\cdots+\alpha_nv_n=\alpha_1v_1+\cdots+\alpha_{k-1}v_{k-1}+\alpha_k(v_k+\lambda v_j)+\alpha_{k+1}v_{k+1}+\cdots+(\alpha_j-\alpha_k\lambda)v_j+\cdots+\alpha_nv_n.
$$
So every linear combination of $v_1,\ldots,v_n$ is also a linear combination of $v_1,\ldots,v_n$ with $v_k$ replaced by $v_k+\lambda v_j$.



In summary, after doing any elementary operation to $v_1,\ldots,v_n$, the span doesn't change. It follows directly that if $A$ and $B$ are row equivalent, since the rows of $B$ can be obtained by elementary operations from the rows of $A$, the spans of their rows are equal.



If $A$ is invertible, then it is row equivalent to $I$, and so its row space is $F^n$. Conversely, if the row space of $A$ is $F^n$, then by writing each of $v_1,\ldots,v_n$ in terms of the canonical basis, we get a recipe to go from $I$ to $A$ by elementary operations, and so $A$ is invertible.


logarithms - discrete exponential calculating



I am interested in how to calculate discrete exponentials.



I know that $a^b = c\mod k$ is calculated as below.



for example $3^4 = 13 \mod 17$. $3^4 = 81$; $81 \mod 17 = 13$.




I am interested in big numbers.



for example $12356423547^{72389478972138} \mod 1239859034832$



This calculation is more difficult . And I don't think it is possible to calculate this expression without any simplification.



I have researched some ways to simplify this calculation, and I found the following.



$a^b \mod k = a^{(b \mod \log(a,1)) \mod k}$




Here $\log(a,b)$ is the discrete logarithm of $b$ with base '$a$'.



Of course this is simpler than computing $a^b$ directly, but it is still not simple enough.



For very big calculations this way isn't useful. For example, with $128$-bit public keys, $a$ and $b$ will be $128$-bit integers (for example). Is there any simplified formula to calculate exponentials of big numbers in real time?
Thank you.


Answer



Powers can be calculated quickly by a technique called repeated squaring. For example, \[3^{32} = (((((3^2)^2)^2)^2)^2)\]




When the exponent is not a power of two, you write it as a sum of powers of two, and use the fact that $a^{b + c} = a^b a^c$. So, for example, \[3^{43} = 3^{32} \times 3^{8} \times 3^{2} \times 3\], and since we re-use $3^2$ in calculating $3^8$, we can do the whole calculation in just eight multiplications.



You can also do the squaring with respect to your modulus, as Hagen von Eitzen explained. So to calculate $3^{43} \mod 17$, I'd do the following:




  • $3^2 = 9$

  • $3^4 = 9^2 = 81 = -4 \mod 17$ (the only multiplication step here is $9^2 = 81$)

  • $3^8 = (-4)^2 = 16 = -1$ (from now on, I'll just leave all the mod-17 implicit)

  • $3^{16} = (-1)^2 = 1$

  • $3^{32} = 1^2 = 1$




and now:




  • $3^3 = 3 \times 3^2 = 3 \times 9 = 27 = 10$

  • $3^{11} = 3^3 \times 3^8 = 10 \times (-1) = 7$

  • $3^{43} = 3^{11} \times 3^{32} = 7 \times 1 = 7$




So the answer is $7$, all through calculations I did in my head. I've set it out so that only one multiplication is done per line.



In fact, I could have used some more theoretical results to determine that $3^{16} = 1$ straight away (Carmichael's Theorem) and hence $3^{43} = 3^{11} \times 3^{16} \times 3^{16} = 3^{11}$, which would have made it even faster. Because of Carmichael's Theorem, you never need to worry about an exponent that's bigger than the modulus.


trigonometry - $(\sin\theta+\cos\theta)^2=1+\sin2\theta$



49) $(\sin\theta+\cos\theta)^2=1+\sin2\theta$
Left Side:
\begin{align*}
(\sin\theta+\cos\theta)^2=\sin^2\theta+2\cos\theta\sin\theta+\cos^2\theta=1+2\cos\theta\sin\theta
\end{align*}
This can either be $1$ or I can power reduce it. I don't know.



Right Side:
\begin{align*}
1+\sin2\theta=1+2\sin\theta\cos\theta
\end{align*}



Thank you!


Answer



Open parentheses and use:



$$(1)\,\,\sin^2x+\cos^2x=1$$



$$(2)\,\,\sin 2x=2\sin x\cos x$$


functional equations - Finding all real valued functions that satisfy $f(f(y) + xf(x)) = y + (f(x))^2$



I would like some help with finding all real valued functions that satisfy this equation:




$f(f(y) + xf(x)) = y + (f(x))^2$



I tried the usual substitutions like $x = y = 0$, but my experience with this kind of problem is very limited.



EDIT: I'm an idiot and copied the wrong right side. Updated it now.


Answer



$$f(f(y) + xf(x)) = y + f(x)^2$$
Let $x=0$; then:
$$f(f(y))=y+f(0)^2$$
This shows that $f$ is one-one and onto, hence invertible, so there's no harm in choosing $x=f^{-1}(0)$ (i.e. $f(x)=0$) in the original equation, which gives:
$$f(f(y))=y\implies f(y)=f^{-1}(y)$$
Does that give you a hint? [$f$ is a one-one, onto, invertible, self-inverse function.] Try $f(x)=x$ or $f(x)=-x$.


Monday 20 November 2017

linear algebra - Unit Matrices for $M_n(M_n(\mathbb{R}))$



In the case of $M_n(\mathbb{R})$ we can define unit matrices $E_{ij}$ in the following way,



$$E_{ij}f_l = \delta_{lj}f_i$$



where the $\{f_i\}$ is a basis for our space. One consequence of this definition is that the $(i,j)$ element of $E_{ij}$ is $1$ and everything else is $0$; additionally



$$E_{ij}E_{lk} = \delta_{jl} E_{ik}$$




.... okay, anyway I'm not sure how unit matrices look in $M_n(M_n):=M_n(M_n(\mathbb{R}))$.
And how do the above properties carry over?



Also, I don't understand why the following line in a proof is true (as per the above):
Consider $P=(E_{ij}) \in M_n(M_n)$. Then $P=P^\dagger$, and
$$P^2 = \Big(\sum_{k=1}^n E_{ik}E_{kj}\Big) = \big(nE_{ij}\big) = nP$$


Answer



I see that you have tagged your question with quantum computation. So perhaps you may find it more intuitive if we work in Dirac notation. That is we let $\{| j \rangle\}$ be an orthonormal basis for the underlying vector space (so that $\langle i|j\rangle =\delta_{i,j}$). Then what you call the unit matrices are equivalent defined as
$$
E_{i,j} = |i\rangle\!\langle j|.

$$
That is, they have a 1 in the entry at the $i$'th row and $j$'th column and zeros everywhere else. Now the set of matrices $\{|E_{i,j}\rangle\!\rangle \}$ form an orthonormal basis for the vector space of matrices, where the inner product between two matrices $A$ and $B$ is $$\langle\!\langle A|B\rangle\!\rangle = \operatorname{Tr}(A^\dagger B).
$$



We can now play the same trick. Define the "super" unit matrices as
$$
F_{i,j;k,l} = |E_{i,j}\rangle\!\rangle\!\langle\!\langle E_{k,l}|.
$$
Then, the $F$ matrices satisfy the same properties as the $E$ matrices.




From your last sentence, however, it does not seem like you are interested in these unit matrices. Your final equation suggests the matrix you define as $P$ is simply the matrix of 1's in every entry. This can be written
$$
P = \sum_{i,j} E_{i,j}.
$$
Then direct calculation leads to
$$
P^\dagger = \sum_{i,j} E_{i,j}^\dagger = \sum_{i,j} E_{j,i} = P,
$$
and
$$

P^2 = \sum_{i,j,k,l} E_{i,j}E_{k,l} = \sum_{i,j,k,l} E_{i,l}\delta_{j,k} = n \sum_{i,l} E_{i,l} = n P.
$$


Arithmetic Sequence - Find the Last Number of Terms



The sequence is: $11, 13, 15, ... 59$



I need to find $i$ so that the sum of all terms before $i$ equals the sum of all terms after $i$



The simple way would be to calculate: $S_{i-1} = S_{n} - S_{i}$




But I would like to solve it a different way. Is there a formula to find the last number of terms of an arithmetic sequence?



I have found something but it doesn't seem to work when I use it. Please also show me how you solve the equation.


Answer



Another method, as requested.



Counting backwards from $59$ using $d'=-2$, the $j$-th term is $T'_j=61-2j$, and the sum of $j$ terms is $S'_j=\frac j2(59+T'_j)=60j-j^2=j(60-j)$, where the dash indicates counting backwards. This is also the formula for the last $j$ terms (counting forwards), as mentioned in your question.*



Counting forwards from $11$, using standard formulas, we have $S_i=10i+i^2=i(10+i)$.




We want the forward sum to $(i-1)$ terms to be equal the backward sum to $j$ terms, and also $i+j=25$, hence
$$\begin{align}
S'_j&=S_{i-1}\\
j(60-j)&=(i-1)(10+\overline{i-1})\\
(25-i)(35+i)&=(i-1)(i+9)\\
i^2+9i-442&=0\\
(i-17)(i+26)&=0\\
i&=17 \quad \color{lightgray}{(i>0)}\qquad\blacksquare\end{align}$$







*The general expression for the last $j$ terms in an AP (derived using the method outlined above) is given by
$$\begin{align}
S'_j
&=\frac j2 [2a+(2n-j-1)d]\\
&=\left[a+(n-\frac12)d\right]j-\frac d2 j^2\\
&=\left(L+\frac d2\right)j-\frac d2j^2
\end{align}$$
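(A two-line numerical confirmation of $i=17$, added by me; it is not part of the original answer.)

    # The 25 terms are 11, 13, ..., 59; check that the sums before and after term 17 agree.
    terms = list(range(11, 60, 2))
    i = 17
    print(sum(terms[:i - 1]), sum(terms[i:]))   # both print 416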


How to prove Cauchy-Schwarz Inequality in $R^3$?




I am having trouble proving this inequality in $R^3$. It makes sense in $R^2$ for the most part. Can anyone at least give me a starting point to try? I am lost on this; thanks in advance.


Answer



You know that, for any $x,y$, we have that



$$(x-y)^2\geq 0$$



Thus



$$y^2+x^2\geq 2xy$$




Cauchy-Schwarz states that



$$x_1y_1+x_2y_2+x_3y_3\leq \sqrt{x_1^2+x_2^2+x_3^2}\sqrt{y_1^2+y_2^2+y_3^2}$$



Now, for each $i=1,2,3$, set



$$x=\frac{x_i}{\sqrt{x_1^2+x_2^2+x_3^2}}$$



$$y=\frac{y_i}{\sqrt{y_1^2+y_2^2+y_3^2}}$$




We get



$$\frac{y_1^2}{{y_1^2+y_2^2+y_3^2}}+\frac{x_1^2}{{x_1^2+x_2^2+x_3^2}}\geq 2\frac{x_1}{\sqrt{x_1^2+x_2^2+x_3^2}}\frac{y_1}{\sqrt{y_1^2+y_2^2+y_3^2}}$$



$$\frac{y_2^2}{{y_1^2+y_2^2+y_3^2}}+\frac{x_2^2}{{x_1^2+x_2^2+x_3^2}}\geq 2\frac{x_2}{\sqrt{x_1^2+x_2^2+x_3^2}}\frac{y_2}{\sqrt{y_1^2+y_2^2+y_3^2}}$$



$$\frac{y_3^2}{{y_1^2+y_2^2+y_3^2}}+\frac{x_3^2}{{x_1^2+x_2^2+x_3^2}}\geq 2\frac{x_3}{\sqrt{x_1^2+x_2^2+x_3^2}}\frac{y_3}{\sqrt{y_1^2+y_2^2+y_3^2}}$$



Summing all these up, we get




$$\frac{y_1^2+y_2^2+y_3^2}{{y_1^2+y_2^2+y_3^2}}+\frac{x_1^2+x_2^2+x_3^2}{{x_1^2+x_2^2+x_3^2}}\geq 2\frac{x_1y_1+x_2y_2+x_3y_3}{\sqrt{y_1^2+y_2^2+y_3^2}\sqrt{x_1^2+x_2^2+x_3^2}}$$



$$\sqrt{y_1^2+y_2^2+y_3^2}\sqrt{x_1^2+x_2^2+x_3^2}\geq {x_1y_1+x_2y_2+x_3y_3}$$



This works for $\mathbb R^n$. We sum up through $i=1,\dots,n$ and set



$$y=\frac{y_i}{\sqrt{\sum y_i^2}}$$



$$x=\frac{x_i}{\sqrt{\sum x_i^2}}$$




Note this stems from the most fundamental inequality $x^2\geq 0$.


elementary set theory - Cardinality of the power set $\mathcal P\left(S\right)$, where $S$ is a set of $15$ elements?




What is the cardinality of the power set $\mathcal P\left(S\right)$ where $S$ is a set of $15$ elements?



I think the power set is a set of all the subsets of a given set or $2^n$. So would the cardinality of this set be $2^{15}$ or $32,768$?


Answer



Yes, the cardinality of the power set $\mathcal P(S)$ of a set $S$ is given by $2^n$, and so in your case, by $2^{15}$.



Note: There is a difference between a set, and its cardinality. The power set $\mathcal P(S)$ itself is the set of all subsets of $S$, whereas $2^n$ is the number of these sets (which are the elements) in $\mathcal P(S).$


exponentiation - The binomial formula and the value of 0^0




Here is the text from Knuth's The Art of computer programming, 1.2.6 F formula 14:



[Image of Knuth's equation (14), the binomial theorem, not reproduced here.]



Knuth doesn't give the proof of the statement. So, I tried to write it myself.



To make binomial formula equal to $0^0$, it must satisfy the following conditions:



$$
\left\{
\begin{aligned}
x = -y\\
r = 0
\end{aligned}
\right.
$$



By definition:



$$
{n\choose k}=\frac{n!}{k!(n-k)!}
$$



If $k < 0$ or $k > n$, the coefficient is equal to 0 (provided that n is a nonnegative integer) - 1.2.6 B.



and if $r = 0$, we have:



$
{0\choose k}
$




which is non-zero only when k = 0:



$
{0\choose 0} = \frac{0!}{0!(0-0)!} = \frac{1}{1} = 1
$



So, putting our conditions into the formula, we get:



$$
(x + (-x))^0 = {0\choose 0} x^0(-x)^0 = 1\cdot1\cdot1 = 1
$$



Therefore, $0^0 = 1$



Is my proof correct?



Also this page: http://mathworld.wolfram.com/Power.html says, that $0^0$ itself is undefined, although defining $0^0=1$ allows some formulas to be expressed simply (Knuth 1992; Knuth 1997, p. 57).



But he is not defining it to be equal to 1, he is deducing this fact from binomial theorem. Plus Knuth doesn't say on page 57 (which is provided), what formulas can be expressed simply. Is the statement about Knuth on mathworld not fully correct?



Answer



I think you do not interpret the text as it was intended. What Knuth is saying is that the binomial formula as stated cannot be valid unless one defines $0^0=1$. He says so more clearly in his 1992 article Two notes on notation (page 6, equation (1.18)):




Anybody who wants the binomial theorem $$(x + y)^n =\sum_{k=0}^n\binom nk x^ky^{n−k}$$ to hold for at least one nonnegative integer $n$ must believe that $0^0 = 1$, for we can plug in $x = 0$ and $y = 1$ to get $1$ on the left and $0^0$ on the right.




Indeed one has for instance $1=(0+1)^2=\binom20.0^0.1^2+\binom21.0^1.1^1+\binom22.0^2.1^0$, and the right hand side reduces to $0^0$ because the last two terms vanish because of the non-controversial evaluations $0^1=0^2=0$.



So in particular, this does not just involve the case of the binomial theorem for exponent$~0$; the point is that exponents$~0$ occur in the right hand side for every instance of the binomial formula.




Note that one usually does not encounter this case explicitly because of our habit of contracting the term $\binom n0x^0y^n$ to $y^n$ whenever we write explicit instances of the binomial formula. Your citation illustrates this, as Knuth writes terms $x^4$ and $y^4$ in his explicit instance of the binomial formula for $r=4$. (It also illustrates another common peculiarity: in explicit instances of the binomial formula we tend to take the summation index decreasing from $n$ to $0$, or what amounts to the same use a variant of the formula in which the exponents of $x$ and $y$ are interchanged.)



Indeed, the most important reason we are not confronted with instances of $0^0$ (and therefore with the need to have it well defined) all the time is, that we have learned the habit, similar to that of suppressing explicit terms$~0$ in a summation or factors$~1$ in a product, of immediately replacing any expression $x^0$ by$~1$ (or simply omitting it if occurring in a product). I consider this a nice paradox of human psychology:




The fact that people can maintain that $0^0$ should be left undefined depends on the circumstance that instances of the expression almost never occur when doing mathematics. This state of affairs is a consequence of the habit of systematically suppressing multiplicative factors of the form$~x^0$ when writing formulas. This notwithstanding the fact that the equation $x^0=1$ implicitly applied during this suppression can only be justified if $x^0$ is always defined, in particular for $x=0$.



Sunday 19 November 2017

elementary set theory - Injective and Surjective Functions




Let $f$ and $g$ be functions such that $f\colon A\to B$ and $g\colon B\to C$. Prove or disprove the following



a) If $g\circ f$ is injective, then $g$ is injective





Here's my proof that this is true.
Let $A = \{4,5\}$, $B = \{3,9\}$, $C = \{1,2\}$
$f(4) = 3$; $f(5) = 9$
$g\circ f(4) = 1$ and $g\circ f(5) = 2$
Since no 2 elements map to the same element in the range, $g\circ f$ and $g$ are both injective.




b) If $f$ and $g$ are surjective, then $g\circ f$ is surjective.





Here's my proof that this is true.
Let $A = \{1,2\}$, $B = \{4\}$, $C = \{5\}$,
$f(1) = 4$; $f(2) = 4$
$g\circ f(1) = 5$ and $g\circ f(2) = 5$.
Since no elements are left unmapped, $g\circ f$, $f$ and $g$ are all surjective.



Are these proofs valid?



Edit:

Ok so I revised my proof for a. I will disprove it using a counterexample.
Let A = {4}, B = {3,9}, C={1}.
g∘f(4) = 1 and g(3) = 1 and g(9) = 1
g∘f is injective but g is not injective because 3 and 9 both map to 1.
Is that a valid disproof?



Now I'm just having some trouble disproving or proving b. If anyone could lend some tips.
Also this is not homework. Just reviewing for an exam.


Answer



The idea of a mathematical proof is that without dependence of a specific input, if certain properties are true, then the conclusion is also true.




In this case, if the composition is injective then it has to be that the "outer" function is injective, and that if the functions are surjective then their composition is as such.



Showing a specific case is a valid method for disproving a claim, as it shows that at a certain time the properties hold but the conclusion is false.



For example, consider $A=\{0\}$ and $B=\{1,2\}$ and $C=\{0\}$ and $f(0)=1$ and $g(1)=0, g(2)=0$.



The composition $g\circ f$ is a function from $A$ to $C$, since $A$ is a singleton every function is injective. However $g$ is clearly not injective.



This is a counterexample to the claim, thus disproving it.




The proof for the second one, if both functions are surjective then so is their composition, let us check this.



Let $A,B,C$ any sets, and $f,g$ functions as required. To show that $g\circ f$ is surjective we want to show that every element of $C$ is in the range of $g\circ f$. Assume $c\in C$ then there is some element $b\in B$ such that $g(b)=c$. Since $f$ is surjective we have some $a\in A$ such that $f(a)=b$. It follows that $g\circ f (a) = c$ and therefore the composition of surjective functions is surjective.



Now, consider this proof. There was nothing to limit ourself. I did not require anything from $A,B,C$ other than them being sets, and I did not require anything from $f,g$ other than them being functions from/onto the required set in the premise of the claim. Furthermore, I did not check the surjective-ness of the composition by choosing a certain element. I picked and arbitrary element, from a set which was also arbitrary. Using this sort of argument assures us that the property is not dependent on any of the characteristics of the set (for example, some things are only true on finite, or infinite, or other certain sets). This allows us to have a very general theorem. Whenever we have three sets, and two functions which are surjective between them, then the composition is also surjective!


summation - Prove that $1^3 + 2^3 + ... + n^3 = (1+ 2 + ... + n)^2$




This is what I've been able to do:



Base case: $n = 1$




$L.H.S: 1^3 = 1$



$R.H.S: (1)^2 = 1$



Therefore it's true for $n = 1$.



I.H.: Assume that, for some $k \in \Bbb N$, $1^3 + 2^3 + ... + k^3 = (1 + 2 +...+ k)^2$.



Want to show that $1^3 + 2^3 + ... + (k+1)^3 = (1 + 2 +...+ (k+1))^2$




$1^3 + 2^3 + ... + (k+1)^3$



$ = 1^3 + 2^3 + ... + k^3 + (k+1)^3$



$ = (1+2+...+k)^2 + (k+1)^3$ by I.H.



Annnnd I'm stuck. Not sure how to proceed from here on.


Answer



HINT: You want that last expression to turn out to be $\big(1+2+\ldots+k+(k+1)\big)^2$, so you want $(k+1)^3$ to be equal to the difference




$$\big(1+2+\ldots+k+(k+1)\big)^2-(1+2+\ldots+k)^2\;.$$



That’s a difference of two squares, so you can factor it as



$$(k+1)\Big(2(1+2+\ldots+k)+(k+1)\Big)\;.\tag{1}$$



To show that $(1)$ is just a fancy way of writing $(k+1)^3$, you need to show that



$$2(1+2+\ldots+k)+(k+1)=(k+1)^2\;.$$




Do you know a simpler expression for $1+2+\ldots+k$?



(Once you get the computational details worked out, you can arrange them more neatly than this; I wrote this specifically to suggest a way to proceed from where you got stuck.)


Saturday 18 November 2017

linear algebra - connection between determinants



Suppose A is an nxn matrix with real entries and R is its row reduced echelon form. Using information on elementary matrices, explain the connection between det(A) and det(R). Note: you may use the fact that if M,N are two square matrices of the same size then det(MN)= det(M)det(N).



The only thing that comes to my mind is that $A\cdot R=A^{-1}$, but that doesn't have anything to do with the determinant. Or that the sum of the diagonal within the row reduced form is the determinant of $A$, and if any elementary operation happens within $A$ it is also done in $R$, which would change the sum of the diagonal. Can someone point me in the right direction?


Answer



Just use the fact that when you row reduce a matrix $A$ you can write $R = E_k E_{k-1} \cdots E_{2}E_{1}A$ where the $E_i$ are the elementary matrices. Then you have; $$Det(R) = Det(E_k) \cdots Det(E_1) \cdot Det(A)$$


Friday 17 November 2017

sequences and series - prove $\lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty$










I have this sequence with $b>1$ and $k$ a natural number, which diverges:
$$\lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty$$
I need to prove this with what I have learnt so far from my textbook. My simple first step is this:




Since $n^2\leq2^n$ for $n>3$, I said $b^n\geq n^k$, so it diverges. Is this right?



I am asking here not just to get the right answer, but to learn more wonderful steps and properties.


Answer



$$\lim_{n \rightarrow \infty} \frac{b^n}{n^k}=\infty$$



You can use the root test, too: $$\lim_{ n\to \infty}\sqrt[\large n]{\frac{b^n}{n^k}} = b>1$$



Therefore, the limit diverges.







The root test takes the $\lim$ of the $n$-th root of the term: $$\lim_{n \to \infty} \sqrt[\large n]{|a_n|} = \alpha.$$



If $\alpha < 1$ the sum/limit converges.



If $\alpha > 1$ the sum/limit diverges.



If $\alpha = 1$, the root test is inconclusive.



Thursday 16 November 2017

algebra precalculus - Why $\sqrt{-1 \times {-1}} \neq \sqrt{-1}^2$?



I know there must be something unmathematical in the following but I don't know where it is:



\begin{align}
\sqrt{-1} &= i \\ \\
\frac1{\sqrt{-1}} &= \frac1i \\ \\
\frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\ \\
\sqrt{\frac1{-1}} &= \frac1i \\ \\

\sqrt{\frac{-1}1} &= \frac1i \\ \\
\sqrt{-1} &= \frac1i \\ \\
i &= \frac1i \\ \\
i^2 &= 1 \\ \\
-1 &= 1 \quad !!!
\end{align}


Answer



Between your third and fourth lines, you use $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$. This is only (guaranteed to be) true when $a\ge 0$ and $b>0$.



edit: As pointed out in the comments, what I meant was that the identity $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$ has domain $a\ge 0$ and $b>0$. Outside that domain, applying the identity is inappropriate, whether or not it "works."




In general (and this is the crux of most "fake" proofs involving square roots of negative numbers), $\sqrt{x}$ where $x$ is a negative real number ($x<0$) must first be rewritten as $i\sqrt{|x|}$ before any other algebraic manipulations can be applied (because the identities relating to manipulation of square roots [perhaps exponentiation with non-integer exponents in general] require nonnegative numbers).



This similar question, focused on $-1=i^2=(\sqrt{-1})^2=\sqrt{-1}\sqrt{-1}\overset{!}{=}\sqrt{-1\cdot-1}=\sqrt{1}=1$, is using the similar identity $\sqrt{a}\sqrt{b}=\sqrt{ab}$, which has domain $a\ge 0$ and $b\ge 0$, so applying it when $a=b=-1$ is invalid.


real analysis - An alternative way to find the sum of this series?

$$\frac{4}{20}+\frac{4\cdot 7}{20\cdot 30}+\frac{4\cdot 7\cdot 10}{20\cdot 30\cdot 40}+\cdots$$



Now I have tried to solve this in a usual way, first find the nth term $t_n$.



The sum of the first $n$ terms is
$$S_n = \frac{1}{10}\Bigl(\frac{1+3}{2}\Bigr) + \frac{1}{10^2}\Bigl(\frac{1+3}{2}\Bigr)\Bigl(\frac{1+6}{3}\Bigr) + \cdots + \frac{1}{10^n}\Bigl(\frac{1+3}{2}\Bigr)\Bigl(\frac{1+6}{3}\Bigr)\cdots\Bigl(\frac{1+3n}{n+1}\Bigr),$$
so the $n$th term is
$$t_n=\frac{1}{10^n}\prod_{r=1}^{n}\Bigl(1+\frac{2r}{r+1}\Bigr)=\prod_{r=1}^{n}\Bigl(\frac{3}{10}-\frac{1}{5(r+1)}\Bigr);$$
thus $t_n=\bigl(x-\frac{a}{2}\bigr)\bigl(x-\frac{a}{3}\bigr)\cdots\bigl(x-\frac{a}{n+1}\bigr)$, where $x=\frac{3}{10}$ and $a=\frac{1}{5}$.



Now to calculate $S_n$, I have to find the product $t_n$, and then take sum over it. But this seems to be a very tedious job. Is there any elegant method(may be using the expansions of any analytic functions) to do this?

real analysis - How can you prove that a function has no closed form integral?



I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:




  • addition/subtraction


  • multiplication/division

  • raising to powers and roots

  • trigonometric functions

  • exponential functions

  • logarithmic functions



, which when differentiated gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.



What sort of techniques are used to prove statements like this? What is this branch of mathematics called?







Merged with "How to prove that some functions don't have a primitive" by Ismael:



Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in term of other simple functions.



I wonder how we can prove that kind of assertion?


Answer



It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, $g$ non-constant, the antiderivative




$$\int f(x)\exp(g(x)) \, \mathrm dx$$



can be expressed in terms of elementary functions if and only if there exists some rational function $h$ such that it is a solution to the differential equation:



$$f = h' + hg'$$



$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.
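A sketch of how the criterion rules this out for $e^{x^2}$ (here $f=1$, $g=x^2$, so one would need a rational $h$ with $1 = h' + 2xh$): if $h$ had a pole of order $k$ at some point, then $h'$ would have a pole of order $k+1$ there, which the term $2xh$ (pole of order at most $k$) could not cancel, so the left-hand side could not equal the pole-free function $1$; hence $h$ would have to be a polynomial. But for a nonzero polynomial $h$, the term $2xh$ has degree $\deg h + 1$, strictly larger than the degrees of $h'$ and of the constant $1$, so equality is again impossible, and $h=0$ clearly fails too. No such rational $h$ exists, so $\int e^{x^2}\,\mathrm dx$ is not elementary.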



I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: http://www.sci.ccny.cuny.edu/~ksda/PostedPapers/liouv06.pdf




Liouville's original paper:




Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.




Michael Spivak's book on Calculus also has a section with a discussion of this.


Wednesday 15 November 2017

Solution of functional equation $f(x/f(x)) = 1/f(x)$?

I've been trying to add math rigor to a solution of the functional equation in [1], eq. (22). It is:
$$
f\left(\frac{x}{f(x)}\right) = \frac{1}{f(x)}\,,
$$
where you know that $f(0)=1$ and $f(-x) = f(x)$.



I've been trying to fill in the missing steps. Let's use a substitution $g(x) = \frac{x}{f(x)}$. So we rewrite the functional equation as
$$

x = \frac{\frac{x}{f(x)}}{f\left(\frac{x}{f(x)}\right)}
$$
and get
$$
x = g(g(x))\,.
$$
Then we calculate $g'(0)$ as follows:
$$
g'(0) = \left.\left(\frac{x}{f(x)} \right)'\right|_{x=0}
= \left.\frac{f(x)-xf'(x)}{f^2(x)}\right|_{x=0}

= \frac{f(0)}{f^2(0)}
= 1\,.
$$
Also we can use that $g(x)$ is odd
$$
g(-x) = \frac{-x}{f(-x)} = -\frac{x}{f(x)} = -g(x)\,.
$$
Whenever $g(g(x)) = x$, I have found that such a $g$ is called an involution. That equation alone can have many solutions, but using $g(-x)=-g(x)$ and $g'(0)=1$, there should be a way to prove that $g(x) = x$, from which $f(x)=1$ is the only solution.



The article [1] just says that since $g(x) = g^{-1}(x)$, the graph of $g(x)$ and its inverse must be symmetric about the line $y=x$, and since the tangent at $x=0$ is equal to this same line, $g(x)=x$ must be the only solution. They do mention that this is not a completely rigorous proof.




You can assume that $f(x)$ is differentiable. It'd be nice if it can be proven only for continuous functions $f(x)$, but any proof at first would be a good start, even with stronger assumptions.



Update: another idea is to use the fact that if $a$ and $b$ are two points in the domain of $g$ with $g(a)=g(b)$, then $g(g(a)) = g(g(b))$, so $a = b$, which proves that $g$ is one-to-one.
Given that $g$ is continuous and injective, it is strictly increasing or strictly decreasing [2]. Suppose it is increasing and let $y=g(x)$. If $y < x$, then $g(y) < g(x)$, i.e. $g(g(x)) < y$, so $x < y$, a contradiction; similarly $y > x$ leads to a contradiction. So we must have $y=x$, i.e. $g(x)=x$. If $g$ is decreasing, then we set $h(x)=-g(x)$, obtain $h(x)=x$ and so $g(x)=-x$. Unless I made a mistake, we get $g(x)=\pm x$, and using $g'(0)=1$ we get $g(x)=x$ as the only solution. But I don't feel very good about this proof yet.
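One possible way to rule out the decreasing case more directly, using only $g(0)=0/f(0)=0$ and the assumed differentiability: if $g$ were strictly decreasing, then $g(x) < g(0) = 0$ for every $x>0$, so
$$
\frac{g(x)-g(0)}{x-0} < 0 \quad \text{for all } x>0,
$$
and letting $x\to 0^+$ gives $g'(0)\le 0$, contradicting $g'(0)=1$. So $g$ must be strictly increasing, and the increasing-case argument then forces $g(x)=x$, i.e. $f(x)=1$.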



[1] Levy-Leblond, J.-M. (1976). One more derivation of the Lorentz transformation. American Journal of Physics, 44(3), 271–277.



[2] A continuous, injective function $f: \mathbb{R} \to \mathbb{R}$ is either strictly increasing or strictly decreasing.

analysis - $\phi(t):= \sqrt{2t |\log | \log t||} \; (t>0, t \neq 1)$ is equal to $t \mapsto \sqrt{2t \log \log 1/t}$ for small $t>0$




I am looking at the function $$\phi(t):= \sqrt{2t |\log | \log t||} \; (t>0, t \neq 1).$$



How is this function equal to $t \mapsto \sqrt{2t \log \log t}$ for large $t$ and equal to $t \mapsto \sqrt{2t \log \log 1/t}$ for small $t$?
I can see that if $t>e$, then $|\log |\log t|| = \log \log t$ so the first case holds. But I cannot see why for small $t>0$, we would have $|\log | \log t|| = \log \log 1/t$. I would greatly appreciate any help.


Answer



Note that $\log1/t=\log t^{-1}=-\log t$, so $\log\log1/t=\log(-\log t)$. Now if $0<t< 1$, then $\log t< 0$, so $|\log t|=-\log t$. Therefore $\log|\log t|=\log(-\log t)=\log\log1/t$.


real analysis - Strange functional equation: $f(x)+f(\cos(x))=x$

BACKGROUND: A while ago, I became obsessed for a period of time with the following functional equation:
$$f(x)+f(\cos(x))=x$$

I am only considering the unique real analytic solution to this functional equation (it is indeed unique... all derivatives of $f$ at $w\approx 0.739$, the Dottie Number, can be determined by differentiating this equation repeatedly). Here is a graph of the function:



(graph omitted)



I had already given up a long time ago when I accidentally came across my notes on this problem today, and I figured I would put it on MSE to see if anyone could find anything interesting that I missed.



WHAT I FOUND: I am almost absolutely certain that there is no nice closed-form for this function, so I resorted to finding special values and other neater functional equations. So far, I have found the following special values of $f$ and its derivatives:
$$f(w)=w/2$$
$$f'(w)=\frac{1}{1-\sqrt{1-w^2}}$$
$$f''(w)=\frac{1}{2-w^2}\frac{w}{1-\sqrt{1-w^2}}$$

$$f'(0)=1$$
$$f'(\pi/2)=2$$
$$f'(-\pi/2)=0$$
Here are some functional equations I have found for $f$:
$$f(x+2\pi)-f(x)=2\pi$$
$$f(x+\pi)-f(x)=2\cos(x)+\pi$$
$$f(x)-f(-x)=2x$$
And here is a series representation for $f$, where $\cos^{\circ n}$ represents the cosine function composed $n$ times:
$$f(x)=\frac{w}{2}+\sum_{n=0}^\infty \big(\cos^{\circ 2n}(x)-\cos^{\circ 2n+1}(x)\big)$$
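As a sanity check, the series representation is easy to test numerically. A minimal Python sketch (truncating the sum; the names w, f, and the depth are just for illustration):

import math

w = 0.0
for _ in range(200):          # Dottie number: fixed point of cos, found by iteration
    w = math.cos(w)

def f(x, depth=200):
    # truncation of f(x) = w/2 + sum_{n>=0} (cos^(2n)(x) - cos^(2n+1)(x))
    total, c = w / 2, x
    for _ in range(depth):
        c_next = math.cos(c)
        total += c - c_next   # cos^(2n)(x) - cos^(2n+1)(x)
        c = math.cos(c_next)  # advance two compositions
    return total

for x in (0.0, 1.0, 2.5, -3.0):
    print(f(x) + f(math.cos(x)) - x)   # ~0: the functional equation holds
print(f(w) - w / 2)                     # ~0: the special value f(w) = w/2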




QUESTIONS: This is a very open-ended question. I have a few unproven conjectures or particular unanswered questions about this function, but I really just want to see what interesting properties (especially special values, zeroes, maxima or minima, inflection points, and functional or differential equations) people can find.



An example of one of my conjectures: by looking at graphs, I have conjectured that $f(x+a)-f(x)$ is a sinusoid or a sum of sinusoids for all $a$. It is easy to show that this must be periodic, but I would like to prove or disprove that it can be expressed as a sum of sinusoids. In particular, can we find an expression for $f(x+\pi/2)-f(x)$ as a sum of sinusoids, possibly involving other mathematical constants like $w$? Here is a graph of $f(x+\pi/2)-f(x)$:



(graph omitted)



I appreciate any contributions!

Tuesday 14 November 2017

Calculating last digit of a number using binomial theorem


How to calculate the last digit of a number like $$\large 3^{4^{5}}$$ using the binomial theorem?




P.S.: I know how to solve it using modular arithmetic.



I saw this one, but it's not of much use in this case.
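For what it's worth, here is one route the binomial theorem suggests (a sketch, possibly not the intended one): since $4^5=1024$,
$$3^{4^5}=3^{1024}=\left(3^4\right)^{256}=81^{256}=(80+1)^{256}=\sum_{k=0}^{256}\binom{256}{k}80^k,$$
and every term with $k\ge 1$ is divisible by $80$, hence by $10$. So modulo $10$ only the $k=0$ term survives, and the last digit is $1$.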

Monday 13 November 2017

abstract algebra - Find the number of irreducible polynomials in any given degree




For any prime $p$ find the number of monic irreducible polynomials of degree $2$ over
$\mathbb Z_p$. Do the same problem for degree $3$. Generalize the above statement to higher degree polynomials as much as you can.




My idea for degree 2:

Assume the polynomial is reducible; then we can write it in the form $(x-m)(x-n)$, which expands to $x^2-(m+n)x+mn$. We can use a table to capture all possible values of $m+n$ and $mn$. For example, when $p=3$, the $m+n$ table is
$$\begin{pmatrix}
0&1&2\\
1&2&0\\
2&0&1
\end{pmatrix}.$$
Similarly, we can tabulate $mn$ and count the distinct coefficient pairs that occur; any monic quadratic whose coefficients do not arise from expanding some $(x-m)(x-n)$ must be irreducible.


Answer



There are $p^2$ total monic polynomials of degree 2 (where the $p^2$ counts the possible linear and constant term combinations). There are $p+\binom{p}{2}$ monic polynomials of degree 2 that are reducible (where the $p$ counts the ones with a repeated root, and the $\binom{p}{2}$ counts the ones with two distinct roots). So there are $$p^2-\left(p+\binom{p}{2}\right)=\frac{p^2-p}{2}$$ irreducible quadratics. Can you extend this approach to cubics?
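A brute-force check of this count is easy for small primes (a monic quadratic over $\mathbb Z_p$ is irreducible exactly when it has no root). A minimal Python sketch, with an illustrative function name:

def irreducible_monic_quadratics(p):
    # count monic x^2 + b*x + c over Z_p with no root in Z_p
    count = 0
    for b in range(p):
        for c in range(p):
            if all((x * x + b * x + c) % p != 0 for x in range(p)):
                count += 1
    return count

for p in (2, 3, 5, 7, 11):
    print(p, irreducible_monic_quadratics(p), (p * p - p) // 2)
# the last two columns agree, matching the formula (p^2 - p)/2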


real analysis - Show that if $f(x+y)=g(x)+g(y) $ a.e. then functions are linear



Let $f(x), g(x), h(x)$ be measurable functions such that



\begin{align}
f(x+y)=g(x)+h(y),
\end{align}
almost everywhere $(x,y) \in \mathbb{R}^2$.



Can we show that
\begin{align}
g(x)&=ax+b_1,\\
h(x)&=ax+b_2,
\end{align}
for almost every $x \in \mathbb{R}$, for some $a,b_1,b_2$.




My attempt:
Which I think is problematic.



Since, $f(x+y)=g(x)+h(y)$, we have that
\begin{align}
f(x+0)=g(x)+h(0)\\
f(0+y)=g(0)+h(y)
\end{align}




Now adding the two equations and use our identity we get
\begin{align}
f(x+0)+f(y+0)=g(x)+h(y)+h(0)+g(0)= f(x+y)+f(0).
\end{align}



So, we have that
\begin{align}
f(x)+f(y)=f(x+y)+f(0)
\end{align}
Next, define a function $\phi(x)=f(x)-f(0)$ and we get

\begin{align}
\phi(x)+\phi(y)=\phi(x+y).
\end{align}



The above is known as Cauchy's functional equation, and if $\phi(x)$ is measurable (which it is, since $f$ is measurable) then it must be linear (i.e., $\phi(x)=ax$).



Going backward this shows that $g(x)$ and $h(x)$ are given by \begin{align}
g(x)&=ax+b_1,\\
h(x)&=ax+b_2,
\end{align}

for some $a,b_1,b_2$.



Questions:



1) Is this proof correct?



2) We are assuming that the equation $f(x+y)=g(x)+h(y)$ holds almost everywhere, not everywhere. Is this a problem? For example, in the very first step of the proof
\begin{align}
f(x+0)=g(x)+h(0)\\
f(0+y)=g(0)+h(y)

\end{align}



can we do this? Is there a problem with choosing $(x,0)$ and $(0,y)$? I think there is a possibility that $(x,0)$ and $(0,y)$ might belong to a set of measure zero on which the equations don't hold.



3) Another question, very similar to 2): since $\phi(x)+\phi(y)=\phi(x+y)$ holds only a.e., can we still apply the result about Cauchy's functional equation?


Answer



For 2), you're right, this could be a problem. For example, if $f(x) = x$ for $x \neq 0$ and $f(0) = 1$, and $g = h = f$, then the functional equation holds whenever $x, y, x+y$ are all nonzero (so a.e.), but the two equations you get from setting $x = 0$ and $y = 0$ are never true. Since the set of $(x, y)$ such that the equation fails has measure zero, there are some fixed $a, b \in \mathbb{R}$ such that
$$f(x+a) = g(x) + h(a)$$
for a.e. $x$ and
$$f(b+y) = g(b) + h(y)$$

for a.e. $y$.



This allows you to write $f(x + a) + f(y + b) = f(x+y) + h(a) + g(b)$ for a.e. (x, y) (this only fails when either of the two previous equations fail or where the original functional equation fails). From here we can replace $x$ with $x+b$ and $y$ with $y+a$ to get



$$f(x+a+b) + f(y+a+b) = f(x+y+a+b) + h(a) + g(b)$$



for a.e. (x, y), so if we define $\phi(x) = f(x + a + b) - h(a) - g(b)$, then



$$\phi(x) + \phi(y) = \phi(x+y)$$




for a.e. (x, y).



Now for 3), De Bruijn has shown that we must have $\phi = \phi_0$ a.e., where $\phi_0$ is a solution to Cauchy's functional equation (for all $x, y$). But since $f$ is measurable, $\phi$ and $\phi_0$ must be measurable, and thus $\phi_0$ must be linear. From here it follows that $f, g, h$ are linear (or rather affine) almost everywhere.


Sunday 12 November 2017

elementary number theory - Prove if 2 divides $a^2$, then 2 divides $a$.

If 2 divides $a^2$, then 2 divides $a$.



I know that 2 divides $a^2$ means there is some integer $n$ such that $a^2 = 2n$,
and similarly, 2 divides $a$ means there is some integer $m$ such that $a = 2m$



I thought I could rewrite $a^2 = 2n$ as $a = 2(n/a)$, but I don't think that helps, because I'm not sure $n/a$ is an integer.



Thank you for any help!
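In case it helps, one standard route is the contrapositive: if $2$ does not divide $a$, then $a$ is odd, say $a = 2m+1$, and
$$a^2=(2m+1)^2=4m^2+4m+1=2\left(2m^2+2m\right)+1$$
is odd as well, so $2$ does not divide $a^2$. Equivalently, if $2$ divides $a^2$ then $a$ cannot be odd, so $2$ divides $a$.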

algebra precalculus - Why is the quadratic equation $ax^2+bx+c=0$?



Shouldn't it be $y=ax^2+bx+c$? According to Wikipedia, it is $ax^2+bx+c=0$.



I guess that they are both equations, right?


Answer



They are both equations that involve quadratics, but the usual meaning of "quadratic equation" is that it has only one variable, while $y=ax^2+bx+c$ has two variables ($x$ and $y$).



modular arithmetic - Fast modulo operation











I have a number of the form $p^n + p$, where $p$ is a prime number and $n$ can be any large number, say $10^{12}$.




What is the generic algorithm to compute $(p^n + p) \pmod k$, where $k$ is a huge number say $k=1000000007$.



Thanks!


Answer



As you already know, $(a+b)\bmod k = ((a \bmod k) + (b \bmod k)) \bmod k$.
So addition here is not a problem.



The real question seems to be about $p^n \bmod k$ where $n$ is large. For that, have a look at Modular Exponentiation on Wikipedia.
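A sketch of how this looks in code (Python shown only for concreteness; the built-in pow(base, exp, mod) already does modular exponentiation, and the hand-written loop below is the usual square-and-multiply method; the name power_mod is just for illustration):

def power_mod(base, exp, mod):
    # base**exp % mod with O(log exp) multiplications
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                       # current bit of the exponent is 1
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result

p, n, k = 2, 10**12, 1000000007
print((power_mod(p, n, k) + p) % k)   # (p^n + p) mod k
print((pow(p, n, k) + p) % k)         # built-in check, same value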


functional analysis - $f_n \rightarrow 0$ in $L^1$ $\implies \sqrt{f_n} \rightarrow 0$ also?



Let $(X,\Sigma,\mu)$ be a finite measure space, and let $\{f_n : n \in \mathbb{N} \}$ be a sequence of non-negative measurable functions converging in the $L^1$ sense to the zero function. Show that the sequence $\{\sqrt{f_n}:n \in \mathbb{N} \}$ also converges in the $L^1$ sense to the zero function.



So I have to somehow show that




$$
\lim_{n \to \infty}\int_X\lvert\sqrt{f_n(x)}\rvert\;\mathbb{d}\mu(x) = 0
$$



If I'm honest I don't really know where to start. I think it's an easy question, but I'm new to this stuff. Any help appreciated!


Answer



You have the right choice of $p = q = 2$. However, choose $u = 1$, $v = \sqrt{f_n}$, then



$$\int_X \left|\sqrt{f_n}\right| \ d\mu = \left\| 1\cdot\sqrt{f_n} \right\|_1 \ \leq \ \left\| 1 \right\|_2 \ \ \left\| \sqrt{f_n}\right\|_2 = \ ...$$
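One way to finish from here (a sketch, using that $\mu(X)$ is finite and $f_n \ge 0$):
$$\left\| 1 \right\|_2 = \mu(X)^{1/2}, \qquad \left\|\sqrt{f_n}\right\|_2 = \left(\int_X f_n \, d\mu\right)^{1/2} = \left\|f_n\right\|_1^{1/2},$$
so $\displaystyle\int_X \sqrt{f_n}\, d\mu \le \mu(X)^{1/2}\left\|f_n\right\|_1^{1/2} \to 0$.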


Saturday 11 November 2017

calculus - How to find $\lim\limits_{x\to0}\frac{e^x-1-x}{x^2}$ without using l'Hopital's rule nor any series expansion?

Is it possible to determine the limit



$$\lim_{x\to0}\frac{e^x-1-x}{x^2}$$



without using l'Hopital's rule nor any series expansion?




For example, suppose you are a student that has not studied derivative yet (and so not even Taylor formula and Taylor series).

sequences and series - Prove any $k\in\mathbb{N}$ can be created using sum of $a_{n}=2a_{n-1}-\left\lfloor \frac{a_{n-1}}{2}\right\rfloor ,a_{1}=1$



I need to prove that any integer can be created as a sum of elements of $a_{n}=2a_{n-1}-\left\lfloor \frac{a_{n-1}}{2}\right\rfloor ,\ a_{1}=1$ without repetition.



My attempt was just bad...



I've tried proving it by induction, but I'm not sure it's legitimate.




I've divided the proof into 2 cases:




  • $k=1$: it's obvious, since $a_1 = 1$.

  • $k\geq2$: by induction, let's prove that any $k\geq2$, $k\in\mathbb{N}$, can be created using the above-mentioned series without using $a_1$. For $k=2$ it works since $a_2=2$, and given that it's correct for some $k$, for $k+1$ we use the induction hypothesis for $k$ and then add $a_1=1$, since we did not use it. It's bad...



I've also tried looking at $a_{n+1}-a_n$ but it didn't help.



Two main questions:





  • Any clues to how would I prove it?

  • I've tried to understand how to find a general non-recursive formula for this series and couldn't( since it's not linear and not anything I've learned so far...). This part is actually pure curiosity and is not homework but it really is more interesting :)


Answer



It is more natural to use induction on $a_n$. Let the inductive hypothesis be that it is possible to represent the numbers $1\ldots\ a_n$ using $a_1 \ldots\ a_n$. This is obviously true for $n = 1$; assume it is also true for $n = k$. Then, take some $1 \leq\ x \leq\ a_{k+1}$. If $1 \leq\ x \leq\ a_k$ then $x$ has a representation by the inductive hypothesis and if $x = a_{k+1}$ then $x$ can obviously be represented. Otherwise, we will represent $x$ as $a_{k} + (x - a_{k})$. To ensure that it is valid to do this, we need only verify that $1 \leq\ x - a_{k} \lt\ a_{k}$. Note that the right bound is strict in order to enforce uniqueness. Taking $a_{k} \lt\ x \lt\ a_{k+1}$, we have



$a_{k} \lt\ x \lt\ 2a_{k}-\lfloor \frac{a_k}{2} \rfloor$




$0 \lt\ x - a_k \lt\ a_{k}-\lfloor \frac{a_k}{2}\rfloor \lt a_k$ (if $k > 1$)



$1 \leq\ x - a_k \lt a_k$



The result we wanted to show. Hence, any number can be so represented.
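The induction above amounts to a greedy algorithm: repeatedly subtract the largest term not exceeding what remains. A small Python sketch of that check (the function names are just for illustration):

def sequence_up_to(limit):
    # a_1 = 1, a_n = 2*a_{n-1} - floor(a_{n-1}/2)
    terms = [1]
    while terms[-1] < limit:
        a = terms[-1]
        terms.append(2 * a - a // 2)
    return terms

def greedy_representation(k, terms):
    # represent k as a sum of distinct terms, largest first
    used = []
    for a in reversed(terms):
        if a <= k:
            used.append(a)
            k -= a
    return used if k == 0 else None

terms = sequence_up_to(10**6)
print(terms[:10])   # [1, 2, 3, 5, 8, 12, 18, 27, 41, 62]
assert all(greedy_representation(k, terms) is not None for k in range(1, 10**5))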


Friday 10 November 2017

real analysis - Convergence of $\int_0^1 \frac{\sqrt{x-x^2}\ln(1-x)}{\sin{\pi x^2}} \mathrm{d}x$



I would like to prove the convergence of the Newton integral



$$\int_0^1 f(x)\mathrm{d}x =\int_0^1 \frac{\sqrt{x-x^2}\ln(1-x)}{\sin{\pi x^2}} \mathrm{d}x.$$



I split this into two integrals $\displaystyle\int_0^\epsilon f(x)\mathrm{d}x$ and $\displaystyle\int_\epsilon^1 f(x)\mathrm{d}x$. It is easy to show that the integral is convergent on $(0, \epsilon]$ by limit comparison with $\displaystyle\int_0^\epsilon \frac{\sqrt{x}x}{\pi x^2}\mathrm{d}x$.




But I cannot find anything to compare with around $x = 1$ on $[\epsilon, 1)$.


Answer



If you write $x = 1 - \delta$, you obtain



$$\int_0^{1-\varepsilon} \frac{\sqrt{(1-\delta)(1-(1-\delta))}\,\ln \delta}{\sin \bigl(\pi(1-\delta)^2\bigr)}\, d\delta = \int_0^{1-\varepsilon} \frac{\sqrt{\delta(1-\delta)}\,\ln \delta}{\sin \bigl(\pi(2\delta-\delta^2)\bigr)}\,d\delta,$$



and near $\delta = 0$ the integrand of that can be compared to the harmless

$$\frac{\sqrt{\delta}\,\ln \delta}{2\pi \delta}.$$


number theory - What is the importance of the Collatz conjecture?



I have been fascinated by the Collatz problem since I first heard about it in high school.




Take any natural number $n$. If $n$ is even, divide it by $2$ to get $n / 2$, if $n$ is odd multiply it by $3$ and add $1$ to obtain $3n + 1$. Repeat the process indefinitely. The conjecture is that no matter what number you start with, you will always eventually reach $1$. [...]



Paul Erdős said about the Collatz conjecture: "Mathematics is not yet ready for such problems." He offered $500 USD for its solution.





How important do you consider the answer to this question to be? Why?



Would you speculate on what might have possessed Paul Erdős to make such an offer?



EDIT: Is there any reason to think that a proof of the Collatz Conjecture would be complex (like the FLT) rather than simple (like PRIMES is in P)? And can this characterization of FLT vs. PRIMES is in P be made more specific than a bit-length comparison?


Answer



Most of the answers so far have been along the general lines of 'Why hard problems are important', rather than 'Why the Collatz conjecture is important'; I will try to address the latter.



I think the basic question being touched on is:





In what ways does the prime factorization of $a$ affect the prime factorization of $a+1$?




Of course, one can always multiply out the prime factorization, add one, and then factor again, but this throws away the information of the prime factorization of $a$. Note that this question is also meaningful in other UFDs, like $\mathbb{C}[x]$.



It seems very hard to come up with answers to this question that don't fall under the heading of 'immediate', such as distinct primes in each factorization. This seems to be in part because a small change in the prime factorization for $a$ (multiplication by a prime, say) can have a huge change in the prime factorization for $a+1$ (totally distinct prime support perhaps). Therefore, it is tempting to regard the act of adding 1 as an essentially-random shuffling of the prime factorization.



The most striking thing about the Collatz conjecture is that it seems to be making a deep statement about a subtle relation between the prime factorizations of $a$ and $a+1$. Note that the Collatz iteration consists of three steps, two of which are 'small' in terms of the prime factorization, and the other of which is adding one:





  • multiplying by 3 has a small effect on the factorization.

  • adding 1 has a (possibly) huge effect on the factorization.

  • factoring out a power of 2 has a small effect on the factorization (in that it doesn't change the other prime powers in the factorization).



So, the Collatz conjecture seems to say that there is some sort of abstract quantity like 'energy' which cannot be arbitrarily increased by adding 1. That is, no matter where you start, and no matter where this weird prime-shuffling action of adding 1 takes you, eventually the act of pulling out 2s takes enough energy out of the system that you reach 1. I think it is for reasons like this that mathematicians suspect that a solution of the Collatz conjecture will open new horizons and develop new and important techniques in number theory.


integration - Area for an ellipsoid

"Calculate the area for the rotation ellipsoid you get by rotating the ellipsoid $\frac{x^2}{2}+y^2 = 1$ around the x-axis."



I solved for $y$:



$$ y = \pm \sqrt{1-\frac{x^2}{2}} $$



Then I set $y = 0$ to get the integration limits, $\pm \sqrt{2}$.




So I've ended up with an integral that I'm not sure is correct, and even if it is, I can't solve it.



$$ 4\pi \int_{-\sqrt{2}}^{\sqrt{2}} \sqrt{1-\frac{x^2}{2}} \sqrt{1-\frac{x}{2\sqrt{1-\frac{x^2}{2}}}}dx$$
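As a numerical cross-check (not a fix of the integral above), the standard surface-of-revolution formula $2\pi\int_{-\sqrt2}^{\sqrt2} y\sqrt{1+(y')^2}\,dx$ with $y=\sqrt{1-x^2/2}$ simplifies nicely: since $y\,y' = -x/2$, the integrand is $\sqrt{y^2+(y\,y')^2}=\sqrt{1-x^2/4}$. A minimal Python sketch (standard library only; simpson here is a hand-rolled rule, not a library call):

import math

def integrand(x):
    # y*sqrt(1 + y'^2) for y = sqrt(1 - x^2/2) simplifies to sqrt(1 - x^2/4)
    return math.sqrt(1 - x * x / 4)

def simpson(f, a, b, n=10000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

area = 2 * math.pi * simpson(integrand, -math.sqrt(2), math.sqrt(2))
print(area)                        # ~16.1528
print(math.pi**2 + 2 * math.pi)    # ~16.1528, the value the numerics match

The agreement with $\pi^2+2\pi$ is what one would expect from the closed-form area of a prolate spheroid with semi-axes $\sqrt2$ and $1$.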

Thursday 9 November 2017

calculus - Intuitive understanding of the derivatives of $\sin x$ and $\cos x$



One of the first things ever taught in a differential calculus class:




  • The derivative of $\sin x$ is $\cos x$.

  • The derivative of $\cos x$ is $-\sin x$.



This leads to a rather neat (and convenient?) chain of derivatives:





sin(x)
cos(x)
-sin(x)
-cos(x)
sin(x)
...



An analysis of the shape of their graphs confirms some points; for example, when $\sin x$ is at a maximum, $\cos x$ is zero and moving downwards; when $\cos x$ is at a maximum, $\sin x$ is zero and moving upwards. But these "matching points" only work for multiples of $\pi/4$.



Let us move back towards the original definition(s) of sine and cosine:



At the most basic level, $\sin x$ is defined as -- for a right triangle with internal angle $x$ -- the length of the side opposite of the angle divided by the hypotenuse of the triangle.



To generalize this to the domain of all real numbers, $\sin x$ was then defined as the Y-coordinate of a point on the unit circle that is an angle $x$ from the positive X-axis.



The definition of $\cos x$ was then made the same way, but with adj/hyp and the X-coordinate, as we all know.




Is there anything about this basic definition that allows someone to look at these definitions, alone, and think, "Hey, the derivative of the sine function with respect to angle is the cosine function!"



That is, from the unit circle definition alone. Or, even more amazingly, the right triangle definition alone. Ignoring graphical analysis of their plot.



In essence, I am asking: intuitively, why is the derivative of the sine the cosine?


Answer



Perhaps the following diagram will provide insight:



(Non)Proof without Words: Derivatives of Sine and Cosine




The idea is to look at the sine and cosine curves as projections of a helix drawn on a cylinder. If you look at the cylinder itself as a curled planar square of side length $2\pi$, then the helix is a curled version of the square's diagonal. A tangent vector along the flat square's diagonal always lies at 45 degrees to the square's sides, say with length-"1" shadows in each direction; after smoothly curling the square into the cylinder, the tangent vector lies at 45 degrees to the cylinder's (z-)axis and the perpendicular (xy-)plane.



Projecting the helix into the zy- and zx-planes gives graphs of sine and cosine. Projecting the helix's tangent vector gives tangent vectors to those graphs. The "dz"s for these projected tangents are always 1 (the "vertical" shadow of the helix's tangent vector). To get at "dy" and "dx" ("v_x" and "v_y" in the diagram) we project down into the xy-plane where we see a circle, and yet another projected tangent vector.



Basic geometry tells us that a tangent to a circle is perpendicular to the radius at the point of tangency. In our circle, the point of tangency --and the radius vector to it-- is parameterized as "< cos, sin, 0 >". The perpendicular tangent line must therefore have a "negative-reciprocal" direction vector: "< -sin, cos, 0 >", which gives us our "dx" and "dy" for the helix tangent ... and the projected graph tangents as well, so that we may make the following conclusions:



The derivative of cosine --by its conceptual definition as "slope of the tangent line"-- is change-in-x-over-change-in-z = dx/dz = -sin/1 = -sin.



Likewise, the derivative of sine is dy/dz = cos/1 = cos.
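The same bookkeeping can be written in coordinates. Parameterize the helix as $\mathbf r(t)=(\cos t,\ \sin t,\ t)$ on the unit cylinder. As in the argument above, the shadow of its tangent vector in the $xy$-plane is tangent to the unit circle at $(\cos t,\sin t)$, hence perpendicular to the radius and proportional to $(-\sin t,\ \cos t)$; scaling so that the vertical ($z$) component is $1$ (the $45^\circ$ condition) gives the tangent direction
$$(-\sin t,\ \cos t,\ 1),$$
and projecting to the $zx$- and $zy$-planes gives the slopes
$$\frac{dx}{dz}=-\sin t,\qquad \frac{dy}{dz}=\cos t,$$
exactly the two derivative statements above.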




I like this approach because the conceptual "slope of tangent line" definition of the derivative is used throughout; there are no (obvious) appeals to digressive computational tricks involving trig identities and limits of difference quotients. I also like that the curious negative sign in the derivative of cosine traces back to an elementary property of circle geometry.



Of course, this approach doesn't constitute proof of the formulas. The process of curling the planar square into a cylinder and claiming that the tangent vector behaves as claimed actually assumes the computational machinery covered by the traditional limit arguments. Nevertheless, on an intuitive level, I think this argument explains the "why" of the derivatives quite beautifully. Then, knowing what the formulas are (or "should be") helps motivate the investigation of the computational tricks needed to provide a rigorous proof.



Here's a PDF with a variant of the above discussion (but the same image). Here's a Mathematica Demonstration that animates the various elements, including the square curling into the cylinder.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hopital's rule? I know when I use l'Hopital I easily get $$ \lim_{h\rightarrow 0}...