Thursday 30 June 2016

special functions - Bounds on geometric sum



Consider the sum $\sum_{x=1}^{\infty} \frac{\log{x}}{z^x}$. We can assume that $z>1$ (and is real), since the sum diverges at $z=1$. Mathematica gives this sum as



-Derivative[1, 0][PolyLog][0, 1/z]


Even after reading the documentation page for PolyLog, I don't understand how this function behaves, and I certainly don't know how the sum was derived.




Are there simple upper and lower bounds for this sum?



I also tried to compute $\int_{1}^{\infty} \frac{\log{x}}{z^x}\,dx$ in the hope that this would shed more light, but Mathematica gives



Gamma[0, Log[z]]/Log[z]


which I also didn't find helpful.


Answer




I assume you mean $\displaystyle \sum_{x=1}^\infty \dfrac{\ln(x)}{z^x}$.
Since $z > 1$ and $\ln(x)$ grows very slowly, the terms go to $0$ rapidly, so you
can get good bounds by taking a partial sum and bounds for the "tail".
For $x \ge n+1$ we have $\ln(n+1) \le \ln(x) \le \ln(n+1) + \dfrac{x-n-1}{n+1}$ so
$$ \sum_{x=1}^n \dfrac{\ln(x)}{z^x} + \dfrac{z \ln(n+1)}{z^{n+2}-z^{n+1}} \le \sum_{x=1}^\infty \dfrac{\ln(x)}{z^x} \le \sum_{x=1}^n \dfrac{\ln(x)}{z^x} + \dfrac{1+(z-1)(n+1)\ln(n+1)}{(n+1)(z-1)^2 z^n}$$
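As a sanity check, here is a small numerical sketch (plain Python; the sample values $z=2$, $n=5$ and the function name are my own choices) comparing a long partial sum against the two tail bounds above:

```python
import math

def bounds(z, n):
    # partial sum S_n plus the lower/upper tail estimates from the answer
    S = sum(math.log(x) / z**x for x in range(1, n + 1))
    lower = S + z * math.log(n + 1) / (z**(n + 2) - z**(n + 1))
    upper = S + (1 + (z - 1) * (n + 1) * math.log(n + 1)) / ((n + 1) * (z - 1)**2 * z**n)
    return lower, upper

z = 2.0
# terms decay geometrically, so 200 terms give the sum to machine precision
total = sum(math.log(x) / z**x for x in range(1, 201))
lo, hi = bounds(z, 5)
print(lo, total, hi)
```

Already with $n=5$ the two bounds pin the sum down to a few parts in a thousand.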



Some explanation for the PolyLog: by definition
$$\text{PolyLog}(p,1/z) = \sum_{x=1}^\infty \dfrac{x^{-p}}{z^x}$$
Take the derivative of this with respect to $p$ (which is what the $(1,0)$ superscript refers to), evaluate at $p=0$, and you get $-1$ times your sum, because
$$ \dfrac{\partial}{\partial p} x^{-p} = -x^{-p} \log(x)$$
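The derivative identity can be checked numerically with a central difference in $p$ (a sketch; the choice $z=3$, the truncation at 400 terms, and the helper name are my own):

```python
import math

z = 3.0
target = sum(math.log(x) / z**x for x in range(1, 400))  # the sum in question

def truncated_polylog(p):
    # truncated series for PolyLog(p, 1/z) = sum_{x>=1} x^{-p} / z^x
    return sum(x ** (-p) / z**x for x in range(1, 400))

eps = 1e-5
deriv_at_0 = (truncated_polylog(eps) - truncated_polylog(-eps)) / (2 * eps)
print(deriv_at_0, -target)
```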



multivariable calculus - Finding the partial derivatives of this function




Let $g \in C^1(\mathbb R)$ be a real valued function and $f$ defined by $$f(u,v,w) = \int_{u}^{v} g(w^2+\sqrt{s})\,\,ds$$ where $u,v,w \in \mathbb R$ and $u,v>0$.
Find all partial derivatives.



I'm not sure how to attempt this problem. I assume if it were a function of two variables, say something like $$f(x,w) = \int_{0}^{x} g(w^2+\sqrt{s})\,\,ds$$ then for example the partial derivative with respect to $x$ would just be $g(w^2 + \sqrt{x})$ (is that true?).



Anyhow, some hint or strategy would be very welcome.


Answer



Call $h(s) = g(w^2+\sqrt{s})$. When you compute the partial derivatives with respect to $u,v$, the variable $w$ is fixed, hence you can think of $h$ as a function $\mathbb{R} \longrightarrow \mathbb{R}$.




So
$$\frac{\partial}{\partial u} \int_u^v h(s) ds = - \frac{\partial}{\partial u} \int_v^u h(s) ds =- h(u)$$
since $v$ is considered a constant. In the same way you get
$$ \frac{\partial}{\partial v} \int_u^v h(s) ds = h(v)$$
For the third partial derivative you need to exchange the derivative and the integral sign (differentiation under the integral sign, justified since $g \in C^1$), so you get
$$\frac{\partial}{\partial w} \int_u^v g(w^2+\sqrt{s})\, ds =
\int_u^v \frac{\partial}{\partial w} g(w^2+\sqrt{s})\, ds =
\int_u^v 2w\, g'(w^2+\sqrt{s})\, ds$$
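A quick finite-difference check of the three formulas for $f$ as stated in the question (with $w^2+\sqrt{s}$). The concrete choice $g=\sin$, the sample point, and the quadrature settings are my own:

```python
import math

def g(t):          # an arbitrary C^1 test function
    return math.sin(t)

def gprime(t):
    return math.cos(t)

def f(u, v, w, m=4000):
    # trapezoidal rule for int_u^v g(w^2 + sqrt(s)) ds
    h = (v - u) / m
    ys = [g(w * w + math.sqrt(u + i * h)) for i in range(m + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

u, v, w, eps = 1.0, 3.0, 0.5, 1e-5
df_du = (f(u + eps, v, w) - f(u - eps, v, w)) / (2 * eps)
df_dv = (f(u, v + eps, w) - f(u, v - eps, w)) / (2 * eps)
df_dw = (f(u, v, w + eps) - f(u, v, w - eps)) / (2 * eps)

# the three claimed partial derivatives
pd_u = -g(w * w + math.sqrt(u))
pd_v = g(w * w + math.sqrt(v))
m, h = 4000, (v - u) / 4000
zs = [2 * w * gprime(w * w + math.sqrt(u + i * h)) for i in range(m + 1)]
pd_w = h * (sum(zs) - 0.5 * (zs[0] + zs[-1]))   # int_u^v 2w g'(w^2+sqrt(s)) ds
```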


elementary number theory - Prove if $n\mid ab$, then $n\mid [\gcd(a,n) \times \gcd(b,n)]$

Prove if $n\mid ab$, then $n\mid [\gcd(a,n)\times \gcd(b,n)]$



So I started by letting $d=\gcd(a,n)$ and $e=\gcd(b,n)$.
Then we have $x,y,w,z$ so that $dx=a$, $ey=b$,$dw=ez=n$
and we also have $s$ so that $ns=ab$




or $ns=dexy$.



What I want is $n\mid de$, but I'm only getting to $n\mid de(xy)$, since I cannot prove that $s/(xy)$ is an integer.

calculus - The integral of $\sec^4(x)\tan(x)$



Consider the integral

$$\int \sec^4(x)\tan(x)\,dx$$



Now right off the bat I see two ways of solving this.




  1. Let $u=\sec(x)$

  2. Use integration by parts




Now doing the first way results in the integral becoming
$$\int u^3\,du=\frac{1}{4}\sec^4(x)+C $$



Which is correct but it's not the answer I'm looking for, so instead we'll do it the second way.



$$\int \sec^2(x)\cdot\sec^2(x)\tan(x)dx$$
$$\int\left(\tan^2(x)+1\right)\sec^2(x)\tan(x)dx $$
$$\int\sec^2(x)\tan^3(x)+\sec^2(x)\tan(x)dx$$
Now this is where I got stuck, because I don't know whether to continue with Pythagorean identities or to factor a term out and solve for that. Or perhaps even break the two up and create two integrals.


Answer




Write your integrand in the form$$\tan(x)(\tan^2(x)+1)\sec^2(x)$$ and substitute $$u=\tan(x)$$ and you will get $$\int u(u^2+1)\,du$$
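Both routes are consistent: $\frac14\sec^4(x)$ and $\frac12\tan^2(x)+\frac14\tan^4(x)$ differ only by the constant $\frac14$, since $\frac14\sec^4 = \frac14(1+\tan^2)^2$. A quick numerical sketch (sample points are my own choices):

```python
import math

def F_sec(x):   # antiderivative from u = sec(x): u^4/4
    return 0.25 * (1 / math.cos(x)) ** 4

def F_tan(x):   # antiderivative from u = tan(x): u^2/2 + u^4/4
    t = math.tan(x)
    return t * t / 2 + t**4 / 4

diffs = [F_sec(x) - F_tan(x) for x in (0.1, 0.7, 1.2)]
print(diffs)  # each difference equals the constant 1/4
```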


Wednesday 29 June 2016

real analysis - Studying the character of $\sum_{n=3}^\infty \frac{1}{n(\log(\log n))^{\alpha}}$




I have to study the character of this series
$$\sum_{n=3}^\infty \frac{1}{n(\log(\log n))^{\alpha}}$$



with $\alpha$ a real parameter.



Considering the Cauchy condensation test, the equivalent series is:
$$\sum_{n=1}^\infty 2^n\frac{1}{2^n[\log(\log 2^n)]^{\alpha}}=\sum_{n=1}^\infty \frac{1}{[\log(n\log 2)]^{\alpha}}$$



Using the ratio test:




$$\lim_{n\rightarrow \infty} \frac{[\log(n\log 2)]^{\alpha}}{[\log((n+1)\log 2)]^{\alpha}}=\lim_{n\rightarrow \infty} \frac{1}{[\log(\log 2)]^{\alpha}}= \frac{1}{[\log(\log 2)]^{\alpha}}\sim \frac{1}{(-0,36)^{\alpha}}$$



Then when $(-0,36)^{\alpha}>1$ the given series converges, otherwise it diverges



if $\alpha =1$ ,$ -0,36<1$ , diverges



if $\alpha =0$ ,$ 1<1$ , diverges



if $\alpha =-1$ ,$ (-0,36)^{-1}=-2,77<1$ , converges




if $\alpha >1$ ,$ (-0,36)^{\alpha}<1$ , converges



if $\alpha <-1$ ,$ (-0,36)^{\alpha}<1$ , converges



if $|\alpha| <1, \ne 0$, sometimes $(-0,36)^{\alpha}$ doesn't even exist.
Can someone help me?


Answer



HINT




Let use limit comparison test with



$$\sum \frac{1}{n}$$
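For context (my own remark, not part of the hint): the series diverges for every real $\alpha$, since for $\alpha\le 0$ the terms are eventually at least $\frac1n$, and for $\alpha>0$ we eventually have $(\log\log n)^{\alpha}\le \log n$, so the terms dominate those of the divergent $\sum \frac{1}{n\log n}$. A numerical sketch of the (very slow) growth of the partial sums, with the illustrative choice $\alpha=2$:

```python
import math

def partial_sum(alpha, N):
    # sum_{n=3}^{N} 1 / (n * (log log n)^alpha)
    return sum(1.0 / (n * math.log(math.log(n)) ** alpha) for n in range(3, N + 1))

sums = [partial_sum(2.0, 10**k) for k in (3, 4, 5)]
print(sums)  # keeps growing, consistent with divergence
```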


calculus - What will be the value of $\int_\gamma x\cdot n(x) \, ds(x)$?





Let $x=(x,y)\in \mathbb R^2$, $n(x)$ denote the unit outward normal to the ellipse $\gamma$ whose equation is given by $\frac{x^2} 4 +\frac{y^2} 9 = 1$ at the point $x$ on it.



What will be the value of $\displaystyle\int_{\gamma}x\cdot n(x)\,ds(x)\text{ ?}$



Answer



Hint. Use the planar version of the Divergence Theorem:
$$\int_{\partial D} (v_1,v_2) \cdot n \, ds=\int_D \left(\frac{\partial v_1}{\partial x} +\frac{\partial v_2}{\partial y} \right)\, dxdy.$$
In particular, take a look at this example.
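Following that hint (my own elaboration): with $v=(x,y)$ the divergence is $2$, so the integral equals $2\cdot\text{Area} = 2\pi\cdot 2\cdot 3 = 12\pi$ for this ellipse with semi-axes $2$ and $3$. A direct numerical line integral over the standard parametrization agrees:

```python
import math

# ellipse x^2/4 + y^2/9 = 1, traversed counterclockwise;
# for the outward normal, x . n ds = x dy - y dx along the curve
N = 100000
dt = 2 * math.pi / N
total = 0.0
for i in range(N):
    t = i * dt
    x, y = 2 * math.cos(t), 3 * math.sin(t)
    dx, dy = -2 * math.sin(t), 3 * math.cos(t)  # derivatives of the parametrization
    total += (x * dy - y * dx) * dt

print(total, 12 * math.pi)
```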


algebra precalculus - Evaluate $\lim \limits_{n\to \infty }\sin^2 (\pi \sqrt{(n!)^2-(n!)})$




Evaluate $$\lim \limits_{n\to \infty }\sin^2 \left(\pi \sqrt{(n!)^2-(n!)}\right)$$





I tried Stirling's approximation
$$n! \approx \sqrt{2\pi n}\, n^n e^{-n}$$
but it led me nowhere.



Any hint will be of great help.


Answer



$$\sin (\pi \sqrt{(n!)^2 -n!} )=\sin (\pi \sqrt{(n!)^2 -n!} -\pi\cdot n! )=\sin \left(\pi \frac{-n!}{\sqrt{(n!)^2 -n!} +n!}\right)=\sin \left(\pi \frac{-1}{\sqrt{1 -\frac{1}{n!}} +1}\right)\to -\sin\frac{\pi}{2} =-1$$
(the first equality uses $\sin(\theta-\pi k)=(-1)^k\sin\theta$ together with the fact that $n!$ is even for $n\ge 2$). Hence $\sin^2\left(\pi\sqrt{(n!)^2-n!}\right)\to 1$.
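A numerical sketch of the limit (Python; the helper name is mine). Since $\sin^2$ has period $\pi$, we can subtract $\pi\cdot n!$ and evaluate $\sin^2$ at the small shift $\delta = \sqrt{m^2-m}-m$ instead of at a huge argument:

```python
import math

def term(n):
    m = math.factorial(n)
    # sqrt(m^2 - m) = m + delta with delta = -m / (sqrt(m^2 - m) + m);
    # sin^2 has period pi, so sin^2(pi*sqrt(m^2 - m)) = sin^2(pi*delta)
    delta = -m / (math.sqrt(m * m - m) + m)
    return math.sin(math.pi * delta) ** 2

vals = [term(n) for n in (3, 5, 8, 12)]
print(vals)  # approaches sin^2(pi/2) = 1
```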


Tuesday 28 June 2016

matrices - Find $x_0$ when a 3x3 symmetric matrix has equal eigenvalues



The question goes like this: There is a symmetric matrix:$$A=\begin{bmatrix}3 & 0 & 0\\ 0 & x & 2\\0 & 2 & x\end{bmatrix}$$



Find the value(s) of $x$ for which $A$ has at most two distinct eigenvalues. (Eigenvalues like $3,2,2$)



In my attempts to solve this problem, I got the characteristic equation as:

$$\lambda^3-(2x+3)\lambda^2+(x^2+6x-4)\lambda-3(x^2-4)=0$$
I am unable to proceed any further than this. Should I try to solve for $\lambda$ by putting appropriate values in the equation, then find $x$?



Is there any property that I seem to be missing?


Answer



Observe that
$$A\begin{bmatrix}1\\0\\0\end{bmatrix}=3\begin{bmatrix}1\\0\\0\end{bmatrix}.$$
Thus $\lambda=3$ is an eigenvalue of this matrix.



Also observe

$$A\begin{bmatrix}0\\1\\1\end{bmatrix}=(x+2)\begin{bmatrix}0\\1\\1\end{bmatrix}.$$
Thus $\lambda=x+2$ is an eigenvalue of this matrix as well.



Now the sum of the eigenvalues is the trace of the matrix. Let the other eigenvalue be $\lambda_3$, then
$$3+(x+2)+\lambda_3=2x+3 \implies \color{red}{\lambda_3=x-2}.$$



So the three eigenvalues are $\boxed{3,x+2}$ and $\boxed{x-2}$. We want at most two distinct eigenvalues. Observe that when $x=1$ or $x=5$, two of them are equal, hence only two distinct eigenvalues.



When $\color{red}{x=1}$, the eigenvalues are $\color{blue}{3,3,-1}$.




When $\color{red}{x=5}$, the eigenvalues are $\color{blue}{3,7,3}$.



When $\color{red}{x \neq 1,5}$, the eigenvalues are all $\color{blue}{\text{distinct}}$.



For no value of $x$ can all the eigenvalues be the same.






Further addition to the solution:




In case you are not aware of the trace result, you can still get the third eigenvalue by observing that
$$A\begin{bmatrix}0\\1\\-1\end{bmatrix}=(x-2)\begin{bmatrix}0\\1\\-1\end{bmatrix}.$$
Thus $\lambda=x-2$ is an eigenvalue of this matrix as well.
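The three eigenpairs are easy to verify with the matrix-vector products above (plain Python sketch; the sample values of $x$ are my own):

```python
def check(x):
    A = [[3, 0, 0],
         [0, x, 2],
         [0, 2, x]]
    # the three eigenpairs from the answer
    pairs = [(3, [1, 0, 0]), (x + 2, [0, 1, 1]), (x - 2, [0, 1, -1])]
    for lam, v in pairs:
        Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
        assert Av == [lam * vi for vi in v]
    return sorted({3, x + 2, x - 2})  # the distinct eigenvalues

print(check(1), check(5), check(7))
```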


calculus - A closed form for a triple integral with sines and cosines



$$\small\int^\infty_0 \int^\infty_0 \int^\infty_0 \frac{\sin(x)\sin(y)\sin(z)}{xyz(x+y+z)}(\sin(x)\cos(y)\cos(z) + \sin(y)\cos(z)\cos(x) + \sin(z)\cos(x)\cos(y))\,dx\,dy\,dz$$



I saw this integral $I$ posted on a page on Facebook. The author claims that there is a closed form for it.







My Attempt



This can be rewritten as



$$3\small\int^\infty_0 \int^\infty_0 \int^\infty_0 \frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z)}{xyz(x+y+z)}\,dx\,dy\,dz$$



Now consider



$$F(a) = 3\int^\infty_0 \int^\infty_0 \int^\infty_0\frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z) e^{-a(x+y+z)}}{xyz(x+y+z)}\,dx\,dy\,dz$$




Taking the derivative



$$F'(a) = -3\int^\infty_0 \int^\infty_0 \int^\infty_0\frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z) e^{-a(x+y+z)}}{xyz}\,dx\,dy\,dz$$



By symmetry we have



$$F'(a) = -3\left(\int^\infty_0 \frac{\sin^2(x)e^{-ax}}{x}\,dx \right)\left( \int^\infty_0 \frac{\sin(x)\cos(x)e^{-ax}}{x}\,dx\right)^2$$



Using W|A I got




$$F'(a) = -\frac{3}{16} \log\left(\frac{4}{a^2}+1 \right)\arctan^2\left(\frac{2}{a}\right)$$



Integrating back, and using $F(\infty)=0$, we have



$$F(0) = \frac{3}{16} \int^\infty_0\log\left(\frac{4}{a^2}+1 \right)\arctan^2\left(\frac{2}{a}\right)\,da$$



Let $x = 2/a$



$$\tag{1}I = \frac{3}{8} \int^\infty_0\frac{\log\left(x^2+1 \right)\arctan^2\left(x\right)}{x^2}\,dx$$




Question



I have been unable to verify that (1) is correct or to find a closed form for it. Any ideas?


Answer



OK, I was able to find the integral



$$\int^\infty_0\frac{\log\left(x^2+1 \right)\arctan^2\left(x\right)}{x^2}\,dx$$



First note that




$$\int \frac{\log(1+x^2)}{x^2}\,dx = 2 \arctan(x) - \frac{\log(1 + x^2)}{x}+C$$



Using integration by parts



$$I = \frac{\pi^3}{12}+2\int^\infty_0\frac{\arctan(x)\log(1 + x^2)}{(1+x^2)x}\,dx$$



For the integral let



$$F(a) = \int^\infty_0\frac{\arctan(ax)\log(1 + x^2)}{(1+x^2)x}\,dx$$




By differentiation we have



$$F'(a) = \int^\infty_0 \frac{\log(1+x^2)}{(1 + a^2 x^2)(1+x^2)}\,dx $$



Letting $1/a = b$ we get



$$\frac{1}{(1 + a^2 x^2)(1+x^2)} = \frac{1}{a^2} \left\{ \frac{1}{((1/a)^2+x^2)(1+x^2)}\right\} =\frac{b^2}{1-b^2}\left\{ \frac{1}{b^2+x^2}-\frac{1}{1+x^2} \right\}$$



We conclude that

$$\frac{b^2}{1-b^2}\int^\infty_0 \frac{\log(1+x^2)}{b^2+x^2}-\frac{\log(1+x^2)}{1+x^2} \,dx = \frac{b^2}{1-b^2}\left\{ \frac{\pi}{b}\log (1+b)-\pi\log(2)\right\}$$



Where we used that



$$\int^\infty_0 \frac{\log(a^2+b^2x^2)}{c^2+g^2x^2}\,dx = \frac{\pi}{cg}\log \frac{ag+bc}{g}$$



By integration we deduce that



$$\int^1_0 \frac{\pi}{a^2-1}\left\{ a\log \left(1+\frac{1}{a} \right)-\log(2)\right\}\,da = \frac{\pi}{2}\log^2(2)$$




For the last one I used Wolfram Alpha; however, it shouldn't be difficult to prove.
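That last equality can at least be confirmed numerically (midpoint rule in Python; the step count is my choice — the integrand has a removable singularity at $a=1$, which the midpoints avoid):

```python
import math

def integrand(a):
    # pi/(a^2-1) * ( a*log(1 + 1/a) - log 2 ); removable singularity at a = 1
    return math.pi / (a * a - 1) * (a * math.log(1 + 1 / a) - math.log(2))

N = 200000
h = 1.0 / N
approx = sum(integrand((i + 0.5) * h) for i in range(N)) * h
print(approx, math.pi / 2 * math.log(2) ** 2)
```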



Finally we have




$$\int^\infty_0\frac{\log\left(x^2+1\right)\arctan^2\left(x\right)}{x^2}\,dx = \frac{\pi^3}{12}+\pi\log^2(2)$$



real analysis - A discontinuous function at every point in $[0,1]$


Suppose you are given a measurable set $E\subset[0,1]$ such that for any nonempty open sub-interval $I$ in $[0,1]$, both sets $E \cap I$ and $E^c \cap I$ have positive measure. Then, for the function $f:=\chi_E$, where $\chi_E$ is characteristic function, show that whenever $g(x)=f(x)$ a.e. in $x$, then $g$ must be discontinuous at every point in $[0,1]$ .





I think we can take advantage of the construction of a measurable subset $E \subset [0, 1]$ such that for every sub-interval $I$, both $E \cap I$ and $I \setminus E$ have positive measure: take a Cantor-type subset of $[0, 1]$ with positive measure, then on each sub-interval of the complement of this set construct another such set, and so on. I don't know if I am right.

real analysis - The series $\sum_{n=1}^\infty a_n$ converges absolutely; does the series $\sum_{k=1}^\infty a_{n_k}$ converge?



If the series $\sum_{n=1}^\infty a_n$ converges absolutely



and $(a_{n_k})$ is a subsequence of $(a_n)$,




does the series $\sum_{k=1}^\infty a_{n_k}$ also always converge?



Since I failed to find any counter-example,
and the only connection between the absolute convergence of $\sum_{n=1}^\infty a_n$ and the convergence of $\sum_{k=1}^\infty a_{n_k}$ in my textbook is this theorem:
the series $\sum_{n=1}^\infty a_n$ converges absolutely iff the series of its positive members and the series of its negative members both converge,
I concluded that my proof must be somehow connected to this theorem.



At this stage I arrived at a dead end.




Could you please give me some hint how to deal with this question ?



Thanks.


Answer



Yes, it does. Since $\{n_k : k \in \mathbb{N}\} \subseteq \{n : n \in \mathbb{N}\}$, we see that



$$\sum_k \left|a_{n_k}\right| \le \sum_n |a_n| < \infty$$



So the relevant sum is, in fact, absolutely convergent.
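A numerical illustration (my own example, not part of the answer) of why absolute convergence is essential: for the conditionally convergent $\sum (-1)^n/n$, the subsequence of even-indexed (all positive) terms alone diverges like $\frac12\log N$:

```python
import math

N = 200000
full = sum((-1) ** n / n for n in range(1, N + 1))   # converges, to -ln 2
evens = sum(1 / n for n in range(2, N + 1, 2))       # (1/2) * harmonic sum: diverges
print(full, -math.log(2), evens)
```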


elementary number theory - When is $1+x+x^2+x^3+\cdots$ and ${1 \over 1-x} \quad (x \ne 1)$ *not* interchangeable in an algebraic formula?



[update 3]: This question was stated from a wrong premise ("...fails obviously...") which I became aware of after comments and answers, and thus I tend to retract it. But there are those constructive answers, shedding light on this, so I think it is better to keep the question, together with the answers, alive






This is merely an accidental question, to improve my understanding of the concept of divergent summation.




I'm nearly completely used to the assumption that $ \sum_{k=0}^\infty x^k = {1 \over 1-x}$, by analytic continuation, can be inserted in any formula (except of course at $x = 1$) - maybe I'm so-to-say over-experienced by sheer practice. On the other hand, I think I remember having read in Konrad Knopp's book that divergent summation in the case of the geometric series can be inserted in every analytical expression (I'll check this possibly false memory when I have Knopp's book available again).



But here is an example, where the identity fails obviously:
$$ e^{-1-2-4-8-16- \cdots }={1 \over e^1}{1 \over e^2}{1 \over e^4} \cdots \ne e^1 $$



How can I characterize the range of algebraic operations, where such (even much standard) analytic continuation is applicable and where not? (Other examples might be the insertion of $\zeta$-values at negative arguments in place of their sum/product-representations in algebraic formulae)



[remark in the update 3]: analytic continuation needs some variable parameter with a range of values for which the expression is true/convergent. It proceeds by changing that parameter as far as the expression stays analytic and convergent - and then analytic continuation is attempted by further operations, coordinate changes, and shifts of the parameter's range. In the above formula such a variable parameter should be included: say, the base of the geometric series should be kept variable, and for it the analytic continuation should then be attempted. This is kindly reflected in R. Israel's answer



[update 2]: I'll add some context for this question from my comment to R. Israel's answer. It should shed much more light on the intention of my question:
My question arose today when I re-read an older discussion of mine in the tetration forum, where I didn't find an answer, nor even a suitable direction for an answer (for my level of understanding at that time). Here is the link to the discussion where I had posed this in the context of iteration series and had already arrived at the example in my current question: http://math.eretrandre.org/tetrationforum/showthread.php?tid=420 .




[update]:
I've taken a quick look into the chapter "divergent series" in Knopp's monograph (in German). I see at least one formulation which I might have overgeneralized and not taken precisely enough. I'll paraphrase it here to show the root of my concern:


(chap XIII, par. 261.) (...) "in a reasonable way" - this could also be interpreted to mean that we assign to the sequence $(s_n)$ a value $s$ in such a way that, wherever this sequence appears in a formula as the result of a computation, we should assign that value $s$ always, or at least generally, to that result (...)
(par 262.) (...) Whether now, wherever this series $\sum (-1)^n $ occurs as the result of a computation, we should assign it the value $\frac 12$ - this cannot be decided without further consideration. With the representation $ {1 \over 1-x} = \sum x^n $ for $x=-1$ this however is surely the case. (...)




It seems I took those remarks too broadly when I studied this chapter, and became too insensitive to the geometric series $\sum 2^k$ and its relatives... possibly I should be more critical today even of Knopp's formulation, which seems a bit too vague in the light of my concern today.


Answer




I don't know what your criterion for the validity of the insertion is. Let's say $f$ is an analytic function on a domain $D$, and the series $\sum_j g_j$ converges for $z$ in a domain $W$ to a function $g$ that has an analytic continuation to a larger domain $U$, with $g(U) \subseteq D$. Then $f(\sum_j g_j(z))$ for $z \in W$ has an analytic continuation to $f(g(z))$ on $U$. Is that what you're thinking of?
In your example, with $f(z) = \exp(-z)$ and the series $\sum_{j=0}^\infty z^j$, $g(z) = 1/(1-z)$, it is indeed true that $$\exp\left(-\sum_{j=0}^\infty z^j\right) = \prod_{j=0}^\infty e^{-z^j}$$ has an analytic continuation to $\exp(-1/(1-z))$ on ${\mathbb C} \backslash \{1\}$, with value $e$ at $z = 2$. However, I would avoid writing this as
$$ \prod_{j=0}^\infty e^{-2^j} = e$$



EDIT: Note also that it is possible to have
$f(\sum_{j=0}^N c_j z^j)$ converges uniformly on compact subsets of domain $A$ to an analytic function $g(z)$
and uniformly on compact subsets of domain $B$ to a different analytic function $h(z)$
where $g(z)$ and $h(z)$ are not analytic continuations of each other. This will happen if $\sum_j c_j z^j$ has a finite nonzero radius of convergence and $f$ is analytic in a neighbourhood of $\infty$ and also analytic (and nonconstant) in a neighbourhood of $c_0$. For example, with $f(z) = z/(1+z)$ and $\sum_{j=0}^\infty z^j$ we have



$$ \eqalign{\dfrac{\sum_{j=0}^N z^j}{1 + \sum_{j=0}^N z^j} &= \dfrac{z^{N+1}-1}{z^{N+1}+z-2}\cr

&\to 1 \ \text{for } |z| > 1\cr
& \to \dfrac{1}{2-z} \ \text{for } |z| < 1\cr}$$
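The two different limits are easy to see numerically (a Python sketch; the real sample points $z=0.5$ and $z=3$ are my own choices):

```python
def ratio(z, N):
    # partial sums of sum z^j, pushed through f(z) = z/(1+z)
    S = sum(z**j for j in range(N + 1))
    return S / (1 + S)

print(ratio(0.5, 200), 1 / (2 - 0.5))  # inside the unit disk: -> 1/(2-z)
print(ratio(3.0, 200))                 # outside the unit disk: -> 1
```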


real analysis - I need to find all functions $f:\mathbb R \rightarrow \mathbb R$ which are continuous and satisfy $f(x+y)=f(x)+f(y)$

I need to find all functions $f:\mathbb R \rightarrow \mathbb R$ such that $f(x+y)=f(x)+f(y)$. I know that there are other questions that are asking the same thing, but I'm trying to figure this out by myself as best as possible. Here is how I started out:



Try out some cases:



$x=0:$
$$f(0+y)=f(0)+f(y) \iff f(y)=f(0)+f(y) \iff 0=f(0) $$
The same result holds when $y=0$.




$x=-y:$
$$f(-y+y)=f(-y)+f(y) \iff f(0)=f(-y)+f(y) \iff 0=f(-y)+f(y)\iff \quad f(-y)=-f(y)$$
I want to extend the result of setting $x=-y$ to numbers other than $-1$, perhaps all real numbers or all rational numbers. I got a little help from reading other solutions for the next part:



Let $q \in \mathbb{N}$, $q=1+1+1+\cdots+1$. Then
$$f(qx)=f((1+1+...+1)x)=f(x+x+...+x)=f(x)+f(x)+...+f(x)=qf(x)$$
I understood this part, but I don't understand why this helps me find all the functions that satisfy the requirement that $f(x+y)=f(x)+f(y)$, but here is how I went on:



Thus
$$f(qx)=qf(x)$$ and it should follow that

$$f \bigg (\frac {1}{q} x\bigg)= \frac{1}{q}f(x)$$ where $q\not =0$, then it further follows that
$$f \bigg (\frac {p}{q} x\bigg)= \frac{p}{q}f(x)$$ where $\frac{p}{q}$ is rational, and lastly it further follows that
$$f (ax)= af(x)$$ where $a$ is real. Thus functions of the form $f(x)=ax$, where $a$ is real, satisfy the requirement $f(x+y)=f(x)+f(y)$.



I don't know how much of what I did is correct/incorrect, and any help would be greatly appreciated. Also, is there any way to show that functions of the form $f(x)=ax$, where $a$ is real, are the only functions that satisfy the requirement $f(x+y)=f(x)+f(y)$? Or do other solutions exist?



Again, thanks a lot for any help! (Hints would be appreciated, I'll really try to understand the hints!)

logic - How to prove $2+2=4$ using axioms of real number system?



How to prove $2+2=4$ using the axioms of the real number system? How do you make sense of the axioms for the real number system when you cannot define the operations? You don't give an algorithm to calculate the product or sum of two numbers. How do you know, for example, that $[0,2]$ is a subset of $[0,4]$ when you can't say if $2\lt4$?



I know how to define $2$ and the $+$ operation in arithmetic based on the Peano postulates. So my question is how to define $2$ and $+$ in the real number system.


Answer



"The real numbers" is a vague term. It could mean the ordered set $(\Bbb R,\leq)$ or it could mean the group $(\Bbb R,+)$. It could also mean the field $(\Bbb R,+,\cdot)$ or even the ordered field $(\Bbb R,+,\cdot,0,1,\leq)$.



It can mean many things. And unless you explicitly say what you mean by "the real numbers" one can only guess.




Indeed as a first-order structure, $(\Bbb R,\leq)$ cannot define the operations $+,\cdot$ or the constants $0,1$. In fact, even as an ordered group $(\Bbb R,+,\leq)$ cannot define the number $1$. These are all simple exercises that can be given after the second week of a first course about model theory.



If you mean "the real numbers" as a field, which may or may not include the constant symbols $0$ and $1$, then we can prove that $2+2=4$. But first we need to define what are these symbols $2$ and $4$, because they do not appear in the language.



First off, note that if $0$ and $1$ don't appear in the language we can define them as the neutral elements for $+$ and $\cdot$ respectively. So we may assume that the language includes them. Now we need to define the terms, so generally we define closed terms (objects in the language which do not depend on the assignment to free variables): We have defined $0,1$ so define the term for $n+1$ as the term $\bar n+1$, where $\bar n$ is the term defined so far for $n$.



In simple words, we define $2$ to be $1+1$, $3=(1+1)+1$ and $4=((1+1)+1)+1$ and so on. I'll stop here because we only care about $2$ and $4$ for now.



So now we want to prove that the two terms $2+2$ and $4$ are equal. This means that we want to prove, from the axioms of fields, or perhaps ordered fields, the following equality: $$(1+1)+(1+1)=((1+1)+1)+1$$




From this the proof is not very long, and I shall leave it for you to finish it. Just apply the associativity of $+$.






To the comment,




By real number system I mean the complete ordered field. In your answer you did not define +.





First of all, in logic we don't "define" the operations. Once you said "field", you have de facto defined the operations, the constants, etc. This comes from the fact that when we say that $\Bbb R$ is a field, we mean that it is a structure in the language of fields satisfying the field axioms. The language of fields includes $+$ and $\cdot$, and it includes $0,1$.



When you said "the complete ordered field", you added the order to the language; now you have $\leq$, and you have added a second-order axiom asserting that a bounded set has a least upper bound.



If you are asking "How do we define addition on the real numbers?" then the answer is embedded in the answer as to "How do we define the real numbers?". Given the rational numbers with their field structure, we can define the real numbers to be Dedekind cuts, or the metric completion of the rational numbers.



Each of these constructions comes with its own definitions of addition, multiplication, and so on. And one can show that these definitions are well-defined and satisfy the axioms of a field.



Moreover, if you start with the rational numbers, it suffices to show that the rational numbers embed "nicely" into your structure, and then $2+2=4$ is inherited from the rational numbers themselves, with the proof just as given above. Or, as I said, you can prove it in the old-fashioned way, after you have shown that the structure you have defined as the real numbers satisfies the few needed axioms.




But now you can ask, how do you define the addition on the rational numbers? Again, this amounts to asking -- do you take the rational numbers for granted? Are they given to you? Are they a field? If the answer is yes, then you are essentially done. If not, then you need to construct them, and you can do that from the integers by considering equivalence classes and so on.



The question is again reduced to the integers. You can show that the integers embed nicely into the structure you have defined as "the rational numbers" and then $2+2=4$ is true in the real numbers, because it is true in the rational numbers, because it is true in the integers. But you can also show directly that it holds in the rational numbers by the way you constructed them from the integers, or that it holds in the real numbers. Depending on what you prefer to do.



But the same question applies again. Where do the integers come from? The integers can be taken as an ordered ring, which means that the addition, multiplication, order and so on, are all given to us. We do not define them. They are part of the language, and by taking "the integers" we have de facto chosen an interpretation for these operations. However, one can also define all that from the natural numbers and their operations and order. And then one can show that $2+2=4$ in the natural numbers, and show that there is a nice embedding for the natural numbers into the integers as defined, and therefore into the rational numbers and therefore into the real numbers.



But wait. Where did the natural numbers come from? Well, you can just take them for granted, or you can in fact define them from the empty set using the axioms of set theory. Then you can show that this structure that you have defined as the natural numbers has a definable order, addition, multiplication, and so on. And then you can go back and construct the integers, then the rational numbers, and then the real numbers.



I'd go over the whole construction myself, but it has been covered several times on this site before, and essentially this is not what you have asked for in the first place. You asked how to prove that $2+2=4$ in the real numbers.




My suggestion is to learn about predicate logic, and a bit about first-order logic, to understand what it means to say that a certain object is a field, i.e. an interpretation of the language of fields. That means that we don't have to define its operations; they are given to us. But at the same time, learning about the construction of the real numbers is also important, and is covered, as I said, in other threads on the site.



To read more:




  1. In set theory, how are real numbers represented as sets?

  2. True Definition of the Real Numbers

  3. Building the integers from scratch (and multiplying negative numbers)

  4. Why does the Dedekind Cut work well enough to define the Reals?


  5. Completion of rational numbers via Cauchy sequences

  6. What is the basis for a proof?

  7. Given real numbers: define integers?



In these threads you will find a lot of questions about defining the real numbers from the rational numbers, and similar constructions. You will also find many links to other (possibly useful) threads.


Monday 27 June 2016

calculus - show that $\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}\,dx=\frac{3\pi}{8}$




show that



$$\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}dx=\frac{3\pi}{8}$$



using different methods.



Thanks to all.


Answer



Let $$f(y) = \int_{0}^{\infty} \frac{\sin^3{yx}}{x^3} \mathrm{d}x$$

Then,
$$f'(y) = 3\int_{0}^{\infty} \frac{\sin^2{yx}\cos{yx}}{x^2} \mathrm{d}x = \frac{3}{4}\int_{0}^{\infty} \frac{\cos{yx} - \cos{3yx}}{x^2} \mathrm{d}x$$
$$f''(y) = \frac{3}{4}\int_{0}^{\infty} \frac{-\sin{yx} + 3\sin{3yx}}{x} \mathrm{d}x$$
Therefore,
$$f''(y) = \frac{9}{4} \int_{0}^{\infty} \frac{\sin{3yx}}{x} \mathrm{d}x - \frac{3}{4} \int_{0}^{\infty} \frac{\sin{yx}}{x} \mathrm{d}x$$



Now, it is quite easy to prove that $$\int_{0}^{\infty} \frac{\sin{ax}}{x} \mathrm{d}x = \frac{\pi}{2}\mathop{\mathrm{signum}}{a}$$



Therefore,
$$f''(y) = \frac{9\pi}{8} \mathop{\mathrm{signum}}{y} - \frac{3\pi}{8} \mathop{\mathrm{signum}}{y} = \frac{3\pi}{4}\mathop{\mathrm{signum}}{y}$$

Then,
$$f'(y) = \frac{3\pi}{4} |y| + C$$
Note that, $f'(0) = 0$, therefore, $C = 0$.
$$f(y) = \frac{3\pi}{8} y^2 \mathop{\mathrm{signum}}{y} + D$$
Again, $f(0) = 0$, therefore, $D = 0$.



Hence, $$f(1) = \int_{0}^{\infty} \frac{\sin^3{x}}{x^3}\,\mathrm{d}x = \frac{3\pi}{8}$$
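A numerical sanity check of the final value (Simpson's rule in Python; the cutoff and step count are my choices — the tail beyond the cutoff is $O(1/X^2)$):

```python
import math

def f(x):
    # sin^3(x)/x^3, with the removable singularity f(0) = 1
    return (math.sin(x) / x) ** 3 if x else 1.0

X, n = 200.0, 200000          # n must be even for Simpson's rule
h = X / n
acc = f(0.0) + f(X)
for i in range(1, n):
    acc += (4 if i % 2 else 2) * f(i * h)
approx = acc * h / 3
print(approx, 3 * math.pi / 8)
```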


Explain the proof that the square root of a prime number is an irrational number

Though the proof of this was given in a previous question, I have a doubt about a certain concept, so I am asking for clarification.



In the proof we say that $\sqrt{p} = \frac{a}{b}$ (In their lowest form).
Now
$$p = a^2 / b^2\\p\cdot b^2 = a^2.$$



Hence $p$ divides $a^2$, so $p$ divides $a$. We say that this step ("$p$ divides $a^2$, so $p$ divides $a$") is valid because $p$ is a prime number. I don't get why this is only true for prime numbers. Could someone please explain this to me?

Sunday 26 June 2016

linear algebra - Pseudo inverse of a product of two matrices with different rank



Let $V$ be an $n \times n$ symmetric, positive definite matrix (of rank $n$). Let $X$ be an $n \times p$ matrix of rank $p$.




Define $A^- = (A^\top A)^{-1} A^\top$ as the pseudo inverse of $A$ when $A$ is of full column rank. Note that $V^- = V^{-1}$ because $V$ is invertible.



I'd like to prove that



$$ (VX)^- = X^- V^{-1} $$



but the only theorem I know about the pseudo-inverses of products requires that both of the matrices be of the same rank AND that the second matrix has full row rank. (To wit: If $B$ is an $m \times r$ matrix of rank $r$ and $C$ is an $r \times m$ 
matrix of rank $r$, then $(BC)^- = C^-B^-$.)




There is likely something obvious I'm missing. Any clues?


Answer



I am assuming that by a "pseudoinverse" you mean Moore–Penrose pseudoinverse $A^+$ of a matrix $A$. Let us check the defining properties of the Moore-Penrose pseudoinverse against $X^+ V^{-1}$:




  1. $(VX) (X^+ V^{-1}) (VX) = VX X^+ X = VX$. Ok.

  2. $(X^+ V^{-1}) (VX) (X^+ V^{-1}) = X^+ X X^+ V^{-1} = X^+ V^{-1}$. Ok.

  3. $((VX) (X^+ V^{-1}))^* = V^{-*} (XX^+)^* V^* = V^{-2} (VX)(X^+ V^{-1}) V^2$. Hmmm...

  4. $((X^+ V^{-1}) (VX))^* = (X^+X)^* = X^+X = (X^+ V^{-1}) (VX)$. Ok.




So, the above is O.K. if and only if item 3 is O.K., i.e.,



$$((VX) (X^+ V^{-1}))^* = V^{-2} (VX)(X^+ V^{-1}) V^2.$$



However, this is not generally true. For example (by Pedro Milet in comments),



$$V = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}, \quad X = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$



Then




$$(VX)^+ = \frac{1}{5} \begin{bmatrix} 2 & 1 \end{bmatrix} \ne \begin{bmatrix} 1 & -1 \end{bmatrix} = X^+ V^{-1}.$$



Notice, however, that it would work if $V$ was unitary, instead of positive definite.
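The counterexample can be checked in exact arithmetic with plain Python and `fractions` (the helper names are mine; `pinv` below is the full-column-rank formula $(A^\top A)^{-1}A^\top$ from the question):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv(M):
    if len(M) == 1:                       # 1x1 case
        return [[F(1) / M[0][0]]]
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c                   # 2x2 case
    return [[d / det, -b / det], [-c / det, a / det]]

def pinv(A):
    # pseudoinverse of a full-column-rank A: (A^T A)^{-1} A^T
    At = transpose(A)
    return matmul(inv(matmul(At, A)), At)

V = [[F(2), F(1)], [F(1), F(1)]]
X = [[F(1)], [F(0)]]
lhs = pinv(matmul(V, X))        # (VX)^+      = [2/5, 1/5]
rhs = matmul(pinv(X), inv(V))   # X^+ V^{-1}  = [1, -1]
print(lhs, rhs)
```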


real analysis - If $0<a\le b\le c$, then $\lim_{n\to\infty}\sqrt[n]{a^n+b^n+c^n}=c$

If $0<a\le b\le c$, show that $\lim_{n\to\infty}\sqrt[n]{a^n+b^n+c^n}=c$.

$\mid \sqrt[n]{a^n+b^n+c^n} - c \mid \, < \,\mid \sqrt[n]{a^n}+\sqrt[n]{b^n}+\sqrt[n]{c^n} - c \mid$ doesn't get me anywhere.



$$\begin{align}
\ln((a^n+b^n+c^n)^{1/n}) &= \frac{1}{n}\ln(a^n+b^n+c^n) \\
&= \frac{1}{n}\ln\left(c^n\left(\frac{a^n}{c^n} + \frac{b^n}{c^n} + 1\right) \right)\\
&= \frac{1}{n}\bigg (\ln(c^n)+\ln(\frac{a^n}{c^n} + \frac{b^n}{c^n} + 1)\bigg) \\
&\le \frac{1}{n}\bigg( \ln(c^n) + \ln(3) \bigg) \\&= \frac{1}{n}\bigg( n \cdot \ln(c) + \ln(3) \bigg)
\end{align}$$




$\mid \ln(\sqrt[n]{a^n+b^n+c^n}) - \ln(c)\mid \,\le\, \mid\frac{1}{n} \mid \mid (n\cdot \ln(c) + \ln(3)) - n\cdot \ln(c)\mid = \mid ( \ln(c) + \frac{\ln(3)}{n}) - \ln(c)\mid $



Let $\epsilon > 0$ and
choose $N \in \mathbb{N}$ such that $N > \frac{\ln(3)}{\epsilon}$; then $\forall n > N$, $\mid ( \ln(c) + \frac{\ln(3)}{n}) - \ln(c)\mid = \frac{\ln(3)}{n} < \epsilon$.
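The squeeze can also be sanity-checked numerically; the sketch below computes the root in the factored form $c\,\big((a/c)^n+(b/c)^n+1\big)^{1/n}$ used in the proof, which avoids floating-point overflow for large $n$:

```python
def root(n, a, b, c):
    # c * ((a/c)^n + (b/c)^n + 1)^(1/n), the same factoring used in the proof
    return c * ((a / c) ** n + (b / c) ** n + 1.0) ** (1.0 / n)

a, b, c = 1.0, 2.0, 3.0  # any 0 < a <= b <= c
for n in (1, 10, 100, 10**6):
    print(n, root(n, a, b, c))  # tends to c = 3
```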

calculus - Absolute function continuous implies function piecewise continuous?



I have a simple true/false question that I am not sure on how to prove it.





If $|f(x)|$ is continuous in $]a,b[$ then $f(x)$ is piecewise continuous in $]a,b[$




Anyone that can point me in the right direction or give a counterexample, even though I think it's true. Thanks in advance!


Answer



Here is a counterexample to the statement:



Define $f(x)$ to be $1$ if $x$ is rational and $-1$ if $x$ is irrational. Now $f$ is not continuous anywhere, but $|f|$ is identically $1$ and thus continuous.


probability theory - Show that $E(Z^p) = p int_0^infty (1-F_Z(x))x^{p-1} , dx$ for every $p>0$ and nonnegative random variable $Z$




Given a continuous positive r.v. (I think this means $Z \geq 0$), with pdf $f_Z$ and CDF $F_Z$, how would I show that the following expression
$$\mathbb{E}(Z^p) = p \int_0^\infty (1-F_Z(x))x^{p-1} \, dx\text{?}$$
I don't know where to start, I tried maybe changing it to a by-parts question or something using $px^{p-1} = \frac{d}{dx}(x^p)$ but that's all I can think of.


Answer



Note that for $X \geq 0$ we have $\mathbb{E}(X) = \int_0^{\infty} \mathbb{P}(X \geq x) \, \mathrm{d}x$. Now let $Y= Z^p$ then $\mathbb{P}(Y < y) = \mathbb{P}(Z < y^{1/p})$ by monotonicity of $x^p$ on $[0,\infty)$. Hence we have $$\mathbb{E}(Z^p) = \int_0^{\infty} \mathbb{P}(Y > y) \, \mathrm{d}y = \int_0^{\infty} 1-F_Y(y) \, \mathrm{d}y = \int_0^{\infty}1-F_Z(y^{1/p}) \, \mathrm{d}y$$




Now we use the substitution $x^p = y$ to get $$\mathbb{E}(Z^p) = \int_0^{\infty}p(1-F_Z(x))x^{p-1} \, \mathrm{d}x.$$



Alternatively you can mimic the proof for $\mathbb{E}(Z) = \int_0^{\infty} \mathbb{P}(Z \geq z) \, \mathrm{d}z$ using Fubini for $\mathbb{E}(Z^p)$.
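As a concrete check, for $Z \sim \mathrm{Exp}(1)$ we have $1-F_Z(x)=e^{-x}$ and $\mathbb{E}(Z^p)=\Gamma(p+1)$, so the identity can be verified by numerical quadrature (a midpoint-rule sketch; the cutoff at $x=60$ and the step count are arbitrary choices):

```python
import math

def rhs(p, steps=200000, upper=60.0):
    # p * integral of (1 - F_Z(x)) x^(p-1) over (0, upper), midpoint rule;
    # the midpoint rule avoids evaluating x^(p-1) at x = 0
    h = upper / steps
    total = sum(math.exp(-(i - 0.5) * h) * ((i - 0.5) * h) ** (p - 1)
                for i in range(1, steps + 1))
    return p * total * h

for p in (1.0, 2.0, 2.5):
    print(p, rhs(p), math.gamma(p + 1))  # the two columns agree
```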


Saturday 25 June 2016

real analysis - Does $f(ntheta) to 0$ for all $theta>0$ and $f$ Darboux imply $f(x) to 0$ as $x to infty$?



Recall that a Darboux function $f:\mathbb{R} \to \mathbb{R}$ is one which satisfies the conclusion of the intermediate value theorem (i.e., connected sets are mapped to connected sets). Being Darboux is a weaker condition than continuity. If a theorem about continuous functions only uses the intermediate value theorem, then chances are it also holds for the entire class of Darboux functions. I find it interesting to study which theorems about continuous functions also hold for Darboux functions.



We have the following theorem, which is fairly well known and hinges on the Baire Category Theorem.





If $f:\mathbb{R} \to \mathbb{R}$ is continuous and $f(n\theta) \xrightarrow[n \in \mathbb{N}, \ n\to\infty]{} 0$ for every $\theta \in (0, \infty)$, then $f(x) \xrightarrow[x \in \mathbb{R}, \ \ x\to\infty]{} 0$.




A counterexample if we drop continuity is $f(x) = \mathbf{1}_{\{ \exp(n) : n \in \mathbb{N}\}}$. However, this counterexample isn't Darboux, and I haven't been able to come up with any counterexample which is Darboux. Thus, this leads me to my question.




Can the continuity condition in the theorem stated above be relaxed to Darboux?





In searching for counterexamples of this sort, one approach is playing around with $\sin \frac{1}{x}$. An alternative approach is considering highly pathological functions with the property that every nonempty open set is mapped to $\mathbb{R}$ (for instance, Conway Base-13, or Brian's example here) and modifying these in such a way that they satisfy the hypotheses of the problem.


Answer



Non-measurable example



By the axiom of choice there is a $\mathbb Q$-linear basis of $\mathbb R.$ This basis has the same cardinality as $\mathbb R$ so can be indexed as $a_r$ for $r\in\mathbb R.$ Define $f$ by setting $f(x)=r$ if $x$ is of the form $a_0+qa_r$ for some rational $q$ and real $r,$ and set $f(x)=0$ for $x$ not of this form. Then $f$ is Darboux because the set $\{a_0+qa_r\mid q\in\mathbb Q\}$ is dense for each $r.$ But for each $\theta>0,$ we can only have $f(q\theta)\neq 0$ for at most one rational $q$ - the reciprocal of the $a_0$ coefficient of $\theta.$ In particular $f(n\theta)\to 0$ as $n\to\infty$ with $n\in\mathbb N.$



Measurable example



For $n\geq 2$ let $b_n=n!(n-1)!\dots 2!.$ Each real has a unique "mixed radix" expression as
$x=\lfloor x\rfloor + \sum_{n\geq 2}\frac{x_n}{b_n}$ where $x_n$ is the unique representative of $\lfloor b_n x\rfloor$ modulo $n!$ lying in $\{0,1,\dots,n!-1\}.$ For non-negative $x$ define $f(x)=\lim_{n\to\infty} \tfrac{1}{n}\sum_{m=2}^n x_m$ if this limit exists and $x_n\leq 1$ for all sufficiently large $n,$ and take $f(x)=0$ otherwise. For negative $x$ define $f(x)=f(-x).$ Note $f(x)\in[0,1].$ It is straightforward to see that $f$ takes all values in $[0,1]$ in every interval and is hence Darboux.




Now consider a real $x>0$ with $f(x)\neq 0$ and let $q<1$ be rational. We will show that $f(qx)=0.$ We know there exists $N$ such that $x_n\leq 1$ for all $n>N.$ Increasing $N$ if necessary we can assume that $qN$ is an integer. We also know that $x_n=1$ for infinitely many $n>N$ - otherwise we would have $\lim_{n\to\infty} \tfrac{1}{n}\sum_{m=2}^n x_m=0.$
Write $x=x'/b_{n-1}+1/b_n+\epsilon/b_{n+1}$ where $x'$ is an integer and $0\leq\epsilon< 2.$ So $qx b_{n+1}=qx'n!(n+1)!+q(n+1)!+q\epsilon.$ The first term is a multiple of $(n+1)!$ because $qn!$ is an integer, and the second term $q(n+1)!$ is an integer, and $q\epsilon<2.$ So $(qx)_{n+1}$ is either $q(n+1)!$ or $q(n+1)!+1$ (note this is less than $(n+1)!$). Since $q(n+1)!>1$ and there are infinitely many such $n,$ we get $f(qx)=0.$



This shows that for each $\theta>0,$ the sequence $f(n\theta)$ takes at most one non-zero value, and in particular $f(n\theta)\to 0.$



Remark: this $f$ appears to be a counterexample to https://arxiv.org/abs/1003.4673 Theorem 4.1.


algebra precalculus - Functional equation - Cyclic Substitutions



Please help solve the below functional equation for a function $f: \mathbb R \rightarrow \mathbb R$:
\begin{align}

&f(-x) = -f(x) , \text{ and } f(x+1) = f(x) + 1, \text{ and } f\left(\frac 1x\right) = \frac{f(x)}{x^2} \\
&\text{ for all } x \in \mathbb R \text{ and } x \ne 0 .
\end{align}



I know this will be solved by cyclic substitutions, but I'm unable to figure out the exact working. Can someone explain step wise?


Answer



I don't know if this is what is meant with "cyclic substitutions", but it is a solution.



First, we observe that
$$

f(0)=0, f(1)=1, f(-1)=-1
$$

has to be true. Also, it suffices to determine $f(x)$ for $x>0$, because the rest follows from the condition $f(-x)=-f(x)$.



Let $x>0$.
Applying some of the conditions, we have
$$
\begin{aligned}
f(x)+1
&= f(x+1) \\

&= f(1(x+1)^{-1})(x+1)^2 \\
&= f(1-x(x+1)^{-1})(x+1)^2\\
&= (1+f(-x(x+1)^{-1}))(x+1)^2\\
&= (1-f(x(x+1)^{-1}))(x+1)^2\\
&= (1-f((x+1)x^{-1})x^2(x+1)^{-2})(x+1)^2\\
&= (1-f(1+x^{-1})x^2(x+1)^{-2})(x+1)^2\\
&= (1-(1+f(x^{-1}))x^2(x+1)^{-2})(x+1)^2\\
&= (1-(1+f(x)x^{-2})x^2(x+1)^{-2})(x+1)^2\\
&= (x+1)^2-x^2-f(x)\\

\end{aligned}
$$



This yields $f(x)=x$.


Friday 24 June 2016

algebra precalculus - Prove $e^{i pi} = -1$











I recently heard that $e^{i \pi} = -1$.



WolframAlpha confirmed this for me, however, I don't see how this works.


Answer



This identity follows from Euler's Theorem,
\begin{align}
e^{i \theta} = \cos \theta + i \sin \theta,
\end{align}

which has many proofs. The one that I like the most is the following (sketched). Define $f(\theta) = e^{-i \theta}(\cos \theta + i \sin \theta)$. Use the product rule to show that $f^{\prime}(\theta)= 0$, so $f(\theta)$ is constant in $\theta$. Evaluate $f(0)$ to prove that $f(\theta) = f(0)$ everywhere.



Take $\theta = \pi$ for your claim.
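A one-line numerical check of the claim, using Python's `cmath` as a stand-in for a CAS:

```python
import cmath

z = cmath.exp(1j * cmath.pi)  # e^{i pi}
print(z)  # -1, up to floating-point noise in the imaginary part
```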


complex numbers - How to show that the roots of $-x^3+3x+left(2-frac{4}{n}right)=0$ are real (and how to find them)




I'm trying to find the three distinct and real roots of
$$-x^3+3x+\left(2-\frac{4}{n}\right)=0,$$



where $n>0$ (we could say $n\geq 2$ if that helps), but I'm not able to get very far:



Using the notation of the Wikipedia-page, I find that the discriminant is
$$\Delta=27\cdot 4\left(\frac{1}{n}-\frac{1}{n^2}\right),$$
which gives



$$C=3\left( 1-\frac{2}{n} \pm 2 i \sqrt{\frac{1}{n}-\frac{1}{n^2}} \;\right)^{1/3},$$




which is then used to find $x$, as
$$x=\frac{C}{3}+\frac{3}{C}.$$



Two things confuse me:




  1. $C$ looks like a non-real number, but $x$ should be real (since $\Delta >0$). How can one further reduce the expression for $x(C)$ to show that the imaginary part is zero? I'm having trouble evaluating that cube root.

  2. When I use my expression for $x(C)$ in Mathematica and evaluate numerically for some $n$, I only find one of the three solutions that Mathematica finds if I just ask it to give the roots of the original equation (the change of sign in $C$, i.e. the $\pm$, doesn't even give two different solutions). What have I done to exclude the two other solutions (or is it just Mathematica excluding them somehow)?




Context:



Actually, I'm only trying to find the root where $-1\leq x\leq 1$. I'm trying solve (the first) part of system of equations that I solved numerically in order to make this answer analytically, so that I can play with the limit of the expression (as $n\rightarrow \infty$).



Thank you.


Answer



For the calculation of the roots of the depressed cubic
$$
y^{\,3} + p\,y + q = 0

$$
where $p$ and $q$ are real or complex,
I personally adopt a method indicated in this work by A. Cauli, by which putting
$$
u = \sqrt[{3\,}]{{ - \frac{q}
{2} + \sqrt {\frac{{q^{\,2} }}
{4} + \frac{{p^{\,3} }}
{{27}}} }}\quad v = - \frac{p}
{{3\,u}}\quad \omega = e^{\,i\,\frac{{2\pi }}
{3}}

$$
where for the radicals you take one value, the real or
the first complex one (but does not matter which)
then you compute the three solutions as:
$$
y_{\,1} = u + v\quad y_{\,2} = \omega \,u + \frac{1}
{\omega }\,v\quad y_{\,3} = \frac{1}
{\omega }\,u + \omega \,v
$$
In your case:

$$
y^{\,3} - 3\,y - 2\left( {\frac{{n - 2}}
{n}} \right) = 0
$$
we obtain
$$
\frac{{q^{\,2} }}
{4} + \frac{{p^{\,3} }}
{{27}} = \left( {\frac{{n - 2}}
{n}} \right)^{\,2} - 1 = - 4\frac{{\left( {n - 1} \right)}}

{{n^{\,2} }} < 0
$$
which confirms that there are three real solutions, and
$$
\begin{gathered}
u = \sqrt[{3\,}]{{\frac{{n - 2}}
{n} + i\,\frac{2}
{n}\sqrt {\left( {n - 1} \right)} }} = \frac{1}
{{\sqrt[{3\,}]{n}}}\;\sqrt[{3\,}]{{n - 2 + i\,2\sqrt {\left( {n - 1} \right)} }} = \hfill \\
= \frac{1}

{{\sqrt[{3\,}]{n}}}\;\sqrt[{3\,}]{{n\,e^{\,i\,\alpha } }} = e^{\,i\,\alpha /3} \quad \left| {\,\alpha = \arctan \left( {\frac{{2\sqrt {\left( {n - 1} \right)} }}
{{n - 2}}} \right)} \right. \hfill \\
v = - \frac{p}
{{3\,u}} = \frac{1}
{u} = e^{\, - \,i\,\alpha /3} \hfill \\
\end{gathered}
$$
with the understanding that for $n=1,\; 2$, $\alpha= \pi , \; \pi /2$, i.e. that we use the 4-quadrant $arctan$.
So that in conclusion, for $0 < \alpha \leq \pi$:
$$
\left\{ \begin{gathered}

y_{\,1} = e^{\,i\,\alpha /3} + e^{\, - \,i\,\alpha /3} = 2\cos \left( {\frac{\alpha }
{3}} \right) \hfill \\
y_{\,2} = e^{\,i\,\left( \alpha /3 + 2\pi /3 \right)} + e^{\, - \,i\,\left( \alpha /3 + 2\pi /3 \right)} = 2\cos \left( {\frac{{\alpha + 2\pi }}
{3}} \right) \hfill \\
y_{\,3} = e^{\,i\,\left( \alpha /3 - 2\pi /3 \right)} + e^{\, - \,i\,\left( \alpha /3 - 2\pi /3 \right)} = 2\cos \left( {\frac{{\alpha - 2\pi }}
{3}} \right) \hfill \\
\end{gathered} \right.
$$
Concerning the range spanned by the solutions, apart for $n=1$ where we get the solutions (1,-2,1), then
for $2 \le\; n$ we have

$$
\frac{{\alpha (n)}}
{3}\quad \left| {\;2 \leqslant n} \right.\quad = \frac{1}
{3}\arctan _{\,4\,Q} \left( {n - 2,\;2\sqrt {\left( {n - 1} \right)} } \right) = \left\{ {\frac{\pi }
{6},\frac{\pi }
{{7.66}},\; \cdots } \right\}
$$
which means:
$$
\left\{ \begin{gathered}

\quad \quad 2 \leqslant n \hfill \\
0 < \frac{{\alpha (n)}}
{3} \leqslant \frac{\pi }
{6}\quad \Rightarrow \quad \sqrt 3 \leqslant y_{\,1} < 2 \hfill \\
2\frac{\pi }
{3} < \frac{{\alpha (n)}}
{3} + 2\frac{\pi }
{3} \leqslant \frac{5}
{6}\pi \quad \quad \Rightarrow \quad - 2 < y_{\,2} \leqslant - \sqrt 3 \hfill \\
- 2\frac{\pi }

{3} < \frac{{\alpha (n)}}
{3} - 2\frac{\pi }
{3} \leqslant - \frac{\pi }
{2}\quad \quad \Rightarrow \quad - 1 < y_{\,3} \leqslant 0 \hfill \\
\end{gathered} \right.
$$
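The three closed-form roots can be checked numerically; the sketch below uses `math.atan2` for the 4-quadrant arctangent invoked above:

```python
import math

def roots(n):
    # alpha = arctan_4Q(2 sqrt(n-1), n-2), then y_k = 2 cos((alpha + 2 pi k)/3)
    alpha = math.atan2(2 * math.sqrt(n - 1), n - 2)
    return [2 * math.cos((alpha + 2 * math.pi * k) / 3) for k in (0, 1, -1)]

def p(y, n):
    # the depressed cubic y^3 - 3y - 2(n-2)/n
    return y**3 - 3 * y - 2 * (n - 2) / n

for n in (2, 5, 100):
    print(n, roots(n), [p(y, n) for y in roots(n)])  # residuals ~ 0
```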


limits - how to find $lim_{large x to frac{pi}{3}}frac{tan^{3}left(xright)-3tanleft(xright)}{cosleft(x+frac{pi}{6}right)}$



how to find





$$\lim_{\large x \to \frac{\pi}{3}}\frac{\tan^{3}\left(x\right)-3\tan\left(x\right)}{\cos\left(x+\frac{\pi}{6}\right)}$$ without L'hopital or taylor/Laurent series




I tried but did not get any answer:



$$\frac{\tan^{2}\left(x\right)\tan\left(x\right)-3\tan\left(x\right)}{\cos\left(x+\frac{\pi}{6}\right)}=\frac{2\left(\sin^{3}\left(x\right)-3\sin\left(x\right)\cos^{2}\left(x\right)\right)}{\cos^{3}\left(x\right)\left(\sqrt{3}\cos\left(x\right)-\sin\left(x\right)\right)}=\frac{2\left(\sin^{3}\left(x\right)-3\left(\sin\left(x\right)\left(1-\sin^{2}\left(x\right)\right)\right)\right)}{\cos^{3}\left(x\right)\left(\sqrt{3}\cos\left(x\right)-\sin\left(x\right)\right)}=\frac{2}{1}\frac{-3\left(\sin(x)-\sin^{2}\left(x\right)\right)+\sin^{3}\left(x\right)}{\left(1-\sin^{2}\left(x\right)\right)(\cos(x))\left(\sqrt{3}\cos\left(x\right)-\sin\left(x\right)\right)}$$


Answer



$$\lim_{\large x \to \frac{\pi}{3}}\frac{\tan^{3}\left(x\right)-3\tan\left(x\right)}{\cos\left(x+\frac{\pi}{6}\right)}=$$




$$\lim_{\large x \to \frac{\pi}{3}}\frac{\tan x (\tan x - \sqrt 3)(\tan x +\sqrt 3)}{\cos x \cos (\pi /6)-\sin x \sin (\pi /6)}=$$



$$\lim_{\large x \to \frac{\pi}{3}}\frac{(2\sec x)\tan x (\tan x - \sqrt 3)(\tan x +\sqrt 3)}{\sqrt 3 - \tan x}=-24$$
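A quick numerical sanity check of the limit value:

```python
import math

def g(x):
    return (math.tan(x)**3 - 3 * math.tan(x)) / math.cos(x + math.pi / 6)

for h in (1e-2, 1e-4, 1e-6):
    print(g(math.pi / 3 + h))  # approaches -24
```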


Thursday 23 June 2016

calculus - Definition of e









I'm very eager to know and understand the definition of $e$. Textbooks define $e$ as follows




$$ e = \lim_{p\to\infty} \left[1+\frac1{p}\right]^p \approx 2.71828 $$



Is there an "easy to understand" proof of this? I'm really looking for a derivation of this which is very intuitive and easy to comprehend.



By the way I'm watching this video lecture.


Answer



$\pi$ is the name of the constant relating the diameter and circumferance of a circle. It's a definition, a particular constant that we thought deserved a name. $e$ happens to be the name of a constant from a particular limit. Like $\pi$, we named it because we thought it would be useful.



In Physics, many, many constants have names. Gravitational constants, expansional constants, electrical constants, etc. Each is useful, but somewhat arbitrarily chosen.




But I suspect you won't like the apparently arbitrary nature of this. So I come up with something related: we call $2$ the number s.t. $1 + 1 = 2$. Why? What is its derivation? There is no derivation - we defined it, gave it a symbol, and a name.
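That said, the defining limit can at least be watched converging numerically:

```python
import math

for p in (10, 1000, 10**6):
    print(p, (1 + 1 / p) ** p)  # increases toward e

print(math.e)  # 2.718281828...
```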


real analysis - Find a bijection between 2 intervals




So I'm supposed to find a bijection between $[0,1) \longrightarrow [0,1]$. My attempt of solution is the function defined as



$$f(x):=\begin{cases}
2x, & \text{if} \,\, x=\dfrac{1}{2^n} \text{ for some } n \in \mathbb{N} \\
x, &\text{otherwise}
\end{cases}$$



Using this we have that if $x=\frac{1}{2}$, then $f(1/2)=1$ and so we got that covered. And for $x=\frac{1}{4}$ we have $f(1/4)=1/2$ and so on. Is this correct? Is it possible to assume that there is a bijection, when $n$ goes to infinity?



I also got another question: define a bijection from $[0,1) \longrightarrow [0,1) \times [0,1) \times \ \dots \ \times [0,1)$

$(n-$times$)$. My solution is just to define the value of $f(x)$ as a vector, i.e take $x \in [0,1)$ and $$f(x):=(x,x,x,\dots , x)$$ Is this correct?



Thank you in advance!


Answer



For the first function you either compute the inverse, and show that it is the right and left inverse, or you should show that it is injective and surjective (that is, take two elements and show that they are mapped to different images, and show that every element in $[0,1]$ has a pre-image).



The function is correct. But why do you say "assume there is a bijection"? What do you mean by $n$ goes to infinity? You should take care that $f$ is well-defined, which it is, since $x= \frac{1}{2^n}$ is either true or false.



For the second function. Is $(0,\frac{1}{2}, \cdots, 0)$ in the image? You may also should say $n$-tuple instead of vector, since the codomain is not equipped with a vector space structure.


Wednesday 22 June 2016

Is this function bijective, surjective and injective?

$\lfloor\cdot\rfloor: \mathbb Q \rightarrow\mathbb Z$ with $\lfloor x\rfloor :=$ floor of $x$.




I know a function is injective by using $f(x_1)=f(x_2) \Rightarrow x_1=x_2$
and a function is surjective if each element of the codomain, $y\in Y$, is the image of some element in the domain $x\in X$,
and bijective if the function is both injective and surjective.



I don't know what floor of $x$ is.

integration - $int_0^{pi/2}log^2(cos^2x)mathrm{d}x=frac{pi^3}6+2pilog^2(2)$???



I saw in a paper by @Jack D'aurizio the following integral

$$I=\int_0^{\pi/2}\log^2(\cos^2x)\mathrm{d}x=\frac{\pi^3}6+2\pi\log^2(2)$$
Below is my attempt.



$$I=4\int_0^{\pi/2}\log^2(\cos x)\mathrm{d}x$$
Then we define
$$F(a)=\int_0^{\pi/2}\log^2(a\cos x)\mathrm{d}x$$
So we have
$$F'(a)=\frac2a\int_0^{\pi/2}\log(a\cos x)\mathrm{d}x$$
Which I do not know how to compute. How do I proceed? Thanks.


Answer




Let $$I(a)=\int_0^{\frac {\pi}{2}} (\cos^2 x)^a dx$$



Hence we need $I''(0)$.



Now recalling the definition of Beta function we get $$I(a)=\frac 12 B\left(a+\frac 12 ,\frac 12\right)=\frac {\sqrt {\pi}}{2}\frac {\Gamma\left(a+\frac 12\right)}{\Gamma(a+1)}$$



Hence we have $$I''(a) =\frac {\sqrt {\pi}}{2}\frac {\Gamma\left(a+\frac 12\right)}{\Gamma(a+1)}\left(\left[\psi^{(0)}\left(a+\frac 12 \right)-\psi^{(0)}(a+1)\right]^2 +\psi^{(1)}\left(a+\frac 12 \right)-\psi^{(1)}(a+1)\right) $$



Substituting $a=0$ in above formula yields the answer.
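A direct numerical check of the closed form $\frac{\pi^3}{6}+2\pi\log^2 2$ (midpoint rule; the integrand's logarithmic singularity at $\pi/2$ is integrable, so a plain midpoint sum converges well enough):

```python
import math

def I(steps=200000):
    # midpoint-rule approximation of the integral of log^2(cos^2 x) on (0, pi/2)
    h = (math.pi / 2) / steps
    return sum(math.log(math.cos((i - 0.5) * h) ** 2) ** 2
               for i in range(1, steps + 1)) * h

closed_form = math.pi**3 / 6 + 2 * math.pi * math.log(2) ** 2
print(I(), closed_form)  # both ~ 8.1866
```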


Tuesday 21 June 2016

elementary number theory - Use mathematical induction to prove that any integer $nge2$ is either a prime or a product of primes.




Use strong mathematical induction to prove that any integer $n\ge2$ is either a prime or a product of primes.



I know the steps of weak mathematical induction...
Basis step: $P(n)$ for $n=1$ or an arbitrary $n_0$ ... show that it is true.
Inductive hypothesis: $P(n)$ for $n=k$ ... assume that it is true for $n=k$.
Inductive step: $P(n)$ for $n=k+1$ ... show that it is true for $n=k+1$.


Answer



Strong induction means the following: suppose $P(0)$ holds, and that $P(k)$ for all $k<n$ together imply $P(n)$; then $P(n)$ holds for all $n$.

For this question, our base is $n=2$, which is prime, so the statement holds. Now assume $n>2$ and that every $k$ with $2\le k<n$ is either prime or a product of primes. If $n$ is prime we are done. Otherwise $n=ab$ with $2\le a,b<n$, and by the inductive hypothesis each of $a$ and $b$ is a prime or a product of primes; hence so is $n$.
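The inductive argument translates directly into a recursive factorization routine (a sketch; `prime_factors` is a name chosen here for illustration, not from the question):

```python
def prime_factors(n):
    # mirrors the strong induction: either n is prime, or n = a*b with
    # 2 <= a, b < n, and we recurse on the strictly smaller factors
    assert n >= 2
    for a in range(2, int(n**0.5) + 1):
        if n % a == 0:
            return prime_factors(a) + prime_factors(n // a)
    return [n]  # no divisor up to sqrt(n): n is prime

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
print(prime_factors(97))   # [97]
```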

Monday 20 June 2016

calculus - Using Euler's formula for cos/sin 12 degrees and cos/sin 48 degrees

$(\cos12^\circ+i\sin12^\circ+\cos48^\circ+i\sin48^\circ)$. Using Euler's Formula, turn this into exponential form (i.e. something like $e^{i\frac{5\pi}{12}}$).




Would I need to use the $\cos$ and $\sin$ sum and difference formulas? I tried doing that and it became messy very quickly. Is there an alternative?

calculus - Elegant way to make a bijection from the set of the complex numbers to the set of the real numbers





Make a bijection that shows $|\mathbb C| = |\mathbb R| $



First I thought of dividing the complex numbers in the real parts and the complex parts and then define a formula that maps those parts to the real numbers. But I don't get anywhere with that. By this I mean that it's not a good enough defined bijection.




Can someone give me a hint on how to do this?




Maybe I need to read more about complex numbers.


Answer




You can represent every complex number as $z=a+ib$, so let us denote this complex number as $(a,b) , ~ a,b \in \mathbb R$. Hence we have cardinality of complex numbers equal to $\mathbb R^2$.



So finally, we need a bijection in $\mathbb R$ and $\mathbb R^2$.



This can be shown using the argument used here.




Note that since there is a bijection from $[0,1]\to\Bbb R$ (see appendix), it is enough to find a bijection from the unit square $[0,1]^2$ to the unit interval $[0,1]$. By constructions in the appendix, it does not really matter whether we consider $[0,1]$, $(0,1]$, or $(0,1)$, since there are easy bijections between all of these.





Mapping the unit square to the unit interval




There are a number of ways to proceed in finding a bijection from the unit square to the unit interval. One approach is to fix up the "interleaving" technique I mentioned in the comments, writing $\langle 0.a_1a_2a_3\ldots, 0.b_1b_2b_3\ldots\rangle$ to $0.a_1b_1a_2b_2a_3b_3\ldots$. This doesn't quite work, as I noted in the comments, because there is a question of whether to represent $\frac12$ as $0.5000\ldots$ or as $0.4999\ldots$. We can't use both, since then $\left\langle\frac12,0\right\rangle$ goes to both $\frac12 = 0.5000\ldots$ and to $\frac9{22} = 0.40909\ldots$ and we don't even have a function, much less a bijection. But if we arbitrarily choose the second representation, then there is no element of $[0,1]^2$ that is mapped to $\frac12$, and if we choose the first there is no element that is mapped to $\frac9{22}$, so either way we fail to have a bijection.



This problem can be fixed.



First, we will deal with $(0,1]$ rather than with $[0,1]$; bijections between these two sets are well-known, or see the appendix. For real numbers with two decimal expansions, such as $\frac12$, we will agree to choose the one that ends with nines rather than with zeroes. So for example we represent $\frac12$ as $0.4999\ldots$.



Now instead of interleaving single digits, we will break each input number into chunks, where each chunk consists of some number of zeroes (possibly none) followed by a single non-zero digit. For example, $\frac1{200} = 0.00499\ldots$ is broken up as $004\ 9\ 9\ 9\ldots$, and $0.01003430901111\ldots$ is broken up as $01\ 003\ 4\ 3\ 09\ 01\ 1\ 1\ldots$.




This is well-defined since we are ignoring representations that contain infinite sequences of zeroes.



Now instead of interleaving digits, we interleave chunks. To interleave $0.004999\ldots$ and $0.01003430901111\ldots$, we get $0.004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$. This is obviously reversible. It can never produce a result that ends with an infinite sequence of zeroes, and similarly the reverse mapping can never produce a number with an infinite sequence of trailing zeroes, so we win. A problem example similar to the one from a few paragraphs ago is resolved as follows: $\frac12 = 0.4999\ldots$ is the unique image of $\langle 0.4999\ldots, 0.999\ldots\rangle$ and $\frac9{22} = 0.40909\ldots$ is the unique image of $\langle 0.40909\ldots, 0.0909\ldots\rangle$.
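The chunking step can be sketched for finite decimal prefixes (a toy illustration with strings; the real inputs are infinite expansions, so this only demonstrates the mechanics):

```python
import re

def chunks(digits):
    # a chunk = zero or more '0's followed by a single nonzero digit
    return re.findall(r"0*[1-9]", digits)

def interleave(a, b):
    # alternate chunks of a and b (truncating to the shorter prefix here)
    return "".join(x + y for x, y in zip(chunks(a), chunks(b)))

print(chunks("004999"))                        # ['004', '9', '9', '9']
print(interleave("004999", "01003430901111"))  # starts 004 01 9 003 9 4 9 ...
```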



summation - Alternative ways to find the sum of this series

Question: Find the sum of the series $3+7+13+21+\dotsm$ up to $n$ terms.




My attempt:



Consider the arithmetic series $4+6+8+\dotsm$ upto $n$ terms.



The sum of first $k$ terms of this series $=S_k=\frac{k}{2}[2\times 4+(k-1)2 ]=\frac{k}{2}[8+2k-2]=\frac{k}{2}(2k+6)=k(k+3)$






Now, consider the sequence $3,7,13,21,\dotsm$ upto $n$ terms.




This sequence can be written as $3,(3+4),(3+4+6),(3+4+6+8),\dotsm$ upto $n$ terms.



General term of this sequence $=t_k=3+S_{k-1}=3+(k-1)(k+2)=3+k^2+2k-k-2=k^2+k+1$



Therefore the sum of the series $3+7+13+21+\dotsm$ up to $n$ terms is
$$S_n^{'}=\sum_{k=1}^{n}t_k=\sum_{k=1}^{n}(k^2+k+1)=\frac{n(n+1)(2n+1)}{6}+\frac{n(n+1)}{2}+n=n\left(\frac{2n^2+3n+1+3n+3+6}{6}\right)=\frac{n}{6}(2n^2+6n+10)=\frac{n}{3}(n^2+3n+5)$$



My problem: Is there any elegant method to find the sum of this series?
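Whatever derivation one prefers, the closed form $S_n'=\frac n3(n^2+3n+5)$ is easy to test numerically:

```python
def S(n):
    # closed form derived above; n(n^2 + 3n + 5) is always divisible by 3
    return n * (n * n + 3 * n + 5) // 3

terms = [k * k + k + 1 for k in range(1, 11)]  # general term t_k = k^2 + k + 1
print(terms[:4])          # [3, 7, 13, 21]
print(sum(terms), S(10))  # both 450
```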

Formula for a sequence




I have this sequence:
$-2, 1, 6, 13, 22, 33, ...$



where each term is the previous plus an odd number. The odd numbers are increasing from 3. I am asked to find an explicit formula with respect to $n$ which can give me the $n$-th number in the sequence. The sequence starts at $n = 1$.



I tried to find a pattern, without success. I then tried to write the sequence as a recursive formula:
$a_1 = -2$
$a_{n + 1} = a_n + 2n + 1$



and then I got stuck.




Can you please advise me about the way to go?



Thanks,
rubik


Answer



Hint: Add $3$ to every number in your sequence.



Remark: A related result is that the sum of the first $n$ odd positive integers is equal to $n^2$. This follows easily from the fact that $n^2-(n-1)^2=2n-1$. There is also an attractive proof without words.
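Working out the hint: adding $3$ turns the sequence into $1,4,9,16,\dots$, suggesting $a_n=n^2-3$, which a quick check confirms against both the listed terms and the recursion:

```python
seq = [-2, 1, 6, 13, 22, 33]

a = [n * n - 3 for n in range(1, 7)]  # candidate closed form a_n = n^2 - 3
print(a)  # [-2, 1, 6, 13, 22, 33]

# also verify the recursion a_{n+1} = a_n + 2n + 1
print(all(a[n] == a[n - 1] + 2 * n + 1 for n in range(1, 6)))  # True
```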


trigonometry - Use an expression for $frac{sin(5theta)}{sin(theta)}$ to find the roots of the equation $x^4-3x^2+1=0$ in trigonometric form





Question: Use an expression for $\frac{\sin(5\theta)}{\sin(\theta)}$ , ($\theta \neq k \pi)$ , k an integer to find the roots of the equation $x^4-3x^2+1=0$ in trigonometric form?









What I have done



By using de Moivre's theorem and expanding



$$ cis(5\theta) = (\cos(\theta) + i\sin(\theta))^5$$



$$ \cos(5\theta) + i \sin(5\theta) = \cos^5(\theta) - 10\cos^3(\theta)\sin^2(\theta) + 5\cos(\theta)\sin^4(\theta) +i(5\cos^4(\theta)\sin(\theta)-10\cos^2(\theta)\sin^3(\theta) + \sin^5(\theta))$$



Considering only $Im(z) = \sin(5\theta)$




$$ \sin(5\theta) = 5\cos^4(\theta)\sin(\theta)-10\cos^2(\theta)\sin^3(\theta) + \sin^5(\theta) $$



$$ \therefore \frac{\sin(5\theta)}{\sin(\theta)} = \frac{5\cos^4(\theta)\sin(\theta)-10\cos^2(\theta)\sin^3(\theta) + \sin^5(\theta)}{\sin(\theta)}$$



$$ \frac{\sin(5\theta)}{\sin(\theta)} = 5\cos^4(\theta) -10\cos^2(\theta)\sin^2(\theta) + \sin^4(\theta) $$



How should I proceed , I'm stuck trying to incorporate what I got into the equation..


Answer



HINT:




Using Prosthaphaeresis Formula,



$$\sin5x-\sin x=2\sin2x\cos3x=4\sin x\cos x\cos3x$$



If $\sin x\ne0,$
$$\dfrac{\sin5x}{\sin x}-1=4\cos x\cos3x=4\cos x(4\cos^3x-3\cos x)=(4\cos^2x)^2-3(4\cos^2x)$$



OR replace $\sin^2x$ with $1-\cos^2x$ in your $$ 5\cos^4x-10\cos^2x\sin^2x + \sin^4x$$




Now if $\sin5x=0,5x=n\pi$ where $n$ is any integer



$x=\dfrac{n\pi}5$ where $n\equiv0,\pm1,\pm2\pmod5$



So, the roots of
$\dfrac{\sin5x}{\sin x}=0\implies x=\dfrac{n\pi}5$ where $n\equiv\pm1,\pm2\pmod5$



But $$\dfrac{\sin5x}{\sin x}=(4\cos^2x)^2-3(4\cos^2x)+1$$
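Putting the pieces together: setting $t=4\cos^2x$, the quartic $x^4-3x^2+1$ vanishes exactly when $\frac{\sin 5x}{\sin x}$ does, so its roots should be $x=2\cos\frac{n\pi}5$ for $n=1,2,3,4$, which checks out numerically:

```python
import math

for n in (1, 2, 3, 4):
    x = 2 * math.cos(n * math.pi / 5)
    print(n, x, x**4 - 3 * x**2 + 1)  # residual ~ 0
```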


calculus - Could $4+2+4+2+4+2+cdots = -1 $?

In physics classes, on this StackExchange and even in blogs the sum $1 + 2 + 3 + 4 + \cdots = - \frac{1}{12} $ has been under the microscope.





As a consequence of the trapezoid rule, the error to the Riemann sum is bounded by the second derivative.




$$ \int_0^N f(x) \; dx = \frac{1}{2}f(0) + f(1) + \dots + f(N-1) + \frac{1}{2}f(N) + O\,(N \| f \|_{\dot{C}^2} ) $$



Letting $f(x) = (1-x/N)_+$ we get $\sum_{i=0}^{N-1} 1 = -\frac{1}{2} + \int_0^N 1 \, dx + O(1)$.



I am wondering what happen if we use the Simpson rule:



$$ \int_0^N f(x) \; dx = \frac{1}{3} \big( f(0) + 4f(1) + 2f(2) + \dots + 4f(N-1) + f(N) \big) + O\,(N || f ||_{\dot{C}^4} ) $$



Now we plug in $f(x) = (1-x)_+$ and get




$$\frac{1}{3} (4 + 2 + 4 + 2 + \cdots ) = -\frac{1}{3} + \int_0^N f(x) \; dx + O(N^{-3}) $$



Is that still consistent with the other types of sums you get from Euler-Macularin type summation methods?

Sunday 19 June 2016

geometry - Packing problem -uniform distribution by Percentage of coverage - circles on a plane .



EDIT :



After reading the comments, I understood that in fact, my question was not so clear . so I will put it in simple terms of programming :




First Scenario :



My Input :



$a = Plane area [ nominally X,Y ] .



$r == Circles ( "dots" or "discs" ) Diameter or radius [r or d].



$per == Wanted Coverage in percentage .




( desired ) Output :



An array of aligned ( x-axis && Y-axis ) circles with given radius ( $r ) distributed in $row,$column to cover the exact wanted percentage ( $per ) .



$d_y == $d_x == distance in horizontal,vertical ( uniform distance for $row, $column )



Unknown :



$d_y == $d_x == distance in horizontal,vertical




Problem :
Given $a , $r and $per , what are the distribution distances between the circles ( $d_y, and $d_x ) that will result in a coverage of exact $per of $a .



Second Scenario ( derivative )



Input :



$a = Plane area [ nominally X,Y ] .




$d_y, $d_x = Distances between circles ( "dots" ) on x, y axis .



$per = Wanted Coverage in percentage .



( desired ) Output :



An array of aligned ( x-axis && Y-axis ) with radius $r circles with the given distances between $row and $column, that will result in a coverage of exact $per of $a .



Problem :




Given $d_y , and $d_x , What is the Circle's ( "dots" or "discs" ) Diameter or radius [$r or d] that will result in a coverage of exact $per of $a .



Unknown :



$r = Circle's diameter or radius .



Original Question :



So, first, I am not a mathematician, and I only encounter kids-level math on my daily work when programming.




I need to write a simple CAD macro that given a wanted percentage coverage of a plane, and the diameter ( or radius ) of a "dot", actually a circle , will distribute the points in such distances to allow the exact wanted coverage percentage .



In other words : given the percentage of wanted coverage , circles size and plane size , what is the distance between the points ( straight line circumference , not center if possible )



Or with a simple image:



Given Y,X of a plane and [r] of circle, and wanted coverage percentage of plane by "dots" or circles ( say 32% ) how to know the distance D[H] - horizontal and D[V]- vertical



I know I also need to assume that the "dots" center in edge rows are on the edge itself, or alternative the distance from edges is equal to the distance between them ..




If it were a square and not a circle, I could have managed with very simple calculations . But now I am not sure of myself .
Does calculating the circle area $\pi r^2$, subtracting it from the circumscribing square, and then distributing the squares work?
(Later I also need the reverse - given the distances and the percentage - calculate the circles diameter ( or r )



( I found this "circle packing" article on wikipedia - but it does not address the wanted distances )



last Edit ( result )



I ended up using some combination of the first answer by Kimchi lover and second Ross Millikan .




Both valid as far as my non-mathematical self can understand .(too bad can only accept one .. )
I thought I would post the result (which is not final, but works), to show what your help produced:



[image: the resulting dot pattern in CAD]



So thanks again ..


Answer



I hope I have understood what is giving you difficulty.



In terms of your rewrite, the symbols I use here match up with yours as follows:$$ d_x = H,\space d_y = V,\space \mathrm {per} = C$$




For starters, with a rectangular grid as illustrated, you want $\pi R^2 / (V\times H) = C$, where $C$ is the desired fractional coverage ($.33,$ say), $R$ is the dot's radius, and $V$ and $H$ are the vertical and horizontal spacings. Everything is specified except the $H$ and $V$ parameters. This is assuming that $H\ge 2R$ and $V\ge 2R$, so the dots don't overlap. Then $HV =\pi R^2 / C$ is what you want. To get a square grid, $H=V=\sqrt{\pi R^2 / C}.$ The densest you can get this way is $\pi/4\approx.785$.



To use this to draw the dots, for each integer $m$ and $n$, put a dot of radius $R$ with center at $(mH,nV)$. Or for integer $m$ and $n$ in the range $1\le m \le M$ and $1\le n \le N$ put a dot. Now the $MN$ dots have the desired density in a $(M+1)H\times(N+1)V$ rectangle.



An equilateral triangular grid will allow denser dot placement and a different set of formulas. Here the $(m,n)$-th dot is centered at $((m+n/2)H, n (\sqrt 3 / 2)H)$, with $C= \pi R^2/ (H^2 \sqrt 3 / 2)$. The fundamental region is an equilateral triangle, with side length $H$. This is what you get if you put a dot at the center of each hexagon in a honeycomb grid. The dots-don't-overlap condition is $2R \le H$, which caps the coverage at $\pi/(2\sqrt 3)\approx .9069$.

In general, the coverage $C$ is the ratio of the area of a dot to the area of the fundamental region of the pattern of dots. In the edited version of the problem, both scenarios are covered by the equation $\pi R^2 / (V\times H) = C$ but in scenario 1 the unknowns are $H$ and $V$ and in scenario 2, $R$ is the unknown.
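Both scenarios reduce to solving $\pi R^2/(VH)=C$ for the one unknown; a minimal sketch, with function names of my own choosing:

```python
import math

# pi * R^2 / (V * H) = C, solved for the unknown in each scenario.

def spacing_for_coverage(R, C):
    # Scenario 1: radius and coverage known; square-grid spacing H = V.
    return math.sqrt(math.pi * R ** 2 / C)

def radius_for_coverage(H, V, C):
    # Scenario 2: spacings and coverage known; solve for the radius R.
    return math.sqrt(C * H * V / math.pi)

H = spacing_for_coverage(2.0, 0.33)
R = radius_for_coverage(H, H, 0.33)   # round-trip recovers R = 2.0
```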


real analysis - If $f\colon \mathbb{R} \to \mathbb{R}$ is such that $f(x + y) = f(x) f(y)$ and continuous at $0$, then continuous everywhere




Prove that if $f\colon\mathbb{R}\to\mathbb{R}$ is such that $f(x+y)=f(x)f(y)$ for all $x,y$, and $f$ is continuous at $0$, then it is continuous everywhere.




If there exists $c \in \mathbb{R}$ such that $f(c) = 0$, then
$$f(x + c) = f(x)f(c) = 0.$$
As every real number $y$ can be written as $y = x + c$ for some real $x$, this function is either everywhere zero or nowhere zero. The latter case is the interesting one. So let's consider the case that $f$ is not the constant function $f = 0$.




To prove continuity in this case, note that for any $x \in \mathbb{R}$
$$f(x) = f(x + 0) = f(x)f(0) \implies f(0) = 1.$$



Continuity at $0$ tells us that given any $\varepsilon_0 > 0$, we can find $\delta_0 > 0$ such that $|x| < \delta_0$ implies
$$|f(x) - 1| < \varepsilon_0.$$



Okay, so let $c \in \mathbb{R}$ be fixed arbitrarily (recall that $f(c)$ is nonzero). Let $\varepsilon > 0$. By continuity of $f$ at $0$, we can choose $\delta > 0$ such that
$$|x - c| < \delta\implies |f(x - c) - 1| < \frac{\varepsilon}{|f(c)|}.$$



Now notice that for all $x$ such that $|x - c| < \delta$, we have

$$\begin{align*}
|f(x) - f(c)| &= |f(x - c + c) - f(c)|\\
&= |f(x - c)f(c) - f(c)|\\
&= |f(c)| |f(x - c) - 1|\\
&\lt |f(c)| \frac{\varepsilon}{|f(c)|}\\
&= \varepsilon.
\end{align*}$$
Hence $f$ is continuous at $c$. Since $c$ was arbitrary, $f$ is continuous on all of $\mathbb{R}$.



Is my procedure correct?



Answer



One easier thing to do is to notice that $f(x)=(f(x/2))^2$, so $f$ is nonnegative. Assume that it is never zero (otherwise the function is identically zero); then $f$ is strictly positive, so you can define $g(x)=\ln f(x)$, and this function $g$ will satisfy the Cauchy functional equation
$$ g(x+y)=g(x)+g(y)$$
and the theory for this functional equation is well known, and it is easy to see that $g$ is continuous if and only if it is continuous at $0$.
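As a numerical illustration (my own, not part of the answer): the continuous nonzero solutions are exactly $f(x)=a^x$, and for such an $f$ the function $g=\ln f$ is indeed additive.

```python
import math

# For a = 2.5 (an arbitrary choice), f(x) = a^x solves the
# multiplicative equation, and g = ln f solves Cauchy's additive one.
a = 2.5
f = lambda x: a ** x
g = lambda x: math.log(f(x))

x, y = 0.7, 1.9
mult_defect = f(x + y) - f(x) * f(y)      # ~ 0
add_defect = g(x + y) - (g(x) + g(y))     # ~ 0
```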


Saturday 18 June 2016

calculus - Calculate fundamental limits using l'Hospital rule



So I have this essay where a question is "Calculate the three fundamental limits using l'Hospital's rule"



I find easy to calculate $\lim_{x \rightarrow 0}\frac{\sin(x)}{x}$ and $\lim_{x \rightarrow 0}\frac{e^x - 1}{x}$, however the one I can't understand is the limit $\lim_{x \rightarrow +\infty}\left(1 + \frac{1}{x}\right)^x$... How exactly am I supposed to use l'Hospital's rule here?



I tried writing $\left(1 + \frac{1}{x}\right)^x$ as $\frac{(x+1)^x}{x^x}$ and using the fact that $\frac{d(x^x)}{dx} = x^x(\ln(x) + 1)$, but instead of simplifying, applying l'Hospital's rule that way actually makes things worse...




Can anyone point me to the right direction?


Answer



HINT



By the well known exponential manipulation $A^B=e^{B\log A}$, we have



$$\left(1 + \frac{1}{x}\right)^x=\large{e^{x\log \left(1 + \frac{1}{x}\right)}}=\large{e^{\frac{\log \left(1 + \frac{1}{x}\right)}{\frac1x}}}$$



and $\frac{\log \left(1 + \frac{1}{x}\right)}{\frac1x}$ is an indeterminate form $\frac{0}{0}$.
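A quick numerical check of the hint (the sample value of $x$ is my own choice): the exponent tends to $1$, so the original expression tends to $e^1 = e$.

```python
import math

# The exponent log(1 + 1/x) / (1/x) tends to 1 as x grows,
# so (1 + 1/x)^x tends to e.
x = 1e6
exponent = math.log(1 + 1 / x) * x
value = (1 + 1 / x) ** x          # close to math.e
```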



How to show infinite square-free numbers?

Here's the exact wording of the problem:



"The squares, of course, are the numbers 1,4,9,... The square-free numbers are the integers 1,2,3,5,6,... which are not divisible by the square of any prime (so that 1 is both square and square-free). Show that every positive integer is uniquely representable as the product of a square and a square-free number. Show that there are infinitely many square-free numbers."



I was able to prove the first statement with a proof by contradiction, but can't figure out the second part. Does it rely on the first part?
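The first statement can at least be checked computationally; here is a sketch (the helper name is mine) that extracts the square and square-free parts by dividing out prime squares:

```python
# Extract the square part s^2 and square-free part m of n,
# so that n == s*s*m with m square-free.
def square_squarefree_parts(n):
    s, m, d = 1, n, 2
    while d * d <= m:
        while m % (d * d) == 0:   # divide out the square of each prime
            m //= d * d
            s *= d
        d += 1
    return s, m

s, m = square_squarefree_parts(72)   # 72 = 6^2 * 2
```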

real analysis - How to show that the sequence is not monotonic



Let the sequence $a_n=\frac{n^{30}}{2^n}$.



I want to check if the sequence is monotonic or strictly monotonic. If it is not monotonic, I want to check if it is monotonic from an index and on. Also if it is bounded.




I have thought to consider that the sequence is increasing.



Then



$$a_{n+1} \geq a_n \Rightarrow \frac{(n+1)^{30}}{2^{n+1}} \geq \frac{n^{30}}{2^n} \Rightarrow 2^n (n+1)^{30} \geq n^{30} 2^{n+1} \Rightarrow (n+1)^{30} \geq 2n^{30} \Rightarrow \left( \frac{n+1}{n}\right)^{30} \geq 2 \Rightarrow \left( 1+\frac{1}{n}\right)^{30} \geq 2$$



Can we find from this a restriction for $n$ and then conclude that the sequence is not increasing? And after that the same to show that $a_n$ is not decreasing?



Is there also an other way to show that the sequence is not monotonic?



Answer



Since $a_2>a_1$, the sequence is not decreasing. And since $a_1=\frac12$ and $\lim_{n\to\infty}a_n=0$, the sequence is not increasing.
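The answer's claim can be verified exactly with rational arithmetic (my own check, using the threshold $n \approx 1/(2^{1/30}-1) \approx 42.8$ that follows from the question's inequality):

```python
from fractions import Fraction

# Exact check: a_n = n^30 / 2^n rises up to n = 43 and falls
# afterwards, so it is monotone only from an index on.
a = lambda n: Fraction(n ** 30, 2 ** n)

not_decreasing = a(2) > a(1)
rising = all(a(n + 1) > a(n) for n in range(1, 43))
falling = all(a(n + 1) < a(n) for n in range(43, 200))
```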


Friday 17 June 2016

real analysis - Prove that the function $f(x)=\frac{\sin(x^3)}{x}$ is uniformly continuous.



Consider the continuous function $f(x)=\frac{\sin(x^3)}{x}$ on the interval $(0,\infty)$. I want to prove that it is uniformly continuous. The function is clearly continuous. Now $|f(x)-f(y)|=\left|\frac{\sin(x^3)}{x}-\frac{\sin(y^3)}{y}\right|\leq \left|\frac{1}{x}\right|+\left|\frac{1}{y}\right|$. But I am not sure whether this will work.



I was also trying another way, using the Lagrange Mean Value Theorem, to see whether a Lipschitz condition applies, but $f'(x)=3x^2\frac{\cos(x^3)}x-\frac{\sin(x^3)}{x^2}$



Any hint...



Answer



Hint:



Any bounded, continuous function $f:(0,\infty) \to \mathbb{R}$ where $f(x) \to 0$ as $x \to 0,\infty$ is uniformly continuous. The derivative if it exists does not have to be bounded.



Note that $\sin(x^3)/x = x^2 \sin(x^3)/x^3 \to 0\cdot 1 = 0$ as $x \to 0$.



This is also a great example of a uniformly continuous function with an unbounded derivative.


divisibility - What is the remainder when 318 is divided by 100?



I know people here might find it silly but I want to clear my understanding about remainder.



I had a fight with my brother over this question. I suggested that remainder is something that is left when two numbers can't further be divided.



This is what I think remainder should be:-
$$\dfrac{318}{100}=3+\dfrac{18}{100}$$
$$\dfrac{318}{100}=3+\dfrac{9}{50}$$So according to me 9 should be the remainder of $\dfrac{318}{100}$ as 9 cannot further be divided by $50$.




This is what my brother does: [image of his working]



So according to him the remainder should be $18$. But (I argued) this is not actually the remainder; it is what we call the mod of the two numbers (which is what computers usually use to calculate remainders). My brother says that 9 is not the remainder of $\dfrac{318}{100}$ but the remainder of $\dfrac{159}{50}$. Aren't they the same thing?



Can anyone tell me, who is correct.


Answer



The remainder of $n$ divided by $d$ is the unique integer $r$ satisfying $0\le r < d$ and $n=qd+r$ for some integer $q$. When $n= 318$ and $d=100$, $r=18$ satisfies the criterion (with $q=3$).
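In Python this criterion is exactly what `divmod` computes (my own illustration):

```python
# divmod returns the quotient and remainder of the division criterion.
q, r = divmod(318, 100)    # (3, 18): 318 == 3*100 + 18, 0 <= 18 < 100

# Reducing 18/100 to 9/50 changes the divisor too: 9 is the remainder
# of 159 divided by 50, a different division problem.
q2, r2 = divmod(159, 50)   # (3, 9)
```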


calculus - A series involving the harmonic numbers: $\sum_{n=1}^{\infty}\frac{H_n}{n^3}$

Let $H_{n}$ be the nth harmonic number defined by $ H_{n} := \sum_{k=1}^{n} \frac{1}{k}$.



How would you prove that



$$\sum_{n=1}^{\infty}\frac{H_n}{n^3}=\frac{\pi^4}{72}?$$







Simply replacing $H_{n}$ with $\sum_{k=1}^{n} \frac{1}{k}$ does not seem like a good starting point. Perhaps another representation of the nth harmonic number would be more useful.
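A numerical check of the claimed closed form (my own, not a proof):

```python
import math

# Partial sums of H_n / n^3, with H_n accumulated incrementally.
N = 10_000
H = 0.0
total = 0.0
for n in range(1, N + 1):
    H += 1.0 / n
    total += H / n ** 3

target = math.pi ** 4 / 72   # the claimed value
```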

propositional calculus - Does the => truth table break mathematical induction?



Since $F \Rightarrow F$ and $F \Rightarrow T$ both evaluate to $T$ with the truth table for $\Rightarrow$, does this not break mathematical induction?



For example, once you show the base case holds for a proposition $P$, then you could do the induction hypothesis as follows: "Suppose $P(k)$ does not hold. Since $P(k+1)$ will either hold or not with this assumption, $P(k) \Rightarrow P(k+1)$, thus $P(n)$ holds for all $n \in \mathbb{N}$."


Answer



No, it doesn't. You can only deduce from true statements. That's independent of the truth table (and even is true in logic systems that don't know truth tables).




But imagine if $F\implies T$ would be false. Then you could prove $1=0$ as follows:




Let's assume $0=1$ were false. Now $0=1 \implies 1=0$ (symmetry). But $1=0\land 0=1\implies 1=1$ (transitivity). Therefore we have proven $0=1\implies 1=1$. But since $1=1$ is undeniably true, that would mean a false statement implies a true statement, violating that $(F\implies T)$ is false. Thus $0=1$ cannot be false by contradiction.




As you can see, actually $(F \implies T)=F$ would break logic.
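A brute-force companion to this answer (my own illustration): among all 16 candidate truth tables for a binary connective, modus ponens plus two standard tautologies already force the usual table, including $F \Rightarrow T = T$.

```python
from itertools import product

# Enumerate all 16 binary boolean connectives as dicts
# {(p, q): value} and keep those satisfying three requirements.
candidates = []
for vals in product([False, True], repeat=4):
    imp = dict(zip([(False, False), (False, True),
                    (True, False), (True, True)], vals))
    ok = (
        imp[(True, True)] and not imp[(True, False)]      # modus ponens
        and all(imp[(p, p)] for p in (False, True))       # p => p
        and all(imp[(p, p or q)]                          # p => (p or q)
                for p in (False, True) for q in (False, True))
    )
    if ok:
        candidates.append(imp)
# exactly one table survives, and it sets F => T to True
```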


Thursday 16 June 2016

ordinary differential equations - On the Proof of the Uniqueness/Existence Theorem





I am trying to show that $$|\underline{x}(t)-\underline{y}(t)|\leq \left|\underline{x}(t_0)-\underline{y}(t_0)\right|+\int_{t_0}^{t}\left|\underline{f}(\underline{x},s)-\underline{f}(\underline{y},s)\right| \ ds,$$
where $$\underline{x}(t)=\underline{x}(t_0)+\int_{t_0}^{t}\underline{f}(\underline{x},s) \ ds, \ \underline{y}(t)=\underline{y}(t_0)+\int_{t_0}^{t}\underline{f}(\underline{y},s) \ ds.$$ This is part of my proof for the uniqueness/existence theorem.




This is my working so far.



\begin{align}
|\underline{x}(t)-\underline{y}(t)|&\leq |\underline{x}(t)|+\left|-\underline{y}(t)\right| \\

&= \left| \underline{x}(t_0)+\int_{t_0}^{t}\underline{f}(\underline{x},s) \ ds\right|+\left| -\underline{y}(t_0)-\int_{t_0}^{t}\underline{f}(\underline{y},s) \ ds\right| \\
&\leq |\underline{x}(t_0)|+\left|\int_{t_0}^{t}\underline{f}(\underline{x},s) \ ds\right|+\left|-\underline{y}(t_0)\right|+\left| -\int_{t_0}^{t}\underline{f}(\underline{y},s) \ ds\right| \\
\end{align}

I don't see how to combine the terms as desired.


Answer



Hint: $|(a+b)-(c+d)|=|(a-c)+(b-d)| \leq |a-c| +|b-d|$. (Your first step makes it impossible to complete the proof). [Take $a$ and $b$ to be the first and second terms of $x(t)$ and $c$ and $d$ to be the first and second terms of $y(t)$].


probability theory - Show that $\lim\limits_{n\rightarrow\infty} e^{-n}\sum\limits_{k=0}^n \frac{n^k}{k!}=\frac{1}{2}$



Show that $\displaystyle\lim_{n\rightarrow\infty} e^{-n}\sum_{k=0}^n \frac{n^k}{k!}=\frac{1}{2}$ using the fact that if $X_j$ are independent and identically distributed as Poisson(1), and $S_n=\sum\limits_{j=1}^n X_j$ then $\displaystyle\frac{S_n-n}{\sqrt{n}}\rightarrow N(0,1)$ in distribution.



I know that $\displaystyle\frac{e^{-n} n^k}{k!}$ is the pmf of a random variable $K$ that is Poisson$(n)$, and that $S_n=\sum\limits_{j=1}^n X_j$ is distributed Poisson$(n)$, but I don't know what I can do next.


Answer



Hint: $e^{-n} \sum_{k=0}^n \dfrac{n^k}{k!}$ is the probability of what?
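Following the hint, the sum is $P(\mathrm{Poisson}(n) \le n)$; computing it directly (my own check, using log-space terms to avoid overflow) shows it drifting toward $1/2$:

```python
import math

# P(Poisson(n) <= n) computed via log-terms (lgamma) so that
# n^k / k! never overflows for large n.
def poisson_cdf_at_mean(n):
    return sum(math.exp(k * math.log(n) - n - math.lgamma(k + 1))
               for k in range(n + 1))

p = poisson_cdf_at_mean(1000)   # about 0.508, drifting toward 1/2
```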



Wednesday 15 June 2016

calculus - Compute $\lim\limits_{x\to -2^-}\frac{\sin(x+2)}{|4-x^2|}$ without L'Hopital's rule


Compute, without L'Hopital's rule, the limit $$\lim_{x\to -2^-}\frac{\sin(x+2)}{|4-x^2|}$$





Since $x\to -2^-$ , the denominator can be rewritten as $-4+x^2$, but there isn't much more I've been able to do (I tried using $\sin(x+2)=\sin x\cos2+\sin2\cos x$ without getting much out of it). Thanks for your answers.
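A numerical sanity check (mine, not a proof): for $x<-2$ we have $|4-x^2|=(x-2)(x+2)$, which suggests the one-sided limit is $-\frac14$.

```python
import math

# Approach -2 from the left and watch the values settle near -1/4.
def f(x):
    return math.sin(x + 2) / abs(4 - x * x)

values = [f(-2 - 10.0 ** -k) for k in (2, 4, 6)]
```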

calculus - Calculate the limit of the sequence $a_n = n^{\frac{2}{3}}\cdot ( \sqrt{n-1} + \sqrt{n+1} -2\sqrt{n} )$



I've been struggling with this one for a bit too long:



$$
\lim_{n \to \infty} a_n, \quad \text{where } a_n = n^\frac{2}{3}\cdot ( \sqrt{n-1} + \sqrt{n+1} -2\sqrt{n} )$$



What I've tried so far: the inner expression is equivalent to the following:




$$ \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1}-\sqrt{n} + \sqrt{n+1} -\sqrt{n} ) $$



Then I tried multiplying each of the expression by their conjugate and got:



$$
\lim_{n \to \infty} n^\frac{2}{3}\cdot \left( \frac{1}{\sqrt{n+1} +\sqrt{n}} - \frac{1}{\sqrt{n-1} +\sqrt{n}} \right)
$$



But now I'm in a dead end.

Since I have this annoying $n^\frac{2}{3}$ outside the brackets, each of my attempts to finish ends up with the indeterminate form $\infty\cdot 0$.



I've thought about using the squeeze theorem somehow, but didn't manage to connect the dots right.



Thanks.


Answer



Keep on going... the difference between the fractions is



$$\frac{\sqrt{n-1}-\sqrt{n+1}}{(\sqrt{n+1}+\sqrt{n})(\sqrt{n-1}+\sqrt{n})}$$




which, by similar reasoning as before (diff between two squares...), produces



$$\frac{-2}{(\sqrt{n-1}+\sqrt{n+1})(\sqrt{n+1}+\sqrt{n})(\sqrt{n-1}+\sqrt{n})}$$



Now, as $n \to \infty$, the denominator behaves as $(2 \sqrt{n})^3 = 8 n^{3/2}$. Thus, $\lim_{n \to \infty} (-1/4) n^{-3/2} n^{2/3} = \cdots$? (Is the OP sure (s)he didn't mean $n^{3/2}$ in the numerator?)
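To see this numerically (my own check; plain floats cancel catastrophically here, hence `decimal`):

```python
from decimal import Decimal, getcontext

# High-precision evaluation: n^(2/3) * (sqrt(n-1) + sqrt(n+1) - 2 sqrt(n))
# should vanish like -(1/4) n^(-5/6), confirming the limit is 0.
getcontext().prec = 50

def a(n):
    n = Decimal(n)
    inner = (n - 1).sqrt() + (n + 1).sqrt() - 2 * n.sqrt()
    return n ** (Decimal(2) / 3) * inner

val = float(a(10 ** 8))   # tiny and negative, about -5e-8
```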


elementary set theory - Order Type of $\mathbb Z_+ \times \{1,2\}$ and $\{1,2\} \times \mathbb Z_+$



I'm currently working on §10 of "Topology" by James R. Munkres. I've got a problem with task 3:





Both $\{1,2\} \times \mathbb Z_+$ and $\mathbb Z_+ \times \{1,2\}$ are well-ordered
in the dictionary order. Do they have the same order type?




$A := \{1,2\} \times \mathbb Z_+$



$B := \mathbb Z_+ \times \{1,2\}$



They have the same order type if there is an order preserving bijection between them.

Since both have the same cardinality, I could construct a function



$f: A \to B$



$f(\min A) = \min B$



$f(\,\min\,(A-\{\min A\})\,) = \min\,(B - \{\min B\})$



and so forth.




This function preserves the order. Now my study partner disagrees with this, because f reaches every element of B whose second component is 1, but not the others. So the function would not be surjective. But f being injective would surely imply different cardinalities for A and B.



Can you tell us the correct solution?


Answer



Because every element with first coordinate $1$ always lies before any element with first coordinate $2$, the set $A$ looks like two copies of $\mathbb{Z}^+$, one after the other:



$$(1,1),(1,2),(1,3),\ldots,(2,1),(2,2),(2,3),\ldots$$



While we can order $B$ as




$$(1,1),(1,2),(2,1),(2,2),(3,1),(3,2),\ldots$$



which looks just like $\mathbb{Z}^+$ (we have just doubled all the points).



So intuitively we expect the order types to be different.



If $f: A \rightarrow B$ were an order preserving bijection, suppose $f(2,1) = (n,i)$ for some $n \in \mathbb{Z}^+, i \in \{1,2\}$. But $(2,1)$ has infinitely many predecessors in $A$, while every element of $B$ has only finitely many. Contradiction, as $f$ would have to map the predecessors of $(2,1)$ bijectively onto those of $(n,i)$.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...