Tuesday 30 June 2015

linear algebra - Prove that: If $n \in \mathbb{Z}_{m} \setminus \left\{0\right\}$ has a multiplicative inverse, then this is definitely unique




Prove that: If $n \in \mathbb{Z}_{m} \setminus \left\{0\right\}$ for
$m \in \mathbb{N}$ has a multiplicative inverse, then this is
definitely unique.




I don't know where we can start with this proof but I think it's important to know that $n \in \mathbb{Z}_{m}$ has got a multiplicative inverse in $\mathbb{Z}_{m}$ if and only if the $\text{gcd }(n,m)=1$. That's what I have learned some days ago here, from many answers.




Maybe it's somehow possible to deduce it from this fact in order to prove it? Because that's what comes to my mind when we talk about multiplicative inverses :P


Answer



Hint If $k,l$ are multiplicative inverses of $n$ then what is
$$knl=?$$
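Following the hint, a quick brute-force check (a sketch; `inverses` is a made-up helper) confirms both the uniqueness claim and the gcd criterion mentioned in the question:

```python
# Sketch: enumerate inverses in Z_m by brute force and confirm both
# the uniqueness claim and the gcd criterion from the question.
from math import gcd

def inverses(n, m):
    """All k in {0, ..., m-1} with n*k congruent to 1 mod m."""
    return [k for k in range(m) if (n * k) % m == 1]

for m in range(2, 60):
    for n in range(1, m):
        inv = inverses(n, m)
        assert len(inv) <= 1                          # at most one inverse
        assert (len(inv) == 1) == (gcd(n, m) == 1)    # existence criterion
```

For instance, `inverses(3, 40)` returns `[27]`.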


complex analysis - Real part and imaginary part of an expression under sqrt

How can one find the real and imaginary parts of the expression below, where $a$ and $b$ are real numbers?



$$\sqrt{a+ i b}$$
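One can check the standard closed form numerically (a sketch; `sqrt_parts` is a hypothetical helper): writing $r=\sqrt{a^2+b^2}$, the principal square root of $a+ib$ has real part $\sqrt{(r+a)/2}$ and imaginary part $\operatorname{sign}(b)\sqrt{(r-a)/2}$ when $b\neq 0$.

```python
# Sketch: compare the closed-form real/imag parts of sqrt(a + ib)
# against cmath's principal square root.
import cmath
import math

def sqrt_parts(a, b):
    """Real and imaginary parts of the principal sqrt of a + ib (b != 0)."""
    r = math.hypot(a, b)
    re = math.sqrt((r + a) / 2)
    im = math.copysign(math.sqrt((r - a) / 2), b)
    return re, im

for a, b in [(3.0, 4.0), (-1.0, 2.0), (0.5, -0.25), (-2.0, -7.0)]:
    re, im = sqrt_parts(a, b)
    assert abs(complex(re, im) - cmath.sqrt(complex(a, b))) < 1e-12
```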

real analysis - Proof $\frac{\log(n!)}{n\log n}$ is increasing for positive integer $n$ ($\log n=\ln(n)$)

I would like to show that $\frac{\log(n!)}{n\log n}$ increases as $n$ increases, for positive integers $n$ only (to avoid the gamma function). Note here $\log n = \ln(n)$, so using base $e$. I would like to be able to show this so I can then apply the Monotone Convergence Theorem to show that $\log(n!)\sim n\log n$. My first idea was to try to show that the second derivative is always positive, but since $\log(n!)$ is only defined on the integers (without resorting to the gamma function, which I feel is OTT for this question) this, of course, does not work. Does anyone have any ideas about how to prove rigorously that this sequence is increasing?
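As a sanity check (not a proof), one can compute the sequence directly for a large range of $n$ and confirm it is increasing; computing $\log(n!)$ as a running sum of $\log k$ avoids huge factorials.

```python
# Sketch: verify numerically that a_n = log(n!)/(n log n) increases
# for 2 <= n <= 5000, accumulating log(n!) incrementally.
import math

log_fact = math.log(2)                      # log(2!)
prev = log_fact / (2 * math.log(2))         # a_2 = 1/2
for n in range(3, 5001):
    log_fact += math.log(n)
    a_n = log_fact / (n * math.log(n))
    assert a_n > prev                       # increases at every step
    prev = a_n
```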

real analysis - Infinite Series $\sum\limits_{n=1}^\infty\left(\frac{H_n}n\right)^2$




How can I find a closed form for the following sum?
$$\sum_{n=1}^{\infty}\left(\frac{H_n}{n}\right)^2$$
($H_n=\sum_{k=1}^n\frac{1}{k}$).


Answer



EDITED. Some simplifications were made.






Here is a solution.




1. Basic facts on the dilogarithm. Let $\mathrm{Li}_{2}(z)$ be the dilogarithm function defined by



$$ \operatorname{Li}_{2}(z) = \sum_{n=1}^{\infty} \frac{z^{n}}{n^{2}} = - \int_{0}^{z} \frac{\log(1-x)}{x} \, dx. $$



Here the branch cut of $\log $ is chosen to be $(-\infty, 0]$ so that $\operatorname{Li}_{2}$ defines a holomorphic function on the region $\Bbb{C} \setminus [1, \infty)$. Also, it is easy to check (by differentiating both sides) that the following identities hold



\begin{align*}
\operatorname{Li}_{2}\left(\tfrac{z}{z-1}\right)
&= -\mathrm{Li}_{2}(z) - \tfrac{1}{2}\log^{2}(1-z); \quad z \notin [1, \infty) \tag{1} \\

\operatorname{Li}_{2}\left(\tfrac{1}{1-z}\right)
&= \color{blue}{\boxed{\operatorname{Li}_{2}(z) + \zeta(2) - \tfrac{1}{2}\log^{2}(1-z)}} + \color{red}{\boxed{\log(-z)\log(1-z)}}; \quad z \notin [0, \infty) \tag{2}
\end{align*}



Notice that in (2), the blue-colored part is holomorphic on $|z| < 1$ while the red-colored part induces the branch cut $[-1, 0]$.



2. A useful power series. Now let us consider the power series



$$ f(z) = \sum_{n=1}^{\infty} \frac{H_n}{n} z^n. $$




Then $f(z)$ is automatically holomorphic inside the disc $|z| < 1$. Moreover, it is easy to check that



$$ \sum_{n=1}^{\infty} H_{n} z^{n-1}
= \frac{1}{z} \left( \sum_{n=1}^{\infty} \frac{z^{n}}{n} \right)\left( \sum_{n=0}^{\infty} z^{n}\right)
= -\frac{\log(1-z)}{z(1-z)}. $$



Thus, integrating both sides and using the identity $\text{(1)}$, we obtain the following representation of $f(z)$:



$$f(z)
= \operatorname{Li}_{2}(z) + \tfrac{1}{2}\log^{2}(1-z)

= -\operatorname{Li}_{2}\left(\tfrac{z}{z-1}\right). \tag{3}$$
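Representation $\text{(3)}$ can be spot-checked numerically for real $|z|<1$, computing the dilogarithm from its defining series (a sketch; helper names are made up):

```python
# Sketch: compare sum_{n>=1} (H_n/n) z^n with Li_2(z) + log^2(1-z)/2
# for a few real z in (-1, 1), using plain series truncation.
import math

def f_series(z, terms=4000):
    """Partial sum of sum_{n>=1} (H_n / n) z^n."""
    h = s = 0.0
    for n in range(1, terms + 1):
        h += 1.0 / n            # harmonic number H_n
        s += h / n * z ** n
    return s

def li2(z, terms=4000):
    """Dilogarithm via its defining series (valid for |z| < 1)."""
    return sum(z ** n / n ** 2 for n in range(1, terms + 1))

for z in (0.5, -0.7, 0.3):
    closed = li2(z) + 0.5 * math.log(1 - z) ** 2
    assert abs(f_series(z) - closed) < 1e-10
```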



3. Integral representation and the result. By Parseval's identity, we have



$$ \sum_{n=1}^{\infty} \frac{H_{n}^{2}}{n^{2}}
= \frac{1}{2\pi} \int_{0}^{2\pi} f(e^{it})f(e^{-it}) \, dt
= \frac{1}{2\pi i} \int_{|z|=1} \frac{f(z)}{z} f\left(\frac{1}{z}\right) \, dz \tag{4} $$



Since $\frac{1}{z}f(z)$ is holomorphic inside $|z| = 1$, the failure of holomorphy of the integrand stems from the branch cut of




\begin{align*}
f\left(\tfrac{1}{z}\right)
&= -\operatorname{Li}_{2}\left(\tfrac{1}{1-z}\right) \\
&= -\color{blue}{\left( \operatorname{Li}_{2}(z) + \zeta(2) - \tfrac{1}{2}\log^{2}(1-z) \right)} - \color{red}{\log(-z)\log(1-z)},
\end{align*}



which is $[0, 1]$. To resolve this, we utilize the identity $\text{(2)}$. Note that the blue-colored portion does not contribute to the integral $\text{(4)}$, since it remains holomorphic inside $|z| < 1$. That is, only the red-colored portion contributes to the integral. Consequently we have



\begin{align*}
\sum_{n=1}^{\infty} \frac{H_{n}^{2}}{n^{2}}

&= -\frac{1}{2\pi i} \int_{|z|=1} \frac{f(z)}{z} \color{red}{\log(-z)\log(1-z)} \, dz. \tag{5}
\end{align*}



Since the integrand is holomorphic on $\Bbb{C} \setminus [0, \infty)$, we can utilize the keyhole contour wrapping around $[0, 1]$ to reduce $\text{(5)}$ to



\begin{align*}
\sum_{n=1}^{\infty} \frac{H_{n}^{2}}{n^{2}}
&=-\frac{1}{2\pi i} \Bigg\{ \int_{0^{-}i}^{1+0^{-}i} \frac{f(z)\log(-z)\log(1-z)}{z} \, dz \\
&\qquad \qquad + \int_{1+0^{+}i}^{+0^{+}i} \frac{f(z)\log(-z)\log(1-z)}{z} \, dz \Bigg\} \\
&=-\frac{1}{2\pi i} \Bigg\{ \int_{0}^{1} \frac{f(x)(\log x + i\pi)\log(1-x)}{x} \, dx \\

&\qquad \qquad - \int_{0}^{1} \frac{f(x)(\log x - i\pi)\log(1-x)}{x} \, dx \Bigg\} \\
&=-\int_{0}^{1} \frac{f(x)\log(1-x)}{x} \, dx. \tag{6}
\end{align*}



Plugging $\text{(3)}$ into the last integral and simplifying a little, we have



\begin{align*}
\sum_{n=1}^{\infty} \frac{H_{n}^{2}}{n^{2}}
&= - \int_{0}^{1} \frac{\operatorname{Li}_2(x)\log(1-x)}{x} \, dx - \frac{1}{2}\int_{0}^{1} \frac{\log^{3}(1-x)}{x} \, dx \\
&= \left[ \frac{1}{2}\operatorname{Li}_2(x)^2 \right]_0^1 - \frac{1}{2} \int_{0}^{1} \frac{\log^3 x}{1-x} \, dx \\

&= \frac{1}{2}\zeta(2)^{2} + \frac{1}{2} \Gamma(4)\zeta(4) \\
&= \frac{17\pi^{4}}{360}
\end{align*}



as desired.
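A quick numeric comparison supports the closed form (a sketch): the tail of the series decays like $(\log n/n)^2$, so a partial sum over $10^6$ terms already lands within about $10^{-3}$ of $17\pi^4/360$.

```python
# Sketch: partial sums of sum (H_n/n)^2 versus the claimed closed form.
import math

h = s = 0.0
for n in range(1, 1_000_001):
    h += 1.0 / n            # harmonic number H_n
    s += (h / n) ** 2

closed = 17 * math.pi ** 4 / 360
assert abs(s - closed) < 1e-3   # tail beyond 10^6 terms is ~2e-4
```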


problem solving - Calculate the probability with a finite arithmetic progression



We have a finite arithmetic progression $(a_n)$ of length $n \geq 3$ with common difference $r\neq 0 $.
We draw three different numbers. We have to calculate the probability that
these three numbers, in the order of drawing, form another arithmetic progression.



My proposition :




$ \Omega={n!\over (n-3)!}$



$ \mathbf{A}= {n \choose 3} \cdot 2$



But I think that my way of counting $ \mathbf{A}$ is incorrect.



Any suggestions how I can count it? Thanks in advance, Kuba!


Answer



$\Omega = n(n-1)(n-2)$




We're interested in triples whose elements differ by $r$, $2r$, ..., up to $\lceil n/3\rceil r$.
Notice that if a triple $a_i,a_k,a_l$ works, then so does $a_l,a_k,a_i$.
My idea was to "stick together" the desired elements that form each triple and count them. To be precise: there are $2(n-2)$ triples whose elements differ by $r$, $2(n-4)$ triples whose elements differ by $2r$, ..., and $2(n-2\lceil n/3\rceil)$ triples whose elements differ by $\lceil n/3\rceil r$. Then:
$P(A) = \frac{2\left((n-2)+(n-4)+\cdots+(n-2\lceil n/3\rceil)\right)}{n(n-1)(n-2)}$
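The counting can be sanity-checked by brute force for small $n$ (a sketch; `ap_count` is a made-up helper): enumerate all ordered draws of three distinct terms and count those forming an arithmetic progression in drawing order.

```python
# Sketch: brute-force count of ordered three-term draws from an AP
# that again form an AP in the order drawn.
from itertools import permutations

def ap_count(n, r=1):
    terms = [i * r for i in range(n)]       # an AP with difference r
    return sum(1 for x, y, z in permutations(terms, 3) if y - x == z - y)

# n = 5: spacing d = 1 gives six ordered triples, d = 2 gives two,
# out of 5*4*3 = 60 ordered draws.
assert ap_count(5) == 8
assert ap_count(4) == 4
```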


calculus - Find $\lim\limits_{x\to 0}\frac{\log\left(\cos x\right)}{x^2}$ without L'Hopital



$$\lim_{x\to 0}\frac{\log\left(\cos x\right)}{x^2}$$



I've been trying to:




  1. show $\displaystyle -\frac{\pi}{2}


  2. find a function so that $\displaystyle f(x)<\frac{\log\left(\cos x\right)}{x^2}$ and $\displaystyle \lim\limits_{x\to0}f(x) = -\frac{1}{2}$





And then apply the squeeze principle, but haven't managed any of these.


Answer



HINT:



$$\dfrac{\log(\cos x)}{x^2}=\dfrac{\log(\cos^2x)}{2x^2}=-\dfrac12\cdot\dfrac{\log(1-\sin^2x)}{-\sin^2x}\cdot\left(\dfrac{\sin x}x\right)^2$$
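A numeric check (a sketch) agrees with the limit $-\tfrac12$ that the hint's factorisation produces:

```python
# Sketch: log(cos x)/x^2 approaches -1/2 as x -> 0; the error is O(x^2)
# since log(cos x) = -x^2/2 - x^4/12 - ...
import math

for x in (0.1, 0.01, 0.001):
    val = math.log(math.cos(x)) / x ** 2
    assert abs(val + 0.5) < x ** 2
```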


probability - Birthday Problem without using complement

I have a solution to the birthday problem without using complements that is arriving at the wrong answer. I'd like to understand what I am doing wrong. I am not looking for alternate solutions to the problem.




Problem



Assuming there are only 365 days (ignore leap year), and each day is equally likely to be a birthday, what is the probability that at least 2 people have the same birthday in a room of N people?



Sample Space: $365^N$



Event Space




  • ${N\choose 2 }$ pairings for people with the same birthday


  • for each pair, $365$ possible birthdays

  • for remaining $N-2$ people, $365^{(N-2)}$ permutations which we can basically ignore (but still must be counted since they are part of the event space)



So I would expect the answer to be:



$$\frac{{N\choose 2 } * 365 * 365^{(N-2)}}{365^N} = \frac{{N\choose 2 }}{365}$$



With $N=23$, I get 69% chance of $2$ people having same birthday, but correct answer is ~50%. So where am I over-counting?
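A quick numeric comparison (a sketch) reproduces both figures quoted above for $N=23$: the proposed count gives $\binom{23}{2}/365 \approx 0.69$, while the standard complement computation gives about $0.507$.

```python
# Sketch: the proposed probability vs. the complement-based one for N = 23.
from math import comb

N = 23
proposed = comb(N, 2) / 365          # = 253/365, about 0.693

no_match = 1.0
for i in range(N):
    no_match *= (365 - i) / 365      # all N birthdays distinct
exact = 1 - no_match                 # about 0.507

assert abs(proposed - 0.6932) < 1e-3
assert abs(exact - 0.5073) < 1e-3
assert proposed > exact              # the proposed count over-counts
```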

Monday 29 June 2015

discrete mathematics - inverse modulo, modulo arithmetic




I was given the following example in the book; however, I am not sure how the result of $27$ can be calculated. I realise that $-13 + 40$ gives $27$, but I don't really follow how $27 \equiv -13 \pmod{40}$ is the same as $3\cdot(-13) \equiv 1 \pmod{40}$.



Moreover, I don't really see how, by the theorem $ab\equiv cd \pmod n$, we get $3\cdot 27 \equiv 3\cdot(-13) \equiv 1 \pmod{40}$.
I guess $a=3$ and $c=3$ following the pattern $ab\equiv cd$, but it does not make much sense to me.



find a linear combination of 3 and 40 that equals 1.



Step 1: Divide 40 by 3 to obtain 40 = 3·13 + 1. This implies that 1 = 40 − 3·13.
Step 2: Divide 3 by 1 to obtain 3 = 3·1 + 0. This implies that gcd(3, 40) = 1.

Step 3: Use the result of step 1 to write



3·(−13) = 1 + (−1)40.



This result implies that −13 is an inverse for 3 modulo 40.
In symbols, 3·(−13) ≡1 (mod 40).
To find a positive inverse, compute 40 − 13. The result is 27, and 27 ≡ −13 (mod 40)
because 27 − (−13) = 40. So, by Theorem ab≡cd(mod n)
3·27 ≡ 3·(−13) ≡ 1 (mod 40),
and thus by the transitive property of congruence modulo n, 27 is a positive integer that is an inverse for 3 modulo 40. ■




Thank you for your help in advance


Answer



Since $40 = 3 \cdot 13 + 1$,
\begin{align*}
40 - 3 \cdot 13 & = 1\\
40 + -13 \cdot 3 & = 1\\
\end{align*}



Thus,

$$-13 \cdot 3 \equiv 1 \pmod{40}$$
Hence, $-13$ is a multiplicative inverse of $3 \pmod{40}$. However, so is any integer that is equivalent to $-13 \pmod{40}$. Those integers have the form $-13 + 40k$, where $k \in \mathbb{Z}$. To see this, observe that
$$(-13 + 40k) \cdot 3 \equiv -13 \cdot 3 + 40 \cdot 3k \equiv -13 \cdot 3 \equiv 1 \pmod{40}$$
In particular, if $k = 1$, then
$$(-13 + 40 \cdot k) \cdot 3 \equiv (-13 + 40) \cdot 3 \equiv 27 \cdot 3 \equiv 1 \pmod{40}$$
Hence, $27$ is an inverse of $3 \pmod{40}$. Since $0 \leq 27 < 40$, it is the positive inverse we seek.



Check: $27 \cdot 3 = 81 = 2 \cdot 40 + 1 \Rightarrow 27 \cdot 3 \equiv 1 \pmod{40}$, so $3^{-1} \equiv 27 \pmod{40}$.
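The worked example above can be reproduced in a few lines (a sketch; `ext_gcd` is a made-up helper implementing the extended Euclidean algorithm used in Step 1):

```python
# Sketch: extended Euclidean algorithm recovers 1 = 3*(-13) + 40*1,
# and normalising -13 mod 40 gives the positive inverse 27.
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, _ = ext_gcd(3, 40)
assert g == 1            # gcd(3, 40) = 1, so an inverse exists
assert x == -13          # the coefficient found in Step 1
assert x % 40 == 27      # the positive inverse
assert (3 * 27) % 40 == 1
```

Python's built-in `pow` agrees: `pow(3, -1, 40) == 27` (Python 3.8+).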


elementary set theory - Does $a le b$ imply $a+cle b+c$ for cardinal numbers?




Let $a, b, c$ be cardinalities.



Prove or disprove:



If $a \le b$ then $a+c\le b+c$





I realize that $a \le b$ means that there's an injection from $A$ into $B$. But I don't really know what to do with the addition in the inequality.



Can I simply separate this into cases where one or two cardinalities are infinite while the others aren't and then just solve from there like numbers or known cardinalities like $\aleph_0$ ?



Thanks.


Answer



I will prove something stronger -



For cardinals $a,b,c$ and $d$, if $a\leq c$ and $b\leq d$, then $a+b\leq c+d$.




Proof: Let $A,B,C$ and $D$ be sets with cardinalities $a,b,c$ and $d$ respectively, such that $A\cap B=C\cap D=\emptyset$.
If $a\leq c$ and $b\leq d$, then there exist injective functions $f:A\to C$ and $g:B\to D$. Define a new function $h:A\cup B\to C\cup D$ by



$$h(x)=\left\{\begin{matrix}
f(x) & x\in A\\
g(x) & x\in B
\end{matrix}\right.$$



Then $h$ is well defined (why?) and injective (why?). From that, we get




$$|A\cup B|\leq|C\cup D|$$



i.e. $a+b\leq c+d$.


discrete mathematics - Prove summation related to cycles




Let $b_r(n,k)$ be the number of n-permutations with $k$ cycles, in which numbers $1,2,\dots,r$ are in one cycle.



Prove that for $n \geq r $ there is:



$$
\sum_{k=1}^{n} {b_r(n,k)x^k=(r-1)!\frac{x^\overline{n}}{(x+1)^\overline{r-1}}}
$$


Answer



It should be clear by inspection that
$$b_r(n,k) = (r-1)! \left[n-r\atop k-1\right].$$

The sum then becomes
$$(r-1)! \sum_{k=1}^n x^k \left[n-r\atop k-1\right].$$



Recall the bivariate generating function of the Stirling numbers of
the first kind, which is
$$G(z, u) = \exp\left(u\log\frac{1}{1-z}\right).$$



This yields the following for the inner sum:
$$\sum_{k=1}^n x^k (n-r)! [z^{n-r}] [u^{k-1}] G(z, u)
\\= (n-r)! [z^{n-r}] \sum_{k=1}^n x^k

\frac{1}{(k-1)!} \left(\log\frac{1}{1-z}\right)^{k-1}.$$



Now use the fact that $\log\frac{1}{1-z}$ starts at $z$ to see that
terms with $k> n-r+1$ do not contribute to $[z^{n-r}]$ to get
$$(n-r)! [z^{n-r}] \sum_{k=1}^\infty x^k
\frac{1}{(k-1)!} \left(\log\frac{1}{1-z}\right)^{k-1}.$$



This simplifies to
$$(n-r)! [z^{n-r}] x \exp\left(x\log\frac{1}{1-z}\right)
= x \times (n-r)! [z^{n-r}] \left(\frac{1}{1-z}\right)^x

\\= x \times (n-r)! \times {n-r+x-1\choose n-r}
\\= x \times (n-r-1+x)(n-r-1+x-1)(n-r-1+x-2)\cdots x
\\= x \times x^{\overline{n-r}}.$$



We thus have for the sum the formula
$$\sum_{k=1}^n b_r(n, k) x^k =
(r-1)! \times x \times x^{\overline{n-r}}.$$



I verified this by going back to the basics and implementing a Maple program that factors permutations. It confirms the above formula for small permutations ($n<10$). (This code is not optimized.)




with(combinat);

pet_disjcyc :=
proc(p)
local dc, pos;

dc := convert(p, 'disjcyc');


for pos to nops(p) do
if p[pos] = pos then
dc := [op(dc), [pos]];
fi;
od;

dc;
end;


gf :=
proc(n, r)
option remember;
local p, res, f, targ, q, cyc;

res := 0; targ := {seq(q, q=1..r)};

for p in permute(n) do
f := pet_disjcyc(p);


for cyc in f do
if convert(cyc, set) = targ then
res := res + x^nops(f);
break;
fi;
od;
od;

res;
end;


bs := (n,r)-> (r-1)!* sum(x^k*abs(stirling1(n-r,k-1)),k=1..n);
bsp := (n, r) -> (r-1)! * x * pochhammer(x, n-r);


Remark. This would be a very basic calculation if the ordinary generating function were used instead of the exponential one: the OGF is given directly in terms of the rising factorial, and we are done. Very simple indeed.
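For readers without Maple, here is a hypothetical Python port of the same brute-force check (mirroring the Maple code, it tests whether $\{1,\dots,r\}$ appears as a cycle, and compares the cycle-count polynomial with $(r-1)!\,x\,x^{\overline{n-r}}$ by evaluating both at integer points):

```python
# Sketch: enumerate permutations of {1..n}, keep those in which the set
# {1..r} forms a cycle, and compare sum x^(#cycles) with the formula
# (r-1)! * x * (rising factorial of x, length n-r) at sample values.
from itertools import permutations
from math import factorial

def cycles(p):
    """Cycle decomposition of a permutation in one-line notation."""
    seen, out = set(), []
    for start in range(1, len(p) + 1):
        if start in seen:
            continue
        cyc, cur = [], start
        while cur not in seen:
            seen.add(cur)
            cyc.append(cur)
            cur = p[cur - 1]
        out.append(cyc)
    return out

def gf(n, r, x):
    """Sum of x^(#cycles) over the admissible permutations."""
    targ = set(range(1, r + 1))
    total = 0
    for p in permutations(range(1, n + 1)):
        f = cycles(p)
        if any(set(c) == targ for c in f):
            total += x ** len(f)
    return total

def formula(n, r, x):
    rising = 1
    for j in range(n - r):
        rising *= x + j         # rising factorial x^{(n-r)}
    return factorial(r - 1) * x * rising

for n in range(2, 7):
    for r in range(2, n + 1):
        for x in range(1, 5):
            assert gf(n, r, x) == formula(n, r, x)
```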


Cauchy functional equation without choice



Assume $\sf ZF + \lnot AC$. Then how many solutions are there to the Cauchy functional equation?



Thank you


Answer




The answer is undecidable. We know it could be $2^{\aleph_0}$ and it could be $2^{2^{\aleph_0}}$. I am unaware of results that it could be an intermediate cardinality, though.



It is true that there are always the continuous ones (and there are $2^{\aleph_0}$ of those), but it is consistent that there are only the continuous ones. For example in Solovay's model or in Shelah's model of $\sf ZF+DC+BP$ (the last one denotes "all sets of reals have the Baire property").



Assuming only that the axiom of choice fails is not enough to conclude in what manner it fails, and whether or not the real numbers are even well-orderable or not.



It is consistent that the axiom of choice fails and the real numbers are well-orderable, in which case the usual proof as if the axiom of choice holds shows that there are $2^{2^{\aleph_0}}$ solutions.


real analysis - continuous functions on $\mathbb R$ such that $g(x+y)=g(x)g(y)$




Let $g$ be a function on $\mathbb R$ to $\mathbb R$ which is not identically zero and which satisfies the equation $g(x+y)=g(x)g(y)$ for $x$,$y$ in $\mathbb R$.




$g(0)=1$. If $a=g(1)$, then $a>0$ and $g(r)=a^r$ for all $r$ in $\mathbb Q$.



Show that the function is strictly increasing if $g(1)$ is greater than $1$, constant if $g(1)$ is equal to $1$ or strictly decreasing if $g(1)$ is between zero and one, when $g$ is continuous.


Answer



For $x,y\in\mathbb{R}$ and $m,n\in\mathbb{Z}$,
$$
\eqalign{
g(x+y)=g(x)\,g(y)
&\implies

g(x-y)={g(x) \over g(y)}
\\&\implies
g(nx)=g(x)^n
\\&\implies
g\left(\frac{m}nx\right)=g(x)^{m/n}
}
$$
so that $g(0)=g(0)^2$ must be one (since if it were zero, then $g$ would be identically zero on $\mathbb{R}$), and with $a=g(1)$, it follows that $g(r)=a^r$ for all $r\in\mathbb{Q}$. All we need to do now is invoke the continuity of $g$ and the denseness of $\mathbb{Q}$ in $\mathbb{R}$ to finish.



For example, given any $x\in\mathbb{R}\setminus\mathbb{Q}$, there exists a sequence $\{x_n\}$ in $\mathbb{Q}$ with $x_n\to x$ (you could e.g. take $x_n=10^{-n}\lfloor 10^nx\rfloor$ to be the approximation of $x$ to $n$ decimal places -- this is where we're using that $\mathbb{Q}$ is dense in $\mathbb{R}$). Since $g$ is continuous, $y_n=g(x_n)\to y=g(x)$. But $y_n=a^{x_n}\to a^x$ since $a\mapsto a^x$ is also continuous.




Moral: a continuous function is completely determined by its values on any dense subset of the domain.


Sunday 28 June 2015

real analysis - Integral $=-\frac{4}{3}\log^3 2-\frac{\pi^2}{3}\log 2+\frac{5}{2}\zeta(3)$

Hi I have been trying to prove this
$$
I:=\int \limits_{0}^{1} \left[ \frac{1}{x(x-1)} \bigg(2\mathrm{Li}_2\bigg(\frac{1-\sqrt{1-x}}{2}\bigg)-\log\bigg(\frac{1+\sqrt{1-x}}{2}\bigg)^2 \bigg) -\frac{\zeta(2)-2\log^2 2}{x-1} \right]{dx}=\sum_{k=2}^\infty \binom{2k}{k} \frac{1}{k^2 4^k} \sum_{j=1}^{k-1} \frac{1}{j}=\color{#00f}{\large%
-{4 \over 3}\log^3 2-\frac{\pi^2}{3}\log 2+\frac{5}{2}\zeta(3) }
$$
What a beautiful result!!!! I am trying to prove this.
I am not sure of what to do, perhaps we could start with a change of variables

$$
\xi=\frac{1-\sqrt{1-x}}{2},
$$
but I get stuck shortly after. This is strongly related to Mahler measures and integration. Thanks for your help.



I tried the following substitution but failed,



UPDATE: I tried a change of variables given above by $\xi$, we obtain
$$
I=\int\limits_{0}^{1/2}\big(2\mathrm{Li}_2(\xi)-\log^2(1-\xi)\big)\left(\frac{4}{2\xi-1}-\frac{1}{\xi-1}-\frac{1}{\xi}\right)d\xi-4(\zeta(2)-2\log^2 2) \int\limits_0^{1/2}\frac{d\xi}{2\xi-1}

$$
but the integral on the right diverges so I need to use another method now.



Thanks

calculus - Differentiable function, not constant, $f(x+y)=f(x)f(y)$, $f'(0)=2$


Let $f: \mathbb R\rightarrow \mathbb R$ be a differentiable function, not
identically zero, such that $f'(0) = 2$ and $$ f(x+y)= f(x)\cdot f(y)$$ for all
$x$ and $y$ in $\mathbb R$. Find $f$.





My first answer is $f(x) = e^{2x}$, and I proved that there are no other functions of the form $f(x) = a^{bx}$ by the existence-uniqueness theorem (ODE), but I don't know if I am finished.



What do you think about this proof sketch?



Thanks,



I'll be asking more things.

integration - Show that $\int\limits_0^{\infty}\frac {dt}te^{\cos t}\sin\sin t=\frac {\pi}2(e-1)$




How do you show that




$$\int_{0}^{\infty}\frac{\mathrm{d}t}{t}\,\mathrm{e}^{\cos\left(t\right)}\,
\sin\left(\sin\left(t\right)\right) =
\frac{\pi}{2}\,\left(\,\mathrm{e} - 1\right)$$




I managed to get the left-hand side to equal the imaginary part of$$I=\int\limits_0^{\infty}\frac {dt}te^{e^{it}}$$But I'm not very sure what to do next. I'm thinking of the substitution $t\mapsto e^{it}$, but I'm not very sure how to evaluate the limit as $t\to\infty$. I also tried contour integration, but I'm not exactly sure what contour to draw.


Answer




$$e^{\cos t}\sin\sin t=\text{Im}\exp\left(e^{it}\right)=\text{Im}\sum_{n\geq 0}\frac{e^{nit}}{n!}=\sum_{n\geq 1}\frac{\sin(nt)}{n!} $$
and since for any $a>0$ we have $\int_{0}^{+\infty}\frac{\sin(at)}{t}\,dt=\frac{\pi}{2}$ it follows that
$$\int_{0}^{+\infty}e^{\cos t}\sin\sin t\frac{dt}{t} = \frac{\pi}{2}\sum_{n\geq 1}\frac{1}{n!}=\frac{\pi}{2}(e-1), $$
pretty simple.
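The two ingredients of this argument are easy to check numerically (a sketch): the series expansion of $e^{\cos t}\sin\sin t$ and the final value $\frac{\pi}{2}(e-1)$.

```python
# Sketch: verify e^{cos t} sin(sin t) = Im exp(e^{it}) = sum sin(nt)/n!
# at sample points, plus the claimed value of the integral.
import cmath
import math

for t in (0.3, 1.0, 2.5):
    lhs = math.exp(math.cos(t)) * math.sin(math.sin(t))
    mid = cmath.exp(cmath.exp(1j * t)).imag
    ser = sum(math.sin(n * t) / math.factorial(n) for n in range(1, 30))
    assert abs(lhs - mid) < 1e-12
    assert abs(lhs - ser) < 1e-12

assert abs(math.pi / 2 * (math.e - 1) - 2.69907) < 1e-4
```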






I have a counter-proposal:
$$\begin{eqnarray*} \int_{0}^{+\infty}\left(e^{\cos t}\sin\sin t\right)^2\frac{dt}{t^2} &=&\frac{\pi}{2}\sum_{m,n\geq 1}\frac{\min(m,n)}{m!n!}\\&=&-\frac{\pi}{2}I_1(2)+\pi e(e-1)-2\pi e\int_{0}^{1}I_1(2x)e^{-x^2}\,dx. \end{eqnarray*}$$


improper integrals - Laplace transform: $\int_0^\infty \frac{\sin^4 x}{x^3} \, dx$

I am having trouble with an integral.
Using this Laplace transform identity:
$$\begin{align}

\int_0^\infty F(u)g(u) \, du & = \int_0^\infty f(u)G(u) \, du \\[6pt]
L[f(t)] & = F(s) \\[6pt]
L[g(t)] & = G(s)
\end{align}$$



I want to apply it to compute this integral:



$$I = \int_0^\infty \frac{\sin^4 x}{x^3} \, dx $$

Saturday 27 June 2015

sequences and series - Is there a connection between $\zeta(-1)$ and Ramanujan's calculation of the sum over $\mathbb{N}$?



Let me elaborate a little on the matter that I've been mulling over for a little while. This essentially concerns the summation of $1+2+3+...$, how it equals $-1/12$ (in a certain sense, obviously not normal circumstances), and how this seems to strangely coincide with the Riemann zeta function at $s=-1$.



I'm essentially concerned over whether this is a coincidence, or if there's something deeper at play here. It would certainly strike me as one heck of a coincidence.



I'll elaborate a bit for those unfamiliar with these matters...







The Value of $\zeta(-1)$:



We know, for $s \in \mathbb{C}$, with $Re(s)>1$, we can define the Riemann zeta function by



$$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$$



This can be analytically continued to the entirety of the complex plane, giving a definition of the function for $Re(s)<1, s\neq1$:




$$\zeta(s) = 2^s \pi^{s-1} \sin\left(\frac{\pi s}{2} \right) \Gamma (1-s) \zeta(1-s)$$



Through this and the known result $\zeta(2)=\pi^2/6$ we can see that



$$\zeta(-1) = 2^{-1} \pi^{-1-1} \sin\left(\frac{\pi (-1)}{2} \right) \Gamma (1-(-1)) \zeta(1-(-1)) = \frac{1}{2} \left( \frac{1}{\pi^2} \right)(-1)(1)\left(\frac{\pi^2}{6}\right)=\frac{-1}{12}$$
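This evaluation is easy to confirm numerically (a sketch): approximate $\zeta(2)$ by its series and plug $s=-1$ into the functional equation.

```python
# Sketch: zeta(2) from its series (with a 1/N tail correction), then the
# functional-equation product at s = -1.
import math

N = 200_000
zeta2 = sum(1 / n**2 for n in range(1, N + 1)) + 1 / N   # tail ~ 1/N

s = -1
value = (2**s * math.pi**(s - 1) * math.sin(math.pi * s / 2)
         * math.gamma(1 - s) * zeta2)

assert abs(zeta2 - math.pi**2 / 6) < 1e-9
assert abs(value - (-1 / 12)) < 1e-9
```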



We also know that people - often mistakenly - claim $1+2+3+4+...=-1/12$ through this result, since the summation of the natural numbers appears if you plug in $s=-1$ into the original sum, $\sum n^{-s}$.



This obviously isn't true - mistake aside, at least in what we consider the "usual topology of $\mathbb{R}$", the summation $1+2+3+...$ doesn't converge under





  • The limit of its partial sums (which are the triangular numbers)

  • The limit of the averages of its partial sums (Cesaro summation)

  • Through Abel summation (per Wikipedia - I don't really know of this myself)



...and apparently a number of other summation methods (again, according to Wikipedia; I'm only familiar with the first two methods). Yet, interestingly...







Ramanujan's Summation of $1+2+3+...$:



Now, we know that in, the usual sense, we cannot assign a value to an obviously divergent series. For example, we cannot say $\sum a_n = x$ if this summation is divergent, and then perform manipulations on that as if it had a value, and try to deduce whatever $x$ is.



This doesn't always have to hold, just in the sense of the usual topology of the real numbers. (At least, this is what I was told by Thomas Andrews in the comments of a semi-related question here - related in the sense that it dealt with an "obviously divergent" product. I honestly don't know much about topologies.) I take this to mean that we can assign values to these "obviously divergent in the usual sense" summations, but it changes the context, the framework in which we're working.



So Ramanujan was essentially doing that, in a certain sense - implicitly considering an alternate topology of the reals, in which the summation over the naturals could be assigned a value $c$, i.e.



$$c = \sum_{n=1}^\infty n = 1+2+3+4+...$$




and then manipulating $c$ to try and determine its value. (At least, that's what he might have been doing, I'm not sure. He could've just been ignoring the topologies altogether and just seeing what he could do. Either way...)



Subtracting $4c$ from $c$, he obtained



$$-3c = 1-2+3-4+5-6+...$$



This summation is the expansion of the power series for $(1+x)^{-2}$ and thus, taking $x=1$, Ramanujan obtains



$$-3c = \frac{1}{(1+1)^2} = \frac{1}{4} \;\;\; \Rightarrow \;\;\; c = \frac{-1}{12}$$







So, The Question, and What I've Found:



So, by using the zeta function, we can show that $\zeta(-1) = -1/12$. Erroneously, people claim this as the sum of the natural numbers, but interestingly enough, Ramanujan showed that, if you can assign the divergent sum of the naturals a value, then it coincidentally is $-1/12$.




So, is there a connection between the Riemann zeta function - in particular its value at $s=-1$ - and Ramanujan's method of summing over the naturals? Is a mere coincidence? And in particular, why does Ramanujan's summation yield the same result?





I'm not really sure where to begin looking into this. Ramanujan's summation introduces the notion of a different topology on the reals, but I've never had a proper introduction to topology, so I'm a bit loath to go into learning a whole subject to understand a connection that might not even exist.



I did find that a Wikipedia article on the summation $1+2+3+...$ kinda touches on a possible connection but I don't really buy it. (In the section on zeta function regularization, it suddenly starts considering the Riemann zeta function instead of the summation, which seems more like a "similar but related" thing than establishing a proper connection.)



I mean, I understand where we're coming from in zeta function regularization, and I can sort of see it. We consider the summation over the naturals as a special case of the Riemann zeta function, then associate the value of the summation with the value of the zeta function there, or its analytic continuation if applicable. I can get that.



But it doesn't answer what I feel is the heart of my question:




I think more than anything, my big question is... Ramanujan's method relied nothing on "regularization" or "zeta functions" or any of this higher level stuff. He just assumed the summation had a value, and manipulated the summation to find that value. Yes, it broke some assumptions about the topology of the reals - that I myself don't understand well - yet somehow got the same answer that this regularization and all that would. How is it right? What is the connection?





The "obviously wrong" method somehow got the right answer. I mean, sure, it's right in the sense of the topology of the reals in which these manipulations are valid, but it's wrong in the usual sense. Yet it achieves the right answer.



So I think my biggest question is why it's correct. Coincidence? Perhaps the zeta function relies on likewise assumptions of topology that I've just not known about? I'm not really sure where to go from here; as it is my research is already broaching topics I know little to nothing about.


Answer



Well, "Ramanujan summation" usually refers to something else entirely, so let's not use that phrase here. Alternate topologies on the real numbers are another red herring; they're not relevant to this discussion. We're not evaluating the limit of a sequence of partial sums in any topology. With that stuff out of the way...



There are multiple ways to compute $\zeta(-1)$. You've outlined using the functional equation to relate it to $\zeta(2)$. But the more relevant method is in the Wikipedia article:




$$-3\zeta(-1)=\eta(-1)=\lim_{x\to 1^-}\left(1-2x+3x^2-4x^3+\cdots\right)=\lim_{x\to 1^-}\frac{1}{(1+x)^2}=\frac14$$



where $\eta$ is the Dirichlet eta function, not to be confused with the unrelated and better-known Dedekind eta function.



This is a direct analogue to what Ramanujan wrote in his notebook:
$$-3c=1-2+3-4+\mathrm{\&c}=\frac{1}{(1+1)^2}=\frac14$$



Imagine, for the sake of argument, that Ramanujan's intention is in fact to compute $\zeta(-1)$. Then every single step in his computation of "$c$" corresponds to a step in the computation of $\zeta(-1)$, just with the variables $s$ and $x$ elided. And the computation of $\zeta(-1)$ is rigorous; each step is justified in the Wikipedia article, which I'll quote for completeness:





Where both Dirichlet series converge, one has the identities:



$$
\begin{alignat}{7}
\zeta(s)&{}={}&1^{-s}+2^{-s}&&{}+3^{-s}+4^{-s}&&{}+5^{-s}+6^{-s}+\cdots& \\
2\times2^{-s}\zeta(s)&{}={}& 2\times2^{-s}&& {}+2\times4^{-s}&&{} +2\times6^{-s}+\cdots& \\
\left(1-2^{1-s}\right)\zeta(s)&{}={}&1^{-s}-2^{-s}&&{}+3^{-s}-4^{-s}&&{}+5^{-s}-6^{-s}+\cdots&=\eta(s) \\
\end{alignat}
$$




The identity $(1-2^{1-s})\zeta(s)=\eta(s)$ continues to hold when both functions are extended by analytic continuation to include values of $s$ for which the above series diverge. Substituting $s = −1$, one gets $−3\zeta(−1) = \eta(−1)$. Now, computing $\eta(−1)$ is an easier task, as the eta function is equal to the Abel sum of its defining series, which is a one-sided limit.




And that explains why Ramanujan's manipulation yields the same value as $\zeta(-1)$.



Perhaps the most obscure step in the argument is the usage of Abel summation to evaluate a Dirichlet series. The reference to Knopp justifies that step better than I would be able to. Beyond that, I'm not sure what else you're likely to object to. Feel free to ask about any particular step!
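The Abel-summation step in particular can be checked numerically (a sketch): the partial sums of $1-2x+3x^2-\cdots$ approach $1/(1+x)^2$ for $x$ just below $1$, whose limit at $x=1^-$ is $1/4$.

```python
# Sketch: Abel evaluation of eta(-1): partial sums of the alternating
# series vs. the closed form 1/(1+x)^2, approaching 1/4 as x -> 1-.
def alternating(x, terms=100_000):
    return sum((-1) ** n * (n + 1) * x ** n for n in range(terms))

for x in (0.9, 0.99, 0.999):
    assert abs(alternating(x) - 1 / (1 + x) ** 2) < 1e-6

assert abs(alternating(0.999) - 0.25) < 1e-3
```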


real analysis - Show that the countable union of countable sets is countable - conceptualisation help



I have come across a rather unsatisfying proof to this, which, in fact, I cannot even understand fully. Here it goes.



We can say that $A=\cup_{i \in I} A_i$ where $I \sim \mathbb{N}$. Without loss of generality we can say that $I$ is $\mathbb{N}$, so:

$A=\cup_{n \in \mathbb{N}} A_n$.



I will now state what I was thinking about. We have some set $A_1$, then some set $A_2$, etc. We know that they are individually countable, i.e. there is a bijection from each set to the natural numbers, and we need to show that there is such a bijection for the union of these sets. In my view it is not possible to construct some function $f$ for the union that would first map everything from set $A_1$, then proceed to map everything from set $A_2$, etc., for two reasons: first, there are countably infinitely many elements in each of the sets, so we will "run out of indices", and second, there is a countably infinite number of sets themselves. For that reason, one should define the bijection $f$ for the union in such a manner that you take the "first" element from set $A_1$, then the first element from set $A_2$, etc. But even this has a problem, because there is a countably infinite number of sets, so "we will run out of indices" even taking just the first element of each set. It would be quite easy to show this result for, say, 10 sets or any fixed natural number, but there is a problem when we are asked to show it for a countable number of such sets. I am also thinking about these countable sets from a very general perspective; I do not think of them as containing numbers (maybe you want to count the number of "wiggles" in a fractal shape, for example, and those are the objects of your sets; I am not sure whether that would be a countable set or uncountably infinite).



The solution that I see is the following, which assumes that the elements of the sets are numbers. They define $n_a = \min\{n \in \mathbb{N} : a \in A_n\}$. Then they say that for any $n \in \mathbb{N}$ (which I think they mean to say $a \in A_n$) $\exists \ f_n: A_n \rightarrow \mathbb{N}$. Then define $f:A\rightarrow \mathbb{N}$ by $f(a)=f_{n_a}(a)$. I do not follow this. What if set $A_1$ has all elements greater than $1$? Then the minimum is always $1$, so for each element in that set you would be calling the function $f_1$, which makes sense. Now imagine that set $A_2$ has all elements greater than $2$, so the minimum is $2$ and you would be calling $f_2$ - but what if this function $f_2$ now maps to the same natural numbers all over again as $f_1$?



EDIT: also, what if $a \in A_n$ is some irrational number? How would you call $f_{\text{irrational number}}$?



Problem 1.23


Answer




Think of $\mathbb{N} \times \mathbb{N}$ as a double-entry array and put the elements of $A_{i}$ into column $\#i$. You then just need a bijection between $\mathbb{N}$ and $\mathbb{N} \times \mathbb{N}$, which is quite easy to do (for instance $(i,j) \mapsto 2^{i}(2j+1)-1$).
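The suggested pairing can be verified directly (a sketch; here $\mathbb{N}$ includes $0$):

```python
# Sketch: (i, j) -> 2^i * (2j + 1) - 1 is a bijection N x N -> N,
# since every positive integer factors uniquely as a power of two
# times an odd number.
def pair(i, j):
    return 2 ** i * (2 * j + 1) - 1

# injective on a grid: all 400 values are distinct
values = {pair(i, j) for i in range(20) for j in range(20)}
assert len(values) == 400

# surjective onto an initial segment: every m in 0..99 is hit
hits = {pair(i, j) for i in range(8) for j in range(50)}
assert set(range(100)) <= hits
```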


modular arithmetic - Cryptology Proof


Given prime $p, 0 < m < p$ and $ed \equiv 1 \pmod{p-1}$, prove $ m^{ed} \equiv m \pmod{p}$.





I get that this is hinting at a proof very similar to that of RSA, and that I have to consider when $\gcd(m,p)=1$ and when it doesn't. I also know that I need to use Euler's theorem and CRT. I just can't get past the $p-1$ passing itself into the mod from Euler's theorem. How should this proof look?
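Before writing the Fermat-style argument, a brute-force check for a small prime (my own sketch, not a proof) can build confidence in the statement:

```python
# For a small prime p, verify m^(ed) ≡ m (mod p) for every 0 < m < p and every
# pair e, d with ed ≡ 1 (mod p-1). Fermat's little theorem is what makes this work.
p = 13
for e in range(1, 2 * p):
    for d in range(1, 2 * p):
        if (e * d) % (p - 1) == 1:
            assert all(pow(m, e * d, p) == m for m in range(1, p))
```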

inequality - $a_1 + a_2 + dots a_n = 1$ find min of $a_1^2 +frac{a_2^2}{2} + dots + frac{a_n^2}n.$



Given $n$ numbers $a_1, a_2, \cdots, a_n$ such that $a_1, a_2, \cdots, a_n > 0$ and their sum is $1$, I want to find the minimum value of




$$a_1^2 + \frac{a_2^2}{2} + \cdots + \frac{a_n^2}{n}.$$



I have tried using weighted AM-GM inequality, like this:



$$\frac{a_1^2 + \frac{a_2^2}{2} + \cdots + \frac{a_n^2}{n}}{a_1 + a_2 + \cdots + a_n } \geqslant \frac{a_1^{a_1} \cdots a_n^{a_n}}{2^{a_2} \cdots n^{a_n}}$$



but was unable to make progress on the right hand side.



Is there a better way to apply AM-GM inequality? Or is there some different way altogether to solve this?



Answer



By the Cauchy-Schwarz inequality we have
$$ \left(\sum_{i=1}^{n}i\right)\left(\sum_{i=1}^{n}\frac{a_i^2}{i}\right) \geq \left(\sum_{i=1}^{n}\sqrt{i}\cdot\frac{a_i}{\sqrt{i}}\right)^2 = 1 \tag{1}$$
hence with our hypothesis we have:
$$ \sum_{i=1}^{n}\frac{a_i^2}{i}\geq \frac{2}{n(n+1)}\tag{2} $$
and equality is achieved iff $\frac{a_i}{\sqrt{i}}=\lambda\sqrt{i}$, i.e. iff $a_i = \frac{2i}{n(n+1)}$.
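A quick numerical check of the bound (my own addition): the claimed minimizer attains $\frac{2}{n(n+1)}$, and random points on the simplex never do better.

```python
import random

# Verify that a_i = 2i/(n(n+1)) attains sum a_i^2/i = 2/(n(n+1)), and that
# random positive a_i summing to 1 never fall below that bound.
n = 6
bound = 2 / (n * (n + 1))
a_star = [2 * i / (n * (n + 1)) for i in range(1, n + 1)]
assert abs(sum(x**2 / i for i, x in enumerate(a_star, 1)) - bound) < 1e-12
for _ in range(2000):
    w = [random.random() + 1e-9 for _ in range(n)]
    a = [x / sum(w) for x in w]  # a random point with positive entries summing to 1
    assert sum(x**2 / i for i, x in enumerate(a, 1)) >= bound - 1e-12
```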


integration - Integrating the gamma function




I assumed that
$$\Gamma\left(k+\frac{1}{2}\right)=2\int^\infty_0 e^{-x^2}x^{2k}\,dx=\frac{\sqrt{\pi}(2k)!}{4^k k!} \,,\space k>-\frac{1}{2}$$
and that
$$\Gamma\left(k+\frac{3}{2}\right)=2\int^\infty_0 e^{-x^2}x^{2(k+1)}\,dx$$



and my goal is to solve the integral and get a function in terms of $k$ for $\Gamma\left(k+\frac{3}{2}\right)$



I use partial integration and differentiate $x^2$ and integrate the rest:
$$=\left[x^2\cdot 2\int^\infty_0 e^{-x^2}x^{2k}\,dx \right]^\infty_0 - \int^\infty_02x\left(2\int^\infty_0 e^{-x^2}x^{2k}\,dx\right)\,dx$$
and then I substitute the above function in terms of k and get:

$$=\left[x^2\frac{\sqrt{\pi}(2k)!}{4^k k!}\right]^\infty_0 - \int^\infty_02x\frac{\sqrt{\pi}(2k)!}{4^k k!}\,dx$$
$$=\left[x^2\frac{\sqrt{\pi}(2k)!}{4^k k!}\right]^\infty_0 - \left[x^2\frac{\sqrt{\pi}(2k)!}{4^k k!}\right]^\infty_0 =0$$



I know for sure that the final answer is wrong. I think my problem has to do with the substitution of the definite integral in the penultimate step. How can I make the math work out?



EDIT: Sorry for not mentioning previously but this is part of a proof by induction. The first statement is only assumed to be true.


Answer



Let us assume that



$$\Gamma\left(k+\frac{1}{2}\right)=2\int^\infty_0 e^{-x^2}x^{2k}\,dx=\frac{\sqrt{\pi}(2k)!}{4^k k!}$$




1- for $k=0$ we have



$$\Gamma\left(\frac{1}{2}\right)=2\int^\infty_0 e^{-x^2}\,dx=\sqrt{\pi}$$



which holds true since



$$\int^\infty_{-\infty} e^{-x^2}\,dx=\sqrt{\pi}$$



2- We need to prove the case $P(k)\to P(k+1)$




$$\Gamma\left(k+1+\frac{1}{2}\right) = \left( k+\frac{1}{2}\right)\Gamma\left( k+\frac{1}{2}\right)$$



From the inductive step we have



$$\left( k+\frac{1}{2}\right)\Gamma\left( k+\frac{1}{2}\right) =\left( k+\frac{1}{2}\right) \frac{\sqrt{\pi}(2k)!}{4^k k!} = \sqrt{\pi}\frac{(2k+1)(2k)!}{2\times4^kk!} $$



Mutliply and divide by $(2k+2)$



$$ \frac{\sqrt{\pi}}{4}\frac{(2k+2)(2k+1)(2k)!}{ 4^k (k+1)k!} =\frac{\sqrt{\pi}(2(k+1))!}{4^{(k+1)}(k+1)!}\blacksquare$$
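The closed form can also be checked numerically against `math.gamma` (my own sanity check of the answer's formula):

```python
import math

# Compare Γ(k + 1/2) with sqrt(pi) * (2k)! / (4^k * k!) for small k.
for k in range(12):
    exact = math.sqrt(math.pi) * math.factorial(2 * k) / (4**k * math.factorial(k))
    assert math.isclose(math.gamma(k + 0.5), exact, rel_tol=1e-12)
```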



calculus - What is$limlimits_{n rightarrow +infty} left(int_{a}^{b}e^{-nt^2}text{d}tright)^{1/n}$?




For $(a,b) \in \left(\mathbb{R}^{*+}\right)^2$, let $\left(I_n\right)_{n \in \mathbb{N}}$ be the sequence defined by
$$
I_n = \left(\int_{a}^{b}e^{-nt^2}\,\text{d}t\right)^{1/n}
$$
I'm asked to calculate the limit of $I_n$ when $ \ n \rightarrow +\infty$.



I've shown that
$$

\int_{x}^{+\infty}e^{-t^2}\text{d}t \underset{(+\infty)}{\sim}\frac{e^{-x^2}}{2x}
$$
However, how can I use it ? I wrote that
$$
\int_{a}^{b}e^{-nt^2}\text{d}t=\frac{1}{\sqrt{n}}\int_{\sqrt{n}a}^{\sqrt{n}b}e^{-t^2}\text{d}t
$$
Hence I wanted to split it into two integrals and use the equivalent twice, but I cannot add equivalents, so... Any idea?


Answer



First answer. This has some problems but now it is fixed.




So you have the result:
\begin{align}\tag{1}
\int^{\infty}_x e^{-t^2}\,dt = \frac{e^{-x^2}}{2x}+o\left(\frac{e^{-x^2}}{x}\right) \ \ \ \text{as} \ \ x\to\infty
\end{align}
In your last step, you had a mistake. It would be:
\begin{align}
\int^b_a e^{-nt^2}\,dt &= \frac{1}{\sqrt[]{n}}\int^{\sqrt[]{n}b}_{\sqrt[]{n}a}e^{-t^2}\,dt\\
& = \frac{1}{\sqrt[]{n}}\left(\int^{\infty}_{\sqrt[]{n}a}e^{-t^2}\,dt - \int^\infty_{\sqrt[]{n}b}e^{-t^2}\,dt \right)\\
\end{align}
Assume $0 < a < b$. Applying $(1)$ at $x=\sqrt{n}\,a$ and $x=\sqrt{n}\,b$ (the term coming from $\sqrt{n}\,b$ is exponentially smaller and is absorbed into the error), we get:
\begin{align}\tag{2}
\int^b_a e^{-nt^2}\,dt = \frac{e^{-na^2}}{2na}+o\left(\frac{e^{-na^2}}{n}\right)
\end{align}
For $n$ large enough we can take $n$-th root on both sides of $(2)$ to get:
\begin{align}
\left(\int^b_a e^{-nt^2}\,dt\right)^{1/n}&=\left[\frac{e^{-na^2}}{2na}+o\left(\frac{e^{-na^2}}{n}\right)\right]^{1/n}\\
&=e^{-a^2}\frac{1}{n^{1/n}(2a)^{1/n}}\left[1+o\left(1\right)\right]^{1/n}\\
&\to e^{-a^2}
\end{align}
Where we have used $c_n^{1/n}\to 1$ for $c_n$ strictly positive and bounded away from $0$ and the fact that $\sqrt[n]{n}\to 1$.




$(\star)$: If you allow $a=0$, then something similar can be done which is even easier.






Edit One can also come up with the asymptotics of the integral:
\begin{align}
I_n^n=\int^b_a e^{-nt^2}\,dt
\end{align}
Assume $0 < a < b$. By the Laplace Method, we get:

\begin{align}
I_n^n\sim \frac{e^{-na^2}}{2an}
\end{align}
Taking $n$-th root we obtain the result:
\begin{align}
\lim_{n\to\infty} I_n = e^{-a^2}
\end{align}
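The convergence to $e^{-a^2}$ is visible numerically already for moderate $n$; here is a rough midpoint-rule check of my own (the step count is arbitrary):

```python
import math

# Midpoint-rule approximation of I_n = (∫_a^b e^{-n t^2} dt)^(1/n) for 0 < a < b.
def I(n, a, b, steps=20000):
    h = (b - a) / steps
    total = sum(math.exp(-n * (a + (k + 0.5) * h) ** 2) for k in range(steps))
    return (total * h) ** (1 / n)

a, b = 0.5, 2.0
print([round(I(n, a, b), 4) for n in (10, 100, 1000)], round(math.exp(-a**2), 4))
assert abs(I(1000, a, b) - math.exp(-a**2)) < 0.01
```

The $n$-th root makes the limit very forgiving of errors in the integral itself, so even a crude quadrature suffices.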


general topology - Constructing a circle from a square




I have seen a [picture like this] several times:



[Figure: the "troll proof" staircase construction]




featuring a "troll proof" that $\pi=4$. Obviously the construction does not yield a circle, starting from a square, but how to rigorously and demonstratively prove it?



For reference, we start with a circle inscribed in a square with side length 1. A step consists of reflecting each corner of figure $F_i$ so that it lies precisely on the circle and yielding figure $F_{i+1}$. $F_0$ is the square with side length 1. After infinitely many steps we have a figure $F_\infty$. Prove that it isn't a circle.



Possible ways of thinking:




  1. Since the perimeter of figure $F_i$ indeed does not change during a step, it is invariant. Since it does not equal the perimeter of the circle, $\pi\neq4$, it cannot be a circle.




While it seems to work, I do not find this proof demonstrative enough - it does not show why $F_\infty$ which looks very much like a circle to us, is not one.




  2. Consider one corner of the square $F_0$. Let $t$ be a coordinate along the edge of this corner, $0 \leq t \leq 1$ and $t=0, t=1$ being the points of tangency for this corner of $F_0$ and the circle.
    By construction, all points $t \in A=\{ \frac{n}{2^m} | (n,m\in \mathbb{N}) \& (n<2^m)\}$ of $F_\infty$ lie on the circle. I think it can be shown that the rest of the points, $\bar{A}=[0;1] \backslash A$, lie in an $\varepsilon$-neighbourhood $U$ of the circle. I also think that in the limit $\varepsilon \to 0$, points $ t\in\bar{A}$ also lie on the circle. Am I wrong in thinking this? Can we get a contradiction from this line of thought?



Any other elucidating proofs and thoughts are also welcomed, of course.


Answer



You have rigorously defined $F_i$, but how do you define $F_\infty$? You cannot say: "after infinitely many steps...".




In this case you could define $F_\infty = \bigcap_i F_i$ (i.e. the intersection of all $F_i$), since $F_i$ is a decreasing sequence this is a good notion of limit. Notice however that $F_\infty$ is a circle! But this does not mean that the perimeter of $F_i$ should converge to the perimeter of $F_\infty$.



You could also choose a metric on subsets of the plane to define some sort of convergence $F_i \to F_\infty$ as $i\to \infty$. In any case, if you choose any good metric you find that either $F_\infty$ is the circle or that the sequence does not converge.



The point here is that the perimeter is not continuous with respect to the convergence of sets... so even if $F_i\to F_\infty$ (in any decent notion of convergence) you cannot say that $P(F_i)\to P(F_\infty)$ (where $P$ is the perimeter).


Show that exists some positive integer $N$ such that among $1,2,...,N$, there are at least $0.99N$ good numbers.

Given a positive integer $k$, call $n$ good, if among $$\binom{n}{0},\binom{n}{1},\binom{n}{2},...,\binom{n}{n}$$ at least $0.99n$ of them are divisible by $k$. Show that exists some positive integer $N$ such that among $1,2,...,N$, there are at least $0.99N$ good numbers.



some try: for all prime $p$ and nonnegative integers $m,n$,
$$\nu_p \binom{m+n}{n} = \frac{s_p(m)+s_p(n)-s_p(m+n)}{p-1}$$
which is equal to the number of carries when adding $m$ and $n$ in mod $p$.
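The quoted formula (Legendre's formula, in Kummer's carry formulation) is easy to test computationally; this is my own check, not part of a solution:

```python
import math

# Check ν_p(C(m+n, n)) = (s_p(m) + s_p(n) - s_p(m+n)) / (p - 1) for small inputs,
# where s_p is the base-p digit sum and ν_p the p-adic valuation.
def digit_sum(n, p):
    total = 0
    while n:
        total += n % p
        n //= p
    return total

def valuation(n, p):  # largest e with p^e dividing n, for n >= 1
    e = 0
    while n % p == 0:
        e += 1
        n //= p
    return e

for p in (2, 3, 5):
    for m in range(40):
        for n in range(40):
            lhs = valuation(math.comb(m + n, n), p)
            assert lhs * (p - 1) == digit_sum(m, p) + digit_sum(n, p) - digit_sum(m + n, p)
```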

Friday 26 June 2015

linear algebra - Show that the matrix $(c{bf A})^n= c^n{bf A}^n$

I am being asked to show that given a square, invertible matrix $\bf A$, then $(c{\bf A})^n= c^n{\bf A}^n$ for all non zero $c$'s in $\mathbb R$.



I've tried just sort of writing down the definition of invertibility and playing a bit with that but it doesn't seem to be working. I think perhaps there's some basic property I'm missing.




Thanks for any help.
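A possible route (a sketch of my own; note that invertibility of $\bf A$ is not actually needed when $n$ is a positive integer) is induction on $n$, using the fact that a scalar commutes with every matrix:

```latex
% Base case n = 1: (cA)^1 = cA = c^1 A^1.
% Inductive step: if (cA)^k = c^k A^k, then
(c\mathbf{A})^{k+1} = (c\mathbf{A})^k (c\mathbf{A})
                    = (c^k \mathbf{A}^k)(c\mathbf{A})
                    = c^k c\, \mathbf{A}^k \mathbf{A}
                    = c^{k+1}\mathbf{A}^{k+1}.
```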

calculus - Finishing proof of identity $sum_{k=b}^{n} binom{n}{k} binom{k}{b} = 2^{n-b} binom{n}{b}$



The identity



$$
\sum_{k=b}^{n} \binom{n}{k} \binom{k}{b} = 2^{n-b} \binom{n}{b}\
$$



is one of a few combinatorial identities I having been trying to prove, and it has taken me way too long. I am using the principles most familiar to me (which are algebra, some basic combinatorial identities, but not applying differentiation or proof by bijection).




First I tried to see whether finding an identity for $\sum\limits_{k=0}^n \binom{n}{k}$ leads anywhere.



$$\begin{align}
&\sum_{k=0}^{n} \binom{n}{k} = \sum_{0 \le k \lt b} \binom{n}{k} + \sum_{b \lt k \lt n} \binom{n}{k} \tag{1} \\
\\
\end{align}$$



But it didn't for me, so I started over and next tried



$$\begin{align}

&\sum_{k=b}^{n} \binom{n}{k} \binom{k}{b} = \sum_{k=b}^{n} \left( \frac{n!}{k! (n-k)! } \right) \left( \frac{k!}{(k-b)!} \right) \tag{2} \\
\\
\end{align}$$



but this also fell short of a proof.



It is really hard for me to step away from the problem. I was just hoping for a really a big hint on how to proceed.


Answer



Try thinking of this in terms of what it's counting, instead of algebraically. If we view this as counting committees, we're picking some big committee (of size at least b) of our n employees, and then choosing a subcommittee of that with b employees. However, what if we try picking the b employees first, and then choosing some extra people for the bigger committee?
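Before hunting for the bijective "committee" argument, one can confirm the identity by brute force (my own check):

```python
import math

# Check sum_{k=b}^{n} C(n,k) * C(k,b) == 2^(n-b) * C(n,b) for all small n, b.
for n in range(15):
    for b in range(n + 1):
        lhs = sum(math.comb(n, k) * math.comb(k, b) for k in range(b, n + 1))
        assert lhs == 2 ** (n - b) * math.comb(n, b)
```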


algebra precalculus - Induction proof I'm having trouble with: $1+x+x^2+x^3+...+x^n = frac{1-x^{n+1}}{1-x}$



So I'm being asked to use induction to prove that for every $x\in\{a\ |\ a\in \mathbb{R},\ a\neq 1\}$ and for every $n\in \mathbb{N}$



$$
1+x+x^2+x^3+...+x^n = \frac{1-x^{n+1}}{1-x}
$$



I have no trouble proving it for $n=1$ :




$$
1+x = \frac{1-x^2}{1-x}
$$
Factor the polynomial:
$$
1+x = \frac{(1-x)(1+x)}{1-x}
$$
Divide by $(1-x)$
$$
1+x=1+x

$$



And there you have it. The trouble I'm running into is with the induction step. If we assume that our claim is true for $n=k$, then we must show that
$$
1+x+x^2+x^3+...+x^k+x^{k+1} = \frac{1-x^{k+2}}{1-x}
$$
Or in other words,
$$
\frac{1-x^{k+1}}{1-x} + x^{k+1} = \frac{1-x^{k+2}}{1-x}
$$




Can someone help with this? I'm having some trouble with the factoring and the book I'm studying from isn't very clear on how they proved the last equation is true.



Thanks in advance :)


Answer



From where you are stuck, there is only one little step left: reduce both terms of the LHS to the same denominator. That is, $$\frac{1-x^{k+1}}{1-x} + x^{k+1} = \frac{1-x^{k+1}+(1-x) x^{k+1}}{1-x}=\frac{1-x^{k+1}+x^{k+1} - x\cdot x^{k+1}}{1-x} = \frac{1-x^{k+2}}{1-x}.$$
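A quick numerical spot check of the closed form (my own addition), for a few values of $x \neq 1$:

```python
# Compare 1 + x + ... + x^n against (1 - x^(n+1)) / (1 - x).
for x in (-2.0, -0.5, 0.3, 2.0):
    for n in range(1, 12):
        lhs = sum(x**k for k in range(n + 1))
        rhs = (1 - x ** (n + 1)) / (1 - x)
        assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(rhs))
```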


calculus - Prove ${largeint}_{-1}^1frac{dx}{sqrt[3]{9+4sqrt5,x} left(1-x^2right)^{2/3}}=frac{3^{3/2}}{2^{4/3}5^{5/6}pi }Gamma^3left(frac13right)$



Here is one more conjecture I discovered numerically:
$${\large\int}_{-1}^1\frac{dx}{\sqrt[3]{9+4\sqrt5\,x}\ \left(1-x^2\right)^{\small2/3}}\stackrel{\color{#808080}?}=\frac{3^{\small3/2}}{2^{\small4/3}\,5^{\small5/6}\,\pi }\Gamma^3\!\!\left(\tfrac13\right)$$

How can we prove it?



Note that $\sqrt[3]{9+4\sqrt5}=\phi^2$.
Mathematica can evaluate this integral, but gives a large expression in terms of Gauss and Appel hypergeometric functions of irrational arguments.


Answer



I will start with and prove Chen Wang's equivalent formulation:
$$ F\left({\tfrac13,\tfrac12\atop \tfrac56}\middle| \tfrac45 \right) =
\frac{3}{\sqrt{5}}. $$



By the integral representation of hypergeometric functions

(DLMF 15.6.E1), this is equal to
$$ \frac1{B(\frac13,\frac12)}\int_0^1
\frac{dx}{x^{2/3}(1-x)^{1/2}(1-A^6x)^{1/2}}, $$
where $A = (4/5)^{1/6}$ is easier to use than $\frac45$. Let the
integral be denoted by $I$. Introducing two changes of variables,
$x\mapsto 1/u^3$ and later $u=A^2/v$, we see that
$$ I = \int_1^\infty \frac{3u du}{\sqrt{(u^3-1)(u^3-A^6)}} =
\int_0^{A^2} \frac{3A\,dv}{\sqrt{(1-v^3)(A^6-v^3)}}. $$



The hyperelliptic curve

$$ y^2 = (x^3-1)(x^3-A^6), \qquad \frac{1}{3A}I = \int_0^{A^2}\frac{dx}{y} =
\int_1^\infty \frac{x}{A}\frac{dx}{y} $$
admits an involution $x\mapsto A^2/x$, and, as demonstrated very
clearly by Jyrki Lahtonen
here, there is a
rational change of variables that maps this curve onto the curve
$$ s^2 = t^3 + 9A^2t^2 + 6A(A^3+1)^2t+(A^3+1)^4. $$



In particular, first by writing
$$ u = x+A^2/x, \qquad v = y\left(\frac1x + \frac A{x^2}\right),

\qquad \frac{v/y}{du/dx} = \frac1{x-A}, $$
we get
$$ \frac{2}{3A} I = \int_0^{A^2}\frac{dx}{y} + \int_1^\infty
\frac{x}{A}\frac{dx}{y} = \int_{1+A^2}^\infty
\frac{du}{v}\frac{v/y}{du/dx}\left(\frac xA-1\right) = \frac{{\color{red}6}}{A}\int_{1+A^2}^\infty \frac{du}{v}. $$
(I lost a factor of $6$ somewhere in my notes; I'll edit this once I
find it.) And transforming to
$$ t = -\frac{(A^3+1)^2}{u+2A}, \qquad s =
\frac{(A^3+1)^2v}{(u+2A)^2}, $$
gives

$$ I = 9\int_{t_1}^{0}\frac{dt}{s}, \qquad t_1 = -(1-A+A^2)^2. $$



Finally, the curve $(s,t)$ is elliptic, and sage's function
isogenies_prime_degree tells us that there exists a rational map
given by
$$\begin{eqnarray}z &=& \Big(9000 A^2 \left(754+843 A^3\right) t+63000 \left(94+105 A^3\right) t^2+67500 A \left(34+35 A^3\right) t^3\\&&+112500 A^2 \left(4+3 A^3\right) t^4+45000 t^5\Big)\Big/\\&&\Big(60508 A^2+67650 A^5+100 \left(754+843 A^3\right) t+75 A \left(514+575 A^3\right) t^2\\&&+625 A^2\left(14+15 A^3\right) t^3+1250 t^4\Big),\end{eqnarray}$$
$$ w/s = \left(345600 \left(51841+57960 A^3\right)+7776000 A \left(2889+3230 A^3\right) t+1620000 A^2 \left(8278+9255 A^3\right) t^2+1080000 \left(4136+4635 A^3\right) t^3+48600000 A \left(21+25 A^3\right) t^4+10125000 A^2 \left(14+15 A^3\right) t^5+13500000 t^6\right)/\left(32 \left(832040+930249 A^3\right)+1200 A \left(46368+51841 A^3\right) t+300 A^2 \left(159454+178275 A^3\right) t^2+5000 \left(3872+4329 A^3\right) t^3+7500 A \left(648+725 A^3\right) t^4+46875 A^2 \left(14+15 A^3\right) t^5+62500 t^6\right) $$
with
$$ \frac{w/s}{dz/dt} = 6, $$
that maps the curve $(s,t)$ to the curve

$$ w^2 = z^3+180^3. $$



This means that the integral is given by
$$ I = 9\times 6\times \int_{-180}^0 \frac{dz}{\sqrt{z^3+180^3}} =
\frac{3}{\sqrt{5}}B(\tfrac12,\tfrac13), $$
where the last integral is elementary in terms of beta functions. Putting things together gives
the desired result.
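The identity can also be confirmed numerically. In the sketch below (my own check), the substitutions $1 \mp x = s^3$ on the two half-intervals remove the endpoint singularities, after which the midpoint rule converges quickly:

```python
import math

# LHS: ∫_{-1}^{1} dx / ((9 + 4√5 x)^(1/3) * (1 - x^2)^(2/3)); substituting
# 1 - x = s^3 on [0, 1] and 1 + x = s^3 on [-1, 0] gives smooth integrands on [0, 1].
c = 4 * math.sqrt(5)

def g(x, sign):  # integrand in s for the half-interval whose endpoint is sign * 1
    return 3 / ((9 + c * x) ** (1 / 3) * (1 + sign * x) ** (2 / 3))

def half(sign, steps=200000):  # midpoint rule for ∫_0^1 g ds
    h = 1.0 / steps
    return h * sum(g(sign * (1 - ((k + 0.5) * h) ** 3), sign) for k in range(steps))

lhs = half(+1) + half(-1)
rhs = 3**1.5 / (2 ** (4 / 3) * 5 ** (5 / 6) * math.pi) * math.gamma(1 / 3) ** 3
assert abs(lhs - rhs) < 1e-6 * rhs
```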


general topology - Show that continuous functions preserve sequence convergence in topological spaces




Let $(X, \mathcal{T})$ and $(Y, \mathcal{U})$ be topological spaces and let $f : X \rightarrow Y$.



Suppose $f$ is continuous, and $\{x_n\}_{n=1}^{\infty}$ is a sequence in $X$ converging to a point $x$.



I need to show that $\{f(x_n)\}_{n=1}^{\infty}$ converges to $f(x)$.



This is what I have so far:




Since $f$ is continuous, we have that $f(\overline A) =$

$\overline{f(A)}$. We see that $\overline{\{x_n\}_{n=1}^{\infty}} =$
$\{x_n\}_{n=1}^{\infty} \cup \{x\}$ and so we have
$f(\{x_n\}_{n=1}^{\infty} \cup \{x\}) =$
$\overline{\{f(x_n)\}_{n=1}^{\infty}} = \{f(x_n)\}_{n=1}^{\infty} \cup$
$f(x)$. Therefore, there exists a sequence in
$\{f(x_n)\}_{n=1}^{\infty}$ that converges to $f(x)$.




But now I'm stuck because I don't know how to show that this sequence is $\{f(x_n)\}_{n=1}^{\infty}$ and not just some subsequence of it?


Answer




For any neighborhood $V$ of $f(x)$, $f^{-1}(V)$ is a neighborhood of $x$, so there exists some $N\in \mathbb{N}$ such that $x_n\in f^{-1}(V)$ for all $n>N$, i.e. $f(x_n)\in V$ for all $n>N$.


Thursday 25 June 2015

calculus - So close yet so far Finding $int frac {sec x tan x}{3x+5} dx$



Cruising the old questions I came across juantheron asking for $\int \frac {\sec x\tan x}{3x+5}\,dx$. He tried using $(3x+5)^{-1}$ for $U$ and $\sec x \tan x$ for $dv$ while integrating by parts. Below is his work.



How can I calculate
$$
\int {\sec\left(x\right)\tan\left(x\right) \over 3x + 5}\,{\rm d}x

$$



My Try:: $\displaystyle \int \frac{1}{3x+5}\left(\sec x\tan x \right)\,\mathrm dx$



Now Using Integration by Parts::



We get



$$= \frac{1}{3x+5}\sec x +\int \frac{3}{(3x+5)^2}\sec x\,\mathrm dx$$




Here he hit his road block.



I tried the opposite tactic



Taking the other approach by parts.



let $$U= \sec x \tan x$$ then $$du= (\tan^2 x \sec x +\sec^3 x)\,dx$$ and $$dv=(3x+5)^{-1}\,dx$$ then $$v=\frac 1 3 \ln(3x+5)$$ Thus $$\int \frac {\sec x \tan x}{3x+5}\,dx= \frac {\ln(3x+5)\sec x \tan x}{3} - \int \frac {\ln(3x+5) [\tan^2 x \sec x +\sec^3 x]}{3} \,dx$$



As you can see I got no further than he did.




So how many times do you have to complete integration by parts to get the integral of the original $\frac {\sec x \tan x}{3x+5} \, dx$ or is there a better way?


Answer



Integrating elementary functions in elementary terms is completely algorithmic. The algorithm is implemented in all major computer algebra systems, so the fact that Mathematica fails to integrate this in closed form can be viewed as (in effect) a proof that such a closed form does not exist in elementary terms.



To answer the comments:




  1. You may or may not trust Mathematica (I don't always, but do for this). The fact that "its algorithm is not open for inspection" is not relevant -- it would take you much longer to figure out what the code does than to run the algorithm by hand (well, you need some linear algebra).


  2. If you do want to try it in the privacy of your own home, you need to go no further than the late, lamented Manuel Bronstein's excellent book: http://www.amazon.com/Symbolic-Integration-Transcendental-Computation-Mathematics/dp/3540214933


  3. I am quite sure that this particular integral is easy to show non-integrable in elementary terms by hand (if you understand the Risch algorithm).




limits - Shouldn't this function be discontinuous everywhere?



I was thinking about single point continuity and came across this function. $$
f(x) = \left\{
\begin{array}{ll}
x & \quad x\in \mathbb{Q}\\
2-x & \quad x\notin \mathbb{Q}
\end{array}
\right.
$$ We know this function is continuous only at $x=1$ . But doesn't that contradict our whole idea of continuity? A function is continuous if we are able to draw the function without lifting our pen or pencil. But here both the pieces of the function exist at specific places, so we have to lift our pen. Shouldn't the function be discontinuous everywhere? Looks like a stupid doubt though.



Answer



More precisely, the intuition is about continuity over an interval: a function is continuous on an interval if we are able to draw it within that interval without lifting our pen or pencil. This function is continuous at only one point, so there is no interval on which that picture applies.
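To make the single point of continuity concrete (my own elaboration): both branches give the same distance to $f(1)=1$,

```latex
|f(x) - f(1)| \;=\;
\begin{cases}
|x - 1|, & x \in \mathbb{Q},\\[2pt]
|(2 - x) - 1| = |x - 1|, & x \notin \mathbb{Q},
\end{cases}
```

so $|f(x)-f(1)|=|x-1|$ for every $x$ and $\delta=\varepsilon$ works at $x=1$, while at any other point $c$ the two branches approach the distinct values $c$ and $2-c$.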


Inductive Step For a Summation mathematical induction

I have a bit of trouble with how I should go about showing the inductive step for my induction problem. I know the general idea is to show that it can work for all numbers based on the base case but I'm still stuck on how to show it.



$\sum_{i=2}^n \frac{1}{i} \leq \frac{n}{2}$ for $n \geq 2$



I know we have to plug in $k+1$ instead of $n$ and show that the inequality still holds, using the case $n=k$. I'm just stuck on how to show it. I guess the $i$ is the part that is confusing me.

multivariable calculus - The meaning of Differentials in Integration




This is further to the questions discussed in a previous post



Here is an example of what I mean: Suppose that $C$ is a closed path in the plane and consider the line integral of $xy\,dx+x^2\,dy$ over $C$. Note that this can be written as $$\oint_C x\,d(xy)$$



Question: If we let $A$ denote the starting and ending point of the path, is it valid to use "Integration by Parts" to write $$\oint_C x\,d(xy)=x\cdot xy\bigg|_A^A -\oint_C xy\cdot dx=-\oint_C xy\cdot dx$$ Note that the claim is valid, as the sum of the two integrands is $2xy\,dx+x^2\,dy$ which is conservative, being the gradient of $f(x,y)=x^2y$. Therefore the sum of the two integrals in the last equation is zero, hence they are negatives of each other. (And obviously, the second integral will usually be more desirable to compute than the first.)



But what does this mean, and why is it true?



More generally, I think it would insightful to have a discussion about differentials in the context of integration and how/why the Leibniz notation can or cannot be used to intuitively understand calculations such as the above.




And how can we interpret differentials of general quantities which may involve several variables, as opposed to just single ones? Does the expression $d(xy)$ only make sense in the context of a path $C$, where we can view it as the change in $g(x,y)=xy$ over small segments of that curve, or does it admit a broader interpretation?


Answer



There is a satisfactory modern account of this in terms of differential forms, though it is not the only possible account. Here $d(xy)$ is a differential 1-form. It is shown in general theory that it satisfies the suitable version of the Leibniz rule. Therefore these manipulations are easily accounted for in that framework. In general, the viewpoint is that what one integrates when path integrals are concerned are not functions but differential 1-forms. One can then interpret the results in terms of functions, but the interpretation is less elegant and sometimes you run into problems, such as the notorious change of sign when you switch $dx$ and $dy$ in multiple integrals.



See Writing Integrals using Differential Forms for a related discussion.


Wednesday 24 June 2015

real analysis - The limit of the difference of two consecutive sequence members is equal to $0$. Can we conclude that the sequence itself has a limit?



Let $a_n$ be an infinite sequence. The limit of the difference of two consecutive members is equal to $0$. Can we conclude that the sequence itself has a limit?



My attempt:
We have
$$

\lim_{n\rightarrow\infty}{a_n} - \lim_{n\rightarrow\infty}{a_{n-1}} = 0
$$
since as $n$ approaches infinity $a_{n-1}$ gets arbitrarily close to $a_n$ the sequence cannot diverge or be bounded but have no limit.



Is my proof correct and how would I be able to formalize the last sentence? Thanks


Answer



While I can't entirely follow your proof, it seems to be assuming the existence of $\lim_{n\to\infty}a_n$, which is what you're trying to prove or disprove.



Here's an example which shows that you cannot conclude that $\lim_{n\to\infty}a_n$ exists: Let $a_n=1+\frac{1}{2}+\dots+\frac{1}{n}$. Then $a_{n+1}-a_n=\frac{1}{n+1}\to0$, but the sequence $\{a_n\}$ diverges.
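The harmonic example is easy to see numerically (my own illustration): the step sizes $a_{n+1}-a_n = \frac{1}{n+1}$ shrink to $0$, while the partial sums track $\log n$ and keep growing.

```python
import math

# a_n = H_n = 1 + 1/2 + ... + 1/n: the steps tend to 0, yet
# H_n ≈ log(n) + 0.5772... (the Euler–Mascheroni constant) diverges.
H = 0.0
N = 10**6
for n in range(1, N + 1):
    H += 1.0 / n
assert abs(H - (math.log(N) + 0.5772156649)) < 1e-4
assert H > 14  # already past 14 at n = 10^6, and it never stops growing
```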


complex analysis - Conformal map from $mathbb C - [-1, 1]$ onto the exterior of unit disc $mathbb C - overline{mathbb D}$.

I am finding a conformal map from $\mathbb C - [-1, 1]$ onto $\mathbb C - \overline{\mathbb D}$, where $\mathbb D$ denote the open unit disk.



The hint of this exercise says that I could make use of square root $\sqrt{\quad}$ to construct such a function, but I can't figure out what's the relation here since I don't think $\sqrt{\quad}$ can be defined on the domain.



Could anybody give me some further hints?



Rmk: By conformal we means that the desired function $f$ is holomorphic with nowhere vanishing derivative but not necessarily bijective.

Question regarding the Cauchy functional equation



Is it true that, if a real function $f$ satisfies $f(x+y) = f(x) + f(y)$ and vanishes at some $k \neq 0$, then $f(x) = 0$? Over the rationals(or, allowing certain conditions like continuity or monotonicity), this is clear since it is well known that the only solutions to this equation are functions of the form $f(x) = cx$. The reason I'm asking is to see whether or not there's "weird" solutions other than the trivial one.



Some observations are that $f(x) = f(x+k) = -f(k-x)$. $f$ is periodic with $k$.



It is easy to see that at $x=\frac{k}{2}$ the function also vanishes, and so, iterating this process, the function vanishes at and has a "period" of $\frac{k}{2^n}$ for all $n$. If the period can be made arbitrarily small, I want to say that implies the function is constant, but of course I don't know how to preclude pathological functions.


Answer




The values of $f$ can be assigned arbitrarily on the elements of a Hamel basis for the reals over the rationals, and then extended to all of $\mathbb{R}$ by $\mathbb{Q}$-linearity. So (assuming the Axiom of Choice) there are indeed weird solutions.


Tuesday 23 June 2015

elementary number theory - Solving a system of equations using modular arithmetic modulo 5



Give the solution to the following system of equations using modular arithmetic modulo 5:




$4x + 3y \equiv 0 \pmod{5}$
$2x + y \equiv 3 \pmod{5}$




I multiplied $2x + y \equiv 3 \pmod 5$ by $-2$, getting $-4x - 2y \equiv -6 \pmod{5}$.




$-6 \pmod{5} \equiv 4 \pmod 5$



Then I added the two equations:




$4x + 3y \equiv 0 \pmod{5}$
$-4x - 2y \equiv 4 \pmod{5}$




This simplifies to $y \equiv 4 \pmod{5}$.




I then plug this into the first equation: $4x + 3(4) \equiv 0 \pmod{5}$



Wrong work:




Thus, $x = 3$.
But when I plug the values into the second equation, I get $2(3) + 4 \not\equiv 3 \pmod{5}$.
What am I doing wrong?





EDIT:



Revised work:




$x = -3 \pmod{5} = 2 \pmod{5}$.
Now when I plug the values into the second equation, I get $2(2) + 4 \equiv 8 \pmod{5} \equiv 3 \pmod{5}$.



Answer



Sign error on substitution, it should be $x\equiv -3\pmod{5}$.




You had $4x+(3)(4)\equiv 0$, that is, $4(x+3)\equiv 0$. From this we get $x+3\equiv 0$, so $x\equiv -3\pmod{5}$.



Negative numbers are sometimes troublesome, so we may wish to rewrite as $x\equiv 2\pmod{5}$.
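With a modulus as small as $5$, the whole system can also be checked exhaustively (my own addition), confirming $x \equiv 2$, $y \equiv 4 \pmod 5$ as the unique solution:

```python
# Enumerate all residue pairs mod 5 and keep those satisfying both congruences.
solutions = [(x, y) for x in range(5) for y in range(5)
             if (4 * x + 3 * y) % 5 == 0 and (2 * x + y) % 5 == 3]
assert solutions == [(2, 4)]  # unique: the determinant 4*1 - 3*2 ≡ 3 is invertible mod 5
```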


Monday 22 June 2015

Functional equation of divisibility



I am struggling with the following issue:

Find all functions $f:\mathbb{N}_{+}\rightarrow\mathbb{N}_{+} $, such that for all positive integers $m$ and $n$, there is divisibility
$$m^2+f(n)\mid mf(m)+n$$



$\mathbb{N}_{+}$ stands for the set of positive integers



I've tried various substitutions but I don't know how to solve functional equations of this form, therefore I couldn't manage to find any $f$. I think this is an interesting problem so I'd like to know the answer.


Answer



Thanks @lulu for showing that $f(1)=1$. Now I can finish the problem by proving that the identity is the only possibility for $f$.



Suppose that there exists $m$ such that $f(m)<m$. Taking $n=1$, we have
$$m^2+1=m^2+f(1)\le mf(m)+1<m^2+1\ ,$$
a contradiction. Now suppose there exists $n$ such that $f(n)>n$. Taking $m=1$, we have
$$1+n<1+f(n)\le f(1)+n=1+n\ ,$$
another contradiction. Thus $f(n)=n$ for all $n$.


Addendum. Proof that $f(1)=1$. This proof was provided in a comment by Lulu, I'm copying it verbatim here in case the comment disappears in the future.

Setting $n=1$ we see that $m^2+f(1)$ divides $mf(m)+1$ for all $m$. But if there were a prime $p\mid f(1)$ then taking $m=p$ gives a contradiction, hence there is no such $p$ which implies that $f(1)=1$.


measure theory - $v(B)=int_{B} f dmu $



I have a question in integration theory:




If I have $(\Psi,\mathcal{G},\mu)$ a $\sigma$-finite measure space and $f$ a $[0,\infty]$-valued measurable function on $(\Psi,\mathcal{G})$ that is finite a.s.



So my question is if I define for $B\in \mathcal{G}$ $$v(B)=\int_{B} f d\mu $$




Is $(\Psi,\mathcal{G},v)$ a $\sigma$-finite measure space too ?




I think this relationship between $v$ and $\mu$ can help me for calculational purposes.




Could someone help me? Thanks for the time and help.


Answer



If $\mu$ is $\sigma$-finite, there exists a countable collection of disjoint sets $X_i$ s.t. $\mu(X_i)<\infty$ and $\bigcup_{i\ge 1}X_i=X$. Consider $F_j=\{j-1\le f<j\}$. Then $v(X_i\cap F_j)=\int_{X_i\cap F_j}f\,d\mu\le j\,\mu(X_i)<\infty$, and the countably many sets $X_i\cap F_j$ cover $X$ up to the $\mu$-null set $\{f=\infty\}$, which is also $v$-null. Hence $v$ is $\sigma$-finite.

logarithms - Why is $ a^x = e^{x log a} $?



Why is $ a^x = e^{x \log a}$, where $ a $ is a constant?




From my research, I understand that the natural log of a number is the constant factor you get when you differentiate that number raised to the power $x$. For example, if we differentiate the function $2^x$, we get $2^x \log 2$. I also understand that the natural log of $e$ is just $1$. But I cannot connect the dots here.



I would really appreciate an intuitive explanation of why we can write a number raised to a power as e raised to (the power x the natural logarithm of the number)?


Answer



I think we can agree that



$$a=e^{\log a}$$



which arises from one of the properties of the logarithm. Therefore, it’s sufficient to say that




$$a^x=e^{\log a^x}$$



But one of the properties of the logarithm also dictates that



$$\log a^x=x\log a$$



Therefore



$$a^x=e^{x\log a}$$
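Numerically the chain of identities is easy to confirm for positive bases (my own check):

```python
import math

# a^x and e^(x * log a) agree (up to floating-point rounding) for a > 0.
for a in (0.5, 2.0, math.e, 10.0):
    for x in (-3.0, -0.5, 0.0, 1.0, 4.2):
        assert math.isclose(a**x, math.exp(x * math.log(a)), rel_tol=1e-12)
```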


radicals - complex modulus and square root



I am failing to understand something about complex square roots:




If we fix the argument $\theta\in(0,2\pi],$ that is, we take the positive real line as branch cut, then for $z=r\mathrm{e}^{i\theta}$, $\sqrt{z}$ has argument in the interval $(0,\pi].$ In other words, a positive real number will have a negative square root and thus
$$|\sqrt{z}|\neq\sqrt{|z|}.$$
Is that true?


Answer



According to the definition, $\sqrt{1}=-1$ and so
$$
|\sqrt{1}|=1
$$
whereas
$$

\sqrt{|1|}=\sqrt{1}=-1
$$



For any positive real it's the same. If $a>0$, then
$$
|\sqrt{a^2}|=\lvert-a\rvert=a,
\qquad
\sqrt{|a^2|}=\sqrt{a^2}=-a
$$


discrete mathematics - Showing a counter-example for a 4 sets that share a subset relation



What is the easiest and correct way of finding a counter-example of this kind of questions:




Do these relations hold for all sets $A,B,C,D$?



1) $(A \setminus C) \times (B \setminus D) = (A \times B)\setminus(C \times D)$




2) $ (C \times D) \setminus (A \times (D \setminus B)) \subset (A
\times D) \cup (C \times (D \setminus B)) $




Thanks in advance.


Answer



Here's my answer to the actual question you asked, not the specific set theory questions.



For me the "easiest and correct way" to decide whether such a statement is true is to play with small examples. That often leads either to a counterexample (if the assertion is false) or to an idea of how to prove it (if it's true).
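Following that advice, small examples can even be searched mechanically (my own sketch): drawing each of $A,B,C,D$ from the subsets of $\{1,2\}$ already falsifies identity 1).

```python
from itertools import product

# Search all quadruples of subsets of {1, 2} for a counterexample to
# (A \ C) x (B \ D) == (A x B) \ (C x D).
subsets = [set(), {1}, {2}, {1, 2}]
found = None
for A, B, C, D in product(subsets, repeat=4):
    lhs = set(product(A - C, B - D))
    rhs = set(product(A, B)) - set(product(C, D))
    if lhs != rhs:
        found = (A, B, C, D)
        break
assert found is not None  # e.g. A = {1, 2}, B = {1}, C = {2}, D = set() works
```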



abstract algebra - Show that the algebraic closure of $F$ in $K$ is an algebraic closure of $F$.


(a) Let $K$ be an algebraically closed field extension of $F$. Show that the algebraic closure of $F$ in $K$ is an algebraic closure of $F$.




What is algebraic closure of $F$ in $K$? The definition of algebraic closure is:



If $K$ is an algebraic extension of $F$ and is algebraically closed, then $K$ is said to be an algebraic closure of $F$



In this case, $K$ is more than an algebraic extension, so what does "algebraically closed field extension" mean? I'm a little confused by this.





(b) If $\mathbb{A} = \lbrace a \in \mathbb{C}\,|\,a\,\text{is algebraic over}\,\mathbb{Q}\rbrace$, then, assuming that $\mathbb{C}$ is algebraically closed, show that $\mathbb{A}$ is an algebraic closure of $\mathbb{Q}$.




I imagine that $\mathbb{C}$ is an algebraically closed field extension of $\mathbb{Q}$ and $\mathbb{A}$ is the algebraic closure of $\mathbb{Q}$ in $\mathbb{C}$.



So, I would like some help understanding these definitions so that I can answer the two items. Thanks in advance!



It may be useful to put some definitions:





Lemma 1. If $K$ is a field, then the following statements are equivalent:




  1. There are no algebraic extensions of $K$ other than $K$ itself.

  2. There are no finite extensions of $K$ other than $K$ itself.

  3. If $L$ is a field extension of $K$, then $K = \lbrace a \in L\,|\,a\,\text{is algebraic over}\,K\rbrace$.

  4. Every $f(x) \in K[x]$ splits over $K$.

  5. Every $f(x) \in K[x]$ has a root in $K$.

  6. Every irreducible polynomial over $K$ has degree $1$.




Definition 1. If $K$ satisfies the equivalent conditions of Lemma 1, then $K$ is said to be algebraically closed.


Sunday 21 June 2015

intuition - Numbers to the Power of Zero



I have been a witness to many a discussion about numbers to the power of zero, but I have never really been sold on any claims or explanations. This is a three-part question; the parts are as follows...







1) Why does $n^{0}=1$ when $n\neq 0$? How does that get defined?



2) What is $0^{0}$? Is it undefined? If so, why does it not equal 1?



3) What is the equation that defines exponents? I can easily write a small program to do it (see below), but what about in equation format?







I just want a little discussion about numbers to the power of zero, for some clarification.






Code for Exponents: (pseudo-code/Ruby)



def find_exp(x, n)
  # repeated multiplication: multiply an accumulator by x, n times
  total = 1
  n.times { total *= x }
  total
end

Answer



It's basically just a matter of what you define the notation to mean. You can define things to mean whatever you want -- except that if you choose a definition that leads to different results than everyone else's definitions give, then you're responsible for any confusion brought about by your using a familiar notation to mean something nonstandard.



Most commonly we define $x^0$ to mean $1$ for any $x$. What you find in discussions elsewhere are arguments that this is a useful definition, not arguments that it is correct. (Definitions are correct because we choose them, not for any other reason. That's why they are definitions.)



Some people choose (for certain purposes) to explicitly refrain from defining $0^0$ to mean anything. That choice is (supposedly) useful because then the map $x,y\mapsto x^y$ is continuous in the entire subset of $\mathbb R\times\mathbb R$ it is defined on. But it's an equally valid choice to define $0^0$ to mean $1$ and then just remember that $x,y\mapsto x^y$ is not continuous at $(0,0)$.
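As a concrete illustration of the convention (my own note, not part of the answer): Python, like many languages, defines `0**0` to be `1`, while the discontinuity of $x^y$ at $(0,0)$ remains visible along other approach directions:

```python
# Python adopts the convention 0**0 == 1 for both integer and float exponentiation
print(0 ** 0)      # 1
print(0.0 ** 0.0)  # 1.0

# But x**y is genuinely discontinuous at (0, 0): along y = 0 the value is 1,
# while along x = 0 with y > 0 the value is 0.
print(0.0 ** 0.5)  # 0.0
```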


complex analysis - Show that $e^{ta}=frac1{2pi i}int_{gamma_alpha}frac{e^{lambda t}}{lambda-a},dlambda$



This is exercise 18 on page 359 of Analysis II by Amann and Escher. I'm stuck on it:




Suppose $a\in\Bbb C$ and $\alpha\neq\Re(a)$. Show that for $\gamma_\alpha:\Bbb R\to\Bbb C,\,s\mapsto \alpha+is$ we have $$e^{ta}=\frac1{2\pi i}\int_{\gamma_\alpha}\frac{e^{\lambda t}}{\lambda-a}\,d\lambda\quad\text{for }t>0\tag1$$ (HINT: the Cauchy integral formula gives $$e^{ta}=\frac1{2\pi i}\int_{\partial\Bbb D(a,r)}\frac{e^{\lambda t}}{\lambda-a}\,d\lambda\quad\text{for }t\in\Bbb R\text{ and }r>0\tag2$$ Now apply the Cauchy integral theorem.)





Trying to follow the hint, I tried to create a family of closed paths $\Gamma_r=[\gamma_r]+[\delta_r]$ such that $a$ doesn't belong to their bounded regions, and then use the Cauchy integral theorem, that is,



$$\int_{\Gamma_r}g(\lambda)\, d\lambda=0\quad r>0$$



for $g(\lambda):=\frac{e^{\lambda t}}{\lambda-a}$, and exploit some kind of symmetry to relate the integration along the paths $[\gamma_r]$ or $[\delta_r]$ to the Cauchy integral formula. For example, without loss of generality suppose that $\alpha>\Re(a)$; then I could define



$$\gamma_r:[-r,r]\to\Bbb C,\quad s\mapsto \alpha+is\\\delta_r:[r,r+\pi]\to\Bbb C,\quad s\mapsto re^{-i(s-r-\pi/2)}\tag3$$



and try to relate the integral over the half circle defined by $\delta_r$ to the integral over the complete circle, where for suitably large $r$ I can use the Cauchy integral formula. However, this is not easy to deal with, because it doesn't exhibit a symmetry to exploit.




Maybe I'm over-complicating things and the exercise wants something different. Can someone help me?






EDIT:



It can be shown that $(1)$, as an improper Riemann integral, converges conditionally, and after some changes of variables the question reduces to showing that



$$\frac1{2\pi i}\int_{\gamma_r}\frac{e^\zeta}{\zeta}\,d\zeta=1$$




where $\gamma_r:\Bbb R\to\Bbb C,\, t\mapsto r+it$
for any chosen $r\in\Bbb R\setminus\{0\}$. Graphing the integrand, we can see that it describes two non-rectifiable spirals (symmetric with respect to the real axis) that converge to some point on the real axis.


Answer



I will use a strategy that is used in the last chapter of the book.






The parametrization $\gamma_\alpha:\Bbb R\to\Bbb C,\, t\mapsto \alpha+it$ defines a vertical line at $\Re(\lambda)=\alpha$. Without loss of generality we can assume that $\alpha=a+\ell$ for some $\ell\in\Bbb R\setminus\{0\}$. Then
$$
f(h):=\frac1{2\pi i}\int_{\gamma_\alpha}\frac{e^{\lambda h}}{\lambda-a}\, d\lambda=\frac1{2\pi}\int_{-\infty}^\infty\frac{e^{h(a+\ell+it)}}{\ell+it}\, dt=e^{ha}\cdot\frac1{2\pi}\int_{-\infty}^\infty\frac{e^{h(\ell+it)}}{\ell+it}\, dt\\
=e^{ha}\cdot\frac{e^{h\ell}}{2\pi}\int_{-\infty}^\infty\frac{e^{hit}}{\ell+it}\, dt=e^{ha}\cdot\frac{e^{h\ell}}{2\pi}\int_{-\infty}^\infty\frac{e^{is}}{h\ell+is}\, ds=e^{ha}\cdot\frac1{2\pi i}\int_{\gamma_{h\ell}}\frac{e^\zeta}{\zeta}\,d\zeta\tag1
$$
after the changes of variable $ht=s$ and $h\ell+is=\zeta$. Then, setting $g(h):=e^{-ha}f(h)$, the question reduces to showing that $g(h)=1$ when $h>0$. Also, from the expansion of $\frac{e^{h(\ell+it)}}{\ell+it}$ it can be seen that the improper Riemann integral $\int_{-\infty}^\infty\frac{e^{h(\ell+it)}}{\ell+it}\, dt$ converges conditionally.



We set $r:=h\ell$ and the paths defined by
$$
\alpha:[0,\pi]\to\Bbb C,\quad t\mapsto r-iRe^{it}\\\beta:[0,\pi]\to\Bbb C,\quad t\mapsto r+iRe^{it}\\
\gamma:[-R,R]\to\Bbb C,\quad t\mapsto r+it\tag2
$$
Then from the Cauchy integral theorem we know that $\int_\gamma\frac{e^\zeta}{\zeta}d\zeta=\int_\alpha\frac{e^\zeta}{\zeta}d\zeta$, and for $R>r$ from the Cauchy integral formula that

$$
\int_{\alpha+\beta}\frac{e^\zeta}{\zeta}d\zeta=2\pi i\implies\frac1{2\pi i}\int_\gamma\frac{e^\zeta}{\zeta}d\zeta=1-\frac1{2\pi i}\int_\beta\frac{e^\zeta}{\zeta}d\zeta\tag3
$$
Thus it is enough to show that $\lim_{R\to\infty}\int_\beta\frac{e^\zeta}{\zeta}d\zeta=0$. Now observe that
$$
\begin{align}\left|\int_\beta\frac{e^\zeta}{\zeta}d\zeta\right|&=\left|\int_0^\pi\frac{-Re^{it}e^{r+iRe^{it}}}{r+iRe^{it}}dt\right|\\&\le\left|\int_0^\epsilon\frac{-Re^{it}e^{r+iRe^{it}}}{r+iRe^{it}}dt\right|+\left|\int_{\pi-\epsilon}^\pi\frac{-Re^{it}e^{r+iRe^{it}}}{r+iRe^{it}}dt\right|+\left|\int_\epsilon^{\pi-\epsilon}\frac{-Re^{it}e^{r+iRe^{it}}}{r+iRe^{it}}dt\right|\\
&\le 2\epsilon\, e^r\max_{t\in[0,\epsilon]}\frac{R e^{-R\sin t}}{|r+iRe^{it}|}+(\pi-2\epsilon) e^r\max_{t\in[\epsilon,\pi-\epsilon]}\frac{R e^{-R\sin t}}{|r+iRe^{it}|}\end{align}\tag4
$$
Observe that $\sin t>0$ for $t\in[\epsilon,\pi-\epsilon]$ for any chosen $\epsilon\in(0,\pi/2)$. It is also easy to check that
$$
\lim_{R\to\infty}\frac{R}{|r+iRe^{it}|}=\lim_{R\to\infty}\frac1{\sqrt{(r/R)^2+1-2(r/R)\sin t}}=1\tag5
$$
Putting everything together, we find that
$$
\lim_{R\to\infty}\left|\int_\beta\frac{e^\zeta}{\zeta}d\zeta\right|\le 2\epsilon\, e^r,\quad\forall \epsilon\in(0,\pi/2)\implies\lim_{R\to\infty}\int_\beta\frac{e^\zeta}{\zeta}d\zeta=0\tag6
$$
as desired.$\Box$
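The formula can also be checked numerically. The sketch below is my own verification (with arbitrarily chosen $a$, $\alpha$, $t$, all my assumptions): it approximates the conditionally convergent line integral with a midpoint rule over a large symmetric window $[-T,T]$; the truncation error of the oscillatory tail decays like $1/T$:

```python
import cmath
import math

def line_integral(a, alpha, t, T=1000.0, n=200_000):
    # Midpoint-rule approximation of (1/(2*pi*i)) * int e^(lambda*t)/(lambda-a) dlambda
    # along lambda = alpha + i*s, s in [-T, T]; note dlambda = i ds.
    h = 2 * T / n
    total = 0j
    for k in range(n):
        s = -T + (k + 0.5) * h
        lam = alpha + 1j * s
        total += cmath.exp(lam * t) / (lam - a)
    return total * 1j * h / (2j * math.pi)

a, alpha, t = 0.3 + 0.2j, 1.0, 1.0  # sample values with alpha != Re(a) and t > 0
approx = line_integral(a, alpha, t)
exact = cmath.exp(t * a)
print(abs(approx - exact))  # small, on the order of the 1/T truncation error
```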


Saturday 20 June 2015

probability - Throw a coloured die until blue face is on top



A game is played by rolling a six-sided die which has four red faces and two blue faces. One turn consists of throwing the die repeatedly until a blue face is on top or the die has been thrown 4 times.




Adnan and Beryl each have one turn. Find the probability that Adnan throws the die more times than Beryl.



I tried :
Adnan throws two times and Beryl throws once = $\frac{2}{3}$ x $\frac{1}{3}$



Adnan throws three times and Beryl throws once =$\frac{4}{9}$ x $\frac{1}{3}$



Adnan throws three times and Beryl throws twice = $\frac{4}{9}$ x $\frac{2}{3}$




Adnan throws four times and Beryl throws once = $\frac{8}{27}$ x $\frac{1}{3}$



Adnan throws four times and Beryl throws twice = $\frac{8}{27}$ x $\frac{2}{3}$



Adnan throws four times and Beryl throws three times =$\frac{8}{27}$ x$\frac{4}{9}$



The answer says 0.365



Please help


Answer




When Beryl throws the die once, Adnan can throw it $2,3$ or $4$ times. The required probability is$$\underbrace{\frac13}_{\text{Beryl=1}}\left(1-\underbrace{\frac13}_{\text{Adnan=1}}\right)$$Similarly, when Beryl throws the die $2$ times, Adnan may throw $3$ or $4$ times, giving the required probability$$\underbrace{\frac23\frac13}_{\text{Beryl=2}}\left(1-\left(\underbrace{\frac13}_{\text{Adnan=1}}+\underbrace{\frac23\frac13}_{\text{Adnan=2}}\right)\right)$$and when Beryl throws it $3$ times, Adnan throws the die $4$ times. The last throw could result in a red face or a blue face. So the probability of this case is$$\underbrace{\frac23\frac23\frac13}_{\text{Beryl=3}}\left(\underbrace{\frac23\frac23\frac23\left[\frac23+\frac13\right]}_{\text{Adnan=4}}\right)$$The sum of these terms yields the required answer.
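The three terms can be summed exactly with rational arithmetic; this is my own check of the quoted answer $0.365$:

```python
from fractions import Fraction

p = Fraction(1, 3)  # probability of a blue face
q = 1 - p

# P(a turn lasts exactly k throws); the 4th throw ends the turn regardless of color
turn = {k: q ** (k - 1) * p for k in (1, 2, 3)}
turn[4] = q ** 3

# P(Adnan's turn uses strictly more throws than Beryl's)
prob = sum(turn[b] * turn[a] for b in turn for a in turn if a > b)
print(prob, float(prob))  # 266/729, approximately 0.365
```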


calculus - I need to understand why the limit of $xcdot sin (1/x)$ as $x$ tends to infinity is 1



Here's the question: how can I solve this limit?



$$\lim_{x \rightarrow \infty} x\sin (1/x) $$



Now, from textbooks I know it is possible to use the substitution $x=1/t$; the expression is then rewritten in the following way:



$$\frac{\sin t}{t}$$




Then, and this is what I really can't understand, the textbook suggests finding the limit as $t\to0^+$ (which gives 1 as the result).



OK, I can't figure out WHY finding that limit as $t$ approaches $0$ from the right gives me the answer to the limit at infinity of the original formula. I think I don't understand what the substitution implies.



Better than an answer, I need an explanation.



(Sorry if I wrote something incorrectly; English is not my first language.)
Thanks a lot!


Answer




$$\lim_{x\to\infty}x\sin(1/x)$$ is like the limit of this sequence: $$1\sin(1/1), 10\sin(1/10), 100\sin(1/100),\ldots$$ where we have inserted $x$ marching along from $1$ to $10$ to $100$ on its merry way to $\infty$. This is almost literally the same as $$\frac{1}{1}\sin(1), \frac{1}{0.1}\sin(0.1), \frac{1}{0.01}\sin(0.01),\ldots$$ which is an interpretation of $$\lim_{t\to0^+}\frac{1}{t}\sin(t)$$ with $t$ marching its way from $1$ down to $0.1$ down to $0.01$ on its merry way to $0$. So whatever the value of these limits are, $\lim\limits_{x\to\infty}x\sin(1/x)=\lim\limits_{t\to0^+}\frac{1}{t}\sin(t)$.
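The marching sequences in the answer can be reproduced directly (a quick illustration of my own, not part of the original answer):

```python
import math

# x*sin(1/x) as x marches toward infinity: the values approach 1
for x in (1, 10, 100, 1000, 10_000):
    print(x, x * math.sin(1 / x))
```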


trigonometry - Help with using Euler’s formula to prove that $cos^2(theta) = frac{cos(2theta)+1}{2}$



I have to use Euler's Formula to prove that:




$$\cos^2(\theta) = \frac{\cos(2\theta)+1}{2}.$$



I have managed to prove this using trigonometric identities but I'm not sure how to use Euler's Formula or how it links into the question.



My method so far has been:



$$\frac{(\cos(2\theta)+1)}{2} = \frac{(\cos^2(\theta) - \sin^2(\theta)+1)}{2}$$



since




$$\cos(2\theta)=\cos(\theta)\cos(\theta)-\sin(\theta)\sin(\theta).$$



So
$$\frac{(\cos(2\theta)+1)}{2} =\frac{2\cos^2(\theta)}{2}
=\cos^2(\theta).$$


Answer



Euler's formula: $e^{i\theta} = \cos \theta + i\sin\theta$. Then



$e^{i\theta} + e^{-i\theta} = 2\cos \theta\\
\frac 14 (e^{i\theta} + e^{-i\theta})^2 = \cos^2 \theta\\
\frac 14 (e^{2i\theta} + e^{-2i\theta} + 2) = \cos^2 \theta\\
\frac 14 (2\cos 2\theta + 2) = \cos^2 \theta\\
\frac 12 (\cos 2\theta + 1) = \cos^2 \theta$
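Both the trigonometric identity and the complex-exponential route can be spot-checked numerically (my own addition, at a few arbitrary sample angles):

```python
import cmath
import math

for theta in (0.0, 0.3, 1.0, 2.5, -1.2):
    lhs = math.cos(theta) ** 2
    rhs = (math.cos(2 * theta) + 1) / 2
    # the same value via Euler's formula: cos(theta) = (e^{i theta} + e^{-i theta})/2
    euler = ((cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2) ** 2
    assert abs(lhs - rhs) < 1e-12 and abs(euler.real - lhs) < 1e-12
print("identity holds at the sampled angles")
```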


functional equations - Show that for this function the stated is true.


For the function

$$G(w) = \frac{\sqrt2}{2}-\frac{\sqrt2}{2}e^{iw},$$
show that
$$G(w) = -\sqrt2ie^{iw/2} \sin(w/2).$$




Hey everyone, I'm very new to this kind of maths and would really appreciate any help. Hopefully I can get an idea from this and apply it to other similar questions. Thank you.
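Though no answer is recorded here, the key step is factoring $e^{iw/2}$ out of $1-e^{iw}$, which gives $e^{iw/2}(e^{-iw/2}-e^{iw/2}) = -2i\,e^{iw/2}\sin(w/2)$. A quick numerical check (my own addition) confirms the two forms agree:

```python
import cmath
import math

def G_original(w):
    # G(w) = sqrt(2)/2 - (sqrt(2)/2) e^{iw}
    return math.sqrt(2) / 2 - (math.sqrt(2) / 2) * cmath.exp(1j * w)

def G_factored(w):
    # G(w) = -sqrt(2) i e^{iw/2} sin(w/2)
    return -math.sqrt(2) * 1j * cmath.exp(1j * w / 2) * cmath.sin(w / 2)

for w in (0.1, 1.0, 2.0, -3.0):
    assert abs(G_original(w) - G_factored(w)) < 1e-12
print("both expressions agree at the sampled points")
```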

Sum of this Geometric Series $frac{1}{4}, frac{1}{12}, frac{1}{36}, frac{1}{108}, ldots, frac{1}{2916}$




$$\frac{1}{4}, \frac{1}{12}, \frac{1}{36}, \frac{1}{108}, \ldots,\frac{1}{2916}$$



I am trying to calculate the sum of this geometric series. Here's what I've got so far.



$a = \frac{1}{4}, r = \frac{1}{3}$ and $n = 7$



So $\sum_{n=0}^{6}\frac{1}{4}\left(\frac{1}{3}\right)^{n}$ (seven terms, $n=0$ to $6$)



Somehow equals $\frac{1093}{2916}$ according to my book?




Here are my questions:




  1. How do I get the sum of this geometric series

  2. Does my work look correct?

  3. How can I find a way to calculate the number of terms in the series that isn't "brute forcing"? This is quite inelegant.



Thanks!


Answer




The partial sum of a geometric series is



$$\sum_{k=0}^{n-1}ar^k = a\frac{1-r^n}{1-r}$$



so take $a=1/4, r=1/3, n=7$ to get



$$S = \left(\frac{1}{4}\right)\frac{1-(1/3)^7}{1-(1/3)} = \frac{1093}{2916}.$$
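Exact arithmetic confirms both the sum and, for question 3, the term count can be found by solving $a r^n = \frac{1}{2916}$ for $n$ with a logarithm rather than by brute force (my own sketch):

```python
from fractions import Fraction
import math

a, r = Fraction(1, 4), Fraction(1, 3)

# Number of terms: last term a*r^n = 1/2916  =>  r^n = (1/2916)/(1/4) = 1/729
n = round(math.log(float(Fraction(1, 2916) / a), 1 / 3))
print(n)      # 6, so there are n + 1 = 7 terms

# Partial-sum formula with n + 1 = 7 terms
total = a * (1 - r ** (n + 1)) / (1 - r)
print(total)  # 1093/2916
```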


Friday 19 June 2015

discrete mathematics - Proof: For all integers $x$ and $y$, if $x^2+ y^2= 0$ then $x =0$ and $y =0$




I need help proving the following statement:



For all integers $x$ and $y$, if $x^2+ y^2= 0$ then $x =0$ and $y =0$



The statement is true, I just need to know the thought process, or a lead in the right direction. I think I might have to use a contradiction, but I don't know where to begin.



Any help would be much appreciated.


Answer



If $x\ne 0$ or $y\ne 0$ then $|x| \ge 1$ or $|y| \ge 1$, which implies $x^2+y^2 \ge 1$.


Prove this sum of binomial terms using induction.



Here's the problem stumping me today:



Let $n \in \mathbb{N}$ and $r \in \mathbb{N}$ such that $r \leq n$, and prove using induction that $\binom{n+1}{r+1} = \sum\limits_{i=r}^n \binom{i}{r}$.



I've setup the basics of my inductive proof, but I'm struggling with the induction step.



Could anyone point me in the right direction?


Answer




When choosing $r+1$ items from a set of $n+1$ items, either you (a) choose the $(n+1)$st item (and choose the remaining $r$ items from the first $n$) or (b) choose all $r+1$ items from the first $n$.



That means we can write $\binom{n+1}{r+1} = \binom{n}{r} + \binom{n}{r+1}$.



By induction (on $n$), $\binom{n}{r+1} = \sum_{i=r}^{n-1}\binom{i}{r}$. Including the $\binom{n}{r}$ term, we get the desired result.
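The identity (often called the hockey-stick identity) is easy to spot-check with `math.comb` (my own addition, not part of the original answer):

```python
from math import comb

# Check binom(n+1, r+1) == sum_{i=r}^{n} binom(i, r) for small n and r
for n in range(1, 15):
    for r in range(n + 1):
        assert comb(n + 1, r + 1) == sum(comb(i, r) for i in range(r, n + 1))
print("hockey-stick identity verified for n < 15")
```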


algebra precalculus - Why is $ln(sqrt{|2x-5|}) + frac{1}{2} ln(|2x+3|) neq ln(sqrt{|2x-5|}) + ln(sqrt{|2x+3|})$ in Wolfram Alpha?



According to Wolfram Alpha, $$\frac{1}{2} \ln(|2x+3|) = \ln(\sqrt{|2x+3|})$$



is always true, which makes sense given what I know of log rules.



However, if I add the expression $\ln(\sqrt{|2x-5|})$ to both sides of that equation, as such: $$\ln(\sqrt{|2x-5|}) + \frac{1}{2} \ln(|2x+3|) = \ln(\sqrt{|2x-5|}) + \ln(\sqrt{|2x+3|})$$



WA tells me that the two sides of this equation are not always equal! How is this possible if $\frac{1}{2} \ln(|2x+3|) = \ln(\sqrt{|2x+3|})$ is always true and I'm adding the same expression to both sides of the equation?
What's going on here?




EDIT: Here's the WA output: [screenshot]


Answer



OK I tried it. Now what? Is this different from yours?



WA image


real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...