Monday 30 June 2014

calculus - Limit of $a_n := \frac{5^n}{2^{n^2}}$




Consider the sequence $(a_n)$ defined by $a_n := \frac{5^n}{2^{n^2}}$.
1. Prove that the sequence $(a_n)$ is bounded below by $0$.
We note that $a_n > 0$ for $n\geq 0$. Thus, the sequence is bounded from below.
2. Prove that the sequence $(a_n)$ is strictly decreasing by showing that $a_{n+1}-a_n < 0$ for all $n\in \mathbb{N}$.
We compare $a_n = \frac{5^n}{2^{n^2}}$ and $a_{n+1} = \frac{5^{n+1}}{2^{(n+1)^2}}$: the ratio is $\frac{a_{n+1}}{a_n} = \frac{5}{2^{2n+1}} < 1$ for $n\geq 1$, so $a_n > a_{n+1}$. Therefore, we have a strictly decreasing sequence.
3. Deduce that the sequence $(a_n)$ converges and calculate its limit.
Since we have a (monotonically) decreasing sequence which is bounded below, by the monotone convergence theorem this sequence converges. How do we find the limit? Is it the squeeze theorem? Thank you for the help!!!


Answer



Once you know a limit $L$ exists, then find a recurrence relation, like
$$a_{n+1} = \frac52\frac{1}{2^{2n}}a_n$$
And take the limit as $n\to\infty$:
$$L = \frac{5}{2}\cdot0\cdot L$$
which implies that the limit $L$ must equal $0$.
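
Once the limit is identified, a quick numerical sanity check is easy (a sketch in Python using exact rational arithmetic; not part of the proof):

from fractions import Fraction

def a(n):
    # a_n = 5^n / 2^(n^2), kept exact with Fraction
    return Fraction(5**n, 2**(n**2))

for n in range(1, 7):
    # the ratio a_{n+1}/a_n equals 5/2^(2n+1) < 1 for n >= 1
    print(n, float(a(n)), float(a(n + 1) / a(n)))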


complex numbers - What is the meaning of Euler's identity?




I know that Euler's identity states that $e^{ix} = \cos x + i\sin x$.




But $e$ is a real number. What does it even mean to raise a real number to an imaginary power? I mean, multiplying it by itself $\sqrt{-1}$ times? What does that mean?


Answer



If $z$ and $w$ are complex numbers, you define $z^w = e^{w \log z}$. The problem is that $\log z$ assumes several values, so you can say that $z^w$ is a set. So if you fix a principal value for ${\rm Log}\,z$, you have a principal power $e^{w\,{\rm Log}\,z}$. For each branch you'll have a different power.



More exactly, the argument of a complex number is the set: $$\arg z = \{ \theta \in \Bbb R \mid z = |z|(\cos \theta + i \sin \theta) \}.$$We call ${\rm Arg}\,z$ the only $\theta \in \arg z$ such that $-\pi < \theta \leq \pi$. Also, if $z \neq 0$, we have: $$\log z = \{ \ln |z| + i \theta \mid \theta \in \arg z \}.$$
Call ${\rm Log}\,z = \ln |z| + i \,{\rm Arg}\,z$. Then you could say that $z^w = \{ e^{w \ell} \mid \ell \in \log z \}$.



To make sense of $e^{\rm something}$, we use the definition of the exponential with series.
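
For reference, the series definition in question is the everywhere-convergent power series
$$e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!}, \qquad z \in \Bbb C,$$
and substituting $z = ix$ with real $x$, then separating even and odd terms, recovers $\cos x + i\sin x$.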


calculus - Is $f(x,y)=\frac{\sin\sqrt[3]{x^3+y^3}}{\sqrt[5]{x^5+y^5}}$ uniformly continuous or not

Find out if the function $$f(x,y)=\frac{\sin\sqrt[3]{x^3+y^3}}{\sqrt[5]{x^5+y^5}}$$ is uniformly continuous or not on a region $D$ having the origin in its closure. I found out that we have no $\lim\limits_{x,y\to 0}f(x,y)$, because (passing to polar coordinates $\rho,\alpha$ and using $\sin u\approx u$ for small $u$) $$\lim\limits_{x,y\to 0}\frac{\sin\sqrt[3]{x^3+y^3}}{\sqrt[5]{x^5+y^5}}=\lim\limits_{\rho\to0}\frac{\sqrt[3]{\rho^3\sin^3\alpha+\rho^3\cos^3\alpha}}{\sqrt[5]{\rho^5\sin^5\alpha+\rho^5\cos^5\alpha}}=\frac{\sqrt[3]{\sin^3\alpha+\cos^3\alpha}}{\sqrt[5]{\sin^5\alpha+\cos^5\alpha}},$$ which depends on the direction $\alpha$.
Hence we can't use the Uniform Continuity Theorem, as we can't determine $f(0,0)$. The function doesn't have bounded partial derivatives, so I think it's not uniformly continuous, but I don't know how to show that.

number theory - How many solutions does the equation $x_1+x_2+...+x_n=m$ have?

How many solutions of the equation $x_1+x_2+...+x_n=m$ satisfy $x_i\in \mathbb{N}$ $(i=\overline{1,n})$, $1\le x_i\le 26$, $n\le m\le 26n$, $m\in \mathbb{N}$?




This is my try:
Let $t_i=x_i-1\implies \sum\limits_{i=1}^{n}t_i=\sum\limits_{i=1}^{n}x_i-n=m-n$, where $0\le t_i\le 25$.
And I don't know how to solve it when $0\le t_i\le 25$.


sequences and series - Finding limit of a product.



Prove:$$\lim_{n \to\infty }\frac{1}{n}\left[\prod_{i=1}^{n}(n+i) \right ]^{\frac{1}{n}}=\frac{4}{e}$$
I tried using the Squeeze Theorem but can't get beyond the trivial bound $1$.

Answer



$\frac{1}{n}\left[\prod_{i=1}^n(n+i)\right]^{1/n}=\left[\prod_{i=1}^n \frac{1}{n}(n+i)\right]^{1/n}=\left[\prod_{i=1}^n (1+\frac{i}{n})\right]^{1/n}$

Taking log we get
$\frac{1}{n}\sum_{i=1}^n\ln (1+\frac{i}{n}) \to \int_0^1 \ln(1+x)dx, n \to \infty$
Integrating by parts gives $\int_0^1 \ln(1+x)dx=\ln 4 -1.$
Now the limit of the product is $e^{\ln 4 - 1}$.
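
As a numerical cross-check (my own sketch, not part of the argument), computing the product through logarithms to avoid overflow:

import math

def term(n):
    # (1/n) * (prod_{i=1}^n (n+i))^(1/n), computed via logs
    log_prod = sum(math.log(n + i) for i in range(1, n + 1))
    return math.exp(log_prod / n) / n

print(term(10_000))   # close to 4/e
print(4 / math.e)     # 1.4715...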


Sunday 29 June 2014

probability - Moments and non-negative random variables?



I want to prove that for non-negative random variables with distribution F:
$$E(X^{n}) = \int_0^\infty n x^{n-1} P(\{X\geq x\}) dx$$




Is the following proof correct?



$$R.H.S = \int_0^\infty n x^{n-1} P(\{X\geq x\}) dx = \int_0^\infty n x^{n-1} (1-F(x)) dx$$



using integration by parts:
$$R.H.S = [x^{n}(1-F(x))]_0^\infty + \int_0^\infty x^{n} f(x) dx = 0 + \int_0^\infty x^{n} f(x) dx = E(X^{n})$$




If not correct, then how to prove it?


Answer



Here's another way. (As the others point out, the statement is true if $E[X^n]$ actually exists.)



Let $Y = X^n$. $Y$ is non-negative if $X$ is.



We know
$$E[Y] = \int_0^{\infty} P(Y \geq t) dt,$$
so
$$E[X^n] = \int_0^{\infty} P(X^n \geq t) dt.$$

Then, perform the change of variables $t = x^n$. This immediately yields
$$E[X^n] = \int_0^{\infty} n x^{n-1} P(X^n \geq x^n) dx = \int_0^{\infty} n x^{n-1} P(X \geq x) dx.$$
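
As an illustration (a sketch of my own, assuming SciPy is available), one can compare both sides for an exponential distribution, where $E[X^n] = n!$ and $P(X \geq x) = e^{-x}$:

import math
from scipy.integrate import quad

for n in (1, 2, 3):
    # right-hand side: int_0^oo n x^(n-1) P(X >= x) dx
    val, err = quad(lambda x, n=n: n * x**(n - 1) * math.exp(-x), 0, math.inf)
    print(n, val, math.factorial(n))   # val should match n!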


big list - Which one result in mathematics has surprised you the most?

A large part of my fascination in mathematics is because of some very surprising results that I have seen there.




One I remember finding very hard to swallow when I first encountered it was what is known as the Banach–Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them and rejoin (by taking disjoint union), and you end up with exactly two complete balls of the same radius!



So I ask you which are your most surprising moments in maths?




  • Chances are you will have more than one. May I request post multiple answers in that case, so the voting system will bring the ones most people think as surprising up. Thanks!

probability - Expected value of different types of sample means

Assuming the $X_i$ are iid normally distributed, $X_i \sim N(\mu , \sigma ^2)$




In summation notation, what is the difference between



1) $ E(\overline X ^2 )$ and



2) $E(\overline{X^2})$
(the bar is over the entire $X^2$)



3)$E(\overline X)^2$




so basically the difference between the expected value of the sample mean squared (1), the expected value of the RV squared's sample mean (2)(not sure how to put #2 into words sorry), and the square of the expected value of the sample mean (3).



I know



(2) $E(\overline{X^2}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i^2\right)$



(3) $E(\overline X)^2 = \left(E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)\right)^2$



But I'm confused on what (1) would be? How is it different from (3)?

Saturday 28 June 2014

combinatorics - Prove the following by two different methods, one combinatorial and one algebraic



Reading through my textbook I came across the following problem, and I am looking for some help solving it. I am asked to prove the following by two different methods, one combinatorial and one algebraic. If I could get help with either or both it would be great, thanks!



Prove that this identity is true,



$$\binom{n}{k} -\binom{n-3}{k} =\binom{n-1}{k-1} + \binom{n-2}{k-1} + \binom{n-3}{k-1}$$


Answer



Repeatedly use Pascal's identity, namely

$$
\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}.
$$
Note that
$$
\left(\binom{n}{k}-\binom{n-1}{k-1}\right)-\binom{n-2}{k-1}-\binom{n-3}{k-1}-\binom{n-3}{k}
$$
equals
$$
\binom{n-1}{k}-\binom{n-2}{k-1}-\binom{n-3}{k-1}-\binom{n-3}{k}
$$
which equals
$$
\binom{n-2}{k}-\binom{n-3}{k-1}-\binom{n-3}{k}
$$
which equals
$$
\binom{n-3}{k}-\binom{n-3}{k}=0
$$
as desired.
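
A brute-force check of the identity for small parameters (a sanity test only; math.comb returns $0$ when the lower index exceeds the upper):

from math import comb

ok = all(
    comb(n, k) - comb(n - 3, k)
    == comb(n - 1, k - 1) + comb(n - 2, k - 1) + comb(n - 3, k - 1)
    for n in range(3, 30)
    for k in range(1, n + 1)
)
print(ok)  # True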



How to compute the norm of a complex number under square root?

How to compute the norm of a complex number under square root? Does the square of norm equal the norm of square:





$\|\sqrt z\|^2 = \|\sqrt {z^2}\|$?




Let $z = re^{i\theta}$, then $$\|\sqrt z\|^2 =\|\sqrt {re^{i\theta}}\|^2 =
\|\sqrt r \sqrt {e^{i\theta}}\|^2 =\|\sqrt r {e^{1/2i\theta}}\|^2 = \|r {e^{i\theta}}\|.$$
And
$$\|\sqrt {z^2}\|=\|\sqrt {(re^{i\theta})^2}\| = \|\sqrt {r^2e^{2i\theta}}\|= \|{re^{i\theta}}\|.$$



I hope this is correct? Thank you.

Friday 27 June 2014

algebra precalculus - Approximation of the Sine function near $0$



What is the reason that for $x<0.5$, $\sin(x)\approx x$?



Are there more known properties of these kind for other trigonometry functions?



Answer



To see that $\sin(x) \approx x$ for small $x$ all you have to do (without using the Taylor series) is look at the graph:
[graph of $y=\sin x$ and $y=x$ near the origin]



You can see that $\sin x = x$ when $x = 0$, and since the gradient of the graph is approximately $1$ for $-0.5 < x < 0.5$, we get $\sin x \approx x$ on that interval. Similar approximations for other trigonometric functions near $0$:

$\cos x \approx 1-\frac{x^2}{2}$



$\tan x \approx x$


trigonometry - Efficiently finding $\theta$ such that $\cos\theta = -\frac12$ and $\sin\theta = \frac{\sqrt{3}}{2}$. I know the quadrant, but what are the angles?



I'm having trouble solving trigonometric equations. For example, let's say I'm solving a problem and I arrive at a trigonometric equation that says,
$$\cos\theta = -\frac12 \quad\text{and}\quad \sin\theta = \frac{\sqrt{3}}{2} $$
At this point, I get stuck and I don't have an efficient way to proceed apart from picking up a calculator.



I can figure that the quadrants (from the signs of the ratios) -- but I can't figure out the angles. What is a good way to figure out the angle? Specifically, how do I systematically solve $\sin$, $\cos$, and $\tan$ trigonometric equations? (I can reciprocate the other three into these ratios.)



I don't have trouble figuring out angles between $0^\circ$ to $90^\circ$ (since I have that memorized), but for angles in other quadrants, I get stuck.



Answer



If you have angle $\theta$ in quadrant $1$, you can find its "corresponding" angle in quadrant $2$ by $(\pi - \theta)$, in quadrant $3$ by $(\pi+\theta)$, and in quadrant $4$ by $(2\pi-\theta)$. For example, $\frac{\pi}{4}$ corresponds to $\frac{3\pi}{4}$, $\frac{5\pi}{4}$, and $\frac{7\pi}{4}$ in quadrants $2$, $3$, and $4$, respectively. (That's how I always think of them at least.)



Also, recall sine functions correspond to the height of the right triangle ($y$-axis), so they are positive in quadrants $1$ and $2$. Cosine functions correspond to the base of the right triangle ($x$-axis), so they are positive in quadrants $1$ and $4$. (Tangent functions can be found through sine and cosine functions.)



You can use the following identities (which are derived from the aforementioned facts).




$$\sin\bigg(\frac{\pi}{2}+\theta\bigg) = \cos\theta \quad \sin\bigg(\frac{\pi}{2}-\theta\bigg) = \cos\theta$$




$$\cos\bigg(\frac{\pi}{2}+\theta\bigg) = -\sin\theta \quad \cos\bigg(\frac{\pi}{2}-\theta\bigg) = \sin\theta$$



$$\tan\bigg(\frac{\pi}{2}+\theta\bigg) = -\cot\theta \quad \tan\bigg(\frac{\pi}{2}-\theta\bigg) = \cot\theta$$



$$\sin\bigg(\pi+\theta\bigg) = -\sin\theta \quad \sin\bigg(\pi-\theta\bigg) = \sin\theta$$



$$\cos\bigg(\pi+\theta\bigg) = -\cos\theta \quad \cos\bigg(\pi-\theta\bigg) = -\cos\theta$$



$$\tan\bigg(\pi+\theta\bigg) = \tan\theta \quad \tan\bigg(\pi-\theta\bigg) = -\tan\theta$$




$$\sin\bigg(\frac{3\pi}{2}+\theta\bigg) = -\cos\theta \quad \sin\bigg(\frac{3\pi}{2}-\theta\bigg) = -\cos\theta$$



$$\cos\bigg(\frac{3\pi}{2}+\theta\bigg) = \sin\theta \quad \cos\bigg(\frac{3\pi}{2}-\theta\bigg) = -\sin\theta$$



$$\tan\bigg(\frac{3\pi}{2}+\theta\bigg) = -\cot\theta \quad \tan\bigg(\frac{3\pi}{2}-\theta\bigg) = \cot\theta$$



$$\sin\bigg(2\pi+\theta\bigg) = \sin\theta \quad \sin\bigg(2\pi-\theta\bigg) = -\sin\theta$$



$$\cos\bigg(2\pi+\theta\bigg) = \cos\theta \quad \cos\bigg(2\pi-\theta\bigg) = \cos\theta$$




$$\tan\bigg(2\pi+\theta\bigg) = \tan\theta \quad \tan\bigg(2\pi-\theta\bigg) = -\tan\theta$$




I certainly wouldn't recommend memorizing these though since knowing how the unit circle works basically means you know them already.



For example, in an equation you reach $$\cos \theta = -\frac{\sqrt{3}}{2}$$



You already know that $\cos {\frac{\pi}{6}} = \frac{\sqrt{3}}{2}$ and you also know cosine is negative in quadrants $2$ and $3$, so all you need to do is find the corresponding angle for ${\frac{\pi}{6}}$ in those quadrants.



$$\text{Quadrant II} \implies \theta = \pi-{\frac{\pi}{6}} = \frac{5\pi}{6}$$




$$\text{Quadrant III} \implies \theta = \pi+{\frac{\pi}{6}} = \frac{7\pi}{6}$$



This might take a bit of practice, but once you get this whole "corresponding" angle concept, it all becomes simple. Perhaps you can start by trying to visualize this by solving equations with a unit circle. You'll eventually get the hang of it.


calculus - Finding $\lim_{x\to -2}~~ \sin(\frac{\pi x}{2})\frac{x^2+1}{x+2}$.



I would really appreciate if you could help me solving this limit problem!



Find the limit without using L'Hopital's rule!




$$ \lim_{x\to -2} \sin\bigg(\frac{\pi x}{2}\bigg)\frac{x^2+1}{x+2} = ?$$



Thank you in advance!


Answer



First, pull out $\lim_{x\to -2} (x^2+1)$ from the limit. Then, set $y=x+2$, which gives
$$\lim_{y\to 0}\frac{\sin \big((\frac{\pi}{2}y)-(\frac{\pi}{2}2)\big)}{y}=$$



$$=\lim_{y\to 0}\frac{-\sin(\frac{\pi}{2}y)}{y}=$$




$$=\lim_{y\to 0}\frac{-\frac{\pi}{2}\sin(\frac{\pi}{2}y)}{\frac{\pi}{2}y}=$$



$$=-\frac{\pi}{2}\lim_{\frac{\pi}{2}y\to 0}\frac{\sin(\frac{\pi}{2}y)}{\frac{\pi}{2}y}=$$
$$=-\frac{\pi}{2}$$



Now combine with the $5$ you pulled out previously to get $-\frac{5\pi}{2}$.
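
A quick numerical check of the value (illustration only):

import math

def f(x):
    return math.sin(math.pi * x / 2) * (x**2 + 1) / (x + 2)

print(f(-2 + 1e-6))        # approximately -7.85398
print(-5 * math.pi / 2)    # -7.85398...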


real analysis - differentiability at a point (0,0) based on partial derivatives

For $$
f(x,y)=\begin{cases}
y^2 \sin\left(\frac{x}{y}\right) & \text{if } y\neq0 \\
0 & \text{if } y=0
\end{cases}$$
I've shown that it is continuous and that the partial derivatives exist in $\mathbb{R}^2$. However, it appears that the partials are not continuous at $(0,0)$; is this sufficient to show that $f$ is not differentiable at $(0,0)$?

calculus - Evaluating $\int_0^\infty\frac{\log^{10} x}{1 +x^3}dx$




How one would evaluate the following integral?




$$\int_{0}^{\infty}\frac{\log^{10}(x)}{1+x^3} \, \mathrm{d}x$$




I have tried substitution with no success as well as differentiation under integral sign.
Can anyone help me please. I prefer not to use contour integration.


Answer




Caveat. I wrote the following answer not having seen the request
by the OP that he would prefer not to use contour integration. Perhaps
what follows can help make the case for and showcase contour
integration for this integral which belongs to a class that has
frequently appeared here at MSE.



Observe that if we are allowed to use a CAS (which would appear
necessary for this problem) then we may compute



$$Q_n = \int_0^\infty \frac{\log^n x}{x^3+1}\; dx
= \int_0^\infty f_n(x) \; dx$$



where



$$f_n(z) = \frac{\log^n z}{z^3+1}$$



by computing all $Q_n$ recursively by integrating $f_{n+1}(z), f_n(z),
\ldots$ and so on around a keyhole contour with the slot on the
positive real axis and the branch cut of the logarithm on that axis as
well, with argument from $0$ to $2\pi.$ The poles of $f_n(z)$ are at
$\rho_k = \exp(\pi i/3 + 2\pi ik /3)$ with $k=0,1,2.$ We obtain for
the residues



$$\mathrm{Res}_{z=\rho_k} f_n(z)
= \mathrm{Res}_{z=\rho_k} \frac{\log^n z}{z^3+1}
\\ = \left. \frac{\log^n z}{3z^2} \right|_{z=\rho_k}
= \left. z \frac{\log^n z}{3z^3} \right|_{z=\rho_k}
= - \left. \frac{1}{3} z \log^n z \right|_{z=\rho_k}
\\ = - \frac{1}{3} \exp(\pi i/3 + 2\pi ik /3)
(\pi i/3 + 2\pi i k/3)^n = \alpha_{n,k}.$$




We obtain by integrating $f_n(z)$



$$\int_0^\infty \frac{\log^n z}{z^3+1} \; dz +
\int_\infty^0 \frac{(2\pi i + \log z)^n}{z^3+1} \; dz
\\ = 2\pi i \sum_k \mathrm{Res}_{z=\rho_k} f_n(z)
= 2\pi i \sum_k \alpha_{n,k}.$$



This yields




$$ \sum_{p=0}^{n-1} {n\choose p} (2\pi i)^{n-p}
\int_0^\infty \frac{\log^p z}{z^3+1} \; dz
= - 2\pi i \sum_k \alpha_{n,k}$$



which is



$$ \sum_{p=0}^{n-1} {n\choose p} (2\pi i)^{n-p-1} Q_p
= - \sum_k \alpha_{n,k}$$



or




$$\sum_{p=0}^{n} {n+1\choose p} (2\pi i)^{n-p} Q_p
= - \sum_k \alpha_{n+1,k}$$



Therefore to compute $Q_n$ we use the recurrence



$$Q_n = - \frac{1}{n+1} \sum_k \alpha_{n+1,k}
- \frac{1}{n+1}
\sum_{p=0}^{n-1} {n+1\choose p} (2\pi i)^{n-p} Q_p$$




We just need the base case $Q_0$ which we compute using a pizza slice
resting on the positive real axis and having argument $2\pi/3$ so that
it only contains $\alpha_{0,0}.$ Parameterizing with $z=\exp(2\pi i/3)
t$ we get



$$Q_0 - \exp(2\pi i/3) Q_0 = 2\pi i \alpha_{0,0} $$



which yields



$$Q_0 = - \frac{1}{3} 2\pi i \frac{\exp(\pi i/3)}{1-\exp(2\pi i/3)}
= - \frac{1}{3} 2\pi i \frac{1}{\exp(-\pi i/3)-\exp(\pi i/3)}
\\ = \frac{1}{3} \pi \frac{1}{\sin(\pi/3)}
= \frac{2}{9}\sqrt{3}\pi.$$



With everything in place we obtain e.g. the sequence up to $n=10$



$$-\frac{2\pi^2}{27},\ \frac{10\pi^3\sqrt{3}}{243},\ -\frac{14\pi^4}{243},\ \frac{34\pi^5\sqrt{3}}{729},\\
-\frac{806\pi^6}{6561},\ \frac{910\pi^7\sqrt{3}}{6561},\ -\frac{10414\pi^8}{19683},\ \frac{415826\pi^9\sqrt{3}}{531441},\\
-\frac{685762\pi^{10}}{177147},\ \frac{3786350\pi^{11}\sqrt{3}}{531441},\ldots$$



The Maple code for this is extremely simple, consisting of a few
lines.




alpha := (n,k) ->
    -1/3 * exp(Pi*I/3 + 2*Pi*I*k/3) * (Pi*I/3 + 2*Pi*I*k/3)^n;

Q := proc(n)
    option remember;    # memoize the recursion
    local res;
    if n = 0 then return 2/9*sqrt(3)*Pi fi;    # base case Q_0
    res :=
        -1/(n+1)*add(alpha(n+1,k), k=0..2)
        -1/(n+1)*add(binomial(n+1, p)*(2*Pi*I)^(n-p)*Q(p), p=0..n-1);
    simplify(res);
end;

# verification against the direct integral
VERIF := n -> int((log(x))^n/(x^3+1), x=0..infinity);
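
For an independent numerical check of the first values of the sequence above, a short Python sketch (assuming the mpmath library is available):

from mpmath import mp, quad, log, pi, sqrt

mp.dps = 30

def Qn(n):
    # direct numerical evaluation of Q_n = int_0^oo log(x)^n/(x^3+1) dx
    return quad(lambda x: log(x)**n / (x**3 + 1), [0, 1, mp.inf])

print(Qn(1), -2 * pi**2 / 27)             # n = 1
print(Qn(2), 10 * pi**3 * sqrt(3) / 243)  # n = 2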



Observe that this method generalizes quite nicely. Suppose we are
interested in



$$K_n = \int_0^\infty \frac{\log^n x}{x^3-2x+4} \; dx.$$



The same computation goes through except now we have the following
three poles and their logarithms



$$\begin{array}{|l|l|} \hline
\text{pole} & \text{logarithm} \\ \hline

\rho_0 = 1+i
& \log \rho_0 = \frac{1}{2} \log 2 + \frac{1}{4}i\pi \\ \hline
\rho_1 = 1-i
& \log \rho_1 = \frac{1}{2} \log 2 + \frac{7}{4}i\pi \\ \hline
\rho_2 = -2
& \log \rho_2 = \log 2 + i\pi.\\ \hline
\end{array}$$



The rest is unchanged. We obtain e.g.




$$K_4 = {\frac {357\,{\pi }^{5}}{10240}}-{\frac {31\, \left( \ln
\left( 2 \right) \right) ^{5}}{1600}}-{\frac {139\,
\left( \ln \left( 2 \right) \right) ^{3}{\pi }^{2}}{1920
}} \\ -{\frac {4897\,\ln \left( 2 \right) {\pi }^{4}}{76800}}+
{\frac {9\, \left( \ln \left( 2 \right) \right) ^{4}\pi
}{640}}+{\frac {63\, \left( \ln \left( 2 \right) \right)
^{2}{\pi }^{3}}{1280}}.$$



The Maple code is very similar to the first version.





alpha_sum := proc(n)
    local poles;
    # each entry: [pole, the chosen value of log(pole)], argument in [0, 2*Pi)
    poles :=
        [[1+I, 1/2*log(2) + I*Pi/4],
         [1-I, 1/2*log(2) + 7*I*Pi/4],
         [-2,  log(2) + I*Pi]];
    add(residue(1/(x^3-2*x+4), x=p[1])*p[2]^n, p in poles);
end;

Q := proc(n)
    option remember;
    local res;
    if n = 0 then
        return simplify(int(1/(x^3-2*x+4), x=0..infinity));    # base case
    fi;
    res :=
        -1/(n+1)*alpha_sum(n+1)
        -1/(n+1)*add(binomial(n+1, p)*(2*Pi*I)^(n-p)*Q(p), p=0..n-1);
    simplify(res);
end;

VERIF := n -> int((log(x))^n/(x^3-2*x+4), x=0..infinity);

calculus - What is the domain of the function $f(x)$ and what is the value of the parameter $a$ for which the function is always positive?



I have the following function:



$f: D \rightarrow \mathbb{R},$ $f(x)= \dfrac{\ln(x+a)}{\sqrt{x}}$



$a \in \mathbb{R}$



And I am asked $2$ things:





  • Find the domain $D$ of the function $f(x)$.

  • Find the values of $a$ such that $f(x) > 0, \forall x \in \mathbb{R}$.



Concerning point $1$, I applied the following conditions:



$$x > 0 \Rightarrow x \in (0, + \infty)$$




$$x+a>0 \Rightarrow x > -a \Rightarrow x \in (-a, + \infty)$$



Combining these $2$ conditions, I got:



$$x \in (\max(-a, 0), + \infty)$$



However, when I checked my textbook, I found that the answer they have listed is this:



$$x \in \bigg ( \dfrac{-a+ |a| }{2}, + \infty \bigg )$$




Can somebody explain why are these $2$ answers equivalent (that is, if I didn't do any mistakes and they are indeed equivalent)?



And concerning the $2 ^ {nd}$ point, I tried to find the derivative thinking that I would use it to find the minimum/minima point/points and choose values of $a$ such that those minima are $>0$. But the derivative I found is a whole mess with both $x's$ and $a's$ and I didn't know how to handle it.


Answer



$\frac {-a+|a|} 2$ is the same as $\max (-a,0)$. Just consider the cases $a \geq 0$ and $a<0$ to verify this.



The second question does not make sense. Whatever $a$ is, there are values of $x$ for which $f(x)$ is not defined.



However, if the question is to find $a$ such that $f(x)>0$ for all $x \in D$, then the answer is $a \geq 1$.




[We want $x+a >1$ whenever $x >\max (-a,0)$. It is easy to see that this will not hold if $a<0$. So take $a \geq 0$. Then the condition is $x>1-a$ whenever $x >0$. In particular this gives $\frac 1 n > 1-a$ for all $n \in \mathbb N$. Letting $n \to \infty$ we get $0 \geq 1-a$ or $a \geq 1$. Conversely if $a \geq 1$ it is easy to see that $f(x) >0$ for all $x \in D$].


linear algebra characteristic polynomial, matrix rank, Matrix similarity



I'm having a problem solving the following assignment, can someone please help me?



I'm given 2 $n \times n$ matrices, $n>1$.



A=$\begin{bmatrix}1 & 1 & \cdots & 1\\ 1 & 1 & \cdots & 1\\ \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & \cdots & 1\end{bmatrix}$




B=$\begin{bmatrix}n & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 0\end{bmatrix}$



1) I need to find the characteristic polynomial of A using A's Rank.



2) I need to prove that the coefficient of $t^{n-1}$ in the characteristic polynomial of A is equal to $-\operatorname{tr}(A)$.



3) I need to prove that A and B are similar matrices and find P so that $B = P^{-1}AP$



*All of A's entries = 1.


Answer




$A$ is symmetric, so the algebraic multiplicity of an eigenvalue is equal to the geometric multiplicity.



It is not hard to see that, for any $x$, $Ax = c(1,1,\dots,1)^T$ for some constant $c$. Thus, its rank is $1$ (corresponding to eigenvalue $\lambda = ...?$) and the other $n-1$ eigenvalues are $0$. Such a matrix has characteristic polynomial



$$
(t - \lambda)(t - 0)^{n-1} = t^{n-1}(t - \lambda)
$$



For question 2, it is easy to directly calculate the trace, and you should now have the characteristic polynomial, so just verify.




To find a similarity transform, you can find all the eigenvectors (meaning, find $n$ linearly independent eigenvectors) of $A$, or of $B$. One will be much easier than the other.
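
A small numerical illustration of all three points (my own sketch with NumPy; not part of the answer above):

import numpy as np

n = 5
A = np.ones((n, n))
B = np.zeros((n, n)); B[0, 0] = n

# A has eigenvalue n once and 0 with multiplicity n-1, matching B
print(np.linalg.eigvalsh(A))
print(np.linalg.eigvalsh(B))

# similarity transform: columns of P are orthonormal eigenvectors of A,
# reordered so the eigenvalue-n vector comes first
w, P = np.linalg.eigh(A)        # eigh lists eigenvalues in ascending order
P = P[:, ::-1]
print(np.round(np.linalg.inv(P) @ A @ P, 10))   # equals B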


Thursday 26 June 2014

trigonometry - How prove this trigonometric identity



Show that




$$\sum_{k=0}^{n-1}\dfrac{\tanh{\left(x\dfrac{1}{n\sin^2{\left(\dfrac{2k+1}{4n}\pi\right)}}\right)}}{1+\dfrac{\tanh^2{x}}{\tan^2{\left(\dfrac{2k+1}{4n}\pi\right)}}}=\tanh{(2nx)}$$





Thank you. I spent some hours on this problem, and in the end I could not prove it.



and this problem is from a book:
[images of the book and the problem statement]



This book has several similar problems. Are they all incorrect? If a statement is not true, how do we find the error and correct it?
[image of another page from the book]



Thank you achille hui, who told me that the following version may be the true one. Now, how can we prove it?




$$\sum_{k=0}^{n-1}\dfrac{\dfrac{\tanh{x}}{n\sin^2{\left(\dfrac{2k+1}{4n}\pi\right)}}}{1+\dfrac{\tanh^2{x}}{\tan^2{\left(\dfrac{2k+1}{4n}\pi\right)}}}=\tanh{(2nx)}$$



Answer



Notice




$$\cosh(2nx) = T_{2n}(\cosh x)\quad\text{ and }\quad \cos(2nx) = T_{2n}(\cos x)$$



where $T_{2n}(z)$ is a
Chebyshev polynomial of the first kind. Using the $2^{nd}$ relation above, it is clear that the roots of $T_{2n}(x)$ have the
form:
$$\pm\cos(\frac{2k+1}{4n}\pi),\quad\text{ for } k = 0,\ldots, n-1$$



From this, we arrive at the following expansion of $\cosh(2n x)$:



$$\cosh(2n x ) = A \prod_{k=0}^{n-1}\left(\cosh^2 x - \cos^2(\frac{2k+1}{4n}\pi)\right)$$

for some constant $A$ we don't care about.
Taking logarithms, differentiating w.r.t. $x$, and dividing both sides by $2n$, we find:
$$\begin{align}
\tanh(2n x)
= & \frac{1}{2n} \sum_{k=0}^{n-1}\frac{ 2\sinh x\cosh x}{\cosh^2 x - \cos^2(\frac{2k+1}{4n}\pi)}\\
= & \frac{1}{n}\sum_{k=0}^{n-1}\frac{ \sinh x\cosh x}{\sinh^2 x + \sin^2(\frac{2k+1}{4n}\pi)}\\
= & \frac{1}{n}\sum_{k=0}^{n-1}\frac{ \tanh x}{\tanh^2 x + \sin^2(\frac{2k+1}{4n}\pi)(1 - \tanh^2 x)}\\
= & {\Large \sum_{k=0}^{n-1}\frac{\frac{\tanh x}{n \sin^2(\frac{2k+1}{4n}\pi)}}{1 + (\frac{1}{\sin^2(\frac{2k+1}{4n}\pi)} - 1 ) \tanh^2 x} }\\
= & {\Large \sum_{k=0}^{n-1}\frac{\frac{\tanh x}{n \sin^2(\frac{2k+1}{4n}\pi)}}{1 + \frac{\tanh^2 x}{\tan^2(\frac{2k+1}{4n}\pi)}} }
\end{align}$$


real analysis - Unique uniformly continuous function into complete space




Let $M_1,M_2$ be metric spaces such that $M_2$ is complete. Let $f$ be a uniformly continuous function from a subset $X$ of $M_1$ into $M_2$. Suppose that $\overline{X}=M_1$. Prove that $f$ has a unique uniformly continuous extension from $M_1$ into $M_2$ (that is, prove that there exists a unique uniformly continuous function $g$ from $M_1$ into $M_2$ such that $g|X=f$.)




I'm not sure where to start on this one... how can I extend a uniformly continuous function from $X$ to $M_1$?


Answer




Because $X$ is dense in $M_1$, any $x \in M_1$ is the limit of a sequence $(x_n)$ in $X$.




  • Show that $(f(x_n))$ is a Cauchy sequence. In particular, $(f(x_n))$ converges.

  • Show that if $(y_n)$ is another sequence converging to $x$, then $(f(x_n))$ and $(f(y_n))$ have the same limit.



Thus, you can extend $f$ by $f(x)=\lim\limits_{n \to + \infty} f(x_n)$.





  • To conclude, show that if $g$ is another extension of $f$, then $f=g$ (the key property is that $X$ is dense in $M_1$).


real analysis - borel measurable functions and measurable functions



Say you are given the Lebesgue measure on the real line and a Lebesgue measurable function $f$. Here Lebesgue measure is a complete measure (defined for some non-Borel set). And note that in the case of real line, the borel sets are just the sets that are in the sigma-algebra generated by the open sets.




I remember seeing something like: we can change $f$ on a set of measure $0$, and turn $f$ into a Borel measurable function.



Now I don't quite know how I can prove it or if it is even true.



I think if it is indeed true, the same proof should work for any metric space and for any measure that satisfies:




  1. $\mu(A)=\inf_{O\supset A} \mu(O)$ for all measurable $A$, where the infimum is taken over all open sets $O$ containing $A$.

  2. $\mu(A)=\sup_{K\subset A} \mu(K)$ for all measurable $A$, where the supremum is taken over all compact sets $K$ contained in $A$.




Can anybody give me any hint how one might prove this? Or if it's incorrect.


Answer



For every rational $r$ let $A_r = \{x: f(x) \le r\}$. There is a $G_\delta$ set
$B_r$ such that $A_r \subseteq B_r$ and $m(B_r \backslash A_r) = 0$.
Define $g(x) = \inf \{r \in \mathbb Q: x \in B_r\}$. Then $g$ is Borel measurable, and $g = f$ off the null set $\bigcup_r (B_r \setminus A_r)$.


calculus - Integrating cos and csc



$$\int{\frac{\sqrt{\cos x+1}}{\csc x}dx}$$
Sorry for the bad formatting, I still need to learn MathJax. I am trying to integrate this by $u$-substitution but am stuck. Looking at the equation I can see somewhere I will probably have to use the fact that $\csc x=1/\sin x$. My hunch is I choose $u$ to be something in the numerator, but is there an identity I'm not seeing?


Answer




Guide:



\begin{align}
\int \frac{(\cos x+1)^\frac12}{ \csc x} \, dx = \int \sin x(\cos x+1)^\frac12 \, dx
\end{align}



Try substitution $u = \cos x + 1$.
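
Carrying the guide one step further (filling in the computation): with $u = \cos x + 1$ we have $du = -\sin x \, dx$, so
$$\int \sin x\,(\cos x+1)^{\frac12} \, dx = -\int u^{\frac12}\, du = -\frac23 u^{\frac32} + C = -\frac23 (\cos x + 1)^{\frac32} + C.$$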


real analysis - How to prove $\frac{\pi^2}{6}\le \int_0^{\infty} \sin(x^{\log x}) \mathrm dx $?

I want to prove the inequality
$$\frac{\pi^2}{6}\le \int_0^{\infty} \sin(x^{\log x}) \ \mathrm dx $$



There are some obstacles I face: the indefinite integral cannot be expressed in terms of elementary functions, and
the Taylor series leads to another function that cannot be expressed in terms of elementary functions. What else could I try?

Wednesday 25 June 2014

summation - Series with Double Binomial Coefficients



How do I show the following?




$$ \sum_{x=0}^{n} x {N_1 \choose {n-x}} {N_2 \choose x} = N_2 {N_1 + N_2 - 1 \choose n-1} $$



I tried breaking down the left hand side into factorials and pulling out $N_2$, but that did not help. How does one deal with these summations in general?


Answer



$$
\binom{N_2}{x} = \frac{N_2}{x}\binom{N_2-1}{x-1}
$$



With this, the sum gets transformed to




$$
\sum_{x=1}^n x\binom{N_1}{n-x}\binom{N_2}{x} = N_2\sum_{x=1}^n\binom{N_1}{n-x}\binom{N_2-1}{x-1}.
$$



The rest is easy with a combinatorial argument. Starting the index with $0$ or $1$ doesn't make a difference.
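
A small brute-force verification of the identity (sanity check only):

from math import comb

def lhs(N1, N2, n):
    return sum(x * comb(N1, n - x) * comb(N2, x) for x in range(n + 1))

def rhs(N1, N2, n):
    return N2 * comb(N1 + N2 - 1, n - 1)

print(all(lhs(a, b, n) == rhs(a, b, n)
          for a in range(1, 8)
          for b in range(1, 8)
          for n in range(1, a + b + 1)))   # True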


elementary number theory - Find the remainder for $\sum_{i=1}^{n} (-1)^i \cdot i!$ when dividing by 36, $\forall n \in \Bbb N$



I need to find the remainder $\forall n \in \Bbb N$ when dividing by 36 of:



$$\sum_{i=1}^{n} (-1)^i \cdot i!$$



I should use congruence or the definitions of integer division as that's whave we've seen so far in the course. I don't know where to start. Any suggestions? Thanks!


Answer



Hint:




For $n\geq 6$ one has:



$\sum\limits_{i=1}^n(-1)^ii! = \sum\limits_{i=1}^5(-1)^ii! + \sum\limits_{i=6}^n(-1)^ii!$



Next, notice that for all $i\geq 6$ one has $i!=1\cdot \color{red}{2\cdot 3}\cdot 4\cdot 5 \cdot\color{red}{6}\cdots (i-1)\cdot i$




implying that for $i\geq 6$, $36$ divides evenly into $i!$. What does the right sum contribute to the remainder when divided by $36$, then?





From here it should be easy enough to brute force the remainder of the solution.
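
Following the hint, a short computation of the remainders (a check, not a proof):

from math import factorial

for n in range(1, 10):
    s = sum((-1)**i * factorial(i) for i in range(1, n + 1))
    print(n, s % 36)
# the remainder stabilizes at 7 from n = 5 on, since 36 | i! for all i >= 6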


calculus - Definite integration problem (trig).




I have this definite integral:



$$
\int_0^\pi \cos{x} \sqrt{\cos{x}+1} \, dx
$$



For finding the indefinite integral, I have tried substitution, integration by parts, but I'm having trouble solving it.



By parts




$$
\int \cos{x} \sqrt{\cos{x}+1} \, dx\ = \sqrt{\cos{x}+1}\sin{x} + \frac{1}{2} \int \frac{\sin^{2}{x}}{\sqrt{\cos{x}+1}} \, dx
$$



$
f(x) = \sqrt{\cos{x}+1} \\ f'(x) = \frac{1}{2} \frac{-\sin{x}}{\sqrt{\cos{x}+1}} \\ g(x) = \sin{x} \\ g'(x) = \cos{x}
$



I don't know how to approach this further because of the $\sin^{2}{x}$.




Substitution



$
\cos{x} + 1 = u \\ -\sin{x} \, dx = du
$



But I have no use for $\sin x$.



I believe it has something to do with trig manipulations.




WolframAlpha tells me to substitute, but I don't understand how to get the first u-substituted integral like shown:



[WolframAlpha's suggested $u$-substitution]



I would really appreciate any help on this. Thank you.


Answer



Here is how I would do it: first, let's recall the cosine double-angle identity $$\cos 2x = \cos^2 x - \sin^2 x = \cos^2 x - (1 - \cos^2 x) = 2\cos^2 x - 1.$$ Thus the corresponding half-angle identity can be written $$\cos x = \sqrt{\frac{1 + \cos 2x}{2}}$$ or equivalently, $$\sqrt{1 + \cos x} = \sqrt{2} \cos \frac{x}{2}, \quad 0 \le x \le \pi.$$ So the integral becomes $$I = \int_{x=0}^\pi \sqrt{2} \cos x \cos \frac{x}{2} \, dx.$$

Now recall the angle addition identity $$\cos(a \pm b) = \cos a \cos b \mp \sin a \sin b,$$ from which we obtain $$\cos (a+b) + \cos (a-b) = 2 \cos a \cos b.$$ Then with $a = x$, $b = x/2$, we easily see the integral is now $$I = \frac{1}{\sqrt{2}} \int_{x=0}^\pi \cos \frac{3x}{2} + \cos \frac{x}{2} \, dx.$$

Now it is a simple matter to integrate each term: $$\begin{align*} I &= \frac{1}{\sqrt{2}} \left[ \frac{2}{3} \sin \frac{3x}{2} + 2 \sin \frac{x}{2} \right]_{x=0}^\pi \\ &= \frac{1}{\sqrt{2}} \left( -\frac{2}{3} + 2 \right) = \frac{2\sqrt{2}}{3}. \end{align*} $$
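
A numerical cross-check of the final value (assuming SciPy is available; the max(..., 0) guards against tiny negative round-off near $x=\pi$):

import math
from scipy.integrate import quad

val, _ = quad(lambda x: math.cos(x) * math.sqrt(max(math.cos(x) + 1, 0.0)),
              0, math.pi)
print(val, 2 * math.sqrt(2) / 3)   # both approximately 0.942809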


Can squeeze theorem be used to prove the nonexistence of a limit?

For the squeeze theorem, if $0 \leqslant f(x,y) \leqslant 1$, can this be used to prove the assertion that the limit of $f$ does not exist? E.g. $f(x,y) = xy/ (x^2+xy+y^2)$, as $(x,y)\rightarrow(0,0)$.



I'm aware you can use the method of approaching from different paths, but was wondering if squeeze theorem was enough? My first guess is obviously this isn't enough to prove anything, since $f(x)$ can still be $0$, but of all the examples I tried, when the squeeze theorem doesn't squeeze $f(x)$ into a number $L$, the limit doesn't exist. Just a coincidence?

matrices - Characteristic polynomial of linear operator and matrix



Does the characteristic polynomial of a linear operator always equal to the characteristic polynomial of its representation under some basis $\mathcal{B}$?
That is $c_{\tau}(x) = c_{[\tau]_{\mathcal{B}}}(x)$?




Another question is why the characteristic polynomial of a symmetric linear operator may have complex root? (we know symmetric matrix always has real eigenvalues.) Is there some example?



The characteristic polynomial is defined by product of elementary divisor for $\tau$ .



The symmetric operator means $(\tau v,w) = (v,\tau w)$ that is adjoint for real bilinear form.



$[\tau]_{\mathcal{B}}$ is the matrix that columns are coordinate of $\tau(b_i)$ under basis $\mathcal{B}$


Answer



Yes: the characteristic polynomial of a linear operator is always equal to the characteristic polynomial of its representation under any choice of basis $\mathcal B$. The fact that this is so is a direct consequence of the structure theorem for finitely generated modules over a PID (which presumably you have seen if you are doing anything with "elementary divisors"). In particular, there is a direct correspondence between elementary divisors of an operator and elementary divisors of the associated matrix.




For a more thorough discussion, see this wikipedia page.



Regarding your second question: if our vector space is over the field $\Bbb R$ and the bilinear form is positive definite, then "symmetric operators" will correspond to "symmetric (real) matrices", which means that the spectral theorem applies and the roots of the characteristic polynomial must be real. For other bilinear forms, we can make no such guarantee.



An example of a "symmetric" operator that fails to have real eigenvalues. Consider the bilinear form over $\Bbb R^2$ given by
$$
(x,y) = x_1y_2 + x_2 y_1.
$$

Relative to this bilinear form, we find that the operator $x \mapsto Ax$ where
$$

A = \pmatrix{0&-1\\1&0}
$$

is "symmetric", but has no real eigenvalues.


Tuesday 24 June 2014

probability - The Birthday Problem

I've been reading about the birthday problem which, as I'm sure many of you will know, is a statistical problem which aims at finding out how many people you would need in a random group to be certain that two of them shared a birthday. I've read the wikipedia article and am happy with the concept and the answers to this problem. What I'm interested in doing is expanding the principle. I've been trying to work out the answer to a similar problem, but where you simply wanted to know the probability that two people were born in the same week and in the same month. I'm not really sure how to go about this, though, so my first question is: is there a general equation I can use to extend the problem to these cases? But, I know there are many articles and stackexchange questions on this, so I wouldn't ask unless I had a specific problem, which is this:



Suppose a person has met 500 people in their lifetime. What is the probability that seven of those 500 share a birthday in the same two month period?



I think the answer to my last question is that it's certain. But could I ask what is the smallest number of people you would need for the probability that, in a group of 500 people, the probability of them sharing a birthday in a two month period is less than 50%? If that makes sense?



Okay, thank you everybody, edited to tidy up:



Question 1: What is the smallest group of randomly selected people required such that the probability that two of them share a birthday within one week of each other is at least 75%?




Question 2: What is the smallest group of randomly selected people required such that the probability that two of them share a birthday within thirty days of each other is at least 75%?



Question 3: What is the smallest group of randomly selected people required such that the probability that seven of them share a birthday within sixty days of each other is at least 75%?



Question 4: In a group of 30 randomly selected people, what is the probability that seven of them will share a birthday within fifty days?



I hope that's a lot clearer. I had no idea how to word these questions until I posted this and am grateful to everyone who's contributed for helping me do so :)

real analysis - Show $f$ is integrable



Let $f$ be such that $\int_0^{\infty} |f(s)|e^s ds< \infty.$




Now, I want to argue that for $x,y$ sufficiently large and $\lambda < 1$ fixed we have that



$$\int_0^{\infty} \int_x^y e^{\lambda z} e^{s} |f(s+z)|dz ds$$ can be made arbitrarily small for $\lambda <1.$



In other words: $\forall \varepsilon >0 \exists N: \left(x,y >N \Rightarrow \int_0^{\infty} \int_x^y e^{\lambda z} e^{s} |f(s+z)|dz ds< \varepsilon \right)$



But how can I show this rigorously? Does anybody have an idea


Answer



As often, choosing the right order of integration is the key to success: Let $C =\int_0^\infty e^s |f (s)|\,ds $.




We then have for $y\geq x\geq R $ that



\begin{eqnarray*}
\int_0^\infty \int_x^y e^{\lambda z}e^s |f (s+z)|\,dz\,ds &=&\int_x^y e^{\lambda z}\int_0^\infty e^s |f (s+z)|\,ds \,dz \\
(w =s+z)&=& \int_x^y e^{\lambda z} \int_z^\infty e^{w-z}|f (w)|\,dw \,dz \\
&\leq& C \cdot \int_x^y e^{(\lambda -1)z} \,dz.
\end{eqnarray*}



Now, I leave it to you to verify

$$
\int_x^y e^{(\lambda -1)z}\,dz \to 0
$$
as $R \to\infty $ where $y\geq x\geq R $.


calculus - Evaluate the $\lim_{x \to -\infty} (x + \sqrt{x^2 + 2x})$



Evaluate : $$\lim_{x \to \ -\infty} (x + \sqrt{x^2 + 2x})$$




I've tried some basic algebraic manipulation to get it into a form where I can apply L'Hopital's Rule, but it's still going to be indeterminate form.



This is what I've done so far



\begin{align}
\lim_{x \to \ -\infty} (x + \sqrt{x^2 + 2x}) &= \lim_{x \to \ -\infty} (x + \sqrt{x^2 + 2x})\left(\frac{x-\sqrt{x^2 + 2x}}{x-\sqrt{x^2 + 2x}}\right)\\ \\
&= \lim_{x \to \ -\infty} \left(\frac{x^2 - (x^2 + 2x)}{x-\sqrt{x^2 + 2x}}\right)\\ \\
&= \lim_{x \to \ -\infty} \left(\frac{-2x}{x-\sqrt{x^2 + 2x}}\right)\\
\\
\end{align}




And that's as far as I've gotten. I've tried applying L'Hopitals Rule, but it still results in an indeterminate form.



Plugging it into WolframAlpha shows that the correct answer is $-1$



Any suggestions on what to do next?


Answer



$$\lim _{ x\to -\infty } \left( \frac { -2x }{ x-\sqrt { x^{ 2 }+2x } } \right) =\lim _{ x\rightarrow -\infty }{ \left( \frac { -2x }{ x-\sqrt { { x }^{ 2 }\left( 1+\frac { 2 }{ x } \right) } } \right) =\lim _{ x\rightarrow -\infty }{ \frac { -2x }{ x-\left| x \right| \sqrt { 1+\frac { 2 }{ x } } } = } } \\ =\lim _{ x\rightarrow -\infty }{ \frac { -2x }{ x+x\sqrt { 1+\frac { 2 }{ x } } } = } \lim _{ x\rightarrow -\infty }{ \frac { -2x }{ x\left( 1+\sqrt { 1+\frac { 2 }{ x } } \right) } = } -1$$


Prove that if $n$ is not the square of a natural number, then $\sqrt{n}$ is irrational.











I have this homework problem that I can't seem to be able to figure out:

Prove: If $n\in\mathbb{N}$ is not the square of some other $m\in\mathbb{N}$, then $\sqrt{n}$ must be irrational.

I know that a number being irrational means that it cannot be written in the form $\displaystyle\frac{a}{b}: a, b\in\mathbb{N}$ $b\neq0$ (in this case, ordinarily it'd be $a\in\mathbb{Z}$, $b\in\mathbb{Z}\setminus\{0\}$) but how would I go about proving this? Would a proof by contradiction work here?



Thanks!!


Answer



Let $n$ be a positive integer such that there is no $m$ such that $n = m^2$. Suppose $\sqrt{n}$ is rational. Then there exist $p$ and $q$ with no common factor (besides $1$) such that




$\sqrt{n} = \frac{p}{q}$



Then



$n = \frac{p^2}{q^2}$.



However, $n$ is a positive integer and $p$ and $q$ have no common factors besides $1$. So $q = 1$. This gives that



$n = p^2$




Contradiction since it was assumed that $n \neq m^2$ for any $m$.


Monday 23 June 2014

limits - How to determine if a $\lim\limits_{n \rightarrow \infty}{(1+{ix\over n})^n}$ would be complex





Question



Recently, I have been looking at complex limits, The most famous being $e^{ix}$=$\lim\limits_{n \rightarrow \infty}{(1+{ix\over n})^n}$. An example would be that when $x = \pi$ we know that the answer will be -1. But how do you determine this due to the fact that you can always $+1$ which will determine the outcome.




I am fully aware that you are able to do this via $i\cdot \sin(a \ln b) +\cos(a\ln b)$; however, how can you prove this via a limit? Because if you test it on a calculator, most of the time you'll end up with some imaginary part.



Specifically I have been looking at the representation of $\sin x={ie^{-ix}\over 2}-{ie^{ix}\over 2}$. Everyone would be safe to assume that $\sin x$ is always real, but when you apply a limit then how can you determine if it is only real or imaginary and real?


Answer



Using the polar form, you can rewrite the expression as $$\left(\sqrt{1+\frac{x^2}{n^2}}\right)^n\text{cis}\left(n\arctan\frac xn\right).$$



It tends to $1\cdot\text{cis }x$.
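
To fill in why the modulus tends to $1$ (a short computation, not in the original answer):
$$\left(1+\frac{x^2}{n^2}\right)^{n/2} = \exp\left(\frac n2 \log\left(1+\frac{x^2}{n^2}\right)\right) = \exp\left(\frac{x^2}{2n}+O(n^{-3})\right) \longrightarrow 1,$$
while $n\arctan\frac xn \to x$, so the limit is exactly $\cos x + i\sin x$ with no spurious imaginary part beyond $i\sin x$.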


Help with a proof in modular arithmetic




Let $a,b,n \in Z$ with $n > 0$ and $a \equiv b \mod n$. Also, let $c_0,c_1,\ldots,c_k \in Z$. Show that :



$c_0 + c_1a + \ldots + c_ka^k \equiv c_0 + c_1b + \ldots + c_kb^k \pmod n$.



For the proof I tried :



$a = b + ny$ for some $y \in Z$.




If I multiply both sides by $c_1 + \ldots + c_k$ I obtain:



$c_1a + c_2a + \ldots + c_ka = (c_1b + c_2b + \ldots + c_k) (b + ny)$.



However I can't prove that is true when I multiply by both side $a^1 + a^2 + \ldots + a^k$.


Answer



1) Prove that $k*a \equiv k*b \pmod n$ for any integer $k$.



2) Show that by induction that means $a^k \equiv b^k \pmod n$ for any natural $k$.




3) Show that if $a\equiv b\pmod n$ and $a' \equiv b'\pmod n$ that $a+a'\equiv b +b' \pmod n$.



4) Show your result follows by induction and by combining the above.



....



Or. Note that $a^k - b^k = (a-b)(a^{k-1}+a^{k-2}b + .... +ab^{k-2} + b^{k-1})$.



And that $(c_0 + c_1a + ... + c_ka^k) - (c_0 + c_1b + ... + c_kb^k)=$




$c_1(a-b) + c_2(a^2 - b^2) + ...... c_k(a^k - b^k) =$



.... And therefore......


linear algebra - Help Determinant Binary Matrix




I was messing around with some matrices and found the following result.




Let $A_n$ be the $(2n) \times (2n)$ matrix consisting of elements $$a_{ij} = \begin{cases} 1 & \text{if } (i,j) \leq (n,n) \text{ and } i \neq j \\ 1 & \text{if } (i,j) > (n,n) \text{ and } i \neq j \\ 0 & \text{otherwise}. \end{cases} $$

Then, the determinant of $A_n$ is given by $$\text{det}(A_n) = (n-1)^2.$$




Example: $$A_2 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}, A_3 = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ \end{pmatrix},$$ with det$(A_2)$ and det$(A_3)$ being $1$ and $4$ respectively. I was wondering if anybody could prove this statement for me.


Answer



Your matrix $A_n$ has the block diagonal structure



$$ A_n = \begin{pmatrix} B_n & 0 \\ 0 & B_n \end{pmatrix} $$



where $B_n \in M_n(\mathbb{F})$ is the matrix which has diagonal entries zero and all other entries $1$. Hence, $\det(A_n) = \det(B_n)^2$ so it is enough to calculate $\det(B_n)$. To do that, let $C_n$ be the matrix in which all the entries are $1$ (so $B_n = C_n - I_n$).




The matrix $C_n$ is a rank-one matrix so we can find its eigenvalues easily. Let us assume for simplicity that $n \neq 0$ in $\mathbb{F}$. Then $C_n$ has an $n - 1$ dimensional kernel and $(1,\dots,1)^T$ is an eigenvector of $C_n$ associated to the eigenvalue $n$. From here we see that the characteristic polynomial of $C_n$ must be $\det(\lambda I - C_n) = \lambda^{n-1}(\lambda - n)$ and hence
$$\det(B_n) = \det(C_n - I_n) = (-1)^n \det(I_n - C_n) = (-1)^{n} 1^{n-1}(1 - n) = (-1)^n(1 - n) = (-1)^{n-1}(n-1).$$



In fact this formula works even if $n = 0$ in $\mathbb{F}$, because in this case $C_n^2 = nC_n = 0$, so $C_n$ is nilpotent and $\det(\lambda I - C_n) = \lambda^n$.
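
A numerical spot-check of the determinant formula (illustration only):

import numpy as np

def A(n):
    B = np.ones((n, n)) - np.eye(n)   # zero diagonal, ones elsewhere
    Z = np.zeros((n, n))
    return np.block([[B, Z], [Z, B]])

for n in range(2, 7):
    print(n, round(np.linalg.det(A(n))), (n - 1)**2)   # columns agree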


modular arithmetic - Elementary Number Theory; prove existence





Prove that there exists a positive integer $n$ such that
$$2^{2012}\;|\;n^n+2011.$$




I was wondering if you could prove this somehow with induction (assume that $n$ exists for $2^k|n^n+2011$ then prove for $2^{k+1}$). But I couldn't get anywhere with that.



Or perhaps you could try and use some modular arithmetic, but that gets nasty.



Any help would be greatly appreciated.


Answer




We prove by induction on $m \geq 1$ that there exists an odd positive integer $n$ such that $2^m \mid n^n+2011$.



When $m=1$, simply take $n=1$. We have $2^1 \mid 1^1+2011$.



Suppose that the statement holds for some $m \geq 1$; that is, there exists an odd positive integer $n$ such that $2^m \mid n^n+2011$. If $2^{m+1} \mid n^n+2011$, then we are done.



Otherwise $n^n+2011 \equiv 2^m \pmod{2^{m+1}}$, so $$(2^m+n)^{2^m+n} \equiv (2^m+n)^n \equiv n^n+\binom{n}{1}n^{n-1}2^m \equiv n^n+2^m \pmod{2^{m+1}}$$ (Note: here we have used Euler's theorem: $\gcd(x, 2)=1$ implies $x^{2^{m}}=x^{\varphi(2^{m+1})} \equiv 1 \pmod{2^{m+1}}$.) Thus $2^{m+1}\mid (2^m+n)^{2^m+n}+2011$, so we can choose $2^m+n$, which is odd.



We are thus done by induction. In particular, when $m=2012$, there exists a positive integer $n$ such that $2^{2012} \mid n^n+2011$.




P.S. It is a coincidence that $2011$ is prime, and that $2011 \equiv -1 \pmod{2012}$. We only need the fact that $2011$ is odd.


calculus - Show that for any $r>0$, $\ln x=O(x^r)$ as $x\to \infty$



Show that for any $r>0$, $\ln x=O(x^r)$ as $x\to \infty$



I know that if $x_n=O(\alpha_n)$ then there is a constant $C$ and a natural number $n_0$ such that $|x_n|\le C|\alpha_n|$ for all $n\geq n_0$. But in this case I do not have sequences; how can I work with these functions? Would there be no natural number in this case, only the constant? Also, one has $\ln x\leq x$ for all $x>0$; couldn't this solve much of the problem with $C=1$?


Answer




If you can use derivative-based methods:
$$
\lim_{x\to\infty}\frac{\ln x}{x^r}=
\lim_{x\to\infty}\frac{1/x}{r x^{r-1}}=
\lim_{x\to\infty}\frac{1}{r x^{r}}=0
$$


sequences and series - Show that $S$ has the same cardinality with the set of real numbers.




Suppose that $a_n$ is a sequence with values in $\mathbb R$ which is not ultimately constant.



Let $S$ be the set of the subsequences of $a_n$.






Question: Show that $S$ has the same cardinality with the set of real numbers.







Attempt: First, I tried to write $S$ in my logic as $S=\{a_{f_n}\;|\; f_n \text{ is monotone increasing}, f_n:\mathbb N\to\mathbb N \}$



If two sets have the same cardinality, there must be a bijective mapping between them. If I am able to show there are uncountably many increasing $f_n$, then I am done, but I could not. Besides, how can one prove this using my approach or by some other method?






Comment:
Moreover, I can understand intuitively that there cannot be countably many such subsequences, since $a_n$ is not convergent: it does not approach any number and keeps changing.


Answer



The $\{0,1\}$-valued sequences are not ultimately constant (after discarding the countably many that are constant from some term on), hence

$\mathfrak{c}=2^{\aleph_0}\leq|{S}|\leq\mathfrak{c}^{\aleph_0}=\mathfrak{c}$


sequences and series - divergence of $\sum_{k_1\neq k_2\neq \dots \neq k_n}^{\infty}\frac{1}{p_1^{k_1}p_2^{k_2}\dots p_n^{k_n}}$

Let $p_1=2<p_2=3<\cdots<p_n$ be the first $n$ prime numbers.


Let $$S_n=\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\dots \sum_{k_n=0}^{\infty}\frac{1}{p_1^{k_1}p_2^{k_2}\dots p_n^{k_n}}$$



It is easy to check that $S_1=2,S_2=3$ and in general $S_n=\frac{1}{1-\frac{1}{p_1}}\frac{1}{1-\frac{1}{p_2}}\dots \frac{1}{1-\frac{1}{p_n}}$



Then if we can show that as $n\to\infty$, $S_n$ diverges to infinity we can say $\sum_{n=1}^{\infty} \frac{1}{n}$ also diverges to infinity.



If we can show that $S_n\thicksim f(n)$, where $f(n)$ increases to infinity as $n\to\infty$, then we are done.



What is this $f(n)$?




Is there any other way to show $\sum_{k=1}^{\infty}\frac{1}{k}$ diverges using $S_n$?






But I have no idea about the following problem:



$$T_n=\sum_{k_1\neq k_2\neq \dots \neq k_n}^{\infty}\frac{1}{p_1^{k_1}p_2^{k_2}\dots p_n^{k_n}}$$



Here $k_1\neq k_2\neq \dots\neq k_n$ means they are distinct($n!$ values).




How should we proceed to in this case?

Sunday 22 June 2014

real analysis - Uniform continuous functions on bounded sets are Lipschitz?



I'm trying to prove the following:





if $f: E \rightarrow \mathbb{R}$ where $ E$ is a bounded subset of $\mathbb{R}$, and $f$ is uniformly continuous then there exists $K$ such that
$$|f(x)-f(y)|\leq K|x-y|$$
for all $x,y\in E$




now I have written down this proof which I'm unsure of:



Assume for a contradiction that for all $n \in \mathbb{N}$ there exists $x_n,y_n \in E$ such that $$|f(x_n)-f(y_n)|\geq n|x_n-y_n|$$

then since $E$ is bounded there exists a convergent subsequence $\{x_{n_k}\}$ which converges to some number $p$; now since $f$ is uniformly continuous, $\{f(x_{n_k})\}$ converges to some number $l$. Thus by the triangle inequality we have that $|f(y_{n_k})-l|\geq ||f(y_{n_k})-f(x_{n_k})|-|f(x_{n_k})-l||\rightarrow \infty $ as $k \rightarrow \infty$, thus $f$ is unbounded, which is a contradiction.


Answer



Your main error is in the interpretation of the last formula:$$|f(y_{n_k})-l|\geq |f(y_{n_k})-f(x_{n_k})|-|f(x_{n_k})-l|. $$ We have $|f(y_{n_k})-f(x_{n_k})|\geq n_k|y_{n_k}-x_{n_k}|$ but this does not imply that $|f(y_{n_k})-f(x_{n_k})|\to \infty .$ We have $|y_{n_k}-x_{n_k}|\to 0$ so $n_k|y_{n_k}-x_{n_k}|$ need not go to $\infty.$


Nested radical sequence convergence

How do I prove the sequence $\{\sqrt{7}, \sqrt{7\sqrt{7}},\sqrt{7\sqrt{7\sqrt{7}}},\ldots\}$ converges to $7$? I understand intuitively that the $n$th term is $7^{1/2} \cdot7^{1/4} \cdots 7^{1/2^n}$, and that would converge ultimately to $7^1$, but I'm not sure how to properly show that. Thanks!
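
One way to see it numerically (a sketch, written in terms of the recursion $x_{k} = \sqrt{7x_{k-1}}$, which I am assuming matches the intended sequence):

import math

x = math.sqrt(7)             # x_1 = 7^(1/2)
for k in range(2, 25):
    x = math.sqrt(7 * x)     # x_k = 7^(1 - 2^(-k))
print(x)                     # 6.999999...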

number theory - Why is $\binom{n}{k}$ periodic modulo $p$ with period $p^e$?

Given some integer $k$, define the sequence $a_n={n\choose k}$. Claim: $a_n$ is periodic modulo a prime $p$ with the period being the least power $p^e$ of $p$ such that $k < p^e$.

In other words, $a_{n+p^e}\equiv a_{n} (\text{mod } p)$. But the period $p^e$ is smaller than I'd have expected (it is obvious that a period satisfying $k! < p^e$ would work). So how can I prove that it works?

complex numbers - Geometrical interpretation of $\operatorname{Im}(z^4) \ge 0$



I'd like to ask you about a geometrical interpretation of the expression $\operatorname{Im}(z^4) \ge 0$.




What I did:



$\operatorname{Im} [r^4(\cos4\alpha + i\sin4\alpha)] \ge 0$


$r^4\sin4\alpha \ge 0$


$\sin4\alpha \ge 0$


$4\alpha = k \cdot \pi$, where $k$ is an integer


$\alpha = k \cdot \frac{\pi}{4}$



But how to draw it on argand diagram?
Is there any tool online? Is it possible on Wolfram Alpha?



Is it something like this?



[sketch of the proposed region in the Argand plane]


Answer



Credits to Marconius.




The plot was made with Mathematica:



ContourPlot[Im[(a + I b)^4], {b, a} \[Element] Disk[], 
PlotTheme -> "Monochrome", PlotPoints -> 50,
Contours -> #] & /@ {1, 5, 10}


[the three contour plots produced by the code above]


sequences and series - Convergence of $\sum_{n=1}^\infty\frac{2^n\sin^{2n}(\alpha)}{n^\beta}$




Determine for which values of the parameters $\alpha,\beta\in\mathbb{R}$ the following series is convergent: $$\sum_{n=1}^\infty\frac{2^n\sin^{2n}(\alpha)}{n^\beta}$$





It seems clear that if $\alpha=\pi k, k\in\mathbb{Z},$ then the series converges for every $\beta$, as $\sin^{2n}(\alpha)=0$. Otherwise, as $0\leq\sin^{2n}(\alpha)\leq1$ we can bound the series in the following way:



$$\sum_{n=1}^\infty\frac{2^n\sin^{2n}(\alpha)}{n^\beta}\leq\sum_{n=1}^\infty\frac{2^n}{n^\beta}$$



But I don't know how to continue. The right term of the inequality is always divergent, so I can't apply comparison. Could you give me some hints? Thanks in advance!


Answer



Denote $\gamma = 2 \sin^2(\alpha)$. We have to study the convergence of the series $\sum u_n(\gamma, \beta)$ where $u_n(\gamma, \beta) = \frac{\gamma^n}{n^\beta}$.



Easy case... $\gamma = 0$ or $\alpha = k \pi$ with $k \in \mathbb Z$. The general term of the series is equal to zero, so the series converges.




So let's suppose that $\gamma \neq 0$ and separate the cases:




  1. $\vert \sin(\alpha)\vert= 1/\sqrt{2}$, then $ \gamma = 1$ and $u_n(\gamma, \beta) = 1/n^\beta$. The series converges for $\beta >1$ and diverges otherwise.

  2. $\vert \sin(\alpha)\vert \neq 1/\sqrt{2}$, then $\left\vert \frac{u_{n+1}(\gamma, \beta)}{u_n(\gamma, \beta)} \right\vert = \gamma \left(\frac{n}{n+1}\right)^\beta \to \gamma$. According to the ratio test, the series converges for $\gamma <1$ and diverges for $\gamma >1$ whatever the value of $\beta$.


algebra precalculus - Point out my fallacy, in sequence and series.


The sum of the first $n$ terms of the series $1^2+2\cdot2^2+3^2+2\cdot4^2+\cdots$ is $\dfrac{n(n+1)^2}{2}$, when $n$ is even. When $n$ is odd, the sum is?




I got the correct answer when I replaced $n\rightarrow (n+1)$ to make the above valid for odd $n$, but when I tried a different approach, the following happened.



For $n$ even, last term $=n$ which is even and term before it $=n-1$ which is odd. Clubbing all odds and evens separately as follows:



$\big(1^2+3^2+\cdots +(n-1)^2\big)+2\big(2^2+4^2+\cdots+n^2\big)=\dfrac{n(n+1)^2}{2}\tag{1}$




For $n$ odd, last term $=n$ which is odd and term before it $=n-1$ which is even. Clubbing all odds and evens separately as follows:



$\big(1^2+3^2+\cdots +n^2\big)+2\big(2^2+4^2+\cdots+(n-1)^2\big)\tag*{}$



$=\big(1^2+3^2+\cdots +(n-1)^2\big)+2\big(2^2+4^2+\cdots+n^2\big)-n^2+(n-1)\tag*{}$



From equation $(1)$



$=\dfrac{n(n+1)^2}{2}-n^2+(n-1)\tag*{}$




And the answer given is: $\dfrac{n^2(n+1)}{2}$



please help.

algebra precalculus - In how many days can Sachin alone complete the work given the following conditions?


Alok and Sachin agree to complete a piece of work in $20$ days. They also agree to forfeit double the amount of wages corresponding to the uncompleted part of the work if they fail. Alok alone can complete the work in $40$ days, and they lost $1/3$ of the pay for the total work. In how many days can Sachin alone complete the work?





options:



a) $60$ b) $24$ c) $36$ d) $30$



MyApproach:



Alok+Sachin=$20$ Days




If they fail, they agree to forfeit double the amount of wages corresponding to the uncompleted part of the work.



If Alok alone can complete the work in $40$ days.



Therefore, Alok does $2.5\%$ of the work in $1$ day, and (together) they do $5\%$ of the work per day, finishing in $20$ days.



Therefore, Sachin does $2.5\%$ of the work per day.




I am confused how to use these equations to solve the problem.




Can anyone guide me how to approach the problem correctly.


Saturday 21 June 2014

calculus - Compute: $\int_{0}^{1}\frac{x^4+1}{x^6+1} dx$



I'm trying to compute: $$\int_{0}^{1}\frac{x^4+1}{x^6+1}dx.$$



I tried to change $x^4$ into $t^2$ or $t$, but it didn't work for me.



Any suggestions?




Thanks!


Answer



Edited: Here is a much simpler version of the previous answer.



$$\int_0^1 \frac{x^4+1}{x^6+1}dx =\int_0^1 \frac{x^4-x^2+1}{x^6+1}dx+ \int_0^1 \frac{x^2}{x^6+1}dx$$



After canceling the first fraction, and subbing $y=x^3$ in the second we get:



$$\int_0^1 \frac{x^4+1}{x^6+1}dx =\int_0^1 \frac{1}{x^2+1}dx+ \frac{1}{3}\int_0^1 \frac{1}{y^2+1}dy = \frac{\pi}{4}+\frac{\pi}{12}=\frac{\pi}{3} \,.$$




P.S. Thanks to Zarrax for pointing out the mistakes I made...
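
Numerical confirmation (sanity check only, assuming SciPy is available):

import math
from scipy.integrate import quad

val, _ = quad(lambda x: (x**4 + 1) / (x**6 + 1), 0, 1)
print(val, math.pi / 3)   # both approximately 1.047198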


calculus - The $n^{th}$ root of the geometric mean of binomial coefficients.



$\{{C_k^n}\}_{k=0}^n$ are binomial coefficients. $G_n$ is their geometric mean.



Prove
$$\lim\limits_{n\to\infty}{G_n}^{1/n}=\sqrt{e}$$



Answer



$G_n$ is the geometric mean of $n+1$ numbers:
$$
G_n=\left[\prod_{k=0}^n{n\choose k}\right]^{\frac1{n+1}}
$$
or with $\log$ representing the natural logarithm (to the base $e$),
$$
\log G_n
= \frac1{n+1} \sum_{k=0}^n \log {n\choose k}
= \log n! - \frac2{n+1} \sum_{k=0}^n \log k!
\,.
$$
Stirling's approximation is
$n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$
or
$$
\log n!
\approx
\frac12\log{(2\pi n)}+n\log\left(\frac{n}{e}\right)
= \left(n+\frac12\right)\log n+\frac12\log 2\pi-n
$$
so
$$
\eqalign{
\log \left(G_n\right)^\frac1n
&= \frac1n \log G_n
= \frac1n \log n! - \frac2{n(n+1)} \log \prod_{k=0}^n k!
\\
&= \frac1n \log n! - \frac2{n(n+1)} \sum_{k=0}^n \log k!
\\
&\approx \left(1+\frac1{2n}\right) \log n
- \frac2{n(n+1)} \sum_{k=1}^n \left(k+\frac12\right)\log k
- \frac1{2n}\log 2\pi
\\
&\approx \left(1+\frac1{2n}\right) \log n
- \frac2{n(n+1)}
\left[
\frac{n(n+1)}{2}\log n -
\frac{n(n+2)}{4}
\right]
- \frac1{2n}\log 2\pi
\\
&= \frac{\log n-\log 2\pi}{2n}
+ \frac{n+2}{2(n+1)}
\\
&\rightarrow \frac12
\,,
}
$$
where the sum of logarithms was approximated
using the definite integrals
$$
\sum_{k=1}^n \log k \approx
\int_1^n \log x\,dx =

\Big[x\log x-x\Big]_1^n \approx
\Big[x\log x-x\Big]_0^n
$$
and
$$
\sum_{k=1}^n k \log k \approx
\int_0^n x\log x\,dx=\left[\frac{x^2}{2}\log x - \frac{x^2}{4}\right]_0^n
$$
(using integration by parts as shown in a comment), so that
$$

\eqalign{
\sum_{k=1}^n \left(k+\frac12\right)\log k
&=
\sum_{k=1}^n k \log k + \frac12
\sum_{k=1}^n \log k
\\
&\approx
\left( \frac{n^2}{2}\log n - \frac{n^2}{4} \right) + \frac12
\Big( n \log n - n \Big)
\\

&=
\frac{n^2+n}{2}\log n - \frac{n^2+2n}{4}
\,.
}
$$
Thus
$$
\left(G_n\right)^{1/n}=e^{\frac1n \log G_n}\rightarrow e^{\frac12}=\sqrt{e}
\,.
$$
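A quick numerical illustration of the limit (a sketch in Python; log-gamma is used so the factorials do not overflow):

from math import lgamma, exp, sqrt, e

def log_binom(n, k):
    # log C(n, k) via log-gamma, stable for large n
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def g_root(n):
    # (geometric mean of C(n,0), ..., C(n,n)) ** (1/n)
    log_gn = sum(log_binom(n, k) for k in range(n + 1)) / (n + 1)
    return exp(log_gn / n)

for n in (10, 100, 1000):
    print(n, g_root(n))          # slowly approaches sqrt(e)
print("sqrt(e) =", sqrt(e))      # 1.6487...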



Conditions for distinct real roots of cubic polynomials.



Given a cubic polynomial with real coefficients of the form $f(x) = Ax^3 + Bx^2 + Cx + D$ $(A \neq 0)$, I am trying to determine what the necessary conditions on the coefficients are so that $f(x)$ has exactly three distinct real roots. I am wondering if there is a way to change variables to simplify this problem, and am looking for clever ideas on this matter or on other ways to obtain these conditions.


Answer



Suppose that (including multiplicity) the roots of $$f(x) = A x^3 + B x^2 + C x + D,$$ $A \neq 0$ are $r_1, r_2, r_3$. Then, inspection shows that the quantity
$$D(f) := A^4 (r_3 - r_2)^2 (r_1 - r_3)^2 (r_2 - r_1)^2,$$

called the (polynomial) discriminant of $f$, vanishes iff $f$ has a repeated root. (The coefficient $A^4$ is unnecessary for the expression to enjoy this property, but among other things, its inclusion makes the below formula nicer.) On the other hand, with some work (say, by expanding and using Newton's Identities and Vieta's Formulas) we can write $D(f)$ as a homogeneous quartic expression in the coefficients $A, B, C, D$:
$$D(f) = -27 A^2 D^2 + 18 ABCD - 4 A C^3 - 4 B^3 D + B^2 C^2.$$



It turns out that $D$ gives us the finer information we want, too: $f$ has three distinct, real roots iff $D(f) > 0$ and one real root and two conjugate, nonreal roots iff $D(f) < 0$.



It's apparent that one can generalize the notion of discriminant to polynomials $p$ of any degree $> 1$, producing an expression homogeneous of degree $2(\deg p - 1)$ in the polynomial coefficients. In each case, up to a constant that depends on the degree and the leading coefficient of $f$, $D(f)$ is equal to the resultant $R(f, f')$ of $f$ and its derivative $f'(x) = 3 A x^2 + 2 B x + C$.



By making a suitable affine change of variables $x \rightsquigarrow y$, by the way, one can transform the given cubic to the form
$$\tilde{f}(y) = y^3 + P y + Q$$ (which does not change the multiplicity of roots), and for a cubic polynomial in this form the discriminant has the simple and well-known form
$$-4 P^3 - 27 Q^2.$$
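A quick empirical check of the sign criterion (a sketch with NumPy; np.roots returns all complex roots of a polynomial given its coefficients):

import numpy as np

def disc(A, B, C, D):
    # discriminant of A x^3 + B x^2 + C x + D
    return -27*A**2*D**2 + 18*A*B*C*D - 4*A*C**3 - 4*B**3*D + B**2*C**2

# x^3 - x (three distinct real roots), x^3 + x (one real root),
# and (x - 1)^3 (a repeated root)
for coeffs in [(1, 0, -1, 0), (1, 0, 1, 0), (1, -3, 3, -1)]:
    n_real = sum(abs(r.imag) < 1e-9 for r in np.roots(coeffs))
    print(coeffs, "D =", disc(*coeffs), "real roots:", n_real)

The printed discriminants are $4$, $-4$ and $0$ respectively, matching the trichotomy above.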



discrete mathematics - Proof by induction: $\sum^{2n-1}_{i=1} (2i-1)=(2n-1)^2$

$\sum^{2n-1}_{i=1} (2i-1)=(2n-1)^2$



I get stuck after proving that the base case is true. Usually with induction I assume the left and right sides are equal at some $k$, but I'm not sure how to approach this problem since the left side is a sum.
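A hint for the inductive step (a sketch): passing from $n$ to $n+1$ adds the two terms $i=2n$ and $i=2n+1$ to the sum, so

$$\sum^{2(n+1)-1}_{i=1} (2i-1)=\sum^{2n-1}_{i=1}(2i-1)+(4n-1)+(4n+1)=(2n-1)^2+8n=(2n+1)^2,$$

which is exactly $\big(2(n+1)-1\big)^2$, completing the induction.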

Friday 20 June 2014

Question on a proof of Hilbert's Inequality using Cauchy Schwarz



A simpler version of Hilbert's Inequality states that:
For any real numbers $a_1,a_2\cdots,a_n$ the following inequality holds:
$\sum_{i=1}^n\sum_{j=1}^n\frac{a_ia_j}{i+j}\leq\pi\sum_{i=1}^na_i^2$.



I was reading a proof of this inequality where first they applied Cauchy Schwarz to get $(\sum_{i=1}^n\sum_{j=1}^n\frac{a_ia_j}{i+j})^2\leq(\sum_{i=1}^n\sum_{j=1}^n\frac{\sqrt{i}a_i^2}{\sqrt{j}(i+j)})(\sum_{i=1}^n\sum_{j=1}^n\frac{\sqrt{j}a_j^2}{\sqrt{i}(i+j)})$.



Then they stated that it suffices to prove $\sum_{n=1}^{\infty}\frac{\sqrt{m}}{(m+n)\sqrt{n}}\leq\pi$ for any positive integer $m$.




Can someone explain why this is true? I tried manipulating the expression in a lot of different ways but couldn't conclude the result above. I'd appreciate any ideas or thoughts.


Answer



Since the integrand $x\mapsto\frac{\sqrt{m}}{(m+x)\sqrt{x}}$ is decreasing, each term satisfies $\frac{\sqrt{m}}{(m+n)\sqrt{n}}<\int_{n-1}^{n}\frac{\sqrt{m}}{(m+x)\sqrt{x}}\,\text{d}x$, and therefore $\displaystyle\sum_{n=1}^\infty\,\frac{\sqrt{m}}{(m+n)\sqrt{n}}<\int_0^\infty\,\frac{\sqrt{m}}{(m+x)\sqrt{x}}\,\text{d}x=\Bigg(2\arctan\left(\sqrt{\frac{x}{m}}\right)\Bigg)\Bigg|_{x=0}^{x=\infty}=\pi$.
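To connect this bound back to the question (a sketch of the remaining step): each inner sum in the Cauchy-Schwarz bound can be estimated row by row,

$$\sum_{i=1}^n\sum_{j=1}^n\frac{\sqrt{i}\,a_i^2}{\sqrt{j}(i+j)}=\sum_{i=1}^n a_i^2\sum_{j=1}^n\frac{\sqrt{i}}{(i+j)\sqrt{j}}\le\pi\sum_{i=1}^na_i^2,$$

and the second factor is bounded the same way by symmetry, so the squared left-hand side is at most $\big(\pi\sum_{i=1}^n a_i^2\big)^2$.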


summation - Proof of the binomial identity $\displaystyle\binom{m}{n}=\sum_{k=0}^{\lfloor n/2 \rfloor} 2^{1-\delta_{k,n-k}} \binom{m/2}{k} \binom{m/2}{n-k}$



Trying to prove some uncorrelated things, I came across the following identity:
$$\binom{m}{n}=\sum_{k=0}^{\lfloor n/2 \rfloor} 2^{1-\delta_{k,n-k}} \binom{m/2}{k} \binom{m/2}{n-k}, $$
where $\delta_{i,j}$ is the Kronecker delta, equal to 1 if $i=j$ and vanishing otherwise.
This identity seems to hold for every $m$ and $n$ (I checked it with Mathematica for each pair of integers $n, m$ from 1 up to 100).

I've never seen such an identity, and it doesn't seem straightforward to prove.



Is this some known identity? And how could I go in proving (or disproving) it?


Answer



It's true. Count the number of ways we can pick $n$ things from $m$ as follows.



Divide the $m$ things into two half-sized chunks of $m/2$ each. Then we need to do one of the following:




  • pick at most $n/2$ things out of the first chunk (say we pick $k$ from the first chunk), and the remaining $n-k$ things out of the second chunk;


  • pick at most $n/2$ things out of the second chunk (say $k$ of them), and the remaining $n-k$ things out of the first chunk.



By symmetry, we take care of the second case by simply doubling all terms from the first case for which $k \not = n-k$.


calculus - Changing the order of integration on a rectangular and polar region




Change the order of integration for the following integral from $dy\,dx$ to $dx\,dy$, and from $dy\,dx$ to polar coordinates.
$$ \int \int f(x,y) dydx$$



where
$$ 0≤y≤(-x^2)+2 $$
$$ 0≤x≤1$$



From dydx to dxdy
$$ \int \int f(x,y) dxdy + \int \int f(x,y) dxdy$$




First integral
$$ 0≤x≤1 $$
$$ 0≤y≤1 $$



Second integral
$$ 0≤x≤\sqrt{2-y} $$
$$ 1≤y≤2 $$



I'm not sure about the $\sqrt{2-y}$ for the bounds of $x$ in the second integral.







I'm having more trouble converting this into polar coordinates though. I think I can leave the first integral as it is in terms of dxdy, because the region is a rectangle. Is there any way to switch this rectangular region into polar coordinates?



For the second integral



$$ 0≤r≤1 $$
$$ 0≤\theta≤\pi/2 $$



$$ \int \int f(x,y) dxdy + \int \int f(r,\theta) rdrd\theta$$


Answer




Your work flipping the order of integration is correct.



Converting to polar: it is going to get messy.



$x = r \cos \theta\\
y = r \sin \theta$



Inside the rectangular region.



$\theta \in [0,\frac{\pi}{4})\\
x =1\\
r \cos\theta = 1\\
r = \sec\theta$



Inside the parabola



$\theta \in [\frac{\pi}{4},\frac{\pi}{2}]\\
y = -x^2 + 2\\
r \sin \theta = - r^2 \cos^2 \theta + 2\\
r^2 \cos^2 \theta + r \sin \theta - 2 = 0\\
r = \frac {-\sin\theta + \sqrt{\sin^2 \theta + 8 \cos^2 \theta}}{2 \cos^2 \theta}$



I wouldn't want to integrate that by hand, but those are the polar limits.
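One way to sanity-check these limits (a sketch; it computes the area of the region both in Cartesian and in polar form, using the algebraically equivalent conjugate form of the root to avoid cancellation near $\theta=\pi/2$):

import numpy as np
from scipy.integrate import quad

# area of {0 <= y <= 2 - x^2, 0 <= x <= 1}, two ways
cartesian, _ = quad(lambda x: 2 - x**2, 0, 1)   # = 5/3

def r_outer(t):
    if t < np.pi / 4:
        return 1.0 / np.cos(t)                  # vertical edge x = 1
    s, c2 = np.sin(t), np.cos(t) ** 2
    # conjugate form of (-sin t + sqrt(sin^2 t + 8 cos^2 t)) / (2 cos^2 t)
    return 4.0 / (s + np.sqrt(s * s + 8.0 * c2))

polar, _ = quad(lambda t: 0.5 * r_outer(t) ** 2, 0, np.pi / 2, points=[np.pi / 4])
print(cartesian, polar)                          # both approximately 1.66667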


linear algebra - Eigenvalues of real diagonal matrix times orthogonal projection



In the analysis of the stability of a dynamical system I came across
the following Jacobi matrix $J$ (the eigenvalues of which determine stability):
\begin{equation}

J=KP
\end{equation}
where $K$ is an $m\times m$ real diagonal matrix and
\begin{equation}
P=A\left(A^{T}A\right)^{-1}A^{T}
\end{equation}
is an orthogonal projection. The non-square matrix $A$ is an $m\times n$ real
matrix, $m\geq n$.



The eigenvalues of $P$ are 1 with algebraic multiplicity $n$ and

0 with algebraic multiplicity $m-n$ (see this proof exploiting the
idempotence of $P$ and this proof arguing the multiplicity).



However, when you consider the whole matrix $J=KP$, I am not sure
about its eigenvalues.




I am particularly interested in whether anything can be said about the sign of the largest nonzero eigenvalue of $J$, as this is what matters for stability analysis.




Answer



Let $\lambda_{\max}(A)$ be the largest singular value of $A$; when $A$ is symmetric this is the largest absolute value of the eigenvalues of $A$.



Since $P$ is a projection, $\lambda_{\max}(P)\le1$, and singular values are submultiplicative, so $\lambda_{\max}(J)\le \lambda_{\max}(K)$. Equality can be attained: take $P$ to be the identity, or assume that the range of $P$ contains the eigenvector of $K$ with largest eigenvalue.



A similar inequality holds for the maximal eigenvalue $\sigma_{\max}$, provided that $\sigma_{\max}(K)\ge0$. To see this, notice that any eigenvalue of $J=KP$ is also an eigenvalue of the symmetric matrix $PKP$. Since $P$ is a projection, $\|Pu\|\le\|u\|$ and therefore
$$
\sigma_{\max}(J)
\le\sigma_{\max}(PKP)
=\sup_{\|u\|\le1}\langle PKPu,u\rangle

=\sup_{\|u\|\le1}\langle KPu,Pu\rangle
\le\sup_{\|v\|\le1}\langle Kv,v\rangle
=\sigma_{\max}(K).
$$
For the eigenvalue statement between $KP$ and $PKP$, notice that if $KPv=\lambda v$, then $P^{1/2}KP^{1/2}(P^{1/2}v)=\lambda (P^{1/2}v)$. So all the eigenvalues of $KP$ are also eigenvalues of $P^{1/2}KP^{1/2}=PKP$, because $P$ is a projection. $A^{1/2}$ denotes the matrix square root of a nonnegative definite matrix $A$ (and $P$ is such, as a projection).



You can show that any nonzero eigenvalue of $PKP$ is also an eigenvalue of $J$. It could be that $J$ is not diagonalisable, but this will happen only if there is a nonzero $v$ such that $Pv=v$ and $PKv=0$ yet $Kv\ne0$ ($v$ is in the range of $P$ but $Kv$ is in the orthogonal complement of the range of $P$).



We will have equality in the second inequality if and only if the maximising $v$ (or a maximiser, if there are several) is in the range of $P$. How far off we are depends on how close the range of $P$ is to a maximiser, and also on how close the other eigenvalues of $K$ are to the maximal one.




If $\sigma_{\max}(K)<0$, then taking $P=0$ shows that $\sigma_{\max}(J)$ can be larger than $\sigma_{\max}(K)$. But the same idea shows that $\sigma_{\min}(J)\ge \sigma_{\min}(K)$ if the latter is not positive.



So:



$K$ negative definite $\Rightarrow$ $\sigma_{\max}(J)\le0$, and it will be zero unless $P$ is the identity.



$K$ not negative definite $\Rightarrow$ $0\le\sigma_{\max}(J) \le \sigma_{\max}(K)$. Note that this doesn't mean that $K$ is nonnegative definite; all it means is that it has at least one nonnegative eigenvalue.



The analogous inequalities for $\sigma_{\min}$ also hold.
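A small random experiment illustrating these bounds (a sketch with NumPy; the nonzero eigenvalues of $KP$ are real since they coincide with eigenvalues of the symmetric matrix $PKP$):

import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))
P = A @ np.linalg.solve(A.T @ A, A.T)      # orthogonal projection onto range(A)
K = np.diag(rng.standard_normal(m))        # real diagonal K

eigs_J = np.linalg.eigvals(K @ P)
print("max eigenvalue of J:", eigs_J.real.max())
print("sigma_max(K)       :", np.diag(K).max())   # dominates whenever it is >= 0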


How do I define the probability space $(\Omega, \mathcal F, \mathbb{P})$ for a continuous random variable?




I need to mathematically define the probability space $(\Omega, \mathcal F, \mathbb P)$ of a continuous random variable $X$. I also need to define the continuous random variable $X$ itself. The problem is... I don't really know how.



It is known that $X$ has the following probability density function $f_X: \mathbb{R} \longrightarrow \left[0, \frac{4}{9} \right]$:



$$f_X(x) = \begin{cases} \frac{1}{9}\big(3 + 2x - x^2 \big) \; &: 0 \leq x \leq 3 \\ 0 \; &: x < 0 \; \lor \; x > 3 \end{cases}$$






Also, the cumulative distribution function of $X$ is $F_X: \; \mathbb{R} \longrightarrow \left[0,1\right]$ and is defined as:



$$F_X(x) = \begin{cases} \begin{align*} &0 \; \; &: x < 0 \\ &\frac{1}{9} \Big(3x + x^2 - \frac{1}{3}x^3 \Big) \; \; &: x \geq 0 \; \land \; x \leq 3 \\ &1 \; \; &: x > 3 \end{align*}\end{cases}$$







(please see this thread, where I calculated the CDF, for reference)






I suppose:



$$X: \Omega \longrightarrow \mathbb{R}$$



and sample space:




$$\Omega = \mathbb{R}$$



How can I define $\mathcal F$ and $\mathbb{P}$, the remaining ingredients of the probability space $(\Omega, \mathcal F, \mathbb{P})$? I was thinking:



$$\mathbb{P} : \mathcal F \longrightarrow \left[0, 1\right] \; \land \; \mathbb{P}(\Omega) = 1$$



I am jumping into statistics/probability and I lack the theoretical background. Truth be told, the Wikipedia definition of a probability space for a continuous random variable is too difficult for me to grasp.



Thanks!


Answer




It is a bit weird to ask for a probability space if the probability distribution is already there and completely at hand. So I think this is just a theoretical question to test you. After all, students in probability theory must be able to place the "probability things" they meet in the familiar context of a probability space.



In such cases the easiest way is the following.



Just take $(\Omega=\mathbb R,\mathcal F=\mathcal B(\mathbb R),\mathbb P)$ as probability space, where $\mathcal B(\mathbb R)$ denotes the $\sigma$-algebra of Borel subsets of $\mathbb R$ and where the probability measure $\mathbb P$ is prescribed by: $$B\mapsto\int_Bf_X(x)\;dx$$



Then as random variable $X:\Omega\to\mathbb R$ you can take the identity on $\mathbb R$.



The random variable induces a distribution denoted as $\mathbb P_X$ that is characterized by $$\mathbb P_X(B)=\mathbb P(X\in B)=\mathbb P(X^{-1}(B))\text{ for every }B\in\mathcal B(\mathbb R)$$




Now observe that - because $X$ is the identity - we have $X^{-1}(B)=B$ so that we end up with:$$\mathbb P_X(B)=\int_Bf_X(x)\;dx\text{ for every }B\in\mathcal B(\mathbb R)$$as it should. Actually in this special construction we have:$$(\Omega,\mathcal F,\mathbb P)=(\mathbb R,\mathcal B(\mathbb R),\mathbb P_X)\text{ together with }X:\Omega\to\mathbb R\text{ prescribed by }\omega\mapsto\omega$$



Above we created a probability space together with a measurable function $\Omega\to\mathbb R$ such that the induced distribution on $(\mathbb R,\mathcal B(\mathbb R))$ is the one that is described in your question.






PS: As soon as you are well informed about probability spaces then in a certain sense you can forget about them again. See this question to gain some understanding about what I mean to say.
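A short symbolic check that the measure $\mathbb P$ above is indeed a probability measure and reproduces the stated CDF (a sketch with SymPy):

import sympy as sp

x, t = sp.symbols('x t')
f = sp.Rational(1, 9) * (3 + 2*t - t**2)   # the density, in the variable t

print(sp.integrate(f, (t, 0, 3)))          # 1, so P(Omega) = 1
print(sp.expand(sp.integrate(f, (t, 0, x))))
# -x**3/27 + x**2/9 + x/3, i.e. F_X(x) on [0, 3]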


radicals - How to prove that $\sqrt 3$ is an irrational number?








I know how to prove that $\sqrt 2$ is an irrational number. Can someone tell me why $\sqrt 3$ is also irrational?
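For reference, the same contradiction argument that works for $\sqrt 2$ works here (a sketch): suppose $\sqrt 3=\frac pq$ with $p,q$ coprime integers. Then

$$p^2=3q^2\implies 3\mid p^2\implies 3\mid p\ \text{(since $3$ is prime)}\implies p=3r\implies 3q^2=9r^2\implies 3\mid q,$$

contradicting that $p$ and $q$ are coprime. Hence $\sqrt 3$ is irrational.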

Thursday 19 June 2014

probability - The density of a random variable $X$ is $f(x)$ proportional to $x^{-1/2}$, what is the mean of $X$?



The density of a random variable $X$ is



$f(x)$ proportional to $x^{-1/2}$ for $x \in [0,1]$



and $f(x) = 0$ for $x \notin [0,1]$. Then, the mean of $X$ is





  1. $\frac 12$

  2. $\frac 1{\sqrt2}$

  3. $\frac 13$

  4. $\frac 14$

  5. None of the above is correct.



By the formula $\int_{0}^1 x\times x^{-1/2}\,dx$ (the formula for the expectation of a continuous r.v.), I calculate the answer to be $\frac 23$, but what is the meaning of the words proportional to? If I may multiply by some constant, any of options 1-4 could be made correct, so is the answer option 5?



Answer



Using the fact that the density must integrate to one over $[0,1]$, proportional means
$$1 = \int_0^1cf(x)\,dx = \int_0^1\frac{c}{\sqrt x}\, dx = c\left[2\sqrt x\right]_0^1 = 2c.$$
Thus $c = 1/2$.



If you now try to compute the expectation, you will find
$$\int_0^1x\cdot \frac{1/2}{\sqrt x}\,dx = \frac{1}{3},$$
which is option 3.


convergence divergence - Should I use the comparison test for the following series?



Given the following series



$$\sum_{k=0}^\infty \frac{\sin 2k}{1+2^k}$$



I'm supposed to determine whether it converges or diverges. Am I supposed to use the comparison test for this? My guess would be to compare it to $\frac{1}{2^k}$, and since that is a geometric series that converges, my original series would converge as well. I'm not all too familiar with comparing series that involve trig functions. I hope I'm going in the right direction.




Thanks


Answer



You have the right idea, but you need to do a little more, since some of the terms are negative. Use your idea and the fact that $|\sin x|\le 1$ for all $x$ to show that



$$\sum_{k\ge 0}\frac{\sin 2k}{1+2^k}$$



is absolutely convergent, i.e., that



$$\sum_{k\ge 0}\left|\frac{\sin 2k}{1+2^k}\right|$$




converges.
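Explicitly, the comparison suggested in the question runs

$$\left|\frac{\sin 2k}{1+2^k}\right|\le\frac{1}{1+2^k}<\left(\frac12\right)^k,$$

and $\sum_{k\ge0}(1/2)^k$ is a convergent geometric series, so the series converges absolutely, and hence converges.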


Wednesday 18 June 2014

calculus - Why does $\lim_{x\rightarrow 0}\frac{\sin(x)}x=1$?




I am learning about the derivative function of $\frac{d}{dx}[\sin(x)] = \cos(x)$.



The proof stated: From $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$...



I realized I don't know why, so I wanted to learn why that part is true first before moving on. But unfortunately I don't have complete notes for this proof.





  1. It started with a unit circle, and then drew a triangle with a vertex at $(1, \tan(\theta))$.

  2. It showed that the area of the big triangle is $\frac{\tan\theta}{2}$.

  3. It showed that this area is greater than that of the sector, which is $\frac{\theta}{2}$.
    Here is my question: how does this "section" of the circle come to have area $\frac{\theta}{2}$? (It looks like a pizza slice.)

  4. From there, it stated that the area of the smaller triangle is $\frac{\sin(\theta)}{2}$. I understand this part, since the area of a triangle is $\frac{1}{2}(\text{base} \times \text{height})$.


  5. Then they multiplied each expression by $\frac{2}{\sin(\theta)}$ to get
    $\frac{1}{\cos(\theta)} \ge \frac{\theta}{\sin(\theta)} \ge 1$.




And the incomplete notes ended here; I am not sure how the teacher got to the conclusion $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$. I thought it might have something to do with reversing the inequality... Is the answer obvious from this point? And how does the calculation in step 3 work?



Answer



Draw the circle of radius $1$ centered at $(0,0)$ in the Cartesian plane.



Let $\theta$ be the length of the arc from $(1,0)$ to a point on the circle. The radian measure of the corresponding angle is $\theta$ and the height of the endpoint of the arc above the coordinate axis is $\sin\theta$.



Now look at what happens when $\theta$ is infinitesimally small. The length of the arc is $\theta$ and the height is also $\theta$, since that infinitely small part of the circle looks like a vertical line (you're looking at the neighborhood of $(1,0)$ under a microscope).



Since $\theta$ and $\sin\theta$ are the same when $\theta$ is infinitesimally small, it follows that $\dfrac{\sin\theta}\theta=1$ when $\theta$ is infinitesimally small.



That is how Leonhard Euler viewed the matter in the 18th century.




Why does the sector of the circle have area $\theta/2$?



The whole circle has area $\pi r^2=\pi 1^2 = \pi$. The fraction of the circle in the sector is
$$
\frac{\text{arc}}{\text{circumference}} = \frac{\theta}{2\pi}.
$$
So the area is
$$
\frac \theta {2\pi}\cdot \pi = \frac\theta2.

$$
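To finish the argument from the notes (a sketch of the presumably intended last step): taking reciprocals in $\frac{1}{\cos(\theta)} \ge \frac{\theta}{\sin(\theta)} \ge 1$ reverses the inequalities, giving

$$\cos\theta\le\frac{\sin\theta}{\theta}\le1,$$

and since $\cos\theta\to1$ as $\theta\to0$, the squeeze theorem forces $\lim_{\theta\to0}\frac{\sin\theta}{\theta}=1$.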


operations on real functions

Let $f$ and $g$ be two real functions with domains $D_1$ and $D_2$ respectively, both subsets of $\mathbb R$. My book says that the function $f+g$ has domain $D_1 \cap D_2$. Why is this? And if $f$ and $g$ have codomains $C_1$ and $C_2$ respectively, both subsets of the set of real numbers, what will be the codomain of $f+g$?
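For reference, a sketch of the standard explanation: the sum is defined pointwise,

$$(f+g)(x):=f(x)+g(x),$$

so the value exists exactly when both $f(x)$ and $g(x)$ are defined, i.e. when $x\in D_1\cap D_2$; that is why $\operatorname{dom}(f+g)=D_1\cap D_2$. As for the codomain: since $C_1,C_2\subseteq\mathbb R$, one may simply take $\mathbb R$ as the codomain of $f+g$; the set of values actually attained (the range) may be much smaller.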

calculus - One sided limit with $e^x$ and $\sin(x)$



Assuming that $e^{x} \to e^{a}$ and $\sin(x) \to \sin(a)$ as $x \to a$,



evaluate the limit $$\lim_{x \to 0^{+}} \frac{e^{x^2+2x-1}}{\sin(x)}.$$



To me, based on the first sentence, I could just plug in $a=0$, but the resulting expression is undefined at $x=0$. However, the back of the book says infinity, so I am at a loss as to how to prove this.



This is a real analysis class, but this isn't one of the problems requiring us to prove things with definitions; still, it would be helpful to learn the algebra and the theory. I don't understand how I can apply the definition of a function converging to infinity to prove this. Plus, I don't see how that could help. How am I supposed to know it goes to infinity?


Answer




When $x$ is close enough to $0$, we have $-1.1 < x^2+2x-1$, so that
$$\exp(-1.1)<\exp(x^2+2x-1)$$



and hence the function in your limit is bounded from below by $\frac{\exp(-1.1)}{\sin x}$, which blows up to $+\infty$ as $x\to0^{+}$, since $\sin x\to0^{+}$ there.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...