Wednesday 31 December 2014

general topology - Closure of the set $\{\sin(n): n\in\mathbb{N},\ n > 0\}$ is the interval of reals $[-1,1]$?

Please prove or disprove.



Any help is appreciated
Thanks in advance

limits - Find $\lim\limits_{n\to+\infty}\frac{\sqrt[n]{n!}}{n}$




I tried using Stirling's approximation and d'Alembert's ratio test but can't get the limit. Could someone show how to evaluate this limit?



Answer



Use equivalents:
$$\frac{\sqrt[n]{n!}}n\sim_{\infty}\frac{\bigl(\sqrt{2\pi n}\bigr)^{\tfrac 1n}}{n}\cdot\frac n{\mathrm{e}}=\frac 1{\mathrm{e}}\bigl({2\pi n}\bigr)^{\tfrac 1{2n}}$$
Now $\;\ln\bigl({2\pi n}\bigr)^{\tfrac 1{2n}}=\dfrac{\ln\pi+\ln 2n}{2n}\xrightarrow[n\to\infty]{}0$, hence
$$\frac{\sqrt[n]{n!}}n\sim_{\infty}\frac 1{\mathrm{e}}. $$
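As a quick numerical sanity check of this equivalence (a Python sketch, not part of the argument), one can compare $\frac{\sqrt[n]{n!}}{n}$ with $\frac 1{\mathrm{e}}\approx 0.3679$, using logarithms to avoid overflow:

```python
import math

# n!^(1/n) / n compared with 1/e; math.lgamma(n + 1) = log(n!) avoids overflow.
for n in (10, 100, 1000, 10000):
    log_ratio = math.lgamma(n + 1) / n - math.log(n)
    print(n, math.exp(log_ratio))
print("1/e =", 1 / math.e)
```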


calculus - Polygamma function series: $\sum_{k=1}^{\infty}\left(\Psi^{(1)}(k)\right)^2$



Applying Copson's inequality, I found:
$$S=\displaystyle\sum_{k=1}^{\infty }\left(\Psi^{(1)}(k)\right)^2\lt\dfrac{2}{3}\pi^2$$ where
$\Psi^{(1)}(k)$ is the polygamma function.
Is any sharper bound known for the sum $S$?

Thanks.


Answer



The upper bound can be improved using asymptotic series.





ordinary differential equations - Extremising a functional with boundary conditions (Euler-Lagrange)

I need to determine all functions $ u(x) $ that extremise the functional: $$ I[u]= \int_{-\infty}^\infty \left[\frac{(u')^2}{2}+(1-\cos u)\right] \, dx $$
subject to the boundary conditions



$$ \lim_{x \to -\infty} u(x)=0 $$ and $$ \lim_{x \to \infty} u(x) = 2\pi $$
I used the standard approach for finding the stationary points of a functional; that is, attempting to solve the Euler-Lagrange equation, but assuming I've attempted this correctly I arrive at
$$ \frac{d^2u}{dx^2} = \sin u $$
which I believe is not (easily) directly solvable, so I'm presuming there's either another way to approach this problem or I've messed up somewhere. A point in the right direction would be great, thanks in advance

algebra precalculus - Are Base Ten Logarithms Relics?



Just interested in your thoughts regarding the contention that





the pre-eminence of base ten logarithms is a relic from
pre-calculator days.




Firstly I understand that finding the (base-10) logarithm of positive real numbers without a calculator can be reduced to finding the (base-10) logarithm of numbers (strictly) between 1 and 10 via scientific notation
$$\log_{10}(x)=\log_{10}(a\times 10^n)=\log_{10}(a)+\log_{10}(10^n)=\log_{10}(a)+n,\,\,\,(*)$$



and that we could compile (approximate) logarithm tables for $1<a<10$.


The next reason that we might need $\log_{10}(x)$ is to solve equations like



$$b^x=n.$$



Now we know that $x=\log_bn$ but we can use the change of base "formula" to express this in terms of log base 10. Of course the change of base "formula" comes from a calculation like



$$\begin{align}
\log_{10}(b^x)&=\log_{10}n
\\\Rightarrow x\log_{10}(b)&=\log_{10}n

\\ \Rightarrow x&=\frac{\log_{10}n}{\log_{10}b}.
\end{align}$$
However, modern calculators can calculate $\log_bn$ in the first place.



Then you could say what about solving
$$b^{f(x)}=c^{g(x)}.$$
Well you don't need to take a base-10 log: we have the perfectly good base $e$ natural log!



In my presently narrow view, it seems to me that it is only stuff like the Richter Scale and sound intensity and similar derived quantities and scales that really use base-10 logs, and that while base $e$ logs are clearly useful, the pre-eminence of base 10 logs is due only to the by-hand calculation (*).




To ask a specific question... base $e$ is clearly special:




Are base 10 logs 'special' only because of the "ease" of calculating (or
should I say approximating) logs base 10?




Or am I missing something else? The reason I am looking at this is I have a section of (precalculus) maths notes that is headed "Two Distinguished Bases" and I am thinking of throwing out base 10.


Answer



Yes, $10$ is not mathematically significant as a base like $e$ is. Using base 10 logs is strictly for the benefit of non-computer calculation and estimation (which, note, can include such things as simply reading a graph with a scale in dB), and consistency with previously established conventions. This may not be of interest to mathematicians, but I doubt engineers would want to give it up.




For these purposes, $10$ does have at least one useful feature beyond being the base of our number system: $\log_{10} 2 = 0.301\ldots \approx 0.3$. This gives the very common approximation that $3$ dB corresponds to a doubling or halving of the quantity of interest. We could get similar simplicity by using $\log_{2}$, but $\log_2 10 = 3.321\ldots$, which is not nearly as convenient for estimation in decimal numbers.



Choosing base $10$ produces nice nearly-tenth-of-an-integer results for numbers of the form $10^x2^y$ (for integer $x$ and small integer $y$), whereas an arbitrary base $b$ is only guaranteed to be nice for $b^x$.
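To illustrate the claim, here is a small sketch printing $\log_{10}(10\cdot 2^y)$ for small $y$; the results sit near tenths of an integer because $\log_{10}2\approx0.3$:

```python
import math

# log10(10 * 2^y) = 1 + y * log10(2) ~ 1 + 0.301 * y
for y in range(-3, 4):
    print(y, round(math.log10(10 * 2.0 ** y), 4))
```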






This suggests further investigation: evaluating bases other than $10$, $2$, and $e$ for having similar almost-integer approximations. I tried writing a program to measure/plot how many good approximations there were for various bases, but it turned out that defining the goodness of an approximation and whether it's good enough to count involves a few too many parameters and I didn't get around to refining it to a result worth sharing.


sequences and series - How to prove $\{a_n\}$ is increasing where $a_1 = \sqrt{2}$ and $a_{n+1} = \sqrt{2+ a_n}$

I already found out that this sequence is bounded above and $a_n <2 \forall n \in \mathbb Z_+ $



I think I'm missing a point as I can't think of a way to prove that the sequence is increasing.

probability - On average, how many friends would I need to have to have at least one friend's birthday every day?

I know that because of the birthday problem, even after 365 friends, you're going to have a lot of doubles and that there's also an infinitesimal chance that even with infinite friends that there's one day left out. But I was curious how many friends you'd need on average to have every day represented (this is ignoring leap day and assuming birthdays are equally distributed). Or to generalize it further, given n unique boxes and you're placing balls in them with an equal 1/n chance for the ball to go into any box, how many balls would you have to place on average before every box had at least one ball?

Why some complex integrals are 0



This question asks to give an answer to why the following integrals are $0$:



(Notation: $\gamma(a,r)$ is a circle that has center $a$ and radius $r$.)





$$\int_{\gamma(i,3)}\frac{dz}{(z-2)^3}$$




My attempt: All three zeros of the denominator are in the disc. If they were outside the disc, then this would be zero. But I'm not sure what to do with this next.




$$\int_{\gamma(0,1)}z|z|^4\,dz$$




My attempt: The integrand is not holomorphic, so I'm not sure how to proceed.




EDIT: The problem above was answered by a hint in the comments.




$$\int_{\gamma(1,2)}\frac{\sin(z)dz}{z}$$




My attempt: $0\leq|\int_{\gamma(1,2)}\frac{\sin(z)dz}{z}|\leq\int_{\gamma(1,2)}|\frac{\sin(z)}{z}|dz\leq\int_{\gamma(1,2)}|\frac{1}{z}|dz\leq\int_{\gamma(1,2)}\frac{1}{|z|}dz$



I want to say $\int_{\gamma(1,2)}\frac{1}{|z|}dz\leq 0$ but $\frac{1}{|z|}$ isn't holomorphic, so I'm not sure what to do here.



Answer



The first one: $\frac{1}{(z-2)^3}$ has the antiderivative $-\frac{1}{2(z-2)^2}$.



The second one: It is the same as $\int_{\gamma(0,1)}z\cdot 1^4dz$ because $|z|=1$ on the curve.



The third one: $\frac{\sin z}{z}$ has a removable singularity at $z=0$, so the integral can actually be viewed as an integral of a holomorphic function. Alternatively, use Cauchy's integral formula $\frac{1}{2\pi i}\int_{\gamma}\frac{f(z)}{z-a}dz=f(a)$ and apply it to the point $a=0$ and function $f(z)=\sin z$, noting that $\sin 0=0$.
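All three answers are easy to sanity-check numerically. Here is a small sketch approximating each contour integral with a trapezoidal sum over the parametrization $z(t)=a+re^{it}$ (the step count is an arbitrary choice):

```python
import cmath

def contour_integral(f, a, r, n=4096):
    # Trapezoidal approximation of the integral of f over gamma(a, r),
    # traversed counterclockwise: z(t) = a + r*exp(it), dz = i*r*exp(it) dt.
    total = 0j
    for k in range(n):
        t = 2 * cmath.pi * k / n
        total += f(a + r * cmath.exp(1j * t)) * 1j * r * cmath.exp(1j * t)
    return total * 2 * cmath.pi / n

print(contour_integral(lambda z: 1 / (z - 2) ** 3, 1j, 3))   # ~ 0
print(contour_integral(lambda z: z * abs(z) ** 4, 0, 1))     # ~ 0
print(contour_integral(lambda z: cmath.sin(z) / z, 1, 2))    # ~ 0
```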


linear algebra - What extra assumption makes this transformation affine?

Let a vector space $V$ be given. Let $f:V\to V$ have the property that for all $x,y,a\in V$,
$$

f(x+a)-f(y+a) = f(x) - f(y) \tag{$\star$}
$$
Q1. I'd like to know how weak one can make additional assumptions to guarantee that $f$ is affine, namely that there exists a linear transformation $L:V\to V$, and an element $v\in V$ such that
\begin{align}
f(x) = L(x) + v
\end{align}
for all $x\in V$.



I think, for example, that if one assumes that $V = \mathbb R^n$, and if $f$ is differentiable, then $(\star)$ implies that $f$ is affine. This makes me think that perhaps one needs a notion of smoothness or continuity in general and therefore possibly a norm? Also




Q2. Is there a common name for the property $(\star)$?

What is the motivation behind the (metric spaces) definition of an open set?



As far as I know, the standard definition of an open set is that the set $A$ is called open if $A \subseteq X$ for some set X and if $A \cap \partial A=\emptyset$ where $\partial A$ is the set of boundary points of A. In particular, I fail to see the motivation for the $A \cap \partial A=\emptyset$ part of this definition, why wouldn't replacing this with $\partial A=\emptyset$ be satisfactory?



Admittedly, experience is telling me that this is a case of "we define it this way because it is useful", but if that is the case then in what way is this useful? After all, as far as I can tell the alternative that I've proposed is very nearly equivalent to the standard definition. The only difference that comes immediately to my mind is that the standard definition would treat the case of $A=\emptyset$ differently. Is that an important difference? Or have I missed something else that is important?


Answer



If you accept that we have an intuitive understanding of what is meant by, for example, an open disc in $\Bbb R^2$, then your proposed definition will not work.



The boundary of such a disc is its bounding circle (apologies for tautology, but I'm talking about intuitive ideas, not formal calculations). So the disc and its boundary have empty intersection and the standard definition works. But the boundary is not empty, so your definition does not classify this "open disc" as an "open set".


Tuesday 30 December 2014

real analysis - Limit $\lim\limits_{n\to +\infty} e^{\sqrt n }\cdot\left(1 - \frac1{\sqrt n}\right)^n$



$$\lim_{n\to +\infty}\ e^{\sqrt n }\cdot\Big(1 - \frac{1}{\sqrt n}\Big)^n$$



Answers are:



A)0




B)1



C)e



D)${\sqrt e}$



E)$\frac{1}{\sqrt e}$



I tried working on the second part to get it in a better form. In the end I got $e^{- \sqrt n}$. Returning at the beginning with this new form it would be:

$$\lim_{n\to +\infty}\ e^{\sqrt n }\cdot e^{- \sqrt n}$$ which would eventually turn into $$\lim_{n\to +\infty}\ e^{0}$$ so the answer would be B) 1 but it's not. The answer is E)$\frac{1}{\sqrt e}$ but I can't figure it out why.


Answer



Consider $$A= e^{\sqrt n } * \left(1 - \frac{1}{\sqrt n}\right)^n$$ Take logarithms of both sides $$\log(A)=\sqrt n+n\log\left(1 - \frac{1}{\sqrt n}\right)$$ Now, use the fact that for small values of $x$, $\log(1-x)=-x-\frac{x^2}{2}+O\left(x^3\right)$. Replace $x$ by $\frac{1}{\sqrt n}$ which makes $$\log(A)=\sqrt n+n(-\frac{1}{\sqrt n}-\frac{1}{2 n}+\cdots)$$ Expand and simplify.



I am sure that you can take it from here.
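For anyone who wants to watch the value emerge, here is a quick numerical sketch of $\log(A)$ (using `math.log1p` for accuracy near $0$):

```python
import math

# log A = sqrt(n) + n*log(1 - 1/sqrt(n)) should approach -1/2,
# i.e. A approaches 1/sqrt(e), answer E.
for n in (10**2, 10**4, 10**6, 10**8):
    print(n, math.sqrt(n) + n * math.log1p(-1 / math.sqrt(n)))
```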


measure theory - The infimum of a measurable $f:X\times Y\to[-\infty,\infty]$ over $X$ is measurable $\bar f:Y\to[-\infty,\infty]$



Let $(X,\mathscr M)$ and $(Y,\mathscr N)$ be measurable spaces, and let $f:X\times Y\to[-\infty,\infty]$ be some measurable function. Then we may define $\bar f:Y\to[-\infty,\infty]$ by $$\bar f(y)=\inf_{x\in X}f(x,y)$$ but is this function $\mathscr N$-measurable? I suspect the answer is yes, though I don't see why.



Edit: Note that the statement that the projection map $\pi:X\times Y\to Y$ takes measurable sets to measurable sets is false.



Answer



The statement is in general false: Let $X= Y= \mathbb{R}$ and $\mathcal{M}$ the Lebesgue-$\sigma$-algebra and $\mathcal{N}$ the Borel-$\sigma$-algebra. Now take any set $E \subset \mathbb{R}$ which is Lebesgue-measurable, but not Borel.



Define $$f(x,y) = \begin{cases} 0 & \text{if } x=y \text{ and } x\in E \\ 1 & \text{else} \end{cases}.$$
This map is measurable with respect to $\mathcal{M} \otimes \mathcal{N}$, because $\{(x,y) \in \mathbb{R}^2 : x=y,\ x\in E\}$ is the intersection of the closed diagonal (a Borel set) with $E\times\mathbb{R}\in\mathcal{M} \otimes \mathcal{N}$. On the other hand, $$\inf_{x \in X} f(x,y) = 1-1_{E}(y)$$
is not Borel-measurable.


complex numbers - Find the real and imaginary parts

Find the real and imaginary parts of : $$ \frac {e^{iθ}} {1-λe^{iΦ}} $$




Here $i$ denotes the imaginary unit (iota).



I have used $ e^{iθ} = \cos θ +i \sin θ $ but I am not able to separate real and imaginary parts. I am not getting any clue how to proceed.



The answer given in my textbook:
Real: $\frac{\cos θ - λ \cos(θ-Φ)}{1-2λ \cos Φ + λ^2}$



Imaginary: $\frac{\sin θ - λ \sin(θ-Φ)}{1-2λ \cos Φ + λ^2}$



Thank you

Monday 29 December 2014

abstract algebra - Finding a Galois Extension



My question is part of a larger problem: I'm supposed to find the minimal polynomial over $\mathbb{Q}$ of $1 + \sqrt[3]{2} + \sqrt[3]{4}$ "using the automorphisms of the corresponding Galois extension."



After talking to my professor, I know that the Galois group I'm looking for is $S_3$. However, I don't know how to ascertain what the Galois extension is with the information I've been given.


Answer




Let $\alpha = \sqrt[3]{2}$. We are looking for the minimal polynomial of $1 + \alpha + \alpha^2$. To find the Galois extension of $\mathbb{Q}$ that contains $1 + \alpha + \alpha^2$, note that $1, \alpha, \alpha^2 \in \mathbb{Q}(\alpha)$. However, $\mathbb{Q}(\alpha)$ is not Galois over $\mathbb{Q}$ because the polynomial $p(x) = x^3 - 2$ is irreducible in $\mathbb{Q}$, has a root in $\mathbb{Q}(\alpha)$, and does not split into linear factors in $\mathbb{Q}(\alpha)$. However, $\mathbb{Q}(\alpha) \subset \mathbb{Q}(\alpha, \omega)$, where $\omega = e^{2i\pi/3}$. Furthermore, $p(x)$ splits into linear factors in $\mathbb{Q}(\alpha, \omega)$, so $\mathbb{Q}(\alpha, \omega)$ is Galois over $\mathbb{Q}$.



Next we must find the Galois group of $\mathbb{Q}(\alpha, \omega)/\mathbb{Q}$. We know that $[\mathbb{Q}(\alpha, \omega): \mathbb{Q}] \leq 3! = 6$. We also know that $[\mathbb{Q}(\alpha): \mathbb{Q}] = 3$ since $p(x)$ is the minimal polynomial over $\mathbb{Q}$ of $\alpha$. Since $[\mathbb{Q}(\alpha, \omega): \mathbb{Q}] = [\mathbb{Q}(\alpha, \omega): \mathbb{Q}(\alpha)] \cdot [\mathbb{Q}(\alpha) : \mathbb{Q}]$, it must be that $3 < [\mathbb{Q}(\alpha, \omega): \mathbb{Q}] \leq 6$ and 3 divides $[\mathbb{Q}(\alpha, \omega): \mathbb{Q}]$. So $[\mathbb{Q}(\alpha, \omega): \mathbb{Q}] = 6$; hence $\operatorname{Gal}(\mathbb{Q}(\alpha, \omega)/\mathbb{Q})$ is isomorphic to $S_3$. In particular, consider the set of permutations in $S_3$ that map $\alpha$ to roots of $p(x)$. If we define $\sigma_1, \sigma_2, \sigma_3 \in S_3$ by
\begin{align*}
\sigma_1(\alpha) &= \alpha\\
\sigma_2(\alpha) &= \alpha \omega\\
\sigma_3(\alpha) &= \alpha \omega^2
\end{align*}

then the images under $\sigma_1, \sigma_2, \sigma_3$ of $1 + \alpha + \alpha^2$ are the roots of the minimal polynomial of $1 + \alpha + \alpha^2$. We have
\begin{align*}

\sigma_1\left(1 + \alpha + \alpha^2\right) = 1 + \alpha + \alpha^2\\
\sigma_2\left(1 + \alpha + \alpha^2\right) = 1 + \alpha \omega + \alpha^2 \omega^2\\
\sigma_3\left(1 + \alpha + \alpha^2\right) = 1 + \alpha \omega^2 + \alpha^2 \omega^4
\end{align*}

So the minimal polynomial of $1 + \sqrt[3]{2} + \sqrt[3]{4}$ is $$q(x) = \left(x - 1 - 2^{1/3} - 2^{2/3}\right)\left(x - 1 - 2^{1/3}e^{2i\pi/3} - 2^{2/3}e^{-2i\pi/3}\right)\left(x - 1 - 2^{1/3}e^{-2i\pi/3} - 2^{2/3}e^{2i\pi/3}\right).$$ After doing a great deal of algebra, we find that $q(x) = x^3 - 3x^2 - 3x - 1$.
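A quick floating-point check of that final claim (a sketch, independent of the Galois-theoretic argument):

```python
# Verify numerically that 1 + 2^(1/3) + 2^(2/3) is a root of
# q(x) = x^3 - 3x^2 - 3x - 1.
r = 1 + 2 ** (1 / 3) + 2 ** (2 / 3)
print(r ** 3 - 3 * r ** 2 - 3 * r - 1)   # ~ 0 up to rounding error
```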


calculus - Proving that a continuous function has a solution $f:[0,1]to mathbb R$





Let $f:[0,1]\to \mathbb R$ be a continuous function such that $f(0)=f(1)$.



Prove that $f(x)=f\left(x+\frac12\right)$ has a solution for $x\in [0,\frac12]$.




This question has to do with continuity and the intermediate value theorem.




I observed that $f(0)=f\left(\frac12\right)=f(1)$, but I don't see how to show that the function goes through zero (i.e., has a solution); for all we know it can be a straight line parallel to the $x$-axis in $[0,1]$.


Answer



Hint: put $g(x)=f(x)-f\left(x+\dfrac{1}{2}\right)$, which is also continuous.



Then $g(0)=f(0)-f\left(\dfrac{1}{2}\right)$ and $g\left(\dfrac{1}{2}\right)=f\left(\dfrac{1}{2}\right)-f(1)=-\left(f(0)-f\left(\dfrac{1}{2}\right)\right)$, so that $g(0)\,g\left(\dfrac{1}{2}\right) \le 0$ and the intermediate value theorem applies.
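To see the hint in action, here is a minimal sketch with one sample $f$ satisfying $f(0)=f(1)$ (the particular $f$ is an arbitrary choice); bisection on $g$ exhibits a point where $f(x)=f\left(x+\frac12\right)$:

```python
import math

def f(x):
    return math.sin(2 * math.pi * x) + x * (1 - x)   # f(0) = f(1) = 0

def g(x):
    return f(x) - f(x + 0.5)

# g(0) and g(1/2) have opposite signs (or one vanishes), so bisect.
a, b = 0.0, 0.5
for _ in range(60):
    m = (a + b) / 2
    if g(a) * g(m) <= 0:
        b = m
    else:
        a = m
print("x =", a, "  g(x) =", g(a))   # g(x) ~ 0, i.e. f(x) = f(x + 1/2)
```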


real analysis - Find $\lim_{x \to 0} \frac{\ln (x^2+1)}{x^2}$ without L'Hopital's rule



I have to find the limit without L'hopital's rule:
$$\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2} $$




Is it possible?
I thought about using squeeze theorem or something, but it didn't work out.



Hints are more than welcome!



P.S - I didn't study Taylor series or Integrals yet.


Answer



$$\begin{align}
\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2}&=\lim_{x \to 0} \ln (x^2+1)^{\frac{1}{x^2}}\\

&=\ln\left(\lim_{x \to 0} (x^2+1)^{\frac{1}{x^2}}\right)\\
&=\ln e=1
\end{align}$$


recreational mathematics - What is the probability of rolling two 5s or better with 5 dice?



Preamble



My question is in the context of a dice game called 31. You have 6 dice and the goal is to have the highest score. If you roll 36, you score 6 points. If you roll 35 you score 5 points and so on up until 30, where you break even. At 29 you lose 1 point and so on.



You roll all the dice at once and after each roll you need to keep at least 1 die on the table. So you roll a maximum of 6 times.




Question
If your first roll gives you 6-5-5-x-x-x (where the Xs ≤ 4), what is the probability of getting at least two 5s if you roll five dice (excluding the 6 that you keep). By "at least two 5s" I mean that 5-5-x-x-x is the bottom of the range and 6-6-6-6-6 is the top of the range.



The question could also be worded "On the 2nd roll, do you keep the 5s and roll 3 dice or do you roll them?"



Thanks.



To add more clarification to the rules of the game, here's a good example provided by @Brams28 in the comments:




[...]a roll is rolling all the dice you still have left. You have to keep at least one die after each roll, but a single die could potentially be rolled 6 times. Example: I roll 6,5,4,3,2,1 on the first roll. I decide to keep the 6 and the 5 but reroll the 4 others. Now I get 5,2,2,1. I keep the 5 and reroll the other 3. Now I get 6,3,1. I keep the 6 and reroll the last two. I get 5,4. OK, I keep those two (of course!)


Answer



To determine optimal strategy, I'll start from the end.



With the option of keeping/rerolling one die (i.e. the first five are 'locked in') we keep $\ge4$ and reroll $\le3$ because the expected value $EV_1$ of one $d_6$ is $3.5$



For the option of keeping/rerolling two dice, we need to determine the EV of two $d_6$ with subsequent reroll. To do this, we look at all possible outcomes of two dice and apply the optimal strategy for rerolling one die.
\begin{matrix}
6+6&6+5&6+4&6+\color{red}3&6+\color{red}2&6+\color{red}1\\
5+6&5+5&5+4&5+\color{red}3&5+\color{red}2&5+\color{red}1\\

4+6&4+5&4+4&4+\color{red}3&4+\color{red}2&4+\color{red}1\\
\color{red}3+6&\color{red}3+5&\color{red}3+4&3+\color{red}3&3+\color{red}2&3+\color{red}1\\
\color{red}2+6&\color{red}2+5&\color{red}2+4&\color{red}2+3&2+\color{red}2&2+\color{red}1\\
\color{red}1+6&\color{red}1+5&\color{red}1+4&\color{red}1+3&\color{red}1+2&1+\color{red}1\\
\end{matrix}

The red numbers are those that we reroll. Replace them with $EV_1=3.5$ and sum each outcome.
\begin{matrix}
12&11&10&9.5&9.5&9.5\\
11&10&9&8.5&8.5&8.5\\
10&9&8&7.5&7.5&7.5\\

9.5&8.5&7.5&6.5&6.5&6.5\\
9.5&8.5&7.5&6.5&5.5&5.5\\
9.5&8.5&7.5&6.5&5.5&4.5
\end{matrix}

Add all these together and divide by $36$ and we get $EV_2=\frac{296.5}{36}\approx8.236$
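The table is easy to reproduce by brute force; here is a short sketch enumerating the $36$ outcomes with the strategy just described (keep dice showing $4$ or more, but always keep at least one die):

```python
from itertools import product

EV1 = 3.5   # expected value of one d6

total = 0.0
for a, b in product(range(1, 7), repeat=2):
    if a >= 4 and b >= 4:
        total += a + b               # keep both dice
    else:
        # keep the higher die (at least one must be kept), reroll the other
        total += max(a, b) + EV1
print(total / 36)                    # 8.2361... = 296.5/36
```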



Now, for optimal strategy we note that keeping a $5$ and rerolling the second die $\le3$ gives us an $EV$ of $8.5>EV_2$, so this is good. Also, keeping a $4$ and rerolling the second die $\le3$ gives us an $EV$ of $7.5<EV_2$, so this is not good. Finally, keeping $4\ 4$ gives us $8<EV_2$, so rerolling both dice gives us a slight advantage.



Likewise, we find the optimal strategy for the option of keeping/rerolling three dice by examining all $216$ possible outcomes and applying our optimal strategy for two dice to each. The result is $EV_3\approx13.425$. Now keeping a $5$ and rerolling the other two dice $\le4$ gives us $5+EV_2=13.236$, which is less than $EV_3$, so our optimal strategy is to reroll all three dice unless we have at least two $5$s, in which case we reroll the third die $\le3$.




Through simulation, I found $EV_4>18.8$, so optimal strategy would be to reroll all four dice unless we already have $5\ 5\ 5\ 5 =20$ or $5\ 5\ 5\ 4=19$



I also found $EV_5>24.4$, so optimal strategy would be to reroll all five dice unless we have five $5$s.



Finally, with optimal strategy throughout, $EV_6>30.1$ for a full round. The creators of this game likely knew this, with a positive score earned only if you exceed this $EV$.
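The whole chain of expected values can be reproduced with a short exact recursion (a sketch under the reading that at least one die must be kept from each roll, which also keeps the number of rolls within the 6-roll cap). It returns $EV_1=3.5$, $EV_2\approx8.236$ and $EV_3\approx13.425$ as above, and its values for $k=4,5,6$ should land at the simulated bounds quoted:

```python
from itertools import product
from functools import lru_cache

@lru_cache(maxsize=None)
def ev(k):
    # Expected total from k dice: roll them, keep a nonempty subset,
    # continue optimally with the rest.  Keeping the m largest dice is
    # optimal among all subsets of size m.
    if k == 0:
        return 0.0
    total = 0.0
    for roll in product(range(1, 7), repeat=k):
        dice = sorted(roll, reverse=True)
        total += max(sum(dice[:m]) + ev(k - m) for m in range(1, k + 1))
    return total / 6 ** k

for k in range(1, 7):
    print(k, round(ev(k), 3))
```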


real analysis - Using unbounded derivative to show function is not uniformly continuous



I'm confused how to use the following theorem:



19.6 Theorem.
Let $f$ be a continuous function on an interval $I$ [$I$ may be bounded or unbounded]. Let $I^◦$ be the interval obtained by removing from $I$ any endpoints that happen to be in $I$. If $f$ is differentiable on $I^◦$ and if $f′$ is bounded on $I^◦$, then $f$ is uniformly continuous on $I.$



So far, I have encountered these examples: $f(x)= \sqrt{x}$, $g(x)= \frac{1}{x}$, $h(x)= x^2$, each on the interval $(0,\infty)$.




I know $f$ is uniformly continuous, but $g$ and $h$ are not.
However, the derivative of each of these functions is unbounded on $(0,\infty)$.



To show that a continuous function is not uniformly continuous on $(0, \infty)$, do I need to show the derivative is unbounded for every interval $[a, \infty )$ , where $a>0$?



If so, how would I prove the derivative is unbounded on $[a, \infty)$?



I would appreciate a worked out example with one of the functions above or one of your choosing.


Answer





If $f$ is differentiable on $I^\circ$ and $f'$ is bounded on
$I^\circ$, then $f$ is uniformly continuous on $I$




is correct.




If $f$ is uniformly continuous on $I$, then $f$ is differentiable on $I^\circ$ and $f'$ is bounded on $I^\circ$





is not correct. The counterexample is exactly $f(x)=\sqrt x$.



To show that $f$ is not uniformly continuous on $I$, it is enough to show that it is not uniformly continuous on $(0,1]$. And uniformly continuous functions on a bounded interval are always bounded.


geometry - Longest chord inside the intersection area of three circles

I am currently working on my masters thesis in computer science and I stumbled onto a geometry problem.



My goal is to compute the length of the longest possible chord inside the intersection area of three circles.



I know the following thing about the circles:





  • their radius is r

  • Construction: Assume that there is a circle around a point c with radius r. Then I choose two random points v1 and v2 inside this circle. These two points are the centers of the other two circles.



This bounds the distance between the centers of the circles:




  • length(c,v1) is at most r

  • length(c,v2) is at most r


  • length(v1,v2) is at most 2r



Note that the points can be the same. So some of these distances can be 0.



Therefore I will break this down into four cases:




  • Case 1: (all points are the same, c=v1=v2)
    If all points are the same, the solution is trivial. The longest possible line inside a circle is the diameter.



  • Case 2: (two points are the same)
    If two points are the same, I can compute the distance like this: http://mathworld.wolfram.com/Circle-CircleIntersection.html


  • Case 3: (three different points, but on a straight line)
    If the three points form a straight line, I can compute the distance like in the 2nd case. I just ignore the point that is located in the middle.


  • Case 4: (the three points form a triangle)
    If the 3 centers form a triangle there will be three intersection points A,B and C which will form a triangle, too (correct me if I am wrong).




To make things clearer, let's rotate the situation without loss of generality: rotate the configuration so that c is the topmost center.




Let A be the intersection point closest to c. B and C follow in clockwise direction.



Here is the situation:
[Figure: find the longest chord inside the highlighted intersection area.]



I tried to compute these values by constructing other triangles, but I just cannot find any solution. In some special cases I could find the angles of the triangle $(A,B,C)$, but that didn't help me either. Is it even possible? If not, is there a way to find an upper bound for the longest chord (one smaller than the obvious 2r)?



It wouldn't be so hard to write an approximation program, but since I am working on a theoretical proof, that does not help me.
I hope this question is not too dumb ;) I am not a mathematician.




Thanks in advance, hhllcks

Sunday 28 December 2014

linear algebra - Finding the characteristic matrix of a block triangular matrix



I came to this stage when I was reading a linear algebra text:




Suppose $M$ is a block triangular matrix, say $M= \begin{pmatrix} A_1 & B \\ 0 & A_2\end{pmatrix}$ where $A_1$ and $A_2$ are square matrices. Then the characteristic matrix of $M$,
$$\begin{pmatrix} tI-A_1 & -B \\ 0 & tI-A_2\end{pmatrix}$$is also a block triangular matrix with diagonal blocks $tI-A_1$ and $tI-A_2$. Thus by Theorem 7.12,
$$|tI-M|=\left|\begin{matrix} tI-A_1 & -B \\ 0 & tI-A_2\end{matrix}\right|=|tI-A_1||tI-A_2|.$$That is the characteristic polynomial of $M$ is the product of the characteristic polynomials of the diagonal blocks $A_1$ and $A_2$.





$tI - M$ gives a matrix with components:




  • $tI - A_1$ (makes sense)

  • $tI - A_2$ (even that I got)



But why is the top right component $-B$? Why not $tI-B$? What am I missing?


Answer




Note that $$ M = \begin{pmatrix} A_1 & B \\ 0 & A_2 \end{pmatrix} $$
and write the identity with corresponding blocks: suppose $A_1$ is an $n_1 \times n_1$-matrix and $A_2$ is an $n_2 \times n_2$-matrix. Then in $tI - M$ the identity is an $(n_1+n_2)\times (n_1 + n_2)$-matrix. We have
$$ I_{n_1+n_2} = \begin{pmatrix} I_{n_1} & 0 \\ 0 & I_{n_2} \end{pmatrix} $$
as the identity only has diagonal entries, and the off-diagonal blocks contain no diagonal elements. Hence
$$ tI - M = t\begin{pmatrix} I_{n_1} & 0 \\ 0 & I_{n_2} \end{pmatrix}
- \begin{pmatrix} A_1 & B \\ 0 & A_2 \end{pmatrix} = \begin{pmatrix} tI - A_1 & -B \\ 0 & tI - A_2 \end{pmatrix}. $$
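A quick numerical illustration of the resulting determinant identity $|tI-M|=|tI-A_1|\,|tI-A_2|$, with hypothetical random blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 2, 3
A1, A2 = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))
B = rng.standard_normal((n1, n2))
M = np.block([[A1, B], [np.zeros((n2, n1)), A2]])   # block triangular

t = 1.7
lhs = np.linalg.det(t * np.eye(n1 + n2) - M)
rhs = np.linalg.det(t * np.eye(n1) - A1) * np.linalg.det(t * np.eye(n2) - A2)
print(lhs, rhs)   # agree up to rounding
```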


calculus - How to find the sum of this power series $sumlimits_{n=0}^infty frac {x^{5n}} {(5n)!}$



How to prove that
$$
\sum\limits_{n=0}^\infty \frac {x^{5n}} {(5n)!}=
\frac{2}{5} e^{-\cos \left( 1/5\,\pi \right) x}\cos \left( \sin

\left( 1/5\,\pi \right) x \right) +\frac{2}{5}\, e^{\cos \left( 2/5\,
\pi \right) x}\cos \left( \sin \left( 2/5\,\pi \right) x \right) +\frac{1}{5} e^{x}?
$$


Answer



Hint. Clearly
$$
\frac{1}{5}\sum_{j=1}^5\mathrm{e}^{\omega^j x}=\sum_{n=0}^\infty\frac{x^{5n}}{(5n)!}
$$
where $\omega=\mathrm{e}^{2\pi i/5}$, since
$$

\sum_{j=1}^5 \omega^{jn}=\left\{\begin{array}{ccc} 5&\text{if}& 5\mid n, \\ 0&\text{if} &5\not\mid n. \end{array}\right.
$$
But
$$
\omega=\cos (2\pi/5)+i\sin (2\pi/5), \,\,\omega^2=\cos (4\pi/5)+i\sin (4\pi/5),
\,\,\omega^3=\overline{\omega^2},\,\,\omega^4=\overline{\omega},
$$
and so
$$
\sum_{n=0}^\infty\frac{x^{5n}}{(5n)!}=\frac{1}{5}\sum_{j=1}^5\mathrm{e}^{\omega^j x}=\frac{1}{5}\left(\mathrm{e}^x+2\mathrm{e}^{x\cos(2\pi/5)}\cos\big(x\sin(2\pi/5)\big)+2\mathrm{e}^{x\cos(4\pi/5)}\cos\big(x\sin(4\pi/5)\big)\right).

$$
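The closed form is easy to test numerically at a sample point, summing over the five fifth roots of unity (a small sketch; the value of $x$ is arbitrary):

```python
import math, cmath

x = 1.3
series = sum(x ** (5 * n) / math.factorial(5 * n) for n in range(12))
w = cmath.exp(2j * cmath.pi / 5)
closed = sum(cmath.exp(w ** j * x) for j in range(1, 6)).real / 5
print(series, closed)   # agree to machine precision
```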


discrete mathematics - Binomial Theorem identities, evaluate the sum



This is a homework problem, please don't blurt out the answer! :)




I've been given the following, and asked to evaluate the sum:



$$\sum_{k = 0}^{n}(-1)^k\binom{n}{k}10^k$$



So, I started out trying to look at this as equivalent to the binomial theorem, in which case, I could attempt something like this: $10^k = y^{n-k}$ but I didn't feel that got me anywhere.



So I started actually evaluating it...



$$(-1)^0\binom{n}{0}10^0 + (-1)^1\binom{n}{1}10^1 + \ldots + (-1)^n\binom{n}{n}10^n$$




So, if I'm thinking correctly, all the other terms cancel out and you are left with:



$$(-1)^n\binom{n}{n}10^n = (-1)^n10^n$$



But, obviously this cannot be correct (or can it?). The book gives a slightly different answer, so I'm wondering where I'm going wrong. Some direction would be greatly appreciated!



Books answer: $\displaystyle (-1)^n9^n$


Answer



Try to fit your sum into one of the following:
$$

\sum_{k=0}^n\binom{n}ka^kb^{n-k}=(a+b)^n,\quad\sum_{k=0}^n\binom{n}kb^{n-k}=(1+b)^n,\quad\sum_{k=0}^n\binom{n}ka^k=(a+1)^n.
$$
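Once the sum is matched to the first pattern with $a=-10$ and $b=1$, a brute-force check confirms the book's $(-9)^n$:

```python
from math import comb

for n in range(7):
    s = sum((-1) ** k * comb(n, k) * 10 ** k for k in range(n + 1))
    print(n, s, (-9) ** n)   # the two columns agree
```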


Saturday 27 December 2014

real analysis - Does a function have to be bounded to be uniformly continuous?

My book defines uniform continuity as a form of continuity that works for any points $a$ and $x$ in an interval $I$ such that



$$|x-a| < \delta$$ implies that $$|f(x) - f(a)| < \epsilon$$



It then goes on to assert that "If $f$ is continuous over a closed and bounded interval $[a,b]$, it is uniformly continuous on said interval."



My question is this: Does $f$ have to be bounded to be uniformly continuous? If not, can someone give me an example and show me why this is the case? This is a concept that I've only been shown with bounded examples in class (and we don't have class until after Thanksgiving).




I saw there exists a question here like this, but I didn't feel the answer was rigorous enough for me to understand fully.

real analysis - Find $\lim\limits_{n\to\infty}\frac{a_1+a_2+...+a_n}{1+\frac{1}{\sqrt2}+...+\frac{1}{\sqrt{n}}}$ with $a_1=1$ and $a_{n+1}=\frac{1+a_n}{\sqrt{n+1}}$




Let $(a_n)_{n\ge1}, a_1=1, a_{n+1}=\frac{1+a_n}{\sqrt{n+1}}$.
Find $$\lim_{n\to\infty} \frac{a_1+a_2+\cdots+a_n}{1+\frac{1}{\sqrt2}+\cdots+\frac{1}{\sqrt{n}}}$$





This is my try:



I split the limit like this:
$$L=\lim_{n\to\infty} \frac{a_1+a_2+\cdots+a_n}{\sqrt{n+1}}\frac{\sqrt{n+1}}{1+\frac{1}{\sqrt2}+\cdots+\frac{1}{\sqrt{n}}}$$.
The second term of the limit tends to 2.
The first one, after Cesaro-Stols, become:
$$\lim_{n\to\infty}a_{n+1}(\sqrt{n+1}+\sqrt{n+2})$$
I tried to squeeze the term $a_n$ between two terms depending on $n$, such as $a_n<\frac{1}{\sqrt{n}}$ or something like that, in order to use the sandwich theorem. Any ideas for such bounds? Or other ideas for the problem?


Answer




Stolz–Cesàro is a way to go, but applied to
$S_n=\sum\limits_{k=1}^n a_k$ and $T_n=\sum\limits_{k=1}^n \frac{1}{\sqrt{k}}$, where $T_n$ is a strictly monotone and divergent sequence ($T_n >\sqrt{n}$). Then
$$\lim\limits_{n\rightarrow\infty}\frac{S_{n+1}-S_n}{T_{n+1}-T_n}=
\lim\limits_{n\rightarrow\infty}\frac{a_{n+1}}{\frac{1}{\sqrt{n+1}}}=
\lim\limits_{n\rightarrow\infty} \left(1+a_n\right)=\\
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1+a_{n-1}}{\sqrt{n}}\right)=
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{n(n-1)}}+\frac{a_{n-2}}{\sqrt{n(n-1)}}\right)=\\
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{n(n-1)}}+\frac{1}{\sqrt{n(n-1)(n-2)}}+...+\frac{a_1}{\sqrt{n!}}\right)=\\
1+\lim\limits_{n\rightarrow\infty} \left(\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)\right)$$







Now, for
$$\lim\limits_{n\rightarrow\infty} \left(\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)\right) \tag{1}$$
we have
$$0<\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+\frac{1}{\sqrt{(n-1)(n-2)(n-3)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)<
\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)(n-2)}}\right)
=\\\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{n-3}{\sqrt{(n-1)(n-2)}}\right)\rightarrow 0$$







Finally, $(1)$ has $0$ as the limit, $\frac{S_{n+1}-S_n}{T_{n+1}-T_n}$ has $1$ as the limit. The original sequence has $1$ as the limit as well.
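A numerical check of the limit (a sketch; convergence is slow, since the correction terms decay only like $\frac{1}{\sqrt n}$):

```python
import math

a, S, T = 1.0, 0.0, 0.0
for n in range(1, 2 * 10 ** 6 + 1):
    S += a                        # S = a_1 + ... + a_n
    T += 1 / math.sqrt(n)         # T = 1 + 1/sqrt(2) + ... + 1/sqrt(n)
    a = (1 + a) / math.sqrt(n + 1)
print(S / T)   # slowly approaches 1
```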


radicals - Calculating the following limit: $lim_{x to 0} frac{sqrt{x^2+1}-sqrt{x+1}}{1-sqrt{x+1}} $



I am trying to calculate this limit:

$$
\lim_{x \to 0} \frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}
$$



I've tried using conjugate of both denominator and numerator but I can't get the right result.


Answer



$$\frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}$$
$$=\frac{(1+\sqrt{x+1})\{x^2+1-(x+1)\}}{(\sqrt{x^2+1}+\sqrt{x+1})(1-(x+1))}$$
$$=\frac{(1+\sqrt{x+1})\{x(x-1)\}}{(\sqrt{x^2+1}+\sqrt{x+1})(-x)}$$
$$=\frac{(1+\sqrt{x+1})(1-x)}{(\sqrt{x^2+1}+\sqrt{x+1})}\text { if } x\ne0$$




As $x\to0,x\ne0$



So, $$\lim_{x\to0}\frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}=\lim_{x\to0}\frac{(1+\sqrt{x+1})(1-x)}{(\sqrt{x^2+1}+\sqrt{x+1})}=\frac{(1+1)}{(1+1)}=1$$






Alternatively, as $\lim_{x\to0}\frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}$ is of the form $\frac00,$



Applying L'Hospital's Rule we get,

$$\lim_{x\to0}\frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}$$
$$=\lim_{x\to0}\frac{\frac x{\sqrt{x^2+1}}-\frac1{2\sqrt{x+1}}}{-\frac1{2\sqrt{x+1}}}$$
$$=\lim_{x\to0}\left(1-\frac{2x\sqrt{x+1}}{\sqrt{x^2+1}}\right)\text{ as }x+1\ne0$$
$$=1$$


abstract algebra - Why adjoining non-Archimedean element doesn't work as calculus foundation?




Consider the smallest ordered field that contains $\mathbb R$ and does not satisfy the Archimedean property. I assume this is a much simpler construction than ultrafilters and the other big-caliber artillery used in non-standard analysis. Why does this approach fail?


Answer



To perform nontrivial analysis in a nonstandard extension of $\:\mathbb R\:$ requires much more than simply the existence of infinitesimals. One needs some efficient general way to transfer (first-order) properties from $\:\mathbb R\:$ to the nonstandard extension. This is achieved by a powerful transfer principle in NSA. Lacking such, and other essential properties such as saturation, one faces huge obstacles.



One need only examine earlier approaches to see examples of such problems. For example, see the discussion of the pitfalls of Schmieden and Laugwitz's
calculus of infinitesimals in Dauben's biography of Abe Robinson
(lookup "Laugwitz" in the index) and, for much further detail, see D. Spalt: Curt Schmieden’s Non-standard Analysis, 2001. In short, viewed in terms of
ultrapowers $\mathbb R^\mathbb N,\:$ the S&L approach mods out only by a Frechet filter on $\mathbb N\:$ instead of a free ultrafilter. Thus one loses full transfer of first-order properties, e.g. one obtains only a partially ordered ring, with zero-divisors to boot. Without all of the essential properties of $\mathbb R,\:$ and without a general transfer principle, one obtains a much weaker and much more cumbersome theory as compared to Robinson's NSA.




Another point which deserves emphasis is the role played by logic, in particular the concept of formal languages. One of the major problems with early approaches to infinitesimals is that they lacked rigorous model theoretic techniques. For example, without the notion of a (first-order) formal language
it is impossible to rigorously state what properties
of reals transfer to hyperreals. This logical inadequacy is one of the primary sources of contradictions in the earlier approaches.



Abraham Robinson wrote much on these topics. It is said that he knew more about
Leibniz than anyone. His lofty goal was to vindicate Leibniz'
intuition, and to reverse the historical injustices done to
him by many Whiggish historians. See Abby's collected papers
for much more, and see also Dauben's superb biography of Robinson.




For a brief introduction to the ultraproduct approach to NSA see Wm. Hatcher: Calculus is Algebra, AMM 1982, and Van Osdol: Truth with Respect to an Ultrafilter or How to make Intuition Rigorous. For a much more comprehensive introduction to ultraproducts see Paul Eklof. Ultraproducts for Algebraists, 1977.


limits - If $\lim_{n \to \infty} x_n = \liminf_{n \to \infty} x_n = \limsup_{n \to \infty} x_n= -\infty$ does there exist a convergent subsequence of $x_n$?

If $\lim_{n \to \infty} x_n = \liminf_{n \to \infty} x_n = \limsup_{n \to \infty} x_n= -\infty$, does there exist a convergent subsequence of $x_n$? I've learned that the limit of a convergent subsequence is called an accumulation point, that $\limsup$ is the greatest accumulation point, and that $\liminf$ is the smallest accumulation point. Then if $\limsup$ and $\liminf$ are both $-\infty$, the limits of all subsequences of $x_n$ are squeezed between $-\infty$ and $-\infty$, i.e., every subsequence tends to $-\infty$. I'm not really sure if this argument is correct. Could you help me please?

Friday 26 December 2014

probability - How many papers do you expect to hand in before you receive each possible grade at least once?

A particular professor is known for his arbitrary grading policies. Each paper receives a grade from the set {A, A-, B+, B, B-, C+}, with equal probability, independently of other papers. How many papers do you expect to hand in before you receive each possible grade at least once?

lp spaces - Proof of infinity matrix norm

Given that the $l_{\infty}$ matrix norm for $A\in{\Bbb{R}}^{m\times n}$ is defined as $\|A\|_{\infty} =\max_{1 \leq i \leq m}\|a^{i}\|_{1}$ (where $a^{i}$ is the $i^{\text{th}}$ row of matrix $A$),



Show that:
$\|A\|_{\infty} =\max \left\{\|Ax\|_{\infty} : \|x\|_{\infty} \le 1\right\} =\max \left\{\|Ax\|_{\infty} : \|x\|_{\infty} = 1\right\}$



I know that this is a property of subordinate matrix norms but I'm not sure how to go about with proving it.

real analysis - Show $(1+\frac{1}{3}-\frac{1}{5}-\frac{1}{7}+\frac{1}{9}+\frac{1}{11}-\cdots)^2 = 1+\frac{1}{9}+\frac{1}{25}+\frac{1}{49} + \cdots$



Last month I was calculating $\displaystyle \int_0^\infty \frac{1}{1+x^4}\, dx$ when I stumbled on the surprising identity:



$$\sum_{n=0}^\infty (-1)^n\left(\frac{1}{4n+1} +\frac{1}{4n+3}\right) = \frac{\pi}{\sqrt8}$$



and I knew



$$\sum_{n=0}^\infty \frac{1}{(2n+1)^2} = \frac{\pi^2}{8}$$




So if I could find a proof that $$\left(\sum_{n=0}^\infty (-1)^n\left(\frac{1}{4n+1} +\frac{1}{4n+3}\right)\right)^2 = \sum_{n=0}^\infty \frac{1}{(2n+1)^2}$$ then this could be a new proof that $\zeta(2)=\frac{\pi^2}{6}$. I've thought this over for almost a month and I'm no closer to showing this identity.



Note: Article on the multiplication of conditionally convergent series: http://www.jstor.org/stable/2369519


Answer



Let $a_k = (-1)^k \left(\frac{1}{4k+1} + \frac{1}{4k+3}\right)$ and $b_k = \frac{1}{(4k+1)^2} + \frac{1}{(4k+3)^2}$. The goal is to show that: $$ \left(\sum_{i=0}^\infty a_i\right)^2 = \sum_{i=0}^\infty b_i $$
The key observation that I missed on my previous attempt is that: $$ \sum_{i=0}^n a_i = \sum_{i=-n-1}^n \frac{(-1)^i}{4i+1} $$ This transformation allows me to then mimic the proof that was suggested in the comments by @user17762.
\begin{align*} \left(\sum_{i=0}^n a_i\right)^2 - \sum_{i=0}^n b_i
&= \left(\sum_{i=-n-1}^n \frac{(-1)^i}{4i+1}\right)^2 - \sum_{i=-n-1}^n \frac{1}{(4i+1)^2} \\
&= \sum_{\substack{i,j=-n-1 \\ i \neq j}}^n \frac{(-1)^i}{4i+1}\frac{(-1)^j}{4j+1} \\

&= \sum_{\substack{i,j=-n-1 \\ i \neq j}}^n \frac{(-1)^{i+j}}{4j-4i}\left(\frac{1}{4i+1}-\frac{1}{4j+1} \right) \\
&= \sum_{\substack{i,j=-n-1 \\ i \neq j}}^n \frac{(-1)^{i+j}}{2j-2i} \cdot \frac{1}{4i+1} \\
&= \frac{1}{2}\sum_{i=-n-1}^n \frac{(-1)^i}{4i+1} \sum_{\substack{j=-n-1 \\ i \neq j}}^n \frac{(-1)^j}{j-i} \\
&= \frac{1}{2}\sum_{i=-n-1}^n \frac{(-1)^i }{4i+1}c_{i,n} \\
&= \frac{1}{2}\sum_{i=0}^n a_i \,c_{i,n}
\end{align*}
Where the last equality follows from $c_{i,n} = c_{-i-1, n}$. Since $c_{i,n}$ is a partial alternating harmonic sum, it is bounded by its largest entry in the sum: $\left| c_{i,n} \right| \le \frac{1}{n-i+1}$. We also know that $\left|a_i\right| \le \frac{2}{4i+1}$. Apply these two inequalities to get:
\begin{align*}\left| \left(\sum_{i=0}^n a_i\right)^2 - \sum_{i=0}^n b_i \right| &\le \frac{1}{2} \sum_{i=0}^n \frac{2}{4i+1} \cdot \frac{1}{n-i+1} \\ &\le \sum_{i=0}^n \frac{1}{4n+5}\left( \frac{4}{4i+1} + \frac{1}{n-i+1} \right) \\ &\le \frac{1}{4n+5}\left( 5 + \ln(4n+1) +\ln(n+1)\right) \\
& \to 0 ~\text{ as }~ n \to \infty
\end{align*}

This concludes the proof. In fact, with the same idea, you can prove this general family of identities: Fix an integer $m \ge 3$, then:



\begin{align*} & \left( 1 + \frac{1}{m-1} - \frac{1}{m+1} - \frac{1}{2m-1} + \frac{1}{2m+1} + \frac{1}{3m-1} - \cdots \right)^2 \\
=& ~ \left(\sum_{i=-\infty}^\infty \frac{(-1)^i}{im+1}\right)^2 \\
=& ~ \sum_{i=-\infty}^\infty \frac{1}{(im+1)^2} \\
=& ~ 1 + \frac{1}{(m-1)^2} + \frac{1}{(m+1)^2} + \frac{1}{(2m-1)^2} + \frac{1}{(2m+1)^2} + \frac{1}{(3m-1)^2} + \cdots \\
=& ~ \left(\frac{\frac{\pi}{m}}{\sin\frac{\pi}{m}}\right)^2 \end{align*}
The last equality follows from the comment by @Lucian.
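A numerical sanity check of the $m=4$ case (partial sums only, so agreement is to roughly six digits):

```python
import math

N = 10 ** 6
a = sum((-1) ** k * (1 / (4 * k + 1) + 1 / (4 * k + 3)) for k in range(N))
b = sum(1 / (4 * k + 1) ** 2 + 1 / (4 * k + 3) ** 2 for k in range(N))
print(a * a, b, math.pi ** 2 / 8)   # all three nearly equal
```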


limits - Is L'Hopital for $limlimits_{xto0}frac{sin(x)}{x}$ circular?

I was considering using L'Hopital for $\displaystyle\lim\limits_{x\to0}\frac{\sin(x)}{x}$, but I was told that this is circular, because we use this limit to show $\displaystyle\frac{\mathrm d}{\mathrm dx}\sin(x) = \cos(x)$.



Do we have to use this limit to find the derivative of $\sin(x)$, or is there a legitimate counter-argument here?

integration - An integral involving error functions and a Gaussian

Let $d\ge 1$ be an integer and let $\vec{A}:=\left\{ A_i \right\}_{i=1}^d$ be real numbers. We consider a following integral:
\begin{equation}
{\mathfrak I}^{(d)}(\vec{A}):=\int\limits_0^\infty e^{-u^2}\left[ \prod_{i=1}^d \operatorname{erf}(A_i u) \right] du
\end{equation}
By expanding the error functions in Taylor series and then integrating term by term we found the answer for $d=1$ and $d=2$. We have:

\begin{eqnarray}
\sqrt{\pi} {\mathfrak I}^{(d)}(\vec{A}) =
\begin{cases}
\arctan(A_1) & \text{if $d=1$}\\[4pt]
\arctan\left(\frac{A_1 A_2}{\sqrt{1+A_1^2+A_2^2}}\right) & \text{if $d=2$}
\end{cases}
\end{eqnarray}
Now the question is how do we derive the result for arbitrary values of $d$?

Thursday 25 December 2014

sequences and series - How do I find the sum $\sum\limits_{k=1}^\infty{\frac{k}{2^{k+1}}}=1$?




As shown in the title, how do I find the sum of:




$$\sum\limits_{k=1}^\infty{\frac{k}{2^{k+1}}}=1$$


Answer



HINT:



Note that for $|x|<1$, $f(x)=\sum_{k=1}^{\infty}x^{k}=\frac{x}{1-x}$ implies that



$$x^2f'(x) = \sum_{k=1}^{\infty}kx^{k+1}$$



Then, let $x=1/2$
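Carrying the hint to the end: $f'(x)=\frac{1}{(1-x)^2}$, so $x^2f'(x)=\frac{x^2}{(1-x)^2}$, which equals $1$ at $x=\frac12$. A quick partial-sum check:

```python
# Partial sums of k / 2^(k+1) approach 1.
print(sum(k / 2 ** (k + 1) for k in range(1, 60)))
```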



Wednesday 24 December 2014

Is my proof enough to say that it is impossible to write an even number as a sum of an odd number of odd numbers?



I faced this problem once:



Let's say we are given an even integer $n$; now we want to prove that there is no set of odd numbers $S$ such that $S_{1} + S_{2} + \dots + S_{k} = n \text{ and } k \text{ is an odd number}$.



My thinking



We know that $S = \{S_{1}, S_{2}, \dots, S_{k}\}$, and we have that $k\text{ mod } 2 = 1$. If we set each element in the set $S$ to have the value $S_{i} = S_{i} \text{ mod } 2$, for each $1 \leq i \leq k$, then because the set $S$ consists only of odd numbers it will have the form $S = \{1, 1, \dots , 1\}$, i.e., there will be $k$ ones. Because $k$ is odd, the sum of these values is odd, so it is impossible to write the even number $n$ as a sum of an odd number of odd numbers.


Answer




I'd say your proof is correct and sufficient, unless you are a beginner in math. In that case, there are some technically vague parts of the proof you should improve, since they are a little hand-wavy.



For example, the sentence




If we set each element in the set $S$ to have the value $S_i=S_i\mod 2$




isn't really mathematically rigorous. Technically speaking, if you do that, you are left with the set $\{1\}$ and you can't prove anything from that. Now, sure, I know what you meant, but if you are studying math for 2 years or less, then "you know what I meant" just isn't good enough. Rigorous mathematical language is vital in math and you need to learn it.







So, if you are a "beginner" in math, I suggest you rewrite your proof without any vague hand-wavyness. You will probably notice that you will have to use induction to get anywhere, and that's fine. Induction is probably the easiest (and, arguably, the only) way to prove the statement rigorously.


ordinary differential equations - Fourier sine series of $f = cos x$

Let $f:(0,\pi) \to \mathbb{R}$ be defined by $x \mapsto \cos x$.



Show that the Fourier sine series of (odd extension) is given by




$$\sum\limits_{n=2}^\infty \frac{2n(1+(-1)^n)}{\pi(n^2-1)}\sin(nx)$$






So far, because it's an odd extension, I used $\displaystyle b_n =\frac{2}{\pi}\int^\pi_0 \cos x \sin nx \,dx$



$$\begin{align} b_n &= \frac{2}{\pi}\int^\pi_0 \cos x \sin nx \,dx \\
&=\frac{2}{\pi}\int^\pi_0 (\sin x)' \sin nx \,dx \\
&= \frac{2}{\pi}\left([-\sin x \sin nx ]^\pi_0+n\int^\pi_0 \sin x \cos nx \,dx\right) \\
&=\frac{2}{\pi}\left([-\sin x \sin nx]^\pi_0+n\int^\pi_0 -(\cos x)' \cos nx \,dx\right)
\end{align}$$



but now I'm thinking I've gone down the wrong path.

calculus - integration via Euler's formula, the wikipedia article is confusing me



On the wikipedia's article about integration using Euler's formula
they use $\displaystyle\int\mathrm{\sin}^{2}x\cos{4x}\,\mathrm{d}x$ as an example:



$$\displaystyle\int\mathrm{\sin}^{2}x\cos{4x}\,\mathrm{d}x$$
$$\displaystyle\int\mathrm{\bigg(\frac{e^{ix}-e^{-ix}}{2i}\bigg)^2\bigg(\frac{e^{4ix}+e^{-4ix}}{2}\bigg)}\,\mathrm{d}x$$
$$\displaystyle-\frac{1}{8}\int\mathrm{e^{6ix}+e^{-6ix}-2e^{4ix}-2e^{-4ix}+e^{2ix}+e^{-2ix}}\,\mathrm{d}x$$




And then it says that we can either integrate as it is or substitute the integrand with $\cos 6x - 2 \cos 4x + \cos2x$, shouldn't that be $2\cos 6x - 4 \cos 4x + 2\cos2x$? Is wikipedia wrong or what am I not understanding?


Answer



Rewrite the integral as
$$
\begin{align}
&-\frac{1}{8}\int\left(\mathrm{e^{6ix}+e^{-6ix}-2e^{4ix}-2e^{-4ix}+e^{2ix}+e^{-2ix}}\right)\,\mathrm{d}x
\\&=-\int\left(\mathrm{\frac14\cdot\frac{e^{6ix}+e^{-6ix}}{2}-\frac24\cdot\frac{e^{4ix}+e^{-4ix}}{2}+\frac14\cdot\frac{e^{2ix}+e^{-2ix}}{2}}\right)\,\mathrm{d}x\\
&=-\int\left(\frac14\cos6x-\frac12\cos4x+\frac14\cos2x\right)\,\mathrm{d}x\\
&=-\frac14\int\left(\color{blue}{\cos6x-2\cos4x+\cos2x}\right)\,\mathrm{d}x\\

\end{align}
$$
but
$$
\begin{align}&-\int\left(\frac14\cos6x-\frac12\cos4x+\frac14\cos2x\right)\,\mathrm{d}x
\\&=-\frac18\int\left(\color{red}{2\cos6x-4\cos4x+2\cos2x}\right)\,\mathrm{d}x.
\end{align}$$
I think you are right.
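A numerical spot check of the corrected bracket (evaluating both sides of the integrand identity at a few arbitrary points):

```python
import math

# sin(x)^2 * cos(4x) should equal -(1/8)(2cos6x - 4cos4x + 2cos2x),
# i.e. with the -1/8 prefactor the bracket needs the factors of 2.
for x in (0.3, 1.1, 2.7):
    lhs = math.sin(x) ** 2 * math.cos(4 * x)
    rhs = -(2 * math.cos(6 * x) - 4 * math.cos(4 * x) + 2 * math.cos(2 * x)) / 8
    print(lhs, rhs)   # the columns agree
```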


Tuesday 23 December 2014

Show that the sequence is bounded?

Show that the sequence $\left(\frac{3n + \sin^2 n}{6n}\right)_{n \in \mathbb{N}}$ is bounded.



I know that $ 0 \leq \sin^2 n \leq 1$, so $\frac{3n}{6n} = \frac{1}{2} \leq \frac{3n + \sin^2 n}{6n} \leq \frac{3n + 1}{6n}$



So it is easy to see that it is bounded below by $\frac{1}{2}$, but I can't figure out how to show it is bounded above.

Concerning the closure of a set in this topology



It is not hard to see that the set $\mathcal{I} = \{ (a, \infty) \vert a \in \mathbb{R} \} \cup \{\mathbb{R}\} \cup \{\emptyset\}$ is a basis for a topology of semi-infinite intervals.



Given some arbitrary element of this basis (that is neither the empty set nor the whole space), I would like to find its closure. Our set is trivially open, so it is equal to its interior, and thus we simply need the boundary points to see its closure.




The nature of our basis elements makes it clear that the point $a$ would be the one point on the boundary of said element (since all open sets that contain it have nontrivial intersection with both the interior of our element and $ \mathbb{R}$ sans this element). Naturally, I would believe that the closure of my basis element is $ [a, \infty)$. However, on complementation we obtain the set $ (- \infty, a)$, which is not open in our topology since, I claim, it cannot be obtained through an arbitrary union of basis elements.



This would lead me to believe that the true closure of any given basis element is simply the whole space since its complement is trivially open and also contains the boundary point in question. Is this the case? How could I have seen this sooner, if so? Also if so, why does it seem to be the case here that the closure is not the interior with the boundary? Furthermore, would the closure of any open set in this topology also be the whole space, assuming it is the case for our basis elements as I have explained?



These questions are quite basic, but I have no mathematicians nearby to dispel my confusion. Any and all help appreciated.



Edit: Thank you for your help. I understand the problem and concept much better now.


Answer



Notice that the family you've written is not just a basis for a topology, but the topology itself.
With this in mind, one should start by looking at the class of closed subsets, which is $$\{(-\infty,a]\,:\,a\in\Bbb R\}\cup\{\Bbb R,\emptyset\}$$




and then notice that the only one containing a non-empty open set is $\Bbb R$. So, every non-empty open set is dense. Finally, the boundary of $(x,\infty)$ is in fact $(-\infty,x]$.


Monday 22 December 2014

calculus - $\lim\limits_{n\to\infty} \prod\limits_{k=1}^{n} \left( 1 + \tan{\frac{k}{n^2}} \right)$



I want to calculate $$\lim\limits_{n\to\infty} \prod_{k=1}^{n} \left( 1 + \tan{\frac{k}{n^2}} \right) $$



Taking logarithms, it's enough to find
$$\lim\limits_{n\to\infty} \sum_{k=1}^{n} \log\left( 1 + \tan{\frac{k}{n^2}} \right).$$



Since $\lim\limits_{n\to\infty} \tan{\frac{x}{n^2}} = 0$, we can combine the Taylor series near $0$ of $\log(1+x)$ with the taylor series of $\tan{x}$ near $0$ to obtain the limit $e^\frac{1}{2}$.




My question is: is there any nicer way of evaluating this limit?


Answer



Probably not nicer, but still a different way is to use the facts that
$$\lim\limits_{x\rightarrow0}\frac{\tan{x}}{x}=1$$
and, as shown here
$$\lim\limits_{n\rightarrow\infty} \frac{1}{n}\sum\limits_{k=1}^n f\left(\frac{k}{n}\right)= \int\limits_{0}^{1} f(x)dx$$






$$\sum_{k=1}^{n} \log\left( 1 + \tan{\frac{k}{n^2}} \right)=

\sum_{k=1}^{n} \tan{\frac{k}{n^2}} \cdot \log\left( 1 + \tan{\frac{k}{n^2}} \right)^{\frac{1}{\tan{\frac{k}{n^2}} }}=\\
\sum_{k=1}^{n} \frac{k}{n^2}\cdot \color{red}{ \frac{\tan{\frac{k}{n^2}}}{\frac{k}{n^2}} \cdot \log\left( 1 + \tan{\frac{k}{n^2}} \right)^{\frac{1}{\tan{\frac{k}{n^2}} }}}$$

Because the part in red $\rightarrow 1$ when $n\rightarrow\infty$ for any $k=1..n$, using the definition of the limit, $\forall \varepsilon, \exists N(\varepsilon)$ s.t. $\forall n > N(\varepsilon)$
$$(1-\varepsilon)\left(\sum_{k=1}^{n} \frac{k}{n^2}\right)<\sum_{k=1}^{n} \log\left( 1 + \tan{\frac{k}{n^2}} \right)<(1+\varepsilon)\left(\sum_{k=1}^{n} \frac{k}{n^2}\right)$$
leading to
$$\lim\limits_{n\rightarrow\infty}\sum_{k=1}^{n} \log\left( 1 + \tan{\frac{k}{n^2}} \right)=
\lim\limits_{n\rightarrow\infty}\sum_{k=1}^{n} \frac{k}{n^2}$$

But then
$$\lim\limits_{n\rightarrow\infty}\sum_{k=1}^{n} \frac{k}{n^2}=
\lim\limits_{n\rightarrow\infty}\frac{1}{n}\left(\sum_{k=1}^{n} \frac{k}{n}\right)=\int\limits_{0}^{1}x dx =\frac{1}{2}$$


and the result follows.
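A direct numerical check of the product (a small sketch):

```python
import math

for n in (10, 100, 1000, 10000):
    p = 1.0
    for k in range(1, n + 1):
        p *= 1 + math.tan(k / n ** 2)
    print(n, p)
print("sqrt(e) =", math.sqrt(math.e))
```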


real analysis - Solutions to matrix-valued multiplicative Cauchy equation under local boundedness

Let $f : (0, \infty) \to \mathbb{R}^{n \times n}$, $n \in \mathbb{N}$ satisfy the functional equation $f(x + y) = f(x) f(y)$. In general, $f$ need not be measurable (by the usual constructions of non-measurable solutions based on a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$).



Is it true that if $f$ is uniformly bounded on $(0, \infty)$ (or only on some subinterval of $(0, \infty)$) then $f$ is measurable. (It then can also be shown that $f$ is continuous and of the form $f(x) = \exp(A x)$ for some $A \in \mathbb{R}^{n \times n}$.)



Remarks:





  1. For $n=1$ this follows from here. For $n \geq 2$ the above functional equation translates into a system of one-dimensional functional equations $f_{ij}(x+y) = \sum_{k=1}^n f_{ik}(x) f_{kj}(y)$. Are similar arguments applicable?

  2. For "$n=\infty$" the above claim is not true, see e.g. Doob "Topics in the theory of Markoff chains" (1942), p. 2.



Edit: For the case of stochastic solutions ($f(x)$ is a stochastic matrix for each $x > 0$) the above question was solved in the affirmative by W. Doeblin in the 1930s.

exponentiation - Why is $(-1)^x=e^{i\pi x}$



I was recently taught exponentials and I decided to play around with negative bases, which I was told were not allowed. The obvious place to start was negative one and, as expected, the graphing tool did not work. However, after trying Wolfram, it told me it equaled $e^{i\pi x}$. I would understand why it oscillates, like the sine and cosine waves (a negative base to an even power becomes positive, a negative base to an odd power stays negative), but why are they exactly the same?


Answer



Because one logarithm of $-1$ is indeed $i \pi,$ as in $$ e^{i \pi} +1 = 0. $$



Another logarithm is $3 i \pi,$ so a less common assignment for $(-1)^x$ would be $$ e^{3 i \pi x} $$




One application that is not widely known is with the Gelfond-Schneider Theorem, using your expression. Since $-1$ is not $0$ or $1$ and is algebraic (indeed rational and an integer), if $x$ is irrational and algebraic over $\mathbb Q,$
then $$ e^{ i \pi x} = \cos \pi x + i \sin \pi x $$
is transcendental. In turn, that means the real numbers $\cos \pi x$ and $\sin \pi x$ are transcendental.



I used that in an article.



Really.


elementary set theory - How to construct a one-to-one correspondence between $\left[0,1\right]\cup\left[2,3\right]\cup\dots$ and $\left[0,1\right]$




How can I construct a one-to-one correspondence between the set $\left[0,1\right]\cup\left[2,3\right]\cup\left[4,5\right]\cup\cdots$ and the set $\left[0,1\right]$? I know that they have the same cardinality.


Answer



Suppose that you had a bijection $f:[0,1]\to(0,1]$. Then you could decompose $[0,1]$ as



$$\begin{align*}
[0,1]&=\left[0,\frac12\right]\cup\left(\frac34,1\right]\cup\left(\frac58,\frac34\right]\cup\left(\frac9{16},\frac58\right]\cup\dots\\
&=\left[0,\frac12\right]\cup\bigcup_{n\ge 1}\left(\frac{2^n+1}{2^{n+1}},\frac{2^{n-1}+1}{2^n}\right]\;,
\end{align*}$$




map $[0,1]$ to $\left[0,\frac12\right]$ in the obvious way, and for $n\ge 1$ map $[2n,2n+1]$ to $\left(\frac{2^n+1}{2^{n+1}},\frac{2^{n-1}+1}{2^n}\right]$ using straightforward modifications of $f$ for each ‘piece’. I’ll leave that part to you unless you get stuck and ask me to expand; the hard part is finding $f$. Here’s one way:



$$f:[0,1]\to(0,1]:x\mapsto\begin{cases}
\frac12,&\text{if }x=0\\\\
\frac1{2^{n+1}},&\text{if }x=\frac1{2^n}\text{ for some }n\ge 1\\\\
x,&\text{otherwise}\;.
\end{cases}$$



In other words, $f$ is the identity map except on the set $\displaystyle{\{0\}\cup\left\{\frac1{2^n}:n\ge 1\right\}}$, which it shifts one place ‘forward’ like this:




$$0\overset{f}\mapsto\frac12\overset{f}\mapsto\frac14\overset{f}\mapsto\frac18\overset{f}\mapsto\frac1{16}\overset{f}\mapsto\dots\;.$$
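Here is a small sketch of that $f$ in code, using exact rationals; it is the identity except on $\{0\}\cup\left\{\frac1{2^n}:n\ge 1\right\}$:

```python
from fractions import Fraction

def f(x):
    x = Fraction(x)
    if x == 0:
        return Fraction(1, 2)
    # shift 1/2^n (n >= 1) forward to 1/2^(n+1)
    if x.numerator == 1 and x < 1 and x.denominator & (x.denominator - 1) == 0:
        return x / 2
    return x

for x in (0, Fraction(1, 2), Fraction(1, 4), Fraction(1, 3), 1):
    print(x, "->", f(x))
```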


calculus - Find the limit of exponent/factorial sequence











I don't know how to even start it...



$$ \lim_{n\to \infty } \frac{2^n}{n!} = $$


Answer



HINT



Prove that $$0 < \dfrac{2^n}{n!} \leq \dfrac4n$$ for all $n \in \mathbb{Z}^+$ using induction and then use squeeze theorem.



calculus - Nature of infinite series

I was doing some exercises on the nature of infinite series when I came across this intriguing series: $$\sum_{i=1}^{\infty}\frac{(-1)^{i-1}}{\ln(i+1)}.$$ I tried to solve it with d'Alembert's ratio test and came to the conclusion that this series is convergent, as $\lim\limits_{n \to \infty}\left|\frac{u_{n+1}}{u_n}\right|$ comes out to be $\lim\limits_{n \to \infty}\left|\frac{-\ln(n+1)}{\ln(n+2)}\right| = \lim\limits_{n \to \infty}\frac{\ln(n+1)}{\ln(n+2)}$, which is smaller than 1 as $\ln x$ is strictly increasing. So the $-$ sign doesn't really make any difference. But the answer in the textbook is that this series is conditionally convergent, meaning that it wouldn't have been convergent if not for the $-$ sign. Can anyone explain this to me?

sequences and series - Proving the identity $\sum_{k=1}^n {k^3} = \big(\sum_{k=1}^n k\big)^2$ without induction



I recently proved that




$$\sum_{k=1}^n k^3 = \left(\sum_{k=1}^n k \right)^2$$



using mathematical induction. I'm interested if there's an intuitive explanation, or even a combinatorial interpretation of this property. I would also like to see any other proofs.


Answer



Stare at the following image, taken from this MO answer, long enough:



Proof that the sum of the cubes is the square of the sum
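In case the image does not load: it encodes the telescoping identity
$$k^3 = \left(\frac{k(k+1)}{2}\right)^2 - \left(\frac{(k-1)k}{2}\right)^2,$$
which can be checked directly, since the right-hand side equals $\frac{k^2}{4}\bigl((k+1)^2-(k-1)^2\bigr) = k^3$. Summing over $k=1,\dots,n$ telescopes to $\left(\frac{n(n+1)}{2}\right)^2=\left(\sum_{k=1}^n k\right)^2$, a non-inductive proof.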


abstract algebra - $(M\cap E)(M\cap F)= M$ for linearly disjoint fields $E$ and $F$?



Let $E$ and $F$ be linearly disjoint fields over a base field $K$ (all contained in an algebraic closure $\overline K$). Suppose there is an extension $M/K$ contained in $EF$.
Is it true that $(M\cap E)(M\cap F)= M$?
I was not able to show it. Does anybody have an idea?


Answer



It is not true, see mihaild's comment.


Sunday 21 December 2014

calculus - Show that the function $g(x) = x^2 \sin(\frac{1}{x})$ (with $g(0) = 0$) is everywhere differentiable and that $g'(0) = 0$


Show that the function $g(x) = x^2 \sin\left(\frac{1}{x}\right)$ (with $g(0) = 0$) is everywhere differentiable and that $g'(0) = 0$.


algebra precalculus - Is $|x| \cdot |x| = |x^2| = x^2$?




Is $|x| \cdot |x| = |x^2| = x^2$ ?




I'm very sorry if this question is a duplicate but I couldn't find anything about it (most likely because it's wrong..). But I'm not sure if this is correct so I need to ask you.



$$|x| \cdot |x| = |x^2| \text{ should be alright}$$




Now my confusion starts. $x^2$ should be nonnegative for any value. Would that mean we can ignore the absolute value sign? On the other hand we could have $|-x^2|$. But that would be a different thing than $|x^2|$; they are not equal to each other...? Please help me; if I get this little thing wrong the entire task will be wrong. I have some thinking error here..



If the same question already exists (I couldn't find one), please link me to it and I will delete this one immediately.


Answer



You are thinking about it too hard. You could just look at the definition of the absolute value
$$
|x|:=\begin{cases}
x,&x\geq 0\\
-x,&x<0
\end{cases}

$$
and check on your own that $|x|^2=|x^2|=x^2$.






In general, we have $|a|\cdot|b|=|ab|$, which is true also for complex numbers; but the identity $|x^2|=x^2$ is not necessarily true in the complex world.


calculus - How to prove the closed form of the integral $\int \frac{dx}{\prod_{r=0}^n (x+r)}$




I want to derive a closed formula for the integral $$I_n= \int \frac {dx}{\prod_{r=0}^n (x+r)}$$



On writing out first few terms we get



For $n=0$, $$I_0=\ln \vert x\vert+C$$
For $n=1$, $$I_1=\ln \vert x\vert-\ln \vert x+1\vert+C$$
For $n=2$ $$I_2=\frac {1}{2!}\left(\sum_{r=0}^2 (-1)^r\binom {2}{r} \ln \vert x+r\vert\right)+C$$
For $n=3$ $$I_3=\frac {1}{3!}\left(\sum_{r=0}^3 (-1)^r\binom {3}{r} \ln \vert x+r\vert\right)+C$$




Hence for generalized $n$ we have $$I_n=\frac {1}{n!}\left(\sum_{r=0}^n (-1)^r\binom {n}{r} (\ln \vert x+r\vert)\right)+C$$



Now this is just an observation, but I want to prove that it is correct. I have tried a lot of methods but nothing worked. Partial fractions would have been most useful, but they seem to lead to a tedious task which is nearly impossible by hand. Also integration by parts won't help, nor will any trig substitution. So any ideas are welcome. And yes, this is not a homework question; it's a question which I just saw in an integral challenge paper.


Answer



$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}

\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\int{\dd x \over \prod_{r = 0}^{n}\pars{x + r}} & =

\int{\dd x \over \Gamma\pars{x + n + 1}/\Gamma\pars{x}} =
{1 \over n!}
\int{\Gamma\pars{x}\Gamma\pars{n + 1}\over \Gamma\pars{x + n + 1}}\,\dd x
\\[5mm] & =
{1 \over n!}\int\int_{0}^{1}t^{x - 1}\pars{1 - t}^{n}\,\dd t\,\dd x =
{1 \over n!}\int_{0}^{1}\pars{1 - t}^{n}\int t^{x - 1}\,\dd x\,\dd t
\\[5mm] & =
{1 \over n!}\int_{0}^{1}\pars{1 - t}^{n}
\bracks{{t^{x - 1} \over \ln\pars{t}} + \,\mrm{A}\pars{t}}\,\dd t
\\[5mm] & =

{1 \over n!}\int_{0}^{1}{t^{x - 1}\pars{1 - t}^{n} \over \ln\pars{t}}\,\dd t +
{1 \over n!}\int_{0}^{1}\pars{1 - t}^{n}\,\mrm{A}\pars{t}\,\dd t
\end{align}




$\ds{\mrm{A}\pars{t}}$ is an integration "constant" (it doesn't depend on $\ds{x}$ but, in general, it depends on $\ds{t}$).
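For what it's worth, the partial-fraction route the asker set aside is shorter than it looks (a sketch): the coefficient of $\frac{1}{x+r}$ in the expansion of $\frac{1}{\prod_{s=0}^{n}(x+s)}$ is
$$\prod_{\substack{s=0\\ s\ne r}}^{n}\frac{1}{s-r}=\frac{1}{(-1)^{r}\,r!\,(n-r)!}=\frac{(-1)^{r}}{n!}\binom{n}{r},$$
and integrating term by term gives exactly the conjectured closed form $I_n=\frac{1}{n!}\sum_{r=0}^{n}(-1)^{r}\binom{n}{r}\ln\vert x+r\vert+C$.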



real analysis - T/F: a smooth function that grows faster than any linear function grows faster than $x^{1+epsilon}$




Prove or find a counterexample to the claim that a smooth function that grows faster than any linear function grows faster than $x^{1+\epsilon}$ for some $\epsilon>0$.



My attempt: I understand that the first part of the problem claims $\lim_{x\rightarrow \infty}\frac{g(x)}{kx} = \infty, \forall k>0$. We want to show, then, that $\exists \epsilon >0$ and constant $l>0$ such that $\lim_{x\rightarrow \infty}\frac{g(x)}{lx^{1+\epsilon}} = \infty$.



I've tried using the definition of limits, but I get stuck trying to bound the function $\frac{1}{x^\epsilon}$. Also, I've tried using L'Hopital's rule to no avail. Any ideas?



Any help is appreciated!


Answer



Hint: It is false. Find a counterexample.




Followup hint: (place your mouse on the hidden text to show it)




The function $f\colon(0,\infty)\to\mathbb{R}$ defined by $f(x) = x\ln x$ is such a counterexample.




Followup followup hint: (pretty much the solution, with some details to fill in; place your mouse on the hidden text to show it)




For any $a>0$, $\frac{x\ln x}{a x} = \frac{1}{a}\ln x \xrightarrow[x\to\infty]{} \infty$. However, for any fixed $\epsilon > 0$, $$\frac{x\ln x}{x^{1+\epsilon}} = \frac{\ln x}{x^\epsilon}=\frac{1}{\epsilon}\frac{\ln(x^\epsilon)}{x^\epsilon} = \frac{1}{\epsilon}\frac{\ln t}{t}$$ for $t=x^\epsilon \xrightarrow[x\to\infty]{}\infty$.




linear algebra - Determinant of a general Matrix

What are the steps to solve for the determinant of this matrix?

I thought about turning it into a lower triangular matrix, but I can't figure out how.



Thanks in advance.



Let $A = $
$\begin{pmatrix}1 & 1& 1 & \cdots & 1 \\ 1 & 1+a_2 & 1 & \cdots & 1 \\ 1 & 1& 1 + a_3 &\cdots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1& 1 & 1 &\cdots& 1 + a_n \end{pmatrix}$
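One way to carry out the triangularization idea (a sketch, reading the entries as $1+a_2,\dots,1+a_n$): subtract the first row from every other row. Row operations of this kind preserve the determinant, and the result is upper triangular:
$$\det A=\begin{vmatrix}1 & 1 & \cdots & 1\\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n\end{vmatrix}=a_2a_3\cdots a_n.$$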

Saturday 20 December 2014

What is the probability distribution for the number of N-sided die rolls needed to get M unique results?

Suppose you have a fair $N$-sided die. You decide to roll it until $M$ unique values have been produced (i.e. you re-roll all previously rolled values). How many times will you roll the die? (Given $2 \le M \le N$.)



I know that for the special case of $M=2$ it's simply a matter of how many times you have to re-roll your attempt for the second value, so the distribution is: $$P^N_2(X=u) = \left(\frac{1}{N}\right)^{u-2}\left(1-\frac{1}{N}\right) = \frac{N-1}{N^{u-1}}$$




And that for any $M$ the probability of the lowest possible outcome $X=M$ (i.e. no re-rolls): $$P^N_M(X=M) = \prod_{i=0}^{M-1}\left(1-\frac{i}{N}\right) = \frac{1}{N^M}\prod_{i=0}^{M-1}\left(N-i\right) = \frac{N!}{N^M(N-M)!}$$



The final clue I've got is that the probability distributions for subsequent values of $M$ can be defined using the probability distribution of the previous, like so:



$$P^N_{M}(X=u) = \sum_{i=1}^{u-M+1}\left(P^N_{M-1}(X=u-i)\left(\frac{M-1}{N}\right)^{i-1}\left(1-\frac{M-1}{N}\right)\right)$$



With that I can determine the probability distribution for any value of $M$ I want, for instance $M=3$:



$$P^N_3(X=u) = \sum_{i=1}^{u-3+1}\left(P^N_2(X=u-i)\left(\frac{3-1}{N}\right)^{i-1}\left(1-\frac{3-1}{N}\right)\right)$$




$$= \sum_{i=1}^{u-2}\left(\left(\frac{N-1}{N^{u-i-1}}\right)\left(\frac{2}{N}\right)^{i-1}\left(1-\frac{2}{N}\right)\right)$$



$$= \sum_{i=1}^{u-2}\left(\left(\frac{N-1}{N^{u-1}}\right)N^i\left(\frac{N}{2}\right)\left(\frac{2}{N}\right)^i\left(\frac{N-2}{N}\right)\right)$$



$$= \frac{(N-1)(N-2)}{2 \cdot N^{u-1}}\sum_{i=1}^{u-2}\left(2^i\right)$$



$$= \frac{(N-1)(N-2)}{N^{u-1}}\sum_{i=0}^{u-3}\left(2^i\right)$$



$$= \left(\frac{(N-1)(N-2)}{N^{u-1}}\right)\left(\frac{1-2^{u-2}}{1-2}\right)$$
$$= \frac{(N-1)(N-2)(2^{u-2}-1)}{N^{u-1}}$$




However, I have no idea how to turn this into a generic formula that will allow me to calculate the probability for any $N$, $M$, and $u$ without going through the process of figuring out the PMF of every value of $M$ leading up to the one I want.
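In the meantime, the recurrence itself is easy to evaluate mechanically. A minimal memoized sketch (names my own), checked against the closed forms above for $N=6$:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def pmf(N, M, u):
        # P(X = u): probability that exactly u rolls of a fair N-sided die
        # are needed to see M distinct values, via the recurrence above.
        if M == 1:
            return 1.0 if u == 1 else 0.0
        if u < M:
            return 0.0
        p_new = 1 - (M - 1) / N  # chance that the next roll shows a new value
        return sum(pmf(N, M - 1, u - i) * ((M - 1) / N) ** (i - 1) * p_new
                   for i in range(1, u - M + 2))

    # sanity checks for N = 6 against the closed forms derived above
    assert abs(pmf(6, 3, 3) - 5 * 4 / 6**2) < 1e-12               # N!/(N^M (N-M)!)
    assert abs(pmf(6, 3, 5) - 5 * 4 * (2**3 - 1) / 6**4) < 1e-12  # M = 3 formula, u = 5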

linear algebra - Prove that the matrix A is diagonalizable if $-4bc < (a-d)^2$




I'm trying to prove that matrix A is diagonalizable
if $-4bc < (a - d)^2$ and is not diagonalizable if $-4bc > (a - d)^2$



$A = \begin{bmatrix}
a & b\\
c & d
\end{bmatrix}$



I'm not sure how to start. I was thinking of taking the inverse of $A$, but I'm not sure if that's the right approach.


Answer




I am assuming that you want to work in the field of real numbers. In this case you have the following



1) If the matrix is diagonalizable over the reals, then both of its eigenvalues must be real



2) If the matrix has eigenvalues that are not repeated, then it can be diagonalized, with the following additional comment: if the eigenvalues are not real, the diagonalization requires complex numbers.



3) The only real matrices with repeated eigenvalues that are also diagonalizable are multiples of the identity matrix, i.e. diagonal matrices with equal values on the diagonal.



To OP:
In one of your comments you have

$$
\lambda^2-(a+b)\lambda+ad+bc$$
It should be
$$
\lambda^2-(a+d)\lambda+ad-bc$$
So for real roots that are distinct, you need
$$
(a+d)^2 -4(ad - bc) = a^2 + 2 a d + d^2 - 4 a d + 4bc = (a-d)^2 + 4 bc \gt 0$$
or
$$ (a-d)^2 \gt -4bc$$




Note $a=d, b=1, c=0$ is not diagonalizable. So you cannot change the $\gt$ to $\ge$.
Also, $a=d, b=c=0$ is diagonalizable. So
$$
\text{If }(a-d)^2 = -4bc, \text{ the matrix is diagonalizable if and only if $b=0$ and $c=0$}$$



** Added in reply to OP **



Let $\Delta$ be the discriminant




There are 3 cases




  1. $\Delta < 0$. Both eigenvalues are complex, so the matrix cannot be diagonalized over the reals.

  2. $\Delta >0$. Both eigenvalues are real and distinct (different). The matrix can be diagonalized.

  3. $\Delta=0$. Both eigenvalues are equal. Then there are two cases: the matrix is diagonal, i.e. $b=c=0$, or the matrix is not diagonal, i.e. $b \ne 0$ or $c \ne 0$. In the first case the matrix is diagonalizable (it is already diagonal!). In the second case the matrix cannot be diagonalized.


Proof involving Induction

Prove that for every integer n ≥ 1, we have



$$
\sum_{i=1}^ni^3=\left(\frac{n(n+1)}2\right)^2
$$



Solve using Mathematical Induction, include the Inductive Step



The base case is that both the left and right sides equal $1$ when $n=1$,

and the inductive hypothesis is $1^3+2^3+\dots +k^3=\left(\frac{k(k+1)}2\right)^2$.
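With the hypothesis in that form, the inductive step is a single computation:
$$\sum_{i=1}^{k+1}i^3=\left(\frac{k(k+1)}2\right)^2+(k+1)^3=\frac{(k+1)^2\left(k^2+4k+4\right)}4=\left(\frac{(k+1)(k+2)}2\right)^2.$$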

discrete mathematics - How to prove that $4^n>n^2$ using induction...



today I've encountered a question like the following;
$$\text{Prove that }4^n>n^2\text{ using induction.}$$
My Attempts:



I have realised that this works for $P(1)$; my next attempt was $P(n)\implies P(n+1)$....(1)




I tried multiplying both sides by $4$, which gave $4^{n+1}>4n^2$; I also tried to use $4>1^2$, which gave me $4^{n+1}>n^2\cdot1^2$.....(2)



After that pointless attempt I added $2n+1$ to both sides, but I still couldn't figure out what the $2n$ becomes on the left side...(3)



What are your suggestions?



With the real question being the first one, is there any other way to prove this numerically? (Perhaps in a more entertaining way?:))


Answer



A numerical check is not a proof.




Induction works in this way




  1. It is true for $n=1$

  2. Suppose that it is true for $n>1$; prove it for $n+1$

  3. It is true for any $n\in\mathbb{N}$.



proof





  1. actually $4^1 = 4 > 1 = 1^2$, and also $4^2 = 16 > 4 = 2^2$ (we check $n=2$ directly because the step below needs $n>1$)

  2. (I.H.) if $4^n>n^2$ for some $n>1$, consider that $4^{n+1}=4\cdot 4^n$. Now use the Inductive Hypothesis (I.H.):



$4\cdot 4^n > 4\cdot n^2 =2^2 \cdot n^2=(2n)^2>(n+1)^2$, since $2n>n+1$ for $n>1$



proved, so





  1. For any $n\in \mathbb{N}$ we have $4^n>n^2$



QED



To tell the truth, $4^n > n^{1000}$ for $n>6312$.




Indeed induction can start from any $n$, but this is another story


Friday 19 December 2014

A question about a detail in Bell's "Primer of Infinitesimal Analysis"



On p.35,36 of J.L. Bell's A Primer of Infinitesimal Analysis (2nd ed.), Bell uses the book's basic methods to derive the formula for the area of a circle based on the circumference. Where $s(x)$ is a function for the length of a certain portion of the circumference of a circle, and $C(x)$ is a formula for a portion of the area of a circle, and $\varepsilon$ is a nilpotent infinitesimal quantity, he derives the formula $$C'(x)=\frac{1}{2}rs'(x)$$ and goes on to say, "Since $C(0)=s(0)=0$, the Constancy Principle now yields $$C(x)=\frac{1}{2}rs(x)."$$



The "Constancy Principle" in question is that if $f$ is a real valued function with $f'(x)$ constantly $0$, then $f$ is constant; alternatively, $f$ is constant if $f(x+\varepsilon)=f(x)$ for all $x$ and infinitesimal $\varepsilon$.



My question is, how does $C(0)=s(0)=0$ allow use of the constancy principle, and why does this follow from it? The steps before and after this make total sense to me, but I don't see what's going on here, nor in similar steps in later proofs.


Answer



Let $f(x)=C(x)-\frac{1}{2}rs(x)$. Then $f'(x)=C'(x)-\frac{1}{2}rs'(x)=0$ for all $x$. By the Constancy Principle, $f$ is constant, so $f(x)=f(0)=C(0)-\frac{1}{2}rs(0)=0$ for all $x$. That is, $C(x)=\frac{1}{2}rs(x)$ for all $x$.



calculus - Solve $\int_{-1}^{\infty}\left(\frac{x^4}{1+x^6}\right)^2 dx$



Please help me to evaluate the integral:

$\displaystyle \int_{-1}^{\infty }\left( \frac{x^4}{1+x^6} \right)^2 dx$



Thanks.


Answer



Notice that:
$$x^8 = (x^6 + 1)x^2 - x^2$$
So your integral splits as:
$$\int\frac{x^2}{1+x^6}dx - \int\frac{x^2}{(1+x^6)^2}dx$$
Now substitute $u = x^3, du = 3 x^2 dx$:
$$\int\frac{1}{3(1+u^2)}du - \int\frac{1}{3(1+u^2)^2}du$$

The first integral is a multiple of $\arctan(u)$, and the second can be handled by splitting it again into two parts:
$$\int\frac{1}{3(1+u^2)^2}du = \int\frac{1}{3(1+u^2)}du - \int\frac{u^2}{3(1+u^2)^2}du$$
I think you get the idea..
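For completeness, the remaining piece yields to integration by parts (a sketch, which differentiation confirms):
$$\int\frac{u^2}{(1+u^2)^2}\,du=-\frac{u}{2(1+u^2)}+\frac12\arctan u+C.$$
Back-substituting $u=x^3$ expresses the whole antiderivative through $\arctan(x^3)$ and $\frac{x^3}{1+x^6}$, and the definite integral follows by evaluating between $x=-1$ and $x\to\infty$.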


Functional equation with three variables

I have a functional equation in three variables: $f(x,y,z)$ is a real-valued function of three variables, defined only for $y \neq z$. This function satisfies




  1. $f(x,x,y)=0$

  2. $f(x,y,x)=1$

  3. $f(x,y,z)f(z,y,r)=f(x,y,r)$



What is the general solution for $f$?

linear algebra - Extracting vector containing the elements of the main diagonal of a matrix





Is there any mathematical operation that would extract the elements of the main diagonal as a vector? i.e. multiply it by certain vectors or something like that. I'm using this in the context of linear systems.



In the specific case I'm looking at I have a relationship between the elements of three vectors as follows:



$ \bf{a} = \begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ a_{4} \end{bmatrix} $ , $ \bf{b} = \begin{bmatrix} b_{1} \\ b_{2} \\ b_{3} \\ b_{4} \end{bmatrix} $ , and $ \bf{c} = \begin{bmatrix} c_{1} \\ c_{2} \\ c_{3} \\ c_{4} \end{bmatrix} $




I also know that: $c_{i} = a_{i}b_{i} $ for $i \in [1, 4]$



Now I want to express this relationship as a vector equation. I understand that $\bf{a} \bf{b}^\top$ would give a square matrix with the elements of $\bf{c}$ on its main diagonal, but is there anyway to extract them as a vector?



EDIT: Let me clarify a bit. If I multiply $\bf{a}$ by $\bf{b}^\top$ I get the following matrix:



$\bf{a} \bf{b}^\top = \begin{bmatrix} \bf{a_1b_1} && a_1b_2 && a_1b_3 && a_1b_4 \\ a_2b_1 && \bf{a_2b_2} && a_2b_3 && a_2b_4 \\ a_3b_1 && a_3b_2 && \bf{a_3b_3} && a_3b_4 \\ a_4b_1 && a_4b_2 && a_4b_3 && \bf{a_4b_4} \end{bmatrix} $



The elements which have been made bold are the ones I'm interested in extracting as a vector. This vector would be $\bf{c}$.




If I multiply this by the all-ones vector, as some of the answers have suggested, I would get:



$\bf{a} \bf{b}^\top \bf{1}= \begin{bmatrix} \bf{a_1b_1} && a_1b_2 && a_1b_3 && a_1b_4 \\ a_2b_1 && \bf{a_2b_2} && a_2b_3 && a_2b_4 \\ a_3b_1 && a_3b_2 && \bf{a_3b_3} && a_3b_4 \\ a_4b_1 && a_4b_2 && a_4b_3 && \bf{a_4b_4} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} a_1b_1 + a_1b_2 + a_1b_3 + a_1b_4 \\ a_2b_1 + a_2b_2 + a_2b_3 + a_2b_4 \\ a_3b_1 + a_3b_2 + a_3b_3 + a_3b_4 \\ a_4b_1 + a_4b_2 + a_4b_3 + a_4b_4 \end{bmatrix}$



Which is not the vector I'm looking for (it isn't equal to $\bf{c}$).



EDIT 2: Multiplying by the $\bf{1}$ vector would obviously work if all off diagonal elements become zero. So if anyone knows of a way to do that without modifying the elements of the main diagonal that would also answer my question.



EDIT 3: The other question pointed out in the comments area is essentially the same and I have received similar answers but I was hoping for a simpler solution. I haven't marked it as duplicate to allow people to contribute in the future.




I was hoping for a solution that would be linear in $\bf{b}$ which I would substitute in place of $\bf{c}$ into the equation I'm trying to solve. In that case $\bf{b}$ would be my only unknown and I would be able to get an algebraic solution.


Answer



Well, it’s not pretty, but this will do it:



$$\sum_{i=1}^4\mathbf a^T\mathbf e_i\mathbf b^T\mathbf e_i\mathbf e_i$$ where the $\mathbf e_i$ are the standard basis vectors. Each term of the sum extracts the $i$th components of $\mathbf a$ and $\mathbf b$ and multiplies them together. You can also think of it as multiplying the projection of $\mathbf b$ onto $\mathbf e_i$ by $a_i$, or vice versa.
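Numerically there is no need to form $\mathbf a\mathbf b^\top$ at all: the desired vector is just the elementwise (Hadamard) product $c_i=a_ib_i$. A small NumPy sketch of the relationship (illustrative values of my own):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0, 4.0])
    b = np.array([5.0, 6.0, 7.0, 8.0])

    c = np.diag(np.outer(a, b))   # main diagonal of the outer product a b^T
    assert np.allclose(c, a * b)  # equals the elementwise (Hadamard) product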


reference request - Advice on self study of category theory



I'm very interested in starting to study category theory, with the aim of using it in algebraic geometry. I have already taken courses (besides the basics) on commutative algebra and general topology, with a soft introduction to algebraic topology. But I don't know any references in category theory. Can someone suggest a book or lecture notes that give an introduction to this topic?



Thank you so much in advance!


Answer



I found Category Theory by Steve Awodey to be a really good introduction to the subject. It doesn't lead directly into algebraic geometry (or, indeed, in any particular direction), but gives a readable introduction to the subject with plenty of examples and problems. When you've worked through this you could move on to a more advanced text or one which is more specifically geared towards algebraic geometry.




On the side, I'll mention Ravi Vakil's epic notes on algebraic geometry which kick off with category theory before moving on.


Thursday 18 December 2014

real analysis - Example of a non trivial endomorphism of the space of continuous functions?




Can someone give me an example of a nontrivial endomorphism of the space of continuous functions $\mathbb R \to \mathbb R$, besides derivation and multiplication by a scalar?



I know other endomorphisms exist, but I don't want some pathological thing; I would like a concrete, explicit example besides the ones I already know.


Answer



Is the example $$\Phi(f)(x)=f(2x)$$ good for you? This is rescaling of the argument.



This example has a quite natural extension. Let $h:\Bbb R\to\Bbb R$ be a continuous bijection. Define $$\Psi(f)(x)=f\bigl(h(x)\bigr).$$


combinatorics - Within the number of different ways to order the letters of the word LOVEPONIXO in a line, in how many ways are there no two O's one after another?



Within the number of different ways to order the letters of the word LOVEPONIXO in a line, in how many ways are there no two O's one after another?

I was thinking: there are $\frac{10!}{3!}$ different ways to order the letters. Then I want to subtract the number of ways that include two O's one after another: $2!\cdot\frac{9!}{3!}$.
And then I want to subtract the number of ways that include three O's one after another (some of them are included in the last calculation): $3!\cdot\frac{8!}{3!}$.

Then I get a final answer: $\frac{10!}{3!}-2!\cdot\frac{9!}{3!}-3!\cdot\frac{8!}{3!} = 8\cdot\frac{9!}{3!}-8!$

I have a strong feeling that I did something wrong though... Any directions?


Answer



A simpler method of doing the problem is to set aside the three O's for the moment. There are $7!$ ways of arranging the other seven letters since they are distinct. For any given arrangement, we now have eight gaps, six between successive letters and the two ends, in which to place the three O's. We can place the O's in these gaps in $\binom{8}{3}$ ways. Hence, the number of permutations in which no two O's are consecutive is
$$7! \cdot \binom{8}{3}$$




The method you attempted can be made to work. As you observed, there are $$\frac{10!}{3!}$$ distinguishable arrangements of the letters. From these, we must exclude those in which at least two O's are consecutive.



Suppose two O's are consecutive. We treat them as one letter, say $\cal{O}$. Thus, we have nine objects to arrange. Since they are distinct, they can be arranged in $9!$ ways.



However, when we subtract those arrangements in which at least two O's are consecutive, we remove those arrangements in which three O's are consecutive twice. Therefore, we must add them back in. If we treat the three O's as one object, we have eight distinct objects to arrange, which can be done in $8!$ ways.



Thus, by the Inclusion-Exclusion Principle, the number of distinct permutations of LOVEPONIXO in which no two O's are consecutive is
$$\frac{10!}{3!} - 9! + 8!$$
As you can verify by direct calculation, this agrees with the answer given above.
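Both counts are also small enough to confirm by brute force; a quick check using only the Python standard library (it takes a few seconds and some memory):

    from itertools import permutations
    from math import comb, factorial

    word = "LOVEPONIXO"
    distinct = set(permutations(word))            # 10!/3! = 604800 arrangements
    good = sum("OO" not in "".join(p) for p in distinct)

    assert len(distinct) == factorial(10) // factorial(3)
    assert good == factorial(7) * comb(8, 3)                           # 282240
    assert good == factorial(10) // factorial(3) - factorial(9) + factorial(8)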


real analysis - Construct an explicit bijection $f:[0,1] to (0,1]$, where $[0,1]$ is the closed interval in $mathbb R$ and $(0,1]$ is half open.

The problem:



Construct an explicit bijection $f:[0,1] \to (0,1]$, where $[0,1]$ is the closed interval in $\mathbb R$ and $(0,1]$ is half open.




My Thoughts:



I imagine that I am to use the fact that there is an injection $\mathbb N \to [0,1]$ whose image contains $\{0\}$, and consider the fact that a set $X$ is infinite iff it contains a proper subset $S \subset X$ with $\lvert S \rvert = \lvert X \rvert$ (because we did something similar in class). I also have part of a proof that we did in class that I believe is supposed to help with this problem; it states the following: start with an injection $g: \mathbb N \to X$ and then define a set $S=F(X)$ where $F$ is an injective (but NOT surjective) function $X \to X$ with $F(x) = x$ if $x \notin \text{image}(g)$ and $F(g(k)) = g(2k)$ if $x=g(k) \in \text{image}(g)$. Honestly, I'm having a lot of trouble even following this proof, so I could be wrong. Anyway, any help here would be appreciated. I feel really lost on this one. Thanks!

number theory - Show that $3p^2=q^2$ implies $3|p$ and $3|q$



This is a problem from "Introduction to Mathematics - Algebra and Number Systems" (specifically, exercise set 2 #9), which is one of my math texts. Please note that this isn't homework, but I would still appreciate hints rather than a complete answer.



The problem reads as follows:




If $3p^2 = q^2$, where $p,q \in \mathbb{Z}$, show that $3$ is a common divisor of $p$ and $q$.





I am able to show that 3 divides q, simply by rearranging for $p^2$ and showing that



$$p^2 \in \mathbb{Z} \Rightarrow q^2/3 \in \mathbb{Z} \Rightarrow 3|q$$



However, I'm not sure how to show that 3 divides p.







Edit:



Moron left a comment below in which I was prompted to apply the solution to this question as a proof of $\sqrt{3}$'s irrationality. Here's what I came up with...



[incorrect solution...]



...is this correct?



Edit:




The correct solution is provided in the comments below by Bill Dubuque.


Answer



Write $q$ as $3r$ and see what happens.
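Spelling the hint out (for readers who want the last step): with $q=3r$, the equation becomes $3p^2=9r^2$, so $p^2=3r^2$. Thus $3\mid p^2$, and since $3$ is prime, $3\mid p$.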


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...