Tuesday 28 February 2017

elementary number theory - Compute $\gcd(a+b, 2a+3b)$ if $\gcd(a,b) = 1$

A question from a problem set asks to compute the value of $\gcd(a+b, 2a+3b)$ if $\gcd(a,b) = 1$, or, if that isn't possible, to prove why.



Here's how I ended up doing it:



$\gcd(a,b) = 1$ implies that for some integers $x$ and $y$, $ax+by = 1$.



Let $d = \gcd(a+b, 2a+3b)$. This implies:




$\implies d \text{ divides } 2(2a+3b) - 4(a+b) = 2b\cdots (1)$



$\implies d \text{ divides } 6(a+b) - 2(2a+3b) = 2a\cdots (2)$



Statement $(1)$ implies that $d$ divides $2by$ for any integer $y$



Statement $(2)$ implies that $d$ divides $2ax$ for any integer $x$



Taking $x$ and $y$ as above, this implies that $d$ divides $2ax+2by = 2(ax+by) = 2$, which implies:




$\gcd(a+b, 2a+3b) =\text{ either 1 or 2}$



Thus the result is not uniquely determined, as it takes $2$ possible values.
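As a quick editorial sanity check of this conclusion (not part of the original question), one can compute the gcd over many coprime pairs in Python:

    from math import gcd

    values = set()
    for a in range(1, 200):
        for b in range(1, 200):
            if gcd(a, b) == 1:
                values.add(gcd(a + b, 2*a + 3*b))
    print(values)  # prints {1}: empirically the value 2 never occurs

(Indeed $2a+3b = 2(a+b)+b$, so $d$ divides $b$, hence also $a$, hence $d \mid \gcd(a,b) = 1$.)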



Are my assumptions and logic correct? If not, where are the errors?



Thank you!

calculus - Determine using the comparison test whether the series $\sum_{n=1}^\infty \frac{1}{\sqrt[3]{n^2 + 1}}$ diverges




How can I determine whether or not the following series converges using the comparison test?



$$\sum_{n=1}^\infty \frac{1}{(n^2 + 1)^\frac{1}{3}}$$



As $n$ goes to infinity, the terms are roughly equal to those of $\sum_{n=1}^\infty \frac{1}{(n^2)^\frac{1}{3}} = \sum_{n=1}^\infty \frac{1}{n^\frac{2}{3}}$.



I believe that $\frac{1}{(n^2 + 1)^\frac{1}{3}} < \frac{1}{n^\frac{2}{3}}$ for all $n$.



Using the $p$ test, it is clear that the latter sum diverges. However I cannot say that the former also diverges as the inequality sign does not satisfy the conditions of the comparison test for divergence.


Answer




Note that $n^2+1 \le 2 n^2$ for $n \ge 1$, so you have
${1 \over \sqrt[3]{n^2+1}} \ge {1 \over \sqrt[3]{2}} {1 \over \sqrt[3]{n^2}} $. Since $\sum_n \frac{1}{n^{2/3}}$ diverges by the $p$ test, the comparison test now shows that the original series diverges as well.


real analysis - Continuous function with continuous one-sided derivative



The simple example of the absolute value function $x \mapsto |x|$ on $\mathbb{R}$ shows that it is possible for a continuous function to possess both the right-hand and the left-hand derivatives everywhere and still not be differentiable on $\mathbb{R}$.



I was wondering if it is possible to assume something about one of the one-sided derivatives to obtain differentiability.




The obvious came to my mind:




Is it true that if a continuous function $f \in C(\mathbb{R})$ has left-hand-side derivative $f_{-}^{'}$ that is continuous on $\mathbb{R}$, then the function $f$ is differentiable?



Answer



Yes.



The keystone is:




Lemma. Let $f\colon [a,b]\to\mathbb R$ be continuous and assume that $f'_+(x)$ exists and is $>0$ for all $x\in [a,b)$. Then $f$ is strictly increasing.



Assume otherwise, i.e. $f(a)\ge f(b)$.
We recursively define a map $g\colon \operatorname{Ord}\to [a,b)$ such that $g$ and $f\circ g$ are strictly increasing. Since the class $\operatorname{Ord}$ of ordinals is a proper class and $g$ is injective, we arrive at a contradiction, thus showing the claim.




  • Let $g(0)=a$.

  • For a successor $\alpha=\beta+1$ assume we have already defined $g(\beta)$. For sufficiently small positive $h$ we have that $g(\beta)+h<b$ and $f(g(\beta)+h)>f(g(\beta))$ (because $f'_+(g(\beta))>0$). Pick one such $h$ and let $g(\alpha)=g(\beta)+h$.

  • If $\alpha$ is a limit ordinal, assume $g(\beta)$ is defined for all $\beta<\alpha$. Let $x=\sup_{\beta<\alpha} g(\beta)$. A priori only $x\le b$, but we need $x<b$: by continuity, $f(x)=\sup_{\beta<\alpha}f(g(\beta))$, and since $f\circ g$ is strictly increasing, $f(x)>f(g(0))=f(a)\ge f(b)$. Therefore $x<b$, and we let $g(\alpha)=x$.



$\square$



Corollary 1. (something like a one-sided Rolle theorem) Let $f\colon [a,b]\to\mathbb R$ be continuous with $f(a)=f(b)$. Assume $f'_+$ exists and is continuous in $[a,b)$. Then $f'_+(x)=0$ for some $x\in[a,b)$.



Proof. Assume otherwise. Then, since $f'_+$ is continuous and never zero, either $f'_+(x)>0$ for all $x$ or $f'_+(x)<0$ for all $x$. In the first case the lemma applies and gives us a contradiction to $f(a)=f(b)$; in the other case, we consider $-f$ instead of $f$. $\square$



Corollary 2. (something like a one-sided mean value theorem) Let $f\colon [a,b]\to\mathbb R$ be continuous. Assume $f'_+$ exists and is continuous in $[a,b)$. Then $f'_+(x)=\frac{f(b)-f(a)}{b-a}$ for some $x\in[a,b)$.




Proof. Apply the previous corollary to $f(x)-\frac{f(b)-f(a)}{b-a}x$. $\square$



By symmetry, we have



Corollary 3. Let $f\colon [a,b]\to\mathbb R$ be continuous. Assume $f'_-$ exists and is continuous in $(a,b]$. Then $f'_-(x)=\frac{f(b)-f(a)}{b-a}$ for some $x\in(a,b]$. $\square$



Theorem. Let $f\in C(\mathbb R)$ be a function with $f'_-$ continuous on $\mathbb R$.
Then $f\in C^1(\mathbb R)$.



Proof.

Consider arbitrary $a\in \mathbb R$.
Let $\epsilon>0$ be given.
Then by continuity of $f'_-$, for some $\delta>0$ we have $|f'_-(x)-f'_-(a)|<\epsilon$ for all $x\in(a,a+\delta)$.
Thus for $0<h<\delta$, Corollary 3 applied to $[a,a+h]$ yields a point $x\in(a,a+h]$ with $f'_-(x)=\frac{f(a+h)-f(a)}{h}$, hence $\left|\frac{f(a+h)-f(a)}{h}-f'_-(a)\right|<\epsilon$. It follows that $f'_+(a)$ exists and equals $f'_-(a)$, so $f$ is differentiable at $a$ with derivative $f'=f'_-$ continuous. $\square$

real analysis - a differentiable function but its derivative is discontinuous everywhere

Is there a function $f$: $\mathbb R\to \mathbb R$ such that $f$ is differentiable everywhere but $f'$ is discontinuous everywhere?

trigonometry - Multiply complex numbers to show trigonometric addition formulas



Use the rules for multiplication of two complex numbers written in the form $r(\cos\theta +i\sin\theta)$ to show that $\sin(\theta_1 +\theta_2)=\sin\theta_1\cos\theta_2 +\sin\theta_2\cos\theta_1$ and $\cos(\theta_1 +\theta_2)=\cos\theta_1\cos\theta_2 −\sin\theta_1\sin\theta_2$.




I have no idea how to do this. It's part of a complex number worksheet but I can't find a way to do it. I thought about using Euler's formula but got nowhere. Thank you in advance.

Monday 27 February 2017

calculus - Why doesn't using the approximation $\sin x\approx x$ near $0$ work for computing this limit?



The limit is
$$\lim_{x\to0}\left(\frac{1}{\sin^2x}-\frac{1}{x^2}\right)$$
which I'm aware can be rearranged to obtain the indeterminate $\dfrac{0}{0}$, but in an attempt to avoid L'Hopital's rule (just for fun) I tried using the fact that $\sin x\approx x$ near $x=0$. However, the actual limit is $\dfrac{1}{3}$, not $0$.



In this similar limit, the approximation reasoning works out.


Answer



If we take one more term in the Taylor expansion:




\begin{align}
\sin x&\approx x-\frac{x^3}6+\cdots\\
\sin^2 x&\approx x^2-2x\frac{x^3}6+\cdots\\
\frac 1{\sin^2 x}&\approx\frac 1{x^2-x^4/3}\\
&=\frac 1{x^2}\cdot\frac 1{1-x^2/3}\\
\lim_{x\to 0}\left[\frac 1{\sin^2 x}-\frac 1{x^2}\right]&=\lim_{x\to 0}\left[\frac 1{x^2}\left(\frac 1{1-x^2/3}-1\right)\right]\\
&=\lim_{x\to 0}\left[\frac 1{x^2}\cdot\frac{1-1+x^2/3}{1-x^2/3}\right]\\
&=\lim_{x\to 0}\frac 1{3-x^2}\\
&=\frac 1 3

\end{align}






To see where the first-order expansion went wrong, it's necessary to keep track of where the error term goes:



\begin{align}
\sin x&= x+\text{O}(x^3)\\
\sin^2 x&=x^2+2x\text{O}(x^3)+\text{O}(x^3)^2\\
&=x^2+\text{O}(x^4)+\text{O}(x^6)\\

&=x^2+\text{O}(x^4)\\
\frac 1{\sin^2 x}&=\frac 1{x^2+\text{O}(x^4)}\\
&=\frac 1{x^2}\cdot\frac 1{1+\text{O}(x^2)}\\
\frac 1{\sin^2 x}-\frac 1{x^2}&=\frac 1{x^2}\left[\frac 1{1+\text{O}(x^2)}-1\right]\\
&=\frac 1{x^2}\cdot\frac{1-1+\text{O}(x^2)}{1+\text{O}(x^2)}\\
&=\frac{\text{O}(x^2)}{x^2}\cdot\frac 1{1+\text{O}(x^2)}\\
&=\text{O}(1)
\end{align}



Thus the $\sin x\approx x$ approximation is not accurate enough to estimate even the constant term of the expression in the limit. (Note that it does allow us to say that there are no $\text{O}(x^{-1})$ or bigger terms, so the limit probably won't diverge.)
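A quick editorial numeric check of the value $\frac13$ (a Python sketch, not part of the original answer):

    import math

    def g(x):
        return 1 / math.sin(x)**2 - 1 / x**2

    for x in (0.5, 0.1, 0.01):
        print(x, g(x))   # 0.3507..., 0.3339..., 0.3333... -> 1/3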



galois theory - Find the elements of the extension field using primitive polynomial over $GF(4)$



Let $p(z) = z^2 + z + 2$ be a primitive polynomial. I want to construct the elements of the extension field $GF(4^2)= GF(16).$




Since $p(z)$ is a primitive polynomial, it should generate the elements of the extension field.



Let $\alpha$ be a zero of $p(z)$; then:




  • $\alpha^2 + \alpha + 2 = 0 \rightarrow \alpha^2 = -\alpha-2 \rightarrow \alpha^2 = 3\alpha + 2$.



Using $\alpha$ :





  • $\alpha^1 = \alpha$

  • $\alpha^2 = 3\alpha+2$ and according to the book, $\alpha^2 = \alpha+2$ but $-1 \mod 4 = 3$.



Since the base is $4$, how can it be that the correct value is $\alpha^2 = \alpha + 2$?



I have mostly solved problems where the base is a prime number, and I have used the same procedure here.


Answer




You seem confused about what $GF(4)$ is: it's not $\mathbb{Z}/(4)$ (which is not even a field). You can represent $GF(4)$ as an extension of $GF(2)=\mathbb{Z}/(2)$ by an element $\beta$ such that $\beta^2=\beta+1$. You can identify $GF(4)$ with the set $\{0,1,2,3\}$ by mapping $\beta$ to $2$ and $\beta+1$ to $3$, which appears to be what the book you refer to has in mind, but this does not mean you are working mod $4$ (the addition and multiplication operations are not ordinary mod $4$ addition and multiplication on $\{0,1,2,3\}$).



In particular, $1+1=0$ in $GF(4)$ and so $\alpha+2$ and $-\alpha-2$ are the same thing, and so you have $\alpha^2=\alpha+2$.
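To make this concrete, here is a minimal editorial Python sketch of $GF(4)$ arithmetic, using the identification $0,1,\beta,\beta+1 \mapsto 0,1,2,3$ from the answer (elements are 2-bit polynomials over $GF(2)$, reduced by $\beta^2=\beta+1$):

    def gf4_add(a, b):
        return a ^ b                # addition is bitwise XOR: 1 + 1 = 0

    def gf4_mul(a, b):
        p = 0
        if b & 1: p ^= a            # carry-less multiply of the bit-polynomials
        if b & 2: p ^= a << 1
        if p & 4: p ^= 0b111        # reduce: beta^2 = beta + 1
        return p

    print(gf4_add(2, 2))            # 0: so -2 = 2 here, and -alpha - 2 = alpha + 2
    print(gf4_mul(2, 2))            # 3: beta * beta = beta + 1, not mod-4 arithmetic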


algebra precalculus - How to solve for $\theta$ in terms of $a,b,c$ in this expression?



(image from the original post: the cam profile)



I am trying to design a cam profile like in the above image. There are 2 equations I have come up with that define variable "c". The problem, however, is that variables a, b & c are known values and it is angle $\theta$ that I need to know. The 2 arcs on either end of the straight line are equal. The radius of the arcs and the angle $\theta$ change when any of the variables are changed.



The 2 equations I have so far:



$c = \frac{2a}{\sin \theta}(1-\cos \theta)+\frac{b\sin \theta}{\cos \theta} $




$c = 2 \left[ \frac{a}{\sin \theta} - \sqrt{ \left( \frac{a}{\sin \theta} \right)^2 - a^2}\, \right] + \frac {b\sin \theta}{\cos \theta} $



I can't for the life of me manage to rearrange the equation to make $\theta$ the subject!



If anyone is an algebra whiz, any help would be much appreciated!



Cheers.



EDIT




Ok, so after a bit of geometric shenanigans, I have come up with some different equations to help solve this, but am still struggling to rearrange it. (However, I had to replace $\theta$ with $x$ as I couldn't get $\theta$ to work with the program I drew this with!)



(images from the original post: the redrawn problem and a single quadrant)



By drawing a chord for the arc, we can show that the angle of the chord is half that of the slope. What I really need to find is the length of either $e$ or $d$, where $d + e = \frac{c}{2}$



Using the half angle formula I get:




$\tan x = \frac{2 \tan \frac{x}{2} }{1 - \tan^2 \frac{x}{2}}$



Which can be written as:



$\frac{c-2e}{b} = \frac{\frac{2e}{a}}{1-\frac{e^2}{a^2}} $



Simplifying to:



$\frac{c-2e}{b} = \frac{2ae}{a^2-e^2}$




My problem now is that I can't simplify it to get $e$ on its own!



If anyone has any ideas, that would be awesome!


Answer



Considering the first equation
$$c = \frac{2a}{\sin (\theta)}(1-\cos (\theta))+\frac{b\,\sin (\theta)}{\cos (\theta)}$$
using the tangent half-angle substitution, we end with
$$c=\frac{2 t \left(-a t^2+a+b\right)}{1-t^2}$$ that is to say
$$2 a t^3-c t^2-2 t (a+b)+c=0$$ that you can solve using Cardano method.




Probably easier would be Newton method for solving the cubic equation. Starting using $t_0=0$, we should have $t_1=\frac{c}{2 (a+b)}$ and
$$t_2=\frac{c^3 (b-a)+4 c (a+b)^3}{8 (a+b)^4-2 c^2 (a-2 b) (a+b)}$$
$$t_{n+1}=\frac{4 a t_n^3-c t_n^2-c}{2 \left(3 a t_n^2-c t_n-(a+b)\right)}$$
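A minimal editorial Python sketch of this Newton iteration (the check value $c=2.84530$ for $a=1$, $b=4$ is taken from the table below):

    import math

    def solve_theta(a, b, c, tol=1e-12):
        # Newton's method on g(t) = 2a t^3 - c t^2 - 2(a+b) t + c, with t = tan(theta/2)
        t = c / (2 * (a + b))                    # the answer's first Newton iterate t_1
        for _ in range(50):
            g  = 2*a*t**3 - c*t**2 - 2*(a + b)*t + c
            dg = 6*a*t**2 - 2*c*t - 2*(a + b)
            t_next = t - g / dg
            if abs(t_next - t) < tol:
                break
            t = t_next
        return 2 * math.atan(t)

    print(math.degrees(solve_theta(1, 4, 2.84530)))   # ~30.00 degrees, matching the table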



Similarly, if $\theta$ is small, we could expand the first equation as Taylor series
$$c= (a+b)\theta+\frac{a+4 b}{12} \theta ^3 +\frac{a+16
b}{120} \theta ^5 +\frac{17 (a+64 b)}{20160}\theta ^7+O\left(\theta ^9\right)$$
and use series reversion to get
$$\theta=\frac{c}{a+b}-\frac{ (a+4 b)}{12 (a+b)^4}c^3+\frac{ \left(a^2+2 a b+16
b^2\right)}{80 (a+b)^7}c^5+O\left(c^7\right)$$




Edit



Still assuming small values of $\theta$, we could build at $\theta=0$ the $[2,2]$ Padé approximant and get for the whole
$$f(\theta) = \frac{2a}{\sin (\theta)}(1-\cos (\theta))+\frac{b\,\sin (\theta)}{\cos (\theta)}-c$$ the approximation
$$f(\theta)\simeq\frac{-c+ (a+b)\theta+\frac{c (a+4 b)}{12 (a+b)}\theta ^2}{1-\frac{(a+4 b)}{12 (a+b)} \theta ^2 }$$ Solving the quadratic
$$\theta\simeq \frac{2 \left(\sqrt{3 c^2 (a+4 b) (a+b)+9 (a+b)^4}-3 (a+b)^2\right)}{c (a+4 b)}\tag 1$$ which seems interesting.



Let us try it for $a=1$ and $b=4$. Give $\theta$ a value to compute $c$ and recompute the estimate $\theta_*$ from $(1)$. The table below reproduces results (in degrees).
$$\left(
\begin{array}{ccc}

\theta & c & \theta_* \\
5 & 0.43728 & 5.00001 \\
10 & 0.88029 & 10.0003 \\
15 & 1.33510 & 15.0020 \\
20 & 1.80853 & 20.0082 \\
25 & 2.30862 & 25.0249 \\
30 & 2.84530 & 30.0617 \\
35 & 3.43143 & 35.1324 \\
40 & 4.08434 & 40.2567 \\
45 & 4.82843 & 45.4605 \\

50 & 5.69963 & 50.7782 \\
55 & 6.75373 & 56.2542 \\
60 & 8.08290 & 61.9466
\end{array}
\right)$$



Update



After comments, it is quite clear that we can do better knowing that the area of interest is around $\theta=\frac \pi 4$. To stay "simple", writing the $[1,1]$ Padé approximant, we should get
$$\theta\simeq \frac \pi 4+ \frac{\alpha +\beta c}{\gamma+\delta c}$$ where

$$\alpha=2 \left(\left(6 \sqrt{2}-8\right) a^2+\sqrt{2} a b+b^2\right)$$
$$\beta=2 \left(\sqrt{2}-2\right) a-2 b$$
$$\gamma=2 \left(\sqrt{2}-2\right) a^2+3 \left(5 \sqrt{2}-8\right) a b-2 b^2$$
$$\delta=\left(4-3 \sqrt{2}\right) a-2 b$$ Applied to the case
$$a= \pi 160\frac{12}{360}=\frac{16 \pi }{3}\qquad b= \pi 160\frac{50}{360}=\frac{200 \pi }{9}\qquad c=80$$ this would give
$$\theta\simeq \frac \pi 4 +\frac{90 \left(6 \sqrt{2}-37\right)+\left(337+366 \sqrt{2}\right) \pi }{\left(1161
\sqrt{2}-2497\right) \pi -90 \left(13+9 \sqrt{2}\right)}\approx 0.761710$$
which converted to degrees would give $43.6427$ while the exact solution would be $\theta=0.7617146$ corresponding to $43.6430$. Not too bad.



Keeping $a=\frac{16 \pi }{3}$ and $b= \frac{200 \pi }{9}$ and varying $c$ over a quite large range, here are some results.
$$\left(

\begin{array}{ccc}
c & \theta_{approx} & \theta_{exact} \\
60 & 35.1540 & 35.2569 \\
65 & 37.4779 & 37.5244 \\
70 & 39.6591 & 39.6759 \\
75 & 41.7103 & 41.7142 \\
80 & 43.6427 & 43.6430 \\
85 & 45.4666 & 45.4665 \\
90 & 47.1906 & 47.1894 \\
95 & 48.8228 & 48.8165 \\

100 & 50.3704 & 50.3528
\end{array}
\right)$$



New update



We could still do much better at the price of a quadratic equation. Around a given angle $\theta_0$, develop the rhs of the original equation as a $[2,2]$ Padé approximant and solve for $\theta$. I shall not give here the formulae but just the results for the last worked case $(\theta_0=\frac \pi 4, a=\frac{16 \pi }{3}, b= \frac{200 \pi }{9})$
$$\left(
\begin{array}{ccc}
c & \theta_{approx} & \theta_{exact} \\

60 & 35.256535 & 35.256856 \\
65 & 37.524331 & 37.524418 \\
70 & 39.675901 & 39.675917 \\
75 & 41.714232 & 41.714233 \\
80 & 43.643029 & 43.643029 \\
85 & 45.466540 & 45.466540 \\
90 & 47.189400 & 47.189400 \\
95 & 48.816481 & 48.816478 \\
100 & 50.352781 & 50.352763
\end{array}

\right)$$


calculus - Evaluate limit of function through its series expansion



The MacLaurin series expansion of the function



$$f(x) = \frac{1}{1 - x}$$



is




$$f(x) = 1 + x + x^2 + x^3 + x^4 + \ldots = \sum_{i = 0}^{+\infty} x^i, \ (x \neq 0)$$



so all the powers of $x$ taken with a positive sign.



We have



$$\lim_{x \to 1^-} \frac{1}{1 - x} = + \infty$$



while




$$\lim_{x \to 1^+} \frac{1}{1 - x} = - \infty$$



The first limit can be easily predicted from the series expansion: when $x$ is slightly less than $1$, all the terms add up; the final result of their sum should be $+\infty$. The series expansion becomes in fact



$$f(x)_{x = 1} = \left( \sum_{i = 0}^{+\infty} x^i \right)_{x = 1} = \lim_{i \to +\infty} i \cdot 1 = +\infty$$



But this is true for both $x \to 1^+$ and $x \to 1^-$ and it shouldn't!



1) How can the limit for $x \to 1^+$ be correctly predicted from the series expansion?




The above series is centered at $x = 0$; $x = 1$ is quite far away from that point; but with more terms included, the series should provide the exact function values over a wider range, so even at $x = 1$. But the result, even with more terms, is the same: the series limit is always $+\infty$.



2) Is this a correct perspective, or the problem should be dealt with in another way?


Answer



The issue here is that the series expansion isn't valid for $x \geq 1$. Remember that any power series has a radius of convergence $R$; that is, $R$ is the value such that
$$
f(x) = \sum_{n=0}^\infty a_nx^n
$$
converges for $|x| < R$. In the case of the geometric series that you state, we have that $R = 1$. So attempting to evaluate the series for $x > 1$ doesn't make sense.




The point is that the equality
$$
\frac{1}{1-x} = \sum_{n=0}^\infty x^n
$$
isn't always true. It only works as long as $|x| < 1$, and so for any other values of $x$, the given statement is false.



One should note that while you get the "right" answer for $x = 1$, you should still be careful about that. In this case they agree, but it isn't always true that they will (e.g. evaluating both sides at $x = -1$ yields something false).
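A small editorial illustration in Python: inside the radius of convergence the partial sums approach $\frac 1{1-x}$; outside, they do not.

    for x in (0.5, -0.5, 1.5):
        partial = sum(x**n for n in range(60))
        print(x, partial, 1 / (1 - x))
    # x = 0.5  -> 2.000...   vs  2.0      (agree)
    # x = -0.5 -> 0.666...   vs  0.666... (agree)
    # x = 1.5  -> ~7.4e10    vs -2.0     (series diverges; the identity fails)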


calculus - Graph of continuous function from $[0,1]$ to $[0,1]$.




I know that the graph of a continuous function on $[0,1]$ is a null set (has Lebesgue measure zero). My question is the following, and seems related to that fact. Let $f:[0,1]\to[0,1]$ be a continuous and strictly increasing function. Then for any $n\in\mathbb N$, we can cover the graph of $f$ by $n$ rectangles, with sides parallel to the x- and y-axes, such that each rectangle has area $=\frac1{n^2}$. (The rectangles can overlap.)



How to prove this? Any idea?


Answer



First, to prevent the notation from exploding, extend $f$ to an increasing function $f:[0,\infty)\to[0,\infty)$ which tends to infinity at infinity.



Since $f$ is increasing, we will use the following observation.

Note: If $0\le a<b$, then the part of the graph of $f$ over $[a,b]$ is contained in the rectangle $[a,b]\times[f(a),f(b)]$, which has area $(b-a)(f(b)-f(a))$.




Now let $x_0=0$. Since $f$ is increasing and continuous we can recursively choose $x_1,x_2,\dots$ so that $x_{j+1}>x_j$ and $$(x_{j+1}-x_j)(f(x_{j+1})-f(x_j))=\frac1{n^2}.$$



If we can show $x_n\ge1$ we're done; then we've covered the graph of $f$ by $n$ rectangles of area $1/n^2$. Since $f([0,1])\subset[0,1]$ it's enough to show that $x_n\ge1$ or $f(x_n)\ge1$ (because if $x_n<1$ then also $f(x_n)< 1$, which is to say $f(x_n)\ge1$ implies $x_n\ge1$).



But Cauchy-Schwarz shows that $$1=\sum_{j=0}^{n-1}(x_{j+1}-x_j)^{1/2}(f(x_{j+1})-f(x_j))^{1/2}\le\left(\sum(x_{j+1}-x_j)\sum(f(x_{j+1})-f(x_j))\right)^{1/2}=\left(x_n(f(x_n)-f(0))\right)^{1/2};$$hence either $x_n\ge1$ or $f(x_n)-f(0)\ge 1$.


elementary set theory - Reference Request: Cardinal Division



In this link, Division of cardinals, someone asks a question about cardinal division and references a Wikipedia page about it. The Wikipedia page does not give a reference to their statement, but I'd really like to know one. Does anyone know specifically (preferably book and page number) where I can find this and a proof?


Answer




This follows from the fact that cardinal multiplication is not very interesting for infinite cardinals. Namely, if $\kappa$ and $\mu$ are infinite, $\kappa \cdot \mu = \max(\kappa,\mu)$. Thus, if we're given $\lambda$ and $\kappa$ we may always solve the equation in variable $\mu$
$$ \kappa \cdot \mu = \lambda $$ if and only if $\kappa \leq \lambda$. Indeed, if $\kappa \leq \lambda$, then
$$\kappa \cdot \lambda = \max(\kappa,\lambda) = \lambda.$$ On the other hand, if $\kappa > \lambda$
$$\kappa \cdot \mu = \max(\kappa,\mu) \geq \kappa > \lambda.$$


real analysis - How do I evaluate this without using Taylor expansion: $\lim_{x \to \infty}x^2\log(\frac {x+1}{x})-x$?


How do I evaluate this without using Taylor expansion?



$$\lim_{x \to \infty}x^2\log\left(\frac {x+1}{x}\right)-x$$




Note: I used Taylor expansion at $z=0$ and I have got $\frac{-1}{2}$



Thank you for any help

Sunday 26 February 2017

calculus - Find $\lim\limits_{n \rightarrow \infty} \int_{0}^{1} \left(1+ \frac{x}{n}\right)^n dx$


Find the limit of



$$ \lim\limits_{n \rightarrow \infty} \int_{0}^{1} \left(1+ \frac{x}{n}\right)^n dx$$




Let $$u= 1 +\frac{x}{n} \implies du =\frac{1}{n} dx \implies n \cdot du = dx$$
at $x=0$, $u=1$, and at $x=1$, $u=1+\frac{1}{n}$, so the limits of integration change from $1$ to $1+\frac{1}{n}$



Back to the integral




$$ \lim\limits_{n \rightarrow \infty} \left( n \cdot \int_{1}^{1+\frac{1}{n}} u^n du \right)= \lim\limits_{n \rightarrow \infty} \left( n \cdot \left[ \frac{nu^{n+1}}{n+1} \right]_1^{1+\frac{1}{n}} \right) = \lim\limits_{n \rightarrow \infty} \left(\frac{n^2}{n+1} \left[ u^{n+1} \right]_1^{1+\frac{1}{n}} \right) $$



$$\implies\lim\limits_{n \rightarrow \infty} \left(\frac{n^2}{n+1} \left[ \left(1+\frac{1}{n} \right)^{n+1}-1 \right] \right)=\infty$$



Is my finding correct? Is the procedure of taking the limit before completing the integration correct?



Much appreciated
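Editorial note, since the post asks whether the finding is correct: a direct numeric check (midpoint rule) suggests the limit is finite. Because $\left(1+\frac xn\right)^n \to e^x$ uniformly on $[0,1]$, one expects $\int_0^1 e^x\,dx = e-1 \approx 1.71828$.

    import math

    def I(n, steps=100000):
        h = 1.0 / steps
        return h * sum((1 + (i + 0.5) * h / n) ** n for i in range(steps))

    for n in (1, 10, 100, 1000):
        print(n, I(n))       # 1.5, 1.6847, 1.7147, 1.7179: approaching e - 1
    print(math.e - 1)        # 1.71828...

This suggests rechecking the substitution step: $n\cdot\int u^n\,du$ contributes a single factor of $n$, not two.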

Is it possible to solve an equation with both power(?) and exponential terms for $x$?



If I have an equation of the form $Y=A+x^B+C^x$ is it possible to solve for $x$, where $A$, $B$, and $C$ are all rational numbers?



More specifically, is it possible to solve $y=12 + 2x + x^{1.92} + 2^{0.425(x-12)}$ and, if so, how would I do it?


Answer



There is no nice formula that expresses $x$ as a function of $y$. For any particular numerical value of $y$ you can use software to find a value of $x$. For example, if you ask Wolfram Alpha to find $x$ when $y=100$



solve  12+2x+x^1.92+2^(0.425(x−12))=100



you find out that $x$ is



9.0984743836320913466


Since your function is increasing, it would be straightforward to write a python program (or a program in any other language) to find values by bisection, or to build a table of values as in a comment. You could even build a table of values in a spreadsheet.
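For instance, a minimal editorial Python sketch of the bisection idea (the target $y=100$ and the root $\approx 9.0984743836$ are from the Wolfram Alpha computation above):

    def y_of_x(x):
        return 12 + 2*x + x**1.92 + 2**(0.425 * (x - 12))

    def solve_x(y_target, lo=0.0, hi=100.0, tol=1e-12):
        # y_of_x is increasing, so plain bisection converges
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if y_of_x(mid) < y_target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    print(solve_x(100))   # 9.0984743836...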


real analysis - Finding the derivative of $f(x)=\int_1^{\infty}\frac{e^{-xy}}{y^2}dy,\:\:\:x\in(0,\infty)$




Let

$$f(x)=\int_1^{\infty}\frac{e^{-xy}}{y^2}dy,\:\:\:x\in(0,\infty)$$
Show that $f(x)$ is differentiable on $(0,\infty)$, find the formula for $f'(x)$? Is $f(x)$ twice differentiable?




I'm thinking to define a sequence as follow
$$f_n(x)=\int_1^n\frac{e^{-xy}}{y^2}dy.$$



To show $f_n(x)$ is differentiable, I'm thinking to show the following limit exists,



$$\lim_{h\to 0}\int_1^n\frac{e^{-(x+h)y}-e^{-xy}}{y^2h}\,dy.$$




To be able to pass the limit inside the integral, we can apply the Lebesgue dominated convergence theorem. So I want to see if I can apply it. Since $\frac{e^{-(x+h)y}-e^{-xy}}{y^2h}$ is bounded by $\frac{e^{-uy}}{y}$, (where $x\leq u\leq x+h$) which is integrable on $[1,\infty).$



Hence $f'_n(x)=\int_1^n\frac{d}{dx}(\frac{e^{-xy}}{y^2})dy=\int_1^n-\frac{e^{-xy}}{y}dy. $



Now, $\lim_{n\to\infty}\int_1^n\frac{-e^{-xy}}{y}dy=\int_{1}^{\infty}\frac{-e^{-xy}}{y}dy.$



However I see a problem here, since in fact, we have
$$f'(x)=\lim_{h\to 0}\lim_{n\to{\infty}}\int_1^n\frac{e^{-(x+h)y}-e^{-xy}}{y^2h}\,dy.$$




But I'm not sure if I'm allowed to interchange these two limits. I appreciate any hint or alternative proof.


Answer



Let's take two derivatives under the integral first, and we will talk about justifying it after the fact.



Consider $$f(x) = \int_1^\infty \frac{e^{-xy}}{y^2} dy.$$



Then two derivatives with respect to $x$ are easy to take here:



$$f^{\prime\prime}(x) = \int_1^\infty e^{-xy} dy = -\left.\frac{e^{-xy}}{x} \right|_{1}^\infty = \frac{e^{-x}}{x}.$$







Note that the denominator gets completely cancelled by the chain rule. Hence, (provided we can justify the derivatives) we have $$f^{\prime}(x) = \int \frac{e^{-x}}{x} dx = e^{-x}\ln(x) + \int_1^x \ln(\tau)e^{-\tau}d\tau + C$$ for some $C \in \mathbb{R}$.



Finally, we can see that as $x \to \infty$ we have $f'(x) \to 0$, which forces $C = -\int_1^\infty \ln(\tau)e^{-\tau}\,d\tau$. We conclude that $f'(x) = e^{-x}\ln(x) - \int_x^\infty \ln(\tau)e^{-\tau}\,d\tau$.






To justify passing the derivative through the integral, we can appeal to the measure theory version of the Leibniz integral rule.




What we need is a function that bounds $g(x,y) = \frac{d}{dx} \frac{e^{-xy}}{y^2} = -\frac{e^{-xy}}{y}$ independent of $x$ that is also integrable. For any $\delta > 0$, we have $$|g(x,y)| \le \frac{e^{-\delta y}}{y}$$ for all $x \in [\delta, \infty)$. Therefore, given $x > \delta$, we have $$\frac{d}{dx} \int_1^\infty \frac{e^{-xy}}{y^2} dy = \int_1^\infty \frac{d}{dx}\left( \frac{e^{-xy}}{y^2} \right) dy.$$ Since, $\delta$ was chosen to be an arbitrary positive number, we may conclude that this formula holds for all $x > 0$. A similar argument can be performed for the second derivative as well.
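An editorial numeric sanity check of the resulting formula for $f'$ (truncating the improper integrals at $y=60$, where the integrands are negligible):

    import math

    def trap(g, a, b, n=20000):
        # composite trapezoid rule on [a, b]
        h = (b - a) / n
        return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

    x = 2.0
    lhs = -trap(lambda y: math.exp(-x * y) / y, 1.0, 60.0)  # f'(x) = -int_1^oo e^{-xy}/y dy
    rhs = math.exp(-x) * math.log(x) - trap(lambda t: math.log(t) * math.exp(-t), x, 60.0)
    print(lhs, rhs)   # both ~ -0.0489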


Prove that a set of matrices is a linear space

Prove that the set of matrices
$$v:=\left\{ \begin{pmatrix} 2x-y+z & x-2y-2z \\ x+y-z & 3x+y+2z \end{pmatrix} \middle|\, x,y,z \in R\right\}$$



is a linear space over $\mathbb{R}$, and find its basis.




As far as I know, for the set to be a linear space it needs to be closed under vector addition and under scalar multiplication; am I right?
But I'm still having a bit of trouble structuring the proof.



Hints, suggestions?



Thanks.
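A quick editorial check in Python that the three coefficient matrices (of $x$, $y$, $z$) are linearly independent, so they form a basis:

    import numpy as np

    # each row is one generator matrix, flattened row-wise
    M = np.array([[2, 1, 1, 3],      # x-part: [[2, 1], [1, 3]]
                  [-1, -2, 1, 1],    # y-part: [[-1, -2], [1, 1]]
                  [1, -2, -1, 2]])   # z-part: [[1, -2], [-1, 2]]
    print(np.linalg.matrix_rank(M))  # 3: the generators are independent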

combinatorics - Induction proof of $\sum_{k=1}^{n} \binom n k = 2^n -1 $




Prove by induction: $$\sum_{k=1}^{n} \binom n k = 2^n -1 $$ for all $n\in \mathbb{N}$.





Today I wrote a calculus exam, where I was given this problem.



I have the feeling that I will get $0$ points for my solution, because I did this:



Base Case: $n=1$
$$\sum_{k=1}^{1} \binom 1 1 = 1 = 2^1 -1 .$$



Induction Hypothesis: for all $n \in \mathbb{N}$:
$$\sum_{k=1}^{n} \binom n k = 2^n -1 $$



Induction Step: $n \rightarrow n+1$
$$\sum_{k=1}^{n+1} \binom {n+1} {k} = \sum_{k=1}^{n} \binom {n+1} {k} + \binom{n+1}{n+1} = 2^{n+1} -1$$




Please show me my mistake because next time is my last chance in this class.


Answer



Your first mistake is that you stated the induction hypothesis incorrectly: it should be simply that



$$\sum_{k=1}^n\binom{n}k=2^n-1\;.\tag{1}$$



What you gave as induction hypothesis is actually the theorem that you’re trying to prove!



Now, assuming $(1)$ you want to prove that $$\sum_{k=1}^{n+1}\binom{n+1}k=2^{n+1}-1\;.$$




This is an occasion when it’s not a good idea to split off the last term. Instead, use Pascal’s triangle identity for binomial coefficients:



$$\begin{align*}
\sum_{k=1}^{n+1}\binom{n+1}k&=\sum_{k=1}^{n+1}\left(\binom{n}k+\binom{n}{k-1}\right)\\\\
&=\sum_{k=1}^{n+1}\binom{n}k+\sum_{k=1}^{n+1}\binom{n}{k-1}\\\\
&\overset{*}=\sum_{k=1}^n\binom{n}k+\sum_{k=0}^n\binom{n}k\\\\
&=\left(2^n-1\right)+\binom{n}0+\sum_{k=1}^n\binom{n}k\\\\
&=2^n-1+1+2^n-1\\\\
&=2\cdot2^n-1\\\\

&=2^{n+1}-1\;.
\end{align*}$$



At the starred step I used the fact that $\binom{n}{n+1}=0$ to drop the last term of the first summation, and I did an index shift, replacing $k-1$ by $k$, in the second summation.


modular arithmetic solving technique



I do know that the $\pmod m$ in
$a\equiv b\pmod m$ is not a multiplication with $b$, so the following question has really confused me:

The given values are
$a\equiv 4\pmod{13}$ and $b\equiv 9\pmod{13}$; now we find the integer
$0\le c\le 12$ such that $c\equiv 9a\pmod{13}$.



Now, the solution is given as:




$9 \cdot 4 \pmod{13} = 36 \bmod 13 = 10$



I can see that $(a \bmod 13)$ is being replaced with its value from the givens, but if that's so, how is $(a \bmod 13)$ not being treated as a separate unit after replacement? It should have been indivisible, because it ought to be the multiplication $9\cdot a$ first and then the modulo $13$. This matters especially because $(9a)\bmod 13 \ne 9\,(a \bmod 13)$.



Just to clarify: the answer, from what I see, should have been $36$, but it's not, because I do know that $(9a)\bmod 13 \ne 9\,(a\bmod 13)$.


Answer



Argh... Apparently K. Rosen defines $a \bmod n$ in a subtle but very different way than I do.



I define $a \bmod n$ to mean nothing in itself but just to mean "with respect to the equivalence modulo $n$". Two numbers $a$ and $b$ are equivalent modulo $n$ if $n|a-b$. That relationship between integers is an equivalence relationship. (To get pedantic, that means every $a$ is related to itself; if $a$ is related to $b$ then $b$ is related to $a$; and if $a$ is related to $b$ and $b$ is related to $c$ then $a$ is related to $c$. But informally, that means, in respect to this property [having their difference divisible by $n$], the two numbers are interchangeable and can, for all intents and purposes, be considered to be the same.)




K. Rosen apparently defines $a \bmod n$ to be an operation that gives, as a result, the remainder when you divide $a$ by $n$. As such you are correct that $(4*9) \bmod 13 = 10$ and $4*(9 \bmod 13) = 36$ and $36 \ne 10$. However notice that in the example you give, "$\equiv$" (with 3 lines) is not the equal sign "$=$" (with 2 lines).



$\equiv$ is the equivalence sign. And $36 \equiv 10 \mod 13$ means that $36$, although not equal to $10$, is equivalent to $10$, and in the arithmetic system of modulo $13$, $36$ and $10$ are interchangeable.



The equivalence relationship is defined as $a \equiv b \iff a \bmod n = b\bmod n$. Note that this means $a \equiv b \iff a\equiv b \mod n \iff b \equiv a \mod n$.



Rosen's approach and mine lead to the exact same results.



Anyhow, using Rosen's definitions: $4*9 = 36$ and $36 \bmod 13 = 10$ and $10 \bmod 13 = 10$. And $9 \bmod 13 = 9$. So $36 =4*9 = 4*(9 \bmod 13) \equiv (4*9) \bmod 13 = 36 \bmod 13 = 10$.




=== old answer =====



Complete Redo:



"We can do arithmetic on remainders" and $a \equiv b \mod n" can be interpreted in four different way; two are correct, one is sort of correct but technically it's wrong but it's so convenient we pretend it is correct, and the fourth is dead outright flat WRONG.



1) $a \equiv b \mod n$



is simply a statement about two numbers $a$ and $b$. It means i) the difference between the two numbers is divisible by $n$ (i.e. $n\mid (a - b)$), or in other words ii) the difference is a multiple of $n$ (i.e. $a - b = kn$ for some $k \in \mathbb Z$), or in other words iii) one is equal to the other plus or minus a multiple of $n$ (i.e. $a = b + kn$ for some $k \in \mathbb Z$).




The advantage to this is it is very simple. Perhaps too simple. Students aren't likely to see how important this is. Also it misses/avoids the point that $a$ and $b$ aren't just related; they are equivalent in one respect and are interchangeable with each other.



2) $a \equiv b \mod n$



read this as "$a$ is equivalent to $b$ with respect to modulus $n$".



We can divide $\mathbb Z$ into $n$ separate and complete classes depending on how $n$ divides into all the integers.



$[0] = \{\dots,-2n, -n, 0, n, 2n, 3n,\dots\} = \{x\in \mathbb Z\mid x = kn$ for some integer $k\} = \{$ all multiples of $n\}$.




$[1] = \{\dots,-2n+1, -n+1, 1, n+1, 2n+1, 3n+1,\dots\} = \{x\in \mathbb Z\mid x = kn+1$ for some integer $k\} = \{$ all integers that have a remainder of $1$ when divided by $n\}$.



$[2] = \{\dots,-2n+2, -n+2, 2, n+2, 2n+2, 3n+2,\dots\} = \{x\in \mathbb Z\mid x = kn+2$ for some integer $k\} = \{$ all integers that have a remainder of $2$ when divided by $n\}$.



......



$[n-1] = \{\dots,-2n-1, -n-1, -1, n-1, 2n-1, 3n-1,\dots\} = \{x\in \mathbb Z\mid x = kn-1$ for some integer $k\} = \{$ all integers that have a remainder of $n-1$ when divided by $n\}$.



$a \equiv b \mod n$ means that if $a \in [k]$ then $b \in [k]$ as well. They both are in the same equivalence class.




We can represent each class by any representative integer in them. Since $3 \in [3]$ and $n+3\in [3]$ we can refer to $[3]$ as $[n+3]$ and they'll both be the same set. In other words, $[3] = [n+3]$.



The advantage to this is that it is more thorough and does give the idea that $a \equiv b \mod n$ is an equivalence relationship.



The disadvantage is that it is very abstract and confusing. It gives the student the impression that this is a very complicated procedure when it is really very simple.



3) Is the way I learned and to my mind makes the most intuitive sense.



Consider $\mathbb Z_{13} = \{0,1,2,3,4,5,6,7,8,9,10,11,12\}$. These are just like the regular integers, except there are only $13$ of them instead of an infinite number of them. The other difference is that we "loop over" when we count. Instead of counting $\dots,10,11,12,13,14,15,\dots$, we count $\dots,10,11,12,0,1,2,\dots$.




Since the numbers loop, we can have many ways to refer to a number. If we count to $15$ we "loop around" so that $15 = 2 = 28 = 41 =\dots$.



So $a \equiv b \mod n$ simply means $a = b$ if we are using the $\mathbb Z_n$ arithmetic system.



The advantage to this is it is EASY! And it gets across that $a$ and $b$ are equivalent and interchangeable, and it's very clear we can do arithmetic in this number system.



The disadvantage is that it is incorrect. $a$ and $b$ aren't the same thing in some different number system. $a$ and $b$ are two different numbers which are equivalent to each other in a specific equivalence relationship. (The relationship is that $a$~$b$ if $n|(a-b)$.)



We can actually fix this by noticing that if we think of the $n$ equivalence sets in 2) as individual things in their own right, then $\{[0],[1],\dots,[n-1]\}$ turns out to work the same way as $\mathbb Z_n$.




The trouble is that this is very abstract and will go over many a student's head.



4) Let $a \%n $ be the operation of taking the remainder of $a$ when dividing by $n$. That is you take two numbers and you get a third by doing an operation on them.



So $a \mod n$ means the remainder you get when you divide $a$ by $n$. So $36 \mod 13 = 10$.



THIS IS WRONG!!!! VERY VERY WRONG!!!!!



$\mod n$ is NOT an operation but a statement about how to consider two numbers to be equivalent.




Just don't do this. Don't.



...



So if $a \equiv 4 \mod 13$ and $b \equiv 9 \mod 13$ then $a*b \equiv 4*9 \mod 13 \equiv 36\equiv 10 \mod 13$.



That is true.



Let's look at them in terms of the four interpretations.




1) $a \equiv 4 \mod 13 \implies 13\mid a-4 \implies a = 4 + 13k$. $b \equiv 9 \mod 13\implies 13\mid b-9\implies b = 9 + 13j.$



So $ab = (4+13k)(9+13j) = 36 + 9\cdot 13k + 4\cdot 13 j + 13^2 kj = 36 + 13(9k + 4j + 13kj)$. So $ab \equiv 36 \mod 13$. And $36 + 13(9k + 4j + 13kj) = 10 + 13(9k + 4j + 13kj + 2)$. So $ab \equiv 10 \mod 13$.



2) $a \equiv 4\mod 13 \implies a \in \{\dots, 4, 17,30,\dots\}$ and $b \equiv 9 \mod 13 \implies b \in \{\dots, 9, 22,35,48,\dots\}$. And $ab \in \{\dots,-3, 10, 23, 36, 49, \dots\}$. (We can prove that by doing what we did in 1).) So $ab \equiv 10 \equiv 36 \mod 13$.



3) $a \equiv 4 \mod 13$ means we count to $a$ by looping around $13$ and ending at $4$ so $a = 4$ and the same thing for $b \equiv 9\mod 13$. So $a*b = 4*9 = 36$ and if we count to $36$ we loop around and end up at $10$.



4) $a\equiv 4 \mod 13$ means $a \% 13=$ the remainder $= 4$, and $b \equiv 9 \mod 13$ means $b\% 13 = 9$. So $(a*b)\% 13 = ((a\%13)*(b\% 13))\%13$. That's an interesting result, but not one that helps paint a big picture about the behavior of integers as a class. And typing those nested and repeated operations to make it true is a pain.
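(An editorial footnote in Python's notation: reducing before or after multiplying gives the same residue, which is what makes the book's substitution legitimate.)

    a = 4 + 13 * 7      # any integer with a = 4 (mod 13)
    assert (9 * a) % 13 == (9 * (a % 13)) % 13 == 10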


Saturday 25 February 2017

linear algebra - Practical use of matrix right inverse

Consider a matrix $A \in \mathbb{R}^{m \times n}$. In the case when $\operatorname{rank}(A) = n$, this implies the existence of a left inverse $A_l^{-1} = (A^\top A)^{-1}A^\top$ such that $A_l^{-1}A = I_{n\times n}$.




Similarly, if $\operatorname{rank}(A) = m$, then this implies the existence of a right inverse $A_r^{-1} = A^\top(A A^\top )^{-1}$ such that $A A_r^{-1} = I_{m\times m}$.



I understand how the concept of a left inverse naturally arises in, say, the solution of a least squares problem $Ax = b$ with $\operatorname{rank}(A)=n<m$. But what is a practical use of the right inverse?

I expect it to involve the fact that in this case $\operatorname{rank}(A)=m < n$, so there are now infinitely many solutions to $Ax = b$, and that the right inverse somehow seeks the solution which minimises the length of the solution vector?
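That hunch can be tested numerically; an editorial sketch in numpy (the example matrix is hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 5))      # m = 3 < n = 5, full row rank w.h.p.
    b = rng.standard_normal(3)

    x_right = A.T @ np.linalg.solve(A @ A.T, b)       # A_r^{-1} b
    x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]    # numpy's minimum-norm solution

    print(np.allclose(A @ x_right, b))    # True: it solves Ax = b
    print(np.allclose(x_right, x_lstsq))  # True: and it is the minimum-norm solution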

linear algebra - Invertibility of block matrices, with the property of being symmetric, positive definite, and of full rank:



If A and B are real matrices, with A being symmetric, B having at least as many columns as rows, and




the matrix C defined as:
$$
\begin{bmatrix}
A & B^T \\
B & 0 \\
\end{bmatrix}
$$



how can I prove that:




1) C is invertible, if A is positive definite and B of full rank



and,



2) Is C always invertible, if A is invertible and B of full rank?



My attempt so far was to sketch the block matrices.



For part 1)
I let A be of size $nxn$. Since A is positive definite, it is invertible, and thus has full rank.

I let B be of size $(k-(n+1)+1)x(p)$ = $(k-n)x(p)$, so that $B^T$ is of size $(p)x(k-n)$. Then C is of size $kxk$. Then I try to argue that, this $kxk$ square matrix has full rank, with rank = k, which implies that C is invertible. B has full row rank, and $B^T$ has full column rank.



Am I sort of close to the answer? I'm basically trying to avoid the usage of determinants of block matrices, as I'm not all that comfortable with that method - but perhaps it's necessary for this question.



For part 2)
My work for part 1), if it's correct, would imply that, yes, C is always invertible if A is invertible and B of full rank. I get the feeling, though, that there is a counterexample.



Thanks in advance for your help,


Answer



Since $A$ is nonsingular, consider the following block factorization of $C$:

$$
C=\pmatrix{A&B^T\\B&0}
=
\pmatrix{I&0\\BA^{-1}&I}\pmatrix{A&0\\0&S}\pmatrix{I&A^{-1}B^T\\0&I},
$$
where $S:=-BA^{-1}B^T$. Since the triangular blocks are nonsingular, the matrix $C$ is nonsingular iff the Schur complement matrix $S$ is nonsingular.



Now if $A$ is SPD, it is easy to see that $-S=BA^{-1}B^T$ is SPD as well, so $S$ is nonsingular. First, the definiteness of $A$ implies that $A^{-1}$ is SPD. For a nonzero $x$, $B^Tx\neq 0$ since $B$ has full row rank, and $$x^T(BA^{-1}B^T)x=(B^Tx)^TA^{-1}(B^Tx)>0.$$



Another way to see that $C$ is nonsingular if $A$ is SPD and $B$ has full row rank is as follows. Assume that $Cz=0$ for some nonzero $z=(x^T,y^T)^T$. Hence

$$\tag{1}
Ax+B^Ty=0, \quad Bx=0.
$$



None of the block components can be zero. If $x=0$ and $y\neq 0$ then $B^Ty=0$ which is impossible since $B$ has full row rank. If $x\neq 0$ and $y=0$ then $Ax=0$ which is impossible since $A$ is SPD. Hence both $x\neq 0$ and $y\neq 0$. Multiply the first equation in (1) with $x^T$ and the second with $y^T$ to get
$$
x^TAx+x^TB^Ty=0, \quad y^TBx=0.
$$
Since $x^TB^Ty=y^TBx=0$, we have $x^TAx=0$, which gives again a contradiction.




It is not sufficient that $A$ is nonsingular and $B$ of full rank for $S$ to be nonsingular. Consider
$$
A=\pmatrix{1 & 0 \\ 0 & -1},
\quad B=(1,1).
$$
It is easy to verify that $C$ is singular (actually $S=0$).
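An editorial one-off check of this counterexample in numpy:

    import numpy as np

    A = np.array([[1.0, 0.0], [0.0, -1.0]])   # invertible but indefinite
    B = np.array([[1.0, 1.0]])                 # full row rank
    C = np.block([[A, B.T], [B, np.zeros((1, 1))]])
    print(np.linalg.det(C))                    # 0.0: C is singular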


Sum of divergent infinite series



A series goes 1 + 1/2 + 2/3 + 3/4 + 4/5....



Is there a possible summation equation for this series?




since the terms decrease only from the first term to the second and never again afterwards.


Answer



Since every term is at least $\frac12$, the sum is at least $\frac12\times$ the number of terms, and hence it diverges.



The partial sum is related to the Harmonic numbers.



$$1 + \frac12 + \frac23 + \frac34 + \cdots + \frac{n-1}n=1+\sum_{k=1}^n\frac{k-1}k=1+\sum_{k=1}^n\left(1-\frac1k\right)=1+n-H_n.$$



The Harmonic numbers only grow as the logarithm of $n$.
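A quick editorial check of this formula in Python:

    from math import fsum

    def lhs(n):      # 1 + 1/2 + 2/3 + ... + (n-1)/n
        return 1 + fsum((k - 1) / k for k in range(1, n + 1))

    def rhs(n):      # 1 + n - H_n
        return 1 + n - fsum(1 / k for k in range(1, n + 1))

    print(lhs(10), rhs(10))   # both 8.0710...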



integration - Complex integral with Residues Theorem

I've been going crazy with this complex integral I have to estimate with the Residues Theorem. I'm obviously missing a sign or something else, but I fear I may be committing a conceptual mistake.



$\int_{0}^{\infty}dx \frac{\log x}{x^2-1}$



I choose to solve this integral by computing the following complex integral:



$\int_{\gamma} \frac{\log^2 z}{z^2-1}dz=\lim_{\epsilon\rightarrow 0}[ \int_{0}^{R} dx \frac{\log^2 (x+i\epsilon)}{(x+i\epsilon)^2-1}+\int_{R}^{0} dx \frac{\log^2 (x-i\epsilon)}{(x-i\epsilon)^2-1}+ $
$+\int_{2\pi}^{0}d\theta i \epsilon e^{i\theta} \frac{\log^2 (\epsilon e^{i\theta})}{(\epsilon e^{i\theta})^2-1} +\int_{2\pi}^{0}d\theta i \epsilon e^{i\theta} \frac{\log^2 (1+\epsilon e^{i\theta})}{(1+\epsilon e^{i\theta})^2-1}] + \int_{\tilde \gamma}dz \frac{\log^2 z}{z^2-1}$




Where $\gamma$ is the "keyhole" path, and $\tilde\gamma$ is the circle centered in $z=0$ with $R$ radius. If $R\rightarrow\infty$, then:



$=\int_{0}^{\infty} dx \frac{\log^2 (x)}{x^2-1}-\int_{0}^{\infty} dx \frac{(\log (x)+2\pi i)^2}{x^2-1}$
$=-4\pi i \int_{0}^{\infty}dx\frac{\log (x)}{x^2-1}+ 4 \pi^2 \int_{0}^{\infty} dx \frac{1}{x^2-1}=-4\pi i \int_{0}^{\infty}dx\frac{\log (x)}{x^2-1}$



since $\int_{0}^{\infty} dx \frac{1}{x^2-1}=0$.



I can estimate the complex integral with the Residues theorem:



$\int_{\gamma} \frac{\log^2 z}{z^2-1}dz=2\pi i Res[f(z), -1]=i \pi^3$




and this means that:



$\int_{0}^{\infty}dx\frac{\log (x)}{x^2-1}=-\frac{\pi^2}{4}$



which is wrong, since the correct result should be $\frac{\pi^2}{4}$. Is there something wrong with my procedure?

elementary number theory - GCD Proof with Multiplication: $\gcd(ax,bx) = x\cdot\gcd(a,b)$




I was curious as to another method of proof for this:



Given $a$, $b$, and $x$ are all natural numbers,



$\gcd(ax,bx) = x \cdot \gcd(a,b)$



I'm confident I've found the method using a generic common divisor and the theorem that says an integer $C$ is a common divisor of integers $A$ and $B$ iff $C|\gcd(A,B)$.



What other ways can this be proven? Is prime factorization a possibility? Or another method?


Answer




Below are $3$ proofs of the gcd distributive law $\rm\:(ax,bx) = (a,b)x\:$ using Bezout's identity, universal gcd laws, and unique factorization.






First we show that the gcd distributive law follows immediately from the fact that, by Bezout, the gcd may be specified by linear equations. Distributivity follows because such linear equations are preserved by scalings. Namely, for naturals $\rm\:a,b,c,x \ne 0$



$\rm\qquad\qquad \phantom{ \iff }\ \ \ \:\! c = (a,b) $



$\rm\qquad\qquad \iff\ \: c\:\ |\ \:a,\:b\ \ \ \ \ \ \&\ \ \ \ c\ =\ na\: +\: kb,\ \ \ $ some $\rm\:n,k\in \mathbb Z$




$\rm\qquad\qquad \iff\ cx\ |\ ax,bx\ \ \ \&\ \ \ cx = nax + kbx,\,\ \ $ some $\rm\:n,k\in \mathbb Z$



$\rm\qquad\qquad { \iff }\ \ cx = (ax,bx) $



The reader familiar with ideals will note that these equivalences are captured more concisely in the distributive law for ideal multiplication $\rm\:(a,b)(x) = (ax,bx),\:$ when interpreted in a PID or Bezout domain, where the ideal $\rm\:(a,b) = (c)\iff c = gcd(a,b)$






Alternatively, more generally, in any integral domain $\rm\:D\:$ we have




Theorem $\rm\ \ (a,b)\ =\ (ax,bx)/x\ \ $ if $\rm\ (ax,bx)\ $ exists in $\rm\:D.$



Proof $\rm\quad\: c\ |\ a,b \iff cx\ |\ ax,bx \iff cx\ |\ (ax,bx) \iff c\ |\ (ax,bx)/x\ \ \ $ QED



The above proof uses the universal definitions of GCD, LCM, which often serve to simplify proofs, e.g. see this proof of the GCD * LCM law.






Alternatively, comparing powers of primes in unique factorizations, it reduces to the following
$$\begin{eqnarray} \min(a+x,\,b+x) &\,=\,& \min(a,b) + x\\

\rm expt\ analog\ of\ \ \ \gcd(a \,* x,\,b \,* x)&=&\rm \gcd(a,b)\,*x\end{eqnarray}\qquad\qquad\ \ $$



The proof is precisely the same as the prior proof, replacing gcd by min, and divides by $\,\le,\,$ and



$$\begin{eqnarray} {\rm employing}\quad\ c\le a,b&\iff& c\le \min(a,b)\\
\rm the\ analog\ of\quad\ c\ \, |\, \ a,b&\iff&\rm c\ \,|\,\ \gcd(a,b) \end{eqnarray}$$



$\ c \le a,b \!\iff\! c\!+\!x \le a\!+\!x,b\!+\!x\!\iff\! c\!+\!x \le \lfloor a\!+\!x,b\!+\!x\rfloor\!\iff\! c \le \lfloor a\!+\!x,b\!+\!x\rfloor \!-\!x$



where $\,\lfloor y,z\rfloor := \min(y,z).$



Friday 24 February 2017

rational numbers - How can I explain $0.999\ldots=1$?











I have to explain $0.999\ldots=1$ to people who don't know limit.



How can I explain $0.999\ldots=1$?



The common procedure is as follows



\begin{align}
x&=0.999\ldots\\

10x&=9.999\ldots
\end{align}



Subtracting the first equation from the second gives $9x=9$, so $x=1$.


Answer



What I always find the most simple explanation is:
$$
\frac{1}{3} = 0.333\ldots \quad \Longrightarrow \quad 1 = 3 \cdot \frac{1}{3} = 3 \cdot 0.333\ldots = 0.999\ldots
$$


sequences and series - Evaluating $\sum^{n-1}_{j=0} \binom{n}{j}\binom{n}{j+1}$

Is there a simpler expression for the following sum? $(n\in\mathbb{N})$
$$S_n =\binom{n}{0}\binom{n}{1} + \binom{n}{1}\binom{n}{2} + \dots + \binom{n}{n-1}\binom{n}{n}$$
It seems like $S_n = \binom{2n}{n-1}$, however I have no clue as to how I can prove that relation. I also tried re-writing the sum as
$$S_n = \binom{n}{0}\binom{n}{n-1} + \binom{n}{1}\binom{n}{n-2} + \dots + \binom{n}{n-1}\binom{n}{0} = \sum^{n-1}_{j=0}\binom{n}{j}\binom{n}{n-j-1}$$
which resembles a special case of Vandermonde's Identity. Is there a connection between the two?
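A quick editorial check of the conjectured closed form in Python (math.comb, Python 3.8+):

    from math import comb

    for n in range(1, 12):
        S = sum(comb(n, j) * comb(n, j + 1) for j in range(n))
        assert S == comb(2 * n, n - 1)
    print("S_n = C(2n, n-1) holds for n = 1..11")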

calculus - Is this enough info to solve this time dilation problem

There are two clocks. One is a regular clock measuring regular time $\tau$. The other is a clock measuring time $t$ which also advances clockwise, but does not advance uniformly--it accelerates w.r.t. $\tau$ from 12 to 6 o'clock, and decelerates from 6 to 12, such that the two clocks always tell the same time at 12 and 6. It is given that the alternating acceleration and deceleration of $t$ w.r.t. $\tau$ is uniform--$\frac{d^2t}{d\tau^2}=K$ where $K$ is a constant. Is this enough info to determine the "dilation" $\frac{dt}{d\tau}$? If not, what other info is needed?

real analysis - If $f,g$ are uniformly continuous prove $f+g$ is uniformly continuous but $fg$ and $\dfrac{f}{g}$ are not


Suppose $f:\mathbb{R} \supset E \rightarrow \mathbb{R}$ and $g: \mathbb{R} \supset E \rightarrow \mathbb{R}$ are uniformly continuous. Show that $f+g$ is uniformly continuous. What about $fg$ and $\dfrac{f}{g}$?




My Attempt



Firstly let's state the definition; a function is uniformly continuous if
$$\forall \varepsilon >0\ \ \exists \ \ \delta >0 \ \ \text{such that} \ \ |f(x)-f(y)|< \varepsilon \ \ \forall \ \ x,y \in \mathbb{R} \ \ \text{such that} \ \ |x-y|<\delta$$




Sum $f+g$



Now to to prove $f+g$ is uniformly continuous;
$\bullet$ Choose $\delta_1 >0$ such that $\forall$ $x,y \in \mathbb{R}$ $|x-y|<\delta_1$ $\implies$ $|f(x)-f(y)|< \dfrac{\epsilon}{2}$



$\bullet$ Choose $\delta_2 >0$ such that $\forall$ $x,y \in \mathbb{R}$ $|x-y|<\delta_2$ $\implies$ $|g(x)-g(y)|< \dfrac{\varepsilon}{2}$



$\bullet$ Now take $\delta := \min\{ \delta_1, \delta_2\}$. Then we obtain, for all $x,y \in \mathbb{R}$,
$$
|x-y|<\delta \implies
|f(x)+g(x)-f(y)-g(y)| \le
|f(x)-f(y)| + |g(x)-g(y)| <
\dfrac{\varepsilon}{2}+\dfrac{\varepsilon}{2}=
\varepsilon$$






Product $fg$



Now for $fg$: for this to hold, both $f:E \rightarrow \mathbb{R}$ and $g:E \rightarrow \mathbb{R}$ must be bounded; if not, it doesn't hold.

$\bullet$ $\exists \ M>0$ such that $|f(x)|\le M$ and $|g(x)|\le M$ for all $x\in E$



$\bullet$ Choose $\delta_1 >0$ such that $\forall$ $x,y \in \mathbb{R}$ $|x-y|<\delta_1$ $\implies$ $|f(x)-f(y)|< \dfrac{\epsilon}{2M}$



$\bullet$ Choose $\delta_2 >0$ such that $\forall$ $x,y \in \mathbb{R}$ $|x-y|<\delta_2$ $\implies$ $|g(x)-g(y)|< \dfrac{\epsilon}{2M}$



$\bullet$ Now take $\delta := \min\{ \delta_1, \delta_2\}$. Then $|x-y|<\delta$ implies, for all $x,y \in \mathbb{R}$, that
$$|f(x)g(x)-f(y)g(y)| \leq
|g(x)||f(x)-f(y)|+|f(y)||g(x)-g(y)| \leq
$$

$$
M|f(x)-f(y)| + M|g(x)-g(y)| <
M \dfrac{\epsilon}{2M} + M \dfrac{\epsilon}{2M} =
\epsilon$$



Are these proofs correct?
I am not sure how to approach the $\dfrac{f}{g}$ case.

number theory - Computing $22^{201} \bmod (30)$



I am having trouble. I tried using the fact that $\gcd(30, 22) = 2$, but I have been stuck here for a bit now.



$22^{201} \equiv x \mod (30)$




$22^{201} \equiv 22\cdot 22^{200} \pmod{30}$



How can I proceed?


Answer



We have $22^2=484\equiv 4\pmod{30}$
Then $4^3\equiv 4\pmod{30}$ and this means $22^6\equiv 4\pmod{30}$ and $201=33\times 6+3$ so $22^{201}=(22^6)^{33}+22^3$. This gives $$22^{201}\equiv 4^{33}\times 22^3\pmod{30}$$ Using the above $4^{33}\equiv 4^{11}\equiv (4^3)^{3}\times 4^2\equiv 4$ and $22^3\equiv4\times22\equiv 28\pmod{30}$ and we can conclude that $$22^{201}\equiv 4\times 28\equiv -8\equiv 22 \pmod{30}$$


Thursday 23 February 2017

real analysis - Find the limit without using Maclaurin series



Find a limit,
$$\lim_{x\to 0} \frac{1-\cos{(1-\cos{(1-\cos x)})}}{x^8}$$
without using Maclaurin series. My attempt was to use L'Hopital's rule, but that's just too much, and the chances of not making a mistake through repetitive differentiation are very low.


Answer




Hint: Use repetitively "$1-\cos x=2\sin^2 x/2$"
$$\newcommand{\b}[1]{\left(#1\right)}
\newcommand{\f}{\frac}
\newcommand{\t}{\text}
\newcommand{\u}{\underbrace}
$$
$$\lim_{x\to0}\frac{1-\cos{(1-\cos{(1-\cos x)})}}{x^8}
=\lim_{x\to0}\f{2\sin^2\b{\sin^2\b{\sin^2\b{\f x2}}}}{x^8}\\
=\lim_{x\to0}\f{2\sin^2\b{\color{red}{\sin^2\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}}}}{\b{\color{red}{\sin^2\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}}}^2}.\f{\b{\color{red}{\sin^2\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}}}^2}{\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}^4}.\f{\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}^4}{\b{\color{green}{\f x2}}^8}.\b{\f 12}^8
\\=\lim_{x\to0}\f1{128}\b{\f{\sin\b{\color{fuchsia}{\sin^2\b{\sin^2\b{\f x2}}}}}{\color{fuchsia}{\sin^2\b{\sin^2\b{\f x2}}}}}^2.\b{\f{\sin\b{\color{purple}{\sin^2\b{\f x2}}}}{\color{purple}{\sin^2\b{\f x2}}}}^4.\b{\f{\sin\b{\color{crimson}{\f x2}}}{\color{crimson}{\f x2}}}^8

\\={\large\f1{128}}\quad\b{\because \f{\sin x}x=1}$$
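An editorial numeric check in Python ($1/128 = 0.0078125$):

    import math

    def g(x):
        return (1 - math.cos(1 - math.cos(1 - math.cos(x)))) / x**8

    for x in (0.5, 0.2, 0.1):
        print(x, g(x))    # ~0.0072, ~0.0078, ~0.0078 -> 1/128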


trigonometry - Prove that $\cos\alpha+\cos2\alpha+\cdots+\cos n\alpha=\frac{1}{2}\left(\frac{\sin\left(n+\frac{1}{2}\right)\alpha}{\sin\frac{1}{2}\alpha}-1\right)$

I have to prove using mathematical induction that:
$$\cos\alpha+\cos2\alpha+\cdots+\cos n\alpha=\frac{1}{2}\left(\frac{\sin\left(n+\frac{1}{2}\right)\alpha}{\sin\frac{1}{2}\alpha}-1\right)$$

If I substitute $n=1$, then I get:
$$\cos\alpha=\frac{1}{2}\left(\frac{\sin\frac{3}{2}\alpha}{\sin\frac{1}{2}\alpha}-1\right)$$
But I don't know what I should do next to prove that the equation holds for $n+1$.

number theory - Negative modulus



In the programming world, modulo operations involving negative numbers give different results in different programming languages and this seems to be the only thing that Wikipedia mentions in any of its articles relating to negative numbers and modular arithmetic. It is fairly clear that from a number theory perspective $-13 \equiv 2 \mod 5$. This is because a modulus of $5$ is defined as the set $\{0,1,2,3,4\}$ and having $-13 \equiv -3 \mod 5$ would contradict that because $-3 \not\in \{0,1,2,3,4\}$. My question is then regarding a negative modulus. What definition of a modulus of $-5$ would make the most sense in number theory? One in which $13 \equiv -2 \mod -5$, one in which $13 \equiv 3 \mod -5$, or something else entirely?


Answer



Negating the modulus preserves the congruence relation, by $\ m\mid x\color{#c00}\iff -m\mid x,\,$ so



$\quad a\equiv b\pmod m\iff m\mid a\!-\!b \color{#c00}{\iff} -m\mid a\!-\!b\iff a\equiv b\pmod {\!{-}m}$



Structurally, a congruence is uniquely determined by its kernel, i.e. the set of integers $\equiv 0.\,$ But this is a set of the form $\,m\Bbb Z,\,$ which is invariant under negation $\,-m\Bbb Z\, =\, m\Bbb Z$




When you study rings you can view this as a special case of the fact that ideals are preserved under unimodular basis transformations, e.g. $\,aR + bR\, =\, cR + dR \ $ if $\, (a,b)M = (c,d)\,$ for some matrix $\,M\,$ having $\ \det M = \pm1,\, $ e.g. $\ a\Bbb Z + b \Bbb Z\, =\, a\Bbb Z + (b\!-\!a)\,\Bbb Z,\,$ which is the inductive step in the Euclidean algorithm for the gcd (it computes the modulus $\,m=\gcd(a,b)\,$ corresponding to the congruence generated by $\,a\equiv 0\equiv b,\,$ i.e. $\,a\Bbb Z + b\Bbb Z = m\Bbb Z).\,$ When the ideal $= a\Bbb Z$ then this amounts to multiplying $\,(a)\,$ by a $\,1\times 1\,$ matrix $\,[u]\,$ with $\det = \pm1,\,$ i.e. $\, u = \pm1,\,$ yielding $\,a\Bbb Z = -a\Bbb Z,\,$ precisely the original equality.



As for the choice of canonical reps for the congruence classes, it is your freedom to choose which system you find most convenient. For example, in manual computations it often proves most convenient to choose reps of least magnitude, e.g. $\, 0,\pm1,\pm2\,$ mod $\,5;\,$ e.g. the capability to use $\,-1$ simplifies many things, e.g. compare $(-1)^n$ vs. $\,4^n.$


sequences and series - What is $\lim_{x\to \infty} 2\sqrt{x}- \sum_{n=1}^x {1\over \sqrt{n}}$?




I ask this because I noticed the partial sum $\sum_{n=1}^x {1\over \sqrt{n}}$ is very close to $2\sqrt{x}$, so close in fact that it appears their difference approaches a constant value, like $H_x$ and $\ln x$. However, when I put this limit as is into Wolfram, it said the limit diverged.



But, I found a way to transform the limit into an infinite sum, by using the transformation $f(x) = f(0) + \sum_{n=1}^x f(n) - f(n-1)$, an application of telescoping series to partial sums.



Thus,
\begin{align*}\lim_{x\to \infty} \left(2\sqrt{x}- \sum_{n=1}^x {1\over \sqrt{n}} \right) &=
\lim_{x\to \infty} \left( 2\sqrt{0} + \sum_{n=1}^x \left( 2\sqrt{n} - 2\sqrt{n-1} \right) - \sum_{n=1}^x {1\over \sqrt{n}} \right) \\

&= \sum_{n=1}^{\infty} \left(2\sqrt{n} - 2\sqrt{n-1} -{1\over \sqrt{n}} \right)
\end{align*}



This sum converges according to Wolfram by comparison test, and according to me by the integral test, but what does it converge to? It converges incredibly slowly; my best guess is $\approx 1.458$


Answer



Maybe we could use generalized harmonic numbers, since $$\sum_{n=1}^x {1\over \sqrt{n}}=H_x^{\left(\frac{1}{2}\right)}$$ and use the asymptotics for large values of $x$ $$H_x^{\left(\frac{1}{2}\right)}=2 \sqrt{x}+\zeta
\left(\frac{1}{2}\right)+\frac 1 {2\sqrt x}+O\left(\frac{1}{x^{3/2}}\right)$$
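An editorial numeric check of this asymptotic ($\zeta(1/2)\approx -1.4603545$):

    from math import sqrt

    x = 10**6
    H_half = sum(1.0 / sqrt(n) for n in range(1, x + 1))
    print(2 * sqrt(x) - H_half)                       # ~1.45985 (slow convergence)
    print(2 * sqrt(x) - H_half + 1 / (2 * sqrt(x)))   # ~1.4603545 = -zeta(1/2)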


elementary set theory - Show that an infinite set $C$ is equipotent to its cartesian product $Ctimes C$

So, as the title says I'd like to give a proof of the fact that if $C$ is an infinite set then it is equipotent or equivalent to its cartesian product $C\times C$ using Zorn's Lemma (and of course some of its implications like the fact that $C$ has an infinite countable subset which I think might be very useful).




The main problem that I have is that I'm not supposed to use any theorem or result involving cardinal numbers, since I'm still taking an elementary set theory course which hasn't covered that topic yet, and all the proofs that I've read so far use cardinal arithmetic at some point.



Another thing that I think might be useful is a lemma which was proved via Zorn's Lemma in an answer to the question Prove that if $A$ is an infinite set then $A \times 2$ is equipotent to $A$ which states that given the infinite set $C$ there exists a non-empty set $B$ such that $B\times \mathbb{N}$ is equipotent to $C$. Then it suffices to give a bijection from $(B\times \mathbb{N})$ $\times$ $(B\times \mathbb{N})$ to $B\times \mathbb{N}$



So, any suggestion on the direct proof (without cardinals, unfortuntely) via Zorn's Lemma or an actual bijection from $(B\times \mathbb{N})$ $\times$ $(B\times \mathbb{N})$ to $B\times \mathbb{N}$ would be highly appreciated. Thanks in advance.

limits - Evaluate $\lim_{x\to \infty^{-}}{\frac{x}{3}}{\left |\arctan{\frac{9}{x}}\right |}$

I have a problem evaluating this limit. When I try it, it comes out $0$, which is not right. Can you help me with how to start on this limit?




$$
\lim_{x\to \infty^{-}}{\frac{x}{3}}{\left |\arctan{\frac{9}{x}}\right |}
$$

proof verification - Prove proposition on integers using axioms



How can I prove:



If $0 < a$ and $0 < b$, then $a < b$ if and only if $a^2 < b^2$.



We are seeing this example in class but do not understand it.



These are the axioms we are using:




The axioms. The integers, which we denote by Z, is a set, together with a nonempty subset P ⊂ Z (which we call the positive integers), and two binary operations addition and multiplication, denoted by + and ·, satisfying the following properties:




  • (Commutativity) For all integers a, b, we have



a + b = b + a and a · b = b · a.





  • (Associativity) For all integers a, b, c, we have



a + (b + c) = (a + b) + c and a · (b · c) = (a · b) · c.




  • (Distributivity) For all integers a, b, c, we have



(a + b) · c = a · c + b · c.





  • (Identity) There exist integers 0 and 1, such that for all integers a, we have a + 0 = a and a · 1 = a.


  • (Additive inverses) For any integer a, there exists an integer −a such that a+(−a) = 0.


  • (Closure for P) If a, b are positive integers, then a + b and a · b are positive integers.


  • (Trichotomy) For every integer a, exactly one of the following three possibilities hold: either




a is a positive integer, or a = 0, or −a is a positive integer.





  • (Well-ordering) Every nonempty subset of the positive integers has a smallest element.



For inequalities:




  • Trichotomy law.

  • Transitive law.

  • Compatibility of sum with order.


  • Compatibility of product with order.



Would appreciate any help.


Answer



If $a,b\in\mathbb{R}^+$ with $b>a$, then $$b-a\in\mathbb{R}^+$$ and so $$b^2-a^2=(b-a)(b+a)\in\mathbb{R}^+$$ since $b+a$ is positive from our first assumption. Finally $$b^2-a^2\in \mathbb{R}^+\implies b^2-a^2>0\implies b^2>a^2$$



The other direction of your 'if and only if' statement is essentially the same proof taken apart and reassembled in reverse.
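Spelling that reverse direction out (a sketch using only trichotomy and the implication just proved): suppose $a^2 < b^2$ but not $a < b$. By trichotomy either $a = b$, which forces $a^2 = b^2$, or $b < a$, which by the argument above forces $b^2 < a^2$; both contradict $a^2 < b^2$, so $a < b$.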


Wednesday 22 February 2017

elementary number theory - Solve $ax equiv b mod m$ without Linear Congruence Theorem or Euclid's Algorithm?


[image: excerpt from page 5 of the linked text]




Source: page 5 of the linked text. The working above doesn't look like the Linear Congruence Theorem or anything from Euclid's Algorithm. Page 4 tries to delineate Casting Out the Modulus; is this it?



So if $d \mid m,$ then I rewrite
$$\begin{align} ax & \equiv b \mod m \\ dx + x & \equiv \\ \implies x & \equiv \end{align}$$



Is this error-free? Did I botch anything?

real analysis - Find the radius of convergence of power series



Suppose that $\sum_{k = 0}^\infty a_kx^k$ has radius of convergence of $R \in (0,\infty)$.



a) Find the radius of convergence of $\sum_{k = 0}^\infty a_kx^{2k}$




b) Find the radius of convergence of $\sum_{k = 0}^\infty a_k^2x^k$



Attempt: Given $$\left(\lim_{k \rightarrow \infty} \frac{\left|a_k\right|^\frac{1}{k}}{\left|a_{k+1}\right|^\frac{1}{k}}\right) = R$$



Then $$\left(\lim_{k \rightarrow \infty} \frac{\left|a_k\right|^\frac{1}{2k}}{\left|a_{k+1}\right|^\frac{1}{2k}}\right) = \left(\lim_{k \rightarrow \infty} \frac{\left|a_k\right|^\frac{1}{k}}{\left|a_{k+1}\right|^\frac{1}{k}}\right)^\frac{1}{2} = R^\frac{1}{2}$$



and



$$\left(\lim_{k \rightarrow \infty} \frac{\left|a_k^2\right|^\frac{1}{k}}{\left|a_{k+1}^2\right|^\frac{1}{k}}\right)= \left(\lim_{k \rightarrow \infty} \frac{\left|a_k\right|^\frac{1}{k}}{\left|a_{k+1}\right|^\frac{1}{k}}\right)^2 = R^2$$




Is this correct? Can anyone please help me? Any suggestion feedback would be really appreciate it. Thank you.


Answer



Let $g$ be the function in a) and write $f(x) = \sum_{k=0}^\infty a_k x^k$, so that $g(x) = f(x^2)$. Hence the series is absolutely convergent iff $|x^2| < R$, i.e. iff $|x| < \sqrt{R}$. Hence the radius of convergence is $\sqrt{R}$.



Note that you can't assume that the ratio test applies. For example, if every third $a_n$ is zero then the limit is not defined.



For b), we have ${1 \over R} = \limsup_n \sqrt[n]{|a_n|}$. Then
$\limsup_n \sqrt[n]{|a_n|^2} = \limsup_n (\sqrt[n]{|a_n|})^2 = (\limsup_n \sqrt[n]{|a_n|})^2 ={1 \over R^2}$. Hence the

radius of convergence is $R^2$.



Addendum:



To justify the exchange of $\limsup$ and squaring, suppose $A \subset \mathbb{R}$ and $\phi: \overline{A} \to \mathbb{R}$ is nondecreasing and continuous. Then $\phi(\sup A) = \sup \phi(A)$.



Suppose $a \in A$, then $a \le \sup A$ and so $\phi(a) \le \phi(\sup A)$, which
gives $\sup \phi(A) \le \phi(\sup A)$.



Now suppose $a_n \uparrow \sup A$, with $a_n \in A$. Then $\phi(a_n) \le \sup \phi(A)$, and continuity gives $\phi(a_n) \to \phi(\sup A)$, so $\phi(\sup A) \le \sup \phi(A)$.




In the above, take $A$ to be the set of subsequential limits of $\sqrt[n]{|a_n|}$ (so that $\sup A$ equals the $\limsup$) and $\phi(x) = x^2$.
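A quick numeric illustration of both conclusions (a sketch in Python, assuming the concrete choice $a_k = 2^{-k}$, for which $R = 2$):

n = 500
a_n = 2.0 ** (-n)              # coefficients a_k = 2^(-k), so R = 2

print(a_n ** (1/n))            # -> 1/R = 0.5: root test for sum a_k x^k
print(a_n ** (1/(2*n)))        # -> 1/sqrt(R) ≈ 0.707: radius sqrt(R) for sum a_k x^(2k)
print((a_n**2) ** (1/n))       # -> 1/R^2 = 0.25: radius R^2 for sum a_k^2 x^k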


probability - Expected value of the minimum (discrete case)



Maybe related to this question



In the comments of this question they say that it gets easier if the variables are independently and identically distributed, but I don't see how, because in my case the variable is discrete.




Here is my problem: I toss 4 dice and keep the 3 best results. What is the expected value of their sum?



I think tossing 4 dice and keeping the 3 best is the same as tossing 4 dice and removing the minimum.




  • Let X be the result of a standard die.

  • Let Y be tossing 4 dice and keeping the 3 best




Is that correct: $E(Y) = 4E(X) - E(\min)$?



So how do I calculate $E(\min)$? I know that if the variable were uniform on $[0,1]$ I could have started with $F_Y = 1 - (1-F_X)^p$, where $p$ is the number of dice I toss, but here the variable is discrete, so I don't know where to start.



Generalization: how do I calculate the expected value of the minimum of $k$ realizations of a discrete uniform random variable on $\{0,\dots,n\}$?



It's been a while since I studied probability, so my basic calculations may be wrong. Also, English is not my mother tongue, so please forgive my mistakes.




edit : spelling mistakes


Answer



For clarity, suppose that the dice have ID numbers $1,2,3,4$. Let $X_i$ be the result on die $i$. Let $Y$ be the sum of the three largest of the $X_i$, and let $W$ be the minimum of the $X_i$.



Then $Y=X_1+X_2+X_3+X_4-W$. By the linearity of expectation, it follows that
$$E(Y)=E(X_1)+E(X_2)+E(X_3)+E(X_4)-E(W).$$
The linearity of expectation is a very useful result. Note that linearity always holds: independence is not required.



The expectation of the minimum can be calculated by first finding the distribution of the minimum $W$.




The minimum is $1$ unless the dice all show a number $\ge 2$. The probability of this is $1-\left(\frac{5}{6}\right)^4$. We rewrite this as $\frac{6^4-5^4}{6^4}$.



The minimum is $2$ if all the dice are $\ge 2$ but not all are $\ge 3$. The probability of this is $\frac{5^4-4^4}{6^4}$.



The minimum is $3$ if all results are $\ge 3$ but not all are $\ge 4$. This has probability $\frac{4^4-3^4}{6^4}$.



And so on. Now use the ordinary formula for expectation. We get that the expectation of $W$ is
$$\frac{1}{6^4}\left(1(6^4-5^4)+ 2(5^4-4^4)+3(4^4-3^4)+4(3^4-2^4)+5(2^4-1^4)+6(1^4-0^4) \right).$$
We leave you the task of computing. Before computing, simplify!




Generalization: Suppose we toss $k$ "fair" $(n+1)$-sided dice, with the numbers $0$ to $n$ written on them. For $i=1$ to $k$, let $X_i$ be the number showing on the $i$-th die. Let $S$ be the sum of the dice. Then $S=X_1+\cdots+X_k$. The expectation of $X_i$ is $\frac{0+1+\cdots +n}{n+1}$. By the usual expression for the sum of consecutive integers, $E(X_i)=\frac{n}{2}$ and therefore $E(S)=\frac{kn}{2}$.



The analysis of the minimum $W$ goes along the same lines as the earlier one. The probability that the minimum is $j$ is $\frac{(n+1-j)^k -(n-j)^k}{(n+1)^k}$. If we use the ordinary formula for expectation, and simplify, we find that
$$E(W)=\frac{1^k+2^k+\cdots+n^k}{(n+1)^k}.$$



A nice way to find $E(W)$: The following is a useful general result. Let $X$ be a random variable that only takes non-negative integer values. Then
$$E(X)=\sum_{i=1}^\infty \Pr(X\ge i).$$
We apply that to the case of the random variable $W$ which is the minimum of $X_1,\dots,X_4$. The probability that $W\ge i$ in that case is $\frac{(7-i)^4}{6^4}$.




The same procedure works for the more general situation you asked about.
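To make the computation concrete, a short script (a sketch in Python) that evaluates these formulas exactly and cross-checks them by enumerating all $6^4$ equally likely rolls:

from fractions import Fraction
from itertools import product

# E(W) via E(W) = sum_{i>=1} P(W >= i) = sum_{i=1}^{6} ((7-i)/6)^4
EW = sum(Fraction(7 - i, 6) ** 4 for i in range(1, 7))

# E(Y) = 4*E(X) - E(W), with E(X) = 7/2 for a fair die
EY = 4 * Fraction(7, 2) - EW

# Brute force over all 6^4 equally likely rolls: sum minus minimum
total = Fraction(0)
for roll in product(range(1, 7), repeat=4):
    total += sum(roll) - min(roll)
brute = total / 6 ** 4

print(EW, EY, brute)   # EY and brute agree: 15869/1296 ≈ 12.2446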


Sum of series $frac {4}{10}+frac {4cdot7}{10cdot20}+ frac {4cdot7cdot10}{10cdot20cdot30}+cdots$

What is the sum of the series



$$\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots?$$



I know how to check whether a series is convergent or not. Is there any technique to find the sum of a series like this, where the factors in each term follow a pattern?
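Not a full solution, but the shape of the terms invites a guess: the $k$-th term is $\prod_{i=1}^{k}(3i+1)\big/(10^k\,k!)$, which matches the binomial series of $(1-x)^{-4/3}$ at $x=\frac{3}{10}$, suggesting the sum $(10/7)^{4/3}-1$. A numeric check of that guess (a sketch in Python 3.8+, for math.prod):

from math import factorial, prod

def term(k):
    # k-th term: (4 * 7 * 10 * ... * (3k+1)) / (10 * 20 * ... * (10k))
    return prod(3*i + 1 for i in range(1, k + 1)) / (10 ** k * factorial(k))

partial = sum(term(k) for k in range(1, 40))
closed = (10 / 7) ** (4 / 3) - 1
print(partial, closed)   # both ≈ 0.60944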

Tuesday 21 February 2017

trigonometry - If $alpha=frac{2pi}{7}$,prove that $sinalpha+sin2alpha+sin4alpha=frac{sqrt7}{2}$

If $\alpha=\frac{2\pi}{7}$,prove that $\sin\alpha+\sin2\alpha+\sin4\alpha=\frac{\sqrt7}{2}$







We need to find $\sin\frac{2\pi}{7}+\sin\frac{4\pi}{7}+\sin\frac{8\pi}{7}$

$=\sin\frac{2\pi}{7}+\sin\frac{3\pi}{7}-\sin\frac{\pi}{7}$

I am stuck here. Please help.
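Not a proof, but a numeric check is reassuring, and it points at the classical route: for $\zeta = e^{2\pi i/7}$, the Gauss-sum identity gives $\zeta+\zeta^2+\zeta^4 = \frac{-1+i\sqrt7}{2}$, whose imaginary part is exactly the sum in question (a sketch in Python):

import cmath, math

zeta = cmath.exp(2j * math.pi / 7)
s = zeta + zeta**2 + zeta**4
print(s)                        # ≈ -0.5 + 1.3229j, i.e. (-1 + i*sqrt(7))/2
print(s.imag, math.sqrt(7)/2)   # both ≈ 1.32288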

trigonometry - Summation of the Sine Function





I was messing around with Wolfram Alpha's summation calculator. When I plugged in the summation
$$\sum_{i=1}^n\sin\frac{i\pi}{180}$$
it gave me the value
$$\frac12\left(\cot\frac\pi{360}-\csc\frac\pi{360}\cos\frac{(2n+1)\pi}{360}\right)$$
I don't understand... how does it arrive at this formula? And how do I verify this? If I wanted to, how could I find similar formulas in the future?


Answer



We have
$$ 2\sin{A}\sin{B} = -\cos{(A+B)}+\cos{(A-B)} $$
by using the angle addition formulae. Putting $A=ak+b$, $B=a/2$ gives
$$ 2\sin{\left(\frac{a}{2}\right)}\sin{(ak+b)} = -\cos{\left(a\left(k+\frac{1}{2}\right)+b\right)} + \cos{\left(a\left(k-\frac{1}{2}\right)+b\right)}. $$

Summing from $k=1$ to $n$ gives
$$ 2\sin{\left(\frac{a}{2}\right)}\sum_{k=1}^n \sin{(ak+b)} = \sum_{k=1}^n \left( - \cos{\left(a\left(k+\frac{1}{2}\right)+b\right)} + \cos{\left(a\left(k-\frac{1}{2}\right)+b\right)} \right). $$
The sum on the right telescopes, and the only remaining terms give
$$ \sum_{k=1}^n \sin{(ak+b)} = \frac{1}{2}\csc{\left(\frac{a}{2}\right)} \left( \cos{\left(\frac{a}{2}+b\right)} - \cos{\left(a\left(n+\frac{1}{2}\right)+b\right)} \right) $$
This is essentially the most general formula of its kind: choosing different values for $a$ and $b$ gives a number of useful trigonometric sums. In your case, $a=\pi/180$, $b=0$, which gives
$$ \sum_{k=1}^n \sin{\frac{k\pi}{180}} = \frac{1}{2}\csc{\left(\frac{\pi}{360}\right)} \left( \cos{\left(\frac{\pi}{360}\right)} - \cos{\left(\frac{(2n+1)\pi}{360}\right)} \right), $$
as expected.



These formulae may also be used in integrating the trigonometric functions from first principles.
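A direct check of the general formula (a sketch in Python, comparing the closed form against brute-force summation):

import math

def direct(a, b, n):
    # left-hand side: sum of sin(a*k + b) for k = 1..n
    return sum(math.sin(a*k + b) for k in range(1, n + 1))

def closed(a, b, n):
    # right-hand side: (1/2) csc(a/2) (cos(a/2 + b) - cos(a(n + 1/2) + b))
    return 0.5 / math.sin(a/2) * (math.cos(a/2 + b) - math.cos(a*(n + 0.5) + b))

a, b, n = math.pi / 180, 0.0, 90
print(direct(a, b, n), closed(a, b, n))   # agree to floating-point accuracy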


Monday 20 February 2017

continuity - Real Analysis Continuous Function Problem

Show that the only continuous function on $(-\infty,+\infty)$, which is not identically zero and satisfies the equation $f(x + y) = f(x)f(y)$ for all $x,y \in \mathbb{R}$, is the exponential function $f(x) = a^x$ with $a = f(1) > 0$.

Differentiability implies continuous derivative?

We know that differentiability implies continuity, and in the case of two independent variables, continuity of both partial derivatives $f_x$ and $f_y$ is the standard condition guaranteeing that $f(x,y)$ is differentiable.



However, in the case of one independent variable, is it possible for a function $f(x)$ to be differentiable throughout an interval while its derivative $f'(x)$ is not continuous?
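A standard example to test this against (a well-known fact, offered as a pointer rather than a full answer): $$f(x)=\begin{cases} x^2\sin(1/x), & x\neq 0,\\ 0, & x=0,\end{cases}$$ is differentiable everywhere, with $f'(0)=\lim_{h\to 0} h\sin(1/h)=0$, yet $f'(x)=2x\sin(1/x)-\cos(1/x)$ for $x\neq 0$ has no limit as $x\to 0$, so $f'$ is not continuous at $0$.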

calculus - Evaluate $lim_{xto 0}frac {(cos(x))^{sin(x)} - sqrt{1 - x^3}}{x^6}$





Evaluate
$$ \displaystyle \lim_{x\to 0}\Bigg( \frac {(\cos(x))^{\sin(x)} - \sqrt{1 - x^3}}{x^6}\Bigg).$$




I tried to use L'Hopital's rule but it got very messy. I also tried to read the limit off a graph, which suggested the value $0$; however, the answer given in my book is $\frac{1}{4}$. Is there a method that avoids Taylor series and L'Hopital's rule (using special limits, for instance)? We are given that the limit exists. Any help will be appreciated. Thanks!


Answer



Let, $$\text{L}= \displaystyle \lim_{x\to 0}\Bigg( \frac {(\cos(x))^{\sin(x)} - \sqrt{1 - x^3}}{x^6}\Bigg) $$



$\implies \text{L}=\displaystyle \lim_{x \to 0}\dfrac{e^{\sin(x) \times \ln(\cos(x))}-e^\frac {\ln(1-x^3)}{2}}{x^6}$



$\implies \text{L}= \displaystyle \lim_{x \to 0}e^\frac {\ln(1-x^3)}{2} \times \left(\dfrac{e^{\sin(x) \times \ln(\cos(x))-\frac {\ln(1-x^3)}{2}}-1}{x^6}\right)$




$\implies \text{L}=\displaystyle\lim_{x \to 0}\dfrac{e^{\sin(x) \times \ln(\cos(x))-\frac {\ln(1-x^3)}{2}}-1}{x^6}$



$\implies \text{L}=\displaystyle\lim_{x \to 0}\dfrac{\sin(x) \times \ln(\cos(x))-\frac {\ln(1-x^3)}{2}}{x^6}$ $\left[\text{Using} \displaystyle\lim_{x\to 0}\left( \dfrac{e^x - 1}{x} = 1\right)\right]$



Now, since the limit exists, we infer,



$\text{L.H.L.} = \text{R.H.L.} = \text{L}$



$\implies 2\text{L} = \text{L.H.L.} + \text{R.H.L.}$




$=\displaystyle\lim_{x\to 0^-}\dfrac{\sin(x) \times \ln(\cos(x))-\frac {\ln(1-x^3)}{2}}{x^6} + \displaystyle\lim_{x\to 0^+}\dfrac{\sin(x) \times \ln(\cos(x))-\frac {\ln(1-x^3)}{2}}{x^6}$



=$\displaystyle\lim_{x\to 0}\dfrac{\sin(-x) \times \ln(\cos(-x))-\frac {\ln(1+x^3)}{2}}{x^6} + \displaystyle\lim_{x\to 0}\dfrac{\sin(x) \times \ln(\cos(x))-\frac {\ln(1-x^3)}{2}}{x^6}$



$\left[\text{Using} \displaystyle\lim_{x\to 0^-}f(x) = \displaystyle\lim_{x\to 0}f(0-x)\right] $



=$\displaystyle\lim_{x\to 0} \dfrac{\frac{-\ln(1-x^6)}{2}}{x^6}$



=$\dfrac{-(-x^6)}{2x^6}$ $\left[\text{Using} \displaystyle\lim_{x\to 0}\dfrac{\ln(1+x)}{x}=1\right]$




=$\dfrac{1}{2}$



$\implies \boxed{\text{L}=\dfrac{1}{4}}$
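A high-precision numeric check of the boxed value (a sketch assuming the mpmath library; the numerator is of order $x^6$, so ordinary floating point loses all significant digits here):

from mpmath import mp, mpf, cos, sin, sqrt

mp.dps = 50   # 50 significant digits

def g(x):
    return (cos(x)**sin(x) - sqrt(1 - x**3)) / x**6

for x in (mpf('0.1'), mpf('0.01'), mpf('0.001')):
    print(x, g(x))   # values approach 0.25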


geometry - Two Touching Ellipses - Tangents, Centres and Collinearity




Consider two ellipses, touching each other at a point (i.e. they've a common tangent at that point). It is given that they also have two more external common tangents, which when extended meet at P. The centres of the ellipses are A and B.




Prove that A, B, P are collinear.




I've attached a diagram, for the sake of clarity -



[image: two ellipses touching at a point, with centres A and B and the external common tangents meeting at P]







I tried using coordinate geometry for this, assuming one to be a standard ellipse (axes parallel to coordinate axes) and the other to be any touching ellipse. This method is really cumbersome - finding the equation of the second ellipse, the tangents, and then the intersection point is not at all elegant and seems very time consuming.



However, it is important to note that the combined equation of tangents from point P to both ellipses should be identical. (if that helps?)



Nevertheless, this problem can probably be solved easily using the geometrical properties of an ellipse (I'm not sure how, but seems like it'll be better than a pure coordinate geometry solution).



So, how to prove the collinearity of those 3 points? Please explain with a detailed solution.



P.S.
Of course, if possible, it'll be great to know how to solve it using both methods (or more) - analytic geometry and pure geometry.



Answer



Your problem is pretty difficult, not least because the claimed collinearity does not hold.
I started from an ellipse (the lower one) and a triangle of tangents (in blue), and built, through a perspectivity, another ellipse tangent to each side of the triangle and sharing a tangency point (the green one) with the original ellipse.



[image: the construction described above; the red points are visibly not collinear]



As you can clearly see, the red points are not collinear.
To get three collinear points we may consider the centers of the given ellipses and the midpoint of the segment cut by the external tangents on the tangent containing the green point:



[image: the two centres and the midpoint of the segment cut by the external tangents on the common tangent, which are collinear]


asymptotics - Prove or disprove the statement: $f(n)=Theta(f(frac{n}{2}))$



Prove or disprove the statement:



$$f(n)=\Theta(f(\frac{n}{2}))$$



where $f$ is an asymptotically positive function.




I have thought the following:



Let $f(n)=\Theta(f(\frac{n}{2}))$. Then $\exists c_1,c_2>0 \text{ and } n_0 \geq 1 \text{ such that } \forall n \geq n_0:$



$$c_1 f(\frac{n}{2}) \leq f(n) \leq c_2 f(\frac{n}{2})$$



Let $f(n)=2^n$. Then:



$$c_1 2^{\frac{n}{2}} \leq 2^n \leq c_2 2^{\frac{n}{2}} \Rightarrow c_1 \leq 2^{\frac{n}{2}} \leq c_2 $$




$$2^{\frac{n}{2}} \leq c_2 \Rightarrow 2^n \leq c_2^2 \Rightarrow n \leq \lg c_2^2, \text{ a contradiction, since this must hold for all } n \geq n_0$$



Could you tell me if it is right?


Answer



As mentioned in the comments, your work is correct (and thus the original proposition is false).


calculus - Evaluate $lim_limits{x to 0} (1 +sin^2 x)^{frac{1}{ln(cos x)}}$



$$\lim_{x \to 0} (1 + \sin^2 x)^{\frac{1}{\ln(\cos x)}}$$



I expanded $\sin x$ and $\cos x$, but what can be done with $\ln\left(1-\frac{x^2}{2}\right)$ or $\ln\left(\frac{2 - x^2}{2}\right)$?



Assume that L'Hopital is forbidden but you can use asymptotic simplifications like big and small $o$ notations and Taylor series.


Answer



You can write the function as




$$(1 + \sin^2 x)^{ \frac{1}{\sin^2 x} \frac{\sin^2x}{\ln(\cos x)}}$$
Further



$$\frac{\sin^2x}{\ln(\cos x)}=\frac{x^2+o(x^2)}{\ln(1-\frac{x^2}{2}+o(x^2))}=\frac{x^2+o(x^2)}{-\frac{x^2}{2}+o(x^2)}\to-2$$



And



$$(1 + \sin^2 x)^{ \frac{1}{\sin^2 x} } \to e$$



Hence...
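Filling in the ellipsis numerically (a sketch in Python): the two factors combine to $e^{-2}$, and a direct evaluation agrees:

import math

for x in (0.1, 0.01, 0.001):
    print(x, (1 + math.sin(x)**2) ** (1 / math.log(math.cos(x))))
print(math.exp(-2))   # the factorization above suggests the limit e^(-2) ≈ 0.1353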



Sunday 19 February 2017

Group isomorphism $h:(mathbb R,+)to (mathbb R^+,times)$ that is not an exponential function.




Let $\mathbb{R}^+$ denote the set of positive real numbers. A group isomorphism $h_b:(\mathbb R,+)\to (\mathbb R^+,\times)$ can be given by the exponential function $h_b(r)=b^r$, where $b$ is a positive real number other than $1$. Moreover, after we define the group automorphism $A_x:(\mathbb R,+)\to (\mathbb R,+)$, $r\mapsto xr$, the exponential functions (with positive base) are related by $h_k(r) = h_bA_rh_b^{-1}(k)$, with $A_r$ the automorphism defined above.



I started to wonder: is this the only possible way to construct such an isomorphism? Are there any isomorphisms $i:(\mathbb R,+)\to (\mathbb R^+,\times)$ different from the exponential functions? (They need not be continuous.) And what about the case where we look for a continuous isomorphism?


Answer



First we will construct an isomorphism $(\mathbb R,+)\to(\mathbb R^+,\times)$ which is not an exponential. Let $V$ be $\mathbb R$ as a vector space over $\mathbb Q$ and let $\{v_\alpha\in\mathbb R : \alpha\in A\}$ be a basis for $V$. Then any permutation $\sigma$ of $A$ induces a linear operator $T=T_\sigma$ on $V$, and so $T:(\mathbb R,+)\to(\mathbb R,+)$ is a group isomorphism. For a suitable non-identity $\sigma$ (a transposition, say) $T$ is not continuous, and so $\exp\circ T:(\mathbb R,+)\to(\mathbb R^+,\times)$ is an isomorphism which is not continuous, hence not an exponential.



Now suppose that $\phi:(\mathbb R,+)\to(\mathbb R^+,\times)$ is a continuous group isomorphism. Write $b=\phi(1).$ Then $\phi(-1)=b^{-1}$ and induction gives $\phi(k)=b^k$ for all $k\in \mathbb Z.$ Furthermore, for $p/q\in\mathbb Q,$ we have $$b^p = \phi(p) = \phi((p/q)\cdot q) = \phi(p/q)^q,$$
so that $\phi(p/q)=b^{p/q}.$ We then use continuity to extend $\phi$ to the reals, giving that $\phi(x)=b^x$ for all $x,$ so $\phi$ is an exponential with base $b.$


elementary number theory - Prove by induction that $5^n - 1$ is divisible by $4$.


Prove by induction that $5^n - 1$ is divisible by $4$.




How should I use induction in this problem? Do you have any hints for solving it?




Thank you so much.
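Since the question asks for hints, here is the standard inductive step written out (an elementary, well-known identity, not taken from this post): the base case is $5^1 - 1 = 4$, and if $4 \mid 5^k - 1$, then $$5^{k+1} - 1 = 5\cdot 5^k - 1 = 5\left(5^k - 1\right) + 4,$$ a sum of two multiples of $4$.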

elementary number theory - Given $gcd(d,d')=1, dmid n, d' mid n$, show that $dd' mid n$



Given $d,d'$ are in $\mathbb{Z} > 1$, and $\gcd(d,d')=1$, and $d \mid n$ and $d'\mid n$,



Show that $d\cdot d'\mid n$.



I pretty much have it but I think it could be made more clear. I have:



$d \mid n$ so $n=q\cdot d$, and $d'\mid n$ so $n=q'\cdot d'$




So $q\cdot d\cdot q'\cdot d'=n^2$



So $(q\cdot d\cdot q'\cdot d')/n=n$



So $(q\cdot q'/n)\cdot (d\cdot d')=n$ ... so the only thing left to do is show that $q\cdot q'/n$ must be an integer.



I think it has something to do with $\gcd(d,d')=1$, because $d$ and $d'$ cannot both divide an integer $n$ unless $n$ is a multiple of $d\cdot d'$, but I'm not sure how to show this elegantly.



Thanks!



Answer



Hint: since $\gcd(d, d') = 1$, there exist $\alpha, \beta \in \mathbb{Z}$ such that $\alpha d + \beta d' =1$. Now multiply both sides by $n$, and use the fact that $d \mid n$ and $d' \mid n$. Feel free to comment if you need further clarification!
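Spelling the hint out (a short completion in the direction the answer suggests): multiply $\alpha d + \beta d' = 1$ by $n$ to get $n = \alpha d n + \beta d' n$. Since $d' \mid n$ we may write $n = d' s$, and since $d \mid n$ we may write $n = d t$; substituting, $$n = \alpha d (d' s) + \beta d' (d t) = d d' (\alpha s + \beta t),$$ so $d d' \mid n$.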


probability - Expected Size of largest Flush (Cards)



I have been trying the following question from TopCoder but am having trouble understanding how to calculate the probability.



You are playing a card game in which large flushes, or sets of cards of the same suit,
are beneficial. Your next move is to draw number cards: the larger the flush you get,

the better. You wish to know the expected size of the largest flush in the cards you
draw. The deck will consist of some number of cards of four suits, given as suits. The
ith element of suits is the number of cards of that suit in the deck. Return a double,
the expected size of the largest flush your hand will make.


The expected size of the largest flush is calculated as follows: For each possible
hand, multiply the size of the largest flush in that hand by the probability of
getting that hand. Then add all of these values together to get the expected size of
the largest flush. For example, if half of the possible hands give you a flush of

size 3, and the rest give you one of size 2, the expected value is (0.5 * 3) + (0.5 *
2) = 2.5. Also, recall that there are n Choose k = n!/k!/(n-k)! ways to select k
cards of one suit out of a total of n cards in that suit, where ! denotes the
factorial operation. Thus, if suits = {3,3,3,3}, then there are (3 Choose 3) * (3
Choose 2) * (3 Choose 1) * (3 Choose 0) = 1 * 3 * 3 * 1 = 9 ways to get 3 cards of
the first suit, 2 of the second, 1 of the third, and none of the fourth.


More info: http://topcoder.bgcoder.com/print.php?id=885




I've been trying to understand the problem by first trying to tackle a simpler example.



suits = [2,2,2,2]
Cards to pick = 3


When picking the 2nd card, you can either:



1) Pick a card from the same suit as the first card (EV = 1/7 * 2)




2) Pick a card from a different suit to the first card (EV = 6/7 * 1)



This is where I get confused. When picking the 3rd card, do I compute the probabilities conditional on 1) having occurred and on 2) having occurred, and then sum the results?



If someone could shed some light as to what would happen at this step, I would greatly appreciate it!


Answer



Yes, you have to cover all the possibilities. This is sometimes done with a probability tree in simple situations like this one. The probabilities of each step are beside each branch. With "FS" meaning "flush size":



[image: probability tree for the three draws, with branch probabilities and the resulting flush sizes]




You multiply the probabilities at each level and this gives,



\begin{eqnarray*}
P(FS=1) &=& \dfrac{6}{7}\times\dfrac{4}{6} &=& \dfrac{4}{7} \\
P(FS=2) &=& \dfrac{1}{7}\times 1 + \dfrac{6}{7}\times\dfrac{2}{6} &=& \dfrac{3}{7} \\
P(FS=3) &=& \dfrac{1}{7}\times0 &=& 0.
\end{eqnarray*}



To generalise this idea for any program input you could consider a different style of tree where at each step you consider what happens if a heart, diamond, spade, or club is drawn. Its recursive/repetitive nature could make it more program-friendly than the above tree.




[image: recursive probability tree branching on the suit of each drawn card]
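For cross-checking either tree, a brute-force enumeration is cheap at this size (a sketch in Python for suits = [2,2,2,2] and 3 cards drawn):

from itertools import combinations
from collections import Counter
from fractions import Fraction

suits = [2, 2, 2, 2]                                   # deck composition
deck = [s for s, cnt in enumerate(suits) for _ in range(cnt)]

hands = list(combinations(range(len(deck)), 3))        # all equally likely 3-card hands
total = sum(max(Counter(deck[i] for i in hand).values()) for hand in hands)
print(Fraction(total, len(hands)))                     # expected largest flush = 10/7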


real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How can I find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know that when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...