Monday 30 November 2015

linear algebra - AB = I BA = I, proof not valid?



Ok so I'm right at the end of the first chapter in my book which means I just know some matrix/vector operations, what it means for a matrix to be invertible and what the transpose of a matrix is. Add to that some of the basic rules regarding the inverse and transpose of a matrix, such as $(AB)^{-1} = B^{-1}A^{-1}$



One question asks me to prove
$AB = I \iff BA = I$
I googled but all I could find was proofs that used concepts I've never heard of, like determinants. The reason I'm asking for help is because I think the book's answer is wrong. Here it is for $AB = I \implies BA = I$:




$B = BI$
$B(AB) = (BA)B$
Post-multiply this $B = (BA)B$ by $B^{-1}$:
$BB^{-1} = (BA)BB^{-1}$
$I = (BA)I = BA$
We have $BA = I$





He proves $BA = I \implies AB = I$ in a similar way. What bothers me is that the proof assumes $B$ is invertible, i.e. that $B^{-1}$ exists. The question never said anything about $B$ being invertible. But if $B^{-1}$ actually exists, then no proof is needed, since then, by the definition of an invertible matrix, $A$ is the inverse of $B$ and $AB = BA = I$. So am I wrong, or is the proof in the book not really correct? Or is it a valid assumption that $B$ is invertible?


Answer



There are three options here: either you are sloppy, or the book is, or both.



It is pretty easy to show the following fact:




If $A$ and $B$ are square matrices, then $AB$ is invertible if and only if $A$ and $B$ are invertible.





Using this fact, the proof used in your book is "correct".




  1. If the statement above was already proven in your book, then it may be the case that you missed that fact (so you are sloppy).

  2. If the statement was not proven in the book, then the book is sloppy.

  3. If the statement was proven, but the book did not explain well enough that it was used in the proof you cite, then it may be that both of you are sloppy.


Real roots of a quintic polynomial




Consider a real quintic polynomial
$$p(x;\, \alpha, \beta)=a_0 (\alpha,\beta) + a_1 (\alpha,\beta) x + a_2 (\alpha,\beta) x^2+ a_3 (\alpha,\beta) x^3 + a_4 (\alpha,\beta) x^4 - x^5$$
with real-valued functions $a_i$ defined by
$$\forall i \in \{0,\ldots, 4\}\quad a_i:\Omega \to \mathbb{R}, $$
where $\Omega \subset \mathbb{R}^2$.



I'd like to prove that $p$ has only real roots in $x$ for all $(\alpha,\beta) \in \Omega$. A proof relying on Sturm's theorem seems infeasible, as the given functions $a_i$ are themselves quite complex expressions. Is there an easier method to accomplish this?


Answer



I assume all $a_i$ are continuous.
Compute the discriminant $D(\alpha,\beta)$ of the polynomial. If the set $D^{-1}(0)\subseteq \Omega$ has no interior points, it is sufficient to check a single $(\alpha,\beta)$ per connected component of $\Omega\setminus D^{-1}(0)$.



number theory - Find all $n \in \mathbb{Z}^+ : \phi(n)=4$



I know that there is a similar post, but I'm trying a different proof.
Also, I will define $P$ to be the set of all positive prime numbers.



Question: If $\phi$ is Euler's Phi Function, we want to find all $n \in \mathbb{Z^+} : \phi(n)=4$.



Answer: Let $n=p_1^{n_1}\cdot...\cdot p_k^{n_k}\in \mathbb{Z}^+$ be the factorization of $n$ into primes. Then
$$\phi(n)=p_1^{n_1-1}\cdot ...\cdot p_k^{n_k-1}\cdot(p_1-1)\cdot...\cdot (p_k-1)=4$$




So $p_i-1 \mid 4$ for all $i \in \{1,2,...,k \}$. And from this, we have that



$$p_i-1\in\{1,2,4 \} \implies p_i\in \{2,3,5\} \subset P$$
Now we can see which primes $n$ contains: $n=2^{n_1}3^{n_2}5^{n_3}, \ n_1,n_2,n_3 \geq 0$. So,
$$\phi(2^{n_1}3^{n_2}5^{n_3})=4 \iff \phi(2^{n_1})\phi(3^{n_2})\phi(5^{n_3})=4 \ (*)$$



The possible cases for $n_i$ are:




  • $n_1=1,2,3\implies \phi(2)=1,\phi(2^2)=2, \phi(2^3)=4$ respectively


  • $n_2=1 \implies \phi(3)=2$

  • $n_3=1 \implies \phi(5)=4$



All the possible combinations for the relation $(*)$ are $\phi(5),\ \phi(5)\phi(2),\ \phi(3)\phi(2^2),\ \phi(2^3)$. So, $n \in \{5,10,12,8\}.$



Is this completely right?



Thank you.


Answer




This seems to be completely correct to me.
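The case analysis can also be double-checked by brute force. A small sketch (using only the definition of $\phi$): since $\phi(n)\geq\sqrt{n/2}$, any solution of $\phi(n)=4$ satisfies $n\leq 32$, so searching up to $100$ is a safe bound.

```python
from math import gcd

# Brute-force sanity check of the case analysis. Since phi(n) >= sqrt(n/2),
# any solution of phi(n) = 4 satisfies n <= 32, so 100 is a safe search bound.
def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

solutions = [n for n in range(1, 101) if phi(n) == 4]
print(solutions)  # [5, 8, 10, 12]
```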


Sunday 29 November 2015

elementary number theory - Proof that $2^{222}-1$ is divisible by 3



How can I prove that $2^{222}-1$ is divisible by three?
I already have decomposed the following one: $(2^{111}-1)(2^{111}+1)$ and I understand I should just prove that $(2^{111}-1)$ is divisible by three or that $(2^{111}+1)$ is divisible by three. But how can I solve this problem?


Answer



The routine way is to invoke Fermat's little theorem: $$a^{p-1}-1\equiv 0\,(\text{mod}\,p)$$ for $\mathrm{gcd}(a,p)=1$.
Plug in $a=2^{111},p=3$.
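For a quick numerical confirmation (not a replacement for the proof), Python's three-argument `pow` does fast modular exponentiation:

```python
# 2 ≡ -1 (mod 3), so 2^(2k) ≡ 1 (mod 3); in particular 3 divides 2^222 - 1.
assert pow(2, 222, 3) == 1                                # fast modular exponentiation
assert all(pow(2, 2 * k, 3) == 1 for k in range(1, 200))  # every even exponent works
print("3 divides 2^222 - 1")
```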


discrete mathematics - Proof by induction of summation inequality: $1 + 1/2 + 1/3 + 1/4 + 1/5 + \dots + 1/2^n \leq n + 1$

I have been working on this problem for literally hours, and I can't come up with anything. Please help. I feel like I am about to go insane.




For all $n \in \mathbb{N}$, we have $$1 + \frac{1}{2}+ \frac{1}{3}+ \frac{1}{4}+\frac{1}{5} +\cdots+ \frac{1}{2^n} \le n + 1$$



I know that I am supposed to use a proof by induction. Here is progress so far:



1) Let $P(n)$ be the statement $$\sum_{i=1}^{2^n} \frac{1}{i} \le n + 1$$



2) Base case: $n = 1$



$$\sum_{i=1}^{2^1} \frac{1}{i} = \frac{1}{1}+ \frac{1}{2} = \frac{3}{2} \le 2 $$




So P(1) is true.



3) Inductive hypothesis:
Suppose that P(k) is true for an arbitrary integer k $\geq$ 1



4) Inductive step:
We want to prove that $P(k + 1)$ is true, i.e. $$\sum_{i=1}^{2^{k+1}} \frac{1}{i} \le k + 2$$



By inductive hypothesis,




$$\sum_{i=1}^{2^{k+1}} \frac{1}{i} = \sum_{i=1}^{2^k} \frac{1}{i} + \sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i} \le k + 1 + \sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}$$



I know that I'm supposed to split the expression into two summations, but now I am completely stuck and don't know what to do from here. I got one hint that the fact $\frac{a}{b + c} < \frac{a}{b}$ is relevant, but I don't know how to get there from here.
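While hunting for the argument, it can be reassuring to check both the claim and the key estimate numerically. The block $\sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}$ has $2^k$ terms, each at most $\frac{1}{2^k+1}<\frac{1}{2^k}$ (this is where the hint $\frac{a}{b+c}<\frac{a}{b}$ enters with $b=2^k$), so each block sums to less than $1$. A sketch with exact rational arithmetic:

```python
from fractions import Fraction

# Exact-arithmetic check of the inequality and of the key block estimate:
# the block from 2^k + 1 to 2^(k+1) has 2^k terms, each < 1/2^k, so it sums to < 1.
for n in range(1, 11):
    total = sum(Fraction(1, i) for i in range(1, 2**n + 1))
    assert total <= n + 1
    block = sum(Fraction(1, i) for i in range(2**n + 1, 2**(n + 1) + 1))
    assert block < 1
print("inequality and block bound hold for n = 1..10")
```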

real analysis - Bijection from $\mathbb{N}$ to $\mathbb{N}$ s.t. $\phi(n)\ne n^2$




Is it possible to find a bijection $\phi:\mathbb{N}\to \mathbb{N}$ such that $\forall n\in \mathbb{N},\quad\phi(n)\ne n^2$?




If it is not, how does one prove that $\forall \phi\in \mathcal{L(\mathbb{N})},\;\forall N\in \mathbb{N},\exists p>N, \text{s.t.}\quad\phi(p)=p^2$?



(with $\mathcal{L(\mathbb{N})}$ the set of all bijections from $\mathbb{N}$ to $\mathbb{N}$)



Answer



Yes. Put $\phi(1)=2, \phi(2)=1$ and $\phi(n)=n$ for all $n\geq 3$.


number theory - Euler's totient function: determining $\phi(m)=2$ using the product formula

Solving $\phi(m) = 2$, I want to find that the only possible values of $m$ are $3$, $4$, and $6$.



I have considered the product formula and found $p_i = 2$ or $3$ and $a_i = 2$ or $1$, and this I have found using the cases $1\mid 2$ and $2\mid 2$.



How can I use my results, combining the two cases, to show that the only possible such $m$ are $3$, $4$, or $6$?



Edit: I am considering the case where $m$ and $n$ are coprime, and I know the possible solutions are $(3,4)$ and $(4,3)$. For myself, I am wondering how I find these $m$ after obtaining my cases $1\mid 2$ and $2\mid 2$.
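A brute-force check of the claimed answer set (a sketch using the definition of $\phi$ directly, rather than the product formula):

```python
from math import gcd

# Brute-force check that phi(m) = 2 exactly for m in {3, 4, 6}
def phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

print([m for m in range(1, 100) if phi(m) == 2])  # [3, 4, 6]
```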

calculus - Interchanging Inverse Laplace Transform

I have a function $f(|\boldsymbol{k}|,s,\theta)$ for which I am interested in its inverse Laplace transform. I am also interested in the function's mean value for constant $|\boldsymbol{k}|$, but technically I need to inverse Laplace transform first. I was wondering about the interchangeability of the inverse transform and the mean; that is, does:



\begin{equation}
\frac{1}{2\pi}\int_0^{2\pi} \mathcal{L}^{-1}\left\{f(|\boldsymbol{k}|,s,\theta)\right\}\mathrm{d}\theta=\mathcal{L}^{-1}\left\{\frac{1}{2\pi}\int_0^{2\pi} f(|\boldsymbol{k}|,s,\theta)\mathrm{d}\theta\right\}
\end{equation}
hold? The inverse Laplace transform is w.r.t. $s$.



I could think of a few reasons why it may not hold; for example, would it hold if doing the mean integration first affects the poles of $s$ (if that could happen)?




Please forgive my lack of knowledge in math, I do not know much measure theory and am not sure of the formality of interchanging these two operators. Any help is greatly appreciated!



EDIT: The actual function looks like:



\begin{equation}
f(|\boldsymbol{k}|,s,\theta)=\frac{\lambda_2 \left(D |\boldsymbol{k}|^2+2 \lambda_1+s\right)+\lambda_1 (i \boldsymbol{k}\cdot \boldsymbol{v}+\lambda_1+s)+\lambda_2^2}{(\lambda_1+\lambda_2) \left(\left(D |\boldsymbol{k}|^2+s\right) (i \boldsymbol{k}\cdot\boldsymbol{v}+\lambda_1+s)+\lambda_2 (s+i\boldsymbol{k}\cdot\boldsymbol{v} )\right)}
\end{equation}
All variables are real and positive except for $s$, of course. $\theta$ is the angle between $\boldsymbol{v}$ and $\boldsymbol{k}$. Carrying out the mean integration first gives:
\begin{equation}
f(|\boldsymbol{k}|,s)=\frac{\lambda_1}{(\lambda_1+\lambda_2) \left(D |\boldsymbol{k}|^2+\lambda_2+s\right)}
\end{equation}
which makes it easy to then find the inverse Laplace transform. Not sure if this is at all helpful.
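As a sanity check of that last step (not of the interchange itself), the inverse transform of the averaged expression can be computed symbolically. Writing $b = D|\boldsymbol{k}|^2+\lambda_2$ as a single positive constant, a sketch with sympy (the symbol names are my own):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
l1, l2, b = sp.symbols('lambda1 lambda2 b', positive=True)  # b stands for D|k|^2 + lambda2

# Inverse Laplace transform of the theta-averaged function
f_avg = l1 / ((l1 + l2) * (b + s))
F = sp.inverse_laplace_transform(f_avg, s, t)
print(F)  # lambda1*exp(-b*t)/(lambda1 + lambda2), possibly times Heaviside(t)
```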

Properties of the principal square root of a complex number



I am studying the principal square root function of complex numbers. On Wikipedia they present a complex number $z$ using polar coordinates as



\begin{equation}
z = r \mathrm{e}^{i \varphi}, \quad r \ge 0, ~ -\pi < \varphi \le \pi.
\end{equation}




Further, they define the principal square root of $z$ as



\begin{equation}
\sqrt{z} = \sqrt{r} \mathrm{e}^{i \varphi/2}. \tag{1}
\end{equation}



Continuing, it is mentioned that




The principal square root function is thus defined using the

nonpositive real axis as a branch cut. The principal square root
function is holomorphic everywhere except on the set of non-positive
real numbers (on strictly negative reals it isn't even continuous).




I do not understand these two statements. My questions are




  1. Why is the principal square root function defined using the nonpositive real axis as a branch cut? It seems to me that for $z = \mathrm{e}^{i \pi}$, we obtain by equation $(1)$ the principal square root $\sqrt{z} = \sqrt{1} \mathrm{e}^{i \pi/2} = i$.

  2. Why is the principal square root function not continuous on the negative reals?



Answer



That is a convention: for the principal value one generally uses $-\pi<\arg z\le\pi$; for other values you can move the branch cut elsewhere.
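The discontinuity in question 2 is easy to see numerically: approaching $z=-1$ from just above and just below the negative real axis gives square roots near $+i$ and $-i$ respectively. A sketch using Python's `cmath`, whose `sqrt` uses the same principal branch:

```python
import cmath

# The principal square root jumps across the negative real axis:
above = cmath.sqrt(complex(-1, 1e-12))   # just above the cut: close to +i
below = cmath.sqrt(complex(-1, -1e-12))  # just below the cut: close to -i
print(above, below)
```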


Saturday 28 November 2015

exponential function - Intuitive Understanding of the constant "$e$"

Potentially related-questions, shown before posting, didn't have anything like this, so I apologize in advance if this is a duplicate.


I know there are many ways of calculating (or should I say "ending up at") the constant $e$. How would you explain $e$ concisely?



It's a rather beautiful number, but when friends have asked me "what is e?" I'm usually at a loss for words -- I always figured the math explains it, but I would really like to know how others conceptualize it, especially in common-language (say, "English").





related but not the same: Could you explain why $\frac{d}{dx} e^x = e^x$ "intuitively"?
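One concrete handle, for what it's worth: $e$ shows up as the limit of compound interest. With $100\%$ annual interest compounded $n$ times, an initial $1$ grows to $(1+1/n)^n$, which approaches $e$ as $n$ grows. A quick numerical illustration:

```python
# With 100% annual interest compounded n times per year, 1 unit grows to
# (1 + 1/n)^n; as compounding becomes continuous this approaches e = 2.71828...
for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, (1 + 1 / n) ** n)
```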

Number of solutions to congruences

Is there any general form to determine the number of non-congruent solutions to equations of the form $f(x) \equiv b \pmod m$?




I solved a few linear congruence equations ($ax \equiv b \pmod m$) and I know those have exactly one solution when $\gcd(a,m)=1$, because we're basically finding $a^{-1}$, and all the inverses of $a$ are congruent.



What's the number of solutions for congruences of higher-degree polynomials (quadratic, cubic, etc.)?



Thanks a lot.
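There is no single count in general, which a quick experiment makes vivid: for higher-degree $f$ the number of solutions depends on both $b$ and $m$. A brute-force counting sketch:

```python
# Count solutions of f(x) ≡ b (mod m) by exhausting one complete residue system.
def count_solutions(f, b, m):
    return sum(1 for x in range(m) if (f(x) - b) % m == 0)

# Quadratic example mod 7: the count is 0, 1, or 2 depending on b.
print([count_solutions(lambda x: x * x, b, 7) for b in range(7)])  # [1, 2, 2, 0, 2, 0, 0]
```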

combinatorics - Finding the maximum value of elements to be selected in a grid - ZIO $2009$, P$1$



[problem statement image]




Hello Community! The above problem you see is a problem I got wrong. :( This is ZIO $2009$, P$1$.



I tried the problem and miserably found the wrong answer of $20$. Here is how my approach goes for part (a): notice that the largest element in the whole grid is $16$, which appears two times. It may be a good decision to start there to maximize the score, but unfortunately it is covered by only negative numbers. Although if we try to start with the upper $16$, we get the value $16 - 9 + 13 = 20$. Similarly, starting with other big numbers, we observe that the value gets even smaller, so the answer must be $\boxed{20}$. However, like most trial-and-error attempts in optimization problems, this is wrong, as the answer is $29$.



Now the main question I have for this problem is: How do we ensure maximum value? Is there some sort of an algorithm or something that we can follow and can be assured to have found the maximum value? Note that this problem is from a pen and paper exam where you are given 10 minutes to solve one sub-case (that is 30 minutes for this whole problem), so complete trial and error is of no use at all.



I asked a similar problem on MSE only: link but haven't got any answers till now... Any help there would also be appreciated.



The answers are $29, 9, 20$.




I would be grateful if anyone could help.. Thanks!


Answer



Starting in the upper left corner, replace each number $x$ with $x$ plus the larger of the number above it and the number to the left of it. In (a) this results in $$\matrix{-2&-1&-4&0&-4\cr10&-6&6&-6&2\cr-6&7&-7&1&-2\cr1&3&19&4&14\cr-8&19&10&23&7\cr}$$ You must exit at the bottom row or the rightmost column, and you want to exit at the biggest exit number, which is the $23$ in the bottom row. Now trace your way back to the left and up from that $23$, always choosing the larger of the two possible numbers. This takes you left to $10$, then left to $19$ (or up to $19$, it doesn't matter), then up to $3$, up to $7$, left (or up) to $-6$, up to $10$, up to $-2$. The smallest number on the way was the $-6$, so that path will give you $23-(-6)=29$, which is the maximum.


summation - Find the sum of the following series to $n$ terms: $\frac{1}{1\cdot3}+\frac{2^2}{3\cdot5}+\frac{3^2}{5\cdot7}+\dots$



Find the sum of the following series to n terms $$\frac{1}{1\cdot3}+\frac{2^2}{3\cdot5}+\frac{3^2}{5\cdot7}+\dots$$



My attempt:



$$T_{n}=\frac{n^2}{(2n-1)(2n+1)}$$




I am unable to proceed further, though I am sure that there will be some method of differences available to simplify the expression. Please explain the steps and comment on the technique to be used with such questions.



Thanks in advance !


Answer



Use partial fractions to get
$$
\begin{align}
\sum_{k=1}^n\frac{k^2}{(2k-1)(2k+1)}
&=\sum_{k=1}^n\frac18\left(2+\frac1{2k-1}-\frac1{2k+1}\right)\\
&=\frac n4+\frac18-\frac1{16n+8}\\[3pt]
&=\frac{(n+1)n}{4n+2}
\end{align}
$$
where we finished by summing a telescoping series.
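The closed form can be verified with exact rational arithmetic (a sanity check, not part of the derivation):

```python
from fractions import Fraction

# Exact check of the telescoped closed form n(n+1)/(4n+2)
for n in range(1, 50):
    lhs = sum(Fraction(k * k, (2 * k - 1) * (2 * k + 1)) for k in range(1, n + 1))
    assert lhs == Fraction(n * (n + 1), 4 * n + 2)
print("closed form verified for n = 1..49")
```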


probability - Expectation of nonnegative random variable when passed through nonnegative increasing differentiable function. Part II: Electric Boogaloo




This is a follow up to my previous question:



Expectation of nonnegative random variable when passed through nonnegative increasing differentiable function



I am now wanting to establish a follow up to the above problem. Specifically, if $X$ is a nonnegative random variable and $g:\mathbb{R}\rightarrow\mathbb{R}$ is a nonnegative, strictly increasing, differentiable function, then



$$\mathbb{E}g(X)<\infty \iff \sum_{n=1}^{\infty}g^{\prime}(n)\mathbb{P}(X>n)<\infty$$



I believe I can show the inequality when $g(x)=x^{p}$ for $p\in\mathbb{N}$, but the case of a general $g$ is more mysterious to me.




My attempt for the converse proceeds in the following way: If you assume that the series converges then (by the linked question)



\begin{equation}
\mathbb{E}g(X) = g(0)+\int_{0}^{\infty}g^{\prime}(x)\mathbb{P}(X>x)dx \\
= g(0)+\sum_{n=0}^{\infty}\int_{n}^{n+1}g^{\prime}(x)\mathbb{P}(X>x)dx \\
\leq g(0)+\sum_{n=0}^{\infty}(g^{\prime}(n+1)+g^{\prime}(n))\mathbb{P}(X>n) \\
= g(0)+\left(\sum_{n=0}^{\infty}g^{\prime}(n+1)\mathbb{P}(X>n)\right)+\left(\sum_{n=0}^{\infty}g^{\prime}(n)\mathbb{P}(X>n)\right).
\end{equation}




However I am unsure how to proceed from here. I don't see how the middle series would converge without more assumptions on $g$.



Any help with the equivalence in general would be appreciated.


Answer



This answer elaborates on my comments (to show the claim is false): Define $g:[0,\infty)\rightarrow \mathbb{R}$ by
$$g(x) = 1 + x + \frac{1}{2\pi} \cos(2\pi x + \pi/2)$$
Then
$$g'(x) = 1 - \sin(2\pi x + \pi/2) \geq 0 \quad \forall x \geq 0$$
and for $n\geq 0$ we get
$$ g'(n) = 0 \quad \mbox{ if and only if $n$ is an integer}$$
It follows that $g$ is nonnegative and strictly increasing over $x \geq 0$.




Furthermore $g(x)\geq 1 + x -1/(2\pi)\geq x$ and so
$$ g(x) \geq x \quad \forall x \geq 0$$
Let $X$ be any nonnegative random variable that satisfies $E[X]=\infty$. We get:
$$ g(X)\geq X \implies E[g(X)] \geq E[X] = \infty$$
but
$$ \sum_{n=1}^{\infty} g'(n) P[X>n] = 0$$
You can easily extend $g$ to have domain over all real numbers while preserving the nonnegativity and strictly increasing properties.



Note: This shows that one direction of the "if and only if" claim is false. The pre-kidney answer shows the other direction is also false.
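The properties claimed for this counterexample are easy to confirm numerically (a sketch; the grid sampling is illustrative, not a proof):

```python
import math

g = lambda x: 1 + x + math.cos(2 * math.pi * x + math.pi / 2) / (2 * math.pi)
dg = lambda x: 1 - math.sin(2 * math.pi * x + math.pi / 2)  # g'

# g' vanishes at the integers, is nonnegative on a fine grid, and g dominates x
assert all(abs(dg(n)) < 1e-9 for n in range(0, 20))
assert all(dg(k / 100) >= -1e-12 for k in range(0, 2000))
assert all(g(k / 100) >= k / 100 for k in range(0, 2000))
print("counterexample properties hold on the sampled grid")
```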



General advice on multiplying polynomials in a finite field?



Any tips on how to write the multiplication table in general for a finite field of polynomials (specific example: $F = (\mathbb{Z}/2\mathbb{Z})[x]/(x^2+x+1)$)? I know that $F$ has four elements here, $\{0,1,x,x+1\}$, and I know how to make a multiplication table for this. But it gets very tricky with "bigger" fields of polynomials, as reducing elements becomes difficult. Any tips?


Answer



If you're working in one variable, modding out elements is just long division. It can be a little tedious, but it is a straightforward process. If things are really big and you don't want to do it by hand, there are free computer programs that can handle this. You can also try to use Wolfram Alpha. If it gives you an answer over the rationals, then you can clear fractions and reduce mod $p$ to get an answer over a prime field.



If you'd like to do it by hand having a normal form for representatives of the cosets is helpful. It's easier to tell when things are equal that way. For the example above the normal form would be a coset representative that has only a constant and degree $1$ term (because you know $x^2 = x + 1$ in the quotient). Having a normal form will also work in a lot of situations where multiple variables are involved.
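To illustrate the normal-form idea on the example above: represent $c_0+c_1x$ by the pair $(c_0,c_1)$, multiply, and substitute $x^2=x+1$, with all coefficients mod $2$. A small sketch:

```python
# Multiplication in (Z/2Z)[x]/(x^2 + x + 1): elements are pairs (c0, c1)
# standing for c0 + c1*x, reduced via the normal form x^2 = x + 1, mod 2.
def mul(a, b):
    a0, a1 = a
    b0, b1 = b
    # (a0 + a1 x)(b0 + b1 x) = a0 b0 + (a0 b1 + a1 b0) x + a1 b1 x^2
    c0, c1, c2 = a0 * b0, a0 * b1 + a1 * b0, a1 * b1
    # substitute x^2 = x + 1 and reduce coefficients mod 2
    return ((c0 + c2) % 2, (c1 + c2) % 2)

elements = [(0, 0), (1, 0), (0, 1), (1, 1)]  # 0, 1, x, x+1
for a in elements:
    print([mul(a, b) for b in elements])
```

For instance, `mul((0, 1), (0, 1))` returns `(1, 1)`, i.e. $x\cdot x = x+1$, matching the normal form.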




If there are multiple variables and no obvious normal form, then you can resort to a Gröbner basis. You really don't want to do this by hand, though. Again, computer programs are helpful here. I would recommend Macaulay2.


calculus - Query regarding other seemingly indeterminate forms

I know there are 7 indeterminate forms as follows-
$$0^0$$
$$1^{\infty}$$
$${\infty}^0$$
$$\frac{0}{0}$$
$$\frac{\infty}{\infty}$$

$$0\cdot\infty$$
$${\infty}-{\infty}$$



I can't help but wonder whether these are also indeterminate:
$$(-1)^{\infty}$$
$$1^{-\infty}$$
$$({-\infty})^0$$



If these are not indeterminate forms, can someone give an explanation regarding this dilemma?

trigonometry - What is the "dropoff" to the ground from the eye line of an observer straight across a curved globe earth?




Please note that we assume the observer's eye line is exactly at sea level (0 inches) and we are assuming a perfect spherical earth with no atmospheric effects. The idea here is an alternative approach to evaluating the curvature of the earth since "distance to horizon" appears to be already covered eg. http://www.wikihow.com/Calculate-the-Distance-to-the-Horizon



Diagram of Problem



PLEASE VIEW DIAGRAM HERE - TL;DR SOLVE FOR D KNOWING R AND X



Is this correct?



$$D = \sin{(90-\tan^{-1}{(\tfrac{X}{R})})} * (\sqrt{R^2+X^2}-R)$$




Long Version:



An observer stands at point $P^0$ at $0$ inches of elevation and looks in a direct straight line over a distance of $X$ miles to point $P^1$.



The ground curves away from the observer's eye line, the eye line being tangential from point $P^0$ to point $P^1$ (and beyond) over a spherical earth.



Hence a right-angle triangle is formed between $P^0$, $P^1$ and the centre of the earth ($C^0$)with radius $R$, acute angle $a$, and hypotenuse $R+H$.



Clearly the "concentric height" $H$ ie. from $P^1$ to the ground (point $P^2$) in this right-angle triangle is: $$H = \sqrt{R^2+X^2}-R$$




As the distance $X$ increases, the "concentric height" $H$ increases in accordance with the curvature of the earth. This is known colloquially as "8 inches per mile squared"; for example, approximately: 1 mile gives 8 inches, 2 miles gives 32 inches, 3 miles gives 72 inches and 10 miles gives 800 inches.



As the observer is looking at 0 inches of elevation in a straight line, any objects in their eye line higher than sea level will obviously be visible, but will eventually curve away until it is impossible to see, whether the observer uses eyesight, optical zoom or laser techniques.



An issue arises when we want to calculate the "dropoff" straight "down" to the ground perpendicular to the eye line of the observer. That is, a "dropoff" $D$ which is the opposite face of a right-angle triangle between $P^1$, a point along the straight eye line (a distance less than $X$, point $P^3$) and the ground point $P^2$.



How do you calculate this "dropoff"? Here is my method, could there be a mistake in it?



As you can see from the diagram, we can establish the following.




The "concentric height" $H$ is: $$H = \sqrt{R^2+X^2}-R$$



The acute angle (arc originating from the centre of the earth) $a$ is: $$a = \tan^{-1}{\left(\frac{X}{R}\right)}$$



The smaller right-angle triangle relevant to $D$ within the larger right-angle triangle is formed between $P^3$, $P^1$ and $P^2$ with acute angle b where: $$b = 90 -a$$



The dropoff $D$ can then be defined as: $$D = \sin{(b)}* H$$



Hence the final formula of: $$D = \sin{(90-\tan^{-1}{(\tfrac{X}{R})})} * (\sqrt{R^2+X^2}-R)$$




"Dropoff" Example Calculations - Corrected Aug 29 2016



Eg. Find dropoff in inches, for 10 miles "eyeline distance" $X$ per diagram above, with an earth of radius 3959 miles, enter into Wolfram Alpha online: (sin(pi/2-arctan(10/3959))) * (sqrt(3959^2+10^2)-3959) * 63360. Colloquial(A) is "8 inches times miles squared". Colloquial(B) is "2/3 feet times miles squared".



X (miles) | D dropoff (inches) | D dropoff (feet) | Colloquial(A) (inches) | Colloquial(B) (feet)
1         | 8.00               | 0.664            | 8                      | 0.667
2         | 32.0               | 2.66             | 32                     | 2.67
3         | 72.0               | 5.98             | 72                     | 6.00
5         | 200                | 16.6             | 200                    | 16.7
10        | 800                | 66.4             | 800                    | 66.7
20        | 3200               | 266              | 3200                   | 267
30        | 7202               | 598              | 7200                   | 600
40        | 12802              | 1063             | 12800                  | 1067
50        | 20002              | 1660             | 20000                  | 1667
100       | 79982              | 6639             | 80000                  | 6667
1000      | 7638400            | 633987           | 8000000                | 666666
2000      | 26947774           | 2236665          | 32000000               | 2666666



Further Research





  • A formula should be derived to factor in the height of the observer from sea level

  • A formula should be derived for the distance $X^B$ which is the arc length (ground distance) from the observer. (Edit: one of the answers below has provided this).

  • A formula should be derived to factor in the angle of observation.

  • Empirical tests should be conducted using optical zoom (300mm and greater magnification) and highly-focused lasers all at different points on the earth at different dates and times.

  • The radius of approx. 4000 miles used is for observations longitudinally North to South at 0 degrees Longitude, or for observations latitudinally East to West at 0 degrees Latitude. This radius would be smaller for observations across a sphere at different points, eg. observations latitudinally East to West at 40 degrees Latitude cf. "Circumference of the earth at different latitudes" formula.


Answer



D can actually be expressed as :



$$
D = R-\frac{R^2}{\sqrt{R^2+X^2}}
$$



There's no need for sin or tan or other trigonometric functions. They're basically two similar triangles with 'D' corresponding to H in the same proportion as H corresponds to H+R. So you can just multiply R by H over (H+R) to get D :



[diagram: the similar triangles showing how to get D]
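The two formulas for $D$ (the trigonometric one from the question and the similar-triangles one above) agree, since $\sin(90^\circ-\tan^{-1}(X/R))=R/\sqrt{R^2+X^2}$. A quick numerical check, which also reproduces the table's small-$X$ dropoff values:

```python
import math

R = 3959.0  # earth radius in miles, as used in the question
for X in (1.0, 2.0, 3.0, 5.0, 10.0, 50.0, 100.0, 1000.0):
    trig = math.sin(math.pi / 2 - math.atan(X / R)) * (math.sqrt(R**2 + X**2) - R)
    simple = R - R**2 / math.sqrt(R**2 + X**2)
    assert math.isclose(trig, simple, rel_tol=1e-6)
    print(X, simple * 63360)  # dropoff in inches (63360 inches per mile)
```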


calculus - fractional part of the square of natural number



How can I prove that the sequence $$a_n = \left\{\sqrt{n}\right\} \ \left(\text{the fractional part of } \sqrt{n}\right) = \sqrt{n} - \left[\sqrt{n}\right]$$
is bounded from above by $1$?
So far I have tried induction, but the induction hypothesis doesn't seem to help with the inductive step, so I am stuck.
Thanks!




*($[x]$ - the floor function of x)


Answer



The fractional part of a number is, by definition, between $0$ and $1$. This is because $[x]$, the integer part of $x$, is defined as




The largest integer $n\in\mathbb Z$ such that $n \le x$.



Therefore, if $x-[x] > 1$, then $[x]+1$ is:





  • smaller than $x$ (because $x-[x]>1$ can be rearranged to $x>[x]+1$)

  • larger than $[x]$ (by definition, it is larger by $1$).



meaning that $[x]$ is not the largest integer satisfying $n \le x$, a contradiction.

calculus - Differentiable function with bounded derivative, yet not uniformly continuous



It is well-known that if a differentiable function $f:I \to \mathbb{R}$ ($I$ an interval) has bounded derivative, then it is uniformly continuous. On the other hand, there are differentiable functions, which are uniformly continuous, but whose derivative is unbounded. My related question is as follows. Does there exist a differentiable function $f:\mathbb{R} \to \mathbb{R}$, and a subset $X \subseteq \mathbb{R}$, such that $f'$ is bounded on $X$, and yet $f$ is not uniformly continuous on $X$? Note that $X$ cannot be an interval, a finite disjoint union of intervals, nor can it be a discrete set.


Answer



Let us start with



$$h(x) = \begin{cases}
\hphantom{-}4(x+2) &, \; -2\leqslant x < -\frac{3}{2}\\
-4(x+1) &, \; -\frac{3}{2} \leqslant x < -1\\
-4(x-1) &, \; \hphantom{-}\;1 \leqslant x < \frac{3}{2}\\
\hphantom{-}4(x-2) &, \; \hphantom{-}\frac{3}{2}\leqslant x < 2\\
\qquad 0 &, \; \hphantom{-}\text{otherwise.} \end{cases}$$



For $c > 0$, let




$$h_c(x) = c\cdot h(c\cdot x).$$



Then $h_c$ is continuous, and $\int_{-\infty}^0 h_c(x)\,dx = 1$ as well as $\int_{-\infty}^\infty h_c(x)\,dx = 0$. Now let



$$g(x) = \sum_{n=1}^\infty h_{5^n}\left(x-n-\frac{1}{2}\right).$$



Every $g_n(x) = h_{5^n}\left(x-n-\frac12\right)$ vanishes identically outside the interval $[n,n+1]$, so $g$ is continuous, and



$$f(x) = \int_0^x g(t)\,dt$$




is well-defined and continuously differentiable.



Furthermore, $f(x) \equiv 0$ on every interval $\left[n, n+\frac{1}{2} - \frac{2}{5^n}\right]$, and $f(x) \equiv 1$ on every interval $\left[n+\frac{1}{2}-\frac{1}{5^n}, n+\frac{1}{2}+\frac{1}{5^n}\right]$. Thus on



$$X = \bigcup_{n=1}^\infty \left(\left[n, n+\frac{1}{2} - \frac{2}{5^n}\right] \cup \left[n+\frac{1}{2}-\frac{1}{5^n}, n+\frac{1}{2}+\frac{1}{5^n}\right]\right)$$



we have $f' \equiv 0$, so the derivative is bounded, but



$$f\left(n+\frac{1}{2}-\frac{1}{5^n}\right) - f\left(n+\frac{1}{2}-\frac{2}{5^n}\right) = 1$$




for all $n$, while the distance between the two points is $5^{-n}$ which becomes arbitrarily small, so $f$ is not uniformly continuous on $X$.



If the sentence




Note that $X$ cannot be an interval, a disjoint union of intervals, nor can it be a discrete set.




was meant to forbid a construction as above where $X$ is a disjoint union of intervals, we can obey the letter of the law (but not the spirit) by adding an arbitrary subset of $(-\infty,0)$ that is not a union of disjoint intervals.



Friday 27 November 2015

number theory - Easy explanation of analytic continuation

Today, as I was flipping through my copy of Higher Algebra by Barnard and Child, I came across a theorem which said,




The series $$ 1+\frac{1}{2^p} +\frac{1}{3^p}+...$$ diverges for $p\leq 1$ and converges for $p>1$.





But later I found out that the zeta function is defined for all complex values other than 1. Now I know that Riemann analytically continued this function to fit all complex values, but how do I explain, to a layman, that $\zeta(0)=1+1+1+...=-\frac{1}{2}$?



The Wiki articles on these topics go way over my head. I'd appreciate it if someone can explain it to me what analytic continuation actually is, and which functions can be analytically continued?






Edit



If the function diverges for $p\leq1$, how is WolframAlpha able to compute $\zeta(1/5)$? Shouldn't it give out infinity as the answer?
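Regarding the edit: WolframAlpha computes the analytic continuation, not the divergent series. The contrast is easy to see numerically, here sketched with mpmath (which likewise implements the continuation):

```python
from mpmath import mp, zeta

mp.dps = 15
# Partial sums of the series at s = 1/5 grow without bound...
partial = lambda N: sum(1 / mp.mpf(k) ** 0.2 for k in range(1, N + 1))
print(partial(1000), partial(10000))
# ...yet the analytically continued zeta function is finite (and negative) there:
print(zeta(0.2))
```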

complex analysis - Find the analytical function when the imaginary part is given?




Find the analytical function of the complex variable $z$ whose imaginary part is :
$$v(x,y)= \ln |z|$$





So it's obvious that I'm going to use Cauchy Riemann here, but I feel like there's something else that is missing...
So, I have
$v(x,y)= \ln\sqrt{x^2+y^2}$ and I find $\partial v/\partial x$, but then what?


Answer



Even though in principle the Cauchy-Riemann equations can function as differential equations that will allow you to reconstruct a function in this way, in practice doing this directly can be extraordinarily difficult. Exercises of this kind are almost always meant to be solved using a combination of inspired guesswork and previous knowledge.



If you know of the complex exponential and logarithm (and if you don't you certainly have some work cut out for you here), you should immediately notice that the imaginary part you're expected to find here is exactly the real part of the complex logarithm. So we can very quickly manufacture a function that satisfies it, simply by multiplying by $i$:
$$ f(z) = i\operatorname{Log} z $$



The only problem with this is that it is not continuous (and thus not analytic) along the cut, wherever your definition of Log places the principal cut. So the next step would be to figure out whether this is unavoidable.




It turns out that this problem cannot be avoided. Clearly if we're given the imaginary part of the function, we can add any real number we want to the entire function without losing analyticity -- so we can arbitrarily decide that the real part of $f(1)$ is going to be $0$.



But then the Cauchy-Riemann equations immediately tell us that the real part of $f(x)$ for any positive real $x$ must be $0$ too, simply because $\frac{du}{dx}=\pm \frac{dv}{dy}$ (I don't even bother to look up the sign), and $\frac{dv}{dy}$ is clearly $0$ everywhere on the real axis, given the specified form for $v$.



Thus, we now know the entire complex value of $f$ everywhere on the positive real axis. And we can then use a theorem that says that knowing the value of an analytic function on just a small (nontrivial) piece of curve going through its domain determines its values everywhere. So since $i \operatorname{Log} z$ matches the specified values on the positive real axis, it has to match everywhere, at least if the domain of $f$ includes the domain of $\operatorname{Log}$.



Otherwise the domain of $f$ can be stranger, and we then know only that $f(z)$ must be $i$ times some logarithm of $z$ at each point -- but we still cannot make an $f$ that's continuous all the way around the origin, because the argument (which must become minus the real part of $f$) cannot match up.
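The guess can be checked numerically: $\operatorname{Im}(i\operatorname{Log} z)=\ln|z|$, while the real part, $-\arg z$, jumps by $2\pi$ across the cut. A sketch with Python's `cmath`, whose `log` uses the principal branch:

```python
import cmath
import math

# f(z) = i * Log(z) has imaginary part ln|z| ...
for z in (1 + 2j, -3 + 0.5j, 0.1 - 7j):
    f = 1j * cmath.log(z)
    assert abs(f.imag - math.log(abs(z))) < 1e-12

# ... but its real part, -arg(z), jumps across the negative real axis:
above = (1j * cmath.log(complex(-1, 1e-12))).real   # close to -pi
below = (1j * cmath.log(complex(-1, -1e-12))).real  # close to +pi
print(above, below)
```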


integration - Quick question why setting $a=0$ gives an indeterminate solution

$\newcommand{\dx}{\mathrm dx\,}$I’m having lots of trouble figuring this out, so perhaps you guys can help me. For example, let’s take the integral





$$\int\limits_0^{\infty}\dx\frac {\sin x\log x}x=-\frac {\gamma\pi}2$$




Our integral can be computed using differentiation under the integral sign and taking the imaginary part of
$$\mathfrak{I}(s)=-\int\limits_0^{\infty}\dx x^{a-1}e^{-sx}=-s^{-a}\Gamma(a)$$
Set $s=i$ and take the imaginary part to get
$$\operatorname{Im}\mathfrak{I}(i)=\Gamma(a)\sin\frac {\pi a}2$$
But when I differentiate and set $a=0$, the gamma function becomes undefined because $\Gamma'(0)$ doesn't produce a determinate form.




I’m not exactly sure what went wrong. Perhaps you can help me?

elementary set theory - Cardinality of a power set is given be $2^n$. Why?



The cardinality of the power set of $A$ is given by $2^n$, where $n$ is the number of elements in set $A$. How does this formula work?



I was taught that there are $n$ elements in Set $A$ and each element has a choice of either being in the power set or not i.e., $2$ choices. Hence $\underbrace{ 2\times 2\times 2\times 2 \cdots}_{n\; times} $.



But I don't understand why the element's choice of either being in the power set or not helps us in determining the cardinality of the power set.



Answer




I was taught that there are n elements in Set A and each element has a choice of either being in the power set or not i.e., 2 choices.




By definition, the power set of $A$ is the set of all subsets of $A$. The power set of $A$ does not contain elements of $A$, it contains subsets of $A$.



In how many ways can you choose a subset (say, $X$) of $A$? Well, every element in $A$ has a choice of either being in $X$ or not, i.e. $2$ choices. Thus there are $2^n$ ways you can form a subset $X$. Thus the total number of subsets is $2^n$.



To summarize: your mistake seems to be that you have swapped "choice being in $X$" (where $X$ is a subset) to "choice being in the power set".
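To make the counting concrete, here is a tiny Python illustration of my own (using a small example set): enumerate all subsets by choosing, for each size, which elements are in, and count them.

```python
from itertools import combinations

def power_set(elements):
    """All subsets of `elements`: for each size r, choose which r elements are in."""
    s = list(elements)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

A = {"a", "b", "c", "d"}            # n = 4 elements
subsets = power_set(A)
assert len(subsets) == 2 ** len(A)  # 2^4 = 16 subsets
assert set() in subsets and A in subsets
```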



real analysis - Increasing Sequence of Step Functions Converging to Characteristic Function of Fat Cantor Set

This is related to this question.



I want to find an open set $G\subseteq [a,b]$ such that there does not exist an increasing sequence of step functions on $[a,b]$ which converges almost everywhere to $\chi_{G^c}$.



By that question, if $G^c$ has interior, I can find an increasing sequence $s_n$ of step functions which converges to the characteristic function of the interior. If the noninterior points form a set of measure zero, the sequence will still converge almost everywhere.




So there are two options: a nowhere dense closed set which has positive measure, or a closed set with nonempty interior whose boundary has positive measure.



For the first case, I found this, a fat Cantor set, and from an answer here for this case I should be able to get the example.



So we can propose this




Proposition. Let $F\subseteq [0,1]$ be the fat Cantor Set. Then there does not exist an increasing sequence $s_n$ of step functions such that $s_n\nearrow \chi_F$ almost everywhere in $[0,1]$.





Is there any idea of how I can prove that nonexistence?



The second case seems to be around something similar.

sequences and series - Closed form for sum



I wonder whether there is a closed form for this sum
$$ S_n:=\displaystyle\sum_{k=0}^n \dfrac{4^k}{4^k+5^k}$$
The question asks to express the sum in terms of $n$ then to deduce the limit of $\dfrac{S_n}{n+1}$.
I tried to use the following sum as an auxiliary sum
$$T_n:=\displaystyle\sum_{k=0}^n\dfrac{5^k}{4^k+5^k}$$

noticing that $S_n + T_n = n+1$.
Any thoughts about this? Thanks.


Answer



As hinted by SmileyCraft, I cannot think of any simple closed form to express $S_n$.



My best guess is to use Big $\mathcal{O}$ notation. As you thought, $S_n = n+1 - T_n$; then write $T_n = \sum_{k=0}^n u_k$ with $u_k = \frac{1}{1+\left(\frac{4}{5}\right)^k}\underset{k\to\infty}{=}1 + \mathcal{O}\left(\frac{4}{5}\right)^k$. Now $\left(\frac{4}{5}\right)^k$ is a positive real sequence, and is thus summable inside the Big $\mathcal{O}$ notation.



This yields:
$$
S_n \underset{n\to \infty}{=} n+1 - \left[(n+1) + \mathcal{O}(1) \right] = \mathcal{O}(1)

$$



since $\sum_{k=0}^n \left(\frac{4}{5}\right)^k \underset{n\to \infty}{=} \mathcal{O}(1)$.



Then $\frac{S_n}{n+1} = \mathcal{O}(\frac{1}{n+1})$.



Note that it is very similar and perfectly equivalent to what SmileyCraft does but it does give you an expression of $S_n$ (though trivial) depending on $n$.
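A quick numerical sketch of my own (Python) confirming both $S_n + T_n = n+1$ and that $S_n$ stays bounded, so $\frac{S_n}{n+1}\to 0$:

```python
def S(n):
    return sum(4**k / (4**k + 5**k) for k in range(n + 1))

def T(n):
    return sum(5**k / (4**k + 5**k) for k in range(n + 1))

assert abs(S(50) + T(50) - 51) < 1e-9   # S_n + T_n = n + 1
assert abs(S(200) - S(100)) < 1e-8      # S_n is bounded (in fact convergent)
assert S(200) / 201 < 0.03              # hence S_n/(n+1) -> 0
```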


Functional Equation with Inverse



How do I solve the following functional equation:
$$f(x)+12f^{-1}(x)=\frac{1}{x}f(x)$$

I've been doing a lot of functional equations, but I haven't done one yet that has the function and its inverse together. All I've done so far is figure out that $f(x)$ has a fixed point at $x=\frac{1}{13}$ and that $f(0)$ starts a cycle of orbit $2$.



Thanks! All help is appreciated!


Answer



We have



$$
y=f(x)
$$




and:



$$
x=f^{-1}(y)=g(y)
$$



Thus



$$
(\frac{1}{x}-1)f(x)=12f^{-1}(x)\\

x=f(f^{-1}(x))=f(\frac{1}{12}(\frac{1}{x}-1)f(x))\\
$$



Thus, we can formulate a recursive numeric search on $f$:



$$
min_{df}\left|x-f\left(\frac{1}{12}(\frac{1}{x}-1)f(x)+df(x)\right)+df\left(\frac{1}{12}(\frac{1}{x}-1)f(x)\right)\right|\\
$$


Thursday 26 November 2015

calculus - Using nth Term Test to find if $\cos(\frac{1}{n})$ is divergent




So in this problem I'm required to use the nth term test for:



$$\sum_{n=1}^\infty\cos(\frac{1}{n})$$



I made it into:



$$\lim_{n\to \infty}\cos(\frac{1}{n})$$



I think it's going to diverge because cosine oscillates, but I don't know how to prove it with the nth term limit that I have above. Would I just divide by $\frac{\cos\frac{1}{n}}{\frac{1}{n}}$? I'm lost.


Answer




Since $$\lim_{n \rightarrow \infty} \cos (\frac{1}{n}) = \cos (0) = 1 \neq 0$$
then the series $$\sum_{n=1}^\infty\cos(\frac{1}{n})$$
will diverge.
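A numerical illustration of this answer (my own Python sketch): the terms tend to $1$ rather than $0$, so the partial sums grow roughly like $N$.

```python
import math

# the terms tend to cos(0) = 1, not 0 ...
assert abs(math.cos(1e-9) - 1) < 1e-12

# ... so the partial sums grow roughly like N
def partial_sum(N):
    return sum(math.cos(1 / n) for n in range(1, N + 1))

assert partial_sum(2000) > 1900
assert partial_sum(4000) > partial_sum(2000) + 1900
```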


Expectation Poisson Distribution



A company buys a policy to insure its revenue in the event of major snowstorms that shut down business. The policy pays nothing for the first such snowstorm of the year and $10,000 for each one thereafter, until the end of the year. The number of major snowstorms per year that shut down business is assumed to have a Poisson distribution with mean 1.5. What is the expected amount paid to the company under this policy during a one-year period?




I know how to calculate the expectation and what the series is. I'm having problems with the summations. I know it should involve:



$$\sum_{k=2}^{+\infty} \frac{(1.5)^k}{k!}$$


Answer



Let $X$ be the number of snowstorms occurring in the given year and let $Y$ be the amount paid to the company. Call one unit of money $\$ 10{,}000$.



Then $Y$ takes the value $0$ when $X=0$ or $X=1$, the value $1$ when $X=2$, the value $2$ when $X=3$, etc..



The expected payment is
$$\eqalign{

\Bbb E(Y)
&=\sum_{k=2}^\infty (k-1)P[X=k]\cr
&=\sum_{k=2}^\infty (k-1) e^{-1.5}{(1.5)^k\over k!}\cr
&=\sum_{k=1}^\infty (k-1) e^{-1.5}{(1.5)^k\over k!}\cr
&=
\sum_{k=1}^\infty k e^{-1.5}{(1.5)^k\over k!}
-\sum_{k=1}^\infty e^{-1.5}{(1.5)^k\over k!}\cr
&=\underbrace{ \sum_{k=0}^\infty k e^{-1.5}{(1.5)^k\over k!}}_{\text{mean of } X} -
\biggl(-e^{-1.5}+\underbrace{\sum_{k=0}^\infty e^{-1.5}{(1.5)^k\over k!}}_{=1}\biggr)\cr
&=1.5+e^{-1.5}- 1\cr

&=0.5+e^{-1.5}\cr
&\approx .7231\,\text{units}.
}$$


real analysis - Find all roots of the equation: $(1+\frac{ix}n)^n = (1-\frac{ix}n)^n$



This question is taken from book: Advanced Calculus: An Introduction to Classical Analysis, by Louis Brand. The book is concerned with introductory real analysis.



I request to help find the solution.





If $n$ is a positive integer, find all roots of the equation :
$$(1+\frac{ix}n)^n = (1-\frac{ix}n)^n$$




The binomial expansion on each side will lead to:



$$(1^n+C(n, 1).1^{n-1}.\frac{ix}n + C(n, 2).1^{n-2}.(\frac{ix}n)^2 + C(n, 3).1^{n-3}.(\frac{ix}n)^3+\cdots ) = (1^n+C(n, 1).1^{n-1}.\frac{-ix}n + C(n, 2).1^{n-2}.(\frac{-ix}n)^2 + C(n, 3).1^{n-3}.(\frac{-ix}n)^3+\cdots )$$



Whether $n$ is odd or even, the terms with even powers of $\frac{ix}n$ on the l.h.s. and r.h.s. are equal and so cancel. In particular, the leading terms $1^n$ cancel each other.




$$(C(n, 1).1^{n-1}.\frac{ix}n + C(n, 3).1^{n-3}.(\frac{ix}n)^3+\cdots ) = (C(n, 1).1^{n-1}.\frac{-ix}n + C(n, 3).1^{n-3}.(\frac{-ix}n)^3+\cdots )$$



As the factors $1^{n-i}$ for $i \in \{1,2,\cdots\}$ don't matter in the products, ignore them:



$$(C(n, 1).\frac{ix}n + C(n, 3).(\frac{ix}n)^3+\cdots ) = (C(n, 1).\frac{-ix}n + C(n, 3).(\frac{-ix}n)^3+\cdots )$$



$$2(C(n, 1).\frac{ix}n + C(n, 3).(\frac{ix}n)^3+\cdots ) = 0$$



$$C(n, 1).\frac{ix}n + C(n, 3).(\frac{ix}n)^3+\cdots = 0$$




Unable to pursue further.


Answer



Hint:
Put $$z=\frac{1+i\frac{x}{n}}{1-i\frac{x}{n}}$$ then $z$ will be an $n$-th root of unity; solve for $x$: $$z= \frac{1+i\frac{x}{n}}{1-i\frac{x}{n}}=\exp{\left(i\frac{2k\pi}{n}\right)},\quad k\in\{0,1,...,n-1\}$$
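Solving the hint's equation for $x$ gives $x = n\tan\frac{k\pi}{n}$, valid whenever $\cos\frac{k\pi}{n}\neq 0$. A numerical sanity check (my own Python sketch):

```python
import math

def check_roots(n):
    for k in range(n):
        if n % 2 == 0 and k == n // 2:
            continue  # cos(k*pi/n) = 0: this k yields no finite root
        x = n * math.tan(k * math.pi / n)
        lhs = (1 + 1j * x / n) ** n
        rhs = (1 - 1j * x / n) ** n
        assert abs(lhs - rhs) <= 1e-6 * max(1.0, abs(lhs)), (n, k, x)

for n in (3, 4, 5, 8):
    check_roots(n)
```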


complex analysis - A difficult functional series (still unsolved)

The problem is to prove if the following functional series converges or not and in the affirmative case to find its sum



$$\qquad\qquad\ \sin^2(\pi x) \sum_{k=1}^\infty\frac{1}{k^2\sin^2(\frac{\pi x}{k})}=\sum_{k=1}^\infty \frac{\sin^2(\pi x)}{k^2\sin^2(\frac{\pi x}{k})}\qquad\ \text{with}\quad x,k\in\Bbb N-\{0\}$$



The denominator of the series, upon which the summation variable acts, can be rewritten in many ways, as follows, none of which have seemed to be useful to solve the problem:



$$\frac{1}{k^2\sin^2(\frac{\pi x}{k})}=\frac{1}{(\pi x)^2\operatorname{sinc}^2(\frac{\pi x}{k})}=\frac{\csc^2(\frac{\pi x}{k})}{k^2}=\frac{1+\cot^2(\frac{\pi x}{k})}{k^2}=\frac{2}{k^2(1-\cos(\frac{2\pi x}{k}))}=\frac{1}{2k^2(1+\cos(\frac{\pi x}{k}))}+\frac{1}{2k^2(1-\cos(\frac{\pi x}{k}))}$$

where
$$ \csc(x)=\frac{1}{\sin(x)}\quad\text{and}\quad \operatorname{sinc}(x)=\frac{\sin(x)}{x}$$



Also



$$\frac{1}{k^2\sin^2(\frac{\pi x}{k})}=\frac{1}{\pi^2}\sum_{m=-\infty}^\infty \frac{1}{(x-mk)^2}=\frac{1}{\pi^2}\sum_{m=-\infty}^\infty \frac{1}{(x+mk)^2}$$



since



$$\frac{1}{\sin^2(x)}=\csc^2(x)=\sum_{m=-\infty}^\infty\frac{1}{(x-m\pi)^2}=\sum_{m=-\infty}^\infty\frac{1}{(x+m\pi)^2}$$




Maybe it is worth noting that the series considered resembles in some way the so-called Flint Hills series (which, differently from the one considered here, is a numerical series: http://mathworld.wolfram.com/FlintHillsSeries.html), for which, to this day, it is not known whether it converges or not.



Note:



It is obvious that the result is always $0$, and hence trivial, if the function is evaluated before performing the sum, but this is not what is intended in this case.



In fact, for example, the value resulting for $x=10$ after the summation of



$$\sum_{k=1}^N \frac{\sin^2(\pi x)}{k^2\sin^2(\frac{\pi x}{k})}\qquad\ \text{with}\quad x,k,N\in\Bbb N-\{0\}$$




has been performed up to $N\geqslant x=10$ is $4$, and will remain $4$ as $N \to \infty$.



Summarizing, it can be easily verified (for convenience with a software like Maple, for example) that



$$\forall\alpha\in\Bbb N-\{0\}\qquad\lim_{x\to\alpha}\Biggl(\sum_{k=1}^{N\geqslant \alpha} \frac{\sin^2(\pi x)}{k^2\sin^2(\frac{\pi x}{k})}\Biggr)=\beta\neq0\qquad\ \text{with}\quad \beta\in\Bbb N-\{0\}$$
where, clearly, this is one of those cases in which the operation of limit and summation cannot be interchanged.
Another example, but that can be carried out by hand:



$$\lim_{x\to 2}\Biggl(\sum_{k=1}^2\frac{\sin^2(\pi x)}{k^2\sin^2(\frac{\pi x}{k})}\Biggr)=\lim_{x\to 2}\Biggl(1+\frac{\sin^2(\pi x)}{2(1-\cos(\pi x))}\Biggr)=2\neq0$$




So, from the calculations of the partial sums up to an arbitrary extent it is evident that, numerically, the summation converges (pointwise?); the problem is to see whether it is possible to find, first, an analytical closed form for the summation.
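The worked examples above can be reproduced numerically; here is a small Python sketch of mine, evaluating the partial sums slightly off the integer to mimic the limit:

```python
import math

def partial_sum(x, N, eps=1e-7):
    # evaluate just off the integer x to mimic the limit x -> alpha
    xx = x + eps
    return sum(math.sin(math.pi * xx) ** 2
               / (k**2 * math.sin(math.pi * xx / k) ** 2)
               for k in range(1, N + 1))

# worked example: the limit at x = 2 with N = 2 terms is 2
assert abs(partial_sum(2, 2) - 2) < 1e-3
# the value reported above for x = 10 is 4
# (the nonzero contributions come from the divisors k of 10)
assert abs(partial_sum(10, 50) - 4) < 1e-2
```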

How do you approach a logarithm question with absolute value at its base?

So I'm a third-year high school student and I'm stuck on this question. How do you approach this logarithm to get the domain of $x$?



$$\log_{|1-x|}(x+5) > 2$$



The answer is $-1 < x < 0$ and $2 < x < 4$



Also, how do you graph the logarithm function with an absolute value? I tried Desmos but it doesn't seem to allow an absolute value in the log base.

Thank you very much.
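Not an answer, but a brute-force numerical check of the quoted solution set (my own Python sketch). Splitting on the base: if $|1-x|>1$ the inequality becomes $x+5>(1-x)^2$, while $0<|1-x|<1$ would force $x+5<|1-x|^2<1$, impossible since $x+5>5$ there.

```python
import math

def holds(x):
    base = abs(1 - x)
    if base == 0 or base == 1 or x + 5 <= 0:
        return False  # the logarithm is undefined here
    return math.log(x + 5, base) > 2

# points inside the claimed solution set -1 < x < 0 and 2 < x < 4
assert holds(-0.5) and holds(3) and holds(3.9)
# points outside it
assert not holds(-2) and not holds(1.5) and not holds(4.5)
# endpoints are excluded (equality, or an undefined base)
assert not holds(-1) and not holds(0) and not holds(2) and not holds(4)
```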

calculus - product $sigma$-measurable function and measurability in one component



I'm struggling with the proof for this:




Let $(X,\mathcal{X})$ and $(Y,\mathcal{Y})$ be two measurable spaces and recall the product $\sigma$-field $\mathcal{X}\otimes \mathcal{Y}$. Let $f:X\times Y\to \mathbb{R}$ be an $\mathcal{X}\otimes \mathcal{Y}$-measurable function. Show that for all $x\in X$ the map $f_x:Y\to \mathbb{R}$, $f_x(y)=f(x,y)$, is $\mathcal{Y}$-measurable.




My attempt, the preimage of $f_x$ is the set $M_x=\{y\in Y:f(x,y)\in X\times Y\}$, if we have shown that $M_x\in \mathcal{Y}$ then we are done, since the image of $f_x$ is equal to $f(x,y)$ which is in the product $\sigma$-Algebra.




My problem is that, to me, it seems the statement $M_x\in \mathcal{Y}$ already follows from the definition of $\mathcal{Y}$, since $\{y\in Y\}\in \sigma(Y)$.


Answer



Here's a sketch to get you started. You should try to fill in the details.



It suffices to show that for every $A \in \mathcal{X}\otimes \mathcal{Y}$ the set $A_x = \{y \in Y: (x,y) \in A \}$ is in $\mathcal{Y}$. And for this it suffices to show that the class $\mathcal{E}$ of all $A \in \mathcal{X}\otimes \mathcal{Y}$ for which this claim holds is a sigma-algebra that contains the measurable rectangles $B \times C$ with $B \in \mathcal{X}$, $C \in \mathcal{Y}$. It's clear that $\mathcal{E}$ contains the measurable rectangles. To see that it's a sigma-algebra, note that $(A_x)^c = (A^c)_x$ and similarly for countable unions.


Laurent Series Expansion Complex Analysis



So here is the problem. I am having a lot of trouble with Laurent expansions, and if you guys know any sources where I can learn these really well and simply, that would be a great help. But here is the question I am having trouble with specifically:



Expand



$$ \frac {1} {z(z-1)(z-2)}$$ in a laurent series in the following region: $1< |z|<2$




What I have:



After doing all that partial fraction stuff I get the Laurent expansion $$ \frac {1} {(z-1)(z-2)} = - \sum_{n=0}^{\infty}\left( \frac{z^n}{2^{n+1}} + \frac{1}{z^{n+1}}\right) $$ for the region stated above. But how do I incorporate the $1/z$ factor in there as well? I have never done this with three terms before. I don't know how to get the answer and am starting to get really frustrated.


Answer



$f(z) = {1 \over z (z-1)(z-2) } = {1 \over 2z} - {1 \over z-1} + {1 \over 2(z-2) }$.



For $|z|<2$, we have ${1 \over 2(z-2)} = -{1 \over 4} ({1 \over 1- {z \over 2} } ) = -{1 \over 4} \sum_{k=0}^\infty {1 \over 2^k} z^k$.



For $|z|>1$, we have $-{1 \over z-1} = -{1 \over z} ( {1 \over 1- {1 \over z} } ) = -\sum_{k=0}^\infty {1 \over z^{k+1} } = - \sum_{k=-\infty}^{0} z^{k-1} = - \sum_{k=-\infty}^{-1} z^{k}$.




Hence $f(z) = \sum_{k=-\infty}^{\infty} f_k z^k $, where
$f_k = \begin{cases}
-1, & k < -1 \\
-{1 \over 2}, & k = -1 \\
-{1 \over 2^{k+2}}, & k > -1
\end{cases} $.
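The coefficients in this answer can be verified numerically by comparing the truncated Laurent sum with $f$ at a point of the annulus (my own Python sketch):

```python
def f(z):
    return 1 / (z * (z - 1) * (z - 2))

def f_k(k):
    if k < -1:
        return -1.0
    if k == -1:
        return -0.5
    return -1.0 / 2 ** (k + 2)

z = 1.4 + 0.3j                      # a point with 1 < |z| < 2
approx = sum(f_k(k) * z**k for k in range(-80, 80))
assert abs(approx - f(z)) < 1e-9
```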


complex numbers - What would have been if $sqrt{-1}$ were named differently?

Everyone here knows $\sqrt{-1}$ is called the imaginary unit. If, suppose, we are doing a calculation regarding a physical situation and some of the solutions at the end turn out to be "imaginary" while some are real, then we reject the imaginary ones without even giving them a second thought, $\textbf{just because they were imaginary}$ (at least I was told to do so). But the other real solutions are given a thought about why we are rejecting them before rejecting the ones not physically realizable.



Now, suppose we had not given this name "imaginary"; then people would surely have given them at least some thought before rejecting them. One case where I encountered a "purely imaginary" (not complex) solution being accepted is in the calculation of the spacetime interval in general relativity, to classify intervals as space-like, time-like or null-like. It sure has a physical significance in GR. But, in general, how can we always say complex solutions solve no physical situation? Let me give you an example.



Say, we are calculating where will maximum bending occur in a simply supported beam given a force distribution on it. The beam is of length $5$ (from $x=0$ to $x=5$). And suppose we get solutions as $x=\{-1,\ 3,\ 1-i,\ 1+i\}$. From these solutions, we will reject $x=-1$ as it lies outside the beam but on what basis will we reject the complex solutions if they were not given the name "imaginary". The only reason I have to reject the complex solutions is that the solution $x=3$ solves the situation and my intuition (and practical work) tells me this is the only solution for this physical situation.



If someone could give me a different and satisfying way of thinking why we reject complex solutions (even if only in the case I used) would be really appreciated.

integration - Closed form of $\int_0^{+\infty}\frac{x\pi}{x\pi+2\sinh(x\pi)} \, dx$

I have numerically computed the integral $\int\limits_0^{+\infty}\frac{x\pi}{x\pi+2\sinh(x\pi)} \, dx$; its value is approximately $0.298549$, and I suspect it is a rational number. An inverse symbolic calculator doesn't give anything. I think that it may have a closed form since it's related to the exponential function. How can I evaluate that in a closed form?
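For reference, the quoted numerical value can be reproduced with a straightforward quadrature; a Python sketch of mine (the cutoff at $30$ is safe since the integrand decays like $e^{-\pi x}$):

```python
import math

def g(x):
    if x == 0.0:
        return 1.0 / 3.0            # limiting value at x = 0
    t = math.pi * x
    return t / (t + 2 * math.sinh(t))

# composite Simpson's rule on [0, 30]; the tail beyond 30 is negligible
n, a, b = 60000, 0.0, 30.0
h = (b - a) / n
total = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h)
                          for i in range(1, n))
integral = total * h / 3
assert abs(integral - 0.298549) < 1e-4
```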

algebra precalculus - Deriving the Formula for Average Speed (Same distance).




Let me start off by specifying the question:





A and B are two towns. Kim covers the distance from A to B on a scooter at 17 km/hr and returns to A on a bicycle at 8 km/hr. What is his average speed during the whole journey?





I solved this problem by using the formula (since the distances are same):




$$ \text{Average Speed (Same distance)} = \frac{2xy}{x+y} = \frac{2\times17\times8}{17+8} = 10.88 \text{ km/hr}$$



Now I actually have two questions:



Q1. I know that $$ Velocity_{Average}= \frac{\Delta S }{\Delta T} $$
Now here, does $\Delta S$ represent $\frac{S_2+S_1 }{2}$ or $S_2-S_1$?



where $S_2$ is the distance covered from point A to point B and $S_1$ is the distance covered from point B to point A.



Q2. How did they derive the equation:

$$ Velocity_{Average(SameDistance)} = \frac{2xy}{x+y} $$



Could anyone derive it by using
$$ Velocity_{Average}= \frac{\Delta S }{\Delta T} $$


Answer



If one traveled distance $d_k$ at speed $v_k$, this took time $t_k=\dfrac{d_k}{v_k}$. It took time $T=\sum\limits_kt_k$ to travel distance $D=\sum\limits_kd_k$ and the average speed $V$ solves $D=VT$, hence $$V=\frac{\sum\limits_kd_k}{\sum\limits_k\frac{d_k}{v_k}}.
$$
In the particular case when there are $n$ distances which are all equal, one gets $V=\dfrac{n}{\sum\limits_{k=1}^n\frac1{v_k}}$, or
$$\frac1V=\frac1n\sum\limits_{k=1}^n\frac1{v_k}.
$$
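The answer's formula specializes, for equal legs, to the harmonic mean of the speeds; a small Python sketch of mine checking it against the worked example:

```python
def average_speed_equal_distances(speeds):
    """Harmonic mean of the leg speeds, valid when all legs have the same length."""
    return len(speeds) / sum(1 / v for v in speeds)

v = average_speed_equal_distances([17, 8])   # Kim's two legs, in km/hr
assert abs(v - 2 * 17 * 8 / (17 + 8)) < 1e-12
assert abs(v - 10.88) < 1e-9
```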



calculus - How to solve $\lim_{n\to\infty} \frac {(n!)^\frac{1}{n}}{n}$?

I need to find the limit for:




$ \lim_{n\to\infty} \frac {(n!)^\frac{1}{n}}{n} $



I know the answer is $\frac {1}{e}$ but I have no idea how to get that answer.
I'd appreciate some help.
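Not a derivation (Stirling's formula is one standard route), but the limit is easy to check numerically; a Python sketch of mine using the log-gamma function so that $n!$ is never formed explicitly:

```python
import math

def a(n):
    # (n!)^(1/n) / n = exp(log(n!)/n - log(n)), with log(n!) = lgamma(n+1)
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

assert abs(a(10**6) - 1 / math.e) < 1e-4
# convergence is slow: the error decays roughly like log(n)/n
assert abs(a(1000) - 1 / math.e) > abs(a(10**6) - 1 / math.e)
```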

Wednesday 25 November 2015

calculus - Calculate $\int \frac{1}{\sqrt{4-x^2}}dx$



Calculate $$\int \dfrac{1}{\sqrt{4-x^2}}dx$$



Suppose that I only know regular substitution, not trig.



I tried to get help from an integral calculator, and what they did was:



$$\text{Let u = $\frac{x}{2}$} \to\dfrac{\mathrm{d}u}{\mathrm{d}x}=\dfrac{1}{2}$$




Then the integral became:



$$={\displaystyle\int}\dfrac{1}{\sqrt{1-u^2}}\,\mathrm{d}u = \arcsin(u) = \arcsin(\frac{x}{2})$$
And I'm not sure how they accomplished this: where did the $4$ go? I understand the $\arcsin$ part but I'm not sure how they got rid of the $4$. Also, how did they know to substitute $\frac{x}{2}$? It doesn't seem very obvious to me.


Answer



$$\int \frac{\text{d}x}{\sqrt{4-x^2}}=\int \frac{2 \ \text{d}u}{\sqrt{4-(2u)^2}}=\int \frac{2 \ \text{d}u}{\sqrt{4(1-u^2)}}=\int \frac{2 \ \text{d}u}{2\sqrt{1-u^2}}=\int \frac{ \text{d}u}{\sqrt{1-u^2}} $$



Why especially this substitution: Notice that



$$\int \frac{\text{d}x}{\sqrt{4-x^2}}=\int \frac{\text{d}x}{\sqrt{4\left(1-\frac14x^2\right)}}=\int \frac{\text{d}x}{2\sqrt{1-\left(\frac{x}{2} \right)^2}}$$




so you can see that it is quite nice to substitute $u=\frac{x}{2}$; we get a function $\frac{1}{\sqrt{1-u^2}}$ and we already know the integral to this one.
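One can also verify the result by differentiating numerically (my own Python sketch): the derivative of $\arcsin(x/2)$ should reproduce the integrand on $(-2,2)$.

```python
import math

def F(x):
    return math.asin(x / 2)          # the antiderivative obtained above

def integrand(x):
    return 1 / math.sqrt(4 - x**2)

h = 1e-6
for x in (-1.5, -0.3, 0.0, 0.7, 1.8):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric_derivative - integrand(x)) < 1e-6, x
```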


calculus - Find $\lim_{n\to\infty}\frac{\left\lfloor \frac{n+1}{2} \right\rfloor!}{n!}$

Find
$$\lim_{n\to\infty}\frac{\left\lfloor \dfrac{n+1}{2} \right\rfloor!}{n!}$$



I tried Stirling's formula but I seem to get nowhere. How should I proceed?

probability - Let $X_n$ be the $n$-th partial sum of i.i.d. centralized rv and $\mathcal{F}_m:=\sigma(X_n,n\le m)$, then $\text{E}[X_n\mid\mathcal{F}_m]=X_m$



Let





  • $(\Omega,\mathcal{F},\text{P})$ be a probability space

  • $\left(Y_i\right)_{i\in\mathbb{N}}$ be a sequence of i.i.d. random variables $(\Omega,\mathcal{F})\to\left(\mathbb{R},\mathcal{B}\left(\mathbb{R}\right)\right)$ with $\operatorname{E}[Y_i]=0$ and $$X_n:=Y_1+\cdots+Y_n$$

  • $\mathcal{F}_m:=\sigma\left(X_n,n\le m\right)$ be the smallest $\sigma$-Algebra such that $X_1,\ldots,X_m$ are measurable with respect to $\mathcal{F}_m$

  • $\operatorname{E}\left[X_n\mid\mathcal{F}_m\right]$ denote the conditional expectation of $X_n$ given $\mathcal{F}_m$



Maybe it's because there are too many new concepts for me (conditional expectation, filtrations, ...), but I don't understand why we've got $$\operatorname{E}\left[X_n\mid\mathcal{F}_m\right]=X_m\;\;\;\text{for all }m$$

Answer




As @aerdna91 pointed out, the identity



$$\mathbb{E}(X_n \mid \mathcal{F}_m) = X_m \tag{1}$$



holds only for $m \leq n$. For $m>n$, we have



$$\mathbb{E}(X_n \mid \mathcal{F}_m) = X_n.$$



To prove $(1)$, we consider the case $m=n-1$. Then, as $X_n = X_{n-1} +Y_n$,




$$\mathbb{E}(X_n \mid \mathcal{F}_{n-1}) = \underbrace{\mathbb{E}(X_{n-1} \mid \mathcal{F}_{n-1})}_{X_{n-1}}+ \mathbb{E}(Y_n \mid \mathcal{F}_{n-1}).$$



Now, since $\mathcal{F}_{n-1}$ and $Y_n$ are independent, the second term equals



$$\mathbb{E}(Y_n \mid \mathcal{F}_{n-1}) = \mathbb{E}(Y_n)=0.$$



Hence, we have shown that



$$\mathbb{E}(X_n \mid \mathcal{F}_{n-1}) = X_{n-1} \tag{2}.$$




Now $(1)$ follows by iterating this procedure.



Remark: The proof shows that $(X_n,\mathcal{F}_n)_{n \in \mathbb{N}}$ is a martingale.


if $i$ is a number then what is its numerical value?



$i$ is the imaginary unit of a complex number, but there is a question that has me mixed up; probably I missed the definition of a number. Wolfram Alpha assumes $i$ to be a number, while others assume it to be a variable because it satisfies $\sqrt{i^2} = +i$ or $-i$. So my question here:




Question:
Is $i$ a number? If so, what is its value?


Answer



Asking what's the value of $i$ is like asking what's the value of $2$. And, just like $i^2$ has two square roots, $i$ and $-i$, $2^2$ has two square roots, $2$, and $-2$.



And yes, it is a number, not a variable.


trigonometry - Prove that the envelope of the family of lines $(\cos\theta+\sin\theta)x+(\cos\theta-\sin\theta)y+2\sin\theta-\cos\theta-4=0$



Prove that the envelope of the family of lines $(\cos\theta+\sin\theta)x+(\cos\theta-\sin\theta)y+2\sin\theta-\cos\theta-4=0$



I did not know much about how to find envelope of a curve.I read on Wolfram and tried solving but did not get the desired answer.




I partially differentiated $(\cos\theta+\sin\theta)x+(\cos\theta-\sin\theta)y+2\sin\theta-\cos\theta-4=0$ wrt $\theta$,getting



$(\cos\theta-\sin\theta)x-(\cos\theta+\sin\theta)y+2\cos\theta+\sin\theta=0$



then I squared and added them but could not eliminate $\theta$ fully. Is my method correct?



Please help me.


Answer



HINT




I would say: add and subtract the equation and its derivative; after simplification you get two equations:



$(2 x+1) \sin(\theta)+(2 y-3) \cos(\theta)=4$ , $\quad (2 x+1) \cos(\theta)-(2 y-3) \sin(\theta)=4$



I am sure that you can take it from here.
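Squaring and adding the two equations from the hint eliminates $\theta$ and yields $(2x+1)^2+(2y-3)^2=32$, a circle. A numerical check of this conclusion (my own Python sketch):

```python
import math

def envelope_point(theta):
    # solving the pair of equations from the hint for (2x+1, 2y-3)
    s, c = math.sin(theta), math.cos(theta)
    u = 4 * (s + c)                  # u = 2x + 1
    w = 4 * (c - s)                  # w = 2y - 3
    return (u - 1) / 2, (w + 3) / 2

for theta in (0.0, 0.4, 1.1, 2.7, 5.0):
    x, y = envelope_point(theta)
    line = ((math.cos(theta) + math.sin(theta)) * x
            + (math.cos(theta) - math.sin(theta)) * y
            + 2 * math.sin(theta) - math.cos(theta) - 4)
    assert abs(line) < 1e-12                     # the point lies on the line
    assert abs((2 * x + 1) ** 2 + (2 * y - 3) ** 2 - 32) < 1e-12
```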


combinatorics - Probability that rolling X dice with Y sides and summing the highest Z values is above some value k



Some background: There is an RPG called Legend of the Five Rings, with an interesting dice system. You roll X dice, and keep the highest Z of them. You add those Z dice together. This is phrased as "X keep Z". All of the dice have ten sides, and if you roll a 10, it "explodes" (you roll again). It continues to explode until you roll something other than a ten, adding all the values together, so that one die is worth more than 10.



I'm trying to create my own RPG system, and am looking at different dice options. That said, I'd like to know how to calculate several different probabilities similar to the L5R roll and keep system:



1) What is the probability that X keep Z on Y sided dice is greater than k, with exploding dice?




2) What is the probability that X keep Z on Y sided dice is greater than k, without exploding dice?



3) What is the probability that X keep Z lowest on Y sided dice is greater than k?


Answer



Let $(S_1, \ldots, S_n)$ be a random vector of i.i.d. outcomes of the die throws. Also let $S_{k \colon n}$ denote the order statistics from that sample. The total score of the $n$-keep-$k$ scheme equals $T = \sum_{i=n-k+1}^n S_{i \colon n}$.



Because $S_i$ are positive discrete random variables, the technique of finding the probability generating function is the most promising:
$$
\mathcal{P}_T(z) = \mathbb{E}\left(z^T\right)

$$
Once the probability generating function is known, probabilities of possible outcomes can be read off as series coefficients:
$$
\Pr(T=t) = [z^t] \mathcal{P}_T(z)
$$
Moreover, the probabilities $\Pr(T \leqslant t)$ and $\Pr(T > t)$ can be read off as well:
$$
\Pr(T \leqslant t) = \sum_{k=0}^{t} [z^k] \mathcal{P}_T(z) = [z^t] \frac{\mathcal{P}_T(z)}{1-z}
$$
$$

\Pr(T > t) = 1 - \Pr(T \leqslant t) = [z^t] \frac{1 - \mathcal{P}_T(z)}{1-z}
$$



For all of the cases of interest $\mathcal{P}_T(z)$ is a polynomial or rational function in $z$. But finding it requires knowledge of the joint distribution of the order statistics $\{S_{i\colon n}\}$.



Using Mathematica, the distribution of the total score can be found as follows:



TotalHighestScoreDistribution[{x_, z_}, dist_] :=
 Block[{v},
  TransformedDistribution[Total[Array[v, z]],
   Distributed[Array[v, z],
    OrderDistribution[{dist, x}, Range[x - z + 1, x]]]]]

TotalLowestScoreDistribution[{x_, z_}, dist_] :=
 Block[{v},
  TransformedDistribution[Total[Array[v, z]],
   Distributed[Array[v, z], OrderDistribution[{dist, x}, Range[z]]]]]


Here I can provide the answer for explicit choices of the dice systems.




For $6$-keep-$3$ non-exploding 10-sided die:



[plot: probability distribution of the 6-keep-3 total score]



Similarly, for the keep-lowest scores system:



[plot: probability distribution of the keep-lowest total score]



The exploding die case I did by simulation, mostly because the probability generating function could not be computed in closed form:




[plot: simulated distribution for the exploding-die case]
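For readers without Mathematica, here is a rough Monte Carlo sketch of the same roll-and-keep quantities in Python (my own code; the function names are mine, and the exploding rule follows the description at the top of the question):

```python
import random

def roll_die(sides, exploding=False, rng=random):
    """One die; an exploding die rerolls on the max face and accumulates."""
    total = 0
    while True:
        r = rng.randint(1, sides)
        total += r
        if not (exploding and r == sides):
            return total

def keep_highest(x, z, sides, exploding=False, rng=random):
    """Roll x dice, keep the highest z, and sum them ("x keep z")."""
    rolls = sorted(roll_die(sides, exploding, rng) for _ in range(x))
    return sum(rolls[-z:])

def prob_above(k, x, z, sides, exploding=False, trials=50_000):
    """Monte Carlo estimate of P(total > k)."""
    rng = random.Random(0)  # fixed seed for reproducibility
    hits = sum(keep_highest(x, z, sides, exploding, rng) > k
               for _ in range(trials))
    return hits / trials

# sanity checks: 6-keep-3 on d10 totals lie in [3, 30] without explosions
assert prob_above(2, 6, 3, 10) == 1.0
assert prob_above(30, 6, 3, 10) == 0.0
# explosions can push the total past 30
assert prob_above(30, 6, 3, 10, exploding=True) > 0.0
```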


abstract algebra - Extension of automorphism of field



Let $F$ be a field of characteristic zero, $\overline{F}$ be the algebraic closure of $F$. Let $\zeta_n$ be a primitive $n$-th root of unity in $\overline{F}$. Then it is well-known that $F(\zeta_n)$ is a finite Galois extension of $F$.



Q. If $\sigma:F\rightarrow F$ is a field automorphism, then is it always possible to extend it to an automorphism of $F(\zeta_n)$?







The question might be trivial, I do not know. But usually in Galois theory I had mostly seen the extension of the identity automorphism of a field to its finite (or even Galois) extensions. Here I am considering the problem of extending any automorphism of $F$ to an automorphism of $F(\zeta_n)$.


Answer



Yes, this is always possible. First note that the automorphism of $F$ induces an injective field homomorphism $F\to F(\zeta_{n})$. Then write $F(\zeta_{n})\cong F[T]/(f)$, where $f$ is the minimal polynomial of $\zeta_{n}$. By the universal property of the polynomial ring and of the quotient, sending the class of the variable $T$ to any root of $f$ in the right-hand side gives you the desired automorphism of $F(\zeta_{n})$. The result is bijective because it is an injective homomorphism of $F$-vector spaces of the same finite dimension.


calculus - Mean value theorem with $ln(x)$



I understand how to do the mean value theorem but I'm not sure how to apply it with $\ln(x)$.




$$f(x) = \ln(x), \ [1, 8]$$



How can I find a $c$ that satisfies the conclusion of the Mean Value theorem by using $\ln(x)$?



I know it's $\dfrac{f(b)-f(a)}{b-a}$; then take the derivative and fill in the slope.



But how do I solve this with $\ln$? I have only done this with quadratics.


Answer



The mean value theorem states that if $f(x)$ is continuous on an interval $[a,b]$ and differentiable on $(a,b)$, then there exists a $c \in (a,b)$ such that $$f'(c) = \dfrac{f(b)-f(a)}{b-a}$$

In your case, the function $f(x) = \ln(x)$ is continuous on an interval $[1,8]$ and differentiable on $(1,8)$. The derivative of $\ln(x)$ is $\dfrac1{x}$ in the interval $(1,8)$. Hence, by mean value theorem, $\exists c \in (1,8)$ such that $$f'(c) = \dfrac1c = \dfrac{f(8)-f(1)}{8-1} = \dfrac{\ln(8) - \ln(1)}{8-1} = \dfrac{3 \ln(2) - 0}{7} = \dfrac{3 \ln(2)}7$$
Hence, the desired point $c$ is $\dfrac7{3 \ln(2)}$.
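A quick check of this value (my own Python sketch):

```python
import math

a, b = 1, 8
c = 7 / (3 * math.log(2))                 # the point found above
mean_slope = (math.log(b) - math.log(a)) / (b - a)
assert a < c < b                          # c lies inside (1, 8)
assert abs(1 / c - mean_slope) < 1e-12    # f'(c) = 1/c equals the mean slope
```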


Tuesday 24 November 2015

calculus - Is the usual proof of $\lim_{x\rightarrow 0}\frac {\sin(x)}{x} = 1$ an honest proof?




In a lot of textbooks on Calculus a proof that $\lim_{x\rightarrow 0}\frac {\sin(x)}{x} = 1$ is the following:



[figure: the unit circle with triangles $ABC$, $ABD$ and the circular sector between them]



Comparing the areas of triangles $ABC$, $ABD$ and the circular sector, you get:
$$
\sin(x) < x < \tan(x)
$$
from which you have:

$$
\cos(x)<\frac {\sin(x)}{x}<1
$$
from which it immediately follows that $\lim_{x\rightarrow 0}\frac {\sin(x)}{x} = 1$



Don't you think that this kind of proof is not honest? I mean, in the proof we use the fact that the area of the sector is $\frac12xr^2$. This formula comes from integrating (an "infinite" sum of "infinitesimal" triangles' areas). So, roughly speaking, we somehow implicitly use that the length of the "infinitesimal" chord $BC$ is equal to the length of the "infinitesimal" arc $rx$, which is equivalent to $\lim_{x\rightarrow 0}\frac {\sin(x)}{x} = 1$.



Do I miss something?


Answer



It is a perfectly honest proof, but it is based on a few assumptions about areas bounded by plane curves and the definitions of trigonometric functions. The main assumption is that a sector of a circle has an area. The proof of this assumption requires real analysis/calculus, but it does not require the limit $\lim_{x\to 0}\dfrac{\sin x}{x}=1$. Once this assumption is established, the next step is to define the number $\pi$ as the area of the unit circle (circle of radius $1$).




Next we consider the same figure given in question. We need to define a suitable measurement of angles. This can be done in many ways, and one of the ways is to define the measure of $\angle CAB$ as twice the area of sector $CAB$. Note that for this definition to work it is essential that radius of sector is $1$ (which is the case here).



Further we define the functions $\sin x,\cos x$ in the following manner. Let $A$ be origin of the coordinate axes and $AB$ represent positive $x$-axis. Also let the measure of $\angle CAB$ be $x$ (so that area of sector $CAB$ is $x/2$). Then we define the coordinates of point $C$ as $(\cos x,\sin x)$.



Now that we know these assumptions, the proof presented in the question is valid and honest. This is one of the easiest routes to a proper theory of trigonometric functions. The real challenge however is to show that a sector of a circle has an area.


integration - Why does integral and the imaginary part commute?



I have many times encountered (and used myself) the following technique:



$$\int \sin x \mathrm{d}x = \int \operatorname{Im}(e^{ix}) \mathrm{d}x = \operatorname{Im} \left( \int e^{ix} \mathrm{d}x \right) = \operatorname{Im}( -ie^{ix}) + C = -\cos x + C$$



Not only in this case, but I've used this kind of transform many times, instinctively, to solve many of those monster trig integrals (and it works like a miracle), but never justified it.




Why and how is this interchange of integral and imaginary part justified?



At first, I thought it might be always true that we can do such a type of interchange anywhere, so, I tried the following: $\operatorname{Im}(f(z)) = f(\operatorname{Im}(z))$. But this is clearly not true, as the LHS is always real but RHS can be, possibly, complex too.



Second thoughts. I realized that we are dealing with operators here, not really functions. Both the integral and the imaginary part are operators. So we have a composition of operators, and we want to check when these operators commute. I couldn't really draw any further conclusions from here and am stuck with the following questions:



When and why is the following true: $\int \operatorname{Im}(f(z)) \mathrm{d}z= \operatorname{Im} \left( \int f(z) \mathrm{d}z \right)$? (Provided that $f$ is integrable)



Is it always true? (Because like I've used it so many times and never found any counter example)




Edit: I am unfamiliar with integration of complex-valued functions, but what I have in mind is that while doing such a thing, I tend to think of $i$ as just some constant (I hope this doesn't sound really weird), as I stated in the example in the beginning. To be more precise, I have something like this in mind: a complex-valued function $f(z)$ can be thought of as $f(z) = f(x+iy) = u(x,y) + iv(x,y)$ where $u$ and $v$ are real-valued functions, and we can now use our definition for integration of real-valued functions as
$$\int f(z) \mathrm{d}z = \int (u(x,y) + iv(x,y)) \mathrm{d}(x+iy) = \left(\int u\mathrm{d}x - \int v\mathrm{d}y\right) +i\left(\int v\mathrm{d}x + \int u\mathrm{d}y\right)$$


Answer



You can always write $f = \operatorname{Re}(f)+i\operatorname{Im}(f)$. Then, by linearity, $\int f = \int \operatorname{Re}(f)+i\int \operatorname{Im}(f)$. But this is clearly the unique decomposition of $\int f$ into its real and imaginary parts, since both $\int \operatorname{Re}(f)$ and $\int \operatorname{Im}(f)$ are real numbers; hence we must have $\operatorname{Re}\int f = \int \operatorname{Re}f$, and the same for the imaginary part.



This is by the way a special case of the following more general observation:



If $E,F$ are complex Banach lattices and $T:E\to F$ is a real operator, i.e. mapping real elements to real elements, then $T\circ \operatorname{Re} = \operatorname{Re}\circ T$.
Positive operators are a special case of real operators, and your question is a special case with $E = L^1$, $F=\mathbb C$, $T=\int$.
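The opening example can be verified numerically. Below is an illustrative sketch (the interval $[0,\pi]$ and grid size are arbitrary choices), comparing $\operatorname{Im}\int_0^\pi e^{ix}\,dx$ with $\int_0^\pi \operatorname{Im}(e^{ix})\,dx = \int_0^\pi \sin x\,dx = 2$:

```python
import cmath

# Illustrative sanity check (grid size and interval are arbitrary choices):
# compare Im(integral of e^{ix}) with integral of Im(e^{ix}) = sin x over [0, pi].
N = 100_000
a, b = 0.0, cmath.pi
h = (b - a) / N
xs = [a + (k + 0.5) * h for k in range(N)]   # midpoint-rule nodes

complex_integral = sum(cmath.exp(1j * x) for x in xs) * h       # approx of ∫ f
imag_of_integral = complex_integral.imag                        # Im ∫ f
integral_of_imag = sum(cmath.exp(1j * x).imag for x in xs) * h  # ∫ Im f

# Both should equal ∫_0^π sin x dx = 2.
assert abs(imag_of_integral - integral_of_imag) < 1e-9
assert abs(imag_of_integral - 2.0) < 1e-6
```

The two quantities agree exactly up to rounding, because complex addition is componentwise: summing the imaginary parts is the same as taking the imaginary part of the sum.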


Generalising "composition $n$ times" to the reals over differentiable functions - can it be done?



Proving the statement below is sufficient, but any proof will be enjoyed!



Statement:
For any $f\colon\mathbb R \to\mathbb R$ differentiable over all reals, one can find a function $G : [0,1] \times \mathbb{C} \to \mathbb{C}$ defined such that:





  • $G$ is differentiable over $x$

  • $G(a,G(b,x)) = G(a+b,x)$ for $0 \le a+b \le 1$

  • $G(0,x) = x;$

  • $G(1,x) = f(x)$



Given the $G$ above:




  • Is $G$ unique? If not, how many such functions exist for a given $f$?


  • Is $G$ differentiable over $a$?



If you can relax the condition to continuous $f$ and $G$ that would be even cooler.



EDIT:
In response to a proof that there exists no $G$ for a monotonically decreasing $f$, such as $f(x) = -x$, I decided to relax the demand for $G$ from $G : [0,1] \times \mathbb{R} \to \mathbb{R}$ to $G : [0,1] \times \mathbb{C} \to \mathbb{C}$, which allows you to make an infinite number of functions matching my requests simply by taking $G(a,x) = x(f(x)/x)^a$, which makes the whole thing a bit trivial. My original question has been answered and the remaining one doesn't seem too interesting. What's the policy here? Should I delete the page?



EDIT 2: D'oh! $G(a,x) = x(f(x)/x)^a$ only works for linear functions. Disregard edit 1. My statement has still been disproved though; the answer will be accepted as soon as I figure out how.


Answer



You seem to be looking for a fractional iterate of any $f\colon\mathbb R\to\mathbb R$; that is, to extend the notion or construct of $f^{\circ n}$ to reals $n$ for which $n\notin\mathbb N$, but still keeping within the domain of continuous functions. For trivial reasons, not possible for any monotone decreasing function $f$. Like $f(x)=-x$. So there can not be such a $G$.




Since I think cow-slowly, I may have misunderstood your question completely. If so, pardon it, I’ll remove.



EDIT: Probably my “trivial reasons” weren’t clear enough. First, your fractional iterate, in particular $f^{\circ 1/2}$, the “half-fold iterate”, must be one-to-one and onto $\mathbb R$. If continuous, it must turn intervals into intervals (the continuous image of a connected set is connected) and bounded closed intervals into bounded closed intervals (the continuous image of a compact set is compact). It follows that any continuous one-to-one onto map of $\mathbb R$ must be monotone increasing or monotone decreasing, and of course strictly so. But the composition of two decreasing functions is increasing. Thus $f(x)=-x$ can not be the “composition-square” of any continuous function.
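The parity obstruction can be illustrated numerically; here is a small sketch (the two decreasing functions are my own arbitrary choices):

```python
# Illustrative check: the composition of two strictly decreasing functions is
# strictly increasing, which is why no continuous g can satisfy g(g(x)) = -x.
def g(x):
    return -x**3 - x        # strictly decreasing on all of R

def h(x):
    return -2 * x + 1       # strictly decreasing on all of R

xs = [i / 10 for i in range(-30, 31)]      # sample points in [-3, 3]
comp = [g(h(x)) for x in xs]

# the composition is strictly increasing on the sample
assert all(comp[i] < comp[i + 1] for i in range(len(comp) - 1))
```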


Help with a infinite geometric series

My professor wants us to evaluate this infinite series in terms of $\phi$ and $i$. However, I have not worked with infinite series, or series in general, in several years. He mentioned using the geometric series formula, but I am not sure how to apply it here. Could someone help me start this?



$\sum_{k=i+1}^\infty \phi^{2k} (\phi^{-1} - \phi^{i})^2 $ given $ |\phi| < 1$
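Since $(\phi^{-1} - \phi^{i})^2$ does not depend on $k$, it can be pulled out of the sum, and the geometric series formula applies to the remaining $\sum_{k=i+1}^\infty \phi^{2k}$. A hypothetical numeric sanity check of the resulting closed form (the values of $\phi$ and $i$ are arbitrary test choices):

```python
# Numeric sanity check (phi and i values are arbitrary test choices):
# sum_{k=i+1}^inf phi^{2k} (phi^{-1} - phi^i)^2
#   = (phi^{-1} - phi^i)^2 * phi^{2(i+1)} / (1 - phi^2),   for |phi| < 1.
phi, i = 0.7, 3
c = (phi**-1 - phi**i) ** 2                       # constant factor, independent of k
closed_form = c * phi ** (2 * (i + 1)) / (1 - phi**2)

# brute-force partial sum; terms decay like phi^{2k}, so 200 terms suffice
partial = sum(phi ** (2 * k) * c for k in range(i + 1, i + 201))

assert abs(closed_form - partial) < 1e-12
```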

geometry - Equilateral Triangle: An Interesting Result



Here's an interesting problem, and result, that I wish to share with the math community here at Math SE.



enter image description here



The above problem has two methods.





  1. Pure geometry. A bit of angle chasing and standard results from circles help us arrive at the desired result - ∆MNP is equilateral.
    I'll put up a picture of the angle chasing part here (I hope someone edits it, and puts up a picture using GeoGebra or some similar software - I'm sorry I'm not good at editing)



I joined BP and CN for angle chasing purposes.



(Note that if M is the midpoint of arc BC, then the figure so formed is a star, with 8 equilateral triangles)



enter image description here





  2. This can be solved beautifully using complex numbers. The vertices of the $\triangle$ can be assumed to be $1,\omega,\omega^2$ on a unit circle centered at the origin.



We need to prove that the new equilateral triangle is essentially a rotation of the original one, about an axis passing through center of its circumcircle perpendicular to its plane.



I haven't posted the solution, hope you fellow Math SE members try the problem and post your solutions and ideas in the answers section.



More methods (apart from geometry and complex numbers) are welcome. I'd like to know more about why this result is interesting in itself, and what other deductions can be made from it.




P.S.
Please use LaTeX wherever necessary and edit this article. Thanks a lot!


Answer



A geometric proof for part a.



From the OP:




We need to prove that the new equilateral triangle is essentially a rotation of the original one, about an axis passing through center of its circumcircle perpendicular to its plane.





I think it's easier to think of it as a reflection. In the diagram below




  • $GH$ is the diameter of the circumcircle that is perpendicular to $AM$.

  • It's given that $BN$ and $CP$ are parallel to $AM$.

  • Hence, points $M,N,P$ are reflections of points $A,B,C$ respectively in the line $GH$.

  • Hence, $\triangle MNP$ is a reflection of $\triangle ABC$ through the line $GH$.




enter image description here



Edit, to add part b.



By way of the reflective symmetry about $GH$ and the rotational symmetry of the equilateral triangles we can note that $AP=BN=CM$, and that $AN=BM=CP$. From this we can draw a system of parallel lines to show that $AM=BN+CP$.



enter image description here
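The reflection argument above can be checked numerically in the complex-number setup suggested in the question. This is a hypothetical sketch: the vertices are taken as the cube roots of unity, and the position of $M$ on arc $BC$ is an arbitrary choice.

```python
import cmath

# Numeric sketch of the reflection argument (M's position is an arbitrary
# choice on arc BC). A, B, C are the cube roots of unity on the unit circle.
w = cmath.exp(2j * cmath.pi / 3)
A, B, C = 1, w, w * w
M = cmath.exp(3.5j)                         # some point on arc BC

d = (M - A) / abs(M - A)                    # unit direction of chord AM
u = 1j * d                                  # direction of GH (perp. bisector of AM)
reflect = lambda z: u * u * complex(z).conjugate()  # reflection in line through O along u

N, P = reflect(B), reflect(C)
assert abs(reflect(A) - M) < 1e-12          # A reflects to M
assert abs(abs(N) - 1) < 1e-12              # N stays on the circumcircle
assert abs(((N - B) / (M - A)).imag) < 1e-12    # BN is parallel to AM
assert abs(abs(N - M) - abs(B - A)) < 1e-12     # reflection preserves side lengths
# part b: AM = BN + CP
assert abs(abs(M - A) - (abs(N - B) + abs(P - C))) < 1e-9
```

The last assertion reflects part b: the signed distances of $A,B,C$ from the line $GH$ sum to zero (since $A+B+C=0$), and each chord length $AM$, $BN$, $CP$ is twice the corresponding distance.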


probability - Showing $int_0^{infty}(1-F_X(x))dx=E(X)$ in both discrete and continuous cases



Ok, according to some notes I have, the following is true for a random variable $X$ that can only take on positive values, i.e. $P(X<0)=0$:



$\int_0^{\infty}(1-F_X(x))dx=\int_0^{\infty}P(X>x)dx$



$=\int_0^{\infty}\int_x^{\infty}f_X(y)dydx$



$=\int_0^{\infty}\int_0^{y}dxf_X(y)dy$




$=\int_0^{\infty}yf_X(y)dy=E(X)$



I'm not seeing the steps here clearly. The first line is obvious and the second makes sense to me, as we are using the fact that the probability of a random variable being greater than a given value is just the density integrated from that value to infinity.



Where I'm lost is why:
$$\int_0^{\infty}\int_x^{\infty}f_X(y)\,dy\,dx=\int_0^{\infty}\int_0^{y}f_X(y)\,dx\,dy$$



Also, doesn't the last line equal $E(Y)$ and not $E(X)$?



How would we extend this to the discrete case, where the pmf is defined only for values of X in the non-negative integers?




Thank you


Answer



The region of integration for the double integral is $x,y \geq 0$ and $y \geq x$. If you express this integral by first integrating with respect to $y$, then the region of integration for $y$ is $[x, \infty)$. However if you exchange the order of integration and first integrate with respect to $x$, then the region of integration for $x$ is $[0,y]$. The reason why you get $E(X)$ and not something like $E(Y)$ is that $y$ is just a dummy variable of integration, whereas $X$ is the actual random variable that defines $f_X$.
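The identity can be checked numerically; this is an illustrative sketch (the distributions, grid, and truncation points are my own arbitrary choices). The second part addresses the discrete question at the end: for integer-valued $X \ge 0$, the standard analogue is $E(X) = \sum_{n=0}^{\infty} P(X > n)$.

```python
import math

# Continuous case: for X ~ Exp(1), E[X] = 1 and 1 - F(x) = e^{-x},
# so the integral of (1 - F(x)) over [0, inf) should be 1.
# Midpoint rule on [0, 50]; the tail beyond 50 is ~e^{-50}, negligible.
N = 200_000
h = 50.0 / N
survival_integral = sum(math.exp(-(k + 0.5) * h) for k in range(N)) * h
assert abs(survival_integral - 1.0) < 1e-6

# Discrete analogue: for integer-valued X >= 0, E[X] = sum_{n>=0} P(X > n).
# Check with X ~ Geometric(p) on {1, 2, ...}: E[X] = 1/p and P(X > n) = (1-p)^n.
p = 0.3
tail_sum = sum((1 - p) ** n for n in range(500))   # terms beyond 500 are negligible
assert abs(tail_sum - 1 / p) < 1e-9
```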


Monday 23 November 2015

calculus - Why is $(1+\frac{1}{n})^n=e$ when $n$ goes to infinity?





Why is $\lim\limits_{n\to\infty}(1+\frac1n)^n=e$?



I think it involves $\sum\limits_{k=0}^\infty\frac1{k!}=e$ but not sure how to get from one to the other.


Answer



Have you tried expanding by the binomial theorem? ;-)
$$\left(1+\frac1n\right)^n=\sum_{k=0}^n\binom{n}k\frac1{n^k}=\sum_{k=0}^n\frac{n!}{k!(n-k)!}\frac1{n^k}$$then as $n\to\infty$ we find $\frac{n!}{n^k(n-k)!}\to1$ for each fixed $k$, hence we have:$$\lim_{n\to\infty}\sum_{k=0}^n\frac{n!}{k!(n-k)!\cdot n^k}=\sum_{k=0}^\infty\frac1{k!}\equiv e$$
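As a numeric companion to the argument (the particular values of $n$ and the number of factorial terms are arbitrary choices):

```python
import math

# Both the sequence (1 + 1/n)^n and the partial sums of sum 1/k! approach e.
for n in (10, 1_000, 1_000_000):
    print(n, (1 + 1 / n) ** n)          # creeps up toward e = 2.718281828...

partial = sum(1 / math.factorial(k) for k in range(20))  # 20 terms of sum 1/k!
assert abs(partial - math.e) < 1e-12
assert abs((1 + 1 / 10**6) ** 10**6 - math.e) < 1e-5
```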


complex numbers - Find a connection how the real part of z depends on the imaginary part



Find a connection how the real part of z depends on the imaginary part, if the following two conditions for the complex number z apply:





  • $|z|=k$, where $k$ is a real number.


  • The real part and the imaginary part of $z$ are positive.




This is what I think:
If the complex number $z$ is $z=a+ib$, then the absolute value is $|z|=\sqrt{a^2+b^2}=k$.



Even if $a$ or $b$ (or both) were negative, the absolute value would still be positive.




Am I anywhere near the answer?



Appreciate your help.


Answer



You're almost there:
\begin{align}
& \sqrt{a^2+b^2} = k \\[10pt]
& a^2+b^2 = k^2 \\[10pt]
& a^2 = k^2 - b^2 \\[10pt]

& a = \sqrt{k^2 - b^2}
\end{align}

and we don't need to say $\text{“}{\pm}\text{''}$ because we know $a\ge0.$
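A tiny numeric check of the final formula (the sample values of $k$ and $b$ are arbitrary, with $0 < b < k$):

```python
import math

# If z = a + ib with a, b > 0 and |z| = k, then a = sqrt(k^2 - b^2).
k, b = 5.0, 3.0
a = math.sqrt(k * k - b * b)   # = 4.0 for this sample
z = complex(a, b)

assert a > 0                   # no need for the minus branch, since a > 0
assert abs(abs(z) - k) < 1e-12
```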


matrices - Let $A: \Bbb{R}^{+}\to M_{n\times n}(\Bbb{R})$, then $A(t)$ is continuous if and only if $a_{ij}(t)$ is continuous, $\forall\;i,j=1,\cdots,n$



Suppose \begin{align}A: \Bbb{R}^{+}\to M_{n\times n}(\Bbb{R})\end{align}
\begin{align}t\mapsto A(t),\end{align}
where $A(t)=\big(a_{ij}(t)\big).$




I want to prove that the following are equivalent.



$i.$ $A(t)$ is continuous;



$ii.$ $a_{ij}(t)$ is continuous, $\forall\;i,j=1,\cdots,n$.



MY TRIAL



Assume that $a_{ij}(t)$ is continuous $\forall\;i,j=1,\cdots,n$. Since $M_{n\times n}(\Bbb{R})$ is a finite dimensional vector space, then all norms are equivalent. Take




\begin{align}\Vert A(t)\Vert=\sum^{n}_{i,j=1}|a_{ij}(t)|\end{align}
Since $A(t)$ is a finite sum of continuous functions, then it is continuous.



Now I need to prove the converse, but I can't see how. Please, can anyone help me? Also, kindly check whether my proof above is correct!


Answer



The last step of your proof for $(ii)\Rightarrow (i)$ does not sound quite right to me. In your last equality, it is $\|A(t)\|$ which is a sum of continuous functions, and hence continuous. In general, this does not imply that $A(t)$ is also continuous.



Hint: consider this equality instead
$$\|A(t)-A(t_0)\|=\sum_{i,j=1}^{n}|a_{ij}(t)-a_{ij}(t_0)| ,\qquad \forall t,t_0\in \mathbb{R}^+$$

and take the limit for $t\to t_0$ (starting from the side of the equality where the limit actually exists).
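For illustration only (the sample matrix function below is my own choice, not from the question), the norm in the hint can be checked numerically for a small $2\times 2$ case:

```python
import math

# ||A(t) - A(t0)|| = sum_ij |a_ij(t) - a_ij(t0)| tends to 0 as t -> t0
# exactly when every entry a_ij(t) -> a_ij(t0).
def A(t):                                  # a sample continuous matrix function
    return [[math.cos(t), t], [t * t, math.sin(t)]]

def dist(M, N):                            # the entrywise-sum norm of M - N
    return sum(abs(M[i][j] - N[i][j]) for i in range(2) for j in range(2))

t0 = 1.0
for t in (1.1, 1.01, 1.0001):
    print(t, dist(A(t), A(t0)))            # shrinks as t -> t0

assert dist(A(t0 + 1e-8), A(t0)) < 1e-6
```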


Prove sequence using induction



$a_1=1$, $a_{n+1} = 3 a_n^2$.




Prove for all positive integers, $a_n\leq{3^{2^n}}$ using induction.



My work so far:



Base case is true: $a_1 = 1 \leq 3^{2^1} = 9$.



Induction Hypothesis: $a_k\leq{3^{2^k}}$



Induction Step: prove that the statement holds for $n = k+1$.




I'm stuck because I just can't seem to prove the induction step. Any help is appreciated.


Answer



We need to show the stronger condition



$$a_n\leq{3^{2^n-1}}(\leq{3^{2^n}})$$



and therefore assuming as Induction Hypothesis $a_k\leq{3^{2^k-1}}$ we have



$$a_{k+1}=3a_k^2\stackrel{\text{Ind. Hyp.}}{\leq} 3\cdot (3^{2^{k}-1})^2={3^{2^{k+1}-1}}$$
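A quick sanity check of the strengthened bound (the range of $n$ is an arbitrary choice; Python integers are exact, so there is no overflow):

```python
# Verify a_n <= 3^(2^n - 1) <= 3^(2^n) for the recurrence a_1 = 1, a_{n+1} = 3 a_n^2.
a = 1                                # a_1
for n in range(1, 12):
    bound = 3 ** (2 ** n - 1)
    assert a <= bound <= 3 ** (2 ** n)
    a = 3 * a * a                    # a_{n+1} = 3 a_n^2
```

In fact the recurrence gives $a_n = 3^{2^{n-1}-1}$ exactly, which sits well below the bound $3^{2^n-1}$.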








real analysis - If a function f has no jump discontinuities, does it have the intermediate value theorem property?

I realized I was confused by this concept (while preparing for my exam).



If a function f has no jump discontinuities, does it have the intermediate value theorem property?



Facts I know:
Continuity implies the intermediate value theorem property. However, the intermediate value theorem property does not imply continuity. All derivatives have the intermediate value theorem property (Darboux's theorem), but we can have discontinuous derivatives.
Derivatives do not have jump discontinuities, i.e. discontinuities of the first kind.




I don't know whether a function with no jump discontinuities necessarily has the intermediate value theorem property. I know this works for derivatives. I was trying to construct a discontinuous function with discontinuities of the second kind (where one of the one-sided limits does not exist), with no jump discontinuities, that doesn't have the intermediate value theorem property, but I couldn't think of any.



Thanks

number theory - Proofs for $0^0 =1$?




Everyone knows the following:
$$0^x = 0 \quad \wedge \quad x^0 = 1 , \quad\forall x \in \mathbb{R}^*$$




One morning, I wake up asking myself the question "$\text{What is $0^0$, then?}$".

So, I did what any curious high-school student would do: I tried to figure it out using algebra
$$0^0 = 0^{x-x} = \frac{0^x}{0^x} = \frac{0}{0} = \text{undefined}$$



...and I can't seem to not divide by zero every time I try.



But then, using the concept of limits, I can sort of say what it could be.



$$\lim_{x \to 0} 0^x = 0 \quad\wedge\quad \lim_{x \to 0} x^0 = 1$$




Yeah, that doesn't provide me with much clarity either.
From my perspective there's a 50% chance that it is $1$ and a 50% chance that it's $0$.
$$\text{So, probabilistically it's } ^1/_2 \text{ !??}$$



Almost all calculators that I put it into give me back Math Error.
But oddly enough Google says otherwise



Thanks to this Numberphile video, I realize that there can't be a proper definition for it.



But many still define it to be $1$ for many more reasons. I wish to understand these reasons. I would like some proofs for this.



Hence, I seek proofs inclined (biased) to saying that $0^0 = 1$




EDIT: I've changed the question slightly to avoid being a duplicate.
Thank you.


Answer



Perhaps the strongest reason why some people insist that $\;0^0=1\;$ is that



$$\text{For}\;\;0<x\in\Bbb R:\qquad x^x=e^{x\log x}\xrightarrow[x\to 0^+]{}e^0=1$$

Yet $\;x^x\;$ is undefined for lots of negative values in any neighborhood of zero...
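This limit is easy to see numerically; an illustrative sketch (the sample points are arbitrary choices, and this is motivation for the convention, not a proof):

```python
# x^x = e^{x ln x} -> 1 as x -> 0 from the right, one common motivation
# for the convention 0^0 = 1.
for x in (0.1, 0.01, 0.0001, 1e-8):
    print(x, x ** x)               # creeps up toward 1

assert abs(1e-8 ** 1e-8 - 1.0) < 1e-6
```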


real analysis - Prove that there exists an integer greater than x such that any polynomial $f(x)$ will be strictly non-negative and get large?

Hi I am taking a number theory class and so far I have been proving modular congruences, modular arithmetic, and prime properties. There is this theorem that came up in the textbook and apparently it does not involve any modular arithmetic. The theorem is as follows




Suppose $f(x)=a_nx^n+a_{n-1}x^{n-1}+\dots+a_0$ is a polynomial of degree $n>0$. Then there exists an integer $k$ such that if $x>k$ then $f(x)>0$.





I feel like this would come up in real analysis, but I have not come that far in my studies. I have an idea of applying induction and using the ceiling function somehow, but I have no idea how to start off this proof. Any help will do, and thank you.
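Note that the statement implicitly needs a positive leading coefficient $a_n>0$ (otherwise $f(x)=-x$ is a counterexample). One concrete way to see it: by Cauchy's root bound, every real root satisfies $|x| \le 1 + \max_i |a_i|/a_n$, so past that point $f$ cannot change sign and, since it tends to $+\infty$, must be positive. A hypothetical demo with a sample polynomial of my own choosing:

```python
# Sample polynomial f(x) = 2x^3 - 50x^2 - 1000x - 7 (a_n > 0 assumed).
# Cauchy's bound: all real roots satisfy |x| <= 1 + max_i |a_i| / a_n,
# so f(x) > 0 for every x beyond that bound.
coeffs = [2, -50, -1000, -7]                    # a_n, ..., a_0
a_n = coeffs[0]
k = 1 + max(abs(c) for c in coeffs[1:]) / a_n   # = 501.0 for this sample

def f(x):
    y = 0
    for c in coeffs:                            # Horner's rule
        y = y * x + c
    return y

for x in range(int(k) + 1, int(k) + 100):
    assert f(x) > 0
```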

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...