Thursday 31 December 2015

calculus - Using L'Hospital's Rule to evaluate limit to infinity




I'm given this problem and I'm not sure how to solve it. I was only ever given one example in class on using L'Hospital's rule like this, but it is very different from this particular problem. Can anyone please show me the steps to solve a problem like this?



Evaluate the limit using L'Hospital's rule if necessary



$$\lim_{ x \rightarrow \infty } \left( 1+\frac{11}{x} \right) ^{\frac{x}{9}}$$



Basically, I only know the first step:
$$\lim_{ x \rightarrow \infty } \frac{x}{9} \ln \left( 1+\frac{11}{x} \right)$$




WolframAlpha evaluates it as $e^{\frac{11}{9}}$ but I obviously have no idea how to get to that point.


Answer



Let $a=1+\frac{11}{x}$. We know that $$a^{x/9}=\exp\left ( \ln\left (a^{x/9} \right ) \right )=\exp\left ( \frac{x}{9}\ln(a) \right )=\exp\left ( \frac{\ln(a)}{\left ( \frac{x}{9} \right )^{-1}} \right )$$



Since $$\lim_{x\to\infty}\frac{\ln(a)}{\left ( \frac{x}{9} \right )^{-1}}=\frac{1}{9}\lim_{x\to\infty}\frac{\ln(a)}{1/x}=\ldots \textrm{Use L'Hopital's rule}\ \ldots =\frac{11}{9}$$
we get the desired answer: the original limit is $e^{\frac{11}{9}}$, as WolframAlpha reports.
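As a quick numerical sanity check (my addition, not part of the original answer), one can evaluate the expression for increasingly large $x$ and compare against $e^{11/9}$:

```python
import math

# Evaluate (1 + 11/x)^(x/9) for large x and compare with e^(11/9).
for x in [1e2, 1e4, 1e6]:
    print(x, (1 + 11 / x) ** (x / 9), math.exp(11 / 9))
```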


inequality - How can I prove that $x-\frac{x^2}{2} < \ln(1+x)$




How can I prove that $$\displaystyle x-\frac {x^2} 2 < \ln(1+x)$$ for any $x>0$



I think it's somehow related to the Taylor expansion of the natural logarithm, where:



$$\displaystyle \ln(1+x)=\color{red}{x-\frac {x^2}2}+\frac {x^3}3 -\cdots$$



Can you please show me how? Thanks.


Answer



Hint:




Prove that $\ln(1 + x) - x + \dfrac{x^2}2$ is strictly increasing for $x > 0$.



edit: to see why this isn't a complete proof, consider $x^2 - 1$ for $x > 0$. It's strictly increasing; does that show that $x^2 > 1$? I hope not, because it's not true!
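For completeness, here is the computation the hint calls for, together with the endpoint value that the edit says is also needed (my addition, not part of the original answer). Let $h(x)=\ln(1+x)-x+\frac{x^2}{2}$. Then
$$h'(x)=\frac{1}{1+x}-1+x=\frac{x^2}{1+x}>0\quad\text{for }x>0,$$
and $h(0)=0$, so $h(x)>0$ for all $x>0$, which is exactly the claimed inequality.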


Convergence of the series $\sum\limits_{n=3}^\infty (\log\log n)^{-\log\log n}$



I am trying to test the convergence of this series from exercise 8.15(j) in Mathematical Analysis by Apostol:



$$\sum_{n=3}^\infty \frac{1}{(\log\log n)^{\log\log n}}$$



I tried every kind of test. I know it should be possible to use the comparison test but I have no idea on how to proceed. Could you just give me a hint?


Answer



Note that, for every $n$ large enough, $$(\log\log n)^{\log\log n}\leqslant(\log n)^{\log\log n}=\exp((\log\log n)^2)\leqslant\exp(\log n)=n,$$ provided, for every $k$ large enough, $$\log k\leqslant\sqrt{k},$$ an inequality you can probably show, used for $k=\log n$. Hence, for every $n$ large enough, $$\frac1{(\log\log n)^{\log\log n}}\geqslant\frac1n,$$ and the series...





...diverges.



Wednesday 30 December 2015

analysis - Complex equation simplification

Let $k$ be a positive integer and $c_0$ a positive constant. Consider the following expression:
\begin{equation} \left(2 i c_0-i+\sqrt{3}\right)^2 \left(-2 c_0+i \sqrt{3}+1\right)^k+\left(-2 i c_0+i+\sqrt{3}\right)^2 \left(-2 c_0-i \sqrt{3}+1\right)^k
\end{equation}
I would like to find a simple expression for the above in which only real numbers appear. It is clear that the above expression is always a real number since

\begin{equation}
\overline{\left(2 i c_0-i+\sqrt{3}\right)^2 \left(-2 c_0+i \sqrt{3}+1\right)^k}= \left(-2 i c_0+i+\sqrt{3}\right)^2 \left(-2 c_0-i \sqrt{3}+1\right)^k.
\end{equation}
But I am not able to simplify it. I am pretty sure I once saw how to do this in a complex analysis course but I cannot recall the necessary tools. Help is much appreciated.
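As a numerical sanity check (my addition, with arbitrary test values of $c_0$ and $k$), one can confirm that the imaginary part of the expression vanishes:

```python
# Check numerically that the expression is real for sample values of c0 and k.
c0, k = 0.7, 5  # arbitrary test values
s3 = 3 ** 0.5
expr = (2j * c0 - 1j + s3) ** 2 * (-2 * c0 + 1j * s3 + 1) ** k \
     + (-2j * c0 + 1j + s3) ** 2 * (-2 * c0 - 1j * s3 + 1) ** k
print(expr)  # the imaginary part should be ~0 up to rounding
```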

paradoxes - Is this Simpson’s Paradox?

In January, there were 2,700 new sign-ups and 3,500 who opted out. As at end January, there are 60,000 customers in our database.



In February, there were 3,400 new sign-ups and 4,300 who opted out. As at end February, there are 59,100 customers in our database.



Looking at the new sign-up and opt-out components, we see that:
(a) New sign-ups increased by 700 from Jan to Feb
(b) Opt-outs increased by 800 from Jan to Feb



From Jan to Feb, the number of customers in our database decreased by 900. However, my superior queried that the figures look odd: since new sign-ups increased by 700 and opt-outs increased by 800, the overall number of customers should only decrease by 100.




I tried explaining that the figures will not reconcile by summing up the differences between months. Needless to say, my explanation was not accepted. Is anyone able to advise on what paradox this is? It seems similar to Simpson's paradox.



Appreciate your advice please!
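A quick way to see what reconciles and what doesn't, using only the numbers from the question (my own arithmetic, not an official answer):

```python
# Net change within each month is sign-ups minus opt-outs.
jan_net = 2700 - 3500   # -800
feb_net = 3400 - 4300   # -900
print(jan_net, feb_net)

# The database moved from 60,000 (end Jan) to 59,100 (end Feb): a drop of 900,
# which matches feb_net exactly.
print(60000 - 59100)

# The superior's "decrease of 100" is feb_net - jan_net: the change in the
# monthly net change, not the change in the customer count itself.
print(feb_net - jan_net)
```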

linear algebra - Properties of the inverse of an upper triangular matrix



Let $U$ be a $3$ by $3$ upper triangular matrix with all diagonal entries non zero. We then know that $U$ is invertible. Show that its inverse is also upper triangular. Also show this fact for a general $n$ by $n$ matrix.



What I tried




To invert the $3$ by $3$ matrix, we write the identity matrix next to it and row-reduce until the original matrix is turned into the identity. But by doing so we observe that the upper triangular portion of the identity matrix is not affected, and hence the inverse remains upper triangular. Could anyone explain this question to me? Thanks.$$
A =
\left[ {\begin{array}{ccc}
1 & 2 & 1 \\
0 & 1 & 0 \\
0 & 0 & -1
\end{array} } \right]\left[ {\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array} } \right]
$$


Answer



What row operations do you need to apply in order to reduce an upper triangular matrix to the identity?



Start at the bottom, scale to reduce to a row with a $1$ in the last column of the row. Use this row to eliminate all non-zero entries in the rows above the last one. Now, move up one row and scale to get a $1$ in the next to the last entry. Use this row to eliminate all non-zero entries in the next to last column from the rows above that. Continue in the same way. What happens to the identity matrix on the right of the augmented matrix as you perform these operations?
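A small numerical illustration of this back-substitution argument (a sketch I am adding, using NumPy):

```python
import numpy as np

# Invert an upper triangular matrix with nonzero diagonal and observe that
# the inverse is again upper triangular.
U = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, -1.0]])
U_inv = np.linalg.inv(U)
print(U_inv)
print(np.allclose(U_inv, np.triu(U_inv)))  # True: no fill-in below the diagonal
```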


Tuesday 29 December 2015

calculus - Evaluate $\int_{0}^{\infty} \frac{1}{\sqrt{x(1+e^x)}}\,\mathrm dx$




I would like to evaluate:
$$ \int_{0}^{\infty} \frac{1}{\sqrt{x(1+e^x)}}\mathrm dx $$
Given that I can't find $ \int \frac{1}{\sqrt{x(1+e^x)}}\mathrm dx $, a substitution is needed: I tried $$ u=\sqrt{x(1+e^x)} $$ and $$ u=\frac{1}{\sqrt{x(1+e^x)}} $$ but I could not get rid of $x$ in the new integral....
Do you have ideas of substitution?


Answer



$$
\begin{align}
\int_0^\infty\frac{1}{\sqrt{x(1+e^x)}}\mathrm{d}x
&=2\int_0^\infty\frac{1}{\sqrt{1+e^{x^2}}}\mathrm{d}x\\

&=2\int_0^\infty(1+e^{-x^2})^{-1/2}e^{-x^2/2}\;\mathrm{d}x\\
&=2\int_0^\infty\sum_{k=0}^\infty(-\tfrac{1}{4})^k\binom{2k}{k}e^{-(2k+1)x^2/2}\;\mathrm{d}x\\
&=\sum_{k=0}^\infty(-\tfrac{1}{4})^k\binom{2k}{k}\sqrt{\frac{2\pi}{2k+1}}
\end{align}
$$
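One can check the final series against direct quadrature (my addition, a sanity check using SciPy; the coefficient $(-\tfrac14)^k\binom{2k}{k}$ is built from its term-to-term ratio to avoid huge integers):

```python
from math import sqrt, pi, exp
from scipy import integrate

# Direct numerical evaluation of the original integral (tail beyond 50 is negligible).
direct, _ = integrate.quad(lambda x: 1.0 / sqrt(x * (1.0 + exp(x))), 0, 50)

# Partial sum of the series derived above; successive coefficients satisfy
# c_{k+1} = -c_k * (2k+1)/(2k+2).
coeff, series = 1.0, 0.0
for k in range(100000):
    series += coeff * sqrt(2 * pi / (2 * k + 1))
    coeff *= -(2 * k + 1) / (2 * k + 2)
print(direct, series)  # the series is alternating and converges slowly
```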


trigonometry - Solving $\sin 7\phi+\cos 3\phi=0$




The question is find the general solution of this equation:$$\sin(7\phi)+\cos(3\phi)=0$$



I tried to use the "Sum-to-Product" formula, but found it only suitable for $\sin(a)\pm \sin(b)$ or $\cos(a)\pm \cos(b)$. So I tried to expand $\sin 7\phi$ and $\cos 3\phi$, but the equation became much more complicated..



I'm self-studying BUT there's nothing about how to solve this type of equation in my textbook..



reeeaaaaally confused now..


Answer



Hint: Use sum to product!
$$\sin 7\phi+\sin \left(\frac{\pi}{2}-3\phi \right)=0$$
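If it helps, here is one way the hint plays out (my continuation, not part of the original answer). Sum-to-product gives
$$2\sin\left(\frac{7\phi+\frac{\pi}{2}-3\phi}{2}\right)\cos\left(\frac{7\phi-\frac{\pi}{2}+3\phi}{2}\right)=2\sin\left(2\phi+\frac{\pi}{4}\right)\cos\left(5\phi-\frac{\pi}{4}\right)=0,$$
so either $2\phi+\frac\pi4=k\pi$ or $5\phi-\frac\pi4=\frac\pi2+k\pi$ for some integer $k$.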



algebra precalculus - Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be multiplicative. Is it exponential?




For function $f:\mathbb{R}\rightarrow\mathbb{R}$ that satisfies $f\left(x+y\right)=f\left(x\right)f\left(y\right)$
and is not the zero-function I can prove that $f\left(1\right)>0$
and $f\left(x\right)=f\left(1\right)^{x}$ for each $x\in\mathbb{Q}$.
Is there a way to prove that for $x\in\mathbb{R}$?



This question has been marked as a duplicate of the question whether $f(xy)=f(x)f(y)$ leads to $f(x)=x^p$ for some $p$. I disagree with that. Both questions are answered by means of constructing a function $g$ that satisfies $g(x+y)=g(x)+g(y)$. In this question: $g(x)=\log f(x)$, and in the other: $g(x)=\log f(e^x)$. So the answers are alike, but the two questions definitely have different starting points.


Answer



No, because if $f$ is any of your functions, you may take any additive function $g : \mathbb{R} \to \mathbb{R}$ (that is, a function such that $g(x+y) = g(x) + g(y)$), and $f \circ g$ will still satisfy your assumption, as $f \circ g (x + y) = f(g(x+y)) = f(g(x) + g(y)) = f(g(x)) f(g(y)) = f \circ g(x) f \circ g(y)$.




And there are plenty of such $g$; see under Hamel basis.


integration - How to show that this integral equals $\frac\pi2$?



While solving a physical problem from Landau, Lifshitz "Mechanics" book, I came across an integral:



$$\int_0^\delta \frac{du}{\sqrt{\left(\frac{\cosh\delta}{\cosh u}\right)^2-1}}.$$



In the book only the final answer for the problem is given, from which I deduce that this integral must be $\frac\pi2$.




I've tried feeding it to Wolfram Mathematica, but it wasn't able to evaluate it and returned the integral unevaluated. Evaluating it numerically confirms that $\frac\pi2$ is the likely value, but I haven't been able to prove this.



I've tried making a substitution $v=\frac{\cosh\delta}{\cosh u}$ and got this integral instead:



$$\gamma \int_1^\gamma \frac{dv}{v\sqrt{(\gamma^2-v^2)(v^2-1)}},$$



where $\gamma=\cosh\delta$, but still this doesn't give me a clue how to proceed. Also, I can't seem to eliminate the parameter ($\delta$ or $\gamma$), which shouldn't affect the result at all.



So, the question is: how can one evaluate this integral or at least prove that it's equal $\frac\pi2$?



Answer



It's actually a lot simpler than this. Rewrite the integral as



$$\int_0^{\delta} du \frac{\cosh{u}}{\sqrt{\cosh^2{\delta}-\cosh^2{u}}} = \int_0^{\delta} du \frac{\cosh{u}}{\sqrt{\sinh^2{\delta}-\sinh^2{u}}}$$



Sub $y=\sinh{u}$ and the integral becomes



$$\int_0^{\sinh{\delta}} \frac{dy}{\sqrt{\sinh^2{\delta}-y^2}}$$



Now sub $y=\sinh{\delta}\, \sin{t}$ and the integral is




$$\int_0^{\pi/2} dt = \frac{\pi}{2}$$
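A quick numerical check that the value is indeed independent of $\delta$ (my addition, using SciPy; `quad` copes with the integrable endpoint singularity):

```python
import numpy as np
from scipy import integrate

# The original integral should equal pi/2 for every delta > 0.
for delta in [0.5, 1.0, 3.0]:
    f = lambda u: 1.0 / np.sqrt((np.cosh(delta) / np.cosh(u)) ** 2 - 1.0)
    val, _ = integrate.quad(f, 0, delta, limit=200)
    print(delta, val, np.pi / 2)
```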


coding theory - Using Extended Euclidean Algorithm to find multiplicative inverse




Having some trouble working my way back up the Extended Euclidean Algorithm.
I'm trying to find the multiplicative inverse $497^{-1} \pmod{899}$. So I started working my way down, first finding the gcd:



\begin{align}
899&=497\cdot1 + 402\\
497&=402\cdot1 + 95\\
402&=95\cdot4 + 22\\
95&=22\cdot4 + 7\\
22&=7\cdot3 + 1
\end{align}

Now I work my way back up using the extended algorithm and substituting:
\begin{align}
1&=22-(7\cdot3)\\
1&=22-(95-(22\cdot4))\cdot3\\
1&=22-(95-(402-(95\cdot4)\cdot4))\cdot3\\
1&=22-(95-(402-((497-402)\cdot4)\cdot4))\cdot3\\
1&=22-(95-(402-((497-(899-497))\cdot4)\cdot4))\cdot3\\
\end{align}



Am I going about this right? Do I just keep substituting up the chain? It gets difficult for me to follow. And here's what the terms equal going up:



\begin{align}
7&=95-(22\cdot4)\\
22&=402- ( 95\cdot4)\\
95&=497- 402\\
402&=899- 497\\
\end{align}


Answer



For an (iterative) implementation it is easier to compute the inverse resp. the Bezout coefficients while going down.




You start with $0·497 \equiv r_0=899\mod899$ and $1·497 \equiv r_1=497\mod899$ and apply the same sequence of computations as to the remainder to the quotient sequence starting with $u_0=0, u_1=1$.
\begin{align}
r_2&=r_0-1·r_1&\implies u_2&=u_0-1·u_1=-1 &:&&-1·497 &\equiv r_2=402\mod899
\\
r_3&=r_1-1·r_2&\implies u_3&=u_1-1·u_2=2 &:&&2·497 &\equiv r_3=95\mod899
\\
r_4&=r_2-4·r_3&\implies u_4&=u_2-4·u_3=-9 &:&&-9·497 &\equiv r_4=22\mod899
\\
r_5&=r_3-4·r_4&\implies u_5&=u_3-4·u_4=38 &:&&38·497 &\equiv r_5=7\mod899

\\
r_6&=r_4-3·r_5&\implies u_6&=u_4-3·u_5=-123 &:&&-123·497 &\equiv r_6=1\mod899
\end{align}
Thus the inverse is $-123$, or in the same equivalence class, $899-123=776$.
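For reference, here is a short iterative routine along the lines of this answer (my own sketch, not code from the post):

```python
def modinv(a, m):
    """Inverse of a modulo m via the iterative extended Euclidean algorithm."""
    r0, r1 = m, a % m   # remainder sequence
    u0, u1 = 0, 1       # coefficients of a, updated by the same recurrence
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        u0, u1 = u1, u0 - q * u1
    if r0 != 1:
        raise ValueError("a is not invertible modulo m")
    return u0 % m

print(modinv(497, 899))  # 776, matching the answer above
```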


sequences and series - Sum of First $n$ Squares Equals $\frac{n(n+1)(2n+1)}{6}$



I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals:




$$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$



I really have no idea why this statement is true. Can someone please explain why this is true and if possible show how to arrive at one given the other?


Answer



You can easily prove it by induction.



One way to find the coefficients, assuming we already know that it's a degree $3$ polynomial, is to calculate the sum for $n=0,1,2,3$. This gives us four values of a degree $3$ polynomial, and so we can find it.



The better way to approach it, though, is through the identity

$$ \sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1}. $$
This identity is true since in order to choose a $(k+1)$-subset of $\{1,\dots,n+1\}$, you first choose its largest element, say $t+1$, and then a $k$-subset of $\{1,\dots,t\}$.



We therefore know that
$$ \sum_{t=0}^n A + Bt + C\binom{t}{2} = A(n+1) + B\binom{n+1}{2} + C\binom{n+1}{3}. $$
Now choosing $A=0,B=1,C=2$, we have
$$ A+Bt + C\binom{t}{2} = t^2. $$
Therefore the sum is equal to
$$ \binom{n+1}{2} + 2\binom{n+1}{3}. $$
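A two-line check of the identity and the closed form (my addition):

```python
from math import comb

# Verify: sum of first n squares == C(n+1, 2) + 2*C(n+1, 3) == n(n+1)(2n+1)/6.
for n in [1, 5, 10, 100]:
    direct = sum(k * k for k in range(1, n + 1))
    assert direct == comb(n + 1, 2) + 2 * comb(n + 1, 3)
    assert direct == n * (n + 1) * (2 * n + 1) // 6
print("all checks passed")
```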


real analysis - Show $f(x) = 1/x$ is in $L^2\left([1, +\infty)\right)$ but not in $L^1\left([1, +\infty)\right)$.

Proposition



$f(x) = 1/x$ is in $L^2\left([1, +\infty)\right)$ but not in $L^1\left([1, +\infty)\right)$.



Discussion



So my issue here is that I don't know how to use infinity in Lebesgue integration.




It is intuitive (I think) that evaluation of the improper Riemann integrals



\begin{align}
\int_1^\infty \left|f(x)\right| &= \int_1^\infty \frac{1}{x} = \lim_{c \to \infty} \ln c = + \infty \\ \\
\int_1^\infty \left|f(x)\right|^2 &= \int_1^\infty \frac{1}{x^2} = 1 - \lim_{c \to \infty} \frac{1}{c} = 1
\end{align}



would imply our proposition, but I've only seen $L^p$-spaces defined in the sense of Lebesgue integrals. So when I get to these steps:



\begin{align}

\int_{[1, \infty)} \left|f(x)\right| &= \int_{[1, \infty)} \frac{1}{x} = \cdots \\ \\
\int_{[1, \infty)} \left|f(x)\right|^2 &= \int_{[1, \infty)} \frac{1}{x^2} = \cdots
\end{align}



I'm not sure how to proceed. I'm guessing we need an argument for switching between the two types of integration, which I've read up on a little bit, but am not sure how to apply here in the improper case.
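One standard bridge, added here as a sketch (this is the usual argument, not part of the original post): for a nonnegative measurable $g$ on $[1,\infty)$, monotone convergence applied to $g\cdot\chi_{[1,n]}\uparrow g$ gives
$$\int_{[1, \infty)} g = \lim_{n \to \infty}\int_{[1, n]} g,$$
and on each bounded interval $[1,n]$ the Lebesgue integral of the continuous function $g$ agrees with the Riemann integral. With $g=|f|$ the limit is $\lim_n \ln n = +\infty$, so $f\notin L^1$; with $g=|f|^2$ the limit is $\lim_n \left(1-\frac1n\right)=1<\infty$, so $f\in L^2$.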

calculus - Partial sum of divergent series



I am trying to find the $n$th partial sum of the series with general term
$S(n) = (1+2n)^2$




I found the answer on WolframAlpha:



$\sum_{n=0}^m (1+2n)^2 =\frac{1}{3}(m+1)(2m+1)(2m+3)$



How can I calculate that sum, without any software?


Answer



$$S(n)=(1+2n)^2=1+4n+4n^2$$
You can now use the following $$\sum_{n=0}^m1=m+1\\\sum_{n=0}^mn=\frac{m(m+1)}{2}\\\sum_{n=0}^mn^2=\frac{m(m+1)(2m+1)}{6}$$




Alternatively, compute the first 4-5 elements. The sum of a polynomial of order $p$ will be a polynomial of order $p+1$ in the number of terms. Find the coefficients, then prove by induction
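A quick check of the closed form (my addition):

```python
# Verify sum_{n=0}^{m} (1+2n)^2 == (m+1)(2m+1)(2m+3)/3 for small m.
for m in range(20):
    direct = sum((1 + 2 * n) ** 2 for n in range(m + 1))
    assert direct == (m + 1) * (2 * m + 1) * (2 * m + 3) // 3
print("closed form verified")
```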


Monday 28 December 2015

calculus - Prove: $\int_0^\infty \sin (x^2) \, dx$ converges.




$\sin x^2$ does not converge as $x \to \infty$, yet its integral from $0$ to $\infty$ does.



I'm trying to understand why and would like some help in working towards a formal proof.


Answer



$x\mapsto \sin(x^2)$ is integrable on $[0,1]$, so we have to show that $\lim_{A\to +\infty}\int_1^A\sin(x^2)dx$ exists. Make the substitution $t=x^2$, then $x=\sqrt t$ and $dx=\frac{dt}{2\sqrt t}$. We have $$\int_1^A\sin(x^2)dx=\int_1^{A^2}\frac{\sin t}{2\sqrt t}dt=-\frac{\cos A^2}{2A}+\frac{\cos 1}2+\frac 12\int_1^{A^2}\cos t\cdot t^{-3/2}\frac{-1}2dt,$$
and since $\lim_{A\to +\infty}-\frac{\cos A^2}{2A}+\frac{\cos 1}2=\frac{\cos 1}2$ and the integral $\int_1^{+\infty}t^{-3/2}dt$ exists (is finite), we conclude that $\int_1^{+\infty}\sin(x^2)dx$ converges, and so does $\int_0^{+\infty}\sin(x^2)dx$.
This integral is computable thanks to the residue theorem.
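Numerically the partial integrals indeed settle down (my addition, using SciPy); the limiting value, known from the Fresnel integrals, is $\sqrt{\pi/8}$:

```python
import numpy as np
from scipy import integrate

# Partial integrals of sin(x^2) approach the Fresnel value sqrt(pi/8).
for A in [5, 20, 50]:
    val, _ = integrate.quad(lambda x: np.sin(x * x), 0, A, limit=2000)
    print(A, val)
print(np.sqrt(np.pi / 8))  # ~0.6267
```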


limits - Prove that the sequence $a_1 = \sqrt{2}$, $a_{n+1} = \sqrt{2 + a_n}$ is bounded above by 3





I need a little bit of help (just a hint, please) with an induction proof on this sequence, which I need to prove is bounded above by 3.
$$
a_1 = \sqrt{2}
$$
$$

a_{n+1} = \sqrt{2 + a_n}
$$



My attempt:
$$
a_k < 3
$$
$$
a_k + 2 < 5
$$

$$
\sqrt{a_k + 2} < \sqrt{5}
$$
$$
a_{k+1} < \sqrt{5}
$$
... and I don't know where to go from here.



If I were to find a limit of this sequence, which way would I have to go? Should I try to rewrite the sequence into a formula?


Answer




Once you have $a_{k+1} < \sqrt{5}$, you can use that $\sqrt{5} < 3$ to prove that $a_{k+1} < 3$. Hence by induction all terms of this sequence are bounded by $3$.



Now for the limit part, your sequence is bounded above, if you can show that it is an increasing sequence then it follows (see a theorem about monotone convergence) that the sequence should have a limit. Once that is established you can assume that $\lim_{n \to \infty}a_n=a$. Now you have
$$\lim_{n \to \infty}a_{n+1}=\sqrt{2+\lim_{n \to \infty}a_{n}}.$$
Solve for $a$.
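Carrying out that last step (my addition): if $a=\lim_{n\to\infty}a_n$ exists, then $a=\sqrt{2+a}$, so $a^2-a-2=(a-2)(a+1)=0$; since every $a_n\ge\sqrt2>0$, the limit is $a=2$, comfortably below the bound $3$.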


How do I define probability space $(\Omega, \mathcal F, \mathbb{P})$ for continuous random variable?




I need to mathematically define the probability space $(\Omega, \mathcal F, \mathbb P)$ of continuous random variable $X$. I also need to define the continuous random variable $X$ itself. Problem is... I don't really know how.



It is known that $X$ has the following probability density function $f_X: \mathbb{R} \longrightarrow \left[0, \frac{4}{9} \right]$:



$$f_X(x) = \begin{cases} \frac{1}{9}\big(3 + 2x - x^2 \big) \; &: 0 \leq x \leq 3 \\ 0 \; &: x < 0 \; \lor \; x > 3 \end{cases}$$



and its plot:



[plot of the density $f_X$]




Also, the cumulative distribution function of $X$ is $F_X: \; \mathbb{R} \longrightarrow \left[0,1\right]$ and is defined as:



$$F_X(x) = \begin{cases} 0 \; &: x < 0 \\ \frac{1}{9} \Big(3x + x^2 - \frac{1}{3}x^3 \Big) \; &: 0 \leq x \leq 3 \\ 1 \; &: x > 3 \end{cases}$$



and its plot:



[plot of the CDF $F_X$]



(please see this thread where I calculated CDF for reference)







I suppose:



$$X: \Omega \longrightarrow \mathbb{R}$$



and sample space:



$$\Omega = \mathbb{R}$$




How can I define $\mathcal F$ and $\mathbb{P}$, that are the quantities of probability space $(\Omega, \mathcal F, \mathbb{P})$? I was thinking:



$$\mathbb{P} : \mathcal F \longrightarrow \left[0, 1\right] \; \land \; \mathbb{P}(\Omega) = 1$$



I am jumping into statistics/probability and I am lacking the theoretical knowledge. Truth be told, the wikipedia definition of a probability space for a continuous random variable is too difficult for me to grasp.



Thanks!


Answer



It is a bit weird to ask for a probability space if the probability distribution is already there and is completely at hand. So I think this is just some theoretical question to test you. After all, students in probability theory must be able to place the "probability things" they meet in the familiar context of a probability space.




In such cases the easiest way is the following.



Just take $(\Omega=\mathbb R,\mathcal F=\mathcal B(\mathbb R),\mathbb P)$ as probability space, where $\mathcal B(\mathbb R)$ denotes the $\sigma$-algebra of Borel subsets of $\mathbb R$ and where the probability measure $\mathbb P$ is prescribed by: $$B\mapsto\int_Bf_X(x)\;dx$$



Then as random variable $X:\Omega\to\mathbb R$ you can take the identity on $\mathbb R$.



The random variable induces a distribution denoted as $\mathbb P_X$ that is characterized by $$\mathbb P_X(B)=\mathbb P(X\in B)=\mathbb P(X^{-1}(B))\text{ for every }B\in\mathcal B(\mathbb R)$$



Now observe that - because $X$ is the identity - we have $X^{-1}(B)=B$ so that we end up with:$$\mathbb P_X(B)=\int_Bf_X(x)\;dx\text{ for every }B\in\mathcal B(\mathbb R)$$as it should. Actually in this special construction we have:$$(\Omega,\mathcal F,\mathbb P)=(\mathbb R,\mathcal B(\mathbb R),\mathbb P_X)\text{ together with }X:\Omega\to\mathbb R\text{ prescribed by }\omega\mapsto\omega$$




Above we created a probability space together with a measurable function $\Omega\to\mathbb R$ such that the induced distribution on $(\mathbb R,\mathcal B(\mathbb R))$ is the one that is described in your question.






PS: As soon as you are well informed about probability spaces then in a certain sense you can forget about them again. See this question to gain some understanding about what I mean to say.
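As a concrete check that the measure $\mathbb P$ defined above is a probability measure in this example (my addition), one can verify numerically that the density integrates to $1$ and reproduces the stated CDF:

```python
from scipy import integrate

# Density from the question, supported on [0, 3].
f = lambda x: (3 + 2 * x - x * x) / 9 if 0 <= x <= 3 else 0.0

total, _ = integrate.quad(f, 0, 3)
print(total)  # 1.0, i.e. P(Omega) = 1

# P((-inf, x]) should match F_X(x) = (3x + x^2 - x^3/3)/9 on [0, 3].
x = 1.5
p, _ = integrate.quad(f, 0, x)
print(p, (3 * x + x * x - x ** 3 / 3) / 9)
```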


elementary set theory - Order type of a sum ($bigcup$) of sets

A quick question. Is $$\textrm{ot}(\bigcup\limits_{\gamma <\lambda}\alpha_{\gamma})=\bigcup\limits_{\gamma <\lambda}\textrm{ot}(\alpha_{\gamma})?$$
where $\textrm{ot}$ stands for the order type (and $\lambda$ can be a limit ordinal or not).




It seems like a nice property which could very well be false, but I neither know how to prove it nor can I find a counterexample.

linear algebra - Why do we need each elementary matrix to be invertible?




I've been presented with a proof in which, having $Ax=b$, elementary row operations are seen as certain special matrices $E_i$. We can then prove that by applying the same sequence of matrix products to the identity, we obtain the inverse. Suppose that $A$ is invertible:



\begin{eqnarray*}
{AB}&=&{I} \\
{E_n\dots E_2E_1AB}&=&{E_n\dots E_2E_1I} \\
{IB}&=&{E_n\dots E_2E_1I} \\
\end{eqnarray*}



For each $E_i$, there is the requirement that each one of them is invertible. Why is that needed? My guess is that if some of the $E_i$ is not invertible, then we could go back to different $A$'s and then, applying $E_i$'s again, we could go to a different $B$?



Answer



If we can use elementary matrices to start from $A$ and find $B=A^{-1}$, we should be able to reverse the process starting with $B$ and finding $A=B^{-1}$. The reverse of each step in the process is just applying the inverse elementary matrix. If an elementary matrix is not invertible, then we cannot reverse the step.



Another reason that each elementary matrix must be invertible is that the determinant of noninvertible matrices is zero. Furthermore, invertible matrices have nonzero determinant. Therefore, if even one of the $E_i$ is not invertible, then $$\det(B)=\det(E_n...E_1 I)=\det(E_n)...\det(E_1)\det(I)=0.$$ Thus, $B$ is not invertible. But we know that $B^{-1}=A$. That is a contradiction.


Sunday 27 December 2015

probability - Calculating expected value for a Binomial random variable



How do you calculate $E(X^2)$ given the number of trials and the probability of success?



$E(X) = np$, then $E(X^2) = $?




Do we have to draw up a table for $x=0,1,2,\ldots,n$ and then use the probability of success for each?



$$E(X^2) = \sum_x x^2 \ P(X=x) \ldots$$ this would take forever; is there a shortcut?


Answer



Well, $\mathrm{var}(X) = np(1-p)$ and $\mathrm{var}(X) = E(X^2) - (E(X))^2$, so: $$E(X^2) = \mathrm{var}(X) + (E(X))^2 = np(1-p) + n^2p^2 = np(1-p+np)$$
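A brute-force check of this shortcut against the definition (my addition):

```python
from math import comb

# Compare E[X^2] computed from the binomial pmf with np(1 - p + np).
n, p = 10, 0.3
pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
direct = sum(k * k * pk for k, pk in enumerate(pmf))
print(direct, n * p * (1 - p + n * p))  # both 11.1
```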


linear algebra - Positive diagonal entries in $2 times 2$ matrix

I am facing the following problem:





A symmetric $2 \times 2$ matrix $A$ has eigenvalues $\lambda_1 = 3$ and $\lambda_2 = 4$. Compute the determinant and trace of $A$. Is the following statement true, false or depends on the particular entries of $A$?



"All diagonal entries of $A$ are positive."




I know that $\det(A) = 12$ and $\mbox{tr}(A) = 7$. How can I determine the signs of the diagonal entries?
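A hint, added here since no answer was recorded: the eigenvalues $3$ and $4$ are positive, so $A$ is positive definite, and each diagonal entry is a value of the quadratic form, $a_{ii}=e_i^{T}Ae_i>0$ for the standard basis vector $e_i$. So the statement is true for every such $A$.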

Saturday 26 December 2015

elementary set theory - The cartesian product $\mathbb{N} \times \mathbb{N}$ is countable




I'm examining a proof I have read that claims to show that the Cartesian product $\mathbb{N} \times \mathbb{N}$ is countable, and as part of this proof, I am looking to show that the given map is surjective (indeed bijective), but I'm afraid that I can't see why this is the case. I wonder whether you might be able to point me in the right direction?



Indeed, the proof begins like this:



"For each $n \in \mathbb{N}$, let $k_n, l_n$ be such that $n = 2^{k_n - 1} \left(2l_n - 1 \right)$; that is, $k_n - 1$ is the power of $2$ in the prime factorisation of $n$, and $2 l_n - 1$ is the (necessarily odd) number $\frac{n}{2^{k_n - 1}}$."



It then states that $n \mapsto \left(k_n , l_n \right)$ is a surjection from $\mathbb{N}$ to $\mathbb{N} \times \mathbb{N}$, and so ends the proof.



I can intuitively see why this should be a bijection, I think, but I'm not sure how to make these feelings rigorous?




I suppose I'd say that the map is surjective since, given any $\left(k_n , l_n \right) \in \mathbb{N} \times \mathbb{N}$, we can simply take $n$ to be equal to $2^{k_n - 1} \left(2l_n - 1 \right)$ and note that $k_n - 1 \geq 0$, thus $2^{k_n - 1}$ is greater than or equal to one and so is a natural number (making the obvious inductive argument, noting that multiplication on $\mathbb{N}$ is closed), and similarly that $2 l_n - 1 \geq 2\cdot 1 - 1 = 1$ is also a natural number; thus the product of these two, $n$, must also be a natural number. Is it just as simple as this?



I suppose my gut feeling in the proving that the map is injective would be to assume that $2^{k_n - 1} \left(2 l_n - 1 \right) = 2^{k_m - 1} \left(2 l_m - 1 \right)$ and then use the Fundamental Theorem of Arithmetic to conclude that $n = m$. Is this going along the right lines? The 'implicit' definition of the mapping has me a little confused about the approach.






On a related, but separate note, I am indeed aware that if $K$ and $L$ are any countable sets, then so is $K \times L$, so trivially, taking the identity mapping we see trivially that this map is bijective and therefore that $\mathbb{N}$ is certainly countable (!), and thus so is $\mathbb{N} \times \mathbb{N}$. Hence, it's not really the statement that I'm interested in, but rather the exciting excursion into number theory that the above alternative proof provides.


Answer



Your intuition is correct. We use the fundamental theorem of arithmetic, namely the prime factorization is unique (up to order, of course).




First we prove injectivity:



Suppose $(k_n,l_n),(k_m,l_m)\in\mathbb N\times\mathbb N$ and $2^{k_n - 1} (2 l_n - 1 ) = 2^{k_m - 1} (2 l_m - 1)$.



$2$ is a prime number and $2t-1$ is odd for all $t$, and so we have that the power of $2$ is the same on both sides of the equation, and it is exactly $k_n=k_m$.



Divide by $2^{k_n}$ and therefore $2l_n-1 = 2l_m-1$, add $1$ and divide by $2$, so $(k_n,l_n)=(k_m,l_m)$ and therefore this mapping is injective.



Surjectivity is even simpler: take $(k,l)\in\mathbb N\times\mathbb N$ and let $n=2^{k-1}(2l-1)$. Now $n\mapsto(k,l)$, because $2l-1$ is odd, so the power of $2$ in the prime decomposition of $n$ is exactly $k-1$, and from there $l$ is determined to be our $l$. (If you look closely, this is exactly the same argument as for injectivity, only applied "backwards", which is a trait many proofs of this kind have.)







As for simpler proofs, there are infinitely many... from the Cantor's pairing function ($(n,m)\mapsto\frac{(n+m)(n+m+1)}{2}+n$), to Cantor-Bernstein arguments by $(n,m)\mapsto 2^n3^m$ and $k\mapsto (k,k)$ for the injective functions. I like this function, though. I will try to remember it and use it next time I teach someone such proof.
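A small script makes the map tangible (my addition):

```python
# n = 2^(k-1) * (2l - 1): strip the powers of 2 to recover (k, l).
def split(n):
    k = 1
    while n % 2 == 0:
        n //= 2
        k += 1
    return (k, (n + 1) // 2)

for n in range(1, 13):
    print(n, split(n))
# Every pair (k, l) appears exactly once as n runs through 1, 2, 3, ...
```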


summation - In the process of proving Sum of Geometric Progression




I was reading the proof for the sum of geometric progression at http://www.proofwiki.org/wiki/Sum_of_Geometric_Progression



and one of the statements is the following:



$$\sum_{j=1}^{n-1}{x^j}-\sum_{j=0}^{n-1}{x^j}=x^n+\sum_{j=1}^{n-1}{x^j}-(x^0+\sum_{j=1}^{n-1}{x^j})$$



I tried to decipher why this is true but I failed. How is the above statement derived?


Answer



I assume you're referring to Proof 2, in which case you've copied the equality incorrectly; it should read:




$$\sum_{j=1}^{n}{x^j}-\sum_{j=0}^{n-1}{x^j}=x^n+\sum_{j=1}^{n-1}{x^j}-(x^0+\sum_{j=1}^{n-1}{x^j})$$



Further, notice that $$\sum_{j=0}^{n-1} x^j = x^0+\sum_{j=1}^{n-1}x^j.$$



And,



$$\sum_{j=1}^{n}x^j=x^n+\sum_{j=1}^{n-1}x^j.$$



Thus, $$\sum_{j=1}^{n}{x^j}-\sum_{j=0}^{n-1}{x^j}=x^n+\sum_{j=1}^{n-1}x^j-\left(x^0 + \sum_{j=1}^{n-1}x^j\right).$$




Your confusion may be coming from the following:



$$x\sum_{j=0}^{n-1}{x^j}=\sum_{j=0}^{n-1}x\cdot{x^j}=\sum_{j=0}^{n-1}{x^{j+1}}=\sum_{j=1}^{n}{x^j},$$



where in the last step we let $j \mapsto j-1$ and thus needed to shift the indices from $0, \dots, n-1$ to $1, \dots, n$.


trigonometry - Sum of Sine Series



I have a sine series given by



$\sum^\infty_{n=0}{\frac{\sin(n\theta)}{2n-1}}$,



and I would like to find the sum assuming that $0 < \theta < \pi$. Using some similar posts on this site I was able to express the sum as




$\text{Im} \left( \sqrt{e^{i\theta}} \space \text{arctanh}(\sqrt{e^{i\theta}}) - 1\right)$.



I'm not well-versed with these functions aside from their definitions, so how can I simplify this expression further?


Answer



You did a good job showing that
$$\sum_{n=0}^\infty \frac{e^{i n t}}{2 n-1}=e^{\frac{i t}{2}} \tanh ^{-1}\left(e^{\frac{i t}{2}}\right)-1$$
Do the same
$$\sum_{n=0}^\infty \frac{e^{-i n t}}{2 n-1}=e^{-\frac{i t}{2}} \tanh ^{-1}\left(e^{-\frac{i t}{2}}\right)-1$$ Combine them to get
$$\sum_{n=0}^\infty \frac{\sin(n t)}{2 n-1}=\frac i 2 \left(e^{-\frac{i t}{2}} \tanh ^{-1}\left(e^{-\frac{i t}{2}}\right)-e^{\frac{i t}{2}}
\tanh ^{-1}\left(e^{\frac{i t}{2}}\right) \right)$$


Now, using the hint given by metamorphy,
$$\tanh ^{-1}\left(e^{-\frac{i t}{2}}\right)=\frac 12 \log\left( \coth \left(\frac{it}{4}\right)\right)=\frac{1}{2} \log \left(-i \cot \left(\frac{t}{4}\right)\right)$$
$$\tanh ^{-1}\left(e^{\frac{i t}{2}}\right)=\frac 12 \log\left( \coth \left(-\frac{it}{4}\right)\right)=\frac{1}{2} \log \left(i \cot \left(\frac{t}{4}\right)\right)$$ After a bunch of simplifications, you should get
$$\sum_{n=0}^\infty \frac{\sin(n t)}{2 n-1}=\frac{\pi}{4} \cos \left(\frac{t}{2}\right)+\frac{1}{2} \sin
\left(\frac{t}{2}\right) \log \left(\cot \left(\frac{t}{4}\right)\right)$$
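A numeric spot check of the final formula (my addition):

```python
import numpy as np

# Compare a large partial sum of sum_{n>=1} sin(n t)/(2n - 1) with the closed form.
t = 1.2  # any value in (0, pi)
n = np.arange(1, 200001)
partial = np.sum(np.sin(n * t) / (2 * n - 1))
closed = (np.pi / 4) * np.cos(t / 2) \
       + 0.5 * np.sin(t / 2) * np.log(1 / np.tan(t / 4))
print(partial, closed)  # agreement to a few digits; the series converges slowly
```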


calculus - Proving of Integral $int_{0}^{infty}frac{e^{-bx}-e^{-ax}}{x}dx = lnleft(frac{a}{b}right)$




Prove that



$$
\int_{0}^{\infty}\frac{e^{-bx}-e^{-ax}}{x}\,dx = \ln\left(\frac{a}{b}\right)
$$





My Attempt:



Define the function $I(a,b)$ as



$$
I(a,b) = \int_{0}^{\infty}\frac{e^{-bx}-e^{-ax}}{x}\,dx
$$



Differentiate both sides with respect to $a$ to get




$$
\begin{align}
\frac{dI(a,b)}{da} &= \int_{0}^{\infty}\frac{0-e^{-ax}(-x)}{x}\,dx\\
&= \int_{0}^{\infty}e^{-ax}\,dx\\
&= -\frac{1}{a}(0-1)\\
&= \frac{1}{a}
\end{align}
$$




How can I complete the proof from here?


Answer



A problem-specific solution is as follows:



\begin{align*}
\int_{0}^{\infty} \frac{e^{-bx} - e^{-ax}}{x} \, dx
&= - \int_{0}^{\infty} \int_{a}^{b} e^{-xt} dt \, dx \\
&= - \int_{a}^{b} \int_{0}^{\infty} e^{-xt} dx \, dt \\
&= - \int_{a}^{b} \frac{dt}{t}
= - \left[ \log t \right]_{a}^{b} = \log\left(\frac{a}{b}\right).

\end{align*}



Interchanging the order of integration is justified either by Fubini's theorem or Tonelli's theorem.
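Numerical confirmation for sample parameters (my addition):

```python
from math import log, exp
from scipy import integrate

# Check the value log(a/b) for sample a, b (tail beyond 50 is negligible).
a, b = 5.0, 2.0
val, _ = integrate.quad(lambda x: (exp(-b * x) - exp(-a * x)) / x, 0, 50)
print(val, log(a / b))
```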


real analysis - Proving continuity of a function using epsilon and delta

I've just got a real quick question about proving the continuity of a function using $\epsilon$ and $\delta$ definition of continuity. The question is this:




Let $f\colon X\to R$ be continuous where $X$ is some subset of $R$. Prove that the function $1/f\colon x\mapsto 1/f(x)$ is continuous at $p$ in $X$, provided that $f(p)\neq0$.



The definition states that "A function $f(x)$ is continuous at $p$ iff for every $\epsilon>0$ there exists some $\delta>0$ such that
$|x-p|<\delta$ implies $|f(x) -f(p)|<\epsilon$."



After that, I am super stuck...any help would be greatly appreciated.
Thanks!

Friday 25 December 2015

how to show that this complex series converges?

If $$\sum_{n=1}^{\infty} \frac{a_{n}}{n^{s}}$$ converges ($s$ is real)

and $\operatorname{Re}(z)>s$.
Then $$\sum_{n=1}^{\infty} \frac{a_{n}}{n^{z}}$$
also converges, where $(a_n)$ is a complex sequence.

calculus - Extended $\lim_{x \rightarrow 0}{\frac{\sin(x)}{x}} = 1$ limit law?



So I've learned that $\lim_{x \rightarrow 0}{\frac{\sin(x)}{x}} = 1$ is true and the following picture really helped me get an intuitive feel for why that is



[picture omitted]



I have been told that this limit is true whenever the argument of sine matches the denominator and they both tend to zero. That is,




$$\lim_{x \rightarrow 0}{\frac{\sin(5x)}{5x}} = 1$$



$$\lim_{x \rightarrow 0}{\frac{\sin(x^2)}{x^2}} = 1$$



$$\lim_{x \rightarrow 0}{\frac{\sin(\sin(x))}{\sin(x)}} = 1$$



But I don't understand why.



Question: Is there an intuitive explanation for why the rule $\lim_{x \rightarrow 0}{\frac{\sin(\text{small})}{\text{same small}}} = 1$ holds?




The picture above really helped me understand the original limit, but it doesn't really help me understand why the others are true.


Answer



I'm not sure that this falls into the category of "intuitive explanation" but the general phenomenon you are considering is a consequence of the commutativity of composition of continuous functions and limits. If you let $f(x) = \frac{\sin x}x$, then for any continuous function $g$ with $g(0)=0$ we have $\lim_{x \rightarrow 0} f \circ g(x) =1$.


Thursday 24 December 2015

Integration by parts



[image of the problem statement omitted]




I have done the 1st part of the question and the answer I got is $\frac{5e^4 - 1}{32}$, which I verified with a calculator too. But I am confused about how to approach the deduction part using the previous result (since it is stated HENCE). Any help is greatly appreciated.


Answer



Substitute $t=x^2$ and you will get:



$$A = \frac 12 \int_1^e t^3(\ln\sqrt{t})^2 dt = \frac 18 \int_1^e t^3(\ln t)^2dt$$
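If part 1 was indeed $\int_1^e t^3(\ln t)^2\,dt$, whose value is the $\frac{5e^4-1}{32}$ quoted by the asker (an assumption on my part, since the problem statement image is missing), the deduction finishes as
$$A=\frac18\cdot\frac{5e^4-1}{32}=\frac{5e^4-1}{256}.$$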


real analysis - Continuous Function and a Dense Set



Let $S$ be an open set in $\mathbb{R}$. Let $f$ be a continuous function such that $f(S)$ is dense in $\mathbb{R}$. Let $\alpha$ be an arbitrary element of $\mathbb{R}$. Then as $f(S)$ is dense in $\mathbb{R}$, there exists $\{z_{i}\} \subset S$ such that $\lim f(z_{i}) = \alpha$. Since $f$ is continuous does this imply that $\alpha = f(z)$ for some $z \in S$?



Answer



No, it does not. Let $S = \mathbb{R}\setminus \{0\}$ and let $f(x)=x$; then $f(S)$ is dense in $\mathbb{R}$, but for $\alpha=0$ there is no $z\in S$ such that $f(z)=\alpha$.


functions - Finding the form of this holonomic sequence?




I am looking for the exponential function that describes the following behavior:



Start at the value 1. The next value of the sequence is calculated by the following formula:



Value of Previous + Value of Previous * (1/3)


The idea is that this is experience in a video game, where each level is harder to obtain than the last. Meaning the sequence would be:




1, 4/3, 16/9, 64/27 ...


After staring at this for so long, I couldn't come up with the equation for it. It obviously needs to be an exponential function, but I thought for sure I'd need to write 1/3 somewhere in the correct form.



The actual answer to this is (3/4)^(1-n), but I do not understand how I could have come to that conclusion if I hadn't used Wolfram Alpha's Pattern Finder.



How can this be solved? I have to approach these kinds of patterns all the time in my programming work, and I rely on the Wolfram tool far too often.


Answer



Let $x_{n-1}$ be the previous number. Then,




$$x_n=x_{n-1}+\frac13x_{n-1}=\frac43x_{n-1}$$



We then know this is in geometric progression, so $x_n=x_0\times\left(\frac43\right)^n$.



If you did not see this, my first attempt would be to write out the first few terms to see if you can spot a pattern. Once you think you have the right formula, you can use induction to verify your results.



Or, since you knew the result should be an exponential function, you could try substitution:



$$x_n=b^n\implies b^n=\frac43b^{n-1}$$




Divide both sides by $b^{n-1}$, you will find that $b=\frac43$.


integration - What is the simplest technique to evaluate the following definite triple integral?




Consider the following definite triple integral:



$$ \int_0^\pi \int_0^\pi \int_0^\pi \frac{x\sin x \cos^4y \sin^3z}{1 + \cos^2x} ~dx~dy~dz $$



According to Wolfram Alpha, this evaluates to $\frac{\pi^3}{8}$, but I have no idea how to obtain this result. The indefinite integral $$ \int \frac{x \sin x}{1 + \cos^2 x}~dx $$ appears to not be expressible in terms of elementary functions. Thus, I am at a loss as to what sort of techniques might be used to evaluate this integral. For context, this is from a past year's vector calculus preliminary exam at my graduate school, so while I'm sure there are some advanced integration techniques that can be used here, I'm particularly interested in what elementary techniques might be used to evaluate the integral, as I don't think something like, for instance, residue techniques would be considered pre-requisite knowledge for taking this exam.


Answer



First off, note that the integrals w.r.t. $y$ and $z$ are quite trivial to evaluate. Then, consider $x\mapsto\pi-x$, since trig functions are symmetric about $\pi/2$.



$$I=\int_0^\pi\frac{x\sin(x)}{1+\cos^2(x)}~\mathrm dx=\int_0^\pi\frac{(\pi-x)\sin(x)}{1+\cos^2(x)}~\mathrm dx$$




Add these together and apply $\cos(x)\mapsto x$.



$$\begin{align}\frac2\pi I&=\int_0^\pi\frac{\sin(x)}{1+\cos^2(x)}~\mathrm dx\\&=\int_{-1}^1\frac1{1+x^2}~\mathrm dx\\&=\arctan(1)-\arctan(-1)\\&=\frac\pi2\end{align}\\\implies I=\frac{\pi^2}4$$
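To finish the original problem (my addition): $\int_0^\pi\cos^4y\,dy=\frac{3\pi}{8}$ and $\int_0^\pi\sin^3z\,dz=\frac43$, so the triple integral equals
$$\frac{\pi^2}{4}\cdot\frac{3\pi}{8}\cdot\frac{4}{3}=\frac{\pi^3}{8},$$
in agreement with Wolfram Alpha.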


proof verification - The square root of a prime number is irrational


If $p$ is a prime number, then $\sqrt{p}$ is irrational.




I know that this question has been asked but I just want to make sure that my method is clear. My method is as follows:




Let us assume that the square root of the prime number $p$ is rational. Hence we can write $\sqrt{p} = \frac{a}{b}$ (in lowest terms). Then $p = \frac{a^2}{b^2}$, and so $p b^2 = a^2$.



Hence $p$ divides $a^2$, so $p$ divides $a$. Substituting $a = pk$ gives $pb^2 = p^2k^2$, so $b^2 = pk^2$, and we find that $p$ divides $b$. Hence this is a contradiction, as they should be relatively prime, i.e., gcd$(a,b)=1$.



real analysis - Prove the limit $\lim_{x\rightarrow-1^{+}} \frac{1}{x^{2} -1}$ exists.



For each of the following, use definitions (rather than limit theorems) to prove that the limit exists. Identify the limit in each case.




(c) $\lim_{x\rightarrow-1^{+}} \frac{1}{x^{2} -1}$



Proof: By definition, the function $f(x)$ is said to converge to infinity as $x \to a$ if and only if there is an open interval $I$ containing $a$ such that, given a real $M$, there is a $\delta > 0$ such that $0 < |x - a| < \delta$ implies $f(x) > M$, in which case $f(x)$ approaches infinity as $x \to a$.



Let L = infinity, and suppose ε > 0. And suppose M > 0. Then there is a δ > 0 such that |x - (-1)| < ε. Then choose M =
Can someone please help me prove the limit exists? I don't know how to continue.
Please, I would really appreciate it. Thank you.


Answer



For any $\;M\in\Bbb R^+\;$ and $\;-1<x<1\;$

$$\frac1{|x^2-1|}>M\iff (x+1)(1-x)<\frac1M\iff x+1<\frac1{M(1-x)}$$

and since $\;x>-1\implies 1-x<2\iff\frac1{1-x}>\frac12\;$, we can choose $\;\delta_M:=\min\left(1,\frac1{2M}\right)\;$, and thus:

$$x+1<\delta_M\implies\left|\frac1{x^2-1}\right|>M$$

and the above proves

$$\lim_{x\to -1^+}\frac1{x^2-1}=-\infty$$

since $\;-1<x<1\implies x^2-1<0\;$


complex analysis - Solving the improper integral $\int_0^\infty \frac{x^{1/3}}{1+x^2}\, \mathrm dx$

I'm trying to solve:



$\displaystyle \int \limits_0^\infty \dfrac{x^{1/3}}{1+x^2} \mathrm dx$



I have tried contour integration with $C_R^+$ and the real line like this:



$\displaystyle \int \limits_T \dfrac{z^{1/3}}{1+z^2} \mathrm dz = \int \limits_{-\infty}^\infty \dfrac{z^{1/3}}{1+z^2}\mathrm dz + \int \limits_{C_R^+} \frac{z^{1/3}}{1+z^2} \mathrm dz$



Where the last integral tends to $0$ as $R \longrightarrow \infty$




$\text{Res}(f(z);i) = \dfrac{i^{1/3}}{2i}$



and



$\displaystyle \int \limits_{-\infty}^\infty \frac{z^{1/3}}{1+z^2}\mathrm dz = \int \limits_{0}^\infty \frac{z^{1/3}}{1+z^2} \mathrm dz + \int \limits_{-\infty}^0\frac{z^{1/3}}{1+z^2} \mathrm dz$



If I manipulate the last term by changing the limits and substituting $u=-t$, I get:



$\displaystyle \int \limits_{-\infty}^0\dfrac{z^{1/3}}{1+z^2} \mathrm dz = -\int \limits_{0}^{-\infty}\frac{z^{1/3}}{1+z^2} \mathrm dz$




If I now substitute $u=-z$, $u'=-1$



$\displaystyle \int \limits_{-\infty}^0\dfrac{z^{1/3}}{1+z^2} \mathrm dz = \int \limits_{0}^{\infty}\dfrac{(-u)^{1/3}}{1+u^2}\mathrm du = (-1)^{1/3}\int \limits_{0}^{\infty}\dfrac{u^{1/3}}{1+u^2}\mathrm du$



$\displaystyle \int \limits_{-\infty}^\infty \dfrac{z^{1/3}}{1+z^2}\mathrm dz = \left(1+e^\frac{i\pi}{3}\right) \int \limits_{0}^\infty \frac{z^{1/3}}{1+z^2} \mathrm dz $



So I end up with:



$\displaystyle \dfrac{2i\cdot\pi \cdot e^{i\cdot\pi /6}}{2i\cdot\left(1+e^\frac{i\pi}{3}\right)} = \dfrac{\pi\cdot e^{i\cdot \pi /6}}{\left(1+e^\frac{i\pi}{3}\right)} = \int \limits_{0}^\infty \dfrac{z^{1/3}}{1+z^2} \mathrm dz$, which is the wrong answer
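One observation, added here since no answer was recorded: the final expression is actually correct, just unsimplified. Since $1+e^{i\pi/3}=2\cos\frac\pi6\,e^{i\pi/6}$, the phase factors cancel and
$$\frac{\pi e^{i\pi/6}}{1+e^{i\pi/3}}=\frac{\pi}{2\cos\frac\pi6}=\frac{\pi}{\sqrt3},$$
which agrees with the standard value $\int_0^\infty\frac{x^{a-1}}{1+x^2}\,dx=\frac{\pi}{2\sin(\pi a/2)}$ at $a=\frac43$.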

Relabelling indices in multiple summations

Suppose I have a summation that looks like
\begin{align}
\sum_{a=0}^{n/2} \sum_{b=0}^a \sum_{c=0}^{n-2b}\sum_{d=0}^c \alpha(a,b,c,d)f(c-2d),
\end{align}

where $\alpha$ and $f$ are functions of the indices $a,b,c,d$, and I want to change these summations in a particular way. I want the argument of the function $f$ to depend on just one index, and for the summation of that corresponding index to be independent of other indices. That is, I want to rewrite the above summation as
\begin{align}
\sum_w \sum_x \sum_y \sum_{z=-n/2}^{n/2} \alpha(w,x,y,y/2 - z) f(z),
\end{align}

where $w,x,y$ have replaced $a,b,c$. Furthermore, we see that $d = y/2 - z$.




I am having trouble figuring out the range of the three summations. I know there is some freedom involved here and I believe that the $w$ summation can be taken to be
\begin{align}
\sum_{w=0}^{n/2}
\end{align}

through its identification with the $a$-summation.



How can I systematically figure out what the remaining limits of summation are (for the $x$ and $y$ summations)?

Does $A, B \in S_n^{+}$ imply $A + B - |A - B| \in S_n^{+}$? (where $S_n^{+}$ is the set of positive semidefinite symmetric matrices)



If $M$ is a (say real) matrix, we can define its modulus $|M|$ as the square root of $M' M$.



Now if $A$, $B$ are positive semidefinite symmetric matrices, is $A + B - | A - B |$ positive semidefinite?



If $A$ and $B$ commute, they can be diagonalized simultaneously, in which case the answer is yes. What is the situation if $A$ and $B$ do not commute?



Thanks for your help.



Answer



HINT:



You are asking whether the "formal" $\min$ of two positive matrices is still positive, where
$$\min(A,B) = \frac{1}{2}( A+B - |A-B|)$$



No, it may fail to be positive semidefinite.



Let's first explain the meaning behind the fact that it may fail to be positive. Notice that $\min(A,B)$ is $\preceq$ both $A$ and $B$ for all $A$, $B$ hermitian. Assume now that it were true that whenever $A$, $B \succeq 0$ we also have $\min(A,B) \succeq 0$. Then it would follow that whenever $A$, $B \succeq C$ we also have $\min(A,B) \succeq C$. That would mean that $\min(A,B)$ is a true infimum for $A$ and $B$. But, alas, the hermitian matrices with the order $\succeq$ do not form a lattice... That is the meaning of the statement.




Now, how to get a counterexample. It will work already for $2\times 2$ real symmetric matrices. Take $A$ positive definite and $B$ positive semidefinite with a $0$ eigenvalue (that is, determinant $0$). Make sure that $A$ and $B$ do not commute. Now calculate $C = \min(A,B)$. Since $C\preceq B$, its smallest eigenvalue has to be $\le 0$, the smallest eigenvalue of $B$. If $A$ and $B$ do not commute you will probably get a $C$ with determinant $\ne 0$, so it will not be positive semidefinite. From here, add a small positive matrix to $B$, and still have $C\not\succeq 0$.



I recommend calculating the square roots of matrices with Mathematica or Wolframalpha ( command : MatrixPower[ D, 1/2], for $D= (A-B)^2$)
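Here is a small numerical experiment along the lines of the hint (my own sketch, using SciPy's matrix square root rather than Mathematica):

```python
import numpy as np
from scipy.linalg import sqrtm

# Non-commuting pair: A positive definite, B positive semidefinite with a 0 eigenvalue.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])      # rank 1: eigenvalues 2 and 0

absAB = np.real(sqrtm((A - B) @ (A - B)))   # |A - B|, square root of (A - B)^2
C = 0.5 * (A + B - absAB)                   # the formal min(A, B)
print(np.linalg.eigvalsh(C))                # one eigenvalue comes out negative
```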


algebra precalculus - Determine the sum of three consecutive odd ones



I have this statement:




The central term of three consecutive odd ones can be determined, if
it is known that the sum of these is:




i) At most $75$



ii) At least $63$




My attempt was:



Let $2n-1$ be an odd number; then $(2n-1) + (2n+1) + (2n+3) = 6n+3$.



Using $i)$: $n \leq 12$. Infinitely many odd numbers exist.




Using $ii)$: $n \geq 10$. Infinitely many odd numbers exist.



Using $i), ii)$ together: $n \in \{10,11,12\}$.



Since the sum of three consecutive odd numbers is of the form $6n+3$, the sum must be an odd number, so the unique possible value is $11$. But if $6n+3=11$, then $2n+1$ isn't an integer, therefore I can't conclude anything.



But my second solution was along the same lines; let $2n+1$ be an odd number, then:



$(2n+1)+(2n+3)+(2n+5)=6n+9$




Using $i)$: $n \leq 11$. Infinitely many odd numbers exist.



Using $ii)$: $n \geq 9$. Infinitely many odd numbers exist.



Using both together: $n \in \{9,10,11\}$.



The sum is of the form $6n+9$; therefore it is an odd sum too.



Then the possible values are $9,11$. But with $6n+9=11$, the central term isn't an integer; however, with $6n+9=9 \implies n = 0$, the central term is $3$, which works with $1+3+5 = 9$. According to the guide, the correct solution is





Additional information is required




That agrees with my first solution, but not with my second solution. What is wrong with my second solution?


Answer



In the first case, for the consecutive odd terms
$2n-1,$ $2n+1,$ $2n+3,$
you correctly deduce that the sum is $6n+3$ and that $n \in \{10,11,12\}.$




But then you start treating $n$ as if it were the sum.
That is a mistake. The variable $n$ is just some integer that you put into some formulas for convenience. It is not equal to any of the numbers of interest in the question:
not the middle term, not either of the other terms, and certainly not the sum.



Note that $6n+3$ is just as odd when $n$ is $10$ or $12$ as it is when $n=11.$
So you have no reason based on "the sum must be odd" to reject any of the three values of $n.$
Likewise you have no reason to take any of the elements of $\{10,11,12\}$
and say it must be equal to $6n+3.$
Again, $n$ is just an arbitrary integer parameter you introduced, and $6n+3$ is the sum, which is something completely different from any possible value of $n.$




You have the same kind of mistake in your second attempt.
You come up with $1 + 3 + 5 = 9,$ but $9$ is not at least $63$ as was required for the sum of the three consecutive odd numbers.
In fact $n = 9$ is a perfectly fine solution, but then the terms are
$$(2n+1)+(2n+3)+(2n+5) = 19+21+23 = 63.$$
On the other hand, $n=10$ and $n=11$ are perfectly fine solutions too when you correctly write out all the formulas. So you cannot conclude that $n=9.$



It is fine to introduce an arbitrary integer variable like $n$ so that you have something to base formulas on, but you must always remember what the variables you introduce are not (in this case, not the sum of the three terms).







My approach: let $S$ be the sum, $m$ the middle term. We know $m$ is odd and the other two terms are $m - 2$ and $m + 2.$
The sum of the three terms is therefore $3m.$
That gives us $m = S/3.$
Since $63 \leq S \leq 75,$ we have $21 \leq m \leq 25.$
But by inspection, $m = 21,$ $m = 23,$ and $m=25$ all solve the given conditions. That is why more information is required.


radicals - Follow-up Question: Proof of Irrationality of $\sqrt{3}$



As a follow-up to this question, I noticed that the proof used the fact that $p$ and $q$ were "even". Clearly, when replacing factors of $2$ with factors of $3$ everything does not simply come down to being "even" or "odd", so how could I go about proving that $\sqrt{3}$ is irrational?


Answer



It's very simple, actually.
Assume that $\sqrt{3}$ = $\frac{p}{q}$, with $p,q$ coprime integers.




Then, $p = \sqrt{3}q$ and $p^2 = 3q^2$. Since $3\mid p^2$ and $3$ is prime, $3\mid p$, say $p = 3k$. So actually, $9\mid p^2$, and then $3q^2 = p^2 = 9k^2$ gives $q^2 = 3k^2$, so $3\mid q^2$, meaning $3\mid q$. Since $3$ divides both $p$ and $q$, the two numbers are not coprime. This is a contradiction, since we assumed that they $\textbf{were}$ coprime. Therefore, $\sqrt{3}$ cannot be written as a ratio of coprime integers and must be irrational.






$\textbf{NOTE:}$ The word "even" in the original proof was just a substitution for "divisible by $2$". This same idea of divisibility was used in this proof to show that $p$ and $q$ were divisible by $3$. It really is the same idea. There just isn't a nice concise word like "even" that was used to describe a multiple of $3$ in this proof.


combinatorics - Dealing with floor function in binomial coefficients




I'm trying to estimate $\binom{n}{\left \lfloor{\alpha n}\right \rfloor }$ asymptotically using Stirling's formula. However, I'm a little lost with what to do about the floor function here.



In the case without the floor function, there is a greater ease to combine the $\alpha$ and $n$ terms, such as
$$\binom{n}{\alpha n}=\frac{n!}{(\alpha n)!(n-\alpha n)!} \sim \frac{\sqrt{2\pi n}\left(\frac{n}{e}\right)^n}{\sqrt{2\pi \alpha n}\left(\frac{\alpha n}{e}\right)^{\alpha n} \sqrt{2\pi (n-\alpha n)}\left(\frac{(n-\alpha n)}{e}\right)^{(n-\alpha n)}}
=\frac{1}{\sqrt{2 \pi\alpha (n- \alpha n)}
\alpha^{\alpha n}\left(\left(\frac{n}{e}\right)^{n}\right)^{(\alpha - 1)}
\left(\frac{(n-\alpha n)}{e}\right)^{(n-\alpha n)}}$$
I'm not sure if there's a further simplification of this (if there is, please left me know!) but I'm also not sure how this would work with the floor function.



I tried splitting into cases, so if $\frac{1}{\alpha}n$ leaves $\left \lfloor{\alpha n}\right \rfloor$ the same.



Answer



The asymptotic expansion you have written without the floor function implicitly uses the floor function, or you couldn't have approximated $(\alpha n)!$.

What you have done is a good start. The next thing to do is replace all occurrences of $n-\alpha n$ by $n(1-\alpha)$ so you can group more terms with $n$ together.

I'll let you do that. If you still have problems, comment and I'll proceed further.


Wednesday 23 December 2015

real analysis - continuous functions on $\mathbb R$ such that $g(x+y)=g(x)g(y)$




Let $g$ be a function on $\mathbb R$ to $\mathbb R$ which is not identically zero and which satisfies the equation $g(x+y)=g(x)g(y)$ for $x$,$y$ in $\mathbb R$.



$g(0)=1$. If $a=g(1)$, then $a>0$ and $g(r)=a^r$ for all $r$ in $\mathbb Q$.



Show that the function is strictly increasing if $g(1)$ is greater than $1$, constant if $g(1)$ is equal to $1$ or strictly decreasing if $g(1)$ is between zero and one, when $g$ is continuous.


Answer



For $x,y\in\mathbb{R}$ and $m,n\in\mathbb{Z}$,
$$
\eqalign{

g(x+y)=g(x)\,g(y)
&\implies
g(x-y)={g(x) \over g(y)}
\\&\implies
g(nx)=g(x)^n
\\&\implies
g\left(\frac{m}nx\right)=g(x)^{m/n}
}
$$
so that $g(0)=g(0)^2$ must be one (since if it were zero, then $g$ would be identically zero on $\mathbb{R}$), and with $a=g(1)$, it follows that $g(r)=a^r$ for all $r\in\mathbb{Q}$. All we need to do now is invoke the continuity of $g$ and the denseness of $\mathbb{Q}$ in $\mathbb{R}$ to finish.




For example, given any $x\in\mathbb{R}\setminus\mathbb{Q}$, there exists a sequence $\{x_n\}$ in $\mathbb{Q}$ with $x_n\to x$ (you could e.g. take $x_n=10^{-n}\lfloor 10^nx\rfloor$ to be the approximation of $x$ to $n$ decimal places -- this is where we're using that $\mathbb{Q}$ is dense in $\mathbb{R}$). Since $g$ is continuous, $y_n=g(x_n)\to y=g(x)$. But $y_n=a^{x_n}\to a^x$ since $a\mapsto a^x$ is also continuous.



Moral: a continuous function is completely determined by its values on any dense subset of the domain.


real analysis - Showing there is a bijection from all open subsets to all closed subsets of $M$

(From Pugh's RMA) Let $\mathcal{T}$ be the collection of open subsets of a metric space $M$, and $\mathcal{K}$ be the collection of all closed subsets. Show there is a bijection from $\mathcal{T}$ to $\mathcal{K}$.



I believe the bijection between $\mathcal{T}$ and $\mathcal{K}$ would be the function $f: \mathcal{T} \rightarrow \mathcal{K}$ that returns the closure of each open subset; the inverse function would return the interior.



I attempted to prove this using the Schroeder-Bernstein Thm: there is a bijection between $A$ and $B$ if there are injective functions $f: A \rightarrow B$ and $g: B \rightarrow A$. So I need to show that the closure and interior functions are injective.



But then I realized the interior function (which takes a closed set $U$ and returns the union of all open sets contained in $U$) isn't injective: consider an interval $[a,b] \subset \mathbb{R}$ and $[a,b] \cup \{p\} \subset \mathbb{R}$ ($p$ an isolated point). They have the same interiors (I think).



Where does my approach go wrong? Was it a mistake to choose the interior and closure functions to test for injectivity? How would you find the bijection between $\mathcal{T}$ and $\mathcal{K}$?
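One standard observation, added here since no answer was recorded: complementation $U\mapsto M\setminus U$ already maps $\mathcal{T}$ to $\mathcal{K}$ bijectively, since a set is open iff its complement is closed and $M\setminus(M\setminus U)=U$. So no Schroeder-Bernstein argument is needed, and the interior and closure maps, which are indeed not injective, can be avoided altogether.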

matrices - Change of basis matrix verification

Let $B$ and $C$ be two bases. To find the change of basis matrix $\phi_{B,C}$, I compute $\phi_{SB,B}$ and $\phi_{SB,C}$. Create the new matrix $T=[\phi_{SB,B}|\phi_{SB,C}]$. Reducing it to reduced row echelon form should yield $T=[I|\phi_{B,C}]$, right? Is there a way to verify that the new matrix indeed maps $[v]_B$ to $[v]_C$?

linear algebra - Calculating the characteristic polynomial



I'm stuck with this problem, so I've got the following matrix:



$$A = \begin{bmatrix}
4& 6 & 10\\
3& 10 & 13\\
-2&-6 &-8
\end{bmatrix}$$




Subtracting $\lambda$ times the identity matrix gives $A - \lambda I$:



$$\begin{bmatrix}
4 - \lambda& 6 & 10\\
3& 10 - \lambda & 13\\
-2&-6 & -8 - \lambda
\end{bmatrix}$$



I'm looking for the roots of the characteristic polynomial, i.e. of the determinant $\det(A - \lambda I)$. I can
do this on pen and paper, but I want to make this into an algorithm which can work
on any given 3x3 matrix.



I can then calculate the Determinant of this matrix by doing the following:



$$\det(A - \lambda I) = (4 - \lambda) \begin{vmatrix}
10 - \lambda&13 \\
-6 & -8 - \lambda
\end{vmatrix} - \cdots = (4 - \lambda)\big[
(10 - \lambda)(-8 - \lambda) - (-6 \cdot 13)
\big] - \cdots$$




I repeat this process for each of the columns inside the matrix (6, 10)..



Watching this video: Here
the guy factorises each of the (A) + (B) + (C) to this equation:



$$ \lambda (\lambda^{2} - 6\lambda+8) = 0$$



And then finds the roots: 0, 2, 4, which I understand perfectly.




Now, putting this into code and factorising the equation would prove to be difficult. So, I'm asking whether or not there is a simple way to calculate the determinant (using the method given here) and find the roots without having to factorise the equation. My aim is to be left with 3 roots based on the determinant.


Answer



I think a decently efficient way to get the characteristic polynomial of a $3 \times 3$ matrix is to use the following formula:
$$
P(x) = -x^3 + \mathrm{tr}(A)\,x^2 - \frac{[\mathrm{tr}(A)]^2 - \mathrm{tr}(A^2)}{2}\,x +
\frac{[\mathrm{tr}(A)]^3 + 2\,\mathrm{tr}(A^3) - 3\,\mathrm{tr}(A)\,\mathrm{tr}(A^2)}{6}
$$
Where $tr(A)$ is the trace of $A$, and $A,A^2,A^3$ are the matrix powers of $A$.



From there, you could use the cubic formula to get the roots.










In this case, we'd compute
$$
A =
\pmatrix{4&6&10\\3&10&13\\-2&-6&-8} \implies tr(A) = 6\\
A^2 =

\pmatrix{14&24&38\\16&40&56\\-10&-24&-34} \implies tr(A^2) = 20\\
A^3 =
\pmatrix{52&96&148\\72&160&232\\-44&-96&-140} \implies tr(A^3) = 72
$$
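In practice (my addition), NumPy can produce both the coefficients and the roots directly, which sidesteps the factorisation issue entirely:

```python
import numpy as np

A = np.array([[4.0, 6.0, 10.0],
              [3.0, 10.0, 13.0],
              [-2.0, -6.0, -8.0]])

print(np.poly(A))            # coefficients of det(xI - A): [1, -6, 8, 0] up to rounding
print(np.linalg.eigvals(A))  # roots of the characteristic polynomial: 0, 2 and 4
```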


calculus - Evaluate limit $\lim_{x \rightarrow 0}\left (\frac 1x- \frac 1{\sin x} \right )$

Can someone provide me with some hint how to evaluate this limit?
$$
\lim_{x \rightarrow 0}\left (\frac 1x- \frac 1{\sin x} \right )
$$
I tried l'Hôpital's rule but it didn't work.
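A hint, added here since no answer was recorded: combine the fractions first,
$$\frac 1x-\frac 1{\sin x}=\frac{\sin x - x}{x\sin x},$$
and then either apply l'Hôpital twice or use $\sin x = x - \frac{x^3}{6}+O(x^5)$: the numerator is $-\frac{x^3}{6}+O(x^5)$ while the denominator is $x^2+O(x^4)$, so the limit is $0$.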

Tuesday 22 December 2015

summation - How can I get the partial sum of $\sum_{k=1}^{n}\frac{1}{2k-1}$?

It is clear that the series $\sum_{k=1}^{n}\frac{1}{2k-1}$ diverges as $n\to\infty$, but I haven't succeeded in finding its partial sum using standard methods.



Note: WolframAlpha expresses the sum in terms of the digamma function.
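For what it's worth (my addition): splitting the harmonic sum into even and odd terms gives the closed form
$$\sum_{k=1}^{n}\frac{1}{2k-1}=H_{2n}-\tfrac12 H_n$$
in terms of harmonic numbers, which is equivalent to WolframAlpha's digamma expression via $H_m=\psi(m+1)+\gamma$.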

sequences and series - Double sum trouble



Evaluate:
$$\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} \frac{j^2i}{3^j(j3^i+i3^j)}$$



Honestly, I don't see where to start with this. I am sure that this is a trick question and I am missing something very obvious. I tried writing down a few terms for a fixed $j$ but I couldn't spot any pattern or some kind of easier series to handle.




Any help is appreciated. Thanks!


Answer



After symmetrization with respect to the exchange $i\leftrightarrow j$, the sum can be rewritten as
\begin{align}
\frac12\sum_{i,j=1}^{\infty} \left(\frac{j^2i}{3^j(j3^i+i3^j)}+\frac{i^2j}{3^i(j3^i+i3^j)}\right)=\frac12\sum_{i,j=1}^{\infty} \frac{i\cdot j}{3^i\cdot3^j}=\frac12\left(\sum_{i=1}^{\infty}\frac{i}{3^i}\right)^2=\frac{9}{32}.
\end{align}
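
The last two equalities use the standard evaluation $\sum_{i\geq1} ix^i = \frac{x}{(1-x)^2}$ for $|x|<1$; spelled out at $x=\tfrac13$:
$$\sum_{i=1}^{\infty}\frac{i}{3^i} = \frac{1/3}{(1-1/3)^2} = \frac{1/3}{4/9} = \frac34, \qquad \frac12\left(\frac34\right)^2 = \frac{9}{32}.$$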


proof verification - Prove that if $(f_n)$ converges to $f$ almost uniformly then $(f_n)$ converges to $f$ in measure.



Let $E$ be a measurable set, $(f_n)$ a sequence of real valued measurable functions on $E$ and $f$ a real valued measurable function on $E$. It is required to prove that if $(f_n)$ converges to $f$ almost uniformly then $(f_n)$ converges to $f$ in measure. The following is my proof.



Let $\epsilon>0$. Suppose $(f_n)$ converges to $f$ almost uniformly. Then there exists $F\subseteq E$ such that $m(F)<\epsilon$ and $f_n$ converges uniformly to $f$ on $E\setminus F$. Thus there exists $N\in\mathbb{N}$ such that for each $n\geq N$ and $x\in E\setminus F,$ $|f_n(x)-f(x)|<\epsilon.$




But\begin{align} \{x\in E:|f_n(x)-f(x)|\geq\epsilon\}=\{x\in F:|f_n(x)-f(x)|\geq\epsilon\}\cup\{x\in E\setminus F:|f_n(x)-f(x)|\geq\epsilon\}.\end{align}



Let $n\geq N$. Then $m(\{x\in F:|f_n(x)-f(x)|\geq\epsilon\})<\epsilon$ and



$m(\{x\in E\setminus F:|f_n(x)-f(x)|\geq\epsilon\})=0$.



Therefore for each $n\geq N$, $m(\{x\in E:|f_n(x)-f(x)|\geq\epsilon\})<\epsilon$.



Hence $m(\{x\in E:|f_n(x)-f(x)|\geq\epsilon\})\to 0$ as $n\to\infty$ and the proof is complete.




Is this proof alright? Thanks.


Answer



Yes, your proof is essentially correct. Here it is with some small improvements.



Let $\epsilon>0$. Suppose $(f_n)$ converges to $f$ almost uniformly. Then there exists $F\subseteq E$ such that $m(F)<\epsilon$ and $f_n$ converges uniformly to $f$ on $E\setminus F$. Thus there exists $N\in\mathbb{N}$ such that for each $n\geq N$ and $x\in E\setminus F,$ $|f_n(x)-f(x)|<\epsilon.$



It means, for $n\geq N$, $$ E\setminus F \subseteq \{x\in E:|f_n(x)-f(x)|<\epsilon\} $$



So, for $n\geq N$, $$ \{x\in E:|f_n(x)-f(x)|\geq\epsilon\} \subseteq F$$ and so we have, for $n\geq N$,

$$ m(\{x\in E:|f_n(x)-f(x)|\geq\epsilon\})\leqslant m(F)<\epsilon$$



Hence $m(\{x\in E:|f_n(x)-f(x)|\geq\epsilon\})\to 0$ as $n\to\infty$ and the proof is complete.


sequences and series - Infinite summation formula for modified Bessel functions of first kind

I was trying to find a closed form for the integral
$$4\int_0^{\pi/2} t \, I_0(2\kappa\cos{t}) dt \; ,$$
where



$$I_{\alpha}(z) := i^{-\alpha}J_{\alpha}(iz) = \sum_{m=0}^{\infty}\frac{\left(\frac{z}{2}\right)^{2m+\alpha}}{m! \Gamma(m+1+\alpha)} = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-i\alpha \tau + z \cos{\tau}}\, d\tau \quad (\alpha\in\mathbb{Z})$$
are the modified Bessel functions. This integral popped up when I was trying to find the average difference of two points on a circle, where these points are assumed to be drawn independently from a von Mises distribution. It was noted by Robert Israel that this integral can be reduced to



$$ \int_0^\pi t I_0(2\kappa \cos(t/2)) \; dt = \frac{\pi^2}{2} I_0(\kappa)^2 - 4 \sum_{r=0}^\infty \frac{I_{2r+1}(\kappa)^2}{(2r+1)^2} \; .$$
So I was wondering, if we can further simplify this expression, or to stated more clearly:





Is there a closed formula for the following sum of modified Bessel functions of the first kind?
$$\sum_{r=0}^\infty \frac{I_{2r+1}(\kappa)^2}{(2r+1)^2}$$







A lot of remarkable identities in terms of infinite sums of Bessel functions are known. E.g. Abramowitz and Stegun list in §9.6.33ff. a few of them, like:



$$\begin{align}

1 &= I_0(z) + 2\sum_{r=1}^{\infty} (-1)^{r}I_{2r}(z) \\
e^z &= I_0(z) + 2\sum_{r=1}^{\infty} I_{r}(z) \\
\cosh{z} &= I_0(z) + 2\sum_{r=1}^{\infty} I_{2r}(z) \\
\end{align}$$

WolframResearch lists another bunch of infinite series identities.
Also, Neumann's addition theorem seems to work wonders.






Regarding the integral itself, Gradshteyn and Ryzhik mention in 6.519.1 that

$$\int_0^{\pi/2} J_{2r}(2\kappa\cos{t})\, dt = \frac{\pi}{2} J_r^2(\kappa) \; ,$$
where $J_r(x) = i^rI_r(-ix)$. So there might be a chance to expect something along this line.






Going back to the original problem "What is the expected value of a distribution $\Delta$ with the following density function"



$$f_{\Delta}(t) := \frac{I_0 \left( 2\kappa \cos{\frac{t}{2}} \right)}{\pi I^2_0(\kappa)} \; ,$$
a straightforward integration leads to the integral mentioned above. Using some probability theory voodoo we can make use of the fact that




$$\mathbb{E}[\Delta] = -i \varphi'_{\Delta}(0)
= -i \left[\frac{d}{d\omega}
\mathcal{F}(f_{\Delta})(\omega)
\right] \Bigg|_{\omega=0}
= -i \left[\frac{d}{d\omega}
\int_{-\infty}^{\infty} e^{it\omega}f_{\Delta}(t) dt
\right] \Bigg|_{\omega=0} $$



where $\varphi_{\Delta}$ is the characteristic function of $f_{\Delta}$ and $\mathcal{F}$ the (properly scaled) Fourier transform. Now with $\varphi(-\omega) = \overline{\varphi(\omega)}$, we could further rewrite




$$\mathbb{E}[\Delta] = -i\varphi'_{\Delta}(0)
= \lim_{\omega \rightarrow 0} \frac{\varphi_{\Delta}(\omega) - \varphi_{\Delta}(-\omega)}{2i\omega}
= \lim_{\omega \rightarrow 0} \frac{\operatorname{Im}\left(\varphi_{\Delta}(\omega)\right)}{\omega} \,$$



to (by plugging in the integral representation of $I_0$) obtain



$$\mathbb{E}[\Delta]
= \frac{\pi}{2} - \frac{4}{\pi I_0^2(\kappa)} \sum_{r=0}^\infty \frac{I_{2r+1}(\kappa)^2}{(2r+1)^2}
= \frac{1}{\pi^2 I_0^2(\kappa)} \cdot \lim_{\omega \rightarrow 0} \int_0^{\pi/2} \int_{-\pi}^{\pi} \frac{\sin(t\omega)}{\omega} e^{2\kappa\cos{t}\sin{\tau}} d\tau \, dt \; ,$$




but this will essentially lead to the same integral we started with. The promising part about this approach is that the Fourier transform pops up, which might leave some room for the harmonic analysis people among you to do your magic.
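
No closed form for the sum is derived here, but the quoted reduction is easy to sanity-check numerically. A minimal sketch, assuming scipy (the test value $\kappa=1.7$ and the truncation at $r=50$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv   # modified Bessel function of the first kind

kappa = 1.7

# Left side: integral of t * I_0(2*kappa*cos(t/2)) over [0, pi].
lhs, _ = quad(lambda t: t * iv(0, 2 * kappa * np.cos(t / 2)), 0, np.pi)

# Right side: (pi^2/2) * I_0(kappa)^2 - 4 * sum over odd orders.
r = np.arange(50)
rhs = (np.pi**2 / 2) * iv(0, kappa)**2 \
      - 4 * np.sum(iv(2*r + 1, kappa)**2 / (2*r + 1)**2)

print(lhs, rhs)   # the two values should agree to many digits
```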

calculus - Could you explain the expansion of $(1+\frac{dx}{x})^{-2}$?

Could you explain the expansion of $(1+\frac{dx}{x})^{-2}$?




Source: calculus made easy by S. Thompson.



I have looked up the formula for binomial theorem with negative exponents but it is confusing. The expansion stated in the text is:



$$\left[1-\frac{2\,dx}{x}+\frac{2(2+1)}{1\cdot2}\left(\frac{dx}{x}\right)^2 - \text{etc.}\right] $$



Please explain at a high school level.
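
One way to see it, at the level requested (this is the generalized binomial series, which for negative exponents is an infinite series valid when $\left|\frac{dx}{x}\right|<1$): $(1+u)^{\alpha} = 1 + \alpha u + \frac{\alpha(\alpha-1)}{1\cdot2}u^2 + \cdots$ for any exponent $\alpha$. Taking $\alpha=-2$ and $u=\frac{dx}{x}$,

$$(1+u)^{-2} = 1 + (-2)u + \frac{(-2)(-3)}{1\cdot2}u^2 + \frac{(-2)(-3)(-4)}{1\cdot2\cdot3}u^3 + \cdots = 1 - 2u + 3u^2 - 4u^3 + \cdots,$$

which matches the expansion in the text, since $\frac{(-2)(-3)}{1\cdot2} = \frac{2(2+1)}{1\cdot2} = 3$.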

Monday 21 December 2015

foundations - Are the addition and multiplication of real numbers, as we know them, unique?

After recently concluding my Real Analysis course in this semester I got the following question bugging me:



Is the canonical operation of addition on real numbers unique?




Otherwise: Can we define another operation on Reals in a such way it has the same properties of usual addition and behaves exactly like that?



Or even: How I can reliable know if there is no two different ways of summing real numbers?



Naturally these dense questions led me to further investigations, like:
The following properties are sufficient to fully characterize the canonical addition on Reals?




  1. Closure

  2. Associativity

  3. Commutativity


  4. Identity being 0

  5. Unique inverse

  6. Multiplication distributes over



If so, property 6 raises the question: Is the canonical multiplication on Reals unique?
But then, if are them not unique, different additions are differently related with different multiplications?
And so on...



The motivation comes from the construction of real numbers.
From Peano's axioms and the set-theoretic definition of the natural numbers to Dedekind's and Cauchy's constructions of the real numbers, the uniqueness of the operations was never discussed in class, nor could I find relevant discussion of this topic on the internet or in the ubiquitous real analysis reference books by authors such as:





  • Walter Rudin

  • Robert G. Bartle

  • Stephen Abbott

  • William F. Trench



Not discussing the uniqueness of the operations, as we know them, in a first real analysis course seems rather common, yet the matter does not look elementary.



Thus, introduced the subject and its context, would someone care to expand it eventually revealing the formal name of this field of study?

calculus - Interval of θ for a Diagonal Line

I have a problem that I've been trying to solve for a couple of hours , but I'm just not understanding it.



The problem is asking to take the polar equation



$$r=\frac{4}{\cosθ + \sinθ}$$



and give an interval of θ in which the entire curve is generated.



I know that the equation ends up creating a diagonal line. But, how do you determine the interval for just a line? I think the answer is $0≤θ≤π$; but I doubt that's correct. Can anyone tell me what steps I should take to get to the correct answer?




Many thanks in advance to anyone who can help.

calculus - Why does $1^{\infty}$ not exist?




In my calculus class, whenever we try to find the limit of a sequence as it approaches infinity and it turns out to be of the form $1^{\infty}$, we end up having to use L'Hôpital's rule. I don't understand why it has to be L'Hôpitaled: can't you just take the limit as the sequence approaches $99999$, so the answer would be $1^{99999} = 1$, and you are done?



Why do we have to L'Hopital then?


Answer



Consider the following: $(1+1/n)^n \to e,$ while $(1+1/n)^{n^2} \to \infty.$
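
To see where the indeterminacy comes from, take logarithms (a standard computation, added here for completeness):
$$\left(1+\frac{a}{n}\right)^{n} = \exp\!\left(n\ln\!\left(1+\frac{a}{n}\right)\right) \longrightarrow e^{a}, \quad\text{since } n\ln\!\left(1+\frac{a}{n}\right)\to a.$$
The base tends to $1$ and the exponent to $\infty$ in every case, yet the limit can be any value $e^a$; that is exactly why $1^\infty$ is an indeterminate form and the actual limit must be computed, e.g. via logarithms and L'Hôpital.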


discrete mathematics - Number of bijective functions (two finite sets)


Let $M$ and $N$ be finite sets, with $|M|=m$ and $|N|=n$.



a) Find the number of bijective functions $f: M \rightarrow N$. Consider the cases $m \neq n$ and $m=n.$





Bijective: Every element in $N$ has exactly one partner in $M.$



$m\neq n:$ That means either $m < n $ or $m>n$.



If $m>n$ then wouldn't every element in $N$ have exactly one partner in $M$?



If $m<n$ then it wouldn't work, since some elements in $N$ wouldn't have a partner in $M$.



$m=n$: Here every element in $N$ will have exactly one partner in $M$.




So you can get a bijective functions if $m>n$ or $m=n$.



For $m=n$ number of bijective functions would be: $m$



For $m>n$ number of bijective functions would be: $n$



I'm not too sure on my answer.
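
For reference, the standard counts against which to check the attempt above: a bijection exists only when $m=n$ (an injection forces $m\le n$, a surjection forces $m\ge n$). When $m=n$, the image of the first element of $M$ can be chosen in $n$ ways, that of the second in $n-1$ ways, and so on, so
$$\#\{\text{bijections } M\to N\} = \begin{cases} n! & \text{if } m=n,\\ 0 & \text{if } m\neq n.\end{cases}$$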

calculus - Find modulus and argument of $\omega = \frac {\sin (P + Q) + i (1 - \cos (P + Q))} {(\cos P + \cos Q) + i (\sin P - \sin Q) }$



A past examination paper had the following question that I found somewhat difficult. I had a go at it but haven't come up with any applicable double angle identities. How would one go about tackling it?





Given:



$$\omega = {\frac {\sin (P + Q) + i (1 - \cos (P + Q))} {(\cos P + \cos Q) + i (\sin P - \sin Q) }} $$



To prove:



$$|\omega| = \tan \frac {P + Q} {2} \qquad\text{and}\qquad \arg(\omega) = Q $$





A guideline on how to proceed, or which identity to use, would be greatly appreciated.



To give an idea of how one would start:



Proof:



$$|\omega| = {\frac {\sqrt{\sin^2 (P + Q) + (1 - \cos (P + Q))^2}}
{\sqrt{(\cos P + \cos Q)^2 + (\sin P - \sin Q)^2 }}} $$



I'm still unsure about the above, or how the square root comes about.



Answer



We have
\begin{align}
N
& := \sin^2(P+Q) + (1-\cos(P+Q))^2 = \sin^2(P+Q) + \cos^2(P+Q) + 1 - 2\cos(P+Q) \\
& = 2 (1-\cos(P+Q))
= 2\cdot2\sin^2\frac{P+Q}{2}
= 4\sin^2\frac{P+Q}{2}
\end{align}

and

\begin{align}
D
& := (\cos P + \cos Q)^2 + (\sin P - \sin Q)^2 \\
& = \cos^2P +\cos^2Q + \sin^2P + \sin^2Q + 2(\cos P\cos Q - \sin P \sin Q) \\
&= 2 + 2\cos(P+Q)
= 2(1+\cos(P+Q)) = 4\cos^2\frac{P+Q}{2}
\end{align}



Now, $$|\omega| = \sqrt{\frac{N}{D}} = \tan\frac{P+Q}{2}$$
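
For the argument, the same half-angle identities factor the numerator and denominator directly (a sketch):
\begin{align}
\sin(P+Q) + i\bigl(1-\cos(P+Q)\bigr) &= 2\sin\tfrac{P+Q}{2}\left(\cos\tfrac{P+Q}{2} + i\sin\tfrac{P+Q}{2}\right),\\
(\cos P+\cos Q) + i(\sin P-\sin Q) &= 2\cos\tfrac{P+Q}{2}\left(\cos\tfrac{P-Q}{2} + i\sin\tfrac{P-Q}{2}\right),
\end{align}
so $\arg(\omega) = \frac{P+Q}{2} - \frac{P-Q}{2} = Q$.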


elementary number theory - Prove the multiplicity property for $n!$

I was given this hint in a different problem,




Now use that a prime $p$ occurs in $n!$ with multiplicity exactly $\lfloor n/p\rfloor + \lfloor n/p^2\rfloor + \lfloor n/p^3\rfloor + \lfloor n/p^4\rfloor +\ldots$



For $$P=\frac{200!}{2^{100}\cdot 100!}$$ and the prime $p = 3$.




How is the claim for $n!$ true?




Consider $n \ge 6$.



$$n = 6 \implies n! = 720$$



$$720/3 = 240 \to 240/3 = 80 \implies \text{3 comes in twice.}$$



Then, $[6/3] + [6/9] = 2$.



So suppose $3$ occurs in $n!$ with multiplicity, $\lfloor n/p\rfloor + \lfloor n/p^2\rfloor + \lfloor n/p^3\rfloor + \lfloor n/p^4\rfloor +\ldots$




It is required to show that, for $(n+1)!$, $3$ occurs with multiplicity

$\lfloor (n+1)/p\rfloor + \lfloor (n+1)/p^2\rfloor + \lfloor (n+1)/p^3\rfloor + \lfloor (n+1)/p^4\rfloor +\ldots$



$(n+1)! = (n+1)n!$.



But I can't prove anything else.



Even intuitively, why does this make sense?
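
Intuition for the formula: among $1,2,\ldots,n$ there are $\lfloor n/p\rfloor$ multiples of $p$, each contributing at least one factor of $p$ to $n!$. Multiples of $p^2$ contribute a second factor not yet counted, and there are $\lfloor n/p^2\rfloor$ of them; multiples of $p^3$ contribute a third, and so on. Each factor of $p$ is counted exactly once across the layers:
$$v_p(n!) = \sum_{i\ge 1}\left\lfloor \frac{n}{p^i}\right\rfloor.$$
In the example above, $n=6$, $p=3$: the multiples of $3$ are $3$ and $6$ (one factor each), there are no multiples of $9$, and the total is $2$.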

functional equations - Let $f$ be a continuous function defined on $\mathbb R$ such that $\forall x,y \in \mathbb R$, $f(x+y)=f(x)+f(y)$

Let $f$ a continuous function defined on $\mathbb R$ such that $\forall x,y \in \mathbb R :f(x+y)=f(x)+f(y)$



Prove that :
$$\exists a\in \mathbb R , \forall x \in \mathbb R, f(x)=ax$$
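
A sketch of the standard argument, for the record: additivity gives $f(nx)=nf(x)$ for $n\in\mathbb N$ by induction, $f(0)=0$, and $f(-x)=-f(x)$, hence $f(qx)=qf(x)$ for all $q\in\mathbb Q$. Setting $a:=f(1)$ yields $f(q)=aq$ on $\mathbb Q$, and since $f$ is continuous and $\mathbb Q$ is dense in $\mathbb R$,
$$f(x) = \lim_{\substack{q\to x\\ q\in\mathbb Q}} f(q) = \lim_{\substack{q\to x\\ q\in\mathbb Q}} aq = ax \quad\text{for every } x\in\mathbb R.$$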

calculus - Definition of derivative

Well, I know that the derivative of a function $f(x)$is defined this way:




$$\frac{df(x)}{dx} = \lim_{\Delta x\to 0}\frac{f(x+\Delta x) - f(x)}{\Delta x}$$



And it's pretty clear that the expression inside the limit approaches the slope of the tangent line at the given point. I know that this is the definition of the derivative. However, we can't simply define this to equal the slope of the tangent line at a given point. So how do we know that this limit will in fact equal the slope of the function?

linear algebra - Proof of minimum eigenvalue of non-symmetric matrix with real eigenvalues



I am wondering if this is true: $\lambda_{\min}(A) \ge \lambda_{\min}\left(\frac{A+A^T}{2}\right)$, given that $A$ is non-symmetric but has real eigenvalues. I came across this inequality in one of the math.stackexchange posts and wonder why it is true. I ran MATLAB simulations for many rand(2,2) matrices and it seems to hold up, but that is not sufficient to take it as fact. Please let me know.


Answer



Fact. If $S$ is a real symmetric matrix (of size $n$), then
$$\forall X\in\mathbb{R}^n,\ X^T\,S X\geq\lambda_{\min}\|X\|^2,$$
where $\lambda_{\min}$ is the minimum eigenvalue of $S$.



Proof. We know that a real symmetric matrix is diagonalizable in an orthonormal basis. Let $(X_1,\ldots,X_n)$ be an orthonormal basis of eigenvectors of $S$, with $X_k$ associated with the eigenvalue $\lambda_k$. Now let $X\in\mathbb{R}^n$ and decompose it on the basis: there exists $x_1,\ldots,x_n\in\mathbb{R}^n$ such that $X=x_1X_1+\cdots+x_nX_n$. Then
$$X^T SX=\lambda_1x_1^2\|X_1\|^2+\cdots+\lambda_nx_n^2\|X_n\|^2\geq\lambda_{\min}\|X\|^2.$$







Now let $A$ be a square real matrix with real coefficients. Let $\lambda$ be a real eigenvalue of $A$ and let $X_\lambda$ be an associated eigenvector. Then
$$X_\lambda^T AX_\lambda=\lambda\|X_\lambda\|^2.$$
Now, transposing this one-by-one matrix yields
$$X_\lambda^TA^TX_\lambda=\lambda\|X_\lambda\|^2$$
too, hence
$$X_\lambda^T\left(\frac{A+A^T}2\right)X_\lambda=\lambda\|X_\lambda\|^2.$$
Hence, from the preliminary fact, and since $S=\dfrac{A+A^T}2$ is a real symmetric matrix, we must have

$$\lambda\|X_\lambda\|^2\geq\lambda_{\min}\|X_\lambda\|^2$$
where $\lambda_{\min}$ is the minimal eigenvalue of $S$. Since $X_\lambda\neq0$ we conclude that $\lambda\geq\lambda_{\min}$ i.e., that:




every real eigenvalue of $A$ is non-less than $\lambda_{\min}$.







You can generalize it slightly with the non-real eigenvalues of $A$ too: let $\lambda\in\mathbb{C}$ be an eigenvalue of $A$ and let $X_\lambda\in\mathbb{C}^n$ be an eigenvector of $A$ associated with $\lambda$. Then:

$$\overline{X_\lambda^T}AX_\lambda=\lambda\|X_\lambda\|^2,$$
and also (transpose and take the conjugate, using the fact that $A$ has real coefficients):
$$\overline{X_\lambda^T}A^TX_\lambda=\overline{\lambda}\|X_\lambda\|^2.$$
Hence
$$\overline{X_\lambda^T}\left(\frac{A+A^T}2\right)X_\lambda=\Re(\lambda)\|X_\lambda\|^2.$$
Extending the preliminary fact to complex vectors (and taking the associated hermitian product) yields $\Re(\lambda)\geq\lambda_{\min}$, i.e.,




the real part of every (complex) eigenvalue of $A$ is non-less than $\lambda_{\min}$.
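
A quick numerical illustration in the spirit of the question's MATLAB experiments (a minimal sketch, assuming numpy; since random matrices generally have complex eigenvalues, the real-part version of the statement is what is checked):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    A = rng.standard_normal((4, 4))
    S = (A + A.T) / 2
    lam_min = np.linalg.eigvalsh(S).min()     # min eigenvalue of the symmetric part
    re_min = np.linalg.eigvals(A).real.min()  # min real part of eigenvalues of A
    assert re_min >= lam_min - 1e-10          # small tolerance for rounding
```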




Sunday 20 December 2015

Is there a formula for finding the nth number in a sequence with a changing pattern



If a sequence has a pattern where +2 is the difference at the start, but the amount added increases by 1 each time, like the sequence below, is there a formula to find the 125th number in this sequence? It would also need to work with similar patterns, for example if the pattern started at +4 and 5 was added to the difference each time.




2, 4, 7, 11, 16, 22 ...



Answer



Let $a_1 = 2$. From the way you defined the sequence you can see that $a_n - a_{n-1} = n$. We can use this to find
\begin{align}

a_n &= a_{n-1} + n\\
&= a_{n-2} + (n-1) + n\\
&= a_{n-3} + (n-2) + (n-1) + n\\
&\vdots \\
&= a_1 + 2 + \cdots + (n - 2) + (n-1) + n
\end{align}
which is just the sum of the natural numbers except $1$ ($1 + 2 + \cdots + n = \frac{n(n+1)}{2}$). So
\begin{equation}
a_n = a_1 + \frac{n(n+1)}{2} - 1 = 2 - 1 + \frac{n(n+1)}{2} = \frac{n^2 + n + 2}{2}
\end{equation}

where $a_1$ is the starting number (in this case $2$). This is a quadratic sequence, as it has constant second differences (the difference of the differences is constant).
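
The same telescoping handles the general pattern asked about: if the first difference is $d$ and each difference grows by $c$ (so $a_n - a_{n-1} = d + c(n-2)$ for $n \ge 2$), then
$$a_n = a_1 + \sum_{j=2}^{n}\bigl(d + c(j-2)\bigr) = a_1 + (n-1)d + c\,\frac{(n-1)(n-2)}{2}.$$
The sequence above is the case $d=2$, $c=1$; the question's other example ($+4$, then adding $5$ each time) is $d=4$, $c=5$.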


calculus - Proving inequality on functions $x-\frac{x^2}{2}<\ln(1+x)<x$



To prove: $$x-\frac{x^2}{2}<\ln(1+x)<x \quad\text{for } x>0$$




I have used Taylor series expansion at $0$ for both of the inequalities: the lower bound by expanding $\ln(1+x)$, and the upper bound by expanding $\int \ln(1+x)\,dx$ at $0$.



Is there a cleaner / more elegant way of achieving the same?


Answer



Another way to do the lower bound, for example, would be to consider $f(x) = x - \dfrac{x^2}{2} - \ln(1+x)$. $f(0) = 0$.



But also $f'(x) = 1 - x - \dfrac{1}{1+x} = \dfrac{(1-x)(1+x) - 1}{1+x} = \dfrac{1 - x^2 - 1}{1+x} = \dfrac{-x^2}{1+x} < 0$.



So since $f(0) = 0$ and $f' < 0$, $f$ is monotonically and strictly decreasing. Thus $f(x) \leq 0$, and if $x > 0$ we have that $f(x) < 0$. And this says exactly that $x - x^2/2 < \ln(1 + x)$.


summation - A Curious Binomial Coefficient Sum: $\sum_{j = 0}^{k} \binom{k}{j} \binom{j + n -\ell + 1}{n}$



Let $k, \ell \leq n$ be non-negative integers. Does the following identity simplify?
\begin{align}
\sum_{j = 0}^{k} \binom{k}{j} \binom{j + n -\ell + 1}{n} = \binom{n - \ell + 1}{n} \phantom1_{2}\mathsf{F}_{1}(-k,n - \ell + 2, 2- \ell; -1)
\end{align}
where $\!\!\! \phantom1_{2}\mathsf{F}_{1}$ is a hypergeometric function. That is, does the right side have another representation in terms of simple functions given that $k,\ell$ and $n$ are non-negative integers?


Answer




Let us denote:
\begin{equation}
S_k^{n,l} := \sum\limits_{j=0}^k
\left( \begin{array}{c} k \\ j \end{array} \right)
\left( \begin{array}{c} n-l+1+j \\ n \end{array} \right)
\end{equation}
Then we have:
\begin{eqnarray}
S_k^{n,l} &:=& \left. \frac{d^n}{n! d x^n} x^{n-l+1} \cdot \left(1+x\right)^k \right|_{x=1} \\
&=&

\frac{1}{n!} \sum\limits_{p=0}^n
\left(\begin{array}{c} n \\ p \end{array} \right) (n-l+1)_{(p)} k_{(n-p)} 2^{k-n+p} \\
&=&2^{k-n}
\left( \begin{array}{c} k \\ n \end{array} \right) {}_2F_1\left[l-n-1,-n;1+k-n;2\right]
\end{eqnarray}



The last result in terms of the hypergeometric function is not particularly useful when $n$, $l$, $k$ are integers. However, the result above is useful, for example, when $k$ is large.


functions - Proving: $f$ is injective $\Leftrightarrow f(X \cap Y) = f(X) \cap f(Y)$








Let $A$ := "$f$ is injective" and $B$ := "$f(X \cap Y) = f(X) \cap f(Y)$".



My first idea is to show $B \implies A$ through contraposition, i.e. $\lnot A \implies \lnot B$. Would it then be enough to say: $f$ is not injective, and then show an example where the equation in $B$ fails? Would that be a proof?
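
One caution about the strategy: to prove $\lnot A \implies \lnot B$ you must start from an arbitrary non-injective $f$ and produce sets for which the equation fails; a single concrete non-injective function is not enough. The construction is easy, though: pick $x_1 \neq x_2$ with $f(x_1)=f(x_2)$ and take $X=\{x_1\}$, $Y=\{x_2\}$. Then $f(X\cap Y)=f(\varnothing)=\varnothing$ while $f(X)\cap f(Y)=\{f(x_1)\}\neq\varnothing$.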

geometry - The validity of the proofs of the Pythagorean Theorem and the concept of area

This might be a very elementary question, but it has been bothering me for a while. Most of the proofs I've seen of the Pythagorean theorem show that the areas of the squares with side lengths $a$ and $b$ add up to the area of the square with side length $c$, generally by rearranging triangles.
My problem with this type of proof is that it only shows that the areas must be the same, not that $a^2+b^2=c^2$.



Why must the area of a square with side $a$ be defined as $a^2$? Say, for example, that you had another way of measuring the surface of a square with a given side length (one that behaves as we would intuitively want area to behave). If this function is called $A$, then the visual proofs of the theorem would only show that $A(a)+A(b)=A(c)$.




So, does this type of proof work because we just happen to define area as we do, or must $A(a)+A(b)=A(c)$ imply $a^2+b^2=c^2$?



Now, if $A(a)+A(b)=A(c)$ does imply $a^2+b^2=c^2$ that would mean that our function $A$ (which behaves as area does) must include the square of the side in its formula. For example $A(x)=kx^2, k>0$ (which does imply the pythagorean theorem). Are there other ways to define the surface of a square such that it behaves as it physically does? Would the visual proofs still be valid?



Thank you!

calculator - Complex number systems of equations fx-115es plus calc?



Anyone here know how to use the EQN mode on the casio fx-115es plus to find solutions to a system of equations involving complex numbers? Also, if that's not possible, what about entering complex numbers to a matrix on this calculator? I haven't been able to figure it out, and searching the web hasn't returned any relevant results.



I'm guessing it's possible, as this calculator is allowed on the engineering FE exams. So far I've only been able to solve systems using real numbers in EQN mode, and have only been able to perform complex calculations in CMPLX mode.



Thanks.


Answer



Complex numbers are not supported, either in simultaneous equations or in matrices, on the fx-115ES Plus scientific calculator.




Indeed, no scientific calculator (neither TI, Sharp, nor Casio) supports these two kinds of problem solving in CMPLX mode.



For that, you will need to choose a graphing calculator with a CAS (Computer Algebra System).



/Silicon Valley Regards


calculus - prove, using induction that for natural $n$ and $0

How to prove, using induction, that for every natural $n$, and for every $0

real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hôpital's rule? I know that when I use L'Hôpital I easily get $$ \lim_{h\rightarrow 0}...