Friday 31 May 2013

lebesgue integral - Exercise in finite measure space.



Let $(X,\mu)$ be a finite measure space and $f:X\to[0,\infty]$ be a measurable function. Define a function $\varphi:\mathbb{R}^+\to\mathbb{R}^+$ by $\varphi(r)=\left(\frac{1}{\mu(X)}\int_Xf^rd\mu\right)^\frac1r$. Prove that $\varphi(r)\leq\varphi(s)$ if $r<s$.

Answer



We have
$$\varphi(r) = \left(\frac1{\mu(X)}\int_X f^r\ \mathsf d\mu\right)^{\frac1r} = \left(\int_X f^r \ \frac1{\mu(X)}\ \mathsf d\mu \right)^{\frac1r}. $$
Now recall Hölder's inequality: for measurable functions $F, G$ and conjugate exponents $p,q$, that is, $\frac1p + \frac1q = 1$ with $1<p,q<\infty$, we have $\int_X |FG|\ \mathsf d\mu \leqslant \left(\int_X |F|^p\ \mathsf d\mu\right)^{\frac1p}\left(\int_X |G|^q\ \mathsf d\mu\right)^{\frac1q}$. Applying this with $F=f^r$, $G=\frac1{\mu(X)}$, and the conjugate pair $p=\frac sr$, $q=\frac s{s-r}$ (valid since $r<s$), we get
\begin{align}
\varphi(r) &= \left(\int_X FG\ \mathsf d\mu\right)^{\frac1r}\\
&\leqslant \left(\int_X F^{\frac sr}\ \mathsf d\mu\right)^{\frac r{rs}}\left(\int_X G^{\frac s{s-r}}\ \mathsf d\mu \right)^{\frac{s-r}{rs}}\\
&=\left(\int_X f^s\ \mathsf d\mu \right)^{\frac 1s}\left(\int_X \left(\frac1{\mu(X)}\right)^{\frac s{s-r}} \ \mathsf d\mu\right)^{\frac{s-r}{rs}}\\
&=\left(\int_X f^s\ \mathsf d\mu \right)^{\frac 1s}\left(\mu(X)^{\frac s{r-s}}\mu(X) \right)^{\frac{s-r}{rs}}\\
&=\left(\frac1{\mu(X)}\int_X f^s\ \mathsf d\mu \right)^{\frac1s}\\
&=\varphi(s),
\end{align}
as desired.


modular arithmetic - How would I find the modulo of a large number without using a calculator that supports large numbers?

How would I find the modulo of a large number without using a calculator that supports large numbers, such as Wolfram Alpha?



EX: $113^{17} \pmod{91}$
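One standard hand method is repeated squaring, reducing mod $91$ at every step so the numbers stay small: $113\equiv22$, $22^2=484\equiv29$, $22^4\equiv29^2=841\equiv22$, hence $22^8\equiv29$ and $22^{16}\equiv22$, so $113^{17}\equiv22^{16}\cdot22\equiv22^2\equiv29\pmod{91}$. A minimal Python sketch of the same square-and-multiply idea (the built-in `pow(113, 17, 91)` gives the same answer):

```python
def power_mod(base, exp, mod):
    # square-and-multiply, reducing mod `mod` at every step so numbers stay small
    result, base = 1, base % mod
    while exp:
        if exp & 1:                    # odd exponent: fold the current square in
            result = result * base % mod
        base = base * base % mod       # square
        exp >>= 1
    return result

print(power_mod(113, 17, 91))          # 29, same as the built-in pow(113, 17, 91)
```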

radicals - Solving Equations with Roots and Absolute Values

I'm preparing for the ACTM State contest, and I stumbled across this problem:



Which of the following is the sum of the tens digit and the units digit of the solution of the following equation?



$$15-2\sqrt{x-18} = 0.5 |x+14|-20$$



I know how to solve radical equations and absolute value equations, but I'm not familiar with how to solve an equation containing both. Can anyone help me out?
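Whatever algebraic route you take (isolating the radical and squaring, case-splitting the absolute value), you can sanity-check the result numerically: on the domain $x\ge18$ the left side decreases and the right side increases, so there is exactly one crossing. A Python bisection sketch (the bracket endpoints are assumptions chosen by inspection):

```python
from math import sqrt

def f(x):
    # difference of the two sides; a root of f solves the equation
    return (15 - 2 * sqrt(x - 18)) - (0.5 * abs(x + 14) - 20)

lo, hi = 18.0, 100.0      # f(18) > 0 and f(100) < 0, so the root is bracketed
for _ in range(60):       # bisection: halve the bracket until it is tiny
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
print(lo)                 # ~38.08
```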

combinatorics - Prove $\sum\limits_{i=0}^n\binom{i+k-1}{k-1}=\binom{n+k}{k}$ (a.k.a. Hockey-Stick Identity)




Let $n$ be a nonnegative integer, and $k$ a positive integer. Could someone explain to me why the identity

$$
\sum_{i=0}^n\binom{i+k-1}{k-1}=\binom{n+k}{k}
$$
holds?


Answer



One way to interpret this identity is to consider the number of ways to choose $k$ integers from the set $\{1,2,3,\cdots,n+k\}$.



There are $\binom{n+k}{k}$ ways to do this, and we can also count the number of possibilities by considering the largest integer chosen. This can vary from $k$ up to $n+k$, and if the largest integer chosen is $l$, then there are $\binom{l-1}{k-1}$ ways to choose the remaining $k-1$ integers.



Therefore $\displaystyle\sum_{l=k}^{n+k}\binom{l-1}{k-1}=\binom{n+k}{k}$, and letting $i=l-k$ gives $\displaystyle\sum_{i=0}^{n}\binom{i+k-1}{k-1}=\binom{n+k}{k}$.
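As a quick sanity check of the identity for small parameters, here is a brute-force sketch in Python (a spot-check, not a proof):

```python
from math import comb

# brute-force spot-check of the hockey-stick identity for small n and k
for n in range(12):
    for k in range(1, 12):
        assert sum(comb(i + k - 1, k - 1) for i in range(n + 1)) == comb(n + k, k)
print("identity holds for all tested n, k")
```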



Thursday 30 May 2013

calculus - How to evaluate this exponential fraction limit?



I am trying to determine if $3^n$ grows faster than $2^{2n}$.



One way I found online to do this (from Growth) was to evaluate $\lim_{n\to \infty} \frac{3^n}{2^{2n}}$: if that limit evaluates to infinity, then $3^n$ grows faster than $2^{2n}$.



I am having trouble evaluating this limit though. In my initial evaluation, I saw that if you plug in a really large $n$, you get the indeterminate form $\frac{\infty}{\infty}$, so from here you can use L'Hospital's Rule.



However when I tried to use L'Hospital's Rule once, this is what I got:
    $\lim_{n\to \infty} \frac{3^n\ln(3)}{2^{2n}\cdot 2\ln(2)}$




I think this is mathematically correct but I couldn't find a way to reduce this further. I also tried an approach from Exponent L Hospital but I couldn't take the natural log of both sides because the two exponents are of different bases.



Am I going about this the right way? Is there another way of evaluating this limit?


Answer



Indeed you have:
$$\lim_{n\to \infty} \frac{3^n}{2^{2n}}=\lim_{n\to \infty} \frac{3^n}{4^{n}}=\lim_{n\to \infty} \left(\frac{3}{4}\right)^n=0$$



because $\frac{3}{4}<1$.


linear algebra - Reduced row echelon form without introducing fractions at any intermediate stage



How can I reduce this matrix to reduced row echelon form without using fractions in the intermediate steps? (I can use them in the elementary row operations, just not in the resulting matrices.)



$$
\begin{pmatrix}
2 & 1 & 3 \\

0 & -2 & 7 \\
3 & 4 & 5 \\
\end{pmatrix}
$$



I've been trying for several hours and can't seem to figure it out.
Is it even possible?



Thanks for any help


Answer




Yes, it is possible. Furthermore, there exist algorithms to do this, such as fraction-free Gaussian elimination; see, e.g.,
E H Bareiss. Sylvester's identity and multistep integer-preserving Gaussian elimination. Math. Comput., 22(103):565-578, 1968.
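For concreteness, here is a minimal Python sketch of the forward pass of Bareiss's method on the matrix above: each update is a $2\times2$ determinant divided (exactly) by the previous pivot, so every intermediate entry is an integer. Back-substitution to reach the fully reduced form can be done in the same fraction-free spirit.

```python
def bareiss_forward(M):
    # fraction-free Gaussian elimination (Bareiss, 1968): every update is a
    # 2x2 determinant divided exactly by the previous pivot, so entries stay integer
    A = [row[:] for row in M]
    n = len(A)
    prev = 1
    for k in range(n - 1):
        assert A[k][k] != 0           # sketch only: no pivoting
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return A

print(bareiss_forward([[2, 1, 3], [0, -2, 7], [3, 4, 5]]))
# [[2, 1, 3], [0, -4, 14], [0, 0, -37]] -- integer echelon form; last pivot = det
```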


calculus - How to show $\int_0^1 \left( \frac{1}{\ln(1+x)} - \frac{1}{x} \right) \,dx$ converges?



I need to show that $$\int_0^1 \left( \frac{1}{\ln(1+x)} - \frac{1}{x} \right) \,dx$$ converges, given that $$\lim_{x\rightarrow0^+} \left( \frac{1}{\ln(1+x)} - \frac{1}{x} \right) = \frac{1}{2}$$



I'm not sure how to do this because my text gave a very brief treatment on applying the limit comparison test for the improper integral of the second kind. Here is what I've tried:




I define the test function to be $g(x)=1$. Therefore,



$$\frac{ \frac{1}{\ln(1+x)} - \frac{1}{x} }{g(x)} = \frac{1}{\ln(1+x)} - \frac{1}{x} \tag{1}$$



So, $(1)$ tends to $\frac{1}{2}$ as $x$ tends to $0^+$ and $$\int_0^1 g(x) \,dx$$ is convergent. Therefore, by the limit comparison test, the given integral is convergent.



Did I do the test correctly?



EDIT




My text's solution (which I find to be vague) is:



(image of the text's solution, which "repairs" the integrand at $0$; not reproduced here)



What does 'repair' the integrand mean?


Answer



I'll give a more general answer, which should explain what “repairing the function” means.



Suppose $f$ is a continuous function defined on the interval $(a,b]$ (where $a<b$) and suppose that
$$
\lim_{x\to a^+}f(x)=r
$$
is finite.



Define $g(x)=f(x)$ for $a<x\le b$ and $g(a)=r$, so that $g$ is continuous on all of $[a,b]$. Consider
\begin{align}
F(x)&=\int_{x}^b f(t)\,dt &&\text{for $x\in(a,b]$}\\
G(x)&=\int_{x}^b g(t)\,dt &&\text{for $x\in[a,b]$}
\end{align}
The two functions $F$ and $G$ are continuous (even differentiable, by the fundamental theorem of calculus) and coincide on $(a,b]$, so

$$
\lim_{x\to a^+}F(x)=\lim_{x\to a^+}G(x)=G(a)
$$
That is, the improper integral $\int_a^b f(t)\,dt$ converges, and its value is $G(a)$, the integral of the repaired function $g$.
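As a concrete numerical illustration of the repaired integrand for the problem above, a sketch assuming `scipy` is available:

```python
import numpy as np
from scipy.integrate import quad

def g(x):
    # the repaired integrand: continuous extension with value 1/2 at x = 0
    return 0.5 if x == 0.0 else 1.0 / np.log1p(x) - 1.0 / x

value, error = quad(g, 0.0, 1.0)
print(value)   # a finite number (~0.47), so the integral converges
```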


calculus - Calculate the limit: $\lim_{x\to\infty} \sqrt[x]{3^x+7^x}$




Calculate the limit: $$\lim_{x\to\infty} \sqrt[x]{3^x+7^x}$$



I'm pretty much clueless on how to approach this. I've tried using the identity $c^x = e^{x \cdot \ln(c)}$ but that led me to nothing. Also I've tried replacing $x$ with $t=\frac{1}{x}$ such that I would end up with $\lim_{t\to 0^+} (3^{1/t} + 7^{1/t})^{t}$, however I've reached yet again a dead end.



Any suggestions or even hints on what should I do next?


Answer



Note that



$$\sqrt[x]{3^x+7^x}=7\sqrt[x]{1+(3/7)^x}=7\cdot \large{e^{\frac{\log\left(1+(3/7)^x\right)}{x}}}\to7$$


Wednesday 29 May 2013

Solving this limit $\lim_{n\to\infty} \left( \frac n {n+1} \right)^n$ when applying the ratio test on this series




If I want to check the convergence of the series
$$\sum\frac{n!}{n^n}$$
while solving using the Ratio test, I encountered this limit
$$\lim_{n\to\infty} \left( \frac n {n+1} \right)^n$$
Is it correct to solve it as follows?
$$\lim_{n\to\infty} \left( \frac 1 {1+\frac 1 n} \right)^n = \lim_{n\to\infty} \frac 1 { \left( 1+\frac 1 n \right)^n}=e^{-1}$$
(I know that the limit here is $e^{-1}$, but I feel it is wrong to remove the power $n$ from the numerator.)



Note:
I also solved it by taking the logarithm of the limit and applying l'Hopital; I got $e^{-1}$ that way as well, but it is very long. So I want a shorter way, and I do not know whether this shorter way is correct or not.



Answer



It is absolutely correct; in fact, it's always true that:



$$\left( \frac{a}{b} \right)^n = \frac{a^n}{b^n} $$


algebra precalculus - I'm stuck on this mathematical induction problem




I've been stuck on this problem for hours; I have no idea how to even calculate it. The exponents throw me off. If anyone can help me break it down step-by-step I would truly appreciate it.
Here's the problem (the induction step goes from $k$ to $k+1$):



$\sum\limits_{i=1}^{k} i\cdot 2^i= 2[1+(k-1)(2^k)]$





Answer



From your points $2$ and $3$, we need to show that




$$2(1+(k-1)2^k)+(k+1)2^{k+1}= 2(1+k2^{k+1})\\\stackrel{\text{divide by 2}}\iff 1+(k-1)2^k+(k+1)2^{k}= 1+k2^{k+1}\\\stackrel{\text{cancel out 1}}\iff (k-1)2^k+(k+1)2^{k}= k2^{k+1}\\\stackrel{\text{divide by} \,2^k}\iff (k-1)+(k+1)= 2k$$
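If you want to convince yourself numerically before chasing the algebra, a one-line spot-check in Python (a sanity check, not a proof):

```python
# numerical spot-check of the displayed induction step for small k
for k in range(1, 30):
    assert 2 * (1 + (k - 1) * 2**k) + (k + 1) * 2**(k + 1) == 2 * (1 + k * 2**(k + 1))
```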


Tuesday 28 May 2013

geometry - Where does $\pi^2$ appear spontaneously within physical phenomena and mathematical equations?

The term $\pi$ is found to appear in many equations and natural phenomena; however my question is related to $\pi^2$.



While trying to figure out the reason for some $\pi^2$ terms appearing in certain equalities that I came across, I have a question. And the question is this:





In which mathematical or physical equations or contexts does $\pi^2$ appear inherently?




-- and (now this second part is merely a follow-up question that did not form part of the original query but was added later) where can that $\pi^2$ term lend some interpretation of the underlying phenomenon, just as $\pi$ does, whereby we can interpret (in most cases, that is) that some type of circular ambulation in one dimension is involved?



As you can understand, the $\pi^2$ term is more complex and does not directly lend itself to an interpretation -- as opposed to $\pi$ which is very intuitive.



Thanks

calculus - Simplify to terms of generalized power mean



I have the following expression${}^1$ I'd like to simplify
$$r \equiv \left[ \frac1{x\sqrt{x}} \left( 1 - \frac{k}{x} \right) - \frac1{y\sqrt{y}} \left( 1 - \frac{k}{y} \right) \right]\left[ \frac1{\sqrt{x}} \left( 1 - \frac{k}{x} \right) - \frac1{\sqrt{y}} \left( 1 - \frac{k}{y} \right)\right]^{-1}$$
where $x > y > k\geq 1$.




Some attempts led me to think that $r$ can be expressed nicely in terms of generalized means of $x,y$ like the harmonic ($p=-1$), geometric ($p \to 0$), and other powers:${}^2$
$$\begin{align}
H &\equiv \left( \frac1x + \frac1y\right)^{-1} & G &\equiv \sqrt{xy} & M \equiv \left( x^p + y^p \right)^{1/p} \quad \text{with perhaps} \quad p=\frac{-1}2
\end{align}$$



To be specific, my question is this:




Is there a way to write it in this form $$r \overset{?}{=} \frac1x + \frac1y + \frac1{\sqrt{xy}} + {}\color{red}{??} =\frac1H + \frac1G + {}\color{red}{??} $$ such that the $\color{red}{??}$ part is "nice" in terms of $k$ and maybe $H,G$ or $M$ defined above?





I also wonder if there's some standard techniques like examining the asymptotic form to guess the coefficients of the proposed terms. I tried a bit but didn't get very far.



Directly pulling out $\frac1H$ and $\frac1G$ just change $r$ into something equally inviting but not more compact.



Any suggestions will be appreciated. Honestly, one main reason for me to think that "oh there must be some nice and more compact forms" is the glaring symmetry. I will be okay if there turn out to be none. Thanks.






Footnote 1:

$\quad$For the record, $r$ is part of the expression of the ratio between $\frac{\partial^2 f(u)}{ \partial u^2}$ and $\frac{\partial f(u) }{ \partial u}$, evaluated at $u=0$, where $$f(u) = \left( \frac1{\sqrt{u+y}} - \frac1{\sqrt{u+x}} \right)\sqrt{u+k}$$



Footnote 2:
$\quad$This is a slight abuse of notation, as the generalized means should carry the averaging factors $\frac12$ or $\frac1{\sqrt{2}}$. My shorthands can also match the p-norms, which have the benefit that norms ordered by decreasing power have correspondingly decreasing magnitudes. However, p-norms don't cover $\sqrt{x y}$ the way the generalized mean does.






Update (Nov.28th, 2017)



With all the inverses lying around, it turns out that the best way is to express $r$ in terms of the ... you guessed it ... inverses: $v \equiv 1/x$ and $w \equiv 1/y$. I'll spare the readers the actual algebra. Suffice it to say that this is also the natural course to take to adopt the notions of p-norms; one just has to give up on unifying $1 / \sqrt{xy} = 1/G = \sqrt{vw}$ formally.



Answer



We can write
$$r=\frac 1H+\frac 1G+\color{red}{\frac{1}{M-G+\frac{GM}{k}}}$$
where
$$M=(x^{-1/2}+y^{-1/2})^{-2}$$






If we write
$$r=\frac{\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}-k\left(\frac{1}{x^2\sqrt x}-\frac{1}{y^2\sqrt y}\right)}{\frac{1}{\sqrt x}-\frac{1}{\sqrt y}-k\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)}=\frac 1x+\frac 1y+\frac{1}{\sqrt{xy}}+A$$ then we have

$$A=\frac{k\left(\frac 1x-\frac 1y\right)\left(\frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}\right)}{\frac{1}{\sqrt x}-\frac{1}{\sqrt y}-k\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)}$$



Using
$$x+y=\frac{G^2}{H},\quad xy=G^2,\quad \frac{1}{M}=\frac 1H+\frac 2G$$
we have
$$\left(\frac 1x-\frac 1y\right)^2=\left(\frac 1H+\frac 2G\right)\left(\frac 1H-\frac 2G\right)\implies \frac 1x-\frac 1y=-\sqrt{\frac 1M\left(\frac 1H-\frac 2G\right)}$$



$$\left(\frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}\right)^2=\frac{1}{G^2}\left(\frac 1H+\frac 2G\right)\implies \frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}=\frac 1G\sqrt{\frac 1M}$$



$$\left(\frac{1}{\sqrt x}-\frac{1}{\sqrt y}\right)^2=\frac 1H-\frac{2}{G}\implies \frac{1}{\sqrt x}-\frac{1}{\sqrt y}=-\sqrt{\frac 1H-\frac{2}{G}}$$




$$\small\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)^2=\left(\frac 1H+\frac 1G\right)^2\left(\frac 1H-\frac 2G\right)\implies \frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}=-\left(\frac 1M-\frac 1G\right)\sqrt{\frac 1H-\frac 2G}$$



So, we get



$$A=\frac{k\left(-\sqrt{\frac 1M\left(\frac 1H-\frac 2G\right)}\right)\frac 1G\sqrt{\frac 1M}}{-\sqrt{\frac 1H-\frac{2}{G}}-k\left(-\left(\frac 1M-\frac 1G\right)\sqrt{\frac 1H-\frac 2G}\right)}=\frac{1}{M-G+\frac{GM}{k}}$$


number theory - Why does $8$ not divide both the divisor and remainder?



The following is an explanation of why the division algorithm works for the two integers, $1424$ and $3084$. This is the link https://www.rit.edu/studentaffairs/asc/sites/rit.edu.studentaffairs.asc/files/docs/services/resources/handouts/DM6_EuclideanAlgorithm_BP_9_22_14.pdf





The example used to find the gcd$(1424, 3084)$ will be used to provide an idea
as to why the Euclidean Algorithm works. Let $d$ represent the greatest common
divisor. Since this number represents the largest divisor that evenly divides
both numbers, it is obvious that $d|1424$ and $d|3084$. Hence $d|3084 –1424$
in the same way that a numerator of two or more terms is divisible by a factor
in the denominator if and only if that factor divides evenly into each individual
term in the numerator. Furthermore $d|3084 – 2(1424)$ or, simplifying the left
side, $d|236$. Consequently we note that d divides the remainder resulting
from the division of the two numbers for which the greatest common factor is

being sought. This corresponds to the first long division calculated in the
example above.
The problem has now been reduced to a smaller one. We have that $d|1424$
and $d|236$. Hence $d|1424 – 236$, or better yet, $d|1424 – 6(236)$ which when
simplified reveals that $d|8$. At this point we know that d must be one of the
following values: $1, 2, 4,$ or $8$. Note that $8$ is the remainder resulting from the
division of the divisor and remainder found in the original division, so it will
not be a divisor of both. So we will take the divisor and remainder from the
second division to reduce the problem to yet an even smaller one.





My question is, why is it that because $8$ is the remainder resulting from the
division of the divisor and remainder found in the original division, that it is not a divisor of both?


Answer



The Euclidean algorithm stops when the remainder divides both divisor and dividend (and is thus shown to be the $\gcd$). So the statement that you question is odd, unless it is referring to some context outside the scope of the text and the link. Still it is certainly true that at any point in the process, the $\gcd$ is one of the factors of the latest remainder.



I write the extended Euclidean algorithm as a table:



$\begin{array}{c|ccc}
n & s & t & q \\ \hline

3084 & 1 & 0 & \\
1424 & 0 & 1 & 2 \\
236 & 1 & -2 & 6 \\
8 & -6 & 13 & 29 \\
\color{red}{4} & 175 & -379 & 2 \\
0 & -356 & 771 & 0 \\
\end{array}$



... each line solving the equation $n=3084\cdot s + 1424 \cdot t$, set up in the first two lines with the trivial equations for $n=3084$ and $1424$, and then subtracting $q$ times the line from the line above to get the smaller $n$ value on the next line. The last non-zero $n$ is the $\gcd$ of the opening pair.




This also gives the appropriate Bézout's identity for the original pair for a linear combination that gives their $\gcd$: $175\cdot 3084+(-379)\cdot 1424=4$
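The table is easy to reproduce programmatically; here is a minimal Python sketch generating the same $(n, s, t)$ rows:

```python
def euclid_table(a, b):
    # rows (n, s, t) with n = a*s + b*t, exactly as in the table above
    rows = [(a, 1, 0), (b, 0, 1)]
    while rows[-1][0] != 0:
        (n0, s0, t0), (n1, s1, t1) = rows[-2], rows[-1]
        q = n0 // n1
        rows.append((n0 - q * n1, s0 - q * s1, t0 - q * t1))
    return rows

for row in euclid_table(3084, 1424):
    print(row)   # last nonzero n is gcd = 4, with 175*3084 - 379*1424 = 4
```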


logic - Why isn't '&' used for logical conjunction?

There is a beautiful and well-established logogram for "and" that is known to virtually every more or less educated person in the world - it's the ampersand '&'. It's completely unambiguous, as opposed to logical disjunction (I'm talking about its inclusive and exclusive versions).



Why is the ampersand not used universally to denote logical conjunction?



The best answer I can come up with is because actually drawing the symbol takes a bit of knack. However, there are two reasons this is not an answer:





  1. Drawing Greek letters takes quite a bit of knack as well, and they are all over the place in mathematics and logic.

  2. There are handwritten variations of the ampersand that are extremely easy to draw.



    Is using '$\wedge$' for logical conjunction just a historical accident?




P.S. I was of course lying when I said the ampersand is not used universally, as I have seen it used here and there, and I use it myself in my own writing.

real analysis - Did I compute the limit of the sequence $x_{n+1}=\frac{x_n}{x_n+1},\ x_0=1$ properly?



I need to study the limit behavior of $x_{n+1}=\frac{x_n}{x_n+1}, x_0=1$ and, if the limit exists, compute the limit. I observed the first few terms and it seemed that the sequence was decreasing, so I decided to show that the sequence was monotonic:



Need to show $x_{n+1}\leq x_n$ which is equivalent to showing $x_{n+1}-x_n\leq0$
$$x_{n+1}-x_n=\frac{x_n}{x_n+1}-x_n=\frac{x_n-x_n(x_n+1)}{x_n+1}=\frac{-(x_n)^2}{x_n+1}=-\frac{(x_n)^2}{x_n+1}\leq0$$ for all n if $x_n\geq0$ which can be proved by induction:

$$x_0=1\geq0$$$$x_1=1/2\geq0$$
Induction step ($x_n \implies x_{n+1}$):
$$x_{n+1}=\frac{x_n}{x_n+1}>\frac{1}{x_n+1}>0$$ because $x_n>0$. (I don't know if that was the proper way to do the induction. Any confirmation?)



Since it was proved that $x_n\geq0$, $x_{n+1}-x_n=-\frac{(x_n)^2}{x_n+1}\leq0$, thus the sequence is monotone and decreasing. The sequence is also bounded:



Since the sequence is decreasing it is bounded above by $1$, and because $x_n\geq0$ the sequence is bounded below by $0$.



The boundedness and monotonicity of the sequence implies that a limit exists:




Let $\lim x_n=x$. Because $\lim x_n=\lim x_{n+1}$, $$x=\frac{x}{x+1}\iff x(x+1)=x\iff x+1=1\implies x=0$$ So $0$ is the limit.



I'm not sure if there are problem in the work that I did and any help would be greatly appreciated.






I don't know if I should start a new question for this, so I've included it here anyway:
Since one was able to tell that $\lim x_n=\lim \frac{1}{1+n}$ for the above sequence, should one try to do the same thing with the sequence $x_{n+1}= \frac{(x_n)^2}{x_n+1}$, $x_0=1$?


Answer



There’s a small error in your proof that $x_n\ge 0$ for all $n$: it’s not true that




$$\frac{x_n}{x_n+1}>\frac1{x_n+1}\;,$$



since in fact it turns out that $x_n\le 1$ for all $n$. However, given the induction hypothesis that $x_n\ge 0$, you certainly have



$$x_{n+1}=\frac{x_n}{x_n+1}\ge 0\;,$$



which is all you need here. Otherwise it looks fine.



Note that if you calculate the first few values, you find that $x_1=\frac12$, $x_2=\frac13$, and $x_3=\frac14$, suggesting that in general $x_n=\frac1{n+1}$. An alternative approach would be to show by induction that this is true for all $n\ge 0$; this is not at all difficult.



Monday 27 May 2013

calculus - A closed form of $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k!}\Gamma^2\left(\frac{k}{2}\right)$



I am looking for a closed form of the following series





\begin{equation}
\mathcal{I}=\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k!}\Gamma^2\left(\frac{k}{2}\right)
\end{equation}




I have no idea how to answer this question. Wolfram Alpha gives me the result:




$$\mathcal{I}\approx2.7415567780803776$$





Could anyone here please help me to obtain the closed form of the series preferably (if possible) with elementary ways (high school methods)? Any help would be greatly appreciated. Thank you.


Answer



You can use the identity given by the Euler Beta function
$$\int_{0}^{1}x^{a-1} (1-x)^{b-1} \,dx=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$$
to state:
$$S=\sum_{k=1}^{+\infty}\frac{(-1)^{k+1}}{k!}\Gamma(k/2)^2=\sum_{k=1}^{+\infty}\frac{(-1)^{k-1}}{k}\int_{0}^{1}\left(x(1-x)\right)^{k/2-1}\,dx $$
and by switching the series and the integral:
$$ S = \int_{0}^{1}\frac{\log(1+\sqrt{x(1-x)})}{x(1-x)}dx = 2\int_{0}^{1/2}\frac{\log(1+\sqrt{x(1-x)})}{x(1-x)}dx,$$

$$ S = 2\int_{0}^{1/2}\frac{\log(1+\sqrt{1/4-x^2})}{1/4-x^2}dx = 4\int_{0}^{1}\frac{\log(1+\frac{1}{2}\sqrt{1-x^2})}{1-x^2}dx,$$
$$ S = 4\int_{0}^{\pi/2}\frac{\log(1+\frac{1}{2}\sin\theta)}{\sin\theta}d\theta.$$
Now Mathematica gives me $\frac{5\pi^2}{18}$ as an explicit value for the last integral, but probably we are on the wrong path, and we only need to exploit the identity
$$\sum_{k=1}^{+\infty}\frac{1}{k^2\binom{2k}{k}}=\frac{\pi^2}{18}$$
that follows from the Euler acceleration technique applied to the $\zeta(2)$-series. The other "piece" (the $U$-piece in the Marty Cohen's answer) is simply given by the Taylor series of $\arcsin(z)^2$. More details to come.






As a matter of fact, both approaches lead to an answer.
The (Taylor) series approach, as Bhenni Benghorbal shows below, leads to the identity:

$$\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k!}\Gamma^2\left(\frac{k}{2}\right)x^k= 2 \arcsin \left( x/2 \right) \left(\pi - \arcsin \left( x/2\right) \right),\tag{1}$$
while the integral approach, as Achille Hui pointed out in the comments, leads to:
$$\begin{eqnarray*}\int_{0}^{\pi/2}\frac{\log(1+\frac{1}{2}\sin\theta)}{\sin\theta}\,d\theta&=&\int_{0}^{1}\log\left(1+\frac{t}{1+t^2}\right)\frac{dt}{t}\\&=&\int_{0}^{1}\frac{\log(1-t^3)-\log(1-t)-\log(1+t^2)}{t}\,dt\\&=&\int_{0}^{1}\frac{-\frac{2}{3}\log(1-t)-\frac{1}{2}\log(1+t)}{t}\,dt\\&=&\frac{1}{6}\sum_{k=1}^{+\infty}\frac{4+3(-1)^k}{k^2}=\frac{1}{6}\left(4-\frac{3}{2}\right)\zeta(2)=\frac{5\pi^2}{72}.\end{eqnarray*}\tag{2}$$



Thanks to both since now this answer may become a reference both for integral-log-ish problems like $(2)$ and for $\Gamma^2$-series like $(1)$.






Update 14-06-2016. I just discovered that this problem can also be solved by computing
$$ \int_{-1}^{1} x^n\, P_n(x)\,dx, $$

where $P_n$ is a Legendre polynomial, through Bonnet's recursion formula or Rodrigues' formula. Really interesting.


limits - Show that $\lim_{x\to \frac{\pi}{2}} \frac{1}{\big(x-\frac{\pi}{2}\big)}+{\tan(x)}=0$.


Prove that
$$\lim_{x\to \frac{\pi}{2}} \frac{1}{\big(x-\frac{\pi}{2}\big)}+{\tan(x)}=0.$$





I'm not really sure how to proceed. I know that I should not try L'Hôpital's rule (tried that), but I'm not sure how I would incorporate this into the Squeeze Theorem or how I would use continuity.



Thanks!



Edit: Turns out I was really dumb and you do use L'Hôpital's rule twice. I made the mistake of differentiating the whole quotient rather than the function on top and the bottom of the vinculum separately.

Sunday 26 May 2013

calculus - If $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=1$, $\lim_{n\to\infty}\frac{a_{2n}}{a_n}=\frac{1}{2}$ then $\lim_{n\to\infty}\frac{a_{3n}}{a_n}=\frac{1}{3}$



Let $\{a_n\}$ be a decreasing sequence and $a_n>0$ for all $n$.
If $\displaystyle\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=1$ and $\displaystyle\lim_{n\to\infty}\frac{a_{2n}}{a_n}=\frac{1}{2}$,

how to prove or disprove that
$\displaystyle\lim_{n\to\infty}\frac{a_{3n}}{a_n}=\frac{1}{3}$ ?



Thank you.


Answer



The limit of $\frac{a_{3n}}{a_n}$ need not be $\frac13$, because it need not exist.



Each $n>0$ can be written uniquely as $2^k + r$, where $0 \le r < 2^k$, i.e. $2^k \le n < 2^{k+1}$. Define $a_n$ (in terms of these $k$ and $r$) piecewise as follows: $$a_n = \begin{cases} \frac{2^{1 - r/2^{k-1}}}{2^k} & \mbox{if }r < 2^{k-1} \\
\frac{1}{2^k} & \mbox{if }r \ge 2^{k-1}.
\end{cases}$$




Essentially, for the first half of the range from $2^k$ to $2^{k+1}$, $a_n$ decreases from $\frac{1}{2^{k-1}}$ to $\frac1{2^k}$ geometrically, by factors of $2^{-1/2^{k-1}}$. For the second half of that range, $a_n$ stays at $\frac1{2^k}$.



It is identically true that $\frac{a_{2n}}{a_n} = \frac12$. Moreover, if $n \ge 2^k$, then $2^{-1/2^{k-1}} \le \frac{a_{n+1}}{a_n} \le 1$, so $\frac{a_{n+1}}{a_n} \to 1$.



However, when $n = 2^k$, $\frac{a_{3n}}{a_n} = \frac14$, while $\frac{a_{9n}}{a_{3n}} = \frac1{2^{5/4}}$, so $\frac{a_{3n}}{a_n}$ does not converge.
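A short Python sketch of this construction makes the non-convergence visible numerically (the exact powers of $2$ are approximated in floating point):

```python
def a(n):
    # n = 2**k + r with 0 <= r < 2**k: geometric decay on the first half-range,
    # constant 1/2**k on the second half-range
    k = n.bit_length() - 1
    r = n - 2**k
    if r < 2 ** (k - 1):
        return 2.0 ** (1 - r / 2 ** (k - 1)) / 2**k
    return 1.0 / 2**k

for k in range(3, 10):
    n = 2**k
    print(a(3 * n) / a(n), a(9 * n) / a(3 * n))   # 0.25 and 2**(-5/4) ~ 0.4204
```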



You might complain that the sequence $a_n$ is not strictly decreasing. If this is a problem, just replace $a_n$ by $a_n' = a_n\left(1 + \frac1n\right)$. We have:





  • $\frac{a'_{n+1}}{a'_n} = \frac{a_{n+1}}{a_n} \cdot \frac{1+\frac1{n+1}}{1+\frac1n} = \frac{a_{n+1}}{a_n} \left(1 - \frac1{(n+1)^2}\right)$, which still converges to 1.

  • $\frac{a'_{2n}}{a'_n} = \frac{a_{2n}}{a_n} \cdot \frac{1 + \frac1{2n}}{1 + \frac1n} = \frac{a_{2n}}{a_n} \left(1 - \frac1{2n+2}\right)$, which still converges to $\frac12$.

  • $\frac{a'_{3n}}{a'_n} = \frac{a_{3n}}{a_n} \cdot \frac{1 + \frac1{3n}}{1 + \frac1n} = \frac{a_{3n}}{a_n} \left(1 - \frac2{3n+3}\right)$, which still does not converge to anything.

  • Since we already had $a_{n+1} \le a_n$, we now have $a'_{n+1} = a_{n+1} \left(1 + \frac1{n+1}\right) \le a_n \left(1 + \frac1{n+1}\right) < a_n \left(1 + \frac1n\right) = a'_n$.



If the limit $\frac{a_{3n}}{a_n}$ does exist, then we may proceed as in Andras's answer to show that it must equal $\frac13$:



If $\frac{a_{3n}}{a_n} \to c$, then $\frac{a_{3^kn}}{a_n} \to c^k$ for any $k$, but we can bound $\frac{a_{3^kn}}{a_n}$ between $\frac{a_{2^\ell n}}{a_n}$ and $\frac{a_{2^{\ell+1} n}}{a_n}$ for $\ell$ such that $2^\ell < 3^k < 2^{\ell+1}$ (i.e., $\ell = \lfloor k \log_2 3\rfloor$). These two ratios converge to $\frac{1}{2^{\ell}}$ and $\frac1{2^{\ell+1}}$, so we get that $2^{-\ell/k} < c < 2^{-(\ell+1)/k}$. Taking $k$ arbitrarily large, $\frac{\ell}{k} \to \log_2 3$, so $c$ must be $2^{-\log_2 3} = \frac13$.


linear algebra - Is a square matrix with positive determinant, positive diagonal entries and negative off-diagonal entries an M-matrix?

I'm trying to determine if a certain class of matrices are M-matrices in general. I'm considering square matrices $A$ with the following properties:




  1. $\det(A) > 0$ (strictly),

  2. all the diagonal entries are positive,

  3. all the off diagonal entries are negative.




An M-matrix can be characterized in many ways. I've tried proving this (or finding a counterexample) by looking at the principal minors and have found that $A$ is an M-matrix if it has dimension 2 or 3, but it's hard to make any sort of induction with that. Right now I'm trying two other definitions (they're equivalent) of M-matrices




  1. There is a positive vector $x$ such that $Ax > 0$ (component-wise).

  2. $A$ is monotone (i.e. $Ax \geq 0$ implies $x \geq 0$).



Again, 1 isn't hard to show if the matrix is small, but this is hard to generalize, so I thought an easier approach might be using 2 and try to proceed by contradiction. Does anyone here have some suggestions? This is an outside project for a class I'm working on so I don't know if these matrices are or are not M-matrices in general - mostly just looking for tips here.

real analysis - How can you prove that a function has no closed form integral?



I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:





  • addition/subtraction

  • multiplication/division

  • raising to powers and roots

  • trigonometric functions

  • exponential functions

  • logarithmic functions



, which when differentiated gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.




What sort of techniques are used to prove statements like this? What is this branch of mathematics called?






Merged with "How to prove that some functions don't have a primitive" by Ismael:



Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in term of other simple functions.



I wonder how we can prove that kind of assertion?



Answer



It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, $g$ non-constant, the antiderivative



$$\int f(x)\exp(g(x)) \, \mathrm dx$$



can be expressed in terms of elementary functions if and only if there exists some rational function $h$ such that it is a solution to the differential equation:



$$f = h' + hg'$$



$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.




I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: http://www.sci.ccny.cuny.edu/~ksda/PostedPapers/liouv06.pdf



Liouville's original paper:




Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.




Michael Spivak's book on Calculus also has a section with a discussion of this.



probability - Estimating the critical probabilities $\mathrm{P_{c1}}$ and $\mathrm{P_{c2}}$ mathematically for the infinite system case



Suppose I have an $\mathrm{N\times N}$ square matrix consisting of only $0$'s and $1$'s. (I'm considering that $0$ represents "white" and $1$ represents "black".) The probability of a certain element being $1$ is $\mathrm{p}$. At that certain probability $\mathrm{p}$, we count the number of white clusters and the number of black clusters in the matrix. Any two elements that share a side (i.e. any one of them lies to the left, right, top or bottom of the other) or share a vertex are considered to belong to the same cluster. BUT, there's one special case, i.e. if anywhere in the matrix there occurs a situation like this:



0 | 1



1 | 0



OR




1 | 0



0 | 1



That is, when two $1$'s share a vertex (diagonally connected) and two $0$'s also share a vertex as shown, then $50\%$ of the time one should consider the $1$'s to belong to the same cluster, and the other $50\%$ of the time one should consider the $0$'s to belong to the same cluster.



Suppose I am gradually increasing $\mathrm{p}$ from $0$ to $100$ in steps of $0.001$ and counting the number of black and white clusters for each $\mathrm{p}$. My aim is to find that critical probability $\mathrm{P_{c1}}$ at which number of white clusters increases from $1$ to a number greater than $1$ as $\mathrm{N}\to\infty$. Also I need to find $\mathrm{P_{c2}}$ at which number of black clusters decreases from a number greater than $1$ to $1$.





To be more clear,



Let $\mathcal{C}_0, \mathcal{C}_1$ denote the clusters of 0's and 1's
in the matrix (or graph), respectively.



We are defining $\mathrm{P_{c1}}$ as



$$ \mathcal{C}_{0} = \begin{cases} 1 & \qquad p < \mathrm{P_{c1}} \\
> 1 & \qquad p > \mathrm{P_{c1}} \end{cases} $$




and $\mathrm{P_{c2}}$ as



$$ \mathcal{C}_{1} = \begin{cases}
> 1 & \qquad p < \mathrm{P_{c2}} \\ 1 & \qquad p > \mathrm{P_{c2}} \end{cases} $$



for $N\to\infty$.




I wrote a program for the $1000\times 1000$ case, and averaged the critical probabilities over $10$ iterations. (The program basically had a loop which ran from $\mathrm{p}=0$ to $\mathrm{p}=100$ and I outputted those $\mathrm{p}$'s at which the number of white clusters increased from $1$ and number of black clusters decreased to $1$) The value I got for $\mathrm{P_{c1}}$ was $0.05$ and for $\mathrm{P_{c2}}$ it was $0.96$. However, I don't know if any purely mathematical technique exists to find the convergence value of these two critical probabilities for the infinite matrix case (i.e. $\mathrm{N}\to \infty$). Around $3$ decimal places of accuracy will be enough for my purpose. Any help will be appreciated.


Answer




At the moment it is not 100% clear how your critical probabilities are defined. In the below I consider the following definitions.



Let $\mathcal{C}_0, \mathcal{C}_1$ denote the clusters of 0's and 1's in your matrix (or graph), respectively. For a fixed $p \in [0,1]$ we can define



$$C_0(p) = \lim_{N \rightarrow \infty} \mathbf P_{p,N}[ |\mathcal C_0| > 1]$$
which is the limit of the probability that there is more than one cluster of $0$s (in your wording, the probability of more than one white cluster).



We define a critical probability $p_c$ such that
$$
C_0(p) =

\begin{cases}
0 & \qquad p < p_c \\
> 0 & \qquad p > p_c
\end{cases}
$$
That is: when $p < p_c$, in the limit $N \rightarrow \infty$ the probability of having more than $1$ cluster is $0$. Roughly speaking this would mean there is at most $1$ cluster of $0$s in the limit. Conversely when $p > p_c$ there is a positive chance (in the limit) of seeing more than 1 cluster.



For this definition of $p_c$, I claim that $p_c = 0$.



To see this, suppose that $p > 0$ and consider the event that a given $3 \times 3$ sub-matrix equals

$$ B =
\left( \begin{matrix}
1 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 1
\end{matrix}
\right)$$
The probability of this configuration occurring is $q = p^8 (1-p) > 0$. Now suppose that $N = 3n$; then the matrix can be seen as $n^2$ blocks like the above. Each of these $n^2$ blocks is independent, and has probability $q$ (which does not depend on $n$) of being of the form $B$. The number of the $n^2$ blocks which equal $B$ is given by a Binomial variable $\text{Bin}(n^2, q)$; in particular, the probability that at least two such blocks exist is
$$\mathbf{P}[ \text{Bin}(n^2, q) \geq 2] =1 - (1-q)^{n^2} - n^2q(1-q)^{n^2 - 1},$$




The probability of there being at least $2$ clusters of $0$s (each block equal to $B$ contributes an isolated $0$) is at least the probability that two blocks like the above exist, since this is a very special case of having two clusters; that is,



\begin{align*}
\lim_{n\rightarrow \infty} \mathbf P_{p,3n}[ |\mathcal C_0| > 1] & \geq
\lim_{n \rightarrow \infty} \mathbf{P}[ \text{Bin}(n^2, q) \geq 2] \\
& = \lim_{n \rightarrow \infty} 1 - (1-q)^{n^2} - n^2q(1-q)^{n^2 - 1} \\
& = 1.
\end{align*}



That is, for any $p > 0$ we have $C_0(p) = 1$. Clearly $C_0(0) = 0$, and so it follows that $p_c =0$.




The decision to use $N = 3n$ is not really needed, but it makes the proof a bit simpler. Further, with a bit more probabilistic machinery you can argue via Kolmogorov's Zero-One law that in the limit the block $B$ appears infinitely often, which ensures that in fact for any $p > 0$ the expected number of clusters is infinite.






A simplified argument.



The above argument is useful as it allows us to show that there must in fact be infinitely many clusters in the limit. In this example I use a much simpler argument which you can check computationally.



If the matrix you have is denoted $A_{i,j}$ with $1 \leq i,j \leq N$ consider just the $2 \times 2$ sub-matrix in the top left corner. If this takes the specific form




$$
C =
\left(
\begin{array}{cc}
0 & 1 \\
1 & 1
\end{array}
\right)
$$




and moreover if there is at least one more $0$ elsewhere in the matrix, then this would imply that you have two clusters of $0$s.



For any $N \geq 3$, and fixed $p > 0$ the probability that the top corner is equal to $C$ is given by
$$(1-p) p^3$$
Of the remaining $N^2 - 4$ vertices, the number of $0$s to occur is distributed according to a $\text{Bin}(N^2 - 4, (1-p))$ variable, therefore the probability that there is at least one $0$ is
$$ 1 - \mathbf P[ \text{Bin}(N^2 - 4, (1-p) ) = 0] = 1 - p^{N^2 - 4}.$$
So that means the probability of seeing the corner equal to $C$ and also there being at least one more $0$ is given by
$$q = p^3 (1-p) ( 1 - p^{N^2 - 4}).$$
Note that this is a very special case of there being at least two clusters, but for any $p > 0$ the probability $q > 0$.




So we have that with probability $q$ a given sample will have this property. Then we can ask how many samples would we need to take before seeing this event. Whilst we cannot say for certain how many, on average one would expect to have to take $1/q$ samples (this is derived from a Geometric Distribution).



For $N$ large, and $p$ small we can approximate $q \sim p^{3}$, so that $1/q \approx p^{-3}$. So for example when $p = 0.01$, you would expect to need around $10^6$ samples to see this event.



Whilst this is an approximation, it is an approximation to a very specific example of having more than $1$ cluster; so in fact I would expect you to require fewer to observe a scenario with at least two clusters.



Of course when you take into account the fact that there are four corners (and not just the top left considered above), the probability rises again, and approximately (since I assume that each of the four possible corner events is independent, which they're not) the probability of at least one of the four occurring is
$$q' = 1 - (1-q)^4 \sim 1 - (1-p^3)^4,$$
which in turn means that on average it would take $1/q'$ samples before observing such a corner event. And noting that

$$ \frac1{q'} \sim \frac{1}{1 - (1-p^3)^4} \sim \frac1{4p^3} + O(1)$$
we see that actually for $p = 0.01$ we need only on the order of $250,000$ samples to see such a corner event.
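A Monte Carlo sketch (in Python, with assumed parameters $N=20$, $p=0.9$) that checks the corner-event probability against the formula for $q$:

```python
import numpy as np

# Monte Carlo check of the corner-event probability q = p^3 (1-p) (1 - p^(N^2-4));
# entries are 1 with probability p, independently
rng = np.random.default_rng(0)

def corner_event(A):
    # top-left 2x2 block equals [[0,1],[1,1]] and at least one more 0 elsewhere
    corner = A[0, 0] == 0 and A[0, 1] == 1 and A[1, 0] == 1 and A[1, 1] == 1
    return corner and A.sum() <= A.size - 2   # at least two 0s in total

N, p, trials = 20, 0.9, 100_000
hits = sum(corner_event((rng.random((N, N)) < p).astype(np.int8)) for _ in range(trials))
q_theory = p**3 * (1 - p) * (1 - p**(N * N - 4))
print(hits / trials, q_theory)   # both should be near 0.0729
```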


Saturday 25 May 2013

trigonometry - How does e, or the exponential function, relate to rotation?



$e^{i \pi} = -1$. I get why this works from a sum-of-series perspective and from an integration perspective, as in I can evaluate the integrals and find this result. However, I don't understand it intuitively.



Intuitively, what this means to me is that if you rotate pi radians around a unit circle, you will end up exactly opposite of where you started.



Expanding upon this, for any theta, $e^{i\theta}$ is equivalent to rotating $\theta$ radians around a unit circle. So it's obvious (if not intuitive) to me that $e^{(\pi/2)i} = i$ and $e^{2\pi i} = 1$ and so on.



What I'm wondering is, intuitively, how is the natural logarithm related so closely to circles? My understanding of $e$ stems from exponential growth, and I don't see how that ties to rotation around a unit circle.




I know the formulas, but I'm looking for an intuitive explanation. For example, when I used to ask how sin and cos related to circles, people would show me taylor series or tables with a bunch of values or tell me to use a calculator, but it didn't click until somebody told me that sin is just a measure of the height of a point as you travel around the unit circle. Then it all makes sense.



I'm looking for that kind of explanation of $e$ - how is $e$ related to circles and why does $e^{ix} = \cos(x) + i\sin(x)$?


Answer



There is a closely related discussion at this math.SE question with lots of details. Let me see if I can summarize it as concisely as I can:




  • Let $r(t) : \mathbb{R} \to \mathbb{R}^2$ parameterize a particle moving uniformly around the unit circle. Then $|r(t)|^2 = \langle r(t), r(t) \rangle = 1$ (where $\langle \cdot, \cdot \rangle$ denotes the dot product). Differentiating this relation gives $\langle r(t), r'(t) \rangle = 0$; in other words, the displacement is always orthogonal to the velocity. (This should make physical sense.)

  • Since in addition $r(t)$ is moving uniformly we have $|r'(t)|^2$ is a constant, and we may as well assume $|r'(t)| = 1$ by a suitable change of units. Hence $r'(t)$ is either a $90^{\circ}$ clockwise or counterclockwise rotation from $r(t)$.


  • Now identify $\mathbb{R}^2$ with $\mathbb{C}$ and identify $r$ with a function $z(t) : \mathbb{R} \to \mathbb{C}$. If the particle is moving counterclockwise, then the above implies that $z'(t) = i z(t)$.

  • But this differential equation clearly has unique solution $z(t) = e^{it} z(0)$.



So the fact that multiplication by $i$ is the same as rotation by $90^{\circ}$ actually immediately implies the more general fact about arbitrary rotations through Euler's formula (although one does not need Euler's formula to see this, of course). The other lesson to keep in mind here is that $e$ shows up whenever you solve linear ODEs. Abstractly this is because $e^{\lambda z}$ is an eigenvector for the derivative operator of eigenvalue $\lambda$.
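You can watch the differential equation $z'=iz$ trace out the circle numerically; a crude Euler-method sketch in Python:

```python
import numpy as np

# integrate z'(t) = i z(t), z(0) = 1 with crude Euler steps, up to t = pi
dt = 1e-5
steps = int(np.pi / dt)
z = 1 + 0j
for _ in range(steps):
    z += 1j * z * dt          # velocity is i*z: perpendicular to the position
print(z, np.exp(1j * steps * dt))  # both approximately -1: half a turn
```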



I think these are important and fundamental questions, and it's a pity they aren't more clearly addressed anywhere in the undergraduate curriculum.







So, to summarize: $e^{it}$ is a complex number $\cos t + i \sin t$ which describes counterclockwise rotation by $t$ radians. It follows that if $z$ is a complex number of absolute value $1$, then the possible values of $\log z$ are the purely imaginary numbers $it$ such that $e^{it} = z$; in other words, they're the possible values of $t$ such that $z$ describes rotation by $t$ radians. So taking the logarithm of a rotation gives you an angle.


algebra precalculus - Please help to complete proof of inclusion and exclusion principle




I want to complete the following proof:



(image of the author's partial proof by induction; not reproduced here)



So continuing where the author left off, I did the following:




$\begin{array} {cc}
& \sum\limits_{i=1}^{n-1} P(A_i) - \sum\limits_{1\le i < j \le n-1}P(A_i \cap A_j) + \dots + (-1)^n P(A_1 \cap A_2 \cap \dots \cap A_{n-1}) + P(A_n) - P(\bigcup\limits_{i=1}^{n-1} (A_i \cap A_n))\\
&= \sum\limits_{i=1}^{n-1} P(A_i) - \sum\limits_{1\le i < j \le n-1}P(A_i \cap A_j) + \dots + (-1)^n P(A_1 \cap A_2 \cap \dots \cap A_{n-1}) + P(A_n) - \left[ \sum\limits_{i=1}^{n-1}P(A_i \cap A_n) - \sum\limits_{1\leq i < j \leq n-1}^{n-1}P(A_i \cap A_j \cap A_n) + \dots + (-1)^nP(A_i \cap A_2 \cap \dots \cap A_{n-1}\cap A_n)\right]\\

\end{array}$




I got here by applying the inclusion-exclusion formula to $P(\bigcup\limits_{i=1}^{n-1} (A_i \cap A_n))$ as the author instructed and then simplifying based on the following:




$(A_i \cap A_n) \cap (A_j \cap A_n) = A_i \cap A_j \cap A_n$




I tried this new formula with $n=3$ and I did get the correct answer, so I think that the manipulations I did are correct.




Also, I know I can get the $\sum\limits_{i=1}^{n}P(A_i)$ in the theorem because:




$\sum\limits_{i=1}^{n}P(A_i) = \sum\limits_{i=1}^{n-1}P(A_i) + P(A_n)$




So, this formula can be simplified to:





$\sum\limits_{i=1}^{n} P(A_i) - \sum\limits_{1\le i < j \le n-1}P(A_i \cap A_j) + \dots + (-1)^n P(A_1 \cap A_2 \cap \dots \cap A_{n-1}) - \left[ \sum\limits_{i=1}^{n-1}P(A_i \cap A_n) - \sum\limits_{1\leq i < j \leq n-1}^{n-1}P(A_i \cap A_j \cap A_n) + \dots + (-1)^nP(A_i \cap A_2 \cap \dots \cap A_{n-1}\cap A_n)\right]$




Now I don't know how to proceed.



If the answer involves manipulating the summations then the more detailed the answer, the better as I am not too familiar with manipulating sums.


Answer



In the first line of your last equation, you have the term $-\sum_{1\leq i < j\leq n-1} P(A_i\cap A_j)$. In the inclusion-exclusion formula, you have to subtract all $P(A_i\cap A_j)$ for $1\leq i < j \leq n$, so you are missing the terms of the form $P(A_i\cap A_n)$ for $1\leq i < n$; but those are exactly the first terms inside the bracket on the second line, hence you can regroup to obtain the corresponding term of the inclusion-exclusion formula.

The next term that should appear on your first line (and that you abbreviated with the dots), is $\sum_{1\leq i < j < k \leq n-1} P(A_i\cap A_j\cap A_k)$. As before, in the inclusion-exclusion principle, you'd have to have $\sum_{1 \leq i < j < k \leq n} P(A_i\cap A_j \cap A_k)$, do you see the difference? Again, you are missing the terms of the form $P(A_i\cap A_j\cap A_n)$ for $1\leq i < j < n$, and again, those appear in the second line, hence you can regroup to obtain the term of the inclusion-exclusion formula.




I hope you now get the pattern. Each time, the induction hypothesis on the first line gives a formula from which some terms are missing, but the missing terms come from the second line (and the very last missing term comes from the third line); hence you regroup and you are done. This proof can be made more "formal" by writing it with summations, but I think it's clearer this way.


elementary number theory - What is the best way to solve modular arithmetic equations such as $9x \equiv 33 \pmod{43}$?



What is the best way to solve equations like the following:



$9x \equiv 33 \pmod{43}$



The only way I know would be to try all multiples of $43$ and $9$ and compare until I get $33$ for the remainder.




Is there a more efficient way ?



Help would be greatly appreciated!


Answer



How would we solve it in $\mathbb{R}$? Divide both sides by $9$ of course—or, in other words, multiply both sides by the multiplicative inverse of $9$. This setting is no different.



The challenge is knowing the multiplicative inverse of $9$ in $\mathbb{Z}_{43}$. What is key$^\dagger$ is that $\gcd(9,43)=1$, which guarantees integers $n$ and $m$ such that $9n + 43m = 1$. Modding out by $43$, we see that $9n \equiv 1 \pmod{43}$. Thus, multiplying both sides of $9x \equiv 33 \pmod{43}$ by $n$ gives us $x$.



The integers $n$ and $m$ can be found by using the extended Euclidean algorithm.







$^\dagger$ This coprimality condition is if-and-only-if. An integer $x$ will not have a multiplicative inverse $(\text{mod} \ n)$ if $\gcd(x,n) \neq 1$.
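As a quick check of this recipe on the original problem (Python 3.8+ exposes the modular inverse directly through `pow`):

```python
# Python 3.8+: pow with exponent -1 computes the modular inverse directly
inv9 = pow(9, -1, 43)        # 24, since 9 * 24 = 216 = 5*43 + 1
x = inv9 * 33 % 43
print(inv9, x)               # 24 18
assert 9 * x % 43 == 33
```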


trigonometry - Find the value $\tan^{-1}\left(\frac{1}{\sqrt2}\right) - \tan^{-1}\left(\frac{\sqrt{5 - 2{\sqrt6}}}{1+ \sqrt{6}}\right)$



The value of $$\tan^{-1}\left(\frac{1}{\sqrt2}\right) - \tan^{-1}\left(\frac{\sqrt{5 - 2{\sqrt6}}}{1+ \sqrt{6}}\right)$$is equal to




  1. $\frac{\pi}{6}$

  2. $\frac{\pi}{4}$

  3. $\frac{\pi}{3}$

  4. $\frac{\pi}{12} $




$$\tan^{-1}\left(\frac{1}{\sqrt2}\right) - \tan^{-1}\left(\frac{\sqrt{3} -\sqrt{2}}{1+ \sqrt{6}}\right)$$
$$\tan^{-1}\left(\frac{1}{\sqrt2}\right) -\tan^{-1}{\sqrt3} + \tan^{-1} {\sqrt2} $$
$$\implies\frac{\pi}{2} -\frac{\pi}{3}=\frac{\pi}{6}$$



Another possibility is
$$\tan^{-1}\left(\frac{1}{\sqrt2}\right) +\tan^{-1}{\sqrt3} - \tan^{-1} {\sqrt2} $$
How to solve this ?


Answer



$$\dfrac{\sqrt{5-2\sqrt6}}{1+\sqrt6}=\dfrac{\sqrt3-\sqrt2}{1+\sqrt3\cdot\sqrt2}$$




$$\implies\arctan\dfrac{\sqrt{5-2\sqrt6}}{1+\sqrt6}=\arctan\sqrt3-\arctan\sqrt2$$



$$\arctan\sqrt3=\dfrac\pi3$$ and



$$\arctan\sqrt2=\text{arccot}\dfrac1{\sqrt2}=\dfrac\pi2-\arctan\dfrac1{\sqrt2}$$


Finding the area of an equilateral triangle using the Pythagorean theorem

From an equilateral triangle $T$ where each side has a length of $L$, what is the area of $T$?



According to the Wikipedia page of equilateral triangles, the area is $$A=\dfrac{\sqrt{3}}{4}L^2$$



I am trying to solve this problem by using the Pythagorean theorem, as explained in this question, I can split the triangle in half to try and get the height.



Using the Pythagorean theorem, $$L^2=(\dfrac{L}{2})^2 + H^2$$



I can then isolate $H$ with :




$$H=\sqrt{L^2-(\dfrac{L}{2})^2}$$



Using the $A=\dfrac{1}{2}bh$ formula. I could then conclude with :
$$A=\dfrac{L\sqrt{L^2-(\dfrac{L}{2})^2}}{2}$$



As said previously, the Wikipedia page shows something very different. What went wrong?
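For comparison, simplifying the radical shows the two expressions agree:

$$H=\sqrt{L^2-\left(\frac{L}{2}\right)^2}=\sqrt{\frac{3L^2}{4}}=\frac{\sqrt{3}}{2}L \qquad\Longrightarrow\qquad A=\frac{LH}{2}=\frac{\sqrt{3}}{4}L^2$$

So nothing actually went wrong: the two answers are the same, just written differently.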

calculus - Evaluate $\lim_{n\rightarrow \infty} \Gamma(n+\frac{1}{2})/ \left( \sqrt{2n\pi}\, \Gamma(n) \right)$ using Stirling's formula.



I am working on the limit
$$
\displaystyle\lim_{n\rightarrow \infty} \frac{\Gamma(n+\frac{1}{2})}{ \sqrt{2n\pi}\, \Gamma(n)}\,.
$$



I am thinking I may be able to use Stirling's formula, but they are slightly different, and I am having trouble relating the two. Any help is appreciated.



Stirling's formula says that the following ratio tends to $1$ as $n$ approaches infinity:




$$\Gamma(n) / ( \sqrt{2\pi} n^{n - \frac{1}{2}}e^{-n})$$



In particular, how do I relate $\Gamma(n)$ to $n^{n}$ and $e^{-n}$? I am not sure how to deal with those two terms.


Answer



You know that
$$
\lim_{n\to\infty} \frac{\Gamma(n)}{\sqrt{2\pi} n^{n - 1/2}e^{-n}} = 1.
\tag1
$$




It seems to me that this is how $\Gamma(n)$ relates to
$n^n$ and $e^{-n}$.
What you need to know, though, is whether this will help you
relate $\Gamma\left(n+\frac12\right)$ to $\Gamma(n)\sqrt{2n\pi},$
and if it does, how does it?



Notice what happens if we replace $n$ by $n+\frac12$ in Equation $(1).$
We get
$$

\lim_{n\to\infty} \frac{\Gamma\left(n+\frac12\right)}
{\sqrt{2\pi} \left(n+\frac12\right)^n e^{-(n+1/2)}} = 1. \tag2
$$



Let $f(n)$ be the left-hand side of $(1)$
and $g(n)$ be the left-hand side of $(2).$
What can you say about
$$
\lim_{n\to\infty} \frac{g(n)}{f(n)}?
$$
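A numerical sanity check of the original ratio (a sketch assuming `scipy`), for comparison with whatever value the hint above leads you to:

```python
import numpy as np
from scipy.special import gammaln

# evaluate Gamma(n + 1/2) / (sqrt(2 n pi) Gamma(n)) in log space to avoid overflow
for n in [10.0, 100.0, 1000.0, 10000.0]:
    print(n, np.exp(gammaln(n + 0.5) - gammaln(n)) / np.sqrt(2 * n * np.pi))
# the printed values settle near 1/sqrt(2 pi) ~ 0.3989
```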



sequences and series - Why does $1+2+\dots+2003=\dfrac{2004\cdot2003}2$?





Why does $1+2+\dots+2003=\dfrac{2004\cdot2003}2$?





Sorry if this is missing context; not really much to add...


Answer



$$\begin{array}{ccc}
S&=&1&+&2&+&3&+&\ldots&+&2001&+&2002&+&2003\\
S&=&2003&+&2002&+&2001&+&\ldots&+&3&+&2&+&1\\ \hline
2S&=&2004&+&2004&+&2004&+&\ldots&+&2004&+&2004&+&2004
\end{array}$$



There are $2003$ columns, so $2S=2003\cdot2004$, and therefore $S=\dfrac{2003\cdot2004}2$.



Friday 24 May 2013

algebra precalculus - Finding Inverse in modulus m



I've been learning the Euclidean algorithm and came across this simple problem.



$101^{-1} \pmod{203}$



So I attempted it as such:




$203 = 101(2) + 1$



So we got a gcd of 1, we can stop and do:



$1 = 203 - 101(2)$



And since it's mod 203, we have 101(2)



So shouldn't the answer be $2$?
My textbook says it's $201$,

help would be much appreciated, as this is confusing as ever.



Thanks.


Answer



Observe that



$$2\cdot101=202\equiv-1\pmod{203}\implies (101)^{-1}\equiv-2\equiv201\pmod{203}$$



Or using the EA:




$$203=2\cdot101+1\implies1\cdot203+\color{red}{(-2)}\cdot101=1\implies $$



$$\implies101^{-1}=-2\pmod{203}=201\pmod{203}$$


field theory - Extended Euclidean Algorithm in $GF(2^8)$?



I'm trying to understand how the S-boxes are produced in the AES algorithm. I know it starts by calculating the multiplicative inverse of each polynomial entry in $GF(2^8)$ using the extended euclidean algorithm. However I am having some trouble understanding how to perform the euclidean algorithm with polynomials in a field. Could someone please explain how to do this with a step by step example?


Answer



Using Rijndael's finite field, the reducing polynomial is $x^8+x^4+x^3+x+1$.




Suppose we want to compute the inverse of $x^5+1$ in this field. We want to solve the equation
$$
a(x^5+1)+b(x^8+x^4+x^3+x+1)=1
$$
I like to use the Euclid-Wallis Algorithm. Since we are dealing with polynomials, I will write things rotated by $90^\circ$.



$$
\begin{array}{c|ccc}
x^8+x^4+x^3+x+1&0&1&\\\hline

x^5+1&1&0\\\hline
x^4+x+1&x^3&1&x^3\\\hline
x^2+x+1&x^4+1&x&x\\\hline
\color{#C00000}{1}&\color{#C00000}{x^6+x^5+x^3+x^2+x}&\color{#C00000}{x^3+x^2+1}&x^2+x\\\hline
0&x^8+x^4+x^3+x+1&x^5+1&x^2+x+1
\end{array}
$$
The fifth row tells us that in $\mathbb{Z}_2[x]$
$$
(x^5+1)(\color{#C00000}{x^6+x^5+x^3+x^2+x})+(x^8+x^4+x^3+x+1)(\color{#C00000}{x^3+x^2+1})=\color{#C00000}{1}

$$
Thus, $x^6+x^5+x^3+x^2+x$ is the inverse of $x^5+1$ in $\left.\mathbb{Z}_2[x]\middle/(x^8+x^4+x^3+x+1)\mathbb{Z}_2[x]\right.$.
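Here is a compact Python sketch of the same computation, encoding polynomials as bit masks (bit $i$ is the coefficient of $x^i$); it reproduces the table above, and looping it over all nonzero bytes generates the inverses used as the first stage of the S-box:

```python
def poly_mul(a, b):
    # carry-less multiplication in GF(2)[x]
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def poly_divmod(a, b):
    # polynomial long division in GF(2)[x]: returns (quotient, remainder)
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf_inv(a, mod=0x11B):
    # extended Euclidean algorithm in GF(2)[x] modulo x^8+x^4+x^3+x+1 (0x11B);
    # a must be nonzero
    r0, r1, t0, t1 = mod, a, 0, 1
    while r1 != 1:
        q, r = poly_divmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, t0 ^ poly_mul(q, t1)
    return t1

print(hex(gf_inv(0b100001)))   # x^5 + 1  ->  0x6e = x^6+x^5+x^3+x^2+x
```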


math history - A proof for this series?



The summation: $$\sum_{i=1}^{n} i^2=n(n+1)(2n+1)/6$$ However, how could you prove this? All of the proofs I've seen already assume knowledge of the formula, but how do you prove this without first knowing the formula? What about formulas for higher powers, such as cubics and quartics? If possible, please keep this on a level where I can understand (I'm in calculus AB). Thanks!


Answer



Polya gives a beautiful discussion of this problem in Chapter 3 (called "Recursion") in volume 1 of his book "Mathematical Discovery." One way to "discover" the formula is to exploit the identity $$(n+1)^3=n^3+3n^2+3n+1$$ which implies $$(n+1)^3-n^3=3n^2+3n+1$$
If you sum all such equations from $n=1$ to $n=k$ you find the left side telescopes, leaving $$(k+1)^3-1=3S+3(1+2+\cdots+k)+k$$
where $S$ is the sum you wanted. Of course, the sum in parentheses is just $\frac{k(k+1)}{2}$, and solving the equation for $S$ after making that substitution gives the result.
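The same bookkeeping can be delegated to a computer algebra system; a sympy sketch of solving the telescoped identity for $S$:

```python
import sympy as sp

k, S = sp.symbols('k S')
# telescoped identity: (k+1)^3 - 1 = 3S + 3*(1 + 2 + ... + k) + k
eq = sp.Eq((k + 1)**3 - 1, 3*S + sp.Rational(3, 2)*k*(k + 1) + k)
print(sp.factor(sp.solve(eq, S)[0]))   # k*(k + 1)*(2*k + 1)/6
```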


probability - Intuitive explanation for $\mathbb{E}X= \int_0^\infty 1-F(x) \, dx$

I can see by manipulating the expression why $\mathbb{E}X$ works out to be $\int_0^\infty 1-F(x)\,dx$, where $F$ is the distribution function of $X$, but what is an intuitive explanation for why that is true? If at each point we sum the probability $\mathbb{P}(X>x)$, why should we end up with the expectation?




Thanks

sequences and series - How to show that the limit $\lim_{n \to \infty}\sin \frac{n \theta}{2}\sin \frac{(n+1) \theta}{2}$ doesn't exist?



I've been trying to show that $\sum_n \sin n\theta$ diverges (for most thetas at least), and I've come up with this expression for the partial sum (up to a multiplicative constant), and now I want to show that its limit doesn't exist:




$$ \lim_{n \to \infty}\sin \frac{n \theta}{2}\sin \frac{(n+1) \theta}{2}$$



But I don't know how to proceed with this. Both terms are divergent, but that doesn't mean their product necessarily diverges (though in this case it sure seems so). Is there a straightforward way to show this limit doesn't exist?



EDIT: I want to clarify that while I did originally set out to show the divergence of a series, that's not the aim of this question, which is how to rigorously show a limit doesn't exist. I can show that the limit doesn't equal $0$, but I want to learn how to show that it can't equal any other number as well.


Answer



Note that $\sin\alpha\sin\beta=\frac12(\cos(\alpha-\beta)-\cos(\alpha+\beta))$, hence
$$\sin\frac{n\theta}2\sin\frac{(n+1)\theta}2=\frac12\left(\cos\frac\theta2-\cos\bigl((n+\tfrac12)\theta\bigr)\right) $$
so unless $\theta$ is special ...


contest math - Putnam Series Question

I'm studying for the Putnam Exam and am a bit confused about how to go about solving this problem.




Sum the series
$$

\sum_{m = 1}^{\infty} \sum_{n = 1}^{\infty} \frac{m^2n}{3^m(n3^m + m3^n)}.
$$




I've tried "splitting" the expression to see if a geometric sum pops up but that didn't get me anywhere. I've also tried examining the first few terms of the series for the first few values of $m$ to see if an inductive pattern emerged but no luck there either.
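Numerics can at least tell you what to aim for: truncating the double sum (a Python sketch) gives a value extremely close to $\frac{9}{32}$, which suggests symmetrizing the summand in $m$ and $n$ so that the denominator cancels.

```python
# truncate the double sum; the terms decay geometrically, so small bounds suffice
total = sum(
    m * m * n / (3**m * (n * 3**m + m * 3**n))
    for m in range(1, 60)
    for n in range(1, 60)
)
print(total, 9 / 32)   # both ~0.28125
```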

Thursday 23 May 2013

calculus - Show that $\int_{0}^\infty \frac{1}{(1+x-u)(\pi^2+\ln^2(x))}dx=\frac{1}{u}+\frac{1}{\ln(1-u)}$

I am trying to show that $$I=\int_{0}^\infty \frac{1}{(1+x-u)(\pi^2+\ln^2(x))}dx=\frac{1}{u}+\frac{1}{\ln(1-u)}$$




My first thought was to try the substitution $m=\ln\frac1x$ (so $x=e^{-m}$), which led to
$$I=1+(u-1)\int_{-\infty}^\infty \frac{1}{(e^{-m}+(1-u))(\pi^2+m^2)}dm $$
But that unfortunately did not seem to go anywhere. I originally saw this integral in the Laplace Transform section of Jack D'Aurizio's notes, but I, for the life of me, could not seem to use the Laplace Transform to evaluate the integral. I'm also aware that this is very similar to the integral involving the Gregory coefficients, but I want to avoid those if I can. I've attempted differentiating under the integral but couldn't find any reasonable way of doing it. I've tried series, but that did not work either. Any push in the right direction would be much appreciated.

How to Solve This Exponential Limit without Derivatives / L'Hospital's Rule



Can someone teach me how I can solve this limit without using L'Hopital's Rule?



$$\lim_{x\to 0} \left( \frac{2+x^{2}}{2-x^{2}} \right)^{\frac{1}{x^2}}$$




Thanks in advance.


Answer



$$
\frac{2+x^2}{2-x^2}=1+\frac{2x^2}{2-x^2}=1+\frac{x^2}{1-\frac{x^2}{2}}
$$

and $y=\frac{x^2}{1-\frac{x^2}{2}}\to 0$, as $x\to 0$. Hence
$$
\left(1+\frac{x^2}{1-\frac{x^2}{2}}\right)^{\frac{1-\frac{x^2}{2}}{x^2}}=(1+y)^{1/y}\to e,
$$


as $x\to 0$.



Finally
$$
\left(\frac{2+x^2}{2-x^2}\right)^{1/x^2}=\left(\left(1+\frac{x^2}{1-\frac{x^2}{2}}\right)^{\frac{1-\frac{x^2}{2}}{x^2}}\right)^{\frac{1}{1-\frac{x^2}{2}}}\to e,
$$

since, if $f(x)\to e$ and $g(x)\to 1$, then $f(x)^{g(x)}\to e$.


Defining a probability on the natural numbers set

The standard definition of a probability on a set A is as a sigma-additive function from a sigma-algebra constructed on the power set of A to the extended real positive line (also called a measure), such that the measure of the whole set A is 1.




If A is the set of natural numbers, such a definition cannot be made in such a way that every set consisting of only one natural number has equal probability: if $\mu(\{n\}) = c > 0$ for every natural $n$ (where $\mu$ is the measure), then any infinite set would have infinite measure and, in particular, $\mu(A) \neq 1$.



But it is truly natural to think, for example, about the probability of certain events relating to the choice of a random natural number, such as the probability of getting an even number out of a random choice of a natural number, which would be 1/2. This kind of probability should satisfy the above condition of having probability 0 of getting any particular natural number.



I searched the forum and saw that many people work with probability on the natural numbers by relaxing the assumption of the probability being sigma-additive to just being additive. This seems to solve the problem, although I still didn't find an algebra and a probability function which would seem satisfying (but I guess it is not so hard).
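A standard example (not in the original post) is natural density: for $E\subseteq\mathbb N$ set
$$d(E)=\lim_{N\to\infty}\frac{\#\bigl(E\cap\{1,\dots,N\}\bigr)}{N},$$
defined on the collection of sets for which the limit exists. It is finitely additive where defined, gives every singleton density $0$ and the even numbers density $\tfrac12$, but it is not sigma-additive: the singletons form a countable partition of $\mathbb N$ with total density $0\neq1$.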



The question, then, is what exactly one misses when relaxing the assumption of sigma-additivity. Does a definition of probability in the subsets of natural numbers via a finitely additive set function lead to inconsistencies? What theorems about probability stop applying under such a definition?



Also, am I being too restrictive in my general definition of probability, and is it acceptable in modern mathematical culture to define probabilities via only finitely additive functions?

algebra precalculus - Why is $(5\sqrt{5}p-3\sqrt{5}q)(5\sqrt{5}p+3\sqrt{5}q) \equiv 5(5p-3q)(5p+3q)$?

I was working on the difference of two squares, $125p^2-45q^2$



Writing my answer, $$(5\sqrt{5}p-3\sqrt{5}q)(5\sqrt{5}p+3\sqrt{5}q),$$ onto Pearson, I got a popup that said my answer was equivalent to the correct answer but in incorrect form. Apparently, the correct answer is $$5(5p-3q)(5p+3q).$$




Why is that? I'm failing to see the intuition here.
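For what it's worth (a short explanation, not part of the original post): each bracket contains a common factor of $\sqrt5$, so
$$(5\sqrt{5}\,p-3\sqrt{5}\,q)(5\sqrt{5}\,p+3\sqrt{5}\,q)=\bigl(\sqrt5(5p-3q)\bigr)\bigl(\sqrt5(5p+3q)\bigr)=5(5p-3q)(5p+3q),$$
and graders typically prefer the fully factored form with integer coefficients and no radicals.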

improper integrals - Evaluating $\int_{-\infty}^{\infty}e^{-x^2}dx$ using polar coordinates.




I need to express the following improper integral as a double integral of $x$ and $y$ and then, using polar coordinates, evaluate it.



$$I=\int_{-\infty}^{\infty}e^{-x^2}dx$$



Plotting it, we find a Gaussian centered at $x=0$ whose domain extends to infinity on both sides. We can easily express it as a double integral:



$$I=\int_{0}^{1}\int_{-\infty}^{\infty}e^{-x^2}dxdy$$




Evaluating both using Wolfram Alpha gives $\sqrt{\pi}$, so it converges.



I know that $x=r\cos(\theta)$ and that $dxdy=rdrd\theta$, but substituting this in the above integral and evaluating $\theta$ from $0$ to $2\pi$ and $r$ from $0$ to $\infty$ doesn't yield the correct answer. What's wrong here?



Thanks a lot !


Answer



You could try:
$$I^2=\left ( \int_{-\infty}^{\infty}e^{-x^2}\mathrm{d}x \right ) \left( \int_{-\infty}^{\infty}e^{-y^2}\mathrm{d}y \right) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+y^2)}\mathrm{d}x\ \mathrm{d}y$$
then use the polar coordinates to compute the double integral.
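Carrying the polar computation out (a standard step, not spelled out in the original answer): with $x=r\cos\theta$, $y=r\sin\theta$ and $dx\,dy=r\,dr\,d\theta$,
$$I^2=\int_0^{2\pi}\!\!\int_0^\infty e^{-r^2}\,r\,dr\,d\theta=2\pi\left[-\frac{e^{-r^2}}{2}\right]_0^\infty=\pi,$$
so $I=\sqrt\pi$. The key point is the Jacobian factor $r$, which makes $e^{-r^2}r$ integrable in closed form.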



trigonometry - Trigonometric Identity involving $\sin^2$ and $\cos^2$



I was trying to prove this trigonometric identity, it looks like using the elementary relations should be enough, but I still can't find how:



$$\frac{1}{2}\sin^2 a \ \sin^2 b + \cos^2a \ \cos^2 b = \frac{1}{3} + \frac{2}{3} \biggl( \frac{3}{2}\cos^2 a - \frac{1}{2} \biggr)\biggl( \frac{3}{2}\cos^2 b - \frac{1}{2}\biggr)$$



Thank you!




(taken from Celestial Mechanics)


Answer



The left hand side is
$$\begin{align}\frac 12\sin^2 a\sin^2b+\cos^2 a\cos^2 b&=\frac 12(1-\cos^2 a)(1-\cos^2b)+\cos^2a\cos ^2b\\&=\frac 12-\frac 12\cos^2a-\frac12\cos ^2b+\frac 32\cos^2a\cos^2b.\end{align}$$



The right hand side is
$$\frac 13+\frac 23\left(\frac 32\cos^2a-\frac 12\right)\left(\frac 32\cos^2b-\frac 12\right)$$$$=\frac 13+\frac 23\left(\frac94\cos^2a\cos^2b-\frac{3}{4}\cos^2a-\frac 34\cos^2b+\frac 14\right)$$
$$=\frac 13+\frac 32\cos^2a\cos^2b-\frac 12\cos^2a-\frac12\cos^2b+\frac 16$$
$$=\frac 12-\frac 12\cos^2a-\frac12\cos ^2b+\frac 32\cos^2a\cos^2b.$$


Wednesday 22 May 2013

Why Open Interval In Formal Definition Of Limit At Infinity




The formal definition of limit at infinity usually starts with a statement requiring an open interval. An example from OSU is as follows:



Limit At [Negative] Infinity: Let $f$ be a function defined on some open interval $(a, \infty)$ [$(-\infty, a)$]. Then we say the limit of $f(x)$ as $x$ approaches [negative] infinity is $L$, and we write $\lim_{x\to[-]\infty} f(x) = L$ if for every number $\epsilon > 0$, there is a corresponding number $N$ such that $\left|f(x) - L\right| < \epsilon$ whenever $x > N\;[x < N]$.



But, I think it is also valid when the interval is half-open, as in the following: Let $f$ be a function defined on some half-open interval $[a, \infty)$ [$(-\infty, a]$]. Then we say the limit of $f(x)$ as $x$ approaches [negative] infinity is $L$, and we write $\lim_{x\to[-]\infty} f(x) = L$ if for every number $\epsilon > 0$, there is a corresponding number $N$ such that $\left|f(x) - L\right| < \epsilon$ whenever $x \geq N\;[x \leq N]$.



So, why does the formal definition of the limit at infinity not start with a statement requiring a half-open interval, which is more general?



Is it because people want to match it with the formal definition of limit? I understand that the formal definition of one-sided limit requires an open interval because that is necessary to define a limit at a point $a$. But no such requirement exists for the limit at infinity, so why is the more general half-open-interval version not used in the formal definition of the limit at infinity?



Answer



There is not really a difference between the two approaches: If $f$ is defined on the open interval $(-\infty,a)$, then we may as well consider the restriction to the closed interval $(-\infty,a-1]$, and similarly vice versa. The reason that the open interval may be preferred is that the limit requires $f$ to be defined on a topological neighbourhood of $\infty$. A neighbourhood of $\infty$ is a set that contains an open set containing $\infty$, and the basic open sets are open intervals. So the following definition might be considered "best", but I'm afraid it is way less intuitive for the learner:



Limit At Infinity: Let $f\colon A\to\mathbb R$ be a function where $A$ is a punctured neighbourhood of $\infty$ in the two-point compactification of $\mathbb R$. Then we say ...


probability - Expected value of the minimum of a non-negative random variable and a constant



$X$ is a non-negative random variable. Define $Y = \min(X, c)$ where $c$ is a constant. What is $E[Y]$?
I am modeling the constant as another random variable whose pdf is Dirac Delta function: $f_{c}(x) := \delta(x-c)$. The mean and variance of this "constant random variable"(!) comes out as c and 0, but does this approach have enough mathematical rigor?


Answer



Assume that $c > 0$; otherwise $\min(X,c) = c$, which is rather trivial. Then $Y := \min(X,c)$ is a nonnegative variable and we have




\begin{align*}
E[Y] &= \int_0^\infty P( Y > t) dt \\
&= \int_0^\infty P( X > t, c > t) dt \\
&= \int_0^\infty P( X > t)1\{ c > t\} dt \\
&= \int_0^c P(X > t)dt
\end{align*}
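As a quick sanity check (an example not in the original answer): if $X\sim\operatorname{Exp}(\lambda)$, then $P(X>t)=e^{-\lambda t}$ and
$$E[Y]=\int_0^c e^{-\lambda t}\,dt=\frac{1-e^{-\lambda c}}{\lambda},$$
which tends to $E[X]=1/\lambda$ as $c\to\infty$ and to $0$ as $c\to0$, as one would expect.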


derivatives - Continuous everywhere differentiable nowhere function



If I want to prove that a function is continuous everywhere but differentiable nowhere, then I prove that continuity holds by showing that the limit at every point equals the value of the function there (for example, if $f$ is defined everywhere and $\lim_{x\to a}f(x)=f(a)$ at every point $a$, then $f$ is continuous).



Then to prove that it is never differentiable, should I use the definition of the derivative and prove that the derivative never exists? How is that usually done?


Answer



A prominent example of such a function is the Weierstrass function. In general examining the difference quotients is a very good start for showing non-differentiability in such severe cases.




There is a well-written report on the Weierstrass function and on continuous everywhere, differentiable nowhere functions in general, in which a proof of these properties is also presented for the Weierstrass function in particular.






From a quick reading it seems that they also examine the difference quotient to infer a contradiction to the basic definition of differentiability.



As for the property of being continuous everywhere, the procedure depends pretty much on the function you're examining and the kind of analytic representation it has. As the Weierstrass function in particular is defined as a functional series, they use well-known correspondences between uniform convergence of functional sequences/series and continuity, as well as the so-called Weierstrass $M$-test (a test to determine whether a series of functions converges uniformly), to derive the respective results.
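For concreteness (a sketch of the continuity half, using only facts mentioned above): the Weierstrass function is
$$W(x)=\sum_{n=0}^\infty a^n\cos(b^n\pi x),\qquad 0<a<1,$$
and since $|a^n\cos(b^n\pi x)|\le a^n$ with $\sum a^n<\infty$, the Weierstrass $M$-test gives uniform convergence, so $W$ is continuous as a uniform limit of continuous functions. (Nowhere-differentiability additionally needs conditions on $b$; in Weierstrass's original argument, $b$ is a positive odd integer with $ab>1+\tfrac32\pi$.)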


probability distributions - Interchange of limit and integral for a positive random variable with finite moments.



Let $f$ be the pdf of a non-negative random variable $X$ with finite moments of all orders, i.e. $E[X^n]<+\infty$ for all $n\in \mathbb N$. May I interchange the limit with the integral and infer that $$\lim_{w\to \infty}\int_{0}^{+\infty}xf(x+w)\mathop{dx}=\int_{0}^{+\infty}x\cdot\lim_{w\to \infty}f(x+w)\mathop{dx}=0$$


Answer



To elaborate on @Did's comments, by a change of variables we see that
$$\mathbb E[(X-w)^+] = \int_0^\infty (x-w)^+f(x)\mathsf dx = \int_0^\infty xf(x+w)\mathsf dx. $$
Since $X\geqslant0$, $0\leqslant(X-w)^+\leqslant X$ for all $w\geqslant 0$, and as $X\in\mathcal L^1$, by dominated convergence we have that
$$\lim_{w\to\infty}\mathbb E[(X-w)^+] = \mathbb E\left[\lim_{w\to\infty}(X-w)^+\right].$$

Since $\mathbb E[X]<\infty$, $X$ is almost surely finite, so for almost every $\omega$ we have $(X(\omega)-w)^+=0$ as soon as $w>X(\omega)$, and hence $$\lim_{w\to\infty}(X-w)^+=0\quad\text{almost surely}.$$



It follows that



$$\lim_{w\to\infty}\int_0^\infty xf(x+w)\mathsf dx = \mathbb E\left[\lim_{w\to\infty}(X-w)^+\right] = 0.$$


real analysis - Alternatives to percentage for measuring relative difference?



If the value of something changes from $\;a\;$ to $\;b\;$, their relative difference can be expressed as a percentage:
$$
\newcommand{\upto}{\mathop\nearrow}
(D0) \;\;\; a \upto b \;=\; (b-a)/a \color{gray}{\times 100\%}
$$



(In this question $\;a\;$, $\;b\;$, and $\;c\;$ are positive (non-zero) real numbers throughout, and $\;a \upto b\;$ is a real number.)




However, percentages are not anti-symmetrical, i.e., the above definition does not satisfy
$$
(P0) \;\;\; a \upto b \;=\; -(b \upto a)
$$
This could be fixed by instead defining
$$
(D1) \;\;\; a \upto b \;=\; (b-a)/(b+a)
$$
(By the way, does this difference measure have a standard name?)




However, neither of the above definitions add up: they don't satisfy the more general property
$$
(P1) \;\;\; a \upto b \;+\; b \upto c \;=\; a \upto c
$$
After some attempts I discovered that
$$
(D2) \;\;\; a \upto b \;=\; \log_q (b/a)
$$
(for an arbitrary base $\;q > 1\;$) does satisfy $(P1)$. And it also satisfies the following other properties of percentage $(D0)$ and of $(D1)$ which seem desirable for any relative difference:

\begin{align}
(P2) \;\; & a \upto a \;=\; 0 \\
(P3) \;\; & \textit{sgn}(a \upto b) \;=\; \textit{sgn}(b - a) \\
(P4) \;\; & (s \times a) \upto (s \times b) \;=\; a \upto b \text{ for all $\;s > 0\;$} \\
(P5) \;\; & a \upto b \to M \text{ for } a \to 0 \text{ for some $\;M\;$ (which may be $\;\infty\;$)} \\
(P6) \;\; & a \upto b \to N \text{ for } b \to \infty \text{ for some $\;N\;$ (which may be $\;\infty\;$)} \\
\end{align}
(I mention $(P2)$ only for completeness; it follows of course from $(P0)$.)



My question: Apart from $(D2)$, what other definitions (if any) satisfy all of the above properties?



Answer



The general (real-valued) solution of axioms P1 and P2 is to take a function $F$ on the set $V$ of values that $a,b,\cdots$ could possibly assume, and define $a \upto b = F(b) - F(a)$.



If $F$ is injective, the converse of P2 is satisfied: $a \upto b = 0$ only for $a=b$.



If $F$ is increasing, P3 is satisfied.



If $F$ is continuous, P5 and P6 are satisfied. (Assume here that the set of values $V$ is an interval, or put the order topology on the set.)



This shows that there is a large family of solutions of the axioms, one for every increasing continuous function on $V$, if you do not assume homogeneity, P4. Homogeneity is a strong requirement that cuts down the space of continuous solutions to the logarithmic ones stated in the question.




Homogeneity of $a \upto b$ is the functional equation $F(sb) - F(sa) = F(b) - F(a)$. Solutions $F$ and $F+c$ are equivalent for constant $c$, and we can assume $F(1)=0$ by adjusting the constant. Taking $a=1$ this is $F(sb)=F(s)+F(b)$ and all continuous solutions (on intervals) are well-known to be multiples of the logarithm.
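Spelling out the last step (a standard reduction, not in the original answer): substituting $F(x)=g(\ln x)$ turns $F(sb)=F(s)+F(b)$ into Cauchy's functional equation
$$g(t+u)=g(t)+g(u),$$
whose continuous solutions are $g(t)=kt$; hence $F(x)=k\ln x$ and $a\upto b=k\ln(b/a)$, which is $(D2)$ up to the choice of base $q=e^{1/k}$.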



Assuming that you meant to work with open intervals of values, and $a \upto x$ to be an increasing continuous function of $x$, the not necessarily homogeneous solutions correspond to one parameter groups of homeomorphisms of the interval.


algebra precalculus - word problem - how to find the area from work done and days




A farmer planned to plough a field by doing 120 hectares a day. After two days of work he increased his daily productivity by 25% and he finished the job two days ahead of schedule.



a) What is the area of the field?



b) In how many days did the farmer get the job done?



c) In how many days did the farmer plan to get the job done?





The only thing I am able to work out from here is $25\%$ of $120$ hectares a day, which is $30$, so from day 3 onward he ploughs $150$ hectares a day and finishes the job two days earlier than planned, but I am not sure how to put this into algebra and work out the area.


Answer



If the number of expected days (ploughing 120 hectares a day) is $x$ you have $120x = A$ where $A$ is the total area.



Instead the farmer increased the workload after two days to $150$, as you stated.



In the first two days they ploughed $240$ hectares, so in this new set-up the total area can be seen as $240 + 150n = A$, where $n$ is the total number of days they plough $150$ hectares. You can see that $x - 2 = n + 2$, so $n = x-4$.



You now have $240 + 150(x-4) = 120x$ which you can rearrange to give $x = 12$. From here it follows that $A = 1440$, and $n = 8$. (Actual days spent ploughing $=10$)


calculus - Evaluate the sum $\sum^{\infty}_{n=1} \frac{n^2}{6^n}$

Evaluate the sum $\sum^{\infty}_{n=1} \frac{n^2}{6^n}$



My approach:



$= \frac{1}{6}+\frac{2^2}{6^2}+\frac{3^2}{6^3} +\cdots$




Now I am not getting any clue on how to solve this; please help. Thanks.
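A standard route (a sketch, not part of the original post): differentiating the geometric series twice gives, for $|x|<1$,
$$\sum_{n=1}^\infty n^2x^n=\frac{x(1+x)}{(1-x)^3},$$
and substituting $x=\frac16$ yields
$$\sum_{n=1}^\infty\frac{n^2}{6^n}=\frac{\frac16\cdot\frac76}{\left(\frac56\right)^3}=\frac{7/36}{125/216}=\frac{42}{125}.$$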

Tuesday 21 May 2013

Is this proof of $\sqrt{3}$ being an irrational number correct?



I would like to know if the following proof of $\sqrt{3}$ being a irrational number is correct.



For the sake of contradiction, assume $\sqrt{3}$ is a rational number. Therefore there exist numbers $a$ and $b$, both rational, for which $\sqrt{3} = \frac ab$.



$\therefore a = \sqrt{3} \times b$




Since $a$ is a rational number, it can be expressed as the product of two irrational numbers



$\therefore \frac {b\sqrt{3}}{\sqrt{2}} \sqrt{2} = a$



We know that$\sqrt{2}$ is an irrational number. We must show that $\frac {b\sqrt{3}}{\sqrt{2}}$ is also an irrational number.



Now, let's asume for the sake of contradiction that $\frac {b\sqrt{3}}{\sqrt{2}}$ is a rational number.



$\Rightarrow \exists\, c, d \in \mathbb{Q} : \frac {b\sqrt{3}}{\sqrt{2}} = \frac cd$




Which is the same as $\sqrt{3} = \frac {c\sqrt{2}}{db}$



Now, $\frac {c}{db}$ is a rational number. But $\sqrt{2}$ is irrational. We stated at the beginning that $\sqrt{3}$ is a rational number, but this contradicts $\sqrt{3} = \frac {c\sqrt{2}}{db}$, which says that $\sqrt{3}$ must be an irrational number.



Therefore we have shown by contradiction that $\sqrt{3}$ is an irrational number.



I hope the formatting and wording are OK. If I am mistaken, where did I make the mistake, and how should I avoid it in the future?


Answer




Since $a$ is a rational number, it can be expressed as the product of two irrational numbers





Actually this is true. Every number except $0$ can be expressed as a product of two irrational numbers. But where do you use this statement? At least not in the statement that follows.




$\therefore \frac {b\sqrt{3}}{\sqrt{2}} \sqrt{2} = a$




This follows simply by the fact that $\frac{\sqrt{2}}{\sqrt{2}}=1$.




If you state that a rational can be expressed as the product of two irrational numbers and you found two numbers that are not both irrational and their product is the given rational, then this is not a contradiction. You did not state that only the product of two irrationals can give this rational.



So no, that is no proof at all. Find a proof for the irrationality of $\sqrt{2}$ and try to construct an analogous proof for the irrationality of $\sqrt{3}$.






How can you avoid such mistakes?



In your proof replace $\sqrt{3}$ by $3$. Does your proof work for this number too? If so, then it is not a valid proof, because $3$ is not irrational. Use this to find out where your proof went wrong.




In the standard proof for the irrationality of $\sqrt{2}$ also replace $\sqrt{2}$ by $\sqrt{3}$, $3$, $\sqrt{9}$, $\sqrt[3]{3}$, $\pi$ and check if it works for these numbers.
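For reference, the analogous standard proof (a sketch, not part of the original answer): suppose $\sqrt3=\frac ab$ with $a,b$ integers having no common factor. Then $a^2=3b^2$, so $3\mid a^2$, and since $3$ is prime, $3\mid a$. Writing $a=3k$ gives $3b^2=9k^2$, i.e. $b^2=3k^2$, so $3\mid b$ as well, contradicting the choice of $a,b$ without common factor. As a sanity check of the kind suggested above: the same argument run on $\sqrt9$ stalls, as it must, since $a^2=9b^2$ gives $a=3k$ and then $k^2=b^2$, which is no contradiction.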


linear algebra - Do elementary row operations give a similar matrix transformation?



So we define two matrices $A,B$ to be similar if there exists an invertible square matrix $P$ such that $AP=PB$. I was wondering: if $A,B$ are related via elementary row operations (say, they are connected via some permutation of rows), are they necessarily similar?




Obviously swapping rows multiplies the determinant by $-1$ but I was thinking if we permute rows in pairs, would this allow us to construct a similarity transformation?


Answer



Every invertible matrix is equivalent via row operations to the identity matrix, and the identity matrix is only similar to itself.



This also gives a counterexample to the permutation question; the identity matrix is not similar to a non-identity permutation matrix.
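A concrete instance (not in the original answer, but immediate from it): the swap matrix
$$P=\begin{bmatrix}0&1\\1&0\end{bmatrix}$$
is obtained from $I_2$ by one row swap, yet it is not similar to $I_2$: similar matrices share trace and eigenvalues, and $\operatorname{tr}P=0\neq2=\operatorname{tr}I_2$ (indeed $P$ has eigenvalues $\pm1$).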


linear algebra - Need help in understanding how to find an elementary matrix



I read this chapter in my book and thought I understood it, but I don't. I tried working a problem to test my understanding and I just don't know how to get started.



Given the following matrices:



$A=\begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & 2 \\ -1 & 2 & 0
\\ \end{bmatrix}$

$B=\begin{bmatrix} -1 & 2 & 0 \\ 0 & 1 & 2 \\ 1 & 2 & -3
\\ \end{bmatrix}$



Find an elementary matrix $E$ such that $EA = B$



What I think I understand... a matrix is elementary when a single row operation turns it into the identity matrix $I_n$. I don't understand how this applies though. Please help!


Answer



The unique matrix that satisfies $EA = B$ is the matrix that "swaps" the
first and third rows. It is given as




$$E=\begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0
\\ \end{bmatrix}.$$



Edit:
Due to a question in the comments, here is a somewhat longer explanation.



(1) The rows of matrix $B$ and $A$ are the same, except for the fact that we have to swap
the first and the third row.



(2) $E$, defined above, is the special matrix that swaps the first and third rows of any $3 \times 3$ matrix $O$ when multiplied by it from the left. This can, for example, be seen by simple matrix multiplication




$$\begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0
\\ \end{bmatrix}\begin{bmatrix} p_1 & p_2 & p_3 \\ q_1 & q_2 & q_3 \\ r_1 & r_2 & r_3
\\ \end{bmatrix} = \begin{bmatrix} r_1 & r_2 & r_3 \\ q_1 & q_2 & q_3 \\ p_1 & p_2 & p_3
\\ \end{bmatrix}. $$



Hence, as a particular case, we also have $EA=B$. (Moreover, since $A$ and $B$ are non-singular matrices, the solution to the matrix equation $XA=B$ is unique: $X=BA^{-1}$,
calculating this we would again get $X=E$.)



(3) Elementary matrices (see definition here) differ from the identity matrix by one single elementary row operation. After swapping the first and third rows of $E$ (which is an elementary row operation) we arrive at the matrix




$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1
\\ \end{bmatrix},$$



which is exactly the identity matrix. Hence $E$ is an elementary matrix.


integration - Closed form for $\int_0^\infty \frac{x^n}{1+x^m}dx$



I've been looking at



$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$



It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:




$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$



$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$



$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$



So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{\pi}{\sin \pi x}$ appearing. Note that the arguments are always a ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.







UPDATE:



The integral reduces to finding



$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$



With $a =\dfrac{n+1}{m}$ which converges only if



$$0 < a < 1$$




Using series I find the solution is




$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$




Can this be put in terms of the digamma function or something of the sort?
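For what it's worth (a standard fact, added here as a pointer rather than part of the original question): the two-sided sum is the partial-fraction expansion of the cosecant,
$$\sum_{k=-\infty}^\infty\frac{(-1)^k}{a+k}=\frac{\pi}{\sin\pi a},\qquad 0<a<1,$$
which matches the $\Gamma(x)\Gamma(1-x)=\frac{\pi}{\sin\pi x}$ guess above and is consistent with the closed form derived in the answer below. The one-sided sum $\sum_{k\ge0}\frac{(-1)^k}{a+k}$ can separately be written as $\tfrac12\bigl(\psi\bigl(\tfrac{a+1}2\bigr)-\psi\bigl(\tfrac a2\bigr)\bigr)$ in terms of the digamma function.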


Answer



I would like to make a supplementary calculation on BR's answer.




Let us first assume that $0 < \mu < \nu$ so that the integral
$$ \int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx $$
converges absolutely. By the substitution $x = \tan^{2/\nu} \theta$, we have
$$ \frac{dx}{1+x^{\nu}} = \frac{2}{\nu} \tan^{(2/\nu)-1} \theta \; d\theta. $$
Thus
$$ \begin{align*}
\int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx
& = \int_{0}^{\frac{\pi}{2}} \frac{2}{\nu} \tan^{\frac{2\mu}{\nu}-1} \theta \; d\theta \\
& = \frac{1}{\nu} \beta \left( \frac{\mu}{\nu}, 1 - \frac{\mu}{\nu} \right) \\
& = \frac{1}{\nu} \Gamma \left( \frac{\mu}{\nu} \right) \Gamma \left( 1 - \frac{\mu}{\nu} \right) \\

& = \frac{\pi}{\nu} \csc \left( \frac{\pi \mu}{\nu} \right),
\end{align*} $$
where the last equality follows from Euler's reflection formula.


Monday 20 May 2013

trigonometry - How to prove $\cos{x}+\cos{y}+\cos{z}=1$

Question:




let $x,y,z\in \mathbb{R}$ be such that $x+y+z=\pi$ and
$$\tan{\dfrac{y+z-x}{4}}+\tan{\dfrac{x+z-y}{4}}+\tan{\dfrac{x+y-z}{4}}=1$$
show that
$$\cos{x}+\cos{y}+\cos{z}=1$$





My idea: let $$x+y-z=a,x+z-y=b,y+z-x=c$$
then
$$a+b+c=\pi$$
and
$$\tan{\dfrac{a}{4}}+\tan{\dfrac{b}{4}}+\tan{\dfrac{c}{4}}=1$$
we only need to prove
$$\cos{\dfrac{b+c}{2}}+\cos{\dfrac{a+c}{2}}+\cos{\dfrac{a+b}{2}}=1$$
Use
$$\cos{\dfrac{\pi-x}{2}}=\sin{\dfrac{x}{2}}$$
$$\Longleftrightarrow \sin{\dfrac{a}{2}}+\sin{\dfrac{b}{2}}+\sin{\dfrac{c}{2}}=1$$

let
$$\tan{\dfrac{a}{4}}=A,\quad\tan{\dfrac{b}{4}}=B,\quad\tan{\dfrac{c}{4}}=C$$
then
$$A+B+C=1$$
and use $$\sin{2x}=\dfrac{2\tan{x}}{1+\tan^2{x}}$$
so we only need to prove
$$\dfrac{2A}{1+A^2}+\dfrac{2B}{1+B^2}+\dfrac{2C}{1+C^2}=1$$



Another idea: let
$$\dfrac{y+z-x}{4}=a,\dfrac{x+z-y}{4}=b,\dfrac{x+y-z}{4}=c$$

then we have
$$a+b+c=\dfrac{\pi}{4},\tan{a}+\tan{b}+\tan{c}=1$$
we only need to prove
$$\cos(2(b+c))+\cos(2(a+c))+\cos(2(a+b))=\sin(2a)+\sin(2b)+\sin(2c)=1$$
but this feels very ugly; can someone help?



Thank you very much!

Help to prove/disprove a statement for infinite series

Is the following assertion true? Justify your answer.



Suppose that we have series $\sum_{k=p}^∞ a_k$ and $\sum_{k=p}^∞ b_k$. Suppose also that $a_k = b_k$ for all but finitely many k. Then $\sum_{k=p}^∞ a_k$ converges if and only if $\sum_{k=p}^∞ b_k$ converges.



I'm really struggling to either prove/disprove this statement. Could someone please get me on the right track of starting? Is it related to subsequences or am I looking into the wrong section of this topic?
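A hint (a sketch, not part of the original post): "all but finitely many" means there is some $K\ge p$ with $a_k=b_k$ for all $k\ge K$. Then for every $n\ge K$ the partial sums satisfy
$$\sum_{k=p}^n a_k-\sum_{k=p}^n b_k=\sum_{k=p}^{K-1}(a_k-b_k),$$
a fixed constant, so one sequence of partial sums converges if and only if the other does. No subsequence argument is needed.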

linear algebra - What is the structure of a matrix when all the eigenvalues are complex?



Rotation matrices $\begin{bmatrix}\cos{\theta} & -\sin{\theta}\\\sin{\theta} & \cos{\theta}\end{bmatrix}$ or matrices with the structure $\begin{bmatrix}\sigma & \omega\\-\omega & \sigma\end{bmatrix}$ or $\begin{bmatrix}\sigma & -\omega\\\omega & \sigma\end{bmatrix}$ will always have complex eigenvalues.



Is the converse true, i.e. will matrices with complex eigenvalues always assume a structure like $\begin{bmatrix}\sigma & \mp{\omega}\\\pm{\omega} & \sigma\end{bmatrix}$? If so, then should any even-order matrix $A$ with complex eigenvalues always be block diagonal, with each diagonal block of the form $\begin{bmatrix}\sigma & \mp{\omega}\\\pm{\omega} & \sigma\end{bmatrix}$?


Answer



I suppose that you assume that $A$ is real. If so then it has a real Schur form, that is, there exists a real orthogonal $Q$ and a block triangular $T$ such that
$$A=QTQ^T.$$
The matrices $T$ and $Q$ can be chosen such that $T$ has $1\times 1$ diagonal "blocks" corresponding to real eigenvalues and $2\times 2$ diagonal blocks of the form
$$

\begin{bmatrix}\sigma&\omega\\-\omega&\sigma\end{bmatrix}, \quad \omega\neq 0,
$$
corresponding to the conjugate pairs of complex eigenvalues $\sigma\pm i\omega$.



If, in addition, $A$ is normal ($AA^T=A^TA$), the matrix $T$ is block diagonal.



So, something like your claim is indeed true. It's just not that simple.
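In the smallest case this can be made very explicit (a sketch, not part of the original answer): if $A$ is a real $2\times2$ matrix with eigenvalue $\sigma+i\omega$, $\omega\neq0$, and eigenvector $v=p+iq$ (with $p,q\in\mathbb R^2$), then comparing real and imaginary parts of $Av=(\sigma+i\omega)v$ gives $Ap=\sigma p-\omega q$ and $Aq=\omega p+\sigma q$, so in the (necessarily independent) basis $\{p,q\}$,
$$A\sim\begin{bmatrix}\sigma&\omega\\-\omega&\sigma\end{bmatrix}.$$
The similarity here is by a real invertible matrix, though generally not an orthogonal one.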


set theory - Is the class of cardinals totally ordered?



In a Wikipedia article



http://en.wikipedia.org/wiki/Aleph_number#Aleph-one




I encountered the following sentence:



"If the axiom of choice (AC) is used, it can be proved that the class of cardinal numbers is totally ordered."



But isn't the class of ordinals totally ordered (in fact, well-ordered) without the axiom of choice? Being a subclass of the class of ordinals, isn't the class of cardinals obviously totally ordered?


Answer



A cardinal number is a non-negative integer or an $\aleph$ number, and yes, the cardinal numbers are totally ordered. By this definition, which is fairly standard as far as I know, a cardinal number is any ordinal number $\sigma$ from which no injective function exists to any ordinal number $\tau\subsetneqq\sigma$. Thus $\aleph_0=\omega_0$, $\aleph_1=\omega_1$, etc.



The cardinality of a set is a slightly more subtle concept: we say that the cardinality of a set A is less than or equal to that of a set B if there exists a one-to-one (or injective) function f from A to B. We express this mathematically as follows:

$$|A|\le|B|\leftrightarrow \exists f:A\xrightarrow{1-1}B$$



Clearly the existence of such a one-to-one function is a reflexive, transitive relation. By the Cantor-Bernstein theorem,



$$\left(\exists f:A\xrightarrow{1-1}B\right)\land\left(\exists g:B\xrightarrow{\rm 1-1}A\right)\rightarrow \left(\exists h:A\xrightarrow{\rm 1-1,\ onto}B\right).$$



Therefore if A and B are sets, such $|A|\le|B|$ and $|B|\le|A|$, then there is a one-to-one and onto function (called a bijection) from A to B. We say that $|A|=|B|$ in this case, and the class of all sets is partitioned into equivalence classes by this relation. The cardinality of a set is defined to be its equivalence class under this relation $|\cdot|=|\cdot|$. These equivalence classes are always partially ordered by the relation $|\cdot|\le|\cdot|$ on the underlying sets.



So far I have not assumed the axiom of choice: if we accept the axiom of choice, then the equivalence classes of cardinality are totally ordered, with one and only one cardinal number being an element of each class. Without the axiom of choice, the equivalence classes of cardinality need not be totally ordered; in fact, the statement that any two sets have comparable cardinalities is equivalent to the axiom of choice.




See proof by Asaf and my question The axiom of choice in terms of cardinality of sets.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...