Thursday 31 October 2013

calculus - Prove $\int\limits_1^\infty f(x) \, dx<\infty \iff \sum\limits_{i=1}^\infty f(i)<\infty$




Theorem: Let $f$ be a positive function, continuous and decreasing on the interval $[1,\infty)$. If



$$\int_1^\infty f(x)\,dx<\infty$$



then:




$$\sum_{i=1}^\infty f(i)<\infty$$



and conversely.




By the definition of $\displaystyle\int_1^\infty f(x)\,dx=\lim_{\Delta x\to 0} \sum_{i=1}^\infty f(x)\,\Delta x$



As the $\lim_{\Delta x}\Delta x=dx$ is constantly infinitesimally small, we conclude that if $\int\limits_1^\infty f(x) \, dx<\infty$, then $\sum\limits_{i=1}^\infty f(i)<\infty$ is true.



Questions:




Do you think this proof is right? Can I treat $\lim_{\Delta x}\Delta x$ as constant?


Answer



\begin{align}
& \int_1^\infty f(x)\,dx = \sum_{i=1}^\infty \int_i^{i+1} f(x)\,dx \le \sum_{i=1}^\infty \int_i^{i+1} f(i)\, dx = \sum_{i=1}^\infty f(i) <\infty. \\
& \text{This instance of “$\le$'' is true because $f$ is decreasing.} \\[20pt]
& \sum_{i=1}^\infty f(i) = f(1) + \sum_{i=2}^\infty f(i) = f(1) + \sum_{i=2}^\infty \int_{i-1}^i f(i)\,dx \le f(1) + \sum_{i=2}^\infty \int_{i-1}^i f(x)\,dx = f(1) + \int_1^\infty f(x)\,dx < \infty. \\
& \text{The “$\le$'' on this line is true because $f$ is decreasing.}
\end{align}


calculus - How to solve the limit $\lim\limits_{x\to \infty} (x \arctan x - \frac{x\pi}{2})$



Next week I have a math exam. While I was doing some exercises I came across this interesting limit:



$\lim\limits_{x\to \infty} (x \arctan x - \frac{x\pi}{2})$



After struggling a lot, I decided to calculate this limit using my calculator. The answer turns out to be $-1$. The problem is that I don't know how to calculate this limit without a calculator. I tried using L'Hôpital's rule after converting the expression to a fraction. My steps:



$\lim\limits_{x\to \infty} (x \arctan x - \frac{x\pi}{2}) = \lim\limits_{x\to \infty} \frac{2x^2\arctan x - x^2\pi}{2x} \stackrel{(H)}{=} \lim\limits_{x\to \infty} \frac{4x\arctan x - \frac{2}{x^2+1}-2\pi x+2}{2} = \lim\limits_{x\to \infty} \frac{4x^2\arctan x - \frac{2x}{x^2+1}-2\pi x^2+2x}{2x} \stackrel{(H)}{=} \lim\limits_{x\to \infty} \frac{8x\arctan x - \frac{2x^2+6}{(x^2+1)^2}-4\pi x+6}{2} = \dots$




This keeps going on without an end, I also don't see where I can simplify the expression when using L'Hôpital's rule. Am I missing a step or am I using the wrong method? What method can be used instead?


Answer



Observe
\begin{align}
\lim_{x\rightarrow \infty} x\arctan x-x\frac{\pi}{2}=\lim_{x\rightarrow\infty}\frac{\arctan x-\frac{\pi}{2}}{x^{-1}} = \lim_{x\rightarrow \infty} \frac{\frac{1}{1+x^2}}{-x^{-2}}=-1
\end{align}


reference request - Inequality book suggestion for basic level



I don't know if this is the right platform to ask this. (Forgive me if not.) I need to study inequalities and inequality-solving methods from a very basic level. I could not figure out under which subject (algebra or calculus) the chapter on inequalities falls, which made it very difficult to find the right book through a Google search. Please suggest some books on inequalities for elementary study.



$\boldsymbol{Very\ Basic\ Level}\Longrightarrow$ I am studying differential calculus for a postgraduate entrance exam. I need to find the domain and range of a given function. This is the topic where I need to get acquainted with the methods of solving an inequality for $x$ (say).



Here are a few problems$\Longrightarrow$


  1. $\frac{1+(\log_{a}x)^{2}}{1+\log_{a}x}>1$, where $0<a<1$


  2. $3\sin 2x>\sin x+\cos x+1$


  3. $\log_{\sqrt{2x^{2}-7x+6}}\left(\frac{x}{3}\right)>0$


  4. $\log_{3}\frac{|x^{2}-4x|}{x^{2}+|x-5|}\geq0$



Here I need to find the interval where $x$ may belong. So please suggest a book that helps in solving every kind of inequality: quadratic, transcendental, exponential, logarithmic,
etc.


Answer



You should try How to Learn Calculus of One Variable by J. D. Ghosh; it has exactly what you need. It contains very sharp methods of solving inequalities.


arithmetic - Converting a repeating decimal in base b to a fraction in the same base

The repeating decimal .36666... in base 8 can be written as a fraction in base 8.

I understand simple patterns such as 1/9 in base 10 being .1111..., so 1/7 in base 8 is .1111....
But I'm not too sure how to convert this decimal in this base to the fraction in the same base.
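One way to see it is the standard shift-and-subtract trick, sketched here with Python's `fractions` module (the code and the helper `to_base` are my illustration, not part of the question):

```python
from fractions import Fraction

# x = 0.3666..._8. Shifting by one and two base-8 places:
#   8x  =  3.666..._8
#   64x = 36.666..._8
# Subtracting, 56x = 33_8 (= 27 in base 10), so x = 33_8 / 70_8.
x = Fraction(int("36", 8) - int("3", 8), 64 - 8)
print(x)  # 27/56

def to_base(n, b):
    """Render a non-negative integer n in base b (for display only)."""
    digits = ""
    while n:
        digits = str(n % b) + digits
        n //= b
    return digits or "0"

print(to_base(x.numerator, 8), "/", to_base(x.denominator, 8))  # 33 / 70
```

So in base 8 the fraction is $33_8/70_8$.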

number theory - A positive integer is equal to the sum of digits of a multiple of itself.



Let $n$ be a positive integer, prove there is a positive integer $k$ so that $n$ is equal to the sum of digits of $nk$.




I'm not really sure how I should approach this problem, I tried to do a constructive approach but I got lost.



I tried just proving existence but that didn't work either.



I'm sorry if it doesn't look like I put work into this, but I feel like nothing I have done is going to yield any results. So I hope you guys can solve this problem.



Thank you very much in advance, regards.


Answer



There is a simple way to answer this sort of question. And there is a related type of number called Niven numbers.




A Niven number is a number, $N$, for which its sum of digits divides $N$. What you are trying to prove is that every integer is the sum of digits of a Niven number.



Here is how we can go about it. Every number can be written in the form: $$N = b_k 10^k + b_{k-1} 10^{k-1} + \cdots + b_0 10^0$$ for some $k$ and $0 \le b_i \le 9$.



Since $10^i$ can take at most $n$ values modulo $n$, we have $10^{i + j} \equiv 10^i \pmod n$ for some $j \ge 1$ and all sufficiently large $i$. In other words, the function $f(i) = 10^i \bmod n$ is eventually periodic with period $j$.



Thus if we consider $N = 10^{nj} + 10^{(n-1)j} + \cdots + 10^{j}$ we have $$N \equiv 10^{nj} + 10^{(n-1)j} + \cdots + 10^{j} \equiv (\underbrace{1+1+\cdots+1}_{n \text{ times}})10^{j} \equiv n10^{j} \equiv 0 \mod n.$$
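The construction can be checked numerically. The sketch below is my own code (not part of the answer): it brute-forces the pre-period $s$ and period $j$ of $10^i \bmod n$ and builds the witness $N = 10^{s} + 10^{s+j} + \cdots + 10^{s+(n-1)j}$:

```python
def digit_sum_witness(n):
    """Return k such that the digit sum of n*k equals n (assumes n >= 1)."""
    # Brute-force the pre-period s and period j of 10^i mod n.
    seen, i, r = {}, 0, 1 % n
    while r not in seen:
        seen[r] = i
        i, r = i + 1, (r * 10) % n
    s, j = seen[r], i - seen[r]
    # N has n digits equal to 1, at exponents s, s+j, ..., s+(n-1)j,
    # all congruent to 10^s mod n, so N ≡ n * 10^s ≡ 0 (mod n).
    N = sum(10 ** (s + t * j) for t in range(n))
    assert N % n == 0 and sum(int(d) for d in str(N)) == n
    return N // n

print(digit_sum_witness(3))  # 37, since 3 * 37 = 111 has digit sum 3
```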



Ivan Niven introduced Niven numbers as a possible avenue of research accessible to undergraduates. His paper appeared in a journal for undergraduate education. These numbers also go under the name of Harshad numbers, but I believe Niven numbers are more common. The Wiki page for "Harshad" numbers contains many references to papers concerning Niven numbers.



linear algebra - Minimal and Characteristic Polynomial




Find the minimal and characteristic polynomials

of
$$A = \begin{bmatrix}\lambda & 1&0&\cdots&0\\0 & \lambda&1&\cdots&0\\ \vdots& &\ddots&\ddots&\vdots\\0&\cdots& &\lambda&1\\0&\cdots&\cdots&0&\lambda\end{bmatrix}$$




The characteristic polynomial is $(\lambda-x)^n$.
But how to get the minimal polynomial? The minimal polynomial should be the monic polynomial $\phi$ of least degree satisfying $\phi(A)=0$.


Answer



The characteristic polynomial and the minimal polynomial are equal.



Set $0 \leq k \leq n-1. $




$(1)$ Note that the column $A^ke_n$, where $e_n$ is the last vector of the standard basis, has a non-zero entry in its $(n-k)$-th row and zeroes above it. This can be proved by induction.



$(2)$ Hence $A^ke_n$ is not a linear combination of $A^0e_n, Ae_n,\dots,A^{k-1}e_n$.



$(3)$ $m_A\mid c_A \implies \deg m_A \leq \deg c_A.$



Let $m_A(X) = \sum_{i=0}^da_iX^i$, where $a_d=1$.



So $m_A(A) = \sum_{i=0}^da_iA^i = 0$, hence $\sum_{i=0}^da_iA^ie_n = 0.$




Rearranging: $\sum_{i=0}^{d-1}-a_iA^ie_n = A^de_n.$



By $(2)$, it cannot be that $d\leq n-1$, hence $\deg m_A=d \geq n.$



$c_A$ and $m_A$ are monic polynomials, so $\deg m_A=n= \deg c_A$



$$\implies m_A=c_A$$


Wednesday 30 October 2013

probability - Why is Wiener measure a Gaussian measure?

This is silly and trivial so let me be really clear with my definitions and where exactly I got stuck.




The space $(C[0,T],\|\cdot\|_\infty,\sigma(\|\cdot\|_\infty),\gamma)$ is called classical Wiener space where $\gamma$ is Wiener measure.




I define Wiener measure as follows:





Wiener measure is the Kolmogorov extension of the finite dimensional distributions of the Wiener process.




And Gaussian measure on Banach space:




Let $\mathcal B$ be a Banach space with dual $\mathcal B^\ast$. A Borel measure $\gamma$ is called Gaussian iff for all $\ell\in \mathcal B^\ast$ the pushforward measure $\ell^\ast \gamma$ is a Gaussian measure on $\Bbb R$ where $\ell^\ast \gamma (A)=\gamma(\ell^{-1}(A))$ is the pushforward measure.




So I know the dual space of $C[0,T]$ is $\operatorname{RCA}([0,T])$ (by Riesz-Markov-Kakutani theorem) the space of all complex signed Radon measures with finite total variation. So we have to check that $\mu^\ast(\gamma)$ is a Gaussian measure on $\Bbb R$ for all $\mu\in \operatorname{RCA}([0,T])$.




If $\mu$ is a finite linear combination of $\delta$s we are fine as $\delta_{t}^{-1}(A)$ are all the paths that pass through the set $A$ at time $t\in [0,T]$. Then $\gamma$ of this is Gaussian by definition. Linearity is not too bad after this.



Then we have to extend to all measures and I am not sure how to do this. I know $\delta$s should be dense in $\operatorname{RCA}([0,T])$ and Wiener measure is the Kolmogorov extension of the finite dimensions but I'm not sure exactly what the argument should be.

arithmetic - What is the name of this summation formula?

So recently I derived a formula (obviously not the first; it already existed, but that is what got me into summations) that quickly adds all the numbers from $1$ to $n$. However, I recently derived another formula (also not the first, I am guessing) that adds all the numbers from any starting number (not just $1$) to $n$ (i.e. $14+15+16+17$).




where $i$ = starting number and $n$ = ending number:



$$\sum_{k=i}^{n} k = \left ( n-i+1 \right )\cdot\frac{n+i}{2}$$



What I want to know is what is this formula called? Mine is very complicated looking as well so is there a more compact way?
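(This is the classical formula for the sum of an arithmetic series with common difference $1$. A quick numerical check of it, with a function name of my own choosing:)

```python
def arith_sum(i, n):
    """Sum of the integers from i to n inclusive: (n - i + 1)(n + i)/2."""
    # (n - i + 1) and (n + i) always have opposite parity, so their
    # product is even and integer division is exact.
    return (n - i + 1) * (n + i) // 2

print(arith_sum(14, 17))  # 62, i.e. 14 + 15 + 16 + 17
```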

linear algebra - elementary row operations effects




I am trying to look at the effect that elementary row operations have on a given matrix (with the notion that elementary row operations replace rows by linear combinations of the rows):




  • $Rspan(A)$ is not changed

  • $Cspan(A)$ is changed $\rightarrow$ $Im(T_A)$ is changed

  • $Rank(A)$ is not changed

  • $Ker(T_A)$ is not changed



and in particular, if elementary row operations do affect the $Cspan$, why don't they change the solution set?

(Update: I was not doing elementary row operations on the column $b$, that is why the solution set is not changed)



Is that right?


Answer



Everything you've said is correct.



You ask "why, if it affects the column span, does it not change the solution set?"



Well, if you write a row op as left-multiplication by some elementary matrix $R$, then the solutions to
$$

(RA)x = b
$$
are indeed quite different from those for
$$
Ax = b,
$$
so row operations do change the solution set. The good news is that a slight variation of this is still useful. If you want to solve
$$
Ax = b
$$

you can instead solve
$$
(RA)x = Rb
$$
i.e., if you do the same row op to $A$ and to the target vector $b$, the solution set remains unchanged. If you do lots of row ops to make $A$ become diagonal, or upper triangular, the system may then be easy to solve. (Indeed, this is sometimes called something like "augmented Gaussian elimination", because you stick the column vector b on the right hand side of the matrix A, and perform row ops on the whole mess.)



In the special case $b = 0$, the row ops have no effect on $b$, and hence solving
$$
Ax = 0
$$

and
$$
(RA)x = 0
$$
give the same results, which is what you've observed in writing "$Ker(T_A)$ is not changed."
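A small numerical illustration of this point (the matrices are my own example, not from the answer): applying a row operation $R$ to $A$ alone changes the solution, while applying it to both $A$ and $b$ does not.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
R = np.array([[1.0, 0.0], [-0.5, 1.0]])  # row op: subtract half of row 1 from row 2

x = np.linalg.solve(A, b)               # solution of Ax = b
x_same = np.linalg.solve(R @ A, R @ b)  # same row op applied to both sides
x_diff = np.linalg.solve(R @ A, b)      # row op applied to A only

assert np.allclose(x, x_same)           # solution set unchanged
assert not np.allclose(x, x_diff)       # solution set changed
```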


integration - How to solve this double integral involving trig substitution (using tangent function)?



This is a question I came across and I cannot find the answer.




By using a substitution involving the tangent function, show that $$\int_0^1\int_0^1\frac{x^2-y^2}{(x^2+y^2)^2}\,dy\,dx=\frac{\pi}{4}$$




My attempt




I use trig substitution, by saying
$$\tan(\theta)=\frac{y}{x}$$ which means
$$x\sec^2(\theta)\,d\theta=dy$$
Also, it should be noted that because of this
$$x\sec(\theta)=\sqrt{x^2+y^2}$$
$$x^4\sec^4(\theta)=(x^2+y^2)^2
$$
Thus, when I substitute this information into the integral, I get
$$\int_0^1\int_{\theta=0}^{\arctan(\frac{1}{x})} \frac{x^2-(x^2\tan^2(\theta))}{x^4\sec^4(\theta)}{x\sec^2(\theta)} \, d\theta \,dx$$
Then, this simplifies to

$$\int_0^1\int_{\theta=0}^{\arctan(\frac{1}{x})} \frac{x^2(1-\tan^2(\theta))}{x^4\sec^4(\theta)}{x\sec^2(\theta)}\,d\theta \,dx$$
$$\int_0^1\int_{\theta=0}^{\arctan(\frac{1}{x})} \frac{x^2\sec^2(\theta)}{x^4\sec^4(\theta)}{x\sec^2(\theta)}\,d\theta \,dx=\int_0^1\int_{\theta=0}^{\arctan(\frac{1}{x})}{\frac{1}{x}}\,d\theta \,dx$$ which leads to
$$\int_0^1 \left[\frac \theta x \right]_0^{\arctan\left(\frac{1}{x}\right)} \,dx = \int_0^1 \frac{\arctan\left(\frac{1}{x}\right)}{x} \, dx \tag{3} $$
At this point I am stuck. How do I evaluate this integral? Am I on the right path? Wolfram Alpha gives an answer other than $\frac{\pi}{4}$ for $(3)$, so I am not sure where I am wrong.


Answer



Let us start considering $$I=\int\frac{x^2-y^2}{(x^2+y^2)^2}dy$$ Defining $$y=x\tan(\theta)\implies dy=x \sec ^2(\theta )\implies I=\int \frac{\cos (2 \theta )}{x}\,d\theta=\frac{\sin (2 \theta )}{2 x}$$ Back to $x$ $$I=\frac{y}{x^2+y^2}$$ So, $$\int_0^1\frac{x^2-y^2}{(x^2+y^2)^2}dy=\frac 1 {1+x^2}$$ So, you are left with $$\int_0^1\frac {dx} {1+x^2}$$ Repeat the same change of variable.
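A quick symbolic check of the key antiderivative in SymPy (my verification, not part of the answer):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# The answer's back-substituted antiderivative: d/dy [ y/(x^2+y^2) ]
# should equal the integrand (x^2 - y^2)/(x^2 + y^2)^2.
F = y / (x**2 + y**2)
integrand = (x**2 - y**2) / (x**2 + y**2)**2
assert sp.simplify(sp.diff(F, y) - integrand) == 0

# Hence the inner integral over y in [0, 1] is 1/(1+x^2),
# and the outer integral gives pi/4.
assert sp.integrate(1 / (1 + x**2), (x, 0, 1)) == sp.pi / 4
```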


complex numbers - prove the following equation about the inverse of tan in logarithmic form

$$\arctan(z)=\frac1{2i}\log\left(\frac{1+iz}{1-iz}\right)$$



I have tried, but my answer doesn't match the equation. The componendo-dividendo property might have been used. For reference,
$$\arcsin(z)=\frac1i\log\left(iz+\sqrt{1-z^2}\right)$$
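For intuition, the identity can be checked numerically with Python's `cmath` at a few points away from the branch cuts (a sanity check of mine, not a proof):

```python
import cmath

def arctan_log(z):
    # (1/(2i)) * log((1 + iz)/(1 - iz)), principal branch of log
    return cmath.log((1 + 1j * z) / (1 - 1j * z)) / 2j

# Sample points chosen off the branch cuts (the imaginary axis with |Im z| >= 1).
for z in [0.3, -1.2 + 0.5j, 0.8j, -0.7 - 0.1j]:
    assert abs(cmath.atan(z) - arctan_log(z)) < 1e-12
```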

Tuesday 29 October 2013

combinatorics - Combinatorial identity using combinatorial argument: $\left[\sum\limits_{k=0}^{n} \binom nk\right]^2 = \sum\limits_{k=0}^{2n}\binom{2n}k$



Is it possible to prove the following identity using combinatorial argument :




$$\left[\sum _{k=0}^{n} {n \choose k}\right] ^ 2 = \sum_{k=0}^{2n}{2n \choose k}$$


Answer



Suppose that you have a group of $n$ mixed couples. The lefthand side is the number of ways to choose a set of $k$ men and a set of $\ell$ women for some $k,\ell\in\{0,\dots,n\}$; the righthand side is the number of ways to choose an arbitrary subset of the group. Clearly these are the same.
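(Both counts equal $2^n \cdot 2^n = 2^{2n}$, which a two-line numerical check confirms:)

```python
from math import comb

# Both sides of the identity count the 4**n subsets described above.
for n in range(10):
    lhs = sum(comb(n, k) for k in range(n + 1)) ** 2
    rhs = sum(comb(2 * n, k) for k in range(2 * n + 1))
    assert lhs == rhs == 4 ** n
```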


real analysis - Show that certain lim sup additivity implies convergence.



I'm trying to prove the following:



Let $\{a_n\}$ be a bounded sequence. If for every bounded sequence $\{b_n\}$ the following holds:



$$1) \limsup_{n \to \infty} (a_n + b_n) = \limsup_{n \to \infty} a_n + \limsup_{n \to \infty} b_n$$



Show that $\{a_n\}$ is convergent.




Outline of Proof:



Here's the outline of my proof. But there's a point I get stuck below. I will be using the following theorem:



2) Let $\{a_n\}$ be a bounded sequence, and let $L = \limsup_{n \to \infty} a_n$. If $\epsilon > 0$, then there exist infinitely many $n$ such that $a_n > L - \epsilon$, and there exists $N_1 \in \mathbb{N}$ such that if $n \ge N_1$ then $a_n < L + \epsilon$.



Since condition $1)$ holds for every bounded sequence $\{b_n\}$ we can assume it holds for a $\{b_n\}$ that's convergent. That means $\forall \epsilon > 0$ $\exists N_2 \in \mathbb{N}$ $\ni$ if $n \ge N_2$ then $L_b - \epsilon < b_n < L_b + \epsilon$ where $L_b = \lim_{n \to \infty} b_n$.



Let $\epsilon > 0$, $L_a = \limsup_{n \to \infty} a_n$, $L_b = \limsup_{n \to \infty} b_n$. Let N = $\max\{N_1, N_2\}$.




If $n \ge N$ we can easily show $a_n < L_a + \epsilon$.



This is the point where I'm stuck. For infinitely many $n$:



$$a_n + b_n > L_a + L_b - \epsilon/2$$
$$a_n > L_a + L_b - b_n - \epsilon/2$$



$\forall n \ge N$: $L_b - b_n > -\epsilon/2$.




But the only conclusion I can draw from the above is that for infinitely many $n$ (and not $\forall n \ge N$):



$$a_n > L_a - \epsilon$$


Answer



Prove that
$$
\limsup(-a_n) = -\liminf(a_n).
$$
Then with setting $b_n := -a_n$ you are done.


calculus - does any function have an indefinite integral



I am talking about the indefinite integration of a function, whatever its nature as far as continuity and differentiability are concerned. Can we integrate any function, irrespective of whether the result is in elementary form, in terms of special functions, or an infinite series? For example,



$$\int e^{x^2}\,dx=\int \left(1+\frac{x^2}{1!}+\frac{x^4}{2!}+\cdots\right)dx=x+\frac{x^3}{3}+\frac{x^5}{10}+\cdots$$




$$\int |x|dx=\int x dx$$ if $x \gt 0$ and $$\int |x|dx=\int -x dx$$ if $x \lt 0$



So is it true that any function has an indefinite integral?


Answer



No, of course not. If $f$ has an "indefinite integral" $F$, this means that $F' = f$, therefore your question is: "can any function be the derivative of some other function?" Well, it turns out that if $F$ is differentiable, then $F'$ must possess the intermediate value property. Therefore, every function that does not have this property does not admit an indefinite integral. For instance, the function defined by $f(x) = \begin{cases} 1, & x \in \Bbb Q \\ 0, & x \notin \Bbb Q \end{cases}$ does not have Darboux's property (another name for the intermediate value property). On the other hand, every continuous function has an indefinite integral.



This page has more of the information that you need. Notice that all the functions that do not admit an indefinite integral are somewhat pathological and artificial.


abstract algebra - Proving that $a^{25} bmod 65 = a bmod 65$?





Prove that for all $a \in \mathbb{Z}$ we have $$ a^{25} \bmod 65 = a
\bmod 65. $$




We have $65 = 5 \cdot 13$, where $5$ and $13$ are prime. So I wanted to compute the first expression by using the Chinese Remainder theorem. I have to find an $x$ which satisfies the system $$ \begin{cases} x \bmod 5 = a^{25} \bmod 5 \\ x \bmod 13 = a^{25} \bmod 13 \end{cases}. $$ But how can I solve this system when I don't know what $a$ is? I tried using Fermat's little theorem, but the above equation has to hold for all $a \in \mathbb{Z}$, not only for $a$ with $\gcd(a,p) = 1$.



So how can we solve this problem?


Answer




If $a=0$ then this is trivial, so assume $a\neq 0$.



$\mathbb{Z}_5=\mathbb{Z}/5\mathbb{Z}$ is a field, so $\mathbb{Z}_5^\times$, the group of invertible elements, is a group of $4$ elements.



In particular, we have $a^4\equiv 1\bmod 5$, so $a^{24}=(a^4)^6\equiv 1\bmod 5$, and $a^{25}\equiv a\bmod 5$.



Similarly, $\mathbb{Z}_{13}$ is a field, and its group of invertible elements has $12$ elements, so $a^{12}\equiv 1\bmod 13$, $a^{24}\equiv 1\bmod 13$, and thus $a^{25}\equiv a\bmod 13$.







Remark: You can avoid fields and use Fermat's little theorem directly:
$$a^5\equiv a\bmod 5.\tag{5.1}$$
Taking the $5$-th power, we obtain
$$a^{25}\equiv a^5\bmod 5\tag{5.2}$$
and putting $(5.1)$ and $(5.2)$ together,
$$a^{25}\equiv a\bmod 5.$$
Now for $13$:
$$a^{13}\equiv a\bmod{13}\tag{13.1}$$
Multiply by $a^{12}$:
$$a^{25}\equiv a^{13}\bmod{13}\tag{13.2}$$

put $(13.1)$ and $(13.2)$ together:
$$a^{25}\equiv a\bmod{13}.$$
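The congruence is also easy to confirm by brute force with Python's three-argument `pow` (a check of mine, not part of the answer):

```python
# a^25 ≡ a (mod 65) for every integer a: combine the two congruences
# mod 5 and mod 13 above via the Chinese Remainder Theorem, or just test
# a full residue system (and some negatives, for good measure).
assert all(pow(a, 25, 65) == a % 65 for a in range(-130, 130))
```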


Measure Theory question 4



If $f$ is a non-negative measurable function on $\mathbb R$ such that $\int f<\infty$, show that the set $\{x \mid f(x)>0\}$ can be written as a union of an ascending sequence of measurable sets of finite measure.



My attempt:



What I thought was setting $E_n=\{x|f(x)>\frac{1}{n}\}$. Do these sets have finite measure? Can I use $\int f<\infty$ to prove this? If so, how?




Hints and ideas are welcome.


Answer



$\int_{E_n}f \ge\dfrac{1}{n}\mu (E_n)$, so $\mu(E_n) \le n\int f < \infty$.


calculus - Tangent half-angle substitution for $\int_0^{2 \pi} \frac{1- \cos x}{3 + \cos x}\,dx$

I want to evaluate the following integral using the tangent half-angle substitution $t = \tan (\frac{x}{2})$: $$\int_0^{2 \pi} \frac{1- \cos x}{3 + \cos x} ~dx$$ However, making the substitution gives me $0$ for each of the limits of integration, which is obviously incorrect. I know that one way to solve this problem is to notice that, by symmetry, the equivalent integral 2$\int_0^{\pi} \frac{1- \cos x}{3 + \cos x} ~dx$ allows the subsitution to work. What are ways to make this substitution work without noticing this symmetry? I know that this general question has been asked on here before, but I am specifically interested in how I can make it work with this substitution.
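As a numerical sanity check (mine, not part of the question): writing the integrand as $-1+\frac{4}{3+\cos x}$ and using $\int_0^{2\pi}\frac{dx}{3+\cos x}=\frac{2\pi}{\sqrt{8}}$ gives the value $2\pi(\sqrt 2-1)$, and the symmetry claim can be verified with SciPy:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (1 - np.cos(x)) / (3 + np.cos(x))

full, _ = quad(f, 0, 2 * np.pi)
half, _ = quad(f, 0, np.pi)

assert abs(full - 2 * half) < 1e-10                     # the symmetry used in the post
assert abs(full - 2 * np.pi * (np.sqrt(2) - 1)) < 1e-8  # closed-form value
```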

euclidean geometry - How big is my pizza, if I know its slices' sizes?




I bought a box of frozen pizza: eight slices, baked and then frozen, stacked in a box. The packaging assured me that I was holding an 18-inch[-diameter] pizza. That got me thinking: how do I know they're not lying?



Assume (though this may not be true of the pizza) that we have a circle[1] $S=\partial D$ centered at a point $O$ and four chords (closed line segments properly embedded in $D$) that intersect in a single point $C$. (Possibly $C=O$.) You may assume also (because this seems reasonable for pizza) that the angles made between adjacent chords (in the cyclic order around $C$) are between $30^\circ$ and $60^\circ$ and that the distance between $C$ and $O$ is less than the distance between $C$ and $S$.



Is it possible to use the lengths of the segments from $C$ to $S$ and the angles between adjacent segments to find the diameter of $D$? (If not, then would it be possible if my "you may assume also" assumptions were tightened a little?) If so, how?



(Of course, it's possible to find the diameter by measuring the arclengths of the curved parts of the slices of pizza, adding them up, and dividing by $\pi$. But I'm wondering if there's a way to do it from the sidelengths and tip-of-the-slice angles of the pizza slices.)







[1] A geometric circle, meaning the locus of points a certain distance from $O$, not just a topological circle.


Answer



Let a pizza triangle be the convex envelope of the vertices of a pizza slice. The area of any pizza triangle is a bit smaller than the area of its pizza slice, but not by much (especially given the condition on the angles). It is easy to compute the area of a pizza triangle through the sine theorem, so a simple criterion is given by computing the sum of the areas of the pizza triangles and comparing it with the area of a regular octagon inscribed in a circle with diameter $18$ inches.



Anyway, given two opposite pizza slices it is not difficult to compute the radius of the disk from which they have been cut, since two opposite pizza slices give a cyclic quadrilateral for which it is not difficult to compute the side lengths and the area given $a,b,c,d,\theta$:



[figure: cyclic quadrilateral formed by two opposite slices, with sides $l_1,l_2,l_3,l_4$ and diagonals split by $C$ into segments $a,c$ and $b,d$ meeting at angle $\theta$]



hence the circumradius is provided by Parameshvara's formula:




$$ R = \frac{1}{4\Delta}\sqrt{(l_1 l_3+l_2 l_4)(l_1 l_2 + l_3 l_4)(l_1 l_4+l_2 l_3)} $$
where $l_1,l_2,l_3,l_4$ are the side lengths of the cyclic quadrilateral depicted above: they can be computed through the cosine theorem. Also notice that Ptolemy's theorem gives: $$l_1 l_3+l_2 l_4=(a+c)(b+d).$$



Another possible approach is the following. We have:
$$\text{pow}_\Gamma(C) = ac = bd = R^2-OC^2, $$
so we just need to find $OC^2$. If we take $M$ and $N$ as the midpoints of the chords in the picture above, it is trivial that $OC$ is the diameter of the circumcircle of $CMN$, and we may compute the circumradius of $CMN$ through the sine theorem:
$$\frac{OC}{2}=\frac{MN}{2\sin\theta}$$
then the length of $MN$ through the cosine theorem, so that:
$$ OC^2 = \frac{1}{\sin^2\theta}\left(\left(\frac{a-c}{2}\right)^2+\left(\frac{b-d}{2}\right)^2-\frac{|a-c||b-d|}{2}\cos\theta\right) $$
and:





$$ R^2 = ac+\frac{1}{4\sin^2\theta}\left[(a-c)^2+(b-d)^2-2|a-c||b-d|\cos\theta\right].$$




If you do not know which couples of slices are "antipodal", well, they are not difficult to recognize: antipodal slices must have the same angle $\theta$ and fulfill $ac=bd$ (the intersecting chord theorem).


Monday 28 October 2013

calculus - Solving $\lim_{x\to 0}\frac{\ln(1+x)}x$ without De L'Hospital



Trying to solve this limit without derivatives I found this answer that is pretty straightforward and I can easily follow the flow. I can understand why ${u\to \infty}$ because:



$$\lim_{u\to\infty}(1 + \frac{1}u)^u = e $$



but how is $x\to 0$ related to $u\to \infty$, when we need the limit as $x$ approaches $0$?


Answer



When $u \rightarrow \infty$, $x = \frac 1u \rightarrow 0$, as needed.



calculus - directional derivatives and continuity

For a two-variable function $f(x,y)$:



I understood that the directional derivative at a point $(x_0,y_0)$ along the direction of $\vec u$ is the derivative of the 2D graph that we obtain on the plane perpendicular to the $xy$-plane, passing through the point $(x_0,y_0)$ and parallel to $\vec u$.



But for a 2D graph to be differentiable, it must be continuous, which means that if the directional derivative exists along some direction, then the function should be continuous in that direction.



Then how is this statement true: "a function may be discontinuous even if all directional derivatives exist"?



It would be really helpful if you could explain using my approach.

calculus - How does $u$-substitution work?




I'm in an introductory Calculus class and would really like to understand $u$-substitution. So far, I have been able to understand all the concepts but hit a brick wall here. I know that in $u$-substitution, you have $u\cdot\frac{du}{dx}$ and you set whatever $u$ is, well, equal to $u$. However, why is it that you then differentiate $u$ to get $\frac{du}{dx} = \text{something}$ and must solve for $dx$ and plug in? That's the part that got me lost. I understand what $u$ should be set to, but after that I'm unsure about what actually happens. Could someone guide me (simply) through the process and explain why $u$-substitution works like this?


Answer




Theorem 1: Derivative of composite functions (I.e. Chain-Rule):



When $y=f(u)$ is a differentiable function w.r.t $u$ and $u=g(x)$ is a differentiable function w.r.t $x$, then $y=f(g(x))$ is a differentiable function w.r.t $x$ and:



$\frac{dy}{dx} = \frac{dy}{du}\cdot\frac{du}{dx}~~~~$




(note, these are not fractions and do not necessarily follow the same properties of real numbers. It is by convenient notation and properties of derivatives that this is true. I.e., you should not have expected that you can cancel the du's on top and bottom.)



Equivalently, by relabling and using different notation you have:



$\frac{d}{dx}[f(g(x))] = f'(g(x))\cdot g'(x)$, or equivalently yet, that for $u=g(x)$ you have



$\frac{d}{dx}[f(u)] = \frac{d}{du}[f(u)]\frac{du}{dx}$




Proof:




Let $h(x)=f(g(x))$. We wish to prove then that $h'(c) = f'(g(c))g'(c)$. Assume for a moment that $g(x)\neq g(c)$ in the neighborhood of $c$ (to avoid division by zero errors).



By the definition of derivatives:



$$\begin{align} h'(c) &= \lim\limits_{x\to c} \frac{h(x)-h(c)}{x-c}\\
&= \lim\limits_{x\to c} \frac{f(g(x))-f(g(c))}{x-c}\\
&= \lim\limits_{x\to c} \frac{f(g(x))-f(g(c))}{x-c}\cdot\frac{g(x)-g(c)}{g(x)-g(c)}\\
&= \lim\limits_{x\to c} \frac{f(g(x))-f(g(c))}{g(x)-g(c)}\cdot\frac{g(x)-g(c)}{x-c}\\
&= \left[\lim\limits_{x\to c} \frac{f(g(x))-f(g(c))}{g(x)-g(c)}\right] \cdot \left[\lim\limits_{x\to c} \frac{g(x)-g(c)}{x-c}\right]\\

&= f'(g(c))g'(c)
\end{align}
$$



In a more complete proof, we may choose to be a bit more precise for the case where $g(x)=g(c)$ within the neighborhood of $c$. Either it will be constant in the neighborhood of $c$, or we can always pick a small enough neighborhood such that we avoid the issue entirely (else it would contradict the differentiability of $g$).




Theorem 2: Integration by substitution:



Let $g$ be a function whose range is an interval $I$, and let $f$ be a function continuous on $I$. If $g$ is differentiable on its domain and $F$ is an antiderivative of $f$ on $I$, then:




$\int f(g(x))g'(x)dx = F(g(x))+C$



By relabling, setting $u=g(x)$, then $du=g'(x)dx$ and:



$\int f(u)du = F(u)+C$




Proof:




Let $y=F(u)$ and $u=g(x)$. Then by theorem 1 (chain-rule) above we get:



$\frac{d}{dx}[F(g(x))] = F'(g(x))g'(x)$.



By definition of antiderivatives then, $\int F'(g(x))g'(x)dx = F(g(x))+C = F(u)+C$



Since $F$ is an antiderivative of $f$, the result follows.
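As a concrete instance of Theorem 2 (my example, with $f(u)=\cos u$ and $g(x)=x^2$), one can check symbolically that integrating $f(g(x))g'(x)$ directly agrees with integrating $f(u)$ and substituting back:

```python
import sympy as sp

x, u = sp.symbols('x u')

# Integrate f(g(x)) g'(x) directly, and via the substitution u = g(x) = x**2.
direct = sp.integrate(sp.cos(x**2) * 2 * x, x)       # antiderivative in x
via_sub = sp.integrate(sp.cos(u), u).subs(u, x**2)   # F(u) = sin(u), then u = x**2
assert sp.simplify(direct - via_sub) == 0            # both are sin(x**2) + C
```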


probability - Expected number of tosses before you see a repeat.

Suppose we roll a fair die until some face has appeared twice. For instance, we might have a run of rolls 12545 or 636. How many rolls on average would we make? What if we roll until a face has appeared three times?



I calculated the expected value for getting a repeat for a six-sided die and I got $1223/324$ ($\approx 3.77$ tosses). How would we generalize this to an $n$-sided die? What about $k$ repeats?
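For the first repeat, the general answer follows from $E[T]=\sum_{k\ge 0}P(T>k)$, where $P(T>k)=\prod_{j=1}^{k-1}\left(1-\frac jn\right)$ is the probability that the first $k$ rolls are all distinct. A short exact computation (my code, not from the question) reproduces the value above:

```python
from fractions import Fraction

def expected_rolls(n):
    """Expected rolls of a fair n-sided die until some face appears twice."""
    e, p = Fraction(0), Fraction(1)  # p = P(first k rolls are all distinct)
    for k in range(n + 1):           # P(T > k) = 0 once k > n
        e += p
        p *= Fraction(n - k, n)
    return e

print(expected_rolls(6))  # 1223/324, about 3.775
```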

elementary set theory - Show a bijection between sets

The question is: prove that there is a bijection between sets A and B for all $n_{1}, n_{2}\in \mathbb N_{> 0}$ and for all $k_{1}, k_{2}\in \mathbb{Z}$



$A = \left\{ {n_{1}q + k_{1}} \mid q\in \mathbb{Z}\right\}$,



$B = \left\{ {n_{2}q + k_{2}} \mid q\in \mathbb{Z}\right\}$



Any help to define the function between both sets is appreciated !!

Sunday 27 October 2013

real analysis - How can you prove that a function has no closed form integral?




I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:




  • addition/subtraction

  • multiplication/division

  • raising to powers and roots

  • trigonometric functions

  • exponential functions

  • logarithmic functions




, which when differentiated gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.



What sort of techniques are used to prove statements like this? What is this branch of mathematics called?






Merged with "How to prove that some functions don't have a primitive" by Ismael:




Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in terms of other simple functions.



I wonder how we can prove that kind of assertion?


Answer



It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, $g$ non-constant, the antiderivative



$$\int f(x)\exp(g(x)) \, \mathrm dx$$



can be expressed in terms of elementary functions if and only if there exists some rational function $h$ such that it is a solution to the differential equation:




$$f = h' + hg'$$



$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.



I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: http://www.sci.ccny.cuny.edu/~ksda/PostedPapers/liouv06.pdf



Liouville's original paper:




Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.





Michael Spivak's book on Calculus also has a section with a discussion of this.


geometry - Does apparent retrograde motion of planets begin and end at quadrature?

I've read it several places that the apparent retrograde motion of planets (during which they seem, as viewed from Earth, to move in the opposite sense of their normal "direct" orbital motion against background stars at infinite distance) occurs between the two quadrature points (at which the planet-Earth-Sun angle is 90°). I have always assumed that "between" is an approximation, since, at quadrature, though the Earth's motion is contributing nothing to the planet's apparent motion (since it is moving directly along the line of sight to the planet), the planet's true motion is providing apparent "direct motion", so that retrograde motion, though bounded by the quadrature points, does not begin or end there.







I've never been able to prove this to my satisfaction, and often come across descriptions that make me wonder whether I've had it wrong all along.



Some of these sources would seem to be quite authoritative, such as a translator's footnote to Copernicus, in which it is stated that the angular extent of a superior planet's retrograde motion observed from Earth is defined by the tangents to the Earth's orbit that pass through the planet:






While these tangents certainly bound the angular extent of retrograde motion — in fact, they define the retrograde (and direct) motion of an unmoving planet (since they correspond to the maximum parallax for the planet seen from Earth) — isn't the actual extent of retrograde motion smaller, for the reasons stated above?



How can it be demonstrated geometrically (assuming circular orbits with common centers and uniform angular velocities, and given periods and radii for those orbits for Earth and the planet) what the rate of change of the apparent angular position of an orbiting planet is as a function of the planet-Earth-Sun angle?

How to prove $T(n) = T\left(\frac n4\right) + T\left(\frac{3n}4\right) + n$ is $\mathcal O(n\log n)$ using induction?

How would you go about proving the recursion
$$T(n) = T\left(\frac n4\right) + T\left(\frac{3n}4\right) + n$$is $\mathcal O(n\log n)$ using induction?



Thanks!
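Not a proof, but an empirical sanity check can make the bound plausible before attempting the induction. The sketch below is hypothetical: it assumes a base case $T(n)=1$ for $n\le 1$ and uses integer division for the recursive arguments.

```python
from functools import lru_cache
from math import log2

# Hypothetical base case T(0) = T(1) = 1; floors used for the recursive arguments.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return T(n // 4) + T(3 * n // 4) + n

# If T(n) is O(n log n), the ratio T(n) / (n log2 n) should stay bounded.
for n in [2**10, 2**14, 2**18]:
    print(n, T(n) / (n * log2(n)))
```

The ratios settle toward a constant, consistent with the claimed bound; the induction itself would guess $T(n) \le c\,n\log n$ and verify the recurrence step, using $\frac n4 + \frac{3n}4 = n$.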

linear algebra - Given a Characteristic Polynomial of a Matrix...



This question contains three parts. I have already answered the first two. The last part is confusing me.



Suppose $A$ is a $4 \times 4$ matrix whose characteristic polynomial is $p(x) = (x - 1)(x + 2)^2(x - 3)$.



Part (a): Show that $A$ is invertible. Find the characteristic polynomial of $A^{-1}$.



We have that the roots of a characteristic polynomial are the eigenvalues of $A$. That is, $\lambda = -2, -2, 1, 3$ are our eigenvalues. The determinant of an $n \times n$ matrix is the product of its eigenvalues. Hence, det$A = 12$. An $n \times n$ matrix is invertible if and only if its determinant is nonzero. Therefore, $A$ is invertible.




Since none of the eigenvalues are zero, we have that $\lambda$ is an eigenvalue of $A$ if and only if $\frac{1}{\lambda}$ is an eigenvalue of $A^{-1}$. Then, the characteristic polynomial for $A^{-1}$ is $q(x) = (x - 1)(x + 1/2)^2(x - 1/3)$.



Part (b): Find the determinant and trace of $A$ and $A^{-1}$.



This is easy since the determinant of an $n \times n$ matrix is the product of its eigenvalues and the trace of an $n \times n$ matrix is the sum of its eigenvalues.



Part (c): Express $A^{-1}$ as a polynomial in $A$. Explain your answer.



Not really sure what part (c) is getting at.


Answer




By the Cayley-Hamilton theorem, we have $(A-I)(A+2I)^2(A-3I)=0$, that is,
$A^4-9A^2-4A+12I=0$.
Multiply both sides by $A^{-1}$, and be amazed!
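As a sanity check (a sketch, acting on the spectrum rather than on an explicit matrix), one can verify the resulting polynomial expression for $A^{-1}$ on each eigenvalue:

```python
from fractions import Fraction

# Eigenvalues of A, read off from p(x) = (x-1)(x+2)^2(x-3)
eigenvalues = [Fraction(1), Fraction(-2), Fraction(-2), Fraction(3)]

# Multiplying A^4 - 9A^2 - 4A + 12I = 0 by A^{-1} gives
#   A^{-1} = (1/12) * (-A^3 + 9A + 4I).
# On an eigenvector, A acts as its eigenvalue, so check the scalar identity:
for lam in eigenvalues:
    assert 1 / lam == (-lam**3 + 9 * lam + 4) / 12
print("A^{-1} = (1/12)(-A^3 + 9A + 4I) holds on the spectrum")
```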


negative binomial - Probability that 4 boxes are purchased?


The probability that a randomly selected box of a certain type of cereal has a particular prize is 0.2. Suppose you purchase box after box until you obtained two of these prizes. What is the probability that four boxes are purchased?




My approach was to use negative binomial distribution:




Let $X$ be the number of boxes purchased that don't have a prize until you find two prizes.



P(4 boxes purchased) = P(exactly 2 prize-less boxes before the 2nd prize):
$$\binom{x+r-1}{r-1}p^r(1-p)^x = \binom{2+2-1}{2-1}(0.2)^2(1-0.2)^2 = \binom{3}{1}(0.2)^2(0.8)^2 = 0.0768$$



I wanted to know if this is correct approach to solving this kind of probability question?
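A quick numeric check of the negative-binomial expression (a sketch using `math.comb`; the requirement is that the 4th box contains the 2nd prize):

```python
from math import comb

p = 0.2      # probability a box contains a prize
r = 2        # number of prizes wanted
x = 2        # prize-less boxes bought before the 2nd prize

# P(4 boxes) = C(x+r-1, r-1) * p^r * (1-p)^x
prob = comb(x + r - 1, r - 1) * p**r * (1 - p)**x
print(round(prob, 4))  # 0.0768
```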

Saturday 26 October 2013

algebra precalculus - Work Problem that deals with Number of Men, Days, Leaving



A project can be done by 70 men in 100 days. There were 80 men at the start of the project but after 50 days, 20 of them had to be transferred to another project. How long will it take the remaining workforce to complete the job?



The correct answer is 50.



Any hints on how to go about this? I have encountered work problems before with the general formula
$$\frac1A + \frac1B + \dots = \frac1T.$$




There's also problems with time involved:



$$t_A\left(\frac1A + \frac1B\right) + t_B\left(\frac1C + \frac1D\right) \dots = 1.$$



This problem incorporates people leaving and the remaining days, but I am not sure how to combine these concepts.


Answer



Think about the required amount of work in man-days. The project requires $70*100=7000$ man-days of total work. After $80*50=4000$ man-days of work, there are $7000-4000=3000$ man-days of work remaining, and there are $60$ remaining workers, so the project will take another $3000/60=50$ days.
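The man-days bookkeeping above, as a tiny script:

```python
total_work = 70 * 100        # man-days required for the whole project
work_done = 80 * 50          # man-days completed in the first 50 days
workers_left = 80 - 20

remaining_days = (total_work - work_done) / workers_left
print(remaining_days)  # 50.0
```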


real analysis - Function Spaces, why is the space of continuous functions (without necessarily differentiability) not important?



The space $C^0$ denotes the set of continuous and differentiable functions, the space $C^1$ the set of the continuous and differentiable functions which have a continuous and differentiable first derivative and so on.




But it is well known that there are functions which are continuous, but not differentiable. So why there are no spaces for them, I mean a space of the continuous functions, and a space for the functions which have just continuous derivative (not necessarily differentiable too)?


Answer



This post serves to elaborate on the comments made by copper.hat and Branimir Ćaćić above. In what follows, by a continuous function on a topological space, we shall mean an $ \mathbb{R} $- or $ \mathbb{C} $-valued continuous function on the space.






Part 1



The space of continuous functions is an interesting object of study. Indeed, we have a complete classification of commutative C*-algebras in terms of such spaces. This classification result is known as the Commutative Gelfand-Naimark Theorem, and its precise statement is as follows.





Theorem 1 (Gelfand-Naimark) Let $ \mathcal{A} $ be a commutative C*-algebra. If $ \mathcal{A} $ is unital (i.e. it has an identity element), then $ \mathcal{A} $ is isometrically *-isomorphic to $ C(X,\mathbb{C}) $ for some compact Hausdorff space $ X $. If $ \mathcal{A} $ is non-unital, then $ \mathcal{A} $ is isometrically *-isomorphic to $ {C_{0}}(X,\mathbb{C}) $ for some non-compact, locally compact Hausdorff space $ X $.




Hence, C*-algebras, which are abstract algebro-topological objects, can be concretely realized as algebras of continuous $ \mathbb{C} $-valued functions on locally compact Hausdorff spaces.



We also have the following result stating that compact Hausdorff spaces are classified, up to homeomorphism, by their ring of continuous $ \mathbb{R} $-valued functions.





Theorem 2 (Gelfand-Kolmogorov) If $ X $ and $ Y $ are compact Hausdorff spaces, then $ X $ and $ Y $ are homeomorphic if and only if $ C(X,\mathbb{R}) $ and $ C(Y,\mathbb{R}) $ are ring-isomorphic.




Yet another strong classification result is the Serre-Swan Theorem, which says that for compact Hausdorff spaces $ X $, there is an equivalence of categories between




  • the category of $ \mathbb{R} $- or $ \mathbb{C} $-vector bundles on $ X $ and


  • the category of finitely-generated projective modules over (resp.) $ C(X,\mathbb{R}) $ or $ C(X,\mathbb{C}) $.





Hopefully, the array of results presented here will imbue the reader with a deeper appreciation of the importance of spaces of continuous functions.






Part 2



Being important and being computationally useful are two different things, which is what the OP may have had in mind. This is especially so when we consider topological spaces that are equipped with a differential structure, which makes them differentiable manifolds. If we only use the space of continuous $ \mathbb{R} $-valued functions to study these manifolds, then we are doing ourselves a great disservice by failing to exploit their differential structure, which can be rich in topological information. Before giving a simple illustration, let us provide a definition.




Definition Let $ X $ be a topological space and $ \mathcal{A} $ an algebra of $ \mathbb{R} $-valued functions on $ X $. Given an $ x \in X $, we say that a mapping $ \delta: \mathcal{A} \to \mathbb{R} $ is a point-derivation on $ \mathcal{A} $ at $ x $ if $ \delta $ is an $ \mathbb{R} $-linear homomorphism and

$$
\forall f,g \in \mathcal{A}: \quad \delta(fg) = f(x) \cdot \delta(g) + \delta(f) \cdot g(x).
$$
The set of all point-derivations on $ \mathcal{A} $ at $ x $, which happens to form an $ \mathbb{R} $-vector space, is denoted by $ {\text{Der}_{x}}(\mathcal{A}) $.




Let $ X $ be any topological space. Then $ C(X,\mathbb{R}) $ is an algebra of $ \mathbb{R} $-valued functions on $ X $. We can therefore ask ourselves: What is $ {\text{Der}_{x}}(C(X,\mathbb{R})) $ for a given $ x \in X $? Well, it turns out that
$$
\forall x \in X: \quad {\text{Der}_{x}}(C(X,\mathbb{R})) = \{ 0_{C(X,\mathbb{R}) \to \mathbb{R}} \},
$$

which is just the trivial vector space! Not very interesting indeed. (Click here to see a proof.)



Suppose this time that $ X $ is a differentiable $ n $-dimensional manifold. Then $ {C^{1}}(X,\mathbb{R}) $ is also an algebra of $ \mathbb{R} $-valued functions on $ X $. However, $ {\text{Der}_{x}}({C^{1}}(X,\mathbb{R})) $ will be an $ n $-dimensional vector space for any $ x \in X $ (the previous link also contains an explanation of this).



Note: The point-derivations on $ {C^{1}}(X,\mathbb{R}) $ at $ x $ can be used to define the tangent space of $ X $ at $ x $.



In summary, for a differentiable manifold $ X $, the vector space of point-derivations on $ C(X,\mathbb{R}) $ at any point gives us no information whatsoever. However, by considering the algebra of differentiable functions on $ X $ instead, this very same vector space can tell us the dimension of the manifold, which is an important topological invariant.



Morse Theory takes this concept to a new level of sophistication by using smooth functions on a smooth manifold to extract even more topological information. As an example, let us take a look at the following result, which is one of the jewels of Morse Theory.





Theorem 3 Let $ M $ be a compact smooth $ n $-dimensional manifold, and suppose that there exists a smooth $ \mathbb{R} $-valued function on $ M $ having exactly two critical points, both of which are non-degenerate. Then $ M $ is homeomorphic to $ \mathbb{S}^{n} $.




This result is truly amazing, for it gives us a criterion by which the differential structure of a compact smooth manifold can actually determine its topology. One can also use smooth functions to construct a new homology theory for smooth manifolds, called Morse homology, which turns out to be isomorphic to singular homology.



Many deep theorems can be proven using Morse Theory, such as the Bott Periodicity Theorem for the homotopy groups of classical Lie groups and the existence of exotic smooth structures on $ \mathbb{S}^{7} $ (a result for which John Milnor was awarded the Fields Medal).







Conclusion: The continuous functions on a topological space are important, but if you are given a differential structure, then you might want to exploit this structure by studying the differentiable functions instead because they can provide you with valuable topological information.


What does the Fundamental Theorem of Algebra say about the number of complex zeros of a polynomial function?



I was watching the Khan Academy video on the Fundamental Theorem of Algebra when I got confused by something that Sal Khan states. From what I understand, the Theorem says that the complex zeros of a polynomial function always come in pairs because they are conjugates of each other. But exactly what would happen with a 3rd degree polynomial with no real zeros? Why can you not have 3 complex zeros with a 3rd degree polynomial but it is possible with a 4th degree polynomial?



Finally, what then would the zeros of a 3rd degree polynomial with no real zeros be?



This is the video:
https://www.khanacademy.org/math/algebra2/polynomial_and_rational/fundamental-theorem-of-algebra/v/fundamental-theorem-of-algebra-intro


Answer





Why can you not have $3$ complex zeros with a $3$rd degree polynomial, but it is possible with a $4$th degree polynomial?




Observation: You can, if the coefficients are themselves complex.



Hint: Odd exponents preserve the sign, and even ones suppress it. Notice the obvious difference between the graphs of cubic and quartic $($or even quadratic$)$ functions as $x\to\pm\infty$. It is clear from their plots that for every even-order polynomial there is a big-enough free term $($positive or negative$)$ such that the graph will no longer intersect the horizontal axis, meaning that our polynomial will have no roots. But this does not apply to odd-order polynomials, which always have at least one real root, since they always span from $-\infty$ to $+\infty$, or vice versa.
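To make the observation concrete, here is a hypothetical cubic with complex coefficients whose only root is non-real:

```python
# p(x) = (x - i)^3 = x^3 - 3i*x^2 - 3x + i has the triple root i and no real root;
# this is possible only because its coefficients are not all real.
i = 1j

def p(x):
    return x**3 - 3*i*x**2 - 3*x + i

assert p(i) == 0                                       # i is a root
assert all(abs(p(t)) > 0 for t in [-2, -1, 0, 1, 2])   # no root at these real samples
print("p has no real roots; its triple root is i")
```

For real coefficients, by contrast, the conjugate-root theorem forces nonreal roots into pairs, so a real cubic always retains at least one real root.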


real analysis - Prove that $f$ is Borel measurable.

Let $\mu$ be a finite Borel measure on $\mathbb R$, i.e. a finite measure on the Borel $\sigma$-algebra $S (\mathbb R)$,
and let $B$ be a Borel subset of $\mathbb R$. Define the function $f$ on $\mathbb R$ by $f (x) =\mu (B + x)$.



(a) Show that $f$ is Borel measurable.
(b) Show that $\int f (x)\, d\lambda (x) =\int f (x)\, dx =\mu(\mathbb{R})\lambda(B)$, where $\lambda$ denotes the Lebesgue measure



I am not able to prove that $f$ is Borel measurable. I tried to prove that $f^{-1}(a,\infty)$ is a Borel set but couldn't prove it.

elementary number theory - How to prove $5^n − 1$ is divisible by 4, for each integer n ≥ 0 by mathematical induction?




Definition of Divisibility
Let n and d be integers and d≠0
then $d \mid n \iff \exists$ an integer $k$ such that $n=dk$.







Source: Discrete Mathematics with Applications, Susanna S. Epp




Prove the following statement by mathematical induction.
$5^n − 1$ is divisible by 4, for each integer n ≥ 0.



My attempt:



Let the given statement p(n).



(1) $5^0 − 1=1-1=0$ is divisible by 4. So p(0) is true.




(2) Suppose that for some integer $k \ge 0$, p(k) is true; that is, $5^k − 1$ is divisible by 4 (inductive hypothesis).



Then we must show that p(k+1) is true.



$5^{k+1} − 1$ = $5\cdot 5^k − 1$



I can't develop this step further; I'm stuck here.
It should be something like $5\cdot(5^k − 1)$ so that the inductive hypothesis can be applied and p(k+1) shown to be true.


Answer



Hint:




$$5^{k+1}-1=5\times(5^k-1)+4$$
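The hint's identity, together with the divisibility it delivers, can be checked mechanically for small exponents:

```python
# 5^(k+1) - 1 = 5*(5^k - 1) + 4: both terms on the right are divisible by 4,
# which is exactly what the inductive step needs.
for k in range(20):
    assert 5**(k + 1) - 1 == 5 * (5**k - 1) + 4
    assert (5**k - 1) % 4 == 0
print("identity and divisibility verified for k = 0..19")
```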


Friday 25 October 2013

soft question - Philosophy of a Mathematician.



Introduction



I don't study Mathematics at university, and probably I don't have any chance of gaining a full understanding of what mathematics is in all its aspects.



But I love to find structures and links between ideas, and to ask myself philosophical questions about the foundations of mathematics. If I have understood well, this is what is happening with the "categorical" point of view: there is some kind of "great unification" of mathematics from a structural point of view.



Obviously I understand that for humans it is impossible to reach a total understanding of something, and that what moves me is a mirage. I must give up while I'm sane and study mathematics modestly, in a relaxed way: playing with my easy problems, reading books, and learning from the basics all I can, without excessive expectations maybe.




The more I understand, the more new interesting mathematical topics I discover... but the facts that overwhelm me are the acceleration of this process and the fact that there are a lot of young theories; this implies that the growth will become even faster.



Probably I, as an amateur mathematician, don't have a chance of arriving at new ideas in foundational research (or even of understanding the current point of view, since it is changing so fast)...






Questions




$\mathcal Q_1$ From your($*^1$) experience, how do mathematicians deal (or how do they usually choose to behave) with this huge mathematical universe and remain satisfied from a philosophical($*^2$) point of view?




$\mathcal Q_2$ How can a mathematician interested in the foundations of mathematics be satisfied with an always-partial knowledge of mathematics?







Personal observation and conclusion



Is it maybe possible that in 100 years the amount of mathematical knowledge will be so huge that even mathematicians will be totally overwhelmed by it? In other words, is it possible that it will be impossible for anyone to research new things because the knowledge required will take 80 or 90 years of hard study? I say this because I think that humans have a limit to their speed of learning.




If this is possible, is it the destiny of mathematicians to abandon the hope of understanding mathematics?



I remember a quote of John von Neumann:




"Young man, in mathematics you don't understand things. You just get used to them."




I have read many stories about his math skills, and how he was a genius... anyway, from my point of view, I feel very sad when I read this quote.







Notes



($*^1$) I ask this question HERE on SEMath because I'm looking for the human experiences of real mathematicians, which is very important for me at this moment, and I do not think it is fair that this question was closed in 40 seconds... it is a soft question, like many other questions, but it has philosophical content too and I think it deserves at least a chance.



I must quote Andres Caicedo, who was able to express, maybe better than me, the meaning of my question:




"I think that there may be insights that only working mathematicians

could provide (as opposed to philosophers), and even if there are
wildly differing points of view, seeing them described may be useful.
[...] Of course, it may be that answering the question in detail,
considering as many of its subtleties as possible would just be too
long and unfeasible. That's fine; even providing a few references and
ideas that can potentially be fleshed out would be more useful than
simply dismissing it."




Improved version 05/14/'13:




($*^2$) Following the advice of Gustavo Bandeira, I will explain better the meaning of "satisfied from a philosophical point of view" in my first question; what I mean applies only to the mathematicians interested in the foundations, in other words those interested in the process of generalizing all of mathematics to simpler concepts (like "$\in$" for the set theorists, or the structuralists' approach, for example)



Here is the same question on Philosophy.SE




How can a mathematician interested in the foundation of mathematics
be satisfied by an always partial knowledge of the mathematics?



Answer




@MphLee - May I add some new comments ?



You speak of math as a human experience: this is a good point to start with. You also quote von Neumann's statement: it looks a little bit "Wittgensteinian".



Try to link them; math is a game where you play with symbols, prove theorems, explore new concepts... If you live inside the math community (if you play the math game), you know well how it works.




What is the "sense" of this human activity or experience ?





You can find the answer in the field of philosophy, where you can find reflections on other human experiences: art, religion, ... Are we satisfied with the available answers to this kind of reflection and question? I don't think so.



But math is also part of a bigger human activity: science. With the "science game" we were able to build rockets and travel to the moon, to build the atomic bomb (alas!), and to calculate the earth's circumference long before Columbus (who ignored the ancient Greeks' calculation and used a wrong estimate) discovered the Americas.



Again, we can ask ourselves what the "sense" of science as a human experience is, but we may not ignore the fact that science (with the central role played in it by mathematics) is a "big game".



Both "games" grow at an impressive rate, and you are right: how can I understand a field of knowledge like mathematics (or physics) if I'm not able to "manage" the total amount of mathematical knowledge?


discrete mathematics - How to express a set of all numbers, real and imaginary, irrational and rational?



The notation for real numbers is $\mathbb{R}/\mathbf{R}$, integers: $\mathbb{Z}/\mathbf{Z}$, complex numbers $\mathbb{C}/\mathbf{C}$, and rational: $\mathbb{Q}/\mathbf{Q}$ but is there an agreed-upon way to express all numbers with similar notation?




Basically, if I were told to write a phrase that captured every number in existence, how would I do this?


Answer



As you observed, blackboard bold is a standard font used for successive extensions of number systems:
$$
\Bbb{N} \subseteq \Bbb{Z} \subseteq \Bbb{Q} \subseteq \Bbb{R} \subseteq \Bbb{C}
$$

The set of quaternions, denoted by $\Bbb{H}$ in honour of the mathematician W. R. Hamilton, would be the next step. The next extension is the set of octonions, denoted by $\Bbb{O}$ and the next one the set of sedenions, denoted by $\Bbb{S}$.



You will find many other extensions in the Wikipedia articles on
Hypercomplex numbers,

Hyperreal numbers and
Surreal numbers. The class -- this is no longer a set -- of all surreal numbers is denoted by the symbol $\mathbf{No}$. They are the largest possible ordered field: every other ordered field can be embedded in the surreals. Finally, a surcomplex number is a number of the form $a+bi$, where $a$ and $b$ are surreal numbers. Surcomplex numbers form an algebraically closed field (except for being a proper class). See also this question on Mathoverflow.



Finally, in answer to your final question "If I were told to write a phrase that captured every number in existence, how would I do this?", you could try the following aphorism:




If you ask me whether there is an ordered set of all numbers, the answer is no, but if you ask me whether there is an ordered class of all numbers, the answer is $\mathbf{No}$.



polynomials - Number Theory: Prove that $x^{p-2}+dots+x^2+x+1equiv 0pmod{p}$ has exactly $p-2$ solutions

I just completed this homework problem, but I was wondering if my proof was correct:



If $p$ is an odd prime, then prove that the congruence $x^{p-2}+\dots+x^2+x+1\equiv 0\pmod{p}$ has exactly $p-2$ incongruent solutions, and they are the integers $2,3,\dots,p-1$.




Proof



Let $f(x)=x^{p-2}+\dots+x^2+x+1$ and $g(x)=(x-1)f(x)$.



Note that $f(1)=(p-2)+1=p-1\not\equiv 0\pmod{p}$.



So, $g(x)=(x-1)(x^{p-2}+\dots+x^2+x+1)=x^{p-1}-1$.



Now, $x^{p-1}-1\equiv 0\pmod{p}$ has exactly $p-1$ incongruent solutions modulo $p$ by Lagrange's Theorem.




Note that $g(1)=(1-1)f(1)=0\equiv 0\pmod{p}$, so $1$ is a root of $g(x)$ modulo $p$. Hence, the incongruent roots of $g(x)$ modulo $p$ are $1,2,3,\dots,p-1$.



But every root of $g(x)$ other than $1$ is also a root of $f(x)$ (This is the part I'm concerned about. Is it clear that this is the case?), hence $f(x)$ has exactly $p-2$ incongruent roots modulo $p$, which are $2,3,\dots,p-1$. $\blacksquare$
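One can also verify the claim numerically for a small odd prime (a hypothetical sanity check with $p = 7$):

```python
p = 7

# f(x) = x^(p-2) + ... + x + 1, reduced mod p
def f_mod_p(x):
    return sum(pow(x, k, p) for k in range(p - 1)) % p

roots = [x for x in range(p) if f_mod_p(x) == 0]
print(roots)  # [2, 3, 4, 5, 6] — exactly the p-2 residues 2, ..., p-1
```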

limits - How to compute $lim_{xtoinfty} frac{2^x}{x^{200}}$?





I am trying to solve this limit:
$$\lim_{x\to\infty} \frac{2^x}{x^{200}}$$



The first attempt gives an indeterminate form of the kind $\frac{\infty}{\infty}$, but applying L'Hôpital here would take too long; I do not see any possible substitution by means of equivalent infinitesimals, nor does simplifying the limit help.



How can I solve it? The solution must be $\infty$. Thank you!


Answer



L'Hôpital's rule applied $200$ times leads to
$$\lim_{x\to\infty}\frac{(\log 2)^{200} \,2^x}{200!}=\infty$$




Or you can take the logarithm of the limit



$$\log\lim_{x\to\infty}\frac{2^x}{x^{200}}=\lim_{x\to\infty}\left(x\log 2-200\log x\right)=\lim_{x\to\infty}x\left(\log 2-200\frac{\log x}{x}\right)=\infty$$



As the log of the limit is $+\infty$ the limit is $+\infty$
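The log comparison in the second approach can be illustrated numerically: $x\log 2 - 200\log x$ is negative for moderate $x$ but eventually becomes, and stays, positive.

```python
from math import log

def g(x):
    # logarithm of 2^x / x^200
    return x * log(2) - 200 * log(x)

print(g(1_000))    # still negative: x^200 wins early on
print(g(10_000))   # positive: 2^x has taken over
print(g(100_000))  # hugely positive
```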


Thursday 24 October 2013

elementary number theory - GCD of rationals



Disclaimer: I'm an engineer, not a mathematician



Somebody claimed that $\gcd$ is only applicable to integers, but it seems I'm perfectly able to apply it to rationals as well:



$$ \gcd\left(\frac{13}{6}, \frac{3}{4} \right) = \frac{1}{12} $$



I can do this for a number of cases on sight, but I need a method. I tried Euclid's algorithm, but I'm not sure about the end condition: a remainder of 0 doesn't seem to work here.




So I tried the following:



$$\gcd\left(\frac{a}{b}, \frac{c}{d} \right) = \frac{\gcd(a\cdot d, c \cdot b)}{b \cdot d}$$



This seems to work, but I'm just following my intuition, and I would like to know if this is a valid equation, maybe the correct method. (It is in any case consistent with $\gcd$ over the natural numbers.)



I'm not a mathematician, so please type slowly :-)


Answer



In the ancient Greek sense, if we have two quantities $x$ and $y$, then the quantity $z$ is a common measure of $x$ and $y$ if each of $x$ and $y$ is an integer multiple of $z$. The quantities involved were not thought of as numbers, but of course they were what we think of as positive. So for example the diagonal of a square, and the diagonal of a square whose sides are $50\%$ bigger, have a common measure. But some pairs of quantities are incommensurable, like the side and diagonal of a square.




Any two rationals, unless they are both $0$, have a greatest common measure. You are therefore perfectly correct. The two rationals also have a least number that they both measure. So if you had further argued that two rationals, not both $0$, have a least positive common multiple, you would also be right.



Your calculation method is also correct. One brings the two rationals to a common denominator $q$, say as $\frac{x}{q}$ and $\frac{y}{q}$. Then the $\gcd$ is
$\frac{\gcd(x,y)}{q}$. You chose the common denominator $q=bd$. Any common denominator will do, and will produce the same $\gcd$.



Most of the standard theorems about $\gcd$ and lcm for integers extend in a straightforward way to results about the $\gcd$ and lcm for rationals. For example, a mild variant of what we nowadays call the Euclidean algorithm will compute the $\gcd$ of two positive rationals. Actually, this variant is the original Euclidean algorithm!
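The common-denominator recipe above can be sketched with Python's `fractions` module (the helper name is mine):

```python
from fractions import Fraction
from math import gcd

def gcd_rational(r, s):
    # Bring both rationals to the common denominator q = b*d,
    # then gcd(x/q, y/q) = gcd(x, y)/q.
    q = r.denominator * s.denominator
    x = r.numerator * s.denominator
    y = s.numerator * r.denominator
    return Fraction(gcd(x, y), q)

print(gcd_rational(Fraction(13, 6), Fraction(3, 4)))    # 1/12
print(gcd_rational(Fraction(3, 7), Fraction(12, 22)))   # 3/77
```

`Fraction` reduces the result automatically, which is why any common denominator produces the same answer.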



Remark: A number of programs, including Mathematica, accept rationals as inputs to their $\gcd$ function. The same appears to be true of WolframAlpha, which has the great advantage of being freely accessible. One way, perhaps, to (sort of) settle the argument in your favour is to type $\gcd(3/7, 12/22)$ into Alpha. It will, probably, return $\frac{3}{77}$ (it did when I used it). With other rationals, test first, Alpha is not bug-free.


calculus - Summation of n-squared, cubed, etc.

How do you in general derive a formula for summation of n-squared, n-cubed, etc...? Clear explanation with reference would be great.

How does analytic continuation lets us extend functions to the complex plane?



I'm trying to understand analytic continuation and I noticed on wolfram that it




allows the natural extension of the definition trigonometric,
exponential, logarithmic, power, and hyperbolic functions from the
real line $\mathbb{R}$ to the entire complex plane $\mathbb{C}$





So how does it extend, say, $f(x) = \sin(x)$, $x \in \mathbb{R}$ to the complex plane? What are the steps that have to be taken to extend this function (and others) to the complex plane?


Answer



More interesting is the case that Riemann worked out:



$$\Gamma(s)\zeta(s)=\int_0^\infty\frac{x^s}{e^x-1}\frac{dx}{x},$$
where $\Gamma(s)$ is the gamma function, $\zeta(s)$ is the zeta function and $s\in\mathbb{C}.$ To extend this formula to $\mathbb{C},$ Riemann considers the path integral, on the complex plane,
$$\oint_C\frac{(-z)^s}{e^z-1}\frac{dz}{z},$$
where the path $C$ goes from $\infty$ to the origin $O$, above the positive real axis, and circulating $O$, counterclockwise through a circumference of radius $\delta$, say, returning to $\infty$ along the bottom of the positive real axis. The important thing, for the evaluation of above integral, is that we may split it into three integrals, namely



$$\biggl(\int_\infty^\delta+\int_{|z|=\delta}+\int_\delta^\infty\biggr)\frac{(-z)^s}{e^z-1}\frac{dz}{z},$$

recalling that $(-z)^s=e^{s\log(-z)}$, and $\log(-z)=\log|z|+ i\, \text{arg}(-z).$ So that, in the first integral, when $-z$ lies on the negative real axis, we take $\text{arg}(-z)=-\pi\,i;$ on the second one $-z=-\delta,$ and as $-z$ progress counterclockwise about $O$, $\text{arg}(-z)$ goes from $-\pi\,i$ to $\pi\,i.$ Finally, in the last integral $\text{arg}(-z)=\pi\,i,$ therefore the first and third integrals do not cancel. The second integral cancels as $\delta\to 0.$ The rest is purely technical and leads to the analytical continuation of the $\zeta(s)$ function of Riemann everywhere except 1 where it has a simple pole with residue 1.


abstract algebra - Reverse CRT to compute modulus



Very commonly we have a system as the one below, and if we know $a$ and $b$, then we can easily use CRT to solve for $c$.



$$
\begin{align}

x &\equiv a \pmod p \\
x &\equiv b \pmod q \\
x &\equiv c \pmod{pq} \\
&\gcd(p, q) = 1
\end{align}
$$



However, suppose we instead know $a$ and $c$, but not $b$.




What is the arithmetic way to solve this without directly doing reduction mod q?
Is this possible to compute using just $+,\ -,\ *,$ mod pq and mod p?




Clarification: I am interested in finding $b$ in addition to $x$.


Answer



If you know $c$ then you have
$$x=c+kpq$$
for some $k\in\mathbb{Z}$ hence we also have
$$x\equiv c\mod{p}$$
$$x\equiv c\mod{q}$$
by taking the modulus of $p$ or $q$ on both sides of the above equality.
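A tiny worked instance (hypothetical moduli $p=5$, $q=7$) of the observation that $b$ is just $c$ reduced mod $q$:

```python
p, q = 5, 7            # coprime moduli
x = 24                 # the underlying integer
a, b, c = x % p, x % q, x % (p * q)

# Knowing c alone already determines both residues:
assert c % p == a
assert c % q == b
print(a, b, c)  # 4 3 24
```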


number theory - Calculating ${34}^{429} mod 431$

I am trying to calculate $${34}^{429} \bmod 431$$ by hand. (This amounts to computing $34^{-1} \bmod 431$, since $431$ is prime.)




I think I have made mistakes in my working, and have three different answers thus far from the attempts:



$$351, 306, 134$$



Is one of these correct?
If none of the above is correct please provide an answer with working.
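A machine check is easy here: Python's three-argument `pow` does fast modular exponentiation, and since $431$ is prime, Fermat's little theorem gives $34^{429}\equiv 34^{-1} \pmod{431}$.

```python
print(pow(34, 429, 431))   # fast modular exponentiation
print(pow(34, -1, 431))    # modular inverse directly (Python 3.8+)
assert pow(34, 429, 431) == pow(34, -1, 431)
```

Both print 393 (indeed $34 \cdot 393 = 13362 = 31 \cdot 431 + 1$), which suggests none of the three candidate answers above is correct.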

calculus - How prove this limit $ lim_{ntoinfty}left(n+frac{1}{2}right)left(z_{n}-left(n+frac{1}{2}right)piright)=frac{H}{pi}$



Question:



Let $H\in \mathbb{R}$. Prove that the transcendental equation $$z\cot{z}+H=0$$ has a countable number of zeros $z_{n}$ and that



$$\lim_{n\to\infty}\left(n+\dfrac{1}{2}\right)\left(z_{n}-\left(n+\dfrac{1}{2}\right)\pi\right)=\dfrac{H}{\pi}$$



My try: we must only prove this





$$z_{n}=\left(n+\dfrac{1}{2}\right)\pi+\dfrac{H}{\left(n+\dfrac{1}{2}\right)\pi}+o\left(\dfrac{1}{n^2}\right)$$
If the problem didn't state this limit result, how could we find the limit ourselves? Thank you.




Can someone help me? Thank you very much!


Answer



Your equation can be rearranged to:



$$\cot(z) = -\frac{H}{z}$$




Put $z = y + x$ where $y = (n+\frac{1}{2})\pi$ and use $\cot(y + x) = -\tan(x)$:



$$\frac{H}{y+x} = \tan(x)$$



We need to show $\lim_{y\to\infty}xy = H$. Taylor expand both sides:



$$\frac{H}{y}(1 - \frac{x}{y}) = x + O(x^2)$$



$$\left(1 + \frac{H}{y^2}\right)x = \frac{H}{y} + O(x^2)$$




$$x = \frac{H/y}{1 + H/y^2} + O(x^2)$$



From this we can see that $x \in O(1/y)$, so $O(x^2) = O(1/y^2)$, and $xy = H + O(1/y)$ as required.


elementary number theory - Find the prime-power decomposition of 999999999999

I'm working on an elementary number theory book for fun and I have come across the following problem:



Find the prime-power decomposition of 999,999,999,999 (Note that $101 \mid 1000001$.).




Other than just mindlessly guessing primes that divide it, how should I go about finding the solution? I am curious as to how this hint about 101 dividing 1000001 helps. There is also a factor table for integers less than 10,000 in the back of the book, so really the objective is to get 999,999,999,999 down to a product of numbers with less than 5 digits, then I can just use the table.



Thank you!
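The hint can be exploited directly: $999{,}999{,}999{,}999 = 10^{12}-1 = (10^6-1)(10^6+1)$, and $101 \mid 10^6+1$, so the remaining cofactors fit the table. A trial-division sketch (the helper is mine) confirms the decomposition:

```python
# The hint: 10^12 - 1 = (10^6 - 1)(10^6 + 1), and 101 | 10^6 + 1.
n = 999_999_999_999
assert n == (10**6 - 1) * (10**6 + 1)
assert (10**6 + 1) % 101 == 0

def factorize(m):
    # Simple trial division, adequate at this size
    factors, d = {}, 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

print(factorize(n))  # {3: 3, 7: 1, 11: 1, 13: 1, 37: 1, 101: 1, 9901: 1}
```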

Given a tower of field extensions, does this equality involving Galois group orders hold in general?

Suppose we have a tower of field extensions:



$F \subset E \subset K \subset \overline{F}$



Is it true in general that $|G(K/F)| = |G(K/E)| \cdot |G(E/F)|$?



I was able to verify some specific examples, like $\mathbb{Q}(\sqrt[3]{2}, \omega)$ for $x^3-2$ and another extension, but how could I show that this holds in general for all such towers of extensions?

Wednesday 23 October 2013

Prove that every odd prime number can be written as a difference of two squares.




  1. Prove that every odd prime number can be written as a difference of two squares.

  2. Prove also that this presentation is unique.

  3. Is such presentation possible if p is just an odd natural number?

  4. Can 2 be represented this way?




Answers



3. Yes, the presentation (i.e. odd numbers being written as differences of two squares) is possible for every odd natural number; however, the presentation may not be unique. For example, $57=11^2-8^2=29^2-28^2$.



4. 2 can't be written as a difference of two squares, because $4-1=3$ and $1-1=0$, and the difference of squares only grows to integers larger than $3$.



Can I get some help in proving questions 1 and 2?


Answer



$1$. Write $p = x^2 - y^2 = (x+y)(x-y)$.




Since $p$ is prime, the smaller divisor has to be one, i.e. $(x-y) = 1$, giving $2y+1 = p \implies y = \frac{p-1}{2}$ (you're guaranteed $y$ is an integer because $p$ is an odd number).



So the only possible solution set is $x = \frac{p+1}{2}, y = \frac{p-1}{2}$



$2$. Uniqueness already established via reasoning above.



$3$. Possible, but it will be non-unique as $(x-y)$ can take on multiple values, e.g. $1$ or a single prime divisor of $p$ or a product of some (but not all) prime divisors of $p$.



$4$. No, because again $(x-y) = 1$ is forced. But then you get $x = \frac{3}{2}$, which is non-integral, so no integer solution sets exist.
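A brute-force check of the formulas above (a throwaway sketch; `reps` is a hypothetical helper name) confirms both existence and uniqueness for small odd primes:

```python
# For an odd prime p, the unique representation p = x^2 - y^2 should be
# x = (p+1)/2, y = (p-1)/2.  Verify by exhaustive search over 0 <= y < x <= p.
def reps(p):
    return [(x, y) for x in range(1, p + 1) for y in range(x) if x * x - y * y == p]

for p in [3, 5, 7, 11, 13, 97]:
    assert reps(p) == [((p + 1) // 2, (p - 1) // 2)]
print(reps(97))   # [(49, 48)]
```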



convergence divergence - Is this: $\sum_{n=1}^{\infty }\zeta(2n)-\zeta(2n+1)=\frac{1}{2}$?



I would like to know the behavior of the Riemann zeta function values at even
and odd integers for studying irrationality between those values. I have tried using Wolfram Alpha to check the value of this sum:
$$
\sum_{n = 1}^{\infty}\left[\zeta\left(2n\right)-\zeta\left(2n + 1\right)\right].
$$
It tells me it equals $\frac{1}{2}$.



Note: I don't have a method to show whether the titled sum is true. Maybe someone here can help me evaluate it.



Thanks for any help.


Answer



Note that, due to absolute convergence, we may swap the order of summation:
$$\sum_{n\geq1}\left(\zeta\left(2n\right)-\zeta\left(2n+1\right)\right)=\sum_{n\geq1}\left(\sum_{k\geq1}\frac{1}{k^{2n}}-\sum_{k\geq1}\frac{1}{k^{2n+1}}\right)$$
$$=\sum_{n\geq1}\sum_{k\geq1}\frac{k-1}{k^{2n+1}}=\sum_{k\geq2}\sum_{n\geq1}\frac{k-1}{k^{2n+1}}$$
$$=\sum_{k\geq2}\frac{(k-1)/k^{3}}{1-1/k^{2}}=\sum_{k\geq2}\frac{1}{k\left(k+1\right)}=\sum_{k\geq2}\left(\frac{1}{k}-\frac{1}{k+1}\right)=\color{red}{\frac{1}{2}}.$$
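A crude numerical check agrees (a sketch: `zeta_trunc` truncates both the $k$-sum and the outer $n$-sum, so it only approximates the value):

```python
# Numerically confirm sum_{n>=1} (zeta(2n) - zeta(2n+1)) = 1/2.
def zeta_trunc(s, K=50_000):
    # truncated Dirichlet series for zeta(s), s >= 2
    return sum(k ** -s for k in range(1, K + 1))

total = sum(zeta_trunc(2 * n) - zeta_trunc(2 * n + 1) for n in range(1, 20))
print(total)   # approximately 0.5
```

The dominant truncation error comes from the $n=1$ term and is of order $1/K$, well below the displayed digits.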


number theory - Square does not divide factorial



Find all natural numbers $n$ such that $n^2$ does not divide $(n - 2)!$




My thoughts are: if by Wilson's theorem $(p - 1)!$ is not divisible by $p$ and $\gcd (p, p - 1) = 1$, then $n = p$ is done, but I am not able to prove it for composite numbers, i.e. $n = pqr$ form.



Any help will be appreciated.


Answer



Answer:
All numbers that are primes, or twice a prime, or among $1$, $8$, $9$.



Proof:
Clearly, $n=1$ does not work ($(n-2)!$ is not even defined).

If $n=8$, we check directly that $8^2\nmid 6!$.
If $n=9$, we check directly that $9^2\nmid 7!$.



If $n=p$ or $n=2p$ with a prime $p$, at most one of the factors defining $(n-2)!$ is a multiple of $p$ (and not of $p^2$), hence $n^2\nmid (n-2)!$.



If $n=2^k$ with $k>3$, we see the factors $2^{k-1}$, $2^{k-2}$, $6$, $12$ and so $n^2\mid (n-2)!$.



In all other cases, we can write $n=ap$ with $p$ an odd prime and $a\ge3$.



If $a=3$, we can assume $p>3$ and find the distinct numbers $p$, $2p$, $3$, $6$ among the factors and thus have $n^2\mid (n-2)!$.




If $a\ge 4$, the four numbers $p$, $2p$, $a$, $2a$ are all $\le n-2$. If they are pairwise different, this shows $n^2\mid (n-2)!$. However it may happen that $a=p$ or $a=2p$, i.e., $n=p^2$ or $n=2p^2$ with $p\ge 5$. But then we find $p$, $2p$, $3p$, $4p$ among the factors, and so again $n^2\mid 4p^4\mid (n-2)!$.
$\square$
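The characterization is easy to test exhaustively for small $n$ (a quick sketch; `is_prime` is a naive helper):

```python
from math import factorial

def is_prime(m):
    # naive primality test, fine for small m
    return m >= 2 and all(m % d for d in range(2, int(m ** 0.5) + 1))

# Claimed answer: n^2 does not divide (n-2)! exactly when n is prime,
# twice a prime, or one of 8, 9.  (n = 1 is excluded: (n-2)! is undefined.)
for n in range(2, 300):
    fails_to_divide = factorial(n - 2) % n ** 2 != 0
    claimed = is_prime(n) or (n % 2 == 0 and is_prime(n // 2)) or n in (8, 9)
    assert fails_to_divide == claimed
print("verified for n = 2..299")
```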


Tuesday 22 October 2013

calculus - I don't know how to solve the sum of the series $n \cdot \frac{1}{2^n}$

I need some ideas to solve $$\sum_{n=1}^\infty n\cdot\left(\frac12\right)^n$$
I proved that the series converges using the ratio test, but I don't know how to find the sum.
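One standard trick (a hint, not the only route): differentiating the geometric series $\sum_{n\ge0} x^n = \frac{1}{1-x}$ gives $\sum_{n\ge1} n x^{n-1} = \frac{1}{(1-x)^2}$, hence $\sum_{n\ge1} n x^n = \frac{x}{(1-x)^2}$, which at $x=\frac12$ equals $2$. A quick numerical check of that closed form (a sketch):

```python
# Partial sums of sum_{n>=1} n * x**n approach x/(1-x)**2 at x = 1/2, i.e. 2.
x = 0.5
partial = sum(n * x ** n for n in range(1, 200))
closed_form = x / (1 - x) ** 2
print(partial, closed_form)   # both essentially 2.0
```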

real analysis - A question on convergence to zero of measurable sets

Let $f$ be a strictly positive (almost everywhere) measurable function which is also integrable. Let $E_n$ be a sequence of measurable sets such that $\int_{E_n}f\to 0$ as $n\to\infty$. Is it true that $\mu(E_n)\to 0$, where $\mu$ is the positive measure with respect to which $f$ is integrable? The measure space is finite.

linear algebra - Why are these eigenvalue theorems true?




So I was reading up on the Wikipedia page on eigenvalues and eigenvectors (which I fondly call eigencrap ;) ) and I was confused by one paragraph in particular.



Let $A$ be an arbitrary $n$ by $n$ matrix of complex numbers with eigenvalues $\lambda_1,\lambda_2, \ldots, \lambda_n$. Each eigenvalue appears $\mu_A(\lambda_i)$ times in this list, where $\mu_A(\lambda_i)$ is the eigenvalue's algebraic multiplicity. The following are the properties of this matrix and its eigenvalues:




  • The trace of $A$, defined as the sum of its diagonal elements, is also the sum of all eigenvalues,
    $$
    \text{tr}(A)=\sum_{i=1}^{n}A_{i,i}=\sum_{i=1}^{n}\lambda_i=\lambda_1+\lambda_2+\ldots+\lambda_n.
    $$

  • The determinant of $A$ is the product of all its eigenvalues,

    $$
    \det(A)=\prod_{i=1}^{n}\lambda_i=\lambda_1\lambda_2\cdots\lambda_n.
    $$
First off, $\mathbb{R}\subset\mathbb{C}$, correct? Therefore the properties listed should hold for any $n\times n$ square matrix -- namely that $\operatorname{tr}(A)$, the sum of the diagonal elements, is equal to the sum of all eigenvalues, and that $\operatorname{det}(A)$ is equal to the product of such.



This does not strike me as intuitively true; why should this be the case? Is there a condition missing from the theorem -- perhaps that the matrix must be orthogonally diagonalizable, or symmetric?



Just a thought, but is there a relation to Vieta's theorems? The sum/product identities seem indicative of such.




Any thoughts, intuitions, and explanations are appreciated, thank you!


Answer



Another well-known property of the trace operator is that it is invariant under commutation of matrices; that means:
$$
Tr(AB)=Tr(BA)
$$
So, if $A$ is diagonalized by:
$$
A=V D V^{-1}
$$

With $D$ a diagonal matrix, then:
$$
Tr(A) = Tr(V D V^{-1})=Tr(V V^{-1} D) = Tr(D)= \lambda_1+ \cdots +\lambda_n
$$
Where the invariance over commutation was used.



Regarding the second property, another well-known result is Binet's theorem, which states:
$$
Det(AB)= Det(A)Det(B)
$$

Thus, upon diagonalization:
$$
Det(A)= Det(V D V^{-1}) = Det(V)Det(D)Det(V^{-1})
$$
And of course this also implies:
$$
Det(I)= Det(V V^{-1}) = Det(V)Det(V^{-1}) =1
$$
Thus:
$$
Det(A)= Det(D) = \lambda_1 \times \cdots \times\lambda_n
$$



Also, yes, there is some relation to Viète's theorems (or Girard's relations), which come from the characteristic polynomial:
$$
p(\lambda) = Det(A-\lambda I)
$$
Since for instance this implies:
$$
p(0)=Det(A)
$$
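That Vieta connection is easy to see numerically for a general (non-symmetric) $2\times2$ complex matrix, whose eigenvalues come straight from the quadratic characteristic polynomial; the sample entries below are arbitrary (a sketch):

```python
import cmath

# Eigenvalues of an arbitrary 2x2 complex matrix A = [[a, b], [c, d]] are the
# roots of the characteristic polynomial  lambda^2 - tr(A)*lambda + det(A).
a, b, c, d = 2 + 1j, -3.0, 0.5j, 1 - 4j
tr, det = a + d, a * d - b * c
disc = cmath.sqrt(tr * tr - 4 * det)
l1, l2 = (tr + disc) / 2, (tr - disc) / 2

# By Vieta, the sum of the roots is tr(A) and their product is det(A).
print(abs((l1 + l2) - tr))    # ~0
print(abs((l1 * l2) - det))   # ~0
```

No symmetry or orthogonal diagonalizability is needed; only the factorization of the characteristic polynomial over $\mathbb{C}$.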


elementary number theory - General method for solving $ax\equiv b\pmod {n}$ without using extended Euclidean algorithm?



Consider the linear congruence equation $$ax\equiv b\pmod { n}.$$ One way to solve it is solving a linear Diophantine equation
$$
ax+ny=b.
$$



I saw somebody solved it by another method somewhere I don't remember:





Solve $144x\equiv 22\pmod { 71}$.
$$\begin{align}
144x\equiv 93 &\pmod { 71}\\
48x\equiv 31&\pmod { 71}\\
48x\equiv -40&\pmod { 71}\\
6x\equiv -5&\pmod { 71}\\
6x\equiv 66&\pmod { 71}\\
x\equiv 11&\pmod { 71}
\end{align}
$$




Instead of solving a Diophantine equation using extended Euclidean algorithm, he uses the rules of congruence such as




If $a_1\equiv b_1\pmod {n}$ and $a_2\equiv b_2\pmod {n}$, then $a_1\pm a_2\equiv b_1\pm b_2\pmod {n}$.




Here are my questions:





  • Does the second method always work?

  • What's the general algorithm for solving $ax\equiv b\pmod {n}$ in this way?


Answer



For this example it is simpler to note that



$$\rm mod\ 71\!:\ \ \frac{22}{144}\: \equiv\: \frac{11}{72}\:\equiv\: \frac{11} 1 $$




When the modulus is prime one can employ Gauss's algorithm, for example



$$\rm mod\ 29\!:\ \ \frac{1}8\: \equiv \frac{4}{32}\: \equiv\: \frac{4}{3}\:\equiv\: \frac{40}{30}\: \equiv\: \frac{11}{1}$$



I.e. scale $\rm A/B\ \to AN/BN\ $ by the least $\rm\:N\:$ so that $\rm\ BN > 29\:.\ $ Then reduce the numerator and denominator $\rm\ mod\ 29,\:$ and iterate. You will eventually obtain a denominator of $1$ since each step reduces the denominator. Isn't that sweet? That's the key idea that led Gauss to the first proof of the Fundamental Theorem of Arithmetic, i.e. unique factorization of integers.
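Gauss's scaling procedure is mechanical enough to code directly (a sketch; the function name is mine):

```python
def gauss_solve(a, b, p):
    """Solve a*x = b (mod p) for prime p by Gauss's scaling trick:
    repeatedly multiply numerator and denominator by the least N with
    a*N > p, then reduce mod p.  The 'denominator' a strictly decreases,
    so it eventually reaches 1 and b is the answer."""
    a %= p
    b %= p
    while a != 1:
        n = p // a + 1          # least N such that a*N > p
        a = (a * n) % p
        b = (b * n) % p
    return b

print(gauss_solve(144, 22, 71))   # 11, matching the worked example above
print(gauss_solve(8, 1, 29))      # 11, the inverse of 8 mod 29
```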


Monday 21 October 2013

geometry - Vector path length of a hypotenuse

Figure 1




Consider the red path from A that zigzags to B, taking an even number $n$ of steps of length $w$. The path length of the route $P_n$ will be equal to:



$ P_n = P_x + P_y = \frac{n}{2}\times w + \frac{n}{2}\times w = n \times w $



But $\frac{n}{2}\times w = 1$ because it is the length of one of the sides of the triangle, so:



$P_n = 2$



Which will be true no matter how many steps you take. However, in the limit $n \to \infty,\ w \to 0$, the path length $P_\infty$ suddenly becomes:




$P_\infty = \sqrt{1^2 + 1^2} = \sqrt{2}$



Due to Pythagoras. Why is this the case? It seems the path length suddenly decreases by 0.59!
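One way to see what is going on (an illustrative sketch, not a full answer): the zigzag converges to the diagonal pointwise, but arc length is not continuous under that convergence; every zigzag has length exactly $2$ no matter how fine it gets:

```python
import math

def zigzag_length(n):
    """Total length of the n-step staircase from (0,0) to (1,1),
    alternating horizontal and vertical steps of width w = 2/n (n even)."""
    w = 2 / n
    pts = [(0.0, 0.0)]
    for i in range(n):
        x, y = pts[-1]
        pts.append((x + w, y) if i % 2 == 0 else (x, y + w))
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

for n in (2, 10, 1000):
    print(n, zigzag_length(n))   # always 2.0 (up to float rounding)
print(math.sqrt(2))              # 1.414..., the diagonal's length
```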

sequences and series - Easy summation question: $S= 1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\frac{1}{16}\cdots$

While doing physics I encountered a sum I couldn't evaluate:

$$S= 1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\frac{1}{16}\cdots$$
Is there a particular formula for this sum, and does it converge?
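This is a geometric series with first term $1$ and ratio $-\frac12$, so it converges to $\frac{1}{1-(-1/2)} = \frac{2}{3}$; a quick partial-sum check (a sketch):

```python
# Partial sums of 1 - 1/2 + 1/4 - 1/8 + ... tend to 1/(1 - (-1/2)) = 2/3.
partial = sum((-0.5) ** n for n in range(60))
print(partial)   # 0.666666...
```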

$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$ (prove by induction)

I'm having some difficulty proving by induction the following statement.




$$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$$



I have shown that $\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$ holds for $n=1$ (both sides equal $\frac{1}{20}$), but I am getting stuck on the induction step.



As far as I know I have to show $$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$$
implies
$$\sum_{i=1}^{n+1} \frac{1}{(i+3)(i+4)} = \frac{n+1}{4(n+5)}$$



To do this I think I should add the number $\frac{1}{(n+4)(n+5)}$ to $\frac{n}{4(n+4)}$ and see if it gives $\frac{n+1}{4(n+5)}$, if I am not mistaken.




When trying to do that however I get stuck. I have:



$$\frac{n}{4(n+4)} +\frac{1}{(n+4)(n+5)} = \frac{n(n+4)(n+5)}{4(n+4)^2(n+5)} + \frac{4(n+4)}{4(n+4)^2(n+5)} = \frac{n(n+4)(n+5)+4(n+4)}{4(n+4)^2(n+5)} = \frac{n(n+5)+4}{4(n+4)(n+5)}$$



However, beyond this point I don't know how to reach $\frac{n+1}{4(n+5)}$; I always just end up at the starting point of that calculation.



So I think that either my approach must be wrong or I am missing some trick to simplify $$\frac{n(n+5)+4}{4(n+4)(n+5)}$$



I would be very grateful for any help, as this is a task on a preparation sheet for the next exam and I don't know anyone who has a correct solution.
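For what it's worth, the numerator of that last fraction factors, which is exactly the missing simplification (a hint, not a full write-up):

```latex
\frac{n(n+5)+4}{4(n+4)(n+5)}
  = \frac{n^2+5n+4}{4(n+4)(n+5)}
  = \frac{(n+1)(n+4)}{4(n+4)(n+5)}
  = \frac{n+1}{4(n+5)}
```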

calculus - Getting an X for Chinese Remainder Theorem (CRT)



How do I get modulo equations to satisfy a given X in CRT?



For example, say I have X = 1234. I choose the $m_i$ as 5, 7, 11, 13. This satisfies the simple requirements of Mignotte's threshold secret sharing scheme. More precisely, in my example k = n = 4, and the product of any k - 1 moduli is smaller than X. So how come simply computing the remainder of each won't give equations that solve to X = 1234?




In the case of the example,



x = 4 mod 5
x = 2 mod 7
x = 2 mod 11
x = 12 mod 13


Which resolves to 31264 (won't CRT produce the smallest solution?)




Any hints?


Answer



The final result of the CRT calculation must be reduced modulo $5 \times 7 \times 11 \times 13 = 5005$. This gives the correct answer.
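A direct computation (a sketch using Python's built-in three-argument `pow` for the modular inverse) reproduces this:

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: smallest non-negative x with
    x = r_i (mod m_i), for pairwise coprime moduli."""
    M = prod(moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli))
    return x % M          # reduce modulo the product of the moduli

print(crt([4, 2, 2, 12], [5, 7, 11, 13]))   # 1234
```

Note $31264 \bmod 5005 = 1234$, so the two answers agree.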


rational numbers - Square root of 6 irrationality proof

I was proving $\sqrt 6 \notin \Bbb Q$ by assuming its negation and stating that $\exists (p,q) \in \Bbb Z \times \Bbb Z^*$ such that $\gcd(p,q) = 1$ and $\sqrt 6 = p/q$.




This gives $p^2 = 2 \times 3q^2$, so $\exists k \in \Bbb Z$ with $p = 2k$, hence $2k^2 = 3q^2$. I then found two possible cases: either $q$ is even or odd. If even, we get the contradiction $\gcd(p, q) \neq 1$; if odd, we get a contradiction with $2k^2 = 3q^2$.



Is this the right path of reasoning?

Sunday 20 October 2013

calculus - Limit question - L'Hopital's rule doesn't seem to work

I have been recently trying to solve this limit problem. First of all, I used L'Hopital's rule, but it doesn't seem to work (because I thought that this limit is of the form $\frac{\infty}{\infty}$). Am I doing it correctly? I don't seem to understand where I am going wrong.



$$\lim_{x \to \infty} \left(\frac{x+\sin^3x}{5x+6}\right)$$
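A numerical look (a sketch, not a full answer) suggests what is happening: the ratio tends to $\frac15$, yet after differentiating, the numerator $1+3\sin^2 x\cos x$ keeps oscillating, so L'Hôpital's rule gives no information; squeezing with $x-1 \le x+\sin^3 x \le x+1$ settles it instead:

```python
import math

def f(x):
    # the ratio from the question; sin(x)**3 is bounded, so f(x) -> 1/5
    return (x + math.sin(x) ** 3) / (5 * x + 6)

for x in (10.0, 1e3, 1e6):
    print(x, f(x))   # approaches 0.2
```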

combinatorics - Nested summations and their relation to binomial coefficients



As the festive season is coming to a close and we've seen out all twelve days of Christmas, I found myself thinking about the famous carol of the same name. Namely, how many presents in total did my true love give to me? This can be given by the nested summation
$\sum \limits_{i=1}^{12} (\sum \limits_{j=1}^i j)$ ,
where the inside sum is the number of presents received each day (i.e. a partridge in a pear tree on the first day, two turtle doves and a partridge in a pear tree on the second day, etc.) and the outside summation sums over the days. This can be found to be 364, but what about in general for $n$ days of Christmas?



Well we can generalise the summation by just inserting $n$ into the summation and solving it.
$\sum \limits_{i=1}^{n} (\sum \limits_{j=1}^i j)$
Using the well known solution of the sum $~ \sum \limits_{i=1}^{n}i = \frac{n(n+1)}{2} ~$ we can obtain a non-nested sum,
$ = \sum \limits_{i=1}^{n} \frac{i(i+1)}{2} = \sum \limits_{i=1}^{n} (\frac{1}{2}i^2 + \frac{1}{2}i )$ ,
which we can split into separate summations
$ = \frac{1}{2} \sum \limits_{i=1}^{n} i^2 + \frac{1}{2}\sum \limits_{i=1}^{n}i = \frac{n(n+1)(2n+1)}{12}+ \frac{n(n+1)}{4} = \frac{n(n+1)(2n+1) + 3n(n+1)}{12} = \frac{2n^3 + 6n^2 + 4n}{12} = \frac{n(n+1)(n+2)}{6}$
which we notice can be written as the binomial coefficient,
$ = {n+2\choose3}$.
This doesn't really come as any surprise since this kind of problem seems like the prime candidate for one which could be solved using some fancy combinatorics to yield a binomial coefficient. But if we notice now that we can write the standard arithmetic series sum as a binomial coefficient, e.g.,
$~ \sum \limits_{i=1}^{n}i = \frac{n(n+1)}{2} = {n+1\choose2} ~$
we might notice a similarity between them.



Similarly it can be shown that for a twice nested sum we have,
$\sum \limits_{i=1}^{n} (\sum \limits_{j=1}^i (\sum \limits_{k=1}^j k)) = {n+3\choose4} $.



In general it seems for an $(m-1)$-times nested sum (i.e., one which has $m$ summation symbols) we have,
$\sum \limits_{i_1=1}^{n} \sum \limits_{i_2=1}^{i_1} \sum \limits_{i_3=1}^{i_2} \dots \sum \limits_{i_m=1}^{i_{m-1}}i_m = {n+m\choose1+m} $
and this is likely provable with induction.




My question to you is: how can this relationship between binomial coefficients and summations be thought of? That is to say, why would taking the number of possible pairs of $n+1$ elements happen to give you the sum from $1$ to $n$? And how does this somehow seem to extend with nested summations; the number of ways you can take 3 elements from $n+2$ gives you the first nested summation and so on? I'm sure some combinatorics whizz will be able to rearrange the way we count these sums somehow to yield the answer. Anyway at the very least it's interesting to think about - happy holidays!


Answer



Suppose you want number of non-negative integer solutions of the equation
$$x_1+\ldots+x_n = k+1$$
The answer is $\binom{k+n}{k+1}$ (a standard application of Stars and Bars).



Now, look at the case $k=1$ for simplicity. We construct a solution as follows: initialize each component of the vector $(x_1,\ldots, x_n)$ to zero; then, to make their sum $2$, we add $1$ to one of the components at each of two steps, one by one.





  • In the first step, we have $n$ choices of where to add the $1$. Suppose we added it at the $i$-th position, i.e. $x_j=0$ for $j\neq i$, and $x_i=1$ after this step.


  • At the next step, we add the second $1$ to the $j$-th position, imposing the condition $j\leq i$ to avoid having same solution twice.




We can do this in $\sum_{i=1}^n\sum_{j=1}^i1$ ways. The result from the formula must be the same, so
$$\binom{n+1}{2} = \sum_{i=1}^n\sum_{j=1}^i1$$



For higher values of $k$, we just need to require that each subsequent $1$ is added at a position no further right than the previous one; the tail of the vector then remains empty, and the formulas are apparent.
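The general identity from the question, $\sum_{i_1=1}^{n}\cdots\sum_{i_m=1}^{i_{m-1}} i_m = \binom{n+m}{1+m}$, is easy to verify for small $n$ and $m$ (a sketch):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def nested(n, m):
    """m nested summations: sum_{i_1=1}^{n} ... sum_{i_m=1}^{i_{m-1}} i_m."""
    if m == 1:
        return n * (n + 1) // 2          # innermost arithmetic series
    return sum(nested(i, m - 1) for i in range(1, n + 1))

for n in range(1, 12):
    for m in range(1, 6):
        assert nested(n, m) == comb(n + m, 1 + m)
print(nested(12, 2))   # 364 presents over the twelve days of Christmas
```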


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...