Sunday 30 April 2017

algebra precalculus - If $\tan (\pi \cos \theta) =\cot (\pi \sin \theta) $ then find the value of $\cos \left(\theta -\frac{\pi}{4}\right)$




If $\tan (\pi \cos \theta) =\cot (\pi \sin \theta) $ then find the value of $\cos \left(\theta -\frac{\pi}{4}\right).$



I could not get any idea of how to solve it. I tried $\theta =0^\circ $, but could not get the answer.


Answer



Hint -



$\tan (\pi \cos \theta) =\cot (\pi \sin \theta)$



$\tan(\pi \cos \theta) =\tan (\frac{\pi}2 - \pi \sin \theta)$




$\pi \cos \theta =\frac{\pi}2 - \pi \sin \theta$ (taking the principal solution; in general one may add an integer multiple of $\pi$)



$\cos \theta = \frac 12 - \sin \theta$



$\sin \theta + \cos \theta = \frac 12$



Now we have,



$\cos\left(\theta-\frac{\pi}{4}\right)$




$= \cos\theta \cos \frac{\pi}4 +\sin\theta \sin \frac{\pi}4$



$= \cos\theta \frac 1{\sqrt{2}} + \sin\theta \frac 1{\sqrt{2}}$



$= \frac1{\sqrt 2} \left( \cos\theta +\sin\theta \right)$



$ = \frac 1{\sqrt2} \left(\frac 12\right)$



$= \frac 1{2\sqrt2}$
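A quick numerical sanity check of the hint (a sketch of mine in Python, not part of the original answer): pick a $\theta$ satisfying $\sin \theta + \cos \theta = \frac 12$ and confirm both the original equation and the claimed value $\frac{1}{2\sqrt 2} \approx 0.35355$.

    import math

    # sin(t) + cos(t) = 1/2 is sqrt(2)*sin(t + pi/4) = 1/2, so:
    theta = math.asin(1 / (2 * math.sqrt(2))) - math.pi / 4

    # original equation: tan(pi*cos t) should equal cot(pi*sin t)
    print(math.tan(math.pi * math.cos(theta)))      # these two printed
    print(1 / math.tan(math.pi * math.sin(theta)))  # values agree

    print(math.cos(theta - math.pi / 4))            # 0.35355... = 1/(2*sqrt(2))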


real analysis - A power series involving binomial coefficients

I've been playing around with infinite sums for quite a while now, but recently I came across the following question in an undergraduate mathematics book that is specifically targeted at problem solving. The problem is as follows:



Evaluate the following:
$$\sum_{r=2}^\infty \Biggl(\binom{2r}{r}{\biggl(\frac{x}{4}\biggr)^r}\Biggr)^2$$



where $|x|$ is strictly less than unity.



I've thoroughly checked that the sum is, in fact, convergent, however, I am completely stumped as to how I am to evaluate it. I am guessing that the final expression is one involving the variable 'x' since I do not see any way for it to be eliminated somehow. Any kind of hint/solution/explanation to the problem would be highly appreciated.

Saturday 29 April 2017

elementary number theory - Calculate $121^{199} \bmod 300$

Using Euler's theorem (the generalization of Fermat's little theorem) I proved that





$$121^{199} \equiv 121^{39} \pmod{300}$$




(as $\phi(300)$ is $80$), but I don't think I can leave it like this.
My question is: how can I compute $121^{39} \bmod 300$? Any ideas, suggestions?
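Since no answer is recorded here, a quick machine check may help (a sketch; Python's built-in three-argument pow does fast modular exponentiation):

    # square-and-multiply modular exponentiation is built into Python
    print(pow(121, 199, 300))   # 181
    print(pow(121, 39, 300))    # 181, consistent with the reduction above

    # by hand: 121 = 11^2 and 11^10 = 1 (mod 300), since 11^5 = 251 and
    # 251^2 = 63001 = 1 (mod 300); so 121^199 = 11^398 = 11^8 = 181 (mod 300)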

limits - Stirling's formula: proof?

Suppose we want to show that $$ n! \sim \sqrt{2 \pi} n^{n+(1/2)}e^{-n}$$




Instead we could show that $$\lim_{n \to \infty} \frac{n!}{n^{n+(1/2)}e^{-n}} = C$$ where $C$ is a constant. Maybe $C = \sqrt{2 \pi}$.



What is a good way of doing this? Could we use L'Hopital's Rule? Or maybe take the log of both sides (e.g., compute the limit of the log of the quantity)? So for example do the following $$\lim_{n \to \infty} \log \left[\frac{n!}{n^{n+(1/2)}e^{-n}} \right] = \log C$$

probability - The normal approximation of Poisson distribution




(I've read the related questions here but found no satisfying answer, as I would prefer a rigorous proof for this because this is a homework problem)



Prove: If $X_\alpha$ follows the Poisson distribution $\pi(\alpha)$, then
$$\lim_{\alpha\rightarrow\infty}P\{\frac{X_\alpha-\alpha}{\sqrt{\alpha}} \leq u \} = \Phi(u)$$



where $\Phi(u)$ is the cdf of normal distribution $N(0,1)$



Hint: use the Laplace transform $E(e^{-\lambda(X_\alpha-\alpha)/\sqrt{\alpha}})$, show that as $\alpha\rightarrow\infty$ it converges to $e^{\lambda^2/2}$




I did the transform but failed to sum the series (so essentially I have made no progress).






Here's what I got:



$$g(\lambda)=\sum_{n=0}^{\infty} \frac{e^{-\alpha}}{n!}\alpha^n e^{-\frac{\lambda(n-\alpha)}{\sqrt{\alpha}}}$$



and $\lim_{\alpha\rightarrow\infty} g(\lambda)=e^{\lambda^2/2}$, matching the hint, is what I'm trying to arrive at. I tried L'Hospital only to find that the result is identical to the original ratio.


Answer




Let $X_{\alpha}$ be Poisson $\pi(\alpha)$, for $\alpha = 1, 2, \ldots$
The probability mass function of $X_{\alpha}$ is
$$f_{X_{\alpha}}(x)=\frac{{\alpha}^x\operatorname{e}^{-\alpha}}{x!} \qquad x = 0, 1, 2, \ldots$$
The moment generating function of $X_{\alpha}$ is
$$
M_{X_{\alpha}}(t)=\Bbb{E}\left(\operatorname{e}^{tX_{\alpha}}\right)=\operatorname{e}^{\alpha(\operatorname{e}^{t}-1)}\qquad t\in\Bbb{R}
$$
Now consider a “standardized” Poisson random variable $Z_{\alpha}=\frac{X_{\alpha}-{\alpha}}{\sqrt{\alpha}}$
which has limiting moment generating function
$$

\begin{align}
\lim_{\alpha\to\infty}M_{Z_{\alpha}}&=
\lim_{\alpha\to\infty}\Bbb{E}\left(\exp{(tZ_{\alpha})}\right)\\
&=\lim_{\alpha\to\infty}\Bbb{E}\left(\exp{\left(t\frac{X_{\alpha}-{\alpha}}{\sqrt{\alpha}}\right)}\right)\\
&=\lim_{\alpha\to\infty}\exp(-t\sqrt\alpha)\Bbb{E}\left(\exp{\left(\frac{tX_{\alpha}}{\sqrt{\alpha}}\right)}\right)\\
&=\lim_{\alpha\to\infty}\exp(-t\sqrt\alpha)\exp\left(\alpha(\operatorname{e}^{t/\sqrt\alpha}-1)\right)\\
&=\lim_{\alpha\to\infty}\exp\left(-t\sqrt\alpha +\alpha\left[t\alpha^{-1/2}+\frac{t^2\alpha^{-1}}{2}+\frac{t^3\alpha^{-3/2}}{6}+\cdots\right]\right)\\
&=\lim_{\alpha\to\infty}\exp\left(\frac{t^2}{2}+\frac{t^3\alpha^{-3/2}}{6}+\cdots\right)\\
&=\exp\left(\frac{t^2}{2}\right)
\end{align}

$$
by using the moment generating function of a Poisson random variable and expanding the exponential function as a Taylor series. This can be recognized as the moment generating function of a standard normal random variable, so by the continuity theorem for moment generating functions $Z_{\alpha}$ converges in distribution to $N(0,1)$. Equivalently, for large $\alpha$ the unstandardized random variable $X_{\alpha}$ is approximately normal with mean $\alpha$ and variance $\alpha$.
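As a numerical illustration of this convergence (a sketch of mine, with names of my choosing), one can compare the standardized Poisson CDF against $\Phi(u)$ directly, using log-weights to avoid overflow:

    import math

    def poisson_cdf_standardized(alpha, u):
        # P((X - alpha)/sqrt(alpha) <= u) for X ~ Poisson(alpha)
        limit = int(alpha + u * math.sqrt(alpha))   # largest k in the event
        # sum exp(k*log(alpha) - alpha - log(k!)), via lgamma for stability
        return sum(math.exp(k * math.log(alpha) - alpha - math.lgamma(k + 1))
                   for k in range(limit + 1))

    def phi(u):
        # standard normal CDF
        return 0.5 * (1 + math.erf(u / math.sqrt(2)))

    for alpha in (10, 100, 1000):
        print(alpha, poisson_cdf_standardized(alpha, 1.0), phi(1.0))
    # the middle column approaches Phi(1.0) = 0.8413... as alpha grows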


analysis - what are real and imaginary part of this expression



I have $M:=\sqrt{\frac{a\cdot(b+ic)}{de}}$ and all variables $a,b,c,d,e$ are real. Now I am looking for the real and imaginary part of this, but this square root makes it kind of hard.


Answer




$$\sqrt{\frac{a(b+ic)}{de}}=\sqrt{\frac{a}{de}}\cdot\sqrt{b+ic}$$



Let $$\sqrt{b+ic}=x+iy$$



$$\implies b+ic=(x+iy)^2=x^2-y^2+2xyi$$



Equating the real & the imaginary parts, $b=x^2-y^2, c=2xy$



So, $b^2+c^2=(x^2-y^2)^2+(2xy)^2=(x^2+y^2)^2\implies x^2+y^2=\sqrt{b^2+c^2}$




We have $$x^2-y^2=b$$



$$\implies 2x^2=\sqrt{b^2+c^2}+b\implies x^2=\frac{\sqrt{b^2+c^2}+b}2$$
$$\implies x=\pm\frac{\sqrt{\sqrt{b^2+c^2}+b}}{\sqrt2}$$



and $$\implies y^2=x^2-b=\frac{\sqrt{b^2+c^2}-b}2$$



$$\implies y=\pm\frac{\sqrt{\sqrt{b^2+c^2}-b}}{\sqrt2}$$



Now, the sign of $y=$ sign of $x\cdot$ sign of $c$
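These formulas are easy to check against a library square root (a sketch of mine; for the principal root one takes $x>0$, and the sign rule above then fixes $y$):

    import cmath
    import math

    def sqrt_b_ic(b, c):
        # principal square root of b + ic via the formulas derived above
        r = math.hypot(b, c)                           # sqrt(b^2 + c^2)
        x = math.sqrt((r + b) / 2)                     # choose x > 0
        y = math.copysign(math.sqrt((r - b) / 2), c)   # sign(y) = sign(c)
        return complex(x, y)

    print(sqrt_b_ic(3.0, 4.0))   # (2+1j), since (2+i)^2 = 3+4i
    print(cmath.sqrt(3 + 4j))    # (2+1j), matches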



elementary set theory - Cross product of the reals question



Is $\Bbb {R} \times \Bbb {R} \subseteq \Bbb {R}$?



If this is the case then would it be true that $|\Bbb {R} \times \Bbb {R}| \leq |\Bbb {R}|$?


Answer



$\mathbb R \times \mathbb R \subseteq \mathbb R$ is incorrect.




It is incorrect for the same reason that in vector spaces $\mathbb R^3 \subseteq \mathbb R^2$ is incorrect. The number of components is different.



However...



The statement $|\mathbb R \times \mathbb R| \leq |\mathbb R|$ is correct, and in fact it is true that $|\mathbb R \times \mathbb R| = |\mathbb R|$.



A simple proof for $|\mathbb R \times \mathbb R| \leq |\mathbb R|$ without resorting to cardinal arithmetic would be to find a function $f: \mathbb R^2 \to \mathbb R$ that is injective. Can you think of such a function? The classic trick is to interleave the decimal digits of the two coordinates. (A map like $f(i,j)=2^i3^j$ is injective for natural-number arguments, by unique factorization, but not on all of $\mathbb R^2$.)


Friday 28 April 2017

elementary number theory - calculating $a^b \bmod c$




What is the fastest way (general method) to calculate the quantity $a^b \!\mod c$? For example $a=2205$, $b=23$, $c=4891$.


Answer



Let's assume that a,b,c referred to here are positive integers, as in your example.



For a specific exponent b, there may be a faster (shorter) method of computing a^b than binary exponentiation. Knuth has a discussion of the phenomenon in Art of Computer Programming Vol. 2 (Semi-numerical Algorithms), Sec. 4.6.3 and the index term "addition chains". He cites b=15 as the smallest case where binary exponentiation is not optimal, in that it requires six multiplications, but a^3 can be computed in two multiplications, and then (a^3)^5 in three more for a total of five multiplications.



For the specific exponent b=23 the parsimonious addition chain involves the exponents (above 1) 2,3,5,10,13, at which point a^23 = (a^10)*(a^13), for a total of six multiplications. Binary exponentiation for b=23 requires seven multiplications.




Another approach that can produce faster results when b is large (not in your example) depends on knowing something about the base a and modulus c. Recall from Euler's generalization of Fermat's Little Thm. that if a,c are coprime, then a^d = 1 mod c for d the Euler phi function of c (the number of positive integers less than c and coprime to it). In particular if c is a prime, then by Fermat's Little Thm. either c divides a and a^b = 0 mod c or else a^b = a^e mod c where e = b mod (c-1) since phi(c) = c-1 for a prime c.



If the base a and modulus c are not coprime, then it might be advantageous to factor a into its greatest common factor with c and its largest factor that is coprime to c.



Also it might be advantageous if c is not prime to factor it into prime powers and do separate exponentiations for each such factor, piecing them back together via the Chinese Remainder Thm. In your example c = 4891 = 67*73, so you might compute a^b mod 67 and a^b mod 73 and combine those results to get a^b mod c. This is especially helpful if you are limited in the precision of integer arithmetic you can do.
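To make the two main ingredients above concrete, here is a sketch (mine, not the answerer's) of square-and-multiply exponentiation plus the Chinese Remainder recombination for the example $a=2205$, $b=23$, $c=4891=67\cdot 73$:

    def power_mod(a, b, c):
        # binary (square-and-multiply) exponentiation: a^b mod c
        result, a = 1, a % c
        while b:
            if b & 1:               # low bit set: multiply this power in
                result = result * a % c
            a = a * a % c           # square for the next bit
            b >>= 1
        return result

    a, b = 2205, 23
    r67, r73 = power_mod(a, b, 67), power_mod(a, b, 73)

    # Chinese Remainder Thm: the x mod 4891 with x = r67 (mod 67) and
    # x = r73 (mod 73); pow(m, -1, n) (Python 3.8+) gives modular inverses
    x = (r67 * 73 * pow(73, -1, 67) + r73 * 67 * pow(67, -1, 73)) % 4891

    print(x == pow(a, b, 4891))     # True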


sequences and series - Finding $\lim_{n\to\infty}\left(\frac{1}{\sqrt{n^2+1}} + \frac{1}{\sqrt{n^2+2}} + \dots + \frac{1}{\sqrt{n^2+n}}\right)$



I'm trying to find $\lim_{n\to\infty}(\frac{1}{\sqrt{n^2+1}} + \frac{1}{\sqrt{n^2+2}} + ... + \frac{1}{\sqrt{n^2+n}})$.





  • I tried to use the squeeze theorem, failed.

  • I tried to use a sequence defined recursively: $a_{n+1} = {a_n} + \frac{1}{\sqrt{(n+1)^2 +n+1}}$. It is a monotonically growing sequence, and for every $n$, $a_n > 0$. I also defined $f(x) = \frac{1}{\sqrt{(x+1)^2 +x+1}}$, so $a_{n+1} = a_n + f(n)$. But I'm stuck.



How can I calculate it?


Answer



It looks squeezable.




\begin{align}
\frac{n}{\sqrt{n^2+n}} \le \sum_{k=1}^n\frac{1}{\sqrt{n^2+k}} \le \frac{n}{\sqrt{n^2+1}}
\\
\\
\frac{1}{\sqrt{1+\frac{1}{n}}} \le \sum_{k=1}^n\frac{1}{\sqrt{n^2+k}} \le \frac{1}{\sqrt{1+\frac{1}{n^2}}}
\end{align}

Both bounds tend to $1$ as $n\to\infty$, so the sum is squeezed to $1$.


Thursday 27 April 2017

Explaining invertible matrices with linear transformations

Suppose that $A$ and $B$ are square matrices and that $AB$ is invertible. Using the interpretation of multiplication by $A$ (or $B$) as a linear transformation from $\Bbb{R}^n \to \Bbb{R}^n$, explain why both $A$ and $B$ must be invertible.



So I think it has to do with $x \mapsto Ax$ and $x \mapsto Bx$ being onto and one-to-one and then using the invertible matrix theorem, but I don't quite understand how to answer this precisely.

number theory - Visualizing quadratic residues and their structure




[I corrected the pictures and deleted one question due to user i707107's valuable hint concerning cycles.]






Visualizing the functions $\mu_{n\%m}(k) = kn\ \%\ m$ (with $a\ \%\ b$ meaning $a$ modulo $b$) as graphs reveals lots of facts of modular arithmetic, among others the fixed points of $\mu_{n\%m}$ and the fact that $\mu_{n\%m}$ acts as a permutation $\pi^n_m$ of $[m] = \{0,1,\dots,m-1\}$ iff $n$ and $m$ are coprime. Furthermore the cycle structure and spectrum of $\pi^n_m$ can be visualized and related to number theoretic facts about $n$ and $m$.



This is how the graph for $\mu_{3\%64}(k) = 3k\ \%\ 64$ looks when highlighting permutation cycles (the shorter the stronger):



[figure: graph of $\mu_{3\%64}$ with its permutation cycles highlighted]




When visualizing the function $f^2_{\%m}(k) = k^2\ \%\ m$ (which gives the quadratic residue of $k$ modulo $m$) in the same way as a graph, other observations can be made and tried to relate to facts of number theory, esp. modular arithmetic:



[figure: graphs of $f^2_{\%m}$ for various moduli $m$]



These graphs are not as symmetric and regular as the graphs for $\mu_{n\%m}$, but observations can be made nevertheless:




  • the image of ${f^2_{\%m}}$, i.e. those $n$ with ${f^2_{\%m}}(k) = n$ for some $k < m$ (black dots)


  • number and distribution of fixed points with ${f^2_{\%m}}(k) = k$ (fat black dots)



  • cycles with ${f^2_{\%m}}^{(n)}(k) = k$ (colored lines)


  • parallel lines (not highlighted)




My questions are:





  • How can the symmetric distribution of image points $n$ (with $f^2_{\%61}(k)=n$ for some $k$, black dots in the picture below) be explained?


  • Can there be more than one cycle of length greater than 1 for $f^2_{\%m}$?



  • How does the length of the cycles depend on $m$?


  • How does the "parallel structure" depend on $m$?





With "parallel structure" I mean the number and size of groups of parallel lines. For example, $f^2_{\%8}$ has two groups of two parallel lines, $f^2_{\%12}$ has two groups of three parallel lines. $f^2_{\%9}$ has no parallel lines.



For $f^2_{\%61}$ one finds at least four groups of at least two parallel lines:



[figure: graph of $f^2_{\%61}$, showing groups of parallel lines]




For other prime numbers $m$ one finds no parallel lines at all, esp. for all primes $m\leq 11$ (for larger ones it is hard to tell).


Answer



This is not a complete answer to all of your questions. This is to show you some things you need to investigate. The first question is answered. The second question has an example. I do not know complete answers to the third and fourth questions, but I give a try on explaining your plot of $m=61$.



From your last sentences, it looks like you are interested in the case when $m$ is a prime. Let $m=p$ be an odd prime. Then consider $p\equiv 1$ mod $4$, and $p\equiv 3$ mod $4$.



In the former case $p\equiv 1$ mod $4$, we see the symmetric black dots. This is because the Legendre symbol at $-1$ is $1$. That is
$$
\left( \frac{-1}p \right)=1.

$$

This means $-1$ is a square of something in $\mathbb{Z}/p\mathbb{Z}$. Suppose $x\equiv y^2$ mod $p$, then we have $-x \equiv z^2$ mod $p$ for some $z\in\mathbb{Z}/p\mathbb{Z}$.



Your example $m=61$ is a prime that is $1$ mod $4$. Thus, we have a symmetric black dots.



In general when $p$ is an odd prime, the image of the square mapping is
$$\{ x^2 \ \mathrm{mod} \ p| 0\leq x \leq \frac{p-1}2 \}.$$
Note that the black dots represent image of the square mapping.



Thus, the number of black dots is $\frac{p+1}2$. In your example of $m=61$, we have $31$ black dots.




Now we use a primitive root $g$ in $\mathbb{Z}/p\mathbb{Z}$. Then any element $x\in \mathbb{Z}/p\mathbb{Z} - \{0\}$, we have some integer $a$ such that $x\equiv g^a$ mod $p$. Then a cycle formed by square mapping which includes $x$ can be written as
$$
\{g^{a\cdot 2^k} \ \mathrm{mod} \ p| k=0, 1, 2, \ldots \}.
$$

To see if we have cycles, try solving
$$
a\cdot 2^k \equiv a \ \mathrm{mod} \ p-1.
$$




In your plot of $m=61$, we have a primitive root $g=10$ and the following are cycles of length greater than $1$. All of these should be considered with modulo $61$.
$$
(g^{20} g^{40}),
$$

$$
(g^4 g^8 g^{16} g^{32}),
$$

$$
(g^{12} g^{24} g^{48} g^{36}),
$$


$$
(g^{56} g^{52} g^{44} g^{28})
$$

I am not sure if you consider these as cycles, because there can be tails leading into them, such as
$$
g^5 \mapsto g^{10} \mapsto g^{20},
$$

which then enters the cycle $(g^{20} g^{40})$.
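The cycle structure is easy to explore by machine. The following sketch (mine) lists the cycles of the squaring map mod $61$; the cycles it reports should match the ones above, e.g. the $2$-cycle $(g^{20}\, g^{40}) = (13\ 47)$ for $g=10$:

    def squaring_cycles(m):
        # return the cycles of k -> k^2 mod m as sorted tuples
        cycles = set()
        for start in range(m):
            x = start
            for _ in range(m):          # iterate long enough to land on a cycle
                x = x * x % m
            cycle, y = [x], x * x % m   # walk the cycle once to record it
            while y != x:
                cycle.append(y)
                y = y * y % m
            cycles.add(tuple(sorted(cycle)))
        return sorted(cycles, key=len)

    for c in squaring_cycles(61):
        if len(c) > 1:
            print(c)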


Wednesday 26 April 2017

modular arithmetic - how to solve $ax+by=c \bmod p$?



Given $a$, $b$, $c$ (integers), and $p$ (prime),



Is there any general solution for $ax+by=c \mod p$?



I found that it has similar form to solving $ax=c \mod p$, but cannot find the connection between these two.


Answer



Assume that $p$ does not divide $a$ or $b$. Let $x_0$ be a solution of $ax\equiv 1\pmod{p}$, and let $y_0$ be a solution of $by\equiv 1 \pmod p$. Then all solutions $(x,y)$ of $ax+by \equiv c\pmod{p}$ are given by
$$x\equiv tx_0 \pmod{p}, \qquad y\equiv (c-t)y_0 \pmod{p},$$

where $t$ ranges over the integers from $0$ to $p-1$.



Note that $x\equiv tx_0\pmod{p}$ has infinitely many (closely related) solutions, as does $y\equiv (c-t)y_0\pmod{p}$. So we could write the general solution in the more cumbersome form $x=tx_0+mp$, $y=(c-t)y_0 +np$. Here $m$ and $n$ range over the integers, and $t$ ranges over the integers from $0$ to $p-1$.


prime factorization - Why does $\sum\limits_{k=1}^\infty \lfloor m/(n^k)\rfloor$ give you the number of times that $n$ divides $m!$?




If $n$ is a prime less than $m$, with $n,m \in \mathbb N$, why does $$\sum_{k=1}^\infty \left\lfloor \frac{m}{n^k}\right\rfloor$$
give you the number of times that $n$ divides $m!$?



Examples:




$n=13$



$m=321$



$\sum\limits_{k=1}^\infty \lfloor 321/(13^k)\rfloor=25$



$n=5$



$m=321$




$\sum\limits_{k=1}^\infty \lfloor 321/(5^k)\rfloor=78$



In fact,



FactorInteger[321!] = {{2, 318}, {3, 157}, {5, 78}, {7, 51}, {11, 31},
{13, 25}, {17, 19}, {19, 16}, {23, 13}, {29, 11}, {31, 10}, {37, 8},
{41, 7}, {43, 7}, {47, 6}, {53, 6}, {59, 5}, {61, 5}, {67, 4}, {71, 4},
{73, 4}, {79, 4}, {83, 3}, {89, 3}, {97, 3}, {101, 3}, {103, 3}, {107, 3},
{109, 2}, {113, 2}, {127, 2}, {131, 2}, {137, 2}, {139, 2}, {149, 2},
{151, 2}, {157, 2}, {163, 1}, {167, 1}, {173, 1}, {179, 1}, {181, 1},
{191, 1}, {193, 1}, {197, 1}, {199, 1}, {211, 1}, {223, 1}, {227, 1},
{229, 1}, {233, 1}, {239, 1}, {241, 1}, {251, 1}, {257, 1}, {263, 1},
{269, 1}, {271, 1}, {277, 1}, {281, 1}, {283, 1}, {293, 1}, {307, 1},
{311, 1}, {313, 1}, {317, 1}}


and so on, for every $n,m \in \mathbb N$ with $n$ prime and $n < m$.


Can someone explain?


Answer



There are $\lfloor \frac{m}{n} \rfloor $ numbers up to $m$ that are multiples of $n$.
Amongst these numbers, every $n$-th one is also divisible by $n^2$, i.e. is a multiple of it. So $\lfloor \frac{1}{n} \lfloor \frac{m}{n} \rfloor \rfloor= \lfloor \frac{m}{n^2} \rfloor $ additional factors of $n$ appear. Because we already counted every multiple of $n^2$ once as a multiple of $n$ in the previous step, we just add $1$ to the total sum for each such multiple. An $n$-th of all these square-multiples is in turn a multiple of $n^3$...
This continues for each power of $n$, until some number $x$ such that $n^x > m$. Of course we can write this as an infinite sum with $x \rightarrow \infty$ as well: we then add a finite number of nonzero summands and an infinite number of summands that are zero.



This doesn't hold for composite $n$ as then the prime factors that $n$ is composed of can also be combined by multiplying two other numbers less than $m$, where the product contains all prime factors of $n$.
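The counting argument above is Legendre's formula, and it takes only a few lines of code to verify (a sketch reproducing the question's two examples):

    def multiplicity_in_factorial(m, n):
        # exponent of the prime n in m!, i.e. the sum of floor(m / n^k)
        total, power = 0, n
        while power <= m:          # the terms vanish once n^k > m
            total += m // power
            power *= n
        return total

    print(multiplicity_in_factorial(321, 13))   # 25
    print(multiplicity_in_factorial(321, 5))    # 78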


calculus - How to evaluate the following limit? $\lim\limits_{x\to\infty}x\left(\frac\pi2-\arctan x\right).$

How can I evaluate this limit:



$$\lim_{x\to\infty}x\left(\frac\pi2-\arctan x\right).$$



I know that the correct answer is $1$, but why?

algebra precalculus - Simplify expression $-\ln(2)2x\left(\frac{1}{2}\right)^{x^2+1}$

I can't seem to see the how the expression was simplified from



$$-\ln(2)2x\left(\frac{1}{2}\right)^{x^2+1}$$



to



$$-\ln(2)x\left(\frac{1}{2}\right)^{x^2}$$
I am sure I am missing something, and it is probably a simple solution.



Please help.

derivatives - Application of Mean Value Theorem and Interval



Using the mean value theorem establish the inequality $$7\frac{1}{4}<\sqrt{53}<7\frac{2}{7}$$



This is obviously a true statement but can you help me form the interval and what function I should use to prove this using the mean value theorem? I've only done problems where the interval is given.




Thanks!


Answer



You can consider the function $f:[0,\infty)\to \mathbb{R}$ given by $f(x)=\sqrt{x}$ and the interval $[49, 53]$. By the Mean Value Theorem there exists some $c\in (49, 53)$ such that
$$
\frac{\sqrt{53}-7}{4}= \frac{1}{2\sqrt{c}}
$$
or equivalently,
$$
\frac{\sqrt{53}-7}{2}= \frac{1}{\sqrt{c}}
$$

Because $49<c<53<64$, we have $7<\sqrt{c}<8$, and hence
$$
\frac{1}{8}<\frac{\sqrt{53}-7}{2}<\frac{1}{7}
$$
Which implies that
$$
7+\frac{1}{4}<\sqrt{53}<7+ \frac{2}{7}
$$
(and I suppose this is the inequality you wanted to show, not the other one you gave in the problem)


combinatorics - Natural set to express any natural number as sum of two in the set



Any natural number can be expressed as the sum of three triangular numbers, or as four square numbers. The natural analog for expressing numbers as the sum of two others would apparently be the sum of two "linear" numbers, but all natural numbers are "linear", so this is rather unsatisfying. Is there a well-known sparser set of integers (or, half-integers, for that matter) that has this property?


Answer



Assuming Goldbach's conjecture, you can take the set of all primes and successors of primes (plus some small numbers). These have density $2/\log n$.




Not assuming the conjecture, you can take primes, almost-primes and successors of primes (plus some small numbers) - this is the famous Chen's theorem. The resulting density is $\log\log n/\log n$.



Another suggestion is to take the set of all numbers whose binary expansion is "supported" on only odd powers of $2$ or only even powers of $2$. The resulting density is roughly $2/\sqrt{n}$, so this is close to optimal (you need at least $1/\sqrt{n}$).


calculus - Showing $\frac{x}{1+x}<\log(1+x)<x$

I want to show that $$\frac{x}{1+x}<\log(1+x)<x$$ for all $x>0$ using the mean value theorem. I tried to prove the two inequalities separately.



$$\frac{x}{1+x}<\log(1+x) \Leftrightarrow \frac{x}{1+x} -\log(1+x) <0$$




Let $$f(x) = \frac{x}{1+x} -\log(1+x).$$ Since $$f(0)=0$$ and $$f'(x)= \frac{1}{(1+x)^2}-\frac{1}{1+x}<0$$ for all $x > 0$, $f(x)<0$ for all $x>0$. Is this correct so far?



I go on with the second part:
Let $f(x) = \log(x+1)$. Choose $a=0$ and $x>0$ so that there is, according to the mean value theorem, an $x_0$ between $a$ and $x$ with



$f'(x_0)=\frac{f(x)-f(a)}{x-a} \Leftrightarrow \frac{1}{x_0+1}=\frac{ \log(x+1)}{x}$.



Since $$x_0>0 \Rightarrow \frac{1}{x_0+1}<1.$$ $$\Rightarrow 1 > \frac{1}{x_0+1}= \frac{ \log(x+1)}{x} \Rightarrow x> \log(x+1)$$

functional equations - If $f(xy) = f(x) + f(y)$, show that $f(.)$ can only be a logarithmic function.




As the question states, show that the property exhibited can only be satisfied by a logarithmic function i.e no other family of functions can satisfy the above property.


Answer



Continuity is necessary.




If $F(x+y)=F(x)+F(y)$, for all $x,y$ and $F$ discontinuous (such $F$ exist due to the Axiom of Choice, and in particular, the fact that $\mathbb R$ over $\mathbb Q$ possesses a Hamel basis) and
$f(x)=F(\log x)$, then $f(xy)=f(x)+f(y)$, and $f$ is not logarithmic!


elementary number theory - Euclid GCD intuition



I was reading someone's explanation of Euclid's GCD algorithm. I understand some of what the person explains, but I don't get a few points. This is the explanation:



If both the large numbers $a$ and $b$ have a common factor, say $f$, then
$$a=n_1f$$
$$b=n_2f$$
where (by definition) both $n_1$ and $n_2$ are integers. Lets say $a > b$. Then it follows that $n_1 > n_2$. Now if we subtract the two numbers:
$a-b= (n_1-n_2)f$, where $n_1-n_2$ is also an integer.




This intuitively shows you that if two numbers share a common divisor, then the difference between the two numbers also shares the same divisor.



Euclid's theorem exploits this property of natural numbers to make the subsequent dividends smaller with each iteration, until the common factor is revealed.



Since $$a\bmod b = a - kb\quad(\text{where } kb\le a)$$
$$= n_1f - kn_2f\quad(\text{where } n_1 > kn_2)$$
$$= (n_1-kn_2)f$$



$a \mod b$ shares this factor with $b$ as well. Now that $b$ is the greater of the two numbers, we pass $b$ as the first argument. gcd(b,a%b) will give us the answer.




Now, for the base case, if the second argument is equal to 0, that means that the previous division had no remainder. Put in simpler terms, the previous b was an exact multiple of a. We know there was no exact multiple before (larger than) that one, else the algorithm would have terminated. Therefore, this must be our answer and we return $a$.



My question is: if $kb\le a$, why is $n_1 > kn_2$? Is the author implicitly saying that this is for the case that $kb < a$?

Also, I get that if two numbers share a common factor then their difference does too, but why explain this first and then explain the modulo? Is there a relation between the two?



These are my two questions.


Answer



The explanation is not well written. There’s really no need to introduce $a\bmod b$, though it does no real harm to do so. The real point is that there are unique integers $k$ and $r$ such that $a=kb+r$ and $0\le r<b$, so that


$$0\le r=a-kb=n_1f-kn_2f=(n_1-kn_2)f\;,$$



so $f$ is a factor of $r$.



Although it doesn’t actually say so, at this point the explanation splits into two cases, and your guess is correct: the author treats first the case $r>0$, which of course is equivalent to $n_1>kn_2$. It could be more clearly written something like this:




If $r>0$, we invoke the algorithm on $b$ and $r=a\bmod b$, since we’ve just seen that $b$ and $r$ have the same common factors (and hence the same greatest common divisor) as $a$ and $b$. Note that $r<b$, so the new pair of arguments is smaller.


If $r=0$, then $a$ is a multiple of $b$, and therefore $\gcd(a,b)=b$. This is the first step at which the remainder was $0$, as otherwise the algorithm would have terminated at an earlier step, so we return $b$ (not $a$).



Finally, the algorithm must terminate, as the second argument is always a positive integer and decreases at each step.
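Putting the corrected description into code (a sketch; the inputs are just an example):

    def gcd(a, b):
        # Euclid: gcd(a, b) = gcd(b, a mod b); when the remainder hits 0,
        # the current first argument (the previous b) divided evenly and
        # is returned as the gcd
        while b:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))   # 21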



calculus - Evaluating $int P(sin x, cos x) text{d}x$



Suppose $\displaystyle P(x,y)$ a polynomial in the variables $x,y$.



For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$.




Is there a general method which allows us to evaluate the indefinite integral




$$ \int P(\sin x, \cos x) \text{d} x$$




What about the case when $\displaystyle P(x,y)$ is a rational function (i.e. a ratio of two polynomials)?



Example of a rational function: $\displaystyle \frac{x^2y + y^3}{x+y}$.







This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.



and here: List of Generalizations of Common Questions.


Answer



There are several general approaches to integrals that involve expressions with sines and cosines, polynomial expressions being the simplest ones.



Weierstrass Substitution




A method that always works is Weierstrass substitution, which will turn such an integral into an integral of rational functions, which in turn can always be solved, at least in principle, by the method of partial fractions. This works even for rational functions of sine and cosine, as well as functions that involve the other trigonometric functions.



Weierstrass substitution replaces sines and cosines (and by extension, tangents, cotangents, secants, and cosecants) by rational functions of a new variable. The identities begin with the trigonometric substitution $t = \tan\frac{x}{2}$, with $-\pi\lt x\lt \pi$, which yields
$$\begin{align*}
\sin x &= \frac{2t}{1+t^2}\\
\cos x &= \frac{1-t^2}{1+t^2}\\
dx &= \frac{2\,dt}{1+t^2}.
\end{align*}$$
For example, if we have
$$\int \frac{\sin x-\cos x}{\sin x+\cos x}\,dx$$

using the substitution above we obtain:
$$\begin{align*}
\int\frac{\sin x-\cos x}{\sin x+\cos x}\,dx &= \int\left(\frac{\quad\frac{2t}{1+t^2} - \frac{1-t^2}{1+t^2}\quad}{\frac{2t}{1+t^2} + \frac{1-t^2}{1+t^2}}\right)\left(\frac{2}{1+t^2}\right)\,dt\\
&= \int\left(\frac{\quad\frac{2t-1+t^2}{1+t^2}\quad}{\frac{1+2t-t^2}{1+t^2}}\right)
\left(\frac{2}{1+t^2}\right)\,dt\\
&= \int\left(\frac{2t-1+t^2}{2t+1-t^2}\right)\left(\frac{2}{1+t^2}\right)\,dt\\
&= 2\int\frac{2t-1+t^2}{(1+t^2)(2t+1-t^2)}\,dt
\end{align*}$$
which can then be integrated by the method of partial fractions.




Substitutions and Reduction formulas



However, there are usually faster methods, particularly for polynomial expressions. By breaking up the integral into a sum of integrals corresponding to the monomials, the problem reduces to solving integrals of the form
$$\int \left(\sin x\right)^n \left(\cos x\right)^m\,dx$$
with $n$ and $m$ nonnegative integers. The standard methods then are:




  1. If $n$ is odd, then "reserve" one sine, and transform the others into cosines by using the identity $\sin^2 x = 1-\cos^2x$. Then do the change of variable $u=\cos x$ to transform the integral into the integral of a polynomial. For example,
    $$\int \left(\sin x\right)^5\left(\cos x\right)^2\,dx,$$
    then take $(\sin x)^5$, and write it as

    $$\sin x(\sin x)^4 = \sin x(\sin^2x)^2 = \sin x(1-\cos^2 x)^2.$$
    Then setting $u=\cos x$ and $du = -\sin x\,dx$, we get
    $$\int\left(\sin x\right)^5\left(\cos x\right)^2\,dx = \int \sin x\left(1-\cos^2x\right)^2\left(\cos x\right)^2\,dx = -\int (1-u^2)^2u^2\,du,$$
    which can be solved easily.


  2. If $m$ is odd, then do the same trick by "reserving" one cosine and using the substitution $u=\sin x$. For example,
    $$\int \sin^2x\cos^3x\,dx = \int \sin^2x(\cos^2x)\cos x\,dx = \int(\sin^2x)(1-\sin^2x)\cos x\,dx$$
    and then setting $u=\sin x$, $du = \cos x\,dx$, we get
    $$\int \sin^2x\cos^3x\,dx = \int u^2(1-u^2)\,du,$$
    which can be solved easily again.


  3. If $n$ and $m$ are both even, then either replace all the sines with cosines or vice versa, using $\sin^2 x = 1 - \cos^2x$ or $\cos^2x = 1-\sin^2 x$, and expand. This will leave integrals of the form

    $$\int \sin^n x\,dx\qquad\text{or}\quad \int \cos^m x\,dx$$
    with $n$ and $m$ positive and even. In that situation, one can use the reduction formulas, which can be obtained by using integration by parts:
    $$\begin{align*}
    \int \sin^n x\,dx &= - \frac{1}{n}\sin^{n-1} x\cos x + \frac{n-1}{n}\int \sin^{n-2}x\,dx,\\
    \int \cos^m x\,dx &= \frac{1}{m}\cos^{m-1} x\sin x + \frac{m-1}{m}\int \cos^{m-2}x\,dx.
    \end{align*}$$
    By repeated application of these formulas, one eventually ends up with an integral of the form $\int \,dx$ which can be solved directly.




The process can be shortened if you happen to spot or know some trigonometric identities; for example, the power reduction formulas allow you to replace powers of sines or cosines by expressions of multiple angles, e.g.,

$$\sin^4\theta = \frac{3-4\cos(2\theta)+\cos(4\theta)}{8}$$
could replace a single integral with three integrals that can be done fairly easily via substitution.



Other methods



If you are comfortable enough integrating functions with a complex variable in it, then the method described by Qiaochu will transform any integral that involves sines and cosines in any way into an integral that involves exponential functions instead.


Tuesday 25 April 2017

real analysis - How to calculate $\sum\frac{1}{(4k-1)(4k+4)}$?

I'm trying to calculate $\sum_{k=1}^\infty\frac{1}{(4k-1)(4k+4)}$ using telescopic sums. I've already proved this equality: $\sum\frac{1}{(4k-1)(4k+4)}=\frac{1}{5}\sum\big(\frac{1}{4k-1}-\frac{1}{4k+4}\big)$. The problem is I can't cancel the terms of this sum.



I need help



Thanks

algebra precalculus - Solving $2\sin(\theta + 17) = \frac {\cos (\theta +8)}{\cos (\theta + 17)}$



For $0<\theta<360$




$$2\sin(\theta + 17) = \dfrac {\cos (\theta +8)}{\cos (\theta + 17)}$$



$$\Longrightarrow \sin(2\theta + 34)= \sin (82-\theta)$$



since sine is an odd function



$$2\theta + 34 = 82-\theta + 360n$$



This gives $ \theta = 16, 136, 256$




But in the solution paper I can also see $64$.
Why is it missing from the above method?



I can get $64$ (due to $\cos$)when I use
$$ \sin(\alpha) - \sin(\beta) = 2\sin\left(\dfrac {\alpha - \beta} 2\right)\cos\left(\dfrac {\alpha + \beta} 2\right)$$



But the first method I use should still work no?


Answer



It is true that $\sin(2\theta + 34)= \sin (82-\theta)$ if $2\theta + 34 = 82-\theta + 360n$.




It is not true that $\sin(2\theta + 34)= \sin (82-\theta)$ only if $2\theta + 34 = 82-\theta + 360n$.



Remember that $\sin(180^\circ-\alpha) = \sin\alpha$. Thus if $\sin(2\theta+34^\circ) = \sin(82^\circ-\theta+n360^\circ)$ then
$$
\sin(2\theta+34^\circ) = \sin(180^\circ-\Big(82^\circ-\theta+n360^\circ\Big))
$$
and you'll get additional solutions.


calculus - Simplification of a nested sum



I have a nested sum like so:
$$\underbrace{\sum_{k_1=k_0}^{k^*} \ ... \sum_{k_n=k_{n-1}}^{k^*}}_{\text{n times}} 1\quad\ \text{with}\ \ n, k_0, k^* \in \mathbb{N},\ k^*\geq k_0$$



Is there a general, shorter representation that spares me calculating the actual sums?


Answer




$\displaystyle\sum_{k_0\le k_1\le\cdots\le k_n\le k^*}1=\sum_{1\le k_1-k_0+1< k_2-k_0+2<\cdots< k_n-k_0+n\le k^*-k_0+n}1={k^*-k_0+n\choose n}$
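The identity is easy to spot-check by brute force (a sketch of mine; math.comb needs Python 3.8+):

    import math
    from itertools import product

    def nested_sum(n, k0, kstar):
        # count tuples with k0 <= k_1 <= ... <= k_n <= kstar by exhaustion
        return sum(1 for ks in product(range(k0, kstar + 1), repeat=n)
                   if all(ks[i] <= ks[i + 1] for i in range(n - 1)))

    n, k0, kstar = 3, 2, 6
    print(nested_sum(n, k0, kstar), math.comb(kstar - k0 + n, n))   # 35 35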


calculus - Usage of mean value theorem; bounded derivative and open interval



Let $f : (0,1) \to \mathbb{R}$ be a function such that $ |f'(x)| \leq 5 $ on the open interval $(0,1)$. Prove that $\lim_{x \to 1^-} f(x)$ exists.



It involves the derivative and the actual function itself, so I think I have to somehow use the mean value theorem. Also, $f$ is continuous on $(0,1)$ and differentiable on $(0,1)$ (because the derivative exists there).



But then, the function is defined on an open interval, so the requirements for the mean value theorem aren't satisfied. I'm guessing we have to consider intervals of the form $(a,b)$ with $a > 0$ and $b < 1$.



Lastly, I don't see the significance of the $5$... Is it only there to establish that the derivative is bounded, or does the number itself have some significance (would the same thing hold if we had $3$, for example)?




Please give me a hint, not the solution. Something like "consider the mean value theorem on intervals of the form ... " would be very helpful.


Answer



Pick a sequence $(x_{n})\subseteq(0,1)$ such that $x_{n}\rightarrow 1$. Then
\begin{align*}
|f(x_{n})-f(x_{m})|=|f'(\eta_{n,m})||x_{n}-x_{m}|\leq 5|x_{n}-x_{m}|,
\end{align*}
where $\eta_{n,m}$ is chosen by Mean Value Theorem. So $(f(x_{n}))$ is convergent. For other sequence $(y_{n})$ such that $y_{n}\rightarrow 1$, consider the sequence $(z_{n})$ defined by $z_{2n}=x_{n}$, $z_{2n+1}=y_{n}$ to claim that the limits of $(f(x_{n}))$ and $(f(y_{n}))$ are the same. So $\lim_{x\rightarrow 1^{-}}f(x)$ exists.


Motivation for definition of logarithm in Feynman's Lectures on Physics



I'm not sure if the title is descriptive enough; feel free to change it if you come up with something better.




I've been reading through Feynman's Lectures on Physics. In the first volume, he dedicates a chapter to just math. He starts with the natural numbers and addition by 1, and builds his way to the complex numbers, with the purpose of proving Euler's formula $e^{ix} = \cos x + i \sin x$. It's a very nice read, but there's a part in the middle I'm having trouble understanding.



After having introduced irrational numbers, he begins to explain how to calculate (or define, rather) irrational powers as succesive approximations of rational powers and how to calculate logarithms, which is a related problem. In particular, he gives as an example how to find solutions to the equations $x = 10^\sqrt{2}$ and $x = \log_{10} 2$. To do that, he makes a table of successive square roots of ten, by calculating $10^s$ for $s = 1, \frac1{2}, \frac1{4}, \frac1{8}, \cdots , \frac1{1024}$. He remarks that this is enough to calculate some logarithms, because if we already calculated $10^s$ and we want its logarithm, it is simply $s$.



He also notices that as we take square roots of numbers that get closer and closer to $1$, there is a pattern: $\sqrt{1+\delta}$ is approximately $1+\delta/2$, so, for numbers that are already close to $1$ (such as $10^{1/1024}$, which is the last square root he calculated), instead of keeping on taking square roots we can just "guess" at the result with pretty good accuracy.



Now, here's the part I don't understand: Having calculated $10^{1/1024}$ to be approximately $1.0022511$, he says the following:




[...] it is clear that, to an excellent approximation, if we take

another root, we shall get 1.00112 something, and rather than actually
take all the square roots, we guess at the ultimate limit. When we take a small fraction $\Delta$ of $1024$ as $\Delta$ approaches zero,
what will the answer be? Of course it will be some number close to
$0.0022511 \Delta$. Not exactly $0.0022511 \Delta$, however -- we can
get a better value by the following trick: we substract the $1$, and
then divide by the power $s$. This ought to correct all the excesses
to the same value.




He then adds another column: for each $s$, in addition to $10^s$ there's $\frac{10^s-1}{s}$, and it looks like this converges to something as $s$ gets smaller. I recognize this as one of the usual formulas for the logarithm, but I don't follow why he introduced it. Later he uses this to make an approximation formula: $10^\Delta \approx 1+\Delta \ln 10$. I understand this, but I don't get where he got that from. Could someone clarify this?




I wasn't sure about asking this question because I thought it might be hard to understand if you've never read this chapter. If this is the case, let me know and I'll try to edit it a bit.


Answer



I'm not surprised you didn't understand this; it's rather uncharacteristically badly written. It should say "When we take a small fraction $\Delta$ of $1/1024$", not $1024$. The idea is that, since when we halve the exponent the distance from $1$ is roughly halved, the distance from $1$ (which he calls the "excess") is roughly proportional to the exponent, so $10^s\approx1+\alpha s$ for some $\alpha$. The "excess" is $\alpha s$, and by dividing through by $s$ he gets increasingly accurate approximations of $\alpha\approx(10^s-1)/s$.



The reason for $\alpha=\log10$ is that $10^s=\mathrm e^{s\log10}\approx1+s\log10$ for small $s$.
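Feynman's table is easy to regenerate (a sketch): take successive square roots of $10$ and watch $(10^s-1)/s$ settle toward $\ln 10 \approx 2.302585$.

    import math

    s, value = 1.0, 10.0
    for _ in range(10):               # s = 1/2, 1/4, ..., 1/1024
        s /= 2
        value = math.sqrt(value)      # 10^s via repeated square roots
        print(f"10^(1/{int(1 / s)}) = {value:.7f}"
              f"   (10^s - 1)/s = {(value - 1) / s:.7f}")

    print(math.log(10))               # 2.302585...

The last row shows $10^{1/1024} \approx 1.0022511$, the value quoted above.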


Monday 24 April 2017

calculus - limit of the sequence $a_n=1+\frac{1}{a_{n-1}}$ and $a_1=1$




Problem: Find with proof limit of the sequence $a_n=1+\frac{1}{a_{n-1}}$ with $a_1=1$ or show that the limit does not exist.



My attempt:



I have failed to determine the existence. However if the limit exists then it is easy to find it.



The sequence is not monotonic and I have failed to find any monotonic subsequence. The sequence is clearly bounded below by $1$. I have observed that the sequence is a continued fraction, so it alternately increases and decreases.



So, please help me.


Answer




As you already stated, the sequence alternately increases and decreases. Therefore you have found a monotone subsequence, namely $a_1,a_3,a_5,\dots$. And you have another monotone subsequence, namely $a_2,a_4,a_6,\dots$. You can easily find the recursion formula $a_n = f(a_{n-2})$.



Now prove that (a) both of these subsequences are indeed monotone and bounded and (b) that their limits are the same.



Alternative: Prove that $|a_j - a_{j-1}|$ is bounded by a geometric sequence, which will prove that the sequence in question is Cauchy.
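Numerically the picture is exactly as described (a quick sketch): the odd-indexed terms increase and the even-indexed terms decrease toward the common limit $(1+\sqrt 5)/2$.

    a, terms = 1.0, [1.0]
    for _ in range(12):
        a = 1 + 1 / a
        terms.append(a)

    print(terms[0::2])          # a_1, a_3, a_5, ... increasing
    print(terms[1::2])          # a_2, a_4, a_6, ... decreasing
    print((1 + 5 ** 0.5) / 2)   # 1.6180339887..., the golden ratio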


find the sum of the following series using Maclaurin's expansion



Find the sum of the following series:
$$\sum_{n=0}^\infty {x^{n}}{\sinh(5n+5)}$$




The Maclaurin series for $\sinh(5n+5)$ is as follows:

$$\sinh(5n+5) = \sum_{k=0}^\infty \frac{(5n+5)^{2k+1}}{(2k+1)!}$$



And now I do not know how to continue to find this sum of series , can anyone help me .



Thank you all !


Answer



Assuming that $|x|<\frac{1}{e^5}$ (otherwise the series is divergent) you just have a geometric series:




$$\sum_{n\geq 0} x^n \sinh(5n+5) = \frac{1}{2}\left(\frac{e^5}{1-e^5 x}-\frac{e^{-5}}{1-e^{-5}x}\right)=\frac{e^{10}-1}{2(e^5-x)(1-e^5 x)}.$$


linear algebra - Inverse of a lower triangular matrix




I got the following question to solve:



Given the lower triangular matrix



\begin{bmatrix}
A_{11} & 0 \\
A_{21} & A_{22}
\end{bmatrix}



of size $n \times n$ (n is a power of 2) where $A_{11}$, $A_{21}$ and $A_{22}$ are matrices of size $(n/2) \times (n/2)$, show that the inverse is,




\begin{bmatrix}
A_{11}^{-1} & 0 \\
-A_{22}^{-1}A_{21}A_{11} & A_{22}^{-1}
\end{bmatrix}



how do I go about to solve this problem?



Edit: the matrix is invertible.




Edit: the second matrix should be changed to:
\begin{bmatrix}
A_{11}^{-1} & 0 \\
-A_{22}^{-1}A_{21}A_{11}^{\color{red}{-1}} & A_{22}^{-1}
\end{bmatrix}
The inverse was missing.


Answer



Clearly, you are having some trouble evaluating, and I think this is because of a typo! Evaluating with normal matrix multiplication I got
$$
\begin{bmatrix}
A_{11} & 0 \\

A_{21} & A_{22}
\end{bmatrix}\begin{bmatrix}
A_{11}^{-1} & 0 \\
-A_{22}^{-1}A_{21}A_{11} & A_{22}^{-1}
\end{bmatrix} = \begin{bmatrix}
A_{11}A_{11}^{-1} & 0 \\
A_{21} A_{11}^{-1} -A_{22}A_{22}^{-1}A_{21}A_{11} & A_{22}A_{22}^{-1}
\end{bmatrix}
$$
Everything evaluates trivially except for this term

$$A_{21} A_{11}^{-1} - A_{21}A_{11}$$
Which clearly does not equal $0$ all the time.
Thus I believe there was a typo made here and that the $A_{11}$ at the end should be an $A_{11}^{-1}$, as then the above expression reduces to the identity matrix,
$$
\begin{bmatrix}
I & 0 \\
0 & I
\end{bmatrix}
$$
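With the typo fixed, the block formula checks out numerically (a sketch using NumPy; the random blocks are arbitrary, with a dominant diagonal so they are safely invertible):

    import numpy as np

    rng = np.random.default_rng(0)
    k = 3
    A11 = rng.standard_normal((k, k)) + 5 * np.eye(k)   # invertible block
    A22 = rng.standard_normal((k, k)) + 5 * np.eye(k)   # invertible block
    A21 = rng.standard_normal((k, k))

    A = np.block([[A11, np.zeros((k, k))],
                  [A21, A22]])

    inv11, inv22 = np.linalg.inv(A11), np.linalg.inv(A22)
    A_inv = np.block([[inv11, np.zeros((k, k))],
                      [-inv22 @ A21 @ inv11, inv22]])

    print(np.allclose(A_inv, np.linalg.inv(A)))   # True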


real analysis - How can you prove that a function has no closed form integral?



I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:




  • addition/subtraction

  • multiplication/division

  • raising to powers and roots


  • trigonometric functions

  • exponential functions

  • logarithmic functions



, which when differentiated gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.



What sort of techniques are used to prove statements like this? What is this branch of mathematics called?







Merged with "How to prove that some functions don't have a primitive" by Ismael:



Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in term of other simple functions.



I wonder how we can prove that kind of assertion?


Answer



It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, $g$ non-constant, the antiderivative



$$\int f(x)\exp(g(x)) \, \mathrm dx$$




can be expressed in terms of elementary functions if and only if there exists some rational function $h$ such that it is a solution to the differential equation:



$$f = h' + hg'$$



$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.



I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: http://www.sci.ccny.cuny.edu/~ksda/PostedPapers/liouv06.pdf



Liouville's original paper:





Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.




Michael Spivak's book on Calculus also has a section with a discussion of this.


calculus - Find the value of : $\lim_{x \to \infty} \sqrt{4x^2 + 4} - (2x + 2)$












$$\lim_{x \to \infty} \sqrt{4x^2 + 4} - (2x + 2)$$



So, I have an indeterminate form of $\infty - \infty$ and I tried multiplying by the conjugate; however, I seem to be left with another indeterminate form of $\frac{\infty}{\infty}$ and wasn't sure what else to do. Is there anything else I can do other than L'Hopital's rule?


Answer



$$\begin{align}
\sqrt{4x^2 + 4} - (2x+2) & = \frac{(\sqrt{4x^2 + 4} - (2x+2))(\sqrt{4x^2 + 4} + (2x+2))}{\sqrt{4x^2 + 4} + (2x+2)}\\
& = \frac{4x^2 +4 - 4(x+1)^2}{\sqrt{4x^2 + 4} + (2x+2)} = - \frac{8x}{\sqrt{4x^2 + 4} + (2x+2)}\\
& = - \frac{8}{\sqrt{4 + 4/x^2} + 2 + 2/x}
\end{align}
$$

Now take the limit as $x \rightarrow \infty$ to get the limit $-2$.


calculus - what is $\mathop {\lim }\limits_{n \to \infty } {2^{n/{{\log }_2}n}}\frac{{{{({{\log }_2}n)}^4}}}{{{n^3}}}$?



what is the limitation of this weird expression?
$$\mathop {\lim }\limits_{n \to \infty } {2^{n/{{\log }_2}n}}\frac{{{{({{\log }_2}n)}^4}}}{{{n^3}}}$$
I've worked on it for the whole night but I can't figure it out:(



Or no limitation exists at all?


Answer



HINT: Logarithms are negligible here. And the limit of $2^n/n^3$ is...


elementary number theory - We have $a^{n-1} \equiv 1 \pmod n$ but $a^{m} \not\equiv 1 \pmod n$ for every divisor $m$ of $n - 1$, other than itself. Prove that $n$ is prime.



The integers $a$ and $n > 1$ satisfy $a^{n-1} \equiv 1\ (mod\ n)$ but $a^{m} \not\equiv 1\ (mod\ n)$ for every divisor $m$ of $n - 1$, other than itself. Prove that $n$ is prime.



The way I went about proving it was,







We know for every integer $k \geq 1$, $a^k \equiv 1 (mod\ n) \iff d\ \vert\ k$, where $d$ is the order of $a$.



Note $a^{n-1} \equiv 1\ (mod\ n)$ and all divisors $d\ \vert\ n - 1$ with $d \neq n-1$ have $a^{d} \not\equiv 1\ (mod\ n)$. $\implies$ $n - 1$ is the order of $a$.



So we know $\{1, a, a^2, ..., a^{n-2}\}$ is a set of $n-1$ pairwise incongruent residues mod $n$, each coprime to $n$.
So at least $n - 1$ residues are coprime to $n \implies$ $n$ is prime






So assuming this proof is correct, I was wondering if anyone had a more formal justification / better explanation for the last two lines. My thought process was because we know the order of $a$ is $n-1$, all powers of $a$ are incongruent to each other and the set of incongruent $n -1$ elements forms a set of residues mod n.




And since there are $n -1$ elements less than and coprime to $n$, we get that $n$ must be prime.



Is this correct? If not, does anyone know of a better proof (using only elementary number theory techniques)?


Answer



Well, if $n-1$ is the order of $a$, then $n-1\mid \varphi(n)$. Since $\varphi(n)\leq n-1$, we know $\varphi(n)=n-1$. This means $n$ is prime.



Not necessarily a better proof, but shorter, and I'd say simpler too.


arithmetic - How can you multiply decimal values without using a multiplication or division operator?



I figured out a problem challenging me to write code to do this with integers, but I was wondering how I would have done it if it had been decimals, and I couldn't figure it out.


Answer



Let

$$a_0 = a_1 \cdot 10^{a_2} \qquad\text{and}\qquad b_0 = b_1 \cdot 10^{b_2} \tag{1}$$

with $a_1, b_1 \in \Bbb Z$ and $a_2, b_2 \in \Bbb Z$ and $a_2 \le 0$ and $b_2 \le 0$ and

$$10 \nmid a_1 \ \text{ whenever } a_2 < 0$$

and

$$10 \nmid b_1 \ \text{ whenever } b_2 < 0.$$

Both the numbers $a_0$ and $b_0$ are either integers or actual (finite length) decimals with this representation.




Now

$$a_0 b_0 = (a_1 b_1) \cdot 10^{a_2 + b_2}. \tag{2}$$
Of course if $a_2 + b_2 \lt 0$ we have to make sure that the form is 'fixed up' to give a pure representation - you have to check/cancel at least one trailing zero (least significant digit) in the product $a_0 b_0$ when represented as a string of digits.






Example: $a_0 = 5$ and $b_0 = 0.2$:

$$a_0 = 5 \cdot 10^{0}, \qquad b_0 = 2 \cdot 10^{-1}$$

$$a_0 b_0 = (5 \cdot 2) \cdot 10^{0 + (-1)} = 10 \cdot 10^{-1}$$

The exponent is negative and the integer part has $0$ at the far right end. So you remove a zero and increment the exponent:

$$a_0 b_0 = 1 \cdot 10^{0} = 1$$






Since the exponent isn't negative there is nothing to check - the product is an integer and we are writing it in standard $\text{integer or decimal}$ form using $\text{(2)}$.
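One possible coding of this scheme (my own sketch, since the question was originally about code; all helper names are mine): parse each decimal into an integer times a power of ten, multiply the integers by shift-and-add so that no * or / appears, and re-insert the decimal point at the end. Nonnegative inputs are assumed.

    def add_mul(x, y):
        # integer product by Russian-peasant shift-and-add: no * or /
        result = 0
        while y:
            if y & 1:
                result += x
            x <<= 1      # double x
            y >>= 1      # halve y
        return result

    def parse(s):
        # '0.2' -> (2, -1): integer mantissa and base-10 exponent (<= 0)
        if '.' in s:
            whole, frac = s.split('.')
            return int(whole + frac), -len(frac)
        return int(s), 0

    def multiply(s, t):
        (a1, a2), (b1, b2) = parse(s), parse(t)
        digits, exp = str(add_mul(a1, b1)), a2 + b2
        while exp < 0 and digits.endswith('0'):   # the 'fix up' step
            digits, exp = digits[:-1], exp + 1
        if exp < 0:
            digits = digits.rjust(-exp + 1, '0')  # pad so the point fits
            return digits[:exp] + '.' + digits[exp:]
        return digits

    print(multiply('5', '0.2'))     # 1
    print(multiply('2.5', '1.5'))   # 3.75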


real analysis - Prove the curve $(x(t),y(t))$ is contained and bounded within a larger square centered at the origin

I'm analyzing continuous functions on compact intervals and came up with this question from Arthur Mattuck - Introduction to analysis book. It exactly says:



Let $x(t)$ and $y(t)$ be continuous on $[a,b]$. As $t$ varies over this interval the point $(x(t),y(t))$ traces out a curve in the $xy$-plane. Prove this curve is contained within some large square centered at the origin, and show by example that this might not be true if the parameter interval is of the form $(a,b)$.



My approach:





  1. Compactness of the domain means a closed and bounded interval. Excluding an endpoint makes the interval non-compact, so a continuous function on it need not be bounded.


  2. Also, the distance of a point on the curve from the origin is $\sqrt{x(t)^2+y(t)^2}$, but I'm not sure if this is useful.




I don't know how to go over this. If I have to proof by contrapositive or what else. Any help would be appreciated.

probability - $P(Y le X)=int_0^{infty} P(Y le X | X=x)f_X(x)dx$

I was looking at a solution of a probability exercise and the author of the solution uses the formula $$P(Y \le X)=\int_0^{\infty} P(Y \le X | X=x)f_X(x)dx$$ where $X$, $Y$ are the random variables $f_X$ is the density function of the random variable $X$. From where does this result come from?

abstract algebra - Constructing a multiplication table for a finite field


Let $f(x)=x^3+x+1\in\mathbb{Z}_2[x]$ and let $F=\mathbb{Z}_2(\alpha)$, where $\alpha$ is a root of $f(x)$. Show that $F$ is a field and construct a multiplication table for $F$.




Can you please help me approach this problem? I've tried searching around, but I don't really know what I'm looking for!




Thanks.
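Since no answer is recorded here, a sketch of what the construction looks like in practice (mine): represent elements of $F$ as polynomials $b_2\alpha^2+b_1\alpha+b_0$ over $\mathbb{Z}_2$, encoded as 3-bit integers, and reduce products using $\alpha^3 = \alpha + 1$ (from $f(\alpha)=0$):

    def gf8_mul(u, v):
        # carry-less (XOR) multiplication of 3-bit polynomial codes
        prod = 0
        for i in range(3):
            if (v >> i) & 1:
                prod ^= u << i
        for i in (4, 3):               # reduce modulo x^3 + x + 1 (0b1011)
            if (prod >> i) & 1:
                prod ^= 0b1011 << (i - 3)
        return prod

    for u in range(8):
        print(*(gf8_mul(u, v) for v in range(8)))
    # every nonzero row is a permutation of {1,...,7}, so every nonzero
    # element has a multiplicative inverse: F is a field with 8 elements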

Sunday 23 April 2017

real analysis - How do we calculate this sum $\sum_{n=1}^{\infty} \frac{1}{n(n+1)\cdots(n+p)}$?




I know that this sum $$\sum_{n=1}^{\infty} \frac{1}{n(n+1)\cdots(n+p)}$$ ($p$ fixed) converges which can be easily proved using the ratio criterion, but I couldn't calculate it.



I need help in this part.



Thanks a lot.


Answer



Hint:
$$\frac{p}{n(n+1)\cdots(n+p)}=\frac{1}{(n)(n+1)\cdots (n+p-1)}-\frac{1}{(n+1)(n+2)\cdots (n+p)}.$$


trigonometry - In a triangle the least angle is $45^\circ$ and the tangents of the angles are in $A.P.$ If its area be $27$ sq.cm., find the lengths of its sides.



In a triangle the least angle is $45^\circ$ and the tangents of the angles are in $A.P.$ If its area be $27$ sq.cm., find the lengths of its sides.







Let $A$ be the smallest angle. $\angle A=45^\circ$. $\tan A,\tan B,\tan C$ are in arithmetic progression.

$2\tan B=\tan A+\tan C$

$2\frac{\sin B}{\cos B}=\frac{\sin B}{\cos A\cos C}$

$\cos A\cos C=\frac{\cos B}{2}$

$\frac{1}{\sqrt2}\cos C=\frac{\cos B}{2}$

$\frac{\cos B}{\cos C}=\sqrt2$

I am stuck here. Please help.


Answer



almagest has provided a good answer. This answer uses your idea.



Using that $B+C=180^\circ-45^\circ=135^\circ$, you can have
$$\frac{\cos B}{\cos C}=\sqrt 2\quad\Rightarrow\quad \cos B=\sqrt 2\ \cos(135^\circ-B)=-\cos B+\sin B,$$
i.e.$$2\cos B=\sin B$$

Squaring both sides gives
$$4(1-\sin^2B)=\sin^2B\quad\Rightarrow\quad \sin B=\frac{2}{\sqrt 5},\quad \cos B=\frac{1}{\sqrt 5}\tag1$$
because $B\le 90^\circ$, and so we have
$$\sin C=\sin(135^\circ-B)=\frac{3}{\sqrt{10}}\tag2$$



By the way,
$$27=\frac 12bc\sin A\quad\Rightarrow\quad bc=54\sqrt 2$$
and so using the law of sines with $(1)(2)$ gives
$$2R\sin B\cdot 2R\sin C=54\sqrt 2\quad\Rightarrow\quad R=\frac{3\sqrt 5}{\sqrt 2}$$where $R$ is the circumradius.




Thus,
$$a=2R\sin A=3\sqrt 5,\quad b=2R\sin B=6\sqrt 2,\quad c=2R\sin C=9.$$


discrete mathematics - Completely lost on using strong induction for this proof regarding a recursive algorithm.


T(n) = 1 when n $\le$ 10



T(n) = T($\lfloor\frac{n}{5}\rfloor$) + 7 T($\lfloor\frac{n}{10}\rfloor$) + n when n $\gt$ 10



Prove by strong induction that, for all n ≥ 1, we have T (n) ≤ 10n.
Hint: The only fact about ⌊ ⌋ that you will need is that when x ≥ 1, ⌊x⌋ is an integer, and 1 ≤ ⌊x⌋ ≤ x.





So here is my proof so far:



Let P(n) be the statement T(n) $\le$ 10n.
We will show that P(n) is true for every integer n $\ge$ 1 by strong induction.



Base cases (n = 1 through n = 10):
Since T(n) = 1 for all values of n between $1$ and $10$, we know T(n) = 1 for all base cases, and $1 \le 10n$ when n $\ge$ 1, therefore P(n) is true.



Inductive hypothesis. Suppose that for some arbitrary integer k $\ge$ 10, P(j) is true for every integer 1 $\le$ j $\le$ k.




Inductive step: We want to prove that P(k+1) is true.



T(k+1) = T($\lfloor\frac{k+1}{5}\rfloor$) + 7 • T($\lfloor\frac{k+1}{10}\rfloor$) + k + 1 by definition of T(n)



T($\lfloor\frac{k+1}{5}\rfloor$) = 1 when k $\le$ 49



T($\lfloor\frac{k+1}{10}\rfloor$) = 1 when k $\le$ 99



$\lfloor\frac{k+1}{5}\rfloor$ $\le$ $\frac{k+1}{5}$ and $\lfloor\frac{k+1}{10}\rfloor$ $\le$ $\frac{k+1}{10}$




and... now I'm kinda lost. Any tips you can give to me to push me in the right direction? I'm really new to proofs and can't figure out how to get to k+1 $\le$ 10n from here. Thanks.

real analysis - Proof that $\lim\limits_{n\to \infty} \sqrt{x_n} = \sqrt{\lim\limits_{n\to \infty} x_n}$

It is asked to prove that $\lim\limits_{n\to \infty} \sqrt{x_n} = \sqrt{\lim\limits_{n\to \infty} x_n}$, and suggested to use the following two inequalities:



$$a+b\leq a+ 2\sqrt{a}\sqrt{b}+b$$
$$\sqrt{b}\sqrt{b}\leq \sqrt{a}\sqrt{b}$$



The second inequality holds iff $a\geq b \geq 0$.



I've tried different possibilities, but couldn't figure out how to either take the limit sign out of the square root, or take the limit sign into the square root. Would appreciate some hints, but not an entire solution please.

combinatorics - Arrange Relatively Prime Numbers in a Circle

The question:




In how many ways can you arrange the numbers $1$ to $8$ in a circle so that neighboring numbers are relatively prime? Can you generalize for $1$ to $n$?





It's fairly easy to list all possible arrangements for the numbers $1$ to $n$ when $n \leq 7$; however, beyond this it is hard to do by hand.



I've tried approaching the problem in three ways, but none of them have got me any success (for either the general case or the special one).



Consider the graph with vertices numbered $1, 2, \dotsc, n$. In this graph, there is an edge between two distinct vertices iff their numbers are relatively prime. For example, $1$ will be connected to all other vertices, $2$ to all odd numbers, etc. Then, the number of Hamiltonian cycles in this graph is our answer.



Another way to look at the problem is by recursion. Given an arrangement of $n-1$ vertices, the $n^{\text{th}}$ vertex (with the number $n$) can be inserted on an edge where both neighbors are relatively prime to it. Thus the edge will be split into two. To solve the problem, we have to solve the recurrence. But note that the initial arrangement of $n-1$ vertices does not have to have all neighbors relatively prime; the condition is that either all neighbors are relatively prime, or there is exactly one relatively non-prime pair such that both numbers are relatively prime to the inserted number $n$.



A third way is by making 'trees' for the choices of each next vertex. We start at $1$: there are $n-1$ branches for each of the $n-1$ remaining numbers. For each of the branches, there are further branches for each of the remaining numbers relatively prime to that branch, and so on. Counting the number of ways to get to the bottom of this tree will give us an answer. Edit: As in @Mark Main's answer, this approach is equivalent to making 'option sets' for a given number: starting from 1, choose any number in its option set, then choose any number from that number's option set, and so on. This is a good way to deal with the problem programmatically, but I can see no other advantage for the solution.







Sorry for making it so long, but I thought it's better if I put up all my attempts. Please add more specific tags if you can, I couldn't find any.
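For what it's worth, the Hamiltonian-cycle formulation in the first approach runs instantly for $n=8$ by backtracking (a sketch; it fixes $1$ in the first position to quotient out rotations, and each cycle is still counted once per direction):

    from math import gcd

    def count_arrangements(n):
        # directed circular arrangements of 1..n, starting at 1,
        # with all neighboring pairs relatively prime
        count = 0

        def extend(seq, used):
            nonlocal count
            if len(seq) == n:
                if gcd(seq[-1], seq[0]) == 1:   # close the circle
                    count += 1
                return
            for k in range(2, n + 1):
                if k not in used and gcd(seq[-1], k) == 1:
                    extend(seq + [k], used | {k})

        extend([1], {1})
        return count    # halve this to identify the two directions

    print(count_arrangements(8))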

calculus - Interchanging Inverse Laplace Transform

I have a function $f(|\boldsymbol{k}|,s,\theta)$ for which I am interested in its inverse Laplace transform. I am also interested in the function's mean value for constant $|\boldsymbol{k}|$, but technically I need to inverse Laplace transform first. I was wondering about the interchangeability of the inverse transform and the mean; that is, does:



\begin{equation}
\frac{1}{2\pi}\int_0^{2\pi} \mathcal{L}^{-1}\left\{f(|\boldsymbol{k}|,s,\theta)\right\}\mathrm{d}\theta=\mathcal{L}^{-1}\left\{\frac{1}{2\pi}\int_0^{2\pi} f(|\boldsymbol{k}|,s,\theta)\mathrm{d}\theta\right\}
\end{equation}
The inverse Laplace transform is w.r.t. $s$.



I could think of a few reasons why it may not hold; for example, would it hold if doing the mean integration first affects the poles of $s$ (if that could happen)?




Please forgive my lack of knowledge in math, I do not know much measure theory and am not sure of the formality of interchanging these two operators. Any help is greatly appreciated!



EDIT: The actual function looks like:



\begin{equation}
f(|\boldsymbol{k}|,s,\theta)=\frac{\lambda_2 \left(D |\boldsymbol{k}|^2+2 \lambda_1+s\right)+\lambda_1 (i \boldsymbol{k}\cdot \boldsymbol{v}+\lambda_1+s)+\lambda_2^2}{(\lambda_1+\lambda_2) \left(\left(D |\boldsymbol{k}|^2+s\right) (i \boldsymbol{k}\cdot\boldsymbol{v}+\lambda_1+s)+\lambda_2 (s+i\boldsymbol{k}\cdot\boldsymbol{v} )\right)}
\end{equation}
All variables are real and positive except for $s$, of course. $\theta$ is the angle between $\boldsymbol{v}$ and $\boldsymbol{k}$. Carrying out the mean integration first gives:
\begin{equation}

f(|\boldsymbol{k}|,s)=\frac{\lambda_1}{(\lambda_1+\lambda_2) \left(D |\boldsymbol{k}|^2+\lambda_2+s\right)}
\end{equation}
which makes it easy to then find the inverse Laplace transform. Not sure if this is at all helpful.

elementary number theory - Incorrect cancellation of fraction with correct answer




If you cancel (incorrectly) the 6 in $\frac{26}{65}$, you get the (correct) equation $\frac{26}{65} = \frac{2}{5}$. What other fraction exhibits this property?



I tried considering only the simple case where the fraction is of the form $\frac{10a+n}{10n+b}$ (as in the example), then attempting to solve $\frac{a}{b} = \frac{10a+n}{10n+b}$ to get $n(10a-b) = 9ab$. I was thinking of using divisibility tricks, but am not getting anywhere.



The cases of the forms $\frac{10n+a}{10n+b}$ and $\frac{10a+n}{10b+n}$ are easy; you just get $a = b$. I haven't attempted more than two digits, or non-rationals. It would be interesting to see a more complicated expression, not necessarily a ratio of integers, that works out to a correct answer under an incorrect cancellation.


Answer



There are quite a few fractions that exhibit this property. They are also referred to as "lucky fractions".



Here is a paper on this topic that you might find useful: Lucky fractions
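
If it helps, the two-digit cases can be enumerated by brute force; a small Python sketch (my own illustration, not from the paper):

    # Two-digit 'lucky' cancellations: (10a+n)/(10n+b) = a/b after deleting n.
    for a in range(1, 10):
        for b in range(1, 10):
            for n in range(1, 10):
                num, den = 10*a + n, 10*n + b
                if num < den and num*b == den*a:
                    print(f"{num}/{den} = {a}/{b}")
    # Prints 16/64, 19/95, 26/65 and 49/98.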


calculus - Integral of gaussian error function




Problem : Integrate the function $x\cdot e^{-x^2(1+y^2)}$ over the set $(0,\infty)\times(0,\infty) $ in two different ways. Conclude from your calculations that $\int_{0}^{\infty} e^{-t^2}\,dt = \frac{1}{2}\sqrt{\pi} $.





My attempt: I am trying to use Fubini's theorem. So far I have calculated the integral by first integrating over $x$ and then over $y$, and determined its value to be $\pi/4$; but I don't see how I can calculate it any other way (without first knowing the integral of the Gaussian error function, or how to relate the double integral to it).



The problem essentially requires working with Fubini's theorem in both directions (or at least that is my intuition).


Answer



$$\iint_{(0,+\infty)^2}x e^{-x^2(1+y^2)}\,dx\,dy = \frac{1}{2}\int_{0}^{+\infty}\frac{dy}{1+y^2}=\frac{\pi}{4} $$
and
$$\begin{eqnarray*}\iint_{(0,+\infty)^2}x e^{-x^2(1+y^2)}\,dy\,dx &=& \int_{0}^{+\infty}\int_{0}^{+\infty} x e^{-x^2} e^{-(xy)^2}\,dy\,dx\\ & \stackrel{y\mapsto \frac{z}{x}}{=}& \int_{0}^{+\infty}\int_{0}^{+\infty} e^{-x^2} e^{-z^2}\,dz\,dx \end{eqnarray*}$$
equals the square of $\int_{0}^{+\infty} e^{-u^2}\,du$, hence $\int_{0}^{+\infty} e^{-u^2}\,du = \frac{\sqrt{\pi}}{2}$.
No polar coordinates, no $\Gamma$ function, just elementary manipulations. I like this approach!
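
A quick numerical cross-check of both computations, as a sketch assuming scipy is available:

    # Numerical cross-check: the square of the one-dimensional integral is pi/4.
    from math import exp, pi, sqrt
    from scipy.integrate import quad

    val, _ = quad(lambda u: exp(-u**2), 0, float('inf'))
    print(val, sqrt(pi)/2)   # both ~ 0.886227
    print(val**2, pi/4)      # both ~ 0.785398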


Saturday 22 April 2017

calculus - If $f:[0,infty)to [0,infty)$ and $f(x+y)=f(x)+f(y)$ then prove that $f(x)=ax$




Let $\,f:[0,\infty)\to [0,\infty)$ be a function such that $\,f(x+y)=f(x)+f(y),\,$ for all $\,x,y\ge 0$. Prove that $\,f(x)=ax,\,$ for some constant $a$.





My proof :



We have , $\,f(0)=0$. Then ,
$$\displaystyle f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}\frac{f(h)}{h}=\lim_{h\to 0}\frac{f(h)-f(0)}{h}=f'(0)=a\text{(constant)}.$$



Then, $\,f(x)=ax+b$. As, $\,f(0)=0$ so $b=0$ and $f(x)=ax.$



Is my proof correct?



Answer



In your proof you assume that $f$ is differentiable, which is not given.



Let me suggest how to obtain the formula of $f$:



Step I. Show that $\,f(px)=p\,f(x),\,$ when $p$ is a positive rational and $x$ a non-negative real. (At first show this for $p$ integer.) We obtain also that, $\,f(0)=0$.



Step II. Observe that $f$ is increasing, since, for $y>x$, we have
$$
f(y)=f(x)+f(y-x)\ge f(x).
$$



Step III.
Since $f$ is increasing, the limit $\,\lim_{x\to 0^+}f(x)\,$ exists. Moreover,
$$
\lim_{x\to 0^+}f(x)=\lim_{n\to\infty}f\Big(\frac{1}{n}\Big)
=\lim_{n\to\infty}\frac{1}{n}\,f(1)=0.
$$



Step IV. Pick an arbitrary $x\in(0,\infty)$, and a decreasing sequence $\{q_n\}\subset\mathbb Q$ tending to $x$. Then
$$
f(q_n)=q_n\,f(1)
$$
and
$$
x\,f(1)\longleftarrow q_n\,f(1)=f(q_n)=f(x)+f(q_n-x)\longrightarrow f(x),
$$
since $\,\,q_n-x\to 0^+$, and thus $\,\,\lim_{n\to\infty}f(q_n-x)=0$.




Therefore, $\,f(x)=x\,f(1),\,$ for all $x\in[0,\infty)$, and hence $\,f(x)=ax\,$ with $\,a=f(1)$.


abstract algebra - Find a polynomial with roots from another

I apologize if the title is not complete enough.



Let $f(x) = x^3 - 7x - 3$ and $g(x) = x^3 - 4x -5$ be polynomials with complex coefficients.



I need to find a polynomial of degree $3$ with roots $g(x_1)$, $g(x_2)$, and $g(x_3)$, where $x_1$, $x_2$ and $x_3$ are roots of $f$. The coefficient in front of $x^3$ must be $1$.




I can't find roots of $f$. Any suggestions?
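
One standard technique (possibly not the one the problem intends) is the resultant: since $f$ is monic, $\operatorname{Res}_x\big(f(x),\,y-g(x)\big)=\prod_i\big(y-g(x_i)\big)$, so the roots of $f$ are never needed explicitly. A sketch assuming sympy is available:

    # Monic cubic in y with roots g(x_i), where f(x_i) = 0, via a resultant.
    from sympy import symbols, resultant, Poly

    x, y = symbols('x y')
    f = x**3 - 7*x - 3
    g = x**3 - 4*x - 5
    print(Poly(resultant(f, y - g, x), y))
    # On the roots of f, x^3 = 7x + 3, so g(x_i) = 3*x_i - 2;
    # expect y**3 + 6*y**2 - 51*y - 199.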

calculus - Evaluate $lim_{xto0}frac{sin x^2}{x^2} sin frac{1}{x}$

How to evaluate this limit?




$$\lim_{x\to0}\frac{\sin x^2}{x^2} \sin \frac{1}{x}$$



L'Hôpital's rule is not working here.

Proof Verification (Set Theory)

Let $S$ be a set with $N$ elements and let $A_1,\dots ,A_{101}$ be $101$ (possibly non disjoint) subsets of $S$ with the following properties:



a) each element of $S$ belongs to at least one of these subsets



b) each subset contains exactly $1000$ elements of $S$



c) the intersection of any pair of subsets contains exactly $200$ elements



d) the intersection of any three subsets contains exactly $6$ elements




e) the intersection of any $4$ or more distinct subsets is empty



Prove this set cannot exist



My Proof



1) We know from inclusion-exclusion we can compute the cardinality by $101\times1000 - \binom{101}{2}\times200 + \binom{101}{3}\times6$



2) We know from property E that $A_1\cap A_2 \cap A_3 \cap A_4 \cap A_5 \cap A_6 = \emptyset$




3) By associative law we have $[A_1\cap A_2 \cap A_3] \cap [A_4 \cap A_5 \cap A_6] = \emptyset$



4) We can do this with any distinct intersection of 3 subsets. Therefore, all our intersections with 3 sets are disjoint from each other.



5) Since they are all disjoint, we can write the cardinality as the sum of all intersections of 3 distinct sets



6) The sum of all distinct intersections of 3 sets can be written as $\binom{101}{3}\times6$



7) However, $\binom{101}{3}\times6$ does not equal our computation in step 1, so we have a contradiction: two valid ways of counting give different cardinalities for the same set.




My Question



I want to know if the proof is sensible, readable and most importantly correct.
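
As a purely arithmetic aside, the two counts compared in step 7 are easy to evaluate; a quick sketch using Python's math.comb:

    # Compare the count from step 1 with the count from step 6.
    from math import comb

    step1 = 101*1000 - comb(101, 2)*200 + comb(101, 3)*6  # inclusion-exclusion
    step6 = comb(101, 3)*6                                # sum of triple intersections
    print(step1, step6)  # 90900 vs 999900 -- unequal, as step 7 asserts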

real analysis - The convergence of $sum_{n=1}^infty (-1)^nleft(frac{n}{e}right)^nfrac{1}{n!}$

I'm trying to figure out whether $\sum_{n=1}^\infty (-1)^n\left(\frac{n}{e}\right)^n\frac{1}{n!}$ converges or not. I've tried the Leibniz test for alternating series, but it leads to Stirling's formula, and I was wondering if there is some other way, so that I could avoid using it. I'll be grateful for any idea.
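
For what it's worth, the terms $a_n=\left(\frac{n}{e}\right)^n\frac{1}{n!}$ can be probed numerically in log space to avoid overflow; they appear to decrease slowly to $0$ (roughly like $1/\sqrt{2\pi n}$, which is what Stirling predicts). A small sketch, standard library only:

    # a_n = (n/e)^n / n!, computed via logarithms to avoid overflow.
    from math import exp, lgamma, log

    for n in (10, 100, 1000, 10000):
        a_n = exp(n*log(n) - n - lgamma(n + 1))
        print(n, a_n)  # decreases slowly to 0, roughly like 1/sqrt(2*pi*n)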

complex numbers - Solve $sum_{r=1}^n (2r-1)cos(2r-1)theta $



How should i solve : $$\sum_{r=1}^n (2r-1)\cos(2r-1)\theta $$



I can solve $\sum_{r=1}^n \cos(2r-1)\theta$ by considering $\Re \sum_{r=1}^n z^{2r-1}$ and summing the geometric series, but I can't seem to find a common ratio when the factor $2r-1$ multiplies the terms.




Visually: $\sum_{r=1}^n z^{2r-1} = z +z^3+\dots+z^{2n-1}$, where the common ratio $z^2$ is easy to see; but in the case of $\sum_{r=1}^n (2r-1)z^{2r-1} = z + 3z^3 + 5z^5 +\dots+ (2n-1)z^{2n-1}$, how should I proceed? A hint would be appreciated.


Answer



Note that $$\sum_{r=1}^{n} (2r-1) \cos (2r-1) \theta=\sum_{r=1}^{n} \frac{\mathrm{d}}{\mathrm{d}\theta}\sin (2r-1) \theta= \frac{\mathrm{d}}{\mathrm{d}\theta}\sum_{r=1}^{n} \sin (2r-1) \theta$$
Now calculate $$\sum_{r=1}^{n} \sin (2r-1) \theta=\frac{ \sin^2(n\theta) }{ \sin(\theta) }$$ via the standard formula for a sum of sines (multiply by $\sin\theta$ and telescope using $2\sin\theta\sin(2r-1)\theta=\cos(2r-2)\theta-\cos 2r\theta$).



Now just differentiate with respect to $\theta$.
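
A quick numerical spot-check of both identities, as a sketch (the function names are mine):

    # Spot-check: sum_{r=1}^n (2r-1)cos((2r-1)t) vs d/dt [sin^2(nt)/sin t],
    # using a central finite difference for the derivative.
    from math import sin, cos

    def lhs(n, t):
        return sum((2*r - 1)*cos((2*r - 1)*t) for r in range(1, n + 1))

    def rhs(n, t, h=1e-6):
        f = lambda u: sin(n*u)**2 / sin(u)
        return (f(t + h) - f(t - h)) / (2*h)

    for n in (1, 2, 5, 10):
        print(n, lhs(n, 0.7), rhs(n, 0.7))  # each pair agrees to ~1e-6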


Friday 21 April 2017

real analysis - Approximation of $ int_{0}^{infty} frac{ln(t)}{sqrt{t}}e^{-nt} mathrm dt,nrightarrowinfty $

How can I find the first term of the series expansion of



$$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-nt} \mathrm dt,n\rightarrow\infty ?$$



Or:



As $$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-nt} \mathrm dt = \frac{1}{\sqrt{n}}\int_{0}^{\infty} \frac{\ln(\frac{t}{n})}{\sqrt{t}}e^{-t} \mathrm dt $$



What is $$ \lim_{n\rightarrow\infty} \int_{0}^{\infty} \frac{\ln(\frac{t}{n})}{\sqrt{t}}e^{-t} \mathrm dt?$$
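
Splitting $\ln(t/n)=\ln t-\ln n$ in the substituted integral, and using $\int_0^\infty t^{-1/2}e^{-t}\,dt=\Gamma(1/2)=\sqrt{\pi}$ together with $\int_0^\infty t^{-1/2}\ln t\; e^{-t}\,dt=\Gamma'(1/2)=-\sqrt{\pi}(\gamma+2\ln 2)$, gives the closed form $-\frac{\sqrt{\pi}}{\sqrt{n}}\left(\ln n+\gamma+2\ln 2\right)$; so the second limit is $-\infty$, and the first term of the expansion is $-\sqrt{\pi}\,\frac{\ln n}{\sqrt{n}}$. A numerical cross-check, as a sketch assuming mpmath:

    # Compare the integral with the closed form -(sqrt(pi)/sqrt(n))(ln n + gamma + 2 ln 2).
    from mpmath import mp, quad, log, sqrt, exp, pi, euler, inf

    mp.dps = 25
    for n in (10, 100, 1000):
        I = quad(lambda t: log(t)/sqrt(t)*exp(-n*t), [0, 1.0/n, inf])
        exact = -sqrt(pi)/sqrt(n)*(log(n) + euler + 2*log(2))
        print(n, I, exact)  # the two columns agree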

analysis - Fourier Series of a Constant Function



My question is kinda dumb, but here I go: I'm studying Fourier Series on my own for my next semester. I needed to calculate the Fourier Series of the function $f(x) = 5$ defined in $[-4,4]$.




In this case, using the standard notation, $L = 4$ are the coefficients are
$a_{0} = \dfrac{1}{L} \displaystyle \int_{-L}^{L} f(x) \ dx$; $a_{n} = \dfrac{1}{L} \displaystyle \int_{-L}^{L} f(x) \cdot \cos\bigg(\dfrac{n\pi x}{L}\bigg) \ dx$ and $b_{n} = \dfrac{1}{L} \displaystyle \int_{-L}^{L} f(x) \cdot \sin\bigg(\dfrac{n\pi x}{L}\bigg) \ dx$, correct?



Since the function is constant, the sines and cosines should contribute nothing to the Fourier series, i.e., $a_{n} = b_{n} = 0$, but when I do the calculation I get $a_{n} = \dfrac{10}{\pi n} \sin(\pi n)$. It must be a pretty simple mistake I'm not seeing; I'm new to this subject.



Thanks for the help :]


Answer



$$a_n = \frac{1}{L}\int_{-L}^{L} 5\cos\left(\frac{n\pi x}{L}\right)dx = \frac{5}{L}\cdot\frac{L}{n\pi}\sin\left(\frac{n\pi x}{L}\right)\Bigg|_{-L}^{L} = \frac{5}{n\pi}\big(\sin(n\pi)-\sin(-n\pi)\big) = 0$$




since $\sin(n\pi) = 0$ for every integer $n$



$$b_n = \frac{1}{L}\int_{-L}^{L} 5\sin\left(\frac{n\pi x}{L}\right)dx = -\frac{5}{L}\cdot\frac{L}{n\pi}\cos\left(\frac{n\pi x}{L}\right)\Bigg|_{-L}^{L} = -\frac{5}{n\pi}\big(\cos(n\pi)-\cos(-n\pi)\big) = 0$$



since $\cos x$ is an even function. (With this convention, $a_0 = \frac{1}{L}\int_{-L}^{L} 5\,dx = 10$, so the series collapses to $\frac{a_0}{2} = 5$, as it should.)
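
A symbolic confirmation of all three coefficients, as a sketch assuming sympy:

    # Coefficients for f(x) = 5 on [-4, 4] with the stated convention.
    from sympy import symbols, integrate, sin, cos, pi

    x = symbols('x', real=True)
    n = symbols('n', integer=True, positive=True)
    L = 4
    a0 = integrate(5, (x, -L, L)) / L                 # -> 10
    an = integrate(5*cos(n*pi*x/L), (x, -L, L)) / L   # -> 0 (sin(n*pi) = 0)
    bn = integrate(5*sin(n*pi*x/L), (x, -L, L)) / L   # -> 0
    print(a0, an, bn)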


elementary number theory - Modular Arithmetic in AMC 2010 10A #24



Link to the problem/solution:
https://artofproblemsolving.com/wiki/index.php/2010_AMC_10A_Problems/Problem_24



I'm trying to understand the following step in the solution to the aforementioned AMC problem.




Given:




  1. $M = \cfrac{90!}{5^{21}}$

  2. $N = \cfrac{90!}{10^{21}} = \cfrac{M}{2^{21}}$

  3. $M \equiv 24 \bmod 25$

  4. $2^{21} \equiv 2 \bmod 25$



Then:




$N \bmod 25 = \cfrac{24}{2} = 12$



In the above approach, we write N as a fraction, find the modulo-25 residue of the numerator and denominator, and divide the residues to find the modulo-25 residue of N.



I recently finished working through the Art of Problem Solving textbook "Introduction to Number Theory", so I'm familiar with the basics of solving modular congruence equations and modular arithmetic, but I was confused by this step. I was wondering: under what conditions can we divide modular congruent statements, and why does this step work? My understanding was that addition, multiplication, and exponentiation work with modular congruent statements, but I never saw an example in the book of dividing statements as in the above case.



For instance:



$98 \equiv 23 \bmod 25$




$49 \equiv 24 \bmod 25$



$2 \equiv ? \bmod 25$



In this example, I don't think division produces a meaningful result (perhaps it does, but I don't know how to evaluate 23/24?). Why does division fail here, but work above? (EDIT: -2 / -1 = 2, so I guess it works in this case too!)



My suspicion is that I'm missing something really basic here, perhaps even misinterpreting what the solution is doing. If you have any book recommendations that would help me improve and solidify my understanding of modular arithmetic/modular congruence, I'd greatly appreciate it!


Answer



In modular arithmetic, "division" means multiplying by the multiplicative inverse: $b = \frac{1}{a}$ denotes the value which, when multiplied by $a$, gives $1$ modulo the modulus, i.e., $ba \equiv 1 \pmod n$. Note you may sometimes see $b = a^{-1}$ instead, to avoid explicit "division". This works, and gives a unique value, whenever the value you're dividing by and the modulus are relatively prime.




More generally, it'll work in all cases of $\frac{c}{a} \equiv e \pmod n$ where $d = \gcd(a,n)$ and $d \mid c$ since this gcd value "cancels" in the division. Thus, the resulting equivalent modulo equation of $\frac{c'}{a'} \equiv e \pmod n$, where $c' = \frac{c}{d}$ and $a' = \frac{a}{d}$ has $\gcd(a',n) = 1$, has a solution. However, as Bill Dubuque's comment indicates, this assumes you're doing integer division to the extent of removing the common factor of $d$. Note that $a^{-1}$ doesn't exist modulo $n$ in this case. However, $(a')^{-1}$ does exist modulo $\frac{n}{d}$, so a possible interpretation would be $\frac{c'}{a'} \equiv c'(a')^{-1} \equiv e \pmod{\frac{n}{d}}$.



As for why the multiplicative inverse $b = a^{-1}$ exists modulo $n$ if $\gcd(a,n) = 1$, Bézout's identity states that in such cases there exist integers $x$ and $y$ such that



$$ax + ny = 1 \tag{1}\label{eq1}$$



As such $ax \equiv 1 \pmod n$ so $x \equiv a^{-1} = b \pmod n$. This value must be unique, modulo $n$, because if there was another value $x'$ such that $xa \equiv x'a \equiv 1 \pmod n$, then $(x - x')a \equiv 0 \pmod n$. Since $\gcd(a,n) = 1$, this means that $x - x' \equiv 0 \pmod n \; \iff \; x \equiv x' \pmod n$.



Bézout's identity also shows that if $a$ and $n$ are not relatively prime, e.g., $\gcd(a,n) = d \gt 1$, then \eqref{eq1} becomes




$$ax + ny = d \tag{2}\label{eq2}$$



where the integers of the form $ax + ny$ are always multiples of $d$; thus the combination can never be congruent to $1$, and $a$ has no multiplicative inverse.
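
To tie this back to the AMC step, a minimal sketch (Python 3.8+ exposes modular inverses through pow):

    # "Dividing by 2 mod 25" means multiplying by the inverse of 2 mod 25.
    inv2 = pow(2, -1, 25)     # 13, since 2*13 = 26 = 1 (mod 25)
    print((24*inv2) % 25)     # 12, matching N = 24/2 = 12 (mod 25)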


modular arithmetic - Number Theory Proof Need Logic Checked



I'm working on the following problem:




Show that if $x^{p} + y^{p} = z^{p}$, then $p \space | \space (x + y -z)$





So far my proof looks something like this:



Suppose $p \nmid \space (x+y-z)$ then $x^p + y^p = z^p$ shouldn't have a solution (proof by contraposition). Taking the original equation $\bmod p$ we have
$$x^p + y^p = z^p \quad \rightarrow \quad x + y \equiv z \space \bmod p \quad \rightarrow \quad x + y - z \equiv 0 \space \bmod p \quad \rightarrow \quad p \mid (x + y -z)$$ (which we know is false). Hence the original equation holds.



So I don't think my logic is making sense, and am need of assistance if it's wrong. Any tips would be appreciated.


Answer



Hint: Fermat's little theorem states that $x^p \equiv x \pmod p$.




I'm not exactly certain how you want us to respond. It appears that you have the idea. The way that I would express it is as follows:



$x + y \equiv x^p + y^p = z^p \equiv z \pmod{p}$, hence $p \mid x+y -z$.






So, most of you know FLT as "If $x \not\equiv 0 \pmod{p}$, then $x^{p-1} \equiv 1 \pmod{p}$." The first line is a direct corollary of this result.



If $x \equiv 0 \pmod{p}$, then $x \equiv 0 \equiv x^p \pmod{p}$. Otherwise, $x^p \equiv x^{p-1}\, x \equiv x \pmod{p}$.
Hence, in all cases, $x^p \equiv x \pmod{p}$.
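
A quick numerical check of $x^p \equiv x \pmod p$, as a sketch:

    # Check x^p = x (mod p) for every residue x and a few primes p.
    for p in (2, 3, 5, 7, 13):
        assert all(pow(x, p, p) == x % p for x in range(p))
    print("x^p = x (mod p) verified for p in {2, 3, 5, 7, 13}")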



Thursday 20 April 2017

linear algebra - Characteristic Polynomial of a matrix and its transpose

Does a matrix and its transpose have the same characteristic polynomial? I know that they have the same eigenvalues but different eigenvectors. Does having the same eigenvalues mean they share the same characteristic polynomial?
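
Yes: since $\det(B)=\det(B^T)$ for any square matrix $B$, we get $\det(A-\lambda I)=\det\big((A-\lambda I)^T\big)=\det(A^T-\lambda I)$, so the characteristic polynomials coincide. A quick check, as a sketch assuming sympy (the matrix is my own example):

    # The characteristic polynomials of A and A^T coincide.
    from sympy import Matrix, symbols

    lam = symbols('lambda')
    A = Matrix([[1, 2, 0], [3, 4, 5], [0, 1, 6]])
    print(A.charpoly(lam).as_expr())
    print(A.T.charpoly(lam).as_expr())  # same polynomial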

real analysis - Prove the sequence $x_{n+1}=frac{1}{4-x_n}, x_1=3$ converges.




I try to use induction to show that the sequence is monotonically decreasing. My induction hypothesis is $$P(n):x_{n+1}<x_n.$$ We have the first few terms $x_1=3, x_2=1, x_3=1/3,\cdots$. It is clear that $P(1)$ and $P(2)$ hold. Now I try to prove $P(n+1)$ assuming $P(n)$ is true.
To do this I must show that $x_{n+2}-x_{n+1}<0$.



$$x_{n+2}-x_{n+1}=\frac{1}{4-x_{n+1}}-\frac{1}{4-x_n}=\frac{x_{n+1}-x_n}{(4-x_{n+1})(4-x_n)}$$
I get stuck here: I know that the numerator is less than $0$, but how do I show that the denominator is positive, so that the whole expression is less than $0$?



I am trying to show that the sequence is monotonically decreasing and also bounded below, so as to conclude that it converges.



Hints first



Answer



Hint: use as your inductive hypothesis the following statement: $x_{n+1} < x_n < 4$.






How to come up with this hypothesis? I first asked myself whether you were asking for something that's even true: is the denominator positive? The answer is obviously yes by working out the first few values. Why is the denominator positive? Because it's the product of positive things. Can we show that we're always taking the product of positive things? In order to do that, we'd need…
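
Numerically, the iteration settles quickly at the smaller fixed point of $x\mapsto \frac{1}{4-x}$, namely $2-\sqrt{3}\approx 0.268$; a quick sketch:

    # Iterate x_{n+1} = 1/(4 - x_n) from x_1 = 3.
    x = 3.0
    for _ in range(12):
        x = 1/(4 - x)
        print(x)
    # The values decrease toward 2 - sqrt(3) ~ 0.2679,
    # the smaller root of x^2 - 4x + 1 = 0.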


calculus - Proof of $frac{d}{dx}e^x = e^x$



I'm working through the proof of $\frac{d}{dx}e^x = e^x$, and trying to understand it, but my mind has gotten stuck at the last step.



Starting with the definition of a derivative, we can formulate it like so:



$$\frac{d}{dx} e^x = \lim_{h \to 0} \frac{e^{x+h}-e^x}{h}$$



After some algebra, we arrive at:




$$\frac{d}{dx} e^x = e^x \lim_{h \to 0} \frac{e^h-1}{h}$$



As $h\to0$, the expression approaches $\frac{0}{0}$, which makes it indeterminate. And, this is where my understanding ends. I've tried looking at wikipedia and other descriptions of the proof, but couldn't understand those explanations. It has usually been something along the lines of, "plot $\frac{e^x - 1}{x}$ and see the function's behavior at $0$," which ends up approaching $1$, which can substitute the limit to give the result of the derivative:



$$\frac{d}{dx} e^x = e^x \cdot 1 = e^x$$



I vaguely understand the concept of indeterminate forms, and why it is difficult to know what is happening with the function. But is there a better explanation of how the result of $1$ is obtained?


Answer



Suppose you have an exponential function, like $f(x)=2^x$.




The derivative is
$$
f'(x) = \lim_{h\to0} \frac{f(x+h)-f(x)}{h} = \lim_{h\to0}\frac{2^{x+h}-2^x}{h} = \lim_{h\to0}\left(2^x\cdot\frac{2^h-1}{h} \right).
$$
So far, just algebra. Now watch this:
$$
= 2^x\lim_{h\to0}\frac{2^h-1}{h}.
$$
This can be done because $2^x$ is "constant" and "constant" means "not depending on $h$".




But this is equal to $(2^x\cdot\text{constant})$. But in this case "constant" means "not depending on $x$". "Constant" always means "not depending on something", but "something" varies with the context.



What's the "constant"? In the case above, it's not hard to show that the constant is somewhere between $1/2$ and $1$. It we'd started with $4^x$ instead, then it would be fairly easy to show that the "constant" would be more than $1$. For a base somewhere between $2$ and $4$, the "constant" is $1$. That base is $e$.



If you want to talk about how it is known that $2$ is too small and $4$ is too big, to be the base of the "natural" exponential function (i.e., the one for which the "constant" is $1$), I can post further on this.



Later edit: OK, how do we know that $2$ is too small and $4$ is too big, to serve in the role of the base of the "natural" exponential function? Look at the graph of $y=2^x$. It gets steeper as you go from left to right. As $x$ goes from $0$ to $1$, $y$ goes from $1$ to $2$. So the average slope between $x=0$ and $x=1$ is
$$\dfrac{\text{rise}}{\text{run}} = \frac{2-1}{1-0} = 1.$$
Since the graph gets steeper going from left to right, the slope at the left end of this interval, i.e. at $x=0$, must be less than that. Thus we have $\dfrac{d}{dx}2^x = (2^x\cdot\text{constant})$ and the "constant" is less than $1$. (Thinking about the interval from $x=-1$ to $x=0$ in the same way shows that the "constant" is more than $1/2$.)




Now look at $y=4^x$, and use the interval from $x=-1/2$ to $x=0$, and do the same thing, and you see that when you write $\dfrac{d}{dx}4^x = (4^x\cdot\text{constant})$, then that "constant" is more than $1$.



This should suggest that $4$ is too big, and $2$ is too small.



You can show that $3$ is too big by using the interval from $x=-1/6$ to $x=0$ and doing the same thing. But the arithmetic is messy.


calculus - relation between the number of real roots of the derivative and the original polynomial




If the derivative of polynomial has n real roots then can we conclude that the original polynomial has to have n+1 real roots?


Answer



No. $f(x)=x^2+1$ provides a counterexample: its derivative $f'(x)=2x$ has one real root, while $f$ itself has none.


analysis - When does $displaystylelim_{ntoinfty}x_n^{1/n} = alpha$ imply $displaystylelim_{ntoinfty}frac{x_n}{alpha^n}$ exists?



Let $x_n$ be a sequence of numbers satisfying
$$
0 < x_n \leq x_{n-1} \leq \ldots \leq x_0 = 1, \quad n \in \mathbb{N}
$$
as well as
$$
\lim_{n \to \infty} x_n = 0, \quad \text{and} \quad \lim_{n\to \infty} \frac{x_n}{x_{n-1}} = \alpha \in (0,1).
$$
From this it follows that
$$
\lim_{n \to \infty} x_n^{1/n}
= \alpha;
$$

my question is, under what conditions (the milder the better) can one conclude that the limit
$$
L
= \lim_{n \to \infty} \frac{x_n}{\alpha^n}
$$
exists? I don't think it is true in general, but there are extra properties that the $x_n$ satisfy which may be germane and sufficient to secure the result; for the moment though, I'll keep the question unfettered by potential red herrings.



Some thoughts which may or may not be useful:
the sequence $\phi_n := \frac{x_n}{\alpha^n}$ can be shown to be submultiplicative, i.e.
$$
\phi_{n+m} \leq \phi_n \phi_m, \quad n,m \in \mathbb{N},
$$
and since $x_n$ decreases from $1$ to $0$, there is a point at which it "crosses" the line $y = \alpha$, after which point $\frac{x_n}{\alpha} < 1$.



EDIT



Etienne gave a nice answer. It did not apply directly to my specific situation, but I believe it handles the original question nicely. The additional details of my specific problem are unlikely to be searched for - at least much less likely than the original question. So I'll accept the answer and leave it at that!
Thanks!


Answer




Here are simple conditions under which $\frac{x_n}{\alpha^n}$ does converge, without even assuming that $(x_n)$ is decreasing or that $\frac{x_n}{x_{n-1}}\to\alpha$:



(i) If $\frac{x_n}{x_{n-1}}\leq\alpha$ for all $n$ then $\frac{x_n}{\alpha^n}$ converges (in $\mathbb R$).



(ii) If $\frac{x_n}{x_{n-1}}\geq\alpha$ for all $n$, then $\frac{x_n}{\alpha^n}$ converges in $\mathbb R\cup\{+\infty\}$.



To prove this, set $u_n:=-\log(x_n)$. If (i) holds, then $u_{n}-u_{n-1}\geq \beta:=-\log(\alpha)$ for all $n$. So the sequence $v_n:=u_n-n\beta$ is nondecreasing. Hence $v_n$ has a limit in $\mathbb R\cup\{+\infty\}$, so that $\frac{x_n}{\alpha^n}=e^{-v_n}$ has a limit in $\mathbb R$. If (ii) holds, then $v_n$ is nonincreasing, so it has a limit in $\mathbb R\cup\{-\infty\}$ and hence $\frac{x_n}{\alpha^n}$ has a limit in $\mathbb R\cup\{ +\infty\}$.



However, as you guessed, the result is not true in general.




Consider for example the sequence defined by $x_0=1$ and the "recurrence relation"
$$x_{n}=x_{n-1}\left(\alpha-\alpha_n\right)$$
where $(\alpha_n)_{n\geq 1}$ is a sequence of real numbers tending to $0$ such that $\alpha_n<\alpha$ for all $n$, $1+\inf_{n\geq 1} \alpha_n>\alpha$ and, moreover, the series $\sum\log\left(1-\frac{\alpha_n}{\alpha}\right)$ is not convergent and does not diverge to $-\infty$. For an explicit example, one may take $\alpha_n:=\frac{(-1)^n}{\sqrt n}\delta$, where $0<\delta <\alpha$ and $1-\delta>\alpha$; or (much simpler), any sequence of negative numbers $\alpha_n$ such that $1+\inf_{n\geq 1}\alpha_n>\alpha$ and $\sum_1^\infty\alpha_n=-\infty$.



Then $0 < x_n < x_{n-1}$ for all $n$, and
$$\frac{x_n}{\alpha^n}=\frac1{\alpha^n}\prod_{k=1}^n \left(\alpha-\alpha_k\right)=\prod_{k=1}^n \left(1-\frac{\alpha_k}{\alpha}\right)\, .$$
Taking logarithms (which is possible because $1-\frac{\alpha_k}{\alpha}>0$), this gives
$$\log\left(\frac{x_n}{\alpha^n}\right)=\sum_{k=1}^n\log\left(1-\frac{\alpha_k}{\alpha}\right)\, ; $$
so $\log\left(\frac{x_n}{\alpha^n}\right)$ does not converge in $\mathbb R$ and does not tend to $-\infty$ either, and hence $\frac{x_n}{\alpha^n}$ does not converge.




Edit. If one takes $\alpha_n:=a(1-a^{\sqrt{n}-\sqrt{n+1}})$, one gets Clin's example. If one takes $\alpha_n=-\frac\alpha{n}$ (assuming $\alpha<1/2$), one gets the extremely simple $$x_n= (n+1)\alpha^n\, .$$
Note that $\alpha<1/2$ is necessary only to ensure that $(x_n)$ is decreasing. If one forgets this requirement, one may take
$$x_n = C_n \,\alpha^n$$
where $(C_n)$ is any sequence of positive numbers such that $C_n\to\infty$ and $\frac{C_{n+1}}{C_n}\to 1$. This sequence will be decreasing with $x_0=1$ if $C_0=1$ and $\frac{C_{n+1}}{C_n}<\frac1\alpha$ for all $n$.


real analysis - How to find $lim_{hrightarrow 0}frac{sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without lhopital rule? I know when I use lhopital I easy get $$ \lim_{h\rightarrow 0}...