Monday 31 March 2014

calculus - Evaluation of the limit, $\lim\limits_{x\rightarrow\infty} \left(\frac{20x}{20x+4}\right)^{8x}$, using only elementary methods



I was assisting a TA for an introductory calculus class with the following limit,



$$\lim_{x \rightarrow \infty} \left(\frac{20x}{20x+4}\right)^{8x}$$




and I came to a simple solution which involved evaluating the "reciprocal" limit



$$\lim_{z \rightarrow 0} \left(\frac{1}{1+\frac{z}{5}}\right)^{8/z}$$



by using the Taylor expansion of $\log(1+z)$ around $z=0$. However, the TA claims that the students have not learned about series expansions so that would not be a valid solution for the course. I tried applying L'Hopital's rule, which I was told the class did cover, but I was unsuccessful. As a note I will mention that



$$\lim_{x \rightarrow \infty} \left(\frac{20x}{20x+4}\right)^{8x} = e^{-8/5}.$$




Any ideas for a solution to this problem using only knowledge from a first quarter (or semester) calculus course which hasn't covered series expansions?


Answer



If they know the definition of $e$ as
$$\lim_{n\rightarrow \infty} \left(1+\frac{1}{n}\right)^n,$$
then set $\alpha=\lim_{x\rightarrow\infty}\left(\frac{20x}{20x+4}\right)^{8x}$
and note that
$$1/\alpha^5 = \lim_{x\rightarrow\infty}\left(1 + \frac{1}{5x}\right)^{40x}=
\left(\lim_{x\rightarrow\infty}\left(1 + \frac{1}{5x}\right)^{5x}\right)^8 = e^8.$$
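As a quick numerical sanity check of this answer (not part of the original; plain Python, sample values chosen arbitrarily), the expression does settle near $e^{-8/5}\approx 0.2019$:

```python
# Evaluate (20x/(20x+4))^(8x) for growing x and compare with e^(-8/5).
import math

for x in [10, 100, 1000, 100000]:
    print(x, (20 * x / (20 * x + 4)) ** (8 * x))

print("e^(-8/5) =", math.exp(-8 / 5))
```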


trigonometry - $\sec^2\theta+\csc^2\theta=\sec^2\theta\csc^2\theta$



I was playing around with trigonometric functions when I stumbled across this
$$\sec^2\theta+\csc^2\theta=\sec^2\theta\csc^2\theta$$
Immediately I checked it to see if it was flawed so I devised a proof
$$\begin{align}
\sec^2\theta+\csc^2\theta&=\sec^2\theta\csc^2\theta\\
\frac1{\cos^2\theta}+\frac1{\sin^2\theta}&=\frac1{\cos^2\theta\sin^2\theta}\\

\frac{\cos^2\theta+\sin^2\theta}{\cos^2\theta\sin^2\theta}&=\frac1{\cos^2\theta\sin^2\theta}\\
\frac1{\cos^2\theta\sin^2\theta}&=\frac1{\cos^2\theta\sin^2\theta}\blacksquare\\
\end{align}$$
I checked over my proof many times and I couldn't find a mistake, so I assume that my claim must be true.



So my questions are:



Is there a deeper explanation into why adding the squares is the same as multiplying?



Is this just a property of these trigonometric functions or do similar relationships occur with other trigonometric functions?




And finally as an additional curiosity what does this translate into geometrically?


Answer



Any pair of trig functions (other than inverses) will give some sort of identity, by clearing denominators (if any) in the pythagorean identity.



e.g., $\sin$ and $\tan$ give



$$ \sin^2 x + \cos^2 x = 1
\\ \sin^2 x + \left( \frac{\sin x}{\tan x} \right)^2 = 1
\\ \sin^2 x \tan^2 x + \sin^2 x = \tan^2 x

\\ \tan^2 x - \sin^2 x = \sin^2 x \tan^2 x
$$
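A numerical spot-check of the resulting identity may be reassuring; this snippet is only an illustration (sample points chosen arbitrarily, avoiding the poles of $\tan$):

```python
# Check tan^2 x - sin^2 x == sin^2 x * tan^2 x at a few sample points.
import math

for x in [0.3, 1.0, 2.5, -0.7]:
    lhs = math.tan(x) ** 2 - math.sin(x) ** 2
    rhs = math.sin(x) ** 2 * math.tan(x) ** 2
    print(x, lhs, rhs, abs(lhs - rhs) < 1e-12)
```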


proof verification - Proving $0 + 2 + 4 + \cdots + (2n-2) = n(n-1)$ for all $n \in \mathbb{Z}^+$




$0 + 2 + 4 + \cdots + (2n-2) = n(n-1)$ for all $n \in \mathbb{Z}^+$



What I have:



Let $n \in \mathbb{Z}^+$. If $n = 2$, then L.H.S. $= 0 + 2 = 2$ and R.H.S. $= 2(2-1) = 2$. So L.H.S. = R.H.S. and the claim holds for $n = 2$. Assume the claim holds for some $k \in \mathbb{Z}^+$, that is, $0 + 2 + 4 + \cdots + (2k-2) = k(k-1)$.



What I am stuck on:



How do I show that this holds for $0 + 2 + \cdots + (2k-2) = k(k-1)$?



Answer



You assume that for $n=k$, the statement holds:
$$0+2+\cdots+2(k-1) = k(k-1). \tag{a}$$
You then want to prove the statement for $n=k+1$, that is, we want to show
$$0+2+\cdots+2(k-1)+2k = (k+1)k.\tag{b}$$



Can you make the connection from (a) to (b)?
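As a side note (not part of the hint above), the closed form itself is easy to spot-check numerically before proving it by induction:

```python
# Verify 0 + 2 + ... + (2n-2) == n(n-1) for small n.
for n in range(1, 11):
    s = sum(range(0, 2 * n - 1, 2))  # 0, 2, ..., 2n-2
    print(n, s, n * (n - 1), s == n * (n - 1))
```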


measure theory - Application of dominated convergence theorem



Suppose $\{f_n\}$ is a sequence of functions in $L^2[0,1]$ such that $f_n(x)\rightarrow 0$ almost everywhere on $[0,1]$. If $\|f_n\|_{L^2[0,1]}\le 1$ for all $n$, then $$\lim_{n\rightarrow \infty} \int_0^1 f_n(x)g(x)~dx =0$$ for all $g\in L^2[0,1]$.



I feel I have to use the dominated convergence theorem, but I can't find a dominating function. Thanks for helping.


Answer



Here's an outline:




Fix $g\in L^2$.



Choose $\delta>0$ so that $\Bigl(\int_E|g|^2\Bigr)^{1/2}$ is small whenever $\mu(E)<\delta$.



By Egoroff, find a set $E$ of measure less than $\delta$ so that $f_n$ converges uniformly to $0$ off $E$. Choose $N$ so that for $n>N$, $|f_n|$ is small on $E^C$.



Then write:
$$
\Bigl| \int f_n g\, \Bigr|\le \int |f_n ||g| =\int_E |f_n ||g|+\int_{E^C}|f_n ||g|
$$

and apply Hölder to both integrals on the right.
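For completeness (this step is only sketched in the outline above), the two estimates can be spelled out as
$$\int_E |f_n|\,|g| \le \|f_n\|_{L^2}\Bigl(\int_E|g|^2\Bigr)^{1/2}\le \Bigl(\int_E|g|^2\Bigr)^{1/2}
\quad\text{and}\quad
\int_{E^C}|f_n|\,|g| \le \Bigl(\sup_{E^C}|f_n|\Bigr)\|g\|_{L^2},$$
and both right-hand sides are small by the choices of $\delta$ and $N$.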


Sunday 30 March 2014

algebra precalculus - why 0=0 is not possible??




Hi, one of my friends showed me the following "proof":



$1)$ $2^2 - 2^2 = 10 - 10$



$2)$ $(2+2) (2-2) = 5 (2-2)$



$3)$ dividing both sides by (2-2)



$4)$ $(2 + 2) = 5$




I know this is wrong from the first line, since both the LHS and the RHS equal $0$, and you can't simply argue from the equation $0=0$ because $0/0 \ne 1$. But I can't explain it properly. Can anyone give a precise reason why we can't compare $0=0$?



Or is there any other reason for this to be wrong?


Answer



You can't divide both sides by $(2-2)$, because $(2-2)$ is zero, and you cannot divide by zero.



The technical reason for this is that zero does not have a multiplicative inverse in the field of rational numbers (or real numbers, or complex numbers, or any field), because the existence of such an inverse would be inconsistent with the field axioms.


Any counterexample to answer this question on elementary geometry?



Question: See the figure below. If $AB=BC$ and $DE=EF$, is the line $DF$ parallel to the line $AC$?



[figure not included]



This should be an elementary problem. But I can't construct a counterexample to disprove the above question. If the answer is negative, please give a counterexample. Thanks.


Answer



Hint:





  • Let $D'F'$ be a segment analogous to $DF$ such that $E \in D'F'$ and $D'F' \parallel AC$.

  • Then $\triangle DED'$ and $\triangle FEF'$ are congruent and so $DD' \parallel FF'$.



I hope this helps $\ddot\smile$


calculus - Evaluating the limit $\lim_{x \to 0+}\left(\frac{1}{x}-\frac{1}{\arctan(x)}\right)$

I think you might have to use Squeeze Theorem here? I've confused myself.




$$\lim_{x \to 0+}\frac{1}{x}-\frac{1}{\arctan(x)}$$


real analysis - Find the sum $\sum_{j=0}^{n}j$

Find the sum $\sum_{j=0}^{n}j$




for all $n=0,1,2,3,\dots$.



How do I find/evaluate this sum?

real analysis - Discontinuity set of derivative



Let $f$ be a differentiable function and let $J$ and $D$ denote the sets of points of continuity and discontinuity of $f'$, respectively. Then $J$ and $D$ can be characterized as done here.



Also $J$ is a $G_{\delta}$ dense set and hence has positive measure (?)



Now Volterra function can be used to construct a derivative which is discontinuous on a nowhere dense set with both zero and positive measure by choosing appropriate Cantor sets.



Hence $D$ can be a nowhere dense set with positive measure. This is where I am stuck.




Now if $D$ has full measure, then $J$ has zero measure, which would contradict the fact that $J$ is a dense $G_{\delta}$ set?


Answer



Also $J$ is a $G_{\delta}$ dense set and hence has positive measure



This is not true, and proofs of a stronger result are given in intuition of decomposition of $\mathbb R$ into disjoint union of first category and null set.



To help in understanding how these properties relate to each other, let’s consider the following three properties for subsets of $\mathbb R$ --- countable, meager, and zero Lebesgue measure. Each of these properties conveys (i) a notion of “small”, (ii) a notion of “large”, and (iii) a notion of “maximally large”:



small notions: $\;$ countable,$\;$ meager,$\;$ zero Lebesgue measure




large notions:$\;$ uncountable,$\;$ non-meager,$\;$ positive Lebesgue measure



maximally large notions:$\;$ co-countable,$\;$ co-meager,$\;$ full Lebesgue measure



The “maximally large” notions are those for which the set is so large that what’s left over is small (i.e. the complement of the set is small).



The three following observations should help in beginning to get a feel for how these notions relate to each other.



(1) Each of meager and zero Lebesgue measure is a weaker notion of being small than countable.




(2) Each of non-meager and positive Lebesgue measure is a stronger notion of being large than uncountable.



(3) Each of co-meager and full Lebesgue measure is a weaker notion of maximally large than co-countable.



The notions “countable” and “meager” (= first Baire category) can be defined via countable unions of smaller building blocks --- every countable set is a countable union of finite sets (equivalently, a countable union of singleton sets) and every meager set is a countable union of nowhere dense sets. There is no natural smaller building block notion for the zero Lebesgue measure sets.



Each subset of a small set is also a small set (of the same type) and each countable union of small sets (all of the same type) is a small set (of the same type). Also, for each type of small set, $\mathbb R$ is not a small set. Putting the last two sentences together gives the result that for each type of small set, each countable union of that type of small set will not be all of $\mathbb {R}.$ In fact, each countable union of that type of small set is not remotely close to being all of $\mathbb {R},$ since each countable union will be a small set, and hence what remains in $\mathbb R$ is a maximally large set (of the same type).



As indicated above, the notions related to “countable” are strictly comparable to the notions related to “meager” and “zero Lebesgue measure”. However, there is no comparability between the notions related to “meager” and “zero Lebesgue measure”. Indeed, it is possible for a set to be meager and not have zero Lebesgue measure (e.g. a Cantor set of positive measure; note this example shows that even a building block for the meager sets can fail to have zero Lebesgue measure), and it is possible for a set to have zero Lebesgue measure and not be meager (see intuition of decomposition of $\mathbb R$ into disjoint union of first category and null set).




In fact, it is possible for a set to be meager and to have full measure (i.e. small for Baire category and maximally large Lebesgue measure), and it is possible for a set to have zero Lebesgue measure and to be co-meager (i.e. small for Lebesgue measure and maximally large for Baire category). Thread cited above shows that $\mathbb R$ can be written as the union of two small sets (of different types), that is, $\mathbb {R} = A \cup B$ where $A$ is meager and $B$ has zero Lebesgue measure (the term orthogonal is sometimes used for a pair of smallness notions that have this property), and both statements in the previous sentence follow from this --- $A$ is small for Baire category and maximally large for Lebesgue measure, and $B$ is small for Lebesgue measure and maximally large for Baire category.


Saturday 29 March 2014

elementary number theory - Strengthening the Sylvester-Schur Theorem



The Sylvester-Schur Theorem states that if $x > k$, then in the set of integers: $x, x+1, x+2, \dots, x+k-1$, there is at least $1$ number containing a prime divisor greater than $k$.



It has always struck me that this theorem is significantly weaker than the actual reality, especially as $x$ gets larger.



As I was trying to check my intuition, I had the following thought:




  • Let $k$ be any integer greater than $1$


  • Let $p_n$ be the $n$th prime such that $p_n \le k < p_{n+1}$.

  • If an integer $x$ is sufficiently large, then it follows that in the set of integers: $x, x+1, x+2, \dots, x+k-1$, there are at least $k-n$ numbers containing a prime divisor greater than $k$.



Here's my argument:



(1) Let $k > 1$ be an integer with $p_n \le k < p_{n+1}$ where $p_n$ is the $n$th prime.



(2) Let $x > 2p_n$ be an integer




(3) Let $0 \le t_1 < p_n$ be the smallest integer greater than $x$ such that $gpf(x+t_1) \le p_n$ where gpf() = greatest prime factor.



(4) It is clear that $x+t_1$ consists of at least one prime divisor $q$ where $q \le p_n$



(5) Let $t_1 < t_2 < p_n$ be the second smallest integer greater than $x$ such that $gpf(x+t_2) \le p_n$.



(6) Let $f = gcd(x + t_1,t_2 - t_1)$ where gcd() = greatest common divisor.



(7) Let $u = \frac{x+t_1}{f}, v = \frac{t_2-t_1}{f}$ so that $u > 2$ and $1 \le v < p_n$ and $gcd(u+v,x+t_1)=1$




(8) $x+t_2 = uf + vf = f(u+v)$ and since $u+v > 3$, there exists a prime $q$ that divides $u+v$ but does not divide $x+t_1$.



(9) Let $t_2 \le t_3 < p_n$ be the third smallest integer greater than $x$ such that $gpf(x+t_3) \le p_n$



(10) We can use the same arguments as steps (5) thru steps (8) to show that $x+t_3$ contains a prime divisor relatively prime to $x+t_1$ and relatively prime to $x+t_2$




  • Let $f_1 = gcd(x+t_1,t_3-t_1), u_1 = \frac{x+t_1}{f_1}, v_1 = \frac{t_3-t_1}{f1}$

  • Let $f_2 = gcd(x+t_2,t_3-t_2), u_2 = \frac{x+t_2}{f_2}, v_2 = \frac{t_3-t_2}{f_2}$

  • $x+t_3 = f_1(u_1 + v_1) = f_2(u_2 + v_2)$ and $gcd(u_1 + v_1,x+t_1)=1, gcd(u_2 + v_2,x+t_2)=1$


  • Let $h = gcd(f_1,f_2)$ so that $gcd(\frac{f_1}{h},\frac{f_2}{h})=1$

  • Then, $\frac{f_1}{h}(u_1 + v_1) = \frac{f_2}{h}(u_2+v_2)$

  • And: $\frac{u_1+v_1}{\frac{f_2}{h}} = \frac{u_2+v_2}{\frac{f_1}{h}}$



(11) We can repeat this argument until $x+t_n$ at which point there are no more primes less than or equal to $p_n$.



(12) We can thus use this same argument to show that all remaining integers in the sequence $x,x+1, x+2, \dots x+k-1$ have at least one prime divisor greater than $p_n$.



Of course, in order to make this argument, $x$ may well need to be greater than $(p_n) ^ n$ since I am assuming that at each point $\frac{u_i + v_i}{\frac{f_i}{h}} > p_n$.





  • Is my reasoning sound?


  • Is this a known property of large numbers?


  • Is there a more precise formulation for smaller numbers? For example, my argument seems like it could be improved to argue that for $x > 2p_n$, there are at least $2$ numbers with a prime divisor greater than $p_n$.







Edit: I found a simpler argument (modified on 12/28/2017)





  • Let $w > 1$ be an integer

  • Let $p_n$ be the $n$th prime such that $p_n \le w < p_{n+1}$

  • Let $R(p,w)$ be the largest integer $r$ such that $p$ is a prime and $p^r \le w$ but $p^{r+1} > w$

  • Let $x > \prod\limits_{p < w} p^{R(p,w)}$ be an integer

  • Let $i$ be an integer such that $0 \le i < w$



I claim that if $gpf(x+i) \le p_n$, then there exists $k,v$ such that $1 \le k \le n$ and $(p_k)^v \ge w$ and $(p_k)^v | x+i$




Assume no such $k,v$ exists. It follows that $x+i \le \prod\limits_{p < w} p^{R(p,w)} < x$, which goes against the assumption.



I also claim that there are at most $n$ instances where $gpf(x+i) \le p_n$.



Assume that there exists integers $v_2 > v_1$ and $i \ne j$ where $(p_k)^{v_1} | x+i$ and $(p_k)^{v_2} | x+j$.



Then there exists positive integers $a,b$ such that $a(p_k)^{v_1} = x+i$ and $b(p_k)^{v_2} = x+j$



Let $u = x+j - x - i = j - i = (p_k)^{v_1}(a - b(p_k)^{v_2 - v_1})$




We can assume $u$ is positive since if it were negative, we could set $u = x+i - x - j$ instead.



We can assume therefore that $a - b(p_k)^{v_2 - v_1} \ge 1$.



But now we have a contradiction since $w > j - i$ but $(p_k)^{v_1} \ge w$.


Answer



I think your second proof is correct. I'm going to rewrite it:



Theorem (Sylvester's theorem generalization):




Let $n,k\in\mathbb{N}$ with $n\geq$ lcm$(1,\ldots,k)$, and let $\pi(x):=\sum_{p\leq x} 1$ be the number of primes not greater than $x$. Then in the interval $[n,n+k]$ there are at least $k+1-\pi(k)$ integers $n_i$ with a prime factor $p_i>k$.



Proof: For $p$ prime let $\nu_p(k)$ be the $p$-adic valuation of $k$. Let gpf$(x)$ be the greatest prime factor of $x$ and $p_j$ be the $j$-th prime. Consider $0\leq i\leq k$.



Suppose that $i$ is such that gpf$(n+i)\leq p_{\pi(k)}$ ($p_{\pi(k)}$ is the greatest prime not greater than $k$). Then there exist a prime $p_i\leq p_{\pi(k)}$ and an exponent $v_i\in\mathbb{N}$ such that $p_i^{v_i}|n+i$ and $p_i^{v_i}>k$, as otherwise
$$n+i\leq\displaystyle\prod_{p\leq k}p^{\nu_p(k)}=\text{lcm}(1,\ldots,k)\leq n,$$ a contradiction.



Now pick $i\neq j\in\{0,\ldots,k\}$ such that gpf$(n+i)$, gpf$(n+j)\leq p_{\pi(k)}$ with $p_i=p_j$. Then $p_i^{v_i}|n+i$ and $p_i^{v_j}|n+j$, so $p_i^{\min\{v_i,v_j\}}\mid i-j$; but $0<|i-j|\leq k<p_i^{\min\{v_i,v_j\}}$, which is impossible. Therefore $p_i\neq p_j$.




Thus, to every integer $i$ such that gpf$(n+i)\leq p_{\pi(k)}$ there corresponds a different prime $p_i\leq p_{\pi(k)}$, so that there can be at most $\pi(k)$ integers of this form. Hence there are at least $k+1-\pi(k)$ numbers $n+i\in [n,n+k]$ such that gpf$(n+i)\geq p_{\pi(k)+1}>k$.



Corollary (Grimm's conjecture): If $n\geq$lcm$(1,\ldots,k)$, then for every integer $n_i\in[n,n+k]$ there is a different prime $p_i$ such that $p_i|n_i$ (i.e., Grimm's conjecture is true for this choice of $n$ and $k$).



Proof: If gpf$(n+i)\leq p_{\pi(k)}$, pick $p_i$ (we already know $p_i\neq p_j$ if $i\neq j$). Otherwise gpf$(n+i)>k$ and this factor cannot divide any other $n+j$ with $i\neq j\leq k$.



In fact, the two results are equivalent:



Lemma: Grimm's implies Sylvester's.




Proof: If there is a different prime $p_i|n_i$ for every $n_i\in[n,n+k]$, then as there are $\pi(k)$ primes below $k$, there must be at least $k+1-\pi(k)$ numbers $n_i$ such that $p_i>k$.






Now that I have put it like this, I realize that this theorem (and its proof!) are a particular case of Theorem 1 of M. Langevin, Plus grand facteur premier d'entiers en progression arithmétique, Séminaire Delange-Pisot-Poitou. Théorie des nombres (1976-1977), 18(1), 1-7. So this was known (although perhaps not very well known!).



Observe that Langevin manages to prove the result with the less restrictive condition that $n+i$ does not divide lcm$(1,\ldots,k)$ for any $i\in\{0,\ldots,k\}$. We can adapt your proof to get this condition: if gpf$(n+i)\leq p_{\pi(k)}$ and $n+i\not|$lcm$(1,\ldots,k)$ then there must be a prime $p_i\leq p_{\pi(k)}$ and an exponent $v_i\in\mathbb{N}$ such that $p_i^{v_i}|n+i$ and $p_i^{v_i}>k$. The proof then follows as before.
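A small empirical check of the count in the theorem, in case it helps build confidence (the parameter $k=10$ and the test values of $n$ are arbitrary choices; sympy is assumed to be available for factorisation):

```python
# Count integers in [n, n+k] whose greatest prime factor exceeds k.
from math import lcm
from sympy import primefactors, primepi

k = 10
n0 = lcm(*range(1, k + 1))  # lcm(1,...,10) = 2520
for n in [n0, n0 + 1, 5 * n0]:
    good = sum(1 for i in range(k + 1) if max(primefactors(n + i)) > k)
    print(n, good, k + 1 - primepi(k))  # 'good' should be >= k + 1 - pi(k) = 7
```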


limits - How to prove that $\lim\limits_{n \to \infty} \frac{k^n}{n!} = 0$




It recently came to my mind, how to prove that the factorial grows faster than the exponential, or that the linear grows faster than the logarithmic, etc...




I thought about writing:
$$
a(n) = \frac{k^n}{n!} = \frac{ k \times k \times \dots \times k}{1\times 2\times\dots\times n} = \frac k1 \times \frac k2 \times \dots \times \frac kn = \frac k1 \times \frac k2 \times \dots \times \frac kk \times \frac k{k+1} \times \dots \times \frac kn
$$
It's obvious that after $k/k$, every factor is smaller than $1$, and as $n$ increases, $k/n$ gets closer to $0$, since $\lim_{n \to \infty} (k/n) = 0$ for any constant $k$.



But, I think this is not a clear proof... so any hint is accepted.
Thank you for consideration.


Answer



If you know that this limit exists, you have

$$
\lim_{n \to \infty} \frac{k^n}{n!} = \lim_{n \to \infty} \frac{k^{n+1}}{(n+1)!} = \lim_{n \to \infty} \frac k{n+1} \frac{k^n}{n!} = \left(\lim_{n \to \infty} \frac k{n+1} \right) \left( \lim_{n \to \infty} \frac {k^n}{n!} \right) = 0.
$$
Can you think of a short way to show the limit exists? (You need existence to justify my factoring of the limits at the end. If you don't have that then there's no reason for equality to hold.)


Calculate limit involving binomial coefficient



How can I calculate this limit? Let $p\in[0,1]$ and $k\in\mathbb{N}$.



$$\lim\limits_{n\to\infty}\binom{n}{k}p^k(1-p)^{n-k}.$$



Any idea how to do it ?


Answer



A sketch of a proof: the general term is
$$

\frac{n(n-1)(n-2)\dots(n-k+1)}{k!} \left(\frac p{1-p}\right)^k (1-p)^n
\\= A_k n(n-1)(n-2)\dots(n-k+1)(1-p)^n
$$ for a certain $A_k$ which does not influence the convergence.



When $n\to\infty $ this is equivalent to
$$
a_n = A_k n^k(1-p)^n
$$



But $a_{n+1}/a_n = \left(\frac{n+1}{n}\right)^k(1-p) < 1-\frac p2$ as soon as $p\neq 0$, for $n$ big enough. So the limit is $0$ in that case (prove it using induction). The remaining case $p=0$ is immediate, since then the original expression is $0$ for every $n$ once $k\geq 1$.
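As an illustration of this answer's conclusion (not a proof), here is a quick numerical check with arbitrarily chosen parameters $p=0.3$ and $k=2$:

```python
# C(n, k) * p^k * (1-p)^(n-k) tends to 0 as n grows.
from math import comb

p, k = 0.3, 2
for n in [10, 50, 100, 200]:
    print(n, comb(n, k) * p**k * (1 - p) ** (n - k))
```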



discrete mathematics - Find All Solutions to System of Congruence



$$
\begin{cases}
x\equiv 2 \pmod{3}\\
x\equiv 1 \pmod{4}\\
x\equiv 3 \pmod{5}

\end{cases}
$$



$
n_1=3\\
n_2=4\\
n_3=5\\
N=n_1 * n_2 * n_3 =60\\
m_1 = 60/3 = 20\\
m_2 = 60/4 = 15\\

m_3 = 60/5 = 12\\
gcd(20,3)=1=-20*1+3*7\\
gcd(15,4)=1=-15*1+4*4\\
gcd(12,5)=1=12*3-5*7\\
x=-20*2-15*1+12*3\equiv -19 \equiv 41 \pmod{60}\\
$



The above is what I've tried so far. Can someone tell me where I'm going wrong? It's my first time doing this and looking at the explanation and examples for the Chinese Remainder Theorem makes my head want to explode.


Answer



I think you're a bit confused about the recipe used to solve the system of congruences using the Chinese Remainder Theorem. Using your notation, the actual solution is given by

\begin{align*}
x = 2 \cdot m_1 \cdot ({m_1}^{-1} \text{ mod } 3) + 1 \cdot m_2 \cdot ({m_2}^{-1} \text{ mod } 4) + 3 \cdot m_3 \cdot ({m_3}^{-1} \text{ mod } 5) \, .
\end{align*}
Let's see why this answer works. Since $m_1$ is divisible by both $4$ and $5$, the first term is "invisible" when we consider $x$ mod $4$ and mod $5$: it is congruent to $0$ mod $4$ and mod $5$, so it doesn't contribute to the answer of the last two congruences. Thus, we can focus on making this first term satisfy the first congruence. Okay, so far we know the first term should have as factors $2$ (the congruence we want) and $m_1$ (to get rid of the effect mod $4$ and $5$). But now we don't satisfy the first congruence; the $m_1$ throws things off. To fix this, we multiply by the ${m_1}^{-1}$ mod $3$. Then mod $3$ we have
\begin{align*}
x &= 2 \cdot m_1 \cdot ({m_1}^{-1} \text{ mod } 3) + 1 \cdot m_2 \cdot ({m_2}^{-1} \text{ mod } 4) + 3 \cdot m_3 \cdot ({m_3}^{-1} \text{ mod } 5)\\
&\equiv 2 \cdot m_1 \cdot ({m_1}^{-1} \text{ mod } 3) \equiv 2 \pmod{3}
\end{align*}
as desired. Similar reasoning shows that $x$ satisfies the given congruences mod $4$ and $5$.




You have the right idea with your solution, but you're missing some factors. Since you've shown $1 = 20 \cdot (-1) + 3 \cdot 7 \equiv 20 \cdot (-1) \pmod{3}$, then
$$
{m_1}^{-1} \text{ mod } 3 = 20^{-1} \text{ mod } 3 = -1 \equiv 2 \pmod{3} \, .
$$
So the first term should be $2 \cdot 20 \cdot 2$. Can you figure out the other two terms now?
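If you later want to confirm the result of the recipe numerically, a short check along these lines works (it uses Python's modular inverse `pow(m, -1, n)`, available in Python 3.8+; note it computes the full answer, so skip it if you want to finish by hand first):

```python
# x = 2*m1*(m1^-1 mod 3) + 1*m2*(m2^-1 mod 4) + 3*m3*(m3^-1 mod 5), reduced mod 60.
m1, m2, m3 = 20, 15, 12
x = 2 * m1 * pow(m1, -1, 3) + 1 * m2 * pow(m2, -1, 4) + 3 * m3 * pow(m3, -1, 5)
print(x % 60, (x % 3, x % 4, x % 5))  # 53 (2, 1, 3)
```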


calculus - Differential Notation Magic in Integration by u-Substitution





I'm really confused now. I always thought that the differential notation $\frac{df}{dx}$ was just that, a notation.



But somehow when doing integration by u-substitution I'm told that you can turn something like this $\frac{du}{dx} = 2x\;$ into this $\;du = 2x\ dx$.



But how is that even possible? I understand that the notation comes from the fact that $\frac{du}{dx}$ actually means the limit of the difference in $u$ over the difference in $x$, with $\Delta x$ approaching $0$.



$$u'(x) = \frac{du}{dx} = \frac{du(x)}{dx} = \lim_{\Delta x\to 0} \frac{u(x+\Delta x)\ -\ u(x)}{(x+\Delta x) - x} = \lim_{\Delta x\to 0} \frac{u(x+\Delta x)\ -\ u(x)}{\Delta x}$$



So if $\frac{df}{dx}$ is just a notation for the limit mentioned above, then what is the underlying argument to say that you can treat $\frac{du}{dx}$ as if it were an actual fraction?




Appreciate the help =)


Answer



It is really just a notation. And the trick with the substitution e.g. $du = 2xdx$ does not have any mathematical meaning, it is just a convenient way of memorizing the integration by substitution rule/law/theorem:



$$\int_a^b f(\phi(t)) \phi'(t) dt = \int_{\phi(a)}^{\phi(b)} f(x)dx $$



Going from left to right you might want to make the substitution $x=\phi(t)$. Our mnemonic tells us that $\frac{dx}{dt} = \phi'(t)$, or in other words that you have to replace $\phi'(t)dt$ with $dx$ if you replace $\phi(t)$ with $x$. If you look again at the equation above you see that this mnemonic does a nice job, so we do not have to memorize this whole equation.



I do use the mnemonic but still I always keep this equation in mind when doing so.


elementary number theory - Prove $\log_7 n$ is either an integer or irrational

I have been trying to prove a certain claim and have hit a wall. Here is the claim...




Claim: If $n$ is a positive integer then
$\log_{7}n$ is an integer or it is irrational




Proof (so far): Let $y=\log_{7}n$. Note that to say
$n$ is a positive integer is equivalent

to saying that n is a non-zero natural
number. We will proceed by trying
to prove the contrapositive.



Claim (alternate version): If $y$ is
a rational number and is not an integer,
then either $n$ is zero or it is not a
natural number.



Given the above we can assume that there

exist integers $a$ and $b$ such that $y$ equals
the quotient of $a$ over $b$. We can also
assume from the premises that $b$ does not
equal one and that $a$ and $b$ are relatively
prime. Note thus that $n$ may be considered
equal to seven raised to the power of $a$
over $b$. Further note that because of this
$n$ cannot be zero or negative. To prove
the claim, one must prove that $n$ is not
a natural number.




Where I am stuck: How can I guarantee from here that $n$
is not a natural number? Is there any
way to concretely ensure that there are
no integers $a$ and $b$ such that the
fractional exponent above will never give
an integer when raising seven to its power?



I have been trying to play around with
a proof that there is no such thing

as a rational root of a prime number, but
that hasn't shook anything loose so far.

modular forms - Weight $2$ Eisenstein series transformation



Let
$$G_2(\tau) = \sum_{c \in \mathbb{Z}} \sum_{d \in \mathbb{Z}}\frac{1}{(c\tau + d)^2}$$
be the weight $2$ Eisenstein series for $\tau \in \mathbb{H}$. I'm trying to do an exercise from Diamond and Shurman aimed at establishing the transformation law
$$(c\tau + d)^{-2}G_2\left(\frac{a\tau + b}{c\tau + d}\right) = G_2(\tau) - \frac{2\pi i c}{c\tau + d} \quad \textrm{ for }\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \textrm{SL}_2(\mathbb{Z}).$$
The exercise is to show that if the above holds for $\gamma_1,\gamma_2 \in \textrm{SL}_2(\mathbb{Z})$, then it holds for the product $\gamma_1\gamma_2$. Define $j(\gamma,\tau) = c\tau + d$ for $\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \textrm{SL}_2(\mathbb{Z})$. Let $c,c'$ be the bottom left entries of $\gamma_1$ and $\gamma_2$, respectively. Using the identity $j(\gamma_1\gamma_2,\tau) = j(\gamma_1,\gamma_2\tau)j(\gamma_2,\tau)$, we see that

\begin{equation*}
\begin{aligned}
\frac{G_2(\gamma_1\gamma_2\tau)j(\gamma_2,\tau)^2}{j(\gamma_1\gamma_2,\tau)^2} &= \frac{G_2(\gamma_1(\gamma_2\tau))}{j(\gamma_1,\gamma_2\tau)^2} \\
&= G_2(\gamma_2\tau) - \frac{2\pi i c}{j(\gamma_1,\gamma_2\tau)} \\
&= j(\gamma_2,\tau)^2\left(G_2(\tau) - \frac{2\pi i c'}{j(\gamma_2,\tau)}\right) - \frac{2\pi i c}{j(\gamma_1,\gamma_2\tau)}.
\end{aligned}
\end{equation*}

Dividing by $j(\gamma_1\gamma_2,\tau)^2$ and using the aforementioned identity again, we get
\begin{equation*}
\begin{aligned}\frac{G_2(\gamma_1\gamma_2\tau)}{j(\gamma_1\gamma_2,\tau)^2} &= G_2(\tau) - \frac{2\pi i c'}{j(\gamma_2,\tau)} - \frac{2\pi ic}{j(\gamma_2,\tau)^2j(\gamma_1,\gamma_2\tau)} \\

&= G_2(\tau) - \frac{2\pi i c'}{j(\gamma_2,\tau)} - \frac{2\pi ic}{j(\gamma_2,\tau)j(\gamma_1\gamma_2,\tau)}.
\end{aligned}
\end{equation*}

For the transformation law to hold, we would need
$$\frac{c'}{j(\gamma_2,\tau)} + \frac{c}{j(\gamma_2,\tau)j(\gamma_1\gamma_2,\tau)} = \frac{C}{j(\gamma_1\gamma_2,\tau)},$$
where $C$ is the bottom left entry of $\gamma_1\gamma_2$. But the left-hand side is equal to
$$\frac{j(\gamma_1\gamma_2,\tau)c' + c}{j(\gamma_2,\tau)j(\gamma_1\gamma_2,\tau)},$$
so this would say that
$$\frac{j(\gamma_1\gamma_2,\tau)c' + c}{j(\gamma_2,\tau)} = C$$
is constant. How can this possibly be? I cannot find where I went wrong in this tedious calculation and it's unbelievably frustrating.



Answer



There's nothing wrong:
$$Cj(\gamma_2,\tau)-c'j(\gamma_1\gamma_2,\tau)\\=(ca'+dc')(c'\tau+d')-c'\big((ca'+dc')\tau+(cb'+dd')\big)\\=(ca'+dc')d'-c'(cb'+dd')=c(a'd'-b'c')=c.$$
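For anyone who wants to double-check this identity by machine, here is a short symbolic computation (a verification sketch only; sympy is assumed, and the primed entries of $\gamma_2$ are written `ap, bp, cp, dp`):

```python
import sympy as sp

a, b, c, d, ap, bp, cp, dp, tau = sp.symbols("a b c d ap bp cp dp tau")
g1 = sp.Matrix([[a, b], [c, d]])      # gamma_1
g2 = sp.Matrix([[ap, bp], [cp, dp]])  # gamma_2
g12 = g1 * g2
C = g12[1, 0]                         # bottom-left entry of gamma_1*gamma_2
j2 = cp * tau + dp                    # j(gamma_2, tau)
j12 = g12[1, 0] * tau + g12[1, 1]     # j(gamma_1*gamma_2, tau)
print(sp.factor(C * j2 - cp * j12))   # c*(ap*dp - bp*cp), i.e. c since det(gamma_2) = 1
```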


Perfect square palindromic numbers

A palindromic number is one which when expressed in base $10$ with no leading zeros, reads the same left to right and right to left. For example, $44944$ is a (base 10) palindrome.



I can find quite a few palindromes which are also perfect squares; indeed there are an infinite set of them of the form $1,121,10201,1002001, \ldots$. In each of these cases, however, the square root of the palindrome is itself a palindrome.



I would like to know about palindromes which are the square of non-palindromes:




  • Are there any perfect square palindromes whose square roots are not palindromic?


  • Is there an infinite set of perfect square palindromes whose square roots are not palindromic?



  • Are the answers to these questions different in other bases?


Friday 28 March 2014

solid geometry - Volume and surface area of a sphere by polyhedral approximation



Exposition:



In two dimensions, there is a (are many) straightforward explanation(s) of the fact that the perimeter (i.e. circumference) and area of a circle relate to the radius by $2\pi r$ and $\pi r^2$ respectively. One argument proceeds by approximating these quantities using regular cyclic polygons (equilateral, equiangular, on the circle of radius $r$), noting that such a polygon with $n$ sides can be decomposed into $n$ isosceles triangles with peak angle $\frac{2\pi}{n}$, base length $~2r\sin\frac{\pi}{n}$, and altitude $~r \cos \frac{\pi}{n}$ . Then, associating the circle with the limiting such polygon, we have,
$$
P = \lim_{n\to\infty} n \cdot \text{base length } = \lim_{n\to\infty}2r \cdot \pi \frac{n}{\pi} \sin \frac{\pi}{n} = 2\pi r ~~,
$$
and similarly, (via trig identity)
$$

A = \lim_{n\to\infty} n\left(\frac{1}{2} \text{ base } \times \text{ altitude }\right) = \lim_{n\to\infty}\frac{r^2\cdot 2\pi}{2} \frac{n}{2\pi} \sin \frac{2\pi}{n} = \pi r^2 ~~.
$$
Question:



Could someone offer intuition, formulas, and/or solutions for performing a similarly flavored construction for the surface area and volume of a sphere?



Images and the spatial reasoning involved are crucial here, as there are only so many platonic solids, so I am not seeing immediately the pattern in which the tetrahedra (analogous to the 2D triangles) will be arranged for arbitrarily large numbers of faces. Thus far my best result has been a mostly-rigorous construction relying on this formula (I can write up this proof on request). What I'd like to get out of this is a better understanding of how the solid angle of a vertex in a polyhedron relates to the edge-edge and dihedral angles involved, and perhaps a "dimension-free" notion for the ideas used in this problem to eliminate the need to translate between solid (2 degrees of freedom) and planar (1 degree) angles.


Answer



Alright, I've come up with a proof in what I think is the right flavor.




Take a sphere with radius $r$, and consider the upper hemisphere. For each $n$, we will construct a solid out of stacks of pyramidal frustums with regular $n$-gon bases. The stack will be formed by placing $n$ of the $n$-gons perpendicular to the vertical axis of symmetry of the sphere, centered on this axis, inscribed in the appropriate circular slice of the sphere, at the heights $\frac{0}{n}r, \frac{1}{n}r, \ldots,\frac{n-1}{n}r $ . Fixing some $n$, we denote by $r_\ell$ the radius of the circle which the regular $n$-gon is inscribed in at height $\frac{\ell}{n}r$ . Geometric considerations yield $r_\ell = \frac{r}{n}\sqrt{n^2-\ell^2}$ .



As noted in the question, the area of this polygonal base will be $\frac{n}{2}r_\ell^2 \sin\frac{2\pi}{n}$ for each $\ell$ . I am not sure why (formally speaking) it is reasonable to assume, but it appears visually (and appealing to the 2D case) that the sum of the volumes of these frustums should approach the volume of the hemisphere.



So, for each $\ell = 1,2,\ldots,n-1$, the term $V_\ell$ we seek is $\frac{1}{3}B_1 h_1 - \frac{1}{3}B_2 h_2 $, the volume of some pyramid minus its top. Using similarity of triangles and everything introduced above, we can deduce that
$$
B_1 = \frac{n}{2}r_{\ell-1}^2 \sin\frac{2\pi}{n}~,~B_2 = \frac{n}{2}r_\ell^2 \sin\frac{2\pi}{n} ~,~h_1 = \frac{r}{n}\frac{r_{\ell-1}}{r_{\ell-1}-r_{\ell}}~,~h_2=\frac{r}{n}\frac{r_{\ell}}{r_{\ell-1}-r_{\ell}} ~~.
$$
So, our expression for $V_\ell$ is
$$

\frac{r}{6} \sin\frac{2\pi}{n} \left\{ \frac{r_{\ell-1}^3}{r_{\ell-1}-r_{\ell}} - \frac{r_{\ell}^3}{r_{\ell-1}-r_{\ell}} \right\} = \frac{\pi r}{3n} \frac{\sin\frac{2\pi}{n}}{2\pi/n} \left\{ r_{\ell-1}^2 + r_\ell^2 + r_{\ell-1}r_\ell \right\}
$$ $$
= \frac{\pi r^3}{3n^3} \frac{\sin\frac{2\pi}{n}}{2\pi/n} \left\{ (n^2 - (\ell-1)^2) + (n^2-\ell^2) + \sqrt{(n^2-\ell^2)(n^2-(\ell-1)^2)} \right\} ~~.
$$
So, we consider $ \lim\limits_{n\to\infty} \sum_{\ell=1}^{n-1} V_\ell$ . The second factor involving sine goes to 1, and we notice that each of the three terms in the sum is quadratic in $\ell$, and so the sum over them should intuitively have magnitude $n^3$. Hence, we pass the $\frac{1}{n^3}$ into the sum and evaluate each sum and limit individually, obtaining 2/3, 2/3, and 2/3 respectively (the first two are straightforward, while the third comes from the analysis in this answer).



Thus, we arrive at $\frac{\pi r^3}{3} (2/3+2/3+2/3) = \frac{2}{3}\pi r^3$ as the volume of a hemisphere, as desired.
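If it helps, the convergence of the frustum sums can also be observed numerically; the following only evaluates the formula above with $r=1$ (so the target value is $2\pi/3\approx 2.0944$):

```python
import math

def frustum_sum(n, r=1.0):
    # Sum of the V_l from the construction above for a given n.
    shape = math.sin(2 * math.pi / n) / (2 * math.pi / n)  # -> 1 as n -> infinity
    total = 0.0
    for ell in range(1, n):
        bracket = (n**2 - (ell - 1) ** 2) + (n**2 - ell**2) \
                  + math.sqrt((n**2 - ell**2) * (n**2 - (ell - 1) ** 2))
        total += math.pi * r**3 / (3 * n**3) * shape * bracket
    return total

for n in [10, 100, 1000]:
    print(n, frustum_sum(n), 2 * math.pi / 3)
```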



So was this too excessive or perhaps worth it? I'll leave that to all of you. :)


elementary number theory - Show that $30 \mid (n^9 - n)$




I am trying to show that $30 \mid (n^9 - n)$. I thought about using induction but I'm stuck at the induction step.



Base Case: $n = 1 \implies 1^ 9 - 1 = 0$ and $30 \mid 0$.



Induction Step: Assuming $30 \mid k^9 - k$ for some $k \in \mathbb{N}$, we have $(k+1)^9 - (k+1) = (k^9 - k) + [9k^8 + 36k^7 + 84k^6 + 126k^5 + 126k^4 + 84k^3 + 36k^2 + 9k]$.



However I'm not sure where to go from here.


Answer



And here are the congruences requested to complement the answer by Simon S




$$\begin{array}{c|cccc}
n&n-1&n+1&n^2+1&n^4+1\\
\hline
0&4&1&1&1\\
1&0&2&2&2\\
2&1&3&0&2\\
3&2&4&0&2\\
4&3&0&2&2
\end{array}$$




And one can see that one of these factors is always $\equiv 0\pmod{5}$
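A brute-force check, as a complement to the table (not a proof): since $n^9-n \bmod 30$ only depends on $n \bmod 30$, it suffices to test one full residue cycle.

```python
# 30 divides n^9 - n for every residue class mod 30.
print(all((n**9 - n) % 30 == 0 for n in range(30)))  # True
```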


elementary set theory - A question regarding joint cumulative probability distribution functions



Consider two random variables $X$ and $Y$ and their joint cumulative probability distribution function $F$. I'm attempting to show that $P\{a_1 < X < a_2,\, b_1 < Y < b_2\}$ can be expressed in terms of $F$ evaluated at the four points $(a_1,b_1)$, $(a_1,b_2)$, $(a_2,b_1)$, $(a_2,b_2)$.


My attempt
$$P\{a_1 < X < a_2,\, b_1 < Y < b_2\} = P(\{X>a_1\} \cap \{X<a_2\} \cap \{Y>b_1\} \cap \{Y<b_2\})$$
$$= 1 - P\big((\{X>a_1\} \cap \{X<a_2\} \cap \{Y>b_1\} \cap \{Y<b_2\})^c\big)$$
$$= 1 - P(\{X>a_1\}^c \cup \{X<a_2\}^c \cup \{Y>b_1\}^c \cup \{Y<b_2\}^c)$$
$$= 1 - P(\{X\leq a_1\} \cup \{X\geq a_2\} \cup \{Y\leq b_1\} \cup \{Y\geq b_2\})$$



So here I suppose I must apply the Inclusion–exclusion principle and because
$\{X\leq a_1\}$, $\{X\geq a_2\}$ and $\{Y\leq b_1\}$, $\{Y\geq b_2\}$ are respectively mutually exclusive we have that the terms will only be probabilities defined on single sets and on the intersection of two non-mutually exclusive sets. Despite of this it still seems a bit too cumbersome and I am not sure how to simplify the probabilities of intersections.




$$1 - P(\{X\leq a_1\}) - P(\{X\geq a_2\}) - P(\{Y\leq b_1\}) - P(\{Y\geq b_2\}) $$
$$+ \,P(\{X\leq a_1\}\cap \{Y\leq b_1\}) + P(\{X\leq a_1\}\cap \{Y\geq b_2\}) $$
$$+ \,P(\{X\geq a_2\}\cap \{Y\leq b_1\}) + P(\{X\geq a_2\}\cap \{Y\geq b_2\}) $$



It just seems messy and I don't know how to continue. Am going in the right direction at least or am I completely off?


Answer



\begin{equation}
P(a_1<X\leq a_2,\ b_1<Y\leq b_2) = P(X\leq a_2,\ b_1<Y\leq b_2) - P(X\leq a_1,\ b_1<Y\leq b_2)
\end{equation}




Also



\begin{equation}
P(X\leq a_2,\ b_1<Y\leq b_2) = F(a_2,b_2) - F(a_2,b_1)
\end{equation}



\begin{equation}
P(X\leq a_1,\ b_1<Y\leq b_2) = F(a_1,b_2) - F(a_1,b_1)
\end{equation}




And your result follows.


Arithmetical progression and quadratic equation

Determine the real number $k$ such that the roots of the equation $x^{4}-(3k+2)x^{2}+k^{2}=0$ form an arithmetic progression.



I don't know how to start.

Thursday 27 March 2014

calculus - Prove the limit is $\sqrt{e}$.





How do you show
$$\lim\limits_{k \rightarrow \infty} \frac{\left(2+\frac{1}{k}\right)^k}{2^k}=\sqrt{e}$$




I know that $$\lim\limits_{k \to \infty} \left(1+\frac{1}{k}\right)^k=e$$ but I don't know how to apply this.


Answer



Hint: $$\frac{a^k}{b^k}=\left(\frac ab\right)^k,$$ and in general, $$\lim_{k\to\infty}\left(1+\frac xk\right)^k=e^x.$$
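For completeness, one way to finish from this hint (this step is not in the original answer): write
$$\frac{\left(2+\frac1k\right)^k}{2^k}=\left(1+\frac{1}{2k}\right)^{k}=\left(\left(1+\frac{1}{2k}\right)^{2k}\right)^{1/2}\longrightarrow e^{1/2}=\sqrt e,$$
using the second limit and the continuity of the square root.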


calculus - Finding the antiderivative of $\sin^6x\cos^2x$



I need to find the antiderivative of
$$\int\sin^6x\cos^2x \mathrm{d}x.$$ I tried symbolizing $u$ as squared $\sin$ or $\cos$ but that doesn't work. Also I tried using the identity of $1-\cos^2 x = \sin^2 x$ and again if I symbolize $t = \sin^2 x$ I'm stuck with its derivative in the $dt$.




Can I be given a hint?


Answer



Hint We can use double-angle identities to reduce powers. We could use $\cos 2t=2\cos^2 t-1$ and $\cos 2t=1-2\sin^2 t$. We end up with polynomial of degree $4$ in $\cos 2x$. Repeat the idea where needed.



It is more efficient in this case to use $\sin 2t=2\sin t\cos t$, that is, first rewrite our expression as $(\sin x\cos x)^2\sin^4 x$. Then rewrite as $\frac{1}{16}(\sin^2 2x)(1-\cos 2x)^2$. Expand the square. Not quite finished. But we end up having to integrate $\sin^2 2x$ (standard), $\sin^2 2x\cos 2x$ (simple substitution), and $\sin^2 2x\cos^2 2x$, a close relative of $\sin^2 4x$.



Remark: In this problem, like in a number of trigonometric integrations, it is possible to end up with dramatically different-looking answers. They all differ by constants, so we are saved by the $+C$.
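A quick numerical check of the key rewrite used in the second paragraph, in case it looks suspicious (sample points chosen arbitrarily):

```python
# sin^6 x cos^2 x == (1/16) sin^2(2x) (1 - cos 2x)^2
import math

for x in [0.2, 0.9, 1.7, 3.0]:
    lhs = math.sin(x) ** 6 * math.cos(x) ** 2
    rhs = math.sin(2 * x) ** 2 * (1 - math.cos(2 * x)) ** 2 / 16
    print(x, lhs, rhs, abs(lhs - rhs) < 1e-12)
```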


Wednesday 26 March 2014

False Proof of 1=-1

I found the following proof online and am confused as to where the error occurs.



$1=\sqrt{1}=\sqrt{(-1)(-1)}=\sqrt{-1}\sqrt{-1}=(\sqrt{-1})^2=-1$



My guess is that the error occurs here: $\sqrt{(-1)(-1)}=\sqrt{-1}\sqrt{-1}$, but I'm not sure how to show that.




Is my guess correct? How might I prove this?



Thank you!

linear algebra - Find the determinant of the following matrix

Find the determinant of the following matrix:
$$A = \begin{bmatrix}
1+x_1^2 &x_1x_2 & ... & x_1x_n \\
x_2x_1&1+x_2^2 &... & x_2x_n\\
...& ... & ... &... \\

x_nx_1& x_nx_2 &... & 1+x_n^2
\end{bmatrix}$$



I computed for the case $n=2$, and $n=3$ and guessed that $\det(A)$ should be $ 1+\sum_{i=1}^n x_i^2 $ but not sure how to proceed for any $n$.

real analysis - Show that the recursive sequence converges and find its limit




Let $x_1>2$ and $x_{n+1} := 1 + \sqrt{x_n - 1}$ for all $n \in \mathbb{N}$. Show that $\{x_n\}$ is decreasing and bounded below, and find its limit.



I showed that it's bounded below ($x_n>2$ for all $n$) using induction. But then I don't know how to do the rest. Any help would be much appreciated.


Answer



Any recursively defined sequence whose recursive relation (in this case $x \mapsto 1+\sqrt{x - 1}$) is continuous can only ever converge to a stationary point, i.e. a point $x$ such that if you enter $x$ into the relation, you get $x$ back.



This means that if your sequence converges to some $x$, then it must be the case that $$x=1+\sqrt{x-1}$$


Tuesday 25 March 2014

abstract algebra - gcd of two monic polynomials



I am trying to answer the following question but I'm having some trouble:



If $f$ and $g$ are monic polynomials in $Z[x]$, does their (monic) greatest common
divisor in $Q[x]$ necessarily have coefficients in $Z$?



I've tried using Gauss's lemma to get an answer but haven't really gotten anywhere. Any help would be appreciated, Thanks!



Answer



First, we show that a monic factor (in $\mathbb Q[X]$) of a monic polynomial in $\mathbb Z[X]$ is always in $\mathbb Z[X]$.



Suppose $f(x)=g(x)\cdot h(x)$, where $f\in \mathbb Z[X]$ and $g,h\in \mathbb Q[X]$ are monic.



Choose $k,l\in \mathbb Z$, such that $g'(x):=k\cdot g(x)$ and $h'(x):=l\cdot h(x)$ are primitive. Then, we have $kl\cdot f(x)=g'(x)\cdot h'(x)$



Because of Gauss's Lemma, $kl\cdot f(x)$ is primitive. But $kl$ is a common factor of the coefficients of $kl\cdot f(x)$. Hence, $kl$ must be a unit and therefore $g(x)$ and $h(x)$ must be in $\mathbb Z[X]$



This implies that a monic greatest common divisor of monic polynomials in $\mathbb Z[X]$ must be in $\mathbb Z[X]$.



Basic Probability Die Roll Game



I am not sure if I got the correct answers to these basic probability questions.




You and I play a die rolling game, with a fair die. The die is equally likely to land on any of its $6$ faces. We take turns rolling the die, as follows.




At each round, the player rolling the die wins (and the game stops) if (s)he rolls either "$3$", "$4$","$5$", or "$6$". Otherwise, the next player has a chance to roll.




  1. Suppose I roll 1st. What is the probability that I win in the 1st round?

  2. Suppose I roll in the 1st round, and we play the game for at most three rounds. What is the probability that
    a. I win?
    b. You win?
    c. No one wins? (i.e., I roll "$1$" or "$2$" in the 3rd round?)




My answers:





  1. $4/6$

  2. a. I think it is $(4/6)^2$ . Can someone explain why it isn't $4/6 + 4/6$ ?
    b. I think this is $4/6$.
    c. I think it is $(2/6)^3$


Answer




What is the probability that the player who rolls first wins in the first round?





Your answer of $4/6 = 2/3$ is correct.




What is the probability that the player who rolls first wins the game if the game lasts at most three rounds?




The player who rolls first if he or she wins during the first round, second round, or third round. We know that the probability that the player who rolls first has probability $2/3$ of winning in the first round.



For the player who rolls first to win in the second round, both players must roll a 1 or 2 in the first round, then the first player must roll a 3, 4, 5, or 6 in the second round. The probability that this event occurs is
$$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{27}$$




For the first player to win in the third round, both players must roll a 1 or 2 for the first two rounds, then the first player must roll a 3, 4, 5, or 6 in the third round. The probability that this occurs is
$$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{243}$$



Since these three events are mutually exclusive, the probability that the player who rolls first wins the game is
$$\frac{2}{3} + \frac{2}{27} + \frac{2}{243} = \frac{162}{243} + \frac{18}{243} + \frac{2}{243} = \frac{182}{243}$$



The probability that the first player wins cannot be
$$\frac{4}{6} + \frac{4}{6} = \frac{8}{6} = \frac{4}{3} > 1$$
since it is impossible for a probability to exceed $1$.





What is the probability that the player who rolls second wins the game if the game lasts at most three rounds.




The player who rolls second can win in the first round if the first player rolls a 1 or 2, then the second player rolls a 3, 4, 5, or 6. The probability that this event occurs is
$$\frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{9}$$
For the second player to win in the second round, the first and second players must both roll a 1 or 2 in the first round, the first player must roll a 1 or 2 in the second round, then the second player must roll a 3, 4, 5, or 6 in the second round. The probability that this event occurs is
$$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{81}$$
For the second player to win in the third round, both players must roll a 1 or 2 in the first two rounds, the first player must roll a 1 or 2 in the third round, then the second player must roll a 3, 4, 5, or 6 in the third round. The probability that this event occurs is

$$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{729}$$
Since these three events are mutually exclusive, the probability that the second player wins is
$$\frac{2}{9} + \frac{2}{81} + \frac{2}{729} = \frac{162}{729} + \frac{18}{729} + \frac{2}{729} = \frac{182}{729}$$




What is the probability that neither player wins a game that lasts at most three rounds?




For this event to occur, both players must roll a 1 or 2 in all three rounds. The probability that this event occurs is
$$\left(\frac{2}{6}\right)^6 = \left(\frac{1}{3}\right)^6 = \frac{1}{729}$$




Check: There are three possibilities: The first player wins, the second player wins, or neither player wins. Therefore, the probabilities of these three events should add up to $1$. Observe that
$$\frac{182}{243} + \frac{182}{729} + \frac{1}{729} = \frac{546}{729} + \frac{182}{729} + \frac{1}{729} = 1$$
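The same numbers can be reproduced with exact fractions, which may reassure anyone worried about the arithmetic (this is only a re-computation of the sums above):

```python
from fractions import Fraction

win, lose = Fraction(2, 3), Fraction(1, 3)
p_first = win + lose**2 * win + lose**4 * win         # first player wins in round 1, 2 or 3
p_second = lose * win + lose**3 * win + lose**5 * win  # second player wins in round 1, 2 or 3
p_nobody = lose**6                                     # six failed rolls in a row
print(p_first, p_second, p_nobody, p_first + p_second + p_nobody)
# 182/243 182/729 1/729 1
```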


elementary number theory - Prove that there is no polynomial with integer coefficients such that $p(a)=b,,p(b)=c,,p(c)=a$ for distinct integers $a,b,c$




Our teacher gave us this question but I am very stuck. I drew graphs to see why it can't be true but I didn't find anything. I see that if $p$ existed then: $$p(\cdots p(a))=a,\;p(\cdots p(b))=b,\;p(\cdots p(c))=c$$



(With $3n$ $p$'s). But I don't know where the contradiction is... Maybe we can work on the divisibility of the polynomial's coefficients or something?


Answer



Hint $\ $ Writing $\rm\,px\,$ for $\rm\,p(x)\,$ and applying the Factor Theorem we have



$$\rm\,\ a\!-\!b\mid pa\!-\!pb = b\!-\!c\mid pb\!-\!pc=c\!-\!a\mid pc\!-\!pa = a\!-\!b\ $$



therefore we deduce $\rm\,\ a\!-\!b\mid b\!-\!c\mid c\!-\!a\mid a\!-\!b\ $ hence $\rm\ \ldots$



Trigonometry equation $\cos(2x)-\cos(3x)=0$


$$\cos(2x)-\cos(3x)=0$$




I am trying to solve this equation but get stuck every time in the middle of the exercise. Can somebody help me please?

linear algebra - Find the determinant of $N times N$ matrix



I have the following $N \times N$ matrix.
\begin{vmatrix}
0 & 1 & 1 & \ldots & 1 \\

1 & a_1 & 0 & \ldots & 0 \\
1 & 0 & a_2 & \ldots & 0 \\
\vdots & \vdots& &\ddots& \vdots\\
1 & 0 & 0 & \ldots & a_n \\
\end{vmatrix}



There seems to be a pattern going on for the determinant of the $5 \times 5$ version of this matrix, but I'm not sure how I would find the determinant for the $N \times N$ one.


Answer



Transform the matrix by the (determinant-invariant) operation of adding $-\frac{1}{a_i}$ times the $(i+1)$th row to the first row, for each $i$. This gives us
\begin{vmatrix}

-\sum_i \frac{1}{a_i} & 0 & 0 & \ldots & 0 \\
1 & a_1 & 0 & \ldots & 0 \\
1 & 0 & a_2 & \ldots & 0 \\
\vdots & \vdots& &\ddots& \vdots\\
1 & 0 & 0 & \ldots & a_n \\
\end{vmatrix}

Then you have a lower triangular matrix whose determinant is just the product of the diagonal elements.
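A numerical sanity check of the resulting formula $\det = \left(-\sum_i \frac{1}{a_i}\right)\prod_i a_i$ (random nonzero $a_i$ are assumed; this is only an illustration, using numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=6)
n = len(a)
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1            # first row: 0, 1, 1, ..., 1
M[1:, 0] = 1            # first column: 1's below the corner
M[1:, 1:] = np.diag(a)  # diagonal a_1, ..., a_n
print(np.linalg.det(M), -np.sum(1 / a) * np.prod(a))
```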


polynomials - Determine the remainder when $f(x) = 3x^5 - 5x^2 + 4x + 1$ is divided by $(x-1)(x+2)$



This question arose while I was tutoring a student on the topic of the Remainder Theorem. Now, the Remainder Theorem tells us that when a polynomial $p(x)$ is divided by a linear factor $(x-a)$, the remainder is simply $p(a)$. However, in this case we have a product of linear factors.




Using the Remainder Theorem we can see that neither $(x-1)$ nor $(x+2)$ is a factor of $f(x)$. Also, if we try to find the remainder using long division, we get a relatively ugly remainder of
$$
3(14x - 13)
$$
I assume this is not the correct approach as all other questions in this topic used the Remainder Theorem. So perhaps there is a more elegant approach?


Answer



Hint: the remainder will be a polynomial of degree (at most) $1$ so:



$$f(x) = (x-1)(x+2)q(x) + ax + b$$




Substitute $x=1,-2$ in the above and you get two equations in $a,b$.





[ EDIT ]   For a less conventional approach (justified in the answer here) note that $(x-1)(x+2)=0 \iff x^2=-x+2$. Repeatedly using the latter substitution:

$$
\begin{align}
3x^5 - 5x^2 + 4x + 1 &= 3 (x^2)^2 \cdot x - 5(x^2) + 4x + 1 \\
&= 3(x^2-4x+4)x - 5(-x+2) + 4x +1 \\

&= 3(-x+2-4x+4)x + 9x -9 \\
&= -15(x^2)+ 18x + 9x - 9 \\
&= -15(-x+2) + 27 x - 9 \\
&= 42 x -39
\end{align}
$$
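If you want to confirm the remainder by machine, a one-off sympy check works (a verification aid, not the method intended by the exercise):

```python
import sympy as sp

x = sp.symbols("x")
f = 3 * x**5 - 5 * x**2 + 4 * x + 1
q, r = sp.div(f, (x - 1) * (x + 2), x)  # quotient and remainder
print(r)  # 42*x - 39
```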


elementary set theory - Let $A\Delta C\subseteq A\Delta B$. Prove $A\cap B \subseteq C$. (Proof.v)




Let $A\Delta C\subseteq A\Delta B$. ($\Delta$ denotes symmetric difference.)




Prove $A\cap B \subseteq C$.




I am getting ready for a test and I could really use proof verification and any help with this.



Proof: Let us look at the indicators, $x_{A\Delta C}=x_A+x_C-2x_Ax_C$, $x_{A\Delta B}=x_A+x_B-2x_Ax_B$, $x_{A\cap B}=x_Ax_B$.



Let $x_{A\cap B}(a)=1$. Then $x_{A\Delta B}(a)=0$ which means $x_{A\Delta C}(a)=0$. $x_A(a)=x_B(a)=1$ and therefore $x_C(a)$ must be 1. Therefore $x_{A\cap B}(a)=1\Rightarrow x_C(a)=1$ $\Rightarrow A\cap B \subseteq C$.


Answer



Your argument is flawless.




Depending on how much you did with indicators during your classes, you might want to elaborate on some of the steps, like:




  • $x_{A \Delta B}(a)=0 \to x_{A \Delta C}(a) = 0$

  • $x_{A \Delta C}(a) =0, x_A(a) = 1 \to x_C(a) = 1$


Monday 24 March 2014

functional equations - Are there any real-valued functions, besides logarithms, for which $f(x*y) = f(x) + f(y)$?

Is there any real-valued function, $f$, which is not a logarithm, such that $∀ x,y$ in $ℝ$ , $f(x*y) = f(x) + f(y)$?




So far, all I can think of is $z$ where $z(x) = 0$ $∀ x$ in $ℝ$



EDIT:



Functions having a domain of $ℝ^+$ or a domain of $ℝ$/{0} are acceptable as well.



What are examples of functions, $f$, from $ℝ$/{0} to $ℝ$ which are not logarithms, such that
$∀ x,y$ in $ℝ$, $f(x*y) = f(x) + f(y)$?

elementary number theory - Given two coprime integers, find multiples of them that differ by 1

I'll describe the problem with an example.




Find integers $n$ and $m$ such that $n\cdot13 + m\cdot101 = 1$.



This is the same as solving the equation $n\cdot13 \equiv 1 \pmod {101}$




I'm revising for a maths exam in a few days and it seems like a lot of the questions and examples rely on an ability to solve this type of problem. I can't seem to find a method in my notes, the solutions just get plucked out of thin air!




What is a neat way of doing it on pen and paper with a calculator?

Sunday 23 March 2014

calculus - A simple-looking rational limit



Please help me compute:
$$
\lim_{z\to 0}\frac{\sqrt{2(z-\log(1+z))}}{z}
$$
I know the answer is 1 because I plugged it into Mathematica. Attempts with L'Hopital's Rule didn't work. This a step in an exercise for my self-study project. Thanks!


Answer




Using Taylor series
$$\log (1+z)\sim_0 z-\frac{z^2}{2}$$
we get



$$\frac{\sqrt{2(z-\log(1+z))}}{z}\sim_0\frac{|z|}{z}=\begin{cases}1 & \text{as}\; z\to 0^+\\-1 & \text{as}\; z\to 0^-\end{cases}$$
so the limit doesn't exist.
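A numerical illustration of the two different one-sided limits (the sample values of $z$ are arbitrary small numbers):

```python
import math

def h(z):
    return math.sqrt(2 * (z - math.log(1 + z))) / z

for z in [1e-3, 1e-6, -1e-3, -1e-6]:
    print(z, h(z))  # close to +1 for z > 0, close to -1 for z < 0
```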


Integral involving erf and exponential

Problem



I would like to compute the integral:



\begin{align}
\int_{0}^{+\infty} \text{erf}(ax+b) \exp(-(cx+d)^2) dx \tag{1}
\end{align}




I have been looking at this popular table of integral of the error functions, and also found here the following expression:



\begin{align}
\int_{-\infty}^{+\infty} e^{-(\alpha x+ \beta)^2} \text{erf}(\gamma x+\delta) dx = \dfrac{\sqrt{\pi}}{\alpha} \text{erf} \left[ \dfrac{\alpha \delta - \beta \gamma}{\sqrt{\alpha^2+ \gamma^2}} \right] \tag{2}
\end{align}



as well as:



\begin{align}

\int_{0}^{+\infty} e^{-\alpha^2 x^2} \text{erf}(\beta x) dx = \dfrac{\text{arctan}(\beta / \alpha)}{\alpha \sqrt{\pi}} \tag{3}
\end{align}



However the latter (3) is only a particular case of (1), which is what I am looking for. Do you know how to prove (2)? This might help me understand how to compute (1)? Or do you know how to compute (1)?

factorial - How to find the limit of $\lim\limits_{n\to \infty }\left(n-1\right)!$

I have the following limit: $$\lim\limits_{n\to \infty }\left(\sqrt[n]{\left(\frac{n!-1}{n!+1}\right)^{\left(n+1\right)!}}\right)$$



I had done some steps and simplified it to:
$$\lim\limits_{n\to \infty }\left(1-\frac{2}{n!+1}\right)^{(n+1)(n-1)!}=\\
\lim\limits_{n\to \infty }\left(1-\frac{1}{(n(n-1)!+1)\cdot 0.5}\right)^{(n+1)(n-1)!}$$




And my final result is:



$$\lim\limits_{n\to \infty }\left(\frac{1}{e}\right)^{\frac{3n+2+\frac{1}{(n-1)!}}{2}}$$



My question is what happens to $\frac{1}{(\infty -1)!}$? Is it 0?

linear algebra - $2\times2$ matrices are not big enough




Olga Tausky-Todd had once said that




"If an assertion about matrices is false, there is usually a 2x2 matrix that reveals this."




There are, however, assertions about matrices that are true for $2\times2$ matrices but not for the larger ones. I came across one nice little example yesterday. Actually, every student who has studied first-year linear algebra should know that there are even assertions that are true for $3\times3$ matrices, but false for larger ones --- the rule of Sarrus is one obvious example; a question I answered last year provides another.



So, here is my question. What is your favourite assertion that is true for small matrices but not for larger ones? Here, $1\times1$ matrices are ignored because they form special cases too easily (otherwise, Tausky-Todd would not have made the above comment). The assertions are preferably simple enough to understand, but their disproofs for larger matrices can be advanced or difficult.


Answer




Any two rotation matrices commute.


trigonometry - Use $e^{ia}+e^{ib}$ to show that $y(t)=2A\cos(\frac{\delta}{2}t-\frac{\phi_1 -\phi_2}{2})\sin((\omega+\frac{\delta}{2})t+\frac{\phi_1 +\phi_2}{2})$

One guitarist causes an oscillation given by
$$y_1(t)=A\sin({\omega}t+\phi_1)$$




Another guitarist causes an oscillation given by
$$y_2(t)=A\sin({(\omega+\delta)}t+\phi_2)$$



Furthermore,
$$y(t)=y_1(t)+y_2(t)$$



Given formula (1)
$$e^{ia}+e^{ib}=2e^{i\frac{(a+b)}{2}}\cos(\frac{a-b}{2})$$




Formula (1) should be used to show
$$y(t)=2A\cos(\frac{\delta}{2}t-\frac{\phi_1 -\phi_2}{2})\sin((\omega+\frac{\delta}{2})t+\frac{\phi_1 +\phi_2}{2})$$
I've attempted adding $y_1(t)$ and $y_2(t)$, hoping that something useful would drop out. However, this becomes quite messy after using angle sum identities and I can't make sense of it. I've considered double angle formulae, product-to-sum, sum-to-product formulae. What is a good approach to solving this problem?

elementary set theory - Show $S = f^{-1}(f(S))$ for all subsets $S$ iff $f$ is injective

Let $f: A \rightarrow B$ be a function. How can we show that for all subsets $S$ of $A$, $S \subseteq f^{-1}(f(S))$? I think this is a pretty simple problem but I'm new to this so I'm confused.



Also, how can we show that $S = f^{-1}(f(S))$ for all subsets $S$ iff $f$ is injective?

arithmetic - Proportional Equation with Variable in Denominator



Middleschool mather here.



When solving for p in the proportion equation below our teacher tells us to handle the variable in the denominator first -- as if order of rationalization matters here:



$$\frac{8}9 = \frac{12}p$$




School Solution



First multiply both sides by $\frac{p}1$: $$\frac{8}9 * \frac{p}1 = 12$$



Then multiply both sides by $\frac{9}8$: $$p = \frac{12}1 * \frac{9}8$$



Simplify: $$p = \frac{27}2$$



I see how starting here simplifies things beautifully. But I'm interested why I can't start somewhere else. Of course when I don't rationalize the variatic denominator first I end up with a different ( and wrong? ) answer:




First divide both sides by $\frac{1}{12}$: $$\frac{8}9\div\frac{1}{12} = \frac{1}p$$



Rationalize division as multiplication on left side: $$\frac{8}9*\frac{12}{1} = \frac{1}p$$



Which becomes: $$\frac{96}{9} = \frac{1}p$$



Multiply both sides by $\frac{p}{1}$: $$\frac{96}{9} * \frac{p}1 = 1$$



Multiply both sides by $\frac{9}{96}$: $$p = \frac{9}{96}$$




Uh oh, I'm in trouble here!!



Two Questions:




  1. Is there a rule or property that governs the order for solving this that I'm not able to see? Is my math teacher teaching us rules but not explaining them?


  2. I'm bad at math. Is my math in the second solution just wrong? Can my attempt be solved first by dividing both sides by $\frac{1}{12}$?





Thanks for all your patience and help


Answer



You multiplied the LHS by $12$ and the RHS by $\frac{1}{12}$ -- those need to be the same for both sides. Multiplying both sides by $\frac{1}{12}$, you would have gotten



$$\frac89 \cdot \frac{1}{12}=\frac{1}{p}\\
\frac{8}{108}=\frac{1}{p}\\
\frac{2}{27}=\frac{1}{p}$$



where the last line is simplification of $\frac{8}{108}$
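As a quick check with exact fractions that $p = \frac{27}{2}$ really satisfies the original proportion (this is only a verification, using Python's `fractions` module):

```python
from fractions import Fraction

p = Fraction(27, 2)
print(Fraction(8, 9) == Fraction(12, 1) / p)  # True: 12 / (27/2) = 24/27 = 8/9
```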


Saturday 22 March 2014

real analysis - Continuous bijection from $(0,1)$ to $[0,1]$

Does there exist a continuous bijection from $(0,1)$ to $[0,1]$? Of course the map should not be a proper map.

complex numbers - Are there any 3nion, 5nion, 7nion, 9nion, 10nion, etc.?

The quaternions/octonions extend the complex numbers, which extend the real numbers. So we go:




  • 1-tuple: Real numbers.


  • 2-tuple: Complex numbers.

  • 4-tuple: Quaternions.

  • 8-tuple: Octonions.



The Wikipedia link describes this doubling process:




In mathematics, the Cayley–Dickson construction, named after Arthur Cayley and Leonard Eugene Dickson, produces a sequence of algebras over the field of real numbers, each with twice the dimension of the previous one.





But if these are just vectors in the end, I wonder if there are analogous constructions in odd dimensions like 3, 5, etc., or in other non-power-of-two dimensions like 10, 12, etc. That way there would be a potentially more general construction describing them, and the power-of-two case would be a special case.

elementary number theory - Polynomial with integer coefficients of degree d





Let $P(X)$ be a polynomial with integer coefficients of degree $d>0$.



$(a)$ If $\alpha$ and $\beta$ are two integers such that $P(\alpha)=1$ and $P(\beta)=-1$ , then prove that $|\beta - \alpha|$ divides $2$.



$(b)$ Prove that the number of distinct integer roots of $P^2(x)-1$ is at most $d+2$.




Let $P(X)=a_0+a_1x+a_2x^2+\cdots+a_dx^d$ where $a_0, a_1,\ldots, a_d$ are integers. Then $P(\alpha)-P(\beta)=a_1(\alpha-\beta)+a_2(\alpha^2-\beta^2)+\cdots+a_d(\alpha^d-\beta^d)$. It is clear that $\alpha-\beta$ divides $P(\alpha)-P(\beta)$. I am not able to solve the second part. Any ideas? Thanks.



Answer



As commented, we consider the roots of $P(x)=1$ and $P(x)=-1$. If all the roots come from just one of the equations, then we are done. Now suppose that there are roots from both equations. We can picture a solution of the first equation as a white (W) ball and a solution of the other equation as a black (B) ball. Since $|\alpha-\beta|$ divides 2, if a solution B exists, then at most 4 solutions W can exist, namely B-2, B-1, B+1, B+2; otherwise the distance would be larger than 2. We have the following cases:




  1. All 4 solutions W exist. In this case, we cannot find another solution B, since the distance between some W and the new B would always be larger than 2. Therefore we have at most $d+1$ solutions, namely $d$ from the first equation and 1 from the second.


  2. 3 solutions W exist. There are two cases: WWBW* or WWB*W, where * stands for a number that is a solution of neither equation. In the first case, it is impossible to add a new B, and we have at most $d+1$ solutions; in the second case, we have at most one extra solution $B-1$, and we cannot find any more solutions B. Therefore we have at most $d+2$ solutions.


  3. 2 solutions W exist. There are 4 possibilities: $*WBW*, W*BW*, W*B*W, WWB**$. In the first and third cases, we cannot add one more B solution, so we have at most $d+1$ solutions. In the second case, we can add at most one more B solution, and we have at most $d+2$ solutions. In the fourth case, we have at most one more B solution, namely BWWB, and we cannot add any more B solutions. We then have at most $d+2$ solutions.


  4. 1 solution W exists. It is then WB. We now have 3 possibilities for adding new solutions B: $B*WB*, *BWB*, **WBB$, but these reduce to the previous cases, and we can add at most 2 more W solutions, so we have at most $d+2$ solutions. This completes the proof.



trigonometry - Prove that $\cos\frac {2\pi}{7}+ \cos\frac {4\pi}{7}+ \cos\frac {8\pi}{7}=-\frac{1}{2}$





Prove that
$$\cos\frac {2\pi}{7}+ \cos\frac {4\pi}{7}+ \cos\frac {8\pi}{7}=-\frac{1}{2}$$




My attempt



\begin{align}

\text{LHS}&=\cos\frac{2\pi}7+\cos\frac{4\pi}7+\cos\frac{8\pi}7\\
&=-2\cos\frac{4\pi}7\cos\frac\pi7+2\cos^2\frac{4\pi}7-1\\
&=-2\cos\frac{4\pi}7\left(\cos\frac\pi7-\cos\frac{4\pi}7\right)-1
\end{align}
Now, please help me to complete the proof.


Answer



$\cos(2\pi/7)+\cos(4\pi/7)+\cos(8\pi/7)$



$=\cos(2\pi/7)+\cos(4\pi/7)+\cos(6\pi/7)$ (the angles add to give $2\pi$, thus one is $2\pi$ minus the other)




At this point, we'll make an observation



$\cos(2\pi/7)\sin(\pi/7) = \frac{\sin(3\pi/7) - \sin(\pi/7)}{2}$ ..... (A)



$\cos(4\pi/7)\sin(\pi/7) = \frac{\sin(5\pi/7) - \sin(3\pi/7)}{2}$ ..... (B)



$\cos(6\pi/7)\sin(\pi/7) = \frac{\sin(7\pi/7) - \sin(5\pi/7)}{2}$ ..... (C)



Now, add (A), (B) and (C) to get




$\sin(\pi/7)\cdot(\cos(2\pi/7)+\cos(4\pi/7)+\cos(6\pi/7)) = \frac{\sin(7\pi/7) - \sin(\pi/7)}{2} = -\sin(\pi/7)/2$



The $\sin(\pi/7)$ cancels out from both sides to give you your answer.
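A quick numerical confirmation of the identity (a minimal sketch, assuming NumPy is available):

    import numpy as np

    # cos(2*pi/7) + cos(4*pi/7) + cos(8*pi/7)
    s = sum(np.cos(k * np.pi / 7) for k in (2, 4, 8))
    print(s)   # approximately -0.5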


Friday 21 March 2014

special functions - Integration error spotting

What have I done wrong? 




I have to evaluate the following integral:



$$\int\limits_0^\infty\int\limits_0^{2\pi} \phi(r^2)\delta'(r^2-a)\delta\left(\theta- \left(n+{1\over 2}\right){\pi\over 2}\right) r \; d\theta \;dr$$



My (wrong) working:



$\implies 4\int\limits_0^\infty \phi(x)\delta'(x-a){1\over 2}dx$ by letting $x=r^2$



$\implies -2\phi'(a)$




I should be getting the integral equalling $\phi(a)-\phi'(a)$.



Thank you.

elementary number theory - Prove $\gcd(a+b,a^2+b^2)$ is $1$ or $2$ if $\gcd(a,b) = 1$




Assuming that $\gcd(a,b) = 1$, prove that $\gcd(a+b,a^2+b^2) = 1$ or $2$.





I tried this problem and ended up with
$$d\mid 2a^2,\quad d\mid 2b^2$$
where $d = \gcd(a+b,a^2+b^2)$, but then I am stuck; by these two conclusions how can I conclude $d=1$ or $2$?
And also is there any other way of proving this result?


Answer



From what you have found, you can conclude easily.



If $d$ divides two numbers, it also divides their gcd, so




$$d| \gcd (2a^2,2b^2) = 2 \gcd (a,b) ^2 =2.$$



So, $d$ is a divisor of 2 and thus either 1 or 2.
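A small brute-force confirmation over coprime pairs (a minimal sketch in Python; the range 200 is an arbitrary choice):

    from math import gcd

    # For all coprime pairs (a, b) in a small range, gcd(a+b, a^2+b^2) is 1 or 2.
    results = {gcd(a + b, a * a + b * b)
               for a in range(1, 200) for b in range(1, 200) if gcd(a, b) == 1}
    print(results)   # {1, 2}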





calculus - Closed form expression for infinite series



I was given the following function:



$$ f(x) = x + \frac{2x^3}{1\cdot3} + \frac{2\cdot4x^5}{1\cdot3\cdot5} + \frac{2\cdot4\cdot6x^7}{1\cdot3\cdot5\cdot7} + \cdots $$ $$ \forall x \in [0,1) $$



And then I was asked to find the value of $ f(\frac{1}{\sqrt{2}}) $, which obviously requires me to compute the closed form expression of the infinite series.




I tried 'Integration as a limit of sum' but I was unable to modify the expression accordingly. How do I approach the problem?


Answer



Another answer. We use formulas
$$\int_0^{\pi/2}\sin^{2n+1}s\,ds=\frac{(2n)!!}{(2n+1)!!},$$
$$\sum_{k=0}^{\infty }z^{2k+1}=\frac{z}{1-z^{2}},\quad |z|<1.$$
Then
$$f(x)=\sum_{n=0}^{\infty}\frac{x^{2n+1}(2n)!!}{(2n+1)!!}\\=\sum_{n=0}^{\infty}x^{2n+1}\int_0^{\pi/2}\sin^{2n+1}sds\\
=\int_0^{\pi/2}\left(\sum_{n=0}^{\infty}(x\sin s)^{2n+1} \right)ds\\=
\int_0^{\pi/2}\frac{x\sin s}{1-x^2\sin^2s}ds\\={\frac {1}{\sqrt {1-x^2}}\arctan \left( {\frac {x}{\sqrt {1-x^2}}} \right) }
$$


We get
$$f\left(\frac{1}{\sqrt2}\right)=\sqrt2\arctan1=\frac{\sqrt2\pi}{4}$$
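The closed form can be checked numerically at $x=1/\sqrt2$ by comparing partial sums of the series with $\sqrt2\,\pi/4$ (a minimal sketch in Python; 60 terms is an arbitrary cutoff):

    import math

    x = 1 / math.sqrt(2)
    term, total = x, x                            # the n = 0 term is x
    for n in range(1, 60):
        term *= x * x * (2 * n) / (2 * n + 1)     # ratio of consecutive terms
        total += term

    print(total, math.sqrt(2) * math.pi / 4)      # both ~1.11072...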


induction - Proving that two summations are equivalent: $\sum_{i=1}^n i^3 = \left(\sum_{i=1}^n i\right)^2$




Give a constructive proof to show that for all $n \geq 1$ ,




$\sum\limits_{i=1}^n i^3 = (\sum\limits_{i=1}^n i)^2$



Observe that $(n+1)^4 - n^4 = 4n^3 + 6n^2 + 4n + 1$ .






Now, the two following equalities are obvious:



$\sum\limits_{i=1}^n i^3 = 1^3 + 2^3 + 3^3 + ... + n^3$




$(\sum\limits_{i=1}^n i)^2 = (1 + 2 + 3 + ... + n)^2$



And they appear to be equal for the first few test cases:



$\sum\limits_{i=1}^n i^3 = A(n)$




  • $A(1) = 1^3 = 1$

  • $A(2) = 1^3 + 2^3 = 1 + 8 = 9$


  • $A(3) = 1^3 + 2^3 + 3^3 = 9 + 27 = 36$



$(\sum\limits_{i=1}^n i)^2 = B(n)$




  • $B(1) = (1)^2 = 1$

  • $B(2) = (1 + 2)^2 =9 $

  • $B(3) = (1 + 2 + 3)^2 = 36$




Now, I am thinking of finding the closed-forms for both functions in the hopes that they are indeed the same. Then I would prove those closed forms to work by induction. But:




  1. I don't know if that would be a sound way to do it.

  2. I don't know if this would even qualify as constructive, as the question requests.



As you may tell, I am no math major. I am a Computer Science major, though. This is a computing fundamentals class. I took discrete 1.5 years ago, so my knowledge is about as fresh as a litter box. I've been in quite a rut for a few hours over this.


Answer




Your goal is to prove the statement $S(n)$ for all $n\geq 1$ where
$$
S(n) : 1^3 + 2^3 +3^3 +\cdots + n^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Using $\Sigma$-notation, we may rewrite $S(n)$ as follows:
$$
S(n) : \sum_{r=1}^n r^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Base step: The statement $S(1)$ says that $(1)^3 = (1)^2$ which is true because $1=1$.




Inductive step [$S(k)\to S(k+1)$]: Fix some $k\geq 1$, where $k\in\mathbb{N}$. Assume that
$$
S(k) : \sum_{r=1}^k r^3 = \left[\frac{k(k+1)}{2}\right]^2
$$
holds. To be proved is that
$$
S(k+1) : \sum_{r=1}^{k+1} r^3 = \left[\frac{(k+1)((k+1)+1)}{2}\right]^2
$$
follows. Beginning with the left side of $S(k+1)$,
\begin{align}

\sum_{r=1}^{k+1}r^3 &= \sum_{r=1}^k r^3 + (k+1)^3\tag{evaluate sum for $r=k+1$}\\[1em]
&= \left[\frac{k(k+1)}{2}\right]^2+(k+1)^3\tag{by $S(k)$}\\[1em]
&= \frac{(k+1)^2}{4}[k^2+4(k+1)]\tag{factor out $\frac{(k+1)^2}{4}$}\\[1em]
&= \frac{(k+1)^2}{4}[(k+2)(k+2)]\tag{factor quadratic}\\[1em]
&= \frac{(k+1)^2(k+2)^2}{4}\tag{multiply and rearrange}\\[1em]
&= \left[\frac{(k+1)(k+2)}{2}\right]^2\tag{rearrange}\\[1em]
&= \left[\frac{(k+1)((k+1)+1)}{2}\right]^2,\tag{rearrange}
\end{align}
one arrives at the right side of $S(k+1)$, thereby showing that $S(k+1)$ is also true, completing the inductive step.




By mathematical induction, it is proved that for all $n\geq 1$, where $n\in\mathbb{N}$, that the statement $S(n)$ is true.



Note: The step where $\dfrac{(k+1)^2}{4}$ is factored out is an important one. If we do not factor this out and, instead, choose to expand $(k+1)^3$, the problem becomes much more messy than it needs to be.
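A quick empirical check of the identity for the first few hundred values (a minimal sketch in Python):

    # Verify that the sum of cubes equals the square of the sum for n = 1 .. 500.
    for n in range(1, 501):
        lhs = sum(i ** 3 for i in range(1, n + 1))
        rhs = (n * (n + 1) // 2) ** 2
        assert lhs == rhs
    print("verified up to n = 500")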


linear algebra - Why does the inverse of the Hilbert matrix have integer entries?




Let $A$ be the $n\times n$ matrix given by



$$A_{ij}=\frac{1}{i + j - 1}$$



Show that $A$ is invertible and that the inverse has integer entries.





I was able to show that $A$ is invertible. How do I show that $A^{-1}$ has integer entries?



This matrix is called the Hilbert matrix. The problem appears as exercise 12 in section 1.6 of Hoffman and Kunze's Linear Algebra (2nd edition).


Answer



Be wise, generalize (c)



I think the nicest way to answer this question is the direct computation of the inverse - however, for a more general matrix including the Hilbert matrix as a special case. The corresponding formulas have very transparent structure and nontrivial further generalizations.







The matrix $A$ is a particular case of the so-called Cauchy matrix with elements
$$A_{ij}=\frac{1}{x_i-y_j},\qquad i,j=1,\ldots, N.$$
Namely, in the Hilbert case we can take
$$x_i=i-\frac{1}{2},\qquad y_i=-i+\frac12.$$
The determinant of $A$ is given in the general case by
$$\mathrm{det}\,A=\frac{\prod_{1\leq i<j\leq N}(x_j-x_i)(y_i-y_j)}{\prod_{1\leq i,j\leq N}(x_i-y_j)}.\tag{1}$$
Up to an easily computable constant prefactor, the structure of (1) follows from the observation that $\mathrm{det}\,A$ vanishes whenever there is a pair of coinciding $x$'s or $y$'s. (In the latter case $A$ contains a pair of coinciding rows/columns.) For our $x$'s and $y$'s the determinant is clearly non-zero, hence $A$ is invertible.



One can also easily find the inverse $A^{-1}$, since the matrix obtained from a Cauchy matrix by deleting one row and one column is also of Cauchy type, with one $x$ and one $y$ less. Taking the ratio of the corresponding two determinants and using (1), most of the factors cancel out and one obtains

\begin{align}
A_{mn}^{-1}=\frac{1}{y_m-x_n}\frac{\prod_{1\leq i\leq N}(x_n-y_i)\cdot\prod_{1\leq i\leq N}(y_m-x_i)}{\prod_{i\neq n}(x_n-x_i)\cdot\prod_{i\neq m}(y_m-y_i)}.\tag{2}
\end{align}



For our particular $x$'s and $y$'s, the formula (2) reduces to
\begin{align}
A_{mn}^{-1}&=\frac{(-1)^{m+n}}{m+n-1}\frac{\frac{(n+N-1)!}{(n-1)!}\cdot
\frac{(m+N-1)!}{(m-1)!}}{(n-1)!(N-n)!\cdot(m-1)!(N-m)!}=\\
&=(-1)^{m+n}(m+n-1){n+N-1 \choose N-m}{m+N-1 \choose N-n}{m+n-2\choose m-1}^2.
\end{align}

The last expression is clearly integer. $\blacksquare$
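The closed formula can also be checked against an exact matrix inverse (a minimal sketch, assuming SymPy is available; $N=6$ is an arbitrary choice, and the indices $m,n$ run from 1 to $N$ as in the answer):

    from sympy import Rational, Matrix, binomial

    N = 6
    A = Matrix(N, N, lambda i, j: Rational(1, i + j + 1))   # Hilbert matrix, 0-based i, j
    A_inv = A.inv()

    def formula(m, n):
        # closed-form entry of the inverse from the answer
        return ((-1) ** (m + n) * (m + n - 1)
                * binomial(n + N - 1, N - m)
                * binomial(m + N - 1, N - n)
                * binomial(m + n - 2, m - 1) ** 2)

    print(all(A_inv[m - 1, n - 1] == formula(m, n)
              for m in range(1, N + 1) for n in range(1, N + 1)))   # True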


calculus - Integral $\int_0^{\pi/4}\frac{dx}{{\sin 2x}\,(\tan^ax+\cot^ax)}=\frac{\pi}{8a}$



I am trying to prove this interesting integral$$
\mathcal{I}:=\int_0^{\pi/4}\frac{dx}{{\sin 2x}\,(\tan^ax+\cot^ax)}=\frac{\pi}{8a},\qquad \mathcal{Re}(a)\neq 0.
$$
This result is breathtaking but I am more stumped than usual. It truly is magnificent. Note that $\sin 2x=2\sin x \cos x$. I am not sure how to approach this because of the term
$$

(\tan^ax+\cot^ax)
$$
in the denominator. I was trying to use the identity
$$
\tan \left(\frac{\pi}{2}-x\right)=\cot x
$$
since this method solves a similar kind of integral, but I didn't get anywhere. A bad idea I tried was to differentiate with respect to $a$:
$$
\frac{dI(a)}{da}=\int_0^{\pi/4}\partial_a \left(\frac{dx}{{\sin 2x}\,(\tan^ax+\cot^ax)}\right)=\int_0^{\pi/4} \frac{(\cot^a x \log(\cot x )+\log(\tan x ) \tan^a x)}{\sin 2x \, (\cot^a x+\tan^a x)^2}dx
$$

which seems more complicated when I break it up into two integrals. How can we solve the integral? Thanks.


Answer



Rewrite:
\begin{align}
\int_0^{\Large\frac\pi4}\frac{dx}{{\sin 2x}\,(\tan^ax+\cot^ax)}&=\int_0^{\Large\frac\pi4}\frac{dx}{{2\sin x\cos x}\,\left(\tan^ax+\dfrac1{\tan^ax}\right)}\\
&=\frac12\int_0^{\Large\frac\pi4}\frac{\tan^ax\ dx}{{\tan x\cos^2x}\,\left(1+\tan^{2a}x\right)}\\
&=\frac12\int_0^{\Large\frac\pi4}\frac{\tan^{a-1}x\ d(\tan x)}{1+\tan^{2a}x}.
\end{align}
Now, let $\tan^a x=\tan\theta\;\Rightarrow\;a\tan^{a-1}x\ d(\tan x)=\sec^2\theta\ d\theta$. Then
\begin{align}

\frac12\int_{x=0}^{\Large\frac\pi4}\frac{\tan^{a-1}x\ d(\tan x)}{1+\tan^{2a}x}&=\frac1{2|a|}\int_{\theta=0}^{\Large\frac\pi4}\frac{\sec^2\theta\ d\theta}{1+\tan^{2}\theta}\\
&=\frac1{2|a|}\int_{\theta=0}^{\Large\frac\pi4}\ d\theta\\
&=\large\color{blue}{\frac{\pi}{8|a|}}.
\end{align}
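A quick numerical check of the result for a sample value of $a$ (a minimal sketch, assuming NumPy and SciPy are available; $a=3.7$ is an arbitrary choice, and the lower limit is nudged away from $0$ to sidestep the $0/0$ at that endpoint):

    import numpy as np
    from scipy.integrate import quad

    a = 3.7
    f = lambda x: 1.0 / (np.sin(2 * x) * (np.tan(x) ** a + np.tan(x) ** (-a)))
    val, err = quad(f, 1e-9, np.pi / 4)
    print(val, np.pi / (8 * a))   # both ~0.1061...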


linear algebra - 2x2 matrices with sum of diagonal entries equal zero

Is the set of $2\times2$ matrices with sum of diagonal entries equal to zero a vector space over the real numbers under the usual matrix addition and scalar-matrix multiplication?

linear algebra - Adjoint of derivative operator on polynomial space

I was working on a problem when I made the following reasoning.




I know that every linear operator $T:V \longrightarrow V$ on a Hilbert space $(V,\langle.,.\rangle)$ such that $\dim(V)<\infty$ has one (unique) adjoint operator $T^*:V \longrightarrow V$ (that is, $\langle T u,v\rangle = \langle u, T^* v \rangle$ $\forall u,v \in V$).



So if $V:=P_n$ is the space of all polynomials with degree less than or equal to $n \in \mathbb{N}$ (which gives $\dim(V)=n+1<\infty$) and $\langle f,g \rangle := \int_0^1f(t)g(t) \, dt$, what is the adjoint of the derivative operator $T=\dfrac{d}{dt}$?



I've tried to solve that, but still to no avail. I wonder if that is a silly question, but I haven't had any success searching for the answer either, so I apologize in advance if that's the case.

irrational numbers - How can I prove that no limit exists for this function over this particular interval?



I was given the following function: $$ f(x) = \begin{cases} \frac{1}{q} &\text{if }x = \frac{p}{q} \text{ is rational, reduced to lowest terms} \\ 0 &\text{if }x \text{ is irrational}\end{cases}$$
And was asked to attempt to prove or disprove whether or not, for some $j$ on the interval $(0,1)$, $\lim \limits_{x \to j}{}f(x)$ exists.
Intuitively, I know that I can find an irrational number between any two rationals, which would seem to result in some discontinuity. I considered the idea that the limit might be found by looking only at the infinitely many rationals on the interval that share a common denominator, but that obviously would not work. However, I cannot formulate any proper line of reasoning to prove that the limit does not exist.
How can I prove this, as formally as possible?


Answer




Recall the definition of $\lim_{x \to j}f(x)=a$:



$$\mbox{Given}\ \varepsilon > 0\ \mbox{there exists}\ \delta > 0\\ \mbox{such that}\ \left| f(x) - a\right|<\varepsilon\\ \mbox{whenever}\ 0<\left| x-j \right| <\delta$$



Now for any $j$, given $\varepsilon>0$, if we choose $\delta$ small enough we can ensure that the denominator $q$ of any fraction $\frac{p}{q}$ (in its lowest terms) in the interval $\left(j-\delta,j+\delta\right)$ satisfies $q>\frac{1}{\varepsilon}$, except possibly for $j$ itself, if it is rational. So that $0\le\left| f(x)\right|<\varepsilon$ whenever $0<\left|x-j\right|<\delta$.



Plugging this into the definition of a limit we see:



$$\lim_{x\to j}f(x) = 0\ \mbox{for all}\ j\in\left(0,1\right)$$




Now a function $g$ is continuous at $j$ iff $\lim_{x\to j}g(x)=g(j)$. It follows that $f$ is discontinuous at rational points and continuous at irrational points.


real analysis - Radius of convergence and sum of alternating series $1 - z + z^2 - z^3 + \ldots$




I have a (complex) function represented by the power series



\begin{equation*} L(z) = z -\frac{z^2}{2} + \frac{z^3}{3} - \frac{z^4}{4} \ldots \end{equation*}



which I have tried to represent (perhaps incorrectly) in summation notation as



\begin{equation*} L(z) = \displaystyle\sum_{n=0}^{\infty} \frac{(-1)^n \;z^{(n+1)}}{n+1} .\end{equation*}



To find the radius of convergence I considered the ratio of terms:




\begin{equation*} \left| \dfrac{a_{n+1}}{a_n} \right| = \left| \frac{(-1)^{n+1}\;z^{(n+2)}}{(n+2)} \right| \left| \frac{(n+1)}{(-1)^n\; z^{(n+1)}} \right|\end{equation*}



which simplifies to



\begin{equation*} \left| \frac{z\;(n+1)}{(n+2)}\right| = \left| z \right| \left| \frac{(\frac{n}{n}+\frac{1}{n})}{(\frac{n}{n}+\frac{2}{n})}\right| \rightarrow \left| z \right| \text{ as } n \rightarrow \infty. \end{equation*}



By the ratio test, $L(z)$ will converge for $\left| z \right| < 1$.



I believe that the derivative $L'(z)$ of my original series will have the same radius of convergence. Differentating term-by-term I have




\begin{equation*} L'(z) = 1 - z + z^2 - z^3 + \ldots \end{equation*}



I do not believe that this is a geometric series due to the alternating sign. My attempt to find a formula for the sum is as follows. Multiplying by $z$ gives me:



\begin{equation*} z\;L'(z) = z - z^2 + z^3 - z^4 + \ldots \end{equation*}



I can take the $r$th partial sums of the previous two series to get



\begin{align*} &1 - z + z^2 - z^3 + \ldots + (-1)^r\;z^r

\\ &+
\\ &z - z^2 + z^3 - z^4 + \ldots + (-1)^r\;z^{(r+1)}
\\ &= 1 + (-1)^r\;z^{(r+1)}.
\end{align*}



Thus I can say that



\begin{equation*} L'(z) + zL'(z) = \lim_{n \rightarrow \infty}\left(1 + (-1)^n\;z^{(n+1)}\right) \end{equation*}



where the last term tends to zero within the R.O.C. ($|z| < 1$). So finally, by factoring on the left and dividing through I get:




\begin{equation*} L'(z) = \frac{1}{1+z} \end{equation*}.



Can anyone tell me if I've done the above correctly, and if there was a quicker way of jumping to the the final sum?



Edit: Also, I've been told that the ratio test is only to be used on series with positive terms - why is it okay in this alternating series?


Answer



Yes, you have done this correctly, and you are correct that $\dfrac{1}{1+z}=1-z+z^2-z^3+\cdots$ for $\vert z\vert<1$. If you are willing to take the formula for the sum of a geometric series for granted, then that can lead to a slightly quicker answer (although you have essentially derived the formula in your argument). The series $1-z+z^2-z^3+\cdots$ is, in fact, a geometric series with common ratio $-z$, so the sum of the series (for $\vert z\vert<1$) is $\dfrac{1}{1-(-z)}=\dfrac{1}{1+z}$. That is, geometric series are allowed to have negative common ratios.


distribution theory - derivative of the function $f(x)= \sum_{n=1}^{\infty} \left[\frac{x}{n}\right]$



How could I evaluate the derivative of




$ f(x)= \sum_{n=1}^{\infty} [\frac{x}{n}] $



Here $[x]$ is the integer part, so $u(x)=x-[x]$ is the fractional part of the number.



From distribution theory I believe that the derivative is something like



$ f'(x)= \sum_{n=1}^{\infty}\sum_{m=-\infty}^{\infty}\delta (x-nm) $



but I'm not completely sure.



Answer



The series converges in the sense of distributions - setting



$$f_k(x) = \sum_{n = 1}^k \biggl\lfloor \frac{x}{n}\biggr\rfloor,$$



the sequence $(f_k)$ is monotonically increasing on $[0,+\infty)$ and decreasing on $(-\infty,0)$, so



$$\lim_{k\to\infty} \int_{\mathbb{R}} f_k(x)\varphi(x)\,dx$$



exists for every test function $\varphi$. Therefore we can interchange differentiation (in the sense of distributions) and summation. Thus one needs to find the distributional derivative of




$$g_n \colon x \mapsto \biggl\lfloor \frac{x}{n}\biggr\rfloor$$



for $n \geqslant 1$. Since $g_n$ is constant on each interval $[mn, (m+1)n)$ and has jumps of height $1$ at each point $mn$, indeed - in the sense of distributions -



$$g_n' = \sum_{m\in \mathbb{Z}} \delta_{mn},$$



and your expression for the distributional derivative of $f$ is correct.


Thursday 20 March 2014

calculus - Integration by parts or substitution?



$$\int_{}^{}x e^x \mathrm dx$$




One of my friends said substitution, but I can't seem to get it to work.
I also tried integration by parts, but I'm not getting the same answer as Wolfram.



The space in the question seems like it shouldn't take more than 2 lines though. Am I missing something?



Thanks to all the answers below. I messed up in the original question; it was actually



$$\int_{}^{}x e^{x^2} \mathrm dx$$



With help from the below answers I did the following:




Let $u = x^2$ , then $du=2x\mathrm dx$



So rewriting the integral



$$\int_{}^{}{{x\cdot e^u} {1 \over 2x}} \mathrm du$$



Simplifying yields:



$${1 \over 2}\int_{}^{}{e^u}\,\mathrm du$$




Which in turn yields:



$${\frac{e^u}{2}} + C$$



The rest is fairly obvious!


Answer



Definitely by parts, as substitution of $x$ won't get you anywhere.
Let $u=x$ and $dv=e^x\,dx$ in $\int u\,dv=uv-\int v\,du$
and we have




$$
\int xe^xdx=xe^x-\int e^x dx=xe^x-e^x+c
$$



As a general tip, you will usually want to use parts if you have an exponential, which doesn't get any nastier as you anti-differentiate, and a polynomial which will disappear after some number of differentiations. A notable exception is when you have something like
$$
\int xe^{x^2}dx
$$
Where a $u=x^2$ substitution will cancel out the $x$ coefficient on the exponential.
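Both antiderivatives are easy to confirm symbolically (a minimal sketch, assuming SymPy is available):

    from sympy import symbols, exp, integrate, diff, simplify

    x = symbols('x')

    F1 = integrate(x * exp(x), x)        # equivalent to x*exp(x) - exp(x)
    F2 = integrate(x * exp(x**2), x)     # exp(x**2)/2
    print(F1, F2)

    # differentiating recovers the integrands
    print(simplify(diff(F1, x) - x * exp(x)),
          simplify(diff(F2, x) - x * exp(x**2)))   # 0 0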



linear algebra - If functions are linearly independent for one $x\in I$, are they linearly independent for all $x\in I$?




This theorem comes up when talking about ordinary differential equations. Basically, if we have a fundamental system, i.e. a basis $B=(f_1,f_2,...,f_n)$ of the vector space of solutions of a differential equation $$y'=A(x)y,$$ we can check for linear independence (if we are unsure whether it really is a basis) by checking that
$$(f_1(x_0),f_2(x_0),...,f_n(x_0))$$
is linearly independent for some $x_0 \in I$. The theorem says that if they are linearly independent for some $x_0 \in I$, that's equivalent to them being linearly independent for all $x \in I$.



The proof is omitted, because this equivalence is supposed to be trivial, says the author of the textbook. Could you explain why the implication from some to all holds true?



I'd actually think there would be functions for which there is an $x\in I$ where all the functions happen to be zero, and then you can find coefficients which are non-zero so that you can say



$$ c_1 * f_1(x) + ... + c_n * f_n(x) = 0$$




with $c_1,\ldots,c_n\in \mathbb{R}$ not all zero. So why does linear independence at a single point imply that they must be linearly independent for all $x$?


Answer



Assume by contradiction that your functions are linearly dependent at some $x_1\in I$, i.e. there exist constants $c_1, \ldots, c_n\in\mathbb{R}$, with at least one $c_j\neq 0$, such that
$$
c_1 f_1(x_1) + \cdots + c_n f_n(x_1) = 0.
$$
The function $f(x) := c_1 f_1(x) + \cdots + c_n f_n(x)$ is a solution of the equation and $f(x_1) = 0$. By uniqueness, we must have $f(x) = 0$ for every $x\in I$ and, in particular,
$$
c_1 f_1(x_0) + \cdots + c_n f_n(x_0) = 0.

$$
But this last condition implies $c_1 = \cdots = c_n = 0$, a contradiction.


Is this a correct/good way to think about/interpret differentials for the beginning calculus student?



I was reading the answers to this question, and I came across the following answer which seems intuitive, but too good to be true:




Typically, the $\frac{dy}{dx}$ notation is used to denote the derivative, which is defined as the limit we all know and love (see Arturo Magidin's answer). However, when working with differentials, one can interpret $\frac{dy}{dx}$ as a genuine ratio of two fixed quantities.



Draw a graph of some smooth function $f$ and its tangent line at $x=a$. Starting from the point $(a, f(a))$, move $dx$ units right along the tangent line (not along the graph of $f$). Let $dy$ be the corresponding change in $y$.



So, we moved $dx$ units right, $dy$ units up, and stayed on the tangent line. Therefore the slope of the tangent line is exactly $\frac{dy}{dx}$. However, the slope of the tangent at $x=a$ is also given by $f'(a)$, hence the equation $$\frac{dy}{dx} = f'(a)$$




holds when $dy$ and $dx$ are interpreted as fixed, finite changes in the two variables $x$ and $y$. In this context, we are not taking a limit on the left hand side of this equation, and $\frac{dy}{dx}$ is a genuine ratio of two fixed quantities. This is why we can then write $dy = f'(a) dx$.




By Brendan Cordy


Answer



The conclusion is right, but you should not understand $\frac{dy}{dx}$ that way. When you do what you have done it is written $\frac{\Delta y}{\Delta x}$.



If you do what you have explained and take the fixed values, observe that you can get closer to $a$ on the tangent and do the same again.




A derivative of a sufficiently nice function is saying that no matter how close you get to $a$ using your tangent principle, the result is going to be the same. In that sense you are right, you can take any fixed value on the tangent, but fixing something is less general than saying no matter what fixed value on the tangent you take.



Observe as well that the way you construct a tangent is not something that is logically above the definition of the derivative so you could say: we know how to construct a tangent and then we can argue about the consequences. In general, drawing a tangent and finding first derivative are equivalent.



When a function is sufficiently nice all things are clearer, but you must define differentiability so that it is applicable to a wider range of problems.



You need to notice that turning $\frac{\Delta y}{\Delta x}$ into $\frac{dy}{dx}$ and approaching one and the same value is in the core of the definition of having a derivative.



Believe it or not, the way you have defined a derivative is applicable in another theory: the theory of chaos, since for many chaotic curves you cannot draw a tangent. Instead you take two close points, find the distance, and calculate $\frac{\Delta y}{\Delta x}$. In many cases you get a fixed value as you approach $\frac{dy}{dx}$, although it is not possible to draw a tangent in the classical sense. Even when you cannot get a fixed value, you make some averaging and still get something useful.




Basically $\frac{dy}{dx} = \lim\limits_{\Delta x \to 0}\frac{\Delta y}{\Delta x}$ and that is the way you should understand it.



But yes, you can find a derivative the way you have described for many nice behaving functions.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without L'Hopital's rule? I know when I use L'Hopital I easily get $$ \lim_{h\rightarrow 0}...