Sunday 31 May 2015

real analysis - Convergence test for the series $\sum\limits_{n=0}^{\infty} \left(\frac{2n+n^3}{3-4n}\right)^n$



I want to show that $a:=\sum \limits_{n=0}^{\infty} \left(\dfrac{2n+n^3}{3-4n}\right)^n$ does not converge, because its general term $a_n$ satisfies $\lim \limits_{n \to \infty}a_n\neq 0 \; (*)$. Consequently, the series cannot converge absolutely either.



First, I try to simplify the general term; after that I want to find its limit.




Unfortunately, I can't seem to find any good manipulation with which I can clearly show $(*)$.
\begin{align}
a_n=\left(\dfrac{2n+n^3}{3-4n}\right)^n&=\left (\dfrac{\not{n}\cdot (2+n^2)}{\not n \cdot (\frac{3}{n}-4)}\right)^n\\
&=\left(\dfrac{2+n^2}{\frac 3n-4}\right)^n\\
&= \cdots
\end{align}



How to go on?


Answer



Recall that




$$\sum_0^\infty a_n <\infty \implies a_n \to 0$$



therefore if $a_n \not \to 0$ the series can’t converge.



In that case for $n\ge 3$ we have



$$\left|\dfrac{2n+n^3}{3-4n}\right|=\dfrac{2n+n^3}{4n-3}>\dfrac{n^3}{4n}=\frac{n^2}4\ge2$$



and then




$$|a_n|=\left|\dfrac{2n+n^3}{3-4n}\right|^n\ge 2^n$$



Refer also to the related:




Proof by induction: Show that $9^n-2^n$ is divisible by $7$ for any natural number $n$.




Can someone please solve the following problem?



Show that $9^n-2^n$ is divisible by $7$ for any natural number $n$ ($9^n$ means $9$ to the power of $n$).



I know the principle of induction but am stuck with setting up a formula for this.


Answer



Hint:



Assume that $7\mid (9^n-2^n)$,




For $n+1:$



$$9^{n+1}-2^{n+1}=9^n\cdot9-2^n\cdot2=9^n(7+2)-2^n\cdot2=9^n\cdot7+9^n\cdot2-2^n\cdot2=9^n\cdot7+2(9^n-2^n)$$
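For a quick numerical sanity check of the claim (my own sketch in Python, not part of the original answer):

# verify that 7 divides 9^n - 2^n for small n
for n in range(1, 25):
    assert (9**n - 2**n) % 7 == 0
print("7 | 9^n - 2^n holds for n = 1, ..., 24")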


factoring - Unnecessary use of complex numbers when factorizing a sum of squares?

Let's assume we have a 2d vector $\vec{x}=\begin{pmatrix} a \\ b \end{pmatrix}$. The square of its length is easily computed as
$$|x|^2 = a^2 + b^2$$



Also note that $|x|^2 \geq 0$ always. So to compute $|x|$ we can simply factorize by taking the square root:



$$|x|^2 = a^2 + b^2 = \sqrt{a^2 + b^2}\sqrt{a^2 + b^2}$$



However, sometimes I see people factorizing this sum of squares using complex numbers, like



$$a^2 + b^2 = (a + i b)(a - i b)$$



But what is the point of introducing complex numbers here? The sum of squares is by definition a positive number. So taking the square root is always a valid operation. Isn't this a completely unnecessary use of complex numbers? Or am I overlooking something here? Is there some advantage in using the factorization with complex numbers?

Proof using Cauchy-Schwarz inequality



Let's say $a_1, a_2, ..., a_n$ are positive real numbers and $a_1 + a_2 + ... + a_n = 1$



I have to prove the following expression using the Cauchy-Schwarz inequality, but I don't know how to do it.



$\sqrt{{a_1}} + \sqrt{{a_2}} + \dots + \sqrt{{a_n}} \leq \sqrt{n}$



Choosing a second set of real numbers $b_1 = b_2 = \dots = b_n = 1$ and applying the Cauchy-Schwarz inequality, I got the next inequality, which is almost trivial:




$$1 \leq \sqrt{n} \cdot \sqrt{a_1^2+a_2^2+\dots+a_n^2}$$



but I think this is a dead end and isn't the correct way to prove it.



Please, any ideas?



Thanks so much in advance.


Answer



Let $\textbf{a}=(\sqrt{a_1},\sqrt{a_2},\dots,\sqrt{a_n})$ and $\textbf{b}=(1,1,\dots,1)$. Then $\|\textbf{a}\|=\sqrt{a_1+a_2+\cdots+a_n}=1$ and $\|\textbf{b}\|=\sqrt{n}$. Thus

$$\sqrt{a_1}+\sqrt{a_2}+\cdots+\sqrt{a_n}=\langle \textbf{a},\textbf{b} \rangle
\leq \|\textbf{a}\| \|\textbf{b}\|=\sqrt{n}$$


algebra precalculus - Sum of series: $1\cdot3\cdot2^2 + 2\cdot4\cdot3^2 + 3\cdot5\cdot4^2 + \dots$?

I am trying to find the sum of the above series.



The sum of the first $n$ terms can be found using a power series expansion. However, I'm trying to solve this using the method of differences (a.k.a. the telescoping sum or $V_n$ method).



In this method, the general term is expressed as the difference between two consecutive values of some function. Like the following:




$T_n = V_n - V_{n-1}$



and then the sum is taken which comes to be



$S_n = V_n - V_0$



The general term of the series in question can be represented as a product:



$T_n = n(n+1)^2(n+2)$




But I am unable to represent this as a difference. How can I proceed from here to find the sum?
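One standard way to produce such a difference (my own suggestion, not from the original post) is to use $n+1 = (n+3) - 2$, so that $T_n = n(n+1)(n+2)(n+3) - 2\,n(n+1)(n+2)$; sums of products of consecutive integers telescope to closed forms. A quick Python check of this decomposition:

# T_n = n(n+1)^2(n+2) = n(n+1)(n+2)(n+3) - 2*n(n+1)(n+2), and
#   sum_{k<=N} k(k+1)(k+2)(k+3) = N(N+1)(N+2)(N+3)(N+4)/5
#   sum_{k<=N} k(k+1)(k+2)     = N(N+1)(N+2)(N+3)/4
def T(n):
    return n * (n + 1)**2 * (n + 2)

N = 50
lhs = sum(T(k) for k in range(1, N + 1))
rhs = N*(N+1)*(N+2)*(N+3)*(N+4)//5 - 2 * (N*(N+1)*(N+2)*(N+3)//4)
assert lhs == rhs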

calculus - Evaluation of a simple limit with Taylor Series




I would like to evaluate $$\lim_{x\to0} \frac{e^{\sin x} - \sin^2x - 1}{x}$$
using Taylor Series expansion in a completely rigorous way.
What would a rigorous version of the following argument look like?



From
$$e^{\sin x} -1\sim_0 e^x-1 \sim_0 x \text{ and }\sin^2x \sim_0 x^2$$
we can find the limit
$$\lim_{x\to0} \frac{e^{\sin x} - \sin^2x - 1}{x} = \lim_{x\to0} \frac{x-\frac{x^2}{2}}{x} = 1.$$



In particular I'm not exactly sure how to deal with Landau symbols $o$ and $O$ in nested functions.



Answer



$\sin(x) = x + o(x)$, then $\sin^2(x) = x^2 + 2xo(x) + (o(x))^2 = x^2 + o(x)$, since $$\lim_{x\to 0}\dfrac{2xo(x)+ (o(x))^2}{x} = \lim_{x\to 0}\left(2o(x)+\left(\dfrac{o(x)}{x}\right)^2 x\right) = 0$$



$e^{\sin(x)} = 1+ \sin(x) + o(\sin(x)) = 1 + x + o(x) + o(x+ o(x)) = 1+ x + o(x)$, since



$$\lim_{x\to 0}\dfrac{o(x)+ o(x + o(x))}{x} = \lim_{x\to 0}\dfrac{o(x)}{x} + \lim_{x\to 0}\dfrac{o(x + o(x))}{x + o(x)}\dfrac{x+o(x)}{x} = 0$$



Thus $e^{\sin(x)} - \sin^2(x) -1 = x - x^2 + o(x) = x + o(x)$, and dividing by $x$ shows that the limit equals $1$.


stochastic processes - Time until a consecutive sequence of ones in a random bit sequence



This a reformulation of a practical problem I encountered.



Say we have an infinite sequence of random i.i.d. bits. For each bit $X_i$, $P(X_i=1)=p$.




What is the expected time until we get a run of $n$ consecutive 1 bits?



Thanks!


Answer



There is a lot of literature on such questions concerning the mean time for patterns. For your particular problem a solution can be found on page 156 of Introduction to Probability Models (10th edition) by Sheldon Ross. The formula is
$$E[T]=1/p+1/p^2+\cdots+1/p^n={(p^{-n}-1)/(1-p)}.$$



As expected, this is a decreasing function of $p$ for fixed $n$: it takes longer to see rarer events. As $p$ goes from 0 to 1, $E[T]$ decreases from infinity to $n$.







Added: Here is a derivation of the formula in my answer.



Let $T$ be the random variable that records the first time
we see $n$ ones in a row. Let's also
define the random variable $L$ to be the position of the first zero bit in
the sequence.



Looking at the first $n$ bits there are, roughly speaking, two possibilities: either I get the desired pattern of $n$ ones, or I get a zero bit at some time $k \le n$ and the whole problem starts over.



More formally, conditioning on the value of $L$ we get
\begin{eqnarray*}
E[T] &=& \sum_{k=1}^{n} E[T \ |\ L=k]\ P(L=k) + E[T\ |\ L> n]\ P(L>n)\cr
&=& \sum_{k=1}^{n} (k+E[T])\ P(L=k) + n P(L > n)\cr
&=& \sum_{k=1}^{n} (k+E[T])\ p^{k-1}(1-p) + n p^n.
\end{eqnarray*}



Solving this equation for $E[T]$ gives the formula.
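As a sanity check of the formula, here is a minimal Monte Carlo sketch in Python (my own, not part of the original answer; the parameters are arbitrary):

import random

def mean_time(p, n, trials=20000):
    # estimate E[T]: count flips until n consecutive ones appear
    total = 0
    for _ in range(trials):
        run = t = 0
        while run < n:
            t += 1
            run = run + 1 if random.random() < p else 0
        total += t
    return total / trials

p, n = 0.5, 3
print(mean_time(p, n))         # simulation: approximately 14
print((p**-n - 1) / (1 - p))   # closed form: exactly 14.0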




There are many generalizations of this problem and
variations on the above proof that use, for instance, Markov chains,
or martingales, or generating functions, etc. In addition to Ross's book
mentioned above, you may like to look at




  1. Section 8.4 of Concrete Mathematics by Graham, Knuth, and Patashnik

  2. Chapter 14 of Problems and Snapshots from the World of Probability by Blom, Holst, and Sandell

  3. Section XIII 7 of An Introduction to Probability Theory and Its Applications by Feller



Saturday 30 May 2015

reference request - Need for an Analysis Book with the following requisites

I am a maths graduate. I have to appear for an entrance exam which has the following in its syllabus.



Background:



I have read analysis from Bartle & Sherbert and from Abbott's Understanding Analysis.



Requisite:



I am in need of a book which contains examples and many problems on the given topics and, if possible, some hints for selected problems. As I am doing self-study, having hints is a great advantage for me.




On browsing similar questions I found that people have mostly recommended Rudin's Principles of Mathematical Analysis. Is it suitable for self-study? Also, there are so many books under the topic "Good books for self study in Analysis" that it is difficult to select one.



Also, the type of questions that appeared in the previous exam, like "if a continuous function is injective then it is either increasing or decreasing", are not available there. Please suggest some books for dealing with this type of question in the exam.



Syllabus:



1. General topology: Topological spaces, continuous functions, connectedness, compactness, separation axioms, product spaces, quotient topology, complete metric spaces, uniform continuity, Baire category theorem.




2. Real analysis: Sequences and series, continuity and differentiability of real-valued functions of one variable and applications, uniform convergence, Riemann integration, continuity and differentiability of real-valued functions of several variables, partial derivatives and mixed partial derivatives, total derivatives.



Looking forward to all of you for an answer.

number theory - If $\gcd(a,b)=d$, then $\gcd(ac,bc)=cd$?



$A$ an integral domain, $a,b,c\in A$. If $d$ is a greatest common divisor of $a$ and $b$, is it true that $cd$ is a greatest common divisor of $ca$ and $cb$? I know it is true if $A$ is a UFD, but can't think of a counterexample in general situation.


Answer




Here is the best that one can say for arbitrary integral domains:



LEMMA $\rm\ \ (a,b)\ =\ (ac,bc)/c\quad$ if $\rm\ (ac,bc)\ $ exists.



Proof $\rm\quad d\ |\ a,b\ \iff\ dc\ |\ ac,bc\ \iff\ dc\ |\ (ac,bc)\ \iff\ d|(ac,bc)/c$



Generally $\rm\ (ac,bc)\ $ need not exist, which is most insightfully viewed as a failure of



EUCLID'S LEMMA $\rm\quad a\ |\ bc\ $ and $\rm\ (a,b)=1\ \Rightarrow\ a\ |\ c\quad$ if $\rm\ (ac,bc)\ $ exists.




Proof $\ \ $ If $\rm\ (ac,bc)\ $ exists then $\rm\ a\ |\ ac,bc\ \Rightarrow\ a\ |\ (ac,bc) = (a,b)\:c = c\ $ by the Lemma.



Therefore if $\rm\: a,b,c\: $ fail to satisfy the implication $\Rightarrow$ in Euclid's Lemma,
namely if $\rm\ a\ |\ bc\ $ and $\rm\ (a,b) = 1\ $ but $\rm\ a\nmid c\:$, then one immediately deduces that the gcd $\rm\ (ac,bc)\ $ fails to exist. For the special case $\rm\:a\:$ is an atom (i.e. irreducible), the implication reduces to: atom $\Rightarrow$ prime. So it suffices to find a nonprime atom
in order to exhibit a pair of elements whose gcd fails to exist. This task is a bit simpler, e.g. for $\rm\ \omega = 1 + \sqrt{-5}\ \in\ \mathbb Z[\sqrt{-5}]\ $ we have that the atom $\rm\: 2\ |\ \omega'\: \omega = 6\:,\:$ but $\rm\ 2\nmid \omega',\:\omega\:,\:$ so $\rm\:2\:$ is not prime. Therefore we deduce that the gcd $\rm\: (2\:\omega,\ \omega'\:\omega)\ =\ (2+2\sqrt{-5},\:6)\ $ fails to exist in $\rm\ \mathbb Z[\sqrt{-5}]\:$.



Note that if the gcd $\rm\: (ac,bc)\ $ fails to exist then this implies that the ideal $\rm\ (ac,bc)\ $ is not principal. Therefore we've constructively deduced that the failure of Euclid's lemma immediately yields both a nonexistent gcd and a nonprincipal ideal.



That the $\Rightarrow$ in Euclid's lemma implies that Atoms are Prime $\rm(:= AP)$ is denoted $\rm\ D\ \Rightarrow AP\ $ in the list of domains closely related to GCD domains in my post here. There you will find links to further literature on domains closely related to GCD domains. See especially the referenced comprehensive survey by D.D. Anderson: GCD domains, Gauss' lemma, and contents of polynomials, 2000.




See also my post here for the general universal definitions of $\rm GCD,\: LCM$ and for further remarks on how such $\iff$ definitions enable slick proofs, and see here for another simple example of such.


How do I divide across an equivalence in modular arithmetic?



I understand how to solve for $k$ for something like $2k \equiv 4 \bmod 16$. This would become $k \equiv 2 \bmod 8$. But how would I solve for $k$ for something like $3k \equiv 1 \bmod 16$?


Answer



If you have $2k\equiv 4 \pmod {16}$, then for some integer $m$,




$$2k=16m+4$$
which means $k=8m+2$. Now if you have $3k\equiv 1\pmod{16}$, we can multiply by $5$ to get
$$15k\equiv -k\equiv 5 \pmod {16}$$
and thus



$$k\equiv -5\equiv 11\pmod{16}$$
Or $k=16m+11$.



In general, $a$ has a multiplicative inverse mod $n$ iff $(a,n)=1$.
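When no convenient multiplier (like the $5$ above) presents itself, the inverse can always be computed with the extended Euclidean algorithm. A minimal Python sketch (the function names are my own):

def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    g, x, _ = ext_gcd(a, n)
    if g != 1:
        raise ValueError("no inverse: gcd(a, n) != 1")
    return x % n

print(mod_inverse(3, 16))  # 11, so k = 11 solves 3k = 1 (mod 16)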


calculus - Functions $f$ such that $f(x)+f(-x)=f(x)f(-x)$



I was looking for examples of real valued functions $f$ such that $f(x)+f(-x)=f(x)f(-x)$. Preferably, I'd like them to be continuous, differentiable, etc.



Of course, there are the constant functions $f(x)=0$ and $f(x)=2$. I also showed that $1+b^x$, where $b>0$, is another solution. Are there any other nice ones?


Answer



From your solution, I thought to consider the change of variable $f(x) = 1 + g(x)$: your functional equation then becomes



$$ g(x) g(-x) = 1 $$




and now the entire solution space becomes obvious: you can pick $g(x)$ to be any function on the positive reals that is nowhere zero, and then the values at the negative reals are determined by the functional equation. And at zero, you can pick either $g(0) = 1$ or $g(0) = -1$.


recreational mathematics - solutions to this functional equation



$f(x) = f(g(x))$ where $g(x) = x^k$.



This turned out to be much harder than it appeared initially. There are a couple of things I have figured out which will get you at least one solution: first is that this is not different from solving the more generalised $f(x) = f(g^n(x))$ for any $n$, secondly, it is easier to solve similar equations:




$h(g(x)) = h(x) + c$,



$h(x^k) = c-h(x)$ and



$h(x^k) = -h(x)$



Once we have a solution to $h(x)$, we could then convert it to $f(x)$ by taking the sine or cosine.




Example:




In order to solve $h(x) = h(x^{k^n})+c$, $h$ must transform $n$ from a
double exponent into a linear addition, therefore it must take the
logarithm twice. This would yield $c = -\log_P{k}$ and
$h(x)=\log_P{\log_Q{x}}$ for any $P,Q$ when $n=1$. Therefore $$f(x) =
A\sin\left(\frac{2\pi}{\log_Pk}\log_P\log_Q(x^k)\right)$$ for chosen constants
$P,Q,A$ in their appropriate ranges. There are other trivial variants
such as the square of the same function or the cosine of the input. This function
is periodic in some sense.





My question is just, how do we solve the second and third equations for
$h(x)$? I can see that it is equivalent to $h(x^{k^{n}})=h(x^k)$ for
even $n$ and $h(x^{k^{n}})=c-h(x^k)$ for odd $n$. I can't see much
further with the third equation.



Another thing that may be of interest is whether there is any general behaviour for the solutions to this type of functional equation for any $g(x)$. At first I thought there should be some sort of periodicity since $f(x) = f(g^n(x))$ must be periodic for $n$ but later dismissed that. The reason being that, even for simple functions like $g(x) = \frac{1}{x}$, a nice solution by sight is $f(x) = (\log(x))^2$ which really doesn't show any periodicity at all.


Answer



$$h(x^k)=c-h(x)$$
Let us define the function $\psi$ as

$$\psi(\ln x)=h(x)$$
so that we have, from our original functional equation,
$$\psi(k\ln x)=c-\psi(\ln x)$$
and let us make the substitution $y=\ln x$. Then
$$\psi(ky)=c-\psi(y)$$
again, let $\phi(\ln y)=\psi(y)$. Then
$$\phi(\ln k+\ln y)=c-\phi(\ln y)$$
and then if we make the substitution $z=\ln y$,
$$\phi(z+\ln k)=c-\phi(z)$$
By guess-and-check, a solution to this is

$$\phi(z)=\sin\bigg(\frac{\pi z}{\ln k}\bigg)+\frac{1}{2}c$$
And after we undo all of our substitutions, we get
$$h(x)=\sin\bigg(\frac{\pi \ln(\ln x)}{\ln k}\bigg)+\frac{1}{2}c$$
Does this work for you?



As for your general equation, that can be solved by
$$f(x)=\sin\bigg(\frac{2\pi \ln(\ln x)}{\ln k}\bigg)$$



GENERAL TIP: If you want to solve any functional equation for $f$ (given $g$) in the form
$$f(x)=f(g(x))$$

Then to find a sinusoidal solution, all you need to do is find a function $h$ with the property
$$h(g(x))=h(x)+2\pi$$
and then you can let
$$f(x)=\sin h(x)$$
for a quick and easy answer.
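A quick numerical check of the solution $f(x)=\sin\big(\frac{2\pi \ln(\ln x)}{\ln k}\big)$ given above (my own verification, with arbitrary sample values $k=2$, $x=5$; valid for $x>1$ so that $\ln(\ln x)$ is defined):

import math

k = 2.0

def f(x):
    # f(x) = sin(2*pi*ln(ln x)/ln k) should satisfy f(x) = f(x**k)
    return math.sin(2 * math.pi * math.log(math.log(x)) / math.log(k))

x = 5.0
print(f(x), f(x**k))  # the two printed values agree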


Friday 29 May 2015

calculus - Proving that the limit of $\sqrt[n]{a_1^n + a_2^n + \dots + a_k^n}$ is $\max(a_1,\dots,a_k)$






Prove that:
$$\lim_{n \to \infty} \sqrt[n]{a_1^n + a_2^n + \dots + a_k^n} = \max \left\{ a_1,a_2,\dots,a_k \right\}$$





I am familiar with the theorem which says that if
$$\lim_{n \to \infty} \frac{a_n}{a_{n-1}} = L$$



then,
$$\lim_{n \to \infty} \sqrt[n]{a_n} = L$$



So, I tried evaluating the expression:
$$\frac{a_1^n + a_2^n + \dots + a_k^n}{a_1^{n-1} + a_2^{n-1} + \dots + a_k^{n-1}}$$



but pretty much got stuck here. Is this the right path to go?


Answer



$$\sqrt[n]{a_{\max}^n} \leq \sqrt[n]{a^n_1 + \dots + a_k^n} \leq \sqrt[n]{k \cdot a_{\max}^n}$$

Since $\sqrt[n]{k} \to 1$, both bounds tend to $a_{\max}$, and the squeeze theorem gives the result.



combinatorics - Combinatorial Species: significance and problems that can be solved with it

Combinatorial species is a subject I recently came across when, just out of curiosity, I looked for possible interactions between category theory and combinatorics. After a while I ended up here: Learning Combinatorial Species, and later at this book: Combinatorial Species and tree-like structures. For someone comfortable in category theory this may be a very beautiful thing to mull over indeed, and it adds flexibility to the theory of generating functions as well, the latter being of real significance.



Though, in every book or set of notes I could come up with, I didn't find an "intuitive" application of combinatorial species. Combinatorics is definitely not only about counting anymore, but arguably the most fun material in it has to do with counting problems (because those sorts of problems have more intuition, I guess).



So my question has to do with that: could you please give me an instance of an application of combinatorial species in enumerative combinatorics? Any other example of an application is welcome too, of course!




Last comment: I don't have a problem with its definition or significance; I'm looking for an intuitive approach to the aforementioned notion.
(I mention the latter because I want to avoid any possible duplication with other possibly related questions on MSE; I think I checked them all and no such answer/question has shown up. If such a duplicate with an answer does exist, though, please feel free to mention it!)



Thank you!

number theory - Prove that there are positive integers $x_1,x_2,\ldots,x_n \geq 1$ such that $x_1x_2\ldots x_n = a_1x_1+a_2x_2+\cdots+a_nx_n$




Prove that for any natural number $n$ and for any natural numbers $a_k, k = 1,\ldots,n,\{a_1,a_2,\ldots,a_n\} = \{1,2,\ldots,n\},$ there are positive integers $x_1,x_2,\ldots,x_n \geq 1$ such that $$x_1x_2\ldots x_n = a_1x_1+a_2x_2+\cdots+a_nx_n.$$




Since $\{a_1,a_2,\ldots,a_n\} = \{1,2,\ldots,n\}$, we may assume that $a_k = k$ and our equation becomes $$x_1+2x_2+3x_3+\cdots+nx_n = x_1x_2 \cdots x_n.$$ For example, if $n = 2$ we need $x_1+2x_2 = x_1x_2$. Then we have $x_1(x_2-1) = 2x_2$ and so $x_1 = \dfrac{2x_2}{x_2-1}$, so $x_2 = 2$ and $x_1 = 4$. For $n = 3$ we need $$x_1+2x_2+3x_3 = x_1x_2x_3.$$ Then $x_1(x_2x_3-1) = 2x_2+3x_3$ and $x_1 = \dfrac{2x_2+3x_3}{x_2x_3-1}$ and I didn't see how to choose the $x_i$ here.


Answer



For $n=3$ you have $x_1 = \dfrac{2x_2+3x_3}{x_2x_3-1}$. You want to make the division come out even, and an easy way to do so is choose $x_2=2,x_3=1$ so the denominator is $1$. Now you can just compute that $x_1=7$. The same approach works for higher $n$. You have $x_1 = \dfrac{2x_2+3x_3+\ldots +nx_n}{x_2x_3\ldots x_n-1}$. Choose $x_2=2, x_i=1$ for $3 \le i \le n$ and let $x_1$ come out what it may, which happens to be $1+\frac 12n(n+1)$
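A quick numerical verification of this construction (my own check, not part of the original answer):

# with a_k = k, x_1 = 1 + n(n+1)/2, x_2 = 2 and x_3 = ... = x_n = 1
for n in range(3, 30):
    x = [1 + n * (n + 1) // 2, 2] + [1] * (n - 2)
    product = 1
    for v in x:
        product *= v
    weighted_sum = sum((k + 1) * x[k] for k in range(n))  # sum of a_k * x_k
    assert product == weighted_sum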


linear algebra - Can an elementary row operation change the size of a matrix?



My question is very simple - Can an elementary row operation change the size (eg: $2\times2$ or $3\times 2$) of a matrix?



I think the answer should be no, but while reading Linear Algebra by Hoffman Kunze I stumbled upon this:





Definition. An $m\times n$ matrix is said to be an elementary matrix if it can be obtained from the $m\times m$ identity matrix by means of a single elementary row operation.




Now, I know of 3 elementary row operations (adding a multiple of one row to another, multiplying a row throughout by a nonzero constant, and interchanging two rows), but none of them can change the size of a matrix.



But since this is a highly praised book I can't trust myself as much as I'd like to.


Answer



Original Answer:




It must be a typo. For another reference, if you look at Horn & Johnson's book here (chapter 0, section 0.3.3 in the first edition) the authors discuss how elementary row operations can be achieved via left multiplication by square matrices.



Side note: If we use the fact that elementary row operations on a matrix $\boldsymbol{M}$ are equivalent to multiplying $\boldsymbol{M}$ on the left by (certain) square matrices, it is easy to determine the effect of elementary row operations on the determinant (recall that, for square matrices, $\det(\boldsymbol{UM})=\det(\boldsymbol{U})\det(\boldsymbol{M})$).



Update:



I can see it not being a typo if what the authors mean is this:



Elementary row operations can be represented using matrix multiplication. How? Suppose I have a matrix $\boldsymbol{A}\in M_{m\times n}$ and I want to apply any of the three operations. All I have to do is:





  • Begin with the identity matrix of size $\boldsymbol{m}$, call it $\boldsymbol{I}_m$.

  • Apply whichever of the three elementary row operations you want to $\boldsymbol{I}_m$, call this new matrix $\boldsymbol{I}_{new}$. In other words, if you want to apply an elementary row operation to $\boldsymbol{A}$ you first apply it to $\boldsymbol{I}_m$.

  • Multiply $\boldsymbol{A}$ on the left by $\boldsymbol{I}_{new}$.



Note that this method for performing elementary row operations essentially begins with the identity matrix $\boldsymbol{I}_m$ and then performs the elementary operation on $\boldsymbol{I}_m$. This might be in line with what the authors meant. Of course, you then need to multiply $\boldsymbol{A}$ by this modified identity matrix which won't change the dimension of $\boldsymbol{A}$.







Example (interchanging two rows): I want to interchange the first and third rows of $\boldsymbol{A}\in \mathbb{R}_{3\times5}$.



Solution: Multiply:
\begin{align*}
{\underbrace{\left[\begin{array}{ccc}0 & 0 & 1 \\0 & 1 & 0 \\1 & 0 & 0\end{array}\right]}_{\boldsymbol{I}_{new}}}\boldsymbol{A},
\end{align*}
this will interchange the first and third rows of $\boldsymbol{A}$.



Note: the determinant of $\boldsymbol{I}_{new}$ here is -1. (This can be shown using the Alternating Sums formulation of the determinant).







Example (multiplying a row by a non-zero scalar): I want to multiply the second row of $\boldsymbol{A}\in \mathbb{C}_{3\times 5}$ by 7/3.



Solution: Multiply:
\begin{align*}
{\underbrace{\left[\begin{array}{ccc}1 & 0 & 0 \\0 & 7/3 & 0 \\0 & 0 & 1\end{array}\right]}_{\boldsymbol{I}_{new}}}\boldsymbol{A},
\end{align*}
this will multiply the second row of $\boldsymbol{A}$ by $7/3$.




Note: The determinant of $\boldsymbol{I}_{new}$ here is 7/3. (The determinant of a diagonal matrix is the product of the diagonal entries.)






Example (adding a multiple of one row to another): I want to add 5 times row 1 to row 2 of $\boldsymbol{A}\in \mathbb{C}_{3\times 5}$.



Solution: Multiply:
\begin{align*}
{\underbrace{\left[\begin{array}{ccc}1 & 0 & 0 \\5 & 1 & 0 \\0 & 0 & 1\end{array}\right]}_{\boldsymbol{I}_{new}}}\boldsymbol{A},
\end{align*}

this will multiply the first row of $\boldsymbol{A}$ by $5$ and add it to the second row of $\boldsymbol{A}$ leaving the first row unchanged.



Note: The determinant of $\boldsymbol{I}_{new}$ here is 1. (The determinant of a triangular matrix is the product of the diagonal entries.)






Notice that, in each of these cases, all that was necessary was to choose the identity matrix of the right dimension (in our case, $3\times3$) and to apply the elementary row operation to this identity matrix. Then, we multiply $\boldsymbol{A}$ on the left by this new matrix.




  • If we wish to apply multiple elementary row operations then we can simply repeat this process sequentially for each operation we apply.



  • It is now easy to see how elementary row operations change the determinant of a square matrix $\boldsymbol{A}$ since the determinant of the product of two matrices is the product of the determinants. Additionally, the determinants of the "$\boldsymbol{I}_{new}$" matrices are easy to compute.


  • All these "$\boldsymbol{I}_{new}$" matrices are nonsingular (it is easy to see that their determinant is nonzero).



elementary number theory - Accidental divisibility sequence?

It seems (verified for $n \leq 3 \cdot 10^5$) that when the equation



$$p = n!+1 \bmod{\frac{n^2(n+1)^2}{4}}$$



is satisfied for some natural number $n>1$ and some $p\in\mathbb P$, it implies $n\in\mathbb P$ as well. The overall feel reminds me of a divisibility sequence like Fibonacci or $2^n-1$, which is the only time I've seen prime values locked to prime indices like that. However, I don't see many other factors being passed along with index factors, which is what I would normally expect.




Note that this applies only to the specific $p$ calculated as above using mod as an operation, and not to any arbitrary $p$ satisfying a congruence.



Can anyone explain why my sequence is behaving this way?






Upon reflection I think it has to do with my using Wilson's Theorem in a mangled way, and consequently I'm only getting non-$1$ values near primes and $2p$ semiprimes.



Some Mathematica code to show a table if you like:






upperBound = 100;
pretty[n_] :=
  If[Length@# > 1, CenterDot @@ #, First@#] &@
    FactorInteger[n] /. {{b_, 1} :> b, {b_, e_} :> Superscript[b, e]};
boldPrime[n_] := If[IntegerQ[n] && PrimeQ[n], Style[n, Bold, Blue], n];
Grid[Table[{n, boldPrime@pretty@n,
    boldPrime@pretty@Mod[n! + 1, Plus @@ (n^2 (n + 1)^2/4)]},
  {n, 2, upperBound}], Alignment -> Right]





My updated thoughts:



For all $n>9$, it seems like $p=1$ except in one of three conditions:





  1. $n+1$ is prime, in which case $n+1$ divides $p$

  2. $n+1$ is an even semiprime, i.e. $2q$ with prime $q$, in which case $2$ divides $p$

  3. $n$ is prime, in which case $p$ may or may not be prime, presumably randomly.



If these are indeed the conditions (and I've verified them through $10^5$, so it seems very likely), that would be sufficient to explain the primality correlation. However, I'm still looking for a cogent explanation for why these particular conditions arise from my equation, and will happily accept an answer that can give me some insight into that.

number theory - Field construction

Explain how to construct a field of order $343$ not using addition and multiplication tables.



I understand that every finite field has order $p^n$ for some prime $p$. Since $343$ is $7^3$, let $p=7$. I believe I need to find a polynomial of degree 3 which does not factor over $\mathbb{Z}_7$. I have considered the following polynomial $x^3+x+1$ and showed that it is irreducible over $\mathbb{Z}_7$.



I am not sure how to proceed from here; any help would be great.
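For what it's worth, the irreducibility claim is easy to double-check with SymPy (my own sketch); once the cubic is irreducible, the quotient $\mathbb{Z}_7[x]/(x^3+x+1)$ is a field with $7^3 = 343$ elements:

from sympy import Poly, symbols, GF

x = symbols('x')
p = Poly(x**3 + x + 1, x, domain=GF(7))
print(p.is_irreducible)  # True: the cubic has no roots mod 7, hence no linear factor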

number theory - Size of infinte sets cardinality

The question is as follows:



Prove that if R is uncountable and T is a countable subset of R, then the cardinality of R\T is the same as the cardinality of R.



What I have:




I know that R is uncountable, so it has a countable subset (this is a theorem about uncountable sets). Let T be this subset, so T has the same cardinality as the set of natural numbers (by the definition of T being countable). My intuition is telling me that we will have to use the Cantor-Bernstein theorem to prove they have the same cardinality. So the first thing I showed was that |R\T| <= |R| (pretty clear, as it is R\T; |R| means the cardinality of R). I got a bit confused while trying to show that |R| <= |R\T|. Maybe we can show this by defining an injection f from R --> R\T such that f(x) = x when x is in R\T, but I don't know what to do if x is in R and not in R\T. If I can define that function then I can conclude |R| <= |R\T|, then use the Cantor-Bernstein theorem, and then I'm done. Or maybe I'm doing this all wrong; I can't think of any other way. Any help is much appreciated!

real analysis - Evaluating $\lim_{n \to \infty} n\left(1-\frac{1}{e}\left(1+\frac{1}{n}\right)^{n} \right)$





$$\lim_{n \to \infty} n\bigg(1-\dfrac{1}{e}\bigg(1+\dfrac{1}{n}\bigg)^{n} \bigg)$$




If I write out the expansion of $\bigg(1+\dfrac{1}{n}\bigg)^{n}$, it looks equal to the expansion of $e$, so I get $n-n=0$. Is the limit zero?



Edit:
Uploading a screenshot of my response. They marked it correct, and I don't know whether the answer key is wrong or not. Can anyone please say for sure, with $100$ percent surety, whether the given answer is wrong?


Answer




By L'Hôpital's rule we obtain:
$$\lim_{n\rightarrow+\infty}n\left(1-\frac{1}{e}\left(1+\frac{1}{n}\right)^{n}\right)=\lim_{x\rightarrow0^+}\frac{e-(1+x)^{\frac{1}{x}}}{ex}=-\frac{1}{e}\lim_{x\rightarrow0^+}\left(e^{\frac{\ln(1+x)}{x}}\right)'=$$
$$=-\frac{1}{e}\lim_{x\rightarrow0^+}(1+x)^{\frac{1}{x}}\left(\frac{1}{x(1+x)}-\frac{\ln(1+x)}{x^2}\right)=$$
$$=-\frac{1}{e}\lim_{x\rightarrow0^+}(1+x)^{\frac{1}{x}}\lim_{x\rightarrow0^+}\frac{x-(1+x)\ln(1+x)}{x^2+x^3}=$$
$$=-\lim_{x\rightarrow0^+}\frac{1-\ln(1+x)-1}{2x+3x^2}=\lim_{x\rightarrow0^+}\frac{\ln(1+x)}{x}\lim_{x\rightarrow0^+}\frac{1}{2+3x}=\frac{1}{2}.$$
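A numerical sanity check of the value $\frac12$ (my own, not part of the answer):

import math

for n in (10**3, 10**5, 10**7):
    print(n * (1 - (1 + 1/n)**n / math.e))  # tends to 0.5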


calculus - How to prove $\lim_{n \to \infty}a_n=1 \rightarrow \lim_{n \to \infty}\sqrt[n]{a_n}=1$





Let $a_n\geq0$



Prove/disprove: $$\lim_{n \to \infty}a_n=1 \rightarrow \lim_{n \to \infty}\sqrt[n] a_n=1$$




Proof: By definition a sequence satisfies $\displaystyle\lim_{n \to \infty}\sqrt[n]{b_n}=L$ iff $\displaystyle\lim_{n \to \infty}\frac{b_{n+1}}{b_n}=L$. Since $\displaystyle\lim_{n \to \infty}a_n=1$, we have $\displaystyle\lim_{n \to \infty}\frac{a_{n+1}}{a_n}=1$, and therefore $\displaystyle\lim_{n \to \infty}\sqrt[n]{a_n}=1$.



Am I right?



Answer



I don't see where the fact you use comes from (certainly not from the definition). From the link you provided it seems at least the "if" part is true, so I guess you can prove it like that.



But there's also a quick and easy way to see it:
\begin{align*}1\le\sqrt[n]x\le x&\text{ if }x\ge1\\1\ge\sqrt[n]x\ge x&\text{ if }x\le1\end{align*}
Therefore $\sqrt[n]{a_n}\to1$ because all its members are closer to $1$ than in the original sequence.


calculus - Riemann Sums as definite integrals



I looked at all the resources for Riemann Sums for BC calculus and I could not find any that solved them like my teacher does. The question asks:



Express the following Riemann Sums as definite integrals




$\lim_{n \to \infty}\sum_{k=1}^{2n}(\frac{1}{1+\frac{2k}{n}}) (\frac{1}{n})$



Use the following $x$-values and define an appropriate definite integral:



a. x = $\frac{k}{n}$



b. x = $\frac{2k}{n}$



c. x = $1 +\frac{2k}{n}$




My teacher said that there are five steps to converting a Riemann sum into a definite integral,




  1. Determine a possible dx

  2. Determine a value for x

  3. Determine the bounds of the definite integral

  4. Verify your dx is correct

  5. Determine your function & write the definite integral




So for problem a I said dx = $\frac{1}{n}$



x = $\frac{k}{n}$



Then to determine the lower bound I said the lower limit $= \lim_{n \to \infty} \frac{1}{n} = 0$.



Then to determine the upper bound I said the upper limit $= \lim_{n \to \infty} \frac{2n}{n} = 2$.



To verify dx I said dx $= \frac{b - a}{\text{number of terms}} = \frac{2 - 0}{2n} = \frac{1}{n}$.




Then the function is $\frac{1}{1+2x}$, so my definite integral is $\int_{0}^2 \frac{1}{1+2x}dx$.



I have no idea how to do b or c, or even whether my answer to a is correct. All help is appreciated; thanks in advance.


Answer



Your answer to part a is correct.



For part b:
$$ dx= x_{k+1}-x_k$$
$$=\frac{2(k+1)}{n}-\frac{2k}{n}=\frac{2}{n}$$




About the limits, ($a$=lower limit, $b$=upper limit) $$a=\lim_{n\to\infty}\frac{2(1)}{n}=0,$$



$$b=\lim_{n\to \infty}\frac{2(2n)}{n}=4$$
Therefore , the integral would be:
$$I=\int_0^4\frac{1}{1+x}(0.5dx)$$or
$$I=\int_0^4\frac{0.5}{1+x}dx$$



Same drill for part c (I will skip some steps):
$$ dx=x_{k+1}-x_k=(1+\frac{2(k+1)}{n})-(1+\frac{2k}{n})=\frac{2}{n}$$

$$a=\lim_{n\to \infty}1+\frac{2(1)}{n}=1$$
$$b=\lim_{n\to \infty}1+\frac{2(2n)}{n}=5$$
$$I=\int_1^5\frac{1}{x}(0.5dx)=\int_1^5\frac{0.5}{x}dx $$



Notice that irrespective of the choice of $x,dx$ and limits ( $a$ and $b$), the definite integral evaluates to the same number.
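A quick numerical check that the Riemann sum approaches this common value, which is $\tfrac12\ln 5 \approx 0.8047$ (my own sketch):

import math

n = 10**5
riemann = sum(1 / (1 + 2*k/n) * (1/n) for k in range(1, 2*n + 1))
print(riemann)             # ~0.80472
print(0.5 * math.log(5))   # 0.80472..., the value of all three integrals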


number theory - Help finishing proof with polynomial discriminant?

Prove that the discriminant of $$f(x) = x^n + nx^{n-1} + n(n-1)x^{n-2} + \cdots + n(n-1)\ldots (3)(2)x + n!$$ is $(-1)^{n(n-1)/2}(n!)^n$.



So far, I let $\alpha_1,\ldots, \alpha_n$ be the roots of $f(x)$. Taking the derivative of $\log f(x) = \sum_{i=1}^n \log(x - \alpha_i)$, we have that $$\frac{f'(x)}{f(x)} = \sum_{i=1}^n \frac{1}{x-\alpha_i}.$$ Thus, $$f'(\alpha_j) = \prod_{i=1, i\neq j}^n (\alpha_j - \alpha_i)$$ Then the discriminant is $$D = \prod_{i<j}(\alpha_i-\alpha_j)^2.$$

computer science - Computing the iterated logarithm (log-star) by hand

I'm having trouble figuring out how to compute the iterated logarithm (log-star) by hand without using a calculator or programming language. I wrote out the following program in Java to check my answers as I practice but can't seem to wrap my head around how to do this without a computer.



public class Recurrences {

    public static void main(String[] args) {
        // Compute log-star (base 2) of 100,000
        System.out.println((int) logStar(100000, 2));
    }

    /**
     * log-star recursive method
     * @param n Value of input
     * @param base Base value for the logarithm (i.e. log base 2 would give 2)
     * @return If n > 1.0, return 1 + logStar( log2(n) ), else return 0.
     */
    public static double logStar(double n, int base) {
        if (n > 1.0) { // Recursive case
            return 1.0 + logStar(Math.log(n) / Math.log(base), base);
        } else { // Base case. If n <= 1, return 0.
            return 0;
        }
    }
}


Do you have any tips as to how you would calculate log-star of, say, $100000$? My program says the answer should be $5$, but I don't know how I would go about getting that answer with pen and paper. Also, as shown above in the code, I'm working in log base 2.



Edit: here is a link that explains the concept. Sorry for the initial lack of info.
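One way to carry out the computation by hand (my own sketch, using only rough powers of $2$): since $2^{16} = 65536 \le 100000 < 131072 = 2^{17}$, we have $\log_2 100000 \approx 16.6$. Iterating, $\log_2 16.6 \approx 4.05$, $\log_2 4.05 \approx 2.02$, $\log_2 2.02 \approx 1.01$, and $\log_2 1.01 \approx 0.01 \le 1$. That is five applications of $\log_2$ before the value drops to $1$ or below, so $\log^*_2 100000 = 5$, matching the program.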

Thursday 28 May 2015

Could you tell me a function that is bijective?



Could you tell me a function that is bijective? The domain and codomain of the function must be $[0,1)$. I can't find any bijection. Please help me!


Answer




The identity function $\mathrm{id} : [0,1) \to [0,1)$ defined as $\mathrm{id}(x) := x$ for all $x \in [0,1)$ is a bijection.





Proof: For arbitrary $x,y \in [0,1)$ with $x \neq y$ it follows that $\mathrm{id}(x) = x \neq y = \mathrm{id}(y)$. Therefore, $\mathrm{id}$ is injective. For an arbitrary $y \in [0,1)$ there is always an $x \in [0,1)$, such that $y = \mathrm{id}(x)$, namely $x := y$. Therefore, $\mathrm{id}$ is surjective. Since $\mathrm{id}$ is both injective and surjective, it is bijective.


real analysis - How discontinuous can a derivative be?




There is a well-known result in elementary analysis due to Darboux which says if $f$ is a differentiable function then $f'$ satisfies the intermediate value property. To my knowledge, not many "highly" discontinuous Darboux functions are known--the only one I am aware of being the Conway base 13 function--and few (none?) of these are derivatives of differentiable functions. In fact they generally cannot be, since an application of Baire's theorem gives that the set of continuity points of the derivative is a dense $G_\delta$.



Is it known how sharp that last result is? Are there known Darboux functions which are derivatives and are discontinuous on "large" sets in some appropriate sense?


Answer



What follows is taken (mostly) from more extensive discussions in the following sci.math posts:



http://groups.google.com/group/sci.math/msg/814be41b1ea8c024 [23 January 2000]



http://groups.google.com/group/sci.math/msg/3ea26975d010711f [6 November 2006]




http://groups.google.com/group/sci.math/msg/05dbc0ee4c69898e [20 December 2006]



Note: The term interval is restricted to nondegenerate intervals (i.e. intervals containing more than one point).



The continuity set of a derivative on an open interval $J$ is dense in $J.$ In fact, the continuity set has cardinality $c$ in every subinterval of $J.$ On the other hand, the discontinuity set $D$ of a derivative can have the following properties:




  1. $D$ can be dense in $\mathbb R$.


  2. $D$ can have cardinality $c$ in every interval.



  3. $D$ can have positive measure. (Hence, the function can fail to be Riemann integrable.)


  4. $D$ can have positive measure in every interval.


  5. $D$ can have full measure in every interval (i.e. measure zero complement).


  6. $D$ can have a Hausdorff dimension zero complement.


  7. $D$ can have an $h$-Hausdorff measure zero complement for any specified Hausdorff measure function $h.$




More precisely, a subset $D$ of $\mathbb R$ can be the discontinuity set for some derivative if and only if $D$ is an $F_{\sigma}$ first category (i.e. an $F_{\sigma}$ meager) subset of $\mathbb R.$



This characterization of the discontinuity set of a derivative can be found in the following references: Benedetto [1] (Chapter 1.3.2, Proposition, 1.10, p. 30); Bruckner [2] (Chapter 3, Section 2, Theorem 2.1, p. 34); Bruckner/Leonard [3] (Theorem at bottom of p. 27); Goffman [5] (Chapter 9, Exercise 2.3, p. 120 states the result); Klippert/Williams [7].




Regarding this characterization of the discontinuity set of a derivative, Bruckner and Leonard [3] (bottom of p. 27) wrote the following in 1966: Although we imagine that this theorem is known, we have been unable to find a reference. I have found the result stated in Goffman's 1953 text [5], but nowhere else prior to 1966 (including Goffman's Ph.D. Dissertation).



Interestingly, in a certain sense most derivatives have the property that $D$ is large in all of the ways listed above (#1 through #7).



In 1977 Cliff Weil [8] published a proof that, in the space of derivatives with the sup norm, all but a first category set of such functions are discontinuous almost everywhere (in the sense of Lebesgue measure). When Weil's result is paired with the fact that derivatives (being Baire $1$ functions) are continuous almost everywhere in the sense of Baire category, we get the following:



(A) Every derivative is continuous at the Baire-typical point.



(B) The Baire-typical derivative is not continuous at the Lebesgue-typical point.




Note that Weil's result is stronger than simply saying that the Baire-typical derivative fails to be Riemann integrable (i.e. $D$ has positive Lebesgue measure), or even stronger than saying that the Baire-typical derivative fails to be Riemann integrable on every interval. Note also that, for each of these Baire-typical derivatives, $\{D, \; {\mathbb R} - D\}$ gives a partition of $\mathbb R$ into a first category set and a Lebesgue measure zero set.



In 1984 Bruckner/Petruska [4] (Theorem 2.4) strengthened Weil's result by proving the following: Given any finite Borel measure $\mu,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has $\mu$-measure zero.



In 1993 Kirchheim [6] strengthened Weil's result by proving the following: Given any Hausdorff measure function $h,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has Hausdorff $h$-measure zero.



[1] John J. Benedetto, Real Variable and Integration With Historical Notes, Mathematische Leitfäden. Stuttgart: B. G. Teubne, 1976, 278 pages. [MR 58 #28328; Zbl 336.26001]



[2] Andrew M. Bruckner, Differentiation of Real Functions, 2nd edition, CRM Monograph Series #5, American Mathematical Society, 1994, xii + 195 pages. [The first edition was published in 1978 as Springer-Verlag's Lecture Notes in Mathematics #659. The second edition is essentially unchanged from the first edition with the exception of a new chapter on recent developments (23 pages) and 94 additional bibliographic items.] [MR 94m:26001; Zbl 796.26001]




[3] Andrew M. Bruckner and John L. Leonard, Derivatives, American Mathematical Monthly 73 #4 (April 1966) [Part II: Papers in Analysis, Herbert Ellsworth Slaught Memorial Papers #11], 24-56. [MR 33 #5797; Zbl 138.27805]



[4] Andrew M. Bruckner and György Petruska, Some typical results on bounded Baire $1$ functions, Acta Mathematica Hungarica 43 (1984), 325-333. [MR 85h:26004; Zbl 542.26004]



[5] Casper Goffman, Real Functions, Prindle, Weber & Schmidt, 1953/1967, x + 261 pages. [MR 14,855e; Zbl 53.22502]



[6] Bernd Kirchheim, Some further typical results on bounded Baire one functions, Acta Mathematica Hungarica 62 (1993), 119-129. [94k:26008; Zbl 786.26002]



[7] John Clayton Klippert and Geoffrey Williams, On the existence of a derivative continuous on a $G_{\delta}$, International Journal of Mathematical Education in Science and Technology 35 (2004), 91-99.




[8] Clifford Weil, The space of bounded derivatives, Real Analysis Exchange 3 (1977-78), 38-41. [Zbl 377.26005]


calculus - Evaluating $\int P(\sin x, \cos x) \,\text{d}x$



Suppose $\displaystyle P(x,y)$ is a polynomial in the variables $x,y$.



For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$.




Is there a general method which allows us to evaluate the indefinite integral




$$ \int P(\sin x, \cos x) \text{d} x$$




What about the case when $\displaystyle P(x,y)$ is a rational function (i.e. a ratio of two polynomials)?



Example of a rational function: $\displaystyle \frac{x^2y + y^3}{x+y}$.







This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.



and here: List of Generalizations of Common Questions.


Answer



There are several general approaches to integrals that involve expressions with sines and cosines, polynomial expressions being the simplest ones.



Weierstrass Substitution




A method that always works is Weierstrass substitution, which will turn such an integral into an integral of rational functions, which in turn can always be solved, at least in principle, by the method of partial fractions. This works even for rational functions of sine and cosine, as well as functions that involve the other trigonometric functions.



Weierstrass substitution replaces sines and cosines (and by extension, tangents, cotangents, secants, and cosecants) by rational functions of a new variable. The identities begin with the trigonometric substitution $t = \tan\frac{x}{2}$, with $-\pi\lt x\lt \pi$, which yields
$$\begin{align*}
\sin x &= \frac{2t}{1+t^2}\\
\cos x &= \frac{1-t^2}{1+t^2}\\
dx &= \frac{2\,dt}{1+t^2}.
\end{align*}$$
For example, if we have

$$\int \frac{\sin x-\cos x}{\sin x+\cos x}\,dx$$
using the substitution above we obtain:
$$\begin{align*}
\int\frac{\sin x-\cos x}{\sin x+\cos x}\,dx &= \int\left(\frac{\quad\frac{2t}{1+t^2} - \frac{1-t^2}{1+t^2}\quad}{\frac{2t}{1+t^2} + \frac{1-t^2}{1+t^2}}\right)\left(\frac{2}{1+t^2}\right)\,dt\\
&= \int\left(\frac{\quad\frac{2t-1+t^2}{1+t^2}\quad}{\frac{1+2t-t^2}{1+t^2}}\right)
\left(\frac{2}{1+t^2}\right)\,dt\\
&= \int\left(\frac{2t-1+t^2}{2t+1-t^2}\right)\left(\frac{2}{1+t^2}\right)\,dt\\
&= 2\int\frac{2t-1+t^2}{(1+t^2)(2t+1-t^2)}\,dt
\end{align*}$$
which can then be integrated by the method of partial fractions.
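The worked example can be checked symbolically, e.g. with SymPy (a sketch of my own; the antiderivative comes out as $-\ln|\sin x+\cos x|$ up to a constant):

from sympy import symbols, sin, cos, integrate, diff, simplify

x = symbols('x')
integrand = (sin(x) - cos(x)) / (sin(x) + cos(x))
F = integrate(integrand, x)
print(simplify(diff(F, x) - integrand))  # 0, so F is a valid antiderivative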




Substitutions and Reduction formulas



However, there are usually faster methods, particularly for polynomial expressions. By breaking up the integral into a sum of integrals corresponding to the monomials, the problem reduces to solving integrals of the form
$$\int \left(\sin x\right)^n \left(\cos x\right)^m\,dx$$
with $n$ and $m$ nonnegative integers. The standard methods then are:




  1. If $n$ is odd, then "reserve" one sine, and transform the others into cosines by using the identity $\sin^2 x = 1-\cos^2x$. Then do the change of variable $u=\cos x$ to transform the integral into the integral of a polynomial. For example,
    $$\int \left(\sin x\right)^5\left(\cos x\right)^2\,dx,$$

    then take $(\sin x)^5$, and write it as
    $$\sin x(\sin x)^4 = \sin x(\sin^2x)^2 = \sin x(1-\cos^2 x)^2.$$
    Then setting $u=\cos x$ and $du = -\sin x\,dx$, we get
    $$\int\left(\sin x\right)^5\left(\cos x\right)^2\,dx = \int \sin x\left(1-\cos^2x\right)^2\left(\cos x\right)^2\,dx = -\int (1-u^2)^2u^2\,du,$$
    which can be solved easily.


  2. If $m$ is odd, then do the same trick by by "reserving" one cosine and using the substitution $u=\sin x$. For example,
    $$\int \sin^2x\cos^3x\,dx = \int \sin^2x(\cos^2x)\cos x\,dx = \int(\sin^2x)(1-\sin^2x)\cos x\,dx$$
    and then setting $u=\sin x$, $du = \cos x\,dx$, we get
    $$\int \sin^2x\cos^3x\,dx = \int u^2(1-u^2)\,du,$$
    which can be solved easily again.



  3. If $n$ and $m$ are both even, then either replace all the sines with cosines or vice versa, using $\sin^2 x = 1 - \cos^2x$ or $\cos^2x = 1-\sin^2 x$, and expand. This will leave integrals of the form
    $$\int \sin^n x\,dx\qquad\text{or}\quad \int \cos^m x\,dx$$
    with $n$ and $m$ positive and even. In that situation, one can use the reduction formulas, which can be obtained by using integration by parts:
    $$\begin{align*}
    \int \sin^n x\,dx &= - \frac{1}{n}\sin^{n-1} x\cos x + \frac{n-1}{n}\int \sin^{n-2}x\,dx,\\
    \int \cos^m x\,dx &= \frac{1}{m}\cos^{m-1} x\sin x + \frac{m-1}{m}\int \cos^{m-2}x\,dx.
    \end{align*}$$
    By repeated application of these formulas, one eventually ends up with an integral of the form $\int \,dx$ which can be solved directly.





The process can be shortened if you happen to spot or know some trigonometric identities; for example, the power reduction formulas allow you to replace powers of sines or cosines by expressions of multiple angles, e.g.,
$$\sin^4\theta = \frac{3-4\cos(2\theta)+\cos(4\theta)}{8}$$
could replace a single integral with three integrals that can be done fairly easily via substitution.



Other methods



If you are comfortable enough integrating functions with a complex variable in it, then the method described by Qiaochu will transform any integral that involves sines and cosines in any way into an integral that involves exponential functions instead.


Wednesday 27 May 2015

algebra precalculus - Intuition behind multiplication



I recently read this post and the highest voted comment and it got me thinking: how does one think about multiplication when decimals are involved?




For example, if we have $3.9876542 \times 2.3156479$, then how would we multiply that? It doesn't make a lot of sense to add $3.9876542$ to itself $2.3156479$ times. So how should one think about multiplying these, i.e., what's the intuition behind it?



Thanks!


Answer



A rectangle with sides $3.9876542\,\mathrm m$ and $2.3156479\,\mathrm m$ can be viewed as $3987654200$ by $2315647900$ nanometers instead. Then you can actually count all those tiny square nanometers (or simplify this by repeated addition!) and obtain an area of $9234003074156180000\,\mathrm{nm}^2$. Since there are $1000000000000000000\,\mathrm{nm}^2$ in each $\mathrm m^2$, you end up with $9.23400307415618\,\mathrm m^2$, and thus we should have $3.9876542\cdot 2.3156479 = 9.23400307415618$.
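The same scaling idea takes two lines of Python (my own sketch; working with integers makes the repeated-addition picture exact):

a, b = 39876542, 23156479  # the two factors with their decimal points dropped
print(a * b)               # 923400307415618, a plain integer product
print(a * b / 10**14)      # undo the two 10**7 scalings: 9.23400307415618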


analysis - What is the motivation behind the study of sequences?



I was discussing some ideas with my professor and he always says that before you work on something in mathematics, you need to know the motivation for studying/working on it.

A better way to put this would be,




Why were sequences studied and sought after?



Answer



I think that one of the most interesting things about sequences is that sequences are "easy" to manage.



In the first place, when you model something, a recurrence relation often arises. When you have this relation, you can compute (by hand or with a computer) all the terms you want.




But in a lot of cases, your phenomenon is continuous (e.g. physical movement), hence the most appropriate way to model it is with (real) functions. However, it is hard to deal with functions $\mathbb R \to \mathbb R$, so you discretize your problem using sequences (e.g. the Euler method for solving ODEs).



It is the same from a more abstract point of view: when you are doing some topology, the spaces can be very difficult to manage. For example, take the set of continuous functions over $[0,1]$: here, manipulating sequences of functions is easier than working with a big (uncountable) bundle of functions.



So we try to turn properties concerning "open sets", "compactness", etc. into properties concerning sequences as much as we can: for example, in a metric space a set $A$ is closed iff every convergent sequence of points of $A$ has its limit in $A$ (which is often simpler to use than "$A$ is closed iff its complement is open").



That's why it is important to be fluent with sequences! If you are not, it will be difficult to understand more abstract concepts well (since those concepts use sequences a lot).


linear algebra - An example of a square matrix with the same eigenvectors but different eigenvalues

Is there an example of $3\times3$ matrices $A$ and $B$ that have the same eigenvectors but different eigenvalues?



What would be the eigenvectors and eigenvalues if it exists because I'm stuck on this practice problem.



I know that if matrices $A$ and $B$ satisfy $AB=BA$, they share the same eigenvectors, but what about their eigenvalues? Specifically for square matrices (the $3\times 3$ case).
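One simple family of examples (my own illustration, not from the post): take any $A$ and set $B = A + 2I$. Then $AB = BA$, every eigenvector of $A$ is an eigenvector of $B$, and every eigenvalue is shifted by $2$. A quick NumPy check:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
B = A + 2 * np.eye(3)  # same eigenvectors, eigenvalues shifted by 2

w, v = np.linalg.eig(A)
for lam, vec in zip(w, v.T):
    print(np.allclose(B @ vec, (lam + 2) * vec))  # True for every eigenvector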

calculus - What is $\lim\limits_{n \rightarrow +\infty} \left(\int_{a}^{b}e^{-nt^2}\,\text{d}t\right)^{1/n}$?




Let $\left(a,b\right) \in \left(\mathbb{R}^{*+}\right)^2$. Let $\left(I_n\right)_{n \in \mathbb{N}}$ be the sequence of integrals defined by
$$
I_n=\left(\int_{a}^{b}e^{-nt^2}\,\text{d}t\right)^{1/n}
$$
I'm asked to calculate the limit of $I_n$ as $n \rightarrow +\infty$.



I've shown that
$$
\int_{x}^{+\infty}e^{-t^2}\,\text{d}t \underset{(+\infty)}{\sim}\frac{e^{-x^2}}{2x}
$$
However, how can I use it? I wrote that
$$
\int_{a}^{b}e^{-nt^2}\text{d}t=\frac{1}{\sqrt{n}}\int_{\sqrt{n}a}^{\sqrt{n}b}e^{-t^2}\text{d}t
$$
Hence I wanted to split it into two integrals so as to use the equivalent twice, but I cannot add equivalents, so... Any idea?


Answer



First answer. This had some problems, but now it is fixed.



So you have the result:

\begin{align}\tag{1}
\int^{\infty}_x e^{-t^2}\,dt = \frac{e^{-x^2}}{2x}+o\left(\frac{e^{-x^2}}{x}\right) \ \ \ \text{as} \ \ x\to\infty
\end{align}
In your last step, you had a mistake. It would be:
\begin{align}
\int^b_a e^{-nt^2}\,dt &= \frac{1}{\sqrt[]{n}}\int^{\sqrt[]{n}b}_{\sqrt[]{n}a}e^{-t^2}\,dt\\
& = \frac{1}{\sqrt[]{n}}\left(\int^{\infty}_{\sqrt[]{n}a}e^{-t^2}\,dt - \int^\infty_{\sqrt[]{n}b}e^{-t^2}\,dt \right)\\
\end{align}
Assume $0<a<b$. Applying $(1)$ to both integrals (the contribution from the endpoint $\sqrt{n}b$ is exponentially smaller than the one from $\sqrt{n}a$), we get:
\begin{align}\tag{2}
\int^b_a e^{-nt^2}\,dt = \frac{e^{-na^2}}{2na}+o\left(\frac{e^{-na^2}}{n}\right)
\end{align}
For $n$ large enough we can take $n$-th root on both sides of $(2)$ to get:
\begin{align}
\left(\int^b_a e^{-nt^2}\,dt\right)^{1/n}&=\left[\frac{e^{-na^2}}{2na}+o\left(\frac{e^{-na^2}}{n}\right)\right]^{1/n}\\
&=e^{-a^2}\frac{1}{n^{1/n}(2a)^{1/n}}\left[1+o\left(1\right)\right]^{1/n}\\
&\to e^{-a^2}
\end{align}
Where we have used $c_n^{1/n}\to 1$ for $c_n$ strictly positive and bounded away from $0$ and the fact that $\sqrt[n]{n}\to 1$.




$(\star)$: If you allow $a=0$, then something similar can be done which is even easier.






Edit: One can also come up with the asymptotics of the integral:
\begin{align}
I_n^n=\int^b_a e^{-nt^2}\,dt
\end{align}
Assume $0<a<b$. By the Laplace Method, we get:
\begin{align}
I_n^n\sim \frac{e^{-na^2}}{2an}
\end{align}
Taking $n$-th root we obtain the result:
\begin{align}
\lim_{n\to\infty} I_n = e^{-a^2}
\end{align}


abstract algebra - Why does it follow from $ns = 1 - ar$ that $ar \equiv 1 \bmod n$?



I'm reading through Prof. Tom Judson's online textbook "Abstract Algebra: Theory and Applications". Proposition 3.4 (under the heading The Integers mod n) states:




"Let $\Bbb Z_n$ be the set of equivalence classes of the integers $\mod n$ and $a,b,c∈\Bbb Z_n$."



and, under (6):



"Let $a$ be a nonzero integer. Then $\gcd(a,n)=1$ if and only if there exists a multiplicative inverse $b$ for $a \mod n$; that is, a nonzero integer $b$ such that $ab \equiv 1 \mod n$."



The first part of the proof for this states:



"Suppose that $\gcd(a,n)=1$. Then there exist integers $r$ and $s$ such that $ar+ns=1$. Since $ns=1−ar$, it must be the case that $ar≡1 \mod n$. Letting $b$ be the equivalence class of $r$, $ab \equiv 1 \mod n$."




I'm following everything just fine except for this one sentence from the preceding paragraph:



"Since $ns=1-ar$, it must be the case that $ar \equiv 1 \mod n$."



As in the title to this question, why does it follow from $ns = 1 - ar$ that $ar \equiv 1 \mod n$? It's obvious to me that $ns = 1 - ar$, but not that this implies $ar \equiv 1 \mod n$. The entire rest of the proof (including the second part not copied here) makes perfect sense to me. Just that one sentence eludes me.



What am I missing?



Thank you in advance for you help.


Answer




We say that $a \equiv b \text{ mod } n$ if we can write $a = b + kn$ for some integer $k$.



In your example, we have $1 = ar + ns$, i.e. $ar = 1 + (-s)n$. This is exactly the definition above with $b = 1$ and $k = -s$, so $ar \equiv 1 \bmod n$.


real analysis - Sine function dense in $[-1,1]$



We know that the sine function takes its values in $[-1,1]$. So is the set $$A = \{ \sin{n} \ : \ n \in \mathbb{N}\}$$ dense in $[-1,1]$? Generally, to show that a set is dense, one proceeds by finding $\overline{A}$ for the given set; if $\overline{A} = [-1,1]$, we are through with the proof, but I am having trouble here!



Similarly, can one do this with the cosine function, that is, prove that $B= \{ \cos{n} \ : \ n \in \mathbb{N}\}$ is dense in $[-1,1]$?


Answer



The hard part is to show that for any $x$ such that $0 \le x \le 2\pi$, and any $\epsilon>0$ there exists a real number $y$ and two integers $m$ and $n$ such that $|y-x|<\epsilon$ and $n=2\pi m+y$. Hint: break up $[0,2\pi]$ into small subintervals, remember that $\pi$ is irrational and apply the pigeonhole principle.


Tuesday 26 May 2015

modular arithmetic - Calculate a number (mod)



Calculate: $3^{1234} $ (mod 17)



We're not supposed to use any "tricks" like Fermat's little theorem or anything similar, because we haven't learned that yet; just the definition of modulo.




I tried to do this but it doesn't really help:



$3^{1234}\equiv20^{1234}=2^{1234}\cdot10^{1234} \pmod{17}$



Thanks in advance.


Answer



Doing arithmetic modulo $\;17\;$ all along:



$$3^4=81=-4\;,\;\;3^5=-12=5\;,\;\;3^6=15=-2\;,\;\;3^7=-6\;,\;\;3^8=-18=-1\implies$$




$$\implies 3^{16}=1\;,\;\;\text{and $\;3\;$ is a primitive root modulo}\;17$$



Now:



$$1234=77\cdot 16+2\implies3^{1234}=(3^{16})^{77}\cdot3^2=3^2=9$$
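The result is easy to double-check with Python's built-in three-argument pow, which performs exactly this kind of modular exponentiation:

print(pow(3, 1234, 17))  # 9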


algebra precalculus - How is the sin function being rewritten?




I'm working through a trigonometry book and was shown this equation being worked out. I don't understand the rules for doing a particular step:



$$\begin{align}
A &= A\sin(x-vt) \\
1 &= \sin(x-vt) \\
x-vt &= {\pi \over 2} \\
x &= {\pi \over 2}+vt
\end{align}$$




How are they going from $1=\sin(x-vt)$ to $x-vt = {\pi \over 2}$? Thanks!


Answer



Whenever you confront a step like this, notice that the $\sin$ is being taken off one side. That means that the inverse of the $\sin$ function must have been used. From this reasoning, even if you don't know that $\arcsin(1) = \frac{\pi}{2}$, you can make the logical assumption that it is, and from that you can understand the step.
EDIT:
Note that $\arcsin(\sin(x))$ does not always equal $x$. I would not use this step in trying to solve an equation, but for something like this, the strategy shown above helps.


modular arithmetic - When is $(p - 2)! \equiv 1 \pmod p$

I want to show when the following is true for $p$ a prime number. $(p - 2)! \equiv 1 \pmod p$. Could someone help me prove this? It worked for $p = 2$, $p = 3$, $p = 5$, so I believe it may work for all primes but I need to prove it. I don't know how to apply Wilson's or Fermat's theorem to this. I tried to rewrite it as $(p - 1 - 1)! \equiv 1 \pmod p$ but I still couldn't see how to apply Wilson's theorem to it. Could someone help me?
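Extending the asker's small cases numerically (my own sketch; as a hint, by Wilson's theorem $(p-1)!\equiv -1$ and $p-1\equiv -1 \pmod p$, which together already force $(p-2)!\equiv 1 \pmod p$ for every prime $p$):

from math import factorial
from sympy import isprime

for p in range(2, 200):
    if isprime(p):
        assert factorial(p - 2) % p == 1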

calculus - Evaluate $\int_0^1 \frac{x^2-1}{\log x}\, dx$




Evaluate $$\int_0^1 \frac{x^2-1}{\log x}\, dx$$



Context:



I came across a similar integral some time ago, and now I would like to know how to tackle this. Any hint is very welcome.


Answer



Let $$f(k)=\int_0^1 \frac{x^k-1}{\log x} \mathrm{d}x$$



Then $$f'(k)=\displaystyle \int_0^1 \frac{x^k \log x}{\log x} \mathrm{d}x = \int_0^1 x^k \mathrm{d}x= \frac{1}{1+k} \iff f(k)-f(0)=\log(1+k) \iff f(k) = \log(1+k)$$




Evaluating $k=2$ we get $$f(2)=\int_0^1 \frac{x^2-1}{\log x} \mathrm{d}x=\log (1+2)= \log 3$$
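As a quick numerical sanity check (a sketch using mpmath; the endpoints are harmless since the integrand tends to $0$ at $x=0$ and to $2$ at $x=1$):

```python
from mpmath import mp, quad, log

mp.dps = 15
print(quad(lambda x: (x**2 - 1) / log(x), [0, 1]))  # 1.09861228866811
print(log(3))                                       # same value, as claimed
```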


elementary number theory - Solve congruence equation $3373x equiv 2^{485} pmod{168}$



$$3373x \equiv 2^{485} \pmod{168}$$



Uhm...I don't even know how to start.



$GCD(3373,168)=1$, so solution exists.




Usually I would use extended Euclidean algorithm and get the outcome, but it would require multiplying by $2^{485}$ later, so...well, is there some easier way?


Answer



First of all, $168=2^3\cdot3\cdot7$, so you want to solve the three congruences:



$3373x\equiv2^{485} \pmod 8 \\
3373x\equiv2^{485} \pmod 3 \\
3373x\equiv2^{485} \pmod 7$



The first of these three is really $3373x\equiv_8 0$, which just implies $x\equiv_8 0$ (since $3373$ is odd, it is invertible modulo $8$)




For the second, $3373\equiv_3 1$, and any odd power of $2$ is congruent modulo $3$ to $2$, so we have $x\equiv_32$



Thirdly, $3373\equiv_76$, and $2^3\equiv_71 \Rightarrow 2^{485}\equiv_72^2=4$. Thus, we have $6x\equiv_74$, or $x\equiv_73$



Finally, we use the Chinese Remainder Theorem to find a number satisfying:



$x\equiv_80 \\
x\equiv_32 \\
x\equiv_73$




In this way, we obtain $x\equiv 80 \pmod {168}$.
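For whatever it is worth, a two-line Python check of the final answer (both sides reduced modulo $168$):

```python
print(pow(2, 485, 168))   # 32
print((3373 * 80) % 168)  # 32, so x = 80 indeed solves the congruence
```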


sequences and series - What is $\sum_{n=1}^{\infty} \frac{1}{\sqrt{n^{3} + 1}}$?

I am interested in the symbolic evaluation of infinite series over algebraic functions in terms of well-known special functions such as the
Hurwitz zeta function. Often, infinite sums over algebraic expressions can be evaluated in a simple way in terms of well-known special functions. For example, a direct application of Euler's identity may be used to show that $$ \sum_{n \in \mathbb{N}} \frac{1}{\sqrt{(n^2+1)(\sqrt{n^2+1}+n)}}
= - \frac{i \left( \zeta\left( \frac{1}{2}, 1 - i \right) -
\zeta\left(\frac{1}{2}, 1 + i \right) \right)}{\sqrt{2}}. $$
The above formula seems to suggest that there may be many classes of infinite series over algebraic expressions that could be easily evaluated in terms of well-established special functions.



Inspired in part by this question, together with

this question
and
this question, as well as
this question, I'm interested in the problem of evaluating
$$\sum_{n=1}^{\infty} \frac{1}{\sqrt{n^{3} + 1}}
= 2.29412...$$
in terms of "known" special functions, e.g., special functions implemented within Mathematica. This problem is slightly different from the related problems given in the above links, since I'm interested specifically in the use of special functions to compute the above sum symbolically.



More generally, what kinds of algorithms could be used to evaluate infinite sums over algebraic expressions symbolically in terms of special functions?
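For reference, the quoted numerical value is easy to reproduce; a minimal mpmath sketch (nsum extrapolates the slowly convergent tail):

```python
from mpmath import mp, nsum, sqrt, inf

mp.dps = 20
print(nsum(lambda n: 1 / sqrt(n**3 + 1), [1, inf]))  # 2.29412..., as quoted
```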

Sunday 24 May 2015

sequences and series - Proving a relation between $\sum\frac{1}{(2n-1)^2}$ and $\sum \frac{1}{n^2}$

I ran into this question:




Prove that:



$$\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}=\frac{3}{4}\sum_{n=1}^{\infty}\frac{1}{n^2}$$




Thank you very much in advance.
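A sketch of the usual splitting argument (rearranging is safe here because all terms are positive):
$$\sum_{n=1}^{\infty}\frac{1}{n^2}=\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}+\sum_{n=1}^{\infty}\frac{1}{(2n)^2}=\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}+\frac{1}{4}\sum_{n=1}^{\infty}\frac{1}{n^2},$$
and solving for the odd-index sum gives the claimed factor $\frac34$.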

Probability Question A fair coin is tossed repeatedly until a head appears

Can you help me with this.




A fair coin is tossed repeatedly until a head appears. Let $N$ be the number of trials until the first head appears. Then a fair die is rolled $N$ times. Let $X$ be the number of times that the die comes up $4$. Calculate $P\{X = 0\}$, $P\{X = 1\}$ and $E[X]$.
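Conditioning on $N$ gives $P\{X=0\}=E[(5/6)^N]=5/7$, $P\{X=1\}=12/49$ and $E[X]=E[N]/6=1/3$; treat these values as a sketch of my own computation, not part of the original post. A quick Monte Carlo check in Python:

```python
import random

# Simulate: toss a fair coin until heads (N tosses), roll a die N times,
# count fours (X). Estimates should be near 5/7, 12/49 and 1/3.
trials = 200_000
c0 = c1 = total = 0
for _ in range(trials):
    n = 1
    while random.random() < 0.5:  # extend N with probability 1/2
        n += 1
    x = sum(random.randrange(6) == 3 for _ in range(n))  # face "4"
    c0 += (x == 0)
    c1 += (x == 1)
    total += x
print(c0 / trials, c1 / trials, total / trials)  # ~0.714, ~0.245, ~0.333
```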

measure theory - existence of a measurable right inverse of a mapping



Good evening,




I have a question concerning the inverse of a measurable function.



Let $f\colon X\to Y$ be a Borel measurable mapping between two topological spaces. Suppose that $f$ is surjective.



My question: does there always exist a measurable right inverse of $f$?



Any help is appreciated. Thanks in advance.


Answer



By a Borel measurable mapping $f$ you must mean that $f$ is a Borel map. (That is, I guess you're not assuming that there are any measures floating around.) In that case, the answer is no:




Let $f$ be the identity function $\mathbb{N}_d \to \mathbb{N}_i$, where $\mathbb{N}_d$ has the discrete topology and $\mathbb{N}_i$ has the indiscrete (or trivial) topology. Then $f$ is surjective, but its only possible right inverse, the identity $\mathbb{N}_i \to \mathbb{N}_d$, is not Borel: the preimage of $\{3\}$ under it is $\{3\}$, which is not a Borel set in $\mathbb{N}_i$, though $\{3\}$ is Borel in $\mathbb{N}_d$. (Of course, there's nothing special about $\mathbb{N}$ here: pick your favorite set with at least two elements.)


elementary number theory - Prove that for $x^3=3$ there isn't rational solution




Prove that for $x^3=3$ there isn't rational solution




What I did:



Suppose $x = u/v$ is a solution




$$\left(\frac u v\right)^3=3$$



Let's take the third root of both sides:



$$\frac u v =\sqrt[3]3$$



and $\sqrt[3]3$ is irrational,



My problem is the step "and $\sqrt[3]3$ is irrational": is it well known that this number is irrational, the same as $\pi$?




Or maybe there is another way to prove it?


Answer



Just start off by assuming $(u,v) = 1$, i.e. the fraction is fully reduced. Then it follows that
$$u^3 = 3v^3 \Rightarrow 3 \mid u^3 \Rightarrow 3 \mid u \Rightarrow u = 3M$$



Therefore $(3M)^3 = 27M^3 = 3v^3$, so $v^3 = 9M^3$ and $3 \mid v^3$; since $3$ is prime, this gives $3 \mid v$. Then $3$ divides both $u$ and $v$, so $(v,u) \not = 1$, which is a contradiction.


discrete mathematics - Strong Induction: parity of sum of odd numbers



This is the question:



Use strong mathematical induction to prove that for any integer $n \ge 2$, if $n$ is even, then any sum of $n$ odd integers is even, and if $n$ is odd, then any sum of $n$ odd integers is odd.



I know that $P(n)$ is the sentence:




“If $n$ is even, then any sum of $n$ odd integers is even, and if $n$ is odd, then any sum of $n$ odd integers is odd.”



If anyone could guide me a bit or provide some sort of formula, it would be much appreciated!


Answer



The first step is to get this into mathematical form. I would write it like this:




Let $(a_i)_{i=1}^n$ be odd integers. Then, for any positive integer $m$, $\sum_{i=1}^{2m} a_i$ is even and $\sum_{i=1}^{2m-1} a_i$ is odd.



Proof.



Note: all variables are integers.

The basic facts needed are that (1) every even number $a$ can be written in the form $a = 2b$; (2) every odd number $a$ can be written in the form $a = 2b+1$; (3) all numbers are either even or odd; (4) the sum of two even numbers is even; (5) the sum of an odd and even integer is odd; (6) the sum of two odd numbers is even.



Note: facts (1) and (2) are definitions. A good exercise is to prove facts (3) through (6).




For $m=1$, this states that $a_1$ is odd and $a_1+a_2$ is even. The first is true by assumption; the second is true because the sum of two odd integers is even.




For the induction step, suppose it is true for $m$. The statement for $m+1$ is that $\sum_{i=1}^{2(m+1)} a_i$ is even and $\sum_{i=1}^{2(m+1)-1} a_i$ is odd.



For the first,



$$\begin{aligned}
\sum_{i=1}^{2(m+1)} a_i
&=\sum_{i=1}^{2m+2} a_i\\
&=\sum_{i=1}^{2m} a_i+a_{2m+1}+a_{2m+2}\\
&=\Bigl(\sum_{i=1}^{2m} a_i\Bigr)+(a_{2m+1}+a_{2m+2})
\end{aligned}$$



and this is even (using fact 4) because $\sum_{i=1}^{2m} a_i$ is even by the induction hypothesis and $a_{2m+1}+a_{2m+2}$ is even by fact 6.




For the second,



$$\begin{aligned}
\sum_{i=1}^{2(m+1)-1} a_i
&=\sum_{i=1}^{2m+1} a_i\\
&=\sum_{i=1}^{2m-1} a_i+a_{2m}+a_{2m+1}\\
&=\Bigl(\sum_{i=1}^{2m-1} a_i\Bigr)+(a_{2m}+a_{2m+1})
\end{aligned}$$




and this is odd (using fact 5) because $\sum_{i=1}^{2m-1} a_i$ is odd by the induction hypothesis and $a_{2m}+a_{2m+1}$ is even by fact 6.



You could also group the sum as $\sum_{i=1}^{2m} a_i+a_{2m+1}$; in this grouping, the sum is even and $a_{2m+1}$ is odd, so their sum is odd.


linear algebra - Do similar, non-diagonalizable matrices have the same eigenvalues?

To better explain, I have the matrix
\begin{bmatrix}3&2&0\\5&0&0\\k&b&-2\end{bmatrix}
where $k$ is chosen to make the matrix non-diagonalizable. I have to find, if possible, a matrix with the same eigenvalues which is not similar to this one, but I can't seem to find it.
Is it only this particular case, or do non-diagonalizable matrices that are not similar always have different eigenvalues?




Thanks in advance.
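One observation that may help (a sketch, not a full answer): the matrix is block lower-triangular, so its eigenvalues are those of the upper-left block $\begin{pmatrix}3&2\\5&0\end{pmatrix}$, namely $5$ and $-2$, together with the entry $-2$. The diagonal matrix $\operatorname{diag}(5,-2,-2)$ has the same eigenvalues but is diagonalizable, so it cannot be similar to a non-diagonalizable matrix with this spectrum. In general, eigenvalues alone do not determine a matrix up to similarity; the Jordan form does.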

Saturday 23 May 2015

algorithms - How to add or multiply decimal numbers?

The tag ''calculus'' may not fit for this question but I couldn't think of anything else. Please feel free to change it.



Suppose $a=0,a_{-1}a_{-2}a_{-3}\ldots$ and $b=0,b_{-1}b_{-2}b_{-3}\ldots$ are two decimal numbers. To be precise, I should have added ''representations of'' before ''decimal numbers'', but this is not what the question is about.




How can I practically add and multiply $a$ and $b$?




If $a$ and $b$ are rational and thus periodic, you could answer that I should convert them to fractions $a={a'\over a''}$ and $b={b'\over b''}$ and do the obvious thing, but I don't want to do this.




Starting with addition, the problem is about digit sums of $10$ or more. If $a=b=0,111\ldots$ you have no problems adding them digit by digit from the left to get $a+b=0,222\ldots$. But when you have something like $a=0,0888\ldots$ and $b=0,0111\ldots$, you can't write down any digit of the sum, working from the left, until you know what comes next, since a carry may still occur, as with $a=0,088890\ldots$ and $b=0,011110\ldots$. What happens here exactly, if $a$ and $b$ are rational? How does the length of the period of $a+b$ relate to the period lengths of $a$ and $b$?



The problem gets more complicated if you think of multiplication. For ''finite'' numbers like $a=b=0,2=0,200\ldots$ you just write down a multiplication table and that's it (despite the fact that $0,2=0,1999\ldots$, but let's ignore this). But what about arbitrary or rational numbers? I have the impression that the length of the period may be much bigger in the product. Is there an upper bound depending on the period lengths of $a$ and $b$?



If you define a decimal number as a certain absolutely convergent series, you can define the multiplication by the Cauchy product, but this doesn't help when you want to do multiplication practically. Is there an easy algorithm?



Here is a concrete challenge: Multiply $a=b=0,\overline{142857}$.
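For the concrete challenge: $0,\overline{142857}=1/7$, so the product is $1/49$. A small long-division sketch in Python recovers its expansion; the loop stops as soon as a remainder repeats, which marks the period:

```python
# Long division of 1/49; the digits repeat once a remainder repeats.
num, den = 1, 49
seen, digits = {}, []
r = num % den
while r and r not in seen:
    seen[r] = len(digits)
    r *= 10
    digits.append(r // den)
    r %= den
period = digits[seen[r]:] if r else []
print("".join(map(str, period)))      # 020408163265306122448979591836734693877551
print("period length:", len(period))  # 42
```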

elementary number theory - Let $a,m,n \in \mathbf{N}$. Show that if $\gcd(m,n)=1$, then $\gcd(a,mn)=\gcd(a,m)\cdot\gcd(a,n)$.




Let $a,m,n\in\mathbf{N}$. Show that if $\gcd(m,n)=1$, then $\gcd(a,mn)=\gcd(a,m)\cdot\gcd(a,n)$.



Proof:



Let $u,v\in\mathbf{Z}$ such that $\gcd(m,n)=um+vn=1$. Let $b,c\in\mathbf{Z}$ such that $\gcd(m,a)=ab+cm$. Let $d,e\in\mathbf{Z}$ such that $\gcd(a,n)=ad+en$.



So $\gcd(a,m)\cdot\gcd(a,n)=(ab+cm)(ad+en)=a^2bd+aben+acdm+cemn$.



Where do I go from here?



Answer



$\gcd(a,m)\cdot \gcd(a,n) = a(abd+ben+cdm)+(mn)(ce)$ is divisible by $\gcd(a,mn)$, which divides both $a$ and $mn$; hence $\gcd(a,m)\cdot\gcd(a,n) \ge \gcd(a,mn)$.



Say $\gcd(\gcd(a,m),\gcd(a,n)) = p$ where $p>1$. Then $p\mid\gcd(a,m)$ and $p\mid\gcd(a,n)$, which means $p\mid m$ and $p\mid n$. So $p$ is a common divisor of $m$ and $n$, and $\gcd(m,n) \ge p$. But this is impossible since $\gcd(m,n)=1$ and $p>1$. Thus, $p=1$.



If $\gcd(a,m) = x$, this means $x\mid a$ and $x\mid m$; since $x\mid m$, also $x\mid mn$. Thus $x$ is a common divisor of $a$ and $mn$, so $x\mid\gcd(a,mn)$. If $\gcd(a,n) = y$, this means $y\mid a$ and $y\mid n$; since $y\mid n$, also $y\mid mn$. Thus $y$ is a common divisor of $a$ and $mn$, so $y\mid\gcd(a,mn)$.

Because $\gcd(x,y) = 1$, we get $xy\mid\gcd(a,mn)$, so $\gcd(a,m) \cdot \gcd(a,n) \le \gcd(a,mn)$.



Therefore, $$\gcd(a,m) \cdot \gcd(a,n) = \gcd(a,mn)$$
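A brute-force sketch to build confidence in the identity (random spot checks, not a proof):

```python
from math import gcd
import random

# Check gcd(a, m*n) == gcd(a, m) * gcd(a, n) whenever gcd(m, n) == 1.
for _ in range(10_000):
    a, m, n = (random.randrange(1, 500) for _ in range(3))
    if gcd(m, n) == 1:
        assert gcd(a, m * n) == gcd(a, m) * gcd(a, n)
print("all checks passed")
```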


functions - Bijection from $[0,1]^3$ to $[0,1]$?




Is there any bijection from $[0,1]^3$ to $[0,1]$? How can I construct it?


Answer



Hint:



If there exists a surjection from $A$ onto $B$ and a surjection from $B$ onto $A$, then there exists a bijection between $A$ and $B$. In your case, space-filling curves are surjections from $[0,1]$ onto $[0,1]^3$. It should be easy to find a surjection going the other way (for instance, $(x,y,z)\mapsto x$).


algebra precalculus - Square roots of Complex Number.

Calculate, in the form $a+ib$, where $a,b\in \Bbb R$, the square roots of $16-30i$.






My attempt with $(a+ib)^2 =16-30i$ makes me get $a^2+b^2=16$ and $2ab=-30$. Is this correct?
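A sketch of how the computation usually proceeds (note that the real part gives $a^2-b^2$, not $a^2+b^2$): expanding $(a+ib)^2=a^2-b^2+2abi$ yields $a^2-b^2=16$ and $2ab=-30$. Combining the first equation with $a^2+b^2=|16-30i|=\sqrt{16^2+30^2}=34$ gives $a^2=25$ and $b^2=9$; the sign condition $ab<0$ then leaves $\pm(5-3i)$, and indeed $(5-3i)^2=25-30i+9i^2=16-30i$.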

Friday 22 May 2015

summation - Can't understand an equality between sums



During an induction proof I came across an equality that I can't understand.

During the last step of the induction there is:
$$ \sum_{k=1}^{n}{(k+1){n \choose k-1}} = \sum_{k=0}^{n-1}{(k+2){n \choose k}} $$



My question is which sum and combination identities were used to achieve this transformation. Using simply the index shift of sums I couldn't work out the same result.
Edit: I understand that the equality holds; I can't see how to produce the RHS from the LHS. Which identities do I have to use?


Answer



If you're unfamiliar with index shifts, change the variable first.
\begin{align}
\sum_{k=1}^{n} (k+1)\binom{n}{k-1}
&=\sum_{h=1}^{n} (h+1)\binom{n}{h-1} && \text{(change dummy variable)} \\
&=\sum_{k=0}^{n-1} (k+2)\binom{n}{k} &&
\begin{gathered}
h-1=k \\
h+1=k+2 \\
\begin{aligned}
& h=1\implies k=0\\
& h=n\implies k=n-1
\end{aligned}
\end{gathered}
\end{align}



abstract algebra - Powerset bijection problem

Please do not provide a full answer for this.



Let $2^{S} = \{f : S \rightarrow \{0, 1\}\}$. For $A \subseteq S$, define $\chi_{A}\in2^{S}$ by
$$\chi_{A}(s) =
\begin{cases}
0 & \text{if } s \notin A \\
1 & \text{if } s \in A
\end{cases}.$$
Show that $\mu : P(S)\rightarrow2^{S}$ given by $\mu(A)=\chi_{A}$ is a bijection.



I know that the standard procedure for showing that a function is bijective is to show that it is both injective and surjective, and the "standard procedures" for those as well. It's just that I don't really know where to start with this.

elementary set theory - Prove question $(A\setminus B) \cup (B\setminus C) = A\setminus C$, $(A\setminus B)\setminus C = A\setminus(B\cup C)$



I want to prove the following statements, but to do it I need some hints.



\begin{align}
\tag{1} (A\setminus B) \cup (B\setminus C) &= A\setminus C\\
\tag{2} (A\setminus B)\setminus C&= A\setminus(B\cup C)
\end{align}
Thanks!


Answer



For the first one, suppose that $(A \setminus B) \cup (B \setminus C)$ is not empty. Take any $x \in (A \setminus B) \cup (B \setminus C)$. Then either $x \in A \setminus B$ or $x \in B \setminus C$. Note that in this particular case, both cannot be true (why?). If $x \in A \setminus B$, then $x \in A$ and $x \not \in B$. If $x \in B \setminus C$, then $x \in B$ and $x \not \in C$. This does not imply that $x \in A \setminus C$. If $x \in A \setminus B$, one of the possibilities above, then this does not give us any information about whether $x \in C$.



For example, suppose $A = \{1,2,3\},\ B = \{1,2\}$, and $C = \{3\}$. Then $3 \in A \setminus B$ and so $3 \in (A \setminus B) \cup (B \setminus C)$, but $A \setminus C = \{1,2\}$ and so $3 \not \in A \setminus C$.


probability - $X$ and $Y$ are independent and follow $U(0,1)$. Show $P(f(X) > Y) = int_0^1 f(x) dx$





Let $X$ and $Y$ be two independent uniformly distributed r.v. on $[0,1]$, and $f$ is a continuous function from $[0,1]$ to $[0,1]$. Show that $P(f(X) > Y) = \int_0^1 f(x) dx$.




I tried to prove it by change of variable but failed. I can only reach the step
$$
P(f(X) > Y) = \int_{\{(x,y): f(x) > y\} }I_{[0,1]}(x) I_{[0,1]}(y) dx dy
$$
How can I proceed with the proof? Thanks for any help in advance.


Answer



$$
\int_{(x,y):f(x)>y} \mathbf{1}_{[0,1]}(x)\mathbf{1}_{[0,1]}(y)\,\mathrm dx\,\mathrm dy=\int_\mathbb{R}\int_\mathbb{R} \mathbf{1}_{[0,1]}(x)\mathbf{1}_{[0,f(x))}(y)\,\mathrm dy\,\mathrm dx
$$
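Completing the step (the inner integral is elementary):
$$\int_\mathbb{R}\int_\mathbb{R} \mathbf{1}_{[0,1]}(x)\mathbf{1}_{[0,f(x))}(y)\,\mathrm dy\,\mathrm dx=\int_0^1 f(x)\,\mathrm dx.$$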


Thursday 21 May 2015

Factorial Inequality problem $\left(\frac n2\right)^n > n! > \left(\frac n3\right)^n$



I came across an inequality, and I ask for a proof that does not use mathematical induction:




Prove \[ \left(\frac n2\right)^n > n! > \left(\frac n3\right)^n \] without using induction




Answer



Let $a_n =\displaystyle \frac{2^n n!}{n^n}.$ Note that $a_6 = 80/81 < 1.$ We also have $$ \frac{a_{n+1}}{a_n} = \frac{2^{n+1} (n+1)! }{(n+1)^{n+1}} \cdot \frac{n^n}{2^n n!} = 2 \left(\frac{n}{n+1}\right)^n.$$ The sequence $$x_n = \left(1- \frac{1}{n+1} \right)^n$$ is monotonically decreasing to $1/e$, and $x_1 = 1/2$, so $a_{n+1}/a_n = 2x_n \le 1$, with strict inequality for $n \ge 2$; hence $(a_n)$ is monotonically decreasing. Together with $a_6 < 1$, this gives $a_n < 1$, i.e. the first inequality, for all $n \ge 6$.



By considering Taylor series, $\displaystyle e^x \geq \frac{x^n}{n!}$ for all $x\geq 0,$ and $n\in \mathbb{N}.$ In particular, for $x=n$ this yields $$ n! \geq \left( \frac{n}{e} \right)^n $$ and this is stronger than the second inequality.



We could have used the same proof method for the second inequality as we did for the first: Let $b_n= \displaystyle \frac{3^n n!}{n^n}.$ Then $b_6 = 45/4 > 1.$ Also, $$ \frac{b_{n+1}}{b_n} = \frac{3^{n+1} (n+1)! }{(n+1)^{n+1}} \cdot \frac{n^n}{3^n n!} = 3 \left(\frac{n}{n+1}\right)^n $$ and this is greater than $1$ since $e<3.$



What we have just done suggests we can prove the following: if $a,b$ are positive real numbers such that $a < e < b$, then $$\left(\frac{n}{a}\right)^n > n! > \left(\frac{n}{b}\right)^n$$ holds for sufficiently large $n$. This in turn suggests something stronger about the growth of the factorial function: $n!$ is asymptotically equal to $(n/e)^n$ to within at most a sub-exponential factor.
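A quick numerical spot check of the two inequalities (the left-hand one only kicks in at $n=6$, as the proof above indicates):

```python
import math

for n in (6, 10, 20, 50):
    print(n, (n / 2)**n > math.factorial(n) > (n / 3)**n)  # True for each
```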


calculus - Find an equivalent of sequence at infinity




I want to find an equivalent at infinity to those two sequences and then deduce their possible limits:
$$
u_n=\frac{(-1)^n+1}{n+\sqrt{n}},\,v_n=\frac{n^5+e^n}{2n+e^n}.
$$
For the first one, I found, using $n\sim \sqrt{n}$ that
$$
u_n\sim \frac{(-1)^n+1}{2n},
$$
and then I use the fact that both $u_{2p}$ and $u_{2p+1}$ converge to zero to obtain that $u_n$ also converges to zero. Can we find a more elegant equivalent of $u_n$ at infinity?

For the second, we already know that
$$
\lim_{n\rightarrow +\infty}n^a\,e^{-b\,n}=0,\quad a>0,\ b>0,
$$
so the limit of the second sequence $v_n$ is $1$, if we divide both the numerator and the denominator by $e^n$. But how can we find an equivalent $w_n$ of $v_n$ such that $w_n$ converges to $1$ at infinity?


Answer



$$u_n=\frac{(-1)^n+1}{n+\sqrt{n}},\qquad 0 \leq u_n \leq \frac{1+1}{n+\sqrt{n}} \leq \frac{2}{\sqrt{n}+\sqrt{n}} = \frac{1}{\sqrt{n}}.$$ When $n \rightarrow \infty$, $\frac{1}{\sqrt{n}} \rightarrow 0$, hence $u_n \rightarrow 0$.


summation - Error in a binomial coefficient sum identity proof



I have been given an identity $$\sum_{i=0}^n i\binom{n}{i}^2 = n\binom{2n-1}{n-1}.$$
However when I tried to prove it, I got a different result.

$$\sum_{i=0}^n i\binom{n}{i}^2 = n\sum_{i=0}^n \binom{n-1}{i-1} \binom{n}{i} = n\sum_{i=0}^n \binom{n-1}{n-i} \binom{n}{i} = n\binom{2n-1}{n}$$



First equation follows from absorption identity, second one from symmetry, and the third one from Vandermonde's identity. Where is my mistake?


Answer



Let us devise a purely combinatorial approach: assume to have a parliament with $n$ people in the right wing, $n$ people in the left wing. In how many ways can we form a committee with $n$ people and elect a chief of the commitee from the left wing? The first approach is to select $i$ people from the left wing, $n-i$ people from the right wing, then the chief among the selected $i$ people from the left wing. This leads to $\sum_{i=0}^{n}i\binom{n}{i}\binom{n}{n-i}=\sum_{i=0}^{n}i\binom{n}{i}^2$. The other approach is to select the chief from the left wing first ($n$ ways for doing that), then select $n-1$ people from the remaining $2n-1$ in the parliament. Conclusion:
$$ \sum_{i=0}^{n}i\binom{n}{i}^2 = n\binom{2n-1}{n-1} = n\binom{2n-1}{n}. $$
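A short Python sketch confirming both closed forms (and hence that the asker's result was not a mistake) for small $n$:

```python
from math import comb

for n in range(1, 10):
    lhs = sum(i * comb(n, i)**2 for i in range(n + 1))
    assert lhs == n * comb(2*n - 1, n - 1) == n * comb(2*n - 1, n)
print("identity holds for n = 1..9")
```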


calculus - How can I calculate the limit of exponential divided by factorial?




I suspect this limit is 0, but how can I prove it?



$$\lim_{n \to +\infty} \frac{2^{n}}{n!}$$


Answer




The easiest way to do this is the following: assume $n \ge 4$. Then $$0 \le \frac{2^n}{n!} = \prod_{i=1}^n \frac{2}{i} = \frac{2\cdot 2\cdot 2}{1 \cdot 2 \cdot 3} \cdot \prod_{i=4}^n \frac{2}{i} \le \frac{8}{6} \cdot \prod_{i=4}^n \frac{2}{4} = \frac{8}{6 \cdot 2^{n-3}}.$$ Applying the squeeze theorem gives the result.


complex analysis - Construction of a quasiconformal mapping of the closed disk to itself

My problem relates to the construction of the quasiconformal map $T:\overline{\mathbb{D}} \longrightarrow \overline{\mathbb{D}}$ from the beginning of the proof of Lemma 2.2 in Marshall-Rhode.



Question



The text in question writes as follows:





Let $T:\overline{\mathbb{D}} \longrightarrow \overline{\mathbb{D}}$ be a quasiconformal map with $T(i)=\alpha ^+, T(-i)=\alpha ^-, T(1)=1$ and $T^{-1}(0)\in (-1,1)$. For instance, $T$ can be constructed as a composition of a quasiconformal map that sends $\alpha^+$ and $\alpha^-$ to symmetric points, followed by a Möbius transformation.




Here $\alpha ^+$ and $\alpha ^-$ are fixed points on $\partial \mathbb{D}$ such that the oriented arc from $\alpha ^-$ to $\alpha ^+$ (denoted $\langle \alpha ^-,\alpha^+\rangle$ in the text) on $\partial \mathbb{D}$ is separated by 1 (they are defined above Lemma 2.2, but I don't think that it is relevant how).



My questions are these: why does a quasiconformal map sending $\alpha^-$ and $\alpha^+$ to symmetric points exist? Is it symmetric with respect to the real axis? And what does it look like?



Attempt to solve




To me it would seem as though we need to either



1) Use some sort of "quasiconformal stretching" of the arcs $\langle \alpha^-, 1\rangle$ and $\langle 1, \alpha^+\rangle$, where we somehow try to preserve the disk and map $\alpha^{-}$ and $\alpha^+$ into $c$ and $\bar{c}$ respectively for some $c\in \partial \mathbb{D}$. Then we could use the Möbius transformation which maps $(\bar{c},c,1)$ into $(-i,i,1)$ and in that way ``push'' $0$ around on the real line (satisfying $T^{-1}(0)\in (-1,1)$), or



2) Quasiconformally stretch $\overline{\mathbb{D}}$ to an elliptical disk such that the two points become symmetric and then map the elliptical disk conformally to $\overline{\mathbb{D}}$ with a Möbius transformation.



Any help would be greatly appreciated.

real analysis - Show that $f(x+y)=f(x)+f(y)$ implies $f$ continuous $\Leftrightarrow$ $f$ measurable




Let $f:\mathbb R \rightarrow \mathbb R$, and for every $x,y\in \mathbb R$ we have $f(x+y)=f(x)+f(y)$.



Show that $f$ measurable $\Leftrightarrow f$ continuous.


Answer



One implication is trivial. If a function is continuous, then it is measurable. The converse is more tricky.



You can find a very nice proof in the following document. Another proof can be found by considering the function $F(x)=\int_0^x f(t)\,dt$, which is well defined since $f$ is measurable.



Another approach is the following: prove that a discontinuous solution for the functional equation is not bounded on any open interval. It can be shown that for a discontinuous solution the image of any interval is dense in $\Bbb{R}$, and therefore we have problems with the measurability.


real analysis - How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$

How to find $\lim_{h\rightarrow 0}\frac{\sin(ha)}{h}$ without l'Hôpital's rule? I know when I use l'Hôpital I easily get $$ \lim_{h\rightarrow 0}...